Building an AI-Powered Career Platform from Scratch

I built an AI-powered career platform. Not a tutorial project. Not a bootcamp capstone. A production system with 80+ API endpoints, 13 database tables, a 4-stage prompt engineering pipeline, CRM integration, and 40+ product planning documents — all designed, scoped, and shipped by one person.

This article is the story of why I built it, how I built it, and what it demonstrates about how I work.

The Problem

After 15+ years leading large-scale programs — $500M+ healthcare IT implementations at Kaiser Permanente and Northwestern Medicine, EV fleet deployments at Ford, startup operations at my own company — I found myself in a spot that a lot of experienced professionals know too well: my resume didn't capture what I actually do.

Resumes are flat. They list titles, dates, and bullet points. They don't show how you think. They don't show how you break down a complex problem, architect a solution, manage stakeholders, and ship something real. And in a job market where everyone is listing "AI" as a skill, I wanted to demonstrate it — not just claim it.

So I decided to build the proof.

What I Built

The platform at allenwalker.info serves two purposes: a professional portfolio and an AI-powered referral intelligence system. The portfolio showcases my background across business development, program delivery, technology, and advisory work. The referral engine is the technical centerpiece — a system that analyzes job descriptions and generates personalized, data-driven referral pages with semantic fit scoring.

Here's the stack:

  • Framework: Next.js 15 with React 19 and TypeScript
  • Database: PostgreSQL with row-level security via Supabase
  • AI/ML: OpenAI GPT-4o-mini for content generation, text-embedding-3-small for vector embeddings
  • CRM: HubSpot API integration for deals, companies, and contacts
  • Auth: Session-based authentication with HMAC-SHA256 signed cookies
  • Hosting: Vercel with serverless edge functions
  • Styling: Tailwind CSS 4 with dark mode support

The codebase includes 80+ API endpoints across public, enrichment, and admin domains, with a component library powering 8 public-facing pages and an authenticated admin console.

The AI Referral Engine

This is the system I'm most proud of, because it's not a wrapper around a chat API. It's an engineered pipeline with real architecture decisions.

The 4-Stage Prompt Pipeline

When a job description enters the system, it flows through four specialized AI stages, each with a single responsibility:

Stage 1 — Extraction. GPT-4o-mini parses the raw job description and identifies up to 30 key attributes, ranked by importance to the role. These aren't just keywords — they're structured skill/requirement descriptors that capture what the role actually needs.

Stage 2 — Categorization. Each extracted attribute is classified into one of three pillars: Industry Fit, Process Fit, or Technical Fit. This taxonomy was designed to mirror how hiring managers actually evaluate candidates — not just "can they code" but "do they understand our industry" and "can they run our processes."

Stage 3 — Refinement. Attributes are polished into concise skill labels (5 words or fewer) suitable for visual display. This sounds simple, but getting an LLM to consistently produce clean, display-ready labels required careful prompt engineering with boundary rules and output format constraints.

Stage 4 — Relevance Ranking. Attributes are re-ranked by generic role relevance, independent of company-specific context. This produces a normalized ranking that enables fair comparison across different job descriptions.

Each stage persists its results to dedicated database tables. This isn't just for logging — it means I can reprocess any individual stage without rerunning the full pipeline, debug issues at any point in the chain, and audit every transformation the system makes.
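The four stages above can be sketched as a typed pipeline. This is an illustrative sketch, not the production code: the stage bodies are stubs (in the real system each calls GPT-4o-mini with a focused prompt), and the type and function names are my own placeholders. What it shows is the shape that makes stage-level reprocessing possible: each stage takes the previous stage's output, and each result is persisted before the next stage runs.

```typescript
// Illustrative sketch of the 4-stage pipeline shape. Stage bodies are
// stubs; in production each would call GPT-4o-mini with a focused prompt.
type Attribute = { text: string; rank: number };
type Pillar = "Industry Fit" | "Process Fit" | "Technical Fit";
type Categorized = Attribute & { pillar: Pillar };
type Refined = Categorized & { label: string };

// Stand-in for a database write; the real system uses dedicated tables,
// which is what allows reprocessing any single stage in isolation.
const stageStore = new Map<string, unknown>();
function persist(stage: string, result: unknown): void {
  stageStore.set(stage, result);
}

// Stage 1 — Extraction: up to 30 attributes, ranked by importance.
function extract(jd: string): Attribute[] {
  return jd
    .split(",")
    .map((s, i) => ({ text: s.trim(), rank: i + 1 }))
    .slice(0, 30);
}

// Stage 2 — Categorization into one of the three pillars.
function categorize(attrs: Attribute[]): Categorized[] {
  return attrs.map((a) => ({ ...a, pillar: "Technical Fit" as Pillar }));
}

// Stage 3 — Refinement: condense to a display label of 5 words or fewer.
function refine(attrs: Categorized[]): Refined[] {
  return attrs.map((a) => ({
    ...a,
    label: a.text.split(/\s+/).slice(0, 5).join(" "),
  }));
}

// Stage 4 — Relevance ranking, independent of company context.
function rankByRelevance(attrs: Refined[]): Refined[] {
  return [...attrs].sort((a, b) => a.rank - b.rank);
}

function runPipeline(jd: string): Refined[] {
  const extracted = extract(jd);
  persist("extraction", extracted);
  const categorized = categorize(extracted);
  persist("categorization", categorized);
  const refined = refine(categorized);
  persist("refinement", refined);
  const ranked = rankByRelevance(refined);
  persist("relevance", ranked);
  return ranked;
}
```

Because each stage's output is persisted under its own key, rerunning, say, refinement only requires reloading the categorization result — the earlier stages never execute again.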

Semantic Matching

After extraction, each attribute is compared against my professional profile using a hybrid scoring model:

  • Semantic component (85% weight): Vector embeddings generated via OpenAI's text-embedding-3-small. Both the job attribute and my profile entries are embedded, and cosine similarity measures alignment.
  • Lexical component (15% weight): Token overlap provides a fallback for cases where embeddings might miss literal keyword matches — "Python" vs. "Python programming," for example.

The blended score maps to a fit color: Green (0.80 or above), Yellow (0.60 to 0.79), or Grey (below 0.60). These thresholds were calibrated through iterative testing against real job descriptions to produce results that feel intuitively correct.
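The hybrid scorer can be sketched in a few functions. The 85/15 weights and the color thresholds come from the description above; the function names are illustrative, and the embedding vectors are assumed inputs (in production they come from text-embedding-3-small).

```typescript
// Illustrative sketch of the hybrid scoring model; real embedding vectors
// would come from OpenAI's text-embedding-3-small.

// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Jaccard-style token overlap as the lexical fallback, so literal
// matches like "Python" vs. "Python programming" still register.
function tokenOverlap(a: string, b: string): number {
  const ta = new Set(a.toLowerCase().split(/\W+/).filter(Boolean));
  const tb = new Set(b.toLowerCase().split(/\W+/).filter(Boolean));
  const inter = [...ta].filter((t) => tb.has(t)).length;
  const union = new Set([...ta, ...tb]).size;
  return union === 0 ? 0 : inter / union;
}

type FitColor = "green" | "yellow" | "grey";

// 85% semantic, 15% lexical, per the weights above.
function blendedScore(semantic: number, lexical: number): number {
  return 0.85 * semantic + 0.15 * lexical;
}

// Thresholds calibrated against real job descriptions.
function fitColor(score: number): FitColor {
  if (score >= 0.8) return "green";
  if (score >= 0.6) return "yellow";
  return "grey";
}
```

For example, a semantic similarity of 0.9 with a lexical overlap of 0.5 blends to 0.84, which lands in Green.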

Pareto-Weighted Scoring

Not all job attributes matter equally. A role might list 30 requirements, but the top 5 usually determine whether you're a fit. The scoring system uses an exponential decay formula inspired by the Pareto principle:

weight = 100 * 0.6^(rank - 1)

The highest-ranked attribute gets full weight. Each subsequent attribute gets 60% of the previous one's weight. Each weight is then multiplied by the fit color multiplier (Green = 1.0, Yellow = 0.65, Grey = 0). The result is a single 0-100% fit score that emphasizes alignment on the dimensions that matter most.
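Putting the decay formula and the color multipliers together, the overall score can be sketched as below. The formula and multipliers are from the description above; normalizing the earned weight against the maximum possible weight is one plausible way to arrive at the 0-100% score, and the function names are my own.

```typescript
// Sketch of the Pareto-weighted fit score. Normalization against the
// maximum possible weight is an assumed detail; formula and multipliers
// are as described in the article.
type Fit = "green" | "yellow" | "grey";
const MULTIPLIER: Record<Fit, number> = { green: 1.0, yellow: 0.65, grey: 0 };

// weight = 100 * 0.6^(rank - 1): rank 1 gets 100, rank 2 gets 60, etc.
function paretoWeight(rank: number): number {
  return 100 * Math.pow(0.6, rank - 1);
}

// Single 0-100% score: fit-weighted sum over the maximum possible sum.
function overallFitScore(fits: Fit[]): number {
  let earned = 0;
  let possible = 0;
  fits.forEach((fit, i) => {
    const w = paretoWeight(i + 1);
    earned += w * MULTIPLIER[fit];
    possible += w;
  });
  return possible === 0 ? 0 : (earned / possible) * 100;
}
```

The decay is steep by design: a Green on the top-ranked attribute and a Grey on the second still scores 62.5% (100 earned out of 160 possible), because the top attribute dominates.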

The Admin Console

The referral engine runs through an authenticated admin dashboard with session-based auth (8-hour expiration, HMAC-SHA256 signed cookies). The admin console provides:

A 5-step referral page builder that transforms raw CRM data into shareable referral pages: company search, job selection, attribute review with AI-assisted label suggestions, narrative preview, and publish to a unique slug-based URL.

An attribute override system where I can adjust AI-generated labels, reassign pillar classifications, change fit colors, or toggle visibility. All overrides are stored separately from base data, so I can always roll back to the original AI output.

CRM integration that syncs deals, companies, and contacts from HubSpot, providing the data layer for the referral page builder.
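The session mechanism mentioned above — HMAC-SHA256 signed cookies with an 8-hour expiration — can be sketched with Node's built-in crypto module. This is a minimal illustration of the pattern, not the production code: the payload shape and function names are assumptions.

```typescript
// Minimal sketch of HMAC-SHA256 signed session cookies with an 8-hour
// expiry. Payload shape and names are illustrative assumptions.
import { createHmac, timingSafeEqual } from "crypto";

const EIGHT_HOURS_MS = 8 * 60 * 60 * 1000;

function sign(payload: string, secret: string): string {
  return createHmac("sha256", secret).update(payload).digest("hex");
}

// Cookie value: base64url(payload) + "." + HMAC over the encoded payload.
function createSessionCookie(userId: string, secret: string, now = Date.now()): string {
  const payload = JSON.stringify({ userId, expires: now + EIGHT_HOURS_MS });
  const encoded = Buffer.from(payload).toString("base64url");
  return `${encoded}.${sign(encoded, secret)}`;
}

// Returns the userId if the signature is valid and the session is
// unexpired; otherwise null. Uses a constant-time comparison.
function verifySessionCookie(cookie: string, secret: string, now = Date.now()): string | null {
  const [encoded, mac] = cookie.split(".");
  if (!encoded || !mac) return null;
  const expected = sign(encoded, secret);
  const a = Buffer.from(mac);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  const { userId, expires } = JSON.parse(Buffer.from(encoded, "base64url").toString());
  return now < expires ? userId : null;
}
```

Signing the cookie means the server never stores session state: any tampering with the payload (say, extending the expiry) invalidates the signature.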

The Product Management Layer

What separates this project from a typical developer portfolio is the product management discipline behind it. I didn't just start coding — I planned, scoped, and documented every feature the way I would for any enterprise program.

The project includes 15+ Product Requirements Documents with user stories and acceptance criteria, 15+ Technical Design Documents with API contracts and data models, a comprehensive gap analysis comparing every PRD requirement against the actual codebase, an architecture overview designed for stakeholder communication, a 4-phase roadmap with dependency mapping and prioritization, and 5 engineered prompt specifications with structured output schemas.

This documentation wasn't an afterthought — it was the foundation. Every feature was scoped before implementation. The gap analysis I conducted in February 2026 audited every single requirement against the codebase, producing a clear picture of what's deployed, what's in progress, and what's next.

The Roadmap

The project is organized into four workstreams with a phased delivery plan:

Workstream 1: Referral Engine — 5 of 8 features deployed, including the core AI pipeline, semantic matching, weighted scoring, and context-preserved referral pages. Remaining: data dictionary completion, JD URL import, LinkedIn profile capture.

Workstream 2: Admin Console — Core referral workflow operational. Next: dashboard with system health metrics, configurable scoring parameters, enrichment history and logging.

Workstream 3: AI Features — Three major features scoped with full PRDs: a dedicated AI and Automation showcase page, a site-wide chatbot with multi-agent orchestration and RAG, and an interview module with video Q&A powered by Notion.

Workstream 4: Portfolio Pages — All 8 public-facing pages deployed across 4 content patterns: static JSX, CSV-driven filterable grids, MDX blog articles, and API-driven dynamic pages.

What This Demonstrates

I built this project because I wanted to show — not tell — what I bring to the table. Here's what I think it demonstrates:

I build end-to-end. This isn't a frontend demo or a backend API. It's a complete system: product requirements, technical architecture, database design, AI pipeline, CRM integration, admin tooling, public-facing pages, and serverless deployment. I took it from "I have an idea" to "it's live in production."

I ship real AI products. The semantic matching pipeline, the 4-stage prompt chain, the hybrid scoring model — these aren't experiments. They're production systems processing real data with real architectural tradeoffs. I chose GPT-4o-mini over GPT-4 for cost efficiency. I added lexical fallback because pure embeddings miss edge cases. I designed stage-level persistence for debuggability. These are engineering decisions, not tutorial exercises.

I think like a product leader. 40+ planning documents. Gap analysis. Phased roadmap. User stories and acceptance criteria for every feature. I managed this project the same way I managed $500M+ healthcare IT programs — because that discipline is what makes complex systems work.

I have a founder's mentality. I identified a real problem (the referral process in job searching is broken and inefficient), designed a solution, built it myself, and shipped it. This is what I've done my entire career — at Kaiser, at Northwestern, at Ford, at my own startup. I see problems, I design systems, and I execute.

What's Next

The platform continues to evolve. The immediate roadmap includes completing the admin dashboard, building the AI and Automation showcase page, and launching the interview module. Longer term, I'm designing a site-wide chatbot with multi-agent orchestration, RAG-grounded responses, and conversational lead capture.

If you've read this far, you probably understand that I don't do things halfway. I'd love to connect. You can find me at allenwalker.info, on LinkedIn, or reach me directly at allen.h.walker@gmail.com.

Let's build something.

Building an AI-Powered Career Platform from Scratch | Allen Walker | Systems in Motion