An AI-Powered Career Concierge & Automated Resume Tailoring System. Developed as part of the 5-Day AI Agents Intensive Course with Google.
| Authors | Jedrzej Golaszewski |
|---|---|
| Title | Career Compass |
| Status | Completed |
| Date | Dec 1, 2025 |
| Tech stack | FastAPI, Python, Google ADK |
| Links | |
## Overview
Job hunting is stressful and highly repetitive. Customizing a resume for every single job application can take hours, but failing to do so means getting filtered out by Applicant Tracking Systems (ATS).
Career Compass acts as a personal career concierge. It interviews the user about their work history, automatically formats a polished CV, and dynamically tailors it for specific job descriptions. By combining a multi-agent AI system with a high-performance LaTeX engine and a transformer-based PII-detection model, the application cuts resume tailoring time from over 45 minutes to under 2 minutes.
## The Impact: Human vs. Agent
| Metric | Traditional Method | Career Compass Agent |
|---|---|---|
| Time to create | 45 min – 2 hours | < 2 minutes |
| Keyword/ATS Optimization | Often lacks precision | High accuracy |
| Application Scalability | ~3-5 applications/day | 50+ applications/day |
| Human Error Risk | High | Low |
## Technical Architecture
My goal was to build more than just an AI script: a production-ready, secure, and scalable service. The backend is built with FastAPI and utilizes Google's Agent Development Kit (ADK) powered by Gemini 2.5.
### 1. Hierarchical Multi-Agent System
Instead of relying on a single mega-prompt, the system decomposes the workload into specialized, sequential agents managed by a master Orchestrator:
- Content Agent: Extracts skills and optimizes keywords specifically for ATS systems.
- Design Agent: Strictly handles LaTeX coding and visual layout, ensuring the optimized content remains intact while the design adapts.
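The orchestration pattern can be sketched library-agnostically. The real system builds this on Google ADK agents; the `content_agent` and `design_agent` bodies below are placeholders, not the actual LLM-backed implementations:

```python
from dataclasses import dataclass, field


@dataclass
class Orchestrator:
    """Master agent: runs specialized sub-agents sequentially over shared state."""
    agents: list = field(default_factory=list)

    def run(self, state: dict) -> dict:
        for agent in self.agents:
            state = agent(state)  # each agent reads and updates the shared state
        return state


def content_agent(state: dict) -> dict:
    # Placeholder for skill extraction and ATS keyword optimization.
    state["cv_text"] = state["raw_history"].upper()
    return state


def design_agent(state: dict) -> dict:
    # Placeholder for LaTeX generation; it must not alter the optimized content.
    state["latex"] = "\\section{Experience} " + state["cv_text"]
    return state
```

The key design choice this illustrates: the Design Agent only ever wraps the Content Agent's output, so ATS-optimized text survives layout changes.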
### 2. Self-Correcting “Judge-Refiner” Loops
To prevent hallucinations and syntax errors, I implemented continuous validation loops:
- Content Loop: A Judge Agent audits the generated CV against a predefined error list and provides actionable feedback to a Refiner Agent to rewrite the text.
- Design Loop: A LaTeX Auditor attempts to compile the code. If compilation fails, it feeds the exact error log back to the Refiner Agent to automatically patch the syntax.
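Both loops follow the same control flow, sketched below. In the real pipeline `judge` and `refiner` are LLM agents (and for the Design Loop, the judge is a compile attempt); here they are plain callables so the loop itself is clear:

```python
def refine_until_valid(draft: str, judge, refiner, max_iters: int = 5) -> str:
    """Judge-Refiner loop: audit the draft, feed issues back, stop when clean."""
    for _ in range(max_iters):
        issues = judge(draft)           # Judge Agent: returns a list of problems
        if not issues:
            return draft                # passed the audit
        draft = refiner(draft, issues)  # Refiner Agent: rewrites using the feedback
    return draft                        # best effort after max_iters
```

Bounding the loop with `max_iters` matters: it guarantees the pipeline terminates even when the refiner cannot satisfy the judge.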
### 3. Privacy-First Data Handling (GLiNER)
Handling career data requires strict security. I integrated a GLiNER model fine-tuned for Personally Identifiable Information (PII) detection.
- Before data reaches the LLM, sensitive entities (names, phones, addresses) are tokenized and sanitized.
- These entities are safely stored in a local database and only re-injected into the final compiled PDF artifact at the very end of the pipeline.
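The tokenize-then-reinject flow can be sketched as below. In the real pipeline the `entities` list comes from the GLiNER model's PII detections; here it is hardcoded for illustration:

```python
def sanitize(text: str, entities: list) -> tuple:
    """Replace detected PII spans with placeholder tokens before the LLM sees the text."""
    mapping = {}
    for i, (span, label) in enumerate(entities):
        token = f"<PII_{label}_{i}>"
        mapping[token] = span           # kept locally, never sent to the model
        text = text.replace(span, token)
    return text, mapping


def reinject(text: str, mapping: dict) -> str:
    """Restore the original PII into the final artifact at the end of the pipeline."""
    for token, span in mapping.items():
        text = text.replace(token, span)
    return text
```

Because the mapping never leaves the local database, the LLM only ever processes placeholder tokens, and re-injection is a lossless round trip.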
### 4. High-Performance LaTeX Engine
Generating beautiful resumes requires robust typesetting. I implemented server-side compilation using the Tectonic engine, allowing fast PDF generation without the weight of a full local TeX Live installation.
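A minimal sketch of the server-side compilation step, assuming the `tectonic` binary is on the `PATH` (the exact CLI flags may vary by Tectonic version):

```python
import pathlib
import subprocess
import tempfile


def build_command(tex_path: str, outdir: str) -> list:
    # Tectonic compiles a standalone .tex to PDF in one pass, fetching
    # packages on demand instead of requiring a local TeX Live tree.
    return ["tectonic", "--outdir", outdir, tex_path]


def compile_latex(tex_source: str) -> bytes:
    """Compile LaTeX to PDF; on failure, surface the compiler log."""
    with tempfile.TemporaryDirectory() as tmp:
        tex_path = pathlib.Path(tmp) / "resume.tex"
        tex_path.write_text(tex_source)
        result = subprocess.run(
            build_command(str(tex_path), tmp),
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            # This error log is what gets fed back into the Design Loop.
            raise RuntimeError(result.stderr)
        return (pathlib.Path(tmp) / "resume.pdf").read_bytes()
```

Raising with the raw compiler log is deliberate: that log is exactly the feedback the Refiner Agent consumes in the Design Loop.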
## Challenges & Solutions
### The Challenge: Unpredictable LaTeX Syntax
Generating error-free LaTeX is notoriously difficult for LLMs due to its rigid syntax requirements. Initial tests resulted in frequent compilation failures.
### The Solution: Dynamic Context Injection & Compiler Feedback
I engineered custom lifecycle callbacks (before_model_callback) to dynamically inject verified, working LaTeX resume examples into the model’s context based on the current system state. Combined with the Tectonic error logs feeding directly back into the Refiner Agent, the system learned to self-correct its own syntax errors.
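The injection idea can be sketched as below. The `VERIFIED_EXAMPLES` table and the request shape are illustrative stand-ins, not the actual ADK callback signature:

```python
# Hypothetical lookup table of known-good examples, keyed by pipeline stage.
VERIFIED_EXAMPLES = {
    "design": r"\documentclass{article}\begin{document}Example CV\end{document}",
}


def before_model_callback(state: dict, request: dict) -> dict:
    """Inject a verified LaTeX example into the prompt based on pipeline state."""
    example = VERIFIED_EXAMPLES.get(state.get("stage"))
    if example:
        request.setdefault("messages", []).insert(
            0, {"role": "system", "content": "Known-good LaTeX example:\n" + example}
        )
    return request
```

Injecting examples conditionally, rather than baking them into a static mega-prompt, keeps the context window small when the stage does not need them.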
- Result: Reduced average refinement loops by 3 iterations, improved processing speed by 20%, and achieved near-perfect robustness against syntax errors.
## Deployment & Infrastructure
The entire service is containerized with Docker and orchestrated via Docker Compose for high reproducibility, and is fully architected for cloud-native deployment.
- Frontend & Backend: Decoupled architecture allowing the FastAPI backend and Demo UI to scale independently.
- Google Cloud Ready: Built to be deployed seamlessly onto Google Cloud Run with artifact management via Google Artifact Registry.
- Human-in-the-loop: A custom ticketing service keeps users informed via long-polling during the multi-step compilation and review process.
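The ticketing pattern can be sketched as an in-memory store with a blocking poll. The real service sits behind FastAPI endpoints; class and method names here are illustrative:

```python
import threading
import time
import uuid


class TicketService:
    """Minimal long-polling ticket store for multi-step pipeline status."""

    def __init__(self):
        self._tickets = {}
        self._lock = threading.Lock()

    def create(self) -> str:
        tid = str(uuid.uuid4())
        with self._lock:
            self._tickets[tid] = {"status": "pending", "result": None}
        return tid

    def update(self, tid: str, status: str, result=None) -> None:
        # Called by the pipeline as each agent/compile step finishes.
        with self._lock:
            self._tickets[tid] = {"status": status, "result": result}

    def poll(self, tid: str, timeout: float = 30.0, interval: float = 0.5) -> dict:
        """Block until the ticket leaves 'pending' or the timeout expires."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            with self._lock:
                ticket = dict(self._tickets[tid])
            if ticket["status"] != "pending":
                return ticket
            time.sleep(interval)
        return {"status": "pending", "result": None}
```

Long-polling keeps the client protocol simple (plain HTTP GET) while still delivering near-real-time status during the compile-and-review steps.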
## Future Roadmap
To scale this prototype into an enterprise-ready product, my next steps include:
- Feature Expansion: Adding a web scraper to parse job offers directly via URLs and generating perfectly matching Cover Letters.
- Database Migration: Upgrading the artifact service from SQLite to PostgreSQL.
- Microservices: Decoupling the LaTeX Tectonic compiler into its own auto-scaling microservice to handle high-traffic concurrency.
- Frontend Overhaul: Building a fully featured, interactive React frontend.
