---
title: JobsAI Space
emoji: 🤖
colorFrom: yellow
colorTo: blue
sdk: gradio
sdk_version: 3.0.5
app_file: app.py
pinned: false
---
AI-Powered Swedish Job Matching Platform
This repository contains the final project for the course ID2223 Scalable Machine Learning and Deep Learning at KTH.
The project culminates in an AI-powered job matching platform, JobsAI, designed to help users find job listings tailored to their resumes. The application is built with Gradio, hosted on the Hugging Face Community Cloud, and can be accessed here:
JobsAI
Overview
Project Pitch
Finding the right job can be overwhelming, especially with over 40,000 listings available on Arbetsförmedlingen. JobsAI streamlines this process by using vector embeddings and similarity search to match users’ resumes with the most relevant job postings. Say goodbye to endless scrolling and let AI do the heavy lifting!
Problem Statement
Traditional job search methods often involve manual browsing of job listings, leading to inefficiency and mismatched applications. To address this, we developed an AI-powered job matching platform that:
- Analyzes resumes and job descriptions to calculate compatibility scores.
- Recommends the most relevant job postings based on semantic similarity.
The platform leverages Natural Language Processing (NLP) and machine learning to eliminate the inefficiencies of manual job searches.
Data Sources
The platform uses two primary data sources:
- Job Listings: Retrieved via Arbetsförmedlingen’s JobStream API, which provides real-time updates for job postings.
- Resumes: Uploaded directly by users via the frontend application.
Methodology
Tool Selection
- Vector Database: After evaluating several options, we chose Pinecone for its ease of use and targeted support for vector embeddings.
- Embedding Model: The base model is sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2, a pre-trained transformer model that encodes sentences and paragraphs into a 384-dimensional dense vector space (see the sketch after this list).
- Finetuned Model: The base model is fine-tuned on user-provided data every seven days and stored on Hugging Face. It can be found here!
- Backend Updates: GitHub Actions is used to automate daily updates to the vector database.
- Feature Store: To store user-provided data, we use Hopsworks, as it makes interacting with features easy and also lets us save older models to evaluate performance over time.
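To make the embedding step concrete, here is a minimal sketch (not the project’s exact code) of how the base model maps text to a 384-dimensional vector:

```python
from sentence_transformers import SentenceTransformer

# Load the same base multilingual model used by JobsAI.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Encode a job description (or a resume) into a dense vector.
text = "Vi söker en dataingenjör med erfarenhet av Python och molntjänster."
embedding = model.encode(text)
print(embedding.shape)  # (384,)
```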
Workflow
Flowchart of the JobsAI structure (figure available in the repository).
Data Retrieval:
- Job data is fetched via the JobStream API and stored in Pinecone after being vectorized (see the sketch below).
- Metadata such as job title, description, location, and contact details is extracted.
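A rough sketch of that vectorize-and-upsert step, assuming the current pinecone-client interface; the index name and payload fields are illustrative, not the repository’s actual values:

```python
import os
from pinecone import Pinecone
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("jobsai")  # hypothetical index name

# A simplified job ad; the real JobStream payload has many more fields.
job = {
    "id": "12345678",
    "headline": "Dataingenjör",
    "description": "Vi söker en dataingenjör ...",
    "city": "Stockholm",
}

# Vectorize the listing and upsert it together with its metadata.
vector = model.encode(f"{job['headline']}. {job['description']}").tolist()
index.upsert(vectors=[(job["id"], vector, {"headline": job["headline"], "city": job["city"]})])
```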
Similarity Search:
- User-uploaded resumes are vectorized using the same sentence transformer model.
- Pinecone is queried for the top-k most similar job embeddings, which are then displayed to the user alongside their similarity scores.
Feature Uploading:
- If a user chooses to leave feedback by clicking either Relevant or Not Relevant, the user's CV is uploaded to Hopsworks together with the specific ad data and the selected label (see the sketch below).
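A hedged sketch of that upload, assuming a feature group called job_feedback and a simplified schema (the project’s actual schema may differ):

```python
import pandas as pd
import hopsworks

# Log in with HOPSWORKS_API_KEY from the environment and open the feature store.
project = hopsworks.login()
fs = project.get_feature_store()

# Hypothetical feature group for feedback events.
feedback_fg = fs.get_or_create_feature_group(
    name="job_feedback",
    version=1,
    primary_key=["cv_id", "ad_id"],
    description="User CVs, ad data, and relevance labels",
)

row = pd.DataFrame([{
    "cv_id": "abc123",
    "ad_id": "12345678",
    "cv_text": "Experienced data engineer ...",
    "ad_text": "Vi söker en dataingenjör ...",
    "relevant": 1,  # 1 = Relevant, 0 = Not Relevant
}])
feedback_fg.insert(row)
```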
Model Training:
- Once every seven days, a cron job on GitHub Actions runs, fine-tuning the base model on all of the data stored in the feature store.
Code Architecture
First-Time Setup
- Run `bootstrap.py` to:
  - Retrieve all job listings using the JobStream API’s snapshot endpoint.
  - Vectorize the listings and insert them into the Pinecone database.
- Embeddings and metadata are generated using helper functions (illustrative versions below):
  - `_create_embedding`: Combines job title, occupation, and description for encoding into a dense vector.
  - `_prepare_metadata`: Extracts additional details like email, location, and timestamps for storage alongside embeddings.
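Illustrative versions of these helpers, with field names assumed from a simplified JobStream payload rather than copied from the repository:

```python
def _create_embedding(model, ad: dict) -> list[float]:
    """Combine headline, occupation, and description into one text and encode it."""
    text = " ".join([
        ad.get("headline", ""),
        ad.get("occupation", ""),
        ad.get("description", ""),
    ])
    return model.encode(text).tolist()


def _prepare_metadata(ad: dict) -> dict:
    """Extract the metadata stored alongside each embedding."""
    return {
        "headline": ad.get("headline", ""),
        "location": ad.get("workplace_city", ""),
        "email": ad.get("contact_email", ""),
        "published": ad.get("publication_date", ""),
    }
```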
Daily Updates
- Automated Workflow: A GitHub Actions workflow runs `main.py` daily at midnight.
- Incremental Updates: The `keep_updated.py` script fetches job listings updated since the last recorded timestamp, keeping the vector database current (sketched below).
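A rough sketch of the incremental update logic; the endpoint, parameter, and field names follow the public JobStream documentation loosely and should be verified against the current API:

```python
import requests

STREAM_URL = "https://jobstream.api.jobtechdev.se/stream"  # assumed endpoint

def fetch_updates(since: str) -> list[dict]:
    """Fetch ads created, updated, or removed since the given timestamp."""
    resp = requests.get(STREAM_URL, params={"date": since})
    resp.raise_for_status()
    return resp.json()

def keep_updated(index, model, since: str) -> None:
    """Apply incremental changes to the Pinecone index."""
    for ad in fetch_updates(since):
        if ad.get("removed"):
            index.delete(ids=[ad["id"]])  # drop delisted ads
        else:
            text = f"{ad.get('headline', '')}. {ad.get('description', '')}"
            vector = model.encode(text).tolist()
            index.upsert(vectors=[(ad["id"], vector, {"headline": ad.get("headline", "")})])
```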
Weekly Updates
- Automated Workflow: A GitHub Actions workflow runs `training_pipeline.ipynb` every Sunday at midnight.
- Model Training: Features are downloaded from Hopsworks, and the base embedding model is fine-tuned on the full dataset of both positive and negative examples (see the sketch below).
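A condensed sketch of what that fine-tuning pass can look like with sentence-transformers, treating each feedback row as a labeled (CV, ad) pair under a contrastive loss; the actual pipeline in `training_pipeline.ipynb` may differ:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Feedback rows from Hopsworks: (cv_text, ad_text, label); 1 = Relevant, 0 = Not Relevant.
rows = [
    ("Experienced data engineer ...", "Vi söker en dataingenjör ...", 1.0),
    ("Registered nurse ...", "Vi söker en dataingenjör ...", 0.0),
]
examples = [InputExample(texts=[cv, ad], label=label) for cv, ad, label in rows]
loader = DataLoader(examples, shuffle=True, batch_size=16)

# ContrastiveLoss pulls relevant pairs together and pushes irrelevant ones apart.
loss = losses.ContrastiveLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
model.save("finetuned-jobsai")  # then pushed to Hugging Face
```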
Querying for Matches
- When a user uploads their resume:
- The resume is encoded using the same transformer model.
- Pinecone’s similarity search retrieves the top-k most relevant job listings (see the sketch below).
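A minimal sketch of the query step, reusing the index and model from the earlier sketches:

```python
def match_jobs(index, model, resume_text: str, top_k: int = 5):
    """Encode the resume and return the most similar job ads with scores."""
    vector = model.encode(resume_text).tolist()
    result = index.query(vector=vector, top_k=top_k, include_metadata=True)
    return [(m.id, m.score, m.metadata.get("headline", "")) for m in result.matches]
```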
How to Run
Prerequisites
- Python 3.x installed locally.
- A Pinecone account and API key.
- Arbetsförmedlingen JobStream API access (free).
- A Hopsworks account and API key.
- A Hugging Face account and API key.
Steps
- Clone this repository:
  git clone https://github.com/filiporestav/jobsai.git
  cd jobsai
- Install dependencies:
  pip install -r requirements.txt
- Add your API keys as environment variables:
  export PINECONE_API_KEY=<your-api-key>
  export HOPSWORKS_API_KEY=<your-api-key>
  export HUGGINGFACE_API_KEY=<your-api-key>
- Run the application locally:
  python app.py
- Open the Gradio app in your browser to upload resumes and view job recommendations.
Potential Improvements
Model Limitation
- The current embedding model truncates text longer than 128 tokens.
- For longer job descriptions, a model capable of processing more tokens (e.g., 512 or 1024) could improve accuracy (see the sketch below).
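For context, the truncation limit is a model attribute that can be inspected and, with caveats, raised toward the underlying transformer's 512-token positional limit; embedding quality beyond the length the model was trained on is not guaranteed:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")
print(model.max_seq_length)  # 128: longer inputs are silently truncated

# The underlying transformer accepts up to 512 positions, so the cap can be raised,
# but quality past the trained length is not guaranteed.
model.max_seq_length = 512
```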
Scalability
- Embedding and querying currently run on CPU, which may limit performance for larger datasets.
- Switching to GPU-based processing would significantly enhance speed.
Conclusion
JobsAI is a proof-of-concept platform that demonstrates how AI can revolutionize the job search experience. By leveraging vector embeddings and similarity search, the platform reduces inefficiencies and matches users with the most relevant job postings.
While it is functional and effective as a prototype, there are ample opportunities for enhancement, particularly in scalability and model capacity.
For a live demo, visit JobsAI.