---
title: In-demand ML skills
emoji: 💻
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 5.0.1
app_file: app.py
pinned: false
short_description: Monitoring in-demand skills on ML job-postings
---

<h1>
  <img src="./images/skills_logo.png" alt="Logo" width="30" height="30">
  In-demand Skill Monitoring for Machine Learning Industry
</h1>

## About

This project aims to monitor in-demand skills for machine learning roles. Skills are extracted with JobBERT, a BERT-based skill extraction model that is continuously fine-tuned on the collected job postings. The skills are monitored and visualized by (1) embedding the extracted skill tokens into vectors, (2) reducing their dimensionality with UMAP, and (3) visualizing the reduced embeddings.

![Header Image](./images/header.png)

### [Monitoring Platform Link](https://huggingface.co/spaces/Robzy/jobbert_knowledge_extraction)

<h2>
  <img src="./images/arch_frame.png" alt="Logo" width="30" height="30">
  Architecture & Frameworks
</h2>

- **Hugging Face Spaces**: Hosts the UI with interactive visualizations of the skill embeddings and their clusters.
- **GitHub Actions**: Schedules the training, inference, and visualization-update scripts.
- **Rapid API**: The API used to scrape job descriptions from LinkedIn.
- **Weights & Biases**: Used for training monitoring and model storage.
- **OpenAI API**: Used to extract ground-truth skills from job descriptions by leveraging multi-shot learning and prompt engineering.

   
# High-Level Overview
<h2>
  <img src="./images/model.png" alt="Logo" width="30" height="30">
  Models
</h2>

* **BERT** - a lightweight, fine-tuned skill extraction model (JobBERT).
* **LLM** - gpt-4o for skill extraction via multi-shot learning; computationally expensive.
* **Embedding model** - [SentenceTransformers](https://sbert.net/) is used to embed skills into vectors.
* [**spaCy**](https://spacy.io/models/en#en_core_web_sm) - sentence tokenization model.

<h2>
  <img src="./images/pipeline.png" alt="Logo" width="30" height="30">
  Pipeline
</h2>

The following scripts are scheduled to continually automate the skill monitoring and model training processes.

<div align="center">
    <img src="./images/in-demand-flow.png" alt="Flow Image">
</div>

### 1. Job-posting scraping
Fetching machine learning job descriptions from LinkedIn via Rapid API.
### 2. Skills tagging with LLM
We extract a ground-truth set of skills from the job descriptions by leveraging multi-shot learning and prompt engineering.
### 3. Model training
The skill extraction model is fine-tuned on the extracted ground truth.
### 4. Skills tagging with JobBERT
Skills are extracted from job postings with the fine-tuned model.
### 5. Embedding & visualization
Extracted skills are embedded with an embedding model, reduced with UMAP, and clustered with K-means.


<h1>
  <img src="./images/scraping_logo.png" alt="Logo" width="30" height="30">
  Job Scraping
</h1>

This component scrapes machine learning job descriptions from the LinkedIn Job Search API and saves them as text files for further analysis.

## Workflow

1. **API Configuration**:
   - The script uses the `linkedin-job-search-api.p.rapidapi.com` endpoint to fetch job data.
   - API access is authenticated using a RapidAPI key stored as an environment variable `RAPID_API_KEY`.

2. **Data Retrieval**:
   - The script fetches jobs matching the keyword `machine learning`.
   - It retrieves job details including the description, which is saved for further analysis.

3. **Job Description Extraction**:
   - Each job description is saved in a `.txt` file under the `job-postings/<date>` folder.
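
The workflow above could be sketched as follows. The host name and `RAPID_API_KEY` variable come from this repo's description, but the endpoint path, query parameter names, and response fields are illustrative assumptions; check the RapidAPI console for the exact contract.

```python
import os
from datetime import date
from pathlib import Path

import requests  # third-party HTTP client

API_HOST = "linkedin-job-search-api.p.rapidapi.com"


def build_request(keyword: str = "machine learning") -> tuple[str, dict, dict]:
    """Assemble the URL, headers, and query params for the RapidAPI call.

    The endpoint path ("/search") and the "keywords" parameter are
    hypothetical placeholders, not taken from this repo.
    """
    url = f"https://{API_HOST}/search"
    headers = {
        "x-rapidapi-key": os.environ.get("RAPID_API_KEY", ""),
        "x-rapidapi-host": API_HOST,
    }
    params = {"keywords": keyword}
    return url, headers, params


def fetch_jobs(keyword: str = "machine learning") -> list[dict]:
    """Fetch matching jobs; assumes the API returns a JSON list of jobs."""
    url, headers, params = build_request(keyword)
    resp = requests.get(url, headers=headers, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()


def save_descriptions(jobs: list[dict], root: str = "job-postings") -> None:
    """Write each job description to job-postings/<date>/<n>.txt."""
    out_dir = Path(root) / date.today().strftime("%d-%m-%Y")
    out_dir.mkdir(parents=True, exist_ok=True)
    for i, job in enumerate(jobs):
        (out_dir / f"{i}.txt").write_text(job.get("description", ""))
```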
   
# Skill Embeddings and Visualization

We generate embeddings for the technical skills listed in the `.txt` files and visualize their relationships using dimensionality reduction and clustering. Visualizations are created for both 2D and 3D embeddings, and KMeans clustering is used to identify groups of similar skills.

## Workflow

### 1. Input Data
- Skills are loaded from `.txt` files located in date-based subfolders under the `./tags` directory.
- Each subfolder corresponds to a specific date (e.g., `03-01-2024`).
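
Loading and deduplicating the skills could look like the sketch below; the one-skill-per-line file layout is an assumption inferred from the description above.

```python
from pathlib import Path


def load_skills(tags_root: str = "./tags") -> dict[str, set[str]]:
    """Collect the unique skills per date folder under ./tags.

    Assumes each .txt file lists one skill per line; skills are
    lowercased so duplicates that differ only in case collapse.
    """
    skills_by_date: dict[str, set[str]] = {}
    for day_dir in sorted(Path(tags_root).iterdir()):
        if not day_dir.is_dir():
            continue
        skills: set[str] = set()
        for txt in day_dir.glob("*.txt"):
            skills.update(
                line.strip().lower()
                for line in txt.read_text().splitlines()
                if line.strip()
            )
        skills_by_date[day_dir.name] = skills
    return skills_by_date
```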

### 2. Embedding Generation
- The script uses the `SentenceTransformer` model (`paraphrase-MiniLM-L3-v2`) to generate high-dimensional embeddings for the unique skills.

### 3. Dimensionality Reduction
- UMAP (Uniform Manifold Approximation and Projection) is used to reduce the embeddings to:
  - **2D**: For creating simple scatter plots.
  - **3D**: For interactive visualizations.

### 4. Clustering
- KMeans clustering is applied to the 3D embeddings to group similar skills into clusters.
- The number of clusters can be specified in the script.

### 5. Visualization and Outputs
- **2D Projection**: Saved as PNG images in the `./plots` folder.
- **3D Projection**: Saved as interactive HTML files in the `./plots` folder.
- **3D Clustering Visualization**: Saved as HTML files, showing clusters with different colors.
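
The 2D output could be produced with a sketch like this (the figure styling and file name are illustrative; the `./plots` folder comes from the description above):

```python
from pathlib import Path

import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs in CI
import matplotlib.pyplot as plt
import numpy as np


def plot_2d(coords: np.ndarray, out_dir: str = "./plots") -> Path:
    """Save a 2D scatter of the reduced skill embeddings as a PNG."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    out_path = Path(out_dir) / "skills_2d.png"
    fig, ax = plt.subplots(figsize=(8, 6))
    ax.scatter(coords[:, 0], coords[:, 1], s=10)
    ax.set_title("Skill embeddings (UMAP, 2D)")
    fig.savefig(out_path, dpi=150)
    plt.close(fig)
    return out_path
```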