---
license: apache-2.0
datasets:
- Novora/CodeClassifier_v1
pipeline_tag: text-classification
---
# Introduction
Novora Code Classifier v1 Tiny is a tiny `Text Classification` model that classifies a given code text input into 1 of `31` classes (programming languages).
The model is designed to run on a CPU, but runs optimally on a GPU.
# Info
- Outputs 1 of 31 classes (programming languages)
- 512-token input length
- 64 hidden dimensions
- 2 linear layers
- The `snowflake-arctic-embed-xs` model is used to produce the input embeddings (see the sketch after this list).
- The dataset is split into an 80% training set and a 20% test set.
- The combined training and test data consists of 100 chunks per programming language, 3,100 chunks (entries) in total; each chunk is a 512-token snippet of code.
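Since per-token embeddings feed the classifier, the data-preparation step can look roughly like the following. This is a minimal sketch assuming the `Snowflake/snowflake-arctic-embed-xs` checkpoint from the Hugging Face Hub and the `transformers` library; the exact chunking and preprocessing used for this model may differ.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Sketch: tokenize a code snippet into a 512-token chunk and obtain
# per-token embeddings from snowflake-arctic-embed-xs.
tokenizer = AutoTokenizer.from_pretrained("Snowflake/snowflake-arctic-embed-xs")
embedder = AutoModel.from_pretrained("Snowflake/snowflake-arctic-embed-xs")

code = "def add(a, b):\n    return a + b\n"
inputs = tokenizer(
    code,
    max_length=512,
    truncation=True,
    padding="max_length",
    return_tensors="pt",
)

with torch.no_grad():
    # (batch, 512, embedding_dim) token embeddings used as classifier input
    token_embeddings = embedder(**inputs).last_hidden_state
```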
# Architecture
The `CodeClassifier-v1-Tiny` model employs a neural network architecture optimized for text classification tasks, specifically for classifying programming languages from code snippets. This model includes:
- **Bidirectional LSTM Feature Extractor**: This bidirectional LSTM layer processes input embeddings, effectively capturing contextual relationships in both forward and reverse directions within the code snippets.
- **Adaptive Pooling**: Following the LSTM, adaptive average pooling reduces the feature dimension to a fixed size, accommodating variable-length inputs.
- **Fully Connected Layers**: The network includes two linear layers. The first projects the pooled features into a hidden feature space, and the second linear layer maps these to the output classes, which correspond to different programming languages. A dropout layer with a rate of 0.5 between these layers helps mitigate overfitting.
The model's bidirectional nature and architectural components make it adept at understanding the syntax and structure crucial for code classification.
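The description above maps onto a small PyTorch module roughly like the following. This is an illustrative sketch, not the released implementation: the class name, the 384-dimensional `snowflake-arctic-embed-xs` embedding size, and the ReLU between the linear layers are assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the described architecture; the class name,
# 384-dim embedding size, and ReLU activation are assumptions.
class CodeClassifierTiny(nn.Module):
    def __init__(self, embed_dim: int = 384, hidden_dim: int = 64, num_classes: int = 31):
        super().__init__()
        # Bidirectional LSTM feature extractor over the token embeddings
        self.lstm = nn.LSTM(
            input_size=embed_dim,
            hidden_size=hidden_dim,
            batch_first=True,
            bidirectional=True,
        )
        # Adaptive average pooling collapses the variable-length sequence axis
        self.pool = nn.AdaptiveAvgPool1d(1)
        # Two linear layers with a dropout of 0.5 in between
        self.fc1 = nn.Linear(hidden_dim * 2, hidden_dim)
        self.dropout = nn.Dropout(p=0.5)
        self.fc2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, seq_len, embed_dim)
        features, _ = self.lstm(embeddings)           # (batch, seq_len, 2 * hidden_dim)
        pooled = self.pool(features.transpose(1, 2))  # (batch, 2 * hidden_dim, 1)
        pooled = pooled.squeeze(-1)                   # (batch, 2 * hidden_dim)
        hidden = self.dropout(torch.relu(self.fc1(pooled)))
        return self.fc2(hidden)                       # (batch, num_classes) logits
```

Feeding the `token_embeddings` from the earlier sketch through this module would yield a `(1, 31)` tensor of logits, one per programming language.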