---
tags:
- javascript
- code-generation
- transformers
- fine-tuned
- distilgpt2
license: mit
library_name: transformers
---

# DistilGPT-2 Code Generator (Explanation → JavaScript Code)

This model is a **fine-tuned version of `distilgpt2`** trained to generate **JavaScript code** from natural language explanations.

It was trained on a dataset containing **explanation-code pairs** (an illustrative pair is sketched below the list), making it useful for:

- ✅ **Code generation from text descriptions**
- ✅ **Learning JavaScript syntax & patterns**
- ✅ **Automated coding assistance**

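For illustration, one such pair might look like the following. This is a hypothetical example; the field names and schema are assumptions, not the actual training format:

```python
# Hypothetical explanation-code training pair; the field names and
# schema here are illustrative assumptions.
example = {
    "explanation": "This function takes a name as input and returns a greeting message.",
    "code": "function greet(name) {\n  return `Hello, ${name}!`;\n}",
}
```
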
---

## **Model Details**

- **Base Model:** `distilgpt2` (a distilled GPT-2 with 6 layers instead of 12, ~82M parameters vs. GPT-2's 124M)
- **Dataset:** JavaScript explanations paired with corresponding functions
- **Fine-tuning:** LoRA (memory-efficient low-rank adaptation); see the sketch after this list
- **Training Environment:** Google Colab (T4 GPU)
- **Optimization:** FP16 mixed precision for faster training

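As a rough illustration, a LoRA setup for `distilgpt2` with the Hugging Face `peft` library might look like the sketch below. The hyperparameters are illustrative assumptions, not the values actually used for this model:

```python
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Illustrative LoRA hyperparameters (assumed, not the actual training config)
lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused QKV attention projection
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters are trainable

# FP16 mixed precision, as noted above; pass these args and a tokenized
# dataset to transformers.Trainer to run the fine-tuning.
training_args = TrainingArguments(
    output_dir="distilgpt2-code-generator",
    per_device_train_batch_size=8,
    num_train_epochs=3,
    fp16=True,
)
```
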
---

## **Example Usage**

Load the model and generate JavaScript code from explanations:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sureal01/distilgpt2-code-generator"  # Replace with your username
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def generate_code(explanation):
    # Build the prompt in the same format used during fine-tuning
    input_text = f"### Explanation:\n{explanation}\n\n### Generate JavaScript code:\n"
    inputs = tokenizer(input_text, return_tensors="pt")

    output = model.generate(
        **inputs,
        max_length=150,
        do_sample=True,  # needed for temperature/top_p sampling to take effect
        temperature=0.5,
        top_p=0.9,
        repetition_penalty=1.5,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Example
test_explanation = "This function takes a name as input and returns a greeting message."
generated_code = generate_code(test_explanation)
print("\nGenerated Code:\n", generated_code)
```

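Alternatively, the model can be loaded through the `pipeline` API. The generation parameters below are illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="sureal01/distilgpt2-code-generator")
prompt = "### Explanation:\nThis function adds two numbers.\n\n### Generate JavaScript code:\n"
result = generator(prompt, max_length=150, do_sample=True, temperature=0.5, top_p=0.9)
print(result[0]["generated_text"])
```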