---
tags:
- javascript
- code-generation
- transformers
- fine-tuned
- distilgpt2
license: mit
library_name: transformers
---
# DistilGPT-2 Code Generator (Explanation → JavaScript Code)
This model is a **fine-tuned version of `distilgpt2`** trained to generate **JavaScript code** from natural language explanations.
It was trained on a dataset containing **explanation-code pairs**, making it useful for:
- **Code generation from text descriptions**
- **Learning JavaScript syntax & patterns**
- **Automated coding assistance**
---
## **Model Details**
- **Base Model:** `distilgpt2` (a distilled version of GPT-2 with 6 transformer layers instead of 12)
- **Dataset:** JavaScript explanations + corresponding functions
- **Fine-tuning:** Trained using **LoRA (memory-efficient adaptation)**; see the sketch after this list
- **Training Environment:** Google Colab (T4 GPU)
- **Optimization:** FP16 precision for faster training
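
For reference, a minimal sketch of how such a LoRA fine-tune of `distilgpt2` might be set up with the `peft` library. The hyperparameters, dataset file name, and field names (`explanation`, `code`) are illustrative placeholders, not the exact values used for this model:

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # distilgpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Attach LoRA adapters to the attention projections (memory-efficient adaptation)
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                         target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

# Hypothetical dataset of explanation/code pairs, joined into a single prompt string
dataset = load_dataset("json", data_files="explanation_code_pairs.json")["train"]

def to_features(example):
    text = (f"### Explanation:\n{example['explanation']}\n\n"
            f"### Generate JavaScript code:\n{example['code']}")
    return tokenizer(text, truncation=True, max_length=256)

tokenized = dataset.map(to_features, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="distilgpt2-code-generator",
                           per_device_train_batch_size=8,
                           num_train_epochs=3,
                           fp16=True),  # FP16 precision, e.g. on a Colab T4 GPU
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```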
---
## **Example Usage**
Load the model and generate JavaScript code from explanations:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sureal01/distilgpt2-code-generator"  # Replace with your username
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def generate_code(explanation):
    # Build the same prompt format used during fine-tuning
    input_text = f"### Explanation:\n{explanation}\n\n### Generate JavaScript code:\n"
    inputs = tokenizer(input_text, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_length=150,
        do_sample=True,            # required for temperature/top_p to take effect
        temperature=0.5,
        top_p=0.9,
        repetition_penalty=1.5,
        pad_token_id=tokenizer.eos_token_id,  # distilgpt2 has no pad token
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Example
test_explanation = "This function takes a name as input and returns a greeting message."
generated_code = generate_code(test_explanation)
print("\nGenerated Code:\n", generated_code)
```
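
The sampling settings above are a reasonable starting point: a low temperature (0.5) with nucleus sampling (`top_p=0.9`) and a repetition penalty keeps generations close to common JavaScript patterns, while raising the temperature produces more varied but less predictable code.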