|
---
license: apache-2.0
base_model: google/gemma-2b-it
tags:
- text-to-lora
- sakana-ai
- peft
- lora
- python
- code-generation
- programming
library_name: peft
---
|
# gemma-2b-python-expert-lora (Text-to-LoRA)

This LoRA adapter specializes `google/gemma-2b-it` for expert-level Python programming. It was created with Sakana AI's Text-to-LoRA, which generates task-specific LoRA adapters from a natural-language description of the target task.
|
## Model Details

- **Base Model**: `google/gemma-2b-it`
- **LoRA Rank**: 16
- **Target Modules**: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
- **Task**: Python Code Generation
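
For reference, these hyperparameters correspond to a PEFT `LoraConfig` along the lines of the sketch below; the `lora_alpha` and `lora_dropout` values are illustrative assumptions, since the card does not state them.

```python
from peft import LoraConfig

# Reconstructed from the details above. alpha/dropout are assumptions,
# not values published with this adapter.
config = LoraConfig(
    r=16,                                        # LoRA rank
    lora_alpha=32,                               # assumption: a common 2*r choice
    lora_dropout=0.05,                           # assumption
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
    task_type="CAUSAL_LM",
)
```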
|
## Usage |
|
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and tokenizer
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")

# Attach the LoRA adapter
model = PeftModel.from_pretrained(model, "rohitnagareddy/gemma-2b-python-expert-lora")

# Generate Python code
prompt = "Write a Python function to implement binary search:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
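
Since `gemma-2b-it` is an instruction-tuned model, wrapping the request in its chat template often yields cleaner completions. A minimal sketch, reusing the `model` and `tokenizer` loaded above:

```python
# Optional: format the request with the model's chat template.
messages = [
    {"role": "user", "content": "Write a Python function to implement binary search."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For deployment without the PEFT wrapper, `model.merge_and_unload()` folds the adapter weights back into the base model.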
|
## Capabilities |
|
- Clean, documented Python code
- Type hints and error handling
- PEP 8 compliance
- Algorithm implementation
- Web development
- Data processing
- Testing and debugging
|
## Citation |
|
```bibtex
@misc{sakana2024texttolora,
  title={Text-to-LoRA},
  author={Sakana AI},
  year={2024},
  url={https://github.com/SakanaAI/text-to-lora}
}
```