---
license: apache-2.0
base_model: google/gemma-2b-it
tags:
- text-to-lora
- sakana-ai
- peft
- lora
- python
- code-generation
- programming
library_name: peft
---

# gemma-2b-python-expert-lora (Text-to-LoRA)

This LoRA adapter specializes the base model for expert-level Python programming. It was created with Sakana AI's Text-to-LoRA technique, which generates LoRA adapters from natural-language task descriptions.

## Model Details

- **Base Model**: `google/gemma-2b-it`
- **LoRA Rank**: 16
- **Target Modules**: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
- **Task**: Python Code Generation
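
To read the numbers above: for each listed projection matrix `W`, LoRA learns two small matrices `B` (d×r) and `A` (r×k) with rank r = 16 and adds a scaled low-rank delta, `W' = W + (alpha / r) * B @ A`. A minimal pure-Python sketch of that update (illustrative only, not the PEFT internals; the tiny matrix sizes here are made up):

```python
def matmul(X, Y):
    """Naive matrix product of two lists-of-lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

def lora_update(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A, the LoRA-adapted weight."""
    scale = alpha / r
    delta = matmul(B, A)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: a 2x2 weight with a rank-1 update (this adapter uses r = 16).
W = [[1.0, 0.0],
     [0.0, 1.0]]
B = [[1.0],
     [2.0]]          # d x r
A = [[3.0, 4.0]]     # r x k
W_adapted = lora_update(W, A, B, alpha=1, r=1)
print(W_adapted)  # [[4.0, 4.0], [6.0, 9.0]]
```

Because only `A` and `B` are trained, the adapter stays small relative to the 2B-parameter base model.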

## Usage

Note that the Gemma weights are gated on the Hugging Face Hub: accept the license on the model page and authenticate (e.g. `huggingface-cli login`) before downloading.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load base model and tokenizer
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, "rohitnagareddy/gemma-2b-python-expert-lora")
model.eval()

# Generate Python code
prompt = "Write a Python function to implement binary search:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Capabilities

- Clean, documented Python code
- Type hints and error handling
- PEP 8 compliance
- Algorithm implementation
- Web development
- Data processing
- Testing and debugging
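
For a concrete sense of the style these bullets describe, the binary-search prompt from the Usage section targets output along these lines (a hand-written illustration of the intended style, not an actual model generation):

```python
from typing import List

def binary_search(items: List[int], target: int) -> int:
    """Return the index of `target` in sorted `items`, or -1 if absent.

    Runs in O(log n) time by repeatedly halving the search interval.
    """
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
print(binary_search([1, 3, 5, 7, 9], 4))  # -1
```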

## Citation

```bibtex
@misc{sakana2024texttolora,
    title={Text-to-LoRA},
    author={Sakana AI},
    year={2024},
    url={https://github.com/SakanaAI/text-to-lora}
}
```