rohitnagareddy committed on
Commit
fa4d4d6
·
verified ·
1 Parent(s): 5da43d0

Initial upload of Python expert LoRA adapter

Browse files
Files changed (3)
  1. README.md +65 -0
  2. adapter_config.json +20 -0
  3. adapter_model.safetensors +3 -0
README.md ADDED
@@ -0,0 +1,65 @@
+ ---
+ license: apache-2.0
+ base_model: google/gemma-2b-it
+ tags:
+ - text-to-lora
+ - sakana-ai
+ - peft
+ - lora
+ - python
+ - code-generation
+ - programming
+ library_name: peft
+ ---
+
+ # gemma-2b-python-expert-lora
+
+ This LoRA adapter specializes the base model for expert-level Python programming. It was created with Sakana AI's Text-to-LoRA.
+
+ ## Model Details
+
+ - **Base Model**: `google/gemma-2b-it`
+ - **LoRA Rank**: 16
+ - **Target Modules**: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
+ - **Task**: Python Code Generation
+
+ ## Usage
+
+ ```python
+ from peft import PeftModel
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load base model and tokenizer
+ model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
+ tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
+
+ # Load LoRA adapter
+ model = PeftModel.from_pretrained(model, "rohitnagareddy/gemma-2b-python-expert-lora")
+
+ # Generate Python code
+ prompt = "Write a Python function to implement binary search:"
+ inputs = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=256)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+
+ ## Capabilities
+
+ - Clean, documented Python code
+ - Type hints and error handling
+ - PEP 8 compliance
+ - Algorithm implementation
+ - Web development
+ - Data processing
+ - Testing and debugging
+
+ ## Citation
+
+ ```bibtex
+ @misc{sakana2024texttolora,
+   title={Text-to-LoRA},
+   author={Sakana AI},
+   year={2024},
+   url={https://github.com/SakanaAI/text-to-lora}
+ }
+ ```
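For intuition about what this adapter does at inference time: LoRA replaces each target weight W with W + (alpha / r) · B · A, where B and A are the low-rank factors stored in this repo (r = 16, alpha = 32 per `adapter_config.json`). A minimal numpy sketch of that update, using small stand-in matrix sizes rather than the real Gemma dimensions:

```python
import numpy as np

# LoRA update: W_adapted = W + (alpha / r) * B @ A
# Stand-in sizes for illustration only (a real q_proj in gemma-2b-it is larger).
d_out, d_in = 64, 64
r, alpha = 16, 32                        # values from this repo's adapter_config.json

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in))       # low-rank factor A
B = rng.standard_normal((d_out, r))      # low-rank factor B

scale = alpha / r                        # = 2.0 for this adapter
W_adapted = W + scale * (B @ A)

# The update B @ A has rank at most r, so the adapter stores only
# r * (d_in + d_out) numbers per module instead of d_in * d_out.
assert np.linalg.matrix_rank(B @ A) <= r
```

Because the factors are small, loading the adapter adds only a few tens of megabytes on top of the frozen base model.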
adapter_config.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "base_model_name_or_path": "google/gemma-2b-it",
+   "peft_type": "LORA",
+   "task_type": "CAUSAL_LM",
+   "r": 16,
+   "lora_alpha": 32,
+   "lora_dropout": 0.1,
+   "target_modules": [
+     "q_proj",
+     "k_proj",
+     "v_proj",
+     "o_proj",
+     "gate_proj",
+     "up_proj",
+     "down_proj"
+   ],
+   "inference_mode": false,
+   "fan_in_fan_out": false,
+   "bias": "none"
+ }
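As a sanity check, the adapter's file size follows directly from this config. Assuming Gemma-2B's published dimensions (hidden size 2048, 8 query / 1 key-value heads of head dim 256, MLP width 16384, 18 layers — these figures come from the base model's config, not from this repo), each target module contributes r · (d_in + d_out) trainable parameters:

```python
# Hypothetical per-module (d_in, d_out) widths for google/gemma-2b-it,
# taken from the base model's published config; treat as an assumption.
dims = {
    "q_proj":    (2048, 2048),   # hidden -> 8 heads * head dim 256
    "k_proj":    (2048, 256),    # hidden -> 1 kv head * head dim 256
    "v_proj":    (2048, 256),
    "o_proj":    (2048, 2048),
    "gate_proj": (2048, 16384),
    "up_proj":   (2048, 16384),
    "down_proj": (16384, 2048),
}
r, n_layers = 16, 18

# LoRA stores an (r x d_in) A matrix and a (d_out x r) B matrix per module.
per_layer = sum(r * (d_in + d_out) for d_in, d_out in dims.values())
total = per_layer * n_layers
print(total)      # -> 19611648 parameters
print(total * 4)  # -> 78446592 bytes in fp32
```

At 4 bytes per fp32 parameter this comes to 78,446,592 bytes, within ~33 KB of the uploaded `adapter_model.safetensors` (78,480,072 bytes); the small remainder is consistent with the safetensors header and metadata.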
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb56b2946941a86287f1f08eb78c3dc7be4e1e6a23c2ca94eb50c204aef328d4
+ size 78480072
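Note that `adapter_model.safetensors` is stored via Git LFS, so the three lines above are a pointer file, not the weights themselves. A minimal sketch of reading such a pointer with only the standard library:

```python
# Parse a Git LFS pointer file (key-value lines, per git-lfs.github.com/spec/v1).
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:eb56b2946941a86287f1f08eb78c3dc7be4e1e6a23c2ca94eb50c204aef328d4
size 78480072
"""

fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
algo, digest = fields["oid"].split(":", 1)

print(fields["version"])    # -> https://git-lfs.github.com/spec/v1
print(algo, len(digest))    # -> sha256 64
print(int(fields["size"]))  # -> 78480072
```

`huggingface_hub.hf_hub_download` (or `git lfs pull`) resolves the pointer to the actual ~78 MB weight file; the sha256 oid can then be verified against the downloaded bytes with `hashlib`.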