Harish2002 committed · Commit 0719e04 · verified · 1 parent: 1663c44

Update README.md

Files changed (1)
  1. README.md +25 -17
README.md CHANGED
@@ -1,43 +1,51 @@
+---
+license: mit
+
 ---
 license: mit
 tags:
-- lora
 - tinyllama
+- lora
 - cli
 - fine-tuning
 - qna
-- huggingface
-model_name: cli-lora-tinyllama
-base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
-datasets:
-- custom-cli-qa
+- transformers
+- peft
 library_name: transformers
-pipeline_tag: text-generation
+datasets:
+- custom
+language: en
+model_type: causal-lm
 ---
 
-# CLI LoRA-TinyLlama
+# 🔧 CLI LoRA-TinyLlama
 
-Fine-tuned version of [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on a custom dataset of CLI-related Q&A using LoRA (Low-Rank Adaptation).
+A fine-tuned version of [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on a custom dataset of command-line Q&A, using **LoRA** (Low-Rank Adaptation). Built for fast, accurate help on common CLI topics.
 
 ---
 
-## Base Model
-- **Model**: `TinyLlama/TinyLlama-1.1B-Chat-v1.0`
-- **Technique**: Fine-tuning using [LoRA](https://arxiv.org/abs/2106.09685)
-- **Libraries**: `transformers`, `peft`, `datasets`, `accelerate`
+## 🧩 Base Model
+
+- Model: `TinyLlama/TinyLlama-1.1B-Chat-v1.0`
+- Fine-Tuning Method: [LoRA](https://arxiv.org/abs/2106.09685)
+- Libraries Used: `transformers`, `peft`, `datasets`, `accelerate`
 
 ---
 
-## Dataset
-- Custom-made with **150+ Q&A** pairs on:
+## 📚 Dataset
+
+- Custom dataset with **150+ Q&A pairs** covering:
   - `git`, `bash`, `grep`, `tar`, `venv`
-- Stored in: `cli_questions.json`
+- Raw file: `cli_questions.json`
 - Tokenized version: `tokenized_dataset/`
 
 ---
 
-## Fine-Tuning Configuration
+## 🛠️ Training Configuration
+
 ```python
+from peft import LoraConfig
+
 base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
 
 lora_config = LoraConfig(
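For context, here is a minimal sketch of how a `LoraConfig` along these lines is typically completed with `peft`. Every hyperparameter value below is an illustrative assumption chosen as a common default for a ~1B-parameter Llama-style model, not a value taken from this commit (the hunk ends before the actual arguments appear):

```python
# Sketch only: all hyperparameter values are assumptions, not the committed config.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = get_peft_model(model, lora_config)  # wraps the base model with adapters
model.print_trainable_parameters()          # only the adapter weights are trainable
```

Once such an adapter is published, inference looks roughly like the following; the adapter id `Harish2002/cli-lora-tinyllama` is a guess assembled from the card's `model_name` field and the committer's username, not a confirmed repo path:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
# Hypothetical adapter repo id; replace with the actual published path.
model = PeftModel.from_pretrained(base, "Harish2002/cli-lora-tinyllama")

prompt = "How do I list all branches in git?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```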