---
inference: false
language: en
license: llama2
model_type: llama
datasets:
- mlabonne/CodeLlama-2-20k
pipeline_tag: text-generation
tags:
- llama-2
---

# CRIA v1.3

💡 [Article](https://walterteng.com/cria) |
💻 [Github](https://github.com/davzoku/cria) |
📔 Colab [1](https://colab.research.google.com/drive/1rYTs3qWJerrYwihf1j0f00cnzzcpAfYe), [2](https://colab.research.google.com/drive/1Wjs2I1VHjs6zT_GE42iEXsLtYh6VqiJU)

## What is CRIA?

> krē-ə plural crias. : a baby llama, alpaca, vicuña, or guanaco.

<p align="center">
  <img src="assets/icon-512x512.png" width="300" height="300" alt="Cria Logo"> <br>
  <i>or, as ChatGPT suggests, <b>"Crafting a Rapid prototype of an Intelligent llm App using open source resources"</b>.</i>
</p>

This model is a `llama-2-7b-chat-hf` model fine-tuned using QLoRA (4-bit precision) on the [mlabonne/CodeLlama-2-20k](https://huggingface.co/datasets/mlabonne/CodeLlama-2-20k) dataset, and it powers [CRIA chat](https://chat.walterteng.com).
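For readers unfamiliar with QLoRA, a setup like the one described above is usually expressed as a pair of configurations: a 4-bit quantization config for the frozen base model and a LoRA config for the trainable adapter. The sketch below is illustrative only; the hyperparameter values are assumptions, not the exact settings used to train CRIA.

```python
# Illustrative QLoRA configuration (values are assumptions, not the exact
# hyperparameters used to train CRIA).
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize frozen base weights to 4-bit
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16
)

lora_config = LoraConfig(
    r=64,              # LoRA rank (illustrative)
    lora_alpha=16,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
)
# These configs are then passed to AutoModelForCausalLM.from_pretrained(...)
# and a supervised fine-tuning trainer to learn the adapter on the dataset above.
```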
## 📦 Model Release

CRIA v1.3 comes in several variants:

- [davzoku/cria-llama2-7b-v1.3](https://huggingface.co/davzoku/cria-llama2-7b-v1.3): merged model
- [davzoku/cria-llama2-7b-v1.3-GGML](https://huggingface.co/davzoku/cria-llama2-7b-v1.3-GGML): quantized merged model
- [davzoku/cria-llama2-7b-v1.3_peft](https://huggingface.co/davzoku/cria-llama2-7b-v1.3_peft): PEFT adapter

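If you prefer the PEFT adapter over the merged model, it can typically be loaded directly with the `peft` library, which fetches the base model declared in the adapter config. This is a minimal loading sketch, not part of the original card; the merged model does not need this step.

```python
# Hypothetical loading sketch for the *_peft adapter variant (requires `peft`).
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "davzoku/cria-llama2-7b-v1.3_peft"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(adapter_id)
```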
## 🔧 Training

The model was trained in a Google Colab notebook on a T4 GPU with high RAM.

## 💻 Usage

```python
# pip install transformers accelerate

import torch
import transformers
from transformers import AutoTokenizer

model = "davzoku/cria-llama2-7b-v1.3"
prompt = "What is a cria?"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# The instruction is wrapped in the Llama-2 chat format: [INST] ... [/INST]
sequences = pipeline(
    f"<s>[INST] {prompt} [/INST]",
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
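The `[INST] ... [/INST]` wrapper used above is the standard Llama-2 chat format. For multi-turn conversations, the prompt can be assembled with a small helper like the plain-Python sketch below; `build_llama2_prompt` is our own illustrative name, not part of this model's API.

```python
def build_llama2_prompt(turns, system=None):
    """Assemble a multi-turn Llama-2 chat prompt.

    `turns` is a list of (user, assistant) pairs; the last pair may use
    None as the assistant reply to request a new completion.
    """
    first_user, _ = turns[0]
    if system is not None:
        # The optional system prompt is folded into the first user turn.
        first_user = f"<<SYS>>\n{system}\n<</SYS>>\n\n{first_user}"
    prompt = ""
    for i, (user, assistant) in enumerate(turns):
        u = first_user if i == 0 else user
        prompt += f"<s>[INST] {u} [/INST]"
        if assistant is not None:
            prompt += f" {assistant} </s>"
    return prompt


# Single-turn prompt, as in the pipeline example above:
print(build_llama2_prompt([("What is a cria?", None)]))
# → <s>[INST] What is a cria? [/INST]
```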

## References

We'd like to thank:

- [mlabonne](https://huggingface.co/mlabonne) for his article and resources on instruction tuning
- [TheBloke](https://huggingface.co/TheBloke) for his LLM quantization scripts