---
license: llama3
base_model: turboderp/llama3-turbcat-instruct-8b
pipeline_tag: text-generation
tags:
- llama
- conversational
---
# QuantFactory/llama3-turbcat-instruct-8b-GGUF
This is a quantized version of [turboderp/llama3-turbcat-instruct-8b](https://huggingface.co/turboderp/llama3-turbcat-instruct-8b) created using llama.cpp.
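Below is a minimal usage sketch with the llama-cpp-python bindings; the GGUF filename is a placeholder for whichever quant you download from this repo, and the generation settings are illustrative rather than recommended values.
```python
from llama_cpp import Llama

# Load a local GGUF file from this repo (placeholder filename).
llm = Llama(
    model_path="llama3-turbcat-instruct-8b.Q4_K_M.gguf",
    n_ctx=8192,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

# The 8b model uses the llama3 chat format (see "Prompt format for 8b" below);
# create_chat_completion applies the chat template embedded in the GGUF metadata.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "CatGPT really likes its new cat ears and ends every message with Nyan_"},
        {"role": "user", "content": "CatA: pats CatGPT cat ears"},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```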

# Turbcat-8b Model Description
![image/png](3.png)
![image/png](4.png)
![image/png](5.png)
![image/png](6.png)
![image/png](7.png)
![image/png](8.png)
# Release notes
This is a direct upgrade over cat 70B, with 2x the dataset size (2GB -> 5GB) and added Chinese support with quality on par with the original English dataset.
The medical CoT portion of the dataset was sponsored by steelskull, and the action-packed character-play portion was donated by Gryphe (the aesir dataset). Note that the 8b model is based on llama3 and has limited Chinese support due to the base model choice. The chat format for the 8b is llama3. The 72b has more comprehensive Chinese support and its format will be chatml.

# Data Generation
Aside from the additions specified above, the data generation process is largely the same, except for the added Chinese Ph.D. entrance exam, Traditional Chinese, and Chinese storytelling data.

## Special Highlights
* 20 postdocs (10 Chinese-speaking and 10 English-speaking doctors specialized in computational biology, biomed, biophysics and biochemistry) participated in the annotation process.
* GRE and MCAT/Kaoyan questions were manually answered by the participants using strict CoT, and BERT judges producing embeddings were trained on the provided annotations. For an example of BERT embedding visualization and scoring, please refer to https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct
* Initial support for roleplay as API usage. When roleplaying as an API or function, the model does not produce irrelevant content that is not specified by the system prompt.

# Task coverage

## Chinese tasks on par with English data
![image/png](1.png)
For the Chinese portion of the dataset, we strictly kept its distribution and quality comparable to the English counterpart, as visualized by the close distance of the doublets. The overall QC is visualized by PCA over the BERT embeddings.

## Individual tasks quality-checked by doctors
For each cluster, we QC using BERT embeddings projected onto a UMAP:
![image/png](2.png)
The outliers have been manually checked by doctors.
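As a rough illustration of this QC step (a sketch, not the actual pipeline used for the dataset), the code below embeds a cluster of annotated answers with a BERT-style encoder and projects them with UMAP so that outliers can be flagged for manual review; the encoder name and placeholder texts are assumptions.
```python
from sentence_transformers import SentenceTransformer
import umap
import matplotlib.pyplot as plt

# Placeholder corpus: in the real pipeline these would be the annotated CoT
# answers belonging to one task cluster.
texts = [f"Chain-of-thought answer to practice question {i}" for i in range(200)]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder BERT-style encoder
embeddings = encoder.encode(texts)                 # shape: (n_texts, embedding_dim)

# 2D projection of the embeddings; points far from the main cluster are the
# candidates for manual review.
coords = umap.UMAP(n_components=2, random_state=42).fit_transform(embeddings)

plt.scatter(coords[:, 0], coords[:, 1], s=8)
plt.title("BERT embeddings of one task cluster (UMAP)")
plt.show()
```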

# Third-party datasets
Thanks to the following people for their tremendous support with dataset generation:
* steelskull for the medical CoT dataset generated with GPT-4o
* Gryphe for the wonderful action-packed dataset
* Turbca for being turbca

# Prompt format for 8b:
**llama3**
Example raw prompt:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

CatGPT really likes its new cat ears and ends every message with Nyan_<|eot_id|><|start_header_id|>user<|end_header_id|>

CatA: pats CatGPT cat ears<|eot_id|><|start_header_id|>assistant<|end_header_id|>

CatGPT:
```
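If you assemble the raw prompt yourself instead of relying on a chat template, a small helper like the sketch below reproduces the llama3 format shown above; it is an illustration, not a helper shipped with the model.
```python
def build_llama3_prompt(messages):
    # Concatenate role headers and contents in the llama3 format shown above.
    prompt = "<|begin_of_text|>"
    for msg in messages:
        prompt += f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n{msg['content']}<|eot_id|>"
    # Open the assistant header so the model generates the reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

# Reproduces the example raw prompt above, including the "CatGPT:" prefill.
prompt = build_llama3_prompt([
    {"role": "system", "content": "CatGPT really likes its new cat ears and ends every message with Nyan_"},
    {"role": "user", "content": "CatA: pats CatGPT cat ears"},
]) + "CatGPT:"
```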

# Prompt format for 72b:
**chatml**
Example raw prompt:
```
<|im_start|>system
CatGPT really likes its new cat ears and ends every message with Nyan_<|im_end|>
<|im_start|>user
CatA: pats CatGPT cat ears<|im_end|>
<|im_start|>assistant
CatGPT:
```
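Once the 72b model is released, the same messages can be rendered into chatml without hand-building the string by using a tokenizer chat template; the sketch below assumes the released tokenizer ships a chatml template, and the model path is a placeholder.
```python
from transformers import AutoTokenizer

# Placeholder path; substitute the actual 72b repo id or local tokenizer directory.
tokenizer = AutoTokenizer.from_pretrained("path/to/turbcat-72b")

messages = [
    {"role": "system", "content": "CatGPT really likes its new cat ears and ends every message with Nyan_"},
    {"role": "user", "content": "CatA: pats CatGPT cat ears"},
]

# Renders the chatml-formatted prompt string, ending with an open assistant turn.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```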