prince-canuma committed
Commit 11aaebb · verified · 1 Parent(s): 96477a0

Update README.md

Files changed (1):
  1. README.md +56 -80
README.md CHANGED
@@ -19,13 +19,8 @@ datasets:
  <img src="Damysus.png" width="500" alt="Damysus - the fastest giant"/>
 
  <!-- Provide a quick summary of what the model is/does. -->
- This model is an instruction-tuned version of Phi-2, a Transformer model with 2.7 billion parameters from Microsoft.
- The model has undergone further training to better follow specific user instructions, enhancing its ability to perform tasks as directed and improving its interactions with users.
- This additional training helps the model understand context better, generate more accurate and relevant responses, and adapt to a wide range of language-based tasks such as:
- - Questions and answers,
- - Data extraction,
- - Structured outputs (i.e., JSON outputs),
- - Providing explanations.
+ This model is a GGUF version of [Damysus-2.7B-Chat](https://huggingface.co/prince-canuma/Damysus-2.7B-Chat).
+
 
  ## Model Description
 
@@ -56,99 +51,80 @@ This model inherits some of the base model's limitations, such as:
  - Limited Scope for Code: The majority of Phi-2's training data is Python code that uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages, or scripts in other languages, we strongly recommend users manually verify all API uses.
  - Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or other languages may pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
 
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- ```python
- from transformers import pipeline, Conversation
-
- chatbot = pipeline("conversational", model="prince-canuma/Damysus-2.7B-Chat")
- conversation = Conversation("I'm looking for a movie - what's your favourite one?")
- output = chatbot(conversation)
-
- print(output)
- ```
-
- Or you can instantiate the model and tokenizer directly:
- ```python
- from transformers import AutoTokenizer, AutoModelForCausalLM
-
- tokenizer = AutoTokenizer.from_pretrained("prince-canuma/Damysus-2.7B-Chat")
- model = AutoModelForCausalLM.from_pretrained("prince-canuma/Damysus-2.7B-Chat").to("cuda")
-
- # Render the conversation with the model's chat template and move the tensors to the GPU.
- inputs = tokenizer.apply_chat_template(
-     [
-         {"content": "You are a helpful AI assistant", "role": "system"},
-         {"content": "I'm looking for a movie - what's your favourite one?", "role": "user"},
-     ], add_generation_prompt=True, return_tensors="pt",
- ).to("cuda")
-
- outputs = model.generate(inputs, do_sample=False, max_new_tokens=256)
-
- # Decode only the newly generated tokens, skipping the prompt.
- input_length = inputs.shape[1]
- print(tokenizer.batch_decode(outputs[:, input_length:], skip_special_tokens=True)[0])
- ```
-
- Output:
- ```shell
- My favorite movie is "The Shawshank Redemption."
-
- It's a powerful and inspiring story about hope, friendship, and redemption.
- The performances by Tim Robbins and Morgan Freeman are exceptional,
- and the film's themes and messages are timeless.
-
- I highly recommend it to anyone who enjoys a well-crafted and emotionally engaging story.
- ```
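
If you want to inspect the exact prompt that `apply_chat_template` builds, you can render it to a string instead of token IDs. This is a minimal sketch using the standard `transformers` API; judging from the `llama.cpp` commands later in this commit, the rendered prompt should use ChatML-style `<|im_start|>`/`<|im_end|>` markers:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prince-canuma/Damysus-2.7B-Chat")

# tokenize=False returns the rendered prompt string rather than token IDs.
prompt = tokenizer.apply_chat_template(
    [
        {"content": "You are a helpful AI assistant", "role": "system"},
        {"content": "I'm looking for a movie - what's your favourite one?", "role": "user"},
    ],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
```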
- ### Structured Output
- ```python
- from transformers import AutoTokenizer, AutoModelForCausalLM
-
- tokenizer = AutoTokenizer.from_pretrained("prince-canuma/Damysus-2.7B-Chat")
- model = AutoModelForCausalLM.from_pretrained("prince-canuma/Damysus-2.7B-Chat").to("cuda")
-
- inputs = tokenizer.apply_chat_template(
-     [
-         {"content": "You are a Robot that ONLY outputs JSON. Use this structure: {'entities': [{'type':..., 'name':...}]}.", "role": "system"},
-         {"content": """Extract the entities of type 'technology' and 'file_type' in JSON format from the following passage: AI is a transformative
-         force in document processing employing technologies such as Machine Learning (ML), Natural Language Processing (NLP) and
-         Optical Character Recognition (OCR) to understand, interpret, and summarize text. These technologies enhance accuracy,
-         increase efficiency, and allow you and your company to process high volumes of data in a short amount of time.
-         For instance, you can easily extract key points and summarize a large PDF document (i.e., 500 pages) in just a few seconds.""",
-         "role": "user"},
-     ], add_generation_prompt=True, return_tensors="pt",
- ).to("cuda")
-
- outputs = model.generate(inputs, do_sample=False, max_new_tokens=256)
-
- # Decode only the newly generated tokens.
- input_length = inputs.shape[1]
- print(tokenizer.batch_decode(outputs[:, input_length:], skip_special_tokens=True)[0])
- ```
-
- Output:
- ```json
- {
-   "entities": [
-     {
-       "type": "technology",
-       "name": "Machine Learning (ML)"
-     },
-     {
-       "type": "technology",
-       "name": "Natural Language Processing (NLP)"
-     },
-     {
-       "type": "technology",
-       "name": "Optical Character Recognition (OCR)"
-     },
-     {
-       "type": "file_type",
-       "name": "PDF"
-     }
-   ]
- }
- ```
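
Because the system prompt instructs the model to emit JSON only, it is worth validating the decoded text before using it downstream. A minimal sketch that reuses `tokenizer`, `outputs`, and `input_length` from the example above; small models occasionally emit trailing commas or stray text, so the parse is guarded:

```python
import json

raw = tokenizer.batch_decode(outputs[:, input_length:], skip_special_tokens=True)[0]

try:
    data = json.loads(raw)
    for entity in data["entities"]:
        print(entity["type"], "->", entity["name"])
except (json.JSONDecodeError, KeyError) as err:
    # Fall back to inspecting the raw string if the output is malformed.
    print(f"Could not parse model output as JSON: {err}\n{raw}")
```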
 
+ ### On the command line, including multiple files at once
+
+ I recommend using the `huggingface-hub` Python library:
+
+ ```shell
+ pip3 install huggingface-hub
+ ```
+
+ Then you can download any individual model file to the current directory, at high speed, with a command like this:
+
+ ```shell
+ huggingface-cli download prince-canuma/Damysus-2.7B-Chat-GGUF Damysus-2.7B-Chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ ```
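
If you prefer to stay in Python, the same file can be fetched with the `hf_hub_download` helper from the `huggingface_hub` library installed above. A minimal sketch equivalent to the CLI command:

```python
from huggingface_hub import hf_hub_download

# Download the Q4_K_M quant into the current directory.
path = hf_hub_download(
    repo_id="prince-canuma/Damysus-2.7B-Chat-GGUF",
    filename="Damysus-2.7B-Chat.Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```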
+
+ <details>
+ <summary>More advanced huggingface-cli download usage (click to read)</summary>
+
+ You can also download multiple files at once with a pattern:
+
+ ```shell
+ huggingface-cli download prince-canuma/Damysus-2.7B-Chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
+
+ For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
+
+ To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
+
+ ```shell
+ pip3 install hf_transfer
+ ```
+
+ And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
+
+ ```shell
+ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download prince-canuma/Damysus-2.7B-Chat-GGUF Damysus-2.7B-Chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ ```
+
+ Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
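
PowerShell users can set the variable for the current session instead; an equivalent sketch of the same download:

```shell
$env:HF_HUB_ENABLE_HF_TRANSFER = "1"
huggingface-cli download prince-canuma/Damysus-2.7B-Chat-GGUF Damysus-2.7B-Chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```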
+ </details>
+ <!-- README_GGUF.md-how-to-download end -->
+
+ <!-- README_GGUF.md-how-to-run start -->
+ ## Example `llama.cpp` command
+
+ Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
+
+ ```shell
+ ./main -m ../Damysus-2.7B-Chat-GGUF/Damysus-2.7B-Chat.Q4_K_M.gguf \
+   --color -c 2048 --temp 0 \
+   --prompt "<|im_start|>system\nYou are a helpful assistant. Please keep your answers short.<|im_end|>\n<|im_start|>user\nCount to ten<|im_end|>\n" \
+   -n 256 --in-suffix "<|im_start|>assistant\n" -r "User:" -e --verbose-prompt
+ ```
+
+ or
+
+ ```shell
+ ./main -m ../Damysus-2.7B-Chat-GGUF/Damysus-2.7B-Chat.Q4_K_M.gguf \
+   --color -c 2048 --temp 0 \
+   -p "You are a helpful assistant. Please keep your answers short." -n 256 --in-suffix "<|im_start|>assistant\n" \
+   -r "User:" -e --verbose-prompt -cml
+ ```
+
+ - `-ngl N`: offload N layers to the GPU. Remove it if you don't have GPU acceleration.
+ - `-c 2048`: set the desired sequence length. For extended-sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
+ - Add the `-i -ins` or `-cml` argument for interactive chat-style conversation, as in the second command above (see the prompt layout sketched after this list).
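
For reference, since `-e` enables escape processing, the `--prompt` string in the first command expands to the ChatML layout below, and `--in-suffix` supplies the `<|im_start|>assistant` header that cues the model to respond:

```shell
<|im_start|>system
You are a helpful assistant. Please keep your answers short.<|im_end|>
<|im_start|>user
Count to ten<|im_end|>
<|im_start|>assistant
```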
+
+ For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) or run:
+
+ ```shell
+ ./main --help
+ ```
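
If you would rather call the model from Python, the `llama-cpp-python` bindings can load the same GGUF file. This is a minimal sketch under stated assumptions (package installed via `pip install llama-cpp-python`, quantized file in the current directory), not a command taken from this repository:

```python
from llama_cpp import Llama

# Load the quantized model; n_ctx mirrors the -c 2048 setting used above.
llm = Llama(model_path="Damysus-2.7B-Chat.Q4_K_M.gguf", n_ctx=2048)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Please keep your answers short."},
        {"role": "user", "content": "Count to ten"},
    ],
    max_tokens=256,
    temperature=0,
)
print(response["choices"][0]["message"]["content"])
```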
+
  ## Training Details
 
  ### Training Data