prince-canuma committed on
Commit
d805640
1 Parent(s): a2996e5

Update README.md

Files changed (1)
  1. README.md +12 -45
README.md CHANGED
@@ -18,7 +18,6 @@ datasets:
 # Model Summary
 <img src="Damysus.png" width="500" alt="Damysus - the fastest giant"/>
 
-
 <!-- Provide a quick summary of what the model is/does. -->
 This model is an instruction-tuned version of Phi-2, a Transformer model with 2.7 billion parameters from Microsoft.
 The model has undergone further training to better follow specific user instructions, enhancing its ability to perform tasks as directed and improving its interaction with users.
@@ -39,36 +38,26 @@ This is the model card of a 🤗 transformers model that has been pushed on the
 - **Finetuned from model:** microsoft/phi-2
 
 
-
 ## Uses
 
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
 
-
 ### Direct Use
 
 <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
 
-[More Information Needed]
-
-
-### Out-of-Scope Use
-
-<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
-[More Information Needed]
-
-## Bias, Risks, and Limitations
 
-<!-- This section is meant to convey both technical and sociotechnical limitations. -->
 
-[More Information Needed]
-
-### Recommendations
-
-<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
-Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
 
 ## How to Get Started with the Model
 
 
@@ -83,21 +72,6 @@ output = chatbot(conversation)
 
 print(output)
 ```
-Output:
-```shell
-Conversation id: 5dad71bd-a24a-425a-80aa-95f56924f8c7
-
-user: I'm looking for a movie - what's your favourite one?
-
-assistant:
-My favorite movie is "The Shawshank Redemption."
-
-It's a powerful and inspiring story about hope, friendship, and redemption.
-The performances by Tim Robbins and Morgan Freeman are exceptional,
-and the film's themes and messages are timeless.
-
-I highly recommend it to anyone who enjoys a well-crafted and emotionally engaging story.
-```
 
 Or you can instantiate the model and tokenizer directly:
 ```python
@@ -108,7 +82,7 @@ model = AutoModelForCausalLM.from_pretrained("prince-canuma/Damysus-2.7B-Chat")
 
 inputs = tokenizer.apply_chat_template(
     [
-      {"content":"","role":"system"},
       {"content":"""I'm looking for a movie - what's your favourite one?""","role":"user"},
     ], add_generation_prompt=True, return_tensors="pt",
 ).to("cuda")
@@ -118,6 +92,7 @@ outputs = model.generate(inputs, do_sample=False, max_new_tokens=256)
 input_length = inputs.shape[1]
 print(tokenizer.batch_decode(outputs[:, input_length:], skip_special_tokens=True)[0])
 ```
 
 Output:
 ```shell
 My favorite movie is "The Shawshank Redemption."
@@ -129,8 +104,6 @@ and the film's themes and messages are timeless.
 I highly recommend it to anyone who enjoys a well-crafted and emotionally engaging story.
 ```
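The final `print` in the snippet above decodes only the newly generated tokens by slicing off the prompt. A minimal sketch of that slicing step, using plain Python lists with made-up token ids standing in for the real tensors:

```python
# Why the snippet slices with outputs[:, input_length:] before decoding:
# generate() returns the prompt tokens followed by the new tokens, so decoding
# the full row would echo the prompt back. Token ids below are hypothetical.
prompt_ids = [50256, 314, 1101, 2045]        # stands in for the templated prompt
generated = prompt_ids + [1820, 4004, 3807]  # generate() output: prompt + new tokens

input_length = len(prompt_ids)               # inputs.shape[1] in the real code
new_tokens = generated[input_length:]        # outputs[:, input_length:]

print(new_tokens)  # only the newly generated ids remain
```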
 
-
-
 ## Training Details
 
 ### Training Data
@@ -184,12 +157,6 @@ I used [SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) dataset, a
 
 [TODO]
 
-## Limitations of Phi-2
-This model inherits some of the base model's limitations, such as:
-- Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
-- Limited Scope for code: Majority of Phi-2 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
-- Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
-
 ## Technical Specifications
 
 ### Compute Infrastructure
 
 # Model Summary
 <img src="Damysus.png" width="500" alt="Damysus - the fastest giant"/>
 
 <!-- Provide a quick summary of what the model is/does. -->
 This model is an instruction-tuned version of Phi-2, a Transformer model with 2.7 billion parameters from Microsoft.
 The model has undergone further training to better follow specific user instructions, enhancing its ability to perform tasks as directed and improving its interaction with users.
 
 - **Finetuned from model:** microsoft/phi-2
 
 
 ## Uses
 
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
 
 ### Direct Use
 
 <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
 
+ You can use this model to build local/cloud RAG applications.
+ It can serve as the:
+ - answer synthesizer,
+ - summarizer,
+ - or query rewriter model.
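Each of these roles amounts to swapping the system prompt while keeping the chat-message structure that `apply_chat_template` expects. A minimal sketch, where the helper name and prompt wordings are illustrative assumptions rather than anything from this model card:

```python
# Hypothetical helper: build an apply_chat_template-ready message list for one
# of the RAG roles the card lists. System-prompt wording is illustrative only.
ROLE_PROMPTS = {
    "synthesizer": "Answer the user's question using only the provided context.",
    "summarizer": "Summarize the provided context concisely.",
    "rewriter": "Rewrite the user's question as a standalone search query.",
}

def build_messages(role, user_text, context=""):
    """Return [system, user] messages for tokenizer.apply_chat_template."""
    content = f"Context:\n{context}\n\nQuestion: {user_text}" if context else user_text
    return [
        {"role": "system", "content": ROLE_PROMPTS[role]},
        {"role": "user", "content": content},
    ]

messages = build_messages("rewriter", "What about its sequel?")
print(messages[0]["content"])
```

The returned list can be passed straight to the tokenizer call shown in the usage snippet on this card.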
 
+ ### Limitations
+ 
+ This model inherits some of the base model's limitations, such as:
+ - Generate inaccurate code and facts: the model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
+ - Limited scope for code: the majority of Phi-2's training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
+ - Language limitations: the model is primarily designed to understand standard English. Informal English, slang, or other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
 
 ## How to Get Started with the Model
 
 
 print(output)
 ```
 
 Or you can instantiate the model and tokenizer directly:
 ```python
 
 
 inputs = tokenizer.apply_chat_template(
     [
+      {"content":"You are a helpful AI assistant","role":"system"},
       {"content":"""I'm looking for a movie - what's your favourite one?""","role":"user"},
     ], add_generation_prompt=True, return_tensors="pt",
 ).to("cuda")
 
 input_length = inputs.shape[1]
 print(tokenizer.batch_decode(outputs[:, input_length:], skip_special_tokens=True)[0])
 ```
+
 Output:
 ```shell
 My favorite movie is "The Shawshank Redemption."
 
 I highly recommend it to anyone who enjoys a well-crafted and emotionally engaging story.
 ```
 
 ## Training Details
 
 ### Training Data
 
 
 [TODO]
 
 ## Technical Specifications
 
 ### Compute Infrastructure