---
language:
- en
license: mit
library_name: transformers
tags:
- nlp
- phi
- phi-2
- instruct
base_model:
- microsoft/phi-2
datasets:
- Open-Orca/SlimOrca
- prince-canuma/TinyOrca
---

# Model Summary
<img src="Damysus.png" width="500" alt="Damysus - the fastest giant"/>

<!-- Provide a quick summary of what the model is/does. -->
This model is an instruction-tuned version of Phi-2, a 2.7-billion-parameter Transformer model from Microsoft. 
The model has undergone further training to better follow specific user instructions, enhancing its ability to perform tasks as directed and improving its interaction with users. 
This additional training helps the model understand context better, generate more accurate and relevant responses, and adapt to a wide range of language-based tasks, such as:
- Question answering,
- Data extraction,
- Structured outputs (e.g., JSON, as sketched below),
- And providing explanations.
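
As an illustration of the structured-output use case, here is a minimal sketch; the prompt, schema, and expected reply are assumptions for illustration, not part of the training setup:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prince-canuma/Damysus-2.7B-Chat")
model = AutoModelForCausalLM.from_pretrained("prince-canuma/Damysus-2.7B-Chat")

messages = [
    {"role": "system", "content": "You are a helpful AI assistant. Reply only with valid JSON."},
    {"role": "user", "content": 'Extract the name and age from: "Alice is 30 years old." '
                                'Use the keys "name" and "age".'},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, do_sample=False, max_new_tokens=64)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0, inputs.shape[1]:], skip_special_tokens=True))
# A well-behaved reply would look like: {"name": "Alice", "age": 30}
```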

## Model Description

<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card was automatically generated.

- **Developed by:** [Prince Canuma](https://huggingface.co/prince-canuma)
- **Model type:** Transformer
- **License:** MIT
- **Finetuned from model:** microsoft/phi-2


## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

You can use this model to build local or cloud retrieval-augmented generation (RAG) applications.
It can serve as the:
- Answer synthesizer,
- Summarizer,
- Or query rewriter model (see the sketch below).
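
As an example of the query-rewriter role, here is a minimal sketch; the system prompt and helper name are illustrative assumptions, not a prescribed template:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prince-canuma/Damysus-2.7B-Chat")
model = AutoModelForCausalLM.from_pretrained("prince-canuma/Damysus-2.7B-Chat")

def rewrite_query(query: str) -> str:
    """Rewrite a conversational question into a standalone retrieval query."""
    messages = [
        {"role": "system", "content": "Rewrite the user's question as a short, "
                                      "self-contained search query. Reply with the query only."},
        {"role": "user", "content": query},
    ]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, do_sample=False, max_new_tokens=64)
    return tokenizer.decode(outputs[0, inputs.shape[1]:], skip_special_tokens=True).strip()

# The rewritten query can then be sent to your retriever (vector store, search API, ...).
print(rewrite_query("What did the director of Inception make before that?"))
```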

### Limitations 

This model inherits some of the base model's limitations, such as:
- Inaccurate code and facts: the model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
- Limited scope for code: the majority of Phi-2's code training data is Python and uses common packages such as `typing`, `math`, `random`, `collections`, `datetime`, and `itertools`. If the model generates Python scripts that use other packages, or scripts in other languages, we strongly recommend that users manually verify all API uses.
- Language limitations: the model is primarily designed to understand standard English. Informal English, slang, or other languages may pose challenges to its comprehension, leading to potential misinterpretations or errors in its responses.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import pipeline

chatbot = pipeline("text-generation", model="prince-canuma/Damysus-2.7B-Chat")

# Recent versions of transformers accept a list of chat messages directly and
# apply the chat template for you (the older "conversational" pipeline and
# Conversation class have been removed).
messages = [{"role": "user", "content": "I'm looking for a movie - what's your favourite one?"}]
output = chatbot(messages, max_new_tokens=256)

# The pipeline returns the whole conversation; the last message is the model's reply.
print(output[0]["generated_text"][-1]["content"])
```

Or you can instantiate the model and tokenizer directly:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prince-canuma/Damysus-2.7B-Chat")
# Put the model on the GPU so it matches the device of the inputs below.
model = AutoModelForCausalLM.from_pretrained("prince-canuma/Damysus-2.7B-Chat").to("cuda")

inputs = tokenizer.apply_chat_template(
    [
        {"role": "system", "content": "You are a helpful AI assistant"},
        {"role": "user", "content": "I'm looking for a movie - what's your favourite one?"},
    ], add_generation_prompt=True, return_tensors="pt",
).to("cuda")

outputs = model.generate(inputs, do_sample=False, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
input_length = inputs.shape[1]
print(tokenizer.batch_decode(outputs[:, input_length:], skip_special_tokens=True)[0])
```

Output:
```shell
My favorite movie is "The Shawshank Redemption."

It's a powerful and inspiring story about hope, friendship, and redemption.
The performances by Tim Robbins and Morgan Freeman are exceptional,
and the film's themes and messages are timeless.

I highly recommend it to anyone who enjoys a well-crafted and emotionally engaging story.
```

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
I used the [SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) dataset, a curated subset of the OpenOrca data. This release provides an efficient means of reaching performance on par with using larger slices of [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca), while only including ~500k GPT-4 completions.

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
[TODO]

#### Preprocessing

1. Convert the dataset to ChatML format.
2. Remove all samples with more than 2048 tokens (Phi-2's context size).
3. Mask the instruction turns (system and user) at training time, so that loss is only computed on the assistant's responses (see the sketch below).
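
A minimal sketch of steps 1 and 3, assuming SlimOrca's ShareGPT-style `conversations` field and the standard ChatML special tokens; the function names are illustrative and the actual preprocessing code may differ:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# The fine-tuned tokenizer already knows the ChatML tokens <|im_start|>/<|im_end|>.
tokenizer = AutoTokenizer.from_pretrained("prince-canuma/Damysus-2.7B-Chat")
ROLE_MAP = {"system": "system", "human": "user", "gpt": "assistant"}  # SlimOrca -> ChatML roles

def preprocess(sample):
    """Steps 1 and 3: render each turn as ChatML and mask non-assistant tokens."""
    input_ids, labels = [], []
    for turn in sample["conversations"]:
        role = ROLE_MAP[turn["from"]]
        ids = tokenizer(f"<|im_start|>{role}\n{turn['value']}<|im_end|>\n")["input_ids"]
        input_ids += ids
        # Labels of -100 are ignored by the loss, so only assistant turns are learned.
        labels += ids if role == "assistant" else [-100] * len(ids)
    return {"input_ids": input_ids, "labels": labels}

dataset = load_dataset("Open-Orca/SlimOrca", split="train")
dataset = dataset.map(preprocess, remove_columns=dataset.column_names)
# Step 2: drop samples longer than Phi-2's 2048-token context.
dataset = dataset.filter(lambda s: len(s["input_ids"]) <= 2048)
```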



#### Training Hyperparameters

  - **Training regime:** bf16 mixed precision <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
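
For reference, here is a hedged sketch of what a bf16 mixed-precision SFT run with TRL could look like; every hyperparameter value below is an illustrative assumption, since the actual values are not documented in this card:

```python
from transformers import AutoModelForCausalLM, TrainingArguments
from trl import SFTTrainer

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")

args = TrainingArguments(
    output_dir="damysus-2.7b-chat",
    bf16=True,                      # bf16 mixed precision, as stated above
    per_device_train_batch_size=4,  # assumption
    gradient_accumulation_steps=4,  # assumption
    learning_rate=2e-5,             # assumption
    num_train_epochs=1,             # assumption
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,  # the pre-tokenized SlimOrca split from the sketch above
)
trainer.train()
```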


## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[TODO]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[TODO]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[TODO]

### Results

[TODO]

## Technical Specifications

### Compute Infrastructure

- Modal Labs

#### Hardware

- OS: Linux
- GPU: A10G

#### Libraries

- TRL
- Transformers
- PEFT
- Datasets
- Accelerate
- torch
- Wandb
- Bitsandbytes
- Plotly

## Citation 

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**
```bibtex
@misc{Damysus-2.7B-Chat,
      title={Damysus-2.7B-Chat},
      author={Prince Canuma},
      year={2024},
}
```