---
language:
- en
license: mit
library_name: transformers
tags:
- nlp
- phi
- phi-2
- instruct
base_model:
- microsoft/phi-2
datasets:
- Open-Orca/SlimOrca
- prince-canuma/TinyOrca
---
# Model Summary
<img src="Damysus.png" width="500" alt="Damysus - the fastest giant"/>
<!-- Provide a quick summary of what the model is/does. -->
This model is an instruction-tuned version of Phi-2, a Transformer model with 2.7 billion parameters from Microsoft.
The model has undergone further training to better follow specific user instructions, enhancing its ability to perform tasks as directed and improving its interaction with users.
This additional training helps the model understand context better, generate more accurate and relevant responses, and adapt to a wide range of language-based tasks, such as:
- Question answering,
- Data extraction,
- Structured outputs (e.g., JSON),
- And providing explanations.
## Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card was automatically generated.
- **Developed by:** [Prince Canuma](https://huggingface.co/prince-canuma)
- **Model type:** Transformer
- **License:** MIT
- **Finetuned from model:** microsoft/phi-2
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
You can use this model to build local or cloud RAG (Retrieval-Augmented Generation) applications.
It can serve as the:
- Answer synthesizer,
- Summarizer,
- Or query rewriter model (see the sketch below).
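As an example of the query-rewriter role, here is a minimal sketch; the system prompt and the `rewrite_query` helper below are illustrative assumptions, not part of the model's API:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prince-canuma/Damysus-2.7B-Chat")
model = AutoModelForCausalLM.from_pretrained("prince-canuma/Damysus-2.7B-Chat").to("cuda")

def rewrite_query(query: str) -> str:
    # Illustrative helper: ask the model to rephrase a user question into a concise search query.
    inputs = tokenizer.apply_chat_template(
        [
            {"role": "system", "content": "You rewrite user questions into short, self-contained search queries."},
            {"role": "user", "content": query},
        ],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to("cuda")
    outputs = model.generate(inputs, do_sample=False, max_new_tokens=64)
    # Decode only the newly generated tokens.
    return tokenizer.batch_decode(outputs[:, inputs.shape[1]:], skip_special_tokens=True)[0]

print(rewrite_query("It keeps crashing, what do I do? I mean the app we talked about earlier."))
```

The rewritten query can then be sent to your retriever, and the retrieved passages passed back to the same model acting as the answer synthesizer.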
### Limitations
This model inherits some of the base model's limitations, such as:
- Generating inaccurate code and facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
- Limited scope for code: The majority of Phi-2's training data is Python and uses common packages such as `typing`, `math`, `random`, `collections`, `datetime`, and `itertools`. If the model generates Python scripts that use other packages, or scripts in other languages, we strongly recommend users manually verify all API uses.
- Language limitations: The model is primarily designed to understand standard English. Informal English, slang, or other languages may pose challenges to its comprehension, leading to potential misinterpretations or errors in its responses.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import pipeline, Conversation
chatbot = pipeline("conversational", model="prince-canuma/Damysus-2.7B-Chat")
conversation = Conversation("I'm looking for a movie - what's your favourite one?")
output = chatbot(conversation)
print(output)
```
Or you can instantiate the model and tokenizer directly:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prince-canuma/Damysus-2.7B-Chat")
model = AutoModelForCausalLM.from_pretrained("prince-canuma/Damysus-2.7B-Chat").to("cuda")

# Build the prompt with the model's chat template.
inputs = tokenizer.apply_chat_template(
    [
        {"content":"You are a helpful AI assistant","role":"system"},
        {"content":"I'm looking for a movie - what's your favourite one?","role":"user"},
    ], add_generation_prompt=True, return_tensors="pt",
).to("cuda")

outputs = model.generate(inputs, do_sample=False, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
input_length = inputs.shape[1]
print(tokenizer.batch_decode(outputs[:, input_length:], skip_special_tokens=True)[0])
```
Output:
```shell
My favorite movie is "The Shawshank Redemption."
It's a powerful and inspiring story about hope, friendship, and redemption.
The performances by Tim Robbins and Morgan Freeman are exceptional,
and the film's themes and messages are timeless.
I highly recommend it to anyone who enjoys a well-crafted and emotionally engaging story.
```
### Structured Output
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prince-canuma/Damysus-2.7B-Chat")
model = AutoModelForCausalLM.from_pretrained("prince-canuma/Damysus-2.7B-Chat").to("cuda")

inputs = tokenizer.apply_chat_template(
    [
        {"content":"You are a Robot that ONLY outputs JSON. Use this structure: {'entities': [{'type':..., 'name':...}]}.","role":"system"},
        {"content":"""Extract the entities of type 'technology' and 'file_type' in JSON format from the following passage: AI is a transformative
        force in document processing employing technologies such as 'Machine Learning (ML), Natural Language Processing (NLP) and
        Optical Character Recognition (OCR) to understand, interpret, and summarize text. These technologies enhance accuracy,
        increase efficiency, and allow you and your company to process high volumes of data in short amount of time.
        For instance, you can easily extract key points and summarize a large PDF document (i.e., 500 pages) in just a few seconds.""",
        "role":"user"},
    ], add_generation_prompt=True, return_tensors="pt",
).to("cuda")

outputs = model.generate(inputs, do_sample=False, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
input_length = inputs.shape[1]
print(tokenizer.batch_decode(outputs[:, input_length:], skip_special_tokens=True)[0])
```
Output:
```json
{
"entities": [
{
"type": "technology",
"name": "Machine Learning (ML)"
},
{
"type": "technology",
"name": "Natural Language Processing (NLP)"
},
{
"type": "technology",
"name": "Optical Character Recognition (OCR)"
},
{
"type": "file_type",
"name": "PDF"
},
]
}
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
For fine-tuning, I used the [SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) dataset, a curated subset of [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) that reaches performance on par with much larger slices of OpenOrca while including only ~500k GPT-4 completions.
From it, two smaller subsets were created, comprising 102,000 and 1,000 samples respectively:
- [prince-canuma/SmallOrca](https://huggingface.co/datasets/prince-canuma/SmallOrca)
- [prince-canuma/TinyOrca](https://huggingface.co/datasets/prince-canuma/TinyOrca)
Although experimentation was conducted with both subsets, optimal results were achieved by fine-tuning on a modest set of 200 samples.
Notably, the investigation revealed that increasing the training data beyond this threshold mainly enhanced the model's tendency to generate Chain-of-Thought responses.
However, Chain-of-Thought responses are not universally preferable: in scenarios such as a RAG setup, succinct answers are often favoured, especially for straightforward queries.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
[TODO]
#### Preprocessing
1. Convert the dataset to ChatML format (sketched below).
2. Remove all samples with more than 2048 tokens (Phi-2's context size).
3. Mask the instruction tokens (system and user turns) at training time.
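A minimal sketch of steps 1 and 2, assuming SlimOrca's ShareGPT-style `conversations` field with `from`/`value` keys; the helper names below are illustrative, not the exact preprocessing code used:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

def to_chatml(example):
    # Map ShareGPT-style roles to ChatML roles and render each turn
    # as <|im_start|>role ... <|im_end|>.
    role_map = {"system": "system", "human": "user", "gpt": "assistant"}
    text = ""
    for turn in example["conversations"]:
        text += f"<|im_start|>{role_map[turn['from']]}\n{turn['value']}<|im_end|>\n"
    example["text"] = text
    return example

dataset = load_dataset("Open-Orca/SlimOrca", split="train")
dataset = dataset.map(to_chatml)

# Step 2: drop samples longer than Phi-2's 2048-token context window.
dataset = dataset.filter(lambda ex: len(tokenizer(ex["text"]).input_ids) <= 2048)
```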
#### Training Hyperparameters
- **Training regime:** bf16 mixed precision <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
[TODO]
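Pending the full recipe, here is a minimal sketch of how the bf16 mixed-precision setting maps onto 🤗 `TrainingArguments`; apart from `bf16=True`, every value below is an illustrative assumption rather than the hyperparameters actually used:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="damysus-2.7b-chat",   # illustrative path
    bf16=True,                        # bf16 mixed precision, as noted above
    per_device_train_batch_size=4,    # placeholder value
    gradient_accumulation_steps=4,    # placeholder value
    learning_rate=2e-5,               # placeholder value
    num_train_epochs=1,               # placeholder value
    logging_steps=10,
)
```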
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
We evaluate the model on 6 key benchmarks using the [Eleuther AI Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness), a unified framework for testing generative language models on a large number of different evaluation tasks.
- AI2 Reasoning Challenge (25-shot) - a set of grade-school science questions.
- HellaSwag (10-shot) - a test of commonsense inference, which is easy for humans (~95%) but challenging for SOTA models.
- MMLU (5-shot) - a test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more.
- TruthfulQA (0-shot) - a test to measure a model's propensity to reproduce falsehoods commonly found online. Note: TruthfulQA is technically a 6-shot task in the Harness because each example is prepended with 6 Q/A pairs, even in the 0-shot setting.
- Winogrande (5-shot) - an adversarial and difficult Winograd benchmark at scale, for commonsense reasoning.
- GSM8k (5-shot) - diverse grade school math word problems to measure a model's ability to solve multi-step mathematical reasoning problems.
For all of these evaluations, a higher score is better. We chose these benchmarks because they test a variety of reasoning and general-knowledge skills across a wide range of fields, in 0-shot and few-shot settings.
Read more [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
### Results
| Model | AVG | ARC | Hellaswag | MMLU | Truthful QA | Winogrande | GSM8K |
|-------|--------:|------:|----------:|-----:|----------:|----------:|----------:|
| [NousResearch/Nous-Puffin-70B](https://huggingface.co/NousResearch/Nous-Puffin-70B) | 64.91 | 67.41 | 87.37 | 69.77 | 46.77 | 83.9 | 34.27 |
| [TheBloke/Llama-2-70B-fp16](https://huggingface.co/TheBloke/Llama-2-70B-fp16) | 64.52 | 67.32 | 87.33 | 69.83 | 44.92 | 83.74 | 33.97 |
| [NousResearch/Yarn-Mistral-7B-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) | 59.63 | 59.9 | 82.51 | 62.96 | 41.86 | 77.27 | 33.28 |
| [Qwen1.5-4B-Chat](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) | 46.79 | 43.26 | 69.73 | 55.55 | 44.79 | 64.96 | 2.43 |
| [Microsoft/phi-2](https://huggingface.co/microsoft/phi-2) | 61.33 | 61.09 | 75.11 | 58.11 | 44.47 | 74.35 | 54.81 |
| [Damysus-2.7B-Chat](https://huggingface.co/prince-canuma/Damysus-2.7B-Chat) (Ours) | 60.49 | 59.81 | 74.52 | 56.33 | **46.74** | **75.06** | 50.64 |
## Technical Specifications
### Compute Infrastructure
- Modal Labs
#### Hardware
- OS: Linux
- GPU: A10G
#### Libraries
- TRL
- Transformers
- PEFT
- Datasets
- Accelerate
- torch
- Wandb
- Bitsandbytes
- Plotly
## Future work
I plan to explore the following tuning setups:
- Function calling
- DPO
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{Damysus-2.7B-Chat,
title={Damysus-2.7B-Chat},
author={Prince Canuma},
year={2024},
}
```
```bibtex
@misc{SlimOrca,
title = {SlimOrca: An Open Dataset of GPT-4 Augmented FLAN Reasoning Traces, with Verification},
author = {Wing Lian and Guan Wang and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
url = {https://huggingface.co/Open-Orca/SlimOrca}
}
```
```bibtex
@misc{open-llm-leaderboard,
author = {Edward Beeching and Clémentine Fourrier and Nathan Habib and Sheon Han and Nathan Lambert and Nazneen Rajani and Omar Sanseviero and Lewis Tunstall and Thomas Wolf},
title = {Open LLM Leaderboard},
year = {2023},
publisher = {Hugging Face},
howpublished = "\url{https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard}"
}
```