---
library_name: transformers
tags:
- nlp
- phi
- phi-2
- instruct
license: mit
datasets:
- Open-Orca/SlimOrca
- prince-canuma/TinyOrca
language:
- en
---
## Model Summary

This model is an instruction-tuned version of Phi-2, a 2.7-billion-parameter Transformer model from Microsoft. It has undergone further training to better follow user instructions, enhancing its ability to perform tasks as directed and to interact with users. This additional training helps the model understand context better, generate more accurate and relevant responses, and adapt to a wide range of language-based tasks, such as:
- Question answering
- Data extraction
- Structured outputs (e.g., JSON)
- Providing explanations
## Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** Prince Canuma
- **Model type:** Transformer
- **License:** MIT
- **Finetuned from model:** microsoft/phi-2
## Uses

### Direct Use

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.
```python
from transformers import pipeline, Conversation

# Build a conversational pipeline around the chat model.
# Note: the "conversational" pipeline requires an older transformers release;
# it has been removed in recent versions.
chatbot = pipeline("conversational", model="prince-canuma/Damysus-2.7B-Chat")

conversation = Conversation("I'm looking for a movie - what's your favourite one?")
output = chatbot(conversation)
print(output)
```
Output:

```text
Conversation id: 5dad71bd-a24a-425a-80aa-95f56924f8c7
user: I'm looking for a movie - what's your favourite one?
assistant: My favorite movie is "The Shawshank Redemption."
It's a powerful and inspiring story about hope, friendship, and redemption.
The performances by Tim Robbins and Morgan Freeman are exceptional,
and the film's themes and messages are timeless.
I highly recommend it to anyone who enjoys a well-crafted and emotionally engaging story.
```
Or you can instantiate the model and tokenizer directly:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prince-canuma/Damysus-2.7B-Chat")
model = AutoModelForCausalLM.from_pretrained("prince-canuma/Damysus-2.7B-Chat").to("cuda")

# Build the prompt with the model's chat template and move it to the GPU.
inputs = tokenizer.apply_chat_template(
    [
        {"content": "", "role": "system"},
        {"content": "I'm looking for a movie - what's your favourite one?", "role": "user"},
    ],
    add_generation_prompt=True,
    return_tensors="pt",
).to("cuda")

outputs = model.generate(inputs, do_sample=False, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
input_length = inputs.shape[1]
print(tokenizer.batch_decode(outputs[:, input_length:], skip_special_tokens=True)[0])
```
Output:

```text
My favorite movie is "The Shawshank Redemption."
It's a powerful and inspiring story about hope, friendship, and redemption.
The performances by Tim Robbins and Morgan Freeman are exceptional,
and the film's themes and messages are timeless.
I highly recommend it to anyone who enjoys a well-crafted and emotionally engaging story.
```
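
Since the model was fine-tuned on ChatML-formatted data (see Preprocessing below), it can help to inspect the exact prompt string the tokenizer builds before generation. The snippet below is a quick sanity check, not part of the official usage; the ChatML markers shown in the comment are the standard ones and are assumed to match this model's template:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prince-canuma/Damysus-2.7B-Chat")

# Render the chat template as a plain string instead of token IDs.
prompt = tokenizer.apply_chat_template(
    [
        {"content": "", "role": "system"},
        {"content": "I'm looking for a movie - what's your favourite one?", "role": "user"},
    ],
    add_generation_prompt=True,  # append the assistant turn header so generation starts there
    tokenize=False,
)
print(prompt)  # ChatML-style turns, e.g. <|im_start|>user ... <|im_end|>
```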
## Training Details

### Training Data

This model was fine-tuned on the SlimOrca dataset, a curated subset of the OpenOrca data. SlimOrca provides an efficient means of reaching performance on par with larger slices of OpenOrca while including only ~500k GPT-4 completions.
### Training Procedure

[TODO]

#### Preprocessing

- Convert the dataset to the ChatML format.
- Remove all samples longer than 2048 tokens (Phi-2's context length).
- Mask the instruction tokens (system and user turns) at training time, so the loss is computed only on the assistant responses, as sketched below.
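
As a rough illustration of the ChatML conversion, length filtering, and instruction masking steps, here is a minimal sketch. The sample field names (`system`, `question`, `response`) and the use of `-100` to exclude tokens from the loss are assumptions for illustration, not the actual training code:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prince-canuma/Damysus-2.7B-Chat")
MAX_LEN = 2048  # Phi-2 context length

def preprocess(sample):
    # Hypothetical field names; adapt to the actual dataset schema.
    prompt = (
        f"<|im_start|>system\n{sample['system']}<|im_end|>\n"
        f"<|im_start|>user\n{sample['question']}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )
    full = prompt + f"{sample['response']}<|im_end|>\n"
    input_ids = tokenizer(full)["input_ids"]
    if len(input_ids) > MAX_LEN:
        return None  # drop over-length samples rather than truncating
    prompt_len = len(tokenizer(prompt)["input_ids"])
    # Set labels for the system/user tokens to -100 so the loss is
    # computed only on the assistant response.
    labels = [-100] * prompt_len + input_ids[prompt_len:]
    return {"input_ids": input_ids, "labels": labels}
```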
#### Training Hyperparameters

- **Training regime:** bf16 mixed precision
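
For reference, bf16 mixed precision can be enabled with a single flag in `transformers.TrainingArguments`; everything below other than `bf16=True` is an illustrative placeholder, not the actual training configuration:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="damysus-2.7b-chat",  # placeholder
    bf16=True,                       # bf16 mixed precision, as used for this model
    per_device_train_batch_size=4,   # illustrative only
    num_train_epochs=1,              # illustrative only
)
```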
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[TODO]

#### Factors

[TODO]

#### Metrics

[TODO]

### Results

[TODO]
## Limitations of Phi-2

This model inherits some of the base model's limitations, such as:

- **Inaccurate code and facts:** The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
- **Limited scope for code:** The majority of Phi-2's training data is Python code using common packages such as typing, math, random, collections, datetime, and itertools. If the model generates Python scripts that utilize other packages, or scripts in other languages, we strongly recommend that users manually verify all API uses.
- **Language limitations:** The model is primarily designed to understand standard English. Informal English, slang, or other languages may pose challenges to its comprehension, leading to potential misinterpretations or errors in its responses.
## Technical Specifications

### Compute Infrastructure

- Modal Labs

### Hardware

- OS: Linux
- GPU: A10G

### Libraries

- TRL
- Transformers
- PEFT
- Datasets
- Accelerate
- torch
- Wandb
- Bitsandbytes
- Plotly
## Citation

**BibTeX:**

```bibtex
@misc{Damysus-2.7B-Chat,
  title={Damysus-2.7B-Chat},
  author={Prince Canuma},
  year={2024},
}
```