# Fine-Tuned Llama 2 Model

## Model Description
This model is a fine-tuned version of Llama 2, trained on a dataset of diverse prompts and scenarios. The model is designed to generate responses to the tasks described in the `prompt` column of the dataset. The fine-tuning process aims to improve the model's performance on specific tasks across multiple domains, such as software development, SEO, and Linux commands.
## Dataset Information
The dataset used for fine-tuning this model consists of two primary columns:
- `act`: The role or scenario that the model is asked to act upon. For example:
  - "An Ethereum Developer"
  - "SEO Prompt"
  - "Linux Terminal"
- `prompt`: The detailed task or scenario description related to the `act`. This provides the context and specific instructions that the model needs to follow. Example prompts:
  - "Imagine you are an experienced Ethereum developer tasked with creating a smart contract for a blockchain messenger..."
  - "Using WebPilot, create an outline for an article that will be 2,000 words on the keyword 'Best SEO prompts'..."
  - "I want you to act as a linux terminal. I will type commands and you will reply with what the terminal should show..."
The dataset includes a wide range of scenarios aimed at helping the model generalize across technical and creative tasks.
### Dataset Samples
| Act | Prompt |
|---|---|
| Ethereum Developer | Imagine you are an experienced Ethereum developer tasked with creating a smart contract for a blockchain... |
| SEO Prompt | Using WebPilot, create an outline for an article that will be 2,000 words on the keyword 'Best SEO prompts'... |
| Linux Terminal | I want you to act as a linux terminal. I will type commands and you will reply with what the terminal should show... |
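
The dataset can be loaded directly with the `datasets` library. The formatting helper below is a minimal sketch; the exact prompt template used during fine-tuning is an assumption, so adapt it to match your own training setup:

```python
from datasets import load_dataset

# Load the prompt dataset used for fine-tuning (columns: "act" and "prompt")
ds = load_dataset("fka/awesome-chatgpt-prompts", split="train")

# Hypothetical template: prepend the role ("act") to the task description
# ("prompt"). The template actually used for fine-tuning is an assumption.
def format_example(row):
    return f"Act: {row['act']}\nPrompt: {row['prompt']}"

print(format_example(ds[0]))
```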
## Output Example
The model has been fine-tuned to generate detailed, contextually relevant responses based on the prompts provided. Here’s an example of how the model might respond to a sample prompt:
**Input:**

- Act: Linux Terminal
- Prompt: "I want you to act as a Linux terminal. I will type commands, and you will reply with what the terminal should show. Execute the command `ls`."

**Output:**

In this scenario, the model understands that it should act as a Linux terminal and simulates the result of running the `ls` command.
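
To make the input format concrete, here is a small sketch that assembles an act/prompt pair into a single input string. The `build_input` helper and its template are hypothetical, mirroring the dataset's two columns rather than a template confirmed by the training code:

```python
# Hypothetical helper mirroring the dataset's "act"/"prompt" columns;
# the exact template the model was fine-tuned on is an assumption.
def build_input(act: str, prompt: str) -> str:
    return f"Act: {act}\nPrompt: {prompt}"

text = build_input(
    "Linux Terminal",
    "I want you to act as a Linux terminal. I will type commands, and you "
    "will reply with what the terminal should show. Execute the command ls.",
)
print(text)
```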
**Another Example**

**Input:**

- Act: Ethereum Developer
- Prompt: "Imagine you are an experienced Ethereum developer tasked with creating a smart contract for a blockchain messenger..."

**Output:**

In this example, the model generates Solidity code based on the prompt, addressing the requirements for a blockchain messenger.
## How to Use the Model
This model can be loaded and used through the Hugging Face Hub:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("Manish-KT/Fine_tune_Llama_2")
model = AutoModelForCausalLM.from_pretrained("Manish-KT/Fine_tune_Llama_2")

# Encode the prompt
inputs = tokenizer("I want you to act as a linux terminal.", return_tensors="pt")

# Generate the response (**inputs also passes the attention mask, and
# max_new_tokens bounds only the generated text, not the prompt)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
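
Greedy decoding (the default) can produce short, repetitive continuations. For more varied output you can enable sampling; the parameter values below are illustrative defaults, not settings verified against this model's fine-tuning:

```python
# Continuing from the example above: generate with sampling enabled.
# temperature and top_p are illustrative values, not tuned for this model.
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```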
## Acknowledgements
Special thanks to the creators of the dataset `fka/awesome-chatgpt-prompts`, which provided the rich prompts and diverse scenarios used to fine-tune this model.
## License
This model is released under the MIT license and may be used for both commercial and non-commercial purposes. Please attribute the original dataset and respect any applicable usage policies.