Uploaded model
- Developed by: wdli
- License: apache-2.0
- Finetuned from model: unsloth/llama-3-8b-Instruct-bnb-4bit
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
The model was trained on the reddit_depression_dataset for 2 epochs.
The training data is formatted as a dialog, but the user turn is omitted: each example consists only of a system message and an assistant message carrying the dataset text.
For example:
```python
def formatting_prompts_func(examples):
    texts_dataset = examples['text']
    formatted_prompts = []
    for text in texts_dataset:
        dialog = [
            {"role": "system", "content": "You are a patient undergoing depression."},
            # {"role": "user", "content": ""},  # user turn intentionally omitted
            {"role": "assistant", "content": text}
        ]
        formatted_prompt = tokenizer.apply_chat_template(
            dialog, tokenize=False, add_generation_prompt=False
        )
        formatted_prompts.append(formatted_prompt)
    return {"text": formatted_prompts}
```
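To make the formatting concrete, the sketch below shows roughly what `apply_chat_template` renders for one example. The `mock_apply_chat_template` function is a stand-in written for illustration, based on the general shape of the Llama-3 chat template; during actual training the real `tokenizer.apply_chat_template` from the base model's tokenizer is used, and its exact output may differ:

```python
# Stand-in that mimics the general shape of the Llama-3 chat template.
# For illustration only; use the model tokenizer's apply_chat_template in practice.
def mock_apply_chat_template(dialog, tokenize=False, add_generation_prompt=False):
    parts = ["<|begin_of_text|>"]
    for turn in dialog:
        parts.append(
            f"<|start_header_id|>{turn['role']}<|end_header_id|>\n\n"
            f"{turn['content']}<|eot_id|>"
        )
    if add_generation_prompt:
        # Open an assistant header so the model continues from here at inference.
        parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

def formatting_prompts_func(examples):
    formatted_prompts = []
    for text in examples["text"]:
        dialog = [
            {"role": "system", "content": "You are a patient undergoing depression."},
            # user turn intentionally omitted, as in training
            {"role": "assistant", "content": text},
        ]
        formatted_prompts.append(mock_apply_chat_template(dialog))
    return {"text": formatted_prompts}

# Hypothetical one-example batch to show the rendered prompt string.
batch = formatting_prompts_func({"text": ["I have been feeling low lately."]})
print(batch["text"][0])
```

Each training example therefore collapses to a system header followed directly by the assistant text, with no user turn in between.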
Model tree for wdli/llama3-instruct_depression_3
Base model
unsloth/llama-3-8b-Instruct-bnb-4bit