---
{}
---
|
# UltraInteract-Llama-FT
|
|
|
This model is a version of Llama-2 fine-tuned on the UltraInteract dataset. It is trained to handle interactive, multi-turn conversational tasks with improved accuracy and contextual understanding.
|
|
|
## Usage
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("kritsadaK/UltraInteract-Llama-FT")
model = AutoModelForCausalLM.from_pretrained("kritsadaK/UltraInteract-Llama-FT")

input_text = "Type your prompt here"
inputs = tokenizer(input_text, return_tensors="pt")

# Set max_new_tokens explicitly; otherwise generation stops at the default limit
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
|
|
## Training Details
|
|
|
- **Dataset:** UltraInteract
- **Training setup:** 4-bit quantization with LoRA
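A setup combining 4-bit quantization with LoRA typically follows the QLoRA recipe. The sketch below shows what that configuration commonly looks like with `transformers` and `peft`; the base checkpoint name and the LoRA hyperparameters (rank, alpha, target modules) are illustrative assumptions, not values published with this card.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization, as used in the QLoRA recipe
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Base checkpoint is an assumption; this card does not state which Llama-2 size was used
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapter config; these hyperparameters are common defaults, not confirmed values
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```

With this setup, only the LoRA adapter weights are trained while the quantized base model stays frozen, which is what makes fine-tuning a Llama-2 model feasible on a single consumer GPU.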
|
|