---
{}
---
# UltraInteract-Llama-FT

This model is a Llama-2 checkpoint fine-tuned on the UltraInteract dataset. The fine-tune targets interactive, multi-turn conversational tasks, aiming to improve response accuracy and contextual understanding.

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("kritsadaK/UltraInteract-Llama-FT")
model = AutoModelForCausalLM.from_pretrained("kritsadaK/UltraInteract-Llama-FT")

# Tokenize a prompt and generate a continuation
input_text = "Type your prompt here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)  # adjust the budget as needed
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
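
Because the card notes the model was trained with 4-bit quantization, you may also want to load it in 4-bit at inference time to reduce memory use. The sketch below assumes the standard `bitsandbytes` integration in `transformers` (plus `accelerate` for `device_map="auto"`); the specific quantization settings are illustrative, since the card does not state the ones used in training.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit settings (NF4 quantization, fp16 compute); assumptions,
# not values taken from this model's training run.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("kritsadaK/UltraInteract-Llama-FT")
model = AutoModelForCausalLM.from_pretrained(
    "kritsadaK/UltraInteract-Llama-FT",
    quantization_config=bnb_config,
    device_map="auto",  # requires `accelerate`; places layers on available GPUs
)
```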

## Training Details

- **Dataset:** UltraInteract
- **Method:** LoRA fine-tuning on a 4-bit quantized base model
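
For reference, a setup matching the description above (LoRA adapters trained on a 4-bit quantized base) typically looks like the following minimal sketch using the `peft` and `bitsandbytes` libraries. The base checkpoint (`meta-llama/Llama-2-7b-hf`), LoRA rank, target modules, and other hyperparameters shown here are assumptions for illustration, not values from this model's actual training run.

```python
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the base model in 4-bit (illustrative settings)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumed base; the card only says "Llama-2"
    quantization_config=bnb_config,
    device_map="auto",
)

# Prepare the quantized model for training, then attach LoRA adapters
base_model = prepare_model_for_kbit_training(base_model)
lora_config = LoraConfig(
    r=16,                                  # illustrative rank
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],   # common choice for Llama-style models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

From here, the wrapped model can be passed to a standard `transformers` `Trainer` (or similar training loop) over the UltraInteract data; only the small LoRA adapter weights are updated, which is what makes training on a 4-bit base practical.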