---
language:
- en
- it
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---

# Meta LLaMA 3.1 8B 4-bit Finetuned Model

This model is a fine-tuned version of `Meta-Llama-3.1-8B`, developed by **ruslanmv** for text generation, in particular translating natural-language requests into SQL queries. It uses 4-bit quantization, making inference more memory-efficient while maintaining strong generation quality.

---

## Model Details

- **Base Model**: `unsloth/meta-llama-3.1-8b-bnb-4bit`
- **Finetuned by**: ruslanmv
- **Languages**: English, Italian
- **License**: [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Tags**: 
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - trl
  - sft

---

## Model Usage

### Installation

To use this model, you will need to install the necessary libraries:

```bash
pip install transformers accelerate bitsandbytes
```
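
Before loading the model, it can help to confirm that a CUDA-capable GPU is visible and that `bitsandbytes` imports cleanly, since 4-bit loading requires a GPU. A minimal sanity check, assuming a standard PyTorch install:

```python
import torch
import bitsandbytes as bnb  # raises ImportError here if the install is broken

# 4-bit loading via bitsandbytes requires a CUDA GPU
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```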

### Loading the Model in Python

Here’s an example of how to load this fine-tuned model using Hugging Face's `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# Define the 4-bit quantization config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# Ensure you have the right device setup
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the model and tokenizer from the Hugging Face Hub with BitsAndBytesConfig
model_name = "ruslanmv/Meta-Llama-3.1-8B-Text-to-SQL-4bit"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    quantization_config=bnb_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Define the Alpaca-style prompt template
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
"""

# Format the prompt without the response part.
# The input is Italian for "Select all columns of table1 where the column anni equals 2020".
prompt = alpaca_prompt.format(
    "Provide the SQL query",
    "Seleziona tutte le colonne della tabella table1 dove la colonna anni è uguale a 2020",
)

# Tokenize the prompt and generate text
inputs = tokenizer([prompt], return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=64, use_cache=True)

# Decode the generated text
generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]

# Keep only the generated response (everything after the "### Response:" marker)
response = generated_text.split("### Response:", 1)[1].strip()

# Print the response (excluding the prompt)
print(response)
```
The model should answer with:
```sql
SELECT * FROM table1 WHERE anni = 2020
```
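
For repeated queries, it may be convenient to wrap prompt formatting and generation in a small helper. The `generate_sql` function below is a hypothetical convenience wrapper (not part of this repository) that reuses the `model`, `tokenizer`, `device`, and `alpaca_prompt` objects defined above:

```python
def generate_sql(instruction: str, user_input: str, max_new_tokens: int = 64) -> str:
    """Format an Alpaca-style prompt, run generation, and return only the response."""
    prompt = alpaca_prompt.format(instruction, user_input)
    inputs = tokenizer([prompt], return_tensors="pt").to(device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, use_cache=True)
    text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
    # Everything after the "### Response:" marker is the model's answer
    return text.split("### Response:", 1)[1].strip()

# Example call reusing the query from above
print(generate_sql(
    "Provide the SQL query",
    "Seleziona tutte le colonne della tabella table1 dove la colonna anni è uguale a 2020",
))
```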

### Model Features

- **Text Generation**: Fine-tuned to produce coherent, contextually accurate text, specializing in SQL queries generated from natural-language requests in English or Italian.
- **Efficiency**: 4-bit NF4 quantization via the `bitsandbytes` library reduces memory use and speeds up inference; a quick way to check the loaded model's footprint is sketched below.
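
As a rough check of the quantization savings, `transformers` models expose `get_memory_footprint()`, which reports the model's in-memory size in bytes. A minimal sketch, assuming the `model` object loaded in the usage example above:

```python
# Report the loaded model's approximate memory footprint (bytes -> GiB)
footprint_bytes = model.get_memory_footprint()
print(f"Model memory footprint: {footprint_bytes / 1024**3:.2f} GiB")
```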

### License

This model is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). You are free to use, modify, and distribute this model, provided that you comply with the license terms.

### Acknowledgments

This model was fine-tuned by **ruslanmv**, building on Unsloth's `unsloth/meta-llama-3.1-8b-bnb-4bit` base model and Meta's Llama 3.1.