---
base_model: unsloth/Qwen3-4B
library_name: peft
license: mit
datasets:
- DeepMount00/o1-ITA-REASONING
language:
- it
pipeline_tag: question-answering
---

# Model Card for Rustamshry/ITA-Reasoning-o1


## Model Description

- **Training objective:** Fine-tuned on an Italian instruction-style reasoning dataset to improve performance on logical, educational, and chain-of-thought tasks.
- **Language(s) (NLP):** Italian
- **License:** MIT
- **Finetuned from model:** unsloth/Qwen3-4B


## Uses

### Direct Use

This model is intended for reasoning-intensive tasks in Italian, such as step-by-step problem solving, educational explanations, and structured question answering.


## Bias, Risks, and Limitations

- May hallucinate or make factual errors in complex logic chains.
- Not safe for unsupervised use in high-stakes domains like medical/legal reasoning.
- Output quality depends on instruction clarity.


## Training Data

The DeepMount00/o1-ITA-REASONING dataset is crafted to train language models in providing structured, methodical responses to questions in Italian.
Each entry follows a four-step reasoning approach:

 - Reasoning: Initial thought process
 - Verification: Self-review of the reasoning
 - Correction: Amendments if needed
 - Final Answer: Conclusive response

The dataset is formatted using XML-like tags to delineate each component, promoting transparency and structured thinking.
It is particularly beneficial for educational purposes, encouraging systematic problem-solving and critical thinking in the Italian language.
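Based on the description above, a training entry might look like the sketch below. The tag names here are illustrative assumptions; check the dataset card for the exact format used.

```xml
<reasoning>Prima considero i costi di installazione e i benefici ambientali...</reasoning>
<verification>Controllo che il ragionamento copra costi e benefici richiesti.</verification>
<correction>Nessuna correzione necessaria.</correction>
<answer>L'energia solare offre benefici ambientali a fronte di costi iniziali elevati.</answer>
```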

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from huggingface_hub import login
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
from peft import PeftModel

login(token="")  # supply your Hugging Face access token

tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-4B")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen3-4B",
    device_map={"": 0},
)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "Rustamshry/ITA-Reasoning-o1")

question = "Quali sono i costi e i benefici ambientali, sociali ed economici dell'energia solare?"

messages = [
    {"role": "user", "content": question}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # required for generation
    enable_thinking=True,  # enable the model's reasoning trace
)

_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=2048,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```
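Because the training data delineates each reasoning step with XML-like tags, generated responses can be post-processed to pull out a single section, such as the final answer. The sketch below assumes illustrative tag names (`reasoning`, `verification`, `correction`, `answer`) and a hypothetical sample string; adapt the tags to whatever the model actually emits.

```python
import re
from typing import Optional

def extract_section(response: str, tag: str) -> Optional[str]:
    """Return the text inside <tag>...</tag>, or None if the tag is absent."""
    match = re.search(rf"<{tag}>(.*?)</{tag}>", response, re.DOTALL)
    return match.group(1).strip() if match else None

# Hypothetical model output following the four-step structure
sample = (
    "<reasoning>L'energia solare riduce le emissioni ma ha costi iniziali.</reasoning>"
    "<verification>Il ragionamento copre costi e benefici.</verification>"
    "<correction>Nessuna correzione necessaria.</correction>"
    "<answer>I benefici superano i costi nel lungo periodo.</answer>"
)

print(extract_section(sample, "answer"))
```

Using `re.DOTALL` lets a section span multiple lines, which matters for long reasoning traces.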
### Framework versions

- PEFT 0.14.0