Text Generation · GGUF · Russian · Inference Endpoints · conversational

aashish1904 committed 6c5fe44 (parent: 84cd7f4): Upload README.md with huggingface_hub
---
language:
- ru
---

![](https://cdn.discordapp.com/attachments/791342238541152306/1264099835221381251/image.png?ex=669ca436&is=669b52b6&hm=129f56187c31e1ed22cbd1bcdbc677a2baeea5090761d2f1a458c8b1ec7cca4b&)

# QuantFactory/T-lite-0.1-GGUF
This is a quantized version of [AnatoliiPotapov/T-lite-0.1](https://huggingface.co/AnatoliiPotapov/T-lite-0.1), created using llama.cpp.

# Original Model Card

# T-lite-0.1

🚨 **T-lite is designed for further fine-tuning and is not intended as a ready-to-use conversational assistant. Users are advised to exercise caution and are responsible for any additional training and oversight required to ensure the model's responses meet acceptable ethical and safety standards. The responsibility for incorporating this model into industrial or commercial solutions lies entirely with those who choose to deploy it.**


## Description

T-lite is a continual-pretraining model designed specifically for the Russian language, enabling the creation of large-language-model applications in Russian. It aims to improve the quality of Russian text generation and to provide domain-specific and cultural knowledge relevant to the Russian context.


## Model Training Details

### 🏛️ Architecture and Configuration

T-lite is a decoder-only language model with:
- pre-normalization via RMSNorm
- the SwiGLU activation function
- rotary positional embeddings (RoPE)
- grouped-query attention (GQA)

T-lite was trained in bf16.
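
Of the building blocks listed above, RMSNorm is the easiest to show in a few lines. A minimal sketch in plain Python (the real model applies this per hidden vector with a learned gain; the `eps` value here is an assumption, not taken from the card):

```python
import math

def rms_norm(x, gain=None, eps=1e-6):
    """Pre-normalization via RMSNorm: rescale x by 1/RMS(x).
    Unlike LayerNorm, there is no mean-centering and no bias term."""
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    g = gain if gain is not None else [1.0] * len(x)
    return [gi * v / rms for gi, v in zip(g, x)]
```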


### ⚙️ Hyperparameters

We employed the Decoupled AdamW optimizer with β1 = 0.9, β2 = 0.95, and ε = 1.0e-8. The learning rate was set to 1.0e-5, with a 10-step warmup followed by a constant schedule during stage 1 and a cosine schedule during stage 2. Weight decay was applied at a rate of 1.0e-6, and gradients were clipped to a maximum norm of 1.0. The maximum sequence length was set to 8192, and each batch contained approximately 6 million tokens.
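
The two-stage schedule can be sketched as a multiplier on the base learning rate of 1.0e-5. The stage lengths below are hypothetical placeholders; the card does not state step counts:

```python
import math

def lr_scale(step, warmup_steps=10, stage1_steps=10_000, stage2_steps=10_000):
    """Stage 1: 10-step linear warmup, then constant.
    Stage 2: cosine decay. Stage lengths here are illustrative only."""
    if step < warmup_steps:
        return (step + 1) / warmup_steps               # linear warmup
    if step < stage1_steps:
        return 1.0                                     # constant for the rest of stage 1
    progress = min((step - stage1_steps) / stage2_steps, 1.0)
    return 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay in stage 2
```

The scheduled rate at a given step is then `1.0e-5 * lr_scale(step)`.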


### 🏋🏽 Hardware Configuration & Performance

Training was conducted on 96 A100 GPUs with 80 GB of memory each, using Fully Sharded Data Parallel (FSDP) with full-shard/hybrid-shard strategies. The setup achieved a throughput of 3,000 tokens/sec/GPU, processing 100B tokens in approximately 4 days, and reached a Model FLOPs Utilization (MFU) of 0.59.
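
As a quick arithmetic check on the figures above, 96 GPUs at 3,000 tokens/sec each work out to about 4 days per 100B tokens, and the ~6M-token batches correspond to roughly 730 sequences of length 8192:

```python
gpus = 96
tokens_per_sec_per_gpu = 3_000
total_tokens = 100e9

cluster_rate = gpus * tokens_per_sec_per_gpu   # 288,000 tokens/sec cluster-wide
days = total_tokens / cluster_rate / 86_400    # seconds in a day
print(f"{days:.1f} days")                      # -> 4.0 days

seqs_per_batch = 6e6 / 8192                    # batch tokens / max sequence length
print(f"~{seqs_per_batch:.0f} sequences per batch")  # -> ~732
```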


### 📚 Data

#### Stage 1
**Massive** continual pre-training

- 300B tokens × 0.3 epoch
- The proportion of Russian data is 85%, a trade-off between language adaptation and English-language performance
- Styles and topics in Common Crawl (CC) data were downsampled
- Domains in book datasets were balanced
- The proportion of code data was increased

<img src="stage1.jpg" width="700px">

#### Stage 2
Focuses on refining the **quality** of the dataset

- 20B tokens × 3 epochs
- Includes smaller-volume instruction sets
- Advertisements and news were aggressively downsampled
- Instructions and articles were upsampled
- Educational content was balanced

<img src="stage2.jpg" width="700px">

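Combining the two stages, the effective training budget works out to roughly 150B tokens:

```python
stage1_tokens = 300e9 * 0.3  # 300B-token corpus seen for 0.3 of an epoch -> 90B
stage2_tokens = 20e9 * 3     # 20B-token corpus seen for 3 epochs         -> 60B
total = stage1_tokens + stage2_tokens
print(f"{total / 1e9:.0f}B effective tokens")  # -> 150B
```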


## 📊 Benchmarks

### 🇷🇺 Russian
<style>
table {
    width: auto;
}
th, td {
    padding: 5px;
}
</style>
[MERA](https://mera.a-ai.ru/en) benchmark results

| Task name        |        Metric        | N shot  | Llama-3-8b | T-lite-0.1 |
|------------------|:--------------------:|:-------:|:----------:|:----------:|
| **Total score**  |                      |         | 0.445      | **0.492**  |
| BPS              | Accuracy             | 2-shot  | **0.459**  | 0.358      |
| CheGeKa          | F1 / EM              | 4-shot  | 0.04 / 0   | **0.118 / 0.06** |
| LCS              | Accuracy             | 2-shot  | **0.146**  | 0.14       |
| MathLogicQA      | Accuracy             | 5-shot  | 0.365      | **0.37**   |
| MultiQ           | F1 / EM              | 0-shot  | 0.106 / 0.027 | **0.383 / 0.29** |
| PARus            | Accuracy             | 0-shot  | 0.72       | **0.858**  |
| RCB              | Avg F1 / Accuracy    | 0-shot  | 0.42 / 0.434 | **0.511 / 0.416** |
| ruHumanEval      | pass@k               | 0-shot  | 0.017 / 0.085 / 0.171 | **0.023 / 0.113 / 0.226** |
| ruMMLU           | Accuracy             | 5-shot  | 0.693      | **0.759**  |
| ruModAr          | EM                   | 0-shot  | **0.708**  | 0.667      |
| ruMultiAr        | EM                   | 5-shot  | 0.259      | **0.269**  |
| ruOpenBookQA     | Avg F1 / Accuracy    | 5-shot  | 0.745 / 0.744 | **0.783 / 0.782** |
| ruTiE            | Accuracy             | 0-shot  | 0.553      | **0.681**  |
| ruWorldTree      | Avg F1 / Accuracy    | 5-shot  | 0.838 / 0.839 | **0.88 / 0.88** |
| RWSD             | Accuracy             | 0-shot  | 0.504      | **0.585**  |
| SimpleAr         | EM                   | 5-shot  | 0.954      | **0.955**  |
| USE              | Grade Norm           | 0-shot  | 0.023      | **0.05**   |

The evaluation was performed using https://github.com/ai-forever/MERA/tree/main.

### 🇬🇧 English

As expected, performance on English benchmarks declined after the model was adapted for the Russian language.

| Benchmark                                                       | N shot | Llama-3-8b | T-lite-0.1 |
|-----------------------------------------------------------------|:------:|:----------:|:----------:|
| [ARC-challenge](https://arxiv.org/abs/1911.01547)               | 0-shot | **0.518**  | 0.489      |
| [ARC-easy](https://arxiv.org/abs/1911.01547)                    | 0-shot | **0.789**  | 0.787      |
| [MMLU](https://arxiv.org/abs/2009.03300)                        | 0-shot | **0.62**   | 0.6        |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 0-shot | 0.162 | **0.222** |
| [TriviaQA](https://arxiv.org/abs/1705.03551)                    | 0-shot | **0.63**   | 0.539      |

The evaluation was performed using https://github.com/EleutherAI/lm-evaluation-harness.



## 👨‍💻 Examples of usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

torch.manual_seed(42)

# Load the tokenizer and model; device_map="auto" places weights on available devices
tokenizer = AutoTokenizer.from_pretrained("t-bank-ai/T-lite-0.1")
model = AutoModelForCausalLM.from_pretrained("t-bank-ai/T-lite-0.1", device_map="auto")

# Russian prompt: "Machine learning is needed for"
input_text = "Машинное обучение нужно для"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

# Generate up to 256 new tokens and decode the full sequence
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```

Output:

```
Машинное обучение нужно для того, чтобы автоматизировать процесс принятия решений. Вместо того, чтобы человеку нужно было вручную просматривать и анализировать данные, алгоритмы машинного обучения могут автоматически выявлять закономерности и делать прогнозы на основе этих данных. Это может быть особенно полезно в таких областях, как финансы, где объем данных огромен, а решения должны приниматься быстро.

Вот несколько примеров того, как машинное обучение используется в финансах:

1. Обнаружение мошенничества: алгоритмы машинного обучения могут анализировать закономерности в транзакциях и выявлять подозрительные действия, которые могут указывать на мошенничество.
2. Управление рисками: Машинное обучение может помочь финансовым учреждениям выявлять и оценивать риски, связанные с различными инвестициями или кредитами.
3. Обработка данных на естественном языке: Машинное обучение может использоваться для анализа финансовых новостей и других текстовых данных, чтобы выявить тенденции
```