---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_button_content: Acknowledge license
tags:
- conversational
language:
- ar
- en
---


# SILMA AI

SILMA.AI is a leading Generative AI startup dedicated to empowering Arabic speakers with state-of-the-art AI solutions. 


## 🚀 Our Flagship Model: SILMA 1.0 🚀

* **SILMA 1.0** is the **TOP-RANKED** open-weights Arabic LLM with an impressive **9 billion parameters**, surpassing models that are over seven times larger. 🏆

## What makes SILMA exceptional?

* SILMA is a small language model that outperforms 72B models on most Arabic language tasks, making it more practical for business use cases
* SILMA is built on the strong foundational models of Google Gemma, giving you the best of both worlds
* SILMA is an open-weight model free to use in accordance with our open license


## 👥 Our Team

We are a team of seasoned **Arabic AI experts** who understand the nuances of the language and cultural considerations, enabling us to build solutions that truly resonate with Arabic users.

**Authors**: [silma.ai](https://silma.ai)


### Usage

Below we share some code snippets to help you get started quickly with running the model. First, install the Transformers library with:
```sh
pip install -U transformers
```

Then, copy the snippet from the section that is relevant to your use case.

#### Running with the `pipeline` API

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="silma-ai/SILMA-9B-Instruct-v0.8",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",  # replace with "mps" to run on a Mac device
)

messages = [
    {"role": "user", "content": "ุงูƒุชุจ ุฑุณุงู„ุฉ ุชุนุชุฐุฑ ููŠู‡ุง ู„ู…ุฏูŠุฑูŠ ููŠ ุงู„ุนู…ู„ ุนู† ุงู„ุญุถูˆุฑ ุงู„ูŠูˆู… ู„ุฃุณุจุงุจ ู…ุฑุถูŠุฉ."},
]

outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)

# السلام عليكم ورحمة الله وبركاته، أودّ أن أعتذر عن عدم الحضور إلى العمل اليوم بسبب مرضي. أشكركم على تفهمكم.
# ("Peace be upon you. I would like to apologize for not coming to work today because of my illness. Thank you for your understanding.")
```
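Because the chat pipeline returns the full message list (which is why `outputs[0]["generated_text"][-1]["content"]` works above), you can continue the conversation by appending another user turn and calling the pipeline again. A minimal sketch; the follow-up prompt is only illustrative:

```python
# Continue the conversation: reuse the returned message list and add a follow-up turn.
messages = outputs[0]["generated_text"]  # full conversation, ending with the assistant reply
messages.append(
    {"role": "user", "content": "اجعل الرسالة أكثر رسمية."}  # illustrative follow-up: "Make the message more formal."
)
outputs = pipe(messages, max_new_tokens=256)
print(outputs[0]["generated_text"][-1]["content"].strip())
```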

#### Running the model on a single / multi GPU

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "silma-ai/SILMA-9B-Instruct-v0.8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

input_text = "ุฃูŠู‡ู…ุง ุฃุฎู ูˆุฒู†ุง, ูƒูŠู„ูˆ ู…ู† ุงู„ุญุฏูŠุฏ ุฃู… ูƒูŠู„ูˆ ู…ู† ุงู„ู‚ุทู†ุŸ"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))

# كلاهما له نفس الوزن. ("They both weigh the same.")

```

You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:
```python
messages = [
    {"role": "user", "content": "ุงูƒุชุจ ูƒูˆุฏ ุจุงูŠุซูˆู† ู„ุชูˆู„ูŠุฏ ู…ุชุณู„ุณู„ุฉ ุฃุฑู‚ุงู… ุฒูˆุฌูŠุฉ."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))

# def generate_even_numbers(n):
#  """
#  This function generates a list of even numbers from 1 to n.
#
#  Args:
#    n: The upper limit of the range.
#
#  Returns:
#    A list of even numbers.
#  """
#  return [i for i in range(1, n + 1) if i % 2 == 0]

# Example usage
# n = 10
# even_numbers = generate_even_numbers(n)
# print(f"The first {n} even numbers are: {even_numbers}")

```

#### Quantized Versions through `bitsandbytes`

<details>
  <summary>
    Using 8-bit precision (int8)  
  </summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "silma-ai/SILMA-9B-Instruct-v0.8"
quantization_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
)

input_text = "ุงุฐูƒุฑ ุฎู…ุณ ุงู†ูˆุงุน ููˆุงูƒู‡ ุจู‡ุง ู†ุณุจ ุนุงู„ูŠุฉ ู…ู† ููŠุชุงู…ูŠู† ุฌ."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))

# الليمون، البرتقال، الموز، الكيوي، الفراولة ("Lemon, orange, banana, kiwi, strawberry")

```
</details>

<details>
  <summary>
    Using 4-bit precision  
  </summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "silma-ai/SILMA-9B-Instruct-v0.8"
quantization_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
)

input_text = "ููŠ ุฃูŠ ุนุงู… ุชูˆูู‰ ุตู„ุงุญ ุงู„ุฏูŠู† ุงู„ุฃูŠูˆุจูŠุŸ"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))

# 1193
```
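
If you want finer-grained control, `BitsAndBytesConfig` also exposes 4-bit options such as the NF4 quantization type and the compute dtype. A small optional variant (not part of the original snippet), assuming the same `model_id` as above:

```python
import torch

# Optional: NF4 4-bit quantization with bfloat16 compute and double quantization.
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=nf4_config)
```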
</details>

#### Advanced Usage

<details>
  <summary>
    Torch compile  
  </summary>

[Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding up the
inference of PyTorch modules. The SILMA model can run up to 6x faster by leveraging torch compile.

Note that two warm-up steps are required before the full inference speed is realised:

```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"

from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch

torch.set_float32_matmul_precision("high")

# load the model + tokenizer
model_id = "silma-ai/SILMA-9B-Instruct-v0.8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = Gemma2ForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.to("cuda")

# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)

# pre-process inputs
input_text = "ู…ู† ุงู„ุฑุฆูŠุณ ุงู„ุฐูŠ ุชูˆู„ู‰ ุงู„ู…ู†ุตุจ ููŠ ุฃู…ุฑูŠูƒุง ุจุนุฏ ุฏูˆู†ุงู„ุฏ ุชุฑุงู…ุจุŸ"
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]

# set-up k/v cache
past_key_values = HybridCache(
    config=model.config,
    max_batch_size=1,
    max_cache_len=model.config.max_position_embeddings,
    device=model.device,
    dtype=model.dtype
)

# enable passing kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None

# two warm-up steps
for idx in range(2):
    outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
    past_key_values.reset()

# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# جو بايدن ("Joe Biden")

```

For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config).

</details>

### Chat Template

The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.

Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model_id = "silma-ai/SILMA-9B-Instruct-v0.8"
dtype = torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)

chat = [
    { "role": "user", "content": "ู…ุง ุงุดู‡ุฑ ุงุทุงุฑุงุช ุงู„ุนู…ู„ ููŠ ุงู„ุจุงูŠุซูˆู† ู„ุจู†ุงุก ู†ู…ุงุฐุฌ ุงู„ุฐูƒุงุก ุงู„ุงุตุทู†ุงุนูŠุŸ" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```

At this point, the prompt contains the following text:

```
<bos><start_of_turn>user
ما اشهر اطارات العمل في البايثون لبناء نماذج الذكاء الاصطناعي؟<end_of_turn>
<start_of_turn>model
```

As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.

You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
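
As a minimal sketch, here is the same prompt built by hand as a plain string (the `<bos>` token is written explicitly, so it should later be encoded with `add_special_tokens=False`, as in the snippet below):

```python
# Hand-built prompt mirroring the chat template output shown above.
user_message = "ما اشهر اطارات العمل في البايثون لبناء نماذج الذكاء الاصطناعي؟"
manual_prompt = (
    "<bos><start_of_turn>user\n"
    + user_message + "<end_of_turn>\n"
    + "<start_of_turn>model\n"
)
# manual_prompt should match the `prompt` produced by apply_chat_template above.
```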

After the prompt is ready, generation can be performed like this:

```python
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
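
The decode above includes the prompt tokens as well. If you only want the model's reply, a small sketch that slices off the prompt before decoding:

```python
# Decode only the newly generated tokens, skipping the echoed prompt.
prompt_length = inputs.shape[1]
reply = tokenizer.decode(outputs[0][prompt_length:], skip_special_tokens=True)
print(reply)
```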

### Inputs and outputs

*   **Input:** Text string, such as a question, a prompt, or a document to be
    summarized.
*   **Output:** Generated Arabic or English text in response to the input, such
    as an answer to a question, or a summary of a document.

### Citation

```none
@article{silma_01_2024,
    title={Silma},
    url={https://www.silma.ai},
    publisher={Silma},
    author={Silma Team},
    year={2024}
}
```

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.

* Content Creation and Communication
  * Text Generation: These models can be used to generate creative text formats
    such as poems, scripts, code, marketing copy, and email drafts.
  * Chatbots and Conversational AI: Power conversational interfaces for customer
    service, virtual assistants, or interactive applications.
  * Text Summarization: Generate concise summaries of a text corpus, research
    papers, or reports.
* Research and Education
  * Natural Language Processing (NLP) Research: These models can serve as a
    foundation for researchers to experiment with NLP techniques, develop
    algorithms, and contribute to the advancement of the field.
  * Language Learning Tools: Support interactive language learning experiences,
    aiding in grammar correction or providing writing practice.
  * Knowledge Exploration: Assist researchers in exploring large bodies of text
    by generating summaries or answering questions about specific topics.

### Limitations

* Training Data
  * The quality and diversity of the training data significantly influence the
    model's capabilities. Biases or gaps in the training data can lead to
    limitations in the model's responses.
  * The scope of the training dataset determines the subject areas the model can
    handle effectively.
* Context and Task Complexity
  * LLMs are better at tasks that can be framed with clear prompts and
    instructions. Open-ended or highly complex tasks might be challenging.
  * A model's performance can be influenced by the amount of context provided
    (longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
  * Natural language is inherently complex. LLMs might struggle to grasp subtle
    nuances, sarcasm, or figurative language.
* Factual Accuracy
  * LLMs generate responses based on information they learned from their
    training datasets, but they are not knowledge bases. They may generate
    incorrect or outdated factual statements.
* Common Sense
  * LLMs rely on statistical patterns in language. They might lack the ability
    to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:

* Bias and Fairness
  * LLMs trained on large-scale, real-world text data can reflect socio-cultural
    biases embedded in the training material. These models underwent careful
    scrutiny, with input data pre-processing and posterior evaluations
    reported in this card.
* Misinformation and Misuse
  * LLMs can be misused to generate text that is false, misleading, or harmful.
  * Guidelines are provided for responsible use with the model, see the
    [Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
* Transparency and Accountability
  * This model card summarizes details on the models' architecture,
    capabilities, limitations, and evaluation processes.
  * A responsibly developed open model offers the opportunity to share
    innovation by making LLM technology accessible to developers and researchers
    across the AI ecosystem.

Risks identified and mitigations:

* Perpetuation of biases: It's encouraged to perform continuous monitoring
  (using evaluation metrics, human review) and the exploration of de-biasing
  techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
  are essential. Developers are encouraged to exercise caution and implement
  appropriate content safety safeguards based on their specific product policies
  and application use cases.
* Privacy violations: Models were trained on data filtered for removal of PII
  (Personally Identifiable Information). Developers are encouraged to adhere to
  privacy regulations with privacy-preserving techniques.