# Silma

Silma is a leading Generative AI startup dedicated to empowering Arabic speakers with state-of-the-art AI solutions.

## Our Flagship Model: Silma 1.0

## Our Team

Our team is composed of seasoned **Arabic AI experts** who understand the nuances of the language and cultural considerations, enabling us to build solutions that truly resonate with Arabic users.

**Authors**: [silma.ai](https://silma.ai)

### Usage

Below we share some code snippets to help you get started quickly with running the model. First, install the Transformers library with:
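The install command itself falls outside this excerpt; the standard Transformers install, assumed here, is:

```shell
pip install -U transformers
```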
```python
# ... (pipeline construction and the `messages` list are elided in this excerpt)
outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)

# السلام عليكم ورحمة الله وبركاته، أود أن أعتذر عن عدم الحضور إلى العمل اليوم بسبب مرضي. أشكركم على تفهمكم.
# (Peace be upon you and the mercy and blessings of God. I would like to
#  apologize for being absent from work today due to my illness. Thank you
#  for your understanding.)
```
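For context on the indexing above: a chat pipeline returns one entry per input conversation, whose `generated_text` is the full message list with the assistant's reply appended last. A small sketch against mocked output (the message contents are illustrative, not real model output):

```python
# Mocked shape of a chat pipeline's return value (illustrative, not real output).
outputs = [
    {
        "generated_text": [
            {"role": "user", "content": "Write a short apology for missing work."},
            {"role": "assistant", "content": "  I apologize for my absence today.  "},
        ]
    }
]

# Same indexing as the snippet above: last message, assistant's content, trimmed.
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)  # → I apologize for my absence today.
```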

```python
model = AutoModelForCausalLM.from_pretrained(
    # ... (model id and other loading options are elided in this excerpt)
    torch_dtype=torch.bfloat16,
)

input_text = "أيهما أخف وزنا, كيلو من الحديد أم كيلو من القطن؟"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))

# كلاهما لهما نفس الوزن.
# (Which is lighter, a kilo of iron or a kilo of cotton? — They both have the same weight.)
```
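A note on the decode step above: for decoder-only models, `generate` returns the prompt ids followed by the newly generated ids, so `tokenizer.decode(outputs[0])` reproduces the input text as well. A mocked sketch of slicing the prompt off (the toy ids are illustrative):

```python
# Mocked sketch: generate() output = prompt ids + newly generated ids,
# so slicing by the prompt length keeps only the new tokens.
prompt_ids = [5, 9, 2]                 # toy ids standing in for the tokenized prompt
generated = prompt_ids + [7, 4]        # toy stand-in for model.generate(...)[0]
new_token_ids = generated[len(prompt_ids):]
print(new_token_ids)  # → [7, 4]
```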

You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:
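The snippet that follows this sentence is cut off in this excerpt. As a rough illustration of what a chat template produces, here is a hand-rolled stand-in (the turn markers below are an assumption, not the model's actual template, which ships with its tokenizer):

```python
# Hand-rolled stand-in for tokenizer.apply_chat_template, for illustration only.
# The <start_of_turn>/<end_of_turn> markers mimic Gemma-style chat templates
# (an assumption); the real template is defined by the model's tokenizer.
def apply_chat_template(messages, add_generation_prompt=True):
    prompt = ""
    for message in messages:
        prompt += f"<start_of_turn>{message['role']}\n{message['content']}<end_of_turn>\n"
    if add_generation_prompt:
        prompt += "<start_of_turn>model\n"  # cue the model to answer next
    return prompt

messages = [{"role": "user", "content": "Which is lighter, a kilo of iron or a kilo of cotton?"}]
print(apply_chat_template(messages))
```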