---
library_name: transformers
pipeline_tag: text-generation
language:
- ar
---

# 1p46G-gemma-fp-dedup-rehydr-ar-350BT-seed-6/transformers/134000

Tokenizer: `google/gemma-7b`

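Since the card lists `google/gemma-7b` as the tokenizer, loading the tokenizer from either repo should produce identical token ids. A minimal sanity check, assuming you have accepted the Gemma license and can access the gated `google/gemma-7b` repo:

```python
from transformers import AutoTokenizer

# Compare the checkpoint's tokenizer against the reference google/gemma-7b tokenizer
ckpt_tok = AutoTokenizer.from_pretrained("nouamanetazi/hf-ar-134000")
ref_tok = AutoTokenizer.from_pretrained("google/gemma-7b")  # gated repo, requires access

sample = "الزرادشتية هي ديانة انتشرت في بلاد"
assert ckpt_tok(sample)["input_ids"] == ref_tok(sample)["input_ids"]
```
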
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Arabic test prompt: "Zoroastrianism is a religion that spread in the lands of..."
TEST_PROMPT = "الزرادشتية هي ديانة انتشرت في بلاد"

save_path = "nouamanetazi/hf-ar-134000"

# Load the tokenizer (google/gemma-7b vocabulary) and encode the prompt
tokenizer = AutoTokenizer.from_pretrained(save_path)
input_ids = tokenizer(TEST_PROMPT, return_tensors="pt")["input_ids"].cuda()
print("Input prompt:", tokenizer.batch_decode(input_ids)[0])

# Load the model in bfloat16, move it to the GPU, and generate a continuation
model = AutoModelForCausalLM.from_pretrained(save_path, torch_dtype=torch.bfloat16).to("cuda")
outputs = model.generate(input_ids, max_new_tokens=100)
print("Generated text:", tokenizer.batch_decode(outputs)[0])
```
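
For quick experiments, the same checkpoint can also be run through the high-level `pipeline` API; a minimal sketch (the generation settings here are illustrative, not tuned):

```python
import torch
from transformers import pipeline

# Build a text-generation pipeline on the first CUDA device
generator = pipeline(
    "text-generation",
    model="nouamanetazi/hf-ar-134000",
    torch_dtype=torch.bfloat16,
    device=0,
)

result = generator("الزرادشتية هي ديانة انتشرت في بلاد", max_new_tokens=100)
print(result[0]["generated_text"])
```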