---
tags:
- autotrain
- text-generation
widget:
- text: 'I love AutoTrain because '
license: openrail
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
---
# Model Trained Using AutoTrain
Load the model and its tokenizer from the Hugging Face Hub:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Download the fine-tuned model and tokenizer from the Hub
model_path = "Andyrasika/mistral_autotrain_llm"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
```
Generate a continuation for a prompt:

```python
# Encode the prompt, generate a continuation, and decode it back to text
input_text = "Health benefits of regular exercise"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output = model.generate(input_ids)
predicted_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(predicted_text)
```
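The `generate` call above uses the model's default settings, which produce a short greedy continuation. Longer or more varied outputs can be requested via a `GenerationConfig`; a minimal sketch, where the parameter values are illustrative assumptions rather than settings from this model card:

```python
from transformers import GenerationConfig

# Illustrative decoding settings (not tuned for this model)
gen_config = GenerationConfig(
    max_new_tokens=64,   # allow a longer continuation than the default
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # soften the token distribution
    top_p=0.9,           # nucleus sampling cutoff
)

# Would be passed as: model.generate(input_ids, generation_config=gen_config)
print(gen_config.max_new_tokens)
```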
Example output (truncated at the default generation length):
```
Health benefits of regular exercise include improved cardiovascular health, increased strength and flexibility, improved mental
```
Resources:
- https://github.com/huggingface/autotrain-advanced/issues/339
- https://www.kdnuggets.com/how-to-use-hugging-face-autotrain-to-finetune-llms
- https://github.com/hiyouga/LLaMA-Factory#llama-factory-training-and-evaluating-large-language-models-with-minimal-effort