---
tags:
- autotrain
- text-generation
widget:
- text: 'I love AutoTrain because '
license: openrail
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
---

# Model Trained Using AutoTrain

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "Andyrasika/mistral_autotrain_llm"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
```

```python
input_text = "Health benefits of regular exercise"
input_ids = tokenizer.encode(input_text, return_tensors="pt")

output = model.generate(input_ids)

# skip_special_tokens=True strips markers such as <s> from the decoded text
predicted_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(predicted_text)
```

Output:

```
Health benefits of regular exercise include improved cardiovascular health, increased strength and flexibility, improved mental
```

Resources:
- https://github.com/huggingface/autotrain-advanced/issues/339
- https://www.kdnuggets.com/how-to-use-hugging-face-autotrain-to-finetune-llms
- https://github.com/hiyouga/LLaMA-Factory#llama-factory-training-and-evaluating-large-language-models-with-minimal-effort
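
Note that calling `model.generate(input_ids)` with no arguments uses the library's default generation settings (greedy decoding with a short output-length cap), which may be why the sample output above stops mid-sentence. A minimal sketch of making those settings explicit via `GenerationConfig` — the parameter values here are illustrative assumptions, not tuned for this model:

```python
from transformers import GenerationConfig

# Illustrative settings (assumptions, not tuned for this model):
# sample with moderate temperature and nucleus filtering,
# and allow up to 64 newly generated tokens.
gen_config = GenerationConfig(
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)

print(gen_config.max_new_tokens)
```

The config would then be passed to generation, e.g. `model.generate(input_ids, generation_config=gen_config)`.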