---
datasets:
- llk010502/fingpt-sentiment
base_model:
- meta-llama/Llama-3.1-8B
tags:
- financial-sentiment
- fine-tuned
- LoRA
- 8bit
metrics:
- weighted_f1
library_name: transformers
pipeline_tag: text-generation
language:
- en
---

# Model Card for Llama-3.1-8B Fine-Tuned for Financial Sentiment Analysis

This model is a fine-tuned version of Meta's [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B), tailored for financial sentiment analysis. It was trained with LoRA adapters and 8-bit quantization, which reduce memory and compute requirements while preserving task performance.

## Model Details

### Model Description

- **Model type:** Causal language model fine-tuned for financial sentiment analysis
- **Language(s):** English
- **Finetuned from model:** meta-llama/Llama-3.1-8B

### Direct Use

The model can be used directly for financial sentiment analysis tasks, including:

- Analyzing the sentiment of financial news
- Sentiment classification of financial social media posts

## How to Get Started with the Model

Use the following code to load the model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model and fine-tuned LoRA adapter
base_model = "meta-llama/Llama-3.1-8B"
peft_model = "llk010502/llama3.1-8B-financial_sentiment"

# Load the base model
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    trust_remote_code=True,
    device_map="auto"
)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)

# Apply the fine-tuned LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, peft_model)
model = model.eval()
```
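Once the model and tokenizer are loaded, you still need to format the input as an instruction prompt and map the generated text back to a sentiment label. The sketch below shows one way to do this; the prompt template is an assumption modeled on the FinGPT-style instruction format, not a confirmed training format for this adapter — check the `llk010502/fingpt-sentiment` dataset for the template actually used.

```python
# Hypothetical helpers: build an instruction-style sentiment prompt and parse
# the model's completion into a label. The prompt template is an ASSUMPTION
# (FinGPT-style); verify it against the dataset used for fine-tuning.

def build_prompt(text: str) -> str:
    """Wrap a financial headline in an assumed instruction-style prompt."""
    return (
        "Instruction: What is the sentiment of this news? "
        "Please choose an answer from {negative/neutral/positive}.\n"
        f"Input: {text}\n"
        "Answer: "
    )


def extract_sentiment(generated: str, prompt: str) -> str:
    """Return the first recognized label in the text generated after the prompt."""
    completion = generated[len(prompt):].lower()
    for label in ("negative", "neutral", "positive"):
        if label in completion:
            return label
    return "neutral"  # fall back when no label appears in the completion
```

With the model loaded as above, you would tokenize `build_prompt(headline)`, call `model.generate(...)` with a small `max_new_tokens` budget, decode the output with `tokenizer.decode`, and pass the decoded string through `extract_sentiment` to obtain the label.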