# Model Card for saribasmetehan/mistral-7b-turkish-finance

## Model Description
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1, adapted with the LoRA (Low-Rank Adaptation) method. It was trained on the Turkish finance dataset umarigan/turkiye_finance_qa to better understand Turkish text in the financial domain and to perform well on related tasks.
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card was automatically generated.
- Developed by: saribasmetehan (LinkedIn: @saribasmetehan)
- Shared by: saribasmetehan
- Model type: Mistral 7B fine-tuned with LoRA
- Language(s) (NLP): Turkish
- Finetuned from model: mistralai/Mistral-7B-v0.1
- Fine-tuning steps and model usage: GitHub @saribasmetehan
## Bias, Risks, and Limitations

### Bias
- Language Bias: Since the model is trained only on Turkish data, it may not perform well in other languages.
- Domain Bias: As the model is trained on Turkish finance data, its performance may be lower in other domains (e.g., healthcare, technology).
- Data Bias: The dataset was collected from specific sources within a certain time frame, so biases in the data may be reflected in the model's outputs.
### Risks

- Misinformation: The model may generate incorrect information; verify the accuracy of its outputs.
- Over-reliance: Users should not rely solely on the model's outputs and should seek human review and approval for critical decisions.
- Ethical Concerns: The model may raise ethical and privacy concerns when working with sensitive financial information.

### Limitations
- Limited Knowledge Base: The model's knowledge is limited to its training data and may not reflect the most recent information or events.
- Performance in Complex Scenarios: The model may not perform adequately in very complex financial scenarios or those requiring in-depth analysis.
- Resource Intensive: Running a 7B-parameter model requires significant computational power and resources.
## Fine-Tuning Process

You can use the fine-tuning guide linked from GitHub (@saribasmetehan).
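For orientation, below is a minimal sketch of a LoRA fine-tuning setup consistent with the training details further down. The 4-bit loading, the LoRA hyperparameters (`r`, `lora_alpha`, `target_modules`), and the dataset field names (`soru`/`cevap`) are illustrative assumptions, not the exact configuration used for this model.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token

# Load the base model in 4-bit (assumed) so it fits on a single GPU.
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16
    ),
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters; only these low-rank matrices are trained.
# r, alpha, dropout, and target modules are illustrative values.
model = get_peft_model(
    model,
    LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM",
    ),
)
model.print_trainable_parameters()

dataset = load_dataset("umarigan/turkiye_finance_qa")

def tokenize(example):
    # Field names ("soru"/"cevap") are assumed; check the dataset card.
    text = f"Soru: {example['soru']}\nCevap: {example['cevap']}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset["train"].map(
    tokenize, remove_columns=dataset["train"].column_names
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="mistral-7b-turkish-finance",
        learning_rate=2e-5,             # from Training Details below
        per_device_train_batch_size=8,  # from Training Details below
        num_train_epochs=1,
        fp16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```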
## How to Get Started with the Model

You can use the usage examples linked from GitHub (@saribasmetehan).
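A minimal inference sketch, assuming this repository hosts a LoRA adapter that is loaded on top of the mistralai/Mistral-7B-v0.1 base model; the prompt format shown is an assumption and should be matched to the format used during training.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "saribasmetehan/mistral-7b-turkish-finance"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Load the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)

# Prompt format is an assumption; adjust to match the training format.
prompt = "Soru: Enflasyon nedir?\nCevap:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```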
## Training Details

- Learning rate: 2e-5
- Per-device train batch size: 8
- Trainable params: 21,260,288
- All params: 3,773,331,456
- Trainable ratio: 0.5634%
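For reference, the trainable ratio above is simply trainable params divided by all params. The all-params count being below 7B is consistent with the base weights having been loaded in 4-bit, where packed weights are counted at reduced size (an assumption about the training setup).

```python
# Reproduce the trainable-parameter ratio reported above.
trainable, total = 21_260_288, 3_773_331_456
print(f"{100 * trainable / total:.4f}%")  # -> 0.5634%
```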
### Training Data

[umarigan/turkiye_finance_qa](https://huggingface.co/datasets/umarigan/turkiye_finance_qa)
## Model Card Contact

LinkedIn: @saribasmetehan