AntaresAI
We introduce Antares-7b-slovenian, an instruction-tuned and aligned model based on Mixtral-8x7B-v0.1 and Llama-2-70b-hf, fine-tuned for the Slovenian language.
Please refer to the evaluation results table for details.
Instruction Fine-tuning Strategy
We utilize state-of-the-art instruction fine-tuning methods, including supervised fine-tuning (SFT) and direct preference optimization (DPO).
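The card does not include training code; as a rough illustration of the DPO objective mentioned above, here is a minimal sketch of the per-pair DPO loss. The function name and scalar inputs are hypothetical simplifications: in practice the log-probabilities are summed token log-probs of whole responses under the policy and a frozen reference model.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair (hypothetical helper).

    Each argument is the total log-probability of the chosen or
    rejected response under the trained policy or the frozen
    reference model; beta controls the strength of the KL penalty.
    """
    # Log-ratios of policy vs. reference for each response
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(logits)), computed stably as log1p(exp(-logits))
    return math.log1p(math.exp(-logits))
```

When the policy matches the reference the loss is log 2; it shrinks as the policy assigns relatively more probability to the chosen response than to the rejected one.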
Data Contamination Test Results
Results will be updated soon.
Evaluation Results
Results will be updated soon.
Contact Us
Questions and suggestions are welcome in the discussion tab.