This is not an official SOLAR model from Upstage; it's my attempt at recreating these powerful models using the new Llama-3.

Further Testing Coming Soon

The original SOLAR model used a mix of these datasets:

  • c-s-ale/alpaca-gpt4-data (SFT)
  • Open-Orca/OpenOrca (SFT)
  • in-house generated data utilizing Metamath [2] (SFT, DPO)
  • Intel/orca_dpo_pairs (DPO)
  • allenai/ultrafeedback_binarized_cleaned (DPO)

I used:

  • llm-wizard/alpaca-gpt4-data
  • Crystalcareai/slimorca-dedup-alpaca-100k
  • meta-math/MetaMathQA
  • (DPO Datasets Coming Soon)
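The SFT datasets listed above are Alpaca-style instruction/response records. A minimal sketch of how such a record is typically rendered into a single training prompt, using the standard Alpaca template (the exact template used for this model is an assumption):

```python
def format_alpaca(instruction: str, output: str, context: str = "") -> str:
    """Render one Alpaca-style record into a training prompt.

    Uses the standard Alpaca template; whether this exact template was
    used to train this model is an assumption for illustration.
    """
    if context:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{context}\n\n"
            f"### Response:\n{output}"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:\n{output}"
    )


# Example: one record without an input field
prompt = format_alpaca("What is 2 + 2?", "4")
```

Each formatted prompt is then tokenized and fed to the trainer as a plain text field.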

More Info:

  • Developed by: cookinai
  • License: apache-2.0
  • Finetuned from model: unsloth/llama-3-8b-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Special thanks to Upstage's SOLAR project for the inspiration behind this model.
