Model description

This model was fine-tuned on roughly 20,000 examples from the HuggingFaceH4/ultrachat_200k dataset; additional checkpoints are planned for a later release.
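As a minimal sketch, a ~20,000-example subset like the one described above could be drawn with the `datasets` library's split-slicing syntax. The `train_sft` split name is taken from the ultrachat_200k dataset card; the exact subset used for this model is not specified here.

```python
def load_ultrachat_subset(n_examples: int = 20_000):
    """Load the first `n_examples` rows of the UltraChat SFT split.

    This is an illustrative sketch, not the exact selection used to train
    this model. The "train_sft" split name comes from the
    HuggingFaceH4/ultrachat_200k dataset card.
    """
    # Lazy import: requires the `datasets` package and network access.
    from datasets import load_dataset

    return load_dataset("HuggingFaceH4/ultrachat_200k", split=f"train_sft[:{n_examples}]")
```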

This model has not been aligned with DPO. DPO-aligned versions of this model, trained on various datasets, will be released in separate repositories.

Evaluation

In informal personal testing, the model performed well on mathematics, history, and coding tasks.

However, because this model requires trust_remote_code=True, it cannot be submitted to the Open LLM Leaderboard. A llama-fied version that can be submitted will be released instead.

Recommended inference parameters

temperature=0.2, top_p=0.14, top_k=12, repetition_penalty=1.1
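The parameters above can be passed straight to `model.generate`. The sketch below shows one way to do this with the `transformers` library, assuming this repository's model name; `trust_remote_code=True` is required, as noted above, and `max_new_tokens` is an illustrative choice.

```python
# Recommended sampling parameters from this model card.
GENERATION_KWARGS = {
    "temperature": 0.2,
    "top_p": 0.14,
    "top_k": 12,
    "repetition_penalty": 1.1,
    "do_sample": True,  # sampling must be enabled for temperature/top_p/top_k to apply
}

def generate(prompt: str, model_name: str = "Locutusque/UltraQwen-1_8B") -> str:
    """Load the model and sample a completion with the recommended parameters."""
    # Lazy import: requires the `transformers` package (and network access
    # on first run to download the weights).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=256, **GENERATION_KWARGS)
    # Decode only the newly generated tokens, not the prompt.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```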

License

Please make sure to read the Qwen licensing agreement before using this model.

Base model

Qwen/Qwen-1_8B