Mulberry

Mulberry-llama-11b is a step-by-step reasoning model trained on the Mulberry-260K SFT dataset, which was generated through collective knowledge search using CoMCTS.

For reasoning inference, please refer to our GitHub repository.
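As a minimal sketch (not taken from the repository), the model can likely be loaded with the standard transformers API for Llama-3.2-Vision models, since it is fine-tuned from Llama-3.2-11B-Vision-Instruct. The image path and question below are placeholders:

```python
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor

model_id = "HuanjinYao/Mulberry_llama_11b"

# Load model in BF16 (matching the released tensor type) and spread across GPUs.
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder image and question; replace with your own.
image = Image.open("example.jpg")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image? Think step by step."},
        ],
    }
]

# Build the chat prompt and run generation.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(output[0], skip_special_tokens=True))
```

For the exact prompt format used to elicit the step-by-step reasoning traces, defer to the official repository linked above.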

Paper: https://arxiv.org/abs/2412.18319

Code: https://github.com/HJYao00/Mulberry

More Details

Base Model: https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct

Training Framework: LLaMA-Factory

Hardware: 8x NVIDIA H100

Model Size: 10.7B params (Safetensors)

Tensor Type: BF16
