This is a LoRA model focused on reasoning in Chinese, based on LLaMA 3.2.

In my opinion, level 1 should follow the reasoning process and output results without a fixed format, avoiding explicit chain-of-thought formatting.

In this version, the output did not perform as I expected; I will work on improving it in the next release.

```python
# Assumes `model` and `tokenizer` have already been loaded, e.g. with
# transformers' AutoModelForCausalLM / AutoTokenizer (or Unsloth's loader).

# Build the Llama 3 chat template around the user prompt
prompt = "写一首七言绝句"  # "Write a seven-character quatrain"
reasoning_template = (
    f"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    f"Cutting Knowledge Date: December 2023\nToday Date: 26 July 2024\n\n<|eot_id|>"
    f"<|start_header_id|>user<|end_header_id|>\n\n{prompt}<|eot_id|>"
    # The assistant header cues the model to generate the assistant turn
    f"<|start_header_id|>assistant<|end_header_id|>\n\n"
)

# Generate the reasoning and decode only the newly generated tokens
reasoning_inputs = tokenizer(reasoning_template, return_tensors="pt").to(model.device)
reasoning_ids = model.generate(**reasoning_inputs, max_new_tokens=1024)
reasoning_output = tokenizer.decode(
    reasoning_ids[0, reasoning_inputs.input_ids.shape[1]:], skip_special_tokens=True
)

print("|检验输出|" + reasoning_output)  # "检验输出" = "verification output"
```
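The prompt construction above can be factored into a small helper so the Llama 3 special tokens stay consistent across calls. This is a minimal sketch with a hypothetical function name (`build_llama3_prompt` is not part of any library); it appends the assistant header, as the standard Llama 3 chat format does before generation:

```python
# Hypothetical helper: builds a Llama 3-style chat prompt for generation.
def build_llama3_prompt(
    user_prompt: str,
    system_text: str = "Cutting Knowledge Date: December 2023\nToday Date: 26 July 2024",
) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_text}\n\n<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user_prompt}<|eot_id|>"
        # Trailing assistant header cues the model to produce the assistant turn
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

template = build_llama3_prompt("写一首七言绝句")
```

The resulting string can be passed straight to `tokenizer(...)` as in the snippet above.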

Uploaded model

  • Developed by: jinliuxi
  • License: apache-2.0
  • Finetuned from model: unsloth/llama-3.2-3b-instruct-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
