---
license: mit
license_link: https://huggingface.co/OpenPipe/Deductive-Reasoning-Qwen-14B/blob/main/LICENSE
language:
  - en
pipeline_tag: text-generation
base_model:
  - Qwen/Qwen2.5-14B-Instruct
tags:
  - chat
library_name: transformers
---

# Deductive-Reasoning-Qwen-14B


Deductive Reasoning Qwen 14B is a reinforcement fine-tune of Qwen 2.5 14B Instruct, trained by OpenPipe to solve challenging deduction problems from the Temporal Clue dataset!
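Since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, the model can be loaded with the standard `transformers` text-generation pipeline. The sketch below is a minimal, illustrative example (the prompt is hypothetical, and loading a 14B model requires substantial GPU memory or quantization):

```python
# Illustrative usage sketch via the transformers text-generation pipeline.
# Note: a 14B model needs significant GPU memory; device_map="auto" lets
# accelerate shard it across available devices.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="OpenPipe/Deductive-Reasoning-Qwen-14B",
    device_map="auto",
)

# Hypothetical deduction-style prompt, passed in chat format.
messages = [
    {
        "role": "user",
        "content": (
            "Three suspects each made one statement; exactly one is lying. "
            "Determine who committed the crime and explain your reasoning."
        ),
    }
]

result = generator(messages, max_new_tokens=512)
print(result[0]["generated_text"][-1]["content"])
```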

Here are some additional resources to check out:

If you're interested in training your own models with reinforcement learning or just chatting, feel free to reach out or email Kyle directly at [email protected]!