
# 🚀 Qwen2.5-3B Fine-Tuned on BBH (Dyck Languages) - Model Card

## 📌 Model Overview

- **Model Name:** Qwen2.5-3B Fine-Tuned on BBH (Dyck Languages)
- **Base Model:** Qwen2.5-3B-Instruct
- **Fine-Tuning Dataset:** BBH (BigBench Hard) - Dyck Languages
- **Task:** Causal Language Modeling (CLM)
- **Fine-Tuning Objective:** Improve performance on Dyck language sequence completion (correctly closing nested parentheses, brackets, and braces)

## 📌 Dataset Information

This model was fine-tuned on the Dyck Languages subset of the BigBench Hard (BBH) dataset.

Dataset characteristics:

- **Task Type:** Sequence completion of balanced parentheses
- **Input Format:** A sequence of opening parentheses, brackets, or braces with missing closing elements
- **Target Labels:** The correct sequence of closing parentheses, brackets, or braces

**Example:**

```
Input: Complete the rest of the sequence, making sure that the parentheses are closed properly. Input: [ [

Target: ] ]
```

This dataset evaluates a model's ability to correctly complete structured sequences, a capability that underpins programming-language syntax handling, formal language understanding, and symbolic reasoning.
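To make the task concrete, here is a small reference helper (hypothetical, not part of the fine-tuning pipeline) that computes the correct closing sequence for a Dyck-language prefix using a stack, mirroring what the model is trained to produce:

```python
# Hypothetical reference implementation of the Dyck-languages task:
# given a whitespace-separated prefix of brackets, return the closing
# sequence that balances it. Not part of the released model code.
PAIRS = {"(": ")", "[": "]", "{": "}", "<": ">"}

def close_sequence(prefix: str) -> str:
    """Return the bracket string that correctly closes `prefix`."""
    stack = []
    for ch in prefix.split():
        if ch in PAIRS:
            stack.append(ch)            # opening bracket: push
        elif stack and ch == PAIRS[stack[-1]]:
            stack.pop()                 # matching closer: pop
        else:
            raise ValueError(f"unbalanced prefix at {ch!r}")
    # Close the remaining open brackets in reverse (LIFO) order.
    return " ".join(PAIRS[ch] for ch in reversed(stack))

print(close_sequence("[ ["))  # → "] ]", matching the example above
```

A fine-tuned model's output on a given input can be checked for exact match against this reference answer, which is how accuracy on the BBH Dyck Languages split is typically scored.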

## 📌 Model Details

- **Model size:** 3.09B params
- **Tensor type:** F32
- **Format:** Safetensors