A Llama Chat Model of 101M Parameters

Recommended Prompt Format

<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
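The template above is the ChatML-style format; a minimal sketch of assembling it with plain string formatting (no tokenizer required — the function name `build_prompt` is illustrative, not part of the model's API):

```python
def build_prompt(system_message: str, user_message: str) -> str:
    # Fill the ChatML-style template the model card recommends:
    # system turn, then user turn, then an open assistant turn
    # for the model to complete.
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_prompt("You are a helpful assistant.", "What is a llama?")
print(prompt)
```

The resulting string can be passed directly to the tokenizer; leaving the final assistant turn open is what cues the model to generate its reply.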

Recommended Inference Parameters

penalty_alpha: 0.5
top_k: 4
repetition_penalty: 1.105
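In Hugging Face transformers, supplying `penalty_alpha` together with a small `top_k` selects contrastive-search decoding. A hedged sketch of wiring these values into a generation call (the `max_new_tokens` value is an assumption, not from the model card):

```python
# Recommended decoding parameters, as a kwargs dict for generate().
generation_kwargs = {
    "penalty_alpha": 0.5,        # contrastive-search degeneration penalty
    "top_k": 4,                  # candidate pool size for contrastive search
    "repetition_penalty": 1.105, # discourages repeated tokens
    "max_new_tokens": 256,       # assumed value, adjust to taste
}

# Illustrative usage (requires `pip install transformers` and downloads
# the model weights, so it is commented out here):
# from transformers import pipeline
# generate = pipeline("text-generation",
#                     model="Felladrin/Smol-Llama-101M-Chat-v1")
# output = generate(prompt, **generation_kwargs)

print(generation_kwargs)
```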

Open LLM Leaderboard Evaluation Results

Detailed results are available on the Open LLM Leaderboard.

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 28.73 |
| AI2 Reasoning Challenge (25-shot) | 22.87 |
| HellaSwag (10-shot)               | 28.69 |
| MMLU (5-shot)                     | 24.93 |
| TruthfulQA (0-shot)               | 45.76 |
| Winogrande (5-shot)               | 50.04 |
| GSM8K (5-shot)                    |  0.08 |
