---
library_name: transformers
tags:
- code
- math
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# Huginn-0125
This is Huginn, version 01/25. This is a latent recurrent-depth model with 3.5B parameters, trained for 800B tokens. This is a proof-of-concept model, but surprisingly capable in reasoning and code given its training budget and size.
All details on this model can be found in the tech report: "Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach."
## Table of Contents
1. [How to Use](#downloading-and-using-the-model)
2. [Model Summary](#model-summary)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)
## Downloading and Using the Model
Load the model like this:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = torch.device("cuda")  # or "cpu"
model = AutoModelForCausalLM.from_pretrained("tomg-group-umd/huginn-0125", torch_dtype=torch.bfloat16, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("tomg-group-umd/huginn-0125")
```
### Fixed depth Usage
By providing the argument `num_steps`, the model will execute a pass with that amount of compute:
```python
input_ids = tokenizer.encode("The capital of Westphalia is", return_tensors="pt", add_special_tokens=True).to(device)[:, :-1]
model.eval()
model.to(device)
model(input_ids, num_steps=32)
```
The model has about 1.5B parameters in non-recurrent code, 0.5B parameters in the embedding, and 1.5B recurrent parameters, so, as a guideline,
the number of materialized parameters is `num_steps * 1.5B + 2B`. Playing with this parameter is what makes this model interesting (and different from fixed-depth transformers)!
The model is trained to accept an arbitrary number of steps. However, using fewer than 4 steps will result in very coarse answers. If given enough context to reason about, benchmarks show the model improving up to around `num_steps=64`. Beyond that, more steps generally do not hurt, but we see no further improvements.
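As a quick sanity check on this guideline, the arithmetic can be spelled out directly (a back-of-the-envelope helper for illustration, not part of the model API):
```python
# Rough count of materialized parameters (in billions) when unrolling the
# recurrence: num_steps * 1.5B recurrent parameters + 2B fixed parameters.
def materialized_params_billion(num_steps: int) -> float:
    return num_steps * 1.5 + 2.0

for steps in (4, 16, 32, 64):
    print(f"num_steps={steps}: ~{materialized_params_billion(steps):.0f}B materialized parameters")
# e.g. num_steps=32 unrolls to roughly 32 * 1.5B + 2B = 50B materialized parameters.
```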
### Inference
The model was trained with bfloat16-mixed precision, so we recommend using `bfloat16` to run inference (or AMP bfloat16-mixed precision, if you really want). All benchmarks were evaluated in pure `bfloat16`.
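If you prefer to keep the weights in full precision, an AMP-style autocast also matches this recommendation (a small sketch; the `.logits` field assumes the usual causal-LM output of this model):
```python
# The loading snippet above already uses pure bfloat16 (torch_dtype=torch.bfloat16).
# Alternatively, run bfloat16 autocast around a float32 copy of the model:
with torch.autocast(device_type="cuda", dtype=torch.bfloat16), torch.no_grad():
    logits = model(input_ids, num_steps=32).logits
```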
### Sampling
The model can be used like a normal HF model to generate text with KV-caching working as expected. You can provide `num_steps` directly to the `generate` call, for example:
```python
from transformers import GenerationConfig

model.eval()
config = GenerationConfig(max_length=256, stop_strings=["<|end_text|>", "<|end_turn|>"],
use_cache=True,
do_sample=False, temperature=None, top_k=None, top_p=None, min_p=None,
return_dict_in_generate=True,
eos_token_id=65505,bos_token_id=65504,pad_token_id=65509)
input_ids = tokenizer.encode("The capital of Westphalia is", return_tensors="pt", add_special_tokens=True).to(device)[:, :-1]
outputs = model.generate(input_ids, config, tokenizer=tokenizer, num_steps=16)
```
*Note*: `num_steps` and other model arguments CANNOT be included in the `GenerationConfig`, as they would shadow the model's own arguments at runtime.
### Chat Templating
The model was not finetuned or post-trained, but due to the inclusion of instruction data during pretraining, it natively understands its chat template. You can chat with the model like so:
```python
messages = []
messages.append({"role": "system", "content": "You are a helpful assistant."})
messages.append({"role": "user", "content": "What do you think of Goethe's Faust?"})
formatted_messages = [{"role": "Huginn" if m["role"] == "assistant" else m["role"], "content": m["content"].strip()} for m in messages]
chat_input = tokenizer.apply_chat_template(formatted_messages, tokenize=False, add_generation_prompt=True)
print(chat_input)
input_ids = tokenizer.encode(chat_input, return_tensors="pt", add_special_tokens=False).to(device)
model.generate(input_ids, config, num_steps=64, tokenizer=tokenizer)
```
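To read the reply, decode the tokens generated after the prompt (a small sketch; `outputs.sequences` assumes `return_dict_in_generate=True` as in the config above):
```python
outputs = model.generate(input_ids, config, num_steps=64, tokenizer=tokenizer)
reply = tokenizer.decode(outputs.sequences[0, input_ids.shape[1]:], skip_special_tokens=True)
print(reply)
```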
### KV-cache Details
The model requires its own KV-cache implementation, `HuginnDynamicCache`; otherwise the KV-caches of later calls to the recurrent block will overwrite the earlier ones.
This is handled automatically by the model implementation, but may break with future Hugging Face updates. If you do not use `generate`, but implement your own generation loop, use a pattern like this:
```python
# first step:
past_key_values = None
outputs = model(input_ids=input_ids, use_cache=True, past_key_values=past_key_values)
past_key_values = outputs.past_key_values # Should be an instance of HuginnDynamicCache
# next step
outputs = model(input_ids=input_ids, use_cache=True, past_key_values=past_key_values)
```
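For reference, a minimal greedy decoding loop built on this pattern could look as follows (an illustrative sketch, assuming the model returns standard causal-LM `logits` alongside the `HuginnDynamicCache`; it is not the tuned `generate` implementation):
```python
import torch

@torch.no_grad()
def greedy_decode(model, input_ids, max_new_tokens=32, num_steps=32, eos_token_id=65505):
    past_key_values = None
    tokens, current_input = input_ids, input_ids
    for _ in range(max_new_tokens):
        outputs = model(input_ids=current_input, use_cache=True,
                        past_key_values=past_key_values, num_steps=num_steps)
        past_key_values = outputs.past_key_values  # HuginnDynamicCache, reused across steps
        next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        tokens = torch.cat([tokens, next_token], dim=-1)
        current_input = next_token  # only the new token; the cache holds the earlier context
        if next_token.item() == eos_token_id:  # <|end_text|>
            break
    return tokens
```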
## Advanced Features
### Per-Token Adaptive Compute
The model can also spend a different amount of compute on each token, exiting the recurrence early once the latent state has converged; see the tech report for details on the exit criteria. A generation call in this mode looks like the following:
```python
from transformers import DynamicCache

model.to(device=device, dtype=torch.bfloat16)
model.eval()
past_key_values = DynamicCache()
config = GenerationConfig(max_length=64, stop_strings=["<|end_text|>", "<|end_turn|>"],
use_cache=True, past_key_values=past_key_values,
do_sample=False, temperature=None, top_k=None, top_p=None, min_p=None,
return_dict_in_generate=True,
eos_token_id=65505,bos_token_id=65504,pad_token_id=65509)
# Note: num_steps and other model arguments CANNOT be included here, they will shadow model args at runtime
input_ids = tokenizer.encode("The capital of Westphalia is", return_tensors="pt", add_special_tokens=True).to(device)[:, :-1]
outputs = model.generate(input_ids, config, tokenizer=tokenizer)
```
### KV-cache Sharing
## Model Summary
The model is primarily structured around decoder-only transformer blocks. However, these blocks are organized into three functional groups: the __prelude__ \\(P\\),
which embeds the input data into a latent space using multiple transformer layers, then the core __recurrent block__ \\(R\\), which is the central unit of recurrent
computation modifying states \\(\mathbf{s} \in \mathbb{R}^{n \times h}\\), and finally the __coda__ \\(C\\), which un-embeds from latent space using several layers and
also contains the prediction head of the model.
Given a number of recurrent iterations \\(r\\), and a sequence of input tokens \\(\mathbf{x} \in V^n\\) these groups are used in the following way to produce output
probabilities \\(\mathbf{p} \in \mathbb{R}^{n \times |V|}\\).
$$\mathbf{e} = P(\mathbf{x})$$
$$\mathbf{s}_0 \sim \mathcal{N}(\mathbf{0}, \sigma^2 I_{n\cdot h})$$
$$\mathbf{s}_i = R(\mathbf{e}, \mathbf{s}_{i-1}) \; \textnormal{for} \; i \in \lbrace 1, \dots, r \rbrace$$
$$\mathbf{p} = C(\mathbf{s}_r)$$
where \\(\sigma\\) is the standard deviation of the initial random state. Given an initial random state \\(\mathbf{s}_0\\), the model repeatedly applies the core
block \\(R\\), which accepts the latent state \\(\mathbf{s}_{i-1}\\) and the embedded input \\(\mathbf{e}\\) and outputs a new latent state \\(\mathbf{s}_i\\).
After finishing all iterations, the coda block processes the last state and produces the probabilities of the next token.
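Schematically, this forward pass can be written as follows (a purely illustrative sketch with placeholder modules `prelude`, `core_block`, and `coda`; the actual implementation lives in the model's remote code):
```python
import torch

def recurrent_forward(prelude, core_block, coda, x, r, sigma=1.0):
    """Illustrative forward pass: embed, recur r times from a random initial state, un-embed."""
    e = prelude(x)                   # e = P(x), embedded inputs of shape (n, h)
    s = sigma * torch.randn_like(e)  # s_0 ~ N(0, sigma^2 I)
    for _ in range(r):
        s = core_block(e, s)         # s_i = R(e, s_{i-1})
    return coda(s)                   # p = C(s_r), next-token probabilities
```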
Please refer to the paper for performance on standard benchmarks.
## Limitations
Our checkpoint is trained for only 47,000 steps on a broadly untested data mixture, and the learning rate was never cooled down from its peak. As an academic project, the model is trained only on publicly available data, and the 800B token count, while large in comparison to older fully open-source models such as the Pythia series, is small in comparison to modern open-source efforts such as OLMo, and tiny in comparison to the datasets used to train industrial open-weight models.
## License
This model is released under the [apache-2.0](https://choosealicense.com/licenses/apache-2.0/) license.
## Citation
```
@article{geiping2025scaling,
title={Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach},
author={Jonas Geiping and Sean McLeish and Neel Jain and John Kirchenbauer and Siddharth Singh and Brian R. Bartoldson and Bhavya Kailkhura and Abhinav Bhatele and Tom Goldstein},
year={2025},
eprint={2502.},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contact
Please feel free to contact us with any questions, or open a discussion thread on Hugging Face.