|
--- |
|
license: apache-2.0 |
|
datasets: |
|
- HuggingFaceTB/finemath |
|
language: |
|
- en |
|
base_model: |
|
- meta-llama/Llama-3.2-3B |
|
--- |
|
|
|
# Model Card |
|
|
|
## Model summary |
|
|
|
This model is part of the [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath) ablations: we continue pretraining the [Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) base model on different math datasets for 60B tokens.
|
The model has 3.21B parameters and a context length of 4096 tokens. It was trained on **60B tokens** using a mix of 50% FineMath-3+ and 50% InfiWebMath-3+ from the [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath) dataset.
|
|
|
- **License**: Apache 2.0
|
- **Languages**: English |
|
|
|
## Use |
|
|
|
### Intended use |
|
|
|
This model was trained on English math data and is not instruction-tuned; it is intended for text completion in English with a focus on math.
|
It is important to note that the primary intended use case of this model is to compare its performance with other models trained under the same conditions. This model is not necessarily the best possible outcome achievable with the given dataset. |
|
|
|
### Generation |
|
|
|
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/finemath-ablation-finemath-infimath-3plus"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

# Encode a prompt, generate a continuation, and decode it back to text
inputs = tokenizer.encode("Machine Learning is", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
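
By default, `generate` produces only a short greedy continuation. For a longer completion on a math-style prompt you can pass standard generation arguments. The snippet below continues the example above; the prompt and sampling settings are illustrative choices, not recommended values.

```python
# Longer, sampled completion on a math prompt (illustrative settings)
inputs = tokenizer.encode("To solve the equation 3x + 5 = 20, we first", return_tensors="pt").to(device)
outputs = model.generate(
    inputs,
    max_new_tokens=128,  # length of the continuation
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.6,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```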
|
|
|
## Intermediate checkpoints |
|
|
|
We are releasing intermediate checkpoints for this model in separate branches, at intervals of 10,000 training steps (10B tokens). The branches are named after the number of training tokens seen, e.g. `10B`.
|
|
|
You can load a specific model revision with `transformers` using the argument `revision`: |
|
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/finemath-ablation-finemath-infimath-3plus", revision="10B")
```
|
You can access all the revisions for the model via the following code:
|
```python
from huggingface_hub import list_repo_refs

# List all branches (revisions) of the model repository
out = list_repo_refs("HuggingFaceTB/finemath-ablation-finemath-infimath-3plus")
print([b.name for b in out.branches])
```
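
Since these ablation models are mainly meant for comparison, you may want to load every intermediate checkpoint in turn. The loop below is a minimal sketch that assumes all non-`main` branches follow the token-count naming described above:

```python
from huggingface_hub import list_repo_refs
from transformers import AutoModelForCausalLM

repo_id = "HuggingFaceTB/finemath-ablation-finemath-infimath-3plus"
revisions = [b.name for b in list_repo_refs(repo_id).branches if b.name != "main"]

for revision in sorted(revisions):
    # Load the checkpoint stored in this branch (e.g. "10B")
    model = AutoModelForCausalLM.from_pretrained(repo_id, revision=revision)
    print(f"{revision}: {model.num_parameters():,} parameters")
```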
|
|
|
## Training |
|
### Model |
|
- **Architecture**: Llama3 (see the configuration snippet below)
|
- **Pretraining steps**: 60k |
|
- **Pretraining tokens**: 60B |
|
- **Precision**: bfloat16 |
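
If you want to double-check the architecture details listed above, the configuration can be inspected without downloading the model weights; this is a minimal sketch using the standard `transformers` config API:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("HuggingFaceTB/finemath-ablation-finemath-infimath-3plus")
# Llama-style config fields: architecture type, depth, width, and precision
print(config.model_type, config.num_hidden_layers, config.hidden_size, config.torch_dtype)
```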
|
|
|
### Hardware |
|
- **GPUs**: 64 H100 |
|
|
|
### Software |
|
- [nanotron](https://github.com/huggingface/nanotron/) for training |
|
- [datatrove](https://github.com/huggingface/datatrove) for tokenization |
|
- [lighteval](https://github.com/huggingface/lighteval) for evaluation |
|
|
|
## Evaluation |
|
We used the SmolLM2 setup to evaluate all our ablation models with `lighteval`. You can find the details here: https://github.com/huggingface/smollm/tree/main/evaluation#smollm2-base-models |
|
|
|
## Limitations |
|
This model was predominantly trained on English math data, potentially limiting its performance in other languages. Furthermore, the model's behavior is influenced by the quality and diversity of its training data, which may include biases and harmful content. |