---
license: apache-2.0
datasets:
- PrimeIntellect/fineweb-edu
- PrimeIntellect/fineweb
- PrimeIntellect/StackV1-popular
- mlfoundations/dclm-baseline-1.0-parquet
- open-web-math/open-web-math
language:
- en
pipeline_tag: text-generation
---
# INTELLECT-1-step-78000

This is an intermediate checkpoint of INTELLECT-1. You can find the [final version](https://huggingface.co/PrimeIntellect/INTELLECT-1) as well as the [instruct version](https://huggingface.co/PrimeIntellect/INTELLECT-1-Instruct).


|  | Step | Model URL |
|---|------|-----------|
| | 17000 | https://huggingface.co/PrimeIntellect/INTELLECT-1-step-17000 |
| | 28600 | https://huggingface.co/PrimeIntellect/INTELLECT-1-step-28600 |
| | 39200 | https://huggingface.co/PrimeIntellect/INTELLECT-1-step-39200 |
| | 49200 | https://huggingface.co/PrimeIntellect/INTELLECT-1-step-49200 |
| | 59200 | https://huggingface.co/PrimeIntellect/INTELLECT-1-step-59200 |
| | 69200 | https://huggingface.co/PrimeIntellect/INTELLECT-1-step-69200 |
| -> | 78000 | https://huggingface.co/PrimeIntellect/INTELLECT-1-step-78000 |
| | 88000 | https://huggingface.co/PrimeIntellect/INTELLECT-1-step-88000 |
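
Any of these checkpoints can be loaded with the Hugging Face `transformers` library. A minimal sketch, assuming `transformers` and PyTorch are installed (the prompt is purely illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load this intermediate checkpoint; any repo id from the table above works.
repo_id = "PrimeIntellect/INTELLECT-1-step-78000"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("The universe is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```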

## **Model Overview**
**INTELLECT-1** is the first 10-billion-parameter language model collaboratively trained from scratch on 1 trillion tokens of English text and code.

**INTELLECT-1** was trained on up to 14 concurrent nodes distributed across 3 continents, with contributions from 30 independent community contributors providing compute.
The training code uses the [prime framework](https://github.com/PrimeIntellect-ai/prime), a scalable distributed training framework designed for fault-tolerant, dynamically scaling, high-performance training on unreliable, globally distributed workers.
The key abstraction that allows dynamic scaling is the `ElasticDeviceMesh`, which manages dynamic global process groups for fault-tolerant communication across the internet and local process groups for communication within a node.
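
A rough sketch of the two-level idea (a hypothetical interface for illustration, not the actual `ElasticDeviceMesh` API; see the prime repository for the real implementation):

```python
import torch.distributed as dist

# Hypothetical sketch: a fast local group within a node plus a global
# group spanning all live workers. The real ElasticDeviceMesh in prime
# rebuilds these groups dynamically and fault-tolerantly.
class TwoLevelMesh:
    def __init__(self, intra_node_ranks: list[int]):
        # Local group: collectives between GPUs on the same node.
        self.local_group = dist.new_group(ranks=intra_node_ranks)
        # Global group: all live workers, communicating over the internet.
        self.global_group = dist.group.WORLD

    def rebuild_global(self, live_ranks: list[int]) -> None:
        # When workers join or leave, re-form the global group from the
        # ranks that are still alive (heavily simplified here).
        self.global_group = dist.new_group(ranks=live_ranks)
```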
The global all-reduce uses custom int8 kernels to shrink the communication payload, greatly reducing communication overhead.
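
The idea, in a deliberately simplified form: quantize each tensor to int8 with a per-tensor scale before the collective, then dequantize on receipt. This sketch uses plain `torch.distributed` collectives rather than prime's custom kernels:

```python
import torch
import torch.distributed as dist

def int8_all_reduce(t: torch.Tensor) -> torch.Tensor:
    """Illustrative int8 all-reduce: roughly 4x smaller payload than fp32."""
    # Per-tensor symmetric quantization to int8.
    scale = (t.abs().amax() / 127.0).clamp(min=1e-8).reshape(1)
    q = (t / scale).round().clamp(-127, 127).to(torch.int8)

    # Exchange the compressed payloads and their scales.
    world = dist.get_world_size()
    qs = [torch.empty_like(q) for _ in range(world)]
    scales = [torch.empty_like(scale) for _ in range(world)]
    dist.all_gather(qs, q)
    dist.all_gather(scales, scale)

    # Dequantize and accumulate locally to complete the reduction.
    out = torch.zeros_like(t)
    for qi, si in zip(qs, scales):
        out += qi.to(t.dtype) * si
    return out
```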

For more detailed technical insights, please refer to our [technical paper](https://github.com/PrimeIntellect-ai/prime).

## **Model Details**
- **Model Contributors**: samsja, Prime Intellect, Arcee AI, kotaro, skre_0, marlo, rodeo, Herb, Olas, superchillen, Hugging Face, mev_pete, 0xfr_, dj, primeprimeint1234, Marco Giglio, realtek, Hyperbolic, hecataeus, NWO, Virtual Machine, droll, SemiAnalysis, _waiting__, toptickcrypto, sto, Johannes, washout_segment_0b, klee
- **Release Date**: 29 Nov 2024
- **Model License**: Apache 2.0

## **Technical Specifications**
| **Parameter**       | **Value**              |
|----------------------|------------------------|
| Parameter Size       | 10B |
| Number of Layers     | 42 |
| Number of Attention Heads | 32 |
| Hidden Size          | 4096 |
| Context Length       | 8192 |
| Vocabulary Size      | 128256 |
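
For illustration, these hyperparameters map onto a Llama-style `transformers` config as sketched below (the architecture choice is an assumption of the sketch; the checkpoint's `config.json` is authoritative, and fields not listed in the table are left at their defaults):

```python
from transformers import LlamaConfig

# Sketch only: fills in the values from the table above. Fields not in
# the table (e.g. intermediate_size, num_key_value_heads) keep their
# LlamaConfig defaults and may differ from the actual checkpoint.
config = LlamaConfig(
    num_hidden_layers=42,
    num_attention_heads=32,
    hidden_size=4096,
    max_position_embeddings=8192,
    vocab_size=128256,
)
print(config)
```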

## **Citations**
If you use this model in your research, please cite it as follows:
```
@article{}
```