samsja committed 35f6fe3 (verified) · 1 Parent(s): 529e86f

Update README.md

Files changed (1): README.md (+10 -3)
README.md CHANGED
@@ -17,6 +17,8 @@ pipeline_tag: text-generation
 
 ![Intellect 1 training visual](intellect-1-map.png)
 
+ This is a base model. Please use [INTELLECT-1-Instruct](https://huggingface.co/PrimeIntellect/INTELLECT-1-Instruct) for chat use cases.
+
 **INTELLECT-1** was trained on up to 14 concurrent nodes distributed across 3 continents, with contributions from 30 independent community contributors providing compute.
 The training code utilizes the [prime framework](https://github.com/PrimeIntellect-ai/prime), a scalable distributed training framework designed for fault-tolerant, dynamically scaling, high-performance training on unreliable, globally distributed workers.
 The key abstraction that allows dynamic scaling is the `ElasticDeviceMesh`, which manages dynamic global process groups for fault-tolerant communication across the internet and local process groups for communication within a node.
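The diff does not show `ElasticDeviceMesh` itself. The following is a minimal conceptual sketch of the two-level process-group layout that paragraph describes, written against plain `torch.distributed`; it is not the prime framework's API, and every name in it (`build_mesh`, `local_world_size`) is illustrative.

```python
# Conceptual sketch only -- not the prime framework's ElasticDeviceMesh API.
# It shows the two-level idea: one global process group spanning every node
# (communication across the internet) plus a local group for the ranks that
# share a single node. All names here are illustrative assumptions.
import torch.distributed as dist

def build_mesh(local_world_size: int):
    dist.init_process_group(backend="gloo")  # gloo works over plain TCP
    rank = dist.get_rank()
    world_size = dist.get_world_size()

    # Every rank must call new_group() for every group, so build them all.
    num_nodes = world_size // local_world_size
    local_groups = [
        dist.new_group(list(range(n * local_world_size, (n + 1) * local_world_size)))
        for n in range(num_nodes)
    ]
    local_group = local_groups[rank // local_world_size]  # this rank's node
    global_group = dist.group.WORLD                       # all ranks, everywhere

    return global_group, local_group
```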
@@ -24,7 +26,7 @@ The model was trained using the [DiLoCo](https://arxiv.org/abs/2311.08105) algor
 
 For more detailed technical insights, please refer to our [technical paper](https://github.com/PrimeIntellect-ai/prime).
 
- **Note: The model will immediately output an EOS token if the BOS token is not set. This is a result of the tensor packing used during training and can result in terrible eval scores.**
+ **Note: You must add a BOS token at the beginning of each sample. Performance may be impacted otherwise.**
 
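The context line at the top of this hunk cites the [DiLoCo](https://arxiv.org/abs/2311.08105) algorithm. As a hedged sketch of its outer loop, under assumed names throughout (this is not prime's actual training code, and the HF-style `.loss` is an assumption):

```python
# Hedged sketch of one DiLoCo round (arXiv:2311.08105), not prime's code.
# Each worker takes H local optimizer steps with no communication, then all
# workers average their parameter deltas ("pseudo-gradients") and apply one
# outer optimizer step (e.g. SGD with Nesterov momentum).
import torch
import torch.distributed as dist

def diloco_round(model, inner_opt, outer_opt, data_iter, H, group):
    snapshot = [p.detach().clone() for p in model.parameters()]

    for _ in range(H):                 # H inner steps, fully local
        loss = model(**next(data_iter)).loss
        loss.backward()
        inner_opt.step()
        inner_opt.zero_grad()

    world_size = dist.get_world_size(group)
    for p, start in zip(model.parameters(), snapshot):
        delta = start - p.detach()     # this worker's pseudo-gradient
        dist.all_reduce(delta, group=group)
        delta /= world_size            # average across workers
        p.data.copy_(start)            # rewind to the shared starting point
        p.grad = delta                 # hand it to the outer optimizer
    outer_opt.step()
    outer_opt.zero_grad()
```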
 ## Usage
 ```python
@@ -54,7 +56,7 @@ print(pipe("What is prime intellect ?"))
 ```
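Given the BOS note above, here is a minimal sketch of verifying the requirement with a `transformers` tokenizer. The repo id `PrimeIntellect/INTELLECT-1` is an assumption, since the diff truncates the usage snippet and never names it:

```python
# Minimal sketch of the BOS-token note above. The repo id
# "PrimeIntellect/INTELLECT-1" is an assumption; it does not appear in this diff.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PrimeIntellect/INTELLECT-1")

text = "What is prime intellect ?"
ids = tokenizer(text, add_special_tokens=True).input_ids

# Llama-style tokenizers usually prepend BOS when add_special_tokens=True,
# but verify rather than assume -- performance suffers without it.
if ids[0] != tokenizer.bos_token_id:
    ids = [tokenizer.bos_token_id] + ids

print(tokenizer.convert_ids_to_tokens(ids)[:3])
```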
 
 ## **Model Details**
- - **Model Contributors**: samsja, Prime Intellect, Arcee AI, kotaro, skre_0, marlo, rodeo, Herb, Olas, superchillen, Hugging Face, mev_pete, 0xfr_, dj, primeprimeint1234, Marco Giglio, realtek, Hyperbolic, hecataeus, NWO, Virtual Machine, droll, SemiAnalysis, _waiting__, toptickcrypto, sto, Johannes, washout_segment_0b, klee
+ - **Compute Contributors**: Prime Intellect, Arcee AI, kotaro, skre_0, marlo, rodeo, Herb, Olas, superchillen, Hugging Face, mev_pete, 0xfr_, dj, primeprimeint1234, Marco Giglio, realtek, Hyperbolic, hecataeus, NWO, Virtual Machine, droll, SemiAnalysis, _waiting__, toptickcrypto, sto, Johannes, washout_segment_0b, klee
 - **Release Date**: 29 Nov 2024
 - **Model License**: Apache 2.0
 
@@ -101,5 +103,10 @@ Base Models:
 ## **Citations**
 If you use this model in your research, please cite it as follows:
 ```
- @article{}
+ @article{jaghouar2024intellect,
+   title={INTELLECT-1 Technical Report},
+   author={Jaghouar, Sami and Ong, Jack Min and Basra, Manveer and Obeid, Fares and Straube, Jannik and Keiblinger, Michael and Bakouch, Elie and Atkins, Lucas and Panahi, Maziyar and Goddard, Charles and Ryabinin, Max and Hagemann, Johannes},
+   journal={arXiv preprint},
+   year={2024}
+ }
 ```