---
license: apache-2.0
datasets:
- PrimeIntellect/fineweb-edu
- PrimeIntellect/fineweb
- PrimeIntellect/StackV1-popular
- mlfoundations/dclm-baseline-1.0-parquet
- open-web-math/open-web-math
language:
- en
pipeline_tag: text-generation
---
# INTELLECT-1-bf16

## **Model Overview**
**INTELLECT-1** is the first collaboratively trained 10-billion-parameter language model, trained from scratch on 1 trillion tokens of English text and code.

**INTELLECT-1** was trained on up to 14 concurrent nodes distributed across 3 continents, with compute provided by 30 independent community contributors.
The training code uses the [prime framework](https://github.com/PrimeIntellect-ai/prime), a scalable distributed training framework designed for fault-tolerant, dynamically scaling, high-performance training on unreliable, globally distributed workers.
The key abstraction that enables dynamic scaling is the `ElasticDeviceMesh`, which manages dynamic global process groups for fault-tolerant communication across the internet and local process groups for communication within a node.
The global all-reduce uses custom int8 kernels to shrink the communication payload, greatly reducing communication overhead.
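
To make the int8 compression concrete, here is a minimal, hypothetical sketch built from plain `torch.distributed` collectives. It is not the prime framework's kernel (the function name and the gather-then-dequantize strategy are our own illustration): each worker ships one byte per element plus a single fp32 scale, then dequantizes and averages locally in fp32.

```python
import torch
import torch.distributed as dist

def int8_all_reduce_mean(x, group=None):
    """Average `x` across ranks while shipping int8 payloads (sketch only)."""
    world_size = dist.get_world_size(group=group)

    # Symmetric per-tensor quantization: one fp32 scale per tensor.
    scale = (x.abs().max() / 127.0).clamp(min=1e-8).reshape(1)
    q = (x / scale).round().clamp(-127, 127).to(torch.int8)

    # Gather every rank's int8 tensor and its scale.
    qs = [torch.empty_like(q) for _ in range(world_size)]
    scales = [torch.empty_like(scale) for _ in range(world_size)]
    dist.all_gather(qs, q, group=group)
    dist.all_gather(scales, scale, group=group)

    # Dequantize and accumulate in fp32 so the int8 values cannot overflow.
    out = torch.zeros_like(x, dtype=torch.float32)
    for qi, si in zip(qs, scales):
        out += qi.to(torch.float32) * si
    return (out / world_size).to(x.dtype)
```

Relative to an fp32 all-reduce this cuts the per-element payload by 4x (2x versus bf16), at the cost of some quantization error in the averaged result.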

For more detailed technical insights, please refer to our [technical paper](https://github.com/PrimeIntellect-ai/prime).

## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained("PrimeIntellect/INTELLECT-1-bf16")
tokenizer = AutoTokenizer.from_pretrained("PrimeIntellect/INTELLECT-1-bf16")

# Encode a prompt, generate up to 50 tokens, and decode the result.
input_text = "What is the Metamorphosis of Prime Intellect about?"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output_ids = model.generate(input_ids, max_length=50, num_return_sequences=1)
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(output_text)
```

### Example text generation pipeline
```python
from transformers import pipeline

# Build a text-generation pipeline around the model and run a sample prompt.
pipe = pipeline("text-generation", model="PrimeIntellect/INTELLECT-1-bf16")
print(pipe("The Metamorphosis of Prime Intellect is a novel about"))
```

## **Model Details**
- **Model Contributors**: samsja, Prime Intellect, Arcee AI, kotaro, skre_0, marlo, rodeo, Herb, Olas, superchillen, Hugging Face, mev_pete, 0xfr_, dj, primeprimeint1234, Marco Giglio, realtek, Hyperbolic, hecataeus, NWO, Virtual Machine, droll, SemiAnalysis, _waiting__, toptickcrypto, sto, Johannes, washout_segment_0b, klee
- **Release Date**: 29 Nov 2024
- **Model License**: Apache 2.0

## **Technical Specifications**
| **Parameter**             | **Value** |
|---------------------------|-----------|
| Parameter Size            | 10B       |
| Number of Layers          | 42        |
| Number of Attention Heads | 32        |
| Hidden Size               | 4096      |
| Context Length            | 8192      |
| Vocabulary Size           | 128256    |
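
As a quick sanity check, these values can be read off the released configuration with `transformers`. The attribute names below are the standard Llama-style config fields, which this checkpoint is assumed to use:

```python
from transformers import AutoConfig

# Fetch the model config and print the architecture hyperparameters.
config = AutoConfig.from_pretrained("PrimeIntellect/INTELLECT-1-bf16")
print(config.num_hidden_layers)        # expected: 42
print(config.num_attention_heads)      # expected: 32
print(config.hidden_size)              # expected: 4096
print(config.max_position_embeddings)  # expected: 8192
print(config.vocab_size)               # expected: 128256
```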

**Training Details**:
- **Dataset**: 55% fineweb-edu, 10% fineweb, 20% Stack V1, 10% dclm-baseline, 5% open-web-math
- **Tokens**: 1 trillion
- **Training Duration**: 86,239.7 H100-hours
- **Optimizer**: DiLoCo/LocalSGD, with AdamW as the inner optimizer and Nesterov SGD as the outer optimizer (see the sketch below)
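
To illustrate the two-level optimizer, here is a minimal, hypothetical sketch of one DiLoCo/LocalSGD-style round. The helper names, step count, and the assumption that the model returns a `.loss` are illustrative, not the actual prime training loop: each worker takes many local AdamW steps, then the averaged drift from the last synchronized weights is applied as a pseudo-gradient by an outer Nesterov SGD step.

```python
import torch
import torch.distributed as dist

def diloco_round(model, inner_opt, outer_opt, data_iter, inner_steps=500):
    """One round: many local AdamW steps, then one global outer step."""
    # Snapshot the globally synchronized weights at the start of the round.
    synced = [p.detach().clone() for p in model.parameters()]

    # Inner phase: ordinary local training, no cross-node communication.
    for _ in range(inner_steps):
        loss = model(**next(data_iter)).loss  # assumes an HF-style model
        loss.backward()
        inner_opt.step()
        inner_opt.zero_grad()

    # Outer phase: the pseudo-gradient is the averaged drift of the local
    # weights away from the last synchronized weights.
    with torch.no_grad():
        for p, p0 in zip(model.parameters(), synced):
            delta = p0 - p
            dist.all_reduce(delta, op=dist.ReduceOp.AVG)  # global sync (NCCL)
            p.copy_(p0)      # rewind to the synchronized weights...
            p.grad = delta   # ...and expose the drift as a gradient
    outer_opt.step()         # e.g. SGD(lr=0.7, momentum=0.9, nesterov=True)
    outer_opt.zero_grad()
```

Because synchronization happens only once per round, and the global all-reduce itself is int8-compressed as described above, the cross-continent bandwidth per training step stays small.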

**Performance on benchmarks**
| Model | Size | Tokens | MMLU | GPQA | GSM8K | ARC-C | Hellaswag |
|---|---|---|---|---|---|---|---|
| INTELLECT-1 | 10B | 1T | 37.5 | 26.12 | 8.1 | 52.13 | 72.26 |
| LLaMA-7B | 7B | 1T | 35.1 | 23.1 | 9.7 | 50.43 | 78.19 |
| LLaMA-13B | 13B | 1T | 46.9 | 26.34 | 17.3 | 56.14 | 81.05 |
| LLaMA2-7B | 7B | 2T | 45.3 | 25.89 | 13.5 | 54.10 | 78.64 |
| LLaMA2-13B | 13B | 2T | 54.8 | 25.67 | 24.3 | 59.81 | 82.58 |
| MPT-7B | 7B | 1T | 26.8 | 25.67 | 8.3 | 46.67 | 77.41 |
| Falcon-7B | 7B | 1.5T | 26.2 | 23.66 | 4.9 | 47.61 | 78.23 |
| Pythia-12B | 12B | 300B | 26.5 | 24.33 | 4.09 | 40.61 | 68.83 |
| LLM360-Amber | 7B | 1.3T | 24.5 | 27.01 | 4.3 | 42.75 | 74.08 |

## **Citations**
If you use this model in your research, please cite it as follows:
```
@article{}
```