Triangle104 committed on
Commit 4dd749e
1 Parent(s): 6e58f8b

Update README.md

Files changed (1): README.md +62 -0
README.md CHANGED
@@ -33,6 +33,68 @@ tags:
  This model was converted to GGUF format from [`PrimeIntellect/INTELLECT-1-Instruct`](https://huggingface.co/PrimeIntellect/INTELLECT-1-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/PrimeIntellect/INTELLECT-1-Instruct) for more details on the model.

+ ---
+
+ arcee-ai/Llama-405B-Logits
+ arcee-ai/The-Tome
+
+ Instruction Following:
+ - mlabonne/open-perfectblend-fixed (generalist capabilities)
+ - microsoft/orca-agentinstruct-1M-v1-cleaned (Chain-of-Thought)
+ - Post-training-Data-Flywheel/AutoIF-instruct-61k-with-funcs
+
+ Domain-Specific:
+ - Team-ACE/ToolACE (function calling)
+ - Synthia coder (programming)
+ - ServiceNow-AI/M2Lingual (multilingual)
+ - AI-MO/NuminaMath-TIR (mathematics)
+
+ Tulu-3 Persona Datasets:
+ - allenai/tulu-3-sft-personas-code
+ - allenai/tulu-3-sft-personas-math
+ - allenai/tulu-3-sft-personas-math-grade
+ - allenai/tulu-3-sft-personas-algebra
+
+ Second, we execute 8 distinct Direct Preference Optimization (DPO)
+ runs with various combinations of data sets to enhance specific
+ performance metrics and align the model with human preferences. A key
+ advantage in our post-training process was INTELLECT-1's use of the
+ Llama-3 tokenizer, which allowed us to utilize logits from
+ Llama-3.1-405B to heal and maintain precision during the post-training
+ process via DistillKit.
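+
+ For reference, each DPO run optimizes the standard preference objective
+ introduced by Rafailov et al. (2023); this is the generic DPO loss, not a
+ formulation specific to these particular runs:
+
+ $$
+ \mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) =
+ -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}\left[
+ \log \sigma\!\left(
+ \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
+ - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
+ \right)\right]
+ $$
+
+ Here $y_w$ and $y_l$ are the preferred and rejected responses for prompt $x$,
+ $\pi_{\mathrm{ref}}$ is the frozen reference policy, and $\beta$ controls how
+ far the tuned policy may drift from it.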
+
+ Finally, we performed 16 strategic merges between candidate models
+ using MergeKit to create superior combined models that leverage the
+ strengths of different training runs. During the post-training phase, we
+ observed that when using a ChatML template without an explicit BOS
+ (begin-of-sequence) token, the initial loss was approximately 15.
+ However, when switching to the Llama 3.1 chat template, the loss for
+ these trainings started much lower at approximately 1.1, indicating
+ better alignment with the underlying Llama 3 tokenizer.
+
+ The combination of these post-training techniques resulted in
+ significant improvements across various benchmarks, particularly in
+ knowledge retrieval, grade school math, instruction following, and
+ reasoning.
+
+ ## Citations
+
+ If you use this model in your research, please cite it as follows:
+
+ @article{jaghouar2024intellect,
+   title={INTELLECT-1 Technical Report.},
+   author={Jaghouar, Sami and Ong, Jack Min and Basra, Manveer and Obeid, Fares and Straube, Jannik and Keiblinger, Michael and Bakouch, Elie and Atkins, Lucas and Panahi, Maziyar and Goddard, Charles and Ryabinin, Max and Hagemann, Johannes},
+   journal={arXiv preprint},
+   year={2024}
+ }
+
+ ---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
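
The commands below are a minimal sketch of the usual GGUF-my-repo usage pattern; `<this-repo>` and `<model-file>.gguf` are placeholders for this repository's id and whichever quantized file you download, not values taken from the card, and flag availability depends on your llama.cpp version.

```bash
# Install the llama.cpp CLI tools (macOS and Linux)
brew install llama.cpp

# Run the model straight from the Hugging Face Hub with llama-cli.
# <this-repo> and <model-file>.gguf are placeholders for this repo's id
# and the quantized GGUF file you want to use.
llama-cli --hf-repo <this-repo> --hf-file <model-file>.gguf -p "The meaning to life and the universe is"
```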