Triangle104 committed
Commit 3e5fea6
1 Parent(s): 77a782d

Update README.md

Files changed (1): README.md (+73 -0)
@@ -46,6 +46,79 @@ tags:
This model was converted to GGUF format from [`utter-project/EuroLLM-1.7B-Instruct`](https://huggingface.co/utter-project/EuroLLM-1.7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/utter-project/EuroLLM-1.7B-Instruct) for more details on the model.

---
## Model Details

This is the model card for the first instruction-tuned model of the EuroLLM series: EuroLLM-1.7B-Instruct. You can also check the pre-trained version: EuroLLM-1.7B.

- Developed by: Unbabel, Instituto Superior Técnico, Instituto de Telecomunicações, University of Edinburgh, Aveni, University of Paris-Saclay, University of Amsterdam, Naver Labs, Sorbonne Université.
- Funded by: European Union.
- Model type: A 1.7B parameter instruction-tuned multilingual transformer LLM.
- Language(s) (NLP): Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish, Arabic, Catalan, Chinese, Galician, Hindi, Japanese, Korean, Norwegian, Russian, Turkish, and Ukrainian.
- License: Apache License 2.0.

The EuroLLM project has the goal of creating a suite of LLMs capable of understanding and generating text in all European Union languages as well as some additional relevant languages. EuroLLM-1.7B is a 1.7B parameter model trained on 4 trillion tokens divided across the considered languages and several data sources: web data, parallel data (en-xx and xx-en), and high-quality datasets. EuroLLM-1.7B-Instruct was further instruction-tuned on EuroBlocks, an instruction-tuning dataset with a focus on general instruction-following and machine translation.

## Model Description

EuroLLM uses a standard, dense Transformer architecture (a minimal sketch of these blocks follows the list):

- We use grouped query attention (GQA) with 8 key-value heads, since it has been shown to increase speed at inference time while maintaining downstream performance.
- We perform pre-layer normalization, since it improves training stability, and use RMSNorm, which is faster.
- We use the SwiGLU activation function, since it has been shown to lead to good results on downstream tasks.
- We use rotary positional embeddings (RoPE) in every layer, since these have been shown to lead to good performance while allowing extension of the context length.

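To make those choices concrete, here is a minimal PyTorch sketch of two of the named building blocks, RMSNorm and SwiGLU, plus the head-sharing arithmetic behind GQA. It is illustrative only, not EuroLLM's actual implementation; apart from the 8 key-value heads, every dimension below is an assumed value.

```python
# Minimal sketch of the building blocks named above. Dimensions are
# illustrative assumptions, NOT EuroLLM-1.7B's published configuration.
import torch
import torch.nn as nn


class RMSNorm(nn.Module):
    """Root-mean-square layer norm: no mean subtraction, no bias."""

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
        return self.weight * (x * rms)


class SwiGLU(nn.Module):
    """SwiGLU feed-forward block: down(SiLU(gate(x)) * up(x))."""

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.gate = nn.Linear(dim, hidden, bias=False)
        self.up = nn.Linear(dim, hidden, bias=False)
        self.down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(nn.functional.silu(self.gate(x)) * self.up(x))


# GQA shape intuition: with, say, 16 query heads (an assumed count) but only
# 8 key-value heads (the count the card gives), each K/V head is shared by
# 2 query heads, shrinking the KV cache and speeding up inference.
num_q_heads, num_kv_heads = 16, 8
assert num_q_heads % num_kv_heads == 0
queries_per_kv = num_q_heads // num_kv_heads  # -> 2
```
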
For pre-training, we use 256 Nvidia H100 GPUs of the MareNostrum 5 supercomputer, training the model with a constant batch size of 3,072 sequences (approximately 12 million tokens), using the Adam optimizer and BF16 precision.

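The "approximately 12 million tokens" figure can be sanity-checked against the batch size: it matches 3,072 sequences only if each sequence holds roughly 4,096 tokens, so the 4,096-token context length in this check is an inference, not a figure the card states.

```python
# Back-of-envelope check of the batch arithmetic above. The 4096-token
# sequence length is an assumption inferred from the stated numbers,
# not a figure given in this card.
batch_sequences = 3072
assumed_seq_len = 4096
tokens_per_batch = batch_sequences * assumed_seq_len
print(f"{tokens_per_batch:,} tokens per batch")  # 12,582,912, i.e. ~12 million
```
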
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
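
For a quick test without the CLI, a hedged sketch using the llama-cpp-python bindings is below. Both the repo id and the quantization filename are assumptions (this page states neither), so substitute the actual values from this repo's file list.

```python
# Hedged sketch: load the GGUF conversion via llama-cpp-python rather than
# the llama.cpp CLI. The repo id and filename pattern are ASSUMPTIONS --
# replace them with this repo's real values.
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

llm = Llama.from_pretrained(
    repo_id="Triangle104/EuroLLM-1.7B-Instruct-GGUF",  # assumed repo id
    filename="*q4_k_m.gguf",  # glob for an assumed quant; pick a real file
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Translate 'good morning' into Portuguese."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```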