goldfish-models committed on
Commit f1e054e (1 parent: 96b9848)

Upload README.md with huggingface_hub

Files changed (1): README.md (+62 -0)
README.md ADDED
---
license: apache-2.0
language:
- mdf
datasets:
- allenai/MADLAD-400
library_name: transformers
pipeline_tag: text-generation
tags:
- goldfish
---

# mdf_cyrl_full

Goldfish is a suite of monolingual language models trained for 350 languages.
This model is the <b>Moksha</b> (Cyrillic script) model trained on 6MB of data (all our data in the language), after accounting for an estimated byte premium of 1.71; content-matched text in Moksha takes on average 1.71x as many UTF-8 bytes to encode as English.
The Goldfish models are trained primarily for comparability across languages and for low-resource languages; Goldfish performance for high-resource languages is not designed to be comparable with modern large language models (LLMs).
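
The raw and byte-premium-scaled training sizes listed under model details below differ by exactly this 1.71 factor; a quick arithmetic check (a minimal sketch, with values copied from the details list):

```python
# Byte premium check: content-matched Moksha text takes ~1.71x as many
# UTF-8 bytes as English, so raw byte counts are divided by 1.71 before
# comparing dataset sizes across languages.
byte_premium = 1.71
raw_mb = 10.56  # raw training text size for this model (see details below)

scaled_mb = raw_mb / byte_premium
print(f"{scaled_mb:.3f} MB")  # ~6.175 MB, the byte-premium-scaled size
```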

Note: mdf_cyrl is an [individual language](https://iso639-3.sil.org/code_tables/639/data) code. It is not contained in any of the macrolanguage codes included in Goldfish (for script cyrl).

All training and hyperparameter details are in our paper, [Goldfish: Monolingual Language Models for 350 Languages (Chang et al., 2024)](https://github.com/tylerachang/goldfish/blob/main/goldfish_paper_20240815.pdf).

Training code and sample usage: https://github.com/tylerachang/goldfish

Sample usage also in this Google Colab: [link](https://colab.research.google.com/drive/1rHFpnQsyXJ32ONwCosWZ7frjOYjbGCXG?usp=sharing)
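
For quick local experimentation, a minimal generation sketch with the `transformers` pipeline (the model id `goldfish-models/mdf_cyrl_full` is an assumption based on this repository's name; any short Moksha prompt works):

```python
# Minimal text-generation sketch using the transformers pipeline.
# The model id is an assumption based on this repository's name.
from transformers import pipeline

generator = pipeline("text-generation", model="goldfish-models/mdf_cyrl_full")

# Prompt is a short Moksha (Cyrillic) phrase; replace with any Moksha text.
result = generator("Мокшень кяль", max_new_tokens=20)
print(result[0]["generated_text"])
```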

## Model details:

To access all Goldfish model details programmatically, see https://github.com/tylerachang/goldfish/blob/main/model_details.json (a small fetch sketch appears after the dataset list below).
All models are trained with a [CLS] (same as [BOS]) token prepended, and a [SEP] (same as [EOS]) token separating sequences; a tokenization sketch follows the details list below.
Details for this model specifically:

* Architecture: gpt2
* Parameters: 124770816
* Maximum sequence length: 512 tokens
* Training text data (raw): 10.56MB
* Training text data (byte premium scaled): 6.175MB
* Training tokens: 1302016 (x10 epochs)
* Vocabulary size: 50000
* Compute cost: 6641782161408000 FLOPs (~6.6e15), or ~0.6 NVIDIA A6000 GPU hours
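
As referenced above, a hypothetical sketch of the [CLS]/[SEP] convention (whether the tokenizer inserts these tokens automatically is an assumption to verify against the training code):

```python
# Hypothetical sketch: prepend [CLS] (used as [BOS]) before scoring or
# generation, matching the convention described in the model details.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("goldfish-models/mdf_cyrl_full")

ids = tokenizer("Мокшень кяль")["input_ids"]
# Prepend [CLS] manually if the tokenizer did not add it already.
if tokenizer.cls_token_id is not None and (not ids or ids[0] != tokenizer.cls_token_id):
    ids = [tokenizer.cls_token_id] + ids
print(tokenizer.convert_ids_to_tokens(ids))
```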

Training datasets (percentages prior to deduplication):
* 79.42795%: [MADLAD-400 (CommonCrawl)](https://huggingface.co/datasets/allenai/MADLAD-400)
* 18.39365%: [Wikipedia 2023/08](https://dumps.wikimedia.org/)
* 2.17561%: [Languages of Russia](http://web-corpora.net/wsgi3/minorlangs/download)
* 0.00279%: [Tatoeba](https://tatoeba.org/en/)
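
As mentioned under model details, the per-model metadata can be read programmatically; a minimal sketch (the raw.githubusercontent.com path and the key layout of the JSON are assumptions):

```python
# Sketch: fetch the Goldfish model_details.json from the GitHub repository.
import json
import urllib.request

URL = "https://raw.githubusercontent.com/tylerachang/goldfish/main/model_details.json"
with urllib.request.urlopen(URL) as response:
    details = json.load(response)

# Hypothetical lookup; inspect the file for the actual key layout.
print(details.get("mdf_cyrl_full"))
```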

## Citation

If you use this model, please cite:

```
@article{chang-etal-2024-goldfish,
  title={Goldfish: Monolingual Language Models for 350 Languages},
  author={Chang, Tyler A. and Arnett, Catherine and Tu, Zhuowen and Bergen, Benjamin K.},
  journal={Preprint},
  year={2024},
}
```