mtasic85 committed
Commit ca965c6 · Parent: ef71605

git config
Files changed (3)
  1. .gitattributes +3 -0
  2. .gitignore +171 -0
  3. README.md +133 -0
.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ results.json filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
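The three new rules route PNG images, the evaluation report, and the tokenizer through Git LFS instead of plain git storage. As a minimal illustration (not part of this commit), the LFS-tracked patterns can be read back from the file; a real parser would also need to handle quoting and attribute unsetting:

```python
# Illustrative sketch: list which patterns .gitattributes routes through Git LFS.
from pathlib import Path

for line in Path('.gitattributes').read_text().splitlines():
    fields = line.split()
    if fields and 'filter=lfs' in fields[1:]:
        print(fields[0])  # e.g. *.png, results.json, tokenizer.json
```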
.gitignore ADDED
@@ -0,0 +1,171 @@
+ # ---> Python
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # C extensions
+ *.so
+
+ # Distribution / packaging
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ share/python-wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+
+ # PyInstaller
+ # Usually these files are written by a python script from a template
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
+ *.manifest
+ *.spec
+
+ # Installer logs
+ pip-log.txt
+ pip-delete-this-directory.txt
+
+ # Unit test / coverage reports
+ htmlcov/
+ .tox/
+ .nox/
+ .coverage
+ .coverage.*
+ .cache
+ nosetests.xml
+ coverage.xml
+ *.cover
+ *.py,cover
+ .hypothesis/
+ .pytest_cache/
+ cover/
+
+ # Translations
+ *.mo
+ *.pot
+
+ # Django stuff:
+ *.log
+ local_settings.py
+ db.sqlite3
+ db.sqlite3-journal
+
+ # Flask stuff:
+ instance/
+ .webassets-cache
+
+ # Scrapy stuff:
+ .scrapy
+
+ # Sphinx documentation
+ docs/_build/
+
+ # PyBuilder
+ .pybuilder/
+ target/
+
+ # Jupyter Notebook
+ .ipynb_checkpoints
+
+ # IPython
+ profile_default/
+ ipython_config.py
+
+ # pyenv
+ # For a library or package, you might want to ignore these files since the code is
+ # intended to run in multiple environments; otherwise, check them in:
+ # .python-version
+
+ # pipenv
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
+ # install all needed dependencies.
+ #Pipfile.lock
+
+ # poetry
+ # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+ # This is especially recommended for binary packages to ensure reproducibility, and is more
+ # commonly ignored for libraries.
+ # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+ #poetry.lock
+
+ # pdm
+ # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+ #pdm.lock
+ # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
+ # in version control.
+ # https://pdm.fming.dev/#use-with-ide
+ .pdm.toml
+
+ # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+ __pypackages__/
+
+ # Celery stuff
+ celerybeat-schedule
+ celerybeat.pid
+
+ # SageMath parsed files
+ *.sage.py
+
+ # Environments
+ .env
+ .venv
+ env/
+ venv/
+ ENV/
+ env.bak/
+ venv.bak/
+
+ # Spyder project settings
+ .spyderproject
+ .spyproject
+
+ # Rope project settings
+ .ropeproject
+
+ # mkdocs documentation
+ /site
+
+ # mypy
+ .mypy_cache/
+ .dmypy.json
+ dmypy.json
+
+ # Pyre type checker
+ .pyre/
+
+ # pytype static type analyzer
+ .pytype/
+
+ # Cython debug symbols
+ cython_debug/
+
+ # PyCharm
+ # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+ # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+ # and can be added to the global gitignore or merged into this file. For a more nuclear
+ # option (not recommended) you can uncomment the following to ignore the entire idea folder.
+ .idea/
+
+ .DS_Store
+ .ruff_cache
+ venv*/
+ wandb*/
+ data/
+ pretrain-data/
+ contrain-data/
+ core-data-*/
+ out/pretrain-core/step-*/
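The tail of the stock Python template adds project-specific exclusions: virtualenvs, `wandb*/` run logs, raw and derived data directories, and intermediate pretraining checkpoints under `out/pretrain-core/step-*/`. One way to confirm the patterns behave as intended is to ask git itself via `git check-ignore`; a small sketch (the candidate paths below are made-up examples, not files from this repo):

```python
# `git check-ignore -q PATH` exits 0 when PATH is ignored, 1 otherwise.
import subprocess

candidates = [
    'wandb-pretrain-core-0/run.log',
    'data/shard-00.bin',
    'out/pretrain-core/step-100/lit_model.pth',
    'README.md',
]
for path in candidates:
    ignored = subprocess.run(['git', 'check-ignore', '-q', path]).returncode == 0
    print(f"{'ignored' if ignored else 'kept':7} {path}")
```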
README.md CHANGED
@@ -1,3 +1,136 @@
  ---
  license: mit
+ pipeline_tag: text-generation
+ library_name: transformers
+ language: [
+ 'en', 'am', 'ar', 'as', 'az', 'be', 'bg', 'bn', 'br', 'bs', 'ca', 'cs', 'cy', 'da', 'de', 'el',
+ 'eo', 'es', 'et', 'eu', 'fa', 'ff', 'fi', 'fr', 'fy', 'ga', 'gd', 'gl', 'gn', 'gu', 'ha', 'he',
+ 'hi', 'hr', 'ht', 'hu', 'hy', 'id', 'ig', 'is', 'it', 'ja', 'jv', 'ka', 'kk', 'km', 'kn', 'ko',
+ 'ku', 'ky', 'la', 'lg', 'li', 'ln', 'lo', 'lt', 'lv', 'mg', 'mk', 'ml', 'mn', 'mr', 'ms', 'my',
+ 'ne', 'nl', 'no', 'ns', 'om', 'or', 'pa', 'pl', 'ps', 'pt', 'qu', 'rm', 'ro', 'ru', 'sa', 'si',
+ 'sc', 'sd', 'sk', 'sl', 'so', 'sq', 'sr', 'ss', 'su', 'sv', 'sw', 'ta', 'te', 'th', 'tl', 'tn',
+ 'tr', 'ug', 'uk', 'ur', 'uz', 'vi', 'wo', 'xh', 'yi', 'yo', 'zu',
+ ]
+ datasets:
+ # core - base
+ - ontocord/fineweb-permissive-multilingual-2m
+ - distily/c4_multilingual_1M
+ - data-silence/sumnews
+ - xu-song/cc100-samples
+ - badrex/llm-emoji-dataset
+ - fblgit/simple-math
+ - Gusarich/math-expressions-1m
+ - neuralwork/arxiver
+ - christopher/rosetta-code
+ - nampdn-ai/tiny-codes
+ - JeanKaddour/minipile
+ # core - instruct
+ - NousResearch/hermes-function-calling-v1
+ - simplescaling/s1K-1.1
+ # base - instruct
+ - mlabonne/open-perfectblend
+ - allenai/tulu-3-sft-mixture
+ - rombodawg/Everything_Instruct_Multilingual
+ # base - reason
+ - open-r1/OpenR1-Math-220k
+ - open-thoughts/OpenThoughts-114k
+ - cognitivecomputations/dolphin-r1
+ - simplescaling/s1K-1.1
+ tags:
+ - chat
+ - core
+ - base
+ - instruct
+ - reason
  ---
+
+ # tangled-alpha-0.11-core
+
+ ![logo](./misc/logo.jpg)
+
+ ```bash
+ time python -B prepare_core_datasets.py
+ ```
+
+ ```
+ i=0, min_len=0, max_len=1073741824, block_size=1025, chunk_size=16400000, len(dataset)=10913927, len(dataset) * block_size=11186775175
+ Total number of tokens in the optimized dataset '../core-data-0-0-1073741824-1025-16000' is 11186775175
+
+ i=1, min_len=1025, max_len=2049, block_size=2049, chunk_size=16392000, len(dataset)=893465, len(dataset) * block_size=1830709785
+ Total number of tokens in the optimized dataset '../core-data-1-1025-2049-2049-8000' is 1830709785
+
+ i=2, min_len=2049, max_len=4097, block_size=4097, chunk_size=16388000, len(dataset)=375104, len(dataset) * block_size=1536801088
+ Total number of tokens in the optimized dataset '../core-data-2-2049-4097-4097-4000' is 1536801088
+
+ i=3, min_len=4097, max_len=8193, block_size=8193, chunk_size=16386000, len(dataset)=177522, len(dataset) * block_size=1454437746
+ Total number of tokens in the optimized dataset '../core-data-3-4097-8193-8193-2000' is 1454437746
+
+ i=4, min_len=8193, max_len=16385, block_size=16385, chunk_size=16385000, len(dataset)=77725, len(dataset) * block_size=1273524125
+ Total number of tokens in the optimized dataset '../core-data-4-8193-16385-16385-1000' is 1273524125
+
+ i=5, min_len=16385, max_len=32769, block_size=32769, chunk_size=16384500, len(dataset)=22931, len(dataset) * block_size=751425939
+ Total number of tokens in the optimized dataset '../core-data-5-16385-32769-32769-500' is 751425939
+
+ i=6, min_len=32769, max_len=65537, block_size=65537, chunk_size=16384250, len(dataset)=4988, len(dataset) * block_size=326898556
+ Total number of tokens in the optimized dataset '../core-data-6-32769-65537-65537-250' is 326898556
+
+ i=7, min_len=65537, max_len=131073, block_size=131073, chunk_size=16384125, len(dataset)=1137, len(dataset) * block_size=149030001
+ Total number of tokens in the optimized dataset '../core-data-7-65537-131073-131073-125' is 149030001
+
+ 42G ../core-data-0-0-1073741824-1025-16000
+ 6.9G ../core-data-1-1025-2049-2049-8000
+ 5.8G ../core-data-2-2049-4097-4097-4000
+ 5.5G ../core-data-3-4097-8193-8193-2000
+ 4.8G ../core-data-4-8193-16385-16385-1000
+ 2.9G ../core-data-5-16385-32769-32769-500
+ 1.3G ../core-data-6-32769-65537-65537-250
+ 573M ../core-data-7-65537-131073-131073-125
+ ```
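A pattern worth noting in the log: for buckets 1 through 7, `block_size` equals the bucket's `max_len` (a power of two plus one token, presumably leaving room for the shifted next-token target), `chunk_size` is always `block_size` times the per-bucket chunk count that also ends the directory name, and the token total is `len(dataset) * block_size`. A hedged reconstruction of that arithmetic (names are illustrative, not taken from `prepare_core_datasets.py`):

```python
# Reconstructing the bucket arithmetic shown in the log above.
buckets = [
    # (min_len, max_len, block_size, chunks_per_file)
    (0,     2**30,  1025,   16000),  # bucket 0: all lengths, packed at 1025
    (1025,  2049,   2049,    8000),
    (2049,  4097,   4097,    4000),
    (4097,  8193,   8193,    2000),
    (8193,  16385,  16385,   1000),
    (16385, 32769,  32769,    500),
    (32769, 65537,  65537,    250),
    (65537, 131073, 131073,   125),
]

for i, (min_len, max_len, block_size, chunks_per_file) in enumerate(buckets):
    chunk_size = block_size * chunks_per_file  # matches the printed chunk_size
    out_dir = f'../core-data-{i}-{min_len}-{max_len}-{block_size}-{chunks_per_file}'
    print(f'i={i}, chunk_size={chunk_size}, out_dir={out_dir!r}')
```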
+
+ ```bash
+ CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_core_model_0.yaml
+ ```
+
+ ```
+ Seed set to 23
+ Time to instantiate model: 0.21 seconds.
+ Total parameters: 402,703,104
+ Verifying settings ...
+ Measured TFLOPs: 42432.35
+ Epoch 1 | iter 64 step 1 | loss train: 11.984, val: n/a | iter time: 460.76 ms (step) remaining time: 12 days, 3:41:55
+ Epoch 1 | iter 128 step 2 | loss train: 11.979, val: n/a | iter time: 402.83 ms (step) remaining time: 9 days, 0:57:24
+ Epoch 1 | iter 192 step 3 | loss train: 11.983, val: n/a | iter time: 403.46 ms (step) remaining time: 8 days, 0:12:58
+ Epoch 1 | iter 256 step 4 | loss train: 11.983, val: n/a | iter time: 403.39 ms (step) remaining time: 7 days, 11:52:07
+ Epoch 1 | iter 320 step 5 | loss train: 11.979, val: n/a | iter time: 403.85 ms (step) remaining time: 7 days, 4:28:33
+ Epoch 1 | iter 384 step 6 | loss train: 11.978, val: n/a | iter time: 403.93 ms (step) remaining time: 6 days, 23:33:15
+ Epoch 1 | iter 448 step 7 | loss train: 11.978, val: n/a | iter time: 403.38 ms (step) remaining time: 6 days, 20:02:28
+ Epoch 1 | iter 512 step 8 | loss train: 11.973, val: n/a | iter time: 403.80 ms (step) remaining time: 6 days, 17:24:49
+ Epoch 1 | iter 576 step 9 | loss train: 11.972, val: n/a | iter time: 403.23 ms (step) remaining time: 6 days, 15:21:59
+ Epoch 1 | iter 640 step 10 | loss train: 11.967, val: n/a | iter time: 403.38 ms (step) remaining time: 6 days, 13:43:53
+ # ...
+ ```
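Two things can be read off the log: the iteration counter advances by 64 per optimizer step, i.e. gradient accumulation over 64 micro-batches, and the steady-state iteration time settles around 403 ms. A rough cross-check of the trainer's remaining-time estimate, using only numbers from the last log line (this is not part of the training code):

```python
# Rough ETA cross-check from "iter 640 step 10 ... remaining time: 6 days, 13:43:53".
ms_per_iter = 403.38
iters_per_step = 64                                   # iter counter advances 64 per step
done_iters = 640
remaining_s = (6 * 24 + 13) * 3600 + 43 * 60 + 53     # "6 days, 13:43:53" in seconds
total_iters = done_iters + remaining_s / (ms_per_iter / 1000)
print(f'~{total_iters:,.0f} iterations, ~{total_iters / iters_per_step:,.0f} optimizer steps')
```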
+
+ Backup `wandb`:
+
+ ```bash
+ mv wandb wandb-pretrain-core-0
+ ```
+
+ Copy config:
+
+ ```bash
+ cp ../config-0.json ../out/pretrain-core-0/final/config.json
+ ```
+
+ Chat with the model:
+
+ ```bash
+ CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt chat ../out/pretrain-core-0/final
+ ```
+
+ Evaluate:
+
+ ```bash
+ CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True time litgpt evaluate --tasks 'leaderboard' --out_dir '../evaluate/pretrain-core-0/leaderboard/' --batch_size '4' --dtype 'bfloat16' '../out/pretrain-core-0/final'
+ ```
+
+ ```
+ ```
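The empty block above is presumably a placeholder for the leaderboard scores. `litgpt evaluate` wraps lm-evaluation-harness, which writes its metrics to a `results.json` under the chosen `--out_dir` (the same filename this commit tracks via LFS in `.gitattributes`). A hedged sketch for inspecting it, with the exact layout assumed rather than verified:

```python
# Assumed layout: lm-evaluation-harness keeps per-task metrics under 'results'.
import json

with open('../evaluate/pretrain-core-0/leaderboard/results.json') as f:
    report = json.load(f)

for task, metrics in sorted(report['results'].items()):
    print(task, metrics)
```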