tangled-alpha-0.10-core
Prepare core datasets:
time python -B prepare_core_datasets.py
i=0, min_len=0, max_len=1073741824, block_size=1025, chunk_size=16400000, len(dataset)=5146620, len(dataset) * block_size=5275285500
Total number of tokens in the optimized dataset '../core-data-0-0-1073741824-1025-16000' is 5275285500
i=1, min_len=1025, max_len=2049, block_size=2049, chunk_size=16392000, len(dataset)=309838, len(dataset) * block_size=634858062
Total number of tokens in the optimized dataset '../core-data-1-1025-2049-2049-8000' is 634858062
i=2, min_len=2049, max_len=4097, block_size=4097, chunk_size=16388000, len(dataset)=113843, len(dataset) * block_size=466414771
Total number of tokens in the optimized dataset '../core-data-2-2049-4097-4097-4000' is 466414771
i=3, min_len=4097, max_len=8193, block_size=8193, chunk_size=16386000, len(dataset)=56713, len(dataset) * block_size=464649609
Total number of tokens in the optimized dataset '../core-data-3-4097-8193-8193-2000' is 464649609
i=4, min_len=8193, max_len=16385, block_size=16385, chunk_size=16385000, len(dataset)=37406, len(dataset) * block_size=612897310
Total number of tokens in the optimized dataset '../core-data-4-8193-16385-16385-1000' is 612897310
i=5, min_len=16385, max_len=32769, block_size=32769, chunk_size=16384500, len(dataset)=12737, len(dataset) * block_size=417378753
Total number of tokens in the optimized dataset '../core-data-5-16385-32769-32769-500' is 417378753
i=6, min_len=32769, max_len=65537, block_size=65537, chunk_size=16384250, len(dataset)=2824, len(dataset) * block_size=185076488
Total number of tokens in the optimized dataset '../core-data-6-32769-65537-65537-250' is 185076488
i=7, min_len=65537, max_len=131073, block_size=131073, chunk_size=16384125, len(dataset)=634, len(dataset) * block_size=83100282
Total number of tokens in the optimized dataset '../core-data-7-65537-131073-131073-125' is 83100282
real 292m54.341s
user 2118m1.154s
sys 12m2.746s
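The eight buckets follow a regular pattern: each block_size is a power of two plus one (2^(10+i) + 1), each bucket's min_len is the previous bucket's block_size (the first bucket is open at 0), and chunk_size is block_size times a per-bucket block count that halves from 16000 down to 125. A minimal Python sketch reconstructing the parameters logged above (the loop and variable names are illustrative, not the script's actual code):

# Illustrative reconstruction of the bucket parameters logged above;
# names are assumptions, not taken from prepare_core_datasets.py.
for i in range(8):
    block_size = (1 << (10 + i)) + 1                 # 1025, 2049, ..., 131073
    min_len = 0 if i == 0 else (1 << (9 + i)) + 1    # previous bucket's block_size
    max_len = (1 << 30) if i == 0 else block_size    # first bucket is open-ended
    blocks_per_chunk = 16000 >> i                    # 16000, 8000, ..., 125
    chunk_size = block_size * blocks_per_chunk       # matches the logged chunk_size
    out_dir = f'../core-data-{i}-{min_len}-{max_len}-{block_size}-{blocks_per_chunk}'
    print(i, min_len, max_len, block_size, chunk_size, out_dir)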
20G tangled-alpha-0.10-core/core-data-0-0-1073741824-1025-16000
2.4G tangled-alpha-0.10-core/core-data-1-1025-2049-2049-8000
1.8G tangled-alpha-0.10-core/core-data-2-2049-4097-4097-4000
1.8G tangled-alpha-0.10-core/core-data-3-4097-8193-8193-2000
2.3G tangled-alpha-0.10-core/core-data-4-8193-16385-16385-1000
1.6G tangled-alpha-0.10-core/core-data-5-16385-32769-32769-500
709M tangled-alpha-0.10-core/core-data-6-32769-65537-65537-250
321M tangled-alpha-0.10-core/core-data-7-65537-131073-131073-125
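The on-disk sizes line up with roughly 4 bytes per token. A quick check for the first bucket (assuming token ids are stored as 4-byte integers):

tokens = 5_275_285_500                   # token count reported for bucket 0 above
print(f'{tokens * 4 / 2**30:.1f} GiB')   # -> 19.7 GiB, matching the 20G listing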
Pretrain core model:
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_core_model_0.yaml
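The YAML config points the trainer at the optimized buckets above. For reference, a single bucket can also be opened directly with litdata; a hedged sketch, assuming the datasets were written in litdata's token format (which the directory layout suggests):

from litdata import StreamingDataset, StreamingDataLoader
from litdata.streaming.item_loader import TokensLoader

# Open the first bucket; block_size must match the value used when preparing it.
dataset = StreamingDataset(
    input_dir='../core-data-0-0-1073741824-1025-16000',
    item_loader=TokensLoader(block_size=1025),
)
loader = StreamingDataLoader(dataset, batch_size=4)
batch = next(iter(loader))  # tensor of shape (4, 1025) holding token ids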
Backup wandb:
mv wandb wandb-pretrain-core-0
Copy config:
cp ../config-0.json ../out/pretrain-core-0/final/config.json
Chat with model:
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt chat ../out/pretrain-core-0/final
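Beyond the interactive chat, the checkpoint can be queried from Python via litgpt's high-level API (the prompt is just an example; the checkpoint path matches the command above):

from litgpt import LLM

llm = LLM.load('../out/pretrain-core-0/final')
# Base (core) model: expect raw completions, not instruction following.
print(llm.generate('The capital of France is', max_new_tokens=32))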
Evaluate model:
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True time litgpt evaluate --tasks 'leaderboard' --out_dir '../evaluate/pretrain-core-0/leaderboard/' --batch_size '4' --dtype 'bfloat16' '../out/pretrain-core-0/final'