llama-3.1-70B-cpttest_mode1_fulltext
  • Trained with torchtune for continued pre-training (CPT) testing.
  • Shows a noticeable improvement in ground-truth accuracy over the base model, from 37% to 48%, suggesting that larger models are more likely to absorb new material through CPT.

Torchtune logs

Step 1 | loss:1.1300560235977173 lr:1.6666666666666667e-06 tokens_per_second_per_gpu:349.0091552734375 peak_memory_active:27.935279369354248 peak_memory_alloc:19.935279369354248 peak_memory_reserved:40.74609375 
Step 2 | loss:1.1594016551971436 lr:3.3333333333333333e-06 tokens_per_second_per_gpu:359.1905212402344 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 3 | loss:1.1536775827407837 lr:5e-06 tokens_per_second_per_gpu:359.3911437988281 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 4 | loss:1.1208356618881226 lr:6.666666666666667e-06 tokens_per_second_per_gpu:144.13449096679688 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 5 | loss:1.1083028316497803 lr:8.333333333333334e-06 tokens_per_second_per_gpu:359.2149658203125 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 6 | loss:1.0660074949264526 lr:1e-05 tokens_per_second_per_gpu:356.5550537109375 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 7 | loss:1.0272692441940308 lr:1.1666666666666668e-05 tokens_per_second_per_gpu:137.47555541992188 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 8 | loss:1.0509568452835083 lr:1.3333333333333333e-05 tokens_per_second_per_gpu:358.0554504394531 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 9 | loss:0.9919566512107849 lr:1.5000000000000002e-05 tokens_per_second_per_gpu:358.1855163574219 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 10 | loss:1.002397894859314 lr:1.6666666666666667e-05 tokens_per_second_per_gpu:141.70672607421875 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 11 | loss:0.9400060176849365 lr:1.8333333333333333e-05 tokens_per_second_per_gpu:357.97412109375 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 12 | loss:0.9606466293334961 lr:2e-05 tokens_per_second_per_gpu:359.090576171875 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 13 | loss:0.9323594570159912 lr:1.9978589232386036e-05 tokens_per_second_per_gpu:142.2957000732422 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 14 | loss:0.8749242424964905 lr:1.9914448613738107e-05 tokens_per_second_per_gpu:357.0643005371094 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 15 | loss:0.8630730509757996 lr:1.9807852804032306e-05 tokens_per_second_per_gpu:358.6167907714844 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 16 | loss:0.8365017175674438 lr:1.9659258262890683e-05 tokens_per_second_per_gpu:141.15533447265625 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 17 | loss:0.8293017148971558 lr:1.946930129495106e-05 tokens_per_second_per_gpu:358.5122375488281 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 18 | loss:0.7823613882064819 lr:1.9238795325112867e-05 tokens_per_second_per_gpu:356.6804504394531 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 19 | loss:0.7624105215072632 lr:1.8968727415326885e-05 tokens_per_second_per_gpu:141.69068908691406 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 20 | loss:0.7408965826034546 lr:1.866025403784439e-05 tokens_per_second_per_gpu:357.17572021484375 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 21 | loss:0.7194341421127319 lr:1.8314696123025456e-05 tokens_per_second_per_gpu:357.96331787109375 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 22 | loss:0.710287868976593 lr:1.7933533402912354e-05 tokens_per_second_per_gpu:142.11309814453125 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 23 | loss:0.6661117672920227 lr:1.7518398074789776e-05 tokens_per_second_per_gpu:358.33612060546875 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 24 | loss:0.653155505657196 lr:1.7071067811865477e-05 tokens_per_second_per_gpu:356.1979064941406 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 25 | loss:0.6476803421974182 lr:1.659345815100069e-05 tokens_per_second_per_gpu:142.03372192382812 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 26 | loss:0.6234614253044128 lr:1.608761429008721e-05 tokens_per_second_per_gpu:357.3049621582031 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 27 | loss:0.5996619462966919 lr:1.5555702330196024e-05 tokens_per_second_per_gpu:358.3849792480469 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 28 | loss:0.5878142714500427 lr:1.5000000000000002e-05 tokens_per_second_per_gpu:141.94393920898438 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 29 | loss:0.5714110136032104 lr:1.4422886902190014e-05 tokens_per_second_per_gpu:358.5306091308594 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 30 | loss:0.5443007946014404 lr:1.3826834323650899e-05 tokens_per_second_per_gpu:358.9737548828125 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 31 | loss:0.525748610496521 lr:1.3214394653031616e-05 tokens_per_second_per_gpu:141.8433380126953 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 32 | loss:0.5265642404556274 lr:1.2588190451025209e-05 tokens_per_second_per_gpu:357.0601501464844 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 33 | loss:0.5053389668464661 lr:1.1950903220161286e-05 tokens_per_second_per_gpu:358.7293395996094 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 34 | loss:0.4904974400997162 lr:1.130526192220052e-05 tokens_per_second_per_gpu:142.004150390625 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 35 | loss:0.4666512906551361 lr:1.0654031292301432e-05 tokens_per_second_per_gpu:358.0657958984375 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 36 | loss:0.4776063561439514 lr:1e-05 tokens_per_second_per_gpu:358.3638610839844 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 37 | loss:0.46145376563072205 lr:9.34596870769857e-06 tokens_per_second_per_gpu:142.17311096191406 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 38 | loss:0.46022146940231323 lr:8.694738077799487e-06 tokens_per_second_per_gpu:357.508056640625 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 39 | loss:0.42891719937324524 lr:8.04909677983872e-06 tokens_per_second_per_gpu:358.75506591796875 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 40 | loss:0.4283742606639862 lr:7.411809548974792e-06 tokens_per_second_per_gpu:141.5640106201172 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 41 | loss:0.4312312602996826 lr:6.785605346968387e-06 tokens_per_second_per_gpu:358.67095947265625 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 42 | loss:0.43069934844970703 lr:6.173165676349103e-06 tokens_per_second_per_gpu:357.9062194824219 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 43 | loss:0.4175654947757721 lr:5.5771130978099896e-06 tokens_per_second_per_gpu:142.04470825195312 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 44 | loss:0.4196785092353821 lr:5.000000000000003e-06 tokens_per_second_per_gpu:357.5122985839844 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 45 | loss:0.412609726190567 lr:4.444297669803981e-06 tokens_per_second_per_gpu:357.4186096191406 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 46 | loss:0.4035313129425049 lr:3.912385709912794e-06 tokens_per_second_per_gpu:141.44119262695312 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 47 | loss:0.40271955728530884 lr:3.4065418489993118e-06 tokens_per_second_per_gpu:358.289794921875 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 48 | loss:0.40422722697257996 lr:2.9289321881345257e-06 tokens_per_second_per_gpu:357.8194580078125 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 49 | loss:0.4023560881614685 lr:2.4816019252102274e-06 tokens_per_second_per_gpu:141.70408630371094 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 50 | loss:0.39871829748153687 lr:2.0664665970876496e-06 tokens_per_second_per_gpu:358.13641357421875 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
Step 51 | loss:0.4081781804561615 lr:1.6853038769745466e-06 tokens_per_second_per_gpu:358.302490234375 peak_memory_active:27.935280323028564 peak_memory_alloc:19.935280323028564 peak_memory_reserved:40.74609375 
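The learning-rate column in the log above follows a recognizable shape: a linear warmup to the peak of 2e-05 over the first 12 steps, followed by cosine decay. The sketch below reconstructs that schedule and checks it against three values copied verbatim from the log. Note that the total step count of 60 is an inference from the decay curve (the log stops at step 51), not something the card states.

```python
import math

# Inferred schedule parameters: peak LR and warmup length are read directly
# off the log; the total of 60 steps is an assumption fitted to the decay.
PEAK_LR, WARMUP_STEPS, TOTAL_STEPS = 2e-5, 12, 60

def lr_at(step: int) -> float:
    """Linear warmup followed by a half-cosine decay, as the log suggests."""
    if step <= WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return PEAK_LR * 0.5 * (1 + math.cos(math.pi * progress))

# Learning rates copied verbatim from the log above.
logged = {
    1: 1.6666666666666667e-06,   # mid-warmup
    13: 1.9978589232386036e-05,  # first decay step after the peak
    51: 1.6853038769745466e-06,  # last logged step
}

for step, lr in logged.items():
    assert abs(lr_at(step) - lr) < 1e-10, (step, lr_at(step), lr)
print("schedule matches the logged values")
```

With these parameters the reconstruction agrees with every spot-checked step, which is consistent with the standard warmup-plus-cosine scheduler commonly used in torchtune recipes.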
Safetensors
Model size: 70.6B params
Tensor type: BF16
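The size and dtype above give a quick back-of-the-envelope figure for how much memory the weights alone occupy. The snippet below does that arithmetic; it covers only the stored parameters, not optimizer state, activations, or KV cache at inference time.

```python
# Rough weight-memory estimate for the checkpoint described above:
# 70.6B parameters stored in BF16, i.e. 2 bytes per parameter.
params = 70.6e9
bytes_per_param = 2  # bfloat16
weight_gb = params * bytes_per_param / 1e9

print(f"{weight_gb:.1f} GB of weights")  # 141.2 GB of weights
```

This is why serving a 70B-class model in BF16 typically requires multiple GPUs even before any runtime overhead is counted.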

Model tree for amang1802/llama-3.1-70B-cpttest_mode1_fulltext

Finetuned: this model (one of 31 finetunes in the tree)

Dataset used to train amang1802/llama-3.1-70B-cpttest_mode1_fulltext