Joosep Pata committed on
Commit b370d01 · 1 Parent(s): 3d70ffa

add v2.0.0

Files changed (35)
  1. clic/clusters/v2.0.0/README.md +145 -0
  2. clic/clusters/v2.0.0/history/epoch_1.json +1 -0
  3. clic/clusters/v2.0.0/history/epoch_10.json +1 -0
  4. clic/clusters/v2.0.0/history/epoch_11.json +1 -0
  5. clic/clusters/v2.0.0/history/epoch_12.json +1 -0
  6. clic/clusters/v2.0.0/history/epoch_13.json +1 -0
  7. clic/clusters/v2.0.0/history/epoch_14.json +1 -0
  8. clic/clusters/v2.0.0/history/epoch_15.json +1 -0
  9. clic/clusters/v2.0.0/history/epoch_16.json +1 -0
  10. clic/clusters/v2.0.0/history/epoch_17.json +1 -0
  11. clic/clusters/v2.0.0/history/epoch_18.json +1 -0
  12. clic/clusters/v2.0.0/history/epoch_19.json +1 -0
  13. clic/clusters/v2.0.0/history/epoch_2.json +1 -0
  14. clic/clusters/v2.0.0/history/epoch_20.json +1 -0
  15. clic/clusters/v2.0.0/history/epoch_21.json +1 -0
  16. clic/clusters/v2.0.0/history/epoch_22.json +1 -0
  17. clic/clusters/v2.0.0/history/epoch_23.json +1 -0
  18. clic/clusters/v2.0.0/history/epoch_24.json +1 -0
  19. clic/clusters/v2.0.0/history/epoch_25.json +1 -0
  20. clic/clusters/v2.0.0/history/epoch_26.json +1 -0
  21. clic/clusters/v2.0.0/history/epoch_27.json +1 -0
  22. clic/clusters/v2.0.0/history/epoch_28.json +1 -0
  23. clic/clusters/v2.0.0/history/epoch_29.json +1 -0
  24. clic/clusters/v2.0.0/history/epoch_3.json +1 -0
  25. clic/clusters/v2.0.0/history/epoch_4.json +1 -0
  26. clic/clusters/v2.0.0/history/epoch_5.json +1 -0
  27. clic/clusters/v2.0.0/history/epoch_6.json +1 -0
  28. clic/clusters/v2.0.0/history/epoch_7.json +1 -0
  29. clic/clusters/v2.0.0/history/epoch_8.json +1 -0
  30. clic/clusters/v2.0.0/history/epoch_9.json +1 -0
  31. clic/clusters/v2.0.0/hyperparameters.json +1 -0
  32. clic/clusters/v2.0.0/test-config.yaml +126 -0
  33. clic/clusters/v2.0.0/test.log +0 -0
  34. clic/clusters/v2.0.0/train-config.yaml +125 -0
  35. clic/clusters/v2.0.0/train.log +850 -0
clic/clusters/v2.0.0/README.md ADDED
@@ -0,0 +1,145 @@
+ # Model Card for mlpf-clic-clusters-v2.0.0
+
+ This model reconstructs particles in a detector, based on the tracks and calorimeter clusters recorded by the detector.
+
+ ## Model Details
+
+ ### Model Description
+
+ - **Developed by:** Joosep Pata, Eric Wulff, Farouk Mokhtar, Mengke Zhang, David Southwick, Maria Girone, Javier Duarte, Michael Kagan
+ - **Model type:** transformer
+ - **License:** Apache License
+
+ ### Model Sources
+
+ - **Repository:** https://github.com/jpata/particleflow/releases/tag/v2.0.0
+
+ ## Uses
+
+ ### Direct Use
+
+ This model may be used to study the physics and computational performance of ML-based reconstruction in simulation.
+
+ ### Out-of-Scope Use
+
+ This model is not intended for physics measurements on real data.
+
+ ## Bias, Risks, and Limitations
+
+ The model has only been trained on simulation data and has not been validated against real data.
+ The model has not been peer reviewed or published in a peer-reviewed journal.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ ```bash
+ # get the code
+ git clone https://github.com/jpata/particleflow
+ cd particleflow
+ git checkout v2.0.0
+
+ # get the models
+ git clone https://huggingface.co/jpata/particleflow models
+ ```
+
+ ## Training Details
+
+ Trained on 8x MI250X for 26 epochs over ~3 days.
+ The training was continued twice from a checkpoint due to the 24h time limit.
+
+ ### Training Data
+
+ The following datasets were used:
+ ```
+ /eos/user/j/jpata/mlpf/tensorflow_datasets/clic/clic_edm_qq_pf/2.3.0
+ /eos/user/j/jpata/mlpf/tensorflow_datasets/clic/clic_edm_ttbar_pf/2.3.0
+ /eos/user/j/jpata/mlpf/tensorflow_datasets/clic/clic_edm_ww_fullhad_pf/2.3.0
+ ```
+
+ The truth and target definition was updated in [jpata/particleflow#352](https://github.com/jpata/particleflow/pull/352) with respect to [Pata, J., Wulff, E., Mokhtar, F. et al. Improved particle-flow event reconstruction with scalable neural networks for current and future particle detectors. Commun Phys 7, 124 (2024)](https://doi.org/10.1038/s42005-024-01599-5).
+
+ In particular, target particles for MLPF reconstruction are based on status=1 particles.
+ For non-interacting status=1 particles, their interacting direct children (status=0) are used instead.
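The selection rule above can be sketched in a few lines. The `Particle` data model and the `interacted` flag below are hypothetical simplifications for illustration, not the actual structures used in jpata/particleflow:

```python
from dataclasses import dataclass, field

@dataclass
class Particle:
    pid: int            # PDG ID
    status: int         # 1 = generator-level stable, 0 = simulation-level
    interacted: bool    # whether the particle interacted with the detector
    children: list = field(default_factory=list)

def target_particles(particles):
    """Select MLPF target particles: status=1 particles; for non-interacting
    status=1 particles, use their interacting status=0 children instead."""
    targets = []
    for p in particles:
        if p.status != 1:
            continue
        if p.interacted:
            targets.append(p)
        else:
            targets.extend(c for c in p.children
                           if c.status == 0 and c.interacted)
    return targets
```

For example, a charged pion that interacts is kept as-is, while a non-interacting status=1 particle is replaced by its interacting children (and dropped if it has none, as for a neutrino).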
+
+ The datasets were generated using Key4HEP with the following scripts:
+ - https://github.com/HEP-KBFI/key4hep-sim/releases/tag/v1.0.0
+ - https://github.com/HEP-KBFI/key4hep-sim/blob/v1.0.0/clic/run_sim.sh
+
+ ## Training Procedure
+
+ ```bash
+ #!/bin/bash
+ #SBATCH --job-name=mlpf-train
+ #SBATCH --account=project_465000301
+ #SBATCH --time=3-00:00:00
+ #SBATCH --nodes=1
+ #SBATCH --ntasks-per-node=1
+ #SBATCH --cpus-per-task=32
+ #SBATCH --mem=200G
+ #SBATCH --gpus-per-task=8
+ #SBATCH --partition=small-g
+ #SBATCH --no-requeue
+ #SBATCH -o logs/slurm-%x-%j-%N.out
+
+ cd /scratch/project_465000301/particleflow
+
+ module load LUMI/24.03 partition/G
+
+ export IMG=/scratch/project_465000301/pytorch-rocm6.2.simg
+ export PYTHONPATH=`pwd`
+ export TFDS_DATA_DIR=/scratch/project_465000301/tensorflow_datasets
+ #export MIOPEN_DISABLE_CACHE=true
+ export MIOPEN_USER_DB_PATH=/tmp/${USER}-${SLURM_JOB_ID}-miopen-cache
+ export MIOPEN_CUSTOM_CACHE_DIR=${MIOPEN_USER_DB_PATH}
+ export TF_CPP_MAX_VLOG_LEVEL=-1 # to suppress "ROCm fusion is enabled" messages
+ export ROCM_PATH=/opt/rocm
+ #export NCCL_DEBUG=INFO
+ #export MIOPEN_ENABLE_LOGGING=1
+ #export MIOPEN_ENABLE_LOGGING_CMD=1
+ #export MIOPEN_LOG_LEVEL=4
+ export KERAS_BACKEND=torch
+
+ env
+
+ # PyTorch training
+ singularity exec \
+     --rocm \
+     -B /scratch/project_465000301 \
+     -B /tmp \
+     --env LD_LIBRARY_PATH=/opt/rocm/lib/ \
+     --env CUDA_VISIBLE_DEVICES=$ROCR_VISIBLE_DEVICES \
+     $IMG python3 mlpf/pipeline.py --dataset clic --gpus 8 \
+     --data-dir $TFDS_DATA_DIR --config parameters/pytorch/pyg-clic.yaml \
+     --train --gpu-batch-multiplier 128 --num-workers 8 --prefetch-factor 100 --checkpoint-freq 1 --conv-type attention --dtype bfloat16 --lr 0.0001 --num-epochs 30
+ ```
+
+ ## Evaluation
+
+ ```bash
+ #!/bin/bash
+ #SBATCH --partition gpu
+ #SBATCH --gres gpu:mig:1
+ #SBATCH --mem-per-gpu 200G
+ #SBATCH -o logs/slurm-%x-%j-%N.out
+
+ IMG=/home/software/singularity/pytorch.simg:2024-08-18
+ cd ~/particleflow
+
+ WEIGHTS=models/clic/clusters/v2.0.0/checkpoints/checkpoint-29-1.901667.pth
+ singularity exec -B /scratch/persistent --nv \
+     --env PYTHONPATH=`pwd` \
+     --env KERAS_BACKEND=torch \
+     $IMG python3 mlpf/pyg_pipeline.py --dataset clic --gpus 1 \
+     --data-dir /scratch/persistent/joosep/tensorflow_datasets --config parameters/pytorch/pyg-clic.yaml \
+     --test --make-plots --gpu-batch-multiplier 100 --load $WEIGHTS --dtype bfloat16 --prefetch-factor 10 --num-workers 8
+ ```
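One of the loss terms tracked per epoch is MET (missing transverse momentum). As a reference for what that quantity is, a standalone sketch of the standard definition (magnitude of the negative vector sum of the particles' transverse momenta); this is illustrative, not the loss implementation used in training:

```python
import math

def met(particles):
    """Missing transverse momentum: magnitude of the negative vector sum of
    the particles' transverse momenta. Each particle is a (pt, phi) pair."""
    px = -sum(pt * math.cos(phi) for pt, phi in particles)
    py = -sum(pt * math.sin(phi) for pt, phi in particles)
    return math.hypot(px, py)
```

Two back-to-back particles of equal pt give zero MET; a single unbalanced particle gives MET equal to its pt.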
+
+ ## Citation
+
+ ## Glossary
+
+ - PF: particle flow reconstruction
+ - MLPF: machine learning for particle flow
+ - CLIC: Compact Linear Collider
+
+ ## Model Card Contact
+
+ Joosep Pata, [email protected]
clic/clusters/v2.0.0/history/epoch_1.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.22410098989764854, "Regression_eta": 0.00027124753357291037, "Regression_sin_phi": 0.00018932005113030378, "Regression_cos_phi": 0.00018533344456782053, "Regression_energy": 0.2203152060866305, "Classification_binary": 2.9749025808813028, "Classification": 0.029067417771931017, "MET": 7.935175555277818, "Sliced_Wasserstein_Loss": 47.84886087701757, "Total": 3.449032113715898}, "valid": {"Regression_pt": 0.1809588171661475, "Regression_eta": 0.00020314403285550918, "Regression_sin_phi": 0.00014497815029414137, "Regression_cos_phi": 0.00014624492148494414, "Regression_energy": 0.17771335614072548, "Classification_binary": 2.486452471864952, "Classification": 0.022163982330027882, "MET": 4.977831089228296, "Sliced_Wasserstein_Loss": 42.26595156752411, "Total": 2.867782857717042}, "epoch_train_time": 11350.586619138718, "epoch_valid_time": 420.86845803260803, "epoch_total_time": 11777.652925252914}
clic/clusters/v2.0.0/history/epoch_10.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.11755432991994447, "Regression_eta": 0.0001295076287009004, "Regression_sin_phi": 8.343577538195924e-05, "Regression_cos_phi": 8.436581557554069e-05, "Regression_energy": 0.1157861150276523, "Classification_binary": 1.7303218680813455, "Classification": 0.011502766537676538, "MET": 4.10354491769033, "Sliced_Wasserstein_Loss": 34.71469923939437, "Total": 1.9754638846236252}, "valid": {"Regression_pt": 0.11910374877537179, "Regression_eta": 0.0001345980014065071, "Regression_sin_phi": 8.7266393796424e-05, "Regression_cos_phi": 8.84002236308009e-05, "Regression_energy": 0.1177366103390022, "Classification_binary": 1.7656793232516077, "Classification": 0.011443455196270222, "MET": 4.119582747186495, "Sliced_Wasserstein_Loss": 34.6036349477492, "Total": 2.0142735128617364}, "epoch_train_time": 11209.789572238922, "epoch_valid_time": 416.5914318561554, "epoch_total_time": 11632.885925292969}
clic/clusters/v2.0.0/history/epoch_11.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.11599312542404656, "Regression_eta": 0.0001275183183604386, "Regression_sin_phi": 8.20733652576653e-05, "Regression_cos_phi": 8.309921048876388e-05, "Regression_energy": 0.11427387117130589, "Classification_binary": 1.7118472472682473, "Classification": 0.01130196340185219, "MET": 4.07779747982431, "Sliced_Wasserstein_Loss": 34.498703310241396, "Total": 1.953710853806599}, "valid": {"Regression_pt": 0.11830525781564007, "Regression_eta": 0.00013354199300624935, "Regression_sin_phi": 8.635481645823291e-05, "Regression_cos_phi": 8.725384040660797e-05, "Regression_energy": 0.11687312969440816, "Classification_binary": 1.75138744096664, "Classification": 0.011257291836753919, "MET": 4.050699608118971, "Sliced_Wasserstein_Loss": 34.49408410369775, "Total": 1.998130714931672}, "epoch_train_time": 11221.056280136108, "epoch_valid_time": 417.03449153900146, "epoch_total_time": 11644.654552698135}
clic/clusters/v2.0.0/history/epoch_12.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.1146443888317294, "Regression_eta": 0.00012577207509729968, "Regression_sin_phi": 8.092914508285054e-05, "Regression_cos_phi": 8.200626852784459e-05, "Regression_energy": 0.11298052914873769, "Classification_binary": 1.6960452357252536, "Classification": 0.011135802577519617, "MET": 4.0526077189865735, "Sliced_Wasserstein_Loss": 34.313254356520495, "Total": 1.9350961860359235}, "valid": {"Regression_pt": 0.11705597021955386, "Regression_eta": 0.00013250527466225086, "Regression_sin_phi": 8.490811973522714e-05, "Regression_cos_phi": 8.627060333632197e-05, "Regression_energy": 0.11595015449155949, "Classification_binary": 1.7400233621382637, "Classification": 0.011120271836063103, "MET": 4.019951705687299, "Sliced_Wasserstein_Loss": 34.405772709003216, "Total": 1.9844535018086817}, "epoch_train_time": 11210.498413801193, "epoch_valid_time": 415.2183647155762, "epoch_total_time": 11632.256944417953}
clic/clusters/v2.0.0/history/epoch_13.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.11343557714411334, "Regression_eta": 0.00012412126226742562, "Regression_sin_phi": 7.985220042624962e-05, "Regression_cos_phi": 8.097113481403641e-05, "Regression_energy": 0.11182199452812544, "Classification_binary": 1.6819292277442508, "Classification": 0.010991795440006896, "MET": 4.028762911101985, "Sliced_Wasserstein_Loss": 34.144026299814314, "Total": 1.9184633779995715}, "valid": {"Regression_pt": 0.11587548945877713, "Regression_eta": 0.000130290023000294, "Regression_sin_phi": 8.379978957283535e-05, "Regression_cos_phi": 8.549349101026725e-05, "Regression_energy": 0.11482585158762058, "Classification_binary": 1.7307191707696945, "Classification": 0.011027215301415545, "MET": 4.010250452170418, "Sliced_Wasserstein_Loss": 34.209244372990355, "Total": 1.9727471550944533}, "epoch_train_time": 11217.373317480087, "epoch_valid_time": 417.2627513408661, "epoch_total_time": 11641.021118164062}
clic/clusters/v2.0.0/history/epoch_14.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.11233104220323437, "Regression_eta": 0.00012261849474079386, "Regression_sin_phi": 7.893113370725656e-05, "Regression_cos_phi": 8.002588057957313e-05, "Regression_energy": 0.11076830332831916, "Classification_binary": 1.6691825832916727, "Classification": 0.010866083156856499, "MET": 4.006217861734038, "Sliced_Wasserstein_Loss": 33.99698257391801, "Total": 1.903428974432224}, "valid": {"Regression_pt": 0.11487826135764168, "Regression_eta": 0.0001287383186088881, "Regression_sin_phi": 8.293907742025001e-05, "Regression_cos_phi": 8.436576538147267e-05, "Regression_energy": 0.1137227539847518, "Classification_binary": 1.7209353333500803, "Classification": 0.01087729647228572, "MET": 3.97639042403537, "Sliced_Wasserstein_Loss": 34.00300944533762, "Total": 1.9607107867765274}, "epoch_train_time": 11210.823578119278, "epoch_valid_time": 416.7164807319641, "epoch_total_time": 11634.004011631012}
clic/clusters/v2.0.0/history/epoch_15.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.11133088862584363, "Regression_eta": 0.00012126362778394329, "Regression_sin_phi": 7.814369854152654e-05, "Regression_cos_phi": 7.921281056376189e-05, "Regression_energy": 0.10981926375194169, "Classification_binary": 1.6575475266926154, "Classification": 0.010756359800511199, "MET": 3.9855149041208398, "Sliced_Wasserstein_Loss": 33.85943525924868, "Total": 1.8897301780549207}, "valid": {"Regression_pt": 0.11403750698666097, "Regression_eta": 0.00012722494517875254, "Regression_sin_phi": 8.216095890646195e-05, "Regression_cos_phi": 8.354090226041542e-05, "Regression_energy": 0.1130240388238545, "Classification_binary": 1.7124789615152733, "Classification": 0.010770042149583627, "MET": 4.015169375502412, "Sliced_Wasserstein_Loss": 33.934761856913184, "Total": 1.950602736887058}, "epoch_train_time": 11219.220667362213, "epoch_valid_time": 415.95088052749634, "epoch_total_time": 11641.601608276367}
clic/clusters/v2.0.0/history/epoch_16.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.11042405319050404, "Regression_eta": 0.0001200718328690907, "Regression_sin_phi": 7.746683974007916e-05, "Regression_cos_phi": 7.841118300511077e-05, "Regression_energy": 0.10896015970932724, "Classification_binary": 1.6468976809116556, "Classification": 0.010660380469851963, "MET": 3.9689274300099986, "Sliced_Wasserstein_Loss": 33.72894050849879, "Total": 1.8772180146139836}, "valid": {"Regression_pt": 0.11350140832244775, "Regression_eta": 0.00012682964755791177, "Regression_sin_phi": 8.166588579343446e-05, "Regression_cos_phi": 8.278518819348989e-05, "Regression_energy": 0.1124117639670418, "Classification_binary": 1.7058728773110932, "Classification": 0.010691288889796022, "MET": 3.9297477893890673, "Sliced_Wasserstein_Loss": 33.838901728295816, "Total": 1.9427690413987138}, "epoch_train_time": 11219.385544538498, "epoch_valid_time": 416.36424255371094, "epoch_total_time": 11641.983350753784}
clic/clusters/v2.0.0/history/epoch_17.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.10959823471088952, "Regression_eta": 0.0001189503613888681, "Regression_sin_phi": 7.682247785751589e-05, "Regression_cos_phi": 7.775076422209809e-05, "Regression_energy": 0.10818125312455364, "Classification_binary": 1.6371782267711756, "Classification": 0.010576099689441686, "MET": 3.9535671801349808, "Sliced_Wasserstein_Loss": 33.61642086844736, "Total": 1.8658073958184545}, "valid": {"Regression_pt": 0.1128469093052904, "Regression_eta": 0.00012563044426908832, "Regression_sin_phi": 8.105971997190518e-05, "Regression_cos_phi": 8.237407905112508e-05, "Regression_energy": 0.11182280414740756, "Classification_binary": 1.7019418207395498, "Classification": 0.01064330833901163, "MET": 3.93617269141881, "Sliced_Wasserstein_Loss": 33.74690514469454, "Total": 1.9375439610128617}, "epoch_train_time": 11235.242618560791, "epoch_valid_time": 417.24506163597107, "epoch_total_time": 11658.845057725906}
clic/clusters/v2.0.0/history/epoch_18.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.10883164674856717, "Regression_eta": 0.00011799402227403095, "Regression_sin_phi": 7.631180712434671e-05, "Regression_cos_phi": 7.714453534966894e-05, "Regression_energy": 0.1074518651353378, "Classification_binary": 1.6280019428028139, "Classification": 0.010498198351337643, "MET": 3.9387966897586058, "Sliced_Wasserstein_Loss": 33.506046011284106, "Total": 1.855053351753321}, "valid": {"Regression_pt": 0.11230542345445639, "Regression_eta": 0.00012436031911917437, "Regression_sin_phi": 8.059409461987363e-05, "Regression_cos_phi": 8.169540063361263e-05, "Regression_energy": 0.1112928973133541, "Classification_binary": 1.6954759407656752, "Classification": 0.010563235267565564, "MET": 3.913439823653537, "Sliced_Wasserstein_Loss": 33.69031601688103, "Total": 1.9299245754622187}, "epoch_train_time": 11231.461406707764, "epoch_valid_time": 416.85057735443115, "epoch_total_time": 11654.869730949402}
clic/clusters/v2.0.0/history/epoch_19.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.10812470777054171, "Regression_eta": 0.00011713886036223777, "Regression_sin_phi": 7.583019376192718e-05, "Regression_cos_phi": 7.664578384270822e-05, "Regression_energy": 0.10678898724568366, "Classification_binary": 1.6196017756391945, "Classification": 0.010433087500822984, "MET": 3.9279115260319952, "Sliced_Wasserstein_Loss": 33.40636605484931, "Total": 1.8452181217415369}, "valid": {"Regression_pt": 0.11187282942498995, "Regression_eta": 0.00012365884336244638, "Regression_sin_phi": 8.001324639826343e-05, "Regression_cos_phi": 8.111327790754018e-05, "Regression_energy": 0.11095981413911776, "Classification_binary": 1.6907678732918006, "Classification": 0.010488686760905471, "MET": 3.9330002763263665, "Sliced_Wasserstein_Loss": 33.609555868167206, "Total": 1.924373869573955}, "epoch_train_time": 11233.42175936699, "epoch_valid_time": 416.27726459503174, "epoch_total_time": 11656.115841150284}
clic/clusters/v2.0.0/history/epoch_2.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.16195459451663155, "Regression_eta": 0.00017737329150928528, "Regression_sin_phi": 0.00012810314370809053, "Regression_cos_phi": 0.00012819214627293583, "Regression_energy": 0.1588706147447686, "Classification_binary": 2.2734354076649765, "Classification": 0.01913727421264038, "MET": 4.808671305884873, "Sliced_Wasserstein_Loss": 40.444829310098555, "Total": 2.613829162798172}, "valid": {"Regression_pt": 0.15254765844805065, "Regression_eta": 0.00017308740370526574, "Regression_sin_phi": 0.00012081858621149584, "Regression_cos_phi": 0.0001220595798308443, "Regression_energy": 0.15012515150849076, "Classification_binary": 2.179849213725884, "Classification": 0.017035812426993317, "MET": 4.531436520297428, "Sliced_Wasserstein_Loss": 39.15094704581993, "Total": 2.4999742514067522}, "epoch_train_time": 11215.72525548935, "epoch_valid_time": 416.6873936653137, "epoch_total_time": 11638.765037059784}
clic/clusters/v2.0.0/history/epoch_20.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.10748206450417798, "Regression_eta": 0.00011632373853677205, "Regression_sin_phi": 7.539108095467388e-05, "Regression_cos_phi": 7.61319545145938e-05, "Regression_energy": 0.10618606347251731, "Classification_binary": 1.6118662735680618, "Classification": 0.010375622391206948, "MET": 3.9123275357984575, "Sliced_Wasserstein_Loss": 33.31785860948436, "Total": 1.836176584148693}, "valid": {"Regression_pt": 0.11135896639808582, "Regression_eta": 0.00012300748533757935, "Regression_sin_phi": 7.96666386809763e-05, "Regression_cos_phi": 8.063100733557698e-05, "Regression_energy": 0.11043660939698051, "Classification_binary": 1.6876306270096464, "Classification": 0.010441160048702523, "MET": 3.932484676446945, "Sliced_Wasserstein_Loss": 33.580541599678455, "Total": 1.9201511002813505}, "epoch_train_time": 11212.203944444656, "epoch_valid_time": 416.51384139060974, "epoch_total_time": 11635.304948091507}
clic/clusters/v2.0.0/history/epoch_21.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.10687463349264838, "Regression_eta": 0.00011559049328162557, "Regression_sin_phi": 7.500507681391236e-05, "Regression_cos_phi": 7.571664610482542e-05, "Regression_energy": 0.10561656475135249, "Classification_binary": 1.6046765919600772, "Classification": 0.0103237782524511, "MET": 3.9033355168011714, "Sliced_Wasserstein_Loss": 33.23323453792315, "Total": 1.8277580044368662}, "valid": {"Regression_pt": 0.11090413673131029, "Regression_eta": 0.00012243332586871084, "Regression_sin_phi": 7.941454074007138e-05, "Regression_cos_phi": 8.034869022308055e-05, "Regression_energy": 0.10994413811294212, "Classification_binary": 1.6844999748794212, "Classification": 0.010403036381270724, "MET": 3.921498191318328, "Sliced_Wasserstein_Loss": 33.49826919212219, "Total": 1.9160335234123793}, "epoch_train_time": 11223.394551753998, "epoch_valid_time": 416.8033883571625, "epoch_total_time": 11646.649324893951}
clic/clusters/v2.0.0/history/epoch_22.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.10632739198529674, "Regression_eta": 0.00011500313084153785, "Regression_sin_phi": 7.467548466804895e-05, "Regression_cos_phi": 7.536618679936418e-05, "Regression_energy": 0.10510465755171136, "Classification_binary": 1.597958801644408, "Classification": 0.01027920933693345, "MET": 3.8968947516426224, "Sliced_Wasserstein_Loss": 33.15556706184831, "Total": 1.8199361586737608}, "valid": {"Regression_pt": 0.11054084998618369, "Regression_eta": 0.00012211642464640823, "Regression_sin_phi": 7.891557224310479e-05, "Regression_cos_phi": 8.000904915800431e-05, "Regression_energy": 0.1094820764670418, "Classification_binary": 1.6824691330888264, "Classification": 0.010369256071722392, "MET": 3.902202446744373, "Sliced_Wasserstein_Loss": 33.42549236334405, "Total": 1.913142144795016}, "epoch_train_time": 11212.190004348755, "epoch_valid_time": 418.27385449409485, "epoch_total_time": 11636.898732662201}
clic/clusters/v2.0.0/history/epoch_23.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.10582694903797225, "Regression_eta": 0.00011444309748439956, "Regression_sin_phi": 7.434966410590723e-05, "Regression_cos_phi": 7.502420717197697e-05, "Regression_energy": 0.1046316939376964, "Classification_binary": 1.5917921323739466, "Classification": 0.010237994696000459, "MET": 3.88400568891944, "Sliced_Wasserstein_Loss": 33.08106654406513, "Total": 1.8127523356038424}, "valid": {"Regression_pt": 0.11020492112138264, "Regression_eta": 0.00012220931973104692, "Regression_sin_phi": 7.871877150520251e-05, "Regression_cos_phi": 7.96159651501769e-05, "Regression_energy": 0.10915503793207396, "Classification_binary": 1.6803984123794211, "Classification": 0.010336244834580989, "MET": 3.8741342820538587, "Sliced_Wasserstein_Loss": 33.40897307073955, "Total": 1.9103751130426045}, "epoch_train_time": 11492.488549947739, "epoch_valid_time": 414.8949770927429, "epoch_total_time": 11913.816793203354}
clic/clusters/v2.0.0/history/epoch_24.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.10537340926742608, "Regression_eta": 0.00011395173589768809, "Regression_sin_phi": 7.40716226133955e-05, "Regression_cos_phi": 7.472390226765438e-05, "Regression_energy": 0.10420991406919547, "Classification_binary": 1.5862389357323954, "Classification": 0.010202363974161615, "MET": 3.877836648335952, "Sliced_Wasserstein_Loss": 33.016834648621625, "Total": 1.80628994965005}, "valid": {"Regression_pt": 0.1099853515625, "Regression_eta": 0.00012166760742089373, "Regression_sin_phi": 7.83466976555214e-05, "Regression_cos_phi": 7.932854522846136e-05, "Regression_energy": 0.10893743091840837, "Classification_binary": 1.6789492689911576, "Classification": 0.010306545392493344, "MET": 3.879703828376206, "Sliced_Wasserstein_Loss": 33.311364549839226, "Total": 1.9084574708601285}, "epoch_train_time": 11487.443885326385, "epoch_valid_time": 415.8364679813385, "epoch_total_time": 11910.326171636581}
clic/clusters/v2.0.0/history/epoch_25.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.10497148914552386, "Regression_eta": 0.00011349009575290078, "Regression_sin_phi": 7.381505275552502e-05, "Regression_cos_phi": 7.444741027454975e-05, "Regression_energy": 0.10383741740856217, "Classification_binary": 1.5812325359770032, "Classification": 0.010170177976806884, "MET": 3.8715518318811597, "Sliced_Wasserstein_Loss": 32.96086719754321, "Total": 1.800472839817526}, "valid": {"Regression_pt": 0.10967475915645096, "Regression_eta": 0.00012127597998959458, "Regression_sin_phi": 7.82751289594595e-05, "Regression_cos_phi": 7.920638445489277e-05, "Regression_energy": 0.10866716881656953, "Classification_binary": 1.676336100783762, "Classification": 0.010273796645774718, "MET": 3.89210711414791, "Sliced_Wasserstein_Loss": 33.23907757234727, "Total": 1.9052304185088424}, "epoch_train_time": 11486.676162958145, "epoch_valid_time": 414.46273970603943, "epoch_total_time": 11907.428564786911}
clic/clusters/v2.0.0/history/epoch_26.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.1046210579013266, "Regression_eta": 0.00011314179042189007, "Regression_sin_phi": 7.366601985925403e-05, "Regression_cos_phi": 7.426204076580892e-05, "Regression_energy": 0.10350892080986199, "Classification_binary": 1.5768355636694757, "Classification": 0.01014353230823331, "MET": 3.866421705381374, "Sliced_Wasserstein_Loss": 32.901092254677906, "Total": 1.795371727030067}, "valid": {"Regression_pt": 0.1095309045920418, "Regression_eta": 0.00012065126198281045, "Regression_sin_phi": 7.814548885707303e-05, "Regression_cos_phi": 7.902628643336403e-05, "Regression_energy": 0.10850268790192925, "Classification_binary": 1.6756780986233923, "Classification": 0.010261604946909227, "MET": 3.8605176723271706, "Sliced_Wasserstein_Loss": 33.22721563504823, "Total": 1.9042516579581994}, "epoch_train_time": 11479.693927288055, "epoch_valid_time": 417.1463315486908, "epoch_total_time": 11903.40635561943}
clic/clusters/v2.0.0/history/epoch_27.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.10432675354413655, "Regression_eta": 0.00011282419412583219, "Regression_sin_phi": 7.350894999902532e-05, "Regression_cos_phi": 7.411791177974941e-05, "Regression_energy": 0.10323505159140212, "Classification_binary": 1.5729909678081702, "Classification": 0.010120879512329713, "MET": 3.8640849856270534, "Sliced_Wasserstein_Loss": 32.860613662333954, "Total": 1.7909338844450793}, "valid": {"Regression_pt": 0.1093830268099377, "Regression_eta": 0.0001203362389776101, "Regression_sin_phi": 7.803604437990587e-05, "Regression_cos_phi": 7.884450664090957e-05, "Regression_energy": 0.10836989227981812, "Classification_binary": 1.673815564710611, "Classification": 0.010237073744991585, "MET": 3.8567765901326365, "Sliced_Wasserstein_Loss": 33.18617363344052, "Total": 1.90208233897709}, "epoch_train_time": 11509.053350448608, "epoch_valid_time": 415.4282293319702, "epoch_total_time": 11930.967039823532}
clic/clusters/v2.0.0/history/epoch_28.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.10406909120404674, "Regression_eta": 0.00011259657947391531, "Regression_sin_phi": 7.335984899289029e-05, "Regression_cos_phi": 7.39567553276778e-05, "Regression_energy": 0.10299630744715041, "Classification_binary": 1.569789416244108, "Classification": 0.010104072177398206, "MET": 3.8581435910941293, "Sliced_Wasserstein_Loss": 32.8221950435652, "Total": 1.7872187343772319}, "valid": {"Regression_pt": 0.1093371915663937, "Regression_eta": 0.00012041222052558825, "Regression_sin_phi": 7.789004653979727e-05, "Regression_cos_phi": 7.877612899737343e-05, "Regression_energy": 0.10830188027532155, "Classification_binary": 1.6738309510651126, "Classification": 0.010229814167574671, "MET": 3.84474383289791, "Sliced_Wasserstein_Loss": 33.204220257234724, "Total": 1.9019774605606914}, "epoch_train_time": 11456.624339103699, "epoch_valid_time": 415.2877013683319, "epoch_total_time": 11878.801152706146}
clic/clusters/v2.0.0/history/epoch_29.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.10386441724842879, "Regression_eta": 0.00011236702369087305, "Regression_sin_phi": 7.330479068154013e-05, "Regression_cos_phi": 7.385174694478111e-05, "Regression_energy": 0.10280298440903532, "Classification_binary": 1.5671975711059134, "Classification": 0.010089120700042293, "MET": 3.8600258556813314, "Sliced_Wasserstein_Loss": 32.78943633052421, "Total": 1.7842155361734038}, "valid": {"Regression_pt": 0.10923993763816318, "Regression_eta": 0.00011995665705089016, "Regression_sin_phi": 7.786119290870102e-05, "Regression_cos_phi": 7.867006722753837e-05, "Regression_energy": 0.10822336865391378, "Classification_binary": 1.6737055051748393, "Classification": 0.010221332905760149, "MET": 3.8721824130827973, "Sliced_Wasserstein_Loss": 33.17918006430868, "Total": 1.9016669074055466}, "epoch_train_time": 11500.102574825287, "epoch_valid_time": 413.5199978351593, "epoch_total_time": 11920.152396202087}
clic/clusters/v2.0.0/history/epoch_3.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.14496229716904283, "Regression_eta": 0.00015978972548468591, "Regression_sin_phi": 0.00011169671195010733, "Regression_cos_phi": 0.00011186725904968462, "Regression_energy": 0.14217958748638587, "Classification_binary": 2.0907613086523353, "Classification": 0.016321110054520672, "MET": 4.529948846593344, "Sliced_Wasserstein_Loss": 38.47457506070561, "Total": 2.3946121536209115}, "valid": {"Regression_pt": 0.14203008424814106, "Regression_eta": 0.00016207206287568022, "Regression_sin_phi": 0.00011132922969830381, "Regression_cos_phi": 0.00011116718555953341, "Regression_energy": 0.13993038655860632, "Classification_binary": 2.070844899266479, "Classification": 0.015637052480814158, "MET": 4.390331717242765, "Sliced_Wasserstein_Loss": 37.99571442926045, "Total": 2.368827873794212}, "epoch_train_time": 11230.485923290253, "epoch_valid_time": 417.6828181743622, "epoch_total_time": 11654.763090133667}
clic/clusters/v2.0.0/history/epoch_4.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.13628292533265338, "Regression_eta": 0.00015098359683227104, "Regression_sin_phi": 0.00010357710518610169, "Regression_cos_phi": 0.00010353283377447021, "Regression_energy": 0.1336098365691062, "Classification_binary": 1.999462269898943, "Classification": 0.015236734543642202, "MET": 4.386986289905013, "Sliced_Wasserstein_Loss": 37.41231163405228, "Total": 2.2849514801456934}, "valid": {"Regression_pt": 0.13568469473786676, "Regression_eta": 0.00015768849581384198, "Regression_sin_phi": 0.00010537840545752424, "Regression_cos_phi": 0.00010523528723088108, "Regression_energy": 0.13362879124485028, "Classification_binary": 1.9973497789389068, "Classification": 0.014792787607076467, "MET": 4.334894681973473, "Sliced_Wasserstein_Loss": 36.97285470257235, "Total": 2.28182400522508}, "epoch_train_time": 11210.8334004879, "epoch_valid_time": 414.8488028049469, "epoch_total_time": 11632.07339334488}
clic/clusters/v2.0.0/history/epoch_5.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.13111573834876267, "Regression_eta": 0.00014555711368887173, "Regression_sin_phi": 9.84128535330219e-05, "Regression_cos_phi": 9.837286452909245e-05, "Regression_energy": 0.12852008341721272, "Classification_binary": 1.9302377841111984, "Classification": 0.014384665977544503, "MET": 4.301009454006571, "Sliced_Wasserstein_Loss": 36.69936660834166, "Total": 2.2046026069936437}, "valid": {"Regression_pt": 0.13089269901778536, "Regression_eta": 0.0001510359658305668, "Regression_sin_phi": 0.00010049001005301522, "Regression_cos_phi": 0.00010077041061744812, "Regression_energy": 0.12910330916524318, "Classification_binary": 1.9329823779139872, "Classification": 0.013855981366810691, "MET": 4.278978471663987, "Sliced_Wasserstein_Loss": 36.31261555466238, "Total": 2.2071882536173635}, "epoch_train_time": 11231.486518859863, "epoch_valid_time": 416.5859453678131, "epoch_total_time": 11654.359743595123}
clic/clusters/v2.0.0/history/epoch_6.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.12751388822244145, "Regression_eta": 0.00014165317547387726, "Regression_sin_phi": 9.496446814371541e-05, "Regression_cos_phi": 9.485025641543919e-05, "Regression_energy": 0.12504516828232126, "Classification_binary": 1.8715302110859162, "Classification": 0.013408395532641542, "MET": 4.240105487162548, "Sliced_Wasserstein_Loss": 36.15900407084702, "Total": 2.137831593254535}, "valid": {"Regression_pt": 0.12807342431169613, "Regression_eta": 0.00014617216932045303, "Regression_sin_phi": 9.77680614140256e-05, "Regression_cos_phi": 9.759754613281446e-05, "Regression_energy": 0.12617450481059084, "Classification_binary": 1.879413528687701, "Classification": 0.01291795712191959, "MET": 4.29762233721865, "Sliced_Wasserstein_Loss": 35.808601286173634, "Total": 2.1469214730707393}, "epoch_train_time": 11222.83605313301, "epoch_valid_time": 414.3109006881714, "epoch_total_time": 11644.033034801483}
clic/clusters/v2.0.0/history/epoch_7.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.124092284040338, "Regression_eta": 0.0001369230135255772, "Regression_sin_phi": 9.062163847852581e-05, "Regression_cos_phi": 9.13321180116141e-05, "Regression_energy": 0.12206584885194972, "Classification_binary": 1.8204286943383088, "Classification": 0.012620926720638682, "MET": 4.214292489465791, "Sliced_Wasserstein_Loss": 35.66058420225682, "Total": 2.079524074685759}, "valid": {"Regression_pt": 0.1250619673652281, "Regression_eta": 0.0001421002329737427, "Regression_sin_phi": 9.3659998136318e-05, "Regression_cos_phi": 9.405039897685649e-05, "Regression_energy": 0.12348332543081793, "Classification_binary": 1.8350366132435691, "Classification": 0.012317114090996157, "MET": 4.232725834003215, "Sliced_Wasserstein_Loss": 35.38281752411576, "Total": 2.096229087118167}, "epoch_train_time": 11242.381155490875, "epoch_valid_time": 417.21450686454773, "epoch_total_time": 11665.937296628952}
clic/clusters/v2.0.0/history/epoch_8.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.12145471266381588, "Regression_eta": 0.00013390145562679627, "Regression_sin_phi": 8.719075732427978e-05, "Regression_cos_phi": 8.812197241301605e-05, "Regression_energy": 0.11957195114704595, "Classification_binary": 1.7823741418636623, "Classification": 0.012110475437451458, "MET": 4.17112482814955, "Sliced_Wasserstein_Loss": 35.2700730252821, "Total": 2.0358210545814885}, "valid": {"Regression_pt": 0.12264579944671926, "Regression_eta": 0.00014085805109459486, "Regression_sin_phi": 9.00495186496011e-05, "Regression_cos_phi": 9.156061138754106e-05, "Regression_energy": 0.12115546223434988, "Classification_binary": 1.8063186105807878, "Classification": 0.011966087120522257, "MET": 4.201982013665595, "Sliced_Wasserstein_Loss": 35.172462821543405, "Total": 2.0624090949055467}, "epoch_train_time": 11220.988495588303, "epoch_valid_time": 416.6119010448456, "epoch_total_time": 11644.025322198868}
clic/clusters/v2.0.0/history/epoch_9.json ADDED
@@ -0,0 +1 @@
+ {"train": {"Regression_pt": 0.11933376495880053, "Regression_eta": 0.00013156201801646047, "Regression_sin_phi": 8.509110049032651e-05, "Regression_cos_phi": 8.591650690526353e-05, "Regression_energy": 0.11751268373491287, "Classification_binary": 1.753052298332381, "Classification": 0.011758926101725981, "MET": 4.1335741099485785, "Sliced_Wasserstein_Loss": 34.96227101485502, "Total": 2.001957728137052}, "valid": {"Regression_pt": 0.12088643653599779, "Regression_eta": 0.0001371413948451591, "Regression_sin_phi": 8.864979651963212e-05, "Regression_cos_phi": 8.976404881554018e-05, "Regression_energy": 0.11950649249208702, "Classification_binary": 1.7831902507033761, "Classification": 0.01168908122268137, "MET": 4.149929662379421, "Sliced_Wasserstein_Loss": 34.88086314308681, "Total": 2.0355873819332797}, "epoch_train_time": 11223.379476547241, "epoch_valid_time": 416.62553310394287, "epoch_total_time": 11646.32419371605}
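Each `epoch_N.json` above is a single-line JSON object with `"train"` and `"valid"` loss dictionaries (per-target losses plus `"Total"`) and epoch timing fields. A minimal sketch for collecting these files into a loss curve — `load_history` is a hypothetical helper written for illustration, not part of this repository:

```python
import json
from pathlib import Path


def load_history(history_dir):
    """Read epoch_*.json files and return sorted (epoch, train_total, valid_total) tuples."""
    records = []
    for path in Path(history_dir).glob("epoch_*.json"):
        # File names follow the epoch_<N>.json convention seen above.
        epoch = int(path.stem.split("_")[1])
        data = json.loads(path.read_text())
        records.append((epoch, data["train"]["Total"], data["valid"]["Total"]))
    return sorted(records)
```

The sorted tuples can then be fed directly into any plotting library to compare train and validation totals per epoch.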
clic/clusters/v2.0.0/hyperparameters.json ADDED
@@ -0,0 +1 @@
+ {"Num of mlpf parameters": 89388050, "config": "parameters/pytorch/pyg-clic.yaml", "prefix": null, "data_dir": "/scratch/project_465000301/tensorflow_datasets", "gpus": 8, "gpu_batch_multiplier": 128, "dataset": "clic", "num_workers": 8, "prefetch_factor": 100, "resume_training": null, "load": "experiments/pyg-clic_20241011_102451_167094/checkpoints/checkpoint-22-1.913142.pth", "train": true, "test": null, "num_epochs": 30, "patience": null, "lr": 0.0001, "conv_type": "attention", "num_convs": null, "make_plots": null, "export_onnx": null, "ntrain": null, "ntest": null, "nvalid": null, "val_freq": null, "checkpoint_freq": 1, "hpo": null, "ray_train": false, "local": null, "ray_cpus": null, "ray_gpus": null, "raytune_num_samples": null, "comet": false, "comet_offline": false, "comet_step_freq": null, "experiments_dir": null, "pipeline": null, "dtype": "bfloat16", "attention_type": null, "test_datasets": {"clic_edm_qq_pf": {"version": "2.3.0"}, "clic_edm_ttbar_pf": {"version": "2.3.0"}, "clic_edm_ww_fullhad_pf": {"version": "2.3.0"}}}
clic/clusters/v2.0.0/test-config.yaml ADDED
@@ -0,0 +1,126 @@
+ backend: pytorch
+ checkpoint_freq: null
+ comet: false
+ comet_name: particleflow-pt
+ comet_offline: false
+ comet_step_freq: 100
+ config: parameters/pytorch/pyg-clic.yaml
+ conv_type: attention
+ data_dir: /scratch/persistent/joosep/tensorflow_datasets
+ dataset: clic
+ dtype: bfloat16
+ gpu_batch_multiplier: 100
+ gpus: 1
+ load: experiments/pyg-clic_20241011_102451_167094/checkpoints/checkpoint-29-1.901667.pth
+ lr: 0.0001
+ lr_schedule: cosinedecay
+ lr_schedule_config:
+   onecycle:
+     pct_start: 0.3
+ make_plots: true
+ model:
+   attention:
+     activation: gelu
+     attention_type: math
+     conv_type: attention
+     dropout_conv_id_ff: 0.0
+     dropout_conv_id_mha: 0.0
+     dropout_conv_reg_ff: 0.1
+     dropout_conv_reg_mha: 0.1
+     dropout_ff: 0.1
+     head_dim: 32
+     num_convs: 6
+     num_heads: 32
+     use_pre_layernorm: true
+   cos_phi_mode: linear
+   energy_mode: direct-elemtype-split
+   eta_mode: linear
+   gnn_lsh:
+     activation: elu
+     bin_size: 32
+     conv_type: gnn_lsh
+     distance_dim: 128
+     embedding_dim: 512
+     ffn_dist_hidden_dim: 128
+     ffn_dist_num_layers: 2
+     layernorm: true
+     max_num_bins: 200
+     num_convs: 8
+     num_node_messages: 2
+     width: 512
+   input_encoding: split
+   learned_representation_mode: last
+   mamba:
+     activation: elu
+     conv_type: mamba
+     d_conv: 4
+     d_state: 16
+     dropout: 0.0
+     embedding_dim: 128
+     expand: 2
+     num_convs: 2
+     num_heads: 2
+     width: 128
+   pt_mode: direct-elemtype-split
+   sin_phi_mode: linear
+   trainable: all
+ ntest: null
+ ntrain: null
+ num_epochs: 30
+ num_workers: 8
+ nvalid: null
+ patience: 20
+ prefetch_factor: 10
+ ray_train: false
+ raytune:
+   asha:
+     brackets: 1
+     grace_period: 10
+     max_t: 200
+     reduction_factor: 4
+   default_metric: val_loss
+   default_mode: min
+   hyperband:
+     max_t: 200
+     reduction_factor: 4
+   hyperopt:
+     n_random_steps: 10
+   local_dir: null
+   nevergrad:
+     n_random_steps: 10
+   sched: null
+   search_alg: null
+ save_attention: true
+ sort_data: false
+ test: true
+ test_dataset:
+   clic_edm_qq_pf:
+     version: 2.3.0
+   clic_edm_ttbar_pf:
+     version: 2.3.0
+   clic_edm_ww_fullhad_pf:
+     version: 2.3.0
+ test_datasets: []
+ train_dataset:
+   clic:
+     physical:
+       batch_size: 1
+       samples:
+         clic_edm_qq_pf:
+           version: 2.3.0
+         clic_edm_ttbar_pf:
+           version: 2.3.0
+         clic_edm_ww_fullhad_pf:
+           version: 2.3.0
+ val_freq: null
+ valid_dataset:
+   clic:
+     physical:
+       batch_size: 1
+       samples:
+         clic_edm_qq_pf:
+           version: 2.3.0
+         clic_edm_ttbar_pf:
+           version: 2.3.0
+         clic_edm_ww_fullhad_pf:
+           version: 2.3.0
clic/clusters/v2.0.0/test.log ADDED
The diff for this file is too large to render. See raw diff
 
clic/clusters/v2.0.0/train-config.yaml ADDED
@@ -0,0 +1,125 @@
+ backend: pytorch
+ checkpoint_freq: 1
+ comet: false
+ comet_name: particleflow-pt
+ comet_offline: false
+ comet_step_freq: 100
+ config: parameters/pytorch/pyg-clic.yaml
+ conv_type: attention
+ data_dir: /scratch/project_465000301/tensorflow_datasets
+ dataset: clic
+ dtype: bfloat16
+ gpu_batch_multiplier: 128
+ gpus: 8
+ load: experiments/pyg-clic_20241011_102451_167094/checkpoints/checkpoint-22-1.913142.pth
+ lr: 0.0001
+ lr_schedule: cosinedecay
+ lr_schedule_config:
+   onecycle:
+     pct_start: 0.3
+ model:
+   attention:
+     activation: gelu
+     attention_type: math
+     conv_type: attention
+     dropout_conv_id_ff: 0.0
+     dropout_conv_id_mha: 0.0
+     dropout_conv_reg_ff: 0.1
+     dropout_conv_reg_mha: 0.1
+     dropout_ff: 0.1
+     head_dim: 32
+     num_convs: 6
+     num_heads: 32
+     use_pre_layernorm: true
+   cos_phi_mode: linear
+   energy_mode: direct-elemtype-split
+   eta_mode: linear
+   gnn_lsh:
+     activation: elu
+     bin_size: 32
+     conv_type: gnn_lsh
+     distance_dim: 128
+     embedding_dim: 512
+     ffn_dist_hidden_dim: 128
+     ffn_dist_num_layers: 2
+     layernorm: true
+     max_num_bins: 200
+     num_convs: 8
+     num_node_messages: 2
+     width: 512
+   input_encoding: split
+   learned_representation_mode: last
+   mamba:
+     activation: elu
+     conv_type: mamba
+     d_conv: 4
+     d_state: 16
+     dropout: 0.0
+     embedding_dim: 128
+     expand: 2
+     num_convs: 2
+     num_heads: 2
+     width: 128
+   pt_mode: direct-elemtype-split
+   sin_phi_mode: linear
+   trainable: all
+ ntest: null
+ ntrain: null
+ num_epochs: 30
+ num_workers: 8
+ nvalid: null
+ patience: 20
+ prefetch_factor: 100
+ ray_train: false
+ raytune:
+   asha:
+     brackets: 1
+     grace_period: 10
+     max_t: 200
+     reduction_factor: 4
+   default_metric: val_loss
+   default_mode: min
+   hyperband:
+     max_t: 200
+     reduction_factor: 4
+   hyperopt:
+     n_random_steps: 10
+   local_dir: null
+   nevergrad:
+     n_random_steps: 10
+   sched: null
+   search_alg: null
+ save_attention: true
+ sort_data: false
+ test_dataset:
+   clic_edm_qq_pf:
+     version: 2.3.0
+   clic_edm_ttbar_pf:
+     version: 2.3.0
+   clic_edm_ww_fullhad_pf:
+     version: 2.3.0
+ test_datasets: []
+ train: true
+ train_dataset:
+   clic:
+     physical:
+       batch_size: 1
+       samples:
+         clic_edm_qq_pf:
+           version: 2.3.0
+         clic_edm_ttbar_pf:
+           version: 2.3.0
+         clic_edm_ww_fullhad_pf:
+           version: 2.3.0
+ val_freq: null
+ valid_dataset:
+   clic:
+     physical:
+       batch_size: 1
+       samples:
+         clic_edm_qq_pf:
+           version: 2.3.0
+         clic_edm_ttbar_pf:
+           version: 2.3.0
+         clic_edm_ww_fullhad_pf:
+           version: 2.3.0
clic/clusters/v2.0.0/train.log ADDED
@@ -0,0 +1,850 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [2024-10-11 10:25:04,881] INFO: Will use torch.nn.parallel.DistributedDataParallel() and 8 gpus
2
+ [2024-10-11 10:25:04,885] INFO: AMD Radeon Graphics
3
+ [2024-10-11 10:25:04,885] INFO: AMD Radeon Graphics
4
+ [2024-10-11 10:25:04,885] INFO: AMD Radeon Graphics
5
+ [2024-10-11 10:25:04,885] INFO: AMD Radeon Graphics
6
+ [2024-10-11 10:25:04,886] INFO: AMD Radeon Graphics
7
+ [2024-10-11 10:25:04,886] INFO: AMD Radeon Graphics
8
+ [2024-10-11 10:25:04,886] INFO: AMD Radeon Graphics
9
+ [2024-10-11 10:25:04,886] INFO: AMD Radeon Graphics
10
+ [2024-10-11 10:25:08,545] INFO: configured dtype=torch.bfloat16 for autocast
11
+ [2024-10-11 10:25:10,626] INFO: using attention_type=math
12
+ [2024-10-11 10:25:10,662] INFO: using attention_type=math
13
+ [2024-10-11 10:25:10,697] INFO: using attention_type=math
14
+ [2024-10-11 10:25:10,735] INFO: using attention_type=math
15
+ [2024-10-11 10:25:10,770] INFO: using attention_type=math
16
+ [2024-10-11 10:25:10,807] INFO: using attention_type=math
17
+ [2024-10-11 10:25:10,843] INFO: using attention_type=math
18
+ [2024-10-11 10:25:10,879] INFO: using attention_type=math
19
+ [2024-10-11 10:25:10,914] INFO: using attention_type=math
20
+ [2024-10-11 10:25:10,950] INFO: using attention_type=math
21
+ [2024-10-11 10:25:10,987] INFO: using attention_type=math
22
+ [2024-10-11 10:25:11,024] INFO: using attention_type=math
23
+ [2024-10-11 10:25:16,061] INFO: DistributedDataParallel(
24
+ (module): MLPF(
25
+ (nn0_id): ModuleList(
26
+ (0-1): 2 x Sequential(
27
+ (0): Linear(in_features=17, out_features=1024, bias=True)
28
+ (1): GELU(approximate='none')
29
+ (2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
30
+ (3): Dropout(p=0.1, inplace=False)
31
+ (4): Linear(in_features=1024, out_features=1024, bias=True)
32
+ )
33
+ )
34
+ (nn0_reg): ModuleList(
35
+ (0-1): 2 x Sequential(
36
+ (0): Linear(in_features=17, out_features=1024, bias=True)
37
+ (1): GELU(approximate='none')
38
+ (2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
39
+ (3): Dropout(p=0.1, inplace=False)
40
+ (4): Linear(in_features=1024, out_features=1024, bias=True)
41
+ )
42
+ )
43
+ (conv_id): ModuleList(
44
+ (0-5): 6 x PreLnSelfAttentionLayer(
45
+ (mha): MultiheadAttention(
46
+ (out_proj): NonDynamicallyQuantizableLinear(in_features=1024, out_features=1024, bias=True)
47
+ )
48
+ (norm0): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
49
+ (norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
50
+ (seq): Sequential(
51
+ (0): Linear(in_features=1024, out_features=1024, bias=True)
52
+ (1): GELU(approximate='none')
53
+ (2): Linear(in_features=1024, out_features=1024, bias=True)
54
+ (3): GELU(approximate='none')
55
+ )
56
+ (dropout): Dropout(p=0.0, inplace=False)
57
+ )
58
+ )
59
+ (conv_reg): ModuleList(
60
+ (0-5): 6 x PreLnSelfAttentionLayer(
61
+ (mha): MultiheadAttention(
62
+ (out_proj): NonDynamicallyQuantizableLinear(in_features=1024, out_features=1024, bias=True)
63
+ )
64
+ (norm0): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
65
+ (norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
66
+ (seq): Sequential(
67
+ (0): Linear(in_features=1024, out_features=1024, bias=True)
68
+ (1): GELU(approximate='none')
69
+ (2): Linear(in_features=1024, out_features=1024, bias=True)
70
+ (3): GELU(approximate='none')
71
+ )
72
+ (dropout): Dropout(p=0.1, inplace=False)
73
+ )
74
+ )
75
+ (nn_binary_particle): Sequential(
76
+ (0): Linear(in_features=1024, out_features=1024, bias=True)
77
+ (1): GELU(approximate='none')
78
+ (2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
79
+ (3): Dropout(p=0.1, inplace=False)
80
+ (4): Linear(in_features=1024, out_features=2, bias=True)
81
+ )
82
+ (nn_pid): Sequential(
83
+ (0): Linear(in_features=1024, out_features=1024, bias=True)
84
+ (1): GELU(approximate='none')
85
+ (2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
86
+ (3): Dropout(p=0.1, inplace=False)
87
+ (4): Linear(in_features=1024, out_features=6, bias=True)
88
+ )
89
+ (nn_pt): RegressionOutput(
90
+ (nn): ModuleList(
91
+ (0-1): 2 x Sequential(
92
+ (0): Linear(in_features=1024, out_features=1024, bias=True)
93
+ (1): GELU(approximate='none')
94
+ (2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
95
+ (3): Dropout(p=0.1, inplace=False)
96
+ (4): Linear(in_features=1024, out_features=1, bias=True)
97
+ )
98
+ )
99
+ )
100
+ (nn_eta): RegressionOutput(
101
+ (nn): Sequential(
102
+ (0): Linear(in_features=1024, out_features=1024, bias=True)
103
+ (1): GELU(approximate='none')
104
+ (2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
105
+ (3): Dropout(p=0.1, inplace=False)
106
+ (4): Linear(in_features=1024, out_features=2, bias=True)
107
+ )
108
+ )
109
+ (nn_sin_phi): RegressionOutput(
110
+ (nn): Sequential(
111
+ (0): Linear(in_features=1024, out_features=1024, bias=True)
112
+ (1): GELU(approximate='none')
113
+ (2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
114
+ (3): Dropout(p=0.1, inplace=False)
115
+ (4): Linear(in_features=1024, out_features=2, bias=True)
116
+ )
117
+ )
118
+ (nn_cos_phi): RegressionOutput(
119
+ (nn): Sequential(
120
+ (0): Linear(in_features=1024, out_features=1024, bias=True)
121
+ (1): GELU(approximate='none')
122
+ (2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
123
+ (3): Dropout(p=0.1, inplace=False)
124
+ (4): Linear(in_features=1024, out_features=2, bias=True)
125
+ )
126
+ )
127
+ (nn_energy): RegressionOutput(
128
+ (nn): ModuleList(
129
+ (0-1): 2 x Sequential(
130
+ (0): Linear(in_features=1024, out_features=1024, bias=True)
131
+ (1): GELU(approximate='none')
132
+ (2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
133
+ (3): Dropout(p=0.1, inplace=False)
134
+ (4): Linear(in_features=1024, out_features=1, bias=True)
135
+ )
136
+ )
137
+ )
138
+ (final_norm_id): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
139
+ (final_norm_reg): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
140
+ )
141
+ )
142
+ [2024-10-11 10:25:16,062] INFO: Trainable parameters: 89388050
143
+ [2024-10-11 10:25:16,062] INFO: Non-trainable parameters: 0
144
+ [2024-10-11 10:25:16,062] INFO: Total parameters: 89388050
145
+ [2024-10-11 10:25:16,067] INFO: Modules Trainable parameters Non-trainable parameters
146
+ module.nn0_id.0.0.weight 17408 0
147
+ module.nn0_id.0.0.bias 1024 0
148
+ module.nn0_id.0.2.weight 1024 0
149
+ module.nn0_id.0.2.bias 1024 0
150
+ module.nn0_id.0.4.weight 1048576 0
151
+ module.nn0_id.0.4.bias 1024 0
152
+ module.nn0_id.1.0.weight 17408 0
153
+ module.nn0_id.1.0.bias 1024 0
154
+ module.nn0_id.1.2.weight 1024 0
155
+ module.nn0_id.1.2.bias 1024 0
156
+ module.nn0_id.1.4.weight 1048576 0
157
+ module.nn0_id.1.4.bias 1024 0
158
+ module.nn0_reg.0.0.weight 17408 0
159
+ module.nn0_reg.0.0.bias 1024 0
160
+ module.nn0_reg.0.2.weight 1024 0
161
+ module.nn0_reg.0.2.bias 1024 0
162
+ module.nn0_reg.0.4.weight 1048576 0
163
+ module.nn0_reg.0.4.bias 1024 0
164
+ module.nn0_reg.1.0.weight 17408 0
165
+ module.nn0_reg.1.0.bias 1024 0
166
+ module.nn0_reg.1.2.weight 1024 0
167
+ module.nn0_reg.1.2.bias 1024 0
168
+ module.nn0_reg.1.4.weight 1048576 0
169
+ module.nn0_reg.1.4.bias 1024 0
170
+ module.conv_id.0.mha.in_proj_weight 3145728 0
171
+ module.conv_id.0.mha.in_proj_bias 3072 0
172
+ module.conv_id.0.mha.out_proj.weight 1048576 0
173
+ module.conv_id.0.mha.out_proj.bias 1024 0
174
+ module.conv_id.0.norm0.weight 1024 0
175
+ module.conv_id.0.norm0.bias 1024 0
176
+ module.conv_id.0.norm1.weight 1024 0
177
+ module.conv_id.0.norm1.bias 1024 0
178
+ module.conv_id.0.seq.0.weight 1048576 0
179
+ module.conv_id.0.seq.0.bias 1024 0
180
+ module.conv_id.0.seq.2.weight 1048576 0
181
+ module.conv_id.0.seq.2.bias 1024 0
182
+ module.conv_id.1.mha.in_proj_weight 3145728 0
183
+ module.conv_id.1.mha.in_proj_bias 3072 0
184
+ module.conv_id.1.mha.out_proj.weight 1048576 0
185
+ module.conv_id.1.mha.out_proj.bias 1024 0
186
+ module.conv_id.1.norm0.weight 1024 0
187
+ module.conv_id.1.norm0.bias 1024 0
188
+ module.conv_id.1.norm1.weight 1024 0
189
+ module.conv_id.1.norm1.bias 1024 0
190
+ module.conv_id.1.seq.0.weight 1048576 0
191
+ module.conv_id.1.seq.0.bias 1024 0
192
+ module.conv_id.1.seq.2.weight 1048576 0
193
+ module.conv_id.1.seq.2.bias 1024 0
194
+ module.conv_id.2.mha.in_proj_weight 3145728 0
195
+ module.conv_id.2.mha.in_proj_bias 3072 0
196
+ module.conv_id.2.mha.out_proj.weight 1048576 0
197
+ module.conv_id.2.mha.out_proj.bias 1024 0
198
+ module.conv_id.2.norm0.weight 1024 0
199
+ module.conv_id.2.norm0.bias 1024 0
200
+ module.conv_id.2.norm1.weight 1024 0
201
+ module.conv_id.2.norm1.bias 1024 0
202
+ module.conv_id.2.seq.0.weight 1048576 0
203
+ module.conv_id.2.seq.0.bias 1024 0
204
+ module.conv_id.2.seq.2.weight 1048576 0
205
+ module.conv_id.2.seq.2.bias 1024 0
206
+ module.conv_id.3.mha.in_proj_weight 3145728 0
207
+ module.conv_id.3.mha.in_proj_bias 3072 0
208
+ module.conv_id.3.mha.out_proj.weight 1048576 0
209
+ module.conv_id.3.mha.out_proj.bias 1024 0
210
+ module.conv_id.3.norm0.weight 1024 0
211
+ module.conv_id.3.norm0.bias 1024 0
212
+ module.conv_id.3.norm1.weight 1024 0
213
+ module.conv_id.3.norm1.bias 1024 0
214
+ module.conv_id.3.seq.0.weight 1048576 0
215
+ module.conv_id.3.seq.0.bias 1024 0
216
+ module.conv_id.3.seq.2.weight 1048576 0
217
+ module.conv_id.3.seq.2.bias 1024 0
218
+ module.conv_id.4.mha.in_proj_weight 3145728 0
219
+ module.conv_id.4.mha.in_proj_bias 3072 0
220
+ module.conv_id.4.mha.out_proj.weight 1048576 0
221
+ module.conv_id.4.mha.out_proj.bias 1024 0
222
+ module.conv_id.4.norm0.weight 1024 0
223
+ module.conv_id.4.norm0.bias 1024 0
224
+ module.conv_id.4.norm1.weight 1024 0
225
+ module.conv_id.4.norm1.bias 1024 0
226
+ module.conv_id.4.seq.0.weight 1048576 0
227
+ module.conv_id.4.seq.0.bias 1024 0
228
+ module.conv_id.4.seq.2.weight 1048576 0
229
+ module.conv_id.4.seq.2.bias 1024 0
230
+ module.conv_id.5.mha.in_proj_weight 3145728 0
231
+ module.conv_id.5.mha.in_proj_bias 3072 0
232
+ module.conv_id.5.mha.out_proj.weight 1048576 0
233
+ module.conv_id.5.mha.out_proj.bias 1024 0
234
+ module.conv_id.5.norm0.weight 1024 0
235
+ module.conv_id.5.norm0.bias 1024 0
236
+ module.conv_id.5.norm1.weight 1024 0
237
+ module.conv_id.5.norm1.bias 1024 0
238
+ module.conv_id.5.seq.0.weight 1048576 0
239
+ module.conv_id.5.seq.0.bias 1024 0
240
+ module.conv_id.5.seq.2.weight 1048576 0
241
+ module.conv_id.5.seq.2.bias 1024 0
242
+ module.conv_reg.0.mha.in_proj_weight 3145728 0
243
+ module.conv_reg.0.mha.in_proj_bias 3072 0
244
+ module.conv_reg.0.mha.out_proj.weight 1048576 0
245
+ module.conv_reg.0.mha.out_proj.bias 1024 0
246
+ module.conv_reg.0.norm0.weight 1024 0
247
+ module.conv_reg.0.norm0.bias 1024 0
248
+ module.conv_reg.0.norm1.weight 1024 0
249
+ module.conv_reg.0.norm1.bias 1024 0
250
+ module.conv_reg.0.seq.0.weight 1048576 0
251
+ module.conv_reg.0.seq.0.bias 1024 0
252
+ module.conv_reg.0.seq.2.weight 1048576 0
253
+ module.conv_reg.0.seq.2.bias 1024 0
254
+ module.conv_reg.1.mha.in_proj_weight 3145728 0
255
+ module.conv_reg.1.mha.in_proj_bias 3072 0
256
+ module.conv_reg.1.mha.out_proj.weight 1048576 0
257
+ module.conv_reg.1.mha.out_proj.bias 1024 0
258
+ module.conv_reg.1.norm0.weight 1024 0
259
+ module.conv_reg.1.norm0.bias 1024 0
260
+ module.conv_reg.1.norm1.weight 1024 0
261
+ module.conv_reg.1.norm1.bias 1024 0
262
+ module.conv_reg.1.seq.0.weight 1048576 0
263
+ module.conv_reg.1.seq.0.bias 1024 0
264
+ module.conv_reg.1.seq.2.weight 1048576 0
265
+ module.conv_reg.1.seq.2.bias 1024 0
266
+ module.conv_reg.2.mha.in_proj_weight 3145728 0
267
+ module.conv_reg.2.mha.in_proj_bias 3072 0
268
+ module.conv_reg.2.mha.out_proj.weight 1048576 0
269
+ module.conv_reg.2.mha.out_proj.bias 1024 0
270
+ module.conv_reg.2.norm0.weight 1024 0
271
+ module.conv_reg.2.norm0.bias 1024 0
272
+ module.conv_reg.2.norm1.weight 1024 0
273
+ module.conv_reg.2.norm1.bias 1024 0
274
+ module.conv_reg.2.seq.0.weight 1048576 0
275
+ module.conv_reg.2.seq.0.bias 1024 0
276
+ module.conv_reg.2.seq.2.weight 1048576 0
277
+ module.conv_reg.2.seq.2.bias 1024 0
278
+ module.conv_reg.3.mha.in_proj_weight 3145728 0
279
+ module.conv_reg.3.mha.in_proj_bias 3072 0
280
+ module.conv_reg.3.mha.out_proj.weight 1048576 0
281
+ module.conv_reg.3.mha.out_proj.bias 1024 0
282
+ module.conv_reg.3.norm0.weight 1024 0
283
+ module.conv_reg.3.norm0.bias 1024 0
284
+ module.conv_reg.3.norm1.weight 1024 0
285
+ module.conv_reg.3.norm1.bias 1024 0
286
+ module.conv_reg.3.seq.0.weight 1048576 0
287
+ module.conv_reg.3.seq.0.bias 1024 0
288
+ module.conv_reg.3.seq.2.weight 1048576 0
289
+ module.conv_reg.3.seq.2.bias 1024 0
290
+ module.conv_reg.4.mha.in_proj_weight 3145728 0
291
+ module.conv_reg.4.mha.in_proj_bias 3072 0
292
+ module.conv_reg.4.mha.out_proj.weight 1048576 0
293
+ module.conv_reg.4.mha.out_proj.bias 1024 0
294
+ module.conv_reg.4.norm0.weight 1024 0
295
+ module.conv_reg.4.norm0.bias 1024 0
296
+ module.conv_reg.4.norm1.weight 1024 0
297
+ module.conv_reg.4.norm1.bias 1024 0
298
+ module.conv_reg.4.seq.0.weight 1048576 0
299
+ module.conv_reg.4.seq.0.bias 1024 0
300
+ module.conv_reg.4.seq.2.weight 1048576 0
301
+ module.conv_reg.4.seq.2.bias 1024 0
302
+ module.conv_reg.5.mha.in_proj_weight 3145728 0
303
+ module.conv_reg.5.mha.in_proj_bias 3072 0
304
+ module.conv_reg.5.mha.out_proj.weight 1048576 0
305
+ module.conv_reg.5.mha.out_proj.bias 1024 0
306
+ module.conv_reg.5.norm0.weight 1024 0
307
+ module.conv_reg.5.norm0.bias 1024 0
308
+ module.conv_reg.5.norm1.weight 1024 0
309
+ module.conv_reg.5.norm1.bias 1024 0
310
+ module.conv_reg.5.seq.0.weight 1048576 0
311
+ module.conv_reg.5.seq.0.bias 1024 0
312
+ module.conv_reg.5.seq.2.weight 1048576 0
313
+ module.conv_reg.5.seq.2.bias 1024 0
314
+ module.nn_binary_particle.0.weight 1048576 0
315
+ module.nn_binary_particle.0.bias 1024 0
316
+ module.nn_binary_particle.2.weight 1024 0
317
+ module.nn_binary_particle.2.bias 1024 0
318
+ module.nn_binary_particle.4.weight 2048 0
319
+ module.nn_binary_particle.4.bias 2 0
320
+ module.nn_pid.0.weight 1048576 0
321
+ module.nn_pid.0.bias 1024 0
322
+ module.nn_pid.2.weight 1024 0
323
+ module.nn_pid.2.bias 1024 0
324
+ module.nn_pid.4.weight 6144 0
325
+ module.nn_pid.4.bias 6 0
326
+ module.nn_pt.nn.0.0.weight 1048576 0
327
+ module.nn_pt.nn.0.0.bias 1024 0
328
+ module.nn_pt.nn.0.2.weight 1024 0
329
+ module.nn_pt.nn.0.2.bias 1024 0
330
+ module.nn_pt.nn.0.4.weight 1024 0
331
+ module.nn_pt.nn.0.4.bias 1 0
332
+ module.nn_pt.nn.1.0.weight 1048576 0
333
+ module.nn_pt.nn.1.0.bias 1024 0
334
+ module.nn_pt.nn.1.2.weight 1024 0
335
+ module.nn_pt.nn.1.2.bias 1024 0
336
+ module.nn_pt.nn.1.4.weight 1024 0
337
+ module.nn_pt.nn.1.4.bias 1 0
338
+ module.nn_eta.nn.0.weight 1048576 0
339
+ module.nn_eta.nn.0.bias 1024 0
340
+ module.nn_eta.nn.2.weight 1024 0
341
+ module.nn_eta.nn.2.bias 1024 0
342
+ module.nn_eta.nn.4.weight 2048 0
343
+ module.nn_eta.nn.4.bias 2 0
344
+ module.nn_sin_phi.nn.0.weight 1048576 0
345
+ module.nn_sin_phi.nn.0.bias 1024 0
346
+ module.nn_sin_phi.nn.2.weight 1024 0
347
+ module.nn_sin_phi.nn.2.bias 1024 0
348
+ module.nn_sin_phi.nn.4.weight 2048 0
349
+ module.nn_sin_phi.nn.4.bias 2 0
350
+ module.nn_cos_phi.nn.0.weight 1048576 0
351
+ module.nn_cos_phi.nn.0.bias 1024 0
352
+ module.nn_cos_phi.nn.2.weight 1024 0
353
+ module.nn_cos_phi.nn.2.bias 1024 0
354
+ module.nn_cos_phi.nn.4.weight 2048 0
355
+ module.nn_cos_phi.nn.4.bias 2 0
356
+ module.nn_energy.nn.0.0.weight 1048576 0
357
+ module.nn_energy.nn.0.0.bias 1024 0
358
+ module.nn_energy.nn.0.2.weight 1024 0
359
+ module.nn_energy.nn.0.2.bias 1024 0
360
+ module.nn_energy.nn.0.4.weight 1024 0
361
+ module.nn_energy.nn.0.4.bias 1 0
362
+ module.nn_energy.nn.1.0.weight 1048576 0
363
+ module.nn_energy.nn.1.0.bias 1024 0
364
+ module.nn_energy.nn.1.2.weight 1024 0
365
+ module.nn_energy.nn.1.2.bias 1024 0
366
+ module.nn_energy.nn.1.4.weight 1024 0
367
+ module.nn_energy.nn.1.4.bias 1 0
368
+ module.final_norm_id.weight 1024 0
369
+ module.final_norm_id.bias 1024 0
370
+ module.final_norm_reg.weight 1024 0
371
+ module.final_norm_reg.bias 1024 0
372
+ [2024-10-11 10:25:16,070] INFO: Creating experiment dir experiments/pyg-clic_20241011_102451_167094
373
+ [2024-10-11 10:25:16,071] INFO: Model directory experiments/pyg-clic_20241011_102451_167094
374
+ [2024-10-11 10:25:16,279] INFO: train_dataset: clic_edm_qq_pf, 3598296
375
+ [2024-10-11 10:25:16,379] INFO: train_dataset: clic_edm_ttbar_pf, 7139800
376
+ [2024-10-11 10:25:16,464] INFO: train_dataset: clic_edm_ww_fullhad_pf, 3600900
377
+ [2024-10-11 10:25:49,796] INFO: valid_dataset: clic_edm_qq_pf, 399822
378
+ [2024-10-11 10:25:49,821] INFO: valid_dataset: clic_edm_ttbar_pf, 793400
379
+ [2024-10-11 10:25:49,834] INFO: valid_dataset: clic_edm_ww_fullhad_pf, 400100
380
+ [2024-10-11 10:25:50,665] INFO: Initiating epoch #1 train run on device rank=0
+ [2024-10-11 13:35:01,251] INFO: Initiating epoch #1 valid run on device rank=0
+ [2024-10-11 13:42:08,318] INFO: Rank 0: epoch=1 / 30 train_loss=3.4490 valid_loss=2.8678 stale=0 epoch_train_time=189.18m epoch_valid_time=7.01m epoch_total_time=196.29m eta=5692.5m
+ [2024-10-11 13:42:08,328] INFO: Initiating epoch #2 train run on device rank=0
+ [2024-10-11 16:49:04,054] INFO: Initiating epoch #2 valid run on device rank=0
+ [2024-10-11 16:56:07,094] INFO: Rank 0: epoch=2 / 30 train_loss=2.6138 valid_loss=2.5000 stale=0 epoch_train_time=186.93m epoch_valid_time=6.94m epoch_total_time=193.98m eta=5463.8m
+ [2024-10-11 16:56:07,110] INFO: Initiating epoch #3 train run on device rank=0
+ [2024-10-11 20:03:17,596] INFO: Initiating epoch #3 valid run on device rank=0
+ [2024-10-11 20:10:21,873] INFO: Rank 0: epoch=3 / 30 train_loss=2.3946 valid_loss=2.3688 stale=0 epoch_train_time=187.17m epoch_valid_time=6.96m epoch_total_time=194.25m eta=5260.7m
+ [2024-10-11 20:10:21,896] INFO: Initiating epoch #4 train run on device rank=0
+ [2024-10-11 23:17:12,730] INFO: Initiating epoch #4 valid run on device rank=0
+ [2024-10-11 23:24:13,970] INFO: Rank 0: epoch=4 / 30 train_loss=2.2850 valid_loss=2.2818 stale=0 epoch_train_time=186.85m epoch_valid_time=6.91m epoch_total_time=193.87m eta=5059.5m
+ [2024-10-11 23:24:13,979] INFO: Initiating epoch #5 train run on device rank=0
+ [2024-10-12 02:31:25,465] INFO: Initiating epoch #5 valid run on device rank=0
+ [2024-10-12 02:38:28,338] INFO: Rank 0: epoch=5 / 30 train_loss=2.2046 valid_loss=2.2072 stale=0 epoch_train_time=187.19m epoch_valid_time=6.94m epoch_total_time=194.24m eta=4863.1m
+ [2024-10-12 02:38:28,362] INFO: Initiating epoch #6 train run on device rank=0
+ [2024-10-12 05:45:31,198] INFO: Initiating epoch #6 valid run on device rank=0
+ [2024-10-12 05:52:32,395] INFO: Rank 0: epoch=6 / 30 train_loss=2.1378 valid_loss=2.1469 stale=0 epoch_train_time=187.05m epoch_valid_time=6.91m epoch_total_time=194.07m eta=4666.8m
+ [2024-10-12 05:52:32,420] INFO: Initiating epoch #7 train run on device rank=0
+ [2024-10-12 08:59:54,801] INFO: Initiating epoch #7 valid run on device rank=0
+ [2024-10-12 09:06:58,357] INFO: Rank 0: epoch=7 / 30 train_loss=2.0795 valid_loss=2.0962 stale=0 epoch_train_time=187.37m epoch_valid_time=6.95m epoch_total_time=194.43m eta=4472.3m
+ [2024-10-12 09:06:58,381] INFO: Initiating epoch #8 train run on device rank=0
+ [2024-10-12 12:13:59,370] INFO: Initiating epoch #8 valid run on device rank=0
+ [2024-10-12 12:21:02,407] INFO: Rank 0: epoch=8 / 30 train_loss=2.0358 valid_loss=2.0624 stale=0 epoch_train_time=187.02m epoch_valid_time=6.94m epoch_total_time=194.07m eta=4276.8m
+ [2024-10-12 12:21:02,425] INFO: Initiating epoch #9 train run on device rank=0
+ [2024-10-12 15:28:05,805] INFO: Initiating epoch #9 valid run on device rank=0
+ [2024-10-12 15:35:08,750] INFO: Rank 0: epoch=9 / 30 train_loss=2.0020 valid_loss=2.0356 stale=0 epoch_train_time=187.06m epoch_valid_time=6.94m epoch_total_time=194.11m eta=4081.7m
+ [2024-10-12 15:35:08,766] INFO: Initiating epoch #10 train run on device rank=0
+ [2024-10-12 18:41:58,556] INFO: Initiating epoch #10 valid run on device rank=0
+ [2024-10-12 18:49:01,652] INFO: Rank 0: epoch=10 / 30 train_loss=1.9755 valid_loss=2.0143 stale=0 epoch_train_time=186.83m epoch_valid_time=6.94m epoch_total_time=193.88m eta=3886.4m
+ [2024-10-12 18:49:01,676] INFO: Initiating epoch #11 train run on device rank=0
+ [2024-10-12 21:56:02,732] INFO: Initiating epoch #11 valid run on device rank=0
+ [2024-10-12 22:03:06,330] INFO: Rank 0: epoch=11 / 30 train_loss=1.9537 valid_loss=1.9981 stale=0 epoch_train_time=187.02m epoch_valid_time=6.95m epoch_total_time=194.08m eta=3691.6m
+ [2024-10-12 22:03:06,351] INFO: Initiating epoch #12 train run on device rank=0
+ [2024-10-13 01:09:56,849] INFO: Initiating epoch #12 valid run on device rank=0
+ [2024-10-13 01:16:58,608] INFO: Rank 0: epoch=12 / 30 train_loss=1.9351 valid_loss=1.9845 stale=0 epoch_train_time=186.84m epoch_valid_time=6.92m epoch_total_time=193.87m eta=3496.7m
+ [2024-10-13 01:16:58,631] INFO: Initiating epoch #13 train run on device rank=0
+ [2024-10-13 04:23:56,005] INFO: Initiating epoch #13 valid run on device rank=0
+ [2024-10-13 04:30:59,653] INFO: Rank 0: epoch=13 / 30 train_loss=1.9185 valid_loss=1.9727 stale=0 epoch_train_time=186.96m epoch_valid_time=6.95m epoch_total_time=194.02m eta=3302.1m
+ [2024-10-13 04:30:59,663] INFO: Initiating epoch #14 train run on device rank=0
+ [2024-10-13 07:37:50,487] INFO: Initiating epoch #14 valid run on device rank=0
+ [2024-10-13 07:44:53,667] INFO: Rank 0: epoch=14 / 30 train_loss=1.9034 valid_loss=1.9607 stale=0 epoch_train_time=186.85m epoch_valid_time=6.95m epoch_total_time=193.9m eta=3107.5m
+ [2024-10-13 07:44:53,696] INFO: Initiating epoch #15 train run on device rank=0
+ [2024-10-13 10:51:52,917] INFO: Initiating epoch #15 valid run on device rank=0
+ [2024-10-13 10:58:55,298] INFO: Rank 0: epoch=15 / 30 train_loss=1.8897 valid_loss=1.9506 stale=0 epoch_train_time=186.99m epoch_valid_time=6.93m epoch_total_time=194.03m eta=2913.1m
+ [2024-10-13 10:58:55,316] INFO: Initiating epoch #16 train run on device rank=0
+ [2024-10-13 14:05:54,702] INFO: Initiating epoch #16 valid run on device rank=0
+ [2024-10-13 14:12:57,300] INFO: Rank 0: epoch=16 / 30 train_loss=1.8772 valid_loss=1.9428 stale=0 epoch_train_time=186.99m epoch_valid_time=6.94m epoch_total_time=194.03m eta=2718.7m
+ [2024-10-13 14:12:57,326] INFO: Initiating epoch #17 train run on device rank=0
+ [2024-10-13 17:20:12,568] INFO: Initiating epoch #17 valid run on device rank=0
+ [2024-10-13 17:27:16,171] INFO: Rank 0: epoch=17 / 30 train_loss=1.8658 valid_loss=1.9375 stale=0 epoch_train_time=187.25m epoch_valid_time=6.95m epoch_total_time=194.31m eta=2524.6m
+ [2024-10-13 17:27:16,206] INFO: Initiating epoch #18 train run on device rank=0
+ [2024-10-13 20:34:27,667] INFO: Initiating epoch #18 valid run on device rank=0
+ [2024-10-13 20:41:31,076] INFO: Rank 0: epoch=18 / 30 train_loss=1.8551 valid_loss=1.9299 stale=0 epoch_train_time=187.19m epoch_valid_time=6.95m epoch_total_time=194.25m eta=2330.4m
+ [2024-10-13 20:41:31,089] INFO: Initiating epoch #19 train run on device rank=0
+ [2024-10-13 23:48:44,511] INFO: Initiating epoch #19 valid run on device rank=0
+ [2024-10-13 23:55:47,205] INFO: Rank 0: epoch=19 / 30 train_loss=1.8452 valid_loss=1.9244 stale=0 epoch_train_time=187.22m epoch_valid_time=6.94m epoch_total_time=194.27m eta=2136.3m
+ [2024-10-13 23:55:47,222] INFO: Initiating epoch #20 train run on device rank=0
+ [2024-10-14 03:02:39,426] INFO: Initiating epoch #20 valid run on device rank=0
+ [2024-10-14 03:09:42,527] INFO: Rank 0: epoch=20 / 30 train_loss=1.8362 valid_loss=1.9202 stale=0 epoch_train_time=186.87m epoch_valid_time=6.94m epoch_total_time=193.92m eta=1941.9m
+ [2024-10-14 03:09:42,550] INFO: Initiating epoch #21 train run on device rank=0
+ [2024-10-14 06:16:45,945] INFO: Initiating epoch #21 valid run on device rank=0
+ [2024-10-14 06:23:49,200] INFO: Rank 0: epoch=21 / 30 train_loss=1.8278 valid_loss=1.9160 stale=0 epoch_train_time=187.06m epoch_valid_time=6.95m epoch_total_time=194.11m eta=1747.7m
+ [2024-10-14 06:23:49,218] INFO: Initiating epoch #22 train run on device rank=0
+ [2024-10-14 09:30:41,408] INFO: Initiating epoch #22 valid run on device rank=0
+ [2024-10-14 09:37:46,117] INFO: Rank 0: epoch=22 / 30 train_loss=1.8199 valid_loss=1.9131 stale=0 epoch_train_time=186.87m epoch_valid_time=6.97m epoch_total_time=193.95m eta=1553.4m
+ [2024-10-14 09:37:46,143] INFO: Initiating epoch #23 train run on device rank=0
+ [2024-10-14 09:47:43,648] INFO: Will use torch.nn.parallel.DistributedDataParallel() and 8 gpus
+ [2024-10-14 09:47:43,652] INFO: AMD Radeon Graphics
+ [2024-10-14 09:47:43,652] INFO: AMD Radeon Graphics
+ [2024-10-14 09:47:43,652] INFO: AMD Radeon Graphics
+ [2024-10-14 09:47:43,652] INFO: AMD Radeon Graphics
+ [2024-10-14 09:47:43,652] INFO: AMD Radeon Graphics
+ [2024-10-14 09:47:43,652] INFO: AMD Radeon Graphics
+ [2024-10-14 09:47:43,653] INFO: AMD Radeon Graphics
+ [2024-10-14 09:47:43,653] INFO: AMD Radeon Graphics
+ [2024-10-14 09:47:47,850] INFO: configured dtype=torch.bfloat16 for autocast
+ [2024-10-14 09:47:50,639] INFO: model_kwargs: {'input_dim': 17, 'num_classes': 6, 'input_encoding': 'split', 'pt_mode': 'direct-elemtype-split', 'eta_mode': 'linear', 'sin_phi_mode': 'linear', 'cos_phi_mode': 'linear', 'energy_mode': 'direct-elemtype-split', 'elemtypes_nonzero': [1, 2], 'learned_representation_mode': 'last', 'conv_type': 'attention', 'num_convs': 6, 'dropout_ff': 0.1, 'dropout_conv_id_mha': 0.0, 'dropout_conv_id_ff': 0.0, 'dropout_conv_reg_mha': 0.1, 'dropout_conv_reg_ff': 0.1, 'activation': 'gelu', 'head_dim': 32, 'num_heads': 32, 'attention_type': 'math', 'use_pre_layernorm': True}
+ [2024-10-14 09:47:50,738] INFO: using attention_type=math
+ [2024-10-14 09:47:50,778] INFO: using attention_type=math
+ [2024-10-14 09:47:50,814] INFO: using attention_type=math
+ [2024-10-14 09:47:50,849] INFO: using attention_type=math
+ [2024-10-14 09:47:50,884] INFO: using attention_type=math
+ [2024-10-14 09:47:50,921] INFO: using attention_type=math
+ [2024-10-14 09:47:50,957] INFO: using attention_type=math
+ [2024-10-14 09:47:50,993] INFO: using attention_type=math
+ [2024-10-14 09:47:51,029] INFO: using attention_type=math
+ [2024-10-14 09:47:51,065] INFO: using attention_type=math
+ [2024-10-14 09:47:51,102] INFO: using attention_type=math
+ [2024-10-14 09:47:51,137] INFO: using attention_type=math
+ [2024-10-14 09:47:58,146] INFO: Loaded model weights from experiments/pyg-clic_20241011_102451_167094/checkpoints/checkpoint-22-1.913142.pth
+ [2024-10-14 09:47:59,962] INFO: DistributedDataParallel(
+ (module): MLPF(
+ (nn0_id): ModuleList(
+ (0-1): 2 x Sequential(
+ (0): Linear(in_features=17, out_features=1024, bias=True)
+ (1): GELU(approximate='none')
+ (2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (3): Dropout(p=0.1, inplace=False)
+ (4): Linear(in_features=1024, out_features=1024, bias=True)
+ )
+ )
+ (nn0_reg): ModuleList(
+ (0-1): 2 x Sequential(
+ (0): Linear(in_features=17, out_features=1024, bias=True)
+ (1): GELU(approximate='none')
+ (2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (3): Dropout(p=0.1, inplace=False)
+ (4): Linear(in_features=1024, out_features=1024, bias=True)
+ )
+ )
+ (conv_id): ModuleList(
+ (0-5): 6 x PreLnSelfAttentionLayer(
+ (mha): MultiheadAttention(
+ (out_proj): NonDynamicallyQuantizableLinear(in_features=1024, out_features=1024, bias=True)
+ )
+ (norm0): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (seq): Sequential(
+ (0): Linear(in_features=1024, out_features=1024, bias=True)
+ (1): GELU(approximate='none')
+ (2): Linear(in_features=1024, out_features=1024, bias=True)
+ (3): GELU(approximate='none')
+ )
+ (dropout): Dropout(p=0.0, inplace=False)
+ )
+ )
+ (conv_reg): ModuleList(
+ (0-5): 6 x PreLnSelfAttentionLayer(
+ (mha): MultiheadAttention(
+ (out_proj): NonDynamicallyQuantizableLinear(in_features=1024, out_features=1024, bias=True)
+ )
+ (norm0): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (seq): Sequential(
+ (0): Linear(in_features=1024, out_features=1024, bias=True)
+ (1): GELU(approximate='none')
+ (2): Linear(in_features=1024, out_features=1024, bias=True)
+ (3): GELU(approximate='none')
+ )
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (nn_binary_particle): Sequential(
+ (0): Linear(in_features=1024, out_features=1024, bias=True)
+ (1): GELU(approximate='none')
+ (2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (3): Dropout(p=0.1, inplace=False)
+ (4): Linear(in_features=1024, out_features=2, bias=True)
+ )
+ (nn_pid): Sequential(
+ (0): Linear(in_features=1024, out_features=1024, bias=True)
+ (1): GELU(approximate='none')
+ (2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (3): Dropout(p=0.1, inplace=False)
+ (4): Linear(in_features=1024, out_features=6, bias=True)
+ )
+ (nn_pt): RegressionOutput(
+ (nn): ModuleList(
+ (0-1): 2 x Sequential(
+ (0): Linear(in_features=1024, out_features=1024, bias=True)
+ (1): GELU(approximate='none')
+ (2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (3): Dropout(p=0.1, inplace=False)
+ (4): Linear(in_features=1024, out_features=1, bias=True)
+ )
+ )
+ )
+ (nn_eta): RegressionOutput(
+ (nn): Sequential(
+ (0): Linear(in_features=1024, out_features=1024, bias=True)
+ (1): GELU(approximate='none')
+ (2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (3): Dropout(p=0.1, inplace=False)
+ (4): Linear(in_features=1024, out_features=2, bias=True)
+ )
+ )
+ (nn_sin_phi): RegressionOutput(
+ (nn): Sequential(
+ (0): Linear(in_features=1024, out_features=1024, bias=True)
+ (1): GELU(approximate='none')
+ (2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (3): Dropout(p=0.1, inplace=False)
+ (4): Linear(in_features=1024, out_features=2, bias=True)
+ )
+ )
+ (nn_cos_phi): RegressionOutput(
+ (nn): Sequential(
+ (0): Linear(in_features=1024, out_features=1024, bias=True)
+ (1): GELU(approximate='none')
+ (2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (3): Dropout(p=0.1, inplace=False)
+ (4): Linear(in_features=1024, out_features=2, bias=True)
+ )
+ )
+ (nn_energy): RegressionOutput(
+ (nn): ModuleList(
+ (0-1): 2 x Sequential(
+ (0): Linear(in_features=1024, out_features=1024, bias=True)
+ (1): GELU(approximate='none')
+ (2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (3): Dropout(p=0.1, inplace=False)
+ (4): Linear(in_features=1024, out_features=1, bias=True)
+ )
+ )
+ )
+ (final_norm_id): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (final_norm_reg): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ )
+ )
+ [2024-10-14 09:47:59,965] INFO: Trainable parameters: 89388050
+ [2024-10-14 09:47:59,965] INFO: Non-trainable parameters: 0
+ [2024-10-14 09:47:59,965] INFO: Total parameters: 89388050
+ [2024-10-14 09:47:59,972] INFO: Modules Trainable parameters Non-trainable parameters
+ module.nn0_id.0.0.weight 17408 0
+ module.nn0_id.0.0.bias 1024 0
+ module.nn0_id.0.2.weight 1024 0
+ module.nn0_id.0.2.bias 1024 0
+ module.nn0_id.0.4.weight 1048576 0
+ module.nn0_id.0.4.bias 1024 0
+ module.nn0_id.1.0.weight 17408 0
+ module.nn0_id.1.0.bias 1024 0
+ module.nn0_id.1.2.weight 1024 0
+ module.nn0_id.1.2.bias 1024 0
+ module.nn0_id.1.4.weight 1048576 0
+ module.nn0_id.1.4.bias 1024 0
+ module.nn0_reg.0.0.weight 17408 0
+ module.nn0_reg.0.0.bias 1024 0
+ module.nn0_reg.0.2.weight 1024 0
+ module.nn0_reg.0.2.bias 1024 0
+ module.nn0_reg.0.4.weight 1048576 0
+ module.nn0_reg.0.4.bias 1024 0
+ module.nn0_reg.1.0.weight 17408 0
+ module.nn0_reg.1.0.bias 1024 0
+ module.nn0_reg.1.2.weight 1024 0
+ module.nn0_reg.1.2.bias 1024 0
+ module.nn0_reg.1.4.weight 1048576 0
+ module.nn0_reg.1.4.bias 1024 0
+ module.conv_id.0.mha.in_proj_weight 3145728 0
+ module.conv_id.0.mha.in_proj_bias 3072 0
+ module.conv_id.0.mha.out_proj.weight 1048576 0
+ module.conv_id.0.mha.out_proj.bias 1024 0
+ module.conv_id.0.norm0.weight 1024 0
+ module.conv_id.0.norm0.bias 1024 0
+ module.conv_id.0.norm1.weight 1024 0
+ module.conv_id.0.norm1.bias 1024 0
+ module.conv_id.0.seq.0.weight 1048576 0
+ module.conv_id.0.seq.0.bias 1024 0
+ module.conv_id.0.seq.2.weight 1048576 0
+ module.conv_id.0.seq.2.bias 1024 0
+ module.conv_id.1.mha.in_proj_weight 3145728 0
+ module.conv_id.1.mha.in_proj_bias 3072 0
+ module.conv_id.1.mha.out_proj.weight 1048576 0
+ module.conv_id.1.mha.out_proj.bias 1024 0
+ module.conv_id.1.norm0.weight 1024 0
+ module.conv_id.1.norm0.bias 1024 0
+ module.conv_id.1.norm1.weight 1024 0
+ module.conv_id.1.norm1.bias 1024 0
+ module.conv_id.1.seq.0.weight 1048576 0
+ module.conv_id.1.seq.0.bias 1024 0
+ module.conv_id.1.seq.2.weight 1048576 0
+ module.conv_id.1.seq.2.bias 1024 0
+ module.conv_id.2.mha.in_proj_weight 3145728 0
+ module.conv_id.2.mha.in_proj_bias 3072 0
+ module.conv_id.2.mha.out_proj.weight 1048576 0
+ module.conv_id.2.mha.out_proj.bias 1024 0
+ module.conv_id.2.norm0.weight 1024 0
+ module.conv_id.2.norm0.bias 1024 0
+ module.conv_id.2.norm1.weight 1024 0
+ module.conv_id.2.norm1.bias 1024 0
+ module.conv_id.2.seq.0.weight 1048576 0
+ module.conv_id.2.seq.0.bias 1024 0
+ module.conv_id.2.seq.2.weight 1048576 0
+ module.conv_id.2.seq.2.bias 1024 0
+ module.conv_id.3.mha.in_proj_weight 3145728 0
+ module.conv_id.3.mha.in_proj_bias 3072 0
+ module.conv_id.3.mha.out_proj.weight 1048576 0
+ module.conv_id.3.mha.out_proj.bias 1024 0
+ module.conv_id.3.norm0.weight 1024 0
+ module.conv_id.3.norm0.bias 1024 0
+ module.conv_id.3.norm1.weight 1024 0
+ module.conv_id.3.norm1.bias 1024 0
+ module.conv_id.3.seq.0.weight 1048576 0
+ module.conv_id.3.seq.0.bias 1024 0
+ module.conv_id.3.seq.2.weight 1048576 0
+ module.conv_id.3.seq.2.bias 1024 0
+ module.conv_id.4.mha.in_proj_weight 3145728 0
+ module.conv_id.4.mha.in_proj_bias 3072 0
+ module.conv_id.4.mha.out_proj.weight 1048576 0
+ module.conv_id.4.mha.out_proj.bias 1024 0
+ module.conv_id.4.norm0.weight 1024 0
+ module.conv_id.4.norm0.bias 1024 0
+ module.conv_id.4.norm1.weight 1024 0
+ module.conv_id.4.norm1.bias 1024 0
+ module.conv_id.4.seq.0.weight 1048576 0
+ module.conv_id.4.seq.0.bias 1024 0
+ module.conv_id.4.seq.2.weight 1048576 0
+ module.conv_id.4.seq.2.bias 1024 0
+ module.conv_id.5.mha.in_proj_weight 3145728 0
+ module.conv_id.5.mha.in_proj_bias 3072 0
+ module.conv_id.5.mha.out_proj.weight 1048576 0
+ module.conv_id.5.mha.out_proj.bias 1024 0
+ module.conv_id.5.norm0.weight 1024 0
+ module.conv_id.5.norm0.bias 1024 0
+ module.conv_id.5.norm1.weight 1024 0
+ module.conv_id.5.norm1.bias 1024 0
+ module.conv_id.5.seq.0.weight 1048576 0
+ module.conv_id.5.seq.0.bias 1024 0
+ module.conv_id.5.seq.2.weight 1048576 0
+ module.conv_id.5.seq.2.bias 1024 0
+ module.conv_reg.0.mha.in_proj_weight 3145728 0
+ module.conv_reg.0.mha.in_proj_bias 3072 0
+ module.conv_reg.0.mha.out_proj.weight 1048576 0
+ module.conv_reg.0.mha.out_proj.bias 1024 0
+ module.conv_reg.0.norm0.weight 1024 0
+ module.conv_reg.0.norm0.bias 1024 0
+ module.conv_reg.0.norm1.weight 1024 0
+ module.conv_reg.0.norm1.bias 1024 0
+ module.conv_reg.0.seq.0.weight 1048576 0
+ module.conv_reg.0.seq.0.bias 1024 0
+ module.conv_reg.0.seq.2.weight 1048576 0
+ module.conv_reg.0.seq.2.bias 1024 0
+ module.conv_reg.1.mha.in_proj_weight 3145728 0
+ module.conv_reg.1.mha.in_proj_bias 3072 0
+ module.conv_reg.1.mha.out_proj.weight 1048576 0
+ module.conv_reg.1.mha.out_proj.bias 1024 0
+ module.conv_reg.1.norm0.weight 1024 0
+ module.conv_reg.1.norm0.bias 1024 0
+ module.conv_reg.1.norm1.weight 1024 0
+ module.conv_reg.1.norm1.bias 1024 0
+ module.conv_reg.1.seq.0.weight 1048576 0
+ module.conv_reg.1.seq.0.bias 1024 0
+ module.conv_reg.1.seq.2.weight 1048576 0
+ module.conv_reg.1.seq.2.bias 1024 0
+ module.conv_reg.2.mha.in_proj_weight 3145728 0
+ module.conv_reg.2.mha.in_proj_bias 3072 0
+ module.conv_reg.2.mha.out_proj.weight 1048576 0
+ module.conv_reg.2.mha.out_proj.bias 1024 0
+ module.conv_reg.2.norm0.weight 1024 0
+ module.conv_reg.2.norm0.bias 1024 0
+ module.conv_reg.2.norm1.weight 1024 0
+ module.conv_reg.2.norm1.bias 1024 0
+ module.conv_reg.2.seq.0.weight 1048576 0
+ module.conv_reg.2.seq.0.bias 1024 0
+ module.conv_reg.2.seq.2.weight 1048576 0
+ module.conv_reg.2.seq.2.bias 1024 0
+ module.conv_reg.3.mha.in_proj_weight 3145728 0
+ module.conv_reg.3.mha.in_proj_bias 3072 0
+ module.conv_reg.3.mha.out_proj.weight 1048576 0
+ module.conv_reg.3.mha.out_proj.bias 1024 0
+ module.conv_reg.3.norm0.weight 1024 0
+ module.conv_reg.3.norm0.bias 1024 0
+ module.conv_reg.3.norm1.weight 1024 0
+ module.conv_reg.3.norm1.bias 1024 0
+ module.conv_reg.3.seq.0.weight 1048576 0
+ module.conv_reg.3.seq.0.bias 1024 0
+ module.conv_reg.3.seq.2.weight 1048576 0
+ module.conv_reg.3.seq.2.bias 1024 0
+ module.conv_reg.4.mha.in_proj_weight 3145728 0
+ module.conv_reg.4.mha.in_proj_bias 3072 0
+ module.conv_reg.4.mha.out_proj.weight 1048576 0
+ module.conv_reg.4.mha.out_proj.bias 1024 0
+ module.conv_reg.4.norm0.weight 1024 0
+ module.conv_reg.4.norm0.bias 1024 0
+ module.conv_reg.4.norm1.weight 1024 0
+ module.conv_reg.4.norm1.bias 1024 0
+ module.conv_reg.4.seq.0.weight 1048576 0
+ module.conv_reg.4.seq.0.bias 1024 0
+ module.conv_reg.4.seq.2.weight 1048576 0
+ module.conv_reg.4.seq.2.bias 1024 0
+ module.conv_reg.5.mha.in_proj_weight 3145728 0
+ module.conv_reg.5.mha.in_proj_bias 3072 0
+ module.conv_reg.5.mha.out_proj.weight 1048576 0
+ module.conv_reg.5.mha.out_proj.bias 1024 0
+ module.conv_reg.5.norm0.weight 1024 0
+ module.conv_reg.5.norm0.bias 1024 0
+ module.conv_reg.5.norm1.weight 1024 0
+ module.conv_reg.5.norm1.bias 1024 0
+ module.conv_reg.5.seq.0.weight 1048576 0
+ module.conv_reg.5.seq.0.bias 1024 0
+ module.conv_reg.5.seq.2.weight 1048576 0
+ module.conv_reg.5.seq.2.bias 1024 0
+ module.nn_binary_particle.0.weight 1048576 0
+ module.nn_binary_particle.0.bias 1024 0
+ module.nn_binary_particle.2.weight 1024 0
+ module.nn_binary_particle.2.bias 1024 0
+ module.nn_binary_particle.4.weight 2048 0
+ module.nn_binary_particle.4.bias 2 0
+ module.nn_pid.0.weight 1048576 0
+ module.nn_pid.0.bias 1024 0
+ module.nn_pid.2.weight 1024 0
+ module.nn_pid.2.bias 1024 0
+ module.nn_pid.4.weight 6144 0
+ module.nn_pid.4.bias 6 0
+ module.nn_pt.nn.0.0.weight 1048576 0
+ module.nn_pt.nn.0.0.bias 1024 0
+ module.nn_pt.nn.0.2.weight 1024 0
+ module.nn_pt.nn.0.2.bias 1024 0
+ module.nn_pt.nn.0.4.weight 1024 0
+ module.nn_pt.nn.0.4.bias 1 0
+ module.nn_pt.nn.1.0.weight 1048576 0
+ module.nn_pt.nn.1.0.bias 1024 0
+ module.nn_pt.nn.1.2.weight 1024 0
+ module.nn_pt.nn.1.2.bias 1024 0
+ module.nn_pt.nn.1.4.weight 1024 0
+ module.nn_pt.nn.1.4.bias 1 0
+ module.nn_eta.nn.0.weight 1048576 0
+ module.nn_eta.nn.0.bias 1024 0
+ module.nn_eta.nn.2.weight 1024 0
+ module.nn_eta.nn.2.bias 1024 0
+ module.nn_eta.nn.4.weight 2048 0
+ module.nn_eta.nn.4.bias 2 0
+ module.nn_sin_phi.nn.0.weight 1048576 0
+ module.nn_sin_phi.nn.0.bias 1024 0
+ module.nn_sin_phi.nn.2.weight 1024 0
+ module.nn_sin_phi.nn.2.bias 1024 0
+ module.nn_sin_phi.nn.4.weight 2048 0
+ module.nn_sin_phi.nn.4.bias 2 0
+ module.nn_cos_phi.nn.0.weight 1048576 0
+ module.nn_cos_phi.nn.0.bias 1024 0
+ module.nn_cos_phi.nn.2.weight 1024 0
+ module.nn_cos_phi.nn.2.bias 1024 0
+ module.nn_cos_phi.nn.4.weight 2048 0
+ module.nn_cos_phi.nn.4.bias 2 0
+ module.nn_energy.nn.0.0.weight 1048576 0
+ module.nn_energy.nn.0.0.bias 1024 0
+ module.nn_energy.nn.0.2.weight 1024 0
+ module.nn_energy.nn.0.2.bias 1024 0
+ module.nn_energy.nn.0.4.weight 1024 0
+ module.nn_energy.nn.0.4.bias 1 0
+ module.nn_energy.nn.1.0.weight 1048576 0
+ module.nn_energy.nn.1.0.bias 1024 0
+ module.nn_energy.nn.1.2.weight 1024 0
+ module.nn_energy.nn.1.2.bias 1024 0
+ module.nn_energy.nn.1.4.weight 1024 0
+ module.nn_energy.nn.1.4.bias 1 0
+ module.final_norm_id.weight 1024 0
+ module.final_norm_id.bias 1024 0
+ module.final_norm_reg.weight 1024 0
+ module.final_norm_reg.bias 1024 0
+ [2024-10-14 09:47:59,977] INFO: Creating experiment dir experiments/pyg-clic_20241011_102451_167094
+ [2024-10-14 09:47:59,978] INFO: Model directory experiments/pyg-clic_20241011_102451_167094
+ [2024-10-14 09:48:00,140] INFO: train_dataset: clic_edm_qq_pf, 3598296
+ [2024-10-14 09:48:00,234] INFO: train_dataset: clic_edm_ttbar_pf, 7139800
+ [2024-10-14 09:48:00,332] INFO: train_dataset: clic_edm_ww_fullhad_pf, 3600900
+ [2024-10-14 09:48:35,829] INFO: valid_dataset: clic_edm_qq_pf, 399822
+ [2024-10-14 09:48:35,938] INFO: valid_dataset: clic_edm_ttbar_pf, 793400
+ [2024-10-14 09:48:35,965] INFO: valid_dataset: clic_edm_ww_fullhad_pf, 400100
+ [2024-10-14 09:48:46,758] INFO: Initiating epoch #23 train run on device rank=0
+ [2024-10-14 13:00:19,246] INFO: Initiating epoch #23 valid run on device rank=0
+ [2024-10-14 13:07:20,574] INFO: Rank 0: epoch=23 / 30 train_loss=1.8128 valid_loss=1.9104 stale=0 epoch_train_time=191.54m epoch_valid_time=6.91m epoch_total_time=198.56m eta=60.4m
+ [2024-10-14 13:07:20,598] INFO: Initiating epoch #24 train run on device rank=0
+ [2024-10-14 16:18:48,042] INFO: Initiating epoch #24 valid run on device rank=0
+ [2024-10-14 16:25:50,924] INFO: Rank 0: epoch=24 / 30 train_loss=1.8063 valid_loss=1.9085 stale=0 epoch_train_time=191.46m epoch_valid_time=6.93m epoch_total_time=198.51m eta=99.3m
+ [2024-10-14 16:25:50,944] INFO: Initiating epoch #25 train run on device rank=0
+ [2024-10-14 19:37:17,620] INFO: Initiating epoch #25 valid run on device rank=0
+ [2024-10-14 19:44:18,373] INFO: Rank 0: epoch=25 / 30 train_loss=1.8005 valid_loss=1.9052 stale=0 epoch_train_time=191.44m epoch_valid_time=6.91m epoch_total_time=198.46m eta=119.1m
+ [2024-10-14 19:44:18,398] INFO: Initiating epoch #26 train run on device rank=0
+ [2024-10-14 22:55:38,092] INFO: Initiating epoch #26 valid run on device rank=0
+ [2024-10-14 23:02:41,805] INFO: Rank 0: epoch=26 / 30 train_loss=1.7954 valid_loss=1.9043 stale=0 epoch_train_time=191.33m epoch_valid_time=6.95m epoch_total_time=198.39m eta=122.1m
+ [2024-10-14 23:02:41,835] INFO: Initiating epoch #27 train run on device rank=0
+ [2024-10-15 02:14:30,888] INFO: Initiating epoch #27 valid run on device rank=0
+ [2024-10-15 02:21:32,802] INFO: Rank 0: epoch=27 / 30 train_loss=1.7909 valid_loss=1.9021 stale=0 epoch_train_time=191.82m epoch_valid_time=6.92m epoch_total_time=198.85m eta=110.3m
+ [2024-10-15 02:21:32,828] INFO: Initiating epoch #28 train run on device rank=0
+ [2024-10-15 05:32:29,453] INFO: Initiating epoch #28 valid run on device rank=0
+ [2024-10-15 05:39:31,630] INFO: Rank 0: epoch=28 / 30 train_loss=1.7872 valid_loss=1.9020 stale=0 epoch_train_time=190.94m epoch_valid_time=6.92m epoch_total_time=197.98m eta=85.1m
+ [2024-10-15 05:39:31,652] INFO: Initiating epoch #29 train run on device rank=0
+ [2024-10-15 08:51:11,754] INFO: Initiating epoch #29 valid run on device rank=0
+ [2024-10-15 08:58:11,804] INFO: Rank 0: epoch=29 / 30 train_loss=1.7842 valid_loss=1.9017 stale=0 epoch_train_time=191.67m epoch_valid_time=6.89m epoch_total_time=198.67m eta=47.9m
+ [2024-10-15 08:58:11,830] INFO: Initiating epoch #30 train run on device rank=0
+ [2024-10-15 12:09:34,952] INFO: Initiating epoch #30 valid run on device rank=0