InMDev committed
Commit 644b8c4 · verified · 1 Parent(s): ca5cebc

Upload folder using huggingface_hub
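The commit message says the folder was uploaded with huggingface_hub. The exact call is not recorded here; a minimal sketch of such an upload, assuming `HfApi.upload_folder` with the experiment directory and repository that appear in the log below, would look like:

```python
# Hedged sketch (assumed invocation, not the author's exact script):
# push a local sample-factory experiment folder to the Hub with huggingface_hub.
from huggingface_hub import HfApi

api = HfApi()  # assumes a token is already configured (e.g. via `huggingface-cli login`)
api.upload_folder(
    folder_path="/content/train_dir/default_experiment",  # local experiment dir seen in sf_log.txt
    repo_id="InMDev/vizdoom_health_gathering_supreme",     # target repo named in the log below
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```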
.summary/0/events.out.tfevents.1730870341.5fc646b7334d ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a27a5517e59a4071d44c9885b2d8e478e06a95f3b576b9d5df0df6ddfce82d31
+ size 2683
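The added .summary file is a TensorBoard event log, stored here as a Git LFS pointer. A hedged sketch for inspecting it locally, assuming the tensorboard package is installed:

```python
# Hedged sketch (not part of this commit): read the event file added above.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator(".summary/0/events.out.tfevents.1730870341.5fc646b7334d")
acc.Reload()                      # parse the event file
print(acc.Tags().get("scalars"))  # scalar tags logged during training, if any
```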
README.md CHANGED
@@ -15,7 +15,7 @@ model-index:
   type: doom_health_gathering_supreme
   metrics:
   - type: mean_reward
- value: 8.16 +/- 4.68
+ value: 11.26 +/- 4.86
   name: mean_reward
   verified: false
  ---
checkpoint_p0/checkpoint_000002445_10014720.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b70569eeb35ce76ed32fbc5acadf105fb938e94b59a4d3a7a1b035df904d599d
+ size 34929669
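The new checkpoint is a PyTorch .pth file, likewise tracked through Git LFS. A hedged sketch for peeking at it (assuming PyTorch is available; the exact dictionary layout is sample-factory's and is not shown in this diff):

```python
# Hedged sketch: inspect the checkpoint added in this commit.
import torch

ckpt = torch.load(
    "checkpoint_p0/checkpoint_000002445_10014720.pth",
    map_location="cpu",  # no GPU needed just to inspect it
)
print(list(ckpt.keys()))  # top-level entries (model/optimizer state, step counters, ...)
```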
config.json CHANGED
@@ -65,7 +65,7 @@
   "summaries_use_frameskip": true,
   "heartbeat_interval": 20,
   "heartbeat_reporting_interval": 600,
- "train_for_env_steps": 10000000,
+ "train_for_env_steps": 4000000,
   "train_for_seconds": 10000000000,
   "save_every_sec": 120,
   "keep_checkpoints": 2,
replay.mp4 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2ff44b21e88284b78b6410f0911e8519a2e81c386ee068b41bf3a6f2f05a7bbb
- size 15182217
+ oid sha256:979e8ff3f88eb11590005c54c16ad19fe2dbb6c3b1f6f046804d838fcabdd19b
+ size 21898810
sf_log.txt CHANGED
@@ -2853,3 +2853,580 @@ main_loop: 1648.2174
  [2024-11-06 05:15:51,814][00514] Avg episode rewards: #0: 18.363, true rewards: #0: 8.163
  [2024-11-06 05:15:51,817][00514] Avg episode reward: 18.363, avg true_objective: 8.163
  [2024-11-06 05:16:44,667][00514] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
2856
+ [2024-11-06 05:16:49,473][00514] The model has been pushed to https://huggingface.co/InMDev/vizdoom_health_gathering_supreme
2857
+ [2024-11-06 05:19:07,461][20138] Saving configuration to /content/train_dir/default_experiment/config.json...
2858
+ [2024-11-06 05:19:07,467][20138] Rollout worker 0 uses device cpu
2859
+ [2024-11-06 05:19:07,468][20138] Rollout worker 1 uses device cpu
2860
+ [2024-11-06 05:19:07,470][20138] Rollout worker 2 uses device cpu
2861
+ [2024-11-06 05:19:07,471][20138] Rollout worker 3 uses device cpu
2862
+ [2024-11-06 05:19:07,473][20138] Rollout worker 4 uses device cpu
2863
+ [2024-11-06 05:19:07,474][20138] Rollout worker 5 uses device cpu
2864
+ [2024-11-06 05:19:07,475][20138] Rollout worker 6 uses device cpu
2865
+ [2024-11-06 05:19:07,476][20138] Rollout worker 7 uses device cpu
2866
+ [2024-11-06 05:19:07,587][20138] Using GPUs [0] for process 0 (actually maps to GPUs [0])
2867
+ [2024-11-06 05:19:07,589][20138] InferenceWorker_p0-w0: min num requests: 2
2868
+ [2024-11-06 05:19:07,623][20138] Starting all processes...
2869
+ [2024-11-06 05:19:07,626][20138] Starting process learner_proc0
2870
+ [2024-11-06 05:19:07,675][20138] Starting all processes...
2871
+ [2024-11-06 05:19:07,682][20138] Starting process inference_proc0-0
2872
+ [2024-11-06 05:19:07,683][20138] Starting process rollout_proc0
2873
+ [2024-11-06 05:19:07,684][20138] Starting process rollout_proc1
2874
+ [2024-11-06 05:19:07,685][20138] Starting process rollout_proc2
2875
+ [2024-11-06 05:19:07,685][20138] Starting process rollout_proc3
2876
+ [2024-11-06 05:19:07,685][20138] Starting process rollout_proc4
2877
+ [2024-11-06 05:19:07,685][20138] Starting process rollout_proc5
2878
+ [2024-11-06 05:19:07,685][20138] Starting process rollout_proc6
2879
+ [2024-11-06 05:19:07,685][20138] Starting process rollout_proc7
2880
+ [2024-11-06 05:19:26,754][20529] Worker 6 uses CPU cores [0]
2881
+ [2024-11-06 05:19:26,755][20530] Worker 4 uses CPU cores [0]
2882
+ [2024-11-06 05:19:26,823][20511] Using GPUs [0] for process 0 (actually maps to GPUs [0])
2883
+ [2024-11-06 05:19:26,824][20511] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
2884
+ [2024-11-06 05:19:26,855][20511] Num visible devices: 1
2885
+ [2024-11-06 05:19:26,856][20524] Using GPUs [0] for process 0 (actually maps to GPUs [0])
2886
+ [2024-11-06 05:19:26,866][20524] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
2887
+ [2024-11-06 05:19:26,873][20528] Worker 3 uses CPU cores [1]
2888
+ [2024-11-06 05:19:26,882][20511] Starting seed is not provided
2889
+ [2024-11-06 05:19:26,883][20511] Using GPUs [0] for process 0 (actually maps to GPUs [0])
2890
+ [2024-11-06 05:19:26,884][20511] Initializing actor-critic model on device cuda:0
2891
+ [2024-11-06 05:19:26,885][20511] RunningMeanStd input shape: (3, 72, 128)
2892
+ [2024-11-06 05:19:26,886][20511] RunningMeanStd input shape: (1,)
2893
+ [2024-11-06 05:19:26,923][20511] ConvEncoder: input_channels=3
2894
+ [2024-11-06 05:19:26,942][20524] Num visible devices: 1
2895
+ [2024-11-06 05:19:27,022][20525] Worker 0 uses CPU cores [0]
2896
+ [2024-11-06 05:19:27,094][20527] Worker 1 uses CPU cores [1]
2897
+ [2024-11-06 05:19:27,107][20526] Worker 2 uses CPU cores [0]
2898
+ [2024-11-06 05:19:27,201][20511] Conv encoder output size: 512
2899
+ [2024-11-06 05:19:27,202][20511] Policy head output size: 512
2900
+ [2024-11-06 05:19:27,218][20531] Worker 5 uses CPU cores [1]
2901
+ [2024-11-06 05:19:27,231][20511] Created Actor Critic model with architecture:
2902
+ [2024-11-06 05:19:27,232][20511] ActorCriticSharedWeights(
2903
+ (obs_normalizer): ObservationNormalizer(
2904
+ (running_mean_std): RunningMeanStdDictInPlace(
2905
+ (running_mean_std): ModuleDict(
2906
+ (obs): RunningMeanStdInPlace()
2907
+ )
2908
+ )
2909
+ )
2910
+ (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
2911
+ (encoder): VizdoomEncoder(
2912
+ (basic_encoder): ConvEncoder(
2913
+ (enc): RecursiveScriptModule(
2914
+ original_name=ConvEncoderImpl
2915
+ (conv_head): RecursiveScriptModule(
2916
+ original_name=Sequential
2917
+ (0): RecursiveScriptModule(original_name=Conv2d)
2918
+ (1): RecursiveScriptModule(original_name=ELU)
2919
+ (2): RecursiveScriptModule(original_name=Conv2d)
2920
+ (3): RecursiveScriptModule(original_name=ELU)
2921
+ (4): RecursiveScriptModule(original_name=Conv2d)
2922
+ (5): RecursiveScriptModule(original_name=ELU)
2923
+ )
2924
+ (mlp_layers): RecursiveScriptModule(
2925
+ original_name=Sequential
2926
+ (0): RecursiveScriptModule(original_name=Linear)
2927
+ (1): RecursiveScriptModule(original_name=ELU)
2928
+ )
2929
+ )
2930
+ )
2931
+ )
2932
+ (core): ModelCoreRNN(
2933
+ (core): GRU(512, 512)
2934
+ )
2935
+ (decoder): MlpDecoder(
2936
+ (mlp): Identity()
2937
+ )
2938
+ (critic_linear): Linear(in_features=512, out_features=1, bias=True)
2939
+ (action_parameterization): ActionParameterizationDefault(
2940
+ (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
2941
+ )
2942
+ )
2943
+ [2024-11-06 05:19:27,249][20532] Worker 7 uses CPU cores [1]
2944
+ [2024-11-06 05:19:27,441][20511] Using optimizer <class 'torch.optim.adam.Adam'>
2945
+ [2024-11-06 05:19:27,584][20138] Heartbeat connected on Batcher_0
2946
+ [2024-11-06 05:19:27,590][20138] Heartbeat connected on InferenceWorker_p0-w0
2947
+ [2024-11-06 05:19:27,600][20138] Heartbeat connected on RolloutWorker_w1
2948
+ [2024-11-06 05:19:27,601][20138] Heartbeat connected on RolloutWorker_w0
2949
+ [2024-11-06 05:19:27,605][20138] Heartbeat connected on RolloutWorker_w2
2950
+ [2024-11-06 05:19:27,609][20138] Heartbeat connected on RolloutWorker_w3
2951
+ [2024-11-06 05:19:27,613][20138] Heartbeat connected on RolloutWorker_w4
2952
+ [2024-11-06 05:19:27,616][20138] Heartbeat connected on RolloutWorker_w5
2953
+ [2024-11-06 05:19:27,623][20138] Heartbeat connected on RolloutWorker_w7
2954
+ [2024-11-06 05:19:27,625][20138] Heartbeat connected on RolloutWorker_w6
2955
+ [2024-11-06 05:19:28,291][20511] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002443_10006528.pth...
2956
+ [2024-11-06 05:19:28,327][20511] Loading model from checkpoint
2957
+ [2024-11-06 05:19:28,329][20511] Loaded experiment state at self.train_step=2443, self.env_steps=10006528
2958
+ [2024-11-06 05:19:28,330][20511] Initialized policy 0 weights for model version 2443
2959
+ [2024-11-06 05:19:28,333][20511] Using GPUs [0] for process 0 (actually maps to GPUs [0])
2960
+ [2024-11-06 05:19:28,340][20511] LearnerWorker_p0 finished initialization!
2961
+ [2024-11-06 05:19:28,341][20138] Heartbeat connected on LearnerWorker_p0
2962
+ [2024-11-06 05:19:28,429][20524] RunningMeanStd input shape: (3, 72, 128)
2963
+ [2024-11-06 05:19:28,430][20524] RunningMeanStd input shape: (1,)
2964
+ [2024-11-06 05:19:28,443][20524] ConvEncoder: input_channels=3
2965
+ [2024-11-06 05:19:28,552][20524] Conv encoder output size: 512
2966
+ [2024-11-06 05:19:28,552][20524] Policy head output size: 512
2967
+ [2024-11-06 05:19:28,606][20138] Inference worker 0-0 is ready!
2968
+ [2024-11-06 05:19:28,607][20138] All inference workers are ready! Signal rollout workers to start!
2969
+ [2024-11-06 05:19:28,825][20527] Doom resolution: 160x120, resize resolution: (128, 72)
2970
+ [2024-11-06 05:19:28,825][20530] Doom resolution: 160x120, resize resolution: (128, 72)
2971
+ [2024-11-06 05:19:28,824][20529] Doom resolution: 160x120, resize resolution: (128, 72)
2972
+ [2024-11-06 05:19:28,836][20532] Doom resolution: 160x120, resize resolution: (128, 72)
2973
+ [2024-11-06 05:19:28,833][20525] Doom resolution: 160x120, resize resolution: (128, 72)
2974
+ [2024-11-06 05:19:28,841][20526] Doom resolution: 160x120, resize resolution: (128, 72)
2975
+ [2024-11-06 05:19:28,846][20528] Doom resolution: 160x120, resize resolution: (128, 72)
2976
+ [2024-11-06 05:19:28,838][20531] Doom resolution: 160x120, resize resolution: (128, 72)
2977
+ [2024-11-06 05:19:30,197][20530] Decorrelating experience for 0 frames...
2978
+ [2024-11-06 05:19:30,199][20525] Decorrelating experience for 0 frames...
2979
+ [2024-11-06 05:19:30,201][20529] Decorrelating experience for 0 frames...
2980
+ [2024-11-06 05:19:30,253][20532] Decorrelating experience for 0 frames...
2981
+ [2024-11-06 05:19:30,255][20527] Decorrelating experience for 0 frames...
2982
+ [2024-11-06 05:19:30,263][20531] Decorrelating experience for 0 frames...
2983
+ [2024-11-06 05:19:31,208][20529] Decorrelating experience for 32 frames...
2984
+ [2024-11-06 05:19:31,341][20138] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 10006528. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
2985
+ [2024-11-06 05:19:31,379][20526] Decorrelating experience for 0 frames...
2986
+ [2024-11-06 05:19:31,416][20532] Decorrelating experience for 32 frames...
2987
+ [2024-11-06 05:19:31,419][20528] Decorrelating experience for 0 frames...
2988
+ [2024-11-06 05:19:31,422][20531] Decorrelating experience for 32 frames...
2989
+ [2024-11-06 05:19:31,467][20530] Decorrelating experience for 32 frames...
2990
+ [2024-11-06 05:19:32,415][20526] Decorrelating experience for 32 frames...
2991
+ [2024-11-06 05:19:32,418][20525] Decorrelating experience for 32 frames...
2992
+ [2024-11-06 05:19:32,423][20528] Decorrelating experience for 32 frames...
2993
+ [2024-11-06 05:19:32,764][20532] Decorrelating experience for 64 frames...
2994
+ [2024-11-06 05:19:33,381][20525] Decorrelating experience for 64 frames...
2995
+ [2024-11-06 05:19:33,482][20527] Decorrelating experience for 32 frames...
2996
+ [2024-11-06 05:19:33,501][20531] Decorrelating experience for 64 frames...
2997
+ [2024-11-06 05:19:33,725][20530] Decorrelating experience for 64 frames...
2998
+ [2024-11-06 05:19:34,660][20525] Decorrelating experience for 96 frames...
2999
+ [2024-11-06 05:19:34,714][20526] Decorrelating experience for 64 frames...
3000
+ [2024-11-06 05:19:35,084][20530] Decorrelating experience for 96 frames...
3001
+ [2024-11-06 05:19:35,094][20532] Decorrelating experience for 96 frames...
3002
+ [2024-11-06 05:19:35,228][20531] Decorrelating experience for 96 frames...
3003
+ [2024-11-06 05:19:35,442][20528] Decorrelating experience for 64 frames...
3004
+ [2024-11-06 05:19:35,547][20527] Decorrelating experience for 64 frames...
3005
+ [2024-11-06 05:19:36,186][20526] Decorrelating experience for 96 frames...
3006
+ [2024-11-06 05:19:36,341][20138] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 10006528. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
3007
+ [2024-11-06 05:19:36,776][20529] Decorrelating experience for 64 frames...
3008
+ [2024-11-06 05:19:38,786][20528] Decorrelating experience for 96 frames...
3009
+ [2024-11-06 05:19:39,134][20527] Decorrelating experience for 96 frames...
3010
+ [2024-11-06 05:19:41,341][20138] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 10006528. Throughput: 0: 186.0. Samples: 1860. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
3011
+ [2024-11-06 05:19:41,346][20138] Avg episode reward: [(0, '2.160')]
3012
+ [2024-11-06 05:19:42,481][20511] Signal inference workers to stop experience collection...
3013
+ [2024-11-06 05:19:42,519][20524] InferenceWorker_p0-w0: stopping experience collection
3014
+ [2024-11-06 05:19:42,607][20529] Decorrelating experience for 96 frames...
3015
+ [2024-11-06 05:19:44,301][20511] Signal inference workers to resume experience collection...
3016
+ [2024-11-06 05:19:44,312][20511] Stopping Batcher_0...
3017
+ [2024-11-06 05:19:44,312][20511] Loop batcher_evt_loop terminating...
3018
+ [2024-11-06 05:19:44,313][20138] Component Batcher_0 stopped!
3019
+ [2024-11-06 05:19:44,349][20524] Weights refcount: 2 0
3020
+ [2024-11-06 05:19:44,351][20138] Component InferenceWorker_p0-w0 stopped!
3021
+ [2024-11-06 05:19:44,355][20524] Stopping InferenceWorker_p0-w0...
3022
+ [2024-11-06 05:19:44,356][20524] Loop inference_proc0-0_evt_loop terminating...
3023
+ [2024-11-06 05:19:44,639][20138] Component RolloutWorker_w7 stopped!
3024
+ [2024-11-06 05:19:44,645][20532] Stopping RolloutWorker_w7...
3025
+ [2024-11-06 05:19:44,665][20138] Component RolloutWorker_w5 stopped!
3026
+ [2024-11-06 05:19:44,654][20532] Loop rollout_proc7_evt_loop terminating...
3027
+ [2024-11-06 05:19:44,670][20531] Stopping RolloutWorker_w5...
3028
+ [2024-11-06 05:19:44,672][20531] Loop rollout_proc5_evt_loop terminating...
3029
+ [2024-11-06 05:19:44,677][20526] Stopping RolloutWorker_w2...
3030
+ [2024-11-06 05:19:44,681][20526] Loop rollout_proc2_evt_loop terminating...
3031
+ [2024-11-06 05:19:44,677][20138] Component RolloutWorker_w2 stopped!
3032
+ [2024-11-06 05:19:44,696][20138] Component RolloutWorker_w3 stopped!
3033
+ [2024-11-06 05:19:44,705][20528] Stopping RolloutWorker_w3...
3034
+ [2024-11-06 05:19:44,711][20529] Stopping RolloutWorker_w6...
3035
+ [2024-11-06 05:19:44,711][20138] Component RolloutWorker_w6 stopped!
3036
+ [2024-11-06 05:19:44,707][20528] Loop rollout_proc3_evt_loop terminating...
3037
+ [2024-11-06 05:19:44,715][20529] Loop rollout_proc6_evt_loop terminating...
3038
+ [2024-11-06 05:19:44,766][20138] Component RolloutWorker_w1 stopped!
3039
+ [2024-11-06 05:19:44,769][20527] Stopping RolloutWorker_w1...
3040
+ [2024-11-06 05:19:44,774][20527] Loop rollout_proc1_evt_loop terminating...
3041
+ [2024-11-06 05:19:44,832][20530] Stopping RolloutWorker_w4...
3042
+ [2024-11-06 05:19:44,836][20525] Stopping RolloutWorker_w0...
3043
+ [2024-11-06 05:19:44,837][20525] Loop rollout_proc0_evt_loop terminating...
3044
+ [2024-11-06 05:19:44,841][20530] Loop rollout_proc4_evt_loop terminating...
3045
+ [2024-11-06 05:19:44,832][20138] Component RolloutWorker_w4 stopped!
3046
+ [2024-11-06 05:19:44,843][20138] Component RolloutWorker_w0 stopped!
3047
+ [2024-11-06 05:19:45,180][20511] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002445_10014720.pth...
3048
+ [2024-11-06 05:19:45,313][20511] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002366_9691136.pth
3049
+ [2024-11-06 05:19:45,327][20511] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002445_10014720.pth...
3050
+ [2024-11-06 05:19:45,520][20511] Stopping LearnerWorker_p0...
3051
+ [2024-11-06 05:19:45,521][20138] Component LearnerWorker_p0 stopped!
3052
+ [2024-11-06 05:19:45,524][20511] Loop learner_proc0_evt_loop terminating...
3053
+ [2024-11-06 05:19:45,525][20138] Waiting for process learner_proc0 to stop...
3054
+ [2024-11-06 05:19:47,034][20138] Waiting for process inference_proc0-0 to join...
3055
+ [2024-11-06 05:19:47,041][20138] Waiting for process rollout_proc0 to join...
3056
+ [2024-11-06 05:19:48,558][20138] Waiting for process rollout_proc1 to join...
3057
+ [2024-11-06 05:19:48,720][20138] Waiting for process rollout_proc2 to join...
3058
+ [2024-11-06 05:19:48,725][20138] Waiting for process rollout_proc3 to join...
3059
+ [2024-11-06 05:19:48,729][20138] Waiting for process rollout_proc4 to join...
3060
+ [2024-11-06 05:19:48,732][20138] Waiting for process rollout_proc5 to join...
3061
+ [2024-11-06 05:19:48,736][20138] Waiting for process rollout_proc6 to join...
3062
+ [2024-11-06 05:19:48,739][20138] Waiting for process rollout_proc7 to join...
3063
+ [2024-11-06 05:19:48,743][20138] Batcher 0 profile tree view:
3064
+ batching: 0.0990, releasing_batches: 0.0020
3065
+ [2024-11-06 05:19:48,744][20138] InferenceWorker_p0-w0 profile tree view:
3066
+ update_model: 0.0210
3067
+ wait_policy: 0.0123
3068
+ wait_policy_total: 9.4983
3069
+ one_step: 0.0048
3070
+ handle_policy_step: 4.0362
3071
+ deserialize: 0.0619, stack: 0.0140, obs_to_device_normalize: 0.6994, forward: 2.5820, send_messages: 0.1661
3072
+ prepare_outputs: 0.3641
3073
+ to_cpu: 0.1995
3074
+ [2024-11-06 05:19:48,746][20138] Learner 0 profile tree view:
3075
+ misc: 0.0000, prepare_batch: 2.6073
3076
+ train: 4.1122
3077
+ epoch_init: 0.0000, minibatch_init: 0.0000, losses_postprocess: 0.0004, kl_divergence: 0.0227, after_optimizer: 0.0703
3078
+ calculate_losses: 2.2671
3079
+ losses_init: 0.0000, forward_head: 0.4234, bptt_initial: 1.6088, tail: 0.0734, advantages_returns: 0.0020, losses: 0.1522
3080
+ bptt: 0.0068
3081
+ bptt_forward_core: 0.0067
3082
+ update: 1.7506
3083
+ clip: 0.0383
3084
+ [2024-11-06 05:19:48,748][20138] RolloutWorker_w0 profile tree view:
3085
+ wait_for_trajectories: 0.0138, enqueue_policy_requests: 1.2427, env_step: 4.2976, overhead: 0.1271, complete_rollouts: 0.0335
3086
+ save_policy_outputs: 0.1316
3087
+ split_output_tensors: 0.0640
3088
+ [2024-11-06 05:19:48,753][20138] RolloutWorker_w7 profile tree view:
3089
+ wait_for_trajectories: 0.0012, enqueue_policy_requests: 1.3873, env_step: 4.2567, overhead: 0.0947, complete_rollouts: 0.1022
3090
+ save_policy_outputs: 0.1197
3091
+ split_output_tensors: 0.0320
3092
+ [2024-11-06 05:19:48,754][20138] Loop Runner_EvtLoop terminating...
3093
+ [2024-11-06 05:19:48,756][20138] Runner profile tree view:
3094
+ main_loop: 41.1334
3095
+ [2024-11-06 05:19:48,758][20138] Collected {0: 10014720}, FPS: 199.2
3096
+ [2024-11-06 05:19:48,991][20138] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
3097
+ [2024-11-06 05:19:48,993][20138] Overriding arg 'num_workers' with value 1 passed from command line
3098
+ [2024-11-06 05:19:48,995][20138] Adding new argument 'no_render'=True that is not in the saved config file!
3099
+ [2024-11-06 05:19:48,997][20138] Adding new argument 'save_video'=True that is not in the saved config file!
3100
+ [2024-11-06 05:19:48,999][20138] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
3101
+ [2024-11-06 05:19:49,001][20138] Adding new argument 'video_name'=None that is not in the saved config file!
3102
+ [2024-11-06 05:19:49,005][20138] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
3103
+ [2024-11-06 05:19:49,006][20138] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
3104
+ [2024-11-06 05:19:49,008][20138] Adding new argument 'push_to_hub'=False that is not in the saved config file!
3105
+ [2024-11-06 05:19:49,011][20138] Adding new argument 'hf_repository'=None that is not in the saved config file!
3106
+ [2024-11-06 05:19:49,014][20138] Adding new argument 'policy_index'=0 that is not in the saved config file!
3107
+ [2024-11-06 05:19:49,015][20138] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
3108
+ [2024-11-06 05:19:49,016][20138] Adding new argument 'train_script'=None that is not in the saved config file!
3109
+ [2024-11-06 05:19:49,017][20138] Adding new argument 'enjoy_script'=None that is not in the saved config file!
3110
+ [2024-11-06 05:19:49,018][20138] Using frameskip 1 and render_action_repeat=4 for evaluation
3111
+ [2024-11-06 05:19:49,049][20138] Doom resolution: 160x120, resize resolution: (128, 72)
3112
+ [2024-11-06 05:19:49,053][20138] RunningMeanStd input shape: (3, 72, 128)
3113
+ [2024-11-06 05:19:49,056][20138] RunningMeanStd input shape: (1,)
3114
+ [2024-11-06 05:19:49,075][20138] ConvEncoder: input_channels=3
3115
+ [2024-11-06 05:19:49,182][20138] Conv encoder output size: 512
3116
+ [2024-11-06 05:19:49,183][20138] Policy head output size: 512
3117
+ [2024-11-06 05:19:49,357][20138] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002445_10014720.pth...
3118
+ [2024-11-06 05:19:50,179][20138] Num frames 100...
3119
+ [2024-11-06 05:19:50,303][20138] Num frames 200...
3120
+ [2024-11-06 05:19:50,428][20138] Num frames 300...
3121
+ [2024-11-06 05:19:50,553][20138] Num frames 400...
3122
+ [2024-11-06 05:19:50,676][20138] Num frames 500...
3123
+ [2024-11-06 05:19:50,798][20138] Num frames 600...
3124
+ [2024-11-06 05:19:50,868][20138] Avg episode rewards: #0: 10.080, true rewards: #0: 6.080
3125
+ [2024-11-06 05:19:50,869][20138] Avg episode reward: 10.080, avg true_objective: 6.080
3126
+ [2024-11-06 05:19:50,989][20138] Num frames 700...
3127
+ [2024-11-06 05:19:51,116][20138] Num frames 800...
3128
+ [2024-11-06 05:19:51,241][20138] Num frames 900...
3129
+ [2024-11-06 05:19:51,372][20138] Num frames 1000...
3130
+ [2024-11-06 05:19:51,498][20138] Num frames 1100...
3131
+ [2024-11-06 05:19:51,621][20138] Num frames 1200...
3132
+ [2024-11-06 05:19:51,752][20138] Num frames 1300...
3133
+ [2024-11-06 05:19:51,878][20138] Num frames 1400...
3134
+ [2024-11-06 05:19:52,010][20138] Num frames 1500...
3135
+ [2024-11-06 05:19:52,112][20138] Avg episode rewards: #0: 15.180, true rewards: #0: 7.680
3136
+ [2024-11-06 05:19:52,113][20138] Avg episode reward: 15.180, avg true_objective: 7.680
3137
+ [2024-11-06 05:19:52,193][20138] Num frames 1600...
3138
+ [2024-11-06 05:19:52,324][20138] Num frames 1700...
3139
+ [2024-11-06 05:19:52,451][20138] Num frames 1800...
3140
+ [2024-11-06 05:19:52,575][20138] Num frames 1900...
3141
+ [2024-11-06 05:19:52,700][20138] Num frames 2000...
3142
+ [2024-11-06 05:19:52,823][20138] Num frames 2100...
3143
+ [2024-11-06 05:19:52,944][20138] Num frames 2200...
3144
+ [2024-11-06 05:19:53,079][20138] Num frames 2300...
3145
+ [2024-11-06 05:19:53,208][20138] Num frames 2400...
3146
+ [2024-11-06 05:19:53,341][20138] Num frames 2500...
3147
+ [2024-11-06 05:19:53,517][20138] Num frames 2600...
3148
+ [2024-11-06 05:19:53,689][20138] Num frames 2700...
3149
+ [2024-11-06 05:19:53,861][20138] Num frames 2800...
3150
+ [2024-11-06 05:19:54,061][20138] Num frames 2900...
3151
+ [2024-11-06 05:19:54,259][20138] Num frames 3000...
3152
+ [2024-11-06 05:19:54,435][20138] Num frames 3100...
3153
+ [2024-11-06 05:19:54,612][20138] Num frames 3200...
3154
+ [2024-11-06 05:19:54,794][20138] Num frames 3300...
3155
+ [2024-11-06 05:19:54,974][20138] Num frames 3400...
3156
+ [2024-11-06 05:19:55,187][20138] Avg episode rewards: #0: 27.626, true rewards: #0: 11.627
3157
+ [2024-11-06 05:19:55,189][20138] Avg episode reward: 27.626, avg true_objective: 11.627
3158
+ [2024-11-06 05:19:55,215][20138] Num frames 3500...
3159
+ [2024-11-06 05:19:55,415][20138] Num frames 3600...
3160
+ [2024-11-06 05:19:55,600][20138] Num frames 3700...
3161
+ [2024-11-06 05:19:55,774][20138] Num frames 3800...
3162
+ [2024-11-06 05:19:55,959][20138] Num frames 3900...
3163
+ [2024-11-06 05:19:56,092][20138] Num frames 4000...
3164
+ [2024-11-06 05:19:56,232][20138] Num frames 4100...
3165
+ [2024-11-06 05:19:56,367][20138] Num frames 4200...
3166
+ [2024-11-06 05:19:56,493][20138] Num frames 4300...
3167
+ [2024-11-06 05:19:56,620][20138] Num frames 4400...
3168
+ [2024-11-06 05:19:56,747][20138] Num frames 4500...
3169
+ [2024-11-06 05:19:56,874][20138] Num frames 4600...
3170
+ [2024-11-06 05:19:57,000][20138] Num frames 4700...
3171
+ [2024-11-06 05:19:57,124][20138] Num frames 4800...
3172
+ [2024-11-06 05:19:57,264][20138] Num frames 4900...
3173
+ [2024-11-06 05:19:57,410][20138] Avg episode rewards: #0: 29.430, true rewards: #0: 12.430
3174
+ [2024-11-06 05:19:57,412][20138] Avg episode reward: 29.430, avg true_objective: 12.430
3175
+ [2024-11-06 05:19:57,449][20138] Num frames 5000...
3176
+ [2024-11-06 05:19:57,578][20138] Num frames 5100...
3177
+ [2024-11-06 05:19:57,705][20138] Num frames 5200...
3178
+ [2024-11-06 05:19:57,827][20138] Num frames 5300...
3179
+ [2024-11-06 05:19:57,956][20138] Num frames 5400...
3180
+ [2024-11-06 05:19:58,082][20138] Num frames 5500...
3181
+ [2024-11-06 05:19:58,220][20138] Num frames 5600...
3182
+ [2024-11-06 05:19:58,355][20138] Num frames 5700...
3183
+ [2024-11-06 05:19:58,482][20138] Num frames 5800...
3184
+ [2024-11-06 05:19:58,612][20138] Num frames 5900...
3185
+ [2024-11-06 05:19:58,736][20138] Num frames 6000...
3186
+ [2024-11-06 05:19:58,872][20138] Num frames 6100...
3187
+ [2024-11-06 05:19:59,009][20138] Num frames 6200...
3188
+ [2024-11-06 05:19:59,142][20138] Num frames 6300...
3189
+ [2024-11-06 05:19:59,281][20138] Num frames 6400...
3190
+ [2024-11-06 05:19:59,406][20138] Num frames 6500...
3191
+ [2024-11-06 05:19:59,536][20138] Num frames 6600...
3192
+ [2024-11-06 05:19:59,663][20138] Num frames 6700...
3193
+ [2024-11-06 05:19:59,796][20138] Num frames 6800...
3194
+ [2024-11-06 05:19:59,926][20138] Num frames 6900...
3195
+ [2024-11-06 05:20:00,051][20138] Num frames 7000...
3196
+ [2024-11-06 05:20:00,199][20138] Avg episode rewards: #0: 36.544, true rewards: #0: 14.144
3197
+ [2024-11-06 05:20:00,201][20138] Avg episode reward: 36.544, avg true_objective: 14.144
3198
+ [2024-11-06 05:20:00,252][20138] Num frames 7100...
3199
+ [2024-11-06 05:20:00,390][20138] Num frames 7200...
3200
+ [2024-11-06 05:20:00,515][20138] Num frames 7300...
3201
+ [2024-11-06 05:20:00,637][20138] Num frames 7400...
3202
+ [2024-11-06 05:20:00,765][20138] Num frames 7500...
3203
+ [2024-11-06 05:20:00,892][20138] Num frames 7600...
3204
+ [2024-11-06 05:20:01,017][20138] Num frames 7700...
3205
+ [2024-11-06 05:20:01,142][20138] Num frames 7800...
3206
+ [2024-11-06 05:20:01,278][20138] Num frames 7900...
3207
+ [2024-11-06 05:20:01,407][20138] Num frames 8000...
3208
+ [2024-11-06 05:20:01,532][20138] Num frames 8100...
3209
+ [2024-11-06 05:20:01,666][20138] Num frames 8200...
3210
+ [2024-11-06 05:20:01,801][20138] Num frames 8300...
3211
+ [2024-11-06 05:20:01,930][20138] Num frames 8400...
3212
+ [2024-11-06 05:20:02,000][20138] Avg episode rewards: #0: 36.350, true rewards: #0: 14.017
3213
+ [2024-11-06 05:20:02,002][20138] Avg episode reward: 36.350, avg true_objective: 14.017
3214
+ [2024-11-06 05:20:02,115][20138] Num frames 8500...
3215
+ [2024-11-06 05:20:02,242][20138] Num frames 8600...
3216
+ [2024-11-06 05:20:02,382][20138] Num frames 8700...
3217
+ [2024-11-06 05:20:02,511][20138] Num frames 8800...
3218
+ [2024-11-06 05:20:02,641][20138] Num frames 8900...
3219
+ [2024-11-06 05:20:02,765][20138] Num frames 9000...
3220
+ [2024-11-06 05:20:02,884][20138] Num frames 9100...
3221
+ [2024-11-06 05:20:03,009][20138] Num frames 9200...
3222
+ [2024-11-06 05:20:03,133][20138] Num frames 9300...
3223
+ [2024-11-06 05:20:03,263][20138] Num frames 9400...
3224
+ [2024-11-06 05:20:03,399][20138] Num frames 9500...
3225
+ [2024-11-06 05:20:03,522][20138] Num frames 9600...
3226
+ [2024-11-06 05:20:03,649][20138] Num frames 9700...
3227
+ [2024-11-06 05:20:03,776][20138] Num frames 9800...
3228
+ [2024-11-06 05:20:03,900][20138] Num frames 9900...
3229
+ [2024-11-06 05:20:04,032][20138] Num frames 10000...
3230
+ [2024-11-06 05:20:04,160][20138] Num frames 10100...
3231
+ [2024-11-06 05:20:04,305][20138] Num frames 10200...
3232
+ [2024-11-06 05:20:04,437][20138] Num frames 10300...
3233
+ [2024-11-06 05:20:04,562][20138] Num frames 10400...
3234
+ [2024-11-06 05:20:04,686][20138] Num frames 10500...
3235
+ [2024-11-06 05:20:04,755][20138] Avg episode rewards: #0: 40.157, true rewards: #0: 15.014
3236
+ [2024-11-06 05:20:04,757][20138] Avg episode reward: 40.157, avg true_objective: 15.014
3237
+ [2024-11-06 05:20:04,863][20138] Num frames 10600...
3238
+ [2024-11-06 05:20:04,995][20138] Num frames 10700...
3239
+ [2024-11-06 05:20:05,115][20138] Num frames 10800...
3240
+ [2024-11-06 05:20:05,240][20138] Num frames 10900...
3241
+ [2024-11-06 05:20:05,371][20138] Num frames 11000...
3242
+ [2024-11-06 05:20:05,501][20138] Num frames 11100...
3243
+ [2024-11-06 05:20:05,627][20138] Num frames 11200...
3244
+ [2024-11-06 05:20:05,752][20138] Num frames 11300...
3245
+ [2024-11-06 05:20:05,875][20138] Num frames 11400...
3246
+ [2024-11-06 05:20:06,006][20138] Num frames 11500...
3247
+ [2024-11-06 05:20:06,186][20138] Num frames 11600...
3248
+ [2024-11-06 05:20:06,362][20138] Num frames 11700...
3249
+ [2024-11-06 05:20:06,540][20138] Num frames 11800...
3250
+ [2024-11-06 05:20:06,714][20138] Num frames 11900...
3251
+ [2024-11-06 05:20:06,886][20138] Num frames 12000...
3252
+ [2024-11-06 05:20:07,083][20138] Avg episode rewards: #0: 40.601, true rewards: #0: 15.101
3253
+ [2024-11-06 05:20:07,086][20138] Avg episode reward: 40.601, avg true_objective: 15.101
3254
+ [2024-11-06 05:20:07,121][20138] Num frames 12100...
3255
+ [2024-11-06 05:20:07,291][20138] Num frames 12200...
3256
+ [2024-11-06 05:20:07,488][20138] Num frames 12300...
3257
+ [2024-11-06 05:20:07,668][20138] Num frames 12400...
3258
+ [2024-11-06 05:20:07,845][20138] Num frames 12500...
3259
+ [2024-11-06 05:20:08,028][20138] Num frames 12600...
3260
+ [2024-11-06 05:20:08,253][20138] Avg episode rewards: #0: 37.432, true rewards: #0: 14.099
3261
+ [2024-11-06 05:20:08,255][20138] Avg episode reward: 37.432, avg true_objective: 14.099
3262
+ [2024-11-06 05:20:08,277][20138] Num frames 12700...
3263
+ [2024-11-06 05:20:08,461][20138] Num frames 12800...
3264
+ [2024-11-06 05:20:08,625][20138] Num frames 12900...
3265
+ [2024-11-06 05:20:08,751][20138] Num frames 13000...
3266
+ [2024-11-06 05:20:08,880][20138] Num frames 13100...
3267
+ [2024-11-06 05:20:09,122][20138] Num frames 13200...
3268
+ [2024-11-06 05:20:09,331][20138] Num frames 13300...
3269
+ [2024-11-06 05:20:09,453][20138] Num frames 13400...
3270
+ [2024-11-06 05:20:09,587][20138] Num frames 13500...
3271
+ [2024-11-06 05:20:09,713][20138] Num frames 13600...
3272
+ [2024-11-06 05:20:09,856][20138] Num frames 13700...
3273
+ [2024-11-06 05:20:10,116][20138] Num frames 13800...
3274
+ [2024-11-06 05:20:10,238][20138] Num frames 13900...
3275
+ [2024-11-06 05:20:10,364][20138] Num frames 14000...
3276
+ [2024-11-06 05:20:10,498][20138] Avg episode rewards: #0: 36.862, true rewards: #0: 14.062
3277
+ [2024-11-06 05:20:10,500][20138] Avg episode reward: 36.862, avg true_objective: 14.062
3278
+ [2024-11-06 05:21:42,824][20138] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
3279
+ [2024-11-06 05:22:33,686][20138] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
3280
+ [2024-11-06 05:22:33,688][20138] Overriding arg 'num_workers' with value 1 passed from command line
3281
+ [2024-11-06 05:22:33,690][20138] Adding new argument 'no_render'=True that is not in the saved config file!
3282
+ [2024-11-06 05:22:33,692][20138] Adding new argument 'save_video'=True that is not in the saved config file!
3283
+ [2024-11-06 05:22:33,694][20138] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
3284
+ [2024-11-06 05:22:33,695][20138] Adding new argument 'video_name'=None that is not in the saved config file!
3285
+ [2024-11-06 05:22:33,697][20138] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
3286
+ [2024-11-06 05:22:33,698][20138] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
3287
+ [2024-11-06 05:22:33,699][20138] Adding new argument 'push_to_hub'=True that is not in the saved config file!
3288
+ [2024-11-06 05:22:33,700][20138] Adding new argument 'hf_repository'='InMDev/vizdoom_health_gathering_supreme' that is not in the saved config file!
3289
+ [2024-11-06 05:22:33,701][20138] Adding new argument 'policy_index'=0 that is not in the saved config file!
3290
+ [2024-11-06 05:22:33,702][20138] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
3291
+ [2024-11-06 05:22:33,703][20138] Adding new argument 'train_script'=None that is not in the saved config file!
3292
+ [2024-11-06 05:22:33,704][20138] Adding new argument 'enjoy_script'=None that is not in the saved config file!
3293
+ [2024-11-06 05:22:33,706][20138] Using frameskip 1 and render_action_repeat=4 for evaluation
3294
+ [2024-11-06 05:22:33,738][20138] RunningMeanStd input shape: (3, 72, 128)
3295
+ [2024-11-06 05:22:33,739][20138] RunningMeanStd input shape: (1,)
3296
+ [2024-11-06 05:22:33,756][20138] ConvEncoder: input_channels=3
3297
+ [2024-11-06 05:22:33,794][20138] Conv encoder output size: 512
3298
+ [2024-11-06 05:22:33,796][20138] Policy head output size: 512
3299
+ [2024-11-06 05:22:33,814][20138] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002445_10014720.pth...
3300
+ [2024-11-06 05:22:34,266][20138] Num frames 100...
3301
+ [2024-11-06 05:22:34,399][20138] Num frames 200...
3302
+ [2024-11-06 05:22:34,526][20138] Num frames 300...
3303
+ [2024-11-06 05:22:34,661][20138] Num frames 400...
3304
+ [2024-11-06 05:22:34,790][20138] Num frames 500...
3305
+ [2024-11-06 05:22:34,917][20138] Num frames 600...
3306
+ [2024-11-06 05:22:35,038][20138] Num frames 700...
3307
+ [2024-11-06 05:22:35,166][20138] Num frames 800...
3308
+ [2024-11-06 05:22:35,299][20138] Num frames 900...
3309
+ [2024-11-06 05:22:35,421][20138] Num frames 1000...
3310
+ [2024-11-06 05:22:35,547][20138] Num frames 1100...
3311
+ [2024-11-06 05:22:35,681][20138] Num frames 1200...
3312
+ [2024-11-06 05:22:35,811][20138] Num frames 1300...
3313
+ [2024-11-06 05:22:35,887][20138] Avg episode rewards: #0: 36.150, true rewards: #0: 13.150
3314
+ [2024-11-06 05:22:35,888][20138] Avg episode reward: 36.150, avg true_objective: 13.150
3315
+ [2024-11-06 05:22:36,008][20138] Num frames 1400...
3316
+ [2024-11-06 05:22:36,142][20138] Num frames 1500...
3317
+ [2024-11-06 05:22:36,281][20138] Num frames 1600...
3318
+ [2024-11-06 05:22:36,409][20138] Num frames 1700...
3319
+ [2024-11-06 05:22:36,534][20138] Num frames 1800...
3320
+ [2024-11-06 05:22:36,677][20138] Num frames 1900...
3321
+ [2024-11-06 05:22:36,800][20138] Num frames 2000...
3322
+ [2024-11-06 05:22:36,923][20138] Num frames 2100...
3323
+ [2024-11-06 05:22:37,046][20138] Num frames 2200...
3324
+ [2024-11-06 05:22:37,167][20138] Avg episode rewards: #0: 27.760, true rewards: #0: 11.260
3325
+ [2024-11-06 05:22:37,170][20138] Avg episode reward: 27.760, avg true_objective: 11.260
3326
+ [2024-11-06 05:22:37,230][20138] Num frames 2300...
3327
+ [2024-11-06 05:22:37,364][20138] Num frames 2400...
3328
+ [2024-11-06 05:22:37,492][20138] Num frames 2500...
3329
+ [2024-11-06 05:22:37,613][20138] Num frames 2600...
3330
+ [2024-11-06 05:22:37,745][20138] Num frames 2700...
3331
+ [2024-11-06 05:22:37,871][20138] Num frames 2800...
3332
+ [2024-11-06 05:22:37,992][20138] Num frames 2900...
3333
+ [2024-11-06 05:22:38,119][20138] Num frames 3000...
3334
+ [2024-11-06 05:22:38,249][20138] Num frames 3100...
3335
+ [2024-11-06 05:22:38,379][20138] Num frames 3200...
3336
+ [2024-11-06 05:22:38,501][20138] Num frames 3300...
3337
+ [2024-11-06 05:22:38,621][20138] Num frames 3400...
3338
+ [2024-11-06 05:22:38,753][20138] Num frames 3500...
3339
+ [2024-11-06 05:22:38,931][20138] Num frames 3600...
3340
+ [2024-11-06 05:22:39,103][20138] Num frames 3700...
3341
+ [2024-11-06 05:22:39,293][20138] Num frames 3800...
3342
+ [2024-11-06 05:22:39,463][20138] Num frames 3900...
3343
+ [2024-11-06 05:22:39,638][20138] Num frames 4000...
3344
+ [2024-11-06 05:22:39,816][20138] Num frames 4100...
3345
+ [2024-11-06 05:22:39,993][20138] Avg episode rewards: #0: 34.573, true rewards: #0: 13.907
3346
+ [2024-11-06 05:22:39,995][20138] Avg episode reward: 34.573, avg true_objective: 13.907
3347
+ [2024-11-06 05:22:40,054][20138] Num frames 4200...
3348
+ [2024-11-06 05:22:40,256][20138] Num frames 4300...
3349
+ [2024-11-06 05:22:40,430][20138] Num frames 4400...
3350
+ [2024-11-06 05:22:40,615][20138] Num frames 4500...
3351
+ [2024-11-06 05:22:40,811][20138] Num frames 4600...
3352
+ [2024-11-06 05:22:41,001][20138] Num frames 4700...
3353
+ [2024-11-06 05:22:41,188][20138] Num frames 4800...
3354
+ [2024-11-06 05:22:41,378][20138] Num frames 4900...
3355
+ [2024-11-06 05:22:41,510][20138] Num frames 5000...
3356
+ [2024-11-06 05:22:41,635][20138] Num frames 5100...
3357
+ [2024-11-06 05:22:41,760][20138] Num frames 5200...
3358
+ [2024-11-06 05:22:41,900][20138] Num frames 5300...
3359
+ [2024-11-06 05:22:42,035][20138] Num frames 5400...
3360
+ [2024-11-06 05:22:42,171][20138] Num frames 5500...
3361
+ [2024-11-06 05:22:42,308][20138] Num frames 5600...
3362
+ [2024-11-06 05:22:42,432][20138] Num frames 5700...
3363
+ [2024-11-06 05:22:42,557][20138] Num frames 5800...
3364
+ [2024-11-06 05:22:42,682][20138] Num frames 5900...
3365
+ [2024-11-06 05:22:42,803][20138] Num frames 6000...
3366
+ [2024-11-06 05:22:42,936][20138] Num frames 6100...
3367
+ [2024-11-06 05:22:43,070][20138] Num frames 6200...
3368
+ [2024-11-06 05:22:43,222][20138] Avg episode rewards: #0: 40.422, true rewards: #0: 15.673
3369
+ [2024-11-06 05:22:43,224][20138] Avg episode reward: 40.422, avg true_objective: 15.673
3370
+ [2024-11-06 05:22:43,272][20138] Num frames 6300...
3371
+ [2024-11-06 05:22:43,400][20138] Num frames 6400...
3372
+ [2024-11-06 05:22:43,524][20138] Num frames 6500...
3373
+ [2024-11-06 05:22:43,645][20138] Num frames 6600...
3374
+ [2024-11-06 05:22:43,769][20138] Num frames 6700...
3375
+ [2024-11-06 05:22:43,902][20138] Num frames 6800...
3376
+ [2024-11-06 05:22:44,025][20138] Num frames 6900...
3377
+ [2024-11-06 05:22:44,151][20138] Num frames 7000...
3378
+ [2024-11-06 05:22:44,327][20138] Avg episode rewards: #0: 35.978, true rewards: #0: 14.178
3379
+ [2024-11-06 05:22:44,330][20138] Avg episode reward: 35.978, avg true_objective: 14.178
3380
+ [2024-11-06 05:22:44,345][20138] Num frames 7100...
3381
+ [2024-11-06 05:22:44,474][20138] Num frames 7200...
3382
+ [2024-11-06 05:22:44,596][20138] Num frames 7300...
3383
+ [2024-11-06 05:22:44,719][20138] Num frames 7400...
3384
+ [2024-11-06 05:22:44,848][20138] Num frames 7500...
3385
+ [2024-11-06 05:22:44,988][20138] Num frames 7600...
3386
+ [2024-11-06 05:22:45,115][20138] Num frames 7700...
3387
+ [2024-11-06 05:22:45,244][20138] Num frames 7800...
3388
+ [2024-11-06 05:22:45,371][20138] Num frames 7900...
3389
+ [2024-11-06 05:22:45,499][20138] Num frames 8000...
3390
+ [2024-11-06 05:22:45,566][20138] Avg episode rewards: #0: 33.680, true rewards: #0: 13.347
3391
+ [2024-11-06 05:22:45,568][20138] Avg episode reward: 33.680, avg true_objective: 13.347
3392
+ [2024-11-06 05:22:45,684][20138] Num frames 8100...
3393
+ [2024-11-06 05:22:45,817][20138] Num frames 8200...
3394
+ [2024-11-06 05:22:45,955][20138] Num frames 8300...
3395
+ [2024-11-06 05:22:46,088][20138] Num frames 8400...
3396
+ [2024-11-06 05:22:46,215][20138] Num frames 8500...
3397
+ [2024-11-06 05:22:46,347][20138] Num frames 8600...
3398
+ [2024-11-06 05:22:46,469][20138] Num frames 8700...
3399
+ [2024-11-06 05:22:46,597][20138] Num frames 8800...
3400
+ [2024-11-06 05:22:46,724][20138] Num frames 8900...
3401
+ [2024-11-06 05:22:46,848][20138] Num frames 9000...
3402
+ [2024-11-06 05:22:46,907][20138] Avg episode rewards: #0: 31.858, true rewards: #0: 12.859
3403
+ [2024-11-06 05:22:46,908][20138] Avg episode reward: 31.858, avg true_objective: 12.859
3404
+ [2024-11-06 05:22:47,040][20138] Num frames 9100...
3405
+ [2024-11-06 05:22:47,165][20138] Num frames 9200...
3406
+ [2024-11-06 05:22:47,298][20138] Num frames 9300...
3407
+ [2024-11-06 05:22:47,429][20138] Num frames 9400...
3408
+ [2024-11-06 05:22:47,585][20138] Avg episode rewards: #0: 28.850, true rewards: #0: 11.850
3409
+ [2024-11-06 05:22:47,587][20138] Avg episode reward: 28.850, avg true_objective: 11.850
3410
+ [2024-11-06 05:22:47,614][20138] Num frames 9500...
3411
+ [2024-11-06 05:22:47,739][20138] Num frames 9600...
3412
+ [2024-11-06 05:22:47,867][20138] Num frames 9700...
3413
+ [2024-11-06 05:22:48,002][20138] Num frames 9800...
3414
+ [2024-11-06 05:22:48,129][20138] Num frames 9900...
3415
+ [2024-11-06 05:22:48,261][20138] Num frames 10000...
3416
+ [2024-11-06 05:22:48,388][20138] Num frames 10100...
3417
+ [2024-11-06 05:22:48,510][20138] Num frames 10200...
3418
+ [2024-11-06 05:22:48,624][20138] Avg episode rewards: #0: 27.609, true rewards: #0: 11.387
3419
+ [2024-11-06 05:22:48,625][20138] Avg episode reward: 27.609, avg true_objective: 11.387
3420
+ [2024-11-06 05:22:48,693][20138] Num frames 10300...
3421
+ [2024-11-06 05:22:48,820][20138] Num frames 10400...
3422
+ [2024-11-06 05:22:48,975][20138] Num frames 10500...
3423
+ [2024-11-06 05:22:49,105][20138] Num frames 10600...
3424
+ [2024-11-06 05:22:49,229][20138] Num frames 10700...
3425
+ [2024-11-06 05:22:49,365][20138] Num frames 10800...
3426
+ [2024-11-06 05:22:49,486][20138] Num frames 10900...
3427
+ [2024-11-06 05:22:49,618][20138] Num frames 11000...
3428
+ [2024-11-06 05:22:49,746][20138] Num frames 11100...
3429
+ [2024-11-06 05:22:49,869][20138] Num frames 11200...
3430
+ [2024-11-06 05:22:50,001][20138] Avg episode rewards: #0: 26.958, true rewards: #0: 11.258
3431
+ [2024-11-06 05:22:50,004][20138] Avg episode reward: 26.958, avg true_objective: 11.258
3432
+ [2024-11-06 05:24:03,470][20138] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
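The log ends with the evaluation episodes whose replay was saved above; the repository this run was pushed to is named earlier in the log (https://huggingface.co/InMDev/vizdoom_health_gathering_supreme). A hedged sketch for pulling that uploaded snapshot back down with huggingface_hub:

```python
# Hedged sketch (consumer-side usage, not part of this commit):
# download the uploaded snapshot, e.g. to replay the checkpoint locally.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="InMDev/vizdoom_health_gathering_supreme")
print(local_dir)  # cache path containing config.json, checkpoint_p0/, replay.mp4, ...
```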