|
[2024-06-06 12:54:36,757][00159] Saving configuration to /content/train_dir/default_experiment/config.json...
[2024-06-06 12:54:36,761][00159] Rollout worker 0 uses device cpu
[2024-06-06 12:54:36,762][00159] Rollout worker 1 uses device cpu
[2024-06-06 12:54:36,765][00159] Rollout worker 2 uses device cpu
[2024-06-06 12:54:36,768][00159] Rollout worker 3 uses device cpu
[2024-06-06 12:54:36,769][00159] Rollout worker 4 uses device cpu
[2024-06-06 12:54:36,771][00159] Rollout worker 5 uses device cpu
[2024-06-06 12:54:36,772][00159] Rollout worker 6 uses device cpu
[2024-06-06 12:54:36,774][00159] Rollout worker 7 uses device cpu
[2024-06-06 12:54:36,975][00159] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-06-06 12:54:36,977][00159] InferenceWorker_p0-w0: min num requests: 2
[2024-06-06 12:54:37,028][00159] Starting all processes...
[2024-06-06 12:54:37,033][00159] Starting process learner_proc0
[2024-06-06 12:54:38,650][00159] Starting all processes...
[2024-06-06 12:54:38,662][00159] Starting process inference_proc0-0
[2024-06-06 12:54:38,663][00159] Starting process rollout_proc0
[2024-06-06 12:54:38,663][00159] Starting process rollout_proc1
[2024-06-06 12:54:38,663][00159] Starting process rollout_proc2
[2024-06-06 12:54:38,663][00159] Starting process rollout_proc3
[2024-06-06 12:54:38,663][00159] Starting process rollout_proc4
[2024-06-06 12:54:38,663][00159] Starting process rollout_proc5
[2024-06-06 12:54:38,663][00159] Starting process rollout_proc6
[2024-06-06 12:54:38,663][00159] Starting process rollout_proc7
[2024-06-06 12:54:54,777][03285] Worker 4 uses CPU cores [0]
[2024-06-06 12:54:55,344][03267] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-06-06 12:54:55,346][03267] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2024-06-06 12:54:55,418][03267] Num visible devices: 1
[2024-06-06 12:54:55,452][03267] Starting seed is not provided
[2024-06-06 12:54:55,454][03267] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-06-06 12:54:55,455][03267] Initializing actor-critic model on device cuda:0
[2024-06-06 12:54:55,456][03267] RunningMeanStd input shape: (3, 72, 128)
[2024-06-06 12:54:55,459][03267] RunningMeanStd input shape: (1,)
[2024-06-06 12:54:55,551][03267] ConvEncoder: input_channels=3
[2024-06-06 12:54:55,606][03282] Worker 1 uses CPU cores [1]
[2024-06-06 12:54:55,768][03281] Worker 0 uses CPU cores [0]
[2024-06-06 12:54:55,853][03280] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-06-06 12:54:55,857][03280] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2024-06-06 12:54:55,939][03280] Num visible devices: 1
[2024-06-06 12:54:56,016][03287] Worker 5 uses CPU cores [1]
[2024-06-06 12:54:56,075][03288] Worker 7 uses CPU cores [1]
[2024-06-06 12:54:56,092][03286] Worker 6 uses CPU cores [0]
[2024-06-06 12:54:56,116][03283] Worker 2 uses CPU cores [0]
[2024-06-06 12:54:56,151][03284] Worker 3 uses CPU cores [1]
[2024-06-06 12:54:56,174][03267] Conv encoder output size: 512
[2024-06-06 12:54:56,174][03267] Policy head output size: 512
|
[2024-06-06 12:54:56,231][03267] Created Actor Critic model with architecture:
[2024-06-06 12:54:56,231][03267] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
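The dimensions in the log (a (3, 72, 128) observation flattened to a 512-dim feature by `conv_head` plus `mlp_layers`) can be sanity-checked with a small shape calculator. The exact kernels and strides are not printed in this log; the values below assume the common three-layer VizDoom/Atari conv stack (8/4, 4/2, 3/2), so treat them as an illustrative assumption rather than the run's true configuration.

```python
def conv2d_out(size, kernel, stride, padding=0):
    """Spatial output size of a Conv2d along one dimension (PyTorch floor rule)."""
    return (size + 2 * padding - kernel) // stride + 1

def encoder_flat_size(h, w, layers):
    """Flattened channels * H * W after a list of (out_channels, kernel, stride) convs."""
    out_ch = None
    for out_ch, k, s in layers:
        h, w = conv2d_out(h, k, s), conv2d_out(w, k, s)
    return out_ch * h * w

# Assumed conv head (not read from this log): three Conv2d/ELU pairs.
layers = [(32, 8, 4), (64, 4, 2), (128, 3, 2)]
flat = encoder_flat_size(72, 128, layers)
print(flat)  # 2304 under these assumptions; a final Linear maps this to 512
```

Under these assumed filters the conv head yields a 128 x 3 x 6 = 2304 vector, which the single Linear/ELU pair in `mlp_layers` would project to the 512-dim "Conv encoder output size" reported above.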
|
[2024-06-06 12:54:56,615][03267] Using optimizer <class 'torch.optim.adam.Adam'>
[2024-06-06 12:54:56,962][00159] Heartbeat connected on Batcher_0
[2024-06-06 12:54:56,975][00159] Heartbeat connected on InferenceWorker_p0-w0
[2024-06-06 12:54:56,985][00159] Heartbeat connected on RolloutWorker_w0
[2024-06-06 12:54:56,999][00159] Heartbeat connected on RolloutWorker_w1
[2024-06-06 12:54:57,005][00159] Heartbeat connected on RolloutWorker_w2
[2024-06-06 12:54:57,010][00159] Heartbeat connected on RolloutWorker_w3
[2024-06-06 12:54:57,013][00159] Heartbeat connected on RolloutWorker_w4
[2024-06-06 12:54:57,018][00159] Heartbeat connected on RolloutWorker_w5
[2024-06-06 12:54:57,023][00159] Heartbeat connected on RolloutWorker_w6
[2024-06-06 12:54:57,029][00159] Heartbeat connected on RolloutWorker_w7
[2024-06-06 12:54:57,759][03267] No checkpoints found
[2024-06-06 12:54:57,759][03267] Did not load from checkpoint, starting from scratch!
[2024-06-06 12:54:57,759][03267] Initialized policy 0 weights for model version 0
[2024-06-06 12:54:57,763][03267] LearnerWorker_p0 finished initialization!
[2024-06-06 12:54:57,763][03267] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-06-06 12:54:57,770][00159] Heartbeat connected on LearnerWorker_p0
[2024-06-06 12:54:58,042][03280] RunningMeanStd input shape: (3, 72, 128)
[2024-06-06 12:54:58,044][03280] RunningMeanStd input shape: (1,)
[2024-06-06 12:54:58,059][03280] ConvEncoder: input_channels=3
[2024-06-06 12:54:58,176][03280] Conv encoder output size: 512
[2024-06-06 12:54:58,176][03280] Policy head output size: 512
[2024-06-06 12:54:58,237][00159] Inference worker 0-0 is ready!
[2024-06-06 12:54:58,238][00159] All inference workers are ready! Signal rollout workers to start!
[2024-06-06 12:54:58,574][03286] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-06-06 12:54:58,601][03288] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-06-06 12:54:58,656][03283] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-06-06 12:54:58,692][03285] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-06-06 12:54:58,704][03284] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-06-06 12:54:58,731][03281] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-06-06 12:54:58,757][03287] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-06-06 12:54:58,766][03282] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-06-06 12:54:59,836][03282] Decorrelating experience for 0 frames...
[2024-06-06 12:54:59,836][03288] Decorrelating experience for 0 frames...
[2024-06-06 12:55:00,090][03286] Decorrelating experience for 0 frames...
[2024-06-06 12:55:00,130][03283] Decorrelating experience for 0 frames...
[2024-06-06 12:55:00,173][03281] Decorrelating experience for 0 frames...
[2024-06-06 12:55:01,163][00159] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-06-06 12:55:01,632][03287] Decorrelating experience for 0 frames...
[2024-06-06 12:55:01,698][03288] Decorrelating experience for 32 frames...
[2024-06-06 12:55:01,708][03282] Decorrelating experience for 32 frames...
[2024-06-06 12:55:01,928][03286] Decorrelating experience for 32 frames...
[2024-06-06 12:55:02,128][03285] Decorrelating experience for 0 frames...
[2024-06-06 12:55:02,212][03283] Decorrelating experience for 32 frames...
[2024-06-06 12:55:02,249][03284] Decorrelating experience for 0 frames...
[2024-06-06 12:55:04,056][03281] Decorrelating experience for 32 frames...
[2024-06-06 12:55:04,649][03285] Decorrelating experience for 32 frames...
[2024-06-06 12:55:04,745][03284] Decorrelating experience for 32 frames...
[2024-06-06 12:55:04,786][03287] Decorrelating experience for 32 frames...
[2024-06-06 12:55:05,346][03288] Decorrelating experience for 64 frames...
[2024-06-06 12:55:05,820][03283] Decorrelating experience for 64 frames...
[2024-06-06 12:55:06,166][00159] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-06-06 12:55:08,116][03286] Decorrelating experience for 64 frames...
[2024-06-06 12:55:08,549][03282] Decorrelating experience for 64 frames...
[2024-06-06 12:55:09,005][03281] Decorrelating experience for 64 frames...
[2024-06-06 12:55:09,384][03288] Decorrelating experience for 96 frames...
[2024-06-06 12:55:09,514][03285] Decorrelating experience for 64 frames...
[2024-06-06 12:55:09,758][03287] Decorrelating experience for 64 frames...
[2024-06-06 12:55:09,769][03284] Decorrelating experience for 64 frames...
[2024-06-06 12:55:09,855][03283] Decorrelating experience for 96 frames...
[2024-06-06 12:55:11,164][00159] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-06-06 12:55:12,377][03286] Decorrelating experience for 96 frames...
[2024-06-06 12:55:12,400][03281] Decorrelating experience for 96 frames...
[2024-06-06 12:55:12,554][03282] Decorrelating experience for 96 frames...
[2024-06-06 12:55:12,657][03287] Decorrelating experience for 96 frames...
[2024-06-06 12:55:12,972][03285] Decorrelating experience for 96 frames...
[2024-06-06 12:55:15,544][03284] Decorrelating experience for 96 frames...
[2024-06-06 12:55:16,165][00159] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 24.5. Samples: 368. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-06-06 12:55:16,167][00159] Avg episode reward: [(0, '1.631')]
[2024-06-06 12:55:17,542][03267] Signal inference workers to stop experience collection...
[2024-06-06 12:55:17,559][03280] InferenceWorker_p0-w0: stopping experience collection
[2024-06-06 12:55:19,721][03267] Signal inference workers to resume experience collection...
[2024-06-06 12:55:19,723][03280] InferenceWorker_p0-w0: resuming experience collection
[2024-06-06 12:55:21,163][00159] Fps is (10 sec: 819.3, 60 sec: 409.6, 300 sec: 409.6). Total num frames: 8192. Throughput: 0: 119.1. Samples: 2382. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2024-06-06 12:55:21,167][00159] Avg episode reward: [(0, '2.441')]
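Status lines like the ones above follow a fixed template, so throughput and frame counts can be pulled out with a small regex when post-processing a log. The helper below is a hypothetical sketch written for exactly the fields shown here; it is not part of Sample Factory's API.

```python
import re

# Matches the "Fps is (...)" summary lines emitted in this log (sketch).
FPS_RE = re.compile(
    r"Fps is \(10 sec: (?P<fps10>[\d.]+|nan), "
    r"60 sec: (?P<fps60>[\d.]+|nan), "
    r"300 sec: (?P<fps300>[\d.]+|nan)\)\. "
    r"Total num frames: (?P<frames>\d+)\."
)

def parse_fps(line):
    """Return (10-second fps, total frames) from a status line, or None."""
    m = FPS_RE.search(line)
    if not m:
        return None
    return float(m.group("fps10")), int(m.group("frames"))

line = ("[2024-06-06 12:55:21,163][00159] Fps is (10 sec: 819.3, 60 sec: 409.6, "
        "300 sec: 409.6). Total num frames: 8192. Throughput: 0: 119.1.")
print(parse_fps(line))  # (819.3, 8192)
```

Note the early lines report `nan` for all three windows: no window has elapsed yet, so the averages are undefined until frames start accumulating.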
|
[2024-06-06 12:55:26,163][00159] Fps is (10 sec: 1638.8, 60 sec: 655.4, 300 sec: 655.4). Total num frames: 16384. Throughput: 0: 186.3. Samples: 4658. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-06-06 12:55:26,167][00159] Avg episode reward: [(0, '3.222')]
[2024-06-06 12:55:31,163][00159] Fps is (10 sec: 2048.0, 60 sec: 955.7, 300 sec: 955.7). Total num frames: 28672. Throughput: 0: 203.9. Samples: 6118. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-06-06 12:55:31,169][00159] Avg episode reward: [(0, '3.594')]
[2024-06-06 12:55:35,226][03280] Updated weights for policy 0, policy_version 10 (0.0330)
[2024-06-06 12:55:36,163][00159] Fps is (10 sec: 2457.6, 60 sec: 1170.3, 300 sec: 1170.3). Total num frames: 40960. Throughput: 0: 279.8. Samples: 9794. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-06-06 12:55:36,169][00159] Avg episode reward: [(0, '3.949')]
[2024-06-06 12:55:41,163][00159] Fps is (10 sec: 2457.6, 60 sec: 1331.2, 300 sec: 1331.2). Total num frames: 53248. Throughput: 0: 340.8. Samples: 13630. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-06-06 12:55:41,165][00159] Avg episode reward: [(0, '4.225')]
[2024-06-06 12:55:46,163][00159] Fps is (10 sec: 2457.6, 60 sec: 1456.4, 300 sec: 1456.4). Total num frames: 65536. Throughput: 0: 337.2. Samples: 15176. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-06-06 12:55:46,172][00159] Avg episode reward: [(0, '4.373')]
[2024-06-06 12:55:50,953][03280] Updated weights for policy 0, policy_version 20 (0.0021)
[2024-06-06 12:55:51,163][00159] Fps is (10 sec: 2867.1, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 81920. Throughput: 0: 436.3. Samples: 19632. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-06-06 12:55:51,166][00159] Avg episode reward: [(0, '4.503')]
[2024-06-06 12:55:56,163][00159] Fps is (10 sec: 3276.8, 60 sec: 1787.3, 300 sec: 1787.3). Total num frames: 98304. Throughput: 0: 549.3. Samples: 24720. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-06-06 12:55:56,165][00159] Avg episode reward: [(0, '4.521')]
[2024-06-06 12:56:01,163][00159] Fps is (10 sec: 2457.7, 60 sec: 1774.9, 300 sec: 1774.9). Total num frames: 106496. Throughput: 0: 578.6. Samples: 26402. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-06-06 12:56:01,166][00159] Avg episode reward: [(0, '4.462')]
[2024-06-06 12:56:01,186][03267] Saving new best policy, reward=4.462!
[2024-06-06 12:56:06,163][00159] Fps is (10 sec: 1638.4, 60 sec: 1911.6, 300 sec: 1764.4). Total num frames: 114688. Throughput: 0: 592.5. Samples: 29044. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-06-06 12:56:06,165][00159] Avg episode reward: [(0, '4.336')]
[2024-06-06 12:56:07,582][03280] Updated weights for policy 0, policy_version 30 (0.0020)
[2024-06-06 12:56:11,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2184.6, 300 sec: 1872.5). Total num frames: 131072. Throughput: 0: 629.0. Samples: 32964. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-06-06 12:56:11,166][00159] Avg episode reward: [(0, '4.233')]
[2024-06-06 12:56:16,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2389.4, 300 sec: 1911.5). Total num frames: 143360. Throughput: 0: 643.6. Samples: 35080. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-06-06 12:56:16,165][00159] Avg episode reward: [(0, '4.240')]
[2024-06-06 12:56:21,163][00159] Fps is (10 sec: 1638.4, 60 sec: 2321.1, 300 sec: 1843.2). Total num frames: 147456. Throughput: 0: 623.3. Samples: 37844. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-06-06 12:56:21,168][00159] Avg episode reward: [(0, '4.243')]
[2024-06-06 12:56:24,981][03280] Updated weights for policy 0, policy_version 40 (0.0024)
[2024-06-06 12:56:26,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2457.6, 300 sec: 1927.5). Total num frames: 163840. Throughput: 0: 622.7. Samples: 41652. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-06-06 12:56:26,169][00159] Avg episode reward: [(0, '4.431')]
[2024-06-06 12:56:31,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2525.9, 300 sec: 2002.5). Total num frames: 180224. Throughput: 0: 643.9. Samples: 44150. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-06-06 12:56:31,170][00159] Avg episode reward: [(0, '4.426')]
[2024-06-06 12:56:31,180][03267] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000044_180224.pth...
[2024-06-06 12:56:36,164][00159] Fps is (10 sec: 2457.3, 60 sec: 2457.6, 300 sec: 1983.3). Total num frames: 188416. Throughput: 0: 603.6. Samples: 46796. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-06-06 12:56:36,167][00159] Avg episode reward: [(0, '4.458')]
[2024-06-06 12:56:41,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2457.6, 300 sec: 2007.0). Total num frames: 200704. Throughput: 0: 570.2. Samples: 50380. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-06-06 12:56:41,165][00159] Avg episode reward: [(0, '4.601')]
[2024-06-06 12:56:41,177][03267] Saving new best policy, reward=4.601!
[2024-06-06 12:56:41,983][03280] Updated weights for policy 0, policy_version 50 (0.0044)
[2024-06-06 12:56:46,163][00159] Fps is (10 sec: 2457.9, 60 sec: 2457.6, 300 sec: 2028.5). Total num frames: 212992. Throughput: 0: 586.3. Samples: 52784. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-06-06 12:56:46,166][00159] Avg episode reward: [(0, '4.548')]
[2024-06-06 12:56:51,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2389.4, 300 sec: 2048.0). Total num frames: 225280. Throughput: 0: 613.0. Samples: 56630. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-06-06 12:56:51,167][00159] Avg episode reward: [(0, '4.600')]
[2024-06-06 12:56:56,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2252.8, 300 sec: 2030.2). Total num frames: 233472. Throughput: 0: 575.6. Samples: 58868. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-06-06 12:56:56,170][00159] Avg episode reward: [(0, '4.639')]
[2024-06-06 12:56:56,176][03267] Saving new best policy, reward=4.639!
[2024-06-06 12:56:59,749][03280] Updated weights for policy 0, policy_version 60 (0.0015)
[2024-06-06 12:57:01,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2048.0). Total num frames: 245760. Throughput: 0: 569.2. Samples: 60692. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-06-06 12:57:01,172][00159] Avg episode reward: [(0, '4.406')]
[2024-06-06 12:57:06,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2321.1, 300 sec: 2031.6). Total num frames: 253952. Throughput: 0: 575.3. Samples: 63732. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-06-06 12:57:06,165][00159] Avg episode reward: [(0, '4.449')]
[2024-06-06 12:57:11,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2252.8, 300 sec: 2048.0). Total num frames: 266240. Throughput: 0: 565.9. Samples: 67116. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-06-06 12:57:11,166][00159] Avg episode reward: [(0, '4.320')]
[2024-06-06 12:57:16,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2252.8, 300 sec: 2063.2). Total num frames: 278528. Throughput: 0: 544.1. Samples: 68636. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2024-06-06 12:57:16,165][00159] Avg episode reward: [(0, '4.431')]
[2024-06-06 12:57:18,187][03280] Updated weights for policy 0, policy_version 70 (0.0035)
[2024-06-06 12:57:21,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2389.3, 300 sec: 2077.3). Total num frames: 290816. Throughput: 0: 579.8. Samples: 72888. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-06-06 12:57:21,169][00159] Avg episode reward: [(0, '4.576')]
[2024-06-06 12:57:26,166][00159] Fps is (10 sec: 2456.9, 60 sec: 2320.9, 300 sec: 2090.3). Total num frames: 303104. Throughput: 0: 580.2. Samples: 76492. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-06-06 12:57:26,168][00159] Avg episode reward: [(0, '4.642')]
[2024-06-06 12:57:26,176][03267] Saving new best policy, reward=4.642!
[2024-06-06 12:57:31,163][00159] Fps is (10 sec: 1638.4, 60 sec: 2116.3, 300 sec: 2048.0). Total num frames: 307200. Throughput: 0: 549.2. Samples: 77500. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-06-06 12:57:31,167][00159] Avg episode reward: [(0, '4.497')]
[2024-06-06 12:57:36,163][00159] Fps is (10 sec: 1638.9, 60 sec: 2184.6, 300 sec: 2061.2). Total num frames: 319488. Throughput: 0: 514.2. Samples: 79770. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-06-06 12:57:36,165][00159] Avg episode reward: [(0, '4.438')]
[2024-06-06 12:57:38,993][03280] Updated weights for policy 0, policy_version 80 (0.0022)
[2024-06-06 12:57:41,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2116.3, 300 sec: 2048.0). Total num frames: 327680. Throughput: 0: 540.0. Samples: 83170. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-06-06 12:57:41,176][00159] Avg episode reward: [(0, '4.247')]
[2024-06-06 12:57:46,163][00159] Fps is (10 sec: 1638.4, 60 sec: 2048.0, 300 sec: 2035.6). Total num frames: 335872. Throughput: 0: 525.4. Samples: 84334. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-06-06 12:57:46,170][00159] Avg episode reward: [(0, '4.141')]
[2024-06-06 12:57:51,163][00159] Fps is (10 sec: 1638.4, 60 sec: 1979.7, 300 sec: 2023.9). Total num frames: 344064. Throughput: 0: 508.8. Samples: 86630. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-06-06 12:57:51,174][00159] Avg episode reward: [(0, '4.154')]
[2024-06-06 12:57:56,163][00159] Fps is (10 sec: 2457.7, 60 sec: 2116.3, 300 sec: 2059.7). Total num frames: 360448. Throughput: 0: 507.0. Samples: 89932. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-06-06 12:57:56,165][00159] Avg episode reward: [(0, '4.162')]
[2024-06-06 12:57:58,964][03280] Updated weights for policy 0, policy_version 90 (0.0034)
[2024-06-06 12:58:01,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2116.3, 300 sec: 2070.8). Total num frames: 372736. Throughput: 0: 522.5. Samples: 92150. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-06-06 12:58:01,171][00159] Avg episode reward: [(0, '4.346')]
[2024-06-06 12:58:06,165][00159] Fps is (10 sec: 2457.1, 60 sec: 2184.5, 300 sec: 2081.2). Total num frames: 385024. Throughput: 0: 523.6. Samples: 96452. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-06-06 12:58:06,167][00159] Avg episode reward: [(0, '4.427')]
[2024-06-06 12:58:11,169][00159] Fps is (10 sec: 2046.7, 60 sec: 2116.0, 300 sec: 2069.5). Total num frames: 393216. Throughput: 0: 505.3. Samples: 99232. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-06-06 12:58:11,172][00159] Avg episode reward: [(0, '4.627')]
[2024-06-06 12:58:15,322][03280] Updated weights for policy 0, policy_version 100 (0.0058)
[2024-06-06 12:58:16,163][00159] Fps is (10 sec: 2458.1, 60 sec: 2184.5, 300 sec: 2100.5). Total num frames: 409600. Throughput: 0: 529.7. Samples: 101336. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-06-06 12:58:16,165][00159] Avg episode reward: [(0, '4.607')]
[2024-06-06 12:58:21,163][00159] Fps is (10 sec: 2869.0, 60 sec: 2184.5, 300 sec: 2109.4). Total num frames: 421888. Throughput: 0: 573.2. Samples: 105562. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-06-06 12:58:21,166][00159] Avg episode reward: [(0, '4.659')]
[2024-06-06 12:58:21,177][03267] Saving new best policy, reward=4.659!
|
[2024-06-06 12:58:26,168][00159] Fps is (10 sec: 1637.6, 60 sec: 2047.9, 300 sec: 2077.9). Total num frames: 425984. Throughput: 0: 531.9. Samples: 107108. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-06-06 12:58:26,171][00159] Avg episode reward: [(0, '4.647')]
[2024-06-06 12:58:31,163][00159] Fps is (10 sec: 1638.4, 60 sec: 2184.5, 300 sec: 2087.0). Total num frames: 438272. Throughput: 0: 537.6. Samples: 108524. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-06-06 12:58:31,166][00159] Avg episode reward: [(0, '4.576')]
[2024-06-06 12:58:31,177][03267] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000107_438272.pth...
[2024-06-06 12:58:35,285][03280] Updated weights for policy 0, policy_version 110 (0.0035)
[2024-06-06 12:58:36,163][00159] Fps is (10 sec: 2458.8, 60 sec: 2184.5, 300 sec: 2095.6). Total num frames: 450560. Throughput: 0: 571.1. Samples: 112330. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-06-06 12:58:36,165][00159] Avg episode reward: [(0, '4.530')]
[2024-06-06 12:58:41,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2252.8, 300 sec: 2103.9). Total num frames: 462848. Throughput: 0: 592.2. Samples: 116582. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-06-06 12:58:41,166][00159] Avg episode reward: [(0, '4.454')]
[2024-06-06 12:58:46,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2321.1, 300 sec: 2111.7). Total num frames: 475136. Throughput: 0: 577.5. Samples: 118136. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-06-06 12:58:46,165][00159] Avg episode reward: [(0, '4.382')]
[2024-06-06 12:58:50,499][03280] Updated weights for policy 0, policy_version 120 (0.0042)
[2024-06-06 12:58:51,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2457.6, 300 sec: 2137.0). Total num frames: 491520. Throughput: 0: 577.2. Samples: 122426. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-06-06 12:58:51,166][00159] Avg episode reward: [(0, '4.568')]
[2024-06-06 12:58:56,163][00159] Fps is (10 sec: 3276.7, 60 sec: 2457.6, 300 sec: 2161.3). Total num frames: 507904. Throughput: 0: 628.4. Samples: 127508. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-06-06 12:58:56,166][00159] Avg episode reward: [(0, '4.803')]
[2024-06-06 12:58:56,172][03267] Saving new best policy, reward=4.803!
[2024-06-06 12:59:01,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2457.6, 300 sec: 2167.5). Total num frames: 520192. Throughput: 0: 620.9. Samples: 129276. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-06-06 12:59:01,167][00159] Avg episode reward: [(0, '4.857')]
[2024-06-06 12:59:01,183][03267] Saving new best policy, reward=4.857!
[2024-06-06 12:59:05,788][03280] Updated weights for policy 0, policy_version 130 (0.0026)
[2024-06-06 12:59:06,163][00159] Fps is (10 sec: 2457.7, 60 sec: 2457.7, 300 sec: 2173.4). Total num frames: 532480. Throughput: 0: 597.7. Samples: 132458. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-06-06 12:59:06,165][00159] Avg episode reward: [(0, '4.948')]
[2024-06-06 12:59:06,172][03267] Saving new best policy, reward=4.948!
[2024-06-06 12:59:11,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2594.4, 300 sec: 2195.5). Total num frames: 548864. Throughput: 0: 673.7. Samples: 137422. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-06-06 12:59:11,166][00159] Avg episode reward: [(0, '4.860')]
[2024-06-06 12:59:16,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2525.9, 300 sec: 2200.6). Total num frames: 561152. Throughput: 0: 697.6. Samples: 139918. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-06-06 12:59:16,166][00159] Avg episode reward: [(0, '4.732')]
[2024-06-06 12:59:20,828][03280] Updated weights for policy 0, policy_version 140 (0.0023)
[2024-06-06 12:59:21,164][00159] Fps is (10 sec: 2457.3, 60 sec: 2525.8, 300 sec: 2205.5). Total num frames: 573440. Throughput: 0: 685.1. Samples: 143160. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-06-06 12:59:21,169][00159] Avg episode reward: [(0, '4.764')]
[2024-06-06 12:59:26,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2730.9, 300 sec: 2225.8). Total num frames: 589824. Throughput: 0: 686.4. Samples: 147472. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-06-06 12:59:26,169][00159] Avg episode reward: [(0, '4.853')]
[2024-06-06 12:59:31,163][00159] Fps is (10 sec: 2867.6, 60 sec: 2730.7, 300 sec: 2230.0). Total num frames: 602112. Throughput: 0: 708.0. Samples: 149998. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-06-06 12:59:31,172][00159] Avg episode reward: [(0, '4.867')]
[2024-06-06 12:59:35,161][03280] Updated weights for policy 0, policy_version 150 (0.0033)
[2024-06-06 12:59:36,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2730.7, 300 sec: 2234.2). Total num frames: 614400. Throughput: 0: 689.6. Samples: 153458. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-06-06 12:59:36,171][00159] Avg episode reward: [(0, '4.983')]
[2024-06-06 12:59:36,173][03267] Saving new best policy, reward=4.983!
[2024-06-06 12:59:41,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2730.7, 300 sec: 2238.2). Total num frames: 626688. Throughput: 0: 645.5. Samples: 156556. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-06-06 12:59:41,171][00159] Avg episode reward: [(0, '5.201')]
[2024-06-06 12:59:41,183][03267] Saving new best policy, reward=5.201!
[2024-06-06 12:59:46,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2798.9, 300 sec: 2256.4). Total num frames: 643072. Throughput: 0: 663.2. Samples: 159120. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-06-06 12:59:46,171][00159] Avg episode reward: [(0, '5.199')]
[2024-06-06 12:59:49,440][03280] Updated weights for policy 0, policy_version 160 (0.0023)
[2024-06-06 12:59:51,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2798.9, 300 sec: 2274.0). Total num frames: 659456. Throughput: 0: 706.9. Samples: 164268. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-06-06 12:59:51,166][00159] Avg episode reward: [(0, '4.979')]
[2024-06-06 12:59:56,164][00159] Fps is (10 sec: 2457.3, 60 sec: 2662.3, 300 sec: 2263.2). Total num frames: 667648. Throughput: 0: 670.6. Samples: 167598. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-06-06 12:59:56,173][00159] Avg episode reward: [(0, '4.648')]
[2024-06-06 13:00:01,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2730.7, 300 sec: 2318.8). Total num frames: 684032. Throughput: 0: 652.0. Samples: 169260. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-06-06 13:00:01,170][00159] Avg episode reward: [(0, '4.594')]
[2024-06-06 13:00:04,635][03280] Updated weights for policy 0, policy_version 170 (0.0025)
[2024-06-06 13:00:06,163][00159] Fps is (10 sec: 3277.3, 60 sec: 2798.9, 300 sec: 2374.3). Total num frames: 700416. Throughput: 0: 692.7. Samples: 174332. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-06-06 13:00:06,170][00159] Avg episode reward: [(0, '4.749')]
[2024-06-06 13:00:11,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2730.7, 300 sec: 2416.0). Total num frames: 712704. Throughput: 0: 691.4. Samples: 178584. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-06-06 13:00:11,167][00159] Avg episode reward: [(0, '4.979')]
[2024-06-06 13:00:16,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2662.4, 300 sec: 2415.9). Total num frames: 720896. Throughput: 0: 667.4. Samples: 180030. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-06-06 13:00:16,165][00159] Avg episode reward: [(0, '4.935')]
[2024-06-06 13:00:20,520][03280] Updated weights for policy 0, policy_version 180 (0.0029)
[2024-06-06 13:00:21,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2730.7, 300 sec: 2443.7). Total num frames: 737280. Throughput: 0: 683.1. Samples: 184198. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-06-06 13:00:21,172][00159] Avg episode reward: [(0, '4.819')]
[2024-06-06 13:00:26,163][00159] Fps is (10 sec: 3276.7, 60 sec: 2730.7, 300 sec: 2457.6). Total num frames: 753664. Throughput: 0: 721.4. Samples: 189020. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-06-06 13:00:26,166][00159] Avg episode reward: [(0, '4.698')]
[2024-06-06 13:00:31,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2730.7, 300 sec: 2457.6). Total num frames: 765952. Throughput: 0: 705.2. Samples: 190856. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-06-06 13:00:31,165][00159] Avg episode reward: [(0, '4.659')]
[2024-06-06 13:00:31,188][03267] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000187_765952.pth...
[2024-06-06 13:00:31,338][03267] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000044_180224.pth
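The save/remove pair above shows a keep-latest-N rotation: after checkpoint 187 is written, the oldest remaining checkpoint (44) is deleted while 107 and 187 survive. A minimal sketch of that policy, assuming the `checkpoint_<version>_<frames>.pth` naming visible in the log (`rotate_checkpoints` is a hypothetical helper, not Sample Factory's API):

```python
import os
import re

# Filenames look like checkpoint_000000044_180224.pth: version, then frame count.
CKPT_RE = re.compile(r"checkpoint_(\d+)_(\d+)\.pth$")

def rotate_checkpoints(paths, keep=2):
    """Return the checkpoint paths to delete, keeping the `keep` newest
    (ordered by the policy version encoded in the filename)."""
    def version(path):
        m = CKPT_RE.search(os.path.basename(path))
        return int(m.group(1)) if m else -1
    ordered = sorted(paths, key=version)
    return ordered[:-keep] if len(ordered) > keep else []

ckpts = [
    "checkpoint_000000044_180224.pth",
    "checkpoint_000000107_438272.pth",
    "checkpoint_000000187_765952.pth",
]
print(rotate_checkpoints(ckpts))  # ['checkpoint_000000044_180224.pth']
```

With `keep=2` this reproduces the behavior logged here: once the third checkpoint lands, the first is removed.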
|
[2024-06-06 13:00:35,501][03280] Updated weights for policy 0, policy_version 190 (0.0017)
[2024-06-06 13:00:36,163][00159] Fps is (10 sec: 2457.7, 60 sec: 2730.7, 300 sec: 2457.6). Total num frames: 778240. Throughput: 0: 661.6. Samples: 194040. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-06-06 13:00:36,170][00159] Avg episode reward: [(0, '4.872')]
[2024-06-06 13:00:41,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2798.9, 300 sec: 2471.5). Total num frames: 794624. Throughput: 0: 698.1. Samples: 199012. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-06-06 13:00:41,170][00159] Avg episode reward: [(0, '5.067')]
[2024-06-06 13:00:46,164][00159] Fps is (10 sec: 3276.4, 60 sec: 2798.9, 300 sec: 2471.5). Total num frames: 811008. Throughput: 0: 717.8. Samples: 201560. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-06-06 13:00:46,170][00159] Avg episode reward: [(0, '5.205')]
[2024-06-06 13:00:46,175][03267] Saving new best policy, reward=5.205!
[2024-06-06 13:00:50,057][03280] Updated weights for policy 0, policy_version 200 (0.0023)
[2024-06-06 13:00:51,166][00159] Fps is (10 sec: 2456.8, 60 sec: 2662.3, 300 sec: 2443.7). Total num frames: 819200. Throughput: 0: 677.2. Samples: 204810. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-06-06 13:00:51,169][00159] Avg episode reward: [(0, '5.100')]
[2024-06-06 13:00:56,163][00159] Fps is (10 sec: 2457.9, 60 sec: 2799.0, 300 sec: 2471.5). Total num frames: 835584. Throughput: 0: 678.3. Samples: 209108. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-06-06 13:00:56,165][00159] Avg episode reward: [(0, '5.017')]
[2024-06-06 13:01:01,165][00159] Fps is (10 sec: 3277.2, 60 sec: 2798.8, 300 sec: 2499.2). Total num frames: 851968. Throughput: 0: 702.8. Samples: 211656. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-06-06 13:01:01,168][00159] Avg episode reward: [(0, '5.309')]
[2024-06-06 13:01:01,185][03267] Saving new best policy, reward=5.309!
[2024-06-06 13:01:03,653][03280] Updated weights for policy 0, policy_version 210 (0.0026)
[2024-06-06 13:01:06,163][00159] Fps is (10 sec: 2867.1, 60 sec: 2730.7, 300 sec: 2485.4). Total num frames: 864256. Throughput: 0: 702.4. Samples: 215806. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-06-06 13:01:06,171][00159] Avg episode reward: [(0, '5.162')]
[2024-06-06 13:01:11,163][00159] Fps is (10 sec: 2458.1, 60 sec: 2730.7, 300 sec: 2485.4). Total num frames: 876544. Throughput: 0: 666.7. Samples: 219020. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
|
[2024-06-06 13:01:11,169][00159] Avg episode reward: [(0, '5.173')] |
|
[2024-06-06 13:01:16,163][00159] Fps is (10 sec: 2867.3, 60 sec: 2867.2, 300 sec: 2527.0). Total num frames: 892928. Throughput: 0: 682.8. Samples: 221584. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:01:16,165][00159] Avg episode reward: [(0, '5.007')] |
|
[2024-06-06 13:01:18,126][03280] Updated weights for policy 0, policy_version 220 (0.0028) |
|
[2024-06-06 13:01:21,168][00159] Fps is (10 sec: 3275.1, 60 sec: 2866.9, 300 sec: 2527.0). Total num frames: 909312. Throughput: 0: 725.8. Samples: 226706. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:01:21,173][00159] Avg episode reward: [(0, '5.087')] |
|
[2024-06-06 13:01:26,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2730.7, 300 sec: 2499.3). Total num frames: 917504. Throughput: 0: 687.6. Samples: 229954. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:01:26,170][00159] Avg episode reward: [(0, '5.313')] |
|
[2024-06-06 13:01:26,174][03267] Saving new best policy, reward=5.313! |
|
[2024-06-06 13:01:31,163][00159] Fps is (10 sec: 2458.9, 60 sec: 2798.9, 300 sec: 2527.0). Total num frames: 933888. Throughput: 0: 669.2. Samples: 231672. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:01:31,169][00159] Avg episode reward: [(0, '5.624')] |
|
[2024-06-06 13:01:31,180][03267] Saving new best policy, reward=5.624! |
|
[2024-06-06 13:01:33,577][03280] Updated weights for policy 0, policy_version 230 (0.0042) |
|
[2024-06-06 13:01:36,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2867.2, 300 sec: 2540.9). Total num frames: 950272. Throughput: 0: 707.7. Samples: 236656. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:01:36,170][00159] Avg episode reward: [(0, '5.774')] |
|
[2024-06-06 13:01:36,176][03267] Saving new best policy, reward=5.774! |
|
[2024-06-06 13:01:41,166][00159] Fps is (10 sec: 2866.2, 60 sec: 2798.8, 300 sec: 2540.9). Total num frames: 962560. Throughput: 0: 705.0. Samples: 240836. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:01:41,169][00159] Avg episode reward: [(0, '5.628')] |
|
[2024-06-06 13:01:46,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2662.5, 300 sec: 2527.0). Total num frames: 970752. Throughput: 0: 682.2. Samples: 242352. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:01:46,165][00159] Avg episode reward: [(0, '5.484')] |
|
[2024-06-06 13:01:48,814][03280] Updated weights for policy 0, policy_version 240 (0.0022) |
|
[2024-06-06 13:01:51,163][00159] Fps is (10 sec: 2458.4, 60 sec: 2799.1, 300 sec: 2554.8). Total num frames: 987136. Throughput: 0: 683.2. Samples: 246548. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:01:51,166][00159] Avg episode reward: [(0, '5.560')] |
|
[2024-06-06 13:01:56,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2798.9, 300 sec: 2568.7). Total num frames: 1003520. Throughput: 0: 725.7. Samples: 251678. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:01:56,166][00159] Avg episode reward: [(0, '5.800')] |
|
[2024-06-06 13:01:56,195][03267] Saving new best policy, reward=5.800! |
|
[2024-06-06 13:02:01,166][00159] Fps is (10 sec: 2866.2, 60 sec: 2730.6, 300 sec: 2582.5). Total num frames: 1015808. Throughput: 0: 708.5. Samples: 253468. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:02:01,174][00159] Avg episode reward: [(0, '6.108')] |
|
[2024-06-06 13:02:01,186][03267] Saving new best policy, reward=6.108! |
|
[2024-06-06 13:02:05,037][03280] Updated weights for policy 0, policy_version 250 (0.0036) |
|
[2024-06-06 13:02:06,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2662.4, 300 sec: 2568.7). Total num frames: 1024000. Throughput: 0: 654.9. Samples: 256174. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:02:06,165][00159] Avg episode reward: [(0, '6.281')] |
|
[2024-06-06 13:02:06,169][03267] Saving new best policy, reward=6.281! |
|
[2024-06-06 13:02:11,163][00159] Fps is (10 sec: 2048.7, 60 sec: 2662.4, 300 sec: 2568.7). Total num frames: 1036288. Throughput: 0: 646.4. Samples: 259044. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:02:11,166][00159] Avg episode reward: [(0, '6.440')] |
|
[2024-06-06 13:02:11,178][03267] Saving new best policy, reward=6.440! |
|
[2024-06-06 13:02:16,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2594.1, 300 sec: 2568.7). Total num frames: 1048576. Throughput: 0: 655.5. Samples: 261170. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:02:16,167][00159] Avg episode reward: [(0, '6.484')] |
|
[2024-06-06 13:02:16,174][03267] Saving new best policy, reward=6.484! |
|
[2024-06-06 13:02:21,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2526.1, 300 sec: 2568.7). Total num frames: 1060864. Throughput: 0: 633.0. Samples: 265142. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:02:21,165][00159] Avg episode reward: [(0, '6.510')] |
|
[2024-06-06 13:02:21,184][03267] Saving new best policy, reward=6.510! |
|
[2024-06-06 13:02:22,225][03280] Updated weights for policy 0, policy_version 260 (0.0058) |
|
[2024-06-06 13:02:26,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2662.4, 300 sec: 2610.3). Total num frames: 1077248. Throughput: 0: 631.6. Samples: 269256. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:02:26,171][00159] Avg episode reward: [(0, '6.128')] |
|
[2024-06-06 13:02:31,163][00159] Fps is (10 sec: 3686.4, 60 sec: 2730.7, 300 sec: 2638.1). Total num frames: 1097728. Throughput: 0: 659.1. Samples: 272012. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:02:31,170][00159] Avg episode reward: [(0, '5.809')] |
|
[2024-06-06 13:02:31,181][03267] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000268_1097728.pth... |
|
[2024-06-06 13:02:31,309][03267] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000107_438272.pth |
|
[2024-06-06 13:02:33,394][03280] Updated weights for policy 0, policy_version 270 (0.0038) |
|
[2024-06-06 13:02:36,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2662.4, 300 sec: 2652.0). Total num frames: 1110016. Throughput: 0: 687.4. Samples: 277482. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:02:36,165][00159] Avg episode reward: [(0, '5.927')] |
|
[2024-06-06 13:02:41,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2662.6, 300 sec: 2665.9). Total num frames: 1122304. Throughput: 0: 652.7. Samples: 281048. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:02:41,165][00159] Avg episode reward: [(0, '6.138')] |
|
[2024-06-06 13:02:46,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2798.9, 300 sec: 2693.6). Total num frames: 1138688. Throughput: 0: 665.6. Samples: 283418. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:02:46,166][00159] Avg episode reward: [(0, '6.177')] |
|
[2024-06-06 13:02:47,430][03280] Updated weights for policy 0, policy_version 280 (0.0018) |
|
[2024-06-06 13:02:51,163][00159] Fps is (10 sec: 3276.7, 60 sec: 2798.9, 300 sec: 2693.6). Total num frames: 1155072. Throughput: 0: 711.1. Samples: 288174. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:02:51,171][00159] Avg episode reward: [(0, '6.408')] |
|
[2024-06-06 13:02:56,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2662.4, 300 sec: 2679.8). Total num frames: 1163264. Throughput: 0: 728.4. Samples: 291824. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:02:56,168][00159] Avg episode reward: [(0, '6.507')] |
|
[2024-06-06 13:03:01,166][00159] Fps is (10 sec: 1637.9, 60 sec: 2594.1, 300 sec: 2665.9). Total num frames: 1171456. Throughput: 0: 693.7. Samples: 292388. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-06-06 13:03:01,168][00159] Avg episode reward: [(0, '6.580')] |
|
[2024-06-06 13:03:01,186][03267] Saving new best policy, reward=6.580! |
|
[2024-06-06 13:03:06,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2662.4, 300 sec: 2679.8). Total num frames: 1183744. Throughput: 0: 669.8. Samples: 295284. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:03:06,170][00159] Avg episode reward: [(0, '6.953')] |
|
[2024-06-06 13:03:06,176][03267] Saving new best policy, reward=6.953! |
|
[2024-06-06 13:03:08,102][03280] Updated weights for policy 0, policy_version 290 (0.0034) |
|
[2024-06-06 13:03:11,163][00159] Fps is (10 sec: 2048.7, 60 sec: 2594.1, 300 sec: 2652.0). Total num frames: 1191936. Throughput: 0: 651.8. Samples: 298586. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:03:11,170][00159] Avg episode reward: [(0, '7.150')] |
|
[2024-06-06 13:03:11,181][03267] Saving new best policy, reward=7.150! |
|
[2024-06-06 13:03:16,165][00159] Fps is (10 sec: 1228.5, 60 sec: 2457.5, 300 sec: 2624.2). Total num frames: 1196032. Throughput: 0: 615.3. Samples: 299700. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:03:16,171][00159] Avg episode reward: [(0, '7.039')] |
|
[2024-06-06 13:03:21,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2525.9, 300 sec: 2665.9). Total num frames: 1212416. Throughput: 0: 557.3. Samples: 302560. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:03:21,170][00159] Avg episode reward: [(0, '7.436')] |
|
[2024-06-06 13:03:21,180][03267] Saving new best policy, reward=7.436! |
|
[2024-06-06 13:03:26,163][00159] Fps is (10 sec: 2867.8, 60 sec: 2457.6, 300 sec: 2665.9). Total num frames: 1224704. Throughput: 0: 571.5. Samples: 306766. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:03:26,169][00159] Avg episode reward: [(0, '7.624')] |
|
[2024-06-06 13:03:26,178][03267] Saving new best policy, reward=7.624! |
|
[2024-06-06 13:03:26,701][03280] Updated weights for policy 0, policy_version 300 (0.0027) |
|
[2024-06-06 13:03:31,165][00159] Fps is (10 sec: 2047.5, 60 sec: 2252.7, 300 sec: 2652.0). Total num frames: 1232896. Throughput: 0: 556.6. Samples: 308464. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:03:31,171][00159] Avg episode reward: [(0, '7.556')] |
|
[2024-06-06 13:03:36,163][00159] Fps is (10 sec: 1638.4, 60 sec: 2184.5, 300 sec: 2638.1). Total num frames: 1241088. Throughput: 0: 488.8. Samples: 310168. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:03:36,168][00159] Avg episode reward: [(0, '7.882')] |
|
[2024-06-06 13:03:36,176][03267] Saving new best policy, reward=7.882! |
|
[2024-06-06 13:03:41,163][00159] Fps is (10 sec: 1638.8, 60 sec: 2116.3, 300 sec: 2624.2). Total num frames: 1249280. Throughput: 0: 465.7. Samples: 312782. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:03:41,168][00159] Avg episode reward: [(0, '8.001')] |
|
[2024-06-06 13:03:41,179][03267] Saving new best policy, reward=8.001! |
|
[2024-06-06 13:03:46,163][00159] Fps is (10 sec: 1638.4, 60 sec: 1979.7, 300 sec: 2596.4). Total num frames: 1257472. Throughput: 0: 484.9. Samples: 314206. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:03:46,169][00159] Avg episode reward: [(0, '8.392')] |
|
[2024-06-06 13:03:46,174][03267] Saving new best policy, reward=8.392! |
|
[2024-06-06 13:03:51,163][00159] Fps is (10 sec: 2047.9, 60 sec: 1911.5, 300 sec: 2582.6). Total num frames: 1269760. Throughput: 0: 482.2. Samples: 316982. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:03:51,168][00159] Avg episode reward: [(0, '8.622')] |
|
[2024-06-06 13:03:51,178][03280] Updated weights for policy 0, policy_version 310 (0.0057) |
|
[2024-06-06 13:03:51,179][03267] Saving new best policy, reward=8.622! |
|
[2024-06-06 13:03:56,163][00159] Fps is (10 sec: 2048.0, 60 sec: 1911.5, 300 sec: 2568.7). Total num frames: 1277952. Throughput: 0: 482.2. Samples: 320284. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:03:56,168][00159] Avg episode reward: [(0, '9.328')] |
|
[2024-06-06 13:03:56,270][03267] Saving new best policy, reward=9.328! |
|
[2024-06-06 13:04:01,163][00159] Fps is (10 sec: 2048.1, 60 sec: 1979.8, 300 sec: 2568.7). Total num frames: 1290240. Throughput: 0: 502.6. Samples: 322314. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:04:01,166][00159] Avg episode reward: [(0, '9.604')] |
|
[2024-06-06 13:04:01,176][03267] Saving new best policy, reward=9.604! |
|
[2024-06-06 13:04:06,163][00159] Fps is (10 sec: 2457.6, 60 sec: 1979.7, 300 sec: 2554.8). Total num frames: 1302528. Throughput: 0: 519.7. Samples: 325948. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:04:06,166][00159] Avg episode reward: [(0, '9.371')] |
|
[2024-06-06 13:04:09,827][03280] Updated weights for policy 0, policy_version 320 (0.0030) |
|
[2024-06-06 13:04:11,163][00159] Fps is (10 sec: 2048.0, 60 sec: 1979.7, 300 sec: 2540.9). Total num frames: 1310720. Throughput: 0: 479.1. Samples: 328326. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:04:11,168][00159] Avg episode reward: [(0, '9.875')] |
|
[2024-06-06 13:04:11,190][03267] Saving new best policy, reward=9.875! |
|
[2024-06-06 13:04:16,163][00159] Fps is (10 sec: 1638.4, 60 sec: 2048.1, 300 sec: 2527.0). Total num frames: 1318912. Throughput: 0: 453.4. Samples: 328866. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:04:16,171][00159] Avg episode reward: [(0, '10.262')] |
|
[2024-06-06 13:04:16,180][03267] Saving new best policy, reward=10.262! |
|
[2024-06-06 13:04:21,163][00159] Fps is (10 sec: 1638.4, 60 sec: 1911.5, 300 sec: 2499.3). Total num frames: 1327104. Throughput: 0: 493.6. Samples: 332382. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:04:21,170][00159] Avg episode reward: [(0, '9.875')] |
|
[2024-06-06 13:04:26,164][00159] Fps is (10 sec: 2047.8, 60 sec: 1911.4, 300 sec: 2499.2). Total num frames: 1339392. Throughput: 0: 490.7. Samples: 334866. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) |
|
[2024-06-06 13:04:26,168][00159] Avg episode reward: [(0, '9.995')] |
|
[2024-06-06 13:04:31,163][00159] Fps is (10 sec: 2048.0, 60 sec: 1911.5, 300 sec: 2485.4). Total num frames: 1347584. Throughput: 0: 491.2. Samples: 336312. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-06-06 13:04:31,165][00159] Avg episode reward: [(0, '10.575')] |
|
[2024-06-06 13:04:31,176][03267] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000329_1347584.pth... |
|
[2024-06-06 13:04:31,311][03267] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000187_765952.pth |
|
[2024-06-06 13:04:31,331][03267] Saving new best policy, reward=10.575! |
|
[2024-06-06 13:04:31,647][03280] Updated weights for policy 0, policy_version 330 (0.0027) |
|
[2024-06-06 13:04:36,163][00159] Fps is (10 sec: 2048.2, 60 sec: 1979.7, 300 sec: 2485.4). Total num frames: 1359872. Throughput: 0: 504.9. Samples: 339702. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) |
|
[2024-06-06 13:04:36,167][00159] Avg episode reward: [(0, '10.546')] |
|
[2024-06-06 13:04:41,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2048.0, 300 sec: 2471.5). Total num frames: 1372160. Throughput: 0: 526.5. Samples: 343976. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-06-06 13:04:41,167][00159] Avg episode reward: [(0, '10.597')] |
|
[2024-06-06 13:04:41,190][03267] Saving new best policy, reward=10.597! |
|
[2024-06-06 13:04:46,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2116.3, 300 sec: 2457.6). Total num frames: 1384448. Throughput: 0: 518.7. Samples: 345654. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-06-06 13:04:46,169][00159] Avg episode reward: [(0, '10.712')] |
|
[2024-06-06 13:04:46,177][03267] Saving new best policy, reward=10.712! |
|
[2024-06-06 13:04:49,960][03280] Updated weights for policy 0, policy_version 340 (0.0023) |
|
[2024-06-06 13:04:51,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2048.0, 300 sec: 2457.6). Total num frames: 1392640. Throughput: 0: 500.5. Samples: 348472. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:04:51,165][00159] Avg episode reward: [(0, '10.867')] |
|
[2024-06-06 13:04:51,179][03267] Saving new best policy, reward=10.867! |
|
[2024-06-06 13:04:56,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2116.3, 300 sec: 2443.7). Total num frames: 1404928. Throughput: 0: 517.1. Samples: 351594. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:04:56,165][00159] Avg episode reward: [(0, '10.668')] |
|
[2024-06-06 13:05:01,170][00159] Fps is (10 sec: 2865.1, 60 sec: 2184.3, 300 sec: 2443.7). Total num frames: 1421312. Throughput: 0: 557.6. Samples: 353960. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:05:01,173][00159] Avg episode reward: [(0, '11.072')] |
|
[2024-06-06 13:05:01,187][03267] Saving new best policy, reward=11.072! |
|
[2024-06-06 13:05:06,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2116.3, 300 sec: 2429.8). Total num frames: 1429504. Throughput: 0: 555.2. Samples: 357368. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:05:06,169][00159] Avg episode reward: [(0, '11.663')] |
|
[2024-06-06 13:05:06,171][03267] Saving new best policy, reward=11.663! |
|
[2024-06-06 13:05:07,369][03280] Updated weights for policy 0, policy_version 350 (0.0040) |
|
[2024-06-06 13:05:11,165][00159] Fps is (10 sec: 2458.8, 60 sec: 2252.7, 300 sec: 2457.6). Total num frames: 1445888. Throughput: 0: 587.9. Samples: 361322. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:05:11,168][00159] Avg episode reward: [(0, '12.061')] |
|
[2024-06-06 13:05:11,179][03267] Saving new best policy, reward=12.061! |
|
[2024-06-06 13:05:16,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2321.1, 300 sec: 2443.7). Total num frames: 1458176. Throughput: 0: 602.9. Samples: 363444. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:05:16,165][00159] Avg episode reward: [(0, '11.393')] |
|
[2024-06-06 13:05:21,164][00159] Fps is (10 sec: 2048.3, 60 sec: 2321.0, 300 sec: 2415.9). Total num frames: 1466368. Throughput: 0: 602.6. Samples: 366820. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:05:21,172][00159] Avg episode reward: [(0, '11.221')] |
|
[2024-06-06 13:05:25,519][03280] Updated weights for policy 0, policy_version 360 (0.0034) |
|
[2024-06-06 13:05:26,163][00159] Fps is (10 sec: 1638.4, 60 sec: 2252.8, 300 sec: 2402.1). Total num frames: 1474560. Throughput: 0: 554.5. Samples: 368928. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) |
|
[2024-06-06 13:05:26,170][00159] Avg episode reward: [(0, '10.681')] |
|
[2024-06-06 13:05:31,163][00159] Fps is (10 sec: 2457.9, 60 sec: 2389.3, 300 sec: 2415.9). Total num frames: 1490944. Throughput: 0: 561.0. Samples: 370900. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:05:31,166][00159] Avg episode reward: [(0, '10.672')] |
|
[2024-06-06 13:05:36,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2457.6, 300 sec: 2415.9). Total num frames: 1507328. Throughput: 0: 607.1. Samples: 375790. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:05:36,168][00159] Avg episode reward: [(0, '10.504')] |
|
[2024-06-06 13:05:40,029][03280] Updated weights for policy 0, policy_version 370 (0.0027) |
|
[2024-06-06 13:05:41,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2389.3, 300 sec: 2388.2). Total num frames: 1515520. Throughput: 0: 616.1. Samples: 379320. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:05:41,166][00159] Avg episode reward: [(0, '10.329')] |
|
[2024-06-06 13:05:46,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2389.3, 300 sec: 2402.1). Total num frames: 1527808. Throughput: 0: 596.9. Samples: 380818. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:05:46,166][00159] Avg episode reward: [(0, '10.893')] |
|
[2024-06-06 13:05:51,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2525.9, 300 sec: 2402.1). Total num frames: 1544192. Throughput: 0: 629.6. Samples: 385702. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:05:51,168][00159] Avg episode reward: [(0, '10.824')] |
|
[2024-06-06 13:05:53,682][03280] Updated weights for policy 0, policy_version 380 (0.0015) |
|
[2024-06-06 13:05:56,163][00159] Fps is (10 sec: 3276.7, 60 sec: 2594.1, 300 sec: 2402.1). Total num frames: 1560576. Throughput: 0: 647.4. Samples: 390452. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:05:56,167][00159] Avg episode reward: [(0, '11.064')] |
|
[2024-06-06 13:06:01,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2457.9, 300 sec: 2388.2). Total num frames: 1568768. Throughput: 0: 633.5. Samples: 391952. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:06:01,173][00159] Avg episode reward: [(0, '10.839')] |
|
[2024-06-06 13:06:06,163][00159] Fps is (10 sec: 2457.7, 60 sec: 2594.1, 300 sec: 2402.1). Total num frames: 1585152. Throughput: 0: 629.6. Samples: 395150. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-06-06 13:06:06,166][00159] Avg episode reward: [(0, '10.967')] |
|
[2024-06-06 13:06:09,751][03280] Updated weights for policy 0, policy_version 390 (0.0037) |
|
[2024-06-06 13:06:11,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2594.2, 300 sec: 2402.1). Total num frames: 1601536. Throughput: 0: 695.9. Samples: 400244. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:06:11,170][00159] Avg episode reward: [(0, '10.182')] |
|
[2024-06-06 13:06:16,163][00159] Fps is (10 sec: 2457.5, 60 sec: 2525.9, 300 sec: 2374.3). Total num frames: 1609728. Throughput: 0: 697.5. Samples: 402288. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:06:16,168][00159] Avg episode reward: [(0, '10.012')] |
|
[2024-06-06 13:06:21,163][00159] Fps is (10 sec: 1638.4, 60 sec: 2525.9, 300 sec: 2374.3). Total num frames: 1617920. Throughput: 0: 642.0. Samples: 404678. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:06:21,169][00159] Avg episode reward: [(0, '9.667')] |
|
[2024-06-06 13:06:26,163][00159] Fps is (10 sec: 2048.1, 60 sec: 2594.1, 300 sec: 2360.4). Total num frames: 1630208. Throughput: 0: 647.4. Samples: 408454. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:06:26,169][00159] Avg episode reward: [(0, '9.739')] |
|
[2024-06-06 13:06:29,465][03280] Updated weights for policy 0, policy_version 400 (0.0045) |
|
[2024-06-06 13:06:31,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2457.6, 300 sec: 2332.6). Total num frames: 1638400. Throughput: 0: 642.0. Samples: 409710. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:06:31,165][00159] Avg episode reward: [(0, '9.847')] |
|
[2024-06-06 13:06:31,183][03267] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000400_1638400.pth... |
|
[2024-06-06 13:06:31,412][03267] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000268_1097728.pth |
|
[2024-06-06 13:06:36,164][00159] Fps is (10 sec: 1638.1, 60 sec: 2321.0, 300 sec: 2318.8). Total num frames: 1646592. Throughput: 0: 587.7. Samples: 412150. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:06:36,167][00159] Avg episode reward: [(0, '10.164')] |
|
[2024-06-06 13:06:41,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2457.6, 300 sec: 2346.5). Total num frames: 1662976. Throughput: 0: 559.9. Samples: 415646. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:06:41,166][00159] Avg episode reward: [(0, '10.655')] |
|
[2024-06-06 13:06:45,127][03280] Updated weights for policy 0, policy_version 410 (0.0030) |
|
[2024-06-06 13:06:46,163][00159] Fps is (10 sec: 3687.0, 60 sec: 2594.1, 300 sec: 2360.4). Total num frames: 1683456. Throughput: 0: 587.9. Samples: 418408. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:06:46,165][00159] Avg episode reward: [(0, '11.305')] |
|
[2024-06-06 13:06:51,163][00159] Fps is (10 sec: 3686.4, 60 sec: 2594.1, 300 sec: 2360.4). Total num frames: 1699840. Throughput: 0: 656.8. Samples: 424704. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:06:51,167][00159] Avg episode reward: [(0, '11.382')] |
|
[2024-06-06 13:06:56,165][00159] Fps is (10 sec: 2866.5, 60 sec: 2525.8, 300 sec: 2360.4). Total num frames: 1712128. Throughput: 0: 632.9. Samples: 428724. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-06-06 13:06:56,168][00159] Avg episode reward: [(0, '11.037')] |
|
[2024-06-06 13:06:57,811][03280] Updated weights for policy 0, policy_version 420 (0.0019) |
|
[2024-06-06 13:07:01,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2730.7, 300 sec: 2402.1). Total num frames: 1732608. Throughput: 0: 641.5. Samples: 431156. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-06-06 13:07:01,166][00159] Avg episode reward: [(0, '11.399')] |
|
[2024-06-06 13:07:06,163][00159] Fps is (10 sec: 4096.9, 60 sec: 2798.9, 300 sec: 2429.8). Total num frames: 1753088. Throughput: 0: 721.6. Samples: 437148. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:07:06,165][00159] Avg episode reward: [(0, '10.888')] |
|
[2024-06-06 13:07:08,249][03280] Updated weights for policy 0, policy_version 430 (0.0018) |
|
[2024-06-06 13:07:11,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2730.7, 300 sec: 2429.8). Total num frames: 1765376. Throughput: 0: 750.1. Samples: 442208. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:07:11,165][00159] Avg episode reward: [(0, '10.881')] |
|
[2024-06-06 13:07:16,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2867.2, 300 sec: 2443.7). Total num frames: 1781760. Throughput: 0: 762.8. Samples: 444036. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:07:16,166][00159] Avg episode reward: [(0, '11.304')] |
|
[2024-06-06 13:07:20,650][03280] Updated weights for policy 0, policy_version 440 (0.0021) |
|
[2024-06-06 13:07:21,163][00159] Fps is (10 sec: 3686.4, 60 sec: 3072.0, 300 sec: 2457.6). Total num frames: 1802240. Throughput: 0: 834.0. Samples: 449680. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:07:21,174][00159] Avg episode reward: [(0, '11.809')] |
|
[2024-06-06 13:07:26,164][00159] Fps is (10 sec: 4095.5, 60 sec: 3208.5, 300 sec: 2457.6). Total num frames: 1822720. Throughput: 0: 894.6. Samples: 455904. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:07:26,169][00159] Avg episode reward: [(0, '13.023')] |
|
[2024-06-06 13:07:26,173][03267] Saving new best policy, reward=13.023! |
|
[2024-06-06 13:07:31,163][00159] Fps is (10 sec: 3276.7, 60 sec: 3276.8, 300 sec: 2457.6). Total num frames: 1835008. Throughput: 0: 874.7. Samples: 457768. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:07:31,170][00159] Avg episode reward: [(0, '14.047')] |
|
[2024-06-06 13:07:31,186][03267] Saving new best policy, reward=14.047! |
|
[2024-06-06 13:07:33,528][03280] Updated weights for policy 0, policy_version 450 (0.0043) |
|
[2024-06-06 13:07:36,163][00159] Fps is (10 sec: 2867.6, 60 sec: 3413.4, 300 sec: 2471.5). Total num frames: 1851392. Throughput: 0: 831.8. Samples: 462134. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:07:36,169][00159] Avg episode reward: [(0, '13.858')] |
|
[2024-06-06 13:07:41,163][00159] Fps is (10 sec: 3686.5, 60 sec: 3481.6, 300 sec: 2485.4). Total num frames: 1871872. Throughput: 0: 879.4. Samples: 468294. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) |
|
[2024-06-06 13:07:41,169][00159] Avg episode reward: [(0, '13.255')] |
|
[2024-06-06 13:07:43,538][03280] Updated weights for policy 0, policy_version 460 (0.0015) |
|
[2024-06-06 13:07:46,163][00159] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 2485.4). Total num frames: 1888256. Throughput: 0: 891.6. Samples: 471278. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:07:46,165][00159] Avg episode reward: [(0, '12.997')] |
|
[2024-06-06 13:07:51,163][00159] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 2499.3). Total num frames: 1900544. Throughput: 0: 845.6. Samples: 475202. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:07:51,171][00159] Avg episode reward: [(0, '13.045')] |
|
[2024-06-06 13:07:56,079][03280] Updated weights for policy 0, policy_version 470 (0.0020) |
|
[2024-06-06 13:07:56,163][00159] Fps is (10 sec: 3686.3, 60 sec: 3550.0, 300 sec: 2554.8). Total num frames: 1925120. Throughput: 0: 860.5. Samples: 480932. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-06-06 13:07:56,170][00159] Avg episode reward: [(0, '13.546')] |
|
[2024-06-06 13:08:01,165][00159] Fps is (10 sec: 4504.6, 60 sec: 3549.7, 300 sec: 2582.5). Total num frames: 1945600. Throughput: 0: 891.4. Samples: 484150. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:08:01,169][00159] Avg episode reward: [(0, '13.882')] |
|
[2024-06-06 13:08:06,163][00159] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 2596.4). Total num frames: 1957888. Throughput: 0: 873.5. Samples: 488986. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:08:06,168][00159] Avg episode reward: [(0, '14.603')] |
|
[2024-06-06 13:08:06,171][03267] Saving new best policy, reward=14.603! |
|
[2024-06-06 13:08:08,965][03280] Updated weights for policy 0, policy_version 480 (0.0034) |
|
[2024-06-06 13:08:11,163][00159] Fps is (10 sec: 2867.8, 60 sec: 3481.6, 300 sec: 2638.1). Total num frames: 1974272. Throughput: 0: 835.3. Samples: 493492. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:08:11,166][00159] Avg episode reward: [(0, '15.196')] |
|
[2024-06-06 13:08:11,174][03267] Saving new best policy, reward=15.196! |
|
[2024-06-06 13:08:16,163][00159] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 2652.0). Total num frames: 1994752. Throughput: 0: 861.2. Samples: 496520. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:08:16,165][00159] Avg episode reward: [(0, '15.516')] |
|
[2024-06-06 13:08:16,169][03267] Saving new best policy, reward=15.516! |
|
[2024-06-06 13:08:18,897][03280] Updated weights for policy 0, policy_version 490 (0.0023) |
|
[2024-06-06 13:08:21,163][00159] Fps is (10 sec: 3686.3, 60 sec: 3481.6, 300 sec: 2665.9). Total num frames: 2011136. Throughput: 0: 896.5. Samples: 502478. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:08:21,169][00159] Avg episode reward: [(0, '14.578')] |
|
[2024-06-06 13:08:26,163][00159] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 2679.8). Total num frames: 2023424. Throughput: 0: 835.9. Samples: 505908. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:08:26,170][00159] Avg episode reward: [(0, '13.930')] |
|
[2024-06-06 13:08:31,163][00159] Fps is (10 sec: 2867.3, 60 sec: 3413.3, 300 sec: 2707.5). Total num frames: 2039808. Throughput: 0: 823.4. Samples: 508330. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:08:31,166][00159] Avg episode reward: [(0, '14.923')] |
|
[2024-06-06 13:08:31,177][03267] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000498_2039808.pth... |
|
[2024-06-06 13:08:31,309][03267] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000329_1347584.pth |
|
[2024-06-06 13:08:32,897][03280] Updated weights for policy 0, policy_version 500 (0.0048) |
|
[2024-06-06 13:08:36,163][00159] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 2735.3). Total num frames: 2056192. Throughput: 0: 855.0. Samples: 513676. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:08:36,169][00159] Avg episode reward: [(0, '15.707')] |
|
[2024-06-06 13:08:36,174][03267] Saving new best policy, reward=15.707! |
|
[2024-06-06 13:08:41,163][00159] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 2749.2). Total num frames: 2068480. Throughput: 0: 815.8. Samples: 517642. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:08:41,165][00159] Avg episode reward: [(0, '16.863')] |
|
[2024-06-06 13:08:41,181][03267] Saving new best policy, reward=16.863! |
|
[2024-06-06 13:08:46,163][00159] Fps is (10 sec: 2457.6, 60 sec: 3208.5, 300 sec: 2749.2). Total num frames: 2080768. Throughput: 0: 780.1. Samples: 519254. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:08:46,169][00159] Avg episode reward: [(0, '16.868')] |
|
[2024-06-06 13:08:46,172][03267] Saving new best policy, reward=16.868! |
|
[2024-06-06 13:08:48,131][03280] Updated weights for policy 0, policy_version 510 (0.0036) |
|
[2024-06-06 13:08:51,163][00159] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 2776.9). Total num frames: 2097152. Throughput: 0: 774.9. Samples: 523858. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:08:51,175][00159] Avg episode reward: [(0, '17.348')] |
|
[2024-06-06 13:08:51,195][03267] Saving new best policy, reward=17.348! |
|
[2024-06-06 13:08:56,168][00159] Fps is (10 sec: 3275.0, 60 sec: 3140.0, 300 sec: 2790.8). Total num frames: 2113536. Throughput: 0: 788.5. Samples: 528978. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:08:56,171][00159] Avg episode reward: [(0, '17.300')] |
|
[2024-06-06 13:09:01,163][00159] Fps is (10 sec: 2867.2, 60 sec: 3003.8, 300 sec: 2790.8). Total num frames: 2125824. Throughput: 0: 757.6. Samples: 530614. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:09:01,167][00159] Avg episode reward: [(0, '17.760')] |
|
[2024-06-06 13:09:01,182][03267] Saving new best policy, reward=17.760! |
|
[2024-06-06 13:09:02,561][03280] Updated weights for policy 0, policy_version 520 (0.0028) |
|
[2024-06-06 13:09:06,163][00159] Fps is (10 sec: 2458.9, 60 sec: 3003.7, 300 sec: 2804.7). Total num frames: 2138112. Throughput: 0: 710.6. Samples: 534456. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:09:06,171][00159] Avg episode reward: [(0, '17.481')] |
|
[2024-06-06 13:09:11,163][00159] Fps is (10 sec: 3276.8, 60 sec: 3072.0, 300 sec: 2846.4). Total num frames: 2158592. Throughput: 0: 753.2. Samples: 539802. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:09:11,172][00159] Avg episode reward: [(0, '16.249')] |
|
[2024-06-06 13:09:15,579][03280] Updated weights for policy 0, policy_version 530 (0.0025) |
|
[2024-06-06 13:09:16,164][00159] Fps is (10 sec: 3276.4, 60 sec: 2935.4, 300 sec: 2860.2). Total num frames: 2170880. Throughput: 0: 751.8. Samples: 542162. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:09:16,170][00159] Avg episode reward: [(0, '17.256')] |
|
[2024-06-06 13:09:21,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2798.9, 300 sec: 2846.4). Total num frames: 2179072. Throughput: 0: 702.2. Samples: 545276. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:09:21,165][00159] Avg episode reward: [(0, '16.305')] |
|
[2024-06-06 13:09:26,163][00159] Fps is (10 sec: 2867.5, 60 sec: 2935.5, 300 sec: 2888.0). Total num frames: 2199552. Throughput: 0: 715.5. Samples: 549840. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:09:26,165][00159] Avg episode reward: [(0, '16.862')] |
|
[2024-06-06 13:09:29,586][03280] Updated weights for policy 0, policy_version 540 (0.0041) |
|
[2024-06-06 13:09:31,163][00159] Fps is (10 sec: 3686.3, 60 sec: 2935.5, 300 sec: 2901.9). Total num frames: 2215936. Throughput: 0: 736.7. Samples: 552404. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-06-06 13:09:31,166][00159] Avg episode reward: [(0, '17.127')] |
|
[2024-06-06 13:09:36,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2798.9, 300 sec: 2888.0). Total num frames: 2224128. Throughput: 0: 722.3. Samples: 556362. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:09:36,168][00159] Avg episode reward: [(0, '17.809')] |
|
[2024-06-06 13:09:36,172][03267] Saving new best policy, reward=17.809! |
|
[2024-06-06 13:09:41,163][00159] Fps is (10 sec: 2048.1, 60 sec: 2798.9, 300 sec: 2888.0). Total num frames: 2236416. Throughput: 0: 675.1. Samples: 559356. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:09:41,165][00159] Avg episode reward: [(0, '17.883')] |
|
[2024-06-06 13:09:41,180][03267] Saving new best policy, reward=17.883! |
|
[2024-06-06 13:09:45,699][03280] Updated weights for policy 0, policy_version 550 (0.0029) |
|
[2024-06-06 13:09:46,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2867.2, 300 sec: 2915.8). Total num frames: 2252800. Throughput: 0: 691.9. Samples: 561748. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:09:46,166][00159] Avg episode reward: [(0, '19.579')] |
|
[2024-06-06 13:09:46,172][03267] Saving new best policy, reward=19.579! |
|
[2024-06-06 13:09:51,163][00159] Fps is (10 sec: 2867.1, 60 sec: 2798.9, 300 sec: 2915.8). Total num frames: 2265088. Throughput: 0: 714.6. Samples: 566612. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-06-06 13:09:51,166][00159] Avg episode reward: [(0, '19.339')] |
|
[2024-06-06 13:09:56,166][00159] Fps is (10 sec: 2456.9, 60 sec: 2730.8, 300 sec: 2902.0). Total num frames: 2277376. Throughput: 0: 663.8. Samples: 569676. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-06-06 13:09:56,172][00159] Avg episode reward: [(0, '19.697')] |
|
[2024-06-06 13:09:56,178][03267] Saving new best policy, reward=19.697! |
|
[2024-06-06 13:10:01,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2730.7, 300 sec: 2915.8). Total num frames: 2289664. Throughput: 0: 649.1. Samples: 571370. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:10:01,170][00159] Avg episode reward: [(0, '19.878')] |
|
[2024-06-06 13:10:01,182][03267] Saving new best policy, reward=19.878! |
|
[2024-06-06 13:10:01,675][03280] Updated weights for policy 0, policy_version 560 (0.0017) |
|
[2024-06-06 13:10:06,163][00159] Fps is (10 sec: 2868.0, 60 sec: 2798.9, 300 sec: 2915.8). Total num frames: 2306048. Throughput: 0: 687.3. Samples: 576206. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:10:06,171][00159] Avg episode reward: [(0, '19.245')] |
|
[2024-06-06 13:10:11,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2662.4, 300 sec: 2915.8). Total num frames: 2318336. Throughput: 0: 678.7. Samples: 580380. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:10:11,170][00159] Avg episode reward: [(0, '18.826')] |
|
[2024-06-06 13:10:16,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2594.2, 300 sec: 2915.8). Total num frames: 2326528. Throughput: 0: 655.0. Samples: 581880. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:10:16,165][00159] Avg episode reward: [(0, '17.512')] |
|
[2024-06-06 13:10:17,395][03280] Updated weights for policy 0, policy_version 570 (0.0020) |
|
[2024-06-06 13:10:21,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2798.9, 300 sec: 2957.5). Total num frames: 2347008. Throughput: 0: 657.4. Samples: 585946. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-06-06 13:10:21,166][00159] Avg episode reward: [(0, '17.182')] |
|
[2024-06-06 13:10:26,163][00159] Fps is (10 sec: 3686.4, 60 sec: 2730.7, 300 sec: 2957.5). Total num frames: 2363392. Throughput: 0: 702.7. Samples: 590976. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:10:26,168][00159] Avg episode reward: [(0, '17.373')] |
|
[2024-06-06 13:10:31,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2594.1, 300 sec: 2929.7). Total num frames: 2371584. Throughput: 0: 689.7. Samples: 592784. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-06-06 13:10:31,171][00159] Avg episode reward: [(0, '18.332')] |
|
[2024-06-06 13:10:31,182][03267] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000579_2371584.pth... |
|
[2024-06-06 13:10:31,402][03267] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000400_1638400.pth |
|
[2024-06-06 13:10:31,879][03280] Updated weights for policy 0, policy_version 580 (0.0027) |
|
[2024-06-06 13:10:36,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2662.4, 300 sec: 2943.6). Total num frames: 2383872. Throughput: 0: 650.5. Samples: 595884. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:10:36,170][00159] Avg episode reward: [(0, '18.554')] |
|
[2024-06-06 13:10:41,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2798.9, 300 sec: 2971.3). Total num frames: 2404352. Throughput: 0: 693.4. Samples: 600876. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:10:41,171][00159] Avg episode reward: [(0, '18.797')] |
|
[2024-06-06 13:10:45,205][03280] Updated weights for policy 0, policy_version 590 (0.0018) |
|
[2024-06-06 13:10:46,165][00159] Fps is (10 sec: 3276.1, 60 sec: 2730.6, 300 sec: 2957.4). Total num frames: 2416640. Throughput: 0: 712.6. Samples: 603440. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:10:46,167][00159] Avg episode reward: [(0, '19.660')] |
|
[2024-06-06 13:10:51,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2662.4, 300 sec: 2929.7). Total num frames: 2424832. Throughput: 0: 679.3. Samples: 606774. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:10:51,166][00159] Avg episode reward: [(0, '20.334')] |
|
[2024-06-06 13:10:51,181][03267] Saving new best policy, reward=20.334! |
|
[2024-06-06 13:10:56,163][00159] Fps is (10 sec: 1638.7, 60 sec: 2594.2, 300 sec: 2929.7). Total num frames: 2433024. Throughput: 0: 643.6. Samples: 609344. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:10:56,172][00159] Avg episode reward: [(0, '20.302')] |
|
[2024-06-06 13:11:01,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2662.4, 300 sec: 2929.7). Total num frames: 2449408. Throughput: 0: 643.9. Samples: 610856. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:11:01,168][00159] Avg episode reward: [(0, '21.878')] |
|
[2024-06-06 13:11:01,183][03267] Saving new best policy, reward=21.878! |
|
[2024-06-06 13:11:03,834][03280] Updated weights for policy 0, policy_version 600 (0.0055) |
|
[2024-06-06 13:11:06,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2594.1, 300 sec: 2915.8). Total num frames: 2461696. Throughput: 0: 650.0. Samples: 615198. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:11:06,168][00159] Avg episode reward: [(0, '22.483')] |
|
[2024-06-06 13:11:06,172][03267] Saving new best policy, reward=22.483! |
|
[2024-06-06 13:11:11,164][00159] Fps is (10 sec: 2047.8, 60 sec: 2525.8, 300 sec: 2915.8). Total num frames: 2469888. Throughput: 0: 603.9. Samples: 618152. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-06-06 13:11:11,171][00159] Avg episode reward: [(0, '22.012')] |
|
[2024-06-06 13:11:16,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2662.4, 300 sec: 2943.6). Total num frames: 2486272. Throughput: 0: 610.0. Samples: 620236. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:11:16,169][00159] Avg episode reward: [(0, '22.186')] |
|
[2024-06-06 13:11:19,203][03280] Updated weights for policy 0, policy_version 610 (0.0034) |
|
[2024-06-06 13:11:21,163][00159] Fps is (10 sec: 3277.1, 60 sec: 2594.1, 300 sec: 2957.5). Total num frames: 2502656. Throughput: 0: 651.1. Samples: 625182. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:11:21,169][00159] Avg episode reward: [(0, '22.617')] |
|
[2024-06-06 13:11:21,181][03267] Saving new best policy, reward=22.617! |
|
[2024-06-06 13:11:26,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2525.9, 300 sec: 2971.3). Total num frames: 2514944. Throughput: 0: 622.7. Samples: 628896. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:11:26,167][00159] Avg episode reward: [(0, '22.308')] |
|
[2024-06-06 13:11:31,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2594.1, 300 sec: 2985.2). Total num frames: 2527232. Throughput: 0: 598.6. Samples: 630378. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:11:31,170][00159] Avg episode reward: [(0, '22.327')] |
|
[2024-06-06 13:11:34,732][03280] Updated weights for policy 0, policy_version 620 (0.0039) |
|
[2024-06-06 13:11:36,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2662.4, 300 sec: 2985.2). Total num frames: 2543616. Throughput: 0: 630.4. Samples: 635142. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:11:36,172][00159] Avg episode reward: [(0, '20.477')] |
|
[2024-06-06 13:11:41,164][00159] Fps is (10 sec: 2866.8, 60 sec: 2525.8, 300 sec: 2957.4). Total num frames: 2555904. Throughput: 0: 673.5. Samples: 639652. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:11:41,171][00159] Avg episode reward: [(0, '19.769')] |
|
[2024-06-06 13:11:46,165][00159] Fps is (10 sec: 2047.6, 60 sec: 2457.6, 300 sec: 2929.7). Total num frames: 2564096. Throughput: 0: 673.2. Samples: 641150. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:11:46,170][00159] Avg episode reward: [(0, '19.121')] |
|
[2024-06-06 13:11:50,321][03280] Updated weights for policy 0, policy_version 630 (0.0018) |
|
[2024-06-06 13:11:51,163][00159] Fps is (10 sec: 2458.0, 60 sec: 2594.1, 300 sec: 2943.6). Total num frames: 2580480. Throughput: 0: 655.8. Samples: 644710. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:11:51,166][00159] Avg episode reward: [(0, '19.527')] |
|
[2024-06-06 13:11:56,163][00159] Fps is (10 sec: 3277.5, 60 sec: 2730.7, 300 sec: 2929.7). Total num frames: 2596864. Throughput: 0: 701.3. Samples: 649710. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:11:56,172][00159] Avg episode reward: [(0, '19.800')] |
|
[2024-06-06 13:12:01,167][00159] Fps is (10 sec: 2865.9, 60 sec: 2662.2, 300 sec: 2901.9). Total num frames: 2609152. Throughput: 0: 705.1. Samples: 651970. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:12:01,170][00159] Avg episode reward: [(0, '20.145')] |
|
[2024-06-06 13:12:05,734][03280] Updated weights for policy 0, policy_version 640 (0.0024) |
|
[2024-06-06 13:12:06,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2662.4, 300 sec: 2901.9). Total num frames: 2621440. Throughput: 0: 663.0. Samples: 655018. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:12:06,168][00159] Avg episode reward: [(0, '20.694')] |
|
[2024-06-06 13:12:11,163][00159] Fps is (10 sec: 2868.5, 60 sec: 2799.0, 300 sec: 2901.9). Total num frames: 2637824. Throughput: 0: 688.3. Samples: 659870. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:12:11,171][00159] Avg episode reward: [(0, '21.916')] |
|
[2024-06-06 13:12:16,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2798.9, 300 sec: 2888.0). Total num frames: 2654208. Throughput: 0: 710.7. Samples: 662360. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:12:16,167][00159] Avg episode reward: [(0, '23.499')] |
|
[2024-06-06 13:12:16,170][03267] Saving new best policy, reward=23.499! |
|
[2024-06-06 13:12:19,187][03280] Updated weights for policy 0, policy_version 650 (0.0033) |
|
[2024-06-06 13:12:21,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2730.7, 300 sec: 2860.3). Total num frames: 2666496. Throughput: 0: 688.3. Samples: 666116. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:12:21,166][00159] Avg episode reward: [(0, '24.102')] |
|
[2024-06-06 13:12:21,183][03267] Saving new best policy, reward=24.102! |
|
[2024-06-06 13:12:26,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2730.7, 300 sec: 2860.3). Total num frames: 2678784. Throughput: 0: 676.5. Samples: 670094. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:12:26,170][00159] Avg episode reward: [(0, '25.100')] |
|
[2024-06-06 13:12:26,176][03267] Saving new best policy, reward=25.100! |
|
[2024-06-06 13:12:31,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2798.9, 300 sec: 2860.3). Total num frames: 2695168. Throughput: 0: 701.1. Samples: 672696. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:12:31,173][00159] Avg episode reward: [(0, '24.530')] |
|
[2024-06-06 13:12:31,255][03267] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000659_2699264.pth... |
|
[2024-06-06 13:12:31,388][03267] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000498_2039808.pth |
|
[2024-06-06 13:12:32,317][03280] Updated weights for policy 0, policy_version 660 (0.0029) |
|
[2024-06-06 13:12:36,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2798.9, 300 sec: 2846.4). Total num frames: 2711552. Throughput: 0: 734.8. Samples: 677778. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:12:36,166][00159] Avg episode reward: [(0, '24.968')] |
|
[2024-06-06 13:12:41,167][00159] Fps is (10 sec: 2866.0, 60 sec: 2798.8, 300 sec: 2832.4). Total num frames: 2723840. Throughput: 0: 693.7. Samples: 680928. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:12:41,169][00159] Avg episode reward: [(0, '24.889')] |
|
[2024-06-06 13:12:46,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2935.6, 300 sec: 2846.4). Total num frames: 2740224. Throughput: 0: 690.4. Samples: 683034. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-06-06 13:12:46,170][00159] Avg episode reward: [(0, '24.199')] |
|
[2024-06-06 13:12:47,177][03280] Updated weights for policy 0, policy_version 670 (0.0023) |
|
[2024-06-06 13:12:51,163][00159] Fps is (10 sec: 3278.2, 60 sec: 2935.5, 300 sec: 2818.6). Total num frames: 2756608. Throughput: 0: 738.3. Samples: 688240. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:12:51,166][00159] Avg episode reward: [(0, '22.940')] |
|
[2024-06-06 13:12:56,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2867.2, 300 sec: 2790.9). Total num frames: 2768896. Throughput: 0: 725.4. Samples: 692512. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:12:56,165][00159] Avg episode reward: [(0, '22.345')] |
|
[2024-06-06 13:13:01,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2867.4, 300 sec: 2790.8). Total num frames: 2781184. Throughput: 0: 706.4. Samples: 694146. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-06-06 13:13:01,165][00159] Avg episode reward: [(0, '21.314')] |
|
[2024-06-06 13:13:02,119][03280] Updated weights for policy 0, policy_version 680 (0.0029) |
|
[2024-06-06 13:13:06,163][00159] Fps is (10 sec: 2867.1, 60 sec: 2935.5, 300 sec: 2790.8). Total num frames: 2797568. Throughput: 0: 727.9. Samples: 698870. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:13:06,166][00159] Avg episode reward: [(0, '20.619')] |
|
[2024-06-06 13:13:11,172][00159] Fps is (10 sec: 3273.9, 60 sec: 2935.0, 300 sec: 2776.9). Total num frames: 2813952. Throughput: 0: 753.0. Samples: 703986. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:13:11,176][00159] Avg episode reward: [(0, '19.714')] |
|
[2024-06-06 13:13:16,163][00159] Fps is (10 sec: 2457.7, 60 sec: 2798.9, 300 sec: 2749.2). Total num frames: 2822144. Throughput: 0: 729.1. Samples: 705504. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:13:16,165][00159] Avg episode reward: [(0, '19.375')] |
|
[2024-06-06 13:13:16,606][03280] Updated weights for policy 0, policy_version 690 (0.0028) |
|
[2024-06-06 13:13:21,163][00159] Fps is (10 sec: 2459.8, 60 sec: 2867.2, 300 sec: 2763.1). Total num frames: 2838528. Throughput: 0: 693.9. Samples: 709002. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:13:21,168][00159] Avg episode reward: [(0, '20.181')] |
|
[2024-06-06 13:13:26,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2935.5, 300 sec: 2763.1). Total num frames: 2854912. Throughput: 0: 737.4. Samples: 714106. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-06-06 13:13:26,165][00159] Avg episode reward: [(0, '20.871')] |
|
[2024-06-06 13:13:29,870][03280] Updated weights for policy 0, policy_version 700 (0.0034) |
|
[2024-06-06 13:13:31,164][00159] Fps is (10 sec: 2867.0, 60 sec: 2867.2, 300 sec: 2749.2). Total num frames: 2867200. Throughput: 0: 744.8. Samples: 716552. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:13:31,170][00159] Avg episode reward: [(0, '21.117')] |
|
[2024-06-06 13:13:36,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2798.9, 300 sec: 2749.2). Total num frames: 2879488. Throughput: 0: 698.2. Samples: 719660. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-06-06 13:13:36,170][00159] Avg episode reward: [(0, '21.636')] |
|
[2024-06-06 13:13:41,163][00159] Fps is (10 sec: 2867.3, 60 sec: 2867.4, 300 sec: 2763.1). Total num frames: 2895872. Throughput: 0: 703.7. Samples: 724178. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-06-06 13:13:41,166][00159] Avg episode reward: [(0, '23.355')] |
|
[2024-06-06 13:13:44,429][03280] Updated weights for policy 0, policy_version 710 (0.0035) |
|
[2024-06-06 13:13:46,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2867.2, 300 sec: 2763.1). Total num frames: 2912256. Throughput: 0: 723.1. Samples: 726686. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-06-06 13:13:46,170][00159] Avg episode reward: [(0, '24.400')] |
|
[2024-06-06 13:13:51,167][00159] Fps is (10 sec: 2866.1, 60 sec: 2798.8, 300 sec: 2749.2). Total num frames: 2924544. Throughput: 0: 707.4. Samples: 730704. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-06-06 13:13:51,169][00159] Avg episode reward: [(0, '26.740')] |
|
[2024-06-06 13:13:51,189][03267] Saving new best policy, reward=26.740! |
|
[2024-06-06 13:13:56,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2798.9, 300 sec: 2749.2). Total num frames: 2936832. Throughput: 0: 668.2. Samples: 734050. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-06-06 13:13:56,171][00159] Avg episode reward: [(0, '26.659')] |
|
[2024-06-06 13:13:59,811][03280] Updated weights for policy 0, policy_version 720 (0.0016) |
|
[2024-06-06 13:14:01,163][00159] Fps is (10 sec: 2868.3, 60 sec: 2867.2, 300 sec: 2763.1). Total num frames: 2953216. Throughput: 0: 690.5. Samples: 736578. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:14:01,166][00159] Avg episode reward: [(0, '27.289')] |
|
[2024-06-06 13:14:01,181][03267] Saving new best policy, reward=27.289! |
|
[2024-06-06 13:14:06,163][00159] Fps is (10 sec: 2867.1, 60 sec: 2798.9, 300 sec: 2735.3). Total num frames: 2965504. Throughput: 0: 723.0. Samples: 741538. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:14:06,166][00159] Avg episode reward: [(0, '26.836')] |
|
[2024-06-06 13:14:11,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2731.1, 300 sec: 2735.3). Total num frames: 2977792. Throughput: 0: 677.6. Samples: 744596. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:14:11,169][00159] Avg episode reward: [(0, '26.594')] |
|
[2024-06-06 13:14:15,014][03280] Updated weights for policy 0, policy_version 730 (0.0026) |
|
[2024-06-06 13:14:16,163][00159] Fps is (10 sec: 2457.7, 60 sec: 2798.9, 300 sec: 2749.2). Total num frames: 2990080. Throughput: 0: 664.8. Samples: 746468. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:14:16,165][00159] Avg episode reward: [(0, '26.302')] |
|
[2024-06-06 13:14:21,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2798.9, 300 sec: 2735.3). Total num frames: 3006464. Throughput: 0: 706.3. Samples: 751444. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:14:21,171][00159] Avg episode reward: [(0, '26.516')] |
|
[2024-06-06 13:14:26,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2730.7, 300 sec: 2721.4). Total num frames: 3018752. Throughput: 0: 697.5. Samples: 755566. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-06-06 13:14:26,167][00159] Avg episode reward: [(0, '25.631')] |
|
[2024-06-06 13:14:30,702][03280] Updated weights for policy 0, policy_version 740 (0.0025) |
|
[2024-06-06 13:14:31,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2730.7, 300 sec: 2735.3). Total num frames: 3031040. Throughput: 0: 676.0. Samples: 757108. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-06-06 13:14:31,168][00159] Avg episode reward: [(0, '25.844')] |
|
[2024-06-06 13:14:31,180][03267] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000740_3031040.pth... |
|
[2024-06-06 13:14:31,316][03267] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000579_2371584.pth |
|
[2024-06-06 13:14:36,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2798.9, 300 sec: 2749.2). Total num frames: 3047424. Throughput: 0: 682.0. Samples: 761392. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:14:36,169][00159] Avg episode reward: [(0, '25.507')] |
|
[2024-06-06 13:14:41,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2798.9, 300 sec: 2749.2). Total num frames: 3063808. Throughput: 0: 717.9. Samples: 766356. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:14:41,165][00159] Avg episode reward: [(0, '25.613')] |
|
[2024-06-06 13:14:44,641][03280] Updated weights for policy 0, policy_version 750 (0.0032) |
|
[2024-06-06 13:14:46,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2662.4, 300 sec: 2735.3). Total num frames: 3072000. Throughput: 0: 695.5. Samples: 767876. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:14:46,172][00159] Avg episode reward: [(0, '25.257')] |
|
[2024-06-06 13:14:51,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2730.8, 300 sec: 2749.2). Total num frames: 3088384. Throughput: 0: 661.0. Samples: 771284. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:14:51,170][00159] Avg episode reward: [(0, '26.196')] |
|
[2024-06-06 13:14:56,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2798.9, 300 sec: 2763.1). Total num frames: 3104768. Throughput: 0: 703.1. Samples: 776236. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:14:56,173][00159] Avg episode reward: [(0, '25.943')] |
|
[2024-06-06 13:14:58,410][03280] Updated weights for policy 0, policy_version 760 (0.0017) |
|
[2024-06-06 13:15:01,164][00159] Fps is (10 sec: 2866.8, 60 sec: 2730.6, 300 sec: 2749.2). Total num frames: 3117056. Throughput: 0: 717.0. Samples: 778736. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:15:01,169][00159] Avg episode reward: [(0, '25.952')] |
|
[2024-06-06 13:15:06,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2662.4, 300 sec: 2735.3). Total num frames: 3125248. Throughput: 0: 675.5. Samples: 781842. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:15:06,175][00159] Avg episode reward: [(0, '24.967')] |
|
[2024-06-06 13:15:11,163][00159] Fps is (10 sec: 2457.9, 60 sec: 2730.7, 300 sec: 2763.1). Total num frames: 3141632. Throughput: 0: 681.9. Samples: 786250. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:15:11,168][00159] Avg episode reward: [(0, '24.551')] |
|
[2024-06-06 13:15:13,665][03280] Updated weights for policy 0, policy_version 770 (0.0028) |
|
[2024-06-06 13:15:16,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2798.9, 300 sec: 2749.2). Total num frames: 3158016. Throughput: 0: 702.6. Samples: 788724. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:15:16,168][00159] Avg episode reward: [(0, '23.552')] |
|
[2024-06-06 13:15:21,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2662.4, 300 sec: 2721.4). Total num frames: 3166208. Throughput: 0: 681.7. Samples: 792070. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:15:21,168][00159] Avg episode reward: [(0, '23.415')] |
|
[2024-06-06 13:15:26,165][00159] Fps is (10 sec: 1638.0, 60 sec: 2594.0, 300 sec: 2721.4). Total num frames: 3174400. Throughput: 0: 624.9. Samples: 794476. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:15:26,168][00159] Avg episode reward: [(0, '22.536')] |
|
[2024-06-06 13:15:31,163][00159] Fps is (10 sec: 2457.5, 60 sec: 2662.4, 300 sec: 2735.3). Total num frames: 3190784. Throughput: 0: 627.2. Samples: 796100. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-06-06 13:15:31,170][00159] Avg episode reward: [(0, '20.950')] |
|
[2024-06-06 13:15:32,040][03280] Updated weights for policy 0, policy_version 780 (0.0053) |
|
[2024-06-06 13:15:36,163][00159] Fps is (10 sec: 3277.5, 60 sec: 2662.4, 300 sec: 2721.4). Total num frames: 3207168. Throughput: 0: 663.7. Samples: 801152. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:15:36,165][00159] Avg episode reward: [(0, '21.177')] |
|
[2024-06-06 13:15:41,163][00159] Fps is (10 sec: 2867.3, 60 sec: 2594.1, 300 sec: 2721.4). Total num frames: 3219456. Throughput: 0: 648.3. Samples: 805408. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:15:41,165][00159] Avg episode reward: [(0, '21.354')] |
|
[2024-06-06 13:15:46,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2662.4, 300 sec: 2735.3). Total num frames: 3231744. Throughput: 0: 625.0. Samples: 806862. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:15:46,169][00159] Avg episode reward: [(0, '19.533')] |
|
[2024-06-06 13:15:47,281][03280] Updated weights for policy 0, policy_version 790 (0.0035) |
|
[2024-06-06 13:15:51,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2662.4, 300 sec: 2763.1). Total num frames: 3248128. Throughput: 0: 651.0. Samples: 811136. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:15:51,166][00159] Avg episode reward: [(0, '19.300')] |
|
[2024-06-06 13:15:56,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2662.4, 300 sec: 2763.1). Total num frames: 3264512. Throughput: 0: 663.7. Samples: 816116. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:15:56,166][00159] Avg episode reward: [(0, '19.549')] |
|
[2024-06-06 13:16:01,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2594.2, 300 sec: 2749.2). Total num frames: 3272704. Throughput: 0: 646.8. Samples: 817832. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:16:01,166][00159] Avg episode reward: [(0, '20.000')] |
|
[2024-06-06 13:16:01,831][03280] Updated weights for policy 0, policy_version 800 (0.0024) |
|
[2024-06-06 13:16:06,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2662.4, 300 sec: 2763.1). Total num frames: 3284992. Throughput: 0: 642.7. Samples: 820992. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:16:06,166][00159] Avg episode reward: [(0, '20.408')] |
|
[2024-06-06 13:16:11,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2662.4, 300 sec: 2763.1). Total num frames: 3301376. Throughput: 0: 701.6. Samples: 826048. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:16:11,166][00159] Avg episode reward: [(0, '21.719')] |
|
[2024-06-06 13:16:15,467][03280] Updated weights for policy 0, policy_version 810 (0.0034) |
|
[2024-06-06 13:16:16,164][00159] Fps is (10 sec: 3276.3, 60 sec: 2662.3, 300 sec: 2763.1). Total num frames: 3317760. Throughput: 0: 721.5. Samples: 828570. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:16:16,169][00159] Avg episode reward: [(0, '22.912')] |
|
[2024-06-06 13:16:21,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2662.4, 300 sec: 2749.2). Total num frames: 3325952. Throughput: 0: 681.2. Samples: 831808. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:16:21,166][00159] Avg episode reward: [(0, '23.657')] |
|
[2024-06-06 13:16:26,163][00159] Fps is (10 sec: 2457.9, 60 sec: 2799.0, 300 sec: 2763.1). Total num frames: 3342336. Throughput: 0: 678.8. Samples: 835952. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:16:26,165][00159] Avg episode reward: [(0, '24.381')] |
|
[2024-06-06 13:16:30,363][03280] Updated weights for policy 0, policy_version 820 (0.0030) |
|
[2024-06-06 13:16:31,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2798.9, 300 sec: 2763.1). Total num frames: 3358720. Throughput: 0: 703.0. Samples: 838496. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:16:31,169][00159] Avg episode reward: [(0, '25.621')] |
|
[2024-06-06 13:16:31,183][03267] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000820_3358720.pth... |
|
[2024-06-06 13:16:31,353][03267] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000659_2699264.pth |
|
[2024-06-06 13:16:36,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2730.7, 300 sec: 2763.1). Total num frames: 3371008. Throughput: 0: 704.3. Samples: 842828. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-06-06 13:16:36,165][00159] Avg episode reward: [(0, '25.687')] |
|
[2024-06-06 13:16:41,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2730.7, 300 sec: 2777.0). Total num frames: 3383296. Throughput: 0: 663.2. Samples: 845962. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:16:41,170][00159] Avg episode reward: [(0, '25.437')] |
|
[2024-06-06 13:16:45,685][03280] Updated weights for policy 0, policy_version 830 (0.0018) |
|
[2024-06-06 13:16:46,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2798.9, 300 sec: 2776.9). Total num frames: 3399680. Throughput: 0: 680.6. Samples: 848460. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:16:46,165][00159] Avg episode reward: [(0, '23.744')] |
|
[2024-06-06 13:16:51,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2798.9, 300 sec: 2776.9). Total num frames: 3416064. Throughput: 0: 723.3. Samples: 853542. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:16:51,165][00159] Avg episode reward: [(0, '22.794')] |
|
[2024-06-06 13:16:56,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2662.4, 300 sec: 2763.1). Total num frames: 3424256. Throughput: 0: 685.1. Samples: 856878. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:16:56,170][00159] Avg episode reward: [(0, '23.741')] |
|
[2024-06-06 13:17:00,855][03280] Updated weights for policy 0, policy_version 840 (0.0048) |
|
[2024-06-06 13:17:01,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2798.9, 300 sec: 2776.9). Total num frames: 3440640. Throughput: 0: 663.8. Samples: 858442. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:17:01,165][00159] Avg episode reward: [(0, '23.624')] |
|
[2024-06-06 13:17:06,163][00159] Fps is (10 sec: 3276.7, 60 sec: 2867.2, 300 sec: 2776.9). Total num frames: 3457024. Throughput: 0: 704.7. Samples: 863520. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:17:06,166][00159] Avg episode reward: [(0, '24.765')] |
|
[2024-06-06 13:17:11,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2798.9, 300 sec: 2763.1). Total num frames: 3469312. Throughput: 0: 712.2. Samples: 868002. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:17:11,166][00159] Avg episode reward: [(0, '25.034')] |
|
[2024-06-06 13:17:15,965][03280] Updated weights for policy 0, policy_version 850 (0.0040) |
|
[2024-06-06 13:17:16,165][00159] Fps is (10 sec: 2457.1, 60 sec: 2730.6, 300 sec: 2763.0). Total num frames: 3481600. Throughput: 0: 690.8. Samples: 869582. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:17:16,171][00159] Avg episode reward: [(0, '25.447')] |
|
[2024-06-06 13:17:21,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2867.2, 300 sec: 2776.9). Total num frames: 3497984. Throughput: 0: 685.8. Samples: 873688. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-06-06 13:17:21,171][00159] Avg episode reward: [(0, '27.004')] |
|
[2024-06-06 13:17:26,163][00159] Fps is (10 sec: 3277.5, 60 sec: 2867.2, 300 sec: 2776.9). Total num frames: 3514368. Throughput: 0: 727.4. Samples: 878694. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-06-06 13:17:26,165][00159] Avg episode reward: [(0, '28.146')] |
|
[2024-06-06 13:17:26,169][03267] Saving new best policy, reward=28.146! |
|
[2024-06-06 13:17:29,234][03280] Updated weights for policy 0, policy_version 860 (0.0033) |
|
[2024-06-06 13:17:31,165][00159] Fps is (10 sec: 2457.2, 60 sec: 2730.6, 300 sec: 2749.2). Total num frames: 3522560. Throughput: 0: 713.9. Samples: 880586. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:17:31,169][00159] Avg episode reward: [(0, '28.540')] |
|
[2024-06-06 13:17:31,273][03267] Saving new best policy, reward=28.540! |
|
[2024-06-06 13:17:36,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2730.7, 300 sec: 2749.2). Total num frames: 3534848. Throughput: 0: 668.5. Samples: 883626. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:17:36,174][00159] Avg episode reward: [(0, '27.493')] |
|
[2024-06-06 13:17:41,165][00159] Fps is (10 sec: 3276.6, 60 sec: 2867.1, 300 sec: 2763.0). Total num frames: 3555328. Throughput: 0: 706.8. Samples: 888684. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:17:41,168][00159] Avg episode reward: [(0, '26.839')] |
|
[2024-06-06 13:17:43,651][03280] Updated weights for policy 0, policy_version 870 (0.0017) |
|
[2024-06-06 13:17:46,165][00159] Fps is (10 sec: 3276.3, 60 sec: 2798.9, 300 sec: 2749.2). Total num frames: 3567616. Throughput: 0: 729.6. Samples: 891276. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-06-06 13:17:46,169][00159] Avg episode reward: [(0, '25.433')] |
|
[2024-06-06 13:17:51,163][00159] Fps is (10 sec: 2458.2, 60 sec: 2730.7, 300 sec: 2749.2). Total num frames: 3579904. Throughput: 0: 692.8. Samples: 894696. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:17:51,168][00159] Avg episode reward: [(0, '25.292')] |
|
[2024-06-06 13:17:56,163][00159] Fps is (10 sec: 2458.0, 60 sec: 2798.9, 300 sec: 2749.2). Total num frames: 3592192. Throughput: 0: 683.1. Samples: 898740. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:17:56,172][00159] Avg episode reward: [(0, '23.054')] |
|
[2024-06-06 13:17:58,735][03280] Updated weights for policy 0, policy_version 880 (0.0037) |
|
[2024-06-06 13:18:01,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2867.2, 300 sec: 2763.1). Total num frames: 3612672. Throughput: 0: 704.7. Samples: 901292. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-06-06 13:18:01,173][00159] Avg episode reward: [(0, '22.928')] |
|
[2024-06-06 13:18:06,163][00159] Fps is (10 sec: 2867.1, 60 sec: 2730.7, 300 sec: 2735.4). Total num frames: 3620864. Throughput: 0: 712.8. Samples: 905764. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:18:06,166][00159] Avg episode reward: [(0, '23.251')] |
|
[2024-06-06 13:18:11,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2730.7, 300 sec: 2749.2). Total num frames: 3633152. Throughput: 0: 672.6. Samples: 908962. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:18:11,166][00159] Avg episode reward: [(0, '23.428')] |
|
[2024-06-06 13:18:13,829][03280] Updated weights for policy 0, policy_version 890 (0.0032) |
|
[2024-06-06 13:18:16,163][00159] Fps is (10 sec: 2867.3, 60 sec: 2799.0, 300 sec: 2749.2). Total num frames: 3649536. Throughput: 0: 687.5. Samples: 911524. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:18:16,165][00159] Avg episode reward: [(0, '23.869')] |
|
[2024-06-06 13:18:21,166][00159] Fps is (10 sec: 3275.7, 60 sec: 2798.8, 300 sec: 2749.1). Total num frames: 3665920. Throughput: 0: 733.9. Samples: 916656. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-06-06 13:18:21,169][00159] Avg episode reward: [(0, '24.209')] |
|
[2024-06-06 13:18:26,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2730.7, 300 sec: 2749.2). Total num frames: 3678208. Throughput: 0: 696.2. Samples: 920012. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:18:26,165][00159] Avg episode reward: [(0, '24.952')] |
|
[2024-06-06 13:18:28,775][03280] Updated weights for policy 0, policy_version 900 (0.0036) |
|
[2024-06-06 13:18:31,164][00159] Fps is (10 sec: 2867.8, 60 sec: 2867.2, 300 sec: 2763.1). Total num frames: 3694592. Throughput: 0: 675.1. Samples: 921656. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:18:31,171][00159] Avg episode reward: [(0, '25.548')] |
|
[2024-06-06 13:18:31,183][03267] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000902_3694592.pth... |
|
[2024-06-06 13:18:31,312][03267] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000740_3031040.pth |
|
[2024-06-06 13:18:36,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2935.5, 300 sec: 2763.1). Total num frames: 3710976. Throughput: 0: 711.4. Samples: 926708. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) |
|
[2024-06-06 13:18:36,171][00159] Avg episode reward: [(0, '27.440')] |
|
[2024-06-06 13:18:41,166][00159] Fps is (10 sec: 2866.6, 60 sec: 2798.9, 300 sec: 2749.1). Total num frames: 3723264. Throughput: 0: 719.5. Samples: 931120. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-06-06 13:18:41,169][00159] Avg episode reward: [(0, '27.356')] |
|
[2024-06-06 13:18:43,035][03280] Updated weights for policy 0, policy_version 910 (0.0028) |
|
[2024-06-06 13:18:46,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2730.7, 300 sec: 2735.3). Total num frames: 3731456. Throughput: 0: 696.3. Samples: 932624. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-06-06 13:18:46,166][00159] Avg episode reward: [(0, '27.612')] |
|
[2024-06-06 13:18:51,163][00159] Fps is (10 sec: 2458.4, 60 sec: 2798.9, 300 sec: 2749.2). Total num frames: 3747840. Throughput: 0: 688.1. Samples: 936728. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-06-06 13:18:51,168][00159] Avg episode reward: [(0, '27.287')] |
|
[2024-06-06 13:18:56,163][00159] Fps is (10 sec: 3276.7, 60 sec: 2867.2, 300 sec: 2749.2). Total num frames: 3764224. Throughput: 0: 728.6. Samples: 941748. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-06-06 13:18:56,168][00159] Avg episode reward: [(0, '25.818')] |
|
[2024-06-06 13:18:56,663][03280] Updated weights for policy 0, policy_version 920 (0.0015) |
|
[2024-06-06 13:19:01,163][00159] Fps is (10 sec: 2867.1, 60 sec: 2730.7, 300 sec: 2749.2). Total num frames: 3776512. Throughput: 0: 712.8. Samples: 943600. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:19:01,166][00159] Avg episode reward: [(0, '26.124')] |
|
[2024-06-06 13:19:06,163][00159] Fps is (10 sec: 2457.7, 60 sec: 2798.9, 300 sec: 2749.2). Total num frames: 3788800. Throughput: 0: 668.8. Samples: 946748. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:19:06,166][00159] Avg episode reward: [(0, '27.144')] |
|
[2024-06-06 13:19:11,163][00159] Fps is (10 sec: 2867.2, 60 sec: 2867.2, 300 sec: 2763.1). Total num frames: 3805184. Throughput: 0: 708.8. Samples: 951906. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:19:11,171][00159] Avg episode reward: [(0, '26.373')] |
|
[2024-06-06 13:19:11,366][03280] Updated weights for policy 0, policy_version 930 (0.0049) |
|
[2024-06-06 13:19:16,166][00159] Fps is (10 sec: 3275.9, 60 sec: 2867.1, 300 sec: 2763.0). Total num frames: 3821568. Throughput: 0: 730.0. Samples: 954506. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:19:16,169][00159] Avg episode reward: [(0, '26.356')] |
|
[2024-06-06 13:19:21,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2730.8, 300 sec: 2749.2). Total num frames: 3829760. Throughput: 0: 693.4. Samples: 957912. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:19:21,165][00159] Avg episode reward: [(0, '26.576')] |
|
[2024-06-06 13:19:26,163][00159] Fps is (10 sec: 2458.3, 60 sec: 2798.9, 300 sec: 2763.1). Total num frames: 3846144. Throughput: 0: 686.9. Samples: 962028. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:19:26,173][00159] Avg episode reward: [(0, '27.595')] |
|
[2024-06-06 13:19:26,638][03280] Updated weights for policy 0, policy_version 940 (0.0032) |
|
[2024-06-06 13:19:31,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2799.0, 300 sec: 2763.1). Total num frames: 3862528. Throughput: 0: 710.4. Samples: 964590. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:19:31,165][00159] Avg episode reward: [(0, '26.910')] |
|
[2024-06-06 13:19:36,171][00159] Fps is (10 sec: 2864.8, 60 sec: 2730.3, 300 sec: 2749.1). Total num frames: 3874816. Throughput: 0: 716.2. Samples: 968962. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:19:36,174][00159] Avg episode reward: [(0, '27.747')] |
|
[2024-06-06 13:19:41,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2730.8, 300 sec: 2763.1). Total num frames: 3887104. Throughput: 0: 673.3. Samples: 972046. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:19:41,175][00159] Avg episode reward: [(0, '27.667')] |
|
[2024-06-06 13:19:41,700][03280] Updated weights for policy 0, policy_version 950 (0.0036) |
|
[2024-06-06 13:19:46,163][00159] Fps is (10 sec: 2459.6, 60 sec: 2798.9, 300 sec: 2749.2). Total num frames: 3899392. Throughput: 0: 684.5. Samples: 974404. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-06-06 13:19:46,170][00159] Avg episode reward: [(0, '27.250')] |
|
[2024-06-06 13:19:51,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2730.7, 300 sec: 2735.3). Total num frames: 3911680. Throughput: 0: 682.4. Samples: 977456. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:19:51,167][00159] Avg episode reward: [(0, '26.046')] |
|
[2024-06-06 13:19:56,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2594.1, 300 sec: 2721.4). Total num frames: 3919872. Throughput: 0: 636.0. Samples: 980526. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-06-06 13:19:56,166][00159] Avg episode reward: [(0, '24.773')] |
|
[2024-06-06 13:20:00,032][03280] Updated weights for policy 0, policy_version 960 (0.0051) |
|
[2024-06-06 13:20:01,163][00159] Fps is (10 sec: 2457.6, 60 sec: 2662.4, 300 sec: 2749.2). Total num frames: 3936256. Throughput: 0: 613.5. Samples: 982110. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:20:01,165][00159] Avg episode reward: [(0, '23.114')] |
|
[2024-06-06 13:20:06,163][00159] Fps is (10 sec: 3276.8, 60 sec: 2730.7, 300 sec: 2749.2). Total num frames: 3952640. Throughput: 0: 649.8. Samples: 987154. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:20:06,173][00159] Avg episode reward: [(0, '23.766')] |
|
[2024-06-06 13:20:11,165][00159] Fps is (10 sec: 2866.6, 60 sec: 2662.3, 300 sec: 2735.3). Total num frames: 3964928. Throughput: 0: 658.8. Samples: 991676. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:20:11,170][00159] Avg episode reward: [(0, '23.351')] |
|
[2024-06-06 13:20:14,547][03280] Updated weights for policy 0, policy_version 970 (0.0027) |
|
[2024-06-06 13:20:16,163][00159] Fps is (10 sec: 2048.0, 60 sec: 2526.0, 300 sec: 2735.3). Total num frames: 3973120. Throughput: 0: 634.7. Samples: 993152. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-06-06 13:20:16,172][00159] Avg episode reward: [(0, '22.772')] |
|
[2024-06-06 13:20:21,163][00159] Fps is (10 sec: 2458.1, 60 sec: 2662.4, 300 sec: 2763.1). Total num frames: 3989504. Throughput: 0: 628.1. Samples: 997222. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-06-06 13:20:21,165][00159] Avg episode reward: [(0, '23.209')] |
|
[2024-06-06 13:20:24,919][03267] Stopping Batcher_0... |
|
[2024-06-06 13:20:24,920][03267] Loop batcher_evt_loop terminating... |
|
[2024-06-06 13:20:24,920][00159] Component Batcher_0 stopped! |
|
[2024-06-06 13:20:24,932][03267] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-06-06 13:20:25,007][03280] Weights refcount: 2 0 |
|
[2024-06-06 13:20:25,015][00159] Component InferenceWorker_p0-w0 stopped! |
|
[2024-06-06 13:20:25,015][03280] Stopping InferenceWorker_p0-w0... |
|
[2024-06-06 13:20:25,025][03280] Loop inference_proc0-0_evt_loop terminating... |
|
[2024-06-06 13:20:25,050][03285] Stopping RolloutWorker_w4... |
|
[2024-06-06 13:20:25,051][03285] Loop rollout_proc4_evt_loop terminating... |
|
[2024-06-06 13:20:25,050][00159] Component RolloutWorker_w4 stopped! |
|
[2024-06-06 13:20:25,058][00159] Component RolloutWorker_w5 stopped! |
|
[2024-06-06 13:20:25,065][03287] Stopping RolloutWorker_w5... |
|
[2024-06-06 13:20:25,065][03287] Loop rollout_proc5_evt_loop terminating... |
|
[2024-06-06 13:20:25,072][03286] Stopping RolloutWorker_w6... |
|
[2024-06-06 13:20:25,075][03286] Loop rollout_proc6_evt_loop terminating... |
|
[2024-06-06 13:20:25,073][00159] Component RolloutWorker_w6 stopped! |
|
[2024-06-06 13:20:25,091][00159] Component RolloutWorker_w7 stopped! |
|
[2024-06-06 13:20:25,098][03283] Stopping RolloutWorker_w2... |
|
[2024-06-06 13:20:25,099][03283] Loop rollout_proc2_evt_loop terminating... |
|
[2024-06-06 13:20:25,098][00159] Component RolloutWorker_w2 stopped! |
|
[2024-06-06 13:20:25,107][03281] Stopping RolloutWorker_w0... |
|
[2024-06-06 13:20:25,108][03281] Loop rollout_proc0_evt_loop terminating... |
|
[2024-06-06 13:20:25,109][03282] Stopping RolloutWorker_w1... |
|
[2024-06-06 13:20:25,109][03282] Loop rollout_proc1_evt_loop terminating... |
|
[2024-06-06 13:20:25,108][00159] Component RolloutWorker_w0 stopped! |
|
[2024-06-06 13:20:25,114][00159] Component RolloutWorker_w1 stopped! |
|
[2024-06-06 13:20:25,119][03288] Stopping RolloutWorker_w7... |
|
[2024-06-06 13:20:25,131][03288] Loop rollout_proc7_evt_loop terminating... |
|
[2024-06-06 13:20:25,135][00159] Component RolloutWorker_w3 stopped! |
|
[2024-06-06 13:20:25,137][03284] Stopping RolloutWorker_w3... |
|
[2024-06-06 13:20:25,143][03267] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000820_3358720.pth |
|
[2024-06-06 13:20:25,144][03284] Loop rollout_proc3_evt_loop terminating... |
|
[2024-06-06 13:20:25,169][03267] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-06-06 13:20:25,382][03267] Stopping LearnerWorker_p0... |
|
[2024-06-06 13:20:25,382][03267] Loop learner_proc0_evt_loop terminating... |
|
[2024-06-06 13:20:25,383][00159] Component LearnerWorker_p0 stopped! |
|
[2024-06-06 13:20:25,386][00159] Waiting for process learner_proc0 to stop... |
|
[2024-06-06 13:20:27,520][00159] Waiting for process inference_proc0-0 to join... |
|
[2024-06-06 13:20:27,527][00159] Waiting for process rollout_proc0 to join... |
|
[2024-06-06 13:20:30,488][00159] Waiting for process rollout_proc1 to join... |
|
[2024-06-06 13:20:30,874][00159] Waiting for process rollout_proc2 to join... |
|
[2024-06-06 13:20:30,880][00159] Waiting for process rollout_proc3 to join... |
|
[2024-06-06 13:20:30,887][00159] Waiting for process rollout_proc4 to join... |
|
[2024-06-06 13:20:30,890][00159] Waiting for process rollout_proc5 to join... |
|
[2024-06-06 13:20:30,895][00159] Waiting for process rollout_proc6 to join... |
|
[2024-06-06 13:20:30,900][00159] Waiting for process rollout_proc7 to join... |
|
[2024-06-06 13:20:30,904][00159] Batcher 0 profile tree view: |
|
batching: 28.6482, releasing_batches: 0.0428 |
|
[2024-06-06 13:20:30,908][00159] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0052 |
|
wait_policy_total: 537.9611 |
|
update_model: 14.5551 |
|
weight_update: 0.0020 |
|
one_step: 0.0044 |
|
handle_policy_step: 905.5214 |
|
deserialize: 24.6233, stack: 5.4127, obs_to_device_normalize: 184.4027, forward: 492.5886, send_messages: 44.2074 |
|
prepare_outputs: 110.5839 |
|
to_cpu: 57.7050 |
|
[2024-06-06 13:20:30,910][00159] Learner 0 profile tree view: |
|
misc: 0.0057, prepare_batch: 14.9343 |
|
train: 78.1430 |
|
epoch_init: 0.0063, minibatch_init: 0.0123, losses_postprocess: 0.7220, kl_divergence: 0.8606, after_optimizer: 34.9406 |
|
calculate_losses: 28.3048 |
|
losses_init: 0.0124, forward_head: 1.6688, bptt_initial: 17.7691, tail: 1.4542, advantages_returns: 0.3196, losses: 3.9017 |
|
bptt: 2.6576 |
|
bptt_forward_core: 2.5458 |
|
update: 12.4541 |
|
clip: 1.1720 |
|
[2024-06-06 13:20:30,911][00159] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.4295, enqueue_policy_requests: 167.0857, env_step: 1167.1406, overhead: 25.7791, complete_rollouts: 9.5851 |
|
save_policy_outputs: 32.6104 |
|
split_output_tensors: 13.1462 |
|
[2024-06-06 13:20:30,913][00159] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.5461, enqueue_policy_requests: 171.5998, env_step: 1167.8541, overhead: 24.8985, complete_rollouts: 9.0936 |
|
save_policy_outputs: 30.4404 |
|
split_output_tensors: 12.0340 |
|
[2024-06-06 13:20:30,914][00159] Loop Runner_EvtLoop terminating... |
|
[2024-06-06 13:20:30,916][00159] Runner profile tree view: |
|
main_loop: 1553.8885 |
|
[2024-06-06 13:20:30,917][00159] Collected {0: 4005888}, FPS: 2578.0 |
|
[2024-06-06 13:20:32,266][00159] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2024-06-06 13:20:32,268][00159] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2024-06-06 13:20:32,271][00159] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2024-06-06 13:20:32,273][00159] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2024-06-06 13:20:32,274][00159] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-06-06 13:20:32,276][00159] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2024-06-06 13:20:32,278][00159] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-06-06 13:20:32,279][00159] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2024-06-06 13:20:32,280][00159] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2024-06-06 13:20:32,281][00159] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2024-06-06 13:20:32,282][00159] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2024-06-06 13:20:32,283][00159] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2024-06-06 13:20:32,284][00159] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2024-06-06 13:20:32,285][00159] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2024-06-06 13:20:32,286][00159] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2024-06-06 13:20:32,326][00159] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-06-06 13:20:32,331][00159] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-06-06 13:20:32,333][00159] RunningMeanStd input shape: (1,) |
|
[2024-06-06 13:20:32,350][00159] ConvEncoder: input_channels=3 |
|
[2024-06-06 13:20:32,462][00159] Conv encoder output size: 512 |
|
[2024-06-06 13:20:32,464][00159] Policy head output size: 512 |
|
[2024-06-06 13:20:32,760][00159] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-06-06 13:20:33,619][00159] Num frames 100... |
|
[2024-06-06 13:20:33,765][00159] Num frames 200... |
|
[2024-06-06 13:20:33,922][00159] Num frames 300... |
|
[2024-06-06 13:20:34,074][00159] Num frames 400... |
|
[2024-06-06 13:20:34,223][00159] Num frames 500... |
|
[2024-06-06 13:20:34,379][00159] Num frames 600... |
|
[2024-06-06 13:20:34,445][00159] Avg episode rewards: #0: 13.050, true rewards: #0: 6.050 |
|
[2024-06-06 13:20:34,448][00159] Avg episode reward: 13.050, avg true_objective: 6.050 |
|
[2024-06-06 13:20:34,589][00159] Num frames 700... |
|
[2024-06-06 13:20:34,741][00159] Num frames 800... |
|
[2024-06-06 13:20:34,885][00159] Num frames 900... |
|
[2024-06-06 13:20:35,031][00159] Num frames 1000... |
|
[2024-06-06 13:20:35,175][00159] Num frames 1100... |
|
[2024-06-06 13:20:35,335][00159] Num frames 1200... |
|
[2024-06-06 13:20:35,480][00159] Num frames 1300... |
|
[2024-06-06 13:20:35,627][00159] Num frames 1400... |
|
[2024-06-06 13:20:35,771][00159] Num frames 1500... |
|
[2024-06-06 13:20:35,940][00159] Avg episode rewards: #0: 20.365, true rewards: #0: 7.865 |
|
[2024-06-06 13:20:35,942][00159] Avg episode reward: 20.365, avg true_objective: 7.865 |
|
[2024-06-06 13:20:35,986][00159] Num frames 1600... |
|
[2024-06-06 13:20:36,137][00159] Num frames 1700... |
|
[2024-06-06 13:20:36,293][00159] Num frames 1800... |
|
[2024-06-06 13:20:36,443][00159] Num frames 1900... |
|
[2024-06-06 13:20:36,586][00159] Num frames 2000... |
|
[2024-06-06 13:20:36,738][00159] Num frames 2100... |
|
[2024-06-06 13:20:36,885][00159] Num frames 2200... |
|
[2024-06-06 13:20:37,031][00159] Num frames 2300... |
|
[2024-06-06 13:20:37,177][00159] Num frames 2400... |
|
[2024-06-06 13:20:37,333][00159] Num frames 2500... |
|
[2024-06-06 13:20:37,441][00159] Avg episode rewards: #0: 20.443, true rewards: #0: 8.443 |
|
[2024-06-06 13:20:37,443][00159] Avg episode reward: 20.443, avg true_objective: 8.443 |
|
[2024-06-06 13:20:37,540][00159] Num frames 2600... |
|
[2024-06-06 13:20:37,696][00159] Num frames 2700... |
|
[2024-06-06 13:20:37,848][00159] Num frames 2800... |
|
[2024-06-06 13:20:37,995][00159] Num frames 2900... |
|
[2024-06-06 13:20:38,141][00159] Num frames 3000... |
|
[2024-06-06 13:20:38,290][00159] Num frames 3100... |
|
[2024-06-06 13:20:38,448][00159] Num frames 3200... |
|
[2024-06-06 13:20:38,601][00159] Num frames 3300... |
|
[2024-06-06 13:20:38,757][00159] Num frames 3400... |
|
[2024-06-06 13:20:38,910][00159] Num frames 3500... |
|
[2024-06-06 13:20:39,059][00159] Num frames 3600... |
|
[2024-06-06 13:20:39,209][00159] Num frames 3700... |
|
[2024-06-06 13:20:39,362][00159] Num frames 3800... |
|
[2024-06-06 13:20:39,529][00159] Num frames 3900... |
|
[2024-06-06 13:20:39,680][00159] Num frames 4000... |
|
[2024-06-06 13:20:39,827][00159] Num frames 4100... |
|
[2024-06-06 13:20:39,986][00159] Num frames 4200... |
|
[2024-06-06 13:20:40,141][00159] Num frames 4300... |
|
[2024-06-06 13:20:40,328][00159] Avg episode rewards: #0: 27.722, true rewards: #0: 10.972 |
|
[2024-06-06 13:20:40,331][00159] Avg episode reward: 27.722, avg true_objective: 10.972 |
|
[2024-06-06 13:20:40,349][00159] Num frames 4400... |
|
[2024-06-06 13:20:40,500][00159] Num frames 4500... |
|
[2024-06-06 13:20:40,647][00159] Num frames 4600... |
|
[2024-06-06 13:20:40,795][00159] Num frames 4700... |
|
[2024-06-06 13:20:40,942][00159] Num frames 4800... |
|
[2024-06-06 13:20:41,137][00159] Num frames 4900... |
|
[2024-06-06 13:20:41,357][00159] Num frames 5000... |
|
[2024-06-06 13:20:41,569][00159] Num frames 5100... |
|
[2024-06-06 13:20:41,777][00159] Num frames 5200... |
|
[2024-06-06 13:20:42,002][00159] Num frames 5300... |
|
[2024-06-06 13:20:42,210][00159] Num frames 5400... |
|
[2024-06-06 13:20:42,421][00159] Num frames 5500... |
|
[2024-06-06 13:20:42,657][00159] Num frames 5600... |
|
[2024-06-06 13:20:42,875][00159] Num frames 5700... |
|
[2024-06-06 13:20:43,084][00159] Num frames 5800... |
|
[2024-06-06 13:20:43,294][00159] Num frames 5900... |
|
[2024-06-06 13:20:43,524][00159] Avg episode rewards: #0: 29.756, true rewards: #0: 11.956 |
|
[2024-06-06 13:20:43,526][00159] Avg episode reward: 29.756, avg true_objective: 11.956 |
|
[2024-06-06 13:20:43,579][00159] Num frames 6000... |
|
[2024-06-06 13:20:43,808][00159] Num frames 6100... |
|
[2024-06-06 13:20:44,003][00159] Num frames 6200... |
|
[2024-06-06 13:20:44,148][00159] Num frames 6300... |
|
[2024-06-06 13:20:44,293][00159] Num frames 6400... |
|
[2024-06-06 13:20:44,441][00159] Num frames 6500... |
|
[2024-06-06 13:20:44,598][00159] Num frames 6600... |
|
[2024-06-06 13:20:44,747][00159] Num frames 6700... |
|
[2024-06-06 13:20:44,897][00159] Num frames 6800... |
|
[2024-06-06 13:20:45,045][00159] Num frames 6900... |
|
[2024-06-06 13:20:45,206][00159] Num frames 7000... |
|
[2024-06-06 13:20:45,368][00159] Num frames 7100... |
|
[2024-06-06 13:20:45,532][00159] Num frames 7200... |
|
[2024-06-06 13:20:45,721][00159] Avg episode rewards: #0: 29.625, true rewards: #0: 12.125 |
|
[2024-06-06 13:20:45,723][00159] Avg episode reward: 29.625, avg true_objective: 12.125 |
|
[2024-06-06 13:20:45,767][00159] Num frames 7300... |
|
[2024-06-06 13:20:45,923][00159] Num frames 7400... |
|
[2024-06-06 13:20:46,071][00159] Num frames 7500... |
|
[2024-06-06 13:20:46,221][00159] Num frames 7600... |
|
[2024-06-06 13:20:46,374][00159] Num frames 7700... |
|
[2024-06-06 13:20:46,522][00159] Num frames 7800... |
|
[2024-06-06 13:20:46,682][00159] Num frames 7900... |
|
[2024-06-06 13:20:46,833][00159] Num frames 8000... |
|
[2024-06-06 13:20:46,986][00159] Num frames 8100... |
|
[2024-06-06 13:20:47,055][00159] Avg episode rewards: #0: 28.153, true rewards: #0: 11.581 |
|
[2024-06-06 13:20:47,057][00159] Avg episode reward: 28.153, avg true_objective: 11.581 |
|
[2024-06-06 13:20:47,199][00159] Num frames 8200... |
|
[2024-06-06 13:20:47,355][00159] Num frames 8300... |
|
[2024-06-06 13:20:47,510][00159] Num frames 8400... |
|
[2024-06-06 13:20:47,675][00159] Num frames 8500... |
|
[2024-06-06 13:20:47,822][00159] Num frames 8600... |
|
[2024-06-06 13:20:47,976][00159] Num frames 8700... |
|
[2024-06-06 13:20:48,122][00159] Num frames 8800... |
|
[2024-06-06 13:20:48,270][00159] Num frames 8900...
[2024-06-06 13:20:48,423][00159] Num frames 9000...
[2024-06-06 13:20:48,576][00159] Num frames 9100...
[2024-06-06 13:20:48,730][00159] Num frames 9200...
[2024-06-06 13:20:48,878][00159] Num frames 9300...
[2024-06-06 13:20:49,029][00159] Num frames 9400...
[2024-06-06 13:20:49,182][00159] Num frames 9500...
[2024-06-06 13:20:49,332][00159] Num frames 9600...
[2024-06-06 13:20:49,486][00159] Num frames 9700...
[2024-06-06 13:20:49,602][00159] Avg episode rewards: #0: 29.799, true rewards: #0: 12.174
[2024-06-06 13:20:49,604][00159] Avg episode reward: 29.799, avg true_objective: 12.174
[2024-06-06 13:20:49,709][00159] Num frames 9800...
[2024-06-06 13:20:49,854][00159] Num frames 9900...
[2024-06-06 13:20:49,999][00159] Num frames 10000...
[2024-06-06 13:20:50,148][00159] Num frames 10100...
[2024-06-06 13:20:50,290][00159] Num frames 10200...
[2024-06-06 13:20:50,436][00159] Num frames 10300...
[2024-06-06 13:20:50,596][00159] Num frames 10400...
[2024-06-06 13:20:50,755][00159] Num frames 10500...
[2024-06-06 13:20:50,902][00159] Num frames 10600...
[2024-06-06 13:20:51,052][00159] Num frames 10700...
[2024-06-06 13:20:51,202][00159] Num frames 10800...
[2024-06-06 13:20:51,348][00159] Num frames 10900...
[2024-06-06 13:20:51,499][00159] Num frames 11000...
[2024-06-06 13:20:51,647][00159] Num frames 11100...
[2024-06-06 13:20:51,807][00159] Num frames 11200...
[2024-06-06 13:20:51,931][00159] Avg episode rewards: #0: 30.381, true rewards: #0: 12.492
[2024-06-06 13:20:51,933][00159] Avg episode reward: 30.381, avg true_objective: 12.492
[2024-06-06 13:20:52,030][00159] Num frames 11300...
[2024-06-06 13:20:52,178][00159] Num frames 11400...
[2024-06-06 13:20:52,326][00159] Num frames 11500...
[2024-06-06 13:20:52,474][00159] Num frames 11600...
[2024-06-06 13:20:52,619][00159] Num frames 11700...
[2024-06-06 13:20:52,772][00159] Num frames 11800...
[2024-06-06 13:20:52,930][00159] Num frames 11900...
[2024-06-06 13:20:53,079][00159] Num frames 12000...
[2024-06-06 13:20:53,230][00159] Num frames 12100...
[2024-06-06 13:20:53,384][00159] Num frames 12200...
[2024-06-06 13:20:53,530][00159] Num frames 12300...
[2024-06-06 13:20:53,689][00159] Num frames 12400...
[2024-06-06 13:20:53,849][00159] Num frames 12500...
[2024-06-06 13:20:54,050][00159] Avg episode rewards: #0: 30.687, true rewards: #0: 12.587
[2024-06-06 13:20:54,053][00159] Avg episode reward: 30.687, avg true_objective: 12.587
[2024-06-06 13:22:15,894][00159] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2024-06-06 13:31:43,856][00159] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2024-06-06 13:31:43,859][00159] Overriding arg 'num_workers' with value 1 passed from command line
[2024-06-06 13:31:43,863][00159] Adding new argument 'no_render'=True that is not in the saved config file!
[2024-06-06 13:31:43,864][00159] Adding new argument 'save_video'=True that is not in the saved config file!
[2024-06-06 13:31:43,868][00159] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2024-06-06 13:31:43,870][00159] Adding new argument 'video_name'=None that is not in the saved config file!
[2024-06-06 13:31:43,871][00159] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2024-06-06 13:31:43,872][00159] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2024-06-06 13:31:43,873][00159] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2024-06-06 13:31:43,874][00159] Adding new argument 'hf_repository'='swritchie/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2024-06-06 13:31:43,875][00159] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2024-06-06 13:31:43,877][00159] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2024-06-06 13:31:43,878][00159] Adding new argument 'train_script'=None that is not in the saved config file!
[2024-06-06 13:31:43,880][00159] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2024-06-06 13:31:43,882][00159] Using frameskip 1 and render_action_repeat=4 for evaluation
[2024-06-06 13:31:43,932][00159] RunningMeanStd input shape: (3, 72, 128)
[2024-06-06 13:31:43,935][00159] RunningMeanStd input shape: (1,)
[2024-06-06 13:31:43,953][00159] ConvEncoder: input_channels=3
[2024-06-06 13:31:44,026][00159] Conv encoder output size: 512
[2024-06-06 13:31:44,028][00159] Policy head output size: 512
[2024-06-06 13:31:44,058][00159] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2024-06-06 13:31:45,044][00159] Num frames 100...
[2024-06-06 13:31:45,284][00159] Num frames 200...
[2024-06-06 13:31:45,506][00159] Num frames 300...
[2024-06-06 13:31:45,732][00159] Num frames 400...
[2024-06-06 13:31:45,939][00159] Num frames 500...
[2024-06-06 13:31:46,186][00159] Num frames 600...
[2024-06-06 13:31:46,402][00159] Num frames 700...
[2024-06-06 13:31:46,558][00159] Avg episode rewards: #0: 15.680, true rewards: #0: 7.680
[2024-06-06 13:31:46,560][00159] Avg episode reward: 15.680, avg true_objective: 7.680
[2024-06-06 13:31:46,612][00159] Num frames 800...
[2024-06-06 13:31:46,769][00159] Num frames 900...
[2024-06-06 13:31:46,921][00159] Num frames 1000...
[2024-06-06 13:31:47,072][00159] Num frames 1100...
[2024-06-06 13:31:47,231][00159] Num frames 1200...
[2024-06-06 13:31:47,385][00159] Num frames 1300...
[2024-06-06 13:31:47,540][00159] Num frames 1400...
[2024-06-06 13:31:47,700][00159] Num frames 1500...
[2024-06-06 13:31:47,858][00159] Num frames 1600...
[2024-06-06 13:31:48,005][00159] Num frames 1700...
[2024-06-06 13:31:48,197][00159] Avg episode rewards: #0: 19.960, true rewards: #0: 8.960
[2024-06-06 13:31:48,199][00159] Avg episode reward: 19.960, avg true_objective: 8.960
[2024-06-06 13:31:48,226][00159] Num frames 1800...
[2024-06-06 13:31:48,382][00159] Num frames 1900...
[2024-06-06 13:31:48,536][00159] Num frames 2000...
[2024-06-06 13:31:48,685][00159] Num frames 2100...
[2024-06-06 13:31:48,832][00159] Num frames 2200...
[2024-06-06 13:31:48,977][00159] Num frames 2300...
[2024-06-06 13:31:49,124][00159] Num frames 2400...
[2024-06-06 13:31:49,276][00159] Avg episode rewards: #0: 18.213, true rewards: #0: 8.213
[2024-06-06 13:31:49,278][00159] Avg episode reward: 18.213, avg true_objective: 8.213
[2024-06-06 13:31:49,335][00159] Num frames 2500...
[2024-06-06 13:31:49,487][00159] Num frames 2600...
[2024-06-06 13:31:49,637][00159] Num frames 2700...
[2024-06-06 13:31:49,790][00159] Num frames 2800...
[2024-06-06 13:31:49,943][00159] Num frames 2900...
[2024-06-06 13:31:50,065][00159] Avg episode rewards: #0: 15.610, true rewards: #0: 7.360
[2024-06-06 13:31:50,067][00159] Avg episode reward: 15.610, avg true_objective: 7.360
[2024-06-06 13:31:50,157][00159] Num frames 3000...
[2024-06-06 13:31:50,314][00159] Num frames 3100...
[2024-06-06 13:31:50,467][00159] Num frames 3200...
[2024-06-06 13:31:50,619][00159] Num frames 3300...
[2024-06-06 13:31:50,770][00159] Num frames 3400...
[2024-06-06 13:31:50,915][00159] Num frames 3500...
[2024-06-06 13:31:51,067][00159] Num frames 3600...
[2024-06-06 13:31:51,222][00159] Num frames 3700...
[2024-06-06 13:31:51,386][00159] Num frames 3800...
[2024-06-06 13:31:51,537][00159] Num frames 3900...
[2024-06-06 13:31:51,689][00159] Num frames 4000...
[2024-06-06 13:31:51,839][00159] Num frames 4100...
[2024-06-06 13:31:51,984][00159] Num frames 4200...
[2024-06-06 13:31:52,139][00159] Num frames 4300...
[2024-06-06 13:31:52,288][00159] Num frames 4400...
[2024-06-06 13:31:52,448][00159] Num frames 4500...
[2024-06-06 13:31:52,591][00159] Avg episode rewards: #0: 21.318, true rewards: #0: 9.118
[2024-06-06 13:31:52,593][00159] Avg episode reward: 21.318, avg true_objective: 9.118
[2024-06-06 13:31:52,657][00159] Num frames 4600...
[2024-06-06 13:31:52,803][00159] Num frames 4700...
[2024-06-06 13:31:52,955][00159] Num frames 4800...
[2024-06-06 13:31:53,103][00159] Num frames 4900...
[2024-06-06 13:31:53,253][00159] Num frames 5000...
[2024-06-06 13:31:53,411][00159] Num frames 5100...
[2024-06-06 13:31:53,563][00159] Num frames 5200...
[2024-06-06 13:31:53,719][00159] Num frames 5300...
[2024-06-06 13:31:53,835][00159] Avg episode rewards: #0: 20.208, true rewards: #0: 8.875
[2024-06-06 13:31:53,838][00159] Avg episode reward: 20.208, avg true_objective: 8.875
[2024-06-06 13:31:54,165][00159] Num frames 5400...
[2024-06-06 13:31:54,319][00159] Num frames 5500...
[2024-06-06 13:31:54,489][00159] Num frames 5600...
[2024-06-06 13:31:54,641][00159] Num frames 5700...
[2024-06-06 13:31:54,800][00159] Num frames 5800...
[2024-06-06 13:31:54,954][00159] Num frames 5900...
[2024-06-06 13:31:55,102][00159] Num frames 6000...
[2024-06-06 13:31:55,252][00159] Num frames 6100...
[2024-06-06 13:31:55,416][00159] Num frames 6200...
[2024-06-06 13:31:55,572][00159] Num frames 6300...
[2024-06-06 13:31:55,724][00159] Num frames 6400...
[2024-06-06 13:31:55,877][00159] Num frames 6500...
[2024-06-06 13:31:56,024][00159] Num frames 6600...
[2024-06-06 13:31:56,182][00159] Num frames 6700...
[2024-06-06 13:31:56,334][00159] Num frames 6800...
[2024-06-06 13:31:56,439][00159] Avg episode rewards: #0: 22.327, true rewards: #0: 9.756
[2024-06-06 13:31:56,442][00159] Avg episode reward: 22.327, avg true_objective: 9.756
[2024-06-06 13:31:56,595][00159] Num frames 6900...
[2024-06-06 13:31:56,818][00159] Num frames 7000...
[2024-06-06 13:31:57,034][00159] Num frames 7100...
[2024-06-06 13:31:57,251][00159] Num frames 7200...
[2024-06-06 13:31:57,481][00159] Avg episode rewards: #0: 20.346, true rewards: #0: 9.096
[2024-06-06 13:31:57,483][00159] Avg episode reward: 20.346, avg true_objective: 9.096
[2024-06-06 13:31:57,539][00159] Num frames 7300...
[2024-06-06 13:31:57,780][00159] Num frames 7400...
[2024-06-06 13:31:58,009][00159] Num frames 7500...
[2024-06-06 13:31:58,239][00159] Num frames 7600...
[2024-06-06 13:31:58,479][00159] Num frames 7700...
[2024-06-06 13:31:58,723][00159] Num frames 7800...
[2024-06-06 13:31:58,951][00159] Num frames 7900...
[2024-06-06 13:31:59,173][00159] Num frames 8000...
[2024-06-06 13:31:59,428][00159] Num frames 8100...
[2024-06-06 13:31:59,651][00159] Num frames 8200...
[2024-06-06 13:31:59,887][00159] Num frames 8300...
[2024-06-06 13:32:00,049][00159] Num frames 8400...
[2024-06-06 13:32:00,195][00159] Num frames 8500...
[2024-06-06 13:32:00,339][00159] Num frames 8600...
[2024-06-06 13:32:00,484][00159] Num frames 8700...
[2024-06-06 13:32:00,651][00159] Num frames 8800...
[2024-06-06 13:32:00,803][00159] Num frames 8900...
[2024-06-06 13:32:00,952][00159] Num frames 9000...
[2024-06-06 13:32:01,105][00159] Num frames 9100...
[2024-06-06 13:32:01,192][00159] Avg episode rewards: #0: 23.574, true rewards: #0: 10.130
[2024-06-06 13:32:01,193][00159] Avg episode reward: 23.574, avg true_objective: 10.130
[2024-06-06 13:32:01,318][00159] Num frames 9200...
[2024-06-06 13:32:01,474][00159] Num frames 9300...
[2024-06-06 13:32:01,629][00159] Num frames 9400...
[2024-06-06 13:32:01,783][00159] Num frames 9500...
[2024-06-06 13:32:01,934][00159] Num frames 9600...
[2024-06-06 13:32:02,036][00159] Avg episode rewards: #0: 22.029, true rewards: #0: 9.629
[2024-06-06 13:32:02,037][00159] Avg episode reward: 22.029, avg true_objective: 9.629
[2024-06-06 13:33:05,226][00159] Replay video saved to /content/train_dir/default_experiment/replay.mp4!