[2025-01-16 08:24:15,883][00226] Saving configuration to /content/train_dir/default_experiment/config.json...
[2025-01-16 08:24:15,886][00226] Rollout worker 0 uses device cpu
[2025-01-16 08:24:15,887][00226] Rollout worker 1 uses device cpu
[2025-01-16 08:24:15,891][00226] Rollout worker 2 uses device cpu
[2025-01-16 08:24:15,892][00226] Rollout worker 3 uses device cpu
[2025-01-16 08:24:15,893][00226] Rollout worker 4 uses device cpu
[2025-01-16 08:24:15,894][00226] Rollout worker 5 uses device cpu
[2025-01-16 08:24:15,895][00226] Rollout worker 6 uses device cpu
[2025-01-16 08:24:15,896][00226] Rollout worker 7 uses device cpu
[2025-01-16 08:24:16,052][00226] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-01-16 08:24:16,054][00226] InferenceWorker_p0-w0: min num requests: 2
[2025-01-16 08:24:16,087][00226] Starting all processes...
[2025-01-16 08:24:16,088][00226] Starting process learner_proc0
[2025-01-16 08:24:16,132][00226] Starting all processes...
[2025-01-16 08:24:16,141][00226] Starting process inference_proc0-0
[2025-01-16 08:24:16,141][00226] Starting process rollout_proc0
[2025-01-16 08:24:16,143][00226] Starting process rollout_proc1
[2025-01-16 08:24:16,144][00226] Starting process rollout_proc2
[2025-01-16 08:24:16,144][00226] Starting process rollout_proc3
[2025-01-16 08:24:16,144][00226] Starting process rollout_proc4
[2025-01-16 08:24:16,144][00226] Starting process rollout_proc5
[2025-01-16 08:24:16,144][00226] Starting process rollout_proc6
[2025-01-16 08:24:16,144][00226] Starting process rollout_proc7
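The startup block above shows Sample Factory's APPO topology: one learner process, one inference worker, and eight rollout workers, with the run config saved to /content/train_dir/default_experiment/config.json. A run with this shape is typically launched through Sample Factory v2's generic entry points; the following is a minimal sketch under that assumption, with the environment name left as a placeholder because it is not recorded in this log.

```python
# Minimal launch sketch, assuming Sample Factory v2's parse_sf_args /
# parse_full_cfg / run_rl entry points. The env name is NOT in this log.
from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
from sample_factory.train import run_rl

def main():
    # A VizDoom env must already be registered with Sample Factory
    # (e.g. via the sf_examples.vizdoom helpers) for --env to resolve.
    argv = [
        "--env=<registered_vizdoom_env>",   # placeholder, not recorded here
        "--num_workers=8",                  # rollout workers 0-7 above
        "--train_dir=/content/train_dir",
        "--experiment=default_experiment",
    ]
    parser, _ = parse_sf_args(argv)
    cfg = parse_full_cfg(parser, argv)
    return run_rl(cfg)

if __name__ == "__main__":
    main()
```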
[2025-01-16 08:24:32,238][02691] Worker 6 uses CPU cores [0]
[2025-01-16 08:24:32,702][02685] Worker 0 uses CPU cores [0]
[2025-01-16 08:24:32,734][02688] Worker 2 uses CPU cores [0]
[2025-01-16 08:24:32,869][02671] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-01-16 08:24:32,875][02671] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2025-01-16 08:24:32,881][02687] Worker 3 uses CPU cores [1]
[2025-01-16 08:24:32,956][02671] Num visible devices: 1
[2025-01-16 08:24:32,962][02690] Worker 5 uses CPU cores [1]
[2025-01-16 08:24:32,978][02671] Starting seed is not provided
[2025-01-16 08:24:32,979][02671] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-01-16 08:24:32,979][02671] Initializing actor-critic model on device cuda:0
[2025-01-16 08:24:32,980][02671] RunningMeanStd input shape: (3, 72, 128)
[2025-01-16 08:24:32,984][02671] RunningMeanStd input shape: (1,)
[2025-01-16 08:24:33,027][02671] ConvEncoder: input_channels=3
[2025-01-16 08:24:33,110][02692] Worker 7 uses CPU cores [1]
[2025-01-16 08:24:33,142][02689] Worker 4 uses CPU cores [0]
[2025-01-16 08:24:33,196][02686] Worker 1 uses CPU cores [1]
[2025-01-16 08:24:33,287][02684] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-01-16 08:24:33,287][02684] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2025-01-16 08:24:33,328][02684] Num visible devices: 1
[2025-01-16 08:24:33,434][02671] Conv encoder output size: 512
[2025-01-16 08:24:33,434][02671] Policy head output size: 512
[2025-01-16 08:24:33,496][02671] Created Actor Critic model with architecture:
[2025-01-16 08:24:33,496][02671] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
[2025-01-16 08:24:33,922][02671] Using optimizer <class 'torch.optim.adam.Adam'>
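The dump above fully determines the model's shape: a (3, 72, 128) observation passes through three Conv2d+ELU blocks and one Linear+ELU to a 512-dim feature, a GRU(512, 512) core carries memory across steps, and two linear heads emit the scalar value and the 5 action logits, trained with torch.optim.Adam. A hedged PyTorch reconstruction follows; the conv kernel sizes and strides are not printed in the log, so an Atari-style 8/4/3 stack is assumed purely for illustration.

```python
# Hedged reconstruction of the printed architecture. Kernel sizes/strides
# are assumptions (the log omits them); everything else matches the dump.
import torch
import torch.nn as nn

class ActorCriticSketch(nn.Module):
    def __init__(self, num_actions: int = 5, hidden: int = 512):
        super().__init__()
        self.conv_head = nn.Sequential(              # input: (3, 72, 128)
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        with torch.no_grad():                        # probe the flattened size
            n_flat = self.conv_head(torch.zeros(1, 3, 72, 128)).numel()
        self.mlp_layers = nn.Sequential(nn.Linear(n_flat, hidden), nn.ELU())
        self.core = nn.GRU(hidden, hidden)           # ModelCoreRNN: GRU(512, 512)
        self.critic_linear = nn.Linear(hidden, 1)    # value head
        self.distribution_linear = nn.Linear(hidden, num_actions)  # 5 logits

    def forward(self, obs, rnn_state):
        x = self.mlp_layers(self.conv_head(obs).flatten(1))
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)
        x = x.squeeze(0)
        return self.distribution_linear(x), self.critic_linear(x), rnn_state
```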
[2025-01-16 08:24:36,047][00226] Heartbeat connected on Batcher_0
[2025-01-16 08:24:36,053][00226] Heartbeat connected on InferenceWorker_p0-w0
[2025-01-16 08:24:36,061][00226] Heartbeat connected on RolloutWorker_w0
[2025-01-16 08:24:36,065][00226] Heartbeat connected on RolloutWorker_w1
[2025-01-16 08:24:36,069][00226] Heartbeat connected on RolloutWorker_w2
[2025-01-16 08:24:36,071][00226] Heartbeat connected on RolloutWorker_w3
[2025-01-16 08:24:36,075][00226] Heartbeat connected on RolloutWorker_w4
[2025-01-16 08:24:36,081][00226] Heartbeat connected on RolloutWorker_w6
[2025-01-16 08:24:36,084][00226] Heartbeat connected on RolloutWorker_w5
[2025-01-16 08:24:36,087][00226] Heartbeat connected on RolloutWorker_w7
[2025-01-16 08:24:38,043][02671] No checkpoints found
[2025-01-16 08:24:38,043][02671] Did not load from checkpoint, starting from scratch!
[2025-01-16 08:24:38,043][02671] Initialized policy 0 weights for model version 0
[2025-01-16 08:24:38,046][02671] LearnerWorker_p0 finished initialization!
[2025-01-16 08:24:38,052][02671] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-01-16 08:24:38,047][00226] Heartbeat connected on LearnerWorker_p0
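On startup the learner looks for an existing checkpoint under the experiment directory and restores it before minting version 0; here none exists, so training starts from scratch. A minimal sketch of that resume step, with the checkpoint dict keys assumed for illustration:

```python
# Hedged resume sketch. The "model" / "policy_version" keys are assumed
# for illustration; only the probe-and-restore flow is from the log.
from pathlib import Path
import torch

def load_latest_checkpoint(model, ckpt_dir: str) -> int:
    checkpoints = sorted(Path(ckpt_dir).glob("checkpoint_*.pth"))
    if not checkpoints:
        return 0  # "No checkpoints found ... starting from scratch!"
    state = torch.load(checkpoints[-1], map_location="cpu")
    model.load_state_dict(state["model"])       # assumed key
    return state.get("policy_version", 0)       # assumed key
```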
[2025-01-16 08:24:38,254][02684] RunningMeanStd input shape: (3, 72, 128)
[2025-01-16 08:24:38,255][02684] RunningMeanStd input shape: (1,)
[2025-01-16 08:24:38,266][02684] ConvEncoder: input_channels=3
[2025-01-16 08:24:38,364][02684] Conv encoder output size: 512
[2025-01-16 08:24:38,364][02684] Policy head output size: 512
[2025-01-16 08:24:38,398][00226] Inference worker 0-0 is ready!
[2025-01-16 08:24:38,400][00226] All inference workers are ready! Signal rollout workers to start!
[2025-01-16 08:24:38,603][02687] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-16 08:24:38,605][02690] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-16 08:24:38,609][02686] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-16 08:24:38,607][02692] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-16 08:24:38,610][02689] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-16 08:24:38,607][02688] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-16 08:24:38,609][02685] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-16 08:24:38,611][02691] Doom resolution: 160x120, resize resolution: (128, 72)
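Each rollout worker renders VizDoom at 160x120 and resizes frames to the policy's (3, 72, 128) input before inference, as the eight lines above report. A minimal sketch of that preprocessing step, assuming OpenCV for the resize:

```python
# Sketch of the per-frame resize the workers report above; Sample Factory's
# own wrappers do the equivalent, this is just an OpenCV illustration.
import cv2
import numpy as np

def resize_obs(frame: np.ndarray) -> np.ndarray:
    """HxWxC uint8 frame at 120x160 -> CxHxW uint8 at 72x128."""
    resized = cv2.resize(frame, (128, 72), interpolation=cv2.INTER_AREA)
    return resized.transpose(2, 0, 1)  # HWC -> CHW, i.e. (3, 72, 128)
```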
[2025-01-16 08:24:39,468][02689] Decorrelating experience for 0 frames...
[2025-01-16 08:24:39,837][02689] Decorrelating experience for 32 frames...
[2025-01-16 08:24:40,332][02686] Decorrelating experience for 0 frames...
[2025-01-16 08:24:40,339][02687] Decorrelating experience for 0 frames...
[2025-01-16 08:24:40,343][02692] Decorrelating experience for 0 frames...
[2025-01-16 08:24:40,349][02689] Decorrelating experience for 64 frames...
[2025-01-16 08:24:40,352][02690] Decorrelating experience for 0 frames...
[2025-01-16 08:24:40,979][00226] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-01-16 08:24:41,153][02689] Decorrelating experience for 96 frames...
[2025-01-16 08:24:41,154][02685] Decorrelating experience for 0 frames...
[2025-01-16 08:24:41,682][02687] Decorrelating experience for 32 frames...
[2025-01-16 08:24:41,685][02692] Decorrelating experience for 32 frames...
[2025-01-16 08:24:41,694][02690] Decorrelating experience for 32 frames...
[2025-01-16 08:24:41,753][02686] Decorrelating experience for 32 frames...
[2025-01-16 08:24:42,495][02685] Decorrelating experience for 32 frames...
[2025-01-16 08:24:43,372][02692] Decorrelating experience for 64 frames...
[2025-01-16 08:24:43,379][02687] Decorrelating experience for 64 frames...
[2025-01-16 08:24:43,432][02686] Decorrelating experience for 64 frames...
[2025-01-16 08:24:43,448][02685] Decorrelating experience for 64 frames...
[2025-01-16 08:24:45,354][02690] Decorrelating experience for 64 frames...
[2025-01-16 08:24:45,441][02687] Decorrelating experience for 96 frames...
[2025-01-16 08:24:45,506][02686] Decorrelating experience for 96 frames...
[2025-01-16 08:24:45,979][00226] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 78.8. Samples: 394. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-01-16 08:24:45,984][00226] Avg episode reward: [(0, '3.370')]
[2025-01-16 08:24:46,395][02685] Decorrelating experience for 96 frames...
[2025-01-16 08:24:48,647][02692] Decorrelating experience for 96 frames...
[2025-01-16 08:24:49,133][02690] Decorrelating experience for 96 frames...
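The "Decorrelating experience for N frames" lines show each worker stepping its environments for a different number of warm-up frames (0, 32, 64, 96) before collection begins, so the eight workers' episodes start out of phase and the learner does not receive eight near-identical trajectories. The idea, sketched with the Gymnasium step API (not Sample Factory's actual implementation):

```python
# Decorrelation sketch: step the env a worker-specific number of frames
# with random actions so workers' episode boundaries are out of phase.
def decorrelate(env, num_frames: int) -> None:
    env.reset()
    for _ in range(num_frames):
        _, _, terminated, truncated, _ = env.step(env.action_space.sample())
        if terminated or truncated:
            env.reset()
```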
[2025-01-16 08:24:50,979][00226] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 218.2. Samples: 2182. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-01-16 08:24:50,985][00226] Avg episode reward: [(0, '3.211')]
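Each report pair above gives frame throughput over sliding 10/60/300-second windows (nan until a window holds at least two samples), the running totals of frames and samples, and "Policy #0 lag": the min/avg/max number of policy versions between the weights that produced a sample and the learner's current version (-1.0 before any samples arrive). A minimal sketch of the windowed-FPS bookkeeping, assuming (timestamp, total_frames) pairs are recorded at each report:

```python
# Sketch of the sliding-window FPS readout in the "Fps is (...)" lines.
import time
from collections import deque

history = deque(maxlen=1000)  # (timestamp, total_frames) at each report

def report_fps(total_frames: int) -> dict:
    now = time.time()
    history.append((now, total_frames))
    fps = {}
    for window in (10, 60, 300):
        recent = [(t, f) for t, f in history if now - t <= window]
        if len(recent) >= 2:
            (t0, f0), (t1, f1) = recent[0], recent[-1]
            fps[window] = (f1 - f0) / max(t1 - t0, 1e-6)
        else:
            fps[window] = float("nan")  # the initial "nan" readings above
    return fps
```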
[2025-01-16 08:24:52,229][02671] Signal inference workers to stop experience collection...
[2025-01-16 08:24:52,235][02684] InferenceWorker_p0-w0: stopping experience collection
[2025-01-16 08:24:54,165][02671] Signal inference workers to resume experience collection...
[2025-01-16 08:24:54,166][02684] InferenceWorker_p0-w0: resuming experience collection
[2025-01-16 08:24:55,979][00226] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 12288. Throughput: 0: 206.4. Samples: 3096. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2025-01-16 08:24:55,982][00226] Avg episode reward: [(0, '3.569')]
[2025-01-16 08:25:00,979][00226] Fps is (10 sec: 3686.4, 60 sec: 1843.2, 300 sec: 1843.2). Total num frames: 36864. Throughput: 0: 419.2. Samples: 8384. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:25:00,982][00226] Avg episode reward: [(0, '3.995')]
[2025-01-16 08:25:01,246][02684] Updated weights for policy 0, policy_version 10 (0.0197)
[2025-01-16 08:25:05,979][00226] Fps is (10 sec: 4095.9, 60 sec: 2129.9, 300 sec: 2129.9). Total num frames: 53248. Throughput: 0: 541.0. Samples: 13526. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:25:05,984][00226] Avg episode reward: [(0, '4.313')]
[2025-01-16 08:25:10,979][00226] Fps is (10 sec: 3276.8, 60 sec: 2321.1, 300 sec: 2321.1). Total num frames: 69632. Throughput: 0: 528.7. Samples: 15862. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:25:10,985][00226] Avg episode reward: [(0, '4.267')]
[2025-01-16 08:25:12,969][02684] Updated weights for policy 0, policy_version 20 (0.0021)
[2025-01-16 08:25:15,979][00226] Fps is (10 sec: 4096.0, 60 sec: 2691.7, 300 sec: 2691.7). Total num frames: 94208. Throughput: 0: 642.0. Samples: 22470. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:25:15,981][00226] Avg episode reward: [(0, '4.519')]
[2025-01-16 08:25:20,981][00226] Fps is (10 sec: 4095.5, 60 sec: 2764.7, 300 sec: 2764.7). Total num frames: 110592. Throughput: 0: 698.4. Samples: 27938. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:25:20,984][00226] Avg episode reward: [(0, '4.481')]
[2025-01-16 08:25:21,004][02671] Saving new best policy, reward=4.481!
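"Saving new best policy" fires whenever the running average episode reward beats the best value seen so far, snapshotting the current weights separately from the rotating checkpoints. A hedged sketch of that trigger:

```python
# Best-policy trigger sketch: snapshot weights when avg reward improves.
import torch

best_reward = float("-inf")

def maybe_save_best(model, avg_reward: float, path: str = "best_policy.pth") -> None:
    global best_reward
    if avg_reward > best_reward:
        best_reward = avg_reward  # e.g. 4.481 -> 4.597 -> 4.605 above
        torch.save(model.state_dict(), path)
```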
[2025-01-16 08:25:24,742][02684] Updated weights for policy 0, policy_version 30 (0.0015)
[2025-01-16 08:25:25,979][00226] Fps is (10 sec: 3276.8, 60 sec: 2821.7, 300 sec: 2821.7). Total num frames: 126976. Throughput: 0: 664.3. Samples: 29894. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-01-16 08:25:25,981][00226] Avg episode reward: [(0, '4.597')]
[2025-01-16 08:25:25,986][02671] Saving new best policy, reward=4.597!
[2025-01-16 08:25:30,979][00226] Fps is (10 sec: 4096.5, 60 sec: 3031.0, 300 sec: 3031.0). Total num frames: 151552. Throughput: 0: 800.6. Samples: 36420. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:25:30,985][00226] Avg episode reward: [(0, '4.447')]
[2025-01-16 08:25:33,916][02684] Updated weights for policy 0, policy_version 40 (0.0022)
[2025-01-16 08:25:35,979][00226] Fps is (10 sec: 4095.9, 60 sec: 3053.4, 300 sec: 3053.4). Total num frames: 167936. Throughput: 0: 895.8. Samples: 42494. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:25:35,983][00226] Avg episode reward: [(0, '4.232')]
[2025-01-16 08:25:40,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3072.0, 300 sec: 3072.0). Total num frames: 184320. Throughput: 0: 921.6. Samples: 44568. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:25:40,982][00226] Avg episode reward: [(0, '4.261')]
[2025-01-16 08:25:45,280][02684] Updated weights for policy 0, policy_version 50 (0.0015)
[2025-01-16 08:25:45,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3213.8). Total num frames: 208896. Throughput: 0: 940.6. Samples: 50712. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:25:45,986][00226] Avg episode reward: [(0, '4.513')]
[2025-01-16 08:25:50,979][00226] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3276.8). Total num frames: 229376. Throughput: 0: 972.0. Samples: 57268. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-16 08:25:50,982][00226] Avg episode reward: [(0, '4.605')]
[2025-01-16 08:25:50,999][02671] Saving new best policy, reward=4.605!
[2025-01-16 08:25:55,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3222.2). Total num frames: 241664. Throughput: 0: 963.6. Samples: 59226. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:25:55,984][00226] Avg episode reward: [(0, '4.420')]
[2025-01-16 08:25:56,645][02684] Updated weights for policy 0, policy_version 60 (0.0023)
[2025-01-16 08:26:00,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3276.8). Total num frames: 262144. Throughput: 0: 940.4. Samples: 64788. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:26:00,982][00226] Avg episode reward: [(0, '4.358')]
[2025-01-16 08:26:05,855][02684] Updated weights for policy 0, policy_version 70 (0.0021)
[2025-01-16 08:26:05,979][00226] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3373.2). Total num frames: 286720. Throughput: 0: 967.5. Samples: 71474. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:26:05,982][00226] Avg episode reward: [(0, '4.568')]
[2025-01-16 08:26:10,983][00226] Fps is (10 sec: 3685.2, 60 sec: 3822.7, 300 sec: 3322.2). Total num frames: 299008. Throughput: 0: 979.6. Samples: 73980. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:26:10,985][00226] Avg episode reward: [(0, '4.736')]
[2025-01-16 08:26:10,993][02671] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000073_299008.pth...
[2025-01-16 08:26:11,127][02671] Saving new best policy, reward=4.736!
[2025-01-16 08:26:15,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3363.0). Total num frames: 319488. Throughput: 0: 942.9. Samples: 78852. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:26:15,987][00226] Avg episode reward: [(0, '4.507')]
[2025-01-16 08:26:17,486][02684] Updated weights for policy 0, policy_version 80 (0.0014)
[2025-01-16 08:26:20,979][00226] Fps is (10 sec: 4097.3, 60 sec: 3823.0, 300 sec: 3399.7). Total num frames: 339968. Throughput: 0: 955.6. Samples: 85494. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:26:20,982][00226] Avg episode reward: [(0, '4.563')]
[2025-01-16 08:26:25,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3393.8). Total num frames: 356352. Throughput: 0: 973.5. Samples: 88376. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:26:25,984][00226] Avg episode reward: [(0, '4.638')]
[2025-01-16 08:26:29,048][02684] Updated weights for policy 0, policy_version 90 (0.0029)
[2025-01-16 08:26:30,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3388.5). Total num frames: 372736. Throughput: 0: 939.0. Samples: 92966. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:26:30,984][00226] Avg episode reward: [(0, '4.700')]
[2025-01-16 08:26:35,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3454.9). Total num frames: 397312. Throughput: 0: 942.7. Samples: 99690. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-16 08:26:35,982][00226] Avg episode reward: [(0, '4.563')]
[2025-01-16 08:26:38,267][02684] Updated weights for policy 0, policy_version 100 (0.0025)
[2025-01-16 08:26:40,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3447.5). Total num frames: 413696. Throughput: 0: 971.8. Samples: 102958. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:26:40,986][00226] Avg episode reward: [(0, '4.504')]
[2025-01-16 08:26:45,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3440.6). Total num frames: 430080. Throughput: 0: 944.0. Samples: 107266. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:26:45,982][00226] Avg episode reward: [(0, '4.661')]
[2025-01-16 08:26:49,890][02684] Updated weights for policy 0, policy_version 110 (0.0019)
[2025-01-16 08:26:50,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3497.4). Total num frames: 454656. Throughput: 0: 937.2. Samples: 113646. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:26:50,982][00226] Avg episode reward: [(0, '4.587')]
[2025-01-16 08:26:55,980][00226] Fps is (10 sec: 4505.5, 60 sec: 3891.2, 300 sec: 3519.5). Total num frames: 475136. Throughput: 0: 953.8. Samples: 116900. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:26:55,985][00226] Avg episode reward: [(0, '4.460')]
[2025-01-16 08:27:00,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3481.6). Total num frames: 487424. Throughput: 0: 952.4. Samples: 121708. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-01-16 08:27:00,982][00226] Avg episode reward: [(0, '4.635')]
[2025-01-16 08:27:01,609][02684] Updated weights for policy 0, policy_version 120 (0.0028)
[2025-01-16 08:27:05,979][00226] Fps is (10 sec: 3276.9, 60 sec: 3686.4, 300 sec: 3502.8). Total num frames: 507904. Throughput: 0: 934.7. Samples: 127554. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:27:05,982][00226] Avg episode reward: [(0, '4.784')]
[2025-01-16 08:27:05,986][02671] Saving new best policy, reward=4.784!
[2025-01-16 08:27:10,980][00226] Fps is (10 sec: 4095.9, 60 sec: 3823.1, 300 sec: 3522.6). Total num frames: 528384. Throughput: 0: 941.8. Samples: 130756. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:27:10,982][00226] Avg episode reward: [(0, '4.659')]
[2025-01-16 08:27:11,046][02684] Updated weights for policy 0, policy_version 130 (0.0021)
[2025-01-16 08:27:15,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3514.6). Total num frames: 544768. Throughput: 0: 947.7. Samples: 135612. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:27:15,982][00226] Avg episode reward: [(0, '4.591')]
[2025-01-16 08:27:20,979][00226] Fps is (10 sec: 3276.9, 60 sec: 3686.4, 300 sec: 3507.2). Total num frames: 561152. Throughput: 0: 914.8. Samples: 140858. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:27:20,983][00226] Avg episode reward: [(0, '4.697')]
[2025-01-16 08:27:22,984][02684] Updated weights for policy 0, policy_version 140 (0.0021)
[2025-01-16 08:27:25,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3525.0). Total num frames: 581632. Throughput: 0: 913.5. Samples: 144066. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:27:25,987][00226] Avg episode reward: [(0, '4.884')]
[2025-01-16 08:27:26,027][02671] Saving new best policy, reward=4.884!
[2025-01-16 08:27:30,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3517.7). Total num frames: 598016. Throughput: 0: 944.1. Samples: 149752. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:27:30,984][00226] Avg episode reward: [(0, '4.748')]
[2025-01-16 08:27:35,213][02684] Updated weights for policy 0, policy_version 150 (0.0020)
[2025-01-16 08:27:35,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3510.9). Total num frames: 614400. Throughput: 0: 899.8. Samples: 154136. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:27:35,986][00226] Avg episode reward: [(0, '4.750')]
[2025-01-16 08:27:40,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3527.1). Total num frames: 634880. Throughput: 0: 899.7. Samples: 157386. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:27:40,982][00226] Avg episode reward: [(0, '4.748')]
[2025-01-16 08:27:44,968][02684] Updated weights for policy 0, policy_version 160 (0.0036)
[2025-01-16 08:27:45,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3542.5). Total num frames: 655360. Throughput: 0: 935.3. Samples: 163796. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:27:45,985][00226] Avg episode reward: [(0, '4.902')]
[2025-01-16 08:27:45,988][02671] Saving new best policy, reward=4.902!
[2025-01-16 08:27:50,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3535.5). Total num frames: 671744. Throughput: 0: 896.1. Samples: 167880. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:27:50,982][00226] Avg episode reward: [(0, '4.675')]
[2025-01-16 08:27:55,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3549.9). Total num frames: 692224. Throughput: 0: 893.6. Samples: 170966. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:27:55,982][00226] Avg episode reward: [(0, '4.812')]
[2025-01-16 08:27:56,773][02684] Updated weights for policy 0, policy_version 170 (0.0021)
[2025-01-16 08:28:00,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3563.5). Total num frames: 712704. Throughput: 0: 928.3. Samples: 177386. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:28:00,988][00226] Avg episode reward: [(0, '4.731')]
[2025-01-16 08:28:05,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3536.5). Total num frames: 724992. Throughput: 0: 911.2. Samples: 181864. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:28:05,982][00226] Avg episode reward: [(0, '4.899')]
[2025-01-16 08:28:08,515][02684] Updated weights for policy 0, policy_version 180 (0.0014)
[2025-01-16 08:28:10,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3549.9). Total num frames: 745472. Throughput: 0: 902.7. Samples: 184688. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:28:10,982][00226] Avg episode reward: [(0, '4.886')]
[2025-01-16 08:28:10,992][02671] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000182_745472.pth...
[2025-01-16 08:28:15,979][00226] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3581.6). Total num frames: 770048. Throughput: 0: 920.8. Samples: 191190. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:28:15,984][00226] Avg episode reward: [(0, '4.723')]
[2025-01-16 08:28:18,528][02684] Updated weights for policy 0, policy_version 190 (0.0018)
[2025-01-16 08:28:20,982][00226] Fps is (10 sec: 3685.6, 60 sec: 3686.3, 300 sec: 3556.0). Total num frames: 782336. Throughput: 0: 933.2. Samples: 196134. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:28:20,986][00226] Avg episode reward: [(0, '4.957')]
[2025-01-16 08:28:20,994][02671] Saving new best policy, reward=4.957!
[2025-01-16 08:28:25,979][00226] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3549.9). Total num frames: 798720. Throughput: 0: 909.2. Samples: 198300. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:28:25,981][00226] Avg episode reward: [(0, '5.006')]
[2025-01-16 08:28:25,986][02671] Saving new best policy, reward=5.006!
[2025-01-16 08:28:29,991][02684] Updated weights for policy 0, policy_version 200 (0.0012)
[2025-01-16 08:28:30,979][00226] Fps is (10 sec: 4096.9, 60 sec: 3754.7, 300 sec: 3579.5). Total num frames: 823296. Throughput: 0: 908.3. Samples: 204668. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:28:30,988][00226] Avg episode reward: [(0, '4.990')]
[2025-01-16 08:28:35,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3573.1). Total num frames: 839680. Throughput: 0: 943.3. Samples: 210328. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:28:35,986][00226] Avg episode reward: [(0, '5.088')]
[2025-01-16 08:28:35,990][02671] Saving new best policy, reward=5.088!
[2025-01-16 08:28:40,980][00226] Fps is (10 sec: 3276.7, 60 sec: 3686.4, 300 sec: 3566.9). Total num frames: 856064. Throughput: 0: 920.0. Samples: 212368. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:28:40,982][00226] Avg episode reward: [(0, '5.276')]
[2025-01-16 08:28:40,993][02671] Saving new best policy, reward=5.276!
[2025-01-16 08:28:41,514][02684] Updated weights for policy 0, policy_version 210 (0.0015)
[2025-01-16 08:28:45,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3577.7). Total num frames: 876544. Throughput: 0: 914.8. Samples: 218554. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:28:45,981][00226] Avg episode reward: [(0, '5.528')]
[2025-01-16 08:28:45,990][02671] Saving new best policy, reward=5.528!
[2025-01-16 08:28:50,979][00226] Fps is (10 sec: 4096.1, 60 sec: 3754.7, 300 sec: 3588.1). Total num frames: 897024. Throughput: 0: 949.7. Samples: 224600. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:28:50,983][00226] Avg episode reward: [(0, '5.661')]
[2025-01-16 08:28:50,992][02671] Saving new best policy, reward=5.661!
[2025-01-16 08:28:52,185][02684] Updated weights for policy 0, policy_version 220 (0.0015)
[2025-01-16 08:28:55,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3565.9). Total num frames: 909312. Throughput: 0: 930.9. Samples: 226578. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:28:55,981][00226] Avg episode reward: [(0, '5.533')]
[2025-01-16 08:29:00,980][00226] Fps is (10 sec: 3686.0, 60 sec: 3686.3, 300 sec: 3591.9). Total num frames: 933888. Throughput: 0: 913.0. Samples: 232276. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:29:00,989][00226] Avg episode reward: [(0, '5.716')]
[2025-01-16 08:29:00,998][02671] Saving new best policy, reward=5.716!
[2025-01-16 08:29:02,901][02684] Updated weights for policy 0, policy_version 230 (0.0020)
[2025-01-16 08:29:05,984][00226] Fps is (10 sec: 4503.7, 60 sec: 3822.7, 300 sec: 3601.3). Total num frames: 954368. Throughput: 0: 944.1. Samples: 238622. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:29:05,986][00226] Avg episode reward: [(0, '5.873')]
[2025-01-16 08:29:05,988][02671] Saving new best policy, reward=5.873!
[2025-01-16 08:29:10,979][00226] Fps is (10 sec: 3277.1, 60 sec: 3686.4, 300 sec: 3580.2). Total num frames: 966656. Throughput: 0: 942.3. Samples: 240704. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:29:10,982][00226] Avg episode reward: [(0, '6.032')]
[2025-01-16 08:29:10,989][02671] Saving new best policy, reward=6.032!
[2025-01-16 08:29:14,694][02684] Updated weights for policy 0, policy_version 240 (0.0021)
[2025-01-16 08:29:15,979][00226] Fps is (10 sec: 3278.2, 60 sec: 3618.1, 300 sec: 3589.6). Total num frames: 987136. Throughput: 0: 912.7. Samples: 245738. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:29:15,981][00226] Avg episode reward: [(0, '5.511')]
[2025-01-16 08:29:20,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3754.8, 300 sec: 3598.6). Total num frames: 1007616. Throughput: 0: 930.0. Samples: 252180. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:29:20,983][00226] Avg episode reward: [(0, '5.744')]
[2025-01-16 08:29:25,787][02684] Updated weights for policy 0, policy_version 250 (0.0015)
[2025-01-16 08:29:25,981][00226] Fps is (10 sec: 3685.7, 60 sec: 3754.5, 300 sec: 3593.0). Total num frames: 1024000. Throughput: 0: 941.2. Samples: 254722. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:29:25,985][00226] Avg episode reward: [(0, '5.801')]
[2025-01-16 08:29:30,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3587.5). Total num frames: 1040384. Throughput: 0: 904.8. Samples: 259268. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:29:30,984][00226] Avg episode reward: [(0, '5.905')]
[2025-01-16 08:29:35,979][00226] Fps is (10 sec: 3687.1, 60 sec: 3686.4, 300 sec: 3596.1). Total num frames: 1060864. Throughput: 0: 914.5. Samples: 265754. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:29:35,982][00226] Avg episode reward: [(0, '5.814')]
[2025-01-16 08:29:36,263][02684] Updated weights for policy 0, policy_version 260 (0.0018)
[2025-01-16 08:29:40,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3665.6). Total num frames: 1081344. Throughput: 0: 942.5. Samples: 268990. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-16 08:29:40,986][00226] Avg episode reward: [(0, '5.670')]
[2025-01-16 08:29:45,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3707.2). Total num frames: 1093632. Throughput: 0: 906.2. Samples: 273052. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:29:45,981][00226] Avg episode reward: [(0, '5.297')]
[2025-01-16 08:29:47,968][02684] Updated weights for policy 0, policy_version 270 (0.0012)
[2025-01-16 08:29:50,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3735.0). Total num frames: 1114112. Throughput: 0: 907.8. Samples: 279470. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:29:50,984][00226] Avg episode reward: [(0, '5.494')]
[2025-01-16 08:29:55,979][00226] Fps is (10 sec: 4095.9, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 1134592. Throughput: 0: 932.0. Samples: 282644. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-16 08:29:55,983][00226] Avg episode reward: [(0, '5.926')]
[2025-01-16 08:29:59,483][02684] Updated weights for policy 0, policy_version 280 (0.0028)
[2025-01-16 08:30:00,980][00226] Fps is (10 sec: 3686.3, 60 sec: 3618.2, 300 sec: 3721.1). Total num frames: 1150976. Throughput: 0: 917.6. Samples: 287030. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:30:00,989][00226] Avg episode reward: [(0, '5.978')]
[2025-01-16 08:30:05,979][00226] Fps is (10 sec: 3686.5, 60 sec: 3618.4, 300 sec: 3735.0). Total num frames: 1171456. Throughput: 0: 907.4. Samples: 293012. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:30:05,981][00226] Avg episode reward: [(0, '6.547')]
[2025-01-16 08:30:05,986][02671] Saving new best policy, reward=6.547!
[2025-01-16 08:30:09,507][02684] Updated weights for policy 0, policy_version 290 (0.0015)
[2025-01-16 08:30:10,979][00226] Fps is (10 sec: 4096.1, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 1191936. Throughput: 0: 923.3. Samples: 296270. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:30:10,985][00226] Avg episode reward: [(0, '6.937')]
[2025-01-16 08:30:10,993][02671] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000291_1191936.pth...
[2025-01-16 08:30:11,135][02671] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000073_299008.pth
[2025-01-16 08:30:11,164][02671] Saving new best policy, reward=6.937!
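The checkpoint filenames encode the policy version and total environment frames (checkpoint_000000291_1191936.pth is version 291 at 1,191,936 frames), and the learner deletes older files so only the most recent few remain, as the Saving/Removing pair above shows. A hedged sketch of that rotation, assuming the last two checkpoints are kept:

```python
# Rotation sketch matching the Saving/Removing pattern above. The keep
# count is an assumption; the name format follows the log's filenames.
from pathlib import Path
import torch

def save_rotating_checkpoint(state, ckpt_dir: str, version: int,
                             frames: int, keep: int = 2) -> None:
    ckpt_dir = Path(ckpt_dir)
    ckpt_dir.mkdir(parents=True, exist_ok=True)
    torch.save(state, ckpt_dir / f"checkpoint_{version:09d}_{frames}.pth")
    for old in sorted(ckpt_dir.glob("checkpoint_*.pth"))[:-keep]:
        old.unlink()  # e.g. "Removing .../checkpoint_000000073_299008.pth"
```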
[2025-01-16 08:30:15,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3707.2). Total num frames: 1204224. Throughput: 0: 925.2. Samples: 300904. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-01-16 08:30:15,983][00226] Avg episode reward: [(0, '7.115')]
[2025-01-16 08:30:15,986][02671] Saving new best policy, reward=7.115!
[2025-01-16 08:30:20,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3721.1). Total num frames: 1224704. Throughput: 0: 907.2. Samples: 306580. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:30:20,983][00226] Avg episode reward: [(0, '7.144')]
[2025-01-16 08:30:20,992][02671] Saving new best policy, reward=7.144!
[2025-01-16 08:30:21,479][02684] Updated weights for policy 0, policy_version 300 (0.0015)
[2025-01-16 08:30:25,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3686.5, 300 sec: 3707.2). Total num frames: 1245184. Throughput: 0: 905.9. Samples: 309754. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:30:25,981][00226] Avg episode reward: [(0, '6.456')]
[2025-01-16 08:30:30,980][00226] Fps is (10 sec: 3686.3, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 1261568. Throughput: 0: 932.7. Samples: 315024. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:30:30,984][00226] Avg episode reward: [(0, '6.362')]
[2025-01-16 08:30:33,265][02684] Updated weights for policy 0, policy_version 310 (0.0015)
[2025-01-16 08:30:35,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3707.2). Total num frames: 1277952. Throughput: 0: 905.1. Samples: 320198. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:30:35,987][00226] Avg episode reward: [(0, '6.869')]
[2025-01-16 08:30:40,979][00226] Fps is (10 sec: 4096.1, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 1302528. Throughput: 0: 905.6. Samples: 323396. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:30:40,986][00226] Avg episode reward: [(0, '7.416')]
[2025-01-16 08:30:40,992][02671] Saving new best policy, reward=7.416!
[2025-01-16 08:30:42,815][02684] Updated weights for policy 0, policy_version 320 (0.0017)
[2025-01-16 08:30:45,980][00226] Fps is (10 sec: 4095.8, 60 sec: 3754.6, 300 sec: 3693.3). Total num frames: 1318912. Throughput: 0: 936.1. Samples: 329154. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:30:45,987][00226] Avg episode reward: [(0, '7.452')]
[2025-01-16 08:30:45,993][02671] Saving new best policy, reward=7.452!
[2025-01-16 08:30:50,979][00226] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 1331200. Throughput: 0: 904.4. Samples: 333708. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:30:50,984][00226] Avg episode reward: [(0, '7.802')]
[2025-01-16 08:30:51,012][02671] Saving new best policy, reward=7.802!
[2025-01-16 08:30:54,734][02684] Updated weights for policy 0, policy_version 330 (0.0023)
[2025-01-16 08:30:55,979][00226] Fps is (10 sec: 3686.6, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 1355776. Throughput: 0: 901.8. Samples: 336852. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:30:55,984][00226] Avg episode reward: [(0, '7.760')]
[2025-01-16 08:31:00,981][00226] Fps is (10 sec: 4095.5, 60 sec: 3686.3, 300 sec: 3679.4). Total num frames: 1372160. Throughput: 0: 935.1. Samples: 342986. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:31:00,983][00226] Avg episode reward: [(0, '8.176')]
[2025-01-16 08:31:00,991][02671] Saving new best policy, reward=8.176!
[2025-01-16 08:31:05,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3693.4). Total num frames: 1388544. Throughput: 0: 904.1. Samples: 347266. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:31:05,982][00226] Avg episode reward: [(0, '8.396')]
[2025-01-16 08:31:05,985][02671] Saving new best policy, reward=8.396!
[2025-01-16 08:31:06,728][02684] Updated weights for policy 0, policy_version 340 (0.0020)
[2025-01-16 08:31:10,979][00226] Fps is (10 sec: 3686.8, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 1409024. Throughput: 0: 903.4. Samples: 350408. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-16 08:31:10,982][00226] Avg episode reward: [(0, '9.332')]
[2025-01-16 08:31:10,990][02671] Saving new best policy, reward=9.332!
[2025-01-16 08:31:15,981][00226] Fps is (10 sec: 4095.1, 60 sec: 3754.5, 300 sec: 3693.3). Total num frames: 1429504. Throughput: 0: 925.0. Samples: 356652. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:31:15,988][00226] Avg episode reward: [(0, '9.781')]
[2025-01-16 08:31:15,993][02671] Saving new best policy, reward=9.781!
[2025-01-16 08:31:17,285][02684] Updated weights for policy 0, policy_version 350 (0.0012)
[2025-01-16 08:31:20,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3679.5). Total num frames: 1441792. Throughput: 0: 905.3. Samples: 360936. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:31:20,985][00226] Avg episode reward: [(0, '9.646')]
[2025-01-16 08:31:25,979][00226] Fps is (10 sec: 3277.5, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 1462272. Throughput: 0: 901.3. Samples: 363954. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:31:25,984][00226] Avg episode reward: [(0, '9.933')]
[2025-01-16 08:31:25,988][02671] Saving new best policy, reward=9.933!
[2025-01-16 08:31:28,255][02684] Updated weights for policy 0, policy_version 360 (0.0021)
[2025-01-16 08:31:30,981][00226] Fps is (10 sec: 4095.3, 60 sec: 3686.3, 300 sec: 3679.4). Total num frames: 1482752. Throughput: 0: 913.8. Samples: 370278. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:31:30,991][00226] Avg episode reward: [(0, '9.421')]
[2025-01-16 08:31:35,980][00226] Fps is (10 sec: 3686.2, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 1499136. Throughput: 0: 917.3. Samples: 374988. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:31:35,985][00226] Avg episode reward: [(0, '9.808')]
[2025-01-16 08:31:40,212][02684] Updated weights for policy 0, policy_version 370 (0.0012)
[2025-01-16 08:31:40,980][00226] Fps is (10 sec: 3686.9, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 1519616. Throughput: 0: 903.3. Samples: 377502. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:31:40,984][00226] Avg episode reward: [(0, '9.595')]
[2025-01-16 08:31:45,979][00226] Fps is (10 sec: 4096.2, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 1540096. Throughput: 0: 912.1. Samples: 384030. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:31:45,982][00226] Avg episode reward: [(0, '9.173')]
[2025-01-16 08:31:50,468][02684] Updated weights for policy 0, policy_version 380 (0.0016)
[2025-01-16 08:31:50,979][00226] Fps is (10 sec: 3686.5, 60 sec: 3754.7, 300 sec: 3665.6). Total num frames: 1556480. Throughput: 0: 935.0. Samples: 389340. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:31:50,985][00226] Avg episode reward: [(0, '8.853')]
[2025-01-16 08:31:55,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3679.5). Total num frames: 1572864. Throughput: 0: 911.2. Samples: 391412. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:31:55,986][00226] Avg episode reward: [(0, '9.096')]
[2025-01-16 08:32:00,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3686.5, 300 sec: 3679.5). Total num frames: 1593344. Throughput: 0: 910.3. Samples: 397612. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2025-01-16 08:32:00,986][00226] Avg episode reward: [(0, '8.952')]
[2025-01-16 08:32:01,373][02684] Updated weights for policy 0, policy_version 390 (0.0025)
[2025-01-16 08:32:05,980][00226] Fps is (10 sec: 4095.6, 60 sec: 3754.6, 300 sec: 3679.5). Total num frames: 1613824. Throughput: 0: 946.2. Samples: 403514. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:32:05,988][00226] Avg episode reward: [(0, '8.983')]
[2025-01-16 08:32:10,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3665.6). Total num frames: 1626112. Throughput: 0: 925.3. Samples: 405592. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:32:10,986][00226] Avg episode reward: [(0, '9.326')]
[2025-01-16 08:32:11,012][02671] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000398_1630208.pth...
[2025-01-16 08:32:11,126][02671] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000182_745472.pth
[2025-01-16 08:32:12,928][02684] Updated weights for policy 0, policy_version 400 (0.0020)
[2025-01-16 08:32:15,979][00226] Fps is (10 sec: 3277.1, 60 sec: 3618.3, 300 sec: 3679.5). Total num frames: 1646592. Throughput: 0: 914.6. Samples: 411434. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:32:15,982][00226] Avg episode reward: [(0, '9.774')]
[2025-01-16 08:32:20,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3679.5). Total num frames: 1667072. Throughput: 0: 948.3. Samples: 417660. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:32:20,982][00226] Avg episode reward: [(0, '10.176')]
[2025-01-16 08:32:20,991][02671] Saving new best policy, reward=10.176!
[2025-01-16 08:32:24,310][02684] Updated weights for policy 0, policy_version 410 (0.0018)
[2025-01-16 08:32:25,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 1683456. Throughput: 0: 936.7. Samples: 419654. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:32:25,981][00226] Avg episode reward: [(0, '10.654')]
[2025-01-16 08:32:25,986][02671] Saving new best policy, reward=10.654!
[2025-01-16 08:32:30,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3686.5, 300 sec: 3693.3). Total num frames: 1703936. Throughput: 0: 907.8. Samples: 424882. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:32:30,981][00226] Avg episode reward: [(0, '10.914')]
[2025-01-16 08:32:30,996][02671] Saving new best policy, reward=10.914!
[2025-01-16 08:32:34,561][02684] Updated weights for policy 0, policy_version 420 (0.0017)
[2025-01-16 08:32:35,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3693.3). Total num frames: 1724416. Throughput: 0: 933.7. Samples: 431358. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:32:35,986][00226] Avg episode reward: [(0, '11.201')]
[2025-01-16 08:32:35,988][02671] Saving new best policy, reward=11.201!
[2025-01-16 08:32:40,982][00226] Fps is (10 sec: 3276.1, 60 sec: 3618.0, 300 sec: 3665.5). Total num frames: 1736704. Throughput: 0: 934.9. Samples: 433484. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-01-16 08:32:40,984][00226] Avg episode reward: [(0, '10.662')]
[2025-01-16 08:32:45,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3679.5). Total num frames: 1757184. Throughput: 0: 908.1. Samples: 438478. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:32:45,981][00226] Avg episode reward: [(0, '9.603')]
[2025-01-16 08:32:46,519][02684] Updated weights for policy 0, policy_version 430 (0.0015)
[2025-01-16 08:32:50,979][00226] Fps is (10 sec: 4096.9, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 1777664. Throughput: 0: 923.9. Samples: 445088. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:32:50,986][00226] Avg episode reward: [(0, '9.626')]
[2025-01-16 08:32:55,981][00226] Fps is (10 sec: 3685.9, 60 sec: 3686.3, 300 sec: 3665.6). Total num frames: 1794048. Throughput: 0: 940.8. Samples: 447930. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:32:55,987][00226] Avg episode reward: [(0, '10.057')]
[2025-01-16 08:32:57,623][02684] Updated weights for policy 0, policy_version 440 (0.0029)
[2025-01-16 08:33:00,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 1814528. Throughput: 0: 909.9. Samples: 452378. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-01-16 08:33:00,985][00226] Avg episode reward: [(0, '10.336')]
[2025-01-16 08:33:05,979][00226] Fps is (10 sec: 4096.6, 60 sec: 3686.5, 300 sec: 3693.3). Total num frames: 1835008. Throughput: 0: 918.6. Samples: 458998. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:33:05,985][00226] Avg episode reward: [(0, '11.008')]
[2025-01-16 08:33:07,397][02684] Updated weights for policy 0, policy_version 450 (0.0013)
[2025-01-16 08:33:10,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3665.6). Total num frames: 1851392. Throughput: 0: 945.8. Samples: 462216. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:33:10,984][00226] Avg episode reward: [(0, '11.582')]
[2025-01-16 08:33:10,991][02671] Saving new best policy, reward=11.582!
[2025-01-16 08:33:15,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 1867776. Throughput: 0: 923.2. Samples: 466424. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:33:15,982][00226] Avg episode reward: [(0, '13.277')]
[2025-01-16 08:33:15,990][02671] Saving new best policy, reward=13.277!
[2025-01-16 08:33:19,038][02684] Updated weights for policy 0, policy_version 460 (0.0013)
[2025-01-16 08:33:20,980][00226] Fps is (10 sec: 4095.9, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1892352. Throughput: 0: 922.9. Samples: 472888. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-01-16 08:33:20,987][00226] Avg episode reward: [(0, '14.531')]
[2025-01-16 08:33:20,995][02671] Saving new best policy, reward=14.531!
[2025-01-16 08:33:25,983][00226] Fps is (10 sec: 4504.2, 60 sec: 3822.7, 300 sec: 3693.3). Total num frames: 1912832. Throughput: 0: 949.1. Samples: 476196. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:33:25,985][00226] Avg episode reward: [(0, '15.674')]
[2025-01-16 08:33:25,989][02671] Saving new best policy, reward=15.674!
[2025-01-16 08:33:30,530][02684] Updated weights for policy 0, policy_version 470 (0.0016)
[2025-01-16 08:33:30,979][00226] Fps is (10 sec: 3276.9, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 1925120. Throughput: 0: 945.0. Samples: 481004. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:33:30,982][00226] Avg episode reward: [(0, '16.946')]
[2025-01-16 08:33:30,993][02671] Saving new best policy, reward=16.946!
[2025-01-16 08:33:35,980][00226] Fps is (10 sec: 3277.8, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 1945600. Throughput: 0: 926.4. Samples: 486778. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:33:35,981][00226] Avg episode reward: [(0, '17.531')]
[2025-01-16 08:33:35,990][02671] Saving new best policy, reward=17.531!
[2025-01-16 08:33:40,053][02684] Updated weights for policy 0, policy_version 480 (0.0012)
[2025-01-16 08:33:40,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3823.1, 300 sec: 3693.3). Total num frames: 1966080. Throughput: 0: 934.4. Samples: 489976. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2025-01-16 08:33:40,983][00226] Avg episode reward: [(0, '19.307')]
[2025-01-16 08:33:40,990][02671] Saving new best policy, reward=19.307!
[2025-01-16 08:33:45,979][00226] Fps is (10 sec: 3686.5, 60 sec: 3754.7, 300 sec: 3679.5). Total num frames: 1982464. Throughput: 0: 953.9. Samples: 495302. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:33:45,985][00226] Avg episode reward: [(0, '19.432')]
[2025-01-16 08:33:45,988][02671] Saving new best policy, reward=19.432!
[2025-01-16 08:33:50,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 2002944. Throughput: 0: 924.7. Samples: 500608. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:33:50,983][00226] Avg episode reward: [(0, '18.461')]
[2025-01-16 08:33:51,983][02684] Updated weights for policy 0, policy_version 490 (0.0012)
[2025-01-16 08:33:55,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3823.0, 300 sec: 3693.4). Total num frames: 2023424. Throughput: 0: 927.3. Samples: 503946. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-16 08:33:55,983][00226] Avg episode reward: [(0, '18.423')]
[2025-01-16 08:34:00,980][00226] Fps is (10 sec: 3686.0, 60 sec: 3754.6, 300 sec: 3679.5). Total num frames: 2039808. Throughput: 0: 967.4. Samples: 509960. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:34:00,983][00226] Avg episode reward: [(0, '18.535')]
[2025-01-16 08:34:03,019][02684] Updated weights for policy 0, policy_version 500 (0.0017)
[2025-01-16 08:34:05,985][00226] Fps is (10 sec: 3684.4, 60 sec: 3754.3, 300 sec: 3707.2). Total num frames: 2060288. Throughput: 0: 931.4. Samples: 514806. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:34:05,990][00226] Avg episode reward: [(0, '16.375')]
[2025-01-16 08:34:10,979][00226] Fps is (10 sec: 4096.4, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 2080768. Throughput: 0: 933.5. Samples: 518202. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:34:10,983][00226] Avg episode reward: [(0, '16.537')]
[2025-01-16 08:34:10,989][02671] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000508_2080768.pth...
[2025-01-16 08:34:11,095][02671] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000291_1191936.pth
[2025-01-16 08:34:12,575][02684] Updated weights for policy 0, policy_version 510 (0.0019)
[2025-01-16 08:34:15,979][00226] Fps is (10 sec: 3688.4, 60 sec: 3822.9, 300 sec: 3693.3). Total num frames: 2097152. Throughput: 0: 968.6. Samples: 524590. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:34:15,982][00226] Avg episode reward: [(0, '15.626')]
[2025-01-16 08:34:20,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3693.4). Total num frames: 2113536. Throughput: 0: 940.8. Samples: 529116. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:34:20,982][00226] Avg episode reward: [(0, '15.174')]
[2025-01-16 08:34:23,947][02684] Updated weights for policy 0, policy_version 520 (0.0017)
[2025-01-16 08:34:25,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3754.9, 300 sec: 3721.1). Total num frames: 2138112. Throughput: 0: 942.9. Samples: 532406. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:34:25,981][00226] Avg episode reward: [(0, '15.016')]
[2025-01-16 08:34:30,979][00226] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3721.1). Total num frames: 2158592. Throughput: 0: 973.4. Samples: 539106. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:34:30,982][00226] Avg episode reward: [(0, '16.988')]
[2025-01-16 08:34:34,824][02684] Updated weights for policy 0, policy_version 530 (0.0017)
[2025-01-16 08:34:35,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3693.3). Total num frames: 2170880. Throughput: 0: 955.2. Samples: 543592. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:34:35,988][00226] Avg episode reward: [(0, '16.637')]
[2025-01-16 08:34:40,981][00226] Fps is (10 sec: 3685.8, 60 sec: 3822.8, 300 sec: 3735.0). Total num frames: 2195456. Throughput: 0: 950.1. Samples: 546702. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:34:40,986][00226] Avg episode reward: [(0, '17.867')]
[2025-01-16 08:34:44,306][02684] Updated weights for policy 0, policy_version 540 (0.0015)
[2025-01-16 08:34:45,983][00226] Fps is (10 sec: 4504.2, 60 sec: 3891.0, 300 sec: 3735.0). Total num frames: 2215936. Throughput: 0: 967.0. Samples: 553476. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:34:45,986][00226] Avg episode reward: [(0, '18.587')]
[2025-01-16 08:34:50,979][00226] Fps is (10 sec: 3687.0, 60 sec: 3822.9, 300 sec: 3721.1). Total num frames: 2232320. Throughput: 0: 969.6. Samples: 558434. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:34:50,983][00226] Avg episode reward: [(0, '17.924')]
[2025-01-16 08:34:55,638][02684] Updated weights for policy 0, policy_version 550 (0.0023)
[2025-01-16 08:34:55,979][00226] Fps is (10 sec: 3687.6, 60 sec: 3822.9, 300 sec: 3735.0). Total num frames: 2252800. Throughput: 0: 954.4. Samples: 561148. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:34:55,981][00226] Avg episode reward: [(0, '17.945')]
[2025-01-16 08:35:00,980][00226] Fps is (10 sec: 4095.8, 60 sec: 3891.2, 300 sec: 3735.0). Total num frames: 2273280. Throughput: 0: 960.7. Samples: 567824. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:35:00,984][00226] Avg episode reward: [(0, '18.591')]
[2025-01-16 08:35:05,979][00226] Fps is (10 sec: 3686.3, 60 sec: 3823.3, 300 sec: 3721.1). Total num frames: 2289664. Throughput: 0: 978.4. Samples: 573146. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:35:05,986][00226] Avg episode reward: [(0, '19.637')]
[2025-01-16 08:35:05,988][02671] Saving new best policy, reward=19.637!
[2025-01-16 08:35:06,519][02684] Updated weights for policy 0, policy_version 560 (0.0022)
[2025-01-16 08:35:10,979][00226] Fps is (10 sec: 3686.6, 60 sec: 3822.9, 300 sec: 3748.9). Total num frames: 2310144. Throughput: 0: 952.8. Samples: 575282. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:35:10,983][00226] Avg episode reward: [(0, '19.983')]
[2025-01-16 08:35:10,993][02671] Saving new best policy, reward=19.983!
[2025-01-16 08:35:15,979][00226] Fps is (10 sec: 4096.1, 60 sec: 3891.2, 300 sec: 3748.9). Total num frames: 2330624. Throughput: 0: 950.9. Samples: 581898. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:35:15,982][00226] Avg episode reward: [(0, '20.537')]
[2025-01-16 08:35:15,988][02671] Saving new best policy, reward=20.537!
[2025-01-16 08:35:16,605][02684] Updated weights for policy 0, policy_version 570 (0.0012)
[2025-01-16 08:35:20,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3748.9). Total num frames: 2351104. Throughput: 0: 981.7. Samples: 587768. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:35:20,983][00226] Avg episode reward: [(0, '19.792')]
[2025-01-16 08:35:25,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3748.9). Total num frames: 2367488. Throughput: 0: 960.5. Samples: 589922. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:35:25,983][00226] Avg episode reward: [(0, '19.926')]
[2025-01-16 08:35:27,795][02684] Updated weights for policy 0, policy_version 580 (0.0023)
[2025-01-16 08:35:30,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 2387968. Throughput: 0: 947.6. Samples: 596114. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:35:30,983][00226] Avg episode reward: [(0, '19.772')]
[2025-01-16 08:35:35,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3748.9). Total num frames: 2408448. Throughput: 0: 977.4. Samples: 602418. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:35:35,983][00226] Avg episode reward: [(0, '18.480')]
[2025-01-16 08:35:38,409][02684] Updated weights for policy 0, policy_version 590 (0.0012)
[2025-01-16 08:35:40,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3754.8, 300 sec: 3735.0). Total num frames: 2420736. Throughput: 0: 963.6. Samples: 604512. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-01-16 08:35:40,981][00226] Avg episode reward: [(0, '19.109')]
[2025-01-16 08:35:45,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3823.1, 300 sec: 3776.7). Total num frames: 2445312. Throughput: 0: 940.2. Samples: 610132. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:35:45,982][00226] Avg episode reward: [(0, '21.580')]
[2025-01-16 08:35:45,983][02671] Saving new best policy, reward=21.580!
[2025-01-16 08:35:48,713][02684] Updated weights for policy 0, policy_version 600 (0.0016)
[2025-01-16 08:35:50,980][00226] Fps is (10 sec: 4505.5, 60 sec: 3891.2, 300 sec: 3762.8). Total num frames: 2465792. Throughput: 0: 969.1. Samples: 616756. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:35:50,985][00226] Avg episode reward: [(0, '22.703')]
[2025-01-16 08:35:51,000][02671] Saving new best policy, reward=22.703!
[2025-01-16 08:35:55,981][00226] Fps is (10 sec: 3276.2, 60 sec: 3754.6, 300 sec: 3748.9). Total num frames: 2478080. Throughput: 0: 969.8. Samples: 618926. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:35:55,983][00226] Avg episode reward: [(0, '22.098')]
[2025-01-16 08:36:00,345][02684] Updated weights for policy 0, policy_version 610 (0.0017)
[2025-01-16 08:36:00,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3762.8). Total num frames: 2498560. Throughput: 0: 938.4. Samples: 624128. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:36:00,986][00226] Avg episode reward: [(0, '22.820')]
[2025-01-16 08:36:00,992][02671] Saving new best policy, reward=22.820!
[2025-01-16 08:36:05,980][00226] Fps is (10 sec: 4506.2, 60 sec: 3891.2, 300 sec: 3776.6). Total num frames: 2523136. Throughput: 0: 953.0. Samples: 630654. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:36:05,982][00226] Avg episode reward: [(0, '23.936')]
[2025-01-16 08:36:05,984][02671] Saving new best policy, reward=23.936!
[2025-01-16 08:36:10,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 2535424. Throughput: 0: 964.8. Samples: 633340. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-01-16 08:36:10,986][00226] Avg episode reward: [(0, '24.611')]
[2025-01-16 08:36:11,001][02671] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000619_2535424.pth...
[2025-01-16 08:36:11,148][02671] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000398_1630208.pth
[2025-01-16 08:36:11,169][02671] Saving new best policy, reward=24.611!
[2025-01-16 08:36:11,470][02684] Updated weights for policy 0, policy_version 620 (0.0018)
[2025-01-16 08:36:15,979][00226] Fps is (10 sec: 3276.9, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 2555904. Throughput: 0: 927.0. Samples: 637828. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:36:15,985][00226] Avg episode reward: [(0, '23.176')]
[2025-01-16 08:36:20,980][00226] Fps is (10 sec: 4095.9, 60 sec: 3754.6, 300 sec: 3776.6). Total num frames: 2576384. Throughput: 0: 936.1. Samples: 644542. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:36:20,982][00226] Avg episode reward: [(0, '24.105')]
[2025-01-16 08:36:21,447][02684] Updated weights for policy 0, policy_version 630 (0.0015)
[2025-01-16 08:36:25,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3776.7). Total num frames: 2596864. Throughput: 0: 963.2. Samples: 647854. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:36:25,982][00226] Avg episode reward: [(0, '23.071')]
[2025-01-16 08:36:30,979][00226] Fps is (10 sec: 3686.5, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 2613248. Throughput: 0: 933.2. Samples: 652126. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:36:30,982][00226] Avg episode reward: [(0, '22.512')]
[2025-01-16 08:36:32,761][02684] Updated weights for policy 0, policy_version 640 (0.0012)
[2025-01-16 08:36:35,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 2633728. Throughput: 0: 932.4. Samples: 658716. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:36:35,984][00226] Avg episode reward: [(0, '20.322')]
[2025-01-16 08:36:40,983][00226] Fps is (10 sec: 4094.7, 60 sec: 3891.0, 300 sec: 3776.6). Total num frames: 2654208. Throughput: 0: 956.5. Samples: 661970. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:36:40,985][00226] Avg episode reward: [(0, '20.223')]
[2025-01-16 08:36:43,457][02684] Updated weights for policy 0, policy_version 650 (0.0012)
[2025-01-16 08:36:45,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3762.8). Total num frames: 2666496. Throughput: 0: 943.3. Samples: 666578. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:36:45,983][00226] Avg episode reward: [(0, '18.872')]
[2025-01-16 08:36:50,979][00226] Fps is (10 sec: 3687.6, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 2691072. Throughput: 0: 939.2. Samples: 672916. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:36:50,981][00226] Avg episode reward: [(0, '20.936')]
[2025-01-16 08:36:53,369][02684] Updated weights for policy 0, policy_version 660 (0.0012)
[2025-01-16 08:36:55,979][00226] Fps is (10 sec: 4505.6, 60 sec: 3891.3, 300 sec: 3790.5). Total num frames: 2711552. Throughput: 0: 952.8. Samples: 676216. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:36:55,985][00226] Avg episode reward: [(0, '21.186')]
[2025-01-16 08:37:00,982][00226] Fps is (10 sec: 3276.1, 60 sec: 3754.5, 300 sec: 3762.8). Total num frames: 2723840. Throughput: 0: 965.1. Samples: 681260. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:37:00,987][00226] Avg episode reward: [(0, '20.733')]
[2025-01-16 08:37:04,947][02684] Updated weights for policy 0, policy_version 670 (0.0012)
[2025-01-16 08:37:05,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 2748416. Throughput: 0: 944.1. Samples: 687028. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-16 08:37:05,985][00226] Avg episode reward: [(0, '22.283')]
[2025-01-16 08:37:10,979][00226] Fps is (10 sec: 4506.6, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 2768896. Throughput: 0: 944.8. Samples: 690368. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:37:10,982][00226] Avg episode reward: [(0, '20.634')]
[2025-01-16 08:37:15,589][02684] Updated weights for policy 0, policy_version 680 (0.0018)
[2025-01-16 08:37:15,980][00226] Fps is (10 sec: 3686.0, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 2785280. Throughput: 0: 968.8. Samples: 695724. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:37:15,983][00226] Avg episode reward: [(0, '19.421')]
[2025-01-16 08:37:20,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 2801664. Throughput: 0: 942.6. Samples: 701134. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:37:20,988][00226] Avg episode reward: [(0, '19.748')]
[2025-01-16 08:37:25,656][02684] Updated weights for policy 0, policy_version 690 (0.0017)
[2025-01-16 08:37:25,979][00226] Fps is (10 sec: 4096.4, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2826240. Throughput: 0: 945.3. Samples: 704506. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:37:25,982][00226] Avg episode reward: [(0, '19.336')]
[2025-01-16 08:37:30,980][00226] Fps is (10 sec: 4095.9, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 2842624. Throughput: 0: 978.8. Samples: 710624. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:37:30,986][00226] Avg episode reward: [(0, '20.321')]
[2025-01-16 08:37:35,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 2859008. Throughput: 0: 943.9. Samples: 715390. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-16 08:37:35,982][00226] Avg episode reward: [(0, '20.204')]
[2025-01-16 08:37:37,147][02684] Updated weights for policy 0, policy_version 700 (0.0021)
[2025-01-16 08:37:40,981][00226] Fps is (10 sec: 4095.7, 60 sec: 3823.1, 300 sec: 3818.3). Total num frames: 2883584. Throughput: 0: 944.3. Samples: 718710. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:37:40,983][00226] Avg episode reward: [(0, '20.738')]
[2025-01-16 08:37:45,982][00226] Fps is (10 sec: 4094.7, 60 sec: 3891.0, 300 sec: 3804.4). Total num frames: 2899968. Throughput: 0: 973.5. Samples: 725070. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:37:45,985][00226] Avg episode reward: [(0, '20.442')]
[2025-01-16 08:37:47,662][02684] Updated weights for policy 0, policy_version 710 (0.0013)
[2025-01-16 08:37:50,979][00226] Fps is (10 sec: 3277.2, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 2916352. Throughput: 0: 944.0. Samples: 729506. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-01-16 08:37:50,982][00226] Avg episode reward: [(0, '20.348')]
[2025-01-16 08:37:55,979][00226] Fps is (10 sec: 4097.3, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 2940928. Throughput: 0: 944.0. Samples: 732850. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-01-16 08:37:55,981][00226] Avg episode reward: [(0, '20.412')]
[2025-01-16 08:37:57,725][02684] Updated weights for policy 0, policy_version 720 (0.0014)
[2025-01-16 08:38:00,979][00226] Fps is (10 sec: 4505.6, 60 sec: 3959.6, 300 sec: 3818.3). Total num frames: 2961408. Throughput: 0: 975.8. Samples: 739634. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:38:00,983][00226] Avg episode reward: [(0, '21.563')]
[2025-01-16 08:38:05,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 2973696. Throughput: 0: 957.6. Samples: 744228. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:38:05,982][00226] Avg episode reward: [(0, '22.757')]
[2025-01-16 08:38:08,997][02684] Updated weights for policy 0, policy_version 730 (0.0015)
[2025-01-16 08:38:10,980][00226] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 2998272. Throughput: 0: 948.4. Samples: 747186. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:38:10,982][00226] Avg episode reward: [(0, '23.552')]
[2025-01-16 08:38:10,989][02671] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000732_2998272.pth...
[2025-01-16 08:38:11,116][02671] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000508_2080768.pth
[2025-01-16 08:38:15,979][00226] Fps is (10 sec: 4505.5, 60 sec: 3891.3, 300 sec: 3818.3). Total num frames: 3018752. Throughput: 0: 958.9. Samples: 753776. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:38:15,988][00226] Avg episode reward: [(0, '23.595')]
[2025-01-16 08:38:19,669][02684] Updated weights for policy 0, policy_version 740 (0.0013)
[2025-01-16 08:38:20,980][00226] Fps is (10 sec: 3276.7, 60 sec: 3822.9, 300 sec: 3790.6). Total num frames: 3031040. Throughput: 0: 963.9. Samples: 758766. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:38:20,983][00226] Avg episode reward: [(0, '23.103')]
[2025-01-16 08:38:25,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 3051520. Throughput: 0: 944.6. Samples: 761216. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:38:25,985][00226] Avg episode reward: [(0, '22.754')]
[2025-01-16 08:38:29,814][02684] Updated weights for policy 0, policy_version 750 (0.0015)
[2025-01-16 08:38:30,979][00226] Fps is (10 sec: 4096.2, 60 sec: 3823.0, 300 sec: 3818.3). Total num frames: 3072000. Throughput: 0: 949.3. Samples: 767784. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:38:30,981][00226] Avg episode reward: [(0, '21.713')]
[2025-01-16 08:38:35,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 3092480. Throughput: 0: 974.2. Samples: 773344. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:38:35,982][00226] Avg episode reward: [(0, '21.972')]
[2025-01-16 08:38:40,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 3108864. Throughput: 0: 946.0. Samples: 775422. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:38:40,981][00226] Avg episode reward: [(0, '22.541')]
[2025-01-16 08:38:41,733][02684] Updated weights for policy 0, policy_version 760 (0.0011)
[2025-01-16 08:38:45,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3823.1, 300 sec: 3818.3). Total num frames: 3129344. Throughput: 0: 937.4. Samples: 781818. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:38:45,988][00226] Avg episode reward: [(0, '23.160')]
[2025-01-16 08:38:50,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 3149824. Throughput: 0: 974.4. Samples: 788074. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:38:50,984][00226] Avg episode reward: [(0, '23.864')]
[2025-01-16 08:38:51,351][02684] Updated weights for policy 0, policy_version 770 (0.0015)
[2025-01-16 08:38:55,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 3166208. Throughput: 0: 955.8. Samples: 790198. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:38:55,982][00226] Avg episode reward: [(0, '23.519')]
[2025-01-16 08:39:00,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3818.4). Total num frames: 3186688. Throughput: 0: 941.4. Samples: 796140. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:39:00,981][00226] Avg episode reward: [(0, '23.545')]
[2025-01-16 08:39:02,083][02684] Updated weights for policy 0, policy_version 780 (0.0013)
[2025-01-16 08:39:05,979][00226] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 3211264. Throughput: 0: 976.0. Samples: 802684. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:39:05,984][00226] Avg episode reward: [(0, '24.400')]
[2025-01-16 08:39:10,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 3223552. Throughput: 0: 969.0. Samples: 804820. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:39:10,988][00226] Avg episode reward: [(0, '24.293')]
[2025-01-16 08:39:13,713][02684] Updated weights for policy 0, policy_version 790 (0.0012)
[2025-01-16 08:39:15,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 3244032. Throughput: 0: 937.4. Samples: 809968. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-01-16 08:39:15,985][00226] Avg episode reward: [(0, '24.412')]
[2025-01-16 08:39:20,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 3264512. Throughput: 0: 958.5. Samples: 816476. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:39:20,982][00226] Avg episode reward: [(0, '24.991')]
[2025-01-16 08:39:20,992][02671] Saving new best policy, reward=24.991!
[2025-01-16 08:39:24,650][02684] Updated weights for policy 0, policy_version 800 (0.0025)
[2025-01-16 08:39:25,984][00226] Fps is (10 sec: 3275.4, 60 sec: 3754.4, 300 sec: 3790.5). Total num frames: 3276800. Throughput: 0: 964.2. Samples: 818816. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:39:25,992][00226] Avg episode reward: [(0, '25.048')]
[2025-01-16 08:39:25,997][02671] Saving new best policy, reward=25.048!
[2025-01-16 08:39:30,980][00226] Fps is (10 sec: 3276.7, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 3297280. Throughput: 0: 921.9. Samples: 823302. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:39:30,982][00226] Avg episode reward: [(0, '24.221')]
[2025-01-16 08:39:35,867][02684] Updated weights for policy 0, policy_version 810 (0.0013)
[2025-01-16 08:39:35,979][00226] Fps is (10 sec: 4097.7, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 3317760. Throughput: 0: 922.6. Samples: 829592. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:39:35,982][00226] Avg episode reward: [(0, '23.754')]
[2025-01-16 08:39:40,979][00226] Fps is (10 sec: 3686.5, 60 sec: 3754.7, 300 sec: 3790.6). Total num frames: 3334144. Throughput: 0: 940.4. Samples: 832518. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-01-16 08:39:40,987][00226] Avg episode reward: [(0, '23.066')]
[2025-01-16 08:39:45,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3790.5). Total num frames: 3350528. Throughput: 0: 904.0. Samples: 836818. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:39:45,987][00226] Avg episode reward: [(0, '21.039')]
[2025-01-16 08:39:47,438][02684] Updated weights for policy 0, policy_version 820 (0.0014)
[2025-01-16 08:39:50,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3790.5). Total num frames: 3371008. Throughput: 0: 900.3. Samples: 843198. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:39:50,982][00226] Avg episode reward: [(0, '21.220')]
[2025-01-16 08:39:55,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 3391488. Throughput: 0: 920.7. Samples: 846252. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:39:55,984][00226] Avg episode reward: [(0, '21.004')]
[2025-01-16 08:39:59,373][02684] Updated weights for policy 0, policy_version 830 (0.0012)
[2025-01-16 08:40:00,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3776.7). Total num frames: 3403776. Throughput: 0: 898.2. Samples: 850386. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:40:00,986][00226] Avg episode reward: [(0, '21.403')]
[2025-01-16 08:40:05,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3776.7). Total num frames: 3424256. Throughput: 0: 881.0. Samples: 856122. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:40:05,983][00226] Avg episode reward: [(0, '21.747')]
[2025-01-16 08:40:09,851][02684] Updated weights for policy 0, policy_version 840 (0.0012)
[2025-01-16 08:40:10,982][00226] Fps is (10 sec: 3685.6, 60 sec: 3618.0, 300 sec: 3762.7). Total num frames: 3440640. Throughput: 0: 895.3. Samples: 859102. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:40:10,984][00226] Avg episode reward: [(0, '21.505')]
[2025-01-16 08:40:10,997][02671] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000841_3444736.pth...
[2025-01-16 08:40:11,151][02671] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000619_2535424.pth
[2025-01-16 08:40:15,981][00226] Fps is (10 sec: 3276.4, 60 sec: 3549.8, 300 sec: 3748.9). Total num frames: 3457024. Throughput: 0: 899.0. Samples: 863756. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:40:15,988][00226] Avg episode reward: [(0, '21.956')]
[2025-01-16 08:40:20,979][00226] Fps is (10 sec: 3687.2, 60 sec: 3549.9, 300 sec: 3762.8). Total num frames: 3477504. Throughput: 0: 883.7. Samples: 869358. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:40:20,984][00226] Avg episode reward: [(0, '22.324')]
[2025-01-16 08:40:21,783][02684] Updated weights for policy 0, policy_version 850 (0.0024)
[2025-01-16 08:40:25,979][00226] Fps is (10 sec: 4096.6, 60 sec: 3686.7, 300 sec: 3762.8). Total num frames: 3497984. Throughput: 0: 888.9. Samples: 872518. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:40:25,986][00226] Avg episode reward: [(0, '22.316')]
[2025-01-16 08:40:30,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3735.0). Total num frames: 3510272. Throughput: 0: 909.0. Samples: 877722. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:40:30,987][00226] Avg episode reward: [(0, '23.603')]
[2025-01-16 08:40:33,707][02684] Updated weights for policy 0, policy_version 860 (0.0012)
[2025-01-16 08:40:35,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3762.8). Total num frames: 3530752. Throughput: 0: 881.7. Samples: 882876. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:40:35,985][00226] Avg episode reward: [(0, '23.090')]
[2025-01-16 08:40:40,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3748.9). Total num frames: 3551232. Throughput: 0: 882.4. Samples: 885958. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:40:40,981][00226] Avg episode reward: [(0, '24.506')]
[2025-01-16 08:40:43,824][02684] Updated weights for policy 0, policy_version 870 (0.0012)
[2025-01-16 08:40:45,980][00226] Fps is (10 sec: 3686.0, 60 sec: 3618.1, 300 sec: 3735.0). Total num frames: 3567616. Throughput: 0: 917.9. Samples: 891692. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:40:45,983][00226] Avg episode reward: [(0, '25.426')]
[2025-01-16 08:40:45,990][02671] Saving new best policy, reward=25.426!
[2025-01-16 08:40:50,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3748.9). Total num frames: 3584000. Throughput: 0: 893.2. Samples: 896316. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:40:50,982][00226] Avg episode reward: [(0, '23.998')]
[2025-01-16 08:40:55,246][02684] Updated weights for policy 0, policy_version 880 (0.0023)
[2025-01-16 08:40:55,982][00226] Fps is (10 sec: 3686.0, 60 sec: 3549.7, 300 sec: 3748.9). Total num frames: 3604480. Throughput: 0: 899.2. Samples: 899564. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:40:55,985][00226] Avg episode reward: [(0, '22.436')]
[2025-01-16 08:41:00,984][00226] Fps is (10 sec: 4094.2, 60 sec: 3686.1, 300 sec: 3734.9). Total num frames: 3624960. Throughput: 0: 932.6. Samples: 905728. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2025-01-16 08:41:00,990][00226] Avg episode reward: [(0, '22.153')]
[2025-01-16 08:41:05,979][00226] Fps is (10 sec: 3277.5, 60 sec: 3549.9, 300 sec: 3735.0). Total num frames: 3637248. Throughput: 0: 898.7. Samples: 909800. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:41:05,985][00226] Avg episode reward: [(0, '21.677')]
[2025-01-16 08:41:07,291][02684] Updated weights for policy 0, policy_version 890 (0.0013)
[2025-01-16 08:41:10,979][00226] Fps is (10 sec: 3278.2, 60 sec: 3618.3, 300 sec: 3735.0). Total num frames: 3657728. Throughput: 0: 897.8. Samples: 912920. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:41:10,986][00226] Avg episode reward: [(0, '21.176')]
[2025-01-16 08:41:15,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3686.5, 300 sec: 3735.0). Total num frames: 3678208. Throughput: 0: 925.2. Samples: 919354. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:41:15,982][00226] Avg episode reward: [(0, '21.112')]
[2025-01-16 08:41:18,224][02684] Updated weights for policy 0, policy_version 900 (0.0021)
[2025-01-16 08:41:20,986][00226] Fps is (10 sec: 3274.8, 60 sec: 3549.5, 300 sec: 3707.1). Total num frames: 3690496. Throughput: 0: 904.1. Samples: 923566. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:41:20,988][00226] Avg episode reward: [(0, '21.861')]
[2025-01-16 08:41:25,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3735.0). Total num frames: 3715072. Throughput: 0: 903.8. Samples: 926630. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:41:25,985][00226] Avg episode reward: [(0, '23.940')]
[2025-01-16 08:41:28,609][02684] Updated weights for policy 0, policy_version 910 (0.0012)
[2025-01-16 08:41:30,980][00226] Fps is (10 sec: 4508.3, 60 sec: 3754.7, 300 sec: 3735.0). Total num frames: 3735552. Throughput: 0: 923.2. Samples: 933234. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:41:30,982][00226] Avg episode reward: [(0, '24.213')]
[2025-01-16 08:41:35,981][00226] Fps is (10 sec: 3685.6, 60 sec: 3686.3, 300 sec: 3721.1). Total num frames: 3751936. Throughput: 0: 928.9. Samples: 938120. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:41:35,984][00226] Avg episode reward: [(0, '24.934')]
[2025-01-16 08:41:40,167][02684] Updated weights for policy 0, policy_version 920 (0.0018)
[2025-01-16 08:41:40,979][00226] Fps is (10 sec: 3686.5, 60 sec: 3686.4, 300 sec: 3748.9). Total num frames: 3772416. Throughput: 0: 913.6. Samples: 940674. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-16 08:41:40,982][00226] Avg episode reward: [(0, '25.269')]
[2025-01-16 08:41:45,979][00226] Fps is (10 sec: 4096.9, 60 sec: 3754.7, 300 sec: 3735.0). Total num frames: 3792896. Throughput: 0: 923.2. Samples: 947268. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-01-16 08:41:45,983][00226] Avg episode reward: [(0, '24.829')]
[2025-01-16 08:41:50,809][02684] Updated weights for policy 0, policy_version 930 (0.0012)
[2025-01-16 08:41:50,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 3809280. Throughput: 0: 948.6. Samples: 952488. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:41:50,986][00226] Avg episode reward: [(0, '23.518')]
[2025-01-16 08:41:55,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3686.5, 300 sec: 3735.0). Total num frames: 3825664. Throughput: 0: 926.1. Samples: 954594. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:41:55,982][00226] Avg episode reward: [(0, '22.876')]
[2025-01-16 08:42:00,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3686.7, 300 sec: 3721.1). Total num frames: 3846144. Throughput: 0: 929.2. Samples: 961170. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-16 08:42:00,982][00226] Avg episode reward: [(0, '22.146')]
[2025-01-16 08:42:01,195][02684] Updated weights for policy 0, policy_version 940 (0.0017)
[2025-01-16 08:42:05,980][00226] Fps is (10 sec: 4095.5, 60 sec: 3822.9, 300 sec: 3721.1). Total num frames: 3866624. Throughput: 0: 966.3. Samples: 967044. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2025-01-16 08:42:05,987][00226] Avg episode reward: [(0, '22.853')]
[2025-01-16 08:42:10,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 3883008. Throughput: 0: 942.9. Samples: 969060. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:42:10,982][00226] Avg episode reward: [(0, '23.201')]
[2025-01-16 08:42:10,990][02671] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000948_3883008.pth...
[2025-01-16 08:42:11,122][02671] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000732_2998272.pth
[2025-01-16 08:42:12,889][02684] Updated weights for policy 0, policy_version 950 (0.0026)
[2025-01-16 08:42:15,979][00226] Fps is (10 sec: 3686.8, 60 sec: 3754.7, 300 sec: 3735.0). Total num frames: 3903488. Throughput: 0: 931.0. Samples: 975130. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:42:15,984][00226] Avg episode reward: [(0, '24.747')]
[2025-01-16 08:42:20,979][00226] Fps is (10 sec: 4096.0, 60 sec: 3891.6, 300 sec: 3721.1). Total num frames: 3923968. Throughput: 0: 963.3. Samples: 981468. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:42:20,989][00226] Avg episode reward: [(0, '24.975')]
[2025-01-16 08:42:23,524][02684] Updated weights for policy 0, policy_version 960 (0.0015)
[2025-01-16 08:42:25,983][00226] Fps is (10 sec: 3275.4, 60 sec: 3686.1, 300 sec: 3707.2). Total num frames: 3936256. Throughput: 0: 949.7. Samples: 983416. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:42:25,986][00226] Avg episode reward: [(0, '23.736')]
[2025-01-16 08:42:30,979][00226] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3721.1). Total num frames: 3956736. Throughput: 0: 921.1. Samples: 988716. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:42:30,985][00226] Avg episode reward: [(0, '23.583')]
[2025-01-16 08:42:33,837][02684] Updated weights for policy 0, policy_version 970 (0.0024)
[2025-01-16 08:42:35,979][00226] Fps is (10 sec: 4507.5, 60 sec: 3823.1, 300 sec: 3721.1). Total num frames: 3981312. Throughput: 0: 953.0. Samples: 995372. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-16 08:42:35,988][00226] Avg episode reward: [(0, '24.305')]
[2025-01-16 08:42:40,979][00226] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3707.3). Total num frames: 3993600. Throughput: 0: 956.4. Samples: 997632. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-16 08:42:40,983][00226] Avg episode reward: [(0, '22.302')]
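
Every status line in the run above follows one fixed template, so the whole training curve can be recovered from the log itself. A small parsing sketch using only the standard library (the regexes are written against the exact wording shown here, and the "training.log" path is an assumption):

    # Turn the "Fps is ..." / "Avg episode reward: ..." pairs into (frames, fps, reward) points.
    import re

    STATUS = re.compile(r"Fps is \(10 sec: (?P<fps10>[\d.]+),.*?Total num frames: (?P<frames>\d+)")
    REWARD = re.compile(r"Avg episode reward: \[\(0, '(?P<reward>[-\d.]+)'\)\]")

    def parse_log(lines):
        frames = fps10 = None
        for line in lines:
            if m := STATUS.search(line):
                frames, fps10 = int(m["frames"]), float(m["fps10"])
            elif (m := REWARD.search(line)) and frames is not None:
                yield frames, fps10, float(m["reward"])  # one point per status report

    with open("training.log") as f:  # path is an assumption
        for frames, fps, reward in parse_log(f):
            print(f"{frames:>8} frames  {fps:7.1f} fps  reward {reward:6.3f}")
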
[2025-01-16 08:42:43,569][02671] Stopping Batcher_0...
[2025-01-16 08:42:43,571][02671] Loop batcher_evt_loop terminating...
[2025-01-16 08:42:43,571][00226] Component Batcher_0 stopped!
[2025-01-16 08:42:43,573][02671] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2025-01-16 08:42:43,573][00226] Component RolloutWorker_w2 process died already! Don't wait for it.
[2025-01-16 08:42:43,579][00226] Component RolloutWorker_w6 process died already! Don't wait for it.
[2025-01-16 08:42:43,634][02684] Weights refcount: 2 0
[2025-01-16 08:42:43,635][02684] Stopping InferenceWorker_p0-w0...
[2025-01-16 08:42:43,636][02684] Loop inference_proc0-0_evt_loop terminating...
[2025-01-16 08:42:43,637][00226] Component InferenceWorker_p0-w0 stopped!
[2025-01-16 08:42:43,705][02671] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000841_3444736.pth
[2025-01-16 08:42:43,725][02671] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2025-01-16 08:42:43,882][00226] Component LearnerWorker_p0 stopped!
[2025-01-16 08:42:43,884][02671] Stopping LearnerWorker_p0...
[2025-01-16 08:42:43,885][02671] Loop learner_proc0_evt_loop terminating...
[2025-01-16 08:42:43,986][02687] Stopping RolloutWorker_w3...
[2025-01-16 08:42:43,986][00226] Component RolloutWorker_w3 stopped!
[2025-01-16 08:42:44,002][02690] Stopping RolloutWorker_w5...
[2025-01-16 08:42:43,987][02687] Loop rollout_proc3_evt_loop terminating...
[2025-01-16 08:42:44,002][00226] Component RolloutWorker_w5 stopped!
[2025-01-16 08:42:44,003][02690] Loop rollout_proc5_evt_loop terminating...
[2025-01-16 08:42:44,017][00226] Component RolloutWorker_w7 stopped!
[2025-01-16 08:42:44,017][02692] Stopping RolloutWorker_w7...
[2025-01-16 08:42:44,026][02686] Stopping RolloutWorker_w1...
[2025-01-16 08:42:44,026][00226] Component RolloutWorker_w1 stopped!
[2025-01-16 08:42:44,032][00226] Component RolloutWorker_w0 stopped!
[2025-01-16 08:42:44,034][02685] Stopping RolloutWorker_w0...
[2025-01-16 08:42:44,034][02685] Loop rollout_proc0_evt_loop terminating...
[2025-01-16 08:42:44,022][02692] Loop rollout_proc7_evt_loop terminating...
[2025-01-16 08:42:44,027][02686] Loop rollout_proc1_evt_loop terminating...
[2025-01-16 08:42:44,076][00226] Component RolloutWorker_w4 stopped!
[2025-01-16 08:42:44,078][00226] Waiting for process learner_proc0 to stop...
[2025-01-16 08:42:44,080][02689] Stopping RolloutWorker_w4...
[2025-01-16 08:42:44,081][02689] Loop rollout_proc4_evt_loop terminating...
[2025-01-16 08:42:45,352][00226] Waiting for process inference_proc0-0 to join...
[2025-01-16 08:42:45,355][00226] Waiting for process rollout_proc0 to join...
[2025-01-16 08:42:46,399][00226] Waiting for process rollout_proc1 to join...
[2025-01-16 08:42:47,052][00226] Waiting for process rollout_proc2 to join...
[2025-01-16 08:42:47,053][00226] Waiting for process rollout_proc3 to join...
[2025-01-16 08:42:47,055][00226] Waiting for process rollout_proc4 to join...
[2025-01-16 08:42:47,057][00226] Waiting for process rollout_proc5 to join...
[2025-01-16 08:42:47,060][00226] Waiting for process rollout_proc6 to join...
[2025-01-16 08:42:47,062][00226] Waiting for process rollout_proc7 to join...
[2025-01-16 08:42:47,065][00226] Batcher 0 profile tree view:
batching: 24.4481, releasing_batches: 0.0378
[2025-01-16 08:42:47,066][00226] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0000
  wait_policy_total: 442.9811
update_model: 9.1541
  weight_update: 0.0026
one_step: 0.0046
  handle_policy_step: 593.9582
    deserialize: 14.5087, stack: 3.3789, obs_to_device_normalize: 132.0158, forward: 308.5428, send_messages: 23.1980
    prepare_outputs: 86.4103
      to_cpu: 54.6077
[2025-01-16 08:42:47,068][00226] Learner 0 profile tree view:
misc: 0.0040, prepare_batch: 13.6461
train: 73.9479
  epoch_init: 0.0068, minibatch_init: 0.0062, losses_postprocess: 0.6937, kl_divergence: 0.5336, after_optimizer: 33.1222
  calculate_losses: 27.8330
    losses_init: 0.0037, forward_head: 1.3790, bptt_initial: 20.0086, tail: 0.9304, advantages_returns: 0.2455, losses: 3.2898
    bptt: 1.7295
      bptt_forward_core: 1.6680
  update: 11.2376
    clip: 0.8470
[2025-01-16 08:42:47,069][00226] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.3155, enqueue_policy_requests: 211.4581, env_step: 761.2902, overhead: 14.5281, complete_rollouts: 5.1357
save_policy_outputs: 21.0872
  split_output_tensors: 8.0883
[2025-01-16 08:42:47,070][00226] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 0.2720, enqueue_policy_requests: 110.6259, env_step: 862.3899, overhead: 13.9043, complete_rollouts: 8.4023
save_policy_outputs: 22.0603
  split_output_tensors: 8.5603
[2025-01-16 08:42:47,071][00226] Loop Runner_EvtLoop terminating...
[2025-01-16 08:42:47,072][00226] Runner profile tree view:
main_loop: 1110.9860
[2025-01-16 08:42:47,073][00226] Collected {0: 4005888}, FPS: 3605.7
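
The two summary numbers are consistent with each other: 4,005,888 frames over a 1110.986 s main loop works out to the reported aggregate throughput.

    # Cross-check the reported aggregate FPS from the two summary lines above.
    total_frames = 4_005_888    # "Collected {0: 4005888}"
    main_loop_s = 1110.9860     # "main_loop: 1110.9860"
    print(f"{total_frames / main_loop_s:.1f} FPS")   # -> 3605.7
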
[2025-01-16 08:42:58,749][00226] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2025-01-16 08:42:58,751][00226] Overriding arg 'num_workers' with value 1 passed from command line
[2025-01-16 08:42:58,754][00226] Adding new argument 'no_render'=True that is not in the saved config file!
[2025-01-16 08:42:58,756][00226] Adding new argument 'save_video'=True that is not in the saved config file!
[2025-01-16 08:42:58,757][00226] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2025-01-16 08:42:58,760][00226] Adding new argument 'video_name'=None that is not in the saved config file!
[2025-01-16 08:42:58,761][00226] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2025-01-16 08:42:58,762][00226] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2025-01-16 08:42:58,763][00226] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2025-01-16 08:42:58,764][00226] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2025-01-16 08:42:58,765][00226] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2025-01-16 08:42:58,766][00226] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2025-01-16 08:42:58,767][00226] Adding new argument 'train_script'=None that is not in the saved config file!
[2025-01-16 08:42:58,768][00226] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2025-01-16 08:42:58,769][00226] Using frameskip 1 and render_action_repeat=4 for evaluation
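
The messages above come from reloading the saved training config and layering the evaluation flags from the command line on top, warning whenever a flag was absent from the saved file. A schematic of that merge (function and variable names are illustrative, not Sample Factory internals):

    # Schematic of the config merge logged above: saved config.json + evaluation overrides.
    import json

    def merge_config(config_path, overrides):
        with open(config_path) as f:
            cfg = json.load(f)  # "Loading existing experiment configuration from ..."
        for key, value in overrides.items():
            if key in cfg:
                print(f"Overriding arg {key!r} with value {value!r} passed from command line")
            else:
                print(f"Adding new argument {key!r}={value!r} that is not in the saved config file!")
            cfg[key] = value
        return cfg

    cfg = merge_config(
        "/content/train_dir/default_experiment/config.json",
        {"num_workers": 1, "no_render": True, "save_video": True, "max_num_episodes": 10},
    )
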
[2025-01-16 08:42:58,802][00226] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-16 08:42:58,805][00226] RunningMeanStd input shape: (3, 72, 128)
[2025-01-16 08:42:58,807][00226] RunningMeanStd input shape: (1,)
[2025-01-16 08:42:58,822][00226] ConvEncoder: input_channels=3
[2025-01-16 08:42:58,919][00226] Conv encoder output size: 512
[2025-01-16 08:42:58,920][00226] Policy head output size: 512
[2025-01-16 08:42:59,187][00226] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
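
The restored checkpoint is an ordinary PyTorch .pth file, so it can be inspected directly if needed; the exact keys depend on the Sample Factory version, hence the generic loop (a quick sketch, assuming torch is installed):

    # Peek inside the checkpoint restored above; key names vary by Sample Factory version.
    import torch

    ckpt = torch.load(
        "/content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth",
        map_location="cpu",
    )
    for key, value in ckpt.items():
        shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
        print(key, shape)
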
[2025-01-16 08:43:00,065][00226] Num frames 100...
[2025-01-16 08:43:00,233][00226] Num frames 200...
[2025-01-16 08:43:00,457][00226] Num frames 300...
[2025-01-16 08:43:00,645][00226] Num frames 400...
[2025-01-16 08:43:00,941][00226] Num frames 500...
[2025-01-16 08:43:01,164][00226] Num frames 600...
[2025-01-16 08:43:01,376][00226] Num frames 700...
[2025-01-16 08:43:01,566][00226] Avg episode rewards: #0: 16.650, true rewards: #0: 7.650
[2025-01-16 08:43:01,571][00226] Avg episode reward: 16.650, avg true_objective: 7.650
[2025-01-16 08:43:01,707][00226] Num frames 800...
[2025-01-16 08:43:01,992][00226] Num frames 900...
[2025-01-16 08:43:02,263][00226] Num frames 1000...
[2025-01-16 08:43:02,616][00226] Num frames 1100...
[2025-01-16 08:43:02,869][00226] Num frames 1200...
[2025-01-16 08:43:03,139][00226] Num frames 1300...
[2025-01-16 08:43:03,410][00226] Num frames 1400...
[2025-01-16 08:43:03,577][00226] Num frames 1500...
[2025-01-16 08:43:03,749][00226] Num frames 1600...
[2025-01-16 08:43:03,916][00226] Num frames 1700...
[2025-01-16 08:43:04,086][00226] Num frames 1800...
[2025-01-16 08:43:04,256][00226] Num frames 1900...
[2025-01-16 08:43:04,431][00226] Num frames 2000...
[2025-01-16 08:43:04,587][00226] Avg episode rewards: #0: 24.295, true rewards: #0: 10.295
[2025-01-16 08:43:04,590][00226] Avg episode reward: 24.295, avg true_objective: 10.295
[2025-01-16 08:43:04,663][00226] Num frames 2100...
[2025-01-16 08:43:04,831][00226] Num frames 2200...
[2025-01-16 08:43:05,004][00226] Num frames 2300...
[2025-01-16 08:43:05,179][00226] Num frames 2400...
[2025-01-16 08:43:05,359][00226] Num frames 2500...
[2025-01-16 08:43:05,546][00226] Num frames 2600...
[2025-01-16 08:43:05,724][00226] Num frames 2700...
[2025-01-16 08:43:05,899][00226] Avg episode rewards: #0: 20.877, true rewards: #0: 9.210
[2025-01-16 08:43:05,902][00226] Avg episode reward: 20.877, avg true_objective: 9.210
[2025-01-16 08:43:05,963][00226] Num frames 2800...
[2025-01-16 08:43:06,092][00226] Num frames 2900...
[2025-01-16 08:43:06,224][00226] Num frames 3000...
[2025-01-16 08:43:06,359][00226] Num frames 3100...
[2025-01-16 08:43:06,493][00226] Num frames 3200...
[2025-01-16 08:43:06,625][00226] Num frames 3300...
[2025-01-16 08:43:06,753][00226] Num frames 3400...
[2025-01-16 08:43:06,884][00226] Num frames 3500...
[2025-01-16 08:43:07,013][00226] Num frames 3600...
[2025-01-16 08:43:07,140][00226] Num frames 3700...
[2025-01-16 08:43:07,276][00226] Num frames 3800...
[2025-01-16 08:43:07,397][00226] Avg episode rewards: #0: 21.878, true rewards: #0: 9.627
[2025-01-16 08:43:07,399][00226] Avg episode reward: 21.878, avg true_objective: 9.627
[2025-01-16 08:43:07,462][00226] Num frames 3900...
[2025-01-16 08:43:07,598][00226] Num frames 4000...
[2025-01-16 08:43:07,726][00226] Num frames 4100...
[2025-01-16 08:43:07,854][00226] Num frames 4200...
[2025-01-16 08:43:08,032][00226] Avg episode rewards: #0: 18.998, true rewards: #0: 8.598
[2025-01-16 08:43:08,034][00226] Avg episode reward: 18.998, avg true_objective: 8.598
[2025-01-16 08:43:08,038][00226] Num frames 4300...
[2025-01-16 08:43:08,164][00226] Num frames 4400...
[2025-01-16 08:43:08,297][00226] Num frames 4500...
[2025-01-16 08:43:08,427][00226] Num frames 4600...
[2025-01-16 08:43:08,562][00226] Num frames 4700...
[2025-01-16 08:43:08,687][00226] Num frames 4800...
[2025-01-16 08:43:08,801][00226] Avg episode rewards: #0: 17.572, true rewards: #0: 8.072
[2025-01-16 08:43:08,803][00226] Avg episode reward: 17.572, avg true_objective: 8.072
[2025-01-16 08:43:08,877][00226] Num frames 4900...
[2025-01-16 08:43:09,009][00226] Num frames 5000...
[2025-01-16 08:43:09,132][00226] Avg episode rewards: #0: 15.479, true rewards: #0: 7.193
[2025-01-16 08:43:09,133][00226] Avg episode reward: 15.479, avg true_objective: 7.193
[2025-01-16 08:43:09,218][00226] Num frames 5100...
[2025-01-16 08:43:09,355][00226] Num frames 5200...
[2025-01-16 08:43:09,488][00226] Avg episode rewards: #0: 14.074, true rewards: #0: 6.574
[2025-01-16 08:43:09,489][00226] Avg episode reward: 14.074, avg true_objective: 6.574
[2025-01-16 08:43:09,544][00226] Num frames 5300...
[2025-01-16 08:43:09,682][00226] Num frames 5400...
[2025-01-16 08:43:09,815][00226] Num frames 5500...
[2025-01-16 08:43:09,949][00226] Num frames 5600...
[2025-01-16 08:43:10,103][00226] Avg episode rewards: #0: 13.417, true rewards: #0: 6.306
[2025-01-16 08:43:10,105][00226] Avg episode reward: 13.417, avg true_objective: 6.306
[2025-01-16 08:43:10,142][00226] Num frames 5700...
[2025-01-16 08:43:10,277][00226] Num frames 5800...
[2025-01-16 08:43:10,407][00226] Num frames 5900...
[2025-01-16 08:43:10,536][00226] Num frames 6000...
[2025-01-16 08:43:10,671][00226] Num frames 6100...
[2025-01-16 08:43:10,797][00226] Num frames 6200...
[2025-01-16 08:43:10,926][00226] Num frames 6300...
[2025-01-16 08:43:11,055][00226] Num frames 6400...
[2025-01-16 08:43:11,181][00226] Num frames 6500...
[2025-01-16 08:43:11,316][00226] Num frames 6600...
[2025-01-16 08:43:11,443][00226] Num frames 6700...
[2025-01-16 08:43:11,571][00226] Num frames 6800...
[2025-01-16 08:43:11,714][00226] Num frames 6900...
[2025-01-16 08:43:11,849][00226] Num frames 7000...
[2025-01-16 08:43:11,980][00226] Avg episode rewards: #0: 15.258, true rewards: #0: 7.058
[2025-01-16 08:43:11,982][00226] Avg episode reward: 15.258, avg true_objective: 7.058
[2025-01-16 08:43:52,268][00226] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
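
The "Avg episode rewards" lines above are running means over the episodes finished so far, so individual episode returns can be recovered: with avg_n the mean after n episodes, episode n scored r_n = n*avg_n - (n-1)*avg_{n-1} (for the second episode above, 2*24.295 - 16.650 = 31.94). A short sketch using the ten averages from this evaluation:

    # Recover per-episode returns from the running averages printed during evaluation.
    def episode_returns(running_avgs):
        prev_total = 0.0
        for n, avg in enumerate(running_avgs, start=1):
            total = n * avg          # sum of the first n episode returns
            yield total - prev_total
            prev_total = total

    avgs = [16.650, 24.295, 20.877, 21.878, 18.998, 17.572, 15.479, 14.074, 13.417, 15.258]
    print([round(r, 2) for r in episode_returns(avgs)])
    # -> [16.65, 31.94, 14.04, 24.88, 7.48, 10.44, 2.92, 4.24, 8.16, 31.83]
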
[2025-01-16 08:46:03,130][00226] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2025-01-16 08:46:03,132][00226] Overriding arg 'num_workers' with value 1 passed from command line
[2025-01-16 08:46:03,134][00226] Adding new argument 'no_render'=True that is not in the saved config file!
[2025-01-16 08:46:03,136][00226] Adding new argument 'save_video'=True that is not in the saved config file!
[2025-01-16 08:46:03,137][00226] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2025-01-16 08:46:03,139][00226] Adding new argument 'video_name'=None that is not in the saved config file!
[2025-01-16 08:46:03,141][00226] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2025-01-16 08:46:03,142][00226] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2025-01-16 08:46:03,143][00226] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2025-01-16 08:46:03,144][00226] Adding new argument 'hf_repository'='saxelsso/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2025-01-16 08:46:03,145][00226] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2025-01-16 08:46:03,146][00226] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2025-01-16 08:46:03,147][00226] Adding new argument 'train_script'=None that is not in the saved config file!
[2025-01-16 08:46:03,148][00226] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2025-01-16 08:46:03,149][00226] Using frameskip 1 and render_action_repeat=4 for evaluation
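
This second pass sets push_to_hub=True with hf_repository='saxelsso/rl_course_vizdoom_health_gathering_supreme', which matches the Hugging Face Deep RL course workflow for this environment. A hedged reconstruction of the call that would produce exactly these overrides; parse_vizdoom_cfg is the course notebook's helper (shown inline), not a library API, and the sf_examples import paths are assumptions based on sample-factory 2.x:

    # Probable invocation behind this log, following the HF Deep RL course notebook.
    # Environment registration (register_env for the Doom envs) is omitted here;
    # the notebook performs it before training and evaluation.
    from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
    from sample_factory.enjoy import enjoy
    from sf_examples.vizdoom.doom.doom_params import add_doom_env_args, doom_override_defaults

    def parse_vizdoom_cfg(argv=None, evaluation=False):
        parser, _ = parse_sf_args(argv=argv, evaluation=evaluation)
        add_doom_env_args(parser)          # Doom-specific options
        doom_override_defaults(parser)     # VizDoom default hyperparameters
        return parse_full_cfg(parser, argv)

    cfg = parse_vizdoom_cfg(
        argv=[
            "--env=doom_health_gathering_supreme",
            "--num_workers=1",
            "--no_render",
            "--save_video",
            "--max_num_frames=100000",
            "--max_num_episodes=10",
            "--push_to_hub",
            "--hf_repository=saxelsso/rl_course_vizdoom_health_gathering_supreme",
        ],
        evaluation=True,
    )
    status = enjoy(cfg)
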
[2025-01-16 08:46:03,179][00226] RunningMeanStd input shape: (3, 72, 128)
[2025-01-16 08:46:03,182][00226] RunningMeanStd input shape: (1,)
[2025-01-16 08:46:03,193][00226] ConvEncoder: input_channels=3
[2025-01-16 08:46:03,226][00226] Conv encoder output size: 512
[2025-01-16 08:46:03,227][00226] Policy head output size: 512
[2025-01-16 08:46:03,247][00226] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2025-01-16 08:46:03,689][00226] Num frames 100...
[2025-01-16 08:46:03,816][00226] Num frames 200...
[2025-01-16 08:46:03,944][00226] Num frames 300...
[2025-01-16 08:46:04,070][00226] Num frames 400...
[2025-01-16 08:46:04,195][00226] Num frames 500...
[2025-01-16 08:46:04,266][00226] Avg episode rewards: #0: 7.120, true rewards: #0: 5.120
[2025-01-16 08:46:04,268][00226] Avg episode reward: 7.120, avg true_objective: 5.120
[2025-01-16 08:46:04,386][00226] Num frames 600...
[2025-01-16 08:46:04,513][00226] Num frames 700...
[2025-01-16 08:46:04,644][00226] Num frames 800...
[2025-01-16 08:46:04,769][00226] Num frames 900...
[2025-01-16 08:46:04,892][00226] Num frames 1000...
[2025-01-16 08:46:05,017][00226] Num frames 1100...
[2025-01-16 08:46:05,177][00226] Avg episode rewards: #0: 9.420, true rewards: #0: 5.920
[2025-01-16 08:46:05,178][00226] Avg episode reward: 9.420, avg true_objective: 5.920
[2025-01-16 08:46:05,202][00226] Num frames 1200...
[2025-01-16 08:46:05,336][00226] Num frames 1300...
[2025-01-16 08:46:05,473][00226] Num frames 1400...
[2025-01-16 08:46:05,600][00226] Num frames 1500...
[2025-01-16 08:46:05,725][00226] Num frames 1600...
[2025-01-16 08:46:05,851][00226] Num frames 1700...
[2025-01-16 08:46:05,979][00226] Num frames 1800...
[2025-01-16 08:46:06,105][00226] Num frames 1900...
[2025-01-16 08:46:06,237][00226] Num frames 2000...
[2025-01-16 08:46:06,369][00226] Num frames 2100...
[2025-01-16 08:46:06,506][00226] Num frames 2200...
[2025-01-16 08:46:06,637][00226] Num frames 2300...
[2025-01-16 08:46:06,777][00226] Avg episode rewards: #0: 15.227, true rewards: #0: 7.893
[2025-01-16 08:46:06,778][00226] Avg episode reward: 15.227, avg true_objective: 7.893
[2025-01-16 08:46:06,820][00226] Num frames 2400...
[2025-01-16 08:46:06,944][00226] Num frames 2500...
[2025-01-16 08:46:07,073][00226] Num frames 2600...
[2025-01-16 08:46:07,201][00226] Num frames 2700...
[2025-01-16 08:46:07,336][00226] Num frames 2800...
[2025-01-16 08:46:07,469][00226] Num frames 2900...
[2025-01-16 08:46:07,598][00226] Num frames 3000...
[2025-01-16 08:46:07,728][00226] Num frames 3100...
[2025-01-16 08:46:07,853][00226] Num frames 3200...
[2025-01-16 08:46:07,978][00226] Num frames 3300...
[2025-01-16 08:46:08,148][00226] Avg episode rewards: #0: 17.230, true rewards: #0: 8.480
[2025-01-16 08:46:08,150][00226] Avg episode reward: 17.230, avg true_objective: 8.480
[2025-01-16 08:46:08,164][00226] Num frames 3400...
[2025-01-16 08:46:08,295][00226] Num frames 3500...
[2025-01-16 08:46:08,421][00226] Num frames 3600...
[2025-01-16 08:46:08,554][00226] Num frames 3700...
[2025-01-16 08:46:08,679][00226] Num frames 3800...
[2025-01-16 08:46:08,809][00226] Num frames 3900...
[2025-01-16 08:46:08,938][00226] Num frames 4000...
[2025-01-16 08:46:09,065][00226] Num frames 4100...
[2025-01-16 08:46:09,191][00226] Num frames 4200...
[2025-01-16 08:46:09,321][00226] Num frames 4300...
[2025-01-16 08:46:09,445][00226] Num frames 4400...
[2025-01-16 08:46:09,581][00226] Num frames 4500...
[2025-01-16 08:46:09,709][00226] Num frames 4600...
[2025-01-16 08:46:09,836][00226] Num frames 4700...
[2025-01-16 08:46:09,977][00226] Avg episode rewards: #0: 21.136, true rewards: #0: 9.536
[2025-01-16 08:46:09,979][00226] Avg episode reward: 21.136, avg true_objective: 9.536
[2025-01-16 08:46:10,024][00226] Num frames 4800...
[2025-01-16 08:46:10,154][00226] Num frames 4900...
[2025-01-16 08:46:10,286][00226] Num frames 5000...
[2025-01-16 08:46:10,413][00226] Num frames 5100...
[2025-01-16 08:46:10,555][00226] Num frames 5200...
[2025-01-16 08:46:10,683][00226] Num frames 5300...
[2025-01-16 08:46:10,809][00226] Num frames 5400...
[2025-01-16 08:46:10,934][00226] Num frames 5500...
[2025-01-16 08:46:11,102][00226] Avg episode rewards: #0: 20.482, true rewards: #0: 9.315
[2025-01-16 08:46:11,104][00226] Avg episode reward: 20.482, avg true_objective: 9.315
[2025-01-16 08:46:11,122][00226] Num frames 5600...
[2025-01-16 08:46:11,277][00226] Num frames 5700...
[2025-01-16 08:46:11,405][00226] Num frames 5800...
[2025-01-16 08:46:11,536][00226] Num frames 5900...
[2025-01-16 08:46:11,670][00226] Num frames 6000...
[2025-01-16 08:46:11,798][00226] Num frames 6100...
[2025-01-16 08:46:11,927][00226] Num frames 6200...
[2025-01-16 08:46:12,054][00226] Num frames 6300...
[2025-01-16 08:46:12,185][00226] Num frames 6400...
[2025-01-16 08:46:12,339][00226] Num frames 6500...
[2025-01-16 08:46:12,498][00226] Num frames 6600...
[2025-01-16 08:46:12,694][00226] Avg episode rewards: #0: 21.253, true rewards: #0: 9.539
[2025-01-16 08:46:12,696][00226] Avg episode reward: 21.253, avg true_objective: 9.539
[2025-01-16 08:46:12,740][00226] Num frames 6700...
[2025-01-16 08:46:12,904][00226] Num frames 6800...
[2025-01-16 08:46:13,077][00226] Num frames 6900...
[2025-01-16 08:46:13,252][00226] Num frames 7000...
[2025-01-16 08:46:13,472][00226] Avg episode rewards: #0: 19.241, true rewards: #0: 8.866
[2025-01-16 08:46:13,477][00226] Avg episode reward: 19.241, avg true_objective: 8.866
[2025-01-16 08:46:13,491][00226] Num frames 7100...
[2025-01-16 08:46:13,669][00226] Num frames 7200...
[2025-01-16 08:46:13,847][00226] Num frames 7300...
[2025-01-16 08:46:14,025][00226] Num frames 7400...
[2025-01-16 08:46:14,198][00226] Num frames 7500...
[2025-01-16 08:46:14,389][00226] Num frames 7600...
[2025-01-16 08:46:14,573][00226] Num frames 7700...
[2025-01-16 08:46:14,755][00226] Num frames 7800...
[2025-01-16 08:46:14,936][00226] Num frames 7900...
[2025-01-16 08:46:15,115][00226] Avg episode rewards: #0: 19.773, true rewards: #0: 8.884
[2025-01-16 08:46:15,116][00226] Avg episode reward: 19.773, avg true_objective: 8.884
[2025-01-16 08:46:15,125][00226] Num frames 8000...
[2025-01-16 08:46:15,256][00226] Num frames 8100...
[2025-01-16 08:46:15,385][00226] Num frames 8200...
[2025-01-16 08:46:15,510][00226] Num frames 8300...
[2025-01-16 08:46:15,643][00226] Num frames 8400...
[2025-01-16 08:46:15,778][00226] Num frames 8500...
[2025-01-16 08:46:15,907][00226] Num frames 8600...
[2025-01-16 08:46:16,041][00226] Num frames 8700...
[2025-01-16 08:46:16,093][00226] Avg episode rewards: #0: 19.100, true rewards: #0: 8.700
[2025-01-16 08:46:16,094][00226] Avg episode reward: 19.100, avg true_objective: 8.700
[2025-01-16 08:47:05,401][00226] Replay video saved to /content/train_dir/default_experiment/replay.mp4!