kingabzpro committed on
Commit
0ea370b
1 Parent(s): 4ecb4b8

Code added

Files changed (1)
  1. README.md +40 -1
README.md CHANGED
@@ -24,5 +24,44 @@ model-index:
  This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
 
  ## Usage (with Stable-baselines3)
- TODO: Add your code
+ ```python
+ import gym
+ from stable_baselines3 import PPO
+ from stable_baselines3.common.evaluation import evaluate_policy
+ from stable_baselines3.common.env_util import make_vec_env
+
+ # Create a vectorized environment of 16 parallel environments
+ env = make_vec_env("LunarLander-v2", n_envs=16)
+
+ # Optimized hyperparameters
+ model = PPO(
+     "MlpPolicy",
+     env=env,
+     n_steps=655,
+     batch_size=32,
+     n_epochs=8,
+     gamma=0.998,
+     gae_lambda=0.98,
+     ent_coef=0.01,
+     verbose=1,
+ )
+
+ # Train for 5,000,000 timesteps
+ model.learn(total_timesteps=int(5e6))
+
+ # Create a fresh environment for evaluation
+ eval_env = gym.make("LunarLander-v2")
+
+ # Evaluate the model with 10 evaluation episodes and deterministic=True
+ mean_reward, std_reward = evaluate_policy(
+     model, eval_env, n_eval_episodes=10, deterministic=True
+ )
+
+ # Print the results
+ print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
+
+ # >>> mean_reward=254.56 +/- 18.45056958672337
+ ```
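
The snippet above trains the agent from scratch. To try the uploaded checkpoint directly, here is a minimal sketch using the `load_from_hub` helper from the `huggingface_sb3` package; the repo id `kingabzpro/ppo-LunarLander-v2` and the filename `ppo-LunarLander-v2.zip` are assumptions, since neither appears in this commit:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hugging Face Hub.
# NOTE: repo_id and filename are assumptions, not stated in this commit.
checkpoint = load_from_hub(
    repo_id="kingabzpro/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the downloaded policy the same way as in the README snippet
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(
    model, eval_env, n_eval_episodes=10, deterministic=True
)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
```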