hishamcse committed · verified
Commit f686406 · 1 Parent(s): fdd95b4

Update README.md

Files changed (1): README.md (+34 −25)
README.md CHANGED
@@ -7,29 +7,38 @@ tags:
  - ML-Agents-Pyramids
  ---

- # **ppo** Agent playing **Pyramids**
- This is a trained model of a **ppo** agent playing **Pyramids**
- using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
-
- ## Usage (with ML-Agents)
- The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
-
- We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
- browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- - A *longer tutorial* to understand how works ML-Agents:
- https://huggingface.co/learn/deep-rl-course/unit5/introduction
-
- ### Resume the training
- ```bash
- mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
- ```
-
- ### Watch your Agent play
- You can watch your agent **playing directly in your browser**
-
- 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
- 2. Step 1: Find your model_id: hishamcse/ppo-PyramidsRND
- 3. Step 2: Select your *.nn /*.onnx file
- 4. Click on Watch the agent play 👀
+ # **ppo** Agent playing **Pyramids**
+ This is a trained model of a **ppo** agent playing **Pyramids**
+ using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
+
+ ## Code
+
+ GitHub repos (give them a star if you find them useful):
+ * https://github.com/hishamcse/DRL-Renegades-Game-Bots
+ * https://github.com/hishamcse/Advanced-DRL-Renegades-Game-Bots
+
+ Kaggle Notebook:
+ * https://www.kaggle.com/code/syedjarullahhisham/drl-huggingface-unit-5-unity-ml-snowball-pyramid
+
+ ## Usage (with ML-Agents)
+ The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
+
+ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
+ - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
+ browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
+ - A *longer tutorial* to understand how ML-Agents works:
+ https://huggingface.co/learn/deep-rl-course/unit5/introduction
+
+ ### Resume the training
+ ```bash
+ mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
+ ```
+
+ ### Watch your Agent play
+ You can watch your agent **playing directly in your browser**:
+
+ 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
+ 2. Find your model_id: hishamcse/ppo-PyramidsRND
+ 3. Select your *.nn / *.onnx file
+ 4. Click on Watch the agent play 👀
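The `mlagents-learn` resume command in the diff above expects a trainer configuration YAML. The actual file used for this run is not part of this commit; below is a minimal sketch of what a PPO + RND configuration for Pyramids might look like, with illustrative hyperparameter values loosely based on the ML-Agents trainer-config schema:

```yaml
# Hypothetical Pyramids trainer config (illustrative values only)
behaviors:
  Pyramids:
    trainer_type: ppo
    hyperparameters:
      batch_size: 128
      buffer_size: 2048
      learning_rate: 0.0003
      learning_rate_schedule: linear
    network_settings:
      hidden_units: 512
      num_layers: 2
    reward_signals:
      extrinsic:            # environment reward
        gamma: 0.99
        strength: 1.0
      rnd:                  # curiosity via Random Network Distillation
        gamma: 0.99
        strength: 0.01
    max_steps: 1000000
    time_horizon: 128
    summary_freq: 30000
```

You would pass this file's path as `<your_configuration_file_path.yaml>` and reuse the same `--run-id` to resume from the saved checkpoints.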