file_path | content | size | lang | avg_line_length | max_line_length | alphanum_fraction |
---|---|---|---|---|---|---|
StanfordVL/OmniGibson/docs/miscellaneous/known_issues.md | # **Known Issues & Troubleshooting**
## **Known Issues**
??? question "How can I parallelize running multiple scenes in OmniGibson?"
Currently, to run multiple scenes in parallel, you will need to launch separate instances of the OmniGibson environment. While this introduces some overhead due to running multiple instances of IsaacSim, we are actively working on implementing parallelization capabilities. Our goal is to enable running multiple scenes within a single instance, streamlining the process and reducing the associated overhead.
## **Troubleshooting**
??? question "I cannot open Omniverse Launcher AppImage on Linux"
You probably need to [install FUSE](https://github.com/AppImage/AppImageKit/wiki/FUSE) to run the Omniverse Launcher AppImage.
??? question "OmniGibson is stuck at `HydraEngine rtx failed creating scene renderer.`"
`OmniGibson` is likely using an unsupported GPU (default is id 0). Run `nvidia-smi` to see the active list of GPUs, and select an NVIDIA-supported GPU and set its corresponding ID when running `OmniGibson` with `export OMNIGIBSON_GPU_ID=<ID NUMBER>`. | 1,120 | Markdown | 64.941173 | 412 | 0.776786 |
StanfordVL/OmniGibson/docs/miscellaneous/contact.md | # **Contact**
If you have any questions, comments, or concerns, please feel free to reach out to us by joining our Discord server:
<a href="https://discord.gg/bccR5vGFEx"><img src="https://discordapp.com/api/guilds/1166422812160966707/widget.png?style=banner3"></a> | 268 | Markdown | 52.799989 | 134 | 0.757463 |
StanfordVL/OmniGibson/docs/getting_started/examples.md | ---
icon: material/laptop
---
# **Examples**
**`OmniGibson`** ships with many demo scripts highlighting its modularity and diverse feature set, intended as building blocks to enable your research. Let's try them out!
***
## **A quick word about macros**
??? question annotate "Why macros?"
Macros enforce global behavior that is consistent within an individual python process but can differ between processes. This is useful because globally enabling all of **`OmniGibson`**'s features can cause unnecessary slowdowns, and so configuring the macros for your specific use case can optimize performance.
For example, Omniverse provides a so-called `flatcache` feature which provides significant performance boosts, but cannot be used when fluids or soft bodies are present. So, we ideally should always have `gm.USE_FLATCACHE=True` unless we have fluids or soft bodies in our environment.
`macros` define a globally available set of magic numbers or flags set throughout **`OmniGibson`**. These can either be directly set in `omnigibson.macros.py`, or can be programmatically modified at runtime via:
```{.python .annotate}
from omnigibson.macros import gm, macros
gm.<GLOBAL_MACRO> = <VALUE> # (1)!
macros.<OG_DIRECTORY>.<OG_MODULE>.<MODULE_MACRO> = <VALUE> # (2)!
```
1. `gm` refers to the "global" macros -- i.e.: settings that generally impact the entire **`OmniGibson`** stack. These are usually the only settings you may need to modify.
2. `macros` captures all remaining macros defined throughout **`OmniGibson`**'s codebase -- these are often hardcoded default settings or magic numbers defined in a specific module. These can also be overridden, but we recommend inspecting the module first to understand how it is used.
Many of our examples set various `macros` settings at the beginning of the script, which is a good way to understand use cases for modifying them!
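For instance, a minimal sketch of this in practice (only `gm.USE_FLATCACHE` is taken from the discussion above; the commented module-macro line is just a placeholder pattern, so inspect `omnigibson/macros.py` and the relevant module for the authoritative names):
```python
from omnigibson.macros import gm, macros

# Global macro: enable flatcache for a performance boost, since this use case
# involves no fluids or soft bodies (see the note above)
gm.USE_FLATCACHE = True

# Module-level macros follow macros.<OG_DIRECTORY>.<OG_MODULE>.<MODULE_MACRO>;
# the exact attribute names depend on the module you are configuring, e.g.:
# macros.object_states.<some_module>.<some_macro> = <value>
```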
***
## **Environments**
These examples showcase the full **`OmniGibson`** stack in use, and the types of environments immediately supported.
### **BEHAVIOR Task Demo**
!!! abstract "This demo is useful for..."
* Understanding how to instantiate a BEHAVIOR task
* Understanding how a pre-defined configuration file is used
```{.python .annotate}
python -m omnigibson.examples.environments.behavior_env_demo
```
This demo instantiates one of our BEHAVIOR tasks (optionally sampling object locations online) in a fully-populated scene and loads a `Fetch` robot. The robot executes random actions and the environment is reset periodically.
??? code "behavior_env_demo.py"
``` py linenums="1"
--8<-- "examples/environments/behavior_env_demo.py"
```
### **Navigation Task Demo**
!!! abstract "This demo is useful for..."
* Understanding how to instantiate a navigation task
* Understanding how a pre-defined configuration file is used
```{.python .annotate}
python -m omnigibson.examples.environments.navigation_env_demo
```
This demo instantiates one of our navigation tasks in a fully-populated scene and loads a `Turtlebot` robot. The robot executes random actions and the environment is reset periodically.
??? code "navigation_env_demo.py"
``` py linenums="1"
--8<-- "examples/environments/navigation_env_demo.py"
```
## **Learning**
These examples showcase how **`OmniGibson`** can be used to train embodied AI agents.
### **Reinforcement Learning Demo**
!!! abstract "This demo is useful for..."
* Understanding how to hook up **`OmniGibson`** to an external algorithm
* Understanding how to train and evaluate a policy
```{.python .annotate}
python -m omnigibson.examples.learning.navigation_policy_demo
```
This demo loads a BEHAVIOR task with a `Fetch` robot, and trains / evaluates the agent using [Stable Baselines3](https://stable-baselines3.readthedocs.io/en/master/)'s PPO algorithm.
??? code "navigation_policy_demo.py"
``` py linenums="1"
--8<-- "examples/learning/navigation_policy_demo.py"
```
## **Scenes**
These examples showcase how to leverage **`OmniGibson`**'s large-scale, diverse scenes shipped with the BEHAVIOR dataset.
### **Scene Selector Demo**
!!! abstract "This demo is useful for..."
* Understanding how to load a scene into **`OmniGibson`**
* Accessing all BEHAVIOR dataset scenes
```{.python .annotate}
python -m omnigibson.examples.scenes.scene_selector
```
This demo lets you choose a scene from the BEHAVIOR dataset, loads it along with a `Turtlebot` robot, and cycles the resulting environment periodically.
??? code "scene_selector.py"
``` py linenums="1"
--8<-- "examples/scenes/scene_selector.py"
```
### **Scene Tour Demo**
!!! abstract "This demo is useful for..."
* Understanding how to load a scene into **`OmniGibson`**
* Understanding how to generate a trajectory from a set of waypoints
```{.python .annotate}
python -m omnigibson.examples.scenes.scene_tour_demo
```
This demo lets you choose a scene from the BEHAVIOR dataset. It allows you to move the camera using the keyboard, select waypoints, and then programmatically generates a video trajectory from the selected waypoints.
??? code "scene_tour_demo.py"
``` py linenums="1"
--8<-- "examples/scenes/scene_tour_demo.py"
```
### **Traversability Map Demo**
!!! abstract "This demo is useful for..."
* Understanding how to leverage traversability map information from BEHAVIOR dataset scenes
```{.python .annotate}
python -m omnigibson.examples.scenes.traversability_map_example
```
This demo lets you choose a scene from the BEHAVIOR dataset, and generates its corresponding traversability map.
??? code "traversability_map_example.py"
``` py linenums="1"
--8<-- "examples/scenes/traversability_map_example.py"
```
## **Objects**
These examples showcase how to leverage objects in **`OmniGibson`**.
### **Load Object Demo**
!!! abstract "This demo is useful for..."
* Understanding how to load an object into **`OmniGibson`**
* Accessing all BEHAVIOR dataset asset categories and models
```{.python .annotate}
python -m omnigibson.examples.objects.load_object_selector
```
This demo lets you choose a specific object from the BEHAVIOR dataset, and loads the requested object into an environment.
??? code "load_object_selector.py"
``` py linenums="1"
--8<-- "examples/objects/load_object_selector.py"
```
### **Object Visualizer Demo**
!!! abstract "This demo is useful for..."
* Viewing objects' textures as rendered in **`OmniGibson`**
* Viewing articulated objects' range of motion
* Understanding how to reference object instances from the environment
* Understanding how to set object poses and joint states
```{.python .annotate}
python -m omnigibson.examples.objects.visualize_object
```
This demo lets you choose a specific object from the BEHAVIOR dataset, and rotates the object in-place. If the object is articulated, it additionally moves its joints through its full range of motion.
??? code "visualize_object.py"
``` py linenums="1"
--8<-- "examples/objects/visualize_object.py"
```
### **Highlight Object**
!!! abstract "This demo is useful for..."
* Understanding how to highlight individual objects within a cluttered scene
* Understanding how to access groups of objects from the environment
```{.python .annotate}
python -m omnigibson.examples.objects.highlight_objects
```
This demo loads the Rs_int scene and repeatedly toggles the windows' highlight on and off.
??? code "highlight_objects.py"
``` py linenums="1"
--8<-- "examples/objects/highlight_objects.py"
```
### **Draw Object Bounding Box Demo**
!!! abstract annotate "This demo is useful for..."
* Understanding how to access observations from a `GymObservable` object
* Understanding how to access objects' bounding box information
* Understanding how to dynamically modify vision modalities
*[GymObservable]: [`Environment`](../reference/envs/env_base.md), all sensors extending from [`BaseSensor`](../reference/sensors/sensor_base.md), and all objects extending from [`BaseObject`](../reference/objects/object_base.md) (which includes all robots extending from [`BaseRobot`](../reference/robots/robot_base.md)!) are [`GymObservable`](../reference/utils/gym_utils.md#utils.gym_utils.GymObservable) objects!
```{.python .annotate}
python -m omnigibson.examples.objects.draw_bounding_box
```
This demo loads a door object and banana object, and partially obscures the banana with the door. It generates both "loose" and "tight" bounding boxes (where the latter respects occlusions) for both objects, and dumps them to an image on disk.
??? code "draw_bounding_box.py"
``` py linenums="1"
--8<-- "examples/objects/draw_bounding_box.py"
```
## **Object States**
These examples showcase **`OmniGibson`**'s powerful object states functionality, which captures both individual and relational kinematic and non-kinematic states.
### **Slicing Demo**
!!! abstract "This demo is useful for..."
* Understanding how slicing works in **`OmniGibson`**
* Understanding how to access individual objects once the environment is created
```{.python .annotate}
python -m omnigibson.examples.object_states.slicing_demo
```
This demo spawns an apple on a table with a knife above it, and lets the knife fall to "cut" the apple in half.
??? code "slicing_demo.py"
``` py linenums="1"
--8<-- "examples/object_states/slicing_demo.py"
```
### **Dicing Demo**
!!! abstract "This demo is useful for..."
* Understanding how to leverage the `Dicing` state
* Understanding how to enable objects to be `diceable`
```{.python .annotate}
python -m omnigibson.examples.object_states.dicing_demo
```
This demo loads an apple and a knife, and showcases how the apple can be diced into smaller chunks with the knife.
??? code "dicing_demo.py"
``` py linenums="1"
--8<-- "examples/object_states/dicing_demo.py"
```
### **Folded and Unfolded Demo**
!!! abstract "This demo is useful for..."
* Understanding how to load a softbody (cloth) version of a BEHAVIOR dataset object
* Understanding how to enable cloth objects to be `foldable`
* Understanding the current heuristics used for gauging a cloth's "foldness"
```{.python .annotate}
python -m omnigibson.examples.object_states.folded_unfolded_state_demo
```
This demo loads in three different cloth objects, and allows you to manipulate them while printing out their `Folded` state status in real-time. Try manipulating the object by holding down **`Shift`** and then **`Left-click + Drag`**!
??? code "folded_unfolded_state_demo.py"
``` py linenums="1"
--8<-- "examples/object_states/folded_unfolded_state_demo.py"
```
### **Overlaid Demo**
!!! abstract "This demo is useful for..."
* Understanding how cloth objects can be overlaid on rigid objects
* Understanding current heuristics used for gauging a cloth's "overlaid" status
```{.python .annotate}
python -m omnigibson.examples.object_states.overlaid_demo
```
This demo loads in a carpet on top of a table. The demo allows you to manipulate the carpet while printing out its `Overlaid` state status in real-time. Try manipulating the object by holding down **`Shift`** and then **`Left-click + Drag`**!
??? code "overlaid_demo.py"
``` py linenums="1"
--8<-- "examples/object_states/overlaid_demo.py"
```
### **Heat Source or Sink Demo**
!!! abstract "This demo is useful for..."
* Understanding how a heat source (or sink) is visualized in **`OmniGibson`**
* Understanding how dynamic fire visuals are generated in real-time
```{.python .annotate}
python -m omnigibson.examples.object_states.heat_source_or_sink_demo
```
This demo loads in a stove and toggles its `HeatSource` on and off, showcasing the dynamic fire visuals available in **`OmniGibson`**.
??? code "heat_source_or_sink_demo.py"
``` py linenums="1"
--8<-- "examples/object_states/heat_source_or_sink_demo.py"
```
### **Temperature Demo**
!!! abstract "This demo is useful for..."
* Understanding how to dynamically sample kinematic states for BEHAVIOR dataset objects
* Understanding how temperature changes are propagated to individual objects from individual heat sources or sinks
```{.python .annotate}
python -m omnigibson.examples.object_states.temperature_demo
```
This demo loads in various heat sources and sinks, and places an apple within close proximity to each of them. As the environment steps, each apple's temperature is printed in real-time, showcasing **`OmniGibson`**'s rudimentary temperature dynamics.
??? code "temperature_demo.py"
``` py linenums="1"
--8<-- "examples/object_states/temperature_demo.py"
```
### **Heated Demo**
!!! abstract "This demo is useful for..."
* Understanding how temperature modifications can cause objects' visual changes
* Understanding how dynamic steam visuals are generated in real-time
```{.python .annotate}
python -m omnigibson.examples.object_states.heated_state_demo
```
This demo loads in three bowls, and immediately sets their temperatures past their `Heated` threshold. Steam is generated in real-time from these objects, and then disappears once the temperature of the objects drops below their `Heated` threshold.
??? code "heated_state_demo.py"
``` py linenums="1"
--8<-- "examples/object_states/heated_state_demo.py"
```
### **Onfire Demo**
!!! abstract "This demo is useful for..."
* Understanding how changing onfire state can cause objects' visual changes
* Understanding how onfire can be triggered by nearby onfire objects
```{.python .annotate}
python -m omnigibson.examples.object_states.onfire_demo
```
This demo loads in a stove (toggled on) and two apples. The first apple will be ignited by the stove first, then the second apple will be ignited by the first apple.
??? code "onfire_demo.py"
``` py linenums="1"
--8<-- "examples/object_states/onfire_demo.py"
```
### **Particle Applier and Remover Demo**
!!! abstract "This demo is useful for..."
* Understanding how a `ParticleRemover` or `ParticleApplier` object can be generated
* Understanding how particles can be dynamically generated on objects
* Understanding different methods for applying and removing particles via the `ParticleRemover` or `ParticleApplier` object
```{.python .annotate}
python -m omnigibson.examples.object_states.particle_applier_remover_demo
```
This demo loads in a washtowel and a table and lets you choose the ability configuration to enable the washtowel with. The washtowel will then either remove or apply particles dynamically on the table while moving.
??? code "particle_applier_remover_demo.py"
``` py linenums="1"
--8<-- "examples/object_states/particle_applier_remover_demo.py"
```
### **Particle Source and Sink Demo**
!!! abstract "This demo is useful for..."
* Understanding how a `ParticleSource` or `ParticleSink` object can be generated
* Understanding how particles can be dynamically generated and destroyed via such objects
```{.python .annotate}
python -m omnigibson.examples.object_states.particle_source_sink_demo
```
This demo loads in a sink, which is enabled with both the ParticleSource and ParticleSink states. The sink's particle source is located at the faucet spout and spawns a continuous stream of water particles, which is then destroyed ("sunk") by the sink's particle sink located at the drain.
??? note "Difference between `ParticleApplier/Removers` and `ParticleSource/Sinks`"
The key difference between `ParticleApplier/Removers` and `ParticleSource/Sinks` is that `Applier/Removers`
require contact (if using `ParticleProjectionMethod.ADJACENCY`) or overlap
(if using `ParticleProjectionMethod.PROJECTION`) in order to spawn / remove particles, and generally only spawn
particles at the contact points. `ParticleSource/Sinks` are special cases of `ParticleApplier/Removers` that
always use `ParticleProjectionMethod.PROJECTION` and always spawn / remove particles within their projection volume,
regardless of overlap with other objects.
??? code "particle_source_sink_demo.py"
``` py linenums="1"
--8<-- "examples/object_states/particle_source_sink_demo.py"
```
### **Kinematics Demo**
!!! abstract "This demo is useful for..."
* Understanding how to dynamically sample kinematic states for BEHAVIOR dataset objects
* Understanding how to import additional objects after the environment is created
```{.python .annotate}
python -m omnigibson.examples.object_states.sample_kinematics_demo
```
This demo procedurally generates a mini populated scene, spawning in a cabinet and placing boxes in its shelves, and then generating a microwave on a cabinet with a plate and apples sampled both inside and on top of it.
??? code "sample_kinematics_demo.py"
``` py linenums="1"
--8<-- "examples/object_states/sample_kinematics_demo.py"
```
### **Attachment Demo**
!!! abstract "This demo is useful for..."
* Understanding how to leverage the `Attached` state
* Understanding how to enable objects to be `attachable`
```{.python .annotate}
python -m omnigibson.examples.object_states.attachment_demo
```
This demo loads an assembled shelf, and showcases how it can be manipulated to attach and detach parts.
??? code "attachment_demo.py"
``` py linenums="1"
--8<-- "examples/object_states/attachment_demo.py"
```
### **Object Texture Demo**
!!! abstract "This demo is useful for..."
* Understanding how different object states can result in texture changes
* Understanding how to enable objects with texture-changing states
* Understanding how to dynamically modify object states
```{.python .annotate}
python -m omnigibson.examples.object_states.object_state_texture_demo
```
This demo loads in a single object, and then dynamically modifies its state so that its texture changes with each modification.
??? code "object_state_texture_demo.py"
``` py linenums="1"
--8<-- "examples/object_states/object_state_texture_demo.py"
```
## **Robots**
These examples showcase how to interact and leverage robot objects in **`OmniGibson`**.
### **Robot Visualizer Demo**
!!! abstract "This demo is useful for..."
* Understanding how to load a robot into **`OmniGibson`** after an environment is created
* Accessing all **`OmniGibson`** robot models
* Viewing robots' low-level joint motion
```{.python .annotate}
python -m omnigibson.examples.robots.all_robots_visualizer
```
This demo iterates over all robots in **`OmniGibson`**, loading each one into an empty scene and randomly moving its joints for a brief amount of time.
??? code "all_robots_visualizer.py"
``` py linenums="1"
--8<-- "examples/robots/all_robots_visualizer.py"
```
### **Robot Control Demo**
!!! abstract "This demo is useful for..."
* Understanding how different controllers can be used to control robots
* Understanding how to teleoperate a robot through external commands
```{.python .annotate}
python -m omnigibson.examples.robots.robot_control_example
```
This demo lets you choose a robot and the set of controllers to control the robot, and then lets you teleoperate the robot using your keyboard.
??? code "robot_control_example.py"
``` py linenums="1"
--8<-- "examples/robots/robot_control_example.py"
```
### **Robot Grasping Demo**
!!! abstract annotate "This demo is useful for..."
* Understanding the difference between `physical` and `sticky` grasping
* Understanding how to teleoperate a robot through external commands
```{.python .annotate}
python -m omnigibson.examples.robots.grasping_mode_example
```
This demo lets you choose a grasping mode and then loads a `Fetch` robot and a cube on a table. You can then teleoperate the robot to grasp the cube, observing the difference in grasping behavior based on the grasping mode chosen. Here, `physical` means natural friction is required to hold objects, while `sticky` means that objects are constrained to the robot's gripper once contact is made.
??? code "grasping_mode_example.py"
``` py linenums="1"
--8<-- "examples/robots/grasping_mode_example.py"
```
### **Advanced: IK Demo**
!!! abstract "This demo is useful for..."
* Understanding how to construct your own IK functionality using Omniverse's native Lula library without explicitly utilizing all of OmniGibson's class abstractions
* Understanding how to manipulate the simulator at a lower-level than the main Environment entry point
```{.python .annotate}
python -m omnigibson.examples.robots.advanced.ik_example
```
This demo loads in a `Fetch` robot and an IK solver to control it, and then lets you teleoperate the robot using your keyboard.
??? code "ik_example.py"
``` py linenums="1"
--8<-- "examples/robots/advanced/ik_example.py"
```
## **Simulator**
These examples showcase useful functionality from **`OmniGibson`**'s monolithic `Simulator` object.
??? question "What's the difference between `Environment` and `Simulator`?"
The [`Simulator`](../../reference/simulator) class is a lower-level object that:
* handles importing scenes and objects into the actual simulation
* directly interfaces with the underlying physics engine
The [`Environment`](../../reference/environemnts/base_env) class thinly wraps the `Simulator`'s core functionality, by:
* providing convenience functions for automatically importing a predefined scene, object(s), and robot(s) (via the `cfg` argument), as well as a [`task`](../../reference/tasks/task_base)
* providing an OpenAI Gym interface for stepping through the simulation
While most of the core functionality in `Environment` (as well as more fine-grained physics control) can be replicated via direct calls to `Simulator` (`og.sim`), it requires deeper understanding of **`OmniGibson`**'s infrastructure and is not recommended for new users.
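As a rough sketch of the two entry points side by side (assuming a `cfg` dictionary like the one built in the [Quickstart](./quickstart.md) guide; the calls below are the same ones used there):
```python
import omnigibson as og

# High-level entry point: parses the config and loads the scene, objects, robots, and task
env = og.Environment(cfg)
obs, rew, done, info = env.step(env.action_space.sample())

# Lower-level access goes through the Simulator singleton, e.g. the viewer camera helpers
og.sim.enable_viewer_camera_teleoperation()
```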
### **State Saving and Loading Demo**
!!! abstract "This demo is useful for..."
* Understanding how to interact with objects using the mouse
* Understanding how to save the active simulator state to a file
* Understanding how to restore the simulator state from a given file
```{.python .annotate}
python -m omnigibson.examples.simulator.sim_save_load_example
```
This demo loads a stripped-down scene with the `Turtlebot` robot, and lets you interact with objects to modify the scene. The state is then saved, written to a `.json` file, and then restored in the simulation.
??? code "sim_save_load_example.py"
``` py linenums="1"
--8<-- "examples/simulator/sim_save_load_example.py"
```
## **Rendering**
These examples showcase how to change renderer settings in **`OmniGibson`**.
### **Renderer Settings Demo**
!!! abstract "This demo is useful for..."
* Understanding how to use the RendererSettings class
```{.python .annotate}
python -m omnigibson.examples.renderer_settings.renderer_settings_example
```
This demo iterates over different renderer settings and shows how they can be programmatically set through the **`OmniGibson`** interface.
??? code "renderer_settings_example.py"
``` py linenums="1"
--8<-- "examples/renderer_settings/renderer_settings_example.py"
```
| 23,454 | Markdown | 37.45082 | 415 | 0.725377 |
StanfordVL/OmniGibson/docs/getting_started/quickstart.md | ---
icon: octicons/rocket-16
---
# **Quickstart**
Let's quickly create an environment programmatically!
**`OmniGibson`**'s workflow is straightforward: define the configuration of scene, object(s), robot(s), and task you'd like to load, and then instantiate our `Environment` class with that config.
Let's start with the following:
```{.python .annotate}
import omnigibson as og # (1)!
from omnigibson.macros import gm # (2)!
# Start with an empty configuration
cfg = dict()
```
1. All python scripts should start with this line! This allows access to key global variables through the top-level package.
2. Global macros (`gm`) can always be accessed directly and modified on the fly!
## **Defining a scene**
Next, let's define a scene:
```{.python .annotate}
cfg["scene"] = {
"type": "Scene", # (1)!
"floor_plane_visible": True, # (2)!
}
```
1. Our configuration gets parsed automatically and generates the appropriate class instance based on `type` (the string form of the class name). In this case, we're generating the most basic scene, which only consists of a floor plane. Check out [all of our available `Scene` classes](../reference/scenes/scene_base.md)!
2. In addition to specifying `type`, the remaining keyword-arguments get passed directly into the class constructor. So for the base [`Scene`](../reference/scenes/scene_base.md) class, you could optionally specify `"use_floor_plane"` and `"floor_plane_visible"`, whereas for the more powerful [`InteractiveTraversableScene`](../reference/scenes/interactive_traversable_scene.md) class (which loads a curated, preconfigured scene) you can additionally specify options for filtering objects, such as `"load_object_categories"` and `"load_room_types"`. You can see all available keyword-arguments by viewing the [individual `Scene` class](../reference/scenes/scene_base.md) you'd like to load!
## **Defining objects**
We can optionally define some objects to load into our scene:
```{.python .annotate}
cfg["objects"] = [ # (1)!
{
"type": "USDObject", # (2)!
"name": "ghost_stain", # (3)!
"usd_path": f"{gm.ASSET_PATH}/models/stain/stain.usd",
"category": "stain", # (4)!
"visual_only": True, # (5)!
"scale": [1.0, 1.0, 1.0], # (6)!
"position": [1.0, 2.0, 0.001], # (7)!
"orientation": [0, 0, 0, 1.0], # (8)!
},
{
"type": "DatasetObject", # (9)!
"name": "delicious_apple",
"category": "apple",
"model": "agveuv", # (10)!
"position": [0, 0, 1.0],
},
{
"type": "PrimitiveObject", # (11)!
"name": "incredible_box",
"primitive_type": "Cube", # (12)!
"rgba": [0, 1.0, 1.0, 1.0], # (13)!
"scale": [0.5, 0.5, 0.1],
"fixed_base": True, # (14)!
"position": [-1.0, 0, 1.0],
"orientation": [0, 0, 0.707, 0.707],
},
{
"type": "LightObject", # (15)!
"name": "brilliant_light",
"light_type": "Sphere", # (16)!
"intensity": 50000, # (17)!
"radius": 0.1, # (18)!
"position": [3.0, 3.0, 4.0],
},
]
```
1. Unlike the `"scene"` sub-config, we can define an arbitrary number of objects to load, so this is a `list` of `dict` instead of a single nested `dict`.
2. **`OmniGibson`** supports multiple object classes, and we showcase an instance of each core class here. A [`USDObject`](../reference/objects/usd_object.md) is our most generic object class, and generates an object sourced from the `usd_path` argument.
3. All objects **must** define the `name` argument! This is because **`OmniGibson`** enforces a global unique naming scheme, and so any created objects must have unique names assigned to them.
4. `category` is used by all object classes to assign semantic segmentation IDs.
5. `visual_only` is used by all object classes and defines whether the object is visual-only, i.e., exempt from both gravity and collisions.
6. `scale` is used by all object classes and defines the global (x,y,z) relative scale of the object.
7. `position` is used by all object classes and defines the initial (x,y,z) position of the object in the global frame.
8. `orientation` is used by all object classes and defines the initial (x,y,z,w) quaternion orientation of the object in the global frame.
9. A [`DatasetObject`](../reference/objects/dataset_object.md) is an object pulled directly from our **BEHAVIOR** dataset. It includes metadata and annotations not found on a generic `USDObject`. Note that these assets are encrypted, and thus cannot be created via the `USDObject` class.
10. Instead of explicitly defining the hardcoded path to the dataset USD model, `model` (in conjunction with `category`) is used to infer the exact dataset object to load. In this case this is the exact same underlying raw USD asset that was loaded above as a `USDObject`!
11. A [`PrimitiveObject`](../reference/objects/primitive_object.md) is a programmatically generated object defining a convex primitive shape.
12. `primitive_type` defines what primitive shape to load -- see [`PrimitiveObject`](../reference/objects/primitive_object.md) for available options!
13. Because this object is programmatically generated, we can also specify the color to assign to this primitive object.
14. `fixed_base` is used by all object classes and determines whether the generated object is fixed relative to the world frame. Useful for fixing in place large objects, such as furniture or structures.
15. A [`LightObject`](../reference/objects/light_object.md) is a programmatically generated light source. It is used to directly illuminate the given scene.
16. `light_type` defines what light shape to load -- see [`LightObject`](../reference/objects/light_object.md) for available options!
17. `intensity` defines how bright the generated light source should be.
18. `radius` is used by `Sphere` lights and determines their relative size.
## **Defining robots**
We can also optionally define robots to load into our scene:
```{.python .annotate}
cfg["robots"] = [ # (1)!
{
"type": "Fetch", # (2)!
"name": "baby_robot",
"obs_modalities": ["scan", "rgb", "depth"], # (3)!
},
]
```
1. Like the `"objects"` sub-config, we can define an arbitrary number of robots to load, so this is a `list` of `dict`.
2. **`OmniGibson`** supports multiple robot classes, where each class represents a specific robot model. Check out our [`robots`](../reference/robots/robot_base.md) to view all available robot classes!
3. Execute `print(og.ALL_SENSOR_MODALITIES)` for a list of all available observation modalities!
## **Defining a task**
Lastly, we can optionally define a task to load into our scene. Since we're just getting started, let's load a "Dummy" task (which is the task that is loaded anyway, even if we don't explicitly define a task in our config):
```{.python .annotate}
cfg["task"] = {
"type": "DummyTask", # (1)!
"termination_config": dict(), # (2)!
"reward_config": dict(), # (3)!
}
```
1. Check out all of **`OmniGibson`**'s [available tasks](../reference/tasks/task_base.md)!
2. `termination_config` configures the termination conditions for this task. It maps specific [`TerminationCondition`](../reference/termination_conditions/termination_condition_base.md) arguments to their corresponding values to set.
3. `reward_config` configures the reward functions for this task. It maps specific [`RewardFunction`](../reference/reward_functions/reward_function_base.md) arguments to their corresponding values to set.
## **Creating the environment**
We're all set! Let's load the config and create our environment:
```{.python .annotate}
env = og.Environment(cfg)
```
Once the environment loads, we can interface with our environment similar to OpenAI's Gym interface:
```{.python .annotate}
obs, rew, done, info = env.step(env.action_space.sample())
```
??? question "What happens if we have no robot loaded?"
Even if we have no robot loaded, we still need to define an "action" to pass into the environment. In this case, our action space is empty (dimension 0), so you can simply pass `[]` or `np.array([])` into the `env.step()` call!
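As a minimal illustration of that robot-free case:
```python
import numpy as np

# With no robot loaded the action space is empty, so an empty action is all we need
obs, rew, done, info = env.step(np.array([]))
```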
??? code "my_first_env.py"
``` py linenums="1"
import omnigibson as og
from omnigibson.macros import gm
cfg = dict()
# Define scene
cfg["scene"] = {
"type": "Scene",
"floor_plane_visible": True,
}
# Define objects
cfg["objects"] = [
{
"type": "USDObject",
"name": "ghost_stain",
"usd_path": f"{gm.ASSET_PATH}/models/stain/stain.usd",
"category": "stain",
"visual_only": True,
"scale": [1.0, 1.0, 1.0],
"position": [1.0, 2.0, 0.001],
"orientation": [0, 0, 0, 1.0],
},
{
"type": "DatasetObject",
"name": "delicious_apple",
"category": "apple",
"model": "agveuv",
"position": [0, 0, 1.0],
},
{
"type": "PrimitiveObject",
"name": "incredible_box",
"primitive_type": "Cube",
"rgba": [0, 1.0, 1.0, 1.0],
"scale": [0.5, 0.5, 0.1],
"fixed_base": True,
"position": [-1.0, 0, 1.0],
"orientation": [0, 0, 0.707, 0.707],
},
{
"type": "LightObject",
"name": "brilliant_light",
"light_type": "Sphere",
"intensity": 50000,
"radius": 0.1,
"position": [3.0, 3.0, 4.0],
},
]
# Define robots
cfg["robots"] = [
{
"type": "Fetch",
"name": "skynet_robot",
"obs_modalities": ["scan", "rgb", "depth"],
},
]
# Define task
cfg["task"] = {
"type": "DummyTask",
"termination_config": dict(),
"reward_config": dict(),
}
# Create the environment
env = og.Environment(cfg)
# Allow camera teleoperation
og.sim.enable_viewer_camera_teleoperation()
# Step!
for _ in range(10000):
obs, rew, done, info = env.step(env.action_space.sample())
og.shutdown()
```
## **Looking around**
Look around by:
* `Left-CLICK + Drag`: Tilt
* `Scroll-Wheel-CLICK + Drag`: Pan
* `Scroll-Wheel UP / DOWN`: Zoom
Interact with objects by:
* `Shift + Left-CLICK + Drag`: Apply force on selected object
Or, for more fine-grained control, run:
```{.python .annotate}
og.sim.enable_viewer_camera_teleoperation() # (1)!
```
1. This allows you to move the camera precisely with your keyboard, record camera poses, and dynamically modify lights!
Or, for programmatic control, directly set the viewer camera's global pose:
```{.python .annotate}
og.sim.viewer_camera.set_position_orientation(<POSITION>, <ORIENTATION>)
```
***
**Next:** Check out some of **`OmniGibson`**'s breadth of features from our [Building Block](./building_blocks.md) examples!
| 10,980 | Markdown | 41.727626 | 690 | 0.643443 |
StanfordVL/OmniGibson/docs/getting_started/installation.md | ---
icon: material/hammer-wrench
---
# **Installation**
## **Requirements**
Please make sure your system meets the following specs:
- [x] **OS:** Ubuntu 20.04+ / Windows 10+
- [x] **RAM:** 32GB+
- [x] **GPU:** NVIDIA RTX 2070+
- [x] **VRAM:** 8GB+
??? question "Why these specs?"
**`OmniGibson`** is built upon NVIDIA's [Omniverse](https://www.nvidia.com/en-us/omniverse/) and [Isaac Sim](https://developer.nvidia.com/isaac-sim) platforms, so we inherit their dependencies. For more information, please see [Isaac Sim's Requirements](https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/requirements.html).
## **Setup**
There are two ways to setup **`OmniGibson`**:
- **Install with Docker (Linux only)**: You can quickly get **`OmniGibson`** up and running from our pre-built docker image.
- **Install from source (Linux / Windows)**: This method is recommended for advanced users looking to develop upon **`OmniGibson`** or use it extensively for research.
!!! tip ""
=== "π³ Install with Docker (Linux only)"
Install **`OmniGibson`** with Docker is supported for **π§ Linux** only.
??? info "Need to install docker or NVIDIA docker?"
```{.shell .annotate}
# Install docker
curl https://get.docker.com | sh && sudo systemctl --now enable docker
# Install nvidia-docker runtime
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
&& curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-docker2 # install
sudo systemctl restart docker # restart docker engine
```
1. Install our docker launching scripts:
```shell
curl -LJO https://raw.githubusercontent.com/StanfordVL/OmniGibson/main/docker/run_docker.sh
chmod a+x run_docker.sh
```
??? question annotate "What is being installed?"
Our docker image automatically ships with a pre-configured conda virtual environment named `omnigibson` with Isaac Sim and **`OmniGibson`** pre-installed. Upon running for the first time, our scene and object assets will automatically be downloaded as well.
2. Then, simply launch the shell script:
=== "Headless"
```{.shell .annotate}
sudo ./run_docker.sh -h <ABS_DATA_PATH> # (1)!
```
1. `<ABS_DATA_PATH>` specifies the **absolute** path data will be stored on your machine (if no `<ABS_DATA_PATH>` is specified, it defaults to `./omnigibson_data`). This needs to be called each time the docker container is run!
=== "GUI"
```{.shell .annotate}
sudo ./run_docker.sh <ABS_DATA_PATH> # (1)!
```
1. `<ABS_DATA_PATH>` specifies the **absolute** path data will be stored on your machine (if no `<ABS_DATA_PATH>` is specified, it defaults to `./omnigibson_data`). This needs to be called each time the docker container is run!
??? warning annotate "Are you using NFS or AFS?"
Docker containers are unable to access NFS or AFS drives, so if `run_docker.sh` is located on an NFS / AFS partition, please set `<DATA_PATH>` to an alternative data directory located on a non-NFS / AFS partition.
=== "π§ͺ Install from source (Linux / Windows)"
Install **`OmniGibson`** from source is supported for both **π§ Linux (bash)** and **π Windows (powershell/cmd)**.
!!! example ""
=== "π§ Linux (bash)"
<div class="annotate" markdown>
1. Install [Conda](https://conda.io/projects/conda/en/latest/user-guide/install/index.html) and NVIDIA's [Omniverse Isaac Sim](https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/install_workstation.html)
!!! warning "Please make sure you have the latest version of Isaac Sim (2023.1.1) installed."
For Ubuntu 22.04, you need to [install FUSE](https://github.com/AppImage/AppImageKit/wiki/FUSE) to run the Omniverse Launcher AppImage.
2. Clone [**`OmniGibson`**](https://github.com/StanfordVL/OmniGibson) and move into the directory:
```shell
git clone https://github.com/StanfordVL/OmniGibson.git
cd OmniGibson
```
??? note "Nightly build"
The main branch contains the stable version of **`OmniGibson`**. For our latest developed (yet not fully tested) features and bug fixes, please clone from the `og-develop` branch.
3. Setup a virtual conda environment to run **`OmniGibson`**:
```{.shell .annotate}
./scripts/setup.sh # (1)!
```
1. The script will ask you which Isaac Sim to use. If you installed it in the default location, it should be `~/.local/share/ov/pkg/isaac_sim-2023.1.1`
This will create a conda env with `omnigibson` installed. Simply call `conda activate` to activate it.
4. Download **`OmniGibson`** dataset (within the conda env):
```shell
python scripts/download_datasets.py
```
</div>
=== "π Windows (powershell/cmd)"
<div class="annotate" markdown>
1. Install [Conda](https://conda.io/projects/conda/en/latest/user-guide/install/index.html) and NVIDIA's [Omniverse Isaac Sim](https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/install_workstation.html)
!!! warning "Please make sure you have the latest version of Isaac Sim (2023.1.1) installed."
2. Clone [**`OmniGibson`**](https://github.com/StanfordVL/OmniGibson) and move into the directory:
```shell
git clone https://github.com/StanfordVL/OmniGibson.git
cd OmniGibson
```
??? note "Nightly build"
The main branch contains the stable version of **`OmniGibson`**. For our latest developed (yet not fully tested) features and bug fixes, please clone from the `og-develop` branch.
3. Setup a virtual conda environment to run **`OmniGibson`**:
```{.powershell .annotate}
.\scripts\setup.bat # (1)!
```
1. The script will ask you which Isaac Sim to use. If you installed it in the default location, it should be `C:\Users\<USER_NAME>\AppData\Local\ov\pkg\isaac_sim-2023.1.1`
This will create a conda env with `omnigibson` installed. Simply call `conda activate` to activate it.
4. Download **`OmniGibson`** dataset (within the conda env):
```powershell
python scripts\download_datasets.py
```
</div>
## **Explore `OmniGibson`!**
!!! warning annotate "Expect slowdown during first execution"
Omniverse requires some one-time startup setup when **`OmniGibson`** is imported for the first time.
The process could take up to 5 minutes. This is expected behavior, and should only occur once!
**`OmniGibson`** is now successfully installed! Try exploring some of our new scenes interactively:
```{.shell .annotate}
python -m omnigibson.examples.scenes.scene_selector # (1)!
```
1. This demo lets you choose a scene and interactively move around using your keyboard and mouse. Hold down **`Shift`** and then **`Left-click + Drag`** an object to apply forces!
You can also try teleoperating one of our robots:
```{.shell .annotate}
python -m omnigibson.examples.robots.robot_control_example # (1)!
```
1. This demo lets you choose a scene, robot, and set of controllers, and then teleoperate the robot using your keyboard.
***
**Next:** Get quickly familiarized with **`OmniGibson`** from our [Quickstart Guide](./quickstart.md)!
## **Troubleshooting**
??? question "I cannot open Omniverse Launcher AppImage on Linux"
You probably need to [install FUSE](https://github.com/AppImage/AppImageKit/wiki/FUSE) to run the Omniverse Launcher AppImage.
??? question "OmniGibson is stuck at `HydraEngine rtx failed creating scene renderer.`"
`OmniGibson` is likely using an unsupported GPU (default is id 0). Run `nvidia-smi` to see the active list of GPUs, and select an NVIDIA-supported GPU and set its corresponding ID when running `OmniGibson` with `export OMNIGIBSON_GPU_ID=<ID NUMBER>`. | 9,239 | Markdown | 44.294117 | 337 | 0.613486 |
StanfordVL/OmniGibson/docs/getting_started/slurm.md | ---
icon: material/server-network
---
# **Running on a SLURM cluster**
_This documentation is a work in progress._
OmniGibson can be run on a SLURM cluster using the _enroot_ container software, which is a replacement
for Docker that allows containers to be run as the current user rather than as root. _enroot_ needs
to be installed on your SLURM cluster by an administrator.
With enroot installed, you can follow the below steps to run OmniGibson on SLURM:
1. Download the dataset to a location that is accessible by cluster nodes. To do this, you can use
the download_dataset.py script inside OmniGibson's scripts directory, and move it to the right spot
later. In the below example, /cvgl/ is a networked drive that is accessible by the cluster nodes.
**For Stanford users, this step is already done for SVL and Viscam nodes**
```{.shell .annotate}
OMNIGIBSON_NO_OMNIVERSE=1 python scripts/download_dataset.py
mv omnigibson/data /cvgl/group/Gibson/og-data-0-2-1
```
2. (Optional) Distribute the dataset to the individual nodes.
This will make load times much better than reading from a network drive.
To do this, run the below command on your SLURM head node (replace `svl` with your partition
name and `cvgl` with your account name, as well as the paths with the respective network
and local paths). Confirm via `squeue -u $USER` that all jobs have finished. **This step is already done for SVL and Viscam nodes**
```{.shell .annotate}
sinfo -p svl -o "%N,%n" -h | \
sed s/,.*//g | \
xargs -L1 -I{} \
sbatch \
--account=cvgl --partition=svl --nodelist={} --mem=8G --cpus-per-task=4 \
--wrap 'cp -R /cvgl/group/Gibson/og-data-0-2-1 /scr-ssd/og-data-0-2-1'
```
3. Download your desired image to a location that is accessible by the cluster nodes. (Replace the path with your own path, and feel free to replace `latest` with your desired branch tag). You have the option to mount code (meaning you don't need the container to come with all the code you want to run, just the right dependencies / environment setup)
```{.shell .annotate}
enroot import --output /cvgl2/u/cgokmen/omnigibson.sqsh docker://stanfordvl/omnigibson:action-primitives
```
4. (Optional) If you intend to mount code onto the container, make it available at a location that is accessible by the cluster nodes. You can mount arbitrary code, and you can also mount a custom version of OmniGibson (for the latter, you need to make sure you mount your copy of OmniGibson at /omnigibson-src inside the container). For example:
```{.shell .annotate}
git clone https://github.com/StanfordVL/OmniGibson.git /cvgl2/u/cgokmen/OmniGibson
```
5. Create your launch script. You can start with a copy of the script below. If you want to launch multiple workers, increase the job array option. You should keep the setting at at least 1 GPU per node, but can feel free to edit other settings. You can mount any additional code as you'd like, and you can change the entrypoint such that the container runs your mounted code upon launch. See the mounts section for an example. A copy of this script can be found in docker/sbatch_example.sh
```{.shell .annotate}
#!/usr/bin/env bash
#SBATCH --account=cvgl
#SBATCH --partition=svl --qos=normal
#SBATCH --nodes=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=30G
#SBATCH --gres=gpu:2080ti:1
IMAGE_PATH="/cvgl2/u/cgokmen/omnigibson.sqsh"
GPU_ID=$(nvidia-smi -L | grep -oP '(?<=GPU-)[a-fA-F0-9\-]+' | head -n 1)
ISAAC_CACHE_PATH="/scr-ssd/${SLURM_JOB_USER}/isaac_cache_${GPU_ID}"
# Define env kwargs to pass
declare -A ENVS=(
[NVIDIA_DRIVER_CAPABILITIES]=all
[NVIDIA_VISIBLE_DEVICES]=0
[DISPLAY]=""
[OMNIGIBSON_HEADLESS]=1
)
for env_var in "${!ENVS[@]}"; do
# Add to env kwargs we'll pass to enroot command later
ENV_KWARGS="${ENV_KWARGS} --env ${env_var}=${ENVS[${env_var}]}"
done
# Define mounts to create (maps local directory to container directory)
declare -A MOUNTS=(
[/scr-ssd/og-data-0-2-1]=/data
[${ISAAC_CACHE_PATH}/isaac-sim/kit/cache/Kit]=/isaac-sim/kit/cache/Kit
[${ISAAC_CACHE_PATH}/isaac-sim/cache/ov]=/root/.cache/ov
[${ISAAC_CACHE_PATH}/isaac-sim/cache/pip]=/root/.cache/pip
[${ISAAC_CACHE_PATH}/isaac-sim/cache/glcache]=/root/.cache/nvidia/GLCache
[${ISAAC_CACHE_PATH}/isaac-sim/cache/computecache]=/root/.nv/ComputeCache
[${ISAAC_CACHE_PATH}/isaac-sim/logs]=/root/.nvidia-omniverse/logs
[${ISAAC_CACHE_PATH}/isaac-sim/config]=/root/.nvidia-omniverse/config
[${ISAAC_CACHE_PATH}/isaac-sim/data]=/root/.local/share/ov/data
[${ISAAC_CACHE_PATH}/isaac-sim/documents]=/root/Documents
# Feel free to include lines like the below to mount a workspace or a custom OG version
# [/cvgl2/u/cgokmen/OmniGibson]=/omnigibson-src
# [/cvgl2/u/cgokmen/my-project]=/my-project
)
MOUNT_KWARGS=""
for mount in "${!MOUNTS[@]}"; do
# Verify mount path in local directory exists, otherwise, create it
if [ ! -e "$mount" ]; then
mkdir -p ${mount}
fi
# Add to mount kwargs we'll pass to enroot command later
MOUNT_KWARGS="${MOUNT_KWARGS} --mount ${mount}:${MOUNTS[${mount}]}"
done
# Create the image if it doesn't already exist
CONTAINER_NAME=omnigibson_${GPU_ID}
enroot create --force --name ${CONTAINER_NAME} ${IMAGE_PATH}
# Remove leading space in string
ENV_KWARGS="${ENV_KWARGS:1}"
MOUNT_KWARGS="${MOUNT_KWARGS:1}"
# The last line here is the command you want to run inside the container.
# Here I'm running some unit tests.
enroot start \
--root \
--rw \
${ENV_KWARGS} \
${MOUNT_KWARGS} \
${CONTAINER_NAME} \
source /isaac-sim/setup_conda_env.sh && pytest tests/test_object_states.py
# Clean up the image if possible.
enroot remove -f ${CONTAINER_NAME}
```
6. Launch your job using `sbatch your_script.sh` - and profit! | 5,782 | Markdown | 46.01626 | 490 | 0.717226 |
lucasapchagas/Omniverse/README.md |
# OmniVerse API
OmniVerse API is a straightforward API that provides only the basic CRUD routes, enabling efficient and consistent data manipulation.
Our API uses the ViaCEP API, a well-known API that returns the data of a specific address based on the provided postal code (CEP).
## Setup
OmniVerse API is an API built on top of the Java Spring Boot framework, designed to be easily installed and deployed.
For an easy setup, you'll need a MySQL server, but the API itself is prepared to accept any database you want. Follow the [MySQL Documentation](https://dev.mysql.com/doc/mysql-getting-started/en) to set up a working server.
1. Once your MySQL server is running, the first thing you'll need is to configure the API so it can connect to it. You'll need to modify the [**application.properties**](https://github.com/lucasapchagas/Omniverse/blob/main/src/main/resources/application.properties) file to your own needs, as sketched below.
- `spring.datasource.url`, you must provide your MySQL server url.
- `spring.datasource.username`, you must provide your MySQL server username.
- `spring.datasource.password`, you must provide your MySQL server password.
**If you provide a URL for a database that has not been created beforehand, the API will not start. Use `CREATE database <db_name>;` to properly create it first.**
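For reference, a minimal sketch of what those entries might look like (the database name, username, and password below are placeholders; replace them with your own values):
```properties
# MySQL server URL -- the "omniverse" database must already exist (see the note above)
spring.datasource.url=jdbc:mysql://localhost:3306/omniverse
spring.datasource.username=root
spring.datasource.password=your_password_here
```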
2. Building it
To build the project, you need Java 17 installed, but you can easily change the version by modifying the application's [**pom.xml**](https://github.com/lucasapchagas/Omniverse/blob/main/pom.xml) file. The project uses Maven as the build platform, with all the conveniences that come with it.
- You can build it just by running `./mvnw package` in the project root folder; the target file will be generated in the `/target/` folder.
3. Using it
Using the API is as simple as understanding, modifying, and building it. Given that Java runs on the JVM, deploying the API is effortless: simply run the compiled JAR on any cloud service.
- You can just use a [RELEASE](https://github.com/lucasapchagas/Omniverse/releases/tag/RELEASE) instead of compiling it. Please always use the latest one.
- In order to run it you must use the following command: `java -jar OmniVerse-0.0.1-SNAPSHOT.jar`. By default it will serve the API at [`http://localhost:8080/`](http://localhost:8080/).
- Use the OmniverseCLI to test the API. https://github.com/lucasapchagas/OmniverseCLI
## Features
- Uses the **ViaCEP API** to register users' addresses.
- Database migrations with the Flyway library.
- Data validation with Spring Boot's validation support.
- JPA design pattern.
## API Usage
The OmniVerse API is user-friendly and comprises only 5 possible routes that align with the CRUD standard.
You can use popular API testing tools like Insomnia. We have created a configuration that can be accessed on pastebin by [clicking here](https://pastebin.com/f1rBDfZP). Import it into your Insomnia to streamline your testing process.
### What is a user?
Example:
```json
{
"id": 8,
"name": "Lucas",
"email": "[email protected]",
"address": {
"cep": "69050500",
"place": "Rua Peru",
"complement": "",
"neighborhood": "Parque 10 de Novembro",
"locality": "Manaus",
"uf": "AM"
}
}
```
#### Register a user
```http
POST /user
```
| Parameter | Type | Description |
| :---------- | :--------- | :---------------------------------- |
| `name` | `string` | User name |
| `email` | `string` | Valid email |
| `cep` | `string` | Valid cep, just numbers. |
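For instance, assuming the API is running locally on the default port, a request could look like this (the field values are taken from the example user above):
```shell
curl -X POST http://localhost:8080/user \
  -H "Content-Type: application/json" \
  -d '{"name": "Lucas", "email": "[email protected]", "cep": "69050500"}'
```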
#### Return a user
```http
GET /user/{id}
```
#### Return all users
```http
GET /user
```
#### Delete a user
```http
DELETE /user/{id}
```
#### Update a user
Only the fields you want to modify need to be provided as parameters. The user `id` is **required**.
```http
PUT /user
```
| Parameter | Type | Description |
| :---------- | :--------- | :---------------------------------- |
| `id` | `int` | User id|
| `name` | `string` | User name |
| `email` | `string` | Valid email |
| `cep` | `string` | Valid cep, just numbers. |
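Likewise, the remaining routes can be exercised with plain `curl` calls (the `id` and field values below are illustrative):
```shell
# Update only the name of user 8
curl -X PUT http://localhost:8080/user \
  -H "Content-Type: application/json" \
  -d '{"id": 8, "name": "Lucas Silva"}'

# Fetch, then delete, the same user
curl http://localhost:8080/user/8
curl -X DELETE http://localhost:8080/user/8
```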
## Roadmap
- [x] Implement JPA pattern.
- [x] Usage of the **ViaCEP API** in order to generate the user's address.
- [x] Implement Flyway migrations to our database.
- [x] Implement Spring boot data validation.
- [ ] Implement Spring boot security module.
- [ ] Implement JSON Web Token usage.
| 4,499 | Markdown | 35.290322 | 302 | 0.673038 |
Toni-SM/skrl/pyproject.toml | [project]
name = "skrl"
version = "1.1.0"
description = "Modular and flexible library for reinforcement learning on PyTorch and JAX"
readme = "README.md"
requires-python = ">=3.6"
license = {text = "MIT License"}
authors = [
{name = "Toni-SM"},
]
maintainers = [
{name = "Toni-SM"},
]
keywords = ["reinforcement-learning", "machine-learning", "reinforcement", "machine", "learning", "rl"]
classifiers = [
"License :: OSI Approved :: MIT License",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
]
# dependencies / optional-dependencies
dependencies = [
"gym",
"gymnasium",
"tqdm",
"packaging",
"tensorboard",
]
[project.optional-dependencies]
torch = [
"torch>=1.9",
]
jax = [
"jax>=0.4.3",
"jaxlib>=0.4.3",
"flax",
"optax",
]
all = [
"torch>=1.9",
"jax>=0.4.3",
"jaxlib>=0.4.3",
"flax",
"optax",
]
# urls
[project.urls]
"Homepage" = "https://github.com/Toni-SM/skrl"
"Documentation" = "https://skrl.readthedocs.io"
"Discussions" = "https://github.com/Toni-SM/skrl/discussions"
"Bug Reports" = "https://github.com/Toni-SM/skrl/issues"
"Say Thanks!" = "https://github.com/Toni-SM"
"Source" = "https://github.com/Toni-SM/skrl"
[tool.yapf]
# run: yapf -p -m -i -r <folder>
based_on_style = "pep8"
blank_line_before_nested_class_or_def = false
blank_lines_between_top_level_imports_and_variables = 2
column_limit = 120
join_multiple_lines = false
space_between_ending_comma_and_closing_bracket = false
spaces_around_power_operator = true
split_all_top_level_comma_separated_values = true
split_before_arithmetic_operator = true
split_before_dict_set_generator = false
split_before_dot = true
split_complex_comprehension = true
coalesce_brackets = true
[tool.codespell]
# run: codespell <folder>
skip = "./docs/_build,./docs/source/_static"
quiet-level = 3
count = ""
[tool.isort]
use_parentheses = false
line_length = 120
multi_line_output = 3
lines_after_imports = 2
known_annotation = ["typing"]
known_framework = [
"torch",
"jax",
"jaxlib",
"flax",
"optax",
"numpy",
]
sections = [
"FUTURE",
"ANNOTATION",
"STDLIB",
"THIRDPARTY",
"FRAMEWORK",
"FIRSTPARTY",
"LOCALFOLDER",
]
no_lines_before = "THIRDPARTY"
skip = ["docs"]
| 2,365 | TOML | 21.112149 | 103 | 0.671036 |
Toni-SM/skrl/CONTRIBUTING.md |
First of all, **thank you**... For what? Because you are dedicating some time to reading these guidelines and possibly thinking about contributing.
<hr>
### I just want to ask a question!
If you have a question, please do not open an issue for this. Instead, use the following resources for it (you will get a faster response):
- [skrl's GitHub discussions](https://github.com/Toni-SM/skrl/discussions), a place to ask questions and discuss about the project
- [Isaac Gym's forum](https://forums.developer.nvidia.com/c/agx-autonomous-machines/isaac/isaac-gym/322), a place to post your questions, find past answers, or just chat with other members of the community about Isaac Gym topics
- [Omniverse Isaac Sim's forum](https://forums.developer.nvidia.com/c/agx-autonomous-machines/isaac/simulation/69), a place to post your questions, find past answers, or just chat with other members of the community about Omniverse Isaac Sim/Gym topics
### I have found a (good) bug. What can I do?
Open an issue on [skrl's GitHub issues](https://github.com/Toni-SM/skrl/issues) and describe the bug. If possible, please provide some of the following items:
- Minimum code that reproduces the bug...
- or the exact steps to reproduce it
- The error log or a screenshot of it
- A link to the source code of the library that you are using (some problems may be due to the use of older versions. If possible, always use the latest version)
- Any other information that you think may be useful or help to reproduce/describe the problem
### I want to contribute, but I don't know how
There is a [board](https://github.com/users/Toni-SM/projects/2/views/8) containing relevant future implementations which can be a good starting place to identify contributions. Please consider the following points
#### Notes about contributing
- Try to **communicate your change first** to [discuss](https://github.com/Toni-SM/skrl/discussions) the implementation if you want to add a new feature or change an existing one
- Modify only the minimum amount of code required and the files needed to make the change
- Use the provided [pre-commit](https://pre-commit.com/) hooks to format the code. Install them by running `pre-commit install` in the root of the repository; running them periodically with `pre-commit run --all` helps reduce commit errors
- Changes that are cosmetic in nature (code formatting, removing whitespace, etc.) or that correct grammatical, spelling or typo errors, and that do not add anything substantial to the functionality of the library will generally not be accepted as a pull request
  - The only exceptions are changes that result from the use of the pre-commit hooks
#### Coding conventions
**skrl** is designed with a focus on modularity, readability, simplicity and transparency of algorithm implementation. The file system structure groups components according to their functionality. Library components only inherit (and must inherit) from a single base class (no multilevel or multiple inheritance) that provides a uniform interface and implements common functionality that is not tied to the implementation details of the algorithms.
Read the code a little bit and you will understand it at first glance... Also
- Use 4 indentation spaces
- Follow, as much as possible, the PEP8 Style Guide for Python code
- Document each module, class, function or method using the reStructuredText format
- Annotate all functions, both for the parameters and for the return value (a short sketch following these conventions is shown after the import layout below)
- Follow the commit message style guide for Git described in https://commit.style
- Capitalize (the first letter) and omit any trailing punctuation
- Write it in the imperative tense
- Aim for about 50 (or 72) characters
- Add import statements at the top of each module as follows:
```ini
function annotation (e.g. typing)
# insert an empty line
python libraries and other libraries (e.g. gym, numpy, time, etc.)
# insert an empty line
machine learning framework modules (e.g. torch, torch.nn)
# insert an empty line
skrl components
```
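As a hedged illustration only (the module content below is invented for this guide and is not part of the library), a small module that follows the import layout above together with the docstring and annotation conventions could look like this:
```python
# example for illustration only; the function and its behavior are made up for this guide
from typing import Optional

import torch

from skrl import logger


def scale_rewards(rewards: torch.Tensor, factor: Optional[float] = None) -> torch.Tensor:
    """Scale a batch of rewards by a constant factor

    :param rewards: Rewards to scale
    :type rewards: torch.Tensor
    :param factor: Scaling factor (default: ``None``).
                   If ``None``, the rewards are returned unchanged
    :type factor: float, optional

    :return: Scaled rewards
    :rtype: torch.Tensor
    """
    if factor is None:
        logger.warning("No scaling factor provided, returning the rewards unchanged")
        return rewards
    return factor * rewards
```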
<hr>
Thank you once again,
Toni
| 4,086 | Markdown | 58.231883 | 447 | 0.773128 |
Toni-SM/skrl/CHANGELOG.md | # Changelog
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [1.1.0] - 2024-02-12
### Added
- MultiCategorical mixin to operate on MultiDiscrete action spaces
### Changed (breaking changes)
- Rename the `ManualTrainer` to `StepTrainer`
- Output training/evaluation progress messages to system's stdout
- Get single observation/action spaces for vectorized environments
- Update Isaac Orbit environment wrapper
## [1.0.0] - 2023-08-16
Transition from pre-release versions (`1.0.0-rc.1` and `1.0.0-rc.2`) to a stable version.
This release also announces the publication of the **skrl** paper in the Journal of Machine Learning Research (JMLR): https://www.jmlr.org/papers/v24/23-0112.html
Summary of the most relevant features:
- JAX support
- New documentation theme and structure
- Multi-agent Reinforcement Learning (MARL)
## [1.0.0-rc.2] - 2023-08-11
### Added
- Get truncation from `time_outs` info in Isaac Gym, Isaac Orbit and Omniverse Isaac Gym environments
- Time-limit (truncation) bootstrapping in on-policy actor-critic agents
- Model instantiators `initial_log_std` parameter to set the log standard deviation's initial value
### Changed (breaking changes)
- Structure environment loaders and wrappers file hierarchy coherently
  Import statements now follow this convention:
- Wrappers (e.g.):
- `from skrl.envs.wrappers.torch import wrap_env`
- `from skrl.envs.wrappers.jax import wrap_env`
- Loaders (e.g.):
- `from skrl.envs.loaders.torch import load_omniverse_isaacgym_env`
- `from skrl.envs.loaders.jax import load_omniverse_isaacgym_env`
### Changed
- Drop support for versions prior to PyTorch 1.9 (1.8.0 and 1.8.1)
## [1.0.0-rc.1] - 2023-07-25
### Added
- JAX support (with Flax and Optax)
- RPO agent
- IPPO and MAPPO multi-agent
- Multi-agent base class
- Bi-DexHands environment loader
- Wrapper for PettingZoo and Bi-DexHands environments
- Parameters `num_envs`, `headless` and `cli_args` for configuring Isaac Gym, Isaac Orbit
and Omniverse Isaac Gym environments when they are loaded
### Changed
- Migrate to `pyproject.toml` Python package development
- Define ML framework dependencies as optional dependencies in the library installer
- Move agent implementations with recurrent models to a separate file
- Allow closing the environment at the end of execution instead of after training/evaluation
- Documentation theme from *sphinx_rtd_theme* to *furo*
- Update documentation structure and examples
### Fixed
- Compatibility for Isaac Sim or OmniIsaacGymEnvs (2022.2.0 or earlier)
- Disable PyTorch gradient computation during the environment stepping
- Get categorical models' entropy
- Typo in `KLAdaptiveLR` learning rate scheduler
(keep the old name for compatibility with the examples of previous versions.
The old name will be removed in future releases)
## [0.10.2] - 2023-03-23
### Changed
- Update loader and utils for OmniIsaacGymEnvs 2022.2.1.0
- Update Omniverse Isaac Gym real-world examples
## [0.10.1] - 2023-01-26
### Fixed
- Tensorboard writer instantiation when `write_interval` is zero
## [0.10.0] - 2023-01-22
### Added
- Isaac Orbit environment loader
- Wrap an Isaac Orbit environment
- Gaussian-Deterministic shared model instantiator
## [0.9.1] - 2023-01-17
### Added
- Utility for downloading models from Hugging Face Hub
### Fixed
- Initialization of agent components if they have not been defined
- Manual trainer `train`/`eval` method default arguments
## [0.9.0] - 2023-01-13
### Added
- Support for Farama Gymnasium interface
- Wrapper for robosuite environments
- Weights & Biases integration
- Set the running mode (training or evaluation) of the agents
- Allow clipping the gradient norm for DDPG, TD3 and SAC agents
- Initialize model biases
- Add RNN (RNN, LSTM, GRU and any other variant) support for A2C, DDPG, PPO, SAC, TD3 and TRPO agents
- Allow disabling training/evaluation progressbar
- Farama Shimmy and robosuite examples
- KUKA LBR iiwa real-world example
### Changed (breaking changes)
- Forward model inputs as a Python dictionary
- Returns a Python dictionary with extra output values in model calls
### Changed
- Adopt the implementation of `terminated` and `truncated` over `done` for all environments
### Fixed
- Omniverse Isaac Gym simulation speed for the Franka Emika real-world example
- Call agents' method `record_transition` instead of parent method
to allow storing samples in memories during evaluation
- Move TRPO policy optimization out of the value optimization loop
- Access to the categorical model distribution
- Call reset only once for Gym/Gymnasium vectorized environments
### Removed
- Deprecated method `start` in trainers
## [0.8.0] - 2022-10-03
### Added
- AMP agent for physics-based character animation
- Manual trainer
- Gaussian model mixin
- Support for creating shared models
- Parameter `role` to model methods
- Wrapper compatibility with the new OpenAI Gym environment API
- Internal library colored logger
- Migrate checkpoints/models from other RL libraries to skrl models/agents
- Configuration parameter `store_separately` to agent configuration dict
- Save/load agent modules (models, optimizers, preprocessors)
- Set random seed and configure deterministic behavior for reproducibility
- Benchmark results for Isaac Gym and Omniverse Isaac Gym on the GitHub discussion page
- Franka Emika real-world example
### Changed (breaking changes)
- Models implementation as Python mixin
### Changed
- Multivariate Gaussian model (`GaussianModel` until 0.7.0) to `MultivariateGaussianMixin`
- Trainer's `cfg` parameter position and default values
- Show training/evaluation display progress using `tqdm`
- Update Isaac Gym and Omniverse Isaac Gym examples
### Fixed
- Missing recursive arguments during model weights initialization
- Tensor dimension when computing preprocessor parallel variance
- Models' clip tensors dtype to `float32`
### Removed
- Parameter `inference` from model methods
- Configuration parameter `checkpoint_policy_only` from agent configuration dict
## [0.7.0] - 2022-07-11
### Added
- A2C agent
- Isaac Gym (preview 4) environment loader
- Wrap an Isaac Gym (preview 4) environment
- Support for OpenAI Gym vectorized environments
- Running standard scaler for input preprocessing
- Installation from PyPI (`pip install skrl`)
## [0.6.0] - 2022-06-09
### Added
- Omniverse Isaac Gym environment loader
- Wrap an Omniverse Isaac Gym environment
- Save best models during training
## [0.5.0] - 2022-05-18
### Added
- TRPO agent
- DeepMind environment wrapper
- KL Adaptive learning rate scheduler
- Handle `gym.spaces.Dict` observation spaces (OpenAI Gym and DeepMind environments)
- Forward environment info to agent `record_transition` method
- Expose and document the random seeding mechanism
- Define rewards shaping function in agents' config
- Define learning rate scheduler in agents' config
- Improve agent's algorithm description in documentation (PPO and TRPO at the moment)
### Changed
- Compute the Generalized Advantage Estimation (GAE) in agent `_update` method
- Move noises definition to `resources` folder
- Update the Isaac Gym examples
### Removed
- `compute_functions` for computing the GAE from memory base class
## [0.4.1] - 2022-03-22
### Added
- Examples of all Isaac Gym environments (preview 3)
- Tensorboard file iterator for data post-processing
### Fixed
- Init and evaluate agents in ParallelTrainer
## [0.4.0] - 2022-03-09
### Added
- CEM, SARSA and Q-learning agents
- Tabular model
- Parallel training using multiprocessing
- Isaac Gym utilities
### Changed
- Initialize agents in a separate method
- Change the name of the `networks` argument to `models`
### Fixed
- Reset environments after post-processing
## [0.3.0] - 2022-02-07
### Added
- DQN and DDQN agents
- Export memory to files
- Postprocessing utility to iterate over memory files
- Model instantiator utility to allow fast development
- More examples and contents in the documentation
### Fixed
- Clip actions using the whole space's limits
## [0.2.0] - 2022-01-18
### Added
- First official release
| 8,132 | Markdown | 34.207792 | 162 | 0.764142 |
Toni-SM/skrl/README.md | [](https://pypi.org/project/skrl)
[<img src="https://img.shields.io/badge/%F0%9F%A4%97%20models-hugging%20face-F8D521">](https://huggingface.co/skrl)

<br>
[](https://github.com/Toni-SM/skrl)
<span> </span>
[](https://skrl.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/Toni-SM/skrl/actions/workflows/python-test.yml)
[](https://github.com/Toni-SM/skrl/actions/workflows/pre-commit.yml)
<br>
<p align="center">
<a href="https://skrl.readthedocs.io">
<img width="300rem" src="https://raw.githubusercontent.com/Toni-SM/skrl/main/docs/source/_static/data/logo-light-mode.png">
</a>
</p>
<h2 align="center" style="border-bottom: 0 !important;">SKRL - Reinforcement Learning library</h2>
<br>
**skrl** is an open-source modular library for Reinforcement Learning written in Python (on top of [PyTorch](https://pytorch.org/) and [JAX](https://jax.readthedocs.io)) and designed with a focus on modularity, readability, simplicity, and transparency of algorithm implementation. In addition to supporting the OpenAI [Gym](https://www.gymlibrary.dev) / Farama [Gymnasium](https://gymnasium.farama.org), [DeepMind](https://github.com/deepmind/dm_env) and other environment interfaces, it allows loading and configuring [NVIDIA Isaac Gym](https://developer.nvidia.com/isaac-gym/), [NVIDIA Isaac Orbit](https://isaac-orbit.github.io/orbit/index.html) and [NVIDIA Omniverse Isaac Gym](https://docs.omniverse.nvidia.com/isaacsim/latest/tutorial_gym_isaac_gym.html) environments, enabling agents' simultaneous training by scopes (subsets of environments among all available environments), which may or may not share resources, in the same run.
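As a minimal, hedged sketch (the environment id below is only an example and the agent/trainer setup is intentionally omitted), wrapping a Gymnasium environment for use with the library's PyTorch components looks roughly like this:
```python
# minimal sketch, not an official snippet; assumes gymnasium and skrl (with the torch extra) are installed
import gymnasium as gym

from skrl.envs.wrappers.torch import wrap_env

env = gym.make("Pendulum-v1")  # "Pendulum-v1" is just an example task id
env = wrap_env(env)            # auto-detects the interface and returns a skrl wrapper

states, infos = env.reset()    # torch tensors placed on the wrapper's device
```
Environments returned by the NVIDIA Isaac Gym, Isaac Orbit and Omniverse Isaac Gym loaders are typically passed through the same `wrap_env` entry point.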
<br>
### Please, visit the documentation for usage details and examples
<strong>https://skrl.readthedocs.io</strong>
<br>
> **Note:** This project is under **active continuous development**. Please make sure you always have the latest version. Visit the [develop](https://github.com/Toni-SM/skrl/tree/develop) branch or its [documentation](https://skrl.readthedocs.io/en/develop) to access the latest updates to be released.
<br>
### Citing this library
To cite this library in publications, please use the following reference:
```bibtex
@article{serrano2023skrl,
author = {Antonio Serrano-MuΓ±oz and Dimitrios Chrysostomou and Simon BΓΈgh and Nestor Arana-Arexolaleiba},
title = {skrl: Modular and Flexible Library for Reinforcement Learning},
journal = {Journal of Machine Learning Research},
year = {2023},
volume = {24},
number = {254},
pages = {1--9},
url = {http://jmlr.org/papers/v24/23-0112.html}
}
```
| 3,043 | Markdown | 59.879999 | 942 | 0.744003 |
Toni-SM/skrl/skrl/__init__.py | from typing import Union
import logging
import sys
import numpy as np
__all__ = ["__version__", "logger", "config"]
# read library version from metadata
try:
import importlib.metadata
__version__ = importlib.metadata.version("skrl")
except ImportError:
__version__ = "unknown"
# logger with format
class _Formatter(logging.Formatter):
_format = "[%(name)s:%(levelname)s] %(message)s"
_formats = {logging.DEBUG: f"\x1b[38;20m{_format}\x1b[0m",
logging.INFO: f"\x1b[38;20m{_format}\x1b[0m",
logging.WARNING: f"\x1b[33;20m{_format}\x1b[0m",
logging.ERROR: f"\x1b[31;20m{_format}\x1b[0m",
logging.CRITICAL: f"\x1b[31;1m{_format}\x1b[0m"}
def format(self, record):
return logging.Formatter(self._formats.get(record.levelno)).format(record)
_handler = logging.StreamHandler()
_handler.setLevel(logging.DEBUG)
_handler.setFormatter(_Formatter())
logger = logging.getLogger("skrl")
logger.setLevel(logging.DEBUG)
logger.addHandler(_handler)
# machine learning framework configuration
class _Config(object):
def __init__(self) -> None:
"""Machine learning framework specific configuration
"""
class JAX(object):
def __init__(self) -> None:
"""JAX configuration
"""
self._backend = "numpy"
self._key = np.array([0, 0], dtype=np.uint32)
@property
def backend(self) -> str:
"""Backend used by the different components to operate and generate arrays
This configuration excludes models and optimizers.
            Supported backends are: ``"numpy"`` and ``"jax"``
"""
return self._backend
@backend.setter
def backend(self, value: str) -> None:
if value not in ["numpy", "jax"]:
raise ValueError("Invalid jax backend. Supported values are: numpy, jax")
self._backend = value
@property
def key(self) -> "jax.Array":
"""Pseudo-random number generator (PRNG) key
"""
if isinstance(self._key, np.ndarray):
try:
import jax
self._key = jax.random.PRNGKey(self._key[1])
except ImportError:
pass
return self._key
@key.setter
def key(self, value: Union[int, "jax.Array"]) -> None:
if type(value) is int:
# don't import JAX if it has not been imported before
if "jax" in sys.modules:
import jax
value = jax.random.PRNGKey(value)
else:
value = np.array([0, value], dtype=np.uint32)
self._key = value
self.jax = JAX()
config = _Config()
| 2,993 | Python | 30.851064 | 93 | 0.529569 |
Toni-SM/skrl/skrl/envs/jax.py | # TODO: Delete this file in future releases
from skrl import logger # isort: skip
logger.warning("Using `from skrl.envs.jax import ...` is deprecated and will be removed in future versions.")
logger.warning(" - Import loaders using `from skrl.envs.loaders.jax import ...`")
logger.warning(" - Import wrappers using `from skrl.envs.wrappers.jax import ...`")
from skrl.envs.loaders.jax import (
load_bidexhands_env,
load_isaac_orbit_env,
load_isaacgym_env_preview2,
load_isaacgym_env_preview3,
load_isaacgym_env_preview4,
load_omniverse_isaacgym_env
)
from skrl.envs.wrappers.jax import MultiAgentEnvWrapper, Wrapper, wrap_env
| 654 | Python | 35.388887 | 109 | 0.740061 |
Toni-SM/skrl/skrl/envs/loaders/torch/bidexhands_envs.py | from typing import Optional, Sequence
import os
import sys
from contextlib import contextmanager
from skrl import logger
__all__ = ["load_bidexhands_env"]
@contextmanager
def cwd(new_path: str) -> None:
"""Context manager to change the current working directory
This function restores the current working directory after the context manager exits
:param new_path: The new path to change to
:type new_path: str
"""
current_path = os.getcwd()
os.chdir(new_path)
try:
yield
finally:
os.chdir(current_path)
def _print_cfg(d, indent=0) -> None:
"""Print the environment configuration
:param d: The dictionary to print
:type d: dict
:param indent: The indentation level (default: ``0``)
:type indent: int, optional
"""
for key, value in d.items():
if isinstance(value, dict):
_print_cfg(value, indent + 1)
else:
print(" | " * indent + f" |-- {key}: {value}")
def load_bidexhands_env(task_name: str = "",
num_envs: Optional[int] = None,
headless: Optional[bool] = None,
cli_args: Sequence[str] = [],
bidexhands_path: str = "",
show_cfg: bool = True):
"""Load a Bi-DexHands environment
:param task_name: The name of the task (default: ``""``).
If not specified, the task name is taken from the command line argument (``--task TASK_NAME``).
Command line argument has priority over function parameter if both are specified
:type task_name: str, optional
:param num_envs: Number of parallel environments to create (default: ``None``).
If not specified, the default number of environments defined in the task configuration is used.
Command line argument has priority over function parameter if both are specified
:type num_envs: int, optional
:param headless: Whether to use headless mode (no rendering) (default: ``None``).
If not specified, the default task configuration is used.
Command line argument has priority over function parameter if both are specified
:type headless: bool, optional
:param cli_args: Isaac Gym environment configuration and command line arguments (default: ``[]``)
:type cli_args: list of str, optional
:param bidexhands_path: The path to the ``bidexhands`` directory (default: ``""``).
                            If empty, the path will be obtained from bidexhands package metadata
:type bidexhands_path: str, optional
:param show_cfg: Whether to print the configuration (default: ``True``)
:type show_cfg: bool, optional
:raises ValueError: The task name has not been defined, neither by the function parameter nor by the command line arguments
:raises RuntimeError: The bidexhands package is not installed or the path is wrong
:return: Bi-DexHands environment (preview 4)
:rtype: isaacgymenvs.tasks.base.vec_task.VecTask
"""
import isaacgym # isort:skip
import bidexhands
# check task from command line arguments
defined = False
for arg in sys.argv:
if arg.startswith("--task"):
defined = True
break
# get task name from command line arguments
if defined:
arg_index = sys.argv.index("--task") + 1
if arg_index >= len(sys.argv):
raise ValueError("No task name defined. Set the task_name parameter or use --task <task_name> as command line argument")
if task_name and task_name != sys.argv[arg_index]:
logger.warning(f"Overriding task ({task_name}) with command line argument ({sys.argv[arg_index]})")
# get task name from function arguments
else:
if task_name:
sys.argv.append("--task")
sys.argv.append(task_name)
else:
raise ValueError("No task name defined. Set the task_name parameter or use --task <task_name> as command line argument")
# check num_envs from command line arguments
defined = False
for arg in sys.argv:
if arg.startswith("--num_envs"):
defined = True
break
# get num_envs from command line arguments
if defined:
if num_envs is not None:
logger.warning("Overriding num_envs with command line argument --num_envs")
# get num_envs from function arguments
elif num_envs is not None and num_envs > 0:
sys.argv.append("--num_envs")
sys.argv.append(str(num_envs))
# check headless from command line arguments
defined = False
for arg in sys.argv:
if arg.startswith("--headless"):
defined = True
break
# get headless from command line arguments
if defined:
if headless is not None:
logger.warning("Overriding headless with command line argument --headless")
# get headless from function arguments
elif headless is not None:
sys.argv.append("--headless")
# others command line arguments
sys.argv += cli_args
# get bidexhands path from bidexhands package metadata
if not bidexhands_path:
if not hasattr(bidexhands, "__path__"):
raise RuntimeError("bidexhands package is not installed")
path = list(bidexhands.__path__)[0]
else:
path = bidexhands_path
sys.path.append(path)
status = True
try:
from utils.config import get_args, load_cfg, parse_sim_params # type: ignore
from utils.parse_task import parse_task # type: ignore
from utils.process_marl import get_AgentIndex # type: ignore
except Exception as e:
status = False
logger.error(f"Failed to import required packages: {e}")
if not status:
raise RuntimeError(f"The path ({path}) is not valid")
args = get_args()
# print config
if show_cfg:
print(f"\nBi-DexHands environment ({args.task})")
_print_cfg(vars(args))
# update task arguments
args.task_type = "MultiAgent" # TODO: get from parameters
args.cfg_train = os.path.join(path, args.cfg_train)
args.cfg_env = os.path.join(path, args.cfg_env)
# load environment
with cwd(path):
cfg, cfg_train, _ = load_cfg(args)
agent_index = get_AgentIndex(cfg)
sim_params = parse_sim_params(args, cfg, cfg_train)
task, env = parse_task(args, cfg, cfg_train, sim_params, agent_index)
return env
| 6,552 | Python | 36.445714 | 132 | 0.628205 |
Toni-SM/skrl/skrl/envs/loaders/torch/__init__.py | from skrl.envs.loaders.torch.bidexhands_envs import load_bidexhands_env
from skrl.envs.loaders.torch.isaac_orbit_envs import load_isaac_orbit_env
from skrl.envs.loaders.torch.isaacgym_envs import (
load_isaacgym_env_preview2,
load_isaacgym_env_preview3,
load_isaacgym_env_preview4
)
from skrl.envs.loaders.torch.omniverse_isaacgym_envs import load_omniverse_isaacgym_env
| 383 | Python | 41.666662 | 87 | 0.804178 |
Toni-SM/skrl/skrl/envs/loaders/torch/isaacgym_envs.py | from typing import Optional, Sequence
import os
import sys
from contextlib import contextmanager
from skrl import logger
__all__ = ["load_isaacgym_env_preview2",
"load_isaacgym_env_preview3",
"load_isaacgym_env_preview4"]
@contextmanager
def cwd(new_path: str) -> None:
"""Context manager to change the current working directory
This function restores the current working directory after the context manager exits
:param new_path: The new path to change to
:type new_path: str
"""
current_path = os.getcwd()
os.chdir(new_path)
try:
yield
finally:
os.chdir(current_path)
def _omegaconf_to_dict(config) -> dict:
"""Convert OmegaConf config to dict
:param config: The OmegaConf config
:type config: OmegaConf.Config
:return: The config as dict
:rtype: dict
"""
# return config.to_container(dict)
from omegaconf import DictConfig
d = {}
for k, v in config.items():
d[k] = _omegaconf_to_dict(v) if isinstance(v, DictConfig) else v
return d
def _print_cfg(d, indent=0) -> None:
"""Print the environment configuration
:param d: The dictionary to print
:type d: dict
:param indent: The indentation level (default: ``0``)
:type indent: int, optional
"""
for key, value in d.items():
if isinstance(value, dict):
_print_cfg(value, indent + 1)
else:
print(" | " * indent + f" |-- {key}: {value}")
def load_isaacgym_env_preview2(task_name: str = "",
num_envs: Optional[int] = None,
headless: Optional[bool] = None,
cli_args: Sequence[str] = [],
isaacgymenvs_path: str = "",
show_cfg: bool = True):
"""Load an Isaac Gym environment (preview 2)
:param task_name: The name of the task (default: ``""``).
If not specified, the task name is taken from the command line argument (``--task TASK_NAME``).
Command line argument has priority over function parameter if both are specified
:type task_name: str, optional
:param num_envs: Number of parallel environments to create (default: ``None``).
If not specified, the default number of environments defined in the task configuration is used.
Command line argument has priority over function parameter if both are specified
:type num_envs: int, optional
:param headless: Whether to use headless mode (no rendering) (default: ``None``).
If not specified, the default task configuration is used.
Command line argument has priority over function parameter if both are specified
:type headless: bool, optional
:param cli_args: Isaac Gym environment configuration and command line arguments (default: ``[]``)
:type cli_args: list of str, optional
:param isaacgymenvs_path: The path to the ``rlgpu`` directory (default: ``""``).
                              If empty, the path will be obtained from isaacgym package metadata
:type isaacgymenvs_path: str, optional
:param show_cfg: Whether to print the configuration (default: ``True``)
:type show_cfg: bool, optional
:raises ValueError: The task name has not been defined,
neither by the function parameter nor by the command line arguments
:raises RuntimeError: The isaacgym package is not installed or the path is wrong
:return: Isaac Gym environment (preview 2)
:rtype: tasks.base.vec_task.VecTask
"""
import isaacgym
# check task from command line arguments
defined = False
for arg in sys.argv:
if arg.startswith("--task"):
defined = True
break
# get task name from command line arguments
if defined:
arg_index = sys.argv.index("--task") + 1
if arg_index >= len(sys.argv):
raise ValueError("No task name defined. Set the task_name parameter or use --task <task_name> as command line argument")
if task_name and task_name != sys.argv[arg_index]:
logger.warning(f"Overriding task ({task_name}) with command line argument ({sys.argv[arg_index]})")
# get task name from function arguments
else:
if task_name:
sys.argv.append("--task")
sys.argv.append(task_name)
else:
raise ValueError("No task name defined. Set the task_name parameter or use --task <task_name> as command line argument")
# check num_envs from command line arguments
defined = False
for arg in sys.argv:
if arg.startswith("--num_envs"):
defined = True
break
# get num_envs from command line arguments
if defined:
if num_envs is not None:
logger.warning("Overriding num_envs with command line argument --num_envs")
# get num_envs from function arguments
elif num_envs is not None and num_envs > 0:
sys.argv.append("--num_envs")
sys.argv.append(str(num_envs))
# check headless from command line arguments
defined = False
for arg in sys.argv:
if arg.startswith("--headless"):
defined = True
break
# get headless from command line arguments
if defined:
if headless is not None:
logger.warning("Overriding headless with command line argument --headless")
# get headless from function arguments
elif headless is not None:
sys.argv.append("--headless")
# others command line arguments
sys.argv += cli_args
# get isaacgym envs path from isaacgym package metadata
if not isaacgymenvs_path:
if not hasattr(isaacgym, "__path__"):
raise RuntimeError("isaacgym package is not installed or could not be accessed by the current Python environment")
path = isaacgym.__path__
path = os.path.join(path[0], "..", "rlgpu")
else:
path = isaacgymenvs_path
# import required packages
sys.path.append(path)
status = True
try:
from utils.config import get_args, load_cfg, parse_sim_params # type: ignore
from utils.parse_task import parse_task # type: ignore
except Exception as e:
status = False
logger.error(f"Failed to import required packages: {e}")
if not status:
raise RuntimeError(f"Path ({path}) is not valid or the isaacgym package is not installed in editable mode (pip install -e .)")
args = get_args()
# print config
if show_cfg:
print(f"\nIsaac Gym environment ({args.task})")
_print_cfg(vars(args))
# update task arguments
args.cfg_train = os.path.join(path, args.cfg_train)
args.cfg_env = os.path.join(path, args.cfg_env)
# load environment
with cwd(path):
cfg, cfg_train, _ = load_cfg(args)
sim_params = parse_sim_params(args, cfg, cfg_train)
task, env = parse_task(args, cfg, cfg_train, sim_params)
return env
def load_isaacgym_env_preview3(task_name: str = "",
num_envs: Optional[int] = None,
headless: Optional[bool] = None,
cli_args: Sequence[str] = [],
isaacgymenvs_path: str = "",
show_cfg: bool = True):
"""Load an Isaac Gym environment (preview 3)
Isaac Gym benchmark environments: https://github.com/NVIDIA-Omniverse/IsaacGymEnvs
:param task_name: The name of the task (default: ``""``).
If not specified, the task name is taken from the command line argument (``task=TASK_NAME``).
Command line argument has priority over function parameter if both are specified
:type task_name: str, optional
:param num_envs: Number of parallel environments to create (default: ``None``).
If not specified, the default number of environments defined in the task configuration is used.
Command line argument has priority over function parameter if both are specified
:type num_envs: int, optional
:param headless: Whether to use headless mode (no rendering) (default: ``None``).
If not specified, the default task configuration is used.
Command line argument has priority over function parameter if both are specified
:type headless: bool, optional
:param cli_args: IsaacGymEnvs configuration and command line arguments (default: ``[]``)
:type cli_args: list of str, optional
:param isaacgymenvs_path: The path to the ``isaacgymenvs`` directory (default: ``""``).
                              If empty, the path will be obtained from isaacgymenvs package metadata
:type isaacgymenvs_path: str, optional
:param show_cfg: Whether to print the configuration (default: ``True``)
:type show_cfg: bool, optional
:raises ValueError: The task name has not been defined, neither by the function parameter nor by the command line arguments
:raises RuntimeError: The isaacgymenvs package is not installed or the path is wrong
:return: Isaac Gym environment (preview 3)
:rtype: isaacgymenvs.tasks.base.vec_task.VecTask
"""
import isaacgym
import isaacgymenvs
from hydra._internal.hydra import Hydra
from hydra._internal.utils import create_automatic_config_search_path, get_args_parser
from hydra.types import RunMode
from omegaconf import OmegaConf
# check task from command line arguments
defined = False
for arg in sys.argv:
if arg.startswith("task="):
defined = True
break
# get task name from command line arguments
if defined:
if task_name and task_name != arg.split("task=")[1].split(" ")[0]:
logger.warning("Overriding task name ({}) with command line argument ({})" \
.format(task_name, arg.split("task=")[1].split(" ")[0]))
# get task name from function arguments
else:
if task_name:
sys.argv.append(f"task={task_name}")
else:
raise ValueError("No task name defined. Set task_name parameter or use task=<task_name> as command line argument")
# check num_envs from command line arguments
defined = False
for arg in sys.argv:
if arg.startswith("num_envs="):
defined = True
break
# get num_envs from command line arguments
if defined:
if num_envs is not None and num_envs != int(arg.split("num_envs=")[1].split(" ")[0]):
logger.warning("Overriding num_envs ({}) with command line argument (num_envs={})" \
.format(num_envs, arg.split("num_envs=")[1].split(" ")[0]))
# get num_envs from function arguments
elif num_envs is not None and num_envs > 0:
sys.argv.append(f"num_envs={num_envs}")
# check headless from command line arguments
defined = False
for arg in sys.argv:
if arg.startswith("headless="):
defined = True
break
# get headless from command line arguments
if defined:
if headless is not None and str(headless).lower() != arg.split("headless=")[1].split(" ")[0].lower():
logger.warning("Overriding headless ({}) with command line argument (headless={})" \
.format(headless, arg.split("headless=")[1].split(" ")[0]))
# get headless from function arguments
elif headless is not None:
sys.argv.append(f"headless={headless}")
# others command line arguments
sys.argv += cli_args
# get isaacgymenvs path from isaacgymenvs package metadata
if isaacgymenvs_path == "":
if not hasattr(isaacgymenvs, "__path__"):
raise RuntimeError("isaacgymenvs package is not installed")
isaacgymenvs_path = list(isaacgymenvs.__path__)[0]
config_path = os.path.join(isaacgymenvs_path, "cfg")
# set omegaconf resolvers
try:
OmegaConf.register_new_resolver('eq', lambda x, y: x.lower() == y.lower())
except Exception as e:
pass
try:
OmegaConf.register_new_resolver('contains', lambda x, y: x.lower() in y.lower())
except Exception as e:
pass
try:
OmegaConf.register_new_resolver('if', lambda condition, a, b: a if condition else b)
except Exception as e:
pass
try:
OmegaConf.register_new_resolver('resolve_default', lambda default, arg: default if arg == '' else arg)
except Exception as e:
pass
# get hydra config without use @hydra.main
config_file = "config"
args = get_args_parser().parse_args()
search_path = create_automatic_config_search_path(config_file, None, config_path)
hydra_object = Hydra.create_main_hydra2(task_name='load_isaacgymenv', config_search_path=search_path)
config = hydra_object.compose_config(config_file, args.overrides, run_mode=RunMode.RUN)
cfg = _omegaconf_to_dict(config.task)
# print config
if show_cfg:
print(f"\nIsaac Gym environment ({config.task.name})")
_print_cfg(cfg)
# load environment
sys.path.append(isaacgymenvs_path)
from tasks import isaacgym_task_map # type: ignore
try:
env = isaacgym_task_map[config.task.name](cfg=cfg,
sim_device=config.sim_device,
graphics_device_id=config.graphics_device_id,
headless=config.headless)
except TypeError as e:
env = isaacgym_task_map[config.task.name](cfg=cfg,
rl_device=config.rl_device,
sim_device=config.sim_device,
graphics_device_id=config.graphics_device_id,
headless=config.headless,
virtual_screen_capture=config.capture_video, # TODO: check
force_render=config.force_render)
return env
def load_isaacgym_env_preview4(task_name: str = "",
num_envs: Optional[int] = None,
headless: Optional[bool] = None,
cli_args: Sequence[str] = [],
isaacgymenvs_path: str = "",
show_cfg: bool = True):
"""Load an Isaac Gym environment (preview 4)
Isaac Gym benchmark environments: https://github.com/NVIDIA-Omniverse/IsaacGymEnvs
:param task_name: The name of the task (default: ``""``).
If not specified, the task name is taken from the command line argument (``task=TASK_NAME``).
Command line argument has priority over function parameter if both are specified
:type task_name: str, optional
:param num_envs: Number of parallel environments to create (default: ``None``).
If not specified, the default number of environments defined in the task configuration is used.
Command line argument has priority over function parameter if both are specified
:type num_envs: int, optional
:param headless: Whether to use headless mode (no rendering) (default: ``None``).
If not specified, the default task configuration is used.
Command line argument has priority over function parameter if both are specified
:type headless: bool, optional
:param cli_args: IsaacGymEnvs configuration and command line arguments (default: ``[]``)
:type cli_args: list of str, optional
:param isaacgymenvs_path: The path to the ``isaacgymenvs`` directory (default: ``""``).
                              If empty, the path will be obtained from isaacgymenvs package metadata
:type isaacgymenvs_path: str, optional
:param show_cfg: Whether to print the configuration (default: ``True``)
:type show_cfg: bool, optional
:raises ValueError: The task name has not been defined, neither by the function parameter nor by the command line arguments
:raises RuntimeError: The isaacgymenvs package is not installed or the path is wrong
:return: Isaac Gym environment (preview 4)
:rtype: isaacgymenvs.tasks.base.vec_task.VecTask
"""
return load_isaacgym_env_preview3(task_name, num_envs, headless, cli_args, isaacgymenvs_path, show_cfg)
| 16,639 | Python | 42.446475 | 134 | 0.615241 |
Toni-SM/skrl/skrl/envs/loaders/torch/isaac_orbit_envs.py | from typing import Optional, Sequence
import os
import sys
from skrl import logger
__all__ = ["load_isaac_orbit_env"]
def _print_cfg(d, indent=0) -> None:
"""Print the environment configuration
:param d: The dictionary to print
:type d: dict
:param indent: The indentation level (default: ``0``)
:type indent: int, optional
"""
for key, value in d.items():
if isinstance(value, dict):
_print_cfg(value, indent + 1)
else:
print(" | " * indent + f" |-- {key}: {value}")
def load_isaac_orbit_env(task_name: str = "",
num_envs: Optional[int] = None,
headless: Optional[bool] = None,
cli_args: Sequence[str] = [],
show_cfg: bool = True):
"""Load an Isaac Orbit environment
Isaac Orbit: https://isaac-orbit.github.io/orbit/index.html
This function includes the definition and parsing of command line arguments used by Isaac Orbit:
- ``--headless``: Force display off at all times
- ``--cpu``: Use CPU pipeline
- ``--num_envs``: Number of environments to simulate
- ``--task``: Name of the task
    - ``--seed``: Seed used for the environment
:param task_name: The name of the task (default: ``""``).
If not specified, the task name is taken from the command line argument (``--task TASK_NAME``).
Command line argument has priority over function parameter if both are specified
:type task_name: str, optional
:param num_envs: Number of parallel environments to create (default: ``None``).
If not specified, the default number of environments defined in the task configuration is used.
Command line argument has priority over function parameter if both are specified
:type num_envs: int, optional
:param headless: Whether to use headless mode (no rendering) (default: ``None``).
If not specified, the default task configuration is used.
Command line argument has priority over function parameter if both are specified
:type headless: bool, optional
:param cli_args: Isaac Orbit configuration and command line arguments (default: ``[]``)
:type cli_args: list of str, optional
:param show_cfg: Whether to print the configuration (default: ``True``)
:type show_cfg: bool, optional
:raises ValueError: The task name has not been defined, neither by the function parameter nor by the command line arguments
:return: Isaac Orbit environment
:rtype: gym.Env
"""
import argparse
import atexit
import gym
# check task from command line arguments
defined = False
for arg in sys.argv:
if arg.startswith("--task"):
defined = True
break
# get task name from command line arguments
if defined:
arg_index = sys.argv.index("--task") + 1
if arg_index >= len(sys.argv):
raise ValueError("No task name defined. Set the task_name parameter or use --task <task_name> as command line argument")
if task_name and task_name != sys.argv[arg_index]:
logger.warning(f"Overriding task ({task_name}) with command line argument ({sys.argv[arg_index]})")
# get task name from function arguments
else:
if task_name:
sys.argv.append("--task")
sys.argv.append(task_name)
else:
raise ValueError("No task name defined. Set the task_name parameter or use --task <task_name> as command line argument")
# check num_envs from command line arguments
defined = False
for arg in sys.argv:
if arg.startswith("--num_envs"):
defined = True
break
# get num_envs from command line arguments
if defined:
if num_envs is not None:
logger.warning("Overriding num_envs with command line argument (--num_envs)")
# get num_envs from function arguments
elif num_envs is not None and num_envs > 0:
sys.argv.append("--num_envs")
sys.argv.append(str(num_envs))
# check headless from command line arguments
defined = False
for arg in sys.argv:
if arg.startswith("--headless"):
defined = True
break
# get headless from command line arguments
if defined:
if headless is not None:
logger.warning("Overriding headless with command line argument (--headless)")
# get headless from function arguments
elif headless is not None:
sys.argv.append("--headless")
# others command line arguments
sys.argv += cli_args
# parse arguments
parser = argparse.ArgumentParser("Welcome to Orbit: Omniverse Robotics Environments!")
parser.add_argument("--headless", action="store_true", default=False, help="Force display off at all times.")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU pipeline.")
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
args = parser.parse_args()
# load the most efficient kit configuration in headless mode
if args.headless:
app_experience = f"{os.environ['EXP_PATH']}/omni.isaac.sim.python.gym.headless.kit"
else:
app_experience = f"{os.environ['EXP_PATH']}/omni.isaac.sim.python.kit"
# launch the simulator
from omni.isaac.kit import SimulationApp # type: ignore
config = {"headless": args.headless}
simulation_app = SimulationApp(config, experience=app_experience)
@atexit.register
def close_the_simulator():
simulation_app.close()
# import orbit extensions
import omni.isaac.contrib_envs # type: ignore
import omni.isaac.orbit_envs # type: ignore
from omni.isaac.orbit_envs.utils import parse_env_cfg # type: ignore
cfg = parse_env_cfg(args.task, use_gpu=not args.cpu, num_envs=args.num_envs)
# print config
if show_cfg:
print(f"\nIsaac Orbit environment ({args.task})")
try:
_print_cfg(cfg)
except AttributeError as e:
pass
# load environment
env = gym.make(args.task, cfg=cfg, headless=args.headless)
return env
| 6,481 | Python | 37.814371 | 132 | 0.636013 |
Toni-SM/skrl/skrl/envs/loaders/torch/omniverse_isaacgym_envs.py | from typing import Optional, Sequence, Union
import os
import queue
import sys
from skrl import logger
__all__ = ["load_omniverse_isaacgym_env"]
def _omegaconf_to_dict(config) -> dict:
"""Convert OmegaConf config to dict
:param config: The OmegaConf config
:type config: OmegaConf.Config
:return: The config as dict
:rtype: dict
"""
# return config.to_container(dict)
from omegaconf import DictConfig
d = {}
for k, v in config.items():
d[k] = _omegaconf_to_dict(v) if isinstance(v, DictConfig) else v
return d
def _print_cfg(d, indent=0) -> None:
"""Print the environment configuration
:param d: The dictionary to print
:type d: dict
:param indent: The indentation level (default: ``0``)
:type indent: int, optional
"""
for key, value in d.items():
if isinstance(value, dict):
_print_cfg(value, indent + 1)
else:
print(" | " * indent + f" |-- {key}: {value}")
def load_omniverse_isaacgym_env(task_name: str = "",
num_envs: Optional[int] = None,
headless: Optional[bool] = None,
cli_args: Sequence[str] = [],
omniisaacgymenvs_path: str = "",
show_cfg: bool = True,
multi_threaded: bool = False,
timeout: int = 30) -> Union["VecEnvBase", "VecEnvMT"]:
"""Load an Omniverse Isaac Gym environment (OIGE)
Omniverse Isaac Gym benchmark environments: https://github.com/NVIDIA-Omniverse/OmniIsaacGymEnvs
:param task_name: The name of the task (default: ``""``).
If not specified, the task name is taken from the command line argument (``task=TASK_NAME``).
Command line argument has priority over function parameter if both are specified
:type task_name: str, optional
:param num_envs: Number of parallel environments to create (default: ``None``).
If not specified, the default number of environments defined in the task configuration is used.
Command line argument has priority over function parameter if both are specified
:type num_envs: int, optional
:param headless: Whether to use headless mode (no rendering) (default: ``None``).
If not specified, the default task configuration is used.
Command line argument has priority over function parameter if both are specified
:type headless: bool, optional
:param cli_args: OIGE configuration and command line arguments (default: ``[]``)
:type cli_args: list of str, optional
:param omniisaacgymenvs_path: The path to the ``omniisaacgymenvs`` directory (default: ``""``).
                                  If empty, the path will be obtained from omniisaacgymenvs package metadata
:type omniisaacgymenvs_path: str, optional
:param show_cfg: Whether to print the configuration (default: ``True``)
:type show_cfg: bool, optional
:param multi_threaded: Whether to use multi-threaded environment (default: ``False``)
:type multi_threaded: bool, optional
:param timeout: Seconds to wait for data when queue is empty in multi-threaded environment (default: ``30``)
:type timeout: int, optional
:raises ValueError: The task name has not been defined, neither by the function parameter nor by the command line arguments
:raises RuntimeError: The omniisaacgymenvs package is not installed or the path is wrong
:return: Omniverse Isaac Gym environment
:rtype: omni.isaac.gym.vec_env.vec_env_base.VecEnvBase or omni.isaac.gym.vec_env.vec_env_mt.VecEnvMT
"""
import omegaconf
import omniisaacgymenvs # type: ignore
from hydra._internal.hydra import Hydra
from hydra._internal.utils import create_automatic_config_search_path, get_args_parser
from hydra.types import RunMode
from omegaconf import OmegaConf
from omni.isaac.gym.vec_env import TaskStopException, VecEnvBase, VecEnvMT # type: ignore
from omni.isaac.gym.vec_env.vec_env_mt import TrainerMT # type: ignore
import torch
# check task from command line arguments
defined = False
for arg in sys.argv:
if arg.startswith("task="):
defined = True
break
# get task name from command line arguments
if defined:
if task_name and task_name != arg.split("task=")[1].split(" ")[0]:
logger.warning("Overriding task name ({}) with command line argument (task={})" \
.format(task_name, arg.split("task=")[1].split(" ")[0]))
# get task name from function arguments
else:
if task_name:
sys.argv.append(f"task={task_name}")
else:
raise ValueError("No task name defined. Set task_name parameter or use task=<task_name> as command line argument")
# check num_envs from command line arguments
defined = False
for arg in sys.argv:
if arg.startswith("num_envs="):
defined = True
break
# get num_envs from command line arguments
if defined:
if num_envs is not None and num_envs != int(arg.split("num_envs=")[1].split(" ")[0]):
logger.warning("Overriding num_envs ({}) with command line argument (num_envs={})" \
.format(num_envs, arg.split("num_envs=")[1].split(" ")[0]))
# get num_envs from function arguments
elif num_envs is not None and num_envs > 0:
sys.argv.append(f"num_envs={num_envs}")
# check headless from command line arguments
defined = False
for arg in sys.argv:
if arg.startswith("headless="):
defined = True
break
# get headless from command line arguments
if defined:
if headless is not None and str(headless).lower() != arg.split("headless=")[1].split(" ")[0].lower():
logger.warning("Overriding headless ({}) with command line argument (headless={})" \
.format(headless, arg.split("headless=")[1].split(" ")[0]))
# get headless from function arguments
elif headless is not None:
sys.argv.append(f"headless={headless}")
# others command line arguments
sys.argv += cli_args
# get omniisaacgymenvs path from omniisaacgymenvs package metadata
if omniisaacgymenvs_path == "":
if not hasattr(omniisaacgymenvs, "__path__"):
raise RuntimeError("omniisaacgymenvs package is not installed")
omniisaacgymenvs_path = list(omniisaacgymenvs.__path__)[0]
config_path = os.path.join(omniisaacgymenvs_path, "cfg")
# set omegaconf resolvers
OmegaConf.register_new_resolver('eq', lambda x, y: x.lower() == y.lower())
OmegaConf.register_new_resolver('contains', lambda x, y: x.lower() in y.lower())
OmegaConf.register_new_resolver('if', lambda condition, a, b: a if condition else b)
OmegaConf.register_new_resolver('resolve_default', lambda default, arg: default if arg == '' else arg)
# get hydra config without use @hydra.main
config_file = "config"
args = get_args_parser().parse_args()
search_path = create_automatic_config_search_path(config_file, None, config_path)
hydra_object = Hydra.create_main_hydra2(task_name='load_omniisaacgymenv', config_search_path=search_path)
config = hydra_object.compose_config(config_file, args.overrides, run_mode=RunMode.RUN)
del config.hydra
cfg = _omegaconf_to_dict(config)
cfg["train"] = {}
# print config
if show_cfg:
print(f"\nOmniverse Isaac Gym environment ({config.task.name})")
_print_cfg(cfg)
# internal classes
class _OmniIsaacGymVecEnv(VecEnvBase):
def step(self, actions):
actions = torch.clamp(actions, -self._task.clip_actions, self._task.clip_actions).to(self._task.device).clone()
self._task.pre_physics_step(actions)
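            # advance the physics several sub-steps per policy action (control decimation)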
for _ in range(self._task.control_frequency_inv):
self._world.step(render=self._render)
self.sim_frame_count += 1
observations, rewards, dones, info = self._task.post_physics_step()
return {"obs": torch.clamp(observations, -self._task.clip_obs, self._task.clip_obs).to(self._task.rl_device).clone()}, \
rewards.to(self._task.rl_device).clone(), dones.to(self._task.rl_device).clone(), info.copy()
def reset(self):
self._task.reset()
actions = torch.zeros((self.num_envs, self._task.num_actions), device=self._task.device)
return self.step(actions)[0]
class _OmniIsaacGymTrainerMT(TrainerMT):
def run(self):
pass
def stop(self):
pass
class _OmniIsaacGymVecEnvMT(VecEnvMT):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.action_queue = queue.Queue(1)
self.data_queue = queue.Queue(1)
def run(self, trainer=None):
super().run(_OmniIsaacGymTrainerMT() if trainer is None else trainer)
def _parse_data(self, data):
self._observations = torch.clamp(data["obs"], -self._task.clip_obs, self._task.clip_obs).to(self._task.rl_device).clone()
self._rewards = data["rew"].to(self._task.rl_device).clone()
self._dones = data["reset"].to(self._task.rl_device).clone()
self._info = data["extras"].copy()
def step(self, actions):
if self._stop:
raise TaskStopException()
actions = torch.clamp(actions, -self._task.clip_actions, self._task.clip_actions).clone()
self.send_actions(actions)
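            # get_data() waits for the simulation thread's next transition, which populates the buffers read below via _parse_data()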
data = self.get_data()
return {"obs": self._observations}, self._rewards, self._dones, self._info
def reset(self):
self._task.reset()
actions = torch.zeros((self.num_envs, self._task.num_actions), device=self._task.device)
return self.step(actions)[0]
def close(self):
# end stop signal to main thread
self.send_actions(None)
self.stop = True
# load environment
sys.path.append(omniisaacgymenvs_path)
from utils.task_util import initialize_task # type: ignore
try:
if config.multi_gpu:
rank = int(os.getenv("LOCAL_RANK", "0"))
config.device_id = rank
config.rl_device = f"cuda:{rank}"
except omegaconf.errors.ConfigAttributeError:
logger.warning("Using an older version of OmniIsaacGymEnvs (2022.2.0 or earlier)")
enable_viewport = "enable_cameras" in config.task.sim and config.task.sim.enable_cameras
if multi_threaded:
try:
env = _OmniIsaacGymVecEnvMT(headless=config.headless,
sim_device=config.device_id,
enable_livestream=config.enable_livestream,
enable_viewport=enable_viewport)
except (TypeError, omegaconf.errors.ConfigAttributeError):
logger.warning("Using an older version of Isaac Sim or OmniIsaacGymEnvs (2022.2.0 or earlier)")
env = _OmniIsaacGymVecEnvMT(headless=config.headless) # Isaac Sim 2022.2.0 and earlier
task = initialize_task(cfg, env, init_sim=False)
env.initialize(env.action_queue, env.data_queue, timeout=timeout)
else:
try:
env = _OmniIsaacGymVecEnv(headless=config.headless,
sim_device=config.device_id,
enable_livestream=config.enable_livestream,
enable_viewport=enable_viewport)
except (TypeError, omegaconf.errors.ConfigAttributeError):
logger.warning("Using an older version of Isaac Sim or OmniIsaacGymEnvs (2022.2.0 or earlier)")
env = _OmniIsaacGymVecEnv(headless=config.headless) # Isaac Sim 2022.2.0 and earlier
task = initialize_task(cfg, env, init_sim=True)
return env
| 12,134 | Python | 42.808664 | 133 | 0.619829 |
Toni-SM/skrl/skrl/envs/loaders/jax/bidexhands_envs.py | # since Bi-DexHands environments are implemented on top of PyTorch, the loader is the same
from skrl.envs.loaders.torch import load_bidexhands_env
| 148 | Python | 36.249991 | 90 | 0.810811 |
Toni-SM/skrl/skrl/envs/loaders/jax/__init__.py | from skrl.envs.loaders.jax.bidexhands_envs import load_bidexhands_env
from skrl.envs.loaders.jax.isaac_orbit_envs import load_isaac_orbit_env
from skrl.envs.loaders.jax.isaacgym_envs import (
load_isaacgym_env_preview2,
load_isaacgym_env_preview3,
load_isaacgym_env_preview4
)
from skrl.envs.loaders.jax.omniverse_isaacgym_envs import load_omniverse_isaacgym_env
| 375 | Python | 40.777773 | 85 | 0.8 |
Toni-SM/skrl/skrl/envs/loaders/jax/isaacgym_envs.py | # since Isaac Gym (preview) environments are implemented on top of PyTorch, the loaders are the same
from skrl.envs.loaders.torch import ( # isort:skip
load_isaacgym_env_preview2,
load_isaacgym_env_preview3,
load_isaacgym_env_preview4,
)
| 252 | Python | 30.624996 | 100 | 0.746032 |
Toni-SM/skrl/skrl/envs/loaders/jax/isaac_orbit_envs.py | # since Isaac Orbit environments are implemented on top of PyTorch, the loader is the same
from skrl.envs.loaders.torch import load_isaac_orbit_env
| 149 | Python | 36.499991 | 90 | 0.805369 |
Toni-SM/skrl/skrl/envs/loaders/jax/omniverse_isaacgym_envs.py | # since Omniverse Isaac Gym environments are implemented on top of PyTorch, the loader is the same
from skrl.envs.loaders.torch import load_omniverse_isaacgym_env
| 164 | Python | 40.24999 | 98 | 0.817073 |
Toni-SM/skrl/skrl/envs/wrappers/torch/gym_envs.py | from typing import Any, Optional, Tuple
import gym
from packaging import version
import numpy as np
import torch
from skrl import logger
from skrl.envs.wrappers.torch.base import Wrapper
class GymWrapper(Wrapper):
def __init__(self, env: Any) -> None:
"""OpenAI Gym environment wrapper
:param env: The environment to wrap
:type env: Any supported OpenAI Gym environment
"""
super().__init__(env)
self._vectorized = False
try:
if isinstance(env, gym.vector.SyncVectorEnv) or isinstance(env, gym.vector.AsyncVectorEnv):
self._vectorized = True
self._reset_once = True
self._obs_tensor = None
self._info_dict = None
except Exception as e:
logger.warning(f"Failed to check for a vectorized environment: {e}")
self._deprecated_api = version.parse(gym.__version__) < version.parse("0.25.0")
if self._deprecated_api:
logger.warning(f"Using a deprecated version of OpenAI Gym's API: {gym.__version__}")
@property
def state_space(self) -> gym.Space:
"""State space
An alias for the ``observation_space`` property
"""
if self._vectorized:
return self._env.single_observation_space
return self._env.observation_space
@property
def observation_space(self) -> gym.Space:
"""Observation space
"""
if self._vectorized:
return self._env.single_observation_space
return self._env.observation_space
@property
def action_space(self) -> gym.Space:
"""Action space
"""
if self._vectorized:
return self._env.single_action_space
return self._env.action_space
def _observation_to_tensor(self, observation: Any, space: Optional[gym.Space] = None) -> torch.Tensor:
"""Convert the OpenAI Gym observation to a flat tensor
:param observation: The OpenAI Gym observation to convert to a tensor
:type observation: Any supported OpenAI Gym observation space
:raises: ValueError if the observation space type is not supported
:return: The observation as a flat tensor
:rtype: torch.Tensor
"""
observation_space = self._env.observation_space if self._vectorized else self.observation_space
space = space if space is not None else observation_space
if self._vectorized and isinstance(space, gym.spaces.MultiDiscrete):
return torch.tensor(observation, device=self.device, dtype=torch.int64).view(self.num_envs, -1)
elif isinstance(observation, int):
return torch.tensor(observation, device=self.device, dtype=torch.int64).view(self.num_envs, -1)
elif isinstance(observation, np.ndarray):
return torch.tensor(observation, device=self.device, dtype=torch.float32).view(self.num_envs, -1)
elif isinstance(space, gym.spaces.Discrete):
return torch.tensor(observation, device=self.device, dtype=torch.float32).view(self.num_envs, -1)
elif isinstance(space, gym.spaces.Box):
return torch.tensor(observation, device=self.device, dtype=torch.float32).view(self.num_envs, -1)
elif isinstance(space, gym.spaces.Dict):
tmp = torch.cat([self._observation_to_tensor(observation[k], space[k]) \
for k in sorted(space.keys())], dim=-1).view(self.num_envs, -1)
return tmp
else:
raise ValueError(f"Observation space type {type(space)} not supported. Please report this issue")
def _tensor_to_action(self, actions: torch.Tensor) -> Any:
"""Convert the action to the OpenAI Gym expected format
:param actions: The actions to perform
:type actions: torch.Tensor
:raise ValueError: If the action space type is not supported
:return: The action in the OpenAI Gym format
:rtype: Any supported OpenAI Gym action space
"""
space = self._env.action_space if self._vectorized else self.action_space
if self._vectorized:
if isinstance(space, gym.spaces.MultiDiscrete):
return np.array(actions.cpu().numpy(), dtype=space.dtype).reshape(space.shape)
elif isinstance(space, gym.spaces.Tuple):
if isinstance(space[0], gym.spaces.Box):
return np.array(actions.cpu().numpy(), dtype=space[0].dtype).reshape(space.shape)
elif isinstance(space[0], gym.spaces.Discrete):
return np.array(actions.cpu().numpy(), dtype=space[0].dtype).reshape(-1)
elif isinstance(space, gym.spaces.Discrete):
return actions.item()
elif isinstance(space, gym.spaces.MultiDiscrete):
return np.array(actions.cpu().numpy(), dtype=space.dtype).reshape(space.shape)
elif isinstance(space, gym.spaces.Box):
return np.array(actions.cpu().numpy(), dtype=space.dtype).reshape(space.shape)
raise ValueError(f"Action space type {type(space)} not supported. Please report this issue")
def step(self, actions: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, Any]:
"""Perform a step in the environment
:param actions: The actions to perform
:type actions: torch.Tensor
:return: Observation, reward, terminated, truncated, info
:rtype: tuple of torch.Tensor and any other info
"""
if self._deprecated_api:
observation, reward, terminated, info = self._env.step(self._tensor_to_action(actions))
# truncated: https://gymnasium.farama.org/tutorials/handling_time_limits
if type(info) is list:
truncated = np.array([d.get("TimeLimit.truncated", False) for d in info], dtype=terminated.dtype)
terminated *= np.logical_not(truncated)
else:
truncated = info.get("TimeLimit.truncated", False)
if truncated:
terminated = False
else:
observation, reward, terminated, truncated, info = self._env.step(self._tensor_to_action(actions))
# convert response to torch
observation = self._observation_to_tensor(observation)
reward = torch.tensor(reward, device=self.device, dtype=torch.float32).view(self.num_envs, -1)
terminated = torch.tensor(terminated, device=self.device, dtype=torch.bool).view(self.num_envs, -1)
truncated = torch.tensor(truncated, device=self.device, dtype=torch.bool).view(self.num_envs, -1)
# save observation and info for vectorized envs
if self._vectorized:
self._obs_tensor = observation
self._info_dict = info
return observation, reward, terminated, truncated, info
def reset(self) -> Tuple[torch.Tensor, Any]:
"""Reset the environment
:return: Observation, info
:rtype: torch.Tensor and any other info
"""
# handle vectorized envs
if self._vectorized:
if not self._reset_once:
return self._obs_tensor, self._info_dict
self._reset_once = False
# reset the env/envs
if self._deprecated_api:
observation = self._env.reset()
info = {}
else:
observation, info = self._env.reset()
return self._observation_to_tensor(observation), info
def render(self, *args, **kwargs) -> None:
"""Render the environment
"""
self._env.render(*args, **kwargs)
def close(self) -> None:
"""Close the environment
"""
self._env.close()
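# --- Hedged usage sketch (added for illustration; not part of the original file) ---
# Shows how GymWrapper can be driven directly with torch tensors.
# "CartPole-v1" is only an illustrative environment id; any Gym env should work.
if __name__ == "__main__":
    env = GymWrapper(gym.make("CartPole-v1"))
    observation, info = env.reset()
    for _ in range(3):
        # random action, shaped (num_envs, -1) as the wrapper expects
        action = torch.tensor(env.action_space.sample(), device=env.device).view(env.num_envs, -1)
        observation, reward, terminated, truncated, info = env.step(action)
        if terminated.any() or truncated.any():
            observation, info = env.reset()
    env.close()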
| 7,739 | Python | 40.612903 | 113 | 0.625275 |
Toni-SM/skrl/skrl/envs/wrappers/torch/bidexhands_envs.py | from typing import Any, Mapping, Sequence, Tuple
import gym
import torch
from skrl.envs.wrappers.torch.base import MultiAgentEnvWrapper
class BiDexHandsWrapper(MultiAgentEnvWrapper):
def __init__(self, env: Any) -> None:
"""Bi-DexHands wrapper
:param env: The environment to wrap
:type env: Any supported Bi-DexHands environment
"""
super().__init__(env)
self._reset_once = True
self._obs_buf = None
self._shared_obs_buf = None
self.possible_agents = [f"agent_{i}" for i in range(self.num_agents)]
@property
def agents(self) -> Sequence[str]:
"""Names of all current agents
These may be changed as an environment progresses (i.e. agents can be added or removed)
"""
return self.possible_agents
@property
def observation_spaces(self) -> Mapping[str, gym.Space]:
"""Observation spaces
"""
return {uid: space for uid, space in zip(self.possible_agents, self._env.observation_space)}
@property
def action_spaces(self) -> Mapping[str, gym.Space]:
"""Action spaces
"""
return {uid: space for uid, space in zip(self.possible_agents, self._env.action_space)}
@property
def shared_observation_spaces(self) -> Mapping[str, gym.Space]:
"""Shared observation spaces
"""
return {uid: space for uid, space in zip(self.possible_agents, self._env.share_observation_space)}
def step(self, actions: Mapping[str, torch.Tensor]) -> \
Tuple[Mapping[str, torch.Tensor], Mapping[str, torch.Tensor],
Mapping[str, torch.Tensor], Mapping[str, torch.Tensor], Mapping[str, Any]]:
"""Perform a step in the environment
:param actions: The actions to perform
:type actions: dictionary of torch.Tensor
:return: Observation, reward, terminated, truncated, info
:rtype: tuple of dictionaries torch.Tensor and any other info
"""
actions = [actions[uid] for uid in self.possible_agents]
obs_buf, shared_obs_buf, reward_buf, terminated_buf, info, _ = self._env.step(actions)
self._obs_buf = {uid: obs_buf[:,i] for i, uid in enumerate(self.possible_agents)}
self._shared_obs_buf = {uid: shared_obs_buf[:,i] for i, uid in enumerate(self.possible_agents)}
reward = {uid: reward_buf[:,i].view(-1, 1) for i, uid in enumerate(self.possible_agents)}
terminated = {uid: terminated_buf[:,i].view(-1, 1) for i, uid in enumerate(self.possible_agents)}
truncated = {uid: torch.zeros_like(value) for uid, value in terminated.items()}
info = {"shared_states": self._shared_obs_buf}
return self._obs_buf, reward, terminated, truncated, info
def reset(self) -> Tuple[Mapping[str, torch.Tensor], Mapping[str, Any]]:
"""Reset the environment
:return: Observation, info
:rtype: tuple of dictionaries of torch.Tensor and any other info
"""
if self._reset_once:
obs_buf, shared_obs_buf, _ = self._env.reset()
self._obs_buf = {uid: obs_buf[:,i] for i, uid in enumerate(self.possible_agents)}
self._shared_obs_buf = {uid: shared_obs_buf[:,i] for i, uid in enumerate(self.possible_agents)}
self._reset_once = False
return self._obs_buf, {"shared_states": self._shared_obs_buf}
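# --- Hedged usage sketch (added for illustration; not part of the original file) ---
# Creating a Bi-DexHands task requires the bidexhands package and its own launch
# utilities, so this sketch assumes a parallel task instance is already available.
def _example_rollout(bidexhands_env) -> None:
    env = BiDexHandsWrapper(bidexhands_env)
    observations, infos = env.reset()
    # one random continuous action per agent, shaped (num_envs, action_dim)
    actions = {uid: 2 * torch.rand((env.num_envs, env.action_spaces[uid].shape[0]),
                                   device=env.device) - 1
               for uid in env.agents}
    observations, rewards, terminated, truncated, infos = env.step(actions)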
| 3,394 | Python | 38.476744 | 107 | 0.629641 |
Toni-SM/skrl/skrl/envs/wrappers/torch/robosuite_envs.py | from typing import Any, Optional, Tuple
import collections
import gym
import numpy as np
import torch
from skrl.envs.wrappers.torch.base import Wrapper
class RobosuiteWrapper(Wrapper):
def __init__(self, env: Any) -> None:
"""Robosuite environment wrapper
:param env: The environment to wrap
:type env: Any supported robosuite environment
"""
super().__init__(env)
# observation and action spaces
self._observation_space = self._spec_to_space(self._env.observation_spec())
self._action_space = self._spec_to_space(self._env.action_spec)
@property
def state_space(self) -> gym.Space:
"""State space
An alias for the ``observation_space`` property
"""
return self._observation_space
@property
def observation_space(self) -> gym.Space:
"""Observation space
"""
return self._observation_space
@property
def action_space(self) -> gym.Space:
"""Action space
"""
return self._action_space
def _spec_to_space(self, spec: Any) -> gym.Space:
"""Convert the robosuite spec to a Gym space
:param spec: The robosuite spec to convert
:type spec: Any supported robosuite spec
:raises: ValueError if the spec type is not supported
:return: The Gym space
:rtype: gym.Space
"""
if type(spec) is tuple:
return gym.spaces.Box(shape=spec[0].shape,
dtype=np.float32,
low=spec[0],
high=spec[1])
elif isinstance(spec, np.ndarray):
return gym.spaces.Box(shape=spec.shape,
dtype=np.float32,
low=np.full(spec.shape, float("-inf")),
high=np.full(spec.shape, float("inf")))
elif isinstance(spec, collections.OrderedDict):
return gym.spaces.Dict({k: self._spec_to_space(v) for k, v in spec.items()})
else:
raise ValueError(f"Spec type {type(spec)} not supported. Please report this issue")
def _observation_to_tensor(self, observation: Any, spec: Optional[Any] = None) -> torch.Tensor:
"""Convert the observation to a flat tensor
:param observation: The observation to convert to a tensor
:type observation: Any supported observation
:raises: ValueError if the observation spec type is not supported
:return: The observation as a flat tensor
:rtype: torch.Tensor
"""
spec = spec if spec is not None else self._env.observation_spec()
if isinstance(spec, np.ndarray):
return torch.tensor(observation, device=self.device, dtype=torch.float32).reshape(self.num_envs, -1)
elif isinstance(spec, collections.OrderedDict):
return torch.cat([self._observation_to_tensor(observation[k], spec[k]) \
for k in sorted(spec.keys())], dim=-1).reshape(self.num_envs, -1)
else:
raise ValueError(f"Observation spec type {type(spec)} not supported. Please report this issue")
def _tensor_to_action(self, actions: torch.Tensor) -> Any:
"""Convert the action to the robosuite expected format
:param actions: The actions to perform
:type actions: torch.Tensor
:raise ValueError: If the action space type is not supported
:return: The action in the robosuite expected format
:rtype: Any supported robosuite action
"""
spec = self._env.action_spec
if type(spec) is tuple:
return np.array(actions.cpu().numpy(), dtype=np.float32).reshape(spec[0].shape)
else:
raise ValueError(f"Action spec type {type(spec)} not supported. Please report this issue")
def step(self, actions: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, Any]:
"""Perform a step in the environment
:param actions: The actions to perform
:type actions: torch.Tensor
:return: Observation, reward, terminated, truncated, info
:rtype: tuple of torch.Tensor and any other info
"""
observation, reward, terminated, info = self._env.step(self._tensor_to_action(actions))
truncated = False
info = {}
# convert response to torch
return self._observation_to_tensor(observation), \
torch.tensor(reward, device=self.device, dtype=torch.float32).view(self.num_envs, -1), \
torch.tensor(terminated, device=self.device, dtype=torch.bool).view(self.num_envs, -1), \
torch.tensor(truncated, device=self.device, dtype=torch.bool).view(self.num_envs, -1), \
info
def reset(self) -> Tuple[torch.Tensor, Any]:
"""Reset the environment
        :return: Observation, info
        :rtype: torch.Tensor and any other info
"""
observation = self._env.reset()
return self._observation_to_tensor(observation), {}
def render(self, *args, **kwargs) -> None:
"""Render the environment
"""
self._env.render(*args, **kwargs)
def close(self) -> None:
"""Close the environment
"""
self._env.close()
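# --- Hedged usage sketch (added for illustration; not part of the original file) ---
# Assumes robosuite is installed; the "Lift" task with a "Panda" robot is only an
# illustrative configuration.
if __name__ == "__main__":
    import robosuite
    env = RobosuiteWrapper(robosuite.make("Lift",
                                          robots="Panda",
                                          has_renderer=False,
                                          has_offscreen_renderer=False,
                                          use_camera_obs=False))
    observation, info = env.reset()
    # random action sampled from the Box space built from the robosuite action spec
    action = torch.tensor(env.action_space.sample(), device=env.device).view(env.num_envs, -1)
    observation, reward, terminated, truncated, info = env.step(action)
    env.close()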
| 5,343 | Python | 35.108108 | 112 | 0.600786 |
Toni-SM/skrl/skrl/envs/wrappers/torch/base.py | from typing import Any, Mapping, Sequence, Tuple
import gym
import torch
class Wrapper(object):
def __init__(self, env: Any) -> None:
"""Base wrapper class for RL environments
:param env: The environment to wrap
:type env: Any supported RL environment
"""
self._env = env
# device (faster than @property)
if hasattr(self._env, "device"):
self.device = torch.device(self._env.device)
else:
self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# spaces
try:
self._action_space = self._env.single_action_space
self._observation_space = self._env.single_observation_space
except AttributeError:
self._action_space = self._env.action_space
self._observation_space = self._env.observation_space
self._state_space = self._env.state_space if hasattr(self._env, "state_space") else self._observation_space
def __getattr__(self, key: str) -> Any:
"""Get an attribute from the wrapped environment
:param key: The attribute name
:type key: str
:raises AttributeError: If the attribute does not exist
:return: The attribute value
:rtype: Any
"""
if hasattr(self._env, key):
return getattr(self._env, key)
raise AttributeError(f"Wrapped environment ({self._env.__class__.__name__}) does not have attribute '{key}'")
def reset(self) -> Tuple[torch.Tensor, Any]:
"""Reset the environment
:raises NotImplementedError: Not implemented
:return: Observation, info
:rtype: torch.Tensor and any other info
"""
raise NotImplementedError
def step(self, actions: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, Any]:
"""Perform a step in the environment
:param actions: The actions to perform
:type actions: torch.Tensor
:raises NotImplementedError: Not implemented
:return: Observation, reward, terminated, truncated, info
:rtype: tuple of torch.Tensor and any other info
"""
raise NotImplementedError
def render(self, *args, **kwargs) -> None:
"""Render the environment
"""
pass
def close(self) -> None:
"""Close the environment
"""
pass
@property
def num_envs(self) -> int:
"""Number of environments
If the wrapped environment does not have the ``num_envs`` property, it will be set to 1
"""
return self._env.num_envs if hasattr(self._env, "num_envs") else 1
@property
def num_agents(self) -> int:
"""Number of agents
If the wrapped environment does not have the ``num_agents`` property, it will be set to 1
"""
return self._env.num_agents if hasattr(self._env, "num_agents") else 1
@property
def state_space(self) -> gym.Space:
"""State space
If the wrapped environment does not have the ``state_space`` property,
the value of the ``observation_space`` property will be used
"""
return self._state_space
@property
def observation_space(self) -> gym.Space:
"""Observation space
"""
return self._observation_space
@property
def action_space(self) -> gym.Space:
"""Action space
"""
return self._action_space
class MultiAgentEnvWrapper(object):
def __init__(self, env: Any) -> None:
"""Base wrapper class for multi-agent environments
:param env: The multi-agent environment to wrap
:type env: Any supported multi-agent environment
"""
self._env = env
# device (faster than @property)
if hasattr(self._env, "device"):
self.device = torch.device(self._env.device)
else:
self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
self.possible_agents = []
def __getattr__(self, key: str) -> Any:
"""Get an attribute from the wrapped environment
:param key: The attribute name
:type key: str
:raises AttributeError: If the attribute does not exist
:return: The attribute value
:rtype: Any
"""
if hasattr(self._env, key):
return getattr(self._env, key)
raise AttributeError(f"Wrapped environment ({self._env.__class__.__name__}) does not have attribute '{key}'")
def reset(self) -> Tuple[Mapping[str, torch.Tensor], Mapping[str, Any]]:
"""Reset the environment
:raises NotImplementedError: Not implemented
:return: Observation, info
:rtype: tuple of dictionaries of torch.Tensor and any other info
"""
raise NotImplementedError
def step(self, actions: Mapping[str, torch.Tensor]) -> \
Tuple[Mapping[str, torch.Tensor], Mapping[str, torch.Tensor],
Mapping[str, torch.Tensor], Mapping[str, torch.Tensor], Mapping[str, Any]]:
"""Perform a step in the environment
:param actions: The actions to perform
:type actions: dictionary of torch.Tensor
:raises NotImplementedError: Not implemented
:return: Observation, reward, terminated, truncated, info
:rtype: tuple of dictionaries of torch.Tensor and any other info
"""
raise NotImplementedError
def render(self, *args, **kwargs) -> None:
"""Render the environment
"""
pass
def close(self) -> None:
"""Close the environment
"""
pass
@property
def num_envs(self) -> int:
"""Number of environments
If the wrapped environment does not have the ``num_envs`` property, it will be set to 1
"""
return self._env.num_envs if hasattr(self._env, "num_envs") else 1
@property
def num_agents(self) -> int:
"""Number of agents
If the wrapped environment does not have the ``num_agents`` property, it will be set to 1
"""
return self._env.num_agents if hasattr(self._env, "num_agents") else 1
@property
def agents(self) -> Sequence[str]:
"""Names of all current agents
These may be changed as an environment progresses (i.e. agents can be added or removed)
"""
raise NotImplementedError
@property
def state_spaces(self) -> Mapping[str, gym.Space]:
"""State spaces
An alias for the ``observation_spaces`` property
"""
return self.observation_spaces
@property
def observation_spaces(self) -> Mapping[str, gym.Space]:
"""Observation spaces
"""
raise NotImplementedError
@property
def action_spaces(self) -> Mapping[str, gym.Space]:
"""Action spaces
"""
raise NotImplementedError
@property
def shared_state_spaces(self) -> Mapping[str, gym.Space]:
"""Shared state spaces
An alias for the ``shared_observation_spaces`` property
"""
return self.shared_observation_spaces
@property
def shared_observation_spaces(self) -> Mapping[str, gym.Space]:
"""Shared observation spaces
"""
raise NotImplementedError
def state_space(self, agent: str) -> gym.Space:
"""State space
:param agent: Name of the agent
:type agent: str
:return: The state space for the specified agent
:rtype: gym.Space
"""
return self.state_spaces[agent]
def observation_space(self, agent: str) -> gym.Space:
"""Observation space
:param agent: Name of the agent
:type agent: str
:return: The observation space for the specified agent
:rtype: gym.Space
"""
return self.observation_spaces[agent]
def action_space(self, agent: str) -> gym.Space:
"""Action space
:param agent: Name of the agent
:type agent: str
:return: The action space for the specified agent
:rtype: gym.Space
"""
return self.action_spaces[agent]
def shared_state_space(self, agent: str) -> gym.Space:
"""Shared state space
:param agent: Name of the agent
:type agent: str
:return: The shared state space for the specified agent
:rtype: gym.Space
"""
return self.shared_state_spaces[agent]
def shared_observation_space(self, agent: str) -> gym.Space:
"""Shared observation space
:param agent: Name of the agent
:type agent: str
:return: The shared observation space for the specified agent
:rtype: gym.Space
"""
return self.shared_observation_spaces[agent]
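# --- Hedged usage sketch (added for illustration; not part of the original file) ---
# Minimal illustration of how a concrete single-agent wrapper specializes this base
# class: only ``reset`` and ``step`` need to be implemented; spaces and device are
# resolved by ``Wrapper.__init__`` from the wrapped environment.
class _ZeroWrapper(Wrapper):
    """Toy wrapper that ignores the wrapped environment's dynamics (illustrative only)"""
    def reset(self) -> Tuple[torch.Tensor, Any]:
        return torch.zeros((self.num_envs, 1), device=self.device), {}

    def step(self, actions: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, Any]:
        observation = torch.zeros((self.num_envs, 1), device=self.device)
        reward = torch.zeros((self.num_envs, 1), device=self.device)
        terminated = torch.zeros((self.num_envs, 1), dtype=torch.bool, device=self.device)
        return observation, reward, terminated, torch.zeros_like(terminated), {}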
| 8,836 | Python | 28.85473 | 117 | 0.601517 |
Toni-SM/skrl/skrl/envs/wrappers/torch/__init__.py | from typing import Any, Union
import gym
import gymnasium
from skrl import logger
from skrl.envs.wrappers.torch.base import MultiAgentEnvWrapper, Wrapper
from skrl.envs.wrappers.torch.bidexhands_envs import BiDexHandsWrapper
from skrl.envs.wrappers.torch.deepmind_envs import DeepMindWrapper
from skrl.envs.wrappers.torch.gym_envs import GymWrapper
from skrl.envs.wrappers.torch.gymnasium_envs import GymnasiumWrapper
from skrl.envs.wrappers.torch.isaac_orbit_envs import IsaacOrbitWrapper
from skrl.envs.wrappers.torch.isaacgym_envs import IsaacGymPreview2Wrapper, IsaacGymPreview3Wrapper
from skrl.envs.wrappers.torch.omniverse_isaacgym_envs import OmniverseIsaacGymWrapper
from skrl.envs.wrappers.torch.pettingzoo_envs import PettingZooWrapper
from skrl.envs.wrappers.torch.robosuite_envs import RobosuiteWrapper
__all__ = ["wrap_env", "Wrapper", "MultiAgentEnvWrapper"]
def wrap_env(env: Any, wrapper: str = "auto", verbose: bool = True) -> Union[Wrapper, MultiAgentEnvWrapper]:
"""Wrap an environment to use a common interface
Example::
>>> from skrl.envs.wrappers.torch import wrap_env
>>>
>>> # assuming that there is an environment called "env"
>>> env = wrap_env(env)
:param env: The environment to be wrapped
:type env: gym.Env, gymnasium.Env, dm_env.Environment or VecTask
:param wrapper: The type of wrapper to use (default: ``"auto"``).
If ``"auto"``, the wrapper will be automatically selected based on the environment class.
The supported wrappers are described in the following table:
+--------------------+-------------------------+
|Environment |Wrapper tag |
+====================+=========================+
|OpenAI Gym |``"gym"`` |
+--------------------+-------------------------+
|Gymnasium |``"gymnasium"`` |
+--------------------+-------------------------+
|Petting Zoo |``"pettingzoo"`` |
+--------------------+-------------------------+
|DeepMind |``"dm"`` |
+--------------------+-------------------------+
|Robosuite |``"robosuite"`` |
+--------------------+-------------------------+
|Bi-DexHands |``"bidexhands"`` |
+--------------------+-------------------------+
|Isaac Gym preview 2 |``"isaacgym-preview2"`` |
+--------------------+-------------------------+
|Isaac Gym preview 3 |``"isaacgym-preview3"`` |
+--------------------+-------------------------+
|Isaac Gym preview 4 |``"isaacgym-preview4"`` |
+--------------------+-------------------------+
|Omniverse Isaac Gym |``"omniverse-isaacgym"`` |
+--------------------+-------------------------+
|Isaac Sim (orbit) |``"isaac-orbit"`` |
+--------------------+-------------------------+
:type wrapper: str, optional
:param verbose: Whether to print the wrapper type (default: ``True``)
:type verbose: bool, optional
:raises ValueError: Unknown wrapper type
:return: Wrapped environment
:rtype: Wrapper or MultiAgentEnvWrapper
"""
if verbose:
logger.info("Environment class: {}".format(", ".join([str(base).replace("<class '", "").replace("'>", "") \
for base in env.__class__.__bases__])))
if wrapper == "auto":
base_classes = [str(base) for base in env.__class__.__bases__]
if "<class 'omni.isaac.gym.vec_env.vec_env_base.VecEnvBase'>" in base_classes or \
"<class 'omni.isaac.gym.vec_env.vec_env_mt.VecEnvMT'>" in base_classes:
if verbose:
logger.info("Environment wrapper: Omniverse Isaac Gym")
return OmniverseIsaacGymWrapper(env)
elif isinstance(env, gym.core.Env) or isinstance(env, gym.core.Wrapper):
# isaac-orbit
if hasattr(env, "sim") and hasattr(env, "env_ns"):
if verbose:
logger.info("Environment wrapper: Isaac Orbit")
return IsaacOrbitWrapper(env)
# gym
if verbose:
logger.info("Environment wrapper: Gym")
return GymWrapper(env)
elif isinstance(env, gymnasium.core.Env) or isinstance(env, gymnasium.core.Wrapper):
if verbose:
logger.info("Environment wrapper: Gymnasium")
return GymnasiumWrapper(env)
elif "<class 'pettingzoo.utils.env" in base_classes[0] or "<class 'pettingzoo.utils.wrappers" in base_classes[0]:
if verbose:
logger.info("Environment wrapper: Petting Zoo")
return PettingZooWrapper(env)
elif "<class 'dm_env._environment.Environment'>" in base_classes:
if verbose:
logger.info("Environment wrapper: DeepMind")
return DeepMindWrapper(env)
elif "<class 'robosuite.environments." in base_classes[0]:
if verbose:
logger.info("Environment wrapper: Robosuite")
return RobosuiteWrapper(env)
elif "<class 'rlgpu.tasks.base.vec_task.VecTask'>" in base_classes:
if verbose:
logger.info("Environment wrapper: Isaac Gym (preview 2)")
return IsaacGymPreview2Wrapper(env)
if verbose:
logger.info("Environment wrapper: Isaac Gym (preview 3/4)")
return IsaacGymPreview3Wrapper(env) # preview 4 is the same as 3
elif wrapper == "gym":
if verbose:
logger.info("Environment wrapper: Gym")
return GymWrapper(env)
elif wrapper == "gymnasium":
if verbose:
logger.info("Environment wrapper: gymnasium")
return GymnasiumWrapper(env)
elif wrapper == "pettingzoo":
if verbose:
logger.info("Environment wrapper: Petting Zoo")
return PettingZooWrapper(env)
elif wrapper == "dm":
if verbose:
logger.info("Environment wrapper: DeepMind")
return DeepMindWrapper(env)
elif wrapper == "robosuite":
if verbose:
logger.info("Environment wrapper: Robosuite")
return RobosuiteWrapper(env)
elif wrapper == "bidexhands":
if verbose:
logger.info("Environment wrapper: Bi-DexHands")
return BiDexHandsWrapper(env)
elif wrapper == "isaacgym-preview2":
if verbose:
logger.info("Environment wrapper: Isaac Gym (preview 2)")
return IsaacGymPreview2Wrapper(env)
elif wrapper == "isaacgym-preview3":
if verbose:
logger.info("Environment wrapper: Isaac Gym (preview 3)")
return IsaacGymPreview3Wrapper(env)
elif wrapper == "isaacgym-preview4":
if verbose:
logger.info("Environment wrapper: Isaac Gym (preview 4)")
return IsaacGymPreview3Wrapper(env) # preview 4 is the same as 3
elif wrapper == "omniverse-isaacgym":
if verbose:
logger.info("Environment wrapper: Omniverse Isaac Gym")
return OmniverseIsaacGymWrapper(env)
elif wrapper == "isaac-orbit":
if verbose:
logger.info("Environment wrapper: Isaac Orbit")
return IsaacOrbitWrapper(env)
else:
raise ValueError(f"Unknown wrapper type: {wrapper}")
| 7,723 | Python | 46.975155 | 121 | 0.537356 |
Toni-SM/skrl/skrl/envs/wrappers/torch/isaacgym_envs.py | from typing import Any, Tuple
import torch
from skrl.envs.wrappers.torch.base import Wrapper
class IsaacGymPreview2Wrapper(Wrapper):
def __init__(self, env: Any) -> None:
"""Isaac Gym environment (preview 2) wrapper
:param env: The environment to wrap
:type env: Any supported Isaac Gym environment (preview 2) environment
"""
super().__init__(env)
self._reset_once = True
self._obs_buf = None
def step(self, actions: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, Any]:
"""Perform a step in the environment
:param actions: The actions to perform
:type actions: torch.Tensor
:return: Observation, reward, terminated, truncated, info
:rtype: tuple of torch.Tensor and any other info
"""
self._obs_buf, reward, terminated, info = self._env.step(actions)
truncated = info["time_outs"] if "time_outs" in info else torch.zeros_like(terminated)
return self._obs_buf, reward.view(-1, 1), terminated.view(-1, 1), truncated.view(-1, 1), info
def reset(self) -> Tuple[torch.Tensor, Any]:
"""Reset the environment
:return: Observation, info
:rtype: torch.Tensor and any other info
"""
if self._reset_once:
self._obs_buf = self._env.reset()
self._reset_once = False
return self._obs_buf, {}
def render(self, *args, **kwargs) -> None:
"""Render the environment
"""
pass
def close(self) -> None:
"""Close the environment
"""
pass
class IsaacGymPreview3Wrapper(Wrapper):
def __init__(self, env: Any) -> None:
"""Isaac Gym environment (preview 3) wrapper
:param env: The environment to wrap
:type env: Any supported Isaac Gym environment (preview 3) environment
"""
super().__init__(env)
self._reset_once = True
self._obs_dict = None
def step(self, actions: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, Any]:
"""Perform a step in the environment
:param actions: The actions to perform
:type actions: torch.Tensor
:return: Observation, reward, terminated, truncated, info
:rtype: tuple of torch.Tensor and any other info
"""
self._obs_dict, reward, terminated, info = self._env.step(actions)
truncated = info["time_outs"] if "time_outs" in info else torch.zeros_like(terminated)
return self._obs_dict["obs"], reward.view(-1, 1), terminated.view(-1, 1), truncated.view(-1, 1), info
def reset(self) -> Tuple[torch.Tensor, Any]:
"""Reset the environment
:return: Observation, info
:rtype: torch.Tensor and any other info
"""
if self._reset_once:
self._obs_dict = self._env.reset()
self._reset_once = False
return self._obs_dict["obs"], {}
def render(self, *args, **kwargs) -> None:
"""Render the environment
"""
pass
def close(self) -> None:
"""Close the environment
"""
pass
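# --- Hedged usage sketch (added for illustration; not part of the original file) ---
# Isaac Gym preview tasks are created by isaacgymenvs' own factory/launch scripts,
# so this sketch assumes such a task instance is already available.
def _example_rollout(isaacgym_task) -> None:
    env = IsaacGymPreview3Wrapper(isaacgym_task)
    observation, info = env.reset()
    # random continuous actions for all parallel environments
    actions = 2 * torch.rand((env.num_envs, env.action_space.shape[0]), device=env.device) - 1
    observation, reward, terminated, truncated, info = env.step(actions)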
| 3,182 | Python | 30.83 | 112 | 0.595223 |
Toni-SM/skrl/skrl/envs/wrappers/torch/gymnasium_envs.py | from typing import Any, Optional, Tuple
import gymnasium
import numpy as np
import torch
from skrl import logger
from skrl.envs.wrappers.torch.base import Wrapper
class GymnasiumWrapper(Wrapper):
def __init__(self, env: Any) -> None:
"""Gymnasium environment wrapper
:param env: The environment to wrap
:type env: Any supported Gymnasium environment
"""
super().__init__(env)
self._vectorized = False
try:
if isinstance(env, gymnasium.vector.SyncVectorEnv) or isinstance(env, gymnasium.vector.AsyncVectorEnv):
self._vectorized = True
self._reset_once = True
self._obs_tensor = None
self._info_dict = None
except Exception as e:
logger.warning(f"Failed to check for a vectorized environment: {e}")
@property
def state_space(self) -> gymnasium.Space:
"""State space
An alias for the ``observation_space`` property
"""
if self._vectorized:
return self._env.single_observation_space
return self._env.observation_space
@property
def observation_space(self) -> gymnasium.Space:
"""Observation space
"""
if self._vectorized:
return self._env.single_observation_space
return self._env.observation_space
@property
def action_space(self) -> gymnasium.Space:
"""Action space
"""
if self._vectorized:
return self._env.single_action_space
return self._env.action_space
def _observation_to_tensor(self, observation: Any, space: Optional[gymnasium.Space] = None) -> torch.Tensor:
"""Convert the Gymnasium observation to a flat tensor
:param observation: The Gymnasium observation to convert to a tensor
:type observation: Any supported Gymnasium observation space
:raises: ValueError if the observation space type is not supported
:return: The observation as a flat tensor
:rtype: torch.Tensor
"""
observation_space = self._env.observation_space if self._vectorized else self.observation_space
space = space if space is not None else observation_space
if self._vectorized and isinstance(space, gymnasium.spaces.MultiDiscrete):
return torch.tensor(observation, device=self.device, dtype=torch.int64).view(self.num_envs, -1)
elif isinstance(observation, int):
return torch.tensor(observation, device=self.device, dtype=torch.int64).view(self.num_envs, -1)
elif isinstance(observation, np.ndarray):
return torch.tensor(observation, device=self.device, dtype=torch.float32).view(self.num_envs, -1)
elif isinstance(space, gymnasium.spaces.Discrete):
return torch.tensor(observation, device=self.device, dtype=torch.float32).view(self.num_envs, -1)
elif isinstance(space, gymnasium.spaces.Box):
return torch.tensor(observation, device=self.device, dtype=torch.float32).view(self.num_envs, -1)
elif isinstance(space, gymnasium.spaces.Dict):
tmp = torch.cat([self._observation_to_tensor(observation[k], space[k]) \
for k in sorted(space.keys())], dim=-1).view(self.num_envs, -1)
return tmp
else:
raise ValueError(f"Observation space type {type(space)} not supported. Please report this issue")
def _tensor_to_action(self, actions: torch.Tensor) -> Any:
"""Convert the action to the Gymnasium expected format
:param actions: The actions to perform
:type actions: torch.Tensor
:raise ValueError: If the action space type is not supported
:return: The action in the Gymnasium format
:rtype: Any supported Gymnasium action space
"""
space = self._env.action_space if self._vectorized else self.action_space
if self._vectorized:
if isinstance(space, gymnasium.spaces.MultiDiscrete):
return np.array(actions.cpu().numpy(), dtype=space.dtype).reshape(space.shape)
elif isinstance(space, gymnasium.spaces.Tuple):
if isinstance(space[0], gymnasium.spaces.Box):
return np.array(actions.cpu().numpy(), dtype=space[0].dtype).reshape(space.shape)
elif isinstance(space[0], gymnasium.spaces.Discrete):
return np.array(actions.cpu().numpy(), dtype=space[0].dtype).reshape(-1)
if isinstance(space, gymnasium.spaces.Discrete):
return actions.item()
elif isinstance(space, gymnasium.spaces.MultiDiscrete):
return np.array(actions.cpu().numpy(), dtype=space.dtype).reshape(space.shape)
elif isinstance(space, gymnasium.spaces.Box):
return np.array(actions.cpu().numpy(), dtype=space.dtype).reshape(space.shape)
raise ValueError(f"Action space type {type(space)} not supported. Please report this issue")
def step(self, actions: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, Any]:
"""Perform a step in the environment
:param actions: The actions to perform
:type actions: torch.Tensor
:return: Observation, reward, terminated, truncated, info
:rtype: tuple of torch.Tensor and any other info
"""
observation, reward, terminated, truncated, info = self._env.step(self._tensor_to_action(actions))
# convert response to torch
observation = self._observation_to_tensor(observation)
reward = torch.tensor(reward, device=self.device, dtype=torch.float32).view(self.num_envs, -1)
terminated = torch.tensor(terminated, device=self.device, dtype=torch.bool).view(self.num_envs, -1)
truncated = torch.tensor(truncated, device=self.device, dtype=torch.bool).view(self.num_envs, -1)
# save observation and info for vectorized envs
if self._vectorized:
self._obs_tensor = observation
self._info_dict = info
return observation, reward, terminated, truncated, info
def reset(self) -> Tuple[torch.Tensor, Any]:
"""Reset the environment
:return: Observation, info
:rtype: torch.Tensor and any other info
"""
# handle vectorized envs
if self._vectorized:
if not self._reset_once:
return self._obs_tensor, self._info_dict
self._reset_once = False
# reset the env/envs
observation, info = self._env.reset()
return self._observation_to_tensor(observation), info
def render(self, *args, **kwargs) -> None:
"""Render the environment
"""
self._env.render(*args, **kwargs)
def close(self) -> None:
"""Close the environment
"""
self._env.close()
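# --- Hedged usage sketch (added for illustration; not part of the original file) ---
# Same usage pattern as the Gym wrapper, but for Gymnasium environments.
# "CartPole-v1" is only an illustrative environment id.
if __name__ == "__main__":
    env = GymnasiumWrapper(gymnasium.make("CartPole-v1"))
    observation, info = env.reset()
    for _ in range(3):
        action = torch.tensor(env.action_space.sample(), device=env.device).view(env.num_envs, -1)
        observation, reward, terminated, truncated, info = env.step(action)
        if terminated.any() or truncated.any():
            observation, info = env.reset()
    env.close()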
| 6,882 | Python | 40.463855 | 115 | 0.639494 |
Toni-SM/skrl/skrl/envs/wrappers/torch/pettingzoo_envs.py | from typing import Any, Mapping, Sequence, Tuple
import collections
import gymnasium
import numpy as np
import torch
from skrl.envs.wrappers.torch.base import MultiAgentEnvWrapper
class PettingZooWrapper(MultiAgentEnvWrapper):
def __init__(self, env: Any) -> None:
"""PettingZoo (parallel) environment wrapper
:param env: The environment to wrap
:type env: Any supported PettingZoo (parallel) environment
"""
super().__init__(env)
self.possible_agents = self._env.possible_agents
self._shared_observation_space = self._compute_shared_observation_space(self._env.observation_spaces)
def _compute_shared_observation_space(self, observation_spaces):
space = next(iter(observation_spaces.values()))
shape = (len(self.possible_agents),) + space.shape
return gymnasium.spaces.Box(low=np.stack([space.low for _ in self.possible_agents], axis=0),
high=np.stack([space.high for _ in self.possible_agents], axis=0),
dtype=space.dtype,
shape=shape)
@property
def num_agents(self) -> int:
"""Number of agents
"""
return len(self.possible_agents)
@property
def agents(self) -> Sequence[str]:
"""Names of all current agents
These may be changed as an environment progresses (i.e. agents can be added or removed)
"""
return self._env.agents
@property
def observation_spaces(self) -> Mapping[str, gymnasium.Space]:
"""Observation spaces
"""
return {uid: self._env.observation_space(uid) for uid in self.possible_agents}
@property
def action_spaces(self) -> Mapping[str, gymnasium.Space]:
"""Action spaces
"""
return {uid: self._env.action_space(uid) for uid in self.possible_agents}
@property
def shared_observation_spaces(self) -> Mapping[str, gymnasium.Space]:
"""Shared observation spaces
"""
return {uid: self._shared_observation_space for uid in self.possible_agents}
def _observation_to_tensor(self, observation: Any, space: gymnasium.Space) -> torch.Tensor:
"""Convert the Gymnasium observation to a flat tensor
:param observation: The Gymnasium observation to convert to a tensor
:type observation: Any supported Gymnasium observation space
:raises: ValueError if the observation space type is not supported
:return: The observation as a flat tensor
:rtype: torch.Tensor
"""
if isinstance(observation, int):
return torch.tensor(observation, device=self.device, dtype=torch.int64).view(self.num_envs, -1)
elif isinstance(observation, np.ndarray):
return torch.tensor(observation, device=self.device, dtype=torch.float32).view(self.num_envs, -1)
elif isinstance(space, gymnasium.spaces.Discrete):
return torch.tensor(observation, device=self.device, dtype=torch.float32).view(self.num_envs, -1)
elif isinstance(space, gymnasium.spaces.Box):
return torch.tensor(observation, device=self.device, dtype=torch.float32).view(self.num_envs, -1)
elif isinstance(space, gymnasium.spaces.Dict):
tmp = torch.cat([self._observation_to_tensor(observation[k], space[k]) \
for k in sorted(space.keys())], dim=-1).view(self.num_envs, -1)
return tmp
else:
raise ValueError(f"Observation space type {type(space)} not supported. Please report this issue")
def _tensor_to_action(self, actions: torch.Tensor, space: gymnasium.Space) -> Any:
"""Convert the action to the Gymnasium expected format
:param actions: The actions to perform
:type actions: torch.Tensor
:raise ValueError: If the action space type is not supported
:return: The action in the Gymnasium format
:rtype: Any supported Gymnasium action space
"""
if isinstance(space, gymnasium.spaces.Discrete):
return actions.item()
elif isinstance(space, gymnasium.spaces.Box):
return np.array(actions.cpu().numpy(), dtype=space.dtype).reshape(space.shape)
raise ValueError(f"Action space type {type(space)} not supported. Please report this issue")
def step(self, actions: Mapping[str, torch.Tensor]) -> \
Tuple[Mapping[str, torch.Tensor], Mapping[str, torch.Tensor],
Mapping[str, torch.Tensor], Mapping[str, torch.Tensor], Mapping[str, Any]]:
"""Perform a step in the environment
:param actions: The actions to perform
:type actions: dictionary of torch.Tensor
:return: Observation, reward, terminated, truncated, info
:rtype: tuple of dictionaries torch.Tensor and any other info
"""
actions = {uid: self._tensor_to_action(action, self._env.action_space(uid)) for uid, action in actions.items()}
observations, rewards, terminated, truncated, infos = self._env.step(actions)
# build shared observation
shared_observations = np.stack([observations[uid] for uid in self.possible_agents], axis=0)
shared_observations = self._observation_to_tensor(shared_observations, self._shared_observation_space)
infos["shared_states"] = {uid: shared_observations for uid in self.possible_agents}
# convert response to torch
observations = {uid: self._observation_to_tensor(value, self._env.observation_space(uid)) for uid, value in observations.items()}
rewards = {uid: torch.tensor(value, device=self.device, dtype=torch.float32).view(self.num_envs, -1) for uid, value in rewards.items()}
terminated = {uid: torch.tensor(value, device=self.device, dtype=torch.bool).view(self.num_envs, -1) for uid, value in terminated.items()}
truncated = {uid: torch.tensor(value, device=self.device, dtype=torch.bool).view(self.num_envs, -1) for uid, value in truncated.items()}
return observations, rewards, terminated, truncated, infos
def reset(self) -> Tuple[Mapping[str, torch.Tensor], Mapping[str, Any]]:
"""Reset the environment
:return: Observation, info
:rtype: tuple of dictionaries of torch.Tensor and any other info
"""
outputs = self._env.reset()
if isinstance(outputs, collections.abc.Mapping):
observations = outputs
infos = {uid: {} for uid in self.possible_agents}
else:
observations, infos = outputs
# build shared observation
shared_observations = np.stack([observations[uid] for uid in self.possible_agents], axis=0)
shared_observations = self._observation_to_tensor(shared_observations, self._shared_observation_space)
infos["shared_states"] = {uid: shared_observations for uid in self.possible_agents}
# convert response to torch
observations = {uid: self._observation_to_tensor(observation, self._env.observation_space(uid)) for uid, observation in observations.items()}
return observations, infos
def render(self, *args, **kwargs) -> None:
"""Render the environment
"""
self._env.render(*args, **kwargs)
def close(self) -> None:
"""Close the environment
"""
self._env.close()
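# --- Hedged usage sketch (added for illustration; not part of the original file) ---
# Assumes PettingZoo (with the MPE extras) is installed; simple_spread_v3 is only an
# illustrative parallel environment.
if __name__ == "__main__":
    from pettingzoo.mpe import simple_spread_v3
    env = PettingZooWrapper(simple_spread_v3.parallel_env())
    observations, infos = env.reset()
    # one random action per agent, shaped (num_envs, -1)
    actions = {uid: torch.tensor(env.action_space(uid).sample(), device=env.device).view(env.num_envs, -1)
               for uid in env.agents}
    observations, rewards, terminated, truncated, infos = env.step(actions)
    env.close()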
| 7,391 | Python | 44.07317 | 149 | 0.652686 |
Toni-SM/skrl/skrl/envs/wrappers/torch/omniverse_isaacgym_envs.py | from typing import Any, Optional, Tuple
import torch
from skrl.envs.wrappers.torch.base import Wrapper
class OmniverseIsaacGymWrapper(Wrapper):
def __init__(self, env: Any) -> None:
"""Omniverse Isaac Gym environment wrapper
:param env: The environment to wrap
:type env: Any supported Omniverse Isaac Gym environment
"""
super().__init__(env)
self._reset_once = True
self._obs_dict = None
def run(self, trainer: Optional["omni.isaac.gym.vec_env.vec_env_mt.TrainerMT"] = None) -> None:
"""Run the simulation in the main thread
This method is valid only for the Omniverse Isaac Gym multi-threaded environments
:param trainer: Trainer which should implement a ``run`` method that initiates the RL loop on a new thread
:type trainer: omni.isaac.gym.vec_env.vec_env_mt.TrainerMT, optional
"""
self._env.run(trainer)
def step(self, actions: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, Any]:
"""Perform a step in the environment
:param actions: The actions to perform
:type actions: torch.Tensor
:return: Observation, reward, terminated, truncated, info
:rtype: tuple of torch.Tensor and any other info
"""
self._obs_dict, reward, terminated, info = self._env.step(actions)
truncated = info["time_outs"] if "time_outs" in info else torch.zeros_like(terminated)
return self._obs_dict["obs"], reward.view(-1, 1), terminated.view(-1, 1), truncated.view(-1, 1), info
def reset(self) -> Tuple[torch.Tensor, Any]:
"""Reset the environment
:return: Observation, info
:rtype: torch.Tensor and any other info
"""
if self._reset_once:
self._obs_dict = self._env.reset()
self._reset_once = False
return self._obs_dict["obs"], {}
def render(self, *args, **kwargs) -> None:
"""Render the environment
"""
pass
def close(self) -> None:
"""Close the environment
"""
self._env.close()
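# --- Hedged usage sketch (added for illustration; not part of the original file) ---
# Omniverse Isaac Gym environments are created by OIGE's own launch scripts, so this
# sketch assumes a VecEnvBase-like task instance is already available.
def _example_rollout(omniverse_env) -> None:
    env = OmniverseIsaacGymWrapper(omniverse_env)
    observation, info = env.reset()
    actions = 2 * torch.rand((env.num_envs, env.action_space.shape[0]), device=env.device) - 1
    observation, reward, terminated, truncated, info = env.step(actions)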
| 2,133 | Python | 32.873015 | 114 | 0.619316 |
Toni-SM/skrl/skrl/envs/wrappers/jax/gym_envs.py | from typing import Any, Optional, Tuple, Union
import gym
from packaging import version
import jax
import numpy as np
from skrl import logger
from skrl.envs.wrappers.jax.base import Wrapper
class GymWrapper(Wrapper):
def __init__(self, env: Any) -> None:
"""OpenAI Gym environment wrapper
:param env: The environment to wrap
:type env: Any supported OpenAI Gym environment
"""
super().__init__(env)
self._vectorized = False
try:
if isinstance(env, gym.vector.SyncVectorEnv) or isinstance(env, gym.vector.AsyncVectorEnv):
self._vectorized = True
self._reset_once = True
self._obs_tensor = None
self._info_dict = None
except Exception as e:
logger.warning(f"Failed to check for a vectorized environment: {e}")
self._deprecated_api = version.parse(gym.__version__) < version.parse("0.25.0")
if self._deprecated_api:
logger.warning(f"Using a deprecated version of OpenAI Gym's API: {gym.__version__}")
@property
def state_space(self) -> gym.Space:
"""State space
An alias for the ``observation_space`` property
"""
if self._vectorized:
return self._env.single_observation_space
return self._env.observation_space
@property
def observation_space(self) -> gym.Space:
"""Observation space
"""
if self._vectorized:
return self._env.single_observation_space
return self._env.observation_space
@property
def action_space(self) -> gym.Space:
"""Action space
"""
if self._vectorized:
return self._env.single_action_space
return self._env.action_space
def _observation_to_tensor(self, observation: Any, space: Optional[gym.Space] = None) -> np.ndarray:
"""Convert the OpenAI Gym observation to a flat tensor
:param observation: The OpenAI Gym observation to convert to a tensor
:type observation: Any supported OpenAI Gym observation space
:raises: ValueError if the observation space type is not supported
:return: The observation as a flat tensor
:rtype: np.ndarray
"""
observation_space = self._env.observation_space if self._vectorized else self.observation_space
space = space if space is not None else observation_space
if self._vectorized and isinstance(space, gym.spaces.MultiDiscrete):
return observation.reshape(self.num_envs, -1).astype(np.int32)
elif isinstance(observation, int):
return np.array(observation, dtype=np.int32).reshape(self.num_envs, -1)
elif isinstance(observation, np.ndarray):
return observation.reshape(self.num_envs, -1).astype(np.float32)
elif isinstance(space, gym.spaces.Discrete):
return np.array(observation, dtype=np.float32).reshape(self.num_envs, -1)
elif isinstance(space, gym.spaces.Box):
return observation.reshape(self.num_envs, -1).astype(np.float32)
elif isinstance(space, gym.spaces.Dict):
tmp = np.concatenate([self._observation_to_tensor(observation[k], space[k]) \
for k in sorted(space.keys())], axis=-1).reshape(self.num_envs, -1)
return tmp
else:
raise ValueError(f"Observation space type {type(space)} not supported. Please report this issue")
def _tensor_to_action(self, actions: np.ndarray) -> Any:
"""Convert the action to the OpenAI Gym expected format
:param actions: The actions to perform
:type actions: np.ndarray
:raise ValueError: If the action space type is not supported
:return: The action in the OpenAI Gym format
:rtype: Any supported OpenAI Gym action space
"""
space = self._env.action_space if self._vectorized else self.action_space
if self._vectorized:
if isinstance(space, gym.spaces.MultiDiscrete):
return actions.astype(space.dtype).reshape(space.shape)
elif isinstance(space, gym.spaces.Tuple):
if isinstance(space[0], gym.spaces.Box):
return actions.astype(space[0].dtype).reshape(space.shape)
elif isinstance(space[0], gym.spaces.Discrete):
return actions.astype(space[0].dtype).reshape(-1)
        if isinstance(space, gym.spaces.Discrete):
return actions.item()
elif isinstance(space, gym.spaces.MultiDiscrete):
return actions.astype(space.dtype).reshape(space.shape)
elif isinstance(space, gym.spaces.Box):
return actions.astype(space.dtype).reshape(space.shape)
raise ValueError(f"Action space type {type(space)} not supported. Please report this issue")
def step(self, actions: Union[np.ndarray, jax.Array]) -> \
Tuple[Union[np.ndarray, jax.Array], Union[np.ndarray, jax.Array],
Union[np.ndarray, jax.Array], Union[np.ndarray, jax.Array], Any]:
"""Perform a step in the environment
:param actions: The actions to perform
:type actions: np.ndarray or jax.Array
:return: Observation, reward, terminated, truncated, info
:rtype: tuple of np.ndarray or jax.Array and any other info
"""
if self._jax:
actions = jax.device_get(actions)
if self._deprecated_api:
observation, reward, terminated, info = self._env.step(self._tensor_to_action(actions))
# truncated: https://gymnasium.farama.org/tutorials/handling_time_limits
if type(info) is list:
truncated = np.array([d.get("TimeLimit.truncated", False) for d in info], dtype=terminated.dtype)
terminated *= np.logical_not(truncated)
else:
truncated = info.get("TimeLimit.truncated", False)
if truncated:
terminated = False
else:
observation, reward, terminated, truncated, info = self._env.step(self._tensor_to_action(actions))
# convert response to numpy or jax
observation = self._observation_to_tensor(observation)
reward = np.array(reward, dtype=np.float32).reshape(self.num_envs, -1)
terminated = np.array(terminated, dtype=np.int8).reshape(self.num_envs, -1)
truncated = np.array(truncated, dtype=np.int8).reshape(self.num_envs, -1)
# save observation and info for vectorized envs
if self._vectorized:
self._obs_tensor = observation
self._info_dict = info
return observation, reward, terminated, truncated, info
def reset(self) -> Tuple[Union[np.ndarray, jax.Array], Any]:
"""Reset the environment
:return: Observation, info
:rtype: np.ndarray or jax.Array and any other info
"""
# handle vectorized envs
if self._vectorized:
if not self._reset_once:
return self._obs_tensor, self._info_dict
self._reset_once = False
# reset the env/envs
if self._deprecated_api:
observation = self._env.reset()
info = {}
else:
observation, info = self._env.reset()
return self._observation_to_tensor(observation), info
def render(self, *args, **kwargs) -> None:
"""Render the environment
"""
self._env.render(*args, **kwargs)
def close(self) -> None:
"""Close the environment
"""
self._env.close()
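# --- Hedged usage sketch (added for illustration; not part of the original file) ---
# With skrl's numpy backend, actions are plain np.ndarray; with the jax backend they
# may also be jax.Array. "CartPole-v1" is only an illustrative environment id.
if __name__ == "__main__":
    env = GymWrapper(gym.make("CartPole-v1"))
    observation, info = env.reset()
    action = np.array(env.action_space.sample()).reshape(env.num_envs, -1)
    observation, reward, terminated, truncated, info = env.step(action)
    env.close()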
| 7,637 | Python | 39.2 | 113 | 0.618437 |
Toni-SM/skrl/skrl/envs/wrappers/jax/bidexhands_envs.py | from typing import Any, Mapping, Sequence, Tuple, Union
import gym
import jax
import jax.dlpack
import numpy as np
try:
import torch
import torch.utils.dlpack
except ImportError:
    pass # TODO: show warning message
from skrl.envs.wrappers.jax.base import MultiAgentEnvWrapper
def _jax2torch(array, device, from_jax=True):
return torch.utils.dlpack.from_dlpack(jax.dlpack.to_dlpack(array)) if from_jax else torch.tensor(array, device=device)
def _torch2jax(tensor, to_jax=True):
return jax.dlpack.from_dlpack(torch.utils.dlpack.to_dlpack(tensor.contiguous())) if to_jax else tensor.cpu().numpy()
class BiDexHandsWrapper(MultiAgentEnvWrapper):
def __init__(self, env: Any) -> None:
"""Bi-DexHands wrapper
:param env: The environment to wrap
:type env: Any supported Bi-DexHands environment
"""
super().__init__(env)
self._reset_once = True
self._obs_buf = None
self._shared_obs_buf = None
self.possible_agents = [f"agent_{i}" for i in range(self.num_agents)]
@property
def agents(self) -> Sequence[str]:
"""Names of all current agents
These may be changed as an environment progresses (i.e. agents can be added or removed)
"""
return self.possible_agents
@property
def observation_spaces(self) -> Mapping[str, gym.Space]:
"""Observation spaces
"""
return {uid: space for uid, space in zip(self.possible_agents, self._env.observation_space)}
@property
def action_spaces(self) -> Mapping[str, gym.Space]:
"""Action spaces
"""
return {uid: space for uid, space in zip(self.possible_agents, self._env.action_space)}
@property
def shared_observation_spaces(self) -> Mapping[str, gym.Space]:
"""Shared observation spaces
"""
return {uid: space for uid, space in zip(self.possible_agents, self._env.share_observation_space)}
def step(self, actions: Mapping[str, Union[np.ndarray, jax.Array]]) -> \
Tuple[Mapping[str, Union[np.ndarray, jax.Array]], Mapping[str, Union[np.ndarray, jax.Array]],
Mapping[str, Union[np.ndarray, jax.Array]], Mapping[str, Union[np.ndarray, jax.Array]],
Mapping[str, Any]]:
"""Perform a step in the environment
:param actions: The actions to perform
        :type actions: dict of np.ndarray or jax.Array
:return: Observation, reward, terminated, truncated, info
        :rtype: tuple of dict of np.ndarray or jax.Array and any other info
"""
actions = [_jax2torch(actions[uid], self.device, self._jax) for uid in self.possible_agents]
with torch.no_grad():
obs_buf, shared_obs_buf, reward_buf, terminated_buf, info, _ = self._env.step(actions)
obs_buf = _torch2jax(obs_buf, self._jax)
shared_obs_buf = _torch2jax(shared_obs_buf, self._jax)
reward_buf = _torch2jax(reward_buf, self._jax)
terminated_buf = _torch2jax(terminated_buf.to(dtype=torch.int8), self._jax)
self._obs_buf = {uid: obs_buf[:,i] for i, uid in enumerate(self.possible_agents)}
self._shared_obs_buf = {uid: shared_obs_buf[:,i] for i, uid in enumerate(self.possible_agents)}
reward = {uid: reward_buf[:,i].reshape(-1, 1) for i, uid in enumerate(self.possible_agents)}
terminated = {uid: terminated_buf[:,i].reshape(-1, 1) for i, uid in enumerate(self.possible_agents)}
truncated = terminated
info = {"shared_states": self._shared_obs_buf}
return self._obs_buf, reward, terminated, truncated, info
def reset(self) -> Tuple[Mapping[str, Union[np.ndarray, jax.Array]], Mapping[str, Any]]:
"""Reset the environment
:return: Observation, info
:rtype: tuple of dict of np.ndarray of jax.Array and any other info
"""
if self._reset_once:
obs_buf, shared_obs_buf, _ = self._env.reset()
obs_buf = _torch2jax(obs_buf, self._jax)
shared_obs_buf = _torch2jax(shared_obs_buf, self._jax)
self._obs_buf = {uid: obs_buf[:,i] for i, uid in enumerate(self.possible_agents)}
self._shared_obs_buf = {uid: shared_obs_buf[:,i] for i, uid in enumerate(self.possible_agents)}
self._reset_once = False
return self._obs_buf, {"shared_states": self._shared_obs_buf}
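# --- Hedged usage sketch (added for illustration; not part of the original file) ---
# Creating a Bi-DexHands task requires the bidexhands package and its own launch
# utilities; this sketch assumes such an instance is available and that skrl's numpy
# backend is active (with the jax backend, actions would be jax arrays instead).
def _example_rollout(bidexhands_env) -> None:
    env = BiDexHandsWrapper(bidexhands_env)
    observations, infos = env.reset()
    # one random continuous action per agent, shaped (num_envs, action_dim)
    actions = {uid: np.random.uniform(-1, 1,
                                      size=(env.num_envs, env.action_spaces[uid].shape[0])).astype(np.float32)
               for uid in env.agents}
    observations, rewards, terminated, truncated, infos = env.step(actions)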
| 4,383 | Python | 37.45614 | 122 | 0.635866 |
Toni-SM/skrl/skrl/envs/wrappers/jax/isaacgym_envs.py | from typing import Any, Tuple, Union
import jax
import jax.dlpack as jax_dlpack
import numpy as np
try:
import torch
import torch.utils.dlpack as torch_dlpack
except ImportError:
    pass # TODO: show warning message
from skrl import logger
from skrl.envs.wrappers.jax.base import Wrapper
# ML frameworks conversion utilities
# jaxlib.xla_extension.XlaRuntimeError: INVALID_ARGUMENT: DLPack tensor is on GPU, but no GPU backend was provided.
_CPU = jax.devices()[0].device_kind.lower() == "cpu"
if _CPU:
logger.warning("IsaacGymEnvs runs on GPU, but there is no GPU backend for JAX. JAX operations will run on CPU.")
def _jax2torch(array, device, from_jax=True):
if from_jax:
return torch_dlpack.from_dlpack(jax_dlpack.to_dlpack(array)).to(device=device)
return torch.tensor(array, device=device)
def _torch2jax(tensor, to_jax=True):
if to_jax:
return jax_dlpack.from_dlpack(torch_dlpack.to_dlpack(tensor.contiguous().cpu() if _CPU else tensor.contiguous()))
return tensor.cpu().numpy()
class IsaacGymPreview2Wrapper(Wrapper):
def __init__(self, env: Any) -> None:
"""Isaac Gym environment (preview 2) wrapper
:param env: The environment to wrap
:type env: Any supported Isaac Gym environment (preview 2) environment
"""
super().__init__(env)
self._reset_once = True
self._obs_buf = None
def step(self, actions: Union[np.ndarray, jax.Array]) -> \
Tuple[Union[np.ndarray, jax.Array], Union[np.ndarray, jax.Array],
Union[np.ndarray, jax.Array], Union[np.ndarray, jax.Array], Any]:
"""Perform a step in the environment
:param actions: The actions to perform
:type actions: np.ndarray or jax.Array
:return: Observation, reward, terminated, truncated, info
:rtype: tuple of np.ndarray or jax.Array and any other info
"""
actions = _jax2torch(actions, self._env.device, self._jax)
with torch.no_grad():
self._obs_buf, reward, terminated, info = self._env.step(actions)
terminated = terminated.to(dtype=torch.int8)
truncated = info["time_outs"].to(dtype=torch.int8) if "time_outs" in info else torch.zeros_like(terminated)
return _torch2jax(self._obs_buf, self._jax), \
_torch2jax(reward.view(-1, 1), self._jax), \
_torch2jax(terminated.view(-1, 1), self._jax), \
_torch2jax(truncated.view(-1, 1), self._jax), \
info
def reset(self) -> Tuple[Union[np.ndarray, jax.Array], Any]:
"""Reset the environment
:return: Observation, info
:rtype: np.ndarray or jax.Array and any other info
"""
if self._reset_once:
self._obs_buf = self._env.reset()
self._reset_once = False
return _torch2jax(self._obs_buf, self._jax), {}
def render(self, *args, **kwargs) -> None:
"""Render the environment
"""
pass
def close(self) -> None:
"""Close the environment
"""
pass
class IsaacGymPreview3Wrapper(Wrapper):
def __init__(self, env: Any) -> None:
"""Isaac Gym environment (preview 3) wrapper
:param env: The environment to wrap
:type env: Any supported Isaac Gym environment (preview 3) environment
"""
super().__init__(env)
self._reset_once = True
self._obs_dict = None
def step(self, actions: Union[np.ndarray, jax.Array]) ->\
Tuple[Union[np.ndarray, jax.Array], Union[np.ndarray, jax.Array],
Union[np.ndarray, jax.Array], Union[np.ndarray, jax.Array], Any]:
"""Perform a step in the environment
:param actions: The actions to perform
:type actions: np.ndarray or jax.Array
:return: Observation, reward, terminated, truncated, info
:rtype: tuple of np.ndarray or jax.Array and any other info
"""
actions = _jax2torch(actions, self._env.device, self._jax)
with torch.no_grad():
self._obs_dict, reward, terminated, info = self._env.step(actions)
terminated = terminated.to(dtype=torch.int8)
truncated = info["time_outs"].to(dtype=torch.int8) if "time_outs" in info else torch.zeros_like(terminated)
return _torch2jax(self._obs_dict["obs"], self._jax), \
_torch2jax(reward.view(-1, 1), self._jax), \
_torch2jax(terminated.view(-1, 1), self._jax), \
_torch2jax(truncated.view(-1, 1), self._jax), \
info
def reset(self) -> Tuple[Union[np.ndarray, jax.Array], Any]:
"""Reset the environment
:return: Observation, info
:rtype: np.ndarray or jax.Array and any other info
"""
if self._reset_once:
self._obs_dict = self._env.reset()
self._reset_once = False
return _torch2jax(self._obs_dict["obs"], self._jax), {}
def render(self, *args, **kwargs) -> None:
"""Render the environment
"""
pass
def close(self) -> None:
"""Close the environment
"""
pass
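# --- Hedged usage sketch (added for illustration; not part of the original file) ---
# Assumes an Isaac Gym preview 3/4 task instance (created by isaacgymenvs' own
# factory/launch scripts) is already available, and that skrl's numpy backend is active.
def _example_rollout(isaacgym_task) -> None:
    env = IsaacGymPreview3Wrapper(isaacgym_task)
    observation, info = env.reset()
    actions = np.random.uniform(-1, 1, size=(env.num_envs, env.action_space.shape[0])).astype(np.float32)
    observation, reward, terminated, truncated, info = env.step(actions)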
| 5,142 | Python | 33.059602 | 121 | 0.608129 |
Toni-SM/skrl/skrl/agents/torch/base.py | from typing import Any, Mapping, Optional, Tuple, Union
import collections
import copy
import datetime
import os
import gym
import gymnasium
import numpy as np
import torch
from torch.utils.tensorboard import SummaryWriter
from skrl import logger
from skrl.memories.torch import Memory
from skrl.models.torch import Model
class Agent:
def __init__(self,
models: Mapping[str, Model],
memory: Optional[Union[Memory, Tuple[Memory]]] = None,
observation_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
action_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
device: Optional[Union[str, torch.device]] = None,
cfg: Optional[dict] = None) -> None:
"""Base class that represent a RL agent
:param models: Models used by the agent
:type models: dictionary of skrl.models.torch.Model
        :param memory: Memory to store the transitions.
If it is a tuple, the first element will be used for training and
for the rest only the environment transitions will be added
:type memory: skrl.memory.torch.Memory, list of skrl.memory.torch.Memory or None
:param observation_space: Observation/state space or shape (default: ``None``)
:type observation_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param action_space: Action space or shape (default: ``None``)
:type action_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or torch.device, optional
:param cfg: Configuration dictionary
:type cfg: dict
"""
self.models = models
self.observation_space = observation_space
self.action_space = action_space
self.cfg = cfg if cfg is not None else {}
self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") if device is None else torch.device(device)
if type(memory) is list:
self.memory = memory[0]
self.secondary_memories = memory[1:]
else:
self.memory = memory
self.secondary_memories = []
# convert the models to their respective device
for model in self.models.values():
if model is not None:
model.to(model.device)
self.tracking_data = collections.defaultdict(list)
self.write_interval = self.cfg.get("experiment", {}).get("write_interval", 1000)
self._track_rewards = collections.deque(maxlen=100)
self._track_timesteps = collections.deque(maxlen=100)
self._cumulative_rewards = None
self._cumulative_timesteps = None
self.training = True
# checkpoint
self.checkpoint_modules = {}
self.checkpoint_interval = self.cfg.get("experiment", {}).get("checkpoint_interval", 1000)
self.checkpoint_store_separately = self.cfg.get("experiment", {}).get("store_separately", False)
self.checkpoint_best_modules = {"timestep": 0, "reward": -2 ** 31, "saved": False, "modules": {}}
# experiment directory
directory = self.cfg.get("experiment", {}).get("directory", "")
experiment_name = self.cfg.get("experiment", {}).get("experiment_name", "")
if not directory:
directory = os.path.join(os.getcwd(), "runs")
if not experiment_name:
experiment_name = "{}_{}".format(datetime.datetime.now().strftime("%y-%m-%d_%H-%M-%S-%f"), self.__class__.__name__)
self.experiment_dir = os.path.join(directory, experiment_name)
def __str__(self) -> str:
"""Generate a representation of the agent as string
:return: Representation of the agent as string
:rtype: str
"""
string = f"Agent: {repr(self)}"
for k, v in self.cfg.items():
if type(v) is dict:
string += f"\n |-- {k}"
for k1, v1 in v.items():
string += f"\n | |-- {k1}: {v1}"
else:
string += f"\n |-- {k}: {v}"
return string
def _empty_preprocessor(self, _input: Any, *args, **kwargs) -> Any:
"""Empty preprocess method
This method is defined because PyTorch multiprocessing can't pickle lambdas
:param _input: Input to preprocess
:type _input: Any
:return: Preprocessed input
:rtype: Any
"""
return _input
def _get_internal_value(self, _module: Any) -> Any:
"""Get internal module/variable state/value
:param _module: Module or variable
:type _module: Any
:return: Module/variable state/value
:rtype: Any
"""
return _module.state_dict() if hasattr(_module, "state_dict") else _module
def init(self, trainer_cfg: Optional[Mapping[str, Any]] = None) -> None:
"""Initialize the agent
This method should be called before the agent is used.
        It will initialize the TensorBoard writer (and optionally Weights & Biases) and create the checkpoints directory
:param trainer_cfg: Trainer configuration
:type trainer_cfg: dict, optional
"""
# setup Weights & Biases
if self.cfg.get("experiment", {}).get("wandb", False):
# save experiment config
trainer_cfg = trainer_cfg if trainer_cfg is not None else {}
try:
models_cfg = {k: v.net._modules for (k, v) in self.models.items()}
except AttributeError:
models_cfg = {k: v._modules for (k, v) in self.models.items()}
            config = {**self.cfg, **trainer_cfg, **models_cfg}
# set default values
wandb_kwargs = copy.deepcopy(self.cfg.get("experiment", {}).get("wandb_kwargs", {}))
wandb_kwargs.setdefault("name", os.path.split(self.experiment_dir)[-1])
wandb_kwargs.setdefault("sync_tensorboard", True)
wandb_kwargs.setdefault("config", {})
wandb_kwargs["config"].update(config)
# init Weights & Biases
import wandb
wandb.init(**wandb_kwargs)
# main entry to log data for consumption and visualization by TensorBoard
if self.write_interval > 0:
self.writer = SummaryWriter(log_dir=self.experiment_dir)
if self.checkpoint_interval > 0:
os.makedirs(os.path.join(self.experiment_dir, "checkpoints"), exist_ok=True)
def track_data(self, tag: str, value: float) -> None:
"""Track data to TensorBoard
Currently only scalar data are supported
:param tag: Data identifier (e.g. 'Loss / policy loss')
:type tag: str
:param value: Value to track
:type value: float
"""
self.tracking_data[tag].append(value)
def write_tracking_data(self, timestep: int, timesteps: int) -> None:
"""Write tracking data to TensorBoard
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
for k, v in self.tracking_data.items():
if k.endswith("(min)"):
self.writer.add_scalar(k, np.min(v), timestep)
elif k.endswith("(max)"):
self.writer.add_scalar(k, np.max(v), timestep)
else:
self.writer.add_scalar(k, np.mean(v), timestep)
# reset data containers for next iteration
self._track_rewards.clear()
self._track_timesteps.clear()
self.tracking_data.clear()
def write_checkpoint(self, timestep: int, timesteps: int) -> None:
"""Write checkpoint (modules) to disk
The checkpoints are saved in the directory 'checkpoints' in the experiment directory.
The name of the checkpoint is the current timestep if timestep is not None, otherwise it is the current time.
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
tag = str(timestep if timestep is not None else datetime.datetime.now().strftime("%y-%m-%d_%H-%M-%S-%f"))
# separated modules
if self.checkpoint_store_separately:
for name, module in self.checkpoint_modules.items():
torch.save(self._get_internal_value(module),
os.path.join(self.experiment_dir, "checkpoints", f"{name}_{tag}.pt"))
# whole agent
else:
modules = {}
for name, module in self.checkpoint_modules.items():
modules[name] = self._get_internal_value(module)
torch.save(modules, os.path.join(self.experiment_dir, "checkpoints", f"agent_{tag}.pt"))
# best modules
if self.checkpoint_best_modules["modules"] and not self.checkpoint_best_modules["saved"]:
# separated modules
if self.checkpoint_store_separately:
for name, module in self.checkpoint_modules.items():
torch.save(self.checkpoint_best_modules["modules"][name],
os.path.join(self.experiment_dir, "checkpoints", f"best_{name}.pt"))
# whole agent
else:
modules = {}
for name, module in self.checkpoint_modules.items():
modules[name] = self.checkpoint_best_modules["modules"][name]
torch.save(modules, os.path.join(self.experiment_dir, "checkpoints", "best_agent.pt"))
self.checkpoint_best_modules["saved"] = True
def act(self,
states: torch.Tensor,
timestep: int,
timesteps: int) -> torch.Tensor:
"""Process the environment's states to make a decision (actions) using the main policy
:param states: Environment's states
:type states: torch.Tensor
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
:raises NotImplementedError: The method is not implemented by the inheriting classes
:return: Actions
:rtype: torch.Tensor
"""
raise NotImplementedError
def record_transition(self,
states: torch.Tensor,
actions: torch.Tensor,
rewards: torch.Tensor,
next_states: torch.Tensor,
terminated: torch.Tensor,
truncated: torch.Tensor,
infos: Any,
timestep: int,
timesteps: int) -> None:
"""Record an environment transition in memory (to be implemented by the inheriting classes)
Inheriting classes must call this method to record episode information (rewards, timesteps, etc.).
In addition to recording environment transition (such as states, rewards, etc.), agent information can be recorded.
:param states: Observations/states of the environment used to make the decision
:type states: torch.Tensor
:param actions: Actions taken by the agent
:type actions: torch.Tensor
:param rewards: Instant rewards achieved by the current actions
:type rewards: torch.Tensor
:param next_states: Next observations/states of the environment
:type next_states: torch.Tensor
:param terminated: Signals to indicate that episodes have terminated
:type terminated: torch.Tensor
:param truncated: Signals to indicate that episodes have been truncated
:type truncated: torch.Tensor
:param infos: Additional information about the environment
:type infos: Any type supported by the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
if self.write_interval > 0:
# compute the cumulative sum of the rewards and timesteps
if self._cumulative_rewards is None:
self._cumulative_rewards = torch.zeros_like(rewards, dtype=torch.float32)
self._cumulative_timesteps = torch.zeros_like(rewards, dtype=torch.int32)
self._cumulative_rewards.add_(rewards)
self._cumulative_timesteps.add_(1)
# check ended episodes
finished_episodes = (terminated + truncated).nonzero(as_tuple=False)
if finished_episodes.numel():
                # store cumulative rewards and timesteps
self._track_rewards.extend(self._cumulative_rewards[finished_episodes][:, 0].reshape(-1).tolist())
self._track_timesteps.extend(self._cumulative_timesteps[finished_episodes][:, 0].reshape(-1).tolist())
# reset the cumulative rewards and timesteps
self._cumulative_rewards[finished_episodes] = 0
self._cumulative_timesteps[finished_episodes] = 0
# record data
self.tracking_data["Reward / Instantaneous reward (max)"].append(torch.max(rewards).item())
self.tracking_data["Reward / Instantaneous reward (min)"].append(torch.min(rewards).item())
self.tracking_data["Reward / Instantaneous reward (mean)"].append(torch.mean(rewards).item())
if len(self._track_rewards):
track_rewards = np.array(self._track_rewards)
track_timesteps = np.array(self._track_timesteps)
self.tracking_data["Reward / Total reward (max)"].append(np.max(track_rewards))
self.tracking_data["Reward / Total reward (min)"].append(np.min(track_rewards))
self.tracking_data["Reward / Total reward (mean)"].append(np.mean(track_rewards))
self.tracking_data["Episode / Total timesteps (max)"].append(np.max(track_timesteps))
self.tracking_data["Episode / Total timesteps (min)"].append(np.min(track_timesteps))
self.tracking_data["Episode / Total timesteps (mean)"].append(np.mean(track_timesteps))
def set_mode(self, mode: str) -> None:
"""Set the model mode (training or evaluation)
:param mode: Mode: 'train' for training or 'eval' for evaluation
:type mode: str
"""
for model in self.models.values():
if model is not None:
model.set_mode(mode)
def set_running_mode(self, mode: str) -> None:
"""Set the current running mode (training or evaluation)
This method sets the value of the ``training`` property (boolean).
This property can be used to know if the agent is running in training or evaluation mode.
:param mode: Mode: 'train' for training or 'eval' for evaluation
:type mode: str
"""
self.training = mode == "train"
def save(self, path: str) -> None:
"""Save the agent to the specified path
:param path: Path to save the model to
:type path: str
"""
modules = {}
for name, module in self.checkpoint_modules.items():
modules[name] = self._get_internal_value(module)
torch.save(modules, path)
def load(self, path: str) -> None:
"""Load the model from the specified path
The final storage device is determined by the constructor of the model
:param path: Path to load the model from
:type path: str
"""
modules = torch.load(path, map_location=self.device)
if type(modules) is dict:
for name, data in modules.items():
module = self.checkpoint_modules.get(name, None)
if module is not None:
if hasattr(module, "load_state_dict"):
module.load_state_dict(data)
if hasattr(module, "eval"):
module.eval()
else:
raise NotImplementedError
else:
logger.warning(f"Cannot load the {name} module. The agent doesn't have such an instance")
def migrate(self,
path: str,
name_map: Mapping[str, Mapping[str, str]] = {},
auto_mapping: bool = True,
verbose: bool = False) -> bool:
"""Migrate the specified extrernal checkpoint to the current agent
The final storage device is determined by the constructor of the agent.
Only files generated by the *rl_games* library are supported at the moment
For ambiguous models (where 2 or more parameters, for source or current model, have equal shape)
it is necessary to define the ``name_map``, at least for those parameters, to perform the migration successfully
:param path: Path to the external checkpoint to migrate from
:type path: str
:param name_map: Name map to use for the migration (default: ``{}``).
Keys are the current parameter names and values are the external parameter names
:type name_map: Mapping[str, Mapping[str, str]], optional
:param auto_mapping: Automatically map the external state dict to the current state dict (default: ``True``)
:type auto_mapping: bool, optional
:param verbose: Show model names and migration (default: ``False``)
:type verbose: bool, optional
:raises ValueError: If the correct file type cannot be identified from the ``path`` parameter
:return: True if the migration was successful, False otherwise.
Migration is successful if all parameters of the current model are found in the external model
:rtype: bool
Example::
# migrate a rl_games checkpoint with ambiguous state_dict
>>> agent.migrate(path="./runs/Cartpole/nn/Cartpole.pth", verbose=False)
[skrl:WARNING] Ambiguous match for net.0.bias <- [a2c_network.actor_mlp.0.bias, a2c_network.actor_mlp.2.bias]
[skrl:WARNING] Ambiguous match for net.2.bias <- [a2c_network.actor_mlp.0.bias, a2c_network.actor_mlp.2.bias]
[skrl:WARNING] Ambiguous match for net.4.weight <- [a2c_network.value.weight, a2c_network.mu.weight]
[skrl:WARNING] Ambiguous match for net.4.bias <- [a2c_network.value.bias, a2c_network.mu.bias]
[skrl:WARNING] Multiple use of a2c_network.actor_mlp.0.bias -> [net.0.bias, net.2.bias]
[skrl:WARNING] Multiple use of a2c_network.actor_mlp.2.bias -> [net.0.bias, net.2.bias]
[skrl:WARNING] Ambiguous match for net.0.bias <- [a2c_network.actor_mlp.0.bias, a2c_network.actor_mlp.2.bias]
[skrl:WARNING] Ambiguous match for net.2.bias <- [a2c_network.actor_mlp.0.bias, a2c_network.actor_mlp.2.bias]
[skrl:WARNING] Ambiguous match for net.4.weight <- [a2c_network.value.weight, a2c_network.mu.weight]
[skrl:WARNING] Ambiguous match for net.4.bias <- [a2c_network.value.bias, a2c_network.mu.bias]
[skrl:WARNING] Multiple use of a2c_network.actor_mlp.0.bias -> [net.0.bias, net.2.bias]
[skrl:WARNING] Multiple use of a2c_network.actor_mlp.2.bias -> [net.0.bias, net.2.bias]
False
>>> name_map = {"policy": {"net.0.bias": "a2c_network.actor_mlp.0.bias",
... "net.2.bias": "a2c_network.actor_mlp.2.bias",
... "net.4.weight": "a2c_network.mu.weight",
... "net.4.bias": "a2c_network.mu.bias"},
... "value": {"net.0.bias": "a2c_network.actor_mlp.0.bias",
... "net.2.bias": "a2c_network.actor_mlp.2.bias",
... "net.4.weight": "a2c_network.value.weight",
... "net.4.bias": "a2c_network.value.bias"}}
>>> model.migrate(path="./runs/Cartpole/nn/Cartpole.pth", name_map=name_map, verbose=True)
[skrl:INFO] Modules
[skrl:INFO] |-- current
[skrl:INFO] | |-- policy (Policy)
[skrl:INFO] | | |-- log_std_parameter : [1]
[skrl:INFO] | | |-- net.0.weight : [32, 4]
[skrl:INFO] | | |-- net.0.bias : [32]
[skrl:INFO] | | |-- net.2.weight : [32, 32]
[skrl:INFO] | | |-- net.2.bias : [32]
[skrl:INFO] | | |-- net.4.weight : [1, 32]
[skrl:INFO] | | |-- net.4.bias : [1]
[skrl:INFO] | |-- value (Value)
[skrl:INFO] | | |-- net.0.weight : [32, 4]
[skrl:INFO] | | |-- net.0.bias : [32]
[skrl:INFO] | | |-- net.2.weight : [32, 32]
[skrl:INFO] | | |-- net.2.bias : [32]
[skrl:INFO] | | |-- net.4.weight : [1, 32]
[skrl:INFO] | | |-- net.4.bias : [1]
[skrl:INFO] | |-- optimizer (Adam)
[skrl:INFO] | | |-- state (dict)
[skrl:INFO] | | |-- param_groups (list)
[skrl:INFO] | |-- state_preprocessor (RunningStandardScaler)
[skrl:INFO] | | |-- running_mean : [4]
[skrl:INFO] | | |-- running_variance : [4]
[skrl:INFO] | | |-- current_count : []
[skrl:INFO] | |-- value_preprocessor (RunningStandardScaler)
[skrl:INFO] | | |-- running_mean : [1]
[skrl:INFO] | | |-- running_variance : [1]
[skrl:INFO] | | |-- current_count : []
[skrl:INFO] |-- source
[skrl:INFO] | |-- model (OrderedDict)
[skrl:INFO] | | |-- value_mean_std.running_mean : [1]
[skrl:INFO] | | |-- value_mean_std.running_var : [1]
[skrl:INFO] | | |-- value_mean_std.count : []
[skrl:INFO] | | |-- running_mean_std.running_mean : [4]
[skrl:INFO] | | |-- running_mean_std.running_var : [4]
[skrl:INFO] | | |-- running_mean_std.count : []
[skrl:INFO] | | |-- a2c_network.sigma : [1]
[skrl:INFO] | | |-- a2c_network.actor_mlp.0.weight : [32, 4]
[skrl:INFO] | | |-- a2c_network.actor_mlp.0.bias : [32]
[skrl:INFO] | | |-- a2c_network.actor_mlp.2.weight : [32, 32]
[skrl:INFO] | | |-- a2c_network.actor_mlp.2.bias : [32]
[skrl:INFO] | | |-- a2c_network.value.weight : [1, 32]
[skrl:INFO] | | |-- a2c_network.value.bias : [1]
[skrl:INFO] | | |-- a2c_network.mu.weight : [1, 32]
[skrl:INFO] | | |-- a2c_network.mu.bias : [1]
[skrl:INFO] | |-- epoch (int)
[skrl:INFO] | |-- optimizer (dict)
[skrl:INFO] | |-- frame (int)
[skrl:INFO] | |-- last_mean_rewards (float32)
[skrl:INFO] | |-- env_state (NoneType)
[skrl:INFO] Migration
[skrl:INFO] Model: policy (Policy)
[skrl:INFO] Models
[skrl:INFO] |-- current: 7 items
[skrl:INFO] | |-- log_std_parameter : [1]
[skrl:INFO] | |-- net.0.weight : [32, 4]
[skrl:INFO] | |-- net.0.bias : [32]
[skrl:INFO] | |-- net.2.weight : [32, 32]
[skrl:INFO] | |-- net.2.bias : [32]
[skrl:INFO] | |-- net.4.weight : [1, 32]
[skrl:INFO] | |-- net.4.bias : [1]
[skrl:INFO] |-- source: 9 items
[skrl:INFO] | |-- a2c_network.sigma : [1]
[skrl:INFO] | |-- a2c_network.actor_mlp.0.weight : [32, 4]
[skrl:INFO] | |-- a2c_network.actor_mlp.0.bias : [32]
[skrl:INFO] | |-- a2c_network.actor_mlp.2.weight : [32, 32]
[skrl:INFO] | |-- a2c_network.actor_mlp.2.bias : [32]
[skrl:INFO] | |-- a2c_network.value.weight : [1, 32]
[skrl:INFO] | |-- a2c_network.value.bias : [1]
[skrl:INFO] | |-- a2c_network.mu.weight : [1, 32]
[skrl:INFO] | |-- a2c_network.mu.bias : [1]
[skrl:INFO] Migration
[skrl:INFO] |-- auto: log_std_parameter <- a2c_network.sigma
[skrl:INFO] |-- auto: net.0.weight <- a2c_network.actor_mlp.0.weight
[skrl:INFO] |-- map: net.0.bias <- a2c_network.actor_mlp.0.bias
[skrl:INFO] |-- auto: net.2.weight <- a2c_network.actor_mlp.2.weight
[skrl:INFO] |-- map: net.2.bias <- a2c_network.actor_mlp.2.bias
[skrl:INFO] |-- map: net.4.weight <- a2c_network.mu.weight
[skrl:INFO] |-- map: net.4.bias <- a2c_network.mu.bias
[skrl:INFO] Model: value (Value)
[skrl:INFO] Models
[skrl:INFO] |-- current: 6 items
[skrl:INFO] | |-- net.0.weight : [32, 4]
[skrl:INFO] | |-- net.0.bias : [32]
[skrl:INFO] | |-- net.2.weight : [32, 32]
[skrl:INFO] | |-- net.2.bias : [32]
[skrl:INFO] | |-- net.4.weight : [1, 32]
[skrl:INFO] | |-- net.4.bias : [1]
[skrl:INFO] |-- source: 9 items
[skrl:INFO] | |-- a2c_network.sigma : [1]
[skrl:INFO] | |-- a2c_network.actor_mlp.0.weight : [32, 4]
[skrl:INFO] | |-- a2c_network.actor_mlp.0.bias : [32]
[skrl:INFO] | |-- a2c_network.actor_mlp.2.weight : [32, 32]
[skrl:INFO] | |-- a2c_network.actor_mlp.2.bias : [32]
[skrl:INFO] | |-- a2c_network.value.weight : [1, 32]
[skrl:INFO] | |-- a2c_network.value.bias : [1]
[skrl:INFO] | |-- a2c_network.mu.weight : [1, 32]
[skrl:INFO] | |-- a2c_network.mu.bias : [1]
[skrl:INFO] Migration
[skrl:INFO] |-- auto: net.0.weight <- a2c_network.actor_mlp.0.weight
[skrl:INFO] |-- map: net.0.bias <- a2c_network.actor_mlp.0.bias
[skrl:INFO] |-- auto: net.2.weight <- a2c_network.actor_mlp.2.weight
[skrl:INFO] |-- map: net.2.bias <- a2c_network.actor_mlp.2.bias
[skrl:INFO] |-- map: net.4.weight <- a2c_network.value.weight
[skrl:INFO] |-- map: net.4.bias <- a2c_network.value.bias
True
"""
# load state_dict from path
if path is not None:
# rl_games checkpoint
if path.endswith(".pt") or path.endswith(".pth"):
checkpoint = torch.load(path, map_location=self.device)
else:
raise ValueError("Cannot identify file type")
# show modules
if verbose:
logger.info("Modules")
logger.info(" |-- current")
for name, module in self.checkpoint_modules.items():
logger.info(f" | |-- {name} ({type(module).__name__})")
if hasattr(module, "state_dict"):
for k, v in module.state_dict().items():
if hasattr(v, "shape"):
logger.info(f" | | |-- {k} : {list(v.shape)}")
else:
logger.info(f" | | |-- {k} ({type(v).__name__})")
logger.info(" |-- source")
for name, module in checkpoint.items():
logger.info(f" | |-- {name} ({type(module).__name__})")
if name == "model":
for k, v in module.items():
logger.info(f" | | |-- {k} : {list(v.shape)}")
else:
if hasattr(module, "state_dict"):
for k, v in module.state_dict().items():
if hasattr(v, "shape"):
logger.info(f" | | |-- {k} : {list(v.shape)}")
else:
logger.info(f" | | |-- {k} ({type(v).__name__})")
logger.info("Migration")
if "optimizer" in self.checkpoint_modules:
            # the loaded state dict contains a parameter group that doesn't match the size of the optimizer's group
# self.checkpoint_modules["optimizer"].load_state_dict(checkpoint["optimizer"])
pass
# state_preprocessor
if "state_preprocessor" in self.checkpoint_modules:
if "running_mean_std.running_mean" in checkpoint["model"]:
state_dict = copy.deepcopy(self.checkpoint_modules["state_preprocessor"].state_dict())
state_dict["running_mean"] = checkpoint["model"]["running_mean_std.running_mean"]
state_dict["running_variance"] = checkpoint["model"]["running_mean_std.running_var"]
state_dict["current_count"] = checkpoint["model"]["running_mean_std.count"]
self.checkpoint_modules["state_preprocessor"].load_state_dict(state_dict)
del checkpoint["model"]["running_mean_std.running_mean"]
del checkpoint["model"]["running_mean_std.running_var"]
del checkpoint["model"]["running_mean_std.count"]
# value_preprocessor
if "value_preprocessor" in self.checkpoint_modules:
if "value_mean_std.running_mean" in checkpoint["model"]:
state_dict = copy.deepcopy(self.checkpoint_modules["value_preprocessor"].state_dict())
state_dict["running_mean"] = checkpoint["model"]["value_mean_std.running_mean"]
state_dict["running_variance"] = checkpoint["model"]["value_mean_std.running_var"]
state_dict["current_count"] = checkpoint["model"]["value_mean_std.count"]
self.checkpoint_modules["value_preprocessor"].load_state_dict(state_dict)
del checkpoint["model"]["value_mean_std.running_mean"]
del checkpoint["model"]["value_mean_std.running_var"]
del checkpoint["model"]["value_mean_std.count"]
# TODO: AMP state preprocessor
# model
status = True
for name, module in self.checkpoint_modules.items():
if module not in ["state_preprocessor", "value_preprocessor", "optimizer"] and hasattr(module, "migrate"):
if verbose:
logger.info(f"Model: {name} ({type(module).__name__})")
status *= module.migrate(state_dict=checkpoint["model"],
name_map=name_map.get(name, {}),
auto_mapping=auto_mapping,
verbose=verbose)
self.set_mode("eval")
return bool(status)
def pre_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called before the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
pass
def post_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called after the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
timestep += 1
# update best models and write checkpoints
if timestep > 1 and self.checkpoint_interval > 0 and not timestep % self.checkpoint_interval:
# update best models
reward = np.mean(self.tracking_data.get("Reward / Total reward (mean)", -2 ** 31))
if reward > self.checkpoint_best_modules["reward"]:
self.checkpoint_best_modules["timestep"] = timestep
self.checkpoint_best_modules["reward"] = reward
self.checkpoint_best_modules["saved"] = False
self.checkpoint_best_modules["modules"] = {k: copy.deepcopy(self._get_internal_value(v)) for k, v in self.checkpoint_modules.items()}
# write checkpoints
self.write_checkpoint(timestep, timesteps)
# write to tensorboard
if timestep > 1 and self.write_interval > 0 and not timestep % self.write_interval:
self.write_tracking_data(timestep, timesteps)
def _update(self, timestep: int, timesteps: int) -> None:
"""Algorithm's main update step
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
:raises NotImplementedError: The method is not implemented by the inheriting classes
"""
raise NotImplementedError
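# ---------------------------------------------------------------------------
# Hedged usage sketch (editorial addition, not part of skrl): a do-nothing
# Agent subclass exercising the base-class API above (act and _update are the
# two hooks subclasses must implement). `RandomAgent`, `_example_minimal_agent`
# and the spaces used below are illustrative placeholders only.
def _example_minimal_agent() -> "Agent":
    class RandomAgent(Agent):
        def act(self, states, timestep, timesteps):
            # sample one random action from the action space (no learning involved)
            return torch.as_tensor(self.action_space.sample(), device=self.device)
        def _update(self, timestep, timesteps):
            pass  # no update in this sketch
    agent = RandomAgent(models={},
                        observation_space=gym.spaces.Box(-1.0, 1.0, shape=(4,)),
                        action_space=gym.spaces.Discrete(2),
                        cfg={"experiment": {"write_interval": 0, "checkpoint_interval": 0}})
    agent.init()
    agent.track_data("Debug / constant", 1.0)  # scalar tracking as documented in track_data()
    return agent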
| 33,314 | Python | 49.097744 | 149 | 0.555082 |
Toni-SM/skrl/skrl/agents/torch/__init__.py | from skrl.agents.torch.base import Agent
| 41 | Python | 19.99999 | 40 | 0.829268 |
Toni-SM/skrl/skrl/agents/torch/trpo/__init__.py | from skrl.agents.torch.trpo.trpo import TRPO, TRPO_DEFAULT_CONFIG
from skrl.agents.torch.trpo.trpo_rnn import TRPO_RNN
| 119 | Python | 38.999987 | 65 | 0.815126 |
Toni-SM/skrl/skrl/agents/torch/trpo/trpo.py | from typing import Any, Mapping, Optional, Tuple, Union
import copy
import gym
import gymnasium
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils.convert_parameters import parameters_to_vector, vector_to_parameters
from skrl.agents.torch import Agent
from skrl.memories.torch import Memory
from skrl.models.torch import Model
# [start-config-dict-torch]
TRPO_DEFAULT_CONFIG = {
"rollouts": 16, # number of rollouts before updating
"learning_epochs": 8, # number of learning epochs during each update
"mini_batches": 2, # number of mini batches during each learning epoch
"discount_factor": 0.99, # discount factor (gamma)
"lambda": 0.95, # TD(lambda) coefficient (lam) for computing returns and advantages
"value_learning_rate": 1e-3, # value learning rate
"learning_rate_scheduler": None, # learning rate scheduler class (see torch.optim.lr_scheduler)
"learning_rate_scheduler_kwargs": {}, # learning rate scheduler's kwargs (e.g. {"step_size": 1e-3})
"state_preprocessor": None, # state preprocessor class (see skrl.resources.preprocessors)
"state_preprocessor_kwargs": {}, # state preprocessor's kwargs (e.g. {"size": env.observation_space})
"value_preprocessor": None, # value preprocessor class (see skrl.resources.preprocessors)
"value_preprocessor_kwargs": {}, # value preprocessor's kwargs (e.g. {"size": 1})
"random_timesteps": 0, # random exploration steps
"learning_starts": 0, # learning starts after this many steps
"grad_norm_clip": 0.5, # clipping coefficient for the norm of the gradients
"value_loss_scale": 1.0, # value loss scaling factor
"damping": 0.1, # damping coefficient for computing the Hessian-vector product
"max_kl_divergence": 0.01, # maximum KL divergence between old and new policy
"conjugate_gradient_steps": 10, # maximum number of iterations for the conjugate gradient algorithm
"max_backtrack_steps": 10, # maximum number of backtracking steps during line search
"accept_ratio": 0.5, # accept ratio for the line search loss improvement
"step_fraction": 1.0, # fraction of the step size for the line search
"rewards_shaper": None, # rewards shaping function: Callable(reward, timestep, timesteps) -> reward
"time_limit_bootstrap": False, # bootstrap at timeout termination (episode truncation)
"experiment": {
"directory": "", # experiment's parent directory
"experiment_name": "", # experiment name
"write_interval": 250, # TensorBoard writing interval (timesteps)
"checkpoint_interval": 1000, # interval for checkpoints (timesteps)
"store_separately": False, # whether to store checkpoints separately
"wandb": False, # whether to use Weights & Biases
"wandb_kwargs": {} # wandb kwargs (see https://docs.wandb.ai/ref/python/init)
}
}
# [end-config-dict-torch]
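# ---------------------------------------------------------------------------
# Hedged usage sketch (editorial addition): building a user-side configuration
# by overriding a few entries of the default dictionary above. The specific
# values are illustrative, not tuning recommendations.
def _example_trpo_config() -> dict:
    cfg = copy.deepcopy(TRPO_DEFAULT_CONFIG)
    cfg["rollouts"] = 32
    cfg["max_kl_divergence"] = 0.008
    cfg["experiment"]["write_interval"] = 500
    return cfg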
class TRPO(Agent):
def __init__(self,
models: Mapping[str, Model],
memory: Optional[Union[Memory, Tuple[Memory]]] = None,
observation_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
action_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
device: Optional[Union[str, torch.device]] = None,
cfg: Optional[dict] = None) -> None:
"""Trust Region Policy Optimization (TRPO)
https://arxiv.org/abs/1502.05477
:param models: Models used by the agent
:type models: dictionary of skrl.models.torch.Model
        :param memory: Memory to store the transitions.
If it is a tuple, the first element will be used for training and
for the rest only the environment transitions will be added
:type memory: skrl.memory.torch.Memory, list of skrl.memory.torch.Memory or None
:param observation_space: Observation/state space or shape (default: ``None``)
:type observation_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param action_space: Action space or shape (default: ``None``)
:type action_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or torch.device, optional
:param cfg: Configuration dictionary
:type cfg: dict
:raises KeyError: If the models dictionary is missing a required key
"""
_cfg = copy.deepcopy(TRPO_DEFAULT_CONFIG)
_cfg.update(cfg if cfg is not None else {})
super().__init__(models=models,
memory=memory,
observation_space=observation_space,
action_space=action_space,
device=device,
cfg=_cfg)
# models
self.policy = self.models.get("policy", None)
self.value = self.models.get("value", None)
self.backup_policy = copy.deepcopy(self.policy)
# checkpoint models
self.checkpoint_modules["policy"] = self.policy
self.checkpoint_modules["value"] = self.value
# configuration
self._learning_epochs = self.cfg["learning_epochs"]
self._mini_batches = self.cfg["mini_batches"]
self._rollouts = self.cfg["rollouts"]
self._rollout = 0
self._grad_norm_clip = self.cfg["grad_norm_clip"]
self._value_loss_scale = self.cfg["value_loss_scale"]
self._max_kl_divergence = self.cfg["max_kl_divergence"]
self._damping = self.cfg["damping"]
self._conjugate_gradient_steps = self.cfg["conjugate_gradient_steps"]
self._max_backtrack_steps = self.cfg["max_backtrack_steps"]
self._accept_ratio = self.cfg["accept_ratio"]
self._step_fraction = self.cfg["step_fraction"]
self._value_learning_rate = self.cfg["value_learning_rate"]
self._learning_rate_scheduler = self.cfg["learning_rate_scheduler"]
self._state_preprocessor = self.cfg["state_preprocessor"]
self._value_preprocessor = self.cfg["value_preprocessor"]
self._discount_factor = self.cfg["discount_factor"]
self._lambda = self.cfg["lambda"]
self._random_timesteps = self.cfg["random_timesteps"]
self._learning_starts = self.cfg["learning_starts"]
self._rewards_shaper = self.cfg["rewards_shaper"]
self._time_limit_bootstrap = self.cfg["time_limit_bootstrap"]
# set up optimizer and learning rate scheduler
if self.policy is not None and self.value is not None:
self.value_optimizer = torch.optim.Adam(self.value.parameters(), lr=self._value_learning_rate)
if self._learning_rate_scheduler is not None:
self.value_scheduler = self._learning_rate_scheduler(self.value_optimizer, **self.cfg["learning_rate_scheduler_kwargs"])
self.checkpoint_modules["value_optimizer"] = self.value_optimizer
# set up preprocessors
if self._state_preprocessor:
self._state_preprocessor = self._state_preprocessor(**self.cfg["state_preprocessor_kwargs"])
self.checkpoint_modules["state_preprocessor"] = self._state_preprocessor
else:
self._state_preprocessor = self._empty_preprocessor
if self._value_preprocessor:
self._value_preprocessor = self._value_preprocessor(**self.cfg["value_preprocessor_kwargs"])
self.checkpoint_modules["value_preprocessor"] = self._value_preprocessor
else:
self._value_preprocessor = self._empty_preprocessor
def init(self, trainer_cfg: Optional[Mapping[str, Any]] = None) -> None:
"""Initialize the agent
"""
super().init(trainer_cfg=trainer_cfg)
self.set_mode("eval")
# create tensors in memory
if self.memory is not None:
self.memory.create_tensor(name="states", size=self.observation_space, dtype=torch.float32)
self.memory.create_tensor(name="actions", size=self.action_space, dtype=torch.float32)
self.memory.create_tensor(name="rewards", size=1, dtype=torch.float32)
self.memory.create_tensor(name="terminated", size=1, dtype=torch.bool)
self.memory.create_tensor(name="log_prob", size=1, dtype=torch.float32)
self.memory.create_tensor(name="values", size=1, dtype=torch.float32)
self.memory.create_tensor(name="returns", size=1, dtype=torch.float32)
self.memory.create_tensor(name="advantages", size=1, dtype=torch.float32)
self._tensors_names_policy = ["states", "actions", "log_prob", "advantages"]
self._tensors_names_value = ["states", "returns"]
# create temporary variables needed for storage and computation
self._current_log_prob = None
self._current_next_states = None
def act(self, states: torch.Tensor, timestep: int, timesteps: int) -> torch.Tensor:
"""Process the environment's states to make a decision (actions) using the main policy
:param states: Environment's states
:type states: torch.Tensor
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
:return: Actions
:rtype: torch.Tensor
"""
# sample random actions
# TODO: fix for stochasticity
if timestep < self._random_timesteps:
return self.policy.random_act({"states": self._state_preprocessor(states)}, role="policy")
# sample stochastic actions
actions, log_prob, outputs = self.policy.act({"states": self._state_preprocessor(states)}, role="policy")
self._current_log_prob = log_prob
return actions, log_prob, outputs
def record_transition(self,
states: torch.Tensor,
actions: torch.Tensor,
rewards: torch.Tensor,
next_states: torch.Tensor,
terminated: torch.Tensor,
truncated: torch.Tensor,
infos: Any,
timestep: int,
timesteps: int) -> None:
"""Record an environment transition in memory
:param states: Observations/states of the environment used to make the decision
:type states: torch.Tensor
:param actions: Actions taken by the agent
:type actions: torch.Tensor
:param rewards: Instant rewards achieved by the current actions
:type rewards: torch.Tensor
:param next_states: Next observations/states of the environment
:type next_states: torch.Tensor
:param terminated: Signals to indicate that episodes have terminated
:type terminated: torch.Tensor
:param truncated: Signals to indicate that episodes have been truncated
:type truncated: torch.Tensor
:param infos: Additional information about the environment
:type infos: Any type supported by the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
super().record_transition(states, actions, rewards, next_states, terminated, truncated, infos, timestep, timesteps)
if self.memory is not None:
self._current_next_states = next_states
# reward shaping
if self._rewards_shaper is not None:
rewards = self._rewards_shaper(rewards, timestep, timesteps)
# compute values
values, _, _ = self.value.act({"states": self._state_preprocessor(states)}, role="value")
values = self._value_preprocessor(values, inverse=True)
            # time-limit (truncation) bootstrapping
if self._time_limit_bootstrap:
rewards += self._discount_factor * values * truncated
            # store transition in memory
self.memory.add_samples(states=states, actions=actions, rewards=rewards, next_states=next_states,
terminated=terminated, truncated=truncated, log_prob=self._current_log_prob, values=values)
for memory in self.secondary_memories:
memory.add_samples(states=states, actions=actions, rewards=rewards, next_states=next_states,
terminated=terminated, truncated=truncated, log_prob=self._current_log_prob, values=values)
def pre_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called before the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
pass
def post_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called after the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
self._rollout += 1
if not self._rollout % self._rollouts and timestep >= self._learning_starts:
self.set_mode("train")
self._update(timestep, timesteps)
self.set_mode("eval")
# write tracking data and checkpoints
super().post_interaction(timestep, timesteps)
def _update(self, timestep: int, timesteps: int) -> None:
"""Algorithm's main update step
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
def compute_gae(rewards: torch.Tensor,
dones: torch.Tensor,
values: torch.Tensor,
next_values: torch.Tensor,
discount_factor: float = 0.99,
lambda_coefficient: float = 0.95) -> torch.Tensor:
"""Compute the Generalized Advantage Estimator (GAE)
:param rewards: Rewards obtained by the agent
:type rewards: torch.Tensor
:param dones: Signals to indicate that episodes have ended
:type dones: torch.Tensor
:param values: Values obtained by the agent
:type values: torch.Tensor
:param next_values: Next values obtained by the agent
:type next_values: torch.Tensor
:param discount_factor: Discount factor
:type discount_factor: float
:param lambda_coefficient: Lambda coefficient
:type lambda_coefficient: float
:return: Generalized Advantage Estimator
:rtype: torch.Tensor
"""
advantage = 0
advantages = torch.zeros_like(rewards)
not_dones = dones.logical_not()
memory_size = rewards.shape[0]
# advantages computation
for i in reversed(range(memory_size)):
                # bootstrap the last rollout step with the value of the state that follows the rollout
                next_values_i = values[i + 1] if i < memory_size - 1 else next_values
                advantage = rewards[i] - values[i] + discount_factor * not_dones[i] * (next_values_i + lambda_coefficient * advantage)
advantages[i] = advantage
# returns computation
returns = advantages + values
# normalize advantages
advantages = (advantages - advantages.mean()) / (advantages.std() + 1e-8)
return returns, advantages
def surrogate_loss(policy: Model,
states: torch.Tensor,
actions: torch.Tensor,
log_prob: torch.Tensor,
advantages: torch.Tensor) -> torch.Tensor:
"""Compute the surrogate objective (policy loss)
:param policy: Policy
:type policy: Model
:param states: States
:type states: torch.Tensor
:param actions: Actions
:type actions: torch.Tensor
:param log_prob: Log probability
:type log_prob: torch.Tensor
:param advantages: Advantages
:type advantages: torch.Tensor
:return: Surrogate loss
:rtype: torch.Tensor
"""
_, new_log_prob, _ = policy.act({"states": states, "taken_actions": actions}, role="policy")
return (advantages * torch.exp(new_log_prob - log_prob.detach())).mean()
def conjugate_gradient(policy: Model,
states: torch.Tensor,
b: torch.Tensor,
                               num_iterations: int = 10,
residual_tolerance: float = 1e-10) -> torch.Tensor:
"""Conjugate gradient algorithm to solve Ax = b using the iterative method
https://en.wikipedia.org/wiki/Conjugate_gradient_method#As_an_iterative_method
:param policy: Policy
:type policy: Model
:param states: States
:type states: torch.Tensor
:param b: Vector b
:type b: torch.Tensor
:param num_iterations: Number of iterations (default: ``10``)
            :type num_iterations: int, optional
:param residual_tolerance: Residual tolerance (default: ``1e-10``)
:type residual_tolerance: float, optional
:return: Conjugate vector
:rtype: torch.Tensor
"""
x = torch.zeros_like(b)
r = b.clone()
p = b.clone()
rr_old = torch.dot(r, r)
for _ in range(num_iterations):
hv = fisher_vector_product(policy, states, p, damping=self._damping)
alpha = rr_old / torch.dot(p, hv)
x += alpha * p
r -= alpha * hv
rr_new = torch.dot(r, r)
if rr_new < residual_tolerance:
break
p = r + rr_new / rr_old * p
rr_old = rr_new
return x
def fisher_vector_product(policy: Model,
states: torch.Tensor,
vector: torch.Tensor,
damping: float = 0.1) -> torch.Tensor:
"""Compute the Fisher vector product (direct method)
https://www.telesens.co/2018/06/09/efficiently-computing-the-fisher-vector-product-in-trpo/
:param policy: Policy
:type policy: Model
:param states: States
:type states: torch.Tensor
:param vector: Vector
:type vector: torch.Tensor
:param damping: Damping (default: ``0.1``)
:type damping: float, optional
:return: Hessian vector product
:rtype: torch.Tensor
"""
kl = kl_divergence(policy, policy, states)
kl_gradient = torch.autograd.grad(kl, policy.parameters(), create_graph=True)
flat_kl_gradient = torch.cat([gradient.view(-1) for gradient in kl_gradient])
hessian_vector_gradient = torch.autograd.grad((flat_kl_gradient * vector).sum(), policy.parameters())
flat_hessian_vector_gradient = torch.cat([gradient.contiguous().view(-1) for gradient in hessian_vector_gradient])
return flat_hessian_vector_gradient + damping * vector
def kl_divergence(policy_1: Model, policy_2: Model, states: torch.Tensor) -> torch.Tensor:
"""Compute the KL divergence between two distributions
https://en.wikipedia.org/wiki/Normal_distribution#Other_properties
:param policy_1: First policy
:type policy_1: Model
:param policy_2: Second policy
:type policy_2: Model
:param states: States
:type states: torch.Tensor
:return: KL divergence
:rtype: torch.Tensor
"""
mu_1 = policy_1.act({"states": states}, role="policy")[2]["mean_actions"]
logstd_1 = policy_1.get_log_std(role="policy")
mu_1, logstd_1 = mu_1.detach(), logstd_1.detach()
mu_2 = policy_2.act({"states": states}, role="policy")[2]["mean_actions"]
logstd_2 = policy_2.get_log_std(role="policy")
kl = logstd_1 - logstd_2 + 0.5 * (torch.square(logstd_1.exp()) + torch.square(mu_1 - mu_2)) \
/ torch.square(logstd_2.exp()) - 0.5
return torch.sum(kl, dim=-1).mean()
# compute returns and advantages
with torch.no_grad():
self.value.train(False)
last_values, _, _ = self.value.act({"states": self._state_preprocessor(self._current_next_states.float())}, role="value")
self.value.train(True)
last_values = self._value_preprocessor(last_values, inverse=True)
values = self.memory.get_tensor_by_name("values")
returns, advantages = compute_gae(rewards=self.memory.get_tensor_by_name("rewards"),
dones=self.memory.get_tensor_by_name("terminated"),
values=values,
next_values=last_values,
discount_factor=self._discount_factor,
lambda_coefficient=self._lambda)
self.memory.set_tensor_by_name("values", self._value_preprocessor(values, train=True))
self.memory.set_tensor_by_name("returns", self._value_preprocessor(returns, train=True))
self.memory.set_tensor_by_name("advantages", advantages)
# sample all from memory
sampled_states, sampled_actions, sampled_log_prob, sampled_advantages \
= self.memory.sample_all(names=self._tensors_names_policy, mini_batches=1)[0]
sampled_states = self._state_preprocessor(sampled_states, train=True)
# compute policy loss gradient
policy_loss = surrogate_loss(self.policy, sampled_states, sampled_actions, sampled_log_prob, sampled_advantages)
policy_loss_gradient = torch.autograd.grad(policy_loss, self.policy.parameters())
flat_policy_loss_gradient = torch.cat([gradient.view(-1) for gradient in policy_loss_gradient])
# compute the search direction using the conjugate gradient algorithm
search_direction = conjugate_gradient(self.policy, sampled_states, flat_policy_loss_gradient.data,
num_iterations=self._conjugate_gradient_steps)
# compute step size and full step
xHx = (search_direction * fisher_vector_product(self.policy, sampled_states, search_direction, self._damping)) \
.sum(0, keepdim=True)
step_size = torch.sqrt(2 * self._max_kl_divergence / xHx)[0]
full_step = step_size * search_direction
# backtracking line search
restore_policy_flag = True
self.backup_policy.update_parameters(self.policy)
params = parameters_to_vector(self.policy.parameters())
expected_improvement = (flat_policy_loss_gradient * full_step).sum(0, keepdim=True)
for alpha in [self._step_fraction * 0.5 ** i for i in range(self._max_backtrack_steps)]:
new_params = params + alpha * full_step
vector_to_parameters(new_params, self.policy.parameters())
expected_improvement *= alpha
kl = kl_divergence(self.backup_policy, self.policy, sampled_states)
loss = surrogate_loss(self.policy, sampled_states, sampled_actions, sampled_log_prob, sampled_advantages)
if kl < self._max_kl_divergence and (loss - policy_loss) / expected_improvement > self._accept_ratio:
restore_policy_flag = False
break
if restore_policy_flag:
self.policy.update_parameters(self.backup_policy)
# sample mini-batches from memory
sampled_batches = self.memory.sample_all(names=self._tensors_names_value, mini_batches=self._mini_batches)
cumulative_value_loss = 0
# learning epochs
for epoch in range(self._learning_epochs):
# mini-batches loop
for sampled_states, sampled_returns in sampled_batches:
sampled_states = self._state_preprocessor(sampled_states, train=not epoch)
# compute value loss
predicted_values, _, _ = self.value.act({"states": sampled_states}, role="value")
value_loss = self._value_loss_scale * F.mse_loss(sampled_returns, predicted_values)
# optimization step (value)
self.value_optimizer.zero_grad()
value_loss.backward()
if self._grad_norm_clip > 0:
nn.utils.clip_grad_norm_(self.value.parameters(), self._grad_norm_clip)
self.value_optimizer.step()
# update cumulative losses
cumulative_value_loss += value_loss.item()
# update learning rate
if self._learning_rate_scheduler:
self.value_scheduler.step()
# record data
self.track_data("Loss / Policy loss", policy_loss.item())
self.track_data("Loss / Value loss", cumulative_value_loss / (self._learning_epochs * self._mini_batches))
self.track_data("Policy / Standard deviation", self.policy.distribution(role="policy").stddev.mean().item())
if self._learning_rate_scheduler:
self.track_data("Learning / Value learning rate", self.value_scheduler.get_last_lr()[0])
| 26,328 | Python | 45.682624 | 136 | 0.598184 |
Toni-SM/skrl/skrl/agents/torch/q_learning/__init__.py | from skrl.agents.torch.q_learning.q_learning import Q_LEARNING, Q_LEARNING_DEFAULT_CONFIG
| 90 | Python | 44.499978 | 89 | 0.822222 |
Toni-SM/skrl/skrl/agents/torch/q_learning/q_learning.py | from typing import Any, Mapping, Optional, Tuple, Union
import copy
import gym
import gymnasium
import torch
from skrl.agents.torch import Agent
from skrl.memories.torch import Memory
from skrl.models.torch import Model
# [start-config-dict-torch]
Q_LEARNING_DEFAULT_CONFIG = {
"discount_factor": 0.99, # discount factor (gamma)
"random_timesteps": 0, # random exploration steps
"learning_starts": 0, # learning starts after this many steps
"learning_rate": 0.5, # learning rate (alpha)
"rewards_shaper": None, # rewards shaping function: Callable(reward, timestep, timesteps) -> reward
"experiment": {
"directory": "", # experiment's parent directory
"experiment_name": "", # experiment name
"write_interval": 250, # TensorBoard writing interval (timesteps)
"checkpoint_interval": 1000, # interval for checkpoints (timesteps)
"store_separately": False, # whether to store checkpoints separately
"wandb": False, # whether to use Weights & Biases
"wandb_kwargs": {} # wandb kwargs (see https://docs.wandb.ai/ref/python/init)
}
}
# [end-config-dict-torch]
class Q_LEARNING(Agent):
def __init__(self,
models: Mapping[str, Model],
memory: Optional[Union[Memory, Tuple[Memory]]] = None,
observation_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
action_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
device: Optional[Union[str, torch.device]] = None,
cfg: Optional[dict] = None) -> None:
"""Q-learning
https://www.academia.edu/3294050/Learning_from_delayed_rewards
:param models: Models used by the agent
:type models: dictionary of skrl.models.torch.Model
        :param memory: Memory to store the transitions.
If it is a tuple, the first element will be used for training and
for the rest only the environment transitions will be added
:type memory: skrl.memory.torch.Memory, list of skrl.memory.torch.Memory or None
:param observation_space: Observation/state space or shape (default: ``None``)
:type observation_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param action_space: Action space or shape (default: ``None``)
:type action_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or torch.device, optional
:param cfg: Configuration dictionary
:type cfg: dict
:raises KeyError: If the models dictionary is missing a required key
"""
_cfg = copy.deepcopy(Q_LEARNING_DEFAULT_CONFIG)
_cfg.update(cfg if cfg is not None else {})
super().__init__(models=models,
memory=memory,
observation_space=observation_space,
action_space=action_space,
device=device,
cfg=_cfg)
# models
self.policy = self.models.get("policy", None)
# checkpoint models
self.checkpoint_modules["policy"] = self.policy
# configuration
self._discount_factor = self.cfg["discount_factor"]
self._random_timesteps = self.cfg["random_timesteps"]
self._learning_starts = self.cfg["learning_starts"]
self._learning_rate = self.cfg["learning_rate"]
self._rewards_shaper = self.cfg["rewards_shaper"]
# create temporary variables needed for storage and computation
self._current_states = None
self._current_actions = None
self._current_rewards = None
self._current_next_states = None
self._current_dones = None
def init(self, trainer_cfg: Optional[Mapping[str, Any]] = None) -> None:
"""Initialize the agent
"""
super().init(trainer_cfg=trainer_cfg)
def act(self, states: torch.Tensor, timestep: int, timesteps: int) -> torch.Tensor:
"""Process the environment's states to make a decision (actions) using the main policy
:param states: Environment's states
:type states: torch.Tensor
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
:return: Actions
:rtype: torch.Tensor
"""
# sample random actions
if timestep < self._random_timesteps:
return self.policy.random_act({"states": states}, role="policy")
# sample actions from policy
return self.policy.act({"states": states}, role="policy")
def record_transition(self,
states: torch.Tensor,
actions: torch.Tensor,
rewards: torch.Tensor,
next_states: torch.Tensor,
terminated: torch.Tensor,
truncated: torch.Tensor,
infos: Any,
timestep: int,
timesteps: int) -> None:
"""Record an environment transition in memory
:param states: Observations/states of the environment used to make the decision
:type states: torch.Tensor
:param actions: Actions taken by the agent
:type actions: torch.Tensor
:param rewards: Instant rewards achieved by the current actions
:type rewards: torch.Tensor
:param next_states: Next observations/states of the environment
:type next_states: torch.Tensor
:param terminated: Signals to indicate that episodes have terminated
:type terminated: torch.Tensor
:param truncated: Signals to indicate that episodes have been truncated
:type truncated: torch.Tensor
:param infos: Additional information about the environment
:type infos: Any type supported by the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
super().record_transition(states, actions, rewards, next_states, terminated, truncated, infos, timestep, timesteps)
# reward shaping
if self._rewards_shaper is not None:
rewards = self._rewards_shaper(rewards, timestep, timesteps)
self._current_states = states
self._current_actions = actions
self._current_rewards = rewards
self._current_next_states = next_states
self._current_dones = terminated + truncated
if self.memory is not None:
self.memory.add_samples(states=states, actions=actions, rewards=rewards, next_states=next_states,
terminated=terminated, truncated=truncated)
for memory in self.secondary_memories:
memory.add_samples(states=states, actions=actions, rewards=rewards, next_states=next_states,
terminated=terminated, truncated=truncated)
def pre_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called before the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
pass
def post_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called after the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
if timestep >= self._learning_starts:
self._update(timestep, timesteps)
# write tracking data and checkpoints
super().post_interaction(timestep, timesteps)
def _update(self, timestep: int, timesteps: int) -> None:
"""Algorithm's main update step
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
q_table = self.policy.table()
env_ids = torch.arange(self._current_rewards.shape[0]).view(-1, 1)
# compute next actions
        next_actions = torch.argmax(q_table[env_ids, self._current_next_states], dim=-1, keepdim=True).view(-1, 1)
# update Q-table
q_table[env_ids, self._current_states, self._current_actions] += self._learning_rate \
* (self._current_rewards + self._discount_factor * self._current_dones.logical_not() \
* q_table[env_ids, self._current_next_states, next_actions] \
- q_table[env_ids, self._current_states, self._current_actions])
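# ---------------------------------------------------------------------------
# Hedged worked-example sketch (editorial addition, not part of the skrl API):
# the single-environment, scalar form of the tabular update performed above,
#     Q(s, a) <- Q(s, a) + alpha * (r + gamma * (1 - done) * max_a' Q(s', a') - Q(s, a))
# `q_table` here is a plain (num_states, num_actions) tensor, unlike the
# per-environment table returned by self.policy.table().
def _example_td_update(q_table: torch.Tensor, state: int, action: int, reward: float,
                       next_state: int, done: bool, alpha: float = 0.5, gamma: float = 0.99) -> torch.Tensor:
    bootstrap = 0.0 if done else float(q_table[next_state].max())
    q_table[state, action] += alpha * (reward + gamma * bootstrap - q_table[state, action])
    return q_table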
| 9,186 | Python | 40.759091 | 123 | 0.609514 |
Toni-SM/skrl/skrl/agents/torch/cem/cem.py | from typing import Any, Mapping, Optional, Tuple, Union
import copy
import gym
import gymnasium
import torch
import torch.nn.functional as F
from skrl import logger
from skrl.agents.torch import Agent
from skrl.memories.torch import Memory
from skrl.models.torch import Model
# [start-config-dict-torch]
CEM_DEFAULT_CONFIG = {
"rollouts": 16, # number of rollouts before updating
"percentile": 0.70, # percentile to compute the reward bound [0, 1]
"discount_factor": 0.99, # discount factor (gamma)
"learning_rate": 1e-2, # learning rate
"learning_rate_scheduler": None, # learning rate scheduler class (see torch.optim.lr_scheduler)
"learning_rate_scheduler_kwargs": {}, # learning rate scheduler's kwargs (e.g. {"step_size": 1e-3})
"state_preprocessor": None, # state preprocessor class (see skrl.resources.preprocessors)
"state_preprocessor_kwargs": {}, # state preprocessor's kwargs (e.g. {"size": env.observation_space})
"random_timesteps": 0, # random exploration steps
"learning_starts": 0, # learning starts after this many steps
"rewards_shaper": None, # rewards shaping function: Callable(reward, timestep, timesteps) -> reward
"experiment": {
"directory": "", # experiment's parent directory
"experiment_name": "", # experiment name
"write_interval": 250, # TensorBoard writing interval (timesteps)
"checkpoint_interval": 1000, # interval for checkpoints (timesteps)
"store_separately": False, # whether to store checkpoints separately
"wandb": False, # whether to use Weights & Biases
"wandb_kwargs": {} # wandb kwargs (see https://docs.wandb.ai/ref/python/init)
}
}
# [end-config-dict-torch]
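# ---------------------------------------------------------------------------
# Hedged sketch (editorial addition): how the "percentile" entry above is
# typically turned into a reward bound for selecting elite samples in the
# cross-entropy method. Illustrative helper only; the agent's actual selection
# logic lives in its _update() method.
def _example_reward_bound(episode_returns: torch.Tensor, percentile: float = 0.70) -> float:
    # episodes whose return is at or above this bound are kept for the policy fit
    return torch.quantile(episode_returns, percentile).item()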
class CEM(Agent):
def __init__(self,
models: Mapping[str, Model],
memory: Optional[Union[Memory, Tuple[Memory]]] = None,
observation_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
action_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
device: Optional[Union[str, torch.device]] = None,
cfg: Optional[dict] = None) -> None:
"""Cross-Entropy Method (CEM)
https://ieeexplore.ieee.org/abstract/document/6796865/
:param models: Models used by the agent
:type models: dictionary of skrl.models.torch.Model
        :param memory: Memory to store the transitions.
If it is a tuple, the first element will be used for training and
for the rest only the environment transitions will be added
:type memory: skrl.memory.torch.Memory, list of skrl.memory.torch.Memory or None
:param observation_space: Observation/state space or shape (default: ``None``)
:type observation_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param action_space: Action space or shape (default: ``None``)
:type action_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or torch.device, optional
:param cfg: Configuration dictionary
:type cfg: dict
:raises KeyError: If the models dictionary is missing a required key
"""
_cfg = copy.deepcopy(CEM_DEFAULT_CONFIG)
_cfg.update(cfg if cfg is not None else {})
super().__init__(models=models,
memory=memory,
observation_space=observation_space,
action_space=action_space,
device=device,
cfg=_cfg)
# models
self.policy = self.models.get("policy", None)
# checkpoint models
self.checkpoint_modules["policy"] = self.policy
# configuration:
self._rollouts = self.cfg["rollouts"]
self._rollout = 0
self._percentile = self.cfg["percentile"]
self._discount_factor = self.cfg["discount_factor"]
self._learning_rate = self.cfg["learning_rate"]
self._learning_rate_scheduler = self.cfg["learning_rate_scheduler"]
self._state_preprocessor = self.cfg["state_preprocessor"]
self._random_timesteps = self.cfg["random_timesteps"]
self._learning_starts = self.cfg["learning_starts"]
self._rewards_shaper = self.cfg["rewards_shaper"]
self._episode_tracking = []
# set up optimizer and learning rate scheduler
if self.policy is not None:
self.optimizer = torch.optim.Adam(self.policy.parameters(), lr=self._learning_rate)
if self._learning_rate_scheduler is not None:
self.scheduler = self._learning_rate_scheduler(self.optimizer, **self.cfg["learning_rate_scheduler_kwargs"])
self.checkpoint_modules["optimizer"] = self.optimizer
# set up preprocessors
if self._state_preprocessor:
self._state_preprocessor = self._state_preprocessor(**self.cfg["state_preprocessor_kwargs"])
self.checkpoint_modules["state_preprocessor"] = self._state_preprocessor
else:
self._state_preprocessor = self._empty_preprocessor
def init(self, trainer_cfg: Optional[Mapping[str, Any]] = None) -> None:
"""Initialize the agent
"""
super().init(trainer_cfg=trainer_cfg)
# create tensors in memory
if self.memory is not None:
self.memory.create_tensor(name="states", size=self.observation_space, dtype=torch.float32)
self.memory.create_tensor(name="next_states", size=self.observation_space, dtype=torch.float32)
self.memory.create_tensor(name="actions", size=self.action_space, dtype=torch.int64)
self.memory.create_tensor(name="rewards", size=1, dtype=torch.float32)
self.memory.create_tensor(name="terminated", size=1, dtype=torch.bool)
self.tensors_names = ["states", "actions", "rewards", "next_states", "terminated"]
def act(self, states: torch.Tensor, timestep: int, timesteps: int) -> torch.Tensor:
"""Process the environment's states to make a decision (actions) using the main policy
:param states: Environment's states
:type states: torch.Tensor
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
:return: Actions
:rtype: torch.Tensor
"""
states = self._state_preprocessor(states)
# sample random actions
# TODO, check for stochasticity
if timestep < self._random_timesteps:
return self.policy.random_act({"states": states}, role="policy")
# sample stochastic actions
return self.policy.act({"states": states}, role="policy")
def record_transition(self,
states: torch.Tensor,
actions: torch.Tensor,
rewards: torch.Tensor,
next_states: torch.Tensor,
terminated: torch.Tensor,
truncated: torch.Tensor,
infos: Any,
timestep: int,
timesteps: int) -> None:
"""Record an environment transition in memory
:param states: Observations/states of the environment used to make the decision
:type states: torch.Tensor
:param actions: Actions taken by the agent
:type actions: torch.Tensor
:param rewards: Instant rewards achieved by the current actions
:type rewards: torch.Tensor
:param next_states: Next observations/states of the environment
:type next_states: torch.Tensor
:param terminated: Signals to indicate that episodes have terminated
:type terminated: torch.Tensor
:param truncated: Signals to indicate that episodes have been truncated
:type truncated: torch.Tensor
:param infos: Additional information about the environment
:type infos: Any type supported by the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
super().record_transition(states, actions, rewards, next_states, terminated, truncated, infos, timestep, timesteps)
# reward shaping
if self._rewards_shaper is not None:
rewards = self._rewards_shaper(rewards, timestep, timesteps)
if self.memory is not None:
self.memory.add_samples(states=states, actions=actions, rewards=rewards, next_states=next_states,
terminated=terminated, truncated=truncated)
for memory in self.secondary_memories:
memory.add_samples(states=states, actions=actions, rewards=rewards, next_states=next_states,
terminated=terminated, truncated=truncated)
# track episodes internally
if self._rollout:
indexes = torch.nonzero(terminated + truncated)
if indexes.numel():
for i in indexes[:, 0]:
self._episode_tracking[i.item()].append(self._rollout + 1)
else:
self._episode_tracking = [[0] for _ in range(rewards.size(-1))]
def pre_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called before the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
pass
def post_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called after the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
self._rollout += 1
if not self._rollout % self._rollouts and timestep >= self._learning_starts:
self._rollout = 0
self._update(timestep, timesteps)
# write tracking data and checkpoints
super().post_interaction(timestep, timesteps)
def _update(self, timestep: int, timesteps: int) -> None:
"""Algorithm's main update step
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
# sample all memory
sampled_states, sampled_actions, sampled_rewards, _, _ = self.memory.sample_all(names=self.tensors_names)[0]
sampled_states = self._state_preprocessor(sampled_states, train=True)
with torch.no_grad():
# compute discounted return threshold
limits = []
returns = []
for e in range(sampled_rewards.size(-1)):
for i, j in zip(self._episode_tracking[e][:-1], self._episode_tracking[e][1:]):
limits.append([e + i, e + j])
rewards = sampled_rewards[e + i: e + j]
returns.append(torch.sum(rewards * self._discount_factor ** \
torch.arange(rewards.size(0), device=rewards.device).flip(-1).view(rewards.size())))
if not len(returns):
logger.warning("No returns to update. Consider increasing the number of rollouts")
return
returns = torch.tensor(returns)
return_threshold = torch.quantile(returns, self._percentile, dim=-1)
# get elite states and actions
indexes = torch.nonzero(returns >= return_threshold)
elite_states = torch.cat([sampled_states[limits[i][0]:limits[i][1]] for i in indexes[:, 0]], dim=0)
elite_actions = torch.cat([sampled_actions[limits[i][0]:limits[i][1]] for i in indexes[:, 0]], dim=0)
# compute scores for the elite states
_, _, outputs = self.policy.act({"states": elite_states}, role="policy")
scores = outputs["net_output"]
# compute policy loss
policy_loss = F.cross_entropy(scores, elite_actions.view(-1))
# optimization step
self.optimizer.zero_grad()
policy_loss.backward()
self.optimizer.step()
# update learning rate
if self._learning_rate_scheduler:
self.scheduler.step()
# record data
self.track_data("Loss / Policy loss", policy_loss.item())
self.track_data("Coefficient / Return threshold", return_threshold.item())
self.track_data("Coefficient / Mean discounted returns", torch.mean(returns).item())
if self._learning_rate_scheduler:
self.track_data("Learning / Learning rate", self.scheduler.get_last_lr()[0])
| 13,279 | Python | 42.398693 | 124 | 0.609308 |
Toni-SM/skrl/skrl/agents/torch/cem/__init__.py | from skrl.agents.torch.cem.cem import CEM, CEM_DEFAULT_CONFIG
| 62 | Python | 30.499985 | 61 | 0.806452 |
Toni-SM/skrl/skrl/agents/torch/sac/__init__.py | from skrl.agents.torch.sac.sac import SAC, SAC_DEFAULT_CONFIG
from skrl.agents.torch.sac.sac_rnn import SAC_RNN
| 112 | Python | 36.666654 | 61 | 0.803571 |
Toni-SM/skrl/skrl/agents/torch/sac/sac.py | from typing import Any, Mapping, Optional, Tuple, Union
import copy
import itertools
import gym
import gymnasium
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from skrl.agents.torch import Agent
from skrl.memories.torch import Memory
from skrl.models.torch import Model
# [start-config-dict-torch]
SAC_DEFAULT_CONFIG = {
"gradient_steps": 1, # gradient steps
"batch_size": 64, # training batch size
"discount_factor": 0.99, # discount factor (gamma)
"polyak": 0.005, # soft update hyperparameter (tau)
"actor_learning_rate": 1e-3, # actor learning rate
"critic_learning_rate": 1e-3, # critic learning rate
"learning_rate_scheduler": None, # learning rate scheduler class (see torch.optim.lr_scheduler)
"learning_rate_scheduler_kwargs": {}, # learning rate scheduler's kwargs (e.g. {"step_size": 1e-3})
"state_preprocessor": None, # state preprocessor class (see skrl.resources.preprocessors)
"state_preprocessor_kwargs": {}, # state preprocessor's kwargs (e.g. {"size": env.observation_space})
"random_timesteps": 0, # random exploration steps
"learning_starts": 0, # learning starts after this many steps
"grad_norm_clip": 0, # clipping coefficient for the norm of the gradients
"learn_entropy": True, # learn entropy
"entropy_learning_rate": 1e-3, # entropy learning rate
"initial_entropy_value": 0.2, # initial entropy value
"target_entropy": None, # target entropy
"rewards_shaper": None, # rewards shaping function: Callable(reward, timestep, timesteps) -> reward
"experiment": {
"base_directory": "", # base directory for the experiment
"experiment_name": "", # experiment name
"write_interval": 250, # TensorBoard writing interval (timesteps)
"checkpoint_interval": 1000, # interval for checkpoints (timesteps)
"store_separately": False, # whether to store checkpoints separately
"wandb": False, # whether to use Weights & Biases
"wandb_kwargs": {} # wandb kwargs (see https://docs.wandb.ai/ref/python/init)
}
}
# [end-config-dict-torch]
class SAC(Agent):
def __init__(self,
models: Mapping[str, Model],
memory: Optional[Union[Memory, Tuple[Memory]]] = None,
observation_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
action_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
device: Optional[Union[str, torch.device]] = None,
cfg: Optional[dict] = None) -> None:
"""Soft Actor-Critic (SAC)
https://arxiv.org/abs/1801.01290
:param models: Models used by the agent
:type models: dictionary of skrl.models.torch.Model
        :param memory: Memory to store the transitions.
If it is a tuple, the first element will be used for training and
for the rest only the environment transitions will be added
:type memory: skrl.memory.torch.Memory, list of skrl.memory.torch.Memory or None
:param observation_space: Observation/state space or shape (default: ``None``)
:type observation_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param action_space: Action space or shape (default: ``None``)
:type action_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or torch.device, optional
:param cfg: Configuration dictionary
:type cfg: dict
:raises KeyError: If the models dictionary is missing a required key
"""
_cfg = copy.deepcopy(SAC_DEFAULT_CONFIG)
_cfg.update(cfg if cfg is not None else {})
super().__init__(models=models,
memory=memory,
observation_space=observation_space,
action_space=action_space,
device=device,
cfg=_cfg)
# models
self.policy = self.models.get("policy", None)
self.critic_1 = self.models.get("critic_1", None)
self.critic_2 = self.models.get("critic_2", None)
self.target_critic_1 = self.models.get("target_critic_1", None)
self.target_critic_2 = self.models.get("target_critic_2", None)
# checkpoint models
self.checkpoint_modules["policy"] = self.policy
self.checkpoint_modules["critic_1"] = self.critic_1
self.checkpoint_modules["critic_2"] = self.critic_2
self.checkpoint_modules["target_critic_1"] = self.target_critic_1
self.checkpoint_modules["target_critic_2"] = self.target_critic_2
if self.target_critic_1 is not None and self.target_critic_2 is not None:
# freeze target networks with respect to optimizers (update via .update_parameters())
self.target_critic_1.freeze_parameters(True)
self.target_critic_2.freeze_parameters(True)
# update target networks (hard update)
self.target_critic_1.update_parameters(self.critic_1, polyak=1)
self.target_critic_2.update_parameters(self.critic_2, polyak=1)
# configuration
self._gradient_steps = self.cfg["gradient_steps"]
self._batch_size = self.cfg["batch_size"]
self._discount_factor = self.cfg["discount_factor"]
self._polyak = self.cfg["polyak"]
self._actor_learning_rate = self.cfg["actor_learning_rate"]
self._critic_learning_rate = self.cfg["critic_learning_rate"]
self._learning_rate_scheduler = self.cfg["learning_rate_scheduler"]
self._state_preprocessor = self.cfg["state_preprocessor"]
self._random_timesteps = self.cfg["random_timesteps"]
self._learning_starts = self.cfg["learning_starts"]
self._grad_norm_clip = self.cfg["grad_norm_clip"]
self._entropy_learning_rate = self.cfg["entropy_learning_rate"]
self._learn_entropy = self.cfg["learn_entropy"]
self._entropy_coefficient = self.cfg["initial_entropy_value"]
self._rewards_shaper = self.cfg["rewards_shaper"]
# entropy
if self._learn_entropy:
self._target_entropy = self.cfg["target_entropy"]
if self._target_entropy is None:
if issubclass(type(self.action_space), gym.spaces.Box) or issubclass(type(self.action_space), gymnasium.spaces.Box):
self._target_entropy = -np.prod(self.action_space.shape).astype(np.float32)
elif issubclass(type(self.action_space), gym.spaces.Discrete) or issubclass(type(self.action_space), gymnasium.spaces.Discrete):
self._target_entropy = -self.action_space.n
else:
self._target_entropy = 0
self.log_entropy_coefficient = torch.log(torch.ones(1, device=self.device) * self._entropy_coefficient).requires_grad_(True)
self.entropy_optimizer = torch.optim.Adam([self.log_entropy_coefficient], lr=self._entropy_learning_rate)
self.checkpoint_modules["entropy_optimizer"] = self.entropy_optimizer
# set up optimizers and learning rate schedulers
if self.policy is not None and self.critic_1 is not None and self.critic_2 is not None:
self.policy_optimizer = torch.optim.Adam(self.policy.parameters(), lr=self._actor_learning_rate)
self.critic_optimizer = torch.optim.Adam(itertools.chain(self.critic_1.parameters(), self.critic_2.parameters()),
lr=self._critic_learning_rate)
if self._learning_rate_scheduler is not None:
self.policy_scheduler = self._learning_rate_scheduler(self.policy_optimizer, **self.cfg["learning_rate_scheduler_kwargs"])
self.critic_scheduler = self._learning_rate_scheduler(self.critic_optimizer, **self.cfg["learning_rate_scheduler_kwargs"])
self.checkpoint_modules["policy_optimizer"] = self.policy_optimizer
self.checkpoint_modules["critic_optimizer"] = self.critic_optimizer
# set up preprocessors
if self._state_preprocessor:
self._state_preprocessor = self._state_preprocessor(**self.cfg["state_preprocessor_kwargs"])
self.checkpoint_modules["state_preprocessor"] = self._state_preprocessor
else:
self._state_preprocessor = self._empty_preprocessor
def init(self, trainer_cfg: Optional[Mapping[str, Any]] = None) -> None:
"""Initialize the agent
"""
super().init(trainer_cfg=trainer_cfg)
self.set_mode("eval")
# create tensors in memory
if self.memory is not None:
self.memory.create_tensor(name="states", size=self.observation_space, dtype=torch.float32)
self.memory.create_tensor(name="next_states", size=self.observation_space, dtype=torch.float32)
self.memory.create_tensor(name="actions", size=self.action_space, dtype=torch.float32)
self.memory.create_tensor(name="rewards", size=1, dtype=torch.float32)
self.memory.create_tensor(name="terminated", size=1, dtype=torch.bool)
self._tensors_names = ["states", "actions", "rewards", "next_states", "terminated"]
def act(self, states: torch.Tensor, timestep: int, timesteps: int) -> torch.Tensor:
"""Process the environment's states to make a decision (actions) using the main policy
:param states: Environment's states
:type states: torch.Tensor
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
:return: Actions
:rtype: torch.Tensor
"""
# sample random actions
# TODO, check for stochasticity
if timestep < self._random_timesteps:
return self.policy.random_act({"states": self._state_preprocessor(states)}, role="policy")
# sample stochastic actions
actions, _, outputs = self.policy.act({"states": self._state_preprocessor(states)}, role="policy")
return actions, None, outputs
def record_transition(self,
states: torch.Tensor,
actions: torch.Tensor,
rewards: torch.Tensor,
next_states: torch.Tensor,
terminated: torch.Tensor,
truncated: torch.Tensor,
infos: Any,
timestep: int,
timesteps: int) -> None:
"""Record an environment transition in memory
:param states: Observations/states of the environment used to make the decision
:type states: torch.Tensor
:param actions: Actions taken by the agent
:type actions: torch.Tensor
:param rewards: Instant rewards achieved by the current actions
:type rewards: torch.Tensor
:param next_states: Next observations/states of the environment
:type next_states: torch.Tensor
:param terminated: Signals to indicate that episodes have terminated
:type terminated: torch.Tensor
:param truncated: Signals to indicate that episodes have been truncated
:type truncated: torch.Tensor
:param infos: Additional information about the environment
:type infos: Any type supported by the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
super().record_transition(states, actions, rewards, next_states, terminated, truncated, infos, timestep, timesteps)
if self.memory is not None:
# reward shaping
if self._rewards_shaper is not None:
rewards = self._rewards_shaper(rewards, timestep, timesteps)
            # store the transition in memory
self.memory.add_samples(states=states, actions=actions, rewards=rewards, next_states=next_states,
terminated=terminated, truncated=truncated)
for memory in self.secondary_memories:
memory.add_samples(states=states, actions=actions, rewards=rewards, next_states=next_states,
terminated=terminated, truncated=truncated)
def pre_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called before the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
pass
def post_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called after the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
if timestep >= self._learning_starts:
self.set_mode("train")
self._update(timestep, timesteps)
self.set_mode("eval")
# write tracking data and checkpoints
super().post_interaction(timestep, timesteps)
def _update(self, timestep: int, timesteps: int) -> None:
"""Algorithm's main update step
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
# sample a batch from memory
sampled_states, sampled_actions, sampled_rewards, sampled_next_states, sampled_dones = \
self.memory.sample(names=self._tensors_names, batch_size=self._batch_size)[0]
# gradient steps
for gradient_step in range(self._gradient_steps):
sampled_states = self._state_preprocessor(sampled_states, train=True)
sampled_next_states = self._state_preprocessor(sampled_next_states, train=True)
# compute target values
with torch.no_grad():
next_actions, next_log_prob, _ = self.policy.act({"states": sampled_next_states}, role="policy")
target_q1_values, _, _ = self.target_critic_1.act({"states": sampled_next_states, "taken_actions": next_actions}, role="target_critic_1")
target_q2_values, _, _ = self.target_critic_2.act({"states": sampled_next_states, "taken_actions": next_actions}, role="target_critic_2")
target_q_values = torch.min(target_q1_values, target_q2_values) - self._entropy_coefficient * next_log_prob
target_values = sampled_rewards + self._discount_factor * sampled_dones.logical_not() * target_q_values
# compute critic loss
critic_1_values, _, _ = self.critic_1.act({"states": sampled_states, "taken_actions": sampled_actions}, role="critic_1")
critic_2_values, _, _ = self.critic_2.act({"states": sampled_states, "taken_actions": sampled_actions}, role="critic_2")
critic_loss = (F.mse_loss(critic_1_values, target_values) + F.mse_loss(critic_2_values, target_values)) / 2
# optimization step (critic)
self.critic_optimizer.zero_grad()
critic_loss.backward()
if self._grad_norm_clip > 0:
nn.utils.clip_grad_norm_(itertools.chain(self.critic_1.parameters(), self.critic_2.parameters()), self._grad_norm_clip)
self.critic_optimizer.step()
# compute policy (actor) loss
actions, log_prob, _ = self.policy.act({"states": sampled_states}, role="policy")
critic_1_values, _, _ = self.critic_1.act({"states": sampled_states, "taken_actions": actions}, role="critic_1")
critic_2_values, _, _ = self.critic_2.act({"states": sampled_states, "taken_actions": actions}, role="critic_2")
policy_loss = (self._entropy_coefficient * log_prob - torch.min(critic_1_values, critic_2_values)).mean()
# optimization step (policy)
self.policy_optimizer.zero_grad()
policy_loss.backward()
if self._grad_norm_clip > 0:
nn.utils.clip_grad_norm_(self.policy.parameters(), self._grad_norm_clip)
self.policy_optimizer.step()
# entropy learning
if self._learn_entropy:
# compute entropy loss
entropy_loss = -(self.log_entropy_coefficient * (log_prob + self._target_entropy).detach()).mean()
# optimization step (entropy)
self.entropy_optimizer.zero_grad()
entropy_loss.backward()
self.entropy_optimizer.step()
# compute entropy coefficient
self._entropy_coefficient = torch.exp(self.log_entropy_coefficient.detach())
# update target networks
self.target_critic_1.update_parameters(self.critic_1, polyak=self._polyak)
self.target_critic_2.update_parameters(self.critic_2, polyak=self._polyak)
# update learning rate
if self._learning_rate_scheduler:
self.policy_scheduler.step()
self.critic_scheduler.step()
# record data
if self.write_interval > 0:
self.track_data("Loss / Policy loss", policy_loss.item())
self.track_data("Loss / Critic loss", critic_loss.item())
self.track_data("Q-network / Q1 (max)", torch.max(critic_1_values).item())
self.track_data("Q-network / Q1 (min)", torch.min(critic_1_values).item())
self.track_data("Q-network / Q1 (mean)", torch.mean(critic_1_values).item())
self.track_data("Q-network / Q2 (max)", torch.max(critic_2_values).item())
self.track_data("Q-network / Q2 (min)", torch.min(critic_2_values).item())
self.track_data("Q-network / Q2 (mean)", torch.mean(critic_2_values).item())
self.track_data("Target / Target (max)", torch.max(target_values).item())
self.track_data("Target / Target (min)", torch.min(target_values).item())
self.track_data("Target / Target (mean)", torch.mean(target_values).item())
if self._learn_entropy:
self.track_data("Loss / Entropy loss", entropy_loss.item())
self.track_data("Coefficient / Entropy coefficient", self._entropy_coefficient.item())
if self._learning_rate_scheduler:
self.track_data("Learning / Policy learning rate", self.policy_scheduler.get_last_lr()[0])
self.track_data("Learning / Critic learning rate", self.critic_scheduler.get_last_lr()[0])
| 19,278 | Python | 48.181122 | 153 | 0.615002 |
Toni-SM/skrl/skrl/agents/torch/td3/__init__.py | from skrl.agents.torch.td3.td3 import TD3, TD3_DEFAULT_CONFIG
from skrl.agents.torch.td3.td3_rnn import TD3_RNN
| 112 | Python | 36.666654 | 61 | 0.803571 |
Toni-SM/skrl/skrl/agents/torch/ddpg/__init__.py | from skrl.agents.torch.ddpg.ddpg import DDPG, DDPG_DEFAULT_CONFIG
from skrl.agents.torch.ddpg.ddpg_rnn import DDPG_RNN
| 119 | Python | 38.999987 | 65 | 0.815126 |
Toni-SM/skrl/skrl/agents/torch/dqn/dqn.py | from typing import Any, Mapping, Optional, Tuple, Union
import copy
import math
import gym
import gymnasium
import torch
import torch.nn.functional as F
from skrl.agents.torch import Agent
from skrl.memories.torch import Memory
from skrl.models.torch import Model
# [start-config-dict-torch]
DQN_DEFAULT_CONFIG = {
"gradient_steps": 1, # gradient steps
"batch_size": 64, # training batch size
"discount_factor": 0.99, # discount factor (gamma)
"polyak": 0.005, # soft update hyperparameter (tau)
"learning_rate": 1e-3, # learning rate
"learning_rate_scheduler": None, # learning rate scheduler class (see torch.optim.lr_scheduler)
"learning_rate_scheduler_kwargs": {}, # learning rate scheduler's kwargs (e.g. {"step_size": 1e-3})
"state_preprocessor": None, # state preprocessor class (see skrl.resources.preprocessors)
"state_preprocessor_kwargs": {}, # state preprocessor's kwargs (e.g. {"size": env.observation_space})
"random_timesteps": 0, # random exploration steps
"learning_starts": 0, # learning starts after this many steps
"update_interval": 1, # agent update interval
"target_update_interval": 10, # target network update interval
"exploration": {
"initial_epsilon": 1.0, # initial epsilon for epsilon-greedy exploration
"final_epsilon": 0.05, # final epsilon for epsilon-greedy exploration
"timesteps": 1000, # timesteps for epsilon-greedy decay
},
"rewards_shaper": None, # rewards shaping function: Callable(reward, timestep, timesteps) -> reward
"experiment": {
"directory": "", # experiment's parent directory
"experiment_name": "", # experiment name
"write_interval": 250, # TensorBoard writing interval (timesteps)
"checkpoint_interval": 1000, # interval for checkpoints (timesteps)
"store_separately": False, # whether to store checkpoints separately
"wandb": False, # whether to use Weights & Biases
"wandb_kwargs": {} # wandb kwargs (see https://docs.wandb.ai/ref/python/init)
}
}
# [end-config-dict-torch]
class DQN(Agent):
def __init__(self,
models: Mapping[str, Model],
memory: Optional[Union[Memory, Tuple[Memory]]] = None,
observation_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
action_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
device: Optional[Union[str, torch.device]] = None,
cfg: Optional[dict] = None) -> None:
"""Deep Q-Network (DQN)
https://arxiv.org/abs/1312.5602
:param models: Models used by the agent
:type models: dictionary of skrl.models.torch.Model
        :param memory: Memory to store the transitions.
If it is a tuple, the first element will be used for training and
for the rest only the environment transitions will be added
:type memory: skrl.memory.torch.Memory, list of skrl.memory.torch.Memory or None
:param observation_space: Observation/state space or shape (default: ``None``)
:type observation_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param action_space: Action space or shape (default: ``None``)
:type action_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or torch.device, optional
:param cfg: Configuration dictionary
:type cfg: dict
:raises KeyError: If the models dictionary is missing a required key
"""
_cfg = copy.deepcopy(DQN_DEFAULT_CONFIG)
_cfg.update(cfg if cfg is not None else {})
super().__init__(models=models,
memory=memory,
observation_space=observation_space,
action_space=action_space,
device=device,
cfg=_cfg)
# models
self.q_network = self.models.get("q_network", None)
self.target_q_network = self.models.get("target_q_network", None)
# checkpoint models
self.checkpoint_modules["q_network"] = self.q_network
self.checkpoint_modules["target_q_network"] = self.target_q_network
if self.target_q_network is not None:
# freeze target networks with respect to optimizers (update via .update_parameters())
self.target_q_network.freeze_parameters(True)
# update target networks (hard update)
self.target_q_network.update_parameters(self.q_network, polyak=1)
# configuration
self._gradient_steps = self.cfg["gradient_steps"]
self._batch_size = self.cfg["batch_size"]
self._discount_factor = self.cfg["discount_factor"]
self._polyak = self.cfg["polyak"]
self._learning_rate = self.cfg["learning_rate"]
self._learning_rate_scheduler = self.cfg["learning_rate_scheduler"]
self._state_preprocessor = self.cfg["state_preprocessor"]
self._random_timesteps = self.cfg["random_timesteps"]
self._learning_starts = self.cfg["learning_starts"]
self._update_interval = self.cfg["update_interval"]
self._target_update_interval = self.cfg["target_update_interval"]
self._exploration_initial_epsilon = self.cfg["exploration"]["initial_epsilon"]
self._exploration_final_epsilon = self.cfg["exploration"]["final_epsilon"]
self._exploration_timesteps = self.cfg["exploration"]["timesteps"]
self._rewards_shaper = self.cfg["rewards_shaper"]
# set up optimizer and learning rate scheduler
if self.q_network is not None:
self.optimizer = torch.optim.Adam(self.q_network.parameters(), lr=self._learning_rate)
if self._learning_rate_scheduler is not None:
self.scheduler = self._learning_rate_scheduler(self.optimizer, **self.cfg["learning_rate_scheduler_kwargs"])
self.checkpoint_modules["optimizer"] = self.optimizer
# set up preprocessors
if self._state_preprocessor:
self._state_preprocessor = self._state_preprocessor(**self.cfg["state_preprocessor_kwargs"])
self.checkpoint_modules["state_preprocessor"] = self._state_preprocessor
else:
self._state_preprocessor = self._empty_preprocessor
def init(self, trainer_cfg: Optional[Mapping[str, Any]] = None) -> None:
"""Initialize the agent
"""
super().init(trainer_cfg=trainer_cfg)
# create tensors in memory
if self.memory is not None:
self.memory.create_tensor(name="states", size=self.observation_space, dtype=torch.float32)
self.memory.create_tensor(name="next_states", size=self.observation_space, dtype=torch.float32)
self.memory.create_tensor(name="actions", size=self.action_space, dtype=torch.int64)
self.memory.create_tensor(name="rewards", size=1, dtype=torch.float32)
self.memory.create_tensor(name="terminated", size=1, dtype=torch.bool)
self.tensors_names = ["states", "actions", "rewards", "next_states", "terminated"]
def act(self, states: torch.Tensor, timestep: int, timesteps: int) -> torch.Tensor:
"""Process the environment's states to make a decision (actions) using the main policy
:param states: Environment's states
:type states: torch.Tensor
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
:return: Actions
:rtype: torch.Tensor
"""
states = self._state_preprocessor(states)
if not self._exploration_timesteps:
return torch.argmax(self.q_network.act({"states": states}, role="q_network")[0], dim=1, keepdim=True), None, None
# sample random actions
actions = self.q_network.random_act({"states": states}, role="q_network")[0]
if timestep < self._random_timesteps:
return actions, None, None
# sample actions with epsilon-greedy policy
epsilon = self._exploration_final_epsilon + (self._exploration_initial_epsilon - self._exploration_final_epsilon) \
* math.exp(-1.0 * timestep / self._exploration_timesteps)
indexes = (torch.rand(states.shape[0], device=self.device) >= epsilon).nonzero().view(-1)
if indexes.numel():
actions[indexes] = torch.argmax(self.q_network.act({"states": states[indexes]}, role="q_network")[0], dim=1, keepdim=True)
# record epsilon
self.track_data("Exploration / Exploration epsilon", epsilon)
return actions, None, None
def record_transition(self,
states: torch.Tensor,
actions: torch.Tensor,
rewards: torch.Tensor,
next_states: torch.Tensor,
terminated: torch.Tensor,
truncated: torch.Tensor,
infos: Any,
timestep: int,
timesteps: int) -> None:
"""Record an environment transition in memory
:param states: Observations/states of the environment used to make the decision
:type states: torch.Tensor
:param actions: Actions taken by the agent
:type actions: torch.Tensor
:param rewards: Instant rewards achieved by the current actions
:type rewards: torch.Tensor
:param next_states: Next observations/states of the environment
:type next_states: torch.Tensor
:param terminated: Signals to indicate that episodes have terminated
:type terminated: torch.Tensor
:param truncated: Signals to indicate that episodes have been truncated
:type truncated: torch.Tensor
:param infos: Additional information about the environment
:type infos: Any type supported by the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
super().record_transition(states, actions, rewards, next_states, terminated, truncated, infos, timestep, timesteps)
if self.memory is not None:
# reward shaping
if self._rewards_shaper is not None:
rewards = self._rewards_shaper(rewards, timestep, timesteps)
self.memory.add_samples(states=states, actions=actions, rewards=rewards, next_states=next_states,
terminated=terminated, truncated=truncated)
for memory in self.secondary_memories:
memory.add_samples(states=states, actions=actions, rewards=rewards, next_states=next_states,
terminated=terminated, truncated=truncated)
def pre_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called before the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
pass
def post_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called after the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
if timestep >= self._learning_starts and not timestep % self._update_interval:
self._update(timestep, timesteps)
# write tracking data and checkpoints
super().post_interaction(timestep, timesteps)
def _update(self, timestep: int, timesteps: int) -> None:
"""Algorithm's main update step
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
# sample a batch from memory
sampled_states, sampled_actions, sampled_rewards, sampled_next_states, sampled_dones = \
self.memory.sample(names=self.tensors_names, batch_size=self._batch_size)[0]
# gradient steps
for gradient_step in range(self._gradient_steps):
sampled_states = self._state_preprocessor(sampled_states, train=True)
sampled_next_states = self._state_preprocessor(sampled_next_states, train=True)
# compute target values
with torch.no_grad():
next_q_values, _, _ = self.target_q_network.act({"states": sampled_next_states}, role="target_q_network")
target_q_values = torch.max(next_q_values, dim=-1, keepdim=True)[0]
target_values = sampled_rewards + self._discount_factor * sampled_dones.logical_not() * target_q_values
# compute Q-network loss
q_values = torch.gather(self.q_network.act({"states": sampled_states}, role="q_network")[0],
dim=1, index=sampled_actions.long())
q_network_loss = F.mse_loss(q_values, target_values)
# optimize Q-network
self.optimizer.zero_grad()
q_network_loss.backward()
self.optimizer.step()
# update target network
if not timestep % self._target_update_interval:
self.target_q_network.update_parameters(self.q_network, polyak=self._polyak)
# update learning rate
if self._learning_rate_scheduler:
self.scheduler.step()
# record data
self.track_data("Loss / Q-network loss", q_network_loss.item())
self.track_data("Target / Target (max)", torch.max(target_values).item())
self.track_data("Target / Target (min)", torch.min(target_values).item())
self.track_data("Target / Target (mean)", torch.mean(target_values).item())
if self._learning_rate_scheduler:
self.track_data("Learning / Learning rate", self.scheduler.get_last_lr()[0])
| 14,654 | Python | 44.092308 | 134 | 0.617101 |
Toni-SM/skrl/skrl/agents/torch/dqn/__init__.py | from skrl.agents.torch.dqn.ddqn import DDQN, DDQN_DEFAULT_CONFIG
from skrl.agents.torch.dqn.dqn import DQN, DQN_DEFAULT_CONFIG
| 127 | Python | 41.666653 | 64 | 0.811024 |
Toni-SM/skrl/skrl/agents/torch/sarsa/__init__.py | from skrl.agents.torch.sarsa.sarsa import SARSA, SARSA_DEFAULT_CONFIG
| 70 | Python | 34.499983 | 69 | 0.828571 |
Toni-SM/skrl/skrl/agents/torch/a2c/a2c.py | from typing import Any, Mapping, Optional, Tuple, Union
import copy
import itertools
import gym
import gymnasium
import torch
import torch.nn as nn
import torch.nn.functional as F
from skrl.agents.torch import Agent
from skrl.memories.torch import Memory
from skrl.models.torch import Model
from skrl.resources.schedulers.torch import KLAdaptiveLR
# [start-config-dict-torch]
A2C_DEFAULT_CONFIG = {
"rollouts": 16, # number of rollouts before updating
"mini_batches": 1, # number of mini batches to use for updating
"discount_factor": 0.99, # discount factor (gamma)
"lambda": 0.95, # TD(lambda) coefficient (lam) for computing returns and advantages
"learning_rate": 1e-3, # learning rate
"learning_rate_scheduler": None, # learning rate scheduler class (see torch.optim.lr_scheduler)
"learning_rate_scheduler_kwargs": {}, # learning rate scheduler's kwargs (e.g. {"step_size": 1e-3})
"state_preprocessor": None, # state preprocessor class (see skrl.resources.preprocessors)
"state_preprocessor_kwargs": {}, # state preprocessor's kwargs (e.g. {"size": env.observation_space})
"value_preprocessor": None, # value preprocessor class (see skrl.resources.preprocessors)
"value_preprocessor_kwargs": {}, # value preprocessor's kwargs (e.g. {"size": 1})
"random_timesteps": 0, # random exploration steps
"learning_starts": 0, # learning starts after this many steps
"grad_norm_clip": 0.5, # clipping coefficient for the norm of the gradients
"entropy_loss_scale": 0.0, # entropy loss scaling factor
"rewards_shaper": None, # rewards shaping function: Callable(reward, timestep, timesteps) -> reward
"time_limit_bootstrap": False, # bootstrap at timeout termination (episode truncation)
"experiment": {
"directory": "", # experiment's parent directory
"experiment_name": "", # experiment name
"write_interval": 250, # TensorBoard writing interval (timesteps)
"checkpoint_interval": 1000, # interval for checkpoints (timesteps)
"store_separately": False, # whether to store checkpoints separately
"wandb": False, # whether to use Weights & Biases
"wandb_kwargs": {} # wandb kwargs (see https://docs.wandb.ai/ref/python/init)
}
}
# [end-config-dict-torch]
class A2C(Agent):
def __init__(self,
models: Mapping[str, Model],
memory: Optional[Union[Memory, Tuple[Memory]]] = None,
observation_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
action_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
device: Optional[Union[str, torch.device]] = None,
cfg: Optional[dict] = None) -> None:
"""Advantage Actor Critic (A2C)
https://arxiv.org/abs/1602.01783
:param models: Models used by the agent
:type models: dictionary of skrl.models.torch.Model
        :param memory: Memory to store the transitions.
If it is a tuple, the first element will be used for training and
for the rest only the environment transitions will be added
:type memory: skrl.memory.torch.Memory, list of skrl.memory.torch.Memory or None
:param observation_space: Observation/state space or shape (default: ``None``)
:type observation_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param action_space: Action space or shape (default: ``None``)
:type action_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or torch.device, optional
:param cfg: Configuration dictionary
:type cfg: dict
:raises KeyError: If the models dictionary is missing a required key
"""
_cfg = copy.deepcopy(A2C_DEFAULT_CONFIG)
_cfg.update(cfg if cfg is not None else {})
super().__init__(models=models,
memory=memory,
observation_space=observation_space,
action_space=action_space,
device=device,
cfg=_cfg)
# models
self.policy = self.models.get("policy", None)
self.value = self.models.get("value", None)
# checkpoint models
self.checkpoint_modules["policy"] = self.policy
self.checkpoint_modules["value"] = self.value
# configuration
self._mini_batches = self.cfg["mini_batches"]
self._rollouts = self.cfg["rollouts"]
self._rollout = 0
self._grad_norm_clip = self.cfg["grad_norm_clip"]
self._entropy_loss_scale = self.cfg["entropy_loss_scale"]
self._learning_rate = self.cfg["learning_rate"]
self._learning_rate_scheduler = self.cfg["learning_rate_scheduler"]
self._state_preprocessor = self.cfg["state_preprocessor"]
self._value_preprocessor = self.cfg["value_preprocessor"]
self._discount_factor = self.cfg["discount_factor"]
self._lambda = self.cfg["lambda"]
self._random_timesteps = self.cfg["random_timesteps"]
self._learning_starts = self.cfg["learning_starts"]
self._rewards_shaper = self.cfg["rewards_shaper"]
self._time_limit_bootstrap = self.cfg["time_limit_bootstrap"]
# set up optimizer and learning rate scheduler
if self.policy is not None and self.value is not None:
if self.policy is self.value:
self.optimizer = torch.optim.Adam(self.policy.parameters(), lr=self._learning_rate)
else:
self.optimizer = torch.optim.Adam(itertools.chain(self.policy.parameters(), self.value.parameters()),
lr=self._learning_rate)
if self._learning_rate_scheduler is not None:
self.scheduler = self._learning_rate_scheduler(self.optimizer, **self.cfg["learning_rate_scheduler_kwargs"])
self.checkpoint_modules["optimizer"] = self.optimizer
# set up preprocessors
if self._state_preprocessor:
self._state_preprocessor = self._state_preprocessor(**self.cfg["state_preprocessor_kwargs"])
self.checkpoint_modules["state_preprocessor"] = self._state_preprocessor
else:
self._state_preprocessor = self._empty_preprocessor
if self._value_preprocessor:
self._value_preprocessor = self._value_preprocessor(**self.cfg["value_preprocessor_kwargs"])
self.checkpoint_modules["value_preprocessor"] = self._value_preprocessor
else:
self._value_preprocessor = self._empty_preprocessor
def init(self, trainer_cfg: Optional[Mapping[str, Any]] = None) -> None:
"""Initialize the agent
"""
super().init(trainer_cfg=trainer_cfg)
self.set_mode("eval")
# create tensors in memory
if self.memory is not None:
self.memory.create_tensor(name="states", size=self.observation_space, dtype=torch.float32)
self.memory.create_tensor(name="actions", size=self.action_space, dtype=torch.float32)
self.memory.create_tensor(name="rewards", size=1, dtype=torch.float32)
self.memory.create_tensor(name="terminated", size=1, dtype=torch.bool)
self.memory.create_tensor(name="log_prob", size=1, dtype=torch.float32)
self.memory.create_tensor(name="values", size=1, dtype=torch.float32)
self.memory.create_tensor(name="returns", size=1, dtype=torch.float32)
self.memory.create_tensor(name="advantages", size=1, dtype=torch.float32)
self._tensors_names = ["states", "actions", "log_prob", "returns", "advantages"]
# create temporary variables needed for storage and computation
self._current_log_prob = None
self._current_next_states = None
def act(self, states: torch.Tensor, timestep: int, timesteps: int) -> torch.Tensor:
"""Process the environment's states to make a decision (actions) using the main policy
:param states: Environment's states
:type states: torch.Tensor
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
:return: Actions
:rtype: torch.Tensor
"""
# sample random actions
# TODO, check for stochasticity
if timestep < self._random_timesteps:
return self.policy.random_act({"states": self._state_preprocessor(states)}, role="policy")
# sample stochastic actions
actions, log_prob, outputs = self.policy.act({"states": self._state_preprocessor(states)}, role="policy")
self._current_log_prob = log_prob
return actions, log_prob, outputs
def record_transition(self,
states: torch.Tensor,
actions: torch.Tensor,
rewards: torch.Tensor,
next_states: torch.Tensor,
terminated: torch.Tensor,
truncated: torch.Tensor,
infos: Any,
timestep: int,
timesteps: int) -> None:
"""Record an environment transition in memory
:param states: Observations/states of the environment used to make the decision
:type states: torch.Tensor
:param actions: Actions taken by the agent
:type actions: torch.Tensor
:param rewards: Instant rewards achieved by the current actions
:type rewards: torch.Tensor
:param next_states: Next observations/states of the environment
:type next_states: torch.Tensor
:param terminated: Signals to indicate that episodes have terminated
:type terminated: torch.Tensor
:param truncated: Signals to indicate that episodes have been truncated
:type truncated: torch.Tensor
:param infos: Additional information about the environment
:type infos: Any type supported by the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
super().record_transition(states, actions, rewards, next_states, terminated, truncated, infos, timestep, timesteps)
if self.memory is not None:
self._current_next_states = next_states
# reward shaping
if self._rewards_shaper is not None:
rewards = self._rewards_shaper(rewards, timestep, timesteps)
# compute values
values, _, _ = self.value.act({"states": self._state_preprocessor(states)}, role="value")
values = self._value_preprocessor(values, inverse=True)
            # time-limit (truncation) bootstrapping
if self._time_limit_bootstrap:
rewards += self._discount_factor * values * truncated
            # store the transition in memory
self.memory.add_samples(states=states, actions=actions, rewards=rewards, next_states=next_states,
terminated=terminated, truncated=truncated, log_prob=self._current_log_prob, values=values)
for memory in self.secondary_memories:
memory.add_samples(states=states, actions=actions, rewards=rewards, next_states=next_states,
terminated=terminated, truncated=truncated, log_prob=self._current_log_prob, values=values)
def pre_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called before the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
pass
def post_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called after the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
self._rollout += 1
if not self._rollout % self._rollouts and timestep >= self._learning_starts:
self.set_mode("train")
self._update(timestep, timesteps)
self.set_mode("eval")
# write tracking data and checkpoints
super().post_interaction(timestep, timesteps)
def _update(self, timestep: int, timesteps: int) -> None:
"""Algorithm's main update step
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
def compute_gae(rewards: torch.Tensor,
dones: torch.Tensor,
values: torch.Tensor,
next_values: torch.Tensor,
discount_factor: float = 0.99,
lambda_coefficient: float = 0.95) -> torch.Tensor:
"""Compute the Generalized Advantage Estimator (GAE)
:param rewards: Rewards obtained by the agent
:type rewards: torch.Tensor
:param dones: Signals to indicate that episodes have ended
:type dones: torch.Tensor
:param values: Values obtained by the agent
:type values: torch.Tensor
:param next_values: Next values obtained by the agent
:type next_values: torch.Tensor
:param discount_factor: Discount factor
:type discount_factor: float
:param lambda_coefficient: Lambda coefficient
:type lambda_coefficient: float
:return: Generalized Advantage Estimator
:rtype: torch.Tensor
"""
advantage = 0
advantages = torch.zeros_like(rewards)
not_dones = dones.logical_not()
memory_size = rewards.shape[0]
# advantages computation
for i in reversed(range(memory_size)):
next_values = values[i + 1] if i < memory_size - 1 else last_values
advantage = rewards[i] - values[i] + discount_factor * not_dones[i] * (next_values + lambda_coefficient * advantage)
advantages[i] = advantage
# returns computation
returns = advantages + values
# normalize advantages
advantages = (advantages - advantages.mean()) / (advantages.std() + 1e-8)
return returns, advantages
# compute returns and advantages
with torch.no_grad():
self.value.train(False)
last_values, _, _ = self.value.act({"states": self._state_preprocessor(self._current_next_states.float())}, role="value")
self.value.train(True)
last_values = self._value_preprocessor(last_values, inverse=True)
values = self.memory.get_tensor_by_name("values")
returns, advantages = compute_gae(rewards=self.memory.get_tensor_by_name("rewards"),
dones=self.memory.get_tensor_by_name("terminated"),
values=values,
next_values=last_values,
discount_factor=self._discount_factor,
lambda_coefficient=self._lambda)
self.memory.set_tensor_by_name("values", self._value_preprocessor(values, train=True))
self.memory.set_tensor_by_name("returns", self._value_preprocessor(returns, train=True))
self.memory.set_tensor_by_name("advantages", advantages)
# sample mini-batches from memory
sampled_batches = self.memory.sample_all(names=self._tensors_names, mini_batches=self._mini_batches)
cumulative_policy_loss = 0
cumulative_entropy_loss = 0
cumulative_value_loss = 0
kl_divergences = []
# mini-batches loop
for sampled_states, sampled_actions, sampled_log_prob, sampled_returns, sampled_advantages in sampled_batches:
sampled_states = self._state_preprocessor(sampled_states, train=True)
_, next_log_prob, _ = self.policy.act({"states": sampled_states, "taken_actions": sampled_actions}, role="policy")
# compute approximate KL divergence for KLAdaptive learning rate scheduler
if self._learning_rate_scheduler:
if isinstance(self.scheduler, KLAdaptiveLR):
with torch.no_grad():
ratio = next_log_prob - sampled_log_prob
kl_divergence = ((torch.exp(ratio) - 1) - ratio).mean()
kl_divergences.append(kl_divergence)
# compute entropy loss
if self._entropy_loss_scale:
entropy_loss = -self._entropy_loss_scale * self.policy.get_entropy(role="policy").mean()
else:
entropy_loss = 0
# compute policy loss
policy_loss = -(sampled_advantages * next_log_prob).mean()
# compute value loss
predicted_values, _, _ = self.value.act({"states": sampled_states}, role="value")
value_loss = F.mse_loss(sampled_returns, predicted_values)
# optimization step
self.optimizer.zero_grad()
(policy_loss + entropy_loss + value_loss).backward()
if self._grad_norm_clip > 0:
if self.policy is self.value:
nn.utils.clip_grad_norm_(self.policy.parameters(), self._grad_norm_clip)
else:
nn.utils.clip_grad_norm_(itertools.chain(self.policy.parameters(), self.value.parameters()), self._grad_norm_clip)
self.optimizer.step()
# update cumulative losses
cumulative_policy_loss += policy_loss.item()
cumulative_value_loss += value_loss.item()
if self._entropy_loss_scale:
cumulative_entropy_loss += entropy_loss.item()
# update learning rate
if self._learning_rate_scheduler:
if isinstance(self.scheduler, KLAdaptiveLR):
self.scheduler.step(torch.tensor(kl_divergences).mean())
else:
self.scheduler.step()
# record data
self.track_data("Loss / Policy loss", cumulative_policy_loss / len(sampled_batches))
self.track_data("Loss / Value loss", cumulative_value_loss / len(sampled_batches))
if self._entropy_loss_scale:
self.track_data("Loss / Entropy loss", cumulative_entropy_loss / len(sampled_batches))
self.track_data("Policy / Standard deviation", self.policy.distribution(role="policy").stddev.mean().item())
if self._learning_rate_scheduler:
self.track_data("Learning / Learning rate", self.scheduler.get_last_lr()[0])
| 19,522 | Python | 44.93647 | 134 | 0.604702 |
Toni-SM/skrl/skrl/agents/torch/a2c/__init__.py | from skrl.agents.torch.a2c.a2c import A2C, A2C_DEFAULT_CONFIG
from skrl.agents.torch.a2c.a2c_rnn import A2C_RNN
| 112 | Python | 36.666654 | 61 | 0.803571 |
Toni-SM/skrl/skrl/agents/torch/ppo/__init__.py | from skrl.agents.torch.ppo.ppo import PPO, PPO_DEFAULT_CONFIG
from skrl.agents.torch.ppo.ppo_rnn import PPO_RNN
| 112 | Python | 36.666654 | 61 | 0.803571 |
Toni-SM/skrl/skrl/agents/torch/amp/amp.py | from typing import Any, Callable, Mapping, Optional, Tuple, Union
import copy
import itertools
import math
import gym
import gymnasium
import torch
import torch.nn as nn
import torch.nn.functional as F
from skrl.agents.torch import Agent
from skrl.memories.torch import Memory
from skrl.models.torch import Model
# [start-config-dict-torch]
AMP_DEFAULT_CONFIG = {
"rollouts": 16, # number of rollouts before updating
"learning_epochs": 6, # number of learning epochs during each update
"mini_batches": 2, # number of mini batches during each learning epoch
"discount_factor": 0.99, # discount factor (gamma)
"lambda": 0.95, # TD(lambda) coefficient (lam) for computing returns and advantages
"learning_rate": 5e-5, # learning rate
"learning_rate_scheduler": None, # learning rate scheduler class (see torch.optim.lr_scheduler)
"learning_rate_scheduler_kwargs": {}, # learning rate scheduler's kwargs (e.g. {"step_size": 1e-3})
"state_preprocessor": None, # state preprocessor class (see skrl.resources.preprocessors)
"state_preprocessor_kwargs": {}, # state preprocessor's kwargs (e.g. {"size": env.observation_space})
"value_preprocessor": None, # value preprocessor class (see skrl.resources.preprocessors)
"value_preprocessor_kwargs": {}, # value preprocessor's kwargs (e.g. {"size": 1})
"amp_state_preprocessor": None, # AMP state preprocessor class (see skrl.resources.preprocessors)
"amp_state_preprocessor_kwargs": {}, # AMP state preprocessor's kwargs (e.g. {"size": env.amp_observation_space})
"random_timesteps": 0, # random exploration steps
"learning_starts": 0, # learning starts after this many steps
"grad_norm_clip": 0.0, # clipping coefficient for the norm of the gradients
"ratio_clip": 0.2, # clipping coefficient for computing the clipped surrogate objective
"value_clip": 0.2, # clipping coefficient for computing the value loss (if clip_predicted_values is True)
"clip_predicted_values": False, # clip predicted values during value loss computation
"entropy_loss_scale": 0.0, # entropy loss scaling factor
"value_loss_scale": 2.5, # value loss scaling factor
"discriminator_loss_scale": 5.0, # discriminator loss scaling factor
"amp_batch_size": 512, # batch size for updating the reference motion dataset
"task_reward_weight": 0.0, # task-reward weight (wG)
"style_reward_weight": 1.0, # style-reward weight (wS)
"discriminator_batch_size": 0, # batch size for computing the discriminator loss (all samples if 0)
"discriminator_reward_scale": 2, # discriminator reward scaling factor
"discriminator_logit_regularization_scale": 0.05, # logit regularization scale factor for the discriminator loss
"discriminator_gradient_penalty_scale": 5, # gradient penalty scaling factor for the discriminator loss
"discriminator_weight_decay_scale": 0.0001, # weight decay scaling factor for the discriminator loss
"rewards_shaper": None, # rewards shaping function: Callable(reward, timestep, timesteps) -> reward
"time_limit_bootstrap": False, # bootstrap at timeout termination (episode truncation)
"experiment": {
"directory": "", # experiment's parent directory
"experiment_name": "", # experiment name
"write_interval": 250, # TensorBoard writing interval (timesteps)
"checkpoint_interval": 1000, # interval for checkpoints (timesteps)
"store_separately": False, # whether to store checkpoints separately
"wandb": False, # whether to use Weights & Biases
"wandb_kwargs": {} # wandb kwargs (see https://docs.wandb.ai/ref/python/init)
}
}
# [end-config-dict-torch]
class AMP(Agent):
def __init__(self,
models: Mapping[str, Model],
memory: Optional[Union[Memory, Tuple[Memory]]] = None,
observation_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
action_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
device: Optional[Union[str, torch.device]] = None,
cfg: Optional[dict] = None,
amp_observation_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
motion_dataset: Optional[Memory] = None,
reply_buffer: Optional[Memory] = None,
collect_reference_motions: Optional[Callable[[int], torch.Tensor]] = None,
collect_observation: Optional[Callable[[], torch.Tensor]] = None) -> None:
"""Adversarial Motion Priors (AMP)
https://arxiv.org/abs/2104.02180
The implementation is adapted from the NVIDIA IsaacGymEnvs
(https://github.com/NVIDIA-Omniverse/IsaacGymEnvs/blob/main/isaacgymenvs/learning/amp_continuous.py)
:param models: Models used by the agent
:type models: dictionary of skrl.models.torch.Model
        :param memory: Memory to store the transitions.
If it is a tuple, the first element will be used for training and
for the rest only the environment transitions will be added
:type memory: skrl.memory.torch.Memory, list of skrl.memory.torch.Memory or None
:param observation_space: Observation/state space or shape (default: ``None``)
:type observation_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param action_space: Action space or shape (default: ``None``)
:type action_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or torch.device, optional
:param cfg: Configuration dictionary
:type cfg: dict
:param amp_observation_space: AMP observation/state space or shape (default: ``None``)
:type amp_observation_space: int, tuple or list of int, gym.Space, gymnasium.Space or None
:param motion_dataset: Reference motion dataset: M (default: ``None``)
:type motion_dataset: skrl.memory.torch.Memory or None
        :param reply_buffer: Replay buffer for preventing discriminator overfitting: B (default: ``None``)
:type reply_buffer: skrl.memory.torch.Memory or None
:param collect_reference_motions: Callable to collect reference motions (default: ``None``)
:type collect_reference_motions: Callable[[int], torch.Tensor] or None
:param collect_observation: Callable to collect observation (default: ``None``)
:type collect_observation: Callable[[], torch.Tensor] or None
:raises KeyError: If the models dictionary is missing a required key
"""
_cfg = copy.deepcopy(AMP_DEFAULT_CONFIG)
_cfg.update(cfg if cfg is not None else {})
super().__init__(models=models,
memory=memory,
observation_space=observation_space,
action_space=action_space,
device=device,
cfg=_cfg)
self.amp_observation_space = amp_observation_space
self.motion_dataset = motion_dataset
self.reply_buffer = reply_buffer
self.collect_reference_motions = collect_reference_motions
self.collect_observation = collect_observation
# models
self.policy = self.models.get("policy", None)
self.value = self.models.get("value", None)
self.discriminator = self.models.get("discriminator", None)
# checkpoint models
self.checkpoint_modules["policy"] = self.policy
self.checkpoint_modules["value"] = self.value
self.checkpoint_modules["discriminator"] = self.discriminator
# configuration
self._learning_epochs = self.cfg["learning_epochs"]
self._mini_batches = self.cfg["mini_batches"]
self._rollouts = self.cfg["rollouts"]
self._rollout = 0
self._grad_norm_clip = self.cfg["grad_norm_clip"]
self._ratio_clip = self.cfg["ratio_clip"]
self._value_clip = self.cfg["value_clip"]
self._clip_predicted_values = self.cfg["clip_predicted_values"]
self._value_loss_scale = self.cfg["value_loss_scale"]
self._entropy_loss_scale = self.cfg["entropy_loss_scale"]
self._discriminator_loss_scale = self.cfg["discriminator_loss_scale"]
self._learning_rate = self.cfg["learning_rate"]
self._learning_rate_scheduler = self.cfg["learning_rate_scheduler"]
self._state_preprocessor = self.cfg["state_preprocessor"]
self._value_preprocessor = self.cfg["value_preprocessor"]
self._amp_state_preprocessor = self.cfg["amp_state_preprocessor"]
self._discount_factor = self.cfg["discount_factor"]
self._lambda = self.cfg["lambda"]
self._random_timesteps = self.cfg["random_timesteps"]
self._learning_starts = self.cfg["learning_starts"]
self._amp_batch_size = self.cfg["amp_batch_size"]
self._task_reward_weight = self.cfg["task_reward_weight"]
self._style_reward_weight = self.cfg["style_reward_weight"]
self._discriminator_batch_size = self.cfg["discriminator_batch_size"]
self._discriminator_reward_scale = self.cfg["discriminator_reward_scale"]
self._discriminator_logit_regularization_scale = self.cfg["discriminator_logit_regularization_scale"]
self._discriminator_gradient_penalty_scale = self.cfg["discriminator_gradient_penalty_scale"]
self._discriminator_weight_decay_scale = self.cfg["discriminator_weight_decay_scale"]
self._rewards_shaper = self.cfg["rewards_shaper"]
self._time_limit_bootstrap = self.cfg["time_limit_bootstrap"]
# set up optimizer and learning rate scheduler
if self.policy is not None and self.value is not None and self.discriminator is not None:
self.optimizer = torch.optim.Adam(itertools.chain(self.policy.parameters(),
self.value.parameters(),
self.discriminator.parameters()),
lr=self._learning_rate)
if self._learning_rate_scheduler is not None:
self.scheduler = self._learning_rate_scheduler(self.optimizer, **self.cfg["learning_rate_scheduler_kwargs"])
self.checkpoint_modules["optimizer"] = self.optimizer
# set up preprocessors
if self._state_preprocessor:
self._state_preprocessor = self._state_preprocessor(**self.cfg["state_preprocessor_kwargs"])
self.checkpoint_modules["state_preprocessor"] = self._state_preprocessor
else:
self._state_preprocessor = self._empty_preprocessor
if self._value_preprocessor:
self._value_preprocessor = self._value_preprocessor(**self.cfg["value_preprocessor_kwargs"])
self.checkpoint_modules["value_preprocessor"] = self._value_preprocessor
else:
self._value_preprocessor = self._empty_preprocessor
if self._amp_state_preprocessor:
self._amp_state_preprocessor = self._amp_state_preprocessor(**self.cfg["amp_state_preprocessor_kwargs"])
self.checkpoint_modules["amp_state_preprocessor"] = self._amp_state_preprocessor
else:
self._amp_state_preprocessor = self._empty_preprocessor
def init(self, trainer_cfg: Optional[Mapping[str, Any]] = None) -> None:
"""Initialize the agent
"""
super().init(trainer_cfg=trainer_cfg)
self.set_mode("eval")
# create tensors in memory
if self.memory is not None:
self.memory.create_tensor(name="states", size=self.observation_space, dtype=torch.float32)
self.memory.create_tensor(name="next_states", size=self.observation_space, dtype=torch.float32)
self.memory.create_tensor(name="actions", size=self.action_space, dtype=torch.float32)
self.memory.create_tensor(name="rewards", size=1, dtype=torch.float32)
self.memory.create_tensor(name="terminated", size=1, dtype=torch.bool)
self.memory.create_tensor(name="log_prob", size=1, dtype=torch.float32)
self.memory.create_tensor(name="values", size=1, dtype=torch.float32)
self.memory.create_tensor(name="returns", size=1, dtype=torch.float32)
self.memory.create_tensor(name="advantages", size=1, dtype=torch.float32)
self.memory.create_tensor(name="amp_states", size=self.amp_observation_space, dtype=torch.float32)
self.memory.create_tensor(name="next_values", size=1, dtype=torch.float32)
self.tensors_names = ["states", "actions", "rewards", "next_states", "terminated", \
"log_prob", "values", "returns", "advantages", "amp_states", "next_values"]
# create tensors for motion dataset and reply buffer
if self.motion_dataset is not None:
self.motion_dataset.create_tensor(name="states", size=self.amp_observation_space, dtype=torch.float32)
self.reply_buffer.create_tensor(name="states", size=self.amp_observation_space, dtype=torch.float32)
# initialize motion dataset
for _ in range(math.ceil(self.motion_dataset.memory_size / self._amp_batch_size)):
self.motion_dataset.add_samples(states=self.collect_reference_motions(self._amp_batch_size))
# create temporary variables needed for storage and computation
self._current_log_prob = None
self._current_states = None
def act(self, states: torch.Tensor, timestep: int, timesteps: int) -> torch.Tensor:
"""Process the environment's states to make a decision (actions) using the main policy
:param states: Environment's states
:type states: torch.Tensor
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
:return: Actions
:rtype: torch.Tensor
"""
# use collected states
if self._current_states is not None:
states = self._current_states
states = self._state_preprocessor(states)
# sample random actions
# TODO, check for stochasticity
if timestep < self._random_timesteps:
return self.policy.random_act({"states": states}, role="policy")
# sample stochastic actions
actions, log_prob, outputs = self.policy.act({"states": states}, role="policy")
self._current_log_prob = log_prob
return actions, log_prob, outputs
def record_transition(self,
states: torch.Tensor,
actions: torch.Tensor,
rewards: torch.Tensor,
next_states: torch.Tensor,
terminated: torch.Tensor,
truncated: torch.Tensor,
infos: Any,
timestep: int,
timesteps: int) -> None:
"""Record an environment transition in memory
:param states: Observations/states of the environment used to make the decision
:type states: torch.Tensor
:param actions: Actions taken by the agent
:type actions: torch.Tensor
:param rewards: Instant rewards achieved by the current actions
:type rewards: torch.Tensor
:param next_states: Next observations/states of the environment
:type next_states: torch.Tensor
:param terminated: Signals to indicate that episodes have terminated
:type terminated: torch.Tensor
:param truncated: Signals to indicate that episodes have been truncated
:type truncated: torch.Tensor
:param infos: Additional information about the environment
:type infos: Any type supported by the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
# use collected states
if self._current_states is not None:
states = self._current_states
super().record_transition(states, actions, rewards, next_states, terminated, truncated, infos, timestep, timesteps)
if self.memory is not None:
amp_states = infos["amp_obs"]
# reward shaping
if self._rewards_shaper is not None:
rewards = self._rewards_shaper(rewards, timestep, timesteps)
with torch.no_grad():
values, _, _ = self.value.act({"states": self._state_preprocessor(states)}, role="value")
values = self._value_preprocessor(values, inverse=True)
            # time-limit (truncation) bootstrapping
if self._time_limit_bootstrap:
rewards += self._discount_factor * values * truncated
with torch.no_grad():
next_values, _, _ = self.value.act({"states": self._state_preprocessor(next_states)}, role="value")
next_values = self._value_preprocessor(next_values, inverse=True)
next_values *= infos['terminate'].view(-1, 1).logical_not()
self.memory.add_samples(states=states, actions=actions, rewards=rewards, next_states=next_states, terminated=terminated, truncated=truncated,
log_prob=self._current_log_prob, values=values, amp_states=amp_states, next_values=next_values)
for memory in self.secondary_memories:
memory.add_samples(states=states, actions=actions, rewards=rewards, next_states=next_states, terminated=terminated, truncated=truncated,
log_prob=self._current_log_prob, values=values, amp_states=amp_states, next_values=next_values)
def pre_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called before the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
if self.collect_observation is not None:
self._current_states = self.collect_observation()
def post_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called after the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
self._rollout += 1
if not self._rollout % self._rollouts and timestep >= self._learning_starts:
self.set_mode("train")
self._update(timestep, timesteps)
self.set_mode("eval")
# write tracking data and checkpoints
super().post_interaction(timestep, timesteps)
def _update(self, timestep: int, timesteps: int) -> None:
"""Algorithm's main update step
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
def compute_gae(rewards: torch.Tensor,
dones: torch.Tensor,
values: torch.Tensor,
next_values: torch.Tensor,
discount_factor: float = 0.99,
lambda_coefficient: float = 0.95) -> torch.Tensor:
"""Compute the Generalized Advantage Estimator (GAE)
:param rewards: Rewards obtained by the agent
:type rewards: torch.Tensor
:param dones: Signals to indicate that episodes have ended
:type dones: torch.Tensor
:param values: Values obtained by the agent
:type values: torch.Tensor
:param next_values: Next values obtained by the agent
:type next_values: torch.Tensor
:param discount_factor: Discount factor
:type discount_factor: float
:param lambda_coefficient: Lambda coefficient
:type lambda_coefficient: float
:return: Generalized Advantage Estimator
:rtype: torch.Tensor
"""
advantage = 0
advantages = torch.zeros_like(rewards)
not_dones = dones.logical_not()
memory_size = rewards.shape[0]
# advantages computation
for i in reversed(range(memory_size)):
advantage = rewards[i] - values[i] + discount_factor * (next_values[i] + lambda_coefficient * not_dones[i] * advantage)
advantages[i] = advantage
# returns computation
returns = advantages + values
# normalize advantages
advantages = (advantages - advantages.mean()) / (advantages.std() + 1e-8)
return returns, advantages
# update dataset of reference motions
self.motion_dataset.add_samples(states=self.collect_reference_motions(self._amp_batch_size))
# compute combined rewards
rewards = self.memory.get_tensor_by_name("rewards")
amp_states = self.memory.get_tensor_by_name("amp_states")
with torch.no_grad():
amp_logits, _, _ = self.discriminator.act({"states": self._amp_state_preprocessor(amp_states)}, role="discriminator")
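            # with D = sigmoid(amp_logits) as the discriminator's "reference motion" probability,
            # the style reward is -log(max(1 - D, 1e-4)): larger when the policy fools the discriminator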
style_reward = -torch.log(torch.maximum(1 - 1 / (1 + torch.exp(-amp_logits)), torch.tensor(0.0001, device=self.device)))
style_reward *= self._discriminator_reward_scale
combined_rewards = self._task_reward_weight * rewards + self._style_reward_weight * style_reward
# compute returns and advantages
values = self.memory.get_tensor_by_name("values")
        next_values = self.memory.get_tensor_by_name("next_values")
returns, advantages = compute_gae(rewards=combined_rewards,
dones=self.memory.get_tensor_by_name("terminated"),
values=values,
next_values=next_values,
discount_factor=self._discount_factor,
lambda_coefficient=self._lambda)
self.memory.set_tensor_by_name("values", self._value_preprocessor(values, train=True))
self.memory.set_tensor_by_name("returns", self._value_preprocessor(returns, train=True))
self.memory.set_tensor_by_name("advantages", advantages)
# sample mini-batches from memory
sampled_batches = self.memory.sample_all(names=self.tensors_names, mini_batches=self._mini_batches)
sampled_motion_batches = self.motion_dataset.sample(names=["states"],
batch_size=self.memory.memory_size * self.memory.num_envs,
mini_batches=self._mini_batches)
if len(self.reply_buffer):
sampled_replay_batches = self.reply_buffer.sample(names=["states"],
batch_size=self.memory.memory_size * self.memory.num_envs,
mini_batches=self._mini_batches)
else:
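            # while the replay buffer is still empty (e.g. on the first update), reuse the AMP states
            # collected in the current rollout in place of replayed samples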
sampled_replay_batches = [[batches[self.tensors_names.index("amp_states")]] for batches in sampled_batches]
cumulative_policy_loss = 0
cumulative_entropy_loss = 0
cumulative_value_loss = 0
cumulative_discriminator_loss = 0
# learning epochs
for epoch in range(self._learning_epochs):
# mini-batches loop
for batch_index, (sampled_states, sampled_actions, _, _, _, \
sampled_log_prob, sampled_values, sampled_returns, sampled_advantages, \
sampled_amp_states, _) in enumerate(sampled_batches):
sampled_states = self._state_preprocessor(sampled_states, train=True)
_, next_log_prob, _ = self.policy.act({"states": sampled_states, "taken_actions": sampled_actions}, role="policy")
# compute entropy loss
if self._entropy_loss_scale:
entropy_loss = -self._entropy_loss_scale * self.policy.get_entropy(role="policy").mean()
else:
entropy_loss = 0
# compute policy loss
ratio = torch.exp(next_log_prob - sampled_log_prob)
surrogate = sampled_advantages * ratio
surrogate_clipped = sampled_advantages * torch.clip(ratio, 1.0 - self._ratio_clip, 1.0 + self._ratio_clip)
policy_loss = -torch.min(surrogate, surrogate_clipped).mean()
# compute value loss
predicted_values, _, _ = self.value.act({"states": sampled_states}, role="value")
if self._clip_predicted_values:
predicted_values = sampled_values + torch.clip(predicted_values - sampled_values,
min=-self._value_clip,
max=self._value_clip)
value_loss = self._value_loss_scale * F.mse_loss(sampled_returns, predicted_values)
# compute discriminator loss
if self._discriminator_batch_size:
sampled_amp_states = self._amp_state_preprocessor(sampled_amp_states[0:self._discriminator_batch_size], train=True)
sampled_amp_replay_states = self._amp_state_preprocessor(
sampled_replay_batches[batch_index][0][0:self._discriminator_batch_size], train=True)
sampled_amp_motion_states = self._amp_state_preprocessor(
sampled_motion_batches[batch_index][0][0:self._discriminator_batch_size], train=True)
else:
sampled_amp_states = self._amp_state_preprocessor(sampled_amp_states, train=True)
sampled_amp_replay_states = self._amp_state_preprocessor(sampled_replay_batches[batch_index][0], train=True)
sampled_amp_motion_states = self._amp_state_preprocessor(sampled_motion_batches[batch_index][0], train=True)
sampled_amp_motion_states.requires_grad_(True)
amp_logits, _, _ = self.discriminator.act({"states": sampled_amp_states}, role="discriminator")
amp_replay_logits, _, _ = self.discriminator.act({"states": sampled_amp_replay_states}, role="discriminator")
amp_motion_logits, _, _ = self.discriminator.act({"states": sampled_amp_motion_states}, role="discriminator")
amp_cat_logits = torch.cat([amp_logits, amp_replay_logits], dim=0)
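                # the discriminator is trained as a binary classifier: policy and replay samples
                # are labeled 0 ("generated"), reference-motion samples are labeled 1 ("real")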
# discriminator prediction loss
discriminator_loss = 0.5 * (nn.BCEWithLogitsLoss()(amp_cat_logits, torch.zeros_like(amp_cat_logits)) \
+ torch.nn.BCEWithLogitsLoss()(amp_motion_logits, torch.ones_like(amp_motion_logits)))
# discriminator logit regularization
if self._discriminator_logit_regularization_scale:
logit_weights = torch.flatten(list(self.discriminator.modules())[-1].weight)
discriminator_loss += self._discriminator_logit_regularization_scale * torch.sum(torch.square(logit_weights))
# discriminator gradient penalty
if self._discriminator_gradient_penalty_scale:
amp_motion_gradient = torch.autograd.grad(amp_motion_logits,
sampled_amp_motion_states,
grad_outputs=torch.ones_like(amp_motion_logits),
create_graph=True,
retain_graph=True,
only_inputs=True)
gradient_penalty = torch.sum(torch.square(amp_motion_gradient[0]), dim=-1).mean()
discriminator_loss += self._discriminator_gradient_penalty_scale * gradient_penalty
# discriminator weight decay
if self._discriminator_weight_decay_scale:
weights = [torch.flatten(module.weight) for module in self.discriminator.modules() \
if isinstance(module, torch.nn.Linear)]
weight_decay = torch.sum(torch.square(torch.cat(weights, dim=-1)))
discriminator_loss += self._discriminator_weight_decay_scale * weight_decay
discriminator_loss *= self._discriminator_loss_scale
# optimization step
self.optimizer.zero_grad()
(policy_loss + entropy_loss + value_loss + discriminator_loss).backward()
if self._grad_norm_clip > 0:
nn.utils.clip_grad_norm_(itertools.chain(self.policy.parameters(),
self.value.parameters(),
self.discriminator.parameters()), self._grad_norm_clip)
self.optimizer.step()
# update cumulative losses
cumulative_policy_loss += policy_loss.item()
cumulative_value_loss += value_loss.item()
if self._entropy_loss_scale:
cumulative_entropy_loss += entropy_loss.item()
cumulative_discriminator_loss += discriminator_loss.item()
# update learning rate
if self._learning_rate_scheduler:
self.scheduler.step()
        # update AMP replay buffer
self.reply_buffer.add_samples(states=amp_states.view(-1, amp_states.shape[-1]))
# record data
self.track_data("Loss / Policy loss", cumulative_policy_loss / (self._learning_epochs * self._mini_batches))
self.track_data("Loss / Value loss", cumulative_value_loss / (self._learning_epochs * self._mini_batches))
if self._entropy_loss_scale:
self.track_data("Loss / Entropy loss", cumulative_entropy_loss / (self._learning_epochs * self._mini_batches))
self.track_data("Loss / Discriminator loss", cumulative_discriminator_loss / (self._learning_epochs * self._mini_batches))
self.track_data("Policy / Standard deviation", self.policy.distribution(role="policy").stddev.mean().item())
if self._learning_rate_scheduler:
self.track_data("Learning / Learning rate", self.scheduler.get_last_lr()[0])
| 31,430 | Python | 52.454082 | 153 | 0.605663 |
Toni-SM/skrl/skrl/agents/torch/amp/__init__.py | from skrl.agents.torch.amp.amp import AMP, AMP_DEFAULT_CONFIG
| 62 | Python | 30.499985 | 61 | 0.806452 |
Toni-SM/skrl/skrl/agents/torch/rpo/__init__.py | from skrl.agents.torch.rpo.rpo import RPO, RPO_DEFAULT_CONFIG
from skrl.agents.torch.rpo.rpo_rnn import RPO_RNN
| 112 | Python | 36.666654 | 61 | 0.803571 |
Toni-SM/skrl/skrl/agents/jax/base.py | from typing import Any, Mapping, Optional, Tuple, Union
import collections
import copy
import datetime
import os
import pickle
import gym
import gymnasium
import flax
import jax
import numpy as np
from skrl import config, logger
from skrl.memories.jax import Memory
from skrl.models.jax import Model
class Agent:
def __init__(self,
models: Mapping[str, Model],
memory: Optional[Union[Memory, Tuple[Memory]]] = None,
observation_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
action_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
device: Optional[Union[str, jax.Device]] = None,
cfg: Optional[dict] = None) -> None:
"""Base class that represent a RL agent
:param models: Models used by the agent
:type models: dictionary of skrl.models.jax.Model
        :param memory: Memory to store the transitions.
If it is a tuple, the first element will be used for training and
for the rest only the environment transitions will be added
:type memory: skrl.memory.jax.Memory, list of skrl.memory.jax.Memory or None
:param observation_space: Observation/state space or shape (default: ``None``)
:type observation_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param action_space: Action space or shape (default: ``None``)
:type action_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or jax.Device, optional
:param cfg: Configuration dictionary
:type cfg: dict
"""
self._jax = config.jax.backend == "jax"
self.models = models
self.observation_space = observation_space
self.action_space = action_space
self.cfg = cfg if cfg is not None else {}
if device is None:
self.device = jax.devices()[0]
else:
self.device = device if isinstance(device, jax.Device) else jax.devices(device)[0]
if type(memory) is list:
self.memory = memory[0]
self.secondary_memories = memory[1:]
else:
self.memory = memory
self.secondary_memories = []
# convert the models to their respective device
for model in self.models.values():
if model is not None:
pass
self.tracking_data = collections.defaultdict(list)
self.write_interval = self.cfg.get("experiment", {}).get("write_interval", 1000)
self._track_rewards = collections.deque(maxlen=100)
self._track_timesteps = collections.deque(maxlen=100)
self._cumulative_rewards = None
self._cumulative_timesteps = None
self.training = True
# checkpoint
self.checkpoint_modules = {}
self.checkpoint_interval = self.cfg.get("experiment", {}).get("checkpoint_interval", 1000)
self.checkpoint_store_separately = self.cfg.get("experiment", {}).get("store_separately", False)
self.checkpoint_best_modules = {"timestep": 0, "reward": -2 ** 31, "saved": False, "modules": {}}
# experiment directory
directory = self.cfg.get("experiment", {}).get("directory", "")
experiment_name = self.cfg.get("experiment", {}).get("experiment_name", "")
if not directory:
directory = os.path.join(os.getcwd(), "runs")
if not experiment_name:
experiment_name = "{}_{}".format(datetime.datetime.now().strftime("%y-%m-%d_%H-%M-%S-%f"), self.__class__.__name__)
self.experiment_dir = os.path.join(directory, experiment_name)
def __str__(self) -> str:
"""Generate a representation of the agent as string
:return: Representation of the agent as string
:rtype: str
"""
string = f"Agent: {repr(self)}"
for k, v in self.cfg.items():
if type(v) is dict:
string += f"\n |-- {k}"
for k1, v1 in v.items():
string += f"\n | |-- {k1}: {v1}"
else:
string += f"\n |-- {k}: {v}"
return string
def _empty_preprocessor(self, _input: Any, *args, **kwargs) -> Any:
"""Empty preprocess method
This method is defined because PyTorch multiprocessing can't pickle lambdas
:param _input: Input to preprocess
:type _input: Any
:return: Preprocessed input
:rtype: Any
"""
return _input
def _get_internal_value(self, _module: Any) -> Any:
"""Get internal module/variable state/value
:param _module: Module or variable
:type _module: Any
:return: Module/variable state/value
:rtype: Any
"""
return _module.state_dict.params if hasattr(_module, "state_dict") else _module
def init(self, trainer_cfg: Optional[Mapping[str, Any]] = None) -> None:
"""Initialize the agent
This method should be called before the agent is used.
        It will initialize the TensorBoard writer (and optionally Weights & Biases) and create the checkpoints directory
:param trainer_cfg: Trainer configuration
:type trainer_cfg: dict, optional
"""
# setup Weights & Biases
if self.cfg.get("experiment", {}).get("wandb", False):
# save experiment config
trainer_cfg = trainer_cfg if trainer_cfg is not None else {}
try:
models_cfg = {k: v.net._modules for (k, v) in self.models.items()}
except AttributeError:
models_cfg = {k: v._modules for (k, v) in self.models.items()}
            config = {**self.cfg, **trainer_cfg, **models_cfg}
# set default values
wandb_kwargs = copy.deepcopy(self.cfg.get("experiment", {}).get("wandb_kwargs", {}))
wandb_kwargs.setdefault("name", os.path.split(self.experiment_dir)[-1])
wandb_kwargs.setdefault("sync_tensorboard", True)
wandb_kwargs.setdefault("config", {})
wandb_kwargs["config"].update(config)
# init Weights & Biases
import wandb
wandb.init(**wandb_kwargs)
# main entry to log data for consumption and visualization by TensorBoard
if self.write_interval > 0:
self.writer = None
# tensorboard via torch SummaryWriter
try:
from torch.utils.tensorboard import SummaryWriter
self.writer = SummaryWriter(log_dir=self.experiment_dir)
except ImportError as e:
pass
# tensorboard via tensorflow
if self.writer is None:
try:
import tensorflow
class _SummaryWriter:
def __init__(self, log_dir):
self.writer = tensorflow.summary.create_file_writer(logdir=log_dir)
def add_scalar(self, tag, value, step):
with self.writer.as_default():
tensorflow.summary.scalar(tag, value, step=step)
self.writer = _SummaryWriter(log_dir=self.experiment_dir)
except ImportError as e:
pass
# tensorboard via tensorboardX
if self.writer is None:
try:
import tensorboardX
self.writer = tensorboardX.SummaryWriter(log_dir=self.experiment_dir)
except ImportError as e:
pass
# show warnings and exit
if self.writer is None:
logger.warning("No package found to write events to Tensorboard.")
logger.warning("Set agent's `write_interval` setting to 0 to disable writing")
logger.warning("or install one of the following packages:")
logger.warning(" - PyTorch: https://pytorch.org/get-started/locally")
logger.warning(" - TensorFlow: https://www.tensorflow.org/install")
logger.warning(" - TensorboardX: https://github.com/lanpa/tensorboardX#install")
logger.warning("The current running process will be terminated.")
exit()
if self.checkpoint_interval > 0:
os.makedirs(os.path.join(self.experiment_dir, "checkpoints"), exist_ok=True)
def track_data(self, tag: str, value: float) -> None:
"""Track data to TensorBoard
Currently only scalar data are supported
:param tag: Data identifier (e.g. 'Loss / policy loss')
:type tag: str
:param value: Value to track
:type value: float
"""
self.tracking_data[tag].append(value)
def write_tracking_data(self, timestep: int, timesteps: int) -> None:
"""Write tracking data to TensorBoard
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
for k, v in self.tracking_data.items():
if k.endswith("(min)"):
self.writer.add_scalar(k, np.min(v), timestep)
elif k.endswith("(max)"):
self.writer.add_scalar(k, np.max(v), timestep)
else:
self.writer.add_scalar(k, np.mean(v), timestep)
# reset data containers for next iteration
self._track_rewards.clear()
self._track_timesteps.clear()
self.tracking_data.clear()
def write_checkpoint(self, timestep: int, timesteps: int) -> None:
"""Write checkpoint (modules) to disk
The checkpoints are saved in the directory 'checkpoints' in the experiment directory.
The name of the checkpoint is the current timestep if timestep is not None, otherwise it is the current time.
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
tag = str(timestep if timestep is not None else datetime.datetime.now().strftime("%y-%m-%d_%H-%M-%S-%f"))
# separated modules
if self.checkpoint_store_separately:
for name, module in self.checkpoint_modules.items():
with open(os.path.join(self.experiment_dir, "checkpoints", f"{name}_{tag}.pickle"), "wb") as file:
pickle.dump(flax.serialization.to_bytes(self._get_internal_value(module)), file, protocol=4)
# whole agent
else:
modules = {}
for name, module in self.checkpoint_modules.items():
modules[name] = flax.serialization.to_bytes(self._get_internal_value(module))
with open(os.path.join(self.experiment_dir, "checkpoints", f"agent_{tag}.pickle"), "wb") as file:
pickle.dump(modules, file, protocol=4)
# best modules
if self.checkpoint_best_modules["modules"] and not self.checkpoint_best_modules["saved"]:
# separated modules
if self.checkpoint_store_separately:
for name, module in self.checkpoint_modules.items():
with open(os.path.join(self.experiment_dir, "checkpoints", f"best_{name}.pickle"), "wb") as file:
pickle.dump(flax.serialization.to_bytes(self.checkpoint_best_modules["modules"][name]), file, protocol=4)
# whole agent
else:
modules = {}
for name, module in self.checkpoint_modules.items():
modules[name] = flax.serialization.to_bytes(self.checkpoint_best_modules["modules"][name])
with open(os.path.join(self.experiment_dir, "checkpoints", "best_agent.pickle"), "wb") as file:
pickle.dump(modules, file, protocol=4)
self.checkpoint_best_modules["saved"] = True
def act(self, states: Union[np.ndarray, jax.Array], timestep: int, timesteps: int) -> Union[np.ndarray, jax.Array]:
"""Process the environment's states to make a decision (actions) using the main policy
:param states: Environment's states
:type states: np.ndarray or jax.Array
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
:raises NotImplementedError: The method is not implemented by the inheriting classes
:return: Actions
:rtype: np.ndarray or jax.Array
"""
raise NotImplementedError
def record_transition(self,
states: Union[np.ndarray, jax.Array],
actions: Union[np.ndarray, jax.Array],
rewards: Union[np.ndarray, jax.Array],
next_states: Union[np.ndarray, jax.Array],
terminated: Union[np.ndarray, jax.Array],
truncated: Union[np.ndarray, jax.Array],
infos: Any,
timestep: int,
timesteps: int) -> None:
"""Record an environment transition in memory (to be implemented by the inheriting classes)
Inheriting classes must call this method to record episode information (rewards, timesteps, etc.).
In addition to recording environment transition (such as states, rewards, etc.), agent information can be recorded.
:param states: Observations/states of the environment used to make the decision
:type states: np.ndarray or jax.Array
:param actions: Actions taken by the agent
:type actions: np.ndarray or jax.Array
:param rewards: Instant rewards achieved by the current actions
:type rewards: np.ndarray or jax.Array
:param next_states: Next observations/states of the environment
:type next_states: np.ndarray or jax.Array
:param terminated: Signals to indicate that episodes have terminated
:type terminated: np.ndarray or jax.Array
:param truncated: Signals to indicate that episodes have been truncated
:type truncated: np.ndarray or jax.Array
:param infos: Additional information about the environment
:type infos: Any type supported by the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
if self.write_interval > 0:
# compute the cumulative sum of the rewards and timesteps
if self._cumulative_rewards is None:
self._cumulative_rewards = np.zeros_like(rewards, dtype=np.float32)
self._cumulative_timesteps = np.zeros_like(rewards, dtype=np.int32)
# TODO: find a better way to avoid https://jax.readthedocs.io/en/latest/errors.html#jax.errors.ConcretizationTypeError
if self._jax:
rewards = jax.device_get(rewards)
terminated = jax.device_get(terminated)
truncated = jax.device_get(truncated)
self._cumulative_rewards += rewards
self._cumulative_timesteps += 1
# check ended episodes
finished_episodes = (terminated + truncated).nonzero()[0]
if finished_episodes.size:
                # store cumulative rewards and timesteps
self._track_rewards.extend(self._cumulative_rewards[finished_episodes][:, 0].reshape(-1).tolist())
self._track_timesteps.extend(self._cumulative_timesteps[finished_episodes][:, 0].reshape(-1).tolist())
# reset the cumulative rewards and timesteps
self._cumulative_rewards[finished_episodes] = 0
self._cumulative_timesteps[finished_episodes] = 0
# record data
self.tracking_data["Reward / Instantaneous reward (max)"].append(np.max(rewards).item())
self.tracking_data["Reward / Instantaneous reward (min)"].append(np.min(rewards).item())
self.tracking_data["Reward / Instantaneous reward (mean)"].append(np.mean(rewards).item())
if len(self._track_rewards):
track_rewards = np.array(self._track_rewards)
track_timesteps = np.array(self._track_timesteps)
self.tracking_data["Reward / Total reward (max)"].append(np.max(track_rewards))
self.tracking_data["Reward / Total reward (min)"].append(np.min(track_rewards))
self.tracking_data["Reward / Total reward (mean)"].append(np.mean(track_rewards))
self.tracking_data["Episode / Total timesteps (max)"].append(np.max(track_timesteps))
self.tracking_data["Episode / Total timesteps (min)"].append(np.min(track_timesteps))
self.tracking_data["Episode / Total timesteps (mean)"].append(np.mean(track_timesteps))
def set_mode(self, mode: str) -> None:
"""Set the model mode (training or evaluation)
:param mode: Mode: 'train' for training or 'eval' for evaluation
:type mode: str
"""
for model in self.models.values():
if model is not None:
model.set_mode(mode)
def set_running_mode(self, mode: str) -> None:
"""Set the current running mode (training or evaluation)
This method sets the value of the ``training`` property (boolean).
This property can be used to know if the agent is running in training or evaluation mode.
:param mode: Mode: 'train' for training or 'eval' for evaluation
:type mode: str
"""
self.training = mode == "train"
def save(self, path: str) -> None:
"""Save the agent to the specified path
:param path: Path to save the model to
:type path: str
"""
modules = {}
for name, module in self.checkpoint_modules.items():
modules[name] = flax.serialization.to_bytes(self._get_internal_value(module))
# HACK: Does it make sense to use https://github.com/google/orbax
# file.write(flax.serialization.to_bytes(modules))
with open(path, "wb") as file:
pickle.dump(modules, file, protocol=4)
def load(self, path: str) -> None:
"""Load the model from the specified path
:param path: Path to load the model from
:type path: str
"""
with open(path, "rb") as file:
modules = pickle.load(file)
if type(modules) is dict:
for name, data in modules.items():
module = self.checkpoint_modules.get(name, None)
if module is not None:
if hasattr(module, "state_dict"):
params = flax.serialization.from_bytes(module.state_dict.params, data)
module.state_dict = module.state_dict.replace(params=params)
else:
pass # TODO: raise NotImplementedError
else:
logger.warning(f"Cannot load the {name} module. The agent doesn't have such an instance")
def migrate(self,
path: str,
name_map: Mapping[str, Mapping[str, str]] = {},
auto_mapping: bool = True,
verbose: bool = False) -> bool:
"""Migrate the specified extrernal checkpoint to the current agent
:raises NotImplementedError: Not yet implemented
"""
raise NotImplementedError
def pre_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called before the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
pass
def post_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called after the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
timestep += 1
# update best models and write checkpoints
if timestep > 1 and self.checkpoint_interval > 0 and not timestep % self.checkpoint_interval:
# update best models
reward = np.mean(self.tracking_data.get("Reward / Total reward (mean)", -2 ** 31))
if reward > self.checkpoint_best_modules["reward"]:
self.checkpoint_best_modules["timestep"] = timestep
self.checkpoint_best_modules["reward"] = reward
self.checkpoint_best_modules["saved"] = False
self.checkpoint_best_modules["modules"] = {k: copy.deepcopy(self._get_internal_value(v)) for k, v in self.checkpoint_modules.items()}
# write checkpoints
self.write_checkpoint(timestep, timesteps)
# write to tensorboard
if timestep > 1 and self.write_interval > 0 and not timestep % self.write_interval:
self.write_tracking_data(timestep, timesteps)
def _update(self, timestep: int, timesteps: int) -> None:
"""Algorithm's main update step
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
:raises NotImplementedError: The method is not implemented by the inheriting classes
"""
raise NotImplementedError
| 21,860 | Python | 43.432927 | 149 | 0.595288 |
Toni-SM/skrl/skrl/agents/jax/__init__.py | from skrl.agents.jax.base import Agent
| 39 | Python | 18.999991 | 38 | 0.820513 |
Toni-SM/skrl/skrl/agents/jax/cem/cem.py | from typing import Any, Mapping, Optional, Tuple, Union
import copy
import gym
import gymnasium
import jax
import jax.numpy as jnp
import numpy as np
import optax
from skrl import logger
from skrl.agents.jax import Agent
from skrl.memories.jax import Memory
from skrl.models.jax import Model
from skrl.resources.optimizers.jax import Adam
# [start-config-dict-jax]
CEM_DEFAULT_CONFIG = {
"rollouts": 16, # number of rollouts before updating
"percentile": 0.70, # percentile to compute the reward bound [0, 1]
"discount_factor": 0.99, # discount factor (gamma)
"learning_rate": 1e-2, # learning rate
"learning_rate_scheduler": None, # learning rate scheduler class (see torch.optim.lr_scheduler)
"learning_rate_scheduler_kwargs": {}, # learning rate scheduler's kwargs (e.g. {"step_size": 1e-3})
"state_preprocessor": None, # state preprocessor class (see skrl.resources.preprocessors)
"state_preprocessor_kwargs": {}, # state preprocessor's kwargs (e.g. {"size": env.observation_space})
"random_timesteps": 0, # random exploration steps
"learning_starts": 0, # learning starts after this many steps
"rewards_shaper": None, # rewards shaping function: Callable(reward, timestep, timesteps) -> reward
"experiment": {
"directory": "", # experiment's parent directory
"experiment_name": "", # experiment name
"write_interval": 250, # TensorBoard writing interval (timesteps)
"checkpoint_interval": 1000, # interval for checkpoints (timesteps)
"store_separately": False, # whether to store checkpoints separately
"wandb": False, # whether to use Weights & Biases
"wandb_kwargs": {} # wandb kwargs (see https://docs.wandb.ai/ref/python/init)
}
}
# [end-config-dict-jax]
class CEM(Agent):
def __init__(self,
models: Mapping[str, Model],
memory: Optional[Union[Memory, Tuple[Memory]]] = None,
observation_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
action_space: Optional[Union[int, Tuple[int], gym.Space, gymnasium.Space]] = None,
device: Optional[Union[str, jax.Device]] = None,
cfg: Optional[dict] = None) -> None:
"""Cross-Entropy Method (CEM)
https://ieeexplore.ieee.org/abstract/document/6796865/
:param models: Models used by the agent
:type models: dictionary of skrl.models.jax.Model
        :param memory: Memory to store the transitions.
If it is a tuple, the first element will be used for training and
for the rest only the environment transitions will be added
:type memory: skrl.memory.jax.Memory, list of skrl.memory.jax.Memory or None
:param observation_space: Observation/state space or shape (default: ``None``)
:type observation_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param action_space: Action space or shape (default: ``None``)
:type action_space: int, tuple or list of int, gym.Space, gymnasium.Space or None, optional
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or jax.Device, optional
:param cfg: Configuration dictionary
:type cfg: dict
:raises KeyError: If the models dictionary is missing a required key
"""
# _cfg = copy.deepcopy(CEM_DEFAULT_CONFIG) # TODO: TypeError: cannot pickle 'jax.Device' object
_cfg = CEM_DEFAULT_CONFIG
_cfg.update(cfg if cfg is not None else {})
super().__init__(models=models,
memory=memory,
observation_space=observation_space,
action_space=action_space,
device=device,
cfg=_cfg)
# models
self.policy = self.models.get("policy", None)
# checkpoint models
self.checkpoint_modules["policy"] = self.policy
# configuration
self._rollouts = self.cfg["rollouts"]
self._rollout = 0
self._percentile = self.cfg["percentile"]
self._discount_factor = self.cfg["discount_factor"]
self._learning_rate = self.cfg["learning_rate"]
self._learning_rate_scheduler = self.cfg["learning_rate_scheduler"]
self._state_preprocessor = self.cfg["state_preprocessor"]
self._random_timesteps = self.cfg["random_timesteps"]
self._learning_starts = self.cfg["learning_starts"]
self._rewards_shaper = self.cfg["rewards_shaper"]
self._episode_tracking = []
# set up optimizer and learning rate scheduler
if self.policy is not None:
self.optimizer = Adam(model=self.policy, lr=self._learning_rate)
if self._learning_rate_scheduler is not None:
self.scheduler = self._learning_rate_scheduler(self.optimizer, **self.cfg["learning_rate_scheduler_kwargs"])
self.checkpoint_modules["optimizer"] = self.optimizer
# set up preprocessors
if self._state_preprocessor:
self._state_preprocessor = self._state_preprocessor(**self.cfg["state_preprocessor_kwargs"])
self.checkpoint_modules["state_preprocessor"] = self._state_preprocessor
else:
self._state_preprocessor = self._empty_preprocessor
def init(self, trainer_cfg: Optional[Mapping[str, Any]] = None) -> None:
"""Initialize the agent
"""
super().init(trainer_cfg=trainer_cfg)
self.set_mode("eval")
# create tensors in memory
if self.memory is not None:
self.memory.create_tensor(name="states", size=self.observation_space, dtype=jnp.float32)
self.memory.create_tensor(name="next_states", size=self.observation_space, dtype=jnp.float32)
self.memory.create_tensor(name="actions", size=self.action_space, dtype=jnp.int32)
self.memory.create_tensor(name="rewards", size=1, dtype=jnp.float32)
self.memory.create_tensor(name="terminated", size=1, dtype=jnp.int8)
self.tensors_names = ["states", "actions", "rewards"]
# set up models for just-in-time compilation with XLA
self.policy.apply = jax.jit(self.policy.apply, static_argnums=2)
def act(self, states: Union[np.ndarray, jax.Array], timestep: int, timesteps: int) -> Union[np.ndarray, jax.Array]:
"""Process the environment's states to make a decision (actions) using the main policy
:param states: Environment's states
:type states: np.ndarray or jax.Array
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
:return: Actions
:rtype: np.ndarray or jax.Array
"""
# sample random actions
# TODO, check for stochasticity
if timestep < self._random_timesteps:
return self.policy.random_act({"states": self._state_preprocessor(states)}, role="policy")
# sample stochastic actions
actions, _, outputs = self.policy.act({"states": self._state_preprocessor(states)}, role="policy")
if not self._jax: # numpy backend
actions = jax.device_get(actions)
return actions, None, outputs
def record_transition(self,
states: Union[np.ndarray, jax.Array],
actions: Union[np.ndarray, jax.Array],
rewards: Union[np.ndarray, jax.Array],
next_states: Union[np.ndarray, jax.Array],
terminated: Union[np.ndarray, jax.Array],
truncated: Union[np.ndarray, jax.Array],
infos: Any,
timestep: int,
timesteps: int) -> None:
"""Record an environment transition in memory
:param states: Observations/states of the environment used to make the decision
:type states: np.ndarray or jax.Array
:param actions: Actions taken by the agent
:type actions: np.ndarray or jax.Array
:param rewards: Instant rewards achieved by the current actions
:type rewards: np.ndarray or jax.Array
:param next_states: Next observations/states of the environment
:type next_states: np.ndarray or jax.Array
:param terminated: Signals to indicate that episodes have terminated
:type terminated: np.ndarray or jax.Array
:param truncated: Signals to indicate that episodes have been truncated
:type truncated: np.ndarray or jax.Array
:param infos: Additional information about the environment
:type infos: Any type supported by the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
super().record_transition(states, actions, rewards, next_states, terminated, truncated, infos, timestep, timesteps)
if self.memory is not None:
# reward shaping
if self._rewards_shaper is not None:
rewards = self._rewards_shaper(rewards, timestep, timesteps)
            # store transition in memory
self.memory.add_samples(states=states, actions=actions, rewards=rewards, next_states=next_states,
terminated=terminated, truncated=truncated)
for memory in self.secondary_memories:
memory.add_samples(states=states, actions=actions, rewards=rewards, next_states=next_states,
terminated=terminated, truncated=truncated)
# track episodes internally
if self._rollout:
indexes = (terminated + truncated).nonzero()[0]
if indexes.size:
for i in indexes:
self._episode_tracking[i.item()].append(self._rollout + 1)
else:
self._episode_tracking = [[0] for _ in range(rewards.shape[-1])]
def pre_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called before the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
pass
def post_interaction(self, timestep: int, timesteps: int) -> None:
"""Callback called after the interaction with the environment
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
self._rollout += 1
if not self._rollout % self._rollouts and timestep >= self._learning_starts:
self._rollout = 0
self.set_mode("train")
self._update(timestep, timesteps)
self.set_mode("eval")
# write tracking data and checkpoints
super().post_interaction(timestep, timesteps)
def _update(self, timestep: int, timesteps: int) -> None:
"""Algorithm's main update step
:param timestep: Current timestep
:type timestep: int
:param timesteps: Number of timesteps
:type timesteps: int
"""
# sample all memory
sampled_states, sampled_actions, sampled_rewards = self.memory.sample_all(names=self.tensors_names)[0]
sampled_states = self._state_preprocessor(sampled_states, train=True)
if self._jax: # move to numpy backend
sampled_states = jax.device_get(sampled_states)
sampled_actions = jax.device_get(sampled_actions)
sampled_rewards = jax.device_get(sampled_rewards)
# compute discounted return threshold
limits = []
returns = []
for e in range(sampled_rewards.shape[-1]):
for i, j in zip(self._episode_tracking[e][:-1], self._episode_tracking[e][1:]):
limits.append([e + i, e + j])
rewards = sampled_rewards[e + i: e + j]
returns.append(np.sum(rewards * self._discount_factor ** \
np.flip(np.arange(rewards.shape[0]), axis=-1).reshape(rewards.shape)))
if not len(returns):
logger.warning("No returns to update. Consider increasing the number of rollouts")
return
returns = np.array(returns)
return_threshold = np.quantile(returns, self._percentile, axis=-1)
# get elite states and actions
indexes = (returns >= return_threshold).nonzero()[0]
elite_states = np.concatenate([sampled_states[limits[i][0]:limits[i][1]] for i in indexes], axis=0)
elite_actions = np.concatenate([sampled_actions[limits[i][0]:limits[i][1]] for i in indexes], axis=0).reshape(-1)
# compute policy loss
def _policy_loss(params):
# compute scores for the elite states
_, _, outputs = self.policy.act({"states": elite_states}, "policy", params)
scores = outputs["net_output"]
# HACK: return optax.softmax_cross_entropy_with_integer_labels(scores, elite_actions).mean()
labels = jax.nn.one_hot(elite_actions, self.action_space.n)
return optax.softmax_cross_entropy(scores, labels).mean()
policy_loss, grad = jax.value_and_grad(_policy_loss, has_aux=False)(self.policy.state_dict.params)
# optimization step (policy)
self.optimizer = self.optimizer.step(grad, self.policy)
# update learning rate
if self._learning_rate_scheduler:
self.scheduler.step()
# record data
self.track_data("Loss / Policy loss", policy_loss.item())
self.track_data("Coefficient / Return threshold", return_threshold.item())
self.track_data("Coefficient / Mean discounted returns", returns.mean().item())
if self._learning_rate_scheduler:
self.track_data("Learning / Learning rate", self.scheduler.get_last_lr()[0])
| 14,421 | Python | 43.239264 | 124 | 0.615145 |
Toni-SM/skrl/skrl/agents/jax/cem/__init__.py | from skrl.agents.jax.cem.cem import CEM, CEM_DEFAULT_CONFIG
| 60 | Python | 29.499985 | 59 | 0.8 |
Toni-SM/skrl/skrl/agents/jax/sac/__init__.py | from skrl.agents.jax.sac.sac import SAC, SAC_DEFAULT_CONFIG
| 60 | Python | 29.499985 | 59 | 0.8 |
Toni-SM/skrl/skrl/agents/jax/td3/__init__.py | from skrl.agents.jax.td3.td3 import TD3, TD3_DEFAULT_CONFIG
| 60 | Python | 29.499985 | 59 | 0.8 |
Toni-SM/skrl/skrl/agents/jax/ddpg/__init__.py | from skrl.agents.jax.ddpg.ddpg import DDPG, DDPG_DEFAULT_CONFIG
| 64 | Python | 31.499984 | 63 | 0.8125 |
Toni-SM/skrl/skrl/agents/jax/dqn/__init__.py | from skrl.agents.jax.dqn.ddqn import DDQN, DDQN_DEFAULT_CONFIG
from skrl.agents.jax.dqn.dqn import DQN, DQN_DEFAULT_CONFIG
| 123 | Python | 40.33332 | 62 | 0.804878 |
Toni-SM/skrl/skrl/agents/jax/a2c/__init__.py | from skrl.agents.jax.a2c.a2c import A2C, A2C_DEFAULT_CONFIG
| 60 | Python | 29.499985 | 59 | 0.8 |
Toni-SM/skrl/skrl/agents/jax/ppo/__init__.py | from skrl.agents.jax.ppo.ppo import PPO, PPO_DEFAULT_CONFIG
| 60 | Python | 29.499985 | 59 | 0.8 |
Toni-SM/skrl/skrl/agents/jax/rpo/__init__.py | from skrl.agents.jax.rpo.rpo import RPO, RPO_DEFAULT_CONFIG
| 60 | Python | 29.499985 | 59 | 0.8 |
Toni-SM/skrl/skrl/resources/schedulers/torch/kl_adaptive.py | from typing import Optional, Union
import torch
from torch.optim.lr_scheduler import _LRScheduler
class KLAdaptiveLR(_LRScheduler):
def __init__(self,
optimizer: torch.optim.Optimizer,
kl_threshold: float = 0.008,
min_lr: float = 1e-6,
max_lr: float = 1e-2,
kl_factor: float = 2,
lr_factor: float = 1.5,
last_epoch: int = -1,
verbose: bool = False) -> None:
"""Adaptive KL scheduler
Adjusts the learning rate according to the KL divergence.
The implementation is adapted from the rl_games library
(https://github.com/Denys88/rl_games/blob/master/rl_games/common/schedulers.py)
.. note::
This scheduler is only available for PPO at the moment.
Applying it to other agents will not change the learning rate
Example::
>>> scheduler = KLAdaptiveLR(optimizer, kl_threshold=0.01)
>>> for epoch in range(100):
>>> # ...
>>> kl_divergence = ...
>>> scheduler.step(kl_divergence)
:param optimizer: Wrapped optimizer
:type optimizer: torch.optim.Optimizer
:param kl_threshold: Threshold for KL divergence (default: ``0.008``)
:type kl_threshold: float, optional
:param min_lr: Lower bound for learning rate (default: ``1e-6``)
:type min_lr: float, optional
:param max_lr: Upper bound for learning rate (default: ``1e-2``)
:type max_lr: float, optional
:param kl_factor: The number used to modify the KL divergence threshold (default: ``2``)
:type kl_factor: float, optional
:param lr_factor: The number used to modify the learning rate (default: ``1.5``)
:type lr_factor: float, optional
:param last_epoch: The index of last epoch (default: ``-1``)
:type last_epoch: int, optional
:param verbose: Verbose mode (default: ``False``)
:type verbose: bool, optional
"""
super().__init__(optimizer, last_epoch, verbose)
self.kl_threshold = kl_threshold
self.min_lr = min_lr
self.max_lr = max_lr
self._kl_factor = kl_factor
self._lr_factor = lr_factor
self._last_lr = [group['lr'] for group in self.optimizer.param_groups]
def step(self, kl: Optional[Union[torch.Tensor, float]] = None, epoch: Optional[int] = None) -> None:
"""
Step scheduler
Example::
>>> kl = torch.distributions.kl_divergence(p, q)
>>> kl
tensor([0.0332, 0.0500, 0.0383, ..., 0.0076, 0.0240, 0.0164])
>>> scheduler.step(kl.mean())
>>> kl = 0.0046
>>> scheduler.step(kl)
:param kl: KL divergence (default: ``None``)
If None, no adjustment is made.
If tensor, the number of elements must be 1
:type kl: torch.Tensor, float or None, optional
:param epoch: Epoch (default: ``None``)
:type epoch: int, optional
"""
if kl is not None:
for group in self.optimizer.param_groups:
if kl > self.kl_threshold * self._kl_factor:
group['lr'] = max(group['lr'] / self._lr_factor, self.min_lr)
elif kl < self.kl_threshold / self._kl_factor:
group['lr'] = min(group['lr'] * self._lr_factor, self.max_lr)
self._last_lr = [group['lr'] for group in self.optimizer.param_groups]
| 3,588 | Python | 38.010869 | 105 | 0.557971 |
Toni-SM/skrl/skrl/resources/schedulers/torch/__init__.py | from skrl.resources.schedulers.torch.kl_adaptive import KLAdaptiveLR
KLAdaptiveRL = KLAdaptiveLR # known typo (compatibility with versions prior to 1.0.0)
| 158 | Python | 30.799994 | 86 | 0.810127 |
Toni-SM/skrl/skrl/resources/schedulers/jax/kl_adaptive.py | from typing import Optional, Union
import numpy as np
class KLAdaptiveLR:
def __init__(self,
init_value: float,
kl_threshold: float = 0.008,
min_lr: float = 1e-6,
max_lr: float = 1e-2,
kl_factor: float = 2,
lr_factor: float = 1.5) -> None:
"""Adaptive KL scheduler
Adjusts the learning rate according to the KL divergence.
The implementation is adapted from the rl_games library
(https://github.com/Denys88/rl_games/blob/master/rl_games/common/schedulers.py)
.. note::
This scheduler is only available for PPO at the moment.
Applying it to other agents will not change the learning rate
Example::
>>> scheduler = KLAdaptiveLR(init_value=1e-3, kl_threshold=0.01)
>>> for epoch in range(100):
>>> # ...
>>> kl_divergence = ...
>>> scheduler.step(kl_divergence)
>>> scheduler.lr # get the updated learning rate
:param init_value: Initial learning rate
:type init_value: float
:param kl_threshold: Threshold for KL divergence (default: ``0.008``)
:type kl_threshold: float, optional
:param min_lr: Lower bound for learning rate (default: ``1e-6``)
:type min_lr: float, optional
:param max_lr: Upper bound for learning rate (default: ``1e-2``)
:type max_lr: float, optional
:param kl_factor: The number used to modify the KL divergence threshold (default: ``2``)
:type kl_factor: float, optional
:param lr_factor: The number used to modify the learning rate (default: ``1.5``)
:type lr_factor: float, optional
"""
self.kl_threshold = kl_threshold
self.min_lr = min_lr
self.max_lr = max_lr
self._kl_factor = kl_factor
self._lr_factor = lr_factor
self._lr = init_value
@property
def lr(self) -> float:
"""Learning rate
"""
return self._lr
def step(self, kl: Optional[Union[np.ndarray, float]] = None) -> None:
"""
Step scheduler
Example::
>>> kl = [0.0332, 0.0500, 0.0383, 0.0456, 0.0076, 0.0240, 0.0164]
>>> kl
[0.0332, 0.05, 0.0383, 0.0456, 0.0076, 0.024, 0.0164]
>>> scheduler.step(np.mean(kl))
>>> kl = 0.0046
>>> scheduler.step(kl)
:param kl: KL divergence (default: ``None``)
If None, no adjustment is made.
If array, the number of elements must be 1
:type kl: np.ndarray, float or None, optional
"""
if kl is not None:
if kl > self.kl_threshold * self._kl_factor:
self._lr = max(self._lr / self._lr_factor, self.min_lr)
elif kl < self.kl_threshold / self._kl_factor:
self._lr = min(self._lr * self._lr_factor, self.max_lr)
# Alias to maintain naming compatibility with Optax schedulers
# https://optax.readthedocs.io/en/latest/api.html#schedules
kl_adaptive = KLAdaptiveLR
| 3,168 | Python | 34.211111 | 96 | 0.553662 |
Toni-SM/skrl/skrl/resources/schedulers/jax/__init__.py | from skrl.resources.schedulers.jax.kl_adaptive import KLAdaptiveLR, kl_adaptive
KLAdaptiveRL = KLAdaptiveLR # known typo (compatibility with versions prior to 1.0.0)
| 169 | Python | 32.999993 | 86 | 0.804734 |
Toni-SM/skrl/skrl/resources/preprocessors/torch/running_standard_scaler.py | from typing import Optional, Tuple, Union
import gym
import gymnasium
import numpy as np
import torch
import torch.nn as nn
class RunningStandardScaler(nn.Module):
def __init__(self,
size: Union[int, Tuple[int], gym.Space, gymnasium.Space],
epsilon: float = 1e-8,
clip_threshold: float = 5.0,
device: Optional[Union[str, torch.device]] = None) -> None:
"""Standardize the input data by removing the mean and scaling by the standard deviation
The implementation is adapted from the rl_games library
(https://github.com/Denys88/rl_games/blob/master/rl_games/algos_torch/running_mean_std.py)
Example::
>>> running_standard_scaler = RunningStandardScaler(size=2)
>>> data = torch.rand(3, 2) # tensor of shape (N, 2)
>>> running_standard_scaler(data)
tensor([[0.1954, 0.3356],
[0.9719, 0.4163],
[0.8540, 0.1982]])
:param size: Size of the input space
:type size: int, tuple or list of integers, gym.Space, or gymnasium.Space
:param epsilon: Small number to avoid division by zero (default: ``1e-8``)
:type epsilon: float
:param clip_threshold: Threshold to clip the data (default: ``5.0``)
:type clip_threshold: float
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or torch.device, optional
"""
super().__init__()
self.epsilon = epsilon
self.clip_threshold = clip_threshold
if device is None:
self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
else:
self.device = torch.device(device)
size = self._get_space_size(size)
self.register_buffer("running_mean", torch.zeros(size, dtype=torch.float64, device=self.device))
self.register_buffer("running_variance", torch.ones(size, dtype=torch.float64, device=self.device))
self.register_buffer("current_count", torch.ones((), dtype=torch.float64, device=self.device))
def _get_space_size(self, space: Union[int, Tuple[int], gym.Space, gymnasium.Space]) -> int:
"""Get the size (number of elements) of a space
:param space: Space or shape from which to obtain the number of elements
:type space: int, tuple or list of integers, gym.Space, or gymnasium.Space
:raises ValueError: If the space is not supported
:return: Size of the space data
:rtype: Space size (number of elements)
"""
if type(space) in [int, float]:
return int(space)
elif type(space) in [tuple, list]:
return np.prod(space)
elif issubclass(type(space), gym.Space):
if issubclass(type(space), gym.spaces.Discrete):
return 1
elif issubclass(type(space), gym.spaces.Box):
return np.prod(space.shape)
elif issubclass(type(space), gym.spaces.Dict):
return sum([self._get_space_size(space.spaces[key]) for key in space.spaces])
elif issubclass(type(space), gymnasium.Space):
if issubclass(type(space), gymnasium.spaces.Discrete):
return 1
elif issubclass(type(space), gymnasium.spaces.Box):
return np.prod(space.shape)
elif issubclass(type(space), gymnasium.spaces.Dict):
return sum([self._get_space_size(space.spaces[key]) for key in space.spaces])
raise ValueError(f"Space type {type(space)} not supported")
def _parallel_variance(self, input_mean: torch.Tensor, input_var: torch.Tensor, input_count: int) -> None:
"""Update internal variables using the parallel algorithm for computing variance
https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Parallel_algorithm
:param input_mean: Mean of the input data
:type input_mean: torch.Tensor
:param input_var: Variance of the input data
:type input_var: torch.Tensor
:param input_count: Batch size of the input data
:type input_count: int
"""
delta = input_mean - self.running_mean
total_count = self.current_count + input_count
M2 = (self.running_variance * self.current_count) + (input_var * input_count) \
+ delta ** 2 * self.current_count * input_count / total_count
# update internal variables
self.running_mean = self.running_mean + delta * input_count / total_count
self.running_variance = M2 / total_count
self.current_count = total_count
def _compute(self, x: torch.Tensor, train: bool = False, inverse: bool = False) -> torch.Tensor:
"""Compute the standardization of the input data
:param x: Input tensor
:type x: torch.Tensor
:param train: Whether to train the standardizer (default: ``False``)
:type train: bool, optional
:param inverse: Whether to inverse the standardizer to scale back the data (default: ``False``)
:type inverse: bool, optional
:return: Standardized tensor
:rtype: torch.Tensor
"""
if train:
if x.dim() == 3:
self._parallel_variance(torch.mean(x, dim=(0, 1)), torch.var(x, dim=(0, 1)), x.shape[0] * x.shape[1])
else:
self._parallel_variance(torch.mean(x, dim=0), torch.var(x, dim=0), x.shape[0])
# scale back the data to the original representation
if inverse:
return torch.sqrt(self.running_variance.float()) \
* torch.clamp(x, min=-self.clip_threshold, max=self.clip_threshold) + self.running_mean.float()
# standardization by centering and scaling
return torch.clamp((x - self.running_mean.float()) / (torch.sqrt(self.running_variance.float()) + self.epsilon),
min=-self.clip_threshold,
max=self.clip_threshold)
def forward(self,
x: torch.Tensor,
train: bool = False,
inverse: bool = False,
no_grad: bool = True) -> torch.Tensor:
"""Forward pass of the standardizer
Example::
>>> x = torch.rand(3, 2, device="cuda:0")
>>> running_standard_scaler(x)
tensor([[0.6933, 0.1905],
[0.3806, 0.3162],
[0.1140, 0.0272]], device='cuda:0')
>>> running_standard_scaler(x, train=True)
tensor([[ 0.8681, -0.6731],
[ 0.0560, -0.3684],
[-0.6360, -1.0690]], device='cuda:0')
>>> running_standard_scaler(x, inverse=True)
tensor([[0.6260, 0.5468],
[0.5056, 0.5987],
[0.4029, 0.4795]], device='cuda:0')
:param x: Input tensor
:type x: torch.Tensor
:param train: Whether to train the standardizer (default: ``False``)
:type train: bool, optional
:param inverse: Whether to inverse the standardizer to scale back the data (default: ``False``)
:type inverse: bool, optional
:param no_grad: Whether to disable the gradient computation (default: ``True``)
:type no_grad: bool, optional
:return: Standardized tensor
:rtype: torch.Tensor
"""
if no_grad:
with torch.no_grad():
return self._compute(x, train, inverse)
return self._compute(x, train, inverse)
| 7,719 | Python | 42.370786 | 120 | 0.588807 |
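A minimal usage sketch (an assumption, not taken from the library's documentation): feed training batches so the running statistics are updated via the parallel-variance algorithm, then standardize new observations and scale them back with the inverse mode.

import torch
from skrl.resources.preprocessors.torch import RunningStandardScaler

scaler = RunningStandardScaler(size=4, device="cpu")

for _ in range(100):                        # simulated training batches
    batch = torch.randn(64, 4) * 3.0 + 1.0
    scaler(batch, train=True)               # updates running mean/variance (parallel algorithm)

obs = torch.randn(8, 4) * 3.0 + 1.0
standardized = scaler(obs)                  # roughly zero mean, unit variance, clipped to +/-5
restored = scaler(standardized, inverse=True)
print((restored - obs).abs().max())         # small: the inverse approximately undoes the scaling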
Toni-SM/skrl/skrl/resources/preprocessors/torch/__init__.py | from skrl.resources.preprocessors.torch.running_standard_scaler import RunningStandardScaler
| 93 | Python | 45.999977 | 92 | 0.892473 |
Toni-SM/skrl/skrl/resources/preprocessors/jax/running_standard_scaler.py | from typing import Mapping, Optional, Tuple, Union
import gym
import gymnasium
import jax
import jax.numpy as jnp
import numpy as np
from skrl import config
# https://jax.readthedocs.io/en/latest/faq.html#strategy-1-jit-compiled-helper-function
@jax.jit
def _copyto(dst, src):
"""NumPy function copyto not yet implemented
"""
return dst.at[:].set(src)
@jax.jit
def _parallel_variance(running_mean: jax.Array,
running_variance: jax.Array,
current_count: jax.Array,
array: jax.Array) -> Tuple[jax.Array, jax.Array, jax.Array]: # yapf: disable
# ddof = 1: https://github.com/pytorch/pytorch/issues/50010
if array.ndim == 3:
input_mean = jnp.mean(array, axis=(0, 1))
input_var = jnp.var(array, axis=(0, 1), ddof=1)
input_count = array.shape[0] * array.shape[1]
else:
input_mean = jnp.mean(array, axis=0)
input_var = jnp.var(array, axis=0, ddof=1)
input_count = array.shape[0]
delta = input_mean - running_mean
total_count = current_count + input_count
M2 = (running_variance * current_count) + (input_var * input_count) \
+ delta ** 2 * current_count * input_count / total_count
return running_mean + delta * input_count / total_count, M2 / total_count, total_count
@jax.jit
def _inverse(running_mean: jax.Array,
running_variance: jax.Array,
clip_threshold: float,
array: jax.Array) -> jax.Array: # yapf: disable
return jnp.sqrt(running_variance) * jnp.clip(array, -clip_threshold, clip_threshold) + running_mean
@jax.jit
def _standardization(running_mean: jax.Array,
running_variance: jax.Array,
clip_threshold: float,
epsilon: float,
array: jax.Array) -> jax.Array:
return jnp.clip((array - running_mean) / (jnp.sqrt(running_variance) + epsilon), -clip_threshold, clip_threshold)
class RunningStandardScaler:
def __init__(self,
size: Union[int, Tuple[int], gym.Space, gymnasium.Space],
epsilon: float = 1e-8,
clip_threshold: float = 5.0,
device: Optional[Union[str, jax.Device]] = None) -> None:
"""Standardize the input data by removing the mean and scaling by the standard deviation
The implementation is adapted from the rl_games library
(https://github.com/Denys88/rl_games/blob/master/rl_games/algos_torch/running_mean_std.py)
Example::
>>> running_standard_scaler = RunningStandardScaler(size=2)
>>> data = jax.random.uniform(jax.random.PRNGKey(0), (3,2)) # tensor of shape (N, 2)
>>> running_standard_scaler(data)
Array([[0.57450044, 0.09968603],
[0.7419659 , 0.8941783 ],
[0.59656656, 0.45325184]], dtype=float32)
:param size: Size of the input space
:type size: int, tuple or list of integers, gym.Space, or gymnasium.Space
:param epsilon: Small number to avoid division by zero (default: ``1e-8``)
:type epsilon: float
:param clip_threshold: Threshold to clip the data (default: ``5.0``)
:type clip_threshold: float
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or jax.Device, optional
"""
self._jax = config.jax.backend == "jax"
self.epsilon = epsilon
self.clip_threshold = clip_threshold
if device is None:
self.device = jax.devices()[0]
else:
self.device = device if isinstance(device, jax.Device) else jax.devices(device)[0]
size = self._get_space_size(size)
if self._jax:
self.running_mean = jnp.zeros(size, dtype=jnp.float32)
self.running_variance = jnp.ones(size, dtype=jnp.float32)
self.current_count = jnp.ones((1,), dtype=jnp.float32)
else:
self.running_mean = np.zeros(size, dtype=np.float32)
self.running_variance = np.ones(size, dtype=np.float32)
self.current_count = np.ones((1,), dtype=np.float32)
@property
def state_dict(self) -> Mapping[str, Union[np.ndarray, jax.Array]]:
"""Dictionary containing references to the whole state of the module
"""
class _StateDict:
def __init__(self, params):
self.params = params
def replace(self, params):
return params
return _StateDict({
"running_mean": self.running_mean,
"running_variance": self.running_variance,
"current_count": self.current_count
})
@state_dict.setter
def state_dict(self, value: Mapping[str, Union[np.ndarray, jax.Array]]) -> None:
if self._jax:
self.running_mean = _copyto(self.running_mean, value["running_mean"])
self.running_variance = _copyto(self.running_variance, value["running_variance"])
self.current_count = _copyto(self.current_count, value["current_count"])
else:
np.copyto(self.running_mean, value["running_mean"])
np.copyto(self.running_variance, value["running_variance"])
np.copyto(self.current_count, value["current_count"])
def _get_space_size(self, space: Union[int, Tuple[int], gym.Space, gymnasium.Space]) -> int:
"""Get the size (number of elements) of a space
:param space: Space or shape from which to obtain the number of elements
:type space: int, tuple or list of integers, gym.Space, or gymnasium.Space
:raises ValueError: If the space is not supported
:return: Size of the space data
:rtype: Space size (number of elements)
"""
if type(space) in [int, float]:
return int(space)
elif type(space) in [tuple, list]:
return np.prod(space)
elif issubclass(type(space), gym.Space):
if issubclass(type(space), gym.spaces.Discrete):
return 1
elif issubclass(type(space), gym.spaces.Box):
return np.prod(space.shape)
elif issubclass(type(space), gym.spaces.Dict):
return sum([self._get_space_size(space.spaces[key]) for key in space.spaces])
elif issubclass(type(space), gymnasium.Space):
if issubclass(type(space), gymnasium.spaces.Discrete):
return 1
elif issubclass(type(space), gymnasium.spaces.Box):
return np.prod(space.shape)
elif issubclass(type(space), gymnasium.spaces.Dict):
return sum([self._get_space_size(space.spaces[key]) for key in space.spaces])
raise ValueError(f"Space type {type(space)} not supported")
def _parallel_variance(self,
input_mean: Union[np.ndarray, jax.Array],
input_var: Union[np.ndarray, jax.Array],
input_count: int) -> None:
"""Update internal variables using the parallel algorithm for computing variance
https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Parallel_algorithm
:param input_mean: Mean of the input data
:type input_mean: np.ndarray or jax.Array
:param input_var: Variance of the input data
:type input_var: np.ndarray or jax.Array
:param input_count: Batch size of the input data
:type input_count: int
"""
delta = input_mean - self.running_mean
total_count = self.current_count + input_count
M2 = (self.running_variance * self.current_count) + (input_var * input_count) \
+ delta ** 2 * self.current_count * input_count / total_count
# update internal variables
self.running_mean = self.running_mean + delta * input_count / total_count
self.running_variance = M2 / total_count
self.current_count = total_count
def __call__(self,
x: Union[np.ndarray, jax.Array],
train: bool = False,
inverse: bool = False) -> Union[np.ndarray, jax.Array]:
"""Forward pass of the standardizer
Example::
>>> x = jax.random.uniform(jax.random.PRNGKey(0), (3,2))
>>> running_standard_scaler(x)
Array([[0.57450044, 0.09968603],
[0.7419659 , 0.8941783 ],
[0.59656656, 0.45325184]], dtype=float32)
>>> running_standard_scaler(x, train=True)
Array([[ 0.167439 , -0.4292293 ],
[ 0.45878986, 0.8719094 ],
[ 0.20582889, 0.14980486]], dtype=float32)
>>> running_standard_scaler(x, inverse=True)
Array([[0.80847514, 0.4226486 ],
[0.9047325 , 0.90777594],
[0.8211585 , 0.6385405 ]], dtype=float32)
:param x: Input tensor
:type x: np.ndarray or jax.Array
:param train: Whether to train the standardizer (default: ``False``)
:type train: bool, optional
:param inverse: Whether to inverse the standardizer to scale back the data (default: ``False``)
:type inverse: bool, optional
:return: Standardized tensor
:rtype: np.ndarray or jax.Array
"""
if train:
if self._jax:
self.running_mean, self.running_variance, self.current_count = \
_parallel_variance(self.running_mean, self.running_variance, self.current_count, x)
else:
# ddof = 1: https://github.com/pytorch/pytorch/issues/50010
if x.ndim == 3:
self._parallel_variance(np.mean(x, axis=(0, 1)),
np.var(x, axis=(0, 1), ddof=1),
x.shape[0] * x.shape[1])
else:
self._parallel_variance(np.mean(x, axis=0), np.var(x, axis=0, ddof=1), x.shape[0])
# scale back the data to the original representation
if inverse:
if self._jax:
return _inverse(self.running_mean, self.running_variance, self.clip_threshold, x)
return np.sqrt(self.running_variance) * np.clip(x, -self.clip_threshold,
self.clip_threshold) + self.running_mean
# standardization by centering and scaling
if self._jax:
return _standardization(self.running_mean, self.running_variance, self.clip_threshold, self.epsilon, x)
return np.clip((x - self.running_mean) / (np.sqrt(self.running_variance) + self.epsilon),
a_min=-self.clip_threshold,
a_max=self.clip_threshold)
| 10,976 | Python | 42.216535 | 117 | 0.580813 |
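The class dispatches between a jitted JAX path and a plain NumPy path based on skrl's `config.jax.backend`. A short sketch of the intended selection, assuming the backend attribute can be set before the scaler is created (an assumption drawn from the `config.jax.backend` check in the constructor):

import jax
from skrl import config
from skrl.resources.preprocessors.jax import RunningStandardScaler

config.jax.backend = "jax"                  # "numpy" would select the non-jitted NumPy code path
scaler = RunningStandardScaler(size=2)

x = jax.random.uniform(jax.random.PRNGKey(0), (128, 2))
scaler(x, train=True)                       # updates the running statistics via the jitted helpers
print(scaler(x).shape)                      # standardized array of shape (128, 2)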
Toni-SM/skrl/skrl/resources/preprocessors/jax/__init__.py | from skrl.resources.preprocessors.jax.running_standard_scaler import RunningStandardScaler
| 91 | Python | 44.999978 | 90 | 0.89011 |
Toni-SM/skrl/skrl/resources/noises/torch/base.py | from typing import Optional, Tuple, Union
import torch
class Noise():
def __init__(self, device: Optional[Union[str, torch.device]] = None) -> None:
"""Base class representing a noise
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or torch.device, optional
Custom noises should override the ``sample`` method::
import torch
from skrl.resources.noises.torch import Noise
class CustomNoise(Noise):
def __init__(self, device=None):
super().__init__(device)
def sample(self, size):
return torch.rand(size, device=self.device)
"""
if device is None:
self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
else:
self.device = torch.device(device)
def sample_like(self, tensor: torch.Tensor) -> torch.Tensor:
"""Sample a noise with the same size (shape) as the input tensor
This method will call the sampling method as follows ``.sample(tensor.shape)``
:param tensor: Input tensor used to determine output tensor size (shape)
:type tensor: torch.Tensor
:return: Sampled noise
:rtype: torch.Tensor
Example::
>>> x = torch.rand(3, 2, device="cuda:0")
>>> noise.sample_like(x)
tensor([[-0.0423, -0.1325],
[-0.0639, -0.0957],
[-0.1367, 0.1031]], device='cuda:0')
"""
return self.sample(tensor.shape)
def sample(self, size: Union[Tuple[int], torch.Size]) -> torch.Tensor:
"""Noise sampling method to be implemented by the inheriting classes
:param size: Shape of the sampled tensor
:type size: tuple or list of int, or torch.Size
:raises NotImplementedError: The method is not implemented by the inheriting classes
:return: Sampled noise
:rtype: torch.Tensor
"""
raise NotImplementedError("The sampling method (.sample()) is not implemented")
| 2,241 | Python | 34.031249 | 98 | 0.58456 |
Toni-SM/skrl/skrl/resources/noises/torch/ornstein_uhlenbeck.py | from typing import Optional, Tuple, Union
import torch
from torch.distributions import Normal
from skrl.resources.noises.torch import Noise
class OrnsteinUhlenbeckNoise(Noise):
def __init__(self,
theta: float,
sigma: float,
base_scale: float,
mean: float = 0,
std: float = 1,
device: Optional[Union[str, torch.device]] = None) -> None:
"""Class representing an Ornstein-Uhlenbeck noise
:param theta: Factor to apply to current internal state
:type theta: float
:param sigma: Factor to apply to the normal distribution
:type sigma: float
:param base_scale: Factor to apply to returned noise
:type base_scale: float
:param mean: Mean of the normal distribution (default: ``0.0``)
:type mean: float, optional
:param std: Standard deviation of the normal distribution (default: ``1.0``)
:type std: float, optional
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or torch.device, optional
Example::
>>> noise = OrnsteinUhlenbeckNoise(theta=0.1, sigma=0.2, base_scale=0.5)
"""
super().__init__(device)
self.state = 0
self.theta = theta
self.sigma = sigma
self.base_scale = base_scale
self.distribution = Normal(loc=torch.tensor(mean, device=self.device, dtype=torch.float32),
scale=torch.tensor(std, device=self.device, dtype=torch.float32))
def sample(self, size: Union[Tuple[int], torch.Size]) -> torch.Tensor:
"""Sample an Ornstein-Uhlenbeck noise
:param size: Shape of the sampled tensor
:type size: tuple or list of int, or torch.Size
:return: Sampled noise
:rtype: torch.Tensor
Example::
>>> noise.sample((3, 2))
tensor([[-0.0452, 0.0162],
[ 0.0649, -0.0708],
[-0.0211, 0.0066]], device='cuda:0')
>>> x = torch.rand(3, 2, device="cuda:0")
>>> noise.sample(x.shape)
tensor([[-0.0540, 0.0461],
[ 0.1117, -0.1157],
[-0.0074, 0.0420]], device='cuda:0')
"""
if hasattr(self.state, "shape") and self.state.shape != torch.Size(size):
self.state = 0
self.state += -self.state * self.theta + self.sigma * self.distribution.sample(size)
return self.base_scale * self.state
| 2,696 | Python | 35.445945 | 100 | 0.560831 |
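A hypothetical exploration sketch (the action tensor below is a stand-in, not part of the library): perturb deterministic actions with the temporally correlated noise and clip to the action bounds, which is the typical role of this noise in DDPG-style agents.

import torch
from skrl.resources.noises.torch import OrnsteinUhlenbeckNoise

noise = OrnsteinUhlenbeckNoise(theta=0.15, sigma=0.2, base_scale=1.0, device="cpu")

actions = torch.tanh(torch.randn(16, 6))            # stand-in for a deterministic policy output
noisy_actions = torch.clamp(actions + noise.sample_like(actions), -1.0, 1.0)
print(noisy_actions.shape)                          # torch.Size([16, 6])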
Toni-SM/skrl/skrl/resources/noises/torch/__init__.py | from skrl.resources.noises.torch.base import Noise # isort:skip
from skrl.resources.noises.torch.gaussian import GaussianNoise
from skrl.resources.noises.torch.ornstein_uhlenbeck import OrnsteinUhlenbeckNoise
| 211 | Python | 41.399992 | 81 | 0.853081 |
Toni-SM/skrl/skrl/resources/noises/torch/gaussian.py | from typing import Optional, Tuple, Union
import torch
from torch.distributions import Normal
from skrl.resources.noises.torch import Noise
class GaussianNoise(Noise):
def __init__(self, mean: float, std: float, device: Optional[Union[str, torch.device]] = None) -> None:
"""Class representing a Gaussian noise
:param mean: Mean of the normal distribution
:type mean: float
:param std: Standard deviation of the normal distribution
:type std: float
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or torch.device, optional
Example::
>>> noise = GaussianNoise(mean=0, std=1)
"""
super().__init__(device)
self.distribution = Normal(loc=torch.tensor(mean, device=self.device, dtype=torch.float32),
scale=torch.tensor(std, device=self.device, dtype=torch.float32))
def sample(self, size: Union[Tuple[int], torch.Size]) -> torch.Tensor:
"""Sample a Gaussian noise
:param size: Shape of the sampled tensor
:type size: tuple or list of int, or torch.Size
:return: Sampled noise
:rtype: torch.Tensor
Example::
>>> noise.sample((3, 2))
tensor([[-0.4901, 1.3357],
[-1.2141, 0.3323],
[-0.0889, -1.1651]], device='cuda:0')
>>> x = torch.rand(3, 2, device="cuda:0")
>>> noise.sample(x.shape)
tensor([[0.5398, 1.2009],
[0.0307, 1.3065],
[0.2082, 0.6116]], device='cuda:0')
"""
return self.distribution.sample(size)
| 1,820 | Python | 33.35849 | 107 | 0.564835 |
Toni-SM/skrl/skrl/resources/noises/jax/base.py | from typing import Optional, Tuple, Union
import jax
import numpy as np
from skrl import config
class Noise():
def __init__(self, device: Optional[Union[str, jax.Device]] = None) -> None:
"""Base class representing a noise
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or jax.Device, optional
Custom noises should override the ``sample`` method::
import jax
from skrl.resources.noises.jax import Noise
class CustomNoise(Noise):
def __init__(self, device=None):
super().__init__(device)
def sample(self, size):
return jax.random.uniform(jax.random.PRNGKey(0), size)
"""
self._jax = config.jax.backend == "jax"
if device is None:
self.device = jax.devices()[0]
else:
self.device = device if isinstance(device, jax.Device) else jax.devices(device)[0]
def sample_like(self, tensor: Union[np.ndarray, jax.Array]) -> Union[np.ndarray, jax.Array]:
"""Sample a noise with the same size (shape) as the input tensor
This method will call the sampling method as follows ``.sample(tensor.shape)``
:param tensor: Input tensor used to determine output tensor size (shape)
:type tensor: np.ndarray or jax.Array
:return: Sampled noise
:rtype: np.ndarray or jax.Array
Example::
>>> x = jax.random.uniform(jax.random.PRNGKey(0), (3, 2))
>>> noise.sample_like(x)
Array([[0.57450044, 0.09968603],
[0.7419659 , 0.8941783 ],
[0.59656656, 0.45325184]], dtype=float32)
"""
return self.sample(tensor.shape)
def sample(self, size: Tuple[int]) -> Union[np.ndarray, jax.Array]:
"""Noise sampling method to be implemented by the inheriting classes
:param size: Shape of the sampled tensor
:type size: tuple or list of int
:raises NotImplementedError: The method is not implemented by the inheriting classes
:return: Sampled noise
:rtype: np.ndarray or jax.Array
"""
raise NotImplementedError("The sampling method (.sample()) is not implemented")
| 2,413 | Python | 33.985507 | 98 | 0.596353 |
Toni-SM/skrl/skrl/resources/noises/jax/ornstein_uhlenbeck.py | from typing import Optional, Tuple, Union
from functools import partial
import jax
import jax.numpy as jnp
import numpy as np
from skrl import config
from skrl.resources.noises.jax import Noise
# https://jax.readthedocs.io/en/latest/faq.html#strategy-1-jit-compiled-helper-function
@partial(jax.jit, static_argnames=("shape"))
def _sample(theta, sigma, state, mean, std, key, iterator, shape):
subkey = jax.random.fold_in(key, iterator)
return state * theta + sigma * (jax.random.normal(subkey, shape) * std + mean)
class OrnsteinUhlenbeckNoise(Noise):
def __init__(self,
theta: float,
sigma: float,
base_scale: float,
mean: float = 0,
std: float = 1,
device: Optional[Union[str, jax.Device]] = None) -> None:
"""Class representing an Ornstein-Uhlenbeck noise
:param theta: Factor to apply to current internal state
:type theta: float
:param sigma: Factor to apply to the normal distribution
:type sigma: float
:param base_scale: Factor to apply to returned noise
:type base_scale: float
:param mean: Mean of the normal distribution (default: ``0.0``)
:type mean: float, optional
:param std: Standard deviation of the normal distribution (default: ``1.0``)
:type std: float, optional
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or jax.Device, optional
Example::
>>> noise = OrnsteinUhlenbeckNoise(theta=0.1, sigma=0.2, base_scale=0.5)
"""
super().__init__(device)
self.state = 0
self.theta = theta
self.sigma = sigma
self.base_scale = base_scale
if self._jax:
self.mean = jnp.array(mean)
self.std = jnp.array(std)
self._i = 0
self._key = config.jax.key
else:
self.mean = np.array(mean)
self.std = np.array(std)
def sample(self, size: Tuple[int]) -> Union[np.ndarray, jax.Array]:
"""Sample an Ornstein-Uhlenbeck noise
:param size: Shape of the sampled tensor
:type size: tuple or list of int
:return: Sampled noise
:rtype: np.ndarray or jax.Array
Example::
>>> noise.sample((3, 2))
Array([[ 0.01878439, -0.12833427],
[ 0.06494182, 0.12490594],
[ 0.024447 , -0.01174496]], dtype=float32)
>>> x = jax.random.uniform(jax.random.PRNGKey(0), (3, 2))
>>> noise.sample(x.shape)
Array([[ 0.17988093, -1.2289404 ],
[ 0.6218886 , 1.1961104 ],
[ 0.23410667, -0.11247082]], dtype=float32)
"""
if hasattr(self.state, "shape") and self.state.shape != size:
self.state = 0
if self._jax:
self._i += 1
self.state = _sample(self.theta, self.sigma, self.state, self.mean, self.std, self._key, self._i, size)
else:
self.state += -self.state * self.theta + self.sigma * np.random.normal(self.mean, self.std, size)
return self.base_scale * self.state
| 3,360 | Python | 34.378947 | 115 | 0.569048 |
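A short standalone sketch (not from the library) of the key handling used by the `_sample` helper above: folding a per-call counter into the base PRNG key yields a fresh, reproducible stream on every call without manually splitting keys.

import jax

key = jax.random.PRNGKey(0)
a = jax.random.normal(jax.random.fold_in(key, 1), (3,))
b = jax.random.normal(jax.random.fold_in(key, 2), (3,))
print(bool((a != b).any()))                                                      # True: distinct sub-keys
print(bool((jax.random.normal(jax.random.fold_in(key, 1), (3,)) == a).all()))    # True: reproducible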
Toni-SM/skrl/skrl/resources/noises/jax/__init__.py | from skrl.resources.noises.jax.base import Noise # isort:skip
from skrl.resources.noises.jax.gaussian import GaussianNoise
from skrl.resources.noises.jax.ornstein_uhlenbeck import OrnsteinUhlenbeckNoise
| 205 | Python | 40.199992 | 79 | 0.84878 |
Toni-SM/skrl/skrl/resources/noises/jax/gaussian.py | from typing import Optional, Tuple, Union
from functools import partial
import jax
import jax.numpy as jnp
import numpy as np
from skrl import config
from skrl.resources.noises.jax import Noise
# https://jax.readthedocs.io/en/latest/faq.html#strategy-1-jit-compiled-helper-function
@partial(jax.jit, static_argnames=("shape"))
def _sample(mean, std, key, iterator, shape):
subkey = jax.random.fold_in(key, iterator)
return jax.random.normal(subkey, shape) * std + mean
class GaussianNoise(Noise):
def __init__(self, mean: float, std: float, device: Optional[Union[str, jax.Device]] = None) -> None:
"""Class representing a Gaussian noise
:param mean: Mean of the normal distribution
:type mean: float
:param std: Standard deviation of the normal distribution
:type std: float
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or jax.Device, optional
Example::
>>> noise = GaussianNoise(mean=0, std=1)
"""
super().__init__(device)
if self._jax:
self._i = 0
self._key = config.jax.key
self.mean = jnp.array(mean)
self.std = jnp.array(std)
else:
self.mean = np.array(mean)
self.std = np.array(std)
def sample(self, size: Tuple[int]) -> Union[np.ndarray, jax.Array]:
"""Sample a Gaussian noise
:param size: Shape of the sampled tensor
:type size: tuple or list of int
:return: Sampled noise
:rtype: np.ndarray or jax.Array
Example::
>>> noise.sample((3, 2))
Array([[ 0.01878439, -0.12833427],
[ 0.06494182, 0.12490594],
[ 0.024447 , -0.01174496]], dtype=float32)
>>> x = jax.random.uniform(jax.random.PRNGKey(0), (3, 2))
>>> noise.sample(x.shape)
Array([[ 0.17988093, -1.2289404 ],
[ 0.6218886 , 1.1961104 ],
[ 0.23410667, -0.11247082]], dtype=float32)
"""
if self._jax:
self._i += 1
return _sample(self.mean, self.std, self._key, self._i, size)
return np.random.normal(self.mean, self.std, size)
| 2,395 | Python | 31.821917 | 105 | 0.573278 |
Toni-SM/skrl/skrl/resources/optimizers/jax/__init__.py | from skrl.resources.optimizers.jax.adam import Adam
| 52 | Python | 25.499987 | 51 | 0.846154 |
Toni-SM/skrl/skrl/resources/optimizers/jax/adam.py | from typing import Optional
import functools
import flax
import jax
import optax
from skrl.models.jax import Model
# https://jax.readthedocs.io/en/latest/faq.html#strategy-1-jit-compiled-helper-function
@functools.partial(jax.jit, static_argnames=("transformation"))
def _step(transformation, grad, state, state_dict):
# optax transform
params, optimizer_state = transformation.update(grad, state, state_dict.params)
# apply transformation
params = optax.apply_updates(state_dict.params, params)
return optimizer_state, state_dict.replace(params=params)
@functools.partial(jax.jit, static_argnames=("transformation"))
def _step_with_scale(transformation, grad, state, state_dict, scale):
# optax transform
params, optimizer_state = transformation.update(grad, state, state_dict.params)
# custom scale
# https://optax.readthedocs.io/en/latest/api.html?#optax.scale
params = jax.tree_util.tree_map(lambda params: scale * params, params)
# apply transformation
params = optax.apply_updates(state_dict.params, params)
return optimizer_state, state_dict.replace(params=params)
class Adam:
def __new__(cls, model: Model, lr: float = 1e-3, grad_norm_clip: float = 0, scale: bool = True) -> "Optimizer":
"""Adam optimizer
Adapted from `Optax's Adam <https://optax.readthedocs.io/en/latest/api.html?#adam>`_
to support custom scale (learning rate)
:param model: Model
:type model: skrl.models.jax.Model
:param lr: Learning rate (default: ``1e-3``)
:type lr: float, optional
:param grad_norm_clip: Clipping coefficient for the norm of the gradients (default: ``0``).
Disabled if less than or equal to zero
:type grad_norm_clip: float, optional
:param scale: Whether to instantiate the optimizer as-is or remove the scaling step (default: ``True``).
Remove the scaling step if a custom learning rate is to be applied during optimization steps
:type scale: bool, optional
:return: Adam optimizer
:rtype: flax.struct.PyTreeNode
Example::
>>> optimizer = Adam(model=policy, lr=5e-4)
            >>> # step the optimizer given a computed gradient (grad)
>>> optimizer = optimizer.step(grad, policy)
# apply custom learning rate during optimization steps
>>> optimizer = Adam(model=policy, lr=5e-4, scale=False)
            >>> # step the optimizer given a computed gradient and an updated learning rate (lr)
>>> optimizer = optimizer.step(grad, policy, lr)
"""
class Optimizer(flax.struct.PyTreeNode):
"""Optimizer
This class is the result of isolating the Optax optimizer,
which is mixed with the model parameters, from Flax's TrainState class
https://flax.readthedocs.io/en/latest/api_reference/flax.training.html#train-state
"""
transformation: optax.GradientTransformation = flax.struct.field(pytree_node=False)
state: optax.OptState = flax.struct.field(pytree_node=True)
@classmethod
def _create(cls, *, transformation, state, **kwargs):
return cls(transformation=transformation, state=state, **kwargs)
def step(self, grad: jax.Array, model: Model, lr: Optional[float] = None) -> "Optimizer":
"""Performs a single optimization step
:param grad: Gradients
:type grad: jax.Array
:param model: Model
:type model: skrl.models.jax.Model
:param lr: Learning rate.
If given, a scale optimization step will be performed
:type lr: float, optional
:return: Optimizer
:rtype: flax.struct.PyTreeNode
"""
if lr is None:
optimizer_state, model.state_dict = _step(self.transformation, grad, self.state, model.state_dict)
else:
optimizer_state, model.state_dict = _step_with_scale(self.transformation, grad, self.state, model.state_dict, -lr)
return self.replace(state=optimizer_state)
# default optax transformation
if scale:
transformation = optax.adam(learning_rate=lr)
# optax transformation without scaling step
else:
transformation = optax.scale_by_adam()
# clip updates using their global norm
if grad_norm_clip > 0:
transformation = optax.chain(optax.clip_by_global_norm(grad_norm_clip), transformation)
return Optimizer._create(transformation=transformation, state=transformation.init(model.state_dict.params))
| 4,818 | Python | 41.646017 | 134 | 0.634703 |
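An illustrative, standalone sketch (assumption: plain Optax, without skrl's Model) of what the `scale` flag controls: `optax.adam` already folds the learning rate into its update, whereas `optax.scale_by_adam` leaves it out so a per-step rate can be applied manually, which is what `_step_with_scale` does with `-lr`.

import jax
import jax.numpy as jnp
import optax

params = {"w": jnp.ones(3)}
grad = {"w": jnp.full(3, 0.5)}

tx = optax.scale_by_adam()                          # Adam moments only, no learning-rate scaling
state = tx.init(params)
updates, state = tx.update(grad, state, params)
lr = 1e-3
updates = jax.tree_util.tree_map(lambda u: -lr * u, updates)   # manual scaling, as in _step_with_scale
params = optax.apply_updates(params, updates)
print(params["w"])                                   # parameters moved opposite the gradient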
Toni-SM/skrl/skrl/trainers/torch/base.py | from typing import List, Optional, Union
import atexit
import sys
import tqdm
import torch
from skrl import logger
from skrl.agents.torch import Agent
from skrl.envs.wrappers.torch import Wrapper
def generate_equally_spaced_scopes(num_envs: int, num_simultaneous_agents: int) -> List[int]:
"""Generate a list of equally spaced scopes for the agents
:param num_envs: Number of environments
:type num_envs: int
:param num_simultaneous_agents: Number of simultaneous agents
:type num_simultaneous_agents: int
:raises ValueError: If the number of simultaneous agents is greater than the number of environments
:return: List of equally spaced scopes
:rtype: List[int]
"""
scopes = [int(num_envs / num_simultaneous_agents)] * num_simultaneous_agents
if sum(scopes):
scopes[-1] += num_envs - sum(scopes)
else:
raise ValueError(f"The number of simultaneous agents ({num_simultaneous_agents}) is greater than the number of environments ({num_envs})")
return scopes
class Trainer:
def __init__(self,
env: Wrapper,
agents: Union[Agent, List[Agent]],
agents_scope: Optional[List[int]] = None,
cfg: Optional[dict] = None) -> None:
"""Base class for trainers
:param env: Environment to train on
:type env: skrl.envs.wrappers.torch.Wrapper
:param agents: Agents to train
:type agents: Union[Agent, List[Agent]]
:param agents_scope: Number of environments for each agent to train on (default: ``None``)
:type agents_scope: tuple or list of int, optional
:param cfg: Configuration dictionary (default: ``None``)
:type cfg: dict, optional
"""
self.cfg = cfg if cfg is not None else {}
self.env = env
self.agents = agents
self.agents_scope = agents_scope if agents_scope is not None else []
# get configuration
self.timesteps = self.cfg.get("timesteps", 0)
self.headless = self.cfg.get("headless", False)
self.disable_progressbar = self.cfg.get("disable_progressbar", False)
self.close_environment_at_exit = self.cfg.get("close_environment_at_exit", True)
self.initial_timestep = 0
# setup agents
self.num_simultaneous_agents = 0
self._setup_agents()
# register environment closing if configured
if self.close_environment_at_exit:
@atexit.register
def close_env():
logger.info("Closing environment")
self.env.close()
logger.info("Environment closed")
def __str__(self) -> str:
"""Generate a string representation of the trainer
:return: Representation of the trainer as string
:rtype: str
"""
string = f"Trainer: {self}"
string += f"\n |-- Number of parallelizable environments: {self.env.num_envs}"
string += f"\n |-- Number of simultaneous agents: {self.num_simultaneous_agents}"
string += "\n |-- Agents and scopes:"
if self.num_simultaneous_agents > 1:
for agent, scope in zip(self.agents, self.agents_scope):
string += f"\n | |-- agent: {type(agent)}"
string += f"\n | | |-- scope: {scope[1] - scope[0]} environments ({scope[0]}:{scope[1]})"
else:
string += f"\n | |-- agent: {type(self.agents)}"
string += f"\n | | |-- scope: {self.env.num_envs} environment(s)"
return string
def _setup_agents(self) -> None:
"""Setup agents for training
:raises ValueError: Invalid setup
"""
# validate agents and their scopes
if type(self.agents) in [tuple, list]:
# single agent
if len(self.agents) == 1:
self.num_simultaneous_agents = 1
self.agents = self.agents[0]
self.agents_scope = [1]
# parallel agents
elif len(self.agents) > 1:
self.num_simultaneous_agents = len(self.agents)
# check scopes
if not len(self.agents_scope):
logger.warning("The agents' scopes are empty, they will be generated as equal as possible")
self.agents_scope = [int(self.env.num_envs / len(self.agents))] * len(self.agents)
if sum(self.agents_scope):
self.agents_scope[-1] += self.env.num_envs - sum(self.agents_scope)
else:
raise ValueError(f"The number of agents ({len(self.agents)}) is greater than the number of parallelizable environments ({self.env.num_envs})")
elif len(self.agents_scope) != len(self.agents):
raise ValueError(f"The number of agents ({len(self.agents)}) doesn't match the number of scopes ({len(self.agents_scope)})")
elif sum(self.agents_scope) != self.env.num_envs:
raise ValueError(f"The scopes ({sum(self.agents_scope)}) don't cover the number of parallelizable environments ({self.env.num_envs})")
# generate agents' scopes
index = 0
for i in range(len(self.agents_scope)):
index += self.agents_scope[i]
self.agents_scope[i] = (index - self.agents_scope[i], index)
else:
raise ValueError("A list of agents is expected")
else:
self.num_simultaneous_agents = 1
def train(self) -> None:
"""Train the agents
:raises NotImplementedError: Not implemented
"""
raise NotImplementedError
def eval(self) -> None:
"""Evaluate the agents
:raises NotImplementedError: Not implemented
"""
raise NotImplementedError
def single_agent_train(self) -> None:
"""Train agent
This method executes the following steps in loop:
- Pre-interaction
- Compute actions
- Interact with the environments
- Render scene
- Record transitions
- Post-interaction
- Reset environments
"""
assert self.num_simultaneous_agents == 1, "This method is not allowed for simultaneous agents"
assert self.env.num_agents == 1, "This method is not allowed for multi-agents"
# reset env
states, infos = self.env.reset()
for timestep in tqdm.tqdm(range(self.initial_timestep, self.timesteps), disable=self.disable_progressbar, file=sys.stdout):
# pre-interaction
self.agents.pre_interaction(timestep=timestep, timesteps=self.timesteps)
# compute actions
with torch.no_grad():
actions = self.agents.act(states, timestep=timestep, timesteps=self.timesteps)[0]
# step the environments
next_states, rewards, terminated, truncated, infos = self.env.step(actions)
# render scene
if not self.headless:
self.env.render()
# record the environments' transitions
self.agents.record_transition(states=states,
actions=actions,
rewards=rewards,
next_states=next_states,
terminated=terminated,
truncated=truncated,
infos=infos,
timestep=timestep,
timesteps=self.timesteps)
# post-interaction
self.agents.post_interaction(timestep=timestep, timesteps=self.timesteps)
# reset environments
if self.env.num_envs > 1:
states = next_states
else:
if terminated.any() or truncated.any():
with torch.no_grad():
states, infos = self.env.reset()
else:
states = next_states
def single_agent_eval(self) -> None:
"""Evaluate agent
This method executes the following steps in loop:
- Compute actions (sequentially)
- Interact with the environments
- Render scene
- Reset environments
"""
assert self.num_simultaneous_agents == 1, "This method is not allowed for simultaneous agents"
assert self.env.num_agents == 1, "This method is not allowed for multi-agents"
# reset env
states, infos = self.env.reset()
for timestep in tqdm.tqdm(range(self.initial_timestep, self.timesteps), disable=self.disable_progressbar, file=sys.stdout):
# compute actions
with torch.no_grad():
actions = self.agents.act(states, timestep=timestep, timesteps=self.timesteps)[0]
# step the environments
next_states, rewards, terminated, truncated, infos = self.env.step(actions)
# render scene
if not self.headless:
self.env.render()
# write data to TensorBoard
self.agents.record_transition(states=states,
actions=actions,
rewards=rewards,
next_states=next_states,
terminated=terminated,
truncated=truncated,
infos=infos,
timestep=timestep,
timesteps=self.timesteps)
super(type(self.agents), self.agents).post_interaction(timestep=timestep, timesteps=self.timesteps)
# reset environments
if self.env.num_envs > 1:
states = next_states
else:
if terminated.any() or truncated.any():
with torch.no_grad():
states, infos = self.env.reset()
else:
states = next_states
def multi_agent_train(self) -> None:
"""Train multi-agents
This method executes the following steps in loop:
- Pre-interaction
- Compute actions
- Interact with the environments
- Render scene
- Record transitions
- Post-interaction
- Reset environments
"""
assert self.num_simultaneous_agents == 1, "This method is not allowed for simultaneous agents"
assert self.env.num_agents > 1, "This method is not allowed for single-agent"
# reset env
states, infos = self.env.reset()
shared_states = infos.get("shared_states", None)
for timestep in tqdm.tqdm(range(self.initial_timestep, self.timesteps), disable=self.disable_progressbar, file=sys.stdout):
# pre-interaction
self.agents.pre_interaction(timestep=timestep, timesteps=self.timesteps)
# compute actions
with torch.no_grad():
actions = self.agents.act(states, timestep=timestep, timesteps=self.timesteps)[0]
# step the environments
next_states, rewards, terminated, truncated, infos = self.env.step(actions)
shared_next_states = infos.get("shared_states", None)
infos["shared_states"] = shared_states
infos["shared_next_states"] = shared_next_states
# render scene
if not self.headless:
self.env.render()
# record the environments' transitions
self.agents.record_transition(states=states,
actions=actions,
rewards=rewards,
next_states=next_states,
terminated=terminated,
truncated=truncated,
infos=infos,
timestep=timestep,
timesteps=self.timesteps)
# post-interaction
self.agents.post_interaction(timestep=timestep, timesteps=self.timesteps)
# reset environments
with torch.no_grad():
if not self.env.agents:
states, infos = self.env.reset()
shared_states = infos.get("shared_states", None)
else:
states = next_states
shared_states = shared_next_states
def multi_agent_eval(self) -> None:
"""Evaluate multi-agents
This method executes the following steps in loop:
- Compute actions (sequentially)
- Interact with the environments
- Render scene
- Reset environments
"""
assert self.num_simultaneous_agents == 1, "This method is not allowed for simultaneous agents"
assert self.env.num_agents > 1, "This method is not allowed for single-agent"
# reset env
states, infos = self.env.reset()
shared_states = infos.get("shared_states", None)
for timestep in tqdm.tqdm(range(self.initial_timestep, self.timesteps), disable=self.disable_progressbar, file=sys.stdout):
# compute actions
with torch.no_grad():
actions = self.agents.act(states, timestep=timestep, timesteps=self.timesteps)[0]
# step the environments
next_states, rewards, terminated, truncated, infos = self.env.step(actions)
shared_next_states = infos.get("shared_states", None)
infos["shared_states"] = shared_states
infos["shared_next_states"] = shared_next_states
# render scene
if not self.headless:
self.env.render()
# write data to TensorBoard
self.agents.record_transition(states=states,
actions=actions,
rewards=rewards,
next_states=next_states,
terminated=terminated,
truncated=truncated,
infos=infos,
timestep=timestep,
timesteps=self.timesteps)
super(type(self.agents), self.agents).post_interaction(timestep=timestep, timesteps=self.timesteps)
# reset environments
if not self.env.agents:
states, infos = self.env.reset()
shared_states = infos.get("shared_states", None)
else:
states = next_states
shared_states = shared_next_states
| 15,410 | Python | 40.539083 | 166 | 0.53355 |
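A small sketch (not from the library's documentation) of how generate_equally_spaced_scopes distributes environments when they do not divide evenly: the remainder is folded into the last scope.

from skrl.trainers.torch import generate_equally_spaced_scopes

print(generate_equally_spaced_scopes(num_envs=10, num_simultaneous_agents=3))   # [3, 3, 4]
print(generate_equally_spaced_scopes(num_envs=8, num_simultaneous_agents=4))    # [2, 2, 2, 2]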
Toni-SM/skrl/skrl/trainers/torch/__init__.py | from skrl.trainers.torch.base import Trainer, generate_equally_spaced_scopes # isort:skip
from skrl.trainers.torch.parallel import ParallelTrainer
from skrl.trainers.torch.sequential import SequentialTrainer
from skrl.trainers.torch.step import StepTrainer
| 259 | Python | 42.333326 | 90 | 0.849421 |
Toni-SM/skrl/skrl/trainers/torch/step.py | from typing import Any, List, Optional, Tuple, Union
import copy
import sys
import tqdm
import torch
from skrl.agents.torch import Agent
from skrl.envs.wrappers.torch import Wrapper
from skrl.trainers.torch import Trainer
# [start-config-dict-torch]
STEP_TRAINER_DEFAULT_CONFIG = {
"timesteps": 100000, # number of timesteps to train for
"headless": False, # whether to use headless mode (no rendering)
"disable_progressbar": False, # whether to disable the progressbar. If None, disable on non-TTY
"close_environment_at_exit": True, # whether to close the environment on normal program termination
}
# [end-config-dict-torch]
class StepTrainer(Trainer):
def __init__(self,
env: Wrapper,
agents: Union[Agent, List[Agent]],
agents_scope: Optional[List[int]] = None,
cfg: Optional[dict] = None) -> None:
"""Step-by-step trainer
Train agents by controlling the training/evaluation loop step by step
:param env: Environment to train on
:type env: skrl.envs.wrappers.torch.Wrapper
:param agents: Agents to train
:type agents: Union[Agent, List[Agent]]
:param agents_scope: Number of environments for each agent to train on (default: ``None``)
:type agents_scope: tuple or list of int, optional
:param cfg: Configuration dictionary (default: ``None``).
See STEP_TRAINER_DEFAULT_CONFIG for default values
:type cfg: dict, optional
"""
_cfg = copy.deepcopy(STEP_TRAINER_DEFAULT_CONFIG)
_cfg.update(cfg if cfg is not None else {})
agents_scope = agents_scope if agents_scope is not None else []
super().__init__(env=env, agents=agents, agents_scope=agents_scope, cfg=_cfg)
# init agents
if self.num_simultaneous_agents > 1:
for agent in self.agents:
agent.init(trainer_cfg=self.cfg)
else:
self.agents.init(trainer_cfg=self.cfg)
self._timestep = 0
self._progress = None
self.states = None
def train(self, timestep: Optional[int] = None, timesteps: Optional[int] = None) -> \
Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, Any]:
"""Execute a training iteration
This method executes the following steps once:
- Pre-interaction (sequentially if num_simultaneous_agents > 1)
- Compute actions (sequentially if num_simultaneous_agents > 1)
- Interact with the environments
- Render scene
- Record transitions (sequentially if num_simultaneous_agents > 1)
- Post-interaction (sequentially if num_simultaneous_agents > 1)
- Reset environments
:param timestep: Current timestep (default: ``None``).
If None, the current timestep will be carried by an internal variable
:type timestep: int, optional
:param timesteps: Total number of timesteps (default: ``None``).
If None, the total number of timesteps is obtained from the trainer's config
:type timesteps: int, optional
:return: Observation, reward, terminated, truncated, info
:rtype: tuple of torch.Tensor and any other info
"""
if timestep is None:
self._timestep += 1
timestep = self._timestep
timesteps = self.timesteps if timesteps is None else timesteps
if self._progress is None:
self._progress = tqdm.tqdm(total=timesteps, disable=self.disable_progressbar, file=sys.stdout)
self._progress.update(n=1)
# set running mode
if self.num_simultaneous_agents > 1:
for agent in self.agents:
agent.set_running_mode("train")
else:
self.agents.set_running_mode("train")
# reset env
if self.states is None:
self.states, infos = self.env.reset()
if self.num_simultaneous_agents == 1:
# pre-interaction
self.agents.pre_interaction(timestep=timestep, timesteps=timesteps)
# compute actions
with torch.no_grad():
actions = self.agents.act(self.states, timestep=timestep, timesteps=timesteps)[0]
else:
# pre-interaction
for agent in self.agents:
agent.pre_interaction(timestep=timestep, timesteps=timesteps)
# compute actions
with torch.no_grad():
actions = torch.vstack([agent.act(self.states[scope[0]:scope[1]], timestep=timestep, timesteps=timesteps)[0] \
for agent, scope in zip(self.agents, self.agents_scope)])
with torch.no_grad():
# step the environments
next_states, rewards, terminated, truncated, infos = self.env.step(actions)
# render scene
if not self.headless:
self.env.render()
if self.num_simultaneous_agents == 1:
# record the environments' transitions
with torch.no_grad():
self.agents.record_transition(states=self.states,
actions=actions,
rewards=rewards,
next_states=next_states,
terminated=terminated,
truncated=truncated,
infos=infos,
timestep=timestep,
timesteps=timesteps)
# post-interaction
self.agents.post_interaction(timestep=timestep, timesteps=timesteps)
else:
# record the environments' transitions
with torch.no_grad():
for agent, scope in zip(self.agents, self.agents_scope):
agent.record_transition(states=self.states[scope[0]:scope[1]],
actions=actions[scope[0]:scope[1]],
rewards=rewards[scope[0]:scope[1]],
next_states=next_states[scope[0]:scope[1]],
terminated=terminated[scope[0]:scope[1]],
truncated=truncated[scope[0]:scope[1]],
infos=infos,
timestep=timestep,
timesteps=timesteps)
# post-interaction
for agent in self.agents:
agent.post_interaction(timestep=timestep, timesteps=timesteps)
# reset environments
with torch.no_grad():
if terminated.any() or truncated.any():
self.states, infos = self.env.reset()
else:
self.states = next_states
return next_states, rewards, terminated, truncated, infos
def eval(self, timestep: Optional[int] = None, timesteps: Optional[int] = None) -> \
Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, Any]:
"""Evaluate the agents sequentially
This method executes the following steps in loop:
- Compute actions (sequentially if num_simultaneous_agents > 1)
- Interact with the environments
- Render scene
- Reset environments
:param timestep: Current timestep (default: ``None``).
If None, the current timestep will be carried by an internal variable
:type timestep: int, optional
:param timesteps: Total number of timesteps (default: ``None``).
If None, the total number of timesteps is obtained from the trainer's config
:type timesteps: int, optional
:return: Observation, reward, terminated, truncated, info
:rtype: tuple of torch.Tensor and any other info
"""
if timestep is None:
self._timestep += 1
timestep = self._timestep
timesteps = self.timesteps if timesteps is None else timesteps
if self._progress is None:
self._progress = tqdm.tqdm(total=timesteps, disable=self.disable_progressbar, file=sys.stdout)
self._progress.update(n=1)
# set running mode
if self.num_simultaneous_agents > 1:
for agent in self.agents:
agent.set_running_mode("eval")
else:
self.agents.set_running_mode("eval")
# reset env
if self.states is None:
self.states, infos = self.env.reset()
with torch.no_grad():
if self.num_simultaneous_agents == 1:
# compute actions
actions = self.agents.act(self.states, timestep=timestep, timesteps=timesteps)[0]
else:
# compute actions
actions = torch.vstack([agent.act(self.states[scope[0]:scope[1]], timestep=timestep, timesteps=timesteps)[0] \
for agent, scope in zip(self.agents, self.agents_scope)])
# step the environments
next_states, rewards, terminated, truncated, infos = self.env.step(actions)
# render scene
if not self.headless:
self.env.render()
if self.num_simultaneous_agents == 1:
# write data to TensorBoard
self.agents.record_transition(states=self.states,
actions=actions,
rewards=rewards,
next_states=next_states,
terminated=terminated,
truncated=truncated,
infos=infos,
timestep=timestep,
timesteps=timesteps)
super(type(self.agents), self.agents).post_interaction(timestep=timestep, timesteps=timesteps)
else:
# write data to TensorBoard
for agent, scope in zip(self.agents, self.agents_scope):
agent.record_transition(states=self.states[scope[0]:scope[1]],
actions=actions[scope[0]:scope[1]],
rewards=rewards[scope[0]:scope[1]],
next_states=next_states[scope[0]:scope[1]],
terminated=terminated[scope[0]:scope[1]],
truncated=truncated[scope[0]:scope[1]],
infos=infos,
timestep=timestep,
timesteps=timesteps)
super(type(agent), agent).post_interaction(timestep=timestep, timesteps=timesteps)
# reset environments
if terminated.any() or truncated.any():
self.states, infos = self.env.reset()
else:
self.states = next_states
return next_states, rewards, terminated, truncated, infos
| 11,519 | Python | 42.308271 | 126 | 0.537894 |
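A hypothetical driver (the wrapped environment and constructed agent are assumed to exist already; they are not built here) showing how the step-by-step API is meant to be driven from user code:

from skrl.trainers.torch import StepTrainer

def run_step_by_step(env, agent, timesteps=100000):
    """env: an skrl-wrapped environment; agent: a constructed skrl agent (the trainer calls its init)."""
    trainer = StepTrainer(env=env, agents=agent, cfg={"timesteps": timesteps, "headless": True})
    for timestep in range(timesteps):
        # each call performs pre-interaction, action computation, environment stepping,
        # transition recording and post-interaction exactly once
        next_states, rewards, terminated, truncated, infos = trainer.train(timestep=timestep)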
Toni-SM/skrl/skrl/trainers/torch/parallel.py | from typing import List, Optional, Union
import copy
import sys
import tqdm
import torch
import torch.multiprocessing as mp
from skrl.agents.torch import Agent
from skrl.envs.wrappers.torch import Wrapper
from skrl.trainers.torch import Trainer
# [start-config-dict-torch]
PARALLEL_TRAINER_DEFAULT_CONFIG = {
"timesteps": 100000, # number of timesteps to train for
"headless": False, # whether to use headless mode (no rendering)
"disable_progressbar": False, # whether to disable the progressbar. If None, disable on non-TTY
"close_environment_at_exit": True, # whether to close the environment on normal program termination
}
# [end-config-dict-torch]
def fn_processor(process_index, *args):
print(f"[INFO] Processor {process_index}: started")
pipe = args[0][process_index]
queue = args[1][process_index]
barrier = args[2]
scope = args[3][process_index]
trainer_cfg = args[4]
agent = None
_states = None
_actions = None
# wait for the main process to start all the workers
barrier.wait()
while True:
msg = pipe.recv()
task = msg['task']
# terminate process
if task == 'terminate':
break
# initialize agent
elif task == 'init':
agent = queue.get()
agent.init(trainer_cfg=trainer_cfg)
print(f"[INFO] Processor {process_index}: init agent {type(agent).__name__} with scope {scope}")
barrier.wait()
# execute agent's pre-interaction step
elif task == "pre_interaction":
agent.pre_interaction(timestep=msg['timestep'], timesteps=msg['timesteps'])
barrier.wait()
# get agent's actions
elif task == "act":
_states = queue.get()[scope[0]:scope[1]]
with torch.no_grad():
_actions = agent.act(_states, timestep=msg['timestep'], timesteps=msg['timesteps'])[0]
if not _actions.is_cuda:
_actions.share_memory_()
queue.put(_actions)
barrier.wait()
# record agent's experience
elif task == "record_transition":
with torch.no_grad():
agent.record_transition(states=_states,
actions=_actions,
rewards=queue.get()[scope[0]:scope[1]],
next_states=queue.get()[scope[0]:scope[1]],
terminated=queue.get()[scope[0]:scope[1]],
truncated=queue.get()[scope[0]:scope[1]],
infos=queue.get(),
timestep=msg['timestep'],
timesteps=msg['timesteps'])
barrier.wait()
# execute agent's post-interaction step
elif task == "post_interaction":
agent.post_interaction(timestep=msg['timestep'], timesteps=msg['timesteps'])
barrier.wait()
# write data to TensorBoard (evaluation)
elif task == "eval-record_transition-post_interaction":
with torch.no_grad():
agent.record_transition(states=_states,
actions=_actions,
rewards=queue.get()[scope[0]:scope[1]],
next_states=queue.get()[scope[0]:scope[1]],
terminated=queue.get()[scope[0]:scope[1]],
truncated=queue.get()[scope[0]:scope[1]],
infos=queue.get(),
timestep=msg['timestep'],
timesteps=msg['timesteps'])
super(type(agent), agent).post_interaction(timestep=msg['timestep'], timesteps=msg['timesteps'])
barrier.wait()
class ParallelTrainer(Trainer):
def __init__(self,
env: Wrapper,
agents: Union[Agent, List[Agent]],
agents_scope: Optional[List[int]] = None,
cfg: Optional[dict] = None) -> None:
"""Parallel trainer
Train agents in parallel using multiple processes
:param env: Environment to train on
:type env: skrl.envs.wrappers.torch.Wrapper
:param agents: Agents to train
:type agents: Union[Agent, List[Agent]]
:param agents_scope: Number of environments for each agent to train on (default: ``None``)
:type agents_scope: tuple or list of int, optional
:param cfg: Configuration dictionary (default: ``None``).
See PARALLEL_TRAINER_DEFAULT_CONFIG for default values
:type cfg: dict, optional
"""
_cfg = copy.deepcopy(PARALLEL_TRAINER_DEFAULT_CONFIG)
_cfg.update(cfg if cfg is not None else {})
agents_scope = agents_scope if agents_scope is not None else []
super().__init__(env=env, agents=agents, agents_scope=agents_scope, cfg=_cfg)
mp.set_start_method(method='spawn', force=True)
def train(self) -> None:
"""Train the agents in parallel
This method executes the following steps in loop:
- Pre-interaction (parallel)
- Compute actions (in parallel)
- Interact with the environments
- Render scene
- Record transitions (in parallel)
- Post-interaction (in parallel)
- Reset environments
"""
# set running mode
if self.num_simultaneous_agents > 1:
for agent in self.agents:
agent.set_running_mode("train")
else:
self.agents.set_running_mode("train")
# non-simultaneous agents
if self.num_simultaneous_agents == 1:
self.agents.init(trainer_cfg=self.cfg)
# single-agent
if self.env.num_agents == 1:
self.single_agent_train()
# multi-agent
else:
self.multi_agent_train()
return
# initialize multiprocessing variables
queues = []
producer_pipes = []
consumer_pipes = []
barrier = mp.Barrier(self.num_simultaneous_agents + 1)
processes = []
for i in range(self.num_simultaneous_agents):
pipe_read, pipe_write = mp.Pipe(duplex=False)
producer_pipes.append(pipe_write)
consumer_pipes.append(pipe_read)
queues.append(mp.Queue())
# move tensors to shared memory
for agent in self.agents:
if agent.memory is not None:
agent.memory.share_memory()
for model in agent.models.values():
try:
model.share_memory()
except RuntimeError:
pass
# spawn and wait for all processes to start
for i in range(self.num_simultaneous_agents):
process = mp.Process(target=fn_processor,
args=(i, consumer_pipes, queues, barrier, self.agents_scope, self.cfg),
daemon=True)
processes.append(process)
process.start()
barrier.wait()
# initialize agents
for pipe, queue, agent in zip(producer_pipes, queues, self.agents):
pipe.send({'task': 'init'})
queue.put(agent)
barrier.wait()
# reset env
states, infos = self.env.reset()
if not states.is_cuda:
states.share_memory_()
for timestep in tqdm.tqdm(range(self.initial_timestep, self.timesteps), disable=self.disable_progressbar, file=sys.stdout):
# pre-interaction
for pipe in producer_pipes:
pipe.send({"task": "pre_interaction", "timestep": timestep, "timesteps": self.timesteps})
barrier.wait()
# compute actions
with torch.no_grad():
for pipe, queue in zip(producer_pipes, queues):
pipe.send({"task": "act", "timestep": timestep, "timesteps": self.timesteps})
queue.put(states)
barrier.wait()
actions = torch.vstack([queue.get() for queue in queues])
# step the environments
next_states, rewards, terminated, truncated, infos = self.env.step(actions)
# render scene
if not self.headless:
self.env.render()
# record the environments' transitions
if not rewards.is_cuda:
rewards.share_memory_()
if not next_states.is_cuda:
next_states.share_memory_()
if not terminated.is_cuda:
terminated.share_memory_()
if not truncated.is_cuda:
truncated.share_memory_()
for pipe, queue in zip(producer_pipes, queues):
pipe.send({"task": "record_transition", "timestep": timestep, "timesteps": self.timesteps})
queue.put(rewards)
queue.put(next_states)
queue.put(terminated)
queue.put(truncated)
queue.put(infos)
barrier.wait()
# post-interaction
for pipe in producer_pipes:
pipe.send({"task": "post_interaction", "timestep": timestep, "timesteps": self.timesteps})
barrier.wait()
# reset environments
with torch.no_grad():
if terminated.any() or truncated.any():
states, infos = self.env.reset()
if not states.is_cuda:
states.share_memory_()
else:
states.copy_(next_states)
# terminate processes
for pipe in producer_pipes:
pipe.send({"task": "terminate"})
# join processes
for process in processes:
process.join()
def eval(self) -> None:
"""Evaluate the agents sequentially
This method executes the following steps in loop:
- Compute actions (in parallel)
- Interact with the environments
- Render scene
- Reset environments
"""
# set running mode
if self.num_simultaneous_agents > 1:
for agent in self.agents:
agent.set_running_mode("eval")
else:
self.agents.set_running_mode("eval")
# non-simultaneous agents
if self.num_simultaneous_agents == 1:
self.agents.init(trainer_cfg=self.cfg)
# single-agent
if self.env.num_agents == 1:
self.single_agent_eval()
# multi-agent
else:
self.multi_agent_eval()
return
# initialize multiprocessing variables
queues = []
producer_pipes = []
consumer_pipes = []
barrier = mp.Barrier(self.num_simultaneous_agents + 1)
processes = []
for i in range(self.num_simultaneous_agents):
pipe_read, pipe_write = mp.Pipe(duplex=False)
producer_pipes.append(pipe_write)
consumer_pipes.append(pipe_read)
queues.append(mp.Queue())
# move tensors to shared memory
for agent in self.agents:
if agent.memory is not None:
agent.memory.share_memory()
for model in agent.models.values():
if model is not None:
try:
model.share_memory()
except RuntimeError:
pass
# spawn and wait for all processes to start
for i in range(self.num_simultaneous_agents):
process = mp.Process(target=fn_processor,
args=(i, consumer_pipes, queues, barrier, self.agents_scope, self.cfg),
daemon=True)
processes.append(process)
process.start()
barrier.wait()
# initialize agents
for pipe, queue, agent in zip(producer_pipes, queues, self.agents):
pipe.send({'task': 'init'})
queue.put(agent)
barrier.wait()
# reset env
states, infos = self.env.reset()
if not states.is_cuda:
states.share_memory_()
for timestep in tqdm.tqdm(range(self.initial_timestep, self.timesteps), disable=self.disable_progressbar, file=sys.stdout):
# compute actions
with torch.no_grad():
for pipe, queue in zip(producer_pipes, queues):
pipe.send({"task": "act", "timestep": timestep, "timesteps": self.timesteps})
queue.put(states)
barrier.wait()
actions = torch.vstack([queue.get() for queue in queues])
# step the environments
next_states, rewards, terminated, truncated, infos = self.env.step(actions)
# render scene
if not self.headless:
self.env.render()
# write data to TensorBoard
if not rewards.is_cuda:
rewards.share_memory_()
if not next_states.is_cuda:
next_states.share_memory_()
if not terminated.is_cuda:
terminated.share_memory_()
if not truncated.is_cuda:
truncated.share_memory_()
for pipe, queue in zip(producer_pipes, queues):
pipe.send({"task": "eval-record_transition-post_interaction",
"timestep": timestep,
"timesteps": self.timesteps})
queue.put(rewards)
queue.put(next_states)
queue.put(terminated)
queue.put(truncated)
queue.put(infos)
barrier.wait()
# reset environments
if terminated.any() or truncated.any():
states, infos = self.env.reset()
if not states.is_cuda:
states.share_memory_()
else:
states.copy_(next_states)
# terminate processes
for pipe in producer_pipes:
pipe.send({"task": "terminate"})
# join processes
for process in processes:
process.join()
| 14,758 | Python | 36.176322 | 131 | 0.521751 |
Toni-SM/skrl/skrl/trainers/torch/sequential.py | from typing import List, Optional, Union
import copy
import sys
import tqdm
import torch
from skrl.agents.torch import Agent
from skrl.envs.wrappers.torch import Wrapper
from skrl.trainers.torch import Trainer
# [start-config-dict-torch]
SEQUENTIAL_TRAINER_DEFAULT_CONFIG = {
"timesteps": 100000, # number of timesteps to train for
"headless": False, # whether to use headless mode (no rendering)
"disable_progressbar": False, # whether to disable the progressbar. If None, disable on non-TTY
"close_environment_at_exit": True, # whether to close the environment on normal program termination
}
# [end-config-dict-torch]
class SequentialTrainer(Trainer):
def __init__(self,
env: Wrapper,
agents: Union[Agent, List[Agent]],
agents_scope: Optional[List[int]] = None,
cfg: Optional[dict] = None) -> None:
"""Sequential trainer
Train agents sequentially (i.e., one after the other in each interaction with the environment)
:param env: Environment to train on
:type env: skrl.envs.wrappers.torch.Wrapper
:param agents: Agents to train
:type agents: Union[Agent, List[Agent]]
:param agents_scope: Number of environments for each agent to train on (default: ``None``)
:type agents_scope: tuple or list of int, optional
:param cfg: Configuration dictionary (default: ``None``).
See SEQUENTIAL_TRAINER_DEFAULT_CONFIG for default values
:type cfg: dict, optional
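        A minimal usage sketch (``env`` and ``agents`` are assumed to be already created
        and wrapped)::

            >>> from skrl.trainers.torch import SequentialTrainer
            >>>
            >>> # env: skrl.envs.wrappers.torch.Wrapper, agents: an Agent or a list of Agents
            >>> trainer = SequentialTrainer(env=env, agents=agents, cfg={"timesteps": 10000})
            >>> trainer.train()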
"""
_cfg = copy.deepcopy(SEQUENTIAL_TRAINER_DEFAULT_CONFIG)
_cfg.update(cfg if cfg is not None else {})
agents_scope = agents_scope if agents_scope is not None else []
super().__init__(env=env, agents=agents, agents_scope=agents_scope, cfg=_cfg)
# init agents
if self.num_simultaneous_agents > 1:
for agent in self.agents:
agent.init(trainer_cfg=self.cfg)
else:
self.agents.init(trainer_cfg=self.cfg)
def train(self) -> None:
"""Train the agents sequentially
This method executes the following steps in loop:
- Pre-interaction (sequentially)
- Compute actions (sequentially)
- Interact with the environments
- Render scene
- Record transitions (sequentially)
- Post-interaction (sequentially)
- Reset environments
"""
# set running mode
if self.num_simultaneous_agents > 1:
for agent in self.agents:
agent.set_running_mode("train")
else:
self.agents.set_running_mode("train")
# non-simultaneous agents
if self.num_simultaneous_agents == 1:
# single-agent
if self.env.num_agents == 1:
self.single_agent_train()
# multi-agent
else:
self.multi_agent_train()
return
# reset env
states, infos = self.env.reset()
for timestep in tqdm.tqdm(range(self.initial_timestep, self.timesteps), disable=self.disable_progressbar, file=sys.stdout):
# pre-interaction
for agent in self.agents:
agent.pre_interaction(timestep=timestep, timesteps=self.timesteps)
# compute actions
with torch.no_grad():
actions = torch.vstack([agent.act(states[scope[0]:scope[1]], timestep=timestep, timesteps=self.timesteps)[0] \
for agent, scope in zip(self.agents, self.agents_scope)])
# step the environments
next_states, rewards, terminated, truncated, infos = self.env.step(actions)
# render scene
if not self.headless:
self.env.render()
# record the environments' transitions
for agent, scope in zip(self.agents, self.agents_scope):
agent.record_transition(states=states[scope[0]:scope[1]],
actions=actions[scope[0]:scope[1]],
rewards=rewards[scope[0]:scope[1]],
next_states=next_states[scope[0]:scope[1]],
terminated=terminated[scope[0]:scope[1]],
truncated=truncated[scope[0]:scope[1]],
infos=infos,
timestep=timestep,
timesteps=self.timesteps)
# post-interaction
for agent in self.agents:
agent.post_interaction(timestep=timestep, timesteps=self.timesteps)
# reset environments
with torch.no_grad():
if terminated.any() or truncated.any():
states, infos = self.env.reset()
else:
states = next_states
def eval(self) -> None:
"""Evaluate the agents sequentially
This method executes the following steps in loop:
- Compute actions (sequentially)
- Interact with the environments
- Render scene
- Reset environments
"""
# set running mode
if self.num_simultaneous_agents > 1:
for agent in self.agents:
agent.set_running_mode("eval")
else:
self.agents.set_running_mode("eval")
# non-simultaneous agents
if self.num_simultaneous_agents == 1:
# single-agent
if self.env.num_agents == 1:
self.single_agent_eval()
# multi-agent
else:
self.multi_agent_eval()
return
# reset env
states, infos = self.env.reset()
for timestep in tqdm.tqdm(range(self.initial_timestep, self.timesteps), disable=self.disable_progressbar, file=sys.stdout):
# compute actions
with torch.no_grad():
actions = torch.vstack([agent.act(states[scope[0]:scope[1]], timestep=timestep, timesteps=self.timesteps)[0] \
for agent, scope in zip(self.agents, self.agents_scope)])
# step the environments
next_states, rewards, terminated, truncated, infos = self.env.step(actions)
# render scene
if not self.headless:
self.env.render()
# write data to TensorBoard
for agent, scope in zip(self.agents, self.agents_scope):
agent.record_transition(states=states[scope[0]:scope[1]],
actions=actions[scope[0]:scope[1]],
rewards=rewards[scope[0]:scope[1]],
next_states=next_states[scope[0]:scope[1]],
terminated=terminated[scope[0]:scope[1]],
truncated=truncated[scope[0]:scope[1]],
infos=infos,
timestep=timestep,
timesteps=self.timesteps)
super(type(agent), agent).post_interaction(timestep=timestep, timesteps=self.timesteps)
# reset environments
if terminated.any() or truncated.any():
states, infos = self.env.reset()
else:
states = next_states
| 7,732 | Python | 39.276041 | 131 | 0.535049 |
Toni-SM/skrl/skrl/trainers/jax/__init__.py | from skrl.trainers.jax.base import Trainer, generate_equally_spaced_scopes # isort:skip
from skrl.trainers.jax.sequential import SequentialTrainer
from skrl.trainers.jax.step import StepTrainer
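# Typical usage (illustrative sketch; ``env`` and ``agents`` are assumed to be already created):
#
#   from skrl.trainers.jax import SequentialTrainer
#   trainer = SequentialTrainer(env=env, agents=agents)
#   trainer.train()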
| 196 | Python | 38.399992 | 88 | 0.836735 |
Toni-SM/skrl/skrl/models/torch/base.py | from typing import Any, Mapping, Optional, Sequence, Tuple, Union
import collections
import gym
import gymnasium
import numpy as np
import torch
from skrl import logger
class Model(torch.nn.Module):
def __init__(self,
observation_space: Union[int, Sequence[int], gym.Space, gymnasium.Space],
action_space: Union[int, Sequence[int], gym.Space, gymnasium.Space],
device: Optional[Union[str, torch.device]] = None) -> None:
"""Base class representing a function approximator
The following properties are defined:
- ``device`` (torch.device): Device to be used for the computations
- ``observation_space`` (int, sequence of int, gym.Space, gymnasium.Space): Observation/state space
- ``action_space`` (int, sequence of int, gym.Space, gymnasium.Space): Action space
- ``num_observations`` (int): Number of elements in the observation/state space
- ``num_actions`` (int): Number of elements in the action space
:param observation_space: Observation/state space or shape.
The ``num_observations`` property will contain the size of that space
:type observation_space: int, sequence of int, gym.Space, gymnasium.Space
:param action_space: Action space or shape.
The ``num_actions`` property will contain the size of that space
:type action_space: int, sequence of int, gym.Space, gymnasium.Space
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or torch.device, optional
Custom models should override the ``act`` method::
            import torch
            import torch.nn as nn
            import torch.nn.functional as F
            from skrl.models.torch import Model
class CustomModel(Model):
def __init__(self, observation_space, action_space, device="cuda:0"):
Model.__init__(self, observation_space, action_space, device)
self.layer_1 = nn.Linear(self.num_observations, 64)
self.layer_2 = nn.Linear(64, self.num_actions)
def act(self, inputs, role=""):
x = F.relu(self.layer_1(inputs["states"]))
x = F.relu(self.layer_2(x))
return x, None, {}
"""
super(Model, self).__init__()
self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") if device is None else torch.device(device)
self.observation_space = observation_space
self.action_space = action_space
self.num_observations = None if observation_space is None else self._get_space_size(observation_space)
self.num_actions = None if action_space is None else self._get_space_size(action_space)
self._random_distribution = None
def _get_space_size(self,
space: Union[int, Sequence[int], gym.Space, gymnasium.Space],
number_of_elements: bool = True) -> int:
"""Get the size (number of elements) of a space
:param space: Space or shape from which to obtain the number of elements
:type space: int, sequence of int, gym.Space, or gymnasium.Space
:param number_of_elements: Whether the number of elements occupied by the space is returned (default: ``True``).
If ``False``, the shape of the space is returned.
It only affects Discrete and MultiDiscrete spaces
:type number_of_elements: bool, optional
:raises ValueError: If the space is not supported
:return: Size of the space (number of elements)
:rtype: int
Example::
# from int
>>> model._get_space_size(2)
2
# from sequence of int
>>> model._get_space_size([2, 3])
6
# Box space
>>> space = gym.spaces.Box(low=-1, high=1, shape=(2, 3))
>>> model._get_space_size(space)
6
# Discrete space
>>> space = gym.spaces.Discrete(4)
>>> model._get_space_size(space)
4
>>> model._get_space_size(space, number_of_elements=False)
1
# MultiDiscrete space
>>> space = gym.spaces.MultiDiscrete([5, 3, 2])
>>> model._get_space_size(space)
10
>>> model._get_space_size(space, number_of_elements=False)
3
# Dict space
>>> space = gym.spaces.Dict({'a': gym.spaces.Box(low=-1, high=1, shape=(2, 3)),
... 'b': gym.spaces.Discrete(4)})
>>> model._get_space_size(space)
10
>>> model._get_space_size(space, number_of_elements=False)
7
"""
size = None
if type(space) in [int, float]:
size = space
elif type(space) in [tuple, list]:
size = np.prod(space)
elif issubclass(type(space), gym.Space):
if issubclass(type(space), gym.spaces.Discrete):
if number_of_elements:
size = space.n
else:
size = 1
elif issubclass(type(space), gym.spaces.MultiDiscrete):
if number_of_elements:
size = np.sum(space.nvec)
else:
size = space.nvec.shape[0]
elif issubclass(type(space), gym.spaces.Box):
size = np.prod(space.shape)
elif issubclass(type(space), gym.spaces.Dict):
size = sum([self._get_space_size(space.spaces[key], number_of_elements) for key in space.spaces])
elif issubclass(type(space), gymnasium.Space):
if issubclass(type(space), gymnasium.spaces.Discrete):
if number_of_elements:
size = space.n
else:
size = 1
elif issubclass(type(space), gymnasium.spaces.MultiDiscrete):
if number_of_elements:
size = np.sum(space.nvec)
else:
size = space.nvec.shape[0]
elif issubclass(type(space), gymnasium.spaces.Box):
size = np.prod(space.shape)
elif issubclass(type(space), gymnasium.spaces.Dict):
size = sum([self._get_space_size(space.spaces[key], number_of_elements) for key in space.spaces])
if size is None:
raise ValueError(f"Space type {type(space)} not supported")
return int(size)
def tensor_to_space(self,
tensor: torch.Tensor,
space: Union[gym.Space, gymnasium.Space],
start: int = 0) -> Union[torch.Tensor, dict]:
"""Map a flat tensor to a Gym/Gymnasium space
The mapping is done in the following way:
- Tensors belonging to Discrete spaces are returned without modification
- Tensors belonging to Box spaces are reshaped to the corresponding space shape
keeping the first dimension (number of samples) as they are
- Tensors belonging to Dict spaces are mapped into a dictionary with the same keys as the original space
:param tensor: Tensor to map from
:type tensor: torch.Tensor
:param space: Space to map the tensor to
:type space: gym.Space or gymnasium.Space
:param start: Index of the first element of the tensor to map (default: ``0``)
:type start: int, optional
:raises ValueError: If the space is not supported
:return: Mapped tensor or dictionary
:rtype: torch.Tensor or dict
Example::
>>> space = gym.spaces.Dict({'a': gym.spaces.Box(low=-1, high=1, shape=(2, 3)),
... 'b': gym.spaces.Discrete(4)})
>>> tensor = torch.tensor([[-0.3, -0.2, -0.1, 0.1, 0.2, 0.3, 2]])
>>>
>>> model.tensor_to_space(tensor, space)
{'a': tensor([[[-0.3000, -0.2000, -0.1000],
[ 0.1000, 0.2000, 0.3000]]]),
'b': tensor([[2.]])}
"""
if issubclass(type(space), gym.Space):
if issubclass(type(space), gym.spaces.Discrete):
return tensor
elif issubclass(type(space), gym.spaces.Box):
return tensor.view(tensor.shape[0], *space.shape)
elif issubclass(type(space), gym.spaces.Dict):
output = {}
for k in sorted(space.keys()):
end = start + self._get_space_size(space[k], number_of_elements=False)
output[k] = self.tensor_to_space(tensor[:, start:end], space[k], end)
start = end
return output
else:
if issubclass(type(space), gymnasium.spaces.Discrete):
return tensor
elif issubclass(type(space), gymnasium.spaces.Box):
return tensor.view(tensor.shape[0], *space.shape)
elif issubclass(type(space), gymnasium.spaces.Dict):
output = {}
for k in sorted(space.keys()):
end = start + self._get_space_size(space[k], number_of_elements=False)
output[k] = self.tensor_to_space(tensor[:, start:end], space[k], end)
start = end
return output
raise ValueError(f"Space type {type(space)} not supported")
def random_act(self,
inputs: Mapping[str, Union[torch.Tensor, Any]],
role: str = "") -> Tuple[torch.Tensor, None, Mapping[str, Union[torch.Tensor, Any]]]:
"""Act randomly according to the action space
:param inputs: Model inputs. The most common keys are:
- ``"states"``: state of the environment used to make the decision
- ``"taken_actions"``: actions taken by the policy for the given states
:type inputs: dict where the values are typically torch.Tensor
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
:raises NotImplementedError: Unsupported action space
:return: Model output. The first component is the action to be taken by the agent
:rtype: tuple of torch.Tensor, None, and dict
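        Example (illustrative sketch; the action shape depends on the action space)::

            >>> # e.g. for a Discrete(4) action space and a batch of 2 sampled states
            >>> actions, _, outputs = model.random_act({"states": states})
            >>> print(actions.shape, outputs)
            torch.Size([2, 1]) {}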
"""
# discrete action space (Discrete)
if issubclass(type(self.action_space), gym.spaces.Discrete) or issubclass(type(self.action_space), gymnasium.spaces.Discrete):
return torch.randint(self.action_space.n, (inputs["states"].shape[0], 1), device=self.device), None, {}
# continuous action space (Box)
elif issubclass(type(self.action_space), gym.spaces.Box) or issubclass(type(self.action_space), gymnasium.spaces.Box):
if self._random_distribution is None:
self._random_distribution = torch.distributions.uniform.Uniform(
low=torch.tensor(self.action_space.low[0], device=self.device, dtype=torch.float32),
high=torch.tensor(self.action_space.high[0], device=self.device, dtype=torch.float32))
return self._random_distribution.sample(sample_shape=(inputs["states"].shape[0], self.num_actions)), None, {}
else:
raise NotImplementedError(f"Action space type ({type(self.action_space)}) not supported")
def init_parameters(self, method_name: str = "normal_", *args, **kwargs) -> None:
"""Initialize the model parameters according to the specified method name
Method names are from the `torch.nn.init <https://pytorch.org/docs/stable/nn.init.html>`_ module.
Allowed method names are *uniform_*, *normal_*, *constant_*, etc.
:param method_name: `torch.nn.init <https://pytorch.org/docs/stable/nn.init.html>`_ method name (default: ``"normal_"``)
:type method_name: str, optional
:param args: Positional arguments of the method to be called
:type args: tuple, optional
:param kwargs: Key-value arguments of the method to be called
:type kwargs: dict, optional
Example::
# initialize all parameters with an orthogonal distribution with a gain of 0.5
>>> model.init_parameters("orthogonal_", gain=0.5)
# initialize all parameters as a sparse matrix with a sparsity of 0.1
>>> model.init_parameters("sparse_", sparsity=0.1)
"""
for parameters in self.parameters():
exec(f"torch.nn.init.{method_name}(parameters, *args, **kwargs)")
def init_weights(self, method_name: str = "orthogonal_", *args, **kwargs) -> None:
"""Initialize the model weights according to the specified method name
Method names are from the `torch.nn.init <https://pytorch.org/docs/stable/nn.init.html>`_ module.
Allowed method names are *uniform_*, *normal_*, *constant_*, etc.
The following layers will be initialized:
- torch.nn.Linear
:param method_name: `torch.nn.init <https://pytorch.org/docs/stable/nn.init.html>`_ method name (default: ``"orthogonal_"``)
:type method_name: str, optional
:param args: Positional arguments of the method to be called
:type args: tuple, optional
:param kwargs: Key-value arguments of the method to be called
:type kwargs: dict, optional
Example::
# initialize all weights with uniform distribution in range [-0.1, 0.1]
>>> model.init_weights(method_name="uniform_", a=-0.1, b=0.1)
# initialize all weights with normal distribution with mean 0 and standard deviation 0.25
>>> model.init_weights(method_name="normal_", mean=0.0, std=0.25)
"""
def _update_weights(module, method_name, args, kwargs):
for layer in module:
if isinstance(layer, torch.nn.Sequential):
_update_weights(layer, method_name, args, kwargs)
elif isinstance(layer, torch.nn.Linear):
exec(f"torch.nn.init.{method_name}(layer.weight, *args, **kwargs)")
_update_weights(self.children(), method_name, args, kwargs)
def init_biases(self, method_name: str = "constant_", *args, **kwargs) -> None:
"""Initialize the model biases according to the specified method name
Method names are from the `torch.nn.init <https://pytorch.org/docs/stable/nn.init.html>`_ module.
Allowed method names are *uniform_*, *normal_*, *constant_*, etc.
The following layers will be initialized:
- torch.nn.Linear
:param method_name: `torch.nn.init <https://pytorch.org/docs/stable/nn.init.html>`_ method name (default: ``"constant_"``)
:type method_name: str, optional
:param args: Positional arguments of the method to be called
:type args: tuple, optional
:param kwargs: Key-value arguments of the method to be called
:type kwargs: dict, optional
Example::
# initialize all biases with a constant value (0)
>>> model.init_biases(method_name="constant_", val=0)
# initialize all biases with normal distribution with mean 0 and standard deviation 0.25
>>> model.init_biases(method_name="normal_", mean=0.0, std=0.25)
"""
def _update_biases(module, method_name, args, kwargs):
for layer in module:
if isinstance(layer, torch.nn.Sequential):
_update_biases(layer, method_name, args, kwargs)
elif isinstance(layer, torch.nn.Linear):
exec(f"torch.nn.init.{method_name}(layer.bias, *args, **kwargs)")
_update_biases(self.children(), method_name, args, kwargs)
def get_specification(self) -> Mapping[str, Any]:
"""Returns the specification of the model
The following keys are used by the agents for initialization:
- ``"rnn"``: Recurrent Neural Network (RNN) specification for RNN, LSTM and GRU layers/cells
- ``"sizes"``: List of RNN shapes (number of layers, number of environments, number of features in the RNN state).
There must be as many tuples as there are states in the recurrent layer/cell. E.g., LSTM has 2 states (hidden and cell).
:return: Dictionary containing advanced specification of the model
:rtype: dict
Example::
# model with a LSTM layer.
# - number of layers: 1
# - number of environments: 4
# - number of features in the RNN state: 64
>>> model.get_specification()
{'rnn': {'sizes': [(1, 4, 64), (1, 4, 64)]}}
"""
return {}
def forward(self,
inputs: Mapping[str, Union[torch.Tensor, Any]],
role: str = "") -> Tuple[torch.Tensor, Union[torch.Tensor, None], Mapping[str, Union[torch.Tensor, Any]]]:
"""Forward pass of the model
This method calls the ``.act()`` method and returns its outputs
:param inputs: Model inputs. The most common keys are:
- ``"states"``: state of the environment used to make the decision
- ``"taken_actions"``: actions taken by the policy for the given states
:type inputs: dict where the values are typically torch.Tensor
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
:return: Model output. The first component is the action to be taken by the agent.
The second component is the log of the probability density function for stochastic models
or None for deterministic models. The third component is a dictionary containing extra output values
:rtype: tuple of torch.Tensor, torch.Tensor or None, and dict
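        Example (sketch; calling the model instance invokes this method)::

            >>> # equivalent to model.act({"states": states})
            >>> actions, log_prob, outputs = model({"states": states})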
"""
return self.act(inputs, role)
def compute(self,
inputs: Mapping[str, Union[torch.Tensor, Any]],
role: str = "") -> Tuple[Union[torch.Tensor, Mapping[str, Union[torch.Tensor, Any]]]]:
"""Define the computation performed (to be implemented by the inheriting classes) by the models
:param inputs: Model inputs. The most common keys are:
- ``"states"``: state of the environment used to make the decision
- ``"taken_actions"``: actions taken by the policy for the given states
:type inputs: dict where the values are typically torch.Tensor
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
:raises NotImplementedError: Child class must implement this method
:return: Computation performed by the models
:rtype: tuple of torch.Tensor and dict
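        Example (sketch of a typical override in an inheriting class; ``self.net`` is assumed
        to be defined in that class's constructor)::

            def compute(self, inputs, role=""):
                # deterministic-style output: a tensor and an (empty) outputs dictionary
                return self.net(inputs["states"]), {}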
"""
raise NotImplementedError("The computation performed by the models (.compute()) is not implemented")
def act(self,
inputs: Mapping[str, Union[torch.Tensor, Any]],
role: str = "") -> Tuple[torch.Tensor, Union[torch.Tensor, None], Mapping[str, Union[torch.Tensor, Any]]]:
"""Act according to the specified behavior (to be implemented by the inheriting classes)
Agents will call this method to obtain the decision to be taken given the state of the environment.
        This method is currently implemented by the helper mixins (e.g. **GaussianMixin**).
The classes that inherit from the latter must only implement the ``.compute()`` method
:param inputs: Model inputs. The most common keys are:
- ``"states"``: state of the environment used to make the decision
- ``"taken_actions"``: actions taken by the policy for the given states
:type inputs: dict where the values are typically torch.Tensor
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
:raises NotImplementedError: Child class must implement this method
:return: Model output. The first component is the action to be taken by the agent.
The second component is the log of the probability density function for stochastic models
or None for deterministic models. The third component is a dictionary containing extra output values
:rtype: tuple of torch.Tensor, torch.Tensor or None, and dict
"""
logger.warning("Make sure to place Mixins before Model during model definition")
raise NotImplementedError("The action to be taken by the agent (.act()) is not implemented")
def set_mode(self, mode: str) -> None:
"""Set the model mode (training or evaluation)
:param mode: Mode: ``"train"`` for training or ``"eval"`` for evaluation.
See `torch.nn.Module.train <https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.train>`_
:type mode: str
:raises ValueError: If the mode is not ``"train"`` or ``"eval"``
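        Example::

            >>> # switch the model to evaluation mode
            >>> model.set_mode("eval")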
"""
if mode == "train":
self.train(True)
elif mode == "eval":
self.train(False)
else:
raise ValueError("Invalid mode. Use 'train' for training or 'eval' for evaluation")
def save(self, path: str, state_dict: Optional[dict] = None) -> None:
"""Save the model to the specified path
:param path: Path to save the model to
:type path: str
:param state_dict: State dictionary to save (default: ``None``).
If None, the model's state_dict will be saved
:type state_dict: dict, optional
Example::
# save the current model to the specified path
>>> model.save("/tmp/model.pt")
# save an older version of the model to the specified path
>>> old_state_dict = copy.deepcopy(model.state_dict())
>>> # ...
>>> model.save("/tmp/model.pt", old_state_dict)
"""
torch.save(self.state_dict() if state_dict is None else state_dict, path)
def load(self, path: str) -> None:
"""Load the model from the specified path
The final storage device is determined by the constructor of the model
:param path: Path to load the model from
:type path: str
Example::
# load the model onto the CPU
>>> model = Model(observation_space, action_space, device="cpu")
>>> model.load("model.pt")
# load the model onto the GPU 1
>>> model = Model(observation_space, action_space, device="cuda:1")
>>> model.load("model.pt")
"""
self.load_state_dict(torch.load(path, map_location=self.device))
self.eval()
def migrate(self,
state_dict: Optional[Mapping[str, torch.Tensor]] = None,
path: Optional[str] = None,
name_map: Mapping[str, str] = {},
auto_mapping: bool = True,
verbose: bool = False) -> bool:
"""Migrate the specified extrernal model's state dict to the current model
The final storage device is determined by the constructor of the model
Only one of ``state_dict`` or ``path`` can be specified.
        The ``path`` parameter allows automatically loading the ``state_dict`` only from files generated
by the *rl_games* and *stable-baselines3* libraries at the moment
        For ambiguous models (where two or more parameters of the source or current model have equal shape),
        it is necessary to define the ``name_map``, at least for those parameters, to perform the migration successfully
:param state_dict: External model's state dict to migrate from (default: ``None``)
:type state_dict: Mapping[str, torch.Tensor], optional
:param path: Path to the external checkpoint to migrate from (default: ``None``)
:type path: str, optional
:param name_map: Name map to use for the migration (default: ``{}``).
Keys are the current parameter names and values are the external parameter names
:type name_map: Mapping[str, str], optional
:param auto_mapping: Automatically map the external state dict to the current state dict (default: ``True``)
:type auto_mapping: bool, optional
:param verbose: Show model names and migration (default: ``False``)
:type verbose: bool, optional
:raises ValueError: If neither or both of ``state_dict`` and ``path`` parameters have been set
:raises ValueError: If the correct file type cannot be identified from the ``path`` parameter
:return: True if the migration was successful, False otherwise.
Migration is successful if all parameters of the current model are found in the external model
:rtype: bool
Example::
# migrate a rl_games checkpoint with unambiguous state_dict
>>> model.migrate(path="./runs/Ant/nn/Ant.pth")
True
# migrate a rl_games checkpoint with ambiguous state_dict
>>> model.migrate(path="./runs/Cartpole/nn/Cartpole.pth", verbose=False)
[skrl:WARNING] Ambiguous match for log_std_parameter <- [value_mean_std.running_mean, value_mean_std.running_var, a2c_network.sigma]
[skrl:WARNING] Ambiguous match for net.0.bias <- [a2c_network.actor_mlp.0.bias, a2c_network.actor_mlp.2.bias]
[skrl:WARNING] Ambiguous match for net.2.bias <- [a2c_network.actor_mlp.0.bias, a2c_network.actor_mlp.2.bias]
[skrl:WARNING] Ambiguous match for net.4.weight <- [a2c_network.value.weight, a2c_network.mu.weight]
[skrl:WARNING] Ambiguous match for net.4.bias <- [a2c_network.value.bias, a2c_network.mu.bias]
[skrl:WARNING] Multiple use of a2c_network.actor_mlp.0.bias -> [net.0.bias, net.2.bias]
[skrl:WARNING] Multiple use of a2c_network.actor_mlp.2.bias -> [net.0.bias, net.2.bias]
False
>>> name_map = {"log_std_parameter": "a2c_network.sigma",
... "net.0.bias": "a2c_network.actor_mlp.0.bias",
... "net.2.bias": "a2c_network.actor_mlp.2.bias",
... "net.4.weight": "a2c_network.mu.weight",
... "net.4.bias": "a2c_network.mu.bias"}
>>> model.migrate(path="./runs/Cartpole/nn/Cartpole.pth", name_map=name_map, verbose=True)
[skrl:INFO] Models
[skrl:INFO] |-- current: 7 items
[skrl:INFO] | |-- log_std_parameter : torch.Size([1])
[skrl:INFO] | |-- net.0.weight : torch.Size([32, 4])
[skrl:INFO] | |-- net.0.bias : torch.Size([32])
[skrl:INFO] | |-- net.2.weight : torch.Size([32, 32])
[skrl:INFO] | |-- net.2.bias : torch.Size([32])
[skrl:INFO] | |-- net.4.weight : torch.Size([1, 32])
[skrl:INFO] | |-- net.4.bias : torch.Size([1])
[skrl:INFO] |-- source: 15 items
[skrl:INFO] | |-- value_mean_std.running_mean : torch.Size([1])
[skrl:INFO] | |-- value_mean_std.running_var : torch.Size([1])
[skrl:INFO] | |-- value_mean_std.count : torch.Size([])
[skrl:INFO] | |-- running_mean_std.running_mean : torch.Size([4])
[skrl:INFO] | |-- running_mean_std.running_var : torch.Size([4])
[skrl:INFO] | |-- running_mean_std.count : torch.Size([])
[skrl:INFO] | |-- a2c_network.sigma : torch.Size([1])
[skrl:INFO] | |-- a2c_network.actor_mlp.0.weight : torch.Size([32, 4])
[skrl:INFO] | |-- a2c_network.actor_mlp.0.bias : torch.Size([32])
[skrl:INFO] | |-- a2c_network.actor_mlp.2.weight : torch.Size([32, 32])
[skrl:INFO] | |-- a2c_network.actor_mlp.2.bias : torch.Size([32])
[skrl:INFO] | |-- a2c_network.value.weight : torch.Size([1, 32])
[skrl:INFO] | |-- a2c_network.value.bias : torch.Size([1])
[skrl:INFO] | |-- a2c_network.mu.weight : torch.Size([1, 32])
[skrl:INFO] | |-- a2c_network.mu.bias : torch.Size([1])
[skrl:INFO] Migration
[skrl:INFO] |-- map: log_std_parameter <- a2c_network.sigma
[skrl:INFO] |-- auto: net.0.weight <- a2c_network.actor_mlp.0.weight
[skrl:INFO] |-- map: net.0.bias <- a2c_network.actor_mlp.0.bias
[skrl:INFO] |-- auto: net.2.weight <- a2c_network.actor_mlp.2.weight
[skrl:INFO] |-- map: net.2.bias <- a2c_network.actor_mlp.2.bias
[skrl:INFO] |-- map: net.4.weight <- a2c_network.mu.weight
[skrl:INFO] |-- map: net.4.bias <- a2c_network.mu.bias
False
# migrate a stable-baselines3 checkpoint with unambiguous state_dict
>>> model.migrate(path="./ddpg_pendulum.zip")
True
# migrate from any exported model by loading its state_dict (unambiguous state_dict)
>>> state_dict = torch.load("./external_model.pt")
>>> model.migrate(state_dict=state_dict)
True
"""
if (state_dict is not None) + (path is not None) != 1:
raise ValueError("Exactly one of state_dict or path may be specified")
# load state_dict from path
if path is not None:
state_dict = {}
# rl_games checkpoint
if path.endswith(".pt") or path.endswith(".pth"):
checkpoint = torch.load(path, map_location=self.device)
if type(checkpoint) is dict:
state_dict = checkpoint.get("model", {})
# stable-baselines3
elif path.endswith(".zip"):
import zipfile
try:
archive = zipfile.ZipFile(path, 'r')
with archive.open('policy.pth', mode="r") as file:
state_dict = torch.load(file, map_location=self.device)
except KeyError as e:
logger.warning(str(e))
state_dict = {}
else:
raise ValueError("Cannot identify file type")
# show state_dict
if verbose:
logger.info("Models")
logger.info(f" |-- current: {len(self.state_dict().keys())} items")
for name, tensor in self.state_dict().items():
logger.info(f" | |-- {name} : {list(tensor.shape)}")
logger.info(f" |-- source: {len(state_dict.keys())} items")
for name, tensor in state_dict.items():
logger.info(f" | |-- {name} : {list(tensor.shape)}")
logger.info("Migration")
# migrate the state_dict to current model
new_state_dict = collections.OrderedDict()
match_counter = collections.defaultdict(list)
used_counter = collections.defaultdict(list)
for name, tensor in self.state_dict().items():
for external_name, external_tensor in state_dict.items():
# mapped names
if name_map.get(name, "") == external_name:
if tensor.shape == external_tensor.shape:
new_state_dict[name] = external_tensor
match_counter[name].append(external_name)
used_counter[external_name].append(name)
if verbose:
logger.info(f" |-- map: {name} <- {external_name}")
break
else:
logger.warning(f"Shape mismatch for {name} <- {external_name} : {tensor.shape} != {external_tensor.shape}")
# auto-mapped names
if auto_mapping and name not in name_map:
if tensor.shape == external_tensor.shape:
if name.endswith(".weight"):
if external_name.endswith(".weight"):
new_state_dict[name] = external_tensor
match_counter[name].append(external_name)
used_counter[external_name].append(name)
if verbose:
logger.info(f" |-- auto: {name} <- {external_name}")
elif name.endswith(".bias"):
if external_name.endswith(".bias"):
new_state_dict[name] = external_tensor
match_counter[name].append(external_name)
used_counter[external_name].append(name)
if verbose:
logger.info(f" |-- auto: {name} <- {external_name}")
else:
if not external_name.endswith(".weight") and not external_name.endswith(".bias"):
new_state_dict[name] = external_tensor
match_counter[name].append(external_name)
used_counter[external_name].append(name)
if verbose:
logger.info(f" |-- auto: {name} <- {external_name}")
# show ambiguous matches
status = True
for name, tensor in self.state_dict().items():
if len(match_counter.get(name, [])) > 1:
logger.warning("Ambiguous match for {} <- [{}]".format(name, ", ".join(match_counter.get(name, []))))
status = False
# show missing matches
for name, tensor in self.state_dict().items():
if not match_counter.get(name, []):
logger.warning(f"Missing match for {name}")
status = False
# show multiple uses
for name, tensor in state_dict.items():
if len(used_counter.get(name, [])) > 1:
logger.warning("Multiple use of {} -> [{}]".format(name, ", ".join(used_counter.get(name, []))))
status = False
# load new state dict
self.load_state_dict(new_state_dict, strict=False)
self.eval()
return status
def freeze_parameters(self, freeze: bool = True) -> None:
"""Freeze or unfreeze internal parameters
- Freeze: disable gradient computation (``parameters.requires_grad = False``)
- Unfreeze: enable gradient computation (``parameters.requires_grad = True``)
:param freeze: Freeze the internal parameters if True, otherwise unfreeze them (default: ``True``)
:type freeze: bool, optional
Example::
# freeze model parameters
>>> model.freeze_parameters(True)
# unfreeze model parameters
>>> model.freeze_parameters(False)
"""
for parameters in self.parameters():
parameters.requires_grad = not freeze
def update_parameters(self, model: torch.nn.Module, polyak: float = 1) -> None:
"""Update internal parameters by hard or soft (polyak averaging) update
- Hard update: :math:`\\theta = \\theta_{net}`
- Soft (polyak averaging) update: :math:`\\theta = (1 - \\rho) \\theta + \\rho \\theta_{net}`
:param model: Model used to update the internal parameters
:type model: torch.nn.Module (skrl.models.torch.Model)
:param polyak: Polyak hyperparameter between 0 and 1 (default: ``1``).
A hard update is performed when its value is 1
:type polyak: float, optional
Example::
# hard update (from source model)
>>> model.update_parameters(source_model)
# soft update (from source model)
>>> model.update_parameters(source_model, polyak=0.005)
"""
with torch.no_grad():
# hard update
if polyak == 1:
for parameters, model_parameters in zip(self.parameters(), model.parameters()):
parameters.data.copy_(model_parameters.data)
# soft update (use in-place operations to avoid creating new parameters)
else:
for parameters, model_parameters in zip(self.parameters(), model.parameters()):
parameters.data.mul_(1 - polyak)
parameters.data.add_(polyak * model_parameters.data)
| 36,772 | Python | 48.293566 | 144 | 0.571794 |
Toni-SM/skrl/skrl/models/torch/tabular.py | from typing import Any, Mapping, Optional, Sequence, Tuple, Union
import torch
from skrl.models.torch import Model
class TabularMixin:
def __init__(self, num_envs: int = 1, role: str = "") -> None:
"""Tabular mixin model
        :param num_envs: Number of environments (default: ``1``)
:type num_envs: int, optional
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
Example::
# define the model
>>> import torch
>>> from skrl.models.torch import Model, TabularMixin
>>>
>>> class GreedyPolicy(TabularMixin, Model):
... def __init__(self, observation_space, action_space, device="cuda:0", num_envs=1):
... Model.__init__(self, observation_space, action_space, device)
... TabularMixin.__init__(self, num_envs)
...
... self.table = torch.ones((num_envs, self.num_observations, self.num_actions),
... dtype=torch.float32, device=self.device)
...
... def compute(self, inputs, role):
... actions = torch.argmax(self.table[torch.arange(self.num_envs).view(-1, 1), inputs["states"]],
... dim=-1, keepdim=True).view(-1,1)
... return actions, {}
...
>>> # given an observation_space: gym.spaces.Discrete with n=100
>>> # and an action_space: gym.spaces.Discrete with n=5
>>> model = GreedyPolicy(observation_space, action_space, num_envs=1)
>>>
>>> print(model)
GreedyPolicy(
(table): Tensor(shape=[1, 100, 5])
)
"""
self.num_envs = num_envs
def __repr__(self) -> str:
"""String representation of an object as torch.nn.Module
"""
lines = []
for name in self._get_tensor_names():
tensor = getattr(self, name)
lines.append(f"({name}): {tensor.__class__.__name__}(shape={list(tensor.shape)})")
main_str = self.__class__.__name__ + '('
if lines:
main_str += "\n {}\n".format("\n ".join(lines))
main_str += ')'
return main_str
def _get_tensor_names(self) -> Sequence[str]:
"""Get the names of the tensors that the model is using
:return: Tensor names
:rtype: sequence of str
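        Example (for the ``GreedyPolicy`` model sketched above, whose only tensor
        attribute is its Q-table)::

            >>> model._get_tensor_names()
            ['table']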
"""
tensors = []
for attr in dir(self):
if not attr.startswith("__") and issubclass(type(getattr(self, attr)), torch.Tensor):
tensors.append(attr)
return sorted(tensors)
def act(self,
inputs: Mapping[str, Union[torch.Tensor, Any]],
role: str = "") -> Tuple[torch.Tensor, Union[torch.Tensor, None], Mapping[str, Union[torch.Tensor, Any]]]:
"""Act in response to the state of the environment
:param inputs: Model inputs. The most common keys are:
- ``"states"``: state of the environment used to make the decision
- ``"taken_actions"``: actions taken by the policy for the given states
:type inputs: dict where the values are typically torch.Tensor
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
:return: Model output. The first component is the action to be taken by the agent.
The second component is ``None``. The third component is a dictionary containing extra output values
:rtype: tuple of torch.Tensor, torch.Tensor or None, and dict
Example::
>>> # given a batch of sample states with shape (1, 100)
>>> actions, _, outputs = model.act({"states": states})
>>> print(actions[0], outputs)
tensor([[3]], device='cuda:0') {}
"""
actions, outputs = self.compute(inputs, role)
return actions, None, outputs
def table(self) -> torch.Tensor:
"""Return the Q-table
:return: Q-table
:rtype: torch.Tensor
Example::
>>> output = model.table()
>>> print(output.shape)
torch.Size([1, 100, 5])
"""
return self.q_table
def to(self, *args, **kwargs) -> Model:
"""Move the model to a different device
:param args: Arguments to pass to the method
:type args: tuple
:param kwargs: Keyword arguments to pass to the method
:type kwargs: dict
:return: Model moved to the specified device
:rtype: Model
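        Example::

            >>> # move the model and its internal tensors to the CPU
            >>> model = model.to("cpu")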
"""
Model.to(self, *args, **kwargs)
for name in self._get_tensor_names():
setattr(self, name, getattr(self, name).to(*args, **kwargs))
return self
def state_dict(self, *args, **kwargs) -> Mapping:
"""Returns a dictionary containing a whole state of the module
:return: A dictionary containing a whole state of the module
:rtype: dict
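        Example (for the ``GreedyPolicy`` model sketched above)::

            >>> list(model.state_dict().keys())
            ['table']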
"""
_state_dict = {name: getattr(self, name) for name in self._get_tensor_names()}
Model.state_dict(self, destination=_state_dict)
return _state_dict
def load_state_dict(self, state_dict: Mapping, strict: bool = True) -> None:
"""Copies parameters and buffers from state_dict into this module and its descendants
:param state_dict: A dict containing parameters and persistent buffers
:type state_dict: dict
:param strict: Whether to strictly enforce that the keys in state_dict match the keys
returned by this module's state_dict() function (default: ``True``)
:type strict: bool, optional
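        Example (the checkpoint path is illustrative)::

            >>> model.load_state_dict(torch.load("/tmp/model.pt"))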
"""
Model.load_state_dict(self, state_dict, strict=False)
for name, tensor in state_dict.items():
if hasattr(self, name) and isinstance(getattr(self, name), torch.Tensor):
_tensor = getattr(self, name)
if isinstance(_tensor, torch.Tensor):
if _tensor.shape == tensor.shape and _tensor.dtype == tensor.dtype:
setattr(self, name, tensor)
else:
raise ValueError(f"Tensor shape ({_tensor.shape} vs {tensor.shape}) or dtype ({_tensor.dtype} vs {tensor.dtype}) mismatch")
else:
raise ValueError(f"{name} is not a tensor of {self.__class__.__name__}")
def save(self, path: str, state_dict: Optional[dict] = None) -> None:
"""Save the model to the specified path
:param path: Path to save the model to
:type path: str
:param state_dict: State dictionary to save (default: ``None``).
If None, the model's state_dict will be saved
:type state_dict: dict, optional
Example::
# save the current model to the specified path
>>> model.save("/tmp/model.pt")
"""
# TODO: save state_dict
torch.save({name: getattr(self, name) for name in self._get_tensor_names()}, path)
def load(self, path: str) -> None:
"""Load the model from the specified path
The final storage device is determined by the constructor of the model
:param path: Path to load the model from
:type path: str
:raises ValueError: If the models are not compatible
Example::
# load the model onto the CPU
>>> model = Model(observation_space, action_space, device="cpu")
>>> model.load("model.pt")
# load the model onto the GPU 1
>>> model = Model(observation_space, action_space, device="cuda:1")
>>> model.load("model.pt")
"""
tensors = torch.load(path)
for name, tensor in tensors.items():
if hasattr(self, name) and isinstance(getattr(self, name), torch.Tensor):
_tensor = getattr(self, name)
if isinstance(_tensor, torch.Tensor):
if _tensor.shape == tensor.shape and _tensor.dtype == tensor.dtype:
setattr(self, name, tensor)
else:
raise ValueError(f"Tensor shape ({_tensor.shape} vs {tensor.shape}) or dtype ({_tensor.dtype} vs {tensor.dtype}) mismatch")
else:
raise ValueError(f"{name} is not a tensor of {self.__class__.__name__}")
| 8,489 | Python | 39.428571 | 147 | 0.552715 |
Toni-SM/skrl/skrl/models/torch/__init__.py | from skrl.models.torch.base import Model # isort:skip
from skrl.models.torch.categorical import CategoricalMixin
from skrl.models.torch.deterministic import DeterministicMixin
from skrl.models.torch.gaussian import GaussianMixin
from skrl.models.torch.multicategorical import MultiCategoricalMixin
from skrl.models.torch.multivariate_gaussian import MultivariateGaussianMixin
from skrl.models.torch.tabular import TabularMixin
| 429 | Python | 46.777773 | 77 | 0.869464 |
Toni-SM/skrl/skrl/models/torch/deterministic.py | from typing import Any, Mapping, Tuple, Union
import gym
import gymnasium
import torch
class DeterministicMixin:
def __init__(self, clip_actions: bool = False, role: str = "") -> None:
"""Deterministic mixin model (deterministic model)
:param clip_actions: Flag to indicate whether the actions should be clipped to the action space (default: ``False``)
:type clip_actions: bool, optional
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
Example::
# define the model
>>> import torch
>>> import torch.nn as nn
>>> from skrl.models.torch import Model, DeterministicMixin
>>>
>>> class Value(DeterministicMixin, Model):
... def __init__(self, observation_space, action_space, device="cuda:0", clip_actions=False):
... Model.__init__(self, observation_space, action_space, device)
... DeterministicMixin.__init__(self, clip_actions)
...
... self.net = nn.Sequential(nn.Linear(self.num_observations, 32),
... nn.ELU(),
... nn.Linear(32, 32),
... nn.ELU(),
... nn.Linear(32, 1))
...
... def compute(self, inputs, role):
... return self.net(inputs["states"]), {}
...
>>> # given an observation_space: gym.spaces.Box with shape (60,)
>>> # and an action_space: gym.spaces.Box with shape (8,)
>>> model = Value(observation_space, action_space)
>>>
>>> print(model)
Value(
(net): Sequential(
(0): Linear(in_features=60, out_features=32, bias=True)
(1): ELU(alpha=1.0)
(2): Linear(in_features=32, out_features=32, bias=True)
(3): ELU(alpha=1.0)
(4): Linear(in_features=32, out_features=1, bias=True)
)
)
"""
self._clip_actions = clip_actions and (issubclass(type(self.action_space), gym.Space) or \
issubclass(type(self.action_space), gymnasium.Space))
if self._clip_actions:
self._clip_actions_min = torch.tensor(self.action_space.low, device=self.device, dtype=torch.float32)
self._clip_actions_max = torch.tensor(self.action_space.high, device=self.device, dtype=torch.float32)
def act(self,
inputs: Mapping[str, Union[torch.Tensor, Any]],
role: str = "") -> Tuple[torch.Tensor, Union[torch.Tensor, None], Mapping[str, Union[torch.Tensor, Any]]]:
"""Act deterministically in response to the state of the environment
:param inputs: Model inputs. The most common keys are:
- ``"states"``: state of the environment used to make the decision
- ``"taken_actions"``: actions taken by the policy for the given states
:type inputs: dict where the values are typically torch.Tensor
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
:return: Model output. The first component is the action to be taken by the agent.
The second component is ``None``. The third component is a dictionary containing extra output values
:rtype: tuple of torch.Tensor, torch.Tensor or None, and dict
Example::
>>> # given a batch of sample states with shape (4096, 60)
>>> actions, _, outputs = model.act({"states": states})
>>> print(actions.shape, outputs)
torch.Size([4096, 1]) {}
"""
# map from observations/states to actions
actions, outputs = self.compute(inputs, role)
# clip actions
if self._clip_actions:
actions = torch.clamp(actions, min=self._clip_actions_min, max=self._clip_actions_max)
return actions, None, outputs
| 4,136 | Python | 43.483871 | 124 | 0.544246 |
Toni-SM/skrl/skrl/models/torch/gaussian.py | from typing import Any, Mapping, Tuple, Union
import gym
import gymnasium
import torch
from torch.distributions import Normal
class GaussianMixin:
def __init__(self,
clip_actions: bool = False,
clip_log_std: bool = True,
min_log_std: float = -20,
max_log_std: float = 2,
reduction: str = "sum",
role: str = "") -> None:
"""Gaussian mixin model (stochastic model)
:param clip_actions: Flag to indicate whether the actions should be clipped to the action space (default: ``False``)
:type clip_actions: bool, optional
:param clip_log_std: Flag to indicate whether the log standard deviations should be clipped (default: ``True``)
:type clip_log_std: bool, optional
:param min_log_std: Minimum value of the log standard deviation if ``clip_log_std`` is True (default: ``-20``)
:type min_log_std: float, optional
:param max_log_std: Maximum value of the log standard deviation if ``clip_log_std`` is True (default: ``2``)
:type max_log_std: float, optional
        :param reduction: Reduction method for returning the log probability density function (default: ``"sum"``).
                          Supported values are ``"mean"``, ``"sum"``, ``"prod"`` and ``"none"``. If ``"none"``, the log probability density
function is returned as a tensor of shape ``(num_samples, num_actions)`` instead of ``(num_samples, 1)``
:type reduction: str, optional
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
:raises ValueError: If the reduction method is not valid
Example::
# define the model
>>> import torch
>>> import torch.nn as nn
>>> from skrl.models.torch import Model, GaussianMixin
>>>
>>> class Policy(GaussianMixin, Model):
... def __init__(self, observation_space, action_space, device="cuda:0",
... clip_actions=False, clip_log_std=True, min_log_std=-20, max_log_std=2, reduction="sum"):
... Model.__init__(self, observation_space, action_space, device)
... GaussianMixin.__init__(self, clip_actions, clip_log_std, min_log_std, max_log_std, reduction)
...
... self.net = nn.Sequential(nn.Linear(self.num_observations, 32),
... nn.ELU(),
... nn.Linear(32, 32),
... nn.ELU(),
... nn.Linear(32, self.num_actions))
... self.log_std_parameter = nn.Parameter(torch.zeros(self.num_actions))
...
... def compute(self, inputs, role):
... return self.net(inputs["states"]), self.log_std_parameter, {}
...
>>> # given an observation_space: gym.spaces.Box with shape (60,)
>>> # and an action_space: gym.spaces.Box with shape (8,)
>>> model = Policy(observation_space, action_space)
>>>
>>> print(model)
Policy(
(net): Sequential(
(0): Linear(in_features=60, out_features=32, bias=True)
(1): ELU(alpha=1.0)
(2): Linear(in_features=32, out_features=32, bias=True)
(3): ELU(alpha=1.0)
(4): Linear(in_features=32, out_features=8, bias=True)
)
)
"""
self._clip_actions = clip_actions and (issubclass(type(self.action_space), gym.Space) or \
issubclass(type(self.action_space), gymnasium.Space))
if self._clip_actions:
self._clip_actions_min = torch.tensor(self.action_space.low, device=self.device, dtype=torch.float32)
self._clip_actions_max = torch.tensor(self.action_space.high, device=self.device, dtype=torch.float32)
self._clip_log_std = clip_log_std
self._log_std_min = min_log_std
self._log_std_max = max_log_std
self._log_std = None
self._num_samples = None
self._distribution = None
if reduction not in ["mean", "sum", "prod", "none"]:
raise ValueError("reduction must be one of 'mean', 'sum', 'prod' or 'none'")
self._reduction = torch.mean if reduction == "mean" else torch.sum if reduction == "sum" \
else torch.prod if reduction == "prod" else None
def act(self,
inputs: Mapping[str, Union[torch.Tensor, Any]],
role: str = "") -> Tuple[torch.Tensor, Union[torch.Tensor, None], Mapping[str, Union[torch.Tensor, Any]]]:
"""Act stochastically in response to the state of the environment
:param inputs: Model inputs. The most common keys are:
- ``"states"``: state of the environment used to make the decision
- ``"taken_actions"``: actions taken by the policy for the given states
:type inputs: dict where the values are typically torch.Tensor
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
:return: Model output. The first component is the action to be taken by the agent.
The second component is the log of the probability density function.
The third component is a dictionary containing the mean actions ``"mean_actions"``
and extra output values
:rtype: tuple of torch.Tensor, torch.Tensor or None, and dict
Example::
>>> # given a batch of sample states with shape (4096, 60)
>>> actions, log_prob, outputs = model.act({"states": states})
>>> print(actions.shape, log_prob.shape, outputs["mean_actions"].shape)
torch.Size([4096, 8]) torch.Size([4096, 1]) torch.Size([4096, 8])
"""
# map from states/observations to mean actions and log standard deviations
mean_actions, log_std, outputs = self.compute(inputs, role)
# clamp log standard deviations
if self._clip_log_std:
log_std = torch.clamp(log_std, self._log_std_min, self._log_std_max)
self._log_std = log_std
self._num_samples = mean_actions.shape[0]
# distribution
self._distribution = Normal(mean_actions, log_std.exp())
# sample using the reparameterization trick
actions = self._distribution.rsample()
# clip actions
if self._clip_actions:
actions = torch.clamp(actions, min=self._clip_actions_min, max=self._clip_actions_max)
# log of the probability density function
log_prob = self._distribution.log_prob(inputs.get("taken_actions", actions))
if self._reduction is not None:
log_prob = self._reduction(log_prob, dim=-1)
if log_prob.dim() != actions.dim():
log_prob = log_prob.unsqueeze(-1)
outputs["mean_actions"] = mean_actions
return actions, log_prob, outputs
def get_entropy(self, role: str = "") -> torch.Tensor:
"""Compute and return the entropy of the model
:return: Entropy of the model
:rtype: torch.Tensor
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
Example::
>>> entropy = model.get_entropy()
>>> print(entropy.shape)
torch.Size([4096, 8])
"""
if self._distribution is None:
return torch.tensor(0.0, device=self.device)
return self._distribution.entropy().to(self.device)
def get_log_std(self, role: str = "") -> torch.Tensor:
"""Return the log standard deviation of the model
:return: Log standard deviation of the model
:rtype: torch.Tensor
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
Example::
>>> log_std = model.get_log_std()
>>> print(log_std.shape)
torch.Size([4096, 8])
"""
return self._log_std.repeat(self._num_samples, 1)
def distribution(self, role: str = "") -> torch.distributions.Normal:
"""Get the current distribution of the model
:return: Distribution of the model
:rtype: torch.distributions.Normal
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
Example::
>>> distribution = model.distribution()
>>> print(distribution)
Normal(loc: torch.Size([4096, 8]), scale: torch.Size([4096, 8]))
"""
return self._distribution
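# ---------------------------------------------------------------------------
# Illustrative usage sketch added for this document (not part of the original
# skrl source). It mirrors the GaussianMixin docstring example: the `Policy`
# class, the hidden layer size (32) and the Box spaces below are assumptions
# chosen only for demonstration, and running it requires `torch` and `gym`.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    import gym
    import torch.nn as nn

    from skrl.models.torch import Model

    class Policy(GaussianMixin, Model):
        def __init__(self, observation_space, action_space, device="cpu"):
            Model.__init__(self, observation_space, action_space, device)
            GaussianMixin.__init__(self)  # default log-std clipping and "sum" reduction

            # small MLP producing the mean actions; the log standard deviation
            # is a state-independent learnable parameter
            self.net = nn.Sequential(nn.Linear(self.num_observations, 32),
                                     nn.ELU(),
                                     nn.Linear(32, self.num_actions))
            self.log_std_parameter = nn.Parameter(torch.zeros(self.num_actions))

        def compute(self, inputs, role):
            # returns (mean_actions, log_std, extra outputs)
            return self.net(inputs["states"]), self.log_std_parameter, {}

    observation_space = gym.spaces.Box(low=-1, high=1, shape=(60,))
    action_space = gym.spaces.Box(low=-1, high=1, shape=(8,))
    policy = Policy(observation_space, action_space)

    states = torch.rand(4, policy.num_observations)
    actions, log_prob, outputs = policy.act({"states": states})
    print(actions.shape, log_prob.shape, outputs["mean_actions"].shape)
    # torch.Size([4, 8]) torch.Size([4, 1]) torch.Size([4, 8])

    # entropy and per-sample log standard deviation of the current distribution
    print(policy.get_entropy().shape, policy.get_log_std().shape)
    # torch.Size([4, 8]) torch.Size([4, 8])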
| 8,814 | Python | 43.075 | 139 | 0.562401 |
Toni-SM/skrl/skrl/models/torch/multicategorical.py | from typing import Any, Mapping, Sequence, Tuple, Union
import torch
from torch.distributions import Categorical
class MultiCategoricalMixin:
def __init__(self, unnormalized_log_prob: bool = True, reduction: str = "sum", role: str = "") -> None:
"""MultiCategorical mixin model (stochastic model)
        :param unnormalized_log_prob: Flag to indicate how the model's output will be interpreted (default: ``True``).
If True, the model's output is interpreted as unnormalized log probabilities
(it can be any real number), otherwise as normalized probabilities
(the output must be non-negative, finite and have a non-zero sum)
:type unnormalized_log_prob: bool, optional
        :param reduction: Reduction method for returning the log probability density function (default: ``"sum"``).
Supported values are ``"mean"``, ``"sum"``, ``"prod"`` and ``"none"``. If "``none"``, the log probability density
function is returned as a tensor of shape ``(num_samples, num_actions)`` instead of ``(num_samples, 1)``
:type reduction: str, optional
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
:raises ValueError: If the reduction method is not valid
Example::
# define the model
>>> import torch
>>> import torch.nn as nn
>>> from skrl.models.torch import Model, MultiCategoricalMixin
>>>
>>> class Policy(MultiCategoricalMixin, Model):
... def __init__(self, observation_space, action_space, device="cuda:0", unnormalized_log_prob=True, reduction="sum"):
... Model.__init__(self, observation_space, action_space, device)
... MultiCategoricalMixin.__init__(self, unnormalized_log_prob, reduction)
...
... self.net = nn.Sequential(nn.Linear(self.num_observations, 32),
... nn.ELU(),
... nn.Linear(32, 32),
... nn.ELU(),
... nn.Linear(32, self.num_actions))
...
... def compute(self, inputs, role):
... return self.net(inputs["states"]), {}
...
>>> # given an observation_space: gym.spaces.Box with shape (4,)
>>> # and an action_space: gym.spaces.MultiDiscrete with nvec = [3, 2]
>>> model = Policy(observation_space, action_space)
>>>
>>> print(model)
Policy(
(net): Sequential(
(0): Linear(in_features=4, out_features=32, bias=True)
(1): ELU(alpha=1.0)
(2): Linear(in_features=32, out_features=32, bias=True)
(3): ELU(alpha=1.0)
(4): Linear(in_features=32, out_features=5, bias=True)
)
)
"""
self._unnormalized_log_prob = unnormalized_log_prob
self._distributions = []
if reduction not in ["mean", "sum", "prod", "none"]:
raise ValueError("reduction must be one of 'mean', 'sum', 'prod' or 'none'")
self._reduction = torch.mean if reduction == "mean" else torch.sum if reduction == "sum" \
else torch.prod if reduction == "prod" else None
def act(self,
inputs: Mapping[str, Union[torch.Tensor, Any]],
role: str = "") -> Tuple[torch.Tensor, Union[torch.Tensor, None], Mapping[str, Union[torch.Tensor, Any]]]:
"""Act stochastically in response to the state of the environment
:param inputs: Model inputs. The most common keys are:
- ``"states"``: state of the environment used to make the decision
- ``"taken_actions"``: actions taken by the policy for the given states
:type inputs: dict where the values are typically torch.Tensor
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
:return: Model output. The first component is the action to be taken by the agent.
The second component is the log of the probability density function.
The third component is a dictionary containing the network output ``"net_output"``
and extra output values
:rtype: tuple of torch.Tensor, torch.Tensor or None, and dict
Example::
>>> # given a batch of sample states with shape (4096, 4)
>>> actions, log_prob, outputs = model.act({"states": states})
>>> print(actions.shape, log_prob.shape, outputs["net_output"].shape)
torch.Size([4096, 2]) torch.Size([4096, 1]) torch.Size([4096, 5])
"""
# map from states/observations to normalized probabilities or unnormalized log probabilities
net_output, outputs = self.compute(inputs, role)
# unnormalized log probabilities
if self._unnormalized_log_prob:
self._distributions = [Categorical(logits=logits) for logits in torch.split(net_output, self.action_space.nvec.tolist(), dim=-1)]
# normalized probabilities
else:
self._distributions = [Categorical(probs=probs) for probs in torch.split(net_output, self.action_space.nvec.tolist(), dim=-1)]
# actions
actions = torch.stack([distribution.sample() for distribution in self._distributions], dim=-1)
# log of the probability density function
log_prob = torch.stack([distribution.log_prob(_actions.view(-1)) for _actions, distribution \
in zip(torch.unbind(inputs.get("taken_actions", actions), dim=-1), self._distributions)], dim=-1)
if self._reduction is not None:
log_prob = self._reduction(log_prob, dim=-1)
if log_prob.dim() != actions.dim():
log_prob = log_prob.unsqueeze(-1)
outputs["net_output"] = net_output
return actions, log_prob, outputs
def get_entropy(self, role: str = "") -> torch.Tensor:
"""Compute and return the entropy of the model
:return: Entropy of the model
:rtype: torch.Tensor
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
Example::
>>> entropy = model.get_entropy()
>>> print(entropy.shape)
torch.Size([4096, 1])
"""
if self._distributions:
entropy = torch.stack([distribution.entropy().to(self.device) for distribution in self._distributions], dim=-1)
if self._reduction is not None:
return self._reduction(entropy, dim=-1).unsqueeze(-1)
return entropy
return torch.tensor(0.0, device=self.device)
def distribution(self, role: str = "") -> torch.distributions.Categorical:
"""Get the current distribution of the model
        :return: First distribution of the model
:rtype: torch.distributions.Categorical
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
Example::
>>> distribution = model.distribution()
>>> print(distribution)
Categorical(probs: torch.Size([10, 3]), logits: torch.Size([10, 3]))
"""
# TODO: find a way to integrate in the class the distribution functions (e.g.: stddev)
return self._distributions[0]
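# ---------------------------------------------------------------------------
# Illustrative usage sketch added for this document (not part of the original
# skrl source). It follows the MultiCategoricalMixin docstring example: the
# `Policy` class, the hidden layer size (32) and the MultiDiscrete([3, 2])
# action space are assumptions for demonstration; `torch` and `gym` are needed.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    import gym
    import torch.nn as nn

    from skrl.models.torch import Model

    class Policy(MultiCategoricalMixin, Model):
        def __init__(self, observation_space, action_space, device="cpu"):
            Model.__init__(self, observation_space, action_space, device)
            MultiCategoricalMixin.__init__(self, unnormalized_log_prob=True, reduction="sum")

            # one head producing the concatenated logits of all sub-action spaces:
            # num_actions = sum(nvec) = 3 + 2 = 5
            self.net = nn.Sequential(nn.Linear(self.num_observations, 32),
                                     nn.ELU(),
                                     nn.Linear(32, self.num_actions))

        def compute(self, inputs, role):
            return self.net(inputs["states"]), {}

    observation_space = gym.spaces.Box(low=-1, high=1, shape=(4,))
    action_space = gym.spaces.MultiDiscrete([3, 2])
    policy = Policy(observation_space, action_space)

    states = torch.rand(16, policy.num_observations)
    actions, log_prob, outputs = policy.act({"states": states})
    print(actions.shape, log_prob.shape, outputs["net_output"].shape)
    # torch.Size([16, 2]) torch.Size([16, 1]) torch.Size([16, 5])

    # the joint entropy is reduced ("sum") across the per-dimension distributions
    print(policy.get_entropy().shape)  # torch.Size([16, 1])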
| 7,655 | Python | 48.076923 | 141 | 0.568125 |
Toni-SM/skrl/skrl/models/torch/categorical.py | from typing import Any, Mapping, Tuple, Union
import torch
from torch.distributions import Categorical
class CategoricalMixin:
def __init__(self, unnormalized_log_prob: bool = True, role: str = "") -> None:
"""Categorical mixin model (stochastic model)
        :param unnormalized_log_prob: Flag to indicate how the model's output will be interpreted (default: ``True``).
If True, the model's output is interpreted as unnormalized log probabilities
(it can be any real number), otherwise as normalized probabilities
(the output must be non-negative, finite and have a non-zero sum)
:type unnormalized_log_prob: bool, optional
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
Example::
# define the model
>>> import torch
>>> import torch.nn as nn
>>> from skrl.models.torch import Model, CategoricalMixin
>>>
>>> class Policy(CategoricalMixin, Model):
... def __init__(self, observation_space, action_space, device="cuda:0", unnormalized_log_prob=True):
... Model.__init__(self, observation_space, action_space, device)
... CategoricalMixin.__init__(self, unnormalized_log_prob)
...
... self.net = nn.Sequential(nn.Linear(self.num_observations, 32),
... nn.ELU(),
... nn.Linear(32, 32),
... nn.ELU(),
... nn.Linear(32, self.num_actions))
...
... def compute(self, inputs, role):
... return self.net(inputs["states"]), {}
...
>>> # given an observation_space: gym.spaces.Box with shape (4,)
>>> # and an action_space: gym.spaces.Discrete with n = 2
>>> model = Policy(observation_space, action_space)
>>>
>>> print(model)
Policy(
(net): Sequential(
(0): Linear(in_features=4, out_features=32, bias=True)
(1): ELU(alpha=1.0)
(2): Linear(in_features=32, out_features=32, bias=True)
(3): ELU(alpha=1.0)
(4): Linear(in_features=32, out_features=2, bias=True)
)
)
"""
self._unnormalized_log_prob = unnormalized_log_prob
self._distribution = None
def act(self,
inputs: Mapping[str, Union[torch.Tensor, Any]],
role: str = "") -> Tuple[torch.Tensor, Union[torch.Tensor, None], Mapping[str, Union[torch.Tensor, Any]]]:
"""Act stochastically in response to the state of the environment
:param inputs: Model inputs. The most common keys are:
- ``"states"``: state of the environment used to make the decision
- ``"taken_actions"``: actions taken by the policy for the given states
:type inputs: dict where the values are typically torch.Tensor
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
:return: Model output. The first component is the action to be taken by the agent.
The second component is the log of the probability density function.
The third component is a dictionary containing the network output ``"net_output"``
and extra output values
:rtype: tuple of torch.Tensor, torch.Tensor or None, and dict
Example::
>>> # given a batch of sample states with shape (4096, 4)
>>> actions, log_prob, outputs = model.act({"states": states})
>>> print(actions.shape, log_prob.shape, outputs["net_output"].shape)
torch.Size([4096, 1]) torch.Size([4096, 1]) torch.Size([4096, 2])
"""
# map from states/observations to normalized probabilities or unnormalized log probabilities
net_output, outputs = self.compute(inputs, role)
# unnormalized log probabilities
if self._unnormalized_log_prob:
self._distribution = Categorical(logits=net_output)
# normalized probabilities
else:
self._distribution = Categorical(probs=net_output)
# actions and log of the probability density function
actions = self._distribution.sample()
log_prob = self._distribution.log_prob(inputs.get("taken_actions", actions).view(-1))
outputs["net_output"] = net_output
return actions.unsqueeze(-1), log_prob.unsqueeze(-1), outputs
def get_entropy(self, role: str = "") -> torch.Tensor:
"""Compute and return the entropy of the model
:return: Entropy of the model
:rtype: torch.Tensor
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
Example::
>>> entropy = model.get_entropy()
>>> print(entropy.shape)
torch.Size([4096, 1])
"""
if self._distribution is None:
return torch.tensor(0.0, device=self.device)
return self._distribution.entropy().to(self.device)
def distribution(self, role: str = "") -> torch.distributions.Categorical:
"""Get the current distribution of the model
:return: Distribution of the model
:rtype: torch.distributions.Categorical
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
Example::
>>> distribution = model.distribution()
>>> print(distribution)
Categorical(probs: torch.Size([4096, 2]), logits: torch.Size([4096, 2]))
"""
return self._distribution
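# ---------------------------------------------------------------------------
# Illustrative usage sketch added for this document (not part of the original
# skrl source). It follows the CategoricalMixin docstring example: the
# `Policy` class, the hidden layer size (32) and the Discrete(2) action space
# are assumptions for demonstration; `torch` and `gym` are needed to run it.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    import gym
    import torch.nn as nn

    from skrl.models.torch import Model

    class Policy(CategoricalMixin, Model):
        def __init__(self, observation_space, action_space, device="cpu"):
            Model.__init__(self, observation_space, action_space, device)
            CategoricalMixin.__init__(self, unnormalized_log_prob=True)

            self.net = nn.Sequential(nn.Linear(self.num_observations, 32),
                                     nn.ELU(),
                                     nn.Linear(32, self.num_actions))  # num_actions = n = 2

        def compute(self, inputs, role):
            return self.net(inputs["states"]), {}

    observation_space = gym.spaces.Box(low=-1, high=1, shape=(4,))
    action_space = gym.spaces.Discrete(2)
    policy = Policy(observation_space, action_space)

    states = torch.rand(8, policy.num_observations)
    actions, log_prob, outputs = policy.act({"states": states})
    print(actions.shape, log_prob.shape, outputs["net_output"].shape)
    # torch.Size([8, 1]) torch.Size([8, 1]) torch.Size([8, 2])

    # re-evaluate the log-probability of previously taken actions (e.g. for PPO-style updates)
    _, new_log_prob, _ = policy.act({"states": states, "taken_actions": actions})
    print(new_log_prob.shape)  # torch.Size([8, 1])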
| 5,941 | Python | 43.343283 | 118 | 0.555967 |
Toni-SM/skrl/skrl/models/jax/base.py | from typing import Any, Callable, Mapping, Optional, Sequence, Tuple, Union
import gym
import gymnasium
import flax
import jax
import numpy as np
from skrl import config
class StateDict(flax.struct.PyTreeNode):
apply_fn: Callable = flax.struct.field(pytree_node=False)
params: flax.core.FrozenDict[str, Any] = flax.struct.field(pytree_node=True)
@classmethod
def create(cls, *, apply_fn, params, **kwargs):
return cls(apply_fn=apply_fn, params=params, **kwargs)
class Model(flax.linen.Module):
observation_space: Union[int, Sequence[int], gym.Space, gymnasium.Space]
action_space: Union[int, Sequence[int], gym.Space, gymnasium.Space]
device: Optional[Union[str, jax.Device]] = None
def __init__(self,
observation_space: Union[int, Sequence[int], gym.Space, gymnasium.Space],
action_space: Union[int, Sequence[int], gym.Space, gymnasium.Space],
device: Optional[Union[str, jax.Device]] = None,
parent: Optional[Any] = None,
name: Optional[str] = None) -> None:
"""Base class representing a function approximator
The following properties are defined:
- ``device`` (jax.Device): Device to be used for the computations
- ``observation_space`` (int, sequence of int, gym.Space, gymnasium.Space): Observation/state space
- ``action_space`` (int, sequence of int, gym.Space, gymnasium.Space): Action space
- ``num_observations`` (int): Number of elements in the observation/state space
- ``num_actions`` (int): Number of elements in the action space
:param observation_space: Observation/state space or shape.
The ``num_observations`` property will contain the size of that space
:type observation_space: int, sequence of int, gym.Space, gymnasium.Space
:param action_space: Action space or shape.
The ``num_actions`` property will contain the size of that space
:type action_space: int, sequence of int, gym.Space, gymnasium.Space
:param device: Device on which a tensor/array is or will be allocated (default: ``None``).
If None, the device will be either ``"cuda"`` if available or ``"cpu"``
:type device: str or jax.Device, optional
:param parent: The parent Module of this Module (default: ``None``).
It is a Flax reserved attribute
:type parent: str, optional
:param name: The name of this Module (default: ``None``).
It is a Flax reserved attribute
:type name: str, optional
Custom models should override the ``act`` method::
import flax.linen as nn
from skrl.models.jax import Model
class CustomModel(Model):
def __init__(self, observation_space, action_space, device=None, **kwargs):
Model.__init__(self, observation_space, action_space, device, **kwargs)
# https://flax.readthedocs.io/en/latest/api_reference/flax.errors.html#flax.errors.IncorrectPostInitOverrideError
flax.linen.Module.__post_init__(self)
@nn.compact
def __call__(self, inputs, role):
x = nn.relu(nn.Dense(64)(inputs["states"]))
x = nn.relu(nn.Dense(self.num_actions)(x))
return x, None, {}
"""
self._jax = config.jax.backend == "jax"
if device is None:
self.device = jax.devices()[0]
else:
self.device = device if isinstance(device, jax.Device) else jax.devices(device)[0]
self.observation_space = observation_space
self.action_space = action_space
self.num_observations = None if observation_space is None else self._get_space_size(observation_space)
self.num_actions = None if action_space is None else self._get_space_size(action_space)
self.state_dict: StateDict
self.training = False
# https://flax.readthedocs.io/en/latest/api_reference/flax.errors.html#flax.errors.ReservedModuleAttributeError
self.parent = parent
self.name = name
def init_state_dict(self,
role: str,
inputs: Mapping[str, Union[np.ndarray, jax.Array]] = {},
key: Optional[jax.Array] = None) -> None:
"""Initialize state dictionary
        :param role: Role played by the model
:type role: str
:param inputs: Model inputs. The most common keys are:
- ``"states"``: state of the environment used to make the decision
- ``"taken_actions"``: actions taken by the policy for the given states
If not specified, the keys will be populated with observation and action space samples
:type inputs: dict of np.ndarray or jax.Array, optional
:param key: Pseudo-random number generator (PRNG) key (default: ``None``).
If not provided, the skrl's PRNG key (``config.jax.key``) will be used
:type key: jax.Array, optional
"""
if not inputs:
inputs = {"states": self.observation_space.sample(), "taken_actions": self.action_space.sample()}
if key is None:
key = config.jax.key
if isinstance(inputs["states"], (int, np.int32, np.int64)):
inputs["states"] = np.array(inputs["states"]).reshape(-1,1)
# init internal state dict
self.state_dict = StateDict.create(apply_fn=self.apply, params=self.init(key, inputs, role))
def _get_space_size(self,
space: Union[int, Sequence[int], gym.Space, gymnasium.Space],
number_of_elements: bool = True) -> int:
"""Get the size (number of elements) of a space
:param space: Space or shape from which to obtain the number of elements
:type space: int, sequence of int, gym.Space, or gymnasium.Space
:param number_of_elements: Whether the number of elements occupied by the space is returned (default: ``True``).
If ``False``, the shape of the space is returned.
It only affects Discrete and MultiDiscrete spaces
:type number_of_elements: bool, optional
:raises ValueError: If the space is not supported
:return: Size of the space (number of elements)
:rtype: int
Example::
# from int
>>> model._get_space_size(2)
2
# from sequence of int
>>> model._get_space_size([2, 3])
6
# Box space
>>> space = gym.spaces.Box(low=-1, high=1, shape=(2, 3))
>>> model._get_space_size(space)
6
# Discrete space
>>> space = gym.spaces.Discrete(4)
>>> model._get_space_size(space)
4
>>> model._get_space_size(space, number_of_elements=False)
1
# MultiDiscrete space
>>> space = gym.spaces.MultiDiscrete([5, 3, 2])
>>> model._get_space_size(space)
10
>>> model._get_space_size(space, number_of_elements=False)
3
# Dict space
>>> space = gym.spaces.Dict({'a': gym.spaces.Box(low=-1, high=1, shape=(2, 3)),
... 'b': gym.spaces.Discrete(4)})
>>> model._get_space_size(space)
10
>>> model._get_space_size(space, number_of_elements=False)
7
"""
size = None
if type(space) in [int, float]:
size = space
elif type(space) in [tuple, list]:
size = np.prod(space)
elif issubclass(type(space), gym.Space):
if issubclass(type(space), gym.spaces.Discrete):
if number_of_elements:
size = space.n
else:
size = 1
elif issubclass(type(space), gym.spaces.MultiDiscrete):
if number_of_elements:
size = np.sum(space.nvec)
else:
size = space.nvec.shape[0]
elif issubclass(type(space), gym.spaces.Box):
size = np.prod(space.shape)
elif issubclass(type(space), gym.spaces.Dict):
size = sum([self._get_space_size(space.spaces[key], number_of_elements) for key in space.spaces])
elif issubclass(type(space), gymnasium.Space):
if issubclass(type(space), gymnasium.spaces.Discrete):
if number_of_elements:
size = space.n
else:
size = 1
elif issubclass(type(space), gymnasium.spaces.MultiDiscrete):
if number_of_elements:
size = np.sum(space.nvec)
else:
size = space.nvec.shape[0]
elif issubclass(type(space), gymnasium.spaces.Box):
size = np.prod(space.shape)
elif issubclass(type(space), gymnasium.spaces.Dict):
size = sum([self._get_space_size(space.spaces[key], number_of_elements) for key in space.spaces])
if size is None:
raise ValueError(f"Space type {type(space)} not supported")
return int(size)
def tensor_to_space(self,
tensor: Union[np.ndarray, jax.Array],
space: Union[gym.Space, gymnasium.Space],
start: int = 0) -> Union[Union[np.ndarray, jax.Array], dict]:
"""Map a flat tensor to a Gym/Gymnasium space
The mapping is done in the following way:
- Tensors belonging to Discrete spaces are returned without modification
- Tensors belonging to Box spaces are reshaped to the corresponding space shape
keeping the first dimension (number of samples) as they are
- Tensors belonging to Dict spaces are mapped into a dictionary with the same keys as the original space
:param tensor: Tensor to map from
:type tensor: np.ndarray or jax.Array
:param space: Space to map the tensor to
:type space: gym.Space or gymnasium.Space
:param start: Index of the first element of the tensor to map (default: ``0``)
:type start: int, optional
:raises ValueError: If the space is not supported
:return: Mapped tensor or dictionary
:rtype: np.ndarray or jax.Array, or dict
Example::
>>> space = gym.spaces.Dict({'a': gym.spaces.Box(low=-1, high=1, shape=(2, 3)),
... 'b': gym.spaces.Discrete(4)})
>>> tensor = jnp.array([[-0.3, -0.2, -0.1, 0.1, 0.2, 0.3, 2]])
>>>
>>> model.tensor_to_space(tensor, space)
{'a': Array([[[-0.3, -0.2, -0.1],
[ 0.1, 0.2, 0.3]]], dtype=float32),
'b': Array([[2.]], dtype=float32)}
"""
if issubclass(type(space), gym.Space):
if issubclass(type(space), gym.spaces.Discrete):
return tensor
elif issubclass(type(space), gym.spaces.Box):
return tensor.reshape(tensor.shape[0], *space.shape)
elif issubclass(type(space), gym.spaces.Dict):
output = {}
for k in sorted(space.keys()):
end = start + self._get_space_size(space[k], number_of_elements=False)
output[k] = self.tensor_to_space(tensor[:, start:end], space[k], end)
start = end
return output
else:
if issubclass(type(space), gymnasium.spaces.Discrete):
return tensor
elif issubclass(type(space), gymnasium.spaces.Box):
return tensor.reshape(tensor.shape[0], *space.shape)
elif issubclass(type(space), gymnasium.spaces.Dict):
output = {}
for k in sorted(space.keys()):
end = start + self._get_space_size(space[k], number_of_elements=False)
output[k] = self.tensor_to_space(tensor[:, start:end], space[k], end)
start = end
return output
raise ValueError(f"Space type {type(space)} not supported")
def random_act(self,
inputs: Mapping[str, Union[Union[np.ndarray, jax.Array], Any]],
role: str = "",
params: Optional[jax.Array] = None) -> Tuple[Union[np.ndarray, jax.Array], Union[Union[np.ndarray, jax.Array], None], Mapping[str, Union[Union[np.ndarray, jax.Array], Any]]]:
"""Act randomly according to the action space
:param inputs: Model inputs. The most common keys are:
- ``"states"``: state of the environment used to make the decision
- ``"taken_actions"``: actions taken by the policy for the given states
:type inputs: dict where the values are typically np.ndarray or jax.Array
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
:param params: Parameters used to compute the output (default: ``None``).
If ``None``, internal parameters will be used
:type params: jnp.array
:raises NotImplementedError: Unsupported action space
:return: Model output. The first component is the action to be taken by the agent
:rtype: tuple of np.ndarray or jax.Array, None, and dict
"""
# discrete action space (Discrete)
if issubclass(type(self.action_space), gym.spaces.Discrete) or issubclass(type(self.action_space), gymnasium.spaces.Discrete):
actions = np.random.randint(self.action_space.n, size=(inputs["states"].shape[0], 1))
# continuous action space (Box)
elif issubclass(type(self.action_space), gym.spaces.Box) or issubclass(type(self.action_space), gymnasium.spaces.Box):
actions = np.random.uniform(low=self.action_space.low[0], high=self.action_space.high[0], size=(inputs["states"].shape[0], self.num_actions))
else:
raise NotImplementedError(f"Action space type ({type(self.action_space)}) not supported")
if self._jax:
return jax.device_put(actions), None, {}
return actions, None, {}
def init_parameters(self, method_name: str = "normal", *args, **kwargs) -> None:
"""Initialize the model parameters according to the specified method name
Method names are from the `flax.linen.initializers <https://flax.readthedocs.io/en/latest/api_reference/flax.linen/initializers.html>`_ module.
Allowed method names are *uniform*, *normal*, *constant*, etc.
:param method_name: `flax.linen.initializers <https://flax.readthedocs.io/en/latest/api_reference/flax.linen/initializers.html>`_ method name (default: ``"normal"``)
:type method_name: str, optional
:param args: Positional arguments of the method to be called
:type args: tuple, optional
:param kwargs: Key-value arguments of the method to be called
:type kwargs: dict, optional
Example::
# initialize all parameters with an orthogonal distribution with a scale of 0.5
>>> model.init_parameters("orthogonal", scale=0.5)
# initialize all parameters as a normal distribution with a standard deviation of 0.1
>>> model.init_parameters("normal", stddev=0.1)
"""
if method_name in ["ones", "zeros"]:
method = eval(f"flax.linen.initializers.{method_name}")
else:
method = eval(f"flax.linen.initializers.{method_name}(*args, **kwargs)")
params = jax.tree_util.tree_map(lambda param: method(config.jax.key, param.shape), self.state_dict.params)
self.state_dict = self.state_dict.replace(params=params)
def init_weights(self, method_name: str = "normal", *args, **kwargs) -> None:
"""Initialize the model weights according to the specified method name
Method names are from the `flax.linen.initializers <https://flax.readthedocs.io/en/latest/api_reference/flax.linen/initializers.html>`_ module.
Allowed method names are *uniform*, *normal*, *constant*, etc.
:param method_name: `flax.linen.initializers <https://flax.readthedocs.io/en/latest/api_reference/flax.linen/initializers.html>`_ method name (default: ``"normal"``)
:type method_name: str, optional
:param args: Positional arguments of the method to be called
:type args: tuple, optional
:param kwargs: Key-value arguments of the method to be called
:type kwargs: dict, optional
Example::
            # initialize all weights with a uniform distribution in the range [0, 0.1)
            >>> model.init_weights(method_name="uniform", scale=0.1)
            # initialize all weights with a normal distribution with a standard deviation of 0.25
            >>> model.init_weights(method_name="normal", stddev=0.25)
"""
if method_name in ["ones", "zeros"]:
method = eval(f"flax.linen.initializers.{method_name}")
else:
method = eval(f"flax.linen.initializers.{method_name}(*args, **kwargs)")
params = jax.tree_util.tree_map_with_path(lambda path, param: method(config.jax.key, param.shape) if path[-1].key == "kernel" else param,
self.state_dict.params)
self.state_dict = self.state_dict.replace(params=params)
    def init_biases(self, method_name: str = "constant", *args, **kwargs) -> None:
"""Initialize the model biases according to the specified method name
Method names are from the `flax.linen.initializers <https://flax.readthedocs.io/en/latest/api_reference/flax.linen/initializers.html>`_ module.
Allowed method names are *uniform*, *normal*, *constant*, etc.
        :param method_name: `flax.linen.initializers <https://flax.readthedocs.io/en/latest/api_reference/flax.linen/initializers.html>`_ method name (default: ``"constant"``)
:type method_name: str, optional
:param args: Positional arguments of the method to be called
:type args: tuple, optional
:param kwargs: Key-value arguments of the method to be called
:type kwargs: dict, optional
Example::
            # initialize all biases with a constant value (0)
            >>> model.init_biases(method_name="constant", value=0.0)
            # initialize all biases with a normal distribution with a standard deviation of 0.25
            >>> model.init_biases(method_name="normal", stddev=0.25)
"""
if method_name in ["ones", "zeros"]:
method = eval(f"flax.linen.initializers.{method_name}")
else:
method = eval(f"flax.linen.initializers.{method_name}(*args, **kwargs)")
params = jax.tree_util.tree_map_with_path(lambda path, param: method(config.jax.key, param.shape) if path[-1].key == "bias" else param,
self.state_dict.params)
self.state_dict = self.state_dict.replace(params=params)
def get_specification(self) -> Mapping[str, Any]:
"""Returns the specification of the model
The following keys are used by the agents for initialization:
- ``"rnn"``: Recurrent Neural Network (RNN) specification for RNN, LSTM and GRU layers/cells
- ``"sizes"``: List of RNN shapes (number of layers, number of environments, number of features in the RNN state).
There must be as many tuples as there are states in the recurrent layer/cell. E.g., LSTM has 2 states (hidden and cell).
:return: Dictionary containing advanced specification of the model
:rtype: dict
Example::
# model with a LSTM layer.
# - number of layers: 1
# - number of environments: 4
# - number of features in the RNN state: 64
>>> model.get_specification()
{'rnn': {'sizes': [(1, 4, 64), (1, 4, 64)]}}
"""
return {}
def act(self,
inputs: Mapping[str, Union[Union[np.ndarray, jax.Array], Any]],
role: str = "",
params: Optional[jax.Array] = None) -> Tuple[jax.Array, Union[jax.Array, None], Mapping[str, Union[jax.Array, Any]]]:
"""Act according to the specified behavior (to be implemented by the inheriting classes)
Agents will call this method to obtain the decision to be taken given the state of the environment.
        Classes that inherit from this base must only implement the ``.__call__()`` method
:param inputs: Model inputs. The most common keys are:
- ``"states"``: state of the environment used to make the decision
- ``"taken_actions"``: actions taken by the policy for the given states
:type inputs: dict where the values are typically np.ndarray or jax.Array
        :param role: Role played by the model (default: ``""``)
:type role: str, optional
:param params: Parameters used to compute the output (default: ``None``).
If ``None``, internal parameters will be used
:type params: jnp.array
:raises NotImplementedError: Child class must implement this method
:return: Model output. The first component is the action to be taken by the agent.
The second component is the log of the probability density function for stochastic models
or None for deterministic models. The third component is a dictionary containing extra output values
:rtype: tuple of jax.Array, jax.Array or None, and dict
"""
raise NotImplementedError
def set_mode(self, mode: str) -> None:
"""Set the model mode (training or evaluation)
:param mode: Mode: ``"train"`` for training or ``"eval"`` for evaluation
:type mode: str
:raises ValueError: If the mode is not ``"train"`` or ``"eval"``
"""
if mode == "train":
self.training = True
elif mode == "eval":
self.training = False
else:
raise ValueError("Invalid mode. Use 'train' for training or 'eval' for evaluation")
def save(self, path: str, state_dict: Optional[dict] = None) -> None:
"""Save the model to the specified path
:param path: Path to save the model to
:type path: str
:param state_dict: State dictionary to save (default: ``None``).
If None, the model's state_dict will be saved
:type state_dict: dict, optional
Example::
# save the current model to the specified path
>>> model.save("/tmp/model.flax")
# TODO: save an older version of the model to the specified path
"""
# HACK: Does it make sense to use https://github.com/google/orbax
with open(path, "wb") as file:
file.write(flax.serialization.to_bytes(self.state_dict.params if state_dict is None else state_dict.params))
def load(self, path: str) -> None:
"""Load the model from the specified path
:param path: Path to load the model from
:type path: str
Example::
# load the model
>>> model = Model(observation_space, action_space)
>>> model.load("model.flax")
"""
# HACK: Does it make sense to use https://github.com/google/orbax
with open(path, "rb") as file:
params = flax.serialization.from_bytes(self.state_dict.params, file.read())
self.state_dict = self.state_dict.replace(params=params)
self.set_mode("eval")
def migrate(self,
state_dict: Optional[Mapping[str, Any]] = None,
path: Optional[str] = None,
name_map: Mapping[str, str] = {},
auto_mapping: bool = True,
verbose: bool = False) -> bool:
"""Migrate the specified extrernal model's state dict to the current model
.. warning::
This method is not implemented yet, just maintains compatibility with other ML frameworks
:raises NotImplementedError: Not implemented
"""
raise NotImplementedError
def freeze_parameters(self, freeze: bool = True) -> None:
"""Freeze or unfreeze internal parameters
.. note::
This method does nothing, just maintains compatibility with other ML frameworks
:param freeze: Freeze the internal parameters if True, otherwise unfreeze them (default: ``True``)
:type freeze: bool, optional
Example::
# freeze model parameters
>>> model.freeze_parameters(True)
# unfreeze model parameters
>>> model.freeze_parameters(False)
"""
pass
def update_parameters(self, model: flax.linen.Module, polyak: float = 1) -> None:
"""Update internal parameters by hard or soft (polyak averaging) update
- Hard update: :math:`\\theta = \\theta_{net}`
- Soft (polyak averaging) update: :math:`\\theta = (1 - \\rho) \\theta + \\rho \\theta_{net}`
:param model: Model used to update the internal parameters
:type model: flax.linen.Module (skrl.models.jax.Model)
:param polyak: Polyak hyperparameter between 0 and 1 (default: ``1``).
A hard update is performed when its value is 1
:type polyak: float, optional
Example::
# hard update (from source model)
>>> model.update_parameters(source_model)
# soft update (from source model)
>>> model.update_parameters(source_model, polyak=0.005)
"""
# hard update
if polyak == 1:
self.state_dict = self.state_dict.replace(params=model.state_dict.params)
# soft update
else:
# HACK: Does it make sense to use https://optax.readthedocs.io/en/latest/api.html?#optax.incremental_update
params = jax.tree_util.tree_map(lambda params, model_params: polyak * model_params + (1 - polyak) * params,
self.state_dict.params, model.state_dict.params)
self.state_dict = self.state_dict.replace(params=params)
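# ---------------------------------------------------------------------------
# Illustrative usage sketch added for this document (not part of the original
# skrl source). It mirrors the ``Model.__init__`` docstring above: the
# ``CustomModel`` class, its hidden size (64) and the Box spaces are
# assumptions chosen only for demonstration; running it requires jax, flax
# and gym to be installed.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    import flax.linen as nn

    class CustomModel(Model):
        def __init__(self, observation_space, action_space, device=None, **kwargs):
            Model.__init__(self, observation_space, action_space, device, **kwargs)
            # required by Flax when __init__ is overridden (see the docstring above)
            flax.linen.Module.__post_init__(self)

        @nn.compact
        def __call__(self, inputs, role):
            x = nn.relu(nn.Dense(64)(inputs["states"]))
            x = nn.relu(nn.Dense(self.num_actions)(x))
            return x, None, {}

    observation_space = gym.spaces.Box(low=-1, high=1, shape=(8,))
    action_space = gym.spaces.Box(low=-1, high=1, shape=(2,))

    model = CustomModel(observation_space, action_space)
    target = CustomModel(observation_space, action_space)

    # lazily create the Flax parameters (stored in each model's ``state_dict``)
    model.init_state_dict("policy")
    target.init_state_dict("target_policy")

    # random actions for a batch of 4 states, drawn directly from the action space
    states = np.random.uniform(-1, 1, size=(4, model.num_observations)).astype(np.float32)
    actions, _, _ = model.random_act({"states": states})
    print(actions.shape)  # (4, 2)

    # soft (polyak averaging) update of the target parameters
    target.update_parameters(model, polyak=0.005)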
| 26,683 | Python | 45.732049 | 193 | 0.590901 |
Toni-SM/skrl/skrl/models/jax/__init__.py | from skrl.models.jax.base import Model # isort:skip
from skrl.models.jax.categorical import CategoricalMixin
from skrl.models.jax.deterministic import DeterministicMixin
from skrl.models.jax.gaussian import GaussianMixin
from skrl.models.jax.multicategorical import MultiCategoricalMixin
| 290 | Python | 40.571423 | 66 | 0.858621 |