NVIDIA-Omniverse/orbit/pyproject.toml
[tool.isort]
py_version = 310
line_length = 120
group_by_package = true
# Files to skip
skip_glob = ["docs/*", "logs/*", "_isaac_sim/*", ".vscode/*"]
# Order of imports
sections = [
"FUTURE",
"STDLIB",
"THIRDPARTY",
"ASSETS_FIRSTPARTY",
"FIRSTPARTY",
"EXTRA_FIRSTPARTY",
"LOCALFOLDER",
]
# Extra standard libraries considered as part of python (permissive licenses)
extra_standard_library = [
"numpy",
"h5py",
"open3d",
"torch",
"tensordict",
"bpy",
"matplotlib",
"gymnasium",
"gym",
"scipy",
"hid",
"yaml",
"prettytable",
"toml",
"trimesh",
"tqdm",
]
# Imports from Isaac Sim and Omniverse
known_third_party = [
"omni.isaac.core",
"omni.replicator.isaac",
"omni.replicator.core",
"pxr",
"omni.kit.*",
"warp",
"carb",
]
# Imports from this repository
known_first_party = "omni.isaac.orbit"
known_assets_firstparty = "omni.isaac.orbit_assets"
known_extra_firstparty = [
"omni.isaac.orbit_tasks"
]
# Imports from the local folder
known_local_folder = "config"
[tool.pyright]
include = ["source/extensions", "source/standalone"]
exclude = [
"**/__pycache__",
"**/_isaac_sim",
"**/docs",
"**/logs",
".git",
".vscode",
]
typeCheckingMode = "basic"
pythonVersion = "3.10"
pythonPlatform = "Linux"
enableTypeIgnoreComments = true
# This is required as the CI pre-commit does not download the module (i.e. numpy, torch, prettytable)
# Therefore, we have to ignore missing imports
reportMissingImports = "none"
# This is required to ignore for type checks of modules with stubs missing.
reportMissingModuleSource = "none" # -> most common: prettytable in mdp managers
reportGeneralTypeIssues = "none" # -> raises 218 errors (usage of literal MISSING in dataclasses)
reportOptionalMemberAccess = "warning" # -> raises 8 errors
reportPrivateUsage = "warning"
[tool.codespell]
skip = '*.usd,*.svg,*.png,_isaac_sim*,*.bib,*.css,*/_build'
quiet-level = 0
# the word list should always have words in lower case
ignore-words-list = "haa,slq,collapsable"
# todo: this is a hack to deal with incorrect spelling of "Environment" in the Isaac Sim grid world asset
exclude-file = "source/extensions/omni.isaac.orbit/omni/isaac/orbit/sim/spawners/from_files/from_files.py"
NVIDIA-Omniverse/orbit/CONTRIBUTING.md
# Contribution Guidelines
Orbit is a community-maintained project. We wholeheartedly welcome contributions to the project to make
the framework more mature and useful for everyone. These may take the form of bug reports, feature requests,
design proposals, and more.
For general information on how to contribute see
<https://isaac-orbit.github.io/orbit/source/refs/contributing.html>.
NVIDIA-Omniverse/orbit/CONTRIBUTORS.md
# Orbit Developers and Contributors
This is the official list of Orbit Project developers and contributors.
To see the full list of contributors, please check the revision history in the source control.
Guidelines for modifications:
* Please keep the lists sorted alphabetically.
* Names should be added to this file as: *individual names* or *organizations*.
* E-mail addresses are tracked elsewhere to avoid spam.
## Developers
* Boston Dynamics AI Institute, Inc.
* ETH Zurich
* NVIDIA Corporation & Affiliates
* University of Toronto
---
* David Hoeller
* Farbod Farshidian
* Hunter Hansen
* James Smith
* **Mayank Mittal** (maintainer)
* Nikita Rudin
* Pascal Roth
## Contributors
* Alice Zhou
* Andrej Orsula
* Anton Bjørndahl Mortensen
* Antonio Serrano-Muñoz
* Arjun Bhardwaj
* Calvin Yu
* Chenyu Yang
* Jia Lin Yuan
* Jingzhou Liu
* Kourosh Darvish
* Qinxi Yu
* René Zurbrügg
* Ritvik Singh
* Rosario Scalise
* Shafeef Omar
* Vladimir Fokow
## Acknowledgements
* Ajay Mandlekar
* Animesh Garg
* Buck Babich
* Gavriel State
* Hammad Mazhar
* Marco Hutter
* Yunrong Guo
NVIDIA-Omniverse/orbit/README.md

---
# Orbit
[](https://docs.omniverse.nvidia.com/isaacsim/latest/overview.html)
[](https://docs.python.org/3/whatsnew/3.10.html)
[](https://releases.ubuntu.com/20.04/)
[](https://pre-commit.com/)
[](https://isaac-orbit.github.io/orbit)
[](https://opensource.org/licenses/BSD-3-Clause)
<!-- TODO: Replace docs status with workflow badge? Link: https://github.com/isaac-orbit/orbit/actions/workflows/docs.yaml/badge.svg -->
**Orbit** is a unified and modular framework for robot learning that aims to simplify common workflows
in robotics research (such as RL, learning from demonstrations, and motion planning). It is built upon
[NVIDIA Isaac Sim](https://docs.omniverse.nvidia.com/isaacsim/latest/overview.html) to leverage the latest
simulation capabilities for photo-realistic scenes and fast and accurate simulation.
Please refer to our [documentation page](https://isaac-orbit.github.io/orbit) to learn more about the
installation steps, features, tutorials, and how to setup your own project with Orbit.
## 🎉 Announcement (22.12.2023)
We're excited to announce the merging of our latest development branch into the main branch! This update
introduces several improvements and fixes that enhance the modularity and user-friendliness of the framework.
We have added several new environments, especially for legged locomotion, and are in the process of adding more.
Feel free to explore the latest changes and updates. We appreciate your ongoing support and contributions!
For more details, please check the post here: [#106](https://github.com/NVIDIA-Omniverse/Orbit/discussions/106)
## Contributing to Orbit
We wholeheartedly welcome contributions from the community to make this framework mature and useful for everyone.
These may happen as bug reports, feature requests, or code contributions. For details, please check our
[contribution guidelines](https://isaac-orbit.github.io/orbit/source/refs/contributing.html).
## Troubleshooting
Please see the [troubleshooting](https://isaac-orbit.github.io/orbit/source/refs/troubleshooting.html) section for
common fixes or [submit an issue](https://github.com/NVIDIA-Omniverse/orbit/issues).
For issues related to Isaac Sim, we recommend checking its [documentation](https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/overview.html)
or opening a question on its [forums](https://forums.developer.nvidia.com/c/agx-autonomous-machines/isaac/67).
## Support
* Please use GitHub [Discussions](https://github.com/NVIDIA-Omniverse/Orbit/discussions) for discussing ideas, asking questions, and requesting new features.
* GitHub [Issues](https://github.com/NVIDIA-Omniverse/orbit/issues) should only be used to track executable pieces of work with a definite scope and a clear deliverable. These can be fixing bugs, documentation issues, new features, or general updates.
## Acknowledgement
NVIDIA Isaac Sim is available freely under [individual license](https://www.nvidia.com/en-us/omniverse/download/). For more information about its license terms, please check [here](https://docs.omniverse.nvidia.com/app_isaacsim/common/NVIDIA_Omniverse_License_Agreement.html#software-support-supplement).
The Orbit framework is released under the [BSD-3 License](LICENSE). The license files of its dependencies and assets are present in the [`docs/licenses`](docs/licenses) directory.
## Citing
If you use this framework in your work, please cite [this paper](https://arxiv.org/abs/2301.04195):
```text
@article{mittal2023orbit,
author={Mittal, Mayank and Yu, Calvin and Yu, Qinxi and Liu, Jingzhou and Rudin, Nikita and Hoeller, David and Yuan, Jia Lin and Singh, Ritvik and Guo, Yunrong and Mazhar, Hammad and Mandlekar, Ajay and Babich, Buck and State, Gavriel and Hutter, Marco and Garg, Animesh},
journal={IEEE Robotics and Automation Letters},
title={Orbit: A Unified Simulation Framework for Interactive Robot Learning Environments},
year={2023},
volume={8},
number={6},
pages={3740-3747},
doi={10.1109/LRA.2023.3270034}
}
```
NVIDIA-Omniverse/orbit/tools/tests_to_skip.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
# The following tests are skipped by run_tests.py
TESTS_TO_SKIP = [
# orbit
"test_argparser_launch.py", # app.close issue
"test_env_var_launch.py", # app.close issue
"test_kwarg_launch.py", # app.close issue
"test_differential_ik.py", # Failing
# orbit_tasks
"test_data_collector.py", # Failing
"test_record_video.py", # Failing
"test_rsl_rl_wrapper.py", # Timing out (10 minutes)
"test_sb3_wrapper.py", # Timing out (10 minutes)
]
NVIDIA-Omniverse/orbit/tools/run_all_tests.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""A runner script for all the tests within source directory.
.. code-block:: bash
./orbit.sh -p tools/run_all_tests.py
# for dry run
./orbit.sh -p tools/run_all_tests.py --discover_only
# for quiet run
./orbit.sh -p tools/run_all_tests.py --quiet
# for increasing timeout (default is 600 seconds)
./orbit.sh -p tools/run_all_tests.py --timeout 1000
"""
from __future__ import annotations
import argparse
import logging
import os
import subprocess
import sys
import time
from datetime import datetime
from pathlib import Path
from prettytable import PrettyTable
# Tests to skip
from tests_to_skip import TESTS_TO_SKIP
ORBIT_PATH = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
"""Path to the root directory of Orbit repository."""
def parse_args() -> argparse.Namespace:
"""Parse command line arguments."""
parser = argparse.ArgumentParser(description="Run all tests under current directory.")
# add arguments
parser.add_argument(
"--skip_tests",
default="",
help="Space separated list of tests to skip in addition to those in tests_to_skip.py.",
type=str,
nargs="*",
)
# configure default test directory (source directory)
default_test_dir = os.path.join(ORBIT_PATH, "source")
parser.add_argument(
"--test_dir", type=str, default=default_test_dir, help="Path to the directory containing the tests."
)
# configure default logging path based on time stamp
log_file_name = datetime.now().strftime("%Y-%m-%d_%H-%M-%S") + ".log"
default_log_path = os.path.join(ORBIT_PATH, "logs", "test_results", log_file_name)
parser.add_argument(
"--log_path", type=str, default=default_log_path, help="Path to the log file to store the results in."
)
parser.add_argument("--discover_only", action="store_true", help="Only discover and print tests, don't run them.")
parser.add_argument("--quiet", action="store_true", help="Don't print to console, only log to file.")
parser.add_argument("--timeout", type=int, default=600, help="Timeout for each test in seconds.")
# parse arguments
args = parser.parse_args()
return args
def test_all(
test_dir: str,
tests_to_skip: list[str],
log_path: str,
timeout: float = 600.0,
discover_only: bool = False,
quiet: bool = False,
) -> bool:
"""Run all tests under the given directory.
Args:
test_dir: Path to the directory containing the tests.
tests_to_skip: List of tests to skip.
log_path: Path to the log file to store the results in.
timeout: Timeout for each test in seconds. Defaults to 600 seconds (10 minutes).
discover_only: If True, only discover and print the tests without running them. Defaults to False.
quiet: If False, print the output of the tests to the terminal console (in addition to the log file).
Defaults to False.
Returns:
True if all un-skipped tests pass or `discover_only` is True. Otherwise, False.
Raises:
ValueError: If any test to skip is not found under the given `test_dir`.
"""
# Create the log directory if it doesn't exist
os.makedirs(os.path.dirname(log_path), exist_ok=True)
# Add file handler to log to file
logging_handlers = [logging.FileHandler(log_path)]
# We also want to print to console
if not quiet:
logging_handlers.append(logging.StreamHandler())
# Set up logger
logging.basicConfig(level=logging.INFO, format="%(message)s", handlers=logging_handlers)
# Discover all tests under current directory
all_test_paths = [str(path) for path in Path(test_dir).resolve().rglob("*test_*.py")]
skipped_test_paths = []
test_paths = []
# Check that all tests to skip are actually in the tests
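# note: this relies on Python's for-else semantics -- the else branch runs only when
# the inner loop completes without hitting `break`, i.e. when no matching test path was found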
for test_to_skip in tests_to_skip:
for test_path in all_test_paths:
if test_to_skip in test_path:
break
else:
raise ValueError(f"Test to skip '{test_to_skip}' not found in tests.")
# Remove tests to skip from the list of tests to run
if len(tests_to_skip) != 0:
for test_path in all_test_paths:
if any(test_to_skip in test_path for test_to_skip in tests_to_skip):
skipped_test_paths.append(test_path)
else:
test_paths.append(test_path)
else:
test_paths = all_test_paths
# Sort test paths so they're always in the same order
all_test_paths.sort()
test_paths.sort()
skipped_test_paths.sort()
# Print tests to be run
logging.info("\n" + "=" * 60 + "\n")
logging.info(f"The following {len(all_test_paths)} tests were found:")
for i, test_path in enumerate(all_test_paths):
logging.info(f"{i + 1:02d}: {test_path}")
logging.info("\n" + "=" * 60 + "\n")
logging.info(f"The following {len(skipped_test_paths)} tests are marked to be skipped:")
for i, test_path in enumerate(skipped_test_paths):
logging.info(f"{i + 1:02d}: {test_path}")
logging.info("\n" + "=" * 60 + "\n")
# Exit if only discovering tests
if discover_only:
return True
results = {}
# Run each script and store results
for test_path in test_paths:
results[test_path] = {}
before = time.time()
logging.info("\n" + "-" * 60 + "\n")
logging.info(f"[INFO] Running '{test_path}'\n")
try:
completed_process = subprocess.run(
[sys.executable, test_path], check=True, capture_output=True, timeout=timeout
)
except subprocess.TimeoutExpired as e:
logging.error(f"Timeout occurred: {e}")
result = "TIMEDOUT"
stdout = e.stdout
stderr = e.stderr
except subprocess.CalledProcessError as e:
# When check=True is passed to subprocess.run() above, CalledProcessError is raised if the process returns a
# non-zero exit code. The caveat is that the returncode is not correctly updated in this case, so we simply
# catch the exception and mark this test as FAILED
result = "FAILED"
stdout = e.stdout
stderr = e.stderr
except Exception as e:
logging.error(f"Unexpected exception {e}. Please report this issue on the repository.")
result = "FAILED"
# a generic exception may not carry captured output, so fall back to None
stdout = getattr(e, "stdout", None)
stderr = getattr(e, "stderr", None)
else:
# Should only get here if the process ran successfully, e.g. no exceptions were raised
# but we still check the returncode just in case
result = "PASSED" if completed_process.returncode == 0 else "FAILED"
stdout = completed_process.stdout
stderr = completed_process.stderr
after = time.time()
time_elapsed = after - before
# Decode stdout and stderr and write to file and print to console if desired
stdout_str = stdout.decode("utf-8") if stdout is not None else ""
stderr_str = stderr.decode("utf-8") if stderr is not None else ""
# Write to log file
logging.info(stdout_str)
logging.info(stderr_str)
logging.info(f"[INFO] Time elapsed: {time_elapsed:.2f} s")
logging.info(f"[INFO] Result '{test_path}': {result}")
# Collect results
results[test_path]["time_elapsed"] = time_elapsed
results[test_path]["result"] = result
# Calculate the number and percentage of passing tests
num_tests = len(all_test_paths)
num_passing = len([test_path for test_path in test_paths if results[test_path]["result"] == "PASSED"])
num_failing = len([test_path for test_path in test_paths if results[test_path]["result"] == "FAILED"])
num_timing_out = len([test_path for test_path in test_paths if results[test_path]["result"] == "TIMEDOUT"])
num_skipped = len(skipped_test_paths)
if num_tests == 0:
passing_percentage = 100
else:
passing_percentage = (num_passing + num_skipped) / num_tests * 100
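# note: skipped tests count towards the passing percentage here, since they are
# excluded by choice rather than failing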
# Print summaries of test results
summary_str = "\n\n"
summary_str += "===================\n"
summary_str += "Test Result Summary\n"
summary_str += "===================\n"
summary_str += f"Total: {num_tests}\n"
summary_str += f"Passing: {num_passing}\n"
summary_str += f"Failing: {num_failing}\n"
summary_str += f"Skipped: {num_skipped}\n"
summary_str += f"Timing Out: {num_timing_out}\n"
summary_str += f"Passing Percentage: {passing_percentage:.2f}%\n"
# Print time elapsed in hours, minutes, seconds
total_time = sum([results[test_path]["time_elapsed"] for test_path in test_paths])
summary_str += f"Total Time Elapsed: {total_time // 3600}h"
summary_str += f"{total_time // 60 % 60}m"
summary_str += f"{total_time % 60:.2f}s"
summary_str += "\n\n=======================\n"
summary_str += "Per Test Result Summary\n"
summary_str += "=======================\n"
# Construct table of results per test
per_test_result_table = PrettyTable(field_names=["Test Path", "Result", "Time (s)"])
per_test_result_table.align["Test Path"] = "l"
per_test_result_table.align["Time (s)"] = "r"
for test_path in test_paths:
per_test_result_table.add_row(
[test_path, results[test_path]["result"], f"{results[test_path]['time_elapsed']:0.2f}"]
)
for test_path in skipped_test_paths:
per_test_result_table.add_row([test_path, "SKIPPED", "N/A"])
summary_str += per_test_result_table.get_string()
# Print summary to console and log file
logging.info(summary_str)
# Only count failing and timing out tests towards failure
return num_failing + num_timing_out == 0
if __name__ == "__main__":
# parse command line arguments
args = parse_args()
# combine the static skip list with any extra tests passed on the command line
tests_to_skip = TESTS_TO_SKIP + list(args.skip_tests)
# run all tests
test_success = test_all(
test_dir=args.test_dir,
tests_to_skip=tests_to_skip,
log_path=args.log_path,
timeout=args.timeout,
discover_only=args.discover_only,
quiet=args.quiet,
)
# update exit status based on all tests passing or not
if not test_success:
exit(1)
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/setup.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Installation script for the 'omni.isaac.orbit_tasks' python package."""
import itertools
import os
import toml
from setuptools import setup
# Obtain the extension data from the extension.toml file
EXTENSION_PATH = os.path.dirname(os.path.realpath(__file__))
# Read the extension.toml file
EXTENSION_TOML_DATA = toml.load(os.path.join(EXTENSION_PATH, "config", "extension.toml"))
# Minimum dependencies required prior to installation
INSTALL_REQUIRES = [
# generic
"numpy",
"torch==2.0.1",
"torchvision>=0.14.1", # ensure compatibility with torch 1.13.1
"protobuf>=3.20.2",
# data collection
"h5py",
# basic logger
"tensorboard",
# video recording
"moviepy",
]
# Extra dependencies for RL agents
EXTRAS_REQUIRE = {
"sb3": ["stable-baselines3>=2.0"],
"skrl": ["skrl>=1.1.0"],
"rl_games": ["rl-games==1.6.1", "gym"], # rl-games still needs gym :(
"rsl_rl": ["rsl_rl@git+https://github.com/leggedrobotics/rsl_rl.git"],
"robomimic": ["robomimic@git+https://github.com/ARISE-Initiative/robomimic.git"],
}
# cumulation of all extra-requires
EXTRAS_REQUIRE["all"] = list(itertools.chain.from_iterable(EXTRAS_REQUIRE.values()))
# Installation operation
setup(
name="omni-isaac-orbit_tasks",
author="ORBIT Project Developers",
maintainer="Mayank Mittal",
maintainer_email="[email protected]",
url=EXTENSION_TOML_DATA["package"]["repository"],
version=EXTENSION_TOML_DATA["package"]["version"],
description=EXTENSION_TOML_DATA["package"]["description"],
keywords=EXTENSION_TOML_DATA["package"]["keywords"],
include_package_data=True,
python_requires=">=3.10",
install_requires=INSTALL_REQUIRES,
extras_require=EXTRAS_REQUIRE,
packages=["omni.isaac.orbit_tasks"],
classifiers=[
"Natural Language :: English",
"Programming Language :: Python :: 3.10",
"Isaac Sim :: 2023.1.0-hotfix.1",
"Isaac Sim :: 2023.1.1",
],
zip_safe=False,
)
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/test/test_environments.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
from omni.isaac.orbit.app import AppLauncher, run_tests
# launch the simulator
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import torch
import unittest
import omni.usd
from omni.isaac.orbit.envs import RLTaskEnv, RLTaskEnvCfg
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils.parse_cfg import parse_env_cfg
class TestEnvironments(unittest.TestCase):
"""Test cases for all registered environments."""
@classmethod
def setUpClass(cls):
# acquire all Isaac environments names
cls.registered_tasks = list()
for task_spec in gym.registry.values():
if "Isaac" in task_spec.id:
cls.registered_tasks.append(task_spec.id)
# sort environments by name
cls.registered_tasks.sort()
# print all existing task names
print(">>> All registered environments:", cls.registered_tasks)
"""
Test fixtures.
"""
def test_multiple_instances_gpu(self):
"""Run all environments with multiple instances and check environments return valid signals."""
# common parameters
num_envs = 32
use_gpu = True
# iterate over all registered environments
for task_name in self.registered_tasks:
print(f">>> Running test for environment: {task_name}")
# check environment
self._check_random_actions(task_name, use_gpu, num_envs, num_steps=100)
# close the environment
print(f">>> Closing environment: {task_name}")
print("-" * 80)
def test_single_instance_gpu(self):
"""Run all environments with single instance and check environments return valid signals."""
# common parameters
num_envs = 1
use_gpu = True
# iterate over all registered environments
for task_name in self.registered_tasks:
print(f">>> Running test for environment: {task_name}")
# check environment
self._check_random_actions(task_name, use_gpu, num_envs, num_steps=100)
# close the environment
print(f">>> Closing environment: {task_name}")
print("-" * 80)
"""
Helper functions.
"""
def _check_random_actions(self, task_name: str, use_gpu: bool, num_envs: int, num_steps: int = 1000):
"""Run random actions and check environments return valid signals."""
# create a new stage
omni.usd.get_context().new_stage()
# parse configuration
env_cfg: RLTaskEnvCfg = parse_env_cfg(task_name, use_gpu=use_gpu, num_envs=num_envs)
# create environment
env: RLTaskEnv = gym.make(task_name, cfg=env_cfg)
# reset environment
obs, _ = env.reset()
# check signal
self.assertTrue(self._check_valid_tensor(obs))
# simulate environment for num_steps steps
with torch.inference_mode():
for _ in range(num_steps):
# sample actions from -1 to 1
actions = 2 * torch.rand(env.action_space.shape, device=env.unwrapped.device) - 1
# apply actions
transition = env.step(actions)
# check signals
for data in transition:
self.assertTrue(self._check_valid_tensor(data), msg=f"Invalid data: {data}")
# close the environment
env.close()
@staticmethod
def _check_valid_tensor(data: torch.Tensor | dict) -> bool:
"""Checks if given data does not have corrupted values.
Args:
data: Data buffer.
Returns:
True if the data is valid.
"""
if isinstance(data, torch.Tensor):
return not torch.any(torch.isnan(data))
elif isinstance(data, dict):
valid_tensor = True
for value in data.values():
if isinstance(value, dict):
valid_tensor &= TestEnvironments._check_valid_tensor(value)
elif isinstance(value, torch.Tensor):
valid_tensor &= not torch.any(torch.isnan(value))
return valid_tensor
else:
raise ValueError(f"Input data of invalid type: {type(data)}.")
if __name__ == "__main__":
run_tests()
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/test/test_data_collector.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
from omni.isaac.orbit.app import AppLauncher, run_tests
# launch the simulator
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app
"""Rest everything follows."""
import os
import torch
import unittest
from omni.isaac.orbit_tasks.utils.data_collector import RobomimicDataCollector
class TestRobomimicDataCollector(unittest.TestCase):
"""Test dataset flushing behavior of robomimic data collector."""
def test_basic_flushing(self):
"""Adds random data into the collector and checks saving of the data."""
# name of the environment (needed by robomimic)
task_name = "My-Task-v0"
# specify directory for logging experiments
test_dir = os.path.dirname(os.path.abspath(__file__))
log_dir = os.path.join(test_dir, "output", "demos")
# name of the file to save data
filename = "hdf_dataset.hdf5"
# number of episodes to collect
num_demos = 10
# number of environments to simulate
num_envs = 4
# create data-collector
collector_interface = RobomimicDataCollector(task_name, log_dir, filename, num_demos)
# reset the collector
collector_interface.reset()
while not collector_interface.is_stopped():
# generate random data to store
# -- obs
obs = {"joint_pos": torch.randn(num_envs, 7), "joint_vel": torch.randn(num_envs, 7)}
# -- actions
actions = torch.randn(num_envs, 7)
# -- next obs
next_obs = {"joint_pos": torch.randn(num_envs, 7), "joint_vel": torch.randn(num_envs, 7)}
# -- rewards
rewards = torch.randn(num_envs)
# -- dones
dones = torch.rand(num_envs) > 0.5
# store signals
# -- obs
for key, value in obs.items():
collector_interface.add(f"obs/{key}", value)
# -- actions
collector_interface.add("actions", actions)
# -- next_obs
for key, value in next_obs.items():
collector_interface.add(f"next_obs/{key}", value.cpu().numpy())
# -- rewards
collector_interface.add("rewards", rewards)
# -- dones
collector_interface.add("dones", dones)
# flush data from collector for successful environments
# note: in this case we flush all the time
reset_env_ids = dones.nonzero(as_tuple=False).squeeze(-1)
collector_interface.flush(reset_env_ids)
# close collector
collector_interface.close()
# TODO: Add inspection of the saved dataset as part of the test.
if __name__ == "__main__":
run_tests()
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/test/test_record_video.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
from omni.isaac.orbit.app import AppLauncher, run_tests
# launch the simulator
app_launcher = AppLauncher(headless=True, offscreen_render=True)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import os
import torch
import unittest
import omni.usd
from omni.isaac.orbit.envs import RLTaskEnv, RLTaskEnvCfg
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils import parse_env_cfg
class TestRecordVideoWrapper(unittest.TestCase):
"""Test recording videos using the RecordVideo wrapper."""
@classmethod
def setUpClass(cls):
# acquire all Isaac environments names
cls.registered_tasks = list()
for task_spec in gym.registry.values():
if "Isaac" in task_spec.id:
cls.registered_tasks.append(task_spec.id)
# sort environments by name
cls.registered_tasks.sort()
# print all existing task names
print(">>> All registered environments:", cls.registered_tasks)
# directory to save videos
cls.videos_dir = os.path.join(os.path.dirname(__file__), "output", "videos")
def setUp(self) -> None:
# common parameters
self.num_envs = 16
self.use_gpu = True
# video parameters
self.step_trigger = lambda step: step % 225 == 0
self.video_length = 200
def test_record_video(self):
"""Run random actions agent with recording of videos."""
for task_name in self.registered_tasks:
print(f">>> Running test for environment: {task_name}")
# create a new stage
omni.usd.get_context().new_stage()
# parse configuration
env_cfg: RLTaskEnvCfg = parse_env_cfg(task_name, use_gpu=self.use_gpu, num_envs=self.num_envs)
# create environment
env: RLTaskEnv = gym.make(task_name, cfg=env_cfg, render_mode="rgb_array")
# directory to save videos
videos_dir = os.path.join(self.videos_dir, task_name)
# wrap environment to record videos
env = gym.wrappers.RecordVideo(
env, videos_dir, step_trigger=self.step_trigger, video_length=self.video_length, disable_logger=True
)
# reset environment
env.reset()
# simulate environment
with torch.inference_mode():
for _ in range(500):
# compute zero actions
actions = 2 * torch.rand(env.action_space.shape, device=env.unwrapped.device) - 1
# apply actions
_ = env.step(actions)
# close the simulator
env.close()
if __name__ == "__main__":
run_tests()
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/test/wrappers/test_rsl_rl_wrapper.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
from omni.isaac.orbit.app import AppLauncher, run_tests
# launch the simulator
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import torch
import unittest
import omni.usd
from omni.isaac.orbit.envs import RLTaskEnvCfg
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils.parse_cfg import parse_env_cfg
from omni.isaac.orbit_tasks.utils.wrappers.rsl_rl import RslRlVecEnvWrapper
class TestRslRlVecEnvWrapper(unittest.TestCase):
"""Test that RSL-RL VecEnv wrapper works as expected."""
@classmethod
def setUpClass(cls):
# acquire all Isaac environments names
cls.registered_tasks = list()
for task_spec in gym.registry.values():
if "Isaac" in task_spec.id:
cls.registered_tasks.append(task_spec.id)
# sort environments by name
cls.registered_tasks.sort()
# only pick the first three environments to test
cls.registered_tasks = cls.registered_tasks[:3]
# print all existing task names
print(">>> All registered environments:", cls.registered_tasks)
def setUp(self) -> None:
# common parameters
self.num_envs = 64
self.use_gpu = True
def test_random_actions(self):
"""Run random actions and check environments return valid signals."""
for task_name in self.registered_tasks:
print(f">>> Running test for environment: {task_name}")
# create a new stage
omni.usd.get_context().new_stage()
# parse configuration
env_cfg: RLTaskEnvCfg = parse_env_cfg(task_name, use_gpu=self.use_gpu, num_envs=self.num_envs)
# create environment
env = gym.make(task_name, cfg=env_cfg)
# wrap environment
env = RslRlVecEnvWrapper(env)
# reset environment
obs, extras = env.reset()
# check signal
self.assertTrue(self._check_valid_tensor(obs))
self.assertTrue(self._check_valid_tensor(extras))
# simulate environment for 1000 steps
with torch.inference_mode():
for _ in range(1000):
# sample actions from -1 to 1
actions = 2 * torch.rand(env.action_space.shape, device=env.unwrapped.device) - 1
# apply actions
transition = env.step(actions)
# check signals
for data in transition:
self.assertTrue(self._check_valid_tensor(data), msg=f"Invalid data: {data}")
# close the environment
print(f">>> Closing environment: {task_name}")
env.close()
def test_no_time_outs(self):
"""Check that environments with finite horizon do not send time-out signals."""
for task_name in self.registered_tasks[0:5]:
print(f">>> Running test for environment: {task_name}")
# create a new stage
omni.usd.get_context().new_stage()
# parse configuration
env_cfg: RLTaskEnvCfg = parse_env_cfg(task_name, use_gpu=self.use_gpu, num_envs=self.num_envs)
# change to finite horizon
env_cfg.is_finite_horizon = True
# create environment
env = gym.make(task_name, cfg=env_cfg)
# wrap environment
env = RslRlVecEnvWrapper(env)
# reset environment
_, extras = env.reset()
# check signal
self.assertNotIn("time_outs", extras, msg="Time-out signal found in finite horizon environment.")
# simulate environment for 10 steps
with torch.inference_mode():
for _ in range(10):
# sample actions from -1 to 1
actions = 2 * torch.rand(env.action_space.shape, device=env.unwrapped.device) - 1
# apply actions
extras = env.step(actions)[-1]
# check signals
self.assertNotIn("time_outs", extras, msg="Time-out signal found in finite horizon environment.")
# close the environment
print(f">>> Closing environment: {task_name}")
env.close()
"""
Helper functions.
"""
@staticmethod
def _check_valid_tensor(data: torch.Tensor | dict) -> bool:
"""Checks if given data does not have corrupted values.
Args:
data: Data buffer.
Returns:
True if the data is valid.
"""
if isinstance(data, torch.Tensor):
return not torch.any(torch.isnan(data))
elif isinstance(data, dict):
valid_tensor = True
for value in data.values():
if isinstance(value, dict):
valid_tensor &= TestRslRlVecEnvWrapper._check_valid_tensor(value)
elif isinstance(value, torch.Tensor):
valid_tensor &= not torch.any(torch.isnan(value))
return valid_tensor
else:
raise ValueError(f"Input data of invalid type: {type(data)}.")
if __name__ == "__main__":
run_tests()
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/test/wrappers/test_rl_games_wrapper.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
from omni.isaac.orbit.app import AppLauncher, run_tests
# launch the simulator
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import torch
import unittest
import omni.usd
from omni.isaac.orbit.envs import RLTaskEnvCfg
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils.parse_cfg import parse_env_cfg
from omni.isaac.orbit_tasks.utils.wrappers.rl_games import RlGamesVecEnvWrapper
class TestRlGamesVecEnvWrapper(unittest.TestCase):
"""Test that RL-Games VecEnv wrapper works as expected."""
@classmethod
def setUpClass(cls):
# acquire all Isaac environments names
cls.registered_tasks = list()
for task_spec in gym.registry.values():
if "Isaac" in task_spec.id:
cls.registered_tasks.append(task_spec.id)
# sort environments by name
cls.registered_tasks.sort()
# only pick the first three environments to test
cls.registered_tasks = cls.registered_tasks[:3]
# print all existing task names
print(">>> All registered environments:", cls.registered_tasks)
def setUp(self) -> None:
# common parameters
self.num_envs = 64
self.use_gpu = True
def test_random_actions(self):
"""Run random actions and check environments return valid signals."""
for task_name in self.registered_tasks:
print(f">>> Running test for environment: {task_name}")
# create a new stage
omni.usd.get_context().new_stage()
# parse configuration
env_cfg: RLTaskEnvCfg = parse_env_cfg(task_name, use_gpu=self.use_gpu, num_envs=self.num_envs)
# create environment
env = gym.make(task_name, cfg=env_cfg)
# wrap environment
env = RlGamesVecEnvWrapper(env, "cuda:0", 100, 100)
# reset environment
obs = env.reset()
# check signal
self.assertTrue(self._check_valid_tensor(obs))
# simulate environment for 100 steps
with torch.inference_mode():
for _ in range(100):
# sample actions from -1 to 1
actions = 2 * torch.rand(env.num_envs, *env.action_space.shape, device=env.device) - 1
# apply actions
transition = env.step(actions)
# check signals
for data in transition:
self.assertTrue(self._check_valid_tensor(data), msg=f"Invalid data: {data}")
# close the environment
print(f">>> Closing environment: {task_name}")
env.close()
"""
Helper functions.
"""
@staticmethod
def _check_valid_tensor(data: torch.Tensor | dict) -> bool:
"""Checks if given data does not have corrupted values.
Args:
data: Data buffer.
Returns:
True if the data is valid.
"""
if isinstance(data, torch.Tensor):
return not torch.any(torch.isnan(data))
elif isinstance(data, dict):
valid_tensor = True
for value in data.values():
if isinstance(value, dict):
valid_tensor &= TestRlGamesVecEnvWrapper._check_valid_tensor(value)
elif isinstance(value, torch.Tensor):
valid_tensor &= not torch.any(torch.isnan(value))
return valid_tensor
else:
raise ValueError(f"Input data of invalid type: {type(data)}.")
if __name__ == "__main__":
run_tests()
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/test/wrappers/test_sb3_wrapper.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
from omni.isaac.orbit.app import AppLauncher, run_tests
# launch the simulator
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import numpy as np
import torch
import unittest
import omni.usd
from omni.isaac.orbit.envs import RLTaskEnvCfg
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils.parse_cfg import parse_env_cfg
from omni.isaac.orbit_tasks.utils.wrappers.sb3 import Sb3VecEnvWrapper
class TestStableBaselines3VecEnvWrapper(unittest.TestCase):
"""Test that RSL-RL VecEnv wrapper works as expected."""
@classmethod
def setUpClass(cls):
# acquire all Isaac environments names
cls.registered_tasks = list()
for task_spec in gym.registry.values():
if "Isaac" in task_spec.id:
cls.registered_tasks.append(task_spec.id)
# sort environments by name
cls.registered_tasks.sort()
# only pick the first three environments to test
cls.registered_tasks = cls.registered_tasks[:3]
# print all existing task names
print(">>> All registered environments:", cls.registered_tasks)
def setUp(self) -> None:
# common parameters
self.num_envs = 64
self.use_gpu = True
def test_random_actions(self):
"""Run random actions and check environments return valid signals."""
for task_name in self.registered_tasks:
print(f">>> Running test for environment: {task_name}")
# create a new stage
omni.usd.get_context().new_stage()
# parse configuration
env_cfg: RLTaskEnvCfg = parse_env_cfg(task_name, use_gpu=self.use_gpu, num_envs=self.num_envs)
# create environment
env = gym.make(task_name, cfg=env_cfg)
# wrap environment
env = Sb3VecEnvWrapper(env)
# reset environment
obs = env.reset()
# check signal
self.assertTrue(self._check_valid_array(obs))
# simulate environment for 1000 steps
with torch.inference_mode():
for _ in range(1000):
# sample actions from -1 to 1
actions = 2 * np.random.rand(env.num_envs, *env.action_space.shape) - 1
# apply actions
transition = env.step(actions)
# check signals
for data in transition:
self.assertTrue(self._check_valid_array(data), msg=f"Invalid data: {data}")
# close the environment
print(f">>> Closing environment: {task_name}")
env.close()
"""
Helper functions.
"""
@staticmethod
def _check_valid_array(data: np.ndarray | dict | list) -> bool:
"""Checks if given data does not have corrupted values.
Args:
data: Data buffer.
Returns:
True if the data is valid.
"""
if isinstance(data, np.ndarray):
return not np.any(np.isnan(data))
elif isinstance(data, dict):
valid_array = True
for value in data.values():
if isinstance(value, dict):
valid_array &= TestStableBaselines3VecEnvWrapper._check_valid_array(value)
elif isinstance(value, np.ndarray):
valid_array &= not np.any(np.isnan(value))
return valid_array
elif isinstance(data, list):
valid_array = True
for value in data:
valid_array &= TestStableBaselines3VecEnvWrapper._check_valid_array(value)
return valid_array
else:
raise ValueError(f"Input data of invalid type: {type(data)}.")
if __name__ == "__main__":
run_tests()
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/config/extension.toml
[package]
# Note: Semantic Versioning is used: https://semver.org/
version = "0.6.0"
# Description
title = "ORBIT Environments"
description="Extension containing suite of environments for robot learning."
readme = "docs/README.md"
repository = "https://github.com/NVIDIA-Omniverse/Orbit"
category = "robotics"
keywords = ["robotics", "rl", "il", "learning"]
[dependencies]
"omni.isaac.orbit" = {}
"omni.isaac.orbit_assets" = {}
"omni.isaac.core" = {}
"omni.isaac.gym" = {}
"omni.replicator.isaac" = {}
[[python.module]]
name = "omni.isaac.orbit_tasks"
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/__init__.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Package containing task implementations for various robotic environments."""
import os
import toml
# Convenience paths to other module directories via relative paths
ORBIT_TASKS_EXT_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), "../../../"))
"""Path to the extension source directory."""
ORBIT_TASKS_METADATA = toml.load(os.path.join(ORBIT_TASKS_EXT_DIR, "config", "extension.toml"))
"""Extension metadata dictionary parsed from the extension.toml file."""
# Configure the module-level variables
__version__ = ORBIT_TASKS_METADATA["package"]["version"]
##
# Register Gym environments.
##
from .utils import import_packages
# The blacklist is used to prevent importing configs from sub-packages
_BLACKLIST_PKGS = ["utils"]
# Import all configs in this package
import_packages(__name__, _BLACKLIST_PKGS)
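# note: importing the sub-packages executes their module-level `gym.register` calls as a
# side effect, which is how the environments in this extension get registered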
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/__init__.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Classic environments for control.
These environments are based on the MuJoCo environments provided by OpenAI.
Reference:
https://github.com/openai/gym/tree/master/gym/envs/mujoco
"""
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/ant/ant_env_cfg.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.actuators import ImplicitActuatorCfg
from omni.isaac.orbit.assets import ArticulationCfg, AssetBaseCfg
from omni.isaac.orbit.envs import RLTaskEnvCfg
from omni.isaac.orbit.managers import EventTermCfg as EventTerm
from omni.isaac.orbit.managers import ObservationGroupCfg as ObsGroup
from omni.isaac.orbit.managers import ObservationTermCfg as ObsTerm
from omni.isaac.orbit.managers import RewardTermCfg as RewTerm
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.managers import TerminationTermCfg as DoneTerm
from omni.isaac.orbit.scene import InteractiveSceneCfg
from omni.isaac.orbit.terrains import TerrainImporterCfg
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
import omni.isaac.orbit_tasks.classic.humanoid.mdp as mdp
##
# Scene definition
##
@configclass
class MySceneCfg(InteractiveSceneCfg):
"""Configuration for the terrain scene with an ant robot."""
# terrain
terrain = TerrainImporterCfg(
prim_path="/World/ground",
terrain_type="plane",
collision_group=-1,
physics_material=sim_utils.RigidBodyMaterialCfg(
friction_combine_mode="average",
restitution_combine_mode="average",
static_friction=1.0,
dynamic_friction=1.0,
restitution=0.0,
),
debug_vis=False,
)
# robot
robot = ArticulationCfg(
prim_path="{ENV_REGEX_NS}/Robot",
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Robots/Ant/ant_instanceable.usd",
rigid_props=sim_utils.RigidBodyPropertiesCfg(
disable_gravity=False,
max_depenetration_velocity=10.0,
enable_gyroscopic_forces=True,
),
articulation_props=sim_utils.ArticulationRootPropertiesCfg(
enabled_self_collisions=False,
solver_position_iteration_count=4,
solver_velocity_iteration_count=0,
sleep_threshold=0.005,
stabilization_threshold=0.001,
),
copy_from_source=False,
),
init_state=ArticulationCfg.InitialStateCfg(
pos=(0.0, 0.0, 0.5),
joint_pos={".*": 0.0},
),
actuators={
"body": ImplicitActuatorCfg(
joint_names_expr=[".*"],
stiffness=0.0,
damping=0.0,
),
},
)
# lights
light = AssetBaseCfg(
prim_path="/World/light",
spawn=sim_utils.DistantLightCfg(color=(0.75, 0.75, 0.75), intensity=3000.0),
)
##
# MDP settings
##
@configclass
class CommandsCfg:
"""Command terms for the MDP."""
# no commands for this MDP
null = mdp.NullCommandCfg()
@configclass
class ActionsCfg:
"""Action specifications for the MDP."""
joint_effort = mdp.JointEffortActionCfg(asset_name="robot", joint_names=[".*"], scale=7.5)
@configclass
class ObservationsCfg:
"""Observation specifications for the MDP."""
@configclass
class PolicyCfg(ObsGroup):
"""Observations for the policy."""
base_height = ObsTerm(func=mdp.base_pos_z)
base_lin_vel = ObsTerm(func=mdp.base_lin_vel)
base_ang_vel = ObsTerm(func=mdp.base_ang_vel)
base_yaw_roll = ObsTerm(func=mdp.base_yaw_roll)
base_angle_to_target = ObsTerm(func=mdp.base_angle_to_target, params={"target_pos": (1000.0, 0.0, 0.0)})
base_up_proj = ObsTerm(func=mdp.base_up_proj)
base_heading_proj = ObsTerm(func=mdp.base_heading_proj, params={"target_pos": (1000.0, 0.0, 0.0)})
joint_pos_norm = ObsTerm(func=mdp.joint_pos_norm)
joint_vel_rel = ObsTerm(func=mdp.joint_vel_rel, scale=0.2)
feet_body_forces = ObsTerm(
func=mdp.body_incoming_wrench,
scale=0.1,
params={
"asset_cfg": SceneEntityCfg(
"robot", body_names=["front_left_foot", "front_right_foot", "left_back_foot", "right_back_foot"]
)
},
)
actions = ObsTerm(func=mdp.last_action)
def __post_init__(self):
self.enable_corruption = False
self.concatenate_terms = True
# observation groups
policy: PolicyCfg = PolicyCfg()
@configclass
class EventCfg:
"""Configuration for events."""
reset_base = EventTerm(
func=mdp.reset_root_state_uniform,
mode="reset",
params={"pose_range": {}, "velocity_range": {}},
)
reset_robot_joints = EventTerm(
func=mdp.reset_joints_by_offset,
mode="reset",
params={
"position_range": (-0.2, 0.2),
"velocity_range": (-0.1, 0.1),
},
)
@configclass
class RewardsCfg:
"""Reward terms for the MDP."""
# (1) Reward for moving forward
progress = RewTerm(func=mdp.progress_reward, weight=1.0, params={"target_pos": (1000.0, 0.0, 0.0)})
# (2) Stay alive bonus
alive = RewTerm(func=mdp.is_alive, weight=0.5)
# (3) Reward for maintaining an upright posture
upright = RewTerm(func=mdp.upright_posture_bonus, weight=0.1, params={"threshold": 0.93})
# (4) Reward for moving in the right direction
move_to_target = RewTerm(
func=mdp.move_to_target_bonus, weight=0.5, params={"threshold": 0.8, "target_pos": (1000.0, 0.0, 0.0)}
)
# (5) Penalty for large action commands
action_l2 = RewTerm(func=mdp.action_l2, weight=-0.005)
# (6) Penalty for energy consumption
energy = RewTerm(func=mdp.power_consumption, weight=-0.05, params={"gear_ratio": {".*": 15.0}})
# (7) Penalty for reaching close to joint limits
joint_limits = RewTerm(
func=mdp.joint_limits_penalty_ratio, weight=-0.1, params={"threshold": 0.99, "gear_ratio": {".*": 15.0}}
)
@configclass
class TerminationsCfg:
"""Termination terms for the MDP."""
# (1) Terminate if the episode length is exceeded
time_out = DoneTerm(func=mdp.time_out, time_out=True)
# (2) Terminate if the robot falls
torso_height = DoneTerm(func=mdp.base_height, params={"minimum_height": 0.31})
@configclass
class CurriculumCfg:
"""Curriculum terms for the MDP."""
pass
@configclass
class AntEnvCfg(RLTaskEnvCfg):
"""Configuration for the MuJoCo-style Ant walking environment."""
# Scene settings
scene: MySceneCfg = MySceneCfg(num_envs=4096, env_spacing=5.0)
# Basic settings
observations: ObservationsCfg = ObservationsCfg()
actions: ActionsCfg = ActionsCfg()
commands: CommandsCfg = CommandsCfg()
# MDP settings
rewards: RewardsCfg = RewardsCfg()
terminations: TerminationsCfg = TerminationsCfg()
events: EventCfg = EventCfg()
curriculum: CurriculumCfg = CurriculumCfg()
def __post_init__(self):
"""Post initialization."""
# general settings
self.decimation = 2
self.episode_length_s = 16.0
# simulation settings
self.sim.dt = 1 / 120.0
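# note: with decimation = 2 and dt = 1/120 s, the policy acts every 1/60 s (60 Hz)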
self.sim.physx.bounce_threshold_velocity = 0.2
# default friction material
self.sim.physics_material.static_friction = 1.0
self.sim.physics_material.dynamic_friction = 1.0
self.sim.physics_material.restitution = 0.0
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/ant/__init__.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
Ant locomotion environment (similar to OpenAI Gym Ant-v2).
"""
import gymnasium as gym
from . import agents, ant_env_cfg
##
# Register Gym environments.
##
gym.register(
id="Isaac-Ant-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
disable_env_checker=True,
kwargs={
"env_cfg_entry_point": ant_env_cfg.AntEnvCfg,
"rsl_rl_cfg_entry_point": agents.rsl_rl_ppo_cfg.AntPPORunnerCfg,
"rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
"skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
"sb3_cfg_entry_point": f"{agents.__name__}:sb3_ppo_cfg.yaml",
},
)
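# note: the *_cfg_entry_point kwargs above are looked up by the workflow utilities
# (e.g. `parse_env_cfg` resolves "env_cfg_entry_point" to build the task configuration)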
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/ant/agents/rsl_rl_ppo_cfg.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit_tasks.utils.wrappers.rsl_rl import (
RslRlOnPolicyRunnerCfg,
RslRlPpoActorCriticCfg,
RslRlPpoAlgorithmCfg,
)
@configclass
class AntPPORunnerCfg(RslRlOnPolicyRunnerCfg):
num_steps_per_env = 32
max_iterations = 1000
save_interval = 50
experiment_name = "ant"
empirical_normalization = False
policy = RslRlPpoActorCriticCfg(
init_noise_std=1.0,
actor_hidden_dims=[400, 200, 100],
critic_hidden_dims=[400, 200, 100],
activation="elu",
)
algorithm = RslRlPpoAlgorithmCfg(
value_loss_coef=1.0,
use_clipped_value_loss=True,
clip_param=0.2,
entropy_coef=0.0,
num_learning_epochs=5,
num_mini_batches=4,
learning_rate=5.0e-4,
schedule="adaptive",
gamma=0.99,
lam=0.95,
desired_kl=0.01,
max_grad_norm=1.0,
)
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/ant/agents/skrl_ppo_cfg.yaml
seed: 42
# Models are instantiated using skrl's model instantiator utility
# https://skrl.readthedocs.io/en/develop/modules/skrl.utils.model_instantiators.html
models:
separate: False
policy: # see skrl.utils.model_instantiators.gaussian_model for parameter details
clip_actions: True
clip_log_std: True
initial_log_std: 0
min_log_std: -20.0
max_log_std: 2.0
input_shape: "Shape.STATES"
hiddens: [256, 128, 64]
hidden_activation: ["elu", "elu", "elu"]
output_shape: "Shape.ACTIONS"
output_activation: "tanh"
output_scale: 1.0
value: # see skrl.utils.model_instantiators.deterministic_model for parameter details
clip_actions: False
input_shape: "Shape.STATES"
hiddens: [256, 128, 64]
hidden_activation: ["elu", "elu", "elu"]
output_shape: "Shape.ONE"
output_activation: ""
output_scale: 1.0
# PPO agent configuration (field names are from PPO_DEFAULT_CONFIG)
# https://skrl.readthedocs.io/en/latest/modules/skrl.agents.ppo.html
agent:
rollouts: 16
learning_epochs: 8
mini_batches: 4
discount_factor: 0.99
lambda: 0.95
learning_rate: 3.e-4
learning_rate_scheduler: "KLAdaptiveLR"
learning_rate_scheduler_kwargs:
kl_threshold: 0.008
state_preprocessor: "RunningStandardScaler"
state_preprocessor_kwargs: null
value_preprocessor: "RunningStandardScaler"
value_preprocessor_kwargs: null
random_timesteps: 0
learning_starts: 0
grad_norm_clip: 1.0
ratio_clip: 0.2
value_clip: 0.2
clip_predicted_values: True
entropy_loss_scale: 0.0
value_loss_scale: 1.0
kl_threshold: 0
rewards_shaper_scale: 0.01
# logging and checkpoint
experiment:
directory: "ant"
experiment_name: ""
write_interval: 40
checkpoint_interval: 400
# Sequential trainer
# https://skrl.readthedocs.io/en/latest/modules/skrl.trainers.sequential.html
trainer:
timesteps: 8000
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/ant/agents/sb3_ppo_cfg.yaml
# Reference: https://github.com/DLR-RM/rl-baselines3-zoo/blob/master/hyperparams/ppo.yml#L161
seed: 42
n_timesteps: !!float 1e7
policy: 'MlpPolicy'
batch_size: 128
n_steps: 512
gamma: 0.99
gae_lambda: 0.9
n_epochs: 20
ent_coef: 0.0
sde_sample_freq: 4
max_grad_norm: 0.5
vf_coef: 0.5
learning_rate: !!float 3e-5
use_sde: True
clip_range: 0.4
policy_kwargs: "dict(
log_std_init=-1,
ortho_init=False,
activation_fn=nn.ReLU,
net_arch=dict(pi=[256, 256], vf=[256, 256])
)"
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/ant/agents/__init__.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from . import rsl_rl_ppo_cfg
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/ant/agents/rl_games_ppo_cfg.yaml
params:
seed: 42
# environment wrapper clipping
env:
clip_actions: 1.0
algo:
name: a2c_continuous
model:
name: continuous_a2c_logstd
network:
name: actor_critic
separate: False
space:
continuous:
mu_activation: None
sigma_activation: None
mu_init:
name: default
sigma_init:
name: const_initializer
val: 0
fixed_sigma: True
mlp:
units: [256, 128, 64]
activation: elu
d2rl: False
initializer:
name: default
regularizer:
name: None
load_checkpoint: False # flag which sets whether to load the checkpoint
load_path: '' # path to the checkpoint to load
config:
name: ant
env_name: rlgpu
device: 'cuda:0'
device_name: 'cuda:0'
multi_gpu: False
ppo: True
mixed_precision: True
normalize_input: True
normalize_value: True
value_bootstrap: True
num_actors: -1
reward_shaper:
scale_value: 0.6
normalize_advantage: True
gamma: 0.99
tau: 0.95
learning_rate: 3e-4
lr_schedule: adaptive
schedule_type: legacy
kl_threshold: 0.008
score_to_win: 20000
max_epochs: 500
save_best_after: 100
save_frequency: 50
grad_norm: 1.0
entropy_coef: 0.0
truncate_grads: True
e_clip: 0.2
horizon_length: 16
minibatch_size: 32768
mini_epochs: 4
critic_coef: 2
clip_value: True
seq_length: 4
bounds_loss_coef: 0.0001
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/cartpole/__init__.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
Cartpole balancing environment.
"""
import gymnasium as gym
from . import agents
from .cartpole_env_cfg import CartpoleEnvCfg
##
# Register Gym environments.
##
gym.register(
id="Isaac-Cartpole-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
disable_env_checker=True,
kwargs={
"env_cfg_entry_point": CartpoleEnvCfg,
"rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
"rsl_rl_cfg_entry_point": agents.rsl_rl_ppo_cfg.CartpolePPORunnerCfg,
"skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
"sb3_cfg_entry_point": f"{agents.__name__}:sb3_ppo_cfg.yaml",
},
)
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/cartpole/cartpole_env_cfg.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
import math
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.assets import ArticulationCfg, AssetBaseCfg
from omni.isaac.orbit.envs import RLTaskEnvCfg
from omni.isaac.orbit.managers import EventTermCfg as EventTerm
from omni.isaac.orbit.managers import ObservationGroupCfg as ObsGroup
from omni.isaac.orbit.managers import ObservationTermCfg as ObsTerm
from omni.isaac.orbit.managers import RewardTermCfg as RewTerm
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.managers import TerminationTermCfg as DoneTerm
from omni.isaac.orbit.scene import InteractiveSceneCfg
from omni.isaac.orbit.utils import configclass
import omni.isaac.orbit_tasks.classic.cartpole.mdp as mdp
##
# Pre-defined configs
##
from omni.isaac.orbit_assets.cartpole import CARTPOLE_CFG # isort:skip
##
# Scene definition
##
@configclass
class CartpoleSceneCfg(InteractiveSceneCfg):
"""Configuration for a cart-pole scene."""
# ground plane
ground = AssetBaseCfg(
prim_path="/World/ground",
spawn=sim_utils.GroundPlaneCfg(size=(100.0, 100.0)),
)
# cartpole
robot: ArticulationCfg = CARTPOLE_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
# lights
dome_light = AssetBaseCfg(
prim_path="/World/DomeLight",
spawn=sim_utils.DomeLightCfg(color=(0.9, 0.9, 0.9), intensity=500.0),
)
distant_light = AssetBaseCfg(
prim_path="/World/DistantLight",
spawn=sim_utils.DistantLightCfg(color=(0.9, 0.9, 0.9), intensity=2500.0),
init_state=AssetBaseCfg.InitialStateCfg(rot=(0.738, 0.477, 0.477, 0.0)),
)
##
# MDP settings
##
@configclass
class CommandsCfg:
"""Command terms for the MDP."""
# no commands for this MDP
null = mdp.NullCommandCfg()
@configclass
class ActionsCfg:
"""Action specifications for the MDP."""
joint_effort = mdp.JointEffortActionCfg(asset_name="robot", joint_names=["slider_to_cart"], scale=100.0)
@configclass
class ObservationsCfg:
"""Observation specifications for the MDP."""
@configclass
class PolicyCfg(ObsGroup):
"""Observations for policy group."""
# observation terms (order preserved)
joint_pos_rel = ObsTerm(func=mdp.joint_pos_rel)
joint_vel_rel = ObsTerm(func=mdp.joint_vel_rel)
def __post_init__(self) -> None:
self.enable_corruption = False
self.concatenate_terms = True
# observation groups
policy: PolicyCfg = PolicyCfg()
@configclass
class EventCfg:
"""Configuration for events."""
# reset
reset_cart_position = EventTerm(
func=mdp.reset_joints_by_offset,
mode="reset",
params={
"asset_cfg": SceneEntityCfg("robot", joint_names=["slider_to_cart"]),
"position_range": (-1.0, 1.0),
"velocity_range": (-0.5, 0.5),
},
)
reset_pole_position = EventTerm(
func=mdp.reset_joints_by_offset,
mode="reset",
params={
"asset_cfg": SceneEntityCfg("robot", joint_names=["cart_to_pole"]),
"position_range": (-0.25 * math.pi, 0.25 * math.pi),
"velocity_range": (-0.25 * math.pi, 0.25 * math.pi),
},
)
@configclass
class RewardsCfg:
"""Reward terms for the MDP."""
# (1) Constant running reward
alive = RewTerm(func=mdp.is_alive, weight=1.0)
# (2) Failure penalty
terminating = RewTerm(func=mdp.is_terminated, weight=-2.0)
# (3) Primary task: keep pole upright
pole_pos = RewTerm(
func=mdp.joint_pos_target_l2,
weight=-1.0,
params={"asset_cfg": SceneEntityCfg("robot", joint_names=["cart_to_pole"]), "target": 0.0},
)
# (4) Shaping task: lower cart velocity
cart_vel = RewTerm(
func=mdp.joint_vel_l1,
weight=-0.01,
params={"asset_cfg": SceneEntityCfg("robot", joint_names=["slider_to_cart"])},
)
# (5) Shaping task: lower pole angular velocity
pole_vel = RewTerm(
func=mdp.joint_vel_l1,
weight=-0.005,
params={"asset_cfg": SceneEntityCfg("robot", joint_names=["cart_to_pole"])},
)
@configclass
class TerminationsCfg:
"""Termination terms for the MDP."""
# (1) Time out
time_out = DoneTerm(func=mdp.time_out, time_out=True)
# (2) Cart out of bounds
cart_out_of_bounds = DoneTerm(
func=mdp.joint_pos_manual_limit,
params={"asset_cfg": SceneEntityCfg("robot", joint_names=["slider_to_cart"]), "bounds": (-3.0, 3.0)},
)
@configclass
class CurriculumCfg:
"""Configuration for the curriculum."""
pass
##
# Environment configuration
##
@configclass
class CartpoleEnvCfg(RLTaskEnvCfg):
"""Configuration for the locomotion velocity-tracking environment."""
# Scene settings
scene: CartpoleSceneCfg = CartpoleSceneCfg(num_envs=4096, env_spacing=4.0)
# Basic settings
observations: ObservationsCfg = ObservationsCfg()
actions: ActionsCfg = ActionsCfg()
events: EventCfg = EventCfg()
# MDP settings
curriculum: CurriculumCfg = CurriculumCfg()
rewards: RewardsCfg = RewardsCfg()
terminations: TerminationsCfg = TerminationsCfg()
# No command generator
commands: CommandsCfg = CommandsCfg()
# Post initialization
def __post_init__(self) -> None:
"""Post initialization."""
# general settings
self.decimation = 2
self.episode_length_s = 5.0
# viewer settings
self.viewer.eye = (8.0, 0.0, 5.0)
# simulation settings
self.sim.dt = 1 / 120
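# Illustrative note (not in the original source): with decimation = 2 and
# sim.dt = 1 / 120 s, the policy acts every 2 * (1 / 120) s ~= 0.0167 s (60 Hz),
# so a 5.0 s episode spans roughly 5.0 * 60 = 300 control steps.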
| 5,671 | Python | 26.803921 | 109 | 0.6535 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/cartpole/agents/rsl_rl_ppo_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit_tasks.utils.wrappers.rsl_rl import (
RslRlOnPolicyRunnerCfg,
RslRlPpoActorCriticCfg,
RslRlPpoAlgorithmCfg,
)
@configclass
class CartpolePPORunnerCfg(RslRlOnPolicyRunnerCfg):
num_steps_per_env = 16
max_iterations = 150
save_interval = 50
experiment_name = "cartpole"
empirical_normalization = False
policy = RslRlPpoActorCriticCfg(
init_noise_std=1.0,
actor_hidden_dims=[32, 32],
critic_hidden_dims=[32, 32],
activation="elu",
)
algorithm = RslRlPpoAlgorithmCfg(
value_loss_coef=1.0,
use_clipped_value_loss=True,
clip_param=0.2,
entropy_coef=0.005,
num_learning_epochs=5,
num_mini_batches=4,
learning_rate=1.0e-3,
schedule="adaptive",
gamma=0.99,
lam=0.95,
desired_kl=0.01,
max_grad_norm=1.0,
)
| 1,065 | Python | 24.380952 | 58 | 0.644131 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/cartpole/agents/skrl_ppo_cfg.yaml | seed: 42
# Models are instantiated using skrl's model instantiator utility
# https://skrl.readthedocs.io/en/develop/modules/skrl.utils.model_instantiators.html
models:
separate: False
policy: # see skrl.utils.model_instantiators.gaussian_model for parameter details
clip_actions: True
clip_log_std: True
initial_log_std: 0
min_log_std: -20.0
max_log_std: 2.0
input_shape: "Shape.STATES"
hiddens: [32, 32]
hidden_activation: ["elu", "elu"]
output_shape: "Shape.ACTIONS"
output_activation: "tanh"
output_scale: 1.0
value: # see skrl.utils.model_instantiators.deterministic_model for parameter details
clip_actions: False
input_shape: "Shape.STATES"
hiddens: [32, 32]
hidden_activation: ["elu", "elu"]
output_shape: "Shape.ONE"
output_activation: ""
output_scale: 1.0
# PPO agent configuration (field names are from PPO_DEFAULT_CONFIG)
# https://skrl.readthedocs.io/en/latest/modules/skrl.agents.ppo.html
agent:
rollouts: 16
learning_epochs: 5
mini_batches: 4
discount_factor: 0.99
lambda: 0.95
learning_rate: 1.e-3
learning_rate_scheduler: "KLAdaptiveLR"
learning_rate_scheduler_kwargs:
kl_threshold: 0.01
state_preprocessor: "RunningStandardScaler"
state_preprocessor_kwargs: null
value_preprocessor: "RunningStandardScaler"
value_preprocessor_kwargs: null
random_timesteps: 0
learning_starts: 0
grad_norm_clip: 1.0
ratio_clip: 0.2
value_clip: 0.2
clip_predicted_values: True
entropy_loss_scale: 0.0
value_loss_scale: 2.0
kl_threshold: 0
rewards_shaper_scale: 1.0
# logging and checkpoint
experiment:
directory: "cartpole"
experiment_name: ""
write_interval: 12
checkpoint_interval: 120
# Sequential trainer
# https://skrl.readthedocs.io/en/latest/modules/skrl.trainers.sequential.html
trainer:
timesteps: 2400
| 1,865 | YAML | 26.850746 | 88 | 0.713673 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/cartpole/agents/sb3_ppo_cfg.yaml | # Reference: https://github.com/DLR-RM/rl-baselines3-zoo/blob/master/hyperparams/ppo.yml#L32
seed: 42
n_timesteps: !!float 1e6
policy: 'MlpPolicy'
n_steps: 16
batch_size: 4096
gae_lambda: 0.95
gamma: 0.99
n_epochs: 20
ent_coef: 0.01
learning_rate: !!float 3e-4
clip_range: !!float 0.2
policy_kwargs: "dict(
activation_fn=nn.ELU,
net_arch=[32, 32],
squash_output=False,
)"
vf_coef: 1.0
max_grad_norm: 1.0
| 475 | YAML | 21.666666 | 92 | 0.610526 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/cartpole/agents/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from . import rsl_rl_ppo_cfg # noqa: F401, F403
| 172 | Python | 23.714282 | 56 | 0.72093 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/cartpole/agents/rl_games_ppo_cfg.yaml | params:
seed: 42
# environment wrapper clipping
env:
# added to the wrapper
clip_observations: 5.0
# can make custom wrapper?
clip_actions: 1.0
algo:
name: a2c_continuous
model:
name: continuous_a2c_logstd
# rl-games does not expose this fine-grained control, but this setup approximates it
network:
name: actor_critic
separate: False
space:
continuous:
mu_activation: None
sigma_activation: None
mu_init:
name: default
sigma_init:
name: const_initializer
val: 0
fixed_sigma: True
mlp:
units: [32, 32]
activation: elu
d2rl: False
initializer:
name: default
regularizer:
name: None
load_checkpoint: False # flag which sets whether to load the checkpoint
load_path: '' # path to the checkpoint to load
config:
name: cartpole
env_name: rlgpu
device: 'cuda:0'
device_name: 'cuda:0'
multi_gpu: False
ppo: True
mixed_precision: False
normalize_input: False
normalize_value: False
num_actors: -1 # configured from the script (based on num_envs)
reward_shaper:
scale_value: 1.0
normalize_advantage: False
gamma: 0.99
tau: 0.95
learning_rate: 3e-4
lr_schedule: adaptive
kl_threshold: 0.008
score_to_win: 20000
max_epochs: 150
save_best_after: 50
save_frequency: 25
grad_norm: 1.0
entropy_coef: 0.0
truncate_grads: True
e_clip: 0.2
horizon_length: 16
minibatch_size: 8192
mini_epochs: 8
critic_coef: 4
clip_value: True
seq_length: 4
bounds_loss_coef: 0.0001
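# Illustrative note (not in the original config): with the environment default of
# 4096 parallel envs, each epoch collects horizon_length * num_envs = 16 * 4096
# = 65536 transitions, which minibatch_size 8192 splits into 8 minibatches per
# mini-epoch.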
| 1,648 | YAML | 19.873417 | 73 | 0.61165 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/cartpole/mdp/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""This sub-module contains the functions that are specific to the cartpole environments."""
from omni.isaac.orbit.envs.mdp import * # noqa: F401, F403
from .rewards import * # noqa: F401, F403
| 321 | Python | 28.272725 | 92 | 0.735202 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/cartpole/mdp/rewards.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import torch
from typing import TYPE_CHECKING
from omni.isaac.orbit.assets import Articulation
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.utils.math import wrap_to_pi
if TYPE_CHECKING:
from omni.isaac.orbit.envs import RLTaskEnv
def joint_pos_target_l2(env: RLTaskEnv, target: float, asset_cfg: SceneEntityCfg) -> torch.Tensor:
"""Penalize joint position deviation from a target value."""
# extract the used quantities (to enable type-hinting)
asset: Articulation = env.scene[asset_cfg.name]
# wrap the joint positions to (-pi, pi)
joint_pos = wrap_to_pi(asset.data.joint_pos[:, asset_cfg.joint_ids])
# compute the reward
return torch.sum(torch.square(joint_pos - target), dim=1)
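# A minimal numeric sketch of the term above (illustrative values only): a pole
# joint at 0.1 rad with target 0.0 contributes (0.1 - 0.0)^2 = 0.01 per step,
# which the environment's reward weight of -1.0 turns into a penalty of -0.01:
#
#   import torch
#   joint_pos = torch.tensor([[0.1]])
#   torch.sum(torch.square(joint_pos - 0.0), dim=1)  # tensor([0.0100])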
| 907 | Python | 32.629628 | 98 | 0.742007 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/humanoid/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
Humanoid locomotion environment (similar to OpenAI Gym Humanoid-v2).
"""
import gymnasium as gym
from . import agents, humanoid_env_cfg
##
# Register Gym environments.
##
gym.register(
id="Isaac-Humanoid-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
disable_env_checker=True,
kwargs={
"env_cfg_entry_point": humanoid_env_cfg.HumanoidEnvCfg,
"rsl_rl_cfg_entry_point": agents.rsl_rl_ppo_cfg.HumanoidPPORunnerCfg,
"rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
"skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
"sb3_cfg_entry_point": f"{agents.__name__}:sb3_ppo_cfg.yaml",
},
)
| 811 | Python | 26.066666 | 79 | 0.668311 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/humanoid/humanoid_env_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.actuators import ImplicitActuatorCfg
from omni.isaac.orbit.assets import ArticulationCfg, AssetBaseCfg
from omni.isaac.orbit.envs import RLTaskEnvCfg
from omni.isaac.orbit.managers import EventTermCfg as EventTerm
from omni.isaac.orbit.managers import ObservationGroupCfg as ObsGroup
from omni.isaac.orbit.managers import ObservationTermCfg as ObsTerm
from omni.isaac.orbit.managers import RewardTermCfg as RewTerm
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.managers import TerminationTermCfg as DoneTerm
from omni.isaac.orbit.scene import InteractiveSceneCfg
from omni.isaac.orbit.terrains import TerrainImporterCfg
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
import omni.isaac.orbit_tasks.classic.humanoid.mdp as mdp
##
# Scene definition
##
@configclass
class MySceneCfg(InteractiveSceneCfg):
"""Configuration for the terrain scene with a humanoid robot."""
# terrain
terrain = TerrainImporterCfg(
prim_path="/World/ground",
terrain_type="plane",
collision_group=-1,
physics_material=sim_utils.RigidBodyMaterialCfg(static_friction=1.0, dynamic_friction=1.0, restitution=0.0),
debug_vis=False,
)
# robot
robot = ArticulationCfg(
prim_path="{ENV_REGEX_NS}/Robot",
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Robots/Humanoid/humanoid_instanceable.usd",
rigid_props=sim_utils.RigidBodyPropertiesCfg(
disable_gravity=None,
max_depenetration_velocity=10.0,
enable_gyroscopic_forces=True,
),
articulation_props=sim_utils.ArticulationRootPropertiesCfg(
enabled_self_collisions=True,
solver_position_iteration_count=4,
solver_velocity_iteration_count=0,
sleep_threshold=0.005,
stabilization_threshold=0.001,
),
copy_from_source=False,
),
init_state=ArticulationCfg.InitialStateCfg(
pos=(0.0, 0.0, 1.34),
joint_pos={".*": 0.0},
),
actuators={
"body": ImplicitActuatorCfg(
joint_names_expr=[".*"],
stiffness={
".*_waist.*": 20.0,
".*_upper_arm.*": 10.0,
"pelvis": 10.0,
".*_lower_arm": 2.0,
".*_thigh:0": 10.0,
".*_thigh:1": 20.0,
".*_thigh:2": 10.0,
".*_shin": 5.0,
".*_foot.*": 2.0,
},
damping={
".*_waist.*": 5.0,
".*_upper_arm.*": 5.0,
"pelvis": 5.0,
".*_lower_arm": 1.0,
".*_thigh:0": 5.0,
".*_thigh:1": 5.0,
".*_thigh:2": 5.0,
".*_shin": 0.1,
".*_foot.*": 1.0,
},
),
},
)
# lights
light = AssetBaseCfg(
prim_path="/World/light",
spawn=sim_utils.DistantLightCfg(color=(0.75, 0.75, 0.75), intensity=3000.0),
)
##
# MDP settings
##
@configclass
class CommandsCfg:
"""Command terms for the MDP."""
# no commands for this MDP
null = mdp.NullCommandCfg()
@configclass
class ActionsCfg:
"""Action specifications for the MDP."""
joint_effort = mdp.JointEffortActionCfg(
asset_name="robot",
joint_names=[".*"],
scale={
".*_waist.*": 67.5,
".*_upper_arm.*": 67.5,
"pelvis": 67.5,
".*_lower_arm": 45.0,
".*_thigh:0": 45.0,
".*_thigh:1": 135.0,
".*_thigh:2": 45.0,
".*_shin": 90.0,
".*_foot.*": 22.5,
},
)
@configclass
class ObservationsCfg:
"""Observation specifications for the MDP."""
@configclass
class PolicyCfg(ObsGroup):
"""Observations for the policy."""
base_height = ObsTerm(func=mdp.base_pos_z)
base_lin_vel = ObsTerm(func=mdp.base_lin_vel)
base_ang_vel = ObsTerm(func=mdp.base_ang_vel, scale=0.25)
base_yaw_roll = ObsTerm(func=mdp.base_yaw_roll)
base_angle_to_target = ObsTerm(func=mdp.base_angle_to_target, params={"target_pos": (1000.0, 0.0, 0.0)})
base_up_proj = ObsTerm(func=mdp.base_up_proj)
base_heading_proj = ObsTerm(func=mdp.base_heading_proj, params={"target_pos": (1000.0, 0.0, 0.0)})
joint_pos_norm = ObsTerm(func=mdp.joint_pos_norm)
joint_vel_rel = ObsTerm(func=mdp.joint_vel_rel, scale=0.1)
feet_body_forces = ObsTerm(
func=mdp.body_incoming_wrench,
scale=0.01,
params={"asset_cfg": SceneEntityCfg("robot", body_names=["left_foot", "right_foot"])},
)
actions = ObsTerm(func=mdp.last_action)
def __post_init__(self):
self.enable_corruption = False
self.concatenate_terms = True
# observation groups
policy: PolicyCfg = PolicyCfg()
@configclass
class EventCfg:
"""Configuration for events."""
reset_base = EventTerm(
func=mdp.reset_root_state_uniform,
mode="reset",
params={"pose_range": {}, "velocity_range": {}},
)
reset_robot_joints = EventTerm(
func=mdp.reset_joints_by_offset,
mode="reset",
params={
"position_range": (-0.2, 0.2),
"velocity_range": (-0.1, 0.1),
},
)
@configclass
class RewardsCfg:
"""Reward terms for the MDP."""
# (1) Reward for moving forward
progress = RewTerm(func=mdp.progress_reward, weight=1.0, params={"target_pos": (1000.0, 0.0, 0.0)})
# (2) Stay alive bonus
alive = RewTerm(func=mdp.is_alive, weight=2.0)
# (3) Reward for maintaining an upright posture
upright = RewTerm(func=mdp.upright_posture_bonus, weight=0.1, params={"threshold": 0.93})
# (4) Reward for moving in the right direction
move_to_target = RewTerm(
func=mdp.move_to_target_bonus, weight=0.5, params={"threshold": 0.8, "target_pos": (1000.0, 0.0, 0.0)}
)
# (5) Penalty for large action commands
action_l2 = RewTerm(func=mdp.action_l2, weight=-0.01)
# (6) Penalty for energy consumption
energy = RewTerm(
func=mdp.power_consumption,
weight=-0.005,
params={
"gear_ratio": {
".*_waist.*": 67.5,
".*_upper_arm.*": 67.5,
"pelvis": 67.5,
".*_lower_arm": 45.0,
".*_thigh:0": 45.0,
".*_thigh:1": 135.0,
".*_thigh:2": 45.0,
".*_shin": 90.0,
".*_foot.*": 22.5,
}
},
)
# (7) Penalty for reaching close to joint limits
joint_limits = RewTerm(
func=mdp.joint_limits_penalty_ratio,
weight=-0.25,
params={
"threshold": 0.98,
"gear_ratio": {
".*_waist.*": 67.5,
".*_upper_arm.*": 67.5,
"pelvis": 67.5,
".*_lower_arm": 45.0,
".*_thigh:0": 45.0,
".*_thigh:1": 135.0,
".*_thigh:2": 45.0,
".*_shin": 90.0,
".*_foot.*": 22.5,
},
},
)
@configclass
class TerminationsCfg:
"""Termination terms for the MDP."""
# (1) Terminate if the episode length is exceeded
time_out = DoneTerm(func=mdp.time_out, time_out=True)
# (2) Terminate if the robot falls
torso_height = DoneTerm(func=mdp.base_height, params={"minimum_height": 0.8})
@configclass
class CurriculumCfg:
"""Curriculum terms for the MDP."""
pass
@configclass
class HumanoidEnvCfg(RLTaskEnvCfg):
"""Configuration for the MuJoCo-style Humanoid walking environment."""
# Scene settings
scene: MySceneCfg = MySceneCfg(num_envs=4096, env_spacing=5.0)
# Basic settings
observations: ObservationsCfg = ObservationsCfg()
actions: ActionsCfg = ActionsCfg()
commands: CommandsCfg = CommandsCfg()
# MDP settings
rewards: RewardsCfg = RewardsCfg()
terminations: TerminationsCfg = TerminationsCfg()
events: EventCfg = EventCfg()
curriculum: CurriculumCfg = CurriculumCfg()
def __post_init__(self):
"""Post initialization."""
# general settings
self.decimation = 2
self.episode_length_s = 16.0
# simulation settings
self.sim.dt = 1 / 120.0
self.sim.physx.bounce_threshold_velocity = 0.2
# default friction material
self.sim.physics_material.static_friction = 1.0
self.sim.physics_material.dynamic_friction = 1.0
self.sim.physics_material.restitution = 0.0
| 9,098 | Python | 30.484429 | 116 | 0.558474 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/humanoid/agents/rsl_rl_ppo_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit_tasks.utils.wrappers.rsl_rl import (
RslRlOnPolicyRunnerCfg,
RslRlPpoActorCriticCfg,
RslRlPpoAlgorithmCfg,
)
@configclass
class HumanoidPPORunnerCfg(RslRlOnPolicyRunnerCfg):
num_steps_per_env = 32
max_iterations = 1000
save_interval = 50
experiment_name = "humanoid"
empirical_normalization = False
policy = RslRlPpoActorCriticCfg(
init_noise_std=1.0,
actor_hidden_dims=[400, 200, 100],
critic_hidden_dims=[400, 200, 100],
activation="elu",
)
algorithm = RslRlPpoAlgorithmCfg(
value_loss_coef=1.0,
use_clipped_value_loss=True,
clip_param=0.2,
entropy_coef=0.0,
num_learning_epochs=5,
num_mini_batches=4,
learning_rate=5.0e-4,
schedule="adaptive",
gamma=0.99,
lam=0.95,
desired_kl=0.01,
max_grad_norm=1.0,
)
| 1,078 | Python | 24.690476 | 58 | 0.644712 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/humanoid/agents/skrl_ppo_cfg.yaml | seed: 42
# Models are instantiated using skrl's model instantiator utility
# https://skrl.readthedocs.io/en/develop/modules/skrl.utils.model_instantiators.html
models:
separate: False
policy: # see skrl.utils.model_instantiators.gaussian_model for parameter details
clip_actions: True
clip_log_std: True
initial_log_std: 0
min_log_std: -20.0
max_log_std: 2.0
input_shape: "Shape.STATES"
hiddens: [400, 200, 100]
hidden_activation: ["elu", "elu", "elu"]
output_shape: "Shape.ACTIONS"
output_activation: "tanh"
output_scale: 1.0
value: # see skrl.utils.model_instantiators.deterministic_model for parameter details
clip_actions: False
input_shape: "Shape.STATES"
hiddens: [400, 200, 100]
hidden_activation: ["elu", "elu", "elu"]
output_shape: "Shape.ONE"
output_activation: ""
output_scale: 1.0
# PPO agent configuration (field names are from PPO_DEFAULT_CONFIG)
# https://skrl.readthedocs.io/en/latest/modules/skrl.agents.ppo.html
agent:
rollouts: 32
learning_epochs: 8
mini_batches: 8
discount_factor: 0.99
lambda: 0.95
learning_rate: 3.e-4
learning_rate_scheduler: "KLAdaptiveLR"
learning_rate_scheduler_kwargs:
kl_threshold: 0.008
state_preprocessor: "RunningStandardScaler"
state_preprocessor_kwargs: null
value_preprocessor: "RunningStandardScaler"
value_preprocessor_kwargs: null
random_timesteps: 0
learning_starts: 0
grad_norm_clip: 1.0
ratio_clip: 0.2
value_clip: 0.2
clip_predicted_values: True
entropy_loss_scale: 0.0
value_loss_scale: 1.0
kl_threshold: 0
rewards_shaper_scale: 0.01
# logging and checkpoint
experiment:
directory: "humanoid"
experiment_name: ""
write_interval: 80
checkpoint_interval: 800
# Sequential trainer
# https://skrl.readthedocs.io/en/latest/modules/skrl.trainers.sequential.html
trainer:
timesteps: 16000
| 1,896 | YAML | 27.313432 | 88 | 0.712025 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/humanoid/agents/sb3_ppo_cfg.yaml | # Reference: https://github.com/DLR-RM/rl-baselines3-zoo/blob/master/hyperparams/ppo.yml#L245
seed: 42
policy: 'MlpPolicy'
n_timesteps: !!float 5e7
batch_size: 256
n_steps: 512
gamma: 0.99
learning_rate: !!float 2.5e-4
ent_coef: 0.0
clip_range: 0.2
n_epochs: 10
gae_lambda: 0.95
max_grad_norm: 1.0
vf_coef: 0.5
policy_kwargs: "dict(
log_std_init=-2,
ortho_init=False,
activation_fn=nn.ReLU,
net_arch=dict(pi=[256, 256], vf=[256, 256])
)"
| 527 | YAML | 22.999999 | 93 | 0.590133 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/humanoid/agents/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from . import rsl_rl_ppo_cfg # noqa: F401, F403
| 172 | Python | 23.714282 | 56 | 0.72093 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/humanoid/agents/rl_games_ppo_cfg.yaml | params:
seed: 42
# environment wrapper clipping
env:
clip_actions: 1.0
algo:
name: a2c_continuous
model:
name: continuous_a2c_logstd
network:
name: actor_critic
separate: False
space:
continuous:
mu_activation: None
sigma_activation: None
mu_init:
name: default
sigma_init:
name: const_initializer
val: 0
fixed_sigma: True
mlp:
units: [400, 200, 100]
activation: elu
d2rl: False
initializer:
name: default
regularizer:
name: None
load_checkpoint: False # flag which sets whether to load the checkpoint
load_path: '' # path to the checkpoint to load
config:
name: humanoid
env_name: rlgpu
device: 'cuda:0'
device_name: 'cuda:0'
multi_gpu: False
ppo: True
mixed_precision: True
normalize_input: True
normalize_value: True
value_bootstrap: True
num_actors: -1
reward_shaper:
scale_value: 0.6
normalize_advantage: True
gamma: 0.99
tau: 0.95
learning_rate: 5e-4
lr_schedule: adaptive
kl_threshold: 0.01
score_to_win: 20000
max_epochs: 1000
save_best_after: 100
save_frequency: 100
grad_norm: 1.0
entropy_coef: 0.0
truncate_grads: True
e_clip: 0.2
horizon_length: 32
minibatch_size: 32768
mini_epochs: 5
critic_coef: 4
clip_value: True
seq_length: 4
bounds_loss_coef: 0.0001
| 1,483 | YAML | 18.526316 | 73 | 0.601483 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/humanoid/mdp/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""This sub-module contains the functions that are specific to the humanoid environment."""
from omni.isaac.orbit.envs.mdp import * # noqa: F401, F403
from .observations import *  # noqa: F401, F403
from .rewards import *  # noqa: F401, F403
| 328 | Python | 26.416664 | 91 | 0.746951 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/humanoid/mdp/rewards.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import torch
from typing import TYPE_CHECKING
import omni.isaac.orbit.utils.math as math_utils
import omni.isaac.orbit.utils.string as string_utils
from omni.isaac.orbit.assets import Articulation
from omni.isaac.orbit.managers import ManagerTermBase, RewardTermCfg, SceneEntityCfg
from . import observations as obs
if TYPE_CHECKING:
from omni.isaac.orbit.envs import RLTaskEnv
def upright_posture_bonus(
env: RLTaskEnv, threshold: float, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")
) -> torch.Tensor:
"""Reward for maintaining an upright posture."""
up_proj = obs.base_up_proj(env, asset_cfg).squeeze(-1)
return (up_proj > threshold).float()
def move_to_target_bonus(
env: RLTaskEnv,
threshold: float,
target_pos: tuple[float, float, float],
asset_cfg: SceneEntityCfg = SceneEntityCfg("robot"),
) -> torch.Tensor:
"""Reward for moving to the target heading."""
heading_proj = obs.base_heading_proj(env, target_pos, asset_cfg).squeeze(-1)
return torch.where(heading_proj > threshold, 1.0, heading_proj / threshold)
class progress_reward(ManagerTermBase):
"""Reward for making progress towards the target."""
def __init__(self, env: RLTaskEnv, cfg: RewardTermCfg):
# initialize the base class
super().__init__(cfg, env)
# create history buffer
self.potentials = torch.zeros(env.num_envs, device=env.device)
self.prev_potentials = torch.zeros_like(self.potentials)
def reset(self, env_ids: torch.Tensor):
# extract the used quantities (to enable type-hinting)
asset: Articulation = self._env.scene["robot"]
# compute projection of current heading to desired heading vector
target_pos = torch.tensor(self.cfg.params["target_pos"], device=self.device)
to_target_pos = target_pos - asset.data.root_pos_w[env_ids, :3]
# reward terms
self.potentials[env_ids] = -torch.norm(to_target_pos, p=2, dim=-1) / self._env.step_dt
self.prev_potentials[env_ids] = self.potentials[env_ids]
def __call__(
self,
env: RLTaskEnv,
target_pos: tuple[float, float, float],
asset_cfg: SceneEntityCfg = SceneEntityCfg("robot"),
) -> torch.Tensor:
# extract the used quantities (to enable type-hinting)
asset: Articulation = env.scene[asset_cfg.name]
# compute vector to target
target_pos = torch.tensor(target_pos, device=env.device)
to_target_pos = target_pos - asset.data.root_pos_w[:, :3]
to_target_pos[:, 2] = 0.0
# update history buffer and compute new potential
self.prev_potentials[:] = self.potentials[:]
self.potentials[:] = -torch.norm(to_target_pos, p=2, dim=-1) / env.step_dt
return self.potentials - self.prev_potentials
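# Illustrative arithmetic for the shaping above (values are made up): with
# step_dt = 0.02 s and a robot whose distance to the target drops from 10.0 m
# to 9.9 m in one step,
#   prev_potential = -10.0 / 0.02 = -500.0
#   new_potential  = -9.9 / 0.02  = -495.0
#   reward         = -495.0 - (-500.0) = 5.0
# i.e. the reward equals the speed (in m/s) at which the robot approaches the target.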
class joint_limits_penalty_ratio(ManagerTermBase):
"""Penalty for violating joint limits weighted by the gear ratio."""
def __init__(self, env: RLTaskEnv, cfg: RewardTermCfg):
# add default argument
if "asset_cfg" not in cfg.params:
cfg.params["asset_cfg"] = SceneEntityCfg("robot")
# extract the used quantities (to enable type-hinting)
asset: Articulation = env.scene[cfg.params["asset_cfg"].name]
# resolve the gear ratio for each joint
self.gear_ratio = torch.ones(env.num_envs, asset.num_joints, device=env.device)
index_list, _, value_list = string_utils.resolve_matching_names_values(
cfg.params["gear_ratio"], asset.joint_names
)
self.gear_ratio[:, index_list] = torch.tensor(value_list, device=env.device)
self.gear_ratio_scaled = self.gear_ratio / torch.max(self.gear_ratio)
def __call__(
self, env: RLTaskEnv, threshold: float, gear_ratio: dict[str, float], asset_cfg: SceneEntityCfg
) -> torch.Tensor:
# extract the used quantities (to enable type-hinting)
asset: Articulation = env.scene[asset_cfg.name]
# compute the penalty over normalized joints
joint_pos_scaled = math_utils.scale_transform(
asset.data.joint_pos, asset.data.soft_joint_pos_limits[..., 0], asset.data.soft_joint_pos_limits[..., 1]
)
# scale the violation amount by the gear ratio
violation_amount = (torch.abs(joint_pos_scaled) - threshold) / (1 - threshold)
violation_amount = violation_amount * self.gear_ratio_scaled
return torch.sum((torch.abs(joint_pos_scaled) > threshold) * violation_amount, dim=-1)
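# Illustrative arithmetic for the penalty above (values are made up): with
# threshold = 0.98, a joint at 99% of its soft limit (joint_pos_scaled = 0.99)
# yields violation_amount = (0.99 - 0.98) / (1 - 0.98) = 0.5 before gear-ratio
# scaling, while joints below the threshold contribute nothing to the sum.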
class power_consumption(ManagerTermBase):
"""Penalty for the power consumed by the actions to the environment.
This is computed as commanded torque times the joint velocity.
"""
def __init__(self, env: RLTaskEnv, cfg: RewardTermCfg):
# add default argument
if "asset_cfg" not in cfg.params:
cfg.params["asset_cfg"] = SceneEntityCfg("robot")
# extract the used quantities (to enable type-hinting)
asset: Articulation = env.scene[cfg.params["asset_cfg"].name]
# resolve the gear ratio for each joint
self.gear_ratio = torch.ones(env.num_envs, asset.num_joints, device=env.device)
index_list, _, value_list = string_utils.resolve_matching_names_values(
cfg.params["gear_ratio"], asset.joint_names
)
self.gear_ratio[:, index_list] = torch.tensor(value_list, device=env.device)
self.gear_ratio_scaled = self.gear_ratio / torch.max(self.gear_ratio)
def __call__(self, env: RLTaskEnv, gear_ratio: dict[str, float], asset_cfg: SceneEntityCfg) -> torch.Tensor:
# extract the used quantities (to enable type-hinting)
asset: Articulation = env.scene[asset_cfg.name]
# return power = torque * velocity (here actions: joint torques)
return torch.sum(torch.abs(env.action_manager.action * asset.data.joint_vel * self.gear_ratio_scaled), dim=-1)
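# Illustrative arithmetic for the penalty above (values are made up): a joint
# with a commanded torque (action) of 10.0, a velocity of 2.0 rad/s, and a
# scaled gear ratio of 0.5 contributes |10.0 * 2.0 * 0.5| = 10.0 to the sum.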
| 6,069 | Python | 42.985507 | 118 | 0.66782 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/humanoid/mdp/observations.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import torch
from typing import TYPE_CHECKING
import omni.isaac.orbit.utils.math as math_utils
from omni.isaac.orbit.assets import Articulation
from omni.isaac.orbit.managers import SceneEntityCfg
if TYPE_CHECKING:
from omni.isaac.orbit.envs import BaseEnv
def base_yaw_roll(env: BaseEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")) -> torch.Tensor:
"""Yaw and roll of the base in the simulation world frame."""
# extract the used quantities (to enable type-hinting)
asset: Articulation = env.scene[asset_cfg.name]
# extract euler angles (in world frame)
roll, _, yaw = math_utils.euler_xyz_from_quat(asset.data.root_quat_w)
# normalize angle to [-pi, pi]
roll = torch.atan2(torch.sin(roll), torch.cos(roll))
yaw = torch.atan2(torch.sin(yaw), torch.cos(yaw))
return torch.cat((yaw.unsqueeze(-1), roll.unsqueeze(-1)), dim=-1)
def base_up_proj(env: BaseEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")) -> torch.Tensor:
"""Projection of the base up vector onto the world up vector."""
# extract the used quantities (to enable type-hinting)
asset: Articulation = env.scene[asset_cfg.name]
# compute base up vector
base_up_vec = math_utils.quat_rotate(asset.data.root_quat_w, -asset.GRAVITY_VEC_W)
return base_up_vec[:, 2].unsqueeze(-1)
def base_heading_proj(
env: BaseEnv, target_pos: tuple[float, float, float], asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")
) -> torch.Tensor:
"""Projection of the base forward vector onto the world forward vector."""
# extract the used quantities (to enable type-hinting)
asset: Articulation = env.scene[asset_cfg.name]
# compute desired heading direction
to_target_pos = torch.tensor(target_pos, device=env.device) - asset.data.root_pos_w[:, :3]
to_target_pos[:, 2] = 0.0
to_target_dir = math_utils.normalize(to_target_pos)
# compute base forward vector
heading_vec = math_utils.quat_rotate(asset.data.root_quat_w, asset.FORWARD_VEC_B)
# compute dot product between heading and target direction
heading_proj = torch.bmm(heading_vec.view(env.num_envs, 1, 3), to_target_dir.view(env.num_envs, 3, 1))
return heading_proj.view(env.num_envs, 1)
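# Illustrative note (not in the original source): since both vectors are unit
# length, the batched dot product above equals the cosine of the heading error;
# a base facing 45 degrees away from the target yields ~0.707, and facing the
# target exactly yields 1.0.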
def base_angle_to_target(
env: BaseEnv, target_pos: tuple[float, float, float], asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")
) -> torch.Tensor:
"""Angle between the base forward vector and the vector to the target."""
# extract the used quantities (to enable type-hinting)
asset: Articulation = env.scene[asset_cfg.name]
# compute desired heading direction
to_target_pos = torch.tensor(target_pos, device=env.device) - asset.data.root_pos_w[:, :3]
walk_target_angle = torch.atan2(to_target_pos[:, 1], to_target_pos[:, 0])
# compute base forward vector
_, _, yaw = math_utils.euler_xyz_from_quat(asset.data.root_quat_w)
# normalize angle to target to [-pi, pi]
angle_to_target = walk_target_angle - yaw
angle_to_target = torch.atan2(torch.sin(angle_to_target), torch.cos(angle_to_target))
return angle_to_target.unsqueeze(-1)
| 3,270 | Python | 42.039473 | 109 | 0.705505 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Manipulation environments for fixed-arm robots."""
from .reach import *  # noqa: F401, F403
| 207 | Python | 22.111109 | 56 | 0.729469 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/reach/reach_env_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
from dataclasses import MISSING
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.assets import ArticulationCfg, AssetBaseCfg
from omni.isaac.orbit.envs import RLTaskEnvCfg
from omni.isaac.orbit.managers import ActionTermCfg as ActionTerm
from omni.isaac.orbit.managers import CurriculumTermCfg as CurrTerm
from omni.isaac.orbit.managers import EventTermCfg as EventTerm
from omni.isaac.orbit.managers import ObservationGroupCfg as ObsGroup
from omni.isaac.orbit.managers import ObservationTermCfg as ObsTerm
from omni.isaac.orbit.managers import RewardTermCfg as RewTerm
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.managers import TerminationTermCfg as DoneTerm
from omni.isaac.orbit.scene import InteractiveSceneCfg
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
from omni.isaac.orbit.utils.noise import AdditiveUniformNoiseCfg as Unoise
import omni.isaac.orbit_tasks.manipulation.reach.mdp as mdp
##
# Scene definition
##
@configclass
class ReachSceneCfg(InteractiveSceneCfg):
"""Configuration for the scene with a robotic arm."""
# world
ground = AssetBaseCfg(
prim_path="/World/ground",
spawn=sim_utils.GroundPlaneCfg(),
init_state=AssetBaseCfg.InitialStateCfg(pos=(0.0, 0.0, -1.05)),
)
table = AssetBaseCfg(
prim_path="{ENV_REGEX_NS}/Table",
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/Mounts/SeattleLabTable/table_instanceable.usd",
),
init_state=AssetBaseCfg.InitialStateCfg(pos=(0.55, 0.0, 0.0), rot=(0.70711, 0.0, 0.0, 0.70711)),
)
# robots
robot: ArticulationCfg = MISSING
# lights
light = AssetBaseCfg(
prim_path="/World/light",
spawn=sim_utils.DomeLightCfg(color=(0.75, 0.75, 0.75), intensity=2500.0),
)
##
# MDP settings
##
@configclass
class CommandsCfg:
"""Command terms for the MDP."""
ee_pose = mdp.UniformPoseCommandCfg(
asset_name="robot",
body_name=MISSING,
resampling_time_range=(4.0, 4.0),
debug_vis=True,
ranges=mdp.UniformPoseCommandCfg.Ranges(
pos_x=(0.35, 0.65),
pos_y=(-0.2, 0.2),
pos_z=(0.15, 0.5),
roll=(0.0, 0.0),
pitch=MISSING, # depends on end-effector axis
yaw=(-3.14, 3.14),
),
)
@configclass
class ActionsCfg:
"""Action specifications for the MDP."""
arm_action: ActionTerm = MISSING
gripper_action: ActionTerm | None = None
@configclass
class ObservationsCfg:
"""Observation specifications for the MDP."""
@configclass
class PolicyCfg(ObsGroup):
"""Observations for policy group."""
# observation terms (order preserved)
joint_pos = ObsTerm(func=mdp.joint_pos_rel, noise=Unoise(n_min=-0.01, n_max=0.01))
joint_vel = ObsTerm(func=mdp.joint_vel_rel, noise=Unoise(n_min=-0.01, n_max=0.01))
pose_command = ObsTerm(func=mdp.generated_commands, params={"command_name": "ee_pose"})
actions = ObsTerm(func=mdp.last_action)
def __post_init__(self):
self.enable_corruption = True
self.concatenate_terms = True
# observation groups
policy: PolicyCfg = PolicyCfg()
@configclass
class EventCfg:
"""Configuration for events."""
reset_robot_joints = EventTerm(
func=mdp.reset_joints_by_scale,
mode="reset",
params={
"position_range": (0.5, 1.5),
"velocity_range": (0.0, 0.0),
},
)
@configclass
class RewardsCfg:
"""Reward terms for the MDP."""
# task terms
end_effector_position_tracking = RewTerm(
func=mdp.position_command_error,
weight=-0.2,
params={"asset_cfg": SceneEntityCfg("robot", body_names=MISSING), "command_name": "ee_pose"},
)
end_effector_orientation_tracking = RewTerm(
func=mdp.orientation_command_error,
weight=-0.05,
params={"asset_cfg": SceneEntityCfg("robot", body_names=MISSING), "command_name": "ee_pose"},
)
# action penalty
action_rate = RewTerm(func=mdp.action_rate_l2, weight=-0.0001)
joint_vel = RewTerm(
func=mdp.joint_vel_l2,
weight=-0.0001,
params={"asset_cfg": SceneEntityCfg("robot")},
)
@configclass
class TerminationsCfg:
"""Termination terms for the MDP."""
time_out = DoneTerm(func=mdp.time_out, time_out=True)
@configclass
class CurriculumCfg:
"""Curriculum terms for the MDP."""
action_rate = CurrTerm(
func=mdp.modify_reward_weight, params={"term_name": "action_rate", "weight": -0.005, "num_steps": 4500}
)
##
# Environment configuration
##
@configclass
class ReachEnvCfg(RLTaskEnvCfg):
"""Configuration for the reach end-effector pose tracking environment."""
# Scene settings
scene: ReachSceneCfg = ReachSceneCfg(num_envs=4096, env_spacing=2.5)
# Basic settings
observations: ObservationsCfg = ObservationsCfg()
actions: ActionsCfg = ActionsCfg()
commands: CommandsCfg = CommandsCfg()
# MDP settings
rewards: RewardsCfg = RewardsCfg()
terminations: TerminationsCfg = TerminationsCfg()
events: EventCfg = EventCfg()
curriculum: CurriculumCfg = CurriculumCfg()
def __post_init__(self):
"""Post initialization."""
# general settings
self.decimation = 2
self.episode_length_s = 12.0
self.viewer.eye = (3.5, 3.5, 3.5)
# simulation settings
self.sim.dt = 1.0 / 60.0
| 5,740 | Python | 27.562189 | 111 | 0.661324 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/reach/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Fixed-arm environments with end-effector pose tracking commands."""
| 194 | Python | 26.857139 | 70 | 0.752577 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/reach/mdp/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""This sub-module contains the functions that are specific to the locomotion environments."""
from omni.isaac.orbit.envs.mdp import * # noqa: F401, F403
from .rewards import * # noqa: F401, F403
| 323 | Python | 28.454543 | 94 | 0.736842 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/reach/mdp/rewards.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import torch
from typing import TYPE_CHECKING
from omni.isaac.orbit.assets import RigidObject
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.utils.math import combine_frame_transforms, quat_error_magnitude, quat_mul
if TYPE_CHECKING:
from omni.isaac.orbit.envs import RLTaskEnv
def position_command_error(env: RLTaskEnv, command_name: str, asset_cfg: SceneEntityCfg) -> torch.Tensor:
"""Penalize tracking of the position error using L2-norm.
The function computes the position error between the desired position (from the command) and the
current position of the asset's body (in world frame). The position error is computed as the L2-norm
of the difference between the desired and current positions.
"""
# extract the asset (to enable type hinting)
asset: RigidObject = env.scene[asset_cfg.name]
command = env.command_manager.get_command(command_name)
# obtain the desired and current positions
des_pos_b = command[:, :3]
des_pos_w, _ = combine_frame_transforms(asset.data.root_state_w[:, :3], asset.data.root_state_w[:, 3:7], des_pos_b)
curr_pos_w = asset.data.body_state_w[:, asset_cfg.body_ids[0], :3] # type: ignore
return torch.norm(curr_pos_w - des_pos_w, dim=1)
def orientation_command_error(env: RLTaskEnv, command_name: str, asset_cfg: SceneEntityCfg) -> torch.Tensor:
"""Penalize tracking orientation error using shortest path.
The function computes the orientation error between the desired orientation (from the command) and the
current orientation of the asset's body (in world frame). The orientation error is computed as the shortest
path between the desired and current orientations.
"""
# extract the asset (to enable type hinting)
asset: RigidObject = env.scene[asset_cfg.name]
command = env.command_manager.get_command(command_name)
# obtain the desired and current orientations
des_quat_b = command[:, 3:7]
des_quat_w = quat_mul(asset.data.root_state_w[:, 3:7], des_quat_b)
curr_quat_w = asset.data.body_state_w[:, asset_cfg.body_ids[0], 3:7] # type: ignore
return quat_error_magnitude(curr_quat_w, des_quat_w)
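# Illustrative sketch of the error metric above (assumed inputs, quaternions in
# (w, x, y, z) order): identical orientations give an error of 0, while a
# 90-degree yaw offset gives an error of pi / 2 ~= 1.5708:
#
#   import torch
#   from omni.isaac.orbit.utils.math import quat_error_magnitude
#
#   q_a = torch.tensor([[1.0, 0.0, 0.0, 0.0]])
#   q_b = torch.tensor([[0.7071, 0.0, 0.0, 0.7071]])
#   quat_error_magnitude(q_a, q_b)  # ~tensor([1.5708])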
| 2,337 | Python | 44.843136 | 119 | 0.728712 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/reach/config/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Configurations for arm-based reach-tracking environments."""
# We leave this file empty since we don't want to expose any configs in this package directly.
# We still need this file to import the "config" module in the parent package.
| 362 | Python | 35.299996 | 94 | 0.759669 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/reach/config/franka/ik_rel_env_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.controllers.differential_ik_cfg import DifferentialIKControllerCfg
from omni.isaac.orbit.envs.mdp.actions.actions_cfg import DifferentialInverseKinematicsActionCfg
from omni.isaac.orbit.utils import configclass
from . import joint_pos_env_cfg
##
# Pre-defined configs
##
from omni.isaac.orbit_assets.franka import FRANKA_PANDA_HIGH_PD_CFG # isort: skip
@configclass
class FrankaReachEnvCfg(joint_pos_env_cfg.FrankaReachEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# Set Franka as robot
# We switch here to a stiffer PD controller so that IK tracking is more accurate.
self.scene.robot = FRANKA_PANDA_HIGH_PD_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
# Set actions for the specific robot type (franka)
self.actions.arm_action = DifferentialInverseKinematicsActionCfg(
asset_name="robot",
joint_names=["panda_joint.*"],
body_name="panda_hand",
controller=DifferentialIKControllerCfg(command_type="pose", use_relative_mode=True, ik_method="dls"),
scale=0.5,
body_offset=DifferentialInverseKinematicsActionCfg.OffsetCfg(pos=[0.0, 0.0, 0.107]),
)
@configclass
class FrankaReachEnvCfg_PLAY(FrankaReachEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# make a smaller scene for play
self.scene.num_envs = 50
self.scene.env_spacing = 2.5
# disable randomization for play
self.observations.policy.enable_corruption = False
| 1,734 | Python | 34.408163 | 113 | 0.683968 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/reach/config/franka/ik_abs_env_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.controllers.differential_ik_cfg import DifferentialIKControllerCfg
from omni.isaac.orbit.envs.mdp.actions.actions_cfg import DifferentialInverseKinematicsActionCfg
from omni.isaac.orbit.utils import configclass
from . import joint_pos_env_cfg
##
# Pre-defined configs
##
from omni.isaac.orbit_assets.franka import FRANKA_PANDA_HIGH_PD_CFG # isort: skip
@configclass
class FrankaReachEnvCfg(joint_pos_env_cfg.FrankaReachEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# Set Franka as robot
# We switch here to a stiffer PD controller so that IK tracking is more accurate.
self.scene.robot = FRANKA_PANDA_HIGH_PD_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
# Set actions for the specific robot type (franka)
self.actions.arm_action = DifferentialInverseKinematicsActionCfg(
asset_name="robot",
joint_names=["panda_joint.*"],
body_name="panda_hand",
controller=DifferentialIKControllerCfg(command_type="pose", use_relative_mode=False, ik_method="dls"),
body_offset=DifferentialInverseKinematicsActionCfg.OffsetCfg(pos=[0.0, 0.0, 0.107]),
)
@configclass
class FrankaReachEnvCfg_PLAY(FrankaReachEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# make a smaller scene for play
self.scene.num_envs = 50
self.scene.env_spacing = 2.5
# disable randomization for play
self.observations.policy.enable_corruption = False
| 1,712 | Python | 34.687499 | 114 | 0.689252 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/reach/config/franka/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
import gymnasium as gym
from . import agents, ik_abs_env_cfg, ik_rel_env_cfg, joint_pos_env_cfg
##
# Register Gym environments.
##
##
# Joint Position Control
##
gym.register(
id="Isaac-Reach-Franka-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
disable_env_checker=True,
kwargs={
"env_cfg_entry_point": joint_pos_env_cfg.FrankaReachEnvCfg,
"rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
"rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_cfg:FrankaReachPPORunnerCfg",
"skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
},
)
gym.register(
id="Isaac-Reach-Franka-Play-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
disable_env_checker=True,
kwargs={
"env_cfg_entry_point": joint_pos_env_cfg.FrankaReachEnvCfg_PLAY,
"rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
"rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_cfg:FrankaReachPPORunnerCfg",
"skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
},
)
##
# Inverse Kinematics - Absolute Pose Control
##
gym.register(
id="Isaac-Reach-Franka-IK-Abs-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
kwargs={
"env_cfg_entry_point": ik_abs_env_cfg.FrankaReachEnvCfg,
"rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
"rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_cfg:FrankaReachPPORunnerCfg",
"skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
},
disable_env_checker=True,
)
gym.register(
id="Isaac-Reach-Franka-IK-Abs-Play-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
kwargs={
"env_cfg_entry_point": ik_abs_env_cfg.FrankaReachEnvCfg_PLAY,
"rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
"rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_cfg:FrankaReachPPORunnerCfg",
"skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
},
disable_env_checker=True,
)
##
# Inverse Kinematics - Relative Pose Control
##
gym.register(
id="Isaac-Reach-Franka-IK-Rel-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
kwargs={
"env_cfg_entry_point": ik_rel_env_cfg.FrankaReachEnvCfg,
"rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
"rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_cfg:FrankaReachPPORunnerCfg",
"skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
},
disable_env_checker=True,
)
gym.register(
id="Isaac-Reach-Franka-IK-Rel-Play-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
kwargs={
"env_cfg_entry_point": ik_rel_env_cfg.FrankaReachEnvCfg_PLAY,
"rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
"rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_cfg:FrankaReachPPORunnerCfg",
"skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
},
disable_env_checker=True,
)
| 3,205 | Python | 31.714285 | 90 | 0.64337 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/reach/config/franka/joint_pos_env_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import math
from omni.isaac.orbit.utils import configclass
import omni.isaac.orbit_tasks.manipulation.reach.mdp as mdp
from omni.isaac.orbit_tasks.manipulation.reach.reach_env_cfg import ReachEnvCfg
##
# Pre-defined configs
##
from omni.isaac.orbit_assets import FRANKA_PANDA_CFG # isort: skip
##
# Environment configuration
##
@configclass
class FrankaReachEnvCfg(ReachEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# switch robot to franka
self.scene.robot = FRANKA_PANDA_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
# override rewards
self.rewards.end_effector_position_tracking.params["asset_cfg"].body_names = ["panda_hand"]
self.rewards.end_effector_orientation_tracking.params["asset_cfg"].body_names = ["panda_hand"]
# override actions
self.actions.arm_action = mdp.JointPositionActionCfg(
asset_name="robot", joint_names=["panda_joint.*"], scale=0.5, use_default_offset=True
)
# override command generator body
# end-effector is along z-direction
self.commands.ee_pose.body_name = "panda_hand"
self.commands.ee_pose.ranges.pitch = (math.pi, math.pi)
@configclass
class FrankaReachEnvCfg_PLAY(FrankaReachEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# make a smaller scene for play
self.scene.num_envs = 50
self.scene.env_spacing = 2.5
# disable randomization for play
self.observations.policy.enable_corruption = False
| 1,754 | Python | 29.789473 | 102 | 0.676739 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/reach/config/franka/agents/rsl_rl_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit_tasks.utils.wrappers.rsl_rl import (
RslRlOnPolicyRunnerCfg,
RslRlPpoActorCriticCfg,
RslRlPpoAlgorithmCfg,
)
@configclass
class FrankaReachPPORunnerCfg(RslRlOnPolicyRunnerCfg):
num_steps_per_env = 24
max_iterations = 1000
save_interval = 50
experiment_name = "franka_reach"
run_name = ""
resume = False
empirical_normalization = False
policy = RslRlPpoActorCriticCfg(
init_noise_std=1.0,
actor_hidden_dims=[64, 64],
critic_hidden_dims=[64, 64],
activation="elu",
)
algorithm = RslRlPpoAlgorithmCfg(
value_loss_coef=1.0,
use_clipped_value_loss=True,
clip_param=0.2,
entropy_coef=0.01,
num_learning_epochs=8,
num_mini_batches=4,
learning_rate=1.0e-3,
schedule="adaptive",
gamma=0.99,
lam=0.95,
desired_kl=0.01,
max_grad_norm=1.0,
)
| 1,109 | Python | 24.227272 | 58 | 0.640216 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/reach/config/franka/agents/skrl_ppo_cfg.yaml | seed: 42
# Models are instantiated using skrl's model instantiator utility
# https://skrl.readthedocs.io/en/develop/modules/skrl.utils.model_instantiators.html
models:
separate: False
policy: # see skrl.utils.model_instantiators.gaussian_model for parameter details
clip_actions: False
clip_log_std: True
initial_log_std: 0
min_log_std: -20.0
max_log_std: 2.0
input_shape: "Shape.STATES"
hiddens: [64, 64]
hidden_activation: ["elu", "elu"]
output_shape: "Shape.ACTIONS"
output_activation: ""
output_scale: 1.0
value: # see skrl.utils.model_instantiators.deterministic_model for parameter details
clip_actions: False
input_shape: "Shape.STATES"
hiddens: [64, 64]
hidden_activation: ["elu", "elu"]
output_shape: "Shape.ONE"
output_activation: ""
output_scale: 1.0
# PPO agent configuration (field names are from PPO_DEFAULT_CONFIG)
# https://skrl.readthedocs.io/en/latest/modules/skrl.agents.ppo.html
agent:
rollouts: 24
learning_epochs: 8
mini_batches: 4
discount_factor: 0.99
lambda: 0.95
learning_rate: 1.e-3
learning_rate_scheduler: "KLAdaptiveLR"
learning_rate_scheduler_kwargs:
kl_threshold: 0.01
state_preprocessor: "RunningStandardScaler"
state_preprocessor_kwargs: null
value_preprocessor: "RunningStandardScaler"
value_preprocessor_kwargs: null
random_timesteps: 0
learning_starts: 0
grad_norm_clip: 1.0
ratio_clip: 0.2
value_clip: 0.2
clip_predicted_values: True
entropy_loss_scale: 0.0
value_loss_scale: 2.0
kl_threshold: 0
rewards_shaper_scale: 0.01
# logging and checkpoint
experiment:
directory: "franka_reach"
experiment_name: ""
write_interval: 120
checkpoint_interval: 1200
# Sequential trainer
# https://skrl.readthedocs.io/en/latest/modules/skrl.trainers.sequential.html
trainer:
timesteps: 24000
| 1,870 | YAML | 26.925373 | 88 | 0.713904 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/reach/config/franka/agents/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
| 122 | Python | 23.599995 | 56 | 0.745902 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/reach/config/franka/agents/rl_games_ppo_cfg.yaml | params:
seed: 42
# environment wrapper clipping
env:
clip_observations: 100.0
clip_actions: 100.0
algo:
name: a2c_continuous
model:
name: continuous_a2c_logstd
network:
name: actor_critic
separate: False
space:
continuous:
mu_activation: None
sigma_activation: None
mu_init:
name: default
sigma_init:
name: const_initializer
val: 0
fixed_sigma: True
mlp:
units: [64, 64]
activation: elu
d2rl: False
initializer:
name: default
regularizer:
name: None
load_checkpoint: False # flag which sets whether to load the checkpoint
load_path: '' # path to the checkpoint to load
config:
name: reach_franka
env_name: rlgpu
device: 'cuda:0'
device_name: 'cuda:0'
multi_gpu: False
ppo: True
mixed_precision: False
normalize_input: True
normalize_value: True
value_bootstrap: True
num_actors: -1
reward_shaper:
scale_value: 1.0
normalize_advantage: True
gamma: 0.99
tau: 0.95
learning_rate: 1e-3
lr_schedule: adaptive
schedule_type: legacy
kl_threshold: 0.01
score_to_win: 10000
max_epochs: 1000
save_best_after: 200
save_frequency: 100
print_stats: True
grad_norm: 1.0
entropy_coef: 0.01
truncate_grads: True
e_clip: 0.2
horizon_length: 24
minibatch_size: 24576
mini_epochs: 5
critic_coef: 2
clip_value: True
clip_actions: False
bounds_loss_coef: 0.0001
| 1,567 | YAML | 18.848101 | 73 | 0.60753 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/reach/config/ur_10/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
import gymnasium as gym
from . import agents, joint_pos_env_cfg
##
# Register Gym environments.
##
gym.register(
id="Isaac-Reach-UR10-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
disable_env_checker=True,
kwargs={
"env_cfg_entry_point": joint_pos_env_cfg.UR10ReachEnvCfg,
"rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
"rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_ppo_cfg:UR10ReachPPORunnerCfg",
},
)
gym.register(
id="Isaac-Reach-UR10-Play-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
disable_env_checker=True,
kwargs={
"env_cfg_entry_point": joint_pos_env_cfg.UR10ReachEnvCfg_PLAY,
"rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
"rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_ppo_cfg:UR10ReachPPORunnerCfg",
},
)
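# Example usage (a minimal sketch; assumes the simulation app has already been launched,
# e.g. via ``AppLauncher`` from ``omni.isaac.orbit.app``):
#
#   import gymnasium as gym
#   import omni.isaac.orbit_tasks  # noqa: F401  (triggers the registrations above)
#   from omni.isaac.orbit_tasks.utils import parse_env_cfg
#
#   env_cfg = parse_env_cfg("Isaac-Reach-UR10-v0", num_envs=16)
#   env = gym.make("Isaac-Reach-UR10-v0", cfg=env_cfg)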
| 1,008 | Python | 27.828571 | 92 | 0.660714 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/reach/config/ur_10/joint_pos_env_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import math
from omni.isaac.orbit.utils import configclass
import omni.isaac.orbit_tasks.manipulation.reach.mdp as mdp
from omni.isaac.orbit_tasks.manipulation.reach.reach_env_cfg import ReachEnvCfg
##
# Pre-defined configs
##
from omni.isaac.orbit_assets import UR10_CFG # isort: skip
##
# Environment configuration
##
@configclass
class UR10ReachEnvCfg(ReachEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# switch robot to ur10
self.scene.robot = UR10_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
# override events
self.events.reset_robot_joints.params["position_range"] = (0.75, 1.25)
# override rewards
self.rewards.end_effector_position_tracking.params["asset_cfg"].body_names = ["ee_link"]
self.rewards.end_effector_orientation_tracking.params["asset_cfg"].body_names = ["ee_link"]
# override actions
self.actions.arm_action = mdp.JointPositionActionCfg(
asset_name="robot", joint_names=[".*"], scale=0.5, use_default_offset=True
)
# override command generator body
# end-effector is along x-direction
self.commands.ee_pose.body_name = "ee_link"
self.commands.ee_pose.ranges.pitch = (math.pi / 2, math.pi / 2)
@configclass
class UR10ReachEnvCfg_PLAY(UR10ReachEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# make a smaller scene for play
self.scene.num_envs = 50
self.scene.env_spacing = 2.5
# disable randomization for play
self.observations.policy.enable_corruption = False
| 1,823 | Python | 29.915254 | 99 | 0.665387 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/reach/config/ur_10/agents/rsl_rl_ppo_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit_tasks.utils.wrappers.rsl_rl import (
RslRlOnPolicyRunnerCfg,
RslRlPpoActorCriticCfg,
RslRlPpoAlgorithmCfg,
)
@configclass
class UR10ReachPPORunnerCfg(RslRlOnPolicyRunnerCfg):
num_steps_per_env = 24
max_iterations = 1000
save_interval = 50
experiment_name = "reach_ur10"
run_name = ""
resume = False
empirical_normalization = False
policy = RslRlPpoActorCriticCfg(
init_noise_std=1.0,
actor_hidden_dims=[64, 64],
critic_hidden_dims=[64, 64],
activation="elu",
)
algorithm = RslRlPpoAlgorithmCfg(
value_loss_coef=1.0,
use_clipped_value_loss=True,
clip_param=0.2,
entropy_coef=0.01,
num_learning_epochs=8,
num_mini_batches=4,
learning_rate=1.0e-3,
schedule="adaptive",
gamma=0.99,
lam=0.95,
desired_kl=0.01,
max_grad_norm=1.0,
)
| 1,105 | Python | 24.136363 | 58 | 0.638914 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/reach/config/ur_10/agents/rl_games_ppo_cfg.yaml | params:
seed: 42
# environment wrapper clipping
env:
clip_observations: 100.0
clip_actions: 100.0
algo:
name: a2c_continuous
model:
name: continuous_a2c_logstd
network:
name: actor_critic
separate: False
space:
continuous:
mu_activation: None
sigma_activation: None
mu_init:
name: default
sigma_init:
name: const_initializer
val: 0
fixed_sigma: True
mlp:
units: [64, 64]
activation: elu
d2rl: False
initializer:
name: default
regularizer:
name: None
load_checkpoint: False # flag which sets whether to load the checkpoint
load_path: '' # path to the checkpoint to load
config:
name: reach_ur10
env_name: rlgpu
device: 'cuda:0'
device_name: 'cuda:0'
multi_gpu: False
ppo: True
mixed_precision: False
normalize_input: True
normalize_value: True
value_bootstrap: True
num_actors: -1
reward_shaper:
scale_value: 1.0
normalize_advantage: True
gamma: 0.99
tau: 0.95
learning_rate: 1e-3
lr_schedule: adaptive
schedule_type: legacy
kl_threshold: 0.01
score_to_win: 10000
max_epochs: 1000
save_best_after: 200
save_frequency: 100
print_stats: True
grad_norm: 1.0
entropy_coef: 0.01
truncate_grads: True
e_clip: 0.2
horizon_length: 24
minibatch_size: 24576
mini_epochs: 5
critic_coef: 2
clip_value: True
clip_actions: False
bounds_loss_coef: 0.0001
| 1,565 | YAML | 18.822785 | 73 | 0.607029 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/cabinet/cabinet_env_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
from dataclasses import MISSING
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.actuators.actuator_cfg import ImplicitActuatorCfg
from omni.isaac.orbit.assets import ArticulationCfg, AssetBaseCfg
from omni.isaac.orbit.envs import RLTaskEnvCfg
from omni.isaac.orbit.managers import EventTermCfg as EventTerm
from omni.isaac.orbit.managers import ObservationGroupCfg as ObsGroup
from omni.isaac.orbit.managers import ObservationTermCfg as ObsTerm
from omni.isaac.orbit.managers import RewardTermCfg as RewTerm
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.managers import TerminationTermCfg as DoneTerm
from omni.isaac.orbit.scene import InteractiveSceneCfg
from omni.isaac.orbit.sensors import FrameTransformerCfg
from omni.isaac.orbit.sensors.frame_transformer import OffsetCfg
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
from . import mdp
##
# Pre-defined configs
##
from omni.isaac.orbit.markers.config import FRAME_MARKER_CFG # isort: skip
FRAME_MARKER_SMALL_CFG = FRAME_MARKER_CFG.copy()
FRAME_MARKER_SMALL_CFG.markers["frame"].scale = (0.10, 0.10, 0.10)
##
# Scene definition
##
@configclass
class CabinetSceneCfg(InteractiveSceneCfg):
"""Configuration for the cabinet scene with a robot and a cabinet.
    This is the abstract base implementation; the exact scene is defined in the derived classes,
    which need to set the robot and end-effector frames.
"""
    # robots: will be populated by agent env cfg
    robot: ArticulationCfg = MISSING
    # end-effector frame: will be populated by agent env cfg
ee_frame: FrameTransformerCfg = MISSING
cabinet = ArticulationCfg(
prim_path="{ENV_REGEX_NS}/Cabinet",
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/Sektion_Cabinet/sektion_cabinet_instanceable.usd",
activate_contact_sensors=False,
),
init_state=ArticulationCfg.InitialStateCfg(
pos=(0.8, 0, 0.4),
rot=(0.0, 0.0, 0.0, 1.0),
joint_pos={
"door_left_joint": 0.0,
"door_right_joint": 0.0,
"drawer_bottom_joint": 0.0,
"drawer_top_joint": 0.0,
},
),
actuators={
"drawers": ImplicitActuatorCfg(
joint_names_expr=["drawer_top_joint", "drawer_bottom_joint"],
effort_limit=87.0,
velocity_limit=100.0,
stiffness=10.0,
damping=1.0,
),
"doors": ImplicitActuatorCfg(
joint_names_expr=["door_left_joint", "door_right_joint"],
effort_limit=87.0,
velocity_limit=100.0,
stiffness=10.0,
damping=2.5,
),
},
)
# Frame definitions for the cabinet.
cabinet_frame = FrameTransformerCfg(
prim_path="{ENV_REGEX_NS}/Cabinet/sektion",
debug_vis=True,
visualizer_cfg=FRAME_MARKER_SMALL_CFG.replace(prim_path="/Visuals/CabinetFrameTransformer"),
target_frames=[
FrameTransformerCfg.FrameCfg(
prim_path="{ENV_REGEX_NS}/Cabinet/drawer_handle_top",
name="drawer_handle_top",
offset=OffsetCfg(
pos=(0.305, 0.0, 0.01),
rot=(0.5, 0.5, -0.5, -0.5), # align with end-effector frame
),
),
],
)
# plane
plane = AssetBaseCfg(
prim_path="/World/GroundPlane",
init_state=AssetBaseCfg.InitialStateCfg(),
spawn=sim_utils.GroundPlaneCfg(),
collision_group=-1,
)
# lights
light = AssetBaseCfg(
prim_path="/World/light",
spawn=sim_utils.DomeLightCfg(color=(0.75, 0.75, 0.75), intensity=3000.0),
)
##
# MDP settings
##
@configclass
class CommandsCfg:
"""Command terms for the MDP."""
null_command = mdp.NullCommandCfg()
@configclass
class ActionsCfg:
"""Action specifications for the MDP."""
body_joint_pos: mdp.JointPositionActionCfg = MISSING
finger_joint_pos: mdp.BinaryJointPositionActionCfg = MISSING
@configclass
class ObservationsCfg:
"""Observation specifications for the MDP."""
@configclass
class PolicyCfg(ObsGroup):
"""Observations for policy group."""
joint_pos = ObsTerm(func=mdp.joint_pos_rel)
joint_vel = ObsTerm(func=mdp.joint_vel_rel)
cabinet_joint_pos = ObsTerm(
func=mdp.joint_pos_rel,
params={"asset_cfg": SceneEntityCfg("cabinet", joint_names=["drawer_top_joint"])},
)
cabinet_joint_vel = ObsTerm(
func=mdp.joint_vel_rel,
params={"asset_cfg": SceneEntityCfg("cabinet", joint_names=["drawer_top_joint"])},
)
rel_ee_drawer_distance = ObsTerm(func=mdp.rel_ee_drawer_distance)
actions = ObsTerm(func=mdp.last_action)
def __post_init__(self):
self.enable_corruption = True
self.concatenate_terms = True
# observation groups
policy: PolicyCfg = PolicyCfg()
@configclass
class EventCfg:
"""Configuration for events."""
robot_physics_material = EventTerm(
func=mdp.randomize_rigid_body_material,
mode="startup",
params={
"asset_cfg": SceneEntityCfg("robot", body_names=".*"),
"static_friction_range": (0.8, 1.25),
"dynamic_friction_range": (0.8, 1.25),
"restitution_range": (0.0, 0.0),
"num_buckets": 16,
},
)
cabinet_physics_material = EventTerm(
func=mdp.randomize_rigid_body_material,
mode="startup",
params={
"asset_cfg": SceneEntityCfg("cabinet", body_names="drawer_handle_top"),
"static_friction_range": (1.0, 1.25),
"dynamic_friction_range": (1.25, 1.5),
"restitution_range": (0.0, 0.0),
"num_buckets": 16,
},
)
reset_all = EventTerm(func=mdp.reset_scene_to_default, mode="reset")
reset_robot_joints = EventTerm(
func=mdp.reset_joints_by_offset,
mode="reset",
params={
"position_range": (-0.1, 0.1),
"velocity_range": (0.0, 0.0),
},
)
@configclass
class RewardsCfg:
"""Reward terms for the MDP."""
# 1. Approach the handle
approach_ee_handle = RewTerm(func=mdp.approach_ee_handle, weight=2.0, params={"threshold": 0.2})
align_ee_handle = RewTerm(func=mdp.align_ee_handle, weight=0.5)
# 2. Grasp the handle
approach_gripper_handle = RewTerm(func=mdp.approach_gripper_handle, weight=5.0, params={"offset": MISSING})
align_grasp_around_handle = RewTerm(func=mdp.align_grasp_around_handle, weight=0.125)
grasp_handle = RewTerm(
func=mdp.grasp_handle,
weight=0.5,
params={
"threshold": 0.03,
"open_joint_pos": MISSING,
"asset_cfg": SceneEntityCfg("robot", joint_names=MISSING),
},
)
# 3. Open the drawer
open_drawer_bonus = RewTerm(
func=mdp.open_drawer_bonus,
weight=7.5,
params={"asset_cfg": SceneEntityCfg("cabinet", joint_names=["drawer_top_joint"])},
)
multi_stage_open_drawer = RewTerm(
func=mdp.multi_stage_open_drawer,
weight=1.0,
params={"asset_cfg": SceneEntityCfg("cabinet", joint_names=["drawer_top_joint"])},
)
# 4. Penalize actions for cosmetic reasons
action_rate_l2 = RewTerm(func=mdp.action_rate_l2, weight=-1e-2)
joint_vel = RewTerm(func=mdp.joint_vel_l2, weight=-0.0001)
@configclass
class TerminationsCfg:
"""Termination terms for the MDP."""
time_out = DoneTerm(func=mdp.time_out, time_out=True)
##
# Environment configuration
##
@configclass
class CabinetEnvCfg(RLTaskEnvCfg):
"""Configuration for the cabinet environment."""
# Scene settings
scene: CabinetSceneCfg = CabinetSceneCfg(num_envs=4096, env_spacing=2.0)
# Basic settings
observations: ObservationsCfg = ObservationsCfg()
actions: ActionsCfg = ActionsCfg()
commands: CommandsCfg = CommandsCfg()
# MDP settings
rewards: RewardsCfg = RewardsCfg()
terminations: TerminationsCfg = TerminationsCfg()
events: EventCfg = EventCfg()
def __post_init__(self):
"""Post initialization."""
# general settings
self.decimation = 1
self.episode_length_s = 8.0
self.viewer.eye = (-2.0, 2.0, 2.0)
self.viewer.lookat = (0.8, 0.0, 0.5)
# simulation settings
self.sim.dt = 1 / 60 # 60Hz
        self.sim.physx.bounce_threshold_velocity = 0.01
self.sim.physx.friction_correlation_distance = 0.00625
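        # note: with dt = 1/60 s and decimation = 1, the policy also runs at 60 Hz,
        # so the 8 s episode corresponds to 480 control steps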
| 9,095 | Python | 30.044368 | 111 | 0.625069 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/cabinet/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Manipulation environments to open drawers in a cabinet."""
| 185 | Python | 25.571425 | 61 | 0.745946 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/cabinet/mdp/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""This sub-module contains the functions that are specific to the cabinet environments."""
from omni.isaac.orbit.envs.mdp import * # noqa: F401, F403
from .observations import * # noqa: F401, F403
from .rewards import * # noqa: F401, F403
| 368 | Python | 29.749998 | 91 | 0.730978 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/cabinet/mdp/rewards.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import torch
from typing import TYPE_CHECKING
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.utils.math import matrix_from_quat
if TYPE_CHECKING:
from omni.isaac.orbit.envs import RLTaskEnv
def approach_ee_handle(env: RLTaskEnv, threshold: float) -> torch.Tensor:
r"""Reward the robot for reaching the drawer handle using inverse-square law.
It uses a piecewise function to reward the robot for reaching the handle.
.. math::
reward = \begin{cases}
2 * (1 / (1 + distance^2))^2 & \text{if } distance \leq threshold \\
(1 / (1 + distance^2))^2 & \text{otherwise}
\end{cases}
"""
ee_tcp_pos = env.scene["ee_frame"].data.target_pos_w[..., 0, :]
handle_pos = env.scene["cabinet_frame"].data.target_pos_w[..., 0, :]
# Compute the distance of the end-effector to the handle
distance = torch.norm(handle_pos - ee_tcp_pos, dim=-1, p=2)
# Reward the robot for reaching the handle
reward = 1.0 / (1.0 + distance**2)
reward = torch.pow(reward, 2)
return torch.where(distance <= threshold, 2 * reward, reward)
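# A quick sanity check of the reward shape above (a sketch, runnable without the simulator):
#
#   import torch
#   d = torch.tensor([0.05, 0.20, 0.50])
#   r = (1.0 / (1.0 + d**2)) ** 2
#   r = torch.where(d <= 0.2, 2 * r, r)  # -> tensor([1.9900, 1.8491, 0.6400])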
def align_ee_handle(env: RLTaskEnv) -> torch.Tensor:
"""Reward for aligning the end-effector with the handle.
The reward is based on the alignment of the gripper with the handle. It is computed as follows:
.. math::
        reward = 0.5 * (sign(align_z) * align_z^2 + sign(align_x) * align_x^2)
where :math:`align_z` is the dot product of the z direction of the gripper and the -x direction of the handle
and :math:`align_x` is the dot product of the x direction of the gripper and the -y direction of the handle.
"""
ee_tcp_quat = env.scene["ee_frame"].data.target_quat_w[..., 0, :]
handle_quat = env.scene["cabinet_frame"].data.target_quat_w[..., 0, :]
ee_tcp_rot_mat = matrix_from_quat(ee_tcp_quat)
handle_mat = matrix_from_quat(handle_quat)
# get current x and y direction of the handle
handle_x, handle_y = handle_mat[..., 0], handle_mat[..., 1]
# get current x and z direction of the gripper
ee_tcp_x, ee_tcp_z = ee_tcp_rot_mat[..., 0], ee_tcp_rot_mat[..., 2]
# make sure gripper aligns with the handle
# in this case, the z direction of the gripper should be close to the -x direction of the handle
# and the x direction of the gripper should be close to the -y direction of the handle
    # i.e., both dot products computed below should be large and positive
align_z = torch.bmm(ee_tcp_z.unsqueeze(1), -handle_x.unsqueeze(-1)).squeeze(-1).squeeze(-1)
align_x = torch.bmm(ee_tcp_x.unsqueeze(1), -handle_y.unsqueeze(-1)).squeeze(-1).squeeze(-1)
return 0.5 * (torch.sign(align_z) * align_z**2 + torch.sign(align_x) * align_x**2)
def align_grasp_around_handle(env: RLTaskEnv) -> torch.Tensor:
"""Bonus for correct hand orientation around the handle.
The correct hand orientation is when the left finger is above the handle and the right finger is below the handle.
"""
# Target object position: (num_envs, 3)
handle_pos = env.scene["cabinet_frame"].data.target_pos_w[..., 0, :]
# Fingertips position: (num_envs, n_fingertips, 3)
ee_fingertips_w = env.scene["ee_frame"].data.target_pos_w[..., 1:, :]
lfinger_pos = ee_fingertips_w[..., 0, :]
rfinger_pos = ee_fingertips_w[..., 1, :]
# Check if hand is in a graspable pose
is_graspable = (rfinger_pos[:, 2] < handle_pos[:, 2]) & (lfinger_pos[:, 2] > handle_pos[:, 2])
    # bonus if the left finger is above the drawer handle and the right finger is below it
return is_graspable
def approach_gripper_handle(env: RLTaskEnv, offset: float = 0.04) -> torch.Tensor:
"""Reward the robot's gripper reaching the drawer handle with the right pose.
This function returns the distance of fingertips to the handle when the fingers are in a grasping orientation
(i.e., the left finger is above the handle and the right finger is below the handle). Otherwise, it returns zero.
"""
# Target object position: (num_envs, 3)
handle_pos = env.scene["cabinet_frame"].data.target_pos_w[..., 0, :]
# Fingertips position: (num_envs, n_fingertips, 3)
ee_fingertips_w = env.scene["ee_frame"].data.target_pos_w[..., 1:, :]
lfinger_pos = ee_fingertips_w[..., 0, :]
rfinger_pos = ee_fingertips_w[..., 1, :]
# Compute the distance of each finger from the handle
lfinger_dist = torch.abs(lfinger_pos[:, 2] - handle_pos[:, 2])
rfinger_dist = torch.abs(rfinger_pos[:, 2] - handle_pos[:, 2])
# Check if hand is in a graspable pose
is_graspable = (rfinger_pos[:, 2] < handle_pos[:, 2]) & (lfinger_pos[:, 2] > handle_pos[:, 2])
return is_graspable * ((offset - lfinger_dist) + (offset - rfinger_dist))
def grasp_handle(env: RLTaskEnv, threshold: float, open_joint_pos: float, asset_cfg: SceneEntityCfg) -> torch.Tensor:
"""Reward for closing the fingers when being close to the handle.
The :attr:`threshold` is the distance from the handle at which the fingers should be closed.
The :attr:`open_joint_pos` is the joint position when the fingers are open.
Note:
It is assumed that zero joint position corresponds to the fingers being closed.
"""
ee_tcp_pos = env.scene["ee_frame"].data.target_pos_w[..., 0, :]
handle_pos = env.scene["cabinet_frame"].data.target_pos_w[..., 0, :]
gripper_joint_pos = env.scene[asset_cfg.name].data.joint_pos[:, asset_cfg.joint_ids]
distance = torch.norm(handle_pos - ee_tcp_pos, dim=-1, p=2)
is_close = distance <= threshold
return is_close * torch.sum(open_joint_pos - gripper_joint_pos, dim=-1)
def open_drawer_bonus(env: RLTaskEnv, asset_cfg: SceneEntityCfg) -> torch.Tensor:
"""Bonus for opening the drawer given by the joint position of the drawer.
The bonus is given when the drawer is open. If the grasp is around the handle, the bonus is doubled.
"""
drawer_pos = env.scene[asset_cfg.name].data.joint_pos[:, asset_cfg.joint_ids[0]]
is_graspable = align_grasp_around_handle(env).float()
return (is_graspable + 1.0) * drawer_pos
def multi_stage_open_drawer(env: RLTaskEnv, asset_cfg: SceneEntityCfg) -> torch.Tensor:
"""Multi-stage bonus for opening the drawer.
Depending on the drawer's position, the reward is given in three stages: easy, medium, and hard.
This helps the agent to learn to open the drawer in a controlled manner.
"""
drawer_pos = env.scene[asset_cfg.name].data.joint_pos[:, asset_cfg.joint_ids[0]]
is_graspable = align_grasp_around_handle(env).float()
open_easy = (drawer_pos > 0.01) * 0.5
open_medium = (drawer_pos > 0.2) * is_graspable
open_hard = (drawer_pos > 0.3) * is_graspable
return open_easy + open_medium + open_hard
| 6,848 | Python | 41.540372 | 118 | 0.665596 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/cabinet/mdp/observations.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import torch
from typing import TYPE_CHECKING
from omni.isaac.orbit.assets import ArticulationData
from omni.isaac.orbit.sensors import FrameTransformerData
if TYPE_CHECKING:
from omni.isaac.orbit.envs import RLTaskEnv
def rel_ee_object_distance(env: RLTaskEnv) -> torch.Tensor:
"""The distance between the end-effector and the object."""
ee_tf_data: FrameTransformerData = env.scene["ee_frame"].data
object_data: ArticulationData = env.scene["object"].data
return object_data.root_pos_w - ee_tf_data.target_pos_w[..., 0, :]
def rel_ee_drawer_distance(env: RLTaskEnv) -> torch.Tensor:
"""The distance between the end-effector and the object."""
ee_tf_data: FrameTransformerData = env.scene["ee_frame"].data
cabinet_tf_data: FrameTransformerData = env.scene["cabinet_frame"].data
return cabinet_tf_data.target_pos_w[..., 0, :] - ee_tf_data.target_pos_w[..., 0, :]
def fingertips_pos(env: RLTaskEnv) -> torch.Tensor:
"""The position of the fingertips relative to the environment origins."""
ee_tf_data: FrameTransformerData = env.scene["ee_frame"].data
fingertips_pos = ee_tf_data.target_pos_w[..., 1:, :] - env.scene.env_origins.unsqueeze(1)
return fingertips_pos.view(env.num_envs, -1)
def ee_pos(env: RLTaskEnv) -> torch.Tensor:
"""The position of the end-effector relative to the environment origins."""
ee_tf_data: FrameTransformerData = env.scene["ee_frame"].data
ee_pos = ee_tf_data.target_pos_w[..., 0, :] - env.scene.env_origins
return ee_pos
def ee_quat(env: RLTaskEnv) -> torch.Tensor:
"""The orientation of the end-effector in the environment frame."""
ee_tf_data: FrameTransformerData = env.scene["ee_frame"].data
ee_quat = ee_tf_data.target_quat_w[..., 0, :]
    # make the first element of the quaternion positive (q and -q represent the same
    # rotation, so fixing the sign keeps the observation representation unique)
ee_quat[ee_quat[:, 0] < 0] *= -1
return ee_quat
| 2,031 | Python | 34.034482 | 93 | 0.696701 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/cabinet/config/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Configurations for the cabinet environments."""
# We leave this file empty since we don't want to expose any configs in this package directly.
# We still need this file to import the "config" module in the parent package.
| 349 | Python | 33.999997 | 94 | 0.756447 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/cabinet/config/franka/ik_rel_env_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.controllers.differential_ik_cfg import DifferentialIKControllerCfg
from omni.isaac.orbit.envs.mdp.actions.actions_cfg import DifferentialInverseKinematicsActionCfg
from omni.isaac.orbit.utils import configclass
from . import joint_pos_env_cfg
##
# Pre-defined configs
##
from omni.isaac.orbit_assets.franka import FRANKA_PANDA_HIGH_PD_CFG # isort: skip
@configclass
class FrankaCabinetEnvCfg(joint_pos_env_cfg.FrankaCabinetEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# Set Franka as robot
# We switch here to a stiffer PD controller for IK tracking to be better.
self.scene.robot = FRANKA_PANDA_HIGH_PD_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
# Set actions for the specific robot type (franka)
self.actions.body_joint_pos = DifferentialInverseKinematicsActionCfg(
asset_name="robot",
joint_names=["panda_joint.*"],
body_name="panda_hand",
controller=DifferentialIKControllerCfg(command_type="pose", use_relative_mode=True, ik_method="dls"),
scale=0.5,
body_offset=DifferentialInverseKinematicsActionCfg.OffsetCfg(pos=[0.0, 0.0, 0.107]),
)
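        # note: "dls" selects the damped least-squares IK solver; the body offset moves
        # the controlled frame from the ``panda_hand`` link to a point between the
        # fingers (approximately the gripper's tool center point)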
@configclass
class FrankaCabinetEnvCfg_PLAY(FrankaCabinetEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# make a smaller scene for play
self.scene.num_envs = 50
self.scene.env_spacing = 2.5
# disable randomization for play
self.observations.policy.enable_corruption = False
| 1,742 | Python | 34.571428 | 113 | 0.685419 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/cabinet/config/franka/ik_abs_env_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.controllers.differential_ik_cfg import DifferentialIKControllerCfg
from omni.isaac.orbit.envs.mdp.actions.actions_cfg import DifferentialInverseKinematicsActionCfg
from omni.isaac.orbit.utils import configclass
from . import joint_pos_env_cfg
##
# Pre-defined configs
##
from omni.isaac.orbit_assets.franka import FRANKA_PANDA_HIGH_PD_CFG # isort: skip
@configclass
class FrankaCabinetEnvCfg(joint_pos_env_cfg.FrankaCabinetEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# Set Franka as robot
# We switch here to a stiffer PD controller for IK tracking to be better.
self.scene.robot = FRANKA_PANDA_HIGH_PD_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
# Set actions for the specific robot type (franka)
self.actions.body_joint_pos = DifferentialInverseKinematicsActionCfg(
asset_name="robot",
joint_names=["panda_joint.*"],
body_name="panda_hand",
controller=DifferentialIKControllerCfg(command_type="pose", use_relative_mode=False, ik_method="dls"),
body_offset=DifferentialInverseKinematicsActionCfg.OffsetCfg(pos=[0.0, 0.0, 0.107]),
)
@configclass
class FrankaCabinetEnvCfg_PLAY(FrankaCabinetEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# make a smaller scene for play
self.scene.num_envs = 50
self.scene.env_spacing = 2.5
# disable randomization for play
self.observations.policy.enable_corruption = False
| 1,720 | Python | 34.854166 | 114 | 0.690698 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/cabinet/config/franka/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
import gymnasium as gym
from . import agents, ik_abs_env_cfg, ik_rel_env_cfg, joint_pos_env_cfg
##
# Register Gym environments.
##
##
# Joint Position Control
##
gym.register(
id="Isaac-Open-Drawer-Franka-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
kwargs={
"env_cfg_entry_point": joint_pos_env_cfg.FrankaCabinetEnvCfg,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.CabinetPPORunnerCfg,
"rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
},
disable_env_checker=True,
)
gym.register(
id="Isaac-Open-Drawer-Franka-Play-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
kwargs={
"env_cfg_entry_point": joint_pos_env_cfg.FrankaCabinetEnvCfg_PLAY,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.CabinetPPORunnerCfg,
"rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
},
disable_env_checker=True,
)
##
# Inverse Kinematics - Absolute Pose Control
##
gym.register(
id="Isaac-Open-Drawer-Franka-IK-Abs-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
kwargs={
"env_cfg_entry_point": ik_abs_env_cfg.FrankaCabinetEnvCfg,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.CabinetPPORunnerCfg,
"rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
},
disable_env_checker=True,
)
gym.register(
id="Isaac-Open-Drawer-Franka-IK-Abs-Play-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
kwargs={
"env_cfg_entry_point": ik_abs_env_cfg.FrankaCabinetEnvCfg_PLAY,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.CabinetPPORunnerCfg,
"rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
},
disable_env_checker=True,
)
##
# Inverse Kinematics - Relative Pose Control
##
gym.register(
id="Isaac-Open-Drawer-Franka-IK-Rel-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
kwargs={
"env_cfg_entry_point": ik_rel_env_cfg.FrankaCabinetEnvCfg,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.CabinetPPORunnerCfg,
"rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
},
disable_env_checker=True,
)
gym.register(
id="Isaac-Open-Drawer-Franka-IK-Rel-Play-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
kwargs={
"env_cfg_entry_point": ik_rel_env_cfg.FrankaCabinetEnvCfg_PLAY,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.CabinetPPORunnerCfg,
"rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
},
disable_env_checker=True,
)
| 2,714 | Python | 28.193548 | 79 | 0.662491 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/cabinet/config/franka/joint_pos_env_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.sensors import FrameTransformerCfg
from omni.isaac.orbit.sensors.frame_transformer.frame_transformer_cfg import OffsetCfg
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit_tasks.manipulation.cabinet import mdp
from omni.isaac.orbit_tasks.manipulation.cabinet.cabinet_env_cfg import CabinetEnvCfg
##
# Pre-defined configs
##
from omni.isaac.orbit_assets.franka import FRANKA_PANDA_CFG # isort: skip
from omni.isaac.orbit_tasks.manipulation.cabinet.cabinet_env_cfg import FRAME_MARKER_SMALL_CFG # isort: skip
@configclass
class FrankaCabinetEnvCfg(CabinetEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# Set franka as robot
self.scene.robot = FRANKA_PANDA_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
# Set Actions for the specific robot type (franka)
self.actions.body_joint_pos = mdp.JointPositionActionCfg(
asset_name="robot",
joint_names=["panda_joint.*"],
scale=1.0,
use_default_offset=True,
)
self.actions.finger_joint_pos = mdp.BinaryJointPositionActionCfg(
asset_name="robot",
joint_names=["panda_finger.*"],
open_command_expr={"panda_finger_.*": 0.04},
close_command_expr={"panda_finger_.*": 0.0},
)
# Listens to the required transforms
        # IMPORTANT: the order of the frames in the list matters. The first frame is the
        # tool center point (TCP); the other frames are the fingers.
self.scene.ee_frame = FrameTransformerCfg(
prim_path="{ENV_REGEX_NS}/Robot/panda_link0",
debug_vis=False,
visualizer_cfg=FRAME_MARKER_SMALL_CFG.replace(prim_path="/Visuals/EndEffectorFrameTransformer"),
target_frames=[
FrameTransformerCfg.FrameCfg(
prim_path="{ENV_REGEX_NS}/Robot/panda_hand",
name="ee_tcp",
offset=OffsetCfg(
pos=(0.0, 0.0, 0.1034),
),
),
FrameTransformerCfg.FrameCfg(
prim_path="{ENV_REGEX_NS}/Robot/panda_leftfinger",
name="tool_leftfinger",
offset=OffsetCfg(
pos=(0.0, 0.0, 0.046),
),
),
FrameTransformerCfg.FrameCfg(
prim_path="{ENV_REGEX_NS}/Robot/panda_rightfinger",
name="tool_rightfinger",
offset=OffsetCfg(
pos=(0.0, 0.0, 0.046),
),
),
],
)
# override rewards
self.rewards.approach_gripper_handle.params["offset"] = 0.04
self.rewards.grasp_handle.params["open_joint_pos"] = 0.04
self.rewards.grasp_handle.params["asset_cfg"].joint_names = ["panda_finger_.*"]
@configclass
class FrankaCabinetEnvCfg_PLAY(FrankaCabinetEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# make a smaller scene for play
self.scene.num_envs = 50
self.scene.env_spacing = 2.5
# disable randomization for play
self.observations.policy.enable_corruption = False
| 3,464 | Python | 37.076923 | 117 | 0.58776 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/cabinet/config/franka/agents/rsl_rl_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit_tasks.utils.wrappers.rsl_rl import (
RslRlOnPolicyRunnerCfg,
RslRlPpoActorCriticCfg,
RslRlPpoAlgorithmCfg,
)
@configclass
class CabinetPPORunnerCfg(RslRlOnPolicyRunnerCfg):
num_steps_per_env = 96
max_iterations = 400
save_interval = 50
experiment_name = "franka_open_drawer"
empirical_normalization = False
policy = RslRlPpoActorCriticCfg(
init_noise_std=1.0,
actor_hidden_dims=[256, 128, 64],
critic_hidden_dims=[256, 128, 64],
activation="elu",
)
algorithm = RslRlPpoAlgorithmCfg(
value_loss_coef=1.0,
use_clipped_value_loss=True,
clip_param=0.2,
entropy_coef=1e-3,
num_learning_epochs=5,
num_mini_batches=4,
learning_rate=5.0e-4,
schedule="adaptive",
gamma=0.99,
lam=0.95,
desired_kl=0.02,
max_grad_norm=1.0,
)
| 1,085 | Python | 24.857142 | 58 | 0.645161 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/cabinet/config/franka/agents/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from . import rsl_rl_cfg # noqa: F401, F403
| 168 | Python | 23.142854 | 56 | 0.720238 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/cabinet/config/franka/agents/rl_games_ppo_cfg.yaml | params:
seed: 42
# environment wrapper clipping
env:
clip_observations: 5.0
clip_actions: 1.0
algo:
name: a2c_continuous
model:
name: continuous_a2c_logstd
network:
name: actor_critic
separate: False
space:
continuous:
mu_activation: None
sigma_activation: None
mu_init:
name: default
sigma_init:
name: const_initializer
val: 0
fixed_sigma: True
mlp:
units: [256, 128, 64]
activation: elu
d2rl: False
initializer:
name: default
regularizer:
name: None
load_checkpoint: False
load_path: ''
config:
name: franka_open_drawer
env_name: rlgpu
device: 'cuda:0'
device_name: 'cuda:0'
multi_gpu: False
ppo: True
mixed_precision: False
normalize_input: False
normalize_value: False
num_actors: -1 # configured from the script (based on num_envs)
reward_shaper:
scale_value: 1
normalize_advantage: False
gamma: 0.99
tau: 0.95
learning_rate: 5e-4
lr_schedule: adaptive
kl_threshold: 0.008
score_to_win: 200
max_epochs: 400
save_best_after: 50
save_frequency: 50
print_stats: True
grad_norm: 1.0
entropy_coef: 0.001
truncate_grads: True
e_clip: 0.2
horizon_length: 96
minibatch_size: 4096
mini_epochs: 5
critic_coef: 4
clip_value: True
seq_length: 4
bounds_loss_coef: 0.0001
| 1,482 | YAML | 18.25974 | 68 | 0.597841 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/lift/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Configurations for the object lift environments."""
# We leave this file empty since we don't want to expose any configs in this package directly.
# We still need this file to import the "config" module in the parent package.
| 353 | Python | 34.399997 | 94 | 0.756374 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/lift/lift_env_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
from dataclasses import MISSING
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.assets import ArticulationCfg, AssetBaseCfg, RigidObjectCfg
from omni.isaac.orbit.envs import RLTaskEnvCfg
from omni.isaac.orbit.managers import CurriculumTermCfg as CurrTerm
from omni.isaac.orbit.managers import EventTermCfg as EventTerm
from omni.isaac.orbit.managers import ObservationGroupCfg as ObsGroup
from omni.isaac.orbit.managers import ObservationTermCfg as ObsTerm
from omni.isaac.orbit.managers import RewardTermCfg as RewTerm
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.managers import TerminationTermCfg as DoneTerm
from omni.isaac.orbit.scene import InteractiveSceneCfg
from omni.isaac.orbit.sensors.frame_transformer.frame_transformer_cfg import FrameTransformerCfg
from omni.isaac.orbit.sim.spawners.from_files.from_files_cfg import GroundPlaneCfg, UsdFileCfg
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
from . import mdp
##
# Scene definition
##
@configclass
class ObjectTableSceneCfg(InteractiveSceneCfg):
"""Configuration for the lift scene with a robot and a object.
This is the abstract base implementation, the exact scene is defined in the derived classes
which need to set the target object, robot and end-effector frames
"""
# robots: will be populated by agent env cfg
robot: ArticulationCfg = MISSING
# end-effector sensor: will be populated by agent env cfg
ee_frame: FrameTransformerCfg = MISSING
# target object: will be populated by agent env cfg
object: RigidObjectCfg = MISSING
# Table
table = AssetBaseCfg(
prim_path="{ENV_REGEX_NS}/Table",
init_state=AssetBaseCfg.InitialStateCfg(pos=[0.5, 0, 0], rot=[0.707, 0, 0, 0.707]),
spawn=UsdFileCfg(usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/Mounts/SeattleLabTable/table_instanceable.usd"),
)
# plane
plane = AssetBaseCfg(
prim_path="/World/GroundPlane",
init_state=AssetBaseCfg.InitialStateCfg(pos=[0, 0, -1.05]),
spawn=GroundPlaneCfg(),
)
# lights
light = AssetBaseCfg(
prim_path="/World/light",
spawn=sim_utils.DomeLightCfg(color=(0.75, 0.75, 0.75), intensity=3000.0),
)
##
# MDP settings
##
@configclass
class CommandsCfg:
"""Command terms for the MDP."""
object_pose = mdp.UniformPoseCommandCfg(
asset_name="robot",
body_name=MISSING, # will be set by agent env cfg
resampling_time_range=(5.0, 5.0),
debug_vis=True,
ranges=mdp.UniformPoseCommandCfg.Ranges(
pos_x=(0.4, 0.6), pos_y=(-0.25, 0.25), pos_z=(0.25, 0.5), roll=(0.0, 0.0), pitch=(0.0, 0.0), yaw=(0.0, 0.0)
),
)
@configclass
class ActionsCfg:
"""Action specifications for the MDP."""
# will be set by agent env cfg
body_joint_pos: mdp.JointPositionActionCfg = MISSING
finger_joint_pos: mdp.BinaryJointPositionActionCfg = MISSING
@configclass
class ObservationsCfg:
"""Observation specifications for the MDP."""
@configclass
class PolicyCfg(ObsGroup):
"""Observations for policy group."""
joint_pos = ObsTerm(func=mdp.joint_pos_rel)
joint_vel = ObsTerm(func=mdp.joint_vel_rel)
object_position = ObsTerm(func=mdp.object_position_in_robot_root_frame)
target_object_position = ObsTerm(func=mdp.generated_commands, params={"command_name": "object_pose"})
actions = ObsTerm(func=mdp.last_action)
def __post_init__(self):
self.enable_corruption = True
self.concatenate_terms = True
# observation groups
policy: PolicyCfg = PolicyCfg()
@configclass
class EventCfg:
"""Configuration for events."""
reset_all = EventTerm(func=mdp.reset_scene_to_default, mode="reset")
reset_object_position = EventTerm(
func=mdp.reset_root_state_uniform,
mode="reset",
params={
"pose_range": {"x": (-0.1, 0.1), "y": (-0.25, 0.25), "z": (0.0, 0.0)},
"velocity_range": {},
"asset_cfg": SceneEntityCfg("object", body_names="Object"),
},
)
@configclass
class RewardsCfg:
"""Reward terms for the MDP."""
reaching_object = RewTerm(func=mdp.object_ee_distance, params={"std": 0.1}, weight=1.0)
lifting_object = RewTerm(func=mdp.object_is_lifted, params={"minimal_height": 0.06}, weight=15.0)
object_goal_tracking = RewTerm(
func=mdp.object_goal_distance,
params={"std": 0.3, "minimal_height": 0.06, "command_name": "object_pose"},
weight=16.0,
)
object_goal_tracking_fine_grained = RewTerm(
func=mdp.object_goal_distance,
params={"std": 0.05, "minimal_height": 0.06, "command_name": "object_pose"},
weight=5.0,
)
# action penalty
action_rate = RewTerm(func=mdp.action_rate_l2, weight=-1e-3)
joint_vel = RewTerm(
func=mdp.joint_vel_l2,
weight=-1e-4,
params={"asset_cfg": SceneEntityCfg("robot")},
)
@configclass
class TerminationsCfg:
"""Termination terms for the MDP."""
time_out = DoneTerm(func=mdp.time_out, time_out=True)
object_dropping = DoneTerm(
func=mdp.base_height, params={"minimum_height": -0.05, "asset_cfg": SceneEntityCfg("object")}
)
@configclass
class CurriculumCfg:
"""Curriculum terms for the MDP."""
action_rate = CurrTerm(
func=mdp.modify_reward_weight, params={"term_name": "action_rate", "weight": -1e-1, "num_steps": 10000}
)
joint_vel = CurrTerm(
func=mdp.modify_reward_weight, params={"term_name": "joint_vel", "weight": -1e-1, "num_steps": 10000}
)
##
# Environment configuration
##
@configclass
class LiftEnvCfg(RLTaskEnvCfg):
"""Configuration for the lifting environment."""
# Scene settings
scene: ObjectTableSceneCfg = ObjectTableSceneCfg(num_envs=4096, env_spacing=2.5)
# Basic settings
observations: ObservationsCfg = ObservationsCfg()
actions: ActionsCfg = ActionsCfg()
commands: CommandsCfg = CommandsCfg()
# MDP settings
rewards: RewardsCfg = RewardsCfg()
terminations: TerminationsCfg = TerminationsCfg()
events: EventCfg = EventCfg()
curriculum: CurriculumCfg = CurriculumCfg()
def __post_init__(self):
"""Post initialization."""
# general settings
self.decimation = 2
self.episode_length_s = 5.0
# simulation settings
self.sim.dt = 0.01 # 100Hz
        self.sim.physx.bounce_threshold_velocity = 0.01
self.sim.physx.gpu_found_lost_aggregate_pairs_capacity = 1024 * 1024 * 4
self.sim.physx.gpu_total_aggregate_pairs_capacity = 16 * 1024
self.sim.physx.friction_correlation_distance = 0.00625
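        # note: with dt = 0.01 s and decimation = 2, the policy runs at 50 Hz,
        # so the 5 s episode corresponds to 250 control steps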
| 6,999 | Python | 30.111111 | 119 | 0.673096 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/lift/mdp/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""This sub-module contains the functions that are specific to the lift environments."""
from omni.isaac.orbit.envs.mdp import * # noqa: F401, F403
from .observations import * # noqa: F401, F403
from .rewards import * # noqa: F401, F403
from .terminations import * # noqa: F401, F403
| 413 | Python | 30.846151 | 88 | 0.726392 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/lift/mdp/rewards.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import torch
from typing import TYPE_CHECKING
from omni.isaac.orbit.assets import RigidObject
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.sensors import FrameTransformer
from omni.isaac.orbit.utils.math import combine_frame_transforms
if TYPE_CHECKING:
from omni.isaac.orbit.envs import RLTaskEnv
def object_is_lifted(
env: RLTaskEnv, minimal_height: float, object_cfg: SceneEntityCfg = SceneEntityCfg("object")
) -> torch.Tensor:
"""Reward the agent for lifting the object above the minimal height."""
object: RigidObject = env.scene[object_cfg.name]
return torch.where(object.data.root_pos_w[:, 2] > minimal_height, 1.0, 0.0)
def object_ee_distance(
env: RLTaskEnv,
std: float,
object_cfg: SceneEntityCfg = SceneEntityCfg("object"),
ee_frame_cfg: SceneEntityCfg = SceneEntityCfg("ee_frame"),
) -> torch.Tensor:
"""Reward the agent for reaching the object using tanh-kernel."""
# extract the used quantities (to enable type-hinting)
object: RigidObject = env.scene[object_cfg.name]
ee_frame: FrameTransformer = env.scene[ee_frame_cfg.name]
# Target object position: (num_envs, 3)
cube_pos_w = object.data.root_pos_w
# End-effector position: (num_envs, 3)
ee_w = ee_frame.data.target_pos_w[..., 0, :]
# Distance of the end-effector to the object: (num_envs,)
object_ee_distance = torch.norm(cube_pos_w - ee_w, dim=1)
return 1 - torch.tanh(object_ee_distance / std)
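# A quick look at the tanh kernel above (a sketch, runnable without the simulator):
#
#   import torch
#   d = torch.tensor([0.00, 0.06, 0.20])
#   1 - torch.tanh(d / 0.1)  # -> tensor([1.0000, 0.4630, 0.0360])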
def object_goal_distance(
env: RLTaskEnv,
std: float,
minimal_height: float,
command_name: str,
robot_cfg: SceneEntityCfg = SceneEntityCfg("robot"),
object_cfg: SceneEntityCfg = SceneEntityCfg("object"),
) -> torch.Tensor:
"""Reward the agent for tracking the goal pose using tanh-kernel."""
# extract the used quantities (to enable type-hinting)
robot: RigidObject = env.scene[robot_cfg.name]
object: RigidObject = env.scene[object_cfg.name]
command = env.command_manager.get_command(command_name)
# compute the desired position in the world frame
des_pos_b = command[:, :3]
des_pos_w, _ = combine_frame_transforms(robot.data.root_state_w[:, :3], robot.data.root_state_w[:, 3:7], des_pos_b)
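    # (combine_frame_transforms maps the commanded position from the robot's root frame
    # to the world frame: des_pos_w = t_wb + R_wb @ des_pos_b)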
    # distance of the object to the goal position: (num_envs,)
    distance = torch.norm(des_pos_w - object.data.root_pos_w[:, :3], dim=1)
    # reward only if the object is lifted above the minimal height
return (object.data.root_pos_w[:, 2] > minimal_height) * (1 - torch.tanh(distance / std))
| 2,683 | Python | 38.470588 | 119 | 0.701826 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/lift/mdp/terminations.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Common functions that can be used to activate certain terminations for the lift task.
The functions can be passed to the :class:`omni.isaac.orbit.managers.TerminationTermCfg` object to enable
the termination introduced by the function.
"""
from __future__ import annotations
import torch
from typing import TYPE_CHECKING
from omni.isaac.orbit.assets import RigidObject
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.utils.math import combine_frame_transforms
if TYPE_CHECKING:
from omni.isaac.orbit.envs import RLTaskEnv
def object_reached_goal(
env: RLTaskEnv,
command_name: str = "object_pose",
threshold: float = 0.02,
robot_cfg: SceneEntityCfg = SceneEntityCfg("robot"),
object_cfg: SceneEntityCfg = SceneEntityCfg("object"),
) -> torch.Tensor:
"""Termination condition for the object reaching the goal position.
Args:
env: The environment.
command_name: The name of the command that is used to control the object.
threshold: The threshold for the object to reach the goal position. Defaults to 0.02.
robot_cfg: The robot configuration. Defaults to SceneEntityCfg("robot").
object_cfg: The object configuration. Defaults to SceneEntityCfg("object").
"""
# extract the used quantities (to enable type-hinting)
robot: RigidObject = env.scene[robot_cfg.name]
object: RigidObject = env.scene[object_cfg.name]
command = env.command_manager.get_command(command_name)
# compute the desired position in the world frame
des_pos_b = command[:, :3]
des_pos_w, _ = combine_frame_transforms(robot.data.root_state_w[:, :3], robot.data.root_state_w[:, 3:7], des_pos_b)
    # distance of the object to the goal position: (num_envs,)
    distance = torch.norm(des_pos_w - object.data.root_pos_w[:, :3], dim=1)
    # terminate when the object is within the threshold distance of the goal
return distance < threshold
| 2,055 | Python | 37.074073 | 119 | 0.722141 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/lift/mdp/observations.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import torch
from typing import TYPE_CHECKING
from omni.isaac.orbit.assets import RigidObject
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.utils.math import subtract_frame_transforms
if TYPE_CHECKING:
from omni.isaac.orbit.envs import RLTaskEnv
def object_position_in_robot_root_frame(
env: RLTaskEnv,
robot_cfg: SceneEntityCfg = SceneEntityCfg("robot"),
object_cfg: SceneEntityCfg = SceneEntityCfg("object"),
) -> torch.Tensor:
"""The position of the object in the robot's root frame."""
robot: RigidObject = env.scene[robot_cfg.name]
object: RigidObject = env.scene[object_cfg.name]
object_pos_w = object.data.root_pos_w[:, :3]
object_pos_b, _ = subtract_frame_transforms(
robot.data.root_state_w[:, :3], robot.data.root_state_w[:, 3:7], object_pos_w
)
return object_pos_b
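# For reference, ``subtract_frame_transforms`` computes p_b = R_wb^T @ (p_w - t_wb),
# where (t_wb, R_wb) is the robot's root pose in the world frame.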
| 1,020 | Python | 30.906249 | 85 | 0.721569 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/lift/config/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Configurations for the object lift environments."""
# We leave this file empty since we don't want to expose any configs in this package directly.
# We still need this file to import the "config" module in the parent package.
| 353 | Python | 34.399997 | 94 | 0.756374 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/lift/config/franka/ik_rel_env_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.controllers.differential_ik_cfg import DifferentialIKControllerCfg
from omni.isaac.orbit.envs.mdp.actions.actions_cfg import DifferentialInverseKinematicsActionCfg
from omni.isaac.orbit.utils import configclass
from . import joint_pos_env_cfg
##
# Pre-defined configs
##
from omni.isaac.orbit_assets.franka import FRANKA_PANDA_HIGH_PD_CFG # isort: skip
@configclass
class FrankaCubeLiftEnvCfg(joint_pos_env_cfg.FrankaCubeLiftEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# Set Franka as robot
# We switch here to a stiffer PD controller for IK tracking to be better.
self.scene.robot = FRANKA_PANDA_HIGH_PD_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
# Set actions for the specific robot type (franka)
self.actions.body_joint_pos = DifferentialInverseKinematicsActionCfg(
asset_name="robot",
joint_names=["panda_joint.*"],
body_name="panda_hand",
controller=DifferentialIKControllerCfg(command_type="pose", use_relative_mode=True, ik_method="dls"),
scale=0.5,
body_offset=DifferentialInverseKinematicsActionCfg.OffsetCfg(pos=[0.0, 0.0, 0.107]),
)
@configclass
class FrankaCubeLiftEnvCfg_PLAY(FrankaCubeLiftEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# make a smaller scene for play
self.scene.num_envs = 50
self.scene.env_spacing = 2.5
# disable randomization for play
self.observations.policy.enable_corruption = False
| 1,746 | Python | 34.653061 | 113 | 0.68614 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/lift/config/franka/ik_abs_env_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.controllers.differential_ik_cfg import DifferentialIKControllerCfg
from omni.isaac.orbit.envs.mdp.actions.actions_cfg import DifferentialInverseKinematicsActionCfg
from omni.isaac.orbit.utils import configclass
from . import joint_pos_env_cfg
##
# Pre-defined configs
##
from omni.isaac.orbit_assets.franka import FRANKA_PANDA_HIGH_PD_CFG # isort: skip
@configclass
class FrankaCubeLiftEnvCfg(joint_pos_env_cfg.FrankaCubeLiftEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# Set Franka as robot
# We switch here to a stiffer PD controller for IK tracking to be better.
self.scene.robot = FRANKA_PANDA_HIGH_PD_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
# Set actions for the specific robot type (franka)
self.actions.body_joint_pos = DifferentialInverseKinematicsActionCfg(
asset_name="robot",
joint_names=["panda_joint.*"],
body_name="panda_hand",
controller=DifferentialIKControllerCfg(command_type="pose", use_relative_mode=False, ik_method="dls"),
body_offset=DifferentialInverseKinematicsActionCfg.OffsetCfg(pos=[0.0, 0.0, 0.107]),
)
@configclass
class FrankaCubeLiftEnvCfg_PLAY(FrankaCubeLiftEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# make a smaller scene for play
self.scene.num_envs = 50
self.scene.env_spacing = 2.5
# disable randomization for play
self.observations.policy.enable_corruption = False
| 1,724 | Python | 34.937499 | 114 | 0.691415 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/lift/config/franka/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
import gymnasium as gym
import os
from . import agents, ik_abs_env_cfg, ik_rel_env_cfg, joint_pos_env_cfg
##
# Register Gym environments.
##
##
# Joint Position Control
##
gym.register(
id="Isaac-Lift-Cube-Franka-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
kwargs={
"env_cfg_entry_point": joint_pos_env_cfg.FrankaCubeLiftEnvCfg,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.LiftCubePPORunnerCfg,
"skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
},
disable_env_checker=True,
)
gym.register(
id="Isaac-Lift-Cube-Franka-Play-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
kwargs={
"env_cfg_entry_point": joint_pos_env_cfg.FrankaCubeLiftEnvCfg_PLAY,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.LiftCubePPORunnerCfg,
"skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
},
disable_env_checker=True,
)
##
# Inverse Kinematics - Absolute Pose Control
##
gym.register(
id="Isaac-Lift-Cube-Franka-IK-Abs-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
kwargs={
"env_cfg_entry_point": ik_abs_env_cfg.FrankaCubeLiftEnvCfg,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.LiftCubePPORunnerCfg,
"skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
},
disable_env_checker=True,
)
gym.register(
id="Isaac-Lift-Cube-Franka-IK-Abs-Play-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
kwargs={
"env_cfg_entry_point": ik_abs_env_cfg.FrankaCubeLiftEnvCfg_PLAY,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.LiftCubePPORunnerCfg,
"skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
},
disable_env_checker=True,
)
##
# Inverse Kinematics - Relative Pose Control
##
gym.register(
id="Isaac-Lift-Cube-Franka-IK-Rel-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
kwargs={
"env_cfg_entry_point": ik_rel_env_cfg.FrankaCubeLiftEnvCfg,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.LiftCubePPORunnerCfg,
"skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
"robomimic_bc_cfg_entry_point": os.path.join(agents.__path__[0], "robomimic/bc.json"),
},
disable_env_checker=True,
)
gym.register(
id="Isaac-Lift-Cube-Franka-IK-Rel-Play-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
kwargs={
"env_cfg_entry_point": ik_rel_env_cfg.FrankaCubeLiftEnvCfg_PLAY,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.LiftCubePPORunnerCfg,
"skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
},
disable_env_checker=True,
)
| 2,769 | Python | 28.784946 | 94 | 0.660888 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/lift/config/franka/joint_pos_env_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.assets import RigidObjectCfg
from omni.isaac.orbit.sensors import FrameTransformerCfg
from omni.isaac.orbit.sensors.frame_transformer.frame_transformer_cfg import OffsetCfg
from omni.isaac.orbit.sim.schemas.schemas_cfg import RigidBodyPropertiesCfg
from omni.isaac.orbit.sim.spawners.from_files.from_files_cfg import UsdFileCfg
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
from omni.isaac.orbit_tasks.manipulation.lift import mdp
from omni.isaac.orbit_tasks.manipulation.lift.lift_env_cfg import LiftEnvCfg
##
# Pre-defined configs
##
from omni.isaac.orbit.markers.config import FRAME_MARKER_CFG # isort: skip
from omni.isaac.orbit_assets.franka import FRANKA_PANDA_CFG # isort: skip
@configclass
class FrankaCubeLiftEnvCfg(LiftEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# Set Franka as robot
self.scene.robot = FRANKA_PANDA_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
# Set actions for the specific robot type (franka)
self.actions.body_joint_pos = mdp.JointPositionActionCfg(
asset_name="robot", joint_names=["panda_joint.*"], scale=0.5, use_default_offset=True
)
self.actions.finger_joint_pos = mdp.BinaryJointPositionActionCfg(
asset_name="robot",
joint_names=["panda_finger.*"],
open_command_expr={"panda_finger_.*": 0.04},
close_command_expr={"panda_finger_.*": 0.0},
)
# Set the body name for the end effector
self.commands.object_pose.body_name = "panda_hand"
# Set Cube as object
self.scene.object = RigidObjectCfg(
prim_path="{ENV_REGEX_NS}/Object",
init_state=RigidObjectCfg.InitialStateCfg(pos=[0.5, 0, 0.055], rot=[1, 0, 0, 0]),
spawn=UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/Blocks/DexCube/dex_cube_instanceable.usd",
scale=(0.8, 0.8, 0.8),
rigid_props=RigidBodyPropertiesCfg(
solver_position_iteration_count=16,
solver_velocity_iteration_count=1,
max_angular_velocity=1000.0,
max_linear_velocity=1000.0,
max_depenetration_velocity=5.0,
disable_gravity=False,
),
),
)
# Listens to the required transforms
marker_cfg = FRAME_MARKER_CFG.copy()
marker_cfg.markers["frame"].scale = (0.1, 0.1, 0.1)
marker_cfg.prim_path = "/Visuals/FrameTransformer"
self.scene.ee_frame = FrameTransformerCfg(
prim_path="{ENV_REGEX_NS}/Robot/panda_link0",
debug_vis=False,
visualizer_cfg=marker_cfg,
target_frames=[
FrameTransformerCfg.FrameCfg(
prim_path="{ENV_REGEX_NS}/Robot/panda_hand",
name="end_effector",
offset=OffsetCfg(
pos=[0.0, 0.0, 0.1034],
),
),
],
)
@configclass
class FrankaCubeLiftEnvCfg_PLAY(FrankaCubeLiftEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# make a smaller scene for play
self.scene.num_envs = 50
self.scene.env_spacing = 2.5
# disable randomization for play
self.observations.policy.enable_corruption = False
| 3,644 | Python | 37.776595 | 97 | 0.613886 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/lift/config/franka/agents/rsl_rl_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit_tasks.utils.wrappers.rsl_rl import (
RslRlOnPolicyRunnerCfg,
RslRlPpoActorCriticCfg,
RslRlPpoAlgorithmCfg,
)
@configclass
class LiftCubePPORunnerCfg(RslRlOnPolicyRunnerCfg):
num_steps_per_env = 24
max_iterations = 1500
save_interval = 50
experiment_name = "franka_lift"
empirical_normalization = False
policy = RslRlPpoActorCriticCfg(
init_noise_std=1.0,
actor_hidden_dims=[256, 128, 64],
critic_hidden_dims=[256, 128, 64],
activation="elu",
)
algorithm = RslRlPpoAlgorithmCfg(
value_loss_coef=1.0,
use_clipped_value_loss=True,
clip_param=0.2,
entropy_coef=0.006,
num_learning_epochs=5,
num_mini_batches=4,
learning_rate=1.0e-4,
schedule="adaptive",
gamma=0.98,
lam=0.95,
desired_kl=0.01,
max_grad_norm=1.0,
)
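# -- Illustrative usage sketch (not part of the original file) --
# Training scripts normally resolve this config through the "rsl_rl_cfg_entry_point"
# kwarg of the gym registration; it can also be instantiated directly, e.g. to
# inspect or override fields before a run (the override below is hypothetical).
if __name__ == "__main__":
    runner_cfg = LiftCubePPORunnerCfg()
    runner_cfg.max_iterations = 3000  # hypothetical override
    print(runner_cfg.experiment_name, runner_cfg.algorithm.learning_rate)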
| 1,081 | Python | 24.761904 | 58 | 0.644773 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/lift/config/franka/agents/skrl_ppo_cfg.yaml | seed: 42
# Models are instantiated using skrl's model instantiator utility
# https://skrl.readthedocs.io/en/develop/modules/skrl.utils.model_instantiators.html
models:
separate: True
policy: # see skrl.utils.model_instantiators.gaussian_model for parameter details
clip_actions: False
clip_log_std: True
initial_log_std: 0
min_log_std: -20.0
max_log_std: 2.0
input_shape: "Shape.STATES"
hiddens: [256, 128, 64]
hidden_activation: ["elu", "elu", "elu"]
output_shape: "Shape.ACTIONS"
output_activation: ""
output_scale: 1.0
value: # see skrl.utils.model_instantiators.deterministic_model for parameter details
clip_actions: False
input_shape: "Shape.STATES"
hiddens: [256, 128, 64]
hidden_activation: ["elu", "elu", "elu"]
output_shape: "Shape.ONE"
output_activation: ""
output_scale: 1.0
# PPO agent configuration (field names are from PPO_DEFAULT_CONFIG)
# https://skrl.readthedocs.io/en/latest/modules/skrl.agents.ppo.html
agent:
rollouts: 16
learning_epochs: 8
mini_batches: 8
discount_factor: 0.99
lambda: 0.95
learning_rate: 3.e-4
learning_rate_scheduler: "KLAdaptiveLR"
learning_rate_scheduler_kwargs:
kl_threshold: 0.008
state_preprocessor: "RunningStandardScaler"
state_preprocessor_kwargs: null
value_preprocessor: "RunningStandardScaler"
value_preprocessor_kwargs: null
random_timesteps: 0
learning_starts: 0
grad_norm_clip: 1.0
ratio_clip: 0.2
value_clip: 0.2
clip_predicted_values: True
entropy_loss_scale: 0.0
value_loss_scale: 2.0
kl_threshold: 0
rewards_shaper_scale: 0.01
# logging and checkpoint
experiment:
directory: "franka_lift"
experiment_name: ""
write_interval: 120
checkpoint_interval: 1200
# Sequential trainer
# https://skrl.readthedocs.io/en/latest/modules/skrl.trainers.sequential.html
trainer:
timesteps: 24000
| 1,895 | YAML | 27.298507 | 88 | 0.711346 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/lift/config/franka/agents/sb3_ppo_cfg.yaml | # Reference: https://github.com/DLR-RM/rl-baselines3-zoo/blob/master/hyperparams/ppo.yml#L32
seed: 42
# epoch * n_steps * n_envs: 500 * 512 * 8 * 8
n_timesteps: 16384000
policy: 'MlpPolicy'
n_steps: 64
# mini batch size: num_envs * n_steps / n_minibatches = 2048 * 512 / 2048
batch_size: 192
gae_lambda: 0.95
gamma: 0.99
n_epochs: 8
ent_coef: 0.00
vf_coef: 0.0001
learning_rate: !!float 3e-4
clip_range: 0.2
policy_kwargs: "dict(
activation_fn=nn.ELU,
net_arch=[32, 32, dict(pi=[256, 128, 64], vf=[256, 128, 64])]
)"
target_kl: 0.01
max_grad_norm: 1.0
# # Uses VecNormalize class to normalize obs
# normalize_input: True
# # Uses VecNormalize class to normalize rew
# normalize_value: True
# clip_obs: 5
| 743 | YAML | 24.655172 | 92 | 0.660834 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/lift/config/franka/agents/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from . import rsl_rl_cfg # noqa: F401, F403
| 168 | Python | 23.142854 | 56 | 0.720238 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/manipulation/lift/config/franka/agents/rl_games_ppo_cfg.yaml | params:
seed: 42
# environment wrapper clipping
env:
clip_observations: 100.0
clip_actions: 100.0
algo:
name: a2c_continuous
model:
name: continuous_a2c_logstd
network:
name: actor_critic
separate: False
space:
continuous:
mu_activation: None
sigma_activation: None
mu_init:
name: default
sigma_init:
name: const_initializer
val: 0
fixed_sigma: True
mlp:
units: [256, 128, 64]
activation: elu
d2rl: False
initializer:
name: default
regularizer:
name: None
load_checkpoint: False # flag which sets whether to load the checkpoint
load_path: '' # path to the checkpoint to load
config:
    name: franka_lift
env_name: rlgpu
device: 'cuda:0'
device_name: 'cuda:0'
multi_gpu: False
ppo: True
mixed_precision: False
normalize_input: True
normalize_value: True
value_bootstrap: False
num_actors: -1
reward_shaper:
scale_value: 0.01
normalize_advantage: True
gamma: 0.99
tau: 0.95
learning_rate: 3e-4
lr_schedule: adaptive
schedule_type: legacy
kl_threshold: 0.008
score_to_win: 100000000
max_epochs: 500
save_best_after: 100
save_frequency: 50
print_stats: True
grad_norm: 1.0
entropy_coef: 0.0
truncate_grads: True
e_clip: 0.2
horizon_length: 16
minibatch_size: 4096 #2048
mini_epochs: 8
critic_coef: 4
clip_value: True
seq_len: 4
bounds_loss_coef: 0.0001
| 1,566 | YAML | 18.835443 | 73 | 0.605364 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/locomotion/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Locomotion environments for legged robots."""
from .velocity import * # noqa
| 205 | Python | 21.888886 | 56 | 0.731707 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/locomotion/velocity/velocity_env_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import math
from dataclasses import MISSING
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.assets import ArticulationCfg, AssetBaseCfg
from omni.isaac.orbit.envs import RLTaskEnvCfg
from omni.isaac.orbit.managers import CurriculumTermCfg as CurrTerm
from omni.isaac.orbit.managers import EventTermCfg as EventTerm
from omni.isaac.orbit.managers import ObservationGroupCfg as ObsGroup
from omni.isaac.orbit.managers import ObservationTermCfg as ObsTerm
from omni.isaac.orbit.managers import RewardTermCfg as RewTerm
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.managers import TerminationTermCfg as DoneTerm
from omni.isaac.orbit.scene import InteractiveSceneCfg
from omni.isaac.orbit.sensors import ContactSensorCfg, RayCasterCfg, patterns
from omni.isaac.orbit.terrains import TerrainImporterCfg
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit.utils.noise import AdditiveUniformNoiseCfg as Unoise
import omni.isaac.orbit_tasks.locomotion.velocity.mdp as mdp
##
# Pre-defined configs
##
from omni.isaac.orbit.terrains.config.rough import ROUGH_TERRAINS_CFG # isort: skip
##
# Scene definition
##
@configclass
class MySceneCfg(InteractiveSceneCfg):
"""Configuration for the terrain scene with a legged robot."""
# ground terrain
terrain = TerrainImporterCfg(
prim_path="/World/ground",
terrain_type="generator",
terrain_generator=ROUGH_TERRAINS_CFG,
max_init_terrain_level=5,
collision_group=-1,
physics_material=sim_utils.RigidBodyMaterialCfg(
friction_combine_mode="multiply",
restitution_combine_mode="multiply",
static_friction=1.0,
dynamic_friction=1.0,
),
visual_material=sim_utils.MdlFileCfg(
mdl_path="{NVIDIA_NUCLEUS_DIR}/Materials/Base/Architecture/Shingles_01.mdl",
project_uvw=True,
),
debug_vis=False,
)
# robots
robot: ArticulationCfg = MISSING
# sensors
height_scanner = RayCasterCfg(
prim_path="{ENV_REGEX_NS}/Robot/base",
offset=RayCasterCfg.OffsetCfg(pos=(0.0, 0.0, 20.0)),
attach_yaw_only=True,
pattern_cfg=patterns.GridPatternCfg(resolution=0.1, size=[1.6, 1.0]),
debug_vis=False,
mesh_prim_paths=["/World/ground"],
)
contact_forces = ContactSensorCfg(prim_path="{ENV_REGEX_NS}/Robot/.*", history_length=3, track_air_time=True)
# lights
light = AssetBaseCfg(
prim_path="/World/light",
spawn=sim_utils.DistantLightCfg(color=(0.75, 0.75, 0.75), intensity=3000.0),
)
sky_light = AssetBaseCfg(
prim_path="/World/skyLight",
spawn=sim_utils.DomeLightCfg(color=(0.13, 0.13, 0.13), intensity=1000.0),
)
##
# MDP settings
##
@configclass
class CommandsCfg:
"""Command specifications for the MDP."""
base_velocity = mdp.UniformVelocityCommandCfg(
asset_name="robot",
resampling_time_range=(10.0, 10.0),
rel_standing_envs=0.02,
rel_heading_envs=1.0,
heading_command=True,
heading_control_stiffness=0.5,
debug_vis=True,
ranges=mdp.UniformVelocityCommandCfg.Ranges(
lin_vel_x=(-1.0, 1.0), lin_vel_y=(-1.0, 1.0), ang_vel_z=(-1.0, 1.0), heading=(-math.pi, math.pi)
),
)
@configclass
class ActionsCfg:
"""Action specifications for the MDP."""
joint_pos = mdp.JointPositionActionCfg(asset_name="robot", joint_names=[".*"], scale=0.5, use_default_offset=True)
@configclass
class ObservationsCfg:
"""Observation specifications for the MDP."""
@configclass
class PolicyCfg(ObsGroup):
"""Observations for policy group."""
# observation terms (order preserved)
base_lin_vel = ObsTerm(func=mdp.base_lin_vel, noise=Unoise(n_min=-0.1, n_max=0.1))
base_ang_vel = ObsTerm(func=mdp.base_ang_vel, noise=Unoise(n_min=-0.2, n_max=0.2))
projected_gravity = ObsTerm(
func=mdp.projected_gravity,
noise=Unoise(n_min=-0.05, n_max=0.05),
)
velocity_commands = ObsTerm(func=mdp.generated_commands, params={"command_name": "base_velocity"})
joint_pos = ObsTerm(func=mdp.joint_pos_rel, noise=Unoise(n_min=-0.01, n_max=0.01))
joint_vel = ObsTerm(func=mdp.joint_vel_rel, noise=Unoise(n_min=-1.5, n_max=1.5))
actions = ObsTerm(func=mdp.last_action)
height_scan = ObsTerm(
func=mdp.height_scan,
params={"sensor_cfg": SceneEntityCfg("height_scanner")},
noise=Unoise(n_min=-0.1, n_max=0.1),
clip=(-1.0, 1.0),
)
def __post_init__(self):
self.enable_corruption = True
self.concatenate_terms = True
# observation groups
policy: PolicyCfg = PolicyCfg()
@configclass
class EventCfg:
"""Configuration for events."""
# startup
physics_material = EventTerm(
func=mdp.randomize_rigid_body_material,
mode="startup",
params={
"asset_cfg": SceneEntityCfg("robot", body_names=".*"),
"static_friction_range": (0.8, 0.8),
"dynamic_friction_range": (0.6, 0.6),
"restitution_range": (0.0, 0.0),
"num_buckets": 64,
},
)
add_base_mass = EventTerm(
func=mdp.add_body_mass,
mode="startup",
params={"asset_cfg": SceneEntityCfg("robot", body_names="base"), "mass_range": (-5.0, 5.0)},
)
# reset
base_external_force_torque = EventTerm(
func=mdp.apply_external_force_torque,
mode="reset",
params={
"asset_cfg": SceneEntityCfg("robot", body_names="base"),
"force_range": (0.0, 0.0),
"torque_range": (-0.0, 0.0),
},
)
reset_base = EventTerm(
func=mdp.reset_root_state_uniform,
mode="reset",
params={
"pose_range": {"x": (-0.5, 0.5), "y": (-0.5, 0.5), "yaw": (-3.14, 3.14)},
"velocity_range": {
"x": (-0.5, 0.5),
"y": (-0.5, 0.5),
"z": (-0.5, 0.5),
"roll": (-0.5, 0.5),
"pitch": (-0.5, 0.5),
"yaw": (-0.5, 0.5),
},
},
)
reset_robot_joints = EventTerm(
func=mdp.reset_joints_by_scale,
mode="reset",
params={
"position_range": (0.5, 1.5),
"velocity_range": (0.0, 0.0),
},
)
# interval
push_robot = EventTerm(
func=mdp.push_by_setting_velocity,
mode="interval",
interval_range_s=(10.0, 15.0),
params={"velocity_range": {"x": (-0.5, 0.5), "y": (-0.5, 0.5)}},
)
@configclass
class RewardsCfg:
"""Reward terms for the MDP."""
# -- task
track_lin_vel_xy_exp = RewTerm(
func=mdp.track_lin_vel_xy_exp, weight=1.0, params={"command_name": "base_velocity", "std": math.sqrt(0.25)}
)
track_ang_vel_z_exp = RewTerm(
func=mdp.track_ang_vel_z_exp, weight=0.5, params={"command_name": "base_velocity", "std": math.sqrt(0.25)}
)
# -- penalties
lin_vel_z_l2 = RewTerm(func=mdp.lin_vel_z_l2, weight=-2.0)
ang_vel_xy_l2 = RewTerm(func=mdp.ang_vel_xy_l2, weight=-0.05)
dof_torques_l2 = RewTerm(func=mdp.joint_torques_l2, weight=-1.0e-5)
dof_acc_l2 = RewTerm(func=mdp.joint_acc_l2, weight=-2.5e-7)
action_rate_l2 = RewTerm(func=mdp.action_rate_l2, weight=-0.01)
feet_air_time = RewTerm(
func=mdp.feet_air_time,
weight=0.125,
params={
"sensor_cfg": SceneEntityCfg("contact_forces", body_names=".*FOOT"),
"command_name": "base_velocity",
"threshold": 0.5,
},
)
undesired_contacts = RewTerm(
func=mdp.undesired_contacts,
weight=-1.0,
params={"sensor_cfg": SceneEntityCfg("contact_forces", body_names=".*THIGH"), "threshold": 1.0},
)
# -- optional penalties
flat_orientation_l2 = RewTerm(func=mdp.flat_orientation_l2, weight=0.0)
dof_pos_limits = RewTerm(func=mdp.joint_pos_limits, weight=0.0)
@configclass
class TerminationsCfg:
"""Termination terms for the MDP."""
time_out = DoneTerm(func=mdp.time_out, time_out=True)
base_contact = DoneTerm(
func=mdp.illegal_contact,
params={"sensor_cfg": SceneEntityCfg("contact_forces", body_names="base"), "threshold": 1.0},
)
@configclass
class CurriculumCfg:
"""Curriculum terms for the MDP."""
terrain_levels = CurrTerm(func=mdp.terrain_levels_vel)
##
# Environment configuration
##
@configclass
class LocomotionVelocityRoughEnvCfg(RLTaskEnvCfg):
"""Configuration for the locomotion velocity-tracking environment."""
# Scene settings
scene: MySceneCfg = MySceneCfg(num_envs=4096, env_spacing=2.5)
# Basic settings
observations: ObservationsCfg = ObservationsCfg()
actions: ActionsCfg = ActionsCfg()
commands: CommandsCfg = CommandsCfg()
# MDP settings
rewards: RewardsCfg = RewardsCfg()
terminations: TerminationsCfg = TerminationsCfg()
events: EventCfg = EventCfg()
curriculum: CurriculumCfg = CurriculumCfg()
def __post_init__(self):
"""Post initialization."""
# general settings
self.decimation = 4
self.episode_length_s = 20.0
# simulation settings
self.sim.dt = 0.005
self.sim.disable_contact_processing = True
self.sim.physics_material = self.scene.terrain.physics_material
# update sensor update periods
# we tick all the sensors based on the smallest update period (physics update period)
if self.scene.height_scanner is not None:
self.scene.height_scanner.update_period = self.decimation * self.sim.dt
if self.scene.contact_forces is not None:
self.scene.contact_forces.update_period = self.sim.dt
# check if terrain levels curriculum is enabled - if so, enable curriculum for terrain generator
# this generates terrains with increasing difficulty and is useful for training
if getattr(self.curriculum, "terrain_levels", None) is not None:
if self.scene.terrain.terrain_generator is not None:
self.scene.terrain.terrain_generator.curriculum = True
else:
if self.scene.terrain.terrain_generator is not None:
self.scene.terrain.terrain_generator.curriculum = False
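# -- Illustrative usage sketch (not part of the original file) --
# `MySceneCfg.robot` is left as MISSING, so this config is abstract: concrete tasks
# subclass it and fill in the articulation, as the Unitree Go1 configs do. The asset
# import below assumes the orbit_assets extension is available.
if __name__ == "__main__":
    from omni.isaac.orbit_assets.unitree import UNITREE_GO1_CFG  # assumed asset config

    cfg = LocomotionVelocityRoughEnvCfg()
    cfg.scene.robot = UNITREE_GO1_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")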
| 10,608 | Python | 32.466877 | 118 | 0.625094 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/locomotion/velocity/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Locomotion environments with velocity-tracking commands.
These environments are based on the `legged_gym` environments provided by Rudin et al.
Reference:
https://github.com/leggedrobotics/legged_gym
"""
| 336 | Python | 24.923075 | 86 | 0.764881 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/locomotion/velocity/mdp/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""This sub-module contains the functions that are specific to the locomotion environments."""
from omni.isaac.orbit.envs.mdp import * # noqa: F401, F403
from .curriculums import * # noqa: F401, F403
from .rewards import * # noqa: F401, F403
| 370 | Python | 29.916664 | 94 | 0.732432 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/locomotion/velocity/mdp/curriculums.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Common functions that can be used to create curriculum for the learning environment.
The functions can be passed to the :class:`omni.isaac.orbit.managers.CurriculumTermCfg` object to enable
the curriculum introduced by the function.
"""
from __future__ import annotations
import torch
from collections.abc import Sequence
from typing import TYPE_CHECKING
from omni.isaac.orbit.assets import Articulation
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.terrains import TerrainImporter
if TYPE_CHECKING:
from omni.isaac.orbit.envs import RLTaskEnv
def terrain_levels_vel(
env: RLTaskEnv, env_ids: Sequence[int], asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")
) -> torch.Tensor:
"""Curriculum based on the distance the robot walked when commanded to move at a desired velocity.
This term is used to increase the difficulty of the terrain when the robot walks far enough and decrease the
difficulty when the robot walks less than half of the distance required by the commanded velocity.
.. note::
It is only possible to use this term with the terrain type ``generator``. For further information
on different terrain types, check the :class:`omni.isaac.orbit.terrains.TerrainImporter` class.
Returns:
The mean terrain level for the given environment ids.
"""
# extract the used quantities (to enable type-hinting)
asset: Articulation = env.scene[asset_cfg.name]
terrain: TerrainImporter = env.scene.terrain
command = env.command_manager.get_command("base_velocity")
# compute the distance the robot walked
distance = torch.norm(asset.data.root_pos_w[env_ids, :2] - env.scene.env_origins[env_ids, :2], dim=1)
# robots that walked far enough progress to harder terrains
move_up = distance > terrain.cfg.terrain_generator.size[0] / 2
# robots that walked less than half of their required distance go to simpler terrains
move_down = distance < torch.norm(command[env_ids, :2], dim=1) * env.max_episode_length_s * 0.5
move_down *= ~move_up
# update terrain levels
terrain.update_env_origins(env_ids, move_up, move_down)
# return the mean terrain level
return torch.mean(terrain.terrain_levels.float())
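# -- Illustrative usage sketch (not part of the original file) --
# The term is bound to an environment through the curriculum manager, mirroring
# `CurriculumCfg.terrain_levels` in the velocity env config:
if __name__ == "__main__":
    from omni.isaac.orbit.managers import CurriculumTermCfg

    terrain_levels = CurriculumTermCfg(func=terrain_levels_vel)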
| 2,376 | Python | 41.446428 | 112 | 0.742424 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/locomotion/velocity/mdp/rewards.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import torch
from typing import TYPE_CHECKING
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.sensors import ContactSensor
if TYPE_CHECKING:
from omni.isaac.orbit.envs import RLTaskEnv
def feet_air_time(env: RLTaskEnv, command_name: str, sensor_cfg: SceneEntityCfg, threshold: float) -> torch.Tensor:
"""Reward long steps taken by the feet using L2-kernel.
This function rewards the agent for taking steps that are longer than a threshold. This helps ensure
that the robot lifts its feet off the ground and takes steps. The reward is computed as the sum of
the time for which the feet are in the air.
If the commands are small (i.e. the agent is not supposed to take a step), then the reward is zero.
"""
# extract the used quantities (to enable type-hinting)
contact_sensor: ContactSensor = env.scene.sensors[sensor_cfg.name]
# compute the reward
first_contact = contact_sensor.compute_first_contact(env.step_dt)[:, sensor_cfg.body_ids]
last_air_time = contact_sensor.data.last_air_time[:, sensor_cfg.body_ids]
reward = torch.sum((last_air_time - threshold) * first_contact, dim=1)
# no reward for zero command
reward *= torch.norm(env.command_manager.get_command(command_name)[:, :2], dim=1) > 0.1
return reward
def feet_air_time_positive_biped(env: RLTaskEnv, command_name: str, threshold: float, sensor_cfg: SceneEntityCfg) -> torch.Tensor:
    """Reward long steps taken by the feet for bipeds.
    This function rewards the agent for taking steps up to a specified threshold and for keeping
    only one foot in the air at a time.
If the commands are small (i.e. the agent is not supposed to take a step), then the reward is zero.
"""
contact_sensor: ContactSensor = env.scene.sensors[sensor_cfg.name]
# compute the reward
air_time = contact_sensor.data.current_air_time[:, sensor_cfg.body_ids]
contact_time = contact_sensor.data.current_contact_time[:, sensor_cfg.body_ids]
in_contact = contact_time > 0.0
in_mode_time = torch.where(in_contact, contact_time, air_time)
single_stance = torch.sum(in_contact.int(), dim=1) == 1
reward = torch.min(torch.where(single_stance.unsqueeze(-1), in_mode_time, 0.0), dim=1)[0]
reward = torch.clamp(reward, max=threshold)
# no reward for zero command
reward *= torch.norm(env.command_manager.get_command(command_name)[:, :2], dim=1) > 0.1
return reward
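# -- Illustrative usage sketch (not part of the original file) --
# Mirroring `RewardsCfg.feet_air_time` in the velocity env config, the term is bound
# through a `RewardTermCfg` that supplies the sensor, command name, and threshold:
if __name__ == "__main__":
    from omni.isaac.orbit.managers import RewardTermCfg

    feet_air_time_term = RewardTermCfg(
        func=feet_air_time,
        weight=0.125,
        params={
            "sensor_cfg": SceneEntityCfg("contact_forces", body_names=".*FOOT"),
            "command_name": "base_velocity",
            "threshold": 0.5,
        },
    )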
| 2,595 | Python | 43.75862 | 119 | 0.717148 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/locomotion/velocity/config/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Configurations for velocity-based locomotion environments."""
# We leave this file empty since we don't want to expose any configs in this package directly.
# We still need this file to import the "config" module in the parent package.
| 363 | Python | 35.399996 | 94 | 0.763085 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/locomotion/velocity/config/unitree_go1/rough_env_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit_tasks.locomotion.velocity.velocity_env_cfg import LocomotionVelocityRoughEnvCfg
##
# Pre-defined configs
##
from omni.isaac.orbit_assets.unitree import UNITREE_GO1_CFG # isort: skip
@configclass
class UnitreeGo1RoughEnvCfg(LocomotionVelocityRoughEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
self.scene.robot = UNITREE_GO1_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
self.scene.height_scanner.prim_path = "{ENV_REGEX_NS}/Robot/trunk"
# scale down the terrains because the robot is small
self.scene.terrain.terrain_generator.sub_terrains["boxes"].grid_height_range = (0.025, 0.1)
self.scene.terrain.terrain_generator.sub_terrains["random_rough"].noise_range = (0.01, 0.06)
self.scene.terrain.terrain_generator.sub_terrains["random_rough"].noise_step = 0.01
# reduce action scale
self.actions.joint_pos.scale = 0.25
# event
self.events.push_robot = None
self.events.add_base_mass.params["mass_range"] = (-1.0, 3.0)
self.events.add_base_mass.params["asset_cfg"].body_names = "trunk"
self.events.base_external_force_torque.params["asset_cfg"].body_names = "trunk"
self.events.reset_robot_joints.params["position_range"] = (1.0, 1.0)
self.events.reset_base.params = {
"pose_range": {"x": (-0.5, 0.5), "y": (-0.5, 0.5), "yaw": (-3.14, 3.14)},
"velocity_range": {
"x": (0.0, 0.0),
"y": (0.0, 0.0),
"z": (0.0, 0.0),
"roll": (0.0, 0.0),
"pitch": (0.0, 0.0),
"yaw": (0.0, 0.0),
},
}
# rewards
self.rewards.feet_air_time.params["sensor_cfg"].body_names = ".*_foot"
self.rewards.feet_air_time.weight = 0.01
self.rewards.undesired_contacts = None
self.rewards.dof_torques_l2.weight = -0.0002
self.rewards.track_lin_vel_xy_exp.weight = 1.5
self.rewards.track_ang_vel_z_exp.weight = 0.75
self.rewards.dof_acc_l2.weight = -2.5e-7
# terminations
self.terminations.base_contact.params["sensor_cfg"].body_names = "trunk"
@configclass
class UnitreeGo1RoughEnvCfg_PLAY(UnitreeGo1RoughEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# make a smaller scene for play
self.scene.num_envs = 50
self.scene.env_spacing = 2.5
# spawn the robot randomly in the grid (instead of their terrain levels)
self.scene.terrain.max_init_terrain_level = None
# reduce the number of terrains to save memory
if self.scene.terrain.terrain_generator is not None:
self.scene.terrain.terrain_generator.num_rows = 5
self.scene.terrain.terrain_generator.num_cols = 5
self.scene.terrain.terrain_generator.curriculum = False
# disable randomization for play
self.observations.policy.enable_corruption = False
# remove random pushing event
self.events.base_external_force_torque = None
self.events.push_robot = None
| 3,351 | Python | 38.435294 | 101 | 0.621904 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/locomotion/velocity/config/unitree_go1/flat_env_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.utils import configclass
from .rough_env_cfg import UnitreeGo1RoughEnvCfg
@configclass
class UnitreeGo1FlatEnvCfg(UnitreeGo1RoughEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# override rewards
self.rewards.flat_orientation_l2.weight = -2.5
self.rewards.feet_air_time.weight = 0.25
# change terrain to flat
self.scene.terrain.terrain_type = "plane"
self.scene.terrain.terrain_generator = None
# no height scan
self.scene.height_scanner = None
self.observations.policy.height_scan = None
# no terrain curriculum
self.curriculum.terrain_levels = None
@configclass
class UnitreeGo1FlatEnvCfg_PLAY(UnitreeGo1FlatEnvCfg):
def __post_init__(self) -> None:
# post init of parent
super().__post_init__()
# make a smaller scene for play
self.scene.num_envs = 50
self.scene.env_spacing = 2.5
# disable randomization for play
self.observations.policy.enable_corruption = False
# remove random pushing event
self.events.base_external_force_torque = None
self.events.push_robot = None
| 1,338 | Python | 29.431818 | 58 | 0.658445 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/locomotion/velocity/config/unitree_go1/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
import gymnasium as gym
from . import agents, flat_env_cfg, rough_env_cfg
##
# Register Gym environments.
##
gym.register(
id="Isaac-Velocity-Flat-Unitree-Go1-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
disable_env_checker=True,
kwargs={
"env_cfg_entry_point": flat_env_cfg.UnitreeGo1FlatEnvCfg,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.UnitreeGo1FlatPPORunnerCfg,
},
)
gym.register(
id="Isaac-Velocity-Flat-Unitree-Go1-Play-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
disable_env_checker=True,
kwargs={
"env_cfg_entry_point": flat_env_cfg.UnitreeGo1FlatEnvCfg_PLAY,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.UnitreeGo1FlatPPORunnerCfg,
},
)
gym.register(
id="Isaac-Velocity-Rough-Unitree-Go1-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
disable_env_checker=True,
kwargs={
"env_cfg_entry_point": rough_env_cfg.UnitreeGo1RoughEnvCfg,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.UnitreeGo1RoughPPORunnerCfg,
},
)
gym.register(
id="Isaac-Velocity-Rough-Unitree-Go1-Play-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
disable_env_checker=True,
kwargs={
"env_cfg_entry_point": rough_env_cfg.UnitreeGo1RoughEnvCfg_PLAY,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.UnitreeGo1RoughPPORunnerCfg,
},
)
| 1,498 | Python | 27.283018 | 80 | 0.690921 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/locomotion/velocity/config/unitree_go1/agents/rsl_rl_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit_tasks.utils.wrappers.rsl_rl import (
RslRlOnPolicyRunnerCfg,
RslRlPpoActorCriticCfg,
RslRlPpoAlgorithmCfg,
)
@configclass
class UnitreeGo1RoughPPORunnerCfg(RslRlOnPolicyRunnerCfg):
num_steps_per_env = 24
max_iterations = 1500
save_interval = 50
experiment_name = "unitree_go1_rough"
empirical_normalization = False
policy = RslRlPpoActorCriticCfg(
init_noise_std=1.0,
actor_hidden_dims=[512, 256, 128],
critic_hidden_dims=[512, 256, 128],
activation="elu",
)
algorithm = RslRlPpoAlgorithmCfg(
value_loss_coef=1.0,
use_clipped_value_loss=True,
clip_param=0.2,
entropy_coef=0.01,
num_learning_epochs=5,
num_mini_batches=4,
learning_rate=1.0e-3,
schedule="adaptive",
gamma=0.99,
lam=0.95,
desired_kl=0.01,
max_grad_norm=1.0,
)
@configclass
class UnitreeGo1FlatPPORunnerCfg(UnitreeGo1RoughPPORunnerCfg):
def __post_init__(self):
super().__post_init__()
self.max_iterations = 300
self.experiment_name = "unitree_go1_flat"
self.policy.actor_hidden_dims = [128, 128, 128]
self.policy.critic_hidden_dims = [128, 128, 128]
| 1,432 | Python | 26.037735 | 62 | 0.648045 |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.