Helbling-Technik/orbit.maze/pyproject.toml
# This section defines the build system requirements
[build-system]
requires = ["setuptools >= 61.0"]
build-backend = "setuptools.build_meta"
# Project metadata
[project]
version = "0.1.0"
name = "maze" # TODO
description = "Maze Extension Task for RL Learning" # TODO
keywords = ["extension", "maze", "orbit"] # TODO
readme = "README.md"
requires-python = ">=3.10"
license = {file = "LICENSE.txt"}
classifiers = [
"Programming Language :: Python :: 3",
]
authors = [
{name = "Kevin Schneider", email = "[email protected]"}, # TODO
]
maintainers = [
{name = "Kevin Schneider", email = "[email protected]"}, # TODO
]
# Tool dependent subtables
[tool.setuptools]
py-modules = [
'orbit'
] # TODO, add modules required for your extension
Helbling-Technik/orbit.maze/README.md
# Extension Template for Orbit
[Isaac Sim](https://docs.omniverse.nvidia.com/isaacsim/latest/overview.html)
[Orbit](https://isaac-orbit.github.io/orbit/)
[Python 3.10](https://docs.python.org/3/whatsnew/3.10.html)
[Ubuntu 20.04](https://releases.ubuntu.com/20.04/)
[pre-commit](https://pre-commit.com/)
## Overview
This repository serves as a template for building projects or extensions based on Orbit. It allows you to develop in an isolated environment, outside of the core Orbit repository. Furthermore, this template serves three use cases:
- **Python Package**
Can be installed into Isaac Sim's Python environment, making it suitable for users who want to integrate their extension into `Orbit` as a Python package.
- **Project Template**
Provides access to `Isaac Sim` and `Orbit` functionality, so the repository can serve as a standalone project template.
- **Omniverse Extension**
Can be used as an Omniverse extension, ideal for projects that leverage the Omniverse platform's graphical user interface.
**Key Features:**
- `Isolation` Work outside the core Orbit repository, ensuring that your development efforts remain self-contained.
- `Flexibility` This template is set up to allow your code to be run as an extension in Omniverse.
**Keywords:** extension, template, orbit
### License
The source code is released under a [BSD 3-Clause license](https://opensource.org/licenses/BSD-3-Clause).
**Author: The ORBIT Project Developers<br />
Affiliation: [The AI Institute](https://theaiinstitute.com/)<br />
Maintainer: Nico Burger, [email protected]**
## Setup
Depending on the use case defined [above](#overview), follow the instructions to set up your extension template. Start with the [Basic Setup](#basic-setup), which is required for all use cases.
### Basic Setup
#### Dependencies
This template depends on Isaac Sim and Orbit. For detailed instructions on how to install these dependencies, please refer to the [installation guide](https://isaac-orbit.github.io/orbit/source/setup/installation.html).
- [Isaac Sim](https://docs.omniverse.nvidia.com/isaacsim/latest/index.html)
- [Orbit](https://isaac-orbit.github.io/orbit/)
#### Configuration
- Set up a symbolic link from Orbit to this directory.
This makes it convenient to index the python modules and look for extensions shipped with Isaac Sim and Orbit.
```bash
ln -s <your_orbit_path> _orbit
```
#### Environment (Optional)
For clarity, we will be using the `${ISAACSIM_PATH}/python.sh` command to call the Orbit specific python interpreter. However, you might be working from within a virtual environment, allowing you to use the `python` command directly, instead of `${ISAACSIM_PATH}/python.sh`. Information on setting up a virtual environment for Orbit can be found [here](https://isaac-orbit.github.io/orbit/source/setup/installation.html#setting-up-the-environment). The `ISAACSIM_PATH` should already be set from installing Orbit, see [here](https://isaac-orbit.github.io/orbit/source/setup/installation.html#configuring-the-environment-variables).
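As a quick sanity check of this setup, you can invoke the interpreter directly (a minimal sketch, assuming `ISAACSIM_PATH` was exported during the Orbit installation):
```bash
# verify the variable points to your Isaac Sim installation
echo ${ISAACSIM_PATH}
# arguments are forwarded to the bundled Python interpreter
${ISAACSIM_PATH}/python.sh --version
```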
#### Configure Python Interpreter
In the provided configuration, we set the default Python interpreter to use the Python executable provided by Omniverse. This is specified in the `.vscode/settings.json` file:
```json
"python.defaultInterpreterPath": "${env:ISAACSIM_PATH}/python.sh"
```
This setup requires the `ISAACSIM_PATH` environment variable to be set. If you want to use a different Python interpreter, select and activate the interpreter of your choice in the bottom left corner of VSCode, or open the command palette (`Ctrl+Shift+P`) and choose `Python: Select Interpreter`.
#### Set up IDE
To set up the IDE, please follow these instructions:
1. Open the `orbit.maze` directory in Visual Studio Code.
2. Run the VSCode tasks by pressing `Ctrl+Shift+P`, selecting `Tasks: Run Task` and running `setup_python_env` from the drop-down menu.
If everything executes correctly, it should create a `.python.env` file in the `.vscode` directory. The file contains the Python paths to all the extensions provided by Isaac Sim and Omniverse. This helps in indexing all the Python modules for intelligent suggestions while writing code.
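For illustration only, the generated file is a plain environment file whose `PYTHONPATH` lists the extension directories; the entries below are hypothetical, as the actual paths are machine-specific and produced by the task:
```bash
# .vscode/.python.env (illustrative, hypothetical paths)
PYTHONPATH=${ISAACSIM_PATH}/exts/omni.isaac.core:${ISAACSIM_PATH}/exts/omni.isaac.gym
```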
### Setup as Python Package / Project Template
From within this repository, install your extension as a Python package to the Isaac Sim Python executable.
```bash
${ISAACSIM_PATH}/python.sh -m pip install --upgrade pip
${ISAACSIM_PATH}/python.sh -m pip install -e .
```
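As a quick sanity check (not part of the official setup), the installed package should now be visible to the Isaac Sim interpreter; note that `maze` is the package name declared in `pyproject.toml`:
```bash
${ISAACSIM_PATH}/python.sh -m pip show maze
```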
### Setup as Omniverse Extension
To enable your extension, follow these steps:
1. **Add the search path of your repository** to the extension manager:
- Navigate to the extension manager using `Window` -> `Extensions`.
- Click on the **Hamburger Icon** (☰), then go to `Settings`.
- In the `Extension Search Paths`, enter the path that goes up to your repository's location without actually including the repository's own directory. For example, if your repository is located at `/home/code/orbit.ext_template`, you should add `/home/code` as the search path.
- If not already present, in the `Extension Search Paths`, enter the path that leads to your local Orbit directory. For example: `/home/orbit/source/extensions`
- Click on the **Hamburger Icon** (☰), then click `Refresh`.
2. **Search and enable your extension**:
- Find your extension under the `Third Party` category.
- Toggle it to enable your extension.
## Usage
### Python Package
Import your python package within `Isaac Sim` and `Orbit` using:
```python
import orbit.<your_extension_name>
```
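Once the package is imported, the task registered by this repository can be instantiated through `gymnasium`. The following is a minimal sketch, assuming the simulator has already been launched via `AppLauncher` (as done in the provided scripts) and using the `Isaac-Maze-v0` task that this extension registers:
```python
import gymnasium as gym
import orbit.maze  # noqa: F401  (registers Isaac-Maze-v0 on import)
from omni.isaac.orbit_tasks.utils.parse_cfg import parse_env_cfg

# load the default environment configuration and create the environment
env_cfg = parse_env_cfg("Isaac-Maze-v0", use_gpu=True, num_envs=16)
env = gym.make("Isaac-Maze-v0", cfg=env_cfg)
```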
### Project Template
We provide an example for training and playing a policy for the maze task. Install [RSL_RL](https://github.com/leggedrobotics/rsl_rl) outside of the orbit repository, e.g. `/home/code/rsl_rl`.
```bash
git clone https://github.com/leggedrobotics/rsl_rl.git
cd rsl_rl
${ISAACSIM_PATH}/python.sh -m pip install -e .
```
Train a policy.
```bash
cd <path_to_your_extension>
${ISAACSIM_PATH}/python.sh scripts/sb3/train.py --task Isaac-Maze-v0 --num_envs 4096 --headless
```
Play the trained policy.
```bash
${ISAACSIM_PATH}/python.sh scripts/sb3/play.py --task Isaac-Maze-v0 --num_envs 16
```
## Bugs & Feature Requests
Please report bugs and request features using the [Issue Tracker](https://github.com/isaac-orbit/orbit.ext_template/issues).
Helbling-Technik/orbit.maze/scripts/create_env.py
from __future__ import annotations
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Test adding sensors on a robot.")
parser.add_argument("--num_envs", type=int, default=1, help="Number of environments to spawn.")
parser.add_argument("--num_cams", type=int, default=1, help="Number of cams per env (2 Max)")
parser.add_argument("--save", action="store_true", default=False, help="Save the obtained data to disk.")
# parser.add_argument("--livestream", type=int, default="1", help="stream remotely")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
args_cli.num_cams = min(2, args_cli.num_cams)
args_cli.num_cams = max(0, args_cli.num_cams)
args_cli.num_envs = max(1, args_cli.num_envs)
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
import math
from PIL import Image
import torch
import traceback
import carb
import os
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.assets import ArticulationCfg, AssetBaseCfg, RigidObjectCfg
from omni.isaac.orbit.scene import InteractiveScene, InteractiveSceneCfg
from omni.isaac.orbit.sensors import CameraCfg, ContactSensorCfg, RayCasterCfg, patterns
from omni.isaac.orbit.actuators import ImplicitActuatorCfg
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit.utils.timer import Timer
import omni.replicator.core as rep
from omni.isaac.orbit.utils import convert_dict_to_backend
from tqdm import tqdm
current_script_path = os.path.abspath(__file__)
# Absolute path of the project root (one level up from the scripts directory)
project_root = os.path.abspath(os.path.join(os.path.dirname(current_script_path), ".."))
MAZE_CFG = ArticulationCfg(
spawn=sim_utils.UsdFileCfg(
# usd_path=f"{ISAAC_ORBIT_NUCLEUS_DIR}/Robots/Classic/Cartpole/cartpole.usd",
# Path to the USD file relative to the project root
usd_path=os.path.join(project_root, "usds/Maze_Simple.usd"),
# usd_path=f"../../../../usds/Maze_Simple.usd",
rigid_props=sim_utils.RigidBodyPropertiesCfg(
rigid_body_enabled=True,
max_linear_velocity=1000.0,
max_angular_velocity=1000.0,
max_depenetration_velocity=100.0,
enable_gyroscopic_forces=True,
),
articulation_props=sim_utils.ArticulationRootPropertiesCfg(
enabled_self_collisions=False,
solver_position_iteration_count=4,
solver_velocity_iteration_count=0,
sleep_threshold=0.005,
stabilization_threshold=0.001,
),
),
init_state=ArticulationCfg.InitialStateCfg(
pos=(0.0, 0.0, 0.0), joint_pos={"OuterDOF_RevoluteJoint": 0.0, "InnerDOF_RevoluteJoint": 0.0}
),
actuators={
"outer_actuator": ImplicitActuatorCfg(
joint_names_expr=["OuterDOF_RevoluteJoint"],
effort_limit=0.01,
velocity_limit=1.0 / math.pi,
stiffness=0.0,
damping=10.0,
),
"inner_actuator": ImplicitActuatorCfg(
joint_names_expr=["InnerDOF_RevoluteJoint"],
effort_limit=0.01,
velocity_limit=1.0 / math.pi,
stiffness=0.0,
damping=10.0,
),
},
)
@configclass
class SensorsSceneCfg(InteractiveSceneCfg):
"""Design the scene with sensors on the robot."""
# ground plane
ground = AssetBaseCfg(
prim_path="/World/ground",
spawn=sim_utils.GroundPlaneCfg(size=(100.0, 100.0)),
)
    # maze articulation
robot: ArticulationCfg = MAZE_CFG.replace(prim_path="{ENV_REGEX_NS}/Labyrinth")
# Sphere with collision enabled but not actuated
sphere = RigidObjectCfg(
prim_path="{ENV_REGEX_NS}/sphere",
spawn=sim_utils.SphereCfg(
radius=0.005, # Define the radius of the sphere
            mass_props=sim_utils.MassPropertiesCfg(density=7850),  # density of steel in kg/m^3
rigid_props=sim_utils.RigidBodyPropertiesCfg(rigid_body_enabled=True),
collision_props=sim_utils.CollisionPropertiesCfg(collision_enabled=True),
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(0.9, 0.9, 0.9), metallic=0.8),
),
init_state=RigidObjectCfg.InitialStateCfg(pos=(0.0, 0.0, 0.11)),
)
# sensors
camera_1 = CameraCfg(
prim_path="{ENV_REGEX_NS}/top_cam",
update_period=0.1,
height=8,
width=8,
data_types=["rgb"],#, "distance_to_image_plane"],
spawn=sim_utils.PinholeCameraCfg(
focal_length=24.0, focus_distance=400.0, horizontal_aperture=20.955, clipping_range=(0.1, 1.0e5)
),
offset=CameraCfg.OffsetCfg(pos=(0.0, 0.0, 0.5), rot=(0,1,0,0), convention="ros"),
)
# sphere_object = RigidObject(cfg=sphere_cfg)
# lights
dome_light = AssetBaseCfg(
prim_path="/World/DomeLight",
spawn=sim_utils.DomeLightCfg(color=(0.9, 0.9, 0.9), intensity=500.0),
)
distant_light = AssetBaseCfg(
prim_path="/World/DistantLight",
spawn=sim_utils.DistantLightCfg(color=(0.9, 0.9, 0.9), intensity=2500.0),
init_state=AssetBaseCfg.InitialStateCfg(rot=(0.738, 0.477, 0.477, 0.0)),
)
def run_simulator(
sim: sim_utils.SimulationContext,
scene: InteractiveScene,
):
"""Run the simulator."""
# Define simulation stepping
sim_dt = sim.get_physics_dt()
sim_time = 0.0
def reset():
# reset the scene entities
# root state
# we offset the root state by the origin since the states are written in simulation world frame
# if this is not done, then the robots will be spawned at the (0, 0, 0) of the simulation world
root_state = scene["robot"].data.default_root_state.clone()
root_state[:, :3] += scene.env_origins
scene["robot"].write_root_state_to_sim(root_state)
# set joint positions with some noise
joint_pos, joint_vel = (
scene["robot"].data.default_joint_pos.clone(),
scene["robot"].data.default_joint_vel.clone(),
)
joint_pos += torch.rand_like(joint_pos) * 0.1
scene["robot"].write_joint_state_to_sim(joint_pos, joint_vel)
# clear internal buffers
scene.reset()
print("[INFO]: Resetting robot state...")
output_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), "output", "camera")
rep_writer = rep.BasicWriter(output_dir=output_dir, frame_padding=3)
episode_steps = 500
while simulation_app.is_running():
reset()
with Timer(f"Time taken for {episode_steps} steps with {args_cli.num_envs} envs"):
with tqdm(range(episode_steps*args_cli.num_envs)) as pbar:
for count in range(episode_steps):
# Apply default actions to the robot
# -- generate actions/commands
targets = scene["robot"].data.default_joint_pos
# -- apply action to the robot
scene["robot"].set_joint_position_target(targets)
# -- write data to sim
scene.write_data_to_sim()
# perform step
sim.step()
# update sim-time
sim_time += sim_dt
count += 1
# update buffers
scene.update(sim_dt)
pbar.update(args_cli.num_envs)
# Extract camera data
if args_cli.save:
for i in range(args_cli.num_envs):
for j in range(args_cli.num_cams):
single_cam_data = convert_dict_to_backend(scene[f"camera_{j+1}"].data.output, backend="numpy")
#single_cam_info = scene[f"camera_{j+1}"].data.info
# Pack data back into replicator format to save them using its writer
rep_output = dict()
                                for key, data in single_cam_data.items():  # optionally zip with single_cam_info
                                    # if info is not None:
                                    #     rep_output[key] = {"data": data, "info": info}
                                    # else:
                                    rep_output[key] = data[i]
# Save images
# Note: We need to provide On-time data for Replicator to save the images.
rep_output["trigger_outputs"] = {"on_time":f"{count}_{i}_{j}"}#{"on_time": scene["camera_1"].frame}
rep_writer.write(rep_output)
if args_cli.num_cams > 0:
cam1_rgb = scene["camera_1"].data.output["rgb"]
squeezed_img = cam1_rgb.squeeze(0).cpu().numpy().astype('uint8')
image = Image.fromarray(squeezed_img)
# image.save('test_cam'+str(count)+'.png')
if args_cli.num_cams > 1:
cam2_rgb = scene["camera_2"].data.output["rgb"]
def main():
"""Main function."""
# Initialize the simulation context
sim_cfg = sim_utils.SimulationCfg(dt=0.005, substeps=1)
sim = sim_utils.SimulationContext(sim_cfg)
# Set main camera
sim.set_camera_view(eye=[3.5, 3.5, 3.5], target=[0.0, 0.0, 0.0])
# design scene
scene_cfg = SensorsSceneCfg(num_envs=args_cli.num_envs, env_spacing=2.0)
scene = InteractiveScene(scene_cfg)
# Play the simulator
sim.reset()
# Now we are ready!
print("[INFO]: Setup complete...")
# Run the simulator
run_simulator(sim, scene)
if __name__ == "__main__":
try:
# run the main execution
main()
except Exception as err:
carb.log_error(err)
carb.log_error(traceback.format_exc())
raise
finally:
# close sim app
simulation_app.close()
Helbling-Technik/orbit.maze/scripts/rsl_rl/play.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Script to play a checkpoint if an RL agent from RSL-RL."""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# local imports
import cli_args # isort: skip
# add argparse arguments
parser = argparse.ArgumentParser(description="Train an RL agent with RSL-RL.")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU pipeline.")
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
# append RSL-RL cli arguments
cli_args.add_rsl_rl_args(parser)
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import os
import gymnasium as gym
import omni.isaac.contrib_tasks # noqa: F401
import omni.isaac.orbit_tasks # noqa: F401
import torch
from omni.isaac.orbit_tasks.utils import get_checkpoint_path, parse_env_cfg
from omni.isaac.orbit_tasks.utils.wrappers.rsl_rl import (
RslRlOnPolicyRunnerCfg,
RslRlVecEnvWrapper,
export_policy_as_onnx,
)
from rsl_rl.runners import OnPolicyRunner
# Import extensions to set up environment tasks
import orbit.maze # noqa: F401 TODO: import orbit.<your_extension_name>
def main():
"""Play with RSL-RL agent."""
# parse configuration
env_cfg = parse_env_cfg(args_cli.task, use_gpu=not args_cli.cpu, num_envs=args_cli.num_envs)
agent_cfg: RslRlOnPolicyRunnerCfg = cli_args.parse_rsl_rl_cfg(args_cli.task, args_cli)
# create isaac environment
env = gym.make(args_cli.task, cfg=env_cfg)
# wrap around environment for rsl-rl
env = RslRlVecEnvWrapper(env)
# specify directory for logging experiments
log_root_path = os.path.join("logs", "rsl_rl", agent_cfg.experiment_name)
log_root_path = os.path.abspath(log_root_path)
print(f"[INFO] Loading experiment from directory: {log_root_path}")
resume_path = get_checkpoint_path(log_root_path, agent_cfg.load_run, agent_cfg.load_checkpoint)
print(f"[INFO]: Loading model checkpoint from: {resume_path}")
# load previously trained model
ppo_runner = OnPolicyRunner(env, agent_cfg.to_dict(), log_dir=None, device=agent_cfg.device)
ppo_runner.load(resume_path)
print(f"[INFO]: Loading model checkpoint from: {resume_path}")
# obtain the trained policy for inference
policy = ppo_runner.get_inference_policy(device=env.unwrapped.device)
# export policy to onnx
export_model_dir = os.path.join(os.path.dirname(resume_path), "exported")
export_policy_as_onnx(ppo_runner.alg.actor_critic, export_model_dir, filename="policy.onnx")
# reset environment
obs, _ = env.get_observations()
# simulate environment
while simulation_app.is_running():
# run everything in inference mode
with torch.inference_mode():
# agent stepping
actions = policy(obs)
# env stepping
obs, _, _, _ = env.step(actions)
# close the simulator
env.close()
if __name__ == "__main__":
# run the main execution
main()
# close sim app
simulation_app.close()
Helbling-Technik/orbit.maze/scripts/rsl_rl/cli_args.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import argparse
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from omni.isaac.orbit_tasks.utils.wrappers.rsl_rl import RslRlOnPolicyRunnerCfg
def add_rsl_rl_args(parser: argparse.ArgumentParser):
"""Add RSL-RL arguments to the parser.
Args:
parser: The parser to add the arguments to.
"""
# create a new argument group
arg_group = parser.add_argument_group("rsl_rl", description="Arguments for RSL-RL agent.")
# -- experiment arguments
arg_group.add_argument(
"--experiment_name", type=str, default=None, help="Name of the experiment folder where logs will be stored."
)
arg_group.add_argument("--run_name", type=str, default=None, help="Run name suffix to the log directory.")
# -- load arguments
arg_group.add_argument("--resume", type=bool, default=None, help="Whether to resume from a checkpoint.")
arg_group.add_argument("--load_run", type=str, default=None, help="Name of the run folder to resume from.")
arg_group.add_argument("--checkpoint", type=str, default=None, help="Checkpoint file to resume from.")
# -- logger arguments
arg_group.add_argument(
"--logger", type=str, default=None, choices={"wandb", "tensorboard", "neptune"}, help="Logger module to use."
)
arg_group.add_argument(
"--log_project_name", type=str, default=None, help="Name of the logging project when using wandb or neptune."
)
def parse_rsl_rl_cfg(task_name: str, args_cli: argparse.Namespace) -> RslRlOnPolicyRunnerCfg:
"""Parse configuration for RSL-RL agent based on inputs.
Args:
task_name: The name of the environment.
args_cli: The command line arguments.
Returns:
The parsed configuration for RSL-RL agent based on inputs.
"""
from omni.isaac.orbit_tasks.utils.parse_cfg import load_cfg_from_registry
# load the default configuration
rslrl_cfg: RslRlOnPolicyRunnerCfg = load_cfg_from_registry(task_name, "rsl_rl_cfg_entry_point")
# override the default configuration with CLI arguments
if args_cli.seed is not None:
rslrl_cfg.seed = args_cli.seed
if args_cli.resume is not None:
rslrl_cfg.resume = args_cli.resume
if args_cli.load_run is not None:
rslrl_cfg.load_run = args_cli.load_run
if args_cli.checkpoint is not None:
rslrl_cfg.load_checkpoint = args_cli.checkpoint
if args_cli.run_name is not None:
rslrl_cfg.run_name = args_cli.run_name
if args_cli.logger is not None:
rslrl_cfg.logger = args_cli.logger
# set the project name for wandb and neptune
if rslrl_cfg.logger in {"wandb", "neptune"} and args_cli.log_project_name:
rslrl_cfg.wandb_project = args_cli.log_project_name
rslrl_cfg.neptune_project = args_cli.log_project_name
return rslrl_cfg
Helbling-Technik/orbit.maze/scripts/rsl_rl/train.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Script to train RL agent with RSL-RL."""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
import os
from omni.isaac.orbit.app import AppLauncher
# local imports
import cli_args # isort: skip
# add argparse arguments
parser = argparse.ArgumentParser(description="Train an RL agent with RSL-RL.")
parser.add_argument("--video", action="store_true", default=False, help="Record videos during training.")
parser.add_argument("--video_length", type=int, default=200, help="Length of the recorded video (in steps).")
parser.add_argument("--video_interval", type=int, default=2000, help="Interval between video recordings (in steps).")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU pipeline.")
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
# append RSL-RL cli arguments
cli_args.add_rsl_rl_args(parser)
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
args_cli = parser.parse_args()
# load cheaper kit config in headless
if args_cli.headless:
app_experience = f"{os.environ['EXP_PATH']}/omni.isaac.sim.python.gym.headless.kit"
else:
app_experience = f"{os.environ['EXP_PATH']}/omni.isaac.sim.python.kit"
# launch omniverse app
app_launcher = AppLauncher(args_cli, experience=app_experience)
simulation_app = app_launcher.app
"""Rest everything follows."""
import os
from datetime import datetime
import gymnasium as gym
import omni.isaac.orbit_tasks # noqa: F401
import torch
from omni.isaac.orbit.envs import RLTaskEnvCfg
from omni.isaac.orbit.utils.dict import print_dict
from omni.isaac.orbit.utils.io import dump_pickle, dump_yaml
from omni.isaac.orbit_tasks.utils import get_checkpoint_path, parse_env_cfg
from omni.isaac.orbit_tasks.utils.wrappers.rsl_rl import (
RslRlOnPolicyRunnerCfg,
RslRlVecEnvWrapper,
)
from rsl_rl.runners import OnPolicyRunner
# Import extensions to set up environment tasks
import orbit.maze # noqa: F401 TODO: import orbit.<your_extension_name>
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.benchmark = False
def main():
"""Train with RSL-RL agent."""
# parse configuration
env_cfg: RLTaskEnvCfg = parse_env_cfg(args_cli.task, use_gpu=not args_cli.cpu, num_envs=args_cli.num_envs)
agent_cfg: RslRlOnPolicyRunnerCfg = cli_args.parse_rsl_rl_cfg(args_cli.task, args_cli)
# specify directory for logging experiments
log_root_path = os.path.join("logs", "rsl_rl", agent_cfg.experiment_name)
log_root_path = os.path.abspath(log_root_path)
print(f"[INFO] Logging experiment in directory: {log_root_path}")
# specify directory for logging runs: {time-stamp}_{run_name}
log_dir = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
if agent_cfg.run_name:
log_dir += f"_{agent_cfg.run_name}"
log_dir = os.path.join(log_root_path, log_dir)
# create isaac environment
env = gym.make(args_cli.task, cfg=env_cfg, render_mode="rgb_array" if args_cli.video else None)
# wrap for video recording
if args_cli.video:
video_kwargs = {
"video_folder": os.path.join(log_dir, "videos"),
"step_trigger": lambda step: step % args_cli.video_interval == 0,
"video_length": args_cli.video_length,
"disable_logger": True,
}
print("[INFO] Recording videos during training.")
print_dict(video_kwargs, nesting=4)
env = gym.wrappers.RecordVideo(env, **video_kwargs)
# wrap around environment for rsl-rl
env = RslRlVecEnvWrapper(env)
# create runner from rsl-rl
runner = OnPolicyRunner(env, agent_cfg.to_dict(), log_dir=log_dir, device=agent_cfg.device)
# write git state to logs
runner.add_git_repo_to_log(__file__)
# save resume path before creating a new log_dir
if agent_cfg.resume:
# get path to previous checkpoint
resume_path = get_checkpoint_path(log_root_path, agent_cfg.load_run, agent_cfg.load_checkpoint)
print(f"[INFO]: Loading model checkpoint from: {resume_path}")
# load previously trained model
runner.load(resume_path)
# set seed of the environment
env.seed(agent_cfg.seed)
# dump the configuration into log-directory
dump_yaml(os.path.join(log_dir, "params", "env.yaml"), env_cfg)
dump_yaml(os.path.join(log_dir, "params", "agent.yaml"), agent_cfg)
dump_pickle(os.path.join(log_dir, "params", "env.pkl"), env_cfg)
dump_pickle(os.path.join(log_dir, "params", "agent.pkl"), agent_cfg)
# run training
runner.learn(num_learning_iterations=agent_cfg.max_iterations, init_at_random_ep_len=True)
# close the simulator
env.close()
if __name__ == "__main__":
# run the main execution
main()
# close sim app
simulation_app.close()
Helbling-Technik/orbit.maze/scripts/sb3/play.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Script to play a checkpoint if an RL agent from Stable-Baselines3."""
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Play a checkpoint of an RL agent from Stable-Baselines3.")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU pipeline.")
parser.add_argument(
"--disable_fabric", action="store_true", default=False, help="Disable fabric and use USD I/O operations."
)
parser.add_argument("--num_envs", type=int, default=4, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default="Isaac-Maze-v0", help="Name of the task.")
# parser.add_argument("--livestream", type=int, default="1", help="stream remotely")
parser.add_argument(
"--checkpoint",
type=str,
default="logs/sb3/Isaac-Maze-v0/2024-05-31_09-56-42/model_16384000_steps.zip",
help="Path to model checkpoint.",
)
parser.add_argument(
"--use_last_checkpoint",
action="store_true",
help="When no checkpoint provided, use the last saved model. Otherwise use the best saved model.",
)
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import numpy as np
import os
import torch
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import VecNormalize
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils.parse_cfg import get_checkpoint_path, load_cfg_from_registry, parse_env_cfg
from omni.isaac.orbit_tasks.utils.wrappers.sb3 import Sb3VecEnvWrapper, process_sb3_cfg
import orbit.maze
def main():
"""Play with stable-baselines agent."""
# parse configuration
env_cfg = parse_env_cfg(
args_cli.task, use_gpu=not args_cli.cpu, num_envs=args_cli.num_envs, use_fabric=not args_cli.disable_fabric
)
agent_cfg = load_cfg_from_registry(args_cli.task, "sb3_cfg_entry_point")
# post-process agent configuration
agent_cfg = process_sb3_cfg(agent_cfg)
# create isaac environment
env = gym.make(args_cli.task, cfg=env_cfg)
# wrap around environment for stable baselines
env = Sb3VecEnvWrapper(env)
# normalize environment (if needed)
if "normalize_input" in agent_cfg:
env = VecNormalize(
env,
training=True,
norm_obs="normalize_input" in agent_cfg and agent_cfg.pop("normalize_input"),
norm_reward="normalize_value" in agent_cfg and agent_cfg.pop("normalize_value"),
clip_obs="clip_obs" in agent_cfg and agent_cfg.pop("clip_obs"),
gamma=agent_cfg["gamma"],
clip_reward=np.inf,
)
# directory for logging into
log_root_path = os.path.join("logs", "sb3", args_cli.task)
log_root_path = os.path.abspath(log_root_path)
# check checkpoint is valid
if args_cli.checkpoint is None:
if args_cli.use_last_checkpoint:
checkpoint = "model_.*.zip"
else:
checkpoint = "model.zip"
checkpoint_path = get_checkpoint_path(log_root_path, ".*", checkpoint)
else:
checkpoint_path = args_cli.checkpoint
# create agent from stable baselines
print(f"Loading checkpoint from: {checkpoint_path}")
agent = PPO.load(checkpoint_path, env, print_system_info=True)
# reset environment
obs = env.reset()
# simulate environment
while simulation_app.is_running():
# run everything in inference mode
with torch.inference_mode():
# agent stepping
actions, _ = agent.predict(obs, deterministic=True)
# env stepping
obs, _, _, _ = env.step(actions)
# close the simulator
env.close()
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
Helbling-Technik/orbit.maze/scripts/sb3/train.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Script to train RL agent with Stable Baselines3.
Since Stable-Baselines3 does not support buffers living on GPU directly,
we recommend using a smaller number of environments. Otherwise,
there will be significant overhead in GPU->CPU transfer.
"""
"""Launch Isaac Sim Simulator first."""
import argparse
import os
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Train an RL agent with Stable-Baselines3.")
parser.add_argument("--video", action="store_true", default=False, help="Record videos during training.")
parser.add_argument("--video_length", type=int, default=200, help="Length of the recorded video (in steps).")
parser.add_argument("--video_interval", type=int, default=2000, help="Interval between video recordings (in steps).")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU pipeline.")
parser.add_argument(
"--disable_fabric", action="store_true", default=False, help="Disable fabric and use USD I/O operations."
)
parser.add_argument("--num_envs", type=int, default=4, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default="Isaac-Maze-v0", help="Name of the task.")
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
# parser.add_argument(
# "--model_path",
# type=str,
# default="logs/sb3/Isaac-Maze-v0/2024-05-27-GroundTruthModel/model_81920000_steps.zip",
# )
parser.add_argument("--model_path", type=str, default=None, help="Path to the existing model to continue training")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import numpy as np
import os
from datetime import datetime
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import CheckpointCallback
from stable_baselines3.common.logger import configure
from stable_baselines3.common.vec_env import VecNormalize
from omni.isaac.orbit.utils.dict import print_dict
from omni.isaac.orbit.utils.io import dump_pickle, dump_yaml
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils import load_cfg_from_registry, parse_env_cfg
from omni.isaac.orbit_tasks.utils.wrappers.sb3 import Sb3VecEnvWrapper, process_sb3_cfg
import orbit.maze # noqa: F401 TODO: import orbit.<your_extension_name>
def main():
"""Train with stable-baselines agent."""
# parse configuration
env_cfg = parse_env_cfg(
args_cli.task, use_gpu=not args_cli.cpu, num_envs=args_cli.num_envs, use_fabric=not args_cli.disable_fabric
)
agent_cfg = load_cfg_from_registry(args_cli.task, "sb3_cfg_entry_point")
# override configuration with command line arguments
if args_cli.seed is not None:
agent_cfg["seed"] = args_cli.seed
# directory for logging into
log_dir = os.path.join("logs", "sb3", args_cli.task, datetime.now().strftime("%Y-%m-%d_%H-%M-%S"))
# dump the configuration into log-directory
dump_yaml(os.path.join(log_dir, "params", "env.yaml"), env_cfg)
dump_yaml(os.path.join(log_dir, "params", "agent.yaml"), agent_cfg)
dump_pickle(os.path.join(log_dir, "params", "env.pkl"), env_cfg)
dump_pickle(os.path.join(log_dir, "params", "agent.pkl"), agent_cfg)
# post-process agent configuration
agent_cfg = process_sb3_cfg(agent_cfg)
# read configurations about the agent-training
policy_arch = agent_cfg.pop("policy")
n_timesteps = agent_cfg.pop("n_timesteps")
# create isaac environment
env = gym.make(args_cli.task, cfg=env_cfg, render_mode="rgb_array" if args_cli.video else None)
# wrap for video recording
if args_cli.video:
video_kwargs = {
"video_folder": os.path.join(log_dir, "videos"),
"step_trigger": lambda step: step % args_cli.video_interval == 0,
"video_length": args_cli.video_length,
"disable_logger": True,
}
print("[INFO] Recording videos during training.")
print_dict(video_kwargs, nesting=4)
env = gym.wrappers.RecordVideo(env, **video_kwargs)
# wrap around environment for stable baselines
env = Sb3VecEnvWrapper(env)
# set the seed
env.seed(seed=agent_cfg["seed"])
if "normalize_input" in agent_cfg:
env = VecNormalize(
env,
training=True,
norm_obs="normalize_input" in agent_cfg and agent_cfg.pop("normalize_input"),
norm_reward="normalize_value" in agent_cfg and agent_cfg.pop("normalize_value"),
clip_obs="clip_obs" in agent_cfg and agent_cfg.pop("clip_obs"),
gamma=agent_cfg["gamma"],
clip_reward=np.inf,
)
    # Check if a model path is provided
    agent = None
    if args_cli.model_path:
        model_path = os.path.abspath(args_cli.model_path)
        if os.path.isfile(model_path):
            # Load the existing model to continue training
            agent = PPO.load(model_path, env=env)
            print(f"[INFO] Loaded existing model from {model_path}")
        else:
            print(f"[WARN] No model found at {model_path}, creating a new agent instead.")
    if agent is None:
        # Create a new agent from scratch
        agent = PPO(policy_arch, env, verbose=1, **agent_cfg)
# configure the logger
new_logger = configure(log_dir, ["stdout", "tensorboard"])
agent.set_logger(new_logger)
# callbacks for agent
checkpoint_callback = CheckpointCallback(save_freq=1000, save_path=log_dir, name_prefix="model", verbose=2)
# train the agent
agent.learn(total_timesteps=n_timesteps, callback=checkpoint_callback)
# save the final model
agent.save(os.path.join(log_dir, "model"))
# close the simulator
env.close()
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
Helbling-Technik/orbit.maze/config/extension.toml
[package]
# Semantic Versioning is used: https://semver.org/
version = "0.1.0"
# Description
title = "Maze" # TODO: Please adapt to your title.
description="Maze Task for RL Learning" #TODO: Please adapt to your description.
repository = "https://github.com/kevchef/orbit.maze.git" # TODO: Please adapt to your repository.
keywords = ["extension", "maze","task","RL", "orbit"] # TODO: Please adapt to your keywords.
category = "orbit"
readme = "README.md"
[dependencies]
"omni.kit.uiapp" = {}
"omni.isaac.orbit" = {}
"omni.isaac.orbit_assets" = {}
"omni.isaac.orbit_tasks" = {}
"omni.isaac.core" = {}
"omni.isaac.gym" = {}
"omni.replicator.isaac" = {}
# Note: You can add additional dependencies here for your extension.
# For example, if you want to use the omni.kit module, you can add it as a dependency:
# "omni.kit" = {}
[[python.module]]
name = "orbit.maze" # TODO: Please adapt to your package name.
Helbling-Technik/orbit.maze/orbit/maze/__init__.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
Python module serving as a project/extension template.
"""
# Register Gym environments.
from .tasks import *
# Register UI extensions.
from .ui_extension_example import *
Helbling-Technik/orbit.maze/orbit/maze/ui_extension_example.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
import omni.ext
import omni.ui as ui
# Functions and vars are available to other extensions as usual in Python: `example.python_ext.some_public_function(x)`
def some_public_function(x: int):
print("[orbit.ext_template] some_public_function was called with x: ", x)
return x**x
# Any class derived from `omni.ext.IExt` in top level module (defined in `python.modules` of `extension.toml`) will be
# instantiated when extension gets enabled and `on_startup(ext_id)` will be called. Later when extension gets disabled
# on_shutdown() is called.
class ExampleExtension(omni.ext.IExt):
# ext_id is current extension id. It can be used with extension manager to query additional information, like where
# this extension is located on filesystem.
def on_startup(self, ext_id):
print("[orbit.ext_template] startup")
self._count = 0
self._window = ui.Window("My Window", width=300, height=300)
with self._window.frame:
with ui.VStack():
label = ui.Label("")
def on_click():
self._count += 1
label.text = f"count: {self._count}"
def on_reset():
self._count = 0
label.text = "empty"
on_reset()
with ui.HStack():
ui.Button("Add", clicked_fn=on_click)
ui.Button("Reset", clicked_fn=on_reset)
def on_shutdown(self):
print("[orbit.ext_template] shutdown")
Helbling-Technik/orbit.maze/orbit/maze/tasks/__init__.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Package containing task implementations for various robotic environments."""
import os
import toml
# Conveniences to other module directories via relative paths
ORBIT_TASKS_EXT_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), "../../../"))
"""Path to the extension source directory."""
ORBIT_TASKS_METADATA = toml.load(os.path.join(ORBIT_TASKS_EXT_DIR, "config", "extension.toml"))
"""Extension metadata dictionary parsed from the extension.toml file."""
# Configure the module-level variables
__version__ = ORBIT_TASKS_METADATA["package"]["version"]
##
# Register Gym environments.
##
from omni.isaac.orbit_tasks.utils import import_packages
# The blacklist is used to prevent importing configs from sub-packages
_BLACKLIST_PKGS = ["utils"]
# Import all configs in this package
import_packages(__name__, _BLACKLIST_PKGS)
Helbling-Technik/orbit.maze/orbit/maze/tasks/maze/__init__.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
Maze balancing environment.
"""
import gymnasium as gym
from . import agents
from .maze_env_cfg import MazeEnvCfg
##
# Register Gym environments.
##
gym.register(
id="Isaac-Maze-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
disable_env_checker=True,
kwargs={
"env_cfg_entry_point": MazeEnvCfg,
"skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
"sb3_cfg_entry_point": f"{agents.__name__}:sb3_ppo_cfg.yaml",
},
)
Helbling-Technik/orbit.maze/orbit/maze/tasks/maze/maze_env_cfg.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
import math
import torch
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.assets import ArticulationCfg, AssetBaseCfg, RigidObjectCfg
from omni.isaac.orbit.envs import RLTaskEnvCfg
from omni.isaac.orbit.sensors import CameraCfg
from omni.isaac.orbit.managers import EventTermCfg as EventTerm
from omni.isaac.orbit.managers import ObservationGroupCfg as ObsGroup
from omni.isaac.orbit.managers import ObservationTermCfg as ObsTerm
from omni.isaac.orbit.managers import RewardTermCfg as RewTerm
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.managers import TerminationTermCfg as DoneTerm
from omni.isaac.orbit.scene import InteractiveSceneCfg
from omni.isaac.orbit.actuators import ImplicitActuatorCfg
from omni.isaac.orbit.utils import configclass
import orbit.maze.tasks.maze.mdp as mdp
import os
##
# Pre-defined configs
##
# from omni.isaac.orbit_assets.maze import MAZE_CFG # isort:skip
# from maze import MAZE_CFG # isort:skip
# Absolute path of the current script
current_script_path = os.path.abspath(__file__)
# Absolute path of the project root (four levels up from this file's directory)
project_root = os.path.abspath(os.path.join(os.path.dirname(current_script_path), "../../../.."))
MAZE_CFG = ArticulationCfg(
spawn=sim_utils.UsdFileCfg(
usd_path=os.path.join(project_root, "usds/Maze_Simple.usd"),
rigid_props=sim_utils.RigidBodyPropertiesCfg(
rigid_body_enabled=True,
max_linear_velocity=1000.0,
max_angular_velocity=1000.0,
max_depenetration_velocity=100.0,
enable_gyroscopic_forces=True,
),
articulation_props=sim_utils.ArticulationRootPropertiesCfg(
enabled_self_collisions=False,
solver_position_iteration_count=4,
solver_velocity_iteration_count=0,
sleep_threshold=0.005,
stabilization_threshold=0.001,
),
),
init_state=ArticulationCfg.InitialStateCfg(
pos=(0.0, 0.0, 0.0), joint_pos={"OuterDOF_RevoluteJoint": 0.0, "InnerDOF_RevoluteJoint": 0.0}
),
actuators={
"outer_actuator": ImplicitActuatorCfg(
joint_names_expr=["OuterDOF_RevoluteJoint"],
effort_limit=0.01, # 5g * 9.81 * 0.15m = 0.007357
velocity_limit=1.0 / math.pi,
stiffness=0.0,
damping=10.0,
),
"inner_actuator": ImplicitActuatorCfg(
joint_names_expr=["InnerDOF_RevoluteJoint"],
effort_limit=0.01, # 5g * 9.81 * 0.15m = 0.007357
velocity_limit=1.0 / math.pi,
stiffness=0.0,
damping=10.0,
),
},
)
##
# Scene definition
##
@configclass
class MazeSceneCfg(InteractiveSceneCfg):
"""Configuration for a cart-pole scene."""
# ground plane
ground = AssetBaseCfg(
prim_path="/World/ground",
spawn=sim_utils.GroundPlaneCfg(size=(100.0, 100.0)),
)
    # maze articulation
robot: ArticulationCfg = MAZE_CFG.replace(prim_path="{ENV_REGEX_NS}/Labyrinth")
# Sphere with collision enabled but not actuated
sphere = RigidObjectCfg(
prim_path="{ENV_REGEX_NS}/sphere",
spawn=sim_utils.SphereCfg(
radius=0.005,
mass_props=sim_utils.MassPropertiesCfg(density=7850),
rigid_props=sim_utils.RigidBodyPropertiesCfg(rigid_body_enabled=True),
collision_props=sim_utils.CollisionPropertiesCfg(collision_enabled=True),
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(0.9, 0.9, 0.9), metallic=0.8),
),
init_state=RigidObjectCfg.InitialStateCfg(pos=(0.0, 0.0, 0.11)),
)
dome_light = AssetBaseCfg(
prim_path="/World/DomeLight",
spawn=sim_utils.DomeLightCfg(color=(0.9, 0.9, 0.9), intensity=1000.0),
)
##
# MDP settings
##
@configclass
class CommandsCfg:
"""Command terms for the MDP."""
# no commands for this MDP
null = mdp.NullCommandCfg()
# sphere_cmd_pos = mdp.UniformPose2dCommandCfg(
# asset_name="sphere",
# simple_heading=True,
# resampling_time_range=(10000000000, 10000000000),
# debug_vis=False,
# ranges=mdp.UniformPose2dCommandCfg.Ranges(pos_x=(-0.05, 0.05), pos_y=(-0.05, 0.05)),
# )
@configclass
class ActionsCfg:
"""Action specifications for the MDP."""
outer_joint_effort = mdp.JointEffortActionCfg(asset_name="robot", joint_names=["OuterDOF_RevoluteJoint"], scale=0.1)
inner_joint_effort = mdp.JointEffortActionCfg(asset_name="robot", joint_names=["InnerDOF_RevoluteJoint"], scale=0.1)
@configclass
class ObservationsCfg:
"""Observation specifications for the MDP."""
@configclass
class PolicyCfg(ObsGroup):
"""Observations for policy group."""
# observation terms (order preserved)
joint_pos = ObsTerm(func=mdp.joint_pos_rel)
joint_vel = ObsTerm(func=mdp.joint_vel_rel)
sphere_pos = ObsTerm(
func=mdp.root_pos_w,
params={"asset_cfg": SceneEntityCfg("sphere")},
)
sphere_lin_vel = ObsTerm(
func=mdp.root_lin_vel_w,
params={"asset_cfg": SceneEntityCfg("sphere")},
)
target_pos_rel = ObsTerm(
func=mdp.get_target_pos,
params={
"asset_cfg": SceneEntityCfg("sphere"),
"target": {"x": 0.0, "y": 0.0},
},
)
# target_sphere_pos = ObsTerm(
# func=mdp.get_generated_commands_xy,
# params={"command_name": "sphere_cmd_pos"},
# )
def __post_init__(self) -> None:
self.enable_corruption = False
self.concatenate_terms = True
# observation groups
policy: PolicyCfg = PolicyCfg()
@configclass
class EventCfg:
"""Configuration for events."""
# reset
reset_outer_joint = EventTerm(
func=mdp.reset_joints_by_offset,
mode="reset",
params={
"asset_cfg": SceneEntityCfg("robot", joint_names=["OuterDOF_RevoluteJoint"]),
"position_range": (-0.01 * math.pi, 0.01 * math.pi),
"velocity_range": (-0.01 * math.pi, 0.01 * math.pi),
},
)
reset_inner_joint = EventTerm(
func=mdp.reset_joints_by_offset,
mode="reset",
params={
"asset_cfg": SceneEntityCfg("robot", joint_names=["InnerDOF_RevoluteJoint"]),
"position_range": (-0.01 * math.pi, 0.01 * math.pi),
"velocity_range": (-0.01 * math.pi, 0.01 * math.pi),
},
)
reset_sphere_pos = EventTerm(
func=mdp.reset_root_state_uniform,
mode="reset",
params={
"asset_cfg": SceneEntityCfg("sphere"),
"pose_range": {"x": (-0.05, 0.05), "y": (-0.05, 0.05)},
"velocity_range": {},
},
)
@configclass
class RewardsCfg:
"""Reward terms for the MDP."""
# (1) Constant running reward
alive = RewTerm(func=mdp.is_alive, weight=0.1)
# (2) Failure penalty
terminating = RewTerm(func=mdp.is_terminated, weight=-2.0)
# (3) Primary task: keep sphere in center
sphere_pos = RewTerm(
func=mdp.root_xypos_target_l2,
weight=-5000.0,
params={
"asset_cfg": SceneEntityCfg("sphere"),
"target": {"x": 0.0, "y": 0.0},
},
)
# sphere_to_target = RewTerm(
# func=mdp.object_goal_distance_l2,
# params={"command_name": "sphere_cmd_pos", "object_cfg": SceneEntityCfg("sphere")},
# weight=-5000.0,
# )
outer_joint_vel = RewTerm(
func=mdp.joint_vel_l1,
weight=-0.01,
params={"asset_cfg": SceneEntityCfg("robot", joint_names=["OuterDOF_RevoluteJoint"])},
)
inner_joint_vel = RewTerm(
func=mdp.joint_vel_l1,
weight=-0.01,
params={"asset_cfg": SceneEntityCfg("robot", joint_names=["InnerDOF_RevoluteJoint"])},
)
@configclass
class TerminationsCfg:
"""Termination terms for the MDP."""
# (1) Time out
time_out = DoneTerm(func=mdp.time_out, time_out=True)
# (2) Sphere off maze
sphere_on_ground = DoneTerm(
func=mdp.root_height_below_minimum,
params={"asset_cfg": SceneEntityCfg("sphere"), "minimum_height": 0.01},
)
@configclass
class CurriculumCfg:
"""Configuration for the curriculum."""
pass
##
# Environment configuration
##
@configclass
class MazeEnvCfg(RLTaskEnvCfg):
"""Configuration for the locomotion velocity-tracking environment."""
# Scene settings
scene: MazeSceneCfg = MazeSceneCfg(num_envs=16, env_spacing=0.5)
# Basic settings
observations: ObservationsCfg = ObservationsCfg()
actions: ActionsCfg = ActionsCfg()
events: EventCfg = EventCfg()
# MDP settings
curriculum: CurriculumCfg = CurriculumCfg()
rewards: RewardsCfg = RewardsCfg()
terminations: TerminationsCfg = TerminationsCfg()
# No command generator
commands: CommandsCfg = CommandsCfg()
# Post initialization
def __post_init__(self) -> None:
"""Post initialization."""
# general settings
self.decimation = 2
self.episode_length_s = 10
# viewer settings
self.viewer.eye = (1, 1, 1.5)
# simulation settings
self.sim.dt = 1 / 200
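        # effective control period: decimation * sim.dt = 2 / 200 s = 10 ms (100 Hz policy rate)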
Helbling-Technik/orbit.maze/orbit/maze/tasks/maze/agents/rsl_rl_ppo_cfg.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit_tasks.utils.wrappers.rsl_rl import (
RslRlOnPolicyRunnerCfg,
RslRlPpoActorCriticCfg,
RslRlPpoAlgorithmCfg,
)
@configclass
class CartpolePPORunnerCfg(RslRlOnPolicyRunnerCfg):
num_steps_per_env = 16
max_iterations = 150
save_interval = 50
experiment_name = "cartpole"
empirical_normalization = False
policy = RslRlPpoActorCriticCfg(
init_noise_std=1.0,
actor_hidden_dims=[32, 32],
critic_hidden_dims=[32, 32],
activation="elu",
)
algorithm = RslRlPpoAlgorithmCfg(
value_loss_coef=1.0,
use_clipped_value_loss=True,
clip_param=0.2,
entropy_coef=0.005,
num_learning_epochs=5,
num_mini_batches=4,
learning_rate=1.0e-3,
schedule="adaptive",
gamma=0.99,
lam=0.95,
desired_kl=0.01,
max_grad_norm=1.0,
)
Helbling-Technik/orbit.maze/orbit/maze/tasks/maze/agents/skrl_ppo_cfg.yaml
seed: 42
# Models are instantiated using skrl's model instantiator utility
# https://skrl.readthedocs.io/en/develop/modules/skrl.utils.model_instantiators.html
models:
separate: False
policy: # see skrl.utils.model_instantiators.gaussian_model for parameter details
clip_actions: True
clip_log_std: True
initial_log_std: 0
min_log_std: -20.0
max_log_std: 2.0
input_shape: "Shape.STATES"
hiddens: [32, 32]
hidden_activation: ["elu", "elu"]
output_shape: "Shape.ACTIONS"
output_activation: "tanh"
output_scale: 1.0
value: # see skrl.utils.model_instantiators.deterministic_model for parameter details
clip_actions: False
input_shape: "Shape.STATES"
hiddens: [32, 32]
hidden_activation: ["elu", "elu"]
output_shape: "Shape.ONE"
output_activation: ""
output_scale: 1.0
# PPO agent configuration (field names are from PPO_DEFAULT_CONFIG)
# https://skrl.readthedocs.io/en/latest/modules/skrl.agents.ppo.html
agent:
rollouts: 16
learning_epochs: 5
mini_batches: 4
discount_factor: 0.99
lambda: 0.95
learning_rate: 1.e-3
learning_rate_scheduler: "KLAdaptiveLR"
learning_rate_scheduler_kwargs:
kl_threshold: 0.01
state_preprocessor: "RunningStandardScaler"
state_preprocessor_kwargs: null
value_preprocessor: "RunningStandardScaler"
value_preprocessor_kwargs: null
random_timesteps: 0
learning_starts: 0
grad_norm_clip: 1.0
ratio_clip: 0.2
value_clip: 0.2
clip_predicted_values: True
entropy_loss_scale: 0.0
value_loss_scale: 2.0
kl_threshold: 0
rewards_shaper_scale: 1.0
# logging and checkpoint
experiment:
directory: "cartpole"
experiment_name: ""
write_interval: 12
checkpoint_interval: 120
# Sequential trainer
# https://skrl.readthedocs.io/en/latest/modules/skrl.trainers.sequential.html
trainer:
timesteps: 2400
Helbling-Technik/orbit.maze/orbit/maze/tasks/maze/agents/sb3_ppo_cfg.yaml
# Reference: https://github.com/DLR-RM/rl-baselines3-zoo/blob/master/hyperparams/ppo.yml#L32
seed: 42
n_timesteps: !!float 1e9
policy: 'MlpPolicy'
n_steps: 16
batch_size: 4096
gae_lambda: 0.95
gamma: 0.99
n_epochs: 20
ent_coef: 0.01
learning_rate: !!float 3e-4
clip_range: !!float 0.2
policy_kwargs: "dict(
activation_fn=nn.ELU,
net_arch=[32, 32],
squash_output=False,
)"
vf_coef: 1.0
max_grad_norm: 1.0
Helbling-Technik/orbit.maze/orbit/maze/tasks/maze/agents/__init__.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from . import rsl_rl_ppo_cfg # noqa: F401, F403
Helbling-Technik/orbit.maze/orbit/maze/tasks/maze/agents/rl_games_ppo_cfg.yaml
params:
seed: 42
# environment wrapper clipping
env:
# added to the wrapper
clip_observations: 5.0
# can make custom wrapper?
clip_actions: 1.0
algo:
name: a2c_continuous
model:
name: continuous_a2c_logstd
# doesn't have this fine grained control but made it close
network:
name: actor_critic
separate: False
space:
continuous:
mu_activation: None
sigma_activation: None
mu_init:
name: default
sigma_init:
name: const_initializer
val: 0
fixed_sigma: True
mlp:
units: [32, 32]
activation: elu
d2rl: False
initializer:
name: default
regularizer:
name: None
load_checkpoint: False # flag which sets whether to load the checkpoint
load_path: '' # path to the checkpoint to load
config:
name: cartpole
env_name: rlgpu
device: 'cuda:0'
device_name: 'cuda:0'
multi_gpu: False
ppo: True
mixed_precision: False
normalize_input: False
normalize_value: False
num_actors: -1 # configured from the script (based on num_envs)
reward_shaper:
scale_value: 1.0
normalize_advantage: False
gamma: 0.99
tau: 0.95
learning_rate: 3e-4
lr_schedule: adaptive
kl_threshold: 0.008
score_to_win: 20000
max_epochs: 150
save_best_after: 50
save_frequency: 25
grad_norm: 1.0
entropy_coef: 0.0
truncate_grads: True
e_clip: 0.2
horizon_length: 16
minibatch_size: 8192
mini_epochs: 8
critic_coef: 4
clip_value: True
seq_length: 4
bounds_loss_coef: 0.0001
Helbling-Technik/orbit.maze/orbit/maze/tasks/maze/mdp/__init__.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""This sub-module contains the functions that are specific to the cartpole environments."""
from omni.isaac.orbit.envs.mdp import * # noqa: F401, F403
from .rewards import * # noqa: F401, F403
from .observations import * # noqa: F401, F403
from .events import * # noqa: F401, F403
Helbling-Technik/orbit.maze/orbit/maze/tasks/maze/mdp/rewards.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import torch
from typing import TYPE_CHECKING
from omni.isaac.orbit.assets import Articulation, RigidObject
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.utils.math import wrap_to_pi
if TYPE_CHECKING:
from omni.isaac.orbit.envs import RLTaskEnv
def joint_pos_target_l2(env: RLTaskEnv, target: float, asset_cfg: SceneEntityCfg) -> torch.Tensor:
"""Penalize joint position deviation from a target value."""
# extract the used quantities (to enable type-hinting)
asset: Articulation = env.scene[asset_cfg.name]
# wrap the joint positions to (-pi, pi)
joint_pos = wrap_to_pi(asset.data.joint_pos[:, asset_cfg.joint_ids])
# compute the reward
# print("joint pos reward: ", torch.sum(torch.square(joint_pos - target), dim=1))
return torch.sum(torch.square(joint_pos - target), dim=1)
def root_pos_target_l2(env: RLTaskEnv, target: dict[str, float], asset_cfg: SceneEntityCfg) -> torch.Tensor:
"""Penalize joint position deviation from a target value."""
# extract the used quantities (to enable type-hinting)
asset: RigidObject = env.scene[asset_cfg.name]
target_list = torch.tensor([target.get(key, 0.0) for key in ["x", "y", "z"]], device=asset.data.root_pos_w.device)
root_pos = asset.data.root_pos_w - env.scene.env_origins
# compute the reward
return torch.sum(torch.square(root_pos - target_list), dim=1)
def root_xypos_target_l2(env: RLTaskEnv, target: dict[str, float], asset_cfg: SceneEntityCfg) -> torch.Tensor:
"""Penalize joint position deviation from a target value."""
# extract the used quantities (to enable type-hinting)
asset: RigidObject = env.scene[asset_cfg.name]
target_tensor = torch.tensor([target.get(key, 0.0) for key in ["x", "y"]], device=asset.data.root_pos_w.device)
root_pos = asset.data.root_pos_w - env.scene.env_origins
# compute the reward
# xy_reward_l2 = (torch.sum(torch.square(root_pos[:,:2] - target_tensor), dim=1) <= 0.0025).float()*2 - 1
xy_reward_l2 = torch.sum(torch.square(root_pos[:, :2] - target_tensor), dim=1)
# print("sphere_xypos_rewards: ", xy_reward_l2.tolist())
return xy_reward_l2
def object_goal_distance_l2(
env: RLTaskEnv,
command_name: str,
object_cfg: SceneEntityCfg = SceneEntityCfg("sphere"),
) -> torch.Tensor:
"""Reward the agent for tracking the goal pose using L2-kernel."""
# extract the used quantities (to enable type-hinting)
object: RigidObject = env.scene[object_cfg.name]
command = env.command_manager.get_command(command_name)
# command_pos is difference between target in env frame and object in env frame
command_pos = command[:, :2]
object_pos = object.data.root_pos_w - env.scene.env_origins
# print("target_pos: ", command_pos)
# print("object_pos: ", object_pos[:, :2])
# distance of the target to the object: (num_envs,)
distance = torch.norm(command_pos, dim=1)
# print("distance: ", distance)
    # smaller distance is better: combine with a negative weight to reward reaching the target
return distance
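
# Example wiring (a hypothetical sketch, not part of the original file): these functions are meant
# to be used as reward terms inside a RewardsCfg. The term name, weight, and entity names below
# are illustrative assumptions.
#
# from omni.isaac.orbit.managers import RewardTermCfg as RewTerm
#
# sphere_to_goal = RewTerm(
#     func=object_goal_distance_l2,
#     weight=-1.0,  # negative weight: a smaller distance yields a higher reward
#     params={"command_name": "target_pose", "object_cfg": SceneEntityCfg("sphere")},
# )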
| 3,207 |
Python
| 43.555555 | 118 | 0.696913 |
Helbling-Technik/orbit.maze/orbit/maze/tasks/maze/mdp/events.py
|
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import torch
from typing import TYPE_CHECKING
import omni.isaac.orbit.utils.math as math_utils
from omni.isaac.orbit.assets import Articulation, RigidObject
from omni.isaac.orbit.managers import SceneEntityCfg
if TYPE_CHECKING:
from omni.isaac.orbit.envs import RLTaskEnv
def set_random_target_pos(
env: RLTaskEnv,
env_ids: torch.Tensor,
pose_range: dict[str, tuple[float, float]],
asset_cfg: SceneEntityCfg = SceneEntityCfg("robot"),
):
    """Sample a random target position around each environment origin within ``pose_range``."""
    # extract the used quantities (to enable type-hinting)
    asset: RigidObject | Articulation = env.scene[asset_cfg.name]
    # sample uniform offsets for x, y, z (keys missing from pose_range default to a zero range)
    range_list = [pose_range.get(key, (0.0, 0.0)) for key in ["x", "y", "z"]]
    ranges = torch.tensor(range_list, device=asset.device)
    rand_samples = math_utils.sample_uniform(ranges[:, 0], ranges[:, 1], (len(env_ids), 3), device=asset.device)
    target_positions = env.scene.env_origins[env_ids] + rand_samples
return target_positions
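
# Example wiring (a hypothetical sketch, not part of the original file): note that this term
# *returns* the sampled positions instead of setting simulator state, so the caller (e.g. a
# command or event term) must store the result. The mode, ranges, and entity name below are
# illustrative assumptions.
#
# from omni.isaac.orbit.managers import RandomizationTermCfg as RandTerm
#
# randomize_target = RandTerm(
#     func=set_random_target_pos,
#     mode="reset",
#     params={"pose_range": {"x": (-0.1, 0.1), "y": (-0.1, 0.1)}, "asset_cfg": SceneEntityCfg("target")},
# )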
| 1,111 |
Python
| 29.888888 | 112 | 0.713771 |
Helbling-Technik/orbit.maze/orbit/maze/tasks/maze/mdp/observations.py
|
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import torch
from typing import TYPE_CHECKING
from omni.isaac.orbit.sensors import Camera
from omni.isaac.orbit.assets import Articulation, RigidObject
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.utils.math import wrap_to_pi
if TYPE_CHECKING:
from omni.isaac.orbit.envs import RLTaskEnv
def camera_image(env: RLTaskEnv, asset_cfg: SceneEntityCfg) -> torch.Tensor:
"""Camera image from top camera."""
# Extract the used quantities (to enable type-hinting)
asset: Camera = env.scene[asset_cfg.name]
# Get the RGBA image tensor
rgba_tensor = asset.data.output["rgb"]
# Check the shape of the input tensor
    assert rgba_tensor.dim() == 4 and rgba_tensor.size(-1) == 4, "Expected an RGBA tensor of shape (N, H, W, 4)"
# Ensure the tensor is on the correct device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
rgba_tensor = rgba_tensor.to(device)
# Convert the RGBA tensor to grayscale
# Using the weights for R, G, and B, and ignoring the Alpha channel
weights = torch.tensor([0.2989, 0.5870, 0.1140, 0.0], device=device).view(1, 1, 1, 4)
grayscale_tensor = (rgba_tensor * weights).sum(dim=-1)
# Flatten each image to a 1D tensor
n_envs = grayscale_tensor.size(0)
n_pixels = grayscale_tensor.size(1) * grayscale_tensor.size(2)
grayscale_tensor_flattened = grayscale_tensor.view(n_envs, n_pixels)
return grayscale_tensor_flattened
def get_target_pos(env: RLTaskEnv, target: dict[str, float], asset_cfg: SceneEntityCfg) -> torch.Tensor:
"""Penalize joint position deviation from a target value."""
# extract the used quantities (to enable type-hinting)
# asset: RigidObject = env.scene[asset_cfg.name]
# target_tensor = torch.tensor([target.get(key, 0.0) for key in ["x", "y"]], device=asset.data.root_pos_w.device)
# root_pos = asset.data.root_pos_w - env.scene.env_origins
zeros_tensor = torch.zeros_like(env.scene.env_origins)
# return (zeros_tensor - root_pos)[:, :2].to(dtype=torch.float16)
return zeros_tensor[:, :2]
def get_env_pos_of_command(env: RLTaskEnv, object_cfg: SceneEntityCfg, command_name: str) -> torch.Tensor:
"""The generated command from command term in the command manager with the given name."""
"""The env frame target position can not fully be recovered as one of the terms is updated less frequently"""
object: RigidObject = env.scene[object_cfg.name]
object_pos = object.data.root_pos_w - env.scene.env_origins
commanded = env.command_manager.get_command(command_name)
target_pos_env = commanded[:, :2] + object_pos[:, :2]
print("target_pos_env_observation: ", target_pos_env[:, :2])
return target_pos_env
def get_generated_commands_xy(env: RLTaskEnv, command_name: str) -> torch.Tensor:
"""The generated command from command term in the command manager with the given name."""
commanded = env.command_manager.get_command(command_name)
return commanded[:, :2]
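
# Example wiring (a hypothetical sketch, not part of the original file): observation terms are
# registered in an ObsGroup of the environment's ObservationsCfg. The term and entity names
# below are illustrative assumptions.
#
# from omni.isaac.orbit.managers import ObservationTermCfg as ObsTerm
#
# top_image = ObsTerm(func=camera_image, params={"asset_cfg": SceneEntityCfg("camera")})
# target_xy = ObsTerm(func=get_generated_commands_xy, params={"command_name": "target_pose"})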
| 3,161 |
Python
| 41.729729 | 117 | 0.707371 |
Helbling-Technik/orbit.maze/orbit/maze/tasks/locomotion/__init__.py
|
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Locomotion environments for legged robots."""
from .velocity import * # noqa
| 205 |
Python
| 21.888886 | 56 | 0.731707 |
Helbling-Technik/orbit.maze/orbit/maze/tasks/locomotion/velocity/velocity_env_cfg.py
|
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import math
from dataclasses import MISSING
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.assets import ArticulationCfg, AssetBaseCfg
from omni.isaac.orbit.envs import RLTaskEnvCfg
from omni.isaac.orbit.managers import CurriculumTermCfg as CurrTerm
from omni.isaac.orbit.managers import ObservationGroupCfg as ObsGroup
from omni.isaac.orbit.managers import ObservationTermCfg as ObsTerm
from omni.isaac.orbit.managers import RandomizationTermCfg as RandTerm
from omni.isaac.orbit.managers import RewardTermCfg as RewTerm
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.managers import TerminationTermCfg as DoneTerm
from omni.isaac.orbit.scene import InteractiveSceneCfg
from omni.isaac.orbit.sensors import ContactSensorCfg, RayCasterCfg, patterns
from omni.isaac.orbit.terrains import TerrainImporterCfg
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit.utils.noise import AdditiveUniformNoiseCfg as Unoise
import orbit.maze.tasks.locomotion.velocity.mdp as mdp
##
# Pre-defined configs
##
from omni.isaac.orbit.terrains.config.rough import ROUGH_TERRAINS_CFG # isort: skip
##
# Scene definition
##
@configclass
class MySceneCfg(InteractiveSceneCfg):
"""Configuration for the terrain scene with a legged robot."""
# ground terrain
terrain = TerrainImporterCfg(
prim_path="/World/ground",
terrain_type="generator",
terrain_generator=ROUGH_TERRAINS_CFG,
max_init_terrain_level=5,
collision_group=-1,
physics_material=sim_utils.RigidBodyMaterialCfg(
friction_combine_mode="multiply",
restitution_combine_mode="multiply",
static_friction=1.0,
dynamic_friction=1.0,
),
visual_material=sim_utils.MdlFileCfg(
mdl_path="{NVIDIA_NUCLEUS_DIR}/Materials/Base/Architecture/Shingles_01.mdl",
project_uvw=True,
),
debug_vis=False,
)
# robots
robot: ArticulationCfg = MISSING
# sensors
height_scanner = RayCasterCfg(
prim_path="{ENV_REGEX_NS}/Robot/base",
offset=RayCasterCfg.OffsetCfg(pos=(0.0, 0.0, 20.0)),
attach_yaw_only=True,
pattern_cfg=patterns.GridPatternCfg(resolution=0.1, size=[1.6, 1.0]),
debug_vis=False,
mesh_prim_paths=["/World/ground"],
)
contact_forces = ContactSensorCfg(prim_path="{ENV_REGEX_NS}/Robot/.*", history_length=3, track_air_time=True)
# lights
light = AssetBaseCfg(
prim_path="/World/light",
spawn=sim_utils.DistantLightCfg(color=(0.75, 0.75, 0.75), intensity=3000.0),
)
sky_light = AssetBaseCfg(
prim_path="/World/skyLight",
spawn=sim_utils.DomeLightCfg(color=(0.13, 0.13, 0.13), intensity=1000.0),
)
##
# MDP settings
##
@configclass
class CommandsCfg:
"""Command specifications for the MDP."""
base_velocity = mdp.UniformVelocityCommandCfg(
asset_name="robot",
resampling_time_range=(10.0, 10.0),
rel_standing_envs=0.02,
rel_heading_envs=1.0,
heading_command=True,
heading_control_stiffness=0.5,
debug_vis=True,
ranges=mdp.UniformVelocityCommandCfg.Ranges(
lin_vel_x=(-1.0, 1.0), lin_vel_y=(-1.0, 1.0), ang_vel_z=(-1.0, 1.0), heading=(-math.pi, math.pi)
),
)
@configclass
class ActionsCfg:
"""Action specifications for the MDP."""
joint_pos = mdp.JointPositionActionCfg(asset_name="robot", joint_names=[".*"], scale=0.5, use_default_offset=True)
@configclass
class ObservationsCfg:
"""Observation specifications for the MDP."""
@configclass
class PolicyCfg(ObsGroup):
"""Observations for policy group."""
# observation terms (order preserved)
base_lin_vel = ObsTerm(func=mdp.base_lin_vel, noise=Unoise(n_min=-0.1, n_max=0.1))
base_ang_vel = ObsTerm(func=mdp.base_ang_vel, noise=Unoise(n_min=-0.2, n_max=0.2))
projected_gravity = ObsTerm(
func=mdp.projected_gravity,
noise=Unoise(n_min=-0.05, n_max=0.05),
)
velocity_commands = ObsTerm(func=mdp.generated_commands, params={"command_name": "base_velocity"})
joint_pos = ObsTerm(func=mdp.joint_pos_rel, noise=Unoise(n_min=-0.01, n_max=0.01))
joint_vel = ObsTerm(func=mdp.joint_vel_rel, noise=Unoise(n_min=-1.5, n_max=1.5))
actions = ObsTerm(func=mdp.last_action)
height_scan = ObsTerm(
func=mdp.height_scan,
params={"sensor_cfg": SceneEntityCfg("height_scanner")},
noise=Unoise(n_min=-0.1, n_max=0.1),
clip=(-1.0, 1.0),
)
def __post_init__(self):
self.enable_corruption = True
self.concatenate_terms = True
# observation groups
policy: PolicyCfg = PolicyCfg()
@configclass
class RandomizationCfg:
"""Configuration for randomization."""
# startup
physics_material = RandTerm(
func=mdp.randomize_rigid_body_material,
mode="startup",
params={
"asset_cfg": SceneEntityCfg("robot", body_names=".*"),
"static_friction_range": (0.8, 0.8),
"dynamic_friction_range": (0.6, 0.6),
"restitution_range": (0.0, 0.0),
"num_buckets": 64,
},
)
add_base_mass = RandTerm(
func=mdp.add_body_mass,
mode="startup",
params={"asset_cfg": SceneEntityCfg("robot", body_names="base"), "mass_range": (-5.0, 5.0)},
)
# reset
base_external_force_torque = RandTerm(
func=mdp.apply_external_force_torque,
mode="reset",
params={
"asset_cfg": SceneEntityCfg("robot", body_names="base"),
"force_range": (0.0, 0.0),
"torque_range": (-0.0, 0.0),
},
)
reset_base = RandTerm(
func=mdp.reset_root_state_uniform,
mode="reset",
params={
"pose_range": {"x": (-0.5, 0.5), "y": (-0.5, 0.5), "yaw": (-3.14, 3.14)},
"velocity_range": {
"x": (-0.5, 0.5),
"y": (-0.5, 0.5),
"z": (-0.5, 0.5),
"roll": (-0.5, 0.5),
"pitch": (-0.5, 0.5),
"yaw": (-0.5, 0.5),
},
},
)
reset_robot_joints = RandTerm(
func=mdp.reset_joints_by_scale,
mode="reset",
params={
"position_range": (0.5, 1.5),
"velocity_range": (0.0, 0.0),
},
)
# interval
push_robot = RandTerm(
func=mdp.push_by_setting_velocity,
mode="interval",
interval_range_s=(10.0, 15.0),
params={"velocity_range": {"x": (-0.5, 0.5), "y": (-0.5, 0.5)}},
)
@configclass
class RewardsCfg:
"""Reward terms for the MDP."""
# -- task
track_lin_vel_xy_exp = RewTerm(
func=mdp.track_lin_vel_xy_exp, weight=1.0, params={"command_name": "base_velocity", "std": math.sqrt(0.25)}
)
track_ang_vel_z_exp = RewTerm(
func=mdp.track_ang_vel_z_exp, weight=0.5, params={"command_name": "base_velocity", "std": math.sqrt(0.25)}
)
# -- penalties
lin_vel_z_l2 = RewTerm(func=mdp.lin_vel_z_l2, weight=-2.0)
ang_vel_xy_l2 = RewTerm(func=mdp.ang_vel_xy_l2, weight=-0.05)
dof_torques_l2 = RewTerm(func=mdp.joint_torques_l2, weight=-1.0e-5)
dof_acc_l2 = RewTerm(func=mdp.joint_acc_l2, weight=-2.5e-7)
action_rate_l2 = RewTerm(func=mdp.action_rate_l2, weight=-0.01)
feet_air_time = RewTerm(
func=mdp.feet_air_time,
weight=0.125,
params={
"sensor_cfg": SceneEntityCfg("contact_forces", body_names=".*FOOT"),
"command_name": "base_velocity",
"threshold": 0.5,
},
)
undesired_contacts = RewTerm(
func=mdp.undesired_contacts,
weight=-1.0,
params={"sensor_cfg": SceneEntityCfg("contact_forces", body_names=".*THIGH"), "threshold": 1.0},
)
# -- optional penalties
flat_orientation_l2 = RewTerm(func=mdp.flat_orientation_l2, weight=0.0)
dof_pos_limits = RewTerm(func=mdp.joint_pos_limits, weight=0.0)
@configclass
class TerminationsCfg:
"""Termination terms for the MDP."""
time_out = DoneTerm(func=mdp.time_out, time_out=True)
base_contact = DoneTerm(
func=mdp.illegal_contact,
params={"sensor_cfg": SceneEntityCfg("contact_forces", body_names="base"), "threshold": 1.0},
)
@configclass
class CurriculumCfg:
"""Curriculum terms for the MDP."""
terrain_levels = CurrTerm(func=mdp.terrain_levels_vel)
##
# Environment configuration
##
@configclass
class LocomotionVelocityRoughEnvCfg(RLTaskEnvCfg):
"""Configuration for the locomotion velocity-tracking environment."""
# Scene settings
scene: MySceneCfg = MySceneCfg(num_envs=4096, env_spacing=2.5)
# Basic settings
observations: ObservationsCfg = ObservationsCfg()
actions: ActionsCfg = ActionsCfg()
commands: CommandsCfg = CommandsCfg()
# MDP settings
rewards: RewardsCfg = RewardsCfg()
terminations: TerminationsCfg = TerminationsCfg()
randomization: RandomizationCfg = RandomizationCfg()
curriculum: CurriculumCfg = CurriculumCfg()
def __post_init__(self):
"""Post initialization."""
# general settings
self.decimation = 4
self.episode_length_s = 20.0
# simulation settings
self.sim.dt = 0.005
self.sim.disable_contact_processing = True
self.sim.physics_material = self.scene.terrain.physics_material
# update sensor update periods
# we tick all the sensors based on the smallest update period (physics update period)
if self.scene.height_scanner is not None:
self.scene.height_scanner.update_period = self.decimation * self.sim.dt
if self.scene.contact_forces is not None:
self.scene.contact_forces.update_period = self.sim.dt
# check if terrain levels curriculum is enabled - if so, enable curriculum for terrain generator
# this generates terrains with increasing difficulty and is useful for training
if getattr(self.curriculum, "terrain_levels", None) is not None:
if self.scene.terrain.terrain_generator is not None:
self.scene.terrain.terrain_generator.curriculum = True
else:
if self.scene.terrain.terrain_generator is not None:
self.scene.terrain.terrain_generator.curriculum = False
| 10,641 |
Python
| 32.570978 | 118 | 0.626351 |
Helbling-Technik/orbit.maze/orbit/maze/tasks/locomotion/velocity/__init__.py
|
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Locomotion environments with velocity-tracking commands.
These environments are based on the `legged_gym` environments provided by Rudin et al.
Reference:
https://github.com/leggedrobotics/legged_gym
"""
| 336 |
Python
| 24.923075 | 86 | 0.764881 |
Helbling-Technik/orbit.maze/orbit/maze/tasks/locomotion/velocity/mdp/__init__.py
|
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""This sub-module contains the functions that are specific to the locomotion environments."""
from omni.isaac.orbit.envs.mdp import * # noqa: F401, F403
from .curriculums import * # noqa: F401, F403
from .rewards import * # noqa: F401, F403
| 370 |
Python
| 29.916664 | 94 | 0.732432 |
Helbling-Technik/orbit.maze/orbit/maze/tasks/locomotion/velocity/mdp/curriculums.py
|
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Common functions that can be used to create curriculum for the learning environment.
The functions can be passed to the :class:`omni.isaac.orbit.managers.CurriculumTermCfg` object to enable
the curriculum introduced by the function.
"""
from __future__ import annotations
from collections.abc import Sequence
from typing import TYPE_CHECKING
import torch
from omni.isaac.orbit.assets import Articulation
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.terrains import TerrainImporter
if TYPE_CHECKING:
from omni.isaac.orbit.envs import RLTaskEnv
def terrain_levels_vel(
env: RLTaskEnv, env_ids: Sequence[int], asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")
) -> torch.Tensor:
"""Curriculum based on the distance the robot walked when commanded to move at a desired velocity.
This term is used to increase the difficulty of the terrain when the robot walks far enough and decrease the
difficulty when the robot walks less than half of the distance required by the commanded velocity.
.. note::
It is only possible to use this term with the terrain type ``generator``. For further information
on different terrain types, check the :class:`omni.isaac.orbit.terrains.TerrainImporter` class.
Returns:
The mean terrain level for the given environment ids.
"""
# extract the used quantities (to enable type-hinting)
asset: Articulation = env.scene[asset_cfg.name]
terrain: TerrainImporter = env.scene.terrain
command = env.command_manager.get_command("base_velocity")
# compute the distance the robot walked
distance = torch.norm(asset.data.root_pos_w[env_ids, :2] - env.scene.env_origins[env_ids, :2], dim=1)
# robots that walked far enough progress to harder terrains
move_up = distance > terrain.cfg.terrain_generator.size[0] / 2
# robots that walked less than half of their required distance go to simpler terrains
move_down = distance < torch.norm(command[env_ids, :2], dim=1) * env.max_episode_length_s * 0.5
move_down *= ~move_up
# update terrain levels
terrain.update_env_origins(env_ids, move_up, move_down)
# return the mean terrain level
return torch.mean(terrain.terrain_levels.float())
| 2,376 |
Python
| 41.446428 | 112 | 0.742424 |
Helbling-Technik/orbit.maze/orbit/maze/tasks/locomotion/velocity/mdp/rewards.py
|
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
from typing import TYPE_CHECKING
import torch
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.sensors import ContactSensor
if TYPE_CHECKING:
from omni.isaac.orbit.envs import RLTaskEnv
def feet_air_time(env: RLTaskEnv, command_name: str, sensor_cfg: SceneEntityCfg, threshold: float) -> torch.Tensor:
"""Reward long steps taken by the feet using L2-kernel.
This function rewards the agent for taking steps that are longer than a threshold. This helps ensure
that the robot lifts its feet off the ground and takes steps. The reward is computed as the sum of
the time for which the feet are in the air.
If the commands are small (i.e. the agent is not supposed to take a step), then the reward is zero.
"""
# extract the used quantities (to enable type-hinting)
contact_sensor: ContactSensor = env.scene.sensors[sensor_cfg.name]
# compute the reward
first_contact = contact_sensor.compute_first_contact(env.step_dt)[:, sensor_cfg.body_ids]
last_air_time = contact_sensor.data.last_air_time[:, sensor_cfg.body_ids]
reward = torch.sum((last_air_time - threshold) * first_contact, dim=1)
# no reward for zero command
reward *= torch.norm(env.command_manager.get_command(command_name)[:, :2], dim=1) > 0.1
return reward
def feet_air_time_positive_biped(env, command_name: str, threshold: float, sensor_cfg: SceneEntityCfg) -> torch.Tensor:
"""Reward long steps taken by the feet for bipeds.
    This function rewards the agent for taking steps up to a specified threshold and for keeping
    one foot in the air at a time.
If the commands are small (i.e. the agent is not supposed to take a step), then the reward is zero.
"""
contact_sensor: ContactSensor = env.scene.sensors[sensor_cfg.name]
# compute the reward
air_time = contact_sensor.data.current_air_time[:, sensor_cfg.body_ids]
contact_time = contact_sensor.data.current_contact_time[:, sensor_cfg.body_ids]
in_contact = contact_time > 0.0
in_mode_time = torch.where(in_contact, contact_time, air_time)
single_stance = torch.sum(in_contact.int(), dim=1) == 1
reward = torch.min(torch.where(single_stance.unsqueeze(-1), in_mode_time, 0.0), dim=1)[0]
reward = torch.clamp(reward, max=threshold)
# no reward for zero command
reward *= torch.norm(env.command_manager.get_command(command_name)[:, :2], dim=1) > 0.1
return reward
| 2,595 |
Python
| 43.75862 | 119 | 0.717148 |
Helbling-Technik/orbit.maze/orbit/maze/tasks/locomotion/velocity/config/__init__.py
|
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Configurations for velocity-based locomotion environments."""
# We leave this file empty since we don't want to expose any configs in this package directly.
# We still need this file to import the "config" module in the parent package.
| 363 |
Python
| 35.399996 | 94 | 0.763085 |
Helbling-Technik/orbit.maze/orbit/maze/tasks/locomotion/velocity/config/anymal_d/rough_env_cfg.py
|
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.utils import configclass
from orbit.maze.tasks.locomotion.velocity.velocity_env_cfg import (
LocomotionVelocityRoughEnvCfg,
)
##
# Pre-defined configs
##
from omni.isaac.orbit_assets.anymal import ANYMAL_D_CFG # isort: skip
@configclass
class AnymalDRoughEnvCfg(LocomotionVelocityRoughEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# switch robot to anymal-d
self.scene.robot = ANYMAL_D_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
@configclass
class AnymalDRoughEnvCfg_PLAY(AnymalDRoughEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# make a smaller scene for play
self.scene.num_envs = 50
self.scene.env_spacing = 2.5
# spawn the robot randomly in the grid (instead of their terrain levels)
self.scene.terrain.max_init_terrain_level = None
# reduce the number of terrains to save memory
if self.scene.terrain.terrain_generator is not None:
self.scene.terrain.terrain_generator.num_rows = 5
self.scene.terrain.terrain_generator.num_cols = 5
self.scene.terrain.terrain_generator.curriculum = False
# disable randomization for play
self.observations.policy.enable_corruption = False
# remove random pushing
self.randomization.base_external_force_torque = None
self.randomization.push_robot = None
| 1,609 |
Python
| 31.857142 | 81 | 0.682411 |
Helbling-Technik/orbit.maze/orbit/maze/tasks/locomotion/velocity/config/anymal_d/flat_env_cfg.py
|
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.utils import configclass
from .rough_env_cfg import AnymalDRoughEnvCfg
@configclass
class AnymalDFlatEnvCfg(AnymalDRoughEnvCfg):
def __post_init__(self):
# post init of parent
super().__post_init__()
# override rewards
self.rewards.flat_orientation_l2.weight = -5.0
self.rewards.dof_torques_l2.weight = -2.5e-5
self.rewards.feet_air_time.weight = 0.5
# change terrain to flat
self.scene.terrain.terrain_type = "plane"
self.scene.terrain.terrain_generator = None
# no height scan
self.scene.height_scanner = None
self.observations.policy.height_scan = None
# no terrain curriculum
self.curriculum.terrain_levels = None
@configclass
class AnymalDFlatEnvCfg_PLAY(AnymalDFlatEnvCfg):
def __post_init__(self) -> None:
# post init of parent
super().__post_init__()
# make a smaller scene for play
self.scene.num_envs = 50
self.scene.env_spacing = 2.5
# disable randomization for play
self.observations.policy.enable_corruption = False
# remove random pushing
self.randomization.base_external_force_torque = None
self.randomization.push_robot = None
| 1,382 |
Python
| 30.431817 | 60 | 0.656295 |
Helbling-Technik/orbit.maze/orbit/maze/tasks/locomotion/velocity/config/anymal_d/__init__.py
|
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
import gymnasium as gym
from . import agents, flat_env_cfg, rough_env_cfg
##
# Register Gym environments.
##
gym.register(
id="Isaac-Velocity-Flat-Anymal-D-Template-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
disable_env_checker=True,
kwargs={
"env_cfg_entry_point": flat_env_cfg.AnymalDFlatEnvCfg,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.AnymalDFlatPPORunnerCfg,
"sb3_cfg_entry_point": f"{agents.__name__}:sb3_ppo_cfg.yaml",
},
)
gym.register(
id="Isaac-Velocity-Flat-Anymal-D-Template-Play-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
disable_env_checker=True,
kwargs={
"env_cfg_entry_point": flat_env_cfg.AnymalDFlatEnvCfg_PLAY,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.AnymalDFlatPPORunnerCfg,
"sb3_cfg_entry_point": f"{agents.__name__}:sb3_ppo_cfg.yaml",
},
)
gym.register(
id="Isaac-Velocity-Rough-Anymal-D-Template-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
disable_env_checker=True,
kwargs={
"env_cfg_entry_point": rough_env_cfg.AnymalDRoughEnvCfg,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.AnymalDRoughPPORunnerCfg,
"sb3_cfg_entry_point": f"{agents.__name__}:sb3_ppo_cfg.yaml",
},
)
gym.register(
id="Isaac-Velocity-Rough-Anymal-D-Template-Play-v0",
entry_point="omni.isaac.orbit.envs:RLTaskEnv",
disable_env_checker=True,
kwargs={
"env_cfg_entry_point": rough_env_cfg.AnymalDRoughEnvCfg_PLAY,
"rsl_rl_cfg_entry_point": agents.rsl_rl_cfg.AnymalDRoughPPORunnerCfg,
"sb3_cfg_entry_point": f"{agents.__name__}:sb3_ppo_cfg.yaml",
},
)
| 1,778 |
Python
| 30.210526 | 77 | 0.669854 |
Helbling-Technik/orbit.maze/orbit/maze/tasks/locomotion/velocity/config/anymal_d/agents/rsl_rl_cfg.py
|
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit_tasks.utils.wrappers.rsl_rl import (
RslRlOnPolicyRunnerCfg,
RslRlPpoActorCriticCfg,
RslRlPpoAlgorithmCfg,
)
@configclass
class AnymalDRoughPPORunnerCfg(RslRlOnPolicyRunnerCfg):
num_steps_per_env = 24
max_iterations = 1500
save_interval = 50
experiment_name = "anymal_d_rough"
empirical_normalization = False
policy = RslRlPpoActorCriticCfg(
init_noise_std=1.0,
actor_hidden_dims=[512, 256, 128],
critic_hidden_dims=[512, 256, 128],
activation="elu",
)
algorithm = RslRlPpoAlgorithmCfg(
value_loss_coef=1.0,
use_clipped_value_loss=True,
clip_param=0.2,
entropy_coef=0.005,
num_learning_epochs=5,
num_mini_batches=4,
learning_rate=1.0e-3,
schedule="adaptive",
gamma=0.99,
lam=0.95,
desired_kl=0.01,
max_grad_norm=1.0,
)
@configclass
class AnymalDFlatPPORunnerCfg(AnymalDRoughPPORunnerCfg):
def __post_init__(self):
super().__post_init__()
self.max_iterations = 300
self.experiment_name = "anymal_d_flat"
self.policy.actor_hidden_dims = [128, 128, 128]
self.policy.critic_hidden_dims = [128, 128, 128]
| 1,417 |
Python
| 26.26923 | 58 | 0.645025 |
Helbling-Technik/orbit.maze/orbit/maze/tasks/locomotion/velocity/config/anymal_d/agents/sb3_ppo_cfg.yaml
|
# Reference: https://github.com/DLR-RM/rl-baselines3-zoo/blob/master/hyperparams/ppo.yml#L32
seed: 42
n_timesteps: !!float 1e6
policy: 'MlpPolicy'
n_steps: 16
batch_size: 4096
gae_lambda: 0.95
gamma: 0.99
n_epochs: 20
ent_coef: 0.01
learning_rate: !!float 3e-4
clip_range: !!float 0.2
policy_kwargs: "dict(
activation_fn=nn.ELU,
net_arch=[32, 32],
squash_output=False,
)"
vf_coef: 1.0
max_grad_norm: 1.0
| 475 |
YAML
| 21.666666 | 92 | 0.610526 |
Helbling-Technik/orbit.maze/orbit/maze/tasks/locomotion/velocity/config/anymal_d/agents/__init__.py
|
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from . import rsl_rl_cfg # noqa: F401, F403
| 168 |
Python
| 23.142854 | 56 | 0.720238 |
Helbling-Technik/orbit.maze/docs/CHANGELOG.rst
|
Changelog
---------
0.1.0 (2024-01-29)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Created an initial template for building an extension or project based on Orbit
| 155 |
reStructuredText
| 13.181817 | 81 | 0.593548 |
ashleygoldstein/kit-exts-joints/README.md
|
# Create Joints in Omniverse

This extension lets you quickly and easily create a joint between any two prims in your Omniverse USD stage!
## Get Started
This extension is available in Omniverse Kit and can be installed via the Extensions manager tab.
Once you are in the Extensions tab, navigate to the Community tab and search for `Joint Connection`.
Install and enable the extension, and the Joint Connection window will appear.
## How to Use
Once the extension is enabled, select the first prim in the stage and click the `S` button for `Prim A` in the Joint Connection window.
Then select the second prim you want the joint to connect to and click the `S` button for `Prim B`.
In the `Joints` drop-down menu, select the type of joint you want to create.
Then click the `Create Joint` button.
:tada: Congratulations! :tada:
You now have a joint! Click the `play` button in Omniverse to test it out.
> :exclamation: You must have rigid bodies added to your prims for joint physics to work properly
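
If you prefer scripting, you can invoke the same `CreateJointCommand` the window executes from Python. A minimal sketch, assuming the extension is enabled; the prim paths `/World/CubeA` and `/World/CubeB` are placeholders for prims in your own stage:

```python
import omni.usd
import omni.kit.commands

stage = omni.usd.get_context().get_stage()

# Hypothetical prim paths -- replace them with two rigid-body prims from your stage.
omni.kit.commands.execute(
    "CreateJointCommand",
    stage=stage,
    joint_type="Revolute",  # any entry from the Joints drop-down, e.g. "Fixed" or "D6"
    from_prim=stage.GetPrimAtPath("/World/CubeA"),
    to_prim=stage.GetPrimAtPath("/World/CubeB"),
)
```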
| 1,144 |
Markdown
| 38.482757 | 141 | 0.765734 |
ashleygoldstein/kit-exts-joints/tools/scripts/link_app.py
|
import os
import argparse
import sys
import json
import packmanapi
import urllib3
def find_omniverse_apps():
http = urllib3.PoolManager()
try:
r = http.request("GET", "http://127.0.0.1:33480/components")
except Exception as e:
print(f"Failed retrieving apps from an Omniverse Launcher, maybe it is not installed?\nError: {e}")
sys.exit(1)
apps = {}
for x in json.loads(r.data.decode("utf-8")):
latest = x.get("installedVersions", {}).get("latest", "")
if latest:
for s in x.get("settings", []):
if s.get("version", "") == latest:
root = s.get("launch", {}).get("root", "")
apps[x["slug"]] = (x["name"], root)
break
return apps
def create_link(src, dst):
print(f"Creating a link '{src}' -> '{dst}'")
packmanapi.link(src, dst)
APP_PRIORITIES = ["code", "create", "view"]
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Create folder link to Kit App installed from Omniverse Launcher")
parser.add_argument(
"--path",
help="Path to Kit App installed from Omniverse Launcher, e.g.: 'C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4'",
required=False,
)
parser.add_argument(
"--app", help="Name of Kit App installed from Omniverse Launcher, e.g.: 'code', 'create'", required=False
)
args = parser.parse_args()
path = args.path
if not path:
print("Path is not specified, looking for Omniverse Apps...")
apps = find_omniverse_apps()
if len(apps) == 0:
print(
"Can't find any Omniverse Apps. Use Omniverse Launcher to install one. 'Code' is the recommended app for developers."
)
sys.exit(0)
print("\nFound following Omniverse Apps:")
for i, slug in enumerate(apps):
name, root = apps[slug]
print(f"{i}: {name} ({slug}) at: '{root}'")
if args.app:
selected_app = args.app.lower()
if selected_app not in apps:
choices = ", ".join(apps.keys())
print(f"Passed app: '{selected_app}' is not found. Specify one of the following found Apps: {choices}")
sys.exit(0)
else:
selected_app = next((x for x in APP_PRIORITIES if x in apps), None)
if not selected_app:
selected_app = next(iter(apps))
print(f"\nSelected app: {selected_app}")
_, path = apps[selected_app]
if not os.path.exists(path):
print(f"Provided path doesn't exist: {path}")
else:
SCRIPT_ROOT = os.path.dirname(os.path.realpath(__file__))
create_link(f"{SCRIPT_ROOT}/../../app", path)
print("Success!")
| 2,813 |
Python
| 32.5 | 133 | 0.562389 |
ashleygoldstein/kit-exts-joints/tools/packman/config.packman.xml
|
<config remotes="cloudfront">
<remote2 name="cloudfront">
<transport actions="download" protocol="https" packageLocation="d4i3qtqj3r0z5.cloudfront.net/${name}@${version}" />
</remote2>
</config>
| 211 |
XML
| 34.333328 | 123 | 0.691943 |
ashleygoldstein/kit-exts-joints/tools/packman/bootstrap/install_package.py
|
# Copyright 2019 NVIDIA CORPORATION
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import zipfile
import tempfile
import sys
import shutil
__author__ = "hfannar"
logging.basicConfig(level=logging.WARNING, format="%(message)s")
logger = logging.getLogger("install_package")
class TemporaryDirectory:
def __init__(self):
self.path = None
def __enter__(self):
self.path = tempfile.mkdtemp()
return self.path
def __exit__(self, type, value, traceback):
# Remove temporary data created
shutil.rmtree(self.path)
def install_package(package_src_path, package_dst_path):
with zipfile.ZipFile(
package_src_path, allowZip64=True
) as zip_file, TemporaryDirectory() as temp_dir:
zip_file.extractall(temp_dir)
# Recursively copy (temp_dir will be automatically cleaned up on exit)
try:
# Recursive copy is needed because both package name and version folder could be missing in
# target directory:
shutil.copytree(temp_dir, package_dst_path)
        except OSError as exc:
            logger.warning(
                "Directory %s already present, package installation aborted: %s" % (package_dst_path, exc)
            )
else:
logger.info("Package successfully installed to %s" % package_dst_path)
install_package(sys.argv[1], sys.argv[2])
| 1,888 |
Python
| 31.568965 | 103 | 0.68697 |
ashleygoldstein/kit-exts-joints/exts/goldstein.joint.connection/goldstein/joint/connection/extension.py
|
import omni.ext
import omni.ui as ui
from .window import JointWindow
# Any class derived from `omni.ext.IExt` in top level module (defined in `python.modules` of `extension.toml`) will be
# instantiated when extension gets enabled and `on_startup(ext_id)` will be called. Later when extension gets disabled
# on_shutdown() is called.
class JointCreationExt(omni.ext.IExt):
# ext_id is current extension id. It can be used with extension manager to query additional information, like where
# this extension is located on filesystem.
def on_startup(self, ext_id):
print("[Joint.Creation.Ext] startup")
self._window = JointWindow("Joint Creation", width=300, height=300)
def on_shutdown(self):
self._window.destroy()
print("[Joint.Creation.Ext] shutdown")
| 805 |
Python
| 39.299998 | 119 | 0.720497 |
ashleygoldstein/kit-exts-joints/exts/goldstein.joint.connection/goldstein/joint/connection/__init__.py
|
from .extension import *
| 25 |
Python
| 11.999994 | 24 | 0.76 |
ashleygoldstein/kit-exts-joints/exts/goldstein.joint.connection/goldstein/joint/connection/utils.py
|
from typing import List
import omni.usd
import omni.kit.commands
def get_selection() -> List[str]:
"""Get the list of currently selected prims"""
return omni.usd.get_context().get_selection().get_selected_prim_paths()
| 227 |
Python
| 27.499997 | 75 | 0.726872 |
ashleygoldstein/kit-exts-joints/exts/goldstein.joint.connection/goldstein/joint/connection/window.py
|
import omni.ui as ui
from .utils import get_selection
import omni.kit.commands
import omni.usd
JOINTS = ("D6", "Revolute", "Fixed", "Spherical", "Prismatic", "Distance", "Gear", "Rack and Pinion")
class JointWindow(ui.Window):
def __init__(self, title: str, **kwargs) -> None:
super().__init__(title, **kwargs)
self._source_prim_model_a = ui.SimpleStringModel()
self._source_prim_model_b = ui.SimpleStringModel()
self._stage = omni.usd.get_context().get_stage()
self.frame.set_build_fn(self._build_fn)
self.combo_model = None
        self.current_joint = JOINTS[0]  # default matches the combo box's initial selection
def _on_get_selection_a(self):
"""Called when the user presses the "Get From Selection" button"""
self._source_prim_model_a.as_string = ", ".join(get_selection())
def _on_get_selection_b(self):
"""Called when the user presses the "Get From Selection" button"""
self._source_prim_model_b.as_string = ", ".join(get_selection())
def _build_window(self):
with self.frame:
with ui.VStack():
with ui.CollapsableFrame("Source"):
with ui.VStack(height=20, spacing=4):
with ui.HStack():
ui.Label("Prim A")
                            ui.StringField(model=self._source_prim_model_a)
ui.Button("S", clicked_fn=self._on_get_selection_a)
ui.Spacer()
with ui.HStack():
ui.Label("Prim B")
                            ui.StringField(model=self._source_prim_model_b)
ui.Button("S", clicked_fn=self._on_get_selection_b)
with ui.CollapsableFrame("Joints"):
with ui.VStack():
                        self.combo_model = ui.ComboBox(0, *JOINTS).model
def combo_changed(item_model, item):
value_model = item_model.get_item_value_model(item)
self.current_joint = JOINTS[value_model.as_int]
# self.current_index = value_model.as_int
self._combo_changed_sub = self.combo_model.subscribe_item_changed_fn(combo_changed)
                        def on_click():
                            # Create the selected joint type between the two chosen prims.
                            omni.kit.commands.execute(
                                "CreateJointCommand",
                                stage=self._stage,
                                joint_type=self.current_joint,
                                from_prim=self._stage.GetPrimAtPath(self._source_prim_model_a.as_string),
                                to_prim=self._stage.GetPrimAtPath(self._source_prim_model_b.as_string),
                            )

                        ui.Button("Create Joint", clicked_fn=on_click)
def _build_fn(self):
with ui.ScrollingFrame():
with ui.VStack(height=10):
self._build_window()
def destroy(self) -> None:
self._combo_changed_sub = None
return super().destroy()
| 3,350 |
Python
| 42.51948 | 127 | 0.477313 |
ashleygoldstein/kit-exts-joints/exts/goldstein.joint.connection/config/extension.toml
|
[package]
# Semantic Versioning is used: https://semver.org/
version = "1.0.0"
# The title and description fields are primarily for displaying extension info in UI
title = "Joint Creation"
description="This extension provides an easy and efficient way to select Prims and connect with any type of Joint."
# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"
# URL of the extension source repository.
repository = ""
# One of categories for UI.
category = "Example"
# Keywords for the extension
keywords = ["kit", "example"]
# Use omni.ui to build simple UI
[dependencies]
"omni.kit.uiapp" = {}
# Main python module this extension provides, it will be publicly available as "import goldstein.joint.connection".
[[python.module]]
name = "goldstein.joint.connection"
| 820 |
TOML
| 27.310344 | 115 | 0.746341 |
gist-ailab/AILAB-isaac-sim-pick-place/README.md
|
# isaac-sim-pick-place
## Environment Setup
### 1. Download Isaac Sim
- Dependency check
- Ubuntu
    - Recommended: 20.04 / 22.04
    - Tested on: 20.04
  - NVIDIA Driver version
    - Recommended: 525.60.11
    - Minimum: 510.73.05
    - Tested on: 510.108.03
- [Download Omniverse](https://developer.nvidia.com/isaac-sim)
- [Workstation Setup](https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/install_basic.html)
- [Python Environment Installation](https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/install_python.html#advanced-running-with-anaconda)
### 2. Environment Setup
#### 2-1. Conda
Check [Python Environment Installation](https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/install_python.html#advanced-running-with-anaconda)
- Create the conda environment
```
conda env create -f environment.yml
conda activate isaac-sim
```
- Set up environment variables so that the Isaac Sim python packages are located correctly
```
source setup_conda_env.sh
```
- Install the required packages
```
pip install -r requirements.txt
```
#### 2-2. Docker (recommended)
- Download and run the init script
```
wget https://raw.githubusercontent.com/gist-ailab/AILAB-isaac-sim-pick-place/main/dockers/init_script.sh
zsh init_script.sh
```
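- Verify the setup (a minimal check, assuming the Isaac Sim python environment is active)
```
python -c "from omni.isaac.kit import SimulationApp; print('isaac-sim OK')"
```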
| 1,312 |
Markdown
| 26.93617 | 152 | 0.696646 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/3-4/pick_place.py
|
from omni.isaac.kit import SimulationApp
simulation_app = SimulationApp({"headless": False})
from omni.isaac.core import World
from omni.kit.viewport.utility import get_active_viewport
import sys, os
from pathlib import Path
import numpy as np
import random
sys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))
from utils.controllers.pick_place_controller_robotiq import PickPlaceController
from utils.tasks.pick_place_task import UR5ePickPlace
# Gather information about the YCB dataset objects
working_dir = os.path.dirname(os.path.realpath(__file__))
ycb_path = os.path.join(Path(working_dir).parent, 'dataset/ycb')
obj_dirs = [os.path.join(ycb_path, obj_name) for obj_name in os.listdir(ycb_path)]
obj_dirs.sort()
object_info = {}
label2name = {}
total_object_num = len(obj_dirs)
for obj_idx, obj_dir in enumerate(obj_dirs):
usd_file = os.path.join(obj_dir, 'final.usd')
object_info[obj_idx] = {
'name': os.path.basename(obj_dir),
'usd_file': usd_file,
'label': obj_idx,
}
label2name[obj_idx]=os.path.basename(obj_dir)
# Select the usd file path of a random object
obje_info = random.sample(list(object_info.values()), 1)
objects_usd = obje_info[0]['usd_file']
# Print the index and category of the randomly spawned object
print("object: {}".format(obje_info[0]['name']))
# Set the spawn position of the object (too far away and the robot may not reach it; objects placed too close together may collide)
objects_position = np.array([[0.5, 0, 0.1]])
offset = np.array([0, 0, 0.1])
# Set the place position (where the object will be released)
target_position = np.array([0.4, -0.33, 0.55])
target_orientation = np.array([0, 0, 0, 1])
# Create the World
my_world = World(stage_units_in_meters=1.0)
# Create the Task
my_task = UR5ePickPlace(objects_list = [objects_usd],
                        objects_position = objects_position,
                        offset=offset)
# Add the Task to the World
my_world.add_task(my_task)
my_world.reset()
# Get the ur5e robot from the Task
task_params = my_task.get_params()
my_ur5e = my_world.scene.get_object(task_params["robot_name"]["value"])
# Create the PickPlace controller
my_controller = PickPlaceController(
name="pick_place_controller",
gripper=my_ur5e.gripper,
robot_articulation=my_ur5e
)
# Declare the instance used for robot control (PD control)
articulation_controller = my_ur5e.get_articulation_controller()
# Set the viewpoint in the GUI (switching from the Depth camera view to the Perspective view gives a better overview)
viewport = get_active_viewport()
viewport.set_active_camera('/World/ur5e/realsense/Depth')
viewport.set_active_camera('/OmniverseKit_Persp')
# Step the physics simulation of the created world
while simulation_app.is_running():
my_world.step(render=True)
if my_world.is_playing():
        # At step 0, reset the world and the controller
if my_world.current_time_step_index == 0:
my_world.reset()
my_controller.reset()
        # Get the observations from my_world
        observations = my_world.get_observations()
        # Pass the obtained observations to the pick-and-place controller
actions = my_controller.forward(
picking_position=observations[task_params["task_object_name_0"]["value"]]["position"],
placing_position=observations[task_params["task_object_name_0"]["value"]]["target_position"],
current_joint_positions=observations[task_params["robot_name"]["value"]]["joint_positions"],
end_effector_offset=np.array([0, 0, 0.14])
)
        # Report that the controller has finished
if my_controller.is_done():
print("done picking and placing")
break
        # Execute the computed action through the articulation_controller.
        # The action is executed using the joint position values computed inside the controller
articulation_controller.apply_action(actions)
# Close the simulation
simulation_app.close()
| 3,733 |
Python
| 32.044248 | 105 | 0.669167 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/3-4/1_practice_controller_generation.py
|
from omni.isaac.kit import SimulationApp
simulation_app = SimulationApp({"headless": False})
from omni.isaac.core import World
from omni.kit.viewport.utility import get_active_viewport
import sys, os
from pathlib import Path
import numpy as np
import random
sys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))
from utils.controllers.pick_place_controller_robotiq import PickPlaceController
from utils.tasks.pick_place_task import UR5ePickPlace
############### Create a Task that spawns a random YCB object ######################
# Gather information about the YCB dataset objects
working_dir = os.path.dirname(os.path.realpath(__file__))
ycb_path = os.path.join(Path(working_dir).parent, 'dataset/ycb')
obj_dirs = [os.path.join(ycb_path, obj_name) for obj_name in os.listdir(ycb_path)]
obj_dirs.sort()
object_info = {}
label2name = {}
total_object_num = len(obj_dirs)
for obj_idx, obj_dir in enumerate(obj_dirs):
usd_file = os.path.join(obj_dir, 'final.usd')
object_info[obj_idx] = {
'name': os.path.basename(obj_dir),
'usd_file': usd_file,
'label': obj_idx,
}
label2name[obj_idx]=os.path.basename(obj_dir)
# Select the usd file path of a random object
obje_info = random.sample(list(object_info.values()), 1)
objects_usd = obje_info[0]['usd_file']
# Print the index and category of the randomly spawned object
print("object: {}".format(obje_info[0]['name']))
# Set the spawn position of the object (too far away and the robot may not reach it; objects placed too close together may collide)
objects_position = np.array([[0.5, 0, 0.1]])
offset = np.array([0, 0, 0.1])
# Set the place position (where the object will be released)
target_position = np.array([0.4, -0.33, 0.55])
target_orientation = np.array([0, 0, 0, 1])
# Create the World
my_world = World(stage_units_in_meters=1.0)
# Create the Task
my_task = UR5ePickPlace(objects_list = [objects_usd],
                        objects_position = objects_position,
                        offset=offset)
# Add the Task to the World
my_world.add_task(my_task)
my_world.reset()
########################################################################
################### Create the pick-and-place controller ##########################
# Get the ur5e robot from the Task
# Create the PickPlace controller
# Declare the instance used for robot control (PD control)
########################################################################
# Set the viewpoint in the GUI (switching from the Depth camera view to the Perspective view gives a better overview)
viewport = get_active_viewport()
viewport.set_active_camera('/World/ur5e/realsense/Depth')
viewport.set_active_camera('/OmniverseKit_Persp')
# Step the physics simulation of the created world
while simulation_app.is_running():
my_world.step(render=True)
# Close the simulation
simulation_app.close()
| 2,598 |
Python
| 28.873563 | 82 | 0.635489 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/3-4/2_practice_pickplace.py
|
from omni.isaac.kit import SimulationApp
simulation_app = SimulationApp({"headless": False})
from omni.isaac.core import World
from omni.kit.viewport.utility import get_active_viewport
import sys, os
from pathlib import Path
import numpy as np
import random
sys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))
from utils.controllers.pick_place_controller_robotiq import PickPlaceController
from utils.tasks.pick_place_task import UR5ePickPlace
############### Create a Task that spawns a random YCB object ######################
# Gather information about the YCB dataset objects
working_dir = os.path.dirname(os.path.realpath(__file__))
ycb_path = os.path.join(Path(working_dir).parent, 'dataset/ycb')
obj_dirs = [os.path.join(ycb_path, obj_name) for obj_name in os.listdir(ycb_path)]
obj_dirs.sort()
object_info = {}
label2name = {}
total_object_num = len(obj_dirs)
for obj_idx, obj_dir in enumerate(obj_dirs):
usd_file = os.path.join(obj_dir, 'final.usd')
object_info[obj_idx] = {
'name': os.path.basename(obj_dir),
'usd_file': usd_file,
'label': obj_idx,
}
label2name[obj_idx]=os.path.basename(obj_dir)
# Select the usd file path of a random object
obje_info = random.sample(list(object_info.values()), 1)
objects_usd = obje_info[0]['usd_file']
# Print the index and category of the randomly spawned object
print("object: {}".format(obje_info[0]['name']))
# Set the spawn position of the object (too far away and the robot may not reach it; objects placed too close together may collide)
objects_position = np.array([[0.5, 0, 0.1]])
offset = np.array([0, 0, 0.1])
# Set the place position (where the object will be released)
target_position = np.array([0.4, -0.33, 0.55])
target_orientation = np.array([0, 0, 0, 1])
# Create the World
my_world = World(stage_units_in_meters=1.0)
# Create the Task
my_task = UR5ePickPlace(objects_list = [objects_usd],
                        objects_position = objects_position,
                        offset=offset)
# Add the Task to the World
my_world.add_task(my_task)
my_world.reset()
########################################################################
################### Create the pick-and-place controller ##########################
# Get the ur5e robot from the Task
task_params = my_task.get_params()
my_ur5e = my_world.scene.get_object(task_params["robot_name"]["value"])
# Create the PickPlace controller
my_controller = PickPlaceController(
name="pick_place_controller",
gripper=my_ur5e.gripper,
robot_articulation=my_ur5e
)
# Declare the instance used for robot control (PD control)
articulation_controller = my_ur5e.get_articulation_controller()
########################################################################
# Set the viewpoint in the GUI (switching from the Depth camera view to the Perspective view gives a better overview)
viewport = get_active_viewport()
viewport.set_active_camera('/World/ur5e/realsense/Depth')
viewport.set_active_camera('/OmniverseKit_Persp')
######################## Perform pick and place ###############################
# Step the physics simulation of the created world
while simulation_app.is_running():
my_world.step(render=True)
    # Perform the task while the world is playing
if my_world.is_playing():
        # At step 0, reset the world and the controller
if my_world.current_time_step_index == 0:
my_world.reset()
my_controller.reset()
        # Get the observations from my_world
        # Pass the obtained observations to the pick-and-place controller
        # Report that the controller has finished
        # Execute the computed action through the articulation_controller.
        # The action is executed using the joint position values computed inside the controller
# Close the simulation
simulation_app.close()
########################################################################
| 3,579 |
Python
| 29.862069 | 82 | 0.610226 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/3-4/0_practice_task_generation.py
|
from omni.isaac.kit import SimulationApp
simulation_app = SimulationApp({"headless": False})
from omni.isaac.core import World
from omni.kit.viewport.utility import get_active_viewport
import sys, os
from pathlib import Path
import numpy as np
import random
sys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))
from utils.controllers.pick_place_controller_robotiq import PickPlaceController
from utils.tasks.pick_place_task import UR5ePickPlace
############### Create a Task that spawns a random YCB object ######################
# Gather information about the YCB dataset objects
# Select the usd file path of a random object
# Print the index and category of the randomly spawned object
# Set the spawn position of the object (too far away and the robot may not reach it; objects placed too close together may collide)
# Set the place position (where the object will be released)
# Create the World
my_world = World(stage_units_in_meters=1.0)
# Create the Task
# Add the Task to the World
########################################################################
# Set the viewpoint in the GUI (switching from the Depth camera view to the Perspective view gives a better overview)
viewport = get_active_viewport()
viewport.set_active_camera('/World/ur5e/realsense/Depth')
viewport.set_active_camera('/OmniverseKit_Persp')
# Step the physics simulation of the created world
while simulation_app.is_running():
my_world.step(render=True)
# Close the simulation
simulation_app.close()
| 1,280 |
Python
| 25.687499 | 79 | 0.689844 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/3-2/0_position_control.py
|
from omni.isaac.kit import SimulationApp
simulation_app = SimulationApp({"headless": False})
from omni.isaac.core import World
from omni.kit.viewport.utility import get_active_viewport
import numpy as np
import sys, os
sys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))
from utils.tasks.pick_place_task import UR5ePickPlace
from utils.controllers.RMPFflow_pickplace import RMPFlowController
from utils.controllers.basic_manipulation_controller import BasicManipulationController
# Create the World
my_world = World(stage_units_in_meters=1.0)
# Create the Task
my_task = UR5ePickPlace()
my_world.add_task(my_task)
my_world.reset()
# Create the controller
task_params = my_task.get_params()
my_ur5e = my_world.scene.get_object(task_params["robot_name"]["value"])
my_controller = BasicManipulationController(
    # Set the name of the controller
    name='basic_manipulation_controller',
    # Set the robot motion controller
    cspace_controller=RMPFlowController(
        name="end_effector_controller_cspace_controller", robot_articulation=my_ur5e, attach_gripper=True
    ),
    # Set the robot's gripper
    gripper=my_ur5e.gripper,
    # Set the progression speed of each phase
events_dt=[0.008],
)
# Declare the instance used for robot control (PD control)
articulation_controller = my_ur5e.get_articulation_controller()
my_controller.reset()
# Set the viewpoint in the GUI (switching from the Depth camera view to the Perspective view gives a better overview)
viewport = get_active_viewport()
viewport.set_active_camera('/World/ur5e/realsense/Depth')
viewport.set_active_camera('/OmniverseKit_Persp')
# Variable used to delay execution after the simulation app starts
max_step = 150
# Run while the simulation app is running
ee_target_position = np.array([0.25, -0.23, 0.4])
while simulation_app.is_running():
    # Step the physics simulation of the created world
my_world.step(render=True)
if my_world.is_playing():
if my_world.current_time_step_index > max_step:
            # Get the observations from my_world
            observations = my_world.get_observations()
            # Pass the obtained observations to the controller
actions = my_controller.forward(
target_position=ee_target_position,
current_joint_positions=observations[task_params["robot_name"]["value"]]["joint_positions"],
end_effector_offset = np.array([0, 0, 0.14])
)
            # Report that the controller has finished
if my_controller.is_done():
print("done position control of end-effector")
break
            # Pass the target joint position values computed inside the controller
            # to the articulation controller to execute the action
articulation_controller.apply_action(actions)
# Close the simulation
simulation_app.close()
| 2,648 |
Python
| 32.1125 | 108 | 0.686934 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/3-2/1_look_around.py
|
from omni.isaac.kit import SimulationApp
simulation_app = SimulationApp({"headless": False})
from omni.isaac.core import World
from omni.kit.viewport.utility import get_active_viewport
from omni.isaac.core.utils.rotations import euler_angles_to_quat
import numpy as np
import sys, os
sys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))
from utils.tasks.pick_place_task import UR5ePickPlace
from utils.controllers.RMPFflow_pickplace import RMPFlowController
from utils.controllers.basic_manipulation_controller import BasicManipulationController
# Create the World
my_world = World(stage_units_in_meters=1.0)
# Create the Task
my_task = UR5ePickPlace()
# Add the Task to the World and reset the World
my_world.add_task(my_task)
my_world.reset()
# Get the robot and the camera from the Task
task_params = my_task.get_params()
my_ur5e = my_world.scene.get_object(task_params["robot_name"]["value"])
# Create the controller
my_controller = BasicManipulationController(
name='basic_manipulation_controller',
cspace_controller=RMPFlowController(
name="basic_manipulation_controller_cspace_controller",
robot_articulation=my_ur5e,
attach_gripper=True
),
gripper=my_ur5e.gripper,
events_dt=[0.008],
)
# Declare the instance used for robot control (PD control)
articulation_controller = my_ur5e.get_articulation_controller()
# Set the viewpoint in the GUI (switching from the Depth camera view to the Perspective view gives a better overview)
viewport = get_active_viewport()
viewport.set_active_camera('/World/ur5e/realsense/Depth')
viewport.set_active_camera('/OmniverseKit_Persp')
# Example code for finding the target object
# The end effector sweeps a full 360 degrees in 45-degree steps of theta on a circle of radius 4 (0.4 m after scaling)
for theta in range(0, 360, 45):
    # Set the end-effector position (x, y, z) according to theta
r, z = 4, 0.35
x, y = r/10 * np.cos(theta/360*2*np.pi), r/10 * np.sin(theta/360*2*np.pi)
while simulation_app.is_running():
        # Step the physics simulation of the created world
my_world.step(render=True)
if my_world.is_playing():
            # At step 0, reset the World and the Controller
if my_world.current_time_step_index == 0:
my_world.reset()
my_controller.reset()
            # Pass the obtained observations to the controller
actions = my_controller.forward(
target_position=np.array([x, y, z]),
current_joint_positions=my_ur5e.get_joint_positions(),
end_effector_offset = np.array([0, 0, 0.14]),
end_effector_orientation=euler_angles_to_quat(np.array([0, np.pi, theta * 2 * np.pi / 360]))
)
            # Once the end effector reaches the desired position,
            # reset the controller and exit the while loop
if my_controller.is_done():
my_controller.reset()
break
            # Execute the computed action through the articulation_controller
            # The action is executed using the joint position values computed inside the controller
articulation_controller.apply_action(actions)
simulation_app.close()
| 3,016 |
Python
| 32.522222 | 108 | 0.656499 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/3-2/2_gripper_control.py
|
from omni.isaac.kit import SimulationApp
simulation_app = SimulationApp({"headless": False})
from omni.isaac.core import World
from omni.kit.viewport.utility import get_active_viewport
import numpy as np
import sys, os
sys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))
from utils.tasks.pick_place_task import UR5ePickPlace
from utils.controllers.RMPFflow_pickplace import RMPFlowController
from utils.controllers.basic_manipulation_controller import BasicManipulationController
# if you don't declare objects_position, the objects will be placed randomly
objects_position = np.array([0.4, 0.4, 0.1])
target_position = np.array([0.4, -0.33, 0.05]) # 0.55 for considering the length of the gripper tip
target_orientation = np.array([0, 0, 0, 1])
offset = np.array([0, 0, 0.1]) # releasing offset at the target position
my_world = World(stage_units_in_meters=1.0)
my_task = UR5ePickPlace()
my_world.add_task(my_task)
my_world.reset()
task_params = my_task.get_params()
my_ur5e = my_world.scene.get_object(task_params["robot_name"]["value"])
my_controller = BasicManipulationController(
name='basic_manipulation_controller',
cspace_controller=RMPFlowController(
name="end_effector_controller_cspace_controller", robot_articulation=my_ur5e, attach_gripper=True
),
gripper=my_ur5e.gripper,
events_dt=[0.008],
)
articulation_controller = my_ur5e.get_articulation_controller()
my_controller.reset()
viewport = get_active_viewport()
viewport.set_active_camera('/World/ur5e/realsense/Depth')
viewport.set_active_camera('/OmniverseKit_Persp')
# Prompt for a gripper open/close command
while True:
instruction = input('Enter the instruction [open/close]:')
if instruction in ["o", "open", "c", "close"]:
break
else:
print("wrong instruction")
print('instruction : ', instruction)
while simulation_app.is_running():
my_world.step(render=True)
if my_world.is_playing():
observations = my_world.get_observations()
if instruction == "o" or instruction == "open":
actions = my_controller.open(
current_joint_positions=observations[task_params["robot_name"]["value"]]["joint_positions"],
)
elif instruction == "c" or instruction == "close":
actions = my_controller.close(
current_joint_positions=observations[task_params["robot_name"]["value"]]["joint_positions"],
)
articulation_controller.apply_action(actions)
        # When the controller finishes, prompt for a new command
if my_controller.is_done():
if instruction == "o" or instruction == "open":
print("done opening the gripper\n")
elif instruction == "c" or instruction == "close":
print("done closing the gripper\n")
while True:
instruction = input('Enter the instruction [open/close/quit]:')
if instruction in ["o", "open", "c", "close", "q", "quit"]:
break
else:
print("wrong instruction")
print('instruction : ', instruction)
print()
if instruction == 'q' or instruction == 'quit':
break
my_controller.reset()
simulation_app.close()
| 3,281 |
Python
| 37.16279 | 108 | 0.650716 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/3-2/1_practice_look_around.py
|
from omni.isaac.kit import SimulationApp
simulation_app = SimulationApp({"headless": False})
from omni.isaac.core import World
from omni.kit.viewport.utility import get_active_viewport
from omni.isaac.core.utils.rotations import euler_angles_to_quat
import numpy as np
import sys, os
sys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))
from utils.tasks.pick_place_task import UR5ePickPlace
from utils.controllers.RMPFflow_pickplace import RMPFlowController
from utils.controllers.basic_manipulation_controller import BasicManipulationController
########## Environment setup for basic robot manipulation ##############
# Create the World
# Create the Task
# Add the Task to the World and reset the World
# Get the robot and camera from the Task
# Create the Controller that executes the robot's actions
# (a possible completion is sketched below)
#########################################################################
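# --- Editor's sketch (assumption): one possible completion of the section
# above. It mirrors the completed look-around example earlier in this
# collection; names such as UR5ePickPlace and BasicManipulationController
# come from the utils package imported above. ---
my_world = World(stage_units_in_meters=1.0)                       # create the World
my_task = UR5ePickPlace()                                         # create the Task
my_world.add_task(my_task)                                        # add the Task and reset
my_world.reset()
task_params = my_task.get_params()                                # get the robot from the Task
my_ur5e = my_world.scene.get_object(task_params["robot_name"]["value"])
my_controller = BasicManipulationController(                      # controller for robot actions
    name='basic_manipulation_controller',
    cspace_controller=RMPFlowController(
        name="basic_manipulation_controller_cspace_controller",
        robot_articulation=my_ur5e,
        attach_gripper=True
    ),
    gripper=my_ur5e.gripper,
    events_dt=[0.008],
)
# --- end of editor's sketch ---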
# Declare an instance for robot control (PD control)
articulation_controller = my_ur5e.get_articulation_controller()
# Set the viewpoint shown in the GUI (switching from the depth camera view to the perspective view makes the whole scene easier to see)
viewport = get_active_viewport()
viewport.set_active_camera('/World/ur5e/realsense/Depth')
viewport.set_active_camera('/OmniverseKit_Persp')
# Example code for finding the target object:
# the end effector sweeps a full 360 degrees in 45-degree steps at radius 4 (scaled by 1/10 below, i.e. 0.4 m)
for theta in range(0, 360, 45):
    # Set the end effector position (x, y, z) according to theta
    r, z = 4, 0.35
    x, y = r/10 * np.cos(theta/360*2*np.pi), r/10 * np.sin(theta/360*2*np.pi)
    while simulation_app.is_running():
        # Physics simulation step in the created world
        my_world.step(render=True)
        if my_world.is_playing():
            # Reset the World and the Controller when the step index is 0
            if my_world.current_time_step_index == 0:
                my_world.reset()
                my_controller.reset()
            ########################## Generate robot action #########################
            # Pass the obtained observation to the manipulation controller
            # (a possible completion is sketched below)
            #########################################################################
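            # --- Editor's sketch (assumption): one possible completion,
            # mirroring the completed look-around example earlier in this
            # collection. ---
            actions = my_controller.forward(
                target_position=np.array([x, y, z]),
                current_joint_positions=my_ur5e.get_joint_positions(),
                end_effector_offset=np.array([0, 0, 0.14]),
                end_effector_orientation=euler_angles_to_quat(np.array([0, np.pi, theta * 2 * np.pi / 360]))
            )
            # --- end of editor's sketch ---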
            # When the end effector reaches the desired position,
            # reset the controller and exit the while loop
            if my_controller.is_done():
                my_controller.reset()
                break
            ########################## Execute robot action ##########################
            # Execute the declared action through the articulation_controller.
            # The action is performed using the joint positions computed by the Controller
            # (a possible completion is sketched below)
            #########################################################################
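            # --- Editor's sketch (assumption): execute the computed joint
            # positions, as in the completed example. ---
            articulation_controller.apply_action(actions)
            # --- end of editor's sketch ---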
simulation_app.close()
| 2,621 |
Python
| 26.030928 | 87 | 0.540252 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/1-1/debug_example.py
|
a = 1
b = 1
print(a + b)
b = 2
print(a + b)
| 36 |
Python
| 4.285714 | 10 | 0.555556 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/3-3/pick_place.py
|
from omni.isaac.kit import SimulationApp
simulation_app = SimulationApp({"headless": False})
from omni.isaac.core import World
from omni.kit.viewport.utility import get_active_viewport
import sys, os
from pathlib import Path
import numpy as np
import random
sys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))
from utils.controllers.pick_place_controller_robotiq import PickPlaceController
from utils.tasks.pick_place_task import UR5ePickPlace
# Gather information about the YCB Dataset objects
working_dir = os.path.dirname(os.path.realpath(__file__))
ycb_path = os.path.join(Path(working_dir).parent, 'dataset/ycb')
obj_dirs = [os.path.join(ycb_path, obj_name) for obj_name in os.listdir(ycb_path)]
obj_dirs.sort()
object_info = {}
label2name = {}
total_object_num = len(obj_dirs)
for obj_idx, obj_dir in enumerate(obj_dirs):
    usd_file = os.path.join(obj_dir, 'final.usd')
    object_info[obj_idx] = {
        'name': os.path.basename(obj_dir),
        'usd_file': usd_file,
        'label': obj_idx,
    }
    label2name[obj_idx] = os.path.basename(obj_dir)
# Select the usd file path of a random object
obj_info = random.sample(list(object_info.values()), 1)
objects_usd = obj_info[0]['usd_file']
# Print the label and category of the randomly chosen object
print("object: {}".format(obj_info[0]['name']))
# Set the position where the object is spawned (if it is too far away the robot may not reach it; if objects are too close together they may collide)
objects_position = np.array([[0.5, 0, 0.1]])
offset = np.array([0, 0, 0.1])
# Set the place position where the object will be put down
target_position = np.array([0.4, -0.33, 0.55])
target_orientation = np.array([0, 0, 0, 1])
# Create the World
my_world = World(stage_units_in_meters=1.0)
# Create the Task
my_task = UR5ePickPlace(objects_list=[objects_usd],
                        objects_position=objects_position,
                        offset=offset)
# Add the Task to the World
my_world.add_task(my_task)
my_world.reset()
# Get the ur5e from the Task
task_params = my_task.get_params()
my_ur5e = my_world.scene.get_object(task_params["robot_name"]["value"])
# Create the PickPlace controller
my_controller = PickPlaceController(
    name="pick_place_controller",
    gripper=my_ur5e.gripper,
    robot_articulation=my_ur5e
)
# Declare an instance for robot control (PD control)
articulation_controller = my_ur5e.get_articulation_controller()
# Set the viewpoint shown in the GUI (switching from the depth camera view to the perspective view makes the whole scene easier to see)
viewport = get_active_viewport()
viewport.set_active_camera('/World/ur5e/realsense/Depth')
viewport.set_active_camera('/OmniverseKit_Persp')
# Physics simulation step in the created world
while simulation_app.is_running():
my_world.step(render=True)
    # Perform the task while the world is running
    if my_world.is_playing():
        # Reset the world and the controller when the step index is 0
        if my_world.current_time_step_index == 0:
            my_world.reset()
            my_controller.reset()
        # Get the observation values from my_world
        observations = my_world.get_observations()
        # Pass the obtained observations to the pick place controller
        actions = my_controller.forward(
            picking_position=observations[task_params["task_object_name_0"]["value"]]["position"],
            placing_position=observations[task_params["task_object_name_0"]["value"]]["target_position"],
            current_joint_positions=observations[task_params["robot_name"]["value"]]["joint_positions"],
            end_effector_offset=np.array([0, 0, 0.14])
        )
        # Report that the controller has finished
        if my_controller.is_done():
            print("done picking and placing")
            break
        # Execute the declared action through the articulation_controller.
        # The action is performed using the joint positions computed inside the Controller
        articulation_controller.apply_action(actions)
# End the simulation
simulation_app.close()
| 3,761 |
Python
| 31.713043 | 105 | 0.668439 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/3-3/1_practice_controller_generation.py
|
from omni.isaac.kit import SimulationApp
simulation_app = SimulationApp({"headless": False})
from omni.isaac.core import World
from omni.kit.viewport.utility import get_active_viewport
import sys, os
from pathlib import Path
import numpy as np
import random
sys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))
from utils.controllers.pick_place_controller_robotiq import PickPlaceController
from utils.tasks.pick_place_task import UR5ePickPlace
############ Create a Task that spawns a random YCB object #############
# Gather information about the YCB Dataset objects
working_dir = os.path.dirname(os.path.realpath(__file__))
ycb_path = os.path.join(Path(working_dir).parent, 'dataset/ycb')
obj_dirs = [os.path.join(ycb_path, obj_name) for obj_name in os.listdir(ycb_path)]
obj_dirs.sort()
object_info = {}
label2name = {}
total_object_num = len(obj_dirs)
for obj_idx, obj_dir in enumerate(obj_dirs):
    usd_file = os.path.join(obj_dir, 'final.usd')
    object_info[obj_idx] = {
        'name': os.path.basename(obj_dir),
        'usd_file': usd_file,
        'label': obj_idx,
    }
    label2name[obj_idx] = os.path.basename(obj_dir)
# Select the usd file path of a random object
obj_info = random.sample(list(object_info.values()), 1)
objects_usd = obj_info[0]['usd_file']
# Print the label and category of the randomly chosen object
print("object: {}".format(obj_info[0]['name']))
# Set the position where the object is spawned (if it is too far away the robot may not reach it; if objects are too close together they may collide)
objects_position = np.array([[0.5, 0, 0.1]])
offset = np.array([0, 0, 0.1])
# Set the place position where the object will be put down
target_position = np.array([0.4, -0.33, 0.55])
target_orientation = np.array([0, 0, 0, 1])
# Create the World
my_world = World(stage_units_in_meters=1.0)
# Create the Task
my_task = UR5ePickPlace(objects_list=[objects_usd],
                        objects_position=objects_position,
                        offset=offset)
# Add the Task to the World
my_world.add_task(my_task)
my_world.reset()
########################################################################
################### Create the Pick place controller ###################
# Get the ur5e from the Task
# Create the PickPlace controller
# Declare an instance for robot control (PD control)
# (a possible completion is sketched below)
########################################################################
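# --- Editor's sketch (assumption): one possible completion of the section
# above, mirroring pick_place.py from lecture/3-3 in this collection. ---
task_params = my_task.get_params()                                # get the ur5e from the Task
my_ur5e = my_world.scene.get_object(task_params["robot_name"]["value"])
my_controller = PickPlaceController(                              # PickPlace controller
    name="pick_place_controller",
    gripper=my_ur5e.gripper,
    robot_articulation=my_ur5e
)
articulation_controller = my_ur5e.get_articulation_controller()   # instance for PD control
# --- end of editor's sketch ---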
# Set the viewpoint shown in the GUI (switching from the depth camera view to the perspective view makes the whole scene easier to see)
viewport = get_active_viewport()
viewport.set_active_camera('/World/ur5e/realsense/Depth')
viewport.set_active_camera('/OmniverseKit_Persp')
# Physics simulation step in the created world
while simulation_app.is_running():
    my_world.step(render=True)
# End the simulation
simulation_app.close()
| 2,598 |
Python
| 28.873563 | 82 | 0.635489 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/3-3/2_practice_pickplace.py
|
from omni.isaac.kit import SimulationApp
simulation_app = SimulationApp({"headless": False})
from omni.isaac.core import World
from omni.kit.viewport.utility import get_active_viewport
import sys, os
from pathlib import Path
import numpy as np
import random
sys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))
from utils.controllers.pick_place_controller_robotiq import PickPlaceController
from utils.tasks.pick_place_task import UR5ePickPlace
############ Create a Task that spawns a random YCB object #############
# Gather information about the YCB Dataset objects
working_dir = os.path.dirname(os.path.realpath(__file__))
ycb_path = os.path.join(Path(working_dir).parent, 'dataset/ycb')
obj_dirs = [os.path.join(ycb_path, obj_name) for obj_name in os.listdir(ycb_path)]
obj_dirs.sort()
object_info = {}
label2name = {}
total_object_num = len(obj_dirs)
for obj_idx, obj_dir in enumerate(obj_dirs):
    usd_file = os.path.join(obj_dir, 'final.usd')
    object_info[obj_idx] = {
        'name': os.path.basename(obj_dir),
        'usd_file': usd_file,
        'label': obj_idx,
    }
    label2name[obj_idx] = os.path.basename(obj_dir)
# Select the usd file path of a random object
obj_info = random.sample(list(object_info.values()), 1)
objects_usd = obj_info[0]['usd_file']
# Print the label and category of the randomly chosen object
print("object: {}".format(obj_info[0]['name']))
# Set the position where the object is spawned (if it is too far away the robot may not reach it; if objects are too close together they may collide)
objects_position = np.array([[0.5, 0, 0.1]])
offset = np.array([0, 0, 0.1])
# Set the place position where the object will be put down
target_position = np.array([0.4, -0.33, 0.55])
target_orientation = np.array([0, 0, 0, 1])
# Create the World
my_world = World(stage_units_in_meters=1.0)
# Create the Task
my_task = UR5ePickPlace(objects_list=[objects_usd],
                        objects_position=objects_position,
                        offset=offset)
# Add the Task to the World
my_world.add_task(my_task)
my_world.reset()
########################################################################
################### Create the Pick place controller ###################
# Get the ur5e from the Task
task_params = my_task.get_params()
my_ur5e = my_world.scene.get_object(task_params["robot_name"]["value"])
# Create the PickPlace controller
my_controller = PickPlaceController(
    name="pick_place_controller",
    gripper=my_ur5e.gripper,
    robot_articulation=my_ur5e
)
# Declare an instance for robot control (PD control)
articulation_controller = my_ur5e.get_articulation_controller()
########################################################################
# Set the viewpoint shown in the GUI (switching from the depth camera view to the perspective view makes the whole scene easier to see)
viewport = get_active_viewport()
viewport.set_active_camera('/World/ur5e/realsense/Depth')
viewport.set_active_camera('/OmniverseKit_Persp')
######################## Perform pick and place ########################
# Physics simulation step in the created world
while simulation_app.is_running():
    my_world.step(render=True)
    # Perform the task while the world is running
    if my_world.is_playing():
        # Reset the world and the controller when the step index is 0
        if my_world.current_time_step_index == 0:
            my_world.reset()
            my_controller.reset()
        # Get the observation values from my_world
        # Pass the obtained observations to the pick place controller
        # Report that the controller has finished
        # Execute the declared action through the articulation_controller.
        # The action is performed using the joint positions computed inside the Controller
        # (a possible completion is sketched below)
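        # --- Editor's sketch (assumption): one possible completion,
        # mirroring pick_place.py from lecture/3-3 in this collection. ---
        observations = my_world.get_observations()
        actions = my_controller.forward(
            picking_position=observations[task_params["task_object_name_0"]["value"]]["position"],
            placing_position=observations[task_params["task_object_name_0"]["value"]]["target_position"],
            current_joint_positions=observations[task_params["robot_name"]["value"]]["joint_positions"],
            end_effector_offset=np.array([0, 0, 0.14])
        )
        if my_controller.is_done():
            print("done picking and placing")
            break
        articulation_controller.apply_action(actions)
        # --- end of editor's sketch ---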
# End the simulation
simulation_app.close()
########################################################################
| 3,579 |
Python
| 29.862069 | 82 | 0.610226 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/3-3/0_practice_task_generation.py
|
from omni.isaac.kit import SimulationApp
simulation_app = SimulationApp({"headless": False})
from omni.isaac.core import World
from omni.kit.viewport.utility import get_active_viewport
import sys, os
from pathlib import Path
import numpy as np
import random
sys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))
from utils.controllers.pick_place_controller_robotiq import PickPlaceController
from utils.tasks.pick_place_task import UR5ePickPlace
############ Create a Task that spawns a random YCB object #############
# Gather information about the YCB Dataset objects
# Select the usd file path of a random object
# Print the label and category of the randomly chosen object
# Set the position where the object is spawned (if it is too far away the robot may not reach it; if objects are too close together they may collide)
# Set the place position where the object will be put down
# Create the World
my_world = World(stage_units_in_meters=1.0)
# Create the Task
# Add the Task to the World
# (a possible completion of the missing steps is sketched below)
########################################################################
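# --- Editor's sketch (assumption): one possible completion of the section
# above, mirroring 1_practice_controller_generation.py in this folder. ---
working_dir = os.path.dirname(os.path.realpath(__file__))         # YCB dataset info
ycb_path = os.path.join(Path(working_dir).parent, 'dataset/ycb')
obj_dirs = [os.path.join(ycb_path, obj_name) for obj_name in os.listdir(ycb_path)]
obj_dirs.sort()
object_info = {}
for obj_idx, obj_dir in enumerate(obj_dirs):
    object_info[obj_idx] = {
        'name': os.path.basename(obj_dir),
        'usd_file': os.path.join(obj_dir, 'final.usd'),
        'label': obj_idx,
    }
obj_info = random.sample(list(object_info.values()), 1)           # pick a random object
objects_usd = obj_info[0]['usd_file']
print("object: {}".format(obj_info[0]['name']))
objects_position = np.array([[0.5, 0, 0.1]])                      # spawn position
offset = np.array([0, 0, 0.1])
my_task = UR5ePickPlace(objects_list=[objects_usd],               # create the Task
                        objects_position=objects_position,
                        offset=offset)
my_world.add_task(my_task)                                        # add the Task and reset
my_world.reset()
# --- end of editor's sketch ---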
# Set the viewpoint shown in the GUI (switching from the depth camera view to the perspective view makes the whole scene easier to see)
viewport = get_active_viewport()
viewport.set_active_camera('/World/ur5e/realsense/Depth')
viewport.set_active_camera('/OmniverseKit_Persp')
# Physics simulation step in the created world
while simulation_app.is_running():
    my_world.step(render=True)
# End the simulation
simulation_app.close()
| 1,280 |
Python
| 25.687499 | 79 | 0.689844 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/044_flat_screwdriver/poisson/nontextured.xml
|
<KinBody name="044_flat_screwdriver">
<Body type="static" name="044_flat_screwdriver">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 226 |
XML
| 24.22222 | 50 | 0.632743 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/044_flat_screwdriver/tsdf/nontextured.xml
|
<KinBody name="044_flat_screwdriver">
<Body type="static" name="044_flat_screwdriver">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 226 |
XML
| 24.22222 | 50 | 0.632743 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/035_power_drill/poisson/nontextured.xml
|
<KinBody name="035_power_drill">
<Body type="static" name="035_power_drill">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 216 |
XML
| 23.111109 | 45 | 0.615741 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/035_power_drill/tsdf/nontextured.xml
|
<KinBody name="035_power_drill">
<Body type="static" name="035_power_drill">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 216 |
XML
| 23.111109 | 45 | 0.615741 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/035_power_drill/google_16k/kinbody.xml
|
<KinBody name="power_drill">
<Body type="static" name="power_drill">
<Geom type="trimesh">
<Render>textured.dae</Render>
<Data>textured.dae</Data>
</Geom>
</Body>
</KinBody>
| 198 |
XML
| 21.111109 | 41 | 0.611111 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/073-h_lego_duplo/poisson/nontextured.xml
|
<KinBody name="073-h_lego_duplo">
<Body type="static" name="073-h_lego_duplo">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 218 |
XML
| 23.333331 | 46 | 0.610092 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/073-h_lego_duplo/tsdf/nontextured.xml
|
<KinBody name="073-h_lego_duplo">
<Body type="static" name="073-h_lego_duplo">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 218 |
XML
| 23.333331 | 46 | 0.610092 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/026_sponge/poisson/nontextured.xml
|
<KinBody name="026_sponge">
<Body type="static" name="026_sponge">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 206 |
XML
| 21.999998 | 40 | 0.606796 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/026_sponge/tsdf/nontextured.xml
|
<KinBody name="026_sponge">
<Body type="static" name="026_sponge">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 206 |
XML
| 21.999998 | 40 | 0.606796 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/026_sponge/google_16k/kinbody.xml
|
<KinBody name="sponge">
<Body type="static" name="sponge">
<Geom type="trimesh">
<Render>textured.dae</Render>
<Data>textured.dae</Data>
</Geom>
</Body>
</KinBody>
| 188 |
XML
| 19.999998 | 36 | 0.601064 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/070-a_colored_wood_blocks/poisson/nontextured.xml
|
<KinBody name="070-a_colored_wood_blocks">
<Body type="static" name="070-a_colored_wood_blocks">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 236 |
XML
| 25.333331 | 55 | 0.631356 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/070-a_colored_wood_blocks/tsdf/nontextured.xml
|
<KinBody name="070-a_colored_wood_blocks">
<Body type="static" name="070-a_colored_wood_blocks">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 236 |
XML
| 25.333331 | 55 | 0.631356 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/070-a_colored_wood_blocks/google_16k/kinbody.xml
|
<KinBody name="colored_wood_blocks">
<Body type="static" name="colored_wood_blocks">
<Geom type="trimesh">
<Render>textured.dae</Render>
<Data>textured.dae</Data>
</Geom>
</Body>
</KinBody>
| 214 |
XML
| 22.888886 | 49 | 0.630841 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/023_wine_glass/tsdf/nontextured.xml
|
<KinBody name="023_wine_glass">
<Body type="static" name="023_wine_glass">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 214 |
XML
| 22.888886 | 44 | 0.61215 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/072-d_toy_airplane/poisson/nontextured.xml
|
<KinBody name="072-d_toy_airplane">
<Body type="static" name="072-d_toy_airplane">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 222 |
XML
| 23.777775 | 48 | 0.617117 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/072-d_toy_airplane/tsdf/nontextured.xml
|
<KinBody name="072-d_toy_airplane">
<Body type="static" name="072-d_toy_airplane">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 222 |
XML
| 23.777775 | 48 | 0.617117 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/072-d_toy_airplane/google_16k/kinbody.xml
|
<KinBody name="toy_airplane">
<Body type="static" name="toy_airplane">
<Geom type="trimesh">
<Render>textured.dae</Render>
<Data>textured.dae</Data>
</Geom>
</Body>
</KinBody>
| 200 |
XML
| 21.333331 | 42 | 0.615 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/072-c_toy_airplane/poisson/nontextured.xml
|
<KinBody name="072-c_toy_airplane">
<Body type="static" name="072-c_toy_airplane">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 222 |
XML
| 23.777775 | 48 | 0.617117 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/072-c_toy_airplane/tsdf/nontextured.xml
|
<KinBody name="072-c_toy_airplane">
<Body type="static" name="072-c_toy_airplane">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 222 |
XML
| 23.777775 | 48 | 0.617117 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/072-c_toy_airplane/google_16k/kinbody.xml
|
<KinBody name="toy_airplane">
<Body type="static" name="toy_airplane">
<Geom type="trimesh">
<Render>textured.dae</Render>
<Data>textured.dae</Data>
</Geom>
</Body>
</KinBody>
| 200 |
XML
| 21.333331 | 42 | 0.615 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/065-c_cups/poisson/nontextured.xml
|
<KinBody name="065-c_cups">
<Body type="static" name="065-c_cups">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 206 |
XML
| 21.999998 | 40 | 0.597087 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/065-c_cups/tsdf/nontextured.xml
|
<KinBody name="065-c_cups">
<Body type="static" name="065-c_cups">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 206 |
XML
| 21.999998 | 40 | 0.597087 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/065-c_cups/google_16k/kinbody.xml
|
<KinBody name="cups">
<Body type="static" name="cups">
<Geom type="trimesh">
<Render>textured.dae</Render>
<Data>textured.dae</Data>
</Geom>
</Body>
</KinBody>
| 184 |
XML
| 19.555553 | 35 | 0.592391 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/049_small_clamp/tsdf/nontextured.xml
|
<KinBody name="049_small_clamp">
<Body type="static" name="049_small_clamp">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 216 |
XML
| 23.111109 | 45 | 0.615741 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/073-e_lego_duplo/poisson/nontextured.xml
|
<KinBody name="073-e_lego_duplo">
<Body type="static" name="073-e_lego_duplo">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 218 |
XML
| 23.333331 | 46 | 0.610092 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/073-e_lego_duplo/tsdf/nontextured.xml
|
<KinBody name="073-e_lego_duplo">
<Body type="static" name="073-e_lego_duplo">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 218 |
XML
| 23.333331 | 46 | 0.610092 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/073-e_lego_duplo/google_16k/kinbody.xml
|
<KinBody name="lego_duplo">
<Body type="static" name="lego_duplo">
<Geom type="trimesh">
<Render>textured.dae</Render>
<Data>textured.dae</Data>
</Geom>
</Body>
</KinBody>
| 196 |
XML
| 20.888887 | 40 | 0.607143 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/070-b_colored_wood_blocks/google_16k/kinbody.xml
|
<KinBody name="colored_wood_blocks">
<Body type="static" name="colored_wood_blocks">
<Geom type="trimesh">
<Render>textured.dae</Render>
<Data>textured.dae</Data>
</Geom>
</Body>
</KinBody>
| 214 |
XML
| 22.888886 | 49 | 0.630841 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/014_lemon/poisson/nontextured.xml
|
<KinBody name="014_lemon">
<Body type="static" name="014_lemon">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 204 |
XML
| 21.777775 | 40 | 0.602941 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/014_lemon/tsdf/nontextured.xml
|
<KinBody name="014_lemon">
<Body type="static" name="014_lemon">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 204 |
XML
| 21.777775 | 40 | 0.602941 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/014_lemon/google_16k/kinbody.xml
|
<KinBody name="lemon">
<Body type="static" name="lemon">
<Geom type="trimesh">
<Render>textured.dae</Render>
<Data>textured.dae</Data>
</Geom>
</Body>
</KinBody>
| 186 |
XML
| 19.777776 | 35 | 0.596774 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/062_dice/poisson/nontextured.xml
|
<KinBody name="062_dice">
<Body type="static" name="062_dice">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 202 |
XML
| 21.555553 | 40 | 0.59901 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/062_dice/tsdf/nontextured.xml
|
<KinBody name="062_dice">
<Body type="static" name="062_dice">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 202 |
XML
| 21.555553 | 40 | 0.59901 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/062_dice/google_16k/kinbody.xml
|
<KinBody name="dice">
<Body type="static" name="dice">
<Geom type="trimesh">
<Render>textured.dae</Render>
<Data>textured.dae</Data>
</Geom>
</Body>
</KinBody>
| 184 |
XML
| 19.555553 | 35 | 0.592391 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/052_extra_large_clamp/poisson/nontextured.xml
|
<KinBody name="052_extra_large_clamp">
<Body type="static" name="052_extra_large_clamp">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 228 |
XML
| 24.444442 | 51 | 0.627193 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/052_extra_large_clamp/tsdf/nontextured.xml
|
<KinBody name="052_extra_large_clamp">
<Body type="static" name="052_extra_large_clamp">
<Geom type="trimesh">
<Render>./nontextured.stl</Render>
<Data>./nontextured.stl</Data>
</Geom>
</Body>
</KinBody>
| 228 |
XML
| 24.444442 | 51 | 0.627193 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/dataset/origin_YCB/052_extra_large_clamp/google_16k/kinbody.xml
|
<KinBody name="extra_large_clamp">
<Body type="static" name="extra_large_clamp">
<Geom type="trimesh">
<Render>textured.dae</Render>
<Data>textured.dae</Data>
</Geom>
</Body>
</KinBody>
| 210 |
XML
| 22.444442 | 47 | 0.62381 |
gist-ailab/AILAB-isaac-sim-pick-place/lecture/checkpoint/1.py
|
https://drive.google.com/file/d/1MbkrrGytk8aqzOBoVs11tUyhMoGgQnKs/view?usp=sharing
| 83 |
Python
| 40.99998 | 82 | 0.855422 |
rosklyar/omniverse_extensions/README.md
|
# Extension Project Template
This project was automatically generated.
- `app` - It is a folder link to the location of your *Omniverse Kit* based app.
- `exts` - It is a folder where you can add new extensions. It was automatically added to the extension search path (Extension Manager -> Gear Icon -> Extension Search Path).
Open this folder using Visual Studio Code. It will suggest installing a few extensions that improve the Python experience.
Look for the "playtika.eyedarts.export" extension in the Extension Manager and enable it. Try applying changes to any Python file; it will hot-reload, and you can observe the results immediately.
Alternatively, you can launch your app from the console with this folder added to the search path and your extension enabled, e.g.:
```
> app\omni.code.bat --ext-folder exts --enable company.hello.world
```
# App Link Setup
If the `app` folder link doesn't exist or is broken, it can be created again. For a better developer experience, it is recommended to create a folder link named `app` to the *Omniverse Kit* app installed from the *Omniverse Launcher*. A convenience script is included.
Run:
```
> link_app.bat
```
If successful you should see `app` folder link in the root of this repo.
If multiple Omniverse apps are installed, the script will select the recommended one. Or you can explicitly pass an app:
```
> link_app.bat --app create
```
You can also just pass a path to create the link to:
```
> link_app.bat --path "C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4"
```
# Sharing Your Extensions
This folder is ready to be pushed to any git repository. Once pushed, a direct link to the git repository can be added to the *Omniverse Kit* extension search paths.
A link might look like this: `git://github.com/[user]/[your_repo].git?branch=main&dir=exts`
Notice that `exts` is the repo subfolder containing the extensions. More information can be found in the "Git URL as Extension Search Paths" section of the developers manual.
To add a link to your *Omniverse Kit* based app go into: Extension Manager -> Gear Icon -> Extension Search Path
| 2,048 |
Markdown
| 37.660377 | 258 | 0.757812 |
rosklyar/omniverse_extensions/tools/scripts/link_app.py
|
# Standard library imports, then third-party packages
import argparse
import json
import os
import sys

import packmanapi
import urllib3
def find_omniverse_apps():
    # Query the local Omniverse Launcher REST endpoint for installed components
    http = urllib3.PoolManager()
    try:
        r = http.request("GET", "http://127.0.0.1:33480/components")
    except Exception as e:
        print(f"Failed retrieving apps from an Omniverse Launcher, maybe it is not installed?\nError: {e}")
        sys.exit(1)
    apps = {}
    for x in json.loads(r.data.decode("utf-8")):
        # Keep only apps with an installed version, recording the launch
        # root of the settings entry that matches that version
        latest = x.get("installedVersions", {}).get("latest", "")
        if latest:
            for s in x.get("settings", []):
                if s.get("version", "") == latest:
                    root = s.get("launch", {}).get("root", "")
                    apps[x["slug"]] = (x["name"], root)
                    break
    return apps
def create_link(src, dst):
print(f"Creating a link '{src}' -> '{dst}'")
packmanapi.link(src, dst)
APP_PRIORITIES = ["code", "create", "view"]
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Create folder link to Kit App installed from Omniverse Launcher")
parser.add_argument(
"--path",
help="Path to Kit App installed from Omniverse Launcher, e.g.: 'C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4'",
required=False,
)
parser.add_argument(
"--app", help="Name of Kit App installed from Omniverse Launcher, e.g.: 'code', 'create'", required=False
)
args = parser.parse_args()
path = args.path
if not path:
print("Path is not specified, looking for Omniverse Apps...")
apps = find_omniverse_apps()
if len(apps) == 0:
print(
"Can't find any Omniverse Apps. Use Omniverse Launcher to install one. 'Code' is the recommended app for developers."
)
sys.exit(0)
print("\nFound following Omniverse Apps:")
for i, slug in enumerate(apps):
name, root = apps[slug]
print(f"{i}: {name} ({slug}) at: '{root}'")
if args.app:
selected_app = args.app.lower()
if selected_app not in apps:
choices = ", ".join(apps.keys())
print(f"Passed app: '{selected_app}' is not found. Specify one of the following found Apps: {choices}")
sys.exit(0)
else:
selected_app = next((x for x in APP_PRIORITIES if x in apps), None)
if not selected_app:
selected_app = next(iter(apps))
print(f"\nSelected app: {selected_app}")
_, path = apps[selected_app]
if not os.path.exists(path):
print(f"Provided path doesn't exist: {path}")
else:
SCRIPT_ROOT = os.path.dirname(os.path.realpath(__file__))
create_link(f"{SCRIPT_ROOT}/../../app", path)
print("Success!")
| 2,813 |
Python
| 32.5 | 133 | 0.562389 |