file_path | content | size | lang | avg_line_length | max_line_length | alphanum_fraction |
---|---|---|---|---|---|---|
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/cfg/train/FactoryTaskNutBoltPlacePPO.yaml | params:
  seed: ${...seed}

  algo:
    name: a2c_continuous

  model:
    name: continuous_a2c_logstd

  network:
    name: actor_critic
    separate: False
    space:
      continuous:
        mu_activation: None
        sigma_activation: None
        mu_init:
          name: default
        sigma_init:
          name: const_initializer
          val: 0
        fixed_sigma: True
    mlp:
      units: [256, 128, 64]
      activation: elu
      d2rl: False
      initializer:
        name: default
      regularizer:
        name: None

  load_checkpoint: ${if:${...checkpoint},True,False}
  load_path: ${...checkpoint}

  config:
    name: ${resolve_default:FactoryTaskNutBoltPlace,${....experiment}}
    full_experiment_name: ${.name}
    device: ${....rl_device}
    device_name: ${....rl_device}
    env_name: rlgpu
    multi_gpu: False
    ppo: True
    mixed_precision: True
    normalize_input: True
    normalize_value: True
    value_bootstrap: True
    num_actors: ${....task.env.numEnvs}
    reward_shaper:
      scale_value: 1.0
    normalize_advantage: True
    gamma: 0.99
    tau: 0.95
    learning_rate: 1e-4
    lr_schedule: fixed
    schedule_type: standard
    kl_threshold: 0.016
    score_to_win: 20000
    max_epochs: ${resolve_default:400,${....max_iterations}}
    save_best_after: 50
    save_frequency: 100
    print_stats: True
    grad_norm: 1.0
    entropy_coef: 0.0
    truncate_grads: False
    e_clip: 0.2
    horizon_length: 128
    minibatch_size: 512
    mini_epochs: 8
    critic_coef: 2
    clip_value: True
    seq_length: 4
    bounds_loss_coef: 0.0001
| 1,597 | YAML | 20.594594 | 70 | 0.594865 |
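A note on the `${...}` entries in these configs: they are OmegaConf interpolations resolved against the surrounding Hydra config at load time, where `${.name}` refers to the current node and each extra leading dot moves one level up the tree. `resolve_default` is a custom resolver registered by the repo (in `utils/hydra_cfg/hydra_utils.py`); a minimal sketch of how such a resolver behaves, assuming an implementation that falls back to the first argument when the override is empty:

```python
from omegaconf import OmegaConf

# Hypothetical re-creation of the resolver these configs rely on: return
# `default` when the second argument (e.g. the `experiment` override) is empty.
OmegaConf.register_new_resolver(
    "resolve_default", lambda default, arg: default if arg in ("", None) else arg
)

cfg = OmegaConf.create(
    {"experiment": "", "name": "${resolve_default:FactoryTaskNutBoltPlace,${experiment}}"}
)
print(cfg.name)  # -> FactoryTaskNutBoltPlace (the default, since experiment is empty)
```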
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/cfg/train/CartpoleCameraPPO.yaml | params:
  seed: ${...seed}

  algo:
    name: a2c_continuous

  model:
    name: continuous_a2c_logstd

  network:
    name: actor_critic
    separate: False
    space:
      continuous:
        mu_activation: None
        sigma_activation: None
        mu_init:
          name: default
        sigma_init:
          name: const_initializer
          val: 0
        fixed_sigma: True
    cnn:
      type: conv2d
      activation: relu
      initializer:
        name: default
      regularizer:
        name: None
      convs:
        - filters: 32
          kernel_size: 8
          strides: 4
          padding: 0
        - filters: 64
          kernel_size: 4
          strides: 2
          padding: 0
        - filters: 64
          kernel_size: 3
          strides: 1
          padding: 0
    mlp:
      units: [512]
      activation: elu
      initializer:
        name: default
    # rnn:
    #   name: lstm
    #   units: 128
    #   layers: 1
    #   before_mlp: False
    #   concat_input: True
    #   layer_norm: True

  load_checkpoint: ${if:${...checkpoint},True,False} # flag which sets whether to load the checkpoint
  load_path: ${...checkpoint} # path to the checkpoint to load

  config:
    name: ${resolve_default:CartpoleCamera,${....experiment}}
    full_experiment_name: ${.name}
    device: ${....rl_device}
    device_name: ${....rl_device}
    env_name: rlgpu
    multi_gpu: ${....multi_gpu}
    ppo: True
    mixed_precision: False
    normalize_input: False
    normalize_value: True
    num_actors: ${....task.env.numEnvs}
    reward_shaper:
      scale_value: 1.0 #0.1
    normalize_advantage: True
    gamma: 0.99
    tau: 0.95
    learning_rate: 1e-4
    lr_schedule: adaptive
    kl_threshold: 0.008
    score_to_win: 20000
    max_epochs: ${resolve_default:500,${....max_iterations}}
    save_best_after: 50
    save_frequency: 10
    grad_norm: 1.0
    entropy_coef: 0.0
    truncate_grads: True
    e_clip: 0.2
    horizon_length: 256
    minibatch_size: 512 #1024
    mini_epochs: 4
    critic_coef: 2
    clip_value: True
    seq_length: 4
    bounds_loss_coef: 0.0001 | 2,124 | YAML | 21.135416 | 101 | 0.556026 |
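The `convs` stack above is the classic three-layer Nature-CNN shape (8/4, 4/2, 3/1, no padding). A quick sketch of the resulting feature-map sizes, assuming a hypothetical 240x240 camera observation (the actual resolution comes from the task config, which is not shown here):

```python
def conv_out(size: int, kernel: int, stride: int, padding: int = 0) -> int:
    # standard conv arithmetic: floor((size + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

size = 240  # hypothetical input width/height
for kernel, stride in [(8, 4), (4, 2), (3, 1)]:
    size = conv_out(size, kernel, stride)
    print(size)  # 59 -> 28 -> 26

print("flattened features:", 64 * size * size)  # 64 channels * 26 * 26 = 43264
```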
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/cfg/train/AntPPO.yaml | params:
  seed: ${...seed}

  algo:
    name: a2c_continuous

  model:
    name: continuous_a2c_logstd

  network:
    name: actor_critic
    separate: False
    space:
      continuous:
        mu_activation: None
        sigma_activation: None
        mu_init:
          name: default
        sigma_init:
          name: const_initializer
          val: 0
        fixed_sigma: True
    mlp:
      units: [256, 128, 64]
      activation: elu
      d2rl: False
      initializer:
        name: default
      regularizer:
        name: None

  load_checkpoint: ${if:${...checkpoint},True,False} # flag which sets whether to load the checkpoint
  load_path: ${...checkpoint} # path to the checkpoint to load

  config:
    name: ${resolve_default:Ant,${....experiment}}
    full_experiment_name: ${.name}
    env_name: rlgpu
    device: ${....rl_device}
    device_name: ${....rl_device}
    multi_gpu: ${....multi_gpu}
    ppo: True
    mixed_precision: True
    normalize_input: True
    normalize_value: True
    value_bootstrap: True
    num_actors: ${....task.env.numEnvs}
    reward_shaper:
      scale_value: 0.01
    normalize_advantage: True
    gamma: 0.99
    tau: 0.95
    learning_rate: 3e-4
    lr_schedule: adaptive
    schedule_type: legacy
    kl_threshold: 0.008
    score_to_win: 20000
    max_epochs: ${resolve_default:500,${....max_iterations}}
    save_best_after: 100
    save_frequency: 50
    grad_norm: 1.0
    entropy_coef: 0.0
    truncate_grads: True
    e_clip: 0.2
    horizon_length: 16
    minibatch_size: 32768
    mini_epochs: 4
    critic_coef: 2
    clip_value: True
    seq_length: 4
    bounds_loss_coef: 0.0001
| 1,657 | YAML | 21.405405 | 101 | 0.594448 |
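As a sanity check on configs like this one: rl_games builds its rollout batch as `horizon_length * num_actors`, and `minibatch_size` must divide that evenly. A small sketch of the check, assuming the Ant task's default of 4096 environments (`numEnvs` lives in the task config, not shown here):

```python
horizon_length = 16
num_actors = 4096  # assumed default from the Ant task config
minibatch_size = 32768

batch_size = horizon_length * num_actors  # 65536 transitions per PPO epoch
assert batch_size % minibatch_size == 0, "minibatch_size must divide horizon_length * num_actors"
print(batch_size // minibatch_size, "minibatches per mini-epoch")  # 2
```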
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/cfg/train/FrankaCabinetPPO.yaml | params:
  seed: ${...seed}

  algo:
    name: a2c_continuous

  model:
    name: continuous_a2c_logstd

  network:
    name: actor_critic
    separate: False
    space:
      continuous:
        mu_activation: None
        sigma_activation: None
        mu_init:
          name: default
        sigma_init:
          name: const_initializer
          val: 0
        fixed_sigma: True
    mlp:
      units: [256, 128, 64]
      activation: elu
      d2rl: False
      initializer:
        name: default
      regularizer:
        name: None

  load_checkpoint: ${if:${...checkpoint},True,False} # flag which sets whether to load the checkpoint
  load_path: ${...checkpoint} # path to the checkpoint to load

  config:
    name: ${resolve_default:FrankaCabinet,${....experiment}}
    full_experiment_name: ${.name}
    env_name: rlgpu
    device: ${....rl_device}
    device_name: ${....rl_device}
    multi_gpu: ${....multi_gpu}
    ppo: True
    mixed_precision: False
    normalize_input: True
    normalize_value: True
    num_actors: ${....task.env.numEnvs}
    reward_shaper:
      scale_value: 0.01
    normalize_advantage: True
    gamma: 0.99
    tau: 0.95
    learning_rate: 5e-4
    lr_schedule: adaptive
    kl_threshold: 0.008
    score_to_win: 100000000
    max_epochs: ${resolve_default:1500,${....max_iterations}}
    save_best_after: 200
    save_frequency: 100
    print_stats: True
    grad_norm: 1.0
    entropy_coef: 0.0
    truncate_grads: True
    e_clip: 0.2
    horizon_length: 16
    minibatch_size: 8192
    mini_epochs: 8
    critic_coef: 4
    clip_value: True
    seq_length: 4
    bounds_loss_coef: 0.0001
| 1,636 | YAML | 21.736111 | 101 | 0.598411 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/cfg/train/AntSAC.yaml | params:
  seed: ${...seed}

  algo:
    name: sac

  model:
    name: soft_actor_critic

  network:
    name: soft_actor_critic
    separate: True
    space:
      continuous:
    mlp:
      units: [512, 256]
      activation: relu
      initializer:
        name: default
    log_std_bounds: [-5, 2]

  load_checkpoint: ${if:${...checkpoint},True,False} # flag which sets whether to load the checkpoint
  load_path: ${...checkpoint} # path to the checkpoint to load

  config:
    name: ${resolve_default:AntSAC,${....experiment}}
    env_name: rlgpu
    device: ${....rl_device}
    device_name: ${....rl_device}
    multi_gpu: ${....multi_gpu}
    normalize_input: True
    reward_shaper:
      scale_value: 1.0
    max_epochs: ${resolve_default:20000,${....max_iterations}}
    num_steps_per_episode: 8
    save_best_after: 100
    save_frequency: 1000
    gamma: 0.99
    init_alpha: 1.0
    alpha_lr: 0.005
    actor_lr: 0.0005
    critic_lr: 0.0005
    critic_tau: 0.005
    batch_size: 4096
    learnable_temperature: true
    num_seed_steps: 5
    num_warmup_steps: 10
    replay_buffer_size: 1000000
    num_actors: ${....task.env.numEnvs}
| 1,160 | YAML | 21.326923 | 101 | 0.601724 |
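`critic_tau: 0.005` above is the coefficient for the soft (Polyak) update of SAC's target critic. A minimal sketch of the update rule it parameterizes (not the rl_games implementation itself):

```python
import torch

def soft_update(target: torch.nn.Module, online: torch.nn.Module, tau: float = 0.005) -> None:
    # target <- tau * online + (1 - tau) * target, applied parameter-wise
    with torch.no_grad():
        for t, o in zip(target.parameters(), online.parameters()):
            t.mul_(1.0 - tau).add_(tau * o)
```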
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/cfg/train/AllegroHandPPO.yaml | params:
  seed: ${...seed}

  algo:
    name: a2c_continuous

  model:
    name: continuous_a2c_logstd

  network:
    name: actor_critic
    separate: False
    space:
      continuous:
        mu_activation: None
        sigma_activation: None
        mu_init:
          name: default
        sigma_init:
          name: const_initializer
          val: 0
        fixed_sigma: True
    mlp:
      units: [512, 256, 128]
      activation: elu
      d2rl: False
      initializer:
        name: default
      regularizer:
        name: None

  load_checkpoint: ${if:${...checkpoint},True,False}
  load_path: ${...checkpoint}

  config:
    name: ${resolve_default:AllegroHand,${....experiment}}
    full_experiment_name: ${.name}
    device: ${....rl_device}
    device_name: ${....rl_device}
    env_name: rlgpu
    multi_gpu: ${....multi_gpu}
    ppo: True
    mixed_precision: False
    normalize_input: True
    normalize_value: True
    value_bootstrap: True
    num_actors: ${....task.env.numEnvs}
    reward_shaper:
      scale_value: 0.01
    normalize_advantage: True
    gamma: 0.99
    tau: 0.95
    learning_rate: 5e-4
    lr_schedule: adaptive
    schedule_type: standard
    kl_threshold: 0.02
    score_to_win: 100000
    max_epochs: ${resolve_default:10000,${....max_iterations}}
    save_best_after: 100
    save_frequency: 200
    print_stats: True
    grad_norm: 1.0
    entropy_coef: 0.0
    truncate_grads: True
    e_clip: 0.2
    horizon_length: 16
    minibatch_size: 32768
    mini_epochs: 5
    critic_coef: 4
    clip_value: True
    seq_length: 4
    bounds_loss_coef: 0.0001

    player:
      deterministic: True
      games_num: 100000
      print_stats: True
| 1,694 | YAML | 20.455696 | 62 | 0.590909 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/cfg/train/AnymalPPO.yaml | params:
  seed: ${...seed}

  algo:
    name: a2c_continuous

  model:
    name: continuous_a2c_logstd

  network:
    name: actor_critic
    separate: False
    space:
      continuous:
        mu_activation: None
        sigma_activation: None
        mu_init:
          name: default
        sigma_init:
          name: const_initializer
          val: 0. # std = 1.
        fixed_sigma: True
    mlp:
      units: [256, 128, 64]
      activation: elu
      d2rl: False
      initializer:
        name: default
      regularizer:
        name: None

  load_checkpoint: ${if:${...checkpoint},True,False} # flag which sets whether to load the checkpoint
  load_path: ${...checkpoint} # path to the checkpoint to load

  config:
    name: ${resolve_default:Anymal,${....experiment}}
    full_experiment_name: ${.name}
    device: ${....rl_device}
    device_name: ${....rl_device}
    env_name: rlgpu
    multi_gpu: ${....multi_gpu}
    ppo: True
    mixed_precision: True
    normalize_input: True
    normalize_value: True
    value_bootstrap: True
    num_actors: ${....task.env.numEnvs}
    reward_shaper:
      scale_value: 1.0
    normalize_advantage: True
    gamma: 0.99
    tau: 0.95
    e_clip: 0.2
    entropy_coef: 0.0
    learning_rate: 3.e-4 # overwritten by adaptive lr_schedule
    lr_schedule: adaptive
    kl_threshold: 0.008 # target kl for adaptive lr
    truncate_grads: True
    grad_norm: 1.
    horizon_length: 24
    minibatch_size: 32768
    mini_epochs: 5
    critic_coef: 2
    clip_value: True
    seq_length: 4 # only for rnn
    bounds_loss_coef: 0.001
    max_epochs: ${resolve_default:1000,${....max_iterations}}
    save_best_after: 200
    score_to_win: 20000
    save_frequency: 50
    print_stats: True
| 1,744 | YAML | 21.960526 | 101 | 0.600917 |
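The `# std = 1.` comment above follows from the `continuous_a2c_logstd` model: the sigma head works in log space, so a constant initializer of 0 with `fixed_sigma: True` gives a state-independent std of exp(0) = 1. A minimal sketch of that parameterization (shapes assumed; not the rl_games module itself):

```python
import torch

num_actions = 12  # assumed: Anymal has 12 actuated joints
mu = torch.zeros(num_actions)       # mean head output (illustrative)
log_std = torch.zeros(num_actions)  # const_initializer val: 0, fixed across states
dist = torch.distributions.Normal(mu, log_std.exp())  # std = exp(0) = 1
action = dist.sample()
```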
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/cfg/train/CartpolePPO.yaml | params:
  seed: ${...seed}

  algo:
    name: a2c_continuous

  model:
    name: continuous_a2c_logstd

  network:
    name: actor_critic
    separate: False
    space:
      continuous:
        mu_activation: None
        sigma_activation: None
        mu_init:
          name: default
        sigma_init:
          name: const_initializer
          val: 0
        fixed_sigma: True
    mlp:
      units: [32, 32]
      activation: elu
      initializer:
        name: default
      regularizer:
        name: None

  load_checkpoint: ${if:${...checkpoint},True,False} # flag which sets whether to load the checkpoint
  load_path: ${...checkpoint} # path to the checkpoint to load

  config:
    name: ${resolve_default:Cartpole,${....experiment}}
    full_experiment_name: ${.name}
    device: ${....rl_device}
    device_name: ${....rl_device}
    env_name: rlgpu
    multi_gpu: ${....multi_gpu}
    ppo: True
    mixed_precision: False
    normalize_input: True
    normalize_value: True
    num_actors: ${....task.env.numEnvs}
    reward_shaper:
      scale_value: 0.1
    normalize_advantage: True
    gamma: 0.99
    tau: 0.95
    learning_rate: 3e-4
    lr_schedule: adaptive
    kl_threshold: 0.008
    score_to_win: 20000
    max_epochs: ${resolve_default:100,${....max_iterations}}
    save_best_after: 50
    save_frequency: 25
    grad_norm: 1.0
    entropy_coef: 0.0
    truncate_grads: True
    e_clip: 0.2
    horizon_length: 16
    minibatch_size: 8192
    mini_epochs: 8
    critic_coef: 4
    clip_value: True
    seq_length: 4
    bounds_loss_coef: 0.0001 | 1,583 | YAML | 21.628571 | 101 | 0.593178 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/cfg/train/FactoryTaskNutBoltPickPPO.yaml | params:
  seed: ${...seed}

  algo:
    name: a2c_continuous

  model:
    name: continuous_a2c_logstd

  network:
    name: actor_critic
    separate: False
    space:
      continuous:
        mu_activation: None
        sigma_activation: None
        mu_init:
          name: default
        sigma_init:
          name: const_initializer
          val: 0
        fixed_sigma: True
    mlp:
      units: [256, 128, 64]
      activation: elu
      d2rl: False
      initializer:
        name: default
      regularizer:
        name: None

  load_checkpoint: ${if:${...checkpoint},True,False}
  load_path: ${...checkpoint}

  config:
    name: ${resolve_default:FactoryTaskNutBoltPick,${....experiment}}
    full_experiment_name: ${.name}
    device: ${....rl_device}
    device_name: ${....rl_device}
    env_name: rlgpu
    multi_gpu: False
    ppo: True
    mixed_precision: True
    normalize_input: True
    normalize_value: True
    value_bootstrap: True
    num_actors: ${....task.env.numEnvs}
    reward_shaper:
      scale_value: 1.0
    normalize_advantage: True
    gamma: 0.99
    tau: 0.95
    learning_rate: 1e-4
    lr_schedule: fixed
    schedule_type: standard
    kl_threshold: 0.016
    score_to_win: 20000
    max_epochs: ${resolve_default:200,${....max_iterations}}
    save_best_after: 50
    save_frequency: 100
    print_stats: True
    grad_norm: 1.0
    entropy_coef: 0.0
    truncate_grads: False
    e_clip: 0.2
    horizon_length: 128
    minibatch_size: 512
    mini_epochs: 8
    critic_coef: 2
    clip_value: True
    seq_length: 4
    bounds_loss_coef: 0.0001
| 1,596 | YAML | 20.581081 | 69 | 0.594612 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/scripts/rlgames_demo.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import datetime
import os
import gym
import hydra
import torch
from omegaconf import DictConfig
import omniisaacgymenvs
from omniisaacgymenvs.envs.vec_env_rlgames import VecEnvRLGames
from omniisaacgymenvs.scripts.rlgames_train import RLGTrainer
from omniisaacgymenvs.utils.config_utils.path_utils import retrieve_checkpoint_path
from omniisaacgymenvs.utils.demo_util import initialize_demo
from omniisaacgymenvs.utils.hydra_cfg.hydra_utils import *
from omniisaacgymenvs.utils.hydra_cfg.reformat import omegaconf_to_dict, print_dict
class RLGDemo(RLGTrainer):
    def __init__(self, cfg, cfg_dict):
        RLGTrainer.__init__(self, cfg, cfg_dict)
        self.cfg.test = True


@hydra.main(version_base=None, config_name="config", config_path="../cfg")
def parse_hydra_configs(cfg: DictConfig):
    time_str = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")

    headless = cfg.headless
    env = VecEnvRLGames(headless=headless, sim_device=cfg.device_id, enable_livestream=cfg.enable_livestream)

    # parse experiment directory
    module_path = os.path.abspath(os.path.join(os.path.dirname(omniisaacgymenvs.__file__)))
    experiment_dir = os.path.join(module_path, "runs", cfg.train.params.config.name)

    # use gym RecordVideo wrapper for viewport recording
    if cfg.enable_recording:
        if cfg.recording_dir == '':
            videos_dir = os.path.join(experiment_dir, "videos")
        else:
            videos_dir = cfg.recording_dir
        video_interval = lambda step: step % cfg.recording_interval == 0
        video_length = cfg.recording_length
        env.is_vector_env = True
        if env.metadata is None:
            env.metadata = {"render_modes": ["rgb_array"], "render_fps": cfg.recording_fps}
        else:
            env.metadata["render_modes"] = ["rgb_array"]
            env.metadata["render_fps"] = cfg.recording_fps
        env = gym.wrappers.RecordVideo(
            env, video_folder=videos_dir, step_trigger=video_interval, video_length=video_length
        )

    # ensure checkpoints can be specified as relative paths
    if cfg.checkpoint:
        cfg.checkpoint = retrieve_checkpoint_path(cfg.checkpoint)
        if cfg.checkpoint is None:
            quit()

    cfg_dict = omegaconf_to_dict(cfg)
    print_dict(cfg_dict)

    # sets seed. if seed is -1 will pick a random one
    from omni.isaac.core.utils.torch.maths import set_seed
    cfg.seed = set_seed(cfg.seed, torch_deterministic=cfg.torch_deterministic)
    cfg_dict["seed"] = cfg.seed

    task = initialize_demo(cfg_dict, env)

    if cfg.wandb_activate:
        # Make sure to install WandB if you actually use this.
        import wandb

        run_name = f"{cfg.wandb_name}_{time_str}"

        wandb.init(
            project=cfg.wandb_project,
            group=cfg.wandb_group,
            entity=cfg.wandb_entity,
            config=cfg_dict,
            sync_tensorboard=True,
            id=run_name,
            resume="allow",
            monitor_gym=True,
        )

    rlg_trainer = RLGDemo(cfg, cfg_dict)
    rlg_trainer.launch_rlg_hydra(env)
    rlg_trainer.run(module_path, experiment_dir)
    env.close()

    if cfg.wandb_activate:
        wandb.finish()


if __name__ == "__main__":
    parse_hydra_configs()
| 4,814 | Python | 37.52 | 109 | 0.701703 |
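The `step_trigger` lambda passed to `gym.wrappers.RecordVideo` above starts a recording whenever the global step count hits a multiple of `recording_interval`, and each clip then runs for `video_length` steps. A tiny sketch of that trigger logic in isolation (the values are hypothetical; the real ones come from `cfg.recording_interval` and `cfg.recording_length`):

```python
recording_interval = 2000  # hypothetical value of cfg.recording_interval
video_length = 100         # hypothetical value of cfg.recording_length

video_interval = lambda step: step % recording_interval == 0

# steps at which RecordVideo would begin a new clip
starts = [s for s in range(10_000) if video_interval(s)]
print(starts)  # [0, 2000, 4000, 6000, 8000], each clip lasting video_length steps
```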
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/scripts/rlgames_train.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import datetime
import os
import gym
import hydra
import torch
from omegaconf import DictConfig
import omniisaacgymenvs
from omniisaacgymenvs.envs.vec_env_rlgames import VecEnvRLGames
from omniisaacgymenvs.utils.config_utils.path_utils import retrieve_checkpoint_path, get_experience
from omniisaacgymenvs.utils.hydra_cfg.hydra_utils import *
from omniisaacgymenvs.utils.hydra_cfg.reformat import omegaconf_to_dict, print_dict
from omniisaacgymenvs.utils.rlgames.rlgames_utils import RLGPUAlgoObserver, RLGPUEnv
from omniisaacgymenvs.utils.task_util import initialize_task
from rl_games.common import env_configurations, vecenv
from rl_games.torch_runner import Runner
class RLGTrainer:
    def __init__(self, cfg, cfg_dict):
        self.cfg = cfg
        self.cfg_dict = cfg_dict

    def launch_rlg_hydra(self, env):
        # `create_rlgpu_env` is environment construction function which is passed to RL Games and called internally.
        # We use the helper function here to specify the environment config.
        self.cfg_dict["task"]["test"] = self.cfg.test

        # register the rl-games adapter to use inside the runner
        vecenv.register("RLGPU", lambda config_name, num_actors, **kwargs: RLGPUEnv(config_name, num_actors, **kwargs))
        env_configurations.register("rlgpu", {"vecenv_type": "RLGPU", "env_creator": lambda **kwargs: env})

        self.rlg_config_dict = omegaconf_to_dict(self.cfg.train)

    def run(self, module_path, experiment_dir):
        self.rlg_config_dict["params"]["config"]["train_dir"] = os.path.join(module_path, "runs")

        # create runner and set the settings
        runner = Runner(RLGPUAlgoObserver())
        runner.load(self.rlg_config_dict)
        runner.reset()

        # dump config dict
        os.makedirs(experiment_dir, exist_ok=True)
        with open(os.path.join(experiment_dir, "config.yaml"), "w") as f:
            f.write(OmegaConf.to_yaml(self.cfg))

        runner.run(
            {"train": not self.cfg.test, "play": self.cfg.test, "checkpoint": self.cfg.checkpoint, "sigma": None}
        )


@hydra.main(version_base=None, config_name="config", config_path="../cfg")
def parse_hydra_configs(cfg: DictConfig):
    time_str = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")

    headless = cfg.headless

    # local rank (GPU id) in a current multi-gpu mode
    local_rank = int(os.getenv("LOCAL_RANK", "0"))
    # global rank (GPU id) in multi-gpu multi-node mode
    global_rank = int(os.getenv("RANK", "0"))
    if cfg.multi_gpu:
        cfg.device_id = local_rank
        cfg.rl_device = f'cuda:{local_rank}'
    enable_viewport = "enable_cameras" in cfg.task.sim and cfg.task.sim.enable_cameras

    # select kit app file
    experience = get_experience(headless, cfg.enable_livestream, enable_viewport, cfg.enable_recording, cfg.kit_app)

    env = VecEnvRLGames(
        headless=headless,
        sim_device=cfg.device_id,
        enable_livestream=cfg.enable_livestream,
        enable_viewport=enable_viewport or cfg.enable_recording,
        experience=experience
    )

    # parse experiment directory
    module_path = os.path.abspath(os.path.join(os.path.dirname(omniisaacgymenvs.__file__)))
    experiment_dir = os.path.join(module_path, "runs", cfg.train.params.config.name)

    # use gym RecordVideo wrapper for viewport recording
    if cfg.enable_recording:
        if cfg.recording_dir == '':
            videos_dir = os.path.join(experiment_dir, "videos")
        else:
            videos_dir = cfg.recording_dir
        video_interval = lambda step: step % cfg.recording_interval == 0
        video_length = cfg.recording_length
        env.is_vector_env = True
        if env.metadata is None:
            env.metadata = {"render_modes": ["rgb_array"], "render_fps": cfg.recording_fps}
        else:
            env.metadata["render_modes"] = ["rgb_array"]
            env.metadata["render_fps"] = cfg.recording_fps
        env = gym.wrappers.RecordVideo(
            env, video_folder=videos_dir, step_trigger=video_interval, video_length=video_length
        )

    # ensure checkpoints can be specified as relative paths
    if cfg.checkpoint:
        cfg.checkpoint = retrieve_checkpoint_path(cfg.checkpoint)
        if cfg.checkpoint is None:
            quit()

    cfg_dict = omegaconf_to_dict(cfg)
    print_dict(cfg_dict)

    # sets seed. if seed is -1 will pick a random one
    from omni.isaac.core.utils.torch.maths import set_seed
    cfg.seed = cfg.seed + global_rank if cfg.seed != -1 else cfg.seed
    cfg.seed = set_seed(cfg.seed, torch_deterministic=cfg.torch_deterministic)
    cfg_dict["seed"] = cfg.seed

    task = initialize_task(cfg_dict, env)

    if cfg.wandb_activate and global_rank == 0:
        # Make sure to install WandB if you actually use this.
        import wandb

        run_name = f"{cfg.wandb_name}_{time_str}"

        wandb.init(
            project=cfg.wandb_project,
            group=cfg.wandb_group,
            entity=cfg.wandb_entity,
            config=cfg_dict,
            sync_tensorboard=True,
            name=run_name,
            resume="allow",
        )

    torch.cuda.set_device(local_rank)
    rlg_trainer = RLGTrainer(cfg, cfg_dict)
    rlg_trainer.launch_rlg_hydra(env)
    rlg_trainer.run(module_path, experiment_dir)
    env.close()

    if cfg.wandb_activate and global_rank == 0:
        wandb.finish()


if __name__ == "__main__":
    parse_hydra_configs()
| 7,007 | Python | 39.045714 | 119 | 0.687455 |
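The rank handling above (`LOCAL_RANK`/`RANK` plus the per-rank seed offset) matches a torch.distributed-style launch, where each worker process is given those environment variables by the launcher. A sketch of the offset logic in isolation, which keeps rollouts decorrelated across workers while `seed=-1` still means "pick a random seed" (values illustrative):

```python
import os

seed = 42  # illustrative base seed from the config
global_rank = int(os.getenv("RANK", "0"))  # set by the distributed launcher

# each worker gets its own seed unless the user asked for a random one (-1)
seed = seed + global_rank if seed != -1 else seed
```

Under a hypothetical two-GPU `torchrun --nproc_per_node=2` launch, ranks 0 and 1 would then train with seeds 42 and 43 respectively.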
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/scripts/random_policy.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import gym
import hydra
from omegaconf import DictConfig
import os
import time
import numpy as np
import torch
import omniisaacgymenvs
from omniisaacgymenvs.envs.vec_env_rlgames import VecEnvRLGames
from omniisaacgymenvs.utils.config_utils.path_utils import get_experience
from omniisaacgymenvs.utils.hydra_cfg.hydra_utils import *
from omniisaacgymenvs.utils.hydra_cfg.reformat import omegaconf_to_dict, print_dict
from omniisaacgymenvs.utils.task_util import initialize_task
@hydra.main(version_base=None, config_name="config", config_path="../cfg")
def parse_hydra_configs(cfg: DictConfig):
    cfg_dict = omegaconf_to_dict(cfg)
    print_dict(cfg_dict)

    headless = cfg.headless
    render = not headless
    enable_viewport = "enable_cameras" in cfg.task.sim and cfg.task.sim.enable_cameras

    # select kit app file
    experience = get_experience(headless, cfg.enable_livestream, enable_viewport, cfg.enable_recording, cfg.kit_app)

    env = VecEnvRLGames(
        headless=headless,
        sim_device=cfg.device_id,
        enable_livestream=cfg.enable_livestream,
        enable_viewport=enable_viewport or cfg.enable_recording,
        experience=experience
    )

    # parse experiment directory
    module_path = os.path.abspath(os.path.join(os.path.dirname(omniisaacgymenvs.__file__)))
    experiment_dir = os.path.join(module_path, "runs", cfg.train.params.config.name)

    # use gym RecordVideo wrapper for viewport recording
    if cfg.enable_recording:
        if cfg.recording_dir == '':
            videos_dir = os.path.join(experiment_dir, "videos")
        else:
            videos_dir = cfg.recording_dir
        video_interval = lambda step: step % cfg.recording_interval == 0
        video_length = cfg.recording_length
        env.is_vector_env = True
        if env.metadata is None:
            env.metadata = {"render_modes": ["rgb_array"], "render_fps": cfg.recording_fps}
        else:
            env.metadata["render_modes"] = ["rgb_array"]
            env.metadata["render_fps"] = cfg.recording_fps
        env = gym.wrappers.RecordVideo(
            env, video_folder=videos_dir, step_trigger=video_interval, video_length=video_length
        )

    # sets seed. if seed is -1 will pick a random one
    from omni.isaac.core.utils.torch.maths import set_seed
    cfg.seed = set_seed(cfg.seed, torch_deterministic=cfg.torch_deterministic)
    cfg_dict["seed"] = cfg.seed

    task = initialize_task(cfg_dict, env)

    num_frames = 0
    first_frame = True
    prev_time = time.time()
    while env.simulation_app.is_running():
        if env.world.is_playing():
            if first_frame:
                env.reset()
                prev_time = time.time()
                first_frame = False

            # get upper and lower bounds of action space, sample actions randomly on this interval
            # (note: this mapping covers [-action_high, -action_low], which equals
            # [action_low, action_high] only for symmetric spaces where low == -high)
            action_high = env.action_space.high[0]
            action_low = env.action_space.low[0]
            actions = (action_high - action_low) * torch.rand(env.num_envs, env.action_space.shape[0], device=task.rl_device) - action_high

            if time.time() - prev_time >= 1:
                print("FPS:", num_frames, "FPS * num_envs:", env.num_envs * num_frames)
                num_frames = 0
                prev_time = time.time()
            else:
                num_frames += 1

            env.step(actions)
        else:
            env.world.step(render=render)

    env.simulation_app.close()
| 5,069 | Python | 38.92126 | 139 | 0.688301 |
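As noted in the comment above, the script's sampling expression only covers the full action interval for symmetric bounds. A general-bounds alternative, shown as a sketch with the same tensor shapes the script assumes:

```python
import torch

def sample_uniform_actions(low: float, high: float, num_envs: int,
                           num_actions: int, device: str) -> torch.Tensor:
    # low + (high - low) * U[0, 1) covers [low, high) for any bounds, symmetric or not
    return low + (high - low) * torch.rand(num_envs, num_actions, device=device)
```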
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/demos/anymal_terrain.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from omniisaacgymenvs.tasks.anymal_terrain import AnymalTerrainTask, wrap_to_pi
from omni.isaac.core.utils.prims import get_prim_at_path
from omni.isaac.core.utils.stage import get_current_stage
from omni.isaac.core.utils.torch.rotations import *
from omni.isaac.core.utils.torch.transformations import tf_combine
import numpy as np
import torch
import math
import omni
import carb
from omni.kit.viewport.utility.camera_state import ViewportCameraState
from omni.kit.viewport.utility import get_viewport_from_window_name
from pxr import Gf, Sdf
class AnymalTerrainDemo(AnymalTerrainTask):
    def __init__(
        self,
        name,
        sim_config,
        env,
        offset=None
    ) -> None:
        max_num_envs = 128
        if sim_config.task_config["env"]["numEnvs"] >= max_num_envs:
            print(f"num_envs reduced to {max_num_envs} for this demo.")
            sim_config.task_config["env"]["numEnvs"] = max_num_envs
        sim_config.task_config["env"]["learn"]["episodeLength_s"] = 120
        AnymalTerrainTask.__init__(self, name, sim_config, env)
        self.add_noise = False
        self.knee_threshold = 0.05

        self.create_camera()
        self._current_command = [0.0, 0.0, 0.0, 0.0]
        self.set_up_keyboard()
        self._prim_selection = omni.usd.get_context().get_selection()
        self._selected_id = None
        self._previous_selected_id = None
        return

    def create_camera(self):
        stage = omni.usd.get_context().get_stage()
        self.view_port = get_viewport_from_window_name("Viewport")
        # Create camera
        self.camera_path = "/World/Camera"
        self.perspective_path = "/OmniverseKit_Persp"
        camera_prim = stage.DefinePrim(self.camera_path, "Camera")
        camera_prim.GetAttribute("focalLength").Set(8.5)
        coi_prop = camera_prim.GetProperty("omni:kit:centerOfInterest")
        if not coi_prop or not coi_prop.IsValid():
            camera_prim.CreateAttribute(
                "omni:kit:centerOfInterest", Sdf.ValueTypeNames.Vector3d, True, Sdf.VariabilityUniform
            ).Set(Gf.Vec3d(0, 0, -10))
        self.view_port.set_active_camera(self.perspective_path)

    def set_up_keyboard(self):
        self._input = carb.input.acquire_input_interface()
        self._keyboard = omni.appwindow.get_default_app_window().get_keyboard()
        self._sub_keyboard = self._input.subscribe_to_keyboard_events(self._keyboard, self._on_keyboard_event)
        T = 1
        R = 1
        self._key_to_control = {
            "UP": [T, 0.0, 0.0, 0.0],
            "DOWN": [-T, 0.0, 0.0, 0.0],
            "LEFT": [0.0, T, 0.0, 0.0],
            "RIGHT": [0.0, -T, 0.0, 0.0],
            "Z": [0.0, 0.0, R, 0.0],
            "X": [0.0, 0.0, -R, 0.0],
        }

    def _on_keyboard_event(self, event, *args, **kwargs):
        if event.type == carb.input.KeyboardEventType.KEY_PRESS:
            if event.input.name in self._key_to_control:
                self._current_command = self._key_to_control[event.input.name]
            elif event.input.name == "ESCAPE":
                self._prim_selection.clear_selected_prim_paths()
            elif event.input.name == "C":
                if self._selected_id is not None:
                    if self.view_port.get_active_camera() == self.camera_path:
                        self.view_port.set_active_camera(self.perspective_path)
                    else:
                        self.view_port.set_active_camera(self.camera_path)
        elif event.type == carb.input.KeyboardEventType.KEY_RELEASE:
            self._current_command = [0.0, 0.0, 0.0, 0.0]

    def update_selected_object(self):
        self._previous_selected_id = self._selected_id
        selected_prim_paths = self._prim_selection.get_selected_prim_paths()
        if len(selected_prim_paths) == 0:
            self._selected_id = None
            self.view_port.set_active_camera(self.perspective_path)
        elif len(selected_prim_paths) > 1:
            print("Multiple prims are selected. Please only select one!")
        else:
            prim_splitted_path = selected_prim_paths[0].split("/")
            if len(prim_splitted_path) >= 4 and prim_splitted_path[3][0:4] == "env_":
                self._selected_id = int(prim_splitted_path[3][4:])
                if self._previous_selected_id != self._selected_id:
                    self.view_port.set_active_camera(self.camera_path)
                self._update_camera()
            else:
                print("The selected prim was not an Anymal")

        if self._previous_selected_id is not None and self._previous_selected_id != self._selected_id:
            self.commands[self._previous_selected_id, 0] = np.random.uniform(self.command_x_range[0], self.command_x_range[1])
            self.commands[self._previous_selected_id, 1] = np.random.uniform(self.command_y_range[0], self.command_y_range[1])
            self.commands[self._previous_selected_id, 2] = 0.0

    def _update_camera(self):
        base_pos = self.base_pos[self._selected_id, :].clone()
        base_quat = self.base_quat[self._selected_id, :].clone()

        camera_local_transform = torch.tensor([-1.8, 0.0, 0.6], device=self.device)
        camera_pos = quat_apply(base_quat, camera_local_transform) + base_pos

        camera_state = ViewportCameraState(self.camera_path, self.view_port)
        eye = Gf.Vec3d(camera_pos[0].item(), camera_pos[1].item(), camera_pos[2].item())
        target = Gf.Vec3d(base_pos[0].item(), base_pos[1].item(), base_pos[2].item()+0.6)
        camera_state.set_position_world(eye, True)
        camera_state.set_target_world(target, True)

    def post_physics_step(self):
        self.progress_buf[:] += 1

        self.refresh_dof_state_tensors()
        self.refresh_body_state_tensors()

        self.update_selected_object()

        self.common_step_counter += 1
        if self.common_step_counter % self.push_interval == 0:
            self.push_robots()

        # prepare quantities
        self.base_lin_vel = quat_rotate_inverse(self.base_quat, self.base_velocities[:, 0:3])
        self.base_ang_vel = quat_rotate_inverse(self.base_quat, self.base_velocities[:, 3:6])
        self.projected_gravity = quat_rotate_inverse(self.base_quat, self.gravity_vec)
        forward = quat_apply(self.base_quat, self.forward_vec)
        heading = torch.atan2(forward[:, 1], forward[:, 0])
        self.commands[:, 2] = torch.clip(0.5*wrap_to_pi(self.commands[:, 3] - heading), -1., 1.)

        self.check_termination()
        if self._selected_id is not None:
            self.commands[self._selected_id, :] = torch.tensor(self._current_command, device=self.device)
            self.timeout_buf[self._selected_id] = 0
            self.reset_buf[self._selected_id] = 0
        self.get_states()

        env_ids = self.reset_buf.nonzero(as_tuple=False).flatten()
        if len(env_ids) > 0:
            self.reset_idx(env_ids)

        self.get_observations()
        if self.add_noise:
            self.obs_buf += (2 * torch.rand_like(self.obs_buf) - 1) * self.noise_scale_vec

        self.last_actions[:] = self.actions[:]
        self.last_dof_vel[:] = self.dof_vel[:]

        return self.obs_buf, self.rew_buf, self.reset_buf, self.extras | 8,841 | Python | 44.577319 | 126 | 0.636127 |
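`wrap_to_pi` in `post_physics_step` above keeps the heading error in `[-pi, pi]`, so the clipped yaw command never steers the robot the long way around. A minimal sketch of that wrapping (the real helper is imported from `tasks.anymal_terrain` and may differ in detail):

```python
import math
import torch

def wrap_to_pi(angles: torch.Tensor) -> torch.Tensor:
    # shift into [0, 2*pi), then back into [-pi, pi)
    return (angles + math.pi) % (2 * math.pi) - math.pi
```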
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/tests/__init__.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from .runner import * | 1,580 | Python | 53.51724 | 80 | 0.783544 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/tests/runner.py | # Copyright (c) 2018-2023, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
import asyncio
from datetime import date
import sys
import unittest
import weakref
import omni.kit.test
from omni.kit.test import AsyncTestSuite
from omni.kit.test.async_unittest import AsyncTextTestRunner
import omni.ui as ui
from omni.isaac.ui.menu import make_menu_item_description
from omni.isaac.ui.ui_utils import btn_builder
from omni.kit.menu.utils import MenuItemDescription, add_menu_items
import omni.timeline
import omni.usd
from omniisaacgymenvs import RLExtension, get_instance
class GymRLTests(omni.kit.test.AsyncTestCase):
    def __init__(self, *args, **kwargs):
        super(GymRLTests, self).__init__(*args, **kwargs)
        self.ext = get_instance()

    async def _train(self, task, load=True, experiment=None, max_iterations=None):
        task_idx = self.ext._task_list.index(task)
        self.ext._task_dropdown.get_item_value_model().set_value(task_idx)
        if load:
            self.ext._on_load_world()
            while True:
                _, files_loaded, total_files = omni.usd.get_context().get_stage_loading_status()
                if files_loaded or total_files:
                    await omni.kit.app.get_app().next_update_async()
                else:
                    break
            for _ in range(100):
                await omni.kit.app.get_app().next_update_async()
        self.ext._render_dropdown.get_item_value_model().set_value(2)

        overrides = None
        if experiment is not None:
            overrides = [f"experiment={experiment}"]
        if max_iterations is not None:
            if overrides is None:
                overrides = [f"max_iterations={max_iterations}"]
            else:
                overrides += [f"max_iterations={max_iterations}"]

        await self.ext._on_train_async(overrides=overrides)

    async def test_train(self):
        date_str = date.today()
        tasks = self.ext._task_list
        for task in tasks:
            await self._train(task, load=True, experiment=f"{task}_{date_str}")

    async def test_train_determinism(self):
        date_str = date.today()
        tasks = self.ext._task_list
        for task in tasks:
            for i in range(3):
                await self._train(task, load=(i==0), experiment=f"{task}_{date_str}_{i}", max_iterations=100)


class TestRunner():
    def __init__(self):
        self._build_ui()

    def _build_ui(self):
        menu_items = [make_menu_item_description("RL Examples Tests", "RL Examples Tests", lambda a=weakref.proxy(self): a._menu_callback())]
        add_menu_items(menu_items, "Isaac Examples")

        self._window = omni.ui.Window(
            "RL Examples Tests", width=250, height=0, visible=True, dockPreference=ui.DockPreference.LEFT_BOTTOM
        )
        with self._window.frame:
            main_stack = ui.VStack(spacing=5, height=0)
            with main_stack:
                dict = {
                    "label": "Run Tests",
                    "type": "button",
                    "text": "Run Tests",
                    "tooltip": "Run all tests",
                    "on_clicked_fn": self._run_tests,
                }
                btn_builder(**dict)

    def _menu_callback(self):
        self._window.visible = not self._window.visible

    def _run_tests(self):
        loader = unittest.TestLoader()
        loader.SuiteClass = AsyncTestSuite
        test_suite = AsyncTestSuite()
        test_suite.addTests(loader.loadTestsFromTestCase(GymRLTests))
        test_runner = AsyncTextTestRunner(verbosity=2, stream=sys.stdout)

        async def single_run():
            await test_runner.run(test_suite)

        print("=======================================")
        print(f"Running Tests")
        print("=======================================")
        asyncio.ensure_future(single_run())


TestRunner() | 4,254 | Python | 35.059322 | 141 | 0.607428 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/utils/demo_util.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
def initialize_demo(config, env, init_sim=True):
    from omniisaacgymenvs.demos.anymal_terrain import AnymalTerrainDemo

    # Mappings from strings to environments
    task_map = {
        "AnymalTerrain": AnymalTerrainDemo,
    }

    from omniisaacgymenvs.utils.config_utils.sim_config import SimConfig

    sim_config = SimConfig(config)
    cfg = sim_config.config
    task = task_map[cfg["task_name"]](
        name=cfg["task_name"], sim_config=sim_config, env=env
    )

    env.set_task(task=task, sim_params=sim_config.get_physics_params(), backend="torch", init_sim=init_sim)

    return task | 2,167 | Python | 44.166666 | 107 | 0.757268 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/utils/task_util.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
def import_tasks():
    from omniisaacgymenvs.tasks.allegro_hand import AllegroHandTask
    from omniisaacgymenvs.tasks.ant import AntLocomotionTask
    from omniisaacgymenvs.tasks.anymal import AnymalTask
    from omniisaacgymenvs.tasks.anymal_terrain import AnymalTerrainTask
    from omniisaacgymenvs.tasks.ball_balance import BallBalanceTask
    from omniisaacgymenvs.tasks.cartpole import CartpoleTask
    from omniisaacgymenvs.tasks.cartpole_camera import CartpoleCameraTask
    from omniisaacgymenvs.tasks.crazyflie import CrazyflieTask
    from omniisaacgymenvs.tasks.factory.factory_task_nut_bolt_pick import FactoryTaskNutBoltPick
    from omniisaacgymenvs.tasks.factory.factory_task_nut_bolt_place import FactoryTaskNutBoltPlace
    from omniisaacgymenvs.tasks.factory.factory_task_nut_bolt_screw import FactoryTaskNutBoltScrew
    from omniisaacgymenvs.tasks.franka_cabinet import FrankaCabinetTask
    from omniisaacgymenvs.tasks.franka_deformable import FrankaDeformableTask
    from omniisaacgymenvs.tasks.humanoid import HumanoidLocomotionTask
    from omniisaacgymenvs.tasks.ingenuity import IngenuityTask
    from omniisaacgymenvs.tasks.quadcopter import QuadcopterTask
    from omniisaacgymenvs.tasks.shadow_hand import ShadowHandTask
    from omniisaacgymenvs.tasks.guarddog import GuarddogTask

    from omniisaacgymenvs.tasks.warp.ant import AntLocomotionTask as AntLocomotionTaskWarp
    from omniisaacgymenvs.tasks.warp.cartpole import CartpoleTask as CartpoleTaskWarp
    from omniisaacgymenvs.tasks.warp.humanoid import HumanoidLocomotionTask as HumanoidLocomotionTaskWarp

    # Mappings from strings to environments
    task_map = {
        "AllegroHand": AllegroHandTask,
        "Ant": AntLocomotionTask,
        "Anymal": AnymalTask,
        "AnymalTerrain": AnymalTerrainTask,
        "BallBalance": BallBalanceTask,
        "Cartpole": CartpoleTask,
        "CartpoleCamera": CartpoleCameraTask,
        "FactoryTaskNutBoltPick": FactoryTaskNutBoltPick,
        "FactoryTaskNutBoltPlace": FactoryTaskNutBoltPlace,
        "FactoryTaskNutBoltScrew": FactoryTaskNutBoltScrew,
        "FrankaCabinet": FrankaCabinetTask,
        "FrankaDeformable": FrankaDeformableTask,
        "Humanoid": HumanoidLocomotionTask,
        "Ingenuity": IngenuityTask,
        "Quadcopter": QuadcopterTask,
        "Crazyflie": CrazyflieTask,
        "ShadowHand": ShadowHandTask,
        "ShadowHandOpenAI_FF": ShadowHandTask,
        "ShadowHandOpenAI_LSTM": ShadowHandTask,
        "Guarddog": GuarddogTask,
    }

    task_map_warp = {
        "Cartpole": CartpoleTaskWarp,
        "Ant": AntLocomotionTaskWarp,
        "Humanoid": HumanoidLocomotionTaskWarp
    }

    return task_map, task_map_warp


def initialize_task(config, env, init_sim=True):
    from omniisaacgymenvs.utils.config_utils.sim_config import SimConfig

    sim_config = SimConfig(config)
    task_map, task_map_warp = import_tasks()

    cfg = sim_config.config
    if cfg["warp"]:
        task_map = task_map_warp

    task = task_map[cfg["task_name"]](
        name=cfg["task_name"], sim_config=sim_config, env=env
    )

    backend = "warp" if cfg["warp"] else "torch"

    rendering_dt = sim_config.get_physics_params()["rendering_dt"]

    env.set_task(
        task=task,
        sim_params=sim_config.get_physics_params(),
        backend=backend,
        init_sim=init_sim,
        rendering_dt=rendering_dt,
    )

    return task
| 4,996 | Python | 41.709401 | 105 | 0.751401 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/utils/domain_randomization/randomize.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import copy
import numpy as np
import torch
import omni
from omni.isaac.core.prims import RigidPrimView
from omni.isaac.core.utils.extensions import enable_extension
class Randomizer:
def __init__(self, main_config, task_config):
self._cfg = task_config
self._config = main_config
self.randomize = False
dr_config = self._cfg.get("domain_randomization", None)
self.distributions = dict()
self.active_domain_randomizations = dict()
self._observations_dr_params = None
self._actions_dr_params = None
if dr_config is not None:
randomize = dr_config.get("randomize", False)
randomization_params = dr_config.get("randomization_params", None)
if randomize and randomization_params is not None:
self.randomize = True
self.min_frequency = dr_config.get("min_frequency", 1)
# import DR extensions
enable_extension("omni.replicator.isaac")
import omni.replicator.core as rep
import omni.replicator.isaac as dr
self.rep = rep
self.dr = dr
def apply_on_startup_domain_randomization(self, task):
if self.randomize:
torch.manual_seed(self._config["seed"])
randomization_params = self._cfg["domain_randomization"]["randomization_params"]
for opt in randomization_params.keys():
if opt == "rigid_prim_views":
if randomization_params["rigid_prim_views"] is not None:
for view_name in randomization_params["rigid_prim_views"].keys():
if randomization_params["rigid_prim_views"][view_name] is not None:
for attribute, params in randomization_params["rigid_prim_views"][view_name].items():
params = randomization_params["rigid_prim_views"][view_name][attribute]
if attribute in ["scale", "mass", "density"] and params is not None:
if "on_startup" in params.keys():
if not set(
("operation", "distribution", "distribution_parameters")
).issubset(params["on_startup"]):
raise ValueError(
f"Please ensure the following randomization parameters for {view_name} {attribute} "
+ "on_startup are provided: operation, distribution, distribution_parameters."
)
view = task.world.scene._scene_registry.rigid_prim_views[view_name]
if attribute == "scale":
self.randomize_scale_on_startup(
view=view,
distribution=params["on_startup"]["distribution"],
distribution_parameters=params["on_startup"][
"distribution_parameters"
],
operation=params["on_startup"]["operation"],
sync_dim_noise=True,
)
elif attribute == "mass":
self.randomize_mass_on_startup(
view=view,
distribution=params["on_startup"]["distribution"],
distribution_parameters=params["on_startup"][
"distribution_parameters"
],
operation=params["on_startup"]["operation"],
)
elif attribute == "density":
self.randomize_density_on_startup(
view=view,
distribution=params["on_startup"]["distribution"],
distribution_parameters=params["on_startup"][
"distribution_parameters"
],
operation=params["on_startup"]["operation"],
)
if opt == "articulation_views":
if randomization_params["articulation_views"] is not None:
for view_name in randomization_params["articulation_views"].keys():
if randomization_params["articulation_views"][view_name] is not None:
for attribute, params in randomization_params["articulation_views"][view_name].items():
params = randomization_params["articulation_views"][view_name][attribute]
if attribute in ["scale"] and params is not None:
if "on_startup" in params.keys():
if not set(
("operation", "distribution", "distribution_parameters")
).issubset(params["on_startup"]):
raise ValueError(
f"Please ensure the following randomization parameters for {view_name} {attribute} "
+ "on_startup are provided: operation, distribution, distribution_parameters."
)
view = task.world.scene._scene_registry.articulated_views[view_name]
if attribute == "scale":
self.randomize_scale_on_startup(
view=view,
distribution=params["on_startup"]["distribution"],
distribution_parameters=params["on_startup"][
"distribution_parameters"
],
operation=params["on_startup"]["operation"],
sync_dim_noise=True,
)
else:
dr_config = self._cfg.get("domain_randomization", None)
if dr_config is None:
raise ValueError("No domain randomization parameters are specified in the task yaml config file")
randomize = dr_config.get("randomize", False)
randomization_params = dr_config.get("randomization_params", None)
if randomize == False or randomization_params is None:
print("On Startup Domain randomization will not be applied.")
def set_up_domain_randomization(self, task):
if self.randomize:
randomization_params = self._cfg["domain_randomization"]["randomization_params"]
self.rep.set_global_seed(self._config["seed"])
with self.dr.trigger.on_rl_frame(num_envs=self._cfg["env"]["numEnvs"]):
for opt in randomization_params.keys():
if opt == "observations":
self._set_up_observations_randomization(task)
elif opt == "actions":
self._set_up_actions_randomization(task)
elif opt == "simulation":
if randomization_params["simulation"] is not None:
self.distributions["simulation"] = dict()
self.dr.physics_view.register_simulation_context(task.world)
for attribute, params in randomization_params["simulation"].items():
self._set_up_simulation_randomization(attribute, params)
elif opt == "rigid_prim_views":
if randomization_params["rigid_prim_views"] is not None:
self.distributions["rigid_prim_views"] = dict()
for view_name in randomization_params["rigid_prim_views"].keys():
if randomization_params["rigid_prim_views"][view_name] is not None:
self.distributions["rigid_prim_views"][view_name] = dict()
self.dr.physics_view.register_rigid_prim_view(
rigid_prim_view=task.world.scene._scene_registry.rigid_prim_views[
view_name
],
)
for attribute, params in randomization_params["rigid_prim_views"][
view_name
].items():
if attribute not in ["scale", "density"]:
self._set_up_rigid_prim_view_randomization(view_name, attribute, params)
elif opt == "articulation_views":
if randomization_params["articulation_views"] is not None:
self.distributions["articulation_views"] = dict()
for view_name in randomization_params["articulation_views"].keys():
if randomization_params["articulation_views"][view_name] is not None:
self.distributions["articulation_views"][view_name] = dict()
self.dr.physics_view.register_articulation_view(
articulation_view=task.world.scene._scene_registry.articulated_views[
view_name
],
)
for attribute, params in randomization_params["articulation_views"][
view_name
].items():
if attribute not in ["scale"]:
self._set_up_articulation_view_randomization(view_name, attribute, params)
self.rep.orchestrator.run()
if self._config.get("enable_recording", False):
# we need to deal with initializing render product here because it has to be initialized after orchestrator.run.
# otherwise, replicator will stop the simulation
task._env.create_viewport_render_product(resolution=(task.viewport_camera_width, task.viewport_camera_height))
if not task.is_extension:
task.world.render()
else:
dr_config = self._cfg.get("domain_randomization", None)
if dr_config is None:
raise ValueError("No domain randomization parameters are specified in the task yaml config file")
randomize = dr_config.get("randomize", False)
randomization_params = dr_config.get("randomization_params", None)
            if not randomize or randomization_params is None:
print("Domain randomization will not be applied.")
def _set_up_observations_randomization(self, task):
task.randomize_observations = True
self._observations_dr_params = self._cfg["domain_randomization"]["randomization_params"]["observations"]
if self._observations_dr_params is None:
raise ValueError(f"Observations randomization parameters are not provided.")
if "on_reset" in self._observations_dr_params.keys():
if not set(("operation", "distribution", "distribution_parameters")).issubset(
self._observations_dr_params["on_reset"].keys()
):
raise ValueError(
f"Please ensure the following observations on_reset randomization parameters are provided: "
+ "operation, distribution, distribution_parameters."
)
self.active_domain_randomizations[("observations", "on_reset")] = np.array(
self._observations_dr_params["on_reset"]["distribution_parameters"]
)
if "on_interval" in self._observations_dr_params.keys():
if not set(("frequency_interval", "operation", "distribution", "distribution_parameters")).issubset(
self._observations_dr_params["on_interval"].keys()
):
raise ValueError(
f"Please ensure the following observations on_interval randomization parameters are provided: "
+ "frequency_interval, operation, distribution, distribution_parameters."
)
self.active_domain_randomizations[("observations", "on_interval")] = np.array(
self._observations_dr_params["on_interval"]["distribution_parameters"]
)
self._observations_counter_buffer = torch.zeros(
(self._cfg["env"]["numEnvs"]), dtype=torch.int, device=self._config["rl_device"]
)
self._observations_correlated_noise = torch.zeros(
(self._cfg["env"]["numEnvs"], task.num_observations), device=self._config["rl_device"]
)
def _set_up_actions_randomization(self, task):
task.randomize_actions = True
self._actions_dr_params = self._cfg["domain_randomization"]["randomization_params"]["actions"]
if self._actions_dr_params is None:
raise ValueError(f"Actions randomization parameters are not provided.")
if "on_reset" in self._actions_dr_params.keys():
if not set(("operation", "distribution", "distribution_parameters")).issubset(
self._actions_dr_params["on_reset"].keys()
):
raise ValueError(
f"Please ensure the following actions on_reset randomization parameters are provided: "
+ "operation, distribution, distribution_parameters."
)
self.active_domain_randomizations[("actions", "on_reset")] = np.array(
self._actions_dr_params["on_reset"]["distribution_parameters"]
)
if "on_interval" in self._actions_dr_params.keys():
if not set(("frequency_interval", "operation", "distribution", "distribution_parameters")).issubset(
self._actions_dr_params["on_interval"].keys()
):
raise ValueError(
f"Please ensure the following actions on_interval randomization parameters are provided: "
+ "frequency_interval, operation, distribution, distribution_parameters."
)
self.active_domain_randomizations[("actions", "on_interval")] = np.array(
self._actions_dr_params["on_interval"]["distribution_parameters"]
)
self._actions_counter_buffer = torch.zeros(
(self._cfg["env"]["numEnvs"]), dtype=torch.int, device=self._config["rl_device"]
)
self._actions_correlated_noise = torch.zeros(
(self._cfg["env"]["numEnvs"], task.num_actions), device=self._config["rl_device"]
)
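    # Called every step: counters are zeroed for envs that just reset, then on_reset
    # noise (correlated, held fixed until the next reset) and on_interval noise
    # (uncorrelated, redrawn every frequency_interval steps) are applied in turn.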
def apply_observations_randomization(self, observations, reset_buf):
env_ids = reset_buf.nonzero(as_tuple=False).squeeze(-1)
self._observations_counter_buffer[env_ids] = 0
self._observations_counter_buffer += 1
if "on_reset" in self._observations_dr_params.keys():
observations[:] = self._apply_correlated_noise(
buffer_type="observations",
buffer=observations,
reset_ids=env_ids,
operation=self._observations_dr_params["on_reset"]["operation"],
distribution=self._observations_dr_params["on_reset"]["distribution"],
distribution_parameters=self._observations_dr_params["on_reset"]["distribution_parameters"],
)
if "on_interval" in self._observations_dr_params.keys():
randomize_ids = (
(self._observations_counter_buffer >= self._observations_dr_params["on_interval"]["frequency_interval"])
.nonzero(as_tuple=False)
.squeeze(-1)
)
self._observations_counter_buffer[randomize_ids] = 0
observations[:] = self._apply_uncorrelated_noise(
buffer=observations,
randomize_ids=randomize_ids,
operation=self._observations_dr_params["on_interval"]["operation"],
distribution=self._observations_dr_params["on_interval"]["distribution"],
distribution_parameters=self._observations_dr_params["on_interval"]["distribution_parameters"],
)
return observations
def apply_actions_randomization(self, actions, reset_buf):
env_ids = reset_buf.nonzero(as_tuple=False).squeeze(-1)
self._actions_counter_buffer[env_ids] = 0
self._actions_counter_buffer += 1
if "on_reset" in self._actions_dr_params.keys():
actions[:] = self._apply_correlated_noise(
buffer_type="actions",
buffer=actions,
reset_ids=env_ids,
operation=self._actions_dr_params["on_reset"]["operation"],
distribution=self._actions_dr_params["on_reset"]["distribution"],
distribution_parameters=self._actions_dr_params["on_reset"]["distribution_parameters"],
)
if "on_interval" in self._actions_dr_params.keys():
randomize_ids = (
(self._actions_counter_buffer >= self._actions_dr_params["on_interval"]["frequency_interval"])
.nonzero(as_tuple=False)
.squeeze(-1)
)
self._actions_counter_buffer[randomize_ids] = 0
actions[:] = self._apply_uncorrelated_noise(
buffer=actions,
randomize_ids=randomize_ids,
operation=self._actions_dr_params["on_interval"]["operation"],
distribution=self._actions_dr_params["on_interval"]["distribution"],
distribution_parameters=self._actions_dr_params["on_interval"]["distribution_parameters"],
)
return actions
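    # Draws fresh noise for the selected envs only; supports gaussian/normal,
    # uniform, and loguniform sampling combined via an additive or scaling operation.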
def _apply_uncorrelated_noise(self, buffer, randomize_ids, operation, distribution, distribution_parameters):
if distribution == "gaussian" or distribution == "normal":
noise = torch.normal(
mean=distribution_parameters[0],
std=distribution_parameters[1],
size=(len(randomize_ids), buffer.shape[1]),
device=self._config["rl_device"],
)
elif distribution == "uniform":
noise = (distribution_parameters[1] - distribution_parameters[0]) * torch.rand(
(len(randomize_ids), buffer.shape[1]), device=self._config["rl_device"]
) + distribution_parameters[0]
elif distribution == "loguniform" or distribution == "log_uniform":
noise = torch.exp(
(np.log(distribution_parameters[1]) - np.log(distribution_parameters[0]))
* torch.rand((len(randomize_ids), buffer.shape[1]), device=self._config["rl_device"])
+ np.log(distribution_parameters[0])
)
        else:
            # fail fast: `noise` would otherwise be unbound when applied below
            raise ValueError(f"The specified {distribution} distribution is not supported.")
if operation == "additive":
buffer[randomize_ids] += noise
elif operation == "scaling":
buffer[randomize_ids] *= noise
else:
print(f"The specified {operation} operation type is not supported.")
return buffer
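    # Resamples the persistent per-env noise buffer only for envs that just reset;
    # the (otherwise unchanged) buffer is then applied to every env.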
def _apply_correlated_noise(self, buffer_type, buffer, reset_ids, operation, distribution, distribution_parameters):
if buffer_type == "observations":
correlated_noise_buffer = self._observations_correlated_noise
elif buffer_type == "actions":
correlated_noise_buffer = self._actions_correlated_noise
if len(reset_ids) > 0:
if distribution == "gaussian" or distribution == "normal":
correlated_noise_buffer[reset_ids] = torch.normal(
mean=distribution_parameters[0],
std=distribution_parameters[1],
size=(len(reset_ids), buffer.shape[1]),
device=self._config["rl_device"],
)
elif distribution == "uniform":
correlated_noise_buffer[reset_ids] = (
distribution_parameters[1] - distribution_parameters[0]
) * torch.rand(
(len(reset_ids), buffer.shape[1]), device=self._config["rl_device"]
) + distribution_parameters[
0
]
elif distribution == "loguniform" or distribution == "log_uniform":
correlated_noise_buffer[reset_ids] = torch.exp(
(np.log(distribution_parameters[1]) - np.log(distribution_parameters[0]))
* torch.rand((len(reset_ids), buffer.shape[1]), device=self._config["rl_device"])
+ np.log(distribution_parameters[0])
)
else:
print(f"The specified {distribution} distribution is not supported.")
if operation == "additive":
buffer += correlated_noise_buffer
elif operation == "scaling":
buffer *= correlated_noise_buffer
else:
print(f"The specified {operation} operation type is not supported.")
return buffer
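    # Registers DR for a simulation-context attribute (e.g. gravity): validates the
    # spec, records the initial distribution parameters, and wires a replicator
    # distribution into the on_env_reset / on_interval gates.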
def _set_up_simulation_randomization(self, attribute, params):
if params is None:
raise ValueError(f"Randomization parameters for simulation {attribute} is not provided.")
if attribute in self.dr.SIMULATION_CONTEXT_ATTRIBUTES:
self.distributions["simulation"][attribute] = dict()
if "on_reset" in params.keys():
if not set(("operation", "distribution", "distribution_parameters")).issubset(params["on_reset"]):
raise ValueError(
f"Please ensure the following randomization parameters for simulation {attribute} on_reset are provided: "
+ "operation, distribution, distribution_parameters."
)
self.active_domain_randomizations[("simulation", attribute, "on_reset")] = np.array(
params["on_reset"]["distribution_parameters"]
)
kwargs = {"operation": params["on_reset"]["operation"]}
self.distributions["simulation"][attribute]["on_reset"] = self._generate_distribution(
dimension=self.dr.physics_view._simulation_context_initial_values[attribute].shape[0],
view_name="simulation",
attribute=attribute,
params=params["on_reset"],
)
kwargs[attribute] = self.distributions["simulation"][attribute]["on_reset"]
with self.dr.gate.on_env_reset():
self.dr.physics_view.randomize_simulation_context(**kwargs)
if "on_interval" in params.keys():
if not set(("frequency_interval", "operation", "distribution", "distribution_parameters")).issubset(
params["on_interval"]
):
raise ValueError(
f"Please ensure the following randomization parameters for simulation {attribute} on_interval are provided: "
+ "frequency_interval, operation, distribution, distribution_parameters."
)
self.active_domain_randomizations[("simulation", attribute, "on_interval")] = np.array(
params["on_interval"]["distribution_parameters"]
)
kwargs = {"operation": params["on_interval"]["operation"]}
self.distributions["simulation"][attribute]["on_interval"] = self._generate_distribution(
dimension=self.dr.physics_view._simulation_context_initial_values[attribute].shape[0],
view_name="simulation",
attribute=attribute,
params=params["on_interval"],
)
kwargs[attribute] = self.distributions["simulation"][attribute]["on_interval"]
with self.dr.gate.on_interval(interval=params["on_interval"]["frequency_interval"]):
self.dr.physics_view.randomize_simulation_context(**kwargs)
def _set_up_rigid_prim_view_randomization(self, view_name, attribute, params):
if params is None:
raise ValueError(f"Randomization parameters for rigid prim view {view_name} {attribute} is not provided.")
if attribute in self.dr.RIGID_PRIM_ATTRIBUTES:
self.distributions["rigid_prim_views"][view_name][attribute] = dict()
if "on_reset" in params.keys():
if not set(("operation", "distribution", "distribution_parameters")).issubset(params["on_reset"]):
raise ValueError(
f"Please ensure the following randomization parameters for {view_name} {attribute} on_reset are provided: "
+ "operation, distribution, distribution_parameters."
)
self.active_domain_randomizations[("rigid_prim_views", view_name, attribute, "on_reset")] = np.array(
params["on_reset"]["distribution_parameters"]
)
kwargs = {"view_name": view_name, "operation": params["on_reset"]["operation"]}
if attribute == "material_properties" and "num_buckets" in params["on_reset"].keys():
kwargs["num_buckets"] = params["on_reset"]["num_buckets"]
self.distributions["rigid_prim_views"][view_name][attribute]["on_reset"] = self._generate_distribution(
dimension=self.dr.physics_view._rigid_prim_views_initial_values[view_name][attribute].shape[1],
view_name=view_name,
attribute=attribute,
params=params["on_reset"],
)
kwargs[attribute] = self.distributions["rigid_prim_views"][view_name][attribute]["on_reset"]
with self.dr.gate.on_env_reset():
self.dr.physics_view.randomize_rigid_prim_view(**kwargs)
if "on_interval" in params.keys():
if not set(("frequency_interval", "operation", "distribution", "distribution_parameters")).issubset(
params["on_interval"]
):
raise ValueError(
f"Please ensure the following randomization parameters for {view_name} {attribute} on_interval are provided: "
+ "frequency_interval, operation, distribution, distribution_parameters."
)
self.active_domain_randomizations[("rigid_prim_views", view_name, attribute, "on_interval")] = np.array(
params["on_interval"]["distribution_parameters"]
)
kwargs = {"view_name": view_name, "operation": params["on_interval"]["operation"]}
if attribute == "material_properties" and "num_buckets" in params["on_interval"].keys():
kwargs["num_buckets"] = params["on_interval"]["num_buckets"]
self.distributions["rigid_prim_views"][view_name][attribute][
"on_interval"
] = self._generate_distribution(
dimension=self.dr.physics_view._rigid_prim_views_initial_values[view_name][attribute].shape[1],
view_name=view_name,
attribute=attribute,
params=params["on_interval"],
)
kwargs[attribute] = self.distributions["rigid_prim_views"][view_name][attribute]["on_interval"]
with self.dr.gate.on_interval(interval=params["on_interval"]["frequency_interval"]):
self.dr.physics_view.randomize_rigid_prim_view(**kwargs)
else:
raise ValueError(f"The attribute {attribute} for {view_name} is invalid for domain randomization.")
def _set_up_articulation_view_randomization(self, view_name, attribute, params):
if params is None:
raise ValueError(f"Randomization parameters for articulation view {view_name} {attribute} is not provided.")
if attribute in self.dr.ARTICULATION_ATTRIBUTES:
self.distributions["articulation_views"][view_name][attribute] = dict()
if "on_reset" in params.keys():
if not set(("operation", "distribution", "distribution_parameters")).issubset(params["on_reset"]):
raise ValueError(
f"Please ensure the following randomization parameters for {view_name} {attribute} on_reset are provided: "
+ "operation, distribution, distribution_parameters."
)
self.active_domain_randomizations[("articulation_views", view_name, attribute, "on_reset")] = np.array(
params["on_reset"]["distribution_parameters"]
)
kwargs = {"view_name": view_name, "operation": params["on_reset"]["operation"]}
if attribute == "material_properties" and "num_buckets" in params["on_reset"].keys():
kwargs["num_buckets"] = params["on_reset"]["num_buckets"]
self.distributions["articulation_views"][view_name][attribute][
"on_reset"
] = self._generate_distribution(
dimension=self.dr.physics_view._articulation_views_initial_values[view_name][attribute].shape[1],
view_name=view_name,
attribute=attribute,
params=params["on_reset"],
)
kwargs[attribute] = self.distributions["articulation_views"][view_name][attribute]["on_reset"]
with self.dr.gate.on_env_reset():
self.dr.physics_view.randomize_articulation_view(**kwargs)
if "on_interval" in params.keys():
if not set(("frequency_interval", "operation", "distribution", "distribution_parameters")).issubset(
params["on_interval"]
):
raise ValueError(
f"Please ensure the following randomization parameters for {view_name} {attribute} on_interval are provided: "
+ "frequency_interval, operation, distribution, distribution_parameters."
)
self.active_domain_randomizations[
("articulation_views", view_name, attribute, "on_interval")
] = np.array(params["on_interval"]["distribution_parameters"])
kwargs = {"view_name": view_name, "operation": params["on_interval"]["operation"]}
if attribute == "material_properties" and "num_buckets" in params["on_interval"].keys():
kwargs["num_buckets"] = params["on_interval"]["num_buckets"]
self.distributions["articulation_views"][view_name][attribute][
"on_interval"
] = self._generate_distribution(
dimension=self.dr.physics_view._articulation_views_initial_values[view_name][attribute].shape[1],
view_name=view_name,
attribute=attribute,
params=params["on_interval"],
)
kwargs[attribute] = self.distributions["articulation_views"][view_name][attribute]["on_interval"]
with self.dr.gate.on_interval(interval=params["on_interval"]["frequency_interval"]):
self.dr.physics_view.randomize_articulation_view(**kwargs)
else:
raise ValueError(f"The attribute {attribute} for {view_name} is invalid for domain randomization.")
def _generate_distribution(self, view_name, attribute, dimension, params):
dist_params = self._sanitize_distribution_parameters(attribute, dimension, params["distribution_parameters"])
if params["distribution"] == "uniform":
return self.rep.distribution.uniform(tuple(dist_params[0]), tuple(dist_params[1]))
elif params["distribution"] == "gaussian" or params["distribution"] == "normal":
return self.rep.distribution.normal(tuple(dist_params[0]), tuple(dist_params[1]))
elif params["distribution"] == "loguniform" or params["distribution"] == "log_uniform":
return self.rep.distribution.log_uniform(tuple(dist_params[0]), tuple(dist_params[1]))
else:
raise ValueError(
f"The provided distribution for {view_name} {attribute} is not supported. "
+ "Options: uniform, gaussian/normal, loguniform/log_uniform"
)
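    # Broadcasts user-provided parameters to the attribute dimension: a (2,) pair is
    # tiled across all dims, a (2, dim) array is used as-is, and a (2, 3) pair for
    # material_properties / body_inertias is repeated for every body or link.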
def _sanitize_distribution_parameters(self, attribute, dimension, params):
distribution_parameters = np.array(params)
if distribution_parameters.shape == (2,):
# if the user does not provide a set of parameters for each dimension
dist_params = [[distribution_parameters[0]] * dimension, [distribution_parameters[1]] * dimension]
elif distribution_parameters.shape == (2, dimension):
# if the user provides a set of parameters for each dimension in the format [[...], [...]]
dist_params = distribution_parameters.tolist()
elif attribute in ["material_properties", "body_inertias"] and distribution_parameters.shape == (2, 3):
# if the user only provides the parameters for one body in the articulation, assume the same parameters for all other links
dist_params = [
[distribution_parameters[0]] * (dimension // 3),
[distribution_parameters[1]] * (dimension // 3),
]
else:
raise ValueError(
f"The provided distribution_parameters for {view_name} {attribute} is invalid due to incorrect dimensions."
)
return dist_params
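    # Updates a live randomization distribution in place (useful, e.g., for
    # curriculum-style schedules): observation/action parameters are stored directly,
    # while physics distributions are patched on the underlying replicator sampler node.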
def set_dr_distribution_parameters(self, distribution_parameters, *distribution_path):
if distribution_path not in self.active_domain_randomizations.keys():
raise ValueError(
f"Cannot find a valid domain randomization distribution using the path {distribution_path}."
)
if distribution_path[0] == "observations":
if len(distribution_parameters) == 2:
self._observations_dr_params[distribution_path[1]]["distribution_parameters"] = distribution_parameters
else:
raise ValueError(
f"Please provide distribution_parameters for observations {distribution_path[1]} "
+ "in the form of [dist_param_1, dist_param_2]"
)
elif distribution_path[0] == "actions":
if len(distribution_parameters) == 2:
self._actions_dr_params[distribution_path[1]]["distribution_parameters"] = distribution_parameters
else:
raise ValueError(
f"Please provide distribution_parameters for actions {distribution_path[1]} "
+ "in the form of [dist_param_1, dist_param_2]"
)
else:
replicator_distribution = self.distributions[distribution_path[0]][distribution_path[1]][
distribution_path[2]
]
if distribution_path[0] == "rigid_prim_views" or distribution_path[0] == "articulation_views":
replicator_distribution = replicator_distribution[distribution_path[3]]
if (
replicator_distribution.node.get_node_type().get_node_type() == "omni.replicator.core.OgnSampleUniform"
or replicator_distribution.node.get_node_type().get_node_type()
== "omni.replicator.core.OgnSampleLogUniform"
):
dimension = len(self.dr.utils.get_distribution_params(replicator_distribution, ["lower"])[0])
dist_params = self._sanitize_distribution_parameters(
distribution_path[-2], dimension, distribution_parameters
)
self.dr.utils.set_distribution_params(
replicator_distribution, {"lower": dist_params[0], "upper": dist_params[1]}
)
elif replicator_distribution.node.get_node_type().get_node_type() == "omni.replicator.core.OgnSampleNormal":
dimension = len(self.dr.utils.get_distribution_params(replicator_distribution, ["mean"])[0])
dist_params = self._sanitize_distribution_parameters(
distribution_path[-2], dimension, distribution_parameters
)
self.dr.utils.set_distribution_params(
replicator_distribution, {"mean": dist_params[0], "std": dist_params[1]}
)
def get_dr_distribution_parameters(self, *distribution_path):
if distribution_path not in self.active_domain_randomizations.keys():
raise ValueError(
f"Cannot find a valid domain randomization distribution using the path {distribution_path}."
)
if distribution_path[0] == "observations":
return self._observations_dr_params[distribution_path[1]]["distribution_parameters"]
elif distribution_path[0] == "actions":
return self._actions_dr_params[distribution_path[1]]["distribution_parameters"]
else:
replicator_distribution = self.distributions[distribution_path[0]][distribution_path[1]][
distribution_path[2]
]
if distribution_path[0] == "rigid_prim_views" or distribution_path[0] == "articulation_views":
replicator_distribution = replicator_distribution[distribution_path[3]]
if (
replicator_distribution.node.get_node_type().get_node_type() == "omni.replicator.core.OgnSampleUniform"
or replicator_distribution.node.get_node_type().get_node_type()
== "omni.replicator.core.OgnSampleLogUniform"
):
return self.dr.utils.get_distribution_params(replicator_distribution, ["lower", "upper"])
elif replicator_distribution.node.get_node_type().get_node_type() == "omni.replicator.core.OgnSampleNormal":
return self.dr.utils.get_distribution_params(replicator_distribution, ["mean", "std"])
def get_initial_dr_distribution_parameters(self, *distribution_path):
if distribution_path not in self.active_domain_randomizations.keys():
raise ValueError(
f"Cannot find a valid domain randomization distribution using the path {distribution_path}."
)
return self.active_domain_randomizations[distribution_path].copy()
def _generate_noise(self, distribution, distribution_parameters, size, device):
if distribution == "gaussian" or distribution == "normal":
noise = torch.normal(
mean=distribution_parameters[0], std=distribution_parameters[1], size=size, device=device
)
elif distribution == "uniform":
noise = (distribution_parameters[1] - distribution_parameters[0]) * torch.rand(
size, device=device
) + distribution_parameters[0]
elif distribution == "loguniform" or distribution == "log_uniform":
noise = torch.exp(
(np.log(distribution_parameters[1]) - np.log(distribution_parameters[0]))
* torch.rand(size, device=device)
+ np.log(distribution_parameters[0])
)
        else:
            # fail fast: `noise` would otherwise be unbound at the return below
            raise ValueError(f"The specified {distribution} distribution is not supported.")
return noise
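    # One-time scale randomization applied before simulation starts. With
    # sync_dim_noise=True a single sample is shared across x/y/z; otherwise each
    # axis draws its own noise.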
def randomize_scale_on_startup(self, view, distribution, distribution_parameters, operation, sync_dim_noise=True):
scales = view.get_local_scales()
if sync_dim_noise:
dist_params = np.asarray(
self._sanitize_distribution_parameters(attribute="scale", dimension=1, params=distribution_parameters)
)
noise = (
self._generate_noise(distribution, dist_params.squeeze(), (view.count,), view._device).repeat(3, 1).T
)
else:
dist_params = np.asarray(
self._sanitize_distribution_parameters(attribute="scale", dimension=3, params=distribution_parameters)
)
noise = torch.zeros((view.count, 3), device=view._device)
for i in range(3):
noise[:, i] = self._generate_noise(distribution, dist_params[:, i], (view.count,), view._device)
if operation == "additive":
scales += noise
elif operation == "scaling":
scales *= noise
elif operation == "direct":
scales = noise
else:
print(f"The specified {operation} operation type is not supported.")
view.set_local_scales(scales=scales)
def randomize_mass_on_startup(self, view, distribution, distribution_parameters, operation):
if isinstance(view, omni.isaac.core.prims.RigidPrimView) or isinstance(view, RigidPrimView):
masses = view.get_masses()
dist_params = np.asarray(
self._sanitize_distribution_parameters(
attribute=f"{view.name} mass", dimension=1, params=distribution_parameters
)
)
noise = self._generate_noise(distribution, dist_params.squeeze(), (view.count,), view._device)
set_masses = view.set_masses
if operation == "additive":
masses += noise
elif operation == "scaling":
masses *= noise
elif operation == "direct":
masses = noise
else:
print(f"The specified {operation} operation type is not supported.")
set_masses(masses)
def randomize_density_on_startup(self, view, distribution, distribution_parameters, operation):
if isinstance(view, omni.isaac.core.prims.RigidPrimView) or isinstance(view, RigidPrimView):
densities = view.get_densities()
dist_params = np.asarray(
self._sanitize_distribution_parameters(
attribute=f"{view.name} density", dimension=1, params=distribution_parameters
)
)
noise = self._generate_noise(distribution, dist_params.squeeze(), (view.count,), view._device)
set_densities = view.set_densities
if operation == "additive":
densities += noise
elif operation == "scaling":
densities *= noise
elif operation == "direct":
densities = noise
else:
print(f"The specified {operation} operation type is not supported.")
set_densities(densities)
| 46,049 | Python | 58.650259 | 136 | 0.55593 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/utils/rlgames/rlgames_utils.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Callable
import numpy as np
import torch
from rl_games.algos_torch import torch_ext
from rl_games.common import env_configurations, vecenv
from rl_games.common.algo_observer import AlgoObserver
class RLGPUAlgoObserver(AlgoObserver):
"""Allows us to log stats from the env along with the algorithm running stats."""
def __init__(self):
pass
def after_init(self, algo):
self.algo = algo
self.mean_scores = torch_ext.AverageMeter(1, self.algo.games_to_track).to(self.algo.ppo_device)
self.ep_infos = []
self.direct_info = {}
self.writer = self.algo.writer
def process_infos(self, infos, done_indices):
assert isinstance(infos, dict), "RLGPUAlgoObserver expects dict info"
if isinstance(infos, dict):
if "episode" in infos:
self.ep_infos.append(infos["episode"])
if len(infos) > 0 and isinstance(infos, dict): # allow direct logging from env
self.direct_info = {}
for k, v in infos.items():
# only log scalars
if (
isinstance(v, float)
or isinstance(v, int)
or (isinstance(v, torch.Tensor) and len(v.shape) == 0)
):
self.direct_info[k] = v
def after_clear_stats(self):
self.mean_scores.clear()
def after_print_stats(self, frame, epoch_num, total_time):
if self.ep_infos:
for key in self.ep_infos[0]:
infotensor = torch.tensor([], device=self.algo.device)
for ep_info in self.ep_infos:
# handle scalar and zero dimensional tensor infos
if not isinstance(ep_info[key], torch.Tensor):
ep_info[key] = torch.Tensor([ep_info[key]])
if len(ep_info[key].shape) == 0:
ep_info[key] = ep_info[key].unsqueeze(0)
infotensor = torch.cat((infotensor, ep_info[key].to(self.algo.device)))
value = torch.mean(infotensor)
self.writer.add_scalar("Episode/" + key, value, epoch_num)
self.ep_infos.clear()
for k, v in self.direct_info.items():
self.writer.add_scalar(f"{k}/frame", v, frame)
self.writer.add_scalar(f"{k}/iter", v, epoch_num)
self.writer.add_scalar(f"{k}/time", v, total_time)
if self.mean_scores.current_size > 0:
mean_scores = self.mean_scores.get_mean()
self.writer.add_scalar("scores/mean", mean_scores, frame)
self.writer.add_scalar("scores/iter", mean_scores, epoch_num)
self.writer.add_scalar("scores/time", mean_scores, total_time)
class RLGPUEnv(vecenv.IVecEnv):
def __init__(self, config_name, num_actors, **kwargs):
self.env = env_configurations.configurations[config_name]["env_creator"](**kwargs)
def step(self, action):
return self.env.step(action)
def reset(self):
return self.env.reset()
def get_number_of_agents(self):
return self.env.get_number_of_agents()
def get_env_info(self):
info = {}
info["action_space"] = self.env.action_space
info["observation_space"] = self.env.observation_space
if self.env.num_states > 0:
info["state_space"] = self.env.state_space
print(info["action_space"], info["observation_space"], info["state_space"])
else:
print(info["action_space"], info["observation_space"])
return info
| 5,201 | Python | 40.951613 | 103 | 0.636801 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/utils/rlgames/rlgames_train_mt.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import copy
import datetime
import os
import queue
import threading
import traceback
import hydra
from omegaconf import DictConfig
from omni.isaac.gym.vec_env.vec_env_mt import TrainerMT
import omniisaacgymenvs
from omniisaacgymenvs.envs.vec_env_rlgames_mt import VecEnvRLGamesMT
from omniisaacgymenvs.utils.config_utils.path_utils import retrieve_checkpoint_path
from omniisaacgymenvs.utils.hydra_cfg.hydra_utils import *
from omniisaacgymenvs.utils.hydra_cfg.reformat import omegaconf_to_dict, print_dict
from omniisaacgymenvs.utils.rlgames.rlgames_utils import RLGPUAlgoObserver, RLGPUEnv
from omniisaacgymenvs.utils.task_util import initialize_task
from rl_games.common import env_configurations, vecenv
from rl_games.torch_runner import Runner
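# Multi-threaded (MT) training entry point: RLGTrainer wires the env into rl_games,
# while Trainer/PPOTrainer below run the RL loop on a background thread that talks
# to the simulation main thread through action/data queues.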
class RLGTrainer:
def __init__(self, cfg, cfg_dict):
self.cfg = cfg
self.cfg_dict = cfg_dict
# ensure checkpoints can be specified as relative paths
self._bad_checkpoint = False
if self.cfg.checkpoint:
self.cfg.checkpoint = retrieve_checkpoint_path(self.cfg.checkpoint)
if not self.cfg.checkpoint:
self._bad_checkpoint = True
def launch_rlg_hydra(self, env):
        # `create_rlgpu_env` is the environment construction function that is passed to RL Games and called internally.
# We use the helper function here to specify the environment config.
self.cfg_dict["task"]["test"] = self.cfg.test
# register the rl-games adapter to use inside the runner
vecenv.register("RLGPU", lambda config_name, num_actors, **kwargs: RLGPUEnv(config_name, num_actors, **kwargs))
env_configurations.register("rlgpu", {"vecenv_type": "RLGPU", "env_creator": lambda **kwargs: env})
self.rlg_config_dict = omegaconf_to_dict(self.cfg.train)
def run(self):
# create runner and set the settings
runner = Runner(RLGPUAlgoObserver())
# add evaluation parameters
if self.cfg.evaluation:
player_config = self.rlg_config_dict["params"]["config"].get("player", {})
player_config["evaluation"] = True
player_config["update_checkpoint_freq"] = 100
player_config["dir_to_monitor"] = os.path.dirname(self.cfg.checkpoint)
self.rlg_config_dict["params"]["config"]["player"] = player_config
module_path = os.path.abspath(os.path.join(os.path.dirname(omniisaacgymenvs.__file__)))
self.rlg_config_dict["params"]["config"]["train_dir"] = os.path.join(module_path, "runs")
# load config
runner.load(copy.deepcopy(self.rlg_config_dict))
runner.reset()
# dump config dict
experiment_dir = os.path.join(module_path, "runs", self.cfg.train.params.config.name)
os.makedirs(experiment_dir, exist_ok=True)
with open(os.path.join(experiment_dir, "config.yaml"), "w") as f:
f.write(OmegaConf.to_yaml(self.cfg))
time_str = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
if self.cfg.wandb_activate:
# Make sure to install WandB if you actually use this.
import wandb
run_name = f"{self.cfg.wandb_name}_{time_str}"
wandb.init(
project=self.cfg.wandb_project,
group=self.cfg.wandb_group,
entity=self.cfg.wandb_entity,
config=self.cfg_dict,
sync_tensorboard=True,
id=run_name,
resume="allow",
monitor_gym=True,
)
runner.run(
{"train": not self.cfg.test, "play": self.cfg.test, "checkpoint": self.cfg.checkpoint, "sigma": None}
)
if self.cfg.wandb_activate:
wandb.finish()
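# Bridges the simulation main thread and the PPO worker thread; stop() drains both
# queues and joins the worker so the app can shut down cleanly.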
class Trainer(TrainerMT):
def __init__(self, trainer, env):
self.ppo_thread = None
self.action_queue = None
self.data_queue = None
self.trainer = trainer
self.is_running = False
self.env = env
self.create_task()
self.run()
def create_task(self):
self.trainer.launch_rlg_hydra(self.env)
# task = initialize_task(self.trainer.cfg_dict, self.env, init_sim=False)
self.task = self.env.task
def run(self):
self.is_running = True
self.action_queue = queue.Queue(1)
self.data_queue = queue.Queue(1)
if "mt_timeout" in self.trainer.cfg_dict:
self.env.initialize(self.action_queue, self.data_queue, self.trainer.cfg_dict["mt_timeout"])
else:
self.env.initialize(self.action_queue, self.data_queue)
self.ppo_thread = PPOTrainer(self.env, self.task, self.trainer)
self.ppo_thread.daemon = True
self.ppo_thread.start()
def stop(self):
self.env.stop = True
self.env.clear_queues()
if self.action_queue:
self.action_queue.join()
if self.data_queue:
self.data_queue.join()
if self.ppo_thread:
self.ppo_thread.join()
self.action_queue = None
self.data_queue = None
self.ppo_thread = None
self.is_running = False
class PPOTrainer(threading.Thread):
def __init__(self, env, task, trainer):
super().__init__()
self.env = env
self.task = task
self.trainer = trainer
def run(self):
from omni.isaac.gym.vec_env import TaskStopException
print("starting ppo...")
try:
self.trainer.run()
# trainer finished - send stop signal to main thread
self.env.should_run = False
self.env.send_actions(None, block=False)
except TaskStopException:
print("Task Stopped!")
self.env.should_run = False
self.env.send_actions(None, block=False)
except Exception as e:
# an error occurred on the RL side - signal stop to main thread
print(traceback.format_exc())
self.env.should_run = False
self.env.send_actions(None, block=False)
| 7,633 | Python | 37.17 | 119 | 0.654395 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/utils/config_utils/sim_config.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import copy
import carb
import numpy as np
import omni.usd
import torch
from omni.isaac.core.utils.extensions import enable_extension
from omniisaacgymenvs.utils.config_utils.default_scene_params import *
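# Central simulation configuration: merges task-config overrides onto the default
# sim/PhysX parameters, applies the relevant carb settings, and exposes helpers for
# stamping per-actor PhysX attributes onto USD prims.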
class SimConfig:
def __init__(self, config: dict = None):
if config is None:
config = dict()
self._config = config
self._cfg = config.get("task", dict())
self._parse_config()
if self._config["test"] == True:
self._sim_params["enable_scene_query_support"] = True
if (
self._config["headless"] == True
and not self._sim_params["enable_cameras"]
and not self._config["enable_livestream"]
and not self._config.get("enable_recording", False)
):
self._sim_params["use_fabric"] = False
self._sim_params["enable_viewport"] = False
else:
self._sim_params["enable_viewport"] = True
enable_extension("omni.kit.viewport.bundle")
if self._sim_params["enable_cameras"] or self._config.get("enable_recording", False):
enable_extension("omni.replicator.isaac")
self._sim_params["warp"] = self._config["warp"]
self._sim_params["sim_device"] = self._config["sim_device"]
self._adjust_dt()
if self._sim_params["disable_contact_processing"]:
carb.settings.get_settings().set_bool("/physics/disableContactProcessing", True)
carb.settings.get_settings().set_bool("/physics/physxDispatcher", True)
# Force the background grid off all the time for RL tasks, to avoid the grid showing up in any RL camera task
carb.settings.get_settings().set("/app/viewport/grid/enabled", False)
# Disable framerate limiting which might cause rendering slowdowns
carb.settings.get_settings().set("/app/runLoops/main/rateLimitEnabled", False)
import omni.ui
        # Dock floating UIs; this might not be needed anymore, as extensions dock themselves.
        # Helper for docking a particular window to a location.
def dock_window(space, name, location, ratio=0.5):
window = omni.ui.Workspace.get_window(name)
if window and space:
window.dock_in(space, location, ratio=ratio)
return window
# Acquire the main docking station
main_dockspace = omni.ui.Workspace.get_window("DockSpace")
dock_window(main_dockspace, "Content", omni.ui.DockPosition.BOTTOM, 0.3)
window = omni.ui.Workspace.get_window("Content")
if window:
window.visible = False
window = omni.ui.Workspace.get_window("Simulation Settings")
if window:
window.visible = False
# workaround for asset root search hang
carb.settings.get_settings().set_string(
"/persistent/isaac/asset_root/default",
"http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/Isaac/2023.1.1",
)
carb.settings.get_settings().set_string(
"/persistent/isaac/asset_root/nvidia",
"http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/Isaac/2023.1.1",
)
# make sure the correct USD update flags are set
if self._sim_params["use_fabric"]:
carb.settings.get_settings().set_bool("/physics/updateToUsd", False)
carb.settings.get_settings().set_bool("/physics/updateParticlesToUsd", False)
carb.settings.get_settings().set_bool("/physics/updateVelocitiesToUsd", False)
carb.settings.get_settings().set_bool("/physics/updateForceSensorsToUsd", False)
carb.settings.get_settings().set_bool("/physics/outputVelocitiesLocalSpace", False)
carb.settings.get_settings().set_bool("/physics/fabricUpdateTransformations", True)
carb.settings.get_settings().set_bool("/physics/fabricUpdateVelocities", False)
carb.settings.get_settings().set_bool("/physics/fabricUpdateForceSensors", False)
carb.settings.get_settings().set_bool("/physics/fabricUpdateJointStates", False)
def _parse_config(self):
        # general sim parameters
self._sim_params = copy.deepcopy(default_sim_params)
self._default_physics_material = copy.deepcopy(default_physics_material)
sim_cfg = self._cfg.get("sim", None)
if sim_cfg is not None:
for opt in sim_cfg.keys():
if opt in self._sim_params:
if opt == "default_physics_material":
for material_opt in sim_cfg[opt]:
self._default_physics_material[material_opt] = sim_cfg[opt][material_opt]
else:
self._sim_params[opt] = sim_cfg[opt]
else:
print("Sim params does not have attribute: ", opt)
self._sim_params["default_physics_material"] = self._default_physics_material
# physx parameters
self._physx_params = copy.deepcopy(default_physx_params)
if sim_cfg is not None and "physx" in sim_cfg:
for opt in sim_cfg["physx"].keys():
if opt in self._physx_params:
self._physx_params[opt] = sim_cfg["physx"][opt]
else:
print("Physx sim params does not have attribute: ", opt)
self._sanitize_device()
def _sanitize_device(self):
if self._sim_params["use_gpu_pipeline"]:
self._physx_params["use_gpu"] = True
# device should be in sync with pipeline
if self._sim_params["use_gpu_pipeline"]:
self._config["sim_device"] = f"cuda:{self._config['device_id']}"
else:
self._config["sim_device"] = "cpu"
# also write to physics params for setting sim device
self._physx_params["sim_device"] = self._config["sim_device"]
print("Pipeline: ", "GPU" if self._sim_params["use_gpu_pipeline"] else "CPU")
print("Pipeline Device: ", self._config["sim_device"])
print("Sim Device: ", "GPU" if self._physx_params["use_gpu"] else "CPU")
def parse_actor_config(self, actor_name):
actor_params = copy.deepcopy(default_actor_options)
if "sim" in self._cfg and actor_name in self._cfg["sim"]:
actor_cfg = self._cfg["sim"][actor_name]
for opt in actor_cfg.keys():
if actor_cfg[opt] != -1 and opt in actor_params:
actor_params[opt] = actor_cfg[opt]
elif opt not in actor_params:
print("Actor params does not have attribute: ", opt)
return actor_params
def _get_actor_config_value(self, actor_name, attribute_name, attribute=None):
actor_params = self.parse_actor_config(actor_name)
if attribute is not None:
if attribute_name not in actor_params:
return attribute.Get()
if actor_params[attribute_name] != -1:
return actor_params[attribute_name]
elif actor_params["override_usd_defaults"] and not attribute.IsAuthored():
return self._physx_params[attribute_name]
else:
if actor_params[attribute_name] != -1:
return actor_params[attribute_name]
def _adjust_dt(self):
# re-evaluate rendering dt to simulate physics substeps
physics_dt = self.sim_params["dt"]
rendering_dt = self.sim_params["rendering_dt"]
# by default, rendering dt = physics dt
if rendering_dt <= 0:
rendering_dt = physics_dt
self.task_config["renderingInterval"] = max(round((1/physics_dt) / (1/rendering_dt)), 1)
        # we always set rendering dt to be the same as physics dt; stepping is taken care of in VecEnvRLGames
self.sim_params["rendering_dt"] = physics_dt
@property
def sim_params(self):
return self._sim_params
@property
def config(self):
return self._config
@property
def task_config(self):
return self._cfg
@property
def physx_params(self):
return self._physx_params
def get_physics_params(self):
return {**self.sim_params, **self.physx_params}
def _get_physx_collision_api(self, prim):
from pxr import PhysxSchema, UsdPhysics
physx_collision_api = PhysxSchema.PhysxCollisionAPI(prim)
if not physx_collision_api:
physx_collision_api = PhysxSchema.PhysxCollisionAPI.Apply(prim)
return physx_collision_api
def _get_physx_rigid_body_api(self, prim):
from pxr import PhysxSchema, UsdPhysics
physx_rb_api = PhysxSchema.PhysxRigidBodyAPI(prim)
if not physx_rb_api:
physx_rb_api = PhysxSchema.PhysxRigidBodyAPI.Apply(prim)
return physx_rb_api
def _get_physx_articulation_api(self, prim):
from pxr import PhysxSchema, UsdPhysics
arti_api = PhysxSchema.PhysxArticulationAPI(prim)
if not arti_api:
arti_api = PhysxSchema.PhysxArticulationAPI.Apply(prim)
return arti_api
def set_contact_offset(self, name, prim, value=None):
physx_collision_api = self._get_physx_collision_api(prim)
contact_offset = physx_collision_api.GetContactOffsetAttr()
# if not contact_offset:
# contact_offset = physx_collision_api.CreateContactOffsetAttr()
if value is None:
value = self._get_actor_config_value(name, "contact_offset", contact_offset)
if value != -1:
contact_offset.Set(value)
def set_rest_offset(self, name, prim, value=None):
physx_collision_api = self._get_physx_collision_api(prim)
rest_offset = physx_collision_api.GetRestOffsetAttr()
# if not rest_offset:
# rest_offset = physx_collision_api.CreateRestOffsetAttr()
if value is None:
value = self._get_actor_config_value(name, "rest_offset", rest_offset)
if value != -1:
rest_offset.Set(value)
def set_position_iteration(self, name, prim, value=None):
physx_rb_api = self._get_physx_rigid_body_api(prim)
solver_position_iteration_count = physx_rb_api.GetSolverPositionIterationCountAttr()
if value is None:
value = self._get_actor_config_value(
name, "solver_position_iteration_count", solver_position_iteration_count
)
if value != -1:
solver_position_iteration_count.Set(value)
def set_velocity_iteration(self, name, prim, value=None):
physx_rb_api = self._get_physx_rigid_body_api(prim)
solver_velocity_iteration_count = physx_rb_api.GetSolverVelocityIterationCountAttr()
if value is None:
value = self._get_actor_config_value(
name, "solver_velocity_iteration_count", solver_velocity_iteration_count
)
if value != -1:
solver_velocity_iteration_count.Set(value)
def set_max_depenetration_velocity(self, name, prim, value=None):
physx_rb_api = self._get_physx_rigid_body_api(prim)
max_depenetration_velocity = physx_rb_api.GetMaxDepenetrationVelocityAttr()
if value is None:
value = self._get_actor_config_value(name, "max_depenetration_velocity", max_depenetration_velocity)
if value != -1:
max_depenetration_velocity.Set(value)
def set_sleep_threshold(self, name, prim, value=None):
physx_rb_api = self._get_physx_rigid_body_api(prim)
sleep_threshold = physx_rb_api.GetSleepThresholdAttr()
if value is None:
value = self._get_actor_config_value(name, "sleep_threshold", sleep_threshold)
if value != -1:
sleep_threshold.Set(value)
def set_stabilization_threshold(self, name, prim, value=None):
physx_rb_api = self._get_physx_rigid_body_api(prim)
stabilization_threshold = physx_rb_api.GetStabilizationThresholdAttr()
if value is None:
value = self._get_actor_config_value(name, "stabilization_threshold", stabilization_threshold)
if value != -1:
stabilization_threshold.Set(value)
def set_gyroscopic_forces(self, name, prim, value=None):
physx_rb_api = self._get_physx_rigid_body_api(prim)
enable_gyroscopic_forces = physx_rb_api.GetEnableGyroscopicForcesAttr()
if value is None:
value = self._get_actor_config_value(name, "enable_gyroscopic_forces", enable_gyroscopic_forces)
if value != -1:
enable_gyroscopic_forces.Set(value)
def set_density(self, name, prim, value=None):
physx_rb_api = self._get_physx_rigid_body_api(prim)
density = physx_rb_api.GetDensityAttr()
if value is None:
value = self._get_actor_config_value(name, "density", density)
if value != -1:
density.Set(value)
# auto-compute mass
            self.set_mass(name, prim, 0.0)
def set_mass(self, name, prim, value=None):
physx_rb_api = self._get_physx_rigid_body_api(prim)
mass = physx_rb_api.GetMassAttr()
if value is None:
value = self._get_actor_config_value(name, "mass", mass)
if value != -1:
mass.Set(value)
def retain_acceleration(self, prim):
# retain accelerations if running with more than one substep
physx_rb_api = self._get_physx_rigid_body_api(prim)
if self._sim_params["substeps"] > 1:
physx_rb_api.GetRetainAccelerationsAttr().Set(True)
def make_kinematic(self, name, prim, cfg, value=None):
# make rigid body kinematic (fixed base and no collision)
from pxr import PhysxSchema, UsdPhysics
stage = omni.usd.get_context().get_stage()
if value is None:
value = self._get_actor_config_value(name, "make_kinematic")
if value == True:
# parse through all children prims
prims = [prim]
while len(prims) > 0:
cur_prim = prims.pop(0)
rb = UsdPhysics.RigidBodyAPI.Get(stage, cur_prim.GetPath())
if rb:
rb.CreateKinematicEnabledAttr().Set(True)
children_prims = cur_prim.GetPrim().GetChildren()
prims = prims + children_prims
def set_articulation_position_iteration(self, name, prim, value=None):
arti_api = self._get_physx_articulation_api(prim)
solver_position_iteration_count = arti_api.GetSolverPositionIterationCountAttr()
if value is None:
value = self._get_actor_config_value(
name, "solver_position_iteration_count", solver_position_iteration_count
)
if value != -1:
solver_position_iteration_count.Set(value)
def set_articulation_velocity_iteration(self, name, prim, value=None):
arti_api = self._get_physx_articulation_api(prim)
solver_velocity_iteration_count = arti_api.GetSolverVelocityIterationCountAttr()
if value is None:
value = self._get_actor_config_value(
name, "solver_velocity_iteration_count", solver_velocity_iteration_count
)
if value != -1:
solver_velocity_iteration_count.Set(value)
def set_articulation_sleep_threshold(self, name, prim, value=None):
arti_api = self._get_physx_articulation_api(prim)
sleep_threshold = arti_api.GetSleepThresholdAttr()
if value is None:
value = self._get_actor_config_value(name, "sleep_threshold", sleep_threshold)
if value != -1:
sleep_threshold.Set(value)
def set_articulation_stabilization_threshold(self, name, prim, value=None):
arti_api = self._get_physx_articulation_api(prim)
stabilization_threshold = arti_api.GetStabilizationThresholdAttr()
if value is None:
value = self._get_actor_config_value(name, "stabilization_threshold", stabilization_threshold)
if value != -1:
stabilization_threshold.Set(value)
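    # Stamps per-body PhysX attributes (solver iterations, depenetration velocity,
    # sleep/stabilization thresholds, gyroscopic forces, density/mass) onto a rigid
    # body prim; only the kinematic flag is skipped for articulation members, since
    # that case is handled at the articulation root.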
def apply_rigid_body_settings(self, name, prim, cfg, is_articulation):
from pxr import PhysxSchema, UsdPhysics
stage = omni.usd.get_context().get_stage()
rb_api = UsdPhysics.RigidBodyAPI.Get(stage, prim.GetPath())
physx_rb_api = PhysxSchema.PhysxRigidBodyAPI.Get(stage, prim.GetPath())
if not physx_rb_api:
physx_rb_api = PhysxSchema.PhysxRigidBodyAPI.Apply(prim)
# if it's a body in an articulation, it's handled at articulation root
if not is_articulation:
self.make_kinematic(name, prim, cfg, cfg["make_kinematic"])
self.set_position_iteration(name, prim, cfg["solver_position_iteration_count"])
self.set_velocity_iteration(name, prim, cfg["solver_velocity_iteration_count"])
self.set_max_depenetration_velocity(name, prim, cfg["max_depenetration_velocity"])
self.set_sleep_threshold(name, prim, cfg["sleep_threshold"])
self.set_stabilization_threshold(name, prim, cfg["stabilization_threshold"])
self.set_gyroscopic_forces(name, prim, cfg["enable_gyroscopic_forces"])
# density and mass
mass_api = UsdPhysics.MassAPI.Get(stage, prim.GetPath())
        if not mass_api:  # Get() returns an invalid (falsy) schema object rather than None
mass_api = UsdPhysics.MassAPI.Apply(prim)
mass_attr = mass_api.GetMassAttr()
density_attr = mass_api.GetDensityAttr()
if not mass_attr:
mass_attr = mass_api.CreateMassAttr()
if not density_attr:
density_attr = mass_api.CreateDensityAttr()
if cfg["density"] != -1:
density_attr.Set(cfg["density"])
mass_attr.Set(0.0) # mass is to be computed
elif cfg["override_usd_defaults"] and not density_attr.IsAuthored() and not mass_attr.IsAuthored():
density_attr.Set(self._physx_params["density"])
self.retain_acceleration(prim)
def apply_rigid_shape_settings(self, name, prim, cfg):
from pxr import PhysxSchema, UsdPhysics
stage = omni.usd.get_context().get_stage()
# collision APIs
collision_api = UsdPhysics.CollisionAPI(prim)
if not collision_api:
collision_api = UsdPhysics.CollisionAPI.Apply(prim)
physx_collision_api = PhysxSchema.PhysxCollisionAPI(prim)
if not physx_collision_api:
physx_collision_api = PhysxSchema.PhysxCollisionAPI.Apply(prim)
self.set_contact_offset(name, prim, cfg["contact_offset"])
self.set_rest_offset(name, prim, cfg["rest_offset"])
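    # Two-pass traversal: first walks the subtree to detect whether the prim belongs
    # to an articulation, then walks it again applying rigid-body, collision, and
    # articulation-root settings to every matching child prim.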
def apply_articulation_settings(self, name, prim, cfg):
from pxr import PhysxSchema, UsdPhysics
stage = omni.usd.get_context().get_stage()
is_articulation = False
# check if is articulation
prims = [prim]
while len(prims) > 0:
prim_tmp = prims.pop(0)
articulation_api = UsdPhysics.ArticulationRootAPI.Get(stage, prim_tmp.GetPath())
physx_articulation_api = PhysxSchema.PhysxArticulationAPI.Get(stage, prim_tmp.GetPath())
if articulation_api or physx_articulation_api:
is_articulation = True
children_prims = prim_tmp.GetPrim().GetChildren()
prims = prims + children_prims
# parse through all children prims
prims = [prim]
while len(prims) > 0:
cur_prim = prims.pop(0)
rb = UsdPhysics.RigidBodyAPI.Get(stage, cur_prim.GetPath())
collision_body = UsdPhysics.CollisionAPI.Get(stage, cur_prim.GetPath())
articulation = UsdPhysics.ArticulationRootAPI.Get(stage, cur_prim.GetPath())
if rb:
self.apply_rigid_body_settings(name, cur_prim, cfg, is_articulation)
if collision_body:
self.apply_rigid_shape_settings(name, cur_prim, cfg)
if articulation:
articulation_api = UsdPhysics.ArticulationRootAPI.Get(stage, cur_prim.GetPath())
physx_articulation_api = PhysxSchema.PhysxArticulationAPI.Get(stage, cur_prim.GetPath())
# enable self collisions
enable_self_collisions = physx_articulation_api.GetEnabledSelfCollisionsAttr()
if cfg["enable_self_collisions"] != -1:
enable_self_collisions.Set(cfg["enable_self_collisions"])
self.set_articulation_position_iteration(name, cur_prim, cfg["solver_position_iteration_count"])
self.set_articulation_velocity_iteration(name, cur_prim, cfg["solver_velocity_iteration_count"])
self.set_articulation_sleep_threshold(name, cur_prim, cfg["sleep_threshold"])
self.set_articulation_stabilization_threshold(name, cur_prim, cfg["stabilization_threshold"])
children_prims = cur_prim.GetPrim().GetChildren()
prims = prims + children_prims
| 22,563 | Python | 43.769841 | 117 | 0.634446 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/utils/config_utils/default_scene_params.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
default_physx_params = {
### Per-scene settings
"use_gpu": False,
"worker_thread_count": 4,
"solver_type": 1, # 0: PGS, 1:TGS
"bounce_threshold_velocity": 0.2,
"friction_offset_threshold": 0.04, # A threshold of contact separation distance used to decide if a contact
# point will experience friction forces.
"friction_correlation_distance": 0.025, # Contact points can be merged into a single friction anchor if the
# distance between the contacts is smaller than correlation distance.
# disabling these can be useful for debugging
"enable_sleeping": True,
"enable_stabilization": True,
# GPU buffers
"gpu_max_rigid_contact_count": 512 * 1024,
"gpu_max_rigid_patch_count": 80 * 1024,
"gpu_found_lost_pairs_capacity": 1024,
"gpu_found_lost_aggregate_pairs_capacity": 1024,
"gpu_total_aggregate_pairs_capacity": 1024,
"gpu_max_soft_body_contacts": 1024 * 1024,
"gpu_max_particle_contacts": 1024 * 1024,
"gpu_heap_capacity": 64 * 1024 * 1024,
"gpu_temp_buffer_capacity": 16 * 1024 * 1024,
"gpu_max_num_partitions": 8,
"gpu_collision_stack_size": 64 * 1024 * 1024,
### Per-actor settings ( can override in actor_options )
"solver_position_iteration_count": 4,
"solver_velocity_iteration_count": 1,
"sleep_threshold": 0.0, # Mass-normalized kinetic energy threshold below which an actor may go to sleep.
# Allowed range [0, max_float).
"stabilization_threshold": 0.0, # Mass-normalized kinetic energy threshold below which an actor may
# participate in stabilization. Allowed range [0, max_float).
### Per-body settings ( can override in actor_options )
"enable_gyroscopic_forces": False,
"density": 1000.0, # density to be used for bodies that do not specify mass or density
"max_depenetration_velocity": 100.0,
### Per-shape settings ( can override in actor_options )
"contact_offset": 0.02,
"rest_offset": 0.001,
}
default_physics_material = {"static_friction": 1.0, "dynamic_friction": 1.0, "restitution": 0.0}
default_sim_params = {
"gravity": [0.0, 0.0, -9.81],
"dt": 1.0 / 60.0,
"rendering_dt": -1.0, # we don't want to override this if it's set from cfg
"substeps": 1,
"use_gpu_pipeline": True,
"add_ground_plane": True,
"add_distant_light": True,
"use_fabric": True,
"enable_scene_query_support": False,
"enable_cameras": False,
"disable_contact_processing": False,
"default_physics_material": default_physics_material,
}
default_actor_options = {
# -1 means use authored value from USD or default values from default_sim_params if not explicitly authored in USD.
# If an attribute value is not explicitly authored in USD, add one with the value given here,
# which overrides the USD default.
"override_usd_defaults": False,
"make_kinematic": -1,
"enable_self_collisions": -1,
"enable_gyroscopic_forces": -1,
"solver_position_iteration_count": -1,
"solver_velocity_iteration_count": -1,
"sleep_threshold": -1,
"stabilization_threshold": -1,
"max_depenetration_velocity": -1,
"density": -1,
"mass": -1,
"contact_offset": -1,
"rest_offset": -1,
}
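# Illustrative sketch (not part of the original defaults): consumers typically
# resolve a per-actor option against these dictionaries, e.g.
#   value = actor_options.get("sleep_threshold", -1)   # `actor_options` is a hypothetical override dict
#   if value == -1:
#       value = default_physx_params["sleep_threshold"]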
| 4,783 | Python | 44.132075 | 119 | 0.703951 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/utils/config_utils/path_utils.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
import carb
from hydra.utils import to_absolute_path
def is_valid_local_file(path):
return os.path.isfile(path)
def is_valid_ov_file(path):
import omni.client
result, entry = omni.client.stat(path)
return result == omni.client.Result.OK
def download_ov_file(source_path, target_path):
import omni.client
result = omni.client.copy(source_path, target_path)
if result == omni.client.Result.OK:
return True
return False
def break_ov_path(path):
import omni.client
return omni.client.break_url(path)
def retrieve_checkpoint_path(path):
# check if it's a local path
if is_valid_local_file(path):
return to_absolute_path(path)
# check if it's an OV path
elif is_valid_ov_file(path):
ov_path = break_ov_path(path)
file_name = os.path.basename(ov_path.path)
target_path = f"checkpoints/{file_name}"
        if download_ov_file(path, target_path):
            return to_absolute_path(target_path)
        carb.log_error(f"Failed to copy checkpoint from: {path}")
        return None
else:
carb.log_error(f"Invalid checkpoint path: {path}. Does the file exist?")
return None
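# Illustrative usage (hypothetical paths):
#   retrieve_checkpoint_path("runs/Ant/nn/Ant.pth")             # local file -> absolute path
#   retrieve_checkpoint_path("omniverse://localhost/ckpt.pth")  # Nucleus file -> local copy
# Both calls return an absolute local path, or None if the file cannot be found.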
def get_experience(headless, enable_livestream, enable_viewport, enable_recording, kit_app):
if kit_app == '':
if enable_viewport:
import omniisaacgymenvs
experience = os.path.abspath(os.path.join(os.path.dirname(omniisaacgymenvs.__file__), '../apps/omni.isaac.sim.python.gym.camera.kit'))
else:
experience = f'{os.environ["EXP_PATH"]}/omni.isaac.sim.python.gym.kit'
if headless and not enable_livestream and not enable_recording:
experience = f'{os.environ["EXP_PATH"]}/omni.isaac.sim.python.gym.headless.kit'
else:
experience = kit_app
return experience
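# Illustrative usage (hypothetical values): a headless run without livestream,
# viewport, or recording resolves to the headless kit file, e.g.
#   get_experience(True, False, False, False, '')
#   # -> f"{os.environ['EXP_PATH']}/omni.isaac.sim.python.gym.headless.kit"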
| 3,346 | Python | 35.780219 | 146 | 0.715481 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/utils/hydra_cfg/hydra_utils.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import hydra
from omegaconf import DictConfig, OmegaConf
## OmegaConf & Hydra Config
# Resolvers used in hydra configs (see https://omegaconf.readthedocs.io/en/2.1_branch/usage.html#resolvers)
if not OmegaConf.has_resolver("eq"):
OmegaConf.register_new_resolver("eq", lambda x, y: x.lower() == y.lower())
if not OmegaConf.has_resolver("contains"):
OmegaConf.register_new_resolver("contains", lambda x, y: x.lower() in y.lower())
if not OmegaConf.has_resolver("if"):
OmegaConf.register_new_resolver("if", lambda pred, a, b: a if pred else b)
# allows us to resolve default arguments which are copied in multiple places in the config. used primarily for
# num_ensv
if not OmegaConf.has_resolver("resolve_default"):
OmegaConf.register_new_resolver("resolve_default", lambda default, arg: default if arg == "" else arg)
| 2,394 | Python | 51.065216 | 110 | 0.767753 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/utils/hydra_cfg/reformat.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Dict
from omegaconf import DictConfig, OmegaConf
def omegaconf_to_dict(d: DictConfig) -> Dict:
"""Converts an omegaconf DictConfig to a python Dict, respecting variable interpolation."""
ret = {}
for k, v in d.items():
if isinstance(v, DictConfig):
ret[k] = omegaconf_to_dict(v)
else:
ret[k] = v
return ret
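# Illustrative usage (hypothetical config):
#   cfg = OmegaConf.create({"train": {"lr": 1e-4}})
#   omegaconf_to_dict(cfg)  # -> {"train": {"lr": 0.0001}}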
def print_dict(val, nesting: int = -4, start: bool = True):
"""Outputs a nested dictionory."""
if type(val) == dict:
if not start:
print("")
nesting += 4
for k in val:
print(nesting * " ", end="")
print(k, end=": ")
print_dict(val[k], nesting, start=False)
else:
print(val)
| 2,313 | Python | 38.896551 | 95 | 0.707739 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/utils/terrain_utils/terrain_utils.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from math import sqrt
import numpy as np
from numpy.random import choice
from omni.isaac.core.prims import XFormPrim
from pxr import Gf, PhysxSchema, Sdf, UsdPhysics
from scipy import interpolate
def random_uniform_terrain(
terrain,
min_height,
max_height,
step=1,
downsampled_scale=None,
):
"""
Generate a uniform noise terrain
    Parameters:
        terrain (SubTerrain): the terrain
        min_height (float): the minimum height of the terrain [meters]
        max_height (float): the maximum height of the terrain [meters]
        step (float): minimum height change between two points [meters]
        downsampled_scale (float): distance between two randomly sampled points (must be larger than or equal to terrain.horizontal_scale)
    Returns:
        terrain (SubTerrain): the updated terrain
"""
if downsampled_scale is None:
downsampled_scale = terrain.horizontal_scale
# switch parameters to discrete units
min_height = int(min_height / terrain.vertical_scale)
max_height = int(max_height / terrain.vertical_scale)
step = int(step / terrain.vertical_scale)
heights_range = np.arange(min_height, max_height + step, step)
height_field_downsampled = np.random.choice(
heights_range,
(
int(terrain.width * terrain.horizontal_scale / downsampled_scale),
int(terrain.length * terrain.horizontal_scale / downsampled_scale),
),
)
x = np.linspace(0, terrain.width * terrain.horizontal_scale, height_field_downsampled.shape[0])
y = np.linspace(0, terrain.length * terrain.horizontal_scale, height_field_downsampled.shape[1])
f = interpolate.RectBivariateSpline(y, x, height_field_downsampled)
x_upsampled = np.linspace(0, terrain.width * terrain.horizontal_scale, terrain.width)
y_upsampled = np.linspace(0, terrain.length * terrain.horizontal_scale, terrain.length)
z_upsampled = np.rint(f(y_upsampled, x_upsampled))
terrain.height_field_raw += z_upsampled.astype(np.int16)
return terrain
def sloped_terrain(terrain, slope=1):
"""
Generate a sloped terrain
Parameters:
terrain (SubTerrain): the terrain
        slope (float): positive or negative slope
    Returns:
        terrain (SubTerrain): the updated terrain
"""
x = np.arange(0, terrain.width)
y = np.arange(0, terrain.length)
xx, yy = np.meshgrid(x, y, sparse=True)
xx = xx.reshape(terrain.width, 1)
max_height = int(slope * (terrain.horizontal_scale / terrain.vertical_scale) * terrain.width)
terrain.height_field_raw[:, np.arange(terrain.length)] += (max_height * xx / terrain.width).astype(
terrain.height_field_raw.dtype
)
return terrain
def pyramid_sloped_terrain(terrain, slope=1, platform_size=1.0):
"""
    Generate a pyramid-shaped sloped terrain
    Parameters:
        terrain (SubTerrain): the terrain
        slope (float): positive or negative slope
        platform_size (float): size of the flat platform at the center of the terrain [meters]
    Returns:
        terrain (SubTerrain): the updated terrain
"""
x = np.arange(0, terrain.width)
y = np.arange(0, terrain.length)
center_x = int(terrain.width / 2)
center_y = int(terrain.length / 2)
xx, yy = np.meshgrid(x, y, sparse=True)
xx = (center_x - np.abs(center_x - xx)) / center_x
yy = (center_y - np.abs(center_y - yy)) / center_y
xx = xx.reshape(terrain.width, 1)
yy = yy.reshape(1, terrain.length)
max_height = int(slope * (terrain.horizontal_scale / terrain.vertical_scale) * (terrain.width / 2))
terrain.height_field_raw += (max_height * xx * yy).astype(terrain.height_field_raw.dtype)
platform_size = int(platform_size / terrain.horizontal_scale / 2)
x1 = terrain.width // 2 - platform_size
x2 = terrain.width // 2 + platform_size
y1 = terrain.length // 2 - platform_size
y2 = terrain.length // 2 + platform_size
min_h = min(terrain.height_field_raw[x1, y1], 0)
max_h = max(terrain.height_field_raw[x1, y1], 0)
terrain.height_field_raw = np.clip(terrain.height_field_raw, min_h, max_h)
return terrain
def discrete_obstacles_terrain(terrain, max_height, min_size, max_size, num_rects, platform_size=1.0):
"""
    Generate a terrain with randomly placed rectangular obstacles
    Parameters:
        terrain (SubTerrain): the terrain
        max_height (float): maximum height of the obstacles (range=[-max, -max/2, max/2, max]) [meters]
        min_size (float): minimum size of a rectangle obstacle [meters]
        max_size (float): maximum size of a rectangle obstacle [meters]
        num_rects (int): number of randomly generated obstacles
        platform_size (float): size of the flat platform at the center of the terrain [meters]
    Returns:
        terrain (SubTerrain): the updated terrain
"""
# switch parameters to discrete units
max_height = int(max_height / terrain.vertical_scale)
min_size = int(min_size / terrain.horizontal_scale)
max_size = int(max_size / terrain.horizontal_scale)
platform_size = int(platform_size / terrain.horizontal_scale)
(i, j) = terrain.height_field_raw.shape
height_range = [-max_height, -max_height // 2, max_height // 2, max_height]
width_range = range(min_size, max_size, 4)
length_range = range(min_size, max_size, 4)
for _ in range(num_rects):
width = np.random.choice(width_range)
length = np.random.choice(length_range)
start_i = np.random.choice(range(0, i - width, 4))
start_j = np.random.choice(range(0, j - length, 4))
terrain.height_field_raw[start_i : start_i + width, start_j : start_j + length] = np.random.choice(height_range)
x1 = (terrain.width - platform_size) // 2
x2 = (terrain.width + platform_size) // 2
y1 = (terrain.length - platform_size) // 2
y2 = (terrain.length + platform_size) // 2
terrain.height_field_raw[x1:x2, y1:y2] = 0
return terrain
def wave_terrain(terrain, num_waves=1, amplitude=1.0):
"""
Generate a wavy terrain
Parameters:
        terrain (SubTerrain): the terrain
        num_waves (int): number of sine waves across the terrain length
        amplitude (float): amplitude of the waves [meters]
    Returns:
        terrain (SubTerrain): the updated terrain
"""
amplitude = int(0.5 * amplitude / terrain.vertical_scale)
if num_waves > 0:
div = terrain.length / (num_waves * np.pi * 2)
x = np.arange(0, terrain.width)
y = np.arange(0, terrain.length)
xx, yy = np.meshgrid(x, y, sparse=True)
xx = xx.reshape(terrain.width, 1)
yy = yy.reshape(1, terrain.length)
terrain.height_field_raw += (amplitude * np.cos(yy / div) + amplitude * np.sin(xx / div)).astype(
terrain.height_field_raw.dtype
)
return terrain
def stairs_terrain(terrain, step_width, step_height):
"""
    Generate a staircase terrain
    Parameters:
        terrain (SubTerrain): the terrain
        step_width (float): the width of the step [meters]
        step_height (float): the height of the step [meters]
    Returns:
        terrain (SubTerrain): the updated terrain
"""
# switch parameters to discrete units
step_width = int(step_width / terrain.horizontal_scale)
step_height = int(step_height / terrain.vertical_scale)
num_steps = terrain.width // step_width
height = step_height
for i in range(num_steps):
terrain.height_field_raw[i * step_width : (i + 1) * step_width, :] += height
height += step_height
return terrain
def pyramid_stairs_terrain(terrain, step_width, step_height, platform_size=1.0):
"""
    Generate pyramid-shaped stairs
    Parameters:
        terrain (SubTerrain): the terrain
        step_width (float): the width of the step [meters]
        step_height (float): the height of the step [meters]
        platform_size (float): size of the flat platform at the center of the terrain [meters]
    Returns:
        terrain (SubTerrain): the updated terrain
"""
# switch parameters to discrete units
step_width = int(step_width / terrain.horizontal_scale)
step_height = int(step_height / terrain.vertical_scale)
platform_size = int(platform_size / terrain.horizontal_scale)
height = 0
start_x = 0
stop_x = terrain.width
start_y = 0
stop_y = terrain.length
while (stop_x - start_x) > platform_size and (stop_y - start_y) > platform_size:
start_x += step_width
stop_x -= step_width
start_y += step_width
stop_y -= step_width
height += step_height
terrain.height_field_raw[start_x:stop_x, start_y:stop_y] = height
return terrain
def stepping_stones_terrain(terrain, stone_size, stone_distance, max_height, platform_size=1.0, depth=-10):
"""
Generate a stepping stones terrain
Parameters:
        terrain (SubTerrain): the terrain
        stone_size (float): horizontal size of the stepping stones [meters]
        stone_distance (float): distance between stones (i.e. the size of the holes) [meters]
        max_height (float): maximum height of the stones (positive and negative) [meters]
        platform_size (float): size of the flat platform at the center of the terrain [meters]
        depth (float): depth of the holes (default=-10.0) [meters]
    Returns:
        terrain (SubTerrain): the updated terrain
"""
# switch parameters to discrete units
stone_size = int(stone_size / terrain.horizontal_scale)
stone_distance = int(stone_distance / terrain.horizontal_scale)
max_height = int(max_height / terrain.vertical_scale)
platform_size = int(platform_size / terrain.horizontal_scale)
height_range = np.arange(-max_height - 1, max_height, step=1)
start_x = 0
start_y = 0
terrain.height_field_raw[:, :] = int(depth / terrain.vertical_scale)
if terrain.length >= terrain.width:
while start_y < terrain.length:
stop_y = min(terrain.length, start_y + stone_size)
start_x = np.random.randint(0, stone_size)
# fill first hole
stop_x = max(0, start_x - stone_distance)
terrain.height_field_raw[0:stop_x, start_y:stop_y] = np.random.choice(height_range)
# fill row
while start_x < terrain.width:
stop_x = min(terrain.width, start_x + stone_size)
terrain.height_field_raw[start_x:stop_x, start_y:stop_y] = np.random.choice(height_range)
start_x += stone_size + stone_distance
start_y += stone_size + stone_distance
elif terrain.width > terrain.length:
while start_x < terrain.width:
stop_x = min(terrain.width, start_x + stone_size)
start_y = np.random.randint(0, stone_size)
# fill first hole
stop_y = max(0, start_y - stone_distance)
terrain.height_field_raw[start_x:stop_x, 0:stop_y] = np.random.choice(height_range)
# fill column
while start_y < terrain.length:
stop_y = min(terrain.length, start_y + stone_size)
terrain.height_field_raw[start_x:stop_x, start_y:stop_y] = np.random.choice(height_range)
start_y += stone_size + stone_distance
start_x += stone_size + stone_distance
x1 = (terrain.width - platform_size) // 2
x2 = (terrain.width + platform_size) // 2
y1 = (terrain.length - platform_size) // 2
y2 = (terrain.length + platform_size) // 2
terrain.height_field_raw[x1:x2, y1:y2] = 0
return terrain
def convert_heightfield_to_trimesh(height_field_raw, horizontal_scale, vertical_scale, slope_threshold=None):
"""
Convert a heightfield array to a triangle mesh represented by vertices and triangles.
    Optionally, corrects vertical surfaces above the provided slope threshold:
If (y2-y1)/(x2-x1) > slope_threshold -> Move A to A' (set x1 = x2). Do this for all directions.
B(x2,y2)
/|
/ |
/ |
(x1,y1)A---A'(x2',y1)
Parameters:
height_field_raw (np.array): input heightfield
horizontal_scale (float): horizontal scale of the heightfield [meters]
vertical_scale (float): vertical scale of the heightfield [meters]
slope_threshold (float): the slope threshold above which surfaces are made vertical. If None no correction is applied (default: None)
Returns:
vertices (np.array(float)): array of shape (num_vertices, 3). Each row represents the location of each vertex [meters]
triangles (np.array(int)): array of shape (num_triangles, 3). Each row represents the indices of the 3 vertices connected by this triangle.
"""
hf = height_field_raw
num_rows = hf.shape[0]
num_cols = hf.shape[1]
y = np.linspace(0, (num_cols - 1) * horizontal_scale, num_cols)
x = np.linspace(0, (num_rows - 1) * horizontal_scale, num_rows)
yy, xx = np.meshgrid(y, x)
if slope_threshold is not None:
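        # convert the threshold from a physical gradient (dz/dx) into heightfield units (height steps per grid cell)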
slope_threshold *= horizontal_scale / vertical_scale
move_x = np.zeros((num_rows, num_cols))
move_y = np.zeros((num_rows, num_cols))
move_corners = np.zeros((num_rows, num_cols))
move_x[: num_rows - 1, :] += hf[1:num_rows, :] - hf[: num_rows - 1, :] > slope_threshold
move_x[1:num_rows, :] -= hf[: num_rows - 1, :] - hf[1:num_rows, :] > slope_threshold
move_y[:, : num_cols - 1] += hf[:, 1:num_cols] - hf[:, : num_cols - 1] > slope_threshold
move_y[:, 1:num_cols] -= hf[:, : num_cols - 1] - hf[:, 1:num_cols] > slope_threshold
move_corners[: num_rows - 1, : num_cols - 1] += (
hf[1:num_rows, 1:num_cols] - hf[: num_rows - 1, : num_cols - 1] > slope_threshold
)
move_corners[1:num_rows, 1:num_cols] -= (
hf[: num_rows - 1, : num_cols - 1] - hf[1:num_rows, 1:num_cols] > slope_threshold
)
xx += (move_x + move_corners * (move_x == 0)) * horizontal_scale
yy += (move_y + move_corners * (move_y == 0)) * horizontal_scale
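        # xx/yy now lean toward the higher neighbor, turning steep ramps into clean vertical faces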
# create triangle mesh vertices and triangles from the heightfield grid
vertices = np.zeros((num_rows * num_cols, 3), dtype=np.float32)
vertices[:, 0] = xx.flatten()
vertices[:, 1] = yy.flatten()
vertices[:, 2] = hf.flatten() * vertical_scale
triangles = -np.ones((2 * (num_rows - 1) * (num_cols - 1), 3), dtype=np.uint32)
for i in range(num_rows - 1):
ind0 = np.arange(0, num_cols - 1) + i * num_cols
ind1 = ind0 + 1
ind2 = ind0 + num_cols
ind3 = ind2 + 1
start = 2 * i * (num_cols - 1)
stop = start + 2 * (num_cols - 1)
triangles[start:stop:2, 0] = ind0
triangles[start:stop:2, 1] = ind3
triangles[start:stop:2, 2] = ind1
triangles[start + 1 : stop : 2, 0] = ind0
triangles[start + 1 : stop : 2, 1] = ind2
triangles[start + 1 : stop : 2, 2] = ind3
return vertices, triangles
def add_terrain_to_stage(stage, vertices, triangles, position=None, orientation=None):
num_faces = triangles.shape[0]
terrain_mesh = stage.DefinePrim("/World/terrain", "Mesh")
terrain_mesh.GetAttribute("points").Set(vertices)
terrain_mesh.GetAttribute("faceVertexIndices").Set(triangles.flatten())
terrain_mesh.GetAttribute("faceVertexCounts").Set(np.asarray([3] * num_faces))
terrain = XFormPrim(prim_path="/World/terrain", name="terrain", position=position, orientation=orientation)
UsdPhysics.CollisionAPI.Apply(terrain.prim)
# collision_api = UsdPhysics.MeshCollisionAPI.Apply(terrain.prim)
# collision_api.CreateApproximationAttr().Set("meshSimplification")
physx_collision_api = PhysxSchema.PhysxCollisionAPI.Apply(terrain.prim)
physx_collision_api.GetContactOffsetAttr().Set(0.02)
physx_collision_api.GetRestOffsetAttr().Set(0.00)
class SubTerrain:
def __init__(self, terrain_name="terrain", width=256, length=256, vertical_scale=1.0, horizontal_scale=1.0):
self.terrain_name = terrain_name
self.vertical_scale = vertical_scale
self.horizontal_scale = horizontal_scale
self.width = width
self.length = length
self.height_field_raw = np.zeros((self.width, self.length), dtype=np.int16)
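# Illustrative sketch (not part of the original file); scales are hypothetical:
#   terrain = SubTerrain(width=256, length=256, vertical_scale=0.005, horizontal_scale=0.1)
#   terrain = pyramid_sloped_terrain(terrain, slope=0.3)
#   vertices, triangles = convert_heightfield_to_trimesh(
#       terrain.height_field_raw, horizontal_scale=0.1, vertical_scale=0.005, slope_threshold=1.5)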
| 17,645 | Python | 41.215311 | 147 | 0.649306 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/utils/terrain_utils/create_terrain_demo.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os, sys
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append(SCRIPT_DIR)
import omni
from omni.isaac.kit import SimulationApp
import numpy as np
import torch
simulation_app = SimulationApp({"headless": False})
from abc import abstractmethod
from omni.isaac.core.tasks import BaseTask
from omni.isaac.core.prims import RigidPrimView, RigidPrim, XFormPrim
from omni.isaac.core import World
from omni.isaac.core.objects import DynamicSphere
from omni.isaac.core.utils.prims import define_prim, get_prim_at_path
from omni.isaac.core.utils.nucleus import find_nucleus_server
from omni.isaac.core.utils.stage import add_reference_to_stage, get_current_stage
from omni.isaac.core.materials import PreviewSurface
from omni.isaac.cloner import GridCloner
from pxr import UsdPhysics, UsdLux, UsdShade, Sdf, Gf, UsdGeom, PhysxSchema
from terrain_utils import *
class TerrainCreation(BaseTask):
def __init__(self, name, num_envs, num_per_row, env_spacing, config=None, offset=None,) -> None:
BaseTask.__init__(self, name=name, offset=offset)
self._num_envs = num_envs
self._num_per_row = num_per_row
self._env_spacing = env_spacing
self._device = "cpu"
self._cloner = GridCloner(self._env_spacing, self._num_per_row)
self._cloner.define_base_env(self.default_base_env_path)
define_prim(self.default_zero_env_path)
@property
def default_base_env_path(self):
return "/World/envs"
@property
def default_zero_env_path(self):
return f"{self.default_base_env_path}/env_0"
def set_up_scene(self, scene) -> None:
self._stage = get_current_stage()
distantLight = UsdLux.DistantLight.Define(self._stage, Sdf.Path("/World/DistantLight"))
distantLight.CreateIntensityAttr(2000)
self.get_terrain()
self.get_ball()
super().set_up_scene(scene)
prim_paths = self._cloner.generate_paths("/World/envs/env", self._num_envs)
print(f"cloning {self._num_envs} environments...")
self._env_pos = self._cloner.clone(
source_prim_path="/World/envs/env_0",
prim_paths=prim_paths
)
return
def get_terrain(self):
# create all available terrain types
        num_terrains = 8
terrain_width = 12.
terrain_length = 12.
horizontal_scale = 0.25 # [m]
vertical_scale = 0.005 # [m]
num_rows = int(terrain_width/horizontal_scale)
num_cols = int(terrain_length/horizontal_scale)
        heightfield = np.zeros((num_terrains*num_rows, num_cols), dtype=np.int16)
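        # the sub-terrains are stacked along the first (row) axis of one combined heightfield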
def new_sub_terrain():
return SubTerrain(width=num_rows, length=num_cols, vertical_scale=vertical_scale, horizontal_scale=horizontal_scale)
heightfield[0:num_rows, :] = random_uniform_terrain(new_sub_terrain(), min_height=-0.2, max_height=0.2, step=0.2, downsampled_scale=0.5).height_field_raw
heightfield[num_rows:2*num_rows, :] = sloped_terrain(new_sub_terrain(), slope=-0.5).height_field_raw
heightfield[2*num_rows:3*num_rows, :] = pyramid_sloped_terrain(new_sub_terrain(), slope=-0.5).height_field_raw
heightfield[3*num_rows:4*num_rows, :] = discrete_obstacles_terrain(new_sub_terrain(), max_height=0.5, min_size=1., max_size=5., num_rects=20).height_field_raw
heightfield[4*num_rows:5*num_rows, :] = wave_terrain(new_sub_terrain(), num_waves=2., amplitude=1.).height_field_raw
heightfield[5*num_rows:6*num_rows, :] = stairs_terrain(new_sub_terrain(), step_width=0.75, step_height=-0.5).height_field_raw
heightfield[6*num_rows:7*num_rows, :] = pyramid_stairs_terrain(new_sub_terrain(), step_width=0.75, step_height=-0.5).height_field_raw
heightfield[7*num_rows:8*num_rows, :] = stepping_stones_terrain(new_sub_terrain(), stone_size=1.,
stone_distance=1., max_height=0.5, platform_size=0.).height_field_raw
vertices, triangles = convert_heightfield_to_trimesh(heightfield, horizontal_scale=horizontal_scale, vertical_scale=vertical_scale, slope_threshold=1.5)
position = np.array([-6.0, 48.0, 0])
orientation = np.array([0.70711, 0.0, 0.0, -0.70711])
add_terrain_to_stage(stage=self._stage, vertices=vertices, triangles=triangles, position=position, orientation=orientation)
def get_ball(self):
ball = DynamicSphere(prim_path=self.default_zero_env_path + "/ball",
name="ball",
translation=np.array([0.0, 0.0, 1.0]),
mass=0.5,
radius=0.2,)
def post_reset(self):
for i in range(self._num_envs):
ball_prim = self._stage.GetPrimAtPath(f"{self.default_base_env_path}/env_{i}/ball")
color = 0.5 + 0.5 * np.random.random(3)
visual_material = PreviewSurface(prim_path=f"{self.default_base_env_path}/env_{i}/ball/Looks/visual_material", color=color)
binding_api = UsdShade.MaterialBindingAPI(ball_prim)
binding_api.Bind(visual_material.material, bindingStrength=UsdShade.Tokens.strongerThanDescendants)
def get_observations(self):
pass
def calculate_metrics(self) -> None:
pass
def is_done(self) -> None:
pass
if __name__ == "__main__":
world = World(
stage_units_in_meters=1.0,
rendering_dt=1.0/60.0,
backend="torch",
device="cpu",
)
num_envs = 800
num_per_row = 80
env_spacing = 0.56*2
terrain_creation_task = TerrainCreation(name="TerrainCreation",
num_envs=num_envs,
num_per_row=num_per_row,
env_spacing=env_spacing,
)
world.add_task(terrain_creation_task)
world.reset()
while simulation_app.is_running():
if world.is_playing():
if world.current_time_step_index == 0:
world.reset(soft=True)
world.step(render=True)
else:
world.step(render=True)
simulation_app.close() | 7,869 | Python | 43.213483 | 166 | 0.650654 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/utils/usd_utils/create_instanceable_assets.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import omni.client
import omni.usd
from pxr import Sdf, UsdGeom
def update_reference(source_prim_path, source_reference_path, target_reference_path):
stage = omni.usd.get_context().get_stage()
prims = [stage.GetPrimAtPath(source_prim_path)]
while len(prims) > 0:
prim = prims.pop(0)
prim_spec = stage.GetRootLayer().GetPrimAtPath(prim.GetPath())
reference_list = prim_spec.referenceList
refs = reference_list.GetAddedOrExplicitItems()
if len(refs) > 0:
for ref in refs:
if ref.assetPath == source_reference_path:
prim.GetReferences().RemoveReference(ref)
prim.GetReferences().AddReference(assetPath=target_reference_path, primPath=prim.GetPath())
prims = prims + prim.GetChildren()
def create_parent_xforms(asset_usd_path, source_prim_path, save_as_path=None):
"""Adds a new UsdGeom.Xform prim for each Mesh/Geometry prim under source_prim_path.
Moves material assignment to new parent prim if any exists on the Mesh/Geometry prim.
Args:
asset_usd_path (str): USD file path for asset
source_prim_path (str): USD path of root prim
save_as_path (str): USD file path for modified USD stage. Defaults to None, will save in same file.
"""
omni.usd.get_context().open_stage(asset_usd_path)
stage = omni.usd.get_context().get_stage()
prims = [stage.GetPrimAtPath(source_prim_path)]
edits = Sdf.BatchNamespaceEdit()
while len(prims) > 0:
prim = prims.pop(0)
print(prim)
if prim.GetTypeName() in ["Mesh", "Capsule", "Sphere", "Box"]:
new_xform = UsdGeom.Xform.Define(stage, str(prim.GetPath()) + "_xform")
print(prim, new_xform)
edits.Add(Sdf.NamespaceEdit.Reparent(prim.GetPath(), new_xform.GetPath(), 0))
continue
children_prims = prim.GetChildren()
prims = prims + children_prims
stage.GetRootLayer().Apply(edits)
if save_as_path is None:
omni.usd.get_context().save_stage()
else:
omni.usd.get_context().save_as_stage(save_as_path)
def convert_asset_instanceable(asset_usd_path, source_prim_path, save_as_path=None, create_xforms=True):
"""Makes all mesh/geometry prims instanceable.
Can optionally add UsdGeom.Xform prim as parent for all mesh/geometry prims.
Makes a copy of the asset USD file, which will be used for referencing.
Updates asset file to convert all parent prims of mesh/geometry prims to reference cloned USD file.
Args:
asset_usd_path (str): USD file path for asset
source_prim_path (str): USD path of root prim
save_as_path (str): USD file path for modified USD stage. Defaults to None, will save in same file.
create_xforms (bool): Whether to add new UsdGeom.Xform prims to mesh/geometry prims.
"""
if create_xforms:
create_parent_xforms(asset_usd_path, source_prim_path, save_as_path)
asset_usd_path = save_as_path
instance_usd_path = ".".join(asset_usd_path.split(".")[:-1]) + "_meshes.usd"
omni.client.copy(asset_usd_path, instance_usd_path)
omni.usd.get_context().open_stage(asset_usd_path)
stage = omni.usd.get_context().get_stage()
prims = [stage.GetPrimAtPath(source_prim_path)]
while len(prims) > 0:
prim = prims.pop(0)
if prim:
if prim.GetTypeName() in ["Mesh", "Capsule", "Sphere", "Box"]:
parent_prim = prim.GetParent()
if parent_prim and not parent_prim.IsInstance():
parent_prim.GetReferences().AddReference(
assetPath=instance_usd_path, primPath=str(parent_prim.GetPath())
)
parent_prim.SetInstanceable(True)
continue
children_prims = prim.GetChildren()
prims = prims + children_prims
if save_as_path is None:
omni.usd.get_context().save_stage()
else:
omni.usd.get_context().save_as_stage(save_as_path)
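# Illustrative usage (hypothetical paths):
#   convert_asset_instanceable(
#       asset_usd_path="omniverse://localhost/Library/robot.usd",
#       source_prim_path="/robot",
#       save_as_path="robot_instanceable.usd",
#   )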
| 5,627 | Python | 42.627907 | 111 | 0.67727 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/balance_bot.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Optional
import carb
import numpy as np
import torch
from omni.isaac.core.robots.robot import Robot
from omni.isaac.core.utils.nucleus import get_assets_root_path
from omni.isaac.core.utils.stage import add_reference_to_stage
from omniisaacgymenvs.tasks.utils.usd_utils import set_drive
class BalanceBot(Robot):
def __init__(
self,
prim_path: str,
name: Optional[str] = "BalanceBot",
usd_path: Optional[str] = None,
translation: Optional[np.ndarray] = None,
orientation: Optional[np.ndarray] = None,
) -> None:
"""[summary]"""
self._usd_path = usd_path
self._name = name
if self._usd_path is None:
assets_root_path = get_assets_root_path()
if assets_root_path is None:
carb.log_error("Could not find Isaac Sim assets folder")
self._usd_path = assets_root_path + "/Isaac/Robots/BalanceBot/balance_bot.usd"
add_reference_to_stage(self._usd_path, prim_path)
super().__init__(
prim_path=prim_path,
name=name,
translation=translation,
orientation=orientation,
articulation_controller=None,
)
for j in range(3):
# set leg joint properties
joint_path = f"joints/lower_leg{j}"
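            # angular position drive: target 0.0, stiffness 400, damping 40, max force 1000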
set_drive(f"{self.prim_path}/{joint_path}", "angular", "position", 0, 400, 40, 1000)
| 2,996 | Python | 40.054794 | 96 | 0.697597 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/allegro_hand.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Optional
import carb
import numpy as np
import torch
from omni.isaac.core.robots.robot import Robot
from omni.isaac.core.utils.nucleus import get_assets_root_path
from omni.isaac.core.utils.stage import add_reference_to_stage
from pxr import Gf, PhysxSchema, Sdf, Usd, UsdGeom, UsdPhysics
class AllegroHand(Robot):
def __init__(
self,
prim_path: str,
name: Optional[str] = "allegro_hand",
usd_path: Optional[str] = None,
translation: Optional[torch.tensor] = None,
orientation: Optional[torch.tensor] = None,
) -> None:
self._usd_path = usd_path
self._name = name
if self._usd_path is None:
assets_root_path = get_assets_root_path()
if assets_root_path is None:
carb.log_error("Could not find Isaac Sim assets folder")
self._usd_path = assets_root_path + "/Isaac/Robots/AllegroHand/allegro_hand_instanceable.usd"
self._position = torch.tensor([0.0, 0.0, 0.5]) if translation is None else translation
self._orientation = (
torch.tensor([0.257551, 0.283045, 0.683330, -0.621782]) if orientation is None else orientation
)
add_reference_to_stage(self._usd_path, prim_path)
super().__init__(
prim_path=prim_path,
name=name,
translation=self._position,
orientation=self._orientation,
articulation_controller=None,
)
def set_allegro_hand_properties(self, stage, allegro_hand_prim):
for link_prim in allegro_hand_prim.GetChildren():
if not (
link_prim == stage.GetPrimAtPath("/allegro/Looks")
or link_prim == stage.GetPrimAtPath("/allegro/root_joint")
):
rb = PhysxSchema.PhysxRigidBodyAPI.Apply(link_prim)
rb.GetDisableGravityAttr().Set(True)
rb.GetRetainAccelerationsAttr().Set(False)
rb.GetEnableGyroscopicForcesAttr().Set(False)
rb.GetAngularDampingAttr().Set(0.01)
rb.GetMaxLinearVelocityAttr().Set(1000)
rb.GetMaxAngularVelocityAttr().Set(64 / np.pi * 180)
rb.GetMaxDepenetrationVelocityAttr().Set(1000)
rb.GetMaxContactImpulseAttr().Set(1e32)
def set_motor_control_mode(self, stage, allegro_hand_path):
prim = stage.GetPrimAtPath(allegro_hand_path)
self._set_joint_properties(stage, prim)
def _set_joint_properties(self, stage, prim):
if prim.HasAPI(UsdPhysics.DriveAPI):
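            # gains are scaled by pi/180 below, converting per-radian values to USD's degree-based angular drive units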
drive = UsdPhysics.DriveAPI.Apply(prim, "angular")
drive.GetStiffnessAttr().Set(3 * np.pi / 180)
drive.GetDampingAttr().Set(0.1 * np.pi / 180)
drive.GetMaxForceAttr().Set(0.5)
revolute_joint = PhysxSchema.PhysxJointAPI.Get(stage, prim.GetPath())
revolute_joint.GetJointFrictionAttr().Set(0.01)
for child_prim in prim.GetChildren():
self._set_joint_properties(stage, child_prim)
| 4,627 | Python | 43.5 | 107 | 0.673655 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/guarddog.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Optional
import carb
import numpy as np
import torch
from omni.isaac.core.robots.robot import Robot
from omni.isaac.core.utils.nucleus import get_assets_root_path
from omni.isaac.core.utils.stage import add_reference_to_stage
from pxr import PhysxSchema
class Guarddog(Robot):
def __init__(
self,
prim_path: str,
name: Optional[str] = "Guarddog",
usd_path: Optional[str] = None,
translation: Optional[np.ndarray] = None,
orientation: Optional[np.ndarray] = None,
) -> None:
self._usd_path = usd_path
self._name = name
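        # NOTE: the usd_path argument is currently overridden by the hard-coded local asset path below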
self._usd_path = r"C:\Users\Plutonium\MyProjects\GuardDog\isaac\IsaacGymEnvs\assets\urdf\QuadCoordFix\urdf\QuadCoordFix\QuadCoordFix.usd"
# self._usd_path = r"C:\Users\Plutonium\MyProjects\GuardDog\isaac\IsaacGymEnvs\assets\urdf\Quad_Foot\urdf\Quad_Foot\Quad_Foot.usd"
add_reference_to_stage(self._usd_path, prim_path)
super().__init__(
prim_path=prim_path,
name=name,
translation=translation,
orientation=orientation,
articulation_controller=None,
)
def set_guarddog_properties(self, stage, prim):
for link_prim in prim.GetChildren():
if link_prim.HasAPI(PhysxSchema.PhysxRigidBodyAPI):
rb = PhysxSchema.PhysxRigidBodyAPI.Get(stage, link_prim.GetPrimPath())
rb.GetDisableGravityAttr().Set(False)
rb.GetRetainAccelerationsAttr().Set(False)
rb.GetLinearDampingAttr().Set(0.0)
rb.GetMaxLinearVelocityAttr().Set(1000.0)
rb.GetAngularDampingAttr().Set(0.0)
rb.GetMaxAngularVelocityAttr().Set(64 / np.pi * 180)
def prepare_contacts(self, stage, prim):
for link_prim in prim.GetChildren():
if link_prim.HasAPI(PhysxSchema.PhysxRigidBodyAPI):
if "_Hip" not in str(link_prim.GetPrimPath()):
                    print(f"enabling contact reporting on {link_prim.GetPrimPath()}")
rb = PhysxSchema.PhysxRigidBodyAPI.Get(stage, link_prim.GetPrimPath())
rb.CreateSleepThresholdAttr().Set(0)
cr_api = PhysxSchema.PhysxContactReportAPI.Apply(link_prim)
cr_api.CreateThresholdAttr().Set(0)
| 3,911 | Python | 43.965517 | 146 | 0.685502 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/shadow_hand.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Optional
import carb
import numpy as np
import torch
from omni.isaac.core.robots.robot import Robot
from omni.isaac.core.utils.nucleus import get_assets_root_path
from omni.isaac.core.utils.stage import add_reference_to_stage
from omniisaacgymenvs.tasks.utils.usd_utils import set_drive
from pxr import Gf, PhysxSchema, Sdf, Usd, UsdGeom, UsdPhysics
class ShadowHand(Robot):
def __init__(
self,
prim_path: str,
name: Optional[str] = "shadow_hand",
usd_path: Optional[str] = None,
translation: Optional[torch.tensor] = None,
orientation: Optional[torch.tensor] = None,
) -> None:
self._usd_path = usd_path
self._name = name
if self._usd_path is None:
assets_root_path = get_assets_root_path()
if assets_root_path is None:
carb.log_error("Could not find Isaac Sim assets folder")
self._usd_path = assets_root_path + "/Isaac/Robots/ShadowHand/shadow_hand_instanceable.usd"
self._position = torch.tensor([0.0, 0.0, 0.5]) if translation is None else translation
self._orientation = torch.tensor([1.0, 0.0, 0.0, 0.0]) if orientation is None else orientation
add_reference_to_stage(self._usd_path, prim_path)
super().__init__(
prim_path=prim_path,
name=name,
translation=self._position,
orientation=self._orientation,
articulation_controller=None,
)
def set_shadow_hand_properties(self, stage, shadow_hand_prim):
for link_prim in shadow_hand_prim.GetChildren():
if link_prim.HasAPI(PhysxSchema.PhysxRigidBodyAPI):
rb = PhysxSchema.PhysxRigidBodyAPI.Get(stage, link_prim.GetPrimPath())
rb.GetDisableGravityAttr().Set(True)
rb.GetRetainAccelerationsAttr().Set(True)
def set_motor_control_mode(self, stage, shadow_hand_path):
joints_config = {
"robot0_WRJ1": {"stiffness": 5, "damping": 0.5, "max_force": 4.785},
"robot0_WRJ0": {"stiffness": 5, "damping": 0.5, "max_force": 2.175},
"robot0_FFJ3": {"stiffness": 1, "damping": 0.1, "max_force": 0.9},
"robot0_FFJ2": {"stiffness": 1, "damping": 0.1, "max_force": 0.9},
"robot0_FFJ1": {"stiffness": 1, "damping": 0.1, "max_force": 0.7245},
"robot0_MFJ3": {"stiffness": 1, "damping": 0.1, "max_force": 0.9},
"robot0_MFJ2": {"stiffness": 1, "damping": 0.1, "max_force": 0.9},
"robot0_MFJ1": {"stiffness": 1, "damping": 0.1, "max_force": 0.7245},
"robot0_RFJ3": {"stiffness": 1, "damping": 0.1, "max_force": 0.9},
"robot0_RFJ2": {"stiffness": 1, "damping": 0.1, "max_force": 0.9},
"robot0_RFJ1": {"stiffness": 1, "damping": 0.1, "max_force": 0.7245},
"robot0_LFJ4": {"stiffness": 1, "damping": 0.1, "max_force": 0.9},
"robot0_LFJ3": {"stiffness": 1, "damping": 0.1, "max_force": 0.9},
"robot0_LFJ2": {"stiffness": 1, "damping": 0.1, "max_force": 0.9},
"robot0_LFJ1": {"stiffness": 1, "damping": 0.1, "max_force": 0.7245},
"robot0_THJ4": {"stiffness": 1, "damping": 0.1, "max_force": 2.3722},
"robot0_THJ3": {"stiffness": 1, "damping": 0.1, "max_force": 1.45},
"robot0_THJ2": {"stiffness": 1, "damping": 0.1, "max_force": 0.99},
"robot0_THJ1": {"stiffness": 1, "damping": 0.1, "max_force": 0.99},
"robot0_THJ0": {"stiffness": 1, "damping": 0.1, "max_force": 0.81},
}
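        # the pi/180 factors below convert the per-radian gains above to USD's degree-based angular drive units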
for joint_name, config in joints_config.items():
set_drive(
f"{self.prim_path}/joints/{joint_name}",
"angular",
"position",
0.0,
config["stiffness"] * np.pi / 180,
config["damping"] * np.pi / 180,
config["max_force"],
)
| 5,517 | Python | 46.982608 | 103 | 0.623527 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/crazyflie.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Optional
import carb
import numpy as np
import torch
from omni.isaac.core.robots.robot import Robot
from omni.isaac.core.utils.nucleus import get_assets_root_path
from omni.isaac.core.utils.stage import add_reference_to_stage
class Crazyflie(Robot):
def __init__(
self,
prim_path: str,
name: Optional[str] = "crazyflie",
usd_path: Optional[str] = None,
translation: Optional[np.ndarray] = None,
orientation: Optional[np.ndarray] = None,
scale: Optional[np.array] = None,
) -> None:
"""[summary]"""
self._usd_path = usd_path
self._name = name
if self._usd_path is None:
assets_root_path = get_assets_root_path()
if assets_root_path is None:
carb.log_error("Could not find Isaac Sim assets folder")
self._usd_path = assets_root_path + "/Isaac/Robots/Crazyflie/cf2x.usd"
add_reference_to_stage(self._usd_path, prim_path)
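        # NOTE: any scale passed in is replaced by a fixed 5x uniform scale below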
scale = torch.tensor([5, 5, 5])
super().__init__(prim_path=prim_path, name=name, translation=translation, orientation=orientation, scale=scale)
| 2,720 | Python | 40.861538 | 119 | 0.718015 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/cabinet.py | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
from typing import Optional
import carb
import numpy as np
import torch
from omni.isaac.core.robots.robot import Robot
from omni.isaac.core.utils.nucleus import get_assets_root_path
from omni.isaac.core.utils.stage import add_reference_to_stage
class Cabinet(Robot):
def __init__(
self,
prim_path: str,
name: Optional[str] = "cabinet",
usd_path: Optional[str] = None,
translation: Optional[torch.tensor] = None,
orientation: Optional[torch.tensor] = None,
) -> None:
"""[summary]"""
self._usd_path = usd_path
self._name = name
if self._usd_path is None:
assets_root_path = get_assets_root_path()
if assets_root_path is None:
carb.log_error("Could not find Isaac Sim assets folder")
self._usd_path = assets_root_path + "/Isaac/Props/Sektion_Cabinet/sektion_cabinet_instanceable.usd"
add_reference_to_stage(self._usd_path, prim_path)
self._position = torch.tensor([0.0, 0.0, 0.4]) if translation is None else translation
        self._orientation = torch.tensor([1.0, 0.0, 0.0, 0.0]) if orientation is None else orientation
super().__init__(
prim_path=prim_path,
name=name,
translation=self._position,
orientation=self._orientation,
articulation_controller=None,
)
| 1,819 | Python | 35.399999 | 111 | 0.660803 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/humanoid.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Optional
import carb
import numpy as np
import torch
from omni.isaac.core.robots.robot import Robot
from omni.isaac.core.utils.nucleus import get_assets_root_path
from omni.isaac.core.utils.stage import add_reference_to_stage
class Humanoid(Robot):
def __init__(
self,
prim_path: str,
name: Optional[str] = "Humanoid",
usd_path: Optional[str] = None,
translation: Optional[np.ndarray] = None,
orientation: Optional[np.ndarray] = None,
) -> None:
self._usd_path = usd_path
self._name = name
if self._usd_path is None:
assets_root_path = get_assets_root_path()
if assets_root_path is None:
carb.log_error("Could not find Isaac Sim assets folder")
self._usd_path = assets_root_path + "/Isaac/Robots/Humanoid/humanoid_instanceable.usd"
add_reference_to_stage(self._usd_path, prim_path)
super().__init__(
prim_path=prim_path,
name=name,
translation=translation,
orientation=orientation,
articulation_controller=None,
)
| 2,716 | Python | 38.955882 | 98 | 0.71134 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/franka.py | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
import math
from typing import Optional
import carb
import numpy as np
import torch
from omni.isaac.core.robots.robot import Robot
from omni.isaac.core.utils.nucleus import get_assets_root_path
from omni.isaac.core.utils.prims import get_prim_at_path
from omni.isaac.core.utils.stage import add_reference_to_stage
from omniisaacgymenvs.tasks.utils.usd_utils import set_drive
from pxr import PhysxSchema
class Franka(Robot):
def __init__(
self,
prim_path: str,
name: Optional[str] = "franka",
usd_path: Optional[str] = None,
translation: Optional[torch.tensor] = None,
orientation: Optional[torch.tensor] = None,
) -> None:
"""[summary]"""
self._usd_path = usd_path
self._name = name
self._position = torch.tensor([1.0, 0.0, 0.0]) if translation is None else translation
self._orientation = torch.tensor([0.0, 0.0, 0.0, 1.0]) if orientation is None else orientation
if self._usd_path is None:
assets_root_path = get_assets_root_path()
if assets_root_path is None:
carb.log_error("Could not find Isaac Sim assets folder")
self._usd_path = assets_root_path + "/Isaac/Robots/Franka/franka_instanceable.usd"
add_reference_to_stage(self._usd_path, prim_path)
super().__init__(
prim_path=prim_path,
name=name,
translation=self._position,
orientation=self._orientation,
articulation_controller=None,
)
dof_paths = [
"panda_link0/panda_joint1",
"panda_link1/panda_joint2",
"panda_link2/panda_joint3",
"panda_link3/panda_joint4",
"panda_link4/panda_joint5",
"panda_link5/panda_joint6",
"panda_link6/panda_joint7",
"panda_hand/panda_finger_joint1",
"panda_hand/panda_finger_joint2",
]
drive_type = ["angular"] * 7 + ["linear"] * 2
default_dof_pos = [math.degrees(x) for x in [0.0, -1.0, 0.0, -2.2, 0.0, 2.4, 0.8]] + [0.02, 0.02]
stiffness = [400 * np.pi / 180] * 7 + [10000] * 2
damping = [80 * np.pi / 180] * 7 + [100] * 2
max_force = [87, 87, 87, 87, 12, 12, 12, 200, 200]
max_velocity = [math.degrees(x) for x in [2.175, 2.175, 2.175, 2.175, 2.61, 2.61, 2.61]] + [0.2, 0.2]
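        # Configure each DOF as a position drive. The angular stiffness/damping
        # values above are per-radian gains scaled by pi/180, since USD angular
        # drives operate in degrees; a per-joint velocity limit is also applied
        # through the PhysX joint API below.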
for i, dof in enumerate(dof_paths):
set_drive(
prim_path=f"{self.prim_path}/{dof}",
drive_type=drive_type[i],
target_type="position",
target_value=default_dof_pos[i],
stiffness=stiffness[i],
damping=damping[i],
max_force=max_force[i],
)
PhysxSchema.PhysxJointAPI(get_prim_at_path(f"{self.prim_path}/{dof}")).CreateMaxJointVelocityAttr().Set(
max_velocity[i]
)
def set_franka_properties(self, stage, prim):
for link_prim in prim.GetChildren():
if link_prim.HasAPI(PhysxSchema.PhysxRigidBodyAPI):
rb = PhysxSchema.PhysxRigidBodyAPI.Get(stage, link_prim.GetPrimPath())
rb.GetDisableGravityAttr().Set(True)
| 3,653 | Python | 37.0625 | 116 | 0.599781 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/ant.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Optional
import carb
import numpy as np
import torch
from omni.isaac.core.robots.robot import Robot
from omni.isaac.core.utils.nucleus import get_assets_root_path
from omni.isaac.core.utils.stage import add_reference_to_stage
class Ant(Robot):
def __init__(
self,
prim_path: str,
name: Optional[str] = "Ant",
usd_path: Optional[str] = None,
translation: Optional[np.ndarray] = None,
orientation: Optional[np.ndarray] = None,
) -> None:
self._usd_path = usd_path
self._name = name
if self._usd_path is None:
assets_root_path = get_assets_root_path()
if assets_root_path is None:
carb.log_error("Could not find Isaac Sim assets folder")
self._usd_path = assets_root_path + "/Isaac/Robots/Ant/ant_instanceable.usd"
add_reference_to_stage(self._usd_path, prim_path)
super().__init__(
prim_path=prim_path,
name=name,
translation=translation,
orientation=orientation,
articulation_controller=None,
)
| 2,696 | Python | 38.661764 | 88 | 0.709199 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/cartpole.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Optional
import carb
import numpy as np
import torch
from omni.isaac.core.robots.robot import Robot
from omni.isaac.core.utils.nucleus import get_assets_root_path
from omni.isaac.core.utils.stage import add_reference_to_stage
class Cartpole(Robot):
def __init__(
self,
prim_path: str,
name: Optional[str] = "Cartpole",
usd_path: Optional[str] = None,
translation: Optional[np.ndarray] = None,
orientation: Optional[np.ndarray] = None,
) -> None:
self._usd_path = usd_path
self._name = name
if self._usd_path is None:
assets_root_path = get_assets_root_path()
if assets_root_path is None:
carb.log_error("Could not find Isaac Sim assets folder")
self._usd_path = assets_root_path + "/Isaac/Robots/Cartpole/cartpole.usd"
add_reference_to_stage(self._usd_path, prim_path)
super().__init__(
prim_path=prim_path,
name=name,
translation=translation,
orientation=orientation,
articulation_controller=None,
)
| 2,703 | Python | 38.764705 | 85 | 0.710322 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/factory_franka.py | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
import math
from typing import Optional
import carb
import numpy as np
import torch
from omni.isaac.core.robots.robot import Robot
from omni.isaac.core.utils.nucleus import get_assets_root_path
from omni.isaac.core.utils.prims import get_prim_at_path
from omni.isaac.core.utils.stage import add_reference_to_stage
from omniisaacgymenvs.tasks.utils.usd_utils import set_drive
from pxr import PhysxSchema
class FactoryFranka(Robot):
def __init__(
self,
prim_path: str,
name: Optional[str] = "franka",
usd_path: Optional[str] = None,
translation: Optional[torch.tensor] = None,
orientation: Optional[torch.tensor] = None,
) -> None:
"""[summary]"""
self._usd_path = usd_path
self._name = name
self._position = torch.tensor([1.0, 0.0, 0.0]) if translation is None else translation
self._orientation = torch.tensor([0.0, 0.0, 0.0, 1.0]) if orientation is None else orientation
if self._usd_path is None:
assets_root_path = get_assets_root_path()
if assets_root_path is None:
carb.log_error("Could not find Isaac Sim assets folder")
self._usd_path = assets_root_path + "/Isaac/Robots/FactoryFranka/factory_franka.usd"
add_reference_to_stage(self._usd_path, prim_path)
super().__init__(
prim_path=prim_path,
name=name,
translation=self._position,
orientation=self._orientation,
articulation_controller=None,
)
dof_paths = [
"panda_link0/panda_joint1",
"panda_link1/panda_joint2",
"panda_link2/panda_joint3",
"panda_link3/panda_joint4",
"panda_link4/panda_joint5",
"panda_link5/panda_joint6",
"panda_link6/panda_joint7",
"panda_hand/panda_finger_joint1",
"panda_hand/panda_finger_joint2",
]
drive_type = ["angular"] * 7 + ["linear"] * 2
default_dof_pos = [math.degrees(x) for x in [0.0, -1.0, 0.0, -2.2, 0.0, 2.4, 0.8]] + [0.02, 0.02]
stiffness = [40 * np.pi / 180] * 7 + [500] * 2
damping = [80 * np.pi / 180] * 7 + [20] * 2
max_force = [87, 87, 87, 87, 12, 12, 12, 200, 200]
max_velocity = [math.degrees(x) for x in [2.175, 2.175, 2.175, 2.175, 2.61, 2.61, 2.61]] + [0.2, 0.2]
for i, dof in enumerate(dof_paths):
set_drive(
prim_path=f"{self.prim_path}/{dof}",
drive_type=drive_type[i],
target_type="position",
target_value=default_dof_pos[i],
stiffness=stiffness[i],
damping=damping[i],
max_force=max_force[i],
)
PhysxSchema.PhysxJointAPI(get_prim_at_path(f"{self.prim_path}/{dof}")).CreateMaxJointVelocityAttr().Set(
max_velocity[i]
)
| 3,356 | Python | 36.719101 | 116 | 0.596544 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/quadcopter.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Optional
import carb
import numpy as np
import torch
from omni.isaac.core.robots.robot import Robot
from omni.isaac.core.utils.nucleus import get_assets_root_path
from omni.isaac.core.utils.stage import add_reference_to_stage
class Quadcopter(Robot):
def __init__(
self,
prim_path: str,
name: Optional[str] = "Quadcopter",
usd_path: Optional[str] = None,
translation: Optional[np.ndarray] = None,
orientation: Optional[np.ndarray] = None,
) -> None:
"""[summary]"""
self._usd_path = usd_path
self._name = name
if self._usd_path is None:
assets_root_path = get_assets_root_path()
if assets_root_path is None:
carb.log_error("Could not find Isaac Sim assets folder")
self._usd_path = assets_root_path + "/Isaac/Robots/Quadcopter/quadcopter.usd"
add_reference_to_stage(self._usd_path, prim_path)
super().__init__(
prim_path=prim_path,
name=name,
position=translation,
orientation=orientation,
articulation_controller=None,
)
| 2,719 | Python | 39.597014 | 89 | 0.706878 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/ingenuity.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Optional
import carb
import numpy as np
import torch
from omni.isaac.core.prims import RigidPrimView
from omni.isaac.core.robots.robot import Robot
from omni.isaac.core.utils.nucleus import get_assets_root_path
from omni.isaac.core.utils.stage import add_reference_to_stage
class Ingenuity(Robot):
def __init__(
self,
prim_path: str,
name: Optional[str] = "ingenuity",
usd_path: Optional[str] = None,
translation: Optional[np.ndarray] = None,
orientation: Optional[np.ndarray] = None,
        scale: Optional[np.ndarray] = None,
) -> None:
"""[summary]"""
self._usd_path = usd_path
self._name = name
if self._usd_path is None:
assets_root_path = get_assets_root_path()
if assets_root_path is None:
carb.log_error("Could not find Isaac Sim assets folder")
self._usd_path = (
assets_root_path + "/Isaac/Robots/Ingenuity/ingenuity.usd"
)
add_reference_to_stage(self._usd_path, prim_path)
        if scale is None:
            scale = torch.tensor([0.01, 0.01, 0.01])
super().__init__(prim_path=prim_path, name=name, translation=translation, orientation=orientation, scale=scale)
| 2,802 | Python | 40.83582 | 119 | 0.711991 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/anymal.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Optional
import carb
import numpy as np
import torch
from omni.isaac.core.prims import RigidPrimView
from omni.isaac.core.robots.robot import Robot
from omni.isaac.core.utils.nucleus import get_assets_root_path
from omni.isaac.core.utils.stage import add_reference_to_stage
from pxr import PhysxSchema
class Anymal(Robot):
def __init__(
self,
prim_path: str,
name: Optional[str] = "Anymal",
usd_path: Optional[str] = None,
translation: Optional[np.ndarray] = None,
orientation: Optional[np.ndarray] = None,
) -> None:
"""[summary]"""
self._usd_path = usd_path
self._name = name
if self._usd_path is None:
assets_root_path = get_assets_root_path()
if assets_root_path is None:
carb.log_error("Could not find nucleus server with /Isaac folder")
self._usd_path = assets_root_path + "/Isaac/Robots/ANYbotics/anymal_instanceable.usd"
add_reference_to_stage(self._usd_path, prim_path)
super().__init__(
prim_path=prim_path,
name=name,
translation=translation,
orientation=orientation,
articulation_controller=None,
)
self._dof_names = [
"LF_HAA",
"LH_HAA",
"RF_HAA",
"RH_HAA",
"LF_HFE",
"LH_HFE",
"RF_HFE",
"RH_HFE",
"LF_KFE",
"LH_KFE",
"RF_KFE",
"RH_KFE",
]
@property
def dof_names(self):
return self._dof_names
def set_anymal_properties(self, stage, prim):
for link_prim in prim.GetChildren():
if link_prim.HasAPI(PhysxSchema.PhysxRigidBodyAPI):
rb = PhysxSchema.PhysxRigidBodyAPI.Get(stage, link_prim.GetPrimPath())
rb.GetDisableGravityAttr().Set(False)
rb.GetRetainAccelerationsAttr().Set(False)
rb.GetLinearDampingAttr().Set(0.0)
rb.GetMaxLinearVelocityAttr().Set(1000.0)
rb.GetAngularDampingAttr().Set(0.0)
                rb.GetMaxAngularVelocityAttr().Set(64 / np.pi * 180)  # 64 rad/s expressed in deg/s
def prepare_contacts(self, stage, prim):
for link_prim in prim.GetChildren():
if link_prim.HasAPI(PhysxSchema.PhysxRigidBodyAPI):
if "_HIP" not in str(link_prim.GetPrimPath()):
rb = PhysxSchema.PhysxRigidBodyAPI.Get(stage, link_prim.GetPrimPath())
rb.CreateSleepThresholdAttr().Set(0)
cr_api = PhysxSchema.PhysxContactReportAPI.Apply(link_prim)
cr_api.CreateThresholdAttr().Set(0)
| 4,273 | Python | 38.943925 | 97 | 0.648022 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/views/cabinet_view.py | from typing import Optional
from omni.isaac.core.articulations import ArticulationView
from omni.isaac.core.prims import RigidPrimView
class CabinetView(ArticulationView):
def __init__(
self,
prim_paths_expr: str,
name: Optional[str] = "CabinetView",
) -> None:
"""[summary]"""
super().__init__(prim_paths_expr=prim_paths_expr, name=name, reset_xform_properties=False)
self._drawers = RigidPrimView(
prim_paths_expr="/World/envs/.*/cabinet/drawer_top", name="drawers_view", reset_xform_properties=False
)
| 586 | Python | 28.349999 | 114 | 0.653584 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/views/shadow_hand_view.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Optional
import torch
from omni.isaac.core.articulations import ArticulationView
from omni.isaac.core.prims import RigidPrimView
class ShadowHandView(ArticulationView):
def __init__(
self,
prim_paths_expr: str,
name: Optional[str] = "ShadowHandView",
) -> None:
super().__init__(prim_paths_expr=prim_paths_expr, name=name, reset_xform_properties=False)
self._fingers = RigidPrimView(
prim_paths_expr="/World/envs/.*/shadow_hand/robot0.*distal",
name="finger_view",
reset_xform_properties=False,
)
@property
def actuated_dof_indices(self):
return self._actuated_dof_indices
def initialize(self, physics_sim_view):
super().initialize(physics_sim_view)
self.actuated_joint_names = [
"robot0_WRJ1",
"robot0_WRJ0",
"robot0_FFJ3",
"robot0_FFJ2",
"robot0_FFJ1",
"robot0_MFJ3",
"robot0_MFJ2",
"robot0_MFJ1",
"robot0_RFJ3",
"robot0_RFJ2",
"robot0_RFJ1",
"robot0_LFJ4",
"robot0_LFJ3",
"robot0_LFJ2",
"robot0_LFJ1",
"robot0_THJ4",
"robot0_THJ3",
"robot0_THJ2",
"robot0_THJ1",
"robot0_THJ0",
]
self._actuated_dof_indices = list()
for joint_name in self.actuated_joint_names:
self._actuated_dof_indices.append(self.get_dof_index(joint_name))
self._actuated_dof_indices.sort()
limit_stiffness = torch.tensor([30.0] * self.num_fixed_tendons, device=self._device)
damping = torch.tensor([0.1] * self.num_fixed_tendons, device=self._device)
self.set_fixed_tendon_properties(dampings=damping, limit_stiffnesses=limit_stiffness)
fingertips = ["robot0_ffdistal", "robot0_mfdistal", "robot0_rfdistal", "robot0_lfdistal", "robot0_thdistal"]
self._sensor_indices = torch.tensor([self._body_indices[j] for j in fingertips], device=self._device, dtype=torch.long)
| 3,681 | Python | 38.591397 | 127 | 0.669383 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/views/franka_view.py | from typing import Optional
from omni.isaac.core.articulations import ArticulationView
from omni.isaac.core.prims import RigidPrimView
class FrankaView(ArticulationView):
def __init__(
self,
prim_paths_expr: str,
name: Optional[str] = "FrankaView",
) -> None:
"""[summary]"""
super().__init__(prim_paths_expr=prim_paths_expr, name=name, reset_xform_properties=False)
self._hands = RigidPrimView(
prim_paths_expr="/World/envs/.*/franka/panda_link7", name="hands_view", reset_xform_properties=False
)
self._lfingers = RigidPrimView(
prim_paths_expr="/World/envs/.*/franka/panda_leftfinger", name="lfingers_view", reset_xform_properties=False
)
self._rfingers = RigidPrimView(
prim_paths_expr="/World/envs/.*/franka/panda_rightfinger",
name="rfingers_view",
reset_xform_properties=False,
)
def initialize(self, physics_sim_view):
super().initialize(physics_sim_view)
self._gripper_indices = [self.get_dof_index("panda_finger_joint1"), self.get_dof_index("panda_finger_joint2")]
@property
def gripper_indices(self):
return self._gripper_indices
| 1,241 | Python | 32.567567 | 120 | 0.637389 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/views/guarddog_view.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Optional
from omni.isaac.core.articulations import ArticulationView
from omni.isaac.core.prims import RigidPrimView
class GuarddogView(ArticulationView):
def __init__(
self,
prim_paths_expr: str,
name: Optional[str] = "GuarddogView",
track_contact_forces=False,
prepare_contact_sensors=False,
) -> None:
"""[summary]"""
super().__init__(prim_paths_expr=prim_paths_expr, name=name, reset_xform_properties=False)
self._knees = RigidPrimView(
prim_paths_expr="/World/envs/.*/Guarddog/.*_Thigh",
name="knees_view",
reset_xform_properties=False,
track_contact_forces=track_contact_forces,
prepare_contact_sensors=prepare_contact_sensors,
)
self._base = RigidPrimView(
prim_paths_expr="/World/envs/.*/Guarddog/Body",
name="base_view",
reset_xform_properties=False,
track_contact_forces=track_contact_forces,
prepare_contact_sensors=prepare_contact_sensors,
)
def get_knee_transforms(self):
return self._knees.get_world_poses()
def is_knee_below_threshold(self, threshold, ground_heights=None):
knee_pos, _ = self._knees.get_world_poses()
knee_heights = knee_pos.view((-1, 4, 3))[:, :, 2]
if ground_heights is not None:
knee_heights -= ground_heights
return (
(knee_heights[:, 0] < threshold)
| (knee_heights[:, 1] < threshold)
| (knee_heights[:, 2] < threshold)
| (knee_heights[:, 3] < threshold)
)
def is_base_below_threshold(self, threshold, ground_heights):
base_pos, _ = self.get_world_poses()
base_heights = base_pos[:, 2]
base_heights -= ground_heights
return base_heights[:] < threshold
| 3,441 | Python | 41.493827 | 98 | 0.679163 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/views/factory_franka_view.py | from typing import Optional
from omni.isaac.core.articulations import ArticulationView
from omni.isaac.core.prims import RigidPrimView
class FactoryFrankaView(ArticulationView):
def __init__(
self,
prim_paths_expr: str,
name: Optional[str] = "FactoryFrankaView",
) -> None:
"""Initialize articulation view."""
super().__init__(
prim_paths_expr=prim_paths_expr, name=name, reset_xform_properties=False
)
self._hands = RigidPrimView(
prim_paths_expr="/World/envs/.*/franka/panda_hand",
name="hands_view",
reset_xform_properties=False,
)
self._lfingers = RigidPrimView(
prim_paths_expr="/World/envs/.*/franka/panda_leftfinger",
name="lfingers_view",
reset_xform_properties=False,
track_contact_forces=True,
)
self._rfingers = RigidPrimView(
prim_paths_expr="/World/envs/.*/franka/panda_rightfinger",
name="rfingers_view",
reset_xform_properties=False,
track_contact_forces=True,
)
self._fingertip_centered = RigidPrimView(
prim_paths_expr="/World/envs/.*/franka/panda_fingertip_centered",
name="fingertips_view",
reset_xform_properties=False,
)
def initialize(self, physics_sim_view):
"""Initialize physics simulation view."""
super().initialize(physics_sim_view)
| 1,488 | Python | 31.369565 | 84 | 0.598118 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/views/anymal_view.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Optional
from omni.isaac.core.articulations import ArticulationView
from omni.isaac.core.prims import RigidPrimView
class AnymalView(ArticulationView):
def __init__(
self,
prim_paths_expr: str,
name: Optional[str] = "AnymalView",
track_contact_forces=False,
prepare_contact_sensors=False,
) -> None:
"""[summary]"""
super().__init__(prim_paths_expr=prim_paths_expr, name=name, reset_xform_properties=False)
self._knees = RigidPrimView(
prim_paths_expr="/World/envs/.*/anymal/.*_THIGH",
name="knees_view",
reset_xform_properties=False,
track_contact_forces=track_contact_forces,
prepare_contact_sensors=prepare_contact_sensors,
)
self._base = RigidPrimView(
prim_paths_expr="/World/envs/.*/anymal/base",
name="base_view",
reset_xform_properties=False,
track_contact_forces=track_contact_forces,
prepare_contact_sensors=prepare_contact_sensors,
)
def get_knee_transforms(self):
return self._knees.get_world_poses()
def is_knee_below_threshold(self, threshold, ground_heights=None):
knee_pos, _ = self._knees.get_world_poses()
knee_heights = knee_pos.view((-1, 4, 3))[:, :, 2]
if ground_heights is not None:
knee_heights -= ground_heights
return (
(knee_heights[:, 0] < threshold)
| (knee_heights[:, 1] < threshold)
| (knee_heights[:, 2] < threshold)
| (knee_heights[:, 3] < threshold)
)
def is_base_below_threshold(self, threshold, ground_heights):
base_pos, _ = self.get_world_poses()
base_heights = base_pos[:, 2]
base_heights -= ground_heights
return base_heights[:] < threshold
| 3,433 | Python | 41.395061 | 98 | 0.678415 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/views/quadcopter_view.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Optional
from omni.isaac.core.articulations import ArticulationView
from omni.isaac.core.prims import RigidPrimView
class QuadcopterView(ArticulationView):
def __init__(self, prim_paths_expr: str, name: Optional[str] = "QuadcopterView") -> None:
"""[summary]"""
super().__init__(prim_paths_expr=prim_paths_expr, name=name, reset_xform_properties=False)
self.rotors = RigidPrimView(
prim_paths_expr=f"/World/envs/.*/Quadcopter/rotor[0-3]", name="rotors_view", reset_xform_properties=False
)
| 2,121 | Python | 47.227272 | 117 | 0.759547 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/views/allegro_hand_view.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Optional
import torch
from omni.isaac.core.articulations import ArticulationView
from omni.isaac.core.prims import RigidPrimView
class AllegroHandView(ArticulationView):
def __init__(
self,
prim_paths_expr: str,
name: Optional[str] = "AllegroHandView",
) -> None:
super().__init__(prim_paths_expr=prim_paths_expr, name=name, reset_xform_properties=False)
self._actuated_dof_indices = list()
@property
def actuated_dof_indices(self):
return self._actuated_dof_indices
def initialize(self, physics_sim_view):
super().initialize(physics_sim_view)
self._actuated_dof_indices = [i for i in range(self.num_dof)]
| 2,275 | Python | 41.148147 | 98 | 0.74989 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/views/crazyflie_view.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Optional
from omni.isaac.core.articulations import ArticulationView
from omni.isaac.core.prims import RigidPrimView
class CrazyflieView(ArticulationView):
def __init__(self, prim_paths_expr: str, name: Optional[str] = "CrazyflieView") -> None:
"""[summary]"""
super().__init__(
prim_paths_expr=prim_paths_expr,
name=name,
)
self.physics_rotors = [
RigidPrimView(prim_paths_expr=f"/World/envs/.*/Crazyflie/m{i}_prop", name=f"m{i}_prop_view")
for i in range(1, 5)
]
| 2,140 | Python | 42.693877 | 104 | 0.737383 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/omniisaacgymenvs/robots/articulations/views/ingenuity_view.py | # Copyright (c) 2018-2022, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import Optional
from omni.isaac.core.articulations import ArticulationView
from omni.isaac.core.prims import RigidPrimView
class IngenuityView(ArticulationView):
def __init__(self, prim_paths_expr: str, name: Optional[str] = "IngenuityView") -> None:
"""[summary]"""
super().__init__(prim_paths_expr=prim_paths_expr, name=name, reset_xform_properties=False)
self.physics_rotors = [
RigidPrimView(
prim_paths_expr=f"/World/envs/.*/Ingenuity/rotor_physics_{i}",
name=f"physics_rotor_{i}_view",
reset_xform_properties=False,
)
for i in range(2)
]
self.visual_rotors = [
RigidPrimView(
prim_paths_expr=f"/World/envs/.*/Ingenuity/rotor_visual_{i}",
name=f"visual_rotor_{i}_view",
reset_xform_properties=False,
)
for i in range(2)
]
| 2,524 | Python | 42.534482 | 98 | 0.70206 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/docs/release_notes.md | Release Notes
=============
2023.1.1a - March 14, 2024
--------------------------
Fixes
-----
- Add workaround for nucleus hang issue on startup
- Fix USD update flags being reset after creating new stage. This should fix the long hang when running the Humanoid environment with `headless=False`.
Known Issues
------------
- A segmentation fault may occasionally occur at the end of a training run. This does not prevent the training from completing successfully.
2023.1.1 - December 12, 2023
----------------------------
Additions
---------
- Add support for viewport recording during training/inferencing using gym wrapper class `RecordVideo`
- Add `enable_recording`, `recording_interval`, `recording_length`, `recording_fps`, and `recording_dir` arguments to config/command-line for video recording
- Add `moviepy` as dependency for video recording
- Add video tutorial for extension workflow, available at [docs/framework/extension_workflow.md](docs/framework/extension_workflow.md)
- Add camera clipping for CartpoleCamera to avoid seeing other environments in the background
Changes
-------
- Use rl_device for sampling random policy (https://github.com/NVIDIA-Omniverse/OmniIsaacGymEnvs/pull/51)
- Add FPS printouts for random policy
- Use absolute path for default checkpoint folder for consistency between Python and extension workflows
- Change camera creation API in CartpoleCamera to use USD APIs instead of `rep.create`
Fixes
-----
- Fix missing device in warp kernel launch for Ant and Humanoid
- Fix typo for velocity iteration (https://github.com/NVIDIA-Omniverse/OmniIsaacGymEnvs/pull/111)
- Clean up private variable access in task classes in favour of property getters
- Clean up private variable access in extension.py in favour of setter methods
- Unregister replicator in extension workflow on training completion to allow for restart
2023.1.0b - November 02, 2023
-----------------------------
Changes
-------
- Update docker scripts to Isaac Sim docker image 2023.1.0-hotfix.1
- Use omniisaacgymenvs module root for app file parsing
- Update FrankaDeformable physics dt for better training stability
Fixes
-----
- Fix CartpoleCamera num_observations value
- Fix missing import in startup randomization for mass and density
2023.1.0a - October 20, 2023
----------------------------
Fixes
-----
- Fix extension loading error in camera app file
2023.1.0 - October 18, 2023
---------------------------
Additions
---------
- Add support for Warp backend task implementation
- Add Warp-based RL examples: Cartpole, Ant, Humanoid
- Add new Factory environments for place and screw: FactoryTaskNutBoltPlace and FactoryTaskNutBoltScrew
- Add new camera-based Cartpole example: CartpoleCamera
- Add new deformable environment showing Franka picking up a deformable tube: FrankaDeformable
- Add support for running OIGE as an extension in Isaac Sim
- Add options to filter collisions between environments and specify global collision filter paths to `RLTask.set_to_scene()`
- Add multinode training support
- Add dockerfile with OIGE
- Add option to select kit app file from command line argument `kit_app`
- Add `rendering_dt` parameter to the task config file for setting rendering dt. Defaults to the same value as the physics dt.
Changes
-------
- `use_flatcache` flag has been renamed to `use_fabric`
- Update hydra-core version to 1.3.2, omegaconf version to 2.3.0
- Update rlgames to version 1.6.1.
- The `get_force_sensor_forces` API for articulations is now deprecated and replaced with `get_measured_joint_forces`
- Remove unnecessary cloning of buffers in VecEnv classes
- Only enable omni.replicator.isaac when domain randomization or cameras are enabled
- The multi-threaded launch script `rlgames_train_mt.py` has been re-designed to support the extension workflow. This script can no longer be used to launch a training run from python. Please use `rlgames_train.py` instead.
- Restructured environments to support the new extension-based workflow
- Add async workflow to factory pick environment to support extension-based workflow
- Update docker scripts with cache directories
Fixes
-----
- Fix errors related to setting velocities to kinematic markers in Ingenuity and Quadcopter environments
- Fix contact-related issues with quadruped assets
- Fix errors in physics APIs when returning empty tensors
- Fix orientation correctness issues when using some assets with omni.isaac.core. Additional orientations applied to compensate for the error are no longer required (i.e. ShadowHand)
- Updated the deprecated config name `seq_len` used with RNN networks to `seq_length`
2022.2.1 - March 16, 2023
-------------------------
Additions
---------
- Add FactoryTaskNutBoltPick example
- Add Ant and Humanoid SAC training examples
- Add multi-GPU support for training
- Add utility scripts for launching Isaac Sim docker with OIGE
- Add support for livestream through the Omniverse Streaming Client
Changes
-------
- Change rigid body fixed_base option to make_kinematic, avoiding creation of unnecessary articulations
- Update ShadowHand, Ingenuity, Quadcopter and Crazyflie marker objects to use kinematics
- Update ShadowHand GPU buffer parameters
- Disable PyTorch nvFuser for better performance
- Enable viewport and replicator extensions dynamically to maintain order of extension startup
- Separate app files for headless environments with rendering (requires Isaac Sim update)
- Update rl-games to v1.6.0
Fixes
-----
- Fix material property randomization at run-time, including friction and restitution (requires Isaac Sim update)
- Fix a bug in contact reporting API where incorrect values were being reported (requires Isaac Sim update)
- Enable render flag in Isaac Sim when enable_cameras is set to True
- Add root pose and velocity reset to BallBalance environment
2.0.0 - December 15, 2022
-------------------------
Additions
---------
- Update to Viewport 2.0
- Allow for runtime mass randomization on GPU pipeline
- Add runtime mass randomization to ShadowHand environments
- Introduce `disable_contact_processing` simulation parameter for faster contact processing
- Use physics replication for cloning by default for faster load time
Changes
-------
- Update AnymalTerrain environment to use contact forces
- Update Quadcopter example to apply local forces
- Update training parameters for ShadowHandOpenAI_FF environment
- Rename rlgames_play.py to rlgames_demo.py
Fixes
-----
- Remove fix_base option from articulation configs
- Fix in_hand_manipulation random joint position sampling on reset
- Fix mass and density randomization in MT training script
- Fix actions/observations noise randomization in MT training script
- Fix random seed when domain randomization is enabled
- Check whether simulation is running before executing pre_physics_step logic
1.1.0 - August 22, 2022
-----------------------
Additions
---------
- Additional examples: Anymal, AnymalTerrain, BallBalance, Crazyflie, FrankaCabinet, Ingenuity, Quadcopter
- Add OpenAI variations for Feed-Forward and LSTM networks for ShadowHand
- Add domain randomization framework using `omni.replicator.isaac`
- Add AnymalTerrain interactable demo
- Automatically disable `omni.kit.window.viewport` and `omni.physx.flatcache` extensions in headless mode to improve start-up load time
- Introduce `reset_xform_properties` flag for initializing Views of cloned environments to reduce load time
- Add WandB support
- Update RL-Games version to 1.5.2
Fixes
-----
- Correctly sets simulation device for GPU simulation
- Fix omni.client import order
- Fix episode length reset condition for ShadowHand and AllegroHand
1.0.0 - June 03, 2022
----------------------
- Initial release for RL examples with Isaac Sim
- Examples provided: AllegroHand, Ant, Cartpole, Humanoid, ShadowHand | 7,825 | Markdown | 40.850267 | 223 | 0.764089 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/docs/examples/training_with_camera.md | ## Reinforcement Learning with Vision in the Loop
Some reinforcement learning tasks can benefit from having image data in the pipeline by collecting sensor data from cameras to use as observations. However, high-fidelity rendering can be expensive when scaled up to thousands of environments during training.
Although Isaac Sim does not currently have the capability to scale to thousands of environments, we are continually working on improvements to reach that goal. As a starting point, we are providing a simple example showcasing a proof of concept for reinforcement learning with vision in the loop.
### CartpoleCamera [cartpole_camera.py](../../omniisaacgymenvs/tasks/cartpole_camera.py)
As an example showcasing the possiblity of reinforcmenet learning with vision in the loop, we provide a variation of the Cartpole task, which uses RGB image data as observations. This example
can be launched with command line argument `task=CartpoleCamera`.
Config files used for this task are:
- **Task config**: [CartpoleCamera.yaml](../../omniisaacgymenvs/cfg/task/CartpoleCamera.yaml)
- **rl_games training config**: [CartpoleCameraPPO.yaml](../../omniisaacgymenvs/cfg/train/CartpoleCameraPPO.yaml)
### Working with Cameras
We have provided an individual app file `apps/omni.isaac.sim.python.gym.camera.kit`, designed specifically for vision-based RL tasks. This app file provides the necessary settings to enable multiple cameras to be rendered each frame. Additional settings are also applied to increase performance when rendering cameras across multiple environments.
In addition, the following settings can be added to the app file to increase performance at the cost of some accuracy. With these flags set to `false`, data collected from the cameras may lag by one to two frames.
```
app.renderer.waitIdle=false
app.hydraEngine.waitIdle=false
```
We can also render in white-mode by adding the following line:
```
rtx.debugMaterialType=0
```
### Config Settings
For rendering to occur during training, tasks that use camera rendering must set the `enable_cameras` flag to `True` in the task config file, under the `sim` section. When `enable_cameras` is set to `True`, the `omni.isaac.sim.python.gym.camera.kit` app file is used automatically.
In addition, the `rendering_dt` parameter can be used to specify the desired rendering frequency. Similar to `dt` for the physics simulation frequency, `rendering_dt` specifies the amount of time in seconds between rendering steps. The `rendering_dt` should be greater than or equal to the physics `dt`, and a multiple of it. Note that specifying the `controlFrequencyInv` flag reduces the control frequency relative to the physics simulation frequency.
For example, assume control frequency is 30hz, physics simulation frequency is 120 hz, and rendering frequency is 10hz. In the task config file, we can set `dt: 1/120`, `controlFrequencyInv: 4`, such that control is applied every 4 physics steps, and `rendering_dt: 1/10`. In this case, render data will only be updated once every 12 physics steps. Note that both `dt` and `rendering_dt` parameters are under the `sim` section of the config file, while `controlFrequencyInv` is under the `env` section.
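Concretely, the example above corresponds to entries along these lines (a minimal sketch; surrounding keys follow the task config layout):

```yaml
sim:
  dt: 0.00833 # 1/120 s -> physics at 120 hz
  rendering_dt: 0.1 # 1/10 s -> rendering at 10 hz
env:
  controlFrequencyInv: 4 # control applied every 4 physics steps -> 30 hz
```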
### Environment Setup
To set up a task for vision-based RL, we will first need to add a camera to each environment in the scene and wrap it in a Replicator `render_product` to use the vectorized rendering API available in Replicator.
This can be done with the following code in `set_up_scene`:
```python
self.render_products = []
env_pos = self._env_pos.cpu()
for i in range(self._num_envs):
camera = self.rep.create.camera(
position=(-4.2 + env_pos[i][0], env_pos[i][1], 3.0), look_at=(env_pos[i][0], env_pos[i][1], 2.55))
render_product = self.rep.create.render_product(camera, resolution=(self.camera_width, self.camera_height))
self.render_products.append(render_product)
```
Next, we need to initialize Replicator and the PytorchListener, which will be used to collect rendered data.
```python
# start replicator to capture image data
self.rep.orchestrator._orchestrator._is_started = True
# initialize pytorch writer for vectorized collection
self.pytorch_listener = self.PytorchListener()
self.pytorch_writer = self.rep.WriterRegistry.get("PytorchWriter")
self.pytorch_writer.initialize(listener=self.pytorch_listener, device="cuda")
self.pytorch_writer.attach(self.render_products)
```
Then, we can simply collect rendered data from each environment using a single API call:
```python
# retrieve RGB data from all render products
images = self.pytorch_listener.get_rgb_data()
```
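The returned batch can then be written into the observation buffer. A minimal sketch, assuming the listener returns a `(num_envs, 3, height, width)` uint8 tensor as in the Cartpole camera task:

```python
# normalize the collected RGB batch into the observation buffer
if images is not None:  # the listener may return None before the first frame is available (assumption)
    self.obs_buf = images.clone().float() / 255.0  # scale uint8 RGB to [0, 1]
```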
## Reinforcement Learning Examples
We introduce the following reinforcement learning examples that are implemented using
Isaac Sim's RL framework.
Pre-trained checkpoints can be found on the Nucleus server. To set up localhost, please refer to the [Isaac Sim installation guide](https://docs.omniverse.nvidia.com/isaacsim/latest/installation/install_workstation.html).
*Note: All commands should be executed from `omniisaacgymenvs/omniisaacgymenvs`.*
- [Reinforcement Learning Examples](#reinforcement-learning-examples)
- [Cartpole cartpole.py](#cartpole-cartpolepy)
- [Ant ant.py](#ant-antpy)
- [Humanoid humanoid.py](#humanoid-humanoidpy)
- [Shadow Hand Object Manipulation shadow_hand.py](#shadow-hand-object-manipulation-shadow_handpy)
- [OpenAI Variant](#openai-variant)
- [LSTM Training Variant](#lstm-training-variant)
- [Allegro Hand Object Manipulation allegro_hand.py](#allegro-hand-object-manipulation-allegro_handpy)
- [ANYmal anymal.py](#anymal-anymalpy)
- [Anymal Rough Terrain anymal_terrain.py](#anymal-rough-terrain-anymal_terrainpy)
- [NASA Ingenuity Helicopter ingenuity.py](#nasa-ingenuity-helicopter-ingenuitypy)
- [Quadcopter quadcopter.py](#quadcopter-quadcopterpy)
- [Crazyflie crazyflie.py](#crazyflie-crazyfliepy)
- [Ball Balance ball_balance.py](#ball-balance-ball_balancepy)
- [Franka Cabinet franka_cabinet.py](#franka-cabinet-franka_cabinetpy)
- [Franka Deformable franka_deformable.py](#franka-deformablepy)
- [Factory: Fast Contact for Robotic Assembly](#factory-fast-contact-for-robotic-assembly)
### Cartpole [cartpole.py](../../omniisaacgymenvs/tasks/cartpole.py)
Cartpole is a simple example that demonstrates getting and setting usage of DOF states using
`ArticulationView` from `omni.isaac.core`. The goal of this task is to move a cart horizontally
such that the pole, which is connected to the cart via a revolute joint, stays upright.
Joint positions and joint velocities are retrieved using `get_joint_positions` and
`get_joint_velocities` respectively, which are required in computing observations. Actions are
applied onto the cartpoles via `set_joint_efforts`. Cartpoles are reset by using `set_joint_positions`
and `set_joint_velocities`.
Training can be launched with command line argument `task=Cartpole`.
Training using the Warp backend can be launched with `task=Cartpole warp=True`.
Running inference with pre-trained model can be launched with command line argument `task=Cartpole test=True checkpoint=omniverse://localhost/NVIDIA/Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints/cartpole.pth`
Config files used for this task are:
- **Task config**: [Cartpole.yaml](../../omniisaacgymenvs/cfg/task/Cartpole.yaml)
- **rl_games training config**: [CartpolePPO.yaml](../../omniisaacgymenvs/cfg/train/CartpolePPO.yaml)
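For reference, a full launch might look like the following sketch (assuming the standard `rlgames_train.py` entry point, with `PYTHON_PATH` pointing to Isaac Sim's bundled python):

```bash
PYTHON_PATH scripts/rlgames_train.py task=Cartpole headless=True
```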
#### CartpoleCamera [cartpole_camera.py](../../omniisaacgymenvs/tasks/cartpole_camera.py)
A variation of the Cartpole task showcases the usage of RGB image data as observations. This example
can be launched with command line argument `task=CartpoleCamera`. Note that to use camera data as
observations, `enable_cameras` must be set to `True` in the task config file. In addition, the example must be run with the `omni.isaac.sim.python.gym.camera.kit` app file provided under `apps`, which applies necessary settings to enable camera training. By default, this app file will be used automatically when `enable_cameras` is set to `True`. Due to this limitation, this
example is currently not available in the extension workflow.
Config files used for this task are:
- **Task config**: [CartpoleCamera.yaml](../../omniisaacgymenvs/cfg/task/CartpoleCamera.yaml)
- **rl_games training config**: [CartpoleCameraPPO.yaml](../../omniisaacgymenvs/cfg/train/CartpoleCameraPPO.yaml)
For more details on training with camera data, please visit [here](training_with_camera.md).
<img src="https://user-images.githubusercontent.com/34286328/171454189-6afafbff-bb61-4aac-b518-24646007cb9f.gif" width="300" height="150"/>
### Ant [ant.py](../../omniisaacgymenvs/tasks/ant.py)
Ant is an example of a simple locomotion task. The goal of this task is to train
quadruped robots (ants) to run forward as fast as possible. This example inherits
from [LocomotionTask](../../omniisaacgymenvs/tasks/shared/locomotion.py),
which is a shared class between this example and the humanoid example; this simplifies
implementations for both environments since they compute rewards, observations,
and resets in a similar manner. This framework allows us to easily switch between
robots used in the task.
The Ant task includes more examples of utilizing `ArticulationView` from `omni.isaac.core`, which
provides various functions to get and set both DOF states and articulation root states
in a tensorized fashion across all of the actors in the environment. `get_world_poses`,
`get_linear_velocities`, and `get_angular_velocities`, can be used to determine whether the
ants have been moving towards the desired direction and whether they have fallen or flipped over.
Actions are applied onto the ants via `set_joint_efforts`, which moves the ants by setting
torques to the DOFs.
Note that the previously used force sensors and `get_force_sensor_forces` API are now deprecated.
Force sensors can now be retrieved directly using `get_measured_joint_forces` from `ArticulationView`.
Training with PPO can be launched with command line argument `task=Ant`.
Training with SAC can be launched with command line arguments `task=AntSAC train=AntSAC`.
Training using the Warp backend can be launched with `task=Ant warp=True`.
Running inference with pre-trained model can be launched with command line argument `task=Ant test=True checkpoint=omniverse://localhost/NVIDIA/Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints/ant.pth`
Config files used for this task are:
- **PPO task config**: [Ant.yaml](../../omniisaacgymenvs/cfg/task/Ant.yaml)
- **rl_games PPO training config**: [AntPPO.yaml](../../omniisaacgymenvs/cfg/train/AntPPO.yaml)
<img src="https://user-images.githubusercontent.com/34286328/171454182-0be1b830-bceb-4cfd-93fb-e1eb8871ec68.gif" width="300" height="150"/>
### Humanoid [humanoid.py](../../omniisaacgymenvs/tasks/humanoid.py)
Humanoid is another environment that uses
[LocomotionTask](../../omniisaacgymenvs/tasks/shared/locomotion.py). It is conceptually
very similar to the Ant example, where the goal for the humanoid is to run forward
as fast as possible.
Training can be launched with command line argument `task=Humanoid`.
Training with SAC can be launched with command line arguments `task=HumanoidSAC train=HumanoidSAC`.
Training using the Warp backend can be launched with `task=Humanoid warp=True`.
Running inference with pre-trained model can be launched with command line argument `task=Humanoid test=True checkpoint=omniverse://localhost/NVIDIA/Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints/humanoid.pth`
Config files used for this task are:
- **PPO task config**: [Humanoid.yaml](../../omniisaacgymenvs/cfg/task/Humanoid.yaml)
- **rl_games PPO training config**: [HumanoidPPO.yaml](../../omniisaacgymenvs/cfg/train/HumanoidPPO.yaml)
<img src="https://user-images.githubusercontent.com/34286328/171454193-e027885d-1510-4ef4-b838-06b37f70c1c7.gif" width="300" height="150"/>
### Shadow Hand Object Manipulation [shadow_hand.py](../../omniisaacgymenvs/tasks/shadow_hand.py)
The Shadow Hand task is an example of a challenging dexterity manipulation task with complex contact
dynamics. It resembles OpenAI's [Learning Dexterity](https://openai.com/blog/learning-dexterity/)
project and [Robotics Shadow Hand](https://github.com/openai/gym/tree/v0.21.0/gym/envs/robotics)
training environments. The goal of this task is to orient the object in the robot hand to match
a random target orientation, which is visually displayed by a goal object in the scene.
This example inherits from [InHandManipulationTask](../../omniisaacgymenvs/tasks/shared/in_hand_manipulation.py),
which is a shared class between this example and the Allegro Hand example. The idea of
this shared [InHandManipulationTask](../../omniisaacgymenvs/tasks/shared/in_hand_manipulation.py) class
is similar to that of the [LocomotionTask](../../omniisaacgymenvs/tasks/shared/locomotion.py);
since the Shadow Hand example and the Allegro Hand example only differ by the robot hand used
in the task, using this shared class simplifies implementation across the two.
In this example, motion of the hand is controlled using position targets with `set_joint_position_targets`.
The object and the goal object are reset using `set_world_poses`; their states are retrieved via
`get_world_poses` for computing observations. It is worth noting that the Shadow Hand model in
this example also demonstrates the use of tendons, which are imported using the `omni.isaac.mjcf` extension.
Training can be launched with command line argument `task=ShadowHand`.
Training with Domain Randomization can be launched with command line argument `task.domain_randomization.randomize=True`.
For best training results with DR, use `num_envs=16384`.
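Combining the flags above, a full training command might look like this sketch (entry point assumed as in the other examples):

```bash
PYTHON_PATH scripts/rlgames_train.py task=ShadowHand task.domain_randomization.randomize=True num_envs=16384 headless=True
```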
Running inference with pre-trained model can be launched with command line argument `task=ShadowHand test=True checkpoint=omniverse://localhost/NVIDIA/Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints/shadow_hand.pth`
Config files used for this task are:
- **Task config**: [ShadowHand.yaml](../../omniisaacgymenvs/cfg/task/ShadowHand.yaml)
- **rl_games training config**: [ShadowHandPPO.yaml](../../omniisaacgymenvs/cfg/train/ShadowHandPPO.yaml)
#### OpenAI Variant
In addition to the basic version of this task, there is an additional variant matching OpenAI's
[Learning Dexterity](https://openai.com/blog/learning-dexterity/) project. This variant uses the **openai**
observations in the policy network, but asymmetric observations of the **full_state** in the value network.
This can be launched with command line argument `task=ShadowHandOpenAI_FF`.
Running inference with pre-trained model can be launched with command line argument `task=ShadowHandOpenAI_FF test=True checkpoint=omniverse://localhost/NVIDIA/Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints/shadow_hand_openai_ff.pth`
Config files used for this are:
- **Task config**: [ShadowHandOpenAI_FF.yaml](../../omniisaacgymenvs/cfg/task/ShadowHandOpenAI_FF.yaml)
- **rl_games training config**: [ShadowHandOpenAI_FFPPO.yaml](../../omniisaacgymenvs/cfg/train/ShadowHandOpenAI_FFPPO.yaml).
#### LSTM Training Variant
This variant uses LSTM policy and value networks instead of feed forward networks, and also asymmetric
LSTM critic designed for the OpenAI variant of the task. This can be launched with command line argument
`task=ShadowHandOpenAI_LSTM`.
Running inference with pre-trained model can be launched with command line argument `task=ShadowHandOpenAI_LSTM test=True checkpoint=omniverse://localhost/NVIDIA/Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints/shadow_hand_openai_lstm.pth`
Config files used for this are:
- **Task config**: [ShadowHandOpenAI_LSTM.yaml](../../omniisaacgymenvs/cfg/task/ShadowHandOpenAI_LSTM.yaml)
- **rl_games training config**: [ShadowHandOpenAI_LSTMPPO.yaml](../../omniisaacgymenvs/cfg/train/ShadowHandOpenAI_LSTMPPO.yaml).
<img src="https://user-images.githubusercontent.com/34286328/171454160-8cb6739d-162a-4c84-922d-cda04382633f.gif" width="300" height="150"/>
### Allegro Hand Object Manipulation [allegro_hand.py](../../omniisaacgymenvs/tasks/allegro_hand.py)
This example performs the same object orientation task as the Shadow Hand example,
but using the Allegro hand instead of the Shadow hand.
Training can be launched with command line argument `task=AllegroHand`.
Running inference with pre-trained model can be launched with command line argument `task=AllegroHand test=True checkpoint=omniverse://localhost/NVIDIA/Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints/allegro_hand.pth`
Config files used for this task are:
- **Task config**: [AllegroHand.yaml](../../omniisaacgymenvs/cfg/task/AllegroHand.yaml)
- **rl_games training config**: [AllegroHandPPO.yaml](../../omniisaacgymenvs/cfg/train/AllegroHandPPO.yaml)
<img src="https://user-images.githubusercontent.com/34286328/171454176-ce08f6d0-3087-4ecc-9273-7d30d8f73f6d.gif" width="300" height="150"/>
### ANYmal [anymal.py](../../omniisaacgymenvs/tasks/anymal.py)
This example trains a model of the ANYmal quadruped robot from ANYbotics
to follow randomly chosen x, y, and yaw target velocities.
Training can be launched with command line argument `task=Anymal`.
Running inference with pre-trained model can be launched with command line argument `task=Anymal test=True checkpoint=omniverse://localhost/NVIDIA/Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints/anymal.pth`
Config files used for this task are:
- **Task config**: [Anymal.yaml](../../omniisaacgymenvs/cfg/task/Anymal.yaml)
- **rl_games training config**: [AnymalPPO.yaml](../../omniisaacgymenvs/cfg/train/AnymalPPO.yaml)
<img src="https://user-images.githubusercontent.com/34286328/184168200-152567a8-3354-4947-9ae0-9443a56fee4c.gif" width="300" height="150"/>
### Anymal Rough Terrain [anymal_terrain.py](../../omniisaacgymenvs/tasks/anymal_terrain.py)
A more complex version of the above Anymal environment that supports
traversing various forms of rough terrain.
Training can be launched with command line argument `task=AnymalTerrain`.
Running inference with pre-trained model can be launched with command line argument `task=AnymalTerrain test=True checkpoint=omniverse://localhost/NVIDIA/Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints/anymal_terrain.pth`
- **Task config**: [AnymalTerrain.yaml](../../omniisaacgymenvs/cfg/task/AnymalTerrain.yaml)
- **rl_games training config**: [AnymalTerrainPPO.yaml](../../omniisaacgymenvs/cfg/train/AnymalTerrainPPO.yaml)
**Note** during test time use the last weights generated, rather than the usual best weights.
Due to curriculum training, the reward goes down as the task gets more challenging, so the best weights
do not typically correspond to the best outcome.
**Note** if you use the ANYmal rough terrain environment in your work, please ensure you cite the following work:
```
@misc{rudin2021learning,
title={Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning},
author={Nikita Rudin and David Hoeller and Philipp Reist and Marco Hutter},
year={2021},
  journal = {arXiv preprint arXiv:2109.11978}
}
```
**Note** The OmniIsaacGymEnvs implementation slightly differs from the implementation used in the paper above, which also
uses a different RL library and PPO implementation. The original implementation is made available [here](https://github.com/leggedrobotics/legged_gym). Results reported in the Isaac Gym technical paper are based on that repository, not this one.
<img src="https://user-images.githubusercontent.com/34286328/184170040-3f76f761-e748-452e-b8c8-3cc1c7c8cb98.gif" width="300" height="150"/>
### NASA Ingenuity Helicopter [ingenuity.py](../../omniisaacgymenvs/tasks/ingenuity.py)
This example trains a simplified model of NASA's Ingenuity helicopter to navigate to a moving target.
It showcases the use of velocity tensors and applying force vectors to rigid bodies.
Note that we are applying force directly to the chassis, rather than simulating aerodynamics.
This example also demonstrates using different values for gravitational forces.
Ingenuity Helicopter visual 3D Model courtesy of NASA: https://mars.nasa.gov/resources/25043/mars-ingenuity-helicopter-3d-model/.
Training can be launched with command line argument `task=Ingenuity`.
Running inference with pre-trained model can be launched with command line argument `task=Ingenuity test=True checkpoint=omniverse://localhost/NVIDIA/Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints/ingenuity.pth`
Config files used for this task are:
- **Task config**: [Ingenuity.yaml](../../omniisaacgymenvs/cfg/task/Ingenuity.yaml)
- **rl_games training config**: [IngenuityPPO.yaml](../../omniisaacgymenvs/cfg/train/IngenuityPPO.yaml)
<img src="https://user-images.githubusercontent.com/34286328/184176312-df7d2727-f043-46e3-b537-48a583d321b9.gif" width="300" height="150"/>
### Quadcopter [quadcopter.py](../../omniisaacgymenvs/tasks/quadcopter.py)
This example trains a very simple quadcopter model to reach and hover near a fixed position.
Lift is achieved by applying thrust forces to the "rotor" bodies, which are modeled as flat cylinders.
In addition to thrust, the pitch and roll of each rotor is controlled using DOF position targets.
Training can be launched with command line argument `task=Quadcopter`.
Running inference with pre-trained model can be launched with command line argument `task=Quadcopter test=True checkpoint=omniverse://localhost/NVIDIA/Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints/quadcopter.pth`
Config files used for this task are:
- **Task config**: [Quadcopter.yaml](../../omniisaacgymenvs/cfg/task/Quadcopter.yaml)
- **rl_games training config**: [QuadcopterPPO.yaml](../../omniisaacgymenvs/cfg/train/QuadcopterPPO.yaml)
<img src="https://user-images.githubusercontent.com/34286328/184178817-9c4b6b3c-c8a2-41fb-94be-cfc8ece51d5d.gif" width="300" height="150"/>
### Crazyflie [crazyflie.py](../../omniisaacgymenvs/tasks/crazyflie.py)
This example trains the Crazyflie drone model to hover near a fixed position. It is achieved by applying thrust forces to the four rotors.
Training can be launched with command line argument `task=Crazyflie`.
Running inference with pre-trained model can be launched with command line argument `task=Crazyflie test=True checkpoint=omniverse://localhost/NVIDIA/Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints/crazyflie.pth`
Config files used for this task are:
- **Task config**: [Crazyflie.yaml](../../omniisaacgymenvs/cfg/task/Crazyflie.yaml)
- **rl_games training config**: [CrazyfliePPO.yaml](../../omniisaacgymenvs/cfg/train/CrazyfliePPO.yaml)
<img src="https://user-images.githubusercontent.com/6352136/185715165-b430a0c7-948b-4dce-b3bb-7832be714c37.gif" width="300" height="150"/>
### Ball Balance [ball_balance.py](../../omniisaacgymenvs/tasks/ball_balance.py)
This example trains balancing tables to balance a ball on the table top.
This is a great example to showcase the use of force and torque sensors, as well as DOF states for the table and root states for the ball.
In this example, the three-legged table has a force sensor attached to each leg.
We use the force sensor APIs to collect force and torque data on the legs, which guide position target outputs produced by the policy.
Training can be launched with command line argument `task=BallBalance`.
Running inference with pre-trained model can be launched with command line argument `task=BallBalance test=True checkpoint=omniverse://localhost/NVIDIA/Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints/ball_balance.pth`
Config files used for this task are:
- **Task config**: [BallBalance.yaml](../../omniisaacgymenvs/cfg/task/BallBalance.yaml)
- **rl_games training config**: [BallBalancePPO.yaml](../../omniisaacgymenvs/cfg/train/BallBalancePPO.yaml)
<img src="https://user-images.githubusercontent.com/34286328/184172037-cdad9ee8-f705-466f-bbde-3caa6c7dea37.gif" width="300" height="150"/>
### Franka Cabinet [franka_cabinet.py](../../omniisaacgymenvs/tasks/franka_cabinet.py)
This Franka example demonstrates interaction between the Franka arm and a cabinet, as well as setting states of objects inside the drawer.
It also showcases control of the Franka arm using position targets.
In this example, we use DOF state tensors to retrieve the state of the Franka arm, as well as the state of the drawer on the cabinet.
Actions are applied as position targets to the Franka arm DOFs.
Training can be launched with command line argument `task=FrankaCabinet`.
Running inference with pre-trained model can be launched with command line argument `task=FrankaCabinet test=True checkpoint=omniverse://localhost/NVIDIA/Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints/franka_cabinet.pth`
Config files used for this task are:
- **Task config**: [FrankaCabinet.yaml](../../omniisaacgymenvs/cfg/task/FrankaCabinet.yaml)
- **rl_games training config**: [FrankaCabinetPPO.yaml](../../omniisaacgymenvs/cfg/train/FrankaCabinetPPO.yaml)
<img src="https://user-images.githubusercontent.com/34286328/184174894-03767aa0-936c-4bfe-bbe9-a6865f539bb4.gif" width="300" height="150"/>
### Franka Deformable [franka_deformable.py](../../omniisaacgymenvs/tasks/franka_deformable.py)
This Franka example demonstrates interaction between the Franka arm and a deformable tube, showcasing manipulation of deformable objects using nodal positions and velocities of the simulation mesh as observations.
Training can be launched with command line argument `task=FrankaDeformable`.
Running inference with pre-trained model can be launched with command line argument `task=FrankaDeformable test=True checkpoint=omniverse://localhost/NVIDIA/Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints/franka_deformable.pth`
Config files used for this task are:
- **Task config**: [FrankaDeformable.yaml](../../omniisaacgymenvs/cfg/task/FrankaDeformable.yaml)
- **rl_games training config**: [FrankaDeformablePPO.yaml](../../omniisaacgymenvs/cfg/train/FrankaDeformablePPO.yaml)
### Factory: Fast Contact for Robotic Assembly
We provide a set of Factory example tasks, [**FactoryTaskNutBoltPick**](../../omniisaacgymenvs/tasks/factory/factory_task_nut_bolt_pick.py), [**FactoryTaskNutBoltPlace**](../../omniisaacgymenvs/tasks/factory/factory_task_nut_bolt_place.py), and [**FactoryTaskNutBoltScrew**](../../omniisaacgymenvs/tasks/factory/factory_task_nut_bolt_screw.py).
`FactoryTaskNutBoltPick` can be executed with `python train.py task=FactoryTaskNutBoltPick`. This task trains a policy for the Pick task, a simplified version of the corresponding task in the Factory paper. The policy may take ~1 hour to achieve high success rates on a modern GPU.
- The general configuration file for the above task is [FactoryTaskNutBoltPick.yaml](../../omniisaacgymenvs/cfg/task/FactoryTaskNutBoltPick.yaml).
- The training configuration file for the above task is [FactoryTaskNutBoltPickPPO.yaml](../../omniisaacgymenvs/cfg/train/FactoryTaskNutBoltPickPPO.yaml).
Running inference with pre-trained model can be launched with command line argument `task=FactoryTaskNutBoltPick test=True checkpoint=omniverse://localhost/NVIDIA/Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints/factory_task_nut_bolt_pick.pth`
`FactoryTaskNutBoltPlace` can be executed with `python train.py task=FactoryTaskNutBoltPlace`. This task trains a policy for the Place task.
- The general configuration file for the above task is [FactoryTaskNutBoltPlace.yaml](../../omniisaacgymenvs/cfg/task/FactoryTaskNutBoltPlace.yaml).
- The training configuration file for the above task is [FactoryTaskNutBoltPlacePPO.yaml](../../omniisaacgymenvs/cfg/train/FactoryTaskNutBoltPlacePPO.yaml).
Running inference with pre-trained model can be launched with command line argument `task=FactoryTaskNutBoltPlace test=True checkpoint=omniverse://localhost/NVIDIA/Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints/factory_task_nut_bolt_place.pth`
`FactoryTaskNutBoltScrew` can be executed with `python train.py task=FactoryTaskNutBoltScrew`. This task trains a policy for the Screw task.
- The general configuration file for the above task is [FactoryTaskNutBoltScrew.yaml](../../omniisaacgymenvs/cfg/task/FactoryTaskNutBoltScrew.yaml).
- The training configuration file for the above task is [FactoryTaskNutBoltScrewPPO.yaml](../../omniisaacgymenvs/cfg/train/FactoryTaskNutBoltScrewPPO.yaml).
Running inference with pre-trained model can be launched with command line argument `task=FactoryTaskNutBoltScrew test=True checkpoint=omniverse://localhost/NVIDIA/Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints/factory_task_nut_bolt_screw.pth`
If you use the Factory simulation methods (e.g., SDF collisions, contact reduction) or Factory learning tools (e.g., assets, environments, or controllers) in your work, please cite the following paper:
```
@inproceedings{
narang2022factory,
author = {Yashraj Narang and Kier Storey and Iretiayo Akinola and Miles Macklin and Philipp Reist and Lukasz Wawrzyniak and Yunrong Guo and Adam Moravanszky and Gavriel State and Michelle Lu and Ankur Handa and Dieter Fox},
title = {Factory: Fast contact for robotic assembly},
booktitle = {Robotics: Science and Systems},
year = {2022}
}
```
Also note that our original formulations of SDF collisions and contact reduction were developed by [Macklin, et al.](https://dl.acm.org/doi/abs/10.1145/3384538) and [Moravanszky and Terdiman](https://scholar.google.com/scholar?q=Game+Programming+Gems+4%2C+chapter+Fast+Contact+Reduction+for+Dynamics+Simulation), respectively.
<img src="https://user-images.githubusercontent.com/6352136/205978286-fa2ae714-a3cb-4acd-9f5f-a467338a8bb3.gif"/>
## Transferring Policies from Isaac Gym Preview Releases
This section delineates some of the differences between the standalone
[Isaac Gym Preview Releases](https://developer.nvidia.com/isaac-gym) and
Isaac Sim reinforcement learning extensions, in hopes of facilitating the
process of transferring policies trained in the standalone preview releases
to Isaac Sim.
### Isaac Sim RL Extensions
Unlike the monolithic standalone Isaac Gym Preview Releases, Omniverse is
a highly modular system, with functionality split between various [Extensions](https://docs.omniverse.nvidia.com/extensions/latest/index.html).
The APIs used by typical robotics RL systems are split between a handful of
extensions in Isaac Sim. These include `omni.isaac.core`, which provides
tensorized access to physics simulation state as well as a task management
framework, the `omni.isaac.cloner` extension for creating many copies of
your environments, and the `omni.isaac.gym` extension for interfacing with
external RL training libraries.
For naming clarity, we'll refer collectively to the extensions used for RL
within Isaac Sim as the **Isaac Sim RL extensions**, in contrast with the
older **Isaac Gym Preview Releases**.
### Quaternion Convention
The Isaac Sim RL extensions use various classes and methods in `omni.isaac.core`,
which adopts `wxyz` as the quaternion convention. However, the quaternion
convention used in Isaac Gym Preview Releases is `xyzw`. Therefore, if a policy
trained in one of the Isaac Gym Preview Releases takes in quaternions as part
of its observations, remember to switch all quaternions to use the `xyzw` convention
in the observation buffer `self.obs_buf`. Similarly, please ensure all quaternions
are in `wxyz` before passing them in any of the utility functions in `omni.isaac.core`.
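A minimal sketch of such a conversion, assuming quaternions are stored along the last dimension of a tensor (helper names are illustrative):

```python
import torch

def wxyz_to_xyzw(q: torch.Tensor) -> torch.Tensor:
    # reorder (w, x, y, z) -> (x, y, z, w) before feeding an Isaac Gym Preview Release policy
    return q[..., [1, 2, 3, 0]]

def xyzw_to_wxyz(q: torch.Tensor) -> torch.Tensor:
    # reorder (x, y, z, w) -> (w, x, y, z) before calling omni.isaac.core utilities
    return q[..., [3, 0, 1, 2]]
```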
### Assets
Isaac Sim provides [URDF](https://docs.omniverse.nvidia.com/isaacsim/latest/advanced_tutorials/tutorial_advanced_import_urdf.html)
and [MJCF](https://docs.omniverse.nvidia.com/isaacsim/latest/advanced_tutorials/tutorial_advanced_import_mjcf.html) importers for translating URDF and MJCF assets into USD format.
Any robot or object assets must be in .usd, .usda, or .usdc format for Isaac Sim and Omniverse.
For more details on working with USD, please see https://docs.omniverse.nvidia.com/isaacsim/latest/reference_glossary.html#usd.
Importer tools are also available for other common geometry file formats, such as .obj, .fbx, and more.
Please see [Asset Importer](https://docs.omniverse.nvidia.com/extensions/latest/ext_asset-importer.html) for more details.
### Joint Order
Isaac Sim's `ArticulationView` in `omni.isaac.core` assumes a breadth-first
ordering for the joints in a given kinematic tree. Specifically, for the following
kinematic tree, the method `ArticulationView.get_joint_positions` returns a
tensor of shape `(number of articulations in the view, number of joints in the articulation)`.
Along the second dimension of this tensor, the values represent the articulation's joint positions
in the following order: `[Joint 1, Joint 2, Joint 4, Joint 3, Joint 5]`. On the other hand,
the Isaac Gym Preview Releases assume a depth-first ordering for the joints in the kinematic
tree; in the example below, the joint order would be: `[Joint 1, Joint 2, Joint 3, Joint 4, Joint 5]`.
<img src="./media/KinematicTree.png" height="300"/>
With this in mind, it is important to change the joint order to depth-first in
the observation buffer before feeding it into an existing policy trained in one of the
Isaac Gym Preview Releases. Similarly, you would also need to change the joint order
in the output (the action buffer) of the Isaac Gym Preview Release trained policy
to breadth-first before applying joint actions to articulations via methods in `ArticulationView`.
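For the example tree above, the reordering amounts to a fixed index permutation. The following sketch is illustrative (buffer shapes assumed to be `(num_envs, num_joints)`):

```python
import torch

# BFS order [J1, J2, J4, J3, J5] -> DFS order [J1, J2, J3, J4, J5]
bfs_to_dfs = torch.tensor([0, 1, 3, 2, 4])

joint_pos_bfs = torch.randn(4, 5)             # e.g. output of ArticulationView.get_joint_positions
obs_joint_pos = joint_pos_bfs[:, bfs_to_dfs]  # depth-first order for the pretrained policy

actions_dfs = torch.randn(4, 5)               # policy output in depth-first order
actions_bfs = actions_dfs[:, bfs_to_dfs]      # back to breadth-first; this swap of two entries is its own inverse
```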
### Physics Parameters
One factor that could dictate the success of policy transfer from Isaac Gym Preview
Releases to Isaac Sim is to ensure the physics parameters used in both simulations are
identical or very similar. In general, the `sim` parameters specified in the
task configuration `yaml` file overwrite the corresponding parameters in the USD asset.
However, there are additional parameters in the USD asset that are not included
in the task configuration `yaml` file. These additional parameters may sometimes
impact the performance of Isaac Gym Preview Release trained policies and hence need
modifications in the USD asset itself to match the values set in Isaac Gym Preview Releases.
For instance, the following parameters in the `RigidBodyAPI` could be modified in the
USD asset to yield better policy transfer performance:
| RigidBodyAPI Parameter | Default Value in Isaac Sim | Default Value in Isaac Gym Preview Releases |
|:----------------------:|:--------------------------:|:--------------------------:|
| Linear Damping | 0.00 | 0.00 |
| Angular Damping | 0.05 | 0.00 |
| Max Linear Velocity | inf | 1000 |
| Max Angular Velocity | 5729.58008 (deg/s) | 64 (rad/s) |
| Max Contact Impulse | inf | 1e32 |
<img src="./media/RigidBodyAPI.png" width="500"/>
Parameters in the `JointAPI` as well as the `DriveAPI` could be altered as well. Note
that the Isaac Sim UI assumes the unit of angle to be degrees. It is particularly
worth noting that the `Damping` and `Stiffness` parameters in the `DriveAPI` have the unit
of `1/deg` in the Isaac Sim UI but `1/rad` in Isaac Gym Preview Releases.
| Joint Parameter | Default Value in Isaac Sim | Default Value in Isaac Gym Preview Releases |
|:----------------------:|:--------------------------:|:--------------------------:|
| Maximum Joint Velocity | 1000000.0 (deg/s) | 100.0 (rad/s) |
<img src="./media/JointAPI.png" width="500"/>
### Differences in APIs
APIs for accessing physics states in Isaac Sim require the creation of an ArticulationView or RigidPrimView
object. Multiple view objects can be initialized for different articulations or bodies in the scene by defining
a regex expression that matches the paths of the desired objects. This approach eliminates the need to retrieve
body handles to slice states for specific bodies in the scene.
We have also removed `acquire` and `refresh` APIs in Isaac Sim. Physics states can be directly applied or retrieved
by using `set`/`get` APIs defined for the views.
New APIs provided in Isaac Sim no longer require explicit wrapping and un-wrapping of underlying buffers.
APIs can now work with tensors directly for reading and writing data. Most APIs in Isaac Sim also provide
the option to specify an `indices` parameter, which can be used when reading or writing data for a subset
of environments. Note that when setting states with the `indices` parameter, the shape of the states buffer
should match with the dimension of the `indices` list.
Note some naming differences between APIs in Isaac Gym Preview Release and Isaac Sim. Most `dof` related APIs have been
renamed to `joint` in Isaac Sim. `root_states` is now separated into different APIs for `world_poses` and `velocities`.
Similarly, `dof_states` are retrieved individually in Isaac Sim as `joint_positions` and `joint_velocities`.
APIs in Isaac Sim also no longer follow the explicit `_tensors` or `_tensor_indexed` suffixes in naming.
Indexed versions of APIs now happen implicitly through the optional `indices` parameter.
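The sketch below illustrates the indexed get/set pattern (the view attribute `self._robots` and the `env_ids` tensor are assumptions for illustration):

```python
# read states only for the environments being reset
dof_pos = self._robots.get_joint_positions(indices=env_ids)
root_pos, root_rot = self._robots.get_world_poses(indices=env_ids)

# write back a buffer whose first dimension matches len(env_ids)
self._robots.set_joint_positions(dof_pos, indices=env_ids)
```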
### Task Configuration Files
There are a few modifications that need to be made to an existing Isaac Gym Preview Release
task `yaml` file in order for it to be compatible with the Isaac Sim RL extensions.
#### Frequencies of Physics Simulation and RL Policy
The way in which physics simulation frequency and RL policy frequency are specified is different
between Isaac Gym Preview Releases and Isaac Sim, dictated by the following three
parameters: `dt`, `substeps`, and `controlFrequencyInv`.
- `dt`: The simulation time difference between each simulation step.
- `substeps`: The number of physics steps within one simulation step. *i.e.* if `dt: 1/60`
and `substeps: 4`, physics is simulated at 240 hz.
- `controlFrequencyInv`: The control decimation of the RL policy, which is the number of
simulation steps between RL actions. *i.e.* if `dt: 1/60` and `controlFrequencyInv: 2`,
RL policy is running at 30 hz.
In Isaac Gym Preview Releases, all three of the above parameters are used to specify
the frequencies of physics simulation and RL policy. However, Isaac Sim only uses `controlFrequencyInv` and `dt`, as `substeps` is always fixed at `1`. Note that despite
only using two parameters, Isaac Sim can still achieve the same substeps definition
as Isaac Gym. For example, if in an Isaac Gym Preview Release policy, we set `substeps: 2`,
`dt: 1/60` and `controlFrequencyInv: 1`, we can achieve the equivalent in Isaac Sim
by setting `controlFrequencyInv: 2` and `dt: 1/120`.
In the Isaac Sim RL extensions, `dt` is specified in the task configuration `yaml` file
under `sim`, whereas `controlFrequencyInv` is a parameter under `env`.
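As a concrete sketch of the substeps equivalence described above (values follow the example; key placement per the task config layout):

```yaml
sim:
  dt: 0.00833 # 1/120 s; replaces dt: 1/60 with substeps: 2
env:
  controlFrequencyInv: 2 # keeps the RL policy at 60 hz
```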
#### Physx Parameters
Parameters under `physx` in the task configuration `yaml` file remain mostly unchanged.
In Isaac Gym Preview Releases, `use_gpu` is frequently set to
`${contains:"cuda",${....sim_device}}`. For Isaac Sim, please ensure this is changed
to `${eq:${....sim_device},"gpu"}`.
In Isaac Gym Preview Releases, GPU buffer sizes are specified using the following two parameters:
`default_buffer_size_multiplier` and `max_gpu_contact_pairs`. With the Isaac Sim RL extensions,
these two parameters are no longer used; instead, the various GPU buffer sizes can be
set explicitly.
For instance, in the [Humanoid task configuration file](../omniisaacgymenvs/cfg/task/Humanoid.yaml),
GPU buffer sizes are specified as follows:
```yaml
gpu_max_rigid_contact_count: 524288
gpu_max_rigid_patch_count: 81920
gpu_found_lost_pairs_capacity: 8192
gpu_found_lost_aggregate_pairs_capacity: 262144
gpu_total_aggregate_pairs_capacity: 8192
gpu_max_soft_body_contacts: 1048576
gpu_max_particle_contacts: 1048576
gpu_heap_capacity: 67108864
gpu_temp_buffer_capacity: 16777216
gpu_max_num_partitions: 8
```
Please refer to the [Troubleshooting](./troubleshoot.md#simulation) documentation should
you encounter errors related to GPU buffer sizes.
#### Articulation Parameters
The articulation parameters of each actor can now be individually specified in the Isaac Sim
task configuration `yaml` file. The following is an example template for setting these parameters:
```yaml
ARTICULATION_NAME:
# -1 to use default values
override_usd_defaults: False
fixed_base: False
enable_self_collisions: True
enable_gyroscopic_forces: True
# per-actor
solver_position_iteration_count: 4
solver_velocity_iteration_count: 0
sleep_threshold: 0.005
stabilization_threshold: 0.001
# per-body
density: -1
max_depenetration_velocity: 10.0
```
These articulation parameters can be parsed using the `parse_actor_config` method in the
[SimConfig](../omniisaacgymenvs/utils/config_utils/sim_config.py) class, which can then be applied
to a prim in simulation via the `apply_articulation_settings` method. A concrete example of this
is the following code snippet from the [HumanoidTask](../omniisaacgymenvs/tasks/humanoid.py#L75):
```python
self._sim_config.apply_articulation_settings("Humanoid", get_prim_at_path(humanoid.prim_path), self._sim_config.parse_actor_config("Humanoid"))
```
#### Additional Simulation Parameters
- `use_fabric`: Setting this parameter to `True` enables [PhysX Fabric](https://docs.omniverse.nvidia.com/prod_extensions/prod_extensions/ext_physics.html#flatcache), which offers a significant increase in simulation speed. However, this parameter must
be set to `False` if soft-body simulation is required because `PhysX Fabric` currently only supports rigid-body simulation.
- `enable_scene_query_support`: Setting this parameter to `True` allows the user to interact with prims in the scene. Keeping this setting set to `False` during
training improves simulation speed. Note that this parameter is always set to `True` in test/inference mode to enable user interaction with trained models.
### Training Configuration Files
The Omniverse Isaac Gym RL Environments are trained using a third-party highly-optimized RL library,
[rl_games](https://github.com/Denys88/rl_games), which is also used to train the Isaac Gym Preview Release examples
in [IsaacGymEnvs](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs). Therefore, the rl_games training
configuration `yaml` files in Isaac Sim are compatible with those from IsaacGymEnvs. However, please
add the following lines under `config` in the training configuration `yaml` files (*i.e.*
line 41-42 in [HumanoidPPO.yaml](../omniisaacgymenvs/cfg/train/HumanoidPPO.yaml#L41)) to ensure
RL training runs on the intended device.
```yaml
device: ${....rl_device}
device_name: ${....rl_device}
```
Domain Randomization
====================
Overview
--------
We sometimes need our reinforcement learning agents to be robust to
different physics than they are trained with, such as when attempting a
sim2real policy transfer. Using domain randomization (DR), we repeatedly
randomize the simulation dynamics during training in order to learn a
good policy under a wide range of physical parameters.
OmniverseIsaacGymEnvs supports "on the fly" domain randomization, allowing
dynamics to be changed without requiring reloading of assets. This allows
us to efficiently apply domain randomizations without common overheads like
re-parsing asset files.
The OmniverseIsaacGymEnvs DR framework utilizes the `omni.replicator.isaac`
extension in its backend to perform "on the fly" randomization. Users can
add domain randomization by either directly using methods provided in
`omni.replicator.isaac` in python, or specifying DR settings in the
task configuration `yaml` file. The following sections will focus on setting
up DR using the `yaml` file interface. For more detailed documentations
regarding methods provided in the `omni.replicator.isaac` extension, please
visit [here](https://docs.omniverse.nvidia.com/py/isaacsim/source/extensions/omni.replicator.isaac/docs/index.html).
Domain Randomization Options
-------------------------------
We will first explain what can be randomized in the scene and the sampling
distributions. There are five main parameter groups that support randomization.
They are:
- `observations`: Add noise directly to the agent observations
- `actions`: Add noise directly to the agent actions
- `simulation`: Add noise to physical parameters defined for the entire
scene, such as `gravity`
- `rigid_prim_views`: Add noise to properties belonging to rigid prims,
such as `material_properties`.
- `articulation_views`: Add noise to properties belonging to articulations,
such as `stiffness` of joints.
For each parameter you wish to randomize, you can specify two ways that
determine when the randomization is applied:
- `on_reset`: Adds correlated noise to a parameter of an environment when
  that environment gets reset. This correlated noise will remain
  with an environment until that environment gets reset again, at which
  point new correlated noise is set. To trigger `on_reset`,
  the indices for the environments that need to be reset must be passed in
  to `omni.replicator.isaac.physics_view.step_randomization(reset_inds)`.
- `on_interval`: Adds uncorrelated noise to a parameter at a frequency specified
by `frequency_interval`. If a parameter also has `on_reset` randomization,
the `on_interval` noise is combined with the noise applied at `on_reset`.
- `on_startup`: Applies randomization once prior to the start of the simulation. Only available
to rigid prim scale, mass, density and articulation scale parameters.
For `on_reset`, `on_interval`, and `on_startup`, you can specify the following settings:
- `distribution`: The distribution to generate a sample `x` from. The available distributions
are listed below. Note that parameters `a` and `b` are defined by the
`distribution_parameters` setting.
- `uniform`: `x ~ unif(a, b)`
- `loguniform`: `x ~ exp(unif(log(a), log(b)))`
- `gaussian`: `x ~ normal(a, b)`
- `distribution_parameters`: The parameters to the distribution.
- For observations and actions, this setting is specified as a tuple `[a, b]` of
real values.
- For simulation and view parameters, this setting is specified as a nested tuple
    in the form of `[[a_1, a_2, ..., a_n], [b_1, b_2, ..., b_n]]`, where `n` is
the dimension of the parameter (*i.e.* `n` is 3 for position). It can also be
specified as a tuple in the form of `[a, b]`, which will be broadcasted to the
correct dimensions.
- For `uniform` and `loguniform` distributions, `a` and `b` are the lower and
upper bounds.
- For `gaussian`, `a` is the distribution mean and `b` is the variance.
- `operation`: Defines how the generated sample `x` will be applied to the original
simulation parameter. The options are `additive`, `scaling`, `direct`.
  - `additive`: adds the sample to the original value.
  - `scaling`: multiplies the original value by the sample.
  - `direct`: directly sets the sample as the parameter value.
- `frequency_interval`: Specifies the number of steps to apply randomization.
- Only used with `on_interval`.
  - Steps of each environment are incremented with each
`omni.replicator.isaac.physics_view.step_randomization(reset_inds)` call and
reset if the environment index is in `reset_inds`.
- `num_buckets`: Only used for `material_properties` randomization
- Physx only allows 64000 unique physics materials in the scene at once. If more than
64000 materials are needed, increase `num_buckets` to allow materials to be shared
between prims.
YAML Interface
--------------
Now that we know what options are available for domain randomization,
let's put it all together in the YAML config. In your `omniverseisaacgymenvs/cfg/task`
yaml file, you can specify your domain randomization parameters under the
`domain_randomization` key. First, we turn on domain randomization by setting
`randomize` to `True`:
```yaml
domain_randomization:
randomize: True
randomization_params:
...
```
This can also be set as a command line argument at launch time with `task.domain_randomization.randomize=True`.
Next, we will define our parameters under the `randomization_params`
keys. Here you can see how we used the previous settings to define some
randomization parameters for a ShadowHand cube manipulation task:
```yaml
randomization_params:
observations:
on_reset:
operation: "additive"
distribution: "gaussian"
distribution_parameters: [0, .0001]
on_interval:
frequency_interval: 1
operation: "additive"
distribution: "gaussian"
distribution_parameters: [0, .002]
actions:
on_reset:
operation: "additive"
distribution: "gaussian"
distribution_parameters: [0, 0.015]
on_interval:
frequency_interval: 1
operation: "additive"
distribution: "gaussian"
distribution_parameters: [0., 0.05]
simulation:
gravity:
on_reset:
operation: "additive"
distribution: "gaussian"
distribution_parameters: [[0.0, 0.0, 0.0], [0.0, 0.0, 0.4]]
rigid_prim_views:
object_view:
material_properties:
on_reset:
num_buckets: 250
operation: "scaling"
distribution: "uniform"
distribution_parameters: [[0.7, 1, 1], [1.3, 1, 1]]
articulation_views:
shadow_hand_view:
stiffness:
on_reset:
operation: "scaling"
distribution: "uniform"
distribution_parameters: [0.75, 1.5]
```
Note how we structured `rigid_prim_views` and `articulation_views`. When creating
a `RigidPrimView` or `ArticulationView` in the task python file, you have the option to
pass in `name` as an argument. **To use domain randomization, the name of the `RigidPrimView` or
`ArticulationView` must match the name provided in the randomization `yaml` file.** In the
example above, `object_view` is the name of a `RigidPrimView` and `shadow_hand_view` is the name
of the `ArticulationView`.
The exact parameters that can be randomized are listed below:
**simulation**:
- gravity (dim=3): The gravity vector of the entire scene.
**rigid\_prim\_views**:
- position (dim=3): The position of the rigid prim. In meters.
- orientation (dim=3): The orientation of the rigid prim, specified with euler angles. In radians.
- linear_velocity (dim=3): The linear velocity of the rigid prim. In m/s. **CPU pipeline only**
- angular_velocity (dim=3): The angular velocity of the rigid prim. In rad/s. **CPU pipeline only**
- velocity (dim=6): The linear + angular velocity of the rigid prim.
- force (dim=3): Apply a force to the rigid prim. In N.
- mass (dim=1): Mass of the rigid prim. In kg. **CPU pipeline only during runtime**.
- inertia (dim=3): The diagonal values of the inertia matrix. **CPU pipeline only**
- material_properties (dim=3): Static friction, Dynamic friction, and Restitution.
- contact_offset (dim=1): A small distance from the surface of the collision geometry at
which contacts start being generated.
- rest_offset (dim=1): A small distance from the surface of the collision geometry at
which the effective contact with the shape takes place.
- scale (dim=1): The scale of the rigid prim. `on_startup` only.
- density (dim=1): Density of the rigid prim. `on_startup` only.
**articulation\_views**:
- position (dim=3): The position of the articulation root. In meters.
- orientation (dim=3): The orientation of the articulation root, specified with euler angles. In radians.
- linear_velocity (dim=3): The linear velocity of the articulation root. In m/s. **CPU pipeline only**
- angular_velocity (dim=3): The angular velocity of the articulation root. In rad/s. **CPU pipeline only**
- velocity (dim=6): The linear + angular velocity of the articulation root.
- stiffness (dim=num_dof): The stiffness of the joints.
- damping (dim=num_dof): The damping of the joints
- joint_friction (dim=num_dof): The friction coefficient of the joints.
- joint_positions (dim=num_dof): The joint positions. In radians or meters.
- joint_velocities (dim=num_dof): The joint velocities. In rad/s or m/s.
- lower_dof_limits (dim=num_dof): The lower limit of the joints. In radians or meters.
- upper_dof_limits (dim=num_dof): The upper limit of the joints. In radians or meters.
- max_efforts (dim=num_dof): The maximum force or torque that the joints can exert. In N or Nm.
- joint_armatures (dim=num_dof): A value added to the diagonal of the joint-space inertia matrix.
Physically, it corresponds to the rotating part of a motor
- joint_max_velocities (dim=num_dof): The maximum velocity allowed on the joints. In rad/s or m/s.
- joint_efforts (dim=num_dof): Applies a force or a torque on the joints. In N or Nm.
- body_masses (dim=num_bodies): The mass of each body in the articulation. In kg. **CPU pipeline only**
- body_inertias (dim=num_bodies×3): The diagonal values of the inertia matrix of each body. **CPU pipeline only**
- material_properties (dim=num_bodies×3): The static friction, dynamic friction, and restitution of each body
in the articulation, specified in the following order:
    [body_1_static_friction, body_1_dynamic_friction, body_1_restitution,
     body_2_static_friction, body_2_dynamic_friction, body_2_restitution,
... ]
- tendon_stiffnesses (dim=num_tendons): The stiffness of the fixed tendons in the articulation.
- tendon_dampings (dim=num_tendons): The damping of the fixed tendons in the articulation.
- tendon_limit_stiffnesses (dim=num_tendons): The limit stiffness of the fixed tendons in the articulation.
- tendon_lower_limits (dim=num_tendons): The lower limits of the fixed tendons in the articulation.
- tendon_upper_limits (dim=num_tendons): The upper limits of the fixed tendons in the articulation.
- tendon_rest_lengths (dim=num_tendons): The rest lengths of the fixed tendons in the articulation.
- tendon_offsets (dim=num_tendons): The offsets of the fixed tendons in the articulation.
- scale (dim=1): The scale of the articulation. `on_startup` only.
Applying Domain Randomization
------------------------------
To parse the domain randomization configurations in the task `yaml` file and set up the DR pipeline,
it is necessary to call `self._randomizer.set_up_domain_randomization(self)`, where `self._randomizer`
is the `Randomizer` object created in RLTask's `__init__`.
It is worth noting that the names of the views provided under `rigid_prim_views` or `articulation_views`
in the task `yaml` file must match the names passed into `RigidPrimView` or `ArticulationView` objects
in the python task file. In addition, all `RigidPrimView` and `ArticulationView` objects that would have domain
randomization applied must be added to the scene in the task's `set_up_scene()` via `scene.add()`.
To trigger `on_startup` randomizations, call `self._randomizer.apply_on_startup_domain_randomization(self)`
in `set_up_scene()` after all views are added to the scene. Note that `on_startup` randomizations
are only available to rigid prim scale, mass, density and articulation scale parameters since these parameters
cannot be randomized after the simulation begins on GPU pipeline. Therefore, randomizations must be applied
to these parameters in `set_up_scene()` prior to the start of the simulation.
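Putting the above together, `set_up_scene()` might look like the following sketch (view attribute names are assumptions; view names match the earlier `yaml` example):

```python
def set_up_scene(self, scene) -> None:
    super().set_up_scene(scene)
    scene.add(self._object_view)       # RigidPrimView named "object_view"
    scene.add(self._shadow_hand_view)  # ArticulationView named "shadow_hand_view"
    if self._randomizer.randomize:
        self._randomizer.apply_on_startup_domain_randomization(self)
```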
To trigger `on_reset` and `on_interval` randomizations, it is required to step the internal
counter of the DR pipeline in `pre_physics_step()`:
```python
if self._randomizer.randomize:
omni.replicator.isaac.physics_view.step_randomization(reset_inds)
```
`reset_inds` is a list of indices of the environments that need to be reset. For those environments, it will
trigger the randomizations defined with `on_reset`. All other environments will follow randomizations
defined with `on_interval`.
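A typical way to obtain `reset_inds` is from the task's reset buffer; a minimal sketch (buffer name follows common RLTask conventions):

```python
# indices of environments flagged for reset this step
reset_inds = self.reset_buf.nonzero(as_tuple=False).squeeze(-1)
if self._randomizer.randomize:
    omni.replicator.isaac.physics_view.step_randomization(reset_inds)
```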
Randomization Scheduling
----------------------------
We provide methods to modify distribution parameters defined in the `yaml` file during training, which
allows custom DR scheduling. There are three methods from the `Randomizer` class
that are relevant to DR scheduling:
- `get_initial_dr_distribution_parameters`: returns a numpy array of the initial parameters (as defined in
the `yaml` file) of a specified distribution
- `get_dr_distribution_parameters`: returns a numpy array of the current parameters of a specified distribution
- `set_dr_distribution_parameters`: sets new parameters to a specified distribution
Using the DR configuration example defined above, we can get the current parameters and set new parameters
to gravity randomization and shadow hand joint stiffness randomization as follows:
```python
current_gravity_dr_params = self._randomizer.get_dr_distribution_parameters(
"simulation",
"gravity",
"on_reset",
)
self._randomizer.set_dr_distribution_parameters(
[[0.0, 0.0, 0.0], [0.0, 0.0, 0.5]],
"simulation",
"gravity",
"on_reset",
)
current_joint_stiffness_dr_params = self._randomizer.get_dr_distribution_parameters(
"articulation_views",
"shadow_hand_view",
"stiffness",
"on_reset",
)
self._randomizer.set_dr_distribution_parameters(
[0.7, 1.55],
"articulation_views",
"shadow_hand_view",
"stiffness",
"on_reset",
)
```
The following is an example of using these methods to perform linear scheduling of gaussian noise
that is added to observations and actions in the above shadow hand example. The following method
linearly adds more noise to observations and actions every epoch up until the `schedule_epoch`.
This method can be added to the Task python class and be called in `pre_physics_step()`.
```python
def apply_observations_actions_noise_linear_scheduling(self, schedule_epoch=100):
current_epoch = self._env.sim_frame_count // self._cfg["task"]["env"]["controlFrequencyInv"] // self._cfg["train"]["params"]["config"]["horizon_length"]
if current_epoch <= schedule_epoch:
if (self._env.sim_frame_count // self._cfg["task"]["env"]["controlFrequencyInv"]) % self._cfg["train"]["params"]["config"]["horizon_length"] == 0:
for distribution_path in [("observations", "on_reset"), ("observations", "on_interval"), ("actions", "on_reset"), ("actions", "on_interval")]:
scheduled_params = self._randomizer.get_initial_dr_distribution_parameters(*distribution_path)
scheduled_params[1] = (1/schedule_epoch) * current_epoch * scheduled_params[1]
self._randomizer.set_dr_distribution_parameters(scheduled_params, *distribution_path)
```
## A Note on Instanceable USD Assets
The following section presents a method that modifies existing USD assets
which allows Isaac Sim to load significantly more environments. This is currently
an experimental method and has thus not been completely integrated into the
framework. As a result, this section is reserved for power users who wish to
maximize the performance of the Isaac Sim RL framework.
### Motivation
One common issue in Isaac Sim that occurs when we try to increase
the number of environments `numEnvs` is running out of RAM. This occurs because
the Isaac Sim RL framework uses `omni.isaac.cloner` to duplicate environments.
As a result, there are `numEnvs` number of identical copies of the visual and
collision meshes in the scene, which consumes lots of memory. However, only one
copy of the meshes are needed on stage since prims in all other environments could
merely reference that one copy, thus reducing the amount of memory used for loading
environments. To enable this functionality, USD assets need to be modified to be
`instanceable`.
### Creating Instanceable Assets
Assets can now be directly imported as Instanceable assets through the URDF and MJCF importers provided in Isaac Sim. By selecting this option, imported assets will be split into two separate USD files that follow the above hierarchy definition. Any mesh data will be written to an USD stage to be referenced by the main USD stage, which contains the main robot definition.
To use the Instanceable option in the importers, first check the `Create Instanceable Asset` option. Then, specify a file path to indicate the location for saving the mesh data in the `Instanceable USD Path` textbox. This will default to `./instanceable_meshes.usd`, which will generate a file `instanceable_meshes.usd` that is saved to the current directory.
Once the asset is imported with these options enabled, you will see the robot definition in the stage - we will refer to this stage as the master stage. If we expand the robot hierarchy in the Stage, we will notice that the parent prims that have mesh descendants have been marked as Instanceable and they reference a prim in our `Instanceable USD Path` USD file. We are also no longer able to modify attributes of descendant meshes.
To add the instanced asset into a new stage, we will simply need to add the master USD file.
### Converting Existing Assets
We provide the utility function `convert_asset_instanceable`, which creates an instanceable
version of a given USD asset in `/omniisaacgymenvs/utils/usd_utils/create_instanceable_assets.py`.
To run this function, launch Isaac Sim and open the script editor via `Window -> Script Editor`.
Enter the following script and press `Run (Ctrl + Enter)`:
```python
from omniisaacgymenvs.utils.usd_utils.create_instanceable_assets import convert_asset_instanceable
convert_asset_instanceable(
    asset_usd_path=ASSET_USD_PATH,
    source_prim_path=SOURCE_PRIM_PATH,
    save_as_path=SAVE_AS_PATH
)
```
Note that `ASSET_USD_PATH` is the file path to the USD asset (*e.g.* robot_asset.usd).
`SOURCE_PRIM_PATH` is the USD path of the root prim of the asset on stage. `SAVE_AS_PATH`
is the file path of the generated instanceable version of the asset
(*e.g.* robot_asset_instanceable.usd).
Assuming that `SAVE_AS_PATH` is `OUTPUT_NAME.usd`, the above script will generate two files:
`OUTPUT_NAME.usd` and `OUTPUT_NAME_meshes.usd`. `OUTPUT_NAME.usd` is the instanceable version
of the asset that can be imported to stage and used by `omni.isaac.cloner` to create numerous
duplicates without consuming much memory. `OUTPUT_NAME_meshes.usd` contains all the visual
and collision meshes that `OUTPUT_NAME.usd` references.
It is worth noting that any [USD Relationships](https://graphics.pixar.com/usd/dev/api/class_usd_relationship.html)
on the referenced meshes are removed in `OUTPUT_NAME.usd`. This is because those USD Relationships
originally have targets set to prims in `OUTPUT_NAME_meshes.usd` and hence cannot be accessed
from `OUTPUT_NAME.usd`. Common examples of USD Relationships that could exist on the meshes are
visual materials, physics materials, and filtered collision pairs. Therefore, it is recommended
to set these USD Relationships on the meshes' parent Xforms instead of the meshes themselves.
In a case where we would like to update the main USD file where the instanceable USD file is being referenced from, we also provide a utility method to update all references in the stage that matches a source reference path to a new USD file path.
```python
from omniisaacgymenvs.utils.usd_utils.create_instanceable_assets import update_reference
update_reference(
    source_prim_path=SOURCE_PRIM_PATH,
    source_reference_path=SOURCE_REFERENCE_PATH,
    target_reference_path=TARGET_REFERENCE_PATH
)
```
### Limitations
USD requires a specific structure in the asset tree definition in order for the instanceable flag to take effect. To mark any mesh or primitive geometry prim in the asset as instanceable, the mesh prim requires a parent Xform prim to be present, which will be used to add a reference to a master USD file containing the definition of the mesh prim.
For example, the following definition:
```
World
|_ Robot
    |_ Collisions
        |_ Sphere
        |_ Box
```
would have to be modified to:
```
World
|_ Robot
    |_ Collisions
        |_ Sphere_Xform
        |   |_ Sphere
        |_ Box_Xform
            |_ Box
```
Any references that exist on the original `Sphere` and `Box` prims would have to be moved to `Sphere_Xform` and `Box_Xform` prims.
To help with the process of creating new parent prims, we provide a utility method `create_parent_xforms()` in `omniisaacgymenvs/utils/usd_utils/create_instanceable_assets.py` to automatically insert a new Xform prim as a parent of every mesh prim in the stage. This method can be run on an existing non-instanced USD file for an asset from the script editor:
```python
from omniisaacgymenvs.utils.usd_utils.create_instanceable_assets import create_parent_xforms
create_parent_xforms(
    asset_usd_path=ASSET_USD_PATH,
    source_prim_path=SOURCE_PRIM_PATH,
    save_as_path=SAVE_AS_PATH
)
```
This method can also be run as part of the `convert_asset_instanceable()` method, by passing in the argument `create_xforms=True`.
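For example (argument values are placeholders, as above):
```python
from omniisaacgymenvs.utils.usd_utils.create_instanceable_assets import convert_asset_instanceable
convert_asset_instanceable(
    asset_usd_path=ASSET_USD_PATH,
    source_prim_path=SOURCE_PRIM_PATH,
    save_as_path=SAVE_AS_PATH,
    create_xforms=True
)
```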
It is also worth noting that once an instanced asset is added to the stage, we can no longer modify USD attributes on the instanceable prims. For example, to modify attributes of collision meshes that are set as instanceable, we have to first modify the attributes on the corresponding prims in the master USD file that our instanced asset references. Then, we can allow the instanced asset to pick up the updated values from the master file. | 6,846 | Markdown | 56.058333 | 444 | 0.76804 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/docs/framework/reproducibility.md | Reproducibility and Determinism
===============================
Seeds
-----
To achieve deterministic behavior on multiple training runs, a seed
value can be set in the training config file for each task. This will potentially
allow for individual runs of the same task to be deterministic when
executed on the same machine and system setup. Alternatively, a seed can
also be set via command line argument `seed=<seed>` to override any
settings in config files. If no seed is specified in either config files
or command line arguments, we default to generating a random seed. In
this case, individual runs of the same task should not be expected to be
deterministic. For convenience, we also support setting `seed=-1` to
generate a random seed, which will override any seed values set in
config files. By default, we have explicitly set all seed values in
config files to be 42.
PyTorch Deterministic Training
------------------------------
We also include a `torch_deterministic` argument for use when running RL
training. Enabling this flag (by passing `torch_deterministic=True`) will
apply additional settings to PyTorch that can force the usage of deterministic
algorithms in PyTorch, but may also negatively impact runtime performance.
For more details regarding PyTorch reproducibility, refer to
<https://pytorch.org/docs/stable/notes/randomness.html>. If both
`torch_deterministic=True` and `seed=-1` are set, the seed value will be
fixed to 42.
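Conceptually, enabling the flag applies PyTorch settings along the following lines (a simplified sketch, not the exact code used by this repository):
```python
import os
import torch

# force deterministic algorithm selection in cuDNN and PyTorch
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
torch.use_deterministic_algorithms(True)
# deterministic cuBLAS matmuls additionally require this environment variable
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
```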
Runtime Simulation Changes / Domain Randomization
-------------------------------------------------
Note that using a fixed seed value will only **potentially** allow for deterministic
behavior. Due to GPU work scheduling, it is possible that runtime changes to
simulation parameters can alter the order in which operations take place, as
environment updates can happen while the GPU is doing other work. Because of the nature
of floating point numeric storage, any alteration of execution ordering can
cause small changes in the least significant bits of output data, leading
to divergent execution over the simulation of thousands of environments and
simulation frames.
As an example of this, runtime domain randomization of object scales
is known to cause both determinism and simulation issues when running on the GPU
due to the way those parameters are passed from CPU to GPU in lower level APIs. Therefore,
this is only supported at setup time before starting simulation, which is specified by
the `on_startup` condition for Domain Randomization.
At this time, we do not believe that other domain randomizations offered by this
framework cause issues with deterministic execution when running GPU simulation,
but directly manipulating other simulation parameters outside of the omni.isaac.core View
APIs may induce similar issues.
Also due to floating point precision, states across different environments in the simulation
may be non-deterministic when the same set of actions are applied to the same initial
states. This occurs as environments are placed further apart from the world origin at (0, 0, 0).
As actors get placed at different origins in the world, floating point errors may build up
and result in slight variance in results even when starting from the same initial states. One
possible workaround for this issue is to place all actors/environments at the world origin
at (0, 0, 0) and filter out collisions between the environments. Note that this may induce
a performance degradation of around 15-50%, depending on the complexity of actors and
environment.
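A minimal sketch of this workaround using `omni.isaac.cloner` (prim paths here are illustrative, and the `filter_collisions` signature is assumed from recent releases):
```python
from omni.isaac.cloner import GridCloner

# place every cloned environment at the world origin (zero grid spacing)
cloner = GridCloner(spacing=0.0)
env_paths = cloner.generate_paths("/World/envs/env", 128)
cloner.clone(source_prim_path="/World/envs/env_0", prim_paths=env_paths)

# filter out collisions between the overlapping environments
cloner.filter_collisions(
    physicsscene_path="/World/physicsScene",
    collision_root_path="/World/collisions",
    prim_paths=env_paths,
    global_paths=["/World/defaultGroundPlane"],  # shared prims allowed to collide with all envs
)
```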
Another known cause of non-determinism is from resetting actors into contact states.
If actors within a scene are reset to a state where contacts are registered
between actors, the simulation may not be able to produce deterministic results.
This is because contacts are not recorded and will be re-computed from scratch for
each reset scenario where actors come into contact, which cannot guarantee
deterministic behavior across different computations.
| 4,017 | Markdown | 53.297297 | 96 | 0.787155 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/docs/framework/framework.md | ## RL Framework
### Overview
Our RL examples are built on top of Isaac Sim's RL framework provided in `omni.isaac.gym`. Tasks are implemented following `omni.isaac.core`'s Task structure. PPO training is performed using the [rl_games](https://github.com/Denys88/rl_games) library, but we provide the flexibility to use other RL libraries for training.
For a list of examples provided, refer to the
[RL List of Examples](../examples/rl_examples.md)
### Class Definition
The RL ecosystem can be viewed as three main pieces: the Task, the RL policy, and the Environment wrapper that provides an interface for communication between the task and the RL policy.
#### Task
The Task class is where main task logic is implemented, such as computing observations and rewards. This is where we can collect states of actors in the scene and apply controls or actions to our actors.
For convenience, we provide a base Task class, `RLTask`, which inherits from the `BaseTask` class in `omni.isaac.core`. This class is responsible for dealing with common configuration parsing, buffer initialization, and environment creation. Note that some config parameters and buffers in this class are specific to the rl_games library, and it is not necessary to inherit new tasks from `RLTask`.
A few key methods in `RLTask` include:
* `__init__(self, name: str, env: VecEnvBase, offset: np.ndarray = None)` - Parses config values common to all tasks and initializes action/observation spaces if not defined in the child class. Defines a GridCloner by default and creates a base USD scope for holding all environment prims. Can be called from child class.
* `set_up_scene(self, scene: Scene, replicate_physics=True, collision_filter_global_paths=[], filter_collisions=True)` - Adds ground plane and creates clones of environment 0 based on values specified in the config. Can be called from child class `set_up_scene()`.
* `pre_physics_step(self, actions: torch.Tensor)` - Takes in actions buffer from RL policy. Can be overridden by child class to process actions.
* `post_physics_step(self)` - Controls flow of RL data processing by triggering APIs to compute observations, retrieve states, compute rewards, resets, and extras. Will return observation, reward, reset, and extras buffers.
#### Environment Wrappers
As part of the RL framework in Isaac Sim, we have introduced environment wrapper classes in `omni.isaac.gym` for RL policies to communicate with simulation in Isaac Sim. This class provides a vectorized interface for common RL APIs used by `gym.Env` and can be easily extended towards RL libraries that require additional APIs. We show an example of this extension process in this repository, where we extend `VecEnvBase` as provided in `omni.isaac.gym` to include additional APIs required by the rl_games library.
Commonly used APIs provided by the base wrapper class `VecEnvBase` include:
* `render(self, mode: str = "human")` - renders the current frame
* `close(self)` - closes the simulator
* `seed(self, seed: int = -1)` - sets a seed. Use `-1` for a random seed.
* `step(self, actions: Union[np.ndarray, torch.Tensor])` - triggers task `pre_physics_step` with actions, steps simulation and renderer, computes observations, rewards, dones, and returns state buffers
* `reset(self)` - triggers task `reset()`, steps simulation, and re-computes observations
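Putting these together, a minimal random-policy rollout might look like the following sketch (here `MyNewTask` stands in for any task class, and the buffer attributes used for shapes are illustrative):
```python
import torch
from omni.isaac.gym.vec_env import VecEnvBase

env = VecEnvBase(headless=True)
task = MyNewTask(name="MyNewTask", sim_config=None, env=env)  # task class as described below
env.set_task(task, backend="torch")

obs = env.reset()
for _ in range(100):
    # sample uniform random actions in [-1, 1] for every environment
    actions = 2.0 * torch.rand((task.num_envs,) + env.action_space.shape, device=task.device) - 1.0
    obs, rewards, dones, info = env.step(actions)
env.close()
```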
##### Multi-Threaded Environment Wrapper for Extension Workflows
`VecEnvBase` is a simple interface that’s designed to provide commonly used `gym.Env` APIs required by RL libraries. Users can create an instance of this class, attach their task to the interface, and provide the wrapper instance to the RL policy. Since the RL algorithm maintains the main loop of execution, interaction with the UI and environments in the scene can be limited and may interfere with the training loop.
We also provide another environment wrapper class called `VecEnvMT`, which is designed to isolate the RL policy in a new thread, separate from the main simulation and rendering thread. This class provides the same set of interface as `VecEnvBase`, but also provides threaded queues for sending and receiving actions and states between the RL policy and the task. In order to use this wrapper interface, users have to implement a `TrainerMT` class, which should implement a `run()` method that initiates the RL loop on a new thread. We show an example of this in OmniIsaacGymEnvs under `omniisaacgymenvs/utils/rlgames/rlgames_train_mt.py`. The setup for using `VecEnvMT` is more involved compared to the single-threaded `VecEnvBase` interface, but will allow users to have more control over starting and stopping the training loop through interaction with the UI.
Note that `VecEnvMT` has a timeout variable, which defaults to 90 seconds. If either the RL thread waiting for physics state exceeds the timeout amount or the simulation thread waiting for RL actions exceeds the timeout amount, the threaded queues will throw an exception and terminate training. For larger scenes that require longer simulation or training time, try increasing the timeout variable in `VecEnvMT` to prevent unnecessary timeouts. This can be done by passing in a `timeout` argument when calling `VecEnvMT.initialize()`.
This wrapper is currently only supported with the [extension workflow](extension_workflow.md).
### Creating New Examples
For simplicity, we will focus on using the single-threaded `VecEnvBase` interface in this tutorial.
To run any example, first make sure an instance of `VecEnvBase` or descendant of `VecEnvBase` is initialized.
This will be required as an argument to our new Task. For example:
``` python
env = VecEnvBase(headless=False)
```
The headless parameter indicates whether a viewer should be created for visualizing results.
Then, create our task class, extending it from `RLTask`:
```python
class MyNewTask(RLTask):
    def __init__(
        self,
        name: str,                # name of the Task
        sim_config: SimConfig,    # SimConfig instance for parsing cfg
        env: VecEnvBase,          # env instance of VecEnvBase or inherited class
        offset=None               # transform offset in World
    ) -> None:
        # parse configurations, set task-specific members
        ...
        self._num_observations = 4
        self._num_actions = 1

        # call parent class’s __init__
        RLTask.__init__(self, name, env)
```
The `__init__` method should take 4 arguments:
* `name`: a string for the name of the task (required by BaseTask)
* `sim_config`: an instance of `SimConfig` used for config parsing, can be `None`. This object is created in `omniisaacgymenvs/utils/task_utils.py`.
* `env`: an instance of `VecEnvBase` or an inherited class of `VecEnvBase`
* `offset`: any offset required to place the `Task` in `World` (required by `BaseTask`)
In the `__init__` method of `MyNewTask`, we can populate any task-specific parameters, such as dimension of observations and actions, and retrieve data from config dictionaries. Make sure to make a call to `RLTask`’s `__init__` at the end of the method to perform additional data initialization.
Next, we can implement the methods required by the RL framework. These methods follow APIs defined in `omni.isaac.core` `BaseTask` class. Below is an example of a simple implementation for each method.
```python
def set_up_scene(self, scene: Scene) -> None:
    # implement environment setup here
    add_prim_to_stage(my_robot)  # add a robot actor to the stage
    super().set_up_scene(scene)  # pass scene to parent class - this method in RLTask also uses GridCloner to clone the robot and adds a ground plane if desired
    self._my_robots = ArticulationView(...)  # create a view of robots
    scene.add(self._my_robots)  # add view to scene for initialization

def post_reset(self):
    # implement any logic required for simulation on-start here
    pass

def pre_physics_step(self, actions: torch.Tensor) -> None:
    # implement logic to be performed before physics steps
    self.perform_reset()
    self.apply_action(actions)

def get_observations(self) -> dict:
    # implement logic to retrieve observation states
    self.obs_buf = self.compute_observations()

def calculate_metrics(self) -> None:
    # implement logic to compute rewards
    self.rew_buf = self.compute_rewards()

def is_done(self) -> None:
    # implement logic to update dones/reset buffer
    self.reset_buf = self.compute_resets()
```
To launch the new example from one of our training scripts, add `MyNewTask` to `omniisaacgymenvs/utils/task_util.py`. In `initialize_task()`, add an import for the `MyNewTask` class and add an entry for it to the `task_map` dictionary to register it with the command line parsing.
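A sketch of what that registration might look like (the module path for `MyNewTask` is hypothetical):
```python
# sketch of the relevant part of omniisaacgymenvs/utils/task_util.py
def initialize_task(config, env, init_sim=True):
    from omniisaacgymenvs.tasks.cartpole import CartpoleTask
    from omniisaacgymenvs.tasks.my_new_task import MyNewTask  # hypothetical module path

    # each entry maps the name used on the command line to the task class
    task_map = {
        "Cartpole": CartpoleTask,
        "MyNewTask": MyNewTask,
    }
    ...
```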
To use the Hydra config parsing system, also add a task and train config files into `omniisaacgymenvs/cfg`. The config files should be named `cfg/task/MyNewTask.yaml` and `cfg/train/MyNewTaskPPO.yaml`.
Finally, we can launch `MyNewTask` with:
```bash
PYTHON_PATH random_policy.py task=MyNewTask
```
### Using a New RL Library
In this repository, we provide an example of extending Isaac Sim's environment wrapper classes to work with the rl_games library, which can be found at `omniisaacgymenvs/envs/vec_env_rlgames.py` and `omniisaacgymenvs/envs/vec_env_rlgames_mt.py`.
The first script, `omniisaacgymenvs/envs/vec_env_rlgames.py`, extends from `VecEnvBase`.
```python
from omni.isaac.gym.vec_env import VecEnvBase
class VecEnvRLGames(VecEnvBase):
```
One of the features in rl_games is the support for asymmetrical actor-critic policies, which requires a `states` buffer in addition to the `observations` buffer. Thus, we have overridden a few of the methods in `VecEnvBase` to incorporate this requirement.
```python
def set_task(
    self, task, backend="numpy", sim_params=None, init_sim=True
) -> None:
    super().set_task(task, backend, sim_params, init_sim)  # class VecEnvBase's set_task to register task to the environment instance

    # special variables required by rl_games
    self.num_states = self._task.num_states
    self.state_space = self._task.state_space

def step(self, actions):
    # we clamp the actions so that values are within a defined range
    actions = torch.clamp(actions, -self._task.clip_actions, self._task.clip_actions).to(self._task.device).clone()

    # pass actions buffer to task for processing
    self._task.pre_physics_step(actions)

    # allow users to specify the control frequency through config
    for _ in range(self._task.control_frequency_inv):
        self._world.step(render=self._render)
        self.sim_frame_count += 1

    # compute new buffers
    self._obs, self._rew, self._resets, self._extras = self._task.post_physics_step()
    self._states = self._task.get_states()  # special buffer required by rl_games

    # return buffers in format required by rl_games
    obs_dict = {"obs": self._obs, "states": self._states}
    return obs_dict, self._rew, self._resets, self._extras
```
Similarly, we also have a multi-threaded version of the rl_games environment wrapper implementation, `omniisaacgymenvs/envs/vec_env_rlgames_mt.py`. This class extends from `VecEnvMT` and `VecEnvRLGames`:
```python
from omni.isaac.gym.vec_env import VecEnvMT
from .vec_env_rlgames import VecEnvRLGames
class VecEnvRLGamesMT(VecEnvRLGames, VecEnvMT):
```
In this class, we also have a special method `_parse_data(self, data)`, which is required to be implemented to parse dictionary values passed through queues. Since multiple buffers of data are required by the RL policy, we concatenate all of the buffers in a single dictionary, and send that to the queue to be received by the RL thread.
```python
def _parse_data(self, data):
    self._obs = torch.clamp(data["obs"], -self._task.clip_obs, self._task.clip_obs).to(self._task.rl_device).clone()
    self._rew = data["rew"].to(self._task.rl_device).clone()
    self._states = torch.clamp(data["states"], -self._task.clip_obs, self._task.clip_obs).to(self._task.rl_device).clone()
    self._resets = data["reset"].to(self._task.rl_device).clone()
    self._extras = data["extras"].copy()
```
| 12,172 | Markdown | 60.791878 | 862 | 0.747453 |
Tbarkin121/GuardDog/OmniIsaacGymEnvs/docs/framework/limitations.md | ### API Limitations
#### omni.isaac.core Setter APIs
Setter APIs in omni.isaac.core for ArticulationView, RigidPrimView, and RigidContactView should only be called once per simulation step for
each view instance per API. This means that for use cases where multiple calls to the same setter API from the same view instance is required,
users will need to cache the states to be set for intermediate calls, and make only one call to the setter API prior to stepping physics with
the complete buffer containing all cached states.
If multiple calls to the same setter API from the same view object are made within the simulation step,
subsequent calls will override the states that have been set by prior calls to the same API,
voiding the previous calls to the API. The API can be called again once a simulation step is made.
For example, the below code will override states.
```python
my_view.set_world_poses(positions=[[0, 0, 1]], orientations=[[1, 0, 0, 0]], indices=[0])
# this call will void the previous call
my_view.set_world_poses(positions=[[0, 1, 1]], orientations=[[1, 0, 0, 0]], indices=[1])
my_world.step()
```
Instead, the below code should be used.
```python
my_view.set_world_poses(positions=[[0, 0, 1], [0, 1, 1]], orientations=[[1, 0, 0, 0], [1, 0, 0, 0]], indices=[0, 1])
my_world.step()
```
#### omni.isaac.core Getter APIs
Getter APIs for cloth simulation may return stale states when used with the GPU pipeline. This is because the physics simulation requires a simulation step
to occur in order to refresh the GPU buffers with new states. Therefore, when a getter API is called after a setter API before a
simulation step, the states returned from the getter API may not reflect the values that were set using the setter API.
For example:
```python
my_view.set_world_positions(positions=[[0, 0, 1]], indices=[0])
# Values may be stale when called before step
positions = my_view.get_world_positions() # positions may not match [[0, 0, 1]]
my_world.step()
# Values will be updated when called after step
positions = my_view.get_world_positions() # positions will reflect the new states
```
#### Performing Resets
When resetting the states of actors, impulses generated by previous target or effort controls
will continue to be carried over from the previous states in simulation.
Therefore, depending on the time step, the masses of the objects, and the magnitude of the impulses,
the difference between the desired reset state and the observed first state after reset can be large.
To eliminate this issue, users should also reset any position/velocity targets or effort controllers
to the reset state or zero state when resetting actor states. For setting joint positions and velocities
using the omni.isaac.core ArticulationView APIs, position targets and velocity targets will
automatically be set to the same states as joint positions and velocities.
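As a sketch, a reset helper on a task class might look like the following (the view and buffer names are illustrative):
```python
import torch

def reset_idx(self, env_ids):
    indices = env_ids.to(dtype=torch.int32)
    # joint positions/velocities; position and velocity targets are set to
    # the same values automatically by these omni.isaac.core ArticulationView APIs
    self._robots.set_joint_positions(self.default_dof_pos[env_ids], indices=indices)
    self._robots.set_joint_velocities(self.default_dof_vel[env_ids], indices=indices)
    # effort controls are not cleared automatically - zero them explicitly
    self._robots.set_joint_efforts(torch.zeros_like(self.default_dof_pos[env_ids]), indices=indices)
```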
#### Massless Links
It may be helpful in some scenarios to introduce dummy bodies into articulations for
retrieving transformations at certain locations of the articulation. Although it is possible
to introduce rigid bodies with no mass and colliders APIs and attach them to the articulation
with fixed joints, this can sometimes cause physics instabilities in simulation. To prevent
instabilities from occurring, it is recommended to add a dummy geometry to the rigid body
and include both Mass and Collision APIs. The mass of the geometry can be set to a very
small value, such as 0.0001, to avoid modifying physical behaviors of the articulation.
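A sketch of configuring such a dummy geometry with the pxr APIs (the prim path is hypothetical; the collision setting shown at the end is discussed next):
```python
from pxr import UsdPhysics
import omni.usd

stage = omni.usd.get_context().get_stage()
prim = stage.GetPrimAtPath("/Robot/dummy_link/geometry")  # hypothetical dummy geometry

# give the dummy geometry a negligible mass
mass_api = UsdPhysics.MassAPI.Apply(prim)
mass_api.CreateMassAttr().Set(0.0001)

# apply a Collision API but disable it to preserve the articulation's contacts
collision_api = UsdPhysics.CollisionAPI.Apply(prim)
collision_api.CreateCollisionEnabledAttr().Set(False)
```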
Similarly, we can also disable collision on the Collision API of the geometry to preserve
contact behavior of the articulation. | 3,685 | Markdown | 52.420289 | 155 | 0.775577 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/setup.py | """Installation script for the 'isaacgymenvs' python package."""
from __future__ import absolute_import
from __future__ import print_function
from __future__ import division
from setuptools import setup, find_packages
import os
root_dir = os.path.dirname(os.path.realpath(__file__))
# Minimum dependencies required prior to installation
INSTALL_REQUIRES = [
    # RL
    "gym==0.23.1",
    "torch",
    "omegaconf",
    "termcolor",
    "jinja2",
    "hydra-core>=1.2",
    "rl-games>=1.6.0",
    "pyvirtualdisplay",
    "urdfpy==0.0.22",
    "pysdf==0.1.9",
    "warp-lang==0.10.1",
    "trimesh==3.23.5",
]
# Installation operation
setup(
    name="isaacgymenvs",
    author="NVIDIA",
    version="1.5.1",
    description="Benchmark environments for high-speed robot learning in NVIDIA IsaacGym.",
    keywords=["robotics", "rl"],
    include_package_data=True,
    python_requires=">=3.6",
    install_requires=INSTALL_REQUIRES,
    packages=find_packages("."),
    classifiers=["Natural Language :: English", "Programming Language :: Python :: 3.6, 3.7, 3.8"],
    zip_safe=False,
)
# EOF
| 1,107 | Python | 21.612244 | 99 | 0.644986 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/README.md | # Isaac Gym Benchmark Environments
[Website](https://developer.nvidia.com/isaac-gym) | [Technical Paper](https://arxiv.org/abs/2108.10470) | [Videos](https://sites.google.com/view/isaacgym-nvidia)
### About this repository
This repository contains example RL environments for the NVIDIA Isaac Gym high performance environments described [in our NeurIPS 2021 Datasets and Benchmarks paper](https://openreview.net/forum?id=fgFBtYgJQX_)
### Installation
Download the Isaac Gym Preview 4 release from the [website](https://developer.nvidia.com/isaac-gym), then
follow the installation instructions in the documentation. We highly recommend using a conda environment
to simplify set up.
Ensure that Isaac Gym works on your system by running one of the examples from the `python/examples`
directory, like `joint_monkey.py`. Follow troubleshooting steps described in the Isaac Gym Preview 4
install instructions if you have any trouble running the samples.
Once Isaac Gym is installed and samples work within your current python environment, install this repo:
```bash
pip install -e .
```
### Creating an environment
We offer an easy-to-use API for creating preset vectorized environments. For more info on what a vectorized environment is and its usage, please refer to the Gym library [documentation](https://www.gymlibrary.dev/content/vectorising/#vectorized-environments).
```python
import isaacgym
import isaacgymenvs
import torch
num_envs = 2000
envs = isaacgymenvs.make(
    seed=0,
    task="Ant",
    num_envs=num_envs,
    sim_device="cuda:0",
    rl_device="cuda:0",
)
print("Observation space is", envs.observation_space)
print("Action space is", envs.action_space)
obs = envs.reset()
for _ in range(20):
    random_actions = 2.0 * torch.rand((num_envs,) + envs.action_space.shape, device='cuda:0') - 1.0
    envs.step(random_actions)
```
### Running the benchmarks
To train your first policy, run this line:
```bash
python train.py task=Cartpole
```
Cartpole should train to the point that the pole stays upright within a few seconds of starting.
Here's another example - Ant locomotion:
```bash
python train.py task=Ant
```
Note that by default we show a preview window, which will usually slow down training. You
can use the `v` key while running to disable viewer updates and allow training to proceed
faster. Hit the `v` key again to resume viewing after a few seconds of training, once the
ants have learned to run a bit better.
Use the `esc` key or close the viewer window to stop training early.
Alternatively, you can train headlessly, as follows:
```bash
python train.py task=Ant headless=True
```
Ant may take a minute or two to train a policy you can run. When running headlessly, you
can stop it early using Control-C in the command line window.
### Loading trained models // Checkpoints
Checkpoints are saved in the folder `runs/EXPERIMENT_NAME/nn` where `EXPERIMENT_NAME`
defaults to the task name, but can also be overridden via the `experiment` argument.
To load a trained checkpoint and continue training, use the `checkpoint` argument:
```bash
python train.py task=Ant checkpoint=runs/Ant/nn/Ant.pth
```
To load a trained checkpoint and only perform inference (no training), pass `test=True`
as an argument, along with the checkpoint name. To avoid rendering overhead, you may
also want to run with fewer environments using `num_envs=64`:
```bash
python train.py task=Ant checkpoint=runs/Ant/nn/Ant.pth test=True num_envs=64
```
Note that if there are special characters such as `[` or `=` in the checkpoint names,
you will need to escape them and put quotes around the string. For example,
`checkpoint="./runs/Ant/nn/last_Antep\=501rew\[5981.31\].pth"`
### Configuration and command line arguments
We use [Hydra](https://hydra.cc/docs/intro/) to manage the config. Note that this has some
differences from previous incarnations in older versions of Isaac Gym.
Key arguments to the `train.py` script are:
* `task=TASK` - selects which task to use. Any of `AllegroHand`, `AllegroHandDextremeADR`, `AllegroHandDextremeManualDR`, `AllegroKukaLSTM`, `AllegroKukaTwoArmsLSTM`, `Ant`, `Anymal`, `AnymalTerrain`, `BallBalance`, `Cartpole`, `FrankaCabinet`, `Humanoid`, `Ingenuity`, `Quadcopter`, `ShadowHand`, `ShadowHandOpenAI_FF`, `ShadowHandOpenAI_LSTM`, and `Trifinger` (these correspond to the config for each environment in the folder `isaacgymenvs/config/task`)
* `train=TRAIN` - selects which training config to use. Will automatically default to the correct config for the environment (ie. `<TASK>PPO`).
* `num_envs=NUM_ENVS` - selects the number of environments to use (overriding the default number of environments set in the task config).
* `seed=SEED` - sets a seed value for randomizations, and overrides the default seed set up in the task config
* `sim_device=SIM_DEVICE_TYPE` - Device used for physics simulation. Set to `cuda:0` (default) to use GPU and to `cpu` for CPU. Follows PyTorch-like device syntax.
* `rl_device=RL_DEVICE` - Which device / ID to use for the RL algorithm. Defaults to `cuda:0`, and also follows PyTorch-like device syntax.
* `graphics_device_id=GRAPHICS_DEVICE_ID` - Which Vulkan graphics device ID to use for rendering. Defaults to 0. **Note** - this may be different from CUDA device ID, and does **not** follow PyTorch-like device syntax.
* `pipeline=PIPELINE` - Which API pipeline to use. Defaults to `gpu`, can also set to `cpu`. When using the `gpu` pipeline, all data stays on the GPU and everything runs as fast as possible. When using the `cpu` pipeline, simulation can run on either CPU or GPU, depending on the `sim_device` setting, but a copy of the data is always made on the CPU at every step.
* `test=TEST`- If set to `True`, only runs inference on the policy and does not do any training.
* `checkpoint=CHECKPOINT_PATH` - Set to path to the checkpoint to load for training or testing.
* `headless=HEADLESS` - Whether to run in headless mode.
* `experiment=EXPERIMENT` - Sets the name of the experiment.
* `max_iterations=MAX_ITERATIONS` - Sets how many iterations to run for. Reasonable defaults are provided for the provided environments.
Hydra also allows setting variables inside config files directly as command line arguments. As an example, to set the discount rate for a rl_games training run, you can use `train.params.config.gamma=0.999`. Similarly, variables in task configs can also be set. For example, `task.env.enableDebugVis=True`.
#### Hydra Notes
Default values for each of these are found in the `isaacgymenvs/config/config.yaml` file.
The way that the `task` and `train` portions of the config works are through the use of config groups.
You can learn more about how these work [here](https://hydra.cc/docs/tutorials/structured_config/config_groups/)
The actual configs for `task` are in `isaacgymenvs/config/task/<TASK>.yaml` and for train in `isaacgymenvs/config/train/<TASK>PPO.yaml`.
In some places in the config you will find other variables referenced (for example,
`num_actors: ${....task.env.numEnvs}`). Each `.` represents going one level up in the config hierarchy.
This is documented fully [here](https://omegaconf.readthedocs.io/en/latest/usage.html#variable-interpolation).
## Tasks
Source code for tasks can be found in `isaacgymenvs/tasks`.
Each task subclasses the `VecEnv` base class in `isaacgymenvs/base/vec_task.py`.
Refer to [docs/framework.md](docs/framework.md) for how to create your own tasks.
Full details on each of the tasks available can be found in the [RL examples documentation](docs/rl_examples.md).
## Domain Randomization
IsaacGymEnvs includes a framework for Domain Randomization to improve Sim-to-Real transfer of trained
RL policies. You can read more about it [here](docs/domain_randomization.md).
## Reproducibility and Determinism
If deterministic training of RL policies is important for your work, you may wish to review our [Reproducibility and Determinism Documentation](docs/reproducibility.md).
## Multi-GPU Training
You can run multi-GPU training using `torchrun` (i.e., `torch.distributed`) using this repository.
Here is an example command for how to run in this way -
`torchrun --standalone --nnodes=1 --nproc_per_node=2 train.py multi_gpu=True task=Ant <OTHER_ARGS>`
Where the `--nproc_per_node=` flag specifies how many processes to run and note the `multi_gpu=True` flag must be set on the train script in order for multi-GPU training to run.
## Population Based Training
You can run population based training to help find good hyperparameters or to train on very difficult environments which would otherwise
be hard to learn anything on without it. See [the readme](docs/pbt.md) for details.
## WandB support
You can run [WandB](https://wandb.ai/) with Isaac Gym Envs by setting the `wandb_activate=True` flag from the command line. You can set the group, name, entity, and project for the run via the `wandb_group`, `wandb_name`, `wandb_entity` and `wandb_project` arguments. Make sure you have WandB installed with `pip install wandb` before activating.
## Capture videos
We implement the standard `env.render(mode='rgb_array')` `gym` API to provide an image of the simulator viewer. Additionally, we can leverage `gym.wrappers.RecordVideo` to help record videos that show the agent's gameplay. Consider running the following file which should produce a video in the `videos` folder.
```python
import gym
import isaacgym
import isaacgymenvs
import torch
num_envs = 64
envs = isaacgymenvs.make(
    seed=0,
    task="Ant",
    num_envs=num_envs,
    sim_device="cuda:0",
    rl_device="cuda:0",
    graphics_device_id=0,
    headless=False,
    multi_gpu=False,
    virtual_screen_capture=True,
    force_render=False,
)
envs.is_vector_env = True
envs = gym.wrappers.RecordVideo(
    envs,
    "./videos",
    step_trigger=lambda step: step % 10000 == 0,  # record the videos every 10000 steps
    video_length=100  # for each video record up to 100 steps
)
envs.reset()
print("the image of Isaac Gym viewer is an array of shape", envs.render(mode="rgb_array").shape)
for _ in range(100):
    actions = 2.0 * torch.rand((num_envs,) + envs.action_space.shape, device='cuda:0') - 1.0
    envs.step(actions)
```
## Capture videos during training
You can automatically capture videos of the agent's gameplay by toggling the `capture_video=True` flag, tuning the capture frequency with `capture_video_freq=1500`, and setting the video length via `capture_video_len=100`. You can set `force_render=False` to disable rendering when the videos are not captured.
```
python train.py capture_video=True capture_video_freq=1500 capture_video_len=100 force_render=False
```
You can also automatically upload the videos to Weights and Biases:
```
python train.py task=Ant wandb_activate=True wandb_entity=nvidia wandb_project=rl_games capture_video=True force_render=False
```
## Pre-commit
We use [pre-commit](https://pre-commit.com/) to help us automate short tasks that improve code quality. Before making a commit to the repository, please ensure `pre-commit run --all-files` runs without error.
## Troubleshooting
Please review the Isaac Gym installation instructions first if you run into any issues.
You can either submit issues through GitHub or through the [Isaac Gym forum here](https://forums.developer.nvidia.com/c/agx-autonomous-machines/isaac/isaac-gym/322).
## Citing
Please cite this work as:
```
@misc{makoviychuk2021isaac,
    title={Isaac Gym: High Performance GPU-Based Physics Simulation For Robot Learning},
    author={Viktor Makoviychuk and Lukasz Wawrzyniak and Yunrong Guo and Michelle Lu and Kier Storey and Miles Macklin and David Hoeller and Nikita Rudin and Arthur Allshire and Ankur Handa and Gavriel State},
    year={2021},
    journal={arXiv preprint arXiv:2108.10470}
}
```
**Note** if you use the DexPBT: Scaling up Dexterous Manipulation for Hand-Arm Systems with Population Based Training work or the code related to Population Based Training, please cite the following paper:
```
@inproceedings{petrenko2023dexpbt,
    author = {Aleksei Petrenko, Arthur Allshire, Gavriel State, Ankur Handa, Viktor Makoviychuk},
    title = {DexPBT: Scaling up Dexterous Manipulation for Hand-Arm Systems with Population Based Training},
    booktitle = {RSS},
    year = {2023}
}
```
**Note** if you use the DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to Reality work or the code related to Automatic Domain Randomisation, please cite the following paper:
```
@inproceedings{handa2023dextreme,
    author = {Ankur Handa, Arthur Allshire, Viktor Makoviychuk, Aleksei Petrenko, Ritvik Singh, Jingzhou Liu, Denys Makoviichuk, Karl Van Wyk, Alexander Zhurkevich, Balakumar Sundaralingam, Yashraj Narang, Jean-Francois Lafleche, Dieter Fox, Gavriel State},
    title = {DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to Reality},
    booktitle = {ICRA},
    year = {2023}
}
```
**Note** if you use the ANYmal rough terrain environment in your work, please ensure you cite the following work:
```
@misc{rudin2021learning,
    title={Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning},
    author={Nikita Rudin and David Hoeller and Philipp Reist and Marco Hutter},
    year={2021},
    journal = {arXiv preprint arXiv:2109.11978}
}
```
**Note** if you use the Trifinger environment in your work, please ensure you cite the following work:
```
@misc{isaacgym-trifinger,
    title = {{Transferring Dexterous Manipulation from GPU Simulation to a Remote Real-World TriFinger}},
    author = {Allshire, Arthur and Mittal, Mayank and Lodaya, Varun and Makoviychuk, Viktor and Makoviichuk, Denys and Widmaier, Felix and Wuthrich, Manuel and Bauer, Stefan and Handa, Ankur and Garg, Animesh},
    year = {2021},
    journal = {arXiv preprint arXiv:2108.09779}
}
```
**Note** if you use the AMP: Adversarial Motion Priors environment in your work, please ensure you cite the following work:
```
@article{2021-TOG-AMP,
    author = {Peng, Xue Bin and Ma, Ze and Abbeel, Pieter and Levine, Sergey and Kanazawa, Angjoo},
    title = {AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control},
    journal = {ACM Trans. Graph.},
    issue_date = {August 2021},
    volume = {40},
    number = {4},
    month = jul,
    year = {2021},
    articleno = {1},
    numpages = {15},
    url = {http://doi.acm.org/10.1145/3450626.3459670},
    doi = {10.1145/3450626.3459670},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {motion control, physics-based character animation, reinforcement learning},
}
```
**Note** if you use the Factory simulation methods (e.g., SDF collisions, contact reduction) or Factory learning tools (e.g., assets, environments, or controllers) in your work, please cite the following paper:
```
@inproceedings{narang2022factory,
    author = {Yashraj Narang and Kier Storey and Iretiayo Akinola and Miles Macklin and Philipp Reist and Lukasz Wawrzyniak and Yunrong Guo and Adam Moravanszky and Gavriel State and Michelle Lu and Ankur Handa and Dieter Fox},
    title = {Factory: Fast contact for robotic assembly},
    booktitle = {Robotics: Science and Systems},
    year = {2022}
}
```
**Note** if you use the IndustReal training environments or algorithms in your work, please cite the following paper:
```
@inproceedings{tang2023industreal,
    author = {Bingjie Tang and Michael A Lin and Iretiayo Akinola and Ankur Handa and Gaurav S Sukhatme and Fabio Ramos and Dieter Fox and Yashraj Narang},
    title = {IndustReal: Transferring contact-rich assembly tasks from simulation to reality},
    booktitle = {Robotics: Science and Systems},
    year = {2023}
}
``` | 15,616 | Markdown | 44.135838 | 455 | 0.75698 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/__init__.py | import hydra
from hydra import compose, initialize
from hydra.core.hydra_config import HydraConfig
from omegaconf import DictConfig, OmegaConf
from isaacgymenvs.utils.reformat import omegaconf_to_dict
OmegaConf.register_new_resolver('eq', lambda x, y: x.lower()==y.lower())
OmegaConf.register_new_resolver('contains', lambda x, y: x.lower() in y.lower())
OmegaConf.register_new_resolver('if', lambda pred, a, b: a if pred else b)
OmegaConf.register_new_resolver('resolve_default', lambda default, arg: default if arg=='' else arg)
def make(
    seed: int,
    task: str,
    num_envs: int,
    sim_device: str,
    rl_device: str,
    graphics_device_id: int = -1,
    headless: bool = False,
    multi_gpu: bool = False,
    virtual_screen_capture: bool = False,
    force_render: bool = True,
    cfg: DictConfig = None
):
    from isaacgymenvs.utils.rlgames_utils import get_rlgames_env_creator
    # create hydra config if no config passed in
    if cfg is None:
        # reset current hydra config if already parsed (but not passed in here)
        if HydraConfig.initialized():
            task = HydraConfig.get().runtime.choices['task']
            hydra.core.global_hydra.GlobalHydra.instance().clear()

        with initialize(config_path="./cfg"):
            cfg = compose(config_name="config", overrides=[f"task={task}"])
            cfg_dict = omegaconf_to_dict(cfg.task)
            cfg_dict['env']['numEnvs'] = num_envs
    # reuse existing config
    else:
        cfg_dict = omegaconf_to_dict(cfg.task)

    create_rlgpu_env = get_rlgames_env_creator(
        seed=seed,
        task_config=cfg_dict,
        task_name=cfg_dict["name"],
        sim_device=sim_device,
        rl_device=rl_device,
        graphics_device_id=graphics_device_id,
        headless=headless,
        multi_gpu=multi_gpu,
        virtual_screen_capture=virtual_screen_capture,
        force_render=force_render,
    )
    return create_rlgpu_env()
| 1,953 | Python | 33.892857 | 100 | 0.656938 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/train.py | # train.py
# Script to train policies in Isaac Gym
#
# Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import hydra
from omegaconf import DictConfig, OmegaConf
def preprocess_train_config(cfg, config_dict):
    """
    Adding common configuration parameters to the rl_games train config.
    An alternative to this is inferring them in task-specific .yaml files, but that requires repeating the same
    variable interpolations in each config.
    """

    train_cfg = config_dict['params']['config']

    train_cfg['device'] = cfg.rl_device
    train_cfg['population_based_training'] = cfg.pbt.enabled
    train_cfg['pbt_idx'] = cfg.pbt.policy_idx if cfg.pbt.enabled else None
    train_cfg['full_experiment_name'] = cfg.get('full_experiment_name')

    print(f'Using rl_device: {cfg.rl_device}')
    print(f'Using sim_device: {cfg.sim_device}')
    print(train_cfg)

    try:
        model_size_multiplier = config_dict['params']['network']['mlp']['model_size_multiplier']
        if model_size_multiplier != 1:
            units = config_dict['params']['network']['mlp']['units']
            for i, u in enumerate(units):
                units[i] = u * model_size_multiplier
            print(f'Modified MLP units by x{model_size_multiplier} to {config_dict["params"]["network"]["mlp"]["units"]}')
    except KeyError:
        pass

    return config_dict
@hydra.main(version_base="1.1", config_name="config", config_path="./cfg")
def launch_rlg_hydra(cfg: DictConfig):
    import logging
    import os
    from datetime import datetime

    # noinspection PyUnresolvedReferences
    import isaacgym
    from isaacgymenvs.pbt.pbt import PbtAlgoObserver, initial_pbt_check
    from isaacgymenvs.utils.rlgames_utils import multi_gpu_get_rank
    from hydra.utils import to_absolute_path
    from isaacgymenvs.tasks import isaacgym_task_map
    import gym
    from isaacgymenvs.utils.reformat import omegaconf_to_dict, print_dict
    from isaacgymenvs.utils.utils import set_np_formatting, set_seed

    if cfg.pbt.enabled:
        initial_pbt_check(cfg)

    from isaacgymenvs.utils.rlgames_utils import RLGPUEnv, RLGPUAlgoObserver, MultiObserver, ComplexObsRLGPUEnv
    from isaacgymenvs.utils.wandb_utils import WandbAlgoObserver
    from rl_games.common import env_configurations, vecenv
    from rl_games.torch_runner import Runner
    from rl_games.algos_torch import model_builder
    from isaacgymenvs.learning import amp_continuous
    from isaacgymenvs.learning import amp_players
    from isaacgymenvs.learning import amp_models
    from isaacgymenvs.learning import amp_network_builder
    import isaacgymenvs

    time_str = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")

    # run_name = f"{cfg.wandb_name}_{time_str}"
    run_name = f"{cfg.wandb_name}"

    # ensure checkpoints can be specified as relative paths
    if cfg.checkpoint:
        cfg.checkpoint = to_absolute_path(cfg.checkpoint)

    cfg_dict = omegaconf_to_dict(cfg)
    print_dict(cfg_dict)

    # set numpy formatting for printing only
    set_np_formatting()

    # global rank of the GPU
    global_rank = int(os.getenv("RANK", "0"))

    # sets seed. if seed is -1 will pick a random one
    cfg.seed = set_seed(cfg.seed, torch_deterministic=cfg.torch_deterministic, rank=global_rank)

    def create_isaacgym_env(**kwargs):
        envs = isaacgymenvs.make(
            cfg.seed,
            cfg.task_name,
            cfg.task.env.numEnvs,
            cfg.sim_device,
            cfg.rl_device,
            cfg.graphics_device_id,
            cfg.headless,
            cfg.multi_gpu,
            cfg.capture_video,
            cfg.force_render,
            cfg,
            **kwargs,
        )
        if cfg.capture_video:
            envs.is_vector_env = True
            envs = gym.wrappers.RecordVideo(
                envs,
                f"videos/{run_name}",
                step_trigger=lambda step: step % cfg.capture_video_freq == 0,
                video_length=cfg.capture_video_len,
            )
        return envs

    env_configurations.register('rlgpu', {
        'vecenv_type': 'RLGPU',
        'env_creator': lambda **kwargs: create_isaacgym_env(**kwargs),
    })

    ige_env_cls = isaacgym_task_map[cfg.task_name]
    dict_cls = ige_env_cls.dict_obs_cls if hasattr(ige_env_cls, 'dict_obs_cls') and ige_env_cls.dict_obs_cls else False

    if dict_cls:
        obs_spec = {}
        actor_net_cfg = cfg.train.params.network
        obs_spec['obs'] = {'names': list(actor_net_cfg.inputs.keys()), 'concat': not actor_net_cfg.name == "complex_net", 'space_name': 'observation_space'}
        if "central_value_config" in cfg.train.params.config:
            critic_net_cfg = cfg.train.params.config.central_value_config.network
            obs_spec['states'] = {'names': list(critic_net_cfg.inputs.keys()), 'concat': not critic_net_cfg.name == "complex_net", 'space_name': 'state_space'}

        vecenv.register('RLGPU', lambda config_name, num_actors, **kwargs: ComplexObsRLGPUEnv(config_name, num_actors, obs_spec, **kwargs))
    else:
        vecenv.register('RLGPU', lambda config_name, num_actors, **kwargs: RLGPUEnv(config_name, num_actors, **kwargs))

    rlg_config_dict = omegaconf_to_dict(cfg.train)
    rlg_config_dict = preprocess_train_config(cfg, rlg_config_dict)

    observers = [RLGPUAlgoObserver()]

    if cfg.pbt.enabled:
        pbt_observer = PbtAlgoObserver(cfg)
        observers.append(pbt_observer)

    if cfg.wandb_activate:
        cfg.seed += global_rank
        if global_rank == 0:
            # initialize wandb only once per multi-gpu run
            wandb_observer = WandbAlgoObserver(cfg)
            observers.append(wandb_observer)

    # register new AMP network builder and agent
    def build_runner(algo_observer):
        runner = Runner(algo_observer)
        runner.algo_factory.register_builder('amp_continuous', lambda **kwargs: amp_continuous.AMPAgent(**kwargs))
        runner.player_factory.register_builder('amp_continuous', lambda **kwargs: amp_players.AMPPlayerContinuous(**kwargs))
        model_builder.register_model('continuous_amp', lambda network, **kwargs: amp_models.ModelAMPContinuous(network))
        model_builder.register_network('amp', lambda **kwargs: amp_network_builder.AMPBuilder())

        return runner

    # convert CLI arguments into dictionary
    # create runner and set the settings
    runner = build_runner(MultiObserver(observers))
    runner.load(rlg_config_dict)
    runner.reset()

    # dump config dict
    if not cfg.test:
        experiment_dir = os.path.join('runs', cfg.train.params.config.name +
                                      '_{date:%d-%H-%M-%S}'.format(date=datetime.now()))
        os.makedirs(experiment_dir, exist_ok=True)
        with open(os.path.join(experiment_dir, 'config.yaml'), 'w') as f:
            f.write(OmegaConf.to_yaml(cfg))

    runner.run({
        'train': not cfg.test,
        'play': cfg.test,
        'checkpoint': cfg.checkpoint,
        'sigma': cfg.sigma if cfg.sigma != '' else None
    })


if __name__ == "__main__":
    launch_rlg_hydra()
| 8,641 | Python | 38.104072 | 159 | 0.674459 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/learning/amp_models.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import torch.nn as nn
from rl_games.algos_torch.models import ModelA2CContinuousLogStd
class ModelAMPContinuous(ModelA2CContinuousLogStd):
    def __init__(self, network):
        super().__init__(network)
        return

    def build(self, config):
        net = self.network_builder.build('amp', **config)
        for name, _ in net.named_parameters():
            print(name)

        obs_shape = config['input_shape']
        normalize_value = config.get('normalize_value', False)
        normalize_input = config.get('normalize_input', False)
        value_size = config.get('value_size', 1)
        return self.Network(net, obs_shape=obs_shape,
                            normalize_value=normalize_value, normalize_input=normalize_input, value_size=value_size)

    class Network(ModelA2CContinuousLogStd.Network):
        def __init__(self, a2c_network, **kwargs):
            super().__init__(a2c_network, **kwargs)
            return

        def forward(self, input_dict):
            is_train = input_dict.get('is_train', True)
            result = super().forward(input_dict)

            if (is_train):
                amp_obs = input_dict['amp_obs']
                disc_agent_logit = self.a2c_network.eval_disc(amp_obs)
                result["disc_agent_logit"] = disc_agent_logit

                amp_obs_replay = input_dict['amp_obs_replay']
                disc_agent_replay_logit = self.a2c_network.eval_disc(amp_obs_replay)
                result["disc_agent_replay_logit"] = disc_agent_replay_logit

                amp_demo_obs = input_dict['amp_obs_demo']
                disc_demo_logit = self.a2c_network.eval_disc(amp_demo_obs)
                result["disc_demo_logit"] = disc_demo_logit

            return result
| 3,290 | Python | 43.472972 | 100 | 0.685714 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/learning/hrl_models.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import torch.nn as nn
from rl_games.algos_torch.models import ModelA2CContinuousLogStd
class ModelHRLContinuous(ModelA2CContinuousLogStd):
    def __init__(self, network):
        super().__init__(network)
        return

    def build(self, config):
        net = self.network_builder.build('amp', **config)
        for name, _ in net.named_parameters():
            print(name)
        return ModelHRLContinuous.Network(net)

    class Network(ModelA2CContinuousLogStd.Network):
        def __init__(self, a2c_network):
            super().__init__(a2c_network)
            return
| 2,142 | Python | 45.586956 | 80 | 0.744631 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/learning/amp_datasets.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import torch
from rl_games.common import datasets
class AMPDataset(datasets.PPODataset):
def __init__(self, batch_size, minibatch_size, is_discrete, is_rnn, device, seq_len):
super().__init__(batch_size, minibatch_size, is_discrete, is_rnn, device, seq_len)
self._idx_buf = torch.randperm(batch_size)
return
def update_mu_sigma(self, mu, sigma):
raise NotImplementedError()
return
def _get_item(self, idx):
start = idx * self.minibatch_size
end = (idx + 1) * self.minibatch_size
sample_idx = self._idx_buf[start:end]
input_dict = {}
for k,v in self.values_dict.items():
if k not in self.special_names and v is not None:
input_dict[k] = v[sample_idx]
if (end >= self.batch_size):
self._shuffle_idx_buf()
return input_dict
def _shuffle_idx_buf(self):
self._idx_buf[:] = torch.randperm(self.batch_size)
        return
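
# --- usage sketch (not part of the repository) -------------------------------
# A minimal, hedged illustration of how AMPDataset serves shuffled minibatches;
# the tensor names and sizes are hypothetical, and update_values_dict comes
# from the rl_games PPODataset base class.
if __name__ == '__main__':
    _ds = AMPDataset(batch_size=8, minibatch_size=4, is_discrete=False,
                     is_rnn=False, device='cpu', seq_len=4)
    _ds.update_values_dict({'obs': torch.randn(8, 3), 'amp_obs': torch.randn(8, 6)})
    _mb = _ds[0]     # dict of 4-row slices drawn through the shuffled index buffer
    _mb2 = _ds[1]    # exhausts the buffer, which _get_item then reshuffles
    print(_mb['obs'].shape, _mb2['amp_obs'].shape)  # torch.Size([4, 3]) torch.Size([4, 6])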
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/learning/replay_buffer.py
# Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import torch
class ReplayBuffer():
def __init__(self, buffer_size, device):
self._head = 0
self._total_count = 0
self._buffer_size = buffer_size
self._device = device
self._data_buf = None
self._sample_idx = torch.randperm(buffer_size)
self._sample_head = 0
return
def reset(self):
self._head = 0
self._total_count = 0
self._reset_sample_idx()
return
def get_buffer_size(self):
return self._buffer_size
def get_total_count(self):
return self._total_count
def store(self, data_dict):
if (self._data_buf is None):
self._init_data_buf(data_dict)
n = next(iter(data_dict.values())).shape[0]
buffer_size = self.get_buffer_size()
assert(n < buffer_size)
for key, curr_buf in self._data_buf.items():
curr_n = data_dict[key].shape[0]
assert(n == curr_n)
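            # ring-buffer write: fill up to the end of the buffer, then wrap
            # any remainder around to index 0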
store_n = min(curr_n, buffer_size - self._head)
curr_buf[self._head:(self._head + store_n)] = data_dict[key][:store_n]
remainder = n - store_n
if (remainder > 0):
curr_buf[0:remainder] = data_dict[key][store_n:]
self._head = (self._head + n) % buffer_size
self._total_count += n
return
def sample(self, n):
total_count = self.get_total_count()
buffer_size = self.get_buffer_size()
idx = torch.arange(self._sample_head, self._sample_head + n)
idx = idx % buffer_size
rand_idx = self._sample_idx[idx]
if (total_count < buffer_size):
rand_idx = rand_idx % self._head
samples = dict()
for k, v in self._data_buf.items():
samples[k] = v[rand_idx]
self._sample_head += n
if (self._sample_head >= buffer_size):
self._reset_sample_idx()
return samples
def _reset_sample_idx(self):
buffer_size = self.get_buffer_size()
self._sample_idx[:] = torch.randperm(buffer_size)
self._sample_head = 0
return
def _init_data_buf(self, data_dict):
buffer_size = self.get_buffer_size()
self._data_buf = dict()
for k, v in data_dict.items():
v_shape = v.shape[1:]
self._data_buf[k] = torch.zeros((buffer_size,) + v_shape, device=self._device)
        return
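
# --- usage sketch (not part of the repository) -------------------------------
# Minimal, hedged illustration of the store/sample cycle used for the AMP demo
# and replay buffers; the 'amp_obs' key and sizes are hypothetical.
if __name__ == '__main__':
    _buf = ReplayBuffer(buffer_size=8, device='cpu')
    _buf.store({'amp_obs': torch.randn(4, 6)})   # lazily allocates the backing tensor
    _samples = _buf.sample(2)                    # indices drawn via the shuffled permutation
    print(_samples['amp_obs'].shape)             # torch.Size([2, 6])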
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/learning/amp_network_builder.py
# Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from rl_games.algos_torch import torch_ext
from rl_games.algos_torch import layers
from rl_games.algos_torch import network_builder
import torch
import torch.nn as nn
import numpy as np
DISC_LOGIT_INIT_SCALE = 1.0
class AMPBuilder(network_builder.A2CBuilder):
def __init__(self, **kwargs):
super().__init__(**kwargs)
return
class Network(network_builder.A2CBuilder.Network):
def __init__(self, params, **kwargs):
super().__init__(params, **kwargs)
if self.is_continuous:
if (not self.space_config['learn_sigma']):
actions_num = kwargs.get('actions_num')
sigma_init = self.init_factory.create(**self.space_config['sigma_init'])
self.sigma = nn.Parameter(torch.zeros(actions_num, requires_grad=False, dtype=torch.float32), requires_grad=False)
sigma_init(self.sigma)
amp_input_shape = kwargs.get('amp_input_shape')
self._build_disc(amp_input_shape)
return
def load(self, params):
super().load(params)
self._disc_units = params['disc']['units']
self._disc_activation = params['disc']['activation']
self._disc_initializer = params['disc']['initializer']
return
def eval_critic(self, obs):
c_out = self.critic_cnn(obs)
c_out = c_out.contiguous().view(c_out.size(0), -1)
c_out = self.critic_mlp(c_out)
value = self.value_act(self.value(c_out))
return value
def eval_disc(self, amp_obs):
disc_mlp_out = self._disc_mlp(amp_obs)
disc_logits = self._disc_logits(disc_mlp_out)
return disc_logits
def get_disc_logit_weights(self):
return torch.flatten(self._disc_logits.weight)
def get_disc_weights(self):
weights = []
for m in self._disc_mlp.modules():
if isinstance(m, nn.Linear):
weights.append(torch.flatten(m.weight))
weights.append(torch.flatten(self._disc_logits.weight))
return weights
def _build_disc(self, input_shape):
self._disc_mlp = nn.Sequential()
mlp_args = {
'input_size' : input_shape[0],
'units' : self._disc_units,
'activation' : self._disc_activation,
'dense_func' : torch.nn.Linear
}
self._disc_mlp = self._build_mlp(**mlp_args)
mlp_out_size = self._disc_units[-1]
self._disc_logits = torch.nn.Linear(mlp_out_size, 1)
mlp_init = self.init_factory.create(**self._disc_initializer)
for m in self._disc_mlp.modules():
if isinstance(m, nn.Linear):
mlp_init(m.weight)
if getattr(m, "bias", None) is not None:
torch.nn.init.zeros_(m.bias)
torch.nn.init.uniform_(self._disc_logits.weight, -DISC_LOGIT_INIT_SCALE, DISC_LOGIT_INIT_SCALE)
torch.nn.init.zeros_(self._disc_logits.bias)
return
def build(self, name, **kwargs):
net = AMPBuilder.Network(self.params, **kwargs)
        return net
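
# --- illustration (not part of the repository) --------------------------------
# Hedged sketch of the discriminator head that _build_disc assembles: an MLP
# trunk followed by a single-logit linear layer whose weights are initialised
# uniformly in [-DISC_LOGIT_INIT_SCALE, DISC_LOGIT_INIT_SCALE]. The layer sizes
# and the 210-dim AMP observation are hypothetical.
if __name__ == '__main__':
    _trunk = nn.Sequential(nn.Linear(210, 1024), nn.ReLU(),
                           nn.Linear(1024, 512), nn.ReLU())
    _logits_layer = nn.Linear(512, 1)
    torch.nn.init.uniform_(_logits_layer.weight, -DISC_LOGIT_INIT_SCALE, DISC_LOGIT_INIT_SCALE)
    torch.nn.init.zeros_(_logits_layer.bias)
    _amp_obs = torch.randn(32, 210)
    print(_logits_layer(_trunk(_amp_obs)).shape)  # torch.Size([32, 1]), as eval_disc returns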
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/learning/hrl_continuous.py
# Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import copy
from datetime import datetime
from gym import spaces
import numpy as np
import os
import time
import yaml
from rl_games.algos_torch import torch_ext
from rl_games.algos_torch import central_value
from rl_games.algos_torch.running_mean_std import RunningMeanStd
from rl_games.common import a2c_common
from rl_games.common import datasets
from rl_games.common import schedulers
from rl_games.common import vecenv
import torch
from torch import optim
import isaacgymenvs.learning.common_agent as common_agent
import isaacgymenvs.learning.gen_amp as gen_amp
import isaacgymenvs.learning.gen_amp_models as gen_amp_models
import isaacgymenvs.learning.gen_amp_network_builder as gen_amp_network_builder
from tensorboardX import SummaryWriter
class HRLAgent(common_agent.CommonAgent):
def __init__(self, base_name, config):
with open(os.path.join(os.getcwd(), config['llc_config']), 'r') as f:
llc_config = yaml.load(f, Loader=yaml.SafeLoader)
llc_config_params = llc_config['params']
self._latent_dim = llc_config_params['config']['latent_dim']
super().__init__(base_name, config)
self._task_size = self.vec_env.env.get_task_obs_size()
self._llc_steps = config['llc_steps']
llc_checkpoint = config['llc_checkpoint']
assert(llc_checkpoint != "")
self._build_llc(llc_config_params, llc_checkpoint)
return
def env_step(self, actions):
actions = self.preprocess_actions(actions)
obs = self.obs['obs']
rewards = 0.0
done_count = 0.0
for t in range(self._llc_steps):
llc_actions = self._compute_llc_action(obs, actions)
obs, curr_rewards, curr_dones, infos = self.vec_env.step(llc_actions)
rewards += curr_rewards
done_count += curr_dones
rewards /= self._llc_steps
dones = torch.zeros_like(done_count)
dones[done_count > 0] = 1.0
if self.is_tensor_obses:
if self.value_size == 1:
rewards = rewards.unsqueeze(1)
return self.obs_to_tensors(obs), rewards.to(self.ppo_device), dones.to(self.ppo_device), infos
else:
if self.value_size == 1:
rewards = np.expand_dims(rewards, axis=1)
return self.obs_to_tensors(obs), torch.from_numpy(rewards).to(self.ppo_device).float(), torch.from_numpy(dones).to(self.ppo_device), infos
def cast_obs(self, obs):
obs = super().cast_obs(obs)
self._llc_agent.is_tensor_obses = self.is_tensor_obses
return obs
def preprocess_actions(self, actions):
clamped_actions = torch.clamp(actions, -1.0, 1.0)
if not self.is_tensor_obses:
clamped_actions = clamped_actions.cpu().numpy()
return clamped_actions
def _setup_action_space(self):
super()._setup_action_space()
self.actions_num = self._latent_dim
return
def _build_llc(self, config_params, checkpoint_file):
network_params = config_params['network']
network_builder = gen_amp_network_builder.GenAMPBuilder()
network_builder.load(network_params)
network = gen_amp_models.ModelGenAMPContinuous(network_builder)
llc_agent_config = self._build_llc_agent_config(config_params, network)
self._llc_agent = gen_amp.GenAMPAgent('llc', llc_agent_config)
self._llc_agent.restore(checkpoint_file)
print("Loaded LLC checkpoint from {:s}".format(checkpoint_file))
self._llc_agent.set_eval()
return
def _build_llc_agent_config(self, config_params, network):
llc_env_info = copy.deepcopy(self.env_info)
obs_space = llc_env_info['observation_space']
obs_size = obs_space.shape[0]
obs_size -= self._task_size
llc_env_info['observation_space'] = spaces.Box(obs_space.low[:obs_size], obs_space.high[:obs_size])
config = config_params['config']
config['network'] = network
config['num_actors'] = self.num_actors
config['features'] = {'observer' : self.algo_observer}
config['env_info'] = llc_env_info
return config
def _compute_llc_action(self, obs, actions):
llc_obs = self._extract_llc_obs(obs)
processed_obs = self._llc_agent._preproc_obs(llc_obs)
z = torch.nn.functional.normalize(actions, dim=-1)
mu, _ = self._llc_agent.model.a2c_network.eval_actor(obs=processed_obs, amp_latents=z)
llc_action = mu
llc_action = self._llc_agent.preprocess_actions(llc_action)
return llc_action
def _extract_llc_obs(self, obs):
obs_size = obs.shape[-1]
llc_obs = obs[..., :obs_size - self._task_size]
return llc_obs
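
# --- illustration (not part of the repository) --------------------------------
# HRLAgent.env_step runs the low-level controller for llc_steps simulator ticks
# per high-level action, averaging rewards and marking an environment done if
# any tick terminated. A hedged numeric sketch of that aggregation:
if __name__ == '__main__':
    _llc_steps = 3
    _step_rewards = [torch.tensor([1.0, 0.0]), torch.tensor([0.5, 0.0]), torch.tensor([0.0, 1.0])]
    _step_dones = [torch.tensor([0.0, 0.0]), torch.tensor([1.0, 0.0]), torch.tensor([0.0, 0.0])]
    _rewards = sum(_step_rewards) / _llc_steps   # per-env mean over the macro step
    _done_count = sum(_step_dones)
    _dones = torch.zeros_like(_done_count)
    _dones[_done_count > 0] = 1.0                # done if any sub-step terminated
    print(_rewards, _dones)                      # tensor([0.5000, 0.3333]) tensor([1., 0.])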
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/learning/amp_continuous.py
# Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from rl_games.algos_torch.running_mean_std import RunningMeanStd
from rl_games.algos_torch import torch_ext
from rl_games.common import a2c_common
from rl_games.common import schedulers
from rl_games.common import vecenv
from isaacgymenvs.utils.torch_jit_utils import to_torch
import time
from datetime import datetime
import numpy as np
from torch import optim
import torch
from torch import nn
import isaacgymenvs.learning.replay_buffer as replay_buffer
import isaacgymenvs.learning.common_agent as common_agent
from tensorboardX import SummaryWriter
class AMPAgent(common_agent.CommonAgent):
def __init__(self, base_name, params):
super().__init__(base_name, params)
if self.normalize_value:
self.value_mean_std = self.central_value_net.model.value_mean_std if self.has_central_value else self.model.value_mean_std
if self._normalize_amp_input:
self._amp_input_mean_std = RunningMeanStd(self._amp_observation_space.shape).to(self.ppo_device)
return
def init_tensors(self):
super().init_tensors()
self._build_amp_buffers()
return
def set_eval(self):
super().set_eval()
if self._normalize_amp_input:
self._amp_input_mean_std.eval()
return
def set_train(self):
super().set_train()
if self._normalize_amp_input:
self._amp_input_mean_std.train()
return
def get_stats_weights(self):
state = super().get_stats_weights()
if self._normalize_amp_input:
state['amp_input_mean_std'] = self._amp_input_mean_std.state_dict()
return state
def set_stats_weights(self, weights):
super().set_stats_weights(weights)
if self._normalize_amp_input:
self._amp_input_mean_std.load_state_dict(weights['amp_input_mean_std'])
return
def play_steps(self):
self.set_eval()
epinfos = []
update_list = self.update_list
for n in range(self.horizon_length):
self.obs, done_env_ids = self._env_reset_done()
self.experience_buffer.update_data('obses', n, self.obs['obs'])
if self.use_action_masks:
masks = self.vec_env.get_action_masks()
res_dict = self.get_masked_action_values(self.obs, masks)
else:
res_dict = self.get_action_values(self.obs)
for k in update_list:
self.experience_buffer.update_data(k, n, res_dict[k])
if self.has_central_value:
self.experience_buffer.update_data('states', n, self.obs['states'])
self.obs, rewards, self.dones, infos = self.env_step(res_dict['actions'])
shaped_rewards = self.rewards_shaper(rewards)
self.experience_buffer.update_data('rewards', n, shaped_rewards)
self.experience_buffer.update_data('next_obses', n, self.obs['obs'])
self.experience_buffer.update_data('dones', n, self.dones)
self.experience_buffer.update_data('amp_obs', n, infos['amp_obs'])
terminated = infos['terminate'].float()
terminated = terminated.unsqueeze(-1)
next_vals = self._eval_critic(self.obs)
next_vals *= (1.0 - terminated)
self.experience_buffer.update_data('next_values', n, next_vals)
self.current_rewards += rewards
self.current_lengths += 1
all_done_indices = self.dones.nonzero(as_tuple=False)
done_indices = all_done_indices[::self.num_agents]
self.game_rewards.update(self.current_rewards[done_indices])
self.game_lengths.update(self.current_lengths[done_indices])
self.algo_observer.process_infos(infos, done_indices)
not_dones = 1.0 - self.dones.float()
self.current_rewards = self.current_rewards * not_dones.unsqueeze(1)
self.current_lengths = self.current_lengths * not_dones
if (self.vec_env.env.viewer and (n == (self.horizon_length - 1))):
self._amp_debug(infos)
mb_fdones = self.experience_buffer.tensor_dict['dones'].float()
mb_values = self.experience_buffer.tensor_dict['values']
mb_next_values = self.experience_buffer.tensor_dict['next_values']
mb_rewards = self.experience_buffer.tensor_dict['rewards']
mb_amp_obs = self.experience_buffer.tensor_dict['amp_obs']
amp_rewards = self._calc_amp_rewards(mb_amp_obs)
mb_rewards = self._combine_rewards(mb_rewards, amp_rewards)
mb_advs = self.discount_values(mb_fdones, mb_values, mb_rewards, mb_next_values)
mb_returns = mb_advs + mb_values
batch_dict = self.experience_buffer.get_transformed_list(a2c_common.swap_and_flatten01, self.tensor_list)
batch_dict['returns'] = a2c_common.swap_and_flatten01(mb_returns)
batch_dict['played_frames'] = self.batch_size
for k, v in amp_rewards.items():
batch_dict[k] = a2c_common.swap_and_flatten01(v)
return batch_dict
def prepare_dataset(self, batch_dict):
super().prepare_dataset(batch_dict)
self.dataset.values_dict['amp_obs'] = batch_dict['amp_obs']
self.dataset.values_dict['amp_obs_demo'] = batch_dict['amp_obs_demo']
self.dataset.values_dict['amp_obs_replay'] = batch_dict['amp_obs_replay']
return
def train_epoch(self):
play_time_start = time.time()
with torch.no_grad():
if self.is_rnn:
batch_dict = self.play_steps_rnn()
else:
batch_dict = self.play_steps()
play_time_end = time.time()
update_time_start = time.time()
rnn_masks = batch_dict.get('rnn_masks', None)
self._update_amp_demos()
num_obs_samples = batch_dict['amp_obs'].shape[0]
amp_obs_demo = self._amp_obs_demo_buffer.sample(num_obs_samples)['amp_obs']
batch_dict['amp_obs_demo'] = amp_obs_demo
if (self._amp_replay_buffer.get_total_count() == 0):
batch_dict['amp_obs_replay'] = batch_dict['amp_obs']
else:
batch_dict['amp_obs_replay'] = self._amp_replay_buffer.sample(num_obs_samples)['amp_obs']
self.set_train()
self.curr_frames = batch_dict.pop('played_frames')
self.prepare_dataset(batch_dict)
self.algo_observer.after_steps()
if self.has_central_value:
self.train_central_value()
train_info = None
if self.is_rnn:
frames_mask_ratio = rnn_masks.sum().item() / (rnn_masks.nelement())
print(frames_mask_ratio)
for _ in range(0, self.mini_epochs_num):
ep_kls = []
for i in range(len(self.dataset)):
curr_train_info = self.train_actor_critic(self.dataset[i])
if self.schedule_type == 'legacy':
self.last_lr, self.entropy_coef = self.scheduler.update(self.last_lr, self.entropy_coef, self.epoch_num, 0, curr_train_info['kl'].item())
self.update_lr(self.last_lr)
if (train_info is None):
train_info = dict()
for k, v in curr_train_info.items():
train_info[k] = [v]
else:
for k, v in curr_train_info.items():
train_info[k].append(v)
av_kls = torch_ext.mean_list(train_info['kl'])
if self.schedule_type == 'standard':
self.last_lr, self.entropy_coef = self.scheduler.update(self.last_lr, self.entropy_coef, self.epoch_num, 0, av_kls.item())
self.update_lr(self.last_lr)
if self.schedule_type == 'standard_epoch':
self.last_lr, self.entropy_coef = self.scheduler.update(self.last_lr, self.entropy_coef, self.epoch_num, 0, av_kls.item())
self.update_lr(self.last_lr)
update_time_end = time.time()
play_time = play_time_end - play_time_start
update_time = update_time_end - update_time_start
total_time = update_time_end - play_time_start
self._store_replay_amp_obs(batch_dict['amp_obs'])
train_info['play_time'] = play_time
train_info['update_time'] = update_time
train_info['total_time'] = total_time
self._record_train_batch_info(batch_dict, train_info)
return train_info
def calc_gradients(self, input_dict):
self.set_train()
value_preds_batch = input_dict['old_values']
old_action_log_probs_batch = input_dict['old_logp_actions']
advantage = input_dict['advantages']
old_mu_batch = input_dict['mu']
old_sigma_batch = input_dict['sigma']
return_batch = input_dict['returns']
actions_batch = input_dict['actions']
obs_batch = input_dict['obs']
obs_batch = self._preproc_obs(obs_batch)
amp_obs = input_dict['amp_obs'][0:self._amp_minibatch_size]
amp_obs = self._preproc_amp_obs(amp_obs)
amp_obs_replay = input_dict['amp_obs_replay'][0:self._amp_minibatch_size]
amp_obs_replay = self._preproc_amp_obs(amp_obs_replay)
amp_obs_demo = input_dict['amp_obs_demo'][0:self._amp_minibatch_size]
amp_obs_demo = self._preproc_amp_obs(amp_obs_demo)
amp_obs_demo.requires_grad_(True)
lr = self.last_lr
kl = 1.0
lr_mul = 1.0
curr_e_clip = lr_mul * self.e_clip
batch_dict = {
'is_train': True,
'prev_actions': actions_batch,
'obs' : obs_batch,
'amp_obs' : amp_obs,
'amp_obs_replay' : amp_obs_replay,
'amp_obs_demo' : amp_obs_demo
}
rnn_masks = None
if self.is_rnn:
rnn_masks = input_dict['rnn_masks']
batch_dict['rnn_states'] = input_dict['rnn_states']
batch_dict['seq_length'] = self.seq_len
with torch.cuda.amp.autocast(enabled=self.mixed_precision):
res_dict = self.model(batch_dict)
action_log_probs = res_dict['prev_neglogp']
values = res_dict['values']
entropy = res_dict['entropy']
mu = res_dict['mus']
sigma = res_dict['sigmas']
disc_agent_logit = res_dict['disc_agent_logit']
disc_agent_replay_logit = res_dict['disc_agent_replay_logit']
disc_demo_logit = res_dict['disc_demo_logit']
a_info = self._actor_loss(old_action_log_probs_batch, action_log_probs, advantage, curr_e_clip)
a_loss = a_info['actor_loss']
c_info = self._critic_loss(value_preds_batch, values, curr_e_clip, return_batch, self.clip_value)
c_loss = c_info['critic_loss']
b_loss = self.bound_loss(mu)
losses, sum_mask = torch_ext.apply_masks([a_loss.unsqueeze(1), c_loss, entropy.unsqueeze(1), b_loss.unsqueeze(1)], rnn_masks)
a_loss, c_loss, entropy, b_loss = losses[0], losses[1], losses[2], losses[3]
disc_agent_cat_logit = torch.cat([disc_agent_logit, disc_agent_replay_logit], dim=0)
disc_info = self._disc_loss(disc_agent_cat_logit, disc_demo_logit, amp_obs_demo)
disc_loss = disc_info['disc_loss']
loss = a_loss + self.critic_coef * c_loss - self.entropy_coef * entropy + self.bounds_loss_coef * b_loss \
+ self._disc_coef * disc_loss
if self.multi_gpu:
self.optimizer.zero_grad()
else:
for param in self.model.parameters():
param.grad = None
self.scaler.scale(loss).backward()
#TODO: Refactor this ugliest code of the year
if self.truncate_grads:
if self.multi_gpu:
self.optimizer.synchronize()
self.scaler.unscale_(self.optimizer)
nn.utils.clip_grad_norm_(self.model.parameters(), self.grad_norm)
with self.optimizer.skip_synchronize():
self.scaler.step(self.optimizer)
self.scaler.update()
else:
self.scaler.unscale_(self.optimizer)
nn.utils.clip_grad_norm_(self.model.parameters(), self.grad_norm)
self.scaler.step(self.optimizer)
self.scaler.update()
else:
self.scaler.step(self.optimizer)
self.scaler.update()
with torch.no_grad():
reduce_kl = not self.is_rnn
kl_dist = torch_ext.policy_kl(mu.detach(), sigma.detach(), old_mu_batch, old_sigma_batch, reduce_kl)
if self.is_rnn:
kl_dist = (kl_dist * rnn_masks).sum() / rnn_masks.numel() #/ sum_mask
self.train_result = {
'entropy': entropy,
'kl': kl_dist,
'last_lr': self.last_lr,
'lr_mul': lr_mul,
'b_loss': b_loss
}
self.train_result.update(a_info)
self.train_result.update(c_info)
self.train_result.update(disc_info)
return
def _load_config_params(self, config):
super()._load_config_params(config)
self._task_reward_w = config['task_reward_w']
self._disc_reward_w = config['disc_reward_w']
self._amp_observation_space = self.env_info['amp_observation_space']
self._amp_batch_size = int(config['amp_batch_size'])
self._amp_minibatch_size = int(config['amp_minibatch_size'])
assert(self._amp_minibatch_size <= self.minibatch_size)
self._disc_coef = config['disc_coef']
self._disc_logit_reg = config['disc_logit_reg']
self._disc_grad_penalty = config['disc_grad_penalty']
self._disc_weight_decay = config['disc_weight_decay']
self._disc_reward_scale = config['disc_reward_scale']
self._normalize_amp_input = config.get('normalize_amp_input', True)
return
def _build_net_config(self):
config = super()._build_net_config()
config['amp_input_shape'] = self._amp_observation_space.shape
return config
def _init_train(self):
super()._init_train()
self._init_amp_demo_buf()
return
def _disc_loss(self, disc_agent_logit, disc_demo_logit, obs_demo):
# prediction loss
disc_loss_agent = self._disc_loss_neg(disc_agent_logit)
disc_loss_demo = self._disc_loss_pos(disc_demo_logit)
disc_loss = 0.5 * (disc_loss_agent + disc_loss_demo)
# logit reg
logit_weights = self.model.a2c_network.get_disc_logit_weights()
disc_logit_loss = torch.sum(torch.square(logit_weights))
disc_loss += self._disc_logit_reg * disc_logit_loss
# grad penalty
disc_demo_grad = torch.autograd.grad(disc_demo_logit, obs_demo, grad_outputs=torch.ones_like(disc_demo_logit),
create_graph=True, retain_graph=True, only_inputs=True)
disc_demo_grad = disc_demo_grad[0]
disc_demo_grad = torch.sum(torch.square(disc_demo_grad), dim=-1)
disc_grad_penalty = torch.mean(disc_demo_grad)
disc_loss += self._disc_grad_penalty * disc_grad_penalty
# weight decay
if (self._disc_weight_decay != 0):
disc_weights = self.model.a2c_network.get_disc_weights()
disc_weights = torch.cat(disc_weights, dim=-1)
disc_weight_decay = torch.sum(torch.square(disc_weights))
disc_loss += self._disc_weight_decay * disc_weight_decay
disc_agent_acc, disc_demo_acc = self._compute_disc_acc(disc_agent_logit, disc_demo_logit)
disc_info = {
'disc_loss': disc_loss,
'disc_grad_penalty': disc_grad_penalty,
'disc_logit_loss': disc_logit_loss,
'disc_agent_acc': disc_agent_acc,
'disc_demo_acc': disc_demo_acc,
'disc_agent_logit': disc_agent_logit,
'disc_demo_logit': disc_demo_logit
}
return disc_info
def _disc_loss_neg(self, disc_logits):
bce = torch.nn.BCEWithLogitsLoss()
loss = bce(disc_logits, torch.zeros_like(disc_logits))
return loss
def _disc_loss_pos(self, disc_logits):
bce = torch.nn.BCEWithLogitsLoss()
loss = bce(disc_logits, torch.ones_like(disc_logits))
return loss
def _compute_disc_acc(self, disc_agent_logit, disc_demo_logit):
agent_acc = disc_agent_logit < 0
agent_acc = torch.mean(agent_acc.float())
demo_acc = disc_demo_logit > 0
demo_acc = torch.mean(demo_acc.float())
return agent_acc, demo_acc
def _fetch_amp_obs_demo(self, num_samples):
amp_obs_demo = self.vec_env.env.fetch_amp_obs_demo(num_samples)
return amp_obs_demo
def _build_amp_buffers(self):
batch_shape = self.experience_buffer.obs_base_shape
self.experience_buffer.tensor_dict['amp_obs'] = torch.zeros(batch_shape + self._amp_observation_space.shape,
device=self.ppo_device)
amp_obs_demo_buffer_size = int(self.config['amp_obs_demo_buffer_size'])
self._amp_obs_demo_buffer = replay_buffer.ReplayBuffer(amp_obs_demo_buffer_size, self.ppo_device)
self._amp_replay_keep_prob = self.config['amp_replay_keep_prob']
replay_buffer_size = int(self.config['amp_replay_buffer_size'])
self._amp_replay_buffer = replay_buffer.ReplayBuffer(replay_buffer_size, self.ppo_device)
self.tensor_list += ['amp_obs']
return
def _init_amp_demo_buf(self):
buffer_size = self._amp_obs_demo_buffer.get_buffer_size()
num_batches = int(np.ceil(buffer_size / self._amp_batch_size))
for i in range(num_batches):
curr_samples = self._fetch_amp_obs_demo(self._amp_batch_size)
self._amp_obs_demo_buffer.store({'amp_obs': curr_samples})
return
def _update_amp_demos(self):
new_amp_obs_demo = self._fetch_amp_obs_demo(self._amp_batch_size)
self._amp_obs_demo_buffer.store({'amp_obs': new_amp_obs_demo})
return
def _preproc_amp_obs(self, amp_obs):
if self._normalize_amp_input:
amp_obs = self._amp_input_mean_std(amp_obs)
return amp_obs
def _combine_rewards(self, task_rewards, amp_rewards):
disc_r = amp_rewards['disc_rewards']
        combined_rewards = self._task_reward_w * task_rewards \
                           + self._disc_reward_w * disc_r
return combined_rewards
def _eval_disc(self, amp_obs):
proc_amp_obs = self._preproc_amp_obs(amp_obs)
return self.model.a2c_network.eval_disc(proc_amp_obs)
def _calc_amp_rewards(self, amp_obs):
disc_r = self._calc_disc_rewards(amp_obs)
output = {
'disc_rewards': disc_r
}
return output
def _calc_disc_rewards(self, amp_obs):
with torch.no_grad():
disc_logits = self._eval_disc(amp_obs)
prob = 1 / (1 + torch.exp(-disc_logits))
disc_r = -torch.log(torch.maximum(1 - prob, torch.tensor(0.0001, device=self.ppo_device)))
disc_r *= self._disc_reward_scale
return disc_r
def _store_replay_amp_obs(self, amp_obs):
buf_size = self._amp_replay_buffer.get_buffer_size()
buf_total_count = self._amp_replay_buffer.get_total_count()
if (buf_total_count > buf_size):
keep_probs = to_torch(np.array([self._amp_replay_keep_prob] * amp_obs.shape[0]), device=self.ppo_device)
keep_mask = torch.bernoulli(keep_probs) == 1.0
amp_obs = amp_obs[keep_mask]
self._amp_replay_buffer.store({'amp_obs': amp_obs})
return
def _record_train_batch_info(self, batch_dict, train_info):
train_info['disc_rewards'] = batch_dict['disc_rewards']
return
def _log_train_info(self, train_info, frame):
super()._log_train_info(train_info, frame)
self.writer.add_scalar('losses/disc_loss', torch_ext.mean_list(train_info['disc_loss']).item(), frame)
self.writer.add_scalar('info/disc_agent_acc', torch_ext.mean_list(train_info['disc_agent_acc']).item(), frame)
self.writer.add_scalar('info/disc_demo_acc', torch_ext.mean_list(train_info['disc_demo_acc']).item(), frame)
self.writer.add_scalar('info/disc_agent_logit', torch_ext.mean_list(train_info['disc_agent_logit']).item(), frame)
self.writer.add_scalar('info/disc_demo_logit', torch_ext.mean_list(train_info['disc_demo_logit']).item(), frame)
self.writer.add_scalar('info/disc_grad_penalty', torch_ext.mean_list(train_info['disc_grad_penalty']).item(), frame)
self.writer.add_scalar('info/disc_logit_loss', torch_ext.mean_list(train_info['disc_logit_loss']).item(), frame)
disc_reward_std, disc_reward_mean = torch.std_mean(train_info['disc_rewards'])
self.writer.add_scalar('info/disc_reward_mean', disc_reward_mean.item(), frame)
self.writer.add_scalar('info/disc_reward_std', disc_reward_std.item(), frame)
return
def _amp_debug(self, info):
with torch.no_grad():
amp_obs = info['amp_obs']
amp_obs = amp_obs[0:1]
disc_pred = self._eval_disc(amp_obs)
amp_rewards = self._calc_amp_rewards(amp_obs)
disc_reward = amp_rewards['disc_rewards']
disc_pred = disc_pred.detach().cpu().numpy()[0, 0]
disc_reward = disc_reward.cpu().numpy()[0, 0]
print("disc_pred: ", disc_pred, disc_reward)
        return
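
# --- illustration (not part of the repository) --------------------------------
# The style reward in _calc_disc_rewards maps a discriminator logit d to
# r = -log(max(1 - sigmoid(d), 1e-4)) * disc_reward_scale, so observations the
# discriminator scores as "demo-like" earn larger rewards. Hedged numeric check:
if __name__ == '__main__':
    _logits = torch.tensor([-2.0, 0.0, 2.0])
    _prob = 1.0 / (1.0 + torch.exp(-_logits))
    _disc_r = -torch.log(torch.maximum(1 - _prob, torch.tensor(0.0001)))
    print(_disc_r)   # ~tensor([0.1269, 0.6931, 2.1269])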
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/learning/amp_players.py
# Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import torch
from rl_games.algos_torch import torch_ext
from rl_games.algos_torch.running_mean_std import RunningMeanStd
from rl_games.common.player import BasePlayer
import isaacgymenvs.learning.common_player as common_player
class AMPPlayerContinuous(common_player.CommonPlayer):
def __init__(self, params):
config = params['config']
self._normalize_amp_input = config.get('normalize_amp_input', True)
self._disc_reward_scale = config['disc_reward_scale']
self._print_disc_prediction = config.get('print_disc_prediction', False)
super().__init__(params)
return
def restore(self, fn):
super().restore(fn)
if self._normalize_amp_input:
checkpoint = torch_ext.load_checkpoint(fn)
self._amp_input_mean_std.load_state_dict(checkpoint['amp_input_mean_std'])
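            # the running stats under 'amp_input_mean_std' are written by
            # AMPAgent.get_stats_weights during training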
return
def _build_net(self, config):
super()._build_net(config)
if self._normalize_amp_input:
self._amp_input_mean_std = RunningMeanStd(config['amp_input_shape']).to(self.device)
self._amp_input_mean_std.eval()
return
def _post_step(self, info):
super()._post_step(info)
if self._print_disc_prediction:
self._amp_debug(info)
return
def _build_net_config(self):
config = super()._build_net_config()
if (hasattr(self, 'env')):
config['amp_input_shape'] = self.env.amp_observation_space.shape
else:
config['amp_input_shape'] = self.env_info['amp_observation_space']
return config
def _amp_debug(self, info):
with torch.no_grad():
amp_obs = info['amp_obs']
amp_obs = amp_obs[0:1]
disc_pred = self._eval_disc(amp_obs.to(self.device))
amp_rewards = self._calc_amp_rewards(amp_obs.to(self.device))
disc_reward = amp_rewards['disc_rewards']
disc_pred = disc_pred.detach().cpu().numpy()[0, 0]
disc_reward = disc_reward.cpu().numpy()[0, 0]
print("disc_pred: ", disc_pred, disc_reward)
return
def _preproc_amp_obs(self, amp_obs):
if self._normalize_amp_input:
amp_obs = self._amp_input_mean_std(amp_obs)
return amp_obs
def _eval_disc(self, amp_obs):
proc_amp_obs = self._preproc_amp_obs(amp_obs)
return self.model.a2c_network.eval_disc(proc_amp_obs)
def _calc_amp_rewards(self, amp_obs):
disc_r = self._calc_disc_rewards(amp_obs)
output = {
'disc_rewards': disc_r
}
return output
def _calc_disc_rewards(self, amp_obs):
with torch.no_grad():
disc_logits = self._eval_disc(amp_obs)
prob = 1.0 / (1.0 + torch.exp(-disc_logits))
disc_r = -torch.log(torch.maximum(1 - prob, torch.tensor(0.0001, device=self.device)))
disc_r *= self._disc_reward_scale
return disc_r
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/learning/common_agent.py
# Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import copy
from datetime import datetime
from gym import spaces
import numpy as np
import os
import time
import yaml
from rl_games.algos_torch import a2c_continuous
from rl_games.algos_torch import torch_ext
from rl_games.algos_torch import central_value
from rl_games.algos_torch.running_mean_std import RunningMeanStd
from rl_games.common import a2c_common
from rl_games.common import datasets
from rl_games.common import schedulers
from rl_games.common import vecenv
import torch
from torch import nn
from torch import optim
from . import amp_datasets as amp_datasets
from tensorboardX import SummaryWriter
class CommonAgent(a2c_continuous.A2CAgent):
def __init__(self, base_name, params):
a2c_common.A2CBase.__init__(self, base_name, params)
config = params['config']
self._load_config_params(config)
self.is_discrete = False
self._setup_action_space()
self.bounds_loss_coef = config.get('bounds_loss_coef', None)
self.clip_actions = config.get('clip_actions', True)
self.network_path = self.nn_dir
net_config = self._build_net_config()
self.model = self.network.build(net_config)
self.model.to(self.ppo_device)
self.states = None
self.init_rnn_from_model(self.model)
self.last_lr = float(self.last_lr)
self.optimizer = optim.Adam(self.model.parameters(), float(self.last_lr), eps=1e-08, weight_decay=self.weight_decay)
if self.has_central_value:
cv_config = {
'state_shape' : torch_ext.shape_whc_to_cwh(self.state_shape),
'value_size' : self.value_size,
'ppo_device' : self.ppo_device,
'num_agents' : self.num_agents,
'num_steps' : self.horizon_length,
'num_actors' : self.num_actors,
'num_actions' : self.actions_num,
'seq_len' : self.seq_len,
'model' : self.central_value_config['network'],
'config' : self.central_value_config,
'writter' : self.writer,
'multi_gpu' : self.multi_gpu
}
self.central_value_net = central_value.CentralValueTrain(**cv_config).to(self.ppo_device)
self.use_experimental_cv = self.config.get('use_experimental_cv', True)
self.dataset = amp_datasets.AMPDataset(self.batch_size, self.minibatch_size, self.is_discrete, self.is_rnn, self.ppo_device, self.seq_len)
self.algo_observer.after_init(self)
return
def init_tensors(self):
super().init_tensors()
self.experience_buffer.tensor_dict['next_obses'] = torch.zeros_like(self.experience_buffer.tensor_dict['obses'])
self.experience_buffer.tensor_dict['next_values'] = torch.zeros_like(self.experience_buffer.tensor_dict['values'])
self.tensor_list += ['next_obses']
return
def train(self):
self.init_tensors()
self.last_mean_rewards = -100500
start_time = time.time()
total_time = 0
rep_count = 0
self.frame = 0
self.obs = self.env_reset()
self.curr_frames = self.batch_size_envs
self.model_output_file = os.path.join(self.network_path,
self.config['name'] + '_{date:%d-%H-%M-%S}'.format(date=datetime.now()))
self._init_train()
# global rank of the GPU
# multi-gpu training is not currently supported for AMP
self.global_rank = int(os.getenv("RANK", "0"))
while True:
epoch_num = self.update_epoch()
train_info = self.train_epoch()
sum_time = train_info['total_time']
total_time += sum_time
frame = self.frame
if self.global_rank == 0:
scaled_time = sum_time
scaled_play_time = train_info['play_time']
curr_frames = self.curr_frames
self.frame += curr_frames
if self.print_stats:
fps_step = curr_frames / scaled_play_time
fps_total = curr_frames / scaled_time
print(f'fps step: {fps_step:.1f} fps total: {fps_total:.1f}')
self.writer.add_scalar('performance/total_fps', curr_frames / scaled_time, frame)
self.writer.add_scalar('performance/step_fps', curr_frames / scaled_play_time, frame)
self.writer.add_scalar('info/epochs', epoch_num, frame)
self._log_train_info(train_info, frame)
self.algo_observer.after_print_stats(frame, epoch_num, total_time)
if self.game_rewards.current_size > 0:
mean_rewards = self.game_rewards.get_mean()
mean_lengths = self.game_lengths.get_mean()
for i in range(self.value_size):
                        self.writer.add_scalar('rewards{0}/frame'.format(i), mean_rewards[i], frame)
                        self.writer.add_scalar('rewards{0}/iter'.format(i), mean_rewards[i], epoch_num)
                        self.writer.add_scalar('rewards{0}/time'.format(i), mean_rewards[i], total_time)
self.writer.add_scalar('episode_lengths/frame', mean_lengths, frame)
self.writer.add_scalar('episode_lengths/iter', mean_lengths, epoch_num)
if self.has_self_play_config:
self.self_play_manager.update(self)
if self.save_freq > 0:
if (epoch_num % self.save_freq == 0):
self.save(self.model_output_file + "_" + str(epoch_num))
if epoch_num > self.max_epochs:
self.save(self.model_output_file)
print('MAX EPOCHS NUM!')
return self.last_mean_rewards, epoch_num
update_time = 0
return
def train_epoch(self):
play_time_start = time.time()
with torch.no_grad():
if self.is_rnn:
batch_dict = self.play_steps_rnn()
else:
batch_dict = self.play_steps()
play_time_end = time.time()
update_time_start = time.time()
rnn_masks = batch_dict.get('rnn_masks', None)
self.set_train()
self.curr_frames = batch_dict.pop('played_frames')
self.prepare_dataset(batch_dict)
self.algo_observer.after_steps()
if self.has_central_value:
self.train_central_value()
train_info = None
if self.is_rnn:
frames_mask_ratio = rnn_masks.sum().item() / (rnn_masks.nelement())
print(frames_mask_ratio)
for _ in range(0, self.mini_epochs_num):
ep_kls = []
for i in range(len(self.dataset)):
curr_train_info = self.train_actor_critic(self.dataset[i])
if self.schedule_type == 'legacy':
self.last_lr, self.entropy_coef = self.scheduler.update(self.last_lr, self.entropy_coef, self.epoch_num, 0, curr_train_info['kl'].item())
self.update_lr(self.last_lr)
if (train_info is None):
train_info = dict()
for k, v in curr_train_info.items():
train_info[k] = [v]
else:
for k, v in curr_train_info.items():
train_info[k].append(v)
av_kls = torch_ext.mean_list(train_info['kl'])
if self.schedule_type == 'standard':
self.last_lr, self.entropy_coef = self.scheduler.update(self.last_lr, self.entropy_coef, self.epoch_num, 0, av_kls.item())
self.update_lr(self.last_lr)
if self.schedule_type == 'standard_epoch':
self.last_lr, self.entropy_coef = self.scheduler.update(self.last_lr, self.entropy_coef, self.epoch_num, 0, av_kls.item())
self.update_lr(self.last_lr)
update_time_end = time.time()
play_time = play_time_end - play_time_start
update_time = update_time_end - update_time_start
total_time = update_time_end - play_time_start
train_info['play_time'] = play_time
train_info['update_time'] = update_time
train_info['total_time'] = total_time
self._record_train_batch_info(batch_dict, train_info)
return train_info
def play_steps(self):
self.set_eval()
epinfos = []
update_list = self.update_list
for n in range(self.horizon_length):
self.obs, done_env_ids = self._env_reset_done()
self.experience_buffer.update_data('obses', n, self.obs['obs'])
if self.use_action_masks:
masks = self.vec_env.get_action_masks()
res_dict = self.get_masked_action_values(self.obs, masks)
else:
res_dict = self.get_action_values(self.obs)
for k in update_list:
self.experience_buffer.update_data(k, n, res_dict[k])
if self.has_central_value:
self.experience_buffer.update_data('states', n, self.obs['states'])
self.obs, rewards, self.dones, infos = self.env_step(res_dict['actions'])
shaped_rewards = self.rewards_shaper(rewards)
self.experience_buffer.update_data('rewards', n, shaped_rewards)
self.experience_buffer.update_data('next_obses', n, self.obs['obs'])
self.experience_buffer.update_data('dones', n, self.dones)
terminated = infos['terminate'].float()
terminated = terminated.unsqueeze(-1)
next_vals = self._eval_critic(self.obs)
next_vals *= (1.0 - terminated)
self.experience_buffer.update_data('next_values', n, next_vals)
self.current_rewards += rewards
self.current_lengths += 1
all_done_indices = self.dones.nonzero(as_tuple=False)
done_indices = all_done_indices[::self.num_agents]
self.game_rewards.update(self.current_rewards[done_indices])
self.game_lengths.update(self.current_lengths[done_indices])
self.algo_observer.process_infos(infos, done_indices)
not_dones = 1.0 - self.dones.float()
self.current_rewards = self.current_rewards * not_dones.unsqueeze(1)
self.current_lengths = self.current_lengths * not_dones
mb_fdones = self.experience_buffer.tensor_dict['dones'].float()
mb_values = self.experience_buffer.tensor_dict['values']
mb_next_values = self.experience_buffer.tensor_dict['next_values']
mb_rewards = self.experience_buffer.tensor_dict['rewards']
mb_advs = self.discount_values(mb_fdones, mb_values, mb_rewards, mb_next_values)
mb_returns = mb_advs + mb_values
batch_dict = self.experience_buffer.get_transformed_list(a2c_common.swap_and_flatten01, self.tensor_list)
batch_dict['returns'] = a2c_common.swap_and_flatten01(mb_returns)
batch_dict['played_frames'] = self.batch_size
return batch_dict
def calc_gradients(self, input_dict):
self.set_train()
value_preds_batch = input_dict['old_values']
old_action_log_probs_batch = input_dict['old_logp_actions']
advantage = input_dict['advantages']
old_mu_batch = input_dict['mu']
old_sigma_batch = input_dict['sigma']
return_batch = input_dict['returns']
actions_batch = input_dict['actions']
obs_batch = input_dict['obs']
obs_batch = self._preproc_obs(obs_batch)
lr = self.last_lr
kl = 1.0
lr_mul = 1.0
curr_e_clip = lr_mul * self.e_clip
batch_dict = {
'is_train': True,
'prev_actions': actions_batch,
'obs' : obs_batch
}
rnn_masks = None
if self.is_rnn:
rnn_masks = input_dict['rnn_masks']
batch_dict['rnn_states'] = input_dict['rnn_states']
batch_dict['seq_length'] = self.seq_len
with torch.cuda.amp.autocast(enabled=self.mixed_precision):
res_dict = self.model(batch_dict)
action_log_probs = res_dict['prev_neglogp']
            values = res_dict['values']
            entropy = res_dict['entropy']
            mu = res_dict['mus']
            sigma = res_dict['sigmas']
a_info = self._actor_loss(old_action_log_probs_batch, action_log_probs, advantage, curr_e_clip)
a_loss = a_info['actor_loss']
c_info = self._critic_loss(value_preds_batch, values, curr_e_clip, return_batch, self.clip_value)
c_loss = c_info['critic_loss']
b_loss = self.bound_loss(mu)
losses, sum_mask = torch_ext.apply_masks([a_loss.unsqueeze(1), c_loss, entropy.unsqueeze(1), b_loss.unsqueeze(1)], rnn_masks)
a_loss, c_loss, entropy, b_loss = losses[0], losses[1], losses[2], losses[3]
loss = a_loss + self.critic_coef * c_loss - self.entropy_coef * entropy + self.bounds_loss_coef * b_loss
if self.multi_gpu:
self.optimizer.zero_grad()
else:
for param in self.model.parameters():
param.grad = None
self.scaler.scale(loss).backward()
#TODO: Refactor this ugliest code of the year
if self.truncate_grads:
if self.multi_gpu:
self.optimizer.synchronize()
self.scaler.unscale_(self.optimizer)
nn.utils.clip_grad_norm_(self.model.parameters(), self.grad_norm)
with self.optimizer.skip_synchronize():
self.scaler.step(self.optimizer)
self.scaler.update()
else:
self.scaler.unscale_(self.optimizer)
nn.utils.clip_grad_norm_(self.model.parameters(), self.grad_norm)
self.scaler.step(self.optimizer)
self.scaler.update()
else:
self.scaler.step(self.optimizer)
self.scaler.update()
with torch.no_grad():
reduce_kl = not self.is_rnn
kl_dist = torch_ext.policy_kl(mu.detach(), sigma.detach(), old_mu_batch, old_sigma_batch, reduce_kl)
if self.is_rnn:
kl_dist = (kl_dist * rnn_masks).sum() / rnn_masks.numel() #/ sum_mask
self.train_result = {
'entropy': entropy,
'kl': kl_dist,
'last_lr': self.last_lr,
'lr_mul': lr_mul,
'b_loss': b_loss
}
self.train_result.update(a_info)
self.train_result.update(c_info)
return
def discount_values(self, mb_fdones, mb_values, mb_rewards, mb_next_values):
lastgaelam = 0
mb_advs = torch.zeros_like(mb_rewards)
for t in reversed(range(self.horizon_length)):
not_done = 1.0 - mb_fdones[t]
not_done = not_done.unsqueeze(1)
delta = mb_rewards[t] + self.gamma * mb_next_values[t] - mb_values[t]
lastgaelam = delta + self.gamma * self.tau * not_done * lastgaelam
mb_advs[t] = lastgaelam
return mb_advs
def bound_loss(self, mu):
if self.bounds_loss_coef is not None:
soft_bound = 1.0
mu_loss_high = torch.maximum(mu - soft_bound, torch.tensor(0, device=self.ppo_device))**2
mu_loss_low = torch.minimum(mu + soft_bound, torch.tensor(0, device=self.ppo_device))**2
b_loss = (mu_loss_low + mu_loss_high).sum(axis=-1)
else:
b_loss = 0
return b_loss
def _load_config_params(self, config):
self.last_lr = config['learning_rate']
return
def _build_net_config(self):
obs_shape = torch_ext.shape_whc_to_cwh(self.obs_shape)
config = {
'actions_num' : self.actions_num,
'input_shape' : obs_shape,
'num_seqs' : self.num_actors * self.num_agents,
'value_size': self.env_info.get('value_size', 1),
'normalize_value' : self.normalize_value,
'normalize_input': self.normalize_input,
}
return config
def _setup_action_space(self):
action_space = self.env_info['action_space']
self.actions_num = action_space.shape[0]
# todo introduce device instead of cuda()
self.actions_low = torch.from_numpy(action_space.low.copy()).float().to(self.ppo_device)
self.actions_high = torch.from_numpy(action_space.high.copy()).float().to(self.ppo_device)
return
def _init_train(self):
return
def _env_reset_done(self):
obs, done_env_ids = self.vec_env.reset_done()
return self.obs_to_tensors(obs), done_env_ids
def _eval_critic(self, obs_dict):
self.model.eval()
obs = obs_dict['obs']
processed_obs = self._preproc_obs(obs)
if self.normalize_input:
processed_obs = self.model.norm_obs(processed_obs)
value = self.model.a2c_network.eval_critic(processed_obs)
if self.normalize_value:
value = self.value_mean_std(value, True)
return value
def _actor_loss(self, old_action_log_probs_batch, action_log_probs, advantage, curr_e_clip):
clip_frac = None
if (self.ppo):
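            # rl_games stores negative log-probabilities, so the importance
            # ratio pi_new/pi_old is exp(old_neglogp - new_neglogp)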
ratio = torch.exp(old_action_log_probs_batch - action_log_probs)
surr1 = advantage * ratio
surr2 = advantage * torch.clamp(ratio, 1.0 - curr_e_clip,
1.0 + curr_e_clip)
a_loss = torch.max(-surr1, -surr2)
clipped = torch.abs(ratio - 1.0) > curr_e_clip
clip_frac = torch.mean(clipped.float())
clip_frac = clip_frac.detach()
else:
a_loss = (action_log_probs * advantage)
info = {
'actor_loss': a_loss,
'actor_clip_frac': clip_frac
}
return info
def _critic_loss(self, value_preds_batch, values, curr_e_clip, return_batch, clip_value):
if clip_value:
value_pred_clipped = value_preds_batch + \
(values - value_preds_batch).clamp(-curr_e_clip, curr_e_clip)
value_losses = (values - return_batch)**2
value_losses_clipped = (value_pred_clipped - return_batch)**2
c_loss = torch.max(value_losses, value_losses_clipped)
else:
c_loss = (return_batch - values)**2
info = {
'critic_loss': c_loss
}
return info
def _record_train_batch_info(self, batch_dict, train_info):
return
def _log_train_info(self, train_info, frame):
self.writer.add_scalar('performance/update_time', train_info['update_time'], frame)
self.writer.add_scalar('performance/play_time', train_info['play_time'], frame)
self.writer.add_scalar('losses/a_loss', torch_ext.mean_list(train_info['actor_loss']).item(), frame)
self.writer.add_scalar('losses/c_loss', torch_ext.mean_list(train_info['critic_loss']).item(), frame)
self.writer.add_scalar('losses/bounds_loss', torch_ext.mean_list(train_info['b_loss']).item(), frame)
self.writer.add_scalar('losses/entropy', torch_ext.mean_list(train_info['entropy']).item(), frame)
self.writer.add_scalar('info/last_lr', train_info['last_lr'][-1] * train_info['lr_mul'][-1], frame)
self.writer.add_scalar('info/lr_mul', train_info['lr_mul'][-1], frame)
self.writer.add_scalar('info/e_clip', self.e_clip * train_info['lr_mul'][-1], frame)
self.writer.add_scalar('info/clip_frac', torch_ext.mean_list(train_info['actor_clip_frac']).item(), frame)
self.writer.add_scalar('info/kl', torch_ext.mean_list(train_info['kl']).item(), frame)
return
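
# --- illustration (not part of the repository) --------------------------------
# CommonAgent.discount_values implements standard GAE(lambda): working
# backwards, delta_t = r_t + gamma * V(s_{t+1}) - V(s_t) and
# A_t = delta_t + gamma * tau * (1 - done_t) * A_{t+1}. Hedged minimal check
# with horizon 3, one environment and value_size 1:
if __name__ == '__main__':
    _gamma, _tau = 0.99, 0.95
    _rewards = torch.tensor([[[1.0]], [[1.0]], [[1.0]]])
    _values = torch.tensor([[[0.5]], [[0.5]], [[0.5]]])
    _next_values = torch.tensor([[[0.5]], [[0.5]], [[0.0]]])
    _fdones = torch.tensor([[0.0], [0.0], [1.0]])
    _advs, _lastgaelam = torch.zeros_like(_rewards), 0
    for _t in reversed(range(3)):
        _not_done = (1.0 - _fdones[_t]).unsqueeze(1)
        _delta = _rewards[_t] + _gamma * _next_values[_t] - _values[_t]
        _lastgaelam = _delta + _gamma * _tau * _not_done * _lastgaelam
        _advs[_t] = _lastgaelam
    print(_advs.squeeze())   # per-step advantages, e.g. tensor([2.3731, 1.4652, 0.5000])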
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/learning/common_player.py
# Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import time
import torch
from rl_games.algos_torch import players
from rl_games.algos_torch import torch_ext
from rl_games.algos_torch.running_mean_std import RunningMeanStd
from rl_games.common.player import BasePlayer
class CommonPlayer(players.PpoPlayerContinuous):
def __init__(self, params):
BasePlayer.__init__(self, params)
self.network = self.config['network']
self.normalize_input = self.config['normalize_input']
self.normalize_value = self.config['normalize_value']
self._setup_action_space()
self.mask = [False]
net_config = self._build_net_config()
self._build_net(net_config)
return
def run(self):
n_games = self.games_num
render = self.render_env
n_game_life = self.n_game_life
        is_deterministic = self.is_deterministic
sum_rewards = 0
sum_steps = 0
sum_game_res = 0
n_games = n_games * n_game_life
games_played = 0
has_masks = False
has_masks_func = getattr(self.env, "has_action_mask", None) is not None
op_agent = getattr(self.env, "create_agent", None)
if op_agent:
agent_inited = True
if has_masks_func:
has_masks = self.env.has_action_mask()
need_init_rnn = self.is_rnn
for _ in range(n_games):
if games_played >= n_games:
break
obs_dict = self.env_reset(self.env)
batch_size = 1
batch_size = self.get_batch_size(obs_dict['obs'], batch_size)
if need_init_rnn:
self.init_rnn()
need_init_rnn = False
cr = torch.zeros(batch_size, dtype=torch.float32)
steps = torch.zeros(batch_size, dtype=torch.float32)
print_game_res = False
for n in range(self.max_steps):
obs_dict, done_env_ids = self._env_reset_done()
if has_masks:
masks = self.env.get_action_mask()
                    action = self.get_masked_action(obs_dict, masks, is_deterministic)
else:
                    action = self.get_action(obs_dict, is_deterministic)
obs_dict, r, done, info = self.env_step(self.env, action)
cr += r
steps += 1
self._post_step(info)
if render:
self.env.render(mode = 'human')
time.sleep(self.render_sleep)
all_done_indices = done.nonzero(as_tuple=False)
done_indices = all_done_indices[::self.num_agents]
done_count = len(done_indices)
games_played += done_count
if done_count > 0:
if self.is_rnn:
for s in self.states:
s[:,all_done_indices,:] = s[:,all_done_indices,:] * 0.0
cur_rewards = cr[done_indices].sum().item()
cur_steps = steps[done_indices].sum().item()
cr = cr * (1.0 - done.float())
steps = steps * (1.0 - done.float())
sum_rewards += cur_rewards
sum_steps += cur_steps
game_res = 0.0
if isinstance(info, dict):
if 'battle_won' in info:
print_game_res = True
game_res = info.get('battle_won', 0.5)
if 'scores' in info:
print_game_res = True
game_res = info.get('scores', 0.5)
if self.print_stats:
if print_game_res:
print('reward:', cur_rewards/done_count, 'steps:', cur_steps/done_count, 'w:', game_res)
else:
print('reward:', cur_rewards/done_count, 'steps:', cur_steps/done_count)
sum_game_res += game_res
if batch_size//self.num_agents == 1 or games_played >= n_games:
break
print(sum_rewards)
if print_game_res:
print('av reward:', sum_rewards / games_played * n_game_life, 'av steps:', sum_steps / games_played * n_game_life, 'winrate:', sum_game_res / games_played * n_game_life)
else:
print('av reward:', sum_rewards / games_played * n_game_life, 'av steps:', sum_steps / games_played * n_game_life)
return
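    # Minimal usage sketch (assumptions: `params` is an rl_games-style config
    # dict such as the YAML train configs in this repo, and the environment is
    # already registered with rl_games):
    #
    #   player = CommonPlayer(params)
    #   player.restore(checkpoint_path)  # checkpoint loading inherited from BasePlayer
    #   player.run()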
def obs_to_torch(self, obs):
obs = super().obs_to_torch(obs)
obs_dict = {
'obs': obs
}
return obs_dict
    def get_action(self, obs_dict, is_deterministic = False):
        output = super().get_action(obs_dict['obs'], is_deterministic)
return output
def _build_net(self, config):
self.model = self.network.build(config)
self.model.to(self.device)
self.model.eval()
self.is_rnn = self.model.is_rnn()
return
def _env_reset_done(self):
obs, done_env_ids = self.env.reset_done()
return self.obs_to_torch(obs), done_env_ids
def _post_step(self, info):
return
def _build_net_config(self):
obs_shape = torch_ext.shape_whc_to_cwh(self.obs_shape)
config = {
'actions_num' : self.actions_num,
'input_shape' : obs_shape,
'num_seqs' : self.num_agents,
'value_size': self.env_info.get('value_size', 1),
'normalize_value': self.normalize_value,
'normalize_input': self.normalize_input,
}
return config
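    # `shape_whc_to_cwh` converts image observations from (H, W, C) to (C, H, W)
    # so convolutional models receive channel-first inputs; flat vector
    # observation shapes pass through unchanged.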
def _setup_action_space(self):
self.actions_num = self.action_space.shape[0]
self.actions_low = torch.from_numpy(self.action_space.low.copy()).float().to(self.device)
self.actions_high = torch.from_numpy(self.action_space.high.copy()).float().to(self.device)
return | 7,570 | Python | 37.627551 | 181 | 0.571731 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/allegro_hand.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import numpy as np
import os
import torch
from isaacgym import gymtorch
from isaacgym import gymapi
from isaacgymenvs.utils.torch_jit_utils import scale, unscale, quat_mul, quat_conjugate, quat_apply, quat_from_angle_axis, \
    to_torch, get_axis_params, torch_rand_float, tensor_clamp
from isaacgymenvs.tasks.base.vec_task import VecTask
class AllegroHand(VecTask):
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.cfg = cfg
self.aggregate_mode = self.cfg["env"]["aggregateMode"]
self.dist_reward_scale = self.cfg["env"]["distRewardScale"]
self.rot_reward_scale = self.cfg["env"]["rotRewardScale"]
self.action_penalty_scale = self.cfg["env"]["actionPenaltyScale"]
self.success_tolerance = self.cfg["env"]["successTolerance"]
self.reach_goal_bonus = self.cfg["env"]["reachGoalBonus"]
self.fall_dist = self.cfg["env"]["fallDistance"]
self.fall_penalty = self.cfg["env"]["fallPenalty"]
self.rot_eps = self.cfg["env"]["rotEps"]
self.vel_obs_scale = 0.2 # scale factor of velocity based observations
        self.force_torque_obs_scale = 10.0  # scale factor of force and torque based observations
self.reset_position_noise = self.cfg["env"]["resetPositionNoise"]
self.reset_rotation_noise = self.cfg["env"]["resetRotationNoise"]
self.reset_dof_pos_noise = self.cfg["env"]["resetDofPosRandomInterval"]
self.reset_dof_vel_noise = self.cfg["env"]["resetDofVelRandomInterval"]
self.force_scale = self.cfg["env"].get("forceScale", 0.0)
self.force_prob_range = self.cfg["env"].get("forceProbRange", [0.001, 0.1])
self.force_decay = self.cfg["env"].get("forceDecay", 0.99)
self.force_decay_interval = self.cfg["env"].get("forceDecayInterval", 0.08)
self.shadow_hand_dof_speed_scale = self.cfg["env"]["dofSpeedScale"]
self.use_relative_control = self.cfg["env"]["useRelativeControl"]
self.act_moving_average = self.cfg["env"]["actionsMovingAverage"]
self.debug_viz = self.cfg["env"]["enableDebugVis"]
self.max_episode_length = self.cfg["env"]["episodeLength"]
self.reset_time = self.cfg["env"].get("resetTime", -1.0)
self.print_success_stat = self.cfg["env"]["printNumSuccesses"]
self.max_consecutive_successes = self.cfg["env"]["maxConsecutiveSuccesses"]
self.av_factor = self.cfg["env"].get("averFactor", 0.1)
self.object_type = self.cfg["env"]["objectType"]
assert self.object_type in ["block", "egg", "pen"]
self.ignore_z = (self.object_type == "pen")
self.asset_files_dict = {
"block": "urdf/objects/cube_multicolor.urdf",
"egg": "mjcf/open_ai_assets/hand/egg.xml",
"pen": "mjcf/open_ai_assets/hand/pen.xml"
}
if "asset" in self.cfg["env"]:
self.asset_files_dict["block"] = self.cfg["env"]["asset"].get("assetFileNameBlock", self.asset_files_dict["block"])
self.asset_files_dict["egg"] = self.cfg["env"]["asset"].get("assetFileNameEgg", self.asset_files_dict["egg"])
self.asset_files_dict["pen"] = self.cfg["env"]["asset"].get("assetFileNamePen", self.asset_files_dict["pen"])
# can be "full_no_vel", "full", "full_state"
self.obs_type = self.cfg["env"]["observationType"]
if not (self.obs_type in ["full_no_vel", "full", "full_state"]):
raise Exception(
"Unknown type of observations!\nobservationType should be one of: [openai, full_no_vel, full, full_state]")
print("Obs type:", self.obs_type)
self.num_obs_dict = {
"full_no_vel": 50,
"full": 72,
"full_state": 88
}
self.up_axis = 'z'
self.use_vel_obs = False
self.fingertip_obs = True
self.asymmetric_obs = self.cfg["env"]["asymmetric_observations"]
num_states = 0
if self.asymmetric_obs:
num_states = 88
self.cfg["env"]["numObservations"] = self.num_obs_dict[self.obs_type]
self.cfg["env"]["numStates"] = num_states
self.cfg["env"]["numActions"] = 16
super().__init__(config=self.cfg, rl_device=rl_device, sim_device=sim_device, graphics_device_id=graphics_device_id, headless=headless, virtual_screen_capture=virtual_screen_capture, force_render=force_render)
self.dt = self.sim_params.dt
control_freq_inv = self.cfg["env"].get("controlFrequencyInv", 1)
if self.reset_time > 0.0:
self.max_episode_length = int(round(self.reset_time/(control_freq_inv * self.dt)))
print("Reset time: ", self.reset_time)
print("New episode length: ", self.max_episode_length)
if self.viewer != None:
cam_pos = gymapi.Vec3(10.0, 5.0, 1.0)
cam_target = gymapi.Vec3(6.0, 5.0, 0.0)
self.gym.viewer_camera_look_at(self.viewer, None, cam_pos, cam_target)
# get gym GPU state tensors
actor_root_state_tensor = self.gym.acquire_actor_root_state_tensor(self.sim)
dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
rigid_body_tensor = self.gym.acquire_rigid_body_state_tensor(self.sim)
if self.obs_type == "full_state" or self.asymmetric_obs:
# sensor_tensor = self.gym.acquire_force_sensor_tensor(self.sim)
# self.vec_sensor_tensor = gymtorch.wrap_tensor(sensor_tensor).view(self.num_envs, self.num_fingertips * 6)
dof_force_tensor = self.gym.acquire_dof_force_tensor(self.sim)
self.dof_force_tensor = gymtorch.wrap_tensor(dof_force_tensor).view(self.num_envs, self.num_shadow_hand_dofs)
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_rigid_body_state_tensor(self.sim)
# create some wrapper tensors for different slices
self.shadow_hand_default_dof_pos = torch.zeros(self.num_shadow_hand_dofs, dtype=torch.float, device=self.device)
self.dof_state = gymtorch.wrap_tensor(dof_state_tensor)
self.shadow_hand_dof_state = self.dof_state.view(self.num_envs, -1, 2)[:, :self.num_shadow_hand_dofs]
self.shadow_hand_dof_pos = self.shadow_hand_dof_state[..., 0]
self.shadow_hand_dof_vel = self.shadow_hand_dof_state[..., 1]
self.rigid_body_states = gymtorch.wrap_tensor(rigid_body_tensor).view(self.num_envs, -1, 13)
self.num_bodies = self.rigid_body_states.shape[1]
self.root_state_tensor = gymtorch.wrap_tensor(actor_root_state_tensor).view(-1, 13)
self.num_dofs = self.gym.get_sim_dof_count(self.sim) // self.num_envs
print("Num dofs: ", self.num_dofs)
self.prev_targets = torch.zeros((self.num_envs, self.num_dofs), dtype=torch.float, device=self.device)
self.cur_targets = torch.zeros((self.num_envs, self.num_dofs), dtype=torch.float, device=self.device)
self.global_indices = torch.arange(self.num_envs * 3, dtype=torch.int32, device=self.device).view(self.num_envs, -1)
self.x_unit_tensor = to_torch([1, 0, 0], dtype=torch.float, device=self.device).repeat((self.num_envs, 1))
self.y_unit_tensor = to_torch([0, 1, 0], dtype=torch.float, device=self.device).repeat((self.num_envs, 1))
self.z_unit_tensor = to_torch([0, 0, 1], dtype=torch.float, device=self.device).repeat((self.num_envs, 1))
self.reset_goal_buf = self.reset_buf.clone()
self.successes = torch.zeros(self.num_envs, dtype=torch.float, device=self.device)
self.consecutive_successes = torch.zeros(1, dtype=torch.float, device=self.device)
self.av_factor = to_torch(self.av_factor, dtype=torch.float, device=self.device)
self.total_successes = 0
self.total_resets = 0
# object apply random forces parameters
self.force_decay = to_torch(self.force_decay, dtype=torch.float, device=self.device)
self.force_prob_range = to_torch(self.force_prob_range, dtype=torch.float, device=self.device)
self.random_force_prob = torch.exp((torch.log(self.force_prob_range[0]) - torch.log(self.force_prob_range[1]))
* torch.rand(self.num_envs, device=self.device) + torch.log(self.force_prob_range[1]))
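        # log-uniform sampling: per-env push probabilities span the whole
        # `force_prob_range` across orders of magnitude instead of clustering
        # around the arithmetic mean of the interval.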
self.rb_forces = torch.zeros((self.num_envs, self.num_bodies, 3), dtype=torch.float, device=self.device)
def create_sim(self):
self.dt = self.sim_params.dt
self.up_axis_idx = 2 # index of up axis: Y=1, Z=2
self.sim = super().create_sim(self.device_id, self.graphics_device_id, self.physics_engine, self.sim_params)
self._create_ground_plane()
self._create_envs(self.num_envs, self.cfg["env"]['envSpacing'], int(np.sqrt(self.num_envs)))
def _create_ground_plane(self):
plane_params = gymapi.PlaneParams()
plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)
self.gym.add_ground(self.sim, plane_params)
def _create_envs(self, num_envs, spacing, num_per_row):
lower = gymapi.Vec3(-spacing, -spacing, 0.0)
upper = gymapi.Vec3(spacing, spacing, spacing)
asset_root = os.path.join(os.path.dirname(os.path.abspath(__file__)), '../../assets')
allegro_hand_asset_file = "urdf/kuka_allegro_description/allegro.urdf"
if "asset" in self.cfg["env"]:
asset_root = self.cfg["env"]["asset"].get("assetRoot", asset_root)
allegro_hand_asset_file = self.cfg["env"]["asset"].get("assetFileName", allegro_hand_asset_file)
object_asset_file = self.asset_files_dict[self.object_type]
# load shadow hand_ asset
asset_options = gymapi.AssetOptions()
asset_options.flip_visual_attachments = False
asset_options.fix_base_link = True
asset_options.collapse_fixed_joints = True
asset_options.disable_gravity = True
asset_options.thickness = 0.001
asset_options.angular_damping = 0.01
if self.physics_engine == gymapi.SIM_PHYSX:
asset_options.use_physx_armature = True
asset_options.default_dof_drive_mode = gymapi.DOF_MODE_POS
allegro_hand_asset = self.gym.load_asset(self.sim, asset_root, allegro_hand_asset_file, asset_options)
self.num_shadow_hand_bodies = self.gym.get_asset_rigid_body_count(allegro_hand_asset)
self.num_shadow_hand_shapes = self.gym.get_asset_rigid_shape_count(allegro_hand_asset)
self.num_shadow_hand_dofs = self.gym.get_asset_dof_count(allegro_hand_asset)
print("Num dofs: ", self.num_shadow_hand_dofs)
self.num_shadow_hand_actuators = self.num_shadow_hand_dofs
self.actuated_dof_indices = [i for i in range(self.num_shadow_hand_dofs)]
# set shadow_hand dof properties
shadow_hand_dof_props = self.gym.get_asset_dof_properties(allegro_hand_asset)
self.shadow_hand_dof_lower_limits = []
self.shadow_hand_dof_upper_limits = []
self.shadow_hand_dof_default_pos = []
self.shadow_hand_dof_default_vel = []
self.sensors = []
sensor_pose = gymapi.Transform()
for i in range(self.num_shadow_hand_dofs):
self.shadow_hand_dof_lower_limits.append(shadow_hand_dof_props['lower'][i])
self.shadow_hand_dof_upper_limits.append(shadow_hand_dof_props['upper'][i])
self.shadow_hand_dof_default_pos.append(0.0)
self.shadow_hand_dof_default_vel.append(0.0)
print("Max effort: ", shadow_hand_dof_props['effort'][i])
shadow_hand_dof_props['effort'][i] = 0.5
shadow_hand_dof_props['stiffness'][i] = 3
shadow_hand_dof_props['damping'][i] = 0.1
shadow_hand_dof_props['friction'][i] = 0.01
shadow_hand_dof_props['armature'][i] = 0.001
self.actuated_dof_indices = to_torch(self.actuated_dof_indices, dtype=torch.long, device=self.device)
self.shadow_hand_dof_lower_limits = to_torch(self.shadow_hand_dof_lower_limits, device=self.device)
self.shadow_hand_dof_upper_limits = to_torch(self.shadow_hand_dof_upper_limits, device=self.device)
self.shadow_hand_dof_default_pos = to_torch(self.shadow_hand_dof_default_pos, device=self.device)
self.shadow_hand_dof_default_vel = to_torch(self.shadow_hand_dof_default_vel, device=self.device)
# load manipulated object and goal assets
object_asset_options = gymapi.AssetOptions()
object_asset = self.gym.load_asset(self.sim, asset_root, object_asset_file, object_asset_options)
object_asset_options.disable_gravity = True
goal_asset = self.gym.load_asset(self.sim, asset_root, object_asset_file, object_asset_options)
shadow_hand_start_pose = gymapi.Transform()
shadow_hand_start_pose.p = gymapi.Vec3(*get_axis_params(0.5, self.up_axis_idx))
shadow_hand_start_pose.r = gymapi.Quat.from_axis_angle(gymapi.Vec3(0, 1, 0), np.pi) * gymapi.Quat.from_axis_angle(gymapi.Vec3(1, 0, 0), 0.47 * np.pi) * gymapi.Quat.from_axis_angle(gymapi.Vec3(0, 0, 1), 0.25 * np.pi)
object_start_pose = gymapi.Transform()
object_start_pose.p = gymapi.Vec3()
object_start_pose.p.x = shadow_hand_start_pose.p.x
pose_dy, pose_dz = -0.2, 0.06
object_start_pose.p.y = shadow_hand_start_pose.p.y + pose_dy
object_start_pose.p.z = shadow_hand_start_pose.p.z + pose_dz
if self.object_type == "pen":
object_start_pose.p.z = shadow_hand_start_pose.p.z + 0.02
self.goal_displacement = gymapi.Vec3(-0.2, -0.06, 0.12)
self.goal_displacement_tensor = to_torch(
[self.goal_displacement.x, self.goal_displacement.y, self.goal_displacement.z], device=self.device)
goal_start_pose = gymapi.Transform()
goal_start_pose.p = object_start_pose.p + self.goal_displacement
goal_start_pose.p.z -= 0.04
# compute aggregate size
max_agg_bodies = self.num_shadow_hand_bodies + 2
max_agg_shapes = self.num_shadow_hand_shapes + 2
self.allegro_hands = []
self.envs = []
self.object_init_state = []
self.hand_start_states = []
self.hand_indices = []
self.fingertip_indices = []
self.object_indices = []
self.goal_object_indices = []
shadow_hand_rb_count = self.gym.get_asset_rigid_body_count(allegro_hand_asset)
object_rb_count = self.gym.get_asset_rigid_body_count(object_asset)
self.object_rb_handles = list(range(shadow_hand_rb_count, shadow_hand_rb_count + object_rb_count))
for i in range(self.num_envs):
# create env instance
env_ptr = self.gym.create_env(
self.sim, lower, upper, num_per_row
)
if self.aggregate_mode >= 1:
self.gym.begin_aggregate(env_ptr, max_agg_bodies, max_agg_shapes, True)
# add hand - collision filter = -1 to use asset collision filters set in mjcf loader
allegro_hand_actor = self.gym.create_actor(env_ptr, allegro_hand_asset, shadow_hand_start_pose, "hand", i, -1, 0)
self.hand_start_states.append([shadow_hand_start_pose.p.x, shadow_hand_start_pose.p.y, shadow_hand_start_pose.p.z,
shadow_hand_start_pose.r.x, shadow_hand_start_pose.r.y, shadow_hand_start_pose.r.z, shadow_hand_start_pose.r.w,
0, 0, 0, 0, 0, 0])
self.gym.set_actor_dof_properties(env_ptr, allegro_hand_actor, shadow_hand_dof_props)
hand_idx = self.gym.get_actor_index(env_ptr, allegro_hand_actor, gymapi.DOMAIN_SIM)
self.hand_indices.append(hand_idx)
# add object
object_handle = self.gym.create_actor(env_ptr, object_asset, object_start_pose, "object", i, 0, 0)
self.object_init_state.append([object_start_pose.p.x, object_start_pose.p.y, object_start_pose.p.z,
object_start_pose.r.x, object_start_pose.r.y, object_start_pose.r.z, object_start_pose.r.w,
0, 0, 0, 0, 0, 0])
object_idx = self.gym.get_actor_index(env_ptr, object_handle, gymapi.DOMAIN_SIM)
self.object_indices.append(object_idx)
# add goal object
goal_handle = self.gym.create_actor(env_ptr, goal_asset, goal_start_pose, "goal_object", i + self.num_envs, 0, 0)
goal_object_idx = self.gym.get_actor_index(env_ptr, goal_handle, gymapi.DOMAIN_SIM)
self.goal_object_indices.append(goal_object_idx)
if self.object_type != "block":
self.gym.set_rigid_body_color(
env_ptr, object_handle, 0, gymapi.MESH_VISUAL, gymapi.Vec3(0.6, 0.72, 0.98))
self.gym.set_rigid_body_color(
env_ptr, goal_handle, 0, gymapi.MESH_VISUAL, gymapi.Vec3(0.6, 0.72, 0.98))
if self.aggregate_mode > 0:
self.gym.end_aggregate(env_ptr)
self.envs.append(env_ptr)
self.allegro_hands.append(allegro_hand_actor)
object_rb_props = self.gym.get_actor_rigid_body_properties(env_ptr, object_handle)
self.object_rb_masses = [prop.mass for prop in object_rb_props]
self.object_init_state = to_torch(self.object_init_state, device=self.device, dtype=torch.float).view(self.num_envs, 13)
self.goal_states = self.object_init_state.clone()
self.goal_states[:, self.up_axis_idx] -= 0.04
self.goal_init_state = self.goal_states.clone()
self.hand_start_states = to_torch(self.hand_start_states, device=self.device).view(self.num_envs, 13)
self.object_rb_handles = to_torch(self.object_rb_handles, dtype=torch.long, device=self.device)
self.object_rb_masses = to_torch(self.object_rb_masses, dtype=torch.float, device=self.device)
self.hand_indices = to_torch(self.hand_indices, dtype=torch.long, device=self.device)
self.object_indices = to_torch(self.object_indices, dtype=torch.long, device=self.device)
self.goal_object_indices = to_torch(self.goal_object_indices, dtype=torch.long, device=self.device)
def compute_reward(self, actions):
self.rew_buf[:], self.reset_buf[:], self.reset_goal_buf[:], self.progress_buf[:], self.successes[:], self.consecutive_successes[:] = compute_hand_reward(
self.rew_buf, self.reset_buf, self.reset_goal_buf, self.progress_buf, self.successes, self.consecutive_successes,
self.max_episode_length, self.object_pos, self.object_rot, self.goal_pos, self.goal_rot,
self.dist_reward_scale, self.rot_reward_scale, self.rot_eps, self.actions, self.action_penalty_scale,
self.success_tolerance, self.reach_goal_bonus, self.fall_dist, self.fall_penalty,
self.max_consecutive_successes, self.av_factor, (self.object_type == "pen")
)
self.extras['consecutive_successes'] = self.consecutive_successes.mean()
if self.print_success_stat:
self.total_resets = self.total_resets + self.reset_buf.sum()
direct_average_successes = self.total_successes + self.successes.sum()
self.total_successes = self.total_successes + (self.successes * self.reset_buf).sum()
# The direct average shows the overall result more quickly, but slightly undershoots long term
# policy performance.
print("Direct average consecutive successes = {:.1f}".format(direct_average_successes/(self.total_resets + self.num_envs)))
if self.total_resets > 0:
print("Post-Reset average consecutive successes = {:.1f}".format(self.total_successes/self.total_resets))
def compute_observations(self):
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_rigid_body_state_tensor(self.sim)
if self.obs_type == "full_state" or self.asymmetric_obs:
self.gym.refresh_force_sensor_tensor(self.sim)
self.gym.refresh_dof_force_tensor(self.sim)
self.object_pose = self.root_state_tensor[self.object_indices, 0:7]
self.object_pos = self.root_state_tensor[self.object_indices, 0:3]
self.object_rot = self.root_state_tensor[self.object_indices, 3:7]
self.object_linvel = self.root_state_tensor[self.object_indices, 7:10]
self.object_angvel = self.root_state_tensor[self.object_indices, 10:13]
self.goal_pose = self.goal_states[:, 0:7]
self.goal_pos = self.goal_states[:, 0:3]
self.goal_rot = self.goal_states[:, 3:7]
if self.obs_type == "full_no_vel":
self.compute_full_observations(True)
elif self.obs_type == "full":
self.compute_full_observations()
elif self.obs_type == "full_state":
self.compute_full_state()
else:
print("Unknown observations type!")
if self.asymmetric_obs:
self.compute_full_state(True)
def compute_full_observations(self, no_vel=False):
if no_vel:
self.obs_buf[:, 0:self.num_shadow_hand_dofs] = unscale(self.shadow_hand_dof_pos,
self.shadow_hand_dof_lower_limits, self.shadow_hand_dof_upper_limits)
self.obs_buf[:, 16:23] = self.object_pose
self.obs_buf[:, 23:30] = self.goal_pose
self.obs_buf[:, 30:34] = quat_mul(self.object_rot, quat_conjugate(self.goal_rot))
self.obs_buf[:, 34:50] = self.actions
else:
self.obs_buf[:, 0:self.num_shadow_hand_dofs] = unscale(self.shadow_hand_dof_pos,
self.shadow_hand_dof_lower_limits, self.shadow_hand_dof_upper_limits)
self.obs_buf[:, self.num_shadow_hand_dofs:2*self.num_shadow_hand_dofs] = self.vel_obs_scale * self.shadow_hand_dof_vel
            # joint positions and velocities occupy the first 2 * 16 = 32 entries
self.obs_buf[:, 32:39] = self.object_pose
self.obs_buf[:, 39:42] = self.object_linvel
self.obs_buf[:, 42:45] = self.vel_obs_scale * self.object_angvel
self.obs_buf[:, 45:52] = self.goal_pose
self.obs_buf[:, 52:56] = quat_mul(self.object_rot, quat_conjugate(self.goal_rot))
self.obs_buf[:, 56:72] = self.actions
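    # Full (72-dim) observation layout: 0:16 joint positions scaled to [-1, 1],
    # 16:32 scaled joint velocities, 32:39 object pose, 39:42 object linear
    # velocity, 42:45 scaled object angular velocity, 45:52 goal pose, 52:56
    # object-to-goal relative quaternion, 56:72 previous actions.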
def compute_full_state(self, asymm_obs=False):
if asymm_obs:
self.states_buf[:, 0:self.num_shadow_hand_dofs] = unscale(self.shadow_hand_dof_pos,
self.shadow_hand_dof_lower_limits, self.shadow_hand_dof_upper_limits)
self.states_buf[:, self.num_shadow_hand_dofs:2*self.num_shadow_hand_dofs] = self.vel_obs_scale * self.shadow_hand_dof_vel
self.states_buf[:, 2*self.num_shadow_hand_dofs:3*self.num_shadow_hand_dofs] = self.force_torque_obs_scale * self.dof_force_tensor
obj_obs_start = 3*self.num_shadow_hand_dofs # 48
self.states_buf[:, obj_obs_start:obj_obs_start + 7] = self.object_pose
self.states_buf[:, obj_obs_start + 7:obj_obs_start + 10] = self.object_linvel
self.states_buf[:, obj_obs_start + 10:obj_obs_start + 13] = self.vel_obs_scale * self.object_angvel
goal_obs_start = obj_obs_start + 13 # 61
self.states_buf[:, goal_obs_start:goal_obs_start + 7] = self.goal_pose
self.states_buf[:, goal_obs_start + 7:goal_obs_start + 11] = quat_mul(self.object_rot, quat_conjugate(self.goal_rot))
fingertip_obs_start = goal_obs_start + 11 # 72
            # no fingertip state / force-torque terms for the Allegro hand, so obs_end stays at 72
# obs_total = obs_end + num_actions = 72 + 16 = 88
obs_end = fingertip_obs_start
self.states_buf[:, obs_end:obs_end + self.num_actions] = self.actions
else:
self.obs_buf[:, 0:self.num_shadow_hand_dofs] = unscale(self.shadow_hand_dof_pos,
self.shadow_hand_dof_lower_limits, self.shadow_hand_dof_upper_limits)
self.obs_buf[:, self.num_shadow_hand_dofs:2*self.num_shadow_hand_dofs] = self.vel_obs_scale * self.shadow_hand_dof_vel
self.obs_buf[:, 2*self.num_shadow_hand_dofs:3*self.num_shadow_hand_dofs] = self.force_torque_obs_scale * self.dof_force_tensor
obj_obs_start = 3*self.num_shadow_hand_dofs # 48
self.obs_buf[:, obj_obs_start:obj_obs_start + 7] = self.object_pose
self.obs_buf[:, obj_obs_start + 7:obj_obs_start + 10] = self.object_linvel
self.obs_buf[:, obj_obs_start + 10:obj_obs_start + 13] = self.vel_obs_scale * self.object_angvel
goal_obs_start = obj_obs_start + 13 # 61
self.obs_buf[:, goal_obs_start:goal_obs_start + 7] = self.goal_pose
self.obs_buf[:, goal_obs_start + 7:goal_obs_start + 11] = quat_mul(self.object_rot, quat_conjugate(self.goal_rot))
fingertip_obs_start = goal_obs_start + 11 # 72
            # no fingertip state / force-torque terms for the Allegro hand, so obs_end stays at 72
# obs_total = obs_end + num_actions = 72 + 16 = 88
obs_end = fingertip_obs_start #+ num_ft_states + num_ft_force_torques
self.obs_buf[:, obs_end:obs_end + self.num_actions] = self.actions
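    # Full-state (88-dim) layout: 0:16 joint positions, 16:32 scaled joint
    # velocities, 32:48 scaled joint-torque readings, 48:61 object pose and
    # velocities, 61:72 goal pose plus relative quaternion, 72:88 previous actions.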
def reset_target_pose(self, env_ids, apply_reset=False):
rand_floats = torch_rand_float(-1.0, 1.0, (len(env_ids), 4), device=self.device)
new_rot = randomize_rotation(rand_floats[:, 0], rand_floats[:, 1], self.x_unit_tensor[env_ids], self.y_unit_tensor[env_ids])
self.goal_states[env_ids, 0:3] = self.goal_init_state[env_ids, 0:3]
self.goal_states[env_ids, 3:7] = new_rot
self.root_state_tensor[self.goal_object_indices[env_ids], 0:3] = self.goal_states[env_ids, 0:3] + self.goal_displacement_tensor
self.root_state_tensor[self.goal_object_indices[env_ids], 3:7] = self.goal_states[env_ids, 3:7]
self.root_state_tensor[self.goal_object_indices[env_ids], 7:13] = torch.zeros_like(self.root_state_tensor[self.goal_object_indices[env_ids], 7:13])
if apply_reset:
goal_object_indices = self.goal_object_indices[env_ids].to(torch.int32)
self.gym.set_actor_root_state_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.root_state_tensor),
gymtorch.unwrap_tensor(goal_object_indices), len(env_ids))
self.reset_goal_buf[env_ids] = 0
def reset_idx(self, env_ids, goal_env_ids):
# generate random values
rand_floats = torch_rand_float(-1.0, 1.0, (len(env_ids), self.num_shadow_hand_dofs * 2 + 5), device=self.device)
# randomize start object poses
self.reset_target_pose(env_ids)
# reset rigid body forces
self.rb_forces[env_ids, :, :] = 0.0
# reset object
self.root_state_tensor[self.object_indices[env_ids]] = self.object_init_state[env_ids].clone()
self.root_state_tensor[self.object_indices[env_ids], 0:2] = self.object_init_state[env_ids, 0:2] + \
self.reset_position_noise * rand_floats[:, 0:2]
self.root_state_tensor[self.object_indices[env_ids], self.up_axis_idx] = self.object_init_state[env_ids, self.up_axis_idx] + \
self.reset_position_noise * rand_floats[:, self.up_axis_idx]
new_object_rot = randomize_rotation(rand_floats[:, 3], rand_floats[:, 4], self.x_unit_tensor[env_ids], self.y_unit_tensor[env_ids])
if self.object_type == "pen":
rand_angle_y = torch.tensor(0.3)
new_object_rot = randomize_rotation_pen(rand_floats[:, 3], rand_floats[:, 4], rand_angle_y,
self.x_unit_tensor[env_ids], self.y_unit_tensor[env_ids], self.z_unit_tensor[env_ids])
self.root_state_tensor[self.object_indices[env_ids], 3:7] = new_object_rot
self.root_state_tensor[self.object_indices[env_ids], 7:13] = torch.zeros_like(self.root_state_tensor[self.object_indices[env_ids], 7:13])
object_indices = torch.unique(torch.cat([self.object_indices[env_ids],
self.goal_object_indices[env_ids],
self.goal_object_indices[goal_env_ids]]).to(torch.int32))
self.gym.set_actor_root_state_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.root_state_tensor),
gymtorch.unwrap_tensor(object_indices), len(object_indices))
# reset random force probabilities
self.random_force_prob[env_ids] = torch.exp((torch.log(self.force_prob_range[0]) - torch.log(self.force_prob_range[1]))
* torch.rand(len(env_ids), device=self.device) + torch.log(self.force_prob_range[1]))
# reset shadow hand
delta_max = self.shadow_hand_dof_upper_limits - self.shadow_hand_dof_default_pos
delta_min = self.shadow_hand_dof_lower_limits - self.shadow_hand_dof_default_pos
rand_delta = delta_min + (delta_max - delta_min) * 0.5 * (rand_floats[:, 5:5+self.num_shadow_hand_dofs] + 1)
pos = self.shadow_hand_default_dof_pos + self.reset_dof_pos_noise * rand_delta
self.shadow_hand_dof_pos[env_ids, :] = pos
self.shadow_hand_dof_vel[env_ids, :] = self.shadow_hand_dof_default_vel + \
self.reset_dof_vel_noise * rand_floats[:, 5+self.num_shadow_hand_dofs:5+self.num_shadow_hand_dofs*2]
self.prev_targets[env_ids, :self.num_shadow_hand_dofs] = pos
self.cur_targets[env_ids, :self.num_shadow_hand_dofs] = pos
hand_indices = self.hand_indices[env_ids].to(torch.int32)
self.gym.set_dof_position_target_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.prev_targets),
gymtorch.unwrap_tensor(hand_indices), len(env_ids))
self.gym.set_dof_state_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.dof_state),
gymtorch.unwrap_tensor(hand_indices), len(env_ids))
self.progress_buf[env_ids] = 0
self.reset_buf[env_ids] = 0
self.successes[env_ids] = 0
def pre_physics_step(self, actions):
env_ids = self.reset_buf.nonzero(as_tuple=False).squeeze(-1)
goal_env_ids = self.reset_goal_buf.nonzero(as_tuple=False).squeeze(-1)
# if only goals need reset, then call set API
if len(goal_env_ids) > 0 and len(env_ids) == 0:
self.reset_target_pose(goal_env_ids, apply_reset=True)
# if goals need reset in addition to other envs, call set API in reset()
elif len(goal_env_ids) > 0:
self.reset_target_pose(goal_env_ids)
if len(env_ids) > 0:
self.reset_idx(env_ids, goal_env_ids)
self.actions = actions.clone().to(self.device)
if self.use_relative_control:
targets = self.prev_targets[:, self.actuated_dof_indices] + self.shadow_hand_dof_speed_scale * self.dt * self.actions
self.cur_targets[:, self.actuated_dof_indices] = tensor_clamp(targets,
self.shadow_hand_dof_lower_limits[self.actuated_dof_indices], self.shadow_hand_dof_upper_limits[self.actuated_dof_indices])
else:
self.cur_targets[:, self.actuated_dof_indices] = scale(self.actions,
self.shadow_hand_dof_lower_limits[self.actuated_dof_indices], self.shadow_hand_dof_upper_limits[self.actuated_dof_indices])
self.cur_targets[:, self.actuated_dof_indices] = self.act_moving_average * self.cur_targets[:,
self.actuated_dof_indices] + (1.0 - self.act_moving_average) * self.prev_targets[:, self.actuated_dof_indices]
self.cur_targets[:, self.actuated_dof_indices] = tensor_clamp(self.cur_targets[:, self.actuated_dof_indices],
self.shadow_hand_dof_lower_limits[self.actuated_dof_indices], self.shadow_hand_dof_upper_limits[self.actuated_dof_indices])
self.prev_targets[:, self.actuated_dof_indices] = self.cur_targets[:, self.actuated_dof_indices]
self.gym.set_dof_position_target_tensor(self.sim, gymtorch.unwrap_tensor(self.cur_targets))
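        # Optional random-push domain randomization: persistent per-body forces
        # decay geometrically (time constant `force_decay_interval`) and are
        # re-sampled for a random subset of envs each step, scaled by object mass.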
if self.force_scale > 0.0:
self.rb_forces *= torch.pow(self.force_decay, self.dt / self.force_decay_interval)
# apply new forces
force_indices = (torch.rand(self.num_envs, device=self.device) < self.random_force_prob).nonzero()
self.rb_forces[force_indices, self.object_rb_handles, :] = torch.randn(
self.rb_forces[force_indices, self.object_rb_handles, :].shape, device=self.device) * self.object_rb_masses * self.force_scale
self.gym.apply_rigid_body_force_tensors(self.sim, gymtorch.unwrap_tensor(self.rb_forces), None, gymapi.LOCAL_SPACE)
def post_physics_step(self):
self.progress_buf += 1
self.randomize_buf += 1
self.compute_observations()
self.compute_reward(self.actions)
if self.viewer and self.debug_viz:
# draw axes on target object
self.gym.clear_lines(self.viewer)
self.gym.refresh_rigid_body_state_tensor(self.sim)
for i in range(self.num_envs):
targetx = (self.goal_pos[i] + quat_apply(self.goal_rot[i], to_torch([1, 0, 0], device=self.device) * 0.2)).cpu().numpy()
targety = (self.goal_pos[i] + quat_apply(self.goal_rot[i], to_torch([0, 1, 0], device=self.device) * 0.2)).cpu().numpy()
targetz = (self.goal_pos[i] + quat_apply(self.goal_rot[i], to_torch([0, 0, 1], device=self.device) * 0.2)).cpu().numpy()
p0 = self.goal_pos[i].cpu().numpy() + self.goal_displacement_tensor.cpu().numpy()
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], targetx[0], targetx[1], targetx[2]], [0.85, 0.1, 0.1])
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], targety[0], targety[1], targety[2]], [0.1, 0.85, 0.1])
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], targetz[0], targetz[1], targetz[2]], [0.1, 0.1, 0.85])
objectx = (self.object_pos[i] + quat_apply(self.object_rot[i], to_torch([1, 0, 0], device=self.device) * 0.2)).cpu().numpy()
objecty = (self.object_pos[i] + quat_apply(self.object_rot[i], to_torch([0, 1, 0], device=self.device) * 0.2)).cpu().numpy()
objectz = (self.object_pos[i] + quat_apply(self.object_rot[i], to_torch([0, 0, 1], device=self.device) * 0.2)).cpu().numpy()
p0 = self.object_pos[i].cpu().numpy()
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], objectx[0], objectx[1], objectx[2]], [0.85, 0.1, 0.1])
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], objecty[0], objecty[1], objecty[2]], [0.1, 0.85, 0.1])
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], objectz[0], objectz[1], objectz[2]], [0.1, 0.1, 0.85])
#####################################################################
###=========================jit functions=========================###
#####################################################################
@torch.jit.script
def compute_hand_reward(
rew_buf, reset_buf, reset_goal_buf, progress_buf, successes, consecutive_successes,
max_episode_length: float, object_pos, object_rot, target_pos, target_rot,
dist_reward_scale: float, rot_reward_scale: float, rot_eps: float,
actions, action_penalty_scale: float,
success_tolerance: float, reach_goal_bonus: float, fall_dist: float,
fall_penalty: float, max_consecutive_successes: int, av_factor: float, ignore_z_rot: bool
):
# Distance from the hand to the object
goal_dist = torch.norm(object_pos - target_pos, p=2, dim=-1)
if ignore_z_rot:
success_tolerance = 2.0 * success_tolerance
# Orientation alignment for the cube in hand and goal cube
quat_diff = quat_mul(object_rot, quat_conjugate(target_rot))
rot_dist = 2.0 * torch.asin(torch.clamp(torch.norm(quat_diff[:, 0:3], p=2, dim=-1), max=1.0))
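    # ||q_diff[x, y, z]|| equals |sin(theta / 2)|, so rot_dist is the geodesic
    # angle theta between the object and goal orientations (clamped for safety).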
dist_rew = goal_dist * dist_reward_scale
rot_rew = 1.0/(torch.abs(rot_dist) + rot_eps) * rot_reward_scale
action_penalty = torch.sum(actions ** 2, dim=-1)
# Total reward is: position distance + orientation alignment + action regularization + success bonus + fall penalty
reward = dist_rew + rot_rew + action_penalty * action_penalty_scale
# Find out which envs hit the goal and update successes count
goal_resets = torch.where(torch.abs(rot_dist) <= success_tolerance, torch.ones_like(reset_goal_buf), reset_goal_buf)
successes = successes + goal_resets
# Success bonus: orientation is within `success_tolerance` of goal orientation
reward = torch.where(goal_resets == 1, reward + reach_goal_bonus, reward)
# Fall penalty: distance to the goal is larger than a threshold
reward = torch.where(goal_dist >= fall_dist, reward + fall_penalty, reward)
# Check env termination conditions, including maximum success number
resets = torch.where(goal_dist >= fall_dist, torch.ones_like(reset_buf), reset_buf)
if max_consecutive_successes > 0:
# Reset progress buffer on goal envs if max_consecutive_successes > 0
progress_buf = torch.where(torch.abs(rot_dist) <= success_tolerance, torch.zeros_like(progress_buf), progress_buf)
resets = torch.where(successes >= max_consecutive_successes, torch.ones_like(resets), resets)
timed_out = progress_buf >= max_episode_length - 1
resets = torch.where(timed_out, torch.ones_like(resets), resets)
# Apply penalty for not reaching the goal
if max_consecutive_successes > 0:
reward = torch.where(timed_out, reward + 0.5 * fall_penalty, reward)
num_resets = torch.sum(resets)
finished_cons_successes = torch.sum(successes * resets.float())
cons_successes = torch.where(num_resets > 0, av_factor*finished_cons_successes/num_resets + (1.0 - av_factor)*consecutive_successes, consecutive_successes)
return reward, resets, goal_resets, progress_buf, successes, cons_successes
@torch.jit.script
def randomize_rotation(rand0, rand1, x_unit_tensor, y_unit_tensor):
return quat_mul(quat_from_angle_axis(rand0 * np.pi, x_unit_tensor),
quat_from_angle_axis(rand1 * np.pi, y_unit_tensor))
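# A random orientation built from two independent rotations about the world x
# and y axes; rand0 and rand1 are uniform in [-1, 1], giving angles in [-pi, pi].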
@torch.jit.script
def randomize_rotation_pen(rand0, rand1, max_angle, x_unit_tensor, y_unit_tensor, z_unit_tensor):
rot = quat_mul(quat_from_angle_axis(0.5 * np.pi + rand0 * max_angle, x_unit_tensor),
quat_from_angle_axis(rand0 * np.pi, z_unit_tensor))
return rot
| 40,972 | Python | 54.897681 | 223 | 0.622157 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/ball_balance.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import math
import numpy as np
import os
import torch
import xml.etree.ElementTree as ET
from isaacgym import gymutil, gymtorch, gymapi
from isaacgymenvs.utils.torch_jit_utils import to_torch, torch_rand_float, tensor_clamp, torch_random_dir_2
from .base.vec_task import VecTask
def _indent_xml(elem, level=0):
i = "\n" + level * " "
if len(elem):
if not elem.text or not elem.text.strip():
elem.text = i + " "
if not elem.tail or not elem.tail.strip():
elem.tail = i
for elem in elem:
_indent_xml(elem, level + 1)
if not elem.tail or not elem.tail.strip():
elem.tail = i
else:
if level and (not elem.tail or not elem.tail.strip()):
elem.tail = i
class BallBalance(VecTask):
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.cfg = cfg
self.max_episode_length = self.cfg["env"]["maxEpisodeLength"]
self.action_speed_scale = self.cfg["env"]["actionSpeedScale"]
self.debug_viz = self.cfg["env"]["enableDebugVis"]
sensors_per_env = 3
actors_per_env = 2
dofs_per_env = 6
bodies_per_env = 7 + 1
# Observations:
# 0:3 - activated DOF positions
# 3:6 - activated DOF velocities
# 6:9 - ball position
# 9:12 - ball linear velocity
# 12:15 - sensor force (same for each sensor)
# 15:18 - sensor torque 1
# 18:21 - sensor torque 2
# 21:24 - sensor torque 3
self.cfg["env"]["numObservations"] = 24
# Actions: target velocities for the 3 actuated DOFs
self.cfg["env"]["numActions"] = 3
super().__init__(config=self.cfg, rl_device=rl_device, sim_device=sim_device, graphics_device_id=graphics_device_id, headless=headless, virtual_screen_capture=virtual_screen_capture, force_render=force_render)
self.root_tensor = self.gym.acquire_actor_root_state_tensor(self.sim)
self.dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
self.sensor_tensor = self.gym.acquire_force_sensor_tensor(self.sim)
vec_root_tensor = gymtorch.wrap_tensor(self.root_tensor).view(self.num_envs, actors_per_env, 13)
vec_dof_tensor = gymtorch.wrap_tensor(self.dof_state_tensor).view(self.num_envs, dofs_per_env, 2)
vec_sensor_tensor = gymtorch.wrap_tensor(self.sensor_tensor).view(self.num_envs, sensors_per_env, 6)
self.root_states = vec_root_tensor
self.tray_positions = vec_root_tensor[..., 0, 0:3]
self.ball_positions = vec_root_tensor[..., 1, 0:3]
self.ball_orientations = vec_root_tensor[..., 1, 3:7]
self.ball_linvels = vec_root_tensor[..., 1, 7:10]
self.ball_angvels = vec_root_tensor[..., 1, 10:13]
self.dof_states = vec_dof_tensor
self.dof_positions = vec_dof_tensor[..., 0]
self.dof_velocities = vec_dof_tensor[..., 1]
self.sensor_forces = vec_sensor_tensor[..., 0:3]
self.sensor_torques = vec_sensor_tensor[..., 3:6]
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_dof_state_tensor(self.sim)
self.initial_dof_states = self.dof_states.clone()
self.initial_root_states = vec_root_tensor.clone()
self.dof_position_targets = torch.zeros((self.num_envs, dofs_per_env), dtype=torch.float32, device=self.device, requires_grad=False)
self.all_actor_indices = torch.arange(actors_per_env * self.num_envs, dtype=torch.int32, device=self.device).view(self.num_envs, actors_per_env)
self.all_bbot_indices = actors_per_env * torch.arange(self.num_envs, dtype=torch.int32, device=self.device)
# vis
self.axes_geom = gymutil.AxesGeometry(0.2)
def create_sim(self):
self.dt = self.sim_params.dt
self.sim_params.up_axis = gymapi.UP_AXIS_Z
self.sim_params.gravity.x = 0
self.sim_params.gravity.y = 0
self.sim_params.gravity.z = -9.81
self.sim = super().create_sim(self.device_id, self.graphics_device_id, self.physics_engine, self.sim_params)
self._create_balance_bot_asset()
self._create_ground_plane()
self._create_envs(self.num_envs, self.cfg["env"]['envSpacing'], int(np.sqrt(self.num_envs)))
def _create_balance_bot_asset(self):
# there is an asset balance_bot.xml, here we override some features.
tray_radius = 0.5
tray_thickness = 0.02
leg_radius = 0.02
leg_outer_offset = tray_radius - 0.1
leg_length = leg_outer_offset - 2 * leg_radius
leg_inner_offset = leg_outer_offset - leg_length / math.sqrt(2)
tray_height = leg_length * math.sqrt(2) + 2 * leg_radius + 0.5 * tray_thickness
root = ET.Element('mujoco')
root.attrib["model"] = "BalanceBot"
compiler = ET.SubElement(root, "compiler")
compiler.attrib["angle"] = "degree"
compiler.attrib["coordinate"] = "local"
compiler.attrib["inertiafromgeom"] = "true"
worldbody = ET.SubElement(root, "worldbody")
tray = ET.SubElement(worldbody, "body")
tray.attrib["name"] = "tray"
tray.attrib["pos"] = "%g %g %g" % (0, 0, tray_height)
tray_joint = ET.SubElement(tray, "joint")
tray_joint.attrib["name"] = "root_joint"
tray_joint.attrib["type"] = "free"
tray_geom = ET.SubElement(tray, "geom")
tray_geom.attrib["type"] = "cylinder"
tray_geom.attrib["size"] = "%g %g" % (tray_radius, 0.5 * tray_thickness)
tray_geom.attrib["pos"] = "0 0 0"
tray_geom.attrib["density"] = "100"
leg_angles = [0.0, 2.0 / 3.0 * math.pi, 4.0 / 3.0 * math.pi]
for i in range(len(leg_angles)):
angle = leg_angles[i]
upper_leg_from = gymapi.Vec3()
upper_leg_from.x = leg_outer_offset * math.cos(angle)
upper_leg_from.y = leg_outer_offset * math.sin(angle)
upper_leg_from.z = -leg_radius - 0.5 * tray_thickness
upper_leg_to = gymapi.Vec3()
upper_leg_to.x = leg_inner_offset * math.cos(angle)
upper_leg_to.y = leg_inner_offset * math.sin(angle)
upper_leg_to.z = upper_leg_from.z - leg_length / math.sqrt(2)
upper_leg_pos = (upper_leg_from + upper_leg_to) * 0.5
upper_leg_quat = gymapi.Quat.from_euler_zyx(0, -0.75 * math.pi, angle)
upper_leg = ET.SubElement(tray, "body")
upper_leg.attrib["name"] = "upper_leg" + str(i)
upper_leg.attrib["pos"] = "%g %g %g" % (upper_leg_pos.x, upper_leg_pos.y, upper_leg_pos.z)
upper_leg.attrib["quat"] = "%g %g %g %g" % (upper_leg_quat.w, upper_leg_quat.x, upper_leg_quat.y, upper_leg_quat.z)
upper_leg_geom = ET.SubElement(upper_leg, "geom")
upper_leg_geom.attrib["type"] = "capsule"
upper_leg_geom.attrib["size"] = "%g %g" % (leg_radius, 0.5 * leg_length)
upper_leg_geom.attrib["density"] = "1000"
upper_leg_joint = ET.SubElement(upper_leg, "joint")
upper_leg_joint.attrib["name"] = "upper_leg_joint" + str(i)
upper_leg_joint.attrib["type"] = "hinge"
upper_leg_joint.attrib["pos"] = "%g %g %g" % (0, 0, -0.5 * leg_length)
upper_leg_joint.attrib["axis"] = "0 1 0"
upper_leg_joint.attrib["limited"] = "true"
upper_leg_joint.attrib["range"] = "-45 45"
lower_leg_pos = gymapi.Vec3(-0.5 * leg_length, 0, 0.5 * leg_length)
lower_leg_quat = gymapi.Quat.from_euler_zyx(0, -0.5 * math.pi, 0)
lower_leg = ET.SubElement(upper_leg, "body")
lower_leg.attrib["name"] = "lower_leg" + str(i)
lower_leg.attrib["pos"] = "%g %g %g" % (lower_leg_pos.x, lower_leg_pos.y, lower_leg_pos.z)
lower_leg.attrib["quat"] = "%g %g %g %g" % (lower_leg_quat.w, lower_leg_quat.x, lower_leg_quat.y, lower_leg_quat.z)
lower_leg_geom = ET.SubElement(lower_leg, "geom")
lower_leg_geom.attrib["type"] = "capsule"
lower_leg_geom.attrib["size"] = "%g %g" % (leg_radius, 0.5 * leg_length)
lower_leg_geom.attrib["density"] = "1000"
lower_leg_joint = ET.SubElement(lower_leg, "joint")
lower_leg_joint.attrib["name"] = "lower_leg_joint" + str(i)
lower_leg_joint.attrib["type"] = "hinge"
lower_leg_joint.attrib["pos"] = "%g %g %g" % (0, 0, -0.5 * leg_length)
lower_leg_joint.attrib["axis"] = "0 1 0"
lower_leg_joint.attrib["limited"] = "true"
lower_leg_joint.attrib["range"] = "-70 90"
_indent_xml(root)
ET.ElementTree(root).write("balance_bot.xml")
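        # the generated MJCF is written to the current working directory and
        # loaded back in `_create_envs`, so the dimensions computed above always
        # match the simulated asset.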
# save some useful robot parameters
self.tray_height = tray_height
self.leg_radius = leg_radius
self.leg_length = leg_length
self.leg_outer_offset = leg_outer_offset
self.leg_angles = leg_angles
def _create_ground_plane(self):
plane_params = gymapi.PlaneParams()
plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)
self.gym.add_ground(self.sim, plane_params)
def _create_envs(self, num_envs, spacing, num_per_row):
lower = gymapi.Vec3(-spacing, -spacing, 0.0)
upper = gymapi.Vec3(spacing, spacing, spacing)
asset_root = "."
asset_file = "balance_bot.xml"
asset_path = os.path.join(asset_root, asset_file)
asset_root = os.path.dirname(asset_path)
asset_file = os.path.basename(asset_path)
bbot_options = gymapi.AssetOptions()
bbot_options.fix_base_link = False
bbot_options.slices_per_cylinder = 40
bbot_asset = self.gym.load_asset(self.sim, asset_root, asset_file, bbot_options)
# printed view of asset built
# self.gym.debug_print_asset(bbot_asset)
self.num_bbot_dofs = self.gym.get_asset_dof_count(bbot_asset)
bbot_dof_props = self.gym.get_asset_dof_properties(bbot_asset)
self.bbot_dof_lower_limits = []
self.bbot_dof_upper_limits = []
for i in range(self.num_bbot_dofs):
self.bbot_dof_lower_limits.append(bbot_dof_props['lower'][i])
self.bbot_dof_upper_limits.append(bbot_dof_props['upper'][i])
self.bbot_dof_lower_limits = to_torch(self.bbot_dof_lower_limits, device=self.device)
self.bbot_dof_upper_limits = to_torch(self.bbot_dof_upper_limits, device=self.device)
bbot_pose = gymapi.Transform()
bbot_pose.p.z = self.tray_height
# create force sensors attached to the tray body
bbot_tray_idx = self.gym.find_asset_rigid_body_index(bbot_asset, "tray")
for angle in self.leg_angles:
sensor_pose = gymapi.Transform()
sensor_pose.p.x = self.leg_outer_offset * math.cos(angle)
sensor_pose.p.y = self.leg_outer_offset * math.sin(angle)
self.gym.create_asset_force_sensor(bbot_asset, bbot_tray_idx, sensor_pose)
# create ball asset
self.ball_radius = 0.1
ball_options = gymapi.AssetOptions()
ball_options.density = 200
ball_asset = self.gym.create_sphere(self.sim, self.ball_radius, ball_options)
self.envs = []
self.bbot_handles = []
self.obj_handles = []
for i in range(self.num_envs):
# create env instance
env_ptr = self.gym.create_env(
self.sim, lower, upper, num_per_row
)
bbot_handle = self.gym.create_actor(env_ptr, bbot_asset, bbot_pose, "bbot", i, 0, 0)
actuated_dofs = np.array([1, 3, 5])
free_dofs = np.array([0, 2, 4])
dof_props = self.gym.get_actor_dof_properties(env_ptr, bbot_handle)
dof_props['driveMode'][actuated_dofs] = gymapi.DOF_MODE_POS
dof_props['stiffness'][actuated_dofs] = 4000.0
dof_props['damping'][actuated_dofs] = 100.0
dof_props['driveMode'][free_dofs] = gymapi.DOF_MODE_NONE
dof_props['stiffness'][free_dofs] = 0
dof_props['damping'][free_dofs] = 0
self.gym.set_actor_dof_properties(env_ptr, bbot_handle, dof_props)
lower_leg_handles = []
lower_leg_handles.append(self.gym.find_actor_rigid_body_handle(env_ptr, bbot_handle, "lower_leg0"))
lower_leg_handles.append(self.gym.find_actor_rigid_body_handle(env_ptr, bbot_handle, "lower_leg1"))
lower_leg_handles.append(self.gym.find_actor_rigid_body_handle(env_ptr, bbot_handle, "lower_leg2"))
# create attractors to hold the feet in place
attractor_props = gymapi.AttractorProperties()
attractor_props.stiffness = 5e7
attractor_props.damping = 5e3
attractor_props.axes = gymapi.AXIS_TRANSLATION
for j in range(3):
angle = self.leg_angles[j]
attractor_props.rigid_handle = lower_leg_handles[j]
# attractor world pose to keep the feet in place
attractor_props.target.p.x = self.leg_outer_offset * math.cos(angle)
attractor_props.target.p.z = self.leg_radius
attractor_props.target.p.y = self.leg_outer_offset * math.sin(angle)
# attractor local pose in lower leg body
attractor_props.offset.p.z = 0.5 * self.leg_length
self.gym.create_rigid_body_attractor(env_ptr, attractor_props)
ball_pose = gymapi.Transform()
ball_pose.p.x = 0.2
ball_pose.p.z = 2.0
ball_handle = self.gym.create_actor(env_ptr, ball_asset, ball_pose, "ball", i, 0, 0)
self.obj_handles.append(ball_handle)
# pretty colors
self.gym.set_rigid_body_color(env_ptr, ball_handle, 0, gymapi.MESH_VISUAL, gymapi.Vec3(0.99, 0.66, 0.25))
self.gym.set_rigid_body_color(env_ptr, bbot_handle, 0, gymapi.MESH_VISUAL, gymapi.Vec3(0.48, 0.65, 0.8))
for j in range(1, 7):
self.gym.set_rigid_body_color(env_ptr, bbot_handle, j, gymapi.MESH_VISUAL, gymapi.Vec3(0.15, 0.2, 0.3))
self.envs.append(env_ptr)
self.bbot_handles.append(bbot_handle)
def compute_observations(self):
#print("~!~!~!~! Computing obs")
actuated_dof_indices = torch.tensor([1, 3, 5], device=self.device)
#print(self.dof_states[:, actuated_dof_indices, :])
self.obs_buf[..., 0:3] = self.dof_positions[..., actuated_dof_indices]
self.obs_buf[..., 3:6] = self.dof_velocities[..., actuated_dof_indices]
self.obs_buf[..., 6:9] = self.ball_positions
self.obs_buf[..., 9:12] = self.ball_linvels
self.obs_buf[..., 12:15] = self.sensor_forces[..., 0] / 20 # !!! lousy normalization
self.obs_buf[..., 15:18] = self.sensor_torques[..., 0] / 20 # !!! lousy normalization
self.obs_buf[..., 18:21] = self.sensor_torques[..., 1] / 20 # !!! lousy normalization
self.obs_buf[..., 21:24] = self.sensor_torques[..., 2] / 20 # !!! lousy normalization
return self.obs_buf
def compute_reward(self):
self.rew_buf[:], self.reset_buf[:] = compute_bbot_reward(
self.tray_positions,
self.ball_positions,
self.ball_linvels,
self.ball_radius,
self.reset_buf, self.progress_buf, self.max_episode_length
)
def reset_idx(self, env_ids):
num_resets = len(env_ids)
# reset bbot and ball root states
self.root_states[env_ids] = self.initial_root_states[env_ids]
min_d = 0.001 # min horizontal dist from origin
max_d = 0.5 # max horizontal dist from origin
min_height = 1.0
max_height = 2.0
min_horizontal_speed = 0
max_horizontal_speed = 5
dists = torch_rand_float(min_d, max_d, (num_resets, 1), self.device)
dirs = torch_random_dir_2((num_resets, 1), self.device)
hpos = dists * dirs
speedscales = (dists - min_d) / (max_d - min_d)
hspeeds = torch_rand_float(min_horizontal_speed, max_horizontal_speed, (num_resets, 1), self.device)
hvels = -speedscales * hspeeds * dirs
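        # note: min == max below, so the vertical speed is a constant 5 m/s downward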
vspeeds = -torch_rand_float(5.0, 5.0, (num_resets, 1), self.device).squeeze()
self.ball_positions[env_ids, 0] = hpos[..., 0]
self.ball_positions[env_ids, 2] = torch_rand_float(min_height, max_height, (num_resets, 1), self.device).squeeze()
self.ball_positions[env_ids, 1] = hpos[..., 1]
self.ball_orientations[env_ids, 0:3] = 0
self.ball_orientations[env_ids, 3] = 1
self.ball_linvels[env_ids, 0] = hvels[..., 0]
self.ball_linvels[env_ids, 2] = vspeeds
self.ball_linvels[env_ids, 1] = hvels[..., 1]
self.ball_angvels[env_ids] = 0
# reset root state for bbots and balls in selected envs
actor_indices = self.all_actor_indices[env_ids].flatten()
self.gym.set_actor_root_state_tensor_indexed(self.sim, self.root_tensor, gymtorch.unwrap_tensor(actor_indices), len(actor_indices))
# reset DOF states for bbots in selected envs
bbot_indices = self.all_bbot_indices[env_ids].flatten()
self.dof_states[env_ids] = self.initial_dof_states[env_ids]
self.gym.set_dof_state_tensor_indexed(self.sim, self.dof_state_tensor, gymtorch.unwrap_tensor(bbot_indices), len(bbot_indices))
self.reset_buf[env_ids] = 0
self.progress_buf[env_ids] = 0
def pre_physics_step(self, _actions):
# resets
reset_env_ids = self.reset_buf.nonzero(as_tuple=False).squeeze(-1)
if len(reset_env_ids) > 0:
self.reset_idx(reset_env_ids)
actions = _actions.to(self.device)
actuated_indices = torch.LongTensor([1, 3, 5])
# update position targets from actions
self.dof_position_targets[..., actuated_indices] += self.dt * self.action_speed_scale * actions
self.dof_position_targets[:] = tensor_clamp(self.dof_position_targets, self.bbot_dof_lower_limits, self.bbot_dof_upper_limits)
# reset position targets for reset envs
self.dof_position_targets[reset_env_ids] = 0
self.gym.set_dof_position_target_tensor(self.sim, gymtorch.unwrap_tensor(self.dof_position_targets))
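    # Actions are interpreted as joint-target velocities: each step they are
    # integrated into DOF position targets (scaled by dt * action_speed_scale),
    # clamped to the joint limits, and tracked by the PD position controller.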
def post_physics_step(self):
self.progress_buf += 1
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_force_sensor_tensor(self.sim)
self.compute_observations()
self.compute_reward()
# vis
if self.viewer and self.debug_viz:
self.gym.clear_lines(self.viewer)
for i in range(self.num_envs):
env = self.envs[i]
bbot_handle = self.bbot_handles[i]
body_handles = []
body_handles.append(self.gym.find_actor_rigid_body_handle(env, bbot_handle, "upper_leg0"))
body_handles.append(self.gym.find_actor_rigid_body_handle(env, bbot_handle, "upper_leg1"))
body_handles.append(self.gym.find_actor_rigid_body_handle(env, bbot_handle, "upper_leg2"))
for lhandle in body_handles:
lpose = self.gym.get_rigid_transform(env, lhandle)
gymutil.draw_lines(self.axes_geom, self.gym, self.viewer, env, lpose)
#####################################################################
###=========================jit functions=========================###
#####################################################################
@torch.jit.script
def compute_bbot_reward(tray_positions, ball_positions, ball_velocities, ball_radius, reset_buf, progress_buf, max_episode_length):
# type: (Tensor, Tensor, Tensor, float, Tensor, Tensor, float) -> Tuple[Tensor, Tensor]
# distance from the ball to the target point 0.7 m above the ground-plane origin
ball_dist = torch.sqrt(ball_positions[..., 0] * ball_positions[..., 0] +
(ball_positions[..., 2] - 0.7) * (ball_positions[..., 2] - 0.7) +
(ball_positions[..., 1]) * ball_positions[..., 1])
ball_speed = torch.sqrt(ball_velocities[..., 0] * ball_velocities[..., 0] +
ball_velocities[..., 1] * ball_velocities[..., 1] +
ball_velocities[..., 2] * ball_velocities[..., 2])
pos_reward = 1.0 / (1.0 + ball_dist)
speed_reward = 1.0 / (1.0 + ball_speed)
reward = pos_reward * speed_reward
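# multiplying the two shaped terms means the reward peaks at 1.0 only when the ball
# is simultaneously near the target point and nearly stationary; a large distance or
# a large speed alone already drives the product toward zero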
reset = torch.where(progress_buf >= max_episode_length - 1, torch.ones_like(reset_buf), reset_buf)
reset = torch.where(ball_positions[..., 2] < ball_radius * 1.5, torch.ones_like(reset_buf), reset)
return reward, reset
| 22,414 | Python | 45.991614 | 217 | 0.605559 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/anymal_terrain.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import numpy as np
import os, time
from isaacgym import gymtorch
from isaacgym import gymapi
from .base.vec_task import VecTask
import torch
from typing import Tuple, Dict
from isaacgymenvs.utils.torch_jit_utils import to_torch, get_axis_params, torch_rand_float, normalize, quat_apply, quat_rotate_inverse
from isaacgymenvs.tasks.base.vec_task import VecTask
class AnymalTerrain(VecTask):
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.cfg = cfg
self.height_samples = None
self.custom_origins = False
self.debug_viz = self.cfg["env"]["enableDebugVis"]
self.init_done = False
# normalization
self.lin_vel_scale = self.cfg["env"]["learn"]["linearVelocityScale"]
self.ang_vel_scale = self.cfg["env"]["learn"]["angularVelocityScale"]
self.dof_pos_scale = self.cfg["env"]["learn"]["dofPositionScale"]
self.dof_vel_scale = self.cfg["env"]["learn"]["dofVelocityScale"]
self.height_meas_scale = self.cfg["env"]["learn"]["heightMeasurementScale"]
self.action_scale = self.cfg["env"]["control"]["actionScale"]
# reward scales
self.rew_scales = {}
self.rew_scales["termination"] = self.cfg["env"]["learn"]["terminalReward"]
self.rew_scales["lin_vel_xy"] = self.cfg["env"]["learn"]["linearVelocityXYRewardScale"]
self.rew_scales["lin_vel_z"] = self.cfg["env"]["learn"]["linearVelocityZRewardScale"]
self.rew_scales["ang_vel_z"] = self.cfg["env"]["learn"]["angularVelocityZRewardScale"]
self.rew_scales["ang_vel_xy"] = self.cfg["env"]["learn"]["angularVelocityXYRewardScale"]
self.rew_scales["orient"] = self.cfg["env"]["learn"]["orientationRewardScale"]
self.rew_scales["torque"] = self.cfg["env"]["learn"]["torqueRewardScale"]
self.rew_scales["joint_acc"] = self.cfg["env"]["learn"]["jointAccRewardScale"]
self.rew_scales["base_height"] = self.cfg["env"]["learn"]["baseHeightRewardScale"]
self.rew_scales["air_time"] = self.cfg["env"]["learn"]["feetAirTimeRewardScale"]
self.rew_scales["collision"] = self.cfg["env"]["learn"]["kneeCollisionRewardScale"]
self.rew_scales["stumble"] = self.cfg["env"]["learn"]["feetStumbleRewardScale"]
self.rew_scales["action_rate"] = self.cfg["env"]["learn"]["actionRateRewardScale"]
self.rew_scales["hip"] = self.cfg["env"]["learn"]["hipRewardScale"]
#command ranges
self.command_x_range = self.cfg["env"]["randomCommandVelocityRanges"]["linear_x"]
self.command_y_range = self.cfg["env"]["randomCommandVelocityRanges"]["linear_y"]
self.command_yaw_range = self.cfg["env"]["randomCommandVelocityRanges"]["yaw"]
# base init state
pos = self.cfg["env"]["baseInitState"]["pos"]
rot = self.cfg["env"]["baseInitState"]["rot"]
v_lin = self.cfg["env"]["baseInitState"]["vLinear"]
v_ang = self.cfg["env"]["baseInitState"]["vAngular"]
self.base_init_state = pos + rot + v_lin + v_ang
# default joint positions
self.named_default_joint_angles = self.cfg["env"]["defaultJointAngles"]
# other
self.decimation = self.cfg["env"]["control"]["decimation"]
self.dt = self.decimation * self.cfg["sim"]["dt"]
self.max_episode_length_s = self.cfg["env"]["learn"]["episodeLength_s"]
self.max_episode_length = int(self.max_episode_length_s / self.dt + 0.5)
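# e.g. with hypothetical cfg values decimation = 4 and sim dt = 0.005 s, self.dt is
# 0.02 s, so a 20 s episode rounds to int(20 / 0.02 + 0.5) = 1000 control steps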
self.push_interval = int(self.cfg["env"]["learn"]["pushInterval_s"] / self.dt + 0.5)
self.allow_knee_contacts = self.cfg["env"]["learn"]["allowKneeContacts"]
self.Kp = self.cfg["env"]["control"]["stiffness"]
self.Kd = self.cfg["env"]["control"]["damping"]
self.curriculum = self.cfg["env"]["terrain"]["curriculum"]
for key in self.rew_scales.keys():
self.rew_scales[key] *= self.dt
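# the dt scaling above turns each per-step reward term into a per-second rate, so
# the summed episode return stays roughly comparable if the control frequency changes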
super().__init__(config=self.cfg, rl_device=rl_device, sim_device=sim_device, graphics_device_id=graphics_device_id, headless=headless, virtual_screen_capture=virtual_screen_capture, force_render=force_render)
if self.graphics_device_id != -1:
p = self.cfg["env"]["viewer"]["pos"]
lookat = self.cfg["env"]["viewer"]["lookat"]
cam_pos = gymapi.Vec3(p[0], p[1], p[2])
cam_target = gymapi.Vec3(lookat[0], lookat[1], lookat[2])
self.gym.viewer_camera_look_at(self.viewer, None, cam_pos, cam_target)
# get gym GPU state tensors
actor_root_state = self.gym.acquire_actor_root_state_tensor(self.sim)
dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
net_contact_forces = self.gym.acquire_net_contact_force_tensor(self.sim)
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_net_contact_force_tensor(self.sim)
# create some wrapper tensors for different slices
self.root_states = gymtorch.wrap_tensor(actor_root_state)
self.dof_state = gymtorch.wrap_tensor(dof_state_tensor)
self.dof_pos = self.dof_state.view(self.num_envs, self.num_dof, 2)[..., 0]
self.dof_vel = self.dof_state.view(self.num_envs, self.num_dof, 2)[..., 1]
self.contact_forces = gymtorch.wrap_tensor(net_contact_forces).view(self.num_envs, -1, 3) # shape: num_envs, num_bodies, xyz axis
# initialize some data used later on
self.common_step_counter = 0
self.extras = {}
self.noise_scale_vec = self._get_noise_scale_vec(self.cfg)
self.commands = torch.zeros(self.num_envs, 4, dtype=torch.float, device=self.device, requires_grad=False) # x vel, y vel, yaw vel, heading
self.commands_scale = torch.tensor([self.lin_vel_scale, self.lin_vel_scale, self.ang_vel_scale], device=self.device, requires_grad=False,)
self.gravity_vec = to_torch(get_axis_params(-1., self.up_axis_idx), device=self.device).repeat((self.num_envs, 1))
self.forward_vec = to_torch([1., 0., 0.], device=self.device).repeat((self.num_envs, 1))
self.torques = torch.zeros(self.num_envs, self.num_actions, dtype=torch.float, device=self.device, requires_grad=False)
self.actions = torch.zeros(self.num_envs, self.num_actions, dtype=torch.float, device=self.device, requires_grad=False)
self.last_actions = torch.zeros(self.num_envs, self.num_actions, dtype=torch.float, device=self.device, requires_grad=False)
self.feet_air_time = torch.zeros(self.num_envs, 4, dtype=torch.float, device=self.device, requires_grad=False)
self.last_dof_vel = torch.zeros_like(self.dof_vel)
self.height_points = self.init_height_points()
self.measured_heights = None
# joint positions offsets
self.default_dof_pos = torch.zeros_like(self.dof_pos, dtype=torch.float, device=self.device, requires_grad=False)
for i in range(self.num_actions):
name = self.dof_names[i]
angle = self.named_default_joint_angles[name]
self.default_dof_pos[:, i] = angle
# reward episode sums
torch_zeros = lambda : torch.zeros(self.num_envs, dtype=torch.float, device=self.device, requires_grad=False)
self.episode_sums = {"lin_vel_xy": torch_zeros(), "lin_vel_z": torch_zeros(), "ang_vel_z": torch_zeros(), "ang_vel_xy": torch_zeros(),
"orient": torch_zeros(), "torques": torch_zeros(), "joint_acc": torch_zeros(), "base_height": torch_zeros(),
"air_time": torch_zeros(), "collision": torch_zeros(), "stumble": torch_zeros(), "action_rate": torch_zeros(), "hip": torch_zeros()}
self.reset_idx(torch.arange(self.num_envs, device=self.device))
self.init_done = True
def create_sim(self):
self.up_axis_idx = 2 # index of up axis: Y=1, Z=2
self.sim = super().create_sim(self.device_id, self.graphics_device_id, self.physics_engine, self.sim_params)
terrain_type = self.cfg["env"]["terrain"]["terrainType"]
if terrain_type=='plane':
self._create_ground_plane()
elif terrain_type=='trimesh':
self._create_trimesh()
self.custom_origins = True
self._create_envs(self.num_envs, self.cfg["env"]['envSpacing'], int(np.sqrt(self.num_envs)))
def _get_noise_scale_vec(self, cfg):
noise_vec = torch.zeros_like(self.obs_buf[0])
self.add_noise = self.cfg["env"]["learn"]["addNoise"]
noise_level = self.cfg["env"]["learn"]["noiseLevel"]
noise_vec[:3] = self.cfg["env"]["learn"]["linearVelocityNoise"] * noise_level * self.lin_vel_scale
noise_vec[3:6] = self.cfg["env"]["learn"]["angularVelocityNoise"] * noise_level * self.ang_vel_scale
noise_vec[6:9] = self.cfg["env"]["learn"]["gravityNoise"] * noise_level
noise_vec[9:12] = 0. # commands
noise_vec[12:24] = self.cfg["env"]["learn"]["dofPositionNoise"] * noise_level * self.dof_pos_scale
noise_vec[24:36] = self.cfg["env"]["learn"]["dofVelocityNoise"] * noise_level * self.dof_vel_scale
noise_vec[36:176] = self.cfg["env"]["learn"]["heightMeasurementNoise"] * noise_level * self.height_meas_scale
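# indices 36:176 cover the 140 terrain height measurements (the 14 x 10 grid built
# in init_height_points); these slices must stay in sync with compute_observations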
noise_vec[176:188] = 0. # previous actions
return noise_vec
def _create_ground_plane(self):
plane_params = gymapi.PlaneParams()
plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)
plane_params.static_friction = self.cfg["env"]["terrain"]["staticFriction"]
plane_params.dynamic_friction = self.cfg["env"]["terrain"]["dynamicFriction"]
plane_params.restitution = self.cfg["env"]["terrain"]["restitution"]
self.gym.add_ground(self.sim, plane_params)
def _create_trimesh(self):
self.terrain = Terrain(self.cfg["env"]["terrain"], num_robots=self.num_envs)
tm_params = gymapi.TriangleMeshParams()
tm_params.nb_vertices = self.terrain.vertices.shape[0]
tm_params.nb_triangles = self.terrain.triangles.shape[0]
tm_params.transform.p.x = -self.terrain.border_size
tm_params.transform.p.y = -self.terrain.border_size
tm_params.transform.p.z = 0.0
tm_params.static_friction = self.cfg["env"]["terrain"]["staticFriction"]
tm_params.dynamic_friction = self.cfg["env"]["terrain"]["dynamicFriction"]
tm_params.restitution = self.cfg["env"]["terrain"]["restitution"]
self.gym.add_triangle_mesh(self.sim, self.terrain.vertices.flatten(order='C'), self.terrain.triangles.flatten(order='C'), tm_params)
self.height_samples = torch.tensor(self.terrain.heightsamples).view(self.terrain.tot_rows, self.terrain.tot_cols).to(self.device)
def _create_envs(self, num_envs, spacing, num_per_row):
asset_root = os.path.join(os.path.dirname(os.path.abspath(__file__)), '../../assets')
asset_file = self.cfg["env"]["urdfAsset"]["file"]
asset_path = os.path.join(asset_root, asset_file)
asset_root = os.path.dirname(asset_path)
asset_file = os.path.basename(asset_path)
asset_options = gymapi.AssetOptions()
asset_options.default_dof_drive_mode = gymapi.DOF_MODE_EFFORT
asset_options.collapse_fixed_joints = True
asset_options.replace_cylinder_with_capsule = True
asset_options.flip_visual_attachments = True
asset_options.fix_base_link = self.cfg["env"]["urdfAsset"]["fixBaseLink"]
asset_options.density = 0.001
asset_options.angular_damping = 0.0
asset_options.linear_damping = 0.0
asset_options.armature = 0.0
asset_options.thickness = 0.01
asset_options.disable_gravity = False
anymal_asset = self.gym.load_asset(self.sim, asset_root, asset_file, asset_options)
self.num_dof = self.gym.get_asset_dof_count(anymal_asset)
self.num_bodies = self.gym.get_asset_rigid_body_count(anymal_asset)
# prepare friction randomization
rigid_shape_prop = self.gym.get_asset_rigid_shape_properties(anymal_asset)
friction_range = self.cfg["env"]["learn"]["frictionRange"]
num_buckets = 100
friction_buckets = torch_rand_float(friction_range[0], friction_range[1], (num_buckets,1), device=self.device)
self.base_init_state = to_torch(self.base_init_state, device=self.device, requires_grad=False)
start_pose = gymapi.Transform()
start_pose.p = gymapi.Vec3(*self.base_init_state[:3])
body_names = self.gym.get_asset_rigid_body_names(anymal_asset)
self.dof_names = self.gym.get_asset_dof_names(anymal_asset)
foot_name = self.cfg["env"]["urdfAsset"]["footName"]
knee_name = self.cfg["env"]["urdfAsset"]["kneeName"]
feet_names = [s for s in body_names if foot_name in s]
self.feet_indices = torch.zeros(len(feet_names), dtype=torch.long, device=self.device, requires_grad=False)
knee_names = [s for s in body_names if knee_name in s]
self.knee_indices = torch.zeros(len(knee_names), dtype=torch.long, device=self.device, requires_grad=False)
self.base_index = 0
dof_props = self.gym.get_asset_dof_properties(anymal_asset)
# env origins
self.env_origins = torch.zeros(self.num_envs, 3, device=self.device, requires_grad=False)
if not self.curriculum: self.cfg["env"]["terrain"]["maxInitMapLevel"] = self.cfg["env"]["terrain"]["numLevels"] - 1
self.terrain_levels = torch.randint(0, self.cfg["env"]["terrain"]["maxInitMapLevel"]+1, (self.num_envs,), device=self.device)
self.terrain_types = torch.randint(0, self.cfg["env"]["terrain"]["numTerrains"], (self.num_envs,), device=self.device)
if self.custom_origins:
self.terrain_origins = torch.from_numpy(self.terrain.env_origins).to(self.device).to(torch.float)
spacing = 0.
env_lower = gymapi.Vec3(-spacing, -spacing, 0.0)
env_upper = gymapi.Vec3(spacing, spacing, spacing)
self.anymal_handles = []
self.envs = []
for i in range(self.num_envs):
# create env instance
env_handle = self.gym.create_env(self.sim, env_lower, env_upper, num_per_row)
if self.custom_origins:
self.env_origins[i] = self.terrain_origins[self.terrain_levels[i], self.terrain_types[i]]
pos = self.env_origins[i].clone()
pos[:2] += torch_rand_float(-1., 1., (2, 1), device=self.device).squeeze(1)
start_pose.p = gymapi.Vec3(*pos)
for s in range(len(rigid_shape_prop)):
rigid_shape_prop[s].friction = friction_buckets[i % num_buckets]
self.gym.set_asset_rigid_shape_properties(anymal_asset, rigid_shape_prop)
anymal_handle = self.gym.create_actor(env_handle, anymal_asset, start_pose, "anymal", i, 0, 0)
self.gym.set_actor_dof_properties(env_handle, anymal_handle, dof_props)
self.envs.append(env_handle)
self.anymal_handles.append(anymal_handle)
for i in range(len(feet_names)):
self.feet_indices[i] = self.gym.find_actor_rigid_body_handle(self.envs[0], self.anymal_handles[0], feet_names[i])
for i in range(len(knee_names)):
self.knee_indices[i] = self.gym.find_actor_rigid_body_handle(self.envs[0], self.anymal_handles[0], knee_names[i])
self.base_index = self.gym.find_actor_rigid_body_handle(self.envs[0], self.anymal_handles[0], "base")
def check_termination(self):
self.reset_buf = torch.norm(self.contact_forces[:, self.base_index, :], dim=1) > 1.
if not self.allow_knee_contacts:
knee_contact = torch.norm(self.contact_forces[:, self.knee_indices, :], dim=2) > 1.
self.reset_buf |= torch.any(knee_contact, dim=1)
self.reset_buf = torch.where(self.progress_buf >= self.max_episode_length - 1, torch.ones_like(self.reset_buf), self.reset_buf)
def compute_observations(self):
self.measured_heights = self.get_heights()
heights = torch.clip(self.root_states[:, 2].unsqueeze(1) - 0.5 - self.measured_heights, -1, 1.) * self.height_meas_scale
self.obs_buf = torch.cat(( self.base_lin_vel * self.lin_vel_scale,
self.base_ang_vel * self.ang_vel_scale,
self.projected_gravity,
self.commands[:, :3] * self.commands_scale,
self.dof_pos * self.dof_pos_scale,
self.dof_vel * self.dof_vel_scale,
heights,
self.actions
), dim=-1)
def compute_reward(self):
# velocity tracking reward
lin_vel_error = torch.sum(torch.square(self.commands[:, :2] - self.base_lin_vel[:, :2]), dim=1)
ang_vel_error = torch.square(self.commands[:, 2] - self.base_ang_vel[:, 2])
rew_lin_vel_xy = torch.exp(-lin_vel_error/0.25) * self.rew_scales["lin_vel_xy"]
rew_ang_vel_z = torch.exp(-ang_vel_error/0.25) * self.rew_scales["ang_vel_z"]
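# the exponential kernel maps squared tracking error to (0, 1]: zero error earns the
# full scale, while an error equal to the 0.25 temperature already decays it to exp(-1) ~ 0.37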
# other base velocity penalties
rew_lin_vel_z = torch.square(self.base_lin_vel[:, 2]) * self.rew_scales["lin_vel_z"]
rew_ang_vel_xy = torch.sum(torch.square(self.base_ang_vel[:, :2]), dim=1) * self.rew_scales["ang_vel_xy"]
# orientation penalty
rew_orient = torch.sum(torch.square(self.projected_gravity[:, :2]), dim=1) * self.rew_scales["orient"]
# base height penalty
rew_base_height = torch.square(self.root_states[:, 2] - 0.52) * self.rew_scales["base_height"] # TODO add target base height to cfg
# torque penalty
rew_torque = torch.sum(torch.square(self.torques), dim=1) * self.rew_scales["torque"]
# joint acc penalty
rew_joint_acc = torch.sum(torch.square(self.last_dof_vel - self.dof_vel), dim=1) * self.rew_scales["joint_acc"]
# collision penalty
knee_contact = torch.norm(self.contact_forces[:, self.knee_indices, :], dim=2) > 1.
rew_collision = torch.sum(knee_contact, dim=1) * self.rew_scales["collision"] # sum penalizes each contacting knee; torch.any would penalize once per env
# stumbling penalty
stumble = (torch.norm(self.contact_forces[:, self.feet_indices, :2], dim=2) > 5.) * (torch.abs(self.contact_forces[:, self.feet_indices, 2]) < 1.)
rew_stumble = torch.sum(stumble, dim=1) * self.rew_scales["stumble"]
# action rate penalty
rew_action_rate = torch.sum(torch.square(self.last_actions - self.actions), dim=1) * self.rew_scales["action_rate"]
# air time reward
# contact = torch.norm(contact_forces[:, feet_indices, :], dim=2) > 1.
contact = self.contact_forces[:, self.feet_indices, 2] > 1.
first_contact = (self.feet_air_time > 0.) * contact
self.feet_air_time += self.dt
rew_airTime = torch.sum((self.feet_air_time - 0.5) * first_contact, dim=1) * self.rew_scales["air_time"] # reward only on first contact with the ground
rew_airTime *= torch.norm(self.commands[:, :2], dim=1) > 0.1 #no reward for zero command
self.feet_air_time *= ~contact
# cosmetic penalty for hip motion
rew_hip = torch.sum(torch.abs(self.dof_pos[:, [0, 3, 6, 9]] - self.default_dof_pos[:, [0, 3, 6, 9]]), dim=1)* self.rew_scales["hip"]
# total reward
self.rew_buf = rew_lin_vel_xy + rew_ang_vel_z + rew_lin_vel_z + rew_ang_vel_xy + rew_orient + rew_base_height +\
rew_torque + rew_joint_acc + rew_collision + rew_action_rate + rew_airTime + rew_hip + rew_stumble
self.rew_buf = torch.clip(self.rew_buf, min=0., max=None)
# add termination reward
self.rew_buf += self.rew_scales["termination"] * self.reset_buf * ~self.timeout_buf
# log episode reward sums
self.episode_sums["lin_vel_xy"] += rew_lin_vel_xy
self.episode_sums["ang_vel_z"] += rew_ang_vel_z
self.episode_sums["lin_vel_z"] += rew_lin_vel_z
self.episode_sums["ang_vel_xy"] += rew_ang_vel_xy
self.episode_sums["orient"] += rew_orient
self.episode_sums["torques"] += rew_torque
self.episode_sums["joint_acc"] += rew_joint_acc
self.episode_sums["collision"] += rew_collision
self.episode_sums["stumble"] += rew_stumble
self.episode_sums["action_rate"] += rew_action_rate
self.episode_sums["air_time"] += rew_airTime
self.episode_sums["base_height"] += rew_base_height
self.episode_sums["hip"] += rew_hip
def reset_idx(self, env_ids):
positions_offset = torch_rand_float(0.5, 1.5, (len(env_ids), self.num_dof), device=self.device)
velocities = torch_rand_float(-0.1, 0.1, (len(env_ids), self.num_dof), device=self.device)
self.dof_pos[env_ids] = self.default_dof_pos[env_ids] * positions_offset
self.dof_vel[env_ids] = velocities
env_ids_int32 = env_ids.to(dtype=torch.int32)
if self.custom_origins:
self.update_terrain_level(env_ids)
self.root_states[env_ids] = self.base_init_state
self.root_states[env_ids, :3] += self.env_origins[env_ids]
self.root_states[env_ids, :2] += torch_rand_float(-0.5, 0.5, (len(env_ids), 2), device=self.device)
else:
self.root_states[env_ids] = self.base_init_state
self.gym.set_actor_root_state_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.root_states),
gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))
self.gym.set_dof_state_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.dof_state),
gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))
self.commands[env_ids, 0] = torch_rand_float(self.command_x_range[0], self.command_x_range[1], (len(env_ids), 1), device=self.device).squeeze()
self.commands[env_ids, 1] = torch_rand_float(self.command_y_range[0], self.command_y_range[1], (len(env_ids), 1), device=self.device).squeeze()
self.commands[env_ids, 3] = torch_rand_float(self.command_yaw_range[0], self.command_yaw_range[1], (len(env_ids), 1), device=self.device).squeeze()
self.commands[env_ids] *= (torch.norm(self.commands[env_ids, :2], dim=1) > 0.25).unsqueeze(1) # set small commands to zero
self.last_actions[env_ids] = 0.
self.last_dof_vel[env_ids] = 0.
self.feet_air_time[env_ids] = 0.
self.progress_buf[env_ids] = 0
self.reset_buf[env_ids] = 1
# fill extras
self.extras["episode"] = {}
for key in self.episode_sums.keys():
self.extras["episode"]['rew_' + key] = torch.mean(self.episode_sums[key][env_ids]) / self.max_episode_length_s
self.episode_sums[key][env_ids] = 0.
self.extras["episode"]["terrain_level"] = torch.mean(self.terrain_levels.float())
def update_terrain_level(self, env_ids):
if not self.init_done or not self.curriculum:
# don't change on initial reset
return
distance = torch.norm(self.root_states[env_ids, :2] - self.env_origins[env_ids, :2], dim=1)
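# demote robots that covered less than 25% of the distance their commanded velocity
# implies over the episode; promote robots that made it past half the terrain length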
self.terrain_levels[env_ids] -= 1 * (distance < torch.norm(self.commands[env_ids, :2])*self.max_episode_length_s*0.25)
self.terrain_levels[env_ids] += 1 * (distance > self.terrain.env_length / 2)
self.terrain_levels[env_ids] = torch.clip(self.terrain_levels[env_ids], 0) % self.terrain.env_rows
self.env_origins[env_ids] = self.terrain_origins[self.terrain_levels[env_ids], self.terrain_types[env_ids]]
def push_robots(self):
self.root_states[:, 7:9] = torch_rand_float(-1., 1., (self.num_envs, 2), device=self.device) # lin vel x/y
self.gym.set_actor_root_state_tensor(self.sim, gymtorch.unwrap_tensor(self.root_states))
def pre_physics_step(self, actions):
self.actions = actions.clone().to(self.device)
for i in range(self.decimation):
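# hold each policy action for `decimation` sim steps, recomputing the PD torque
# (clipped to the +/-80 N m actuator limit) against the latest joint state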
torques = torch.clip(self.Kp*(self.action_scale*self.actions + self.default_dof_pos - self.dof_pos) - self.Kd*self.dof_vel,
-80., 80.)
self.gym.set_dof_actuation_force_tensor(self.sim, gymtorch.unwrap_tensor(torques))
self.torques = torques.view(self.torques.shape)
self.gym.simulate(self.sim)
if self.device == 'cpu':
self.gym.fetch_results(self.sim, True)
self.gym.refresh_dof_state_tensor(self.sim)
def post_physics_step(self):
# self.gym.refresh_dof_state_tensor(self.sim) # done in step
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_net_contact_force_tensor(self.sim)
self.progress_buf += 1
self.randomize_buf += 1
self.common_step_counter += 1
if self.common_step_counter % self.push_interval == 0:
self.push_robots()
# prepare quantities
self.base_quat = self.root_states[:, 3:7]
self.base_lin_vel = quat_rotate_inverse(self.base_quat, self.root_states[:, 7:10])
self.base_ang_vel = quat_rotate_inverse(self.base_quat, self.root_states[:, 10:13])
self.projected_gravity = quat_rotate_inverse(self.base_quat, self.gravity_vec)
forward = quat_apply(self.base_quat, self.forward_vec)
heading = torch.atan2(forward[:, 1], forward[:, 0])
self.commands[:, 2] = torch.clip(0.5*wrap_to_pi(self.commands[:, 3] - heading), -1., 1.)
# compute observations, rewards, resets, ...
self.check_termination()
self.compute_reward()
env_ids = self.reset_buf.nonzero(as_tuple=False).flatten()
if len(env_ids) > 0:
self.reset_idx(env_ids)
self.compute_observations()
if self.add_noise:
self.obs_buf += (2 * torch.rand_like(self.obs_buf) - 1) * self.noise_scale_vec
self.last_actions[:] = self.actions[:]
self.last_dof_vel[:] = self.dof_vel[:]
if self.viewer and self.enable_viewer_sync and self.debug_viz:
# draw height lines
self.gym.clear_lines(self.viewer)
self.gym.refresh_rigid_body_state_tensor(self.sim)
sphere_geom = gymutil.WireframeSphereGeometry(0.02, 4, 4, None, color=(1, 1, 0))
for i in range(self.num_envs):
base_pos = (self.root_states[i, :3]).cpu().numpy()
heights = self.measured_heights[i].cpu().numpy()
height_points = quat_apply_yaw(self.base_quat[i].repeat(heights.shape[0]), self.height_points[i]).cpu().numpy()
for j in range(heights.shape[0]):
x = height_points[j, 0] + base_pos[0]
y = height_points[j, 1] + base_pos[1]
z = heights[j]
sphere_pose = gymapi.Transform(gymapi.Vec3(x, y, z), r=None)
gymutil.draw_lines(sphere_geom, self.gym, self.viewer, self.envs[i], sphere_pose)
def init_height_points(self):
# 1mx1.6m rectangle (without center line)
y = 0.1 * torch.tensor([-5, -4, -3, -2, -1, 1, 2, 3, 4, 5], device=self.device, requires_grad=False) # 10-50cm on each side
x = 0.1 * torch.tensor([-8, -7, -6, -5, -4, -3, -2, 2, 3, 4, 5, 6, 7, 8], device=self.device, requires_grad=False) # 20-80cm on each side
grid_x, grid_y = torch.meshgrid(x, y)
self.num_height_points = grid_x.numel()
points = torch.zeros(self.num_envs, self.num_height_points, 3, device=self.device, requires_grad=False)
points[:, :, 0] = grid_x.flatten()
points[:, :, 1] = grid_y.flatten()
return points
def get_heights(self, env_ids=None):
if self.cfg["env"]["terrain"]["terrainType"] == 'plane':
return torch.zeros(self.num_envs, self.num_height_points, device=self.device, requires_grad=False)
elif self.cfg["env"]["terrain"]["terrainType"] == 'none':
raise NameError("Can't measure height with terrain type 'none'")
if env_ids is not None:
points = quat_apply_yaw(self.base_quat[env_ids].repeat(1, self.num_height_points), self.height_points[env_ids]) + (self.root_states[env_ids, :3]).unsqueeze(1)
else:
points = quat_apply_yaw(self.base_quat.repeat(1, self.num_height_points), self.height_points) + (self.root_states[:, :3]).unsqueeze(1)
points += self.terrain.border_size
points = (points/self.terrain.horizontal_scale).long()
px = points[:, :, 0].view(-1)
py = points[:, :, 1].view(-1)
px = torch.clip(px, 0, self.height_samples.shape[0]-2)
py = torch.clip(py, 0, self.height_samples.shape[1]-2)
heights1 = self.height_samples[px, py]
heights2 = self.height_samples[px+1, py+1]
heights = torch.min(heights1, heights2)
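# taking the min of the two diagonally adjacent samples gives a conservative
# (never over-reported) height estimate at each query point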
return heights.view(self.num_envs, -1) * self.terrain.vertical_scale
# terrain generator
from isaacgym.terrain_utils import *
class Terrain:
def __init__(self, cfg, num_robots) -> None:
self.type = cfg["terrainType"]
if self.type in ["none", 'plane']:
return
self.horizontal_scale = 0.1
self.vertical_scale = 0.005
self.border_size = 20
self.num_per_env = 2
self.env_length = cfg["mapLength"]
self.env_width = cfg["mapWidth"]
self.proportions = [np.sum(cfg["terrainProportions"][:i+1]) for i in range(len(cfg["terrainProportions"]))]
self.env_rows = cfg["numLevels"]
self.env_cols = cfg["numTerrains"]
self.num_maps = self.env_rows * self.env_cols
self.num_per_env = int(num_robots / self.num_maps)
self.env_origins = np.zeros((self.env_rows, self.env_cols, 3))
self.width_per_env_pixels = int(self.env_width / self.horizontal_scale)
self.length_per_env_pixels = int(self.env_length / self.horizontal_scale)
self.border = int(self.border_size/self.horizontal_scale)
self.tot_cols = int(self.env_cols * self.width_per_env_pixels) + 2 * self.border
self.tot_rows = int(self.env_rows * self.length_per_env_pixels) + 2 * self.border
self.height_field_raw = np.zeros((self.tot_rows , self.tot_cols), dtype=np.int16)
if cfg["curriculum"]:
self.curriculum(num_robots, num_terrains=self.env_cols, num_levels=self.env_rows)
else:
self.randomized_terrain()
self.heightsamples = self.height_field_raw
self.vertices, self.triangles = convert_heightfield_to_trimesh(self.height_field_raw, self.horizontal_scale, self.vertical_scale, cfg["slopeTreshold"])
def randomized_terrain(self):
for k in range(self.num_maps):
# Env coordinates in the world
(i, j) = np.unravel_index(k, (self.env_rows, self.env_cols))
# Heightfield coordinate system from now on
start_x = self.border + i * self.length_per_env_pixels
end_x = self.border + (i + 1) * self.length_per_env_pixels
start_y = self.border + j * self.width_per_env_pixels
end_y = self.border + (j + 1) * self.width_per_env_pixels
terrain = SubTerrain("terrain",
width=self.width_per_env_pixels,
length=self.width_per_env_pixels,
vertical_scale=self.vertical_scale,
horizontal_scale=self.horizontal_scale)
choice = np.random.uniform(0, 1)
if choice < 0.1:
if np.random.choice([0, 1]):
pyramid_sloped_terrain(terrain, np.random.choice([-0.3, -0.2, 0, 0.2, 0.3]))
random_uniform_terrain(terrain, min_height=-0.1, max_height=0.1, step=0.05, downsampled_scale=0.2)
else:
pyramid_sloped_terrain(terrain, np.random.choice([-0.3, -0.2, 0, 0.2, 0.3]))
elif choice < 0.6:
# step_height = np.random.choice([-0.18, -0.15, -0.1, -0.05, 0.05, 0.1, 0.15, 0.18])
step_height = np.random.choice([-0.15, 0.15])
pyramid_stairs_terrain(terrain, step_width=0.31, step_height=step_height, platform_size=3.)
elif choice < 1.:
discrete_obstacles_terrain(terrain, 0.15, 1., 2., 40, platform_size=3.)
self.height_field_raw[start_x: end_x, start_y:end_y] = terrain.height_field_raw
env_origin_x = (i + 0.5) * self.env_length
env_origin_y = (j + 0.5) * self.env_width
x1 = int((self.env_length/2. - 1) / self.horizontal_scale)
x2 = int((self.env_length/2. + 1) / self.horizontal_scale)
y1 = int((self.env_width/2. - 1) / self.horizontal_scale)
y2 = int((self.env_width/2. + 1) / self.horizontal_scale)
env_origin_z = np.max(terrain.height_field_raw[x1:x2, y1:y2])*self.vertical_scale
self.env_origins[i, j] = [env_origin_x, env_origin_y, env_origin_z]
def curriculum(self, num_robots, num_terrains, num_levels):
num_robots_per_map = int(num_robots / num_terrains)
left_over = num_robots % num_terrains
idx = 0
for j in range(num_terrains):
for i in range(num_levels):
terrain = SubTerrain("terrain",
width=self.width_per_env_pixels,
length=self.width_per_env_pixels,
vertical_scale=self.vertical_scale,
horizontal_scale=self.horizontal_scale)
difficulty = i / num_levels
choice = j / num_terrains
slope = difficulty * 0.4
step_height = 0.05 + 0.175 * difficulty
discrete_obstacles_height = 0.025 + difficulty * 0.15
stepping_stones_size = 2 - 1.8 * difficulty
if choice < self.proportions[0]:
if choice < 0.05:
slope *= -1
pyramid_sloped_terrain(terrain, slope=slope, platform_size=3.)
elif choice < self.proportions[1]:
if choice < 0.15:
slope *= -1
pyramid_sloped_terrain(terrain, slope=slope, platform_size=3.)
random_uniform_terrain(terrain, min_height=-0.1, max_height=0.1, step=0.025, downsampled_scale=0.2)
elif choice < self.proportions[3]:
if choice<self.proportions[2]:
step_height *= -1
pyramid_stairs_terrain(terrain, step_width=0.31, step_height=step_height, platform_size=3.)
elif choice < self.proportions[4]:
discrete_obstacles_terrain(terrain, discrete_obstacles_height, 1., 2., 40, platform_size=3.)
else:
stepping_stones_terrain(terrain, stone_size=stepping_stones_size, stone_distance=0.1, max_height=0., platform_size=3.)
# Heightfield coordinate system
start_x = self.border + i * self.length_per_env_pixels
end_x = self.border + (i + 1) * self.length_per_env_pixels
start_y = self.border + j * self.width_per_env_pixels
end_y = self.border + (j + 1) * self.width_per_env_pixels
self.height_field_raw[start_x: end_x, start_y:end_y] = terrain.height_field_raw
robots_in_map = num_robots_per_map
if j < left_over:
robots_in_map +=1
env_origin_x = (i + 0.5) * self.env_length
env_origin_y = (j + 0.5) * self.env_width
x1 = int((self.env_length/2. - 1) / self.horizontal_scale)
x2 = int((self.env_length/2. + 1) / self.horizontal_scale)
y1 = int((self.env_width/2. - 1) / self.horizontal_scale)
y2 = int((self.env_width/2. + 1) / self.horizontal_scale)
env_origin_z = np.max(terrain.height_field_raw[x1:x2, y1:y2])*self.vertical_scale
self.env_origins[i, j] = [env_origin_x, env_origin_y, env_origin_z]
@torch.jit.script
def quat_apply_yaw(quat, vec):
quat_yaw = quat.clone().view(-1, 4)
quat_yaw[:, :2] = 0.
quat_yaw = normalize(quat_yaw)
return quat_apply(quat_yaw, vec)
@torch.jit.script
def wrap_to_pi(angles):
angles %= 2*np.pi
angles -= 2*np.pi * (angles > np.pi)
return angles
| 38,280 | Python | 54.640988 | 217 | 0.610789 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/trifinger.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import numpy as np
import os
import torch
from isaacgym import gymtorch
from isaacgym import gymapi
from isaacgymenvs.utils.torch_jit_utils import quat_mul
from collections import OrderedDict
project_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
from isaacgymenvs.utils.torch_jit_utils import *
from isaacgymenvs.tasks.base.vec_task import VecTask
from types import SimpleNamespace
from collections import deque
from typing import Deque, Dict, Tuple, Union
# python
import enum
import numpy as np
# ################### #
# Dimensions of robot #
# ################### #
class TrifingerDimensions(enum.Enum):
"""
Dimensions of the tri-finger robot.
Note: While a dimensions class may not seem necessary for the tri-finger robot, since
it is fixed-base, having one is useful for floating-base systems.
"""
# general state
# cartesian position + quaternion orientation
PoseDim = 7
# linear velocity + angular velocity
VelocityDim = 6
# state: pose + velocity
StateDim = 13
# force + torque
WrenchDim = 6
# for robot
# number of fingers
NumFingers = 3
# for three fingers
JointPositionDim = 9
JointVelocityDim = 9
JointTorqueDim = 9
# generalized coordinates
GeneralizedCoordinatesDim = JointPositionDim
GeneralizedVelocityDim = JointVelocityDim
# for objects
ObjectPoseDim = 7
ObjectVelocityDim = 6
# ################# #
# Different objects #
# ################# #
# radius of the area
ARENA_RADIUS = 0.195
class CuboidalObject:
"""
Fields for a cuboidal object.
@note The motivation for this class: if domain randomization is performed over the
size of the cuboid, the derived attributes are recomputed automatically.
"""
# 3D radius of the cuboid
radius_3d: float
# distance from wall to the center
max_com_distance_to_center: float
# minimum and maximum height for spawning the object
min_height: float
max_height = 0.1
NumKeypoints = 8
ObjectPositionDim = 3
KeypointsCoordsDim = NumKeypoints * ObjectPositionDim
def __init__(self, size: Union[float, Tuple[float, float, float]]):
"""Initialize the cuboidal object.
Args:
size: The size of the object along x, y, z in meters. If a single float is provided, then it is assumed that
object is a cube.
"""
# decide the size depending on input type
if isinstance(size, float):
self._size = (size, size, size)
else:
self._size = size
# compute remaining attributes
self.__compute()
"""
Properties
"""
@property
def size(self) -> Tuple[float, float, float]:
"""
Returns the dimensions of the cuboid object (x, y, z) in meters.
"""
return self._size
"""
Configurations
"""
@size.setter
def size(self, size: Union[float, Tuple[float, float, float]]):
""" Set size of the object.
Args:
size: The size of the object along x, y, z in meters. If a single float is provided, then it is assumed
that object is a cube.
"""
# decide the size depending on input type
if isinstance(size, float):
self._size = (size, size, size)
else:
self._size = size
# compute attributes
self.__compute()
"""
Private members
"""
def __compute(self):
"""Compute the attributes for the object.
"""
# compute 3D radius of the cuboid
max_len = max(self._size)
self.radius_3d = max_len * np.sqrt(3) / 2
# compute distance from wall to the center
self.max_com_distance_to_center = ARENA_RADIUS - self.radius_3d
# minimum height for spawning the object
self.min_height = self._size[2] / 2
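# e.g. for the default 6.5 cm cube: radius_3d = 0.065 * sqrt(3) / 2 ~ 0.0563 m,
# max_com_distance_to_center = 0.195 - 0.0563 ~ 0.1387 m, min_height = 0.0325 m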
class Trifinger(VecTask):
# constants
# directory where assets for the simulator are present
_trifinger_assets_dir = os.path.join(project_dir, "../", "assets", "trifinger")
# robot urdf (path relative to `_trifinger_assets_dir`)
_robot_urdf_file = "robot_properties_fingers/urdf/pro/trifingerpro.urdf"
# stage urdf (path relative to `_trifinger_assets_dir`)
# _stage_urdf_file = "robot_properties_fingers/urdf/trifinger_stage.urdf"
_table_urdf_file = "robot_properties_fingers/urdf/table_without_border.urdf"
_boundary_urdf_file = "robot_properties_fingers/urdf/high_table_boundary.urdf"
# object urdf (path relative to `_trifinger_assets_dir`)
# TODO: Make object URDF configurable.
_object_urdf_file = "objects/urdf/cube_multicolor_rrc.urdf"
# physical dimensions of the object
# TODO: Make object dimensions configurable.
_object_dims = CuboidalObject(0.065)
# dimensions of the system
_dims = TrifingerDimensions
# Constants for limits
# Ref: https://github.com/rr-learning/rrc_simulation/blob/master/python/rrc_simulation/trifinger_platform.py#L68
# maximum joint torque (in N-m) applicable on each actuator
_max_torque_Nm = 0.36
# maximum joint velocity (in rad/s) on each actuator
_max_velocity_radps = 10
# History of state: Number of timesteps to save history for
# Note: Currently used only to manage history of object and frame states.
# This can be extended to other observations (as done in ANYmal).
_state_history_len = 2
# buffers to store the simulation data
# goal poses for the object [num. of instances, 7] where 7: (x, y, z, quat)
_object_goal_poses_buf: torch.Tensor
# DOF state of the system [num. of instances, num. of dof, 2] where last index: pos, vel
_dof_state: torch.Tensor
# Rigid body state of the system [num. of instances, num. of bodies, 13] where 13: (x, y, z, quat, v, omega)
_rigid_body_state: torch.Tensor
# Root prim states [num. of actors, 13] where 13: (x, y, z, quat, v, omega)
_actors_root_state: torch.Tensor
# Force-torque sensor array [num. of instances, num. of bodies * wrench]
_ft_sensors_values: torch.Tensor
# DOF position of the system [num. of instances, num. of dof]
_dof_position: torch.Tensor
# DOF velocity of the system [num. of instances, num. of dof]
_dof_velocity: torch.Tensor
# DOF torque of the system [num. of instances, num. of dof]
_dof_torque: torch.Tensor
# Fingertip links state list([num. of instances, num. of fingers, 13]) where 13: (x, y, z, quat, v, omega)
# The length of list is the history of the state: 0: t, 1: t-1, 2: t-2, ... step.
_fingertips_frames_state_history: Deque[torch.Tensor] = deque(maxlen=_state_history_len)
# Object prim state [num. of instances, 13] where 13: (x, y, z, quat, v, omega)
# The length of list is the history of the state: 0: t, 1: t-1, 2: t-2, ... step.
_object_state_history: Deque[torch.Tensor] = deque(maxlen=_state_history_len)
# stores the last action output
_last_action: torch.Tensor
# keeps track of the number of goal resets
_successes: torch.Tensor
# keeps track of number of consecutive successes
_consecutive_successes: float
_robot_limits: dict = {
"joint_position": SimpleNamespace(
# matches those on the real robot
low=np.array([-0.33, 0.0, -2.7] * _dims.NumFingers.value, dtype=np.float32),
high=np.array([1.0, 1.57, 0.0] * _dims.NumFingers.value, dtype=np.float32),
default=np.array([0.0, 0.9, -2.0] * _dims.NumFingers.value, dtype=np.float32),
),
"joint_velocity": SimpleNamespace(
low=np.full(_dims.JointVelocityDim.value, -_max_velocity_radps, dtype=np.float32),
high=np.full(_dims.JointVelocityDim.value, _max_velocity_radps, dtype=np.float32),
default=np.zeros(_dims.JointVelocityDim.value, dtype=np.float32),
),
"joint_torque": SimpleNamespace(
low=np.full(_dims.JointTorqueDim.value, -_max_torque_Nm, dtype=np.float32),
high=np.full(_dims.JointTorqueDim.value, _max_torque_Nm, dtype=np.float32),
default=np.zeros(_dims.JointTorqueDim.value, dtype=np.float32),
),
"fingertip_position": SimpleNamespace(
low=np.array([-0.4, -0.4, 0], dtype=np.float32),
high=np.array([0.4, 0.4, 0.5], dtype=np.float32),
),
"fingertip_orientation": SimpleNamespace(
low=-np.ones(4, dtype=np.float32),
high=np.ones(4, dtype=np.float32),
),
"fingertip_velocity": SimpleNamespace(
low=np.full(_dims.VelocityDim.value, -0.2, dtype=np.float32),
high=np.full(_dims.VelocityDim.value, 0.2, dtype=np.float32),
),
"fingertip_wrench": SimpleNamespace(
low=np.full(_dims.WrenchDim.value, -1.0, dtype=np.float32),
high=np.full(_dims.WrenchDim.value, 1.0, dtype=np.float32),
),
# used if we want to have joint stiffness/damping as parameters
"joint_stiffness": SimpleNamespace(
low=np.array([1.0, 1.0, 1.0] * _dims.NumFingers.value, dtype=np.float32),
high=np.array([50.0, 50.0, 50.0] * _dims.NumFingers.value, dtype=np.float32),
),
"joint_damping": SimpleNamespace(
low=np.array([0.01, 0.03, 0.0001] * _dims.NumFingers.value, dtype=np.float32),
high=np.array([1.0, 3.0, 0.01] * _dims.NumFingers.value, dtype=np.float32),
),
}
# limits of the object (mapped later: str -> torch.tensor)
_object_limits: dict = {
"position": SimpleNamespace(
low=np.array([-0.3, -0.3, 0], dtype=np.float32),
high=np.array([0.3, 0.3, 0.3], dtype=np.float32),
default=np.array([0, 0, _object_dims.min_height], dtype=np.float32)
),
# difference between two positions
"position_delta": SimpleNamespace(
low=np.array([-0.6, -0.6, 0], dtype=np.float32),
high=np.array([0.6, 0.6, 0.3], dtype=np.float32),
default=np.array([0, 0, 0], dtype=np.float32)
),
"orientation": SimpleNamespace(
low=-np.ones(4, dtype=np.float32),
high=np.ones(4, dtype=np.float32),
default=np.array([0.0, 0.0, 0.0, 1.0], dtype=np.float32),
),
"velocity": SimpleNamespace(
low=np.full(_dims.VelocityDim.value, -0.5, dtype=np.float32),
high=np.full(_dims.VelocityDim.value, 0.5, dtype=np.float32),
default=np.zeros(_dims.VelocityDim.value, dtype=np.float32)
),
"scale": SimpleNamespace(
low=np.full(1, 0.0, dtype=np.float32),
high=np.full(1, 1.0, dtype=np.float32),
),
}
# PD gains for the robot (mapped later: str -> torch.tensor)
# Ref: https://github.com/rr-learning/rrc_simulation/blob/master/python/rrc_simulation/sim_finger.py#L49-L65
_robot_dof_gains = {
# The kp and kd gains of the PD control of the fingers.
# Note: This depends on simulation step size and is set for a rate of 250 Hz.
"stiffness": [10.0, 10.0, 10.0] * _dims.NumFingers.value,
"damping": [0.1, 0.3, 0.001] * _dims.NumFingers.value,
# The kd gains used for damping the joint motor velocities during the
# safety torque check on the joint motors.
"safety_damping": [0.08, 0.08, 0.04] * _dims.NumFingers.value
}
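# these gains feed a hand-rolled PD loop (the simulator's own drives are zeroed in
# _create_envs), presumably of the standard form
#   torque = stiffness * (q_des - q) - damping * dq
# clipped to +/- _max_torque_Nm before being applied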
action_dim = _dims.JointTorqueDim.value
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.cfg = cfg
self.obs_spec = {
"robot_q": self._dims.GeneralizedCoordinatesDim.value,
"robot_u": self._dims.GeneralizedVelocityDim.value,
"object_q": self._dims.ObjectPoseDim.value,
"object_q_des": self._dims.ObjectPoseDim.value,
"command": self.action_dim
}
if self.cfg["env"]["asymmetric_obs"]:
self.state_spec = {
# observations spec
**self.obs_spec,
# extra observations (added separately to make computations simpler)
"object_u": self._dims.ObjectVelocityDim.value,
"fingertip_state": self._dims.NumFingers.value * self._dims.StateDim.value,
"robot_a": self._dims.GeneralizedVelocityDim.value,
"fingertip_wrench": self._dims.NumFingers.value * self._dims.WrenchDim.value,
}
else:
self.state_spec = self.obs_spec
self.action_spec = {
"command": self.action_dim
}
self.cfg["env"]["numObservations"] = sum(self.obs_spec.values())
self.cfg["env"]["numStates"] = sum(self.state_spec.values())
self.cfg["env"]["numActions"] = sum(self.action_spec.values())
self.max_episode_length = self.cfg["env"]["episodeLength"]
self.randomize = self.cfg["task"]["randomize"]
self.randomization_params = self.cfg["task"]["randomization_params"]
# define prims present in the scene
prim_names = ["robot", "table", "boundary", "object", "goal_object"]
# mapping from name to asset instance
self.gym_assets = dict.fromkeys(prim_names)
# mapping from name to gym indices
self.gym_indices = dict.fromkeys(prim_names)
# mapping from name to gym rigid body handles
# name of finger tips links i.e. end-effector frames
fingertips_frames = ["finger_tip_link_0", "finger_tip_link_120", "finger_tip_link_240"]
self._fingertips_handles = OrderedDict.fromkeys(fingertips_frames, None)
# mapping from name to gym dof index
robot_dof_names = list()
for finger_pos in ['0', '120', '240']:
robot_dof_names += [f'finger_base_to_upper_joint_{finger_pos}',
f'finger_upper_to_middle_joint_{finger_pos}',
f'finger_middle_to_lower_joint_{finger_pos}']
self._robot_dof_indices = OrderedDict.fromkeys(robot_dof_names, None)
super().__init__(config=self.cfg, rl_device=rl_device, sim_device=sim_device, graphics_device_id=graphics_device_id, headless=headless, virtual_screen_capture=virtual_screen_capture, force_render=force_render)
if self.viewer is not None:
cam_pos = gymapi.Vec3(0.7, 0.0, 0.7)
cam_target = gymapi.Vec3(0.0, 0.0, 0.0)
self.gym.viewer_camera_look_at(self.viewer, None, cam_pos, cam_target)
# change constant buffers from numpy/lists into torch tensors
# limits for robot
for limit_name in self._robot_limits:
# extract limit simple-namespace
limit_dict = self._robot_limits[limit_name].__dict__
# iterate over namespace attributes
for prop, value in limit_dict.items():
limit_dict[prop] = torch.tensor(value, dtype=torch.float, device=self.device)
# limits for the object
for limit_name in self._object_limits:
# extract limit simple-namespace
limit_dict = self._object_limits[limit_name].__dict__
# iterate over namespace attributes
for prop, value in limit_dict.items():
limit_dict[prop] = torch.tensor(value, dtype=torch.float, device=self.device)
# PD gains for actuation
for gain_name, value in self._robot_dof_gains.items():
self._robot_dof_gains[gain_name] = torch.tensor(value, dtype=torch.float, device=self.device)
# store the sampled goal poses for the object: [num. of instances, 7]
self._object_goal_poses_buf = torch.zeros((self.num_envs, 7), device=self.device, dtype=torch.float)
# get force torque sensor if enabled
if self.cfg["env"]["enable_ft_sensors"] or self.cfg["env"]["asymmetric_obs"]:
num_ft_dims = self._dims.NumFingers.value * self._dims.WrenchDim.value
sensor_tensor = self.gym.acquire_force_sensor_tensor(self.sim)
self._ft_sensors_values = gymtorch.wrap_tensor(sensor_tensor).view(self.num_envs, num_ft_dims)
dof_force_tensor = self.gym.acquire_dof_force_tensor(self.sim)
self._dof_torque = gymtorch.wrap_tensor(dof_force_tensor).view(self.num_envs, self._dims.JointTorqueDim.value)
# get gym GPU state tensors
actor_root_state_tensor = self.gym.acquire_actor_root_state_tensor(self.sim)
dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
rigid_body_tensor = self.gym.acquire_rigid_body_state_tensor(self.sim)
# refresh the simulation buffers so the wrapped tensors below start from valid data
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_rigid_body_state_tensor(self.sim)
# create wrapper tensors (views into the simulator's memory, not copies)
# DOF
self._dof_state = gymtorch.wrap_tensor(dof_state_tensor).view(self.num_envs, -1, 2)
self._dof_position = self._dof_state[..., 0]
self._dof_velocity = self._dof_state[..., 1]
# rigid body
self._rigid_body_state = gymtorch.wrap_tensor(rigid_body_tensor).view(self.num_envs, -1, 13)
# root actors
self._actors_root_state = gymtorch.wrap_tensor(actor_root_state_tensor).view(-1, 13)
# frames history
action_dim = sum(self.action_spec.values())
self._last_action = torch.zeros(self.num_envs, action_dim, dtype=torch.float, device=self.device)
fingertip_handles_indices = list(self._fingertips_handles.values())
object_indices = self.gym_indices["object"]
# timestep 0 is current tensor
curr_history_length = 0
while curr_history_length < self._state_history_len:
# add tensors to history list
self._fingertips_frames_state_history.append(self._rigid_body_state[:, fingertip_handles_indices])
self._object_state_history.append(self._actors_root_state[object_indices])
# update current history length
curr_history_length += 1
self._observations_scale = SimpleNamespace(low=None, high=None)
self._states_scale = SimpleNamespace(low=None, high=None)
self._action_scale = SimpleNamespace(low=None, high=None)
self._successes = torch.zeros(self.num_envs, device=self.device, dtype=torch.long)
self._successes_pos = torch.zeros(self.num_envs, device=self.device, dtype=torch.long)
self._successes_quat = torch.zeros(self.num_envs, device=self.device, dtype=torch.long)
self.__configure_mdp_spaces()
def create_sim(self):
self.up_axis_idx = 2 # index of up axis: Y=1, Z=2
self.sim = super().create_sim(self.device_id, self.graphics_device_id, self.physics_engine, self.sim_params)
self._create_ground_plane()
self._create_scene_assets()
self._create_envs(self.num_envs, self.cfg["env"]["envSpacing"], int(np.sqrt(self.num_envs)))
# If randomizing, apply once immediately on startup before the first sim step
if self.randomize:
self.apply_randomizations(self.randomization_params)
def _create_ground_plane(self):
plane_params = gymapi.PlaneParams()
plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)
plane_params.distance = 0.013
plane_params.static_friction = 1.0
plane_params.dynamic_friction = 1.0
self.gym.add_ground(self.sim, plane_params)
def _create_scene_assets(self):
""" Define Gym assets for stage, robot and object.
"""
# define assets
self.gym_assets["robot"] = self.__define_robot_asset()
self.gym_assets["table"] = self.__define_table_asset()
self.gym_assets["boundary"] = self.__define_boundary_asset()
self.gym_assets["object"] = self.__define_object_asset()
self.gym_assets["goal_object"] = self.__define_goal_object_asset()
# display the properties (only for debugging)
# robot
print("Trifinger Robot Asset: ")
print(f'\t Number of bodies: {self.gym.get_asset_rigid_body_count(self.gym_assets["robot"])}')
print(f'\t Number of shapes: {self.gym.get_asset_rigid_shape_count(self.gym_assets["robot"])}')
print(f'\t Number of dofs: {self.gym.get_asset_dof_count(self.gym_assets["robot"])}')
print(f'\t Number of actuated dofs: {self._dims.JointTorqueDim.value}')
# stage
print("Trifinger Table Asset: ")
print(f'\t Number of bodies: {self.gym.get_asset_rigid_body_count(self.gym_assets["table"])}')
print(f'\t Number of shapes: {self.gym.get_asset_rigid_shape_count(self.gym_assets["table"])}')
print("Trifinger Boundary Asset: ")
print(f'\t Number of bodies: {self.gym.get_asset_rigid_body_count(self.gym_assets["boundary"])}')
print(f'\t Number of shapes: {self.gym.get_asset_rigid_shape_count(self.gym_assets["boundary"])}')
def _create_envs(self, num_envs, spacing, num_per_row):
# define the dof properties for the robot
robot_dof_props = self.gym.get_asset_dof_properties(self.gym_assets["robot"])
# set dof properties based on the control mode
for k, dof_index in enumerate(self._robot_dof_indices.values()):
# note: since safety checks are employed, the simulator PD controller is not
# used. Instead the torque is computed manually and applied, even if the
# command mode is 'position'.
robot_dof_props['driveMode'][dof_index] = gymapi.DOF_MODE_EFFORT
robot_dof_props['stiffness'][dof_index] = 0.0
robot_dof_props['damping'][dof_index] = 0.0
# set dof limits
robot_dof_props['effort'][dof_index] = self._max_torque_Nm
robot_dof_props['velocity'][dof_index] = self._max_velocity_radps
robot_dof_props['lower'][dof_index] = float(self._robot_limits["joint_position"].low[k])
robot_dof_props['upper'][dof_index] = float(self._robot_limits["joint_position"].high[k])
self.envs = []
# define lower and upper region bound for each environment
env_lower_bound = gymapi.Vec3(-self.cfg["env"]["envSpacing"], -self.cfg["env"]["envSpacing"], 0.0)
env_upper_bound = gymapi.Vec3(self.cfg["env"]["envSpacing"], self.cfg["env"]["envSpacing"], self.cfg["env"]["envSpacing"])
num_envs_per_row = int(np.sqrt(self.num_envs))
# initialize gym indices buffer as a list
# note: later the list is converted to torch tensor for ease in interfacing with IsaacGym.
for asset_name in self.gym_indices.keys():
self.gym_indices[asset_name] = list()
# count number of shapes and bodies
max_agg_bodies = 0
max_agg_shapes = 0
for asset in self.gym_assets.values():
max_agg_bodies += self.gym.get_asset_rigid_body_count(asset)
max_agg_shapes += self.gym.get_asset_rigid_shape_count(asset)
# iterate and create environment instances
for env_index in range(self.num_envs):
# create environment
env_ptr = self.gym.create_env(self.sim, env_lower_bound, env_upper_bound, num_envs_per_row)
# begin aggregation mode if enabled - this can improve simulation performance
if self.cfg["env"]["aggregate_mode"]:
self.gym.begin_aggregate(env_ptr, max_agg_bodies, max_agg_shapes, True)
# add trifinger robot to environment
trifinger_actor = self.gym.create_actor(env_ptr, self.gym_assets["robot"], gymapi.Transform(),
"robot", env_index, 0, 0)
trifinger_idx = self.gym.get_actor_index(env_ptr, trifinger_actor, gymapi.DOMAIN_SIM)
# add table to environment
table_handle = self.gym.create_actor(env_ptr, self.gym_assets["table"], gymapi.Transform(),
"table", env_index, 1, 0)
table_idx = self.gym.get_actor_index(env_ptr, table_handle, gymapi.DOMAIN_SIM)
# add stage to environment
boundary_handle = self.gym.create_actor(env_ptr, self.gym_assets["boundary"], gymapi.Transform(),
"boundary", env_index, 1, 0)
boundary_idx = self.gym.get_actor_index(env_ptr, boundary_handle, gymapi.DOMAIN_SIM)
# add object to environment
object_handle = self.gym.create_actor(env_ptr, self.gym_assets["object"], gymapi.Transform(),
"object", env_index, 0, 0)
object_idx = self.gym.get_actor_index(env_ptr, object_handle, gymapi.DOMAIN_SIM)
# add goal object to environment
goal_handle = self.gym.create_actor(env_ptr, self.gym_assets["goal_object"], gymapi.Transform(),
"goal_object", env_index + self.num_envs, 0, 0)
goal_object_idx = self.gym.get_actor_index(env_ptr, goal_handle, gymapi.DOMAIN_SIM)
# change settings of DOF
self.gym.set_actor_dof_properties(env_ptr, trifinger_actor, robot_dof_props)
# add color to instances
stage_color = gymapi.Vec3(0.73, 0.68, 0.72)
self.gym.set_rigid_body_color(env_ptr, table_handle, 0, gymapi.MESH_VISUAL_AND_COLLISION, stage_color)
self.gym.set_rigid_body_color(env_ptr, boundary_handle, 0, gymapi.MESH_VISUAL_AND_COLLISION, stage_color)
# end aggregation mode if enabled
if self.cfg["env"]["aggregate_mode"]:
self.gym.end_aggregate(env_ptr)
# add instances to list
self.envs.append(env_ptr)
self.gym_indices["robot"].append(trifinger_idx)
self.gym_indices["table"].append(table_idx)
self.gym_indices["boundary"].append(boundary_idx)
self.gym_indices["object"].append(object_idx)
self.gym_indices["goal_object"].append(goal_object_idx)
# convert gym indices from list to tensor
for asset_name, asset_indices in self.gym_indices.items():
self.gym_indices[asset_name] = torch.tensor(asset_indices, dtype=torch.long, device=self.device)
def __configure_mdp_spaces(self):
"""
Configures the observations, state and action spaces.
"""
# Action scale for the MDP
# Note: This is order sensitive.
if self.cfg["env"]["command_mode"] == "position":
# action space is joint positions
self._action_scale.low = self._robot_limits["joint_position"].low
self._action_scale.high = self._robot_limits["joint_position"].high
elif self.cfg["env"]["command_mode"] == "torque":
# action space is joint torques
self._action_scale.low = self._robot_limits["joint_torque"].low
self._action_scale.high = self._robot_limits["joint_torque"].high
else:
msg = f"Invalid command mode. Input: {self.cfg['env']['command_mode']} not in ['torque', 'position']."
raise ValueError(msg)
# Observations scale for the MDP
# check if policy outputs normalized action [-1, 1] or not.
if self.cfg["env"]["normalize_action"]:
obs_action_scale = SimpleNamespace(
low=torch.full((self.action_dim,), -1, dtype=torch.float, device=self.device),
high=torch.full((self.action_dim,), 1, dtype=torch.float, device=self.device)
)
else:
obs_action_scale = self._action_scale
object_obs_low = torch.cat([
self._object_limits["position"].low,
self._object_limits["orientation"].low,
]*2)
object_obs_high = torch.cat([
self._object_limits["position"].high,
self._object_limits["orientation"].high,
]*2)
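        # note: the pose limits are repeated twice because the observation
        # contains both the current object pose and the goal pose (7 dims
        # each; see compute_trifinger_observations_states below).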
# Note: This is order sensitive.
self._observations_scale.low = torch.cat([
self._robot_limits["joint_position"].low,
self._robot_limits["joint_velocity"].low,
object_obs_low,
obs_action_scale.low
])
self._observations_scale.high = torch.cat([
self._robot_limits["joint_position"].high,
self._robot_limits["joint_velocity"].high,
object_obs_high,
obs_action_scale.high
])
# State scale for the MDP
if self.cfg["env"]["asymmetric_obs"]:
# finger tip scaling
fingertip_state_scale = SimpleNamespace(
low=torch.cat([
self._robot_limits["fingertip_position"].low,
self._robot_limits["fingertip_orientation"].low,
self._robot_limits["fingertip_velocity"].low,
]),
high=torch.cat([
self._robot_limits["fingertip_position"].high,
self._robot_limits["fingertip_orientation"].high,
self._robot_limits["fingertip_velocity"].high,
])
)
states_low = [
self._observations_scale.low,
self._object_limits["velocity"].low,
fingertip_state_scale.low.repeat(self._dims.NumFingers.value),
self._robot_limits["joint_torque"].low,
self._robot_limits["fingertip_wrench"].low.repeat(self._dims.NumFingers.value),
]
states_high = [
self._observations_scale.high,
self._object_limits["velocity"].high,
fingertip_state_scale.high.repeat(self._dims.NumFingers.value),
self._robot_limits["joint_torque"].high,
self._robot_limits["fingertip_wrench"].high.repeat(self._dims.NumFingers.value),
]
# Note: This is order sensitive.
self._states_scale.low = torch.cat(states_low)
self._states_scale.high = torch.cat(states_high)
# check that dimensions of scalings are correct
# count number of dimensions
state_dim = sum(self.state_spec.values())
obs_dim = sum(self.obs_spec.values())
action_dim = sum(self.action_spec.values())
# check that dimensions match
# observations
if self._observations_scale.low.shape[0] != obs_dim or self._observations_scale.high.shape[0] != obs_dim:
msg = f"Observation scaling dimensions mismatch. " \
f"\tLow: {self._observations_scale.low.shape[0]}, " \
f"\tHigh: {self._observations_scale.high.shape[0]}, " \
f"\tExpected: {obs_dim}."
raise AssertionError(msg)
# state
if self.cfg["env"]["asymmetric_obs"] \
and (self._states_scale.low.shape[0] != state_dim or self._states_scale.high.shape[0] != state_dim):
msg = f"States scaling dimensions mismatch. " \
f"\tLow: {self._states_scale.low.shape[0]}, " \
f"\tHigh: {self._states_scale.high.shape[0]}, " \
f"\tExpected: {state_dim}."
raise AssertionError(msg)
# actions
if self._action_scale.low.shape[0] != action_dim or self._action_scale.high.shape[0] != action_dim:
msg = f"Actions scaling dimensions mismatch. " \
f"\tLow: {self._action_scale.low.shape[0]}, " \
f"\tHigh: {self._action_scale.high.shape[0]}, " \
f"\tExpected: {action_dim}."
raise AssertionError(msg)
# print the scaling
print(f'MDP Raw observation bounds\n'
f'\tLow: {self._observations_scale.low}\n'
f'\tHigh: {self._observations_scale.high}')
print(f'MDP Raw state bounds\n'
f'\tLow: {self._states_scale.low}\n'
f'\tHigh: {self._states_scale.high}')
print(f'MDP Raw action bounds\n'
f'\tLow: {self._action_scale.low}\n'
f'\tHigh: {self._action_scale.high}')
def compute_reward(self, actions):
self.rew_buf[:] = 0.
self.reset_buf[:] = 0.
self.rew_buf[:], self.reset_buf[:], log_dict = compute_trifinger_reward(
self.obs_buf,
self.reset_buf,
self.progress_buf,
self.max_episode_length,
self.cfg["sim"]["dt"],
self.cfg["env"]["reward_terms"]["finger_move_penalty"]["weight"],
self.cfg["env"]["reward_terms"]["finger_reach_object_rate"]["weight"],
self.cfg["env"]["reward_terms"]["object_dist"]["weight"],
self.cfg["env"]["reward_terms"]["object_rot"]["weight"],
self.env_steps_count,
self._object_goal_poses_buf,
self._object_state_history[0],
self._object_state_history[1],
self._fingertips_frames_state_history[0],
self._fingertips_frames_state_history[1],
self.cfg["env"]["reward_terms"]["keypoints_dist"]["activate"]
)
self.extras.update({"env/rewards/"+k: v.mean() for k, v in log_dict.items()})
def compute_observations(self):
# refresh memory buffers
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_rigid_body_state_tensor(self.sim)
if self.cfg["env"]["enable_ft_sensors"] or self.cfg["env"]["asymmetric_obs"]:
self.gym.refresh_dof_force_tensor(self.sim)
self.gym.refresh_force_sensor_tensor(self.sim)
joint_torques = self._dof_torque
tip_wrenches = self._ft_sensors_values
else:
joint_torques = torch.zeros(self.num_envs, self._dims.JointTorqueDim.value, dtype=torch.float32, device=self.device)
tip_wrenches = torch.zeros(self.num_envs, self._dims.NumFingers.value * self._dims.WrenchDim.value, dtype=torch.float32, device=self.device)
# extract frame handles
fingertip_handles_indices = list(self._fingertips_handles.values())
object_indices = self.gym_indices["object"]
# update state histories
self._fingertips_frames_state_history.appendleft(self._rigid_body_state[:, fingertip_handles_indices])
self._object_state_history.appendleft(self._actors_root_state[object_indices])
# fill the observations and states buffer
self.obs_buf[:], self.states_buf[:] = compute_trifinger_observations_states(
self.cfg["env"]["asymmetric_obs"],
self._dof_position,
self._dof_velocity,
self._object_state_history[0],
self._object_goal_poses_buf,
self.actions,
self._fingertips_frames_state_history[0],
joint_torques,
tip_wrenches,
)
# normalize observations if flag is enabled
if self.cfg["env"]["normalize_obs"]:
# for normal obs
self.obs_buf = scale_transform(
self.obs_buf,
lower=self._observations_scale.low,
upper=self._observations_scale.high
)
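            # a minimal sketch of what scale_transform is assumed to compute:
            # an affine map from [lower, upper] to [-1, 1], i.e.
            #   normalized = 2.0 * (x - lower) / (upper - lower) - 1.0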
def reset_idx(self, env_ids):
# randomization can happen only at reset time, since it can reset actor positions on GPU
if self.randomize:
self.apply_randomizations(self.randomization_params)
# A) Reset episode stats buffers
self.reset_buf[env_ids] = 0
self.progress_buf[env_ids] = 0
self._successes[env_ids] = 0
self._successes_pos[env_ids] = 0
self._successes_quat[env_ids] = 0
# B) Various randomizations at the start of the episode:
# -- Robot base position.
# -- Stage position.
        # -- Coefficient of restitution and friction for robot, object, stage.
# -- Mass and size of the object
# -- Mass of robot links
# -- Robot joint state
robot_initial_state_config = self.cfg["env"]["reset_distribution"]["robot_initial_state"]
self._sample_robot_state(
env_ids,
distribution=robot_initial_state_config["type"],
dof_pos_stddev=robot_initial_state_config["dof_pos_stddev"],
dof_vel_stddev=robot_initial_state_config["dof_vel_stddev"]
)
# -- Sampling of initial pose of the object
object_initial_state_config = self.cfg["env"]["reset_distribution"]["object_initial_state"]
self._sample_object_poses(
env_ids,
distribution=object_initial_state_config["type"],
)
# -- Sampling of goal pose of the object
self._sample_object_goal_poses(
env_ids,
difficulty=self.cfg["env"]["task_difficulty"]
)
# C) Extract trifinger indices to reset
robot_indices = self.gym_indices["robot"][env_ids].to(torch.int32)
object_indices = self.gym_indices["object"][env_ids].to(torch.int32)
goal_object_indices = self.gym_indices["goal_object"][env_ids].to(torch.int32)
all_indices = torch.unique(torch.cat([robot_indices, object_indices, goal_object_indices]))
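        # note: the indexed set_* calls in step D expect int32 actor indices
        # in the simulation domain (gymapi.DOMAIN_SIM), hence the
        # .to(torch.int32) casts above.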
# D) Set values into simulator
# -- DOF
self.gym.set_dof_state_tensor_indexed(self.sim, gymtorch.unwrap_tensor(self._dof_state),
gymtorch.unwrap_tensor(robot_indices), len(robot_indices))
# -- actor root states
self.gym.set_actor_root_state_tensor_indexed(self.sim, gymtorch.unwrap_tensor(self._actors_root_state),
gymtorch.unwrap_tensor(all_indices), len(all_indices))
def _sample_robot_state(self, instances: torch.Tensor, distribution: str = 'default',
dof_pos_stddev: float = 0.0, dof_vel_stddev: float = 0.0):
"""Samples the robot DOF state based on the settings.
Type of robot initial state distribution: ["default", "random"]
- "default" means that robot is in default configuration.
- "random" means that noise is added to default configuration
- "none" means that robot is configuration is not reset between episodes.
Args:
            instances: A tensor containing indices of environment instances to reset.
distribution: Name of distribution to sample initial state from: ['default', 'random']
dof_pos_stddev: Noise scale to DOF position (used if 'type' is 'random')
dof_vel_stddev: Noise scale to DOF velocity (used if 'type' is 'random')
"""
# number of samples to generate
num_samples = instances.size()[0]
# sample dof state based on distribution type
if distribution == "none":
return
elif distribution == "default":
# set to default configuration
self._dof_position[instances] = self._robot_limits["joint_position"].default
self._dof_velocity[instances] = self._robot_limits["joint_velocity"].default
elif distribution == "random":
# sample uniform random from (-1, 1)
dof_state_dim = self._dims.JointPositionDim.value + self._dims.JointVelocityDim.value
dof_state_noise = 2 * torch.rand((num_samples, dof_state_dim,), dtype=torch.float,
device=self.device) - 1
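            # note: the "stddev" arguments below actually scale a uniform
            # sample in (-1, 1), so they act as a half-range of the noise
            # rather than a true standard deviation.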
# set to default configuration
self._dof_position[instances] = self._robot_limits["joint_position"].default
self._dof_velocity[instances] = self._robot_limits["joint_velocity"].default
# add noise
# DOF position
start_offset = 0
end_offset = self._dims.JointPositionDim.value
self._dof_position[instances] += dof_pos_stddev * dof_state_noise[:, start_offset:end_offset]
# DOF velocity
start_offset = end_offset
end_offset += self._dims.JointVelocityDim.value
self._dof_velocity[instances] += dof_vel_stddev * dof_state_noise[:, start_offset:end_offset]
else:
msg = f"Invalid robot initial state distribution. Input: {distribution} not in [`default`, `random`]."
raise ValueError(msg)
# reset robot fingertips state history
for idx in range(1, self._state_history_len):
self._fingertips_frames_state_history[idx][instances] = 0.0
def _sample_object_poses(self, instances: torch.Tensor, distribution: str):
"""Sample poses for the cube.
Type of distribution: ["default", "random", "none"]
- "default" means that pose is default configuration.
- "random" means that pose is randomly sampled on the table.
- "none" means no resetting of object pose between episodes.
Args:
            instances: A tensor containing indices of environment instances to reset.
distribution: Name of distribution to sample initial state from: ['default', 'random']
"""
# number of samples to generate
num_samples = instances.size()[0]
# sample poses based on distribution type
if distribution == "none":
return
elif distribution == "default":
pos_x, pos_y, pos_z = self._object_limits["position"].default
orientation = self._object_limits["orientation"].default
elif distribution == "random":
# For initialization
pos_x, pos_y = random_xy(num_samples, self._object_dims.max_com_distance_to_center, self.device)
# add a small offset to the height to account for scale randomisation (prevent ground intersection)
pos_z = self._object_dims.size[2] / 2 + 0.0015
orientation = random_yaw_orientation(num_samples, self.device)
else:
msg = f"Invalid object initial state distribution. Input: {distribution} " \
"not in [`default`, `random`, `none`]."
raise ValueError(msg)
# set buffers into simulator
# extract indices for goal object
object_indices = self.gym_indices["object"][instances]
# set values into buffer
# object buffer
self._object_state_history[0][instances, 0] = pos_x
self._object_state_history[0][instances, 1] = pos_y
self._object_state_history[0][instances, 2] = pos_z
self._object_state_history[0][instances, 3:7] = orientation
self._object_state_history[0][instances, 7:13] = 0
# reset object state history
for idx in range(1, self._state_history_len):
self._object_state_history[idx][instances] = 0.0
# root actor buffer
self._actors_root_state[object_indices] = self._object_state_history[0][instances]
def _sample_object_goal_poses(self, instances: torch.Tensor, difficulty: int):
"""Sample goal poses for the cube and sets them into the desired goal pose buffer.
Args:
            instances: A tensor containing indices of environment instances to reset.
difficulty: Difficulty level. The higher, the more difficult is the goal.
Possible levels are:
- -1: Random goal position on the table, including yaw orientation.
- 1: Random goal position on the table, no orientation.
- 2: Fixed goal position in the air with x,y = 0. No orientation.
- 3: Random goal position in the air, no orientation.
- 4: Random goal pose in the air, including orientation.
"""
# number of samples to generate
num_samples = instances.size()[0]
# sample poses based on task difficulty
if difficulty == -1:
# For initialization
pos_x, pos_y = random_xy(num_samples, self._object_dims.max_com_distance_to_center, self.device)
pos_z = self._object_dims.size[2] / 2
orientation = random_yaw_orientation(num_samples, self.device)
elif difficulty == 1:
# Random goal position on the table, no orientation.
pos_x, pos_y = random_xy(num_samples, self._object_dims.max_com_distance_to_center, self.device)
pos_z = self._object_dims.size[2] / 2
orientation = default_orientation(num_samples, self.device)
elif difficulty == 2:
# Fixed goal position in the air with x,y = 0. No orientation.
pos_x, pos_y = 0.0, 0.0
pos_z = self._object_dims.min_height + 0.05
orientation = default_orientation(num_samples, self.device)
elif difficulty == 3:
# Random goal position in the air, no orientation.
pos_x, pos_y = random_xy(num_samples, self._object_dims.max_com_distance_to_center, self.device)
pos_z = random_z(num_samples, self._object_dims.min_height, self._object_dims.max_height, self.device)
orientation = default_orientation(num_samples, self.device)
elif difficulty == 4:
# Random goal pose in the air, including orientation.
# Note: Set minimum height such that the cube does not intersect with the
# ground in any orientation
max_goal_radius = self._object_dims.max_com_distance_to_center
max_height = self._object_dims.max_height
orientation = random_orientation(num_samples, self.device)
# pick x, y, z according to the maximum height / radius at the current point
            # in the curriculum
pos_x, pos_y = random_xy(num_samples, max_goal_radius, self.device)
pos_z = random_z(num_samples, self._object_dims.radius_3d, max_height, self.device)
else:
msg = f"Invalid difficulty index for task: {difficulty}."
raise ValueError(msg)
# extract indices for goal object
goal_object_indices = self.gym_indices["goal_object"][instances]
# set values into buffer
# object goal buffer
self._object_goal_poses_buf[instances, 0] = pos_x
self._object_goal_poses_buf[instances, 1] = pos_y
self._object_goal_poses_buf[instances, 2] = pos_z
self._object_goal_poses_buf[instances, 3:7] = orientation
# root actor buffer
self._actors_root_state[goal_object_indices, 0:7] = self._object_goal_poses_buf[instances]
# self._actors_root_state[goal_object_indices, 2] = -10
def pre_physics_step(self, actions):
env_ids = self.reset_buf.nonzero(as_tuple=False).flatten()
if len(env_ids) > 0:
self.reset_idx(env_ids)
self.gym.simulate(self.sim)
self.actions = actions.clone().to(self.device)
# if normalized_action is true, then denormalize them.
if self.cfg["env"]["normalize_action"]:
# TODO: Default action should correspond to normalized value of 0.
action_transformed = unscale_transform(
self.actions,
lower=self._action_scale.low,
upper=self._action_scale.high
)
else:
action_transformed = self.actions
# compute command on the basis of mode selected
if self.cfg["env"]["command_mode"] == 'torque':
# command is the desired joint torque
computed_torque = action_transformed
elif self.cfg["env"]["command_mode"] == 'position':
# command is the desired joint positions
desired_dof_position = action_transformed
# compute torque to apply
computed_torque = self._robot_dof_gains["stiffness"] * (desired_dof_position - self._dof_position)
computed_torque -= self._robot_dof_gains["damping"] * self._dof_velocity
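            # a worked form of the PD law above, assuming diagonal gains taken
            # from self._robot_dof_gains:
            #   tau = Kp * (q_desired - q) - Kd * q_dot
            # the resulting torque is still clamped to actuator limits below.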
else:
msg = f"Invalid command mode. Input: {self.cfg['env']['command_mode']} not in ['torque', 'position']."
raise ValueError(msg)
# apply clamping of computed torque to actuator limits
applied_torque = saturate(
computed_torque,
lower=self._robot_limits["joint_torque"].low,
upper=self._robot_limits["joint_torque"].high
)
# apply safety damping and clamping of the action torque if enabled
if self.cfg["env"]["apply_safety_damping"]:
# apply damping by joint velocity
applied_torque -= self._robot_dof_gains["safety_damping"] * self._dof_velocity
# clamp input
applied_torque = saturate(
applied_torque,
lower=self._robot_limits["joint_torque"].low,
upper=self._robot_limits["joint_torque"].high
)
# set computed torques to simulator buffer.
self.gym.set_dof_actuation_force_tensor(self.sim, gymtorch.unwrap_tensor(applied_torque))
def post_physics_step(self):
self._step_info = {}
self.progress_buf += 1
self.randomize_buf += 1
self.compute_observations()
self.compute_reward(self.actions)
# check termination conditions (success only)
self._check_termination()
if torch.sum(self.reset_buf) > 0:
self._step_info['consecutive_successes'] = np.mean(self._successes.float().cpu().numpy())
self._step_info['consecutive_successes_pos'] = np.mean(self._successes_pos.float().cpu().numpy())
self._step_info['consecutive_successes_quat'] = np.mean(self._successes_quat.float().cpu().numpy())
def _check_termination(self):
"""Check whether the episode is done per environment.
"""
# Extract configuration for termination conditions
termination_config = self.cfg["env"]["termination_conditions"]
# Termination condition - successful completion
# Calculate distance between current object and goal
object_goal_position_dist = torch.norm(
self._object_goal_poses_buf[:, 0:3] - self._object_state_history[0][:, 0:3],
p=2, dim=-1
)
        # log theoretical number of resets
goal_position_reset = torch.le(object_goal_position_dist,
termination_config["success"]["position_tolerance"])
self._step_info['env/current_position_goal/per_env'] = np.mean(goal_position_reset.float().cpu().numpy())
# For task with difficulty 4, we need to check if orientation matches as well.
# Compute the difference in orientation between object and goal pose
object_goal_orientation_dist = quat_diff_rad(self._object_state_history[0][:, 3:7],
self._object_goal_poses_buf[:, 3:7])
# Check for distance within tolerance
goal_orientation_reset = torch.le(object_goal_orientation_dist,
termination_config["success"]["orientation_tolerance"])
self._step_info['env/current_orientation_goal/per_env'] = np.mean(goal_orientation_reset.float().cpu().numpy())
if self.cfg["env"]['task_difficulty'] < 4:
# Check for task completion if position goal is within a threshold
task_completion_reset = goal_position_reset
elif self.cfg["env"]['task_difficulty'] == 4:
# Check for task completion if both position + orientation goal is within a threshold
task_completion_reset = torch.logical_and(goal_position_reset, goal_orientation_reset)
else:
            # Check for task completion if orientation goal is within a threshold
task_completion_reset = goal_orientation_reset
self._successes = task_completion_reset
self._successes_pos = goal_position_reset
self._successes_quat = goal_orientation_reset
"""
Helper functions - define assets
"""
def __define_robot_asset(self):
""" Define Gym asset for robot.
"""
# define tri-finger asset
robot_asset_options = gymapi.AssetOptions()
robot_asset_options.flip_visual_attachments = False
robot_asset_options.fix_base_link = True
robot_asset_options.collapse_fixed_joints = False
robot_asset_options.disable_gravity = False
robot_asset_options.default_dof_drive_mode = gymapi.DOF_MODE_EFFORT
robot_asset_options.thickness = 0.001
robot_asset_options.angular_damping = 0.01
robot_asset_options.vhacd_enabled = True
robot_asset_options.vhacd_params = gymapi.VhacdParams()
robot_asset_options.vhacd_params.resolution = 100000
robot_asset_options.vhacd_params.concavity = 0.0025
robot_asset_options.vhacd_params.alpha = 0.04
robot_asset_options.vhacd_params.beta = 1.0
robot_asset_options.vhacd_params.convex_hull_downsampling = 4
robot_asset_options.vhacd_params.max_num_vertices_per_ch = 256
if self.physics_engine == gymapi.SIM_PHYSX:
robot_asset_options.use_physx_armature = True
# load tri-finger asset
trifinger_asset = self.gym.load_asset(self.sim, self._trifinger_assets_dir,
self._robot_urdf_file, robot_asset_options)
# set the link properties for the robot
# Ref: https://github.com/rr-learning/rrc_simulation/blob/master/python/rrc_simulation/sim_finger.py#L563
trifinger_props = self.gym.get_asset_rigid_shape_properties(trifinger_asset)
for p in trifinger_props:
p.friction = 1.0
p.torsion_friction = 1.0
p.restitution = 0.8
self.gym.set_asset_rigid_shape_properties(trifinger_asset, trifinger_props)
# extract the frame handles
for frame_name in self._fingertips_handles.keys():
self._fingertips_handles[frame_name] = self.gym.find_asset_rigid_body_index(trifinger_asset,
frame_name)
# check valid handle
if self._fingertips_handles[frame_name] == gymapi.INVALID_HANDLE:
msg = f"Invalid handle received for frame: `{frame_name}`."
print(msg)
if self.cfg["env"]["enable_ft_sensors"] or self.cfg["env"]["asymmetric_obs"]:
sensor_pose = gymapi.Transform()
for fingertip_handle in self._fingertips_handles.values():
self.gym.create_asset_force_sensor(trifinger_asset, fingertip_handle, sensor_pose)
# extract the dof indices
        # Note: the actuated dofs need to be collected manually, since the system also contains fixed joints which show up in the asset.
for dof_name in self._robot_dof_indices.keys():
self._robot_dof_indices[dof_name] = self.gym.find_asset_dof_index(trifinger_asset, dof_name)
# check valid handle
if self._robot_dof_indices[dof_name] == gymapi.INVALID_HANDLE:
msg = f"Invalid index received for DOF: `{dof_name}`."
print(msg)
# return the asset
return trifinger_asset
def __define_table_asset(self):
""" Define Gym asset for stage.
"""
# define stage asset
table_asset_options = gymapi.AssetOptions()
table_asset_options.disable_gravity = True
table_asset_options.fix_base_link = True
table_asset_options.thickness = 0.001
# load stage asset
table_asset = self.gym.load_asset(self.sim, self._trifinger_assets_dir,
self._table_urdf_file, table_asset_options)
# set stage properties
table_props = self.gym.get_asset_rigid_shape_properties(table_asset)
# iterate over each mesh
for p in table_props:
p.friction = 0.1
p.torsion_friction = 0.1
self.gym.set_asset_rigid_shape_properties(table_asset, table_props)
# return the asset
return table_asset
def __define_boundary_asset(self):
""" Define Gym asset for stage.
"""
# define stage asset
boundary_asset_options = gymapi.AssetOptions()
boundary_asset_options.disable_gravity = True
boundary_asset_options.fix_base_link = True
boundary_asset_options.thickness = 0.001
boundary_asset_options.vhacd_enabled = True
boundary_asset_options.vhacd_params = gymapi.VhacdParams()
boundary_asset_options.vhacd_params.resolution = 100000
boundary_asset_options.vhacd_params.concavity = 0.0
boundary_asset_options.vhacd_params.alpha = 0.04
boundary_asset_options.vhacd_params.beta = 1.0
boundary_asset_options.vhacd_params.max_num_vertices_per_ch = 1024
# load stage asset
boundary_asset = self.gym.load_asset(self.sim, self._trifinger_assets_dir,
self._boundary_urdf_file, boundary_asset_options)
# set stage properties
boundary_props = self.gym.get_asset_rigid_shape_properties(boundary_asset)
self.gym.set_asset_rigid_shape_properties(boundary_asset, boundary_props)
# return the asset
return boundary_asset
def __define_object_asset(self):
""" Define Gym asset for object.
"""
# define object asset
object_asset_options = gymapi.AssetOptions()
object_asset_options.disable_gravity = False
object_asset_options.thickness = 0.001
object_asset_options.flip_visual_attachments = True
# load object asset
object_asset = self.gym.load_asset(self.sim, self._trifinger_assets_dir,
self._object_urdf_file, object_asset_options)
# set object properties
# Ref: https://github.com/rr-learning/rrc_simulation/blob/master/python/rrc_simulation/collision_objects.py#L96
object_props = self.gym.get_asset_rigid_shape_properties(object_asset)
for p in object_props:
p.friction = 1.0
p.torsion_friction = 0.001
p.restitution = 0.0
self.gym.set_asset_rigid_shape_properties(object_asset, object_props)
# return the asset
return object_asset
def __define_goal_object_asset(self):
""" Define Gym asset for goal object.
"""
# define object asset
object_asset_options = gymapi.AssetOptions()
object_asset_options.disable_gravity = True
object_asset_options.fix_base_link = True
object_asset_options.thickness = 0.001
object_asset_options.flip_visual_attachments = True
# load object asset
goal_object_asset = self.gym.load_asset(self.sim, self._trifinger_assets_dir,
self._object_urdf_file, object_asset_options)
# return the asset
return goal_object_asset
@property
def env_steps_count(self) -> int:
"""Returns the total number of environment steps aggregated across parallel environments."""
return self.gym.get_frame_count(self.sim) * self.num_envs
#####################################################################
###=========================jit functions=========================###
#####################################################################
@torch.jit.script
def lgsk_kernel(x: torch.Tensor, scale: float = 50.0, eps: float = 2.0) -> torch.Tensor:
    """Defines a logistic kernel function that bounds the input to (0, 1/(2 + eps)], i.e. (0, 0.25] for the default eps=2.
    Ref: https://arxiv.org/abs/1901.08652 (page 15)
    Args:
        x: Input tensor.
        scale: Scaling of the kernel function (controls how wide the 'bell' shape is).
        eps: Controls how 'tall' the 'bell' shape is.
Returns:
Output tensor computed using kernel.
"""
scaled = x * scale
return 1.0 / (scaled.exp() + eps + (-scaled).exp())
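# a quick numeric sanity check of lgsk_kernel (a sketch, not part of the task):
# with the defaults scale=50, eps=2, the kernel peaks at x=0 with value
#   1 / (e^0 + 2 + e^0) = 0.25
# and decays towards 0 as |x| grows, e.g. lgsk_kernel(torch.tensor([0.1]))
# gives roughly 1 / (e^5 + 2 + e^-5) ~= 0.0066.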
@torch.jit.script
def gen_keypoints(pose: torch.Tensor, num_keypoints: int = 8, size: Tuple[float, float, float] = (0.065, 0.065, 0.065)):
num_envs = pose.shape[0]
keypoints_buf = torch.ones(num_envs, num_keypoints, 3, dtype=torch.float32, device=pose.device)
for i in range(num_keypoints):
# which dimensions to negate
n = [((i >> k) & 1) == 0 for k in range(3)]
        corner_loc = [(1 if n[k] else -1) * s / 2 for k, s in enumerate(size)]
corner = torch.tensor(corner_loc, dtype=torch.float32, device=pose.device) * keypoints_buf[:, i, :]
keypoints_buf[:, i, :] = local_to_world_space(corner, pose)
return keypoints_buf
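# note on the bit trick in gen_keypoints above: for i in 0..7, bit k of i
# selects the sign of axis k, so the loop enumerates all 8 corners of the
# cuboid, e.g. i = 0 -> (+x, +y, +z), i = 1 -> (-x, +y, +z), i = 7 -> (-x, -y, -z).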
@torch.jit.script
def compute_trifinger_reward(
obs_buf: torch.Tensor,
reset_buf: torch.Tensor,
progress_buf: torch.Tensor,
episode_length: int,
dt: float,
finger_move_penalty_weight: float,
finger_reach_object_weight: float,
object_dist_weight: float,
object_rot_weight: float,
env_steps_count: int,
object_goal_poses_buf: torch.Tensor,
object_state: torch.Tensor,
last_object_state: torch.Tensor,
fingertip_state: torch.Tensor,
last_fingertip_state: torch.Tensor,
use_keypoints: bool
) -> Tuple[torch.Tensor, torch.Tensor, Dict[str, torch.Tensor]]:
ft_sched_start = 0
ft_sched_end = 5e7
# Reward penalising finger movement
fingertip_vel = (fingertip_state[:, :, 0:3] - last_fingertip_state[:, :, 0:3]) / dt
finger_movement_penalty = finger_move_penalty_weight * fingertip_vel.pow(2).view(-1, 9).sum(dim=-1)
# Reward for finger reaching the object
# distance from each finger to the centroid of the object, shape (N, 3).
curr_norms = torch.stack([
torch.norm(fingertip_state[:, i, 0:3] - object_state[:, 0:3], p=2, dim=-1)
for i in range(3)
], dim=-1)
# distance from each finger to the centroid of the object in the last timestep, shape (N, 3).
prev_norms = torch.stack([
torch.norm(last_fingertip_state[:, i, 0:3] - last_object_state[:, 0:3], p=2, dim=-1)
for i in range(3)
], dim=-1)
ft_sched_val = 1.0 if ft_sched_start <= env_steps_count <= ft_sched_end else 0.0
finger_reach_object_reward = finger_reach_object_weight * ft_sched_val * (curr_norms - prev_norms).sum(dim=-1)
if use_keypoints:
object_keypoints = gen_keypoints(object_state[:, 0:7])
goal_keypoints = gen_keypoints(object_goal_poses_buf[:, 0:7])
delta = object_keypoints - goal_keypoints
dist_l2 = torch.norm(delta, p=2, dim=-1)
keypoints_kernel_sum = lgsk_kernel(dist_l2, scale=30., eps=2.).mean(dim=-1)
pose_reward = object_dist_weight * dt * keypoints_kernel_sum
else:
# Reward for object distance
object_dist = torch.norm(object_state[:, 0:3] - object_goal_poses_buf[:, 0:3], p=2, dim=-1)
object_dist_reward = object_dist_weight * dt * lgsk_kernel(object_dist, scale=50., eps=2.)
# Reward for object rotation
# extract quaternion orientation
quat_a = object_state[:, 3:7]
quat_b = object_goal_poses_buf[:, 3:7]
angles = quat_diff_rad(quat_a, quat_b)
object_rot_reward = object_rot_weight * dt / (3. * torch.abs(angles) + 0.01)
pose_reward = object_dist_reward + object_rot_reward
total_reward = (
finger_movement_penalty
+ finger_reach_object_reward
+ pose_reward
)
# reset agents
reset = torch.zeros_like(reset_buf)
reset = torch.where(progress_buf >= episode_length - 1, torch.ones_like(reset_buf), reset)
info: Dict[str, torch.Tensor] = {
'finger_movement_penalty': finger_movement_penalty,
'finger_reach_object_reward': finger_reach_object_reward,
        'pose_reward': pose_reward,
'reward': total_reward,
}
return total_reward, reset, info
@torch.jit.script
def compute_trifinger_observations_states(
asymmetric_obs: bool,
dof_position: torch.Tensor,
dof_velocity: torch.Tensor,
object_state: torch.Tensor,
object_goal_poses: torch.Tensor,
actions: torch.Tensor,
fingertip_state: torch.Tensor,
joint_torques: torch.Tensor,
tip_wrenches: torch.Tensor
):
num_envs = dof_position.shape[0]
obs_buf = torch.cat([
dof_position,
dof_velocity,
object_state[:, 0:7], # pose
object_goal_poses,
actions
], dim=-1)
if asymmetric_obs:
states_buf = torch.cat([
obs_buf,
object_state[:, 7:13], # linear / angular velocity
fingertip_state.reshape(num_envs, -1),
joint_torques,
tip_wrenches
], dim=-1)
else:
states_buf = obs_buf
return obs_buf, states_buf
"""
Sampling of cuboidal object
"""
@torch.jit.script
def random_xy(num: int, max_com_distance_to_center: float, device: str) -> Tuple[torch.Tensor, torch.Tensor]:
"""Returns sampled uniform positions in circle (https://stackoverflow.com/a/50746409)"""
# sample radius of circle
radius = torch.sqrt(torch.rand(num, dtype=torch.float, device=device))
radius *= max_com_distance_to_center
# sample theta of point
theta = 2 * np.pi * torch.rand(num, dtype=torch.float, device=device)
# x,y-position of the cube
x = radius * torch.cos(theta)
y = radius * torch.sin(theta)
return x, y
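# why the sqrt in random_xy: for a uniform density over a disc of radius R,
# the CDF of the sampled radius is (r / R)^2, so drawing u ~ U(0, 1) and
# taking r = R * sqrt(u) yields uniformly distributed points rather than
# points clustered near the center.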
@torch.jit.script
def random_z(num: int, min_height: float, max_height: float, device: str) -> torch.Tensor:
"""Returns sampled height of the goal object."""
z = torch.rand(num, dtype=torch.float, device=device)
z = (max_height - min_height) * z + min_height
return z
@torch.jit.script
def default_orientation(num: int, device: str) -> torch.Tensor:
"""Returns identity rotation transform."""
quat = torch.zeros((num, 4,), dtype=torch.float, device=device)
quat[..., -1] = 1.0
return quat
@torch.jit.script
def random_orientation(num: int, device: str) -> torch.Tensor:
"""Returns sampled rotation in 3D as quaternion.
Ref: https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.transform.Rotation.random.html
"""
# sample random orientation from normal distribution
quat = torch.randn((num, 4,), dtype=torch.float, device=device)
# normalize the quaternion
quat = torch.nn.functional.normalize(quat, p=2., dim=-1, eps=1e-12)
return quat
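# note: normalizing a 4D Gaussian sample projects it uniformly onto the unit
# 3-sphere, which corresponds to a uniform distribution over rotations
# (each rotation is covered twice, by q and -q).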
@torch.jit.script
def random_orientation_within_angle(num: int, device:str, base: torch.Tensor, max_angle: float):
""" Generates random quaternions within max_angle of base
Ref: https://math.stackexchange.com/a/3448434
"""
quat = torch.zeros((num, 4,), dtype=torch.float, device=device)
rand = torch.rand((num, 3), dtype=torch.float, device=device)
c = torch.cos(rand[:, 0]*max_angle)
n = torch.sqrt((1.-c)/2.)
quat[:, 3] = torch.sqrt((1+c)/2.)
quat[:, 2] = (rand[:, 1]*2.-1.) * n
quat[:, 0] = (torch.sqrt(1-quat[:, 2]**2.) * torch.cos(2*np.pi*rand[:, 2])) * n
quat[:, 1] = (torch.sqrt(1-quat[:, 2]**2.) * torch.sin(2*np.pi*rand[:, 2])) * n
# floating point errors can cause it to be slightly off, re-normalise
quat = torch.nn.functional.normalize(quat, p=2., dim=-1, eps=1e-12)
return quat_mul(quat, base)
@torch.jit.script
def random_angular_vel(num: int, device: str, magnitude_stdev: float) -> torch.Tensor:
"""Samples a random angular velocity with standard deviation `magnitude_stdev`"""
axis = torch.randn((num, 3,), dtype=torch.float, device=device)
axis /= torch.norm(axis, p=2, dim=-1).view(-1, 1)
magnitude = torch.randn((num, 1,), dtype=torch.float, device=device)
magnitude *= magnitude_stdev
return magnitude * axis
@torch.jit.script
def random_yaw_orientation(num: int, device: str) -> torch.Tensor:
"""Returns sampled rotation around z-axis."""
roll = torch.zeros(num, dtype=torch.float, device=device)
pitch = torch.zeros(num, dtype=torch.float, device=device)
yaw = 2 * np.pi * torch.rand(num, dtype=torch.float, device=device)
return quat_from_euler_xyz(roll, pitch, yaw)
| 70,571 | Python | 45.643754 | 217 | 0.611568 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/shadow_hand.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import numpy as np
import os
import torch
from isaacgym import gymtorch
from isaacgym import gymapi
from isaacgymenvs.utils.torch_jit_utils import scale, unscale, quat_mul, quat_conjugate, quat_from_angle_axis, \
to_torch, get_axis_params, torch_rand_float, tensor_clamp
from isaacgymenvs.tasks.base.vec_task import VecTask
class ShadowHand(VecTask):
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.cfg = cfg
self.randomize = self.cfg["task"]["randomize"]
self.randomization_params = self.cfg["task"]["randomization_params"]
self.aggregate_mode = self.cfg["env"]["aggregateMode"]
self.dist_reward_scale = self.cfg["env"]["distRewardScale"]
self.rot_reward_scale = self.cfg["env"]["rotRewardScale"]
self.action_penalty_scale = self.cfg["env"]["actionPenaltyScale"]
self.success_tolerance = self.cfg["env"]["successTolerance"]
self.reach_goal_bonus = self.cfg["env"]["reachGoalBonus"]
self.fall_dist = self.cfg["env"]["fallDistance"]
self.fall_penalty = self.cfg["env"]["fallPenalty"]
self.rot_eps = self.cfg["env"]["rotEps"]
self.vel_obs_scale = 0.2 # scale factor of velocity based observations
        self.force_torque_obs_scale = 10.0  # scale factor of force and torque based observations
self.reset_position_noise = self.cfg["env"]["resetPositionNoise"]
self.reset_rotation_noise = self.cfg["env"]["resetRotationNoise"]
self.reset_dof_pos_noise = self.cfg["env"]["resetDofPosRandomInterval"]
self.reset_dof_vel_noise = self.cfg["env"]["resetDofVelRandomInterval"]
self.force_scale = self.cfg["env"].get("forceScale", 0.0)
self.force_prob_range = self.cfg["env"].get("forceProbRange", [0.001, 0.1])
self.force_decay = self.cfg["env"].get("forceDecay", 0.99)
self.force_decay_interval = self.cfg["env"].get("forceDecayInterval", 0.08)
self.shadow_hand_dof_speed_scale = self.cfg["env"]["dofSpeedScale"]
self.use_relative_control = self.cfg["env"]["useRelativeControl"]
self.act_moving_average = self.cfg["env"]["actionsMovingAverage"]
self.debug_viz = self.cfg["env"]["enableDebugVis"]
self.max_episode_length = self.cfg["env"]["episodeLength"]
self.reset_time = self.cfg["env"].get("resetTime", -1.0)
self.print_success_stat = self.cfg["env"]["printNumSuccesses"]
self.max_consecutive_successes = self.cfg["env"]["maxConsecutiveSuccesses"]
self.av_factor = self.cfg["env"].get("averFactor", 0.1)
self.object_type = self.cfg["env"]["objectType"]
assert self.object_type in ["block", "egg", "pen"]
self.ignore_z = (self.object_type == "pen")
self.asset_files_dict = {
"block": "urdf/objects/cube_multicolor.urdf",
"egg": "mjcf/open_ai_assets/hand/egg.xml",
"pen": "mjcf/open_ai_assets/hand/pen.xml"
}
if "asset" in self.cfg["env"]:
self.asset_files_dict["block"] = self.cfg["env"]["asset"].get("assetFileNameBlock", self.asset_files_dict["block"])
self.asset_files_dict["egg"] = self.cfg["env"]["asset"].get("assetFileNameEgg", self.asset_files_dict["egg"])
self.asset_files_dict["pen"] = self.cfg["env"]["asset"].get("assetFileNamePen", self.asset_files_dict["pen"])
# can be "openai", "full_no_vel", "full", "full_state"
self.obs_type = self.cfg["env"]["observationType"]
if not (self.obs_type in ["openai", "full_no_vel", "full", "full_state"]):
raise Exception(
"Unknown type of observations!\nobservationType should be one of: [openai, full_no_vel, full, full_state]")
print("Obs type:", self.obs_type)
self.num_obs_dict = {
"openai": 42,
"full_no_vel": 77,
"full": 157,
"full_state": 211
}
self.up_axis = 'z'
self.fingertips = ["robot0:ffdistal", "robot0:mfdistal", "robot0:rfdistal", "robot0:lfdistal", "robot0:thdistal"]
self.num_fingertips = len(self.fingertips)
self.use_vel_obs = False
self.fingertip_obs = True
self.asymmetric_obs = self.cfg["env"]["asymmetric_observations"]
num_states = 0
if self.asymmetric_obs:
num_states = 211
self.cfg["env"]["numObservations"] = self.num_obs_dict[self.obs_type]
self.cfg["env"]["numStates"] = num_states
self.cfg["env"]["numActions"] = 20
super().__init__(config=self.cfg, rl_device=rl_device, sim_device=sim_device, graphics_device_id=graphics_device_id, headless=headless, virtual_screen_capture=virtual_screen_capture, force_render=force_render)
self.dt = self.sim_params.dt
control_freq_inv = self.cfg["env"].get("controlFrequencyInv", 1)
if self.reset_time > 0.0:
self.max_episode_length = int(round(self.reset_time/(control_freq_inv * self.dt)))
print("Reset time: ", self.reset_time)
print("New episode length: ", self.max_episode_length)
        if self.viewer is not None:
cam_pos = gymapi.Vec3(10.0, 5.0, 1.0)
cam_target = gymapi.Vec3(6.0, 5.0, 0.0)
self.gym.viewer_camera_look_at(self.viewer, None, cam_pos, cam_target)
# get gym GPU state tensors
actor_root_state_tensor = self.gym.acquire_actor_root_state_tensor(self.sim)
dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
rigid_body_tensor = self.gym.acquire_rigid_body_state_tensor(self.sim)
if self.obs_type == "full_state" or self.asymmetric_obs:
sensor_tensor = self.gym.acquire_force_sensor_tensor(self.sim)
self.vec_sensor_tensor = gymtorch.wrap_tensor(sensor_tensor).view(self.num_envs, self.num_fingertips * 6)
dof_force_tensor = self.gym.acquire_dof_force_tensor(self.sim)
self.dof_force_tensor = gymtorch.wrap_tensor(dof_force_tensor).view(self.num_envs, self.num_shadow_hand_dofs)
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_rigid_body_state_tensor(self.sim)
# create some wrapper tensors for different slices
self.shadow_hand_default_dof_pos = torch.zeros(self.num_shadow_hand_dofs, dtype=torch.float, device=self.device)
self.dof_state = gymtorch.wrap_tensor(dof_state_tensor)
self.shadow_hand_dof_state = self.dof_state.view(self.num_envs, -1, 2)[:, :self.num_shadow_hand_dofs]
self.shadow_hand_dof_pos = self.shadow_hand_dof_state[..., 0]
self.shadow_hand_dof_vel = self.shadow_hand_dof_state[..., 1]
self.rigid_body_states = gymtorch.wrap_tensor(rigid_body_tensor).view(self.num_envs, -1, 13)
self.num_bodies = self.rigid_body_states.shape[1]
self.root_state_tensor = gymtorch.wrap_tensor(actor_root_state_tensor).view(-1, 13)
self.num_dofs = self.gym.get_sim_dof_count(self.sim) // self.num_envs
self.prev_targets = torch.zeros((self.num_envs, self.num_dofs), dtype=torch.float, device=self.device)
self.cur_targets = torch.zeros((self.num_envs, self.num_dofs), dtype=torch.float, device=self.device)
self.global_indices = torch.arange(self.num_envs * 3, dtype=torch.int32, device=self.device).view(self.num_envs, -1)
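        # sketch of the resulting layout, assuming 3 actors per env
        # (hand, object, goal): global_indices[i] == [3*i, 3*i + 1, 3*i + 2]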
self.x_unit_tensor = to_torch([1, 0, 0], dtype=torch.float, device=self.device).repeat((self.num_envs, 1))
self.y_unit_tensor = to_torch([0, 1, 0], dtype=torch.float, device=self.device).repeat((self.num_envs, 1))
self.z_unit_tensor = to_torch([0, 0, 1], dtype=torch.float, device=self.device).repeat((self.num_envs, 1))
self.reset_goal_buf = self.reset_buf.clone()
self.successes = torch.zeros(self.num_envs, dtype=torch.float, device=self.device)
self.consecutive_successes = torch.zeros(1, dtype=torch.float, device=self.device)
self.av_factor = to_torch(self.av_factor, dtype=torch.float, device=self.device)
self.total_successes = 0
self.total_resets = 0
# object apply random forces parameters
self.force_decay = to_torch(self.force_decay, dtype=torch.float, device=self.device)
self.force_prob_range = to_torch(self.force_prob_range, dtype=torch.float, device=self.device)
self.random_force_prob = torch.exp((torch.log(self.force_prob_range[0]) - torch.log(self.force_prob_range[1]))
* torch.rand(self.num_envs, device=self.device) + torch.log(self.force_prob_range[1]))
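        # the expression above samples the per-env force probability
        # log-uniformly over force_prob_range: with u ~ U(0, 1),
        #   p = exp((log(p_min) - log(p_max)) * u + log(p_max))
        # so p spans [p_min, p_max] with equal weight per decade.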
self.rb_forces = torch.zeros((self.num_envs, self.num_bodies, 3), dtype=torch.float, device=self.device)
def create_sim(self):
self.dt = self.cfg["sim"]["dt"]
self.up_axis_idx = 2 if self.up_axis == 'z' else 1 # index of up axis: Y=1, Z=2
self.sim = super().create_sim(self.device_id, self.graphics_device_id, self.physics_engine, self.sim_params)
self._create_ground_plane()
self._create_envs(self.num_envs, self.cfg["env"]['envSpacing'], int(np.sqrt(self.num_envs)))
        # If randomizing, apply once immediately on startup before the first sim step
if self.randomize:
self.apply_randomizations(self.randomization_params)
def _create_ground_plane(self):
plane_params = gymapi.PlaneParams()
plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)
self.gym.add_ground(self.sim, plane_params)
def _create_envs(self, num_envs, spacing, num_per_row):
lower = gymapi.Vec3(-spacing, -spacing, 0.0)
upper = gymapi.Vec3(spacing, spacing, spacing)
asset_root = os.path.normpath(os.path.join(os.path.dirname(os.path.abspath(__file__)), '../../assets'))
shadow_hand_asset_file = os.path.normpath("mjcf/open_ai_assets/hand/shadow_hand.xml")
if "asset" in self.cfg["env"]:
# asset_root = self.cfg["env"]["asset"].get("assetRoot", asset_root)
shadow_hand_asset_file = os.path.normpath(self.cfg["env"]["asset"].get("assetFileName", shadow_hand_asset_file))
object_asset_file = self.asset_files_dict[self.object_type]
        # load shadow hand asset
asset_options = gymapi.AssetOptions()
asset_options.flip_visual_attachments = False
asset_options.fix_base_link = True
asset_options.collapse_fixed_joints = True
asset_options.disable_gravity = True
asset_options.thickness = 0.001
asset_options.angular_damping = 0.01
if self.physics_engine == gymapi.SIM_PHYSX:
asset_options.use_physx_armature = True
# Note - DOF mode is set in the MJCF file and loaded by Isaac Gym
asset_options.default_dof_drive_mode = gymapi.DOF_MODE_NONE
shadow_hand_asset = self.gym.load_asset(self.sim, asset_root, shadow_hand_asset_file, asset_options)
self.num_shadow_hand_bodies = self.gym.get_asset_rigid_body_count(shadow_hand_asset)
self.num_shadow_hand_shapes = self.gym.get_asset_rigid_shape_count(shadow_hand_asset)
self.num_shadow_hand_dofs = self.gym.get_asset_dof_count(shadow_hand_asset)
self.num_shadow_hand_actuators = self.gym.get_asset_actuator_count(shadow_hand_asset)
self.num_shadow_hand_tendons = self.gym.get_asset_tendon_count(shadow_hand_asset)
# tendon set up
limit_stiffness = 30
t_damping = 0.1
relevant_tendons = ["robot0:T_FFJ1c", "robot0:T_MFJ1c", "robot0:T_RFJ1c", "robot0:T_LFJ1c"]
tendon_props = self.gym.get_asset_tendon_properties(shadow_hand_asset)
for i in range(self.num_shadow_hand_tendons):
for rt in relevant_tendons:
if self.gym.get_asset_tendon_name(shadow_hand_asset, i) == rt:
tendon_props[i].limit_stiffness = limit_stiffness
tendon_props[i].damping = t_damping
self.gym.set_asset_tendon_properties(shadow_hand_asset, tendon_props)
actuated_dof_names = [self.gym.get_asset_actuator_joint_name(shadow_hand_asset, i) for i in range(self.num_shadow_hand_actuators)]
self.actuated_dof_indices = [self.gym.find_asset_dof_index(shadow_hand_asset, name) for name in actuated_dof_names]
# get shadow_hand dof properties, loaded by Isaac Gym from the MJCF file
shadow_hand_dof_props = self.gym.get_asset_dof_properties(shadow_hand_asset)
self.shadow_hand_dof_lower_limits = []
self.shadow_hand_dof_upper_limits = []
self.shadow_hand_dof_default_pos = []
self.shadow_hand_dof_default_vel = []
for i in range(self.num_shadow_hand_dofs):
self.shadow_hand_dof_lower_limits.append(shadow_hand_dof_props['lower'][i])
self.shadow_hand_dof_upper_limits.append(shadow_hand_dof_props['upper'][i])
self.shadow_hand_dof_default_pos.append(0.0)
self.shadow_hand_dof_default_vel.append(0.0)
self.actuated_dof_indices = to_torch(self.actuated_dof_indices, dtype=torch.long, device=self.device)
self.shadow_hand_dof_lower_limits = to_torch(self.shadow_hand_dof_lower_limits, device=self.device)
self.shadow_hand_dof_upper_limits = to_torch(self.shadow_hand_dof_upper_limits, device=self.device)
self.shadow_hand_dof_default_pos = to_torch(self.shadow_hand_dof_default_pos, device=self.device)
self.shadow_hand_dof_default_vel = to_torch(self.shadow_hand_dof_default_vel, device=self.device)
self.fingertip_handles = [self.gym.find_asset_rigid_body_index(shadow_hand_asset, name) for name in self.fingertips]
# create fingertip force sensors, if needed
if self.obs_type == "full_state" or self.asymmetric_obs:
sensor_pose = gymapi.Transform()
for ft_handle in self.fingertip_handles:
self.gym.create_asset_force_sensor(shadow_hand_asset, ft_handle, sensor_pose)
# load manipulated object and goal assets
object_asset_options = gymapi.AssetOptions()
object_asset = self.gym.load_asset(self.sim, asset_root, object_asset_file, object_asset_options)
object_asset_options.disable_gravity = True
goal_asset = self.gym.load_asset(self.sim, asset_root, object_asset_file, object_asset_options)
shadow_hand_start_pose = gymapi.Transform()
shadow_hand_start_pose.p = gymapi.Vec3(*get_axis_params(0.5, self.up_axis_idx))
object_start_pose = gymapi.Transform()
object_start_pose.p = gymapi.Vec3()
object_start_pose.p.x = shadow_hand_start_pose.p.x
pose_dy, pose_dz = -0.39, 0.10
object_start_pose.p.y = shadow_hand_start_pose.p.y + pose_dy
object_start_pose.p.z = shadow_hand_start_pose.p.z + pose_dz
if self.object_type == "pen":
object_start_pose.p.z = shadow_hand_start_pose.p.z + 0.02
self.goal_displacement = gymapi.Vec3(-0.2, -0.06, 0.12)
self.goal_displacement_tensor = to_torch(
[self.goal_displacement.x, self.goal_displacement.y, self.goal_displacement.z], device=self.device)
goal_start_pose = gymapi.Transform()
goal_start_pose.p = object_start_pose.p + self.goal_displacement
goal_start_pose.p.z -= 0.04
# compute aggregate size
max_agg_bodies = self.num_shadow_hand_bodies + 2
max_agg_shapes = self.num_shadow_hand_shapes + 2
self.shadow_hands = []
self.envs = []
self.object_init_state = []
self.hand_start_states = []
self.hand_indices = []
self.fingertip_indices = []
self.object_indices = []
self.goal_object_indices = []
self.fingertip_handles = [self.gym.find_asset_rigid_body_index(shadow_hand_asset, name) for name in self.fingertips]
shadow_hand_rb_count = self.gym.get_asset_rigid_body_count(shadow_hand_asset)
object_rb_count = self.gym.get_asset_rigid_body_count(object_asset)
self.object_rb_handles = list(range(shadow_hand_rb_count, shadow_hand_rb_count + object_rb_count))
for i in range(self.num_envs):
# create env instance
env_ptr = self.gym.create_env(
self.sim, lower, upper, num_per_row
)
if self.aggregate_mode >= 1:
self.gym.begin_aggregate(env_ptr, max_agg_bodies, max_agg_shapes, True)
# add hand - collision filter = -1 to use asset collision filters set in mjcf loader
shadow_hand_actor = self.gym.create_actor(env_ptr, shadow_hand_asset, shadow_hand_start_pose, "hand", i, -1, 0)
self.hand_start_states.append([shadow_hand_start_pose.p.x, shadow_hand_start_pose.p.y, shadow_hand_start_pose.p.z,
shadow_hand_start_pose.r.x, shadow_hand_start_pose.r.y, shadow_hand_start_pose.r.z, shadow_hand_start_pose.r.w,
0, 0, 0, 0, 0, 0])
self.gym.set_actor_dof_properties(env_ptr, shadow_hand_actor, shadow_hand_dof_props)
hand_idx = self.gym.get_actor_index(env_ptr, shadow_hand_actor, gymapi.DOMAIN_SIM)
self.hand_indices.append(hand_idx)
# enable DOF force sensors, if needed
if self.obs_type == "full_state" or self.asymmetric_obs:
self.gym.enable_actor_dof_force_sensors(env_ptr, shadow_hand_actor)
# add object
object_handle = self.gym.create_actor(env_ptr, object_asset, object_start_pose, "object", i, 0, 0)
self.object_init_state.append([object_start_pose.p.x, object_start_pose.p.y, object_start_pose.p.z,
object_start_pose.r.x, object_start_pose.r.y, object_start_pose.r.z, object_start_pose.r.w,
0, 0, 0, 0, 0, 0])
object_idx = self.gym.get_actor_index(env_ptr, object_handle, gymapi.DOMAIN_SIM)
self.object_indices.append(object_idx)
# add goal object
goal_handle = self.gym.create_actor(env_ptr, goal_asset, goal_start_pose, "goal_object", i + self.num_envs, 0, 0)
goal_object_idx = self.gym.get_actor_index(env_ptr, goal_handle, gymapi.DOMAIN_SIM)
self.goal_object_indices.append(goal_object_idx)
if self.object_type != "block":
self.gym.set_rigid_body_color(
env_ptr, object_handle, 0, gymapi.MESH_VISUAL, gymapi.Vec3(0.6, 0.72, 0.98))
self.gym.set_rigid_body_color(
env_ptr, goal_handle, 0, gymapi.MESH_VISUAL, gymapi.Vec3(0.6, 0.72, 0.98))
if self.aggregate_mode > 0:
self.gym.end_aggregate(env_ptr)
self.envs.append(env_ptr)
self.shadow_hands.append(shadow_hand_actor)
# we are not using new mass values after DR when calculating random forces applied to an object,
# which should be ok as long as the randomization range is not too big
object_rb_props = self.gym.get_actor_rigid_body_properties(env_ptr, object_handle)
self.object_rb_masses = [prop.mass for prop in object_rb_props]
self.object_init_state = to_torch(self.object_init_state, device=self.device, dtype=torch.float).view(self.num_envs, 13)
self.goal_states = self.object_init_state.clone()
self.goal_states[:, self.up_axis_idx] -= 0.04
self.goal_init_state = self.goal_states.clone()
self.hand_start_states = to_torch(self.hand_start_states, device=self.device).view(self.num_envs, 13)
self.fingertip_handles = to_torch(self.fingertip_handles, dtype=torch.long, device=self.device)
self.object_rb_handles = to_torch(self.object_rb_handles, dtype=torch.long, device=self.device)
self.object_rb_masses = to_torch(self.object_rb_masses, dtype=torch.float, device=self.device)
self.hand_indices = to_torch(self.hand_indices, dtype=torch.long, device=self.device)
self.object_indices = to_torch(self.object_indices, dtype=torch.long, device=self.device)
self.goal_object_indices = to_torch(self.goal_object_indices, dtype=torch.long, device=self.device)
def compute_reward(self, actions):
self.rew_buf[:], self.reset_buf[:], self.reset_goal_buf[:], self.progress_buf[:], self.successes[:], self.consecutive_successes[:] = compute_hand_reward(
self.rew_buf, self.reset_buf, self.reset_goal_buf, self.progress_buf, self.successes, self.consecutive_successes,
self.max_episode_length, self.object_pos, self.object_rot, self.goal_pos, self.goal_rot,
self.dist_reward_scale, self.rot_reward_scale, self.rot_eps, self.actions, self.action_penalty_scale,
self.success_tolerance, self.reach_goal_bonus, self.fall_dist, self.fall_penalty,
self.max_consecutive_successes, self.av_factor, (self.object_type == "pen")
)
self.extras['consecutive_successes'] = self.consecutive_successes.mean()
if self.print_success_stat:
self.total_resets = self.total_resets + self.reset_buf.sum()
direct_average_successes = self.total_successes + self.successes.sum()
self.total_successes = self.total_successes + (self.successes * self.reset_buf).sum()
# The direct average shows the overall result more quickly, but slightly undershoots long term
# policy performance.
print("Direct average consecutive successes = {:.1f}".format(direct_average_successes/(self.total_resets + self.num_envs)))
if self.total_resets > 0:
print("Post-Reset average consecutive successes = {:.1f}".format(self.total_successes/self.total_resets))
def compute_observations(self):
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_rigid_body_state_tensor(self.sim)
if self.obs_type == "full_state" or self.asymmetric_obs:
self.gym.refresh_force_sensor_tensor(self.sim)
self.gym.refresh_dof_force_tensor(self.sim)
self.object_pose = self.root_state_tensor[self.object_indices, 0:7]
self.object_pos = self.root_state_tensor[self.object_indices, 0:3]
self.object_rot = self.root_state_tensor[self.object_indices, 3:7]
self.object_linvel = self.root_state_tensor[self.object_indices, 7:10]
self.object_angvel = self.root_state_tensor[self.object_indices, 10:13]
self.goal_pose = self.goal_states[:, 0:7]
self.goal_pos = self.goal_states[:, 0:3]
self.goal_rot = self.goal_states[:, 3:7]
self.fingertip_state = self.rigid_body_states[:, self.fingertip_handles][:, :, 0:13]
self.fingertip_pos = self.rigid_body_states[:, self.fingertip_handles][:, :, 0:3]
if self.obs_type == "openai":
self.compute_fingertip_observations(True)
elif self.obs_type == "full_no_vel":
self.compute_full_observations(True)
elif self.obs_type == "full":
self.compute_full_observations()
elif self.obs_type == "full_state":
self.compute_full_state()
else:
print("Unknown observations type!")
if self.asymmetric_obs:
self.compute_full_state(True)
def compute_fingertip_observations(self, no_vel=False):
if no_vel:
# Per https://arxiv.org/pdf/1808.00177.pdf Table 2
# Fingertip positions
# Object Position, but not orientation
# Relative target orientation
# 3*self.num_fingertips = 15
self.obs_buf[:, 0:15] = self.fingertip_pos.reshape(self.num_envs, 15)
self.obs_buf[:, 15:18] = self.object_pose[:, 0:3]
self.obs_buf[:, 18:22] = quat_mul(self.object_rot, quat_conjugate(self.goal_rot))
self.obs_buf[:, 22:42] = self.actions
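# Layout check: 15 fingertip positions + 3 object pos + 4 relative rot + 20 actions = 42 observations.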
else:
# 13*self.num_fingertips = 65
self.obs_buf[:, 0:65] = self.fingertip_state.reshape(self.num_envs, 65)
self.obs_buf[:, 65:72] = self.object_pose
self.obs_buf[:, 72:75] = self.object_linvel
self.obs_buf[:, 75:78] = self.vel_obs_scale * self.object_angvel
self.obs_buf[:, 78:85] = self.goal_pose
self.obs_buf[:, 85:89] = quat_mul(self.object_rot, quat_conjugate(self.goal_rot))
self.obs_buf[:, 89:109] = self.actions
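# Layout check: 65 + 7 + 3 + 3 + 7 + 4 + 20 = 109 observations in the velocity-enabled variant.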
def compute_full_observations(self, no_vel=False):
if no_vel:
self.obs_buf[:, 0:self.num_shadow_hand_dofs] = unscale(self.shadow_hand_dof_pos,
self.shadow_hand_dof_lower_limits, self.shadow_hand_dof_upper_limits)
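# The hardcoded indices below assume self.num_shadow_hand_dofs == 24.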
self.obs_buf[:, 24:31] = self.object_pose
self.obs_buf[:, 31:38] = self.goal_pose
self.obs_buf[:, 38:42] = quat_mul(self.object_rot, quat_conjugate(self.goal_rot))
# 3*self.num_fingertips = 15
self.obs_buf[:, 42:57] = self.fingertip_pos.reshape(self.num_envs, 15)
self.obs_buf[:, 57:77] = self.actions
else:
self.obs_buf[:, 0:self.num_shadow_hand_dofs] = unscale(self.shadow_hand_dof_pos,
self.shadow_hand_dof_lower_limits, self.shadow_hand_dof_upper_limits)
self.obs_buf[:, self.num_shadow_hand_dofs:2*self.num_shadow_hand_dofs] = self.vel_obs_scale * self.shadow_hand_dof_vel
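# 2 * 24 = 48; as above, the hardcoded indices below assume 24 hand DOFs.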
self.obs_buf[:, 48:55] = self.object_pose
self.obs_buf[:, 55:58] = self.object_linvel
self.obs_buf[:, 58:61] = self.vel_obs_scale * self.object_angvel
self.obs_buf[:, 61:68] = self.goal_pose
self.obs_buf[:, 68:72] = quat_mul(self.object_rot, quat_conjugate(self.goal_rot))
# 13*self.num_fingertips = 65
self.obs_buf[:, 72:137] = self.fingertip_state.reshape(self.num_envs, 65)
self.obs_buf[:, 137:157] = self.actions
def compute_full_state(self, asymm_obs=False):
if asymm_obs:
self.states_buf[:, 0:self.num_shadow_hand_dofs] = unscale(self.shadow_hand_dof_pos,
self.shadow_hand_dof_lower_limits, self.shadow_hand_dof_upper_limits)
self.states_buf[:, self.num_shadow_hand_dofs:2*self.num_shadow_hand_dofs] = self.vel_obs_scale * self.shadow_hand_dof_vel
self.states_buf[:, 2*self.num_shadow_hand_dofs:3*self.num_shadow_hand_dofs] = self.force_torque_obs_scale * self.dof_force_tensor
obj_obs_start = 3*self.num_shadow_hand_dofs # 72
self.states_buf[:, obj_obs_start:obj_obs_start + 7] = self.object_pose
self.states_buf[:, obj_obs_start + 7:obj_obs_start + 10] = self.object_linvel
self.states_buf[:, obj_obs_start + 10:obj_obs_start + 13] = self.vel_obs_scale * self.object_angvel
goal_obs_start = obj_obs_start + 13 # 85
self.states_buf[:, goal_obs_start:goal_obs_start + 7] = self.goal_pose
self.states_buf[:, goal_obs_start + 7:goal_obs_start + 11] = quat_mul(self.object_rot, quat_conjugate(self.goal_rot))
# fingertip observations, state(pose and vel) + force-torque sensors
num_ft_states = 13 * self.num_fingertips # 65
num_ft_force_torques = 6 * self.num_fingertips # 30
fingertip_obs_start = goal_obs_start + 11 # 96
self.states_buf[:, fingertip_obs_start:fingertip_obs_start + num_ft_states] = self.fingertip_state.reshape(self.num_envs, num_ft_states)
self.states_buf[:, fingertip_obs_start + num_ft_states:fingertip_obs_start + num_ft_states +
num_ft_force_torques] = self.force_torque_obs_scale * self.vec_sensor_tensor
# obs_end = 96 + 65 + 30 = 191
# obs_total = obs_end + num_actions = 211
obs_end = fingertip_obs_start + num_ft_states + num_ft_force_torques
self.states_buf[:, obs_end:obs_end + self.num_actions] = self.actions
else:
self.obs_buf[:, 0:self.num_shadow_hand_dofs] = unscale(self.shadow_hand_dof_pos,
self.shadow_hand_dof_lower_limits, self.shadow_hand_dof_upper_limits)
self.obs_buf[:, self.num_shadow_hand_dofs:2*self.num_shadow_hand_dofs] = self.vel_obs_scale * self.shadow_hand_dof_vel
self.obs_buf[:, 2*self.num_shadow_hand_dofs:3*self.num_shadow_hand_dofs] = self.force_torque_obs_scale * self.dof_force_tensor
obj_obs_start = 3*self.num_shadow_hand_dofs # 72
self.obs_buf[:, obj_obs_start:obj_obs_start + 7] = self.object_pose
self.obs_buf[:, obj_obs_start + 7:obj_obs_start + 10] = self.object_linvel
self.obs_buf[:, obj_obs_start + 10:obj_obs_start + 13] = self.vel_obs_scale * self.object_angvel
goal_obs_start = obj_obs_start + 13 # 85
self.obs_buf[:, goal_obs_start:goal_obs_start + 7] = self.goal_pose
self.obs_buf[:, goal_obs_start + 7:goal_obs_start + 11] = quat_mul(self.object_rot, quat_conjugate(self.goal_rot))
# fingertip observations, state(pose and vel) + force-torque sensors
num_ft_states = 13 * self.num_fingertips # 65
num_ft_force_torques = 6 * self.num_fingertips # 30
fingertip_obs_start = goal_obs_start + 11 # 96
self.obs_buf[:, fingertip_obs_start:fingertip_obs_start + num_ft_states] = self.fingertip_state.reshape(self.num_envs, num_ft_states)
self.obs_buf[:, fingertip_obs_start + num_ft_states:fingertip_obs_start + num_ft_states +
num_ft_force_torques] = self.force_torque_obs_scale * self.vec_sensor_tensor
# obs_end = 96 + 65 + 30 = 191
# obs_total = obs_end + num_actions = 211
obs_end = fingertip_obs_start + num_ft_states + num_ft_force_torques
self.obs_buf[:, obs_end:obs_end + self.num_actions] = self.actions
def reset_target_pose(self, env_ids, apply_reset=False):
rand_floats = torch_rand_float(-1.0, 1.0, (len(env_ids), 4), device=self.device)
new_rot = randomize_rotation(rand_floats[:, 0], rand_floats[:, 1], self.x_unit_tensor[env_ids], self.y_unit_tensor[env_ids])
self.goal_states[env_ids, 0:3] = self.goal_init_state[env_ids, 0:3]
self.goal_states[env_ids, 3:7] = new_rot
self.root_state_tensor[self.goal_object_indices[env_ids], 0:3] = self.goal_states[env_ids, 0:3] + self.goal_displacement_tensor
self.root_state_tensor[self.goal_object_indices[env_ids], 3:7] = self.goal_states[env_ids, 3:7]
self.root_state_tensor[self.goal_object_indices[env_ids], 7:13] = torch.zeros_like(self.root_state_tensor[self.goal_object_indices[env_ids], 7:13])
if apply_reset:
goal_object_indices = self.goal_object_indices[env_ids].to(torch.int32)
self.gym.set_actor_root_state_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.root_state_tensor),
gymtorch.unwrap_tensor(goal_object_indices), len(env_ids))
self.reset_goal_buf[env_ids] = 0
def reset_idx(self, env_ids, goal_env_ids):
# randomization can happen only at reset time, since it can reset actor positions on GPU
if self.randomize:
self.apply_randomizations(self.randomization_params)
# generate random values
rand_floats = torch_rand_float(-1.0, 1.0, (len(env_ids), self.num_shadow_hand_dofs * 2 + 5), device=self.device)
# randomize start object poses
self.reset_target_pose(env_ids)
# reset rigid body forces
self.rb_forces[env_ids, :, :] = 0.0
# reset object
self.root_state_tensor[self.object_indices[env_ids]] = self.object_init_state[env_ids].clone()
self.root_state_tensor[self.object_indices[env_ids], 0:2] = self.object_init_state[env_ids, 0:2] + \
self.reset_position_noise * rand_floats[:, 0:2]
self.root_state_tensor[self.object_indices[env_ids], self.up_axis_idx] = self.object_init_state[env_ids, self.up_axis_idx] + \
self.reset_position_noise * rand_floats[:, self.up_axis_idx]
new_object_rot = randomize_rotation(rand_floats[:, 3], rand_floats[:, 4], self.x_unit_tensor[env_ids], self.y_unit_tensor[env_ids])
if self.object_type == "pen":
rand_angle_y = torch.tensor(0.3)
new_object_rot = randomize_rotation_pen(rand_floats[:, 3], rand_floats[:, 4], rand_angle_y,
self.x_unit_tensor[env_ids], self.y_unit_tensor[env_ids], self.z_unit_tensor[env_ids])
self.root_state_tensor[self.object_indices[env_ids], 3:7] = new_object_rot
self.root_state_tensor[self.object_indices[env_ids], 7:13] = torch.zeros_like(self.root_state_tensor[self.object_indices[env_ids], 7:13])
object_indices = torch.unique(torch.cat([self.object_indices[env_ids],
self.goal_object_indices[env_ids],
self.goal_object_indices[goal_env_ids]]).to(torch.int32))
self.gym.set_actor_root_state_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.root_state_tensor),
gymtorch.unwrap_tensor(object_indices), len(object_indices))
# reset random force probabilities
self.random_force_prob[env_ids] = torch.exp((torch.log(self.force_prob_range[0]) - torch.log(self.force_prob_range[1]))
* torch.rand(len(env_ids), device=self.device) + torch.log(self.force_prob_range[1]))
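# This draws log-uniform samples between force_prob_range[0] and force_prob_range[1].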
# reset shadow hand
delta_max = self.shadow_hand_dof_upper_limits - self.shadow_hand_dof_default_pos
delta_min = self.shadow_hand_dof_lower_limits - self.shadow_hand_dof_default_pos
rand_delta = delta_min + (delta_max - delta_min) * 0.5 * (rand_floats[:, 5:5+self.num_shadow_hand_dofs] + 1)
pos = self.shadow_hand_default_dof_pos + self.reset_dof_pos_noise * rand_delta
self.shadow_hand_dof_pos[env_ids, :] = pos
self.shadow_hand_dof_vel[env_ids, :] = self.shadow_hand_dof_default_vel + \
self.reset_dof_vel_noise * rand_floats[:, 5+self.num_shadow_hand_dofs:5+self.num_shadow_hand_dofs*2]
self.prev_targets[env_ids, :self.num_shadow_hand_dofs] = pos
self.cur_targets[env_ids, :self.num_shadow_hand_dofs] = pos
hand_indices = self.hand_indices[env_ids].to(torch.int32)
self.gym.set_dof_position_target_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.prev_targets),
gymtorch.unwrap_tensor(hand_indices), len(env_ids))
self.gym.set_dof_state_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.dof_state),
gymtorch.unwrap_tensor(hand_indices), len(env_ids))
self.progress_buf[env_ids] = 0
self.reset_buf[env_ids] = 0
self.successes[env_ids] = 0
def pre_physics_step(self, actions):
env_ids = self.reset_buf.nonzero(as_tuple=False).squeeze(-1)
goal_env_ids = self.reset_goal_buf.nonzero(as_tuple=False).squeeze(-1)
# if only goals need reset, then call set API
if len(goal_env_ids) > 0 and len(env_ids) == 0:
self.reset_target_pose(goal_env_ids, apply_reset=True)
# if goals need reset in addition to other envs, call set API in reset_idx()
elif len(goal_env_ids) > 0:
self.reset_target_pose(goal_env_ids)
if len(env_ids) > 0:
self.reset_idx(env_ids, goal_env_ids)
self.actions = actions.clone().to(self.device)
if self.use_relative_control:
targets = self.prev_targets[:, self.actuated_dof_indices] + self.shadow_hand_dof_speed_scale * self.dt * self.actions
self.cur_targets[:, self.actuated_dof_indices] = tensor_clamp(targets,
self.shadow_hand_dof_lower_limits[self.actuated_dof_indices], self.shadow_hand_dof_upper_limits[self.actuated_dof_indices])
else:
self.cur_targets[:, self.actuated_dof_indices] = scale(self.actions,
self.shadow_hand_dof_lower_limits[self.actuated_dof_indices], self.shadow_hand_dof_upper_limits[self.actuated_dof_indices])
self.cur_targets[:, self.actuated_dof_indices] = self.act_moving_average * self.cur_targets[:,
self.actuated_dof_indices] + (1.0 - self.act_moving_average) * self.prev_targets[:, self.actuated_dof_indices]
self.cur_targets[:, self.actuated_dof_indices] = tensor_clamp(self.cur_targets[:, self.actuated_dof_indices],
self.shadow_hand_dof_lower_limits[self.actuated_dof_indices], self.shadow_hand_dof_upper_limits[self.actuated_dof_indices])
self.prev_targets[:, self.actuated_dof_indices] = self.cur_targets[:, self.actuated_dof_indices]
self.gym.set_dof_position_target_tensor(self.sim, gymtorch.unwrap_tensor(self.cur_targets))
if self.force_scale > 0.0:
self.rb_forces *= torch.pow(self.force_decay, self.dt / self.force_decay_interval)
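# Exponential decay: compounding per step, the forces are scaled by force_decay once per force_decay_interval seconds.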
# apply new forces
force_indices = (torch.rand(self.num_envs, device=self.device) < self.random_force_prob).nonzero()
self.rb_forces[force_indices, self.object_rb_handles, :] = torch.randn(
self.rb_forces[force_indices, self.object_rb_handles, :].shape, device=self.device) * self.object_rb_masses * self.force_scale
self.gym.apply_rigid_body_force_tensors(self.sim, gymtorch.unwrap_tensor(self.rb_forces), None, gymapi.LOCAL_SPACE)
def post_physics_step(self):
self.progress_buf += 1
self.randomize_buf += 1
self.compute_observations()
self.compute_reward(self.actions)
if self.viewer and self.debug_viz:
# draw axes on target object
self.gym.clear_lines(self.viewer)
self.gym.refresh_rigid_body_state_tensor(self.sim)
for i in range(self.num_envs):
targetx = (self.goal_pos[i] + quat_apply(self.goal_rot[i], to_torch([1, 0, 0], device=self.device) * 0.2)).cpu().numpy()
targety = (self.goal_pos[i] + quat_apply(self.goal_rot[i], to_torch([0, 1, 0], device=self.device) * 0.2)).cpu().numpy()
targetz = (self.goal_pos[i] + quat_apply(self.goal_rot[i], to_torch([0, 0, 1], device=self.device) * 0.2)).cpu().numpy()
p0 = self.goal_pos[i].cpu().numpy() + self.goal_displacement_tensor.cpu().numpy()
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], targetx[0], targetx[1], targetx[2]], [0.85, 0.1, 0.1])
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], targety[0], targety[1], targety[2]], [0.1, 0.85, 0.1])
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], targetz[0], targetz[1], targetz[2]], [0.1, 0.1, 0.85])
objectx = (self.object_pos[i] + quat_apply(self.object_rot[i], to_torch([1, 0, 0], device=self.device) * 0.2)).cpu().numpy()
objecty = (self.object_pos[i] + quat_apply(self.object_rot[i], to_torch([0, 1, 0], device=self.device) * 0.2)).cpu().numpy()
objectz = (self.object_pos[i] + quat_apply(self.object_rot[i], to_torch([0, 0, 1], device=self.device) * 0.2)).cpu().numpy()
p0 = self.object_pos[i].cpu().numpy()
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], objectx[0], objectx[1], objectx[2]], [0.85, 0.1, 0.1])
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], objecty[0], objecty[1], objecty[2]], [0.1, 0.85, 0.1])
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], objectz[0], objectz[1], objectz[2]], [0.1, 0.1, 0.85])
#####################################################################
###=========================jit functions=========================###
#####################################################################
@torch.jit.script
def compute_hand_reward(
rew_buf, reset_buf, reset_goal_buf, progress_buf, successes, consecutive_successes,
max_episode_length: float, object_pos, object_rot, target_pos, target_rot,
dist_reward_scale: float, rot_reward_scale: float, rot_eps: float,
actions, action_penalty_scale: float,
success_tolerance: float, reach_goal_bonus: float, fall_dist: float,
fall_penalty: float, max_consecutive_successes: int, av_factor: float, ignore_z_rot: bool
):
# Distance from the object to the goal
goal_dist = torch.norm(object_pos - target_pos, p=2, dim=-1)
if ignore_z_rot:
success_tolerance = 2.0 * success_tolerance
# Orientation alignment for the cube in hand and goal cube
quat_diff = quat_mul(object_rot, quat_conjugate(target_rot))
rot_dist = 2.0 * torch.asin(torch.clamp(torch.norm(quat_diff[:, 0:3], p=2, dim=-1), max=1.0))
dist_rew = goal_dist * dist_reward_scale
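# Note: dist_reward_scale is typically configured negative, so this term penalizes distance to the goal.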
rot_rew = 1.0/(torch.abs(rot_dist) + rot_eps) * rot_reward_scale
action_penalty = torch.sum(actions ** 2, dim=-1)
# Total reward is: position distance + orientation alignment + action regularization + success bonus + fall penalty
reward = dist_rew + rot_rew + action_penalty * action_penalty_scale
# Find out which envs hit the goal and update successes count
goal_resets = torch.where(torch.abs(rot_dist) <= success_tolerance, torch.ones_like(reset_goal_buf), reset_goal_buf)
successes = successes + goal_resets
# Success bonus: orientation is within `success_tolerance` of goal orientation
reward = torch.where(goal_resets == 1, reward + reach_goal_bonus, reward)
# Fall penalty: distance to the goal is larger than a threshold
reward = torch.where(goal_dist >= fall_dist, reward + fall_penalty, reward)
# Check env termination conditions, including maximum success number
resets = torch.where(goal_dist >= fall_dist, torch.ones_like(reset_buf), reset_buf)
if max_consecutive_successes > 0:
# Reset progress buffer on goal envs if max_consecutive_successes > 0
progress_buf = torch.where(torch.abs(rot_dist) <= success_tolerance, torch.zeros_like(progress_buf), progress_buf)
resets = torch.where(successes >= max_consecutive_successes, torch.ones_like(resets), resets)
resets = torch.where(progress_buf >= max_episode_length - 1, torch.ones_like(resets), resets)
# Apply penalty for not reaching the goal
if max_consecutive_successes > 0:
reward = torch.where(progress_buf >= max_episode_length - 1, reward + 0.5 * fall_penalty, reward)
num_resets = torch.sum(resets)
finished_cons_successes = torch.sum(successes * resets.float())
cons_successes = torch.where(num_resets > 0, av_factor*finished_cons_successes/num_resets + (1.0 - av_factor)*consecutive_successes, consecutive_successes)
return reward, resets, goal_resets, progress_buf, successes, cons_successes
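# Worked example for rot_dist above: identical orientations give quat_diff = (0, 0, 0, 1),
# so rot_dist = 2 * asin(0) = 0; a 180-degree flip about x gives quat_diff = (1, 0, 0, 0)
# and rot_dist = 2 * asin(1) = pi.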
@torch.jit.script
def randomize_rotation(rand0, rand1, x_unit_tensor, y_unit_tensor):
return quat_mul(quat_from_angle_axis(rand0 * np.pi, x_unit_tensor),
quat_from_angle_axis(rand1 * np.pi, y_unit_tensor))
@torch.jit.script
def randomize_rotation_pen(rand0, rand1, max_angle, x_unit_tensor, y_unit_tensor, z_unit_tensor):
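# Note: rand1 and y_unit_tensor are accepted but unused; the same sample (rand0) drives both
# the x-axis tilt and the z-axis spin.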
rot = quat_mul(quat_from_angle_axis(0.5 * np.pi + rand0 * max_angle, x_unit_tensor),
quat_from_angle_axis(rand0 * np.pi, z_unit_tensor))
return rot
| 45,910 | Python | 55.40172 | 217 | 0.624439 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/franka_cabinet.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import numpy as np
import os
import torch
from isaacgym import gymutil, gymtorch, gymapi
from isaacgymenvs.utils.torch_jit_utils import to_torch, get_axis_params, tensor_clamp, \
quat_apply, tf_vector, tf_combine
from .base.vec_task import VecTask
class FrankaCabinet(VecTask):
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.cfg = cfg
self.max_episode_length = self.cfg["env"]["episodeLength"]
self.action_scale = self.cfg["env"]["actionScale"]
self.start_position_noise = self.cfg["env"]["startPositionNoise"]
self.start_rotation_noise = self.cfg["env"]["startRotationNoise"]
self.num_props = self.cfg["env"]["numProps"]
self.aggregate_mode = self.cfg["env"]["aggregateMode"]
self.dof_vel_scale = self.cfg["env"]["dofVelocityScale"]
self.dist_reward_scale = self.cfg["env"]["distRewardScale"]
self.rot_reward_scale = self.cfg["env"]["rotRewardScale"]
self.around_handle_reward_scale = self.cfg["env"]["aroundHandleRewardScale"]
self.open_reward_scale = self.cfg["env"]["openRewardScale"]
self.finger_dist_reward_scale = self.cfg["env"]["fingerDistRewardScale"]
self.action_penalty_scale = self.cfg["env"]["actionPenaltyScale"]
self.debug_viz = self.cfg["env"]["enableDebugVis"]
self.up_axis = "z"
self.up_axis_idx = 2
self.distX_offset = 0.04
self.dt = 1/60.
# prop dimensions
self.prop_width = 0.08
self.prop_height = 0.08
self.prop_length = 0.08
self.prop_spacing = 0.09
num_obs = 23
num_acts = 9
self.cfg["env"]["numObservations"] = num_obs
self.cfg["env"]["numActions"] = num_acts
super().__init__(config=self.cfg, rl_device=rl_device, sim_device=sim_device, graphics_device_id=graphics_device_id, headless=headless, virtual_screen_capture=virtual_screen_capture, force_render=force_render)
# get gym GPU state tensors
actor_root_state_tensor = self.gym.acquire_actor_root_state_tensor(self.sim)
dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
rigid_body_tensor = self.gym.acquire_rigid_body_state_tensor(self.sim)
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_rigid_body_state_tensor(self.sim)
# create some wrapper tensors for different slices
self.franka_default_dof_pos = to_torch([1.157, -1.066, -0.155, -2.239, -1.841, 1.003, 0.469, 0.035, 0.035], device=self.device)
self.dof_state = gymtorch.wrap_tensor(dof_state_tensor)
self.franka_dof_state = self.dof_state.view(self.num_envs, -1, 2)[:, :self.num_franka_dofs]
self.franka_dof_pos = self.franka_dof_state[..., 0]
self.franka_dof_vel = self.franka_dof_state[..., 1]
self.cabinet_dof_state = self.dof_state.view(self.num_envs, -1, 2)[:, self.num_franka_dofs:]
self.cabinet_dof_pos = self.cabinet_dof_state[..., 0]
self.cabinet_dof_vel = self.cabinet_dof_state[..., 1]
self.rigid_body_states = gymtorch.wrap_tensor(rigid_body_tensor).view(self.num_envs, -1, 13)
self.num_bodies = self.rigid_body_states.shape[1]
self.root_state_tensor = gymtorch.wrap_tensor(actor_root_state_tensor).view(self.num_envs, -1, 13)
if self.num_props > 0:
self.prop_states = self.root_state_tensor[:, 2:]
self.num_dofs = self.gym.get_sim_dof_count(self.sim) // self.num_envs
self.franka_dof_targets = torch.zeros((self.num_envs, self.num_dofs), dtype=torch.float, device=self.device)
self.global_indices = torch.arange(self.num_envs * (2 + self.num_props), dtype=torch.int32, device=self.device).view(self.num_envs, -1)
self.reset_idx(torch.arange(self.num_envs, device=self.device))
def create_sim(self):
self.sim_params.up_axis = gymapi.UP_AXIS_Z
self.sim_params.gravity.x = 0
self.sim_params.gravity.y = 0
self.sim_params.gravity.z = -9.81
self.sim = super().create_sim(
self.device_id, self.graphics_device_id, self.physics_engine, self.sim_params)
self._create_ground_plane()
self._create_envs(self.num_envs, self.cfg["env"]['envSpacing'], int(np.sqrt(self.num_envs)))
def _create_ground_plane(self):
plane_params = gymapi.PlaneParams()
plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)
self.gym.add_ground(self.sim, plane_params)
def _create_envs(self, num_envs, spacing, num_per_row):
lower = gymapi.Vec3(-spacing, -spacing, 0.0)
upper = gymapi.Vec3(spacing, spacing, spacing)
asset_root = os.path.join(os.path.dirname(os.path.abspath(__file__)), "../../assets")
franka_asset_file = "urdf/franka_description/robots/franka_panda.urdf"
cabinet_asset_file = "urdf/sektion_cabinet_model/urdf/sektion_cabinet_2.urdf"
if "asset" in self.cfg["env"]:
asset_root = os.path.join(os.path.dirname(os.path.abspath(__file__)), self.cfg["env"]["asset"].get("assetRoot", asset_root))
franka_asset_file = self.cfg["env"]["asset"].get("assetFileNameFranka", franka_asset_file)
cabinet_asset_file = self.cfg["env"]["asset"].get("assetFileNameCabinet", cabinet_asset_file)
# load franka asset
asset_options = gymapi.AssetOptions()
asset_options.flip_visual_attachments = True
asset_options.fix_base_link = True
asset_options.collapse_fixed_joints = True
asset_options.disable_gravity = True
asset_options.thickness = 0.001
asset_options.default_dof_drive_mode = gymapi.DOF_MODE_POS
asset_options.use_mesh_materials = True
franka_asset = self.gym.load_asset(self.sim, asset_root, franka_asset_file, asset_options)
# load cabinet asset
asset_options.flip_visual_attachments = False
asset_options.collapse_fixed_joints = True
asset_options.disable_gravity = False
asset_options.default_dof_drive_mode = gymapi.DOF_MODE_NONE
asset_options.armature = 0.005
cabinet_asset = self.gym.load_asset(self.sim, asset_root, cabinet_asset_file, asset_options)
franka_dof_stiffness = to_torch([400, 400, 400, 400, 400, 400, 400, 1.0e6, 1.0e6], dtype=torch.float, device=self.device)
franka_dof_damping = to_torch([80, 80, 80, 80, 80, 80, 80, 1.0e2, 1.0e2], dtype=torch.float, device=self.device)
self.num_franka_bodies = self.gym.get_asset_rigid_body_count(franka_asset)
self.num_franka_dofs = self.gym.get_asset_dof_count(franka_asset)
self.num_cabinet_bodies = self.gym.get_asset_rigid_body_count(cabinet_asset)
self.num_cabinet_dofs = self.gym.get_asset_dof_count(cabinet_asset)
print("num franka bodies: ", self.num_franka_bodies)
print("num franka dofs: ", self.num_franka_dofs)
print("num cabinet bodies: ", self.num_cabinet_bodies)
print("num cabinet dofs: ", self.num_cabinet_dofs)
# set franka dof properties
franka_dof_props = self.gym.get_asset_dof_properties(franka_asset)
self.franka_dof_lower_limits = []
self.franka_dof_upper_limits = []
for i in range(self.num_franka_dofs):
franka_dof_props['driveMode'][i] = gymapi.DOF_MODE_POS
if self.physics_engine == gymapi.SIM_PHYSX:
franka_dof_props['stiffness'][i] = franka_dof_stiffness[i]
franka_dof_props['damping'][i] = franka_dof_damping[i]
else:
franka_dof_props['stiffness'][i] = 7000.0
franka_dof_props['damping'][i] = 50.0
self.franka_dof_lower_limits.append(franka_dof_props['lower'][i])
self.franka_dof_upper_limits.append(franka_dof_props['upper'][i])
self.franka_dof_lower_limits = to_torch(self.franka_dof_lower_limits, device=self.device)
self.franka_dof_upper_limits = to_torch(self.franka_dof_upper_limits, device=self.device)
self.franka_dof_speed_scales = torch.ones_like(self.franka_dof_lower_limits)
self.franka_dof_speed_scales[[7, 8]] = 0.1
franka_dof_props['effort'][7] = 200
franka_dof_props['effort'][8] = 200
# set cabinet dof properties
cabinet_dof_props = self.gym.get_asset_dof_properties(cabinet_asset)
for i in range(self.num_cabinet_dofs):
cabinet_dof_props['damping'][i] = 10.0
# create prop assets
box_opts = gymapi.AssetOptions()
box_opts.density = 400
prop_asset = self.gym.create_box(self.sim, self.prop_width, self.prop_height, self.prop_length, box_opts)  # box dims: width x height x length
franka_start_pose = gymapi.Transform()
franka_start_pose.p = gymapi.Vec3(1.0, 0.0, 0.0)
franka_start_pose.r = gymapi.Quat(0.0, 0.0, 1.0, 0.0)
cabinet_start_pose = gymapi.Transform()
cabinet_start_pose.p = gymapi.Vec3(*get_axis_params(0.4, self.up_axis_idx))
# compute aggregate size
num_franka_bodies = self.gym.get_asset_rigid_body_count(franka_asset)
num_franka_shapes = self.gym.get_asset_rigid_shape_count(franka_asset)
num_cabinet_bodies = self.gym.get_asset_rigid_body_count(cabinet_asset)
num_cabinet_shapes = self.gym.get_asset_rigid_shape_count(cabinet_asset)
num_prop_bodies = self.gym.get_asset_rigid_body_count(prop_asset)
num_prop_shapes = self.gym.get_asset_rigid_shape_count(prop_asset)
max_agg_bodies = num_franka_bodies + num_cabinet_bodies + self.num_props * num_prop_bodies
max_agg_shapes = num_franka_shapes + num_cabinet_shapes + self.num_props * num_prop_shapes
self.frankas = []
self.cabinets = []
self.default_prop_states = []
self.prop_start = []
self.envs = []
for i in range(self.num_envs):
# create env instance
env_ptr = self.gym.create_env(
self.sim, lower, upper, num_per_row
)
if self.aggregate_mode >= 3:
self.gym.begin_aggregate(env_ptr, max_agg_bodies, max_agg_shapes, True)
franka_actor = self.gym.create_actor(env_ptr, franka_asset, franka_start_pose, "franka", i, 1, 0)
self.gym.set_actor_dof_properties(env_ptr, franka_actor, franka_dof_props)
if self.aggregate_mode == 2:
self.gym.begin_aggregate(env_ptr, max_agg_bodies, max_agg_shapes, True)
cabinet_pose = cabinet_start_pose
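# Note: this binds cabinet_pose to the same Transform object as cabinet_start_pose, so the noise
# added below accumulates across envs; constructing a fresh Transform here would make the
# per-env noise independent.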
cabinet_pose.p.x += self.start_position_noise * (np.random.rand() - 0.5)
dz = 0.5 * np.random.rand()
dy = np.random.rand() - 0.5
cabinet_pose.p.y += self.start_position_noise * dy
cabinet_pose.p.z += self.start_position_noise * dz
cabinet_actor = self.gym.create_actor(env_ptr, cabinet_asset, cabinet_pose, "cabinet", i, 2, 0)
self.gym.set_actor_dof_properties(env_ptr, cabinet_actor, cabinet_dof_props)
if self.aggregate_mode == 1:
self.gym.begin_aggregate(env_ptr, max_agg_bodies, max_agg_shapes, True)
if self.num_props > 0:
self.prop_start.append(self.gym.get_sim_actor_count(self.sim))
drawer_handle = self.gym.find_actor_rigid_body_handle(env_ptr, cabinet_actor, "drawer_top")
drawer_pose = self.gym.get_rigid_transform(env_ptr, drawer_handle)
props_per_row = int(np.ceil(np.sqrt(self.num_props)))
xmin = -0.5 * self.prop_spacing * (props_per_row - 1)
yzmin = -0.5 * self.prop_spacing * (props_per_row - 1)
prop_count = 0
for j in range(props_per_row):
prop_up = yzmin + j * self.prop_spacing
for k in range(props_per_row):
if prop_count >= self.num_props:
break
propx = xmin + k * self.prop_spacing
prop_state_pose = gymapi.Transform()
prop_state_pose.p.x = drawer_pose.p.x + propx
propz, propy = 0, prop_up
prop_state_pose.p.y = drawer_pose.p.y + propy
prop_state_pose.p.z = drawer_pose.p.z + propz
prop_state_pose.r = gymapi.Quat(0, 0, 0, 1)
prop_handle = self.gym.create_actor(env_ptr, prop_asset, prop_state_pose, "prop{}".format(prop_count), i, 0, 0)
prop_count += 1
prop_idx = j * props_per_row + k
self.default_prop_states.append([prop_state_pose.p.x, prop_state_pose.p.y, prop_state_pose.p.z,
prop_state_pose.r.x, prop_state_pose.r.y, prop_state_pose.r.z, prop_state_pose.r.w,
0, 0, 0, 0, 0, 0])
if self.aggregate_mode > 0:
self.gym.end_aggregate(env_ptr)
self.envs.append(env_ptr)
self.frankas.append(franka_actor)
self.cabinets.append(cabinet_actor)
self.hand_handle = self.gym.find_actor_rigid_body_handle(env_ptr, franka_actor, "panda_link7")
self.drawer_handle = self.gym.find_actor_rigid_body_handle(env_ptr, cabinet_actor, "drawer_top")
self.lfinger_handle = self.gym.find_actor_rigid_body_handle(env_ptr, franka_actor, "panda_leftfinger")
self.rfinger_handle = self.gym.find_actor_rigid_body_handle(env_ptr, franka_actor, "panda_rightfinger")
self.default_prop_states = to_torch(self.default_prop_states, device=self.device, dtype=torch.float).view(self.num_envs, self.num_props, 13)
self.init_data()
def init_data(self):
hand = self.gym.find_actor_rigid_body_handle(self.envs[0], self.frankas[0], "panda_link7")
lfinger = self.gym.find_actor_rigid_body_handle(self.envs[0], self.frankas[0], "panda_leftfinger")
rfinger = self.gym.find_actor_rigid_body_handle(self.envs[0], self.frankas[0], "panda_rightfinger")
hand_pose = self.gym.get_rigid_transform(self.envs[0], hand)
lfinger_pose = self.gym.get_rigid_transform(self.envs[0], lfinger)
rfinger_pose = self.gym.get_rigid_transform(self.envs[0], rfinger)
finger_pose = gymapi.Transform()
finger_pose.p = (lfinger_pose.p + rfinger_pose.p) * 0.5
finger_pose.r = lfinger_pose.r
hand_pose_inv = hand_pose.inverse()
grasp_pose_axis = 1
franka_local_grasp_pose = hand_pose_inv * finger_pose
franka_local_grasp_pose.p += gymapi.Vec3(*get_axis_params(0.04, grasp_pose_axis))
self.franka_local_grasp_pos = to_torch([franka_local_grasp_pose.p.x, franka_local_grasp_pose.p.y,
franka_local_grasp_pose.p.z], device=self.device).repeat((self.num_envs, 1))
self.franka_local_grasp_rot = to_torch([franka_local_grasp_pose.r.x, franka_local_grasp_pose.r.y,
franka_local_grasp_pose.r.z, franka_local_grasp_pose.r.w], device=self.device).repeat((self.num_envs, 1))
drawer_local_grasp_pose = gymapi.Transform()
drawer_local_grasp_pose.p = gymapi.Vec3(*get_axis_params(0.01, grasp_pose_axis, 0.3))
drawer_local_grasp_pose.r = gymapi.Quat(0, 0, 0, 1)
self.drawer_local_grasp_pos = to_torch([drawer_local_grasp_pose.p.x, drawer_local_grasp_pose.p.y,
drawer_local_grasp_pose.p.z], device=self.device).repeat((self.num_envs, 1))
self.drawer_local_grasp_rot = to_torch([drawer_local_grasp_pose.r.x, drawer_local_grasp_pose.r.y,
drawer_local_grasp_pose.r.z, drawer_local_grasp_pose.r.w], device=self.device).repeat((self.num_envs, 1))
self.gripper_forward_axis = to_torch([0, 0, 1], device=self.device).repeat((self.num_envs, 1))
self.drawer_inward_axis = to_torch([-1, 0, 0], device=self.device).repeat((self.num_envs, 1))
self.gripper_up_axis = to_torch([0, 1, 0], device=self.device).repeat((self.num_envs, 1))
self.drawer_up_axis = to_torch([0, 0, 1], device=self.device).repeat((self.num_envs, 1))
self.franka_grasp_pos = torch.zeros_like(self.franka_local_grasp_pos)
self.franka_grasp_rot = torch.zeros_like(self.franka_local_grasp_rot)
self.franka_grasp_rot[..., -1] = 1 # xyzw
self.drawer_grasp_pos = torch.zeros_like(self.drawer_local_grasp_pos)
self.drawer_grasp_rot = torch.zeros_like(self.drawer_local_grasp_rot)
self.drawer_grasp_rot[..., -1] = 1
self.franka_lfinger_pos = torch.zeros_like(self.franka_local_grasp_pos)
self.franka_rfinger_pos = torch.zeros_like(self.franka_local_grasp_pos)
self.franka_lfinger_rot = torch.zeros_like(self.franka_local_grasp_rot)
self.franka_rfinger_rot = torch.zeros_like(self.franka_local_grasp_rot)
def compute_reward(self, actions):
self.rew_buf[:], self.reset_buf[:] = compute_franka_reward(
self.reset_buf, self.progress_buf, self.actions, self.cabinet_dof_pos,
self.franka_grasp_pos, self.drawer_grasp_pos, self.franka_grasp_rot, self.drawer_grasp_rot,
self.franka_lfinger_pos, self.franka_rfinger_pos,
self.gripper_forward_axis, self.drawer_inward_axis, self.gripper_up_axis, self.drawer_up_axis,
self.num_envs, self.dist_reward_scale, self.rot_reward_scale, self.around_handle_reward_scale, self.open_reward_scale,
self.finger_dist_reward_scale, self.action_penalty_scale, self.distX_offset, self.max_episode_length
)
def compute_observations(self):
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_rigid_body_state_tensor(self.sim)
hand_pos = self.rigid_body_states[:, self.hand_handle][:, 0:3]
hand_rot = self.rigid_body_states[:, self.hand_handle][:, 3:7]
drawer_pos = self.rigid_body_states[:, self.drawer_handle][:, 0:3]
drawer_rot = self.rigid_body_states[:, self.drawer_handle][:, 3:7]
self.franka_grasp_rot[:], self.franka_grasp_pos[:], self.drawer_grasp_rot[:], self.drawer_grasp_pos[:] = \
compute_grasp_transforms(hand_rot, hand_pos, self.franka_local_grasp_rot, self.franka_local_grasp_pos,
drawer_rot, drawer_pos, self.drawer_local_grasp_rot, self.drawer_local_grasp_pos
)
self.franka_lfinger_pos = self.rigid_body_states[:, self.lfinger_handle][:, 0:3]
self.franka_rfinger_pos = self.rigid_body_states[:, self.rfinger_handle][:, 0:3]
self.franka_lfinger_rot = self.rigid_body_states[:, self.lfinger_handle][:, 3:7]
self.franka_rfinger_rot = self.rigid_body_states[:, self.rfinger_handle][:, 3:7]
dof_pos_scaled = (2.0 * (self.franka_dof_pos - self.franka_dof_lower_limits)
/ (self.franka_dof_upper_limits - self.franka_dof_lower_limits) - 1.0)
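# Linear map of each joint position onto [-1, 1] across its limit range.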
to_target = self.drawer_grasp_pos - self.franka_grasp_pos
self.obs_buf = torch.cat((dof_pos_scaled, self.franka_dof_vel * self.dof_vel_scale, to_target,
self.cabinet_dof_pos[:, 3].unsqueeze(-1), self.cabinet_dof_vel[:, 3].unsqueeze(-1)), dim=-1)
return self.obs_buf
def reset_idx(self, env_ids):
env_ids_int32 = env_ids.to(dtype=torch.int32)
# reset franka
pos = tensor_clamp(
self.franka_default_dof_pos.unsqueeze(0) + 0.25 * (torch.rand((len(env_ids), self.num_franka_dofs), device=self.device) - 0.5),
self.franka_dof_lower_limits, self.franka_dof_upper_limits)
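# Uniform noise of +/- 0.125 rad around the default pose, clamped to the joint limits.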
self.franka_dof_pos[env_ids, :] = pos
self.franka_dof_vel[env_ids, :] = torch.zeros_like(self.franka_dof_vel[env_ids])
self.franka_dof_targets[env_ids, :self.num_franka_dofs] = pos
# reset cabinet
self.cabinet_dof_state[env_ids, :] = torch.zeros_like(self.cabinet_dof_state[env_ids])
# reset props
if self.num_props > 0:
prop_indices = self.global_indices[env_ids, 2:].flatten()
self.prop_states[env_ids] = self.default_prop_states[env_ids]
self.gym.set_actor_root_state_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.root_state_tensor),
gymtorch.unwrap_tensor(prop_indices), len(prop_indices))
multi_env_ids_int32 = self.global_indices[env_ids, :2].flatten()
self.gym.set_dof_position_target_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.franka_dof_targets),
gymtorch.unwrap_tensor(multi_env_ids_int32), len(multi_env_ids_int32))
self.gym.set_dof_state_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.dof_state),
gymtorch.unwrap_tensor(multi_env_ids_int32), len(multi_env_ids_int32))
self.progress_buf[env_ids] = 0
self.reset_buf[env_ids] = 0
def pre_physics_step(self, actions):
self.actions = actions.clone().to(self.device)
targets = self.franka_dof_targets[:, :self.num_franka_dofs] + self.franka_dof_speed_scales * self.dt * self.actions * self.action_scale
self.franka_dof_targets[:, :self.num_franka_dofs] = tensor_clamp(
targets, self.franka_dof_lower_limits, self.franka_dof_upper_limits)
self.gym.set_dof_position_target_tensor(self.sim,
gymtorch.unwrap_tensor(self.franka_dof_targets))
def post_physics_step(self):
self.progress_buf += 1
env_ids = self.reset_buf.nonzero(as_tuple=False).squeeze(-1)
if len(env_ids) > 0:
self.reset_idx(env_ids)
self.compute_observations()
self.compute_reward(self.actions)
# debug viz
if self.viewer and self.debug_viz:
self.gym.clear_lines(self.viewer)
self.gym.refresh_rigid_body_state_tensor(self.sim)
for i in range(self.num_envs):
px = (self.franka_grasp_pos[i] + quat_apply(self.franka_grasp_rot[i], to_torch([1, 0, 0], device=self.device) * 0.2)).cpu().numpy()
py = (self.franka_grasp_pos[i] + quat_apply(self.franka_grasp_rot[i], to_torch([0, 1, 0], device=self.device) * 0.2)).cpu().numpy()
pz = (self.franka_grasp_pos[i] + quat_apply(self.franka_grasp_rot[i], to_torch([0, 0, 1], device=self.device) * 0.2)).cpu().numpy()
p0 = self.franka_grasp_pos[i].cpu().numpy()
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], px[0], px[1], px[2]], [0.85, 0.1, 0.1])
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], py[0], py[1], py[2]], [0.1, 0.85, 0.1])
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], pz[0], pz[1], pz[2]], [0.1, 0.1, 0.85])
px = (self.drawer_grasp_pos[i] + quat_apply(self.drawer_grasp_rot[i], to_torch([1, 0, 0], device=self.device) * 0.2)).cpu().numpy()
py = (self.drawer_grasp_pos[i] + quat_apply(self.drawer_grasp_rot[i], to_torch([0, 1, 0], device=self.device) * 0.2)).cpu().numpy()
pz = (self.drawer_grasp_pos[i] + quat_apply(self.drawer_grasp_rot[i], to_torch([0, 0, 1], device=self.device) * 0.2)).cpu().numpy()
p0 = self.drawer_grasp_pos[i].cpu().numpy()
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], px[0], px[1], px[2]], [1, 0, 0])
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], py[0], py[1], py[2]], [0, 1, 0])
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], pz[0], pz[1], pz[2]], [0, 0, 1])
px = (self.franka_lfinger_pos[i] + quat_apply(self.franka_lfinger_rot[i], to_torch([1, 0, 0], device=self.device) * 0.2)).cpu().numpy()
py = (self.franka_lfinger_pos[i] + quat_apply(self.franka_lfinger_rot[i], to_torch([0, 1, 0], device=self.device) * 0.2)).cpu().numpy()
pz = (self.franka_lfinger_pos[i] + quat_apply(self.franka_lfinger_rot[i], to_torch([0, 0, 1], device=self.device) * 0.2)).cpu().numpy()
p0 = self.franka_lfinger_pos[i].cpu().numpy()
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], px[0], px[1], px[2]], [1, 0, 0])
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], py[0], py[1], py[2]], [0, 1, 0])
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], pz[0], pz[1], pz[2]], [0, 0, 1])
px = (self.franka_rfinger_pos[i] + quat_apply(self.franka_rfinger_rot[i], to_torch([1, 0, 0], device=self.device) * 0.2)).cpu().numpy()
py = (self.franka_rfinger_pos[i] + quat_apply(self.franka_rfinger_rot[i], to_torch([0, 1, 0], device=self.device) * 0.2)).cpu().numpy()
pz = (self.franka_rfinger_pos[i] + quat_apply(self.franka_rfinger_rot[i], to_torch([0, 0, 1], device=self.device) * 0.2)).cpu().numpy()
p0 = self.franka_rfinger_pos[i].cpu().numpy()
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], px[0], px[1], px[2]], [1, 0, 0])
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], py[0], py[1], py[2]], [0, 1, 0])
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], pz[0], pz[1], pz[2]], [0, 0, 1])
#####################################################################
###=========================jit functions=========================###
#####################################################################
@torch.jit.script
def compute_franka_reward(
reset_buf, progress_buf, actions, cabinet_dof_pos,
franka_grasp_pos, drawer_grasp_pos, franka_grasp_rot, drawer_grasp_rot,
franka_lfinger_pos, franka_rfinger_pos,
gripper_forward_axis, drawer_inward_axis, gripper_up_axis, drawer_up_axis,
num_envs, dist_reward_scale, rot_reward_scale, around_handle_reward_scale, open_reward_scale,
finger_dist_reward_scale, action_penalty_scale, distX_offset, max_episode_length
):
# type: (Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, int, float, float, float, float, float, float, float, float) -> Tuple[Tensor, Tensor]
# distance from hand to the drawer
d = torch.norm(franka_grasp_pos - drawer_grasp_pos, p=2, dim=-1)
dist_reward = 1.0 / (1.0 + d ** 2)
dist_reward *= dist_reward
dist_reward = torch.where(d <= 0.02, dist_reward * 2, dist_reward)
axis1 = tf_vector(franka_grasp_rot, gripper_forward_axis)
axis2 = tf_vector(drawer_grasp_rot, drawer_inward_axis)
axis3 = tf_vector(franka_grasp_rot, gripper_up_axis)
axis4 = tf_vector(drawer_grasp_rot, drawer_up_axis)
dot1 = torch.bmm(axis1.view(num_envs, 1, 3), axis2.view(num_envs, 3, 1)).squeeze(-1).squeeze(-1) # alignment of forward axis for gripper
dot2 = torch.bmm(axis3.view(num_envs, 1, 3), axis4.view(num_envs, 3, 1)).squeeze(-1).squeeze(-1) # alignment of up axis for gripper
# reward for matching the orientation of the hand to the drawer (fingers wrapped)
rot_reward = 0.5 * (torch.sign(dot1) * dot1 ** 2 + torch.sign(dot2) * dot2 ** 2)
# bonus if the left finger is above the drawer handle and the right finger is below it
around_handle_reward = torch.zeros_like(rot_reward)
around_handle_reward = torch.where(franka_lfinger_pos[:, 2] > drawer_grasp_pos[:, 2],
torch.where(franka_rfinger_pos[:, 2] < drawer_grasp_pos[:, 2],
around_handle_reward + 0.5, around_handle_reward), around_handle_reward)
# reward for distance of each finger from the drawer
finger_dist_reward = torch.zeros_like(rot_reward)
lfinger_dist = torch.abs(franka_lfinger_pos[:, 2] - drawer_grasp_pos[:, 2])
rfinger_dist = torch.abs(franka_rfinger_pos[:, 2] - drawer_grasp_pos[:, 2])
finger_dist_reward = torch.where(franka_lfinger_pos[:, 2] > drawer_grasp_pos[:, 2],
torch.where(franka_rfinger_pos[:, 2] < drawer_grasp_pos[:, 2],
(0.04 - lfinger_dist) + (0.04 - rfinger_dist), finger_dist_reward), finger_dist_reward)
# regularization on the actions (summed for each environment)
action_penalty = torch.sum(actions ** 2, dim=-1)
# how far the drawer has been pulled open
open_reward = cabinet_dof_pos[:, 3] * around_handle_reward + cabinet_dof_pos[:, 3] # drawer_top_joint
rewards = dist_reward_scale * dist_reward + rot_reward_scale * rot_reward \
+ around_handle_reward_scale * around_handle_reward + open_reward_scale * open_reward \
+ finger_dist_reward_scale * finger_dist_reward - action_penalty_scale * action_penalty
# bonus for opening drawer properly
rewards = torch.where(cabinet_dof_pos[:, 3] > 0.01, rewards + 0.5, rewards)
rewards = torch.where(cabinet_dof_pos[:, 3] > 0.2, rewards + around_handle_reward, rewards)
rewards = torch.where(cabinet_dof_pos[:, 3] > 0.39, rewards + (2.0 * around_handle_reward), rewards)
# penalize poor opening style: a flat -1 reward when either finger reaches past the drawer handle in x
rewards = torch.where(franka_lfinger_pos[:, 0] < drawer_grasp_pos[:, 0] - distX_offset,
torch.ones_like(rewards) * -1, rewards)
rewards = torch.where(franka_rfinger_pos[:, 0] < drawer_grasp_pos[:, 0] - distX_offset,
torch.ones_like(rewards) * -1, rewards)
# reset if drawer is open or max length reached
reset_buf = torch.where(cabinet_dof_pos[:, 3] > 0.39, torch.ones_like(reset_buf), reset_buf)
reset_buf = torch.where(progress_buf >= max_episode_length - 1, torch.ones_like(reset_buf), reset_buf)
return rewards, reset_buf
@torch.jit.script
def compute_grasp_transforms(hand_rot, hand_pos, franka_local_grasp_rot, franka_local_grasp_pos,
drawer_rot, drawer_pos, drawer_local_grasp_rot, drawer_local_grasp_pos
):
# type: (Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor) -> Tuple[Tensor, Tensor, Tensor, Tensor]
global_franka_rot, global_franka_pos = tf_combine(
hand_rot, hand_pos, franka_local_grasp_rot, franka_local_grasp_pos)
global_drawer_rot, global_drawer_pos = tf_combine(
drawer_rot, drawer_pos, drawer_local_grasp_rot, drawer_local_grasp_pos)
return global_franka_rot, global_franka_pos, global_drawer_rot, global_drawer_pos
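# tf_combine composes a parent pose with a local offset: the returned rotation is the quaternion
# product and the returned position is the local offset rotated into the parent frame plus the
# parent position, i.e. the grasp frames above end up expressed in world coordinates.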
| 32,782 | Python | 56.716549 | 217 | 0.613141 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/__init__.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from .ant import Ant
from .anymal import Anymal
from .anymal_terrain import AnymalTerrain
from .ball_balance import BallBalance
from .cartpole import Cartpole
from .factory.factory_task_gears import FactoryTaskGears
from .factory.factory_task_insertion import FactoryTaskInsertion
from .factory.factory_task_nut_bolt_pick import FactoryTaskNutBoltPick
from .factory.factory_task_nut_bolt_place import FactoryTaskNutBoltPlace
from .factory.factory_task_nut_bolt_screw import FactoryTaskNutBoltScrew
from .franka_cabinet import FrankaCabinet
from .franka_cube_stack import FrankaCubeStack
from .humanoid import Humanoid
from .humanoid_amp import HumanoidAMP
from .ingenuity import Ingenuity
from .quadcopter import Quadcopter
from .shadow_hand import ShadowHand
from .allegro_hand import AllegroHand
from .dextreme.allegro_hand_dextreme import AllegroHandDextremeManualDR, AllegroHandDextremeADR
from .trifinger import Trifinger
from .allegro_kuka.allegro_kuka_reorientation import AllegroKukaReorientation
from .allegro_kuka.allegro_kuka_regrasping import AllegroKukaRegrasping
from .allegro_kuka.allegro_kuka_throw import AllegroKukaThrow
from .allegro_kuka.allegro_kuka_two_arms_regrasping import AllegroKukaTwoArmsRegrasping
from .allegro_kuka.allegro_kuka_two_arms_reorientation import AllegroKukaTwoArmsReorientation
from .industreal.industreal_task_pegs_insert import IndustRealTaskPegsInsert
from .industreal.industreal_task_gears_insert import IndustRealTaskGearsInsert
def resolve_allegro_kuka(cfg, *args, **kwargs):
subtask_name: str = cfg["env"]["subtask"]
subtask_map = dict(
reorientation=AllegroKukaReorientation,
throw=AllegroKukaThrow,
regrasping=AllegroKukaRegrasping,
)
if subtask_name not in subtask_map:
print("!!!!!")
raise ValueError(f"Unknown subtask={subtask_name} in {subtask_map}")
return subtask_map[subtask_name](cfg, *args, **kwargs)
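# Usage sketch (hypothetical cfg): given cfg = {"env": {"subtask": "throw"}},
# resolve_allegro_kuka(cfg, ...) returns an AllegroKukaThrow instance; the remaining
# *args/**kwargs are forwarded to the task constructor.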
def resolve_allegro_kuka_two_arms(cfg, *args, **kwargs):
subtask_name: str = cfg["env"]["subtask"]
subtask_map = dict(
reorientation=AllegroKukaTwoArmsReorientation,
regrasping=AllegroKukaTwoArmsRegrasping,
)
if subtask_name not in subtask_map:
raise ValueError(f"Unknown subtask={subtask_name} in {subtask_map}")
return subtask_map[subtask_name](cfg, *args, **kwargs)
# Mappings from strings to environments
isaacgym_task_map = {
"AllegroHand": AllegroHand,
"AllegroKuka": resolve_allegro_kuka,
"AllegroKukaTwoArms": resolve_allegro_kuka_two_arms,
"AllegroHandManualDR": AllegroHandDextremeManualDR,
"AllegroHandADR": AllegroHandDextremeADR,
"Ant": Ant,
"Anymal": Anymal,
"AnymalTerrain": AnymalTerrain,
"BallBalance": BallBalance,
"Cartpole": Cartpole,
"FactoryTaskGears": FactoryTaskGears,
"FactoryTaskInsertion": FactoryTaskInsertion,
"FactoryTaskNutBoltPick": FactoryTaskNutBoltPick,
"FactoryTaskNutBoltPlace": FactoryTaskNutBoltPlace,
"FactoryTaskNutBoltScrew": FactoryTaskNutBoltScrew,
"IndustRealTaskPegsInsert": IndustRealTaskPegsInsert,
"IndustRealTaskGearsInsert": IndustRealTaskGearsInsert,
"FrankaCabinet": FrankaCabinet,
"FrankaCubeStack": FrankaCubeStack,
"Humanoid": Humanoid,
"HumanoidAMP": HumanoidAMP,
"Ingenuity": Ingenuity,
"Quadcopter": Quadcopter,
"ShadowHand": ShadowHand,
"Trifinger": Trifinger,
}
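# Typical consumption (a sketch; the exact call sites live elsewhere in isaacgymenvs):
#   task_class = isaacgym_task_map[cfg.task_name]
#   env = task_class(cfg=task_cfg, rl_device=..., sim_device=..., graphics_device_id=...,
#                    headless=..., virtual_screen_capture=..., force_render=...)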
| 4,960 | Python | 42.13913 | 95 | 0.777218 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/humanoid_amp.py | # Copyright (c) 2021-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from enum import Enum
import numpy as np
import torch
import os
from gym import spaces
from isaacgym import gymapi
from isaacgym import gymtorch
from isaacgymenvs.tasks.amp.humanoid_amp_base import HumanoidAMPBase, dof_to_obs
from isaacgymenvs.tasks.amp.utils_amp import gym_util
from isaacgymenvs.tasks.amp.utils_amp.motion_lib import MotionLib
from isaacgymenvs.utils.torch_jit_utils import quat_mul, to_torch, calc_heading_quat_inv, quat_to_tan_norm, my_quat_rotate
NUM_AMP_OBS_PER_STEP = 13 + 52 + 28 + 12 # [root_h, root_rot, root_vel, root_ang_vel, dof_pos, dof_vel, key_body_pos]
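# = 105 features per step (assuming the 28-DOF AMP humanoid with 4 key bodies): 13 root terms
# (height 1 + tangent-normal rotation 6 + linear velocity 3 + angular velocity 3),
# 52 dof-position features, 28 dof velocities, and 4 key bodies x 3 = 12 positions.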
class HumanoidAMP(HumanoidAMPBase):
class StateInit(Enum):
Default = 0
Start = 1
Random = 2
Hybrid = 3
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.cfg = cfg
state_init = cfg["env"]["stateInit"]
self._state_init = HumanoidAMP.StateInit[state_init]
self._hybrid_init_prob = cfg["env"]["hybridInitProb"]
self._num_amp_obs_steps = cfg["env"]["numAMPObsSteps"]
assert(self._num_amp_obs_steps >= 2)
self._reset_default_env_ids = []
self._reset_ref_env_ids = []
super().__init__(config=self.cfg, rl_device=rl_device, sim_device=sim_device, graphics_device_id=graphics_device_id, headless=headless, virtual_screen_capture=virtual_screen_capture, force_render=force_render)
motion_file = cfg['env'].get('motion_file', "amp_humanoid_backflip.npy")
motion_file_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "../../assets/amp/motions/" + motion_file)
self._load_motion(motion_file_path)
self.num_amp_obs = self._num_amp_obs_steps * NUM_AMP_OBS_PER_STEP
self._amp_obs_space = spaces.Box(np.ones(self.num_amp_obs) * -np.Inf, np.ones(self.num_amp_obs) * np.Inf)
self._amp_obs_buf = torch.zeros((self.num_envs, self._num_amp_obs_steps, NUM_AMP_OBS_PER_STEP), device=self.device, dtype=torch.float)
self._curr_amp_obs_buf = self._amp_obs_buf[:, 0]
self._hist_amp_obs_buf = self._amp_obs_buf[:, 1:]
self._amp_obs_demo_buf = None
return
def post_physics_step(self):
super().post_physics_step()
self._update_hist_amp_obs()
self._compute_amp_observations()
amp_obs_flat = self._amp_obs_buf.view(-1, self.get_num_amp_obs())
self.extras["amp_obs"] = amp_obs_flat
return
def get_num_amp_obs(self):
return self.num_amp_obs
@property
def amp_observation_space(self):
return self._amp_obs_space
def fetch_amp_obs_demo(self, num_samples):
dt = self.dt
motion_ids = self._motion_lib.sample_motions(num_samples)
if (self._amp_obs_demo_buf is None):
self._build_amp_obs_demo_buf(num_samples)
else:
assert(self._amp_obs_demo_buf.shape[0] == num_samples)
motion_times0 = self._motion_lib.sample_time(motion_ids)
motion_ids = np.tile(np.expand_dims(motion_ids, axis=-1), [1, self._num_amp_obs_steps])
motion_times = np.expand_dims(motion_times0, axis=-1)
time_steps = -dt * np.arange(0, self._num_amp_obs_steps)
motion_times = motion_times + time_steps
motion_ids = motion_ids.flatten()
motion_times = motion_times.flatten()
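# Example: with dt = 1/60 and _num_amp_obs_steps = 2, each sampled time t expands to
# [t, t - 1/60] (the current frame plus one step of history).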
root_pos, root_rot, dof_pos, root_vel, root_ang_vel, dof_vel, key_pos \
= self._motion_lib.get_motion_state(motion_ids, motion_times)
root_states = torch.cat([root_pos, root_rot, root_vel, root_ang_vel], dim=-1)
amp_obs_demo = build_amp_observations(root_states, dof_pos, dof_vel, key_pos,
self._local_root_obs)
self._amp_obs_demo_buf[:] = amp_obs_demo.view(self._amp_obs_demo_buf.shape)
amp_obs_demo_flat = self._amp_obs_demo_buf.view(-1, self.get_num_amp_obs())
return amp_obs_demo_flat
def _build_amp_obs_demo_buf(self, num_samples):
self._amp_obs_demo_buf = torch.zeros((num_samples, self._num_amp_obs_steps, NUM_AMP_OBS_PER_STEP), device=self.device, dtype=torch.float)
return
def _load_motion(self, motion_file):
self._motion_lib = MotionLib(motion_file=motion_file,
num_dofs=self.num_dof,
key_body_ids=self._key_body_ids.cpu().numpy(),
device=self.device)
return
def reset_idx(self, env_ids):
super().reset_idx(env_ids)
self._init_amp_obs(env_ids)
return
def _reset_actors(self, env_ids):
if (self._state_init == HumanoidAMP.StateInit.Default):
self._reset_default(env_ids)
elif (self._state_init == HumanoidAMP.StateInit.Start
or self._state_init == HumanoidAMP.StateInit.Random):
self._reset_ref_state_init(env_ids)
elif (self._state_init == HumanoidAMP.StateInit.Hybrid):
self._reset_hybrid_state_init(env_ids)
else:
assert(False), "Unsupported state initialization strategy: {:s}".format(str(self._state_init))
self.progress_buf[env_ids] = 0
self.reset_buf[env_ids] = 0
self._terminate_buf[env_ids] = 0
return
def _reset_default(self, env_ids):
self._dof_pos[env_ids] = self._initial_dof_pos[env_ids]
self._dof_vel[env_ids] = self._initial_dof_vel[env_ids]
env_ids_int32 = env_ids.to(dtype=torch.int32)
self.gym.set_actor_root_state_tensor_indexed(self.sim, gymtorch.unwrap_tensor(self._initial_root_states),
gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))
self.gym.set_dof_state_tensor_indexed(self.sim, gymtorch.unwrap_tensor(self._dof_state),
gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))
self._reset_default_env_ids = env_ids
return
def _reset_ref_state_init(self, env_ids):
num_envs = env_ids.shape[0]
motion_ids = self._motion_lib.sample_motions(num_envs)
if (self._state_init == HumanoidAMP.StateInit.Random
or self._state_init == HumanoidAMP.StateInit.Hybrid):
motion_times = self._motion_lib.sample_time(motion_ids)
elif (self._state_init == HumanoidAMP.StateInit.Start):
motion_times = np.zeros(num_envs)
else:
assert(False), "Unsupported state initialization strategy: {:s}".format(str(self._state_init))
root_pos, root_rot, dof_pos, root_vel, root_ang_vel, dof_vel, key_pos \
= self._motion_lib.get_motion_state(motion_ids, motion_times)
self._set_env_state(env_ids=env_ids,
root_pos=root_pos,
root_rot=root_rot,
dof_pos=dof_pos,
root_vel=root_vel,
root_ang_vel=root_ang_vel,
dof_vel=dof_vel)
self._reset_ref_env_ids = env_ids
self._reset_ref_motion_ids = motion_ids
self._reset_ref_motion_times = motion_times
return
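    # Hybrid init: each env independently draws from a Bernoulli with
    # probability hybridInitProb to choose reference-state initialization,
    # falling back to the default pose otherwise.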
def _reset_hybrid_state_init(self, env_ids):
num_envs = env_ids.shape[0]
ref_probs = to_torch(np.array([self._hybrid_init_prob] * num_envs), device=self.device)
ref_init_mask = torch.bernoulli(ref_probs) == 1.0
ref_reset_ids = env_ids[ref_init_mask]
if (len(ref_reset_ids) > 0):
self._reset_ref_state_init(ref_reset_ids)
default_reset_ids = env_ids[torch.logical_not(ref_init_mask)]
if (len(default_reset_ids) > 0):
self._reset_default(default_reset_ids)
return
def _init_amp_obs(self, env_ids):
self._compute_amp_observations(env_ids)
if (len(self._reset_default_env_ids) > 0):
self._init_amp_obs_default(self._reset_default_env_ids)
if (len(self._reset_ref_env_ids) > 0):
self._init_amp_obs_ref(self._reset_ref_env_ids, self._reset_ref_motion_ids,
self._reset_ref_motion_times)
return
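    # Envs reset to the default pose have no real history, so the current
    # observation is broadcast into every history slot; reference-initialized
    # envs are back-filled with true frames from the motion library instead.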
def _init_amp_obs_default(self, env_ids):
curr_amp_obs = self._curr_amp_obs_buf[env_ids].unsqueeze(-2)
self._hist_amp_obs_buf[env_ids] = curr_amp_obs
return
def _init_amp_obs_ref(self, env_ids, motion_ids, motion_times):
dt = self.dt
motion_ids = np.tile(np.expand_dims(motion_ids, axis=-1), [1, self._num_amp_obs_steps - 1])
motion_times = np.expand_dims(motion_times, axis=-1)
time_steps = -dt * (np.arange(0, self._num_amp_obs_steps - 1) + 1)
motion_times = motion_times + time_steps
motion_ids = motion_ids.flatten()
motion_times = motion_times.flatten()
root_pos, root_rot, dof_pos, root_vel, root_ang_vel, dof_vel, key_pos \
= self._motion_lib.get_motion_state(motion_ids, motion_times)
root_states = torch.cat([root_pos, root_rot, root_vel, root_ang_vel], dim=-1)
amp_obs_demo = build_amp_observations(root_states, dof_pos, dof_vel, key_pos,
self._local_root_obs)
self._hist_amp_obs_buf[env_ids] = amp_obs_demo.view(self._hist_amp_obs_buf[env_ids].shape)
return
def _set_env_state(self, env_ids, root_pos, root_rot, dof_pos, root_vel, root_ang_vel, dof_vel):
self._root_states[env_ids, 0:3] = root_pos
self._root_states[env_ids, 3:7] = root_rot
self._root_states[env_ids, 7:10] = root_vel
self._root_states[env_ids, 10:13] = root_ang_vel
self._dof_pos[env_ids] = dof_pos
self._dof_vel[env_ids] = dof_vel
env_ids_int32 = env_ids.to(dtype=torch.int32)
self.gym.set_actor_root_state_tensor_indexed(self.sim, gymtorch.unwrap_tensor(self._root_states),
gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))
self.gym.set_dof_state_tensor_indexed(self.sim, gymtorch.unwrap_tensor(self._dof_state),
gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))
return
def _update_hist_amp_obs(self, env_ids=None):
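        # Shift every frame one slot toward the older end of the buffer,
        # iterating from the back so no frame is overwritten before it is
        # copied; slot 0 is refreshed afterwards by _compute_amp_observations.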
if (env_ids is None):
for i in reversed(range(self._amp_obs_buf.shape[1] - 1)):
self._amp_obs_buf[:, i + 1] = self._amp_obs_buf[:, i]
else:
for i in reversed(range(self._amp_obs_buf.shape[1] - 1)):
self._amp_obs_buf[env_ids, i + 1] = self._amp_obs_buf[env_ids, i]
return
def _compute_amp_observations(self, env_ids=None):
key_body_pos = self._rigid_body_pos[:, self._key_body_ids, :]
if (env_ids is None):
self._curr_amp_obs_buf[:] = build_amp_observations(self._root_states, self._dof_pos, self._dof_vel, key_body_pos,
self._local_root_obs)
else:
self._curr_amp_obs_buf[env_ids] = build_amp_observations(self._root_states[env_ids], self._dof_pos[env_ids],
self._dof_vel[env_ids], key_body_pos[env_ids],
self._local_root_obs)
return
#####################################################################
###=========================jit functions=========================###
#####################################################################
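# Per-frame AMP observation layout produced below (quat_to_tan_norm is assumed
# to return the 6-D tangent/normal rotation encoding):
#   [root height (1), root rotation (6), local root lin vel (3),
#    local root ang vel (3), dof_obs, dof_vel, local key body positions]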
@torch.jit.script
def build_amp_observations(root_states, dof_pos, dof_vel, key_body_pos, local_root_obs):
# type: (Tensor, Tensor, Tensor, Tensor, bool) -> Tensor
root_pos = root_states[:, 0:3]
root_rot = root_states[:, 3:7]
root_vel = root_states[:, 7:10]
root_ang_vel = root_states[:, 10:13]
root_h = root_pos[:, 2:3]
heading_rot = calc_heading_quat_inv(root_rot)
if (local_root_obs):
root_rot_obs = quat_mul(heading_rot, root_rot)
else:
root_rot_obs = root_rot
root_rot_obs = quat_to_tan_norm(root_rot_obs)
local_root_vel = my_quat_rotate(heading_rot, root_vel)
local_root_ang_vel = my_quat_rotate(heading_rot, root_ang_vel)
root_pos_expand = root_pos.unsqueeze(-2)
local_key_body_pos = key_body_pos - root_pos_expand
heading_rot_expand = heading_rot.unsqueeze(-2)
heading_rot_expand = heading_rot_expand.repeat((1, local_key_body_pos.shape[1], 1))
flat_end_pos = local_key_body_pos.view(local_key_body_pos.shape[0] * local_key_body_pos.shape[1], local_key_body_pos.shape[2])
flat_heading_rot = heading_rot_expand.view(heading_rot_expand.shape[0] * heading_rot_expand.shape[1],
heading_rot_expand.shape[2])
local_end_pos = my_quat_rotate(flat_heading_rot, flat_end_pos)
flat_local_key_pos = local_end_pos.view(local_key_body_pos.shape[0], local_key_body_pos.shape[1] * local_key_body_pos.shape[2])
dof_obs = dof_to_obs(dof_pos)
obs = torch.cat((root_h, root_rot_obs, local_root_vel, local_root_ang_vel, dof_obs, dof_vel, flat_local_key_pos), dim=-1)
return obs | 14,984 | Python | 44 | 217 | 0.602309 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/humanoid.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import numpy as np
import os
import torch
from isaacgym import gymtorch
from isaacgym import gymapi
from isaacgymenvs.utils.torch_jit_utils import scale, unscale, quat_mul, quat_conjugate, quat_from_angle_axis, \
to_torch, get_axis_params, torch_rand_float, tensor_clamp, compute_heading_and_up, compute_rot, normalize_angle
from isaacgymenvs.tasks.base.vec_task import VecTask
class Humanoid(VecTask):
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.cfg = cfg
self.randomization_params = self.cfg["task"]["randomization_params"]
self.randomize = self.cfg["task"]["randomize"]
self.dof_vel_scale = self.cfg["env"]["dofVelocityScale"]
self.angular_velocity_scale = self.cfg["env"].get("angularVelocityScale", 0.1)
self.contact_force_scale = self.cfg["env"]["contactForceScale"]
self.power_scale = self.cfg["env"]["powerScale"]
self.heading_weight = self.cfg["env"]["headingWeight"]
self.up_weight = self.cfg["env"]["upWeight"]
self.actions_cost_scale = self.cfg["env"]["actionsCost"]
self.energy_cost_scale = self.cfg["env"]["energyCost"]
self.joints_at_limit_cost_scale = self.cfg["env"]["jointsAtLimitCost"]
self.death_cost = self.cfg["env"]["deathCost"]
self.termination_height = self.cfg["env"]["terminationHeight"]
self.debug_viz = self.cfg["env"]["enableDebugVis"]
self.plane_static_friction = self.cfg["env"]["plane"]["staticFriction"]
self.plane_dynamic_friction = self.cfg["env"]["plane"]["dynamicFriction"]
self.plane_restitution = self.cfg["env"]["plane"]["restitution"]
self.max_episode_length = self.cfg["env"]["episodeLength"]
self.cfg["env"]["numObservations"] = 108
self.cfg["env"]["numActions"] = 21
super().__init__(config=self.cfg, rl_device=rl_device, sim_device=sim_device, graphics_device_id=graphics_device_id, headless=headless, virtual_screen_capture=virtual_screen_capture, force_render=force_render)
        if self.viewer is not None:
cam_pos = gymapi.Vec3(50.0, 25.0, 2.4)
cam_target = gymapi.Vec3(45.0, 25.0, 0.0)
self.gym.viewer_camera_look_at(self.viewer, None, cam_pos, cam_target)
# get gym GPU state tensors
actor_root_state = self.gym.acquire_actor_root_state_tensor(self.sim)
dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
sensor_tensor = self.gym.acquire_force_sensor_tensor(self.sim)
sensors_per_env = 2
self.vec_sensor_tensor = gymtorch.wrap_tensor(sensor_tensor).view(self.num_envs, sensors_per_env * 6)
dof_force_tensor = self.gym.acquire_dof_force_tensor(self.sim)
self.dof_force_tensor = gymtorch.wrap_tensor(dof_force_tensor).view(self.num_envs, self.num_dof)
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_actor_root_state_tensor(self.sim)
self.root_states = gymtorch.wrap_tensor(actor_root_state)
self.initial_root_states = self.root_states.clone()
self.initial_root_states[:, 7:13] = 0
# create some wrapper tensors for different slices
self.dof_state = gymtorch.wrap_tensor(dof_state_tensor)
self.dof_pos = self.dof_state.view(self.num_envs, self.num_dof, 2)[..., 0]
self.dof_vel = self.dof_state.view(self.num_envs, self.num_dof, 2)[..., 1]
self.initial_dof_pos = torch.zeros_like(self.dof_pos, device=self.device, dtype=torch.float)
zero_tensor = torch.tensor([0.0], device=self.device)
self.initial_dof_pos = torch.where(self.dof_limits_lower > zero_tensor, self.dof_limits_lower,
torch.where(self.dof_limits_upper < zero_tensor, self.dof_limits_upper, self.initial_dof_pos))
self.initial_dof_vel = torch.zeros_like(self.dof_vel, device=self.device, dtype=torch.float)
# initialize some data used later on
self.up_vec = to_torch(get_axis_params(1., self.up_axis_idx), device=self.device).repeat((self.num_envs, 1))
self.heading_vec = to_torch([1, 0, 0], device=self.device).repeat((self.num_envs, 1))
self.inv_start_rot = quat_conjugate(self.start_rotation).repeat((self.num_envs, 1))
self.basis_vec0 = self.heading_vec.clone()
self.basis_vec1 = self.up_vec.clone()
self.targets = to_torch([1000, 0, 0], device=self.device).repeat((self.num_envs, 1))
self.target_dirs = to_torch([1, 0, 0], device=self.device).repeat((self.num_envs, 1))
self.dt = self.cfg["sim"]["dt"]
self.potentials = to_torch([-1000./self.dt], device=self.device).repeat(self.num_envs)
self.prev_potentials = self.potentials.clone()
def create_sim(self):
self.up_axis_idx = 2 # index of up axis: Y=1, Z=2
self.sim = super().create_sim(self.device_id, self.graphics_device_id, self.physics_engine, self.sim_params)
self._create_ground_plane()
self._create_envs(self.num_envs, self.cfg["env"]['envSpacing'], int(np.sqrt(self.num_envs)))
        # If randomizing, apply once immediately on startup, before the first sim step
if self.randomize:
self.apply_randomizations(self.randomization_params)
def _create_ground_plane(self):
plane_params = gymapi.PlaneParams()
plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)
plane_params.static_friction = self.plane_static_friction
plane_params.dynamic_friction = self.plane_dynamic_friction
plane_params.restitution = self.plane_restitution
self.gym.add_ground(self.sim, plane_params)
def _create_envs(self, num_envs, spacing, num_per_row):
lower = gymapi.Vec3(-spacing, -spacing, 0.0)
upper = gymapi.Vec3(spacing, spacing, spacing)
asset_root = os.path.join(os.path.dirname(os.path.abspath(__file__)), '../../assets')
asset_file = "mjcf/nv_humanoid.xml"
if "asset" in self.cfg["env"]:
asset_file = self.cfg["env"]["asset"].get("assetFileName", asset_file)
asset_path = os.path.join(asset_root, asset_file)
asset_root = os.path.dirname(asset_path)
asset_file = os.path.basename(asset_path)
asset_options = gymapi.AssetOptions()
asset_options.angular_damping = 0.01
asset_options.max_angular_velocity = 100.0
# Note - DOF mode is set in the MJCF file and loaded by Isaac Gym
asset_options.default_dof_drive_mode = gymapi.DOF_MODE_NONE
humanoid_asset = self.gym.load_asset(self.sim, asset_root, asset_file, asset_options)
# Note - for this asset we are loading the actuator info from the MJCF
actuator_props = self.gym.get_asset_actuator_properties(humanoid_asset)
motor_efforts = [prop.motor_effort for prop in actuator_props]
# create force sensors at the feet
right_foot_idx = self.gym.find_asset_rigid_body_index(humanoid_asset, "right_foot")
left_foot_idx = self.gym.find_asset_rigid_body_index(humanoid_asset, "left_foot")
sensor_pose = gymapi.Transform()
self.gym.create_asset_force_sensor(humanoid_asset, right_foot_idx, sensor_pose)
self.gym.create_asset_force_sensor(humanoid_asset, left_foot_idx, sensor_pose)
self.max_motor_effort = max(motor_efforts)
self.motor_efforts = to_torch(motor_efforts, device=self.device)
self.torso_index = 0
self.num_bodies = self.gym.get_asset_rigid_body_count(humanoid_asset)
self.num_dof = self.gym.get_asset_dof_count(humanoid_asset)
self.num_joints = self.gym.get_asset_joint_count(humanoid_asset)
start_pose = gymapi.Transform()
start_pose.p = gymapi.Vec3(*get_axis_params(1.34, self.up_axis_idx))
start_pose.r = gymapi.Quat(0.0, 0.0, 0.0, 1.0)
self.start_rotation = torch.tensor([start_pose.r.x, start_pose.r.y, start_pose.r.z, start_pose.r.w], device=self.device)
self.humanoid_handles = []
self.envs = []
self.dof_limits_lower = []
self.dof_limits_upper = []
for i in range(self.num_envs):
# create env instance
env_ptr = self.gym.create_env(
self.sim, lower, upper, num_per_row
)
handle = self.gym.create_actor(env_ptr, humanoid_asset, start_pose, "humanoid", i, 0, 0)
self.gym.enable_actor_dof_force_sensors(env_ptr, handle)
for j in range(self.num_bodies):
self.gym.set_rigid_body_color(
env_ptr, handle, j, gymapi.MESH_VISUAL, gymapi.Vec3(0.97, 0.38, 0.06))
self.envs.append(env_ptr)
self.humanoid_handles.append(handle)
dof_prop = self.gym.get_actor_dof_properties(env_ptr, handle)
for j in range(self.num_dof):
if dof_prop['lower'][j] > dof_prop['upper'][j]:
self.dof_limits_lower.append(dof_prop['upper'][j])
self.dof_limits_upper.append(dof_prop['lower'][j])
else:
self.dof_limits_lower.append(dof_prop['lower'][j])
self.dof_limits_upper.append(dof_prop['upper'][j])
self.dof_limits_lower = to_torch(self.dof_limits_lower, device=self.device)
self.dof_limits_upper = to_torch(self.dof_limits_upper, device=self.device)
self.extremities = to_torch([5, 8], device=self.device, dtype=torch.long)
def compute_reward(self, actions):
self.rew_buf[:], self.reset_buf = compute_humanoid_reward(
self.obs_buf,
self.reset_buf,
self.progress_buf,
self.actions,
self.up_weight,
self.heading_weight,
self.potentials,
self.prev_potentials,
self.actions_cost_scale,
self.energy_cost_scale,
self.joints_at_limit_cost_scale,
self.max_motor_effort,
self.motor_efforts,
self.termination_height,
self.death_cost,
self.max_episode_length
)
def compute_observations(self):
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_force_sensor_tensor(self.sim)
self.gym.refresh_dof_force_tensor(self.sim)
self.obs_buf[:], self.potentials[:], self.prev_potentials[:], self.up_vec[:], self.heading_vec[:] = compute_humanoid_observations(
self.obs_buf, self.root_states, self.targets, self.potentials,
self.inv_start_rot, self.dof_pos, self.dof_vel, self.dof_force_tensor,
self.dof_limits_lower, self.dof_limits_upper, self.dof_vel_scale,
self.vec_sensor_tensor, self.actions, self.dt, self.contact_force_scale, self.angular_velocity_scale,
self.basis_vec0, self.basis_vec1)
def reset_idx(self, env_ids):
# Randomization can happen only at reset time, since it can reset actor positions on GPU
if self.randomize:
self.apply_randomizations(self.randomization_params)
positions = torch_rand_float(-0.2, 0.2, (len(env_ids), self.num_dof), device=self.device)
velocities = torch_rand_float(-0.1, 0.1, (len(env_ids), self.num_dof), device=self.device)
self.dof_pos[env_ids] = tensor_clamp(self.initial_dof_pos[env_ids] + positions, self.dof_limits_lower, self.dof_limits_upper)
self.dof_vel[env_ids] = velocities
env_ids_int32 = env_ids.to(dtype=torch.int32)
self.gym.set_actor_root_state_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.initial_root_states),
gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))
self.gym.set_dof_state_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.dof_state),
gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))
to_target = self.targets[env_ids] - self.initial_root_states[env_ids, 0:3]
to_target[:, self.up_axis_idx] = 0
self.prev_potentials[env_ids] = -torch.norm(to_target, p=2, dim=-1) / self.dt
self.potentials[env_ids] = self.prev_potentials[env_ids].clone()
self.progress_buf[env_ids] = 0
self.reset_buf[env_ids] = 0
def pre_physics_step(self, actions):
self.actions = actions.to(self.device).clone()
forces = self.actions * self.motor_efforts.unsqueeze(0) * self.power_scale
force_tensor = gymtorch.unwrap_tensor(forces)
self.gym.set_dof_actuation_force_tensor(self.sim, force_tensor)
def post_physics_step(self):
self.progress_buf += 1
self.randomize_buf += 1
env_ids = self.reset_buf.nonzero(as_tuple=False).flatten()
if len(env_ids) > 0:
self.reset_idx(env_ids)
self.compute_observations()
self.compute_reward(self.actions)
# debug viz
if self.viewer and self.debug_viz:
self.gym.clear_lines(self.viewer)
points = []
colors = []
for i in range(self.num_envs):
origin = self.gym.get_env_origin(self.envs[i])
pose = self.root_states[:, 0:3][i].cpu().numpy()
glob_pos = gymapi.Vec3(origin.x + pose[0], origin.y + pose[1], origin.z + pose[2])
points.append([glob_pos.x, glob_pos.y, glob_pos.z, glob_pos.x + 4 * self.heading_vec[i, 0].cpu().numpy(),
glob_pos.y + 4 * self.heading_vec[i, 1].cpu().numpy(),
glob_pos.z + 4 * self.heading_vec[i, 2].cpu().numpy()])
colors.append([0.97, 0.1, 0.06])
points.append([glob_pos.x, glob_pos.y, glob_pos.z, glob_pos.x + 4 * self.up_vec[i, 0].cpu().numpy(), glob_pos.y + 4 * self.up_vec[i, 1].cpu().numpy(),
glob_pos.z + 4 * self.up_vec[i, 2].cpu().numpy()])
colors.append([0.05, 0.99, 0.04])
self.gym.add_lines(self.viewer, None, self.num_envs * 2, points, colors)
#####################################################################
###=========================jit functions=========================###
#####################################################################
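# Reward = target progress + alive bonus + upright and heading bonuses, minus
# action, electricity, and joint-limit costs; agents below the termination
# height receive a fixed death cost and are flagged for reset.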
@torch.jit.script
def compute_humanoid_reward(
obs_buf,
reset_buf,
progress_buf,
actions,
up_weight,
heading_weight,
potentials,
prev_potentials,
actions_cost_scale,
energy_cost_scale,
joints_at_limit_cost_scale,
max_motor_effort,
motor_efforts,
termination_height,
death_cost,
max_episode_length
):
# type: (Tensor, Tensor, Tensor, Tensor, float, float, Tensor, Tensor, float, float, float, float, Tensor, float, float, float) -> Tuple[Tensor, Tensor]
# reward from the direction headed
heading_weight_tensor = torch.ones_like(obs_buf[:, 11]) * heading_weight
heading_reward = torch.where(obs_buf[:, 11] > 0.8, heading_weight_tensor, heading_weight * obs_buf[:, 11] / 0.8)
# reward for being upright
up_reward = torch.zeros_like(heading_reward)
up_reward = torch.where(obs_buf[:, 10] > 0.93, up_reward + up_weight, up_reward)
actions_cost = torch.sum(actions ** 2, dim=-1)
# energy cost reward
motor_effort_ratio = motor_efforts / max_motor_effort
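    # Joint-limit penalty: zero up to |q| = 0.98 in scaled units, ramping
    # linearly to full cost at the limit, weighted by each motor's relative effort.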
scaled_cost = joints_at_limit_cost_scale * (torch.abs(obs_buf[:, 12:33]) - 0.98) / 0.02
dof_at_limit_cost = torch.sum((torch.abs(obs_buf[:, 12:33]) > 0.98) * scaled_cost * motor_effort_ratio.unsqueeze(0), dim=-1)
electricity_cost = torch.sum(torch.abs(actions * obs_buf[:, 33:54]) * motor_effort_ratio.unsqueeze(0), dim=-1)
# reward for duration of being alive
alive_reward = torch.ones_like(potentials) * 2.0
progress_reward = potentials - prev_potentials
total_reward = progress_reward + alive_reward + up_reward + heading_reward - \
actions_cost_scale * actions_cost - energy_cost_scale * electricity_cost - dof_at_limit_cost
# adjust reward for fallen agents
total_reward = torch.where(obs_buf[:, 0] < termination_height, torch.ones_like(total_reward) * death_cost, total_reward)
# reset agents
reset = torch.where(obs_buf[:, 0] < termination_height, torch.ones_like(reset_buf), reset_buf)
reset = torch.where(progress_buf >= max_episode_length - 1, torch.ones_like(reset_buf), reset)
return total_reward, reset
@torch.jit.script
def compute_humanoid_observations(obs_buf, root_states, targets, potentials, inv_start_rot, dof_pos, dof_vel,
dof_force, dof_limits_lower, dof_limits_upper, dof_vel_scale,
sensor_force_torques, actions, dt, contact_force_scale, angular_velocity_scale,
basis_vec0, basis_vec1):
# type: (Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, float, Tensor, Tensor, float, float, float, Tensor, Tensor) -> Tuple[Tensor, Tensor, Tensor, Tensor, Tensor]
torso_position = root_states[:, 0:3]
torso_rotation = root_states[:, 3:7]
velocity = root_states[:, 7:10]
ang_velocity = root_states[:, 10:13]
to_target = targets - torso_position
to_target[:, 2] = 0
prev_potentials_new = potentials.clone()
potentials = -torch.norm(to_target, p=2, dim=-1) / dt
torso_quat, up_proj, heading_proj, up_vec, heading_vec = compute_heading_and_up(
torso_rotation, inv_start_rot, to_target, basis_vec0, basis_vec1, 2)
vel_loc, angvel_loc, roll, pitch, yaw, angle_to_target = compute_rot(
torso_quat, velocity, ang_velocity, targets, torso_position)
roll = normalize_angle(roll).unsqueeze(-1)
yaw = normalize_angle(yaw).unsqueeze(-1)
angle_to_target = normalize_angle(angle_to_target).unsqueeze(-1)
dof_pos_scaled = unscale(dof_pos, dof_limits_lower, dof_limits_upper)
# obs_buf shapes: 1, 3, 3, 1, 1, 1, 1, 1, num_dofs (21), num_dofs (21), 6, num_acts (21)
obs = torch.cat((torso_position[:, 2].view(-1, 1), vel_loc, angvel_loc * angular_velocity_scale,
yaw, roll, angle_to_target, up_proj.unsqueeze(-1), heading_proj.unsqueeze(-1),
dof_pos_scaled, dof_vel * dof_vel_scale, dof_force * contact_force_scale,
sensor_force_torques.view(-1, 12) * contact_force_scale, actions), dim=-1)
return obs, potentials, prev_potentials_new, up_vec, heading_vec
| 20,168 | Python | 47.717391 | 217 | 0.631743 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/ant.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import numpy as np
import os
import torch
from isaacgym import gymtorch
from isaacgym import gymapi
from isaacgym.gymtorch import *
from isaacgymenvs.utils.torch_jit_utils import *
from isaacgymenvs.tasks.base.vec_task import VecTask
class Ant(VecTask):
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.cfg = cfg
self.max_episode_length = self.cfg["env"]["episodeLength"]
self.randomization_params = self.cfg["task"]["randomization_params"]
self.randomize = self.cfg["task"]["randomize"]
self.dof_vel_scale = self.cfg["env"]["dofVelocityScale"]
self.contact_force_scale = self.cfg["env"]["contactForceScale"]
self.power_scale = self.cfg["env"]["powerScale"]
self.heading_weight = self.cfg["env"]["headingWeight"]
self.up_weight = self.cfg["env"]["upWeight"]
self.actions_cost_scale = self.cfg["env"]["actionsCost"]
self.energy_cost_scale = self.cfg["env"]["energyCost"]
self.joints_at_limit_cost_scale = self.cfg["env"]["jointsAtLimitCost"]
self.death_cost = self.cfg["env"]["deathCost"]
self.termination_height = self.cfg["env"]["terminationHeight"]
self.debug_viz = self.cfg["env"]["enableDebugVis"]
self.plane_static_friction = self.cfg["env"]["plane"]["staticFriction"]
self.plane_dynamic_friction = self.cfg["env"]["plane"]["dynamicFriction"]
self.plane_restitution = self.cfg["env"]["plane"]["restitution"]
self.cfg["env"]["numObservations"] = 60
self.cfg["env"]["numActions"] = 8
super().__init__(config=self.cfg, rl_device=rl_device, sim_device=sim_device, graphics_device_id=graphics_device_id, headless=headless, virtual_screen_capture=virtual_screen_capture, force_render=force_render)
        if self.viewer is not None:
cam_pos = gymapi.Vec3(50.0, 25.0, 2.4)
cam_target = gymapi.Vec3(45.0, 25.0, 0.0)
self.gym.viewer_camera_look_at(self.viewer, None, cam_pos, cam_target)
# get gym GPU state tensors
actor_root_state = self.gym.acquire_actor_root_state_tensor(self.sim)
dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
sensor_tensor = self.gym.acquire_force_sensor_tensor(self.sim)
sensors_per_env = 4
self.vec_sensor_tensor = gymtorch.wrap_tensor(sensor_tensor).view(self.num_envs, sensors_per_env * 6)
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_actor_root_state_tensor(self.sim)
self.root_states = gymtorch.wrap_tensor(actor_root_state)
self.initial_root_states = self.root_states.clone()
self.initial_root_states[:, 7:13] = 0 # set lin_vel and ang_vel to 0
# create some wrapper tensors for different slices
self.dof_state = gymtorch.wrap_tensor(dof_state_tensor)
self.dof_pos = self.dof_state.view(self.num_envs, self.num_dof, 2)[..., 0]
self.dof_vel = self.dof_state.view(self.num_envs, self.num_dof, 2)[..., 1]
self.initial_dof_pos = torch.zeros_like(self.dof_pos, device=self.device, dtype=torch.float)
zero_tensor = torch.tensor([0.0], device=self.device)
self.initial_dof_pos = torch.where(self.dof_limits_lower > zero_tensor, self.dof_limits_lower,
torch.where(self.dof_limits_upper < zero_tensor, self.dof_limits_upper, self.initial_dof_pos))
self.initial_dof_vel = torch.zeros_like(self.dof_vel, device=self.device, dtype=torch.float)
# initialize some data used later on
self.up_vec = to_torch(get_axis_params(1., self.up_axis_idx), device=self.device).repeat((self.num_envs, 1))
self.heading_vec = to_torch([1, 0, 0], device=self.device).repeat((self.num_envs, 1))
self.inv_start_rot = quat_conjugate(self.start_rotation).repeat((self.num_envs, 1))
self.basis_vec0 = self.heading_vec.clone()
self.basis_vec1 = self.up_vec.clone()
self.targets = to_torch([1000, 0, 0], device=self.device).repeat((self.num_envs, 1))
self.target_dirs = to_torch([1, 0, 0], device=self.device).repeat((self.num_envs, 1))
self.dt = self.cfg["sim"]["dt"]
self.potentials = to_torch([-1000./self.dt], device=self.device).repeat(self.num_envs)
self.prev_potentials = self.potentials.clone()
def create_sim(self):
self.up_axis_idx = 2 # index of up axis: Y=1, Z=2
self.sim = super().create_sim(self.device_id, self.graphics_device_id, self.physics_engine, self.sim_params)
self._create_ground_plane()
print(f'num envs {self.num_envs} env spacing {self.cfg["env"]["envSpacing"]}')
self._create_envs(self.num_envs, self.cfg["env"]['envSpacing'], int(np.sqrt(self.num_envs)))
        # If randomizing, apply once immediately on startup, before the first sim step
if self.randomize:
self.apply_randomizations(self.randomization_params)
def _create_ground_plane(self):
plane_params = gymapi.PlaneParams()
plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)
        plane_params.static_friction = self.plane_static_friction
        plane_params.dynamic_friction = self.plane_dynamic_friction
        plane_params.restitution = self.plane_restitution
        self.gym.add_ground(self.sim, plane_params)
def _create_envs(self, num_envs, spacing, num_per_row):
lower = gymapi.Vec3(-spacing, -spacing, 0.0)
upper = gymapi.Vec3(spacing, spacing, spacing)
asset_root = os.path.join(os.path.dirname(os.path.abspath(__file__)), '../../assets')
asset_file = "mjcf/nv_ant.xml"
if "asset" in self.cfg["env"]:
asset_file = self.cfg["env"]["asset"].get("assetFileName", asset_file)
asset_path = os.path.join(asset_root, asset_file)
asset_root = os.path.dirname(asset_path)
asset_file = os.path.basename(asset_path)
asset_options = gymapi.AssetOptions()
# Note - DOF mode is set in the MJCF file and loaded by Isaac Gym
asset_options.default_dof_drive_mode = gymapi.DOF_MODE_NONE
asset_options.angular_damping = 0.0
ant_asset = self.gym.load_asset(self.sim, asset_root, asset_file, asset_options)
self.num_dof = self.gym.get_asset_dof_count(ant_asset)
self.num_bodies = self.gym.get_asset_rigid_body_count(ant_asset)
# Note - for this asset we are loading the actuator info from the MJCF
actuator_props = self.gym.get_asset_actuator_properties(ant_asset)
motor_efforts = [prop.motor_effort for prop in actuator_props]
self.joint_gears = to_torch(motor_efforts, device=self.device)
start_pose = gymapi.Transform()
start_pose.p = gymapi.Vec3(*get_axis_params(0.44, self.up_axis_idx))
self.start_rotation = torch.tensor([start_pose.r.x, start_pose.r.y, start_pose.r.z, start_pose.r.w], device=self.device)
self.torso_index = 0
body_names = [self.gym.get_asset_rigid_body_name(ant_asset, i) for i in range(self.num_bodies)]
extremity_names = [s for s in body_names if "foot" in s]
self.extremities_index = torch.zeros(len(extremity_names), dtype=torch.long, device=self.device)
# create force sensors attached to the "feet"
extremity_indices = [self.gym.find_asset_rigid_body_index(ant_asset, name) for name in extremity_names]
sensor_pose = gymapi.Transform()
for body_idx in extremity_indices:
self.gym.create_asset_force_sensor(ant_asset, body_idx, sensor_pose)
self.ant_handles = []
self.envs = []
self.dof_limits_lower = []
self.dof_limits_upper = []
for i in range(self.num_envs):
# create env instance
env_ptr = self.gym.create_env(
self.sim, lower, upper, num_per_row
)
ant_handle = self.gym.create_actor(env_ptr, ant_asset, start_pose, "ant", i, 1, 0)
for j in range(self.num_bodies):
self.gym.set_rigid_body_color(
env_ptr, ant_handle, j, gymapi.MESH_VISUAL, gymapi.Vec3(0.97, 0.38, 0.06))
self.envs.append(env_ptr)
self.ant_handles.append(ant_handle)
dof_prop = self.gym.get_actor_dof_properties(env_ptr, ant_handle)
for j in range(self.num_dof):
if dof_prop['lower'][j] > dof_prop['upper'][j]:
self.dof_limits_lower.append(dof_prop['upper'][j])
self.dof_limits_upper.append(dof_prop['lower'][j])
else:
self.dof_limits_lower.append(dof_prop['lower'][j])
self.dof_limits_upper.append(dof_prop['upper'][j])
self.dof_limits_lower = to_torch(self.dof_limits_lower, device=self.device)
self.dof_limits_upper = to_torch(self.dof_limits_upper, device=self.device)
for i in range(len(extremity_names)):
self.extremities_index[i] = self.gym.find_actor_rigid_body_handle(self.envs[0], self.ant_handles[0], extremity_names[i])
def compute_reward(self, actions):
self.rew_buf[:], self.reset_buf[:] = compute_ant_reward(
self.obs_buf,
self.reset_buf,
self.progress_buf,
self.actions,
self.up_weight,
self.heading_weight,
self.potentials,
self.prev_potentials,
self.actions_cost_scale,
self.energy_cost_scale,
self.joints_at_limit_cost_scale,
self.termination_height,
self.death_cost,
self.max_episode_length
)
def compute_observations(self):
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_force_sensor_tensor(self.sim)
self.obs_buf[:], self.potentials[:], self.prev_potentials[:], self.up_vec[:], self.heading_vec[:] = compute_ant_observations(
self.obs_buf, self.root_states, self.targets, self.potentials,
self.inv_start_rot, self.dof_pos, self.dof_vel,
self.dof_limits_lower, self.dof_limits_upper, self.dof_vel_scale,
self.vec_sensor_tensor, self.actions, self.dt, self.contact_force_scale,
self.basis_vec0, self.basis_vec1, self.up_axis_idx)
# Required for PBT training
def compute_true_objective(self):
velocity = self.root_states[:, 7:10]
# We optimize for the maximum velocity along the x-axis (forward)
self.extras['true_objective'] = velocity[:, 0].squeeze()
def reset_idx(self, env_ids):
# Randomization can happen only at reset time, since it can reset actor positions on GPU
if self.randomize:
self.apply_randomizations(self.randomization_params)
positions = torch_rand_float(-0.2, 0.2, (len(env_ids), self.num_dof), device=self.device)
velocities = torch_rand_float(-0.1, 0.1, (len(env_ids), self.num_dof), device=self.device)
self.dof_pos[env_ids] = tensor_clamp(self.initial_dof_pos[env_ids] + positions, self.dof_limits_lower, self.dof_limits_upper)
self.dof_vel[env_ids] = velocities
env_ids_int32 = env_ids.to(dtype=torch.int32)
self.gym.set_actor_root_state_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.initial_root_states),
gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))
self.gym.set_dof_state_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.dof_state),
gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))
to_target = self.targets[env_ids] - self.initial_root_states[env_ids, 0:3]
to_target[:, 2] = 0.0
self.prev_potentials[env_ids] = -torch.norm(to_target, p=2, dim=-1) / self.dt
self.potentials[env_ids] = self.prev_potentials[env_ids].clone()
self.progress_buf[env_ids] = 0
self.reset_buf[env_ids] = 0
def pre_physics_step(self, actions):
self.actions = actions.clone().to(self.device)
forces = self.actions * self.joint_gears * self.power_scale
force_tensor = gymtorch.unwrap_tensor(forces)
self.gym.set_dof_actuation_force_tensor(self.sim, force_tensor)
def post_physics_step(self):
self.progress_buf += 1
self.randomize_buf += 1
env_ids = self.reset_buf.nonzero(as_tuple=False).flatten()
if len(env_ids) > 0:
self.reset_idx(env_ids)
self.compute_observations()
self.compute_reward(self.actions)
self.compute_true_objective()
# debug viz
if self.viewer and self.debug_viz:
self.gym.clear_lines(self.viewer)
self.gym.refresh_actor_root_state_tensor(self.sim)
points = []
colors = []
for i in range(self.num_envs):
origin = self.gym.get_env_origin(self.envs[i])
pose = self.root_states[:, 0:3][i].cpu().numpy()
glob_pos = gymapi.Vec3(origin.x + pose[0], origin.y + pose[1], origin.z + pose[2])
points.append([glob_pos.x, glob_pos.y, glob_pos.z, glob_pos.x + 4 * self.heading_vec[i, 0].cpu().numpy(),
glob_pos.y + 4 * self.heading_vec[i, 1].cpu().numpy(),
glob_pos.z + 4 * self.heading_vec[i, 2].cpu().numpy()])
colors.append([0.97, 0.1, 0.06])
points.append([glob_pos.x, glob_pos.y, glob_pos.z, glob_pos.x + 4 * self.up_vec[i, 0].cpu().numpy(), glob_pos.y + 4 * self.up_vec[i, 1].cpu().numpy(),
glob_pos.z + 4 * self.up_vec[i, 2].cpu().numpy()])
colors.append([0.05, 0.99, 0.04])
self.gym.add_lines(self.viewer, None, self.num_envs * 2, points, colors)
#####################################################################
###=========================jit functions=========================###
#####################################################################
@torch.jit.script
def compute_ant_reward(
obs_buf,
reset_buf,
progress_buf,
actions,
up_weight,
heading_weight,
potentials,
prev_potentials,
actions_cost_scale,
energy_cost_scale,
joints_at_limit_cost_scale,
termination_height,
death_cost,
max_episode_length
):
# type: (Tensor, Tensor, Tensor, Tensor, float, float, Tensor, Tensor, float, float, float, float, float, float) -> Tuple[Tensor, Tensor]
# reward from direction headed
heading_weight_tensor = torch.ones_like(obs_buf[:, 11]) * heading_weight
heading_reward = torch.where(obs_buf[:, 11] > 0.8, heading_weight_tensor, heading_weight * obs_buf[:, 11] / 0.8)
# aligning up axis of ant and environment
up_reward = torch.zeros_like(heading_reward)
up_reward = torch.where(obs_buf[:, 10] > 0.93, up_reward + up_weight, up_reward)
# energy penalty for movement
actions_cost = torch.sum(actions ** 2, dim=-1)
electricity_cost = torch.sum(torch.abs(actions * obs_buf[:, 20:28]), dim=-1)
dof_at_limit_cost = torch.sum(obs_buf[:, 12:20] > 0.99, dim=-1)
# reward for duration of staying alive
alive_reward = torch.ones_like(potentials) * 0.5
progress_reward = potentials - prev_potentials
total_reward = progress_reward + alive_reward + up_reward + heading_reward - \
actions_cost_scale * actions_cost - energy_cost_scale * electricity_cost - dof_at_limit_cost * joints_at_limit_cost_scale
# adjust reward for fallen agents
total_reward = torch.where(obs_buf[:, 0] < termination_height, torch.ones_like(total_reward) * death_cost, total_reward)
# reset agents
reset = torch.where(obs_buf[:, 0] < termination_height, torch.ones_like(reset_buf), reset_buf)
reset = torch.where(progress_buf >= max_episode_length - 1, torch.ones_like(reset_buf), reset)
return total_reward, reset
@torch.jit.script
def compute_ant_observations(obs_buf, root_states, targets, potentials,
inv_start_rot, dof_pos, dof_vel,
dof_limits_lower, dof_limits_upper, dof_vel_scale,
sensor_force_torques, actions, dt, contact_force_scale,
basis_vec0, basis_vec1, up_axis_idx):
# type: (Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, float, Tensor, Tensor, float, float, Tensor, Tensor, int) -> Tuple[Tensor, Tensor, Tensor, Tensor, Tensor]
torso_position = root_states[:, 0:3]
torso_rotation = root_states[:, 3:7]
velocity = root_states[:, 7:10]
ang_velocity = root_states[:, 10:13]
to_target = targets - torso_position
to_target[:, 2] = 0.0
prev_potentials_new = potentials.clone()
potentials = -torch.norm(to_target, p=2, dim=-1) / dt
torso_quat, up_proj, heading_proj, up_vec, heading_vec = compute_heading_and_up(
torso_rotation, inv_start_rot, to_target, basis_vec0, basis_vec1, 2)
vel_loc, angvel_loc, roll, pitch, yaw, angle_to_target = compute_rot(
torso_quat, velocity, ang_velocity, targets, torso_position)
dof_pos_scaled = unscale(dof_pos, dof_limits_lower, dof_limits_upper)
# obs_buf shapes: 1, 3, 3, 1, 1, 1, 1, 1, num_dofs(8), num_dofs(8), 24, num_dofs(8)
obs = torch.cat((torso_position[:, up_axis_idx].view(-1, 1), vel_loc, angvel_loc,
yaw.unsqueeze(-1), roll.unsqueeze(-1), angle_to_target.unsqueeze(-1),
up_proj.unsqueeze(-1), heading_proj.unsqueeze(-1), dof_pos_scaled,
dof_vel * dof_vel_scale, sensor_force_torques.view(-1, 24) * contact_force_scale,
actions), dim=-1)
return obs, potentials, prev_potentials_new, up_vec, heading_vec | 19,545 | Python | 46.906863 | 217 | 0.626349 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/cartpole.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import numpy as np
import os
import torch
from isaacgym import gymutil, gymtorch, gymapi
from .base.vec_task import VecTask
class Cartpole(VecTask):
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.cfg = cfg
self.reset_dist = self.cfg["env"]["resetDist"]
self.max_push_effort = self.cfg["env"]["maxEffort"]
self.max_episode_length = 500
self.cfg["env"]["numObservations"] = 4
self.cfg["env"]["numActions"] = 1
super().__init__(config=self.cfg, rl_device=rl_device, sim_device=sim_device, graphics_device_id=graphics_device_id, headless=headless, virtual_screen_capture=virtual_screen_capture, force_render=force_render)
dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
self.dof_state = gymtorch.wrap_tensor(dof_state_tensor)
self.dof_pos = self.dof_state.view(self.num_envs, self.num_dof, 2)[..., 0]
self.dof_vel = self.dof_state.view(self.num_envs, self.num_dof, 2)[..., 1]
def create_sim(self):
# set the up axis to be z-up given that assets are y-up by default
self.up_axis = self.cfg["sim"]["up_axis"]
self.sim = super().create_sim(self.device_id, self.graphics_device_id, self.physics_engine, self.sim_params)
self._create_ground_plane()
self._create_envs(self.num_envs, self.cfg["env"]['envSpacing'], int(np.sqrt(self.num_envs)))
def _create_ground_plane(self):
plane_params = gymapi.PlaneParams()
# set the normal force to be z dimension
plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0) if self.up_axis == 'z' else gymapi.Vec3(0.0, 1.0, 0.0)
self.gym.add_ground(self.sim, plane_params)
def _create_envs(self, num_envs, spacing, num_per_row):
# define plane on which environments are initialized
lower = gymapi.Vec3(0.5 * -spacing, -spacing, 0.0) if self.up_axis == 'z' else gymapi.Vec3(0.5 * -spacing, 0.0, -spacing)
upper = gymapi.Vec3(0.5 * spacing, spacing, spacing)
asset_root = os.path.join(os.path.dirname(os.path.abspath(__file__)), "../../assets")
asset_file = "urdf/cartpole.urdf"
if "asset" in self.cfg["env"]:
asset_root = os.path.join(os.path.dirname(os.path.abspath(__file__)), self.cfg["env"]["asset"].get("assetRoot", asset_root))
asset_file = self.cfg["env"]["asset"].get("assetFileName", asset_file)
asset_path = os.path.join(asset_root, asset_file)
asset_root = os.path.dirname(asset_path)
asset_file = os.path.basename(asset_path)
asset_options = gymapi.AssetOptions()
asset_options.fix_base_link = True
cartpole_asset = self.gym.load_asset(self.sim, asset_root, asset_file, asset_options)
self.num_dof = self.gym.get_asset_dof_count(cartpole_asset)
pose = gymapi.Transform()
if self.up_axis == 'z':
pose.p.z = 2.0
# asset is rotated z-up by default, no additional rotations needed
pose.r = gymapi.Quat(0.0, 0.0, 0.0, 1.0)
else:
pose.p.y = 2.0
pose.r = gymapi.Quat(-np.sqrt(2)/2, 0.0, 0.0, np.sqrt(2)/2)
self.cartpole_handles = []
self.envs = []
for i in range(self.num_envs):
# create env instance
env_ptr = self.gym.create_env(
self.sim, lower, upper, num_per_row
)
cartpole_handle = self.gym.create_actor(env_ptr, cartpole_asset, pose, "cartpole", i, 1, 0)
dof_props = self.gym.get_actor_dof_properties(env_ptr, cartpole_handle)
dof_props['driveMode'][0] = gymapi.DOF_MODE_EFFORT
dof_props['driveMode'][1] = gymapi.DOF_MODE_NONE
dof_props['stiffness'][:] = 0.0
dof_props['damping'][:] = 0.0
self.gym.set_actor_dof_properties(env_ptr, cartpole_handle, dof_props)
self.envs.append(env_ptr)
self.cartpole_handles.append(cartpole_handle)
def compute_reward(self):
# retrieve environment observations from buffer
pole_angle = self.obs_buf[:, 2]
pole_vel = self.obs_buf[:, 3]
cart_vel = self.obs_buf[:, 1]
cart_pos = self.obs_buf[:, 0]
self.rew_buf[:], self.reset_buf[:] = compute_cartpole_reward(
pole_angle, pole_vel, cart_vel, cart_pos,
self.reset_dist, self.reset_buf, self.progress_buf, self.max_episode_length
)
def compute_observations(self, env_ids=None):
if env_ids is None:
env_ids = np.arange(self.num_envs)
self.gym.refresh_dof_state_tensor(self.sim)
self.obs_buf[env_ids, 0] = self.dof_pos[env_ids, 0].squeeze()
self.obs_buf[env_ids, 1] = self.dof_vel[env_ids, 0].squeeze()
self.obs_buf[env_ids, 2] = self.dof_pos[env_ids, 1].squeeze()
self.obs_buf[env_ids, 3] = self.dof_vel[env_ids, 1].squeeze()
return self.obs_buf
def reset_idx(self, env_ids):
positions = 0.2 * (torch.rand((len(env_ids), self.num_dof), device=self.device) - 0.5)
velocities = 0.5 * (torch.rand((len(env_ids), self.num_dof), device=self.device) - 0.5)
self.dof_pos[env_ids, :] = positions[:]
self.dof_vel[env_ids, :] = velocities[:]
env_ids_int32 = env_ids.to(dtype=torch.int32)
self.gym.set_dof_state_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.dof_state),
gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))
self.reset_buf[env_ids] = 0
self.progress_buf[env_ids] = 0
def pre_physics_step(self, actions):
actions_tensor = torch.zeros(self.num_envs * self.num_dof, device=self.device, dtype=torch.float)
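        # Only every num_dof-th entry (the cart's slider DOF) receives an
        # actuation force; the pole joint stays passive.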
actions_tensor[::self.num_dof] = actions.to(self.device).squeeze() * self.max_push_effort
forces = gymtorch.unwrap_tensor(actions_tensor)
self.gym.set_dof_actuation_force_tensor(self.sim, forces)
def post_physics_step(self):
self.progress_buf += 1
env_ids = self.reset_buf.nonzero(as_tuple=False).squeeze(-1)
if len(env_ids) > 0:
self.reset_idx(env_ids)
self.compute_observations()
self.compute_reward()
#####################################################################
###=========================jit functions=========================###
#####################################################################
@torch.jit.script
def compute_cartpole_reward(pole_angle, pole_vel, cart_vel, cart_pos,
reset_dist, reset_buf, progress_buf, max_episode_length):
# type: (Tensor, Tensor, Tensor, Tensor, float, Tensor, Tensor, float) -> Tuple[Tensor, Tensor]
# reward is combo of angle deviated from upright, velocity of cart, and velocity of pole moving
reward = 1.0 - pole_angle * pole_angle - 0.01 * torch.abs(cart_vel) - 0.005 * torch.abs(pole_vel)
# adjust reward for reset agents
reward = torch.where(torch.abs(cart_pos) > reset_dist, torch.ones_like(reward) * -2.0, reward)
reward = torch.where(torch.abs(pole_angle) > np.pi / 2, torch.ones_like(reward) * -2.0, reward)
reset = torch.where(torch.abs(cart_pos) > reset_dist, torch.ones_like(reset_buf), reset_buf)
reset = torch.where(torch.abs(pole_angle) > np.pi / 2, torch.ones_like(reset_buf), reset)
reset = torch.where(progress_buf >= max_episode_length - 1, torch.ones_like(reset_buf), reset)
return reward, reset
| 9,134 | Python | 45.370558 | 217 | 0.629297 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/franka_cube_stack.py | # Copyright (c) 2021-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import numpy as np
import os
import torch
from isaacgym import gymtorch
from isaacgym import gymapi
from isaacgymenvs.utils.torch_jit_utils import quat_mul, to_torch, tensor_clamp
from isaacgymenvs.tasks.base.vec_task import VecTask
@torch.jit.script
def axisangle2quat(vec, eps=1e-6):
"""
Converts scaled axis-angle to quat.
Args:
vec (tensor): (..., 3) tensor where final dim is (ax,ay,az) axis-angle exponential coordinates
eps (float): Stability value below which small values will be mapped to 0
Returns:
tensor: (..., 4) tensor where final dim is (x,y,z,w) vec4 float quaternion
"""
# type: (Tensor, float) -> Tensor
# store input shape and reshape
input_shape = vec.shape[:-1]
vec = vec.reshape(-1, 3)
# Grab angle
angle = torch.norm(vec, dim=-1, keepdim=True)
# Create return array
quat = torch.zeros(torch.prod(torch.tensor(input_shape)), 4, device=vec.device)
quat[:, 3] = 1.0
    # Grab indexes where angle is not zero and convert the input to its quaternion form
idx = angle.reshape(-1) > eps
quat[idx, :] = torch.cat([
vec[idx, :] * torch.sin(angle[idx, :] / 2.0) / angle[idx, :],
torch.cos(angle[idx, :] / 2.0)
], dim=-1)
# Reshape and return output
quat = quat.reshape(list(input_shape) + [4, ])
return quat
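# For instance, a rotation of pi/2 about z:
#   axisangle2quat(torch.tensor([0.0, 0.0, np.pi / 2]))
# returns approximately (0.0, 0.0, 0.7071, 0.7071) in (x, y, z, w) order.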
class FrankaCubeStack(VecTask):
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.cfg = cfg
self.max_episode_length = self.cfg["env"]["episodeLength"]
self.action_scale = self.cfg["env"]["actionScale"]
self.start_position_noise = self.cfg["env"]["startPositionNoise"]
self.start_rotation_noise = self.cfg["env"]["startRotationNoise"]
self.franka_position_noise = self.cfg["env"]["frankaPositionNoise"]
self.franka_rotation_noise = self.cfg["env"]["frankaRotationNoise"]
self.franka_dof_noise = self.cfg["env"]["frankaDofNoise"]
self.aggregate_mode = self.cfg["env"]["aggregateMode"]
# Create dicts to pass to reward function
self.reward_settings = {
"r_dist_scale": self.cfg["env"]["distRewardScale"],
"r_lift_scale": self.cfg["env"]["liftRewardScale"],
"r_align_scale": self.cfg["env"]["alignRewardScale"],
"r_stack_scale": self.cfg["env"]["stackRewardScale"],
}
# Controller type
self.control_type = self.cfg["env"]["controlType"]
assert self.control_type in {"osc", "joint_tor"},\
"Invalid control type specified. Must be one of: {osc, joint_tor}"
# dimensions
# obs include: cubeA_pose (7) + cubeB_pos (3) + eef_pose (7) + q_gripper (2)
self.cfg["env"]["numObservations"] = 19 if self.control_type == "osc" else 26
# actions include: delta EEF if OSC (6) or joint torques (7) + bool gripper (1)
self.cfg["env"]["numActions"] = 7 if self.control_type == "osc" else 8
# Values to be filled in at runtime
self.states = {} # will be dict filled with relevant states to use for reward calculation
self.handles = {} # will be dict mapping names to relevant sim handles
self.num_dofs = None # Total number of DOFs per env
self.actions = None # Current actions to be deployed
self._init_cubeA_state = None # Initial state of cubeA for the current env
self._init_cubeB_state = None # Initial state of cubeB for the current env
self._cubeA_state = None # Current state of cubeA for the current env
self._cubeB_state = None # Current state of cubeB for the current env
self._cubeA_id = None # Actor ID corresponding to cubeA for a given env
self._cubeB_id = None # Actor ID corresponding to cubeB for a given env
# Tensor placeholders
self._root_state = None # State of root body (n_envs, 13)
self._dof_state = None # State of all joints (n_envs, n_dof)
self._q = None # Joint positions (n_envs, n_dof)
self._qd = None # Joint velocities (n_envs, n_dof)
self._rigid_body_state = None # State of all rigid bodies (n_envs, n_bodies, 13)
self._contact_forces = None # Contact forces in sim
self._eef_state = None # end effector state (at grasping point)
self._eef_lf_state = None # end effector state (at left fingertip)
        self._eef_rf_state = None  # end effector state (at right fingertip)
self._j_eef = None # Jacobian for end effector
self._mm = None # Mass matrix
self._arm_control = None # Tensor buffer for controlling arm
self._gripper_control = None # Tensor buffer for controlling gripper
self._pos_control = None # Position actions
self._effort_control = None # Torque actions
self._franka_effort_limits = None # Actuator effort limits for franka
self._global_indices = None # Unique indices corresponding to all envs in flattened array
self.debug_viz = self.cfg["env"]["enableDebugVis"]
self.up_axis = "z"
self.up_axis_idx = 2
super().__init__(config=self.cfg, rl_device=rl_device, sim_device=sim_device, graphics_device_id=graphics_device_id, headless=headless, virtual_screen_capture=virtual_screen_capture, force_render=force_render)
# Franka defaults
self.franka_default_dof_pos = to_torch(
[0, 0.1963, 0, -2.6180, 0, 2.9416, 0.7854, 0.035, 0.035], device=self.device
)
# OSC Gains
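        # kd = 2 * sqrt(kp) gives critical damping for a unit-mass system; the
        # kp_null/kd_null pairs are the null-space posture gains used by the
        # OSC controller.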
self.kp = to_torch([150.] * 6, device=self.device)
self.kd = 2 * torch.sqrt(self.kp)
self.kp_null = to_torch([10.] * 7, device=self.device)
self.kd_null = 2 * torch.sqrt(self.kp_null)
#self.cmd_limit = None # filled in later
# Set control limits
self.cmd_limit = to_torch([0.1, 0.1, 0.1, 0.5, 0.5, 0.5], device=self.device).unsqueeze(0) if \
self.control_type == "osc" else self._franka_effort_limits[:7].unsqueeze(0)
# Reset all environments
self.reset_idx(torch.arange(self.num_envs, device=self.device))
# Refresh tensors
self._refresh()
def create_sim(self):
self.sim_params.up_axis = gymapi.UP_AXIS_Z
self.sim_params.gravity.x = 0
self.sim_params.gravity.y = 0
self.sim_params.gravity.z = -9.81
self.sim = super().create_sim(
self.device_id, self.graphics_device_id, self.physics_engine, self.sim_params)
self._create_ground_plane()
self._create_envs(self.num_envs, self.cfg["env"]['envSpacing'], int(np.sqrt(self.num_envs)))
def _create_ground_plane(self):
plane_params = gymapi.PlaneParams()
plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)
self.gym.add_ground(self.sim, plane_params)
def _create_envs(self, num_envs, spacing, num_per_row):
lower = gymapi.Vec3(-spacing, -spacing, 0.0)
upper = gymapi.Vec3(spacing, spacing, spacing)
asset_root = os.path.join(os.path.dirname(os.path.abspath(__file__)), "../../assets")
franka_asset_file = "urdf/franka_description/robots/franka_panda_gripper.urdf"
if "asset" in self.cfg["env"]:
asset_root = os.path.join(os.path.dirname(os.path.abspath(__file__)), self.cfg["env"]["asset"].get("assetRoot", asset_root))
franka_asset_file = self.cfg["env"]["asset"].get("assetFileNameFranka", franka_asset_file)
# load franka asset
asset_options = gymapi.AssetOptions()
asset_options.flip_visual_attachments = True
asset_options.fix_base_link = True
asset_options.collapse_fixed_joints = False
asset_options.disable_gravity = True
asset_options.thickness = 0.001
asset_options.default_dof_drive_mode = gymapi.DOF_MODE_EFFORT
asset_options.use_mesh_materials = True
franka_asset = self.gym.load_asset(self.sim, asset_root, franka_asset_file, asset_options)
franka_dof_stiffness = to_torch([0, 0, 0, 0, 0, 0, 0, 5000., 5000.], dtype=torch.float, device=self.device)
franka_dof_damping = to_torch([0, 0, 0, 0, 0, 0, 0, 1.0e2, 1.0e2], dtype=torch.float, device=self.device)
# Create table asset
table_pos = [0.0, 0.0, 1.0]
table_thickness = 0.05
table_opts = gymapi.AssetOptions()
table_opts.fix_base_link = True
table_asset = self.gym.create_box(self.sim, *[1.2, 1.2, table_thickness], table_opts)
# Create table stand asset
table_stand_height = 0.1
table_stand_pos = [-0.5, 0.0, 1.0 + table_thickness / 2 + table_stand_height / 2]
table_stand_opts = gymapi.AssetOptions()
table_stand_opts.fix_base_link = True
        table_stand_asset = self.gym.create_box(self.sim, *[0.2, 0.2, table_stand_height], table_stand_opts)
self.cubeA_size = 0.050
self.cubeB_size = 0.070
# Create cubeA asset
cubeA_opts = gymapi.AssetOptions()
cubeA_asset = self.gym.create_box(self.sim, *([self.cubeA_size] * 3), cubeA_opts)
cubeA_color = gymapi.Vec3(0.6, 0.1, 0.0)
# Create cubeB asset
cubeB_opts = gymapi.AssetOptions()
cubeB_asset = self.gym.create_box(self.sim, *([self.cubeB_size] * 3), cubeB_opts)
cubeB_color = gymapi.Vec3(0.0, 0.4, 0.1)
self.num_franka_bodies = self.gym.get_asset_rigid_body_count(franka_asset)
self.num_franka_dofs = self.gym.get_asset_dof_count(franka_asset)
print("num franka bodies: ", self.num_franka_bodies)
print("num franka dofs: ", self.num_franka_dofs)
# set franka dof properties
franka_dof_props = self.gym.get_asset_dof_properties(franka_asset)
self.franka_dof_lower_limits = []
self.franka_dof_upper_limits = []
self._franka_effort_limits = []
for i in range(self.num_franka_dofs):
franka_dof_props['driveMode'][i] = gymapi.DOF_MODE_POS if i > 6 else gymapi.DOF_MODE_EFFORT
if self.physics_engine == gymapi.SIM_PHYSX:
franka_dof_props['stiffness'][i] = franka_dof_stiffness[i]
franka_dof_props['damping'][i] = franka_dof_damping[i]
else:
franka_dof_props['stiffness'][i] = 7000.0
franka_dof_props['damping'][i] = 50.0
self.franka_dof_lower_limits.append(franka_dof_props['lower'][i])
self.franka_dof_upper_limits.append(franka_dof_props['upper'][i])
self._franka_effort_limits.append(franka_dof_props['effort'][i])
self.franka_dof_lower_limits = to_torch(self.franka_dof_lower_limits, device=self.device)
self.franka_dof_upper_limits = to_torch(self.franka_dof_upper_limits, device=self.device)
self._franka_effort_limits = to_torch(self._franka_effort_limits, device=self.device)
self.franka_dof_speed_scales = torch.ones_like(self.franka_dof_lower_limits)
self.franka_dof_speed_scales[[7, 8]] = 0.1
franka_dof_props['effort'][7] = 200
franka_dof_props['effort'][8] = 200
# Define start pose for franka
franka_start_pose = gymapi.Transform()
franka_start_pose.p = gymapi.Vec3(-0.45, 0.0, 1.0 + table_thickness / 2 + table_stand_height)
franka_start_pose.r = gymapi.Quat(0.0, 0.0, 0.0, 1.0)
# Define start pose for table
table_start_pose = gymapi.Transform()
table_start_pose.p = gymapi.Vec3(*table_pos)
table_start_pose.r = gymapi.Quat(0.0, 0.0, 0.0, 1.0)
self._table_surface_pos = np.array(table_pos) + np.array([0, 0, table_thickness / 2])
self.reward_settings["table_height"] = self._table_surface_pos[2]
# Define start pose for table stand
table_stand_start_pose = gymapi.Transform()
table_stand_start_pose.p = gymapi.Vec3(*table_stand_pos)
table_stand_start_pose.r = gymapi.Quat(0.0, 0.0, 0.0, 1.0)
        # Define start pose for cubes (doesn't really matter since they get overridden during reset() anyway)
cubeA_start_pose = gymapi.Transform()
cubeA_start_pose.p = gymapi.Vec3(-1.0, 0.0, 0.0)
cubeA_start_pose.r = gymapi.Quat(0.0, 0.0, 0.0, 1.0)
cubeB_start_pose = gymapi.Transform()
cubeB_start_pose.p = gymapi.Vec3(1.0, 0.0, 0.0)
cubeB_start_pose.r = gymapi.Quat(0.0, 0.0, 0.0, 1.0)
# compute aggregate size
num_franka_bodies = self.gym.get_asset_rigid_body_count(franka_asset)
num_franka_shapes = self.gym.get_asset_rigid_shape_count(franka_asset)
        max_agg_bodies = num_franka_bodies + 4     # franka bodies + table, table stand, cubeA, cubeB
        max_agg_shapes = num_franka_shapes + 4     # franka shapes + table, table stand, cubeA, cubeB
self.frankas = []
self.envs = []
# Create environments
for i in range(self.num_envs):
# create env instance
env_ptr = self.gym.create_env(self.sim, lower, upper, num_per_row)
# Create actors and define aggregate group appropriately depending on setting
# NOTE: franka should ALWAYS be loaded first in sim!
if self.aggregate_mode >= 3:
self.gym.begin_aggregate(env_ptr, max_agg_bodies, max_agg_shapes, True)
# Create franka
# Potentially randomize start pose
if self.franka_position_noise > 0:
rand_xy = self.franka_position_noise * (-1. + np.random.rand(2) * 2.0)
franka_start_pose.p = gymapi.Vec3(-0.45 + rand_xy[0], 0.0 + rand_xy[1],
1.0 + table_thickness / 2 + table_stand_height)
if self.franka_rotation_noise > 0:
rand_rot = torch.zeros(1, 3)
rand_rot[:, -1] = self.franka_rotation_noise * (-1. + np.random.rand() * 2.0)
new_quat = axisangle2quat(rand_rot).squeeze().numpy().tolist()
franka_start_pose.r = gymapi.Quat(*new_quat)
franka_actor = self.gym.create_actor(env_ptr, franka_asset, franka_start_pose, "franka", i, 0, 0)
self.gym.set_actor_dof_properties(env_ptr, franka_actor, franka_dof_props)
if self.aggregate_mode == 2:
self.gym.begin_aggregate(env_ptr, max_agg_bodies, max_agg_shapes, True)
# Create table
table_actor = self.gym.create_actor(env_ptr, table_asset, table_start_pose, "table", i, 1, 0)
table_stand_actor = self.gym.create_actor(env_ptr, table_stand_asset, table_stand_start_pose, "table_stand",
i, 1, 0)
if self.aggregate_mode == 1:
self.gym.begin_aggregate(env_ptr, max_agg_bodies, max_agg_shapes, True)
# Create cubes
self._cubeA_id = self.gym.create_actor(env_ptr, cubeA_asset, cubeA_start_pose, "cubeA", i, 2, 0)
self._cubeB_id = self.gym.create_actor(env_ptr, cubeB_asset, cubeB_start_pose, "cubeB", i, 4, 0)
# Set colors
self.gym.set_rigid_body_color(env_ptr, self._cubeA_id, 0, gymapi.MESH_VISUAL, cubeA_color)
self.gym.set_rigid_body_color(env_ptr, self._cubeB_id, 0, gymapi.MESH_VISUAL, cubeB_color)
if self.aggregate_mode > 0:
self.gym.end_aggregate(env_ptr)
# Store the created env pointers
self.envs.append(env_ptr)
self.frankas.append(franka_actor)
# Setup init state buffer
self._init_cubeA_state = torch.zeros(self.num_envs, 13, device=self.device)
self._init_cubeB_state = torch.zeros(self.num_envs, 13, device=self.device)
# Setup data
self.init_data()
def init_data(self):
# Setup sim handles
env_ptr = self.envs[0]
franka_handle = 0
self.handles = {
# Franka
"hand": self.gym.find_actor_rigid_body_handle(env_ptr, franka_handle, "panda_hand"),
"leftfinger_tip": self.gym.find_actor_rigid_body_handle(env_ptr, franka_handle, "panda_leftfinger_tip"),
"rightfinger_tip": self.gym.find_actor_rigid_body_handle(env_ptr, franka_handle, "panda_rightfinger_tip"),
"grip_site": self.gym.find_actor_rigid_body_handle(env_ptr, franka_handle, "panda_grip_site"),
# Cubes
"cubeA_body_handle": self.gym.find_actor_rigid_body_handle(self.envs[0], self._cubeA_id, "box"),
"cubeB_body_handle": self.gym.find_actor_rigid_body_handle(self.envs[0], self._cubeB_id, "box"),
}
# Get total DOFs
self.num_dofs = self.gym.get_sim_dof_count(self.sim) // self.num_envs
# Setup tensor buffers
_actor_root_state_tensor = self.gym.acquire_actor_root_state_tensor(self.sim)
_dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
_rigid_body_state_tensor = self.gym.acquire_rigid_body_state_tensor(self.sim)
self._root_state = gymtorch.wrap_tensor(_actor_root_state_tensor).view(self.num_envs, -1, 13)
self._dof_state = gymtorch.wrap_tensor(_dof_state_tensor).view(self.num_envs, -1, 2)
self._rigid_body_state = gymtorch.wrap_tensor(_rigid_body_state_tensor).view(self.num_envs, -1, 13)
self._q = self._dof_state[..., 0]
self._qd = self._dof_state[..., 1]
self._eef_state = self._rigid_body_state[:, self.handles["grip_site"], :]
self._eef_lf_state = self._rigid_body_state[:, self.handles["leftfinger_tip"], :]
self._eef_rf_state = self._rigid_body_state[:, self.handles["rightfinger_tip"], :]
_jacobian = self.gym.acquire_jacobian_tensor(self.sim, "franka")
jacobian = gymtorch.wrap_tensor(_jacobian)
hand_joint_index = self.gym.get_actor_joint_dict(env_ptr, franka_handle)['panda_hand_joint']
self._j_eef = jacobian[:, hand_joint_index, :, :7]
_massmatrix = self.gym.acquire_mass_matrix_tensor(self.sim, "franka")
mm = gymtorch.wrap_tensor(_massmatrix)
self._mm = mm[:, :7, :7]
self._cubeA_state = self._root_state[:, self._cubeA_id, :]
self._cubeB_state = self._root_state[:, self._cubeB_id, :]
# Initialize states
self.states.update({
"cubeA_size": torch.ones_like(self._eef_state[:, 0]) * self.cubeA_size,
"cubeB_size": torch.ones_like(self._eef_state[:, 0]) * self.cubeB_size,
})
# Initialize actions
self._pos_control = torch.zeros((self.num_envs, self.num_dofs), dtype=torch.float, device=self.device)
self._effort_control = torch.zeros_like(self._pos_control)
# Initialize control
self._arm_control = self._effort_control[:, :7]
self._gripper_control = self._pos_control[:, 7:9]
# Initialize indices
self._global_indices = torch.arange(self.num_envs * 5, dtype=torch.int32,
device=self.device).view(self.num_envs, -1)
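        # 5 actors per env (franka, table, table stand, cubeA, cubeB); this view
        # lets reset_idx slice per-env actor subsets, e.g. [:, 0] for the franka
        # DOF updates and [:, -2:] for the cube root-state updates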
def _update_states(self):
self.states.update({
# Franka
"q": self._q[:, :],
"q_gripper": self._q[:, -2:],
"eef_pos": self._eef_state[:, :3],
"eef_quat": self._eef_state[:, 3:7],
"eef_vel": self._eef_state[:, 7:],
"eef_lf_pos": self._eef_lf_state[:, :3],
"eef_rf_pos": self._eef_rf_state[:, :3],
# Cubes
"cubeA_quat": self._cubeA_state[:, 3:7],
"cubeA_pos": self._cubeA_state[:, :3],
"cubeA_pos_relative": self._cubeA_state[:, :3] - self._eef_state[:, :3],
"cubeB_quat": self._cubeB_state[:, 3:7],
"cubeB_pos": self._cubeB_state[:, :3],
"cubeA_to_cubeB_pos": self._cubeB_state[:, :3] - self._cubeA_state[:, :3],
})
def _refresh(self):
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_rigid_body_state_tensor(self.sim)
self.gym.refresh_jacobian_tensors(self.sim)
self.gym.refresh_mass_matrix_tensors(self.sim)
# Refresh states
self._update_states()
def compute_reward(self, actions):
self.rew_buf[:], self.reset_buf[:] = compute_franka_reward(
self.reset_buf, self.progress_buf, self.actions, self.states, self.reward_settings, self.max_episode_length
)
def compute_observations(self):
self._refresh()
obs = ["cubeA_quat", "cubeA_pos", "cubeA_to_cubeB_pos", "eef_pos", "eef_quat"]
obs += ["q_gripper"] if self.control_type == "osc" else ["q"]
self.obs_buf = torch.cat([self.states[ob] for ob in obs], dim=-1)
return self.obs_buf
def reset_idx(self, env_ids):
env_ids_int32 = env_ids.to(dtype=torch.int32)
# Reset cubes, sampling cube B first, then A
self._reset_init_cube_state(cube='B', env_ids=env_ids, check_valid=False)
self._reset_init_cube_state(cube='A', env_ids=env_ids, check_valid=True)
# Write these new init states to the sim states
self._cubeA_state[env_ids] = self._init_cubeA_state[env_ids]
self._cubeB_state[env_ids] = self._init_cubeB_state[env_ids]
# Reset agent
reset_noise = torch.rand((len(env_ids), 9), device=self.device)
pos = tensor_clamp(
self.franka_default_dof_pos.unsqueeze(0) +
self.franka_dof_noise * 2.0 * (reset_noise - 0.5),
self.franka_dof_lower_limits.unsqueeze(0), self.franka_dof_upper_limits)
# Overwrite gripper init pos (no noise since these are always position controlled)
pos[:, -2:] = self.franka_default_dof_pos[-2:]
# Reset the internal obs accordingly
self._q[env_ids, :] = pos
self._qd[env_ids, :] = torch.zeros_like(self._qd[env_ids])
# Set any position control to the current position, and any vel / effort control to be 0
# NOTE: Task takes care of actually propagating these controls in sim using the SimActions API
self._pos_control[env_ids, :] = pos
self._effort_control[env_ids, :] = torch.zeros_like(pos)
# Deploy updates
multi_env_ids_int32 = self._global_indices[env_ids, 0].flatten()
self.gym.set_dof_position_target_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self._pos_control),
gymtorch.unwrap_tensor(multi_env_ids_int32),
len(multi_env_ids_int32))
self.gym.set_dof_actuation_force_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self._effort_control),
gymtorch.unwrap_tensor(multi_env_ids_int32),
len(multi_env_ids_int32))
self.gym.set_dof_state_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self._dof_state),
gymtorch.unwrap_tensor(multi_env_ids_int32),
len(multi_env_ids_int32))
# Update cube states
multi_env_ids_cubes_int32 = self._global_indices[env_ids, -2:].flatten()
self.gym.set_actor_root_state_tensor_indexed(
self.sim, gymtorch.unwrap_tensor(self._root_state),
gymtorch.unwrap_tensor(multi_env_ids_cubes_int32), len(multi_env_ids_cubes_int32))
self.progress_buf[env_ids] = 0
self.reset_buf[env_ids] = 0
def _reset_init_cube_state(self, cube, env_ids, check_valid=True):
"""
Simple method to sample @cube's position based on self.startPositionNoise and self.startRotationNoise, and
        automatically reset the pose internally. Populates the appropriate self._init_cubeX_state
If @check_valid is True, then this will also make sure that the sampled position is not in contact with the
other cube.
Args:
cube(str): Which cube to sample location for. Either 'A' or 'B'
env_ids (tensor or None): Specific environments to reset cube for
check_valid (bool): Whether to make sure sampled position is collision-free with the other cube.
"""
# If env_ids is None, we reset all the envs
if env_ids is None:
env_ids = torch.arange(start=0, end=self.num_envs, device=self.device, dtype=torch.long)
# Initialize buffer to hold sampled values
num_resets = len(env_ids)
sampled_cube_state = torch.zeros(num_resets, 13, device=self.device)
# Get correct references depending on which one was selected
if cube.lower() == 'a':
this_cube_state_all = self._init_cubeA_state
other_cube_state = self._init_cubeB_state[env_ids, :]
cube_heights = self.states["cubeA_size"]
elif cube.lower() == 'b':
this_cube_state_all = self._init_cubeB_state
other_cube_state = self._init_cubeA_state[env_ids, :]
            cube_heights = self.states["cubeB_size"]
else:
raise ValueError(f"Invalid cube specified, options are 'A' and 'B'; got: {cube}")
        # Minimum cube distance for guaranteed collision-free sampling is the sum of each cube's effective radius
min_dists = (self.states["cubeA_size"] + self.states["cubeB_size"])[env_ids] * np.sqrt(2) / 2.0
# We scale the min dist by 2 so that the cubes aren't too close together
min_dists = min_dists * 2.0
# Sampling is "centered" around middle of table
centered_cube_xy_state = torch.tensor(self._table_surface_pos[:2], device=self.device, dtype=torch.float32)
# Set z value, which is fixed height
sampled_cube_state[:, 2] = self._table_surface_pos[2] + cube_heights.squeeze(-1)[env_ids] / 2
# Initialize rotation, which is no rotation (quat w = 1)
sampled_cube_state[:, 6] = 1.0
# If we're verifying valid sampling, we need to check and re-sample if any are not collision-free
        # We use a simple heuristic based on the cubes' effective radii to determine whether a collision would occur
if check_valid:
success = False
# Indexes corresponding to envs we're still actively sampling for
active_idx = torch.arange(num_resets, device=self.device)
num_active_idx = len(active_idx)
for i in range(100):
# Sample x y values
sampled_cube_state[active_idx, :2] = centered_cube_xy_state + \
2.0 * self.start_position_noise * (
torch.rand_like(sampled_cube_state[active_idx, :2]) - 0.5)
# Check if sampled values are valid
cube_dist = torch.linalg.norm(sampled_cube_state[:, :2] - other_cube_state[:, :2], dim=-1)
active_idx = torch.nonzero(cube_dist < min_dists, as_tuple=True)[0]
num_active_idx = len(active_idx)
# If active idx is empty, then all sampling is valid :D
if num_active_idx == 0:
success = True
break
# Make sure we succeeded at sampling
assert success, "Sampling cube locations was unsuccessful! ):"
else:
# We just directly sample
sampled_cube_state[:, :2] = centered_cube_xy_state.unsqueeze(0) + \
2.0 * self.start_position_noise * (
torch.rand(num_resets, 2, device=self.device) - 0.5)
# Sample rotation value
if self.start_rotation_noise > 0:
aa_rot = torch.zeros(num_resets, 3, device=self.device)
aa_rot[:, 2] = 2.0 * self.start_rotation_noise * (torch.rand(num_resets, device=self.device) - 0.5)
sampled_cube_state[:, 3:7] = quat_mul(axisangle2quat(aa_rot), sampled_cube_state[:, 3:7])
# Lastly, set these sampled values as the new init state
this_cube_state_all[env_ids, :] = sampled_cube_state
def _compute_osc_torques(self, dpose):
# Solve for Operational Space Control # Paper: khatib.stanford.edu/publications/pdfs/Khatib_1987_RA.pdf
# Helpful resource: studywolf.wordpress.com/2013/09/17/robot-control-4-operation-space-control/
q, qd = self._q[:, :7], self._qd[:, :7]
mm_inv = torch.inverse(self._mm)
m_eef_inv = self._j_eef @ mm_inv @ torch.transpose(self._j_eef, 1, 2)
m_eef = torch.inverse(m_eef_inv)
# Transform our cartesian action `dpose` into joint torques `u`
u = torch.transpose(self._j_eef, 1, 2) @ m_eef @ (
self.kp * dpose - self.kd * self.states["eef_vel"]).unsqueeze(-1)
# Nullspace control torques `u_null` prevents large changes in joint configuration
# They are added into the nullspace of OSC so that the end effector orientation remains constant
# roboticsproceedings.org/rss07/p31.pdf
j_eef_inv = m_eef @ self._j_eef @ mm_inv
u_null = self.kd_null * -qd + self.kp_null * (
(self.franka_default_dof_pos[:7] - q + np.pi) % (2 * np.pi) - np.pi)
u_null[:, 7:] *= 0
u_null = self._mm @ u_null.unsqueeze(-1)
u += (torch.eye(7, device=self.device).unsqueeze(0) - torch.transpose(self._j_eef, 1, 2) @ j_eef_inv) @ u_null
# Clip the values to be within valid effort range
u = tensor_clamp(u.squeeze(-1),
-self._franka_effort_limits[:7].unsqueeze(0), self._franka_effort_limits[:7].unsqueeze(0))
return u
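    # A sketch of the OSC math above (notation-level gloss, not part of the
    # original file). With J the end-effector Jacobian and M the joint-space
    # mass matrix:
    #     lambda = inv(J @ inv(M) @ J.T)                  # task-space inertia (m_eef)
    #     u      = J.T @ lambda @ (kp*dpose - kd*xdot)    # task-space PD -> joint torques
    #     u     += (I - J.T @ (lambda @ J @ inv(M))) @ M @ pd_null
    # The projector uses the dynamically consistent pseudoinverse, so the
    # posture (nullspace) torques produce no end-effector acceleration.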
def pre_physics_step(self, actions):
self.actions = actions.clone().to(self.device)
# Split arm and gripper command
u_arm, u_gripper = self.actions[:, :-1], self.actions[:, -1]
# Control arm (scale value first)
u_arm = u_arm * self.cmd_limit / self.action_scale
if self.control_type == "osc":
u_arm = self._compute_osc_torques(dpose=u_arm)
self._arm_control[:, :] = u_arm
# Control gripper
u_fingers = torch.zeros_like(self._gripper_control)
u_fingers[:, 0] = torch.where(u_gripper >= 0.0, self.franka_dof_upper_limits[-2].item(),
self.franka_dof_lower_limits[-2].item())
u_fingers[:, 1] = torch.where(u_gripper >= 0.0, self.franka_dof_upper_limits[-1].item(),
self.franka_dof_lower_limits[-1].item())
# Write gripper command to appropriate tensor buffer
self._gripper_control[:, :] = u_fingers
# Deploy actions
self.gym.set_dof_position_target_tensor(self.sim, gymtorch.unwrap_tensor(self._pos_control))
self.gym.set_dof_actuation_force_tensor(self.sim, gymtorch.unwrap_tensor(self._effort_control))
def post_physics_step(self):
self.progress_buf += 1
env_ids = self.reset_buf.nonzero(as_tuple=False).squeeze(-1)
if len(env_ids) > 0:
self.reset_idx(env_ids)
self.compute_observations()
self.compute_reward(self.actions)
# debug viz
if self.viewer and self.debug_viz:
self.gym.clear_lines(self.viewer)
self.gym.refresh_rigid_body_state_tensor(self.sim)
# Grab relevant states to visualize
eef_pos = self.states["eef_pos"]
eef_rot = self.states["eef_quat"]
cubeA_pos = self.states["cubeA_pos"]
cubeA_rot = self.states["cubeA_quat"]
cubeB_pos = self.states["cubeB_pos"]
cubeB_rot = self.states["cubeB_quat"]
# Plot visualizations
for i in range(self.num_envs):
for pos, rot in zip((eef_pos, cubeA_pos, cubeB_pos), (eef_rot, cubeA_rot, cubeB_rot)):
px = (pos[i] + quat_apply(rot[i], to_torch([1, 0, 0], device=self.device) * 0.2)).cpu().numpy()
py = (pos[i] + quat_apply(rot[i], to_torch([0, 1, 0], device=self.device) * 0.2)).cpu().numpy()
pz = (pos[i] + quat_apply(rot[i], to_torch([0, 0, 1], device=self.device) * 0.2)).cpu().numpy()
p0 = pos[i].cpu().numpy()
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], px[0], px[1], px[2]], [0.85, 0.1, 0.1])
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], py[0], py[1], py[2]], [0.1, 0.85, 0.1])
self.gym.add_lines(self.viewer, self.envs[i], 1, [p0[0], p0[1], p0[2], pz[0], pz[1], pz[2]], [0.1, 0.1, 0.85])
#####################################################################
###=========================jit functions=========================###
#####################################################################
@torch.jit.script
def compute_franka_reward(
reset_buf, progress_buf, actions, states, reward_settings, max_episode_length
):
# type: (Tensor, Tensor, Tensor, Dict[str, Tensor], Dict[str, float], float) -> Tuple[Tensor, Tensor]
# Compute per-env physical parameters
target_height = states["cubeB_size"] + states["cubeA_size"] / 2.0
cubeA_size = states["cubeA_size"]
cubeB_size = states["cubeB_size"]
# distance from hand to the cubeA
d = torch.norm(states["cubeA_pos_relative"], dim=-1)
d_lf = torch.norm(states["cubeA_pos"] - states["eef_lf_pos"], dim=-1)
d_rf = torch.norm(states["cubeA_pos"] - states["eef_rf_pos"], dim=-1)
dist_reward = 1 - torch.tanh(10.0 * (d + d_lf + d_rf) / 3)
# reward for lifting cubeA
cubeA_height = states["cubeA_pos"][:, 2] - reward_settings["table_height"]
cubeA_lifted = (cubeA_height - cubeA_size) > 0.04
lift_reward = cubeA_lifted
# how closely aligned cubeA is to cubeB (only provided if cubeA is lifted)
offset = torch.zeros_like(states["cubeA_to_cubeB_pos"])
offset[:, 2] = (cubeA_size + cubeB_size) / 2
d_ab = torch.norm(states["cubeA_to_cubeB_pos"] + offset, dim=-1)
align_reward = (1 - torch.tanh(10.0 * d_ab)) * cubeA_lifted
# Dist reward is maximum of dist and align reward
dist_reward = torch.max(dist_reward, align_reward)
# final reward for stacking successfully (only if cubeA is close to target height and corresponding location, and gripper is not grasping)
cubeA_align_cubeB = (torch.norm(states["cubeA_to_cubeB_pos"][:, :2], dim=-1) < 0.02)
cubeA_on_cubeB = torch.abs(cubeA_height - target_height) < 0.02
gripper_away_from_cubeA = (d > 0.04)
stack_reward = cubeA_align_cubeB & cubeA_on_cubeB & gripper_away_from_cubeA
# Compose rewards
# We either provide the stack reward or the align + dist reward
rewards = torch.where(
stack_reward,
reward_settings["r_stack_scale"] * stack_reward,
reward_settings["r_dist_scale"] * dist_reward + reward_settings["r_lift_scale"] * lift_reward + reward_settings[
"r_align_scale"] * align_reward,
)
# Compute resets
reset_buf = torch.where((progress_buf >= max_episode_length - 1) | (stack_reward > 0), torch.ones_like(reset_buf), reset_buf)
return rewards, reset_buf
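# Composition sketch of the reward above, with illustrative scale values
# (assumed defaults, not read from this file: r_dist_scale=0.1,
# r_lift_scale=1.5, r_align_scale=2.0, r_stack_scale=16.0): an env that has
# stacked gets only the sparse 16.0 and is reset on the next step; every other
# env gets the dense shaping 0.1*dist + 1.5*lift + 2.0*align, each term in [0, 1].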
| 37,426 | Python | 49.036096 | 217 | 0.595816 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/quadcopter.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import math
import numpy as np
import os
import torch
import xml.etree.ElementTree as ET
from isaacgym import gymutil, gymtorch, gymapi
from isaacgymenvs.utils.torch_jit_utils import *
from .base.vec_task import VecTask
class Quadcopter(VecTask):
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.cfg = cfg
self.max_episode_length = self.cfg["env"]["maxEpisodeLength"]
self.debug_viz = self.cfg["env"]["enableDebugVis"]
dofs_per_env = 8
bodies_per_env = 9
# Observations:
# 0:13 - root state
        # 13:21 - rotor DOF positions
num_obs = 21
# Actions:
# 0:8 - rotor DOF position targets
# 8:12 - rotor thrust magnitudes
num_acts = 12
self.cfg["env"]["numObservations"] = num_obs
self.cfg["env"]["numActions"] = num_acts
super().__init__(config=self.cfg, rl_device=rl_device, sim_device=sim_device, graphics_device_id=graphics_device_id, headless=headless, virtual_screen_capture=virtual_screen_capture, force_render=force_render)
self.root_tensor = self.gym.acquire_actor_root_state_tensor(self.sim)
self.dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
vec_root_tensor = gymtorch.wrap_tensor(self.root_tensor).view(self.num_envs, 13)
vec_dof_tensor = gymtorch.wrap_tensor(self.dof_state_tensor).view(self.num_envs, dofs_per_env, 2)
self.root_states = vec_root_tensor
self.root_positions = vec_root_tensor[..., 0:3]
self.root_quats = vec_root_tensor[..., 3:7]
self.root_linvels = vec_root_tensor[..., 7:10]
self.root_angvels = vec_root_tensor[..., 10:13]
self.dof_states = vec_dof_tensor
self.dof_positions = vec_dof_tensor[..., 0]
self.dof_velocities = vec_dof_tensor[..., 1]
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_dof_state_tensor(self.sim)
self.initial_root_states = vec_root_tensor.clone()
self.initial_dof_states = vec_dof_tensor.clone()
max_thrust = 2
self.thrust_lower_limits = torch.zeros(4, device=self.device, dtype=torch.float32)
self.thrust_upper_limits = max_thrust * torch.ones(4, device=self.device, dtype=torch.float32)
# control tensors
self.dof_position_targets = torch.zeros((self.num_envs, dofs_per_env), dtype=torch.float32, device=self.device, requires_grad=False)
self.thrusts = torch.zeros((self.num_envs, 4), dtype=torch.float32, device=self.device, requires_grad=False)
self.forces = torch.zeros((self.num_envs, bodies_per_env, 3), dtype=torch.float32, device=self.device, requires_grad=False)
self.all_actor_indices = torch.arange(self.num_envs, dtype=torch.int32, device=self.device)
if self.viewer:
cam_pos = gymapi.Vec3(1.0, 1.0, 1.8)
cam_target = gymapi.Vec3(2.2, 2.0, 1.0)
self.gym.viewer_camera_look_at(self.viewer, None, cam_pos, cam_target)
# need rigid body states for visualizing thrusts
self.rb_state_tensor = self.gym.acquire_rigid_body_state_tensor(self.sim)
self.rb_states = gymtorch.wrap_tensor(self.rb_state_tensor).view(self.num_envs, bodies_per_env, 13)
self.rb_positions = self.rb_states[..., 0:3]
self.rb_quats = self.rb_states[..., 3:7]
def create_sim(self):
self.sim_params.up_axis = gymapi.UP_AXIS_Z
self.sim_params.gravity.x = 0
self.sim_params.gravity.y = 0
self.sim_params.gravity.z = -9.81
self.sim = super().create_sim(self.device_id, self.graphics_device_id, self.physics_engine, self.sim_params)
self.dt = self.sim_params.dt
self._create_quadcopter_asset()
self._create_ground_plane()
self._create_envs(self.num_envs, self.cfg["env"]['envSpacing'], int(np.sqrt(self.num_envs)))
def _create_quadcopter_asset(self):
chassis_radius = 0.1
chassis_thickness = 0.03
rotor_radius = 0.04
rotor_thickness = 0.01
rotor_arm_radius = 0.01
root = ET.Element('mujoco')
root.attrib["model"] = "Quadcopter"
compiler = ET.SubElement(root, "compiler")
compiler.attrib["angle"] = "degree"
compiler.attrib["coordinate"] = "local"
compiler.attrib["inertiafromgeom"] = "true"
worldbody = ET.SubElement(root, "worldbody")
chassis = ET.SubElement(worldbody, "body")
chassis.attrib["name"] = "chassis"
chassis.attrib["pos"] = "%g %g %g" % (0, 0, 0)
chassis_geom = ET.SubElement(chassis, "geom")
chassis_geom.attrib["type"] = "cylinder"
chassis_geom.attrib["size"] = "%g %g" % (chassis_radius, 0.5 * chassis_thickness)
chassis_geom.attrib["pos"] = "0 0 0"
chassis_geom.attrib["density"] = "50"
chassis_joint = ET.SubElement(chassis, "joint")
chassis_joint.attrib["name"] = "root_joint"
chassis_joint.attrib["type"] = "free"
zaxis = gymapi.Vec3(0, 0, 1)
rotor_arm_offset = gymapi.Vec3(chassis_radius + 0.25 * rotor_arm_radius, 0, 0)
pitch_joint_offset = gymapi.Vec3(0, 0, 0)
rotor_offset = gymapi.Vec3(rotor_radius + 0.25 * rotor_arm_radius, 0, 0)
rotor_angles = [0.25 * math.pi, 0.75 * math.pi, 1.25 * math.pi, 1.75 * math.pi]
for i in range(len(rotor_angles)):
angle = rotor_angles[i]
rotor_arm_quat = gymapi.Quat.from_axis_angle(zaxis, angle)
rotor_arm_pos = rotor_arm_quat.rotate(rotor_arm_offset)
pitch_joint_pos = pitch_joint_offset
rotor_pos = rotor_offset
rotor_quat = gymapi.Quat()
rotor_arm = ET.SubElement(chassis, "body")
rotor_arm.attrib["name"] = "rotor_arm" + str(i)
rotor_arm.attrib["pos"] = "%g %g %g" % (rotor_arm_pos.x, rotor_arm_pos.y, rotor_arm_pos.z)
rotor_arm.attrib["quat"] = "%g %g %g %g" % (rotor_arm_quat.w, rotor_arm_quat.x, rotor_arm_quat.y, rotor_arm_quat.z)
rotor_arm_geom = ET.SubElement(rotor_arm, "geom")
rotor_arm_geom.attrib["type"] = "sphere"
rotor_arm_geom.attrib["size"] = "%g" % rotor_arm_radius
rotor_arm_geom.attrib["density"] = "200"
pitch_joint = ET.SubElement(rotor_arm, "joint")
pitch_joint.attrib["name"] = "rotor_pitch" + str(i)
pitch_joint.attrib["type"] = "hinge"
pitch_joint.attrib["pos"] = "%g %g %g" % (0, 0, 0)
pitch_joint.attrib["axis"] = "0 1 0"
pitch_joint.attrib["limited"] = "true"
pitch_joint.attrib["range"] = "-30 30"
rotor = ET.SubElement(rotor_arm, "body")
rotor.attrib["name"] = "rotor" + str(i)
rotor.attrib["pos"] = "%g %g %g" % (rotor_pos.x, rotor_pos.y, rotor_pos.z)
rotor.attrib["quat"] = "%g %g %g %g" % (rotor_quat.w, rotor_quat.x, rotor_quat.y, rotor_quat.z)
rotor_geom = ET.SubElement(rotor, "geom")
rotor_geom.attrib["type"] = "cylinder"
rotor_geom.attrib["size"] = "%g %g" % (rotor_radius, 0.5 * rotor_thickness)
#rotor_geom.attrib["type"] = "box"
#rotor_geom.attrib["size"] = "%g %g %g" % (rotor_radius, rotor_radius, 0.5 * rotor_thickness)
rotor_geom.attrib["density"] = "1000"
roll_joint = ET.SubElement(rotor, "joint")
roll_joint.attrib["name"] = "rotor_roll" + str(i)
roll_joint.attrib["type"] = "hinge"
roll_joint.attrib["pos"] = "%g %g %g" % (0, 0, 0)
roll_joint.attrib["axis"] = "1 0 0"
roll_joint.attrib["limited"] = "true"
roll_joint.attrib["range"] = "-30 30"
gymutil._indent_xml(root)
ET.ElementTree(root).write("quadcopter.xml")
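    # Layout of the generated model (a gloss on the XML built above): 4 arms at
    # 45/135/225/315 deg around +z, each carrying a pitch and a roll hinge
    # limited to +/-30 deg -- the 8 position-controlled DOFs steered in
    # pre_physics_step, alongside the 4 rotor thrust forces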
def _create_ground_plane(self):
plane_params = gymapi.PlaneParams()
plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)
self.gym.add_ground(self.sim, plane_params)
def _create_envs(self, num_envs, spacing, num_per_row):
lower = gymapi.Vec3(-spacing, -spacing, 0.0)
upper = gymapi.Vec3(spacing, spacing, spacing)
asset_root = "."
asset_file = "quadcopter.xml"
asset_options = gymapi.AssetOptions()
asset_options.fix_base_link = False
asset_options.angular_damping = 0.0
asset_options.max_angular_velocity = 4 * math.pi
asset_options.slices_per_cylinder = 40
asset = self.gym.load_asset(self.sim, asset_root, asset_file, asset_options)
self.num_dofs = self.gym.get_asset_dof_count(asset)
dof_props = self.gym.get_asset_dof_properties(asset)
self.dof_lower_limits = []
self.dof_upper_limits = []
for i in range(self.num_dofs):
self.dof_lower_limits.append(dof_props['lower'][i])
self.dof_upper_limits.append(dof_props['upper'][i])
self.dof_lower_limits = to_torch(self.dof_lower_limits, device=self.device)
self.dof_upper_limits = to_torch(self.dof_upper_limits, device=self.device)
self.dof_ranges = self.dof_upper_limits - self.dof_lower_limits
default_pose = gymapi.Transform()
default_pose.p.z = 1.0
self.envs = []
for i in range(self.num_envs):
# create env instance
env = self.gym.create_env(self.sim, lower, upper, num_per_row)
actor_handle = self.gym.create_actor(env, asset, default_pose, "quadcopter", i, 1, 0)
dof_props = self.gym.get_actor_dof_properties(env, actor_handle)
dof_props['driveMode'].fill(gymapi.DOF_MODE_POS)
dof_props['stiffness'].fill(1000.0)
dof_props['damping'].fill(0.0)
self.gym.set_actor_dof_properties(env, actor_handle, dof_props)
# pretty colors
chassis_color = gymapi.Vec3(0.8, 0.6, 0.2)
rotor_color = gymapi.Vec3(0.1, 0.2, 0.6)
arm_color = gymapi.Vec3(0.0, 0.0, 0.0)
self.gym.set_rigid_body_color(env, actor_handle, 0, gymapi.MESH_VISUAL_AND_COLLISION, chassis_color)
self.gym.set_rigid_body_color(env, actor_handle, 1, gymapi.MESH_VISUAL_AND_COLLISION, arm_color)
self.gym.set_rigid_body_color(env, actor_handle, 3, gymapi.MESH_VISUAL_AND_COLLISION, arm_color)
self.gym.set_rigid_body_color(env, actor_handle, 5, gymapi.MESH_VISUAL_AND_COLLISION, arm_color)
self.gym.set_rigid_body_color(env, actor_handle, 7, gymapi.MESH_VISUAL_AND_COLLISION, arm_color)
self.gym.set_rigid_body_color(env, actor_handle, 2, gymapi.MESH_VISUAL_AND_COLLISION, rotor_color)
self.gym.set_rigid_body_color(env, actor_handle, 4, gymapi.MESH_VISUAL_AND_COLLISION, rotor_color)
self.gym.set_rigid_body_color(env, actor_handle, 6, gymapi.MESH_VISUAL_AND_COLLISION, rotor_color)
self.gym.set_rigid_body_color(env, actor_handle, 8, gymapi.MESH_VISUAL_AND_COLLISION, rotor_color)
self.envs.append(env)
if self.debug_viz:
# need env offsets for the rotors
self.rotor_env_offsets = torch.zeros((self.num_envs, 4, 3), device=self.device)
for i in range(self.num_envs):
env_origin = self.gym.get_env_origin(self.envs[i])
self.rotor_env_offsets[i, ..., 0] = env_origin.x
self.rotor_env_offsets[i, ..., 1] = env_origin.y
self.rotor_env_offsets[i, ..., 2] = env_origin.z
def reset_idx(self, env_ids):
num_resets = len(env_ids)
self.dof_states[env_ids] = self.initial_dof_states[env_ids]
actor_indices = self.all_actor_indices[env_ids].flatten()
self.root_states[env_ids] = self.initial_root_states[env_ids]
self.root_states[env_ids, 0] += torch_rand_float(-1.5, 1.5, (num_resets, 1), self.device).flatten()
self.root_states[env_ids, 1] += torch_rand_float(-1.5, 1.5, (num_resets, 1), self.device).flatten()
self.root_states[env_ids, 2] += torch_rand_float(-0.2, 1.5, (num_resets, 1), self.device).flatten()
self.gym.set_actor_root_state_tensor_indexed(self.sim, self.root_tensor, gymtorch.unwrap_tensor(actor_indices), num_resets)
self.dof_positions[env_ids] = torch_rand_float(-0.2, 0.2, (num_resets, 8), self.device)
self.dof_velocities[env_ids] = 0.0
self.gym.set_dof_state_tensor_indexed(self.sim, self.dof_state_tensor, gymtorch.unwrap_tensor(actor_indices), num_resets)
self.reset_buf[env_ids] = 0
self.progress_buf[env_ids] = 0
def pre_physics_step(self, _actions):
# resets
reset_env_ids = self.reset_buf.nonzero(as_tuple=False).squeeze(-1)
if len(reset_env_ids) > 0:
self.reset_idx(reset_env_ids)
actions = _actions.to(self.device)
dof_action_speed_scale = 8 * math.pi
self.dof_position_targets += self.dt * dof_action_speed_scale * actions[:, 0:8]
self.dof_position_targets[:] = tensor_clamp(self.dof_position_targets, self.dof_lower_limits, self.dof_upper_limits)
thrust_action_speed_scale = 200
self.thrusts += self.dt * thrust_action_speed_scale * actions[:, 8:12]
self.thrusts[:] = tensor_clamp(self.thrusts, self.thrust_lower_limits, self.thrust_upper_limits)
self.forces[:, 2, 2] = self.thrusts[:, 0]
self.forces[:, 4, 2] = self.thrusts[:, 1]
self.forces[:, 6, 2] = self.thrusts[:, 2]
self.forces[:, 8, 2] = self.thrusts[:, 3]
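        # rigid bodies alternate arm/rotor after the chassis (chassis=0, arms at
        # 1/3/5/7, rotors at 2/4/6/8), so thrusts land on the rotor bodies as +z
        # forces; LOCAL_SPACE below makes the simulator rotate them with each
        # rotor's current orientation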
# clear actions for reset envs
self.thrusts[reset_env_ids] = 0.0
self.forces[reset_env_ids] = 0.0
self.dof_position_targets[reset_env_ids] = self.dof_positions[reset_env_ids]
# apply actions
self.gym.set_dof_position_target_tensor(self.sim, gymtorch.unwrap_tensor(self.dof_position_targets))
self.gym.apply_rigid_body_force_tensors(self.sim, gymtorch.unwrap_tensor(self.forces), None, gymapi.LOCAL_SPACE)
def post_physics_step(self):
self.progress_buf += 1
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_dof_state_tensor(self.sim)
self.compute_observations()
self.compute_reward()
# debug viz
if self.viewer and self.debug_viz:
# compute start and end positions for visualizing thrust lines
self.gym.refresh_rigid_body_state_tensor(self.sim)
rotor_indices = torch.LongTensor([2, 4, 6, 8])
quats = self.rb_quats[:, rotor_indices]
dirs = -quat_axis(quats.view(self.num_envs * 4, 4), 2).view(self.num_envs, 4, 3)
starts = self.rb_positions[:, rotor_indices] + self.rotor_env_offsets
ends = starts + 0.1 * self.thrusts.view(self.num_envs, 4, 1) * dirs
# submit debug line geometry
verts = torch.stack([starts, ends], dim=2).cpu().numpy()
colors = np.zeros((self.num_envs * 4, 3), dtype=np.float32)
colors[..., 0] = 1.0
self.gym.clear_lines(self.viewer)
self.gym.add_lines(self.viewer, None, self.num_envs * 4, verts, colors)
def compute_observations(self):
target_x = 0.0
target_y = 0.0
target_z = 1.0
self.obs_buf[..., 0] = (target_x - self.root_positions[..., 0]) / 3
self.obs_buf[..., 1] = (target_y - self.root_positions[..., 1]) / 3
self.obs_buf[..., 2] = (target_z - self.root_positions[..., 2]) / 3
self.obs_buf[..., 3:7] = self.root_quats
self.obs_buf[..., 7:10] = self.root_linvels / 2
self.obs_buf[..., 10:13] = self.root_angvels / math.pi
self.obs_buf[..., 13:21] = self.dof_positions
return self.obs_buf
def compute_reward(self):
self.rew_buf[:], self.reset_buf[:] = compute_quadcopter_reward(
self.root_positions,
self.root_quats,
self.root_linvels,
self.root_angvels,
self.reset_buf, self.progress_buf, self.max_episode_length
)
#####################################################################
###=========================jit functions=========================###
#####################################################################
@torch.jit.script
def compute_quadcopter_reward(root_positions, root_quats, root_linvels, root_angvels, reset_buf, progress_buf, max_episode_length):
# type: (Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, float) -> Tuple[Tensor, Tensor]
# distance to target
target_dist = torch.sqrt(root_positions[..., 0] * root_positions[..., 0] +
root_positions[..., 1] * root_positions[..., 1] +
(1 - root_positions[..., 2]) * (1 - root_positions[..., 2]))
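    # the hover target is implicitly (0, 0, 1), matching compute_observations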
pos_reward = 1.0 / (1.0 + target_dist * target_dist)
# uprightness
ups = quat_axis(root_quats, 2)
tiltage = torch.abs(1 - ups[..., 2])
up_reward = 1.0 / (1.0 + tiltage * tiltage)
# spinning
spinnage = torch.abs(root_angvels[..., 2])
spinnage_reward = 1.0 / (1.0 + spinnage * spinnage)
# combined reward
    # uprightness and spinning only matter when close to the target
reward = pos_reward + pos_reward * (up_reward + spinnage_reward)
# resets due to misbehavior
ones = torch.ones_like(reset_buf)
die = torch.zeros_like(reset_buf)
die = torch.where(target_dist > 3.0, ones, die)
die = torch.where(root_positions[..., 2] < 0.3, ones, die)
# resets due to episode length
reset = torch.where(progress_buf >= max_episode_length - 1, ones, die)
return reward, reset
| 19,725 | Python | 46.078759 | 217 | 0.61308 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/ingenuity.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import math
import numpy as np
import os
import torch
import xml.etree.ElementTree as ET
from isaacgymenvs.utils.torch_jit_utils import *
from .base.vec_task import VecTask
from isaacgym import gymutil, gymtorch, gymapi
class Ingenuity(VecTask):
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.cfg = cfg
self.max_episode_length = self.cfg["env"]["maxEpisodeLength"]
self.debug_viz = self.cfg["env"]["enableDebugVis"]
# Observations:
# 0:13 - root state
self.cfg["env"]["numObservations"] = 13
# Actions:
# 0:3 - xyz force vector for lower rotor
        # 3:6 - xyz force vector for upper rotor
self.cfg["env"]["numActions"] = 6
super().__init__(config=self.cfg, rl_device=rl_device, sim_device=sim_device, graphics_device_id=graphics_device_id, headless=headless, virtual_screen_capture=virtual_screen_capture, force_render=force_render)
dofs_per_env = 4
bodies_per_env = 6
self.root_tensor = self.gym.acquire_actor_root_state_tensor(self.sim)
self.dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
vec_root_tensor = gymtorch.wrap_tensor(self.root_tensor).view(self.num_envs, 2, 13)
vec_dof_tensor = gymtorch.wrap_tensor(self.dof_state_tensor).view(self.num_envs, dofs_per_env, 2)
self.root_states = vec_root_tensor[:, 0, :]
self.root_positions = self.root_states[:, 0:3]
self.target_root_positions = torch.zeros((self.num_envs, 3), device=self.device, dtype=torch.float32)
self.target_root_positions[:, 2] = 1
self.root_quats = self.root_states[:, 3:7]
self.root_linvels = self.root_states[:, 7:10]
self.root_angvels = self.root_states[:, 10:13]
self.marker_states = vec_root_tensor[:, 1, :]
self.marker_positions = self.marker_states[:, 0:3]
self.dof_states = vec_dof_tensor
self.dof_positions = vec_dof_tensor[..., 0]
self.dof_velocities = vec_dof_tensor[..., 1]
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_dof_state_tensor(self.sim)
self.initial_root_states = self.root_states.clone()
self.initial_dof_states = self.dof_states.clone()
self.thrust_lower_limit = 0
self.thrust_upper_limit = 2000
self.thrust_lateral_component = 0.2
# control tensors
self.thrusts = torch.zeros((self.num_envs, 2, 3), dtype=torch.float32, device=self.device, requires_grad=False)
self.forces = torch.zeros((self.num_envs, bodies_per_env, 3), dtype=torch.float32, device=self.device, requires_grad=False)
self.all_actor_indices = torch.arange(self.num_envs * 2, dtype=torch.int32, device=self.device).reshape((self.num_envs, 2))
if self.viewer:
cam_pos = gymapi.Vec3(2.25, 2.25, 3.0)
cam_target = gymapi.Vec3(3.5, 4.0, 1.9)
self.gym.viewer_camera_look_at(self.viewer, None, cam_pos, cam_target)
# need rigid body states for visualizing thrusts
self.rb_state_tensor = self.gym.acquire_rigid_body_state_tensor(self.sim)
self.rb_states = gymtorch.wrap_tensor(self.rb_state_tensor).view(self.num_envs, bodies_per_env, 13)
self.rb_positions = self.rb_states[..., 0:3]
self.rb_quats = self.rb_states[..., 3:7]
def create_sim(self):
self.sim_params.up_axis = gymapi.UP_AXIS_Z
# Mars gravity
self.sim_params.gravity.x = 0
self.sim_params.gravity.y = 0
self.sim_params.gravity.z = -3.721
self.sim = super().create_sim(self.device_id, self.graphics_device_id, self.physics_engine, self.sim_params)
self.dt = self.sim_params.dt
self._create_ingenuity_asset()
self._create_ground_plane()
self._create_envs(self.num_envs, self.cfg["env"]['envSpacing'], int(np.sqrt(self.num_envs)))
def _create_ingenuity_asset(self):
chassis_size = 0.06
rotor_axis_length = 0.2
rotor_radius = 0.15
rotor_thickness = 0.01
rotor_arm_radius = 0.01
root = ET.Element('mujoco')
root.attrib["model"] = "Ingenuity"
compiler = ET.SubElement(root, "compiler")
compiler.attrib["angle"] = "degree"
compiler.attrib["coordinate"] = "local"
compiler.attrib["inertiafromgeom"] = "true"
mesh_asset = ET.SubElement(root, "asset")
model_path = "../assets/glb/ingenuity/"
mesh = ET.SubElement(mesh_asset, "mesh")
mesh.attrib["file"] = model_path + "chassis.glb"
mesh.attrib["name"] = "ingenuity_mesh"
lower_prop_mesh = ET.SubElement(mesh_asset, "mesh")
lower_prop_mesh.attrib["file"] = model_path + "lower_prop.glb"
lower_prop_mesh.attrib["name"] = "lower_prop_mesh"
upper_prop_mesh = ET.SubElement(mesh_asset, "mesh")
upper_prop_mesh.attrib["file"] = model_path + "upper_prop.glb"
upper_prop_mesh.attrib["name"] = "upper_prop_mesh"
worldbody = ET.SubElement(root, "worldbody")
chassis = ET.SubElement(worldbody, "body")
chassis.attrib["name"] = "chassis"
chassis.attrib["pos"] = "%g %g %g" % (0, 0, 0)
chassis_geom = ET.SubElement(chassis, "geom")
chassis_geom.attrib["type"] = "box"
chassis_geom.attrib["size"] = "%g %g %g" % (chassis_size, chassis_size, chassis_size)
chassis_geom.attrib["pos"] = "0 0 0"
chassis_geom.attrib["density"] = "50"
mesh_quat = gymapi.Quat.from_euler_zyx(0.5 * math.pi, 0, 0)
mesh_geom = ET.SubElement(chassis, "geom")
mesh_geom.attrib["type"] = "mesh"
mesh_geom.attrib["quat"] = "%g %g %g %g" % (mesh_quat.w, mesh_quat.x, mesh_quat.y, mesh_quat.z)
mesh_geom.attrib["mesh"] = "ingenuity_mesh"
mesh_geom.attrib["pos"] = "%g %g %g" % (0, 0, 0)
mesh_geom.attrib["contype"] = "0"
mesh_geom.attrib["conaffinity"] = "0"
chassis_joint = ET.SubElement(chassis, "joint")
chassis_joint.attrib["name"] = "root_joint"
chassis_joint.attrib["type"] = "hinge"
chassis_joint.attrib["limited"] = "true"
chassis_joint.attrib["range"] = "0 0"
zaxis = gymapi.Vec3(0, 0, 1)
low_rotor_pos = gymapi.Vec3(0, 0, 0)
rotor_separation = gymapi.Vec3(0, 0, 0.025)
for i, mesh_name in enumerate(["lower_prop_mesh", "upper_prop_mesh"]):
angle = 0
rotor_quat = gymapi.Quat.from_axis_angle(zaxis, angle)
rotor_pos = low_rotor_pos + (rotor_separation * i)
rotor = ET.SubElement(chassis, "body")
rotor.attrib["name"] = "rotor_physics_" + str(i)
rotor.attrib["pos"] = "%g %g %g" % (rotor_pos.x, rotor_pos.y, rotor_pos.z)
rotor.attrib["quat"] = "%g %g %g %g" % (rotor_quat.w, rotor_quat.x, rotor_quat.y, rotor_quat.z)
rotor_geom = ET.SubElement(rotor, "geom")
rotor_geom.attrib["type"] = "cylinder"
rotor_geom.attrib["size"] = "%g %g" % (rotor_radius, 0.5 * rotor_thickness)
rotor_geom.attrib["density"] = "1000"
roll_joint = ET.SubElement(rotor, "joint")
roll_joint.attrib["name"] = "rotor_roll" + str(i)
roll_joint.attrib["type"] = "hinge"
roll_joint.attrib["limited"] = "true"
roll_joint.attrib["range"] = "0 0"
roll_joint.attrib["pos"] = "%g %g %g" % (0, 0, 0)
rotor_dummy = ET.SubElement(chassis, "body")
rotor_dummy.attrib["name"] = "rotor_visual_" + str(i)
rotor_dummy.attrib["pos"] = "%g %g %g" % (rotor_pos.x, rotor_pos.y, rotor_pos.z)
rotor_dummy.attrib["quat"] = "%g %g %g %g" % (rotor_quat.w, rotor_quat.x, rotor_quat.y, rotor_quat.z)
rotor_mesh_geom = ET.SubElement(rotor_dummy, "geom")
rotor_mesh_geom.attrib["type"] = "mesh"
rotor_mesh_geom.attrib["mesh"] = mesh_name
rotor_mesh_quat = gymapi.Quat.from_euler_zyx(0.5 * math.pi, 0, 0)
rotor_mesh_geom.attrib["quat"] = "%g %g %g %g" % (rotor_mesh_quat.w, rotor_mesh_quat.x, rotor_mesh_quat.y, rotor_mesh_quat.z)
rotor_mesh_geom.attrib["contype"] = "0"
rotor_mesh_geom.attrib["conaffinity"] = "0"
dummy_roll_joint = ET.SubElement(rotor_dummy, "joint")
dummy_roll_joint.attrib["name"] = "rotor_roll" + str(i)
dummy_roll_joint.attrib["type"] = "hinge"
dummy_roll_joint.attrib["axis"] = "0 0 1"
dummy_roll_joint.attrib["pos"] = "%g %g %g" % (0, 0, 0)
gymutil._indent_xml(root)
ET.ElementTree(root).write("ingenuity.xml")
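    # Note: each force-bearing "rotor_physics_*" body is a plain cylinder on a
    # locked (0-range) hinge, paired with a contact-free "rotor_visual_*" body
    # that renders the prop mesh on a free hinge; reset_idx spins those visual
    # props (-50/+50 rad/s) purely for appearance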
def _create_ground_plane(self):
plane_params = gymapi.PlaneParams()
plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)
self.gym.add_ground(self.sim, plane_params)
def _create_envs(self, num_envs, spacing, num_per_row):
lower = gymapi.Vec3(-spacing, -spacing, 0.0)
upper = gymapi.Vec3(spacing, spacing, spacing)
asset_root = "./"
asset_file = "ingenuity.xml"
asset_options = gymapi.AssetOptions()
asset_options.fix_base_link = False
asset_options.angular_damping = 0.0
asset_options.max_angular_velocity = 4 * math.pi
asset_options.slices_per_cylinder = 40
asset = self.gym.load_asset(self.sim, asset_root, asset_file, asset_options)
asset_options.fix_base_link = True
marker_asset = self.gym.create_sphere(self.sim, 0.1, asset_options)
default_pose = gymapi.Transform()
default_pose.p.z = 1.0
self.envs = []
self.actor_handles = []
for i in range(self.num_envs):
# create env instance
env = self.gym.create_env(self.sim, lower, upper, num_per_row)
actor_handle = self.gym.create_actor(env, asset, default_pose, "ingenuity", i, 1, 1)
dof_props = self.gym.get_actor_dof_properties(env, actor_handle)
dof_props['stiffness'].fill(0)
dof_props['damping'].fill(0)
self.gym.set_actor_dof_properties(env, actor_handle, dof_props)
marker_handle = self.gym.create_actor(env, marker_asset, default_pose, "marker", i, 1, 1)
self.gym.set_rigid_body_color(env, marker_handle, 0, gymapi.MESH_VISUAL_AND_COLLISION, gymapi.Vec3(1, 0, 0))
self.actor_handles.append(actor_handle)
self.envs.append(env)
if self.debug_viz:
# need env offsets for the rotors
self.rotor_env_offsets = torch.zeros((self.num_envs, 2, 3), device=self.device)
for i in range(self.num_envs):
env_origin = self.gym.get_env_origin(self.envs[i])
self.rotor_env_offsets[i, ..., 0] = env_origin.x
self.rotor_env_offsets[i, ..., 1] = env_origin.y
self.rotor_env_offsets[i, ..., 2] = env_origin.z
def set_targets(self, env_ids):
num_sets = len(env_ids)
# set target position randomly with x, y in (-5, 5) and z in (1, 2)
self.target_root_positions[env_ids, 0:2] = (torch.rand(num_sets, 2, device=self.device) * 10) - 5
self.target_root_positions[env_ids, 2] = torch.rand(num_sets, device=self.device) + 1
self.marker_positions[env_ids] = self.target_root_positions[env_ids]
# copter "position" is at the bottom of the legs, so shift the target up so it visually aligns better
self.marker_positions[env_ids, 2] += 0.4
actor_indices = self.all_actor_indices[env_ids, 1].flatten()
return actor_indices
def reset_idx(self, env_ids):
# set rotor speeds
self.dof_velocities[:, 1] = -50
self.dof_velocities[:, 3] = 50
num_resets = len(env_ids)
target_actor_indices = self.set_targets(env_ids)
actor_indices = self.all_actor_indices[env_ids, 0].flatten()
self.root_states[env_ids] = self.initial_root_states[env_ids]
self.root_states[env_ids, 0] += torch_rand_float(-1.5, 1.5, (num_resets, 1), self.device).flatten()
self.root_states[env_ids, 1] += torch_rand_float(-1.5, 1.5, (num_resets, 1), self.device).flatten()
self.root_states[env_ids, 2] += torch_rand_float(-0.2, 1.5, (num_resets, 1), self.device).flatten()
self.gym.set_dof_state_tensor_indexed(self.sim, self.dof_state_tensor, gymtorch.unwrap_tensor(actor_indices), num_resets)
self.reset_buf[env_ids] = 0
self.progress_buf[env_ids] = 0
return torch.unique(torch.cat([target_actor_indices, actor_indices]))
def pre_physics_step(self, _actions):
# resets
set_target_ids = (self.progress_buf % 500 == 0).nonzero(as_tuple=False).squeeze(-1)
target_actor_indices = torch.tensor([], device=self.device, dtype=torch.int32)
if len(set_target_ids) > 0:
target_actor_indices = self.set_targets(set_target_ids)
reset_env_ids = self.reset_buf.nonzero(as_tuple=False).squeeze(-1)
actor_indices = torch.tensor([], device=self.device, dtype=torch.int32)
if len(reset_env_ids) > 0:
actor_indices = self.reset_idx(reset_env_ids)
reset_indices = torch.unique(torch.cat([target_actor_indices, actor_indices]))
if len(reset_indices) > 0:
self.gym.set_actor_root_state_tensor_indexed(self.sim, self.root_tensor, gymtorch.unwrap_tensor(reset_indices), len(reset_indices))
actions = _actions.to(self.device)
thrust_action_speed_scale = 2000
vertical_thrust_prop_0 = torch.clamp(actions[:, 2] * thrust_action_speed_scale, -self.thrust_upper_limit, self.thrust_upper_limit)
vertical_thrust_prop_1 = torch.clamp(actions[:, 5] * thrust_action_speed_scale, -self.thrust_upper_limit, self.thrust_upper_limit)
lateral_fraction_prop_0 = torch.clamp(actions[:, 0:2], -self.thrust_lateral_component, self.thrust_lateral_component)
lateral_fraction_prop_1 = torch.clamp(actions[:, 3:5], -self.thrust_lateral_component, self.thrust_lateral_component)
self.thrusts[:, 0, 2] = self.dt * vertical_thrust_prop_0
self.thrusts[:, 0, 0:2] = self.thrusts[:, 0, 2, None] * lateral_fraction_prop_0
self.thrusts[:, 1, 2] = self.dt * vertical_thrust_prop_1
self.thrusts[:, 1, 0:2] = self.thrusts[:, 1, 2, None] * lateral_fraction_prop_1
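        # lateral force components are a clamped fraction (at most +/-20%, per
        # thrust_lateral_component) of each rotor's vertical thrust -- a cheap
        # stand-in for rotor tilt that avoids articulating the rotors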
self.forces[:, 1] = self.thrusts[:, 0]
self.forces[:, 3] = self.thrusts[:, 1]
# clear actions for reset envs
self.thrusts[reset_env_ids] = 0.0
self.forces[reset_env_ids] = 0.0
# apply actions
self.gym.apply_rigid_body_force_tensors(self.sim, gymtorch.unwrap_tensor(self.forces), None, gymapi.LOCAL_SPACE)
def post_physics_step(self):
self.progress_buf += 1
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_dof_state_tensor(self.sim)
self.compute_observations()
self.compute_reward()
# debug viz
if self.viewer and self.debug_viz:
# compute start and end positions for visualizing thrust lines
self.gym.refresh_rigid_body_state_tensor(self.sim)
            # Ingenuity has 2 rotors, at rigid-body indices 1 and 3 (the
            # "rotor_physics_*" bodies the forces are applied to)
            rotor_indices = torch.LongTensor([1, 3])
            quats = self.rb_quats[:, rotor_indices]
            # thrusts are 3-vectors in each rotor's local frame; rotate them
            # into the world frame for the line directions
            dirs = quat_rotate(quats.reshape(self.num_envs * 2, 4),
                               self.thrusts.reshape(self.num_envs * 2, 3)).view(self.num_envs, 2, 3)
            starts = self.rb_positions[:, rotor_indices] + self.rotor_env_offsets
            ends = starts + 0.1 * dirs
            # submit debug line geometry
            verts = torch.stack([starts, ends], dim=2).cpu().numpy()
            colors = np.zeros((self.num_envs * 2, 3), dtype=np.float32)
            colors[..., 0] = 1.0
            self.gym.clear_lines(self.viewer)
            self.gym.add_lines(self.viewer, None, self.num_envs * 2, verts, colors)
def compute_observations(self):
self.obs_buf[..., 0:3] = (self.target_root_positions - self.root_positions) / 3
self.obs_buf[..., 3:7] = self.root_quats
self.obs_buf[..., 7:10] = self.root_linvels / 2
self.obs_buf[..., 10:13] = self.root_angvels / math.pi
return self.obs_buf
def compute_reward(self):
self.rew_buf[:], self.reset_buf[:] = compute_ingenuity_reward(
self.root_positions,
self.target_root_positions,
self.root_quats,
self.root_linvels,
self.root_angvels,
self.reset_buf, self.progress_buf, self.max_episode_length
)
#####################################################################
###=========================jit functions=========================###
#####################################################################
@torch.jit.script
def compute_ingenuity_reward(root_positions, target_root_positions, root_quats, root_linvels, root_angvels, reset_buf, progress_buf, max_episode_length):
# type: (Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, float) -> Tuple[Tensor, Tensor]
# distance to target
target_dist = torch.sqrt(torch.square(target_root_positions - root_positions).sum(-1))
pos_reward = 1.0 / (1.0 + target_dist * target_dist)
# uprightness
ups = quat_axis(root_quats, 2)
tiltage = torch.abs(1 - ups[..., 2])
up_reward = 5.0 / (1.0 + tiltage * tiltage)
# spinning
spinnage = torch.abs(root_angvels[..., 2])
spinnage_reward = 1.0 / (1.0 + spinnage * spinnage)
# combined reward
# uprightness and spinning only matter when close to the target
reward = pos_reward + pos_reward * (up_reward + spinnage_reward)
# resets due to misbehavior
ones = torch.ones_like(reset_buf)
die = torch.zeros_like(reset_buf)
die = torch.where(target_dist > 8.0, ones, die)
die = torch.where(root_positions[..., 2] < 0.5, ones, die)
# resets due to episode length
reset = torch.where(progress_buf >= max_episode_length - 1, ones, die)
return reward, reset
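# --- Illustrative usage sketch (added by editor; shapes and values are
# assumptions, not part of the original task). Because the function is
# TorchScript-compiled, it can be exercised standalone with dummy tensors:
# n = 4
# rew, reset = compute_ingenuity_reward(
#     torch.zeros(n, 3),                                  # root_positions
#     torch.ones(n, 3),                                   # target_root_positions
#     torch.tensor([0.0, 0.0, 0.0, 1.0]).repeat(n, 1),    # identity quaternions
#     torch.zeros(n, 3),                                  # root_linvels
#     torch.zeros(n, 3),                                  # root_angvels
#     torch.zeros(n, dtype=torch.long),                   # reset_buf
#     torch.zeros(n, dtype=torch.long),                   # progress_buf
#     500.0,                                              # max_episode_length
# )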
| 19,671 | Python | 43.60771 | 217 | 0.614763 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/anymal.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import numpy as np
import os
import torch
from isaacgym import gymtorch
from isaacgym import gymapi
from isaacgymenvs.utils.torch_jit_utils import to_torch, get_axis_params, torch_rand_float, quat_rotate, quat_rotate_inverse
from isaacgymenvs.tasks.base.vec_task import VecTask
from typing import Tuple, Dict
class Anymal(VecTask):
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.cfg = cfg
# normalization
self.lin_vel_scale = self.cfg["env"]["learn"]["linearVelocityScale"]
self.ang_vel_scale = self.cfg["env"]["learn"]["angularVelocityScale"]
self.dof_pos_scale = self.cfg["env"]["learn"]["dofPositionScale"]
self.dof_vel_scale = self.cfg["env"]["learn"]["dofVelocityScale"]
self.action_scale = self.cfg["env"]["control"]["actionScale"]
# reward scales
self.rew_scales = {}
self.rew_scales["lin_vel_xy"] = self.cfg["env"]["learn"]["linearVelocityXYRewardScale"]
self.rew_scales["ang_vel_z"] = self.cfg["env"]["learn"]["angularVelocityZRewardScale"]
self.rew_scales["torque"] = self.cfg["env"]["learn"]["torqueRewardScale"]
# randomization
self.randomization_params = self.cfg["task"]["randomization_params"]
self.randomize = self.cfg["task"]["randomize"]
# command ranges
self.command_x_range = self.cfg["env"]["randomCommandVelocityRanges"]["linear_x"]
self.command_y_range = self.cfg["env"]["randomCommandVelocityRanges"]["linear_y"]
self.command_yaw_range = self.cfg["env"]["randomCommandVelocityRanges"]["yaw"]
# plane params
self.plane_static_friction = self.cfg["env"]["plane"]["staticFriction"]
self.plane_dynamic_friction = self.cfg["env"]["plane"]["dynamicFriction"]
self.plane_restitution = self.cfg["env"]["plane"]["restitution"]
# base init state
pos = self.cfg["env"]["baseInitState"]["pos"]
rot = self.cfg["env"]["baseInitState"]["rot"]
v_lin = self.cfg["env"]["baseInitState"]["vLinear"]
v_ang = self.cfg["env"]["baseInitState"]["vAngular"]
state = pos + rot + v_lin + v_ang
self.base_init_state = state
# default joint positions
self.named_default_joint_angles = self.cfg["env"]["defaultJointAngles"]
self.cfg["env"]["numObservations"] = 48
self.cfg["env"]["numActions"] = 12
super().__init__(config=self.cfg, rl_device=rl_device, sim_device=sim_device, graphics_device_id=graphics_device_id, headless=headless, virtual_screen_capture=virtual_screen_capture, force_render=force_render)
# other
self.dt = self.sim_params.dt
self.max_episode_length_s = self.cfg["env"]["learn"]["episodeLength_s"]
self.max_episode_length = int(self.max_episode_length_s / self.dt + 0.5)
self.Kp = self.cfg["env"]["control"]["stiffness"]
self.Kd = self.cfg["env"]["control"]["damping"]
for key in self.rew_scales.keys():
self.rew_scales[key] *= self.dt
if self.viewer != None:
p = self.cfg["env"]["viewer"]["pos"]
lookat = self.cfg["env"]["viewer"]["lookat"]
cam_pos = gymapi.Vec3(p[0], p[1], p[2])
cam_target = gymapi.Vec3(lookat[0], lookat[1], lookat[2])
self.gym.viewer_camera_look_at(self.viewer, None, cam_pos, cam_target)
# get gym state tensors
actor_root_state = self.gym.acquire_actor_root_state_tensor(self.sim)
dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
net_contact_forces = self.gym.acquire_net_contact_force_tensor(self.sim)
torques = self.gym.acquire_dof_force_tensor(self.sim)
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_net_contact_force_tensor(self.sim)
self.gym.refresh_dof_force_tensor(self.sim)
# create some wrapper tensors for different slices
self.root_states = gymtorch.wrap_tensor(actor_root_state)
self.dof_state = gymtorch.wrap_tensor(dof_state_tensor)
self.dof_pos = self.dof_state.view(self.num_envs, self.num_dof, 2)[..., 0]
self.dof_vel = self.dof_state.view(self.num_envs, self.num_dof, 2)[..., 1]
self.contact_forces = gymtorch.wrap_tensor(net_contact_forces).view(self.num_envs, -1, 3) # shape: num_envs, num_bodies, xyz axis
self.torques = gymtorch.wrap_tensor(torques).view(self.num_envs, self.num_dof)
self.commands = torch.zeros(self.num_envs, 3, dtype=torch.float, device=self.device, requires_grad=False)
self.commands_y = self.commands.view(self.num_envs, 3)[..., 1]
self.commands_x = self.commands.view(self.num_envs, 3)[..., 0]
self.commands_yaw = self.commands.view(self.num_envs, 3)[..., 2]
self.default_dof_pos = torch.zeros_like(self.dof_pos, dtype=torch.float, device=self.device, requires_grad=False)
for i in range(self.cfg["env"]["numActions"]):
name = self.dof_names[i]
angle = self.named_default_joint_angles[name]
self.default_dof_pos[:, i] = angle
# initialize some data used later on
self.extras = {}
self.initial_root_states = self.root_states.clone()
self.initial_root_states[:] = to_torch(self.base_init_state, device=self.device, requires_grad=False)
self.gravity_vec = to_torch(get_axis_params(-1., self.up_axis_idx), device=self.device).repeat((self.num_envs, 1))
self.actions = torch.zeros(self.num_envs, self.num_actions, dtype=torch.float, device=self.device, requires_grad=False)
self.reset_idx(torch.arange(self.num_envs, device=self.device))
def create_sim(self):
self.up_axis_idx = 2 # index of up axis: Y=1, Z=2
self.sim = super().create_sim(self.device_id, self.graphics_device_id, self.physics_engine, self.sim_params)
self._create_ground_plane()
self._create_envs(self.num_envs, self.cfg["env"]['envSpacing'], int(np.sqrt(self.num_envs)))
# If randomizing, apply once immediately on startup before the first sim step
if self.randomize:
self.apply_randomizations(self.randomization_params)
def _create_ground_plane(self):
plane_params = gymapi.PlaneParams()
plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)
plane_params.static_friction = self.plane_static_friction
plane_params.dynamic_friction = self.plane_dynamic_friction
self.gym.add_ground(self.sim, plane_params)
def _create_envs(self, num_envs, spacing, num_per_row):
asset_root = os.path.join(os.path.dirname(os.path.abspath(__file__)), '../../assets')
asset_file = "urdf/anymal_c/urdf/anymal.urdf"
asset_options = gymapi.AssetOptions()
asset_options.default_dof_drive_mode = gymapi.DOF_MODE_NONE
asset_options.collapse_fixed_joints = True
asset_options.replace_cylinder_with_capsule = True
asset_options.flip_visual_attachments = True
asset_options.fix_base_link = self.cfg["env"]["urdfAsset"]["fixBaseLink"]
asset_options.density = 0.001
asset_options.angular_damping = 0.0
asset_options.linear_damping = 0.0
asset_options.armature = 0.0
asset_options.thickness = 0.01
asset_options.disable_gravity = False
anymal_asset = self.gym.load_asset(self.sim, asset_root, asset_file, asset_options)
self.num_dof = self.gym.get_asset_dof_count(anymal_asset)
self.num_bodies = self.gym.get_asset_rigid_body_count(anymal_asset)
start_pose = gymapi.Transform()
start_pose.p = gymapi.Vec3(*self.base_init_state[:3])
body_names = self.gym.get_asset_rigid_body_names(anymal_asset)
self.dof_names = self.gym.get_asset_dof_names(anymal_asset)
extremity_name = "SHANK" if asset_options.collapse_fixed_joints else "FOOT"
feet_names = [s for s in body_names if extremity_name in s]
self.feet_indices = torch.zeros(len(feet_names), dtype=torch.long, device=self.device, requires_grad=False)
knee_names = [s for s in body_names if "THIGH" in s]
self.knee_indices = torch.zeros(len(knee_names), dtype=torch.long, device=self.device, requires_grad=False)
self.base_index = 0
dof_props = self.gym.get_asset_dof_properties(anymal_asset)
for i in range(self.num_dof):
dof_props['driveMode'][i] = gymapi.DOF_MODE_POS
dof_props['stiffness'][i] = self.cfg["env"]["control"]["stiffness"] #self.Kp
dof_props['damping'][i] = self.cfg["env"]["control"]["damping"] #self.Kd
env_lower = gymapi.Vec3(-spacing, -spacing, 0.0)
env_upper = gymapi.Vec3(spacing, spacing, spacing)
self.anymal_handles = []
self.envs = []
for i in range(self.num_envs):
# create env instance
env_ptr = self.gym.create_env(self.sim, env_lower, env_upper, num_per_row)
anymal_handle = self.gym.create_actor(env_ptr, anymal_asset, start_pose, "anymal", i, 1, 0)
self.gym.set_actor_dof_properties(env_ptr, anymal_handle, dof_props)
self.gym.enable_actor_dof_force_sensors(env_ptr, anymal_handle)
self.envs.append(env_ptr)
self.anymal_handles.append(anymal_handle)
for i in range(len(feet_names)):
self.feet_indices[i] = self.gym.find_actor_rigid_body_handle(self.envs[0], self.anymal_handles[0], feet_names[i])
for i in range(len(knee_names)):
self.knee_indices[i] = self.gym.find_actor_rigid_body_handle(self.envs[0], self.anymal_handles[0], knee_names[i])
self.base_index = self.gym.find_actor_rigid_body_handle(self.envs[0], self.anymal_handles[0], "base")
def pre_physics_step(self, actions):
self.actions = actions.clone().to(self.device)
targets = self.action_scale * self.actions + self.default_dof_pos
self.gym.set_dof_position_target_tensor(self.sim, gymtorch.unwrap_tensor(targets))
def post_physics_step(self):
self.progress_buf += 1
env_ids = self.reset_buf.nonzero(as_tuple=False).squeeze(-1)
if len(env_ids) > 0:
self.reset_idx(env_ids)
self.compute_observations()
self.compute_reward(self.actions)
def compute_reward(self, actions):
self.rew_buf[:], self.reset_buf[:] = compute_anymal_reward(
# tensors
self.root_states,
self.commands,
self.torques,
self.contact_forces,
self.knee_indices,
self.progress_buf,
# Dict
self.rew_scales,
# other
self.base_index,
self.max_episode_length,
)
def compute_observations(self):
self.gym.refresh_dof_state_tensor(self.sim) # done in step
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_net_contact_force_tensor(self.sim)
self.gym.refresh_dof_force_tensor(self.sim)
self.obs_buf[:] = compute_anymal_observations( # tensors
self.root_states,
self.commands,
self.dof_pos,
self.default_dof_pos,
self.dof_vel,
self.gravity_vec,
self.actions,
# scales
self.lin_vel_scale,
self.ang_vel_scale,
self.dof_pos_scale,
self.dof_vel_scale
)
def reset_idx(self, env_ids):
# Randomization can happen only at reset time, since it can reset actor positions on GPU
if self.randomize:
self.apply_randomizations(self.randomization_params)
positions_offset = torch_rand_float(0.5, 1.5, (len(env_ids), self.num_dof), device=self.device)
velocities = torch_rand_float(-0.1, 0.1, (len(env_ids), self.num_dof), device=self.device)
self.dof_pos[env_ids] = self.default_dof_pos[env_ids] * positions_offset
self.dof_vel[env_ids] = velocities
env_ids_int32 = env_ids.to(dtype=torch.int32)
self.gym.set_actor_root_state_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.initial_root_states),
gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))
self.gym.set_dof_state_tensor_indexed(self.sim,
gymtorch.unwrap_tensor(self.dof_state),
gymtorch.unwrap_tensor(env_ids_int32), len(env_ids_int32))
self.commands_x[env_ids] = torch_rand_float(self.command_x_range[0], self.command_x_range[1], (len(env_ids), 1), device=self.device).squeeze()
self.commands_y[env_ids] = torch_rand_float(self.command_y_range[0], self.command_y_range[1], (len(env_ids), 1), device=self.device).squeeze()
self.commands_yaw[env_ids] = torch_rand_float(self.command_yaw_range[0], self.command_yaw_range[1], (len(env_ids), 1), device=self.device).squeeze()
self.progress_buf[env_ids] = 0
self.reset_buf[env_ids] = 1
#####################################################################
###=========================jit functions=========================###
#####################################################################
@torch.jit.script
def compute_anymal_reward(
# tensors
root_states,
commands,
torques,
contact_forces,
knee_indices,
episode_lengths,
# Dict
rew_scales,
# other
base_index,
max_episode_length
):
# returns (reward, reset)
# type: (Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Dict[str, float], int, int) -> Tuple[Tensor, Tensor]
# prepare quantities (TODO: return from obs ?)
base_quat = root_states[:, 3:7]
base_lin_vel = quat_rotate_inverse(base_quat, root_states[:, 7:10])
base_ang_vel = quat_rotate_inverse(base_quat, root_states[:, 10:13])
# velocity tracking reward
lin_vel_error = torch.sum(torch.square(commands[:, :2] - base_lin_vel[:, :2]), dim=1)
ang_vel_error = torch.square(commands[:, 2] - base_ang_vel[:, 2])
rew_lin_vel_xy = torch.exp(-lin_vel_error/0.25) * rew_scales["lin_vel_xy"]
rew_ang_vel_z = torch.exp(-ang_vel_error/0.25) * rew_scales["ang_vel_z"]
# torque penalty
rew_torque = torch.sum(torch.square(torques), dim=1) * rew_scales["torque"]
total_reward = rew_lin_vel_xy + rew_ang_vel_z + rew_torque
total_reward = torch.clip(total_reward, 0., None)
# reset agents
reset = torch.norm(contact_forces[:, base_index, :], dim=1) > 1.
reset = reset | torch.any(torch.norm(contact_forces[:, knee_indices, :], dim=2) > 1., dim=1)
time_out = episode_lengths >= max_episode_length - 1 # no terminal reward for time-outs
reset = reset | time_out
return total_reward.detach(), reset
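# Editor's note (illustrative): both tracking terms use a squared-exponential
# kernel, exp(-err / 0.25); e.g. a linear-velocity error of 0.5 m/s gives
# exp(-0.25 / 0.25) ~= 0.37 of the maximum tracking reward before scaling.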
@torch.jit.script
def compute_anymal_observations(root_states,
commands,
dof_pos,
default_dof_pos,
dof_vel,
gravity_vec,
actions,
lin_vel_scale,
ang_vel_scale,
dof_pos_scale,
dof_vel_scale
):
# type: (Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, float, float, float, float) -> Tensor
base_quat = root_states[:, 3:7]
base_lin_vel = quat_rotate_inverse(base_quat, root_states[:, 7:10]) * lin_vel_scale
base_ang_vel = quat_rotate_inverse(base_quat, root_states[:, 10:13]) * ang_vel_scale
projected_gravity = quat_rotate(base_quat, gravity_vec)
dof_pos_scaled = (dof_pos - default_dof_pos) * dof_pos_scale
commands_scaled = commands*torch.tensor([lin_vel_scale, lin_vel_scale, ang_vel_scale], requires_grad=False, device=commands.device)
obs = torch.cat((base_lin_vel,
base_ang_vel,
projected_gravity,
commands_scaled,
dof_pos_scaled,
dof_vel*dof_vel_scale,
actions
), dim=-1)
return obs
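# Layout check (editor's note): base_lin_vel(3) + base_ang_vel(3) +
# projected_gravity(3) + commands_scaled(3) + dof_pos_scaled(12) +
# dof_vel(12) + actions(12) = 48, matching numObservations = 48 set in
# Anymal.__init__.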
| 18,546 | Python | 46.925064 | 217 | 0.602071 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/base/vec_task.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
import time
from datetime import datetime
from os.path import join
from typing import Dict, Any, Tuple, List, Set
import gym
from gym import spaces
from isaacgym import gymtorch, gymapi
from isaacgymenvs.utils.torch_jit_utils import to_torch
from isaacgymenvs.utils.dr_utils import get_property_setter_map, get_property_getter_map, \
get_default_setter_args, apply_random_samples, check_buckets, generate_random_samples
import torch
import numpy as np
import operator, random
from copy import deepcopy
from isaacgymenvs.utils.utils import nested_dict_get_attr, nested_dict_set_attr
from collections import deque
import sys
import abc
from abc import ABC
EXISTING_SIM = None
SCREEN_CAPTURE_RESOLUTION = (1027, 768)
def _create_sim_once(gym, *args, **kwargs):
global EXISTING_SIM
if EXISTING_SIM is not None:
return EXISTING_SIM
else:
EXISTING_SIM = gym.create_sim(*args, **kwargs)
return EXISTING_SIM
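# Editor's note: the module-level EXISTING_SIM handle makes sim creation a
# per-process singleton, so constructing several VecTask instances in one
# process reuses the same underlying sim instead of creating a new one.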
class Env(ABC):
def __init__(self, config: Dict[str, Any], rl_device: str, sim_device: str, graphics_device_id: int, headless: bool):
"""Initialise the env.
Args:
config: the configuration dictionary.
sim_device: the device to simulate physics on, e.g. 'cuda:0' or 'cpu'.
graphics_device_id: the device ID to render with.
headless: Set to True to disable viewer rendering.
"""
split_device = sim_device.split(":")
self.device_type = split_device[0]
self.device_id = int(split_device[1]) if len(split_device) > 1 else 0
self.device = "cpu"
if config["sim"]["use_gpu_pipeline"]:
if self.device_type.lower() == "cuda" or self.device_type.lower() == "gpu":
self.device = "cuda" + ":" + str(self.device_id)
else:
print("GPU Pipeline can only be used with GPU simulation. Forcing CPU Pipeline.")
config["sim"]["use_gpu_pipeline"] = False
self.rl_device = rl_device
# Rendering
# if training in a headless mode
self.headless = headless
enable_camera_sensors = config["env"].get("enableCameraSensors", False)
self.graphics_device_id = graphics_device_id
if enable_camera_sensors == False and self.headless == True:
self.graphics_device_id = -1
self.num_environments = config["env"]["numEnvs"]
self.num_agents = config["env"].get("numAgents", 1) # used for multi-agent environments
self.num_observations = config["env"].get("numObservations", 0)
self.num_states = config["env"].get("numStates", 0)
self.obs_space = spaces.Box(np.ones(self.num_obs) * -np.Inf, np.ones(self.num_obs) * np.Inf)
self.state_space = spaces.Box(np.ones(self.num_states) * -np.Inf, np.ones(self.num_states) * np.Inf)
self.num_actions = config["env"]["numActions"]
self.control_freq_inv = config["env"].get("controlFrequencyInv", 1)
self.act_space = spaces.Box(np.ones(self.num_actions) * -1., np.ones(self.num_actions) * 1.)
self.clip_obs = config["env"].get("clipObservations", np.Inf)
self.clip_actions = config["env"].get("clipActions", np.Inf)
# Total number of training frames since the beginning of the experiment.
# We get this information from the learning algorithm rather than tracking ourselves.
# The learning algorithm tracks the total number of frames since the beginning of training and accounts for
# experiments restart/resumes. This means this number can be > 0 right after initialization if we resume the
# experiment.
self.total_train_env_frames: int = 0
# number of control steps
self.control_steps: int = 0
self.render_fps: int = config["env"].get("renderFPS", -1)
self.last_frame_time: float = 0.0
self.record_frames: bool = False
self.record_frames_dir = join("recorded_frames", datetime.now().strftime("%Y-%m-%d_%H-%M-%S"))
@abc.abstractmethod
def allocate_buffers(self):
"""Create torch buffers for observations, rewards, actions dones and any additional data."""
@abc.abstractmethod
def step(self, actions: torch.Tensor) -> Tuple[Dict[str, torch.Tensor], torch.Tensor, torch.Tensor, Dict[str, Any]]:
"""Step the physics of the environment.
Args:
actions: actions to apply
Returns:
Observations, rewards, resets, info
Observations are dict of observations (currently only one member called 'obs')
"""
@abc.abstractmethod
def reset(self)-> Dict[str, torch.Tensor]:
"""Reset the environment.
Returns:
Observation dictionary
"""
@abc.abstractmethod
def reset_idx(self, env_ids: torch.Tensor):
"""Reset environments having the provided indices.
Args:
env_ids: environments to reset
"""
@property
def observation_space(self) -> gym.Space:
"""Get the environment's observation space."""
return self.obs_space
@property
def action_space(self) -> gym.Space:
"""Get the environment's action space."""
return self.act_space
@property
def num_envs(self) -> int:
"""Get the number of environments."""
return self.num_environments
@property
def num_acts(self) -> int:
"""Get the number of actions in the environment."""
return self.num_actions
@property
def num_obs(self) -> int:
"""Get the number of observations in the environment."""
return self.num_observations
def set_train_info(self, env_frames, *args, **kwargs):
"""
Send the information in the direction algo->environment.
Most common use case: tell the environment how far along we are in the training process. This is useful
for implementing curriculums and things such as that.
"""
self.total_train_env_frames = env_frames
# print(f'env_frames updated to {self.total_train_env_frames}')
def get_env_state(self):
"""
Return serializable environment state to be saved to checkpoint.
Can be used for stateful training sessions, i.e. with adaptive curriculums.
"""
return None
def set_env_state(self, env_state):
pass
class VecTask(Env):
metadata = {"render.modes": ["human", "rgb_array"], "video.frames_per_second": 24}
def __init__(self, config, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture: bool = False, force_render: bool = False):
"""Initialise the `VecTask`.
Args:
config: config dictionary for the environment.
sim_device: the device to simulate physics on, e.g. 'cuda:0' or 'cpu'.
graphics_device_id: the device ID to render with.
headless: Set to True to disable viewer rendering.
virtual_screen_capture: Set to True to allow users to obtain the captured screen as an RGB array via `env.render(mode='rgb_array')`.
force_render: Set to True to always force rendering in the steps (if the `control_freq_inv` is greater than 1 we suggest setting this arg to True)
"""
# super().__init__(config, rl_device, sim_device, graphics_device_id, headless, use_dict_obs)
super().__init__(config, rl_device, sim_device, graphics_device_id, headless)
self.virtual_screen_capture = virtual_screen_capture
self.virtual_display = None
if self.virtual_screen_capture:
from pyvirtualdisplay.smartdisplay import SmartDisplay
self.virtual_display = SmartDisplay(size=SCREEN_CAPTURE_RESOLUTION)
self.virtual_display.start()
self.force_render = force_render
self.sim_params = self.__parse_sim_params(self.cfg["physics_engine"], self.cfg["sim"])
if self.cfg["physics_engine"] == "physx":
self.physics_engine = gymapi.SIM_PHYSX
elif self.cfg["physics_engine"] == "flex":
self.physics_engine = gymapi.SIM_FLEX
else:
msg = f"Invalid physics engine backend: {self.cfg['physics_engine']}"
raise ValueError(msg)
self.dt: float = self.sim_params.dt
# optimization flags for pytorch JIT
torch._C._jit_set_profiling_mode(False)
torch._C._jit_set_profiling_executor(False)
self.gym = gymapi.acquire_gym()
self.first_randomization = True
self.original_props = {}
self.dr_randomizations = {}
self.actor_params_generator = None
self.extern_actor_params = {}
self.last_step = -1
self.last_rand_step = -1
for env_id in range(self.num_envs):
self.extern_actor_params[env_id] = None
# create envs, sim and viewer
self.sim_initialized = False
self.create_sim()
self.gym.prepare_sim(self.sim)
self.sim_initialized = True
self.set_viewer()
self.allocate_buffers()
self.obs_dict = {}
def set_viewer(self):
"""Create the viewer."""
# todo: read from config
self.enable_viewer_sync = True
self.viewer = None
# if running with a viewer, set up keyboard shortcuts and camera
if self.headless == False:
# subscribe to keyboard shortcuts
self.viewer = self.gym.create_viewer(
self.sim, gymapi.CameraProperties())
self.gym.subscribe_viewer_keyboard_event(
self.viewer, gymapi.KEY_ESCAPE, "QUIT")
self.gym.subscribe_viewer_keyboard_event(
self.viewer, gymapi.KEY_V, "toggle_viewer_sync")
self.gym.subscribe_viewer_keyboard_event(
self.viewer, gymapi.KEY_R, "record_frames")
# set the camera position based on up axis
sim_params = self.gym.get_sim_params(self.sim)
if sim_params.up_axis == gymapi.UP_AXIS_Z:
cam_pos = gymapi.Vec3(20.0, 25.0, 3.0)
cam_target = gymapi.Vec3(10.0, 15.0, 0.0)
else:
cam_pos = gymapi.Vec3(20.0, 3.0, 25.0)
cam_target = gymapi.Vec3(10.0, 0.0, 15.0)
self.gym.viewer_camera_look_at(
self.viewer, None, cam_pos, cam_target)
def allocate_buffers(self):
"""Allocate the observation, states, etc. buffers.
These buffers are used to set observations and states in the environment classes which
inherit from this one, and are read in `step` and other related functions.
"""
# allocate buffers
self.obs_buf = torch.zeros(
(self.num_envs, self.num_obs), device=self.device, dtype=torch.float)
self.states_buf = torch.zeros(
(self.num_envs, self.num_states), device=self.device, dtype=torch.float)
self.rew_buf = torch.zeros(
self.num_envs, device=self.device, dtype=torch.float)
self.reset_buf = torch.ones(
self.num_envs, device=self.device, dtype=torch.long)
self.timeout_buf = torch.zeros(
self.num_envs, device=self.device, dtype=torch.long)
self.progress_buf = torch.zeros(
self.num_envs, device=self.device, dtype=torch.long)
self.randomize_buf = torch.zeros(
self.num_envs, device=self.device, dtype=torch.long)
self.extras = {}
def create_sim(self, compute_device: int, graphics_device: int, physics_engine, sim_params: gymapi.SimParams):
"""Create an Isaac Gym sim object.
Args:
compute_device: ID of compute device to use.
graphics_device: ID of graphics device to use.
physics_engine: physics engine to use (`gymapi.SIM_PHYSX` or `gymapi.SIM_FLEX`)
sim_params: sim params to use.
Returns:
the Isaac Gym sim object.
"""
sim = _create_sim_once(self.gym, compute_device, graphics_device, physics_engine, sim_params)
if sim is None:
print("*** Failed to create sim")
quit()
return sim
def get_state(self):
"""Returns the state buffer of the environment (the privileged observations for asymmetric training)."""
return torch.clamp(self.states_buf, -self.clip_obs, self.clip_obs).to(self.rl_device)
@abc.abstractmethod
def pre_physics_step(self, actions: torch.Tensor):
"""Apply the actions to the environment (eg by setting torques, position targets).
Args:
actions: the actions to apply
"""
@abc.abstractmethod
def post_physics_step(self):
"""Compute reward and observations, reset any environments that require it."""
def step(self, actions: torch.Tensor) -> Tuple[Dict[str, torch.Tensor], torch.Tensor, torch.Tensor, Dict[str, Any]]:
"""Step the physics of the environment.
Args:
actions: actions to apply
Returns:
Observations, rewards, resets, info
Observations are dict of observations (currently only one member called 'obs')
"""
# randomize actions
if self.dr_randomizations.get('actions', None):
actions = self.dr_randomizations['actions']['noise_lambda'](actions)
action_tensor = torch.clamp(actions, -self.clip_actions, self.clip_actions)
# apply actions
self.pre_physics_step(action_tensor)
# step physics and render each frame
for i in range(self.control_freq_inv):
if self.force_render:
self.render()
self.gym.simulate(self.sim)
# to fix!
if self.device == 'cpu':
self.gym.fetch_results(self.sim, True)
# compute observations, rewards, resets, ...
self.post_physics_step()
self.control_steps += 1
# fill time out buffer: set to 1 if we reached the max episode length AND the reset buffer is 1. Timeout == 1 makes sense only if the reset buffer is 1.
self.timeout_buf = (self.progress_buf >= self.max_episode_length - 1) & (self.reset_buf != 0)
# randomize observations
if self.dr_randomizations.get('observations', None):
self.obs_buf = self.dr_randomizations['observations']['noise_lambda'](self.obs_buf)
self.extras["time_outs"] = self.timeout_buf.to(self.rl_device)
self.obs_dict["obs"] = torch.clamp(self.obs_buf, -self.clip_obs, self.clip_obs).to(self.rl_device)
# asymmetric actor-critic
if self.num_states > 0:
self.obs_dict["states"] = self.get_state()
return self.obs_dict, self.rew_buf.to(self.rl_device), self.reset_buf.to(self.rl_device), self.extras
def zero_actions(self) -> torch.Tensor:
"""Returns a buffer with zero actions.
Returns:
A buffer of zero torch actions
"""
actions = torch.zeros([self.num_envs, self.num_actions], dtype=torch.float32, device=self.rl_device)
return actions
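# --- Illustrative usage sketch (added by editor; `MyTask` and its config are
# assumptions): a minimal stepping loop against a concrete VecTask subclass.
# env = MyTask(cfg, rl_device="cuda:0", sim_device="cuda:0",
#              graphics_device_id=0, headless=True,
#              virtual_screen_capture=False, force_render=False)
# obs_dict = env.reset()
# for _ in range(100):
#     obs_dict, rew, done, info = env.step(env.zero_actions())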
def reset_idx(self, env_idx):
"""Reset environment with indces in env_idx.
Should be implemented in an environment class inherited from VecTask.
"""
pass
def reset(self):
"""Is called only once when environment starts to provide the first observations.
Doesn't calculate observations. Actual reset and observation calculation need to be implemented by user.
Returns:
Observation dictionary
"""
self.obs_dict["obs"] = torch.clamp(self.obs_buf, -self.clip_obs, self.clip_obs).to(self.rl_device)
# asymmetric actor-critic
if self.num_states > 0:
self.obs_dict["states"] = self.get_state()
return self.obs_dict
def reset_done(self):
"""Reset the environment.
Returns:
Observation dictionary, indices of environments being reset
"""
done_env_ids = self.reset_buf.nonzero(as_tuple=False).flatten()
if len(done_env_ids) > 0:
self.reset_idx(done_env_ids)
self.obs_dict["obs"] = torch.clamp(self.obs_buf, -self.clip_obs, self.clip_obs).to(self.rl_device)
# asymmetric actor-critic
if self.num_states > 0:
self.obs_dict["states"] = self.get_state()
return self.obs_dict, done_env_ids
def render(self, mode="rgb_array"):
"""Draw the frame to the viewer, and check for keyboard events."""
if self.viewer:
# check for window closed
if self.gym.query_viewer_has_closed(self.viewer):
sys.exit()
# check for keyboard events
for evt in self.gym.query_viewer_action_events(self.viewer):
if evt.action == "QUIT" and evt.value > 0:
sys.exit()
elif evt.action == "toggle_viewer_sync" and evt.value > 0:
self.enable_viewer_sync = not self.enable_viewer_sync
elif evt.action == "record_frames" and evt.value > 0:
self.record_frames = not self.record_frames
# fetch results
if self.device != 'cpu':
self.gym.fetch_results(self.sim, True)
# step graphics
if self.enable_viewer_sync:
self.gym.step_graphics(self.sim)
self.gym.draw_viewer(self.viewer, self.sim, True)
# Wait for dt to elapse in real time.
# This synchronizes the physics simulation with the rendering rate.
self.gym.sync_frame_time(self.sim)
# it seems like in some cases sync_frame_time still results in higher-than-realtime framerate
# this code will slow down the rendering to real time
now = time.time()
delta = now - self.last_frame_time
if self.render_fps < 0:
# render at control frequency
render_dt = self.dt * self.control_freq_inv # render every control step
else:
render_dt = 1.0 / self.render_fps
if delta < render_dt:
time.sleep(render_dt - delta)
self.last_frame_time = time.time()
else:
self.gym.poll_viewer_events(self.viewer)
if self.record_frames:
if not os.path.isdir(self.record_frames_dir):
os.makedirs(self.record_frames_dir, exist_ok=True)
self.gym.write_viewer_image_to_file(self.viewer, join(self.record_frames_dir, f"frame_{self.control_steps}.png"))
if self.virtual_display and mode == "rgb_array":
img = self.virtual_display.grab()
return np.array(img)
def __parse_sim_params(self, physics_engine: str, config_sim: Dict[str, Any]) -> gymapi.SimParams:
"""Parse the config dictionary for physics stepping settings.
Args:
physics_engine: which physics engine to use. "physx" or "flex"
config_sim: dict of sim configuration parameters
Returns:
IsaacGym SimParams object with updated settings.
"""
sim_params = gymapi.SimParams()
# check correct up-axis
if config_sim["up_axis"] not in ["z", "y"]:
msg = f"Invalid physics up-axis: {config_sim['up_axis']}"
print(msg)
raise ValueError(msg)
# assign general sim parameters
sim_params.dt = config_sim["dt"]
sim_params.num_client_threads = config_sim.get("num_client_threads", 0)
sim_params.use_gpu_pipeline = config_sim["use_gpu_pipeline"]
sim_params.substeps = config_sim.get("substeps", 2)
# assign up-axis
if config_sim["up_axis"] == "z":
sim_params.up_axis = gymapi.UP_AXIS_Z
else:
sim_params.up_axis = gymapi.UP_AXIS_Y
# assign gravity
sim_params.gravity = gymapi.Vec3(*config_sim["gravity"])
# configure physics parameters
if physics_engine == "physx":
# set the parameters
if "physx" in config_sim:
for opt in config_sim["physx"].keys():
if opt == "contact_collection":
setattr(sim_params.physx, opt, gymapi.ContactCollection(config_sim["physx"][opt]))
else:
setattr(sim_params.physx, opt, config_sim["physx"][opt])
else:
# set the parameters
if "flex" in config_sim:
for opt in config_sim["flex"].keys():
setattr(sim_params.flex, opt, config_sim["flex"][opt])
# return the configured params
return sim_params
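# Example (editor's; values are assumptions): a minimal config_sim dict this
# parser accepts:
# config_sim = {
#     "up_axis": "z",
#     "dt": 1.0 / 60.0,
#     "use_gpu_pipeline": True,
#     "gravity": [0.0, 0.0, -9.81],
#     "physx": {"num_threads": 4, "solver_type": 1},
# }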
"""
Domain Randomization methods
"""
def get_actor_params_info(self, dr_params: Dict[str, Any], env):
"""Generate a flat array of actor params, their names and ranges.
Returns:
A tuple (params, names, lows, highs).
"""
if "actor_params" not in dr_params:
return None
params = []
names = []
lows = []
highs = []
param_getters_map = get_property_getter_map(self.gym)
for actor, actor_properties in dr_params["actor_params"].items():
handle = self.gym.find_actor_handle(env, actor)
for prop_name, prop_attrs in actor_properties.items():
if prop_name == 'color':
continue # this is set randomly
props = param_getters_map[prop_name](env, handle)
if not isinstance(props, list):
props = [props]
for prop_idx, prop in enumerate(props):
for attr, attr_randomization_params in prop_attrs.items():
name = prop_name+'_' + str(prop_idx) + '_'+attr
lo_hi = attr_randomization_params['range']
distr = attr_randomization_params['distribution']
if 'uniform' not in distr:
lo_hi = (-1.0*float('Inf'), float('Inf'))
if isinstance(prop, np.ndarray):
for attr_idx in range(prop[attr].shape[0]):
params.append(prop[attr][attr_idx])
names.append(name+'_'+str(attr_idx))
lows.append(lo_hi[0])
highs.append(lo_hi[1])
else:
params.append(getattr(prop, attr))
names.append(name)
lows.append(lo_hi[0])
highs.append(lo_hi[1])
return params, names, lows, highs
def apply_randomizations(self, dr_params):
"""Apply domain randomizations to the environment.
Note that currently randomizations can be applied only on resets, due to PhysX limitations
Args:
dr_params: parameters for domain randomization to use.
"""
# If we don't have a randomization frequency, randomize every step
rand_freq = dr_params.get("frequency", 1)
# First, determine what to randomize:
# - non-environment parameters, when more than `frequency` steps have passed since the last non-environment randomization
# - physical environments in the reset buffer, which have exceeded the randomization frequency threshold
# - on the first call, randomize everything
self.last_step = self.gym.get_frame_count(self.sim)
if self.first_randomization:
do_nonenv_randomize = True
env_ids = list(range(self.num_envs))
else:
do_nonenv_randomize = (self.last_step - self.last_rand_step) >= rand_freq
rand_envs = torch.where(self.randomize_buf >= rand_freq, torch.ones_like(self.randomize_buf), torch.zeros_like(self.randomize_buf))
rand_envs = torch.logical_and(rand_envs, self.reset_buf)
env_ids = torch.nonzero(rand_envs, as_tuple=False).squeeze(-1).tolist()
self.randomize_buf[rand_envs] = 0
if do_nonenv_randomize:
self.last_rand_step = self.last_step
param_setters_map = get_property_setter_map(self.gym)
param_setter_defaults_map = get_default_setter_args(self.gym)
param_getters_map = get_property_getter_map(self.gym)
# On first iteration, check the number of buckets
if self.first_randomization:
check_buckets(self.gym, self.envs, dr_params)
for nonphysical_param in ["observations", "actions"]:
if nonphysical_param in dr_params and do_nonenv_randomize:
dist = dr_params[nonphysical_param]["distribution"]
op_type = dr_params[nonphysical_param]["operation"]
sched_type = dr_params[nonphysical_param]["schedule"] if "schedule" in dr_params[nonphysical_param] else None
sched_step = dr_params[nonphysical_param]["schedule_steps"] if "schedule" in dr_params[nonphysical_param] else None
op = operator.add if op_type == 'additive' else operator.mul
if sched_type == 'linear':
sched_scaling = 1.0 / sched_step * \
min(self.last_step, sched_step)
elif sched_type == 'constant':
sched_scaling = 0 if self.last_step < sched_step else 1
else:
sched_scaling = 1
if dist == 'gaussian':
mu, var = dr_params[nonphysical_param]["range"]
mu_corr, var_corr = dr_params[nonphysical_param].get("range_correlated", [0., 0.])
if op_type == 'additive':
mu *= sched_scaling
var *= sched_scaling
mu_corr *= sched_scaling
var_corr *= sched_scaling
elif op_type == 'scaling':
var = var * sched_scaling # scale up var over time
mu = mu * sched_scaling + 1.0 * \
(1.0 - sched_scaling) # linearly interpolate
var_corr = var_corr * sched_scaling # scale up var over time
mu_corr = mu_corr * sched_scaling + 1.0 * \
(1.0 - sched_scaling) # linearly interpolate
def noise_lambda(tensor, param_name=nonphysical_param):
params = self.dr_randomizations[param_name]
corr = params.get('corr', None)
if corr is None:
corr = torch.randn_like(tensor)
params['corr'] = corr
corr = corr * params['var_corr'] + params['mu_corr']
return op(
tensor, corr + torch.randn_like(tensor) * params['var'] + params['mu'])
self.dr_randomizations[nonphysical_param] = {'mu': mu, 'var': var, 'mu_corr': mu_corr, 'var_corr': var_corr, 'noise_lambda': noise_lambda}
elif dist == 'uniform':
lo, hi = dr_params[nonphysical_param]["range"]
lo_corr, hi_corr = dr_params[nonphysical_param].get("range_correlated", [0., 0.])
if op_type == 'additive':
lo *= sched_scaling
hi *= sched_scaling
lo_corr *= sched_scaling
hi_corr *= sched_scaling
elif op_type == 'scaling':
lo = lo * sched_scaling + 1.0 * (1.0 - sched_scaling)
hi = hi * sched_scaling + 1.0 * (1.0 - sched_scaling)
lo_corr = lo_corr * sched_scaling + 1.0 * (1.0 - sched_scaling)
hi_corr = hi_corr * sched_scaling + 1.0 * (1.0 - sched_scaling)
def noise_lambda(tensor, param_name=nonphysical_param):
params = self.dr_randomizations[param_name]
corr = params.get('corr', None)
if corr is None:
corr = torch.randn_like(tensor)
params['corr'] = corr
corr = corr * (params['hi_corr'] - params['lo_corr']) + params['lo_corr']
return op(tensor, corr + torch.rand_like(tensor) * (params['hi'] - params['lo']) + params['lo'])
self.dr_randomizations[nonphysical_param] = {'lo': lo, 'hi': hi, 'lo_corr': lo_corr, 'hi_corr': hi_corr, 'noise_lambda': noise_lambda}
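# Example (editor's; values are assumptions in the style of the task configs):
# a dr_params fragment consumed by the gaussian/uniform branches above:
# "observations": {"range": [0.0, 0.002], "operation": "additive",
#                  "distribution": "gaussian"},
# "actions": {"range": [0.0, 0.02], "operation": "additive",
#             "distribution": "gaussian"},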
if "sim_params" in dr_params and do_nonenv_randomize:
prop_attrs = dr_params["sim_params"]
prop = self.gym.get_sim_params(self.sim)
if self.first_randomization:
self.original_props["sim_params"] = {
attr: getattr(prop, attr) for attr in dir(prop)}
for attr, attr_randomization_params in prop_attrs.items():
apply_random_samples(
prop, self.original_props["sim_params"], attr, attr_randomization_params, self.last_step)
self.gym.set_sim_params(self.sim, prop)
# If self.actor_params_generator is initialized: use it to
# sample actor simulation params. This gives users the
# freedom to generate samples from arbitrary distributions,
# e.g. use full-covariance distributions instead of the DR's
# default of treating each simulation parameter independently.
extern_offsets = {}
if self.actor_params_generator is not None:
for env_id in env_ids:
self.extern_actor_params[env_id] = \
self.actor_params_generator.sample()
extern_offsets[env_id] = 0
# randomise all attributes of each actor (hand, cube, etc.)
# actor_properties are (stiffness, damping, etc.)
# Loop over actors, then loop over envs, then loop over their props
# and lastly loop over the ranges of the params
for actor, actor_properties in dr_params["actor_params"].items():
# Loop over all envs as this part is not tensorised yet
for env_id in env_ids:
env = self.envs[env_id]
handle = self.gym.find_actor_handle(env, actor)
extern_sample = self.extern_actor_params[env_id]
# randomise dof_props, rigid_body, rigid_shape properties
# all obtained from the YAML file
# EXAMPLE: prop name: dof_properties, rigid_body_properties, rigid_shape properties
# prop_attrs:
# {'damping': {'range': [0.3, 3.0], 'operation': 'scaling', 'distribution': 'loguniform'}}
# {'stiffness': {'range': [0.75, 1.5], 'operation': 'scaling', 'distribution': 'loguniform'}}
for prop_name, prop_attrs in actor_properties.items():
if prop_name == 'color':
num_bodies = self.gym.get_actor_rigid_body_count(
env, handle)
for n in range(num_bodies):
self.gym.set_rigid_body_color(env, handle, n, gymapi.MESH_VISUAL,
gymapi.Vec3(random.uniform(0, 1), random.uniform(0, 1), random.uniform(0, 1)))
continue
if prop_name == 'scale':
setup_only = prop_attrs.get('setup_only', False)
if (setup_only and not self.sim_initialized) or not setup_only:
attr_randomization_params = prop_attrs
sample = generate_random_samples(attr_randomization_params, 1,
self.last_step, None)
og_scale = 1
if attr_randomization_params['operation'] == 'scaling':
new_scale = og_scale * sample
elif attr_randomization_params['operation'] == 'additive':
new_scale = og_scale + sample
self.gym.set_actor_scale(env, handle, new_scale)
continue
prop = param_getters_map[prop_name](env, handle)
set_random_properties = True
if isinstance(prop, list):
if self.first_randomization:
self.original_props[prop_name] = [
{attr: getattr(p, attr) for attr in dir(p)} for p in prop]
for p, og_p in zip(prop, self.original_props[prop_name]):
for attr, attr_randomization_params in prop_attrs.items():
setup_only = attr_randomization_params.get('setup_only', False)
if (setup_only and not self.sim_initialized) or not setup_only:
smpl = None
if self.actor_params_generator is not None:
smpl, extern_offsets[env_id] = get_attr_val_from_sample(
extern_sample, extern_offsets[env_id], p, attr)
apply_random_samples(
p, og_p, attr, attr_randomization_params,
self.last_step, smpl)
else:
set_random_properties = False
else:
if self.first_randomization:
self.original_props[prop_name] = deepcopy(prop)
for attr, attr_randomization_params in prop_attrs.items():
setup_only = attr_randomization_params.get('setup_only', False)
if (setup_only and not self.sim_initialized) or not setup_only:
smpl = None
if self.actor_params_generator is not None:
smpl, extern_offsets[env_id] = get_attr_val_from_sample(
extern_sample, extern_offsets[env_id], prop, attr)
apply_random_samples(
prop, self.original_props[prop_name], attr,
attr_randomization_params, self.last_step, smpl)
else:
set_random_properties = False
if set_random_properties:
setter = param_setters_map[prop_name]
default_args = param_setter_defaults_map[prop_name]
setter(env, handle, prop, *default_args)
if self.actor_params_generator is not None:
for env_id in env_ids: # check that we used all dims in sample
if extern_offsets[env_id] > 0:
extern_sample = self.extern_actor_params[env_id]
if extern_offsets[env_id] != extern_sample.shape[0]:
print('env_id', env_id,
'extern_offset', extern_offsets[env_id],
'vs extern_sample.shape', extern_sample.shape)
raise Exception("Invalid extern_sample size")
self.first_randomization = False | 37,452 | Python | 43.586905 | 160 | 0.569476 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/base/__init__.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| 1,558 | Python | 54.678569 | 80 | 0.784339 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/allegro_kuka/generate_cuboids.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
from os.path import join
from typing import Callable, List
from jinja2 import Environment, FileSystemLoader, select_autoescape
FilterFunc = Callable[[List[int]], bool]
def generate_assets(
scales, min_volume, max_volume, generated_assets_dir, base_mesh, base_cube_size_m, filter_funcs: List[FilterFunc]
):
template_dir = join(os.path.dirname(os.path.abspath(__file__)), "../../../assets/asset_templates")
print(f"Assets template dir: {template_dir}")
env = Environment(
loader=FileSystemLoader(template_dir),
autoescape=select_autoescape(),
)
template = env.get_template("cube_multicolor_allegro.urdf.template") # <-- pass as function parameter?
idx = 0
for x_scale in scales:
for y_scale in scales:
for z_scale in scales:
volume = x_scale * y_scale * z_scale / (100 * 100 * 100)
if volume > max_volume:
continue
if volume < min_volume:
continue
curr_scales = [x_scale, y_scale, z_scale]
curr_scales.sort()
filtered = False
for filter_func in filter_funcs:
if filter_func(curr_scales):
filtered = True
if filtered:
continue
asset = template.render(
base_mesh=base_mesh,
x_scale=base_cube_size_m * (x_scale / 100),
y_scale=base_cube_size_m * (y_scale / 100),
z_scale=base_cube_size_m * (z_scale / 100),
)
fname = f"{idx:03d}_cube_{x_scale}_{y_scale}_{z_scale}.urdf"
idx += 1
with open(join(generated_assets_dir, fname), "w") as fobj:
fobj.write(asset)
def filter_thin_plates(scales: List[int]) -> bool:
"""
Skip cuboids where one dimension is much smaller than the other two - these are very hard to grasp.
Return True if the object needs to be skipped.
"""
scales = sorted(scales)
return scales[0] * 3 <= scales[1]
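# Example (editor's): sorted scales [50, 200, 200] give 50 * 3 <= 200, so the
# thin plate is skipped; [100, 150, 200] is kept because 100 * 3 > 150.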
def generate_default_cube(assets_dir, base_mesh, base_cube_size_m):
scales = [100]
min_volume = max_volume = 1.0
generate_assets(scales, min_volume, max_volume, assets_dir, base_mesh, base_cube_size_m, [])
def generate_small_cuboids(assets_dir, base_mesh, base_cube_size_m):
scales = [100, 50, 66, 75, 90, 110, 125, 150, 175, 200, 250, 300]
min_volume = 1.0
max_volume = 2.5
generate_assets(scales, min_volume, max_volume, assets_dir, base_mesh, base_cube_size_m, [])
def generate_big_cuboids(assets_dir, base_mesh, base_cube_size_m):
scales = [100, 125, 150, 200, 250, 300, 350]
min_volume = 2.5
max_volume = 15.0
generate_assets(scales, min_volume, max_volume, assets_dir, base_mesh, base_cube_size_m, [filter_thin_plates])
def filter_non_elongated(scales: List[int]) -> bool:
"""
Skip cuboids that are not elongated. One dimension should be significantly larger than the other two.
Return True if the object needs to be skipped.
"""
scales = sorted(scales)
return scales[2] <= scales[0] * 3 or scales[2] <= scales[1] * 3
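# Example (editor's): [100, 100, 400] is kept (400 > 3 * 100 against both other
# dims), while [100, 200, 400] is skipped because 400 <= 3 * 200.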
def generate_sticks(assets_dir, base_mesh, base_cube_size_m):
scales = [100, 50, 75, 200, 300, 400, 500, 600]
min_volume = 2.5
max_volume = 6.0
generate_assets(
scales,
min_volume,
max_volume,
assets_dir,
base_mesh,
base_cube_size_m,
[filter_thin_plates, filter_non_elongated],
)
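# --- Illustrative usage sketch (added by editor; the output directory and mesh
# path are assumptions):
# out_dir = "generated_cuboids"
# os.makedirs(out_dir, exist_ok=True)
# generate_default_cube(out_dir, "meshes/cube_multicolor.obj", 0.05)
# generate_sticks(out_dir, "meshes/cube_multicolor.obj", 0.05)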
| 5,157 | Python | 37.492537 | 117 | 0.645143 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/allegro_kuka/allegro_kuka_two_arms_regrasping.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import List, Tuple
import torch
from isaacgym import gymapi
from torch import Tensor
from isaacgymenvs.utils.torch_jit_utils import to_torch, torch_rand_float
from isaacgymenvs.tasks.allegro_kuka.allegro_kuka_two_arms import AllegroKukaTwoArmsBase
from isaacgymenvs.tasks.allegro_kuka.allegro_kuka_utils import tolerance_curriculum, tolerance_successes_objective
class AllegroKukaTwoArmsRegrasping(AllegroKukaTwoArmsBase):
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.goal_object_indices = []
self.goal_asset = None
super().__init__(cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render)
def _object_keypoint_offsets(self):
"""Regrasping task uses only a single object keypoint since we do not care about object orientation."""
return [[0, 0, 0]]
def _load_additional_assets(self, object_asset_root, arm_y_offset: float):
goal_asset_options = gymapi.AssetOptions()
goal_asset_options.disable_gravity = True
self.goal_asset = self.gym.load_asset(
self.sim, object_asset_root, self.asset_files_dict["ball"], goal_asset_options
)
goal_rb_count = self.gym.get_asset_rigid_body_count(self.goal_asset)
goal_shapes_count = self.gym.get_asset_rigid_shape_count(self.goal_asset)
return goal_rb_count, goal_shapes_count
def _create_additional_objects(self, env_ptr, env_idx, object_asset_idx):
goal_start_pose = gymapi.Transform()
goal_asset = self.goal_asset
goal_handle = self.gym.create_actor(
env_ptr, goal_asset, goal_start_pose, "goal_object", env_idx + self.num_envs, 0, 0
)
self.gym.set_actor_scale(env_ptr, goal_handle, 0.5)
self.gym.set_rigid_body_color(env_ptr, goal_handle, 0, gymapi.MESH_VISUAL, gymapi.Vec3(0.6, 0.72, 0.98))
goal_object_idx = self.gym.get_actor_index(env_ptr, goal_handle, gymapi.DOMAIN_SIM)
self.goal_object_indices.append(goal_object_idx)
def _after_envs_created(self):
self.goal_object_indices = to_torch(self.goal_object_indices, dtype=torch.long, device=self.device)
def _reset_target(self, env_ids: Tensor) -> None:
# sample random target location in some volume
target_volume_origin = self.target_volume_origin
target_volume_extent = self.target_volume_extent
target_volume_min_coord = target_volume_origin + target_volume_extent[:, 0]
target_volume_max_coord = target_volume_origin + target_volume_extent[:, 1]
target_volume_size = target_volume_max_coord - target_volume_min_coord
rand_pos_floats = torch_rand_float(0.0, 1.0, (len(env_ids), 3), device=self.device)
target_coords = target_volume_min_coord + rand_pos_floats * target_volume_size
# let the target be close to 1st or 2nd arm, randomly
left_right_random = torch_rand_float(-1.0, 1.0, (len(env_ids), 1), device=self.device)
x_ofs = 0.75
x_pos = torch.where(
left_right_random > 0,
x_ofs * torch.ones_like(left_right_random),
-x_ofs * torch.ones_like(left_right_random),
)
target_coords[:, 0] += x_pos.squeeze(dim=1)
self.goal_states[env_ids, 0:3] = target_coords
self.root_state_tensor[self.goal_object_indices[env_ids], 0:3] = self.goal_states[env_ids, 0:3]
# we also reset the object to its initial position
self.reset_object_pose(env_ids)
# since we put the object back on the table, also reset the lifting reward
self.lifted_object[env_ids] = False
self.deferred_set_actor_root_state_tensor_indexed(
[self.object_indices[env_ids], self.goal_object_indices[env_ids]]
)
def _extra_object_indices(self, env_ids: Tensor) -> List[Tensor]:
return [self.goal_object_indices[env_ids]]
def compute_kuka_reward(self) -> Tuple[Tensor, Tensor]:
rew_buf, is_success = super().compute_kuka_reward()
return rew_buf, is_success
def _true_objective(self) -> Tensor:
true_objective = tolerance_successes_objective(
self.success_tolerance, self.initial_tolerance, self.target_tolerance, self.successes
)
return true_objective
def _extra_curriculum(self):
self.success_tolerance, self.last_curriculum_update = tolerance_curriculum(
self.last_curriculum_update,
self.frame_since_restart,
self.tolerance_curriculum_interval,
self.prev_episode_successes,
self.success_tolerance,
self.initial_tolerance,
self.target_tolerance,
self.tolerance_curriculum_increment,
)
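        # the helper above (from allegro_kuka_utils) is expected to tighten success_tolerance from
        # initial_tolerance toward target_tolerance by tolerance_curriculum_increment whenever enough
        # frames have passed since the last update and recent episodes were successful enough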
| 6,376 | Python | 45.889706 | 120 | 0.692597 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/allegro_kuka/allegro_kuka_two_arms.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import math
import os
import tempfile
from copy import copy
from os.path import join
from typing import List, Tuple
from isaacgym import gymapi, gymtorch, gymutil
from torch import Tensor
from isaacgymenvs.tasks.allegro_kuka.allegro_kuka_utils import DofParameters, populate_dof_properties
from isaacgymenvs.tasks.base.vec_task import VecTask
from isaacgymenvs.tasks.allegro_kuka.generate_cuboids import (
generate_big_cuboids,
generate_default_cube,
generate_small_cuboids,
generate_sticks,
)
from isaacgymenvs.utils.torch_jit_utils import *
class AllegroKukaTwoArmsBase(VecTask):
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.cfg = cfg
self.frame_since_restart: int = 0 # number of control steps since last restart across all actors
self.hand_arm_asset_file: str = self.cfg["env"]["asset"]["kukaAllegro"]
self.clamp_abs_observations: float = self.cfg["env"]["clampAbsObservations"]
self.num_arms = self.cfg["env"]["numArms"]
assert self.num_arms == 2, f"Only two arms supported, got {self.num_arms}"
self.arm_x_ofs = self.cfg["env"]["armXOfs"]
self.arm_y_ofs = self.cfg["env"]["armYOfs"]
        # 4 joints each for the index, middle, ring, and thumb fingers, plus 7 for the kuka arm
self.num_arm_dofs = 7
self.num_finger_dofs = 4
self.num_allegro_fingertips = 4
self.num_hand_dofs = self.num_finger_dofs * self.num_allegro_fingertips
self.num_hand_arm_dofs = self.num_hand_dofs + self.num_arm_dofs
self.num_allegro_kuka_actions = self.num_hand_arm_dofs * self.num_arms
self.randomize = self.cfg["task"]["randomize"]
self.randomization_params = self.cfg["task"]["randomization_params"]
self.distance_delta_rew_scale = self.cfg["env"]["distanceDeltaRewScale"]
self.lifting_rew_scale = self.cfg["env"]["liftingRewScale"]
self.lifting_bonus = self.cfg["env"]["liftingBonus"]
self.lifting_bonus_threshold = self.cfg["env"]["liftingBonusThreshold"]
self.keypoint_rew_scale = self.cfg["env"]["keypointRewScale"]
        # not used in the 2-arm task for now
        # TODO: add these to the config
# self.kuka_actions_penalty_scale = self.cfg["env"]["kukaActionsPenaltyScale"]
# self.allegro_actions_penalty_scale = self.cfg["env"]["allegroActionsPenaltyScale"]
self.dof_params: DofParameters = DofParameters.from_cfg(self.cfg)
self.initial_tolerance = self.cfg["env"]["successTolerance"]
self.success_tolerance = self.initial_tolerance
self.target_tolerance = self.cfg["env"]["targetSuccessTolerance"]
self.tolerance_curriculum_increment = self.cfg["env"]["toleranceCurriculumIncrement"]
self.tolerance_curriculum_interval = self.cfg["env"]["toleranceCurriculumInterval"]
self.reach_goal_bonus = self.cfg["env"]["reachGoalBonus"]
self.fall_dist = self.cfg["env"]["fallDistance"]
self.fall_penalty = self.cfg["env"]["fallPenalty"]
self.reset_position_noise_x = self.cfg["env"]["resetPositionNoiseX"]
self.reset_position_noise_y = self.cfg["env"]["resetPositionNoiseY"]
self.reset_position_noise_z = self.cfg["env"]["resetPositionNoiseZ"]
self.reset_rotation_noise = self.cfg["env"]["resetRotationNoise"]
self.reset_dof_pos_noise_fingers = self.cfg["env"]["resetDofPosRandomIntervalFingers"]
self.reset_dof_pos_noise_arm = self.cfg["env"]["resetDofPosRandomIntervalArm"]
self.reset_dof_vel_noise = self.cfg["env"]["resetDofVelRandomInterval"]
self.force_scale = self.cfg["env"].get("forceScale", 0.0)
self.force_prob_range = self.cfg["env"].get("forceProbRange", [0.001, 0.1])
self.force_decay = self.cfg["env"].get("forceDecay", 0.99)
self.force_decay_interval = self.cfg["env"].get("forceDecayInterval", 0.08)
# currently not used in 2-hand env
# self.hand_dof_speed_scale = self.cfg["env"]["dofSpeedScale"]
self.use_relative_control = self.cfg["env"]["useRelativeControl"]
self.act_moving_average = self.cfg["env"]["actionsMovingAverage"]
self.debug_viz = self.cfg["env"]["enableDebugVis"]
self.max_episode_length = self.cfg["env"]["episodeLength"]
self.reset_time = self.cfg["env"].get("resetTime", -1.0)
self.max_consecutive_successes = self.cfg["env"]["maxConsecutiveSuccesses"]
self.success_steps: int = self.cfg["env"]["successSteps"]
# 1.0 means keypoints correspond to the corners of the object
# larger values help the agent to prioritize rotation matching
self.keypoint_scale = self.cfg["env"]["keypointScale"]
# size of the object (i.e. cube) before scaling
self.object_base_size = self.cfg["env"]["objectBaseSize"]
# whether to sample random object dimensions
self.randomize_object_dimensions = self.cfg["env"]["randomizeObjectDimensions"]
self.with_small_cuboids = self.cfg["env"]["withSmallCuboids"]
self.with_big_cuboids = self.cfg["env"]["withBigCuboids"]
self.with_sticks = self.cfg["env"]["withSticks"]
if self.reset_time > 0.0:
self.max_episode_length = int(round(self.reset_time / (self.control_freq_inv * self.sim_params.dt)))
print("Reset time: ", self.reset_time)
print("New episode length: ", self.max_episode_length)
self.object_type = self.cfg["env"]["objectType"]
assert self.object_type in ["block"]
self.asset_files_dict = {
"block": "urdf/objects/cube_multicolor.urdf", # 0.05m box
"table": "urdf/table_wide.urdf",
"bucket": "urdf/objects/bucket.urdf",
"lightbulb": "lightbulb/A60_E27_SI.urdf",
"socket": "E27SocketSimple.urdf",
"ball": "urdf/objects/ball.urdf",
}
self.keypoints_offsets = self._object_keypoint_offsets()
self.num_keypoints = len(self.keypoints_offsets)
self.allegro_fingertips = ["index_link_3", "middle_link_3", "ring_link_3", "thumb_link_3"]
self.fingertip_offsets = np.array(
[[0.05, 0.005, 0], [0.05, 0.005, 0], [0.05, 0.005, 0], [0.06, 0.005, 0]], dtype=np.float32
)
palm_offset = np.array([-0.00, -0.02, 0.16], dtype=np.float32)
self.num_fingertips = len(self.allegro_fingertips)
        # can only be "full_state"
self.obs_type = self.cfg["env"]["observationType"]
if not (self.obs_type in ["full_state"]):
raise Exception("Unknown type of observations!")
print("Obs type:", self.obs_type)
num_dof_pos = num_dof_vel = self.num_hand_arm_dofs * self.num_arms
palm_pos_size = 3 * self.num_arms
palm_rot_vel_angvel_size = 10 * self.num_arms
obj_rot_vel_angvel_size = 10
fingertip_rel_pos_size = 3 * self.num_fingertips * self.num_arms
keypoints_rel_palm_size = self.num_keypoints * 3 * self.num_arms
keypoints_rel_goal_size = self.num_keypoints * 3
object_scales_size = 3
max_keypoint_dist_size = 1
lifted_object_flag_size = 1
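        # two scalars: progress_buf and successes, both log-compressed in compute_full_state()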
progress_obs_size = 1 + 1
# commented out for now - not used in 2-hand env
# closest_fingertip_distance_size = self.num_fingertips * self.num_arms
reward_obs_size = 1
self.full_state_size = (
num_dof_pos
+ num_dof_vel
+ palm_pos_size
+ palm_rot_vel_angvel_size
+ obj_rot_vel_angvel_size
+ fingertip_rel_pos_size
+ keypoints_rel_palm_size
+ keypoints_rel_goal_size
+ object_scales_size
+ max_keypoint_dist_size
+ lifted_object_flag_size
+ progress_obs_size
+ reward_obs_size
)
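        # sanity check, e.g. for the regrasping subclass above (num_arms = 2, num_hand_arm_dofs = 23,
        # num_keypoints = 1): 46 + 46 + 6 + 20 + 10 + 24 + 6 + 3 + 3 + 1 + 1 + 2 + 1 = 169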
num_states = self.full_state_size
self.num_obs_dict = {
"full_state": self.full_state_size,
}
self.up_axis = "z"
self.fingertip_obs = True
self.cfg["env"]["numObservations"] = self.num_obs_dict[self.obs_type]
self.cfg["env"]["numStates"] = num_states
self.cfg["env"]["numActions"] = self.num_allegro_kuka_actions
self.cfg["device_type"] = sim_device.split(":")[0]
self.cfg["device_id"] = int(sim_device.split(":")[1])
self.cfg["headless"] = headless
super().__init__(
config=self.cfg, rl_device=rl_device, sim_device=sim_device, graphics_device_id=graphics_device_id,
headless=headless, virtual_screen_capture=virtual_screen_capture, force_render=force_render,
)
if self.viewer is not None:
cam_pos = gymapi.Vec3(10.0, 5.0, 1.0)
cam_target = gymapi.Vec3(6.0, 5.0, 0.0)
self.gym.viewer_camera_look_at(self.viewer, None, cam_pos, cam_target)
# volume to sample target position from
target_volume_origin = np.array([0, 0.0, 0.8], dtype=np.float32)
target_volume_extent = np.array([[-0.2, 0.2], [-0.5, 0.5], [-0.12, 0.25]], dtype=np.float32)
self.target_volume_origin = torch.from_numpy(target_volume_origin).to(self.device).float()
self.target_volume_extent = torch.from_numpy(target_volume_extent).to(self.device).float()
# get gym GPU state tensors
actor_root_state_tensor = self.gym.acquire_actor_root_state_tensor(self.sim)
dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
rigid_body_tensor = self.gym.acquire_rigid_body_state_tensor(self.sim)
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_rigid_body_state_tensor(self.sim)
# create some wrapper tensors for different slices
self.dof_state = gymtorch.wrap_tensor(dof_state_tensor)
self.hand_arm_default_dof_pos = torch.zeros(
[self.num_arms, self.num_hand_arm_dofs], dtype=torch.float, device=self.device
)
        # the same default pose is used for both arms
        desired_kuka_pos = torch.tensor([-1.571, 1.571, -0.000, 1.6, -0.000, 1.485, 2.358])  # pose v1
        # desired_kuka_pos = torch.tensor([-2.135, 0.843, 1.786, -0.903, -2.262, 1.301, -2.791])  # pose v2
        self.hand_arm_default_dof_pos[0, :7] = desired_kuka_pos
        self.hand_arm_default_dof_pos[1, :7] = desired_kuka_pos
self.pos_noise_coeff = torch.zeros_like(self.hand_arm_default_dof_pos, device=self.device)
self.pos_noise_coeff[:, 0:7] = self.reset_dof_pos_noise_arm
self.pos_noise_coeff[:, 7 : self.num_hand_arm_dofs] = self.reset_dof_pos_noise_fingers
self.pos_noise_coeff = self.pos_noise_coeff.flatten()
self.hand_arm_default_dof_pos = self.hand_arm_default_dof_pos.flatten()
self.arm_hand_dof_state = self.dof_state.view(self.num_envs, -1, 2)[:, : self.num_hand_arm_dofs * self.num_arms]
# this will have dimensions [num_envs, num_arms * num_hand_arm_dofs]
self.arm_hand_dof_pos = self.arm_hand_dof_state[..., 0]
self.arm_hand_dof_vel = self.arm_hand_dof_state[..., 1]
self.rigid_body_states = gymtorch.wrap_tensor(rigid_body_tensor).view(self.num_envs, -1, 13)
self.num_bodies = self.rigid_body_states.shape[1]
self.root_state_tensor = gymtorch.wrap_tensor(actor_root_state_tensor).view(-1, 13)
self.palm_center_offset = torch.from_numpy(palm_offset).to(self.device).repeat((self.num_envs, 1))
self.palm_center_pos = torch.zeros((self.num_envs, self.num_arms, 3), dtype=torch.float, device=self.device)
self.fingertip_offsets = torch.from_numpy(self.fingertip_offsets).to(self.device).repeat((self.num_envs, 1, 1))
self.set_actor_root_state_object_indices: List[Tensor] = []
self.prev_targets = torch.zeros(
(self.num_envs, self.num_arms * self.num_hand_arm_dofs), dtype=torch.float, device=self.device
)
self.cur_targets = torch.zeros(
(self.num_envs, self.num_arms * self.num_hand_arm_dofs), dtype=torch.float, device=self.device
)
self.global_indices = torch.arange(self.num_envs * 3, dtype=torch.int32, device=self.device).view(
self.num_envs, -1
)
self.x_unit_tensor = to_torch([1, 0, 0], dtype=torch.float, device=self.device).repeat((self.num_envs, 1))
self.y_unit_tensor = to_torch([0, 1, 0], dtype=torch.float, device=self.device).repeat((self.num_envs, 1))
self.z_unit_tensor = to_torch([0, 0, 1], dtype=torch.float, device=self.device).repeat((self.num_envs, 1))
self.reset_goal_buf = self.reset_buf.clone()
self.successes = torch.zeros(self.num_envs, dtype=torch.float, device=self.device)
self.prev_episode_successes = torch.zeros_like(self.successes)
        # true objective value for the whole episode, plus saved values from the previous episode
self.true_objective = torch.zeros(self.num_envs, dtype=torch.float, device=self.device)
self.prev_episode_true_objective = torch.zeros_like(self.true_objective)
self.total_successes = 0
self.total_resets = 0
# object apply random forces parameters
self.force_decay = to_torch(self.force_decay, dtype=torch.float, device=self.device)
self.force_prob_range = to_torch(self.force_prob_range, dtype=torch.float, device=self.device)
self.random_force_prob = torch.exp(
(torch.log(self.force_prob_range[0]) - torch.log(self.force_prob_range[1]))
* torch.rand(self.num_envs, device=self.device)
+ torch.log(self.force_prob_range[1])
)
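        # this samples the per-env force probability log-uniformly over [force_prob_range[0], force_prob_range[1]];
        # a minimal equivalent sketch for scalar bounds lo < hi would be, e.g.:
        #   torch.exp(torch.empty(self.num_envs, device=self.device).uniform_(math.log(lo), math.log(hi)))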
self.rb_forces = torch.zeros((self.num_envs, self.num_bodies, 3), dtype=torch.float, device=self.device)
self.action_torques = torch.zeros((self.num_envs, self.num_bodies, 3), dtype=torch.float, device=self.device)
self.obj_keypoint_pos = torch.zeros(
(self.num_envs, self.num_keypoints, 3), dtype=torch.float, device=self.device
)
self.goal_keypoint_pos = torch.zeros(
(self.num_envs, self.num_keypoints, 3), dtype=torch.float, device=self.device
)
        # for how many steps we have been within the goal tolerance
self.near_goal_steps = torch.zeros(self.num_envs, dtype=torch.int, device=self.device)
self.lifted_object = torch.zeros(self.num_envs, dtype=torch.bool, device=self.device)
self.closest_keypoint_max_dist = -torch.ones(self.num_envs, dtype=torch.float, device=self.device)
self.closest_fingertip_dist = -torch.ones(
[self.num_envs, self.num_arms, self.num_fingertips], dtype=torch.float, device=self.device
)
reward_keys = [
"raw_fingertip_delta_rew",
"raw_lifting_rew",
"raw_keypoint_rew",
"fingertip_delta_rew",
"lifting_rew",
"lift_bonus_rew",
"keypoint_rew",
"bonus_rew",
]
self.rewards_episode = {
key: torch.zeros(self.num_envs, dtype=torch.float, device=self.device) for key in reward_keys
}
self.last_curriculum_update = 0
self.episode_root_state_tensors = [[] for _ in range(self.num_envs)]
self.episode_dof_states = [[] for _ in range(self.num_envs)]
self.eval_stats: bool = self.cfg["env"]["evalStats"]
if self.eval_stats:
self.last_success_step = torch.zeros(self.num_envs, dtype=torch.float, device=self.device)
self.success_time = torch.zeros(self.num_envs, dtype=torch.float, device=self.device)
self.total_num_resets = torch.zeros(self.num_envs, dtype=torch.float, device=self.device)
self.successes_count = torch.zeros(
self.max_consecutive_successes + 1, dtype=torch.float, device=self.device
)
from tensorboardX import SummaryWriter
self.eval_summary_dir = "./eval_summaries"
# remove the old directory if it exists
if os.path.exists(self.eval_summary_dir):
import shutil
shutil.rmtree(self.eval_summary_dir)
self.eval_summaries = SummaryWriter(self.eval_summary_dir, flush_secs=3)
# AllegroKukaBase abstract interface - to be overriden in derived classes
def _object_keypoint_offsets(self):
raise NotImplementedError()
def _object_start_pose(self, arms_y_ofs: float, table_pose_dy: float, table_pose_dz: float):
object_start_pose = gymapi.Transform()
object_start_pose.p = gymapi.Vec3()
object_start_pose.p.x = 0.0
pose_dy, pose_dz = table_pose_dy, table_pose_dz + 0.25
object_start_pose.p.y = arms_y_ofs + pose_dy
object_start_pose.p.z = pose_dz
return object_start_pose
def _main_object_assets_and_scales(self, object_asset_root, tmp_assets_dir):
object_asset_files, object_asset_scales = self._box_asset_files_and_scales(object_asset_root, tmp_assets_dir)
if not self.randomize_object_dimensions:
object_asset_files = object_asset_files[:1]
object_asset_scales = object_asset_scales[:1]
# randomize order
files_and_scales = list(zip(object_asset_files, object_asset_scales))
        # use a fixed seed here to make sure the distribution of object types stays the same when we restart from a checkpoint
rng = np.random.default_rng(42)
rng.shuffle(files_and_scales)
object_asset_files, object_asset_scales = zip(*files_and_scales)
return object_asset_files, object_asset_scales
def _load_main_object_asset(self):
"""Load manipulated object and goal assets."""
object_asset_options = gymapi.AssetOptions()
object_assets = []
for object_asset_file in self.object_asset_files:
object_asset_dir = os.path.dirname(object_asset_file)
object_asset_fname = os.path.basename(object_asset_file)
object_asset_ = self.gym.load_asset(self.sim, object_asset_dir, object_asset_fname, object_asset_options)
object_assets.append(object_asset_)
object_rb_count = self.gym.get_asset_rigid_body_count(
object_assets[0]
) # assuming all of them have the same rb count
object_shapes_count = self.gym.get_asset_rigid_shape_count(
object_assets[0]
        ) # assuming all of them have the same shape count
return object_assets, object_rb_count, object_shapes_count
def _load_additional_assets(self, object_asset_root, arm_y_offset: float) -> Tuple[int, int]:
"""
returns: tuple (num_rigid_bodies, num_shapes)
"""
return 0, 0
def _create_additional_objects(self, env_ptr, env_idx, object_asset_idx):
pass
def _after_envs_created(self):
pass
def _extra_reset_rules(self, resets):
return resets
def _reset_target(self, env_ids: Tensor) -> None:
raise NotImplementedError()
def _extra_object_indices(self, env_ids: Tensor) -> List[Tensor]:
return []
def _extra_curriculum(self):
pass
# AllegroKukaBase implementation
def get_env_state(self):
"""
Return serializable environment state to be saved to checkpoint.
Can be used for stateful training sessions, i.e. with adaptive curriculums.
"""
return dict(
success_tolerance=self.success_tolerance,
)
def set_env_state(self, env_state):
if env_state is None:
return
for key in self.get_env_state().keys():
value = env_state.get(key, None)
if value is None:
continue
self.__dict__[key] = value
print(f"Loaded env state value {key}:{value}")
print(f"Success tolerance value after loading from checkpoint: {self.success_tolerance}")
# noinspection PyMethodOverriding
def create_sim(self):
self.dt = self.sim_params.dt
self.up_axis_idx = 2 # index of up axis: Y=1, Z=2 (same as in allegro_hand.py)
self.sim = super().create_sim(self.device_id, self.graphics_device_id, self.physics_engine, self.sim_params)
self._create_ground_plane()
self._create_envs(self.num_envs, self.cfg["env"]["envSpacing"], int(np.sqrt(self.num_envs)))
def _create_ground_plane(self):
plane_params = gymapi.PlaneParams()
plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)
self.gym.add_ground(self.sim, plane_params)
def _box_asset_files_and_scales(self, object_assets_root, generated_assets_dir):
files = []
scales = []
try:
filenames = os.listdir(generated_assets_dir)
for fname in filenames:
if fname.endswith(".urdf"):
os.remove(join(generated_assets_dir, fname))
except Exception as exc:
print(f"Exception {exc} while removing older procedurally-generated urdf assets")
objects_rel_path = os.path.dirname(self.asset_files_dict[self.object_type])
objects_dir = join(object_assets_root, objects_rel_path)
base_mesh = join(objects_dir, "meshes", "cube_multicolor.obj")
generate_default_cube(generated_assets_dir, base_mesh, self.object_base_size)
if self.with_small_cuboids:
generate_small_cuboids(generated_assets_dir, base_mesh, self.object_base_size)
if self.with_big_cuboids:
generate_big_cuboids(generated_assets_dir, base_mesh, self.object_base_size)
if self.with_sticks:
generate_sticks(generated_assets_dir, base_mesh, self.object_base_size)
filenames = os.listdir(generated_assets_dir)
filenames = sorted(filenames)
for fname in filenames:
if fname.endswith(".urdf"):
scale_tokens = os.path.splitext(fname)[0].split("_")[2:]
files.append(join(generated_assets_dir, fname))
scales.append([float(scale_token) / 100 for scale_token in scale_tokens])
return files, scales
def _create_envs(self, num_envs, spacing, num_per_row):
lower = gymapi.Vec3(-spacing, -spacing, 0.0)
upper = gymapi.Vec3(spacing, spacing, spacing)
asset_root = os.path.join(os.path.dirname(os.path.abspath(__file__)), "../../../assets")
object_asset_root = asset_root
tmp_assets_dir = tempfile.TemporaryDirectory()
self.object_asset_files, self.object_asset_scales = self._main_object_assets_and_scales(
object_asset_root, tmp_assets_dir.name
)
asset_options = gymapi.AssetOptions()
asset_options.fix_base_link = True
asset_options.flip_visual_attachments = False
asset_options.collapse_fixed_joints = True
asset_options.disable_gravity = True
asset_options.thickness = 0.001
asset_options.angular_damping = 0.01
asset_options.linear_damping = 0.01
if self.physics_engine == gymapi.SIM_PHYSX:
asset_options.use_physx_armature = True
asset_options.default_dof_drive_mode = gymapi.DOF_MODE_POS
print(f"Loading asset {self.hand_arm_asset_file} from {asset_root}")
allegro_kuka_asset = self.gym.load_asset(self.sim, asset_root, self.hand_arm_asset_file, asset_options)
print(f"Loaded asset {allegro_kuka_asset}")
num_hand_arm_bodies = self.gym.get_asset_rigid_body_count(allegro_kuka_asset)
num_hand_arm_shapes = self.gym.get_asset_rigid_shape_count(allegro_kuka_asset)
num_hand_arm_dofs = self.gym.get_asset_dof_count(allegro_kuka_asset)
assert (
self.num_hand_arm_dofs == num_hand_arm_dofs
), f"Number of DOFs in asset {allegro_kuka_asset} is {num_hand_arm_dofs}, but {self.num_hand_arm_dofs} was expected"
max_agg_bodies = all_arms_bodies = num_hand_arm_bodies * self.num_arms
max_agg_shapes = all_arms_shapes = num_hand_arm_shapes * self.num_arms
allegro_rigid_body_names = [
self.gym.get_asset_rigid_body_name(allegro_kuka_asset, i) for i in range(num_hand_arm_bodies)
]
print(f"Allegro num rigid bodies: {num_hand_arm_bodies}")
print(f"Allegro rigid bodies: {allegro_rigid_body_names}")
# allegro_actuated_dof_names = [self.gym.get_asset_actuator_joint_name(allegro_asset, i) for i in range(self.num_allegro_dofs)]
# self.allegro_actuated_dof_indices = [self.gym.find_asset_dof_index(allegro_asset, name) for name in allegro_actuated_dof_names]
hand_arm_dof_props = self.gym.get_asset_dof_properties(allegro_kuka_asset)
arm_hand_dof_lower_limits = []
arm_hand_dof_upper_limits = []
for arm_idx in range(self.num_arms):
for i in range(self.num_hand_arm_dofs):
arm_hand_dof_lower_limits.append(hand_arm_dof_props["lower"][i])
arm_hand_dof_upper_limits.append(hand_arm_dof_props["upper"][i])
# self.allegro_actuated_dof_indices = to_torch(self.allegro_actuated_dof_indices, dtype=torch.long, device=self.device)
self.arm_hand_dof_lower_limits = to_torch(arm_hand_dof_lower_limits, device=self.device)
self.arm_hand_dof_upper_limits = to_torch(arm_hand_dof_upper_limits, device=self.device)
arm_poses = [gymapi.Transform() for _ in range(self.num_arms)]
arm_x_ofs, arm_y_ofs = self.arm_x_ofs, self.arm_y_ofs
for arm_idx, arm_pose in enumerate(arm_poses):
x_ofs = arm_x_ofs * (-1 if arm_idx == 0 else 1)
arm_pose.p = gymapi.Vec3(*get_axis_params(0.0, self.up_axis_idx)) + gymapi.Vec3(x_ofs, arm_y_ofs, 0)
# arm_pose.r = gymapi.Quat(0.0, 0.0, 0.0, 1.0)
if arm_idx == 0:
# rotate 1st arm 90 degrees to the left
arm_pose.r = gymapi.Quat.from_axis_angle(gymapi.Vec3(0, 0, 1), math.pi / 2)
else:
# rotate 2nd arm 90 degrees to the right
arm_pose.r = gymapi.Quat.from_axis_angle(gymapi.Vec3(0, 0, 1), -math.pi / 2)
object_assets, object_rb_count, object_shapes_count = self._load_main_object_asset()
max_agg_bodies += object_rb_count
max_agg_shapes += object_shapes_count
# load auxiliary objects
table_asset_options = gymapi.AssetOptions()
table_asset_options.disable_gravity = False
table_asset_options.fix_base_link = True
table_asset = self.gym.load_asset(self.sim, asset_root, self.asset_files_dict["table"], table_asset_options)
table_pose = gymapi.Transform()
table_pose.p = gymapi.Vec3()
table_pose.p.x = 0.0
# table_pose_dy, table_pose_dz = -0.8, 0.38
table_pose_dy, table_pose_dz = 0.0, 0.38
table_pose.p.y = arm_y_ofs + table_pose_dy
table_pose.p.z = table_pose_dz
table_rb_count = self.gym.get_asset_rigid_body_count(table_asset)
table_shapes_count = self.gym.get_asset_rigid_shape_count(table_asset)
max_agg_bodies += table_rb_count
max_agg_shapes += table_shapes_count
additional_rb, additional_shapes = self._load_additional_assets(object_asset_root, arm_y_ofs)
max_agg_bodies += additional_rb
max_agg_shapes += additional_shapes
# set up object and goal positions
self.object_start_pose = self._object_start_pose(arm_y_ofs, table_pose_dy, table_pose_dz)
self.envs = []
object_init_state = []
object_scales = []
object_keypoint_offsets = []
allegro_palm_handle = self.gym.find_asset_rigid_body_index(allegro_kuka_asset, "iiwa7_link_7")
fingertip_handles = [
self.gym.find_asset_rigid_body_index(allegro_kuka_asset, name) for name in self.allegro_fingertips
]
self.allegro_palm_handles = []
self.allegro_fingertip_handles = []
for arm_idx in range(self.num_arms):
self.allegro_palm_handles.append(allegro_palm_handle + arm_idx * num_hand_arm_bodies)
self.allegro_fingertip_handles.extend([h + arm_idx * num_hand_arm_bodies for h in fingertip_handles])
        # NOTE: this relies on the object actor being created right after the arm actors in each env
        # (see the create_actor() order below), so the object bodies directly follow the arm bodies
self.object_rb_handles = list(range(all_arms_bodies, all_arms_bodies + object_rb_count))
self.arm_indices = torch.empty([self.num_envs, self.num_arms], dtype=torch.long, device=self.device)
self.object_indices = torch.empty(self.num_envs, dtype=torch.long, device=self.device)
assert self.num_envs >= 1
for i in range(self.num_envs):
# create env instance
env_ptr = self.gym.create_env(self.sim, lower, upper, num_per_row)
self.gym.begin_aggregate(env_ptr, max_agg_bodies, max_agg_shapes, True)
# add arms
for arm_idx in range(self.num_arms):
arm = self.gym.create_actor(env_ptr, allegro_kuka_asset, arm_poses[arm_idx], f"arm{arm_idx}", i, -1, 0)
populate_dof_properties(hand_arm_dof_props, self.dof_params, self.num_arm_dofs, self.num_hand_dofs)
self.gym.set_actor_dof_properties(env_ptr, arm, hand_arm_dof_props)
allegro_hand_idx = self.gym.get_actor_index(env_ptr, arm, gymapi.DOMAIN_SIM)
self.arm_indices[i, arm_idx] = allegro_hand_idx
# add object
object_asset_idx = i % len(object_assets)
object_asset = object_assets[object_asset_idx]
obj_pose = self.object_start_pose
object_handle = self.gym.create_actor(env_ptr, object_asset, obj_pose, "object", i, 0, 0)
pos, rot = obj_pose.p, obj_pose.r
object_init_state.append([pos.x, pos.y, pos.z, rot.x, rot.y, rot.z, rot.w, 0, 0, 0, 0, 0, 0])
object_idx = self.gym.get_actor_index(env_ptr, object_handle, gymapi.DOMAIN_SIM)
self.object_indices[i] = object_idx
object_scale = self.object_asset_scales[object_asset_idx]
object_scales.append(object_scale)
object_offsets = []
for keypoint in self.keypoints_offsets:
keypoint = copy(keypoint)
for coord_idx in range(3):
keypoint[coord_idx] *= object_scale[coord_idx] * self.object_base_size * self.keypoint_scale / 2
object_offsets.append(keypoint)
object_keypoint_offsets.append(object_offsets)
# table object
table_handle = self.gym.create_actor(env_ptr, table_asset, table_pose, "table_object", i, 0, 0)
_table_object_idx = self.gym.get_actor_index(env_ptr, table_handle, gymapi.DOMAIN_SIM)
# task-specific objects (i.e. goal object for reorientation task)
self._create_additional_objects(env_ptr, env_idx=i, object_asset_idx=object_asset_idx)
self.gym.end_aggregate(env_ptr)
self.envs.append(env_ptr)
# we are not using new mass values after DR when calculating random forces applied to an object,
# which should be ok as long as the randomization range is not too big
# noinspection PyUnboundLocalVariable
object_rb_props = self.gym.get_actor_rigid_body_properties(self.envs[0], object_handle)
self.object_rb_masses = [prop.mass for prop in object_rb_props]
self.object_init_state = to_torch(object_init_state, device=self.device, dtype=torch.float).view(
self.num_envs, 13
)
self.goal_states = self.object_init_state.clone()
self.goal_states[:, self.up_axis_idx] -= 0.04
self.goal_init_state = self.goal_states.clone()
self.allegro_fingertip_handles = to_torch(self.allegro_fingertip_handles, dtype=torch.long, device=self.device)
self.object_rb_handles = to_torch(self.object_rb_handles, dtype=torch.long, device=self.device)
self.object_rb_masses = to_torch(self.object_rb_masses, dtype=torch.float, device=self.device)
self.object_scales = to_torch(object_scales, dtype=torch.float, device=self.device)
self.object_keypoint_offsets = to_torch(object_keypoint_offsets, dtype=torch.float, device=self.device)
self._after_envs_created()
try:
# by this point we don't need the temporary folder for procedurally generated assets
tmp_assets_dir.cleanup()
except Exception:
pass
def _distance_delta_rewards(self, lifted_object: Tensor) -> Tensor:
"""Rewards for fingertips approaching the object or penalty for hand getting further away from the object."""
# this is positive if we got closer, negative if we're further away than the closest we've gotten
fingertip_deltas_closest = self.closest_fingertip_dist - self.curr_fingertip_distances
# update the values if finger tips got closer to the object
self.closest_fingertip_dist = torch.minimum(self.closest_fingertip_dist, self.curr_fingertip_distances)
# clip between zero and +inf to turn deltas into rewards
fingertip_deltas = torch.clip(fingertip_deltas_closest, 0, 10)
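        # e.g. if the closest distance so far was 0.30 m and the current distance is 0.25 m, the
        # delta reward is 0.05 and the record is updated; moving away yields zero (clipped), not a penalty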
fingertip_delta_rew = torch.sum(fingertip_deltas, dim=-1)
fingertip_delta_rew = torch.sum(fingertip_delta_rew, dim=-1) # sum over all arms
        # NOTE: for the 2-arm task the gating below is commented out: we want the 2nd arm to stay
        # relatively close to the object at all times. In the 1-arm task this reward is added only
        # before the object is lifted; after that the agent is guided by keypoint and bonus rewards.
# fingertip_delta_rew *= ~lifted_object
return fingertip_delta_rew
def _lifting_reward(self) -> Tuple[Tensor, Tensor, Tensor]:
"""Reward for lifting the object off the table."""
z_lift = 0.05 + self.object_pos[:, 2] - self.object_init_state[:, 2]
lifting_rew = torch.clip(z_lift, 0, 0.5)
# this flag tells us if we lifted an object above a certain height compared to the initial position
lifted_object = (z_lift > self.lifting_bonus_threshold) | self.lifted_object
        # Since we stop rewarding the agent for height after the object is lifted, we should give it
        # a large positive reward to compensate for the "lost" opportunity to keep collecting lifting
        # reward by sitting just below the threshold.
        # This bonus depends on the max lifting reward (lifting reward coeff * threshold) and on the
        # discount factor (i.e. the effective future horizon for the agent).
        # For threshold 0.15, lifting reward coeff = 3, and gamma 0.995 (effective horizon ~500 steps),
        # a value of 300 for the bonus reward seems reasonable.
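        # rough sanity check: camping just below the threshold earns at most ~0.15 * 3 = 0.45 per step,
        # so over the ~500-step effective horizon the forgone reward is at most ~225, which the
        # one-time bonus of 300 safely dominates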
just_lifted_above_threshold = lifted_object & ~self.lifted_object
lift_bonus_rew = self.lifting_bonus * just_lifted_above_threshold
# stop giving lifting reward once we crossed the threshold - now the agent can focus entirely on the
# keypoint reward
lifting_rew *= ~lifted_object
# update the flag that describes whether we lifted an object above the table or not
self.lifted_object = lifted_object
return lifting_rew, lift_bonus_rew, lifted_object
def _keypoint_reward(self, lifted_object: Tensor) -> Tensor:
# this is positive if we got closer, negative if we're further away
max_keypoint_deltas = self.closest_keypoint_max_dist - self.keypoints_max_dist
# update the values if we got closer to the target
self.closest_keypoint_max_dist = torch.minimum(self.closest_keypoint_max_dist, self.keypoints_max_dist)
# clip between zero and +inf to turn deltas into rewards
max_keypoint_deltas = torch.clip(max_keypoint_deltas, 0, 100)
# administer reward only when we already lifted an object from the table
# to prevent the situation where the agent just rolls it around the table
keypoint_rew = max_keypoint_deltas * lifted_object
return keypoint_rew
def _compute_resets(self, is_success):
resets = torch.where(self.object_pos[:, 2] < 0.1, torch.ones_like(self.reset_buf), self.reset_buf) # fall
if self.max_consecutive_successes > 0:
# Reset progress buffer if max_consecutive_successes > 0
self.progress_buf = torch.where(is_success > 0, torch.zeros_like(self.progress_buf), self.progress_buf)
resets = torch.where(self.successes >= self.max_consecutive_successes, torch.ones_like(resets), resets)
resets = torch.where(self.progress_buf >= self.max_episode_length - 1, torch.ones_like(resets), resets)
resets = self._extra_reset_rules(resets)
return resets
def _true_objective(self):
raise NotImplementedError()
def compute_kuka_reward(self) -> Tuple[Tensor, Tensor]:
lifting_rew, lift_bonus_rew, lifted_object = self._lifting_reward()
fingertip_delta_rew = self._distance_delta_rewards(lifted_object)
keypoint_rew = self._keypoint_reward(lifted_object)
keypoint_success_tolerance = self.success_tolerance * self.keypoint_scale
# noinspection PyTypeChecker
near_goal: Tensor = self.keypoints_max_dist <= keypoint_success_tolerance
self.near_goal_steps += near_goal
is_success = self.near_goal_steps >= self.success_steps
goal_resets = is_success
self.successes += is_success
self.reset_goal_buf[:] = goal_resets
self.rewards_episode["raw_fingertip_delta_rew"] += fingertip_delta_rew
self.rewards_episode["raw_lifting_rew"] += lifting_rew
self.rewards_episode["raw_keypoint_rew"] += keypoint_rew
fingertip_delta_rew *= self.distance_delta_rew_scale
lifting_rew *= self.lifting_rew_scale
keypoint_rew *= self.keypoint_rew_scale
# Success bonus: orientation is within `success_tolerance` of goal orientation
# We spread out the reward over "success_steps"
bonus_rew = near_goal * (self.reach_goal_bonus / self.success_steps)
reward = fingertip_delta_rew + lifting_rew + lift_bonus_rew + keypoint_rew + bonus_rew
self.rew_buf[:] = reward
resets = self._compute_resets(is_success)
self.reset_buf[:] = resets
self.extras["successes"] = self.prev_episode_successes.mean()
self.true_objective = self._true_objective()
self.extras["true_objective"] = self.true_objective
# scalars for logging
self.extras["true_objective_mean"] = self.true_objective.mean()
self.extras["true_objective_min"] = self.true_objective.min()
self.extras["true_objective_max"] = self.true_objective.max()
rewards = [
(fingertip_delta_rew, "fingertip_delta_rew"),
(lifting_rew, "lifting_rew"),
(lift_bonus_rew, "lift_bonus_rew"),
(keypoint_rew, "keypoint_rew"),
(bonus_rew, "bonus_rew"),
]
episode_cumulative = dict()
for rew_value, rew_name in rewards:
self.rewards_episode[rew_name] += rew_value
episode_cumulative[rew_name] = rew_value
self.extras["rewards_episode"] = self.rewards_episode
self.extras["episode_cumulative"] = episode_cumulative
return self.rew_buf, is_success
def _eval_stats(self, is_success: Tensor) -> None:
if self.eval_stats:
frame: int = self.frame_since_restart
n_frames = torch.empty_like(self.last_success_step).fill_(frame)
self.success_time = torch.where(is_success, n_frames - self.last_success_step, self.success_time)
self.last_success_step = torch.where(is_success, n_frames, self.last_success_step)
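            # success_time is therefore the number of frames between consecutive successes for each env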
mask_ = self.success_time > 0
if any(mask_):
avg_time_mean = ((self.success_time * mask_).sum(dim=0) / mask_.sum(dim=0)).item()
else:
avg_time_mean = math.nan
self.total_resets = self.total_resets + self.reset_buf.sum()
self.total_successes = self.total_successes + (self.successes * self.reset_buf).sum()
self.total_num_resets += self.reset_buf
reset_ids = self.reset_buf.nonzero().squeeze()
last_successes = self.successes[reset_ids].long()
self.successes_count[last_successes] += 1
if frame % 100 == 0:
# The direct average shows the overall result more quickly, but slightly undershoots long term
# policy performance.
print(f"Max num successes: {self.successes.max().item()}")
print(f"Average consecutive successes: {self.prev_episode_successes.mean().item():.2f}")
print(f"Total num resets: {self.total_num_resets.sum().item()} --> {self.total_num_resets}")
print(f"Reset percentage: {(self.total_num_resets > 0).sum() / self.num_envs:.2%}")
print(f"Last ep successes: {self.prev_episode_successes.mean().item():.2f}")
print(f"Last ep true objective: {self.prev_episode_true_objective.mean().item():.2f}")
self.eval_summaries.add_scalar("last_ep_successes", self.prev_episode_successes.mean().item(), frame)
self.eval_summaries.add_scalar(
"last_ep_true_objective", self.prev_episode_true_objective.mean().item(), frame
)
self.eval_summaries.add_scalar(
"reset_stats/reset_percentage", (self.total_num_resets > 0).sum() / self.num_envs, frame
)
self.eval_summaries.add_scalar("reset_stats/min_num_resets", self.total_num_resets.min().item(), frame)
self.eval_summaries.add_scalar("policy_speed/avg_success_time_frames", avg_time_mean, frame)
frame_time = self.control_freq_inv * self.dt
self.eval_summaries.add_scalar(
"policy_speed/avg_success_time_seconds", avg_time_mean * frame_time, frame
)
self.eval_summaries.add_scalar(
"policy_speed/avg_success_per_minute", 60.0 / (avg_time_mean * frame_time), frame
)
print(f"Policy speed (successes per minute): {60.0 / (avg_time_mean * frame_time):.2f}")
# create a matplotlib bar chart of the self.successes_count
import matplotlib.pyplot as plt
plt.bar(list(range(self.max_consecutive_successes + 1)), self.successes_count.cpu().numpy())
plt.title("Successes histogram")
plt.xlabel("Successes")
plt.ylabel("Frequency")
plt.savefig(f"{self.eval_summary_dir}/successes_histogram.png")
plt.clf()
def compute_observations(self) -> Tuple[Tensor, int]:
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_rigid_body_state_tensor(self.sim)
self.object_state = self.root_state_tensor[self.object_indices, 0:13]
self.object_pose = self.root_state_tensor[self.object_indices, 0:7]
self.object_pos = self.root_state_tensor[self.object_indices, 0:3]
self.object_rot = self.root_state_tensor[self.object_indices, 3:7]
self.object_linvel = self.root_state_tensor[self.object_indices, 7:10]
self.object_angvel = self.root_state_tensor[self.object_indices, 10:13]
self.goal_pose = self.goal_states[:, 0:7]
self.goal_pos = self.goal_states[:, 0:3]
self.goal_rot = self.goal_states[:, 3:7]
self._palm_state = self.rigid_body_states[:, self.allegro_palm_handles]
palm_pos = self._palm_state[..., 0:3] # [num_envs, num_arms, 3]
self._palm_rot = self._palm_state[..., 3:7] # [num_envs, num_arms, 4]
for arm_idx in range(self.num_arms):
self.palm_center_pos[:, arm_idx] = palm_pos[:, arm_idx] + quat_rotate(
self._palm_rot[:, arm_idx], self.palm_center_offset
)
self.fingertip_state = self.rigid_body_states[:, self.allegro_fingertip_handles][:, :, 0:13]
self.fingertip_pos = self.fingertip_state[:, :, 0:3]
self.fingertip_rot = self.fingertip_state[:, :, 3:7]
if hasattr(self, "fingertip_pos_rel_object"):
self.fingertip_pos_rel_object_prev[:, :, :] = self.fingertip_pos_rel_object
else:
self.fingertip_pos_rel_object_prev = None
self.fingertip_pos_offset = torch.zeros_like(self.fingertip_pos).to(self.device)
for arm_idx in range(self.num_arms):
for i in range(self.num_fingertips):
finger_idx = arm_idx * self.num_fingertips + i
self.fingertip_pos_offset[:, finger_idx] = self.fingertip_pos[:, finger_idx] + quat_rotate(
self.fingertip_rot[:, finger_idx], self.fingertip_offsets[:, i]
)
obj_pos_repeat = self.object_pos.unsqueeze(1).repeat(1, self.num_arms * self.num_fingertips, 1)
self.fingertip_pos_rel_object = self.fingertip_pos_offset - obj_pos_repeat
self.curr_fingertip_distances = torch.norm(
self.fingertip_pos_rel_object.view(self.num_envs, self.num_arms, self.num_fingertips, -1), dim=-1
)
        # when the episode ends or the target changes we reset this to -1; this initializes it to the actual distance on the 1st frame of the episode
self.closest_fingertip_dist = torch.where(
self.closest_fingertip_dist < 0.0, self.curr_fingertip_distances, self.closest_fingertip_dist
)
palm_center_repeat = self.palm_center_pos.unsqueeze(2).repeat(
1, 1, self.num_fingertips, 1
) # [num_envs, num_arms, num_fingertips, 3] == [num_envs, 2, 4, 3]
self.fingertip_pos_rel_palm = self.fingertip_pos_offset - palm_center_repeat.view(
self.num_envs, self.num_arms * self.num_fingertips, 3
) # [num_envs, num_arms * num_fingertips, 3] == [num_envs, 8, 3]
if self.fingertip_pos_rel_object_prev is None:
self.fingertip_pos_rel_object_prev = self.fingertip_pos_rel_object.clone()
for i in range(self.num_keypoints):
self.obj_keypoint_pos[:, i] = self.object_pos + quat_rotate(
self.object_rot, self.object_keypoint_offsets[:, i]
)
self.goal_keypoint_pos[:, i] = self.goal_pos + quat_rotate(
self.goal_rot, self.object_keypoint_offsets[:, i]
)
self.keypoints_rel_goal = self.obj_keypoint_pos - self.goal_keypoint_pos
palm_center_repeat = self.palm_center_pos.unsqueeze(2).repeat(1, 1, self.num_keypoints, 1)
obj_kp_pos_repeat = self.obj_keypoint_pos.unsqueeze(1).repeat(1, self.num_arms, 1, 1)
self.keypoints_rel_palm = obj_kp_pos_repeat - palm_center_repeat
self.keypoints_rel_palm = self.keypoints_rel_palm.view(self.num_envs, self.num_arms * self.num_keypoints, 3)
# self.keypoints_rel_palm = self.obj_keypoint_pos - palm_center_repeat.view(
# self.num_envs, self.num_arms * self.num_keypoints, 3
# )
self.keypoint_distances_l2 = torch.norm(self.keypoints_rel_goal, dim=-1)
# furthest keypoint from the goal
self.keypoints_max_dist = self.keypoint_distances_l2.max(dim=-1).values
        # this is the closest the furthest-from-goal keypoint has been to the target in the current episode
# make sure we initialize this value before using it for obs or rewards
self.closest_keypoint_max_dist = torch.where(
self.closest_keypoint_max_dist < 0.0, self.keypoints_max_dist, self.closest_keypoint_max_dist
)
if self.obs_type == "full_state":
full_state_size, reward_obs_ofs = self.compute_full_state(self.obs_buf)
assert (
full_state_size == self.full_state_size
), f"Expected full state size {self.full_state_size}, actual: {full_state_size}"
return self.obs_buf, reward_obs_ofs
else:
raise ValueError("Unkown observations type!")
def compute_full_state(self, buf: Tensor) -> Tuple[int, int]:
num_dofs = self.num_hand_arm_dofs * self.num_arms
ofs: int = 0
# dof positions
buf[:, ofs : ofs + num_dofs] = unscale(
self.arm_hand_dof_pos[:, :num_dofs],
self.arm_hand_dof_lower_limits[:num_dofs],
self.arm_hand_dof_upper_limits[:num_dofs],
)
ofs += num_dofs
# dof velocities
buf[:, ofs : ofs + num_dofs] = self.arm_hand_dof_vel[:, :num_dofs]
ofs += num_dofs
# palm pos
num_palm_coords = 3 * self.num_arms
buf[:, ofs : ofs + num_palm_coords] = self.palm_center_pos.view(self.num_envs, num_palm_coords)
ofs += num_palm_coords
# palm rot, linvel, ang vel
num_palm_rot_vel_angvel = 10 * self.num_arms
buf[:, ofs : ofs + num_palm_rot_vel_angvel] = self._palm_state[..., 3:13].reshape(
self.num_envs, num_palm_rot_vel_angvel
)
ofs += num_palm_rot_vel_angvel
# object rot, linvel, ang vel
buf[:, ofs : ofs + 10] = self.object_state[:, 3:13]
ofs += 10
# fingertip pos relative to the palm of the hand
fingertip_rel_pos_size = 3 * self.num_arms * self.num_fingertips
buf[:, ofs : ofs + fingertip_rel_pos_size] = self.fingertip_pos_rel_palm.reshape(
self.num_envs, fingertip_rel_pos_size
)
ofs += fingertip_rel_pos_size
# keypoint distances relative to the palm of the hand
keypoint_rel_palm_size = 3 * self.num_arms * self.num_keypoints
buf[:, ofs : ofs + keypoint_rel_palm_size] = self.keypoints_rel_palm.reshape(
self.num_envs, keypoint_rel_palm_size
)
ofs += keypoint_rel_palm_size
# keypoint distances relative to the goal
keypoint_rel_pos_size = 3 * self.num_keypoints
buf[:, ofs : ofs + keypoint_rel_pos_size] = self.keypoints_rel_goal.reshape(
self.num_envs, keypoint_rel_pos_size
)
ofs += keypoint_rel_pos_size
# object scales
buf[:, ofs : ofs + 3] = self.object_scales
ofs += 3
# closest distance to the furthest of all keypoints achieved so far in this episode
buf[:, ofs : ofs + 1] = self.closest_keypoint_max_dist.unsqueeze(-1)
# print(f"closest_keypoint_max_dist: {self.closest_keypoint_max_dist[0]}")
ofs += 1
# commented out for 2-hand version to minimize the number of observations
# closest distance between a fingertip and an object achieved since last target reset
# this should help the critic predict the anticipated fingertip reward
# buf[:, ofs : ofs + self.num_fingertips] = self.closest_fingertip_dist
# print(f"closest_fingertip_dist: {self.closest_fingertip_dist[0]}")
# ofs += self.num_fingertips
# indicates whether we already lifted the object from the table or not, should help the critic be more accurate
buf[:, ofs : ofs + 1] = self.lifted_object.unsqueeze(-1)
# print(f"Lifted object: {self.lifted_object[0]}")
ofs += 1
# this should help the critic predict the future rewards better and anticipate the episode termination
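        # log(x / 10 + 1) keeps the feature in a small range, e.g. progress 0 -> 0.0 and progress 300 -> log(31) ~= 3.43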
buf[:, ofs : ofs + 1] = torch.log(self.progress_buf / 10 + 1).unsqueeze(-1)
ofs += 1
buf[:, ofs : ofs + 1] = torch.log(self.successes + 1).unsqueeze(-1)
ofs += 1
# actions
# buf[:, ofs : ofs + self.num_actions] = self.actions
# ofs += self.num_actions
# state_str = [f"{state.item():.3f}" for state in buf[0, : self.full_state_size]]
# print(' '.join(state_str))
# this is where we will add the reward observation
reward_obs_ofs = ofs
ofs += 1
assert ofs == self.full_state_size
return ofs, reward_obs_ofs
def clamp_obs(self, obs_buf: Tensor) -> None:
if self.clamp_abs_observations > 0:
obs_buf.clamp_(-self.clamp_abs_observations, self.clamp_abs_observations)
def get_random_quat(self, env_ids):
# https://github.com/KieranWynn/pyquaternion/blob/master/pyquaternion/quaternion.py
# https://github.com/KieranWynn/pyquaternion/blob/master/pyquaternion/quaternion.py#L261
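        # Shoemake-style uniform sampling: with u1, u2, u3 ~ U(0, 1), the quaternion with components
        # w = sqrt(1-u1)*sin(2*pi*u2), x = sqrt(1-u1)*cos(2*pi*u2), y = sqrt(u1)*sin(2*pi*u3),
        # z = sqrt(u1)*cos(2*pi*u3) is uniformly distributed over rotations; the tensor below is (x, y, z, w)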
uvw = torch_rand_float(0, 1.0, (len(env_ids), 3), device=self.device)
q_w = torch.sqrt(1.0 - uvw[:, 0]) * (torch.sin(2 * np.pi * uvw[:, 1]))
q_x = torch.sqrt(1.0 - uvw[:, 0]) * (torch.cos(2 * np.pi * uvw[:, 1]))
q_y = torch.sqrt(uvw[:, 0]) * (torch.sin(2 * np.pi * uvw[:, 2]))
q_z = torch.sqrt(uvw[:, 0]) * (torch.cos(2 * np.pi * uvw[:, 2]))
new_rot = torch.cat((q_x.unsqueeze(-1), q_y.unsqueeze(-1), q_z.unsqueeze(-1), q_w.unsqueeze(-1)), dim=-1)
return new_rot
def reset_target_pose(self, env_ids: Tensor) -> None:
self._reset_target(env_ids)
self.reset_goal_buf[env_ids] = 0
self.near_goal_steps[env_ids] = 0
self.closest_keypoint_max_dist[env_ids] = -1
def reset_object_pose(self, env_ids):
obj_indices = self.object_indices[env_ids]
# reset object
table_width = 1.1
obj_x_ofs = table_width / 2 - 0.2
left_right_random = torch_rand_float(-1.0, 1.0, (len(env_ids), 1), device=self.device)
x_pos = torch.where(
left_right_random > 0,
obj_x_ofs * torch.ones_like(left_right_random),
-obj_x_ofs * torch.ones_like(left_right_random),
)
rand_pos_floats = torch_rand_float(-1.0, 1.0, (len(env_ids), 3), device=self.device)
self.root_state_tensor[obj_indices] = self.object_init_state[env_ids].clone()
# indices 0..2 correspond to the object position
self.root_state_tensor[obj_indices, 0:1] = x_pos + self.reset_position_noise_x * rand_pos_floats[:, 0:1]
self.root_state_tensor[obj_indices, 1:2] = (
self.object_init_state[env_ids, 1:2] + self.reset_position_noise_y * rand_pos_floats[:, 1:2]
)
self.root_state_tensor[obj_indices, 2:3] = (
self.object_init_state[env_ids, 2:3] + self.reset_position_noise_z * rand_pos_floats[:, 2:3]
)
new_object_rot = self.get_random_quat(env_ids)
# indices 3,4,5,6 correspond to the rotation quaternion
self.root_state_tensor[obj_indices, 3:7] = new_object_rot
self.root_state_tensor[obj_indices, 7:13] = torch.zeros_like(self.root_state_tensor[obj_indices, 7:13])
# since we reset the object, we also should update distances between fingers and the object
self.closest_fingertip_dist[env_ids] = -1
def deferred_set_actor_root_state_tensor_indexed(self, obj_indices: List[Tensor]) -> None:
self.set_actor_root_state_object_indices.extend(obj_indices)
def set_actor_root_state_tensor_indexed(self) -> None:
object_indices: List[Tensor] = self.set_actor_root_state_object_indices
if not object_indices:
# nothing to set
return
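        # the same actor index can be queued more than once between physics steps (e.g. the object index
        # is deferred both by reset_idx() and by a goal reset in the same frame); the Gym API is assumed
        # to want each index at most once per call, hence the deduplication below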
unique_object_indices = torch.unique(torch.cat(object_indices).to(torch.int32))
self.gym.set_actor_root_state_tensor_indexed(
self.sim,
gymtorch.unwrap_tensor(self.root_state_tensor),
gymtorch.unwrap_tensor(unique_object_indices),
len(unique_object_indices),
)
self.set_actor_root_state_object_indices = []
def reset_idx(self, env_ids: Tensor) -> None:
# randomization can happen only at reset time, since it can reset actor positions on GPU
if self.randomize:
self.apply_randomizations(self.randomization_params)
# randomize start object poses
self.reset_target_pose(env_ids)
# reset rigid body forces
self.rb_forces[env_ids, :, :] = 0.0
# reset object
self.reset_object_pose(env_ids)
# flattened list of arm actors that we need to reset
arm_indices = self.arm_indices[env_ids].to(torch.int32).flatten()
# reset random force probabilities
self.random_force_prob[env_ids] = torch.exp(
(torch.log(self.force_prob_range[0]) - torch.log(self.force_prob_range[1]))
* torch.rand(len(env_ids), device=self.device)
+ torch.log(self.force_prob_range[1])
)
# reset allegro hand
delta_max = self.arm_hand_dof_upper_limits - self.hand_arm_default_dof_pos
delta_min = self.arm_hand_dof_lower_limits - self.hand_arm_default_dof_pos
rand_dof_floats = torch_rand_float(
0.0, 1.0, (len(env_ids), self.num_arms * self.num_hand_arm_dofs), device=self.device
)
rand_delta = delta_min + (delta_max - delta_min) * rand_dof_floats
allegro_pos = self.hand_arm_default_dof_pos + self.pos_noise_coeff * rand_delta
self.arm_hand_dof_pos[env_ids, ...] = allegro_pos
self.prev_targets[env_ids, ...] = allegro_pos
self.cur_targets[env_ids, ...] = allegro_pos
rand_vel_floats = torch_rand_float(
-1.0, 1.0, (len(env_ids), self.num_hand_arm_dofs * self.num_arms), device=self.device
)
self.arm_hand_dof_vel[env_ids, :] = self.reset_dof_vel_noise * rand_vel_floats
arm_indices_gym = gymtorch.unwrap_tensor(arm_indices)
num_arm_indices: int = len(arm_indices)
self.gym.set_dof_position_target_tensor_indexed(
self.sim, gymtorch.unwrap_tensor(self.prev_targets), arm_indices_gym, num_arm_indices
)
self.gym.set_dof_state_tensor_indexed(
self.sim, gymtorch.unwrap_tensor(self.dof_state), arm_indices_gym, num_arm_indices
)
object_indices = [self.object_indices[env_ids]]
object_indices.extend(self._extra_object_indices(env_ids))
self.deferred_set_actor_root_state_tensor_indexed(object_indices)
self.progress_buf[env_ids] = 0
self.reset_buf[env_ids] = 0
self.prev_episode_successes[env_ids] = self.successes[env_ids]
self.successes[env_ids] = 0
self.prev_episode_true_objective[env_ids] = self.true_objective[env_ids]
self.true_objective[env_ids] = 0
self.lifted_object[env_ids] = False
# -1 here indicates that the value is not initialized
self.closest_keypoint_max_dist[env_ids] = -1
self.closest_fingertip_dist[env_ids] = -1
self.near_goal_steps[env_ids] = 0
for key in self.rewards_episode.keys():
# print(f"{env_ids}: {key}: {self.rewards_episode[key][env_ids]}")
self.rewards_episode[key][env_ids] = 0
self.extras["scalars"] = dict()
self.extras["scalars"]["success_tolerance"] = self.success_tolerance
def pre_physics_step(self, actions):
self.actions = actions.clone().to(self.device)
reset_env_ids = self.reset_buf.nonzero(as_tuple=False).squeeze(-1)
reset_goal_env_ids = self.reset_goal_buf.nonzero(as_tuple=False).squeeze(-1)
self.reset_target_pose(reset_goal_env_ids)
if len(reset_env_ids) > 0:
self.reset_idx(reset_env_ids)
self.set_actor_root_state_tensor_indexed()
if self.use_relative_control:
raise NotImplementedError("Use relative control False for now")
else:
            # TODO: this uses simplified finger control compared to the original 1-hand env code
num_dofs: int = self.num_hand_arm_dofs * self.num_arms
# target position control for the hand DOFs
self.cur_targets[..., :num_dofs] = scale(
actions[..., :num_dofs],
self.arm_hand_dof_lower_limits[:num_dofs],
self.arm_hand_dof_upper_limits[:num_dofs],
)
self.cur_targets[..., :num_dofs] = (
self.act_moving_average * self.cur_targets[..., :num_dofs]
+ (1.0 - self.act_moving_average) * self.prev_targets[..., :num_dofs]
)
self.cur_targets[..., :num_dofs] = tensor_clamp(
self.cur_targets[..., :num_dofs],
self.arm_hand_dof_lower_limits[:num_dofs],
self.arm_hand_dof_upper_limits[:num_dofs],
)
self.prev_targets[...] = self.cur_targets[...]
self.gym.set_dof_position_target_tensor(self.sim, gymtorch.unwrap_tensor(self.cur_targets))
if self.force_scale > 0.0:
self.rb_forces *= torch.pow(self.force_decay, self.dt / self.force_decay_interval)
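            # exponential decay of the leftover random forces; with the defaults read above
            # (force_decay = 0.99, force_decay_interval = 0.08 s), each control step scales
            # existing forces by 0.99^(dt / 0.08)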
# apply new forces
force_indices = (torch.rand(self.num_envs, device=self.device) < self.random_force_prob).nonzero()
self.rb_forces[force_indices, self.object_rb_handles, :] = (
torch.randn(self.rb_forces[force_indices, self.object_rb_handles, :].shape, device=self.device)
* self.object_rb_masses
* self.force_scale
)
self.gym.apply_rigid_body_force_tensors(
self.sim, gymtorch.unwrap_tensor(self.rb_forces), None, gymapi.LOCAL_SPACE
)
def post_physics_step(self):
self.frame_since_restart += 1
self.progress_buf += 1
self.randomize_buf += 1
self._extra_curriculum()
obs_buf, reward_obs_ofs = self.compute_observations()
rewards, is_success = self.compute_kuka_reward()
# add rewards to observations
reward_obs_scale = 0.01
obs_buf[:, reward_obs_ofs : reward_obs_ofs + 1] = rewards.unsqueeze(-1) * reward_obs_scale
self.clamp_obs(obs_buf)
self._eval_stats(is_success)
if self.viewer and self.debug_viz:
# draw axes on target object
self.gym.clear_lines(self.viewer)
self.gym.refresh_rigid_body_state_tensor(self.sim)
axes_geom = gymutil.AxesGeometry(0.1)
sphere_pose = gymapi.Transform()
sphere_pose.r = gymapi.Quat(0, 0, 0, 1)
sphere_geom = gymutil.WireframeSphereGeometry(0.01, 8, 8, sphere_pose, color=(1, 1, 0))
sphere_geom_white = gymutil.WireframeSphereGeometry(0.02, 8, 8, sphere_pose, color=(1, 1, 1))
palm_center_pos_cpu = self.palm_center_pos.cpu().numpy()
palm_rot_cpu = self._palm_rot.cpu().numpy()
for i in range(self.num_envs):
palm_center_transform = gymapi.Transform()
palm_center_transform.p = gymapi.Vec3(*palm_center_pos_cpu[i])
palm_center_transform.r = gymapi.Quat(*palm_rot_cpu[i])
gymutil.draw_lines(sphere_geom_white, self.gym, self.viewer, self.envs[i], palm_center_transform)
for j in range(self.num_fingertips):
fingertip_pos_cpu = self.fingertip_pos_offset[:, j].cpu().numpy()
fingertip_rot_cpu = self.fingertip_rot[:, j].cpu().numpy()
for i in range(self.num_envs):
fingertip_transform = gymapi.Transform()
fingertip_transform.p = gymapi.Vec3(*fingertip_pos_cpu[i])
fingertip_transform.r = gymapi.Quat(*fingertip_rot_cpu[i])
gymutil.draw_lines(sphere_geom, self.gym, self.viewer, self.envs[i], fingertip_transform)
for j in range(self.num_keypoints):
keypoint_pos_cpu = self.obj_keypoint_pos[:, j].cpu().numpy()
goal_keypoint_pos_cpu = self.goal_keypoint_pos[:, j].cpu().numpy()
for i in range(self.num_envs):
keypoint_transform = gymapi.Transform()
keypoint_transform.p = gymapi.Vec3(*keypoint_pos_cpu[i])
gymutil.draw_lines(sphere_geom, self.gym, self.viewer, self.envs[i], keypoint_transform)
goal_keypoint_transform = gymapi.Transform()
goal_keypoint_transform.p = gymapi.Vec3(*goal_keypoint_pos_cpu[i])
gymutil.draw_lines(sphere_geom, self.gym, self.viewer, self.envs[i], goal_keypoint_transform)
| 65,956 | Python | 45.579802 | 145 | 0.626099 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/allegro_kuka/allegro_kuka_base.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import io
import math
import os
import random
import tempfile
from copy import copy
from os.path import join
from typing import List, Tuple
import numpy as np
import torch
from isaacgym import gymapi, gymtorch, gymutil
from torch import Tensor
from isaacgymenvs.tasks.allegro_kuka.allegro_kuka_utils import DofParameters, populate_dof_properties
from isaacgymenvs.tasks.base.vec_task import VecTask
from isaacgymenvs.tasks.allegro_kuka.generate_cuboids import (
generate_big_cuboids,
generate_default_cube,
generate_small_cuboids,
generate_sticks,
)
from isaacgymenvs.utils.torch_jit_utils import *
class AllegroKukaBase(VecTask):
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.cfg = cfg
self.frame_since_restart: int = 0 # number of control steps since last restart across all actors
self.hand_arm_asset_file: str = self.cfg["env"]["asset"]["kukaAllegro"]
self.clamp_abs_observations: float = self.cfg["env"]["clampAbsObservations"]
self.privileged_actions = self.cfg["env"]["privilegedActions"]
self.privileged_actions_torque = self.cfg["env"]["privilegedActionsTorque"]
# 4 joints for index, middle, ring, and thumb and 7 for kuka arm
self.num_arm_dofs = 7
self.num_finger_dofs = 4
self.num_allegro_fingertips = 4
self.num_hand_dofs = self.num_finger_dofs * self.num_allegro_fingertips
self.num_hand_arm_dofs = self.num_hand_dofs + self.num_arm_dofs
self.num_allegro_kuka_actions = self.num_hand_arm_dofs
if self.privileged_actions:
self.num_allegro_kuka_actions += 3
self.randomize = self.cfg["task"]["randomize"]
self.randomization_params = self.cfg["task"]["randomization_params"]
self.distance_delta_rew_scale = self.cfg["env"]["distanceDeltaRewScale"]
self.lifting_rew_scale = self.cfg["env"]["liftingRewScale"]
self.lifting_bonus = self.cfg["env"]["liftingBonus"]
self.lifting_bonus_threshold = self.cfg["env"]["liftingBonusThreshold"]
self.keypoint_rew_scale = self.cfg["env"]["keypointRewScale"]
self.kuka_actions_penalty_scale = self.cfg["env"]["kukaActionsPenaltyScale"]
self.allegro_actions_penalty_scale = self.cfg["env"]["allegroActionsPenaltyScale"]
self.dof_params: DofParameters = DofParameters.from_cfg(self.cfg)
self.initial_tolerance = self.cfg["env"]["successTolerance"]
self.success_tolerance = self.initial_tolerance
self.target_tolerance = self.cfg["env"]["targetSuccessTolerance"]
self.tolerance_curriculum_increment = self.cfg["env"]["toleranceCurriculumIncrement"]
self.tolerance_curriculum_interval = self.cfg["env"]["toleranceCurriculumInterval"]
self.save_states = self.cfg["env"]["saveStates"]
self.save_states_filename = self.cfg["env"]["saveStatesFile"]
self.should_load_initial_states = self.cfg["env"]["loadInitialStates"]
self.load_states_filename = self.cfg["env"]["loadStatesFile"]
self.initial_root_state_tensors = self.initial_dof_state_tensors = None
self.initial_state_idx = self.num_initial_states = 0
self.reach_goal_bonus = self.cfg["env"]["reachGoalBonus"]
self.fall_dist = self.cfg["env"]["fallDistance"]
self.fall_penalty = self.cfg["env"]["fallPenalty"]
self.reset_position_noise_x = self.cfg["env"]["resetPositionNoiseX"]
self.reset_position_noise_y = self.cfg["env"]["resetPositionNoiseY"]
self.reset_position_noise_z = self.cfg["env"]["resetPositionNoiseZ"]
self.reset_rotation_noise = self.cfg["env"]["resetRotationNoise"]
self.reset_dof_pos_noise_fingers = self.cfg["env"]["resetDofPosRandomIntervalFingers"]
self.reset_dof_pos_noise_arm = self.cfg["env"]["resetDofPosRandomIntervalArm"]
self.reset_dof_vel_noise = self.cfg["env"]["resetDofVelRandomInterval"]
self.force_scale = self.cfg["env"].get("forceScale", 0.0)
self.force_prob_range = self.cfg["env"].get("forceProbRange", [0.001, 0.1])
self.force_decay = self.cfg["env"].get("forceDecay", 0.99)
self.force_decay_interval = self.cfg["env"].get("forceDecayInterval", 0.08)
self.hand_dof_speed_scale = self.cfg["env"]["dofSpeedScale"]
self.use_relative_control = self.cfg["env"]["useRelativeControl"]
self.act_moving_average = self.cfg["env"]["actionsMovingAverage"]
self.debug_viz = self.cfg["env"]["enableDebugVis"]
self.max_episode_length = self.cfg["env"]["episodeLength"]
self.reset_time = self.cfg["env"].get("resetTime", -1.0)
self.max_consecutive_successes = self.cfg["env"]["maxConsecutiveSuccesses"]
self.success_steps: int = self.cfg["env"]["successSteps"]
# 1.0 means keypoints correspond to the corners of the object
# larger values help the agent to prioritize rotation matching
self.keypoint_scale = self.cfg["env"]["keypointScale"]
# size of the object (i.e. cube) before scaling
self.object_base_size = self.cfg["env"]["objectBaseSize"]
# whether to sample random object dimensions
self.randomize_object_dimensions = self.cfg["env"]["randomizeObjectDimensions"]
self.with_small_cuboids = self.cfg["env"]["withSmallCuboids"]
self.with_big_cuboids = self.cfg["env"]["withBigCuboids"]
self.with_sticks = self.cfg["env"]["withSticks"]
self.with_dof_force_sensors = False
# create fingertip force-torque sensors
self.with_fingertip_force_sensors = False
if self.reset_time > 0.0:
self.max_episode_length = int(round(self.reset_time / (self.control_freq_inv * self.sim_params.dt)))
print("Reset time: ", self.reset_time)
print("New episode length: ", self.max_episode_length)
self.object_type = self.cfg["env"]["objectType"]
assert self.object_type in ["block"]
self.asset_files_dict = {
"block": "urdf/objects/cube_multicolor.urdf", # 0.05m box
"table": "urdf/table_narrow.urdf",
"bucket": "urdf/objects/bucket.urdf",
"lightbulb": "lightbulb/A60_E27_SI.urdf",
"socket": "E27SocketSimple.urdf",
"ball": "urdf/objects/ball.urdf",
}
self.keypoints_offsets = self._object_keypoint_offsets()
self.num_keypoints = len(self.keypoints_offsets)
self.allegro_fingertips = ["index_link_3", "middle_link_3", "ring_link_3", "thumb_link_3"]
self.fingertip_offsets = np.array(
[[0.05, 0.005, 0], [0.05, 0.005, 0], [0.05, 0.005, 0], [0.06, 0.005, 0]], dtype=np.float32
)
self.palm_offset = np.array([-0.00, -0.02, 0.16], dtype=np.float32)
assert self.num_allegro_fingertips == len(self.allegro_fingertips)
# can be only "full_state"
self.obs_type = self.cfg["env"]["observationType"]
        if self.obs_type not in ["full_state"]:
            raise Exception("Unknown type of observations!")
print("Obs type:", self.obs_type)
num_dof_pos = self.num_hand_arm_dofs
num_dof_vel = self.num_hand_arm_dofs
num_dof_forces = self.num_hand_arm_dofs if self.with_dof_force_sensors else 0
palm_pos_size = 3
palm_rot_vel_angvel_size = 10
obj_rot_vel_angvel_size = 10
fingertip_rel_pos_size = 3 * self.num_allegro_fingertips
keypoint_info_size = self.num_keypoints * 3 + self.num_keypoints * 3
object_scales_size = 3
max_keypoint_dist_size = 1
lifted_object_flag_size = 1
progress_obs_size = 1 + 1
closest_fingertip_distance_size = self.num_allegro_fingertips
reward_obs_size = 1
self.full_state_size = (
num_dof_pos
+ num_dof_vel
+ num_dof_forces
+ palm_pos_size
+ palm_rot_vel_angvel_size
+ obj_rot_vel_angvel_size
+ fingertip_rel_pos_size
+ keypoint_info_size
+ object_scales_size
+ max_keypoint_dist_size
+ lifted_object_flag_size
+ progress_obs_size
+ closest_fingertip_distance_size
+ reward_obs_size
# + self.num_allegro_actions
)
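        # Worked size example (informal; assumes a derived task with 8 cube-corner
        # keypoints, no DOF force sensors, and 23 hand+arm DOFs):
        #   23 (dof pos) + 23 (dof vel) + 0 (dof forces) + 3 (palm pos)
        #   + 10 (palm rot/linvel/angvel) + 10 (object rot/linvel/angvel)
        #   + 12 (fingertip rel pos) + 24 + 24 (keypoints rel palm / rel goal)
        #   + 3 (object scales) + 1 (max keypoint dist) + 1 (lifted flag)
        #   + 2 (progress) + 4 (closest fingertip dist) + 1 (reward) = 141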
num_states = self.full_state_size
self.num_obs_dict = {
"full_state": self.full_state_size,
}
self.up_axis = "z"
self.fingertip_obs = True
self.cfg["env"]["numObservations"] = self.num_obs_dict[self.obs_type]
self.cfg["env"]["numStates"] = num_states
self.cfg["env"]["numActions"] = self.num_allegro_kuka_actions
self.cfg["device_type"] = sim_device.split(":")[0]
self.cfg["device_id"] = int(sim_device.split(":")[1])
self.cfg["headless"] = headless
super().__init__(
config=self.cfg, rl_device=rl_device, sim_device=sim_device, graphics_device_id=graphics_device_id,
headless=headless, virtual_screen_capture=virtual_screen_capture, force_render=force_render,
)
if self.viewer is not None:
cam_pos = gymapi.Vec3(10.0, 5.0, 1.0)
cam_target = gymapi.Vec3(6.0, 5.0, 0.0)
self.gym.viewer_camera_look_at(self.viewer, None, cam_pos, cam_target)
# volume to sample target position from
target_volume_origin = np.array([0, 0.05, 0.8], dtype=np.float32)
target_volume_extent = np.array([[-0.4, 0.4], [-0.05, 0.3], [-0.12, 0.25]], dtype=np.float32)
self.target_volume_origin = torch.from_numpy(target_volume_origin).to(self.device).float()
self.target_volume_extent = torch.from_numpy(target_volume_extent).to(self.device).float()
# get gym GPU state tensors
actor_root_state_tensor = self.gym.acquire_actor_root_state_tensor(self.sim)
dof_state_tensor = self.gym.acquire_dof_state_tensor(self.sim)
rigid_body_tensor = self.gym.acquire_rigid_body_state_tensor(self.sim)
if self.obs_type == "full_state":
if self.with_fingertip_force_sensors:
sensor_tensor = self.gym.acquire_force_sensor_tensor(self.sim)
self.vec_sensor_tensor = gymtorch.wrap_tensor(sensor_tensor).view(
self.num_envs, self.num_allegro_fingertips * 6
)
if self.with_dof_force_sensors:
dof_force_tensor = self.gym.acquire_dof_force_tensor(self.sim)
self.dof_force_tensor = gymtorch.wrap_tensor(dof_force_tensor).view(
self.num_envs, self.num_hand_arm_dofs
)
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_rigid_body_state_tensor(self.sim)
# create some wrapper tensors for different slices
self.dof_state = gymtorch.wrap_tensor(dof_state_tensor)
self.hand_arm_default_dof_pos = torch.zeros(self.num_hand_arm_dofs, dtype=torch.float, device=self.device)
desired_kuka_pos = torch.tensor([-1.571, 1.571, -0.000, 1.376, -0.000, 1.485, 2.358]) # pose v1
# desired_kuka_pos = torch.tensor([-2.135, 0.843, 1.786, -0.903, -2.262, 1.301, -2.791]) # pose v2
self.hand_arm_default_dof_pos[:7] = desired_kuka_pos
self.arm_hand_dof_state = self.dof_state.view(self.num_envs, -1, 2)[:, : self.num_hand_arm_dofs]
self.arm_hand_dof_pos = self.arm_hand_dof_state[..., 0]
self.arm_hand_dof_vel = self.arm_hand_dof_state[..., 1]
self.rigid_body_states = gymtorch.wrap_tensor(rigid_body_tensor).view(self.num_envs, -1, 13)
self.num_bodies = self.rigid_body_states.shape[1]
self.root_state_tensor = gymtorch.wrap_tensor(actor_root_state_tensor).view(-1, 13)
self.set_actor_root_state_object_indices: List[Tensor] = []
self.num_dofs = self.gym.get_sim_dof_count(self.sim) // self.num_envs
self.prev_targets = torch.zeros((self.num_envs, self.num_dofs), dtype=torch.float, device=self.device)
self.cur_targets = torch.zeros((self.num_envs, self.num_dofs), dtype=torch.float, device=self.device)
self.global_indices = torch.arange(self.num_envs * 3, dtype=torch.int32, device=self.device).view(
self.num_envs, -1
)
self.x_unit_tensor = to_torch([1, 0, 0], dtype=torch.float, device=self.device).repeat((self.num_envs, 1))
self.y_unit_tensor = to_torch([0, 1, 0], dtype=torch.float, device=self.device).repeat((self.num_envs, 1))
self.z_unit_tensor = to_torch([0, 0, 1], dtype=torch.float, device=self.device).repeat((self.num_envs, 1))
self.reset_goal_buf = self.reset_buf.clone()
self.successes = torch.zeros(self.num_envs, dtype=torch.float, device=self.device)
self.prev_episode_successes = torch.zeros_like(self.successes)
# true objective value for the whole episode, plus saving values for the previous episode
self.true_objective = torch.zeros(self.num_envs, dtype=torch.float, device=self.device)
self.prev_episode_true_objective = torch.zeros_like(self.true_objective)
self.total_successes = 0
self.total_resets = 0
# object apply random forces parameters
self.force_decay = to_torch(self.force_decay, dtype=torch.float, device=self.device)
self.force_prob_range = to_torch(self.force_prob_range, dtype=torch.float, device=self.device)
self.random_force_prob = torch.exp(
(torch.log(self.force_prob_range[0]) - torch.log(self.force_prob_range[1]))
* torch.rand(self.num_envs, device=self.device)
+ torch.log(self.force_prob_range[1])
)
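        # Informal sketch of the sampling above: per-env probabilities are
        # log-uniform in [force_prob_range[0], force_prob_range[1]], i.e.
        #   p = exp(uniform(log(p_min), log(p_max)))
        # With the default range [0.001, 0.1] half of the envs get a per-step
        # force probability below the geometric mean sqrt(0.001 * 0.1) = 0.01.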
self.rb_forces = torch.zeros((self.num_envs, self.num_bodies, 3), dtype=torch.float, device=self.device)
self.action_torques = torch.zeros((self.num_envs, self.num_bodies, 3), dtype=torch.float, device=self.device)
self.obj_keypoint_pos = torch.zeros(
(self.num_envs, self.num_keypoints, 3), dtype=torch.float, device=self.device
)
self.goal_keypoint_pos = torch.zeros(
(self.num_envs, self.num_keypoints, 3), dtype=torch.float, device=self.device
)
# how many steps we were within the goal tolerance
self.near_goal_steps = torch.zeros(self.num_envs, dtype=torch.int, device=self.device)
self.lifted_object = torch.zeros(self.num_envs, dtype=torch.bool, device=self.device)
self.closest_keypoint_max_dist = -torch.ones(self.num_envs, dtype=torch.float, device=self.device)
self.closest_fingertip_dist = -torch.ones(
[self.num_envs, self.num_allegro_fingertips], dtype=torch.float, device=self.device
)
self.furthest_hand_dist = -torch.ones([self.num_envs], dtype=torch.float, device=self.device)
self.finger_rew_coeffs = torch.ones(
[self.num_envs, self.num_allegro_fingertips], dtype=torch.float, device=self.device
)
reward_keys = [
"raw_fingertip_delta_rew",
"raw_hand_delta_penalty",
"raw_lifting_rew",
"raw_keypoint_rew",
"fingertip_delta_rew",
"hand_delta_penalty",
"lifting_rew",
"lift_bonus_rew",
"keypoint_rew",
"bonus_rew",
"kuka_actions_penalty",
"allegro_actions_penalty",
]
self.rewards_episode = {
key: torch.zeros(self.num_envs, dtype=torch.float, device=self.device) for key in reward_keys
}
self.last_curriculum_update = 0
self.episode_root_state_tensors = [[] for _ in range(self.num_envs)]
self.episode_dof_states = [[] for _ in range(self.num_envs)]
self.eval_stats: bool = self.cfg["env"]["evalStats"]
if self.eval_stats:
self.last_success_step = torch.zeros(self.num_envs, dtype=torch.float, device=self.device)
self.success_time = torch.zeros(self.num_envs, dtype=torch.float, device=self.device)
self.total_num_resets = torch.zeros(self.num_envs, dtype=torch.float, device=self.device)
self.successes_count = torch.zeros(
self.max_consecutive_successes + 1, dtype=torch.float, device=self.device
)
from tensorboardX import SummaryWriter
self.eval_summary_dir = "./eval_summaries"
# remove the old directory if it exists
if os.path.exists(self.eval_summary_dir):
import shutil
shutil.rmtree(self.eval_summary_dir)
self.eval_summaries = SummaryWriter(self.eval_summary_dir, flush_secs=3)
# AllegroKukaBase abstract interface - to be overriden in derived classes
def _object_keypoint_offsets(self):
raise NotImplementedError()
def _object_start_pose(self, allegro_pose, table_pose_dy, table_pose_dz):
object_start_pose = gymapi.Transform()
object_start_pose.p = gymapi.Vec3()
object_start_pose.p.x = allegro_pose.p.x
pose_dy, pose_dz = table_pose_dy, table_pose_dz + 0.25
object_start_pose.p.y = allegro_pose.p.y + pose_dy
object_start_pose.p.z = allegro_pose.p.z + pose_dz
return object_start_pose
def _main_object_assets_and_scales(self, object_asset_root, tmp_assets_dir):
object_asset_files, object_asset_scales = self._box_asset_files_and_scales(object_asset_root, tmp_assets_dir)
if not self.randomize_object_dimensions:
object_asset_files = object_asset_files[:1]
object_asset_scales = object_asset_scales[:1]
# randomize order
files_and_scales = list(zip(object_asset_files, object_asset_scales))
        # use a fixed seed here to make sure that the distribution of object types stays the same when we restart from a checkpoint
rng = np.random.default_rng(42)
rng.shuffle(files_and_scales)
object_asset_files, object_asset_scales = zip(*files_and_scales)
return object_asset_files, object_asset_scales
def _load_main_object_asset(self):
"""Load manipulated object and goal assets."""
object_asset_options = gymapi.AssetOptions()
object_assets = []
for object_asset_file in self.object_asset_files:
object_asset_dir = os.path.dirname(object_asset_file)
object_asset_fname = os.path.basename(object_asset_file)
object_asset_ = self.gym.load_asset(self.sim, object_asset_dir, object_asset_fname, object_asset_options)
object_assets.append(object_asset_)
object_rb_count = self.gym.get_asset_rigid_body_count(
object_assets[0]
) # assuming all of them have the same rb count
        object_shapes_count = self.gym.get_asset_rigid_shape_count(
            object_assets[0]
        )  # assuming all of them have the same shape count
return object_assets, object_rb_count, object_shapes_count
def _load_additional_assets(self, object_asset_root, arm_pose):
"""
returns: tuple (num_rigid_bodies, num_shapes)
"""
return 0, 0
def _create_additional_objects(self, env_ptr, env_idx, object_asset_idx):
pass
def _after_envs_created(self):
pass
def _extra_reset_rules(self, resets):
return resets
def _reset_target(self, env_ids: Tensor) -> None:
raise NotImplementedError()
def _extra_object_indices(self, env_ids: Tensor) -> List[Tensor]:
return []
def _extra_curriculum(self):
pass
# AllegroKukaBase implementation
def get_env_state(self):
"""
Return serializable environment state to be saved to checkpoint.
Can be used for stateful training sessions, i.e. with adaptive curriculums.
"""
return dict(
success_tolerance=self.success_tolerance,
)
def set_env_state(self, env_state):
if env_state is None:
return
for key in self.get_env_state().keys():
value = env_state.get(key, None)
if value is None:
continue
self.__dict__[key] = value
print(f"Loaded env state value {key}:{value}")
print(f"Success tolerance value after loading from checkpoint: {self.success_tolerance}")
def create_sim(self):
self.dt = self.sim_params.dt
self.up_axis_idx = 2 # index of up axis: Y=1, Z=2 (same as in allegro_hand.py)
self.sim = super().create_sim(self.device_id, self.graphics_device_id, self.physics_engine, self.sim_params)
self._create_ground_plane()
self._create_envs(self.num_envs, self.cfg["env"]["envSpacing"], int(np.sqrt(self.num_envs)))
def _create_ground_plane(self):
plane_params = gymapi.PlaneParams()
plane_params.normal = gymapi.Vec3(0.0, 0.0, 1.0)
self.gym.add_ground(self.sim, plane_params)
def _box_asset_files_and_scales(self, object_assets_root, generated_assets_dir):
files = []
scales = []
try:
filenames = os.listdir(generated_assets_dir)
for fname in filenames:
if fname.endswith(".urdf"):
os.remove(join(generated_assets_dir, fname))
except Exception as exc:
print(f"Exception {exc} while removing older procedurally-generated urdf assets")
objects_rel_path = os.path.dirname(self.asset_files_dict[self.object_type])
objects_dir = join(object_assets_root, objects_rel_path)
base_mesh = join(objects_dir, "meshes", "cube_multicolor.obj")
generate_default_cube(generated_assets_dir, base_mesh, self.object_base_size)
if self.with_small_cuboids:
generate_small_cuboids(generated_assets_dir, base_mesh, self.object_base_size)
if self.with_big_cuboids:
generate_big_cuboids(generated_assets_dir, base_mesh, self.object_base_size)
if self.with_sticks:
generate_sticks(generated_assets_dir, base_mesh, self.object_base_size)
filenames = os.listdir(generated_assets_dir)
filenames = sorted(filenames)
for fname in filenames:
if fname.endswith(".urdf"):
scale_tokens = os.path.splitext(fname)[0].split("_")[2:]
files.append(join(generated_assets_dir, fname))
scales.append([float(scale_token) / 100 for scale_token in scale_tokens])
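                    # e.g. a generated file named "cube_multicolor_65_100_165.urdf"
                    # (hypothetical name) yields scales [0.65, 1.0, 1.65]: every
                    # token after the object name is a percentage of the base size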
return files, scales
def _create_envs(self, num_envs, spacing, num_per_row):
if self.should_load_initial_states:
self.load_initial_states()
lower = gymapi.Vec3(-spacing, -spacing, 0.0)
upper = gymapi.Vec3(spacing, spacing, spacing)
asset_root = os.path.join(os.path.dirname(os.path.abspath(__file__)), "../../../assets")
object_asset_root = asset_root
tmp_assets_dir = tempfile.TemporaryDirectory()
self.object_asset_files, self.object_asset_scales = self._main_object_assets_and_scales(
object_asset_root, tmp_assets_dir.name
)
asset_options = gymapi.AssetOptions()
asset_options.fix_base_link = True
asset_options.flip_visual_attachments = False
asset_options.collapse_fixed_joints = True
asset_options.disable_gravity = True
asset_options.thickness = 0.001
asset_options.angular_damping = 0.01
asset_options.linear_damping = 0.01
if self.physics_engine == gymapi.SIM_PHYSX:
asset_options.use_physx_armature = True
asset_options.default_dof_drive_mode = gymapi.DOF_MODE_POS
print(f"Loading asset {self.hand_arm_asset_file} from {asset_root}")
allegro_kuka_asset = self.gym.load_asset(self.sim, asset_root, self.hand_arm_asset_file, asset_options)
print(f"Loaded asset {allegro_kuka_asset}")
self.num_hand_arm_bodies = self.gym.get_asset_rigid_body_count(allegro_kuka_asset)
self.num_hand_arm_shapes = self.gym.get_asset_rigid_shape_count(allegro_kuka_asset)
num_hand_arm_dofs = self.gym.get_asset_dof_count(allegro_kuka_asset)
assert (
self.num_hand_arm_dofs == num_hand_arm_dofs
), f"Number of DOFs in asset {allegro_kuka_asset} is {num_hand_arm_dofs}, but {self.num_hand_arm_dofs} was expected"
max_agg_bodies = self.num_hand_arm_bodies
max_agg_shapes = self.num_hand_arm_shapes
allegro_rigid_body_names = [
self.gym.get_asset_rigid_body_name(allegro_kuka_asset, i) for i in range(self.num_hand_arm_bodies)
]
print(f"Allegro num rigid bodies: {self.num_hand_arm_bodies}")
print(f"Allegro rigid bodies: {allegro_rigid_body_names}")
allegro_hand_dof_props = self.gym.get_asset_dof_properties(allegro_kuka_asset)
self.arm_hand_dof_lower_limits = []
self.arm_hand_dof_upper_limits = []
self.allegro_sensors = []
allegro_sensor_pose = gymapi.Transform()
for i in range(self.num_hand_arm_dofs):
self.arm_hand_dof_lower_limits.append(allegro_hand_dof_props["lower"][i])
self.arm_hand_dof_upper_limits.append(allegro_hand_dof_props["upper"][i])
self.arm_hand_dof_lower_limits = to_torch(self.arm_hand_dof_lower_limits, device=self.device)
self.arm_hand_dof_upper_limits = to_torch(self.arm_hand_dof_upper_limits, device=self.device)
allegro_pose = gymapi.Transform()
allegro_pose.p = gymapi.Vec3(*get_axis_params(0.0, self.up_axis_idx)) + gymapi.Vec3(0.0, 0.8, 0)
allegro_pose.r = gymapi.Quat(0, 0, 0, 1)
object_assets, object_rb_count, object_shapes_count = self._load_main_object_asset()
max_agg_bodies += object_rb_count
max_agg_shapes += object_shapes_count
# load auxiliary objects
table_asset_options = gymapi.AssetOptions()
table_asset_options.disable_gravity = False
table_asset_options.fix_base_link = True
table_asset = self.gym.load_asset(self.sim, asset_root, self.asset_files_dict["table"], table_asset_options)
table_pose = gymapi.Transform()
table_pose.p = gymapi.Vec3()
table_pose.p.x = allegro_pose.p.x
table_pose_dy, table_pose_dz = -0.8, 0.38
table_pose.p.y = allegro_pose.p.y + table_pose_dy
table_pose.p.z = allegro_pose.p.z + table_pose_dz
table_rb_count = self.gym.get_asset_rigid_body_count(table_asset)
table_shapes_count = self.gym.get_asset_rigid_shape_count(table_asset)
max_agg_bodies += table_rb_count
max_agg_shapes += table_shapes_count
additional_rb, additional_shapes = self._load_additional_assets(object_asset_root, allegro_pose)
max_agg_bodies += additional_rb
max_agg_shapes += additional_shapes
# set up object and goal positions
self.object_start_pose = self._object_start_pose(allegro_pose, table_pose_dy, table_pose_dz)
self.allegro_hands = []
self.envs = []
object_init_state = []
self.allegro_hand_indices = []
object_indices = []
object_scales = []
object_keypoint_offsets = []
self.allegro_fingertip_handles = [
self.gym.find_asset_rigid_body_index(allegro_kuka_asset, name) for name in self.allegro_fingertips
]
self.allegro_palm_handle = self.gym.find_asset_rigid_body_index(allegro_kuka_asset, "iiwa7_link_7")
        # this relies on the fact that objects are added right after the arms in terms of create_actor()
self.object_rb_handles = list(range(self.num_hand_arm_bodies, self.num_hand_arm_bodies + object_rb_count))
for i in range(self.num_envs):
# create env instance
env_ptr = self.gym.create_env(self.sim, lower, upper, num_per_row)
self.gym.begin_aggregate(env_ptr, max_agg_bodies, max_agg_shapes, True)
allegro_actor = self.gym.create_actor(env_ptr, allegro_kuka_asset, allegro_pose, "allegro", i, -1, 0)
populate_dof_properties(allegro_hand_dof_props, self.dof_params, self.num_arm_dofs, self.num_hand_dofs)
self.gym.set_actor_dof_properties(env_ptr, allegro_actor, allegro_hand_dof_props)
allegro_hand_idx = self.gym.get_actor_index(env_ptr, allegro_actor, gymapi.DOMAIN_SIM)
self.allegro_hand_indices.append(allegro_hand_idx)
if self.obs_type == "full_state":
if self.with_fingertip_force_sensors:
for ft_handle in self.allegro_fingertip_handles:
env_sensors = [self.gym.create_force_sensor(env_ptr, ft_handle, allegro_sensor_pose)]
self.allegro_sensors.append(env_sensors)
if self.with_dof_force_sensors:
self.gym.enable_actor_dof_force_sensors(env_ptr, allegro_actor)
# add object
object_asset_idx = i % len(object_assets)
object_asset = object_assets[object_asset_idx]
object_handle = self.gym.create_actor(env_ptr, object_asset, self.object_start_pose, "object", i, 0, 0)
object_init_state.append(
[
self.object_start_pose.p.x,
self.object_start_pose.p.y,
self.object_start_pose.p.z,
self.object_start_pose.r.x,
self.object_start_pose.r.y,
self.object_start_pose.r.z,
self.object_start_pose.r.w,
0,
0,
0,
0,
0,
0,
]
)
object_idx = self.gym.get_actor_index(env_ptr, object_handle, gymapi.DOMAIN_SIM)
object_indices.append(object_idx)
object_scale = self.object_asset_scales[object_asset_idx]
object_scales.append(object_scale)
object_offsets = []
for keypoint in self.keypoints_offsets:
keypoint = copy(keypoint)
for coord_idx in range(3):
keypoint[coord_idx] *= object_scale[coord_idx] * self.object_base_size * self.keypoint_scale / 2
object_offsets.append(keypoint)
object_keypoint_offsets.append(object_offsets)
# table object
table_handle = self.gym.create_actor(env_ptr, table_asset, table_pose, "table_object", i, 0, 0)
table_object_idx = self.gym.get_actor_index(env_ptr, table_handle, gymapi.DOMAIN_SIM)
# task-specific objects (i.e. goal object for reorientation task)
self._create_additional_objects(env_ptr, env_idx=i, object_asset_idx=object_asset_idx)
self.gym.end_aggregate(env_ptr)
self.envs.append(env_ptr)
self.allegro_hands.append(allegro_actor)
# we are not using new mass values after DR when calculating random forces applied to an object,
# which should be ok as long as the randomization range is not too big
object_rb_props = self.gym.get_actor_rigid_body_properties(self.envs[0], object_handle)
self.object_rb_masses = [prop.mass for prop in object_rb_props]
self.object_init_state = to_torch(object_init_state, device=self.device, dtype=torch.float).view(
self.num_envs, 13
)
self.goal_states = self.object_init_state.clone()
self.goal_states[:, self.up_axis_idx] -= 0.04
self.goal_init_state = self.goal_states.clone()
self.allegro_fingertip_handles = to_torch(self.allegro_fingertip_handles, dtype=torch.long, device=self.device)
self.object_rb_handles = to_torch(self.object_rb_handles, dtype=torch.long, device=self.device)
self.object_rb_masses = to_torch(self.object_rb_masses, dtype=torch.float, device=self.device)
self.allegro_hand_indices = to_torch(self.allegro_hand_indices, dtype=torch.long, device=self.device)
self.object_indices = to_torch(object_indices, dtype=torch.long, device=self.device)
self.object_scales = to_torch(object_scales, dtype=torch.float, device=self.device)
self.object_keypoint_offsets = to_torch(object_keypoint_offsets, dtype=torch.float, device=self.device)
self._after_envs_created()
try:
# by this point we don't need the temporary folder for procedurally generated assets
tmp_assets_dir.cleanup()
except Exception:
pass
def _distance_delta_rewards(self, lifted_object: Tensor) -> Tuple[Tensor, Tensor]:
"""Rewards for fingertips approaching the object or penalty for hand getting further away from the object."""
# this is positive if we got closer, negative if we're further away than the closest we've gotten
fingertip_deltas_closest = self.closest_fingertip_dist - self.curr_fingertip_distances
# update the values if finger tips got closer to the object
self.closest_fingertip_dist = torch.minimum(self.closest_fingertip_dist, self.curr_fingertip_distances)
# again, positive is closer, negative is further away
        # here we use the index of the 1st finger; when the distance is large it doesn't matter which one we use
hand_deltas_furthest = self.furthest_hand_dist - self.curr_fingertip_distances[:, 0]
# update the values if finger tips got further away from the object
self.furthest_hand_dist = torch.maximum(self.furthest_hand_dist, self.curr_fingertip_distances[:, 0])
# clip between zero and +inf to turn deltas into rewards
fingertip_deltas = torch.clip(fingertip_deltas_closest, 0, 10)
fingertip_deltas *= self.finger_rew_coeffs
fingertip_delta_rew = torch.sum(fingertip_deltas, dim=-1)
# add this reward only before the object is lifted off the table
# after this, we should be guided only by keypoint and bonus rewards
fingertip_delta_rew *= ~lifted_object
# clip between zero and -inf to turn deltas into penalties
hand_delta_penalty = torch.clip(hand_deltas_furthest, -10, 0)
hand_delta_penalty *= ~lifted_object
# multiply by the number of fingers so two rewards are on the same scale
hand_delta_penalty *= self.num_allegro_fingertips
return fingertip_delta_rew, hand_delta_penalty
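    # Informal scalar sketch of the "closest so far" delta reward above (names
    # are illustrative, not part of the class):
    #   closest = dist_on_first_frame          # stored as -1 until initialized
    #   each step: rew = max(0, closest - dist); closest = min(closest, dist)
    # The agent is paid once for every unit of *new* progress toward the object;
    # oscillating back and forth earns nothing extra.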
def _lifting_reward(self) -> Tuple[Tensor, Tensor, Tensor]:
"""Reward for lifting the object off the table."""
z_lift = 0.05 + self.object_pos[:, 2] - self.object_init_state[:, 2]
lifting_rew = torch.clip(z_lift, 0, 0.5)
# this flag tells us if we lifted an object above a certain height compared to the initial position
lifted_object = (z_lift > self.lifting_bonus_threshold) | self.lifted_object
# Since we stop rewarding the agent for height after the object is lifted, we should give it large positive reward
# to compensate for "lost" opportunity to get more lifting reward for sitting just below the threshold.
# This bonus depends on the max lifting reward (lifting reward coeff * threshold) and the discount factor
# (i.e. the effective future horizon for the agent)
# For threshold 0.15, lifting reward coeff = 3 and gamma 0.995 (effective horizon ~500 steps)
# a value of 300 for the bonus reward seems reasonable
just_lifted_above_threshold = lifted_object & ~self.lifted_object
lift_bonus_rew = self.lifting_bonus * just_lifted_above_threshold
# stop giving lifting reward once we crossed the threshold - now the agent can focus entirely on the
# keypoint reward
lifting_rew *= ~lifted_object
# update the flag that describes whether we lifted an object above the table or not
self.lifted_object = lifted_object
return lifting_rew, lift_bonus_rew, lifted_object
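    # Back-of-the-envelope check of the bonus scale (informal, matching the
    # comment above): the forfeited per-step lifting reward is at most
    # coeff * threshold = 3 * 0.15 = 0.45; over an effective horizon of ~500
    # steps (gamma = 0.995) that is ~225, so a one-time bonus of ~300 makes
    # crossing the threshold strictly preferable to hovering just below it.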
def _keypoint_reward(self, lifted_object: Tensor) -> Tensor:
# this is positive if we got closer, negative if we're further away
max_keypoint_deltas = self.closest_keypoint_max_dist - self.keypoints_max_dist
# update the values if we got closer to the target
self.closest_keypoint_max_dist = torch.minimum(self.closest_keypoint_max_dist, self.keypoints_max_dist)
# clip between zero and +inf to turn deltas into rewards
max_keypoint_deltas = torch.clip(max_keypoint_deltas, 0, 100)
# administer reward only when we already lifted an object from the table
# to prevent the situation where the agent just rolls it around the table
keypoint_rew = max_keypoint_deltas * lifted_object
return keypoint_rew
def _action_penalties(self) -> Tuple[Tensor, Tensor]:
kuka_actions_penalty = (
torch.sum(torch.abs(self.arm_hand_dof_vel[..., 0:7]), dim=-1) * self.kuka_actions_penalty_scale
)
allegro_actions_penalty = (
torch.sum(torch.abs(self.arm_hand_dof_vel[..., 7 : self.num_hand_arm_dofs]), dim=-1)
* self.allegro_actions_penalty_scale
)
return -1 * kuka_actions_penalty, -1 * allegro_actions_penalty
def _compute_resets(self, is_success):
resets = torch.where(self.object_pos[:, 2] < 0.1, torch.ones_like(self.reset_buf), self.reset_buf) # fall
if self.max_consecutive_successes > 0:
# Reset progress buffer if max_consecutive_successes > 0
self.progress_buf = torch.where(is_success > 0, torch.zeros_like(self.progress_buf), self.progress_buf)
resets = torch.where(self.successes >= self.max_consecutive_successes, torch.ones_like(resets), resets)
resets = torch.where(self.progress_buf >= self.max_episode_length - 1, torch.ones_like(resets), resets)
resets = self._extra_reset_rules(resets)
return resets
def _true_objective(self):
raise NotImplementedError()
def compute_kuka_reward(self) -> Tuple[Tensor, Tensor]:
lifting_rew, lift_bonus_rew, lifted_object = self._lifting_reward()
fingertip_delta_rew, hand_delta_penalty = self._distance_delta_rewards(lifted_object)
keypoint_rew = self._keypoint_reward(lifted_object)
keypoint_success_tolerance = self.success_tolerance * self.keypoint_scale
# noinspection PyTypeChecker
near_goal: Tensor = self.keypoints_max_dist <= keypoint_success_tolerance
self.near_goal_steps += near_goal
is_success = self.near_goal_steps >= self.success_steps
goal_resets = is_success
self.successes += is_success
self.reset_goal_buf[:] = goal_resets
self.rewards_episode["raw_fingertip_delta_rew"] += fingertip_delta_rew
self.rewards_episode["raw_hand_delta_penalty"] += hand_delta_penalty
self.rewards_episode["raw_lifting_rew"] += lifting_rew
self.rewards_episode["raw_keypoint_rew"] += keypoint_rew
fingertip_delta_rew *= self.distance_delta_rew_scale
hand_delta_penalty *= self.distance_delta_rew_scale * 0 # currently disabled
lifting_rew *= self.lifting_rew_scale
keypoint_rew *= self.keypoint_rew_scale
kuka_actions_penalty, allegro_actions_penalty = self._action_penalties()
# Success bonus: orientation is within `success_tolerance` of goal orientation
# We spread out the reward over "success_steps"
bonus_rew = near_goal * (self.reach_goal_bonus / self.success_steps)
reward = (
fingertip_delta_rew
+ hand_delta_penalty # + sign here because hand_delta_penalty is negative
+ lifting_rew
+ lift_bonus_rew
+ keypoint_rew
+ kuka_actions_penalty
+ allegro_actions_penalty
+ bonus_rew
)
self.rew_buf[:] = reward
resets = self._compute_resets(is_success)
self.reset_buf[:] = resets
self.extras["successes"] = self.prev_episode_successes.mean()
self.true_objective = self._true_objective()
self.extras["true_objective"] = self.true_objective
# scalars for logging
self.extras["true_objective_mean"] = self.true_objective.mean()
self.extras["true_objective_min"] = self.true_objective.min()
self.extras["true_objective_max"] = self.true_objective.max()
rewards = [
(fingertip_delta_rew, "fingertip_delta_rew"),
(hand_delta_penalty, "hand_delta_penalty"),
(lifting_rew, "lifting_rew"),
(lift_bonus_rew, "lift_bonus_rew"),
(keypoint_rew, "keypoint_rew"),
(kuka_actions_penalty, "kuka_actions_penalty"),
(allegro_actions_penalty, "allegro_actions_penalty"),
(bonus_rew, "bonus_rew"),
]
episode_cumulative = dict()
for rew_value, rew_name in rewards:
self.rewards_episode[rew_name] += rew_value
episode_cumulative[rew_name] = rew_value
self.extras["rewards_episode"] = self.rewards_episode
self.extras["episode_cumulative"] = episode_cumulative
return self.rew_buf, is_success
def _eval_stats(self, is_success: Tensor) -> None:
if self.eval_stats:
frame: int = self.frame_since_restart
n_frames = torch.empty_like(self.last_success_step).fill_(frame)
self.success_time = torch.where(is_success, n_frames - self.last_success_step, self.success_time)
self.last_success_step = torch.where(is_success, n_frames, self.last_success_step)
mask_ = self.success_time > 0
if any(mask_):
avg_time_mean = ((self.success_time * mask_).sum(dim=0) / mask_.sum(dim=0)).item()
else:
avg_time_mean = math.nan
self.total_resets = self.total_resets + self.reset_buf.sum()
self.total_successes = self.total_successes + (self.successes * self.reset_buf).sum()
self.total_num_resets += self.reset_buf
reset_ids = self.reset_buf.nonzero().squeeze()
last_successes = self.successes[reset_ids].long()
self.successes_count[last_successes] += 1
if frame % 100 == 0:
# The direct average shows the overall result more quickly, but slightly undershoots long term
# policy performance.
print(f"Max num successes: {self.successes.max().item()}")
print(f"Average consecutive successes: {self.prev_episode_successes.mean().item():.2f}")
print(f"Total num resets: {self.total_num_resets.sum().item()} --> {self.total_num_resets}")
print(f"Reset percentage: {(self.total_num_resets > 0).sum() / self.num_envs:.2%}")
print(f"Last ep successes: {self.prev_episode_successes.mean().item():.2f}")
print(f"Last ep true objective: {self.prev_episode_true_objective.mean().item():.2f}")
self.eval_summaries.add_scalar("last_ep_successes", self.prev_episode_successes.mean().item(), frame)
self.eval_summaries.add_scalar(
"last_ep_true_objective", self.prev_episode_true_objective.mean().item(), frame
)
self.eval_summaries.add_scalar(
"reset_stats/reset_percentage", (self.total_num_resets > 0).sum() / self.num_envs, frame
)
self.eval_summaries.add_scalar("reset_stats/min_num_resets", self.total_num_resets.min().item(), frame)
self.eval_summaries.add_scalar("policy_speed/avg_success_time_frames", avg_time_mean, frame)
frame_time = self.control_freq_inv * self.dt
self.eval_summaries.add_scalar(
"policy_speed/avg_success_time_seconds", avg_time_mean * frame_time, frame
)
self.eval_summaries.add_scalar(
"policy_speed/avg_success_per_minute", 60.0 / (avg_time_mean * frame_time), frame
)
print(f"Policy speed (successes per minute): {60.0 / (avg_time_mean * frame_time):.2f}")
# create a matplotlib bar chart of the self.successes_count
import matplotlib.pyplot as plt
plt.bar(list(range(self.max_consecutive_successes + 1)), self.successes_count.cpu().numpy())
plt.title("Successes histogram")
plt.xlabel("Successes")
plt.ylabel("Frequency")
plt.savefig(f"{self.eval_summary_dir}/successes_histogram.png")
plt.clf()
def compute_observations(self) -> Tuple[Tensor, int]:
self.gym.refresh_dof_state_tensor(self.sim)
self.gym.refresh_actor_root_state_tensor(self.sim)
self.gym.refresh_rigid_body_state_tensor(self.sim)
if self.obs_type == "full_state":
if self.with_fingertip_force_sensors:
self.gym.refresh_force_sensor_tensor(self.sim)
if self.with_dof_force_sensors:
self.gym.refresh_dof_force_tensor(self.sim)
self.object_state = self.root_state_tensor[self.object_indices, 0:13]
self.object_pose = self.root_state_tensor[self.object_indices, 0:7]
self.object_pos = self.root_state_tensor[self.object_indices, 0:3]
self.object_rot = self.root_state_tensor[self.object_indices, 3:7]
self.object_linvel = self.root_state_tensor[self.object_indices, 7:10]
self.object_angvel = self.root_state_tensor[self.object_indices, 10:13]
self.goal_pose = self.goal_states[:, 0:7]
self.goal_pos = self.goal_states[:, 0:3]
self.goal_rot = self.goal_states[:, 3:7]
self.palm_center_offset = torch.from_numpy(self.palm_offset).to(self.device).repeat((self.num_envs, 1))
self._palm_state = self.rigid_body_states[:, self.allegro_palm_handle][:, 0:13]
self._palm_pos = self.rigid_body_states[:, self.allegro_palm_handle][:, 0:3]
self._palm_rot = self.rigid_body_states[:, self.allegro_palm_handle][:, 3:7]
self.palm_center_pos = self._palm_pos + quat_rotate(self._palm_rot, self.palm_center_offset)
self.fingertip_state = self.rigid_body_states[:, self.allegro_fingertip_handles][:, :, 0:13]
self.fingertip_pos = self.rigid_body_states[:, self.allegro_fingertip_handles][:, :, 0:3]
self.fingertip_rot = self.rigid_body_states[:, self.allegro_fingertip_handles][:, :, 3:7]
if not isinstance(self.fingertip_offsets, torch.Tensor):
self.fingertip_offsets = (
torch.from_numpy(self.fingertip_offsets).to(self.device).repeat((self.num_envs, 1, 1))
)
if hasattr(self, "fingertip_pos_rel_object"):
self.fingertip_pos_rel_object_prev[:, :, :] = self.fingertip_pos_rel_object
else:
self.fingertip_pos_rel_object_prev = None
self.fingertip_pos_offset = torch.zeros_like(self.fingertip_pos).to(self.device)
for i in range(self.num_allegro_fingertips):
self.fingertip_pos_offset[:, i] = self.fingertip_pos[:, i] + quat_rotate(
self.fingertip_rot[:, i], self.fingertip_offsets[:, i]
)
obj_pos_repeat = self.object_pos.unsqueeze(1).repeat(1, self.num_allegro_fingertips, 1)
self.fingertip_pos_rel_object = self.fingertip_pos_offset - obj_pos_repeat
self.curr_fingertip_distances = torch.norm(self.fingertip_pos_rel_object, dim=-1)
        # when the episode ends or the target changes we reset this to -1, so that it is
        # re-initialized to the actual distance on the 1st frame after the reset
self.closest_fingertip_dist = torch.where(
self.closest_fingertip_dist < 0.0, self.curr_fingertip_distances, self.closest_fingertip_dist
)
self.furthest_hand_dist = torch.where(
self.furthest_hand_dist < 0.0, self.curr_fingertip_distances[:, 0], self.furthest_hand_dist
)
palm_center_repeat = self.palm_center_pos.unsqueeze(1).repeat(1, self.num_allegro_fingertips, 1)
self.fingertip_pos_rel_palm = self.fingertip_pos_offset - palm_center_repeat
if self.fingertip_pos_rel_object_prev is None:
self.fingertip_pos_rel_object_prev = self.fingertip_pos_rel_object.clone()
for i in range(self.num_keypoints):
self.obj_keypoint_pos[:, i] = self.object_pos + quat_rotate(
self.object_rot, self.object_keypoint_offsets[:, i]
)
self.goal_keypoint_pos[:, i] = self.goal_pos + quat_rotate(
self.goal_rot, self.object_keypoint_offsets[:, i]
)
self.keypoints_rel_goal = self.obj_keypoint_pos - self.goal_keypoint_pos
palm_center_repeat = self.palm_center_pos.unsqueeze(1).repeat(1, self.num_keypoints, 1)
self.keypoints_rel_palm = self.obj_keypoint_pos - palm_center_repeat
self.keypoint_distances_l2 = torch.norm(self.keypoints_rel_goal, dim=-1)
# furthest keypoint from the goal
self.keypoints_max_dist = self.keypoint_distances_l2.max(dim=-1).values
# this is the closest the keypoint had been to the target in the current episode (for the furthest keypoint of all)
# make sure we initialize this value before using it for obs or rewards
self.closest_keypoint_max_dist = torch.where(
self.closest_keypoint_max_dist < 0.0, self.keypoints_max_dist, self.closest_keypoint_max_dist
)
if self.obs_type == "full_state":
full_state_size, reward_obs_ofs = self.compute_full_state(self.obs_buf)
assert (
full_state_size == self.full_state_size
), f"Expected full state size {self.full_state_size}, actual: {full_state_size}"
return self.obs_buf, reward_obs_ofs
else:
raise ValueError("Unkown observations type!")
def compute_full_state(self, buf: Tensor) -> Tuple[int, int]:
num_dofs = self.num_hand_arm_dofs
ofs = 0
# dof positions
buf[:, ofs : ofs + num_dofs] = unscale(
self.arm_hand_dof_pos[:, :num_dofs],
self.arm_hand_dof_lower_limits[:num_dofs],
self.arm_hand_dof_upper_limits[:num_dofs],
)
ofs += num_dofs
# dof velocities
buf[:, ofs : ofs + num_dofs] = self.arm_hand_dof_vel[:, :num_dofs]
ofs += num_dofs
if self.with_dof_force_sensors:
# dof forces
buf[:, ofs : ofs + num_dofs] = self.dof_force_tensor[:, :num_dofs]
ofs += num_dofs
# palm pos
buf[:, ofs : ofs + 3] = self.palm_center_pos
ofs += 3
# palm rot, linvel, ang vel
buf[:, ofs : ofs + 10] = self._palm_state[:, 3:13]
ofs += 10
# object rot, linvel, ang vel
buf[:, ofs : ofs + 10] = self.object_state[:, 3:13]
ofs += 10
# fingertip pos relative to the palm of the hand
fingertip_rel_pos_size = 3 * self.num_allegro_fingertips
buf[:, ofs : ofs + fingertip_rel_pos_size] = self.fingertip_pos_rel_palm.reshape(
self.num_envs, fingertip_rel_pos_size
)
ofs += fingertip_rel_pos_size
# keypoint distances relative to the palm of the hand
keypoint_rel_pos_size = 3 * self.num_keypoints
buf[:, ofs : ofs + keypoint_rel_pos_size] = self.keypoints_rel_palm.reshape(
self.num_envs, keypoint_rel_pos_size
)
ofs += keypoint_rel_pos_size
# keypoint distances relative to the goal
buf[:, ofs : ofs + keypoint_rel_pos_size] = self.keypoints_rel_goal.reshape(
self.num_envs, keypoint_rel_pos_size
)
ofs += keypoint_rel_pos_size
# object scales
buf[:, ofs : ofs + 3] = self.object_scales
ofs += 3
# closest distance to the furthest keypoint, achieved so far in this episode
buf[:, ofs : ofs + 1] = self.closest_keypoint_max_dist.unsqueeze(-1)
ofs += 1
# closest distance between a fingertip and an object achieved since last target reset
# this should help the critic predict the anticipated fingertip reward
buf[:, ofs : ofs + self.num_allegro_fingertips] = self.closest_fingertip_dist
ofs += self.num_allegro_fingertips
# indicates whether we already lifted the object from the table or not, should help the critic be more accurate
buf[:, ofs : ofs + 1] = self.lifted_object.unsqueeze(-1)
ofs += 1
# this should help the critic predict the future rewards better and anticipate the episode termination
buf[:, ofs : ofs + 1] = torch.log(self.progress_buf / 10 + 1).unsqueeze(-1)
ofs += 1
buf[:, ofs : ofs + 1] = torch.log(self.successes + 1).unsqueeze(-1)
ofs += 1
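        # Informal note on the log encoding above: it resolves early steps finely
        # and saturates late, e.g. progress 0 -> 0.0, 10 -> log(2) ~ 0.69,
        # 100 -> log(11) ~ 2.40, 990 -> log(100) ~ 4.61; successes 0 -> 0.0,
        # 1 -> ~0.69, 9 -> ~2.30.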
# this is where we will add the reward observation
reward_obs_ofs = ofs
ofs += 1
assert ofs == self.full_state_size
return ofs, reward_obs_ofs
def clamp_obs(self, obs_buf: Tensor) -> None:
if self.clamp_abs_observations > 0:
obs_buf.clamp_(-self.clamp_abs_observations, self.clamp_abs_observations)
def get_random_quat(self, env_ids):
# https://github.com/KieranWynn/pyquaternion/blob/master/pyquaternion/quaternion.py
# https://github.com/KieranWynn/pyquaternion/blob/master/pyquaternion/quaternion.py#L261
uvw = torch_rand_float(0, 1.0, (len(env_ids), 3), device=self.device)
q_w = torch.sqrt(1.0 - uvw[:, 0]) * (torch.sin(2 * np.pi * uvw[:, 1]))
q_x = torch.sqrt(1.0 - uvw[:, 0]) * (torch.cos(2 * np.pi * uvw[:, 1]))
q_y = torch.sqrt(uvw[:, 0]) * (torch.sin(2 * np.pi * uvw[:, 2]))
q_z = torch.sqrt(uvw[:, 0]) * (torch.cos(2 * np.pi * uvw[:, 2]))
new_rot = torch.cat((q_x.unsqueeze(-1), q_y.unsqueeze(-1), q_z.unsqueeze(-1), q_w.unsqueeze(-1)), dim=-1)
return new_rot
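    # Unit-norm check for the sampler above (informal): since sin^2 + cos^2 = 1,
    #   |q|^2 = (1 - u) + u = 1,
    # so every sample lies on S^3, and drawing u, v, w ~ U[0, 1] this way yields
    # quaternions uniform w.r.t. the Haar measure (Shoemake's method).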
def reset_target_pose(self, env_ids: Tensor) -> None:
self._reset_target(env_ids)
self.reset_goal_buf[env_ids] = 0
self.near_goal_steps[env_ids] = 0
self.closest_keypoint_max_dist[env_ids] = -1
def reset_object_pose(self, env_ids):
obj_indices = self.object_indices[env_ids]
# reset object
rand_pos_floats = torch_rand_float(-1.0, 1.0, (len(env_ids), 3), device=self.device)
self.root_state_tensor[obj_indices] = self.object_init_state[env_ids].clone()
# indices 0..2 correspond to the object position
self.root_state_tensor[obj_indices, 0:1] = (
self.object_init_state[env_ids, 0:1] + self.reset_position_noise_x * rand_pos_floats[:, 0:1]
)
self.root_state_tensor[obj_indices, 1:2] = (
self.object_init_state[env_ids, 1:2] + self.reset_position_noise_y * rand_pos_floats[:, 1:2]
)
self.root_state_tensor[obj_indices, 2:3] = (
self.object_init_state[env_ids, 2:3] + self.reset_position_noise_z * rand_pos_floats[:, 2:3]
)
new_object_rot = self.get_random_quat(env_ids)
# indices 3,4,5,6 correspond to the rotation quaternion
self.root_state_tensor[obj_indices, 3:7] = new_object_rot
self.root_state_tensor[obj_indices, 7:13] = torch.zeros_like(self.root_state_tensor[obj_indices, 7:13])
# since we reset the object, we also should update distances between fingers and the object
self.closest_fingertip_dist[env_ids] = -1
self.furthest_hand_dist[env_ids] = -1
def deferred_set_actor_root_state_tensor_indexed(self, obj_indices: List[Tensor]) -> None:
self.set_actor_root_state_object_indices.extend(obj_indices)
def set_actor_root_state_tensor_indexed(self) -> None:
object_indices: List[Tensor] = self.set_actor_root_state_object_indices
if not object_indices:
# nothing to set
return
unique_object_indices = torch.unique(torch.cat(object_indices).to(torch.int32))
self.gym.set_actor_root_state_tensor_indexed(
self.sim,
gymtorch.unwrap_tensor(self.root_state_tensor),
gymtorch.unwrap_tensor(unique_object_indices),
len(unique_object_indices),
)
self.set_actor_root_state_object_indices = []
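    # Typical usage of the two methods above (informal sketch; `env_ids` is
    # illustrative):
    #   self.deferred_set_actor_root_state_tensor_indexed([self.object_indices[env_ids]])
    #   ...more deferred calls from other reset helpers...
    #   self.set_actor_root_state_tensor_indexed()  # one de-duplicated GPU write
    # Batching the indices avoids issuing several expensive indexed writes per step.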
def reset_idx(self, env_ids: Tensor) -> None:
# randomization can happen only at reset time, since it can reset actor positions on GPU
if self.randomize:
self.apply_randomizations(self.randomization_params)
# randomize start object poses
self.reset_target_pose(env_ids)
# reset rigid body forces
self.rb_forces[env_ids, :, :] = 0.0
# reset object
self.reset_object_pose(env_ids)
hand_indices = self.allegro_hand_indices[env_ids].to(torch.int32)
# reset random force probabilities
self.random_force_prob[env_ids] = torch.exp(
(torch.log(self.force_prob_range[0]) - torch.log(self.force_prob_range[1]))
* torch.rand(len(env_ids), device=self.device)
+ torch.log(self.force_prob_range[1])
)
# reset allegro hand
delta_max = self.arm_hand_dof_upper_limits - self.hand_arm_default_dof_pos
delta_min = self.arm_hand_dof_lower_limits - self.hand_arm_default_dof_pos
rand_dof_floats = torch_rand_float(0.0, 1.0, (len(env_ids), self.num_hand_arm_dofs), device=self.device)
rand_delta = delta_min + (delta_max - delta_min) * rand_dof_floats
noise_coeff = torch.zeros_like(self.hand_arm_default_dof_pos, device=self.device)
noise_coeff[0:7] = self.reset_dof_pos_noise_arm
noise_coeff[7 : self.num_hand_arm_dofs] = self.reset_dof_pos_noise_fingers
allegro_pos = self.hand_arm_default_dof_pos + noise_coeff * rand_delta
self.arm_hand_dof_pos[env_ids, :] = allegro_pos
rand_vel_floats = torch_rand_float(-1.0, 1.0, (len(env_ids), self.num_hand_arm_dofs), device=self.device)
self.arm_hand_dof_vel[env_ids, :] = self.reset_dof_vel_noise * rand_vel_floats
self.prev_targets[env_ids, : self.num_hand_arm_dofs] = allegro_pos
self.cur_targets[env_ids, : self.num_hand_arm_dofs] = allegro_pos
if self.should_load_initial_states:
if len(env_ids) > self.num_initial_states:
print(f"Not enough initial states to load {len(env_ids)}/{self.num_initial_states}...")
else:
if self.initial_state_idx + len(env_ids) > self.num_initial_states:
self.initial_state_idx = 0
dof_states_to_load = self.initial_dof_state_tensors[
self.initial_state_idx : self.initial_state_idx + len(env_ids)
]
self.dof_state.reshape([self.num_envs, -1, *self.dof_state.shape[1:]])[
env_ids
] = dof_states_to_load.clone()
root_state_tensors_to_load = self.initial_root_state_tensors[
self.initial_state_idx : self.initial_state_idx + len(env_ids)
]
cube_object_idx = self.object_indices[0]
self.root_state_tensor.reshape([self.num_envs, -1, *self.root_state_tensor.shape[1:]])[
env_ids, cube_object_idx
] = root_state_tensors_to_load[:, cube_object_idx].clone()
self.initial_state_idx += len(env_ids)
self.gym.set_dof_position_target_tensor_indexed(
self.sim, gymtorch.unwrap_tensor(self.prev_targets), gymtorch.unwrap_tensor(hand_indices), len(env_ids)
)
self.gym.set_dof_state_tensor_indexed(
self.sim, gymtorch.unwrap_tensor(self.dof_state), gymtorch.unwrap_tensor(hand_indices), len(env_ids)
)
object_indices = [self.object_indices[env_ids]]
object_indices.extend(self._extra_object_indices(env_ids))
self.deferred_set_actor_root_state_tensor_indexed(object_indices)
self.progress_buf[env_ids] = 0
self.reset_buf[env_ids] = 0
self.prev_episode_successes[env_ids] = self.successes[env_ids]
self.successes[env_ids] = 0
self.prev_episode_true_objective[env_ids] = self.true_objective[env_ids]
self.true_objective[env_ids] = 0
self.lifted_object[env_ids] = False
# -1 here indicates that the value is not initialized
self.closest_keypoint_max_dist[env_ids] = -1
self.closest_fingertip_dist[env_ids] = -1
self.furthest_hand_dist[env_ids] = -1
self.near_goal_steps[env_ids] = 0
for key in self.rewards_episode.keys():
self.rewards_episode[key][env_ids] = 0
if self.save_states:
self.dump_env_states(env_ids)
self.extras["scalars"] = dict()
self.extras["scalars"]["success_tolerance"] = self.success_tolerance
def pre_physics_step(self, actions):
self.actions = actions.clone().to(self.device)
if self.privileged_actions:
torque_actions = actions[:, :3]
actions = actions[:, 3:]
reset_env_ids = self.reset_buf.nonzero(as_tuple=False).squeeze(-1)
reset_goal_env_ids = self.reset_goal_buf.nonzero(as_tuple=False).squeeze(-1)
self.reset_target_pose(reset_goal_env_ids)
if len(reset_env_ids) > 0:
self.reset_idx(reset_env_ids)
self.set_actor_root_state_tensor_indexed()
if self.use_relative_control:
raise NotImplementedError("Use relative control False for now")
else:
# target position control for the hand DOFs
self.cur_targets[:, 7 : self.num_hand_arm_dofs] = scale(
actions[:, 7 : self.num_hand_arm_dofs],
self.arm_hand_dof_lower_limits[7 : self.num_hand_arm_dofs],
self.arm_hand_dof_upper_limits[7 : self.num_hand_arm_dofs],
)
self.cur_targets[:, 7 : self.num_hand_arm_dofs] = (
self.act_moving_average * self.cur_targets[:, 7 : self.num_hand_arm_dofs]
+ (1.0 - self.act_moving_average) * self.prev_targets[:, 7 : self.num_hand_arm_dofs]
)
self.cur_targets[:, 7 : self.num_hand_arm_dofs] = tensor_clamp(
self.cur_targets[:, 7 : self.num_hand_arm_dofs],
self.arm_hand_dof_lower_limits[7 : self.num_hand_arm_dofs],
self.arm_hand_dof_upper_limits[7 : self.num_hand_arm_dofs],
)
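            # Informal note: the moving-average update above is exponential
            # smoothing, new = a * raw + (1 - a) * prev. With a = 0.7
            # (illustrative value of act_moving_average) a step change in the
            # raw action reaches ~97% of its final value within 3 control steps.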
targets = self.prev_targets[:, :7] + self.hand_dof_speed_scale * self.dt * self.actions[:, :7]
self.cur_targets[:, :7] = tensor_clamp(
targets, self.arm_hand_dof_lower_limits[:7], self.arm_hand_dof_upper_limits[:7]
)
self.prev_targets[:, :] = self.cur_targets[:, :]
self.gym.set_dof_position_target_tensor(self.sim, gymtorch.unwrap_tensor(self.cur_targets))
if self.force_scale > 0.0:
self.rb_forces *= torch.pow(self.force_decay, self.dt / self.force_decay_interval)
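            # Informal note: decay^(dt / decay_interval) makes the force decay
            # rate independent of the physics timestep -- e.g. with
            # force_decay = 0.99 and force_decay_interval = 0.08, forces lose
            # ~1% of magnitude per 0.08 s of simulated time regardless of dt.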
# apply new forces
force_indices = (torch.rand(self.num_envs, device=self.device) < self.random_force_prob).nonzero()
self.rb_forces[force_indices, self.object_rb_handles, :] = (
torch.randn(self.rb_forces[force_indices, self.object_rb_handles, :].shape, device=self.device)
* self.object_rb_masses
* self.force_scale
)
self.gym.apply_rigid_body_force_tensors(
self.sim, gymtorch.unwrap_tensor(self.rb_forces), None, gymapi.LOCAL_SPACE
)
# apply torques
if self.privileged_actions:
torque_actions = torque_actions.unsqueeze(1)
torque_amount = self.privileged_actions_torque
torque_actions *= torque_amount
self.action_torques[:, self.object_rb_handles, :] = torque_actions
self.gym.apply_rigid_body_force_tensors(
self.sim, None, gymtorch.unwrap_tensor(self.action_torques), gymapi.ENV_SPACE
)
def post_physics_step(self):
self.frame_since_restart += 1
self.progress_buf += 1
self.randomize_buf += 1
self._extra_curriculum()
obs_buf, reward_obs_ofs = self.compute_observations()
rewards, is_success = self.compute_kuka_reward()
# add rewards to observations
reward_obs_scale = 0.01
obs_buf[:, reward_obs_ofs : reward_obs_ofs + 1] = rewards.unsqueeze(-1) * reward_obs_scale
self.clamp_obs(obs_buf)
self._eval_stats(is_success)
if self.save_states:
self.accumulate_env_states()
if self.viewer and self.debug_viz:
# draw axes on target object
self.gym.clear_lines(self.viewer)
self.gym.refresh_rigid_body_state_tensor(self.sim)
axes_geom = gymutil.AxesGeometry(0.1)
sphere_pose = gymapi.Transform()
sphere_pose.r = gymapi.Quat(0, 0, 0, 1)
sphere_geom = gymutil.WireframeSphereGeometry(0.01, 8, 8, sphere_pose, color=(1, 1, 0))
sphere_geom_white = gymutil.WireframeSphereGeometry(0.02, 8, 8, sphere_pose, color=(1, 1, 1))
palm_center_pos_cpu = self.palm_center_pos.cpu().numpy()
palm_rot_cpu = self._palm_rot.cpu().numpy()
for i in range(self.num_envs):
palm_center_transform = gymapi.Transform()
palm_center_transform.p = gymapi.Vec3(*palm_center_pos_cpu[i])
palm_center_transform.r = gymapi.Quat(*palm_rot_cpu[i])
gymutil.draw_lines(sphere_geom_white, self.gym, self.viewer, self.envs[i], palm_center_transform)
for j in range(self.num_allegro_fingertips):
fingertip_pos_cpu = self.fingertip_pos_offset[:, j].cpu().numpy()
fingertip_rot_cpu = self.fingertip_rot[:, j].cpu().numpy()
for i in range(self.num_envs):
fingertip_transform = gymapi.Transform()
fingertip_transform.p = gymapi.Vec3(*fingertip_pos_cpu[i])
fingertip_transform.r = gymapi.Quat(*fingertip_rot_cpu[i])
gymutil.draw_lines(sphere_geom, self.gym, self.viewer, self.envs[i], fingertip_transform)
for j in range(self.num_keypoints):
keypoint_pos_cpu = self.obj_keypoint_pos[:, j].cpu().numpy()
goal_keypoint_pos_cpu = self.goal_keypoint_pos[:, j].cpu().numpy()
for i in range(self.num_envs):
keypoint_transform = gymapi.Transform()
keypoint_transform.p = gymapi.Vec3(*keypoint_pos_cpu[i])
gymutil.draw_lines(sphere_geom, self.gym, self.viewer, self.envs[i], keypoint_transform)
goal_keypoint_transform = gymapi.Transform()
goal_keypoint_transform.p = gymapi.Vec3(*goal_keypoint_pos_cpu[i])
gymutil.draw_lines(sphere_geom, self.gym, self.viewer, self.envs[i], goal_keypoint_transform)
def accumulate_env_states(self):
root_state_tensor = self.root_state_tensor.reshape(
[self.num_envs, -1, *self.root_state_tensor.shape[1:]]
).clone()
dof_state = self.dof_state.reshape([self.num_envs, -1, *self.dof_state.shape[1:]]).clone()
for env_idx in range(self.num_envs):
env_root_state_tensor = root_state_tensor[env_idx]
self.episode_root_state_tensors[env_idx].append(env_root_state_tensor)
env_dof_state = dof_state[env_idx]
self.episode_dof_states[env_idx].append(env_dof_state)
def dump_env_states(self, env_ids):
def write_tensor_to_bin_stream(tensor, stream):
bin_buff = io.BytesIO()
torch.save(tensor, bin_buff)
bin_buff = bin_buff.getbuffer()
stream.write(int(len(bin_buff)).to_bytes(4, "big"))
stream.write(bin_buff)
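        # resulting on-disk layout per env chunk, inferred from the writes below
        # (all length prefixes are 4-byte big-endian integers):
        #   [states_to_save: 4 bytes]
        #   [root_states blob length: 4 bytes][torch.save(root_states) bytes]
        #   [dof_states blob length: 4 bytes][torch.save(dof_states) bytes]
        # load_initial_states() reads the file back in exactly this order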
with open(self.save_states_filename, "ab") as save_states_file:
bin_stream = io.BytesIO()
for env_idx in env_ids:
ep_len = len(self.episode_root_state_tensors[env_idx])
if ep_len <= 20:
continue
states_to_save = min(ep_len // 10, 50)
state_indices = random.sample(range(ep_len), states_to_save)
print(f"Adding {states_to_save} states {state_indices}")
bin_stream.write(int(states_to_save).to_bytes(4, "big"))
root_states = [self.episode_root_state_tensors[env_idx][si] for si in state_indices]
dof_states = [self.episode_dof_states[env_idx][si] for si in state_indices]
root_states = torch.stack(root_states)
dof_states = torch.stack(dof_states)
write_tensor_to_bin_stream(root_states, bin_stream)
write_tensor_to_bin_stream(dof_states, bin_stream)
self.episode_root_state_tensors[env_idx] = []
self.episode_dof_states[env_idx] = []
bin_data = bin_stream.getbuffer()
if bin_data.nbytes > 0:
print(f"Writing {len(bin_data)} to file {self.save_states_filename}")
save_states_file.write(bin_data)
def load_initial_states(self):
loaded_root_states = []
loaded_dof_states = []
with open(self.load_states_filename, "rb") as states_file:
def read_nbytes(n_):
res = states_file.read(n_)
if len(res) < n_:
raise RuntimeError(
f"Could not read {n_} bytes from the binary file. Perhaps reached the end of file"
)
return res
while True:
try:
num_states = int.from_bytes(read_nbytes(4), byteorder="big")
print(f"num_states_chunk {num_states}")
root_states_len = int.from_bytes(read_nbytes(4), byteorder="big")
print(f"root tensors len {root_states_len}")
root_states_bytes = read_nbytes(root_states_len)
dof_states_len = int.from_bytes(read_nbytes(4), byteorder="big")
print(f"dof_states_len {dof_states_len}")
dof_states_bytes = read_nbytes(dof_states_len)
except Exception as exc:
print(exc)
break
                else:
                    # all reads succeeded: parse the binary buffers
def parse_tensors(bin_data):
with io.BytesIO(bin_data) as buffer:
tensors = torch.load(buffer)
return tensors
root_state_tensors = parse_tensors(root_states_bytes)
dof_state_tensors = parse_tensors(dof_states_bytes)
loaded_root_states.append(root_state_tensors)
loaded_dof_states.append(dof_state_tensors)
self.initial_root_state_tensors = torch.cat(loaded_root_states)
self.initial_dof_state_tensors = torch.cat(loaded_dof_states)
assert self.initial_dof_state_tensors.shape[0] == self.initial_root_state_tensors.shape[0]
self.num_initial_states = len(self.initial_root_state_tensors)
print(f"{self.num_initial_states} states loaded from file {self.load_states_filename}!")
| 73,269 | Python | 44.994978 | 145 | 0.619785 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/allegro_kuka/allegro_kuka_two_arms_reorientation.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
from typing import List
import torch
from isaacgym import gymapi
from torch import Tensor
from isaacgymenvs.utils.torch_jit_utils import to_torch, torch_rand_float
from isaacgymenvs.tasks.allegro_kuka.allegro_kuka_two_arms import AllegroKukaTwoArmsBase
from isaacgymenvs.tasks.allegro_kuka.allegro_kuka_utils import tolerance_curriculum, tolerance_successes_objective
class AllegroKukaTwoArmsReorientation(AllegroKukaTwoArmsBase):
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.goal_object_indices = []
self.goal_assets = []
super().__init__(cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render)
def _object_keypoint_offsets(self):
return [
[1, 1, 1],
[1, 1, -1],
[-1, -1, 1],
[-1, -1, -1],
]
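    # four signed corners (a tetrahedral subset of the cube's eight vertices) are
    # enough to pin down both position and orientation; the base class is assumed
    # to scale these unit offsets by the object's dimensions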
def _load_additional_assets(self, object_asset_root, arm_pose):
object_asset_options = gymapi.AssetOptions()
object_asset_options.disable_gravity = True
self.goal_assets = []
for object_asset_file in self.object_asset_files:
object_asset_dir = os.path.dirname(object_asset_file)
object_asset_fname = os.path.basename(object_asset_file)
goal_asset_ = self.gym.load_asset(self.sim, object_asset_dir, object_asset_fname, object_asset_options)
self.goal_assets.append(goal_asset_)
goal_rb_count = self.gym.get_asset_rigid_body_count(
self.goal_assets[0]
) # assuming all of them have the same rb count
goal_shapes_count = self.gym.get_asset_rigid_shape_count(
self.goal_assets[0]
        ) # assuming all of them have the same shape count
return goal_rb_count, goal_shapes_count
def _create_additional_objects(self, env_ptr, env_idx, object_asset_idx):
self.goal_displacement = gymapi.Vec3(-0.35, -0.06, 0.12)
self.goal_displacement_tensor = to_torch(
[self.goal_displacement.x, self.goal_displacement.y, self.goal_displacement.z], device=self.device
)
goal_start_pose = gymapi.Transform()
goal_start_pose.p = self.object_start_pose.p + self.goal_displacement
goal_start_pose.p.z -= 0.04
goal_asset = self.goal_assets[object_asset_idx]
goal_handle = self.gym.create_actor(
env_ptr, goal_asset, goal_start_pose, "goal_object", env_idx + self.num_envs, 0, 0
)
goal_object_idx = self.gym.get_actor_index(env_ptr, goal_handle, gymapi.DOMAIN_SIM)
self.goal_object_indices.append(goal_object_idx)
if self.object_type != "block":
self.gym.set_rigid_body_color(env_ptr, goal_handle, 0, gymapi.MESH_VISUAL, gymapi.Vec3(0.6, 0.72, 0.98))
def _after_envs_created(self):
self.goal_object_indices = to_torch(self.goal_object_indices, dtype=torch.long, device=self.device)
def _reset_target(self, env_ids: Tensor) -> None:
# sample random target location in some volume
target_volume_origin = self.target_volume_origin
target_volume_extent = self.target_volume_extent
target_volume_min_coord = target_volume_origin + target_volume_extent[:, 0]
target_volume_max_coord = target_volume_origin + target_volume_extent[:, 1]
target_volume_size = target_volume_max_coord - target_volume_min_coord
rand_pos_floats = torch_rand_float(0.0, 1.0, (len(env_ids), 3), device=self.device)
target_coords = target_volume_min_coord + rand_pos_floats * target_volume_size
# let the target be close to 1st or 2nd arm, randomly
left_right_random = torch_rand_float(-1.0, 1.0, (len(env_ids), 1), device=self.device)
x_ofs = 0.75
x_pos = torch.where(
left_right_random > 0,
x_ofs * torch.ones_like(left_right_random),
-x_ofs * torch.ones_like(left_right_random),
)
target_coords[:, 0] += x_pos.squeeze(dim=1)
self.goal_states[env_ids, 0:3] = target_coords
self.root_state_tensor[self.goal_object_indices[env_ids], 0:3] = self.goal_states[env_ids, 0:3]
        # sample a uniformly random goal orientation
        new_rot = self.get_random_quat(env_ids)
self.goal_states[env_ids, 3:7] = new_rot
self.root_state_tensor[self.goal_object_indices[env_ids], 3:7] = self.goal_states[env_ids, 3:7]
self.root_state_tensor[self.goal_object_indices[env_ids], 7:13] = torch.zeros_like(
self.root_state_tensor[self.goal_object_indices[env_ids], 7:13]
)
object_indices_to_reset = [self.goal_object_indices[env_ids]]
self.deferred_set_actor_root_state_tensor_indexed(object_indices_to_reset)
def _extra_object_indices(self, env_ids: Tensor) -> List[Tensor]:
return [self.goal_object_indices[env_ids]]
def _extra_curriculum(self):
self.success_tolerance, self.last_curriculum_update = tolerance_curriculum(
self.last_curriculum_update,
self.frame_since_restart,
self.tolerance_curriculum_interval,
self.prev_episode_successes,
self.success_tolerance,
self.initial_tolerance,
self.target_tolerance,
self.tolerance_curriculum_increment,
)
def _true_objective(self) -> Tensor:
true_objective = tolerance_successes_objective(
self.success_tolerance, self.initial_tolerance, self.target_tolerance, self.successes
)
return true_objective
| 7,306 | Python | 44.955975 | 120 | 0.673008 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/allegro_kuka/allegro_kuka_utils.py |
# Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from __future__ import annotations
from dataclasses import dataclass
from typing import Tuple, Dict, List
from torch import Tensor
@dataclass
class DofParameters:
"""Joint/dof parameters."""
allegro_stiffness: float
kuka_stiffness: float
allegro_effort: float
kuka_effort: List[float] # separate per DOF
allegro_damping: float
kuka_damping: float
dof_friction: float
allegro_armature: float
kuka_armature: float
@staticmethod
def from_cfg(cfg: Dict) -> DofParameters:
return DofParameters(
allegro_stiffness=cfg["env"]["allegroStiffness"],
kuka_stiffness=cfg["env"]["kukaStiffness"],
allegro_effort=cfg["env"]["allegroEffort"],
kuka_effort=cfg["env"]["kukaEffort"],
allegro_damping=cfg["env"]["allegroDamping"],
kuka_damping=cfg["env"]["kukaDamping"],
dof_friction=cfg["env"]["dofFriction"],
allegro_armature=cfg["env"]["allegroArmature"],
kuka_armature=cfg["env"]["kukaArmature"],
)
def populate_dof_properties(hand_arm_dof_props, params: DofParameters, arm_dofs: int, hand_dofs: int) -> None:
assert len(hand_arm_dof_props["stiffness"]) == arm_dofs + hand_dofs
hand_arm_dof_props["stiffness"][0:arm_dofs].fill(params.kuka_stiffness)
hand_arm_dof_props["stiffness"][arm_dofs:].fill(params.allegro_stiffness)
assert len(params.kuka_effort) == arm_dofs
hand_arm_dof_props["effort"][0:arm_dofs] = params.kuka_effort
hand_arm_dof_props["effort"][arm_dofs:].fill(params.allegro_effort)
hand_arm_dof_props["damping"][0:arm_dofs].fill(params.kuka_damping)
hand_arm_dof_props["damping"][arm_dofs:].fill(params.allegro_damping)
if params.dof_friction >= 0:
hand_arm_dof_props["friction"].fill(params.dof_friction)
hand_arm_dof_props["armature"][0:arm_dofs].fill(params.kuka_armature)
hand_arm_dof_props["armature"][arm_dofs:].fill(params.allegro_armature)
def tolerance_curriculum(
last_curriculum_update: int,
frames_since_restart: int,
curriculum_interval: int,
prev_episode_successes: Tensor,
success_tolerance: float,
initial_tolerance: float,
target_tolerance: float,
tolerance_curriculum_increment: float,
) -> Tuple[float, int]:
"""
Returns: new tolerance, new last_curriculum_update
"""
if frames_since_restart - last_curriculum_update < curriculum_interval:
return success_tolerance, last_curriculum_update
mean_successes_per_episode = prev_episode_successes.mean()
if mean_successes_per_episode < 3.0:
# this policy is not good enough with the previous tolerance value, keep training for now...
return success_tolerance, last_curriculum_update
# decrease the tolerance now
success_tolerance *= tolerance_curriculum_increment
success_tolerance = min(success_tolerance, initial_tolerance)
success_tolerance = max(success_tolerance, target_tolerance)
print(f"Prev episode successes: {mean_successes_per_episode}, success tolerance: {success_tolerance}")
last_curriculum_update = frames_since_restart
return success_tolerance, last_curriculum_update
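# illustrative schedule (numbers assumed): with initial_tolerance=0.1,
# target_tolerance=0.01 and tolerance_curriculum_increment=0.9, a policy that
# keeps averaging >= 3 successes per episode tightens the tolerance
# 0.1 -> 0.09 -> 0.081 -> ... once per curriculum_interval frames, clamped to
# the [target_tolerance, initial_tolerance] range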
def interp_0_1(x_curr: float, x_initial: float, x_target: float) -> float:
"""
Outputs 1 when x_curr == x_target (curriculum completed)
Outputs 0 when x_curr == x_initial (just started training)
Interpolates value in between.
"""
span = x_initial - x_target
return (x_initial - x_curr) / span
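# e.g. interp_0_1(0.055, 0.1, 0.01) == (0.1 - 0.055) / (0.1 - 0.01) == 0.5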
def tolerance_successes_objective(
success_tolerance: float, initial_tolerance: float, target_tolerance: float, successes: Tensor
) -> Tensor:
"""
    Objective for PBT. This prioritizes tolerance over everything else while the
    curriculum is still running; after that, it is just the number of successes.
"""
# this grows from 0 to 1 as we reach the target tolerance
if initial_tolerance > target_tolerance:
# makeshift unit tests:
eps = 1e-5
assert abs(interp_0_1(initial_tolerance, initial_tolerance, target_tolerance)) < eps
assert abs(interp_0_1(target_tolerance, initial_tolerance, target_tolerance) - 1.0) < eps
mid_tolerance = (initial_tolerance + target_tolerance) / 2
assert abs(interp_0_1(mid_tolerance, initial_tolerance, target_tolerance) - 0.5) < eps
tolerance_objective = interp_0_1(success_tolerance, initial_tolerance, target_tolerance)
else:
tolerance_objective = 1.0
if success_tolerance > target_tolerance:
            # add successes with a small coefficient to differentiate between policies at the beginning
            # of training: an increment in tolerance should always be worth more than extra successes
            # at the previous tolerance, which is why this coefficient is so small
true_objective = (successes * 0.01) + tolerance_objective
else:
# basically just the successes + tolerance objective so that true_objective never decreases when we cross
# the threshold
true_objective = successes + tolerance_objective
return true_objective
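# illustrative check (values assumed): with initial_tolerance=0.1,
# target_tolerance=0.01, the tolerance still at 0.04 and successes=5.0,
# the objective is
#   5.0 * 0.01 + interp_0_1(0.04, 0.1, 0.01) = 0.05 + 0.667 ~= 0.72,
# i.e. dominated by curriculum progress; once the tolerance reaches the target,
# the objective becomes successes + 1.0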
| 6,689 | Python | 41.075471 | 113 | 0.712214 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/allegro_kuka/allegro_kuka_regrasping.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import List, Tuple
import torch
from isaacgym import gymapi
from torch import Tensor
from isaacgymenvs.utils.torch_jit_utils import to_torch, torch_rand_float
from isaacgymenvs.tasks.allegro_kuka.allegro_kuka_base import AllegroKukaBase
from isaacgymenvs.tasks.allegro_kuka.allegro_kuka_utils import tolerance_curriculum, tolerance_successes_objective
class AllegroKukaRegrasping(AllegroKukaBase):
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.goal_object_indices = []
self.goal_asset = None
super().__init__(cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render)
def _object_keypoint_offsets(self):
"""Regrasping task uses only a single object keypoint since we do not care about object orientation."""
return [[0, 0, 0]]
def _load_additional_assets(self, object_asset_root, arm_pose):
goal_asset_options = gymapi.AssetOptions()
goal_asset_options.disable_gravity = True
self.goal_asset = self.gym.load_asset(
self.sim, object_asset_root, self.asset_files_dict["ball"], goal_asset_options
)
goal_rb_count = self.gym.get_asset_rigid_body_count(self.goal_asset)
goal_shapes_count = self.gym.get_asset_rigid_shape_count(self.goal_asset)
return goal_rb_count, goal_shapes_count
def _create_additional_objects(self, env_ptr, env_idx, object_asset_idx):
goal_start_pose = gymapi.Transform()
goal_asset = self.goal_asset
goal_handle = self.gym.create_actor(
env_ptr, goal_asset, goal_start_pose, "goal_object", env_idx + self.num_envs, 0, 0
)
self.gym.set_actor_scale(env_ptr, goal_handle, 0.5)
self.gym.set_rigid_body_color(env_ptr, goal_handle, 0, gymapi.MESH_VISUAL, gymapi.Vec3(0.6, 0.72, 0.98))
goal_object_idx = self.gym.get_actor_index(env_ptr, goal_handle, gymapi.DOMAIN_SIM)
self.goal_object_indices.append(goal_object_idx)
def _after_envs_created(self):
self.goal_object_indices = to_torch(self.goal_object_indices, dtype=torch.long, device=self.device)
def _reset_target(self, env_ids: Tensor) -> None:
target_volume_origin = self.target_volume_origin
target_volume_extent = self.target_volume_extent
target_volume_min_coord = target_volume_origin + target_volume_extent[:, 0]
target_volume_max_coord = target_volume_origin + target_volume_extent[:, 1]
target_volume_size = target_volume_max_coord - target_volume_min_coord
rand_pos_floats = torch_rand_float(0.0, 1.0, (len(env_ids), 3), device=self.device)
target_coords = target_volume_min_coord + rand_pos_floats * target_volume_size
self.goal_states[env_ids, 0:3] = target_coords
self.root_state_tensor[self.goal_object_indices[env_ids], 0:3] = self.goal_states[env_ids, 0:3]
# we also reset the object to its initial position
self.reset_object_pose(env_ids)
# since we put the object back on the table, also reset the lifting reward
self.lifted_object[env_ids] = False
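        # deferring lets the base class batch every root-state update from this
        # step into (presumably) a single indexed gym call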
self.deferred_set_actor_root_state_tensor_indexed(
[self.object_indices[env_ids], self.goal_object_indices[env_ids]]
)
def _extra_object_indices(self, env_ids: Tensor) -> List[Tensor]:
return [self.goal_object_indices[env_ids]]
def compute_kuka_reward(self) -> Tuple[Tensor, Tensor]:
rew_buf, is_success = super().compute_kuka_reward() # TODO: customize reward?
return rew_buf, is_success
def _true_objective(self) -> Tensor:
true_objective = tolerance_successes_objective(
self.success_tolerance, self.initial_tolerance, self.target_tolerance, self.successes
)
return true_objective
def _extra_curriculum(self):
self.success_tolerance, self.last_curriculum_update = tolerance_curriculum(
self.last_curriculum_update,
self.frame_since_restart,
self.tolerance_curriculum_interval,
self.prev_episode_successes,
self.success_tolerance,
self.initial_tolerance,
self.target_tolerance,
self.tolerance_curriculum_increment,
)
| 5,893 | Python | 46.532258 | 120 | 0.702019 |
Tbarkin121/GuardDog/isaac/IsaacGymEnvs/build/lib/isaacgymenvs/tasks/allegro_kuka/allegro_kuka_reorientation.py | # Copyright (c) 2018-2023, NVIDIA Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
from typing import List
import torch
from isaacgym import gymapi
from torch import Tensor
from isaacgymenvs.utils.torch_jit_utils import to_torch, torch_rand_float
from isaacgymenvs.tasks.allegro_kuka.allegro_kuka_base import AllegroKukaBase
from isaacgymenvs.tasks.allegro_kuka.allegro_kuka_utils import tolerance_curriculum, tolerance_successes_objective
class AllegroKukaReorientation(AllegroKukaBase):
def __init__(self, cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render):
self.goal_object_indices = []
self.goal_assets = []
super().__init__(cfg, rl_device, sim_device, graphics_device_id, headless, virtual_screen_capture, force_render)
def _object_keypoint_offsets(self):
return [
[1, 1, 1],
[1, 1, -1],
[-1, -1, 1],
[-1, -1, -1],
]
def _load_additional_assets(self, object_asset_root, arm_pose):
object_asset_options = gymapi.AssetOptions()
object_asset_options.disable_gravity = True
self.goal_assets = []
for object_asset_file in self.object_asset_files:
object_asset_dir = os.path.dirname(object_asset_file)
object_asset_fname = os.path.basename(object_asset_file)
goal_asset_ = self.gym.load_asset(self.sim, object_asset_dir, object_asset_fname, object_asset_options)
self.goal_assets.append(goal_asset_)
goal_rb_count = self.gym.get_asset_rigid_body_count(
self.goal_assets[0]
) # assuming all of them have the same rb count
goal_shapes_count = self.gym.get_asset_rigid_shape_count(
self.goal_assets[0]
        ) # assuming all of them have the same shape count
return goal_rb_count, goal_shapes_count
def _create_additional_objects(self, env_ptr, env_idx, object_asset_idx):
self.goal_displacement = gymapi.Vec3(-0.35, -0.06, 0.12)
self.goal_displacement_tensor = to_torch(
[self.goal_displacement.x, self.goal_displacement.y, self.goal_displacement.z], device=self.device
)
goal_start_pose = gymapi.Transform()
goal_start_pose.p = self.object_start_pose.p + self.goal_displacement
goal_start_pose.p.z -= 0.04
goal_asset = self.goal_assets[object_asset_idx]
goal_handle = self.gym.create_actor(
env_ptr, goal_asset, goal_start_pose, "goal_object", env_idx + self.num_envs, 0, 0
)
goal_object_idx = self.gym.get_actor_index(env_ptr, goal_handle, gymapi.DOMAIN_SIM)
self.goal_object_indices.append(goal_object_idx)
if self.object_type != "block":
self.gym.set_rigid_body_color(env_ptr, goal_handle, 0, gymapi.MESH_VISUAL, gymapi.Vec3(0.6, 0.72, 0.98))
def _after_envs_created(self):
self.goal_object_indices = to_torch(self.goal_object_indices, dtype=torch.long, device=self.device)
def _extra_reset_rules(self, resets):
# hand far from the object
resets = torch.where(
self.curr_fingertip_distances.max(dim=-1).values > 1.5, torch.ones_like(self.reset_buf), resets
)
return resets
def _reset_target(self, env_ids: Tensor) -> None:
target_volume_origin = self.target_volume_origin
target_volume_extent = self.target_volume_extent
target_volume_min_coord = target_volume_origin + target_volume_extent[:, 0]
target_volume_max_coord = target_volume_origin + target_volume_extent[:, 1]
target_volume_size = target_volume_max_coord - target_volume_min_coord
rand_pos_floats = torch_rand_float(0.0, 1.0, (len(env_ids), 3), device=self.device)
target_coords = target_volume_min_coord + rand_pos_floats * target_volume_size
self.goal_states[env_ids, 0:3] = target_coords
self.root_state_tensor[self.goal_object_indices[env_ids], 0:3] = self.goal_states[env_ids, 0:3]
new_rot = self.get_random_quat(env_ids)
self.goal_states[env_ids, 3:7] = new_rot
self.root_state_tensor[self.goal_object_indices[env_ids], 3:7] = self.goal_states[env_ids, 3:7]
self.root_state_tensor[self.goal_object_indices[env_ids], 7:13] = torch.zeros_like(
self.root_state_tensor[self.goal_object_indices[env_ids], 7:13]
)
object_indices_to_reset = [self.goal_object_indices[env_ids]]
self.deferred_set_actor_root_state_tensor_indexed(object_indices_to_reset)
def _extra_object_indices(self, env_ids: Tensor) -> List[Tensor]:
return [self.goal_object_indices[env_ids]]
def _extra_curriculum(self):
self.success_tolerance, self.last_curriculum_update = tolerance_curriculum(
self.last_curriculum_update,
self.frame_since_restart,
self.tolerance_curriculum_interval,
self.prev_episode_successes,
self.success_tolerance,
self.initial_tolerance,
self.target_tolerance,
self.tolerance_curriculum_increment,
)
def _true_objective(self) -> Tensor:
true_objective = tolerance_successes_objective(
self.success_tolerance, self.initial_tolerance, self.target_tolerance, self.successes
)
return true_objective
| 6,855 | Python | 44.706666 | 120 | 0.680379 |