file_path (string, 20-207 chars) | content (string, 5-3.85M chars) | size (int64, 5-3.85M) | lang (string, 9 classes) | avg_line_length (float64, 1.33-100) | max_line_length (int64, 4-993) | alphanum_fraction (float64, 0.26-0.93) |
---|---|---|---|---|---|---|
MarqRazz/c3pzero/c300/c300_bringup/config/c300_isaac_controllers.yaml | controller_manager:
ros__parameters:
update_rate: 60 # Hz (this should match the Isaac publish rate)
joint_state_broadcaster:
type: joint_state_broadcaster/JointStateBroadcaster
diff_drive_base_controller:
type: diff_drive_controller/DiffDriveController
diff_drive_base_controller:
ros__parameters:
left_wheel_names: ["drivewhl_l_joint"]
right_wheel_names: ["drivewhl_r_joint"]
wheels_per_side: 1
wheel_separation: 0.61 # outside distance between the wheels
wheel_radius: 0.1715
wheel_separation_multiplier: 1.0
left_wheel_radius_multiplier: 1.0
right_wheel_radius_multiplier: 1.0
publish_rate: 50.0
odom_frame_id: odom
base_frame_id: base_link
pose_covariance_diagonal : [0.001, 0.001, 0.0, 0.0, 0.0, 0.01]
twist_covariance_diagonal: [0.001, 0.0, 0.0, 0.0, 0.0, 0.01]
open_loop: false
position_feedback: true
enable_odom_tf: true
cmd_vel_timeout: 0.5
#publish_limited_velocity: true
use_stamped_vel: false
#velocity_rolling_window_size: 10
# Velocity and acceleration limits
# Whenever a min_* is unspecified, default to -max_*
linear.x.has_velocity_limits: true
linear.x.has_acceleration_limits: true
linear.x.has_jerk_limits: false
linear.x.max_velocity: 2.0
linear.x.min_velocity: -2.0
linear.x.max_acceleration: 0.5
linear.x.max_jerk: 0.0
linear.x.min_jerk: 0.0
angular.z.has_velocity_limits: true
angular.z.has_acceleration_limits: true
angular.z.has_jerk_limits: false
angular.z.max_velocity: 2.0
angular.z.min_velocity: -2.0
angular.z.max_acceleration: 1.0
angular.z.min_acceleration: -1.0
angular.z.max_jerk: 0.0
angular.z.min_jerk: 0.0
| 1,741 | YAML | 28.525423 | 68 | 0.680643 |
MarqRazz/c3pzero/c300/c300_navigation/README.md | # c300_navigation
This package contains the Nav2 parameters to be used with the Permobil C300 mobile base.
To test out this navigation package, first start the robot in Gazebo and then run the included navigation launch file.
https://github.com/MarqRazz/c3pzero/assets/25058794/35301ba1-e443-44ff-b6ba-0fabae138205
# Nav2 with Gazebo simulated robot
To start the `c300` mobile base in Gazebo run the following command:
``` bash
ros2 launch c300_bringup gazebo_c300.launch.py launch_rviz:=false
```
To start Nav2 with the included Depot map run:
``` bash
ros2 launch c300_navigation navigation.launch.py
```
# Nav2 with Isaac simulated robot
To start Isaac with the `c300` mobile base in an industrial warehouse run the following command on the host PC where Isaac is installed:
``` bash
cd <path_to_workspace>/c3pzero_ws/src/c3pzero/c300/c300_description/usd
./python.sh isaac_c300.py
```
In the Docker container, start the `c300` controllers to command the base and report its state:
``` bash
ros2 launch c300_bringup isaac_c300.launch.py launch_rviz:=false
```
In the Docker container, start Nav2 with the included Isaac Warehouse map by running:
``` bash
ros2 launch c300_navigation navigation.launch.py map:=isaac_warehouse.yaml
```
## Initialize the location of the robot
Currently the [nav2_params.yaml](https://github.com/MarqRazz/c3pzero/blob/main/c300/c300_navigation/params/nav2_params.yaml) file is set up to automatically set the initial pose of the robot for simulation environments.
If running on hardware, set `set_initial_pose: false` under the `amcl` parameters and use the RViz tool to set the initial pose on startup.
You can also set the initial pose through the CLI with the following command:
```bash
ros2 topic pub -1 /initialpose geometry_msgs/PoseWithCovarianceStamped '{ header: {stamp: {sec: 0, nanosec: 0}, frame_id: "map"}, pose: { pose: {position: {x: 0.0, y: 0.0, z: 0.0}, orientation: {w: 1.0}}, } }'
```
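For reference, here is a minimal sketch of the relevant `amcl` parameter block (the exact values in `nav2_params.yaml` may differ; the pose values below are only illustrative):
```yaml
amcl:
  ros__parameters:
    # Simulation: true lets AMCL initialize itself with the pose below.
    # Hardware: set this to false and provide the pose from RViz or /initialpose.
    set_initial_pose: true
    initial_pose:
      x: 0.0
      y: 0.0
      z: 0.0
      yaw: 0.0
```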
## Running SLAM
To create a new map with SLAM Toolbox, start the robot and then run:
``` bash
ros2 launch c300_navigation navigation.launch.py slam:=True
```
After you have created a new map, you can save it with the following command:
```bash
ros2 run nav2_map_server map_saver_cli -f ~/c3pzero_ws/<name of the new map>
```
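After saving, the new map can be passed to the navigation launch file. For example (assuming the map was saved as `my_map.yaml` in the workspace root; the name is only illustrative):
```bash
ros2 launch c300_navigation navigation.launch.py map:=$HOME/c3pzero_ws/my_map.yaml
```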
| 2,259 | Markdown | 36.666666 | 217 | 0.760956 |
MarqRazz/c3pzero/c300/c300_navigation/launch/navigation.launch.py | # -*- coding: utf-8 -*-
# Copyright (c) 2018 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This is all-in-one launch script intended for use by nav2 developers."""
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import (
DeclareLaunchArgument,
ExecuteProcess,
IncludeLaunchDescription,
)
from launch.conditions import IfCondition
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.substitutions import LaunchConfiguration, PythonExpression
from launch_ros.actions import Node
def generate_launch_description():
# Get the launch directory
bringup_dir = get_package_share_directory("nav2_bringup")
c300_nav_dir = get_package_share_directory("c300_navigation")
launch_dir = os.path.join(bringup_dir, "launch")
# Create the launch configuration variables
slam = LaunchConfiguration("slam")
namespace = LaunchConfiguration("namespace")
use_namespace = LaunchConfiguration("use_namespace")
map_yaml_file = LaunchConfiguration("map")
use_sim_time = LaunchConfiguration("use_sim_time")
params_file = LaunchConfiguration("params_file")
autostart = LaunchConfiguration("autostart")
use_composition = LaunchConfiguration("use_composition")
use_respawn = LaunchConfiguration("use_respawn")
# Launch configuration variables specific to simulation
rviz_config_file = LaunchConfiguration("rviz_config_file")
use_rviz = LaunchConfiguration("use_rviz")
# Declare the launch arguments
declare_namespace_cmd = DeclareLaunchArgument(
"namespace", default_value="", description="Top-level namespace"
)
declare_use_namespace_cmd = DeclareLaunchArgument(
"use_namespace",
default_value="False",
description="Whether to apply a namespace to the navigation stack",
)
declare_slam_cmd = DeclareLaunchArgument(
"slam", default_value="False", description="Whether run a SLAM"
)
declare_map_yaml_cmd = DeclareLaunchArgument(
"map",
default_value=os.path.join(c300_nav_dir, "map", "depot.yaml"),
description="Full path to map file to load",
)
declare_use_sim_time_cmd = DeclareLaunchArgument(
"use_sim_time",
default_value="True",
description="Use simulation (Gazebo) clock if True",
)
declare_params_file_cmd = DeclareLaunchArgument(
"params_file",
default_value=os.path.join(c300_nav_dir, "params", "nav2_params_mppi.yaml"),
description="Full path to the ROS2 parameters file to use for all launched nodes",
)
declare_autostart_cmd = DeclareLaunchArgument(
"autostart",
default_value="True",
description="Automatically startup the nav2 stack",
)
declare_use_composition_cmd = DeclareLaunchArgument(
"use_composition",
default_value="True",
description="Whether to use composed bringup",
)
declare_use_respawn_cmd = DeclareLaunchArgument(
"use_respawn",
default_value="False",
description="Whether to respawn if a node crashes. Applied when composition is disabled.",
)
declare_rviz_config_file_cmd = DeclareLaunchArgument(
"rviz_config_file",
default_value=os.path.join(c300_nav_dir, "rviz", "navigation.rviz"),
description="Full path to the RVIZ config file to use",
)
declare_use_rviz_cmd = DeclareLaunchArgument(
"use_rviz", default_value="True", description="Whether to start RVIZ"
)
rviz_cmd = IncludeLaunchDescription(
PythonLaunchDescriptionSource(os.path.join(launch_dir, "rviz_launch.py")),
condition=IfCondition(use_rviz),
launch_arguments={
"namespace": namespace,
"use_namespace": use_namespace,
"rviz_config": rviz_config_file,
}.items(),
)
bringup_cmd = IncludeLaunchDescription(
PythonLaunchDescriptionSource(os.path.join(launch_dir, "bringup_launch.py")),
launch_arguments={
"namespace": namespace,
"use_namespace": use_namespace,
"slam": slam,
"map": map_yaml_file,
"use_sim_time": use_sim_time,
"params_file": params_file,
"autostart": autostart,
"use_composition": use_composition,
"use_respawn": use_respawn,
}.items(),
)
# Create the launch description and populate
ld = LaunchDescription()
# Declare the launch options
ld.add_action(declare_namespace_cmd)
ld.add_action(declare_use_namespace_cmd)
ld.add_action(declare_slam_cmd)
ld.add_action(declare_map_yaml_cmd)
ld.add_action(declare_use_sim_time_cmd)
ld.add_action(declare_params_file_cmd)
ld.add_action(declare_autostart_cmd)
ld.add_action(declare_use_composition_cmd)
ld.add_action(declare_rviz_config_file_cmd)
ld.add_action(declare_use_rviz_cmd)
ld.add_action(declare_use_respawn_cmd)
ld.add_action(rviz_cmd)
ld.add_action(bringup_cmd)
return ld
| 5,641 | Python | 33.82716 | 98 | 0.683744 |
MarqRazz/c3pzero/c300/c300_navigation/map/isaac_warehouse.yaml | image: isaac_warehouse.pgm
mode: trinary
resolution: 0.05
origin: [-12.3, -8.46, 0]
negate: 0
occupied_thresh: 0.65
free_thresh: 0.25
| 134 | YAML | 15.874998 | 26 | 0.716418 |
MarqRazz/c3pzero/c300/c300_navigation/map/depot.yaml | image: depot.pgm
mode: trinary
resolution: 0.05
origin: [-15.2, -7.72, 0]
negate: 0
occupied_thresh: 0.65
free_thresh: 0.25
| 124 | YAML | 14.624998 | 25 | 0.701613 |
MarqRazz/c3pzero/c300/c300_driver/setup.py | # -*- coding: utf-8 -*-
import os
from glob import glob
from setuptools import setup
package_name = "c300_driver"
setup(
name=package_name,
version="0.0.1",
packages=[package_name],
data_files=[
("share/ament_index/resource_index/packages", ["resource/" + package_name]),
("share/" + package_name, ["package.xml"]),
(os.path.join("share", package_name, "launch"), glob("launch/*.launch.py")),
(os.path.join("share", package_name, "config"), glob("config/*.yaml")),
],
install_requires=["setuptools"],
zip_safe=True,
maintainer="marq",
maintainer_email="[email protected]",
description="Driver package to control the c300 mobile base",
license="MIT",
tests_require=["pytest"],
entry_points={
"console_scripts": [
"twist2roboclaw = c300_driver.twist2roboclaw:main",
],
},
)
| 892 | Python | 27.806451 | 84 | 0.608744 |
MarqRazz/c3pzero/c300/c300_driver/test/test_flake8.py | # -*- coding: utf-8 -*-
# Copyright 2017 Open Source Robotics Foundation, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ament_flake8.main import main_with_errors
import pytest
@pytest.mark.flake8
@pytest.mark.linter
def test_flake8():
rc, errors = main_with_errors(argv=[])
assert rc == 0, "Found %d code style errors / warnings:\n" % len(
errors
) + "\n".join(errors)
| 902 | Python | 32.444443 | 74 | 0.721729 |
MarqRazz/c3pzero/c300/c300_driver/test/test_pep257.py | # -*- coding: utf-8 -*-
# Copyright 2015 Open Source Robotics Foundation, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ament_pep257.main import main
import pytest
@pytest.mark.linter
@pytest.mark.pep257
def test_pep257():
rc = main(argv=[".", "test"])
assert rc == 0, "Found code style errors / warnings"
| 827 | Python | 32.119999 | 74 | 0.733978 |
MarqRazz/c3pzero/c300/c300_driver/test/test_copyright.py | # -*- coding: utf-8 -*-
# Copyright 2015 Open Source Robotics Foundation, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ament_copyright.main import main
import pytest
@pytest.mark.copyright
@pytest.mark.linter
def test_copyright():
rc = main(argv=[".", "test"])
assert rc == 0, "Found errors"
| 814 | Python | 31.599999 | 74 | 0.735872 |
MarqRazz/c3pzero/c300/c300_driver/launch/teleop.launch.py | # -*- coding: utf-8 -*-
import os
from ament_index_python.packages import get_package_share_directory
import launch
import launch_ros.actions
def generate_launch_description():
joy_config = launch.substitutions.LaunchConfiguration("joy_config")
joy_dev = launch.substitutions.LaunchConfiguration("joy_dev")
config_filepath = launch.substitutions.LaunchConfiguration("config_filepath")
return launch.LaunchDescription(
[
launch.actions.DeclareLaunchArgument("joy_vel", default_value="/cmd_vel"),
launch.actions.DeclareLaunchArgument("joy_config", default_value="f710"),
launch.actions.DeclareLaunchArgument(
"joy_dev", default_value="/dev/input/js0"
),
launch.actions.DeclareLaunchArgument(
"config_filepath",
default_value=[
launch.substitutions.TextSubstitution(
text=os.path.join(
get_package_share_directory("c300_driver"), "config", ""
)
),
joy_config,
launch.substitutions.TextSubstitution(text=".config.yaml"),
],
),
launch_ros.actions.Node(
package="joy",
executable="joy_node",
name="joy_node",
parameters=[
{
"dev": joy_dev,
"deadzone": 0.3,
"autorepeat_rate": 20.0,
}
],
),
launch_ros.actions.Node(
package="teleop_twist_joy",
executable="teleop_node",
name="teleop_twist_joy_node",
parameters=[config_filepath],
remappings={
("/cmd_vel", launch.substitutions.LaunchConfiguration("joy_vel"))
},
),
]
)
| 2,001 | Python | 34.122806 | 86 | 0.498251 |
MarqRazz/c3pzero/c300/c300_driver/c300_driver/diff_drive_odom.py | # -*- coding: utf-8 -*-
from nav_msgs.msg import Odometry
from tf_transformations import quaternion_from_euler
from math import cos, sin
class DiffDriveOdom:
def __init__(self, clock, separation, radius):
self._clock = clock
self._frame_id = "odom"
self._child_frame_id = "base_id"
self._separation = separation
self._radius = radius
self._last_position = (0, 0)
self._last_time = self._clock.now()
self._robot_pose = (0, 0, 0)
self._first = True
def _get_diff(self, position):
if self._first:
self._first = False
self._last_position = position
diff = (
position[0] - self._last_position[0],
position[1] - self._last_position[1],
)
self._last_position = position
return diff
def step(self, position, velocity):
# position is radians tuple (l, r)
# velocity is m/s tuple (l, r)
now = self._clock.now()
time_step = now - self._last_time
wheel_l, wheel_r = self._get_diff(position)
delta_s = self._radius * (wheel_r + wheel_l) / 2.0
theta = self._radius * (wheel_r - wheel_l) / self._separation
self._robot_pose = (
self._robot_pose[0] + delta_s * cos(self._robot_pose[2] + (theta / 2.0)),
self._robot_pose[1] + delta_s * sin(self._robot_pose[2] + (theta / 2.0)),
self._robot_pose[2] + theta,
)
q = quaternion_from_euler(0.0, 0.0, self._robot_pose[2])
self._last_time = now
msg = Odometry()
msg.header.frame_id = self._frame_id
msg.header.stamp = now.to_msg()
msg.child_frame_id = self._child_frame_id
msg.pose.pose.position.x = self._robot_pose[0]
msg.pose.pose.position.y = self._robot_pose[1]
msg.pose.pose.position.z = 0.0
msg.pose.pose.orientation.x = q[0]
msg.pose.pose.orientation.y = q[1]
msg.pose.pose.orientation.z = q[2]
msg.pose.pose.orientation.w = q[3]
        # convert the time step from nanoseconds to seconds before dividing
        msg.twist.twist.linear.x = delta_s / (time_step.nanoseconds * 1e-9)
        msg.twist.twist.angular.z = theta / (time_step.nanoseconds * 1e-9)
return msg
| 2,223 | Python | 34.870967 | 85 | 0.560504 |
MarqRazz/c3pzero/c300/c300_driver/c300_driver/roboclaw_3.py | # -*- coding: utf-8 -*-
import random
import serial
import struct
import time
class Roboclaw:
"Roboclaw Interface Class"
def __init__(self, comport, rate, timeout=0.01, retries=3):
self.comport = comport
self.rate = rate
self.timeout = timeout
self._trystimeout = retries
self._crc = 0
# Command Enums
class Cmd:
M1FORWARD = 0
M1BACKWARD = 1
SETMINMB = 2
SETMAXMB = 3
M2FORWARD = 4
M2BACKWARD = 5
M17BIT = 6
M27BIT = 7
MIXEDFORWARD = 8
MIXEDBACKWARD = 9
MIXEDRIGHT = 10
MIXEDLEFT = 11
MIXEDFB = 12
MIXEDLR = 13
GETM1ENC = 16
GETM2ENC = 17
GETM1SPEED = 18
GETM2SPEED = 19
RESETENC = 20
GETVERSION = 21
SETM1ENCCOUNT = 22
SETM2ENCCOUNT = 23
GETMBATT = 24
GETLBATT = 25
SETMINLB = 26
SETMAXLB = 27
SETM1PID = 28
SETM2PID = 29
GETM1ISPEED = 30
GETM2ISPEED = 31
M1DUTY = 32
M2DUTY = 33
MIXEDDUTY = 34
M1SPEED = 35
M2SPEED = 36
MIXEDSPEED = 37
M1SPEEDACCEL = 38
M2SPEEDACCEL = 39
MIXEDSPEEDACCEL = 40
M1SPEEDDIST = 41
M2SPEEDDIST = 42
MIXEDSPEEDDIST = 43
M1SPEEDACCELDIST = 44
M2SPEEDACCELDIST = 45
MIXEDSPEEDACCELDIST = 46
GETBUFFERS = 47
GETPWMS = 48
GETCURRENTS = 49
MIXEDSPEED2ACCEL = 50
MIXEDSPEED2ACCELDIST = 51
M1DUTYACCEL = 52
M2DUTYACCEL = 53
MIXEDDUTYACCEL = 54
READM1PID = 55
READM2PID = 56
SETMAINVOLTAGES = 57
SETLOGICVOLTAGES = 58
GETMINMAXMAINVOLTAGES = 59
GETMINMAXLOGICVOLTAGES = 60
SETM1POSPID = 61
SETM2POSPID = 62
READM1POSPID = 63
READM2POSPID = 64
M1SPEEDACCELDECCELPOS = 65
M2SPEEDACCELDECCELPOS = 66
MIXEDSPEEDACCELDECCELPOS = 67
SETM1DEFAULTACCEL = 68
SETM2DEFAULTACCEL = 69
SETPINFUNCTIONS = 74
GETPINFUNCTIONS = 75
SETDEADBAND = 76
GETDEADBAND = 77
RESTOREDEFAULTS = 80
GETTEMP = 82
GETTEMP2 = 83
GETERROR = 90
GETENCODERMODE = 91
SETM1ENCODERMODE = 92
SETM2ENCODERMODE = 93
WRITENVM = 94
READNVM = 95
SETCONFIG = 98
GETCONFIG = 99
SETM1MAXCURRENT = 133
SETM2MAXCURRENT = 134
GETM1MAXCURRENT = 135
GETM2MAXCURRENT = 136
SETPWMMODE = 148
GETPWMMODE = 149
READEEPROM = 252
WRITEEEPROM = 253
FLAGBOOTLOADER = 255
# Private Functions
def crc_clear(self):
self._crc = 0
return
def crc_update(self, data):
self._crc = self._crc ^ (data << 8)
for bit in range(0, 8):
if (self._crc & 0x8000) == 0x8000:
self._crc = (self._crc << 1) ^ 0x1021
else:
self._crc = self._crc << 1
return
def _sendcommand(self, address, command):
self.crc_clear()
self.crc_update(address)
# self._port.write(chr(address))
self._port.write(address.to_bytes(1, "big"))
self.crc_update(command)
# self._port.write(chr(command))
self._port.write(command.to_bytes(1, "big"))
return
def _readchecksumword(self):
data = self._port.read(2)
if len(data) == 2:
# crc = (ord(data[0])<<8) | ord(data[1])
crc = (data[0] << 8) | data[1]
return (1, crc)
return (0, 0)
def _readbyte(self):
data = self._port.read(1)
if len(data):
val = ord(data)
self.crc_update(val)
return (1, val)
return (0, 0)
def _readword(self):
val1 = self._readbyte()
if val1[0]:
val2 = self._readbyte()
if val2[0]:
return (1, val1[1] << 8 | val2[1])
return (0, 0)
def _readlong(self):
val1 = self._readbyte()
if val1[0]:
val2 = self._readbyte()
if val2[0]:
val3 = self._readbyte()
if val3[0]:
val4 = self._readbyte()
if val4[0]:
return (
1,
val1[1] << 24 | val2[1] << 16 | val3[1] << 8 | val4[1],
)
return (0, 0)
def _readslong(self):
val = self._readlong()
if val[0]:
if val[1] & 0x80000000:
return (val[0], val[1] - 0x100000000)
return (val[0], val[1])
return (0, 0)
def _writebyte(self, val):
self.crc_update(val & 0xFF)
# self._port.write(chr(val&0xFF))
self._port.write(val.to_bytes(1, "big"))
def _writesbyte(self, val):
self._writebyte(val)
def _writeword(self, val):
self._writebyte((val >> 8) & 0xFF)
self._writebyte(val & 0xFF)
def _writesword(self, val):
self._writeword(val)
def _writelong(self, val):
self._writebyte((val >> 24) & 0xFF)
self._writebyte((val >> 16) & 0xFF)
self._writebyte((val >> 8) & 0xFF)
self._writebyte(val & 0xFF)
def _writeslong(self, val):
self._writelong(val)
def _read1(self, address, cmd):
tries = self._trystimeout
while 1:
self._port.flushInput()
self._sendcommand(address, cmd)
val1 = self._readbyte()
if val1[0]:
crc = self._readchecksumword()
if crc[0]:
if self._crc & 0xFFFF != crc[1] & 0xFFFF:
return (0, 0)
return (1, val1[1])
tries -= 1
if tries == 0:
break
return (0, 0)
def _read2(self, address, cmd):
tries = self._trystimeout
while 1:
self._port.flushInput()
self._sendcommand(address, cmd)
val1 = self._readword()
if val1[0]:
crc = self._readchecksumword()
if crc[0]:
if self._crc & 0xFFFF != crc[1] & 0xFFFF:
return (0, 0)
return (1, val1[1])
tries -= 1
if tries == 0:
break
return (0, 0)
def _read4(self, address, cmd):
tries = self._trystimeout
while 1:
self._port.flushInput()
self._sendcommand(address, cmd)
val1 = self._readlong()
if val1[0]:
crc = self._readchecksumword()
if crc[0]:
if self._crc & 0xFFFF != crc[1] & 0xFFFF:
return (0, 0)
return (1, val1[1])
tries -= 1
if tries == 0:
break
return (0, 0)
def _read4_1(self, address, cmd):
tries = self._trystimeout
while 1:
self._port.flushInput()
self._sendcommand(address, cmd)
val1 = self._readslong()
if val1[0]:
val2 = self._readbyte()
if val2[0]:
crc = self._readchecksumword()
if crc[0]:
if self._crc & 0xFFFF != crc[1] & 0xFFFF:
return (0, 0)
return (1, val1[1], val2[1])
tries -= 1
if tries == 0:
break
return (0, 0)
def _read_n(self, address, cmd, args):
tries = self._trystimeout
while 1:
self._port.flushInput()
tries -= 1
if tries == 0:
break
failed = False
self._sendcommand(address, cmd)
data = [
1,
]
for i in range(0, args):
val = self._readlong()
if val[0] == 0:
failed = True
break
data.append(val[1])
if failed:
continue
crc = self._readchecksumword()
if crc[0]:
if self._crc & 0xFFFF == crc[1] & 0xFFFF:
return data
return (0, 0, 0, 0, 0)
def _writechecksum(self):
self._writeword(self._crc & 0xFFFF)
val = self._readbyte()
if len(val) > 0:
if val[0]:
return True
return False
def _write0(self, address, cmd):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
if self._writechecksum():
return True
tries = tries - 1
return False
def _write1(self, address, cmd, val):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writebyte(val)
if self._writechecksum():
return True
tries = tries - 1
return False
def _write11(self, address, cmd, val1, val2):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writebyte(val1)
self._writebyte(val2)
if self._writechecksum():
return True
tries = tries - 1
return False
def _write111(self, address, cmd, val1, val2, val3):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writebyte(val1)
self._writebyte(val2)
self._writebyte(val3)
if self._writechecksum():
return True
tries = tries - 1
return False
def _write2(self, address, cmd, val):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writeword(val)
if self._writechecksum():
return True
tries = tries - 1
return False
def _writeS2(self, address, cmd, val):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writesword(val)
if self._writechecksum():
return True
tries = tries - 1
return False
def _write22(self, address, cmd, val1, val2):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writeword(val1)
self._writeword(val2)
if self._writechecksum():
return True
tries = tries - 1
return False
def _writeS22(self, address, cmd, val1, val2):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writesword(val1)
self._writeword(val2)
if self._writechecksum():
return True
tries = tries - 1
return False
def _writeS2S2(self, address, cmd, val1, val2):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writesword(val1)
self._writesword(val2)
if self._writechecksum():
return True
tries = tries - 1
return False
def _writeS24(self, address, cmd, val1, val2):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writesword(val1)
self._writelong(val2)
if self._writechecksum():
return True
tries = tries - 1
return False
def _writeS24S24(self, address, cmd, val1, val2, val3, val4):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writesword(val1)
self._writelong(val2)
self._writesword(val3)
self._writelong(val4)
if self._writechecksum():
return True
tries = tries - 1
return False
def _write4(self, address, cmd, val):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writelong(val)
if self._writechecksum():
return True
tries = tries - 1
return False
def _writeS4(self, address, cmd, val):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writeslong(val)
if self._writechecksum():
return True
tries = tries - 1
return False
def _write44(self, address, cmd, val1, val2):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writelong(val1)
self._writelong(val2)
if self._writechecksum():
return True
tries = tries - 1
return False
def _write4S4(self, address, cmd, val1, val2):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writelong(val1)
self._writeslong(val2)
if self._writechecksum():
return True
tries = tries - 1
return False
def _writeS4S4(self, address, cmd, val1, val2):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writeslong(val1)
self._writeslong(val2)
if self._writechecksum():
return True
tries = tries - 1
return False
def _write441(self, address, cmd, val1, val2, val3):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writelong(val1)
self._writelong(val2)
self._writebyte(val3)
if self._writechecksum():
return True
tries = tries - 1
return False
def _writeS441(self, address, cmd, val1, val2, val3):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writeslong(val1)
self._writelong(val2)
self._writebyte(val3)
if self._writechecksum():
return True
tries = tries - 1
return False
def _write4S4S4(self, address, cmd, val1, val2, val3):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writelong(val1)
self._writeslong(val2)
self._writeslong(val3)
if self._writechecksum():
return True
tries = tries - 1
return False
def _write4S441(self, address, cmd, val1, val2, val3, val4):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writelong(val1)
self._writeslong(val2)
self._writelong(val3)
self._writebyte(val4)
if self._writechecksum():
return True
tries = tries - 1
return False
def _write4444(self, address, cmd, val1, val2, val3, val4):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writelong(val1)
self._writelong(val2)
self._writelong(val3)
self._writelong(val4)
if self._writechecksum():
return True
tries = tries - 1
return False
def _write4S44S4(self, address, cmd, val1, val2, val3, val4):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writelong(val1)
self._writeslong(val2)
self._writelong(val3)
self._writeslong(val4)
if self._writechecksum():
return True
tries = tries - 1
return False
def _write44441(self, address, cmd, val1, val2, val3, val4, val5):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writelong(val1)
self._writelong(val2)
self._writelong(val3)
self._writelong(val4)
self._writebyte(val5)
if self._writechecksum():
return True
tries = tries - 1
return False
def _writeS44S441(self, address, cmd, val1, val2, val3, val4, val5):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writeslong(val1)
self._writelong(val2)
self._writeslong(val3)
self._writelong(val4)
self._writebyte(val5)
if self._writechecksum():
return True
tries = tries - 1
return False
def _write4S44S441(self, address, cmd, val1, val2, val3, val4, val5, val6):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writelong(val1)
self._writeslong(val2)
self._writelong(val3)
self._writeslong(val4)
self._writelong(val5)
self._writebyte(val6)
if self._writechecksum():
return True
tries = tries - 1
return False
def _write4S444S441(self, address, cmd, val1, val2, val3, val4, val5, val6, val7):
tries = self._trystimeout
while tries:
            self._sendcommand(address, cmd)
self._writelong(val1)
self._writeslong(val2)
self._writelong(val3)
self._writelong(val4)
self._writeslong(val5)
self._writelong(val6)
self._writebyte(val7)
if self._writechecksum():
return True
tries = tries - 1
return False
def _write4444444(self, address, cmd, val1, val2, val3, val4, val5, val6, val7):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writelong(val1)
self._writelong(val2)
self._writelong(val3)
self._writelong(val4)
self._writelong(val5)
self._writelong(val6)
self._writelong(val7)
if self._writechecksum():
return True
tries = tries - 1
return False
def _write444444441(
self, address, cmd, val1, val2, val3, val4, val5, val6, val7, val8, val9
):
tries = self._trystimeout
while tries:
self._sendcommand(address, cmd)
self._writelong(val1)
self._writelong(val2)
self._writelong(val3)
self._writelong(val4)
self._writelong(val5)
self._writelong(val6)
self._writelong(val7)
self._writelong(val8)
self._writebyte(val9)
if self._writechecksum():
return True
tries = tries - 1
return False
# User accessible functions
def SendRandomData(self, cnt):
for i in range(0, cnt):
byte = random.getrandbits(8)
# self._port.write(chr(byte))
self._port.write(byte.to_bytes(1, "big"))
return
def ForwardM1(self, address, val):
return self._write1(address, self.Cmd.M1FORWARD, val)
def BackwardM1(self, address, val):
return self._write1(address, self.Cmd.M1BACKWARD, val)
def SetMinVoltageMainBattery(self, address, val):
return self._write1(address, self.Cmd.SETMINMB, val)
def SetMaxVoltageMainBattery(self, address, val):
return self._write1(address, self.Cmd.SETMAXMB, val)
def ForwardM2(self, address, val):
return self._write1(address, self.Cmd.M2FORWARD, val)
def BackwardM2(self, address, val):
return self._write1(address, self.Cmd.M2BACKWARD, val)
def ForwardBackwardM1(self, address, val):
return self._write1(address, self.Cmd.M17BIT, val)
def ForwardBackwardM2(self, address, val):
return self._write1(address, self.Cmd.M27BIT, val)
def ForwardMixed(self, address, val):
return self._write1(address, self.Cmd.MIXEDFORWARD, val)
def BackwardMixed(self, address, val):
return self._write1(address, self.Cmd.MIXEDBACKWARD, val)
def TurnRightMixed(self, address, val):
return self._write1(address, self.Cmd.MIXEDRIGHT, val)
def TurnLeftMixed(self, address, val):
return self._write1(address, self.Cmd.MIXEDLEFT, val)
def ForwardBackwardMixed(self, address, val):
return self._write1(address, self.Cmd.MIXEDFB, val)
def LeftRightMixed(self, address, val):
return self._write1(address, self.Cmd.MIXEDLR, val)
def ReadEncM1(self, address):
return self._read4_1(address, self.Cmd.GETM1ENC)
def ReadEncM2(self, address):
return self._read4_1(address, self.Cmd.GETM2ENC)
def ReadSpeedM1(self, address):
return self._read4_1(address, self.Cmd.GETM1SPEED)
def ReadSpeedM2(self, address):
return self._read4_1(address, self.Cmd.GETM2SPEED)
def ResetEncoders(self, address):
return self._write0(address, self.Cmd.RESETENC)
def ReadVersion(self, address):
tries = self._trystimeout
while 1:
self._port.flushInput()
self._sendcommand(address, self.Cmd.GETVERSION)
str = ""
passed = True
for i in range(0, 48):
data = self._port.read(1)
if len(data):
val = ord(data)
self.crc_update(val)
if val == 0:
break
# str+=data[0]
str += chr(data[0])
else:
passed = False
break
if passed:
crc = self._readchecksumword()
if crc[0]:
if self._crc & 0xFFFF == crc[1] & 0xFFFF:
return (1, str)
else:
time.sleep(0.01)
tries -= 1
if tries == 0:
break
return (0, 0)
def SetEncM1(self, address, cnt):
return self._write4(address, self.Cmd.SETM1ENCCOUNT, cnt)
def SetEncM2(self, address, cnt):
return self._write4(address, self.Cmd.SETM2ENCCOUNT, cnt)
def ReadMainBatteryVoltage(self, address):
return self._read2(address, self.Cmd.GETMBATT)
def ReadLogicBatteryVoltage(
self,
address,
):
return self._read2(address, self.Cmd.GETLBATT)
def SetMinVoltageLogicBattery(self, address, val):
return self._write1(address, self.Cmd.SETMINLB, val)
def SetMaxVoltageLogicBattery(self, address, val):
return self._write1(address, self.Cmd.SETMAXLB, val)
def SetM1VelocityPID(self, address, p, i, d, qpps):
# return self._write4444(address,self.Cmd.SETM1PID,long(d*65536),long(p*65536),long(i*65536),qpps)
return self._write4444(
address, self.Cmd.SETM1PID, d * 65536, p * 65536, i * 65536, qpps
)
def SetM2VelocityPID(self, address, p, i, d, qpps):
# return self._write4444(address,self.Cmd.SETM2PID,long(d*65536),long(p*65536),long(i*65536),qpps)
return self._write4444(
address, self.Cmd.SETM2PID, d * 65536, p * 65536, i * 65536, qpps
)
def ReadISpeedM1(self, address):
return self._read4_1(address, self.Cmd.GETM1ISPEED)
def ReadISpeedM2(self, address):
return self._read4_1(address, self.Cmd.GETM2ISPEED)
def DutyM1(self, address, val):
return self._writeS2(address, self.Cmd.M1DUTY, val)
def DutyM2(self, address, val):
return self._writeS2(address, self.Cmd.M2DUTY, val)
def DutyM1M2(self, address, m1, m2):
return self._writeS2S2(address, self.Cmd.MIXEDDUTY, m1, m2)
def SpeedM1(self, address, val):
return self._writeS4(address, self.Cmd.M1SPEED, val)
def SpeedM2(self, address, val):
return self._writeS4(address, self.Cmd.M2SPEED, val)
def SpeedM1M2(self, address, m1, m2):
return self._writeS4S4(address, self.Cmd.MIXEDSPEED, m1, m2)
def SpeedAccelM1(self, address, accel, speed):
return self._write4S4(address, self.Cmd.M1SPEEDACCEL, accel, speed)
def SpeedAccelM2(self, address, accel, speed):
return self._write4S4(address, self.Cmd.M2SPEEDACCEL, accel, speed)
def SpeedAccelM1M2(self, address, accel, speed1, speed2):
return self._write4S4S4(
address, self.Cmd.MIXEDSPEEDACCEL, accel, speed1, speed2
)
def SpeedDistanceM1(self, address, speed, distance, buffer):
return self._writeS441(address, self.Cmd.M1SPEEDDIST, speed, distance, buffer)
def SpeedDistanceM2(self, address, speed, distance, buffer):
return self._writeS441(address, self.Cmd.M2SPEEDDIST, speed, distance, buffer)
def SpeedDistanceM1M2(self, address, speed1, distance1, speed2, distance2, buffer):
return self._writeS44S441(
address,
self.Cmd.MIXEDSPEEDDIST,
speed1,
distance1,
speed2,
distance2,
buffer,
)
def SpeedAccelDistanceM1(self, address, accel, speed, distance, buffer):
return self._write4S441(
address, self.Cmd.M1SPEEDACCELDIST, accel, speed, distance, buffer
)
def SpeedAccelDistanceM2(self, address, accel, speed, distance, buffer):
return self._write4S441(
address, self.Cmd.M2SPEEDACCELDIST, accel, speed, distance, buffer
)
def SpeedAccelDistanceM1M2(
self, address, accel, speed1, distance1, speed2, distance2, buffer
):
return self._write4S44S441(
address,
self.Cmd.MIXEDSPEEDACCELDIST,
accel,
speed1,
distance1,
speed2,
distance2,
buffer,
)
def ReadBuffers(self, address):
val = self._read2(address, self.Cmd.GETBUFFERS)
if val[0]:
return (1, val[1] >> 8, val[1] & 0xFF)
return (0, 0, 0)
def ReadPWMs(self, address):
val = self._read4(address, self.Cmd.GETPWMS)
if val[0]:
pwm1 = val[1] >> 16
pwm2 = val[1] & 0xFFFF
if pwm1 & 0x8000:
pwm1 -= 0x10000
if pwm2 & 0x8000:
pwm2 -= 0x10000
return (1, pwm1, pwm2)
return (0, 0, 0)
def ReadCurrents(self, address):
val = self._read4(address, self.Cmd.GETCURRENTS)
if val[0]:
cur1 = val[1] >> 16
cur2 = val[1] & 0xFFFF
if cur1 & 0x8000:
cur1 -= 0x10000
if cur2 & 0x8000:
cur2 -= 0x10000
return (1, cur1, cur2)
return (0, 0, 0)
def SpeedAccelM1M2_2(self, address, accel1, speed1, accel2, speed2):
return self._write4S44S4(
            address, self.Cmd.MIXEDSPEED2ACCEL, accel1, speed1, accel2, speed2
)
def SpeedAccelDistanceM1M2_2(
self, address, accel1, speed1, distance1, accel2, speed2, distance2, buffer
):
return self._write4S444S441(
address,
self.Cmd.MIXEDSPEED2ACCELDIST,
accel1,
speed1,
distance1,
accel2,
speed2,
distance2,
buffer,
)
def DutyAccelM1(self, address, accel, duty):
return self._writeS24(address, self.Cmd.M1DUTYACCEL, duty, accel)
def DutyAccelM2(self, address, accel, duty):
return self._writeS24(address, self.Cmd.M2DUTYACCEL, duty, accel)
def DutyAccelM1M2(self, address, accel1, duty1, accel2, duty2):
return self._writeS24S24(
address, self.Cmd.MIXEDDUTYACCEL, duty1, accel1, duty2, accel2
)
def ReadM1VelocityPID(self, address):
data = self._read_n(address, self.Cmd.READM1PID, 4)
if data[0]:
data[1] /= 65536.0
data[2] /= 65536.0
data[3] /= 65536.0
return data
return (0, 0, 0, 0, 0)
def ReadM2VelocityPID(self, address):
data = self._read_n(address, self.Cmd.READM2PID, 4)
if data[0]:
data[1] /= 65536.0
data[2] /= 65536.0
data[3] /= 65536.0
return data
return (0, 0, 0, 0, 0)
def SetMainVoltages(self, address, min, max):
return self._write22(address, self.Cmd.SETMAINVOLTAGES, min, max)
def SetLogicVoltages(self, address, min, max):
return self._write22(address, self.Cmd.SETLOGICVOLTAGES, min, max)
def ReadMinMaxMainVoltages(self, address):
val = self._read4(address, self.Cmd.GETMINMAXMAINVOLTAGES)
if val[0]:
min = val[1] >> 16
max = val[1] & 0xFFFF
return (1, min, max)
return (0, 0, 0)
def ReadMinMaxLogicVoltages(self, address):
val = self._read4(address, self.Cmd.GETMINMAXLOGICVOLTAGES)
if val[0]:
min = val[1] >> 16
max = val[1] & 0xFFFF
return (1, min, max)
return (0, 0, 0)
def SetM1PositionPID(self, address, kp, ki, kd, kimax, deadzone, min, max):
# return self._write4444444(address,self.Cmd.SETM1POSPID,long(kd*1024),long(kp*1024),long(ki*1024),kimax,deadzone,min,max)
return self._write4444444(
address,
self.Cmd.SETM1POSPID,
kd * 1024,
kp * 1024,
ki * 1024,
kimax,
deadzone,
min,
max,
)
def SetM2PositionPID(self, address, kp, ki, kd, kimax, deadzone, min, max):
# return self._write4444444(address,self.Cmd.SETM2POSPID,long(kd*1024),long(kp*1024),long(ki*1024),kimax,deadzone,min,max)
return self._write4444444(
address,
self.Cmd.SETM2POSPID,
kd * 1024,
kp * 1024,
ki * 1024,
kimax,
deadzone,
min,
max,
)
def ReadM1PositionPID(self, address):
data = self._read_n(address, self.Cmd.READM1POSPID, 7)
if data[0]:
data[1] /= 1024.0
data[2] /= 1024.0
data[3] /= 1024.0
return data
return (0, 0, 0, 0, 0, 0, 0, 0)
def ReadM2PositionPID(self, address):
data = self._read_n(address, self.Cmd.READM2POSPID, 7)
if data[0]:
data[1] /= 1024.0
data[2] /= 1024.0
data[3] /= 1024.0
return data
return (0, 0, 0, 0, 0, 0, 0, 0)
def SpeedAccelDeccelPositionM1(
self, address, accel, speed, deccel, position, buffer
):
return self._write44441(
address,
self.Cmd.M1SPEEDACCELDECCELPOS,
accel,
speed,
deccel,
position,
buffer,
)
def SpeedAccelDeccelPositionM2(
self, address, accel, speed, deccel, position, buffer
):
return self._write44441(
address,
self.Cmd.M2SPEEDACCELDECCELPOS,
accel,
speed,
deccel,
position,
buffer,
)
def SpeedAccelDeccelPositionM1M2(
self,
address,
accel1,
speed1,
deccel1,
position1,
accel2,
speed2,
deccel2,
position2,
buffer,
):
return self._write444444441(
address,
self.Cmd.MIXEDSPEEDACCELDECCELPOS,
accel1,
speed1,
deccel1,
position1,
accel2,
speed2,
deccel2,
position2,
buffer,
)
def SetM1DefaultAccel(self, address, accel):
return self._write4(address, self.Cmd.SETM1DEFAULTACCEL, accel)
def SetM2DefaultAccel(self, address, accel):
return self._write4(address, self.Cmd.SETM2DEFAULTACCEL, accel)
def SetPinFunctions(self, address, S3mode, S4mode, S5mode):
return self._write111(address, self.Cmd.SETPINFUNCTIONS, S3mode, S4mode, S5mode)
def ReadPinFunctions(self, address):
tries = self._trystimeout
while 1:
self._sendcommand(address, self.Cmd.GETPINFUNCTIONS)
val1 = self._readbyte()
if val1[0]:
val2 = self._readbyte()
                if val2[0]:
                    val3 = self._readbyte()
                    if val3[0]:
crc = self._readchecksumword()
if crc[0]:
if self._crc & 0xFFFF != crc[1] & 0xFFFF:
return (0, 0)
return (1, val1[1], val2[1], val3[1])
tries -= 1
if tries == 0:
break
return (0, 0)
def SetDeadBand(self, address, min, max):
return self._write11(address, self.Cmd.SETDEADBAND, min, max)
def GetDeadBand(self, address):
val = self._read2(address, self.Cmd.GETDEADBAND)
if val[0]:
return (1, val[1] >> 8, val[1] & 0xFF)
return (0, 0, 0)
# Warning(TTL Serial): Baudrate will change if not already set to 38400. Communications will be lost
def RestoreDefaults(self, address):
return self._write0(address, self.Cmd.RESTOREDEFAULTS)
def ReadTemp(self, address):
return self._read2(address, self.Cmd.GETTEMP)
def ReadTemp2(self, address):
return self._read2(address, self.Cmd.GETTEMP2)
def ReadError(self, address):
return self._read4(address, self.Cmd.GETERROR)
def ReadEncoderModes(self, address):
val = self._read2(address, self.Cmd.GETENCODERMODE)
if val[0]:
return (1, val[1] >> 8, val[1] & 0xFF)
return (0, 0, 0)
def SetM1EncoderMode(self, address, mode):
return self._write1(address, self.Cmd.SETM1ENCODERMODE, mode)
def SetM2EncoderMode(self, address, mode):
return self._write1(address, self.Cmd.SETM2ENCODERMODE, mode)
# saves active settings to NVM
def WriteNVM(self, address):
return self._write4(address, self.Cmd.WRITENVM, 0xE22EAB7A)
# restores settings from NVM
# Warning(TTL Serial): If baudrate changes or the control mode changes communications will be lost
def ReadNVM(self, address):
return self._write0(address, self.Cmd.READNVM)
# Warning(TTL Serial): If control mode is changed from packet serial mode when setting config communications will be lost!
# Warning(TTL Serial): If baudrate of packet serial mode is changed communications will be lost!
def SetConfig(self, address, config):
return self._write2(address, self.Cmd.SETCONFIG, config)
def GetConfig(self, address):
return self._read2(address, self.Cmd.GETCONFIG)
def SetM1MaxCurrent(self, address, max):
return self._write44(address, self.Cmd.SETM1MAXCURRENT, max, 0)
def SetM2MaxCurrent(self, address, max):
return self._write44(address, self.Cmd.SETM2MAXCURRENT, max, 0)
def ReadM1MaxCurrent(self, address):
data = self._read_n(address, self.Cmd.GETM1MAXCURRENT, 2)
if data[0]:
return (1, data[1])
return (0, 0)
def ReadM2MaxCurrent(self, address):
data = self._read_n(address, self.Cmd.GETM2MAXCURRENT, 2)
if data[0]:
return (1, data[1])
return (0, 0)
def SetPWMMode(self, address, mode):
return self._write1(address, self.Cmd.SETPWMMODE, mode)
def ReadPWMMode(self, address):
return self._read1(address, self.Cmd.GETPWMMODE)
def ReadEeprom(self, address, ee_address):
tries = self._trystimeout
while 1:
self._port.flushInput()
self._sendcommand(address, self.Cmd.READEEPROM)
self.crc_update(ee_address)
            # self._port.write(chr(ee_address))
            self._port.write(ee_address.to_bytes(1, "big"))
val1 = self._readword()
if val1[0]:
crc = self._readchecksumword()
if crc[0]:
if self._crc & 0xFFFF != crc[1] & 0xFFFF:
return (0, 0)
return (1, val1[1])
tries -= 1
if tries == 0:
break
return (0, 0)
def WriteEeprom(self, address, ee_address, ee_word):
retval = self._write111(
address, self.Cmd.WRITEEEPROM, ee_address, ee_word >> 8, ee_word & 0xFF
)
if retval == True:
tries = self._trystimeout
while 1:
self._port.flushInput()
val1 = self._readbyte()
if val1[0]:
if val1[1] == 0xAA:
return True
tries -= 1
if tries == 0:
break
return False
def Open(self):
try:
self._port = serial.Serial(
port=self.comport,
baudrate=self.rate,
timeout=1,
interCharTimeout=self.timeout,
)
except:
return 0
return 1
| 37,787 | Python | 30.229752 | 132 | 0.526689 |
MarqRazz/c3pzero/c300/c300_driver/c300_driver/twist2roboclaw.py | # -*- coding: utf-8 -*-
# Copyright 2016 Open Source Robotics Foundation, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import rclpy
from rclpy.node import Node
from . import roboclaw_3
from . import diff_drive_odom
from geometry_msgs.msg import Twist
import math
from pprint import pprint
class RoboclawTwistSubscriber(Node):
def __init__(self):
super().__init__("roboclaw_twist_subscriber")
cmd_vel_topic = "cmd_vel"
self.wheel_radius = 0.1715 # meters
self.wheel_circumference = 2 * math.pi * self.wheel_radius # meters
self.ppr = 11600 # pulses per wheel revolution
        self.wheel_track = 0.54  # y distance between the left and right wheels
self.subscription = self.create_subscription(
Twist, cmd_vel_topic, self.twist_listener_callback, 10
)
self.subscription # prevent unused variable warning
self.rc = roboclaw_3.Roboclaw("/dev/ttyACM0", 115200)
if not self.rc.Open():
self.get_logger().error("failed to open port")
self.rc_address = 0x80
version = self.rc.ReadVersion(self.rc_address)
if version[0] == False:
self.get_logger().error("Retrieving the Roboclaw version failed")
else:
self.get_logger().info("Roboclaw version: %s" % repr(version[1]))
self.rc.ResetEncoders(self.rc_address)
self.diff_drive_odom = diff_drive_odom.DiffDriveOdom(
self.get_clock(), self.wheel_track, self.wheel_radius
)
self.create_timer(0.02, self.odom_callback)
self.get_logger().info(
"Init complete, listening for twist commands on topic: %s" % cmd_vel_topic
)
def twist_listener_callback(self, msg):
# self.get_logger().info('X_vel: %f, Z_rot: %f' % (0.4*msg.linear.x, msg.angular.z))
right_wheel = (
0.2 * msg.linear.x + (0.3 * msg.angular.z * self.wheel_track) / 2
) # meters / sec
left_wheel = 0.2 * msg.linear.x - (0.3 * msg.angular.z * self.wheel_track) / 2
wheel_cmds = self.mps_to_pps((right_wheel, left_wheel))
self.rc.SpeedM1(self.rc_address, wheel_cmds[0])
self.rc.SpeedM2(self.rc_address, wheel_cmds[1])
def odom_callback(self):
"""
the roboclaw returns the encoder position and velocity in a tuple
the first value is if the read was successful
the second value is the result (position pulses or rate)
the third value is ???
"""
right_wheel_enc = self.rc.ReadEncM1(self.rc_address)
left_wheel_enc = self.rc.ReadEncM2(self.rc_address)
        # if reading the wheel positions was unsuccessful return
        if right_wheel_enc[0] == 0 or left_wheel_enc[0] == 0:
self.get_logger().error("Failed retrieving the Roboclaw wheel positions")
return
right_wheel_pps = self.rc.ReadSpeedM1(self.rc_address) # pulses per second.
left_wheel_pps = self.rc.ReadSpeedM2(self.rc_address)
# if reading the wheel velocities was unsuccessful return
        if right_wheel_pps[0] == 0 or left_wheel_pps[0] == 0:
self.get_logger().error("Failed retrieving the Roboclaw wheel velocities")
return
# convert the wheel positions to radians
wheel_pos = self.enc_to_rad((right_wheel_enc[1], left_wheel_enc[1]))
# convert the wheel speeds to meters / sec
wheel_speed = self.pps_to_mps((right_wheel_pps[1], left_wheel_pps[1]))
odom_msg = self.diff_drive_odom.step(wheel_pos, wheel_speed)
# pprint(odom_msg.pose.pose.position)
self.get_logger().info(
"Pose: x=%f, y=%f theta=%f"
% (
odom_msg.pose.pose.position.x,
odom_msg.pose.pose.position.y,
odom_msg.pose.pose.orientation.z,
)
)
def mps_to_pps(self, wheel_speed):
        right_wheel_pulses = int(wheel_speed[0] / self.wheel_circumference * self.ppr)
        left_wheel_pulses = int(wheel_speed[1] / self.wheel_circumference * self.ppr)
        return (right_wheel_pulses, left_wheel_pulses)
def enc_to_rad(self, wheel_pulses):
right_wheel_pos = wheel_pulses[0] / self.ppr * 2 * math.pi
left_wheel_pos = wheel_pulses[1] / self.ppr * 2 * math.pi
# self.get_logger().info('right=%f, left=%f' % (right_wheel_pos, left_wheel_pos))
return (right_wheel_pos, left_wheel_pos)
def pps_to_mps(self, wheel_pulses_per_sec):
right_wheel_speed = (
wheel_pulses_per_sec[0] / self.ppr * self.wheel_circumference
)
left_wheel_speed = wheel_pulses_per_sec[1] / self.ppr * self.wheel_circumference
return (right_wheel_speed, left_wheel_speed)
def main(args=None):
rclpy.init(args=args)
minimal_subscriber = RoboclawTwistSubscriber()
rclpy.spin(minimal_subscriber)
# Destroy the node explicitly
# (optional - otherwise it will be done automatically
# when the garbage collector destroys the node object)
minimal_subscriber.destroy_node()
rclpy.shutdown()
if __name__ == "__main__":
main()
| 5,658 | Python | 35.986928 | 92 | 0.631142 |
MarqRazz/c3pzero/c300/c300_driver/config/f710.config.yaml | # Logitech F710
teleop_twist_joy_node:
ros__parameters:
axis_linear: # Forward/Back
x: 1
scale_linear:
x: 1.0
scale_linear_turbo:
x: 2.0
axis_angular: # Twist
yaw: 3
scale_angular:
yaw: 0.8
scale_angular_turbo:
yaw: 2.0
enable_button: 5 # Trigger
enable_turbo_button: 4 # Button 2 aka thumb button
| 374 | YAML | 17.749999 | 55 | 0.580214 |
MarqRazz/c3pzero/c300/c300_description/launch/view_base_urdf.launch.py | # -*- coding: utf-8 -*-
# Copyright 2021 PickNik Inc.
# All rights reserved.
#
# Unauthorized copying of this code base via any medium is strictly prohibited.
# Proprietary and confidential.
import launch
from launch.substitutions import Command, LaunchConfiguration
import launch_ros
import os
def generate_launch_description():
pkg_share = launch_ros.substitutions.FindPackageShare(
package="c300_description"
).find("c300_description")
default_model_path = os.path.join(pkg_share, "urdf/c300_base.urdf")
default_rviz_config_path = os.path.join(pkg_share, "rviz/view_urdf.rviz")
robot_state_publisher_node = launch_ros.actions.Node(
package="robot_state_publisher",
executable="robot_state_publisher",
parameters=[
{"robot_description": Command(["xacro ", LaunchConfiguration("model")])}
],
)
joint_state_publisher_node = launch_ros.actions.Node(
package="joint_state_publisher_gui",
executable="joint_state_publisher_gui",
)
rviz_node = launch_ros.actions.Node(
package="rviz2",
executable="rviz2",
name="rviz2",
output="screen",
arguments=["-d", LaunchConfiguration("rvizconfig")],
)
return launch.LaunchDescription(
[
launch.actions.DeclareLaunchArgument(
name="model",
default_value=default_model_path,
description="Absolute path to robot urdf file",
),
launch.actions.DeclareLaunchArgument(
name="rvizconfig",
default_value=default_rviz_config_path,
description="Absolute path to rviz config file",
),
robot_state_publisher_node,
joint_state_publisher_node,
rviz_node,
]
)
| 1,837 | Python | 31.245613 | 84 | 0.621666 |
MarqRazz/c3pzero/c300/c300_description/usd/isaac_c300.py | # -*- coding: utf-8 -*-
import argparse
import os
import sys
import carb
import numpy as np
from omni.isaac.kit import SimulationApp
C300_STAGE_PATH = "/c300"
BACKGROUND_STAGE_PATH = "/background"
BACKGROUND_USD_PATH = (
"/Isaac/Environments/Simple_Warehouse/warehouse_multiple_shelves.usd"
)
# Initialize the parser
parser = argparse.ArgumentParser(
description="Process the path to the robot's folder containing its UDS file"
)
# Add the arguments
parser.add_argument(
"Path", metavar="path", type=str, help="the path to the robot's folder"
)
# Parse the arguments
args = parser.parse_args()
# Check if the path argument was provided
if args.Path:
GEN3_USD_PATH = args.Path + "/c300.usd"
else:
print(
"[ERROR] This script requires an argument with the absolute path to the robot's folder containing it's UDS file"
)
sys.exit()
CONFIG = {"renderer": "RayTracedLighting", "headless": False}
# Example ROS2 bridge sample demonstrating the manual loading of stages
# and creation of ROS components
simulation_app = SimulationApp(CONFIG)
import omni.graph.core as og # noqa E402
from omni.isaac.core import SimulationContext # noqa E402
from omni.isaac.core.utils import ( # noqa E402
extensions,
nucleus,
prims,
rotations,
stage,
viewports,
)
from omni.isaac.core_nodes.scripts.utils import set_target_prims # noqa E402
from pxr import Gf # noqa E402
# enable ROS2 bridge extension
extensions.enable_extension("omni.isaac.ros2_bridge")
simulation_context = SimulationContext(stage_units_in_meters=1.0)
# Locate Isaac Sim assets folder to load environment and robot stages
assets_root_path = nucleus.get_assets_root_path()
if assets_root_path is None:
carb.log_error("Could not find Isaac Sim assets folder")
simulation_app.close()
sys.exit()
# Preparing stage
viewports.set_camera_view(eye=np.array([3, 0.5, 0.8]), target=np.array([0, 0, 0.5]))
# Loading the warehouse environment
stage.add_reference_to_stage(
assets_root_path + BACKGROUND_USD_PATH, BACKGROUND_STAGE_PATH
)
# Loading the c300 robot USD
prims.create_prim(
C300_STAGE_PATH,
"Xform",
position=np.array([1, 0, 0.17]),
orientation=rotations.gf_rotation_to_np_array(Gf.Rotation(Gf.Vec3d(0, 0, 1), 90)),
usd_path=GEN3_USD_PATH,
)
simulation_app.update()
# Creating a action graph with ROS component nodes
try:
og.Controller.edit(
{"graph_path": "/ActionGraph", "evaluator_name": "execution"},
{
og.Controller.Keys.CREATE_NODES: [
("OnImpulseEvent", "omni.graph.action.OnImpulseEvent"),
("ReadSimTime", "omni.isaac.core_nodes.IsaacReadSimulationTime"),
("Context", "omni.isaac.ros2_bridge.ROS2Context"),
("PublishJointState", "omni.isaac.ros2_bridge.ROS2PublishJointState"),
(
"SubscribeJointState",
"omni.isaac.ros2_bridge.ROS2SubscribeJointState",
),
(
"ArticulationController",
"omni.isaac.core_nodes.IsaacArticulationController",
),
("PublishClock", "omni.isaac.ros2_bridge.ROS2PublishClock"),
("IsaacReadLidarBeams", "omni.isaac.range_sensor.IsaacReadLidarBeams"),
("PublishLidarScan", "omni.isaac.ros2_bridge.ROS2PublishLaserScan"),
# Nodes to subtract some time of the lidar message so it's timestamps match the tf tree in ROS
("ConstantFloat", "omni.graph.nodes.ConstantFloat"),
("Subtract", "omni.graph.nodes.Subtract"),
],
og.Controller.Keys.CONNECT: [
("OnImpulseEvent.outputs:execOut", "PublishJointState.inputs:execIn"),
("OnImpulseEvent.outputs:execOut", "SubscribeJointState.inputs:execIn"),
("OnImpulseEvent.outputs:execOut", "PublishClock.inputs:execIn"),
(
"OnImpulseEvent.outputs:execOut",
"ArticulationController.inputs:execIn",
),
("Context.outputs:context", "PublishJointState.inputs:context"),
("Context.outputs:context", "SubscribeJointState.inputs:context"),
("Context.outputs:context", "PublishClock.inputs:context"),
(
"ReadSimTime.outputs:simulationTime",
"PublishJointState.inputs:timeStamp",
),
("ReadSimTime.outputs:simulationTime", "PublishClock.inputs:timeStamp"),
(
"SubscribeJointState.outputs:jointNames",
"ArticulationController.inputs:jointNames",
),
(
"SubscribeJointState.outputs:positionCommand",
"ArticulationController.inputs:positionCommand",
),
(
"SubscribeJointState.outputs:velocityCommand",
"ArticulationController.inputs:velocityCommand",
),
(
"SubscribeJointState.outputs:effortCommand",
"ArticulationController.inputs:effortCommand",
),
# Hack time offset for lidar messages
("ReadSimTime.outputs:simulationTime", "Subtract.inputs:a"),
("ConstantFloat.inputs:value", "Subtract.inputs:b"),
# Lidar nodes
(
"OnImpulseEvent.outputs:execOut",
"IsaacReadLidarBeams.inputs:execIn",
),
(
"IsaacReadLidarBeams.outputs:execOut",
"PublishLidarScan.inputs:execIn",
),
(
"IsaacReadLidarBeams.outputs:azimuthRange",
"PublishLidarScan.inputs:azimuthRange",
),
(
"IsaacReadLidarBeams.outputs:depthRange",
"PublishLidarScan.inputs:depthRange",
),
(
"IsaacReadLidarBeams.outputs:horizontalFov",
"PublishLidarScan.inputs:horizontalFov",
),
(
"IsaacReadLidarBeams.outputs:horizontalResolution",
"PublishLidarScan.inputs:horizontalResolution",
),
(
"IsaacReadLidarBeams.outputs:intensitiesData",
"PublishLidarScan.inputs:intensitiesData",
),
(
"IsaacReadLidarBeams.outputs:linearDepthData",
"PublishLidarScan.inputs:linearDepthData",
),
(
"IsaacReadLidarBeams.outputs:numCols",
"PublishLidarScan.inputs:numCols",
),
(
"IsaacReadLidarBeams.outputs:numRows",
"PublishLidarScan.inputs:numRows",
),
(
"IsaacReadLidarBeams.outputs:rotationRate",
"PublishLidarScan.inputs:rotationRate",
),
(
"Subtract.outputs:difference",
"PublishLidarScan.inputs:timeStamp",
),
("Context.outputs:context", "PublishLidarScan.inputs:context"),
],
og.Controller.Keys.SET_VALUES: [
("Context.inputs:domain_id", int(os.environ["ROS_DOMAIN_ID"])),
# Setting the /c300 target prim to Articulation Controller node
("ArticulationController.inputs:usePath", True),
("ArticulationController.inputs:robotPath", C300_STAGE_PATH),
("PublishJointState.inputs:topicName", "isaac_joint_states"),
("SubscribeJointState.inputs:topicName", "isaac_joint_commands"),
(
"PublishLidarScan.inputs:frameId",
"base_laser",
), # c300's laser frame_id
("PublishLidarScan.inputs:topicName", "scan"),
# Hack time offset for lidar messages
("ConstantFloat.inputs:value", 0.1),
],
},
)
except Exception as e:
print(e)
# Setting the /c300 target prim to Publish JointState node
set_target_prims(
primPath="/ActionGraph/PublishJointState", targetPrimPaths=[C300_STAGE_PATH]
)
# Setting the /c300's Lidar target prim to read scan data in the simulation
set_target_prims(
primPath="/ActionGraph/IsaacReadLidarBeams",
inputName="inputs:lidarPrim",
targetPrimPaths=[C300_STAGE_PATH + "/Chassis/Laser/UAM_05LP/UAM_05LP/Scan/Lidar"],
)
simulation_app.update()
# Need to initialize physics before getting any articulation, etc.
simulation_context.initialize_physics()
simulation_context.play()
while simulation_app.is_running():
# Run with a fixed step size
simulation_context.step(render=True)
# Tick the Publish/Subscribe JointState, Publish TF and Publish Clock nodes each frame
og.Controller.set(
og.Controller.attribute("/ActionGraph/OnImpulseEvent.state:enableImpulse"), True
)
simulation_context.stop()
simulation_app.close()
| 9,341 | Python | 36.368 | 120 | 0.58741 |
MarqRazz/c3pzero/c300/c300_description/usd/README.md | # Known issues with simulating c300 in Isaac
- Currently the wheel friction is set to a value of `10.0` but for rubber it should only require around `0.8`. It was increased to make the odometry match what is observed on the real robot with concrete floors.
- In the omnigraph for c300 a negative time offset is required for the LIDAR messages published by Isaac.
This is because the data's timestamp is ahead of all values available in the tf buffer.
- Isaac (and Gazebo) require a `wheel radius multiplier` of less than 1.0 to get the odometry to report correctly.
When navigating, the odometry is still not perfect and the localization system needs to compensate more than expected when the base is rotating.
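As a starting point for tuning, the multipliers can be adjusted in the diff drive controller configuration (a hypothetical sketch; the values below are illustrative, not the tuned ones used on hardware):

``` yaml
diff_drive_base_controller:
  ros__parameters:
    # Scale down the radius used for odometry to compensate for simulated wheel slip (illustrative values)
    left_wheel_radius_multiplier: 0.95
    right_wheel_radius_multiplier: 0.95
```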
| 711 | Markdown | 70.199993 | 211 | 0.791842 |
MarqRazz/c3pzero/doc/developer.md | # Developers Guide
## Quickly update code repositories
To make sure you have the latest repos:
cd $COLCON_WS/src/c3pzero
git checkout main
git pull origin main
cd $COLCON_WS/src
rosdep install --from-paths . --ignore-src -y
## Setup pre-commit
pre-commit is a tool to automatically run formatting checks on each commit, which saves you from manually running clang-format (or, crucially, from forgetting to run them!).
Install pre-commit like this:
```
pip3 install pre-commit
```
Run this in the top directory of the repo to set up the git hooks:
```
pre-commit install
```
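You can also run all of the checks manually across the whole repo (for example, right after enabling the hooks):
```
pre-commit run --all-files
```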
## Using ccache
> *Note*: This is already setup in the Docker container
ccache is a useful tool to speed up compilation times with GCC or any other sufficiently similar compiler.
To install ccache on Linux:
sudo apt-get install ccache
For other OS, search the package manager or software store for ccache, or refer to the [ccache website](https://ccache.dev/)
### Setup
To use ccache after installing it there are two methods: you can add it to your PATH, or you can configure it for more specific uses.
ccache must be in front of your regular compiler or it won't be called. It is recommended that you add this line to your `.bashrc`:
export PATH=/usr/lib/ccache:$PATH
To configure ccache for more particular uses, set the CC and CXX environment variables before invoking make, cmake, catkin_make or catkin build.
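For example, a minimal sketch of pointing a single build at ccache through those variables (the compiler names are assumptions; adjust them to your toolchain):

    export CC="ccache gcc"
    export CXX="ccache g++"
    colcon build --symlink-install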
For more information visit the [ccache website](https://ccache.dev/).
| 1,516 | Markdown | 28.745097 | 173 | 0.742744 |
MarqRazz/c3pzero/doc/installation.md | # Installation
These instructions assume you are utilizing Docker to build the robot's workspace.
# Setup the C3pzero workspace
1. On the host PC create a workspace that we can share with the docker container (*Note:* Currently the docker container expects this exact workspace `COLCON_WS` name)
``` bash
export COLCON_WS=~/c3pzero_ws/
mkdir -p $COLCON_WS/src
```
2. Get the repo and install any dependencies:
``` bash
cd $COLCON_WS/src
git clone https://github.com/MarqRazz/c3pzero.git
vcs import < c3pzero/c3pzero.repos
```
# Build and run the Docker container
1. Move into the `c3pzero` package where the `docker-compose.yaml` is located and build the docker container with:
``` bash
cd $COLCON_WS/src/c3pzero
docker compose build gpu
```
> *NOTE:* If your machine does not have an Nvidia GPU, run `docker compose build cpu` to build a container without GPU support.
2. Run the Docker container:
``` bash
docker compose run gpu # or cpu
```
3. In a second terminal attach to the container with:
``` bash
docker exec -it c3pzero bash
```
4. Build the colcon workspace with:
``` bash
colcon build --symlink-install --event-handlers log-
```
| 1,151 | Markdown | 25.790697 | 167 | 0.742832 |
MarqRazz/c3pzero/doc/user.md | # User Guide
These instructions assume you have followed the [Installation](doc/installation.md) guide and have a terminal running inside the Docker container.
## To start the `c300` mobile base in Gazebo run the following command:
``` bash
ros2 launch c300_bringup c300_sim.launch.py
```
# To teleoperate the robot with a Logitech F710 joystick run:
``` bash
ros2 launch c300_driver teleop.launch.py
```
> NOTE: in simulation the `cmd_vel` topic is on `/diff_drive_base_controller/cmd_vel_unstamped`
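To publish a quick test command directly to that topic without a joystick (illustrative velocities; stop the publisher to halt the base):
``` bash
ros2 topic pub /diff_drive_base_controller/cmd_vel_unstamped geometry_msgs/msg/Twist "{linear: {x: 0.2}, angular: {z: 0.0}}" -r 10
```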
| 504 | Markdown | 32.666664 | 146 | 0.761905 |
MarqRazz/c3pzero/c3pzero/README.md | # c3pzero Mobile Robot
## To view the robots URDF in Rviz you can use the following launch file:
``` bash
ros2 launch c3pzero_description view_robot_urdf.launch.py
```
<img src="../doc/c3pzero_urdf.png" width="50%" >
## To start the `c3pzero` mobile robot in Gazebo run the following command:
``` bash
ros2 launch c3pzero_bringup gazebo_c3pzero.launch.py launch_rviz:=false
```
## To start the `c3pzero` mobile robot in Isaac run the following commands:
From the `c3pzero_description/usd` folder on the host start the robot in Isaac Sim
``` bash
./python.sh isaac_c3pzero.py
```
Inside the `c3pzero` Docker container start the robot controllers
``` bash
ros2 launch c3pzero_bringup isaac_c3pzero.launch.py launch_rviz:=false
```
## To start MoveIt to control the simulated robot run the following command:
``` bash
ros2 launch c3pzero_moveit_config move_group.launch.py
```
## To test out the controllers in simulation you can run the following commands:
- Arm home pose
``` bash
ros2 topic pub /joint_trajectory_controller/joint_trajectory trajectory_msgs/JointTrajectory "{
joint_names: [gen3_joint_1, gen3_joint_2, gen3_joint_3, gen3_joint_4, gen3_joint_5, gen3_joint_6, gen3_joint_7],
points: [
{ positions: [0.0, 0.26, 3.14, -2.27, 0.0, 0.96, 1.57], time_from_start: { sec: 2 } },
]
}" -1
```
- Arm retracted pose
``` bash
ros2 topic pub /joint_trajectory_controller/joint_trajectory trajectory_msgs/JointTrajectory "{
joint_names: [gen3_joint_1, gen3_joint_2, gen3_joint_3, gen3_joint_4, gen3_joint_5, gen3_joint_6, gen3_joint_7],
points: [
{ positions: [0.0, -0.35, 3.14, -2.54, 0.0, -0.87, 1.57], time_from_start: { sec: 2 } },
]
}" -1
```
- GripperActionController (Open position: 0, Closed: 0.8)
``` bash
ros2 action send_goal /robotiq_gripper_controller/gripper_cmd control_msgs/action/GripperCommand "{command: {position: 0.0}}"
```
## To test sending commands directly to Isaac Sim you can run the following commands:
> NOTE: sending commands that are far away from the robot's current pose can cause the simulation to go unstable and the robot to be thrown around in the world.
- Arm home pose
``` bash
ros2 topic pub /isaac_joint_commands sensor_msgs/JointState "{
name: [gen3_joint_1, gen3_joint_2, gen3_joint_3, gen3_joint_4, gen3_joint_5, gen3_joint_6, gen3_joint_7],
position: [0.0, 0.26, 3.14, -2.27, 0.0, 0.96, 1.57]
}" -1
```
- Arm retracted pose
``` bash
ros2 topic pub /isaac_joint_commands sensor_msgs/JointState "{
name: [gen3_joint_1, gen3_joint_2, gen3_joint_3, gen3_joint_4, gen3_joint_5, gen3_joint_6, gen3_joint_7],
position: [0.0, -0.35, 3.14, -2.54, 0.0, -0.87, 1.57]
}" -1
```
| 2,640 | Markdown | 34.213333 | 145 | 0.702652 |
MarqRazz/c3pzero/c3pzero/c3pzero_description/launch/view_robot_urdf.launch.py | # -*- coding: utf-8 -*-
# Copyright 2021 PickNik Inc.
# All rights reserved.
#
# Unauthorized copying of this code base via any medium is strictly prohibited.
# Proprietary and confidential.
import launch
from launch.substitutions import Command, LaunchConfiguration
import launch_ros
import os
def generate_launch_description():
pkg_share = launch_ros.substitutions.FindPackageShare(
package="c3pzero_description"
).find("c3pzero_description")
default_model_path = os.path.join(pkg_share, "urdf/c3pzero_kinova_gen3.xacro")
default_rviz_config_path = os.path.join(pkg_share, "rviz/view_urdf.rviz")
robot_state_publisher_node = launch_ros.actions.Node(
package="robot_state_publisher",
executable="robot_state_publisher",
parameters=[
{"robot_description": Command(["xacro ", LaunchConfiguration("model")])}
],
)
joint_state_publisher_node = launch_ros.actions.Node(
package="joint_state_publisher_gui",
executable="joint_state_publisher_gui",
)
rviz_node = launch_ros.actions.Node(
package="rviz2",
executable="rviz2",
name="rviz2",
output="screen",
arguments=["-d", LaunchConfiguration("rvizconfig")],
)
return launch.LaunchDescription(
[
launch.actions.DeclareLaunchArgument(
name="model",
default_value=default_model_path,
description="Absolute path to robot urdf file",
),
launch.actions.DeclareLaunchArgument(
name="rvizconfig",
default_value=default_rviz_config_path,
description="Absolute path to rviz config file",
),
robot_state_publisher_node,
joint_state_publisher_node,
rviz_node,
]
)
| 1,854 | Python | 31.543859 | 84 | 0.624595 |
MarqRazz/c3pzero/c3pzero/c3pzero_description/usd/README.md | # Known issues with simulating c300 in Isaac
- Currently the wheel friction is set to a value of `10.0` but for rubber it should only require around `0.8`. It was increased to make the odometry match what is observed on the real robot with concrete floors.
- In the omnigraph for c300 a negative time offset is required for the LIDAR messages published by Isaac.
This is because the data's timestamp is ahead of all values available in the tf buffer.
- Isaac (and Gazebo) require a `wheel radius multiplier` of less than 1.0 to get the odometry to report correctly.
When navigating, the odometry is still not perfect and the localization system needs to compensate more than expected when the base is rotating.
- Running each ros2_control hardware interface as if it were real hardware causes the joint commands to come in separately.
This causes jerky execution and can cause the simulation to go unstable.
- I am keeping 3 USD files for the arm: as imported, with manually tuned gains, and with inverted joint angles to make the rotation directions match the values reported on the ROS topic.
Inverting the joint angles looks to be an Isaac bug because the values reported in the UI do not match.
| 1,176 | Markdown | 72.562495 | 211 | 0.798469 |
MarqRazz/c3pzero/c3pzero/c3pzero_description/usd/isaac_c3pzero.py | # -*- coding: utf-8 -*-
import argparse
import os
import sys
import carb
import numpy as np
from omni.isaac.kit import SimulationApp
C3PZERO_STAGE_PATH = "/c3pzero"
CAMERA_PRIM_PATH = (
f"{C3PZERO_STAGE_PATH}/kbot/wrist_mounted_camera_color_frame/RealsenseCamera"
)
BACKGROUND_STAGE_PATH = "/background"
BACKGROUND_USD_PATH = (
"/Isaac/Environments/Simple_Warehouse/warehouse_multiple_shelves.usd"
)
REALSENSE_VIEWPORT_NAME = "realsense_viewport"
# Initialize the parser
parser = argparse.ArgumentParser(
description="Process the path to the robot's folder containing its UDS file"
)
# Add the arguments
parser.add_argument(
"Path", metavar="path", type=str, help="the path to the robot's folder"
)
# Parse the arguments
args = parser.parse_args()
# Check if the path argument was provided
if args.Path:
GEN3_USD_PATH = args.Path + "/c3pzero_composite.usd"
else:
print(
"[ERROR] This script requires an argument with the absolute path to the robot's folder containing it's UDS file"
)
sys.exit()
CONFIG = {"renderer": "RayTracedLighting", "headless": False}
# Example ROS2 bridge sample demonstrating the manual loading of stages
# and creation of ROS components
simulation_app = SimulationApp(CONFIG)
import omni.graph.core as og # noqa E402
from omni.isaac.core import SimulationContext # noqa E402
from omni.isaac.core.utils import ( # noqa E402
extensions,
nucleus,
prims,
rotations,
stage,
viewports,
)
from omni.isaac.core.utils.prims import set_targets
from omni.isaac.core_nodes.scripts.utils import set_target_prims # noqa E402
from pxr import Gf, UsdGeom  # noqa E402
import omni.ui # to dock realsense viewport automatically
# enable ROS2 bridge extension
extensions.enable_extension("omni.isaac.ros2_bridge")
simulation_context = SimulationContext(stage_units_in_meters=1.0)
# Locate Isaac Sim assets folder to load environment and robot stages
assets_root_path = nucleus.get_assets_root_path()
if assets_root_path is None:
carb.log_error("Could not find Isaac Sim assets folder")
simulation_app.close()
sys.exit()
# Preparing stage
viewports.set_camera_view(eye=np.array([3, 0.5, 0.8]), target=np.array([0, 0, 0.5]))
# Loading the warehouse environment
stage.add_reference_to_stage(
assets_root_path + BACKGROUND_USD_PATH, BACKGROUND_STAGE_PATH
)
# Loading the c3pzero composite robot USD
prims.create_prim(
C3PZERO_STAGE_PATH,
"Xform",
position=np.array([1, 0, 0.17]),
orientation=rotations.gf_rotation_to_np_array(Gf.Rotation(Gf.Vec3d(0, 0, 1), 90)),
usd_path=GEN3_USD_PATH,
)
simulation_app.update()
# Creating a action graph with ROS component nodes
# TODO: Creating the omnigraph here is getting ridiculous!
# Move them into the USD files or refactor to use helper functions to build the graph.
try:
og.Controller.edit(
{"graph_path": "/ActionGraph", "evaluator_name": "execution"},
{
og.Controller.Keys.CREATE_NODES: [
("OnImpulseEvent", "omni.graph.action.OnImpulseEvent"),
("ReadSimTime", "omni.isaac.core_nodes.IsaacReadSimulationTime"),
("Context", "omni.isaac.ros2_bridge.ROS2Context"),
("PublishJointState", "omni.isaac.ros2_bridge.ROS2PublishJointState"),
(
"MobileBaseSubscribeJointState",
"omni.isaac.ros2_bridge.ROS2SubscribeJointState",
),
(
"MobileBaseArticulationController",
"omni.isaac.core_nodes.IsaacArticulationController",
),
(
"ManipulatorSubscribeJointState",
"omni.isaac.ros2_bridge.ROS2SubscribeJointState",
),
(
"ManipulatorArticulationController",
"omni.isaac.core_nodes.IsaacArticulationController",
),
("PublishClock", "omni.isaac.ros2_bridge.ROS2PublishClock"),
("IsaacReadLidarBeams", "omni.isaac.range_sensor.IsaacReadLidarBeams"),
("PublishLidarScan", "omni.isaac.ros2_bridge.ROS2PublishLaserScan"),
                # Nodes to subtract some time from the lidar message so its timestamps match the tf tree in ROS
("ConstantFloat", "omni.graph.nodes.ConstantFloat"),
("Subtract", "omni.graph.nodes.Subtract"),
# Wrist camera
("OnTick", "omni.graph.action.OnTick"),
("createViewport", "omni.isaac.core_nodes.IsaacCreateViewport"),
(
"getRenderProduct",
"omni.isaac.core_nodes.IsaacGetViewportRenderProduct",
),
("setCamera", "omni.isaac.core_nodes.IsaacSetCameraOnRenderProduct"),
("cameraHelperRgb", "omni.isaac.ros2_bridge.ROS2CameraHelper"),
("cameraHelperInfo", "omni.isaac.ros2_bridge.ROS2CameraHelper"),
("cameraHelperDepth", "omni.isaac.ros2_bridge.ROS2CameraHelper"),
],
og.Controller.Keys.CONNECT: [
("OnImpulseEvent.outputs:execOut", "PublishJointState.inputs:execIn"),
("OnImpulseEvent.outputs:execOut", "PublishClock.inputs:execIn"),
(
"OnImpulseEvent.outputs:execOut",
"MobileBaseSubscribeJointState.inputs:execIn",
),
(
"Context.outputs:context",
"MobileBaseSubscribeJointState.inputs:context",
),
(
"OnImpulseEvent.outputs:execOut",
"MobileBaseArticulationController.inputs:execIn",
),
(
"MobileBaseSubscribeJointState.outputs:jointNames",
"MobileBaseArticulationController.inputs:jointNames",
),
(
"MobileBaseSubscribeJointState.outputs:positionCommand",
"MobileBaseArticulationController.inputs:positionCommand",
),
(
"MobileBaseSubscribeJointState.outputs:velocityCommand",
"MobileBaseArticulationController.inputs:velocityCommand",
),
(
"MobileBaseSubscribeJointState.outputs:effortCommand",
"MobileBaseArticulationController.inputs:effortCommand",
),
(
"OnImpulseEvent.outputs:execOut",
"ManipulatorSubscribeJointState.inputs:execIn",
),
(
"Context.outputs:context",
"ManipulatorSubscribeJointState.inputs:context",
),
(
"OnImpulseEvent.outputs:execOut",
"ManipulatorArticulationController.inputs:execIn",
),
(
"ManipulatorSubscribeJointState.outputs:jointNames",
"ManipulatorArticulationController.inputs:jointNames",
),
(
"ManipulatorSubscribeJointState.outputs:positionCommand",
"ManipulatorArticulationController.inputs:positionCommand",
),
(
"ManipulatorSubscribeJointState.outputs:velocityCommand",
"ManipulatorArticulationController.inputs:velocityCommand",
),
(
"ManipulatorSubscribeJointState.outputs:effortCommand",
"ManipulatorArticulationController.inputs:effortCommand",
),
("Context.outputs:context", "PublishJointState.inputs:context"),
("Context.outputs:context", "PublishClock.inputs:context"),
(
"ReadSimTime.outputs:simulationTime",
"PublishJointState.inputs:timeStamp",
),
("ReadSimTime.outputs:simulationTime", "PublishClock.inputs:timeStamp"),
# Hack time offset for lidar messages
("ReadSimTime.outputs:simulationTime", "Subtract.inputs:a"),
("ConstantFloat.inputs:value", "Subtract.inputs:b"),
# Lidar nodes
(
"OnImpulseEvent.outputs:execOut",
"IsaacReadLidarBeams.inputs:execIn",
),
(
"IsaacReadLidarBeams.outputs:execOut",
"PublishLidarScan.inputs:execIn",
),
(
"IsaacReadLidarBeams.outputs:azimuthRange",
"PublishLidarScan.inputs:azimuthRange",
),
(
"IsaacReadLidarBeams.outputs:depthRange",
"PublishLidarScan.inputs:depthRange",
),
(
"IsaacReadLidarBeams.outputs:horizontalFov",
"PublishLidarScan.inputs:horizontalFov",
),
(
"IsaacReadLidarBeams.outputs:horizontalResolution",
"PublishLidarScan.inputs:horizontalResolution",
),
(
"IsaacReadLidarBeams.outputs:intensitiesData",
"PublishLidarScan.inputs:intensitiesData",
),
(
"IsaacReadLidarBeams.outputs:linearDepthData",
"PublishLidarScan.inputs:linearDepthData",
),
(
"IsaacReadLidarBeams.outputs:numCols",
"PublishLidarScan.inputs:numCols",
),
(
"IsaacReadLidarBeams.outputs:numRows",
"PublishLidarScan.inputs:numRows",
),
(
"IsaacReadLidarBeams.outputs:rotationRate",
"PublishLidarScan.inputs:rotationRate",
),
(
"Subtract.outputs:difference",
"PublishLidarScan.inputs:timeStamp",
),
("Context.outputs:context", "PublishLidarScan.inputs:context"),
# wrist camera
("OnTick.outputs:tick", "createViewport.inputs:execIn"),
("createViewport.outputs:execOut", "getRenderProduct.inputs:execIn"),
("createViewport.outputs:viewport", "getRenderProduct.inputs:viewport"),
("getRenderProduct.outputs:execOut", "setCamera.inputs:execIn"),
(
"getRenderProduct.outputs:renderProductPath",
"setCamera.inputs:renderProductPath",
),
("setCamera.outputs:execOut", "cameraHelperRgb.inputs:execIn"),
("setCamera.outputs:execOut", "cameraHelperInfo.inputs:execIn"),
("setCamera.outputs:execOut", "cameraHelperDepth.inputs:execIn"),
("Context.outputs:context", "cameraHelperRgb.inputs:context"),
("Context.outputs:context", "cameraHelperInfo.inputs:context"),
("Context.outputs:context", "cameraHelperDepth.inputs:context"),
(
"getRenderProduct.outputs:renderProductPath",
"cameraHelperRgb.inputs:renderProductPath",
),
(
"getRenderProduct.outputs:renderProductPath",
"cameraHelperInfo.inputs:renderProductPath",
),
(
"getRenderProduct.outputs:renderProductPath",
"cameraHelperDepth.inputs:renderProductPath",
),
],
og.Controller.Keys.SET_VALUES: [
("Context.inputs:domain_id", int(os.environ["ROS_DOMAIN_ID"])),
# Setting the /c3pzero target prim to Articulation Controller node
("MobileBaseArticulationController.inputs:usePath", True),
(
"MobileBaseArticulationController.inputs:robotPath",
C3PZERO_STAGE_PATH,
),
(
"MobileBaseSubscribeJointState.inputs:topicName",
"mobile_base_joint_commands",
),
("ManipulatorArticulationController.inputs:usePath", True),
(
"ManipulatorArticulationController.inputs:robotPath",
C3PZERO_STAGE_PATH,
),
(
"ManipulatorSubscribeJointState.inputs:topicName",
"manipulator_joint_commands",
),
("PublishJointState.inputs:topicName", "isaac_joint_states"),
(
"PublishLidarScan.inputs:frameId",
"base_laser",
), # c300's laser frame_id
("PublishLidarScan.inputs:topicName", "scan"),
# Hack time offset for lidar messages
("ConstantFloat.inputs:value", 0.1),
# Wrist camera
("createViewport.inputs:name", REALSENSE_VIEWPORT_NAME),
("createViewport.inputs:viewportId", 1),
(
"cameraHelperRgb.inputs:frameId",
"wrist_mounted_camera_color_optical_frame",
),
(
"cameraHelperRgb.inputs:topicName",
"/wrist_mounted_camera/color/image_raw",
),
("cameraHelperRgb.inputs:type", "rgb"),
(
"cameraHelperInfo.inputs:frameId",
"wrist_mounted_camera_color_optical_frame",
),
(
"cameraHelperInfo.inputs:topicName",
"/wrist_mounted_camera/color/camera_info",
),
("cameraHelperInfo.inputs:type", "camera_info"),
(
"cameraHelperDepth.inputs:frameId",
"wrist_mounted_camera_color_optical_frame",
),
(
"cameraHelperDepth.inputs:topicName",
"/wrist_mounted_camera/depth/image_rect_raw",
),
("cameraHelperDepth.inputs:type", "depth"),
],
},
)
except Exception as e:
print(e)
# Setting the /c3pzero target prim to Publish JointState node
set_target_prims(
primPath="/ActionGraph/PublishJointState", targetPrimPaths=[C3PZERO_STAGE_PATH]
)
# Setting the /c300's Lidar target prim to read scan data in the simulation
set_target_prims(
primPath="/ActionGraph/IsaacReadLidarBeams",
inputName="inputs:lidarPrim",
targetPrimPaths=[
C3PZERO_STAGE_PATH + "/c300/Chassis/Laser/UAM_05LP/UAM_05LP/Scan/Lidar"
],
)
# Fix camera settings since the defaults in the realsense model are inaccurate
realsense_prim = UsdGeom.Camera(
stage.get_current_stage().GetPrimAtPath(CAMERA_PRIM_PATH)
)
realsense_prim.GetHorizontalApertureAttr().Set(20.955)
realsense_prim.GetVerticalApertureAttr().Set(11.7)
realsense_prim.GetFocalLengthAttr().Set(18.8)
realsense_prim.GetFocusDistanceAttr().Set(400)
realsense_prim.GetClippingRangeAttr().Set(Gf.Vec2f(0.01, 1000000.0))
set_targets(
prim=stage.get_current_stage().GetPrimAtPath("/ActionGraph/setCamera"),
attribute="inputs:cameraPrim",
target_prim_paths=[CAMERA_PRIM_PATH],
)
simulation_app.update()
# Need to initialize physics before getting any articulation, etc.
simulation_context.initialize_physics()
simulation_context.play()
# Dock the second camera window
viewport = omni.ui.Workspace.get_window("Viewport")
rs_viewport = omni.ui.Workspace.get_window(REALSENSE_VIEWPORT_NAME)
rs_viewport.dock_in(viewport, omni.ui.DockPosition.RIGHT)
while simulation_app.is_running():
# Run with a fixed step size
simulation_context.step(render=True)
# Tick the Publish/Subscribe JointState, Publish TF and Publish Clock nodes each frame
og.Controller.set(
og.Controller.attribute("/ActionGraph/OnImpulseEvent.state:enableImpulse"), True
)
simulation_context.stop()
simulation_app.close()
| 16,470 | Python | 39.370098 | 120 | 0.570431 |
MarqRazz/c3pzero/c3pzero/c3pzero_bringup/launch/gazebo_c3pzero.launch.py | # -*- coding: utf-8 -*-
# Author: Marq Rasmussen
import shlex
from launch import LaunchDescription
from launch.actions import (
DeclareLaunchArgument,
IncludeLaunchDescription,
RegisterEventHandler,
)
from launch.event_handlers import OnProcessExit
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.conditions import IfCondition
from launch.substitutions import (
Command,
FindExecutable,
LaunchConfiguration,
PathJoinSubstitution,
)
from launch_ros.actions import Node
from launch_ros.substitutions import FindPackageShare
def generate_launch_description():
declared_arguments = []
# Simulation specific arguments
declared_arguments.append(
DeclareLaunchArgument(
"sim_ignition",
default_value="True",
description="Use Ignition Gazebo for simulation",
)
)
# General arguments
declared_arguments.append(
DeclareLaunchArgument(
"runtime_config_package",
default_value="c3pzero_bringup",
description='Package with the controller\'s configuration in "config" folder. \
Usually the argument is not set, it enables use of a custom setup.',
)
)
declared_arguments.append(
DeclareLaunchArgument(
"description_package",
default_value="c3pzero_description",
description="Description package with robot URDF/XACRO files. Usually the argument \
is not set, it enables use of a custom description.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"description_file",
default_value="c3pzero_kinova_gen3.xacro",
description="URDF/XACRO description file with the robot.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"robot_name",
default_value="c3pzero",
description="Robot name.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"diff_drive_controller",
default_value="diff_drive_base_controller",
description="Diff drive base controller to start.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"jtc_controller",
default_value="joint_trajectory_controller",
description="Robot controller to start.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"robot_hand_controller",
default_value="robotiq_gripper_controller",
description="Robot hand controller to start.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"launch_rviz", default_value="True", description="Launch RViz?"
)
)
declared_arguments.append(
DeclareLaunchArgument(
"use_sim_time",
default_value="True",
description="Use simulation (Gazebo) clock if true",
)
)
# Initialize Arguments
sim_ignition = LaunchConfiguration("sim_ignition")
# General arguments
runtime_config_package = LaunchConfiguration("runtime_config_package")
description_package = LaunchConfiguration("description_package")
description_file = LaunchConfiguration("description_file")
robot_name = LaunchConfiguration("robot_name")
diff_drive_controller = LaunchConfiguration("diff_drive_controller")
robot_traj_controller = LaunchConfiguration("jtc_controller")
robot_hand_controller = LaunchConfiguration("robot_hand_controller")
launch_rviz = LaunchConfiguration("launch_rviz")
use_sim_time = LaunchConfiguration("use_sim_time")
rviz_config_file = PathJoinSubstitution(
[FindPackageShare(runtime_config_package), "rviz", "bringup_config.rviz"]
)
robot_description_content = Command(
[
PathJoinSubstitution([FindExecutable(name="xacro")]),
" ",
PathJoinSubstitution(
[FindPackageShare(description_package), "urdf", description_file]
),
" ",
"sim_ignition:=",
sim_ignition,
" ",
]
)
robot_state_publisher_node = Node(
package="robot_state_publisher",
executable="robot_state_publisher",
output="both",
parameters=[
{
"use_sim_time": use_sim_time,
"robot_description": robot_description_content,
}
],
)
rviz_node = Node(
package="rviz2",
executable="rviz2",
name="rviz2",
output="log",
arguments=["-d", rviz_config_file],
condition=IfCondition(launch_rviz),
)
joint_state_broadcaster_spawner = Node(
package="controller_manager",
executable="spawner",
parameters=[{"use_sim_time": use_sim_time}],
arguments=[
"joint_state_broadcaster",
"--controller-manager",
"/controller_manager",
],
)
# Delay rviz start after `joint_state_broadcaster`
delay_rviz_after_joint_state_broadcaster_spawner = RegisterEventHandler(
event_handler=OnProcessExit(
target_action=joint_state_broadcaster_spawner,
on_exit=[rviz_node],
)
)
robot_traj_controller_spawner = Node(
package="controller_manager",
executable="spawner",
arguments=[robot_traj_controller, "-c", "/controller_manager"],
)
diff_drive_controller_spawner = Node(
package="controller_manager",
executable="spawner",
arguments=[diff_drive_controller, "-c", "/controller_manager"],
)
robot_hand_controller_spawner = Node(
package="controller_manager",
executable="spawner",
arguments=[robot_hand_controller, "-c", "/controller_manager"],
)
ignition_spawn_entity = Node(
package="ros_gz_sim",
executable="create",
output="screen",
arguments=[
"-string",
robot_description_content,
"-name",
robot_name,
"-allow_renaming",
"true",
"-x",
"0.0",
"-y",
"0.0",
"-z",
"0.3",
"-R",
"0.0",
"-P",
"0.0",
"-Y",
"0.0",
],
condition=IfCondition(sim_ignition),
)
ignition_launch_description = IncludeLaunchDescription(
PythonLaunchDescriptionSource(
[FindPackageShare("ros_gz_sim"), "/launch/gz_sim.launch.py"]
),
# TODO (marqrazz): fix the hardcoded path to the gazebo world
launch_arguments={
"gz_args": " -r -v 3 /root/c3pzero_ws/src/c3pzero/c300/c300_bringup/worlds/depot.sdf"
}.items(),
condition=IfCondition(sim_ignition),
)
# Bridge
gazebo_bridge = Node(
package="ros_gz_bridge",
executable="parameter_bridge",
parameters=[{"use_sim_time": use_sim_time}],
arguments=[
"/rgbd_camera/image@sensor_msgs/msg/Image[ignition.msgs.Image",
"/rgbd_camera/depth_image@sensor_msgs/msg/Image[ignition.msgs.Image",
"/rgbd_camera/points@sensor_msgs/msg/PointCloud2[ignition.msgs.PointCloudPacked",
"/rgbd_camera/camera_info@sensor_msgs/msg/CameraInfo[ignition.msgs.CameraInfo",
# "/segmentation/colored_map@sensor_msgs/msg/Image[ignition.msgs.Image",
# '/segmentation/labels_map@sensor_msgs/msg/[email protected]',
# "/segmentation/camera_info@sensor_msgs/msg/CameraInfo[ignition.msgs.CameraInfo",
"/scan@sensor_msgs/msg/LaserScan[ignition.msgs.LaserScan",
"/clock@rosgraph_msgs/msg/Clock[ignition.msgs.Clock",
],
remappings=[
(
"/rgbd_camera/image",
"/wrist_mounted_camera/color/image_raw",
),
(
"/rgbd_camera/depth_image",
"/wrist_mounted_camera/depth/image_rect_raw",
),
(
"/rgbd_camera/points",
"/wrist_mounted_camera/depth/color/points",
),
(
"/rgbd_camera/camera_info",
"/wrist_mounted_camera/color/camera_info",
),
],
output="screen",
)
nodes_to_start = [
robot_state_publisher_node,
joint_state_broadcaster_spawner,
delay_rviz_after_joint_state_broadcaster_spawner,
diff_drive_controller_spawner,
robot_traj_controller_spawner,
robot_hand_controller_spawner,
ignition_launch_description,
ignition_spawn_entity,
gazebo_bridge,
]
return LaunchDescription(declared_arguments + nodes_to_start)
| 8,908 | Python | 31.278985 | 97 | 0.590368 |
MarqRazz/c3pzero/c3pzero/c3pzero_bringup/launch/isaac_c3pzero.launch.py | # -*- coding: utf-8 -*-
# Author: Marq Rasmussen
import shlex
from launch import LaunchDescription
from launch.actions import (
DeclareLaunchArgument,
RegisterEventHandler,
)
from launch.event_handlers import OnProcessExit
from launch.conditions import IfCondition
from launch.substitutions import (
Command,
FindExecutable,
LaunchConfiguration,
PathJoinSubstitution,
)
from launch_ros.actions import ComposableNodeContainer, Node
import launch_ros.descriptions
from launch_ros.substitutions import FindPackageShare
def generate_launch_description():
declared_arguments = []
# Simulation specific arguments
declared_arguments.append(
DeclareLaunchArgument(
"sim_isaac",
default_value="True",
description="Use Nvidia Isaac for simulation",
)
)
# General arguments
declared_arguments.append(
DeclareLaunchArgument(
"runtime_config_package",
default_value="c3pzero_bringup",
description='Package with the controller\'s configuration in "config" folder. \
Usually the argument is not set, it enables use of a custom setup.',
)
)
declared_arguments.append(
DeclareLaunchArgument(
"controllers_file",
default_value="c3pzero_isaac_controllers.yaml",
description="YAML file with the controllers configuration.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"description_package",
default_value="c3pzero_description",
description="Description package with robot URDF/XACRO files. Usually the argument \
is not set, it enables use of a custom description.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"description_file",
default_value="c3pzero_kinova_gen3.xacro",
description="URDF/XACRO description file with the robot.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"diff_drive_controller",
default_value="diff_drive_base_controller",
description="Diff drive base controller to start.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"jtc_controller",
default_value="joint_trajectory_controller",
description="Robot controller to start.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"robot_hand_controller",
default_value="robotiq_gripper_controller",
description="Robot hand controller to start.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"launch_rviz", default_value="True", description="Launch RViz?"
)
)
declared_arguments.append(
DeclareLaunchArgument(
"use_sim_time",
default_value="True",
description="Use simulation (Gazebo) clock if true",
)
)
# Initialize Arguments
sim_isaac = LaunchConfiguration("sim_isaac")
# General arguments
runtime_config_package = LaunchConfiguration("runtime_config_package")
controllers_file = LaunchConfiguration("controllers_file")
description_package = LaunchConfiguration("description_package")
description_file = LaunchConfiguration("description_file")
diff_drive_controller = LaunchConfiguration("diff_drive_controller")
robot_traj_controller = LaunchConfiguration("jtc_controller")
robot_hand_controller = LaunchConfiguration("robot_hand_controller")
launch_rviz = LaunchConfiguration("launch_rviz")
use_sim_time = LaunchConfiguration("use_sim_time")
robot_controllers = PathJoinSubstitution(
[FindPackageShare(runtime_config_package), "config", controllers_file]
)
rviz_config_file = PathJoinSubstitution(
[FindPackageShare(runtime_config_package), "rviz", "bringup_config.rviz"]
)
robot_description_content = Command(
[
PathJoinSubstitution([FindExecutable(name="xacro")]),
" ",
PathJoinSubstitution(
[FindPackageShare(description_package), "urdf", description_file]
),
" ",
"sim_isaac:=",
sim_isaac,
" ",
"sim_ignition:=False",
" ",
]
)
robot_description = {"robot_description": robot_description_content}
control_node = Node(
package="controller_manager",
executable="ros2_control_node",
parameters=[
{"use_sim_time": use_sim_time},
robot_description,
robot_controllers,
],
remappings=[
("/diff_drive_base_controller/cmd_vel_unstamped", "/cmd_vel"),
("/diff_drive_base_controller/odom", "/odom"),
],
output="both",
)
robot_state_publisher_node = Node(
package="robot_state_publisher",
executable="robot_state_publisher",
output="both",
parameters=[
{
"use_sim_time": use_sim_time,
"robot_description": robot_description_content,
}
],
)
rviz_node = Node(
package="rviz2",
executable="rviz2",
name="rviz2",
output="log",
arguments=["-d", rviz_config_file],
condition=IfCondition(launch_rviz),
)
joint_state_broadcaster_spawner = Node(
package="controller_manager",
executable="spawner",
parameters=[{"use_sim_time": use_sim_time}],
arguments=[
"joint_state_broadcaster",
"--controller-manager",
"/controller_manager",
],
)
# Delay rviz start after `joint_state_broadcaster`
delay_rviz_after_joint_state_broadcaster_spawner = RegisterEventHandler(
event_handler=OnProcessExit(
target_action=joint_state_broadcaster_spawner,
on_exit=[rviz_node],
)
)
robot_traj_controller_spawner = Node(
package="controller_manager",
executable="spawner",
arguments=[robot_traj_controller, "-c", "/controller_manager"],
)
diff_drive_controller_spawner = Node(
package="controller_manager",
executable="spawner",
arguments=[diff_drive_controller, "-c", "/controller_manager"],
)
robot_hand_controller_spawner = Node(
package="controller_manager",
executable="spawner",
arguments=[robot_hand_controller, "-c", "/controller_manager"],
)
# launch point cloud plugin through rclcpp_components container
# see https://github.com/ros-perception/image_pipeline/blob/humble/depth_image_proc/launch/point_cloud_xyz.launch.py
point_cloud_node = ComposableNodeContainer(
name="container",
namespace="",
package="rclcpp_components",
executable="component_container",
composable_node_descriptions=[
# Driver itself
launch_ros.descriptions.ComposableNode(
package="depth_image_proc",
plugin="depth_image_proc::PointCloudXyzrgbNode",
name="point_cloud_xyz_node",
remappings=[
("rgb/image_rect_color", "/wrist_mounted_camera/color/image_raw"),
("rgb/camera_info", "/wrist_mounted_camera/color/camera_info"),
(
"depth_registered/image_rect",
"/wrist_mounted_camera/depth/image_rect_raw",
),
("points", "/wrist_mounted_camera/depth/color/points"),
],
),
],
output="screen",
)
nodes_to_start = [
control_node,
robot_state_publisher_node,
joint_state_broadcaster_spawner,
delay_rviz_after_joint_state_broadcaster_spawner,
diff_drive_controller_spawner,
robot_traj_controller_spawner,
robot_hand_controller_spawner,
point_cloud_node,
]
return LaunchDescription(declared_arguments + nodes_to_start)
| 8,173 | Python | 31.959677 | 120 | 0.605408 |
MarqRazz/c3pzero/c3pzero/c3pzero_bringup/config/c3pzero_gz_controllers.yaml | controller_manager:
ros__parameters:
update_rate: 500 # Hz
joint_state_broadcaster:
type: joint_state_broadcaster/JointStateBroadcaster
diff_drive_base_controller:
type: diff_drive_controller/DiffDriveController
joint_trajectory_controller:
type: joint_trajectory_controller/JointTrajectoryController
robotiq_gripper_controller:
type: position_controllers/GripperActionController
diff_drive_base_controller:
ros__parameters:
left_wheel_names: ["drivewhl_l_joint"]
right_wheel_names: ["drivewhl_r_joint"]
wheels_per_side: 1
wheel_separation: 0.61 # outside distance between the wheels
wheel_radius: 0.1715
wheel_separation_multiplier: 1.0
left_wheel_radius_multiplier: 1.0
right_wheel_radius_multiplier: 1.0
publish_rate: 50.0
odom_frame_id: odom
base_frame_id: base_link
pose_covariance_diagonal : [0.001, 0.001, 0.0, 0.0, 0.0, 0.01]
twist_covariance_diagonal: [0.001, 0.0, 0.0, 0.0, 0.0, 0.01]
open_loop: false
position_feedback: true
enable_odom_tf: true
cmd_vel_timeout: 0.5
#publish_limited_velocity: true
use_stamped_vel: false
#velocity_rolling_window_size: 10
# Velocity and acceleration limits
# Whenever a min_* is unspecified, default to -max_*
linear.x.has_velocity_limits: true
linear.x.has_acceleration_limits: true
linear.x.has_jerk_limits: false
linear.x.max_velocity: 2.0
linear.x.min_velocity: -2.0
linear.x.max_acceleration: 0.5
linear.x.max_jerk: 0.0
linear.x.min_jerk: 0.0
angular.z.has_velocity_limits: true
angular.z.has_acceleration_limits: true
angular.z.has_jerk_limits: false
angular.z.max_velocity: 2.0
angular.z.min_velocity: -2.0
angular.z.max_acceleration: 1.0
angular.z.min_acceleration: -1.0
angular.z.max_jerk: 0.0
angular.z.min_jerk: 0.0
joint_trajectory_controller:
ros__parameters:
joints:
- gen3_joint_1
- gen3_joint_2
- gen3_joint_3
- gen3_joint_4
- gen3_joint_5
- gen3_joint_6
- gen3_joint_7
command_interfaces:
- position
state_interfaces:
- position
- velocity
state_publish_rate: 100.0
action_monitor_rate: 20.0
allow_partial_joints_goal: false
constraints:
stopped_velocity_tolerance: 0.0
goal_time: 0.0
robotiq_gripper_controller:
ros__parameters:
default: true
joint: gen3_robotiq_85_left_knuckle_joint
interface_name: position
| 2,508 | YAML | 25.978494 | 66 | 0.673046 |
MarqRazz/c3pzero/c3pzero/c3pzero_bringup/config/c3pzero_isaac_controllers.yaml | controller_manager:
ros__parameters:
update_rate: 60 # Hz (this should match the Isaac publish rate)
joint_state_broadcaster:
type: joint_state_broadcaster/JointStateBroadcaster
diff_drive_base_controller:
type: diff_drive_controller/DiffDriveController
joint_trajectory_controller:
type: joint_trajectory_controller/JointTrajectoryController
robotiq_gripper_controller:
type: position_controllers/GripperActionController
diff_drive_base_controller:
ros__parameters:
left_wheel_names: ["drivewhl_l_joint"]
right_wheel_names: ["drivewhl_r_joint"]
wheels_per_side: 1
wheel_separation: 0.61 # outside distance between the wheels
wheel_radius: 0.1715
wheel_separation_multiplier: 1.0
left_wheel_radius_multiplier: 1.0
right_wheel_radius_multiplier: 1.0
publish_rate: 50.0
odom_frame_id: odom
base_frame_id: base_link
pose_covariance_diagonal : [0.001, 0.001, 0.0, 0.0, 0.0, 0.01]
twist_covariance_diagonal: [0.001, 0.0, 0.0, 0.0, 0.0, 0.01]
open_loop: false
position_feedback: true
enable_odom_tf: true
cmd_vel_timeout: 0.5
#publish_limited_velocity: true
use_stamped_vel: false
#velocity_rolling_window_size: 10
# Velocity and acceleration limits
# Whenever a min_* is unspecified, default to -max_*
linear.x.has_velocity_limits: true
linear.x.has_acceleration_limits: true
linear.x.has_jerk_limits: false
linear.x.max_velocity: 2.0
linear.x.min_velocity: -2.0
linear.x.max_acceleration: 0.5
linear.x.max_jerk: 0.0
linear.x.min_jerk: 0.0
angular.z.has_velocity_limits: true
angular.z.has_acceleration_limits: true
angular.z.has_jerk_limits: false
angular.z.max_velocity: 2.0
angular.z.min_velocity: -2.0
angular.z.max_acceleration: 1.0
angular.z.min_acceleration: -1.0
angular.z.max_jerk: 0.0
angular.z.min_jerk: 0.0
joint_trajectory_controller:
ros__parameters:
joints:
- gen3_joint_1
- gen3_joint_2
- gen3_joint_3
- gen3_joint_4
- gen3_joint_5
- gen3_joint_6
- gen3_joint_7
command_interfaces:
- position
state_interfaces:
- position
- velocity
state_publish_rate: 100.0
action_monitor_rate: 20.0
allow_partial_joints_goal: false
constraints:
stopped_velocity_tolerance: 0.0
goal_time: 0.0
robotiq_gripper_controller:
ros__parameters:
default: true
joint: gen3_robotiq_85_left_knuckle_joint
interface_name: position
| 2,550 | YAML | 26.430107 | 68 | 0.674902 |
MarqRazz/c3pzero/c3pzero/c3pzero_moveit_config/launch/spawn_controllers.launch.py | # -*- coding: utf-8 -*-
from moveit_configs_utils import MoveItConfigsBuilder
from moveit_configs_utils.launches import generate_spawn_controllers_launch
def generate_launch_description():
moveit_config = MoveItConfigsBuilder(
"c3pzero_kinova_gen3", package_name="c3pzero_moveit_config"
).to_moveit_configs()
return generate_spawn_controllers_launch(moveit_config)
| 387 | Python | 34.272724 | 75 | 0.75969 |
MarqRazz/c3pzero/c3pzero/c3pzero_moveit_config/launch/warehouse_db.launch.py | # -*- coding: utf-8 -*-
from moveit_configs_utils import MoveItConfigsBuilder
from moveit_configs_utils.launches import generate_warehouse_db_launch
def generate_launch_description():
moveit_config = MoveItConfigsBuilder(
"c3pzero_kinova_gen3", package_name="c3pzero_moveit_config"
).to_moveit_configs()
return generate_warehouse_db_launch(moveit_config)
| 377 | Python | 33.363633 | 70 | 0.753316 |
MarqRazz/c3pzero/c3pzero/c3pzero_moveit_config/launch/rsp.launch.py | # -*- coding: utf-8 -*-
from moveit_configs_utils import MoveItConfigsBuilder
from moveit_configs_utils.launches import generate_rsp_launch
def generate_launch_description():
moveit_config = MoveItConfigsBuilder(
"c3pzero_kinova_gen3", package_name="c3pzero_moveit_config"
).to_moveit_configs()
return generate_rsp_launch(moveit_config)
| 359 | Python | 31.72727 | 67 | 0.746518 |
MarqRazz/c3pzero/c3pzero/c3pzero_moveit_config/launch/demo.launch.py | # -*- coding: utf-8 -*-
from moveit_configs_utils import MoveItConfigsBuilder
from moveit_configs_utils.launches import generate_demo_launch
def generate_launch_description():
moveit_config = MoveItConfigsBuilder(
"c3pzero_kinova_gen3", package_name="c3pzero_moveit_config"
).to_moveit_configs()
return generate_demo_launch(moveit_config)
| 361 | Python | 31.909088 | 67 | 0.747922 |
MarqRazz/c3pzero/c3pzero/c3pzero_moveit_config/launch/move_group.launch.py | # -*- coding: utf-8 -*-
# Author: Marq Rasmussen
import os
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node
from launch.conditions import IfCondition
from ament_index_python.packages import get_package_share_directory
from moveit_configs_utils import MoveItConfigsBuilder
def generate_launch_description():
declared_arguments = []
declared_arguments.append(
DeclareLaunchArgument(
"sim",
default_value="true",
description="Use simulated clock",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"launch_rviz", default_value="true", description="Launch RViz?"
)
)
# Initialize Arguments
launch_rviz = LaunchConfiguration("launch_rviz")
use_sim_time = LaunchConfiguration("sim")
# This launch file assumes the robot is already running so we don't need to
# pass any special arguments to the URDF.xacro like sim_ignition or sim_isaac
# because the controllers should already be running.
description_arguments = {
"robot_ip": "xxx.yyy.zzz.www",
"use_fake_hardware": "false",
}
moveit_config = (
MoveItConfigsBuilder("gen3", package_name="c3pzero_moveit_config")
.robot_description(mappings=description_arguments)
.planning_pipelines(pipelines=["ompl", "pilz_industrial_motion_planner"])
.to_moveit_configs()
)
# Start the actual move_group node/action server
move_group_node = Node(
package="moveit_ros_move_group",
executable="move_group",
output="log",
parameters=[moveit_config.to_dict(), {"use_sim_time": use_sim_time}],
arguments=[
"--ros-args",
"--log-level",
"fatal",
], # MoveIt is spamming the log because of unknown '*_mimic' joints
condition=IfCondition(launch_rviz),
)
rviz_config_path = os.path.join(
get_package_share_directory("c3pzero_moveit_config"),
"config",
"moveit.rviz",
)
rviz_node = Node(
package="rviz2",
executable="rviz2",
name="rviz2",
output="log",
arguments=["-d", rviz_config_path],
parameters=[
moveit_config.robot_description,
moveit_config.robot_description_semantic,
moveit_config.planning_pipelines,
moveit_config.robot_description_kinematics,
moveit_config.joint_limits,
{"use_sim_time": use_sim_time},
],
condition=IfCondition(launch_rviz),
)
return LaunchDescription(declared_arguments + [move_group_node, rviz_node])
| 2,769 | Python | 30.123595 | 81 | 0.63597 |
MarqRazz/c3pzero/c3pzero/c3pzero_moveit_config/launch/moveit_rviz.launch.py | # -*- coding: utf-8 -*-
from moveit_configs_utils import MoveItConfigsBuilder
from moveit_configs_utils.launches import generate_moveit_rviz_launch
def generate_launch_description():
moveit_config = MoveItConfigsBuilder(
"c3pzero_kinova_gen3", package_name="c3pzero_moveit_config"
).to_moveit_configs()
return generate_moveit_rviz_launch(moveit_config)
| 375 | Python | 33.181815 | 69 | 0.752 |
MarqRazz/c3pzero/c3pzero/c3pzero_moveit_config/launch/static_virtual_joint_tfs.launch.py | # -*- coding: utf-8 -*-
from moveit_configs_utils import MoveItConfigsBuilder
from moveit_configs_utils.launches import generate_static_virtual_joint_tfs_launch
def generate_launch_description():
moveit_config = MoveItConfigsBuilder(
"c3pzero_kinova_gen3", package_name="c3pzero_moveit_config"
).to_moveit_configs()
return generate_static_virtual_joint_tfs_launch(moveit_config)
| 401 | Python | 35.545451 | 82 | 0.758105 |
MarqRazz/c3pzero/c3pzero/c3pzero_moveit_config/launch/setup_assistant.launch.py | # -*- coding: utf-8 -*-
from moveit_configs_utils import MoveItConfigsBuilder
from moveit_configs_utils.launches import generate_setup_assistant_launch
def generate_launch_description():
moveit_config = MoveItConfigsBuilder(
"c3pzero_kinova_gen3", package_name="c3pzero_moveit_config"
).to_moveit_configs()
return generate_setup_assistant_launch(moveit_config)
| 383 | Python | 33.909088 | 73 | 0.75718 |
MarqRazz/c3pzero/c3pzero/c3pzero_moveit_config/config/initial_positions.yaml | # Default initial positions for c3pzero_kinova_gen3's ros2_control fake system
initial_positions:
gen3_joint_1: 0
gen3_joint_2: 0
gen3_joint_3: 0
gen3_joint_4: 0
gen3_joint_5: 0
gen3_joint_6: 0
gen3_joint_7: 0
gen3_robotiq_85_left_knuckle_joint: 0
| 265 | YAML | 21.166665 | 78 | 0.709434 |
MarqRazz/c3pzero/c3pzero/c3pzero_moveit_config/config/pilz_cartesian_limits.yaml | # Limits for the Pilz planner
cartesian_limits:
max_trans_vel: 1.0
max_trans_acc: 2.25
max_trans_dec: -5.0
max_rot_vel: 1.57
| 133 | YAML | 18.142855 | 29 | 0.676692 |
MarqRazz/c3pzero/c3pzero/c3pzero_moveit_config/config/kinematics.yaml | manipulator:
kinematics_solver: kdl_kinematics_plugin/KDLKinematicsPlugin
kinematics_solver_search_resolution: 0.0050000000000000001
kinematics_solver_timeout: 0.0050000000000000001
| 188 | YAML | 36.799993 | 62 | 0.851064 |
MarqRazz/c3pzero/c3pzero/c3pzero_moveit_config/config/moveit_controllers.yaml | # MoveIt uses this configuration for controller management
moveit_controller_manager: moveit_simple_controller_manager/MoveItSimpleControllerManager
moveit_simple_controller_manager:
controller_names:
- joint_trajectory_controller
- robotiq_gripper_controller
joint_trajectory_controller:
type: FollowJointTrajectory
action_ns: follow_joint_trajectory
default: true
joints:
- gen3_joint_1
- gen3_joint_2
- gen3_joint_3
- gen3_joint_4
- gen3_joint_5
- gen3_joint_6
- gen3_joint_7
robotiq_gripper_controller:
type: GripperCommand
joints:
- gen3_robotiq_85_left_knuckle_joint
action_ns: gripper_cmd
default: true
| 707 | YAML | 24.285713 | 89 | 0.701556 |
MarqRazz/c3pzero/c3pzero/c3pzero_moveit_config/config/joint_limits.yaml | # joint_limits.yaml allows the dynamics properties specified in the URDF to be overwritten or augmented as needed
# For beginners, we downscale velocity and acceleration limits.
# You can always specify higher scaling factors (<= 1.0) in your motion requests. # Increase the values below to 1.0 to always move at maximum speed.
default_velocity_scaling_factor: 1.0
default_acceleration_scaling_factor: 1.0
# Specific joint properties can be changed with the keys [max_position, min_position, max_velocity, max_acceleration]
# Joint limits can be turned off with [has_velocity_limits, has_acceleration_limits]
joint_limits:
gen3_joint_1:
has_velocity_limits: true
max_velocity: 1.3963000000000001
has_acceleration_limits: true
max_acceleration: 8.6
gen3_joint_2:
has_velocity_limits: true
max_velocity: 1.3963000000000001
has_acceleration_limits: true
max_acceleration: 8.6
gen3_joint_3:
has_velocity_limits: true
max_velocity: 1.3963000000000001
has_acceleration_limits: true
max_acceleration: 8.6
gen3_joint_4:
has_velocity_limits: true
max_velocity: 1.3963000000000001
has_acceleration_limits: true
max_acceleration: 8.6
gen3_joint_5:
has_velocity_limits: true
max_velocity: 1.2218
has_acceleration_limits: true
max_acceleration: 8.6
gen3_joint_6:
has_velocity_limits: true
max_velocity: 1.2218
has_acceleration_limits: true
max_acceleration: 8.6
gen3_joint_7:
has_velocity_limits: true
max_velocity: 1.2218
has_acceleration_limits: true
max_acceleration: 8.6
gen3_robotiq_85_left_knuckle_joint:
has_velocity_limits: true
max_velocity: 0.5
has_acceleration_limits: true
max_acceleration: 1.0
| 1,741 | YAML | 33.156862 | 150 | 0.73004 |
MarqRazz/c3pzero/c3pzero/c3pzero_moveit_config/config/ompl_planning.yaml | planning_plugin: ompl_interface/OMPLPlanner
start_state_max_bounds_error: 0.1
jiggle_fraction: 0.05
request_adapters: >-
default_planner_request_adapters/AddTimeOptimalParameterization
default_planner_request_adapters/ResolveConstraintFrames
default_planner_request_adapters/FixWorkspaceBounds
default_planner_request_adapters/FixStartStateBounds
default_planner_request_adapters/FixStartStateCollision
default_planner_request_adapters/FixStartStatePathConstraints
| 489 | YAML | 43.545451 | 67 | 0.838446 |
Ranasinghe843/file-explorer-project/README.md | # Extension Project Template
This project was automatically generated.
- `app` - It is a folder link to the location of your *Omniverse Kit* based app.
- `exts` - It is a folder where you can add new extensions. It was automatically added to extension search path. (Extension Manager -> Gear Icon -> Extension Search Path).
Open this folder using Visual Studio Code. It will suggest that you install a few extensions that will make the Python experience better.
Look for the "samitha.file.explorer" extension in the Extension Manager and enable it. Try applying changes to any Python files; they will hot-reload and you can observe the results immediately.
Alternatively, you can launch your app from console with this folder added to search path and your extension enabled, e.g.:
```
> app\omni.code.bat --ext-folder exts --enable samitha.file.explorer
```
# App Link Setup
If the `app` folder link doesn't exist or is broken, it can be created again. For a better developer experience it is recommended to create a folder link named `app` to the *Omniverse Kit* app installed from *Omniverse Launcher*. A convenience script to do this is included.
Run:
```
> link_app.bat
```
If successful you should see `app` folder link in the root of this repo.
If multiple Omniverse apps are installed, the script will select the recommended one. Or you can explicitly pass an app:
```
> link_app.bat --app create
```
You can also just pass a path to create the link to:
```
> link_app.bat --path "C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4"
```
# Sharing Your Extensions
This folder is ready to be pushed to any git repository. Once pushed, a direct link to the git repository can be added to *Omniverse Kit* extension search paths.
Link might look like this: `git://github.com/[user]/[your_repo].git?branch=main&dir=exts`
Notice that `exts` is the repo subfolder with extensions. More information can be found in the "Git URL as Extension Search Paths" section of the developers manual.
To add a link to your *Omniverse Kit* based app go into: Extension Manager -> Gear Icon -> Extension Search Path
| 2,045 | Markdown | 37.603773 | 258 | 0.757457 |
Ranasinghe843/file-explorer-project/tools/scripts/link_app.py | import argparse
import json
import os
import sys
import packmanapi
import urllib3
def find_omniverse_apps():
http = urllib3.PoolManager()
try:
r = http.request("GET", "http://127.0.0.1:33480/components")
except Exception as e:
print(f"Failed retrieving apps from an Omniverse Launcher, maybe it is not installed?\nError: {e}")
sys.exit(1)
apps = {}
for x in json.loads(r.data.decode("utf-8")):
latest = x.get("installedVersions", {}).get("latest", "")
if latest:
for s in x.get("settings", []):
if s.get("version", "") == latest:
root = s.get("launch", {}).get("root", "")
apps[x["slug"]] = (x["name"], root)
break
return apps
def create_link(src, dst):
print(f"Creating a link '{src}' -> '{dst}'")
packmanapi.link(src, dst)
APP_PRIORITIES = ["code", "create", "view"]
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Create folder link to Kit App installed from Omniverse Launcher")
parser.add_argument(
"--path",
help="Path to Kit App installed from Omniverse Launcher, e.g.: 'C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4'",
required=False,
)
parser.add_argument(
"--app", help="Name of Kit App installed from Omniverse Launcher, e.g.: 'code', 'create'", required=False
)
args = parser.parse_args()
path = args.path
if not path:
print("Path is not specified, looking for Omniverse Apps...")
apps = find_omniverse_apps()
if len(apps) == 0:
print(
"Can't find any Omniverse Apps. Use Omniverse Launcher to install one. 'Code' is the recommended app for developers."
)
sys.exit(0)
print("\nFound following Omniverse Apps:")
for i, slug in enumerate(apps):
name, root = apps[slug]
print(f"{i}: {name} ({slug}) at: '{root}'")
if args.app:
selected_app = args.app.lower()
if selected_app not in apps:
choices = ", ".join(apps.keys())
print(f"Passed app: '{selected_app}' is not found. Specify one of the following found Apps: {choices}")
sys.exit(0)
else:
selected_app = next((x for x in APP_PRIORITIES if x in apps), None)
if not selected_app:
selected_app = next(iter(apps))
print(f"\nSelected app: {selected_app}")
_, path = apps[selected_app]
if not os.path.exists(path):
print(f"Provided path doesn't exist: {path}")
else:
SCRIPT_ROOT = os.path.dirname(os.path.realpath(__file__))
create_link(f"{SCRIPT_ROOT}/../../app", path)
print("Success!")
| 2,814 | Python | 32.117647 | 133 | 0.562189 |
Ranasinghe843/file-explorer-project/tools/packman/config.packman.xml | <config remotes="cloudfront">
<remote2 name="cloudfront">
<transport actions="download" protocol="https" packageLocation="d4i3qtqj3r0z5.cloudfront.net/${name}@${version}" />
</remote2>
</config>
| 211 | XML | 34.333328 | 123 | 0.691943 |
Ranasinghe843/file-explorer-project/tools/packman/bootstrap/install_package.py | # Copyright 2019 NVIDIA CORPORATION
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import shutil
import sys
import tempfile
import zipfile
__author__ = "hfannar"
logging.basicConfig(level=logging.WARNING, format="%(message)s")
logger = logging.getLogger("install_package")
class TemporaryDirectory:
def __init__(self):
self.path = None
def __enter__(self):
self.path = tempfile.mkdtemp()
return self.path
def __exit__(self, type, value, traceback):
# Remove temporary data created
shutil.rmtree(self.path)
def install_package(package_src_path, package_dst_path):
with zipfile.ZipFile(package_src_path, allowZip64=True) as zip_file, TemporaryDirectory() as temp_dir:
zip_file.extractall(temp_dir)
# Recursively copy (temp_dir will be automatically cleaned up on exit)
try:
# Recursive copy is needed because both package name and version folder could be missing in
# target directory:
shutil.copytree(temp_dir, package_dst_path)
except OSError as exc:
logger.warning("Directory %s already present, packaged installation aborted" % package_dst_path)
else:
logger.info("Package successfully installed to %s" % package_dst_path)
install_package(sys.argv[1], sys.argv[2])
| 1,844 | Python | 33.166666 | 108 | 0.703362 |
Ranasinghe843/file-explorer-project/exts/samitha.file.explorer/samitha/file/explorer/extension.py | import omni.ext
import omni.ui as ui
# Functions and vars are available to other extensions as usual in python: `example.python_ext.some_public_function(x)`
def some_public_function(x: int):
print("[samitha.file.explorer] some_public_function was called with x: ", x)
return x ** x
# Any class derived from `omni.ext.IExt` in top level module (defined in `python.modules` of `extension.toml`) will be
# instantiated when extension gets enabled and `on_startup(ext_id)` will be called. Later when extension gets disabled
# on_shutdown() is called.
class SamithaFileExplorerExtension(omni.ext.IExt):
# ext_id is current extension id. It can be used with extension manager to query additional information, like where
# this extension is located on filesystem.
def on_startup(self, ext_id):
print("[samitha.file.explorer] samitha file explorer startup")
self._count = 0
self._window = ui.Window("My Window", width=300, height=300)
with self._window.frame:
with ui.VStack():
label = ui.Label("")
def on_click():
self._count += 1
label.text = f"count: {self._count}"
def on_reset():
self._count = 0
label.text = "empty"
on_reset()
with ui.HStack():
ui.Button("Add", clicked_fn=on_click)
ui.Button("Reset", clicked_fn=on_reset)
def on_shutdown(self):
print("[samitha.file.explorer] samitha file explorer shutdown")
| 1,595 | Python | 35.272726 | 119 | 0.610658 |
Ranasinghe843/file-explorer-project/exts/samitha.file.explorer/samitha/file/explorer/__init__.py | from .extension import *
| 25 | Python | 11.999994 | 24 | 0.76 |
Ranasinghe843/file-explorer-project/exts/samitha.file.explorer/samitha/file/explorer/tests/__init__.py | from .test_hello_world import * | 31 | Python | 30.999969 | 31 | 0.774194 |
Ranasinghe843/file-explorer-project/exts/samitha.file.explorer/samitha/file/explorer/tests/test_hello_world.py | # NOTE:
# omni.kit.test - std python's unittest module with additional wrapping to add support for async/await tests
# For most things refer to unittest docs: https://docs.python.org/3/library/unittest.html
import omni.kit.test
# Extension for writing UI tests (simulate UI interaction)
import omni.kit.ui_test as ui_test
# Import extension python module we are testing with absolute import path, as if we are external user (other extension)
import samitha.file.explorer
# Having a test class derived from omni.kit.test.AsyncTestCase declared on the root of module will make it auto-discoverable by omni.kit.test
class Test(omni.kit.test.AsyncTestCase):
# Before running each test
async def setUp(self):
pass
# After running each test
async def tearDown(self):
pass
# Actual test, notice it is "async" function, so "await" can be used if needed
async def test_hello_public_function(self):
result = samitha.file.explorer.some_public_function(4)
self.assertEqual(result, 256)
async def test_window_button(self):
# Find a label in our window
label = ui_test.find("My Window//Frame/**/Label[*]")
# Find buttons in our window
add_button = ui_test.find("My Window//Frame/**/Button[*].text=='Add'")
reset_button = ui_test.find("My Window//Frame/**/Button[*].text=='Reset'")
# Click reset button
await reset_button.click()
self.assertEqual(label.widget.text, "empty")
await add_button.click()
self.assertEqual(label.widget.text, "count: 1")
await add_button.click()
self.assertEqual(label.widget.text, "count: 2")
| 1,678 | Python | 34.723404 | 142 | 0.682956 |
Ranasinghe843/file-explorer-project/exts/samitha.file.explorer/config/extension.toml | [package]
# Semantic Versioning is used: https://semver.org/
version = "1.0.0"
# Lists people or organizations that are considered the "authors" of the package.
authors = ["NVIDIA"]
# The title and description fields are primarily for displaying extension info in UI
title = "samitha file explorer"
description="A simple python extension example to use as a starting point for your extensions."
# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"
# URL of the extension source repository.
repository = ""
# One of categories for UI.
category = "Example"
# Keywords for the extension
keywords = ["kit", "example"]
# Location of change log file in target (final) folder of extension, relative to the root.
# More info on writing changelog: https://keepachangelog.com/en/1.0.0/
changelog="docs/CHANGELOG.md"
# Preview image and icon. Folder named "data" automatically goes in git lfs (see .gitattributes file).
# Preview image is shown in "Overview" of Extensions window. Screenshot of an extension might be a good preview image.
preview_image = "data/preview.png"
# Icon is shown in Extensions window, it is recommended to be square, of size 256x256.
icon = "data/icon.png"
# Use omni.ui to build simple UI
[dependencies]
"omni.kit.uiapp" = {}
# Main python module this extension provides, it will be publicly available as "import samitha.file.explorer".
[[python.module]]
name = "samitha.file.explorer"
[[test]]
# Extra dependencies only to be used during test run
dependencies = [
"omni.kit.ui_test" # UI testing extension
]
| 1,589 | TOML | 32.124999 | 118 | 0.746381 |
Ranasinghe843/file-explorer-project/exts/samitha.file.explorer/docs/CHANGELOG.md | # Changelog
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [1.0.0] - 2021-04-26
- Initial version of extension UI template with a window
| 178 | Markdown | 18.888887 | 80 | 0.702247 |
Ranasinghe843/file-explorer-project/exts/samitha.file.explorer/docs/README.md | # Python Extension Example [samitha.file.explorer]
This is an example of a pure Python Kit extension. It is intended to be copied and used as a template for creating new extensions.
| 180 | Markdown | 35.199993 | 126 | 0.788889 |
Ranasinghe843/file-explorer-project/exts/samitha.file.explorer/docs/index.rst | samitha.file.explorer
#############################
Example of Python only extension
.. toctree::
:maxdepth: 1
README
CHANGELOG
.. automodule::"samitha.file.explorer"
:platform: Windows-x86_64, Linux-x86_64
:members:
:undoc-members:
:show-inheritance:
:imported-members:
:exclude-members: contextmanager
| 343 | reStructuredText | 15.380952 | 43 | 0.623907 |
True-VFX/kit-ext-cube_array/README.md | # Cube Array Sample Extension

## Adding This Extension
To add this extension to your Omniverse app:
1. Go into: Extension Manager -> Gear Icon -> Extension Search Path
2. Add this as a search path: `git://github.com/True-VFX/kit-ext-cube_array.git?branch=main&dir=exts`
## Using This Extension
1. Click the large 'Create Array' button. This will add an xform object to your stage
2. Mess with the X, Y, and Z values to determine how many cubes in each axis to create
3. Play with the Space Between to add more or less space between each cube
| 595 | Markdown | 38.733331 | 101 | 0.754622 |
True-VFX/kit-ext-cube_array/tools/scripts/link_app.py | import os
import argparse
import sys
import json
import packmanapi
import urllib3
def find_omniverse_apps():
http = urllib3.PoolManager()
try:
r = http.request("GET", "http://127.0.0.1:33480/components")
except Exception as e:
print(f"Failed retrieving apps from an Omniverse Launcher, maybe it is not installed?\nError: {e}")
sys.exit(1)
apps = {}
for x in json.loads(r.data.decode("utf-8")):
latest = x.get("installedVersions", {}).get("latest", "")
if latest:
for s in x.get("settings", []):
if s.get("version", "") == latest:
root = s.get("launch", {}).get("root", "")
apps[x["slug"]] = (x["name"], root)
break
return apps
def create_link(src, dst):
print(f"Creating a link '{src}' -> '{dst}'")
packmanapi.link(src, dst)
APP_PRIORITIES = ["code", "create", "view"]
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Create folder link to Kit App installed from Omniverse Launcher")
parser.add_argument(
"--path",
help="Path to Kit App installed from Omniverse Launcher, e.g.: 'C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4'",
required=False,
)
parser.add_argument(
"--app", help="Name of Kit App installed from Omniverse Launcher, e.g.: 'code', 'create'", required=False
)
args = parser.parse_args()
path = args.path
if not path:
print("Path is not specified, looking for Omniverse Apps...")
apps = find_omniverse_apps()
if len(apps) == 0:
print(
"Can't find any Omniverse Apps. Use Omniverse Launcher to install one. 'Code' is the recommended app for developers."
)
sys.exit(0)
print("\nFound following Omniverse Apps:")
for i, slug in enumerate(apps):
name, root = apps[slug]
print(f"{i}: {name} ({slug}) at: '{root}'")
if args.app:
selected_app = args.app.lower()
if selected_app not in apps:
choices = ", ".join(apps.keys())
print(f"Passed app: '{selected_app}' is not found. Specify one of the following found Apps: {choices}")
sys.exit(0)
else:
selected_app = next((x for x in APP_PRIORITIES if x in apps), None)
if not selected_app:
selected_app = next(iter(apps))
print(f"\nSelected app: {selected_app}")
_, path = apps[selected_app]
if not os.path.exists(path):
print(f"Provided path doesn't exist: {path}")
else:
SCRIPT_ROOT = os.path.dirname(os.path.realpath(__file__))
create_link(f"{SCRIPT_ROOT}/../../app", path)
print("Success!")
| 2,813 | Python | 32.5 | 133 | 0.562389 |
True-VFX/kit-ext-cube_array/tools/packman/config.packman.xml | <config remotes="cloudfront">
<remote2 name="cloudfront">
<transport actions="download" protocol="https" packageLocation="d4i3qtqj3r0z5.cloudfront.net/${name}@${version}" />
</remote2>
</config>
| 211 | XML | 34.333328 | 123 | 0.691943 |
True-VFX/kit-ext-cube_array/tools/packman/bootstrap/install_package.py | # Copyright 2019 NVIDIA CORPORATION
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import zipfile
import tempfile
import sys
import shutil
__author__ = "hfannar"
logging.basicConfig(level=logging.WARNING, format="%(message)s")
logger = logging.getLogger("install_package")
class TemporaryDirectory:
def __init__(self):
self.path = None
def __enter__(self):
self.path = tempfile.mkdtemp()
return self.path
def __exit__(self, type, value, traceback):
# Remove temporary data created
shutil.rmtree(self.path)
def install_package(package_src_path, package_dst_path):
with zipfile.ZipFile(
package_src_path, allowZip64=True
) as zip_file, TemporaryDirectory() as temp_dir:
zip_file.extractall(temp_dir)
# Recursively copy (temp_dir will be automatically cleaned up on exit)
try:
# Recursive copy is needed because both package name and version folder could be missing in
# target directory:
shutil.copytree(temp_dir, package_dst_path)
except OSError as exc:
logger.warning(
"Directory %s already present, packaged installation aborted" % package_dst_path
)
else:
logger.info("Package successfully installed to %s" % package_dst_path)
install_package(sys.argv[1], sys.argv[2])
| 1,888 | Python | 31.568965 | 103 | 0.68697 |
True-VFX/kit-ext-cube_array/exts/tvfx.tools.cube_array/tvfx/tools/cube_array/extension.py | from functools import partial
from pxr import UsdGeom, Usd, Sdf, Gf
from pxr.Usd import Stage
import omni.ext
import omni.kit.commands
import omni.ui as ui
import omni.usd
# PYTHON 3.7.12
def create_uint_slider(axis:str, min=0, max=50, default=1) -> ui.UIntSlider:
ui.Label(f"{axis.capitalize()}:",width=20)
slider = ui.UIntSlider(
min=min,
max=max,
tooltip=f"The number of boxes to create in the {axis.capitalize()} axis"
)
slider.model.set_value(default)
int_field = ui.IntField(width=30)
int_field.model = slider.model
return slider
def on_slider_change(x_slider:ui.UIntSlider,y_slider:ui.UIntSlider,z_slider:ui.UIntSlider, space_slider:ui.UIntSlider, _b:float, xform:UsdGeom.Xform=None):
global cubes
# Get Active Prim
space = space_slider.model.get_value_as_float()*100
stage:Stage = omni.usd.get_context().get_stage()
selection = omni.usd.get_context().get_selection().get_selected_prim_paths()
if not selection:
return
selected_xform = xform or stage.GetPrimAtPath(selection[0])
# Ensure PointInstancer
if not selected_xform or selected_xform.GetPrim().GetTypeName() != "PointInstancer":
return
# Get XYZ values
x_count = x_slider.model.get_value_as_int()
y_count = y_slider.model.get_value_as_int()
z_count = z_slider.model.get_value_as_int()
ids = []
positions = []
# Create Cube Array
for i in range(x_count):
x = i*100+space*i
for j in range(y_count):
y = j*100+space*j
for k in range(z_count):
b = j*x_count
c = k*y_count*x_count
n = (i+b+c)
positions.append((x, y, k*100+space*k))
ids.append(0)
instancer = UsdGeom.PointInstancer(selected_xform.GetPrim())
instancer.CreateProtoIndicesAttr()
instancer.CreatePositionsAttr()
instancer.GetProtoIndicesAttr().Set(ids)
instancer.GetPositionsAttr().Set(positions)
def on_space_change(x_slider:ui.UIntSlider,y_slider:ui.UIntSlider,z_slider:ui.UIntSlider, space_slider:ui.UIntSlider, _b:float, xform:UsdGeom.Xform=None):
space = space_slider.model.get_value_as_float()*100
stage:Stage = omni.usd.get_context().get_stage()
# Get Active Selection
selection = omni.usd.get_context().get_selection().get_selected_prim_paths()
if not selection:
return
selected_xform = xform or stage.GetPrimAtPath(selection[0])
# Ensure PointInstancer
if not selected_xform or selected_xform.GetPrim().GetTypeName() != "PointInstancer":
return
# Get XYZ Values
x_count = x_slider.model.get_value_as_int()
y_count = y_slider.model.get_value_as_int()
z_count = z_slider.model.get_value_as_int()
ids = []
positions = []
# Translate Cubes
for i in range(x_count):
x = i*100+space*i
for j in range(y_count):
y = j*100+space*j
for k in range(z_count):
b = j*x_count
c = k*y_count*x_count
n = (i+b+c)
positions.append((x, y, k*100+space*k))
ids.append(0)
instancer = UsdGeom.PointInstancer(selected_xform.GetPrim())
instancer.CreateProtoIndicesAttr()
instancer.CreatePositionsAttr()
instancer.GetProtoIndicesAttr().Set(ids)
instancer.GetPositionsAttr().Set(positions)
class MyExtension(omni.ext.IExt):
def on_startup(self, ext_id):
print("[tvfx.tools.cube_array] MyExtension startup")
self._window = ui.Window("My Window", width=300, height=300)
with self._window.frame:
with ui.VStack():
# Create Slider Row
with ui.HStack(height=20):
x_slider = create_uint_slider("X")
y_slider = create_uint_slider("Y")
z_slider = create_uint_slider("Z")
ui.Spacer(height=7)
with ui.HStack(height=20):
ui.Label("Space Between:")
space_slider = ui.FloatSlider(min=0.0,max=10)
space_slider.model.set_value(0.5)
space_field = ui.FloatField(width=30)
space_field.model = space_slider.model
# Add Functions on Change
x_slider.model.add_value_changed_fn(partial(on_slider_change, x_slider,y_slider,z_slider,space_slider))
y_slider.model.add_value_changed_fn(partial(on_slider_change, x_slider,y_slider,z_slider,space_slider))
z_slider.model.add_value_changed_fn(partial(on_slider_change, x_slider,y_slider,z_slider,space_slider))
space_slider.model.add_value_changed_fn(partial(on_space_change, x_slider,y_slider,z_slider,space_slider))
# Create Array Xform Button
def create_array_holder(x_slider:ui.UIntSlider,y_slider:ui.UIntSlider,z_slider:ui.UIntSlider, space_slider:ui.UIntSlider):
C:omni.usd.UsdContext = omni.usd.get_context()
stage:Stage = C.get_stage()
cube_array:UsdGeom.PointInstancer = UsdGeom.PointInstancer.Define(stage, stage.GetDefaultPrim().GetPath().AppendPath("Cube_Array"))
proto_container = stage.OverridePrim(cube_array.GetPath().AppendPath("Prototypes"))
cube = UsdGeom.Cube.Define(stage,proto_container.GetPath().AppendPath("Cube"))
cube.CreateSizeAttr(100)
cube_array.CreatePrototypesRel()
cube_array.GetPrototypesRel().AddTarget(cube.GetPath())
omni.kit.commands.execute(
'SelectPrimsCommand',
old_selected_paths=[],
new_selected_paths=[str(cube_array.GetPath())],
expand_in_stage=True
)
on_slider_change(x_slider, y_slider, z_slider, space_slider,None, xform=cube_array)
create_array_button = ui.Button(text="Create Array")
create_array_button.set_clicked_fn(partial(create_array_holder, x_slider,y_slider,z_slider,space_slider))
def on_shutdown(self):
print("[tvfx.tools.cube_array] MyExtension shutdown")
self._window.destroy()
self._window = None
stage:Stage = omni.usd.get_context().get_stage()
| 6,479 | Python | 39.754717 | 155 | 0.60071 |
True-VFX/kit-ext-cube_array/exts/tvfx.tools.cube_array/tvfx/tools/cube_array/__init__.py | from .extension import *
| 25 | Python | 11.999994 | 24 | 0.76 |
True-VFX/kit-ext-cube_array/exts/tvfx.tools.cube_array/config/extension.toml | [package]
version = "1.0.0"
authors = ["True-VFX: Zach Eastin"]
title = "Simple Cube Array Creator"
description="Creates a simple 3D array of cubes"
category = "Tools"
readme = "docs/README.md"
repository = "https://github.com/True-VFX/kit-ext-cube_array"
preview_image = "data/preview.png"
icon = "data/icon.svg"
changelog="docs/CHANGELOG.md"
# Keywords for the extension
keywords = ["kit", "zach", "cube", "array","true-vfx", "tvfx"]
# Use omni.ui to build simple UI
[dependencies]
"omni.kit.uiapp" = {}
# Main python module this extension provides, it will be publicly available as "import omni.hello.world".
[[python.module]]
name = "tvfx.tools.cube_array"
| 672 | TOML | 21.433333 | 105 | 0.703869 |
True-VFX/kit-ext-cube_array/docs/README.md | # Cube Array Sample Extension

## Adding This Extension
To add this extension to your Omniverse app:
1. Go into: Extension Manager -> Gear Icon -> Extension Search Path
2. Add this as a search path: `git://github.com/True-VFX/kit-ext-cube_array.git?branch=main&dir=exts`
## Using This Extension
1. Click the large 'Create Array' button. This will add an xform object to your stage (see the sketch after this list)
2. Mess with the X, Y, and Z values to determine how many cubes in each axis to create
3. Play with the Space Between to add more or less space between each cube
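The 'Create Array' button builds a `PointInstancer` prim with a single cube prototype. The sketch below is a rough, simplified equivalent of the USD calls made by the extension source (the exact prim path depends on your stage's default prim):

```
from pxr import UsdGeom
import omni.usd

stage = omni.usd.get_context().get_stage()
root = stage.GetDefaultPrim().GetPath()

# PointInstancer with one cube prototype, as the extension creates it
array = UsdGeom.PointInstancer.Define(stage, root.AppendPath("Cube_Array"))
protos = stage.OverridePrim(array.GetPath().AppendPath("Prototypes"))
cube = UsdGeom.Cube.Define(stage, protos.GetPath().AppendPath("Cube"))
cube.CreateSizeAttr(100)
array.CreatePrototypesRel().AddTarget(cube.GetPath())

# The X/Y/Z and Space Between sliders then author one index and position per cube
array.CreateProtoIndicesAttr([0, 0])
array.CreatePositionsAttr([(0, 0, 0), (150, 0, 0)])
```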
| 598 | Markdown | 38.933331 | 101 | 0.750836 |
True-VFX/kit-ext-cube_array/docs/index.rst | tvfx.tools.cube_array
###########################
.. toctree::
:maxdepth: 1
README
CHANGELOG
| 106 | reStructuredText | 8.727272 | 27 | 0.462264 |
Prudhvi-Tummala/PrudhviTummala.MeshEditor.MeshTools/README.md | # Python Extension [MeshTools]
This project was developed for NVIDIA Omniverse.
The MeshTools extension contains tools to split, slice, and merge meshes.
Note: data such as normals and other mesh properties will be lost in the output. If your use case requires that data to be preserved, you will have to modify the code.
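As a starting point, here is a minimal sketch of copying authored normals from a source `UsdGeom.Mesh` onto a rebuilt one. It assumes the point and face ordering is unchanged; the split/slice/merge tools reorder vertices, so in practice the copied arrays would also need to be remapped:

```
from pxr import UsdGeom

def copy_normals(src: UsdGeom.Mesh, dst: UsdGeom.Mesh):
    # Copy authored normals and their interpolation; only valid when
    # dst uses the same point/face ordering as src.
    normals = src.GetNormalsAttr().Get()
    if normals:
        dst.CreateNormalsAttr(normals)
        dst.SetNormalsInterpolation(src.GetNormalsInterpolation())
```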
- `app` - It is a folder link to the location of your *Omniverse Kit* based app.
- `exts` - It is a folder where you can add new extensions. It was automatically added to extension search path. (Extension Manager -> Gear Icon -> Extension Search Path).
Open this folder using Visual Studio Code. It will suggest that you install a few extensions that will make the Python experience better.
Look for the "MeshTools" extension in the Extension Manager and enable it. Try applying changes to any Python file; it will hot-reload and you can observe the results immediately.
Alternatively, you can launch your app from console with this folder added to search path and your extension enabled, e.g.:
```
> app\omni.code.bat --ext-folder exts --enable company.hello.world
```
# App Link Setup
If the `app` folder link doesn't exist or is broken, it can be created again. For a better developer experience, it is recommended to create a folder link named `app` to the *Omniverse Kit* app installed from *Omniverse Launcher*. A convenience script to do this is included.
Run:
```
> link_app.bat
```
If successful you should see `app` folder link in the root of this repo.
If multiple Omniverse apps are installed, the script will select the recommended one. Or you can explicitly pass an app:
```
> link_app.bat --app create
```
You can also just pass a path to create a link to:
```
> link_app.bat --path "C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4"
```
| 1,722 | Markdown | 36.456521 | 258 | 0.75842 |
Prudhvi-Tummala/PrudhviTummala.MeshEditor.MeshTools/tools/scripts/link_app.py | import argparse
import json
import os
import sys
import packmanapi
import urllib3
def find_omniverse_apps():
http = urllib3.PoolManager()
try:
r = http.request("GET", "http://127.0.0.1:33480/components")
except Exception as e:
print(f"Failed retrieving apps from an Omniverse Launcher, maybe it is not installed?\nError: {e}")
sys.exit(1)
apps = {}
for x in json.loads(r.data.decode("utf-8")):
latest = x.get("installedVersions", {}).get("latest", "")
if latest:
for s in x.get("settings", []):
if s.get("version", "") == latest:
root = s.get("launch", {}).get("root", "")
apps[x["slug"]] = (x["name"], root)
break
return apps
def create_link(src, dst):
print(f"Creating a link '{src}' -> '{dst}'")
packmanapi.link(src, dst)
APP_PRIORITIES = ["code", "create", "view"]
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Create folder link to Kit App installed from Omniverse Launcher")
parser.add_argument(
"--path",
help="Path to Kit App installed from Omniverse Launcher, e.g.: 'C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4'",
required=False,
)
parser.add_argument(
"--app", help="Name of Kit App installed from Omniverse Launcher, e.g.: 'code', 'create'", required=False
)
args = parser.parse_args()
path = args.path
if not path:
print("Path is not specified, looking for Omniverse Apps...")
apps = find_omniverse_apps()
if len(apps) == 0:
print(
"Can't find any Omniverse Apps. Use Omniverse Launcher to install one. 'Code' is the recommended app for developers."
)
sys.exit(0)
print("\nFound following Omniverse Apps:")
for i, slug in enumerate(apps):
name, root = apps[slug]
print(f"{i}: {name} ({slug}) at: '{root}'")
if args.app:
selected_app = args.app.lower()
if selected_app not in apps:
choices = ", ".join(apps.keys())
print(f"Passed app: '{selected_app}' is not found. Specify one of the following found Apps: {choices}")
sys.exit(0)
else:
selected_app = next((x for x in APP_PRIORITIES if x in apps), None)
if not selected_app:
selected_app = next(iter(apps))
print(f"\nSelected app: {selected_app}")
_, path = apps[selected_app]
if not os.path.exists(path):
print(f"Provided path doesn't exist: {path}")
else:
SCRIPT_ROOT = os.path.dirname(os.path.realpath(__file__))
create_link(f"{SCRIPT_ROOT}/../../app", path)
print("Success!")
| 2,814 | Python | 32.117647 | 133 | 0.562189 |
Prudhvi-Tummala/PrudhviTummala.MeshEditor.MeshTools/tools/packman/config.packman.xml | <config remotes="cloudfront">
<remote2 name="cloudfront">
<transport actions="download" protocol="https" packageLocation="d4i3qtqj3r0z5.cloudfront.net/${name}@${version}" />
</remote2>
</config>
| 211 | XML | 34.333328 | 123 | 0.691943 |
Prudhvi-Tummala/PrudhviTummala.MeshEditor.MeshTools/tools/packman/bootstrap/install_package.py | # Copyright 2019 NVIDIA CORPORATION
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import shutil
import sys
import tempfile
import zipfile
__author__ = "hfannar"
logging.basicConfig(level=logging.WARNING, format="%(message)s")
logger = logging.getLogger("install_package")
class TemporaryDirectory:
def __init__(self):
self.path = None
def __enter__(self):
self.path = tempfile.mkdtemp()
return self.path
def __exit__(self, type, value, traceback):
# Remove temporary data created
shutil.rmtree(self.path)
def install_package(package_src_path, package_dst_path):
with zipfile.ZipFile(package_src_path, allowZip64=True) as zip_file, TemporaryDirectory() as temp_dir:
zip_file.extractall(temp_dir)
# Recursively copy (temp_dir will be automatically cleaned up on exit)
try:
# Recursive copy is needed because both package name and version folder could be missing in
# target directory:
shutil.copytree(temp_dir, package_dst_path)
except OSError as exc:
logger.warning("Directory %s already present, packaged installation aborted" % package_dst_path)
else:
logger.info("Package successfully installed to %s" % package_dst_path)
install_package(sys.argv[1], sys.argv[2])
| 1,844 | Python | 33.166666 | 108 | 0.703362 |
Prudhvi-Tummala/PrudhviTummala.MeshEditor.MeshTools/exts/MeshTools/MeshTools/SliceTool.py | import omni.ext
import omni.ui as ui
from pxr import Usd, UsdGeom, Gf
import numpy as np
import omni.kit.app
import omni.kit.commands
import os
class Slice_Tools:
def sliceMesh(self,PlaneAxis,Plane):
stage = omni.usd.get_context().get_stage()
prim_paths = omni.usd.get_context().get_selection().get_selected_prim_paths()
if len(prim_paths) > 1:
return
for prim_path in prim_paths:
# Define a new Mesh primitive
prim_mesh = stage.GetPrimAtPath(prim_path)
split_mesh = UsdGeom.Mesh(prim_mesh)
faceVertexIndices1 = np.array(split_mesh.GetFaceVertexIndicesAttr().Get())
faceVertexCounts1 = np.array(split_mesh.GetFaceVertexCountsAttr().Get())
points1 = np.array(split_mesh.GetPointsAttr().Get())
indiBfrPlane = np.where(points1[:,PlaneAxis]<=Plane)
indiAftrPlane = np.where(points1[:,PlaneAxis]>=Plane)
faceVertexIndices1 = faceVertexIndices1.reshape(-1,3)
shape = np.unique(faceVertexCounts1)
if len(shape) > 1 :
print('non-uniform mesh')
return
if not shape[0] == 3:
print('Only works for triangle mesh')
return
bfrPlane = []
aftrPlane = []
others = []
pointsBfrPlane = np.array(points1[indiBfrPlane])
pointsAftrPlane = np.array(points1[indiAftrPlane])
for vertex in faceVertexIndices1:
easy_split = False
if len(np.intersect1d(vertex,indiBfrPlane)) == 3:
bfrPlane.append(vertex)
easy_split = True
if len(np.intersect1d(vertex,indiAftrPlane)) == 3:
aftrPlane.append(vertex)
easy_split = True
if not easy_split:
others.append(vertex)
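            # Triangles lying entirely on one side of the plane were collected above; the
            # remaining 'others' straddle the plane. Each one is re-triangulated below: the lone
            # vertex on one side keeps a single triangle, while the quad left on the other side
            # is split into two triangles using the two edge/plane intersection points.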
for vIndi in others:
tri = points1[vIndi]
bfr = np.where(tri[:,PlaneAxis]<Plane)
aftr = np.where(tri[:,PlaneAxis]>Plane)
solo_bfr = (len(bfr[0])== 1)
if solo_bfr:
a = bfr[0][0]
else:
a = aftr[0][0]
b = (a+4)%3
c = (a+5)%3
dir_vector_d = tri[b]-tri[a]
dir_vector_e = tri[c]-tri[a]
cnst_d = (Plane - tri[a][PlaneAxis])/dir_vector_d[PlaneAxis]
cnst_e = (Plane - tri[a][PlaneAxis])/dir_vector_e[PlaneAxis]
point_d = tri[a] + (dir_vector_d*cnst_d)
point_e = tri[a] + (dir_vector_e*cnst_e)
pointsBfrPlane = np.append(pointsBfrPlane, [point_d])
pointsBfrPlane = np.append(pointsBfrPlane, [point_e])
pointsAftrPlane = np.append(pointsAftrPlane, [point_d])
pointsAftrPlane = np.append(pointsAftrPlane, [point_e])
max1 = np.max(np.array(bfrPlane).reshape(1,-1))
max2 = np.max(np.array(aftrPlane).reshape(1,-1))
if solo_bfr:
bfrPlane.append([vIndi[a],max1+1,max1+2])
aftrPlane.append([vIndi[b],vIndi[c],max2+1])
aftrPlane.append([vIndi[c],max2+2,max2+1])
else:
bfrPlane.append([vIndi[b],vIndi[c],max1+1])
bfrPlane.append([vIndi[c],max1+2,max1+1])
aftrPlane.append([vIndi[a],max2+1,max2+2])
uniq_bfr = np.sort(np.unique(np.array(bfrPlane).reshape(1,-1),return_index=False))
uniq_aftr = np.sort(np.unique(np.array(aftrPlane).reshape(1,-1),return_index=False))
nwIndiBfr = np.searchsorted(uniq_bfr,np.array(bfrPlane).reshape(1,-1))[0]
nwIndiAftr = np.searchsorted(uniq_aftr,np.array(aftrPlane).reshape(1,-1))[0]
mesh1 = UsdGeom.Mesh.Define(stage, '/Split_Mesh_1')
pointsBfrPlane = np.array(pointsBfrPlane).reshape(-1,3)
pointsAftrPlane = np.array(pointsAftrPlane).reshape(-1,3)
mesh1.CreatePointsAttr(pointsBfrPlane)
lnCntBfr = len(nwIndiBfr)/3
triCntsBfr = np.zeros(int(lnCntBfr)) + 3
print(len(pointsBfrPlane))
print(np.max(nwIndiBfr))
mesh1.CreateFaceVertexCountsAttr(triCntsBfr)
mesh1.CreateFaceVertexIndicesAttr(nwIndiBfr)
mesh2 = UsdGeom.Mesh.Define(stage, '/Split_Mesh_2')
mesh2.CreatePointsAttr(pointsAftrPlane)
lnCntAftr = len(nwIndiAftr)/3
print(len(pointsAftrPlane))
print(np.max(nwIndiAftr))
triCntsAftr = np.zeros(int(lnCntAftr)) + 3
mesh2.CreateFaceVertexCountsAttr(triCntsAftr)
mesh2.CreateFaceVertexIndicesAttr(nwIndiAftr)
def boundingbox(self):
stage = omni.usd.get_context().get_stage()
prim_paths = omni.usd.get_context().get_selection().get_selected_prim_paths()
if len(prim_paths) > 1:
return
        if len(prim_paths) == 0:
            print('fail')  # nothing is selected
            return
for prim_path in prim_paths:
prim_mesh = stage.GetPrimAtPath(prim_path)
split_mesh = UsdGeom.Mesh(prim_mesh)
UsdGeom.BBoxCache(Usd.TimeCode.Default(), ["default"]).Clear()
localBBox = UsdGeom.BBoxCache(Usd.TimeCode.Default(), ["default"]).ComputeWorldBound(prim_mesh)
bbox = Gf.BBox3d(localBBox).GetBox()
minbox = bbox.GetMin()
maxbox = bbox.GetMax()
return minbox, maxbox
def createplane(self,plane,planeValue,minbox1,maxbox1):
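        # Build a quad lying on the chosen axis-aligned plane, padded 5 units beyond the mesh
        # bounding box, to serve as a visual preview of the slice location.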
if plane == 0:
point0 = [planeValue,minbox1[1]-5,minbox1[2]-5]
point1 = [planeValue,minbox1[1]-5,maxbox1[2]+5]
point2 = [planeValue,maxbox1[1]+5,maxbox1[2]+5]
point3 = [planeValue,maxbox1[1]+5,minbox1[2]-5]
elif plane == 1:
point0 = [minbox1[0]-5,planeValue,minbox1[2]-5]
point1 = [minbox1[0]-5,planeValue,maxbox1[2]+5]
point2 = [maxbox1[0]+5,planeValue,maxbox1[2]+5]
point3 = [maxbox1[0]+5,planeValue,minbox1[2]-5]
else:
point0 = [minbox1[0]-5,minbox1[1]-5,planeValue]
point1 = [minbox1[0]-5,maxbox1[1]+5,planeValue]
point2 = [maxbox1[0]+5,maxbox1[1]+5,planeValue]
point3 = [maxbox1[0]+5,minbox1[1]-5,planeValue]
points = [point0,point1,point2,point3]
counts = [4]
indicies = [0,1,2,3]
stage = omni.usd.get_context().get_stage()
mesh_plane = UsdGeom.Mesh.Define(stage, '/temp_plane')
mesh_plane.CreatePointsAttr(points)
mesh_plane.CreateFaceVertexCountsAttr(counts)
mesh_plane.CreateFaceVertexIndicesAttr(indicies)
def modifyplane(self,plane,planeValue):
stage = omni.usd.get_context().get_stage()
prim = stage.GetPrimAtPath('/temp_plane')
mesh_plane = UsdGeom.Mesh(prim )
points = mesh_plane.GetPointsAttr().Get()
points = np.array(points)
changevalue = np.zeros(4) + planeValue
points[:,plane] = changevalue
counts = [4]
indicies = [0,1,2,3]
mesh_plane.CreatePointsAttr(points)
mesh_plane.CreateFaceVertexCountsAttr(counts)
mesh_plane.CreateFaceVertexIndicesAttr(indicies)
def delPlane(self):
omni.kit.commands.execute('DeletePrims', paths=['/temp_plane'],destructive=False)
| 7,530 | Python | 43.040935 | 107 | 0.563878 |
Prudhvi-Tummala/PrudhviTummala.MeshEditor.MeshTools/exts/MeshTools/MeshTools/SplitTool.py | import omni.ext
import omni.ui as ui
from pxr import Usd, UsdGeom, Gf
import numpy as np
import omni.kit.app
import os
class Split_Tools:
def splitMesh(self):
stage = omni.usd.get_context().get_stage()
prim_paths = omni.usd.get_context().get_selection().get_selected_prim_paths()
for prim_path in prim_paths:
# Define a new Mesh primitive
prim_mesh = stage.GetPrimAtPath(prim_path)
split_mesh = UsdGeom.Mesh(prim_mesh)
faceVertexIndices1 = split_mesh.GetFaceVertexIndicesAttr().Get()
faceVertexCounts1 = split_mesh.GetFaceVertexCountsAttr().Get()
points1 = split_mesh.GetPointsAttr().Get()
meshshape = np.unique(faceVertexCounts1,return_index= False)
if len(meshshape) > 1:
print('non-uniform mesh')
return
faceVertexIndices1 = np.array(faceVertexIndices1)
faceVertexIndices1 = np.asarray(faceVertexIndices1.reshape(-1,meshshape[0]))
def splitmeshes(groups):
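                # Merge face groups that share at least one vertex index; recurse until no
                # further merges occur, at which point each group is a connected sub-mesh.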
meshes = []
sorted_ref = []
flag = 0
for group in groups:
if not meshes:
meshes.append(group)
else:
notinlist = True
for i in range(0,len(meshes)):
if len(np.intersect1d(group.reshape(1,-1),meshes[i].reshape(1,-1))) > 0:
meshes[i] = np.concatenate([meshes[i],group])
notinlist = False
if notinlist:
meshes.append(group)
if len(meshes) == len(groups):
return [meshes, sorted_ref]
else:
return(splitmeshes(meshes))
meshsoutput , referceoutput = splitmeshes(faceVertexIndices1)
if len(meshsoutput) == 1:
                print('Connected mesh: no action taken')
return
i = 1
for meshoutput in meshsoutput:
uniqIndi = np.unique(meshoutput.reshape(1,-1),return_index=False)
uniqIndi = np.sort(uniqIndi)
points1 = np.asarray(points1)
meshpoints = np.array(points1[uniqIndi])
connexMeshIndi_adjsuted = np.searchsorted(uniqIndi,meshoutput.reshape(1,-1))[0]
counts = np.zeros(int(len(meshoutput)/3)) + meshshape
mesh = UsdGeom.Mesh.Define(stage, f'/NewMesh_{i}')
# Define the points of the mesh
mesh.CreatePointsAttr(meshpoints)
# Define the faces of the mesh
mesh.CreateFaceVertexCountsAttr(counts)
mesh.CreateFaceVertexIndicesAttr(connexMeshIndi_adjsuted)
i = i+1
| 2,973 | Python | 37.623376 | 100 | 0.51665 |
Prudhvi-Tummala/PrudhviTummala.MeshEditor.MeshTools/exts/MeshTools/MeshTools/extension.py | import omni.ext
import omni.ui as ui
from pxr import Usd, UsdGeom, Gf
import numpy as np
import omni.kit.app
import os
from .MeshTools_Window import MeshToolsWindow
ext_path = os.path.dirname(os.path.realpath(__file__))
# Functions and vars are available to other extensions as usual in python: `example.python_ext.some_public_function(x)`
def some_public_function(x: int):
print("[MeshTools] some_public_function was called with x: ", x)
return x ** x
# Any class derived from `omni.ext.IExt` in top level module (defined in `python.modules` of `extension.toml`) will be
# instantiated when extension gets enabled and `on_startup(ext_id)` will be called. Later when extension gets disabled
# on_shutdown() is called.
class MeshtoolsExtension(omni.ext.IExt):
# ext_id is current extension id. It can be used with extension manager to query additional information, like where
# this extension is located on filesystem.
def on_startup(self, ext_id):
print("[MeshTools] MeshTools startup")
self.MeshToolsWindow = MeshToolsWindow()
self.MeshToolsWindow.on_startup(ext_id)
def on_shutdown(self):
print("[MeshTools] MeshTools shutdown")
| 1,229 | Python | 28.999999 | 119 | 0.711147 |
Prudhvi-Tummala/PrudhviTummala.MeshEditor.MeshTools/exts/MeshTools/MeshTools/__init__.py | from .extension import *
| 25 | Python | 11.999994 | 24 | 0.76 |
Prudhvi-Tummala/PrudhviTummala.MeshEditor.MeshTools/exts/MeshTools/MeshTools/MergeMesh.py | import omni.ext
import omni.ui as ui
from pxr import Usd, UsdGeom, Gf
import numpy as np
import omni.kit.app
import omni.kit.commands
import os
class mergeMesh:
def merge_mesh(self):
stage = omni.usd.get_context().get_stage()
prim_paths = omni.usd.get_context().get_selection().get_selected_prim_paths()
faceVertexCounts2 = np.array([])
faceVertexIndices2 = np.array([])
points2 = np.array([])
for prim_path in prim_paths:
# Define a new Mesh primitive
prim_mesh = stage.GetPrimAtPath(prim_path)
split_mesh = UsdGeom.Mesh(prim_mesh)
faceVertexIndices1 = np.array(split_mesh.GetFaceVertexIndicesAttr().Get())
faceVertexCounts1 = np.array(split_mesh.GetFaceVertexCountsAttr().Get())
points1 = np.array(split_mesh.GetPointsAttr().Get())
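            # Shift this mesh's indices past the largest index merged so far, so they reference
            # the correct rows of the combined points array.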
if not len(faceVertexIndices2) == 0:
faceVertexIndices1 =faceVertexIndices1 +np.max(faceVertexIndices2) +1
faceVertexIndices2 = np.append(faceVertexIndices2,faceVertexIndices1)
points2 = np.append(points2,points1)
faceVertexCounts2 = np.append(faceVertexCounts2,faceVertexCounts1)
combinedMesh = UsdGeom.Mesh.Define(stage,'/combinedMesh')
combinedMesh.CreatePointsAttr(points2)
combinedMesh.CreateFaceVertexIndicesAttr(faceVertexIndices2)
combinedMesh.CreateFaceVertexCountsAttr(faceVertexCounts2)
| 1,484 | Python | 39.135134 | 86 | 0.66779 |
Prudhvi-Tummala/PrudhviTummala.MeshEditor.MeshTools/exts/MeshTools/MeshTools/MeshTools_Window.py | import omni.ext
import omni.ui as ui
from pxr import Usd, UsdGeom, Gf
import numpy as np
import omni.kit.app
import os
ext_path = os.path.dirname(os.path.realpath(__file__))
from .SplitTool import Split_Tools
from .SliceTool import Slice_Tools
from .MergeMesh import mergeMesh
Split_Tools = Split_Tools()
Slice_Tools = Slice_Tools()
mergeMesh = mergeMesh()
class MeshToolsWindow:
def on_startup(self,ext_id):
def button1_clicked():
Split_Tools.splitMesh()
def button2_clicked():
plane = sliderInt.model.as_int
planeValue = fieldFloat.model.as_int
Slice_Tools.sliceMesh(plane,planeValue)
Slice_Tools.delPlane()
def button_merge():
mergeMesh.merge_mesh()
def button3_clicked():
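            # 'Start': fit the slider range to the selected mesh's bounding box and spawn the
            # temporary preview plane.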
minBBox, maxBBox = Slice_Tools.boundingbox()
plane = sliderInt.model.as_int
sliderFloat.min = minBBox[plane]
sliderFloat.max = maxBBox[plane]
planeValue = fieldFloat.model.as_int
Slice_Tools.createplane(plane,planeValue,minBBox,maxBBox)
def planechange():
plane = sliderInt.model.as_int
planeValue = fieldFloat.model.as_int
Slice_Tools.modifyplane(plane,planeValue)
def deleteplane():
Slice_Tools.delPlane()
self._window = ui.Window("My Window", width=300, height=500)
with self._window.frame:
with ui.VStack(height = 0):
with ui.CollapsableFrame("split and merge mesh"):
with ui.HStack():
ui.Button("Split Mesh",image_url =f'{ext_path}/Imgs/Split_Meshes.PNG', image_height = 140 ,width = 140, clicked_fn = button1_clicked)
ui.Button("Merge Mesh",image_url =f'{ext_path}/Imgs/Merge.PNG', image_height = 140 ,width = 140 , clicked_fn = button_merge)
with ui.CollapsableFrame("Slice mesh with plane"):
with ui.VStack(height = 0):
ui.Label("| YZ : 0 | ZX : 1 | XY : 2 |")
with ui.HStack():
sliderInt = ui.IntSlider(min=0,max = 2)
ui.Button("Start",width = 80, height = 30 , clicked_fn = button3_clicked )
sliderInt.tooltip = "For slicing plane give 0 for YZ plane, 1 for ZX, 2 for XY"
with ui.HStack():
fieldFloat = ui.StringField(width = 50)
sliderFloat = ui.FloatSlider(min=0,max=50)
sliderFloat.model = fieldFloat.model
sliderFloat.model.add_value_changed_fn(lambda m:planechange())
with ui.HStack():
ui.Button("Slice Mesh",image_url =f'{ext_path}/Imgs/Slicing_Mesh.PNG', image_height = 140 ,width = 140,clicked_fn = button2_clicked)
ui.Button('Delete Plane',image_url =f'{ext_path}/Imgs/Delete.PNG', image_height = 140 ,width = 140,clicked_fn = deleteplane)
| 3,079 | Python | 44.970149 | 161 | 0.566742 |
Prudhvi-Tummala/PrudhviTummala.MeshEditor.MeshTools/exts/MeshTools/MeshTools/tests/__init__.py | from .test_hello_world import * | 31 | Python | 30.999969 | 31 | 0.774194 |
Prudhvi-Tummala/PrudhviTummala.MeshEditor.MeshTools/exts/MeshTools/MeshTools/tests/test_hello_world.py | # NOTE:
# omni.kit.test - std python's unittest module with additional wrapping to add support for async/await tests
# For most things refer to unittest docs: https://docs.python.org/3/library/unittest.html
import omni.kit.test
# Extension for writing UI tests (simulate UI interaction)
import omni.kit.ui_test as ui_test
# Import extension python module we are testing with absolute import path, as if we are external user (other extension)
import MeshTools
# Having a test class derived from omni.kit.test.AsyncTestCase declared on the root of module will make it auto-discoverable by omni.kit.test
class Test(omni.kit.test.AsyncTestCase):
# Before running each test
async def setUp(self):
pass
# After running each test
async def tearDown(self):
pass
# Actual test, notice it is "async" function, so "await" can be used if needed
async def test_hello_public_function(self):
result = MeshTools.some_public_function(4)
self.assertEqual(result, 256)
async def test_window_button(self):
# Find a label in our window
label = ui_test.find("My Window//Frame/**/Label[*]")
# Find buttons in our window
add_button = ui_test.find("My Window//Frame/**/Button[*].text=='Add'")
reset_button = ui_test.find("My Window//Frame/**/Button[*].text=='Reset'")
# Click reset button
await reset_button.click()
self.assertEqual(label.widget.text, "empty")
await add_button.click()
self.assertEqual(label.widget.text, "count: 1")
await add_button.click()
self.assertEqual(label.widget.text, "count: 2")
| 1,654 | Python | 34.212765 | 142 | 0.680774 |
Prudhvi-Tummala/PrudhviTummala.MeshEditor.MeshTools/exts/MeshTools/config/extension.toml | [package]
# Semantic Versioning is used: https://semver.org/
version = "1.0.0"
# Lists people or organizations that are considered the "authors" of the package.
authors = ["Prudhvi Tummala"]
# The title and description fields are primarily for displaying extension info in UI
title = "MeshTools"
description="A simple python extension example to use as a starting point for your extensions."
# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"
# URL of the extension source repository.
repository = ""
# One of categories for UI.
category = "Mesh Editor"
# Keywords for the extension
keywords = ["Mesh", "Split","Slice","Merge"]
# Location of change log file in target (final) folder of extension, relative to the root.
# More info on writing changelog: https://keepachangelog.com/en/1.0.0/
changelog="docs/CHANGELOG.md"
# Preview image and icon. Folder named "data" automatically goes in git lfs (see .gitattributes file).
# Preview image is shown in "Overview" of Extensions window. Screenshot of an extension might be a good preview image.
preview_image = "data/preview.png"
# Icon is shown in Extensions window, it is recommended to be square, of size 256x256.
icon = "data/icon.png"
# Use omni.ui to build simple UI
[dependencies]
"omni.kit.uiapp" = {}
# Main python module this extension provides, it will be publicly available as "import MeshTools".
[[python.module]]
name = "MeshTools"
[[test]]
# Extra dependencies only to be used during test run
dependencies = [
"omni.kit.ui_test" # UI testing extension
]
| 1,581 | TOML | 31.958333 | 118 | 0.743833 |
Prudhvi-Tummala/PrudhviTummala.MeshEditor.MeshTools/exts/MeshTools/docs/CHANGELOG.md | # Changelog
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [1.0.0] - 2021-04-26
- Initial version of extension UI template with a window
| 178 | Markdown | 18.888887 | 80 | 0.702247 |
Prudhvi-Tummala/PrudhviTummala.MeshEditor.MeshTools/exts/MeshTools/docs/README.md | # Python Extension [MeshTools]
The MeshTools extension contains tools to split, slice, and merge meshes.
Note: data such as normals and other mesh properties will be lost in the output. If your use case requires that data to be preserved, you will have to modify the code.
| 256 | Markdown | 41.833326 | 154 | 0.800781 |
Prudhvi-Tummala/PrudhviTummala.MeshEditor.MeshTools/exts/MeshTools/docs/index.rst | MeshTools
#############################
Example of Python only extension
.. toctree::
:maxdepth: 1
README
CHANGELOG
.. automodule::"MeshTools"
:platform: Windows-x86_64, Linux-x86_64
:members:
:undoc-members:
:show-inheritance:
:imported-members:
:exclude-members: contextmanager
| 319 | reStructuredText | 14.238095 | 43 | 0.60815 |
aivxx/hello.world.ext/README.md | # Extension Project Template
This project was automatically generated.
- `app` - It is a folder link to the location of your *Omniverse Kit* based app.
- `exts` - It is a folder where you can add new extensions. It was automatically added to extension search path. (Extension Manager -> Gear Icon -> Extension Search Path).
Open this folder using Visual Studio Code. It will suggest that you install a few extensions that will make the Python experience better.
Look for the "omni.hello.world" extension in the Extension Manager and enable it. Try applying changes to any Python file; it will hot-reload and you can observe the results immediately.
Alternatively, you can launch your app from console with this folder added to search path and your extension enabled, e.g.:
```
> app\omni.code.bat --ext-folder exts --enable omni.hello.world
```
# App Link Setup
If the `app` folder link doesn't exist or is broken, it can be created again. For a better developer experience, it is recommended to create a folder link named `app` to the *Omniverse Kit* app installed from *Omniverse Launcher*. A convenience script to do this is included.
Run:
```
> link_app.bat
```
If successful you should see `app` folder link in the root of this repo.
If multiple Omniverse apps are installed, the script will select the recommended one. Or you can explicitly pass an app:
```
> link_app.bat --app create
```
You can also just pass a path to create a link to:
```
> link_app.bat --path "C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4"
```
# Sharing Your Extensions
This folder is ready to be pushed to any git repository. Once pushed, a direct link to the git repository can be added to the *Omniverse Kit* extension search paths.
A link might look like this: `git://github.com/[user]/[your_repo].git?branch=main&dir=exts`
Notice that `exts` is the repo subfolder containing the extensions. More information can be found in the "Git URL as Extension Search Paths" section of the developer manual.
To add a link to your *Omniverse Kit*-based app, go into: Extension Manager -> Gear Icon -> Extension Search Path
| 2,037 | Markdown | 37.452829 | 258 | 0.756505 |
aivxx/hello.world.ext/tools/scripts/link_app.py | import os
import argparse
import sys
import json
import packmanapi
import urllib3
def find_omniverse_apps():
http = urllib3.PoolManager()
try:
r = http.request("GET", "http://127.0.0.1:33480/components")
except Exception as e:
print(f"Failed retrieving apps from an Omniverse Launcher, maybe it is not installed?\nError: {e}")
sys.exit(1)
apps = {}
for x in json.loads(r.data.decode("utf-8")):
latest = x.get("installedVersions", {}).get("latest", "")
if latest:
for s in x.get("settings", []):
if s.get("version", "") == latest:
root = s.get("launch", {}).get("root", "")
apps[x["slug"]] = (x["name"], root)
break
return apps
def create_link(src, dst):
print(f"Creating a link '{src}' -> '{dst}'")
packmanapi.link(src, dst)
APP_PRIORITIES = ["code", "create", "view"]
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Create folder link to Kit App installed from Omniverse Launcher")
parser.add_argument(
"--path",
help="Path to Kit App installed from Omniverse Launcher, e.g.: 'C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4'",
required=False,
)
parser.add_argument(
"--app", help="Name of Kit App installed from Omniverse Launcher, e.g.: 'code', 'create'", required=False
)
args = parser.parse_args()
path = args.path
if not path:
print("Path is not specified, looking for Omniverse Apps...")
apps = find_omniverse_apps()
if len(apps) == 0:
print(
"Can't find any Omniverse Apps. Use Omniverse Launcher to install one. 'Code' is the recommended app for developers."
)
sys.exit(0)
print("\nFound following Omniverse Apps:")
for i, slug in enumerate(apps):
name, root = apps[slug]
print(f"{i}: {name} ({slug}) at: '{root}'")
if args.app:
selected_app = args.app.lower()
if selected_app not in apps:
choices = ", ".join(apps.keys())
print(f"Passed app: '{selected_app}' is not found. Specify one of the following found Apps: {choices}")
sys.exit(0)
else:
selected_app = next((x for x in APP_PRIORITIES if x in apps), None)
if not selected_app:
selected_app = next(iter(apps))
print(f"\nSelected app: {selected_app}")
_, path = apps[selected_app]
if not os.path.exists(path):
print(f"Provided path doesn't exist: {path}")
else:
SCRIPT_ROOT = os.path.dirname(os.path.realpath(__file__))
create_link(f"{SCRIPT_ROOT}/../../app", path)
print("Success!")
| 2,813 | Python | 32.5 | 133 | 0.562389 |
aivxx/hello.world.ext/tools/packman/config.packman.xml | <config remotes="cloudfront">
<remote2 name="cloudfront">
<transport actions="download" protocol="https" packageLocation="d4i3qtqj3r0z5.cloudfront.net/${name}@${version}" />
</remote2>
</config>
| 211 | XML | 34.333328 | 123 | 0.691943 |
aivxx/hello.world.ext/tools/packman/bootstrap/install_package.py | # Copyright 2019 NVIDIA CORPORATION
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import zipfile
import tempfile
import sys
import shutil
__author__ = "hfannar"
logging.basicConfig(level=logging.WARNING, format="%(message)s")
logger = logging.getLogger("install_package")
class TemporaryDirectory:
def __init__(self):
self.path = None
def __enter__(self):
self.path = tempfile.mkdtemp()
return self.path
def __exit__(self, type, value, traceback):
# Remove temporary data created
shutil.rmtree(self.path)
def install_package(package_src_path, package_dst_path):
with zipfile.ZipFile(
package_src_path, allowZip64=True
) as zip_file, TemporaryDirectory() as temp_dir:
zip_file.extractall(temp_dir)
# Recursively copy (temp_dir will be automatically cleaned up on exit)
try:
# Recursive copy is needed because both package name and version folder could be missing in
# target directory:
shutil.copytree(temp_dir, package_dst_path)
except OSError as exc:
logger.warning(
"Directory %s already present, packaged installation aborted" % package_dst_path
)
else:
logger.info("Package successfully installed to %s" % package_dst_path)
install_package(sys.argv[1], sys.argv[2])
| 1,888 | Python | 31.568965 | 103 | 0.68697 |
aivxx/hello.world.ext/exts/omni.hello.world/config/extension.toml | [package]
# Semantic Versionning is used: https://semver.org/
version = "1.0.0"
# The title and description fields are primarily for displaying extension info in UI
title = "Simple UI Extension Template"
description="The simplest python extension example. Use it as a starting point for your extensions."
# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"
# URL of the extension source repository.
repository = ""
# One of categories for UI.
category = "Example"
# Keywords for the extension
keywords = ["kit", "example"]
# Use omni.ui to build simple UI
[dependencies]
"omni.kit.uiapp" = {}
# Main python module this extension provides, it will be publicly available as "import omni.hello.world".
[[python.module]]
name = "omni.hello.world"
| 799 | TOML | 26.586206 | 105 | 0.740926 |
aivxx/hello.world.ext/exts/omni.hello.world/omni/hello/world/extension.py | import omni.ext
import omni.ui as ui
# Any class derived from `omni.ext.IExt` in top level module (defined in `python.modules` of `extension.toml`) will be
# instantiated when extension gets enabled and `on_startup(ext_id)` will be called. Later when extension gets disabled
# on_shutdown() is called.
class MyExtension(omni.ext.IExt):
# ext_id is current extension id. It can be used with extension manager to query additional information, like where
# this extension is located on filesystem.
def on_startup(self, ext_id):
print("[omni.hello.world] MyExtension startup")
self._window = ui.Window("Hello World Ext", width=300, height=300)
with self._window.frame:
with ui.VStack():
with ui.ZStack():
with ui.Placer(offset_x=1, offset_y=1):
ui.Label("Hello World", height=50, style={"font_size":24})
ui.Button("This is a button", style= {"color":0xFF00AA00})
ui.Separator(height=5)
with ui.HStack(height=5):
ui.Button("Another Button")
ui.Button("A button here")
ui.Button("One more")
def on_click():
print("clicked!")
ui.Button("Click Me", clicked_fn=lambda: on_click())
ui.IntSlider(height=30).model.set_value(10)
def on_shutdown(self):
print("[omni.hello.world] MyExtension shutdown")
| 1,589 | Python | 34.333333 | 119 | 0.555695 |
aivxx/hello.world.ext/exts/omni.hello.world/omni/hello/world/__init__.py | from .extension import *
| 25 | Python | 11.999994 | 24 | 0.76 |
fnuabhimanyu8713/orbit/pyproject.toml | [tool.isort]
py_version = 310
line_length = 120
group_by_package = true
# Files to skip
skip_glob = ["docs/*", "logs/*", "_isaac_sim/*", ".vscode/*"]
# Order of imports
sections = [
"FUTURE",
"STDLIB",
"THIRDPARTY",
"ASSETS_FIRSTPARTY",
"FIRSTPARTY",
"EXTRA_FIRSTPARTY",
"LOCALFOLDER",
]
# Extra standard libraries considered as part of python (permissive licenses)
extra_standard_library = [
"numpy",
"h5py",
"open3d",
"torch",
"tensordict",
"bpy",
"matplotlib",
"gymnasium",
"gym",
"scipy",
"hid",
"yaml",
"prettytable",
"toml",
"trimesh",
"tqdm",
]
# Imports from Isaac Sim and Omniverse
known_third_party = [
"omni.isaac.core",
"omni.replicator.isaac",
"omni.replicator.core",
"pxr",
"omni.kit.*",
"warp",
"carb",
]
# Imports from this repository
known_first_party = "omni.isaac.orbit"
known_assets_firstparty = "omni.isaac.orbit_assets"
known_extra_firstparty = [
"omni.isaac.orbit_tasks"
]
# Imports from the local folder
known_local_folder = "config"
[tool.pyright]
include = ["source/extensions", "source/standalone"]
exclude = [
"**/__pycache__",
"**/_isaac_sim",
"**/docs",
"**/logs",
".git",
".vscode",
]
typeCheckingMode = "basic"
pythonVersion = "3.10"
pythonPlatform = "Linux"
enableTypeIgnoreComments = true
# This is required as the CI pre-commit does not download the modules (e.g. numpy, torch, prettytable)
# Therefore, we have to ignore missing imports
reportMissingImports = "none"
# This is required to ignore for type checks of modules with stubs missing.
reportMissingModuleSource = "none" # -> most common: prettytable in mdp managers
reportGeneralTypeIssues = "none" # -> raises 218 errors (usage of literal MISSING in dataclasses)
reportOptionalMemberAccess = "warning" # -> raises 8 errors
reportPrivateUsage = "warning"
[tool.codespell]
skip = '*.usd,*.svg,*.png,_isaac_sim*,*.bib,*.css,*/_build'
quiet-level = 0
# the word list should always have words in lower case
ignore-words-list = "haa,slq,collapsable"
# todo: this is a hack to deal with incorrect spelling of "Environment" in the Isaac Sim grid world asset
exclude-file = "source/extensions/omni.isaac.orbit/omni/isaac/orbit/sim/spawners/from_files/from_files.py"
| 2,314 | TOML | 23.627659 | 106 | 0.665946 |
fnuabhimanyu8713/orbit/CONTRIBUTING.md | # Contribution Guidelines
Orbit is a community maintained project. We wholeheartedly welcome contributions to the project to make
the framework more mature and useful for everyone. These may happen in forms of bug reports, feature requests,
design proposals and more.
For general information on how to contribute see
<https://isaac-orbit.github.io/orbit/source/refs/contributing.html>.
| 388 | Markdown | 42.222218 | 110 | 0.81701 |
fnuabhimanyu8713/orbit/CONTRIBUTORS.md | # Orbit Developers and Contributors
This is the official list of Orbit Project developers and contributors.
To see the full list of contributors, please check the revision history in the source control.
Guidelines for modifications:
* Please keep the lists sorted alphabetically.
* Names should be added to this file as: *individual names* or *organizations*.
* E-mail addresses are tracked elsewhere to avoid spam.
## Developers
* Boston Dynamics AI Institute, Inc.
* ETH Zurich
* NVIDIA Corporation & Affiliates
* University of Toronto
---
* David Hoeller
* Farbod Farshidian
* Hunter Hansen
* James Smith
* James Tigue
* **Mayank Mittal** (maintainer)
* Nikita Rudin
* Pascal Roth
## Contributors
* Anton Bjørndahl Mortensen
* Alice Zhou
* Andrej Orsula
* Antonio Serrano-Muñoz
* Arjun Bhardwaj
* Calvin Yu
* Chenyu Yang
* Jia Lin Yuan
* Jingzhou Liu
* Lorenz Wellhausen
* Muhong Guo
* Kourosh Darvish
* Özhan Özen
* Qinxi Yu
* René Zurbrügg
* Ritvik Singh
* Rosario Scalise
* Shafeef Omar
* Vladimir Fokow
## Acknowledgements
* Ajay Mandlekar
* Animesh Garg
* Buck Babich
* Gavriel State
* Hammad Mazhar
* Marco Hutter
* Yunrong Guo
| 1,149 | Markdown | 17.548387 | 94 | 0.75631 |
fnuabhimanyu8713/orbit/README.md | 
---
# Orbit
[](https://docs.omniverse.nvidia.com/isaacsim/latest/overview.html)
[](https://docs.python.org/3/whatsnew/3.10.html)
[](https://releases.ubuntu.com/20.04/)
[](https://pre-commit.com/)
[](https://isaac-orbit.github.io/orbit)
[](https://opensource.org/licenses/BSD-3-Clause)
<!-- TODO: Replace docs status with workflow badge? Link: https://github.com/isaac-orbit/orbit/actions/workflows/docs.yaml/badge.svg -->
**Orbit** is a unified and modular framework for robot learning that aims to simplify common workflows
in robotics research (such as RL, learning from demonstrations, and motion planning). It is built upon
[NVIDIA Isaac Sim](https://docs.omniverse.nvidia.com/isaacsim/latest/overview.html) to leverage the latest
simulation capabilities for photo-realistic scenes and fast and accurate simulation.
Please refer to our [documentation page](https://isaac-orbit.github.io/orbit) to learn more about the
installation steps, features, tutorials, and how to set up your project with Orbit.
## Announcements
* [17.04.2024] [**v0.3.0**](https://github.com/NVIDIA-Omniverse/orbit/releases/tag/v0.3.0):
Several improvements and bug fixes to the framework. Includes cabinet opening and dexterous manipulation environments,
terrain-aware patch sampling, and animation recording.
* [22.12.2023] [**v0.2.0**](https://github.com/NVIDIA-Omniverse/orbit/releases/tag/v0.2.0):
Significant breaking updates to enhance the modularity and user-friendliness of the framework. Also includes
procedural terrain generation, warp-based custom ray-casters, and legged-locomotion environments.
## Contributing to Orbit
We wholeheartedly welcome contributions from the community to make this framework mature and useful for everyone.
These may happen as bug reports, feature requests, or code contributions. For details, please check our
[contribution guidelines](https://isaac-orbit.github.io/orbit/source/refs/contributing.html).
## Troubleshooting
Please see the [troubleshooting](https://isaac-orbit.github.io/orbit/source/refs/troubleshooting.html) section for
common fixes or [submit an issue](https://github.com/NVIDIA-Omniverse/orbit/issues).
For issues related to Isaac Sim, we recommend checking its [documentation](https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/overview.html)
or opening a question on its [forums](https://forums.developer.nvidia.com/c/agx-autonomous-machines/isaac/67).
## Support
* Please use GitHub [Discussions](https://github.com/NVIDIA-Omniverse/Orbit/discussions) for discussing ideas, asking questions, and requesting new features.
* GitHub [Issues](https://github.com/NVIDIA-Omniverse/orbit/issues) should only be used to track executable pieces of work with a definite scope and a clear deliverable, such as bug fixes, documentation issues, new features, or general updates.
## Acknowledgement
NVIDIA Isaac Sim is freely available under an [individual license](https://www.nvidia.com/en-us/omniverse/download/). For more information about its license terms, please check [here](https://docs.omniverse.nvidia.com/app_isaacsim/common/NVIDIA_Omniverse_License_Agreement.html#software-support-supplement).
Orbit framework is released under [BSD-3 License](LICENSE). The license files of its dependencies and assets are present in the [`docs/licenses`](docs/licenses) directory.
## Citing
If you use this framework in your work, please cite [this paper](https://arxiv.org/abs/2301.04195):
```text
@article{mittal2023orbit,
author={Mittal, Mayank and Yu, Calvin and Yu, Qinxi and Liu, Jingzhou and Rudin, Nikita and Hoeller, David and Yuan, Jia Lin and Singh, Ritvik and Guo, Yunrong and Mazhar, Hammad and Mandlekar, Ajay and Babich, Buck and State, Gavriel and Hutter, Marco and Garg, Animesh},
journal={IEEE Robotics and Automation Letters},
title={Orbit: A Unified Simulation Framework for Interactive Robot Learning Environments},
year={2023},
volume={8},
number={6},
pages={3740-3747},
doi={10.1109/LRA.2023.3270034}
}
```
| 4,547 | Markdown | 59.639999 | 304 | 0.773257 |
fnuabhimanyu8713/orbit/tools/install_deps.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
A script with various methods of installing dependencies
defined in an extension.toml
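
Example usage (hypothetical extension path; apt/rosdep installs may require elevated privileges):

.. code-block:: bash

    ./orbit.sh -p tools/install_deps.py all source/extensions/omni.isaac.orbit_tasks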
"""
import argparse
import os
import shutil
import sys
import toml
from subprocess import SubprocessError, run
# add argparse arguments
parser = argparse.ArgumentParser(description="Utility to install dependencies based on an extension.toml")
parser.add_argument("type", type=str, choices=["all", "apt", "rosdep"], help="The type of packages to install")
parser.add_argument("path", type=str, help="The path to the extension which will have its deps installed")
def install_apt_packages(path):
"""
A function which attempts to install apt packages for Orbit extensions.
It looks in {extension_root}/config/extension.toml for [orbit_settings][apt_deps]
and then attempts to install them. Exits on failure to stop the build process
from continuing despite missing dependencies.
Args:
path: A path to the extension root
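
    Example of the ``config/extension.toml`` entries consumed here (the package names
    are hypothetical):

    .. code-block:: toml

        [orbit_settings]
        apt_deps = ["cmake", "libyaml-cpp-dev"]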
"""
try:
if shutil.which("apt"):
with open(f"{path}/config/extension.toml") as fd:
ext_toml = toml.load(fd)
if "orbit_settings" in ext_toml and "apt_deps" in ext_toml["orbit_settings"]:
deps = ext_toml["orbit_settings"]["apt_deps"]
print(f"[INFO] Installing the following apt packages: {deps}")
run_and_print(["apt-get", "update"])
run_and_print(["apt-get", "install", "-y"] + deps)
else:
print("[INFO] No apt packages to install")
else:
raise RuntimeError("Exiting because 'apt' is not a known command")
except SubprocessError as e:
print(f"[ERROR]: {str(e.stderr, encoding='utf-8')}")
sys.exit(1)
except Exception as e:
print(f"[ERROR]: {e}")
sys.exit(1)
def install_rosdep_packages(path):
"""
A function which attempts to install rosdep packages for Orbit extensions.
It looks in {extension_root}/config/extension.toml for [orbit_settings][ros_ws]
and then attempts to install all rosdeps under that workspace.
Exits on failure to stop the build process from continuing despite missing dependencies.
Args:
path: A path to the extension root
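
    Example of the ``config/extension.toml`` entries consumed here (the workspace path
    is hypothetical):

    .. code-block:: toml

        [orbit_settings]
        ros_ws = "data/ros_ws"

    rosdep is then resolved against ``{extension_root}/{ros_ws}/src`` for the humble distro.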
"""
try:
if shutil.which("rosdep"):
with open(f"{path}/config/extension.toml") as fd:
ext_toml = toml.load(fd)
if "orbit_settings" in ext_toml and "ros_ws" in ext_toml["orbit_settings"]:
ws_path = ext_toml["orbit_settings"]["ros_ws"]
if not os.path.exists("/etc/ros/rosdep/sources.list.d/20-default.list"):
run_and_print(["rosdep", "init"])
run_and_print(["rosdep", "update", "--rosdistro=humble"])
run_and_print([
"rosdep",
"install",
"--from-paths",
f"{path}/{ws_path}/src",
"--ignore-src",
"-y",
"--rosdistro=humble",
])
else:
print("[INFO] No rosdep packages to install")
else:
raise RuntimeError("Exiting because 'rosdep' is not a known command")
except SubprocessError as e:
print(f"[ERROR]: {str(e.stderr, encoding='utf-8')}")
sys.exit(1)
except Exception as e:
print(f"[ERROR]: {e}")
sys.exit(1)
def run_and_print(args):
"""
Runs a subprocess.run(args=args, capture_output=True, check=True),
and prints the output
"""
completed_process = run(args=args, capture_output=True, check=True)
print(f"{str(completed_process.stdout, encoding='utf-8')}")
def main():
args = parser.parse_args()
if args.type == "all":
install_apt_packages(args.path)
install_rosdep_packages(args.path)
elif args.type == "apt":
install_apt_packages(args.path)
elif args.type == "rosdep":
install_rosdep_packages(args.path)
else:
print(f"[ERROR] '{args.type}' type dependencies not installable")
sys.exit(1)
if __name__ == "__main__":
main()
| 4,357 | Python | 35.316666 | 111 | 0.586413 |
fnuabhimanyu8713/orbit/tools/tests_to_skip.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
# The following tests are skipped by run_tests.py
TESTS_TO_SKIP = [
# orbit
"test_argparser_launch.py", # app.close issue
"test_env_var_launch.py", # app.close issue
"test_kwarg_launch.py", # app.close issue
"test_differential_ik.py", # Failing
# orbit_tasks
"test_data_collector.py", # Failing
"test_record_video.py", # Failing
]
| 492 | Python | 27.999998 | 56 | 0.666667 |
fnuabhimanyu8713/orbit/tools/run_all_tests.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""A runner script for all the tests within source directory.
.. code-block:: bash
./orbit.sh -p tools/run_all_tests.py
# for dry run
./orbit.sh -p tools/run_all_tests.py --discover_only
# for quiet run
./orbit.sh -p tools/run_all_tests.py --quiet
# for increasing timeout (default is 600 seconds)
./orbit.sh -p tools/run_all_tests.py --timeout 1000
"""
import argparse
import logging
import os
import subprocess
import sys
import time
from datetime import datetime
from pathlib import Path
from prettytable import PrettyTable
# Tests to skip
from tests_to_skip import TESTS_TO_SKIP
ORBIT_PATH = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
"""Path to the root directory of Orbit repository."""
def parse_args() -> argparse.Namespace:
"""Parse command line arguments."""
parser = argparse.ArgumentParser(description="Run all tests under current directory.")
# add arguments
parser.add_argument(
"--skip_tests",
default="",
help="Space separated list of tests to skip in addition to those in tests_to_skip.py.",
type=str,
nargs="*",
)
# configure default test directory (source directory)
default_test_dir = os.path.join(ORBIT_PATH, "source")
parser.add_argument(
"--test_dir", type=str, default=default_test_dir, help="Path to the directory containing the tests."
)
# configure default logging path based on time stamp
log_file_name = datetime.now().strftime("%Y-%m-%d_%H-%M-%S") + ".log"
default_log_path = os.path.join(ORBIT_PATH, "logs", "test_results", log_file_name)
parser.add_argument(
"--log_path", type=str, default=default_log_path, help="Path to the log file to store the results in."
)
parser.add_argument("--discover_only", action="store_true", help="Only discover and print tests, don't run them.")
parser.add_argument("--quiet", action="store_true", help="Don't print to console, only log to file.")
parser.add_argument("--timeout", type=int, default=600, help="Timeout for each test in seconds.")
# parse arguments
args = parser.parse_args()
return args
def test_all(
test_dir: str,
tests_to_skip: list[str],
log_path: str,
timeout: float = 600.0,
discover_only: bool = False,
quiet: bool = False,
) -> bool:
"""Run all tests under the given directory.
Args:
test_dir: Path to the directory containing the tests.
tests_to_skip: List of tests to skip.
log_path: Path to the log file to store the results in.
timeout: Timeout for each test in seconds. Defaults to 600 seconds (10 minutes).
discover_only: If True, only discover and print the tests without running them. Defaults to False.
quiet: If False, print the output of the tests to the terminal console (in addition to the log file).
Defaults to False.
Returns:
True if all un-skipped tests pass or `discover_only` is True. Otherwise, False.
Raises:
ValueError: If any test to skip is not found under the given `test_dir`.
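
    Example (hypothetical log path; the other values mirror the defaults used by the command-line parser):

    .. code-block:: python

        test_all(
            test_dir=os.path.join(ORBIT_PATH, "source"),
            tests_to_skip=[],
            log_path="/tmp/test_results.log",
            discover_only=True,
        )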
"""
# Create the log directory if it doesn't exist
os.makedirs(os.path.dirname(log_path), exist_ok=True)
# Add file handler to log to file
logging_handlers = [logging.FileHandler(log_path)]
# We also want to print to console
if not quiet:
logging_handlers.append(logging.StreamHandler())
# Set up logger
logging.basicConfig(level=logging.INFO, format="%(message)s", handlers=logging_handlers)
# Discover all tests under current directory
all_test_paths = [str(path) for path in Path(test_dir).resolve().rglob("*test_*.py")]
skipped_test_paths = []
test_paths = []
# Check that all tests to skip are actually in the tests
for test_to_skip in tests_to_skip:
for test_path in all_test_paths:
if test_to_skip in test_path:
break
else:
raise ValueError(f"Test to skip '{test_to_skip}' not found in tests.")
# Remove tests to skip from the list of tests to run
if len(tests_to_skip) != 0:
for test_path in all_test_paths:
if any([test_to_skip in test_path for test_to_skip in tests_to_skip]):
skipped_test_paths.append(test_path)
else:
test_paths.append(test_path)
else:
test_paths = all_test_paths
# Sort test paths so they're always in the same order
all_test_paths.sort()
test_paths.sort()
skipped_test_paths.sort()
# Print tests to be run
logging.info("\n" + "=" * 60 + "\n")
logging.info(f"The following {len(all_test_paths)} tests were found:")
for i, test_path in enumerate(all_test_paths):
logging.info(f"{i + 1:02d}: {test_path}")
logging.info("\n" + "=" * 60 + "\n")
logging.info(f"The following {len(skipped_test_paths)} tests are marked to be skipped:")
for i, test_path in enumerate(skipped_test_paths):
logging.info(f"{i + 1:02d}: {test_path}")
logging.info("\n" + "=" * 60 + "\n")
# Exit if only discovering tests
if discover_only:
return True
results = {}
# Run each script and store results
for test_path in test_paths:
results[test_path] = {}
before = time.time()
logging.info("\n" + "-" * 60 + "\n")
logging.info(f"[INFO] Running '{test_path}'\n")
try:
completed_process = subprocess.run(
[sys.executable, test_path], check=True, capture_output=True, timeout=timeout
)
except subprocess.TimeoutExpired as e:
logging.error(f"Timeout occurred: {e}")
result = "TIMEDOUT"
stdout = e.stdout
stderr = e.stderr
except subprocess.CalledProcessError as e:
# When check=True is passed to subprocess.run() above, CalledProcessError is raised if the process returns a
# non-zero exit code. The caveat is returncode is not correctly updated in this case, so we simply
# catch the exception and set this test as FAILED
result = "FAILED"
stdout = e.stdout
stderr = e.stderr
        except Exception as e:
            logging.error(f"Unexpected exception {e}. Please report this issue on the repository.")
            result = "FAILED"
            # generic exceptions do not carry captured output, unlike the subprocess errors above
            stdout = None
            stderr = None
else:
# Should only get here if the process ran successfully, e.g. no exceptions were raised
# but we still check the returncode just in case
result = "PASSED" if completed_process.returncode == 0 else "FAILED"
stdout = completed_process.stdout
stderr = completed_process.stderr
after = time.time()
time_elapsed = after - before
# Decode stdout and stderr and write to file and print to console if desired
stdout_str = stdout.decode("utf-8") if stdout is not None else ""
stderr_str = stderr.decode("utf-8") if stderr is not None else ""
# Write to log file
logging.info(stdout_str)
logging.info(stderr_str)
logging.info(f"[INFO] Time elapsed: {time_elapsed:.2f} s")
logging.info(f"[INFO] Result '{test_path}': {result}")
# Collect results
results[test_path]["time_elapsed"] = time_elapsed
results[test_path]["result"] = result
# Calculate the number and percentage of passing tests
num_tests = len(all_test_paths)
num_passing = len([test_path for test_path in test_paths if results[test_path]["result"] == "PASSED"])
num_failing = len([test_path for test_path in test_paths if results[test_path]["result"] == "FAILED"])
num_timing_out = len([test_path for test_path in test_paths if results[test_path]["result"] == "TIMEDOUT"])
num_skipped = len(skipped_test_paths)
if num_tests == 0:
passing_percentage = 100
else:
passing_percentage = (num_passing + num_skipped) / num_tests * 100
# Print summaries of test results
summary_str = "\n\n"
summary_str += "===================\n"
summary_str += "Test Result Summary\n"
summary_str += "===================\n"
summary_str += f"Total: {num_tests}\n"
summary_str += f"Passing: {num_passing}\n"
summary_str += f"Failing: {num_failing}\n"
summary_str += f"Skipped: {num_skipped}\n"
summary_str += f"Timing Out: {num_timing_out}\n"
summary_str += f"Passing Percentage: {passing_percentage:.2f}%\n"
# Print time elapsed in hours, minutes, seconds
total_time = sum([results[test_path]["time_elapsed"] for test_path in test_paths])
    summary_str += f"Total Time Elapsed: {int(total_time // 3600)}h"
    summary_str += f" {int(total_time // 60 % 60)}m"
    summary_str += f" {total_time % 60:.2f}s"
summary_str += "\n\n=======================\n"
summary_str += "Per Test Result Summary\n"
summary_str += "=======================\n"
# Construct table of results per test
per_test_result_table = PrettyTable(field_names=["Test Path", "Result", "Time (s)"])
per_test_result_table.align["Test Path"] = "l"
per_test_result_table.align["Time (s)"] = "r"
for test_path in test_paths:
per_test_result_table.add_row(
[test_path, results[test_path]["result"], f"{results[test_path]['time_elapsed']:0.2f}"]
)
for test_path in skipped_test_paths:
per_test_result_table.add_row([test_path, "SKIPPED", "N/A"])
summary_str += per_test_result_table.get_string()
# Print summary to console and log file
logging.info(summary_str)
# Only count failing and timing out tests towards failure
return num_failing + num_timing_out == 0
if __name__ == "__main__":
# parse command line arguments
args = parse_args()
# add tests to skip to the list of tests to skip
tests_to_skip = TESTS_TO_SKIP
tests_to_skip += args.skip_tests
# run all tests
test_success = test_all(
test_dir=args.test_dir,
tests_to_skip=tests_to_skip,
log_path=args.log_path,
timeout=args.timeout,
discover_only=args.discover_only,
quiet=args.quiet,
)
# update exit status based on all tests passing or not
if not test_success:
exit(1)
| 10,432 | Python | 36.394265 | 120 | 0.625863 |
fnuabhimanyu8713/orbit/source/extensions/omni.isaac.orbit_tasks/setup.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Installation script for the 'omni.isaac.orbit_tasks' python package."""
import itertools
import os
import toml
from setuptools import setup
# Obtain the extension data from the extension.toml file
EXTENSION_PATH = os.path.dirname(os.path.realpath(__file__))
# Read the extension.toml file
EXTENSION_TOML_DATA = toml.load(os.path.join(EXTENSION_PATH, "config", "extension.toml"))
# Minimum dependencies required prior to installation
INSTALL_REQUIRES = [
    # generic
    "numpy",
    "torch==2.0.1",
    "torchvision>=0.14.1",  # ensure compatibility with the pinned torch version above
"protobuf>=3.20.2",
# data collection
"h5py",
# basic logger
"tensorboard",
# video recording
"moviepy",
]
# Extra dependencies for RL agents
EXTRAS_REQUIRE = {
"sb3": ["stable-baselines3>=2.0"],
"skrl": ["skrl>=1.1.0"],
"rl_games": ["rl-games==1.6.1", "gym"], # rl-games still needs gym :(
"rsl_rl": ["rsl_rl@git+https://github.com/leggedrobotics/rsl_rl.git"],
"robomimic": ["robomimic@git+https://github.com/ARISE-Initiative/robomimic.git"],
}
# aggregate of all extra-requires
EXTRAS_REQUIRE["all"] = list(itertools.chain.from_iterable(EXTRAS_REQUIRE.values()))
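# Example (hypothetical, for local development): install this package with a single RL-framework
# extra, e.g. `pip install -e ".[rsl_rl]"`, or use ".[all]" to pull in every supported framework.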
# Installation operation
setup(
name="omni-isaac-orbit_tasks",
author="ORBIT Project Developers",
maintainer="Mayank Mittal",
maintainer_email="[email protected]",
url=EXTENSION_TOML_DATA["package"]["repository"],
version=EXTENSION_TOML_DATA["package"]["version"],
description=EXTENSION_TOML_DATA["package"]["description"],
keywords=EXTENSION_TOML_DATA["package"]["keywords"],
include_package_data=True,
python_requires=">=3.10",
install_requires=INSTALL_REQUIRES,
extras_require=EXTRAS_REQUIRE,
packages=["omni.isaac.orbit_tasks"],
classifiers=[
"Natural Language :: English",
"Programming Language :: Python :: 3.10",
"Isaac Sim :: 2023.1.0-hotfix.1",
"Isaac Sim :: 2023.1.1",
],
zip_safe=False,
)
| 2,113 | Python | 29.637681 | 89 | 0.67345 |
fnuabhimanyu8713/orbit/source/extensions/omni.isaac.orbit_tasks/test/test_environments.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Launch Isaac Sim Simulator first."""
from omni.isaac.orbit.app import AppLauncher, run_tests
# launch the simulator
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import torch
import unittest
import omni.usd
from omni.isaac.orbit.envs import RLTaskEnv, RLTaskEnvCfg
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils.parse_cfg import parse_env_cfg
class TestEnvironments(unittest.TestCase):
"""Test cases for all registered environments."""
@classmethod
def setUpClass(cls):
# acquire all Isaac environments names
cls.registered_tasks = list()
for task_spec in gym.registry.values():
if "Isaac" in task_spec.id:
cls.registered_tasks.append(task_spec.id)
# sort environments by name
cls.registered_tasks.sort()
# print all existing task names
print(">>> All registered environments:", cls.registered_tasks)
"""
Test fixtures.
"""
def test_multiple_instances_gpu(self):
"""Run all environments with multiple instances and check environments return valid signals."""
# common parameters
num_envs = 32
use_gpu = True
# iterate over all registered environments
for task_name in self.registered_tasks:
with self.subTest(task_name=task_name):
print(f">>> Running test for environment: {task_name}")
# check environment
self._check_random_actions(task_name, use_gpu, num_envs, num_steps=100)
# close the environment
print(f">>> Closing environment: {task_name}")
print("-" * 80)
def test_single_instance_gpu(self):
"""Run all environments with single instance and check environments return valid signals."""
# common parameters
num_envs = 1
use_gpu = True
# iterate over all registered environments
for task_name in self.registered_tasks:
with self.subTest(task_name=task_name):
print(f">>> Running test for environment: {task_name}")
# check environment
self._check_random_actions(task_name, use_gpu, num_envs, num_steps=100)
# close the environment
print(f">>> Closing environment: {task_name}")
print("-" * 80)
"""
Helper functions.
"""
def _check_random_actions(self, task_name: str, use_gpu: bool, num_envs: int, num_steps: int = 1000):
"""Run random actions and check environments return valid signals."""
# create a new stage
omni.usd.get_context().new_stage()
# parse configuration
env_cfg: RLTaskEnvCfg = parse_env_cfg(task_name, use_gpu=use_gpu, num_envs=num_envs)
# create environment
env: RLTaskEnv = gym.make(task_name, cfg=env_cfg)
# reset environment
obs, _ = env.reset()
# check signal
self.assertTrue(self._check_valid_tensor(obs))
# simulate environment for num_steps steps
with torch.inference_mode():
for _ in range(num_steps):
# sample actions from -1 to 1
actions = 2 * torch.rand(env.action_space.shape, device=env.unwrapped.device) - 1
# apply actions
transition = env.step(actions)
# check signals
for data in transition:
self.assertTrue(self._check_valid_tensor(data), msg=f"Invalid data: {data}")
# close the environment
env.close()
@staticmethod
def _check_valid_tensor(data: torch.Tensor | dict) -> bool:
"""Checks if given data does not have corrupted values.
Args:
data: Data buffer.
Returns:
True if the data is valid.
"""
if isinstance(data, torch.Tensor):
return not torch.any(torch.isnan(data))
elif isinstance(data, dict):
valid_tensor = True
for value in data.values():
if isinstance(value, dict):
valid_tensor &= TestEnvironments._check_valid_tensor(value)
elif isinstance(value, torch.Tensor):
valid_tensor &= not torch.any(torch.isnan(value))
return valid_tensor
else:
raise ValueError(f"Input data of invalid type: {type(data)}.")
if __name__ == "__main__":
run_tests()
| 4,679 | Python | 33.666666 | 105 | 0.600769 |
fnuabhimanyu8713/orbit/source/extensions/omni.isaac.orbit_tasks/test/test_data_collector.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Launch Isaac Sim Simulator first."""
from omni.isaac.orbit.app import AppLauncher, run_tests
# launch the simulator
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app
"""Rest everything follows."""
import os
import torch
import unittest
from omni.isaac.orbit_tasks.utils.data_collector import RobomimicDataCollector
class TestRobomimicDataCollector(unittest.TestCase):
"""Test dataset flushing behavior of robomimic data collector."""
def test_basic_flushing(self):
"""Adds random data into the collector and checks saving of the data."""
# name of the environment (needed by robomimic)
task_name = "My-Task-v0"
# specify directory for logging experiments
test_dir = os.path.dirname(os.path.abspath(__file__))
log_dir = os.path.join(test_dir, "output", "demos")
# name of the file to save data
filename = "hdf_dataset.hdf5"
# number of episodes to collect
num_demos = 10
# number of environments to simulate
num_envs = 4
# create data-collector
collector_interface = RobomimicDataCollector(task_name, log_dir, filename, num_demos)
# reset the collector
collector_interface.reset()
while not collector_interface.is_stopped():
# generate random data to store
# -- obs
obs = {"joint_pos": torch.randn(num_envs, 7), "joint_vel": torch.randn(num_envs, 7)}
# -- actions
actions = torch.randn(num_envs, 7)
# -- next obs
next_obs = {"joint_pos": torch.randn(num_envs, 7), "joint_vel": torch.randn(num_envs, 7)}
# -- rewards
rewards = torch.randn(num_envs)
# -- dones
dones = torch.rand(num_envs) > 0.5
# store signals
# -- obs
for key, value in obs.items():
collector_interface.add(f"obs/{key}", value)
# -- actions
collector_interface.add("actions", actions)
# -- next_obs
for key, value in next_obs.items():
collector_interface.add(f"next_obs/{key}", value.cpu().numpy())
# -- rewards
collector_interface.add("rewards", rewards)
# -- dones
collector_interface.add("dones", dones)
# flush data from collector for successful environments
# note: in this case we flush all the time
reset_env_ids = dones.nonzero(as_tuple=False).squeeze(-1)
collector_interface.flush(reset_env_ids)
# close collector
collector_interface.close()
# TODO: Add inspection of the saved dataset as part of the test.
if __name__ == "__main__":
run_tests()
| 2,906 | Python | 32.802325 | 101 | 0.602546 |
fnuabhimanyu8713/orbit/source/extensions/omni.isaac.orbit_tasks/test/test_record_video.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Launch Isaac Sim Simulator first."""
from omni.isaac.orbit.app import AppLauncher, run_tests
# launch the simulator
app_launcher = AppLauncher(headless=True, offscreen_render=True)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import os
import torch
import unittest
import omni.usd
from omni.isaac.orbit.envs import RLTaskEnvCfg
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils import parse_env_cfg
class TestRecordVideoWrapper(unittest.TestCase):
"""Test recording videos using the RecordVideo wrapper."""
@classmethod
def setUpClass(cls):
# acquire all Isaac environments names
cls.registered_tasks = list()
for task_spec in gym.registry.values():
if "Isaac" in task_spec.id:
cls.registered_tasks.append(task_spec.id)
# sort environments by name
cls.registered_tasks.sort()
# print all existing task names
print(">>> All registered environments:", cls.registered_tasks)
# directory to save videos
cls.videos_dir = os.path.join(os.path.dirname(__file__), "output", "videos")
def setUp(self) -> None:
# common parameters
self.num_envs = 16
self.use_gpu = True
# video parameters
self.step_trigger = lambda step: step % 225 == 0
self.video_length = 200
def test_record_video(self):
"""Run random actions agent with recording of videos."""
for task_name in self.registered_tasks:
with self.subTest(task_name=task_name):
print(f">>> Running test for environment: {task_name}")
# create a new stage
omni.usd.get_context().new_stage()
# parse configuration
env_cfg: RLTaskEnvCfg = parse_env_cfg(task_name, use_gpu=self.use_gpu, num_envs=self.num_envs)
# create environment
env = gym.make(task_name, cfg=env_cfg, render_mode="rgb_array")
# directory to save videos
videos_dir = os.path.join(self.videos_dir, task_name)
# wrap environment to record videos
env = gym.wrappers.RecordVideo(
env,
videos_dir,
step_trigger=self.step_trigger,
video_length=self.video_length,
disable_logger=True,
)
# reset environment
env.reset()
# simulate environment
with torch.inference_mode():
for _ in range(500):
                        # sample actions from -1 to 1
actions = 2 * torch.rand(env.action_space.shape, device=env.unwrapped.device) - 1
# apply actions
_ = env.step(actions)
# close the simulator
env.close()
if __name__ == "__main__":
run_tests()
| 3,127 | Python | 31.926315 | 110 | 0.579789 |
fnuabhimanyu8713/orbit/source/extensions/omni.isaac.orbit_tasks/test/wrappers/test_rsl_rl_wrapper.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Launch Isaac Sim Simulator first."""
from omni.isaac.orbit.app import AppLauncher, run_tests
# launch the simulator
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import torch
import unittest
import omni.usd
from omni.isaac.orbit.envs import RLTaskEnvCfg
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils.parse_cfg import parse_env_cfg
from omni.isaac.orbit_tasks.utils.wrappers.rsl_rl import RslRlVecEnvWrapper
class TestRslRlVecEnvWrapper(unittest.TestCase):
"""Test that RSL-RL VecEnv wrapper works as expected."""
@classmethod
def setUpClass(cls):
# acquire all Isaac environments names
cls.registered_tasks = list()
for task_spec in gym.registry.values():
if "Isaac" in task_spec.id:
cls.registered_tasks.append(task_spec.id)
# sort environments by name
cls.registered_tasks.sort()
# only pick the first four environments to test
cls.registered_tasks = cls.registered_tasks[:4]
# print all existing task names
print(">>> All registered environments:", cls.registered_tasks)
def setUp(self) -> None:
# common parameters
self.num_envs = 64
self.use_gpu = True
def test_random_actions(self):
"""Run random actions and check environments return valid signals."""
for task_name in self.registered_tasks:
with self.subTest(task_name=task_name):
print(f">>> Running test for environment: {task_name}")
# create a new stage
omni.usd.get_context().new_stage()
# parse configuration
env_cfg: RLTaskEnvCfg = parse_env_cfg(task_name, use_gpu=self.use_gpu, num_envs=self.num_envs)
# create environment
env = gym.make(task_name, cfg=env_cfg)
# wrap environment
env = RslRlVecEnvWrapper(env)
# reset environment
obs, extras = env.reset()
# check signal
self.assertTrue(self._check_valid_tensor(obs))
self.assertTrue(self._check_valid_tensor(extras))
# simulate environment for 1000 steps
with torch.inference_mode():
for _ in range(1000):
# sample actions from -1 to 1
actions = 2 * torch.rand(env.action_space.shape, device=env.unwrapped.device) - 1
# apply actions
transition = env.step(actions)
# check signals
for data in transition:
self.assertTrue(self._check_valid_tensor(data), msg=f"Invalid data: {data}")
# close the environment
print(f">>> Closing environment: {task_name}")
env.close()
def test_no_time_outs(self):
"""Check that environments with finite horizon do not send time-out signals."""
for task_name in self.registered_tasks:
with self.subTest(task_name=task_name):
print(f">>> Running test for environment: {task_name}")
# create a new stage
omni.usd.get_context().new_stage()
# parse configuration
env_cfg: RLTaskEnvCfg = parse_env_cfg(task_name, use_gpu=self.use_gpu, num_envs=self.num_envs)
# change to finite horizon
env_cfg.is_finite_horizon = True
# create environment
env = gym.make(task_name, cfg=env_cfg)
# wrap environment
env = RslRlVecEnvWrapper(env)
# reset environment
_, extras = env.reset()
# check signal
self.assertNotIn("time_outs", extras, msg="Time-out signal found in finite horizon environment.")
# simulate environment for 10 steps
with torch.inference_mode():
for _ in range(10):
# sample actions from -1 to 1
actions = 2 * torch.rand(env.action_space.shape, device=env.unwrapped.device) - 1
# apply actions
extras = env.step(actions)[-1]
# check signals
self.assertNotIn(
"time_outs", extras, msg="Time-out signal found in finite horizon environment."
)
# close the environment
print(f">>> Closing environment: {task_name}")
env.close()
"""
Helper functions.
"""
@staticmethod
def _check_valid_tensor(data: torch.Tensor | dict) -> bool:
"""Checks if given data does not have corrupted values.
Args:
data: Data buffer.
Returns:
True if the data is valid.
"""
if isinstance(data, torch.Tensor):
return not torch.any(torch.isnan(data))
elif isinstance(data, dict):
valid_tensor = True
for value in data.values():
if isinstance(value, dict):
valid_tensor &= TestRslRlVecEnvWrapper._check_valid_tensor(value)
elif isinstance(value, torch.Tensor):
valid_tensor &= not torch.any(torch.isnan(value))
return valid_tensor
else:
raise ValueError(f"Input data of invalid type: {type(data)}.")
if __name__ == "__main__":
run_tests()
| 5,796 | Python | 36.160256 | 113 | 0.559179 |
fnuabhimanyu8713/orbit/source/extensions/omni.isaac.orbit_tasks/test/wrappers/test_rl_games_wrapper.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Launch Isaac Sim Simulator first."""
from omni.isaac.orbit.app import AppLauncher, run_tests
# launch the simulator
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import torch
import unittest
import omni.usd
from omni.isaac.orbit.envs import RLTaskEnvCfg
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils.parse_cfg import parse_env_cfg
from omni.isaac.orbit_tasks.utils.wrappers.rl_games import RlGamesVecEnvWrapper
class TestRlGamesVecEnvWrapper(unittest.TestCase):
"""Test that RL-Games VecEnv wrapper works as expected."""
@classmethod
def setUpClass(cls):
# acquire all Isaac environments names
cls.registered_tasks = list()
for task_spec in gym.registry.values():
if "Isaac" in task_spec.id:
cls.registered_tasks.append(task_spec.id)
# sort environments by name
cls.registered_tasks.sort()
# only pick the first four environments to test
cls.registered_tasks = cls.registered_tasks[:4]
# print all existing task names
print(">>> All registered environments:", cls.registered_tasks)
def setUp(self) -> None:
# common parameters
self.num_envs = 64
self.use_gpu = True
def test_random_actions(self):
"""Run random actions and check environments return valid signals."""
for task_name in self.registered_tasks:
with self.subTest(task_name=task_name):
print(f">>> Running test for environment: {task_name}")
# create a new stage
omni.usd.get_context().new_stage()
# parse configuration
env_cfg: RLTaskEnvCfg = parse_env_cfg(task_name, use_gpu=self.use_gpu, num_envs=self.num_envs)
# create environment
env = gym.make(task_name, cfg=env_cfg)
# wrap environment
env = RlGamesVecEnvWrapper(env, "cuda:0", 100, 100)
# reset environment
obs = env.reset()
# check signal
self.assertTrue(self._check_valid_tensor(obs))
# simulate environment for 100 steps
with torch.inference_mode():
for _ in range(100):
# sample actions from -1 to 1
actions = 2 * torch.rand(env.num_envs, *env.action_space.shape, device=env.device) - 1
# apply actions
transition = env.step(actions)
# check signals
for data in transition:
self.assertTrue(self._check_valid_tensor(data), msg=f"Invalid data: {data}")
# close the environment
print(f">>> Closing environment: {task_name}")
env.close()
"""
Helper functions.
"""
@staticmethod
def _check_valid_tensor(data: torch.Tensor | dict) -> bool:
"""Checks if given data does not have corrupted values.
Args:
data: Data buffer.
Returns:
True if the data is valid.
"""
if isinstance(data, torch.Tensor):
return not torch.any(torch.isnan(data))
elif isinstance(data, dict):
valid_tensor = True
for value in data.values():
if isinstance(value, dict):
valid_tensor &= TestRlGamesVecEnvWrapper._check_valid_tensor(value)
elif isinstance(value, torch.Tensor):
valid_tensor &= not torch.any(torch.isnan(value))
return valid_tensor
else:
raise ValueError(f"Input data of invalid type: {type(data)}.")
if __name__ == "__main__":
run_tests()
| 3,997 | Python | 33.17094 | 110 | 0.583688 |
fnuabhimanyu8713/orbit/source/extensions/omni.isaac.orbit_tasks/test/wrappers/test_sb3_wrapper.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Launch Isaac Sim Simulator first."""
from omni.isaac.orbit.app import AppLauncher, run_tests
# launch the simulator
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import numpy as np
import torch
import unittest
import omni.usd
from omni.isaac.orbit.envs import RLTaskEnvCfg
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils.parse_cfg import parse_env_cfg
from omni.isaac.orbit_tasks.utils.wrappers.sb3 import Sb3VecEnvWrapper
class TestStableBaselines3VecEnvWrapper(unittest.TestCase):
    """Test that Stable-Baselines3 VecEnv wrapper works as expected."""
@classmethod
def setUpClass(cls):
# acquire all Isaac environments names
cls.registered_tasks = list()
for task_spec in gym.registry.values():
if "Isaac" in task_spec.id:
cls.registered_tasks.append(task_spec.id)
# sort environments by name
cls.registered_tasks.sort()
# only pick the first four environments to test
cls.registered_tasks = cls.registered_tasks[:4]
# print all existing task names
print(">>> All registered environments:", cls.registered_tasks)
def setUp(self) -> None:
# common parameters
self.num_envs = 64
self.use_gpu = True
def test_random_actions(self):
"""Run random actions and check environments return valid signals."""
for task_name in self.registered_tasks:
with self.subTest(task_name=task_name):
print(f">>> Running test for environment: {task_name}")
# create a new stage
omni.usd.get_context().new_stage()
# parse configuration
env_cfg: RLTaskEnvCfg = parse_env_cfg(task_name, use_gpu=self.use_gpu, num_envs=self.num_envs)
# create environment
env = gym.make(task_name, cfg=env_cfg)
# wrap environment
env = Sb3VecEnvWrapper(env)
# reset environment
obs = env.reset()
# check signal
self.assertTrue(self._check_valid_array(obs))
# simulate environment for 1000 steps
with torch.inference_mode():
for _ in range(1000):
# sample actions from -1 to 1
actions = 2 * np.random.rand(env.num_envs, *env.action_space.shape) - 1
# apply actions
transition = env.step(actions)
# check signals
for data in transition:
self.assertTrue(self._check_valid_array(data), msg=f"Invalid data: {data}")
# close the environment
print(f">>> Closing environment: {task_name}")
env.close()
"""
Helper functions.
"""
@staticmethod
def _check_valid_array(data: np.ndarray | dict | list) -> bool:
"""Checks if given data does not have corrupted values.
Args:
data: Data buffer.
Returns:
True if the data is valid.
"""
if isinstance(data, np.ndarray):
return not np.any(np.isnan(data))
elif isinstance(data, dict):
valid_array = True
for value in data.values():
if isinstance(value, dict):
valid_array &= TestStableBaselines3VecEnvWrapper._check_valid_array(value)
elif isinstance(value, np.ndarray):
valid_array &= not np.any(np.isnan(value))
return valid_array
elif isinstance(data, list):
valid_array = True
for value in data:
valid_array &= TestStableBaselines3VecEnvWrapper._check_valid_array(value)
return valid_array
else:
raise ValueError(f"Input data of invalid type: {type(data)}.")
if __name__ == "__main__":
run_tests()
| 4,188 | Python | 33.05691 | 110 | 0.582139 |