file_path (stringlengths 20–202) | content (stringlengths 9–3.85M) | size (int64 9–3.85M) | lang (stringclasses 9 values) | avg_line_length (float64 3.33–100) | max_line_length (int64 8–993) | alphanum_fraction (float64 0.26–0.93)
---|---|---|---|---|---|---
Mariuxtheone/kit-extension-sample-camerastudio/exts/omni.example.camerastudio/omni/example/camerastudio/csvreader.py | import omni.ext
import omni.ui as ui
import omni.kit.commands
from pxr import UsdGeom
from omni.kit.window.file_importer import get_file_importer
from typing import List, Tuple, Callable, Dict
import csv
from .cameragenerator import CameraGenerator
class CSVReader():
def __init__(self):
pass
def import_handler(self,filename: str, dirname: str, selections: List[str] = []):
print(f"> Import '{filename}' from '{dirname}' or selected files '{selections}'")
self.openCSV(dirname+filename)
def on_open_file(self):
file_importer = get_file_importer()
file_importer.show_window(
title="Import File",
# The callback function called after the user has selected a file.
import_handler=self.import_handler
)
    # Open a CSV file, read each row, and create a camera from the
    # shot_name, focal_length, aperture and distance columns.
    def openCSV(self, selections):
        with open(selections) as csv_file:
            csv_reader = csv.reader(csv_file, delimiter=',')
            line_count = 0
            for row in csv_reader:
                if line_count == 0:
                    # Skip the header row
                    line_count += 1
                else:
                    shot_name = row[0]
                    print(f'Shot Name: {shot_name}.')
                    focal_length = row[1]
                    print(f'Focal Length: {focal_length}.')
                    aperture = row[2]
                    print(f'Aperture: {aperture}.')
                    distance = row[3]
                    print(f'Distance: {distance}.')
                    # Use the CSV values to generate a camera for this shot
                    cameraGenerator = CameraGenerator()
                    cameraGenerator.generate_camera(str(shot_name), float(focal_length), float(aperture), float(distance))
                    line_count += 1
| 1,960 | Python | 34.017857 | 137 | 0.560714 |
Mariuxtheone/kit-extension-sample-camerastudio/exts/omni.example.camerastudio/omni/example/camerastudio/cameragenerator.py | import omni.ext
import omni.ui as ui
import omni.kit.commands
from pxr import UsdGeom
from omni.kit.window.file_importer import get_file_importer
class CameraGenerator():
def __init__(self):
pass
def generate_camera(self, shot_name, focal_length, aperture, distance):
#generate camera
omni.kit.commands.execute("CreatePrimWithDefaultXform",
prim_type="Camera",
prim_path="/World/"+shot_name,
attributes={
"projection": UsdGeom.Tokens.perspective,
"focalLength": focal_length,
"horizontalAperture": aperture,
}
)
        # Move the camera back along Z by the given distance (scaled by 1000 to convert to scene units)
omni.kit.commands.execute('TransformMultiPrimsSRTCpp',
count=1,
paths=['/World/'+shot_name],
new_translations=[0, 0, distance*1000],
new_rotation_eulers=[-0.0, -0.0, -0.0],
new_rotation_orders=[1, 0, 2],
new_scales=[1.0, 1.0, 1.0],
old_translations=[0.0, 0.0, 0.0],
old_rotation_eulers=[0.0, -0.0, -0.0],
old_rotation_orders=[1, 0, 2],
old_scales=[1.0, 1.0, 1.0],
usd_context_name='',
time_code=0.0)
| 1,526 | Python | 38.153845 | 75 | 0.439712 |
Mariuxtheone/kit-extension-sample-camerastudio/exts/omni.example.camerastudio/docs/README.md | # Camera Studio
This extension allows you to open a CSV file containing camera settings and generate in-scene cameras accordingly.
Usage:
The extension generates cameras with the following settings:
- Shot Name
- Focal Length (in mm)
- Horizontal Aperture (in mm)
- Distance from the subject at which the camera should be placed in the scene (in meters)
1) Create your .csv file with the following header (a short sketch for generating such a file programmatically appears after these steps):
shot_name,focal_length,aperture,distance
e.g.
shot_name,focal_length,aperture,distance
establishing_shot,24,2.8,4
wide_shot,14,2.0,4
over_the_shoulder_shot,50,2.8,0.5
point_of_view_shot,85,2.8,0.5
low_angle_shot,24,1.8,0.5
high_angle_shot,100,2.8,1.5
2) Open the .csv file via the Extension.
3) The extension will generate the cameras in your scene with the desired shots configured.
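As a convenience, the example file above can also be written out programmatically. This is a minimal sketch using Python's standard `csv` module; the output file name `shots.csv` is an arbitrary choice, not something the extension requires:

```python
import csv

# Example rows taken from the table above: (shot_name, focal_length, aperture, distance)
rows = [
    ("establishing_shot", 24, 2.8, 4),
    ("wide_shot", 14, 2.0, 4),
    ("over_the_shoulder_shot", 50, 2.8, 0.5),
]

# "shots.csv" is a hypothetical file name; any .csv path you can open from the extension works.
with open("shots.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["shot_name", "focal_length", "aperture", "distance"])  # required header
    writer.writerows(rows)
```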
| 799 | Markdown | 26.586206 | 128 | 0.777222 |
Mariuxtheone/omni-openai-gpt3-snippet-extension/README.md | # NVIDIA Omniverse OpenAI GPT-3 Snippet Extension

This is an Extension that adds a simple snippet UI to NVIDIA Omniverse which allows you to generate GPT-3 based snippets.
## 1) Dependencies
In order to use this extension, you will need to install the following dependencies:
- openai python library: `pip install openai`
- pyperclip: `pip install pyperclip`
## 2) Installation
1) Install the Extension in your Omniverse app.
2) We need to create a file containing the OpenAI API key and the path to the main Python modules directory on our device, since Omniverse doesn't use the global PYTHONHOME and PYTHONPATH.
3) To do this, in the omni\openai\snippet\ folder, create a new file called `apikeys.py`
4) In the `apikeys.py` file, add the following lines:
```
apikey = "YOUR_OPENAI_API_KEY_GOES_HERE"
pythonpath = "The file path where you have installed your main python modules"
```
so `apikeys.py` should look like this:
```
apikey = "sk-123Mb38gELphag234GDyYT67FJwa3334FPRZQZ2Aq5f1o" (this is a fake API key, good try!)
pythonpath = "C:/Users/yourusername/AppData/Local/Packages/PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0/LocalCache/local-packages/Python310/site-packages"
```
## 3) Enable and Usage
To use the extension, enable it from the Extensions window and then click the "Generate and Copy to Clipboard" button. The generated snippet will be copied to your clipboard and you can paste it anywhere you want.
## 4) IMPORTANT DISCLAIMER
1) OpenAI is a third-party API and you will need to create an account with OpenAI to use it. Consider that there is a cost associated with using the API.
2) By default, the extension generates snippets of up to 40 tokens. If you want to generate more tokens, you will need to edit the variable `openaitokensresponse`.
3) By default, the extension uses the GPT-3 engine "DaVinci" (`text-davinci-001`), which is the most powerful, but also the most expensive, engine. If you want to use a different engine, you will need to edit the variable `engine` in `openai.Completion.create()` (see the sketch below).
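As an illustration of points 2 and 3, here is a minimal sketch of how the completion call in `extension.py` might look after changing the token limit and engine. The engine name `text-curie-001` and the limit of 100 tokens are arbitrary examples, not the extension's defaults, and the prompt is made up:

```python
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # loaded from apikeys.py in the real extension

openaitokensresponse = 100  # raised from the default of 40 tokens

response = openai.Completion.create(
    engine="text-curie-001",  # example of a cheaper GPT-3 engine
    prompt="Write a haiku about synthetic data",
    max_tokens=openaitokensresponse,
)
text = response["choices"][0]["text"]
print(text)
```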
| 2,095 | Markdown | 43.595744 | 255 | 0.77327 |
Mariuxtheone/omni-openai-gpt3-snippet-extension/exts/omni.openai.snippet/omni/openai/snippet/extension.py | import omni.ext
import omni.ui as ui
#create a file apikeys.py in the same folder as extension.py and add 2 variables:
# apikey = "your openai api key"
# pythonpath = "the path of the python folder where the openai python library is installed"
from .apikeys import apikey
from .apikeys import pythonpath
import pyperclip
import sys
sys.path.append(pythonpath)
import openai
#tokens used in the OpenAI API response
openaitokensresponse = 40
# Any class derived from `omni.ext.IExt` in top level module (defined in `python.modules` of `extension.toml`) will be
# instantiated when extension gets enabled and `on_startup(ext_id)` will be called. Later when extension gets disabled
# on_shutdown() is called.
class MyExtension(omni.ext.IExt):
# ext_id is current extension id. It can be used with extension manager to query additional information, like where
# this extension is located on filesystem.
def on_startup(self, ext_id):
print("[omni.openai.snippet] MyExtension startup")
self._window = ui.Window("OpenAI GPT-3 Text Generator", width=300, height=300)
with self._window.frame:
with ui.VStack():
prompt_label = ui.Label("Your Prompt:")
prompt_field = ui.StringField(multiline=True)
result_label = ui.Label("OpenAI GPT-3 Result:")
label_style = {"Label": {"font_size": 16, "color": 0xFF00FF00,}}
result_actual_label = ui.Label("The OpenAI generated text will show up here", style=label_style, word_wrap=True)
def on_click():
                    # Load the OpenAI API key from the local apikeys module (see the note at the top of this file)
                    openai.api_key = apikey
my_prompt = prompt_field.model.get_value_as_string().replace("\n", " ")
response = openai.Completion.create(engine="text-davinci-001", prompt=my_prompt, max_tokens=openaitokensresponse)
#parse response as json and extract text
text = response["choices"][0]["text"]
pyperclip.copy(text)
result_actual_label.text = ""
result_actual_label.text = text
ui.Button("Generate and Copy to Clipboard", clicked_fn=lambda: on_click())
def on_shutdown(self):
print("[omni.openai.snippet] MyExtension shutdown")
| 2,609 | Python | 40.428571 | 133 | 0.617478 |
echo3Dco/NVIDIAOmniverse-echo3D-extension/README.md | # Echo3D Omniverse Extension
An extension that allows Nvidia Omniverse users to easily import their echo3D assets into their projects, as well as search for new assets in the echo3D public library.
Installation steps can be found at https://docs.echo3d.com/nvidia-omniverse/installation
| 289 | Markdown | 47.333325 | 168 | 0.820069 |
echo3Dco/NVIDIAOmniverse-echo3D-extension/exts/echo3d.search/echo3d/search/extension.py | import json
import os
import asyncio
import ssl
import certifi
import aiohttp
import omni.ext
import omni.ui as ui
import omni.kit.commands
import urllib
from omni.ui import color as cl
# GLOBAL VARIABLES #
IMAGES_PER_PAGE = 3
current_search_page = 0
current_project_page = 0
searchJsonData = []
projectJsonData = []
# UI Elements for the thumbnails
search_image_widgets = [ui.Image() for _ in range(IMAGES_PER_PAGE)]
project_image_widgets = [ui.Button() for _ in range(IMAGES_PER_PAGE)]
# Hardcoded echo3D images
script_dir = os.path.dirname(os.path.abspath(__file__))
logo_image_filename = 'echo3D_Logo.png'
logo_image_path = os.path.join(script_dir, logo_image_filename)
cloud_image_filename = 'cloud_background_transparent.png'
cloud_image_path = os.path.join(script_dir, cloud_image_filename)
# State variables to hold the style associated with each thumbnail
project_button_styles = [
{
"border_radius": 5,
"Button.Image": {
"color": cl("#FFFFFF30"),
"image_url": cloud_image_path,
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
}
} for _ in range(IMAGES_PER_PAGE)]
search_button_styles = [
{
"border_radius": 5,
"Button.Image": {
"color": cl("#FFFFFF30"),
"image_url": cloud_image_path,
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
}
} for _ in range(IMAGES_PER_PAGE)]
arrowStyle = {
":disabled": {
"background_color": cl("#1f212460")
},
"Button.Label:disabled": {
"color": cl("#FFFFFF40")
}
}
###########################################################################################################
# #
# An extension for Nvidia Omniverse that allows users to connect to their echo3D projects in order to #
# stream their existing assets into the Omniverse Viewport, as well as search for new assets in the #
# echo3D public asset library to add to their projects. #
# #
###########################################################################################################
class Echo3dSearchExtension(omni.ext.IExt):
def on_startup(self, ext_id):
print("[echo3D] echo3D startup")
###############################################
# Define Functions for Search Feature #
###############################################
        # Load in new image thumbnails when the user clicks the previous/next buttons
def update_search_images(searchJsonData):
start_index = current_search_page * IMAGES_PER_PAGE
end_index = start_index + IMAGES_PER_PAGE
print(start_index)
print(end_index)
for i in range(start_index, end_index):
if i < len(searchJsonData):
search_button_styles[i % IMAGES_PER_PAGE] = {"Button.Image": {
"color": cl("#FFFFFF"),
"image_url": searchJsonData[i]["thumbnail"],
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
},
"border_radius": 5
}
search_image_widgets[i % IMAGES_PER_PAGE].style = search_button_styles[i % IMAGES_PER_PAGE]
search_image_widgets[i % IMAGES_PER_PAGE].enabled = True
else:
global cloud_image_path
search_button_styles[i % IMAGES_PER_PAGE] = {
"Button.Image": {
"color": cl("#FFFFFF30"),
"image_url": cloud_image_path,
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
},
"border_radius": 5
}
search_image_widgets[i % IMAGES_PER_PAGE].style = search_button_styles[i % IMAGES_PER_PAGE]
search_image_widgets[i % IMAGES_PER_PAGE].enabled = False
# Update state variables to reflect change of page, disable arrow buttons, update the thumbnails shown
def on_click_left_arrow_search():
global current_search_page
current_search_page -= 1
if (current_search_page == 0):
searchLeftArrow.enabled = False
searchRightArrow.enabled = True
global searchJsonData
update_search_images(searchJsonData)
def on_click_right_arrow_search():
global current_search_page
current_search_page += 1
global searchJsonData
if ((current_search_page + 1) * IMAGES_PER_PAGE >= len(searchJsonData)):
searchRightArrow.enabled = False
searchLeftArrow.enabled = True
update_search_images(searchJsonData)
async def on_click_search_image(index):
global searchJsonData
global current_search_page
selectedEntry = searchJsonData[current_search_page * IMAGES_PER_PAGE + index]
url = selectedEntry["glb_location_url"]
filename = selectedEntry["name"] + '.glb'
folder_path = os.path.join(os.path.dirname(__file__), "temp_files")
file_path = os.path.join(folder_path, filename)
if not os.path.exists(folder_path):
os.makedirs(folder_path)
async with aiohttp.ClientSession() as session:
async with session.get(url) as response:
response.raise_for_status()
content = await response.read()
with open(file_path, "wb") as file:
file.write(content)
omni.kit.commands.execute('CreateReferenceCommand',
path_to='/World/' + os.path.splitext(filename)[0].replace(" ", "_"),
asset_path=file_path,
usd_context=omni.usd.get_context())
api_url = "https://api.echo3d.com/upload"
data = {
"key": apiKeyInput.model.get_value_as_string(),
"secKey": secKeyInput.model.get_value_as_string(),
"data": "filePath:null",
"type": "upload",
"target_type": "2",
"hologram_type": "2",
"file_size": str(os.path.getsize(file_path)),
"file_model": open(file_path, "rb")
}
async with session.post(url=api_url, data=data) as uploadRequest:
uploadRequest.raise_for_status()
# Call the echo3D /search endpoint to get models and display the resulting thumbnails
def on_click_search():
global current_search_page
current_search_page = 0
searchLeftArrow.enabled = False
searchRightArrow.enabled = False
searchTerm = searchInput.model.get_value_as_string()
api_url = "https://api.echo3d.com/search"
data = {
"key": apiKeyInput.model.get_value_as_string(),
"secKey": secKeyInput.model.get_value_as_string(),
"keywords": searchTerm,
"include2Dcontent": "false"
}
encoded_data = urllib.parse.urlencode(data).encode('utf-8')
request = urllib.request.Request(api_url, data=encoded_data)
response = urllib.request.urlopen(request, context=ssl.create_default_context(cafile=certifi.where()))
librarySearchRequest = response.read().decode('utf-8')
global searchJsonData
searchJsonData = json.loads(librarySearchRequest)
searchJsonData = [data for data in searchJsonData if "glb_location_url" in data
and data["source"] == 'poly']
global search_image_widgets
global search_button_styles
for i in range(IMAGES_PER_PAGE):
if i < len(searchJsonData):
search_button_styles[i] = {
"Button.Image": {
"color": cl("#FFFFFF"),
"image_url": searchJsonData[i]["thumbnail"],
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
},
"border_radius": 5
}
search_image_widgets[i].style = search_button_styles[i]
search_image_widgets[i].enabled = True
searchRightArrow.enabled = len(searchJsonData) > IMAGES_PER_PAGE
else:
global cloud_image_path
search_button_styles[i] = {
"Button.Image": {
"color": cl("#FFFFFF30"),
"image_url": cloud_image_path,
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
},
"border_radius": 5
}
search_image_widgets[i].style = search_button_styles[i]
search_image_widgets[i].enabled = False
# Clear all the thumbnails and search term
def on_reset_search():
global current_search_page
current_search_page = 0
searchInput.model.set_value("")
global search_image_widgets
for i in range(IMAGES_PER_PAGE):
global cloud_image_path
search_button_styles[i] = {
"Button.Image": {
"color": cl("#FFFFFF30"),
"image_url": cloud_image_path,
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
},
"border_radius": 5
}
search_image_widgets[i].style = search_button_styles[i]
search_image_widgets[i].enabled = False
#################################################
# Define Functions for Project Querying #
#################################################
        # Load in new image thumbnails when the user clicks the previous/next buttons
def update_project_images(projectJsonData):
start_index = current_project_page * IMAGES_PER_PAGE
end_index = start_index + IMAGES_PER_PAGE
for i in range(start_index, end_index):
if i < len(projectJsonData):
baseUrl = 'https://storage.echo3d.co/' + apiKeyInput.model.get_value_as_string() + "/"
imageFilename = projectJsonData[i]["additionalData"]["screenshotStorageID"]
project_button_styles[i % IMAGES_PER_PAGE] = {"Button.Image": {
"color": cl("#FFFFFF"),
"image_url": baseUrl + imageFilename,
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
},
"border_radius": 5
}
project_image_widgets[i % IMAGES_PER_PAGE].style = project_button_styles[i % IMAGES_PER_PAGE]
project_image_widgets[i % IMAGES_PER_PAGE].enabled = True
else:
global cloud_image_path
project_button_styles[i % IMAGES_PER_PAGE] = {
"Button.Image": {
"color": cl("#FFFFFF30"),
"image_url": cloud_image_path,
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
},
"border_radius": 5
}
project_image_widgets[i % IMAGES_PER_PAGE].style = project_button_styles[i % IMAGES_PER_PAGE]
project_image_widgets[i % IMAGES_PER_PAGE].enabled = False
# Update state variables to reflect change of page, disable arrow buttons, update the thumbnails shown
def on_click_left_arrow_project():
global current_project_page
current_project_page -= 1
if (current_project_page == 0):
projectLeftArrow.enabled = False
projectRightArrow.enabled = True
global projectJsonData
update_project_images(projectJsonData)
def on_click_right_arrow_project():
global current_project_page
current_project_page += 1
global projectJsonData
if ((current_project_page + 1) * IMAGES_PER_PAGE >= len(projectJsonData)):
projectRightArrow.enabled = False
projectLeftArrow.enabled = True
update_project_images(projectJsonData)
# When a user clicks a thumbnail, download the corresponding .usdz file if it exists and
# instantiate it in the scene. Otherwise use the .glb file
def on_click_project_image(index):
global projectJsonData
global current_project_page
selectedEntry = projectJsonData[current_project_page * IMAGES_PER_PAGE + index]
usdzStorageID = selectedEntry["additionalData"]["usdzHologramStorageID"]
usdzFilename = selectedEntry["additionalData"]["usdzHologramStorageFilename"]
if (usdzFilename):
open_project_asset_from_filename(usdzFilename, usdzStorageID)
else:
glbStorageID = selectedEntry["hologram"]["storageID"]
glbFilename = selectedEntry["hologram"]["filename"]
open_project_asset_from_filename(glbFilename, glbStorageID)
# Directly instantiate previously cached files from the session, or download them from the echo3D API
def open_project_asset_from_filename(filename, storageId):
folder_path = os.path.join(os.path.dirname(__file__), "temp_files")
if not os.path.exists(folder_path):
os.makedirs(folder_path)
file_path = os.path.join(folder_path, filename)
cachedUpload = os.path.exists(file_path)
if (not cachedUpload):
apiKey = apiKeyInput.model.get_value_as_string()
secKey = secKeyInput.model.get_value_as_string()
storageId = urllib.parse.quote(storageId)
url = f'https://api.echo3d.com/query?key={apiKey}&secKey={secKey}&file={storageId}'
response = urllib.request.urlopen(url, context=ssl.create_default_context(cafile=certifi.where()))
response_data = response.read()
with open(file_path, "wb") as file:
file.write(response_data)
omni.kit.commands.execute('CreateReferenceCommand',
path_to='/World/' + os.path.splitext(filename)[0],
asset_path=file_path,
usd_context=omni.usd.get_context())
# Call the echo3D /query endpoint to get models and display the resulting thumbnails
def on_click_load_project():
global current_project_page
current_project_page = 0
projectLeftArrow.enabled = False
projectRightArrow.enabled = False
api_url = "https://api.echo3d.com/query"
data = {
"key": apiKeyInput.model.get_value_as_string(),
"secKey": secKeyInput.model.get_value_as_string(),
}
encoded_data = urllib.parse.urlencode(data).encode('utf-8')
request = urllib.request.Request(api_url, data=encoded_data)
try:
with urllib.request.urlopen(request,
context=ssl.create_default_context(cafile=certifi.where())) as response:
response_data = response.read().decode('utf-8')
response_json = json.loads(response_data)
values = list(response_json["db"].values())
entriesWithScreenshot = [data for data in values if "additionalData" in data
and "screenshotStorageID" in data["additionalData"]]
global projectJsonData
projectJsonData = entriesWithScreenshot
global project_image_widgets
global project_button_styles
sampleModels = ["6af76ce2-2f57-4ed0-82d8-42652f0eddbe.png",
"d2398ecf-566b-4fde-b8cb-46b2fd6add1d.png",
"d686a655-e800-430d-bfd2-e38cdfb0c9e9.png"]
for i in range(IMAGES_PER_PAGE):
if i < len(projectJsonData):
imageFilename = projectJsonData[i]["additionalData"]["screenshotStorageID"]
if (imageFilename in sampleModels):
baseUrl = 'https://storage.echo3d.co/0_model_samples/'
else:
baseUrl = 'https://storage.echo3d.co/' + apiKeyInput.model.get_value_as_string() + "/"
project_button_styles[i] = {
"Button.Image": {
"color": cl("#FFFFFF"),
"image_url": baseUrl + imageFilename,
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
},
"border_radius": 5
}
project_image_widgets[i].style = project_button_styles[i]
project_image_widgets[i].enabled = True
projectRightArrow.enabled = len(projectJsonData) > IMAGES_PER_PAGE
else:
global cloud_image_path
project_button_styles[i] = {
"Button.Image": {
"color": cl("#FFFFFF30"),
"image_url": cloud_image_path,
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
},
"border_radius": 5
}
project_image_widgets[i].style = project_button_styles[i]
project_image_widgets[i].enabled = False
searchButton.enabled = True
clearButton.enabled = True
searchInput.enabled = True
disabledStateCover.style = {"background_color": cl("#32343400")}
loadError.visible = False
except Exception as e:
loadError.visible = True
print(str(e) + ". Ensure that your API Key and Security Key are entered correctly.")
# Display the UI
self._window = ui.Window("Echo3D", width=400, height=478)
with self._window.frame:
with ui.VStack():
script_dir = os.path.dirname(os.path.abspath(__file__))
logo_image_filename = 'echo3D_Logo.png'
logo_image_path = os.path.join(script_dir, logo_image_filename)
ui.Spacer(height=5)
with ui.Frame(height=25):
ui.Image(logo_image_path)
ui.Spacer(height=8)
with ui.HStack(height=20):
ui.Spacer(width=5)
with ui.Frame(width=85):
ui.Label("API Key:")
apiKeyInput = ui.StringField()
ui.Spacer(width=5)
ui.Spacer(height=3)
with ui.HStack(height=20):
ui.Spacer(width=5)
with ui.Frame(width=85):
ui.Label("Security Key:")
secKeyInput = ui.StringField()
with ui.Frame(width=5):
ui.Label("")
ui.Spacer(height=3)
with ui.Frame(height=20):
ui.Button("Load Project", clicked_fn=on_click_load_project)
loadError = ui.Label("Error: Cannot Load Project. Correct your keys and try again.", visible=False,
height=20, style={"color": cl("#FF0000")}, alignment=ui.Alignment.CENTER)
ui.Spacer(height=3)
# Overlay the disabled elements to indicate their state
with ui.ZStack():
with ui.VStack():
with ui.HStack(height=5):
ui.Spacer(width=5)
ui.Line(name='default', style={"color": cl.gray})
ui.Spacer(width=5)
ui.Spacer(height=3)
with ui.HStack(height=20):
ui.Spacer(width=5)
ui.Label("Assets in Project:")
global project_image_widgets
with ui.HStack(height=80):
with ui.Frame(height=80, width=10):
projectLeftArrow = ui.Button("<", clicked_fn=on_click_left_arrow_project, enabled=False,
style=arrowStyle)
for i in range(IMAGES_PER_PAGE):
with ui.Frame(height=80):
project_image_widgets[i] = ui.Button("", clicked_fn=lambda index=i:
on_click_project_image(index),
style=project_button_styles[i], enabled=False)
with ui.Frame(height=80, width=10):
projectRightArrow = ui.Button(">", clicked_fn=on_click_right_arrow_project,
enabled=False, style=arrowStyle)
ui.Spacer(height=10)
with ui.HStack(height=5):
ui.Spacer(width=5)
ui.Line(name='default', style={"color": cl.gray})
ui.Spacer(width=5)
ui.Spacer(height=5)
with ui.HStack(height=20):
ui.Spacer(width=5)
ui.Label("Public Search Results:")
global search_image_widgets
with ui.HStack(height=80):
with ui.Frame(height=80, width=10):
searchLeftArrow = ui.Button("<", clicked_fn=on_click_left_arrow_search, enabled=False,
style=arrowStyle)
for i in range(IMAGES_PER_PAGE):
with ui.Frame(height=80):
search_image_widgets[i] = ui.Button("",
clicked_fn=lambda idx=i:
asyncio.ensure_future(
on_click_search_image(idx)),
style=search_button_styles[i], enabled=False)
with ui.Frame(height=80, width=10):
searchRightArrow = ui.Button(">", clicked_fn=on_click_right_arrow_search, enabled=False,
style=arrowStyle)
ui.Spacer(height=10)
with ui.HStack(height=20):
ui.Spacer(width=5)
with ui.Frame(width=85):
ui.Label("Keywords:")
searchInput = ui.StringField(enabled=False)
with ui.Frame(width=5):
ui.Label("")
ui.Spacer(height=5)
with ui.VStack():
with ui.Frame(height=20):
searchButton = ui.Button("Search", clicked_fn=on_click_search, enabled=False)
with ui.Frame(height=20):
clearButton = ui.Button("Clear", clicked_fn=on_reset_search, enabled=False)
disabledStateCover = ui.Rectangle(style={"background_color": cl("#323434A0")}, height=500)
def on_shutdown(self):
# Clear all temporary download files
folder_path = os.path.join(os.path.dirname(__file__), "temp_files")
if os.path.exists(folder_path):
file_list = os.listdir(folder_path)
for file_name in file_list:
file_path = os.path.join(folder_path, file_name)
if os.path.isfile(file_path):
os.remove(file_path)
print("[echo3D] echo3D shutdown")
| 26,601 | Python | 49.670476 | 120 | 0.477012 |
echo3Dco/NVIDIAOmniverse-echo3D-extension/exts/echo3d.search/docs/README.md | # echo3D Connector [echo3d.search]
Manage and search 3D assets in your Omniverse experiences with the echo3D Connector.
echo3D is a cloud platform for 3D asset management that provides tools and server-side infrastructure to help developers & companies manage and deploy 3D/AR/VR assets.
echo3D offers a 3D-first content management system (CMS) and delivery network (CDN) that enables developers to build a 3D/AR/VR app backend in minutes and allows content creators to easily manage and publish 3D content to their Omniverse experience without involving development teams.
### Connecting an echo3D Project
To begin, copy your echo3D API Key and Secret Key (if enabled) into the corresponding boxes in the Omniverse Extension.
The API Key can be found in the header of the echo3D console, and the Secret Key can be found on the Security Tab of the Settings Page of the console.
### Loading Assets
Simply click any of your project assets to add them to the Omniverse Viewer.
Additionally, you can search for publicly available assets by entering a keyword into the search bar. Note that clicking on them and importing them into the Omniverse Viewer will also automatically upload the asset to your echo3D project.
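For reference, the search box is backed by a plain HTTPS POST to the echo3D `/search` endpoint, as done in `extension.py`. Below is a minimal sketch of that request outside Omniverse; the keyword `chair` is an arbitrary example, and you must substitute your own API and security keys:

```python
import json
import ssl
import urllib.parse
import urllib.request

import certifi

data = urllib.parse.urlencode({
    "key": "YOUR_API_KEY",          # from the echo3D console header
    "secKey": "YOUR_SECURITY_KEY",  # from the Security tab, if enabled
    "keywords": "chair",
    "include2Dcontent": "false",
}).encode("utf-8")

request = urllib.request.Request("https://api.echo3d.com/search", data=data)
context = ssl.create_default_context(cafile=certifi.where())
with urllib.request.urlopen(request, context=context) as response:
    results = json.loads(response.read().decode("utf-8"))

# Entries that include a "glb_location_url" can be downloaded and referenced into the stage.
print(len(results), "results")
```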
### Any other questions?
- Reach out to [email protected]
- or join at https://go.echo3d.co/join
### License
This asset is governed by the license agreement at echo3D.com/terms.
### Preview | 1,411 | Markdown | 57.833331 | 285 | 0.796598 |
ngzhili/SynTable/visualize_annotations.py | """ Visualises SynTable generated annotations: """
# Run python ./visualize_annotations.py --dataset './sample_data' --ann_json './sample_data/annotation_final.json'
import json
import cv2
import numpy as np
import os, shutil
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from matplotlib import pyplot as plt
from PIL import Image
import networkx as nx
import argparse
import pycocotools.mask as mask_util
from matplotlib.colors import ListedColormap
import seaborn as sns
import matplotlib.patches as mpatches
# visualize annotations
def apply_mask(image, mask):
# Convert to numpy arrays
image = np.array(image)
mask = np.array(mask)
# Convert grayscale image to RGB
mask = np.stack((mask,)*3, axis=-1)
# Multiply arrays
rgb_result= image*mask
# First create the image with alpha channel
rgba = cv2.cvtColor(rgb_result, cv2.COLOR_RGB2RGBA)
# Then assign the mask to the last channel of the image
# rgba[:, :, 3] = alpha_data
# Make image transparent white anywhere it is transparent
rgba[rgba[...,-1]==0] = [255,255,255,0]
return rgba
def compute_occluded_masks(mask1, mask2):
    """Computes the IoU and intersection (occlusion) mask between two binary masks.

    mask1, mask2: [Height, Width] binary masks
    Returns (iou, intersection_mask).
    """
    mask1_area = np.count_nonzero(mask1)
    mask2_area = np.count_nonzero(mask2)
    intersection_mask = np.logical_and(mask1, mask2)
    intersection = np.count_nonzero(intersection_mask)
    iou = intersection / (mask1_area + mask2_area - intersection)
    return iou, intersection_mask.astype(float)
def convert_png(image):
image = Image.fromarray(np.uint8(image))
image = image.convert('RGBA')
# Transparency
newImage = []
for item in image.getdata():
if item[:3] == (0, 0, 0):
newImage.append((0, 0, 0, 0))
else:
newImage.append(item)
image.putdata(newImage)
return image
def rle2mask(mask_rle, shape=(480,640)):
'''
mask_rle: run-length as string formated (start length)
shape: (width,height) of array to return
Returns numpy array, 1 - mask, 0 - background
'''
s = mask_rle.split()
starts, lengths = [np.asarray(x, dtype=int) for x in (s[0:][::2], s[1:][::2])]
starts -= 1
ends = starts + lengths
img = np.zeros(shape[0]*shape[1], dtype=np.uint8)
for lo, hi in zip(starts, ends):
img[lo:hi] = 1
return img.reshape(shape).T
def segmToRLE(segm, img_size):
    h, w = img_size
    if type(segm) == list:
        # polygon -- a single object might consist of multiple parts
        # we merge all parts into one mask rle code
        rles = mask_util.frPyObjects(segm, h, w)
        rle = mask_util.merge(rles)
    elif type(segm["counts"]) == list:
        # uncompressed RLE
        rle = mask_util.frPyObjects(segm, h, w)
    else:
        # rle
        rle = segm
    return rle
# Convert 1-channel groundtruth data to visualization image data
def normalize_greyscale_image(image_data):
image_data = np.reciprocal(image_data)
image_data[image_data == 0.0] = 1e-5
image_data = np.clip(image_data, 0, 255)
image_data -= np.min(image_data)
if np.max(image_data) > 0:
image_data /= np.max(image_data)
image_data *= 255
image_data = image_data.astype(np.uint8)
return image_data
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Visualise Annotations')
parser.add_argument('--dataset', type=str,
help='dataset to visualise')
parser.add_argument('--ann_json', type=str,
help='dataset annotation to visualise')
args = parser.parse_args()
data_dir = args.dataset
ann_json = args.ann_json
# Opening JSON file
f = open(ann_json)
# returns JSON object as a dictionary
data = json.load(f)
f.close()
referenceDict = {}
for i, ann in enumerate(data['annotations']):
image_id = ann["image_id"]
ann_id = ann["id"]
# print(ann_id)
if image_id not in referenceDict:
referenceDict.update({image_id:{"rgb":None,"depth":None, "amodal":[], "visible":[],
"occluded":[],"occluded_rate":[],"category_id":[],"object_name":[]}})
# print(referenceDict)
referenceDict[image_id].update({"rgb":data["images"][i]["file_name"]})
referenceDict[image_id].update({"depth":data["images"][i]["depth_file_name"]})
# referenceDict[image_id].update({"occlusion_order":data["images"][i]["occlusion_order_file_name"]})
referenceDict[image_id]["amodal"].append(ann["segmentation"])
referenceDict[image_id]["visible"].append(ann["visible_mask"])
referenceDict[image_id]["occluded"].append(ann["occluded_mask"])
referenceDict[image_id]["occluded_rate"].append(ann["occluded_rate"])
referenceDict[image_id]["category_id"].append(ann["category_id"])
# referenceDict[image_id]["object_name"].append(ann["object_name"])
else:
# if not (referenceDict[image_id]["rgb"] or referenceDict[image_id]["depth"]):
# referenceDict[image_id].update({"rgb":data["images"][i]["file_name"]})
# referenceDict[image_id].update({"depth":data["images"][i]["depth_file_name"]})
referenceDict[image_id]["amodal"].append(ann["segmentation"])
referenceDict[image_id]["visible"].append(ann["visible_mask"])
referenceDict[image_id]["occluded"].append(ann["occluded_mask"])
referenceDict[image_id]["occluded_rate"].append(ann["occluded_rate"])
referenceDict[image_id]["category_id"].append(ann["category_id"])
# referenceDict[image_id]["object_name"].append(ann["object_name"])
# Create visualise directory
vis_dir = os.path.join(data_dir,"visualise_dataset")
if os.path.exists(vis_dir): # remove contents if exist
for filename in os.listdir(vis_dir):
file_path = os.path.join(vis_dir, filename)
try:
if os.path.isfile(file_path) or os.path.islink(file_path):
os.unlink(file_path)
elif os.path.isdir(file_path):
shutil.rmtree(file_path)
except Exception as e:
print('Failed to delete %s. Reason: %s' % (file_path, e))
else:
os.makedirs(vis_dir)
# query_img_id_list = [1,50,100]
query_img_id_list = [i for i in range(1,len(referenceDict)+1)] # visualise all images
for id in query_img_id_list:
if id in referenceDict:
ann_dic = referenceDict[id]
vis_dir_img = os.path.join(vis_dir,str(id))
if not os.path.exists(vis_dir_img):
os.makedirs(vis_dir_img)
# visualise rgb image
rgb_path = os.path.join(data_dir,ann_dic["rgb"])
rgb_img = cv2.imread(rgb_path, cv2.IMREAD_UNCHANGED)
# visualise depth image
depth_path = os.path.join(data_dir,ann_dic["depth"])
from PIL import Image
im = Image.open(depth_path)
im = np.array(im)
depth_img = Image.fromarray(normalize_greyscale_image(im.astype("float32")))
file = os.path.join(vis_dir_img,f"depth_{id}.png")
depth_img.save(file, "PNG")
# visualise occlusion masks on rgb image
occ_img_list = ann_dic["occluded"]
if len(occ_img_list) > 0:
occ_img = rgb_img.copy()
overlay = rgb_img.copy()
combined_mask = np.zeros((occ_img.shape[0],occ_img.shape[1]))
# iterate through all occlusion masks
for i, occMask in enumerate(occ_img_list):
occluded_mask = mask_util.decode(occMask)
if ann_dic["category_id"][i] == 0:
occ_img_back = rgb_img.copy()
overlay_back = rgb_img.copy()
occluded_mask = occluded_mask.astype(bool) # boolean mask
overlay_back[occluded_mask] = [0, 0, 255]
# print(np.unique(occluded_mask))
alpha =0.5
occ_img_back = cv2.addWeighted(overlay_back, alpha, occ_img_back, 1 - alpha, 0, occ_img_back)
occ_save_path = f"{vis_dir_img}/rgb_occlusion_{id}_background.png"
cv2.imwrite(occ_save_path, occ_img_back)
else:
combined_mask += occluded_mask
combined_mask = combined_mask.astype(bool) # boolean mask
overlay[combined_mask] = [0, 0, 255]
alpha =0.5
occ_img = cv2.addWeighted(overlay, alpha, occ_img, 1 - alpha, 0, occ_img)
occ_save_path = f"{vis_dir_img}/rgb_occlusion_{id}.png"
cv2.imwrite(occ_save_path, occ_img)
combined_mask = combined_mask.astype('uint8')
occ_save_path = f"{vis_dir_img}/occlusion_mask_{id}.png"
cv2.imwrite(occ_save_path, combined_mask*255)
cols = 4
rows = len(occ_img_list) // cols + 1
from matplotlib import pyplot as plt
fig = plt.figure(figsize=(20,10))
for index, occMask in enumerate(occ_img_list):
occ_mask = mask_util.decode(occMask)
plt.subplot(rows,cols, index+1)
plt.axis('off')
# plt.title(ann_dic["object_name"][index])
plt.imshow(occ_mask)
plt.tight_layout()
plt.suptitle(f"Occlusion Masks for {id}.png")
# plt.show()
plt.savefig(f'{vis_dir_img}/occ_masks_{id}.png')
plt.close()
# visualise visible masks on rgb image
vis_img_list = ann_dic["visible"]
if len(vis_img_list) > 0:
vis_img = rgb_img.copy()
overlay = rgb_img.copy()
# iterate through all occlusion masks
for i, visMask in enumerate(vis_img_list):
visible_mask = mask_util.decode(visMask)
if ann_dic["category_id"][i] == 0:
vis_img_back = rgb_img.copy()
overlay_back = rgb_img.copy()
visible_mask = visible_mask.astype(bool) # boolean mask
overlay_back[visible_mask] = [0, 0, 255]
alpha =0.5
vis_img_back = cv2.addWeighted(overlay_back, alpha, vis_img_back, 1 - alpha, 0, vis_img_back)
vis_save_path = f"{vis_dir_img}/rgb_visible_mask_{id}_background.png"
cv2.imwrite(vis_save_path, vis_img_back)
else:
vis_combined_mask = visible_mask.astype(bool) # boolean mask
colour = list(np.random.choice(range(256), size=3))
overlay[vis_combined_mask] = colour
alpha = 0.5
vis_img = cv2.addWeighted(overlay, alpha, vis_img, 1 - alpha, 0, vis_img)
vis_save_path = f"{vis_dir_img}/rgb_visible_mask_{id}.png"
cv2.imwrite(vis_save_path,vis_img)
cols = 4
rows = len(vis_img_list) // cols + 1
# print(len(amodal_img_list))
# print(cols,rows)
from matplotlib import pyplot as plt
fig = plt.figure(figsize=(20,10))
for index, visMask in enumerate(vis_img_list):
vis_mask = mask_util.decode(visMask)
plt.subplot(rows,cols, index+1)
plt.axis('off')
# plt.title(ann_dic["object_name"][index])
plt.imshow(vis_mask)
plt.tight_layout()
plt.suptitle(f"Visible Masks for {id}.png")
# plt.show()
plt.savefig(f'{vis_dir_img}/vis_masks_{id}.png')
plt.close()
# visualise amodal masks
# img_dir_path = f"{output_dir}/visualize_occlusion_masks/"
# img_list = sorted(os.listdir(img_dir_path), key=lambda x: float(x[4:-4]))
amodal_img_list = ann_dic["amodal"]
if len(amodal_img_list) > 0:
cols = 4
rows = len(amodal_img_list) // cols + 1
# print(len(amodal_img_list))
# print(cols,rows)
from matplotlib import pyplot as plt
fig = plt.figure(figsize=(20,10))
for index, amoMask in enumerate(amodal_img_list):
amodal_mask = mask_util.decode(amoMask)
plt.subplot(rows,cols, index+1)
plt.axis('off')
# plt.title(ann_dic["object_name"][index])
plt.imshow(amodal_mask)
plt.tight_layout()
plt.suptitle(f"Amodal Masks for {id}.png")
# plt.show()
plt.savefig(f'{vis_dir_img}/amodal_masks_{id}.png')
plt.close()
# get rgb_path
rgb_path = os.path.join(data_dir,ann_dic["rgb"])
rgb_img = cv2.imread(rgb_path, cv2.IMREAD_UNCHANGED)
occ_order = False
if occ_order:
# get occlusion order adjacency matrix
npy_path = os.path.join(data_dir,ann_dic["occlusion_order"])
occlusion_order_adjacency_matrix = np.load(npy_path)
print(f"Calculating Directed Graph for Scene:{id}")
# vis_img = cv2.imread(f"{vis_dir}/visuals/{scene_index}.png", cv2.IMREAD_UNCHANGED)
rows = cols = len(ann_dic["visible"]) # number of objects
obj_rgb_mask_list = []
for i in range(1,len(ann_dic["visible"])+1):
visMask = ann_dic["visible"][i-1]
visible_mask = mask_util.decode(visMask)
rgb_crop = apply_mask(rgb_img, visible_mask)
rgb_crop = convert_png(rgb_crop)
def bbox(im):
a = np.array(im)[:,:,:3] # keep RGB only
m = np.any(a != [0,0,0], axis=2)
coords = np.argwhere(m)
y0, x0, y1, x1 = *np.min(coords, axis=0), *np.max(coords, axis=0)
return (x0, y0, x1+1, y1+1)
# print(bbox(rgb_crop))
obj_rgb_mask = rgb_crop.crop(bbox(rgb_crop))
obj_rgb_mask_list.append(obj_rgb_mask) # add obj_rgb_mask
# get contours (presumably just one around the nonzero pixels) # for instance segmentation mask
# contours = cv2.findContours(visible_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# contours = contours[0] if len(contours) == 2 else contours[1]
# for cntr in contours:
# x,y,w,h = cv2.boundingRect(cntr)
# cv2.putText(img=vis_img, text=str(i), org=(x+w//2, y+h//2), fontFace=cv2.FONT_HERSHEY_TRIPLEX, fontScale=0.5, color=(0, 0, 0),thickness=1)
""" === Generate Directed Graph === """
# print("Occlusion Order Adjacency Matrix:\n",occlusion_order_adjacency_matrix)
# f, (ax1,ax2) = plt.subplots(1,2)
# show_graph_with_labels(overlap_adjacency_matrix,ax1)
labels = [i for i in range(1,len(occlusion_order_adjacency_matrix)+1)]
labels_dict = {}
for i in range(len(occlusion_order_adjacency_matrix)):
labels_dict.update({i:labels[i]})
rows, cols = np.where(occlusion_order_adjacency_matrix == 1)
rows += 1
cols += 1
edges = zip(rows.tolist(), cols.tolist())
nodes_list = [i for i in range(1, len(occlusion_order_adjacency_matrix)+1)]
# Initialise directed graph G
G = nx.DiGraph()
G.add_nodes_from(nodes_list)
G.add_edges_from(edges)
# pos=nx.spring_layout(G,k=1/sqrt(N))
is_planar, P = nx.check_planarity(G)
if is_planar:
pos=nx.planar_layout(G)
else:
# pos=nx.draw(G)
N = len(G.nodes())
pos=nx.spring_layout(G,k=3/sqrt(N))
print("Nodes:",G.nodes())
print("Edges:",G.edges())
# print(G.in_edges())
# print(G.out_edges())
# get start nodes
start_nodes = [node for (node,degree) in G.in_degree if degree == 0]
print("start_nodes:",start_nodes)
# get end nodes
end_nodes = [node for (node,degree) in G.out_degree if degree == 0]
for node in end_nodes:
if node in start_nodes:
end_nodes.remove(node)
print("end_nodes:",end_nodes)
# get intermediate notes
intermediate_nodes = [i for i in nodes_list if i not in (start_nodes) and i not in (end_nodes)]
print("intermediate_nodes:",intermediate_nodes)
print("(Degree of clustering) Number of Weakly Connected Components:",nx.number_weakly_connected_components(G))
# largest_wcc = max(nx.weakly_connected_components(G), key=len)
# largest_wcc_size = len(largest_wcc)
# print("(Scene Complexity) Sizes of Weakly Connected Component:",largest_wcc_size)
wcc_list = list(nx.weakly_connected_components(G))
wcc_len = []
for component in wcc_list:
wcc_len.append(len(component))
print("(Scene Complexity/Degree of overlapping regions) Sizes of Weakly Connected Components:",wcc_len)
dag_longest_path_length = nx.dag_longest_path_length(G)
print("(Minimum no. of depth layers to order all regions in WCC) Longest directed path of Weakly Connected Components:",dag_longest_path_length)
# nx.draw(gr, node_size=500, with_labels=True)
node_color_list = []
node_size_list = []
for node in nodes_list:
if node in start_nodes:
node_color_list.append('green')
node_size_list.append(500)
elif node in end_nodes:
node_color_list.append('yellow')
node_size_list.append(300)
else:
node_color_list.append('#1f78b4')
node_size_list.append(300)
options = {
'node_color': node_color_list,
'node_size': node_size_list,
'width': 1,
'arrowstyle': '-|>',
'arrowsize': 10
}
fig1 = plt.figure(figsize=(20, 6), dpi=80)
plt.subplot(1,3,1)
# nx.draw_planar(G, pos, with_labels = True, arrows=True, **options)
nx.draw_networkx(G,pos, with_labels= True, arrows=True, **options)
dag = nx.is_directed_acyclic_graph(G)
print(f"Is Directed Acyclic Graph (DAG)?: {dag}")
colors = ["green", "#1f78b4", "yellow"]
texts = ["Top Layer", "Intermediate Layers", "Bottom Layer"]
patches = [ plt.plot([],[], marker="o", ms=10, ls="", mec=None, color=colors[i],
label="{:s}".format(texts[i]) )[0] for i in range(len(texts)) ]
plt.legend(handles=patches, bbox_to_anchor=(0.5, -0.05),
loc='center', ncol=3, fancybox=True, shadow=True,
facecolor="w", numpoints=1, fontsize=10)
plt.title("Directed Occlusion Order Graph")
# plt.subplot(1,2,2)
# plt.imshow(vis_img)
# plt.imshow(vis_img)
# plt.title(f"Visible Masks Scene {scene_index}")
plt.axis('off')
# plt.show()
# plt.savefig(f"{output_dir}/vis_img_{i}.png")
# cv2.imwrite(f"{output_dir}/scene_{scene_index}.png", vis_img)
# plt.show()
# fig2 = plt.figure(figsize=(16, 6), dpi=80)
plt.subplot(1,3,2)
options = {
'node_color': "white",
# 'node_size': node_size_list,
'width': 1,
'arrowstyle': '-|>',
'arrowsize': 10
}
# nx.draw_networkx(G, arrows=True, **options)
# nx.draw(G, with_labels = True,arrows=True, connectionstyle='arc3, rad = 0.1')
# nx.draw_spring(G, with_labels = True,arrows=True, connectionstyle='arc3, rad = 0.5')
N = len(G.nodes())
from math import sqrt
if is_planar:
pos=nx.planar_layout(G)
else:
# pos=nx.draw(G)
N = len(G.nodes())
pos=nx.spring_layout(G,k=3/sqrt(N))
nx.draw_networkx(G,pos, with_labels= False, arrows=True, **options)
plt.title("Visualisation of Occlusion Order Graph")
# draw with images on nodes
# nx.draw_networkx(G,pos,width=3,edge_color="r",alpha=0.6)
ax=plt.gca()
fig=plt.gcf()
trans = ax.transData.transform
trans2 = fig.transFigure.inverted().transform
imsize = 0.05 # this is the image size
node_size_list = []
for n in G.nodes():
(x,y) = pos[n]
xx,yy = trans((x,y)) # figure coordinates
xa,ya = trans2((xx,yy)) # axes coordinates
# a = plt.axes([xa-imsize/2.0,ya-imsize/2.0, imsize, imsize ])
a = plt.axes([xa-imsize/2.0,ya-imsize/2.0, imsize, imsize ])
a.imshow(obj_rgb_mask_list[n-1])
a.set_aspect('equal')
a.axis('off')
# fig.patch.set_visible(False)
ax.axis('off')
plt.subplot(1,3,3)
plt.imshow(rgb_img)
plt.axis('off')
plt.title(f"RGB Scene {id}")
# plt.tight_layout()
# plt.show()
plt.savefig(f'{vis_dir_img}/occlusion_order_{id}.png')
plt.close()
m = occlusion_order_adjacency_matrix.astype(int)
unique_chars, matrix = np.unique(m, return_inverse=True)
color_dict = {1: 'darkred', 0: 'white'}
plt.figure(figsize=(20,20))
sns.set(font_scale=2)
ax1 = sns.heatmap(matrix.reshape(m.shape), annot=m, annot_kws={'fontsize': 20}, fmt='',
linecolor='dodgerblue', lw=5, square=True, clip_on=False,
cmap=ListedColormap([color_dict[char] for char in unique_chars]),
xticklabels=np.arange(m.shape[1]) + 1, yticklabels=np.arange(m.shape[0]) + 1, cbar=False)
ax1.tick_params(labelrotation=0)
ax1.tick_params(axis='both', which='major', labelsize=20, labelbottom = False, bottom=False, top = False, labeltop=True)
plt.xlabel("Occludee")
ax1.xaxis.set_ticks_position('top')
ax1.xaxis.set_label_position('top')
plt.ylabel("Occluder")
# plt.show()
plt.savefig(f'{vis_dir_img}/occlusion_order_adjacency_matrix_{id}.png')
plt.close()
| 25,100 | Python | 44.227027 | 160 | 0.5149 |
ngzhili/SynTable/README.md | # SynTable - A Synthetic Data Generation Pipeline for Cluttered Tabletop Scenes
This repository contains the official implementation of the paper **"SynTable: A Synthetic Data Generation Pipeline for Unseen Object Amodal Instance Segmentation of Cluttered Tabletop Scenes"**.
Zhili Ng*, Haozhe Wang*, Zhengshen Zhang*, Francis Eng Hock Tay, Marcelo H. Ang Jr.
*equal contributions
[[arXiv]](https://arxiv.org/abs/2307.07333)
[[Website]](https://sites.google.com/view/syntable/home)
[[Dataset]](https://doi.org/10.5281/zenodo.10565517)
[[Demo Video]](https://youtu.be/zHM8H58Kn3E)
[[Modified UOAIS-v2]](https://github.com/ngzhili/uoais-v2?tab=readme-ov-file)
[](https://doi.org/10.5281/zenodo.10565517)

SynTable is a robust custom data generation pipeline that creates photorealistic synthetic datasets of cluttered tabletop scenes. For each scene, it includes metadata such as:
- [x] RGB image of scene
- [x] depth image of Scene
- [x] scene instance segmentation masks
- [x] object amodal (visible + invisible) rgb
- [x] object amodal (visible + invisible) masks
- [x] object modal (visible) masks
- [x] object occlusion (invisible) masks
- [x] object occlusion rate
- [x] object visible bounding box
- [x] tabletop visible masks
- [x] background visible mask (background excludes tabletop and objects)
- [x] occlusion ordering adjacency matrix (OOAM) of objects on the tabletop (see the loading sketch below)
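A minimal sketch of reading one OOAM, assuming the file `data/mono/occlusion order/0_0.npy` from the folder layout described later in this README; an entry (i, j) equal to 1 means object i occludes object j, matching the occluder-row / occludee-column convention used by `visualize_annotations.py`:

```python
import numpy as np

# Hypothetical path following the sceneNum_viewNum naming convention (scene 0, view 0)
ooam = np.load("data/mono/occlusion order/0_0.npy")

num_objects = ooam.shape[0]
for i in range(num_objects):
    for j in range(num_objects):
        if ooam[i, j] == 1:
            # Row index = occluder, column index = occludee (0-based indices here)
            print(f"object {i} occludes object {j}")
```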
## **Installation**
1. Install [NVIDIA Isaac Sim 2022.1.1 version](https://developer.nvidia.com/isaac-sim) on Omniverse
2. Change Directory to isaac_sim-2022.1.1 directory
``` bash
cd '/home/<username>/.local/share/ov/pkg/isaac_sim-2022.1.1/tools'
```
3. Clone the repo
``` bash
git clone https://github.com/ngzhili/SynTable.git
```
4. Install Dependencies into isaac sim's python
- Get the Isaac Sim source code directory path in the command line.
``` bash
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
echo $SCRIPT_DIR
```
- Get isaac sim's python path
``` bash
python_exe=${PYTHONEXE:-"${SCRIPT_DIR}/kit/python/bin/python3"}
echo $python_exe
```
- Run isaac sim's python
``` bash
$python_exe
```
- while running isaac sim's python in bash, install pycocotools and opencv-python into isaac sim's python
``` bash
import pip
package_names=['pycocotools', 'opencv-python'] #packages to install
pip.main(['install'] + package_names + ['--upgrade'])
```
5. Copy the mount_dir folder to your home directory (anywhere outside of isaac sim source code)
``` bash
cp -r SynTable/mount_dir /home/<username>
```
## **Adding object models to nucleus**
1. You can download the .USD object models to be used for generating the tabletop datasets [here](https://mega.nz/folder/1nJAwQxA#1P3iUtqENKCS66uQYXk1vg).
2. Upload the downloaded syntable_nucleus folder into Omniverse Nucleus into /Users directory.
3. Ensure that the file paths in the config file are correct before running the generate dataset commands.
## **Generate Synthetic Dataset**
Note: Before generating the synthetic dataset, please ensure that you have uploaded all object models to the Isaac Sim Nucleus and that their paths in the config file are correct.
1. Change Directory to Isaac SIM source code
``` bash
cd /home/<username>/.local/share/ov/pkg/isaac_sim-2022.1.1
```
2. Run Syntable Pipeline (non-headless)
``` bash
./python.sh SynTable/syntable_composer/src/main1.py --input */parameters/train_config_syntable1.yaml --output */dataset/train --mount '/home/<username>/mount_dir' --num_scenes 3 --num_views 3 --overwrite --save_segmentation_data
```
### **Types of Flags**
| Flag | Description |
| :--- | :----: |
| ```--input``` | Path to input parameter file. |
| ```--mount``` | Path to mount symbolized in parameter files via '*'. |
| ```--headless``` | Will not launch Isaac SIM window. |
| ```--nap``` | Will nap Isaac SIM after the first scene is generated. |
| ```--overwrite``` | Overwrites dataset in output directory. |
| ```--output``` | Output directory. Overrides 'output_dir' param. |
| ```--num-scenes``` | Number of scenes in dataset. Overrides 'num_scenes' param. |
| ```--num-views``` | Number of views to generate per scene. Overrides 'num_views' param. |
| ```--save-segmentation-data``` | Saves visualisation of annotations into output directory. False by default. |
## Generated dataset
- The SynTable data generation pipeline generates the dataset in COCO (Common Objects in Context) format (see the loading sketch below).
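A minimal sketch of loading the generated annotation file and decoding the masks of one annotation, assuming the dataset root `SynTable-Sim` and the annotation file `train.json` shown in the folder structure below; the keys `visible_mask`, `occluded_mask` and `occluded_rate` are SynTable's additions to the standard COCO schema, as read by `visualize_annotations.py`:

```python
import json

import pycocotools.mask as mask_util

with open("SynTable-Sim/train.json") as f:
    coco = json.load(f)

ann = coco["annotations"][0]
amodal = mask_util.decode(ann["segmentation"])     # amodal (visible + invisible) mask
visible = mask_util.decode(ann["visible_mask"])    # modal (visible) mask
occluded = mask_util.decode(ann["occluded_mask"])  # occlusion (invisible) mask
print("occlusion rate:", ann["occluded_rate"])
```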
## **Folder Structure of Generated Synthetic Dataset**
.
├── ...
├── SynTable-Sim # Generated dataset
│ ├── data # folder to store RGB, Depth, OOAM
│ │ └── mono
│ │ ├── rgb
│ │ │ ├── 0_0.png # file naming convention follows sceneNum_viewNum.png
│ │ │ └── 0_1.png
│ │ ├── depth
│ │ │ ├── 0_0.png
│ │ │ └── 0_1.png
│ │ └── occlusion order
│ │ ├── 0_0.npy
│ │ └── 0_1.npy
│ ├── parameters # parameters used for generation of annotations
│ └── train.json # Annotation COCO.JSON
└── ...
## **Visualise Annotations**
1. Create python venv and install dependencies
```
python3.8 -m venv env
source env/bin/activate
pip install -r requirements.txt
```
2. Visualise sample annotations (creates a visualise_dataset directory in dataset directory, then saves annotation visualisations there)
```
python ./visualize_annotations.py --dataset './sample_data' --ann_json './sample_data/annotation_final.json'
```
## **Sample Visualisation of Annotations**


## **References**
We have heavily modified the Python SDK source code from NVIDA Isaac Sim's Replicator Composer.
## **Citation**
If you find our work useful for your research, please consider citing the following BibTeX entry:
```
@misc{ng2023syntable,
title={SynTable: A Synthetic Data Generation Pipeline for Unseen Object Amodal Instance Segmentation of Cluttered Tabletop Scenes},
author={Zhili Ng and Haozhe Wang and Zhengshen Zhang and Francis Tay Eng Hock and Marcelo H. Ang Jr.},
year={2023},
eprint={2307.07333},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
| 6,703 | Markdown | 40.9 | 232 | 0.654781 |
ngzhili/SynTable/syntable_composer/src/main.py | import argparse
import os
import shutil
import signal
import sys
from omni.isaac.kit import SimulationApp
config1 = {"headless": False}
kit = SimulationApp(config1)
from distributions import Distribution
from input import Parser
from output import Metrics, Logger, OutputManager
from sampling import Sampler
from scene import SceneManager
class Composer:
def __init__(self, params, index, output_dir):
""" Construct Composer. Start simulator and prepare for generation. """
self.params = params
self.index = index
self.output_dir = output_dir
self.sample = Sampler().sample
# Set-up output directories
self.setup_data_output()
# Start Simulator
Logger.content_log_path = self.content_log_path
Logger.start_log_entry("start-up")
Logger.print("Isaac Sim starting up...")
config = {"headless": self.sample("headless")}
if self.sample("path_tracing"):
config["renderer"] = "PathTracing"
config["samples_per_pixel_per_frame"] = self.sample("samples_per_pixel_per_frame")
else:
config["renderer"] = "RayTracedLighting"
#self.sim_app = SimulationApp(config)
self.sim_app = kit
from omni.isaac.core import SimulationContext
self.scene_units_in_meters = self.sample("scene_units_in_meters")
self.sim_context = SimulationContext(physics_dt=1.0 / 60.0, stage_units_in_meters=self.scene_units_in_meters)
# need to initialize physics getting any articulation..etc
self.sim_context.initialize_physics()
self.sim_context.play()
self.num_scenes = self.sample("num_scenes")
self.sequential = self.sample("sequential")
self.scene_manager = SceneManager(self.sim_app, self.sim_context)
self.output_manager = OutputManager(
self.sim_app, self.sim_context, self.scene_manager, self.output_data_dir, self.scene_units_in_meters
)
# Set-up exit message
signal.signal(signal.SIGINT, self.handle_exit)
Logger.finish_log_entry()
def handle_exit(self, *args, **kwargs):
print("exiting dataset generation...")
self.sim_context.clear_instance()
self.sim_app.close()
sys.exit()
def generate_scene(self):
""" Generate 1 dataset scene. Returns captured groundtruth data. """
self.scene_manager.prepare_scene(self.index)
self.scene_manager.populate_scene()
if self.sequential:
sequence_length = self.sample("sequence_step_count")
step_time = self.sample("sequence_step_time")
for step in range(sequence_length):
self.scene_manager.update_scene(step_time=step_time, step_index=step)
groundtruth = self.output_manager.capture_groundtruth(
self.index, step_index=step, sequence_length=sequence_length
)
if step == 0:
Logger.print("stepping through scene...")
else:
self.scene_manager.update_scene()
groundtruth = self.output_manager.capture_groundtruth(self.index)
self.scene_manager.finish_scene()
return groundtruth
def setup_data_output(self):
""" Create output directories and copy input files to output. """
# Overwrite output directory, if needed
if self.params["overwrite"]:
shutil.rmtree(self.output_dir, ignore_errors=True)
# Create output directory
os.makedirs(self.output_dir, exist_ok=True)
# Create output directories, as needed
self.output_data_dir = os.path.join(self.output_dir, "data")
self.parameter_dir = os.path.join(self.output_dir, "parameters")
self.parameter_profiles_dir = os.path.join(self.parameter_dir, "profiles")
self.log_dir = os.path.join(self.output_dir, "log")
self.content_log_path = os.path.join(self.log_dir, "sampling_log.yaml")
os.makedirs(self.output_data_dir, exist_ok=True)
os.makedirs(self.parameter_profiles_dir, exist_ok=True)
os.makedirs(self.log_dir, exist_ok=True)
# Copy input parameters file to output
input_file_name = os.path.basename(self.params["file_path"])
input_file_copy = os.path.join(self.parameter_dir, input_file_name)
shutil.copy(self.params["file_path"], input_file_copy)
# Copy profile parameters file(s) to output
if self.params["profile_files"]:
for profile_file in self.params["profile_files"]:
profile_file_name = os.path.basename(profile_file)
profile_file_copy = os.path.join(self.parameter_profiles_dir, profile_file_name)
shutil.copy(profile_file, profile_file_copy)
def get_output_dir(params):
""" Determine output directory. """
if params["output_dir"].startswith("/"):
output_dir = params["output_dir"]
elif params["output_dir"].startswith("*"):
output_dir = os.path.join(Distribution.mount, params["output_dir"][2:])
else:
output_dir = os.path.join(os.path.dirname(__file__), "..", "datasets", params["output_dir"])
return output_dir
def get_starting_index(params, output_dir):
""" Determine starting index of dataset. """
if params["overwrite"]:
return 0
output_data_dir = os.path.join(output_dir, "data")
if not os.path.exists(output_data_dir):
return 0
def find_min_missing(indices):
if indices:
indices.sort()
for i in range(indices[-1]):
if i not in indices:
return i
return indices[-1]
else:
return -1
camera_dirs = [os.path.join(output_data_dir, sub_dir) for sub_dir in os.listdir(output_data_dir)]
min_indices = []
for camera_dir in camera_dirs:
data_dirs = [os.path.join(camera_dir, sub_dir) for sub_dir in os.listdir(camera_dir)]
for data_dir in data_dirs:
indices = []
for filename in os.listdir(data_dir):
try:
if "_" in filename:
index = int(filename[: filename.rfind("_")])
else:
index = int(filename[: filename.rfind(".")])
indices.append(index)
except:
pass
min_index = find_min_missing(indices)
min_indices.append(min_index)
if min_indices:
smallest_index = min(min_indices)
return smallest_index + 1
else:
return 0
def assert_dataset_complete(params, index):
""" Check if dataset is already complete. """
num_scenes = params["num_scenes"]
if index >= num_scenes:
print(
'Dataset is completed. Number of generated samples {} satisfies "num_scenes" {}.'.format(index, num_scenes)
)
sys.exit()
else:
print("Starting at index ", index)
def define_arguments():
""" Define command line arguments. """
parser = argparse.ArgumentParser()
parser.add_argument("--input", default="parameters/warehouse.yaml", help="Path to input parameter file")
parser.add_argument(
"--visualize-models",
"--visualize_models",
action="store_true",
help="Output visuals of all object models defined in input parameter file, instead of outputting a dataset.",
)
parser.add_argument("--mount", default="/tmp/composer", help="Path to mount symbolized in parameter files via '*'.")
parser.add_argument("--headless", action="store_true", help="Will not launch Isaac SIM window.")
parser.add_argument("--nap", action="store_true", help="Will nap Isaac SIM after the first scene is generated.")
parser.add_argument("--overwrite", action="store_true", help="Overwrites dataset in output directory.")
parser.add_argument("--output", type=str, help="Output directory. Overrides 'output_dir' param.")
parser.add_argument(
"--num-scenes", "--num_scenes", type=int, help="Num scenes in dataset. Overrides 'num_scenes' param."
)
parser.add_argument(
"--nucleus-server", "--nucleus_server", type=str, help="Nucleus Server URL. Overrides 'nucleus_server' param."
)
return parser
if __name__ == "__main__":
# Create argument parser
parser = define_arguments()
args, _ = parser.parse_known_args()
# Parse input parameter file
parser = Parser(args)
params = parser.params
Sampler.params = params
# Determine output directory
output_dir = get_output_dir(params)
# Run Composer in Visualize mode
if args.visualize_models:
from visualize import Visualizer
visuals = Visualizer(parser, params, output_dir)
visuals.visualize_models()
# Handle shutdown
visuals.composer.sim_context.clear_instance()
visuals.composer.sim_app.close()
sys.exit()
# Set verbose mode
Logger.verbose = params["verbose"]
# Get starting index of dataset
index = get_starting_index(params, output_dir)
# Check if dataset is already complete
assert_dataset_complete(params, index)
# Initialize composer
composer = Composer(params, index, output_dir)
metrics = Metrics(composer.log_dir, composer.content_log_path)
# Generate dataset
while composer.index < params["num_scenes"]:
composer.generate_scene()
composer.index += 1
# Handle shutdown
composer.output_manager.data_writer.stop_threads()
composer.sim_context.clear_instance()
composer.sim_app.close()
# Output performance metrics
metrics.output_performance_metrics()
| 9,745 | Python | 33.807143 | 120 | 0.626783 |
ngzhili/SynTable/syntable_composer/src/helper_functions.py | """
SynTable Replicator Composer Helper Functions
"""
import numpy as np
import pycocotools.mask as mask_util
import cv2
def compute_occluded_masks(mask1, mask2):
"""Computes occlusions between two sets of masks.
masks1, masks2: [Height, Width, instances]
"""
# intersections and union
mask1_area = np.count_nonzero(mask1)
mask2_area = np.count_nonzero(mask2)
intersection_mask = np.logical_and(mask1, mask2)
intersection = np.count_nonzero(intersection_mask)
iou = intersection/(mask1_area+mask2_area-intersection)
return iou, intersection_mask.astype(float)
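# Worked example (added for clarity): two masks covering 50 pixels each that overlap on 20
# pixels give iou = 20 / (50 + 50 - 20) = 0.25, and intersection_mask marks exactly those
# 20 overlapping pixels.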
class GenericMask:
"""
Attributes:
polygons (list[ndarray]): polygons for this mask.
Each ndarray has format [x, y, x, y, ...]
mask (ndarray): a binary mask
"""
def __init__(self, mask_or_polygons, height, width):
self._mask = self._polygons = self._has_holes = None
self.height = height
self.width = width
m = mask_or_polygons
if isinstance(m, dict):
# RLEs
assert "counts" in m and "size" in m
if isinstance(m["counts"], list): # uncompressed RLEs
h, w = m["size"]
assert h == height and w == width
m = mask_util.frPyObjects(m, h, w)
self._mask = mask_util.decode(m)[:, :]
return
if isinstance(m, list): # list[ndarray]
self._polygons = [np.asarray(x).reshape(-1) for x in m]
return
if isinstance(m, np.ndarray): # assumed to be a binary mask
assert m.shape[1] != 2, m.shape
assert m.shape == (height, width), m.shape
self._mask = m.astype("uint8")
return
raise ValueError("GenericMask cannot handle object {} of type '{}'".format(m, type(m)))
@property
def mask(self):
if self._mask is None:
self._mask = self.polygons_to_mask(self._polygons)
return self._mask
@property
def polygons(self):
if self._polygons is None:
self._polygons, self._has_holes = self.mask_to_polygons(self._mask)
return self._polygons
@property
def has_holes(self):
if self._has_holes is None:
if self._mask is not None:
self._polygons, self._has_holes = self.mask_to_polygons(self._mask)
else:
self._has_holes = False # if original format is polygon, does not have holes
return self._has_holes
def mask_to_polygons(self, mask):
# cv2.RETR_CCOMP flag retrieves all the contours and arranges them to a 2-level
# hierarchy. External contours (boundary) of the object are placed in hierarchy-1.
# Internal contours (holes) are placed in hierarchy-2.
# cv2.CHAIN_APPROX_NONE flag gets vertices of polygons from contours.
mask = np.ascontiguousarray(mask) # some versions of cv2 do not support non-contiguous arrays
res = cv2.findContours(mask.astype("uint8"), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
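# Note (added for clarity): cv2.findContours returns (contours, hierarchy) in OpenCV 4.x
# but (image, contours, hierarchy) in OpenCV 3.x; indexing res[-1] / res[-2] below keeps
# this code compatible with both return signatures.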
hierarchy = res[-1]
if hierarchy is None: # empty mask
return [], False
has_holes = (hierarchy.reshape(-1, 4)[:, 3] >= 0).sum() > 0
res = res[-2]
res = [x.flatten() for x in res]
# These coordinates from OpenCV are integers in range [0, W-1 or H-1].
# We add 0.5 to turn them into real-value coordinate space. A better solution
# would be to first +0.5 and then dilate the returned polygon by 0.5.
res = [x + 0.5 for x in res if len(x) >= 6]
return res, has_holes
def polygons_to_mask(self, polygons):
rle = mask_util.frPyObjects(polygons, self.height, self.width)
rle = mask_util.merge(rle)
return mask_util.decode(rle)[:, :]
def area(self):
return self.mask.sum()
def bbox(self):
try:
p = mask_util.frPyObjects(self.polygons, self.height, self.width)
p = mask_util.merge(p)
bbox = mask_util.toBbox(p)
bbox[2] += bbox[0]
bbox[3] += bbox[1]
except:
print(f"Encountered error while generating bounding boxes from mask polygons: {self.polygons}")
print("self.polygons:",self.polygons)
bbox = np.array([0,0,0,0])
return bbox
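# Hedged usage sketch (illustrative only; `binary_mask`, `height` and `width` are
# hypothetical inputs): a GenericMask can be built from a binary HxW numpy array, a
# COCO-style RLE dict, or a list of flat [x, y, x, y, ...] polygons, e.g.
#   gm = GenericMask(binary_mask, height, width)
#   area, box, polys = gm.area(), gm.bbox(), gm.polygons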
def bbox_from_binary_mask(binary_mask):
""" Returns the smallest bounding box containing all pixels marked "1" in the given image mask.
:param binary_mask: A binary image mask with the shape [H, W].
:return: The bounding box represented as [x, y, width, height]
"""
# Find all columns and rows that contain 1s
rows = np.any(binary_mask, axis=1)
cols = np.any(binary_mask, axis=0)
# Find the min and max col/row index that contain 1s
rmin, rmax = np.where(rows)[0][[0, -1]]
cmin, cmax = np.where(cols)[0][[0, -1]]
# Calc height and width
h = rmax - rmin + 1
w = cmax - cmin + 1
return [int(cmin), int(rmin), int(w), int(h)]
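# --- Hedged usage sketch (added for illustration; not part of the original module). ---
# The toy masks below are made up for the example; the helpers are used exactly as
# defined above.
if __name__ == "__main__":
    _a = np.zeros((4, 4), dtype=np.uint8)
    _a[0:2, :] = 1  # top half of the image
    _b = np.zeros((4, 4), dtype=np.uint8)
    _b[1:3, :] = 1  # middle band, overlapping _a on one row
    iou, inter = compute_occluded_masks(_a, _b)  # iou == 4 / 12
    print("IoU:", iou)
    print("bbox of overlap:", bbox_from_binary_mask(inter))  # -> [0, 1, 4, 1]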
| 5,066 | Python | 36.533333 | 107 | 0.595934 |
ngzhili/SynTable/syntable_composer/src/main1.py | """
SynTable Replicator Composer Main
"""
# import dependencies
import argparse
from ntpath import join
import os
import shutil
import signal
import sys
import numpy as np
import random
import math
import gc
import json
import datetime
import time
import glob
import cv2
from omni.isaac.kit import SimulationApp
from distributions import Distribution
from input.parse1 import Parser
from output import Metrics, Logger
from output.output1 import OutputManager
from sampling.sample1 import Sampler
from scene.scene1 import SceneManager
from helper_functions import compute_occluded_masks
from omni.isaac.kit.utils import set_carb_setting
from scene.light1 import Light
class Composer:
def __init__(self, params, index, output_dir):
""" Construct Composer. Start simulator and prepare for generation. """
self.params = params
self.index = index
self.output_dir = output_dir
self.sample = Sampler().sample
# Set-up output directories
self.setup_data_output()
# Start Simulator
Logger.content_log_path = self.content_log_path
Logger.start_log_entry("start-up")
Logger.print("Isaac Sim starting up...")
config = {"headless": self.sample("headless")}
if self.sample("path_tracing"):
config["renderer"] = "PathTracing"
config["samples_per_pixel_per_frame"] = self.sample("samples_per_pixel_per_frame")
else:
config["renderer"] = "RayTracedLighting"
self.sim_app = SimulationApp(config)
from omni.isaac.core import SimulationContext
self.scene_units_in_meters = self.sample("scene_units_in_meters")
self.sim_context = SimulationContext(physics_dt=1.0 / 60.0,
rendering_dt=1.0 / 60.0,
stage_units_in_meters=self.scene_units_in_meters)
# need to initialize physics getting any articulation..etc
self.sim_context.initialize_physics()
self.sim_context.play()
self.num_scenes = self.sample("num_scenes")
self.sequential = self.sample("sequential")
self.scene_manager = SceneManager(self.sim_app, self.sim_context)
self.output_manager = OutputManager(
self.sim_app, self.sim_context, self.scene_manager, self.output_data_dir, self.scene_units_in_meters
)
# Set-up exit message
signal.signal(signal.SIGINT, self.handle_exit)
Logger.finish_log_entry()
def handle_exit(self, *args, **kwargs):
print("exiting dataset generation...")
self.sim_context.clear_instance()
self.sim_app.close()
sys.exit()
def generate_scene(self, img_index, ann_index, img_list,ann_list,regen_scene):
""" Generate 1 dataset scene. Returns captured groundtruth data. """
amodal = True
self.scene_manager.prepare_scene(self.index)
# reload table into scene
self.scene_manager.reload_table()
kit = self.sim_app
# if generate amodal annotations
if amodal:
roomTableSize = self.scene_manager.roomTableSize
roomTableHeight = roomTableSize[-1]
spawnLowerBoundOffset = 0.2
spawnUpperBoundOffset = 1
# calculate tableBounds to constrain objects' spawn locations to within the tabletop area
x_width = roomTableSize[0] /2
y_length = roomTableSize[1] /2
min_val = (-x_width*0.6, -y_length*0.6, roomTableHeight+spawnLowerBoundOffset)
max_val = (x_width*0.6, y_length*0.6, roomTableHeight+spawnUpperBoundOffset)
tableBounds = [min_val,max_val]
self.scene_manager.populate_scene(tableBounds=tableBounds) # populate the scene once
else:
self.scene_manager.populate_scene()
if self.sequential:
sequence_length = self.sample("sequence_step_count")
step_time = self.sample("sequence_step_time")
for step in range(sequence_length):
self.scene_manager.update_scene(step_time=step_time, step_index=step)
groundtruth = self.output_manager.capture_groundtruth(
self.index, step_index=step, sequence_length=sequence_length
)
if step == 0:
Logger.print("stepping through scene...")
# if generate amodal annotations
elif amodal:
# simulate physical dropping of objects
self.scene_manager.update_scene()
# refresh UI rendering
self.sim_context.render()
# pause simulation
self.sim_context.pause()
# stop all object motion and remove objects not on tabletop
objects = self.scene_manager.objs.copy()
objects_filtered = []
# remove objects outside tabletop regions after simulation
for obj in objects:
obj.coord, quaternion = obj.xform_prim.get_world_pose()
obj.coord = np.array(obj.coord, dtype=np.float32)
# if object is not on tabletop after simulation, remove object
if (abs(obj.coord[0]) > (roomTableSize[0]/2)) \
or (abs(obj.coord[1]) > (roomTableSize[1]/2)) \
or (abs(obj.coord[2]) < roomTableSize[2]):
# remove object by turning off visibility of object
obj.off_prim()
# else object on tabletop, add obj to filtered list
else:
objects_filtered.append(obj)
self.scene_manager.objs = objects_filtered
# if no objects left on tabletop, regenerate scene
if len(self.scene_manager.objs) == 0:
print("No objects found on tabletop, regenerating scene.")
self.scene_manager.finish_scene()
return None, img_index, ann_index, img_list, ann_list, regen_scene
else:
regen_scene = False
print("\nNumber of Objects on tabletop:", len(self.scene_manager.objs))
# get camera coordinates on a hemisphere of radius r, offset by the tabletop height
def camera_orbit_coord(r = 12, tableTopHeight=10):
"""
Constrains the camera location to a hemispherical orbit of radius r around the tabletop centre.
The hemisphere's z origin is offset by tableTopHeight.
"""
u = random.uniform(0,1)
v = random.uniform(0,1)
phi = math.acos(1.0 - v) # phi: [0,0.5*pi]
theta = 2.0 * math.pi * u # theta: [0,2*pi]
x = r * math.cos(theta) * math.sin(phi)
y = r * math.sin(theta) * math.sin(phi)
z = r * math.cos(phi) + tableTopHeight # add table height offset
return np.array([x,y,z])
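# Note on the sampling math (added for clarity): with u, v uniform in [0, 1],
# theta = 2*pi*u and phi = acos(1 - v) spread points uniformly over the surface of the
# upper hemisphere; e.g. r=1, u=0.25, v=1 gives (x, y, z) = (0, 1, tableTopHeight).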
# Randomly sample camera and light coordinates constrained to lie between 2 concentric hemispheres above the tabletop
numViews = self.params["num_views"]
# get hemisphere radius bounds
autoHemisphereRadius = self.sample("auto_hemisphere_radius")
if not autoHemisphereRadius:
camHemisphereRadiusMin = self.sample("cam_hemisphere_radius_min")
camHemisphereRadiusMax = self.sample("cam_hemisphere_radius_max")
lightHemisphereRadiusMin = self.sample("spherelight_hemisphere_radius_min")
lightHemisphereRadiusMax = self.sample("spherelight_hemisphere_radius_max")
else:
camHemisphereRadiusMin = max(x_width,y_length) * 0.8
camHemisphereRadiusMax = camHemisphereRadiusMin + 0.7*camHemisphereRadiusMin
lightHemisphereRadiusMin = camHemisphereRadiusMax + 0.1
lightHemisphereRadiusMax = lightHemisphereRadiusMin + 1
print("table half extents (x_width, y_length):", x_width, y_length)
print("\n===Camera & Light Hemisphere Parameters===")
print(f"autoHemisphereRadius:{autoHemisphereRadius}")
print(f"camHemisphereRadiusMin = {camHemisphereRadiusMin}")
print(f"camHemisphereRadiusMax = {camHemisphereRadiusMax}")
print(f"lightHemisphereRadiusMin = {lightHemisphereRadiusMin}")
print(f"lightHemisphereRadiusMax = {lightHemisphereRadiusMax}")
Logger.print(f"\n=== Capturing Groundtruth for each viewport in scene ===\n")
for view_id in range(numViews):
random.seed(None)
Logger.print(f"\n==> Scene: {self.index}, View: {view_id} <==\n")
# resample radius of camera hemisphere between min and max radii bounds
r = random.uniform(camHemisphereRadiusMin,camHemisphereRadiusMax)
print('sampled radius r of camera hemisphere:',r)
# resample camera coordinates and rotate camera to look at tabletop surface center
cam_coord_w = camera_orbit_coord(r=r,tableTopHeight=roomTableHeight+0.2)
print("sampled camera coordinate:",cam_coord_w)
self.scene_manager.camera.translate(cam_coord_w)
self.scene_manager.camera.translate_rotate(target=(0,0,roomTableHeight)) #target coordinates
# initialise ambient lighting as 0 (for ray tracing), path tracing not affected
rtx_mode = "/rtx"
ambient_light_intensity = 0 #random.uniform(0.2,3.5)
set_carb_setting(kit._carb_settings, rtx_mode + "/sceneDb/ambientLightIntensity", ambient_light_intensity)
# Enable indirect diffuse GI (for ray tracing)
set_carb_setting(kit._carb_settings, rtx_mode + "/indirectDiffuse/enabled", True)
# Reset and delete all lights
from omni.isaac.core.utils import prims
for light in self.scene_manager.lights:
prims.delete_prim(light.path)
# Resample number of lights in viewport
self.scene_manager.lights = []
for grp_index, group in enumerate(self.scene_manager.sample("groups")):
# adjust ceiling light parameters
if group == "ceilinglights":
for lightIndex, light in enumerate(self.scene_manager.ceilinglights):
if lightIndex == 0:
new_intensity = light.sample("light_intensity")
if light.sample("light_temp_enabled"):
new_temp = light.sample("light_temp")
# change light intensity
light.attributes["intensity"] = new_intensity
light.prim.GetAttribute("intensity").Set(light.attributes["intensity"])
# change light temperature
if light.sample("light_temp_enabled"):
light.attributes["colorTemperature"] = new_temp
light.prim.GetAttribute("colorTemperature").Set(light.attributes["colorTemperature"])
# adjust spherical light parameters
if group == "lights":
num_lights = self.scene_manager.sample("light_count", group=group)
for i in range(num_lights):
path = "{}/Lights/lights_{}".format( self.scene_manager.scene_path, len(self.scene_manager.lights))
light = Light(self.scene_manager.sim_app, self.scene_manager.sim_context, path, self.scene_manager.camera, group)
# change light intensity
light.attributes["intensity"] = light.sample("light_intensity")
light.prim.GetAttribute("intensity").Set(light.attributes["intensity"])
# change light temperature
if light.sample("light_temp_enabled"):
light.attributes["colorTemperature"] =light.sample("light_temp")
light.prim.GetAttribute("colorTemperature").Set(light.attributes["colorTemperature"])
# change light coordinates
light_coord_w = camera_orbit_coord(r=random.uniform(lightHemisphereRadiusMin,lightHemisphereRadiusMax),tableTopHeight=roomTableHeight+0.2)
light.translate(light_coord_w)
light.coord, quaternion = light.xform_prim.get_world_pose()
light.coord = np.array(light.coord, dtype=np.float32)
self.scene_manager.lights.append(light)
print(f"Number of sphere lights in scene: {len(self.scene_manager.lights)}")
# capture groundtruth of entire viewpoint
groundtruth, img_index, ann_index, img_list, ann_list = \
self.output_manager.capture_amodal_groundtruth(self.index,
self.scene_manager,
img_index, ann_index, view_id,
img_list, ann_list
)
else:
self.scene_manager.update_scene()
groundtruth = self.output_manager.capture_groundtruth(self.index)
# finish the scene and reset prims in scene
self.scene_manager.finish_scene()
return groundtruth, img_index, ann_index, img_list, ann_list, regen_scene
def setup_data_output(self):
""" Create output directories and copy input files to output. """
# Overwrite output directory, if needed
if self.params["overwrite"]:
shutil.rmtree(self.output_dir, ignore_errors=True)
# Create output directory
os.makedirs(self.output_dir, exist_ok=True)
# Create output directories, as needed
self.output_data_dir = os.path.join(self.output_dir, "data")
self.parameter_dir = os.path.join(self.output_dir, "parameters")
self.parameter_profiles_dir = os.path.join(self.parameter_dir, "profiles")
self.log_dir = os.path.join(self.output_dir, "log")
self.content_log_path = os.path.join(self.log_dir, "sampling_log.yaml")
os.makedirs(self.output_data_dir, exist_ok=True)
os.makedirs(self.parameter_profiles_dir, exist_ok=True)
os.makedirs(self.log_dir, exist_ok=True)
# Copy input parameters file to output
input_file_name = os.path.basename(self.params["file_path"])
input_file_copy = os.path.join(self.parameter_dir, input_file_name)
shutil.copy(self.params["file_path"], input_file_copy)
# Copy profile parameters file(s) to output
if self.params["profile_files"]:
for profile_file in self.params["profile_files"]:
profile_file_name = os.path.basename(profile_file)
profile_file_copy = os.path.join(self.parameter_profiles_dir, profile_file_name)
shutil.copy(profile_file, profile_file_copy)
def get_output_dir(params):
""" Determine output directory to store datasets.
"""
if params["output_dir"].startswith("/"):
output_dir = params["output_dir"]
elif params["output_dir"].startswith("*"):
output_dir = os.path.join(Distribution.mount, params["output_dir"][2:])
else:
output_dir = os.path.join(os.path.dirname(__file__), "..", "datasets", params["output_dir"])
return output_dir
def get_starting_index(params, output_dir):
""" Determine starting index of dataset. """
if params["overwrite"]:
return 0
output_data_dir = os.path.join(output_dir, "data")
if not os.path.exists(output_data_dir):
return 0
def find_min_missing(indices):
if indices:
indices.sort()
for i in range(indices[-1]):
if i not in indices:
return i
return indices[-1]
else:
return -1
camera_dirs = [os.path.join(output_data_dir, sub_dir) for sub_dir in os.listdir(output_data_dir)]
min_indices = []
for camera_dir in camera_dirs:
data_dirs = [os.path.join(camera_dir, sub_dir) for sub_dir in os.listdir(camera_dir)]
for data_dir in data_dirs:
indices = []
for filename in os.listdir(data_dir):
try:
if "_" in filename:
index = int(filename[: filename.rfind("_")])
else:
index = int(filename[: filename.rfind(".")])
indices.append(index)
except:
pass
min_index = find_min_missing(indices)
min_indices.append(min_index)
if min_indices:
smallest_index = min(min_indices)
return smallest_index + 1
else:
return 0
def assert_dataset_complete(params, index):
""" Check if dataset is already complete. """
num_scenes = params["num_scenes"]
if index >= num_scenes:
print(
'Dataset is completed. Number of generated samples {} satisfies "num_scenes" {}.'.format(index, num_scenes)
)
sys.exit()
else:
print("Starting at index ", index)
def define_arguments():
""" Define command line arguments. """
parser = argparse.ArgumentParser()
parser.add_argument("--input", default="parameters/warehouse.yaml", help="Path to input parameter file")
parser.add_argument(
"--visualize-models",
"--visualize_models",
action="store_true",
help="Output visuals of all object models defined in input parameter file, instead of outputting a dataset.",
)
parser.add_argument("--mount", default="/tmp/composer", help="Path to mount symbolized in parameter files via '*'.")
parser.add_argument("--headless", action="store_true", help="Will not launch Isaac SIM window.")
parser.add_argument("--nap", action="store_true", help="Will nap Isaac SIM after the first scene is generated.")
parser.add_argument("--overwrite", action="store_true", help="Overwrites dataset in output directory.")
parser.add_argument("--output", type=str, help="Output directory. Overrides 'output_dir' param.")
parser.add_argument(
"--num-scenes", "--num_scenes", type=int, help="Num scenes in dataset. Overrides 'num_scenes' param."
)
parser.add_argument(
"--num-views", "--num_views", type=int, help="Num Views in scenes. Overrides 'num_views' param."
)
parser.add_argument(
"--save-segmentation-data", "--save_segmentation_data", action="store_true", help="Save Segmentation data as PNG, Depth image as .pfm. Overrides 'save_segmentation_data' param."
)
parser.add_argument(
"--nucleus-server", "--nucleus_server", type=str, help="Nucleus Server URL. Overrides 'nucleus_server' param."
)
return parser
if __name__ == "__main__":
# Create argument parser
parser = define_arguments()
args, _ = parser.parse_known_args()
# Parse input parameter file
parser = Parser(args)
params = parser.params
#print("params:",params)
Sampler.params = params
sample = Sampler().sample
# Determine output directory
output_dir = get_output_dir(params)
# Run Composer in Visualize mode
if args.visualize_models:
from visualize import Visualizer
visuals = Visualizer(parser, params, output_dir)
visuals.visualize_models()
# Handle shutdown
visuals.composer.sim_context.clear_instance()
visuals.composer.sim_app.close()
sys.exit()
# Set verbose mode
Logger.verbose = params["verbose"]
# Get starting index of dataset
index = get_starting_index(params, output_dir)
# if not overwrite
json_files = []
if not params["overwrite"] and os.path.isdir(output_dir):
# Find the latest annotation checkpoint JSON and continue from the next scene index
json_files = [pos_json for pos_json in os.listdir(output_dir) if pos_json.endswith('.json')]
if len(json_files)>0:
last_scene_index = -1
last_json_path = ""
for i in json_files:
if i != "annotation_final.json":
json_index = int(i.split('_')[-1].split('.')[0])
if json_index >= last_scene_index:
last_scene_index = json_index
last_json_path = os.path.join(output_dir,i)
# get current index
index = last_scene_index + 1
# read latest json file
f = open(last_json_path)
data = json.load(f)
last_img_index = max(data['images'][-1]['id'],-1)
last_ann_index = max(data['annotations'][-1]['id'],-1)
f.close()
# remove images from scenes beyond the last checkpointed scene; they have no saved annotations
img_files = [img_path for img_path in os.listdir(output_dir) if img_path.endswith('.png')]
for path, subdirs, files in os.walk(output_dir):
for name in files:
if name.endswith('.png') or name.endswith('.pfm'):
img_scene = int(name.split("_")[0])
if img_scene > last_scene_index:
img_path = os.path.join(path, name)
os.remove(img_path)
print(f"Removing Images from scene {index} onwards.")
print(f"Continuing from scene {index}.")
# Check if dataset is already complete
assert_dataset_complete(params, index)
# Initialize composer
composer = Composer(params, index, output_dir)
metrics = Metrics(composer.log_dir, composer.content_log_path)
if not params["overwrite"] and os.path.isdir(output_dir) and len(json_files) > 0:
img_index, ann_index = last_img_index+1, last_ann_index+1
else:
img_index, ann_index = 1, 1
img_list, ann_list = [],[]
total_st = time.time()
# Generate dataset
while composer.index < params["num_scenes"]:
# get the start time
st = time.time()
regen_scene = True
while regen_scene:
_, img_index, ann_index, img_list, ann_list, regen_scene = composer.generate_scene(img_index, ann_index,img_list,ann_list,regen_scene)
scene_no = composer.index
# save an annotation checkpoint every `checkpoint_interval` generated scenes
if (scene_no % params["checkpoint_interval"]) == 0 and (scene_no != 0):
gc.collect() # force the garbage collector to release unreferenced memory
date_created = str(datetime.datetime.now())
# create annotation file
coco_json = {
"info": {
"description": "SynTable",
"url": "nil",
"version": "0.1.0",
"year": 2022,
"contributor": "SynTable",
"date_created": date_created
},
"licenses": [
{
"id": 1,
"name": "Attribution-NonCommercial-ShareAlike License",
"url": "http://creativecommons.org/licenses/by-nc-sa/2.0/"
}
],
"categories": [
{
"id": 1,
"name": "object",
"supercategory": "shape"
}
],
"images":img_list,
"annotations":ann_list}
# if save background segmentation
if params["save_background"]:
coco_json["categories"].append({
"id": 0,
"name": "background",
"supercategory": "shape"
})
# save annotation dict
with open(f'{output_dir}/annotation_{scene_no}.json', 'w') as write_file:
json.dump(coco_json, write_file, indent=4)
print(f"\n[Checkpoint] Finished scene {scene_no}, saving annotations to {output_dir}/annotation_{scene_no}.json")
if (scene_no + 1) != params["num_scenes"]:
# reset lists to prevent memory error
img_list, ann_list = [],[]
coco_json = {}
composer.index += 1
# compute the execution time for this scene
elapsed_time = time.time() - st
print(f'\nExecution time for scene {scene_no}:', time.strftime("%H:%M:%S", time.gmtime(elapsed_time)))
date_created = str(datetime.datetime.now())
# create annotation file
coco_json = {
"info": {
"description": "SynTable",
"url": "nil",
"version": "0.1.0",
"year": 2022,
"contributor": "SynTable",
"date_created": date_created
},
"licenses": [
{
"id": 1,
"name": "Attribution-NonCommercial-ShareAlike License",
"url": "http://creativecommons.org/licenses/by-nc-sa/2.0/"
}
],
"categories": [
{
"id": 1,
"name": "object",
"supercategory": "shape"
}
],
"images":img_list,
"annotations":ann_list}
# if save background segmentation
if params["save_background"]:
coco_json["categories"].append({
"id": 0,
"name": "background",
"supercategory": "shape"
})
# save json
with open(f'{output_dir}/annotation_{scene_no}.json', 'w') as write_file:
json.dump(coco_json, write_file, indent=4)
print(f"\n[End] Finished last scene {scene_no}, saving annotations to {output_dir}/annotation_{scene_no}.json")
# reset lists to prevent out of memory (oom) error
del img_list
del ann_list
del coco_json
gc.collect() # force the garbage collector to release unreferenced memory
elapsed_time = time.time() - total_st
print(f'\nExecution time for all {params["num_scenes"]} scenes * {params["num_views"]} views:', time.strftime("%H:%M:%S", time.gmtime(elapsed_time)))
# Handle shutdown
composer.output_manager.data_writer.stop_threads()
composer.sim_context.clear_instance()
composer.sim_app.close()
# Output performance metrics
metrics.output_performance_metrics()
# concatenate all coco.json checkpoint files to final coco.json
final_json_path = f'{output_dir}/annotation_final.json'
json_files = [os.path.join(output_dir,pos_json) for pos_json in os.listdir(output_dir) if (pos_json.endswith('.json') and os.path.join(output_dir,pos_json) != final_json_path)]
json_files = sorted(json_files, key=lambda x: int(x.split("_")[-1].split(".")[0]))
coco_json = {"info":{},"licenses":[],"categories":[],"images":[],"annotations":[]}
for i, file in enumerate(json_files):
if file != final_json_path:
f = open(file)
data = json.load(f)
if i == 0:
coco_json["info"] = data["info"]
coco_json["licenses"] = data["licenses"]
coco_json["categories"] = data["categories"]
coco_json["images"].extend(data["images"])
coco_json["annotations"].extend(data["annotations"])
f.close()
with open(final_json_path, 'w') as write_file:
json.dump(coco_json, write_file, indent=4)
# visualize annotations
if params["save_segmentation_data"]:
print("[INFO] Generating occlusion masks...")
rgb_dir = f"{output_dir}/data/mono/rgb"
occ_dir = f"{output_dir}/data/mono/occlusion"
instance_dir = f"{output_dir}/data/mono/instance"
vis_dir = f"{output_dir}/data/mono/visualize"
vis_occ_dir = f"{vis_dir}/occlusion"
vis_instance_dir = f"{vis_dir}/instance"
# make visualisation output directory
for dir in [vis_dir,vis_occ_dir, vis_instance_dir]:
if not os.path.exists(dir):
os.makedirs(dir)
# iterate through scenes
rgb_paths = [pos_json for pos_json in os.listdir(rgb_dir) if pos_json.endswith('.png')]
for scene_index in range(0,params["num_scenes"]):
# scene_index = str(scene_index_raw) +"_"+str(view_id)
for view_id in range(0,params["num_views"]):
rgb_img_list = glob.glob(f"{rgb_dir}/{scene_index}_{view_id}.png")
rgb_img = cv2.imread(rgb_img_list[0], cv2.IMREAD_UNCHANGED)
occ_img_list = glob.glob(f"{occ_dir}/{scene_index}_{view_id}_*.png")
#occ_mask_list = []
if len(occ_img_list) > 0:
occ_img = rgb_img.copy()
overlay = rgb_img.copy()
combined_mask = np.zeros((occ_img.shape[0],occ_img.shape[1]))
background = f"{occ_dir}/{scene_index}_background.png"
# iterate through all occlusion masks
for i in range(len(occ_img_list)):
occ_mask_path = occ_img_list[i]
if occ_mask_path == background:
occ_img_back = rgb_img.copy()
overlay_back = rgb_img.copy()
occluded_mask = cv2.imread(occ_mask_path, cv2.IMREAD_UNCHANGED)
occluded_mask = occluded_mask.astype(bool) # boolean mask
overlay_back[occluded_mask] = [0, 0, 255]
alpha =0.5
occ_img_back = cv2.addWeighted(overlay_back, alpha, occ_img_back, 1 - alpha, 0, occ_img_back)
occ_save_path = f"{vis_occ_dir}/{scene_index}_{view_id}_background.png"
cv2.imwrite(occ_save_path, occ_img_back)
else:
occluded_mask = cv2.imread(occ_mask_path, cv2.IMREAD_UNCHANGED)
combined_mask += occluded_mask
combined_mask = combined_mask.astype(bool) # boolean mask
overlay[combined_mask] = [0, 0, 255]
alpha =0.5
occ_img = cv2.addWeighted(overlay, alpha, occ_img, 1 - alpha, 0, occ_img)
occ_save_path = f"{vis_occ_dir}/{scene_index}_{view_id}.png"
cv2.imwrite(occ_save_path, occ_img)
combined_mask = combined_mask.astype('uint8')
occ_save_path = f"{vis_occ_dir}/{scene_index}_{view_id}_mask.png"
cv2.imwrite(occ_save_path, combined_mask*255)
vis_img_list = glob.glob(f"{instance_dir}/{scene_index}_{view_id}_*.png")
if len(vis_img_list) > 0:
vis_img = rgb_img.copy()
overlay = rgb_img.copy()
background = f"{instance_dir}/{scene_index}_{view_id}_background.png"
# iterate through all occlusion masks
for i in range(len(vis_img_list)):
vis_mask_path = vis_img_list[i]
if vis_mask_path == background:
vis_img_back = rgb_img.copy()
overlay_back = rgb_img.copy()
visible_mask = cv2.imread(vis_mask_path, cv2.IMREAD_UNCHANGED)
visible_mask = visible_mask.astype(bool) # boolean mask
overlay_back[visible_mask] = [0, 0, 255]
alpha =0.5
vis_img_back = cv2.addWeighted(overlay_back, alpha, vis_img_back, 1 - alpha, 0, vis_img_back)
vis_save_path = f"{vis_instance_dir}/{scene_index}_{view_id}_background.png"
cv2.imwrite(vis_save_path, vis_img_back)
else:
visible_mask = cv2.imread(vis_mask_path, cv2.IMREAD_UNCHANGED)
vis_combined_mask = visible_mask.astype(bool) # boolean mask
colour = list(np.random.choice(range(256), size=3))
overlay[vis_combined_mask] = colour
alpha =0.5
vis_img = cv2.addWeighted(overlay, alpha, vis_img, 1 - alpha, 0, vis_img)
vis_save_path = f"{vis_instance_dir}/{scene_index}_{view_id}.png"
cv2.imwrite(vis_save_path,vis_img)
| 33,426 | Python | 43.274172 | 185 | 0.55397 |
ngzhili/SynTable/syntable_composer/src/input/parse.py |
import copy
import numpy as np
import os
import yaml
from distributions import Distribution, Choice, Normal, Range, Uniform, Walk
class Parser:
""" For parsing the input parameterization to Composer. """
def __init__(self, args):
""" Construct Parser. Parse input file. """
self.args = args
self.global_group = "[[global]]"
self.param_suffix_to_file_type = {
"model": [".usd", ".usdz", ".usda", ".usdc"],
"texture": [".png", ".jpg", ".jpeg", ".hdr", ".exr"],
"material": [".mdl"],
}
self.no_eval_check_params = {"output_dir", "nucleus_server", "inherit", "profiles"}
Distribution.mount = args.mount
Distribution.param_suffix_to_file_type = self.param_suffix_to_file_type
self.default_params = self.parse_param_set("parameters/profiles/default.yaml", default=True)
additional_params_to_default_set = {"inherit": "", "profiles": [], "file_path": "", "profile_files": []}
self.default_params = {**additional_params_to_default_set, **self.default_params}
self.initialize_params(self.default_params)
self.params = self.parse_input(self.args.input)
def evaluate_param(self, key, val):
""" Evaluate a parameter value in Python """
# Skip evaluation on certain parameter with string values
if not self.param_is_evaluated(key, val):
return val
if type(val) is str and len(val) > 0:
val = eval(val)
if type(val) in (tuple, list):
try:
val = np.array(val, dtype=np.float32)
except:
pass
if isinstance(val, Distribution):
val.setup(key)
if type(val) in (tuple, list):
elems = val
val = [self.evaluate_param(key, sub_elem) for sub_elem in elems]
return val
def param_is_evaluated(self, key, val):
if type(val) is np.ndarray:
return True
return not (key in self.no_eval_check_params or not val or (type(val) is str and val.startswith("/")))
def initialize_params(self, params, default=False):
""" Evaluate parameter values in Python. Verify parameter name and value type. """
for key, val in params.items():
if type(val) is dict:
self.initialize_params(val)
else:
# Evaluate parameter
try:
val = self.evaluate_param(key, val)
params[key] = val
except Exception:
raise ValueError("Unable to evaluate parameter '{}' with value '{}'".format(key, val))
# Verify parameter
if not default:
if key.startswith("obj") or key.startswith("light"):
default_param_set = self.default_params["groups"][self.global_group]
else:
default_param_set = self.default_params
# Verify parameter name
if key not in default_param_set and key:
raise ValueError("Parameter '{}' is not a parameter.".format(key))
# Verify parameter value type
default_val = default_param_set[key]
if isinstance(val, Distribution):
val_type = val.get_type()
else:
val_type = type(val)
if isinstance(default_val, Distribution):
default_val_type = default_val.get_type()
else:
default_val_type = type(default_val)
if default_val_type in (int, float):
# Integer and Float equivalence
default_val_type = [int, float]
elif default_val_type in (tuple, list, np.ndarray):
# Tuple, List, and Array equivalence
default_val_type = [tuple, list, np.ndarray]
else:
default_val_type = [default_val_type]
if val_type not in default_val_type:
raise ValueError(
"Parameter '{}' has incorrect value type {}. Value type must be in {}.".format(
key, val_type, default_val_type
)
)
def verify_nucleus_paths(self, params):
""" Verify parameter values that point to Nucleus server file paths. """
import omni.client
for key, val in params.items():
if type(val) is dict:
self.verify_nucleus_paths(val)
# Check Nucleus server file path of certain parameters
elif key.endswith(("model", "texture", "material")) and not isinstance(val, Distribution) and val:
# Check path starts with "/"
if not val.startswith("/"):
raise ValueError(
"Parameter '{}' has path '{}' which must start with a forward slash.".format(key, val)
)
# Check file type
param_file_type = val[val.rfind(".") :].lower()
correct_file_types = self.param_suffix_to_file_type.get(key[key.rfind("_") + 1 :], [])
if param_file_type not in correct_file_types:
raise ValueError(
"Parameter '{}' has path '{}' with incorrect file type. File type must be one of {}.".format(
key, val, correct_file_types
)
)
# Check file can be found
file_path = self.nucleus_server + val
(exists_result, _, _) = omni.client.read_file(file_path)
is_file = exists_result.name.startswith("OK")
if not is_file:
raise ValueError(
"Parameter '{}' has path '{}' not found on '{}'.".format(key, val, self.nucleus_server)
)
def override_params(self, params):
""" Override params with CLI args. """
if self.args.output:
params["output_dir"] = self.args.output
if self.args.num_scenes is not None:
params["num_scenes"] = self.args.num_scenes
if self.args.mount:
params["mount"] = self.args.mount
params["overwrite"] = self.args.overwrite
params["headless"] = self.args.headless
params["nap"] = self.args.nap
params["visualize_models"] = self.args.visualize_models
def parse_param_set(self, input, parse_from_file=True, default=False):
""" Parse input parameter file. """
if parse_from_file:
# Determine parameter file path
if input.startswith("/"):
input_file = input
elif input.startswith("*"):
input_file = os.path.join(Distribution.mount, input[2:])
else:
input_file = os.path.join(os.path.dirname(__file__), "../../", input)
# Read parameter file
with open(input_file, "r") as f:
params = yaml.safe_load(f)
# Add a parameter for the input file path
params["file_path"] = input_file
else:
params = input
# Process parameter groups
groups = {}
groups[self.global_group] = {}
for key, val in list(params.items()):
# Add group
if type(val) is dict:
if key in groups:
raise ValueError("Parameter group name is not unique: {}".format(key))
groups[key] = val
params.pop(key)
# Add param to global group
if key.startswith("obj_") or key.startswith("light_"):
groups[self.global_group][key] = val
params.pop(key)
params["groups"] = groups
return params
def parse_params(self, params):
""" Parse params into a final parameter set. """
import omni.client
# Add a global group, if needed
if self.global_group not in params["groups"]:
params["groups"][self.global_group] = {}
# Parse all profile parameter sets
profile_param_sets = [self.parse_param_set(profile) for profile in params.get("profiles", [])[::-1]]
# Set default as lowest param set and input file param set as highest
param_sets = [copy.deepcopy(self.default_params)] + profile_param_sets + [params]
# Union parameters sets
final_params = param_sets[0]
for params in param_sets[1:]:
global_group_params = params["groups"][self.global_group]
sub_global_group_params = final_params["groups"][self.global_group]
for group in params["groups"]:
if group == self.global_group:
continue
group_params = params["groups"][group]
if "inherit" in group_params:
inherited_group = group_params["inherit"]
if inherited_group not in final_params["groups"]:
raise ValueError(
"In group '{}' cannot find the inherited group '{}'".format(group, inherited_group)
)
inherited_params = final_params["groups"][inherited_group]
else:
inherited_params = {}
final_params["groups"][group] = {
**sub_global_group_params,
**inherited_params,
**global_group_params,
**group_params,
}
final_params["groups"][self.global_group] = {
**final_params["groups"][self.global_group],
**params["groups"][self.global_group],
}
final_groups = final_params["groups"].copy()
final_params = {**final_params, **params}
final_params["groups"] = final_groups
# Remove non-final groups
for group in list(final_params["groups"].keys()):
if group not in param_sets[-1]["groups"]:
final_params["groups"].pop(group)
final_params["groups"].pop(self.global_group)
params = final_params
# Set profile file paths
params["profile_files"] = [profile_params["file_path"] for profile_params in profile_param_sets]
# Set Nucleus server and check connection
if self.args.nucleus_server:
params["nucleus_server"] = self.args.nucleus_server
if "://" not in params["nucleus_server"]:
params["nucleus_server"] = "omniverse://" + params["nucleus_server"]
self.nucleus_server = params["nucleus_server"]
(result, _) = omni.client.stat(self.nucleus_server)
if not result.name.startswith("OK"):
raise ConnectionError("Could not connect to the Nucleus server: {}".format(self.nucleus_server))
Distribution.nucleus_server = params["nucleus_server"]
# Initialize params
self.initialize_params(params)
# Verify Nucleus server paths
self.verify_nucleus_paths(params)
return params
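# Precedence sketch (an informal reading of the merging above, not a statement from the
# original authors): the input file overrides profile files, and among profiles the one
# listed first takes priority; everything falls back to the defaults. Within a group,
# the group's own obj_/light_ params override the file's global obj_/light_ params,
# which in turn override params pulled in via "inherit".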
def parse_input(self, input, parse_from_file=True):
""" Parse all input parameter files. """
if parse_from_file:
print("Parsing and checking input parameterization.")
# Parse input parameter file
params = self.parse_param_set(input, parse_from_file=parse_from_file)
# Process params
params = self.parse_params(params)
# Override parameters with CLI args
self.override_params(params)
return params
| 11,936 | Python | 37.631068 | 117 | 0.528988 |
ngzhili/SynTable/syntable_composer/src/input/__init__.py | from .parse import Parser
| 26 | Python | 12.499994 | 25 | 0.807692 |
ngzhili/SynTable/syntable_composer/src/visualize/visualize.py |
import numpy as np
import os
import sys
from PIL import Image, ImageDraw, ImageFont
from distributions import Choice, Walk
from main import Composer
from sampling import Sampler
class Visualizer:
""" For generating visuals of each input object model in the input parameterization. """
def __init__(self, parser, input_params, output_dir):
""" Construct Visualizer. Parameterize Composer to generate the data needed to post-process into model visuals. """
self.parser = parser
self.input_params = input_params
self.output_dir = os.path.join(output_dir, "visuals")
os.makedirs(self.output_dir, exist_ok=True)
# Get all object models from input parameter file
self.obj_models = self.get_all_obj_models()
self.nucleus_server = self.input_params["nucleus_server"]
# Copy model list to output file
model_list = os.path.join(self.output_dir, "models.txt")
with open(model_list, "w") as f:
for obj_model in self.obj_models:
f.write(obj_model)
f.write("\n")
# Filter obj models
if not self.input_params["overwrite"]:
self.filter_obj_models(self.obj_models)
if not self.obj_models:
print("All object model visuals are already created.")
sys.exit()
self.tile_width = 500
self.tile_height = 500
self.obj_size = 1
self.room_size = 10 * self.obj_size
self.cam_distance = 4 * self.obj_size
self.camera_coord = np.array((-self.cam_distance, 0, self.room_size / 2))
self.background_color = (160, 185, 190)
self.group_name = "photoshoot"
# Set hard-coded parameters
self.params = {self.group_name: {}}
self.set_obj_params()
self.set_light_params()
self.set_room_params()
self.set_cam_params()
self.set_other_params()
# Parse parameters
self.params = parser.parse_input(self.params, parse_from_file=False)
# Set parameters
Sampler.params = self.params
# Initiate Composer
self.composer = Composer(self.params, 0, self.output_dir)
def visualize_models(self):
""" Generate samples and post-process captured data into visuals. """
num_models = len(self.obj_models)
for i, obj_model in enumerate(self.obj_models):
print("Model {}/{} - {}".format(i, num_models, obj_model))
self.set_obj_model(obj_model)
# Capture 4 angles per model
outputs = [self.composer.generate_scene() for j in range(4)]
image_matrix = self.process_outputs(outputs)
self.save_visual(obj_model, image_matrix)
def get_all_obj_models(self):
""" Get all object models from input parameterization. """
obj_models = []
groups = self.input_params["groups"]
for group_name, group in groups.items():
obj_count = group["obj_count"]
group_models = group["obj_model"]
if group_models and obj_count:
if type(group_models) is Choice or type(group_models) is Walk:
group_models = group_models.elems
else:
group_models = [group_models]
obj_models.extend(group_models)
# Remove repeats
obj_models = list(set(obj_models))
return obj_models
def filter_obj_models(self, obj_models):
""" Filter out obj models that have already been visualized. """
existing_filenames = set([f for f in os.listdir(self.output_dir)])
for obj_model in list(obj_models):  # iterate over a copy; removing from the list being iterated would skip entries
filename = self.model_to_filename(obj_model)
if filename in existing_filenames:
obj_models.remove(obj_model)
def model_to_filename(self, obj_model):
""" Map object model's Nucleus path to a filename. """
filename = obj_model.replace("/", "__")
r_index = filename.rfind(".")
filename = filename[:r_index]
filename += ".jpg"
return filename
def process_outputs(self, outputs):
""" Tile output data from scene into one image matrix. """
rgbs = [groundtruth["DATA"]["RGB"] for groundtruth in outputs]
wireframes = [groundtruth["DATA"]["WIREFRAME"] for groundtruth in outputs]
rgbs = [rgb[:, :, :3] for rgb in rgbs]
top_row_matrix = np.concatenate(rgbs, axis=1)
wireframes = [wireframe[:, :, :3] for wireframe in wireframes]
bottom_row_matrix = np.concatenate(wireframes, axis=1)
image_matrix = np.concatenate([top_row_matrix, bottom_row_matrix], axis=0)
image_matrix = np.array(image_matrix, dtype=np.uint8)
return image_matrix
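# Shape note (based on the 500x500 tiles configured in __init__): four RGB frames and four
# wireframe frames tile into a single 1000 x 2000 x 3 uint8 matrix, RGB on the top row and
# wireframes on the bottom row.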
def save_visual(self, obj_model, image_matrix):
""" Save image matrix as image. """
image = Image.fromarray(image_matrix, "RGB")
font_path = os.path.join(os.path.dirname(__file__), "RobotoMono-Regular.ttf")
font = ImageFont.truetype(font_path, 24)
draw = ImageDraw.Draw(image)
width, height = image.size
draw.text((10, 10), obj_model, font=font)
model_name = self.model_to_filename(obj_model)
filename = os.path.join(self.output_dir, model_name)
image.save(filename, "JPEG", quality=90)
def set_cam_params(self):
""" Set camera parameters. """
self.params["camera_coord"] = str(self.camera_coord.tolist())
self.params["camera_rot"] = str((0, 0, 0))
self.params["focal_length"] = 50
def set_room_params(self):
""" Set room parameters. """
self.params["scenario_room_enabled"] = str(True)
self.params["floor_size"] = str(self.room_size)
self.params["wall_height"] = str(self.room_size)
self.params["floor_color"] = str(self.background_color)
self.params["wall_color"] = str(self.background_color)
self.params["ceiling_color"] = str(self.background_color)
self.params["floor_reflectance"] = str(0)
self.params["wall_reflectance"] = str(0)
self.params["ceiling_reflectance"] = str(0)
def set_obj_params(self):
""" Set object parameters. """
group = self.params[self.group_name]
group["obj_coord_camera_relative"] = str(False)
group["obj_rot_camera_relative"] = str(False)
group["obj_coord"] = str((0, 0, self.room_size / 2))
group["obj_rot"] = "Walk([(25, -25, -45), (-25, 25, -225), (-25, 25, -45), (25, -25, -225)])"
group["obj_size"] = str(self.obj_size)
group["obj_count"] = str(1)
def set_light_params(self):
""" Set light parameters. """
group = self.params[self.group_name]
group["light_count"] = str(4)
group["light_coord_camera_relative"] = str(False)
light_offset = 2 * self.obj_size
light_coords = [
self.camera_coord + (0, -light_offset, 0),
self.camera_coord + (0, 0, light_offset),
self.camera_coord + (0, light_offset, 0),
self.camera_coord + (0, 0, -light_offset),
]
light_coords = str([tuple(coord.tolist()) for coord in light_coords])
group["light_coord"] = "Walk(" + light_coords + ")"
group["light_intensity"] = str(40000)
group["light_radius"] = str(0.50)
group["light_color"] = str([200, 200, 200])
def set_other_params(self):
""" Set other parameters. """
self.params["img_width"] = str(self.tile_width)
self.params["img_height"] = str(self.tile_height)
self.params["write_data"] = str(False)
self.params["verbose"] = str(False)
self.params["rgb"] = str(True)
self.params["wireframe"] = str(True)
self.params["nucleus_server"] = str(self.nucleus_server)
self.params["pause"] = str(0.5)
self.params["path_tracing"] = True
def set_obj_model(self, obj_model):
""" Set obj_model parameter. """
group = self.params["groups"][self.group_name]
group["obj_model"] = str(obj_model)
| 8,145 | Python | 34.885462 | 123 | 0.590055 |
ngzhili/SynTable/syntable_composer/src/visualize/__init__.py |
from .visualize import Visualizer
| 35 | Python | 10.999996 | 33 | 0.828571 |
ngzhili/SynTable/syntable_composer/src/sampling/__init__.py | from .sample import Sampler
| 28 | Python | 13.499993 | 27 | 0.821429 |
ngzhili/SynTable/syntable_composer/src/sampling/sample1.py | import numpy as np
from distributions import Distribution
from output import Logger
class Sampler:
""" For managing parameter sampling. """
# Static variable of parameter set
params = None
def __init__(self, group=None):
""" Construct a Sampler. Potentially set an associated group. """
self.group = group
def evaluate(self, val):
""" Evaluate a parameter into a primitive. """
if isinstance(val, Distribution):
val = val.sample()
elif isinstance(val, (list, tuple)):
elems = val
val = [self.evaluate(sub_elem) for sub_elem in elems]
is_numeric = all([type(elem) == int or type(elem) == float for elem in val])
if is_numeric:
val = np.array(val, dtype=np.float32)
return val
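# Illustrative behaviour (assuming the Distribution classes used elsewhere in this repo):
# evaluate(Uniform(0, 1)) draws a float, evaluate([1, 2, 3]) becomes a float32 numpy array,
# and a list that mixes strings with distributions stays a plain Python list whose
# distribution entries are sampled recursively.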
def sample(self, key, group=None,tableBounds=None):
""" Sample a parameter. """
if group is None:
group = self.group
if key.startswith("obj") or key.startswith("light") and group:
param_set = Sampler.params["groups"][group]
else:
param_set = Sampler.params
if key in param_set:
val = param_set[key]
else:
print('Warning: key "{}" in group "{}" not found in parameter set.'.format(key, group))
return None
if key == "obj_coord" and group != "table" and tableBounds:
min_val = tableBounds[0]
max_val = tableBounds[1]
val.min_val = min_val
val.max_val = max_val
val = self.evaluate(val)
Logger.write_parameter(key, val, group=group)
return val
| 1,686 | Python | 28.086206 | 98 | 0.561684 |
ngzhili/SynTable/syntable_composer/src/scene/scene1.py | import time
import numpy as np
from random import randint
from output import Logger
# from sampling import Sampler
from sampling.sample1 import Sampler
# from scene import Camera, Light
from scene.light1 import Light
from scene.camera1 import Camera
from scene.object1 import Object
from scene.room1 import Room
def randomNumObjList(num_objs, total_sum):
"""
Sample a list of `num_objs` random non-negative integers whose sum is `total_sum`.
"""
# Create an array of size num_objs where every element is initialized to 0
arr = [0] * num_objs
# Distribute total_sum unit increments across random elements so the final list sums to total_sum
for i in range(total_sum):
# Increment a randomly chosen element of the array by 1
arr[randint(0, num_objs - 1)] += 1
return arr
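# Worked example (added for clarity): randomNumObjList(3, 5) could return [2, 0, 3] or
# [1, 3, 1]; the entries are random non-negative integers that always sum to 5.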
class SceneManager:
""" For managing scene set-up and generation. """
def __init__(self, sim_app, sim_context):
""" Construct SceneManager. Set-up scenario in Isaac Sim. """
import omni
self.sim_app = sim_app
self.sim_context = sim_context
self.stage = omni.usd.get_context().get_stage()
self.sample = Sampler().sample
self.scene_path = "/World/Scene"
self.scenario_label = "[[scenario]]"
self.play_frame = False
self.objs = []
self.lights = []
self.camera = Camera(self.sim_app, self.sim_context, "/World/CameraRig", None, group=None)
self.setup_scenario()
def setup_scenario(self):
""" Load in base scenario(s) """
import omni
from omni.isaac.core import SimulationContext
from omni.isaac.core.utils import stage
from omni.isaac.core.utils.stage import get_stage_units
cached_physics_dt = self.sim_context.get_physics_dt()
cached_rendering_dt = self.sim_context.get_rendering_dt()
cached_stage_units = get_stage_units()
self.room = None
if self.sample("scenario_room_enabled"):
# Generate a parameterizable room
self.room = Room(self.sim_app, self.sim_context)
# add table
from scene.room_face1 import RoomTable
group = "table"
path = "/World/Room/table_{}".format(1)
ref = self.sample("nucleus_server") + self.sample("obj_model", group=group)
obj = RoomTable(self.sim_app, self.sim_context, ref, path, "obj", self.camera, group=group)
roomTableMinBounds, roomTableMaxBounds = obj.get_bounds()
roomTableSize = roomTableMaxBounds - roomTableMinBounds # (x,y,z size of table)
roomTableHeight = roomTableSize[-1]
roomTableZCenter = roomTableHeight/2
obj.translate(np.array([0,0,roomTableZCenter]))
self.roomTableSize = roomTableSize
self.roomTable = obj
else:
# Load in a USD scenario
self.load_scenario_model()
# Re-initialize context after we open a stage
self.sim_context = SimulationContext(
physics_dt=cached_physics_dt, rendering_dt=cached_rendering_dt, stage_units_in_meters=cached_stage_units
)
self.stage = omni.usd.get_context().get_stage()
# Set the up axis to the z axis
stage.set_stage_up_axis("z")
# Set scenario label to stage prims
self.set_scenario_label()
# Reset rendering settings
self.sim_app.reset_render_settings()
def set_scenario_label(self):
""" Set scenario label to all prims in stage. """
from pxr import Semantics
for prim in self.stage.Traverse():
path = prim.GetPath()
# print(path)
if path == "/World":
continue
if not prim.HasAPI(Semantics.SemanticsAPI):
sem = Semantics.SemanticsAPI.Apply(prim, "Semantics")
sem.CreateSemanticTypeAttr()
sem.CreateSemanticDataAttr()
else:
sem = Semantics.SemanticsAPI.Get(prim, "Semantics")
continue
typeAttr = sem.GetSemanticTypeAttr()
dataAttr = sem.GetSemanticDataAttr()
typeAttr.Set("class")
dataAttr.Set(self.scenario_label)
def load_scenario_model(self):
""" Load in a USD scenario. """
from omni.isaac.core.utils.stage import open_stage
# Load in base scenario from Nucleus
if self.sample("scenario_model"):
scenario_ref = self.sample("nucleus_server") + self.sample("scenario_model")
open_stage(scenario_ref)
def populate_scene(self, tableBounds=None):
""" Populate a sample's scene a camera, objects, and lights. """
# Update camera
self.camera.place_in_scene()
# Iterate through each group
self.objs = []
self.lights = []
self.ceilinglights = []
if self.sample("randomise_num_of_objs_in_scene"):
MaxObjInScene = self.sample("max_obj_in_scene")
numUniqueObjs = len([i for i in self.sample("groups") if i.lower().startswith("object")])
ObjNumList = randomNumObjList(numUniqueObjs, MaxObjInScene)
for grp_index, group in enumerate(self.sample("groups")):
# spawn objects to scene
if group not in ["table","lights","ceilinglights","backgroundobject"]: # do not add Roomtable here
if self.sample("randomise_num_of_objs_in_scene"):
num_objs = ObjNumList[grp_index] # get number of objects to be generated
else:
num_objs = self.sample("obj_count", group=group)
for i in range(num_objs):
path = "{}/Objects/object_{}".format(self.scene_path, len(self.objs))
ref = self.sample("nucleus_server") + self.sample("obj_model", group=group)
obj = Object(self.sim_app, self.sim_context, ref, path, "obj", self.camera, group,tableBounds=tableBounds)
self.objs.append(obj)
elif group == "ceilinglights":
# Spawn lights
num_lights = self.sample("light_count", group=group)
for i in range(num_lights):
path = "{}/Ceilinglights/ceilinglights_{}".format(self.scene_path, len(self.ceilinglights))
light = Light(self.sim_app, self.sim_context, path, self.camera, group)
self.ceilinglights.append(light)
elif group == "lights":
# Spawn lights
num_lights = self.sample("light_count", group=group)
for i in range(num_lights):
path = "{}/Lights/lights_{}".format(self.scene_path, len(self.lights))
light = Light(self.sim_app, self.sim_context, path, self.camera, group)
self.lights.append(light)
# Update room
if self.room:
self.room.update()
self.roomTable.add_material()
# Add skybox, if needed
self.add_skybox()
def update_scene(self, step_time=None, step_index=0):
""" Update Omniverse after scene is generated. """
from omni.isaac.core.utils.stage import is_stage_loading
# Step positions of objs and lights
if step_time:
self.camera.step(step_time)
for obj in self.objs:
obj.step(step_time)
for light in self.lights:
light.step(step_time)
# Wait for scene to finish loading
while is_stage_loading():
self.sim_context.render()
# Determine if scene is played
scene_assets = self.objs + self.lights
self.play_frame = any([asset.physics for asset in scene_assets])
# Play scene, if needed
if self.play_frame and step_index == 0:
Logger.print("\nPhysically simulating...")
self.sim_context.play()
render = not self.sample("headless")
sim_time = self.sample("physics_simulate_time")
frames_to_simulate = int(sim_time * 60) + 1
for i in range(frames_to_simulate):
self.sim_context.step(render=render)
# Napping
if self.sample("nap"):
print("napping")
while True:
self.sim_context.render()
# Update
if step_index == 0:
Logger.print("\nLoading textures...")
self.sim_context.render()
# Pausing
if step_index == 0:
pause_time = self.sample("pause")
start_time = time.time()
while time.time() - start_time < pause_time:
self.sim_context.render()
def add_skybox(self):
""" Add a DomeLight that creates a textured skybox, if needed. """
from pxr import UsdGeom, UsdLux
from omni.isaac.core.utils.prims import create_prim
sky_texture = self.sample("sky_texture")
sky_light_intensity = self.sample("sky_light_intensity")
if sky_texture:
create_prim(
prim_path="{}/Lights/skybox".format(self.scene_path),
prim_type="DomeLight",
attributes={
UsdLux.Tokens.intensity: sky_light_intensity,
UsdLux.Tokens.specular: 1,
UsdLux.Tokens.textureFile: self.sample("nucleus_server") + sky_texture,
UsdLux.Tokens.textureFormat: UsdLux.Tokens.latlong,
UsdGeom.Tokens.visibility: "inherited",
},
)
def prepare_scene(self, index):
""" Scene preparation step. """
self.valid_sample = True
Logger.start_log_entry(index)
Logger.print("===== Generating Scene: " + str(index) + " =====\n")
def finish_scene(self):
""" Scene finish step. Clean-up variables, Isaac Sim stage. """
from omni.isaac.core.utils.prims import delete_prim
self.objs = []
self.lights = []
self.ceilinglights = []
delete_prim(self.scene_path)
delete_prim("/Looks")
self.sim_context.stop()
self.sim_context.render()
self.play_frame = False
Logger.finish_log_entry()
def print_instance_attributes(self):
for attribute, value in self.__dict__.items():
print(attribute, '=', value)
def reload_table(self):
from omni.isaac.core.utils.prims import delete_prim
from scene.room_face1 import RoomTable
group = "table"
path = "/World/Room/table_{}".format(1)
delete_prim(path) # delete old tables
ref = self.sample("nucleus_server") + self.sample("obj_model", group=group)
obj = RoomTable(self.sim_app, self.sim_context, ref, path, "obj", self.camera, group=group)
roomTableMinBounds, roomTableMaxBounds = obj.get_bounds()
roomTableSize = roomTableMaxBounds - roomTableMinBounds # (x,y,z size of table)
roomTableHeight = roomTableSize[-1]
roomTableZCenter = roomTableHeight/2
obj.translate(np.array([0,0,roomTableZCenter]))
self.roomTableSize = roomTableSize
self.roomTable = obj
| 11,333 | Python | 35.679612 | 136 | 0.578752 |
ngzhili/SynTable/syntable_composer/src/scene/room_face1.py | from scene.object1 import Object
import numpy as np
import os
class RoomFace(Object):
""" For managing an Xform asset in Isaac Sim. """
def __init__(self, sim_app, sim_context, path, prefix, coord, rotation, scaling):
""" Construct Object. """
self.coord = coord
self.rotation = rotation
self.scaling = scaling
super().__init__(sim_app, sim_context, "", path, prefix, None, None)
def load_asset(self):
""" Create asset from object parameters. """
from omni.isaac.core.prims import XFormPrim
from omni.isaac.core.utils.prims import move_prim
from pxr import PhysxSchema, UsdPhysics
if self.prefix == "floor":
# Create invisible ground plane
path = "/World/Room/ground"
planeGeom = PhysxSchema.Plane.Define(self.stage, path)
planeGeom.CreatePurposeAttr().Set("guide")
planeGeom.CreateAxisAttr().Set("Z")
prim = self.stage.GetPrimAtPath(path)
UsdPhysics.CollisionAPI.Apply(prim)
# Create plane
from omni.kit.primitive.mesh import CreateMeshPrimWithDefaultXformCommand
CreateMeshPrimWithDefaultXformCommand(prim_type="Plane").do()
move_prim(path_from="/Plane", path_to=self.path)
self.prim = self.stage.GetPrimAtPath(self.path)
self.xform_prim = XFormPrim(self.path)
def place_in_scene(self):
""" Scale, rotate, and translate asset. """
self.translate(self.coord)
self.rotate(self.rotation)
self.scale(self.scaling)
def step(self):
""" Room Face does not update in a scene's sequence. """
return
class RoomTable(Object):
""" For managing an Xform asset in Isaac Sim. """
def __init__(self, sim_app, sim_context, ref, path, prefix, camera, group):
super().__init__(sim_app, sim_context, ref, path, prefix, camera, group, None)
def load_asset(self):
""" Create asset from object parameters. """
from omni.isaac.core.prims import XFormPrim
from omni.isaac.core.utils import prims
# print(self.path)
# Create object
self.prim = prims.create_prim(self.path, "Xform", semantic_label="[[scenario]]")
self.xform_prim = XFormPrim(self.path)
nested_path = os.path.join(self.path, "nested_prim")
self.nested_prim = prims.create_prim(nested_path, "Xform", usd_path=self.ref, semantic_label="[[scenario]]")
self.nested_xform_prim = XFormPrim(nested_path)
self.add_material()
self.add_collision()
| 2,607 | Python | 31.6 | 116 | 0.624473 |
ngzhili/SynTable/syntable_composer/src/scene/asset1.py |
from abc import ABC, abstractmethod
import math
import numpy as np
from scipy.spatial.transform import Rotation
from output import Logger
from sampling.sample1 import Sampler
class Asset(ABC):
""" For managing an asset in Isaac Sim. """
def __init__(self, sim_app, sim_context, path, prefix, name, group=None, camera=None):
""" Construct Asset. """
self.sim_app = sim_app
self.sim_context = sim_context
self.path = path
self.camera = camera
self.name = name
self.prefix = prefix
self.stage = self.sim_context.stage
self.sample = Sampler(group=group).sample
self.class_name = self.__class__.__name__
if self.class_name != "RoomFace":
self.vel = self.sample(self.concat("vel"))
self.rot_vel = self.sample(self.concat("rot_vel"))
self.accel = self.sample(self.concat("accel"))
self.rot_accel = self.sample(self.concat("rot_accel"))
self.label = group
self.physics = False
@abstractmethod
def place_in_scene(self):
""" Place asset in scene. """
pass
def is_given(self, param):
""" Is a parameter value is given. """
if type(param) in (np.ndarray, list, tuple, str):
return len(param) > 0
elif type(param) is float:
return not math.isnan(param)
else:
return param is not None
def translate(self, coord, xform_prim=None):
""" Translate asset. """
if xform_prim is None:
xform_prim = self.xform_prim
xform_prim.set_world_pose(position=coord)
def scale(self, scaling, xform_prim=None):
""" Scale asset uniformly across all axes. """
if xform_prim is None:
xform_prim = self.xform_prim
xform_prim.set_local_scale(scaling)
def rotate(self, rotation, xform_prim=None):
""" Rotate asset. """
from omni.isaac.core.utils.rotations import euler_angles_to_quat
if xform_prim is None:
xform_prim = self.xform_prim
xform_prim.set_world_pose(orientation=euler_angles_to_quat(rotation.tolist(), degrees=True))
def is_coord_camera_relative(self):
return self.sample(self.concat("coord_camera_relative"))
def is_rot_camera_relative(self):
return self.sample(self.concat("rot_camera_relative"))
def concat(self, parameter_suffix):
""" Concatenate the parameter prefix and suffix. """
return self.prefix + "_" + parameter_suffix
def get_initial_coord(self,tableBounds=None):
""" Get coordinates of asset across 3 axes. """
if self.is_coord_camera_relative():
cam_coord = self.camera.coords[0]
cam_rot = self.camera.rotation
horiz_fov = -1 * self.camera.intrinsics[0]["horiz_fov"]
vert_fov = self.camera.intrinsics[0]["vert_fov"]
radius = self.sample(self.concat("distance"))
theta = horiz_fov * self.sample(self.concat("horiz_fov_loc")) / 2
phi = vert_fov * self.sample(self.concat("vert_fov_loc")) / 2
# Convert from polar to cartesian
rads = np.radians(cam_rot[2] + theta)
x = cam_coord[0] + radius * np.cos(rads)
y = cam_coord[1] + radius * np.sin(rads)
rads = np.radians(cam_rot[0] + phi)
z = cam_coord[2] + radius * np.sin(rads)
coord = np.array([x, y, z])
elif tableBounds:
coord = self.sample(self.concat("coord"),tableBounds=tableBounds)
else:
coord = self.sample(self.concat("coord"))
pretty_coord = tuple([round(v, 1) for v in coord.tolist()])
return coord
def get_initial_rotation(self):
""" Get rotation of asset across 3 axes. """
rotation = self.sample(self.concat("rot"))
rotation = np.array(rotation)
if self.is_rot_camera_relative():
cam_rot = self.camera.rotation
rotation += cam_rot
return rotation
def step(self, step_time):
""" Step asset forward in its sequence. """
from omni.isaac.core.utils.rotations import quat_to_euler_angles
if self.class_name != "Camera":
self.coord, quaternion = self.xform_prim.get_world_pose()
self.coord = np.array(self.coord, dtype=np.float32)
self.rotation = np.degrees(quat_to_euler_angles(quaternion))
vel_vector = self.vel
accel_vector = self.accel
if self.sample(self.concat("movement") + "_" + self.concat("relative")):
radians = np.radians(self.rotation)
direction_cosine_matrix = Rotation.from_rotvec(radians).as_matrix()
vel_vector = direction_cosine_matrix.dot(vel_vector)
accel_vector = direction_cosine_matrix.dot(accel_vector)
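            # Integrate a constant-acceleration step, x += v*t + 0.5*a*t^2, with the velocity and
            # acceleration vectors rotated by the asset's current orientation when relative movement is enabled.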
self.coord += vel_vector * step_time + 0.5 * accel_vector * step_time ** 2
self.translate(self.coord)
self.rotation += self.rot_vel * step_time + 0.5 * self.rot_accel * step_time ** 2
self.rotate(self.rotation)
| 5,129 | Python | 32.529412 | 100 | 0.594463 |
ngzhili/SynTable/syntable_composer/src/scene/__init__.py |
from .asset import *
from .room import Room
from .scene import SceneManager
| 77 | Python | 14.599997 | 31 | 0.779221 |
ngzhili/SynTable/syntable_composer/src/scene/room.py | import numpy as np
from sampling import Sampler
from scene import RoomFace
class Room:
""" For managing a parameterizable rectangular prism centered at the origin. """
def __init__(self, sim_app, sim_context):
""" Construct Room. Generate room in Isaac SIM. """
self.sim_app = sim_app
self.sim_context = sim_context
self.stage = self.sim_context.stage
self.sample = Sampler().sample
self.room = self.scenario_room()
def scenario_room(self):
""" Generate and return assets creating a rectangular prism at the origin. """
wall_height = self.sample("wall_height")
floor_size = self.sample("floor_size")
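        # Build the room from axis-aligned planes: an optional floor at the origin, four optional walls
        # at +/- floor_size / 2, and an optional ceiling at wall_height. The scalings divide by 100,
        # presumably because the base Kit "Plane" mesh primitive is 100x100 scene units.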
self.room_faces = []
faces = []
coords = []
scalings = []
rotations = []
if self.sample("floor"):
faces.append("floor")
coords.append((0, 0, 0))
scalings.append((floor_size / 100, floor_size / 100, 1))
rotations.append((0, 0, 0))
if self.sample("wall"):
faces.extend(4 * ["wall"])
coords.append((floor_size / 2, 0, wall_height / 2))
coords.append((0, floor_size / 2, wall_height / 2))
coords.append((-floor_size / 2, 0, wall_height / 2))
coords.append((0, -floor_size / 2, wall_height / 2))
scalings.extend(4 * [(floor_size / 100, wall_height / 100, 1)])
rotations.append((90, 0, 90))
rotations.append((90, 0, 0))
rotations.append((90, 0, 90))
rotations.append((90, 0, 0))
if self.sample("ceiling"):
faces.append("ceiling")
coords.append((0, 0, wall_height))
scalings.append((floor_size / 100, floor_size / 100, 1))
rotations.append((0, 0, 0))
room = []
for i, face in enumerate(faces):
coord = np.array(coords[i])
rotation = np.array(rotations[i])
scaling = np.array(scalings[i])
path = "/World/Room/{}_{}".format(face, i)
room_face = RoomFace(self.sim_app, self.sim_context, path, face, coord, rotation, scaling)
room.append(room_face)
return room
def update(self):
""" Update room components. """
for room_face in self.room:
room_face.add_material()
| 2,363 | Python | 30.945946 | 102 | 0.544223 |
ngzhili/SynTable/syntable_composer/src/scene/camera1.py |
import math
import numpy as np
import carb
from scene.asset1 import Asset
from output import Logger
# from sampling import Sampler
from sampling.sample1 import Sampler
class Camera(Asset):
""" For managing a camera in Isaac Sim. """
def __init__(self, sim_app, sim_context, path, camera, group):
""" Construct Camera. """
self.sample = Sampler(group=group).sample
self.stereo = self.sample("stereo")
if self.stereo:
name = "stereo_cams"
else:
name = "mono_cam"
super().__init__(sim_app, sim_context, path, "camera", name, camera=camera, group=group)
self.load_camera()
def is_coord_camera_relative(self):
return False
def is_rot_camera_relative(self):
return False
def load_camera(self):
""" Create a camera in Isaac Sim. """
import omni
from pxr import Sdf, UsdGeom
from omni.isaac.core.prims import XFormPrim
from omni.isaac.core.utils import prims
self.prim = prims.create_prim(self.path, "Xform")
self.xform_prim = XFormPrim(self.path)
self.camera_rig = UsdGeom.Xformable(self.prim)
camera_prim_paths = []
if self.stereo:
camera_prim_paths.append(self.path + "/LeftCamera")
camera_prim_paths.append(self.path + "/RightCamera")
else:
camera_prim_paths.append(self.path + "/MonoCamera")
self.cameras = [
self.stage.DefinePrim(Sdf.Path(camera_prim_path), "Camera") for camera_prim_path in camera_prim_paths
]
focal_length = self.sample("focal_length")
focus_distance = self.sample("focus_distance")
horiz_aperture = self.sample("horiz_aperture")
vert_aperture = self.sample("vert_aperture")
f_stop = self.sample("f_stop")
for camera in self.cameras:
camera = UsdGeom.Camera(camera)
camera.GetFocalLengthAttr().Set(focal_length)
camera.GetFocusDistanceAttr().Set(focus_distance)
camera.GetHorizontalApertureAttr().Set(horiz_aperture)
camera.GetVerticalApertureAttr().Set(vert_aperture)
camera.GetFStopAttr().Set(f_stop)
# Set viewports
carb.settings.acquire_settings_interface().set_int("/app/renderer/resolution/width", -1)
carb.settings.acquire_settings_interface().set_int("/app/renderer/resolution/height", -1)
self.viewports = []
for i in range(len(self.cameras)):
if i == 0:
viewport_handle = omni.kit.viewport_legacy.get_viewport_interface().get_instance("Viewport")
else:
viewport_handle = omni.kit.viewport_legacy.get_viewport_interface().create_instance()
viewport_window = omni.kit.viewport_legacy.get_viewport_interface().get_viewport_window(viewport_handle)
viewport_window.set_texture_resolution(self.sample("img_width"), self.sample("img_height"))
viewport_window.set_active_camera(camera_prim_paths[i])
if self.stereo:
if i == 0:
viewport_name = "left"
else:
viewport_name = "right"
else:
viewport_name = "mono"
self.viewports.append((viewport_name, viewport_window))
self.sim_context.render()
self.sim_app.update()
# Set viewport window size
if self.stereo:
left_viewport = omni.ui.Workspace.get_window("Viewport")
right_viewport = omni.ui.Workspace.get_window("Viewport 2")
right_viewport.dock_in(left_viewport, omni.ui.DockPosition.RIGHT)
self.intrinsics = [self.get_intrinsics(camera) for camera in self.cameras]
# print(self.intrinsics)
def translate(self, coord):
""" Translate each camera asset. Find stereo positions, if needed. """
self.coord = coord
if self.sample("stereo"):
self.coords = self.get_stereo_coords(self.coord, self.rotation)
else:
self.coords = [self.coord]
for i, camera in enumerate(self.cameras):
viewport_name, viewport_window = self.viewports[i]
viewport_window.set_camera_position(
str(camera.GetPath()), self.coords[i][0], self.coords[i][1], self.coords[i][2], True
)
def rotate(self, rotation):
""" Rotate each camera asset. """
from pxr import UsdGeom
self.rotation = rotation
for i, camera in enumerate(self.cameras):
offset_cam_rot = self.rotation + np.array((90, 0, 270), dtype=np.float32)
UsdGeom.XformCommonAPI(camera).SetRotate(offset_cam_rot.tolist())
def place_in_scene(self):
""" Place camera in scene. """
rotation = self.get_initial_rotation()
self.rotate(rotation)
coord = self.get_initial_coord()
self.translate(coord)
self.step(0)
def get_stereo_coords(self, coord, rotation):
""" Convert camera center coord and rotation and return stereo camera coords. """
coords = []
for i in range(len(self.cameras)):
sign = 1 if i == 0 else -1
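            # Offset each camera by half the stereo baseline around the rig: the first (left) camera
            # sits at +90 degrees from the rig heading, the second (right) camera at -90 degrees.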
theta = np.radians(rotation[0] + sign * 90)
phi = np.radians(rotation[1])
radius = self.sample("stereo_baseline") / 2
# Add offset such that center of stereo cameras is at cam_coord
x = coord[0] + radius * np.cos(theta) * np.cos(phi)
y = coord[1] + radius * np.sin(theta) * np.cos(phi)
z = coord[2] + radius * sign * np.sin(phi)
coords.append(np.array((x, y, z)))
return coords
def get_intrinsics(self, camera):
""" Compute, print, and return camera intrinsics. """
from omni.syntheticdata import helpers
width = self.sample("img_width")
height = self.sample("img_height")
aspect_ratio = width / height
camera.GetAttribute("clippingRange").Set((0.01, 1000000)) # set clipping range
near, far = camera.GetAttribute("clippingRange").Get()
focal_length = camera.GetAttribute("focalLength").Get()
horiz_aperture = camera.GetAttribute("horizontalAperture").Get()
vert_aperture = camera.GetAttribute("verticalAperture").Get()
horiz_fov = 2 * math.atan(horiz_aperture / (2 * focal_length))
horiz_fov = np.degrees(horiz_fov)
vert_fov = 2 * math.atan(vert_aperture / (2 * focal_length))
vert_fov = np.degrees(vert_fov)
fx = width * focal_length / horiz_aperture
fy = height * focal_length / vert_aperture
cx = width * 0.5
cy = height * 0.5
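        # Standard pinhole relations: fx = width * focal_length / horiz_aperture and
        # fy = height * focal_length / vert_aperture give the focal lengths in pixels, with the
        # principal point (cx, cy) taken as the image center.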
proj_mat = helpers.get_projection_matrix(np.radians(horiz_fov), aspect_ratio, near, far)
with np.printoptions(precision=2, suppress=True):
proj_mat_str = str(proj_mat)
Logger.print("")
Logger.print("Camera intrinsics")
Logger.print("- width, height: {}, {}".format(round(width), round(height)))
Logger.print("- focal_length: {}".format(focal_length, 2))
Logger.print(
"- horiz_aperture, vert_aperture: {}, {}".format(round(horiz_aperture, 2), round(vert_aperture, 2))
)
Logger.print("- horiz_fov, vert_fov: {}, {}".format(round(horiz_fov, 2), round(vert_fov, 2)))
Logger.print("- focal_x, focal_y: {}, {}".format(round(fx, 2), round(fy, 2)))
Logger.print("- proj_mat: \n {}".format(str(proj_mat_str)))
Logger.print("")
cam_intrinsics = {
"width": width,
"height": height,
"focal_length": focal_length,
"horiz_aperture": horiz_aperture,
"vert_aperture": vert_aperture,
"horiz_fov": horiz_fov,
"vert_fov": vert_fov,
"fx": fx,
"fy": fy,
"cx": cx,
"cy": cy,
"proj_mat": proj_mat,
"near":near,
"far":far
}
return cam_intrinsics
def print_instance_attributes(self):
for attribute, value in self.__dict__.items():
print(attribute, '=', value)
def translate_rotate(self,target=(0,0,0)):
""" Translate each camera asset. Find stereo positions, if needed. """
for i, camera in enumerate(self.cameras):
viewport_name, viewport_window = self.viewports[i]
viewport_window.set_camera_target(str(camera.GetPath()), target[0], target[1], target[2], True)
| 8,574 | Python | 34.878661 | 116 | 0.586774 |
ngzhili/SynTable/syntable_composer/src/scene/light1.py | from sampling.sample1 import Sampler
from scene.asset1 import Asset
class Light(Asset):
""" For managing a light asset in Isaac Sim. """
def __init__(self, sim_app, sim_context, path, camera, group):
""" Construct Light. """
self.sample = Sampler(group=group).sample
self.distant = self.sample("light_distant")
self.directed = self.sample("light_directed")
if self.distant:
name = "distant_light"
elif self.directed:
name = "directed_light"
else:
name = "sphere_light"
super().__init__(sim_app, sim_context, path, "light", name, camera=camera, group=group)
self.load_light()
self.place_in_scene()
def place_in_scene(self):
""" Place light in scene. """
self.coord = self.get_initial_coord()
self.translate(self.coord)
self.rotation = self.get_initial_rotation()
self.rotate(self.rotation)
def load_light(self):
""" Create a light in Isaac Sim. """
from pxr import Sdf
from omni.usd.commands import ChangePropertyCommand
from omni.isaac.core.prims import XFormPrim
from omni.isaac.core.utils import prims
intensity = self.sample("light_intensity")
color = tuple(self.sample("light_color") / 255)
temp_enabled = self.sample("light_temp_enabled")
temp = self.sample("light_temp")
radius = self.sample("light_radius")
focus = self.sample("light_directed_focus")
focus_softness = self.sample("light_directed_focus_softness")
width = self.sample("light_width")
height = self.sample("light_height")
attributes = {}
if self.distant:
light_shape = "DistantLight"
elif self.directed:
light_shape = "RectLight"
attributes["width"] = width
attributes["height"] = height
else:
light_shape = "SphereLight"
attributes["radius"] = radius
attributes["intensity"] = intensity
attributes["color"] = color
if temp_enabled:
attributes["enableColorTemperature"] = True
attributes["colorTemperature"] = temp
self.attributes = attributes # added
self.prim = prims.create_prim(self.path, light_shape, attributes=attributes)
self.xform_prim = XFormPrim(self.path)
if self.directed:
ChangePropertyCommand(prop_path=Sdf.Path(self.path + ".shaping:focus"), value=focus, prev=0.0).do()
ChangePropertyCommand(
prop_path=Sdf.Path(self.path + ".shaping:cone:softness"), value=focus_softness, prev=0.0
            ).do()
def off_prim(self):
""" Turn Object Visibility off """
from omni.isaac.core.utils import prims
prims.set_prim_visibility(self.prim, False) | 2,880 | Python | 34.134146 | 111 | 0.599653 |
ngzhili/SynTable/syntable_composer/src/scene/object1.py | import numpy as np
import os
from scene.asset1 import Asset
class Object(Asset):
""" For managing an Xform asset in Isaac Sim. """
def __init__(self, sim_app, sim_context, ref, path, prefix, camera, group,tableBounds=None):
""" Construct Object. """
self.tableBounds = tableBounds
self.ref = ref
name = self.ref[self.ref.rfind("/") + 1 : self.ref.rfind(".")]
super().__init__(sim_app, sim_context, path, prefix, name, camera=camera, group=group)
self.load_asset()
self.place_in_scene()
if self.class_name != "RoomFace" and self.sample("obj_physics"):
self.add_physics()
def load_asset(self):
""" Create asset from object parameters. """
from omni.isaac.core.prims import XFormPrim
from omni.isaac.core.utils import prims
#print(self.path)
# Create object
self.prim = prims.create_prim(self.path, "Xform", semantic_label=self.label)
self.xform_prim = XFormPrim(self.path)
nested_path = os.path.join(self.path, "nested_prim")
self.nested_prim = prims.create_prim(nested_path, "Xform", usd_path=self.ref, semantic_label=self.label)
self.nested_xform_prim = XFormPrim(nested_path)
self.add_material()
def place_in_scene(self):
""" Scale, rotate, and translate asset. """
# Get asset dimensions
min_bound, max_bound = self.get_bounds()
size = max_bound - min_bound
# Get asset scaling
obj_size_is_enabled = self.sample("obj_size_enabled")
if obj_size_is_enabled:
obj_size = self.sample("obj_size")
max_size = max(size)
self.scaling = obj_size / max_size
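            # Scale uniformly so the asset's largest bounding-box dimension equals obj_size.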
else:
self.scaling = self.sample("obj_scale")
# Offset nested asset
obj_centered = self.sample("obj_centered")
if obj_centered:
offset = (max_bound + min_bound) / 2
self.translate(-offset, xform_prim=self.nested_xform_prim)
# Scale asset
self.scaling = np.array([self.scaling, self.scaling, self.scaling])
self.scale(self.scaling)
# Get asset coord and rotation
self.coord = self.get_initial_coord(tableBounds=self.tableBounds)
self.rotation = self.get_initial_rotation()
# Rotate asset
self.rotate(self.rotation)
# Place asset
self.translate(self.coord)
def get_bounds(self):
""" Compute min and max bounds of an asset. """
from omni.isaac.core.utils.bounds import compute_aabb, create_bbox_cache, recompute_extents
# recompute_extents(self.nested_prim)
cache = create_bbox_cache()
bound = compute_aabb(cache, self.path).tolist()
min_bound = np.array(bound[:3])
max_bound = np.array(bound[3:])
return min_bound, max_bound
def add_material(self):
""" Add material to asset, if needed. """
from pxr import UsdShade
material = self.sample(self.concat("material"))
color = self.sample(self.concat("color"))
texture = self.sample(self.concat("texture"))
texture_scale = self.sample(self.concat("texture_scale"))
texture_rot = self.sample(self.concat("texture_rot"))
reflectance = self.sample(self.concat("reflectance"))
metallic = self.sample(self.concat("metallicness"))
mtl_prim_path = None
if self.is_given(material):
# Load a material
mtl_prim_path = self.load_material_from_nucleus(material)
elif self.is_given(color) or self.is_given(texture):
# Load a new material
mtl_prim_path = self.create_material()
if mtl_prim_path:
# print(f"Adding {mtl_prim_path} to {self.path}")
# Update material properties and assign to asset
mtl_prim = self.update_material(
mtl_prim_path, color, texture, texture_scale, texture_rot, reflectance, metallic
)
UsdShade.MaterialBindingAPI(self.prim).Bind(mtl_prim, UsdShade.Tokens.strongerThanDescendants)
def load_material_from_nucleus(self, material):
""" Create material from Nucleus path. """
from pxr import Sdf
from omni.usd.commands import CreateMdlMaterialPrimCommand
mtl_url = self.sample("nucleus_server") + material
left_index = material.rfind("/") + 1 if "/" in material else 0
right_index = material.rfind(".") if "." in material else -1
mtl_name = material[left_index:right_index]
left_index = self.path.rfind("/") + 1 if "/" in self.path else 0
path_name = self.path[left_index:]
mtl_prim_path = "/Looks/" + mtl_name + "_" + path_name
mtl_prim_path = Sdf.Path(mtl_prim_path.replace("-", "_"))
CreateMdlMaterialPrimCommand(mtl_url=mtl_url, mtl_name=mtl_name, mtl_path=mtl_prim_path).do()
return mtl_prim_path
def create_material(self):
""" Create a OmniPBR material with provided properties and assign to asset. """
from pxr import Sdf
import omni
from omni.isaac.core.utils.prims import move_prim
from omni.kit.material.library import CreateAndBindMdlMaterialFromLibrary
mtl_created_list = []
CreateAndBindMdlMaterialFromLibrary(
mdl_name="OmniPBR.mdl", mtl_name="OmniPBR", mtl_created_list=mtl_created_list
).do()
mtl_prim_path = Sdf.Path(mtl_created_list[0])
new_mtl_prim_path = omni.usd.get_stage_next_free_path(self.stage, "/Looks/OmniPBR", False)
move_prim(path_from=mtl_prim_path, path_to=new_mtl_prim_path)
mtl_prim_path = new_mtl_prim_path
return mtl_prim_path
def update_material(self, mtl_prim_path, color, texture, texture_scale, texture_rot, reflectance, metallic):
""" Update properties of an existing material. """
import omni
from pxr import Sdf, UsdShade
mtl_prim = UsdShade.Material(self.stage.GetPrimAtPath(mtl_prim_path))
if self.is_given(color):
color = tuple(color / 255)
omni.usd.create_material_input(mtl_prim, "diffuse_color_constant", color, Sdf.ValueTypeNames.Color3f)
omni.usd.create_material_input(mtl_prim, "diffuse_tint", color, Sdf.ValueTypeNames.Color3f)
if self.is_given(texture):
texture = self.sample("nucleus_server") + texture
omni.usd.create_material_input(mtl_prim, "diffuse_texture", texture, Sdf.ValueTypeNames.Asset)
if self.is_given(texture_scale):
texture_scale = 1 / texture_scale
omni.usd.create_material_input(
mtl_prim, "texture_scale", (texture_scale, texture_scale), Sdf.ValueTypeNames.Float2
)
if self.is_given(texture_rot):
omni.usd.create_material_input(mtl_prim, "texture_rotate", texture_rot, Sdf.ValueTypeNames.Float)
if self.is_given(reflectance):
roughness = 1 - reflectance
omni.usd.create_material_input(
mtl_prim, "reflection_roughness_constant", roughness, Sdf.ValueTypeNames.Float
)
if self.is_given(metallic):
omni.usd.create_material_input(mtl_prim, "metallic_constant", metallic, Sdf.ValueTypeNames.Float)
return mtl_prim
def add_physics(self):
""" Make asset a rigid body to enable gravity and collision. """
from omni.isaac.core.utils.prims import get_all_matching_child_prims, get_prim_at_path
from omni.physx.scripts import utils
from pxr import UsdPhysics
def is_rigid_body(prim_path):
prim = get_prim_at_path(prim_path)
if prim.HasAPI(UsdPhysics.RigidBodyAPI):
return True
return False
has_physics_already = len(get_all_matching_child_prims(self.path, predicate=is_rigid_body)) > 0
if has_physics_already:
self.physics = True
return
utils.setRigidBody(self.prim, "convexHull", False)
# Set mass to 1 kg
mass_api = UsdPhysics.MassAPI.Apply(self.prim)
mass_api.CreateMassAttr(1)
self.physics = True
def print_instance_attributes(self):
for attribute, value in self.__dict__.items():
print(attribute, '=', value)
def off_physics_prim(self):
""" Turn Off Object Physics """
self.vel = (0,0,0)
self.rot_vel = (0,0,0)
self.accel = (0,0,0)
self.rot_accel = (0,0,0)
self.physics = False
def off_prim(self):
""" Turn Object Visibility off """
from omni.isaac.core.utils import prims
prims.set_prim_visibility(self.prim, False)
#print("\nTurn off visibility of prim;",self.prim)
#print("\n")
def on_prim(self):
""" Turn Object Visibility on """
from omni.isaac.core.utils import prims
prims.set_prim_visibility(self.prim, True)
#print("\nTurn on visibility of prim;",self.prim)
#print("\n")
def add_collision(self):
""" Turn Object Collision on """
from pxr import UsdPhysics
# prim = self.stage.GetPrimAtPath(path)
UsdPhysics.CollisionAPI.Apply(self.prim) | 9,291 | Python | 35.582677 | 113 | 0.613712 |
ngzhili/SynTable/syntable_composer/src/scene/asset/room_face.py |
from scene.asset import Object
class RoomFace(Object):
""" For managing an Xform asset in Isaac Sim. """
def __init__(self, sim_app, sim_context, path, prefix, coord, rotation, scaling):
""" Construct Object. """
self.coord = coord
self.rotation = rotation
self.scaling = scaling
super().__init__(sim_app, sim_context, "", path, prefix, None, None)
def load_asset(self):
""" Create asset from object parameters. """
from omni.isaac.core.prims import XFormPrim
from omni.isaac.core.utils.prims import move_prim
from pxr import PhysxSchema, UsdPhysics
if self.prefix == "floor":
# Create invisible ground plane
path = "/World/Room/ground"
planeGeom = PhysxSchema.Plane.Define(self.stage, path)
planeGeom.CreatePurposeAttr().Set("guide")
planeGeom.CreateAxisAttr().Set("Z")
prim = self.stage.GetPrimAtPath(path)
UsdPhysics.CollisionAPI.Apply(prim)
# Create plane
from omni.kit.primitive.mesh import CreateMeshPrimWithDefaultXformCommand
CreateMeshPrimWithDefaultXformCommand(prim_type="Plane").do()
move_prim(path_from="/Plane", path_to=self.path)
self.prim = self.stage.GetPrimAtPath(self.path)
self.xform_prim = XFormPrim(self.path)
def place_in_scene(self):
""" Scale, rotate, and translate asset. """
self.translate(self.coord)
self.rotate(self.rotation)
self.scale(self.scaling)
def step(self):
""" Room Face does not update in a scene's sequence. """
return | 1,656 | Python | 30.865384 | 85 | 0.622585 |
ngzhili/SynTable/syntable_composer/src/scene/asset/__init__.py | from .asset import Asset
from .camera import Camera
from .object import Object
from .light import Light
from .room_face import RoomFace
| 136 | Python | 21.83333 | 31 | 0.808824 |
ngzhili/SynTable/syntable_composer/src/distributions/choice.py |
import numpy as np
import os
from distributions import Distribution
class Choice(Distribution):
""" For sampling from a list of elems. """
def __init__(self, input, p=None, filter_list=None):
""" Construct Choice distribution. """
self.input = input
self.p = p
self.filter_list = filter_list
if self.p:
self.p = np.array(self.p)
self.p = self.p / np.sum(self.p)
def __repr__(self):
return "Choice(name={}, input={}, p={}, filter_list={})".format(self.name, self.input, self.p, self.filter_list)
def setup(self, name):
""" Process input into a list of elems, with filter_list elems removed. """
self.name = name
self.valid_file_types = Distribution.param_suffix_to_file_type.get(self.name[self.name.rfind("_") + 1 :], [])
self.elems = self.get_elem_list(self.input)
if self.filter_list:
filter_listed_elems = self.get_elem_list(self.filter_list)
elem_set = set(self.elems)
for elem in filter_listed_elems:
if elem in elem_set:
                    self.elems.remove(elem)
self.elems = self.unpack_elem_list(self.elems)
self.verify_args()
def verify_args(self):
""" Verify elem list derived from input args. """
if len(self.elems) == 0:
raise ValueError(repr(self) + " has no elems.")
        if self.p is not None:
if len(self.elems) != len(self.p):
raise ValueError(
repr(self)
+ " must have equal num p weights '{}' and num elems '{}'".format(len(self.elems), len(self.p))
)
if len(self.elems) > 1:
type_checks = []
for elem in self.elems:
if type(elem) in (int, float):
# Integer and Float equivalence
elem_types = [int, float]
elif type(elem) in (tuple, list, np.ndarray):
# Tuple and List equivalence
elem_types = [tuple, list, np.ndarray]
else:
elem_types = [type(elem)]
type_check = type(self.elems[0]) in elem_types
type_checks.append(type_check)
all_elems_same_val_type = all(type_checks)
if not all_elems_same_val_type:
raise ValueError(repr(self) + " must have elems that are all the same value type.")
def sample(self):
""" Samples from the list of elems. """
# print(self.__repr__())
# print('len(self.elems):',len(self.elems))
# print("self.elems:",self.elems)
if self.elems:
index = np.random.choice(len(self.elems), p=self.p)
sample = self.elems[index]
if type(sample) in (tuple, list):
sample = np.array(sample)
return sample
else:
return None
def get_type(self):
""" Get value type of elem list, which are all the same. """
return type(self.elems[0])
def get_elem_list(self, input):
""" Process input into a list of elems. """
elems = []
if type(input) is str and input[-4:] == ".txt":
input_file = input
file_elems = self.parse_input_file(input_file)
elems.extend(file_elems)
elif type(input) is list:
for elem in input:
list_elems = self.get_elem_list(elem)
elems.extend(list_elems)
else:
elem = input
if type(elem) in (tuple, list):
elem = np.array(elem)
            elems.append(elem)
return elems
def parse_input_file(self, input_file):
""" Parse an input file into a list of elems. """
if input_file.startswith("/"):
input_file = input_file
elif input_file.startswith("*"):
input_file = os.path.join(Distribution.mount, input_file[2:])
else:
input_file = os.path.join(os.path.dirname(__file__), "../../", input_file)
if not os.path.exists(input_file):
raise ValueError(repr(self) + " is unable to find file '{}'".format(input_file))
with open(input_file) as f:
lines = f.readlines()
lines = [line.strip() for line in lines]
file_elems = []
for elem in lines:
if elem and not elem.startswith("#"):
try:
elem = eval(elem)
if type(elem) in (tuple, list):
try:
elem = np.array(elem, dtype=np.float32)
except:
pass
except Exception as e:
pass
file_elems.append(elem)
return file_elems
def unpack_elem_list(self, elems):
""" Unpack all potential Nucleus server directories referenced in the parameter values. """
all_unpacked_elems = []
for elem in elems:
unpacked_elems = [elem]
if type(elem) is str:
if not elem.startswith("/"):
raise ValueError(repr(self) + " with path elem '{}' must start with a forward slash.".format(elem))
directory_elems = self.get_directory_elems(elem)
if directory_elems:
directory = elem
unpacked_elems = self.unpack_directory(directory_elems, directory)
# if "." in elem:
# file_type = elem[elem.rfind(".") :].lower()
# if file_type not in self.valid_file_types:
# raise ValueError(
# repr(self)
# + " has elem '{}' with incorrect file type. File type must be in '{}'.".format(
# elem, self.valid_file_types
# )
# )
all_unpacked_elems.extend(unpacked_elems)
elems = all_unpacked_elems
return elems
def unpack_directory(self, directory_elems, directory):
""" Unpack a directory on Nucleus into a list of file paths. """
unpacked_elems = []
for directory_elem in directory_elems:
directory_elem = os.path.join(directory, directory_elem)
file_type = directory_elem[directory_elem.rfind(".") :].lower()
if file_type in self.valid_file_types:
elem = os.path.join(directory, directory_elem)
unpacked_elems.append(elem)
else:
sub_directory_elems = self.get_directory_elems(directory_elem)
if sub_directory_elems:
# Recurse on subdirectories
unpacked_elems.extend(self.unpack_directory(sub_directory_elems, directory_elem))
return unpacked_elems
def get_directory_elems(self, elem):
""" Grab files in a potential Nucleus server directory. """
import omni.client
elem_can_be_nucleus_dir = "." not in os.path.basename(elem)
if elem_can_be_nucleus_dir:
(_, directory_elems) = omni.client.list(self.nucleus_server + elem)
directory_elems = [str(elem.relative_path) for elem in directory_elems]
return directory_elems
else:
return ()
| 7,523 | Python | 35 | 120 | 0.516682 |
ngzhili/SynTable/syntable_composer/src/distributions/__init__.py |
from .distribution import Distribution
from .choice import Choice
from .normal import Normal
from .range import Range
from .uniform import Uniform
from .walk import Walk
| 172 | Python | 18.22222 | 38 | 0.813953 |
ngzhili/SynTable/syntable_composer/src/distributions/distribution.py |
from abc import ABC, abstractmethod
class Distribution:
# Static variables
mount = None
nucleus_server = None
param_suffix_to_file_type = None
@abstractmethod
def __init__(self):
pass
@abstractmethod
def setup(self):
pass
@abstractmethod
def verify_args(self):
pass
@abstractmethod
def sample(self):
pass
@abstractmethod
def get_type(self):
pass
| 451 | Python | 13.580645 | 36 | 0.59867 |
ngzhili/SynTable/syntable_composer/src/distributions/normal.py |
import numpy as np
from distributions import Distribution
class Normal(Distribution):
""" For sampling a Gaussian. """
def __init__(self, mean, var, min=None, max=None):
""" Construct Normal distribution. """
self.mean = mean
self.var = var
self.min_val = min
self.max_val = max
def __repr__(self):
return "Normal(name={}, mean={}, var={}, min_bound={}, max_bound={})".format(
self.name, self.mean, self.var, self.min_val, self.max_val
)
def setup(self, name):
""" Parse input arguments. """
self.name = name
self.std_dev = np.sqrt(self.var)
self.verify_args()
def verify_args(self):
""" Verify input arguments. """
def verify_arg_i(mean, var, min_val, max_val):
""" Verify number values. """
if type(mean) not in (int, float):
raise ValueError(repr(self) + " has incorrect mean type.")
if type(var) not in (int, float):
raise ValueError(repr(self) + " has incorrect variance type.")
if var < 0:
raise ValueError(repr(self) + " must have non-negative variance.")
            if min_val is not None and type(min_val) not in (int, float):
raise ValueError(repr(self) + " has incorrect min type.")
            if max_val is not None and type(max_val) not in (int, float):
raise ValueError(repr(self) + " has incorrect max type.")
return True
valid = False
if type(self.mean) in (tuple, list) and type(self.var) in (tuple, list):
if len(self.mean) != len(self.var):
raise ValueError(repr(self) + " must have mean and variance with same length.")
if self.min_val and len(self.min_val) != len(self.mean):
raise ValueError(repr(self) + " must have mean and min bound with same length.")
if self.max_val and len(self.max_val) != len(self.mean):
raise ValueError(repr(self) + " must have mean and max bound with same length.")
valid = all(
[
verify_arg_i(
self.mean[i],
self.var[i],
self.min_val[i] if self.min_val else None,
self.max_val[i] if self.max_val else None,
)
for i in range(len(self.mean))
]
)
else:
valid = verify_arg_i(self.mean, self.var, self.min_val, self.max_val)
if not valid:
raise ValueError(repr(self) + " is invalid.")
def sample(self):
""" Sample from Gaussian. """
sample = np.random.normal(self.mean, self.std_dev)
if self.min_val is not None or self.max_val is not None:
sample = np.clip(sample, a_min=self.min_val, a_max=self.max_val)
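            # Note: clipping piles any out-of-bounds probability mass onto the bounds themselves,
            # so this is a clipped normal rather than a true truncated normal.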
return sample
def get_type(self):
if type(self.mean) in (tuple, list):
return tuple
else:
return float
| 3,091 | Python | 32.978022 | 96 | 0.524426 |
ngzhili/SynTable/syntable_composer/src/distributions/range.py |
import numpy as np
from distributions import Distribution
class Range(Distribution):
""" For sampling from a range of integers. """
def __init__(self, min_val, max_val):
""" Construct Range distribution. """
self.min_val = min_val
self.max_val = max_val
def __repr__(self):
return "Range(name={}, min={}, max={})".format(self.name, self.min_val, self.max_val)
def setup(self, name):
""" Parse input arguments. """
self.name = name
self.range = range(self.min_val, self.max_val + 1)
self.verify_args()
def verify_args(self):
""" Verify input arguments. """
def verify_args_i(min_val, max_val):
""" Verify number values. """
valid = False
if type(min_val) is int and type(max_val) is int:
valid = min_val <= max_val
return valid
valid = False
if type(self.min_val) in (tuple, list) and type(self.max_val) in (tuple, list):
if len(self.min_val) != len(self.max_val):
raise ValueError(repr(self) + " must have min and max with same length.")
valid = all([verify_args_i(self.min_val[i], self.max_val[i]) for i in range(len(self.min_val))])
else:
valid = verify_args_i(self.min_val, self.max_val)
if not valid:
raise ValueError(repr(self) + " is invalid.")
def sample(self):
""" Sample from discrete range. """
return np.random.choice(self.range)
def get_type(self):
""" Get value type. """
if type(self.min_val) in (tuple, list):
return tuple
else:
return int
| 1,707 | Python | 26.548387 | 108 | 0.54833 |
ngzhili/SynTable/syntable_composer/src/distributions/uniform.py |
import numpy as np
from distributions import Distribution
class Uniform(Distribution):
""" For sampling uniformly from a continuous range. """
def __init__(self, min_val, max_val):
""" Construct Uniform distribution."""
self.min_val = min_val
self.max_val = max_val
def __repr__(self):
return "Uniform(name={}, min={}, max={})".format(self.name, self.min_val, self.max_val)
def setup(self, name):
""" Parse input arguments. """
self.name = name
self.verify_args()
def verify_args(self):
""" Verify input arguments. """
def verify_args_i(min_val, max_val):
""" Verify number values. """
valid = False
if type(min_val) in (int, float) and type(max_val) in (int, float):
valid = min_val <= max_val
return valid
valid = False
if type(self.min_val) in (tuple, list) and type(self.max_val) in (tuple, list):
if len(self.min_val) != len(self.max_val):
raise ValueError(repr(self) + " must have min and max with same length.")
valid = all([verify_args_i(self.min_val[i], self.max_val[i]) for i in range(len(self.min_val))])
else:
valid = verify_args_i(self.min_val, self.max_val)
if not valid:
raise ValueError(repr(self) + " is invalid.")
def sample(self):
""" Sample from continuous range. """
return np.random.uniform(self.min_val, self.max_val)
def get_type(self):
""" Get value type. """
if type(self.min_val) in (tuple, list):
return tuple
else:
return float
| 1,700 | Python | 27.35 | 108 | 0.554118 |
ngzhili/SynTable/syntable_composer/src/distributions/walk.py |
import numpy as np
from distributions import Choice
class Walk(Choice):
""" For sampling from a list of elems without replacement. """
def __init__(self, input, filter_list=None, ordered=True):
""" Constructs a Walk distribution. """
super().__init__(input, filter_list=filter_list)
self.ordered = ordered
self.completed = False
self.index = 0
def __repr__(self):
return "Walk(name={}, input={}, filter_list={}, ordered={})".format(
self.name, self.input, self.filter_list, self.ordered
)
def setup(self, name):
""" Parse input arguments. """
self.name = name
if not self.ordered:
self.sampled_indices = list(range(len(self.elems)))
super().setup(name)
def sample(self):
""" Samples from list of elems and updates the index tracker. """
if self.ordered:
self.index %= len(self.elems)
sample = self.elems[self.index]
self.index += 1
else:
if len(self.sampled_indices) == 0:
self.sampled_indices = list(range(len(self.elems)))
            self.index = np.random.choice(self.sampled_indices)
self.sampled_indices.remove(self.index)
sample = self.elems[self.index]
if type(sample) in (tuple, list):
sample = np.array(sample)
return sample
| 1,416 | Python | 26.249999 | 76 | 0.567797 |
ngzhili/SynTable/syntable_composer/src/output/disparity.py |
import numpy as np
class DisparityConverter:
""" For converting stereo depth maps to stereo disparity maps. """
def __init__(self, depth_l, depth_r, fx, fy, cx, cy, baseline):
""" Construct DisparityConverter. """
self.depth_l = np.array(depth_l, dtype=np.float32)
self.depth_r = np.array(depth_r, dtype=np.float32)
self.fx = fx
self.fy = fy
self.cx = cx
self.cy = cy
self.baseline = baseline
def compute_disparity(self):
""" Computes a disparity map from left and right depth maps. """
# List all valid depths in the depth map
(y, x) = np.nonzero(np.invert(np.isnan(self.depth_l)))
depth_l = self.depth_l[y, x]
depth_r = self.depth_r[y, x]
# Compute disparity maps
disp_lr = self.depth_to_disparity(x, depth_l, self.baseline)
disp_rl = self.depth_to_disparity(x, depth_r, -self.baseline)
# Use numpy vectorization to get pixel coordinates
disp_l, disp_r = np.zeros(self.depth_l.shape), np.zeros(self.depth_r.shape)
disp_l[y, x] = np.abs(disp_lr)
disp_r[y, x] = np.abs(disp_rl)
disp_l = np.array(disp_l, dtype=np.float32)
disp_r = np.array(disp_r, dtype=np.float32)
return disp_l, disp_r
def depth_to_disparity(self, x, depth, baseline_offset):
""" Convert depth map to disparity map. """
# Backproject image to 3D world
x_est = (x - self.cx) * (depth / self.fx)
# Add baseline offset to 3D world position
x_est += baseline_offset
# Project to the other stereo image domain
x_pt = self.cx + (x_est / depth * self.fx)
# Compute disparity with the x-axis only since the left and right images are rectified
disp = x_pt - x
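        # For rectified stereo this reduces algebraically to disp = fx * baseline_offset / depth.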
return disp
| 1,827 | Python | 32.236363 | 94 | 0.595512 |
ngzhili/SynTable/syntable_composer/src/output/log.py | import datetime
import os
import time
import yaml
class Logger:
""" For logging parameter samples and dataset generation metadata. """
# Static variables set outside class
verbose = None
content_log_path = None
def start_log_entry(index):
""" Initialize a sample's log message. """
Logger.start_time = time.time()
Logger.log_entry = [{}]
Logger.log_entry[0]["index"] = index
Logger.log_entry[0]["metadata"] = {"params": [], "lines": []}
Logger.log_entry[0]["metadata"]["timestamp"] = str(datetime.datetime.now())
if Logger.verbose:
print()
def finish_log_entry():
""" Output a sample's log message to the end of the content log. """
duration = time.time() - Logger.start_time
Logger.log_entry[0]["time_elapsed"] = duration
if Logger.content_log_path:
with open(Logger.content_log_path, "a") as f:
yaml.safe_dump(Logger.log_entry, f)
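    # Illustrative shape of one appended log entry (field values are examples only, not from the repo):
    #   - index: 3
    #     metadata:
    #       timestamp: '2023-01-01 00:00:00.000000'
    #       params: [{parameter: obj_count, val: '4', group: object_a}]
    #       lines: ['===== Generating Scene: 3 =====']
    #     time_elapsed: 12.3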
def write_parameter(key, val, group=None):
""" Record a sample parameter value. """
if key == "groups":
return
param_dict = {}
param_dict["parameter"] = key
param_dict["val"] = str(val)
param_dict["group"] = group
Logger.log_entry[0]["metadata"]["params"].append(param_dict)
def print(line, force_print=False):
""" Record a string and potentially output it to console. """
Logger.log_entry[0]["metadata"]["lines"].append(line)
if Logger.verbose or force_print:
line = str(line)
print(line)
| 1,615 | Python | 27.350877 | 83 | 0.580805 |
ngzhili/SynTable/syntable_composer/src/output/__init__.py |
from .writer import DataWriter
from .disparity import DisparityConverter
from .metrics import Metrics
from .log import Logger
from .output import OutputManager
| 161 | Python | 22.142854 | 41 | 0.838509 |
ngzhili/SynTable/syntable_composer/src/output/metrics.py | import os
import yaml
class Metrics:
""" For managing performance metrics of dataset generation. """
def __init__(self, log_dir, content_log_path):
""" Construct Metrics. """
self.metric_path = os.path.join(log_dir, "metrics.txt")
self.content_log_path = content_log_path
def output_performance_metrics(self):
""" Collect per-scene metrics and calculate and output summary metrics. """
with open(self.content_log_path, "r") as f:
log = yaml.safe_load(f)
durations = []
for log_entry in log:
if type(log_entry["index"]) is int:
durations.append(log_entry["time_elapsed"])
durations.sort()
metric_packet = {}
n = len(durations)
metric_packet["time_per_sample_min"] = durations[0]
metric_packet["time_per_sample_first_quartile"] = durations[n // 4]
metric_packet["time_per_sample_median"] = durations[n // 2]
metric_packet["time_per_sample_third_quartile"] = durations[3 * n // 4]
metric_packet["time_per_sample_max"] = durations[-1]
metric_packet["time_per_sample_mean"] = sum(durations) / n
with open(self.metric_path, "w") as f:
yaml.safe_dump(metric_packet, f)
| 1,273 | Python | 31.666666 | 83 | 0.598586 |
ngzhili/SynTable/syntable_composer/src/output/output.py | import copy
import numpy as np
import carb
from output import DataWriter, DisparityConverter, Logger
from sampling import Sampler
class OutputManager:
""" For managing Composer outputs, including sending data to the data writer. """
def __init__(self, sim_app, sim_context, scene_manager, output_data_dir, scene_units_in_meters):
""" Construct OutputManager. Start data writer threads. """
from omni.isaac.synthetic_utils import SyntheticDataHelper
self.sim_app = sim_app
self.sim_context = sim_context
self.scene_manager = scene_manager
self.output_data_dir = output_data_dir
self.scene_units_in_meters = scene_units_in_meters
self.camera = self.scene_manager.camera
self.viewports = self.camera.viewports
self.stage = self.sim_context.stage
self.sample = Sampler().sample
self.groundtruth_visuals = self.sample("groundtruth_visuals")
self.label_to_class_id = self.get_label_to_class_id()
max_queue_size = 500
self.write_data = self.sample("write_data")
if self.write_data:
self.data_writer = DataWriter(self.output_data_dir, self.sample("num_data_writer_threads"), max_queue_size)
self.data_writer.start_threads()
self.sd_helper = SyntheticDataHelper()
self.gt_list = []
if self.sample("rgb") or (
self.sample("bbox_2d_tight")
or self.sample("bbox_2d_loose")
or self.sample("bbox_3d")
and self.groundtruth_visuals
):
self.gt_list.append("rgb")
if (self.sample("depth")) or (self.sample("disparity") and self.sample("stereo")):
self.gt_list.append("depthLinear")
if self.sample("instance_seg"):
self.gt_list.append("instanceSegmentation")
if self.sample("semantic_seg"):
self.gt_list.append("semanticSegmentation")
if self.sample("bbox_2d_tight"):
self.gt_list.append("boundingBox2DTight")
if self.sample("bbox_2d_loose"):
self.gt_list.append("boundingBox2DLoose")
if self.sample("bbox_3d"):
self.gt_list.append("boundingBox3D")
for viewport_name, viewport_window in self.viewports:
self.sd_helper.initialize(sensor_names=self.gt_list, viewport=viewport_window)
self.sim_app.update()
self.carb_settings = carb.settings.acquire_settings_interface()
def get_label_to_class_id(self):
""" Get mapping of object semantic labels to class ids. """
label_to_class_id = {}
groups = self.sample("groups")
for group in groups:
class_id = self.sample("obj_class_id", group=group)
label_to_class_id[group] = class_id
label_to_class_id["[[scenario]]"] = self.sample("scenario_class_id")
return label_to_class_id
def capture_groundtruth(self, index, step_index=0, sequence_length=0):
""" Capture groundtruth data from Isaac Sim. Send data to data writer. """
depths = []
all_viewport_data = []
for i in range(len(self.viewports)):
self.sim_context.render()
self.sim_context.render()
viewport_name, viewport_window = self.viewports[i]
num_digits = len(str(self.sample("num_scenes") - 1))
id = str(index)
id = id.zfill(num_digits)
if self.sample("sequential"):
num_digits = len(str(sequence_length - 1))
suffix_id = str(step_index)
suffix_id = suffix_id.zfill(num_digits)
id = id + "_" + suffix_id
groundtruth = {
"METADATA": {
"image_id": id,
"viewport_name": viewport_name,
"DEPTH": {},
"INSTANCE": {},
"SEMANTIC": {},
"BBOX2DTIGHT": {},
"BBOX2DLOOSE": {},
"BBOX3D": {},
},
"DATA": {},
}
# Collect Groundtruth
self.sim_context.render()
self.sim_context.render()
gt = copy.deepcopy(self.sd_helper.get_groundtruth(self.gt_list, viewport_window, wait_for_sensor_data=0.2))
# RGB
if "rgb" in gt["state"]:
if gt["state"]["rgb"]:
groundtruth["DATA"]["RGB"] = gt["rgb"]
# Depth (for Disparity)
if "depthLinear" in gt["state"]:
depth_data = copy.deepcopy(gt["depthLinear"]).squeeze()
# Convert to scene units
depth_data /= self.scene_units_in_meters
depths.append(depth_data)
if i == 0 or self.sample("groundtruth_stereo"):
# Depth
if "depthLinear" in gt["state"]:
if self.sample("depth"):
depth_data = gt["depthLinear"].squeeze()
# Convert to scene units
depth_data /= self.scene_units_in_meters
groundtruth["DATA"]["DEPTH"] = depth_data
groundtruth["METADATA"]["DEPTH"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["DEPTH"]["NPY"] = True
# Instance Segmentation
if "instanceSegmentation" in gt["state"]:
instance_data = gt["instanceSegmentation"][0]
groundtruth["DATA"]["INSTANCE"] = instance_data
groundtruth["METADATA"]["INSTANCE"]["WIDTH"] = instance_data.shape[1]
groundtruth["METADATA"]["INSTANCE"]["HEIGHT"] = instance_data.shape[0]
groundtruth["METADATA"]["INSTANCE"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["INSTANCE"]["NPY"] = True
# Semantic Segmentation
if "semanticSegmentation" in gt["state"]:
semantic_data = gt["semanticSegmentation"]
semantic_data = self.sd_helper.get_mapped_semantic_data(
semantic_data, self.label_to_class_id, remap_using_base_class=True
)
semantic_data = np.array(semantic_data)
semantic_data[semantic_data == 65535] = 0 # deals with invalid semantic id
groundtruth["DATA"]["SEMANTIC"] = semantic_data
groundtruth["METADATA"]["SEMANTIC"]["WIDTH"] = semantic_data.shape[1]
groundtruth["METADATA"]["SEMANTIC"]["HEIGHT"] = semantic_data.shape[0]
groundtruth["METADATA"]["SEMANTIC"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["SEMANTIC"]["NPY"] = True
# 2D Tight BBox
if "boundingBox2DTight" in gt["state"]:
groundtruth["DATA"]["BBOX2DTIGHT"] = gt["boundingBox2DTight"]
groundtruth["METADATA"]["BBOX2DTIGHT"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["BBOX2DTIGHT"]["NPY"] = True
# 2D Loose BBox
if "boundingBox2DLoose" in gt["state"]:
groundtruth["DATA"]["BBOX2DLOOSE"] = gt["boundingBox2DLoose"]
groundtruth["METADATA"]["BBOX2DLOOSE"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["BBOX2DLOOSE"]["NPY"] = True
# 3D BBox
if "boundingBox3D" in gt["state"]:
groundtruth["DATA"]["BBOX3D"] = gt["boundingBox3D"]
groundtruth["METADATA"]["BBOX3D"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["BBOX3D"]["NPY"] = True
all_viewport_data.append(groundtruth)
# Wireframe
if self.sample("wireframe"):
self.carb_settings.set("/rtx/wireframe/mode", 2.0)
# Need two updates for all viewports to have wireframe properly
self.sim_context.render()
self.sim_context.render()
for i in range(len(self.viewports)):
viewport_name, viewport_window = self.viewports[i]
gt = copy.deepcopy(self.sd_helper.get_groundtruth(["rgb"], viewport_window))
all_viewport_data[i]["DATA"]["WIREFRAME"] = gt["rgb"]
self.carb_settings.set("/rtx/wireframe/mode", 0)
self.sim_context.render()
for i in range(len(self.viewports)):
if self.write_data:
self.data_writer.q.put(copy.deepcopy(all_viewport_data[i]))
# Disparity
if self.sample("disparity") and self.sample("stereo"):
depth_l, depth_r = depths
cam_intrinsics = self.camera.intrinsics[0]
disp_convert = DisparityConverter(
depth_l,
depth_r,
cam_intrinsics["fx"],
cam_intrinsics["fy"],
cam_intrinsics["cx"],
cam_intrinsics["cy"],
self.sample("stereo_baseline"),
)
disp_l, disp_r = disp_convert.compute_disparity()
disparities = [disp_l, disp_r]
for i in range(len(self.viewports)):
if i == 0 or self.sample("groundtruth_stereo"):
viewport_name, viewport_window = self.viewports[i]
groundtruth = {
"METADATA": {"image_id": id, "viewport_name": viewport_name, "DISPARITY": {}},
"DATA": {},
}
disparity_data = disparities[i]
groundtruth["DATA"]["DISPARITY"] = disparity_data
groundtruth["METADATA"]["DISPARITY"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["DISPARITY"]["NPY"] = True
if self.write_data:
self.data_writer.q.put(copy.deepcopy(groundtruth))
return groundtruth
| 10,140 | Python | 41.970339 | 126 | 0.537673 |
ngzhili/SynTable/syntable_composer/src/output/writer.py | import atexit
import numpy as np
import os
from PIL import Image
import queue
import sys
import threading
class DataWriter:
""" For processing and writing output data to files. """
def __init__(self, data_dir, num_worker_threads, max_queue_size=500):
""" Construct DataWriter. """
from omni.isaac.synthetic_utils import visualization
self.visualization = visualization
atexit.register(self.stop_threads)
self.data_dir = data_dir
# Threading for multiple scenes
self.num_worker_threads = num_worker_threads
# Initialize queue with a specified size
self.q = queue.Queue(max_queue_size)
self.threads = []
def start_threads(self):
""" Start worker threads. """
for _ in range(self.num_worker_threads):
t = threading.Thread(target=self.worker, daemon=True)
t.start()
self.threads.append(t)
def stop_threads(self):
""" Waits for all tasks to be completed before stopping worker threads. """
print("Finish writing data...")
# Block until all tasks are done
self.q.join()
print("Done.")
def worker(self):
""" Processes task from queue. Each tasks contains groundtruth data and metadata which is used to transform the output and write it to disk. """
while True:
groundtruth = self.q.get()
if groundtruth is None:
break
filename = groundtruth["METADATA"]["image_id"]
viewport_name = groundtruth["METADATA"]["viewport_name"]
for gt_type, data in groundtruth["DATA"].items():
if gt_type == "RGB":
self.save_image(viewport_name, gt_type, data, filename)
elif gt_type == "WIREFRAME":
self.save_image(viewport_name, gt_type, data, filename)
elif gt_type == "DEPTH":
if groundtruth["METADATA"]["DEPTH"]["NPY"]:
self.save_PFM(viewport_name, gt_type, data, filename)
if groundtruth["METADATA"]["DEPTH"]["COLORIZE"]:
self.save_image(viewport_name, gt_type, data, filename)
elif gt_type == "DISPARITY":
if groundtruth["METADATA"]["DISPARITY"]["NPY"]:
self.save_PFM(viewport_name, gt_type, data, filename)
if groundtruth["METADATA"]["DISPARITY"]["COLORIZE"]:
self.save_image(viewport_name, gt_type, data, filename)
elif gt_type == "INSTANCE":
self.save_segmentation(
viewport_name,
gt_type,
data,
filename,
groundtruth["METADATA"]["INSTANCE"]["WIDTH"],
groundtruth["METADATA"]["INSTANCE"]["HEIGHT"],
groundtruth["METADATA"]["INSTANCE"]["COLORIZE"],
groundtruth["METADATA"]["INSTANCE"]["NPY"],
)
elif gt_type == "SEMANTIC":
self.save_segmentation(
viewport_name,
gt_type,
data,
filename,
groundtruth["METADATA"]["SEMANTIC"]["WIDTH"],
groundtruth["METADATA"]["SEMANTIC"]["HEIGHT"],
groundtruth["METADATA"]["SEMANTIC"]["COLORIZE"],
groundtruth["METADATA"]["SEMANTIC"]["NPY"],
)
elif gt_type in ["BBOX2DTIGHT", "BBOX2DLOOSE", "BBOX3D"]:
self.save_bbox(
viewport_name,
gt_type,
data,
filename,
groundtruth["METADATA"][gt_type]["COLORIZE"],
groundtruth["DATA"]["RGB"],
groundtruth["METADATA"][gt_type]["NPY"],
)
elif gt_type == "CAMERA":
self.camera_folder = self.data_dir + "/" + str(viewport_name) + "/camera/"
np.save(self.camera_folder + filename + ".npy", data)
elif gt_type == "POSES":
self.poses_folder = self.data_dir + "/" + str(viewport_name) + "/poses/"
np.save(self.poses_folder + filename + ".npy", data)
else:
raise NotImplementedError
self.q.task_done()
def save_segmentation(
self, viewport_name, data_type, data, filename, width=1280, height=720, display_rgb=True, save_npy=True
):
""" Save segmentation mask data and visuals. """
# Save ground truth data as 16-bit single channel png
if save_npy:
if data_type == "INSTANCE":
data_folder = os.path.join(self.data_dir, viewport_name, "instance")
data = np.array(data, dtype=np.uint8)
img = Image.fromarray(data, mode="L")
elif data_type == "SEMANTIC":
data_folder = os.path.join(self.data_dir, viewport_name, "semantic")
data = np.array(data, dtype=np.uint8)
img = Image.fromarray(data, mode="L")
os.makedirs(data_folder, exist_ok=True)
file = os.path.join(data_folder, filename + ".png")
img.save(file, "PNG", bits=16)
# Save ground truth data as visuals
if display_rgb:
image_data = np.frombuffer(data, dtype=np.uint8).reshape(*data.shape, -1)
image_data += 1
if data_type == "SEMANTIC":
# Move close values apart to allow color values to separate more
image_data = np.array((image_data * 17) % 256, dtype=np.uint8)
color_image = self.visualization.colorize_segmentation(image_data, width, height, 3, None)
color_image = color_image[:, :, :3]
color_image_rgb = Image.fromarray(color_image, "RGB")
if data_type == "INSTANCE":
data_folder = os.path.join(self.data_dir, viewport_name, "instance", "visuals")
elif data_type == "SEMANTIC":
data_folder = os.path.join(self.data_dir, viewport_name, "semantic", "visuals")
os.makedirs(data_folder, exist_ok=True)
file = os.path.join(data_folder, filename + ".png")
color_image_rgb.save(file, "PNG")
def save_image(self, viewport_name, img_type, image_data, filename):
""" Save rgb data, depth visuals, and disparity visuals. """
# Convert 1-channel groundtruth data to visualization image data
def normalize_greyscale_image(image_data):
image_data = np.reciprocal(image_data)
image_data[image_data == 0.0] = 1e-5
image_data = np.clip(image_data, 0, 255)
image_data -= np.min(image_data)
if np.max(image_data) > 0:
image_data /= np.max(image_data)
image_data *= 255
image_data = image_data.astype(np.uint8)
return image_data
# Save image data as png
if img_type == "RGB":
data_folder = os.path.join(self.data_dir, viewport_name, "rgb")
image_data = image_data[:, :, :3]
img = Image.fromarray(image_data, "RGB")
elif img_type == "WIREFRAME":
data_folder = os.path.join(self.data_dir, viewport_name, "wireframe")
image_data = np.average(image_data, axis=2)
image_data = image_data.astype(np.uint8)
img = Image.fromarray(image_data, "L")
elif img_type == "DEPTH":
image_data = image_data * 100
image_data = normalize_greyscale_image(image_data)
data_folder = os.path.join(self.data_dir, viewport_name, "depth", "visuals")
img = Image.fromarray(image_data, mode="L")
elif img_type == "DISPARITY":
image_data = normalize_greyscale_image(image_data)
data_folder = os.path.join(self.data_dir, viewport_name, "disparity", "visuals")
img = Image.fromarray(image_data, mode="L")
os.makedirs(data_folder, exist_ok=True)
file = os.path.join(data_folder, filename + ".png")
img.save(file, "PNG")
def save_bbox(self, viewport_name, data_type, data, filename, display_rgb=True, rgb_data=None, save_npy=True):
""" Save bbox data and visuals. """
# Save ground truth data as npy
if save_npy:
if data_type == "BBOX2DTIGHT":
data_folder = os.path.join(self.data_dir, viewport_name, "bbox_2d_tight")
elif data_type == "BBOX2DLOOSE":
data_folder = os.path.join(self.data_dir, viewport_name, "bbox_2d_loose")
elif data_type == "BBOX3D":
data_folder = os.path.join(self.data_dir, viewport_name, "bbox_3d")
os.makedirs(data_folder, exist_ok=True)
file = os.path.join(data_folder, filename)
np.save(file, data)
# Save ground truth data and rgb data as visuals
if display_rgb and rgb_data is not None:
color_image = self.visualization.colorize_bboxes(data, rgb_data)
color_image = color_image[:, :, :3]
color_image_rgb = Image.fromarray(color_image, "RGB")
if data_type == "BBOX2DTIGHT":
data_folder = os.path.join(self.data_dir, viewport_name, "bbox_2d_tight", "visuals")
if data_type == "BBOX2DLOOSE":
data_folder = os.path.join(self.data_dir, viewport_name, "bbox_2d_loose", "visuals")
if data_type == "BBOX3D":
# 3D BBox visuals are not yet supported
return
os.makedirs(data_folder, exist_ok=True)
file = os.path.join(data_folder, filename + ".png")
color_image_rgb.save(file, "PNG")
def save_PFM(self, viewport_name, data_type, data, filename):
""" Save Depth and Disparity data. """
if data_type == "DEPTH":
data_folder = os.path.join(self.data_dir, viewport_name, "depth")
elif data_type == "DISPARITY":
data_folder = os.path.join(self.data_dir, viewport_name, "disparity")
os.makedirs(data_folder, exist_ok=True)
file = os.path.join(data_folder, filename + ".pfm")
self.write_PFM(file, data)
def write_PFM(self, file, image, scale=1):
""" Convert numpy matrix into PFM and save. """
file = open(file, "wb")
color = None
if image.dtype.name != "float32":
raise Exception("Image dtype must be float32")
image = np.flipud(image)
if len(image.shape) == 3 and image.shape[2] == 3: # color image
color = True
elif len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1: # greyscale
color = False
else:
raise Exception("Image must have H x W x 3, H x W x 1 or H x W dimensions.")
file.write(b"PF\n" if color else b"Pf\n")
file.write(b"%d %d\n" % (image.shape[1], image.shape[0]))
endian = image.dtype.byteorder
if endian == "<" or endian == "=" and sys.byteorder == "little":
scale = -scale
file.write(b"%f\n" % scale)
image.tofile(file)
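# --- Usage sketch (illustrative only, not part of the original module) ---
# DataWriter is driven by queueing "groundtruth" dicts shaped like the ones
# consumed in worker() above. The output directory, thread count, and depth
# array below are placeholders.
#
#   writer = DataWriter("output/dataset", num_worker_threads=4)
#   writer.start_threads()
#   groundtruth = {
#       "METADATA": {"image_id": "0", "viewport_name": "mono",
#                    "DEPTH": {"NPY": True, "COLORIZE": False}},
#       "DATA": {"DEPTH": depth_array},  # float32 numpy array (required by write_PFM)
#   }
#   writer.q.put(groundtruth)   # picked up asynchronously by a worker thread
#   writer.stop_threads()       # blocks until the queue is fully processed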
| 11,473 | Python | 41.496296 | 152 | 0.537784 |
ngzhili/SynTable/syntable_composer/src/output/output1.py | import os
import copy
import numpy as np
import cv2
import carb
import datetime
from output import DisparityConverter, Logger
# from sampling import Sampler
from sampling.sample1 import Sampler
# from omni.isaac.core.utils import prims
from output.writer1 import DataWriter
from helper_functions import compute_occluded_masks, GenericMask, bbox_from_binary_mask # Added
import pycocotools.mask as mask_util
class OutputManager:
""" For managing Composer outputs, including sending data to the data writer. """
def __init__(self, sim_app, sim_context, scene_manager, output_data_dir, scene_units_in_meters):
""" Construct OutputManager. Start data writer threads. """
from omni.isaac.synthetic_utils.syntheticdata import SyntheticDataHelper
self.sim_app = sim_app
self.sim_context = sim_context
self.scene_manager = scene_manager
self.output_data_dir = output_data_dir
self.scene_units_in_meters = scene_units_in_meters
self.camera = self.scene_manager.camera
self.viewports = self.camera.viewports
self.stage = self.sim_context.stage
self.sample = Sampler().sample
self.groundtruth_visuals = self.sample("groundtruth_visuals")
self.label_to_class_id = self.get_label_to_class_id1()
max_queue_size = 500
self.save_segmentation_data = self.sample("save_segmentation_data")
self.write_data = self.sample("write_data")
if self.write_data:
self.data_writer = DataWriter(self.output_data_dir, self.sample("num_data_writer_threads"), self.save_segmentation_data, max_queue_size)
self.data_writer.start_threads()
self.sd_helper = SyntheticDataHelper()
self.gt_list = []
if self.sample("rgb") or (
self.sample("bbox_2d_tight")
or self.sample("bbox_2d_loose")
or self.sample("bbox_3d")
and self.groundtruth_visuals
):
self.gt_list.append("rgb")
if (self.sample("depth")) or (self.sample("disparity") and self.sample("stereo")):
self.gt_list.append("depthLinear")
if self.sample("instance_seg"):
self.gt_list.append("instanceSegmentation")
if self.sample("semantic_seg"):
self.gt_list.append("semanticSegmentation")
if self.sample("bbox_2d_tight"):
self.gt_list.append("boundingBox2DTight")
if self.sample("bbox_2d_loose"):
self.gt_list.append("boundingBox2DLoose")
if self.sample("bbox_3d"):
self.gt_list.append("boundingBox3D")
for viewport_name, viewport_window in self.viewports:
self.sd_helper.initialize(sensor_names=self.gt_list, viewport=viewport_window)
self.sim_app.update()
self.carb_settings = carb.settings.acquire_settings_interface()
def get_label_to_class_id(self):
""" Get mapping of object semantic labels to class ids. """
label_to_class_id = {}
groups = self.sample("groups")
for group in groups:
class_id = self.sample("obj_class_id", group=group)
label_to_class_id[group] = class_id
label_to_class_id["[[scenario]]"] = self.sample("scenario_class_id")
return label_to_class_id
def get_label_to_class_id1(self):
""" Get mapping of object semantic labels to class ids. """
label_to_class_id = {}
groups = self.sample("groups")
for group in groups:
class_id = self.sample("obj_class_id", group=group)
label_to_class_id[group] = class_id
label_to_class_id["[[scenario]]"] = self.sample("scenario_class_id")
return label_to_class_id
def capture_amodal_groundtruth(self, index, scene_manager, img_index, ann_index,
view_id, img_list, ann_list,
step_index=0, sequence_length=0):
""" Capture groundtruth data from Isaac Sim. Send data to data writer. """
num_objects = len(scene_manager.objs) # get number of objects in scene
objects = scene_manager.objs # get all objects in scene
depths = []
all_viewport_data = []
for i in range(len(self.viewports)):
viewport_name, viewport_window = self.viewports[i]
num_digits = len(str(self.sample("num_scenes") - 1))
img_id = str(index) + "_" + str(view_id)
groundtruth = {
"METADATA": {
"image_id": img_id,
"viewport_name": viewport_name,
"RGB":{},
"DEPTH": {},
"INSTANCE": {},
"SEMANTIC": {},
"BBOX2DTIGHT": {},
"BBOX2DLOOSE": {},
"BBOX3D": {},
},
"DATA": {},
}
""" =================================================================
===== Collect Viewport's RGB/DEPTH and object visible masks =====
================================================================= """
gt = copy.deepcopy(self.sd_helper.get_groundtruth(self.gt_list, viewport_window, wait_for_sensor_data=0.1))
# RGB
if "rgb" in gt["state"]:
if gt["state"]["rgb"]:
groundtruth["DATA"]["RGB"] = gt["rgb"]
# Depth (for Disparity)
if "depthLinear" in gt["state"]:
depth_data = copy.deepcopy(gt["depthLinear"]).squeeze()
# Convert to scene units
depth_data /= self.scene_units_in_meters
depths.append(depth_data)
if i == 0 or self.sample("groundtruth_stereo"):
# Depth
if "depthLinear" in gt["state"]:
if self.sample("depth"):
depth_data = gt["depthLinear"].squeeze()
# Convert to scene units
depth_data /= self.scene_units_in_meters
groundtruth["DATA"]["DEPTH"] = depth_data
groundtruth["METADATA"]["DEPTH"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["DEPTH"]["NPY"] = True
# Instance Segmentation
if "instanceSegmentation" in gt["state"]:
semantics = list(self.label_to_class_id.keys())
instance_data, instance_mappings = self.sd_helper.sensor_helpers["instanceSegmentation"](
viewport_window, parsed=False, return_mapping=True)
instances_list = [(im[0], im[4], im["semanticLabel"]) for im in instance_mappings][::-1]
max_instance_id_list = max([max(il[1]) for il in instances_list])
max_instance_id = instance_data.max()
lut = np.zeros(max(max_instance_id, max_instance_id_list) + 1, dtype=np.uint32)
for uid, il, sem in instances_list:
if sem in semantics and sem != "[[scenario]]":
lut[np.array(il)] = uid
instance_data = np.take(lut, instance_data)
if self.save_segmentation_data:
groundtruth["DATA"]["INSTANCE"] = instance_data
groundtruth["METADATA"]["INSTANCE"]["WIDTH"] = instance_data.shape[1]
groundtruth["METADATA"]["INSTANCE"]["HEIGHT"] = instance_data.shape[0]
groundtruth["METADATA"]["INSTANCE"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["INSTANCE"]["NPY"] = True
# get visible instance segmentation of all objects in scene
instance_map = list(np.unique(instance_data))[1:]
org_instance_data_np = np.array(instance_data)
org_instance_data = instance_data
instance_mappings_dict ={}
for obj_prim in instance_mappings:
inst_id = obj_prim[0]
inst_path = obj_prim[1]
instance_mappings_dict[inst_path]= inst_id
all_viewport_data.append(groundtruth)
""" ==== define image info dict ==== """
height, width, _ = gt["rgb"].shape
date_captured = str(datetime.datetime.now())
image_info = {
"id": img_index,
"file_name": f"data/mono/rgb/{img_id}.png",
"depth_file_name": f"data/mono/depth/{img_id}.png",
"occlusion_order_file_name": f"data/mono/occlusion_order/{img_id}.npy",
"width": width,
"height": height,
"date_captured": date_captured,
"license": 1,
"coco_url": "",
"flickr_url": ""
}
""" =====================================
===== Collect Background Masks ======
===================================== """
if self.sample("save_background"):
groundtruth = {
"METADATA": {
"image_id": str(img_index) + "_background",
"viewport_name": viewport_name,
"DEPTH": {},
"INSTANCE": {},
"SEMANTIC": {},
"AMODAL": {},
"OCCLUSION": {},
"BBOX2DTIGHT": {},
"BBOX2DLOOSE": {},
"BBOX3D": {},
},
"DATA": {},
}
ann_info = {
"id": ann_index,
"image_id": img_index,
"category_id": 0,
"bbox": [],
"height": height,
"width": width,
"object_name":"",
"iscrowd": 0,
"segmentation": {
"size": [
height,
width
],
"counts": "",
"area": 0
},
"area": 0,
"visible_mask": {
"size": [
height,
width
],
"counts": "",
"area": 0
},
"visible_bbox": [],
"occluded_mask": {
"size": [
height,
width
],
"counts": "",
"area": 0
},
"occluded_rate": 0.0
}
ann_info["object_name"] = "background"
""" ===== extract visible mask ===== """
curr_instance_data_np = org_instance_data_np.copy()
# find pixels that belong to background class
instance_id = 0
curr_instance_data_np[np.where(org_instance_data != instance_id)] = 0
curr_instance_data_np[np.where(org_instance_data == instance_id)] = 1
background_visible_mask = curr_instance_data_np.astype(np.uint8)
""" ===== extract amodal mask ===== """ # background assumed to be binary mask of np.ones
background_amodal_mask = np.ones(background_visible_mask.shape).astype(np.uint8) # get object amodal mask
""" ===== calculate occlusion mask ===== """
background_occ_mask = cv2.absdiff(background_amodal_mask, background_visible_mask)
""" ===== calculate occlusion rate ===== """ # assumes binary mask (True == 1)
background_occ_mask_pixel_count = background_occ_mask.sum()
background_amodal_mask_pixel_count = background_amodal_mask.sum()
occlusion_rate = round(background_occ_mask_pixel_count / background_amodal_mask_pixel_count, 2)
if occlusion_rate < 1: # fully occluded objects are not considered
if self.save_segmentation_data:
groundtruth["DATA"]["INSTANCE"] = background_visible_mask
groundtruth["METADATA"]["INSTANCE"]["WIDTH"] = background_visible_mask.shape[1]
groundtruth["METADATA"]["INSTANCE"]["HEIGHT"] = background_visible_mask.shape[0]
groundtruth["METADATA"]["INSTANCE"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["INSTANCE"]["NPY"] = True
groundtruth["DATA"]["AMODAL"] = background_amodal_mask
groundtruth["METADATA"]["AMODAL"]["WIDTH"] = background_amodal_mask.shape[1]
groundtruth["METADATA"]["AMODAL"]["HEIGHT"] = background_amodal_mask.shape[0]
groundtruth["METADATA"]["AMODAL"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["AMODAL"]["NPY"] = True
#if occlusion_rate > 0: # if object is occluded, save occlusion mask
if self.save_segmentation_data:
# print(background_occ_mask)
# print(background_occ_mask.shape)
groundtruth["DATA"]["OCCLUSION"] = background_occ_mask
groundtruth["METADATA"]["OCCLUSION"]["WIDTH"] = background_occ_mask.shape[1]
groundtruth["METADATA"]["OCCLUSION"]["HEIGHT"] = background_occ_mask.shape[0]
groundtruth["METADATA"]["OCCLUSION"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["OCCLUSION"]["NPY"] = True
# Assign Mask to Generic Mask Class
background_amodal_mask_class = GenericMask(background_amodal_mask.astype("uint8"),height, width)
background_visible_mask_class = GenericMask(background_visible_mask.astype("uint8"),height, width)
background_occ_mask_class = GenericMask(background_occ_mask.astype("uint8"),height, width)
# Encode binary masks to bytes
background_amodal_mask= mask_util.encode(np.array(background_amodal_mask[:, :, None], order="F", dtype="uint8"))[0]
background_visible_mask= mask_util.encode(np.array(background_visible_mask[:, :, None], order="F", dtype="uint8"))[0]
background_occ_mask= mask_util.encode(np.array(background_occ_mask[:, :, None], order="F", dtype="uint8"))[0]
# append annotations to dict
ann_info["segmentation"]["counts"] = background_amodal_mask['counts'].decode('UTF-8') # amodal mask
ann_info["visible_mask"]["counts"] = background_visible_mask['counts'].decode('UTF-8') # obj_visible_mask
ann_info["occluded_mask"]["counts"] =background_occ_mask['counts'].decode('UTF-8') # obj_visible_mask
ann_info["visible_bbox"] = list(background_visible_mask_class.bbox())
ann_info["bbox"] = list(background_visible_mask_class.bbox())
ann_info["segmentation"]["area"] = int(background_amodal_mask_class.area())
ann_info["visible_mask"]["area"] = int(background_visible_mask_class.area())
ann_info["occluded_mask"]["area"] = int(background_occ_mask_class.area())
ann_info["occluded_rate"] = occlusion_rate
ann_index += 1
all_viewport_data.append(groundtruth)
ann_list.append(ann_info)
img_list.append(image_info)
""" =================================================
===== Collect Object Amodal/Occlusion Masks =====
================================================= """
# turn off visibility of all objects
for obj in objects:
obj.off_prim()
visible_obj_paths = instance_mappings_dict.keys()
""" ======= START OBJ LOOP ======= """
obj_visible_mask_list = []
obj_occlusion_mask_list = []
# loop through objects and capture mask of each object
for obj in objects:
# turn on visibility of object
obj.on_prim()
ann_info = {
"id": ann_index,
"image_id": img_index,
"category_id": 1,
"bbox": [],
"width": width,
"height": height,
"object_name":"",
"iscrowd": 0,
"segmentation": {
"size": [
height,
width
],
"counts": "",
"area": 0
},
"area": 0,
"visible_mask": {
"size": [
height,
width
],
"counts": "",
"area": 0
},
"visible_bbox": [],
"occluded_mask": {
"size": [
height,
width
],
"counts": "",
"area": 0
},
"occluded_rate": 0.0
}
ann_info["object_name"] = obj.name
""" ===== get object j index and attributes ===== """
obj_path = obj.path
obj_index = int(obj.path.split("/")[-1].split("_")[1])
id = f"{img_id}_{obj_index}" #image id
obj_nested_prim_path = obj_path+"/nested_prim"
if obj_nested_prim_path in instance_mappings_dict:
instance_id = instance_mappings_dict[obj_nested_prim_path]
else:
print(f"{obj_nested_prim_path} does not exist")
instance_id = -1
print(f"instance_mappings_dict:{instance_mappings_dict}")
""" ===== Check if Object j is visible from viewport ===== """
# Remove Fully Occluded Objects from viewport
                if obj_path in visible_obj_paths and instance_id in instance_map: # object is at least partially visible in this viewport
pass
else: # object is not visible, skipping object
obj.off_prim()
continue
groundtruth = {
"METADATA": {
"image_id": id,
"viewport_name": viewport_name,
"RGB":{},
"DEPTH": {},
"INSTANCE": {},
"SEMANTIC": {},
"AMODAL": {},
"OCCLUSION": {},
"BBOX2DTIGHT": {},
"BBOX2DLOOSE": {},
"BBOX3D": {},
},
"DATA": {},
}
""" ===== extract visible mask of object j ===== """
curr_instance_data_np = org_instance_data_np.copy()
if instance_id != 0: # find object instance segmentation
curr_instance_data_np[np.where(org_instance_data_np != instance_id)] = 0
curr_instance_data_np[np.where(org_instance_data_np == instance_id)] = 1
obj_visible_mask = curr_instance_data_np.astype(np.uint8)
""" ===== extract amodal mask of object j ===== """
# Collect Groundtruth
gt = copy.deepcopy(self.sd_helper.get_groundtruth(self.gt_list, viewport_window, wait_for_sensor_data=0.01))
obj.off_prim() # turn off visibility of object
# RGB
if self.save_segmentation_data:
if "rgb" in gt["state"]:
if gt["state"]["rgb"]:
groundtruth["DATA"]["RGB"] = gt["rgb"]
if i == 0 or self.sample("groundtruth_stereo"):
# Instance Segmentation
if "instanceSegmentation" in gt["state"]:
semantics = list(self.label_to_class_id.keys())
instance_data, instance_mappings = self.sd_helper.sensor_helpers["instanceSegmentation"](
viewport_window, parsed=False, return_mapping=True)
instances_list = [(im[0], im[4], im["semanticLabel"]) for im in instance_mappings][::-1]
max_instance_id_list = max([max(il[1]) for il in instances_list])
max_instance_id = instance_data.max()
lut = np.zeros(max(max_instance_id, max_instance_id_list) + 1, dtype=np.uint32)
for uid, il, sem in instances_list:
if sem in semantics and sem != "[[scenario]]":
lut[np.array(il)] = uid
instance_data = np.take(lut, instance_data)
# get object amodal mask
obj_amodal_mask = instance_data.astype(np.uint8)
obj_amodal_mask[np.where(instance_data > 0)] = 1
""" ===== calculate occlusion mask of object j ===== """
obj_occ_mask = cv2.absdiff(obj_amodal_mask, obj_visible_mask)
""" ===== calculate occlusion rate of object j ===== """ # assumes binary mask (True == 1)
obj_occ_mask_pixel_count = obj_occ_mask.sum()
obj_amodal_mask_pixel_count = obj_amodal_mask.sum()
occlusion_rate = round(obj_occ_mask_pixel_count / obj_amodal_mask_pixel_count, 2)
""" ===== Save Segmentation Masks ==== """
if occlusion_rate < 1: # fully occluded objects are not considered
# append visible and occlusion masks for generation of occlusion order matrix
obj_visible_mask_list.append(obj_visible_mask)
obj_occlusion_mask_list.append(obj_occ_mask)
if self.save_segmentation_data:
groundtruth["DATA"]["INSTANCE"] = obj_visible_mask
groundtruth["METADATA"]["INSTANCE"]["WIDTH"] = obj_visible_mask.shape[1]
groundtruth["METADATA"]["INSTANCE"]["HEIGHT"] = obj_visible_mask.shape[0]
groundtruth["METADATA"]["INSTANCE"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["INSTANCE"]["NPY"] = True
groundtruth["DATA"]["AMODAL"] = instance_data
groundtruth["METADATA"]["AMODAL"]["WIDTH"] = instance_data.shape[1]
groundtruth["METADATA"]["AMODAL"]["HEIGHT"] = instance_data.shape[0]
groundtruth["METADATA"]["AMODAL"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["AMODAL"]["NPY"] = True
# if occlusion_rate > 0: # if object is occluded, save occlusion mask
groundtruth["DATA"]["OCCLUSION"] = obj_occ_mask
groundtruth["METADATA"]["OCCLUSION"]["WIDTH"] = obj_occ_mask.shape[1]
groundtruth["METADATA"]["OCCLUSION"]["HEIGHT"] = obj_occ_mask.shape[0]
groundtruth["METADATA"]["OCCLUSION"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["OCCLUSION"]["NPY"] = True
ann_info["visible_bbox"] = bbox_from_binary_mask(obj_visible_mask)
ann_info["bbox"] = ann_info["visible_bbox"]
""" ===== Add Segmentation Mask into COCO.JSON ===== """
instance_mask_class = GenericMask(instance_data.astype("uint8"),height, width)
obj_visible_mask_class = GenericMask(obj_visible_mask.astype("uint8"),height, width)
obj_occ_mask_class = GenericMask(obj_occ_mask.astype("uint8"),height, width)
# Encode binary masks to bytes
instance_data= mask_util.encode(np.array(instance_data[:, :, None], order="F", dtype="uint8"))[0]
obj_visible_mask= mask_util.encode(np.array(obj_visible_mask[:, :, None], order="F", dtype="uint8"))[0]
obj_occ_mask= mask_util.encode(np.array(obj_occ_mask[:, :, None], order="F", dtype="uint8"))[0]
# append annotations to dict
ann_info["segmentation"]["counts"] = instance_data['counts'].decode('UTF-8') # amodal mask
ann_info["visible_mask"]["counts"] = obj_visible_mask['counts'].decode('UTF-8') # obj_visible_mask
ann_info["occluded_mask"]["counts"] = obj_occ_mask['counts'].decode('UTF-8') # obj_visible_mask
ann_info["segmentation"]["area"] = int(instance_mask_class.area())
ann_info["visible_mask"]["area"] = int(obj_visible_mask_class.area())
ann_info["occluded_mask"]["area"] = int(obj_occ_mask_class.area())
ann_info["occluded_rate"] = occlusion_rate
ann_index += 1
all_viewport_data.append(groundtruth)
ann_list.append(ann_info)
img_list.append(image_info)
""" ======= END OBJ LOOP ======= """
# Wireframe
if self.sample("wireframe"):
self.carb_settings.set("/rtx/wireframe/mode", 2.0)
# Need two updates for all viewports to have wireframe properly
self.sim_context.render()
self.sim_context.render()
for i in range(len(self.viewports)):
viewport_name, viewport_window = self.viewports[i]
gt = copy.deepcopy(self.sd_helper.get_groundtruth(["rgb"], viewport_window))
all_viewport_data[i]["DATA"]["WIREFRAME"] = gt["rgb"]
self.carb_settings.set("/rtx/wireframe/mode", 0)
self.sim_context.render()
for j in range(len(all_viewport_data)):
if self.write_data:
self.data_writer.q.put(copy.deepcopy(all_viewport_data[j]))
# Disparity
if self.sample("disparity") and self.sample("stereo"):
depth_l, depth_r = depths
cam_intrinsics = self.camera.intrinsics[0]
disp_convert = DisparityConverter(
depth_l,
depth_r,
cam_intrinsics["fx"],
cam_intrinsics["fy"],
cam_intrinsics["cx"],
cam_intrinsics["cy"],
self.sample("stereo_baseline"),
)
disp_l, disp_r = disp_convert.compute_disparity()
disparities = [disp_l, disp_r]
for i in range(len(self.viewports)):
if i == 0 or self.sample("groundtruth_stereo"):
viewport_name, viewport_window = self.viewports[i]
groundtruth = {
"METADATA": {"image_id": id, "viewport_name": viewport_name, "DISPARITY": {}},
"DATA": {},
}
disparity_data = disparities[i]
groundtruth["DATA"]["DISPARITY"] = disparity_data
groundtruth["METADATA"]["DISPARITY"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["DISPARITY"]["NPY"] = True
if self.write_data:
self.data_writer.q.put(copy.deepcopy(groundtruth))
# turn on visibility of all objects (for next camera viewport)
for obj in objects:
obj.on_prim()
# generate occlusion ordering for current viewport
rows = cols = len(obj_visible_mask_list)
occlusion_adjacency_matrix = np.zeros((rows,cols))
            # A(i, j) = 1 when object i's visible mask overlaps object j's occluded mask, i.e. row object i occludes column object j
for i in range(0,len(obj_visible_mask_list)):
visible_mask_i = obj_visible_mask_list[i] # occluder
for j in range(0,len(obj_visible_mask_list)):
if j != i:
occluded_mask_j = obj_occlusion_mask_list[j] # occludee
iou, _ = compute_occluded_masks(visible_mask_i,occluded_mask_j)
if iou > 0: # object i's visible mask is overlapping object j's occluded mask
occlusion_adjacency_matrix[i][j] = 1
data_folder = os.path.join(self.output_data_dir, viewport_name, "occlusion_order")
os.makedirs(data_folder, exist_ok=True)
filename = os.path.join(data_folder, f"{img_id}.npy")
# save occlusion adjacency matrix
np.save(filename, occlusion_adjacency_matrix)
# increment img index (next viewport)
img_index += 1
return groundtruth, img_index, ann_index, img_list, ann_list
| 30,498 | Python | 48.75367 | 148 | 0.474457 |
ngzhili/SynTable/syntable_composer/datasets/dataset/parameters/warehouse.yaml | # dropped warehouse objects
objects:
obj_model: Choice(["assets/models/warehouse.txt"])
obj_count: Range(5, 15)
obj_size_enabled: False
obj_scale: Uniform(0.75, 1.25)
obj_vert_fov_loc: Uniform(0, 0.5)
obj_distance: Uniform(3, 10)
obj_rot: (Normal(0, 45), Normal(0, 45), Uniform(0, 360))
obj_class_id: 1
obj_physics: True
# colorful ceiling lights
lights:
light_count: Range(0, 2)
light_coord_camera_relative: False
light_coord: (Uniform(-2, 2), Uniform(-2, 2), 5)
light_color: Uniform((0, 0, 0), (255, 255, 255))
light_intensity: Uniform(0, 300000)
light_radius: 1
# warehouse scenario
scenario_model: /NVIDIA/Assets/Isaac/2022.1/Isaac/Environments/Simple_Warehouse/warehouse.usd
scenario_class_id: 0
# camera
camera_coord: (0, 0, Uniform(.20, 1))
camera_rot: (Normal(0, 1), 0, Uniform(0, 360))
# output
output_dir: dataset
num_scenes: 10
img_width: 1920
img_height: 1080
rgb: True
depth: True
semantic_seg: True
groundtruth_visuals: True
# simulate
physics_simulate_time: 2
| 1,029 | YAML | 16.457627 | 93 | 0.688047 |
ngzhili/SynTable/syntable_composer/parameters/flying_things_4d.yaml | # object groups inherited from flying_things_3d
objs:
inherit: objs
objs_color_dr:
inherit: objs_color_dr
objs_texture_dr:
inherit: objs_texture_dr
objs_material_dr:
inherit: objs_material_dr
midground_shapes:
inherit: midground_shapes
midground_shapes_material_dr:
inherit: midground_shapes_material_dr
background_shapes:
inherit: background_shapes
background_plane:
obj_vel: (0, 0, 0)
obj_rot_vel: (0, 0, 0)
inherit: background_plane
# global object movement parameters
obj_vel: Normal((0, 0, 0), (1, 1, 1))
obj_rot_vel: Normal((0, 0, 0), (20, 20, 20))
# light groups inherited from flying_things_3d
lights:
inherit: lights
lights_color:
inherit: lights_color
distant_light:
inherit: distant_light
camera_light:
inherit: camera_light
# camera movement parameters (uncomment to add)
# camera_vel: Normal((.30, 0, 0), (.10, .10, .10))
# camera_accel: Normal((0, 0, 0), (.05, .05, .05))
# camera_rot_vel: Normal((0, 0, 0), (.05, .05, .05))
# camera_movement_camera_relative: True
# sequence parameters
sequential: True
sequence_step_count: 20
sequence_step_time: Uniform(0.5, 1)
profiles:
- parameters/flying_things_3d.yaml
- parameters/profiles/base_groups.yaml
| 1,206 | YAML | 20.553571 | 52 | 0.707297 |
ngzhili/SynTable/syntable_composer/parameters/flying_things_3d.yaml | # flying objects
objs:
obj_count: Range(0, 15)
inherit: flying_objs
# flying objects (color randomized)
objs_color_dr:
obj_color: Uniform((0, 0, 0), (255, 255, 255))
obj_count: Range(0, 10)
inherit: flying_objs
# flying objects (texture randomized)
objs_texture_dr:
obj_texture: Choice(["assets/textures/patterns.txt", "assets/textures/synthetic.txt"])
obj_texture_scale: Choice([0.1, 1])
obj_count: Range(0, 10)
inherit: flying_objs
# flying objects (material randomized)
objs_material_dr:
obj_material: Choice("assets/materials/materials.txt")
obj_count: Range(0, 10)
inherit: flying_objs
# flying midground shapes (texture randomized)
midground_shapes:
obj_texture: Choice(["assets/textures/patterns.txt", "assets/textures/synthetic.txt"])
obj_texture_scale: Choice([0.01, 1])
obj_count: Range(0, 5)
inherit: flying_shapes
# flying midground shapes (material randomized)
midground_shapes_material_dr:
obj_material: Choice("assets/materials/materials.txt")
obj_count: Range(0, 5)
inherit: flying_shapes
# flying background shapes (material randomized)
background_shapes:
obj_material: Choice("assets/materials/materials.txt")
obj_count: Range(0, 10)
obj_horiz_fov_loc: Uniform(-0.7, 0.7)
obj_vert_fov_loc: Uniform(-0.3, 0.7)
obj_size: Uniform(3, 5)
obj_distance: Uniform(20, 30)
inherit: flying_shapes
# background plane
background_plane:
obj_model: /NVIDIA/Assets/Isaac/2022.1/Isaac/Props/Shapes/plane.usd
obj_material: Choice("assets/materials/materials.txt")
obj_texture_rot: Uniform(0, 360)
obj_count: 1
obj_size: 5000
obj_distance: Uniform(30, 40)
obj_horiz_fov_loc: 0
obj_vert_fov_loc: 0
obj_rot: Normal((0, 90, 0), (10, 10, 10))
obj_class_id: 0
# flying lights
lights:
light_count: Range(1, 2)
light_color: (200, 200, 200)
inherit: flying_lights
# flying lights (colorful)
lights_color:
light_count: Range(0, 2)
light_color: Choice([(255, 0, 0), (0, 255, 0), (255, 255, 0), (255, 0, 255), (0, 255, 255)])
inherit: flying_lights
# sky light
distant_light:
light_distant: True
light_count: 1
light_color: Uniform((0, 0, 0), (255, 255, 255))
light_intensity: Uniform(2000, 10000)
light_rot: Normal((0, 0, 0), (20, 20, 20))
# light at camera coordinate
camera_light:
light_count: 1
light_color: Uniform((0, 0, 0), (255, 255, 255))
light_coord_camera_relative: True
light_distance: 0
light_intensity: Uniform(0, 100000)
light_radius: .50
# randomized floor
scenario_room_enabled: True
scenario_class_id: 0
floor: True
wall: False
ceiling: False
floor_size: 50
floor_material: Choice("assets/materials/materials.txt")
# camera
focal_length: 40
stereo: True
stereo_baseline: .20
camera_coord: Uniform((-2, -2, 1), (2, 2, 4))
camera_rot: Normal((0, 0, 0), (3, 3, 20))
# output
img_width: 1920
img_height: 1080
rgb: True
disparity: True
instance_seg: True
semantic_seg: True
bbox_2d_tight: True
groundtruth_visuals: True
groundtruth_stereo: False
profiles:
- parameters/profiles/base_groups.yaml
| 3,052 | YAML | 19.085526 | 94 | 0.695282 |
ngzhili/SynTable/syntable_composer/parameters/profiles/default.yaml | # Default parameters. Do not edit, move, or delete.
# default object parameters
obj_model: /NVIDIA/Assets/Isaac/2022.1/Isaac/Props/Forklift/forklift.usd
obj_color: ()
obj_texture: ""
obj_material: ""
obj_metallicness: float("NaN")
obj_reflectance: float("NaN")
obj_size_enabled: True
obj_size: 1
obj_scale: 1
obj_texture_scale: 1
obj_texture_rot: 0
obj_rot: (0, 0, 0)
obj_coord: (0, 0, 0)
obj_centered: True
obj_physics: False
obj_rot_camera_relative: True
obj_coord_camera_relative: True
obj_count: 0
obj_distance: Uniform(300, 800)
obj_horiz_fov_loc: Uniform(-1, 1)
obj_vert_fov_loc: Uniform(-1, 1)
obj_vel: (0, 0, 0)
obj_rot_vel: (0, 0, 0)
obj_accel: (0, 0, 0)
obj_rot_accel: (0, 0, 0)
obj_movement_obj_relative: False
obj_class_id: 1
# default light parameters
light_intensity: 100000
light_radius: 0.25
light_temp_enabled: False
light_color: (255, 255, 255)
light_temp: 6500
light_directed: False
light_directed_focus: 20
light_directed_focus_softness: 0
light_distant: False
light_camera_relative: True
light_rot: (0, 0, 0)
light_coord: (0, 0, 0)
light_count: 0
light_distance: Uniform(3, 8)
light_horiz_fov_loc: Uniform(-1, 1)
light_vert_fov_loc: Uniform(-1, 1)
light_coord_camera_relative: True
light_rot_camera_relative: True
light_vel: (0, 0, 0)
light_rot_vel: (0, 0, 0)
light_accel: (0, 0, 0)
light_rot_accel: (0, 0, 0)
light_movement_light_relative: False
# default scenario parameters
scenario_room_enabled: False
scenario_model: /NVIDIA/Assets/Isaac/2022.1/Isaac/Environments/Simple_Warehouse/warehouse.usd
scenario_class_id: 0
sky_texture: ""
sky_light_intensity: 1000
floor: True
wall: True
ceiling: True
wall_height: 20
floor_size: 20
floor_color: ()
wall_color: ()
ceiling_color: ()
floor_texture: ""
wall_texture: ""
ceiling_texture: ""
floor_texture_scale: 1
wall_texture_scale: 1
ceiling_texture_scale: 1
floor_texture_rot: 0
wall_texture_rot: 0
ceiling_texture_rot: 0
floor_material: ""
wall_material: ""
ceiling_material: ""
floor_reflectance: float("NaN")
wall_reflectance: float("NaN")
ceiling_reflectance: float("NaN")
floor_metallicness: float("NaN")
wall_metallicness: float("NaN")
ceiling_metallicness: float("NaN")
# default camera parameters
focal_length: 18.15
focus_distance: 4
horiz_aperture: 20.955
vert_aperture: 15.2908
f_stop: 0
stereo: False
stereo_baseline: 20
camera_coord: (0, 0, 50)
camera_rot: (0, 0, 0)
camera_vel: (0, 0, 0)
camera_rot_vel: (0, 0, 0)
camera_accel: (0, 0, 0)
camera_rot_accel: (0, 0, 0)
camera_movement_camera_relative: False
# default output parameters
output_dir: dataset
num_scenes: 10
img_width: 1280
img_height: 720
write_data: True
num_data_writer_threads: 4
sequential: False
sequence_step_count: 10
sequence_step_time: 1
rgb: True
depth: False
disparity: False
instance_seg: False
semantic_seg: False
bbox_2d_tight: False
bbox_2d_loose: False
bbox_3d: False
wireframe: False
groundtruth_stereo: False
groundtruth_visuals: False
# default model store parameters
nucleus_server: localhost
# default debug parameters
pause: 0
verbose: True
# simulation parameters
physics_simulate_time: 1
scene_units_in_meters: 1
path_tracing: False
samples_per_pixel_per_frame: 32 | 3,194 | YAML | 15.554404 | 93 | 0.725736 |
ngzhili/SynTable/syntable_composer/parameters/profiles/base_groups.yaml | flying_objs:
obj_model: Choice(["assets/models/warehouse.txt", "assets/models/hospital.txt", "assets/models/office.txt"])
obj_size: Uniform(.50, .75)
obj_distance: Uniform(4, 20)
flying_shapes:
obj_model: Choice(["assets/models/shapes.txt"])
obj_size: Uniform(1, 2)
obj_distance: Uniform(15, 25)
flying_lights:
light_intensity: Uniform(0, 100000)
light_radius: Uniform(.50, 1)
light_vert_fov_loc: Uniform(0, 1)
light_distance: Uniform(4, 15)
# global parameters
obj_rot: Uniform((0, 0, 0), (360, 360, 360))
obj_horiz_fov_loc: Uniform(-1, 1)
obj_vert_fov_loc: Uniform(-0.7, 1)
obj_metallicness: Uniform(0.1, 0.8)
obj_reflectance: Uniform(0.1, 0.8)
| 679 | YAML | 20.249999 | 110 | 0.680412 |
selinaxiao/MeshToUsd/exts/mesh.to.usd/mesh/to/usd/extension.py | import omni.ext
import omni.ui as ui
import omni.usd
#from .MeshGen.sdf_to_mesh import mc_result
from pxr import Gf, Sdf
# Functions and vars are available to other extension as usual in python: `example.python_ext.some_public_function(x)`
def some_public_function(x: int):
print("[mesh.to.usd] some_public_function was called with x: ", x)
return x ** x
# Any class derived from `omni.ext.IExt` in top level module (defined in `python.modules` of `extension.toml`) will be
# instantiated when extension gets enabled and `on_startup(ext_id)` will be called. Later when extension gets disabled
# on_shutdown() is called.
class MeshToUsdExtension(omni.ext.IExt):
# ext_id is current extension id. It can be used with extension manager to query additional information, like where
# this extension is located on filesystem.
def on_startup(self, ext_id):
print("[mesh.to.usd] mesh to usd startup")
self._count = 0
self._window = ui.Window("My Window", width=300, height=300)
with self._window.frame:
with ui.VStack():
label = ui.Label("")
def process(path):
infile = open(path,'r')
lines = infile.readlines()
for i in range(len(lines)):
lines[i] = lines[i].replace('\n','').split(' ')[1:]
if [] in lines:
lines.remove([])
idx1 = lines.index(['Normals'])
verts = lines[1:idx1]
float_verts = []
for i in range(len(verts)):
float_verts.append(Gf.Vec3f(float(verts[i][0]), float(verts[i][1]), float(verts[i][2])))
idx2 = lines.index(['Faces'])
normals = lines[idx1+1:idx2]
float_norms = []
print(normals)
for i in range(len(normals)):
float_norms.append(Gf.Vec3f(float(normals[i][0]), float(normals[i][1]), float(normals[i][2])))
float_norms.append(Gf.Vec3f(float(normals[i][0]), float(normals[i][1]), float(normals[i][2])))
float_norms.append(Gf.Vec3f(float(normals[i][0]), float(normals[i][1]), float(normals[i][2])))
faces = lines[idx2+1:]
int_faces = []
for i in range(len(faces)):
int_faces.append(int(faces[i][0]) - 1)
int_faces.append(int(faces[i][1]) - 1)
int_faces.append(int(faces[i][2]) - 1)
print(type(float_verts))
print(float_verts)
return float_verts, int_faces, float_norms
def assemble():
stage = omni.usd.get_context().get_stage()
if(not stage.GetPrimAtPath(Sdf.Path('/World/Trial')).IsValid()):
omni.kit.commands.execute('CreateMeshPrimWithDefaultXform',
prim_type='Cube',
prim_path=None,
select_new_prim=True,
prepend_default_prim=True)
omni.kit.commands.execute('MovePrim',
path_from='/World/Cube',
path_to='/World/Trial',
destructive=False)
cube_prim = stage.GetPrimAtPath('/World/Trial')
verts, faces, normals = process('C:/users/labuser/desktop/data transfer/meshtousd/exts/mesh.to.usd/mesh/to/usd/whyyyyyyyareumeaningless.obj')
face_vert_count = [3]*(len(faces)//3)
primvar = [(0,0)]*len(faces)
print(type(cube_prim.GetAttribute('faceVertexIndices').Get()))
print(cube_prim.GetAttribute('faceVertexIndices').Get())
print(type(face_vert_count))
cube_prim.GetAttribute('faceVertexCounts').Set(face_vert_count)
cube_prim.GetAttribute('faceVertexIndices').Set(faces)
cube_prim.GetAttribute('normals').Set(normals)
cube_prim.GetAttribute('points').Set(verts)
cube_prim.GetAttribute('primvars:st').Set(primvar)
with ui.HStack():
ui.Button("TRANSFORMERS!!!", clicked_fn=assemble)
def on_shutdown(self):
print("[mesh.to.usd] mesh to usd shutdown")
| 4,616 | Python | 40.223214 | 161 | 0.513865 |
loupeteam/Omniverse_Beckhoff_Bridge_Extension/README.md | # Info
This tool is provided by Loupe.
https://loupe.team
[email protected]
1-800-240-7042
# Description
This is an extension that connects Beckhoff PLCs into the Omniverse ecosystem. It leverages [pyads](https://github.com/stlehmann/pyads) to set up an ADS client for communicating with PLCs.
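For a sense of how the bridge is used from Python, the sketch below shows the event-based interface exposed by the extension's `Manager` class (defined in `BeckhoffBridge.py`). It is illustrative only: it assumes the extension is enabled inside an Omniverse app, that `Manager` is importable from `loupe.simulation.beckhoff_bridge.BeckhoffBridge`, and the PLC variable names are placeholders taken from the examples in the source.

```python
from loupe.simulation.beckhoff_bridge.BeckhoffBridge import Manager

bridge = Manager()

def on_init(event):
    # Ask the bridge to read these PLC variables on every refresh cycle.
    bridge.add_cyclic_read_variables(["MAIN.r32_TestReal"])

def on_data(event):
    # Called each time the bridge publishes fresh data read from the PLC.
    value = event.payload["data"]["MAIN"]["r32_TestReal"]
    print(value)

bridge.register_init_callback(on_init)
bridge.register_data_callback(on_data)

# Queue a value to be written to the PLC on the next refresh cycle.
bridge.write_variable("MAIN.b_Execute", True)
```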
# Documentation
Detailed documentation can be found in the extension readme file [here](exts/loupe.simulation.beckhoff_bridge/docs/README.md).
# Licensing
This software contains source code provided by NVIDIA Corporation. This code is subject to the terms of the [NVIDIA Omniverse License Agreement](https://docs.omniverse.nvidia.com/isaacsim/latest/common/NVIDIA_Omniverse_License_Agreement.html). Files are licensed as follows:
### Files created entirely by Loupe ([MIT License](LICENSE)):
* `ads_driver.py`
* `BeckhoffBridge.py`
### Files including Nvidia-generated code and modifications by Loupe (Nvidia Omniverse License Agreement AND MIT License; use must comply with whichever is most restrictive for any attribute):
* `__init__.py`
* `extension.py`
* `global_variables.py`
* `ui_builder.py`
This software is intended for use with NVIDIA Omniverse apps, which are subject to the [NVIDIA Omniverse License Agreement](https://docs.omniverse.nvidia.com/isaacsim/latest/common/NVIDIA_Omniverse_License_Agreement.html) for use and distribution.
This software also relies on [pyads](https://github.com/stlehmann/pyads), which is licensed under the MIT license.
| 1,470 | Markdown | 44.968749 | 274 | 0.782313 |
loupeteam/Omniverse_Beckhoff_Bridge_Extension/exts/loupe.simulation.beckhoff_bridge/loupe/simulation/beckhoff_bridge/ads_driver.py | '''
File: **ads_driver.py**
Copyright (c) 2024 Loupe
https://loupe.team
This file is part of Omniverse_Beckhoff_Bridge_Extension, licensed under the MIT License.
'''
import pyads
class AdsDriver():
"""
A class that represents an ADS driver. It contains a list of variables to read from the target device and provides methods to read and write data.
Args:
ams_net_id (str): The AMS Net ID of the target device.
Attributes:
ams_net_id (str): The AMS Net ID of the target device.
_read_names (list): A list of names for reading data.
_read_struct_def (dict): A dictionary that maps names to structure definitions.
"""
def __init__(self, ams_net_id):
"""
Initializes an instance of the AdsDriver class.
Args:
ams_net_id (str): The AMS Net ID of the target device.
"""
self.ams_net_id = ams_net_id
self._read_names = list()
self._read_struct_def = dict()
def add_read(self, name : str, structure_def = None):
"""
Adds a variable to the list of data to read.
Args:
name (str): The name of the data to be read. "my_struct.my_array[0].my_var"
structure_def (optional): The structure definition of the data.
"""
if name not in self._read_names:
self._read_names.append(name)
if structure_def is not None:
if name not in self._read_struct_def:
self._read_struct_def[name] = structure_def
def write_data(self, data : dict ):
"""
Writes data to the target device.
Args:
data (dict): A dictionary containing the data to be written to the PLC
e.g.
data = {'MAIN.b_Execute': False, 'MAIN.str_TestString': 'Goodbye World', 'MAIN.r32_TestReal': 54.321}
"""
self._connection.write_list_by_name(data)
def read_data(self):
"""
Reads all variables from the cyclic read list.
Returns:
dict: A dictionary containing the parsed data.
"""
if self._read_names.__len__() > 0:
data = self._connection.read_list_by_name(self._read_names, structure_defs=self._read_struct_def)
parsed_data = dict()
for name in data.keys():
parsed_data = self._parse_name(parsed_data, name, data[name])
else:
parsed_data = dict()
return parsed_data
def _parse_name(self, name_dict, name, value):
"""
Convert a variable from a flat name to a dictionary based structure.
"my_struct.my_array[0].my_var: value" -> {"my_struct": {"my_array": [{"my_var": value}]}}
Args:
name_dict (dict): The dictionary to store the parsed data.
name (str): The name of the data item.
value: The value of the data item.
Returns:
dict: The updated name_dict.
"""
name_parts = name.split(".")
if len(name_parts) > 1:
if name_parts[0] not in name_dict:
name_dict[name_parts[0]] = dict()
if "[" in name_parts[1]:
array_name, index = name_parts[1].split("[")
index = int(index[:-1])
if array_name not in name_dict[name_parts[0]]:
name_dict[name_parts[0]][array_name] = []
if index >= len(name_dict[name_parts[0]][array_name]):
name_dict[name_parts[0]][array_name].extend([None] * (index - len(name_dict[name_parts[0]][array_name]) + 1))
name_dict[name_parts[0]][array_name][index] = self._parse_name(name_dict[name_parts[0]][array_name], "[" + str(index) + "]" + ".".join(name_parts[2:]), value)
else:
name_dict[name_parts[0]] = self._parse_name(name_dict[name_parts[0]], ".".join(name_parts[1:]), value)
else:
if "[" in name_parts[0]:
array_name, index = name_parts[0].split("[")
index = int(index[:-1])
if index >= len(name_dict):
name_dict.extend([None] * (index - len(name_dict) + 1))
name_dict[index] = value
return name_dict[index]
else:
name_dict[name_parts[0]] = value
return name_dict
def connect(self, ams_net_id = None):
"""
Connects to the target device.
Args:
ams_net_id (str): The AMS Net ID of the target device. This does not need to be provided if it was provided in the constructor and has not changed.
"""
if ams_net_id is not None:
self.ams_net_id = ams_net_id
self._connection = pyads.Connection(self.ams_net_id, pyads.PORT_TC3PLC1)
self._connection.open()
def disconnect(self):
"""
Disconnects from the target device.
"""
self._connection.close()
def is_connected(self):
"""
Returns the connection state.
Returns:
bool: True if the connection is open, False otherwise.
"""
try:
adsState, deviceState = self._connection.read_state()
return True
except Exception as e:
return False
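# --- Usage sketch (illustrative only, not part of the original module) ---
# Direct use of the driver outside the extension UI. The AMS Net ID below is a
# placeholder and the variable names follow the docstring examples above.
#
#   driver = AdsDriver("127.0.0.1.1.1")
#   driver.connect()
#   driver.add_read("MAIN.r32_TestReal")            # added to the cyclic read list
#   data = driver.read_data()                       # -> {"MAIN": {"r32_TestReal": ...}}
#   driver.write_data({"MAIN.b_Execute": False})
#   driver.disconnect()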
| 5,344 | Python | 32.40625 | 174 | 0.54491 |
loupeteam/Omniverse_Beckhoff_Bridge_Extension/exts/loupe.simulation.beckhoff_bridge/loupe/simulation/beckhoff_bridge/BeckhoffBridge.py | '''
File: **BeckhoffBridge.py**
Copyright (c) 2024 Loupe
https://loupe.team
This file is part of Omniverse_Beckhoff_Bridge_Extension, licensed under the MIT License.
'''
from typing import Callable
import carb.events
import omni.kit.app
EVENT_TYPE_DATA_INIT = carb.events.type_from_string("loupe.simulation.beckhoff_bridge.DATA_INIT")
EVENT_TYPE_DATA_READ = carb.events.type_from_string("loupe.simulation.beckhoff_bridge.DATA_READ")
EVENT_TYPE_DATA_READ_REQ = carb.events.type_from_string("loupe.simulation.beckhoff_bridge.DATA_READ_REQ")
EVENT_TYPE_DATA_WRITE_REQ = carb.events.type_from_string("loupe.simulation.beckhoff_bridge.DATA_WRITE_REQ")
class Manager:
"""
BeckhoffBridge class provides an interface for interacting with the Beckhoff Bridge Extension.
It can be used in Python scripts to read and write variables.
Methods:
register_init_callback( callback : Callable[[carb.events.IEvent], None] ): Registers a callback function for the DATA_INIT event.
register_data_callback( callback : Callable[[carb.events.IEvent], None] ): Registers a callback function for the DATA_READ event.
add_cyclic_read_variables( variable_name_array : list[str]): Adds variables to the cyclic read list.
write_variable( name : str, value : any ): Writes a variable value to the Beckhoff Bridge.
"""
def __init__(self):
"""
Initializes the BeckhoffBridge object.
"""
self._event_stream = omni.kit.app.get_app().get_message_bus_event_stream()
self._callbacks = []
def __del__(self):
"""
Cleans up the event subscriptions.
"""
for callback in self._callbacks:
self._event_stream.remove_subscription(callback)
def register_init_callback( self, callback : Callable[[carb.events.IEvent], None] ):
"""
Registers a callback function for the DATA_INIT event.
The callback is triggered when the Beckhoff Bridge is initialized.
The user should use this event to add cyclic read variables.
This event may get called multiple times in normal operation due to the nature of how extensions are loaded.
Args:
callback (function): The callback function to be registered.
Returns:
None
"""
self._callbacks.append(self._event_stream.create_subscription_to_push_by_type(EVENT_TYPE_DATA_INIT, callback))
callback(None)
def register_data_callback( self, callback : Callable[[carb.events.IEvent], None] ):
"""
Registers a callback function for the DATA_READ event.
The callback is triggered when the Beckhoff Bridge receives new data. The payload contains the updated variables.
Args:
callback (Callable): The callback function to be registered.
example callback:
def on_message( event ):
data = event.payload['data']['MAIN']['custom_struct']['var_array']
Returns:
None
"""
self._callbacks.append(self._event_stream.create_subscription_to_push_by_type(EVENT_TYPE_DATA_READ, callback))
def add_cyclic_read_variables(self, variable_name_array : list[str]):
"""
Adds variables to the cyclic read list.
Variables in the cyclic read list are read from the Beckhoff Bridge at a fixed interval.
Args:
            variable_name_array (list): List of variables to be added. ["MAIN.myStruct.myvar1", "MAIN.var2", ...]
Returns:
None
"""
self._event_stream.push(event_type=EVENT_TYPE_DATA_READ_REQ, payload={'variables': variable_name_array})
def write_variable(self, name : str, value : any ):
"""
Writes a variable value to the Beckhoff Bridge.
Args:
name (str): The name of the variable. "MAIN.myStruct.myvar1"
value (basic type): The value to be written. 1, 2.5, "Hello", ...
Returns:
None
"""
payload = {"variables": [{'name': name, 'value': value}]}
self._event_stream.push(event_type=EVENT_TYPE_DATA_WRITE_REQ, payload=payload)
| 4,182 | Python | 37.731481 | 137 | 0.649211 |
loupeteam/Omniverse_Beckhoff_Bridge_Extension/exts/loupe.simulation.beckhoff_bridge/loupe/simulation/beckhoff_bridge/ui_builder.py | # This software contains source code provided by NVIDIA Corporation.
# Copyright (c) 2022-2023, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
import omni.ui as ui
import omni.timeline
from carb.settings import get_settings
from .ads_driver import AdsDriver
from .global_variables import EXTENSION_NAME
from .BeckhoffBridge import EVENT_TYPE_DATA_READ, EVENT_TYPE_DATA_READ_REQ, EVENT_TYPE_DATA_WRITE_REQ, EVENT_TYPE_DATA_INIT
import threading
from threading import RLock
import json
import time
class UIBuilder:
def __init__(self):
# UI elements created using a UIElementWrapper instance
self.wrapped_ui_elements = []
# Get access to the timeline to control stop/pause/play programmatically
self._timeline = omni.timeline.get_timeline_interface()
# Get the settings interface
self.settings_interface = get_settings()
# Internal status flags.
self._thread_is_alive = True
self._communication_initialized = False
self._ui_initialized = False
# Configuration parameters for the extension.
# These are exposed on the UI.
self._enable_communication = self.get_setting( 'ENABLE_COMMUNICATION', False )
self._refresh_rate = self.get_setting( 'REFRESH_RATE', 20 )
# Data stream where the extension will dump the data that it reads from the PLC.
self._event_stream = omni.kit.app.get_app().get_message_bus_event_stream()
self._ads_connector = AdsDriver(self.get_setting( 'PLC_AMS_NET_ID', '127.0.0.1.1.1'))
self.write_queue = dict()
self.write_lock = RLock()
self.read_req = self._event_stream.create_subscription_to_push_by_type(EVENT_TYPE_DATA_READ_REQ, self.on_read_req_event)
self.write_req = self._event_stream.create_subscription_to_push_by_type(EVENT_TYPE_DATA_WRITE_REQ, self.on_write_req_event)
self._event_stream.push(event_type=EVENT_TYPE_DATA_INIT, payload={'data': {}})
self._thread = threading.Thread(target=self._update_plc_data)
self._thread.start()
###################################################################################
# The Functions Below Are Called Automatically By extension.py
###################################################################################
def on_menu_callback(self):
"""Callback for when the UI is opened from the toolbar.
This is called directly after build_ui().
"""
self._event_stream.push(event_type=EVENT_TYPE_DATA_INIT, payload={'data': {}})
if(not self._thread_is_alive):
self._thread_is_alive = True
self._thread = threading.Thread(target=self._update_plc_data)
self._thread.start()
def on_timeline_event(self, event):
"""Callback for Timeline events (Play, Pause, Stop)
Args:
event (omni.timeline.TimelineEventType): Event Type
"""
if(event.type == int(omni.timeline.TimelineEventType.STOP)):
pass
elif(event.type == int(omni.timeline.TimelineEventType.PLAY)):
pass
elif(event.type == int(omni.timeline.TimelineEventType.PAUSE)):
pass
def on_stage_event(self, event):
"""Callback for Stage Events
Args:
event (omni.usd.StageEventType): Event Type
"""
pass
def cleanup(self):
"""
Called when the stage is closed or the extension is hot reloaded.
Perform any necessary cleanup such as removing active callback functions
"""
self.read_req.unsubscribe()
self.write_req.unsubscribe()
self._thread_is_alive = False
self._thread.join()
def build_ui(self):
"""
Build a custom UI tool to run your extension.
This function will be called any time the UI window is closed and reopened.
"""
with ui.CollapsableFrame("Configuration", collapsed=False):
with ui.VStack(spacing=5, height=0):
with ui.HStack(spacing=5, height=0):
ui.Label("Enable ADS Client")
self._enable_communication_checkbox = ui.CheckBox(ui.SimpleBoolModel(self._enable_communication))
self._enable_communication_checkbox.model.add_value_changed_fn(self._toggle_communication_enable)
with ui.HStack(spacing=5, height=0):
ui.Label("Refresh Rate (ms)")
self._refresh_rate_field = ui.IntField(ui.SimpleIntModel(self._refresh_rate))
self._refresh_rate_field.model.set_min(10)
self._refresh_rate_field.model.set_max(10000)
self._refresh_rate_field.model.add_value_changed_fn(self._on_refresh_rate_changed)
with ui.HStack(spacing=5, height=0):
ui.Label("PLC AMS Net Id")
self._plc_ams_net_id_field = ui.StringField(ui.SimpleStringModel(self._ads_connector.ams_net_id))
self._plc_ams_net_id_field.model.add_value_changed_fn(self._on_plc_ams_net_id_changed)
with ui.HStack(spacing=5, height=0):
ui.Label("Settings")
ui.Button("Load", clicked_fn=self.load_settings)
ui.Button("Save", clicked_fn=self.save_settings)
with ui.CollapsableFrame("Status", collapsed=False):
with ui.VStack(spacing=5, height=0):
with ui.HStack(spacing=5, height=0):
ui.Label("Status")
self._status_field = ui.StringField(ui.SimpleStringModel("n/a"), read_only=True)
with ui.CollapsableFrame("Monitor", collapsed=False):
with ui.VStack(spacing=5, height=0):
with ui.HStack(spacing=5, height=100):
ui.Label("Variables")
self._monitor_field = ui.StringField(ui.SimpleStringModel("{}"), multiline=True, read_only=True)
self._ui_initialized = True
####################################
####################################
# UTILITY FUNCTIONS
####################################
####################################
def on_read_req_event(self, event ):
event_data = event.payload
variables : list = event_data['variables']
for name in variables:
self._ads_connector.add_read(name)
def on_write_req_event(self, event ):
variables = event.payload["variables"]
for variable in variables:
self.queue_write(variable['name'], variable['value'])
def queue_write(self, name, value):
with self.write_lock:
self.write_queue[name] = value
def _update_plc_data(self):
thread_start_time = time.time()
status_update_time = time.time()
while self._thread_is_alive:
# Sleep for the refresh rate
sleepy_time = self._refresh_rate/1000 - (time.time() - thread_start_time)
if sleepy_time > 0:
time.sleep(sleepy_time)
else:
time.sleep(0.1)
thread_start_time = time.time()
# Check if the communication is enabled
if not self._enable_communication:
if self._ui_initialized:
self._status_field.model.set_value("Disabled")
self._monitor_field.model.set_value("{}")
continue
# Catch exceptions and log them to the status field
try:
# Start the communication if it is not initialized
if (not self._communication_initialized) and (self._enable_communication):
self._ads_connector.connect()
self._communication_initialized = True
elif (self._communication_initialized) and (not self._ads_connector.is_connected()):
self._ads_connector.disconnect()
if status_update_time < time.time():
if self._ads_connector.is_connected():
self._status_field.model.set_value("Connected")
else:
self._status_field.model.set_value("Attempting to connect...")
# Write data to the PLC if there is data to write
# If there is an exception, log it to the status field but continue reading data
try:
if self.write_queue:
with self.write_lock:
values = self.write_queue
self.write_queue = dict()
self._ads_connector.write_data(values)
except Exception as e:
if self._ui_initialized:
self._status_field.model.set_value(f"Error writing data to PLC: {e}")
status_update_time = time.time() + 1
# Read data from the PLC
self._data = self._ads_connector.read_data()
# Push the data to the event stream
self._event_stream.push(event_type=EVENT_TYPE_DATA_READ, payload={'data': self._data})
# Update the monitor field
if self._ui_initialized:
json_formatted_str = json.dumps(self._data, indent=4)
self._monitor_field.model.set_value(json_formatted_str)
except Exception as e:
if self._ui_initialized:
self._status_field.model.set_value(f"Error reading data from PLC: {e}")
status_update_time = time.time() + 1
time.sleep(1)
####################################
####################################
# Manage Settings
####################################
####################################
    def get_setting(self, name, default_value=None):
setting = self.settings_interface.get("/persistent/" + EXTENSION_NAME + "/" + name)
if setting is None:
setting = default_value
self.settings_interface.set("/persistent/" + EXTENSION_NAME + "/" + name, setting)
return setting
    def set_setting(self, name, value):
self.settings_interface.set("/persistent/" + EXTENSION_NAME + "/" + name, value)
def _on_plc_ams_net_id_changed(self, value):
self._ads_connector.ams_net_id = value.get_value_as_string()
self._communication_initialized = False
def _on_refresh_rate_changed(self, value):
self._refresh_rate = value.get_value_as_int()
def _toggle_communication_enable(self, state):
self._enable_communication = state.get_value_as_bool()
if not self._enable_communication:
self._communication_initialized = False
def save_settings(self):
self.set_setting('REFRESH_RATE', self._refresh_rate)
self.set_setting('PLC_AMS_NET_ID', self._ads_connector.ams_net_id)
self.set_setting('ENABLE_COMMUNICATION', self._enable_communication)
def load_settings(self):
self._refresh_rate = self.get_setting('REFRESH_RATE')
self._ads_connector.ams_net_id = self.get_setting('PLC_AMS_NET_ID')
self._enable_communication = self.get_setting('ENABLE_COMMUNICATION')
self._refresh_rate_field.model.set_value(self._refresh_rate)
self._plc_ams_net_id_field.model.set_value(self._ads_connector.ams_net_id)
self._enable_communication_checkbox.model.set_value(self._enable_communication)
self._communication_initialized = False
| 12,107 | Python | 41.335664 | 131 | 0.572148 |
loupeteam/Omniverse_Beckhoff_Bridge_Extension/exts/loupe.simulation.beckhoff_bridge/config/extension.toml | [core]
reloadable = true
order = 0
[package]
version = "0.1.0"
category = "simulation"
title = "Beckhoff Bridge"
description = "A bridge for connecting Omniverse to Beckhoff PLCs over ADS"
authors = ["Loupe"]
repository = "https://github.com/loupeteam/Omniverse_Beckhoff_Bridge_Extension"
keywords = ["Beckhoff", "Digital Twin", "ADS", "PLC"]
changelog = "docs/CHANGELOG.md"
readme = "docs/README.md"
preview_image = "data/preview.png"
icon = "data/icon.png"
[dependencies]
"omni.kit.uiapp" = {}
[python.pipapi]
requirements = ['pyads']
use_online_index = true
[[python.module]]
name = "loupe.simulation.beckhoff_bridge"
public = true | 639 | TOML | 21.857142 | 79 | 0.716745 |
loupeteam/Omniverse_Beckhoff_Bridge_Extension/exts/loupe.simulation.beckhoff_bridge/docs/CHANGELOG.md | Changelog
[0.1.0]
- Created with base functionality to set up a connection and send/receive messages with other extensions. | 125 | Markdown | 30.499992 | 105 | 0.8 |
loupeteam/Omniverse_Beckhoff_Bridge_Extension/exts/loupe.simulation.beckhoff_bridge/docs/README.md | # Beckhoff Bridge
The Beckhoff Bridge is an [NVIDIA Omniverse](https://www.nvidia.com/en-us/omniverse/) extension for communicating with [Beckhoff PLCs](https://www.beckhoff.com/en-en/) using the [ADS protocol](https://infosys.beckhoff.com/english.php?content=../content/1033/cx8190_hw/5091854987.html&id=).
# Installation
### Install from registry
This is the preferred method. Open up the extensions manager by navigating to `Window / Extensions`. The extension is available as a "Third Party" extension. Search for `Beckhoff Bridge`, and click the slider to Enable it. Once enabled, the extension will be available as an option in the top menu banner of the Omniverse app.
### Install from source
You can also install from source instead. In order to do so, follow these steps:
- Clone the repo [here](https://github.com/loupeteam/Omniverse_Beckhoff_Bridge_Extension).
- In your Omniverse app, open the extensions manager by navigating to `Window / Extensions`.
- Open the general extension settings, and add a new entry into the `Extension Search Paths` table. This should be the local path to the root of the repo that was just cloned.
- Back in the extensions manager, search for `BECKHOFF BRIDGE`, and enable it.
- Once enabled, the extension will show up as an option in the top menu banner.
# Configuration
You can open the extension by clicking on `Beckhoff Bridge / Open Bridge Settings` from the top menu. The following configuration options are available:
- Enable ADS Client: Enables or disables the ADS client that reads data from and writes data to the PLC.
- Refresh Rate: The rate at which the ADS client will read data from the PLC in milliseconds.
- PLC AMS Net ID: The AMS Net ID of the PLC to connect to.
- Settings commands: These commands are used to load and save the extension settings as permanent parameters. The Save button backs up the current parameters, and the Load button restores them from the last saved values.
# Usage
Once the extension is enabled, the Beckhoff Bridge will attempt to connect to the PLC.
### Monitoring Extension Status
The status of the extension can be viewed in the `Status` field. Here are the possible messages and their meaning:
- `Disabled`: the enable checkbox is unchecked, and no communication is attempted.
- `Attempting to connect...`: the ADS client is trying to connect to the PLC. Staying in this state for more than a few seconds indicates that there is a problem with the connection.
- `Connected`: the ADS client has successfully established a connection with the PLC.
- `Error writing data to the PLC: [...]`: an error occurred while performing an ADS variable write.
- `Error reading data from the PLC: [...]`: an error occurred while performing an ADS variable read.
### Monitoring Variable Values
Once variable reads are occurring, the `Monitor` pane will show a JSON string with the names and values of the variables being read. This is helpful for troubleshooting.
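For cyclic-read subscriptions such as the ones in the example below, the data is nested by the segments of the PLC variable paths. The following rendering is purely illustrative (the actual keys and values depend on the variables subscribed):
```python
# Illustrative structure only - keys mirror the subscribed variable paths
{
    "MAIN": {
        "custom_struct": {
            "var1": 0,
            "var_array": [0, 0]
        }
    }
}
```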
### Performing read/write operations
The variables on the PLC that should be read or written are specified in a custom user extension or app that uses the API available from the `loupe.simulation.beckhoff_bridge` module.
```python
from loupe.simulation.beckhoff_bridge import BeckhoffBridge
# Instantiate the bridge and register lifecycle subscriptions
beckhoff_bridge = BeckhoffBridge.Manager()
beckhoff_bridge.register_init_callback(on_beckhoff_init)
beckhoff_bridge.register_data_callback(on_message)
# This function gets called once on init, and should be used to subscribe to cyclic reads.
def on_beckhoff_init( event ):
# Create a list of variable names to be read cyclically, and add to Manager
variables = [ 'MAIN.custom_struct.var1',
'MAIN.custom_struct.var_array[0]',
'MAIN.custom_struct.var_array[1]']
beckhoff_bridge.add_cyclic_read_variables(variables)
# This function is called every time the bridge receives new data
def on_message( event ):
# Read the event data, which includes values for the PLC variables requested
data = event.payload['data']['MAIN']['custom_struct']['var_array']
# In the app's cyclic logic, writes can be performed as follows:
def cyclic():
# Write the value `1` to PLC variable 'MAIN.custom_struct.var1'
beckhoff_bridge.write_variable('MAIN.custom_struct.var1', 1)
``` | 4,351 | Markdown | 55.51948 | 326 | 0.759136 |
shazanfazal/Test-omniverse/exts/shazan.extension/shazan/extension/extension.py | import omni.ext
import omni.ui as ui
import omni.kit.commands as command
# Functions and vars are available to other extension as usual in python: `example.python_ext.some_public_function(x)`
def some_public_function(x: int):
print("[shazan.extension] some_public_function was called with x: ", x)
return x ** x
# Any class derived from `omni.ext.IExt` in top level module (defined in `python.modules` of `extension.toml`) will be
# instantiated when extension gets enabled and `on_startup(ext_id)` will be called. Later when extension gets disabled
# on_shutdown() is called.
class ShazanExtensionExtension(omni.ext.IExt):
# ext_id is current extension id. It can be used with extension manager to query additional information, like where
# this extension is located on filesystem.
def on_startup(self, ext_id):
print("[shazan.extension] shazan extension startup")
self._window = ui.Window("Learning Extension", width=300, height=300)
with self._window.frame:
with ui.VStack():
def on_click(prim_type):
command.execute("CreateMeshPrimWithDefaultXform",prim_type=prim_type)
ui.Label("Create me the following")
ui.Button("Create a Cone",clicked_fn=lambda: on_click("Cone"))
ui.Button("Create a Cube",clicked_fn=lambda: on_click("Cube"))
ui.Button("Create a Cylinder",clicked_fn=lambda: on_click("Cylinder"))
ui.Button("Create a Disk",clicked_fn=lambda: on_click("Disk"))
ui.Button("Create a Plane",clicked_fn=lambda: on_click("Plane"))
ui.Button("Create a Sphere",clicked_fn=lambda: on_click("Sphere"))
ui.Button("Create a Torus",clicked_fn=lambda: on_click("Torus"))
def on_shutdown(self):
print("[shazan.extension] shazan extension shutdown")
| 1,884 | Python | 47.333332 | 119 | 0.663482 |
pascal-roth/orbit_envs/README.md | <div style="display: flex;">
<img src="docs/example_matterport.png" alt="Matterport Mesh" style="width: 48%; padding: 5px;">
<img src="docs/example_carla.png" alt="Unreal Engine / Carla Mesh" style="width: 48%; padding: 5px;">
</div>
---
# Omniverse Matterport3D and Unreal Engine Assets Extensions
[](https://docs.omniverse.nvidia.com/isaacsim/latest/overview.html)
[](https://docs.python.org/3/whatsnew/3.10.html)
[](https://releases.ubuntu.com/20.04/)
[](https://pre-commit.com/)
[](https://opensource.org/licenses/BSD-3-Clause)
This repository contains the extensions for Matterport3D and Unreal Engine Assets.
The extensions enable easy loading of the assets into Isaac Sim and provide access to their semantic labels.
They are developed as part of the ViPlanner project ([Paper](https://arxiv.org/abs/2310.00982) | [Code](https://github.com/leggedrobotics/viplanner))
and are based on the [Orbit](https://isaac-orbit.github.io/) framework.
**Attention:**
The central part of the extensions is currently being updated to the latest Orbit version.
This repo contains a temporary solution sufficient for the demo script included in ViPlanner, found [here](https://github.com/leggedrobotics/viplanner/tree/main/omniverse).
An updated version will be available soon.
## Installation
To install the extensions, follow these steps:
1. Install Isaac Sim using the [Orbit installation guide](https://isaac-orbit.github.io/orbit/source/setup/installation.html).
2. Clone the orbit repo and link the extensions.
```
git clone [email protected]:NVIDIA-Omniverse/orbit.git
cd orbit/source/extensions
ln -s {ORBIT_ENVS_PROJECT_DIR}/extensions/omni.isaac.matterport .
ln -s {ORBIT_ENVS_PROJECT_DIR}/extensions/omni.isaac.carla .
```
3. Then run the orbit installer script.
```
./orbit.sh -i -e
```
## Usage
For the Matterport extension, a GUI interface is available. To use it, start the simulation:
```
./orbit.sh -s
```
Then, in the GUI, go to `Window -> Extensions` and type `Matterport` in the search bar. You should see the Matterport3D extension.
Enable it to open the GUI interface.
To use both as part of an Orbit workflow, please refer to the [ViPlanner Demo](https://github.com/leggedrobotics/viplanner/tree/main/omniverse).
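As a rough orientation, the configuration classes shipped with the extensions can also be used directly in Python. The snippet below is an illustrative sketch only: all prim paths and file paths are placeholders, and the grid ray pattern is just one possible choice of pattern configuration.
```python
from omni.isaac.matterport.config import MatterportImporterCfg
from omni.isaac.matterport.domains import MatterportRayCasterCfg
from omni.isaac.orbit.sensors.ray_caster import patterns
# Importer configuration: consumed by MatterportImporter to load the Matterport mesh
# (.usd directly, or .obj when using the asynchronous/GUI conversion path).
importer_cfg = MatterportImporterCfg(
    prim_path="/World/Matterport",
    obj_filepath="/path/to/matterport_mesh/<id>/<id>.usd",  # placeholder path
)
# Ray-caster configuration: casts rays against the semantic .ply mesh of the same environment
# and is consumed by the Orbit sensor framework.
height_scanner_cfg = MatterportRayCasterCfg(
    prim_path="/World/Robot/base",  # placeholder prim path
    mesh_prim_paths=["/path/to/house_segmentations/<id>.ply"],  # placeholder path
    pattern_cfg=patterns.GridPatternCfg(resolution=0.1, size=[1.6, 1.0]),
    debug_vis=False,
)
```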
## <a name="CitingViPlanner"></a>Citing
If you use this code in a scientific publication, please cite the following [paper](https://arxiv.org/abs/2310.00982):
```
@article{roth2023viplanner,
title ={ViPlanner: Visual Semantic Imperative Learning for Local Navigation},
author ={Pascal Roth and Julian Nubert and Fan Yang and Mayank Mittal and Marco Hutter},
journal = {2024 IEEE International Conference on Robotics and Automation (ICRA)},
year = {2023},
month = {May},
}
```
### License
This code belongs to the Robotic Systems Lab, ETH Zurich.
All rights reserved.
**Authors: [Pascal Roth](https://github.com/pascal-roth)<br />
Maintainer: Pascal Roth, [email protected]**
This repository contains research code; expect that it changes often, and any fitness for a particular purpose is disclaimed.
| 3,457 | Markdown | 40.66265 | 172 | 0.74718 |
pascal-roth/orbit_envs/extensions/omni.isaac.matterport/setup.py | # Copyright (c) 2024 ETH Zurich (Robotic Systems Lab)
# Author: Pascal Roth
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Installation script for the 'omni.isaac.matterport' python package."""
from setuptools import setup
# Minimum dependencies required prior to installation
INSTALL_REQUIRES = [
# generic
"trimesh",
"PyQt5",
"matplotlib>=3.5.0",
"pandas",
]
# Installation operation
setup(
name="omni-isaac-matterport",
author="Pascal Roth",
author_email="[email protected]",
version="0.0.1",
description="Extension to include Matterport 3D Datasets into Isaac (taken from https://niessner.github.io/Matterport/).",
keywords=["robotics"],
include_package_data=True,
python_requires=">=3.7",
install_requires=INSTALL_REQUIRES,
packages=["omni.isaac.matterport"],
classifiers=["Natural Language :: English", "Programming Language :: Python :: 3.7"],
zip_safe=False,
)
# EOF
| 967 | Python | 24.473684 | 126 | 0.688728 |
pascal-roth/orbit_envs/extensions/omni.isaac.matterport/config/extension.toml | [package]
version = "0.0.1"
title = "Matterport extension"
description="Extension to include Matterport 3D Datasets into Isaac"
authors =["Pascal Roth"]
repository = "https://github.com/leggedrobotics/omni_isaac_orbit"
category = "robotics"
keywords = ["kit", "robotics"]
readme = "docs/README.md"
[dependencies]
"omni.kit.uiapp" = {}
"omni.isaac.ui" = {}
"omni.isaac.core" = {}
"omni.isaac.orbit" = {}
# Main python module this extension provides.
[[python.module]]
name = "omni.isaac.matterport"
[[python.module]]
name = "omni.isaac.matterport.scripts"
| 559 | TOML | 23.347825 | 68 | 0.710197 |
pascal-roth/orbit_envs/extensions/omni.isaac.matterport/omni/isaac/matterport/domains/__init__.py | # Copyright (c) 2024 ETH Zurich (Robotic Systems Lab)
# Author: Pascal Roth
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
import os
DATA_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), "../../../../data"))
from .matterport_importer import MatterportImporter
from .matterport_raycast_camera import MatterportRayCasterCamera
from .matterport_raycaster import MatterportRayCaster
from .raycaster_cfg import MatterportRayCasterCfg
__all__ = [
"MatterportRayCasterCamera",
"MatterportImporter",
"MatterportRayCaster",
"MatterportRayCasterCfg",
"DATA_DIR",
]
# EoF
| 617 | Python | 23.719999 | 87 | 0.73906 |
pascal-roth/orbit_envs/extensions/omni.isaac.matterport/omni/isaac/matterport/domains/matterport_raycast_camera.py | # Copyright (c) 2024 ETH Zurich (Robotic Systems Lab)
# Author: Pascal Roth
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import os
from typing import ClassVar, Sequence
import carb
import numpy as np
import omni.isaac.orbit.utils.math as math_utils
import pandas as pd
import torch
import trimesh
import warp as wp
from omni.isaac.matterport.domains import DATA_DIR
from omni.isaac.orbit.sensors import RayCasterCamera, RayCasterCameraCfg
from omni.isaac.orbit.utils.warp import raycast_mesh
from tensordict import TensorDict
class MatterportRayCasterCamera(RayCasterCamera):
UNSUPPORTED_TYPES: ClassVar[dict] = {
"rgb",
"instance_id_segmentation",
"instance_segmentation",
"skeleton_data",
"motion_vectors",
"bounding_box_2d_tight",
"bounding_box_2d_loose",
"bounding_box_3d",
}
"""Data types that are not supported by the ray-caster."""
face_id_category_mapping: ClassVar[dict] = {}
"""Mapping from face id to semantic category id."""
def __init__(self, cfg: RayCasterCameraCfg):
# initialize base class
super().__init__(cfg)
def _check_supported_data_types(self, cfg: RayCasterCameraCfg):
# check if there is any intersection in unsupported types
# reason: we cannot obtain this data from simplified warp-based ray caster
common_elements = set(cfg.data_types) & MatterportRayCasterCamera.UNSUPPORTED_TYPES
if common_elements:
raise ValueError(
f"RayCasterCamera class does not support the following sensor types: {common_elements}."
"\n\tThis is because these sensor types cannot be obtained in a fast way using ''warp''."
"\n\tHint: If you need to work with these sensor types, we recommend using the USD camera"
" interface from the omni.isaac.orbit.sensors.camera module."
)
def _initialize_impl(self):
super()._initialize_impl()
        # load category id to class mapping (name and id of the mpcat40 reduced class set)
# More Information: https://github.com/niessner/Matterport/blob/master/data_organization.md#house_segmentations
mapping = pd.read_csv(DATA_DIR + "/mappings/category_mapping.tsv", sep="\t")
self.mapping_mpcat40 = torch.tensor(mapping["mpcat40index"].to_numpy(), device=self._device, dtype=torch.long)
self._color_mapping()
def _color_mapping(self):
# load defined colors for mpcat40
mapping_40 = pd.read_csv(DATA_DIR + "/mappings/mpcat40.tsv", sep="\t")
color = mapping_40["hex"].to_numpy()
self.color = torch.tensor(
[(int(color[i][1:3], 16), int(color[i][3:5], 16), int(color[i][5:7], 16)) for i in range(len(color))],
device=self._device,
dtype=torch.uint8,
)
def _initialize_warp_meshes(self):
# check if mesh is already loaded
for mesh_prim_path in self.cfg.mesh_prim_paths:
if (
mesh_prim_path in MatterportRayCasterCamera.meshes
and mesh_prim_path in MatterportRayCasterCamera.face_id_category_mapping
):
continue
# find ply
if os.path.isabs(mesh_prim_path):
file_path = mesh_prim_path
assert os.path.isfile(mesh_prim_path), f"No .ply file found under absolute path: {mesh_prim_path}"
else:
file_path = os.path.join(DATA_DIR, mesh_prim_path)
assert os.path.isfile(
file_path
), f"No .ply file found under relative path to extension data: {file_path}"
# load ply
curr_trimesh = trimesh.load(file_path)
if mesh_prim_path not in MatterportRayCasterCamera.meshes:
# Convert trimesh into wp mesh
mesh_wp = wp.Mesh(
points=wp.array(curr_trimesh.vertices.astype(np.float32), dtype=wp.vec3, device=self._device),
indices=wp.array(curr_trimesh.faces.astype(np.int32).flatten(), dtype=int, device=self._device),
)
# save mesh
MatterportRayCasterCamera.meshes[mesh_prim_path] = mesh_wp
if mesh_prim_path not in MatterportRayCasterCamera.face_id_category_mapping:
                # create mapping from face id to semantic category id
# get raw face information
faces_raw = curr_trimesh.metadata["_ply_raw"]["face"]["data"]
carb.log_info(f"Raw face information of type {faces_raw.dtype}")
# get face categories
face_id_category_mapping = torch.tensor(
[single_face[3] for single_face in faces_raw], device=self._device
)
# save mapping
MatterportRayCasterCamera.face_id_category_mapping[mesh_prim_path] = face_id_category_mapping
def _update_buffers_impl(self, env_ids: Sequence[int]):
"""Fills the buffers of the sensor data."""
# increment frame count
self._frame[env_ids] += 1
# compute poses from current view
pos_w, quat_w = self._compute_camera_world_poses(env_ids)
# update the data
self._data.pos_w[env_ids] = pos_w
self._data.quat_w_world[env_ids] = quat_w
# note: full orientation is considered
ray_starts_w = math_utils.quat_apply(quat_w.repeat(1, self.num_rays), self.ray_starts[env_ids])
ray_starts_w += pos_w.unsqueeze(1)
ray_directions_w = math_utils.quat_apply(quat_w.repeat(1, self.num_rays), self.ray_directions[env_ids])
# ray cast and store the hits
# TODO: Make ray-casting work for multiple meshes?
# necessary for regular dictionaries.
self.ray_hits_w, ray_depth, ray_normal, ray_face_ids = raycast_mesh(
ray_starts_w,
ray_directions_w,
mesh=RayCasterCamera.meshes[self.cfg.mesh_prim_paths[0]],
max_dist=self.cfg.max_distance,
return_distance=any(
[name in self.cfg.data_types for name in ["distance_to_image_plane", "distance_to_camera"]]
),
return_normal="normals" in self.cfg.data_types,
return_face_id="semantic_segmentation" in self.cfg.data_types,
)
# update output buffers
if "distance_to_image_plane" in self.cfg.data_types:
# note: data is in camera frame so we only take the first component (z-axis of camera frame)
distance_to_image_plane = (
math_utils.quat_apply(
math_utils.quat_inv(quat_w).repeat(1, self.num_rays),
(ray_depth[:, :, None] * ray_directions_w),
)
)[:, :, 0]
self._data.output["distance_to_image_plane"][env_ids] = distance_to_image_plane.view(-1, *self.image_shape)
if "distance_to_camera" in self.cfg.data_types:
self._data.output["distance_to_camera"][env_ids] = ray_depth.view(-1, *self.image_shape)
if "normals" in self.cfg.data_types:
self._data.output["normals"][env_ids] = ray_normal.view(-1, *self.image_shape, 3)
if "semantic_segmentation" in self._data.output.keys(): # noqa: SIM118
# get the category index of the hit faces (category index from unreduced set = ~1600 classes)
face_id = MatterportRayCasterCamera.face_id_category_mapping[self.cfg.mesh_prim_paths[0]][
ray_face_ids.flatten().type(torch.long)
]
# map category index to reduced set
face_id_mpcat40 = self.mapping_mpcat40[face_id.type(torch.long) - 1]
# get the color of the face
face_color = self.color[face_id_mpcat40]
# reshape and transpose to get the correct orientation
self._data.output["semantic_segmentation"][env_ids] = face_color.view(-1, *self.image_shape, 3)
def _create_buffers(self):
"""Create the buffers to store data."""
# prepare drift
self.drift = torch.zeros(self._view.count, 3, device=self.device)
# create the data object
# -- pose of the cameras
self._data.pos_w = torch.zeros((self._view.count, 3), device=self._device)
self._data.quat_w_world = torch.zeros((self._view.count, 4), device=self._device)
# -- intrinsic matrix
self._data.intrinsic_matrices = torch.zeros((self._view.count, 3, 3), device=self._device)
self._data.intrinsic_matrices[:, 2, 2] = 1.0
self._data.image_shape = self.image_shape
# -- output data
# create the buffers to store the annotator data.
self._data.output = TensorDict({}, batch_size=self._view.count, device=self.device)
self._data.info = [{name: None for name in self.cfg.data_types}] * self._view.count
for name in self.cfg.data_types:
if name in ["distance_to_image_plane", "distance_to_camera"]:
shape = (self.cfg.pattern_cfg.height, self.cfg.pattern_cfg.width)
dtype = torch.float32
elif name in ["normals"]:
shape = (self.cfg.pattern_cfg.height, self.cfg.pattern_cfg.width, 3)
dtype = torch.float32
elif name in ["semantic_segmentation"]:
shape = (self.cfg.pattern_cfg.height, self.cfg.pattern_cfg.width, 3)
dtype = torch.uint8
else:
raise ValueError(f"Unknown data type: {name}")
# store the data
self._data.output[name] = torch.zeros((self._view.count, *shape), dtype=dtype, device=self._device)
| 9,752 | Python | 46.808823 | 119 | 0.608901 |
pascal-roth/orbit_envs/extensions/omni.isaac.matterport/omni/isaac/matterport/domains/raycaster_cfg.py | # Copyright (c) 2024 ETH Zurich (Robotic Systems Lab)
# Author: Pascal Roth
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from omni.isaac.orbit.sensors.ray_caster import RayCasterCfg
from omni.isaac.orbit.utils import configclass
from .matterport_raycaster import MatterportRayCaster
@configclass
class MatterportRayCasterCfg(RayCasterCfg):
"""Configuration for the ray-cast sensor for Matterport Environments."""
class_type = MatterportRayCaster
"""Name of the specific matterport ray caster class."""
| 539 | Python | 27.421051 | 76 | 0.779221 |
pascal-roth/orbit_envs/extensions/omni.isaac.matterport/omni/isaac/matterport/domains/matterport_importer.py | # Copyright (c) 2024 ETH Zurich (Robotic Systems Lab)
# Author: Pascal Roth
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import builtins
# python
import os
from typing import TYPE_CHECKING
# omni
import carb
import omni.isaac.core.utils.prims as prim_utils
import omni.isaac.core.utils.stage as stage_utils
# isaac-orbit
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.core.simulation_context import SimulationContext
from omni.isaac.orbit.terrains import TerrainImporter
if TYPE_CHECKING:
from omni.isaac.matterport.config import MatterportImporterCfg
# omniverse
from omni.isaac.core.utils import extensions
extensions.enable_extension("omni.kit.asset_converter")
import omni.kit.asset_converter as converter
class MatterportConverter:
def __init__(self, input_obj: str, context: converter.impl.AssetConverterContext) -> None:
self._input_obj = input_obj
self._context = context
# setup converter
self.task_manager = converter.extension.AssetImporterExtension()
return
async def convert_asset_to_usd(self) -> None:
# get usd file path and create directory
base_path, _ = os.path.splitext(self._input_obj)
# set task
task = self.task_manager.create_converter_task(
self._input_obj, base_path + ".usd", asset_converter_context=self._context
)
success = await task.wait_until_finished()
# print error
if not success:
detailed_status_code = task.get_status()
detailed_status_error_string = task.get_error_message()
carb.log_error(
f"Failed to convert {self._input_obj} to {base_path + '.usd'} "
f"with status {detailed_status_code} and error {detailed_status_error_string}"
)
return
class MatterportImporter(TerrainImporter):
"""
    Importer for Matterport 3D environment meshes.
"""
cfg: MatterportImporterCfg
def __init__(self, cfg: MatterportImporterCfg) -> None:
"""
:param
"""
# store inputs
self.cfg = cfg
self.device = SimulationContext.instance().device
# create a dict of meshes
self.meshes = dict()
self.warp_meshes = dict()
self.env_origins = None
self.terrain_origins = None
# import the world
if not self.cfg.terrain_type == "matterport":
raise ValueError(
"MatterportImporter can only import 'matterport' data. Given terrain type "
f"'{self.cfg.terrain_type}'is not supported."
)
if builtins.ISAAC_LAUNCHED_FROM_TERMINAL is False:
self.load_world()
else:
carb.log_info("[INFO]: Loading in extension mode requires calling 'load_world_async'")
if isinstance(self.cfg.num_envs, int):
self.configure_env_origins()
# set initial state of debug visualization
self.set_debug_vis(self.cfg.debug_vis)
# Converter
self.converter: MatterportConverter = MatterportConverter(self.cfg.obj_filepath, self.cfg.asset_converter)
return
async def load_world_async(self) -> None:
"""Function called when clicking load button"""
# create world
await self.load_matterport()
# update stage for any remaining process.
await stage_utils.update_stage_async()
# Now we are ready!
carb.log_info("[INFO]: Setup complete...")
return
def load_world(self) -> None:
"""Function called when clicking load button"""
# create world
self.load_matterport_sync()
# update stage for any remaining process.
stage_utils.update_stage()
# Now we are ready!
carb.log_info("[INFO]: Setup complete...")
return
async def load_matterport(self) -> None:
_, ext = os.path.splitext(self.cfg.obj_filepath)
# if obj mesh --> convert to usd
if ext == ".obj":
await self.converter.convert_asset_to_usd()
# add mesh to stage
self.load_matterport_sync()
def load_matterport_sync(self) -> None:
base_path, _ = os.path.splitext(self.cfg.obj_filepath)
assert os.path.exists(base_path + ".usd"), (
"Matterport load sync can only handle '.usd' files not obj files. "
"Please use the async function to convert the obj file to usd first (accessed over the extension in the GUI)"
)
self._xform_prim = prim_utils.create_prim(
prim_path=self.cfg.prim_path + "/Matterport", translation=(0.0, 0.0, 0.0), usd_path=base_path + ".usd"
)
# apply collider properties
collider_cfg = sim_utils.CollisionPropertiesCfg(collision_enabled=True)
sim_utils.define_collision_properties(self._xform_prim.GetPrimPath(), collider_cfg)
# create physics material
physics_material_cfg: sim_utils.RigidBodyMaterialCfg = self.cfg.physics_material
# spawn the material
physics_material_cfg.func(f"{self.cfg.prim_path}/physicsMaterial", self.cfg.physics_material)
sim_utils.bind_physics_material(self._xform_prim.GetPrimPath(), f"{self.cfg.prim_path}/physicsMaterial")
# add colliders and physics material
if self.cfg.groundplane:
ground_plane_cfg = sim_utils.GroundPlaneCfg(physics_material=self.cfg.physics_material)
ground_plane = ground_plane_cfg.func("/World/GroundPlane", ground_plane_cfg)
ground_plane.visible = False
return
| 5,626 | Python | 33.734568 | 121 | 0.640775 |
pascal-roth/orbit_envs/extensions/omni.isaac.matterport/omni/isaac/matterport/domains/matterport_raycaster.py | # Copyright (c) 2024 ETH Zurich (Robotic Systems Lab)
# Author: Pascal Roth
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import os
from typing import TYPE_CHECKING
import numpy as np
import trimesh
import warp as wp
from omni.isaac.matterport.domains import DATA_DIR
from omni.isaac.orbit.sensors.ray_caster import RayCaster
if TYPE_CHECKING:
from .raycaster_cfg import MatterportRayCasterCfg
class MatterportRayCaster(RayCaster):
"""A ray-casting sensor for matterport meshes.
The ray-caster uses a set of rays to detect collisions with meshes in the scene. The rays are
defined in the sensor's local coordinate frame. The sensor can be configured to ray-cast against
a set of meshes with a given ray pattern.
The meshes are parsed from the list of primitive paths provided in the configuration. These are then
converted to warp meshes and stored in the `warp_meshes` list. The ray-caster then ray-casts against
these warp meshes using the ray pattern provided in the configuration.
.. note::
Currently, only static meshes are supported. Extending the warp mesh to support dynamic meshes
is a work in progress.
"""
cfg: MatterportRayCasterCfg
"""The configuration parameters."""
def __init__(self, cfg: MatterportRayCasterCfg):
"""Initializes the ray-caster object.
Args:
cfg (MatterportRayCasterCfg): The configuration parameters.
"""
# initialize base class
super().__init__(cfg)
def _initialize_warp_meshes(self):
# check if mesh is already loaded
for mesh_prim_path in self.cfg.mesh_prim_paths:
if mesh_prim_path in MatterportRayCaster.meshes:
continue
# find ply
if os.path.isabs(mesh_prim_path):
file_path = mesh_prim_path
assert os.path.isfile(mesh_prim_path), f"No .ply file found under absolute path: {mesh_prim_path}"
else:
file_path = os.path.join(DATA_DIR, mesh_prim_path)
assert os.path.isfile(
file_path
), f"No .ply file found under relative path to extension data: {file_path}"
# load ply
curr_trimesh = trimesh.load(file_path)
# Convert trimesh into wp mesh
mesh_wp = wp.Mesh(
points=wp.array(curr_trimesh.vertices.astype(np.float32), dtype=wp.vec3, device=self._device),
indices=wp.array(curr_trimesh.faces.astype(np.int32).flatten(), dtype=int, device=self._device),
)
# save mesh
MatterportRayCaster.meshes[mesh_prim_path] = mesh_wp
| 2,745 | Python | 35.131578 | 114 | 0.654281 |
pascal-roth/orbit_envs/extensions/omni.isaac.matterport/omni/isaac/matterport/scripts/matterport_domains.py | # Copyright (c) 2024 ETH Zurich (Robotic Systems Lab)
# Author: Pascal Roth
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from typing import Dict
import carb
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import omni
import torch
from omni.isaac.matterport.domains.matterport_raycast_camera import (
MatterportRayCasterCamera,
)
from omni.isaac.orbit.sensors.camera import CameraData
from omni.isaac.orbit.sensors.ray_caster import RayCasterCfg
from omni.isaac.orbit.sim import SimulationContext
from .ext_cfg import MatterportExtConfig
mpl.use("Qt5Agg")
class MatterportDomains:
"""
Load Matterport3D Semantics and make them available to Isaac Sim
"""
def __init__(self, cfg: MatterportExtConfig):
"""
        Initialize the Matterport domains handler.
        Args:
            cfg (MatterportExtConfig): Extension configuration.
"""
self._cfg: MatterportExtConfig = cfg
# setup camera list
self.cameras: Dict[str, MatterportRayCasterCamera] = {}
# setup camera visualization
self.figures = {}
# internal parameters
self.callback_set = False
self.vis_init = False
self.prev_position = torch.zeros(3)
self.prev_orientation = torch.zeros(4)
# add callbacks for stage play/stop
physx_interface = omni.physx.acquire_physx_interface()
self._initialize_handle = physx_interface.get_simulation_event_stream_v2().create_subscription_to_pop_by_type(
int(omni.physx.bindings._physx.SimulationEvent.RESUMED), self._initialize_callback
)
self._invalidate_initialize_handle = (
physx_interface.get_simulation_event_stream_v2().create_subscription_to_pop_by_type(
int(omni.physx.bindings._physx.SimulationEvent.STOPPED), self._invalidate_initialize_callback
)
)
return
##
# Public Methods
##
def register_camera(self, cfg: RayCasterCfg):
"""
        Register a camera with the Matterport domains handler.
"""
# append to camera list
self.cameras[cfg.prim_path] = MatterportRayCasterCamera(cfg)
##
# Callback Setup
##
def _invalidate_initialize_callback(self, val):
if self.callback_set:
self._sim.remove_render_callback("matterport_update")
self.callback_set = False
def _initialize_callback(self, val):
if self.callback_set:
return
# check for camera
if len(self.cameras) == 0:
carb.log_warn("No cameras added! Add cameras first, then enable the callback!")
return
# get SimulationContext
if SimulationContext.instance():
self._sim: SimulationContext = SimulationContext.instance()
else:
carb.log_error("No Simulation Context found! Matterport Callback not attached!")
# add callback
self._sim.add_render_callback("matterport_update", callback_fn=self._update)
self.callback_set = True
##
# Callback Function
##
def _update(self, dt: float):
for camera in self.cameras.values():
camera.update(dt.payload["dt"])
if self._cfg.visualize:
vis_prim = self._cfg.visualize_prim if self._cfg.visualize_prim else list(self.cameras.keys())[0]
if torch.all(self.cameras[vis_prim].data.pos_w.cpu() == self.prev_position) and torch.all(
self.cameras[vis_prim].data.quat_w_world.cpu() == self.prev_orientation
):
return
self._update_visualization(self.cameras[vis_prim].data)
self.prev_position = self.cameras[vis_prim].data.pos_w.clone().cpu()
self.prev_orientation = self.cameras[vis_prim].data.quat_w_world.clone().cpu()
##
# Private Methods (Helper Functions)
##
# Visualization helpers ###
def _init_visualization(self, data: CameraData):
"""Initializes the visualization plane."""
# init depth figure
self.n_bins = 500 # Number of bins in the colormap
self.color_array = mpl.colormaps["gist_rainbow"](np.linspace(0, 1, self.n_bins)) # Colormap
if "semantic_segmentation" in data.output.keys(): # noqa: SIM118
# init semantics figure
fg_sem = plt.figure()
ax_sem = fg_sem.gca()
ax_sem.set_title("Semantic Segmentation")
img_sem = ax_sem.imshow(data.output["semantic_segmentation"][0].cpu().numpy())
self.figures["semantics"] = {"fig": fg_sem, "axis": ax_sem, "img": img_sem}
if "distance_to_image_plane" in data.output.keys(): # noqa: SIM118
# init semantics figure
fg_depth = plt.figure()
ax_depth = fg_depth.gca()
ax_depth.set_title("Distance To Image Plane")
img_depth = ax_depth.imshow(self.convert_depth_to_color(data.output["distance_to_image_plane"][0]))
self.figures["depth"] = {"fig": fg_depth, "axis": ax_depth, "img": img_depth}
if len(self.figures) > 0:
plt.ion()
# update flag
self.vis_init = True
def _update_visualization(self, data: CameraData) -> None:
"""
Updates the visualization plane.
"""
if self.vis_init is False:
self._init_visualization(data)
else:
# SEMANTICS
if "semantic_segmentation" in data.output.keys(): # noqa: SIM118
self.figures["semantics"]["img"].set_array(data.output["semantic_segmentation"][0].cpu().numpy())
self.figures["semantics"]["fig"].canvas.draw()
self.figures["semantics"]["fig"].canvas.flush_events()
# DEPTH
if "distance_to_image_plane" in data.output.keys(): # noqa: SIM118
# cam_data.img_depth.set_array(cam_data.render_depth)
self.figures["depth"]["img"].set_array(
self.convert_depth_to_color(data.output["distance_to_image_plane"][0])
)
self.figures["depth"]["fig"].canvas.draw()
self.figures["depth"]["fig"].canvas.flush_events()
plt.pause(1e-6)
def convert_depth_to_color(self, depth_img):
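        # Map the depth image to RGBA colors: replace non-finite values with the maximum depth, normalize
        # into [0, n_bins - 1] colormap bins, look up the pre-computed colors, and reshape to (H, W, 4).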
depth_img = depth_img.cpu().numpy()
depth_img[~np.isfinite(depth_img)] = depth_img.max()
depth_img_flattend = np.clip(depth_img.flatten(), a_min=0, a_max=depth_img.max())
depth_img_flattend = np.round(depth_img_flattend / depth_img.max() * (self.n_bins - 1)).astype(np.int32)
depth_colors = self.color_array[depth_img_flattend]
depth_colors = depth_colors.reshape(depth_img.shape[0], depth_img.shape[1], 4)
return depth_colors
| 6,788 | Python | 35.5 | 118 | 0.610784 |
pascal-roth/orbit_envs/extensions/omni.isaac.matterport/omni/isaac/matterport/scripts/ext_cfg.py | # Copyright (c) 2024 ETH Zurich (Robotic Systems Lab)
# Author: Pascal Roth
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
# python
from dataclasses import dataclass
from omni.isaac.matterport.config.importer_cfg import MatterportImporterCfg
@dataclass
class MatterportExtConfig:
# config classes
importer: MatterportImporterCfg = MatterportImporterCfg()
# semantic and depth information (can be changed individually for each camera)
visualize: bool = False
visualize_prim: str = None
# set value functions
def set_friction_dynamic(self, value: float):
self.importer.physics_material.dynamic_friction = value
def set_friction_static(self, value: float):
self.importer.physics_material.static_friction = value
def set_restitution(self, value: float):
self.importer.physics_material.restitution = value
def set_friction_combine_mode(self, value: int):
self.importer.physics_material.friction_combine_mode = value
def set_restitution_combine_mode(self, value: int):
self.importer.physics_material.restitution_combine_mode = value
def set_improved_patch_friction(self, value: bool):
self.importer.physics_material.improve_patch_friction = value
def set_obj_filepath(self, value: str):
self.importer.obj_filepath = value
def set_prim_path(self, value: str):
self.importer.prim_path = value
def set_visualize(self, value: bool):
self.visualize = value
def set_visualization_prim(self, value: str):
self.visualize_prim = value
| 1,596 | Python | 30.313725 | 82 | 0.716165 |
pascal-roth/orbit_envs/extensions/omni.isaac.matterport/omni/isaac/matterport/scripts/__init__.py | # Copyright (c) 2024 ETH Zurich (Robotic Systems Lab)
# Author: Pascal Roth
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from .matterport_ext import MatterPortExtension
__all__ = ["MatterPortExtension"]
| 225 | Python | 21.599998 | 53 | 0.746667 |
pascal-roth/orbit_envs/extensions/omni.isaac.matterport/omni/isaac/matterport/scripts/matterport_ext.py | # Copyright (c) 2024 ETH Zurich (Robotic Systems Lab)
# Author: Pascal Roth
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
import asyncio
import gc
# python
import os
import carb
# omni
import omni
import omni.client
import omni.ext
import omni.isaac.core.utils.prims as prim_utils
import omni.isaac.core.utils.stage as stage_utils
# isaac-core
import omni.ui as ui
from omni.isaac.matterport.domains import MatterportImporter
from omni.isaac.orbit.sensors.ray_caster import RayCasterCfg, patterns
from omni.isaac.orbit.sim import SimulationCfg, SimulationContext
# omni-isaac-ui
from omni.isaac.ui.ui_utils import (
btn_builder,
cb_builder,
dropdown_builder,
float_builder,
get_style,
int_builder,
setup_ui_headers,
str_builder,
)
# omni-isaac-matterport
from .ext_cfg import MatterportExtConfig
from .matterport_domains import MatterportDomains
EXTENSION_NAME = "Matterport Importer"
def is_mesh_file(path: str) -> bool:
_, ext = os.path.splitext(path.lower())
return ext in [".obj", ".usd"]
def is_ply_file(path: str) -> bool:
_, ext = os.path.splitext(path.lower())
return ext in [".ply"]
def on_filter_obj_item(item) -> bool:
if not item or item.is_folder:
return not (item.name == "Omniverse" or item.path.startswith("omniverse:"))
return is_mesh_file(item.path)
def on_filter_ply_item(item) -> bool:
if not item or item.is_folder:
return not (item.name == "Omniverse" or item.path.startswith("omniverse:"))
return is_ply_file(item.path)
class MatterPortExtension(omni.ext.IExt):
"""Extension to load Matterport 3D Environments into Isaac Sim"""
def on_startup(self, ext_id):
self._ext_id = ext_id
self._usd_context = omni.usd.get_context()
self._window = omni.ui.Window(
EXTENSION_NAME, width=400, height=500, visible=True, dockPreference=ui.DockPreference.LEFT_BOTTOM
)
# init config class and get path to extension
self._config = MatterportExtConfig()
self._extension_path = omni.kit.app.get_app().get_extension_manager().get_extension_path(ext_id)
# set additional parameters
        self._input_fields: dict = {}  # dictionary to store values of buttons, float fields, etc.
self.domains: MatterportDomains = None # callback class for semantic rendering
self.ply_proposal: str = ""
# build ui
self.build_ui()
return
##
# UI Build functions
##
def build_ui(self, build_cam: bool = False, build_viz: bool = False):
with self._window.frame:
with ui.VStack(spacing=5, height=0):
self._build_info_ui()
self._build_import_ui()
if build_cam:
self._build_camera_ui()
if build_viz:
self._build_viz_ui()
async def dock_window():
await omni.kit.app.get_app().next_update_async()
def dock(space, name, location, pos=0.5):
window = omni.ui.Workspace.get_window(name)
if window and space:
window.dock_in(space, location, pos)
return window
tgt = ui.Workspace.get_window("Viewport")
dock(tgt, EXTENSION_NAME, omni.ui.DockPosition.LEFT, 0.33)
await omni.kit.app.get_app().next_update_async()
self._task = asyncio.ensure_future(dock_window())
def _build_info_ui(self):
title = EXTENSION_NAME
doc_link = "https://github.com/leggedrobotics/omni_isaac_orbit"
overview = "This utility is used to import Matterport3D Environments into Isaac Sim. "
overview += "The environment and additional information are available at https://github.com/niessner/Matterport"
overview += "\n\nPress the 'Open in IDE' button to view the source code."
setup_ui_headers(self._ext_id, __file__, title, doc_link, overview)
return
def _build_import_ui(self):
frame = ui.CollapsableFrame(
title="Import Dataset",
height=0,
collapsed=False,
style=get_style(),
style_type_name_override="CollapsableFrame",
horizontal_scrollbar_policy=ui.ScrollBarPolicy.SCROLLBAR_AS_NEEDED,
vertical_scrollbar_policy=ui.ScrollBarPolicy.SCROLLBAR_ALWAYS_ON,
)
with frame:
with ui.VStack(style=get_style(), spacing=5, height=0):
# PhysicsMaterial
self._input_fields["friction_dynamic"] = float_builder(
"Dynamic Friction",
default_val=self._config.importer.physics_material.dynamic_friction,
tooltip=f"Sets the dynamic friction of the physics material (default: {self._config.importer.physics_material.dynamic_friction})",
)
self._input_fields["friction_dynamic"].add_value_changed_fn(
lambda m, config=self._config: config.set_friction_dynamic(m.get_value_as_float())
)
self._input_fields["friction_static"] = float_builder(
"Static Friction",
default_val=self._config.importer.physics_material.static_friction,
tooltip=f"Sets the static friction of the physics material (default: {self._config.importer.physics_material.static_friction})",
)
self._input_fields["friction_static"].add_value_changed_fn(
lambda m, config=self._config: config.set_friction_static(m.get_value_as_float())
)
self._input_fields["restitution"] = float_builder(
"Restitution",
default_val=self._config.importer.physics_material.restitution,
tooltip=f"Sets the restitution of the physics material (default: {self._config.importer.physics_material.restitution})",
)
self._input_fields["restitution"].add_value_changed_fn(
lambda m, config=self._config: config.set_restitution(m.get_value_as_float())
)
friction_restitution_options = ["average", "min", "multiply", "max"]
dropdown_builder(
"Friction Combine Mode",
items=friction_restitution_options,
default_val=friction_restitution_options.index(
self._config.importer.physics_material.friction_combine_mode
),
on_clicked_fn=lambda mode_str, config=self._config: config.set_friction_combine_mode(mode_str),
tooltip=f"Sets the friction combine mode of the physics material (default: {self._config.importer.physics_material.friction_combine_mode})",
)
dropdown_builder(
"Restitution Combine Mode",
items=friction_restitution_options,
default_val=friction_restitution_options.index(
self._config.importer.physics_material.restitution_combine_mode
),
on_clicked_fn=lambda mode_str, config=self._config: config.set_restitution_combine_mode(mode_str),
tooltip=f"Sets the friction combine mode of the physics material (default: {self._config.importer.physics_material.restitution_combine_mode})",
)
cb_builder(
label="Improved Patch Friction",
tooltip=f"Sets the improved patch friction of the physics material (default: {self._config.importer.physics_material.improve_patch_friction})",
on_clicked_fn=lambda m, config=self._config: config.set_improved_patch_friction(m),
default_val=self._config.importer.physics_material.improve_patch_friction,
)
# Set prim path for environment
self._input_fields["prim_path"] = str_builder(
"Prim Path of the Environment",
tooltip="Prim path of the environment",
default_val=self._config.importer.prim_path,
)
self._input_fields["prim_path"].add_value_changed_fn(
lambda m, config=self._config: config.set_prim_path(m.get_value_as_string())
)
# read import location
def check_file_type(model=None):
path = model.get_value_as_string()
if is_mesh_file(path):
self._input_fields["import_btn"].enabled = True
self._make_ply_proposal(path)
self._config.set_obj_filepath(path)
else:
self._input_fields["import_btn"].enabled = False
carb.log_warn(f"Invalid path to .obj file: {path}")
kwargs = {
"label": "Input File",
"default_val": self._config.importer.obj_filepath,
"tooltip": "Click the Folder Icon to Set Filepath",
"use_folder_picker": True,
"item_filter_fn": on_filter_obj_item,
"bookmark_label": "Included Matterport3D meshs",
"bookmark_path": f"{self._extension_path}/data/mesh",
"folder_dialog_title": "Select .obj File",
"folder_button_title": "*.obj, *.usd",
}
self._input_fields["input_file"] = str_builder(**kwargs)
self._input_fields["input_file"].add_value_changed_fn(check_file_type)
self._input_fields["import_btn"] = btn_builder(
"Import", text="Import", on_clicked_fn=self._start_loading
)
self._input_fields["import_btn"].enabled = False
return
def _build_camera_ui(self):
frame = ui.CollapsableFrame(
title="Add Camera",
height=0,
collapsed=False,
style=get_style(),
style_type_name_override="CollapsableFrame",
horizontal_scrollbar_policy=ui.ScrollBarPolicy.SCROLLBAR_AS_NEEDED,
vertical_scrollbar_policy=ui.ScrollBarPolicy.SCROLLBAR_ALWAYS_ON,
)
with frame:
with ui.VStack(style=get_style(), spacing=5, height=0):
# get import location and save directory
kwargs = {
"label": "Input ply File",
"default_val": self.ply_proposal,
"tooltip": "Click the Folder Icon to Set Filepath",
"use_folder_picker": True,
"item_filter_fn": on_filter_ply_item,
"bookmark_label": "Included Matterport3D Point-Cloud with semantic labels",
"bookmark_path": f"{self._extension_path}/data/mesh",
"folder_dialog_title": "Select .ply Point-Cloud File",
"folder_button_title": "Select .ply Point-Cloud",
}
self._input_fields["input_ply_file"] = str_builder(**kwargs)
# data fields parameters
self._input_fields["camera_semantics"] = cb_builder(
label="Enable Semantics",
tooltip="Enable access to the semantics information of the mesh (default: True)",
default_val=True,
)
self._input_fields["camera_depth"] = cb_builder(
label="Enable Distance to Camera Frame",
tooltip="Enable access to the depth information of the mesh - no additional compute effort (default: True)",
default_val=True,
)
# add camera sensor for which semantics and depth should be rendered
kwargs = {
"label": "Camera Prim Path",
"type": "stringfield",
"default_val": "",
"tooltip": "Enter Camera Prim Path",
"use_folder_picker": False,
}
self._input_fields["camera_prim"] = str_builder(**kwargs)
self._input_fields["camera_prim"].add_value_changed_fn(self.activate_load_camera)
self._input_fields["cam_height"] = int_builder(
"Camera Height in Pixels",
default_val=480,
tooltip="Set the height of the camera image plane in pixels (default: 480)",
)
self._input_fields["cam_width"] = int_builder(
"Camera Width in Pixels",
default_val=640,
tooltip="Set the width of the camera image plane in pixels (default: 640)",
)
self._input_fields["load_camera"] = btn_builder(
"Add Camera", text="Add Camera", on_clicked_fn=self._register_camera
)
self._input_fields["load_camera"].enabled = False
return
def _build_viz_ui(self):
frame = ui.CollapsableFrame(
title="Visualization",
height=0,
collapsed=False,
style=get_style(),
style_type_name_override="CollapsableFrame",
horizontal_scrollbar_policy=ui.ScrollBarPolicy.SCROLLBAR_AS_NEEDED,
vertical_scrollbar_policy=ui.ScrollBarPolicy.SCROLLBAR_ALWAYS_ON,
)
with frame:
with ui.VStack(style=get_style(), spacing=5, height=0):
cb_builder(
label="Visualization",
tooltip=f"Visualize Semantics and/or Depth (default: {self._config.visualize})",
on_clicked_fn=lambda m, config=self._config: config.set_visualize(m),
default_val=self._config.visualize,
)
dropdown_builder(
"Shown Camera Prim",
items=list(self.domains.cameras.keys()),
default_val=0,
on_clicked_fn=lambda mode_str, config=self._config: config.set_visualization_prim(mode_str),
tooltip="Select the camera prim shown in the visualization window",
)
##
# Shutdown Helpers
##
def on_shutdown(self):
if self._window:
self._window = None
gc.collect()
stage_utils.clear_stage()
if self.domains is not None and self.domains.callback_set:
self.domains.set_domain_callback(True)
##
# Path Helpers
##
def _make_ply_proposal(self, path: str) -> None:
"""use default matterport datastructure to make proposal about point-cloud file
- "env_id"
- matterport_mesh
- "id_nbr"
- "id_nbr".obj
- house_segmentations
- "env_id".ply
"""
file_dir, file_name = os.path.split(path)
ply_dir = os.path.join(file_dir, "../..", "house_segmentations")
env_id = file_dir.split("/")[-3]
        ply_file = os.path.join(ply_dir, f"{env_id}.ply")
        if os.path.isfile(ply_file):
            carb.log_verbose(f"Found ply file: {ply_file}")
            self.ply_proposal = ply_file
        else:
            carb.log_verbose("No ply file found in default matterport datastructure")
##
# Load Mesh and Point-Cloud
##
async def load_matterport(self):
# simulation settings
# check if simulation context was created earlier or not.
if SimulationContext.instance():
SimulationContext.clear_instance()
carb.log_warn("SimulationContext already loaded. Will clear now and init default SimulationContext")
# create new simulation context
self.sim = SimulationContext(SimulationCfg())
# initialize simulation
await self.sim.initialize_simulation_context_async()
# load matterport
self._matterport = MatterportImporter(self._config.importer)
await self._matterport.load_world_async()
# reset the simulator
# note: this plays the simulator which allows setting up all the physics handles.
await self.sim.reset_async()
await self.sim.pause_async()
def _start_loading(self):
path = self._config.importer.obj_filepath
if not path:
return
# find obj, usd file
if os.path.isabs(path):
file_path = path
assert os.path.isfile(file_path), f"No .obj or .usd file found under absolute path: {file_path}"
else:
file_path = os.path.join(self._extension_path, "data", path)
assert os.path.isfile(
file_path
), f"No .obj or .usd file found under relative path to extension data: {file_path}"
self._config.set_obj_filepath(file_path) # update config
carb.log_verbose("MatterPort 3D Mesh found, start loading...")
asyncio.ensure_future(self.load_matterport())
carb.log_info("MatterPort 3D Mesh loaded")
self.build_ui(build_cam=True)
self._input_fields["import_btn"].enabled = False
##
# Register Cameras
##
def activate_load_camera(self, val):
self._input_fields["load_camera"].enabled = True
def _register_camera(self):
ply_filepath = self._input_fields["input_ply_file"].get_value_as_string()
if not is_ply_file(ply_filepath):
carb.log_error("Given ply path is not valid! No camera created!")
camera_path = self._input_fields["camera_prim"].get_value_as_string()
if not prim_utils.is_prim_path_valid(camera_path): # create prim if no prim found
prim_utils.create_prim(camera_path, "Xform")
camera_semantics = self._input_fields["camera_semantics"].get_value_as_bool()
camera_depth = self._input_fields["camera_depth"].get_value_as_bool()
camera_width = self._input_fields["cam_width"].get_value_as_int()
camera_height = self._input_fields["cam_height"].get_value_as_int()
# Setup camera sensor
data_types = []
if camera_semantics:
data_types += ["semantic_segmentation"]
if camera_depth:
data_types += ["distance_to_image_plane"]
camera_pattern_cfg = patterns.PinholeCameraPatternCfg(
focal_length=24.0,
horizontal_aperture=20.955,
height=camera_height,
width=camera_width,
data_types=data_types,
)
camera_cfg = RayCasterCfg(
prim_path=camera_path,
mesh_prim_paths=[ply_filepath],
update_period=0,
offset=RayCasterCfg.OffsetCfg(pos=(0.0, 0.0, 0.0), rot=(1.0, 0.0, 0.0, 0.0)),
debug_vis=True,
pattern_cfg=camera_pattern_cfg,
)
if self.domains is None:
self.domains = MatterportDomains(self._config)
# register camera
self.domains.register_camera(camera_cfg)
# initialize physics handles
self.sim.reset()
# allow for tasks
self.build_ui(build_cam=True, build_viz=True)
return
| 19,441 | Python | 40.190678 | 163 | 0.569621 |
pascal-roth/orbit_envs/extensions/omni.isaac.matterport/omni/isaac/matterport/config/importer_cfg.py | # Copyright (c) 2024 ETH Zurich (Robotic Systems Lab)
# Author: Pascal Roth
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from dataclasses import MISSING
from omni.isaac.core.utils import extensions
from omni.isaac.matterport.domains import MatterportImporter
from omni.isaac.orbit.terrains import TerrainImporterCfg
from omni.isaac.orbit.utils import configclass
from typing_extensions import Literal
extensions.enable_extension("omni.kit.asset_converter")
from omni.kit.asset_converter.impl import AssetConverterContext
# NOTE: hopefully will be soon changed to dataclass, then initialization can be improved
asset_converter_cfg: AssetConverterContext = AssetConverterContext()
asset_converter_cfg.ignore_materials = False
# Don't import/export materials
asset_converter_cfg.ignore_animations = False
# Don't import/export animations
asset_converter_cfg.ignore_camera = False
# Don't import/export cameras
asset_converter_cfg.ignore_light = False
# Don't import/export lights
asset_converter_cfg.single_mesh = False
# By default, instanced props will be export as single USD for reference. If
# this flag is true, it will export all props into the same USD without instancing.
asset_converter_cfg.smooth_normals = True
# Smoothing normals, which is only for assimp backend.
asset_converter_cfg.export_preview_surface = False
# Imports material as UsdPreviewSurface instead of MDL for USD export
asset_converter_cfg.use_meter_as_world_unit = True
# Sets world units to meters, this will also scale asset if it's centimeters model.
asset_converter_cfg.create_world_as_default_root_prim = True
# Creates /World as the root prim for Kit needs.
asset_converter_cfg.embed_textures = True
# Embedding textures into output. This is only enabled for FBX and glTF export.
asset_converter_cfg.convert_fbx_to_y_up = False
# Always use Y-up for fbx import.
asset_converter_cfg.convert_fbx_to_z_up = True
# Always use Z-up for fbx import.
asset_converter_cfg.keep_all_materials = False
# If it's to remove non-referenced materials.
asset_converter_cfg.merge_all_meshes = False
# Merges all meshes to single one if it can.
asset_converter_cfg.use_double_precision_to_usd_transform_op = False
# Uses double precision for all transform ops.
asset_converter_cfg.ignore_pivots = False
# Don't export pivots if assets support that.
asset_converter_cfg.disabling_instancing = False
# Don't export instancing assets with instanceable flag.
asset_converter_cfg.export_hidden_props = False
# By default, only visible props will be exported from USD exporter.
asset_converter_cfg.baking_scales = False
# Only for FBX. It's to bake scales into meshes.
@configclass
class MatterportImporterCfg(TerrainImporterCfg):
class_type: type = MatterportImporter
"""The class name of the terrain importer."""
terrain_type: Literal["matterport"] = "matterport"
"""The type of terrain to generate. Defaults to "matterport".
"""
prim_path: str = "/World/Matterport"
"""The absolute path of the Matterport Environment prim.
All sub-terrains are imported relative to this prim path.
"""
obj_filepath: str = MISSING
asset_converter: AssetConverterContext = asset_converter_cfg
groundplane: bool = True
| 3,241 | Python | 38.536585 | 88 | 0.779081 |
pascal-roth/orbit_envs/extensions/omni.isaac.matterport/omni/isaac/matterport/config/__init__.py | # Copyright (c) 2024 ETH Zurich (Robotic Systems Lab)
# Author: Pascal Roth
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from .importer_cfg import AssetConverterContext, MatterportImporterCfg
__all__ = ["MatterportImporterCfg", "AssetConverterContext"]
| 275 | Python | 26.599997 | 70 | 0.770909 |
pascal-roth/orbit_envs/extensions/omni.isaac.carla/setup.py | # Copyright (c) 2024 ETH Zurich (Robotic Systems Lab)
# Author: Pascal Roth
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Installation script for the 'omni.isaac.carla' python package."""
from setuptools import setup
# Minimum dependencies required prior to installation
INSTALL_REQUIRES = [
# generic
"opencv-python-headless",
"PyQt5",
]
# Installation operation
setup(
name="omni-isaac-carla",
author="Pascal Roth",
author_email="[email protected]",
version="0.0.1",
description="Extension to include 3D Datasets from the Carla Simulator.",
keywords=["robotics"],
include_package_data=True,
python_requires=">=3.7",
install_requires=INSTALL_REQUIRES,
packages=["omni.isaac.carla"],
classifiers=["Natural Language :: English", "Programming Language :: Python :: 3.7"],
zip_safe=False,
)
| 872 | Python | 24.67647 | 89 | 0.692661 |
pascal-roth/orbit_envs/extensions/omni.isaac.carla/config/extension.toml | [package]
version = "0.0.1"
title = "CARLA extension"
description="Extension to include 3D Datasets from the Carla Simulator."
authors =["Pascal Roth"]
repository = "https://gitlab-master.nvidia.com/mmittal/omni_isaac_orbit"
category = "robotics"
keywords = ["kit", "robotics"]
readme = "docs/README.md"
[dependencies]
"omni.kit.uiapp" = {}
"omni.isaac.ui" = {}
"omni.isaac.core" = {}
"omni.isaac.orbit" = {}
"omni.isaac.anymal" = {}
# Main python module this extension provides.
[[python.module]]
name = "omni.isaac.carla"
[[python.module]]
name = "omni.isaac.carla.scripts"
| 580 | TOML | 23.208332 | 72 | 0.696552 |
pascal-roth/orbit_envs/extensions/omni.isaac.carla/omni/isaac/carla/scripts/__init__.py | # Copyright (c) 2024 ETH Zurich (Robotic Systems Lab)
# Author: Pascal Roth
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from .loader import CarlaLoader
__all__ = ["CarlaLoader"]
# EoF
| 208 | Python | 16.416665 | 53 | 0.711538 |
pascal-roth/orbit_envs/extensions/omni.isaac.carla/omni/isaac/carla/scripts/loader.py | # Copyright (c) 2024 ETH Zurich (Robotic Systems Lab)
# Author: Pascal Roth
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
# python
import os
from typing import List, Tuple
# omniverse
import carb
import omni
import omni.isaac.core.utils.prims as prim_utils
import omni.isaac.debug_draw._debug_draw as omni_debug_draw
import scipy.spatial.transform as tf
import yaml
# isaac-carla
from omni.isaac.carla.configs import CarlaLoaderConfig
# isaac-core
from omni.isaac.core.materials import PhysicsMaterial
from omni.isaac.core.objects.ground_plane import GroundPlane
from omni.isaac.core.prims import GeometryPrim
from omni.isaac.core.simulation_context import SimulationContext
from omni.isaac.core.utils.semantics import add_update_semantics, remove_all_semantics
from omni.isaac.core.utils.viewports import set_camera_view
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
# isaac-orbit
from omni.isaac.orbit.utils.configclass import class_to_dict
from pxr import Gf, PhysxSchema, Usd
class CarlaLoader:
debug: bool = False
def __init__(self, cfg: CarlaLoaderConfig) -> None:
self._cfg = cfg
# Load kit helper
self.sim = SimulationContext(
stage_units_in_meters=1.0,
physics_dt=self._cfg.sim_cfg.dt,
rendering_dt=self._cfg.sim_cfg.dt * self._cfg.sim_cfg.substeps,
backend="torch",
sim_params=class_to_dict(self._cfg.sim_cfg.physx),
device=self._cfg.sim_cfg.device,
)
# Set main camera
set_camera_view([20 / self._cfg.scale, 20 / self._cfg.scale, 20 / self._cfg.scale], [0.0, 0.0, 0.0])
# Acquire draw interface
self.draw_interface = omni_debug_draw.acquire_debug_draw_interface()
self.material: PhysicsMaterial = None
return
def load(self) -> None:
"""Load the scene."""
# design scene
assert os.path.isfile(self._cfg.usd_path), f"USD File not found: {self._cfg.usd_path}"
self._design_scene()
self.sim.reset()
# modify mesh
if self._cfg.cw_config_file:
self._multiply_crosswalks()
if self._cfg.people_config_file:
self._insert_people()
if self._cfg.vehicle_config_file:
self._insert_vehicles()
# assign semantic labels
if self._cfg.sem_mesh_to_class_map:
self._add_semantics()
return
""" Scene Helper Functions """
def _design_scene(self):
"""Add prims to the scene."""
self._xform_prim = prim_utils.create_prim(
prim_path=self._cfg.prim_path,
translation=(0.0, 0.0, 0.0),
usd_path=self._cfg.usd_path,
scale=(self._cfg.scale, self._cfg.scale, self._cfg.scale),
)
# physics material
self.material = PhysicsMaterial(
"/World/PhysicsMaterial", static_friction=0.7, dynamic_friction=0.7, restitution=0
)
# enable patch-friction: yields better results!
physx_material_api = PhysxSchema.PhysxMaterialAPI.Apply(self.material._prim)
physx_material_api.CreateImprovePatchFrictionAttr().Set(True)
physx_material_api.CreateFrictionCombineModeAttr().Set("max")
physx_material_api.CreateRestitutionCombineModeAttr().Set("max")
        # assign each submesh its own geometry prim --> important for raytracing to be able to identify the submesh
submeshes = prim_utils.get_prim_children(self._xform_prim)[1].GetAllChildren()
for submesh in submeshes:
submesh_path = submesh.GetPath().pathString
# create geometry prim
GeometryPrim(
prim_path=submesh_path,
name="collision",
position=None,
orientation=None,
collision=True,
).apply_physics_material(self.material)
# physx_utils.setCollider(submesh, approximationShape="None")
# "None" will use the base triangle mesh if available
# Lights-1
prim_utils.create_prim(
"/World/Light/GreySphere",
"SphereLight",
translation=(45 / self._cfg.scale, 100 / self._cfg.scale, 100 / self._cfg.scale),
attributes={"radius": 10, "intensity": 30000.0, "color": (0.75, 0.75, 0.75)},
)
# Lights-2
prim_utils.create_prim(
"/World/Light/WhiteSphere",
"SphereLight",
translation=(100 / self._cfg.scale, 100 / self._cfg.scale, 100 / self._cfg.scale),
attributes={"radius": 10, "intensity": 30000.0, "color": (1.0, 1.0, 1.0)},
)
if self._cfg.axis_up == "Y" or self._cfg.axis_up == "y":
world_prim = prim_utils.get_prim_at_path(self._cfg.prim_path)
rot_quat = tf.Rotation.from_euler("XYZ", [90, 90, 0], degrees=True).as_quat()
gf_quat = Gf.Quatf()
gf_quat.real = rot_quat[3]
gf_quat.imaginary = Gf.Vec3f(list(rot_quat[:3]))
world_prim.GetAttribute("xformOp:orient").Set(gf_quat)
if self._cfg.groundplane:
_ = GroundPlane("/World/GroundPlane", z_position=0.0, physics_material=self.material, visible=False)
return
""" Assign Semantic Labels """
def _add_semantics(self):
# remove all previous semantic labels
remove_all_semantics(prim_utils.get_prim_at_path(self._cfg.prim_path + self._cfg.suffix), recursive=True)
# get mesh prims
mesh_prims, mesh_prims_name = self.get_mesh_prims(self._cfg.prim_path + self._cfg.suffix)
carb.log_info(f"Total of {len(mesh_prims)} meshes in the scene, start assigning semantic class ...")
# mapping from prim name to class
with open(self._cfg.sem_mesh_to_class_map) as file:
class_keywords = yaml.safe_load(file)
# make all the string lower case
mesh_prims_name = [mesh_prim_single.lower() for mesh_prim_single in mesh_prims_name]
keywords_class_mapping_lower = {
key: [value_single.lower() for value_single in value] for key, value in class_keywords.items()
}
# assign class to mesh in ISAAC
def recursive_semUpdate(prim, sem_class_name: str, update_submesh: bool) -> bool:
# Necessary for Park Mesh
            # FIXME: including all meshes leads to an "OgnSdStageInstanceMapping does not support more than 65535 semantic entities (2718824 requested)" error since entities are restricted to int16
if (
prim.GetName() == "HierarchicalInstancedStaticMesh"
): # or "FoliageInstancedStaticMeshComponent" in prim.GetName():
add_update_semantics(prim, sem_class_name)
update_submesh = True
children = prim.GetChildren()
if len(children) > 0:
for child in children:
update_submesh = recursive_semUpdate(child, sem_class_name, update_submesh)
return update_submesh
def recursive_meshInvestigator(mesh_idx, mesh_name, mesh_prim_list) -> bool:
success = False
for class_name, keywords in keywords_class_mapping_lower.items():
if any([keyword in mesh_name for keyword in keywords]):
update_submesh = recursive_semUpdate(mesh_prim_list[mesh_idx], class_name, False)
if not update_submesh:
add_update_semantics(mesh_prim_list[mesh_idx], class_name)
success = True
break
if not success:
success_child = []
mesh_prims_children, mesh_prims_name_children = self.get_mesh_prims(
mesh_prim_list[mesh_idx].GetPrimPath().pathString
)
mesh_prims_name_children = [mesh_prim_single.lower() for mesh_prim_single in mesh_prims_name_children]
for mesh_idx_child, mesh_name_child in enumerate(mesh_prims_name_children):
success_child.append(
recursive_meshInvestigator(mesh_idx_child, mesh_name_child, mesh_prims_children)
)
success = any(success_child)
return success
mesh_list = []
for mesh_idx, mesh_name in enumerate(mesh_prims_name):
success = recursive_meshInvestigator(mesh_idx=mesh_idx, mesh_name=mesh_name, mesh_prim_list=mesh_prims)
if success:
mesh_list.append(mesh_idx)
missing = [i for x, y in zip(mesh_list, mesh_list[1:]) for i in range(x + 1, y) if y - x > 1]
assert len(mesh_list) > 0, "No mesh is assigned a semantic class!"
assert len(mesh_list) == len(
mesh_prims_name
), f"Not all meshes are assigned a semantic class! Following mesh names are included yet: {[mesh_prims_name[miss_idx] for miss_idx in missing]}"
carb.log_info("Semantic mapping done.")
return
""" Modify Mesh """
def _multiply_crosswalks(self) -> None:
"""Increase number of crosswalks in the scene."""
with open(self._cfg.cw_config_file) as stream:
multipy_cfg: dict = yaml.safe_load(stream)
# get the stage
stage = omni.usd.get_context().get_stage()
# get town prim
town_prim = multipy_cfg.pop("town_prim")
# init counter
crosswalk_add_counter = 0
for key, value in multipy_cfg.items():
print(f"Execute crosswalk multiplication '{key}'")
# iterate over the number of crosswalks to be created
for copy_idx in range(value["factor"]):
success = omni.usd.duplicate_prim(
stage=stage,
prim_path=os.path.join(self._cfg.prim_path, town_prim, value["cw_prim"]),
path_to=os.path.join(
self._cfg.prim_path, town_prim, value["cw_prim"] + f"_cp{copy_idx}" + value.get("suffix", "")
),
duplicate_layers=True,
)
assert success, f"Failed to duplicate crosswalk '{key}'"
# get crosswalk prim
prim_utils.get_prim_at_path(
os.path.join(
self._cfg.prim_path, town_prim, value["cw_prim"] + f"_cp{copy_idx}" + value.get("suffix", "")
)
).GetAttribute("xformOp:translate").Set(
Gf.Vec3d(value["translation"][0], value["translation"][1], value["translation"][2]) * (copy_idx + 1)
)
# update counter
crosswalk_add_counter += 1
carb.log_info(f"Number of crosswalks added: {crosswalk_add_counter}")
print(f"Number of crosswalks added: {crosswalk_add_counter}")
return
def _insert_vehicles(self):
# load vehicle config file
with open(self._cfg.vehicle_config_file) as file:
vehicle_cfg: dict = yaml.safe_load(file)
# get the stage
stage = omni.usd.get_context().get_stage()
# get town prim and all its meshes
town_prim = vehicle_cfg.pop("town_prim")
mesh_prims: dict = prim_utils.get_prim_at_path(f"{self._cfg.prim_path}/{town_prim}").GetChildren()
mesh_prims_name = [mesh_prim_single.GetName() for mesh_prim_single in mesh_prims]
# car counter
car_add_counter = 0
for key, vehicle in vehicle_cfg.items():
print(f"Execute vehicle multiplication '{key}'")
# get all meshs that include the keystring
meshs = [
mesh_prim_single for mesh_prim_single in mesh_prims_name if vehicle["prim_part"] in mesh_prim_single
]
# iterate over the number of vehicles to be created
for idx, translation in enumerate(vehicle["translation"]):
for single_mesh in meshs:
success = omni.usd.duplicate_prim(
stage=stage,
prim_path=os.path.join(self._cfg.prim_path, town_prim, single_mesh),
path_to=os.path.join(self._cfg.prim_path, town_prim, single_mesh + key + f"_cp{idx}"),
duplicate_layers=True,
)
assert success, f"Failed to duplicate vehicle '{key}'"
# get vehicle prim
prim_utils.get_prim_at_path(
os.path.join(self._cfg.prim_path, town_prim, single_mesh + key + f"_cp{idx}")
).GetAttribute("xformOp:translate").Set(Gf.Vec3d(translation[0], translation[1], translation[2]))
car_add_counter += 1
carb.log_info(f"Number of vehicles added: {car_add_counter}")
print(f"Number of vehicles added: {car_add_counter}")
return
def _insert_people(self):
# load people config file
with open(self._cfg.people_config_file) as file:
people_cfg: dict = yaml.safe_load(file)
for key, person_cfg in people_cfg.items():
carb.log_verbose(f"Insert person '{key}'")
self.insert_single_person(
person_cfg["prim_name"],
person_cfg["translation"],
scale_people=1, # scale_people,
usd_path=person_cfg.get("usd_path", "People/Characters/F_Business_02/F_Business_02.usd"),
)
# TODO: movement of the people
carb.log_info(f"Number of people added: {len(people_cfg)}")
print(f"Number of people added: {len(people_cfg)}")
return
@staticmethod
def insert_single_person(
prim_name: str,
translation: list,
scale_people: float = 1.0,
usd_path: str = "People/Characters/F_Business_02/F_Business_02.usd",
) -> None:
person_prim = prim_utils.create_prim(
prim_path=os.path.join("/World/People", prim_name),
translation=tuple(translation),
usd_path=os.path.join(ISAAC_NUCLEUS_DIR, usd_path),
scale=(scale_people, scale_people, scale_people),
)
if isinstance(person_prim.GetAttribute("xformOp:orient").Get(), Gf.Quatd):
person_prim.GetAttribute("xformOp:orient").Set(Gf.Quatd(1.0, 0.0, 0.0, 0.0))
else:
person_prim.GetAttribute("xformOp:orient").Set(Gf.Quatf(1.0, 0.0, 0.0, 0.0))
add_update_semantics(person_prim, "person")
return
@staticmethod
def get_mesh_prims(env_prim: str) -> Tuple[List[Usd.Prim], List[str]]:
def recursive_search(start_prim: str, mesh_prims: list):
for curr_prim in prim_utils.get_prim_at_path(start_prim).GetChildren():
if curr_prim.GetTypeName() == "Xform" or curr_prim.GetTypeName() == "Mesh":
mesh_prims.append(curr_prim)
elif curr_prim.GetTypeName() == "Scope":
mesh_prims = recursive_search(start_prim=curr_prim.GetPath().pathString, mesh_prims=mesh_prims)
return mesh_prims
assert prim_utils.is_prim_path_valid(env_prim), f"Prim path '{env_prim}' is not valid"
mesh_prims = []
mesh_prims = recursive_search(env_prim, mesh_prims)
# mesh_prims: dict = prim_utils.get_prim_at_path(self._cfg.prim_path + "/" + self._cfg.usd_name.split(".")[0]).GetChildren()
mesh_prims_name = [mesh_prim_single.GetName() for mesh_prim_single in mesh_prims]
return mesh_prims, mesh_prims_name
# EoF
| 15,577 | Python | 39.149484 | 190 | 0.588688 |
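A minimal usage sketch for the loader above (hypothetical paths; it assumes a SimulationApp has already been launched, since SimulationContext requires a running Kit instance):

# Hypothetical usage sketch, not part of the original file.
from omni.isaac.carla.configs import CarlaLoaderConfig
from omni.isaac.carla.scripts import CarlaLoader

cfg = CarlaLoaderConfig(root_path="/data/carla_export", usd_name="Town01_Opt.usd")  # hypothetical path
loader = CarlaLoader(cfg)
loader.load()  # imports the USD scene, multiplies crosswalks, inserts people/vehicles and assigns semantics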
pascal-roth/orbit_envs/extensions/omni.isaac.carla/omni/isaac/carla/configs/__init__.py | # Copyright (c) 2024 ETH Zurich (Robotic Systems Lab)
# Author: Pascal Roth
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
# configs
from .configs import DATA_DIR, CarlaLoaderConfig
__all__ = [
# configs
"CarlaLoaderConfig",
# path
"DATA_DIR",
]
# EoF
| 289 | Python | 15.11111 | 53 | 0.66782 |
pascal-roth/orbit_envs/extensions/omni.isaac.carla/omni/isaac/carla/configs/configs.py | # Copyright (c) 2024 ETH Zurich (Robotic Systems Lab)
# Author: Pascal Roth
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
# python
import os
from dataclasses import dataclass
DATA_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), "../../../../data"))
@dataclass
class SimCfg:
"""Simulation physics."""
dt = 0.005 # physics-dt:(s)
substeps = 8 # rendering-dt = physics-dt * substeps (s)
gravity = [0.0, 0.0, -9.81] # (m/s^2)
enable_scene_query_support = False # disable scene query for more speed-up
use_flatcache = True # output from simulation to flat cache
use_gpu_pipeline = True # direct GPU access functionality
device = "cpu" # device on which to run simulation/environment
@dataclass
class PhysxCfg:
"""PhysX solver parameters."""
worker_thread_count = 10 # note: unused
solver_position_iteration_count = 4 # note: unused
solver_velocity_iteration_count = 1 # note: unused
enable_sleeping = True # note: unused
max_depenetration_velocity = 1.0 # note: unused
contact_offset = 0.002 # note: unused
rest_offset = 0.0 # note: unused
use_gpu = True # GPU dynamics pipeline and broad-phase type
solver_type = 1 # 0: PGS, 1: TGS
enable_stabilization = True # additional stabilization pass in solver
# (m/s): contact with relative velocity below this will not bounce
bounce_threshold_velocity = 0.5
# (m): threshold for contact point to experience friction force
friction_offset_threshold = 0.04
# (m): used to decide if contacts are close enough to merge into a single friction anchor point
friction_correlation_distance = 0.025
# GPU buffers parameters
gpu_max_rigid_contact_count = 512 * 1024
gpu_max_rigid_patch_count = 80 * 1024 * 2
gpu_found_lost_pairs_capacity = 1024 * 1024 * 2
gpu_found_lost_aggregate_pairs_capacity = 1024 * 1024 * 32
gpu_total_aggregate_pairs_capacity = 1024 * 1024 * 2
gpu_max_soft_body_contacts = 1024 * 1024
gpu_max_particle_contacts = 1024 * 1024
gpu_heap_capacity = 128 * 1024 * 1024
gpu_temp_buffer_capacity = 32 * 1024 * 1024
gpu_max_num_partitions = 8
physx: PhysxCfg = PhysxCfg()
@dataclass
class CarlaLoaderConfig:
# carla map
root_path: str = "path_to_unreal_mesh"
usd_name: str = "Town01_Opt.usd"
suffix: str = "/Town01_Opt"
# prim path for the carla map
prim_path: str = "/World/Carla"
# SimCfg
sim_cfg: SimCfg = SimCfg()
# scale
scale: float = 0.01 # scale the scene to be in meters
# up axis
axis_up: str = "Y"
# multiply crosswalks
cw_config_file: str | None = os.path.join(
DATA_DIR, "town01", "cw_multiply_cfg.yml"
) # if None, no crosswalks are added
# mesh to semantic class mapping --> only if set, semantic classes will be added to the scene
sem_mesh_to_class_map: str | None = os.path.join(
DATA_DIR, "town01", "keyword_mapping.yml"
) # os.path.join(DATA_DIR, "park", "keyword_mapping.yml") os.path.join(DATA_DIR, "town01", "keyword_mapping.yml")
# add Groundplane to the scene
groundplane: bool = True
# add people to the scene
people_config_file: str | None = os.path.join(DATA_DIR, "town01", "people_cfg.yml") # if None, no people are added
# multiply vehicles
vehicle_config_file: str | None = os.path.join(
DATA_DIR, "town01", "vehicle_cfg.yml"
) # if None, no vehicles are added
@property
def usd_path(self) -> str:
return os.path.join(self.root_path, self.usd_name)
| 3,701 | Python | 36.393939 | 119 | 0.636585 |
pascal-roth/orbit_envs/extensions/omni.isaac.carla/data/town02/cw_multiply_cfg.yml | # Definition of which crosswalks should be repeated how often along which axis
# Adjusted for: TOWN02
# each entry has the following format:
# name:
# cw_prim: [str] prim of the crosswalk in the loaded town file
# factor: [int] number how often the crosswalk should be repeated
# translation: [float, float] vector along which the crosswalk should be repeated, defines the position of the first
# repeated crosswalk, every following crosswalk will be placed at the position of the
# previous one plus the translation vector
# suffix: [str] optional, str will be added to the copied prim of the new crosswalk
# NOTE: rotations and scales applied to the mesh are not applied to the translations given here, i.e. they have to be
# in the original data format of the town file, i.e. y-up and in cm
town_prim: "Town02"
cw_2:
cw_prim: "Road_Crosswalk_Town02_8"
factor: 4
translation: [+1500, 0, 0]
cw_3:
cw_prim: "Road_Crosswalk_Town02_10"
factor: 2
translation: [-1500, 0, 0]
cw_4:
cw_prim: "Road_Crosswalk_Town02_9"
factor: 4
translation: [+1500, 0, 0]
suffix: "_neg"
cw_5:
cw_prim: "Road_Crosswalk_Town02_11"
factor: 4
translation: [1500, 0, 0]
cw_6_pos:
cw_prim: "Road_Crosswalk_Town02_12"
factor: 1
translation: [0, 0, 1500]
cw_6_neg:
cw_prim: "Road_Crosswalk_Town02_12"
factor: 2
translation: [0, 0, -1500]
cw_7_neg:
cw_prim: "Road_Crosswalk_Town02_7"
factor: 1
translation: [-1500, 0, 0]
cw_7_pos:
cw_prim: "Road_Crosswalk_Town02_7"
factor: 1
translation: [1500, 0, 0]
cw_8:
cw_prim: "Road_Crosswalk_Town02_4"
factor: 2
translation: [1500, 0, 0]
cw_9:
cw_prim: "Road_Crosswalk_Town02_3"
factor: 4
translation: [1500, 0, 0]
cw_10:
cw_prim: "Road_Crosswalk_Town02_6"
factor: 2
translation: [-1500, 0, 0]
cw_11_neg:
cw_prim: "Road_Crosswalk_Town02_1"
factor: 4
translation: [-1500, 0, 0]
cw_11_pos:
cw_prim: "Road_Crosswalk_Town02_1"
factor: 2
translation: [+1500, 0, 0]
cw_12:
cw_prim: "Road_Crosswalk_Town02_2"
factor: 4
translation: [-1500, 0, 0]
cw_13:
cw_prim: "Road_Crosswalk_Town02_13"
factor: 2
translation: [0, 0, +1500]
cw_14_pos:
cw_prim: "Road_Crosswalk_Town02_15"
factor: 2
translation: [0, 0, +1500]
cw_14_neg:
cw_prim: "Road_Crosswalk_Town02_15"
factor: 1
translation: [0, 0, -1500]
cw_15:
cw_prim: "Road_Crosswalk_Town02_16"
factor: 2
translation: [0, 0, -1500]
cw_16_neg:
cw_prim: "Road_Crosswalk_Town02_17"
factor: 2
translation: [0, 0, -1500]
cw_16_pos:
cw_prim: "Road_Crosswalk_Town02_17"
factor: 4
translation: [0, 0, +1500]
cw_17_neg:
cw_prim: "Road_Crosswalk_Town02_19"
factor: 4
translation: [0, 0, -1500]
cw_17_pos:
cw_prim: "Road_Crosswalk_Town02_19"
factor: 1
translation: [0, 0, +1500]
cw_18:
cw_prim: "Road_Crosswalk_Town02_20"
factor: 3
translation: [0, 0, +1500]
# EoF
| 2,991 | YAML | 21.162963 | 120 | 0.641926 |
pascal-roth/orbit_envs/extensions/omni.isaac.carla/data/town02/vehicle_cfg.yml | # Definition of where additional vehicles should be added
# Adjusted for: TOWN02
# each entry has the following format:
# name:
# prim_part: [str] part of the prim of the vehicle that should be multiplied (every prim containing this string will be multiplied)
# translation: [[float, float, float]] list of translations of the vehicle
# NOTE: rotations and scales applied to the mesh are not applied to the translations given here, i.e. they have to be
# in the original data format of the town file, i.e. y-up and in cm
# NOTE: for Town02, take "Vh_Car_SeatLeon_54" for vehicles along the x axis
town_prim: "Town02"
vehicle_1:
prim_part: "Vh_Car_SeatLeon_54"
translation:
# horizontal road low
- [3900, 0, 600]
- [3900, 0, 3000]
- [3900, 0, 3500]
- [3900, 0, 4000]
- [3900, 0, 6000]
- [3900, 0, -1500]
- [3900, 0, -4000]
- [3900, 0, -7500]
- [3900, 0, -8000]
- [3500, 0, -10000]
- [3500, 0, -7500]
- [3500, 0, -3000]
- [3500, 0, 1000]
- [3500, 0, 5000]
# horizontal road middle
- [-10800, 0, 1000]
- [-10800, 0, 5000]
- [-10800, 0, -2500]
# horizontal road high
- [-15800, 0, 2000]
- [-15800, 0, 4700]
- [-16200, 0, 3400]
- [-16200, 0, 0]
- [-16200, 0, -3000]
- [-16200, 0, -6000]
- [-16200, 0, -9000]
# EoF
| 1,436 | YAML | 28.937499 | 160 | 0.550139 |
pascal-roth/orbit_envs/extensions/omni.isaac.carla/data/town02/keyword_mapping.yml |
# Mapping mesh keywords to VIPlanner semantic classes
road:
- Road_Road
- Road_Marking
- ManholeCover
- roadunique
sidewalk:
- Road_Sidewalk
- SideWalkCube
- Road_Grass # pedestrian terrain (between building, squares, ...)
crosswalk:
- Road_Crosswalk
floor:
- Pathwalk # way to the door of a building
  - PathWay # way to the door of a building
- curb
- iron_plank
- Cube
- Floor
vehicle:
- Van
- Vehicle
- Car
building:
- NewBlueprint # roofs, windows, other parts of buildings
- CityBuilding
- Suburb
- House
- MergingBuilding
- BuildingWall
- garage
- airConditioner
- Office
- Block
- Apartment
- ConstructBuilding
- snacksStand
- doghouse
- streetCounter
- fountain
- container
- pergola
- GuardShelter
- atm
- awning
- bus_stop
- NewsStand
- ironplank
- kiosk
- TownHall
wall:
- GardenWall
- Wall
- RepSpline # fences or walls to limit residential areas
- RepeatedMeshesAlongSpline # should make the spline go around the building --> not working in isaac
fence:
- urbanFence
- chain_barrier
- picketFence
- fence
pole:
- bollard
- Lamppost
- Parklight
- CityLamp
- Traffic_Light_Base
- ElectricPole
- PoleCylinder
traffic_sign:
- streetBillboard
- RoundSign
- roadsigns
traffic_light:
- TLights
- TL_BotCover
- SM_Charger
- SM_FreewayLights
bench:
- bench
vegetation:
- tree
- Stone
- Cypress
- PlantPot
- TreePot
- Maple
- Beech
- FanPalm
- Sassafras
- Pine_Bush
- Hedge
- Bush
- palm
- acer
- plant_pit
- arbusto_pine
terrain:
- dirtDebris # roughness in the terrain, street or sidewalk (traversable but more difficult)
- GrassLeaf
- Grass
- LandscapeComponent
- Ash
water_surface:
- TileLake
sky:
- terrain2
- sky
dynamic:
- Trashbag
- advertise
- creased_box
- garbage
- trashcan
- clothes_line
- barbecue
- ConstructionCone
- box
- droppingasset
- barrel
static:
- firehydrant
- Gnome
- metroMap
- Bikeparking
- StaticMesh # gate barrier
- trampoline
- wheelbarrow
- NewspaperBox
- swing
- bin
- big_plane
- plane
- slide
- instancedfoliageactor
- roadbillboard
- prophitreacting_child # vending machines
- prop_wateringcan
furniture:
- Campingtable
- swingcouch
- table
- chair
| 2,344 | YAML | 15.061644 | 103 | 0.664249 |
pascal-roth/orbit_envs/extensions/omni.isaac.carla/data/town01/cw_multiply_cfg.yml | # Definition of which crosswalks should be repeated how often along which axis
# Adjusted for: TOWN01
# each entry has the following format:
# name:
# cw_prim: [str] prim of the crosswalk in the loaded town file
# factor: [int] number how often the crosswalk should be repeated
# translation: [float, float] vector along which the crosswalk should be repeated, defines the position of the first
# repeated crosswalk, every following crosswalk will be placed at the position of the
# previous one plus the translation vector
# suffix: [str] optional, str will be added to the copied prim of the new crosswalk
# NOTE: rotations and scales applied to the mesh are not applied to the translations given here, i.e. they have to be
# in the original data format of the town file, i.e. y-up and in cm
town_prim: "Town01_Opt"
cw_2:
cw_prim: "Road_Crosswalk_Town01_2"
factor: 2
translation: [0, 0, -1500]
cw_3_pos:
cw_prim: "Road_Crosswalk_Town01_3"
factor: 6
translation: [1500, 0, 0]
cw_3_neg:
cw_prim: "Road_Crosswalk_Town01_3"
factor: 1
translation: [-1500, 0, 0]
suffix: "_neg"
cw_4:
cw_prim: "Road_Crosswalk_Town01_4"
factor: 1
translation: [1500, 0, 0]
cw_5:
cw_prim: "Road_Crosswalk_Town01_5"
factor: 3
translation: [1500, 0, 0]
cw_6:
cw_prim: "Road_Crosswalk_Town01_6"
factor: 3
translation: [0, 0, -1500]
cw_9:
cw_prim: "Road_Crosswalk_Town01_9"
factor: 2
translation: [0, 0, -1500]
cw_10:
cw_prim: "Road_Crosswalk_Town01_10"
factor: 1
translation: [0, 0, 1500]
cw_11:
cw_prim: "Road_Crosswalk_Town01_11"
factor: 1
translation: [0, 0, 1500]
cw_14:
cw_prim: "Road_Crosswalk_Town01_14"
factor: 1
translation: [0, 0, 1500]
cw_15:
cw_prim: "Road_Crosswalk_Town01_15"
factor: 2
translation: [0, 0, -1500]
cw_18:
cw_prim: "Road_Crosswalk_Town01_18"
factor: 5
translation: [1500, 0, 0]
cw_19:
cw_prim: "Road_Crosswalk_Town01_19"
factor: 2
translation: [1500, 0, 0]
cw_21:
cw_prim: "Road_Crosswalk_Town01_21"
factor: 3
translation: [1500, 0, 0]
cw_22:
cw_prim: "Road_Crosswalk_Town01_22"
factor: 5
translation: [1500, 0, 0]
cw_24:
cw_prim: "Road_Crosswalk_Town01_24"
factor: 3
translation: [-1500, 0, 0]
cw_26_pos:
cw_prim: "Road_Crosswalk_Town01_26"
factor: 5
translation: [1500, 0, 0]
cw_26_neg:
cw_prim: "Road_Crosswalk_Town01_26"
factor: 3
translation: [-1500, 0, 0]
suffix: "_neg"
cw_28:
cw_prim: "Road_Crosswalk_Town01_28"
factor: 4
translation: [0, 0, 1500]
cw_29:
cw_prim: "Road_Crosswalk_Town01_29"
factor: 4
translation: [0, 0, 1500]
cw_30:
cw_prim: "Road_Crosswalk_Town01_30"
factor: 4
translation: [0, 0, 1500]
cw_30_neg:
cw_prim: "Road_Crosswalk_Town01_31"
factor: 2
translation: [0, 0, -1500]
cw_32:
cw_prim: "Road_Crosswalk_Town01_32"
factor: 6
translation: [0, 0, -1500]
cw_33_pos:
cw_prim: "Road_Crosswalk_Town01_33"
factor: 4
translation: [1500, 0, 0]
cw_33_neg:
cw_prim: "Road_Crosswalk_Town01_33"
factor: 3
translation: [-2500, 0, 0]
suffix: "_neg"
cw_34:
cw_prim: "Road_Crosswalk_Town01_34"
factor: 7
translation: [1500, 0, 0]
cw_35:
cw_prim: "Road_Crosswalk_Town01_35"
factor: 1
translation: [1500, 0, 0]
cw_36_pos:
cw_prim: "Road_Crosswalk_Town01_36"
factor: 1
translation: [0, 0, 1500]
cw_36_neg:
cw_prim: "Road_Crosswalk_Town01_36"
factor: 5
translation: [0, 0, -1500]
suffix: "_neg"
cw_40:
cw_prim: "Road_Crosswalk_Town01_40"
factor: 4
translation: [1500, 0, 0]
# EoF
| 3,635 | YAML | 20.017341 | 120 | 0.641541 |
pascal-roth/orbit_envs/extensions/omni.isaac.carla/data/town01/vehicle_cfg.yml | # Definition of where additional vehicles should be added
# Adjusted for: TOWN01
# each entry has the following format:
# name:
# prim_part: [str] part of the prim of the vehicle that should be multiplied (every prim containing this string will be multiplied)
# translation: [[float, float, float]] list of translations of the vehicle
# NOTE: rotations and scales applied to the mesh are not applied to the translations given here, i.e. they have to be
# in the original data format of the town file, i.e. y-up and in cm
# NOTE: for Town01, take "ChevroletImpala_High_V4" for vehicles along the x axis and "JeepWranglerRubicon_36"
# for vehicles along the y axis
town_prim: "Town01_Opt"
vehicle_1:
prim_part: "ChevroletImpala_High_V4"
translation:
- [-15300, 0, -4000]
- [-15300, 0, 0]
- [-15300, 0, 15000]
- [-15600, 0, 21000]
- [9000, 0, 20500]
- [9400, 0, 15000]
- [9400, 0, 9000]
- [9400, 0, 7000]
- [9000, 0, 6000]
- [9000, 0, 500]
- [9000, 0, -4000]
vehicle_2:
prim_part: "JeepWranglerRubicon_36"
translation:
- [0, 0, -1500]
- [3500, 0, -1500]
- [5300, 0, -1900]
- [9000, 0, -1900]
- [16500, 0, -1500]
- [22500, 0, -1900]
- [25000, 0, 3800]
- [20000, 0, 4200]
- [17000, 0, 4200]
- [12000, 0, 3800]
- [7000, 0, 3800]
- [7000, 0, 11100]
- [11000, 0, 11500]
- [16000, 0, 11100]
- [20000, 0, 11100]
- [26000, 0, 11500]
- [26000, 0, 17800]
- [23000, 0, 18200]
- [18000, 0, 18200]
- [14000, 0, 17800]
- [13500, 0, 18200]
- [10000, 0, 18200]
- [9500, 0, 17800]
- [4000, 0, 17800]
- [2000, 0, 30800]
- [-1000, 0, 31300]
- [6000, 0, 31300]
- [12000, 0, 30800]
- [15000, 0, 30800]
- [15600, 0, 30800]
- [16400, 0, 30800]
- [21000, 0, 31300]
- [25000, 0, 31300]
# EoF
| 1,904 | YAML | 26.214285 | 160 | 0.558824 |
pascal-roth/orbit_envs/extensions/omni.isaac.carla/data/town01/area_filter_cfg.yaml | # Definition of which areas should not be explored and used to sample points
# Adjusted for: TOWN01
# each entry has the following format:
# name:
# x_low: [float] low number of the x axis
# x_high: [float] high number of the x axis
# y_low: [float] low number of the y axis
# y_high: [float] high number of the y axis
area_1:
x_low: 208.9
x_high: 317.8
y_low: 100.5
y_high: 325.5
area_2:
x_low: 190.3
x_high: 315.8
y_low: 12.7
y_high: 80.6
area_3:
x_low: 123.56
x_high: 139.37
y_low: 10
y_high: 80.0
| 601 | YAML | 20.499999 | 76 | 0.570715 |
pascal-roth/orbit_envs/extensions/omni.isaac.carla/data/park/keyword_mapping.yml | sidewalk:
- Sidewalk
floor:
- SM_ParkSquare05_4HISMA
- SM_ParkSquare02_1HISMA
- SM_ParkSquare05_4HISMA
- SM_ParkSquare05_6HISMA
- SM_ParkSquare05_3HISMA
- SM_ParkSquare04_1HISMA
- SM_ParkSquare05_1HISMA
- SM_ParkSquare02_2HISMA
- SM_ParkSquare11_1HISMA
- SM_ParkSquare05_7HISMA
- SM_ParkSquare05_8HISMA
- SM_ParkSquare05_9HISMA
- SM_ParkSquare05_5HISMA
- SM_ParkSquare12_1HISMA
- SM_ParkSquare05_2HISMA
- TennisField
- BaseballField
- BasketballField
- Asphalt
- FootballField
- SM_ParkSquare03_7HISMA_598
- SM_PoolHISMA
- Border
- Manhole
- ParkPath
- RoadDecal
- MergedRoad
bridge:
- Bridge
tunnel:
- tunnel
building:
- CafeBuilding
- House
- Tribune
- Pier
- Bower
stairs:
- SM_ParkSquare03_3HISMA
- SM_ParkSquare05_3HISMA
- SM_ParkSquare07_1HISMA
- SM_ParkSquare05_12HISMA
- SM_ParkSquare03_2HISMA
- SM_ParkSquare03_5HISMA
- SM_ParkSquare03_5HISMA
- SM_ParkSquare03_7HISMA
- ParkSquare03_8HISMA
- ParkSquare13_7HISMA
- SM_ParkSquare03_2HISMA_687
- SM_ParkSquare03_1HISMA
- SM_ParkSquare05_2HISMA
wall:
- SM_ParkSquare02_4HISMA
- SM_ParkSquare01_5HISMA
- SM_ParkSquare06_1HISMA
- SM_ParkSquare02_8HISMA
- SM_ParkSquare06_4HISMA
- SM_ParkSquare10HISMA
- SM_ParkSquare06_5HISMA
- SM_ParkSquare06_3HISMA
- SM_ParkSquare06_2HISMA
- SM_ParkSquare02_7HISMA
- SM_ParkSquare02_1HISMA
- SM_ParkSquare03_6HISMA
- SM_ParkSquare06_6HISMA
- SM_ParkSquare12_2HISMA
- SM_ParkSquare07_2HISMA
- SM_ParkSquare01_3HISMA
- SM_ParkSquare01_1HISMA
- SM_ParkSquare07_3HISMA
- SM_ParkSquare05_12HISMA
- SM_ParkSquare02_6HISMA
- SM_ParkSquare01_10HISMA
- SM_ParkSquare02_3HISMA
- SM_ParkSquare02_5HISMA
- SM_ParkSquare02_5HISMA_209
- SM_ParkSquare12_3HISMA
- SM_ParkSquare01_2HISMA
- SM_ParkSquare01_9HISMA
- SM_ParkSquare03_4HISMA
- ParkSquare14_3HISMA
- ParkSquare13_5HISMA
- SM_ParkSquare02_2HISMA
- SM_ParkSquare01_7HISMA
- SM_ParkSquare01_4HISMA
- ParkSquare01_11HISMA
- SM_ParkSquare01_6HISMA
- SM_ParkSquare01_8HISMA
- ParkSquare13_7HISMA
- BaseballGate
- SM_Fountain01HISMA
- MergedParkSquare
fence:
- ParkSquare14_3HISMA
- ParkSquare13_1HISMA
- ParkSquare14_2HISMA
- ParkSquare13_3HISMA
- ParkSquare13_2HISMA
- Fence
- ParkSquare13_3HISMA_600
- ParkSquare13_4HISMA_603
- ParkSquare13_5HISMA_605
- MergedPark03_10
- ParkSquare14_1HISMA
- ParkSquare13_6HISMA
pole:
- LampPost
- TrafficBarrel
- TrashCan
traffic_sign:
- RoadSigns
traffic_light:
- TennisFloodlight
- TrafficLight
bench:
- Bench
vegetation:
- BP_SplineMeshes # all spline meshes
- Amur
- Elm
- Ivy
- Maple
- Amur
- Bush
- grass
- Weeping
- Rock
terrain:
- Landscape
- SM_ParkSquare11_3HISMA
- MergedGround
- Instancedfoliageactor_2
- SM_ParkSquare11_2HISMA
- MergedLeaks
water_surface:
- Plane
- PlanarReflection
ceiling:
- SM_ParkSquare09_1HISMA
- SM_ParkSquare09_3HISMA
- SM_ParkSquare09_4HISMA
dynamic:
- DryLeaves06HISMA
- DryLeaves07HISMA
- LeakDecal
- Newspaper
static:
- Statue
- PlayGround # all playground meshes
- TennisNet
- TennisUmpiresChair
- Umbrella
- BasketballHoop
- DrinkingFountain
- FoodStalls
- FoodballGate
- RoadBlock
- Sphere
- Tribune
- FootballGate
furniture:
- Table
- CafeChair
| 3,370 | YAML | 17.221622 | 40 | 0.71454 |
pascal-roth/orbit_envs/extensions/omni.isaac.carla/data/warehouse/people_cfg.yml | person_1:
prim_name: "Person_1"
translation: [4.23985, -2.42198, 0.0]
target: [0, 0, 0]
usd_path: People/Characters/male_adult_construction_01_new/male_adult_construction_01_new.usd
person_2:
prim_name: "Person_2"
translation: [2.51653, 7.80822, 0.0]
target: [0, 0, 0]
usd_path: People/Characters/male_adult_construction_03/male_adult_construction_03.usd
person_3:
prim_name: "Person_3"
translation: [5.07179, 3.8561, 0.0]
target: [0, 0, 0]
usd_path: People/Characters/male_adult_construction_05_new/male_adult_construction_05_new.usd
person_4:
prim_name: "Person_4"
translation: [-3.2015, 11.79695, 0.0]
target: [0, 0, 0]
usd_path: People/Characters/original_male_adult_construction_01/male_adult_construction_01.usd
person_5:
prim_name: "Person_5"
translation: [-6.70566, 7.58019, 0.0]
target: [0, 0, 0]
usd_path: People/Characters/original_male_adult_construction_02/male_adult_construction_02.usd
person_6:
prim_name: "Person_6"
translation: [-5.12784, 2.43409, 0.0]
target: [0, 0, 0]
usd_path: People/Characters/original_male_adult_construction_05/male_adult_construction_05.usd
person_7:
prim_name: "Person_7"
translation: [-6.98476, -9.47249, 0.0]
target: [0, 0, 0]
usd_path: People/Characters/male_adult_construction_01_new/male_adult_construction_01_new.usd
person_8:
prim_name: "Person_8"
translation: [-1.63744, -3.43285, 0.0]
target: [0, 0, 0]
usd_path: People/Characters/male_adult_construction_01_new/male_adult_construction_01_new.usd
person_9:
prim_name: "Person_9"
translation: [6.15617, -8.3114, 0.0]
target: [0, 0, 0]
usd_path: People/Characters/original_male_adult_construction_05/male_adult_construction_05.usd
person_10:
prim_name: "Person_10"
translation: [5.34416, -7.47814, 0.0]
target: [0, 0, 0]
usd_path: People/Characters/male_adult_construction_05_new/male_adult_construction_05_new.usd
| 1,905 | YAML | 30.766666 | 96 | 0.704462 |
pascal-roth/orbit_envs/extensions/omni.isaac.carla/data/warehouse/keyword_mapping.yml | floor:
- SM_Floor1
- SM_Floor2
- SM_Floor3
- SM_Floor4
- SM_Floor5
- SM_Floor6
- groundplane
wall:
- FuseBox
- SM_PillarA
- SM_Sign
- SM_Wall
- S_Barcode
bench:
- Bench
ceiling:
- SM_Ceiling
- PillarPartA
- SM_Beam
- SM_Bracket
static:
- LampCeiling
- SM_FloorDecal
- SM_FireExtinguisher
furniture:
- SM_Rack
- SM_SignCVer
- S_AisleSign
- SM_Palette
- SM_CardBox
- SmallKLT
- SM_PushCarta
- SM_CratePlastic
| 468 | YAML | 10.725 | 23 | 0.617521 |
swadaskar/Isaac_Sim_Folder/PACKAGE-INFO.yaml | Package: isaac-sim-standalone
Version: 2022.2.1-rc.14+2022.2.494.70497c06.tc.linux-x86_64.release
Commit: 70497c064272778b550d785b89e618821248d0cf
Time: Thu Mar 16 01:35:15 2023
CI Build ID: 14259040
Platform: linux-x86_64
CI Build Number: 2022.2.1-rc.14+2022.2.494.70497c06.tc
| 278 | YAML | 33.874996 | 67 | 0.794964 |
swadaskar/Isaac_Sim_Folder/environment.yml | name: isaac-sim
channels:
- defaults
- pytorch
- nvidia
dependencies:
- python=3.7
- pip
- pytorch
- torchvision
- torchaudio
- cuda-toolkit=11.7
- pip:
- stable-baselines3==1.6.2
- tensorboard==2.11.0
- tensorboard-plugin-wit==1.8.1
- protobuf==3.20.3
| 297 | YAML | 15.555555 | 36 | 0.599327 |
swadaskar/Isaac_Sim_Folder/launcher.toml | #displayed application name
name = "Isaac Sim"
#displayed before application name in launcher
productArea = "Omniverse"
version = "2022.2.1"
#unique identifier for component, all lower case, persists between versions
slug = "isaac_sim"
## install and launch instructions by environment
[defaults.windows-x86_64]
url = ""
entrypoint = "${productRoot}/isaac-sim.selector.bat"
args = []
[defaults.windows-x86_64.environment]
[defaults.windows-x86_64.install]
pre-install = ""
pre-install-args = []
install = ""
install-args = []
post-install = "${productRoot}/omni.isaac.sim.post.install.bat"
post-install-args = ">${productRoot}/omni.isaac.sim.post.install.log"
[defaults.windows-x86_64.uninstall]
pre-uninstall = ""
pre-uninstall-args = []
uninstall = ""
uninstall-args = []
post-uninstall = ""
post-uninstall-args = []
[defaults.linux-x86_64]
url = ""
entrypoint = "${productRoot}/isaac-sim.selector.sh"
args = []
[defaults.linux-x86_64.environment]
[defaults.linux-x86_64.install]
pre-install = ""
pre-install-args = []
install = ""
install-args = []
post-install = "${productRoot}/omni.isaac.sim.post.install.sh"
post-install-args = ">${productRoot}/omni.isaac.sim.post.install.log"
[defaults.linux-x86_64.uninstall]
pre-uninstall = ""
pre-uninstall-args = []
uninstall = ""
uninstall-args = []
post-uninstall = ""
post-uninstall-args = []
| 1,349 | TOML | 24 | 75 | 0.716827 |
swadaskar/Isaac_Sim_Folder/exts/omni.isaac.dofbot/config/extension.toml | [core]
reloadable = true
order = 0
[package]
version = "0.3.0"
category = "Simulation"
title = "Isaac Dofbot Robot"
description = "Isaac Dofbot Robot Helper Class"
authors = ["NVIDIA"]
repository = ""
keywords = ["isaac"]
changelog = "docs/CHANGELOG.md"
readme = "docs/README.md"
icon = "data/icon.png"
[dependencies]
"omni.isaac.core" = {}
"omni.isaac.motion_generation" = {}
"omni.isaac.manipulators" = {}
[[python.module]]
name = "omni.isaac.dofbot"
| 457 | TOML | 17.319999 | 47 | 0.684902 |
swadaskar/Isaac_Sim_Folder/exts/omni.isaac.dofbot/omni/isaac/dofbot/kinematics_solver.py | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
from omni.isaac.motion_generation import ArticulationKinematicsSolver, interface_config_loader, LulaKinematicsSolver
from omni.isaac.core.articulations import Articulation
from typing import Optional
class KinematicsSolver(ArticulationKinematicsSolver):
"""Kinematics Solver for Dofbot robot. This class loads a LulaKinematicsSovler object
Args:
robot_articulation (Articulation): An initialized Articulation object representing this Dofbot
end_effector_frame_name (Optional[str]): The name of the Dofbot end effector. If None, an end effector link will
be automatically selected. Defaults to None.
"""
def __init__(self, robot_articulation: Articulation, end_effector_frame_name: Optional[str] = None) -> None:
kinematics_config = interface_config_loader.load_supported_lula_kinematics_solver_config("DofBot")
self._kinematics = LulaKinematicsSolver(**kinematics_config)
if end_effector_frame_name is None:
end_effector_frame_name = "link5"
ArticulationKinematicsSolver.__init__(self, robot_articulation, self._kinematics, end_effector_frame_name)
return
| 1,590 | Python | 47.21212 | 121 | 0.760377 |
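A short usage sketch for the solver above (hypothetical prim path and target; it assumes the robot has been added to a World and the simulation has been reset):

# Hypothetical usage sketch, not part of the original file.
import numpy as np
from omni.isaac.dofbot import DofBot
from omni.isaac.dofbot.kinematics_solver import KinematicsSolver

dofbot = DofBot(prim_path="/World/DofBot", name="my_dofbot")  # hypothetical prim path
ik_solver = KinematicsSolver(dofbot)
# compute_inverse_kinematics is inherited from ArticulationKinematicsSolver and
# returns an ArticulationAction together with a success flag
action, success = ik_solver.compute_inverse_kinematics(target_position=np.array([0.2, 0.0, 0.2]))
if success:
    dofbot.apply_action(action)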
swadaskar/Isaac_Sim_Folder/exts/omni.isaac.dofbot/omni/isaac/dofbot/tasks/pick_place.py | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
import omni.isaac.core.tasks as tasks
from omni.isaac.core.utils.stage import get_stage_units
from omni.isaac.dofbot import DofBot
from omni.isaac.core.utils.prims import is_prim_path_valid
from omni.isaac.core.utils.string import find_unique_string_name
import numpy as np
from typing import Optional
class PickPlace(tasks.PickPlace):
def __init__(
self,
name: str = "dofbot_pick_place",
cube_initial_position: Optional[np.ndarray] = None,
cube_initial_orientation: Optional[np.ndarray] = None,
target_position: Optional[np.ndarray] = None,
cube_size: Optional[np.ndarray] = None,
offset: Optional[np.ndarray] = None,
) -> None:
"""[summary]
Args:
name (str, optional): [description]. Defaults to "dofbot_pick_place".
cube_initial_position (Optional[np.ndarray], optional): [description]. Defaults to None.
cube_initial_orientation (Optional[np.ndarray], optional): [description]. Defaults to None.
target_position (Optional[np.ndarray], optional): [description]. Defaults to None.
cube_size (Optional[np.ndarray], optional): [description]. Defaults to None.
offset (Optional[np.ndarray], optional): [description]. Defaults to None.
"""
if cube_initial_position is None:
cube_initial_position = np.array([0.31, 0, 0.025 / 2.0]) / get_stage_units()
if cube_size is None:
cube_size = np.array([0.025, 0.025, 0.025]) / get_stage_units()
if target_position is None:
target_position = np.array([-0.31, 0.31, 0.025]) / get_stage_units()
tasks.PickPlace.__init__(
self,
name=name,
cube_initial_position=cube_initial_position,
cube_initial_orientation=cube_initial_orientation,
target_position=target_position,
cube_size=cube_size,
offset=offset,
)
return
def set_robot(self) -> DofBot:
"""[summary]
Returns:
DofBot: [description]
"""
dofbot_prim_path = find_unique_string_name(
initial_name="/World/DofBot", is_unique_fn=lambda x: not is_prim_path_valid(x)
)
dofbot_robot_name = find_unique_string_name(
initial_name="my_dofbot", is_unique_fn=lambda x: not self.scene.object_exists(x)
)
return DofBot(prim_path=dofbot_prim_path, name=dofbot_robot_name)
| 2,912 | Python | 41.838235 | 103 | 0.645604 |
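A short usage sketch for the task above (it assumes a standalone script in which a World from omni.isaac.core has been created):

# Hypothetical usage sketch, not part of the original file.
from omni.isaac.core import World
from omni.isaac.dofbot.tasks.pick_place import PickPlace

world = World(stage_units_in_meters=1.0)
world.add_task(PickPlace(name="dofbot_pick_place"))
world.reset()
observations = world.get_observations()  # cube pose, target position and robot joint state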
swadaskar/Isaac_Sim_Folder/exts/omni.isaac.dofbot/omni/isaac/dofbot/tasks/follow_target.py | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
import omni.isaac.core.tasks as tasks
from omni.isaac.core.utils.stage import get_stage_units
from omni.isaac.dofbot import DofBot
from omni.isaac.core.utils.prims import is_prim_path_valid
from omni.isaac.core.utils.string import find_unique_string_name
import numpy as np
from typing import Optional
class FollowTarget(tasks.FollowTarget):
"""[summary]
Args:
name (str, optional): [description]. Defaults to "dofbot_follow_target".
target_prim_path (Optional[str], optional): [description]. Defaults to None.
target_name (Optional[str], optional): [description]. Defaults to None.
target_position (Optional[np.ndarray], optional): [description]. Defaults to None.
target_orientation (Optional[np.ndarray], optional): [description]. Defaults to None.
offset (Optional[np.ndarray], optional): [description]. Defaults to None.
dofbot_prim_path (Optional[str], optional): [description]. Defaults to None.
dofbot_robot_name (Optional[str], optional): [description]. Defaults to None.
"""
def __init__(
self,
name: str = "dofbot_follow_target",
target_prim_path: Optional[str] = None,
target_name: Optional[str] = None,
target_position: Optional[np.ndarray] = None,
target_orientation: Optional[np.ndarray] = None,
offset: Optional[np.ndarray] = None,
dofbot_prim_path: Optional[str] = None,
dofbot_robot_name: Optional[str] = None,
) -> None:
if target_position is None:
target_position = np.array([0, 0.1, 0.1]) / get_stage_units()
tasks.FollowTarget.__init__(
self,
name=name,
target_prim_path=target_prim_path,
target_name=target_name,
target_position=target_position,
target_orientation=target_orientation,
offset=offset,
)
self._dofbot_prim_path = dofbot_prim_path
self._dofbot_robot_name = dofbot_robot_name
return
def set_robot(self) -> DofBot:
"""[summary]
Returns:
DofBot: [description]
"""
if self._dofbot_prim_path is None:
self._dofbot_prim_path = find_unique_string_name(
initial_name="/World/DofBot", is_unique_fn=lambda x: not is_prim_path_valid(x)
)
if self._dofbot_robot_name is None:
self._dofbot_robot_name = find_unique_string_name(
initial_name="my_dofbot", is_unique_fn=lambda x: not self.scene.object_exists(x)
)
return DofBot(prim_path=self._dofbot_prim_path, name=self._dofbot_robot_name)
| 3,089 | Python | 41.328767 | 96 | 0.651667 |
swadaskar/Isaac_Sim_Folder/exts/omni.isaac.dofbot/omni/isaac/dofbot/controllers/rmpflow_controller.py | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
import omni.isaac.motion_generation as mg
from omni.isaac.core.articulations import Articulation
class RMPFlowController(mg.MotionPolicyController):
"""[summary]
Args:
name (str): [description]
robot_articulation (Articulation): [description]
physics_dt (float, optional): [description]. Defaults to 1.0/60.0.
"""
def __init__(self, name: str, robot_articulation: Articulation, physics_dt: float = 1.0 / 60.0) -> None:
self.rmp_flow_config = mg.interface_config_loader.load_supported_motion_policy_config("DofBot", "RMPflow")
self.rmp_flow = mg.lula.motion_policies.RmpFlow(**self.rmp_flow_config)
self.articulation_rmp = mg.ArticulationMotionPolicy(robot_articulation, self.rmp_flow, physics_dt)
mg.MotionPolicyController.__init__(self, name=name, articulation_motion_policy=self.articulation_rmp)
self._default_position, self._default_orientation = (
self._articulation_motion_policy._robot_articulation.get_world_pose()
)
self._motion_policy.set_robot_base_pose(
robot_position=self._default_position, robot_orientation=self._default_orientation
)
return
def reset(self):
mg.MotionPolicyController.reset(self)
self._motion_policy.set_robot_base_pose(
robot_position=self._default_position, robot_orientation=self._default_orientation
)
| 1,870 | Python | 43.547618 | 114 | 0.705348 |
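A short usage sketch for the controller above (hypothetical prim path and target; forward() is inherited from MotionPolicyController and returns an ArticulationAction):

# Hypothetical usage sketch, not part of the original file.
import numpy as np
from omni.isaac.dofbot import DofBot
from omni.isaac.dofbot.controllers import RMPFlowController

dofbot = DofBot(prim_path="/World/DofBot", name="my_dofbot")  # hypothetical prim path
controller = RMPFlowController(name="rmpflow_controller", robot_articulation=dofbot)
# per physics step: compute the next action towards the target and apply it
action = controller.forward(target_end_effector_position=np.array([0.2, 0.0, 0.15]))
dofbot.apply_action(action)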
swadaskar/Isaac_Sim_Folder/exts/omni.isaac.dofbot/omni/isaac/dofbot/controllers/pick_place_controller.py | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
from omni.isaac.core.utils.stage import get_stage_units
from omni.isaac.core.articulations import Articulation
from omni.isaac.manipulators.grippers.parallel_gripper import ParallelGripper
import omni.isaac.manipulators.controllers as manipulators_controllers
from omni.isaac.dofbot.controllers import RMPFlowController
from typing import Optional, List
class PickPlaceController(manipulators_controllers.PickPlaceController):
"""[summary]
Args:
name (str): [description]
gripper (ParallelGripper): [description]
robot_articulation(Articulation): [description]
events_dt (Optional[List[float]], optional): [description]. Defaults to None.
"""
def __init__(
self,
name: str,
gripper: ParallelGripper,
robot_articulation: Articulation,
events_dt: Optional[List[float]] = None,
) -> None:
if events_dt is None:
events_dt = [0.01, 0.01, 1, 0.01, 0.01, 0.01, 0.01, 0.05, 0.01, 0.08]
manipulators_controllers.PickPlaceController.__init__(
self,
name=name,
cspace_controller=RMPFlowController(
name=name + "_cspace_controller", robot_articulation=robot_articulation
),
gripper=gripper,
events_dt=events_dt,
end_effector_initial_height=0.2 / get_stage_units(),
)
return
| 1,854 | Python | 38.468084 | 89 | 0.677994 |
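A short usage sketch for the controller above (hypothetical positions; it assumes the DofBot class exposes its ParallelGripper via a gripper property, as in other Isaac manipulator examples, and that forward() and is_done() come from the manipulators PickPlaceController base class):

# Hypothetical usage sketch, not part of the original file.
import numpy as np
from omni.isaac.dofbot import DofBot
from omni.isaac.dofbot.controllers.pick_place_controller import PickPlaceController

dofbot = DofBot(prim_path="/World/DofBot", name="my_dofbot")  # hypothetical prim path
controller = PickPlaceController(name="pick_place", gripper=dofbot.gripper, robot_articulation=dofbot)
# called every physics step until controller.is_done() returns True
action = controller.forward(
    picking_position=np.array([0.31, 0.0, 0.02]),    # hypothetical cube position
    placing_position=np.array([-0.31, 0.31, 0.02]),  # hypothetical target position
    current_joint_positions=dofbot.get_joint_positions(),
)
dofbot.apply_action(action)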
swadaskar/Isaac_Sim_Folder/exts/omni.isaac.dofbot/docs/CHANGELOG.md | # Changelog
## [0.3.0] - 2022-07-26
### Removed
- Removed GripperController class and used the new ParallelGripper class instead.
### Changed
- Changed gripper_dof_indices argument in PickPlaceController to gripper.
### Added
- Added deltas argument in Franka class for the gripper action deltas when opening or closing.
## [0.2.1] - 2022-07-22
### Fixed
- Bug with adding a custom usd for manipulator
## [0.2.0] - 2022-05-02
### Changed
- Changed InverseKinematicsSolver class to KinematicsSolver class, using the new LulaKinematicsSolver class in motion_generation
## [0.1.4] - 2022-04-21
### Changed
- Updated RmpFlowController class init alongside modifying motion_generation extension
## [0.1.3] - 2022-03-25
### Changed
- Updated RmpFlowController class alongside changes to motion_generation extension
## [0.1.2] - 2022-03-16
### Changed
- Replaced find_nucleus_server() with get_assets_root_path()
## [0.1.1] - 2021-12-02
### Changed
- Propagation of core api changes
## [0.1.0] - 2021-09-01
### Added
- Added Dofbot class | 1,048 | Markdown | 21.319148 | 128 | 0.721374 |
swadaskar/Isaac_Sim_Folder/exts/omni.isaac.dofbot/docs/index.rst | Dofbot Robot [omni.isaac.dofbot]
################################
Dofbot
=============
.. automodule:: omni.isaac.dofbot.dofbot
:inherited-members:
:members:
:undoc-members:
:exclude-members:
Dofbot Kinematics Solver
=========================
.. automodule:: omni.isaac.dofbot.kinematics_solver
:inherited-members:
:members:
Dofbot Controllers
==================
.. automodule:: omni.isaac.dofbot.controllers
:inherited-members:
:imported-members:
:members:
:undoc-members:
:exclude-members:
Dofbot Tasks
==============
.. automodule:: omni.isaac.dofbot.tasks
:inherited-members:
:imported-members:
:members:
:undoc-members:
:exclude-members:
| 718 | reStructuredText | 16.119047 | 51 | 0.584958 |
swadaskar/Isaac_Sim_Folder/exts/omni.isaac.repl/config/extension.toml | [core]
reloadable = true
order = 0
[package]
version = "1.0.3"
category = "Utility"
title = "Isaac Sim REPL"
description = "Extension that provides an interactive shell to a running omniverse app"
authors = ["NVIDIA"]
repository = ""
keywords = ["isaac", "python", "repl"]
changelog = "docs/CHANGELOG.md"
readme = "docs/README.md"
icon = "data/icon.png"
writeTarget.kit = true
target.platform = ["linux-*"]
[dependencies]
"omni.kit.test" = {}
[[python.module]]
name = "prompt_toolkit"
path = "pip_prebundle"
[[python.module]]
name = "omni.isaac.repl"
[[python.module]]
name = "omni.isaac.repl.tests"
[settings]
exts."omni.isaac.repl".host = "127.0.0.1"
exts."omni.isaac.repl".port = 8223
| 695 | TOML | 18.885714 | 87 | 0.684892 |
swadaskar/Isaac_Sim_Folder/exts/omni.isaac.repl/docs/CHANGELOG.md | # Changelog
## [1.0.3] - 2022-04-16
### Fixed
- ptpython was not fixed
## [1.0.2] - 2022-04-08
### Fixed
- Fix incorrect windows platform check
## [1.0.1] - 2022-04-08
### Changed
- Extension only targets Linux now due to asyncio add_reader limitation
## [1.0.0] - 2022-04-06
### Added
- Initial version of extension | 323 | Markdown | 14.428571 | 70 | 0.647059 |
swadaskar/Isaac_Sim_Folder/exts/omni.isaac.repl/docs/README.md | # Usage
To enable this extension, go to the Extension Manager menu and enable the omni.isaac.repl extension.
Then log in using `telnet localhost 8223`. See extension.toml for a full list of settings. | 197 | Markdown | 38.599992 | 96 | 0.786802 |