file_path (string) | content (string) | size (int64) | lang (string) | avg_line_length (float64) | max_line_length (int64) | alphanum_fraction (float64) |
---|---|---|---|---|---|---|
jasonsaini/OmniverseCubeClickExtension/spawn_cube/exts/spawn.cube/spawn/cube/__init__.py | from .extension import *
| 25 | Python | 11.999994 | 24 | 0.76 |
jasonsaini/OmniverseCubeClickExtension/spawn_cube/exts/spawn.cube/spawn/cube/tests/__init__.py | from .test_hello_world import * | 31 | Python | 30.999969 | 31 | 0.774194 |
jasonsaini/OmniverseCubeClickExtension/spawn_cube/exts/spawn.cube/spawn/cube/tests/test_hello_world.py | # NOTE:
# omni.kit.test - std python's unittest module with additional wrapping to add support for async/await tests
# For most things refer to unittest docs: https://docs.python.org/3/library/unittest.html
import omni.kit.test
# Extension for writing UI tests (simulate UI interaction)
import omni.kit.ui_test as ui_test
# Import the extension's python module we are testing with an absolute import path, as if we were an external user (another extension)
import spawn.cube
# Having a test class derived from omni.kit.test.AsyncTestCase declared at the root of the module will make it auto-discoverable by omni.kit.test
class Test(omni.kit.test.AsyncTestCase):
# Before running each test
async def setUp(self):
pass
# After running each test
async def tearDown(self):
pass
# Actual test; notice it is an "async" function, so "await" can be used if needed
async def test_hello_public_function(self):
result = spawn.cube.some_public_function(4)
self.assertEqual(result, 256)
async def test_window_button(self):
# Find a label in our window
label = ui_test.find("My Window//Frame/**/Label[*]")
# Find buttons in our window
add_button = ui_test.find("My Window//Frame/**/Button[*].text=='Add'")
reset_button = ui_test.find("My Window//Frame/**/Button[*].text=='Reset'")
# Click reset button
await reset_button.click()
self.assertEqual(label.widget.text, "empty")
await add_button.click()
self.assertEqual(label.widget.text, "count: 1")
await add_button.click()
self.assertEqual(label.widget.text, "count: 2")
| 1,656 | Python | 34.255318 | 142 | 0.679952 |
jasonsaini/OmniverseCubeClickExtension/spawn_cube/exts/spawn.cube/config/extension.toml | [package]
# Semantic Versioning is used: https://semver.org/
version = "1.0.0"
# Lists people or organizations that are considered the "authors" of the package.
authors = ["NVIDIA"]
# The title and description fields are primarily for displaying extension info in UI
title = "Jason's Cube Spawner Extension"
description="A simple python extension example that spawns cube in a USD file."
# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"
# URL of the extension source repository.
repository = ""
# One of categories for UI.
category = "Example"
# Keywords for the extension
keywords = ["kit", "example"]
# Location of change log file in target (final) folder of extension, relative to the root.
# More info on writing changelog: https://keepachangelog.com/en/1.0.0/
changelog="docs/CHANGELOG.md"
# Preview image and icon. Folder named "data" automatically goes in git lfs (see .gitattributes file).
# Preview image is shown in "Overview" of Extensions window. Screenshot of an extension might be a good preview image.
preview_image = "data/preview.png"
# Icon is shown in Extensions window, it is recommended to be square, of size 256x256.
icon = "data/icon.png"
# Use omni.ui to build simple UI
[dependencies]
"omni.kit.uiapp" = {}
# Main python module this extension provides, it will be publicly available as "import spawn.cube".
[[python.module]]
name = "spawn.cube"
[[test]]
# Extra dependencies only to be used during test run
dependencies = [
"omni.kit.ui_test" # UI testing extension
]
| 1,559 | TOML | 32.191489 | 118 | 0.743425 |
jasonsaini/OmniverseCubeClickExtension/spawn_cube/exts/spawn.cube/docs/CHANGELOG.md | # Changelog
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [1.0.0] - 2021-04-26
- Initial version of extension UI template with a window
| 178 | Markdown | 18.888887 | 80 | 0.702247 |
jasonsaini/OmniverseCubeClickExtension/spawn_cube/exts/spawn.cube/docs/README.md | # Python Extension Example [spawn.cube]
This is an example of a pure Python Kit extension. It is intended to be copied and to serve as a template for creating new extensions.
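For orientation, here is a minimal sketch of what a copied template typically boils down to: an `omni.ext.IExt` subclass that builds a small `omni.ui` window. It mirrors the extension classes found elsewhere in this repository, only runs inside a Kit app, and the window title and label text are placeholders.
```python
import omni.ext
import omni.ui as ui


class MyExtension(omni.ext.IExt):
    # Called when the extension is enabled in the Extension Manager.
    def on_startup(self, ext_id):
        self._window = ui.Window("My Window", width=300, height=200)
        with self._window.frame:
            with ui.VStack():
                ui.Label("Hello from a copied template")

    # Called when the extension is disabled or the app shuts down.
    def on_shutdown(self):
        self._window = None
```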
| 169 | Markdown | 32.999993 | 126 | 0.781065 |
jasonsaini/OmniverseCubeClickExtension/spawn_cube/exts/spawn.cube/docs/index.rst | spawn.cube
#############################
Example of Python only extension
.. toctree::
:maxdepth: 1
README
CHANGELOG
.. automodule::"spawn.cube"
:platform: Windows-x86_64, Linux-x86_64
:members:
:undoc-members:
:show-inheritance:
:imported-members:
:exclude-members: contextmanager
| 321 | reStructuredText | 14.333333 | 43 | 0.604361 |
Mariuxtheone/kit-extension-sample-camerastudio/README.md | # Camera Studio - NVIDIA Omniverse Extension
<img src="https://github.com/Mariuxtheone/kit-extension-sample-camerastudio/blob/main/exts/omni.example.camerastudio/data/icon.png" width="128">
This extension allows you to open a CSV file containing camera settings and generates in-scene cameras accordingly.

Usage:
The extension generates cameras with the following settings:
- Shot Name
- Focal Length (in mm)
- Horizontal Aperture (in mm)
- Distance from the subject at which the camera should be placed in the scene (in meters)
1) Create your .csv file with the following header:
```
shot_name,focal_length,aperture,distance
```
e.g.
```
shot_name,focal_length,aperture,distance
establishing_shot,24,2.8,4
wide_shot,14,2.0,4
over_the_shoulder_shot,50,2.8,0.5
point_of_view_shot,85,2.8,0.5
low_angle_shot,24,1.8,0.5
high_angle_shot,100,2.8,1.5
```
2) Open the .csv file via the Extension.
3) The extension will generate the cameras in your scene with the desired shots configured.
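For reference, the sketch below shows one way a CSV like the example above could be parsed with Python's built-in `csv` module before handing each row to camera-creation code. The `shots.csv` filename and the `make_camera` helper are illustrative placeholders; the extension itself performs this step with `omni.kit.commands`, as shown in `cameragenerator.py`.
```python
import csv


def make_camera(shot_name, focal_length, aperture, distance):
    # Placeholder: the extension creates a Camera prim here and moves it
    # `distance` meters away from the subject.
    print(f"{shot_name}: {focal_length}mm, aperture {aperture}, {distance}m")


with open("shots.csv", newline="") as f:
    # Header: shot_name,focal_length,aperture,distance
    for row in csv.DictReader(f):
        make_camera(
            row["shot_name"],
            float(row["focal_length"]),
            float(row["aperture"]),
            float(row["distance"]),
        )
```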
# chatGPT Prompt (also works with GPT-3)
This is the prompt that I perfected to generate shots; you might have to run it a few times to get exactly the desired results, but this seems to do the trick:
```
list a series of 10 camera shots for an interior video shoot, specifying the focal length of the camera in mm, the horizontal aperture (as number), and the distance the camera should be put at (in meters)
put those settings in a CSV file using this header: shot_name, focal_length, aperture, distance
horizontal aperture should be indicated as number (for example, 2.8) and distance should be indicated as number (for example, for 1 meter, put 1). shot_name has to be represented with underscore format (for example, extreme_close_up_shot)
remove mm and m from the CSV
```
# Extension Project Template
This project was automatically generated.
- `app` - A folder link to the location of your *Omniverse Kit* based app.
- `exts` - A folder where you can add new extensions. It was automatically added to the extension search path (Extension Manager -> Gear Icon -> Extension Search Path).
Open this folder using Visual Studio Code. It will suggest installing a few extensions that will improve the Python experience.
Look for the "omni.example.camerastudio" extension in the extension manager and enable it. Try changing any Python file; it will hot-reload and you can observe the results immediately.
Alternatively, you can launch your app from the console with this folder added to the search path and your extension enabled, e.g.:
```
> app\omni.code.bat --ext-folder exts --enable omni.example.camerastudio
```
# App Link Setup
If the `app` folder link doesn't exist or is broken, it can be created again. For a better developer experience, it is recommended to create a folder link named `app` to the *Omniverse Kit* app installed from the *Omniverse Launcher*. A convenience script to do this is included.
Run:
```
> link_app.bat
```
If successful, you should see an `app` folder link in the root of this repo.
If multiple Omniverse apps are installed, the script will select the recommended one. Or you can explicitly pass an app:
```
> link_app.bat --app create
```
You can also just pass a path to create the link to:
```
> link_app.bat --path "C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4"
```
# Sharing Your Extensions
This folder is ready to be pushed to any git repository. Once pushed, a direct link to the git repository can be added to the *Omniverse Kit* extension search paths.
The link might look like this: `git://github.com/[user]/[your_repo].git?branch=main&dir=exts`
Notice that `exts` is the repo subfolder containing extensions. More information can be found in the "Git URL as Extension Search Paths" section of the developer manual.
To add a link to your *Omniverse Kit* based app, go to: Extension Manager -> Gear Icon -> Extension Search Path
## Contributing
The source code for this repository is provided as-is and we are not accepting outside contributions.
| 4,001 | Markdown | 36.401869 | 258 | 0.763809 |
Mariuxtheone/kit-extension-sample-camerastudio/tools/scripts/link_app.py | import argparse
import json
import os
import sys
import packmanapi
import urllib3
def find_omniverse_apps():
http = urllib3.PoolManager()
try:
r = http.request("GET", "http://127.0.0.1:33480/components")
except Exception as e:
print(f"Failed retrieving apps from an Omniverse Launcher, maybe it is not installed?\nError: {e}")
sys.exit(1)
apps = {}
for x in json.loads(r.data.decode("utf-8")):
latest = x.get("installedVersions", {}).get("latest", "")
if latest:
for s in x.get("settings", []):
if s.get("version", "") == latest:
root = s.get("launch", {}).get("root", "")
apps[x["slug"]] = (x["name"], root)
break
return apps
def create_link(src, dst):
print(f"Creating a link '{src}' -> '{dst}'")
packmanapi.link(src, dst)
APP_PRIORITIES = ["code", "create", "view"]
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Create folder link to Kit App installed from Omniverse Launcher")
parser.add_argument(
"--path",
help="Path to Kit App installed from Omniverse Launcher, e.g.: 'C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4'",
required=False,
)
parser.add_argument(
"--app", help="Name of Kit App installed from Omniverse Launcher, e.g.: 'code', 'create'", required=False
)
args = parser.parse_args()
path = args.path
if not path:
print("Path is not specified, looking for Omniverse Apps...")
apps = find_omniverse_apps()
if len(apps) == 0:
print(
"Can't find any Omniverse Apps. Use Omniverse Launcher to install one. 'Code' is the recommended app for developers."
)
sys.exit(0)
print("\nFound following Omniverse Apps:")
for i, slug in enumerate(apps):
name, root = apps[slug]
print(f"{i}: {name} ({slug}) at: '{root}'")
if args.app:
selected_app = args.app.lower()
if selected_app not in apps:
choices = ", ".join(apps.keys())
print(f"Passed app: '{selected_app}' is not found. Specify one of the following found Apps: {choices}")
sys.exit(0)
else:
selected_app = next((x for x in APP_PRIORITIES if x in apps), None)
if not selected_app:
selected_app = next(iter(apps))
print(f"\nSelected app: {selected_app}")
_, path = apps[selected_app]
if not os.path.exists(path):
print(f"Provided path doesn't exist: {path}")
else:
SCRIPT_ROOT = os.path.dirname(os.path.realpath(__file__))
create_link(f"{SCRIPT_ROOT}/../../app", path)
print("Success!")
| 2,814 | Python | 32.117647 | 133 | 0.562189 |
Mariuxtheone/kit-extension-sample-camerastudio/tools/packman/config.packman.xml | <config remotes="cloudfront">
<remote2 name="cloudfront">
<transport actions="download" protocol="https" packageLocation="d4i3qtqj3r0z5.cloudfront.net/${name}@${version}" />
</remote2>
</config>
| 211 | XML | 34.333328 | 123 | 0.691943 |
Mariuxtheone/kit-extension-sample-camerastudio/tools/packman/bootstrap/install_package.py | # Copyright 2019 NVIDIA CORPORATION
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import shutil
import sys
import tempfile
import zipfile
__author__ = "hfannar"
logging.basicConfig(level=logging.WARNING, format="%(message)s")
logger = logging.getLogger("install_package")
class TemporaryDirectory:
def __init__(self):
self.path = None
def __enter__(self):
self.path = tempfile.mkdtemp()
return self.path
def __exit__(self, type, value, traceback):
# Remove temporary data created
shutil.rmtree(self.path)
def install_package(package_src_path, package_dst_path):
with zipfile.ZipFile(package_src_path, allowZip64=True) as zip_file, TemporaryDirectory() as temp_dir:
zip_file.extractall(temp_dir)
# Recursively copy (temp_dir will be automatically cleaned up on exit)
try:
# Recursive copy is needed because both package name and version folder could be missing in
# target directory:
shutil.copytree(temp_dir, package_dst_path)
except OSError as exc:
logger.warning("Directory %s already present, packaged installation aborted" % package_dst_path)
else:
logger.info("Package successfully installed to %s" % package_dst_path)
install_package(sys.argv[1], sys.argv[2])
| 1,844 | Python | 33.166666 | 108 | 0.703362 |
Mariuxtheone/kit-extension-sample-camerastudio/exts/omni.example.camerastudio/config/extension.toml | [package]
# Semantic Versioning is used: https://semver.org/
version = "1.0.0"
# Lists people or organizations that are considered the "authors" of the package.
authors = ["NVIDIA"]
# The title and description fields are primarily for displaying extension info in UI
title = "Camera Studio"
description="A small extension that generates Cameras based on predefined settings"
# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"
# URL of the extension source repository.
repository = ""
# One of categories for UI.
category = "Example"
# Keywords for the extension
keywords = ["kit", "example"]
# Location of change log file in target (final) folder of extension, relative to the root.
# More info on writing changelog: https://keepachangelog.com/en/1.0.0/
changelog="docs/CHANGELOG.md"
# Preview image and icon. Folder named "data" automatically goes in git lfs (see .gitattributes file).
# Preview image is shown in "Overview" of Extensions window. Screenshot of an extension might be a good preview image.
preview_image = "data/preview.png"
# Icon is shown in Extensions window, it is recommended to be square, of size 256x256.
icon = "data/icon.png"
# Use omni.ui to build simple UI
[dependencies]
"omni.kit.uiapp" = {}
# Main python module this extension provides, it will be publicly available as "import omni.example.camerastudio".
[[python.module]]
name = "omni.example.camerastudio"
[[test]]
# Extra dependencies only to be used during test run
dependencies = [
"omni.kit.ui_test" # UI testing extension
]
| 1,577 | TOML | 31.874999 | 118 | 0.748256 |
Mariuxtheone/kit-extension-sample-camerastudio/exts/omni.example.camerastudio/omni/example/camerastudio/extension.py | import omni.ext
import omni.ui as ui
import omni.kit.commands
from .csvreader import CSVReader
# Functions and vars are available to other extensions as usual in python: `omni.example.camerastudio.some_public_function(x)`
def some_public_function(x: int):
print("[omni.example.camerastudio] some_public_function was called with x: ", x)
return x ** x
# Any class derived from `omni.ext.IExt` in top level module (defined in `python.modules` of `extension.toml`) will be
# instantiated when extension gets enabled and `on_startup(ext_id)` will be called. Later when extension gets disabled
# on_shutdown() is called.
class CamerastudioExtension(omni.ext.IExt):
# ext_id is current extension id. It can be used with extension manager to query additional information, like where
# this extension is located on filesystem.
def on_startup(self, ext_id):
print("[omni.example.camerastudio] omni example camerastudio startup")
self._count = 0
self.csvreader = CSVReader()
self._window = ui.Window("Camera Studio", width=300, height=250)
with self._window.frame:
with ui.VStack():
label = ui.Label("Click the button to import a CSV file\nwith the details to generate multiple cameras.")
with ui.HStack():
ui.Button("Open File...", clicked_fn=self.csvreader.on_open_file)
def on_shutdown(self):
print("[omni.example.camerastudio] omni example camerastudio shutdown")
| 1,538 | Python | 39.499999 | 121 | 0.675553 |
Mariuxtheone/kit-extension-sample-camerastudio/exts/omni.example.camerastudio/omni/example/camerastudio/__init__.py | from .extension import *
| 25 | Python | 11.999994 | 24 | 0.76 |
Mariuxtheone/kit-extension-sample-camerastudio/exts/omni.example.camerastudio/omni/example/camerastudio/csvreader.py | import omni.ext
import omni.ui as ui
import omni.kit.commands
from pxr import UsdGeom
from omni.kit.window.file_importer import get_file_importer
from typing import List, Tuple, Callable, Dict
import csv
from .cameragenerator import CameraGenerator
class CSVReader():
def __init__(self):
pass
def import_handler(self,filename: str, dirname: str, selections: List[str] = []):
print(f"> Import '{filename}' from '{dirname}' or selected files '{selections}'")
self.openCSV(dirname+filename)
def on_open_file(self):
file_importer = get_file_importer()
file_importer.show_window(
title="Import File",
# The callback function called after the user has selected a file.
import_handler=self.import_handler
)
# Open the CSV file, read each row, and extract shot_name, focal_length, aperture and distance
def openCSV(self,selections):
with open(selections) as csv_file:
csv_reader = csv.reader(csv_file, delimiter=',')
line_count = 0
for row in csv_reader:
if line_count == 0:
line_count += 1
else:
shot_name = row[0]
print (f'Shot Name: {shot_name}.')
focal_length = row[1]
print (f'Focal Length: {focal_length}.')
aperture = row[2]
print (f'Aperture: {aperture}.')
distance = row[3]
print (f'Distance: {distance}.')
#do something with the csv data. in this case, generate a camera
cameraGenerator = CameraGenerator()
cameraGenerator.generate_camera(str(shot_name), float(focal_length), float(aperture), float(distance))
line_count += 1
| 1,960 | Python | 34.017857 | 137 | 0.560714 |
Mariuxtheone/kit-extension-sample-camerastudio/exts/omni.example.camerastudio/omni/example/camerastudio/cameragenerator.py | import omni.ext
import omni.ui as ui
import omni.kit.commands
from pxr import UsdGeom
from omni.kit.window.file_importer import get_file_importer
class CameraGenerator():
def __init__(self):
pass
def generate_camera(self, shot_name, focal_length, aperture, distance):
#generate camera
omni.kit.commands.execute("CreatePrimWithDefaultXform",
prim_type="Camera",
prim_path="/World/"+shot_name,
attributes={
"projection": UsdGeom.Tokens.perspective,
"focalLength": focal_length,
"horizontalAperture": aperture,
}
)
#move camera
omni.kit.commands.execute('TransformMultiPrimsSRTCpp',
count=1,
paths=['/World/'+shot_name],
new_translations=[0, 0, distance*1000],
new_rotation_eulers=[-0.0, -0.0, -0.0],
new_rotation_orders=[1, 0, 2],
new_scales=[1.0, 1.0, 1.0],
old_translations=[0.0, 0.0, 0.0],
old_rotation_eulers=[0.0, -0.0, -0.0],
old_rotation_orders=[1, 0, 2],
old_scales=[1.0, 1.0, 1.0],
usd_context_name='',
time_code=0.0)
| 1,526 | Python | 38.153845 | 75 | 0.439712 |
Mariuxtheone/kit-extension-sample-camerastudio/exts/omni.example.camerastudio/omni/example/camerastudio/tests/__init__.py | from .test_hello_world import * | 31 | Python | 30.999969 | 31 | 0.774194 |
Mariuxtheone/kit-extension-sample-camerastudio/exts/omni.example.camerastudio/omni/example/camerastudio/tests/test_hello_world.py | # NOTE:
# omni.kit.test - std python's unittest module with additional wrapping to add support for async/await tests
# For most things refer to unittest docs: https://docs.python.org/3/library/unittest.html
import omni.kit.test
# Extension for writing UI tests (simulate UI interaction)
import omni.kit.ui_test as ui_test
# Import the extension's python module we are testing with an absolute import path, as if we were an external user (another extension)
import omni.example.camerastudio
# Having a test class derived from omni.kit.test.AsyncTestCase declared at the root of the module will make it auto-discoverable by omni.kit.test
class Test(omni.kit.test.AsyncTestCase):
# Before running each test
async def setUp(self):
pass
# After running each test
async def tearDown(self):
pass
# Actual test; notice it is an "async" function, so "await" can be used if needed
async def test_hello_public_function(self):
result = omni.example.camerastudio.some_public_function(4)
self.assertEqual(result, 256)
async def test_window_button(self):
# Find a label in our window
label = ui_test.find("My Window//Frame/**/Label[*]")
# Find buttons in our window
add_button = ui_test.find("My Window//Frame/**/Button[*].text=='Add'")
reset_button = ui_test.find("My Window//Frame/**/Button[*].text=='Reset'")
# Click reset button
await reset_button.click()
self.assertEqual(label.widget.text, "empty")
await add_button.click()
self.assertEqual(label.widget.text, "count: 1")
await add_button.click()
self.assertEqual(label.widget.text, "count: 2")
| 1,686 | Python | 34.893616 | 142 | 0.68446 |
Mariuxtheone/kit-extension-sample-camerastudio/exts/omni.example.camerastudio/docs/CHANGELOG.md | # Changelog
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [1.0.0] - 2021-04-26
- Initial version of extension UI template with a window
| 178 | Markdown | 18.888887 | 80 | 0.702247 |
Mariuxtheone/kit-extension-sample-camerastudio/exts/omni.example.camerastudio/docs/README.md | # Camera Studio
This extension allows you to open a CSV file containing camera settings and generates in-scene cameras accordingly.
Usage:
The extension generates cameras with the following settings:
- Shot Name
- Focal Length (in mm)
- Horizontal Aperture (in mm)
- Distance from the subject at which the camera should be placed in the scene (in meters)
1) Create your .csv file with the following header:
shot_name,focal_length,aperture,distance
e.g.
shot_name,focal_length,aperture,distance
establishing_shot,24,2.8,4
wide_shot,14,2.0,4
over_the_shoulder_shot,50,2.8,0.5
point_of_view_shot,85,2.8,0.5
low_angle_shot,24,1.8,0.5
high_angle_shot,100,2.8,1.5
2) Open the .csv file via the Extension.
3) The extension will generate the cameras in your scene with the desired shots configured.
| 799 | Markdown | 26.586206 | 128 | 0.777222 |
Mariuxtheone/kit-extension-sample-camerastudio/exts/omni.example.camerastudio/docs/index.rst | omni.example.camerastudio
#############################
Example of Python only extension
.. toctree::
:maxdepth: 1
README
CHANGELOG
.. automodule::"omni.example.camerastudio"
:platform: Windows-x86_64, Linux-x86_64
:members:
:undoc-members:
:show-inheritance:
:imported-members:
:exclude-members: contextmanager
| 351 | reStructuredText | 15.761904 | 43 | 0.632479 |
Mariuxtheone/omni-openai-gpt3-snippet-extension/README.md | # NVIDIA Omniverse OpenAI GPT-3 Snippet Extension

This is an Extension that adds a simple snippet UI to NVIDIA Omniverse which allows you to generate GPT-3 based snippets.
## 1) Dependencies
In order to use this extension, you will need to install the following dependencies:
- openai python library: `pip install openai`
- pyperclip: `pip install pyperclip`
## 2) Installation
1) Install the Extension in your Omniverse app.
2) We need to create a file that holds the OpenAI API key and the path to the main Python modules directory on our device, since Omniverse doesn't use the global PYTHONHOME and PYTHONPATH.
3) To do this, in the omni\openai\snippet\ folder, create a new file called `apikeys.py`
4) In the `apikeys.py` file, add the following lines:
```
apikey = "YOUR_OPENAI_API_KEY_GOES_HERE"
pythonpath = "The file path where you have installed your main python modules"
```
so `apikeys.py` should look like this:
```
apikey = "sk-123Mb38gELphag234GDyYT67FJwa3334FPRZQZ2Aq5f1o" (this is a fake API key, good try!)
pythonpath = "C:/Users/yourusername/AppData/Local/Packages/PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0/LocalCache/local-packages/Python310/site-packages"
```
## 3) Enable and Usage
To use the extension, enable it from the Extension Window and then click the "Generate and Copy to Clipboard" button. The generated snippet will be copied to your clipboard and you can paste it anywhere you want.
## 4) IMPORTANT DISCLAIMER
1) OpenAI is a third party API and you will need to create an account with OpenAI to use it. Consider that there's a cost associated with using the API.
2) The extension by default generates snippets of up to 40 tokens. If you want to generate more tokens, you will need to edit the variable `openaitokensresponse`.
3) The extension by default uses the GPT-3 engine "Davinci" (`text-davinci-001`), which is the most powerful, but also the most expensive, engine. If you want to use a different engine, you will need to edit the `engine` argument in `openai.Completion.create()`.
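As a rough sketch of where both of those values live (assuming the legacy `openai` 0.x completion API that this extension targets), the relevant call looks like the one in `extension.py`; the API key and prompt below are placeholders:
```python
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # the extension reads this from apikeys.py
openaitokensresponse = 40               # raise this to allow longer snippets

response = openai.Completion.create(
    engine="text-davinci-001",          # swap in a cheaper engine here if desired
    prompt="Write a one-line docstring for a cube-spawning function",
    max_tokens=openaitokensresponse,
)
print(response["choices"][0]["text"])
```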
| 2,095 | Markdown | 43.595744 | 255 | 0.77327 |
Mariuxtheone/omni-openai-gpt3-snippet-extension/LICENSE.md | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 11,357 | Markdown | 55.227722 | 77 | 0.73206 |
Mariuxtheone/omni-openai-gpt3-snippet-extension/tools/scripts/link_app.py | import os
import argparse
import sys
import json
import packmanapi
import urllib3
def find_omniverse_apps():
http = urllib3.PoolManager()
try:
r = http.request("GET", "http://127.0.0.1:33480/components")
except Exception as e:
print(f"Failed retrieving apps from an Omniverse Launcher, maybe it is not installed?\nError: {e}")
sys.exit(1)
apps = {}
for x in json.loads(r.data.decode("utf-8")):
latest = x.get("installedVersions", {}).get("latest", "")
if latest:
for s in x.get("settings", []):
if s.get("version", "") == latest:
root = s.get("launch", {}).get("root", "")
apps[x["slug"]] = (x["name"], root)
break
return apps
def create_link(src, dst):
print(f"Creating a link '{src}' -> '{dst}'")
packmanapi.link(src, dst)
APP_PRIORITIES = ["code", "create", "view"]
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Create folder link to Kit App installed from Omniverse Launcher")
parser.add_argument(
"--path",
help="Path to Kit App installed from Omniverse Launcher, e.g.: 'C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4'",
required=False,
)
parser.add_argument(
"--app", help="Name of Kit App installed from Omniverse Launcher, e.g.: 'code', 'create'", required=False
)
args = parser.parse_args()
path = args.path
if not path:
print("Path is not specified, looking for Omniverse Apps...")
apps = find_omniverse_apps()
if len(apps) == 0:
print(
"Can't find any Omniverse Apps. Use Omniverse Launcher to install one. 'Code' is the recommended app for developers."
)
sys.exit(0)
print("\nFound following Omniverse Apps:")
for i, slug in enumerate(apps):
name, root = apps[slug]
print(f"{i}: {name} ({slug}) at: '{root}'")
if args.app:
selected_app = args.app.lower()
if selected_app not in apps:
choices = ", ".join(apps.keys())
print(f"Passed app: '{selected_app}' is not found. Specify one of the following found Apps: {choices}")
sys.exit(0)
else:
selected_app = next((x for x in APP_PRIORITIES if x in apps), None)
if not selected_app:
selected_app = next(iter(apps))
print(f"\nSelected app: {selected_app}")
_, path = apps[selected_app]
if not os.path.exists(path):
print(f"Provided path doesn't exist: {path}")
else:
SCRIPT_ROOT = os.path.dirname(os.path.realpath(__file__))
create_link(f"{SCRIPT_ROOT}/../../app", path)
print("Success!")
| 2,813 | Python | 32.5 | 133 | 0.562389 |
Mariuxtheone/omni-openai-gpt3-snippet-extension/tools/packman/config.packman.xml | <config remotes="cloudfront">
<remote2 name="cloudfront">
<transport actions="download" protocol="https" packageLocation="d4i3qtqj3r0z5.cloudfront.net/${name}@${version}" />
</remote2>
</config>
| 211 | XML | 34.333328 | 123 | 0.691943 |
Mariuxtheone/omni-openai-gpt3-snippet-extension/tools/packman/bootstrap/install_package.py | # Copyright 2019 NVIDIA CORPORATION
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import zipfile
import tempfile
import sys
import shutil
__author__ = "hfannar"
logging.basicConfig(level=logging.WARNING, format="%(message)s")
logger = logging.getLogger("install_package")
class TemporaryDirectory:
def __init__(self):
self.path = None
def __enter__(self):
self.path = tempfile.mkdtemp()
return self.path
def __exit__(self, type, value, traceback):
# Remove temporary data created
shutil.rmtree(self.path)
def install_package(package_src_path, package_dst_path):
with zipfile.ZipFile(
package_src_path, allowZip64=True
) as zip_file, TemporaryDirectory() as temp_dir:
zip_file.extractall(temp_dir)
# Recursively copy (temp_dir will be automatically cleaned up on exit)
try:
# Recursive copy is needed because both package name and version folder could be missing in
# target directory:
shutil.copytree(temp_dir, package_dst_path)
except OSError as exc:
logger.warning(
"Directory %s already present, packaged installation aborted" % package_dst_path
)
else:
logger.info("Package successfully installed to %s" % package_dst_path)
install_package(sys.argv[1], sys.argv[2])
| 1,888 | Python | 31.568965 | 103 | 0.68697 |
Mariuxtheone/omni-openai-gpt3-snippet-extension/exts/omni.openai.snippet/config/extension.toml | [package]
# Semantic Versioning is used: https://semver.org/
version = "1.0.0"
# The title and description fields are primarily for displaying extension info in UI
title = "OpenAI GPT-3 Snippet Extension"
description="A simple UI to generate snippet of text from OpenAI's GPT-3"
# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"
# URL of the extension source repository.
repository = ""
# One of categories for UI.
category = "Example"
# Keywords for the extension
keywords = ["kit", "example"]
# Use omni.ui to build simple UI
[dependencies]
"omni.kit.uiapp" = {}
# Main python module this extension provides; it will be publicly available as "import omni.openai.snippet".
[[python.module]]
name = "omni.openai.snippet"
| 777 | TOML | 25.827585 | 105 | 0.736165 |
Mariuxtheone/omni-openai-gpt3-snippet-extension/exts/omni.openai.snippet/omni/openai/snippet/extension.py | import omni.ext
import omni.ui as ui
# Create a file apikeys.py in the same folder as extension.py and add 2 variables:
# apikey = "your openai api key"
# pythonpath = "the path of the python folder where the openai python library is installed"
from .apikeys import apikey
from .apikeys import pythonpath
import pyperclip
import sys
sys.path.append(pythonpath)
import openai
# Maximum number of tokens requested in the OpenAI API response
openaitokensresponse = 40
# Any class derived from `omni.ext.IExt` in top level module (defined in `python.modules` of `extension.toml`) will be
# instantiated when extension gets enabled and `on_startup(ext_id)` will be called. Later when extension gets disabled
# on_shutdown() is called.
class MyExtension(omni.ext.IExt):
# ext_id is current extension id. It can be used with extension manager to query additional information, like where
# this extension is located on filesystem.
def on_startup(self, ext_id):
print("[omni.openai.snippet] MyExtension startup")
self._window = ui.Window("OpenAI GPT-3 Text Generator", width=300, height=300)
with self._window.frame:
with ui.VStack():
prompt_label = ui.Label("Your Prompt:")
prompt_field = ui.StringField(multiline=True)
result_label = ui.Label("OpenAI GPT-3 Result:")
label_style = {"Label": {"font_size": 16, "color": 0xFF00FF00,}}
result_actual_label = ui.Label("The OpenAI generated text will show up here", style=label_style, word_wrap=True)
def on_click():
# Load your API key from apikeys.py (never hard-code a real key in this file)
openai.api_key = apikey
my_prompt = prompt_field.model.get_value_as_string().replace("\n", " ")
response = openai.Completion.create(engine="text-davinci-001", prompt=my_prompt, max_tokens=openaitokensresponse)
#parse response as json and extract text
text = response["choices"][0]["text"]
pyperclip.copy(text)
result_actual_label.text = ""
result_actual_label.text = text
ui.Button("Generate and Copy to Clipboard", clicked_fn=lambda: on_click())
def on_shutdown(self):
print("[omni.openai.snippet] MyExtension shutdown")
| 2,609 | Python | 40.428571 | 133 | 0.617478 |
Mariuxtheone/omni-openai-gpt3-snippet-extension/exts/omni.openai.snippet/omni/openai/snippet/__init__.py | from .extension import *
| 26 | Python | 7.999997 | 24 | 0.730769 |
Mariuxtheone/omni-openai-gpt3-snippet-extension/exts/omni.openai.snippet/docs/README.md | # NVIDIA Omniverse OpenAI GPT-3 Snippet Extension
This is an Extension that adds a simple snippet UI to NVIDIA Omniverse which allows you to generate GPT-3 based snippets.
## 1) Dependencies
In order to use this extension, you will need to install the following dependencies:
- openai python library: `pip install openai`
- pyperclip: `pip install pyperclip`
## 2) Installation
1) Install the Extension in your Omniverse app.
2) We need to create a file that holds the OpenAI API key and the path to the main Python modules directory on our device, since Omniverse doesn't use the global PYTHONHOME and PYTHONPATH.
3) To do this, in the omni\openai\snippet\ folder, create a new file called `apikeys.py`
4) In the `apikeys.py` file, add the following lines:
```
apikey = "YOUR_OPENAI_API_KEY_GOES_HERE"
pythonpath = "The file path where you have installed your main python modules"
```
so `apikeys.py` should look like this:
```
apikey = "sk-123Mb38gELphag234GDyYT67FJwa3334FPRZQZ2Aq5f1o" (this is a fake API key, good try!)
pythonpath = "C:/Users/yourusername/AppData/Local/Packages/PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0/LocalCache/local-packages/Python310/site-packages"
```
## 3) Enable and Usage
To use the extension, enable it from the Extension Window and then click the "Generate and Copy to Clipboard" button. The generated snippet will be copied to your clipboard and you can paste it anywhere you want.
## 4) IMPORTANT DISCLAIMER
1) OpenAI is a third party API and you will need to create an account with OpenAI to use it. Consider that there's a cost associated with using the API.
2) The extension by default generates snippets of up to 40 tokens. If you want to generate more tokens, you will need to edit the variable `openaitokensresponse`.
3) The extension by default uses the GPT-3 engine "Davinci" (`text-davinci-001`), which is the most powerful, but also the most expensive, engine. If you want to use a different engine, you will need to edit the `engine` argument in `openai.Completion.create()`. | 2,029 | Markdown | 49.749999 | 255 | 0.774766 |
echo3Dco/NVIDIAOmniverse-echo3D-extension/README.md | # Echo3D Omniverse Extension
An extension that allows Nvidia Omniverse users to easily import their echo3D assets into their projects, as well as search for new assets in the echo3D public library.
Installation steps can be found at https://docs.echo3d.com/nvidia-omniverse/installation
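For reference, the extension's library search boils down to a POST against the echo3D `/search` endpoint, as in this minimal sketch (the endpoint and form fields mirror `extension.py`; the key values and keyword are placeholders):
```python
import json
import urllib.parse
import urllib.request

data = {
    "key": "YOUR_ECHO3D_API_KEY",          # placeholder
    "secKey": "YOUR_ECHO3D_SECURITY_KEY",  # placeholder
    "keywords": "chair",
    "include2Dcontent": "false",
}
encoded = urllib.parse.urlencode(data).encode("utf-8")
request = urllib.request.Request("https://api.echo3d.com/search", data=encoded)
with urllib.request.urlopen(request) as response:
    results = json.loads(response.read().decode("utf-8"))
print(f"Found {len(results)} assets")
```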
| 289 | Markdown | 47.333325 | 168 | 0.820069 |
echo3Dco/NVIDIAOmniverse-echo3D-extension/tools/scripts/link_app.py | import argparse
import json
import os
import sys
import packmanapi
import urllib3
def find_omniverse_apps():
http = urllib3.PoolManager()
try:
r = http.request("GET", "http://127.0.0.1:33480/components")
except Exception as e:
print(f"Failed retrieving apps from an Omniverse Launcher, maybe it is not installed?\nError: {e}")
sys.exit(1)
apps = {}
for x in json.loads(r.data.decode("utf-8")):
latest = x.get("installedVersions", {}).get("latest", "")
if latest:
for s in x.get("settings", []):
if s.get("version", "") == latest:
root = s.get("launch", {}).get("root", "")
apps[x["slug"]] = (x["name"], root)
break
return apps
def create_link(src, dst):
print(f"Creating a link '{src}' -> '{dst}'")
packmanapi.link(src, dst)
APP_PRIORITIES = ["code", "create", "view"]
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Create folder link to Kit App installed from Omniverse Launcher")
parser.add_argument(
"--path",
help="Path to Kit App installed from Omniverse Launcher, e.g.: 'C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4'",
required=False,
)
parser.add_argument(
"--app", help="Name of Kit App installed from Omniverse Launcher, e.g.: 'code', 'create'", required=False
)
args = parser.parse_args()
path = args.path
if not path:
print("Path is not specified, looking for Omniverse Apps...")
apps = find_omniverse_apps()
if len(apps) == 0:
print(
"Can't find any Omniverse Apps. Use Omniverse Launcher to install one. 'Code' is the recommended app for developers."
)
sys.exit(0)
print("\nFound following Omniverse Apps:")
for i, slug in enumerate(apps):
name, root = apps[slug]
print(f"{i}: {name} ({slug}) at: '{root}'")
if args.app:
selected_app = args.app.lower()
if selected_app not in apps:
choices = ", ".join(apps.keys())
print(f"Passed app: '{selected_app}' is not found. Specify one of the following found Apps: {choices}")
sys.exit(0)
else:
selected_app = next((x for x in APP_PRIORITIES if x in apps), None)
if not selected_app:
selected_app = next(iter(apps))
print(f"\nSelected app: {selected_app}")
_, path = apps[selected_app]
if not os.path.exists(path):
print(f"Provided path doesn't exist: {path}")
else:
SCRIPT_ROOT = os.path.dirname(os.path.realpath(__file__))
create_link(f"{SCRIPT_ROOT}/../../app", path)
print("Success!")
| 2,814 | Python | 32.117647 | 133 | 0.562189 |
echo3Dco/NVIDIAOmniverse-echo3D-extension/tools/packman/config.packman.xml | <config remotes="cloudfront">
<remote2 name="cloudfront">
<transport actions="download" protocol="https" packageLocation="d4i3qtqj3r0z5.cloudfront.net/${name}@${version}" />
</remote2>
</config>
| 211 | XML | 34.333328 | 123 | 0.691943 |
echo3Dco/NVIDIAOmniverse-echo3D-extension/tools/packman/bootstrap/install_package.py | # Copyright 2019 NVIDIA CORPORATION
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import shutil
import sys
import tempfile
import zipfile
__author__ = "hfannar"
logging.basicConfig(level=logging.WARNING, format="%(message)s")
logger = logging.getLogger("install_package")
class TemporaryDirectory:
def __init__(self):
self.path = None
def __enter__(self):
self.path = tempfile.mkdtemp()
return self.path
def __exit__(self, type, value, traceback):
# Remove temporary data created
shutil.rmtree(self.path)
def install_package(package_src_path, package_dst_path):
with zipfile.ZipFile(package_src_path, allowZip64=True) as zip_file, TemporaryDirectory() as temp_dir:
zip_file.extractall(temp_dir)
# Recursively copy (temp_dir will be automatically cleaned up on exit)
try:
# Recursive copy is needed because both package name and version folder could be missing in
# target directory:
shutil.copytree(temp_dir, package_dst_path)
except OSError as exc:
logger.warning("Directory %s already present, packaged installation aborted" % package_dst_path)
else:
logger.info("Package successfully installed to %s" % package_dst_path)
install_package(sys.argv[1], sys.argv[2])
| 1,844 | Python | 33.166666 | 108 | 0.703362 |
echo3Dco/NVIDIAOmniverse-echo3D-extension/exts/echo3d.search/echo3d/search/extension.py | import json
import os
import asyncio
import ssl
import certifi
import aiohttp
import omni.ext
import omni.ui as ui
import omni.kit.commands
import urllib
from omni.ui import color as cl
# GLOBAL VARIABLES #
IMAGES_PER_PAGE = 3
current_search_page = 0
current_project_page = 0
searchJsonData = []
projectJsonData = []
# UI Elements for the thumbnails
search_image_widgets = [ui.Image() for _ in range(IMAGES_PER_PAGE)]
project_image_widgets = [ui.Button() for _ in range(IMAGES_PER_PAGE)]
# Hardcoded echo3D images
script_dir = os.path.dirname(os.path.abspath(__file__))
logo_image_filename = 'echo3D_Logo.png'
logo_image_path = os.path.join(script_dir, logo_image_filename)
cloud_image_filename = 'cloud_background_transparent.png'
cloud_image_path = os.path.join(script_dir, cloud_image_filename)
# State variables to hold the style associated with each thumbnail
project_button_styles = [
{
"border_radius": 5,
"Button.Image": {
"color": cl("#FFFFFF30"),
"image_url": cloud_image_path,
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
}
} for _ in range(IMAGES_PER_PAGE)]
search_button_styles = [
{
"border_radius": 5,
"Button.Image": {
"color": cl("#FFFFFF30"),
"image_url": cloud_image_path,
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
}
} for _ in range(IMAGES_PER_PAGE)]
arrowStyle = {
":disabled": {
"background_color": cl("#1f212460")
},
"Button.Label:disabled": {
"color": cl("#FFFFFF40")
}
}
###########################################################################################################
#
# An extension for Nvidia Omniverse that allows users to connect to their echo3D projects in order to
# stream their existing assets into the Omniverse Viewport, as well as search for new assets in the
# echo3D public asset library to add to their projects.
#
###########################################################################################################
class Echo3dSearchExtension(omni.ext.IExt):
def on_startup(self, ext_id):
print("[echo3D] echo3D startup")
###############################################
# Define Functions for Search Feature #
###############################################
# Load in new image thumbnails when clicks the previous/next buttons
def update_search_images(searchJsonData):
start_index = current_search_page * IMAGES_PER_PAGE
end_index = start_index + IMAGES_PER_PAGE
print(start_index)
print(end_index)
for i in range(start_index, end_index):
if i < len(searchJsonData):
search_button_styles[i % IMAGES_PER_PAGE] = {"Button.Image": {
"color": cl("#FFFFFF"),
"image_url": searchJsonData[i]["thumbnail"],
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
},
"border_radius": 5
}
search_image_widgets[i % IMAGES_PER_PAGE].style = search_button_styles[i % IMAGES_PER_PAGE]
search_image_widgets[i % IMAGES_PER_PAGE].enabled = True
else:
global cloud_image_path
search_button_styles[i % IMAGES_PER_PAGE] = {
"Button.Image": {
"color": cl("#FFFFFF30"),
"image_url": cloud_image_path,
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
},
"border_radius": 5
}
search_image_widgets[i % IMAGES_PER_PAGE].style = search_button_styles[i % IMAGES_PER_PAGE]
search_image_widgets[i % IMAGES_PER_PAGE].enabled = False
# Update state variables to reflect change of page, disable arrow buttons, update the thumbnails shown
def on_click_left_arrow_search():
global current_search_page
current_search_page -= 1
if (current_search_page == 0):
searchLeftArrow.enabled = False
searchRightArrow.enabled = True
global searchJsonData
update_search_images(searchJsonData)
def on_click_right_arrow_search():
global current_search_page
current_search_page += 1
global searchJsonData
if ((current_search_page + 1) * IMAGES_PER_PAGE >= len(searchJsonData)):
searchRightArrow.enabled = False
searchLeftArrow.enabled = True
update_search_images(searchJsonData)
async def on_click_search_image(index):
global searchJsonData
global current_search_page
selectedEntry = searchJsonData[current_search_page * IMAGES_PER_PAGE + index]
url = selectedEntry["glb_location_url"]
filename = selectedEntry["name"] + '.glb'
folder_path = os.path.join(os.path.dirname(__file__), "temp_files")
file_path = os.path.join(folder_path, filename)
if not os.path.exists(folder_path):
os.makedirs(folder_path)
async with aiohttp.ClientSession() as session:
async with session.get(url) as response:
response.raise_for_status()
content = await response.read()
with open(file_path, "wb") as file:
file.write(content)
omni.kit.commands.execute('CreateReferenceCommand',
path_to='/World/' + os.path.splitext(filename)[0].replace(" ", "_"),
asset_path=file_path,
usd_context=omni.usd.get_context())
api_url = "https://api.echo3d.com/upload"
data = {
"key": apiKeyInput.model.get_value_as_string(),
"secKey": secKeyInput.model.get_value_as_string(),
"data": "filePath:null",
"type": "upload",
"target_type": "2",
"hologram_type": "2",
"file_size": str(os.path.getsize(file_path)),
"file_model": open(file_path, "rb")
}
async with session.post(url=api_url, data=data) as uploadRequest:
uploadRequest.raise_for_status()
# Call the echo3D /search endpoint to get models and display the resulting thumbnails
def on_click_search():
global current_search_page
current_search_page = 0
searchLeftArrow.enabled = False
searchRightArrow.enabled = False
searchTerm = searchInput.model.get_value_as_string()
api_url = "https://api.echo3d.com/search"
data = {
"key": apiKeyInput.model.get_value_as_string(),
"secKey": secKeyInput.model.get_value_as_string(),
"keywords": searchTerm,
"include2Dcontent": "false"
}
encoded_data = urllib.parse.urlencode(data).encode('utf-8')
request = urllib.request.Request(api_url, data=encoded_data)
response = urllib.request.urlopen(request, context=ssl.create_default_context(cafile=certifi.where()))
librarySearchRequest = response.read().decode('utf-8')
global searchJsonData
searchJsonData = json.loads(librarySearchRequest)
searchJsonData = [data for data in searchJsonData if "glb_location_url" in data
and data["source"] == 'poly']
global search_image_widgets
global search_button_styles
for i in range(IMAGES_PER_PAGE):
if i < len(searchJsonData):
search_button_styles[i] = {
"Button.Image": {
"color": cl("#FFFFFF"),
"image_url": searchJsonData[i]["thumbnail"],
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
},
"border_radius": 5
}
search_image_widgets[i].style = search_button_styles[i]
search_image_widgets[i].enabled = True
searchRightArrow.enabled = len(searchJsonData) > IMAGES_PER_PAGE
else:
global cloud_image_path
search_button_styles[i] = {
"Button.Image": {
"color": cl("#FFFFFF30"),
"image_url": cloud_image_path,
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
},
"border_radius": 5
}
search_image_widgets[i].style = search_button_styles[i]
search_image_widgets[i].enabled = False
# Clear all the thumbnails and search term
def on_reset_search():
global current_search_page
current_search_page = 0
searchInput.model.set_value("")
global search_image_widgets
for i in range(IMAGES_PER_PAGE):
global cloud_image_path
search_button_styles[i] = {
"Button.Image": {
"color": cl("#FFFFFF30"),
"image_url": cloud_image_path,
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
},
"border_radius": 5
}
search_image_widgets[i].style = search_button_styles[i]
search_image_widgets[i].enabled = False
#################################################
# Define Functions for Project Querying #
#################################################
        # Load in new image thumbnails when the user clicks the previous/next buttons
def update_project_images(projectJsonData):
start_index = current_project_page * IMAGES_PER_PAGE
end_index = start_index + IMAGES_PER_PAGE
for i in range(start_index, end_index):
if i < len(projectJsonData):
baseUrl = 'https://storage.echo3d.co/' + apiKeyInput.model.get_value_as_string() + "/"
imageFilename = projectJsonData[i]["additionalData"]["screenshotStorageID"]
project_button_styles[i % IMAGES_PER_PAGE] = {"Button.Image": {
"color": cl("#FFFFFF"),
"image_url": baseUrl + imageFilename,
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
},
"border_radius": 5
}
project_image_widgets[i % IMAGES_PER_PAGE].style = project_button_styles[i % IMAGES_PER_PAGE]
project_image_widgets[i % IMAGES_PER_PAGE].enabled = True
else:
global cloud_image_path
project_button_styles[i % IMAGES_PER_PAGE] = {
"Button.Image": {
"color": cl("#FFFFFF30"),
"image_url": cloud_image_path,
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
},
"border_radius": 5
}
project_image_widgets[i % IMAGES_PER_PAGE].style = project_button_styles[i % IMAGES_PER_PAGE]
project_image_widgets[i % IMAGES_PER_PAGE].enabled = False
# Update state variables to reflect change of page, disable arrow buttons, update the thumbnails shown
def on_click_left_arrow_project():
global current_project_page
current_project_page -= 1
if (current_project_page == 0):
projectLeftArrow.enabled = False
projectRightArrow.enabled = True
global projectJsonData
update_project_images(projectJsonData)
def on_click_right_arrow_project():
global current_project_page
current_project_page += 1
global projectJsonData
if ((current_project_page + 1) * IMAGES_PER_PAGE >= len(projectJsonData)):
projectRightArrow.enabled = False
projectLeftArrow.enabled = True
update_project_images(projectJsonData)
# When a user clicks a thumbnail, download the corresponding .usdz file if it exists and
# instantiate it in the scene. Otherwise use the .glb file
def on_click_project_image(index):
global projectJsonData
global current_project_page
selectedEntry = projectJsonData[current_project_page * IMAGES_PER_PAGE + index]
usdzStorageID = selectedEntry["additionalData"]["usdzHologramStorageID"]
usdzFilename = selectedEntry["additionalData"]["usdzHologramStorageFilename"]
if (usdzFilename):
open_project_asset_from_filename(usdzFilename, usdzStorageID)
else:
glbStorageID = selectedEntry["hologram"]["storageID"]
glbFilename = selectedEntry["hologram"]["filename"]
open_project_asset_from_filename(glbFilename, glbStorageID)
# Directly instantiate previously cached files from the session, or download them from the echo3D API
def open_project_asset_from_filename(filename, storageId):
folder_path = os.path.join(os.path.dirname(__file__), "temp_files")
if not os.path.exists(folder_path):
os.makedirs(folder_path)
file_path = os.path.join(folder_path, filename)
cachedUpload = os.path.exists(file_path)
if (not cachedUpload):
apiKey = apiKeyInput.model.get_value_as_string()
secKey = secKeyInput.model.get_value_as_string()
storageId = urllib.parse.quote(storageId)
url = f'https://api.echo3d.com/query?key={apiKey}&secKey={secKey}&file={storageId}'
response = urllib.request.urlopen(url, context=ssl.create_default_context(cafile=certifi.where()))
response_data = response.read()
with open(file_path, "wb") as file:
file.write(response_data)
omni.kit.commands.execute('CreateReferenceCommand',
path_to='/World/' + os.path.splitext(filename)[0],
asset_path=file_path,
usd_context=omni.usd.get_context())
# Call the echo3D /query endpoint to get models and display the resulting thumbnails
def on_click_load_project():
global current_project_page
current_project_page = 0
projectLeftArrow.enabled = False
projectRightArrow.enabled = False
api_url = "https://api.echo3d.com/query"
data = {
"key": apiKeyInput.model.get_value_as_string(),
"secKey": secKeyInput.model.get_value_as_string(),
}
encoded_data = urllib.parse.urlencode(data).encode('utf-8')
request = urllib.request.Request(api_url, data=encoded_data)
try:
with urllib.request.urlopen(request,
context=ssl.create_default_context(cafile=certifi.where())) as response:
response_data = response.read().decode('utf-8')
response_json = json.loads(response_data)
values = list(response_json["db"].values())
entriesWithScreenshot = [data for data in values if "additionalData" in data
and "screenshotStorageID" in data["additionalData"]]
global projectJsonData
projectJsonData = entriesWithScreenshot
global project_image_widgets
global project_button_styles
sampleModels = ["6af76ce2-2f57-4ed0-82d8-42652f0eddbe.png",
"d2398ecf-566b-4fde-b8cb-46b2fd6add1d.png",
"d686a655-e800-430d-bfd2-e38cdfb0c9e9.png"]
for i in range(IMAGES_PER_PAGE):
if i < len(projectJsonData):
imageFilename = projectJsonData[i]["additionalData"]["screenshotStorageID"]
if (imageFilename in sampleModels):
baseUrl = 'https://storage.echo3d.co/0_model_samples/'
else:
baseUrl = 'https://storage.echo3d.co/' + apiKeyInput.model.get_value_as_string() + "/"
project_button_styles[i] = {
"Button.Image": {
"color": cl("#FFFFFF"),
"image_url": baseUrl + imageFilename,
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
},
"border_radius": 5
}
project_image_widgets[i].style = project_button_styles[i]
project_image_widgets[i].enabled = True
projectRightArrow.enabled = len(projectJsonData) > IMAGES_PER_PAGE
else:
global cloud_image_path
project_button_styles[i] = {
"Button.Image": {
"color": cl("#FFFFFF30"),
"image_url": cloud_image_path,
"alignment": ui.Alignment.CENTER,
"fill_policy": ui.FillPolicy.PRESERVE_ASPECT_CROP
},
"border_radius": 5
}
project_image_widgets[i].style = project_button_styles[i]
project_image_widgets[i].enabled = False
searchButton.enabled = True
clearButton.enabled = True
searchInput.enabled = True
disabledStateCover.style = {"background_color": cl("#32343400")}
loadError.visible = False
except Exception as e:
loadError.visible = True
print(str(e) + ". Ensure that your API Key and Security Key are entered correctly.")
# Display the UI
self._window = ui.Window("Echo3D", width=400, height=478)
with self._window.frame:
with ui.VStack():
script_dir = os.path.dirname(os.path.abspath(__file__))
logo_image_filename = 'echo3D_Logo.png'
logo_image_path = os.path.join(script_dir, logo_image_filename)
ui.Spacer(height=5)
with ui.Frame(height=25):
ui.Image(logo_image_path)
ui.Spacer(height=8)
with ui.HStack(height=20):
ui.Spacer(width=5)
with ui.Frame(width=85):
ui.Label("API Key:")
apiKeyInput = ui.StringField()
ui.Spacer(width=5)
ui.Spacer(height=3)
with ui.HStack(height=20):
ui.Spacer(width=5)
with ui.Frame(width=85):
ui.Label("Security Key:")
secKeyInput = ui.StringField()
with ui.Frame(width=5):
ui.Label("")
ui.Spacer(height=3)
with ui.Frame(height=20):
ui.Button("Load Project", clicked_fn=on_click_load_project)
loadError = ui.Label("Error: Cannot Load Project. Correct your keys and try again.", visible=False,
height=20, style={"color": cl("#FF0000")}, alignment=ui.Alignment.CENTER)
ui.Spacer(height=3)
# Overlay the disabled elements to indicate their state
with ui.ZStack():
with ui.VStack():
with ui.HStack(height=5):
ui.Spacer(width=5)
ui.Line(name='default', style={"color": cl.gray})
ui.Spacer(width=5)
ui.Spacer(height=3)
with ui.HStack(height=20):
ui.Spacer(width=5)
ui.Label("Assets in Project:")
global project_image_widgets
with ui.HStack(height=80):
with ui.Frame(height=80, width=10):
projectLeftArrow = ui.Button("<", clicked_fn=on_click_left_arrow_project, enabled=False,
style=arrowStyle)
for i in range(IMAGES_PER_PAGE):
with ui.Frame(height=80):
project_image_widgets[i] = ui.Button("", clicked_fn=lambda index=i:
on_click_project_image(index),
style=project_button_styles[i], enabled=False)
with ui.Frame(height=80, width=10):
projectRightArrow = ui.Button(">", clicked_fn=on_click_right_arrow_project,
enabled=False, style=arrowStyle)
ui.Spacer(height=10)
with ui.HStack(height=5):
ui.Spacer(width=5)
ui.Line(name='default', style={"color": cl.gray})
ui.Spacer(width=5)
ui.Spacer(height=5)
with ui.HStack(height=20):
ui.Spacer(width=5)
ui.Label("Public Search Results:")
global search_image_widgets
with ui.HStack(height=80):
with ui.Frame(height=80, width=10):
searchLeftArrow = ui.Button("<", clicked_fn=on_click_left_arrow_search, enabled=False,
style=arrowStyle)
for i in range(IMAGES_PER_PAGE):
with ui.Frame(height=80):
search_image_widgets[i] = ui.Button("",
clicked_fn=lambda idx=i:
asyncio.ensure_future(
on_click_search_image(idx)),
style=search_button_styles[i], enabled=False)
with ui.Frame(height=80, width=10):
searchRightArrow = ui.Button(">", clicked_fn=on_click_right_arrow_search, enabled=False,
style=arrowStyle)
ui.Spacer(height=10)
with ui.HStack(height=20):
ui.Spacer(width=5)
with ui.Frame(width=85):
ui.Label("Keywords:")
searchInput = ui.StringField(enabled=False)
with ui.Frame(width=5):
ui.Label("")
ui.Spacer(height=5)
with ui.VStack():
with ui.Frame(height=20):
searchButton = ui.Button("Search", clicked_fn=on_click_search, enabled=False)
with ui.Frame(height=20):
clearButton = ui.Button("Clear", clicked_fn=on_reset_search, enabled=False)
disabledStateCover = ui.Rectangle(style={"background_color": cl("#323434A0")}, height=500)
def on_shutdown(self):
# Clear all temporary download files
folder_path = os.path.join(os.path.dirname(__file__), "temp_files")
if os.path.exists(folder_path):
file_list = os.listdir(folder_path)
for file_name in file_list:
file_path = os.path.join(folder_path, file_name)
if os.path.isfile(file_path):
os.remove(file_path)
print("[echo3D] echo3D shutdown")
| 26,601 | Python | 49.670476 | 120 | 0.477012 |
echo3Dco/NVIDIAOmniverse-echo3D-extension/exts/echo3d.search/echo3d/search/__init__.py | from .extension import *
| 25 | Python | 11.999994 | 24 | 0.76 |
echo3Dco/NVIDIAOmniverse-echo3D-extension/exts/echo3d.search/echo3d/search/tests/__init__.py | from .test_hello_world import * | 31 | Python | 30.999969 | 31 | 0.774194 |
echo3Dco/NVIDIAOmniverse-echo3D-extension/exts/echo3d.search/echo3d/search/tests/test_hello_world.py | # NOTE:
# omni.kit.test - std python's unittest module with additional wrapping to add support for async/await tests
# For most things refer to unittest docs: https://docs.python.org/3/library/unittest.html
import omni.kit.test
# Extension for writing UI tests (simulate UI interaction)
import omni.kit.ui_test as ui_test
# Import extension python module we are testing with absolute import path, as if we are external user (other extension)
import echo3d.search
# Having a test class derived from omni.kit.test.AsyncTestCase declared at the root of the module will make it auto-discoverable by omni.kit.test
class Test(omni.kit.test.AsyncTestCase):
# Before running each test
async def setUp(self):
pass
# After running each test
async def tearDown(self):
pass
# Actual test, notice it is "async" function, so "await" can be used if needed
async def test_hello_public_function(self):
result = echo3d.search.some_public_function(4)
self.assertEqual(result, 256)
async def test_window_button(self):
# Find a label in our window
label = ui_test.find("My Window//Frame/**/Label[*]")
# Find buttons in our window
add_button = ui_test.find("My Window//Frame/**/Button[*].text=='Add'")
reset_button = ui_test.find("My Window//Frame/**/Button[*].text=='Reset'")
# Click reset button
await reset_button.click()
self.assertEqual(label.widget.text, "empty")
await add_button.click()
self.assertEqual(label.widget.text, "count: 1")
await add_button.click()
self.assertEqual(label.widget.text, "count: 2")
| 1,662 | Python | 34.382978 | 142 | 0.681107 |
echo3Dco/NVIDIAOmniverse-echo3D-extension/exts/echo3d.search/config/extension.toml | [package]
# Semantic Versioning is used: https://semver.org/
version = "1.0.0"
# Lists people or organizations that are considered the "authors" of the package.
authors = ["echo3D"]
# The title and description fields are primarily for displaying extension info in UI
title = "echo3d Connector"
description="Manage and search 3D assets in your Omniverse experiences with the echo3D Connector."
# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"
# URL of the extension source repository.
repository = ""
# One of categories for UI.
category = "Services"
# Keywords for the extension
keywords = ["kit", "services", "search", "library", "startup"]
# Location of change log file in target (final) folder of extension, relative to the root.
# More info on writing changelog: https://keepachangelog.com/en/1.0.0/
changelog="docs/CHANGELOG.md"
# Preview image and icon. Folder named "data" automatically goes in git lfs (see .gitattributes file).
# Preview image is shown in "Overview" of Extensions window. Screenshot of an extension might be a good preview image.
preview_image = "data/preview.png"
# Icon is shown in Extensions window, it is recommended to be square, of size 256x256.
icon = "data/icon.png"
# Use omni.ui to build simple UI
[dependencies]
"omni.kit.uiapp" = {}
# Main python module this extension provides, it will be publicly available as "import echo3d.search".
[[python.module]]
name = "echo3d.search"
[[test]]
# Extra dependencies only to be used during test run
dependencies = [
"omni.kit.ui_test" # UI testing extension
]
| 1,606 | TOML | 32.479166 | 118 | 0.743462 |
echo3Dco/NVIDIAOmniverse-echo3D-extension/exts/echo3d.search/docs/CHANGELOG.md | # Changelog
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [1.0.0] - 2021-04-26
- Initial version of extension UI template with a window
| 178 | Markdown | 18.888887 | 80 | 0.702247 |
echo3Dco/NVIDIAOmniverse-echo3D-extension/exts/echo3d.search/docs/README.md | # echo3D Connector [echo3d.search]
Manage and search 3D assets in your Omniverse experiences with the echo3D Connector.
echo3D is a cloud platform for 3D asset management that provides tools and server-side infrastructure to help developers & companies manage and deploy 3D/AR/VR assets.
echo3D offers a 3D-first content management system (CMS) and delivery network (CDN) that enables developers to build a 3D/AR/VR app backend in minutes and allows content creators to easily manage and publish 3D content to their Omniverse experience without involving development teams.
### Connecting an echo3D Project
To begin, copy your echo3D API Key and Secret Key (if enabled) into the corresponding boxes in the Omniverse Extension.
The API Key can be found in the header of the echo3D console, and the Secret Key can be found on the Security Tab of the Settings Page of the console.
### Loading Assets
Simply click any of your project assets to add them to the Omniverse Viewer.
Additionally, you can search for publicly available assets by entering a keyword into the search bar. Note that clicking on them and importing them into the Omniverse Viewer will also automatically upload the asset to your echo3D project.
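Under the hood, the keyword search calls the echo3D `/search` REST endpoint with your project keys (see `extension.py`). As a rough standalone sketch of the equivalent request outside Omniverse (endpoint and field names are taken from this extension's source, not from official echo3D client documentation):

```python
import json, ssl, urllib.parse, urllib.request
import certifi

data = urllib.parse.urlencode({
    "key": "<your echo3D API key>",      # shown in the header of the echo3D console
    "secKey": "<your Security Key>",     # only required if the Security Key is enabled
    "keywords": "chair",
    "include2Dcontent": "false",
}).encode("utf-8")
request = urllib.request.Request("https://api.echo3d.com/search", data=data)
ctx = ssl.create_default_context(cafile=certifi.where())
with urllib.request.urlopen(request, context=ctx) as response:
    results = json.loads(response.read().decode("utf-8"))
print(f"{len(results)} public results returned")
```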
### Any other questions?
- Reach out to [email protected]
- or join at https://go.echo3d.co/join
### License
This asset is governed by the license agreement at echo3D.com/terms.
### Preview | 1,411 | Markdown | 57.833331 | 285 | 0.796598 |
echo3Dco/NVIDIAOmniverse-echo3D-extension/exts/echo3d.search/docs/index.rst | echo3d.search
#############################
Example of Python only extension
.. toctree::
:maxdepth: 1
README
CHANGELOG
.. automodule:: echo3d.search
:platform: Windows-x86_64, Linux-x86_64
:members:
:undoc-members:
:show-inheritance:
:imported-members:
:exclude-members: contextmanager
| 327 | reStructuredText | 14.619047 | 43 | 0.611621 |
ngzhili/SynTable/visualize_annotations.py | """ Visualises SynTable generated annotations: """
# Run python ./visualize_annotations.py --dataset './sample_data' --ann_json './sample_data/annotation_final.json'
import json
import cv2
import numpy as np
import os, shutil
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from matplotlib import pyplot as plt
from PIL import Image
import networkx as nx
import argparse
import pycocotools.mask as mask_util
from matplotlib.colors import ListedColormap
import seaborn as sns
import matplotlib.patches as mpatches
# visualize annotations
def apply_mask(image, mask):
# Convert to numpy arrays
image = np.array(image)
mask = np.array(mask)
# Convert grayscale image to RGB
mask = np.stack((mask,)*3, axis=-1)
# Multiply arrays
rgb_result= image*mask
# First create the image with alpha channel
rgba = cv2.cvtColor(rgb_result, cv2.COLOR_RGB2RGBA)
# Then assign the mask to the last channel of the image
# rgba[:, :, 3] = alpha_data
# Make image transparent white anywhere it is transparent
rgba[rgba[...,-1]==0] = [255,255,255,0]
return rgba
def compute_occluded_masks(mask1, mask2):
"""Computes occlusions between two sets of masks.
masks1, masks2: [Height, Width, instances]
"""
# If either set of masks is empty return empty result
#if masks1.shape[-1] == 0 or masks2.shape[-1] == 0:
#return np.zeros((masks1.shape[-1], masks2.shape[-1]))
# flatten masks and compute their areas
#masks1 = np.reshape(masks1 > .5, (-1, masks1.shape[-1])).astype(np.float32)
#masks2 = np.reshape(masks2 > .5, (-1, masks2.shape[-1])).astype(np.float32)
#area1 = np.sum(masks1, axis=0)
#area2 = np.sum(masks2, axis=0)
# intersections and union
#intersections_mask = np.dot(masks1.T, masks2)
mask1_area = np.count_nonzero( mask1 )
mask2_area = np.count_nonzero( mask2 )
intersection_mask = np.logical_and( mask1, mask2 )
intersection = np.count_nonzero( np.logical_and( mask1, mask2 ) )
iou = intersection/(mask1_area+mask2_area-intersection)
return iou, intersection_mask.astype(float)
def convert_png(image):
image = Image.fromarray(np.uint8(image))
image = image.convert('RGBA')
# Transparency
newImage = []
for item in image.getdata():
if item[:3] == (0, 0, 0):
newImage.append((0, 0, 0, 0))
else:
newImage.append(item)
image.putdata(newImage)
return image
def rle2mask(mask_rle, shape=(480,640)):
'''
mask_rle: run-length as string formated (start length)
shape: (width,height) of array to return
Returns numpy array, 1 - mask, 0 - background
'''
s = mask_rle.split()
starts, lengths = [np.asarray(x, dtype=int) for x in (s[0:][::2], s[1:][::2])]
starts -= 1
ends = starts + lengths
img = np.zeros(shape[0]*shape[1], dtype=np.uint8)
for lo, hi in zip(starts, ends):
img[lo:hi] = 1
return img.reshape(shape).T
def segmToRLE(segm, img_size):
h, w = img_size
if type(segm) == list:
# polygon -- a single object might consist of multiple parts
# we merge all parts into one mask rle code
        rles = mask_util.frPyObjects(segm, h, w)
        rle = mask_util.merge(rles)
elif type(segm["counts"]) == list:
# uncompressed RLE
        rle = mask_util.frPyObjects(segm, h, w)
else:
# rle
rle = segm
return rle
# Convert 1-channel groundtruth data to visualization image data
def normalize_greyscale_image(image_data):
image_data = np.reciprocal(image_data)
image_data[image_data == 0.0] = 1e-5
image_data = np.clip(image_data, 0, 255)
image_data -= np.min(image_data)
if np.max(image_data) > 0:
image_data /= np.max(image_data)
image_data *= 255
image_data = image_data.astype(np.uint8)
return image_data
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Visualise Annotations')
parser.add_argument('--dataset', type=str,
help='dataset to visualise')
parser.add_argument('--ann_json', type=str,
help='dataset annotation to visualise')
args = parser.parse_args()
data_dir = args.dataset
ann_json = args.ann_json
# Opening JSON file
f = open(ann_json)
# returns JSON object as a dictionary
data = json.load(f)
f.close()
referenceDict = {}
for i, ann in enumerate(data['annotations']):
image_id = ann["image_id"]
ann_id = ann["id"]
# print(ann_id)
if image_id not in referenceDict:
referenceDict.update({image_id:{"rgb":None,"depth":None, "amodal":[], "visible":[],
"occluded":[],"occluded_rate":[],"category_id":[],"object_name":[]}})
# print(referenceDict)
referenceDict[image_id].update({"rgb":data["images"][i]["file_name"]})
referenceDict[image_id].update({"depth":data["images"][i]["depth_file_name"]})
# referenceDict[image_id].update({"occlusion_order":data["images"][i]["occlusion_order_file_name"]})
referenceDict[image_id]["amodal"].append(ann["segmentation"])
referenceDict[image_id]["visible"].append(ann["visible_mask"])
referenceDict[image_id]["occluded"].append(ann["occluded_mask"])
referenceDict[image_id]["occluded_rate"].append(ann["occluded_rate"])
referenceDict[image_id]["category_id"].append(ann["category_id"])
# referenceDict[image_id]["object_name"].append(ann["object_name"])
else:
# if not (referenceDict[image_id]["rgb"] or referenceDict[image_id]["depth"]):
# referenceDict[image_id].update({"rgb":data["images"][i]["file_name"]})
# referenceDict[image_id].update({"depth":data["images"][i]["depth_file_name"]})
referenceDict[image_id]["amodal"].append(ann["segmentation"])
referenceDict[image_id]["visible"].append(ann["visible_mask"])
referenceDict[image_id]["occluded"].append(ann["occluded_mask"])
referenceDict[image_id]["occluded_rate"].append(ann["occluded_rate"])
referenceDict[image_id]["category_id"].append(ann["category_id"])
# referenceDict[image_id]["object_name"].append(ann["object_name"])
# Create visualise directory
vis_dir = os.path.join(data_dir,"visualise_dataset")
if os.path.exists(vis_dir): # remove contents if exist
for filename in os.listdir(vis_dir):
file_path = os.path.join(vis_dir, filename)
try:
if os.path.isfile(file_path) or os.path.islink(file_path):
os.unlink(file_path)
elif os.path.isdir(file_path):
shutil.rmtree(file_path)
except Exception as e:
print('Failed to delete %s. Reason: %s' % (file_path, e))
else:
os.makedirs(vis_dir)
# query_img_id_list = [1,50,100]
query_img_id_list = [i for i in range(1,len(referenceDict)+1)] # visualise all images
for id in query_img_id_list:
if id in referenceDict:
ann_dic = referenceDict[id]
vis_dir_img = os.path.join(vis_dir,str(id))
if not os.path.exists(vis_dir_img):
os.makedirs(vis_dir_img)
# visualise rgb image
rgb_path = os.path.join(data_dir,ann_dic["rgb"])
rgb_img = cv2.imread(rgb_path, cv2.IMREAD_UNCHANGED)
# visualise depth image
depth_path = os.path.join(data_dir,ann_dic["depth"])
from PIL import Image
im = Image.open(depth_path)
im = np.array(im)
depth_img = Image.fromarray(normalize_greyscale_image(im.astype("float32")))
file = os.path.join(vis_dir_img,f"depth_{id}.png")
depth_img.save(file, "PNG")
# visualise occlusion masks on rgb image
occ_img_list = ann_dic["occluded"]
if len(occ_img_list) > 0:
occ_img = rgb_img.copy()
overlay = rgb_img.copy()
combined_mask = np.zeros((occ_img.shape[0],occ_img.shape[1]))
# iterate through all occlusion masks
for i, occMask in enumerate(occ_img_list):
occluded_mask = mask_util.decode(occMask)
if ann_dic["category_id"][i] == 0:
occ_img_back = rgb_img.copy()
overlay_back = rgb_img.copy()
occluded_mask = occluded_mask.astype(bool) # boolean mask
overlay_back[occluded_mask] = [0, 0, 255]
# print(np.unique(occluded_mask))
alpha =0.5
occ_img_back = cv2.addWeighted(overlay_back, alpha, occ_img_back, 1 - alpha, 0, occ_img_back)
occ_save_path = f"{vis_dir_img}/rgb_occlusion_{id}_background.png"
cv2.imwrite(occ_save_path, occ_img_back)
else:
combined_mask += occluded_mask
combined_mask = combined_mask.astype(bool) # boolean mask
overlay[combined_mask] = [0, 0, 255]
alpha =0.5
occ_img = cv2.addWeighted(overlay, alpha, occ_img, 1 - alpha, 0, occ_img)
occ_save_path = f"{vis_dir_img}/rgb_occlusion_{id}.png"
cv2.imwrite(occ_save_path, occ_img)
combined_mask = combined_mask.astype('uint8')
occ_save_path = f"{vis_dir_img}/occlusion_mask_{id}.png"
cv2.imwrite(occ_save_path, combined_mask*255)
cols = 4
rows = len(occ_img_list) // cols + 1
from matplotlib import pyplot as plt
fig = plt.figure(figsize=(20,10))
for index, occMask in enumerate(occ_img_list):
occ_mask = mask_util.decode(occMask)
plt.subplot(rows,cols, index+1)
plt.axis('off')
# plt.title(ann_dic["object_name"][index])
plt.imshow(occ_mask)
plt.tight_layout()
plt.suptitle(f"Occlusion Masks for {id}.png")
# plt.show()
plt.savefig(f'{vis_dir_img}/occ_masks_{id}.png')
plt.close()
# visualise visible masks on rgb image
vis_img_list = ann_dic["visible"]
if len(vis_img_list) > 0:
vis_img = rgb_img.copy()
overlay = rgb_img.copy()
# iterate through all occlusion masks
for i, visMask in enumerate(vis_img_list):
visible_mask = mask_util.decode(visMask)
if ann_dic["category_id"][i] == 0:
vis_img_back = rgb_img.copy()
overlay_back = rgb_img.copy()
visible_mask = visible_mask.astype(bool) # boolean mask
overlay_back[visible_mask] = [0, 0, 255]
alpha =0.5
vis_img_back = cv2.addWeighted(overlay_back, alpha, vis_img_back, 1 - alpha, 0, vis_img_back)
vis_save_path = f"{vis_dir_img}/rgb_visible_mask_{id}_background.png"
cv2.imwrite(vis_save_path, vis_img_back)
else:
vis_combined_mask = visible_mask.astype(bool) # boolean mask
colour = list(np.random.choice(range(256), size=3))
overlay[vis_combined_mask] = colour
alpha = 0.5
vis_img = cv2.addWeighted(overlay, alpha, vis_img, 1 - alpha, 0, vis_img)
vis_save_path = f"{vis_dir_img}/rgb_visible_mask_{id}.png"
cv2.imwrite(vis_save_path,vis_img)
cols = 4
rows = len(vis_img_list) // cols + 1
# print(len(amodal_img_list))
# print(cols,rows)
from matplotlib import pyplot as plt
fig = plt.figure(figsize=(20,10))
for index, visMask in enumerate(vis_img_list):
vis_mask = mask_util.decode(visMask)
plt.subplot(rows,cols, index+1)
plt.axis('off')
# plt.title(ann_dic["object_name"][index])
plt.imshow(vis_mask)
plt.tight_layout()
plt.suptitle(f"Visible Masks for {id}.png")
# plt.show()
plt.savefig(f'{vis_dir_img}/vis_masks_{id}.png')
plt.close()
# visualise amodal masks
# img_dir_path = f"{output_dir}/visualize_occlusion_masks/"
# img_list = sorted(os.listdir(img_dir_path), key=lambda x: float(x[4:-4]))
amodal_img_list = ann_dic["amodal"]
if len(amodal_img_list) > 0:
cols = 4
rows = len(amodal_img_list) // cols + 1
# print(len(amodal_img_list))
# print(cols,rows)
from matplotlib import pyplot as plt
fig = plt.figure(figsize=(20,10))
for index, amoMask in enumerate(amodal_img_list):
amodal_mask = mask_util.decode(amoMask)
plt.subplot(rows,cols, index+1)
plt.axis('off')
# plt.title(ann_dic["object_name"][index])
plt.imshow(amodal_mask)
plt.tight_layout()
plt.suptitle(f"Amodal Masks for {id}.png")
# plt.show()
plt.savefig(f'{vis_dir_img}/amodal_masks_{id}.png')
plt.close()
# get rgb_path
rgb_path = os.path.join(data_dir,ann_dic["rgb"])
rgb_img = cv2.imread(rgb_path, cv2.IMREAD_UNCHANGED)
occ_order = False
if occ_order:
# get occlusion order adjacency matrix
npy_path = os.path.join(data_dir,ann_dic["occlusion_order"])
occlusion_order_adjacency_matrix = np.load(npy_path)
print(f"Calculating Directed Graph for Scene:{id}")
# vis_img = cv2.imread(f"{vis_dir}/visuals/{scene_index}.png", cv2.IMREAD_UNCHANGED)
rows = cols = len(ann_dic["visible"]) # number of objects
obj_rgb_mask_list = []
for i in range(1,len(ann_dic["visible"])+1):
visMask = ann_dic["visible"][i-1]
visible_mask = mask_util.decode(visMask)
rgb_crop = apply_mask(rgb_img, visible_mask)
rgb_crop = convert_png(rgb_crop)
def bbox(im):
a = np.array(im)[:,:,:3] # keep RGB only
m = np.any(a != [0,0,0], axis=2)
coords = np.argwhere(m)
y0, x0, y1, x1 = *np.min(coords, axis=0), *np.max(coords, axis=0)
return (x0, y0, x1+1, y1+1)
# print(bbox(rgb_crop))
obj_rgb_mask = rgb_crop.crop(bbox(rgb_crop))
obj_rgb_mask_list.append(obj_rgb_mask) # add obj_rgb_mask
# get contours (presumably just one around the nonzero pixels) # for instance segmentation mask
# contours = cv2.findContours(visible_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# contours = contours[0] if len(contours) == 2 else contours[1]
# for cntr in contours:
# x,y,w,h = cv2.boundingRect(cntr)
# cv2.putText(img=vis_img, text=str(i), org=(x+w//2, y+h//2), fontFace=cv2.FONT_HERSHEY_TRIPLEX, fontScale=0.5, color=(0, 0, 0),thickness=1)
""" === Generate Directed Graph === """
# print("Occlusion Order Adjacency Matrix:\n",occlusion_order_adjacency_matrix)
# f, (ax1,ax2) = plt.subplots(1,2)
# show_graph_with_labels(overlap_adjacency_matrix,ax1)
labels = [i for i in range(1,len(occlusion_order_adjacency_matrix)+1)]
labels_dict = {}
for i in range(len(occlusion_order_adjacency_matrix)):
labels_dict.update({i:labels[i]})
rows, cols = np.where(occlusion_order_adjacency_matrix == 1)
rows += 1
cols += 1
edges = zip(rows.tolist(), cols.tolist())
nodes_list = [i for i in range(1, len(occlusion_order_adjacency_matrix)+1)]
# Initialise directed graph G
G = nx.DiGraph()
G.add_nodes_from(nodes_list)
G.add_edges_from(edges)
# pos=nx.spring_layout(G,k=1/sqrt(N))
is_planar, P = nx.check_planarity(G)
if is_planar:
pos=nx.planar_layout(G)
            else:
                # pos=nx.draw(G)
                from math import sqrt
                N = len(G.nodes())
                pos = nx.spring_layout(G, k=3 / sqrt(N))
print("Nodes:",G.nodes())
print("Edges:",G.edges())
# print(G.in_edges())
# print(G.out_edges())
# get start nodes
start_nodes = [node for (node,degree) in G.in_degree if degree == 0]
print("start_nodes:",start_nodes)
# get end nodes
end_nodes = [node for (node,degree) in G.out_degree if degree == 0]
for node in end_nodes:
if node in start_nodes:
end_nodes.remove(node)
print("end_nodes:",end_nodes)
# get intermediate notes
intermediate_nodes = [i for i in nodes_list if i not in (start_nodes) and i not in (end_nodes)]
print("intermediate_nodes:",intermediate_nodes)
print("(Degree of clustering) Number of Weakly Connected Components:",nx.number_weakly_connected_components(G))
# largest_wcc = max(nx.weakly_connected_components(G), key=len)
# largest_wcc_size = len(largest_wcc)
# print("(Scene Complexity) Sizes of Weakly Connected Component:",largest_wcc_size)
wcc_list = list(nx.weakly_connected_components(G))
wcc_len = []
for component in wcc_list:
wcc_len.append(len(component))
print("(Scene Complexity/Degree of overlapping regions) Sizes of Weakly Connected Components:",wcc_len)
dag_longest_path_length = nx.dag_longest_path_length(G)
print("(Minimum no. of depth layers to order all regions in WCC) Longest directed path of Weakly Connected Components:",dag_longest_path_length)
# nx.draw(gr, node_size=500, with_labels=True)
node_color_list = []
node_size_list = []
for node in nodes_list:
if node in start_nodes:
node_color_list.append('green')
node_size_list.append(500)
elif node in end_nodes:
node_color_list.append('yellow')
node_size_list.append(300)
else:
node_color_list.append('#1f78b4')
node_size_list.append(300)
options = {
'node_color': node_color_list,
'node_size': node_size_list,
'width': 1,
'arrowstyle': '-|>',
'arrowsize': 10
}
fig1 = plt.figure(figsize=(20, 6), dpi=80)
plt.subplot(1,3,1)
# nx.draw_planar(G, pos, with_labels = True, arrows=True, **options)
nx.draw_networkx(G,pos, with_labels= True, arrows=True, **options)
dag = nx.is_directed_acyclic_graph(G)
print(f"Is Directed Acyclic Graph (DAG)?: {dag}")
colors = ["green", "#1f78b4", "yellow"]
texts = ["Top Layer", "Intermediate Layers", "Bottom Layer"]
patches = [ plt.plot([],[], marker="o", ms=10, ls="", mec=None, color=colors[i],
label="{:s}".format(texts[i]) )[0] for i in range(len(texts)) ]
plt.legend(handles=patches, bbox_to_anchor=(0.5, -0.05),
loc='center', ncol=3, fancybox=True, shadow=True,
facecolor="w", numpoints=1, fontsize=10)
plt.title("Directed Occlusion Order Graph")
# plt.subplot(1,2,2)
# plt.imshow(vis_img)
# plt.imshow(vis_img)
# plt.title(f"Visible Masks Scene {scene_index}")
plt.axis('off')
# plt.show()
# plt.savefig(f"{output_dir}/vis_img_{i}.png")
# cv2.imwrite(f"{output_dir}/scene_{scene_index}.png", vis_img)
# plt.show()
# fig2 = plt.figure(figsize=(16, 6), dpi=80)
plt.subplot(1,3,2)
options = {
'node_color': "white",
# 'node_size': node_size_list,
'width': 1,
'arrowstyle': '-|>',
'arrowsize': 10
}
# nx.draw_networkx(G, arrows=True, **options)
# nx.draw(G, with_labels = True,arrows=True, connectionstyle='arc3, rad = 0.1')
# nx.draw_spring(G, with_labels = True,arrows=True, connectionstyle='arc3, rad = 0.5')
N = len(G.nodes())
from math import sqrt
if is_planar:
pos=nx.planar_layout(G)
else:
# pos=nx.draw(G)
N = len(G.nodes())
pos=nx.spring_layout(G,k=3/sqrt(N))
nx.draw_networkx(G,pos, with_labels= False, arrows=True, **options)
plt.title("Visualisation of Occlusion Order Graph")
# draw with images on nodes
# nx.draw_networkx(G,pos,width=3,edge_color="r",alpha=0.6)
ax=plt.gca()
fig=plt.gcf()
trans = ax.transData.transform
trans2 = fig.transFigure.inverted().transform
imsize = 0.05 # this is the image size
node_size_list = []
for n in G.nodes():
(x,y) = pos[n]
xx,yy = trans((x,y)) # figure coordinates
xa,ya = trans2((xx,yy)) # axes coordinates
# a = plt.axes([xa-imsize/2.0,ya-imsize/2.0, imsize, imsize ])
a = plt.axes([xa-imsize/2.0,ya-imsize/2.0, imsize, imsize ])
a.imshow(obj_rgb_mask_list[n-1])
a.set_aspect('equal')
a.axis('off')
# fig.patch.set_visible(False)
ax.axis('off')
plt.subplot(1,3,3)
plt.imshow(rgb_img)
plt.axis('off')
plt.title(f"RGB Scene {id}")
# plt.tight_layout()
# plt.show()
plt.savefig(f'{vis_dir_img}/occlusion_order_{id}.png')
plt.close()
m = occlusion_order_adjacency_matrix.astype(int)
unique_chars, matrix = np.unique(m, return_inverse=True)
color_dict = {1: 'darkred', 0: 'white'}
plt.figure(figsize=(20,20))
sns.set(font_scale=2)
ax1 = sns.heatmap(matrix.reshape(m.shape), annot=m, annot_kws={'fontsize': 20}, fmt='',
linecolor='dodgerblue', lw=5, square=True, clip_on=False,
cmap=ListedColormap([color_dict[char] for char in unique_chars]),
xticklabels=np.arange(m.shape[1]) + 1, yticklabels=np.arange(m.shape[0]) + 1, cbar=False)
ax1.tick_params(labelrotation=0)
ax1.tick_params(axis='both', which='major', labelsize=20, labelbottom = False, bottom=False, top = False, labeltop=True)
plt.xlabel("Occludee")
ax1.xaxis.set_ticks_position('top')
ax1.xaxis.set_label_position('top')
plt.ylabel("Occluder")
# plt.show()
plt.savefig(f'{vis_dir_img}/occlusion_order_adjacency_matrix_{id}.png')
plt.close()
| 25,100 | Python | 44.227027 | 160 | 0.5149 |
ngzhili/SynTable/README.md | # SynTable - A Synthetic Data Generation Pipeline for Cluttered Tabletop Scenes
This repository contains the official implementation of the paper **"SynTable: A Synthetic Data Generation Pipeline for Unseen Object Amodal Instance Segmentation of Cluttered Tabletop Scenes"**.
Zhili Ng*, Haozhe Wang*, Zhengshen Zhang*, Francis Eng Hock Tay, Marcelo H. Ang Jr.
*equal contributions
[[arXiv]](https://arxiv.org/abs/2307.07333)
[[Website]](https://sites.google.com/view/syntable/home)
[[Dataset]](https://doi.org/10.5281/zenodo.10565517)
[[Demo Video]](https://youtu.be/zHM8H58Kn3E)
[[Modified UOAIS-v2]](https://github.com/ngzhili/uoais-v2?tab=readme-ov-file)
[](https://doi.org/10.5281/zenodo.10565517)

SynTable is a robust custom data generation pipeline that creates photorealistic synthetic datasets of Cluttered Tabletop Scenes. For each scene, it includes metadata such as
- [x] RGB image of scene
- [x] depth image of Scene
- [x] scene instance segmentation masks
- [x] object amodal (visible + invisible) rgb
- [x] object amodal (visible + invisible) masks
- [x] object modal (visible) masks
- [x] object occlusion (invisible) masks
- [x] object occlusion rate
- [x] object visible bounding box
- [x] tabletop visible masks
- [x] background visible mask (background excludes tabletop and objects)
- [x] occlusion ordering adjacency matrix (OOAM) of objects on tabletop
## **Installation**
1. Install [NVIDIA Isaac Sim 2022.1.1 version](https://developer.nvidia.com/isaac-sim) on Omniverse
2. Change Directory to isaac_sim-2022.1.1 directory
``` bash
cd '/home/<username>/.local/share/ov/pkg/isaac_sim-2022.1.1/tools'
```
3. Clone the repo
``` bash
git clone https://github.com/ngzhili/SynTable.git
```
4. Install Dependencies into isaac sim's python
- Get issac sim source code directory path in command line.
``` bash
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
echo $SCRIPT_DIR
```
- Get isaac sim's python path
``` bash
python_exe=${PYTHONEXE:-"${SCRIPT_DIR}/kit/python/bin/python3"}
echo $python_exe
```
- Run isaac sim's python
``` bash
$python_exe
```
- while running isaac sim's python in bash, install pycocotools and opencv-python into isaac sim's python
``` bash
import pip
package_names=['pycocotools', 'opencv-python'] #packages to install
pip.main(['install'] + package_names + ['--upgrade'])
```
5. Copy the mount_dir folder to your home directory (anywhere outside of isaac sim source code)
``` bash
cp -r SynTable/mount_dir /home/<username>
```
## **Adding object models to nucleus**
1. You can download the .USD object models to be used for generating the tabletop datasets [here](https://mega.nz/folder/1nJAwQxA#1P3iUtqENKCS66uQYXk1vg).
2. Upload the downloaded syntable_nucleus folder into Omniverse Nucleus into /Users directory.
3. Ensure that the file paths in the config file are correct before running the generate dataset commands.
## **Generate Synthetic Dataset**
Note: Before generating the synthetic dataset, please ensure that you have uploaded all object models to the Isaac Sim Nucleus server and that their paths in the config file are correct.
1. Change Directory to Isaac SIM source code
``` bash
cd /home/<username>/.local/share/ov/pkg/isaac_sim-2022.1.1
```
2. Run Syntable Pipeline (non-headless)
``` bash
./python.sh SynTable/syntable_composer/src/main1.py --input */parameters/train_config_syntable1.yaml --output */dataset/train --mount '/home/<username>/mount_dir' --num_scenes 3 --num_views 3 --overwrite --save_segmentation_data
```
### **Types of Flags**
| Flag | Description |
| :--- | :----: |
| ```--input``` | Path to input parameter file. |
| ```--mount``` | Path to mount symbolized in parameter files via '*'. |
| ```--headless``` | Will not launch Isaac SIM window. |
| ```--nap``` | Will nap Isaac SIM after the first scene is generated. |
| ```--overwrite``` | Overwrites dataset in output directory. |
| ```--output``` | Output directory. Overrides 'output_dir' param. |
| ```--num-scenes``` | Number of scenes in dataset. Overrides 'num_scenes' param. |
| ```--num-views``` | Number of views to generate per scene. Overrides 'num_views' param. |
| ```--save-segmentation-data``` | Saves visualisation of annotations into output directory. False by default. |
## Generated dataset
- The SynTable data generation pipeline generates datasets in the COCO (Common Objects in Context) annotation format.
## **Folder Structure of Generated Synthetic Dataset**
.
├── ...
├── SynTable-Sim # Generated dataset
│ ├── data # folder to store RGB, Depth, OOAM
│ │ └── mono
│ │ ├── rgb
│ │ │ ├── 0_0.png # file naming convention follows sceneNum_viewNum.png
│ │ │ └── 0_1.png
│ │ ├── depth
│ │ │ ├── 0_0.png
│ │ │ └── 0_1.png
│ │ └── occlusion order
│ │ ├── 0_0.npy
│ │ └── 0_1.npy
│ ├── parameters # parameters used for generation of annotations
│ └── train.json # Annotation COCO.JSON
└── ...
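
For a quick sanity check outside the visualiser described below, the annotation file can be read like any COCO-style JSON. A minimal sketch (paths assume the folder structure above; the mask field names `segmentation`, `visible_mask`, `occluded_mask` and the occluder/occludee orientation of the OOAM follow `visualize_annotations.py`):

```python
import json
import numpy as np
import pycocotools.mask as mask_util

with open("SynTable-Sim/train.json") as f:
    coco = json.load(f)

ann = coco["annotations"][0]
amodal = mask_util.decode(ann["segmentation"])      # amodal (visible + invisible) mask, HxW
visible = mask_util.decode(ann["visible_mask"])     # modal (visible) mask
occluded = mask_util.decode(ann["occluded_mask"])   # occlusion (invisible) mask
print("occlusion rate:", ann["occluded_rate"])

# OOAM for scene 0, view 0: entry [i, j] == 1 means object i occludes object j
# (rows are occluders, columns are occludees)
ooam = np.load("SynTable-Sim/data/mono/occlusion order/0_0.npy")
print("objects in view:", ooam.shape[0])
```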
## **Visualise Annotations**
1. Create python venv and install dependencies
```
python3.8 -m venv env
source env/bin/activate
pip install -r requirements.txt
```
2. Visualise sample annotations (creates a visualise_dataset directory in dataset directory, then saves annotation visualisations there)
```
python ./visualize_annotations.py --dataset './sample_data' --ann_json './sample_data/annotation_final.json'
```
## **Sample Visualisation of Annotations**


## **References**
We have heavily modified the Python SDK source code from NVIDIA Isaac Sim's Replicator Composer.
## **Citation**
If you find our work useful for your research, please consider citing the following BibTeX entry:
```
@misc{ng2023syntable,
title={SynTable: A Synthetic Data Generation Pipeline for Unseen Object Amodal Instance Segmentation of Cluttered Tabletop Scenes},
      author={Zhili Ng and Haozhe Wang and Zhengshen Zhang and Francis Tay Eng Hock and Marcelo H. Ang Jr},
year={2023},
eprint={2307.07333},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
| 6,703 | Markdown | 40.9 | 232 | 0.654781 |
ngzhili/SynTable/syntable_composer/src/main.py | import argparse
import os
import shutil
import signal
import sys
from omni.isaac.kit import SimulationApp
config1 = {"headless": False}
kit = SimulationApp(config1)
from distributions import Distribution
from input import Parser
from output import Metrics, Logger, OutputManager
from sampling import Sampler
from scene import SceneManager
class Composer:
def __init__(self, params, index, output_dir):
""" Construct Composer. Start simulator and prepare for generation. """
self.params = params
self.index = index
self.output_dir = output_dir
self.sample = Sampler().sample
# Set-up output directories
self.setup_data_output()
# Start Simulator
Logger.content_log_path = self.content_log_path
Logger.start_log_entry("start-up")
Logger.print("Isaac Sim starting up...")
config = {"headless": self.sample("headless")}
if self.sample("path_tracing"):
config["renderer"] = "PathTracing"
config["samples_per_pixel_per_frame"] = self.sample("samples_per_pixel_per_frame")
else:
config["renderer"] = "RayTracedLighting"
#self.sim_app = SimulationApp(config)
self.sim_app = kit
from omni.isaac.core import SimulationContext
self.scene_units_in_meters = self.sample("scene_units_in_meters")
self.sim_context = SimulationContext(physics_dt=1.0 / 60.0, stage_units_in_meters=self.scene_units_in_meters)
        # need to initialize physics before getting any articulation, etc.
self.sim_context.initialize_physics()
self.sim_context.play()
self.num_scenes = self.sample("num_scenes")
self.sequential = self.sample("sequential")
self.scene_manager = SceneManager(self.sim_app, self.sim_context)
self.output_manager = OutputManager(
self.sim_app, self.sim_context, self.scene_manager, self.output_data_dir, self.scene_units_in_meters
)
# Set-up exit message
signal.signal(signal.SIGINT, self.handle_exit)
Logger.finish_log_entry()
def handle_exit(self, *args, **kwargs):
print("exiting dataset generation...")
self.sim_context.clear_instance()
self.sim_app.close()
sys.exit()
def generate_scene(self):
""" Generate 1 dataset scene. Returns captured groundtruth data. """
self.scene_manager.prepare_scene(self.index)
self.scene_manager.populate_scene()
if self.sequential:
sequence_length = self.sample("sequence_step_count")
step_time = self.sample("sequence_step_time")
for step in range(sequence_length):
self.scene_manager.update_scene(step_time=step_time, step_index=step)
groundtruth = self.output_manager.capture_groundtruth(
self.index, step_index=step, sequence_length=sequence_length
)
if step == 0:
Logger.print("stepping through scene...")
else:
self.scene_manager.update_scene()
groundtruth = self.output_manager.capture_groundtruth(self.index)
self.scene_manager.finish_scene()
return groundtruth
def setup_data_output(self):
""" Create output directories and copy input files to output. """
# Overwrite output directory, if needed
if self.params["overwrite"]:
shutil.rmtree(self.output_dir, ignore_errors=True)
# Create output directory
os.makedirs(self.output_dir, exist_ok=True)
# Create output directories, as needed
self.output_data_dir = os.path.join(self.output_dir, "data")
self.parameter_dir = os.path.join(self.output_dir, "parameters")
self.parameter_profiles_dir = os.path.join(self.parameter_dir, "profiles")
self.log_dir = os.path.join(self.output_dir, "log")
self.content_log_path = os.path.join(self.log_dir, "sampling_log.yaml")
os.makedirs(self.output_data_dir, exist_ok=True)
os.makedirs(self.parameter_profiles_dir, exist_ok=True)
os.makedirs(self.log_dir, exist_ok=True)
# Copy input parameters file to output
input_file_name = os.path.basename(self.params["file_path"])
input_file_copy = os.path.join(self.parameter_dir, input_file_name)
shutil.copy(self.params["file_path"], input_file_copy)
# Copy profile parameters file(s) to output
if self.params["profile_files"]:
for profile_file in self.params["profile_files"]:
profile_file_name = os.path.basename(profile_file)
profile_file_copy = os.path.join(self.parameter_profiles_dir, profile_file_name)
shutil.copy(profile_file, profile_file_copy)
def get_output_dir(params):
""" Determine output directory. """
if params["output_dir"].startswith("/"):
output_dir = params["output_dir"]
elif params["output_dir"].startswith("*"):
output_dir = os.path.join(Distribution.mount, params["output_dir"][2:])
else:
output_dir = os.path.join(os.path.dirname(__file__), "..", "datasets", params["output_dir"])
return output_dir
def get_starting_index(params, output_dir):
""" Determine starting index of dataset. """
if params["overwrite"]:
return 0
output_data_dir = os.path.join(output_dir, "data")
if not os.path.exists(output_data_dir):
return 0
def find_min_missing(indices):
if indices:
indices.sort()
for i in range(indices[-1]):
if i not in indices:
return i
return indices[-1]
else:
return -1
camera_dirs = [os.path.join(output_data_dir, sub_dir) for sub_dir in os.listdir(output_data_dir)]
min_indices = []
for camera_dir in camera_dirs:
data_dirs = [os.path.join(camera_dir, sub_dir) for sub_dir in os.listdir(camera_dir)]
for data_dir in data_dirs:
indices = []
for filename in os.listdir(data_dir):
try:
if "_" in filename:
index = int(filename[: filename.rfind("_")])
else:
index = int(filename[: filename.rfind(".")])
indices.append(index)
except:
pass
min_index = find_min_missing(indices)
min_indices.append(min_index)
if min_indices:
minest_index = min(min_indices)
return minest_index + 1
else:
return 0
def assert_dataset_complete(params, index):
""" Check if dataset is already complete. """
num_scenes = params["num_scenes"]
if index >= num_scenes:
print(
            'Dataset is completed. Number of generated samples {} satisfies "num_scenes" {}.'.format(index, num_scenes)
)
sys.exit()
else:
print("Starting at index ", index)
def define_arguments():
""" Define command line arguments. """
parser = argparse.ArgumentParser()
parser.add_argument("--input", default="parameters/warehouse.yaml", help="Path to input parameter file")
parser.add_argument(
"--visualize-models",
"--visualize_models",
action="store_true",
help="Output visuals of all object models defined in input parameter file, instead of outputting a dataset.",
)
parser.add_argument("--mount", default="/tmp/composer", help="Path to mount symbolized in parameter files via '*'.")
parser.add_argument("--headless", action="store_true", help="Will not launch Isaac SIM window.")
parser.add_argument("--nap", action="store_true", help="Will nap Isaac SIM after the first scene is generated.")
parser.add_argument("--overwrite", action="store_true", help="Overwrites dataset in output directory.")
parser.add_argument("--output", type=str, help="Output directory. Overrides 'output_dir' param.")
parser.add_argument(
"--num-scenes", "--num_scenes", type=int, help="Num scenes in dataset. Overrides 'num_scenes' param."
)
parser.add_argument(
"--nucleus-server", "--nucleus_server", type=str, help="Nucleus Server URL. Overrides 'nucleus_server' param."
)
return parser
if __name__ == "__main__":
# Create argument parser
parser = define_arguments()
args, _ = parser.parse_known_args()
# Parse input parameter file
parser = Parser(args)
params = parser.params
Sampler.params = params
# Determine output directory
output_dir = get_output_dir(params)
# Run Composer in Visualize mode
if args.visualize_models:
from visualize import Visualizer
visuals = Visualizer(parser, params, output_dir)
visuals.visualize_models()
# Handle shutdown
visuals.composer.sim_context.clear_instance()
visuals.composer.sim_app.close()
sys.exit()
# Set verbose mode
Logger.verbose = params["verbose"]
# Get starting index of dataset
index = get_starting_index(params, output_dir)
# Check if dataset is already complete
assert_dataset_complete(params, index)
# Initialize composer
composer = Composer(params, index, output_dir)
metrics = Metrics(composer.log_dir, composer.content_log_path)
# Generate dataset
while composer.index < params["num_scenes"]:
composer.generate_scene()
composer.index += 1
# Handle shutdown
composer.output_manager.data_writer.stop_threads()
composer.sim_context.clear_instance()
composer.sim_app.close()
# Output performance metrics
metrics.output_performance_metrics()
| 9,745 | Python | 33.807143 | 120 | 0.626783 |
ngzhili/SynTable/syntable_composer/src/helper_functions.py | """
SynTable Replicator Composer Helper Functions
"""
import numpy as np
import pycocotools.mask as mask_util
import cv2
def compute_occluded_masks(mask1, mask2):
"""Computes occlusions between two sets of masks.
masks1, masks2: [Height, Width, instances]
"""
# intersections and union
mask1_area = np.count_nonzero(mask1)
mask2_area = np.count_nonzero(mask2)
intersection_mask = np.logical_and(mask1, mask2)
intersection = np.count_nonzero(np.logical_and(mask1, mask2))
iou = intersection/(mask1_area+mask2_area-intersection)
return iou, intersection_mask.astype(float)
class GenericMask:
"""
Attribute:
polygons (list[ndarray]): list[ndarray]: polygons for this mask.
Each ndarray has format [x, y, x, y, ...]
mask (ndarray): a binary mask
"""
def __init__(self, mask_or_polygons, height, width):
self._mask = self._polygons = self._has_holes = None
self.height = height
self.width = width
m = mask_or_polygons
if isinstance(m, dict):
# RLEs
assert "counts" in m and "size" in m
if isinstance(m["counts"], list): # uncompressed RLEs
h, w = m["size"]
assert h == height and w == width
m = mask_util.frPyObjects(m, h, w)
self._mask = mask_util.decode(m)[:, :]
return
if isinstance(m, list): # list[ndarray]
self._polygons = [np.asarray(x).reshape(-1) for x in m]
return
if isinstance(m, np.ndarray): # assumed to be a binary mask
assert m.shape[1] != 2, m.shape
assert m.shape == (height, width), m.shape
self._mask = m.astype("uint8")
return
raise ValueError("GenericMask cannot handle object {} of type '{}'".format(m, type(m)))
@property
def mask(self):
if self._mask is None:
self._mask = self.polygons_to_mask(self._polygons)
return self._mask
@property
def polygons(self):
if self._polygons is None:
self._polygons, self._has_holes = self.mask_to_polygons(self._mask)
return self._polygons
@property
def has_holes(self):
if self._has_holes is None:
if self._mask is not None:
self._polygons, self._has_holes = self.mask_to_polygons(self._mask)
else:
self._has_holes = False # if original format is polygon, does not have holes
return self._has_holes
def mask_to_polygons(self, mask):
# cv2.RETR_CCOMP flag retrieves all the contours and arranges them to a 2-level
# hierarchy. External contours (boundary) of the object are placed in hierarchy-1.
# Internal contours (holes) are placed in hierarchy-2.
# cv2.CHAIN_APPROX_NONE flag gets vertices of polygons from contours.
        mask = np.ascontiguousarray(mask)  # some versions of cv2 do not support non-contiguous arrays
res = cv2.findContours(mask.astype("uint8"), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
hierarchy = res[-1]
if hierarchy is None: # empty mask
return [], False
has_holes = (hierarchy.reshape(-1, 4)[:, 3] >= 0).sum() > 0
res = res[-2]
res = [x.flatten() for x in res]
# These coordinates from OpenCV are integers in range [0, W-1 or H-1].
# We add 0.5 to turn them into real-value coordinate space. A better solution
# would be to first +0.5 and then dilate the returned polygon by 0.5.
res = [x + 0.5 for x in res if len(x) >= 6]
return res, has_holes
def polygons_to_mask(self, polygons):
rle = mask_util.frPyObjects(polygons, self.height, self.width)
rle = mask_util.merge(rle)
return mask_util.decode(rle)[:, :]
def area(self):
return self.mask.sum()
def bbox(self):
try:
p = mask_util.frPyObjects(self.polygons, self.height, self.width)
p = mask_util.merge(p)
bbox = mask_util.toBbox(p)
bbox[2] += bbox[0]
bbox[3] += bbox[1]
except:
print(f"Encountered error while generating bounding boxes from mask polygons: {self.polygons}")
print("self.polygons:",self.polygons)
bbox = np.array([0,0,0,0])
return bbox
def bbox_from_binary_mask(binary_mask):
""" Returns the smallest bounding box containing all pixels marked "1" in the given image mask.
:param binary_mask: A binary image mask with the shape [H, W].
:return: The bounding box represented as [x, y, width, height]
"""
# Find all columns and rows that contain 1s
rows = np.any(binary_mask, axis=1)
cols = np.any(binary_mask, axis=0)
# Find the min and max col/row index that contain 1s
rmin, rmax = np.where(rows)[0][[0, -1]]
cmin, cmax = np.where(cols)[0][[0, -1]]
# Calc height and width
h = rmax - rmin + 1
w = cmax - cmin + 1
return [int(cmin), int(rmin), int(w), int(h)]
| 5,066 | Python | 36.533333 | 107 | 0.595934 |
ngzhili/SynTable/syntable_composer/src/main1.py | """
SynTable Replicator Composer Main
"""
# import dependencies
import argparse
from ntpath import join
import os
import shutil
import signal
import sys
import numpy as np
import random
import math
import gc
import json
import datetime
import time
import glob
import cv2
from omni.isaac.kit import SimulationApp
from distributions import Distribution
from input.parse1 import Parser
from output import Metrics, Logger
from output.output1 import OutputManager
from sampling.sample1 import Sampler
from scene.scene1 import SceneManager
from helper_functions import compute_occluded_masks
from omni.isaac.kit.utils import set_carb_setting
from scene.light1 import Light
class Composer:
def __init__(self, params, index, output_dir):
""" Construct Composer. Start simulator and prepare for generation. """
self.params = params
self.index = index
self.output_dir = output_dir
self.sample = Sampler().sample
# Set-up output directories
self.setup_data_output()
# Start Simulator
Logger.content_log_path = self.content_log_path
Logger.start_log_entry("start-up")
Logger.print("Isaac Sim starting up...")
config = {"headless": self.sample("headless")}
if self.sample("path_tracing"):
config["renderer"] = "PathTracing"
config["samples_per_pixel_per_frame"] = self.sample("samples_per_pixel_per_frame")
else:
config["renderer"] = "RayTracedLighting"
self.sim_app = SimulationApp(config)
from omni.isaac.core import SimulationContext
self.scene_units_in_meters = self.sample("scene_units_in_meters")
self.sim_context = SimulationContext(physics_dt=1.0/60, #1.0 / 60.0,
rendering_dt =1.0/60, #1.0 / 60.0,
stage_units_in_meters=self.scene_units_in_meters)
        # need to initialize physics before getting any articulation, etc.
self.sim_context.initialize_physics()
self.sim_context.play()
self.num_scenes = self.sample("num_scenes")
self.sequential = self.sample("sequential")
self.scene_manager = SceneManager(self.sim_app, self.sim_context)
self.output_manager = OutputManager(
self.sim_app, self.sim_context, self.scene_manager, self.output_data_dir, self.scene_units_in_meters
)
# Set-up exit message
signal.signal(signal.SIGINT, self.handle_exit)
Logger.finish_log_entry()
def handle_exit(self, *args, **kwargs):
print("exiting dataset generation...")
self.sim_context.clear_instance()
self.sim_app.close()
sys.exit()
def generate_scene(self, img_index, ann_index, img_list,ann_list,regen_scene):
""" Generate 1 dataset scene. Returns captured groundtruth data. """
amodal = True
self.scene_manager.prepare_scene(self.index)
# reload table into scene
self.scene_manager.reload_table()
kit = self.sim_app
# if generate amodal annotations
if amodal:
roomTableSize = self.scene_manager.roomTableSize
roomTableHeight = roomTableSize[-1]
spawnLowerBoundOffset = 0.2
spawnUpperBoundOffset = 1
            # calculate tableBounds to constrain objects' spawn locations to be within tableBounds
x_width = roomTableSize[0] /2
y_length = roomTableSize[1] /2
min_val = (-x_width*0.6, -y_length*0.6, roomTableHeight+spawnLowerBoundOffset)
max_val = (x_width*0.6, y_length*0.6, roomTableHeight+spawnUpperBoundOffset)
tableBounds = [min_val,max_val]
self.scene_manager.populate_scene(tableBounds=tableBounds) # populate the scene once
else:
self.scene_manager.populate_scene()
if self.sequential:
sequence_length = self.sample("sequence_step_count")
step_time = self.sample("sequence_step_time")
for step in range(sequence_length):
self.scene_manager.update_scene(step_time=step_time, step_index=step)
groundtruth = self.output_manager.capture_groundtruth(
self.index, step_index=step, sequence_length=sequence_length
)
if step == 0:
Logger.print("stepping through scene...")
        # if generating amodal annotations
elif amodal:
# simulate physical dropping of objects
self.scene_manager.update_scene()
# refresh UI rendering
self.sim_context.render()
# pause simulation
self.sim_context.pause()
# stop all object motion and remove objects not on tabletop
objects = self.scene_manager.objs.copy()
objects_filtered = []
# remove objects outside tabletop regions after simulation
for obj in objects:
obj.coord, quaternion = obj.xform_prim.get_world_pose()
obj.coord = np.array(obj.coord, dtype=np.float32)
# if object is not on tabletop after simulation, remove object
if (abs(obj.coord[0]) > (roomTableSize[0]/2)) \
or (abs(obj.coord[1]) > (roomTableSize[1]/2)) \
or (abs(obj.coord[2]) < roomTableSize[2]):
# remove object by turning off visibility of object
obj.off_prim()
# else object on tabletop, add obj to filtered list
else:
objects_filtered.append(obj)
self.scene_manager.objs = objects_filtered
# if no objects left on tabletop, regenerate scene
if len(self.scene_manager.objs) == 0:
print("No objects found on tabletop, regenerating scene.")
self.scene_manager.finish_scene()
return None, img_index, ann_index, img_list, ann_list, regen_scene
else:
regen_scene = False
print("\nNumber of Objects on tabletop:", len(self.scene_manager.objs))
            # get camera coordinates based on a hemisphere of radius r above the tabletop
def camera_orbit_coord(r = 12, tableTopHeight=10):
"""
                Constrains the camera location to a hemispherical orbit around the tabletop origin.
                The hemisphere's z origin is offset by the given tableTopHeight.
"""
u = random.uniform(0,1)
v = random.uniform(0,1)
phi = math.acos(1.0 - v) # phi: [0,0.5*pi]
theta = 2.0 * math.pi * u # theta: [0,2*pi]
x = r * math.cos(theta) * math.sin(phi)
y = r * math.sin(theta) * math.sin(phi)
z = r * math.cos(phi) + tableTopHeight # add table height offset
return np.array([x,y,z])
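            # Note on the sampling above: with u, v ~ Uniform(0, 1), theta = 2*pi*u and phi = acos(1 - v)
            # make cos(phi) uniform on [0, 1], so the returned points are distributed uniformly (by surface
            # area) over the upper hemisphere of radius r centred above the tabletop.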
            # Randomly move camera and light coordinates, constrained between two concentric hemispheres above the tabletop
numViews = self.params["num_views"]
# get hemisphere radius bounds
autoHemisphereRadius = self.sample("auto_hemisphere_radius")
if not autoHemisphereRadius:
camHemisphereRadiusMin = self.sample("cam_hemisphere_radius_min")
camHemisphereRadiusMax = self.sample("cam_hemisphere_radius_max")
lightHemisphereRadiusMin = self.sample("spherelight_hemisphere_radius_min")
lightHemisphereRadiusMax = self.sample("spherelight_hemisphere_radius_max")
else:
camHemisphereRadiusMin = max(x_width,y_length) * 0.8
camHemisphereRadiusMax = camHemisphereRadiusMin + 0.7*camHemisphereRadiusMin
lightHemisphereRadiusMin = camHemisphereRadiusMax + 0.1
lightHemisphereRadiusMax = lightHemisphereRadiusMin + 1
print(x_width,y_length)
print("\n===Camera & Light Hemisphere Parameters===")
print(f"autoHemisphereRadius:{autoHemisphereRadius}")
print(f"camHemisphereRadiusMin = {camHemisphereRadiusMin}")
print(f"camHemisphereRadiusMax = {camHemisphereRadiusMax}")
print(f"lightHemisphereRadiusMin = {lightHemisphereRadiusMin}")
print(f"lightHemisphereRadiusMax = {lightHemisphereRadiusMax}")
Logger.print(f"\n=== Capturing Groundtruth for each viewport in scene ===\n")
for view_id in range(numViews):
random.seed(None)
Logger.print(f"\n==> Scene: {self.index}, View: {view_id} <==\n")
# resample radius of camera hemisphere between min and max radii bounds
r = random.uniform(camHemisphereRadiusMin,camHemisphereRadiusMax)
print('sampled radius r of camera hemisphere:',r)
# resample camera coordinates and rotate camera to look at tabletop surface center
cam_coord_w = camera_orbit_coord(r=r,tableTopHeight=roomTableHeight+0.2)
print("sampled camera coordinate:",cam_coord_w)
self.scene_manager.camera.translate(cam_coord_w)
self.scene_manager.camera.translate_rotate(target=(0,0,roomTableHeight)) #target coordinates
# initialise ambient lighting as 0 (for ray tracing), path tracing not affected
rtx_mode = "/rtx"
ambient_light_intensity = 0 #random.uniform(0.2,3.5)
set_carb_setting(kit._carb_settings, rtx_mode + "/sceneDb/ambientLightIntensity", ambient_light_intensity)
# Enable indirect diffuse GI (for ray tracing)
set_carb_setting(kit._carb_settings, rtx_mode + "/indirectDiffuse/enabled", True)
# Reset and delete all lights
from omni.isaac.core.utils import prims
for light in self.scene_manager.lights:
prims.delete_prim(light.path)
# Resample number of lights in viewport
self.scene_manager.lights = []
for grp_index, group in enumerate(self.scene_manager.sample("groups")):
# adjust ceiling light parameters
if group == "ceilinglights":
for lightIndex, light in enumerate(self.scene_manager.ceilinglights):
if lightIndex == 0:
new_intensity = light.sample("light_intensity")
if light.sample("light_temp_enabled"):
new_temp = light.sample("light_temp")
# change light intensity
light.attributes["intensity"] = new_intensity
light.prim.GetAttribute("intensity").Set(light.attributes["intensity"])
# change light temperature
if light.sample("light_temp_enabled"):
light.attributes["colorTemperature"] = new_temp
light.prim.GetAttribute("colorTemperature").Set(light.attributes["colorTemperature"])
# adjust spherical light parameters
if group == "lights":
num_lights = self.scene_manager.sample("light_count", group=group)
for i in range(num_lights):
path = "{}/Lights/lights_{}".format( self.scene_manager.scene_path, len(self.scene_manager.lights))
light = Light(self.scene_manager.sim_app, self.scene_manager.sim_context, path, self.scene_manager.camera, group)
# change light intensity
light.attributes["intensity"] = light.sample("light_intensity")
light.prim.GetAttribute("intensity").Set(light.attributes["intensity"])
# change light temperature
if light.sample("light_temp_enabled"):
light.attributes["colorTemperature"] =light.sample("light_temp")
light.prim.GetAttribute("colorTemperature").Set(light.attributes["colorTemperature"])
# change light coordinates
light_coord_w = camera_orbit_coord(r=random.uniform(lightHemisphereRadiusMin,lightHemisphereRadiusMax),tableTopHeight=roomTableHeight+0.2)
light.translate(light_coord_w)
light.coord, quaternion = light.xform_prim.get_world_pose()
light.coord = np.array(light.coord, dtype=np.float32)
self.scene_manager.lights.append(light)
print(f"Number of sphere lights in scene: {len(self.scene_manager.lights)}")
# capture groundtruth of entire viewpoint
groundtruth, img_index, ann_index, img_list, ann_list = \
self.output_manager.capture_amodal_groundtruth(self.index,
self.scene_manager,
img_index, ann_index, view_id,
img_list, ann_list
)
else:
self.scene_manager.update_scene()
groundtruth = self.output_manager.capture_groundtruth(self.index)
# finish the scene and reset prims in scene
self.scene_manager.finish_scene()
return groundtruth, img_index, ann_index, img_list, ann_list, regen_scene
def setup_data_output(self):
""" Create output directories and copy input files to output. """
# Overwrite output directory, if needed
if self.params["overwrite"]:
shutil.rmtree(self.output_dir, ignore_errors=True)
# Create output directory
os.makedirs(self.output_dir, exist_ok=True)
# Create output directories, as needed
self.output_data_dir = os.path.join(self.output_dir, "data")
self.parameter_dir = os.path.join(self.output_dir, "parameters")
self.parameter_profiles_dir = os.path.join(self.parameter_dir, "profiles")
self.log_dir = os.path.join(self.output_dir, "log")
self.content_log_path = os.path.join(self.log_dir, "sampling_log.yaml")
os.makedirs(self.output_data_dir, exist_ok=True)
os.makedirs(self.parameter_profiles_dir, exist_ok=True)
os.makedirs(self.log_dir, exist_ok=True)
# Copy input parameters file to output
input_file_name = os.path.basename(self.params["file_path"])
input_file_copy = os.path.join(self.parameter_dir, input_file_name)
shutil.copy(self.params["file_path"], input_file_copy)
# Copy profile parameters file(s) to output
if self.params["profile_files"]:
for profile_file in self.params["profile_files"]:
profile_file_name = os.path.basename(profile_file)
profile_file_copy = os.path.join(self.parameter_profiles_dir, profile_file_name)
shutil.copy(profile_file, profile_file_copy)
def get_output_dir(params):
""" Determine output directory to store datasets.
"""
if params["output_dir"].startswith("/"):
output_dir = params["output_dir"]
elif params["output_dir"].startswith("*"):
output_dir = os.path.join(Distribution.mount, params["output_dir"][2:])
else:
output_dir = os.path.join(os.path.dirname(__file__), "..", "datasets", params["output_dir"])
return output_dir
def get_starting_index(params, output_dir):
""" Determine starting index of dataset. """
if params["overwrite"]:
return 0
output_data_dir = os.path.join(output_dir, "data")
if not os.path.exists(output_data_dir):
return 0
def find_min_missing(indices):
if indices:
indices.sort()
for i in range(indices[-1]):
if i not in indices:
return i
return indices[-1]
else:
return -1
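    # Illustrative behaviour of the helper above: find_min_missing([0, 1, 3]) -> 2 (first gap),
    # find_min_missing([0, 1, 2]) -> 2 (last index when there is no gap), find_min_missing([]) -> -1.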
camera_dirs = [os.path.join(output_data_dir, sub_dir) for sub_dir in os.listdir(output_data_dir)]
min_indices = []
for camera_dir in camera_dirs:
data_dirs = [os.path.join(camera_dir, sub_dir) for sub_dir in os.listdir(camera_dir)]
for data_dir in data_dirs:
indices = []
for filename in os.listdir(data_dir):
try:
if "_" in filename:
index = int(filename[: filename.rfind("_")])
else:
index = int(filename[: filename.rfind(".")])
indices.append(index)
except:
pass
min_index = find_min_missing(indices)
min_indices.append(min_index)
if min_indices:
minest_index = min(min_indices)
return minest_index + 1
else:
return 0
def assert_dataset_complete(params, index):
""" Check if dataset is already complete. """
num_scenes = params["num_scenes"]
if index >= num_scenes:
print(
            'Dataset is completed. Number of generated samples {} satisfies "num_scenes" {}.'.format(index, num_scenes)
)
sys.exit()
else:
print("Starting at index ", index)
def define_arguments():
""" Define command line arguments. """
parser = argparse.ArgumentParser()
parser.add_argument("--input", default="parameters/warehouse.yaml", help="Path to input parameter file")
parser.add_argument(
"--visualize-models",
"--visualize_models",
action="store_true",
help="Output visuals of all object models defined in input parameter file, instead of outputting a dataset.",
)
parser.add_argument("--mount", default="/tmp/composer", help="Path to mount symbolized in parameter files via '*'.")
parser.add_argument("--headless", action="store_true", help="Will not launch Isaac SIM window.")
parser.add_argument("--nap", action="store_true", help="Will nap Isaac SIM after the first scene is generated.")
parser.add_argument("--overwrite", action="store_true", help="Overwrites dataset in output directory.")
parser.add_argument("--output", type=str, help="Output directory. Overrides 'output_dir' param.")
parser.add_argument(
"--num-scenes", "--num_scenes", type=int, help="Num scenes in dataset. Overrides 'num_scenes' param."
)
parser.add_argument(
"--num-views", "--num_views", type=int, help="Num Views in scenes. Overrides 'num_views' param."
)
parser.add_argument(
"--save-segmentation-data", "--save_segmentation_data", action="store_true", help="Save Segmentation data as PNG, Depth image as .pfm. Overrides 'save_segmentation_data' param."
)
parser.add_argument(
"--nucleus-server", "--nucleus_server", type=str, help="Nucleus Server URL. Overrides 'nucleus_server' param."
)
return parser
if __name__ == "__main__":
# Create argument parser
parser = define_arguments()
args, _ = parser.parse_known_args()
# Parse input parameter file
parser = Parser(args)
params = parser.params
#print("params:",params)
Sampler.params = params
sample = Sampler().sample
# Determine output directory
output_dir = get_output_dir(params)
# Run Composer in Visualize mode
if args.visualize_models:
from visualize import Visualizer
visuals = Visualizer(parser, params, output_dir)
visuals.visualize_models()
# Handle shutdown
visuals.composer.sim_context.clear_instance()
visuals.composer.sim_app.close()
sys.exit()
# Set verbose mode
Logger.verbose = params["verbose"]
# Get starting index of dataset
index = get_starting_index(params, output_dir)
# if not overwrite
json_files = []
if not params["overwrite"] and os.path.isdir(output_dir):
# Check if annotation_final.json is present, continue from last scene index
json_files = [pos_json for pos_json in os.listdir(output_dir) if pos_json.endswith('.json')]
if len(json_files)>0:
last_scene_index = -1
last_json_path = ""
for i in json_files:
if i != "annotation_final.json":
json_index = int(i.split('_')[-1].split('.')[0])
if json_index >= last_scene_index:
last_scene_index = json_index
last_json_path = os.path.join(output_dir,i)
# get current index
index = last_scene_index + 1
# read latest json file
f = open(last_json_path)
data = json.load(f)
last_img_index = max(data['images'][-1]['id'],-1)
last_ann_index = max(data['annotations'][-1]['id'],-1)
f.close()
            # remove images beyond the last scene index; these images do not have annotations
img_files = [img_path for img_path in os.listdir(output_dir) if img_path.endswith('.png')]
for path, subdirs, files in os.walk(output_dir):
for name in files:
if name.endswith('.png') or name.endswith('.pfm'):
img_scene = int(name.split("_")[0])
if img_scene > last_scene_index:
img_path = os.path.join(path, name)
os.remove(img_path)
print(f"Removing Images from scene {index} onwards.")
print(f"Continuing from scene {index}.")
# Check if dataset is already complete
assert_dataset_complete(params, index)
# Initialize composer
composer = Composer(params, index, output_dir)
metrics = Metrics(composer.log_dir, composer.content_log_path)
if not params["overwrite"] and os.path.isdir(output_dir) and len(json_files) > 0:
img_index, ann_index = last_img_index+1, last_ann_index+1
else:
img_index, ann_index = 1, 1
img_list, ann_list = [],[]
total_st = time.time()
# Generate dataset
while composer.index < params["num_scenes"]:
# get the start time
st = time.time()
regen_scene = True
while regen_scene:
_, img_index, ann_index, img_list, ann_list, regen_scene = composer.generate_scene(img_index, ann_index,img_list,ann_list,regen_scene)
        # remove all images that are not saved in the json/csv
scene_no = composer.index
        if (scene_no % params["checkpoint_interval"]) == 0 and (scene_no != 0): # save a checkpoint every checkpoint_interval scenes
            gc.collect() # Force garbage collection to release unreferenced memory
date_created = str(datetime.datetime.now())
# create annotation file
coco_json = {
"info": {
"description": "SynTable",
"url": "nil",
"version": "0.1.0",
"year": 2022,
"contributor": "SynTable",
"date_created": date_created
},
"licenses": [
{
"id": 1,
"name": "Attribution-NonCommercial-ShareAlike License",
"url": "http://creativecommons.org/licenses/by-nc-sa/2.0/"
}
],
"categories": [
{
"id": 1,
"name": "object",
"supercategory": "shape"
}
],
"images":img_list,
"annotations":ann_list}
# if save background segmentation
if params["save_background"]:
coco_json["categories"].append({
"id": 0,
"name": "background",
"supercategory": "shape"
})
# save annotation dict
with open(f'{output_dir}/annotation_{scene_no}.json', 'w') as write_file:
json.dump(coco_json, write_file, indent=4)
print(f"\n[Checkpoint] Finished scene {scene_no}, saving annotations to {output_dir}/annotation_{scene_no}.json")
if (scene_no + 1) != params["num_scenes"]:
# reset lists to prevent memory error
img_list, ann_list = [],[]
coco_json = {}
composer.index += 1
# get the end time
et = time.time()
# get the execution time
elapsed_time = time.time() - st
print(f'\nExecution time for scene {scene_no}:', time.strftime("%H:%M:%S", time.gmtime(elapsed_time)))
date_created = str(datetime.datetime.now())
# create annotation file
coco_json = {
"info": {
"description": "SynTable",
"url": "nil",
"version": "0.1.0",
"year": 2022,
"contributor": "SynTable",
"date_created": date_created
},
"licenses": [
{
"id": 1,
"name": "Attribution-NonCommercial-ShareAlike License",
"url": "http://creativecommons.org/licenses/by-nc-sa/2.0/"
}
],
"categories": [
{
"id": 1,
"name": "object",
"supercategory": "shape"
}
],
"images":img_list,
"annotations":ann_list}
# if save background segmentation
if params["save_background"]:
coco_json["categories"].append({
"id": 0,
"name": "background",
"supercategory": "shape"
})
# save json
with open(f'{output_dir}/annotation_{scene_no}.json', 'w') as write_file:
json.dump(coco_json, write_file, indent=4)
print(f"\n[End] Finished last scene {scene_no}, saving annotations to {output_dir}/annotation_{scene_no}.json")
# reset lists to prevent out of memory (oom) error
del img_list
del ann_list
del coco_json
    gc.collect() # Force garbage collection to release unreferenced memory
elapsed_time = time.time() - total_st
print(f'\nExecution time for all {params["num_scenes"]} scenes * {params["num_views"]} views:', time.strftime("%H:%M:%S", time.gmtime(elapsed_time)))
# Handle shutdown
composer.output_manager.data_writer.stop_threads()
composer.sim_context.clear_instance()
composer.sim_app.close()
# Output performance metrics
metrics.output_performance_metrics()
# concatenate all coco.json checkpoint files to final coco.json
final_json_path = f'{output_dir}/annotation_final.json'
json_files = [os.path.join(output_dir,pos_json) for pos_json in os.listdir(output_dir) if (pos_json.endswith('.json') and os.path.join(output_dir,pos_json) != final_json_path)]
json_files = sorted(json_files, key=lambda x: int(x.split("_")[-1].split(".")[0]))
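    # The sort key extracts the scene index from each checkpoint filename, e.g. a hypothetical
    # ".../annotation_12.json" sorts by the integer 12.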
coco_json = {"info":{},"licenses":[],"categories":[],"images":[],"annotations":[]}
for i, file in enumerate(json_files):
if file != final_json_path:
f = open(file)
data = json.load(f)
if i == 0:
coco_json["info"] = data["info"]
coco_json["licenses"] = data["licenses"]
coco_json["categories"] = data["categories"]
coco_json["images"].extend(data["images"])
coco_json["annotations"].extend(data["annotations"])
f.close()
with open(final_json_path, 'w') as write_file:
json.dump(coco_json, write_file, indent=4)
# visualize annotations
if params["save_segmentation_data"]:
print("[INFO] Generating occlusion masks...")
rgb_dir = f"{output_dir}/data/mono/rgb"
occ_dir = f"{output_dir}/data/mono/occlusion"
instance_dir = f"{output_dir}/data/mono/instance"
vis_dir = f"{output_dir}/data/mono/visualize"
vis_occ_dir = f"{vis_dir}/occlusion"
vis_instance_dir = f"{vis_dir}/instance"
# make visualisation output directory
for dir in [vis_dir,vis_occ_dir, vis_instance_dir]:
if not os.path.exists(dir):
os.makedirs(dir)
# iterate through scenes
rgb_paths = [pos_json for pos_json in os.listdir(rgb_dir) if pos_json.endswith('.png')]
for scene_index in range(0,params["num_scenes"]):
# scene_index = str(scene_index_raw) +"_"+str(view_id)
for view_id in range(0,params["num_views"]):
rgb_img_list = glob.glob(f"{rgb_dir}/{scene_index}_{view_id}.png")
rgb_img = cv2.imread(rgb_img_list[0], cv2.IMREAD_UNCHANGED)
occ_img_list = glob.glob(f"{occ_dir}/{scene_index}_{view_id}_*.png")
#occ_mask_list = []
if len(occ_img_list) > 0:
occ_img = rgb_img.copy()
overlay = rgb_img.copy()
combined_mask = np.zeros((occ_img.shape[0],occ_img.shape[1]))
background = f"{occ_dir}/{scene_index}_background.png"
# iterate through all occlusion masks
for i in range(len(occ_img_list)):
occ_mask_path = occ_img_list[i]
if occ_mask_path == background:
occ_img_back = rgb_img.copy()
overlay_back = rgb_img.copy()
occluded_mask = cv2.imread(occ_mask_path, cv2.IMREAD_UNCHANGED)
occluded_mask = occluded_mask.astype(bool) # boolean mask
overlay_back[occluded_mask] = [0, 0, 255]
alpha =0.5
occ_img_back = cv2.addWeighted(overlay_back, alpha, occ_img_back, 1 - alpha, 0, occ_img_back)
occ_save_path = f"{vis_occ_dir}/{scene_index}_{view_id}_background.png"
cv2.imwrite(occ_save_path, occ_img_back)
else:
occluded_mask = cv2.imread(occ_mask_path, cv2.IMREAD_UNCHANGED)
combined_mask += occluded_mask
combined_mask = combined_mask.astype(bool) # boolean mask
overlay[combined_mask] = [0, 0, 255]
alpha =0.5
occ_img = cv2.addWeighted(overlay, alpha, occ_img, 1 - alpha, 0, occ_img)
occ_save_path = f"{vis_occ_dir}/{scene_index}_{view_id}.png"
cv2.imwrite(occ_save_path, occ_img)
combined_mask = combined_mask.astype('uint8')
occ_save_path = f"{vis_occ_dir}/{scene_index}_{view_id}_mask.png"
cv2.imwrite(occ_save_path, combined_mask*255)
vis_img_list = glob.glob(f"{instance_dir}/{scene_index}_{view_id}_*.png")
if len(vis_img_list) > 0:
vis_img = rgb_img.copy()
overlay = rgb_img.copy()
background = f"{instance_dir}/{scene_index}_{view_id}_background.png"
                    # iterate through all visible (instance) masks
for i in range(len(vis_img_list)):
vis_mask_path = vis_img_list[i]
if vis_mask_path == background:
vis_img_back = rgb_img.copy()
overlay_back = rgb_img.copy()
visible_mask = cv2.imread(vis_mask_path, cv2.IMREAD_UNCHANGED)
visible_mask = visible_mask.astype(bool) # boolean mask
overlay_back[visible_mask] = [0, 0, 255]
alpha =0.5
vis_img_back = cv2.addWeighted(overlay_back, alpha, vis_img_back, 1 - alpha, 0, vis_img_back)
vis_save_path = f"{vis_instance_dir}/{scene_index}_{view_id}_background.png"
cv2.imwrite(vis_save_path, vis_img_back)
else:
visible_mask = cv2.imread(vis_mask_path, cv2.IMREAD_UNCHANGED)
vis_combined_mask = visible_mask.astype(bool) # boolean mask
colour = list(np.random.choice(range(256), size=3))
overlay[vis_combined_mask] = colour
alpha =0.5
vis_img = cv2.addWeighted(overlay, alpha, vis_img, 1 - alpha, 0, vis_img)
vis_save_path = f"{vis_instance_dir}/{scene_index}_{view_id}.png"
cv2.imwrite(vis_save_path,vis_img)
| 33,426 | Python | 43.274172 | 185 | 0.55397 |
ngzhili/SynTable/syntable_composer/src/input/parse.py |
import copy
import numpy as np
import os
import yaml
from distributions import Distribution, Choice, Normal, Range, Uniform, Walk
class Parser:
""" For parsing the input parameterization to Composer. """
def __init__(self, args):
""" Construct Parser. Parse input file. """
self.args = args
self.global_group = "[[global]]"
self.param_suffix_to_file_type = {
"model": [".usd", ".usdz", ".usda", ".usdc"],
"texture": [".png", ".jpg", ".jpeg", ".hdr", ".exr"],
"material": [".mdl"],
}
self.no_eval_check_params = {"output_dir", "nucleus_server", "inherit", "profiles"}
Distribution.mount = args.mount
Distribution.param_suffix_to_file_type = self.param_suffix_to_file_type
self.default_params = self.parse_param_set("parameters/profiles/default.yaml", default=True)
additional_params_to_default_set = {"inherit": "", "profiles": [], "file_path": "", "profile_files": []}
self.default_params = {**additional_params_to_default_set, **self.default_params}
self.initialize_params(self.default_params)
self.params = self.parse_input(self.args.input)
def evaluate_param(self, key, val):
""" Evaluate a parameter value in Python """
        # Skip evaluation on certain parameters with string values
if not self.param_is_evaluated(key, val):
return val
if type(val) is str and len(val) > 0:
val = eval(val)
if type(val) in (tuple, list):
try:
val = np.array(val, dtype=np.float32)
except:
pass
if isinstance(val, Distribution):
val.setup(key)
if type(val) in (tuple, list):
elems = val
val = [self.evaluate_param(key, sub_elem) for sub_elem in elems]
return val
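        # Illustrative examples (argument forms are hypothetical; the Distribution classes are imported above):
        #   "Uniform((0, 0, 0), (1, 1, 1))" -> a Uniform distribution instance, set up for this key
        #   "(1, 2, 3)"                     -> np.array([1., 2., 3.], dtype=np.float32)
        #   "/Isaac/Props/example.usd"      -> returned unchanged (paths starting with "/" are not evaluated)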
def param_is_evaluated(self, key, val):
if type(val) is np.ndarray:
return True
return not (key in self.no_eval_check_params or not val or (type(val) is str and val.startswith("/")))
def initialize_params(self, params, default=False):
""" Evaluate parameter values in Python. Verify parameter name and value type. """
for key, val in params.items():
if type(val) is dict:
self.initialize_params(val)
else:
# Evaluate parameter
try:
val = self.evaluate_param(key, val)
params[key] = val
except Exception:
raise ValueError("Unable to evaluate parameter '{}' with value '{}'".format(key, val))
# Verify parameter
if not default:
if key.startswith("obj") or key.startswith("light"):
default_param_set = self.default_params["groups"][self.global_group]
else:
default_param_set = self.default_params
# Verify parameter name
if key not in default_param_set and key:
raise ValueError("Parameter '{}' is not a parameter.".format(key))
# Verify parameter value type
default_val = default_param_set[key]
if isinstance(val, Distribution):
val_type = val.get_type()
else:
val_type = type(val)
if isinstance(default_val, Distribution):
default_val_type = default_val.get_type()
else:
default_val_type = type(default_val)
if default_val_type in (int, float):
# Integer and Float equivalence
default_val_type = [int, float]
elif default_val_type in (tuple, list, np.ndarray):
# Tuple, List, and Array equivalence
default_val_type = [tuple, list, np.ndarray]
else:
default_val_type = [default_val_type]
if val_type not in default_val_type:
raise ValueError(
"Parameter '{}' has incorrect value type {}. Value type must be in {}.".format(
key, val_type, default_val_type
)
)
def verify_nucleus_paths(self, params):
""" Verify parameter values that point to Nucleus server file paths. """
import omni.client
for key, val in params.items():
if type(val) is dict:
self.verify_nucleus_paths(val)
# Check Nucleus server file path of certain parameters
elif key.endswith(("model", "texture", "material")) and not isinstance(val, Distribution) and val:
# Check path starts with "/"
if not val.startswith("/"):
raise ValueError(
"Parameter '{}' has path '{}' which must start with a forward slash.".format(key, val)
)
# Check file type
param_file_type = val[val.rfind(".") :].lower()
correct_file_types = self.param_suffix_to_file_type.get(key[key.rfind("_") + 1 :], [])
if param_file_type not in correct_file_types:
raise ValueError(
"Parameter '{}' has path '{}' with incorrect file type. File type must be one of {}.".format(
key, val, correct_file_types
)
)
# Check file can be found
file_path = self.nucleus_server + val
(exists_result, _, _) = omni.client.read_file(file_path)
is_file = exists_result.name.startswith("OK")
if not is_file:
raise ValueError(
"Parameter '{}' has path '{}' not found on '{}'.".format(key, val, self.nucleus_server)
)
def override_params(self, params):
""" Override params with CLI args. """
if self.args.output:
params["output_dir"] = self.args.output
if self.args.num_scenes is not None:
params["num_scenes"] = self.args.num_scenes
if self.args.mount:
params["mount"] = self.args.mount
params["overwrite"] = self.args.overwrite
params["headless"] = self.args.headless
params["nap"] = self.args.nap
params["visualize_models"] = self.args.visualize_models
def parse_param_set(self, input, parse_from_file=True, default=False):
""" Parse input parameter file. """
if parse_from_file:
# Determine parameter file path
if input.startswith("/"):
input_file = input
elif input.startswith("*"):
input_file = os.path.join(Distribution.mount, input[2:])
else:
input_file = os.path.join(os.path.dirname(__file__), "../../", input)
# Read parameter file
with open(input_file, "r") as f:
params = yaml.safe_load(f)
# Add a parameter for the input file path
params["file_path"] = input_file
else:
params = input
# Process parameter groups
groups = {}
groups[self.global_group] = {}
for key, val in list(params.items()):
# Add group
if type(val) is dict:
if key in groups:
raise ValueError("Parameter group name is not unique: {}".format(key))
groups[key] = val
params.pop(key)
# Add param to global group
if key.startswith("obj_") or key.startswith("light_"):
groups[self.global_group][key] = val
params.pop(key)
params["groups"] = groups
return params
def parse_params(self, params):
""" Parse params into a final parameter set. """
import omni.client
# Add a global group, if needed
if self.global_group not in params["groups"]:
params["groups"][self.global_group] = {}
# Parse all profile parameter sets
profile_param_sets = [self.parse_param_set(profile) for profile in params.get("profiles", [])[::-1]]
# Set default as lowest param set and input file param set as highest
param_sets = [copy.deepcopy(self.default_params)] + profile_param_sets + [params]
# Union parameters sets
final_params = param_sets[0]
for params in param_sets[1:]:
global_group_params = params["groups"][self.global_group]
sub_global_group_params = final_params["groups"][self.global_group]
for group in params["groups"]:
if group == self.global_group:
continue
group_params = params["groups"][group]
if "inherit" in group_params:
inherited_group = group_params["inherit"]
if inherited_group not in final_params["groups"]:
raise ValueError(
"In group '{}' cannot find the inherited group '{}'".format(group, inherited_group)
)
inherited_params = final_params["groups"][inherited_group]
else:
inherited_params = {}
final_params["groups"][group] = {
**sub_global_group_params,
**inherited_params,
**global_group_params,
**group_params,
}
final_params["groups"][self.global_group] = {
**final_params["groups"][self.global_group],
**params["groups"][self.global_group],
}
final_groups = final_params["groups"].copy()
final_params = {**final_params, **params}
final_params["groups"] = final_groups
# Remove non-final groups
for group in list(final_params["groups"].keys()):
if group not in param_sets[-1]["groups"]:
final_params["groups"].pop(group)
final_params["groups"].pop(self.global_group)
params = final_params
# Set profile file paths
params["profile_files"] = [profile_params["file_path"] for profile_params in profile_param_sets]
# Set Nucleus server and check connection
if self.args.nucleus_server:
params["nucleus_server"] = self.args.nucleus_server
if "://" not in params["nucleus_server"]:
params["nucleus_server"] = "omniverse://" + params["nucleus_server"]
self.nucleus_server = params["nucleus_server"]
(result, _) = omni.client.stat(self.nucleus_server)
if not result.name.startswith("OK"):
raise ConnectionError("Could not connect to the Nucleus server: {}".format(self.nucleus_server))
Distribution.nucleus_server = params["nucleus_server"]
# Initialize params
self.initialize_params(params)
# Verify Nucleus server paths
self.verify_nucleus_paths(params)
return params
def parse_input(self, input, parse_from_file=True):
""" Parse all input parameter files. """
if parse_from_file:
print("Parsing and checking input parameterization.")
# Parse input parameter file
params = self.parse_param_set(input, parse_from_file=parse_from_file)
# Process params
params = self.parse_params(params)
# Override parameters with CLI args
self.override_params(params)
return params
| 11,936 | Python | 37.631068 | 117 | 0.528988 |
ngzhili/SynTable/syntable_composer/src/input/__init__.py | from .parse import Parser
| 26 | Python | 12.499994 | 25 | 0.807692 |
ngzhili/SynTable/syntable_composer/src/input/parse1.py |
import copy
import numpy as np
import os
import yaml
from distributions import Distribution, Choice, Normal, Range, Uniform, Walk
class Parser:
""" For parsing the input parameterization to Composer. """
def __init__(self, args):
""" Construct Parser. Parse input file. """
self.args = args
self.global_group = "[[global]]"
self.param_suffix_to_file_type = {
"model": [".usd", ".usdz", ".usda", ".usdc"],
"texture": [".png", ".jpg", ".jpeg", ".hdr", ".exr"],
"material": [".mdl"],
}
self.no_eval_check_params = {"output_dir", "nucleus_server", "inherit", "profiles"}
Distribution.mount = args.mount
Distribution.param_suffix_to_file_type = self.param_suffix_to_file_type
self.default_params = self.parse_param_set("parameters/profiles/default1.yaml", default=True)
additional_params_to_default_set = {"inherit": "", "profiles": [], "file_path": "", "profile_files": []}
self.default_params = {**additional_params_to_default_set, **self.default_params}
self.initialize_params(self.default_params)
self.params = self.parse_input(self.args.input)
def evaluate_param(self, key, val):
""" Evaluate a parameter value in Python """
        # Skip evaluation on certain parameters with string values
if not self.param_is_evaluated(key, val):
return val
if type(val) is str and len(val) > 0:
val = eval(val)
if type(val) in (tuple, list):
try:
val = np.array(val, dtype=np.float32)
except:
pass
if isinstance(val, Distribution):
val.setup(key)
if type(val) in (tuple, list):
elems = val
val = [self.evaluate_param(key, sub_elem) for sub_elem in elems]
return val
def param_is_evaluated(self, key, val):
if type(val) is np.ndarray:
return True
return not (key in self.no_eval_check_params or not val or (type(val) is str and val.startswith("/")))
def initialize_params(self, params, default=False):
""" Evaluate parameter values in Python. Verify parameter name and value type. """
for key, val in params.items():
if type(val) is dict:
self.initialize_params(val)
else:
# Evaluate parameter
try:
val = self.evaluate_param(key, val)
params[key] = val
except Exception:
raise ValueError("Unable to evaluate parameter '{}' with value '{}'".format(key, val))
# Verify parameter
if not default:
if key.startswith("obj") or key.startswith("light"):
default_param_set = self.default_params["groups"][self.global_group]
else:
default_param_set = self.default_params
# Verify parameter name
if key not in default_param_set and key:
raise ValueError("Parameter '{}' is not a parameter.".format(key))
# Verify parameter value type
default_val = default_param_set[key]
if isinstance(val, Distribution):
val_type = val.get_type()
else:
val_type = type(val)
if isinstance(default_val, Distribution):
default_val_type = default_val.get_type()
else:
default_val_type = type(default_val)
if default_val_type in (int, float):
# Integer and Float equivalence
default_val_type = [int, float]
elif default_val_type in (tuple, list, np.ndarray):
# Tuple, List, and Array equivalence
default_val_type = [tuple, list, np.ndarray]
else:
default_val_type = [default_val_type]
if val_type not in default_val_type:
raise ValueError(
"Parameter '{}' has incorrect value type {}. Value type must be in {}.".format(
key, val_type, default_val_type
)
)
def verify_nucleus_paths(self, params):
""" Verify parameter values that point to Nucleus server file paths. """
import omni.client
for key, val in params.items():
if type(val) is dict:
self.verify_nucleus_paths(val)
# Check Nucleus server file path of certain parameters
elif key.endswith(("model", "texture", "material")) and not isinstance(val, Distribution) and val:
# Check path starts with "/"
if not val.startswith("/"):
raise ValueError(
"Parameter '{}' has path '{}' which must start with a forward slash.".format(key, val)
)
# Check file type
param_file_type = val[val.rfind(".") :].lower()
correct_file_types = self.param_suffix_to_file_type.get(key[key.rfind("_") + 1 :], [])
if param_file_type not in correct_file_types:
raise ValueError(
"Parameter '{}' has path '{}' with incorrect file type. File type must be one of {}.".format(
key, val, correct_file_types
)
)
# Check file can be found
file_path = self.nucleus_server + val
(exists_result, _, _) = omni.client.read_file(file_path)
is_file = exists_result.name.startswith("OK")
if not is_file:
raise ValueError(
"Parameter '{}' has path '{}' not found on '{}'.".format(key, val, self.nucleus_server)
)
def override_params(self, params):
""" Override params with CLI args. """
if self.args.output:
params["output_dir"] = self.args.output
if self.args.num_scenes is not None:
params["num_scenes"] = self.args.num_scenes
if self.args.num_views is not None: # added
params["num_views"] = self.args.num_views
if self.args.save_segmentation_data is not None: # added
params["save_segmentation_data"] = self.args.save_segmentation_data
if self.args.mount:
params["mount"] = self.args.mount
params["overwrite"] = self.args.overwrite
params["headless"] = self.args.headless
params["nap"] = self.args.nap
params["visualize_models"] = self.args.visualize_models
def parse_param_set(self, input, parse_from_file=True, default=False):
""" Parse input parameter file. """
if parse_from_file:
# Determine parameter file path
if input.startswith("/"):
input_file = input
elif input.startswith("*"):
input_file = os.path.join(Distribution.mount, input[2:])
else:
input_file = os.path.join(os.path.dirname(__file__), "../../", input)
# Read parameter file
with open(input_file, "r") as f:
params = yaml.safe_load(f)
# Add a parameter for the input file path
params["file_path"] = input_file
else:
params = input
# Process parameter groups
groups = {}
groups[self.global_group] = {}
for key, val in list(params.items()):
# Add group
if type(val) is dict:
if key in groups:
raise ValueError("Parameter group name is not unique: {}".format(key))
groups[key] = val
params.pop(key)
# Add param to global group
if key.startswith("obj_") or key.startswith("light_"):
groups[self.global_group][key] = val
params.pop(key)
params["groups"] = groups
return params
def parse_params(self, params):
""" Parse params into a final parameter set. """
import omni.client
# Add a global group, if needed
if self.global_group not in params["groups"]:
params["groups"][self.global_group] = {}
# Parse all profile parameter sets
profile_param_sets = [self.parse_param_set(profile) for profile in params.get("profiles", [])[::-1]]
# Set default as lowest param set and input file param set as highest
param_sets = [copy.deepcopy(self.default_params)] + profile_param_sets + [params]
# Union parameters sets
final_params = param_sets[0]
for params in param_sets[1:]:
global_group_params = params["groups"][self.global_group]
sub_global_group_params = final_params["groups"][self.global_group]
for group in params["groups"]:
if group == self.global_group:
continue
group_params = params["groups"][group]
if "inherit" in group_params:
inherited_group = group_params["inherit"]
if inherited_group not in final_params["groups"]:
raise ValueError(
"In group '{}' cannot find the inherited group '{}'".format(group, inherited_group)
)
inherited_params = final_params["groups"][inherited_group]
else:
inherited_params = {}
final_params["groups"][group] = {
**sub_global_group_params,
**inherited_params,
**global_group_params,
**group_params,
}
final_params["groups"][self.global_group] = {
**final_params["groups"][self.global_group],
**params["groups"][self.global_group],
}
final_groups = final_params["groups"].copy()
final_params = {**final_params, **params}
final_params["groups"] = final_groups
# Remove non-final groups
for group in list(final_params["groups"].keys()):
if group not in param_sets[-1]["groups"]:
final_params["groups"].pop(group)
final_params["groups"].pop(self.global_group)
params = final_params
# Set profile file paths
params["profile_files"] = [profile_params["file_path"] for profile_params in profile_param_sets]
# Set Nucleus server and check connection
if self.args.nucleus_server:
params["nucleus_server"] = self.args.nucleus_server
if "://" not in params["nucleus_server"]:
params["nucleus_server"] = "omniverse://" + params["nucleus_server"]
self.nucleus_server = params["nucleus_server"]
(result, _) = omni.client.stat(self.nucleus_server)
if not result.name.startswith("OK"):
raise ConnectionError("Could not connect to the Nucleus server: {}".format(self.nucleus_server))
Distribution.nucleus_server = params["nucleus_server"]
# Initialize params
self.initialize_params(params)
# Verify Nucleus server paths
self.verify_nucleus_paths(params)
return params
def parse_input(self, input, parse_from_file=True):
""" Parse all input parameter files. """
if parse_from_file:
print("Parsing and checking input parameterization.")
# Parse input parameter file
params = self.parse_param_set(input, parse_from_file=parse_from_file)
# Process params
params = self.parse_params(params)
# Override parameters with CLI args
self.override_params(params)
return params
| 12,196 | Python | 37.968051 | 117 | 0.530912 |
ngzhili/SynTable/syntable_composer/src/visualize/visualize.py |
import numpy as np
import os
import sys
from PIL import Image, ImageDraw, ImageFont
from distributions import Choice, Walk
from main import Composer
from sampling import Sampler
class Visualizer:
""" For generating visuals of each input object model in the input parameterization. """
def __init__(self, parser, input_params, output_dir):
""" Construct Visualizer. Parameterize Composer to generate the data needed to post-process into model visuals. """
self.parser = parser
self.input_params = input_params
self.output_dir = os.path.join(output_dir, "visuals")
os.makedirs(self.output_dir, exist_ok=True)
# Get all object models from input parameter file
self.obj_models = self.get_all_obj_models()
self.nucleus_server = self.input_params["nucleus_server"]
# Copy model list to output file
model_list = os.path.join(self.output_dir, "models.txt")
with open(model_list, "w") as f:
for obj_model in self.obj_models:
f.write(obj_model)
f.write("\n")
# Filter obj models
if not self.input_params["overwrite"]:
self.filter_obj_models(self.obj_models)
if not self.obj_models:
print("All object model visuals are already created.")
sys.exit()
self.tile_width = 500
self.tile_height = 500
self.obj_size = 1
self.room_size = 10 * self.obj_size
self.cam_distance = 4 * self.obj_size
self.camera_coord = np.array((-self.cam_distance, 0, self.room_size / 2))
self.background_color = (160, 185, 190)
self.group_name = "photoshoot"
# Set hard-coded parameters
self.params = {self.group_name: {}}
self.set_obj_params()
self.set_light_params()
self.set_room_params()
self.set_cam_params()
self.set_other_params()
# Parse parameters
self.params = parser.parse_input(self.params, parse_from_file=False)
# Set parameters
Sampler.params = self.params
# Initiate Composer
self.composer = Composer(self.params, 0, self.output_dir)
def visualize_models(self):
""" Generate samples and post-process captured data into visuals. """
num_models = len(self.obj_models)
for i, obj_model in enumerate(self.obj_models):
print("Model {}/{} - {}".format(i, num_models, obj_model))
self.set_obj_model(obj_model)
# Capture 4 angles per model
outputs = [self.composer.generate_scene() for j in range(4)]
image_matrix = self.process_outputs(outputs)
self.save_visual(obj_model, image_matrix)
def get_all_obj_models(self):
""" Get all object models from input parameterization. """
obj_models = []
groups = self.input_params["groups"]
for group_name, group in groups.items():
obj_count = group["obj_count"]
group_models = group["obj_model"]
if group_models and obj_count:
if type(group_models) is Choice or type(group_models) is Walk:
group_models = group_models.elems
else:
group_models = [group_models]
obj_models.extend(group_models)
# Remove repeats
obj_models = list(set(obj_models))
return obj_models
def filter_obj_models(self, obj_models):
""" Filter out obj models that have already been visualized. """
existing_filenames = set([f for f in os.listdir(self.output_dir)])
        for obj_model in list(obj_models):  # iterate over a copy so in-place removal does not skip elements
filename = self.model_to_filename(obj_model)
if filename in existing_filenames:
obj_models.remove(obj_model)
def model_to_filename(self, obj_model):
""" Map object model's Nucleus path to a filename. """
filename = obj_model.replace("/", "__")
r_index = filename.rfind(".")
filename = filename[:r_index]
filename += ".jpg"
return filename
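        # Illustrative example with a hypothetical path:
        #   "/Props/Blocks/block_A.usd" -> "__Props__Blocks__block_A.jpg"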
def process_outputs(self, outputs):
""" Tile output data from scene into one image matrix. """
rgbs = [groundtruth["DATA"]["RGB"] for groundtruth in outputs]
wireframes = [groundtruth["DATA"]["WIREFRAME"] for groundtruth in outputs]
rgbs = [rgb[:, :, :3] for rgb in rgbs]
top_row_matrix = np.concatenate(rgbs, axis=1)
wireframes = [wireframe[:, :, :3] for wireframe in wireframes]
bottom_row_matrix = np.concatenate(wireframes, axis=1)
image_matrix = np.concatenate([top_row_matrix, bottom_row_matrix], axis=0)
image_matrix = np.array(image_matrix, dtype=np.uint8)
return image_matrix
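        # The resulting matrix tiles the captured views (four angles in visualize_models) into two rows:
        # RGB renders on top and the corresponding wireframe renders below, one tile per camera angle.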
def save_visual(self, obj_model, image_matrix):
""" Save image matrix as image. """
image = Image.fromarray(image_matrix, "RGB")
font_path = os.path.join(os.path.dirname(__file__), "RobotoMono-Regular.ttf")
font = ImageFont.truetype(font_path, 24)
draw = ImageDraw.Draw(image)
width, height = image.size
draw.text((10, 10), obj_model, font=font)
model_name = self.model_to_filename(obj_model)
filename = os.path.join(self.output_dir, model_name)
image.save(filename, "JPEG", quality=90)
def set_cam_params(self):
""" Set camera parameters. """
self.params["camera_coord"] = str(self.camera_coord.tolist())
self.params["camera_rot"] = str((0, 0, 0))
self.params["focal_length"] = 50
def set_room_params(self):
""" Set room parameters. """
self.params["scenario_room_enabled"] = str(True)
self.params["floor_size"] = str(self.room_size)
self.params["wall_height"] = str(self.room_size)
self.params["floor_color"] = str(self.background_color)
self.params["wall_color"] = str(self.background_color)
self.params["ceiling_color"] = str(self.background_color)
self.params["floor_reflectance"] = str(0)
self.params["wall_reflectance"] = str(0)
self.params["ceiling_reflectance"] = str(0)
def set_obj_params(self):
""" Set object parameters. """
group = self.params[self.group_name]
group["obj_coord_camera_relative"] = str(False)
group["obj_rot_camera_relative"] = str(False)
group["obj_coord"] = str((0, 0, self.room_size / 2))
group["obj_rot"] = "Walk([(25, -25, -45), (-25, 25, -225), (-25, 25, -45), (25, -25, -225)])"
group["obj_size"] = str(self.obj_size)
group["obj_count"] = str(1)
def set_light_params(self):
""" Set light parameters. """
group = self.params[self.group_name]
group["light_count"] = str(4)
group["light_coord_camera_relative"] = str(False)
light_offset = 2 * self.obj_size
light_coords = [
self.camera_coord + (0, -light_offset, 0),
self.camera_coord + (0, 0, light_offset),
self.camera_coord + (0, light_offset, 0),
self.camera_coord + (0, 0, -light_offset),
]
light_coords = str([tuple(coord.tolist()) for coord in light_coords])
group["light_coord"] = "Walk(" + light_coords + ")"
group["light_intensity"] = str(40000)
group["light_radius"] = str(0.50)
group["light_color"] = str([200, 200, 200])
def set_other_params(self):
""" Set other parameters. """
self.params["img_width"] = str(self.tile_width)
self.params["img_height"] = str(self.tile_height)
self.params["write_data"] = str(False)
self.params["verbose"] = str(False)
self.params["rgb"] = str(True)
self.params["wireframe"] = str(True)
self.params["nucleus_server"] = str(self.nucleus_server)
self.params["pause"] = str(0.5)
self.params["path_tracing"] = True
def set_obj_model(self, obj_model):
""" Set obj_model parameter. """
group = self.params["groups"][self.group_name]
group["obj_model"] = str(obj_model)
| 8,145 | Python | 34.885462 | 123 | 0.590055 |
ngzhili/SynTable/syntable_composer/src/visualize/__init__.py |
from .visualize import Visualizer
| 35 | Python | 10.999996 | 33 | 0.828571 |
ngzhili/SynTable/syntable_composer/src/sampling/__init__.py | from .sample import Sampler
| 28 | Python | 13.499993 | 27 | 0.821429 |
ngzhili/SynTable/syntable_composer/src/sampling/sample1.py | import numpy as np
from distributions import Distribution
from output import Logger
class Sampler:
""" For managing parameter sampling. """
# Static variable of parameter set
params = None
def __init__(self, group=None):
""" Construct a Sampler. Potentially set an associated group. """
self.group = group
def evaluate(self, val):
""" Evaluate a parameter into a primitive. """
if isinstance(val, Distribution):
val = val.sample()
elif isinstance(val, (list, tuple)):
elems = val
val = [self.evaluate(sub_elem) for sub_elem in elems]
is_numeric = all([type(elem) == int or type(elem) == float for elem in val])
if is_numeric:
val = np.array(val, dtype=np.float32)
return val
def sample(self, key, group=None,tableBounds=None):
""" Sample a parameter. """
if group is None:
group = self.group
if key.startswith("obj") or key.startswith("light") and group:
param_set = Sampler.params["groups"][group]
else:
param_set = Sampler.params
if key in param_set:
val = param_set[key]
else:
print('Warning key "{}" in group "{}" not found in parameter set.'.format(key, group))
return None
if key == "obj_coord" and group != "table" and tableBounds:
min_val = tableBounds[0]
max_val = tableBounds[1]
val.min_val = min_val
val.max_val = max_val
val = self.evaluate(val)
Logger.write_parameter(key, val, group=group)
return val
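        # Minimal usage sketch (group name is hypothetical):
        #   Sampler(group="objects").sample("obj_count")  -> the sampled value of that group's obj_count parameter
        #   Sampler().sample("num_scenes")                -> the value of the global num_scenes parameter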
| 1,686 | Python | 28.086206 | 98 | 0.561684 |
ngzhili/SynTable/syntable_composer/src/sampling/sample.py | import numpy as np
from distributions import Distribution
from output import Logger
class Sampler:
""" For managing parameter sampling. """
# Static variable of parameter set
params = None
def __init__(self, group=None):
""" Construct a Sampler. Potentially set an associated group. """
self.group = group
def evaluate(self, val):
""" Evaluate a parameter into a primitive. """
if isinstance(val, Distribution):
val = val.sample()
elif isinstance(val, (list, tuple)):
elems = val
val = [self.evaluate(sub_elem) for sub_elem in elems]
is_numeric = all([type(elem) == int or type(elem) == float for elem in val])
if is_numeric:
val = np.array(val, dtype=np.float32)
return val
def sample(self, key, group=None):
""" Sample a parameter. """
if group is None:
group = self.group
if key.startswith("obj") or key.startswith("light") and group:
param_set = Sampler.params["groups"][group]
else:
param_set = Sampler.params
if key in param_set:
val = param_set[key]
else:
print('Warning key "{}" in group "{}" not found in parameter set.'.format(key, group))
return None
val = self.evaluate(val)
Logger.write_parameter(key, val, group=group)
return val
| 1,452 | Python | 25.907407 | 98 | 0.568871 |
ngzhili/SynTable/syntable_composer/src/scene/scene1.py | import time
import numpy as np
from random import randint
from output import Logger
# from sampling import Sampler
from sampling.sample1 import Sampler
# from scene import Camera, Light
from scene.light1 import Light
from scene.camera1 import Camera
from scene.object1 import Object
from scene.room1 import Room
def randomNumObjList(num_objs, total_sum):
"""
    Sample a list of num_objs random non-negative integers whose sum is total_sum.
"""
    # Create an array of size num_objs where every element is initialized to 0
arr = [0] * num_objs
    # Distribute total_sum increments so the final list sums to total_sum
for i in range(total_sum) :
        # Increment a randomly chosen element of the array by 1
arr[randint(0, num_objs-1)] += 1
return arr
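# Illustrative example (actual values are random): randomNumObjList(3, 10) might return [4, 2, 4],
# a list of 3 non-negative integers summing to 10.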
class SceneManager:
""" For managing scene set-up and generation. """
def __init__(self, sim_app, sim_context):
""" Construct SceneManager. Set-up scenario in Isaac Sim. """
import omni
self.sim_app = sim_app
self.sim_context = sim_context
self.stage = omni.usd.get_context().get_stage()
self.sample = Sampler().sample
self.scene_path = "/World/Scene"
self.scenario_label = "[[scenario]]"
self.play_frame = False
self.objs = []
self.lights = []
self.camera = Camera(self.sim_app, self.sim_context, "/World/CameraRig", None, group=None)
self.setup_scenario()
def setup_scenario(self):
""" Load in base scenario(s) """
import omni
from omni.isaac.core import SimulationContext
from omni.isaac.core.utils import stage
from omni.isaac.core.utils.stage import get_stage_units
cached_physics_dt = self.sim_context.get_physics_dt()
cached_rendering_dt = self.sim_context.get_rendering_dt()
cached_stage_units = get_stage_units()
self.room = None
if self.sample("scenario_room_enabled"):
# Generate a parameterizable room
self.room = Room(self.sim_app, self.sim_context)
# add table
from scene.room_face1 import RoomTable
group = "table"
path = "/World/Room/table_{}".format(1)
ref = self.sample("nucleus_server") + self.sample("obj_model", group=group)
obj = RoomTable(self.sim_app, self.sim_context, ref, path, "obj", self.camera, group=group)
roomTableMinBounds, roomTableMaxBounds = obj.get_bounds()
roomTableSize = roomTableMaxBounds - roomTableMinBounds # (x,y,z size of table)
roomTableHeight = roomTableSize[-1]
roomTableZCenter = roomTableHeight/2
obj.translate(np.array([0,0,roomTableZCenter]))
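            # Lifting the table by half its height places its base on the floor plane (z = 0),
            # assuming the table model's origin is at its geometric centre.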
self.roomTableSize = roomTableSize
self.roomTable = obj
else:
# Load in a USD scenario
self.load_scenario_model()
# Re-initialize context after we open a stage
self.sim_context = SimulationContext(
physics_dt=cached_physics_dt, rendering_dt=cached_rendering_dt, stage_units_in_meters=cached_stage_units
)
self.stage = omni.usd.get_context().get_stage()
# Set the up axis to the z axis
stage.set_stage_up_axis("z")
# Set scenario label to stage prims
self.set_scenario_label()
# Reset rendering settings
self.sim_app.reset_render_settings()
def set_scenario_label(self):
""" Set scenario label to all prims in stage. """
from pxr import Semantics
for prim in self.stage.Traverse():
path = prim.GetPath()
# print(path)
if path == "/World":
continue
if not prim.HasAPI(Semantics.SemanticsAPI):
sem = Semantics.SemanticsAPI.Apply(prim, "Semantics")
sem.CreateSemanticTypeAttr()
sem.CreateSemanticDataAttr()
else:
sem = Semantics.SemanticsAPI.Get(prim, "Semantics")
continue
typeAttr = sem.GetSemanticTypeAttr()
dataAttr = sem.GetSemanticDataAttr()
typeAttr.Set("class")
dataAttr.Set(self.scenario_label)
def load_scenario_model(self):
""" Load in a USD scenario. """
from omni.isaac.core.utils.stage import open_stage
# Load in base scenario from Nucleus
if self.sample("scenario_model"):
scenario_ref = self.sample("nucleus_server") + self.sample("scenario_model")
open_stage(scenario_ref)
def populate_scene(self, tableBounds=None):
""" Populate a sample's scene a camera, objects, and lights. """
# Update camera
self.camera.place_in_scene()
# Iterate through each group
self.objs = []
self.lights = []
self.ceilinglights = []
if self.sample("randomise_num_of_objs_in_scene"):
MaxObjInScene = self.sample("max_obj_in_scene")
numUniqueObjs = len([i for i in self.sample("groups") if i.lower().startswith("object")])
ObjNumList = randomNumObjList(numUniqueObjs, MaxObjInScene)
for grp_index, group in enumerate(self.sample("groups")):
# spawn objects to scene
if group not in ["table","lights","ceilinglights","backgroundobject"]: # do not add Roomtable here
if self.sample("randomise_num_of_objs_in_scene"):
num_objs = ObjNumList[grp_index] # get number of objects to be generated
else:
num_objs = self.sample("obj_count", group=group)
for i in range(num_objs):
path = "{}/Objects/object_{}".format(self.scene_path, len(self.objs))
ref = self.sample("nucleus_server") + self.sample("obj_model", group=group)
obj = Object(self.sim_app, self.sim_context, ref, path, "obj", self.camera, group,tableBounds=tableBounds)
self.objs.append(obj)
elif group == "ceilinglights":
# Spawn lights
num_lights = self.sample("light_count", group=group)
for i in range(num_lights):
path = "{}/Ceilinglights/ceilinglights_{}".format(self.scene_path, len(self.ceilinglights))
light = Light(self.sim_app, self.sim_context, path, self.camera, group)
self.ceilinglights.append(light)
elif group == "lights":
# Spawn lights
num_lights = self.sample("light_count", group=group)
for i in range(num_lights):
path = "{}/Lights/lights_{}".format(self.scene_path, len(self.lights))
light = Light(self.sim_app, self.sim_context, path, self.camera, group)
self.lights.append(light)
# Update room
if self.room:
self.room.update()
self.roomTable.add_material()
# Add skybox, if needed
self.add_skybox()
def update_scene(self, step_time=None, step_index=0):
""" Update Omniverse after scene is generated. """
from omni.isaac.core.utils.stage import is_stage_loading
# Step positions of objs and lights
if step_time:
self.camera.step(step_time)
for obj in self.objs:
obj.step(step_time)
for light in self.lights:
light.step(step_time)
# Wait for scene to finish loading
while is_stage_loading():
self.sim_context.render()
# Determine if scene is played
scene_assets = self.objs + self.lights
self.play_frame = any([asset.physics for asset in scene_assets])
# Play scene, if needed
if self.play_frame and step_index == 0:
Logger.print("\nPhysically simulating...")
self.sim_context.play()
render = not self.sample("headless")
sim_time = self.sample("physics_simulate_time")
frames_to_simulate = int(sim_time * 60) + 1
for i in range(frames_to_simulate):
self.sim_context.step(render=render)
# Napping
if self.sample("nap"):
print("napping")
while True:
self.sim_context.render()
# Update
if step_index == 0:
Logger.print("\nLoading textures...")
self.sim_context.render()
# Pausing
if step_index == 0:
pause_time = self.sample("pause")
start_time = time.time()
while time.time() - start_time < pause_time:
self.sim_context.render()
def add_skybox(self):
""" Add a DomeLight that creates a textured skybox, if needed. """
from pxr import UsdGeom, UsdLux
from omni.isaac.core.utils.prims import create_prim
sky_texture = self.sample("sky_texture")
sky_light_intensity = self.sample("sky_light_intensity")
if sky_texture:
create_prim(
prim_path="{}/Lights/skybox".format(self.scene_path),
prim_type="DomeLight",
attributes={
UsdLux.Tokens.intensity: sky_light_intensity,
UsdLux.Tokens.specular: 1,
UsdLux.Tokens.textureFile: self.sample("nucleus_server") + sky_texture,
UsdLux.Tokens.textureFormat: UsdLux.Tokens.latlong,
UsdGeom.Tokens.visibility: "inherited",
},
)
def prepare_scene(self, index):
""" Scene preparation step. """
self.valid_sample = True
Logger.start_log_entry(index)
Logger.print("===== Generating Scene: " + str(index) + " =====\n")
def finish_scene(self):
""" Scene finish step. Clean-up variables, Isaac Sim stage. """
from omni.isaac.core.utils.prims import delete_prim
self.objs = []
self.lights = []
self.ceilinglights = []
delete_prim(self.scene_path)
delete_prim("/Looks")
self.sim_context.stop()
self.sim_context.render()
self.play_frame = False
Logger.finish_log_entry()
def print_instance_attributes(self):
for attribute, value in self.__dict__.items():
print(attribute, '=', value)
def reload_table(self):
from omni.isaac.core.utils.prims import delete_prim
from scene.room_face1 import RoomTable
group = "table"
path = "/World/Room/table_{}".format(1)
delete_prim(path) # delete old tables
ref = self.sample("nucleus_server") + self.sample("obj_model", group=group)
obj = RoomTable(self.sim_app, self.sim_context, ref, path, "obj", self.camera, group=group)
roomTableMinBounds, roomTableMaxBounds = obj.get_bounds()
roomTableSize = roomTableMaxBounds - roomTableMinBounds # (x,y,z size of table)
roomTableHeight = roomTableSize[-1]
roomTableZCenter = roomTableHeight/2
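# Raising the table by half its height rests its base on the z = 0 floor plane.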
obj.translate(np.array([0,0,roomTableZCenter]))
self.roomTableSize = roomTableSize
self.roomTable = obj
| 11,333 | Python | 35.679612 | 136 | 0.578752 |
ngzhili/SynTable/syntable_composer/src/scene/room_face1.py | from scene.object1 import Object
import numpy as np
import os
class RoomFace(Object):
""" For managing an Xform asset in Isaac Sim. """
def __init__(self, sim_app, sim_context, path, prefix, coord, rotation, scaling):
""" Construct Object. """
self.coord = coord
self.rotation = rotation
self.scaling = scaling
super().__init__(sim_app, sim_context, "", path, prefix, None, None)
def load_asset(self):
""" Create asset from object parameters. """
from omni.isaac.core.prims import XFormPrim
from omni.isaac.core.utils.prims import move_prim
from pxr import PhysxSchema, UsdPhysics
if self.prefix == "floor":
# Create invisible ground plane
path = "/World/Room/ground"
planeGeom = PhysxSchema.Plane.Define(self.stage, path)
planeGeom.CreatePurposeAttr().Set("guide")
planeGeom.CreateAxisAttr().Set("Z")
prim = self.stage.GetPrimAtPath(path)
UsdPhysics.CollisionAPI.Apply(prim)
# Create plane
from omni.kit.primitive.mesh import CreateMeshPrimWithDefaultXformCommand
CreateMeshPrimWithDefaultXformCommand(prim_type="Plane").do()
move_prim(path_from="/Plane", path_to=self.path)
self.prim = self.stage.GetPrimAtPath(self.path)
self.xform_prim = XFormPrim(self.path)
def place_in_scene(self):
""" Scale, rotate, and translate asset. """
self.translate(self.coord)
self.rotate(self.rotation)
self.scale(self.scaling)
def step(self):
""" Room Face does not update in a scene's sequence. """
return
class RoomTable(Object):
""" For managing an Xform asset in Isaac Sim. """
def __init__(self, sim_app, sim_context, ref, path, prefix, camera, group):
super().__init__(sim_app, sim_context, ref, path, prefix, camera, group, None)
def load_asset(self):
""" Create asset from object parameters. """
from omni.isaac.core.prims import XFormPrim
from omni.isaac.core.utils import prims
# print(self.path)
# Create object
self.prim = prims.create_prim(self.path, "Xform", semantic_label="[[scenario]]")
self.xform_prim = XFormPrim(self.path)
nested_path = os.path.join(self.path, "nested_prim")
self.nested_prim = prims.create_prim(nested_path, "Xform", usd_path=self.ref, semantic_label="[[scenario]]")
self.nested_xform_prim = XFormPrim(nested_path)
self.add_material()
self.add_collision()
| 2,607 | Python | 31.6 | 116 | 0.624473 |
ngzhili/SynTable/syntable_composer/src/scene/asset1.py |
from abc import ABC, abstractmethod
import math
import numpy as np
from scipy.spatial.transform import Rotation
from output import Logger
from sampling.sample1 import Sampler
class Asset(ABC):
""" For managing an asset in Isaac Sim. """
def __init__(self, sim_app, sim_context, path, prefix, name, group=None, camera=None):
""" Construct Asset. """
self.sim_app = sim_app
self.sim_context = sim_context
self.path = path
self.camera = camera
self.name = name
self.prefix = prefix
self.stage = self.sim_context.stage
self.sample = Sampler(group=group).sample
self.class_name = self.__class__.__name__
if self.class_name != "RoomFace":
self.vel = self.sample(self.concat("vel"))
self.rot_vel = self.sample(self.concat("rot_vel"))
self.accel = self.sample(self.concat("accel"))
self.rot_accel = self.sample(self.concat("rot_accel"))
self.label = group
self.physics = False
@abstractmethod
def place_in_scene(self):
""" Place asset in scene. """
pass
def is_given(self, param):
""" Is a parameter value is given. """
if type(param) in (np.ndarray, list, tuple, str):
return len(param) > 0
elif type(param) is float:
return not math.isnan(param)
else:
return param is not None
def translate(self, coord, xform_prim=None):
""" Translate asset. """
if xform_prim is None:
xform_prim = self.xform_prim
xform_prim.set_world_pose(position=coord)
def scale(self, scaling, xform_prim=None):
""" Scale asset uniformly across all axes. """
if xform_prim is None:
xform_prim = self.xform_prim
xform_prim.set_local_scale(scaling)
def rotate(self, rotation, xform_prim=None):
""" Rotate asset. """
from omni.isaac.core.utils.rotations import euler_angles_to_quat
if xform_prim is None:
xform_prim = self.xform_prim
xform_prim.set_world_pose(orientation=euler_angles_to_quat(rotation.tolist(), degrees=True))
def is_coord_camera_relative(self):
return self.sample(self.concat("coord_camera_relative"))
def is_rot_camera_relative(self):
return self.sample(self.concat("rot_camera_relative"))
def concat(self, parameter_suffix):
""" Concatenate the parameter prefix and suffix. """
return self.prefix + "_" + parameter_suffix
def get_initial_coord(self,tableBounds=None):
""" Get coordinates of asset across 3 axes. """
if self.is_coord_camera_relative():
cam_coord = self.camera.coords[0]
cam_rot = self.camera.rotation
horiz_fov = -1 * self.camera.intrinsics[0]["horiz_fov"]
vert_fov = self.camera.intrinsics[0]["vert_fov"]
radius = self.sample(self.concat("distance"))
theta = horiz_fov * self.sample(self.concat("horiz_fov_loc")) / 2
phi = vert_fov * self.sample(self.concat("vert_fov_loc")) / 2
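# theta/phi pick a direction inside the camera's horizontal/vertical FOV (sampled as a
# fraction of the half-FOV), while radius sets the distance from the camera.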
# Convert from polar to cartesian
rads = np.radians(cam_rot[2] + theta)
x = cam_coord[0] + radius * np.cos(rads)
y = cam_coord[1] + radius * np.sin(rads)
rads = np.radians(cam_rot[0] + phi)
z = cam_coord[2] + radius * np.sin(rads)
coord = np.array([x, y, z])
elif tableBounds:
coord = self.sample(self.concat("coord"),tableBounds=tableBounds)
else:
coord = self.sample(self.concat("coord"))
pretty_coord = tuple([round(v, 1) for v in coord.tolist()])
return coord
def get_initial_rotation(self):
""" Get rotation of asset across 3 axes. """
rotation = self.sample(self.concat("rot"))
rotation = np.array(rotation)
if self.is_rot_camera_relative():
cam_rot = self.camera.rotation
rotation += cam_rot
return rotation
def step(self, step_time):
""" Step asset forward in its sequence. """
from omni.isaac.core.utils.rotations import quat_to_euler_angles
if self.class_name != "Camera":
self.coord, quaternion = self.xform_prim.get_world_pose()
self.coord = np.array(self.coord, dtype=np.float32)
self.rotation = np.degrees(quat_to_euler_angles(quaternion))
vel_vector = self.vel
accel_vector = self.accel
if self.sample(self.concat("movement") + "_" + self.concat("relative")):
radians = np.radians(self.rotation)
direction_cosine_matrix = Rotation.from_rotvec(radians).as_matrix()
vel_vector = direction_cosine_matrix.dot(vel_vector)
accel_vector = direction_cosine_matrix.dot(accel_vector)
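# Constant-acceleration kinematics: position += v*t + 0.5*a*t^2; the same update form is applied to rotation below.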
self.coord += vel_vector * step_time + 0.5 * accel_vector * step_time ** 2
self.translate(self.coord)
self.rotation += self.rot_vel * step_time + 0.5 * self.rot_accel * step_time ** 2
self.rotate(self.rotation)
| 5,129 | Python | 32.529412 | 100 | 0.594463 |
ngzhili/SynTable/syntable_composer/src/scene/scene.py |
import time
from output import Logger
from sampling import Sampler
from scene import Camera, Light, Object, Room
class SceneManager:
""" For managing scene set-up and generation. """
def __init__(self, sim_app, sim_context):
""" Construct SceneManager. Set-up scenario in Isaac Sim. """
import omni
self.sim_app = sim_app
self.sim_context = sim_context
self.stage = omni.usd.get_context().get_stage()
self.sample = Sampler().sample
self.scene_path = "/World/Scene"
self.scenario_label = "[[scenario]]"
self.play_frame = False
self.objs = []
self.lights = []
self.setup_scenario()
self.camera = Camera(self.sim_app, self.sim_context, "/World/CameraRig", None, group=None)
def setup_scenario(self):
""" Load in base scenario(s) """
import omni
from omni.isaac.core import SimulationContext
from omni.isaac.core.utils import stage
from omni.isaac.core.utils.stage import get_stage_units
cached_physics_dt = self.sim_context.get_physics_dt()
cached_rendering_dt = self.sim_context.get_rendering_dt()
cached_stage_units = get_stage_units()
self.room = None
if self.sample("scenario_room_enabled"):
# Generate a parameterizable room
self.room = Room(self.sim_app, self.sim_context)
else:
# Load in a USD scenario
self.load_scenario_model()
# Re-initialize context after we open a stage
self.sim_context = SimulationContext(
physics_dt=cached_physics_dt, rendering_dt=cached_rendering_dt, stage_units_in_meters=cached_stage_units
)
self.stage = omni.usd.get_context().get_stage()
# Set the up axis to the z axis
stage.set_stage_up_axis("z")
# Set scenario label to stage prims
self.set_scenario_label()
# Reset rendering settings
self.sim_app.reset_render_settings()
def set_scenario_label(self):
""" Set scenario label to all prims in stage. """
from pxr import Semantics
for prim in self.stage.Traverse():
path = prim.GetPath()
if path == "/World":
continue
if not prim.HasAPI(Semantics.SemanticsAPI):
sem = Semantics.SemanticsAPI.Apply(prim, "Semantics")
sem.CreateSemanticTypeAttr()
sem.CreateSemanticDataAttr()
else:
sem = Semantics.SemanticsAPI.Get(prim, "Semantics")
continue
typeAttr = sem.GetSemanticTypeAttr()
dataAttr = sem.GetSemanticDataAttr()
typeAttr.Set("class")
dataAttr.Set(self.scenario_label)
def load_scenario_model(self):
""" Load in a USD scenario. """
from omni.isaac.core.utils.stage import open_stage
# Load in base scenario from Nucleus
if self.sample("scenario_model"):
scenario_ref = self.sample("nucleus_server") + self.sample("scenario_model")
open_stage(scenario_ref)
def populate_scene(self):
""" Populate a sample's scene a camera, objects, and lights. """
# Update camera
self.camera.place_in_scene()
# Iterate through each group
self.objs = []
self.lights = []
for group in self.sample("groups"):
# Spawn objects
num_objs = self.sample("obj_count", group=group)
for i in range(num_objs):
path = "{}/Objects/object_{}".format(self.scene_path, len(self.objs))
ref = self.sample("nucleus_server") + self.sample("obj_model", group=group)
obj = Object(self.sim_app, self.sim_context, ref, path, "obj", self.camera, group)
self.objs.append(obj)
# Spawn lights
num_lights = self.sample("light_count", group=group)
for i in range(num_lights):
path = "{}/Lights/lights_{}".format(self.scene_path, len(self.lights))
light = Light(self.sim_app, self.sim_context, path, self.camera, group)
self.lights.append(light)
# Update room
if self.room:
self.room.update()
# Add skybox, if needed
self.add_skybox()
def update_scene(self, step_time=None, step_index=0):
""" Update Omniverse after scene is generated. """
from omni.isaac.core.utils.stage import is_stage_loading
# Step positions of objs and lights
if step_time:
self.camera.step(step_time)
for obj in self.objs:
obj.step(step_time)
for light in self.lights:
light.step(step_time)
# Wait for scene to finish loading
while is_stage_loading():
self.sim_context.render()
# Determine if scene is played
scene_assets = self.objs + self.lights
self.play_frame = any([asset.physics for asset in scene_assets])
# Play scene, if needed
if self.play_frame and step_index == 0:
Logger.print("physically simulating...")
self.sim_context.play()
render = not self.sample("headless")
sim_time = self.sample("physics_simulate_time")
frames_to_simulate = int(sim_time * 60) + 1
for i in range(frames_to_simulate):
self.sim_context.step(render=render)
# Napping
if self.sample("nap"):
print("napping")
while True:
self.sim_context.render()
# Update
if step_index == 0:
Logger.print("loading textures...")
self.sim_context.render()
# Pausing
if step_index == 0:
pause_time = self.sample("pause")
start_time = time.time()
while time.time() - start_time < pause_time:
self.sim_context.render()
def add_skybox(self):
""" Add a DomeLight that creates a textured skybox, if needed. """
from pxr import UsdGeom, UsdLux
from omni.isaac.core.utils.prims import create_prim
sky_texture = self.sample("sky_texture")
sky_light_intensity = self.sample("sky_light_intensity")
if sky_texture:
create_prim(
prim_path="{}/Lights/skybox".format(self.scene_path),
prim_type="DomeLight",
attributes={
UsdLux.Tokens.intensity: sky_light_intensity,
UsdLux.Tokens.specular: 1,
UsdLux.Tokens.textureFile: self.sample("nucleus_server") + sky_texture,
UsdLux.Tokens.textureFormat: UsdLux.Tokens.latlong,
UsdGeom.Tokens.visibility: "inherited",
},
)
def prepare_scene(self, index):
""" Scene preparation step. """
self.valid_sample = True
Logger.start_log_entry(index)
Logger.print("Scene: " + str(index) + "\n")
def finish_scene(self):
""" Scene finish step. Clean-up variables, Isaac Sim stage. """
from omni.isaac.core.utils.prims import delete_prim
self.objs = []
self.lights = []
delete_prim(self.scene_path)
delete_prim("/Looks")
self.sim_context.stop()
self.sim_context.render()
self.play_frame = False
Logger.finish_log_entry()
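# A minimal driver sketch (hypothetical; assumes a SimulationApp and SimulationContext
# have already been created elsewhere). The per-scene lifecycle exposed above is roughly:
#   manager = SceneManager(sim_app, sim_context)
#   manager.prepare_scene(index)
#   manager.populate_scene()
#   manager.update_scene(step_time=None, step_index=0)
#   # ... capture images / ground truth here ...
#   manager.finish_scene()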
| 7,513 | Python | 31.95614 | 116 | 0.570212 |
ngzhili/SynTable/syntable_composer/src/scene/__init__.py |
from .asset import *
from .room import Room
from .scene import SceneManager
| 77 | Python | 14.599997 | 31 | 0.779221 |
ngzhili/SynTable/syntable_composer/src/scene/room.py | import numpy as np
from sampling import Sampler
from scene import RoomFace
class Room:
""" For managing a parameterizable rectangular prism centered at the origin. """
def __init__(self, sim_app, sim_context):
""" Construct Room. Generate room in Isaac SIM. """
self.sim_app = sim_app
self.sim_context = sim_context
self.stage = self.sim_context.stage
self.sample = Sampler().sample
self.room = self.scenario_room()
def scenario_room(self):
""" Generate and return assets creating a rectangular prism at the origin. """
wall_height = self.sample("wall_height")
floor_size = self.sample("floor_size")
self.room_faces = []
faces = []
coords = []
scalings = []
rotations = []
if self.sample("floor"):
faces.append("floor")
coords.append((0, 0, 0))
scalings.append((floor_size / 100, floor_size / 100, 1))
rotations.append((0, 0, 0))
if self.sample("wall"):
faces.extend(4 * ["wall"])
coords.append((floor_size / 2, 0, wall_height / 2))
coords.append((0, floor_size / 2, wall_height / 2))
coords.append((-floor_size / 2, 0, wall_height / 2))
coords.append((0, -floor_size / 2, wall_height / 2))
scalings.extend(4 * [(floor_size / 100, wall_height / 100, 1)])
rotations.append((90, 0, 90))
rotations.append((90, 0, 0))
rotations.append((90, 0, 90))
rotations.append((90, 0, 0))
if self.sample("ceiling"):
faces.append("ceiling")
coords.append((0, 0, wall_height))
scalings.append((floor_size / 100, floor_size / 100, 1))
rotations.append((0, 0, 0))
room = []
for i, face in enumerate(faces):
coord = np.array(coords[i])
rotation = np.array(rotations[i])
scaling = np.array(scalings[i])
path = "/World/Room/{}_{}".format(face, i)
room_face = RoomFace(self.sim_app, self.sim_context, path, face, coord, rotation, scaling)
room.append(room_face)
return room
def update(self):
""" Update room components. """
for room_face in self.room:
room_face.add_material()
| 2,363 | Python | 30.945946 | 102 | 0.544223 |
ngzhili/SynTable/syntable_composer/src/scene/camera1.py |
import math
import numpy as np
import carb
from scene.asset1 import Asset
from output import Logger
# from sampling import Sampler
from sampling.sample1 import Sampler
class Camera(Asset):
""" For managing a camera in Isaac Sim. """
def __init__(self, sim_app, sim_context, path, camera, group):
""" Construct Camera. """
self.sample = Sampler(group=group).sample
self.stereo = self.sample("stereo")
if self.stereo:
name = "stereo_cams"
else:
name = "mono_cam"
super().__init__(sim_app, sim_context, path, "camera", name, camera=camera, group=group)
self.load_camera()
def is_coord_camera_relative(self):
return False
def is_rot_camera_relative(self):
return False
def load_camera(self):
""" Create a camera in Isaac Sim. """
import omni
from pxr import Sdf, UsdGeom
from omni.isaac.core.prims import XFormPrim
from omni.isaac.core.utils import prims
self.prim = prims.create_prim(self.path, "Xform")
self.xform_prim = XFormPrim(self.path)
self.camera_rig = UsdGeom.Xformable(self.prim)
camera_prim_paths = []
if self.stereo:
camera_prim_paths.append(self.path + "/LeftCamera")
camera_prim_paths.append(self.path + "/RightCamera")
else:
camera_prim_paths.append(self.path + "/MonoCamera")
self.cameras = [
self.stage.DefinePrim(Sdf.Path(camera_prim_path), "Camera") for camera_prim_path in camera_prim_paths
]
focal_length = self.sample("focal_length")
focus_distance = self.sample("focus_distance")
horiz_aperture = self.sample("horiz_aperture")
vert_aperture = self.sample("vert_aperture")
f_stop = self.sample("f_stop")
for camera in self.cameras:
camera = UsdGeom.Camera(camera)
camera.GetFocalLengthAttr().Set(focal_length)
camera.GetFocusDistanceAttr().Set(focus_distance)
camera.GetHorizontalApertureAttr().Set(horiz_aperture)
camera.GetVerticalApertureAttr().Set(vert_aperture)
camera.GetFStopAttr().Set(f_stop)
# Set viewports
carb.settings.acquire_settings_interface().set_int("/app/renderer/resolution/width", -1)
carb.settings.acquire_settings_interface().set_int("/app/renderer/resolution/height", -1)
self.viewports = []
for i in range(len(self.cameras)):
if i == 0:
viewport_handle = omni.kit.viewport_legacy.get_viewport_interface().get_instance("Viewport")
else:
viewport_handle = omni.kit.viewport_legacy.get_viewport_interface().create_instance()
viewport_window = omni.kit.viewport_legacy.get_viewport_interface().get_viewport_window(viewport_handle)
viewport_window.set_texture_resolution(self.sample("img_width"), self.sample("img_height"))
viewport_window.set_active_camera(camera_prim_paths[i])
if self.stereo:
if i == 0:
viewport_name = "left"
else:
viewport_name = "right"
else:
viewport_name = "mono"
self.viewports.append((viewport_name, viewport_window))
self.sim_context.render()
self.sim_app.update()
# Set viewport window size
if self.stereo:
left_viewport = omni.ui.Workspace.get_window("Viewport")
right_viewport = omni.ui.Workspace.get_window("Viewport 2")
right_viewport.dock_in(left_viewport, omni.ui.DockPosition.RIGHT)
self.intrinsics = [self.get_intrinsics(camera) for camera in self.cameras]
# print(self.intrinsics)
def translate(self, coord):
""" Translate each camera asset. Find stereo positions, if needed. """
self.coord = coord
if self.sample("stereo"):
self.coords = self.get_stereo_coords(self.coord, self.rotation)
else:
self.coords = [self.coord]
for i, camera in enumerate(self.cameras):
viewport_name, viewport_window = self.viewports[i]
viewport_window.set_camera_position(
str(camera.GetPath()), self.coords[i][0], self.coords[i][1], self.coords[i][2], True
)
def rotate(self, rotation):
""" Rotate each camera asset. """
from pxr import UsdGeom
self.rotation = rotation
for i, camera in enumerate(self.cameras):
offset_cam_rot = self.rotation + np.array((90, 0, 270), dtype=np.float32)
UsdGeom.XformCommonAPI(camera).SetRotate(offset_cam_rot.tolist())
def place_in_scene(self):
""" Place camera in scene. """
rotation = self.get_initial_rotation()
self.rotate(rotation)
coord = self.get_initial_coord()
self.translate(coord)
self.step(0)
def get_stereo_coords(self, coord, rotation):
""" Convert camera center coord and rotation and return stereo camera coords. """
coords = []
for i in range(len(self.cameras)):
sign = 1 if i == 0 else -1
theta = np.radians(rotation[0] + sign * 90)
phi = np.radians(rotation[1])
radius = self.sample("stereo_baseline") / 2
# Add offset such that center of stereo cameras is at cam_coord
x = coord[0] + radius * np.cos(theta) * np.cos(phi)
y = coord[1] + radius * np.sin(theta) * np.cos(phi)
z = coord[2] + radius * sign * np.sin(phi)
coords.append(np.array((x, y, z)))
return coords
def get_intrinsics(self, camera):
""" Compute, print, and return camera intrinsics. """
from omni.syntheticdata import helpers
width = self.sample("img_width")
height = self.sample("img_height")
aspect_ratio = width / height
camera.GetAttribute("clippingRange").Set((0.01, 1000000)) # set clipping range
near, far = camera.GetAttribute("clippingRange").Get()
focal_length = camera.GetAttribute("focalLength").Get()
horiz_aperture = camera.GetAttribute("horizontalAperture").Get()
vert_aperture = camera.GetAttribute("verticalAperture").Get()
horiz_fov = 2 * math.atan(horiz_aperture / (2 * focal_length))
horiz_fov = np.degrees(horiz_fov)
vert_fov = 2 * math.atan(vert_aperture / (2 * focal_length))
vert_fov = np.degrees(vert_fov)
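# Standard pinhole model: the focal length (mm) is converted to pixel units via the aperture
# size, and the principal point is taken to be the image centre.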
fx = width * focal_length / horiz_aperture
fy = height * focal_length / vert_aperture
cx = width * 0.5
cy = height * 0.5
proj_mat = helpers.get_projection_matrix(np.radians(horiz_fov), aspect_ratio, near, far)
with np.printoptions(precision=2, suppress=True):
proj_mat_str = str(proj_mat)
Logger.print("")
Logger.print("Camera intrinsics")
Logger.print("- width, height: {}, {}".format(round(width), round(height)))
Logger.print("- focal_length: {}".format(focal_length, 2))
Logger.print(
"- horiz_aperture, vert_aperture: {}, {}".format(round(horiz_aperture, 2), round(vert_aperture, 2))
)
Logger.print("- horiz_fov, vert_fov: {}, {}".format(round(horiz_fov, 2), round(vert_fov, 2)))
Logger.print("- focal_x, focal_y: {}, {}".format(round(fx, 2), round(fy, 2)))
Logger.print("- proj_mat: \n {}".format(str(proj_mat_str)))
Logger.print("")
cam_intrinsics = {
"width": width,
"height": height,
"focal_length": focal_length,
"horiz_aperture": horiz_aperture,
"vert_aperture": vert_aperture,
"horiz_fov": horiz_fov,
"vert_fov": vert_fov,
"fx": fx,
"fy": fy,
"cx": cx,
"cy": cy,
"proj_mat": proj_mat,
"near":near,
"far":far
}
return cam_intrinsics
def print_instance_attributes(self):
for attribute, value in self.__dict__.items():
print(attribute, '=', value)
def translate_rotate(self,target=(0,0,0)):
""" Translate each camera asset. Find stereo positions, if needed. """
for i, camera in enumerate(self.cameras):
viewport_name, viewport_window = self.viewports[i]
viewport_window.set_camera_target(str(camera.GetPath()), target[0], target[1], target[2], True)
| 8,574 | Python | 34.878661 | 116 | 0.586774 |
ngzhili/SynTable/syntable_composer/src/scene/light1.py | from sampling.sample1 import Sampler
from scene.asset1 import Asset
class Light(Asset):
""" For managing a light asset in Isaac Sim. """
def __init__(self, sim_app, sim_context, path, camera, group):
""" Construct Light. """
self.sample = Sampler(group=group).sample
self.distant = self.sample("light_distant")
self.directed = self.sample("light_directed")
if self.distant:
name = "distant_light"
elif self.directed:
name = "directed_light"
else:
name = "sphere_light"
super().__init__(sim_app, sim_context, path, "light", name, camera=camera, group=group)
self.load_light()
self.place_in_scene()
def place_in_scene(self):
""" Place light in scene. """
self.coord = self.get_initial_coord()
self.translate(self.coord)
self.rotation = self.get_initial_rotation()
self.rotate(self.rotation)
def load_light(self):
""" Create a light in Isaac Sim. """
from pxr import Sdf
from omni.usd.commands import ChangePropertyCommand
from omni.isaac.core.prims import XFormPrim
from omni.isaac.core.utils import prims
intensity = self.sample("light_intensity")
color = tuple(self.sample("light_color") / 255)
temp_enabled = self.sample("light_temp_enabled")
temp = self.sample("light_temp")
radius = self.sample("light_radius")
focus = self.sample("light_directed_focus")
focus_softness = self.sample("light_directed_focus_softness")
width = self.sample("light_width")
height = self.sample("light_height")
attributes = {}
if self.distant:
light_shape = "DistantLight"
elif self.directed:
light_shape = "RectLight"
attributes["width"] = width
attributes["height"] = height
else:
light_shape = "SphereLight"
attributes["radius"] = radius
attributes["intensity"] = intensity
attributes["color"] = color
if temp_enabled:
attributes["enableColorTemperature"] = True
attributes["colorTemperature"] = temp
self.attributes = attributes # added
self.prim = prims.create_prim(self.path, light_shape, attributes=attributes)
self.xform_prim = XFormPrim(self.path)
if self.directed:
ChangePropertyCommand(prop_path=Sdf.Path(self.path + ".shaping:focus"), value=focus, prev=0.0).do()
ChangePropertyCommand(
prop_path=Sdf.Path(self.path + ".shaping:cone:softness"), value=focus_softness, prev=0.0
).do()
def off_prim(self):
""" Turn Object Visibility off """
from omni.isaac.core.utils import prims
prims.set_prim_visibility(self.prim, False) | 2,880 | Python | 34.134146 | 111 | 0.599653 |
ngzhili/SynTable/syntable_composer/src/scene/room1.py | import numpy as np
from sampling.sample1 import Sampler
from scene.room_face1 import RoomFace, RoomTable
class Room:
""" For managing a parameterizable rectangular prism centered at the origin. """
def __init__(self, sim_app, sim_context):
""" Construct Room. Generate room in Isaac SIM. """
self.sim_app = sim_app
self.sim_context = sim_context
self.stage = self.sim_context.stage
self.sample = Sampler().sample
self.room = self.scenario_room()
def scenario_room(self):
""" Generate and return assets creating a rectangular prism at the origin. """
wall_height = self.sample("wall_height")
floor_size = self.sample("floor_size")
self.room_faces = []
faces = []
coords = []
scalings = []
rotations = []
if self.sample("floor"):
faces.append("floor")
coords.append((0, 0, 0))
scalings.append((floor_size / 100, floor_size / 100, 1))
rotations.append((0, 0, 0))
if self.sample("wall"):
faces.extend(4 * ["wall"])
coords.append((floor_size / 2, 0, wall_height / 2))
coords.append((0, floor_size / 2, wall_height / 2))
coords.append((-floor_size / 2, 0, wall_height / 2))
coords.append((0, -floor_size / 2, wall_height / 2))
scalings.extend(4 * [(floor_size / 100, wall_height / 100, 1)])
rotations.append((90, 0, 90))
rotations.append((90, 0, 0))
rotations.append((90, 0, 90))
rotations.append((90, 0, 0))
if self.sample("ceiling"):
faces.append("ceiling")
coords.append((0, 0, wall_height))
scalings.append((floor_size / 100, floor_size / 100, 1))
rotations.append((0, 0, 0))
room = []
for i, face in enumerate(faces):
coord = np.array(coords[i])
rotation = np.array(rotations[i])
scaling = np.array(scalings[i])
path = "/World/Room/{}_{}".format(face, i)
room_face = RoomFace(self.sim_app, self.sim_context, path, face, coord, rotation, scaling)
room.append(room_face)
return room
def update(self):
""" Update room components. """
for room_face in self.room:
room_face.add_material()
| 2,393 | Python | 31.351351 | 102 | 0.547848 |
ngzhili/SynTable/syntable_composer/src/scene/object1.py | import numpy as np
import os
from scene.asset1 import Asset
class Object(Asset):
""" For managing an Xform asset in Isaac Sim. """
def __init__(self, sim_app, sim_context, ref, path, prefix, camera, group,tableBounds=None):
""" Construct Object. """
self.tableBounds = tableBounds
self.ref = ref
name = self.ref[self.ref.rfind("/") + 1 : self.ref.rfind(".")]
super().__init__(sim_app, sim_context, path, prefix, name, camera=camera, group=group)
self.load_asset()
self.place_in_scene()
if self.class_name != "RoomFace" and self.sample("obj_physics"):
self.add_physics()
def load_asset(self):
""" Create asset from object parameters. """
from omni.isaac.core.prims import XFormPrim
from omni.isaac.core.utils import prims
#print(self.path)
# Create object
self.prim = prims.create_prim(self.path, "Xform", semantic_label=self.label)
self.xform_prim = XFormPrim(self.path)
nested_path = os.path.join(self.path, "nested_prim")
self.nested_prim = prims.create_prim(nested_path, "Xform", usd_path=self.ref, semantic_label=self.label)
self.nested_xform_prim = XFormPrim(nested_path)
self.add_material()
def place_in_scene(self):
""" Scale, rotate, and translate asset. """
# Get asset dimensions
min_bound, max_bound = self.get_bounds()
size = max_bound - min_bound
# Get asset scaling
obj_size_is_enabled = self.sample("obj_size_enabled")
if obj_size_is_enabled:
obj_size = self.sample("obj_size")
max_size = max(size)
self.scaling = obj_size / max_size
else:
self.scaling = self.sample("obj_scale")
# Offset nested asset
obj_centered = self.sample("obj_centered")
if obj_centered:
offset = (max_bound + min_bound) / 2
self.translate(-offset, xform_prim=self.nested_xform_prim)
# Scale asset
self.scaling = np.array([self.scaling, self.scaling, self.scaling])
self.scale(self.scaling)
# Get asset coord and rotation
self.coord = self.get_initial_coord(tableBounds=self.tableBounds)
self.rotation = self.get_initial_rotation()
# Rotate asset
self.rotate(self.rotation)
# Place asset
self.translate(self.coord)
def get_bounds(self):
""" Compute min and max bounds of an asset. """
from omni.isaac.core.utils.bounds import compute_aabb, create_bbox_cache, recompute_extents
# recompute_extents(self.nested_prim)
cache = create_bbox_cache()
bound = compute_aabb(cache, self.path).tolist()
min_bound = np.array(bound[:3])
max_bound = np.array(bound[3:])
return min_bound, max_bound
def add_material(self):
""" Add material to asset, if needed. """
from pxr import UsdShade
material = self.sample(self.concat("material"))
color = self.sample(self.concat("color"))
texture = self.sample(self.concat("texture"))
texture_scale = self.sample(self.concat("texture_scale"))
texture_rot = self.sample(self.concat("texture_rot"))
reflectance = self.sample(self.concat("reflectance"))
metallic = self.sample(self.concat("metallicness"))
mtl_prim_path = None
if self.is_given(material):
# Load a material
mtl_prim_path = self.load_material_from_nucleus(material)
elif self.is_given(color) or self.is_given(texture):
# Load a new material
mtl_prim_path = self.create_material()
if mtl_prim_path:
# print(f"Adding {mtl_prim_path} to {self.path}")
# Update material properties and assign to asset
mtl_prim = self.update_material(
mtl_prim_path, color, texture, texture_scale, texture_rot, reflectance, metallic
)
UsdShade.MaterialBindingAPI(self.prim).Bind(mtl_prim, UsdShade.Tokens.strongerThanDescendants)
def load_material_from_nucleus(self, material):
""" Create material from Nucleus path. """
from pxr import Sdf
from omni.usd.commands import CreateMdlMaterialPrimCommand
mtl_url = self.sample("nucleus_server") + material
left_index = material.rfind("/") + 1 if "/" in material else 0
right_index = material.rfind(".") if "." in material else -1
mtl_name = material[left_index:right_index]
left_index = self.path.rfind("/") + 1 if "/" in self.path else 0
path_name = self.path[left_index:]
mtl_prim_path = "/Looks/" + mtl_name + "_" + path_name
mtl_prim_path = Sdf.Path(mtl_prim_path.replace("-", "_"))
CreateMdlMaterialPrimCommand(mtl_url=mtl_url, mtl_name=mtl_name, mtl_path=mtl_prim_path).do()
return mtl_prim_path
def create_material(self):
""" Create a OmniPBR material with provided properties and assign to asset. """
from pxr import Sdf
import omni
from omni.isaac.core.utils.prims import move_prim
from omni.kit.material.library import CreateAndBindMdlMaterialFromLibrary
mtl_created_list = []
CreateAndBindMdlMaterialFromLibrary(
mdl_name="OmniPBR.mdl", mtl_name="OmniPBR", mtl_created_list=mtl_created_list
).do()
mtl_prim_path = Sdf.Path(mtl_created_list[0])
new_mtl_prim_path = omni.usd.get_stage_next_free_path(self.stage, "/Looks/OmniPBR", False)
move_prim(path_from=mtl_prim_path, path_to=new_mtl_prim_path)
mtl_prim_path = new_mtl_prim_path
return mtl_prim_path
def update_material(self, mtl_prim_path, color, texture, texture_scale, texture_rot, reflectance, metallic):
""" Update properties of an existing material. """
import omni
from pxr import Sdf, UsdShade
mtl_prim = UsdShade.Material(self.stage.GetPrimAtPath(mtl_prim_path))
if self.is_given(color):
color = tuple(color / 255)
omni.usd.create_material_input(mtl_prim, "diffuse_color_constant", color, Sdf.ValueTypeNames.Color3f)
omni.usd.create_material_input(mtl_prim, "diffuse_tint", color, Sdf.ValueTypeNames.Color3f)
if self.is_given(texture):
texture = self.sample("nucleus_server") + texture
omni.usd.create_material_input(mtl_prim, "diffuse_texture", texture, Sdf.ValueTypeNames.Asset)
if self.is_given(texture_scale):
texture_scale = 1 / texture_scale
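# The OmniPBR texture_scale input is treated here as a tiling factor, hence the reciprocal:
# larger requested scales map to fewer texture repeats.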
omni.usd.create_material_input(
mtl_prim, "texture_scale", (texture_scale, texture_scale), Sdf.ValueTypeNames.Float2
)
if self.is_given(texture_rot):
omni.usd.create_material_input(mtl_prim, "texture_rotate", texture_rot, Sdf.ValueTypeNames.Float)
if self.is_given(reflectance):
roughness = 1 - reflectance
omni.usd.create_material_input(
mtl_prim, "reflection_roughness_constant", roughness, Sdf.ValueTypeNames.Float
)
if self.is_given(metallic):
omni.usd.create_material_input(mtl_prim, "metallic_constant", metallic, Sdf.ValueTypeNames.Float)
return mtl_prim
def add_physics(self):
""" Make asset a rigid body to enable gravity and collision. """
from omni.isaac.core.utils.prims import get_all_matching_child_prims, get_prim_at_path
from omni.physx.scripts import utils
from pxr import UsdPhysics
def is_rigid_body(prim_path):
prim = get_prim_at_path(prim_path)
if prim.HasAPI(UsdPhysics.RigidBodyAPI):
return True
return False
has_physics_already = len(get_all_matching_child_prims(self.path, predicate=is_rigid_body)) > 0
if has_physics_already:
self.physics = True
return
utils.setRigidBody(self.prim, "convexHull", False)
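# setRigidBody(prim, approximation_shape, kinematic): collision is approximated with a
# convex hull and the body stays dynamic (kinematic=False).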
# Set mass to 1 kg
mass_api = UsdPhysics.MassAPI.Apply(self.prim)
mass_api.CreateMassAttr(1)
self.physics = True
def print_instance_attributes(self):
for attribute, value in self.__dict__.items():
print(attribute, '=', value)
def off_physics_prim(self):
""" Turn Off Object Physics """
self.vel = (0,0,0)
self.rot_vel = (0,0,0)
self.accel = (0,0,0)
self.rot_accel = (0,0,0)
self.physics = False
def off_prim(self):
""" Turn Object Visibility off """
from omni.isaac.core.utils import prims
prims.set_prim_visibility(self.prim, False)
#print("\nTurn off visibility of prim;",self.prim)
#print("\n")
def on_prim(self):
""" Turn Object Visibility on """
from omni.isaac.core.utils import prims
prims.set_prim_visibility(self.prim, True)
#print("\nTurn on visibility of prim;",self.prim)
#print("\n")
def add_collision(self):
""" Turn Object Collision on """
from pxr import UsdPhysics
# prim = self.stage.GetPrimAtPath(path)
UsdPhysics.CollisionAPI.Apply(self.prim) | 9,291 | Python | 35.582677 | 113 | 0.613712 |
ngzhili/SynTable/syntable_composer/src/scene/asset/camera.py |
import math
import numpy as np
import carb
from scene.asset import Asset
from output import Logger
from sampling import Sampler
class Camera(Asset):
""" For managing a camera in Isaac Sim. """
def __init__(self, sim_app, sim_context, path, camera, group):
""" Construct Camera. """
self.sample = Sampler(group=group).sample
self.stereo = self.sample("stereo")
if self.stereo:
name = "stereo_cams"
else:
name = "mono_cam"
super().__init__(sim_app, sim_context, path, "camera", name, camera=camera, group=group)
self.load_camera()
def is_coord_camera_relative(self):
return False
def is_rot_camera_relative(self):
return False
def load_camera(self):
""" Create a camera in Isaac Sim. """
import omni
from pxr import Sdf, UsdGeom
from omni.isaac.core.prims import XFormPrim
from omni.isaac.core.utils import prims
self.prim = prims.create_prim(self.path, "Xform")
self.xform_prim = XFormPrim(self.path)
self.camera_rig = UsdGeom.Xformable(self.prim)
camera_prim_paths = []
if self.stereo:
camera_prim_paths.append(self.path + "/LeftCamera")
camera_prim_paths.append(self.path + "/RightCamera")
else:
camera_prim_paths.append(self.path + "/MonoCamera")
self.cameras = [
self.stage.DefinePrim(Sdf.Path(camera_prim_path), "Camera") for camera_prim_path in camera_prim_paths
]
focal_length = self.sample("focal_length")
focus_distance = self.sample("focus_distance")
horiz_aperture = self.sample("horiz_aperture")
vert_aperture = self.sample("vert_aperture")
f_stop = self.sample("f_stop")
for camera in self.cameras:
camera = UsdGeom.Camera(camera)
camera.GetFocalLengthAttr().Set(focal_length)
camera.GetFocusDistanceAttr().Set(focus_distance)
camera.GetHorizontalApertureAttr().Set(horiz_aperture)
camera.GetVerticalApertureAttr().Set(vert_aperture)
camera.GetFStopAttr().Set(f_stop)
# Set viewports
carb.settings.acquire_settings_interface().set_int("/app/renderer/resolution/width", -1)
carb.settings.acquire_settings_interface().set_int("/app/renderer/resolution/height", -1)
self.viewports = []
for i in range(len(self.cameras)):
if i == 0:
viewport_handle = omni.kit.viewport_legacy.get_viewport_interface().get_instance("Viewport")
else:
viewport_handle = omni.kit.viewport_legacy.get_viewport_interface().create_instance()
viewport_window = omni.kit.viewport_legacy.get_viewport_interface().get_viewport_window(viewport_handle)
viewport_window.set_texture_resolution(self.sample("img_width"), self.sample("img_height"))
viewport_window.set_active_camera(camera_prim_paths[i])
if self.stereo:
if i == 0:
viewport_name = "left"
else:
viewport_name = "right"
else:
viewport_name = "mono"
self.viewports.append((viewport_name, viewport_window))
self.sim_context.render()
self.sim_app.update()
# Set viewport window size
if self.stereo:
left_viewport = omni.ui.Workspace.get_window("Viewport")
right_viewport = omni.ui.Workspace.get_window("Viewport 2")
right_viewport.dock_in(left_viewport, omni.ui.DockPosition.RIGHT)
self.intrinsics = [self.get_intrinsics(camera) for camera in self.cameras]
def translate(self, coord):
""" Translate each camera asset. Find stereo positions, if needed. """
self.coord = coord
if self.sample("stereo"):
self.coords = self.get_stereo_coords(self.coord, self.rotation)
else:
self.coords = [self.coord]
for i, camera in enumerate(self.cameras):
viewport_name, viewport_window = self.viewports[i]
viewport_window.set_camera_position(
str(camera.GetPath()), self.coords[i][0], self.coords[i][1], self.coords[i][2], True
)
def rotate(self, rotation):
""" Rotate each camera asset. """
from pxr import UsdGeom
self.rotation = rotation
for i, camera in enumerate(self.cameras):
offset_cam_rot = self.rotation + np.array((90, 0, 270), dtype=np.float32)
UsdGeom.XformCommonAPI(camera).SetRotate(offset_cam_rot.tolist())
def place_in_scene(self):
""" Place camera in scene. """
rotation = self.get_initial_rotation()
self.rotate(rotation)
coord = self.get_initial_coord()
self.translate(coord)
self.step(0)
def get_stereo_coords(self, coord, rotation):
""" Convert camera center coord and rotation and return stereo camera coords. """
coords = []
for i in range(len(self.cameras)):
sign = 1 if i == 0 else -1
theta = np.radians(rotation[0] + sign * 90)
phi = np.radians(rotation[1])
radius = self.sample("stereo_baseline") / 2
# Add offset such that center of stereo cameras is at cam_coord
x = coord[0] + radius * np.cos(theta) * np.cos(phi)
y = coord[1] + radius * np.sin(theta) * np.cos(phi)
z = coord[2] + radius * sign * np.sin(phi)
coords.append(np.array((x, y, z)))
return coords
def get_intrinsics(self, camera):
""" Compute, print, and return camera intrinsics. """
from omni.syntheticdata import helpers
width = self.sample("img_width")
height = self.sample("img_height")
aspect_ratio = width / height
near, far = camera.GetAttribute("clippingRange").Get()
focal_length = camera.GetAttribute("focalLength").Get()
horiz_aperture = camera.GetAttribute("horizontalAperture").Get()
vert_aperture = camera.GetAttribute("verticalAperture").Get()
horiz_fov = 2 * math.atan(horiz_aperture / (2 * focal_length))
horiz_fov = np.degrees(horiz_fov)
vert_fov = 2 * math.atan(vert_aperture / (2 * focal_length))
vert_fov = np.degrees(vert_fov)
fx = width * focal_length / horiz_aperture
fy = height * focal_length / vert_aperture
cx = width * 0.5
cy = height * 0.5
proj_mat = helpers.get_projection_matrix(np.radians(horiz_fov), aspect_ratio, near, far)
with np.printoptions(precision=2, suppress=True):
proj_mat_str = str(proj_mat)
Logger.print("")
Logger.print("Camera intrinsics")
Logger.print("- width, height: {}, {}".format(round(width), round(height)))
Logger.print("- focal_length: {}".format(focal_length, 2))
Logger.print(
"- horiz_aperture, vert_aperture: {}, {}".format(round(horiz_aperture, 2), round(vert_aperture, 2))
)
Logger.print("- horiz_fov, vert_fov: {}, {}".format(round(horiz_fov, 2), round(vert_fov, 2)))
Logger.print("- focal_x, focal_y: {}, {}".format(round(fx, 2), round(fy, 2)))
Logger.print("- proj_mat: \n {}".format(str(proj_mat_str)))
Logger.print("")
cam_intrinsics = {
"width": width,
"height": height,
"focal_length": focal_length,
"horiz_aperture": horiz_aperture,
"vert_aperture": vert_aperture,
"horiz_fov": horiz_fov,
"vert_fov": vert_fov,
"fx": fx,
"fy": fy,
"cx": cx,
"cy": cy,
"proj_mat": proj_mat,
}
return cam_intrinsics
| 7,863 | Python | 34.264574 | 116 | 0.585018 |
ngzhili/SynTable/syntable_composer/src/scene/asset/room_face.py |
from scene.asset import Object
class RoomFace(Object):
""" For managing an Xform asset in Isaac Sim. """
def __init__(self, sim_app, sim_context, path, prefix, coord, rotation, scaling):
""" Construct Object. """
self.coord = coord
self.rotation = rotation
self.scaling = scaling
super().__init__(sim_app, sim_context, "", path, prefix, None, None)
def load_asset(self):
""" Create asset from object parameters. """
from omni.isaac.core.prims import XFormPrim
from omni.isaac.core.utils.prims import move_prim
from pxr import PhysxSchema, UsdPhysics
if self.prefix == "floor":
# Create invisible ground plane
path = "/World/Room/ground"
planeGeom = PhysxSchema.Plane.Define(self.stage, path)
planeGeom.CreatePurposeAttr().Set("guide")
planeGeom.CreateAxisAttr().Set("Z")
prim = self.stage.GetPrimAtPath(path)
UsdPhysics.CollisionAPI.Apply(prim)
# Create plane
from omni.kit.primitive.mesh import CreateMeshPrimWithDefaultXformCommand
CreateMeshPrimWithDefaultXformCommand(prim_type="Plane").do()
move_prim(path_from="/Plane", path_to=self.path)
self.prim = self.stage.GetPrimAtPath(self.path)
self.xform_prim = XFormPrim(self.path)
def place_in_scene(self):
""" Scale, rotate, and translate asset. """
self.translate(self.coord)
self.rotate(self.rotation)
self.scale(self.scaling)
def step(self):
""" Room Face does not update in a scene's sequence. """
return | 1,656 | Python | 30.865384 | 85 | 0.622585 |
ngzhili/SynTable/syntable_composer/src/scene/asset/__init__.py | from .asset import Asset
from .camera import Camera
from .object import Object
from .light import Light
from .room_face import RoomFace
| 136 | Python | 21.83333 | 31 | 0.808824 |
ngzhili/SynTable/syntable_composer/src/scene/asset/asset.py |
from abc import ABC, abstractmethod
import math
import numpy as np
from scipy.spatial.transform import Rotation
from output import Logger
from sampling import Sampler
class Asset(ABC):
""" For managing an asset in Isaac Sim. """
def __init__(self, sim_app, sim_context, path, prefix, name, group=None, camera=None):
""" Construct Asset. """
self.sim_app = sim_app
self.sim_context = sim_context
self.path = path
self.camera = camera
self.name = name
self.prefix = prefix
self.stage = self.sim_context.stage
self.sample = Sampler(group=group).sample
self.class_name = self.__class__.__name__
if self.class_name != "RoomFace":
self.vel = self.sample(self.concat("vel"))
self.rot_vel = self.sample(self.concat("rot_vel"))
self.accel = self.sample(self.concat("accel"))
self.rot_accel = self.sample(self.concat("rot_accel"))
self.label = group
self.physics = False
@abstractmethod
def place_in_scene(self):
""" Place asset in scene. """
pass
def is_given(self, param):
""" Is a parameter value is given. """
if type(param) in (np.ndarray, list, tuple, str):
return len(param) > 0
elif type(param) is float:
return not math.isnan(param)
else:
return param is not None
def translate(self, coord, xform_prim=None):
""" Translate asset. """
if xform_prim is None:
xform_prim = self.xform_prim
xform_prim.set_world_pose(position=coord)
def scale(self, scaling, xform_prim=None):
""" Scale asset uniformly across all axes. """
if xform_prim is None:
xform_prim = self.xform_prim
xform_prim.set_local_scale(scaling)
def rotate(self, rotation, xform_prim=None):
""" Rotate asset. """
from omni.isaac.core.utils.rotations import euler_angles_to_quat
if xform_prim is None:
xform_prim = self.xform_prim
xform_prim.set_world_pose(orientation=euler_angles_to_quat(rotation.tolist(), degrees=True))
def is_coord_camera_relative(self):
return self.sample(self.concat("coord_camera_relative"))
def is_rot_camera_relative(self):
return self.sample(self.concat("rot_camera_relative"))
def concat(self, parameter_suffix):
""" Concatenate the parameter prefix and suffix. """
return self.prefix + "_" + parameter_suffix
def get_initial_coord(self):
""" Get coordinates of asset across 3 axes. """
if self.is_coord_camera_relative():
cam_coord = self.camera.coords[0]
cam_rot = self.camera.rotation
horiz_fov = -1 * self.camera.intrinsics[0]["horiz_fov"]
vert_fov = self.camera.intrinsics[0]["vert_fov"]
radius = self.sample(self.concat("distance"))
theta = horiz_fov * self.sample(self.concat("horiz_fov_loc")) / 2
phi = vert_fov * self.sample(self.concat("vert_fov_loc")) / 2
# Convert from polar to cartesian
rads = np.radians(cam_rot[2] + theta)
x = cam_coord[0] + radius * np.cos(rads)
y = cam_coord[1] + radius * np.sin(rads)
rads = np.radians(cam_rot[0] + phi)
z = cam_coord[2] + radius * np.sin(rads)
coord = np.array([x, y, z])
else:
coord = self.sample(self.concat("coord"))
pretty_coord = tuple([round(v, 1) for v in coord.tolist()])
Logger.print("adding {} {} at coords{}".format(self.prefix.upper(), self.name, pretty_coord))
return coord
def get_initial_rotation(self):
""" Get rotation of asset across 3 axes. """
rotation = self.sample(self.concat("rot"))
rotation = np.array(rotation)
if self.is_rot_camera_relative():
cam_rot = self.camera.rotation
rotation += cam_rot
return rotation
def step(self, step_time):
""" Step asset forward in its sequence. """
from omni.isaac.core.utils.rotations import quat_to_euler_angles
if self.class_name != "Camera":
self.coord, quaternion = self.xform_prim.get_world_pose()
self.coord = np.array(self.coord, dtype=np.float32)
self.rotation = np.degrees(quat_to_euler_angles(quaternion))
vel_vector = self.vel
accel_vector = self.accel
if self.sample(self.concat("movement") + "_" + self.concat("relative")):
radians = np.radians(self.rotation)
direction_cosine_matrix = Rotation.from_rotvec(radians).as_matrix()
vel_vector = direction_cosine_matrix.dot(vel_vector)
accel_vector = direction_cosine_matrix.dot(accel_vector)
self.coord += vel_vector * step_time + 0.5 * accel_vector * step_time ** 2
self.translate(self.coord)
self.rotation += self.rot_vel * step_time + 0.5 * self.rot_accel * step_time ** 2
self.rotate(self.rotation)
| 5,103 | Python | 32.359477 | 101 | 0.592789 |
ngzhili/SynTable/syntable_composer/src/scene/asset/light.py |
from sampling import Sampler
from scene.asset import Asset
class Light(Asset):
""" For managing a light asset in Isaac Sim. """
def __init__(self, sim_app, sim_context, path, camera, group):
""" Construct Light. """
self.sample = Sampler(group=group).sample
self.distant = self.sample("light_distant")
self.directed = self.sample("light_directed")
if self.distant:
name = "distant_light"
elif self.directed:
name = "directed_light"
else:
name = "sphere_light"
super().__init__(sim_app, sim_context, path, "light", name, camera=camera, group=group)
self.load_light()
self.place_in_scene()
def place_in_scene(self):
""" Place light in scene. """
self.coord = self.get_initial_coord()
self.translate(self.coord)
self.rotation = self.get_initial_rotation()
self.rotate(self.rotation)
def load_light(self):
""" Create a light in Isaac Sim. """
from pxr import Sdf
from omni.usd.commands import ChangePropertyCommand
from omni.isaac.core.prims import XFormPrim
from omni.isaac.core.utils import prims
intensity = self.sample("light_intensity")
color = tuple(self.sample("light_color") / 255)
temp_enabled = self.sample("light_temp_enabled")
temp = self.sample("light_temp")
radius = self.sample("light_radius")
focus = self.sample("light_directed_focus")
focus_softness = self.sample("light_directed_focus_softness")
attributes = {}
if self.distant:
light_shape = "DistantLight"
elif self.directed:
light_shape = "DiskLight"
attributes["radius"] = radius
else:
light_shape = "SphereLight"
attributes["radius"] = radius
attributes["intensity"] = intensity
attributes["color"] = color
if temp_enabled:
attributes["enableColorTemperature"] = True
attributes["colorTemperature"] = temp
self.prim = prims.create_prim(self.path, light_shape, attributes=attributes)
self.xform_prim = XFormPrim(self.path)
if self.directed:
ChangePropertyCommand(prop_path=Sdf.Path(self.path + ".shaping:focus"), value=focus, prev=0.0).do()
ChangePropertyCommand(
prop_path=Sdf.Path(self.path + ".shaping:cone:softness"), value=focus_softness, prev=0.0
).do()
| 2,525 | Python | 32.236842 | 111 | 0.599208 |
ngzhili/SynTable/syntable_composer/src/scene/asset/object.py | import numpy as np
import os
from scene.asset import Asset
class Object(Asset):
""" For managing an Xform asset in Isaac Sim. """
def __init__(self, sim_app, sim_context, ref, path, prefix, camera, group):
""" Construct Object. """
self.ref = ref
name = self.ref[self.ref.rfind("/") + 1 : self.ref.rfind(".")]
super().__init__(sim_app, sim_context, path, prefix, name, camera=camera, group=group)
self.load_asset()
self.place_in_scene()
if self.class_name != "RoomFace" and self.sample("obj_physics"):
self.add_physics()
def load_asset(self):
""" Create asset from object parameters. """
from omni.isaac.core.prims import XFormPrim
from omni.isaac.core.utils import prims
print(self.path)
# Create object
self.prim = prims.create_prim(self.path, "Xform", semantic_label=self.label)
self.xform_prim = XFormPrim(self.path)
nested_path = os.path.join(self.path, "nested_prim")
self.nested_prim = prims.create_prim(nested_path, "Xform", usd_path=self.ref, semantic_label=self.label)
self.nested_xform_prim = XFormPrim(nested_path)
self.add_material()
def place_in_scene(self):
""" Scale, rotate, and translate asset. """
# Get asset dimensions
min_bound, max_bound = self.get_bounds()
size = max_bound - min_bound
# Get asset scaling
obj_size_is_enabled = self.sample("obj_size_enabled")
if obj_size_is_enabled:
obj_size = self.sample("obj_size")
max_size = max(size)
self.scaling = obj_size / max_size
else:
self.scaling = self.sample("obj_scale")
# Offset nested asset
obj_centered = self.sample("obj_centered")
if obj_centered:
offset = (max_bound + min_bound) / 2
self.translate(-offset, xform_prim=self.nested_xform_prim)
# Scale asset
self.scaling = np.array([self.scaling, self.scaling, self.scaling])
self.scale(self.scaling)
# Get asset coord and rotation
self.coord = self.get_initial_coord()
self.rotation = self.get_initial_rotation()
# Rotate asset
self.rotate(self.rotation)
# Place asset
self.translate(self.coord)
def get_bounds(self):
""" Compute min and max bounds of an asset. """
from omni.isaac.core.utils.bounds import compute_aabb, create_bbox_cache, recompute_extents
# recompute_extents(self.nested_prim)
cache = create_bbox_cache()
bound = compute_aabb(cache, self.path).tolist()
min_bound = np.array(bound[:3])
max_bound = np.array(bound[3:])
return min_bound, max_bound
def add_material(self):
""" Add material to asset, if needed. """
from pxr import UsdShade
material = self.sample(self.concat("material"))
color = self.sample(self.concat("color"))
texture = self.sample(self.concat("texture"))
texture_scale = self.sample(self.concat("texture_scale"))
texture_rot = self.sample(self.concat("texture_rot"))
reflectance = self.sample(self.concat("reflectance"))
metallic = self.sample(self.concat("metallicness"))
mtl_prim_path = None
if self.is_given(material):
# Load a material
mtl_prim_path = self.load_material_from_nucleus(material)
elif self.is_given(color) or self.is_given(texture):
# Load a new material
mtl_prim_path = self.create_material()
if mtl_prim_path:
# Update material properties and assign to asset
mtl_prim = self.update_material(
mtl_prim_path, color, texture, texture_scale, texture_rot, reflectance, metallic
)
UsdShade.MaterialBindingAPI(self.prim).Bind(mtl_prim, UsdShade.Tokens.strongerThanDescendants)
def load_material_from_nucleus(self, material):
""" Create material from Nucleus path. """
from pxr import Sdf
from omni.usd.commands import CreateMdlMaterialPrimCommand
mtl_url = self.sample("nucleus_server") + material
left_index = material.rfind("/") + 1 if "/" in material else 0
right_index = material.rfind(".") if "." in material else -1
mtl_name = material[left_index:right_index]
left_index = self.path.rfind("/") + 1 if "/" in self.path else 0
path_name = self.path[left_index:]
mtl_prim_path = "/Looks/" + mtl_name + "_" + path_name
mtl_prim_path = Sdf.Path(mtl_prim_path.replace("-", "_"))
CreateMdlMaterialPrimCommand(mtl_url=mtl_url, mtl_name=mtl_name, mtl_path=mtl_prim_path).do()
return mtl_prim_path
def create_material(self):
""" Create a OmniPBR material with provided properties and assign to asset. """
from pxr import Sdf
import omni
from omni.isaac.core.utils.prims import move_prim
from omni.kit.material.library import CreateAndBindMdlMaterialFromLibrary
mtl_created_list = []
CreateAndBindMdlMaterialFromLibrary(
mdl_name="OmniPBR.mdl", mtl_name="OmniPBR", mtl_created_list=mtl_created_list
).do()
mtl_prim_path = Sdf.Path(mtl_created_list[0])
new_mtl_prim_path = omni.usd.get_stage_next_free_path(self.stage, "/Looks/OmniPBR", False)
move_prim(path_from=mtl_prim_path, path_to=new_mtl_prim_path)
mtl_prim_path = new_mtl_prim_path
return mtl_prim_path
def update_material(self, mtl_prim_path, color, texture, texture_scale, texture_rot, reflectance, metallic):
""" Update properties of an existing material. """
import omni
from pxr import Sdf, UsdShade
mtl_prim = UsdShade.Material(self.stage.GetPrimAtPath(mtl_prim_path))
if self.is_given(color):
color = tuple(color / 255)
omni.usd.create_material_input(mtl_prim, "diffuse_color_constant", color, Sdf.ValueTypeNames.Color3f)
omni.usd.create_material_input(mtl_prim, "diffuse_tint", color, Sdf.ValueTypeNames.Color3f)
if self.is_given(texture):
texture = self.sample("nucleus_server") + texture
omni.usd.create_material_input(mtl_prim, "diffuse_texture", texture, Sdf.ValueTypeNames.Asset)
if self.is_given(texture_scale):
texture_scale = 1 / texture_scale
omni.usd.create_material_input(
mtl_prim, "texture_scale", (texture_scale, texture_scale), Sdf.ValueTypeNames.Float2
)
if self.is_given(texture_rot):
omni.usd.create_material_input(mtl_prim, "texture_rotate", texture_rot, Sdf.ValueTypeNames.Float)
if self.is_given(reflectance):
roughness = 1 - reflectance
omni.usd.create_material_input(
mtl_prim, "reflection_roughness_constant", roughness, Sdf.ValueTypeNames.Float
)
if self.is_given(metallic):
omni.usd.create_material_input(mtl_prim, "metallic_constant", metallic, Sdf.ValueTypeNames.Float)
return mtl_prim
def add_physics(self):
""" Make asset a rigid body to enable gravity and collision. """
from omni.isaac.core.utils.prims import get_all_matching_child_prims, get_prim_at_path
from omni.physx.scripts import utils
from pxr import UsdPhysics
def is_rigid_body(prim_path):
prim = get_prim_at_path(prim_path)
if prim.HasAPI(UsdPhysics.RigidBodyAPI):
return True
return False
has_physics_already = len(get_all_matching_child_prims(self.path, predicate=is_rigid_body)) > 0
if has_physics_already:
self.physics = True
return
utils.setRigidBody(self.prim, "convexHull", False)
# Set mass to 1 kg
mass_api = UsdPhysics.MassAPI.Apply(self.prim)
mass_api.CreateMassAttr(1)
self.physics = True
| 8,080 | Python | 35.731818 | 113 | 0.617946 |
ngzhili/SynTable/syntable_composer/src/distributions/choice.py |
import numpy as np
import os
from distributions import Distribution
class Choice(Distribution):
""" For sampling from a list of elems. """
def __init__(self, input, p=None, filter_list=None):
""" Construct Choice distribution. """
self.input = input
self.p = p
self.filter_list = filter_list
if self.p:
self.p = np.array(self.p)
self.p = self.p / np.sum(self.p)
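# Normalise the weights so they sum to 1, as np.random.choice requires.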
def __repr__(self):
return "Choice(name={}, input={}, p={}, filter_list={})".format(self.name, self.input, self.p, self.filter_list)
def setup(self, name):
""" Process input into a list of elems, with filter_list elems removed. """
self.name = name
self.valid_file_types = Distribution.param_suffix_to_file_type.get(self.name[self.name.rfind("_") + 1 :], [])
self.elems = self.get_elem_list(self.input)
if self.filter_list:
filter_listed_elems = self.get_elem_list(self.filter_list)
elem_set = set(self.elems)
for elem in filter_listed_elems:
if elem in elem_set:
                    self.elems.remove(elem)
self.elems = self.unpack_elem_list(self.elems)
self.verify_args()
def verify_args(self):
""" Verify elem list derived from input args. """
if len(self.elems) == 0:
raise ValueError(repr(self) + " has no elems.")
        if self.p is not None:
if len(self.elems) != len(self.p):
raise ValueError(
repr(self)
+ " must have equal num p weights '{}' and num elems '{}'".format(len(self.elems), len(self.p))
)
if len(self.elems) > 1:
type_checks = []
for elem in self.elems:
if type(elem) in (int, float):
# Integer and Float equivalence
elem_types = [int, float]
elif type(elem) in (tuple, list, np.ndarray):
# Tuple and List equivalence
elem_types = [tuple, list, np.ndarray]
else:
elem_types = [type(elem)]
type_check = type(self.elems[0]) in elem_types
type_checks.append(type_check)
all_elems_same_val_type = all(type_checks)
if not all_elems_same_val_type:
raise ValueError(repr(self) + " must have elems that are all the same value type.")
def sample(self):
""" Samples from the list of elems. """
# print(self.__repr__())
# print('len(self.elems):',len(self.elems))
# print("self.elems:",self.elems)
if self.elems:
index = np.random.choice(len(self.elems), p=self.p)
sample = self.elems[index]
if type(sample) in (tuple, list):
sample = np.array(sample)
return sample
else:
return None
def get_type(self):
""" Get value type of elem list, which are all the same. """
return type(self.elems[0])
def get_elem_list(self, input):
""" Process input into a list of elems. """
elems = []
if type(input) is str and input[-4:] == ".txt":
input_file = input
file_elems = self.parse_input_file(input_file)
elems.extend(file_elems)
elif type(input) is list:
for elem in input:
list_elems = self.get_elem_list(elem)
elems.extend(list_elems)
else:
elem = input
if type(elem) in (tuple, list):
elem = np.array(elem)
elems.append(input)
return elems
def parse_input_file(self, input_file):
""" Parse an input file into a list of elems. """
if input_file.startswith("/"):
input_file = input_file
elif input_file.startswith("*"):
input_file = os.path.join(Distribution.mount, input_file[2:])
else:
input_file = os.path.join(os.path.dirname(__file__), "../../", input_file)
if not os.path.exists(input_file):
raise ValueError(repr(self) + " is unable to find file '{}'".format(input_file))
with open(input_file) as f:
lines = f.readlines()
lines = [line.strip() for line in lines]
file_elems = []
for elem in lines:
if elem and not elem.startswith("#"):
try:
elem = eval(elem)
if type(elem) in (tuple, list):
try:
elem = np.array(elem, dtype=np.float32)
except:
pass
except Exception as e:
pass
file_elems.append(elem)
return file_elems
def unpack_elem_list(self, elems):
""" Unpack all potential Nucleus server directories referenced in the parameter values. """
all_unpacked_elems = []
for elem in elems:
unpacked_elems = [elem]
if type(elem) is str:
if not elem.startswith("/"):
raise ValueError(repr(self) + " with path elem '{}' must start with a forward slash.".format(elem))
directory_elems = self.get_directory_elems(elem)
if directory_elems:
directory = elem
unpacked_elems = self.unpack_directory(directory_elems, directory)
# if "." in elem:
# file_type = elem[elem.rfind(".") :].lower()
# if file_type not in self.valid_file_types:
# raise ValueError(
# repr(self)
# + " has elem '{}' with incorrect file type. File type must be in '{}'.".format(
# elem, self.valid_file_types
# )
# )
all_unpacked_elems.extend(unpacked_elems)
elems = all_unpacked_elems
return elems
def unpack_directory(self, directory_elems, directory):
""" Unpack a directory on Nucleus into a list of file paths. """
unpacked_elems = []
for directory_elem in directory_elems:
directory_elem = os.path.join(directory, directory_elem)
file_type = directory_elem[directory_elem.rfind(".") :].lower()
if file_type in self.valid_file_types:
elem = os.path.join(directory, directory_elem)
unpacked_elems.append(elem)
else:
sub_directory_elems = self.get_directory_elems(directory_elem)
if sub_directory_elems:
# Recurse on subdirectories
unpacked_elems.extend(self.unpack_directory(sub_directory_elems, directory_elem))
return unpacked_elems
def get_directory_elems(self, elem):
""" Grab files in a potential Nucleus server directory. """
import omni.client
elem_can_be_nucleus_dir = "." not in os.path.basename(elem)
if elem_can_be_nucleus_dir:
(_, directory_elems) = omni.client.list(self.nucleus_server + elem)
directory_elems = [str(elem.relative_path) for elem in directory_elems]
return directory_elems
else:
return ()
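# Hedged usage sketch (not part of the original file): sampling a colour from a
# fixed palette with non-uniform weights. Distribution.param_suffix_to_file_type
# is normally initialised by the composer's parameter parser; it is stubbed with
# an empty dict here purely so that setup() can run stand-alone.
if __name__ == "__main__":
    Distribution.param_suffix_to_file_type = {}
    palette = Choice([(255, 0, 0), (0, 255, 0), (0, 0, 255)], p=[0.5, 0.25, 0.25])
    palette.setup("obj_color")
    print(palette.sample())  # one of the three colours, returned as a numpy array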
| 7,523 | Python | 35 | 120 | 0.516682 |
ngzhili/SynTable/syntable_composer/src/distributions/__init__.py |
from .distribution import Distribution
from .choice import Choice
from .normal import Normal
from .range import Range
from .uniform import Uniform
from .walk import Walk
| 172 | Python | 18.22222 | 38 | 0.813953 |
ngzhili/SynTable/syntable_composer/src/distributions/distribution.py |
from abc import ABC, abstractmethod
class Distribution:
# Static variables
mount = None
nucleus_server = None
param_suffix_to_file_type = None
@abstractmethod
def __init__(self):
pass
@abstractmethod
def setup(self):
pass
@abstractmethod
def verify_args(self):
pass
@abstractmethod
def sample(self):
pass
@abstractmethod
def get_type(self):
pass
| 451 | Python | 13.580645 | 36 | 0.59867 |
ngzhili/SynTable/syntable_composer/src/distributions/normal.py |
import numpy as np
from distributions import Distribution
class Normal(Distribution):
""" For sampling a Gaussian. """
def __init__(self, mean, var, min=None, max=None):
""" Construct Normal distribution. """
self.mean = mean
self.var = var
self.min_val = min
self.max_val = max
def __repr__(self):
return "Normal(name={}, mean={}, var={}, min_bound={}, max_bound={})".format(
self.name, self.mean, self.var, self.min_val, self.max_val
)
def setup(self, name):
""" Parse input arguments. """
self.name = name
self.std_dev = np.sqrt(self.var)
self.verify_args()
def verify_args(self):
""" Verify input arguments. """
def verify_arg_i(mean, var, min_val, max_val):
""" Verify number values. """
if type(mean) not in (int, float):
raise ValueError(repr(self) + " has incorrect mean type.")
if type(var) not in (int, float):
raise ValueError(repr(self) + " has incorrect variance type.")
if var < 0:
raise ValueError(repr(self) + " must have non-negative variance.")
if min_val != None and type(min_val) not in (int, float):
raise ValueError(repr(self) + " has incorrect min type.")
if max_val != None and type(max_val) not in (int, float):
raise ValueError(repr(self) + " has incorrect max type.")
return True
valid = False
if type(self.mean) in (tuple, list) and type(self.var) in (tuple, list):
if len(self.mean) != len(self.var):
raise ValueError(repr(self) + " must have mean and variance with same length.")
if self.min_val and len(self.min_val) != len(self.mean):
raise ValueError(repr(self) + " must have mean and min bound with same length.")
if self.max_val and len(self.max_val) != len(self.mean):
raise ValueError(repr(self) + " must have mean and max bound with same length.")
valid = all(
[
verify_arg_i(
self.mean[i],
self.var[i],
self.min_val[i] if self.min_val else None,
self.max_val[i] if self.max_val else None,
)
for i in range(len(self.mean))
]
)
else:
valid = verify_arg_i(self.mean, self.var, self.min_val, self.max_val)
if not valid:
raise ValueError(repr(self) + " is invalid.")
def sample(self):
""" Sample from Gaussian. """
sample = np.random.normal(self.mean, self.std_dev)
if self.min_val is not None or self.max_val is not None:
sample = np.clip(sample, a_min=self.min_val, a_max=self.max_val)
return sample
def get_type(self):
if type(self.mean) in (tuple, list):
return tuple
else:
return float
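# Hedged usage sketch (not part of the original file): a clipped per-axis
# Gaussian, e.g. for jittering an object position. All numbers are illustrative.
if __name__ == "__main__":
    jitter = Normal(mean=[0.0, 0.0, 0.5], var=[0.01, 0.01, 0.04],
                    min=[-0.2, -0.2, 0.0], max=[0.2, 0.2, 1.0])
    jitter.setup("obj_position")
    print(jitter.sample())  # 3-vector drawn from N(mean, var), clipped to [min, max]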
| 3,091 | Python | 32.978022 | 96 | 0.524426 |
ngzhili/SynTable/syntable_composer/src/distributions/range.py |
import numpy as np
from distributions import Distribution
class Range(Distribution):
""" For sampling from a range of integers. """
def __init__(self, min_val, max_val):
""" Construct Range distribution. """
self.min_val = min_val
self.max_val = max_val
def __repr__(self):
return "Range(name={}, min={}, max={})".format(self.name, self.min_val, self.max_val)
def setup(self, name):
""" Parse input arguments. """
self.name = name
self.range = range(self.min_val, self.max_val + 1)
self.verify_args()
def verify_args(self):
""" Verify input arguments. """
def verify_args_i(min_val, max_val):
""" Verify number values. """
valid = False
if type(min_val) is int and type(max_val) is int:
valid = min_val <= max_val
return valid
valid = False
if type(self.min_val) in (tuple, list) and type(self.max_val) in (tuple, list):
if len(self.min_val) != len(self.max_val):
raise ValueError(repr(self) + " must have min and max with same length.")
valid = all([verify_args_i(self.min_val[i], self.max_val[i]) for i in range(len(self.min_val))])
else:
valid = verify_args_i(self.min_val, self.max_val)
if not valid:
raise ValueError(repr(self) + " is invalid.")
def sample(self):
""" Sample from discrete range. """
return np.random.choice(self.range)
def get_type(self):
""" Get value type. """
if type(self.min_val) in (tuple, list):
return tuple
else:
return int
| 1,707 | Python | 26.548387 | 108 | 0.54833 |
ngzhili/SynTable/syntable_composer/src/distributions/uniform.py |
import numpy as np
from distributions import Distribution
class Uniform(Distribution):
""" For sampling uniformly from a continuous range. """
def __init__(self, min_val, max_val):
""" Construct Uniform distribution."""
self.min_val = min_val
self.max_val = max_val
def __repr__(self):
return "Uniform(name={}, min={}, max={})".format(self.name, self.min_val, self.max_val)
def setup(self, name):
""" Parse input arguments. """
self.name = name
self.verify_args()
def verify_args(self):
""" Verify input arguments. """
def verify_args_i(min_val, max_val):
""" Verify number values. """
valid = False
if type(min_val) in (int, float) and type(max_val) in (int, float):
valid = min_val <= max_val
return valid
valid = False
if type(self.min_val) in (tuple, list) and type(self.max_val) in (tuple, list):
if len(self.min_val) != len(self.max_val):
raise ValueError(repr(self) + " must have min and max with same length.")
valid = all([verify_args_i(self.min_val[i], self.max_val[i]) for i in range(len(self.min_val))])
else:
valid = verify_args_i(self.min_val, self.max_val)
if not valid:
raise ValueError(repr(self) + " is invalid.")
def sample(self):
""" Sample from continuous range. """
return np.random.uniform(self.min_val, self.max_val)
def get_type(self):
""" Get value type. """
if type(self.min_val) in (tuple, list):
return tuple
else:
return float
| 1,700 | Python | 27.35 | 108 | 0.554118 |
ngzhili/SynTable/syntable_composer/src/distributions/walk.py |
import numpy as np
from distributions import Choice
class Walk(Choice):
""" For sampling from a list of elems without replacement. """
def __init__(self, input, filter_list=None, ordered=True):
""" Constructs a Walk distribution. """
super().__init__(input, filter_list=filter_list)
self.ordered = ordered
self.completed = False
self.index = 0
def __repr__(self):
return "Walk(name={}, input={}, filter_list={}, ordered={})".format(
self.name, self.input, self.filter_list, self.ordered
)
    def setup(self, name):
        """ Parse input arguments. """
        self.name = name
        super().setup(name)
        if not self.ordered:
            self.sampled_indices = list(range(len(self.elems)))
def sample(self):
""" Samples from list of elems and updates the index tracker. """
if self.ordered:
self.index %= len(self.elems)
sample = self.elems[self.index]
self.index += 1
else:
if len(self.sampled_indices) == 0:
self.sampled_indices = list(range(len(self.elems)))
            self.index = np.random.choice(self.sampled_indices)
self.sampled_indices.remove(self.index)
sample = self.elems[self.index]
if type(sample) in (tuple, list):
sample = np.array(sample)
return sample
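# Hedged usage sketch (not part of the original file): unlike Choice, Walk cycles
# through its elements without replacement. The static file-type mapping is
# stubbed here only so that setup() can run stand-alone.
if __name__ == "__main__":
    from distributions import Distribution
    Distribution.param_suffix_to_file_type = {}
    counts = Walk([1, 2, 3])
    counts.setup("obj_count")
    print([counts.sample() for _ in range(4)])  # [1, 2, 3, 1]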
| 1,416 | Python | 26.249999 | 76 | 0.567797 |
ngzhili/SynTable/syntable_composer/src/output/disparity.py |
import numpy as np
class DisparityConverter:
""" For converting stereo depth maps to stereo disparity maps. """
def __init__(self, depth_l, depth_r, fx, fy, cx, cy, baseline):
""" Construct DisparityConverter. """
self.depth_l = np.array(depth_l, dtype=np.float32)
self.depth_r = np.array(depth_r, dtype=np.float32)
self.fx = fx
self.fy = fy
self.cx = cx
self.cy = cy
self.baseline = baseline
def compute_disparity(self):
""" Computes a disparity map from left and right depth maps. """
# List all valid depths in the depth map
(y, x) = np.nonzero(np.invert(np.isnan(self.depth_l)))
depth_l = self.depth_l[y, x]
depth_r = self.depth_r[y, x]
# Compute disparity maps
disp_lr = self.depth_to_disparity(x, depth_l, self.baseline)
disp_rl = self.depth_to_disparity(x, depth_r, -self.baseline)
# Use numpy vectorization to get pixel coordinates
disp_l, disp_r = np.zeros(self.depth_l.shape), np.zeros(self.depth_r.shape)
disp_l[y, x] = np.abs(disp_lr)
disp_r[y, x] = np.abs(disp_rl)
disp_l = np.array(disp_l, dtype=np.float32)
disp_r = np.array(disp_r, dtype=np.float32)
return disp_l, disp_r
def depth_to_disparity(self, x, depth, baseline_offset):
""" Convert depth map to disparity map. """
# Backproject image to 3D world
x_est = (x - self.cx) * (depth / self.fx)
# Add baseline offset to 3D world position
x_est += baseline_offset
# Project to the other stereo image domain
x_pt = self.cx + (x_est / depth * self.fx)
# Compute disparity with the x-axis only since the left and right images are rectified
disp = x_pt - x
return disp
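# Hedged note (not part of the original file): for rectified cameras the two
# projections above reduce to the classic relation
#     disp = x_pt - x = fx * baseline / depth,
# i.e. disparity is inversely proportional to depth. A quick self-check with
# illustrative numbers (fx = 600 px, baseline = 0.1 m, depth = 2 m everywhere):
if __name__ == "__main__":
    depth = np.full((4, 4), 2.0, dtype=np.float32)
    converter = DisparityConverter(depth, depth, fx=600.0, fy=600.0, cx=2.0, cy=2.0, baseline=0.1)
    disp_l, disp_r = converter.compute_disparity()
    print(disp_l[0, 0], disp_r[0, 0])  # expected: 30.0 30.0 (600 * 0.1 / 2)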
| 1,827 | Python | 32.236363 | 94 | 0.595512 |
ngzhili/SynTable/syntable_composer/src/output/log.py | import datetime
import os
import time
import yaml
class Logger:
""" For logging parameter samples and dataset generation metadata. """
# Static variables set outside class
verbose = None
content_log_path = None
def start_log_entry(index):
""" Initialize a sample's log message. """
Logger.start_time = time.time()
Logger.log_entry = [{}]
Logger.log_entry[0]["index"] = index
Logger.log_entry[0]["metadata"] = {"params": [], "lines": []}
Logger.log_entry[0]["metadata"]["timestamp"] = str(datetime.datetime.now())
if Logger.verbose:
print()
def finish_log_entry():
""" Output a sample's log message to the end of the content log. """
duration = time.time() - Logger.start_time
Logger.log_entry[0]["time_elapsed"] = duration
if Logger.content_log_path:
with open(Logger.content_log_path, "a") as f:
yaml.safe_dump(Logger.log_entry, f)
def write_parameter(key, val, group=None):
""" Record a sample parameter value. """
if key == "groups":
return
param_dict = {}
param_dict["parameter"] = key
param_dict["val"] = str(val)
param_dict["group"] = group
Logger.log_entry[0]["metadata"]["params"].append(param_dict)
def print(line, force_print=False):
""" Record a string and potentially output it to console. """
Logger.log_entry[0]["metadata"]["lines"].append(line)
if Logger.verbose or force_print:
line = str(line)
print(line)
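# Hedged usage sketch (not part of the original file): the composer normally sets
# the two static fields below before generating each sample; the path and values
# used here are purely illustrative.
if __name__ == "__main__":
    Logger.verbose = True
    Logger.content_log_path = "/tmp/content_log.yaml"  # hypothetical location
    Logger.start_log_entry(0)
    Logger.write_parameter("obj_count", 5, group="objects")
    Logger.print("placed 5 objects", force_print=True)
    Logger.finish_log_entry()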
| 1,615 | Python | 27.350877 | 83 | 0.580805 |
ngzhili/SynTable/syntable_composer/src/output/__init__.py |
from .writer import DataWriter
from .disparity import DisparityConverter
from .metrics import Metrics
from .log import Logger
from .output import OutputManager
| 161 | Python | 22.142854 | 41 | 0.838509 |
ngzhili/SynTable/syntable_composer/src/output/metrics.py | import os
import yaml
class Metrics:
""" For managing performance metrics of dataset generation. """
def __init__(self, log_dir, content_log_path):
""" Construct Metrics. """
self.metric_path = os.path.join(log_dir, "metrics.txt")
self.content_log_path = content_log_path
def output_performance_metrics(self):
""" Collect per-scene metrics and calculate and output summary metrics. """
with open(self.content_log_path, "r") as f:
log = yaml.safe_load(f)
durations = []
for log_entry in log:
if type(log_entry["index"]) is int:
durations.append(log_entry["time_elapsed"])
durations.sort()
metric_packet = {}
n = len(durations)
metric_packet["time_per_sample_min"] = durations[0]
metric_packet["time_per_sample_first_quartile"] = durations[n // 4]
metric_packet["time_per_sample_median"] = durations[n // 2]
metric_packet["time_per_sample_third_quartile"] = durations[3 * n // 4]
metric_packet["time_per_sample_max"] = durations[-1]
metric_packet["time_per_sample_mean"] = sum(durations) / n
with open(self.metric_path, "w") as f:
yaml.safe_dump(metric_packet, f)
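# Hedged note (not part of the original file): metrics.txt is plain YAML holding
# the per-sample timing summary computed above, e.g. (values illustrative):
#
#   time_per_sample_min: 2.1
#   time_per_sample_first_quartile: 2.4
#   time_per_sample_median: 2.7
#   time_per_sample_third_quartile: 3.0
#   time_per_sample_max: 5.8
#   time_per_sample_mean: 2.9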
| 1,273 | Python | 31.666666 | 83 | 0.598586 |
ngzhili/SynTable/syntable_composer/src/output/output.py | import copy
import numpy as np
import carb
from output import DataWriter, DisparityConverter, Logger
from sampling import Sampler
class OutputManager:
""" For managing Composer outputs, including sending data to the data writer. """
def __init__(self, sim_app, sim_context, scene_manager, output_data_dir, scene_units_in_meters):
""" Construct OutputManager. Start data writer threads. """
from omni.isaac.synthetic_utils import SyntheticDataHelper
self.sim_app = sim_app
self.sim_context = sim_context
self.scene_manager = scene_manager
self.output_data_dir = output_data_dir
self.scene_units_in_meters = scene_units_in_meters
self.camera = self.scene_manager.camera
self.viewports = self.camera.viewports
self.stage = self.sim_context.stage
self.sample = Sampler().sample
self.groundtruth_visuals = self.sample("groundtruth_visuals")
self.label_to_class_id = self.get_label_to_class_id()
max_queue_size = 500
self.write_data = self.sample("write_data")
if self.write_data:
self.data_writer = DataWriter(self.output_data_dir, self.sample("num_data_writer_threads"), max_queue_size)
self.data_writer.start_threads()
self.sd_helper = SyntheticDataHelper()
self.gt_list = []
if self.sample("rgb") or (
self.sample("bbox_2d_tight")
or self.sample("bbox_2d_loose")
or self.sample("bbox_3d")
and self.groundtruth_visuals
):
self.gt_list.append("rgb")
if (self.sample("depth")) or (self.sample("disparity") and self.sample("stereo")):
self.gt_list.append("depthLinear")
if self.sample("instance_seg"):
self.gt_list.append("instanceSegmentation")
if self.sample("semantic_seg"):
self.gt_list.append("semanticSegmentation")
if self.sample("bbox_2d_tight"):
self.gt_list.append("boundingBox2DTight")
if self.sample("bbox_2d_loose"):
self.gt_list.append("boundingBox2DLoose")
if self.sample("bbox_3d"):
self.gt_list.append("boundingBox3D")
for viewport_name, viewport_window in self.viewports:
self.sd_helper.initialize(sensor_names=self.gt_list, viewport=viewport_window)
self.sim_app.update()
self.carb_settings = carb.settings.acquire_settings_interface()
def get_label_to_class_id(self):
""" Get mapping of object semantic labels to class ids. """
label_to_class_id = {}
groups = self.sample("groups")
for group in groups:
class_id = self.sample("obj_class_id", group=group)
label_to_class_id[group] = class_id
label_to_class_id["[[scenario]]"] = self.sample("scenario_class_id")
return label_to_class_id
def capture_groundtruth(self, index, step_index=0, sequence_length=0):
""" Capture groundtruth data from Isaac Sim. Send data to data writer. """
depths = []
all_viewport_data = []
for i in range(len(self.viewports)):
self.sim_context.render()
self.sim_context.render()
viewport_name, viewport_window = self.viewports[i]
num_digits = len(str(self.sample("num_scenes") - 1))
id = str(index)
id = id.zfill(num_digits)
if self.sample("sequential"):
num_digits = len(str(sequence_length - 1))
suffix_id = str(step_index)
suffix_id = suffix_id.zfill(num_digits)
id = id + "_" + suffix_id
groundtruth = {
"METADATA": {
"image_id": id,
"viewport_name": viewport_name,
"DEPTH": {},
"INSTANCE": {},
"SEMANTIC": {},
"BBOX2DTIGHT": {},
"BBOX2DLOOSE": {},
"BBOX3D": {},
},
"DATA": {},
}
# Collect Groundtruth
self.sim_context.render()
self.sim_context.render()
gt = copy.deepcopy(self.sd_helper.get_groundtruth(self.gt_list, viewport_window, wait_for_sensor_data=0.2))
# RGB
if "rgb" in gt["state"]:
if gt["state"]["rgb"]:
groundtruth["DATA"]["RGB"] = gt["rgb"]
# Depth (for Disparity)
if "depthLinear" in gt["state"]:
depth_data = copy.deepcopy(gt["depthLinear"]).squeeze()
# Convert to scene units
depth_data /= self.scene_units_in_meters
depths.append(depth_data)
if i == 0 or self.sample("groundtruth_stereo"):
# Depth
if "depthLinear" in gt["state"]:
if self.sample("depth"):
depth_data = gt["depthLinear"].squeeze()
# Convert to scene units
depth_data /= self.scene_units_in_meters
groundtruth["DATA"]["DEPTH"] = depth_data
groundtruth["METADATA"]["DEPTH"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["DEPTH"]["NPY"] = True
# Instance Segmentation
if "instanceSegmentation" in gt["state"]:
instance_data = gt["instanceSegmentation"][0]
groundtruth["DATA"]["INSTANCE"] = instance_data
groundtruth["METADATA"]["INSTANCE"]["WIDTH"] = instance_data.shape[1]
groundtruth["METADATA"]["INSTANCE"]["HEIGHT"] = instance_data.shape[0]
groundtruth["METADATA"]["INSTANCE"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["INSTANCE"]["NPY"] = True
# Semantic Segmentation
if "semanticSegmentation" in gt["state"]:
semantic_data = gt["semanticSegmentation"]
semantic_data = self.sd_helper.get_mapped_semantic_data(
semantic_data, self.label_to_class_id, remap_using_base_class=True
)
semantic_data = np.array(semantic_data)
semantic_data[semantic_data == 65535] = 0 # deals with invalid semantic id
groundtruth["DATA"]["SEMANTIC"] = semantic_data
groundtruth["METADATA"]["SEMANTIC"]["WIDTH"] = semantic_data.shape[1]
groundtruth["METADATA"]["SEMANTIC"]["HEIGHT"] = semantic_data.shape[0]
groundtruth["METADATA"]["SEMANTIC"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["SEMANTIC"]["NPY"] = True
# 2D Tight BBox
if "boundingBox2DTight" in gt["state"]:
groundtruth["DATA"]["BBOX2DTIGHT"] = gt["boundingBox2DTight"]
groundtruth["METADATA"]["BBOX2DTIGHT"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["BBOX2DTIGHT"]["NPY"] = True
# 2D Loose BBox
if "boundingBox2DLoose" in gt["state"]:
groundtruth["DATA"]["BBOX2DLOOSE"] = gt["boundingBox2DLoose"]
groundtruth["METADATA"]["BBOX2DLOOSE"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["BBOX2DLOOSE"]["NPY"] = True
# 3D BBox
if "boundingBox3D" in gt["state"]:
groundtruth["DATA"]["BBOX3D"] = gt["boundingBox3D"]
groundtruth["METADATA"]["BBOX3D"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["BBOX3D"]["NPY"] = True
all_viewport_data.append(groundtruth)
# Wireframe
if self.sample("wireframe"):
self.carb_settings.set("/rtx/wireframe/mode", 2.0)
# Need two updates for all viewports to have wireframe properly
self.sim_context.render()
self.sim_context.render()
for i in range(len(self.viewports)):
viewport_name, viewport_window = self.viewports[i]
gt = copy.deepcopy(self.sd_helper.get_groundtruth(["rgb"], viewport_window))
all_viewport_data[i]["DATA"]["WIREFRAME"] = gt["rgb"]
self.carb_settings.set("/rtx/wireframe/mode", 0)
self.sim_context.render()
for i in range(len(self.viewports)):
if self.write_data:
self.data_writer.q.put(copy.deepcopy(all_viewport_data[i]))
# Disparity
if self.sample("disparity") and self.sample("stereo"):
depth_l, depth_r = depths
cam_intrinsics = self.camera.intrinsics[0]
disp_convert = DisparityConverter(
depth_l,
depth_r,
cam_intrinsics["fx"],
cam_intrinsics["fy"],
cam_intrinsics["cx"],
cam_intrinsics["cy"],
self.sample("stereo_baseline"),
)
disp_l, disp_r = disp_convert.compute_disparity()
disparities = [disp_l, disp_r]
for i in range(len(self.viewports)):
if i == 0 or self.sample("groundtruth_stereo"):
viewport_name, viewport_window = self.viewports[i]
groundtruth = {
"METADATA": {"image_id": id, "viewport_name": viewport_name, "DISPARITY": {}},
"DATA": {},
}
disparity_data = disparities[i]
groundtruth["DATA"]["DISPARITY"] = disparity_data
groundtruth["METADATA"]["DISPARITY"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["DISPARITY"]["NPY"] = True
if self.write_data:
self.data_writer.q.put(copy.deepcopy(groundtruth))
return groundtruth
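# Hedged note (not part of the original file): every packet queued for the data
# writer follows the shape built above; DataWriter.worker() relies on exactly
# these keys. A minimal example (viewport name and values illustrative):
#
#   {
#       "METADATA": {
#           "image_id": "0042",
#           "viewport_name": "mono",
#           "DEPTH": {"COLORIZE": True, "NPY": True},
#           ...
#       },
#       "DATA": {
#           "RGB": <H x W x 4 uint8 array>,
#           "DEPTH": <H x W float32 array in scene units>,
#       },
#   }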
| 10,140 | Python | 41.970339 | 126 | 0.537673 |
ngzhili/SynTable/syntable_composer/src/output/writer.py | import atexit
import numpy as np
import os
from PIL import Image
import queue
import sys
import threading
class DataWriter:
""" For processing and writing output data to files. """
def __init__(self, data_dir, num_worker_threads, max_queue_size=500):
""" Construct DataWriter. """
from omni.isaac.synthetic_utils import visualization
self.visualization = visualization
atexit.register(self.stop_threads)
self.data_dir = data_dir
# Threading for multiple scenes
self.num_worker_threads = num_worker_threads
# Initialize queue with a specified size
self.q = queue.Queue(max_queue_size)
self.threads = []
def start_threads(self):
""" Start worker threads. """
for _ in range(self.num_worker_threads):
t = threading.Thread(target=self.worker, daemon=True)
t.start()
self.threads.append(t)
def stop_threads(self):
""" Waits for all tasks to be completed before stopping worker threads. """
print("Finish writing data...")
# Block until all tasks are done
self.q.join()
print("Done.")
def worker(self):
""" Processes task from queue. Each tasks contains groundtruth data and metadata which is used to transform the output and write it to disk. """
while True:
groundtruth = self.q.get()
if groundtruth is None:
break
filename = groundtruth["METADATA"]["image_id"]
viewport_name = groundtruth["METADATA"]["viewport_name"]
for gt_type, data in groundtruth["DATA"].items():
if gt_type == "RGB":
self.save_image(viewport_name, gt_type, data, filename)
elif gt_type == "WIREFRAME":
self.save_image(viewport_name, gt_type, data, filename)
elif gt_type == "DEPTH":
if groundtruth["METADATA"]["DEPTH"]["NPY"]:
self.save_PFM(viewport_name, gt_type, data, filename)
if groundtruth["METADATA"]["DEPTH"]["COLORIZE"]:
self.save_image(viewport_name, gt_type, data, filename)
elif gt_type == "DISPARITY":
if groundtruth["METADATA"]["DISPARITY"]["NPY"]:
self.save_PFM(viewport_name, gt_type, data, filename)
if groundtruth["METADATA"]["DISPARITY"]["COLORIZE"]:
self.save_image(viewport_name, gt_type, data, filename)
elif gt_type == "INSTANCE":
self.save_segmentation(
viewport_name,
gt_type,
data,
filename,
groundtruth["METADATA"]["INSTANCE"]["WIDTH"],
groundtruth["METADATA"]["INSTANCE"]["HEIGHT"],
groundtruth["METADATA"]["INSTANCE"]["COLORIZE"],
groundtruth["METADATA"]["INSTANCE"]["NPY"],
)
elif gt_type == "SEMANTIC":
self.save_segmentation(
viewport_name,
gt_type,
data,
filename,
groundtruth["METADATA"]["SEMANTIC"]["WIDTH"],
groundtruth["METADATA"]["SEMANTIC"]["HEIGHT"],
groundtruth["METADATA"]["SEMANTIC"]["COLORIZE"],
groundtruth["METADATA"]["SEMANTIC"]["NPY"],
)
elif gt_type in ["BBOX2DTIGHT", "BBOX2DLOOSE", "BBOX3D"]:
self.save_bbox(
viewport_name,
gt_type,
data,
filename,
groundtruth["METADATA"][gt_type]["COLORIZE"],
groundtruth["DATA"]["RGB"],
groundtruth["METADATA"][gt_type]["NPY"],
)
elif gt_type == "CAMERA":
self.camera_folder = self.data_dir + "/" + str(viewport_name) + "/camera/"
np.save(self.camera_folder + filename + ".npy", data)
elif gt_type == "POSES":
self.poses_folder = self.data_dir + "/" + str(viewport_name) + "/poses/"
np.save(self.poses_folder + filename + ".npy", data)
else:
raise NotImplementedError
self.q.task_done()
def save_segmentation(
self, viewport_name, data_type, data, filename, width=1280, height=720, display_rgb=True, save_npy=True
):
""" Save segmentation mask data and visuals. """
# Save ground truth data as 16-bit single channel png
if save_npy:
if data_type == "INSTANCE":
data_folder = os.path.join(self.data_dir, viewport_name, "instance")
data = np.array(data, dtype=np.uint8)
img = Image.fromarray(data, mode="L")
elif data_type == "SEMANTIC":
data_folder = os.path.join(self.data_dir, viewport_name, "semantic")
data = np.array(data, dtype=np.uint8)
img = Image.fromarray(data, mode="L")
os.makedirs(data_folder, exist_ok=True)
file = os.path.join(data_folder, filename + ".png")
img.save(file, "PNG", bits=16)
# Save ground truth data as visuals
if display_rgb:
image_data = np.frombuffer(data, dtype=np.uint8).reshape(*data.shape, -1)
image_data += 1
if data_type == "SEMANTIC":
# Move close values apart to allow color values to separate more
image_data = np.array((image_data * 17) % 256, dtype=np.uint8)
color_image = self.visualization.colorize_segmentation(image_data, width, height, 3, None)
color_image = color_image[:, :, :3]
color_image_rgb = Image.fromarray(color_image, "RGB")
if data_type == "INSTANCE":
data_folder = os.path.join(self.data_dir, viewport_name, "instance", "visuals")
elif data_type == "SEMANTIC":
data_folder = os.path.join(self.data_dir, viewport_name, "semantic", "visuals")
os.makedirs(data_folder, exist_ok=True)
file = os.path.join(data_folder, filename + ".png")
color_image_rgb.save(file, "PNG")
def save_image(self, viewport_name, img_type, image_data, filename):
""" Save rgb data, depth visuals, and disparity visuals. """
# Convert 1-channel groundtruth data to visualization image data
def normalize_greyscale_image(image_data):
            # Guard zeros before taking the reciprocal to avoid divide-by-zero warnings
            image_data[image_data == 0.0] = 1e-5
            image_data = np.reciprocal(image_data)
image_data = np.clip(image_data, 0, 255)
image_data -= np.min(image_data)
if np.max(image_data) > 0:
image_data /= np.max(image_data)
image_data *= 255
image_data = image_data.astype(np.uint8)
return image_data
# Save image data as png
if img_type == "RGB":
data_folder = os.path.join(self.data_dir, viewport_name, "rgb")
image_data = image_data[:, :, :3]
img = Image.fromarray(image_data, "RGB")
elif img_type == "WIREFRAME":
data_folder = os.path.join(self.data_dir, viewport_name, "wireframe")
image_data = np.average(image_data, axis=2)
image_data = image_data.astype(np.uint8)
img = Image.fromarray(image_data, "L")
elif img_type == "DEPTH":
image_data = image_data * 100
image_data = normalize_greyscale_image(image_data)
data_folder = os.path.join(self.data_dir, viewport_name, "depth", "visuals")
img = Image.fromarray(image_data, mode="L")
elif img_type == "DISPARITY":
image_data = normalize_greyscale_image(image_data)
data_folder = os.path.join(self.data_dir, viewport_name, "disparity", "visuals")
img = Image.fromarray(image_data, mode="L")
os.makedirs(data_folder, exist_ok=True)
file = os.path.join(data_folder, filename + ".png")
img.save(file, "PNG")
def save_bbox(self, viewport_name, data_type, data, filename, display_rgb=True, rgb_data=None, save_npy=True):
""" Save bbox data and visuals. """
# Save ground truth data as npy
if save_npy:
if data_type == "BBOX2DTIGHT":
data_folder = os.path.join(self.data_dir, viewport_name, "bbox_2d_tight")
elif data_type == "BBOX2DLOOSE":
data_folder = os.path.join(self.data_dir, viewport_name, "bbox_2d_loose")
elif data_type == "BBOX3D":
data_folder = os.path.join(self.data_dir, viewport_name, "bbox_3d")
os.makedirs(data_folder, exist_ok=True)
file = os.path.join(data_folder, filename)
np.save(file, data)
# Save ground truth data and rgb data as visuals
if display_rgb and rgb_data is not None:
color_image = self.visualization.colorize_bboxes(data, rgb_data)
color_image = color_image[:, :, :3]
color_image_rgb = Image.fromarray(color_image, "RGB")
if data_type == "BBOX2DTIGHT":
data_folder = os.path.join(self.data_dir, viewport_name, "bbox_2d_tight", "visuals")
if data_type == "BBOX2DLOOSE":
data_folder = os.path.join(self.data_dir, viewport_name, "bbox_2d_loose", "visuals")
if data_type == "BBOX3D":
# 3D BBox visuals are not yet supported
return
os.makedirs(data_folder, exist_ok=True)
file = os.path.join(data_folder, filename + ".png")
color_image_rgb.save(file, "PNG")
def save_PFM(self, viewport_name, data_type, data, filename):
""" Save Depth and Disparity data. """
if data_type == "DEPTH":
data_folder = os.path.join(self.data_dir, viewport_name, "depth")
elif data_type == "DISPARITY":
data_folder = os.path.join(self.data_dir, viewport_name, "disparity")
os.makedirs(data_folder, exist_ok=True)
file = os.path.join(data_folder, filename + ".pfm")
self.write_PFM(file, data)
def write_PFM(self, file, image, scale=1):
""" Convert numpy matrix into PFM and save. """
file = open(file, "wb")
color = None
if image.dtype.name != "float32":
raise Exception("Image dtype must be float32")
image = np.flipud(image)
if len(image.shape) == 3 and image.shape[2] == 3: # color image
color = True
elif len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1: # greyscale
color = False
else:
raise Exception("Image must have H x W x 3, H x W x 1 or H x W dimensions.")
file.write(b"PF\n" if color else b"Pf\n")
file.write(b"%d %d\n" % (image.shape[1], image.shape[0]))
endian = image.dtype.byteorder
if endian == "<" or endian == "=" and sys.byteorder == "little":
scale = -scale
file.write(b"%f\n" % scale)
        image.tofile(file)
        file.close()
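    # Hedged helper sketch (not part of the original file): a minimal reader that
    # inverts write_PFM above, handy for checking saved depth/disparity maps. It
    # only supports the single-channel ("Pf") files this writer produces.
    @staticmethod
    def read_PFM(path):
        """ Read a single-channel PFM file written by write_PFM. """
        with open(path, "rb") as f:
            if f.readline().strip() != b"Pf":
                raise ValueError("Only greyscale 'Pf' files are supported here.")
            width, height = map(int, f.readline().split())
            scale = float(f.readline())
            data = np.fromfile(f, dtype="<f4" if scale < 0 else ">f4")
        image = np.flipud(data.reshape(height, width))
        return image, abs(scale)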
| 11,473 | Python | 41.496296 | 152 | 0.537784 |
ngzhili/SynTable/syntable_composer/src/output/output1.py | import os
import copy
import numpy as np
import cv2
import carb
import datetime
from output import DisparityConverter, Logger
# from sampling import Sampler
from sampling.sample1 import Sampler
# from omni.isaac.core.utils import prims
from output.writer1 import DataWriter
from helper_functions import compute_occluded_masks, GenericMask, bbox_from_binary_mask # Added
import pycocotools.mask as mask_util
class OutputManager:
""" For managing Composer outputs, including sending data to the data writer. """
def __init__(self, sim_app, sim_context, scene_manager, output_data_dir, scene_units_in_meters):
""" Construct OutputManager. Start data writer threads. """
from omni.isaac.synthetic_utils.syntheticdata import SyntheticDataHelper
self.sim_app = sim_app
self.sim_context = sim_context
self.scene_manager = scene_manager
self.output_data_dir = output_data_dir
self.scene_units_in_meters = scene_units_in_meters
self.camera = self.scene_manager.camera
self.viewports = self.camera.viewports
self.stage = self.sim_context.stage
self.sample = Sampler().sample
self.groundtruth_visuals = self.sample("groundtruth_visuals")
self.label_to_class_id = self.get_label_to_class_id1()
max_queue_size = 500
self.save_segmentation_data = self.sample("save_segmentation_data")
self.write_data = self.sample("write_data")
if self.write_data:
self.data_writer = DataWriter(self.output_data_dir, self.sample("num_data_writer_threads"), self.save_segmentation_data, max_queue_size)
self.data_writer.start_threads()
self.sd_helper = SyntheticDataHelper()
self.gt_list = []
if self.sample("rgb") or (
self.sample("bbox_2d_tight")
or self.sample("bbox_2d_loose")
or self.sample("bbox_3d")
and self.groundtruth_visuals
):
self.gt_list.append("rgb")
if (self.sample("depth")) or (self.sample("disparity") and self.sample("stereo")):
self.gt_list.append("depthLinear")
if self.sample("instance_seg"):
self.gt_list.append("instanceSegmentation")
if self.sample("semantic_seg"):
self.gt_list.append("semanticSegmentation")
if self.sample("bbox_2d_tight"):
self.gt_list.append("boundingBox2DTight")
if self.sample("bbox_2d_loose"):
self.gt_list.append("boundingBox2DLoose")
if self.sample("bbox_3d"):
self.gt_list.append("boundingBox3D")
for viewport_name, viewport_window in self.viewports:
self.sd_helper.initialize(sensor_names=self.gt_list, viewport=viewport_window)
self.sim_app.update()
self.carb_settings = carb.settings.acquire_settings_interface()
def get_label_to_class_id(self):
""" Get mapping of object semantic labels to class ids. """
label_to_class_id = {}
groups = self.sample("groups")
for group in groups:
class_id = self.sample("obj_class_id", group=group)
label_to_class_id[group] = class_id
label_to_class_id["[[scenario]]"] = self.sample("scenario_class_id")
return label_to_class_id
def get_label_to_class_id1(self):
""" Get mapping of object semantic labels to class ids. """
label_to_class_id = {}
groups = self.sample("groups")
for group in groups:
class_id = self.sample("obj_class_id", group=group)
label_to_class_id[group] = class_id
label_to_class_id["[[scenario]]"] = self.sample("scenario_class_id")
return label_to_class_id
def capture_amodal_groundtruth(self, index, scene_manager, img_index, ann_index,
view_id, img_list, ann_list,
step_index=0, sequence_length=0):
""" Capture groundtruth data from Isaac Sim. Send data to data writer. """
num_objects = len(scene_manager.objs) # get number of objects in scene
objects = scene_manager.objs # get all objects in scene
depths = []
all_viewport_data = []
for i in range(len(self.viewports)):
viewport_name, viewport_window = self.viewports[i]
num_digits = len(str(self.sample("num_scenes") - 1))
img_id = str(index) + "_" + str(view_id)
groundtruth = {
"METADATA": {
"image_id": img_id,
"viewport_name": viewport_name,
"RGB":{},
"DEPTH": {},
"INSTANCE": {},
"SEMANTIC": {},
"BBOX2DTIGHT": {},
"BBOX2DLOOSE": {},
"BBOX3D": {},
},
"DATA": {},
}
""" =================================================================
===== Collect Viewport's RGB/DEPTH and object visible masks =====
================================================================= """
gt = copy.deepcopy(self.sd_helper.get_groundtruth(self.gt_list, viewport_window, wait_for_sensor_data=0.1))
# RGB
if "rgb" in gt["state"]:
if gt["state"]["rgb"]:
groundtruth["DATA"]["RGB"] = gt["rgb"]
# Depth (for Disparity)
if "depthLinear" in gt["state"]:
depth_data = copy.deepcopy(gt["depthLinear"]).squeeze()
# Convert to scene units
depth_data /= self.scene_units_in_meters
depths.append(depth_data)
if i == 0 or self.sample("groundtruth_stereo"):
# Depth
if "depthLinear" in gt["state"]:
if self.sample("depth"):
depth_data = gt["depthLinear"].squeeze()
# Convert to scene units
depth_data /= self.scene_units_in_meters
groundtruth["DATA"]["DEPTH"] = depth_data
groundtruth["METADATA"]["DEPTH"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["DEPTH"]["NPY"] = True
# Instance Segmentation
if "instanceSegmentation" in gt["state"]:
semantics = list(self.label_to_class_id.keys())
instance_data, instance_mappings = self.sd_helper.sensor_helpers["instanceSegmentation"](
viewport_window, parsed=False, return_mapping=True)
instances_list = [(im[0], im[4], im["semanticLabel"]) for im in instance_mappings][::-1]
max_instance_id_list = max([max(il[1]) for il in instances_list])
max_instance_id = instance_data.max()
lut = np.zeros(max(max_instance_id, max_instance_id_list) + 1, dtype=np.uint32)
for uid, il, sem in instances_list:
if sem in semantics and sem != "[[scenario]]":
lut[np.array(il)] = uid
instance_data = np.take(lut, instance_data)
if self.save_segmentation_data:
groundtruth["DATA"]["INSTANCE"] = instance_data
groundtruth["METADATA"]["INSTANCE"]["WIDTH"] = instance_data.shape[1]
groundtruth["METADATA"]["INSTANCE"]["HEIGHT"] = instance_data.shape[0]
groundtruth["METADATA"]["INSTANCE"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["INSTANCE"]["NPY"] = True
# get visible instance segmentation of all objects in scene
instance_map = list(np.unique(instance_data))[1:]
org_instance_data_np = np.array(instance_data)
org_instance_data = instance_data
instance_mappings_dict ={}
for obj_prim in instance_mappings:
inst_id = obj_prim[0]
inst_path = obj_prim[1]
instance_mappings_dict[inst_path]= inst_id
all_viewport_data.append(groundtruth)
""" ==== define image info dict ==== """
height, width, _ = gt["rgb"].shape
date_captured = str(datetime.datetime.now())
image_info = {
"id": img_index,
"file_name": f"data/mono/rgb/{img_id}.png",
"depth_file_name": f"data/mono/depth/{img_id}.png",
"occlusion_order_file_name": f"data/mono/occlusion_order/{img_id}.npy",
"width": width,
"height": height,
"date_captured": date_captured,
"license": 1,
"coco_url": "",
"flickr_url": ""
}
""" =====================================
===== Collect Background Masks ======
===================================== """
if self.sample("save_background"):
groundtruth = {
"METADATA": {
"image_id": str(img_index) + "_background",
"viewport_name": viewport_name,
"DEPTH": {},
"INSTANCE": {},
"SEMANTIC": {},
"AMODAL": {},
"OCCLUSION": {},
"BBOX2DTIGHT": {},
"BBOX2DLOOSE": {},
"BBOX3D": {},
},
"DATA": {},
}
ann_info = {
"id": ann_index,
"image_id": img_index,
"category_id": 0,
"bbox": [],
"height": height,
"width": width,
"object_name":"",
"iscrowd": 0,
"segmentation": {
"size": [
height,
width
],
"counts": "",
"area": 0
},
"area": 0,
"visible_mask": {
"size": [
height,
width
],
"counts": "",
"area": 0
},
"visible_bbox": [],
"occluded_mask": {
"size": [
height,
width
],
"counts": "",
"area": 0
},
"occluded_rate": 0.0
}
ann_info["object_name"] = "background"
""" ===== extract visible mask ===== """
curr_instance_data_np = org_instance_data_np.copy()
# find pixels that belong to background class
instance_id = 0
curr_instance_data_np[np.where(org_instance_data != instance_id)] = 0
curr_instance_data_np[np.where(org_instance_data == instance_id)] = 1
background_visible_mask = curr_instance_data_np.astype(np.uint8)
""" ===== extract amodal mask ===== """ # background assumed to be binary mask of np.ones
background_amodal_mask = np.ones(background_visible_mask.shape).astype(np.uint8) # get object amodal mask
""" ===== calculate occlusion mask ===== """
background_occ_mask = cv2.absdiff(background_amodal_mask, background_visible_mask)
""" ===== calculate occlusion rate ===== """ # assumes binary mask (True == 1)
background_occ_mask_pixel_count = background_occ_mask.sum()
background_amodal_mask_pixel_count = background_amodal_mask.sum()
occlusion_rate = round(background_occ_mask_pixel_count / background_amodal_mask_pixel_count, 2)
if occlusion_rate < 1: # fully occluded objects are not considered
if self.save_segmentation_data:
groundtruth["DATA"]["INSTANCE"] = background_visible_mask
groundtruth["METADATA"]["INSTANCE"]["WIDTH"] = background_visible_mask.shape[1]
groundtruth["METADATA"]["INSTANCE"]["HEIGHT"] = background_visible_mask.shape[0]
groundtruth["METADATA"]["INSTANCE"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["INSTANCE"]["NPY"] = True
groundtruth["DATA"]["AMODAL"] = background_amodal_mask
groundtruth["METADATA"]["AMODAL"]["WIDTH"] = background_amodal_mask.shape[1]
groundtruth["METADATA"]["AMODAL"]["HEIGHT"] = background_amodal_mask.shape[0]
groundtruth["METADATA"]["AMODAL"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["AMODAL"]["NPY"] = True
#if occlusion_rate > 0: # if object is occluded, save occlusion mask
if self.save_segmentation_data:
# print(background_occ_mask)
# print(background_occ_mask.shape)
groundtruth["DATA"]["OCCLUSION"] = background_occ_mask
groundtruth["METADATA"]["OCCLUSION"]["WIDTH"] = background_occ_mask.shape[1]
groundtruth["METADATA"]["OCCLUSION"]["HEIGHT"] = background_occ_mask.shape[0]
groundtruth["METADATA"]["OCCLUSION"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["OCCLUSION"]["NPY"] = True
# Assign Mask to Generic Mask Class
background_amodal_mask_class = GenericMask(background_amodal_mask.astype("uint8"),height, width)
background_visible_mask_class = GenericMask(background_visible_mask.astype("uint8"),height, width)
background_occ_mask_class = GenericMask(background_occ_mask.astype("uint8"),height, width)
# Encode binary masks to bytes
background_amodal_mask= mask_util.encode(np.array(background_amodal_mask[:, :, None], order="F", dtype="uint8"))[0]
background_visible_mask= mask_util.encode(np.array(background_visible_mask[:, :, None], order="F", dtype="uint8"))[0]
background_occ_mask= mask_util.encode(np.array(background_occ_mask[:, :, None], order="F", dtype="uint8"))[0]
# append annotations to dict
ann_info["segmentation"]["counts"] = background_amodal_mask['counts'].decode('UTF-8') # amodal mask
ann_info["visible_mask"]["counts"] = background_visible_mask['counts'].decode('UTF-8') # obj_visible_mask
ann_info["occluded_mask"]["counts"] =background_occ_mask['counts'].decode('UTF-8') # obj_visible_mask
ann_info["visible_bbox"] = list(background_visible_mask_class.bbox())
ann_info["bbox"] = list(background_visible_mask_class.bbox())
ann_info["segmentation"]["area"] = int(background_amodal_mask_class.area())
ann_info["visible_mask"]["area"] = int(background_visible_mask_class.area())
ann_info["occluded_mask"]["area"] = int(background_occ_mask_class.area())
ann_info["occluded_rate"] = occlusion_rate
ann_index += 1
all_viewport_data.append(groundtruth)
ann_list.append(ann_info)
img_list.append(image_info)
""" =================================================
===== Collect Object Amodal/Occlusion Masks =====
================================================= """
# turn off visibility of all objects
for obj in objects:
obj.off_prim()
visible_obj_paths = instance_mappings_dict.keys()
""" ======= START OBJ LOOP ======= """
obj_visible_mask_list = []
obj_occlusion_mask_list = []
# loop through objects and capture mask of each object
for obj in objects:
# turn on visibility of object
obj.on_prim()
ann_info = {
"id": ann_index,
"image_id": img_index,
"category_id": 1,
"bbox": [],
"width": width,
"height": height,
"object_name":"",
"iscrowd": 0,
"segmentation": {
"size": [
height,
width
],
"counts": "",
"area": 0
},
"area": 0,
"visible_mask": {
"size": [
height,
width
],
"counts": "",
"area": 0
},
"visible_bbox": [],
"occluded_mask": {
"size": [
height,
width
],
"counts": "",
"area": 0
},
"occluded_rate": 0.0
}
ann_info["object_name"] = obj.name
""" ===== get object j index and attributes ===== """
obj_path = obj.path
obj_index = int(obj.path.split("/")[-1].split("_")[1])
id = f"{img_id}_{obj_index}" #image id
obj_nested_prim_path = obj_path+"/nested_prim"
if obj_nested_prim_path in instance_mappings_dict:
instance_id = instance_mappings_dict[obj_nested_prim_path]
else:
print(f"{obj_nested_prim_path} does not exist")
instance_id = -1
print(f"instance_mappings_dict:{instance_mappings_dict}")
""" ===== Check if Object j is visible from viewport ===== """
# Remove Fully Occluded Objects from viewport
                if obj_path in visible_obj_paths and instance_id in instance_map:  # object is at least partly visible in this viewport
pass
else: # object is not visible, skipping object
obj.off_prim()
continue
groundtruth = {
"METADATA": {
"image_id": id,
"viewport_name": viewport_name,
"RGB":{},
"DEPTH": {},
"INSTANCE": {},
"SEMANTIC": {},
"AMODAL": {},
"OCCLUSION": {},
"BBOX2DTIGHT": {},
"BBOX2DLOOSE": {},
"BBOX3D": {},
},
"DATA": {},
}
""" ===== extract visible mask of object j ===== """
curr_instance_data_np = org_instance_data_np.copy()
if instance_id != 0: # find object instance segmentation
curr_instance_data_np[np.where(org_instance_data_np != instance_id)] = 0
curr_instance_data_np[np.where(org_instance_data_np == instance_id)] = 1
obj_visible_mask = curr_instance_data_np.astype(np.uint8)
""" ===== extract amodal mask of object j ===== """
# Collect Groundtruth
gt = copy.deepcopy(self.sd_helper.get_groundtruth(self.gt_list, viewport_window, wait_for_sensor_data=0.01))
obj.off_prim() # turn off visibility of object
# RGB
if self.save_segmentation_data:
if "rgb" in gt["state"]:
if gt["state"]["rgb"]:
groundtruth["DATA"]["RGB"] = gt["rgb"]
if i == 0 or self.sample("groundtruth_stereo"):
# Instance Segmentation
if "instanceSegmentation" in gt["state"]:
semantics = list(self.label_to_class_id.keys())
instance_data, instance_mappings = self.sd_helper.sensor_helpers["instanceSegmentation"](
viewport_window, parsed=False, return_mapping=True)
instances_list = [(im[0], im[4], im["semanticLabel"]) for im in instance_mappings][::-1]
max_instance_id_list = max([max(il[1]) for il in instances_list])
max_instance_id = instance_data.max()
lut = np.zeros(max(max_instance_id, max_instance_id_list) + 1, dtype=np.uint32)
for uid, il, sem in instances_list:
if sem in semantics and sem != "[[scenario]]":
lut[np.array(il)] = uid
instance_data = np.take(lut, instance_data)
# get object amodal mask
obj_amodal_mask = instance_data.astype(np.uint8)
obj_amodal_mask[np.where(instance_data > 0)] = 1
""" ===== calculate occlusion mask of object j ===== """
obj_occ_mask = cv2.absdiff(obj_amodal_mask, obj_visible_mask)
""" ===== calculate occlusion rate of object j ===== """ # assumes binary mask (True == 1)
obj_occ_mask_pixel_count = obj_occ_mask.sum()
obj_amodal_mask_pixel_count = obj_amodal_mask.sum()
occlusion_rate = round(obj_occ_mask_pixel_count / obj_amodal_mask_pixel_count, 2)
""" ===== Save Segmentation Masks ==== """
if occlusion_rate < 1: # fully occluded objects are not considered
# append visible and occlusion masks for generation of occlusion order matrix
obj_visible_mask_list.append(obj_visible_mask)
obj_occlusion_mask_list.append(obj_occ_mask)
if self.save_segmentation_data:
groundtruth["DATA"]["INSTANCE"] = obj_visible_mask
groundtruth["METADATA"]["INSTANCE"]["WIDTH"] = obj_visible_mask.shape[1]
groundtruth["METADATA"]["INSTANCE"]["HEIGHT"] = obj_visible_mask.shape[0]
groundtruth["METADATA"]["INSTANCE"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["INSTANCE"]["NPY"] = True
groundtruth["DATA"]["AMODAL"] = instance_data
groundtruth["METADATA"]["AMODAL"]["WIDTH"] = instance_data.shape[1]
groundtruth["METADATA"]["AMODAL"]["HEIGHT"] = instance_data.shape[0]
groundtruth["METADATA"]["AMODAL"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["AMODAL"]["NPY"] = True
# if occlusion_rate > 0: # if object is occluded, save occlusion mask
groundtruth["DATA"]["OCCLUSION"] = obj_occ_mask
groundtruth["METADATA"]["OCCLUSION"]["WIDTH"] = obj_occ_mask.shape[1]
groundtruth["METADATA"]["OCCLUSION"]["HEIGHT"] = obj_occ_mask.shape[0]
groundtruth["METADATA"]["OCCLUSION"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["OCCLUSION"]["NPY"] = True
ann_info["visible_bbox"] = bbox_from_binary_mask(obj_visible_mask)
ann_info["bbox"] = ann_info["visible_bbox"]
""" ===== Add Segmentation Mask into COCO.JSON ===== """
instance_mask_class = GenericMask(instance_data.astype("uint8"),height, width)
obj_visible_mask_class = GenericMask(obj_visible_mask.astype("uint8"),height, width)
obj_occ_mask_class = GenericMask(obj_occ_mask.astype("uint8"),height, width)
# Encode binary masks to bytes
instance_data= mask_util.encode(np.array(instance_data[:, :, None], order="F", dtype="uint8"))[0]
obj_visible_mask= mask_util.encode(np.array(obj_visible_mask[:, :, None], order="F", dtype="uint8"))[0]
obj_occ_mask= mask_util.encode(np.array(obj_occ_mask[:, :, None], order="F", dtype="uint8"))[0]
# append annotations to dict
ann_info["segmentation"]["counts"] = instance_data['counts'].decode('UTF-8') # amodal mask
ann_info["visible_mask"]["counts"] = obj_visible_mask['counts'].decode('UTF-8') # obj_visible_mask
ann_info["occluded_mask"]["counts"] = obj_occ_mask['counts'].decode('UTF-8') # obj_visible_mask
ann_info["segmentation"]["area"] = int(instance_mask_class.area())
ann_info["visible_mask"]["area"] = int(obj_visible_mask_class.area())
ann_info["occluded_mask"]["area"] = int(obj_occ_mask_class.area())
ann_info["occluded_rate"] = occlusion_rate
ann_index += 1
all_viewport_data.append(groundtruth)
ann_list.append(ann_info)
img_list.append(image_info)
""" ======= END OBJ LOOP ======= """
# Wireframe
if self.sample("wireframe"):
self.carb_settings.set("/rtx/wireframe/mode", 2.0)
# Need two updates for all viewports to have wireframe properly
self.sim_context.render()
self.sim_context.render()
for i in range(len(self.viewports)):
viewport_name, viewport_window = self.viewports[i]
gt = copy.deepcopy(self.sd_helper.get_groundtruth(["rgb"], viewport_window))
all_viewport_data[i]["DATA"]["WIREFRAME"] = gt["rgb"]
self.carb_settings.set("/rtx/wireframe/mode", 0)
self.sim_context.render()
for j in range(len(all_viewport_data)):
if self.write_data:
self.data_writer.q.put(copy.deepcopy(all_viewport_data[j]))
# Disparity
if self.sample("disparity") and self.sample("stereo"):
depth_l, depth_r = depths
cam_intrinsics = self.camera.intrinsics[0]
disp_convert = DisparityConverter(
depth_l,
depth_r,
cam_intrinsics["fx"],
cam_intrinsics["fy"],
cam_intrinsics["cx"],
cam_intrinsics["cy"],
self.sample("stereo_baseline"),
)
disp_l, disp_r = disp_convert.compute_disparity()
disparities = [disp_l, disp_r]
for i in range(len(self.viewports)):
if i == 0 or self.sample("groundtruth_stereo"):
viewport_name, viewport_window = self.viewports[i]
groundtruth = {
"METADATA": {"image_id": id, "viewport_name": viewport_name, "DISPARITY": {}},
"DATA": {},
}
disparity_data = disparities[i]
groundtruth["DATA"]["DISPARITY"] = disparity_data
groundtruth["METADATA"]["DISPARITY"]["COLORIZE"] = self.groundtruth_visuals
groundtruth["METADATA"]["DISPARITY"]["NPY"] = True
if self.write_data:
self.data_writer.q.put(copy.deepcopy(groundtruth))
# turn on visibility of all objects (for next camera viewport)
for obj in objects:
obj.on_prim()
# generate occlusion ordering for current viewport
rows = cols = len(obj_visible_mask_list)
occlusion_adjacency_matrix = np.zeros((rows,cols))
            # A[i][j] = 1 means object i (row) occludes object j (column)
for i in range(0,len(obj_visible_mask_list)):
visible_mask_i = obj_visible_mask_list[i] # occluder
for j in range(0,len(obj_visible_mask_list)):
if j != i:
occluded_mask_j = obj_occlusion_mask_list[j] # occludee
iou, _ = compute_occluded_masks(visible_mask_i,occluded_mask_j)
if iou > 0: # object i's visible mask is overlapping object j's occluded mask
occlusion_adjacency_matrix[i][j] = 1
data_folder = os.path.join(self.output_data_dir, viewport_name, "occlusion_order")
os.makedirs(data_folder, exist_ok=True)
filename = os.path.join(data_folder, f"{img_id}.npy")
# save occlusion adjacency matrix
np.save(filename, occlusion_adjacency_matrix)
# increment img index (next viewport)
img_index += 1
return groundtruth, img_index, ann_index, img_list, ann_list
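# Hedged note (not part of the original file): in the saved occlusion_order
# matrices, entry [i][j] == 1 means the visible mask of object i overlaps the
# occluded mask of object j, i.e. object i occludes object j. A minimal sketch
# (illustrative only) of turning such a matrix into a front-to-back ordering,
# assuming the scene's occlusion graph is acyclic:
if __name__ == "__main__":
    A = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [0, 0, 0]])  # object 0 occludes 1, object 1 occludes 2
    remaining, order = list(range(len(A))), []
    while remaining:
        # pick an object that no remaining object occludes
        top = next(i for i in remaining if not any(A[j][i] for j in remaining))
        order.append(top)
        remaining.remove(top)
    print(order)  # expected: [0, 1, 2] (frontmost object first)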
| 30,498 | Python | 48.75367 | 148 | 0.474457 |
ngzhili/SynTable/syntable_composer/src/output/writer1.py | import atexit
import numpy as np
import os
from PIL import Image
import queue
import sys
import threading
class DataWriter:
""" For processing and writing output data to files. """
def __init__(self, data_dir, num_worker_threads, save_segmentation_data, max_queue_size=500):
""" Construct DataWriter. """
from omni.isaac.synthetic_utils import visualization
self.visualization = visualization
atexit.register(self.stop_threads)
self.data_dir = data_dir
self.save_segmentation_data = save_segmentation_data
# Threading for multiple scenes
self.num_worker_threads = num_worker_threads
# Initialize queue with a specified size
self.q = queue.Queue(max_queue_size)
self.threads = []
def start_threads(self):
""" Start worker threads. """
for _ in range(self.num_worker_threads):
t = threading.Thread(target=self.worker, daemon=True)
t.start()
self.threads.append(t)
def stop_threads(self):
""" Waits for all tasks to be completed before stopping worker threads. """
print("\nFinish writing data...")
# Block until all tasks are done
self.q.join()
print("Done.")
def worker(self):
""" Processes task from queue. Each tasks contains groundtruth data and metadata which is used to transform the output and write it to disk. """
while True:
groundtruth = self.q.get()
if groundtruth is None:
break
filename = groundtruth["METADATA"]["image_id"]
viewport_name = groundtruth["METADATA"]["viewport_name"]
for gt_type, data in groundtruth["DATA"].items():
if gt_type == "RGB":
self.save_image(viewport_name, gt_type, data, filename)
elif gt_type == "WIREFRAME":
self.save_image(viewport_name, gt_type, data, filename)
elif gt_type == "DEPTH":
if self.save_segmentation_data:
if groundtruth["METADATA"]["DEPTH"]["NPY"]:
self.save_PFM(viewport_name, gt_type, data, filename)
if groundtruth["METADATA"]["DEPTH"]["COLORIZE"]:
self.save_image(viewport_name, gt_type, data, filename)
else:
if groundtruth["METADATA"]["DEPTH"]["NPY"]:
self.save_image(viewport_name, gt_type, data, filename)
elif gt_type == "DISPARITY":
if groundtruth["METADATA"]["DISPARITY"]["NPY"]:
self.save_PFM(viewport_name, gt_type, data, filename)
if groundtruth["METADATA"]["DISPARITY"]["COLORIZE"]:
self.save_image(viewport_name, gt_type, data, filename)
elif gt_type == "INSTANCE":
self.save_segmentation(
viewport_name,
gt_type,
data,
filename,
groundtruth["METADATA"]["INSTANCE"]["WIDTH"],
groundtruth["METADATA"]["INSTANCE"]["HEIGHT"],
groundtruth["METADATA"]["INSTANCE"]["COLORIZE"],
groundtruth["METADATA"]["INSTANCE"]["NPY"],
)
elif gt_type == "SEMANTIC":
self.save_segmentation(
viewport_name,
gt_type,
data,
filename,
groundtruth["METADATA"]["SEMANTIC"]["WIDTH"],
groundtruth["METADATA"]["SEMANTIC"]["HEIGHT"],
groundtruth["METADATA"]["SEMANTIC"]["COLORIZE"],
groundtruth["METADATA"]["SEMANTIC"]["NPY"],
)
elif gt_type == "AMODAL":
self.save_segmentation(
viewport_name,
gt_type,
data,
filename,
groundtruth["METADATA"]["AMODAL"]["WIDTH"],
groundtruth["METADATA"]["AMODAL"]["HEIGHT"],
groundtruth["METADATA"]["AMODAL"]["COLORIZE"],
groundtruth["METADATA"]["AMODAL"]["NPY"],
)
elif gt_type == "OCCLUSION":
self.save_segmentation(
viewport_name,
gt_type,
data,
filename,
groundtruth["METADATA"]["OCCLUSION"]["WIDTH"],
groundtruth["METADATA"]["OCCLUSION"]["HEIGHT"],
groundtruth["METADATA"]["OCCLUSION"]["COLORIZE"],
groundtruth["METADATA"]["OCCLUSION"]["NPY"],
)
elif gt_type in ["BBOX2DTIGHT", "BBOX2DLOOSE", "BBOX3D"]:
self.save_bbox(
viewport_name,
gt_type,
data,
filename,
groundtruth["METADATA"][gt_type]["COLORIZE"],
groundtruth["DATA"]["RGB"],
groundtruth["METADATA"][gt_type]["NPY"],
)
elif gt_type == "CAMERA":
self.camera_folder = self.data_dir + "/" + str(viewport_name) + "/camera/"
np.save(self.camera_folder + filename + ".npy", data)
elif gt_type == "POSES":
self.poses_folder = self.data_dir + "/" + str(viewport_name) + "/poses/"
np.save(self.poses_folder + filename + ".npy", data)
else:
raise NotImplementedError
self.q.task_done()
def save_segmentation(
self, viewport_name, data_type, data, filename, width=1280, height=720, display_rgb=True, save_npy=True
):
""" Save segmentation mask data and visuals. """
        # Save ground truth data as a single-channel png (segmentation IDs are cast to uint8)
if save_npy:
if data_type == "INSTANCE":
data_folder = os.path.join(self.data_dir, viewport_name, "instance")
data = np.array(data, dtype=np.uint8)
img = Image.fromarray(data, mode="L")
elif data_type == "SEMANTIC":
data_folder = os.path.join(self.data_dir, viewport_name, "semantic")
data = np.array(data, dtype=np.uint8)
img = Image.fromarray(data, mode="L")
elif data_type == "AMODAL":
data_folder = os.path.join(self.data_dir, viewport_name, "amodal")
data = np.array(data, dtype=np.uint8)
img = Image.fromarray(data, mode="L")
elif data_type == "OCCLUSION":
data_folder = os.path.join(self.data_dir, viewport_name, "occlusion")
data = np.array(data, dtype=np.uint8)
img = Image.fromarray(data, mode="L")
os.makedirs(data_folder, exist_ok=True)
file = os.path.join(data_folder, filename + ".png")
img.save(file, "PNG", bits=16)
# Save ground truth data as visuals
if display_rgb:
image_data = np.frombuffer(data, dtype=np.uint8).reshape(*data.shape, -1)
image_data += 1
if data_type == "SEMANTIC":
# Move close values apart to allow color values to separate more
image_data = np.array((image_data * 17) % 256, dtype=np.uint8)
color_image = self.visualization.colorize_segmentation(image_data, width, height, 3, None)
color_image = color_image[:, :, :3]
color_image_rgb = Image.fromarray(color_image, "RGB")
if data_type == "INSTANCE":
data_folder = os.path.join(self.data_dir, viewport_name, "instance", "visuals")
elif data_type == "SEMANTIC":
data_folder = os.path.join(self.data_dir, viewport_name, "semantic", "visuals")
elif data_type == "AMODAL":
data_folder = os.path.join(self.data_dir, viewport_name, "amodal", "visuals")
elif data_type == "OCCLUSION":
data_folder = os.path.join(self.data_dir, viewport_name, "occlusion", "visuals")
os.makedirs(data_folder, exist_ok=True)
file = os.path.join(data_folder, filename + ".png")
color_image_rgb.save(file, "PNG")
def save_image(self, viewport_name, img_type, image_data, filename):
""" Save rgb data, depth visuals, and disparity visuals. """
        # Convert 1-channel groundtruth data to an 8-bit visualization image (reciprocal, then min-max normalized to 0-255)
def normalize_greyscale_image(image_data):
image_data = np.reciprocal(image_data)
image_data[image_data == 0.0] = 1e-5
image_data = np.clip(image_data, 0, 255)
image_data -= np.min(image_data)
if np.max(image_data) > 0:
image_data /= np.max(image_data)
image_data *= 255
image_data = image_data.astype(np.uint8)
return image_data
# Save image data as png
if img_type == "RGB":
data_folder = os.path.join(self.data_dir, viewport_name, "rgb")
image_data = image_data[:, :, :3]
img = Image.fromarray(image_data, "RGB")
elif img_type == "WIREFRAME":
data_folder = os.path.join(self.data_dir, viewport_name, "wireframe")
image_data = np.average(image_data, axis=2)
image_data = image_data.astype(np.uint8)
img = Image.fromarray(image_data, "L")
elif img_type == "DEPTH":
            image_data = image_data * 1000  # convert depth to mm
depth_img = image_data.copy().astype("int32")
image_data = normalize_greyscale_image(image_data)
if self.save_segmentation_data:
data_folder = os.path.join(self.data_dir, viewport_name, "depth", "visuals")
img = Image.fromarray(image_data, mode="L")
depth_data_folder = os.path.join(self.data_dir, viewport_name, "depth")
depth_img = Image.fromarray(depth_img)
os.makedirs(depth_data_folder, exist_ok=True)
file = os.path.join(depth_data_folder, filename + ".png")
depth_img.save(file, "PNG")
elif img_type == "DISPARITY":
image_data = normalize_greyscale_image(image_data)
data_folder = os.path.join(self.data_dir, viewport_name, "disparity", "visuals")
img = Image.fromarray(image_data, mode="L")
if self.save_segmentation_data or img_type == "RGB":
os.makedirs(data_folder, exist_ok=True)
file = os.path.join(data_folder, filename + ".png")
img.save(file, "PNG")
def save_bbox(self, viewport_name, data_type, data, filename, display_rgb=True, rgb_data=None, save_npy=True):
""" Save bbox data and visuals. """
# Save ground truth data as npy
if save_npy:
if data_type == "BBOX2DTIGHT":
data_folder = os.path.join(self.data_dir, viewport_name, "bbox_2d_tight")
elif data_type == "BBOX2DLOOSE":
data_folder = os.path.join(self.data_dir, viewport_name, "bbox_2d_loose")
elif data_type == "BBOX3D":
data_folder = os.path.join(self.data_dir, viewport_name, "bbox_3d")
os.makedirs(data_folder, exist_ok=True)
file = os.path.join(data_folder, filename)
np.save(file, data)
# Save ground truth data and rgb data as visuals
if display_rgb and rgb_data is not None:
color_image = self.visualization.colorize_bboxes(data, rgb_data)
color_image = color_image[:, :, :3]
color_image_rgb = Image.fromarray(color_image, "RGB")
if data_type == "BBOX2DTIGHT":
data_folder = os.path.join(self.data_dir, viewport_name, "bbox_2d_tight", "visuals")
if data_type == "BBOX2DLOOSE":
data_folder = os.path.join(self.data_dir, viewport_name, "bbox_2d_loose", "visuals")
if data_type == "BBOX3D":
# 3D BBox visuals are not yet supported
return
os.makedirs(data_folder, exist_ok=True)
file = os.path.join(data_folder, filename + ".png")
color_image_rgb.save(file, "PNG")
def save_PFM(self, viewport_name, data_type, data, filename):
""" Save Depth and Disparity data. """
if data_type == "DEPTH":
data_folder = os.path.join(self.data_dir, viewport_name, "depth")
elif data_type == "DISPARITY":
data_folder = os.path.join(self.data_dir, viewport_name, "disparity")
os.makedirs(data_folder, exist_ok=True)
file = os.path.join(data_folder, filename + ".pfm")
self.write_PFM(file, data)
def write_PFM(self, file, image, scale=1):
""" Convert numpy matrix into PFM and save. """
file = open(file, "wb")
color = None
if image.dtype.name != "float32":
raise Exception("Image dtype must be float32")
image = np.flipud(image)
if len(image.shape) == 3 and image.shape[2] == 3: # color image
color = True
elif len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1: # greyscale
color = False
else:
raise Exception("Image must have H x W x 3, H x W x 1 or H x W dimensions.")
file.write(b"PF\n" if color else b"Pf\n")
file.write(b"%d %d\n" % (image.shape[1], image.shape[0]))
endian = image.dtype.byteorder
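        # PFM convention: a negative scale value in the header marks the raster as little-endian.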
if endian == "<" or endian == "=" and sys.byteorder == "little":
scale = -scale
file.write(b"%f\n" % scale)
        image.tofile(file)
        file.close()
| 14,136 | Python | 42.903727 | 152 | 0.524547 |
ngzhili/SynTable/syntable_composer/datasets/dataset/parameters/warehouse.yaml | # dropped warehouse objects
objects:
obj_model: Choice(["assets/models/warehouse.txt"])
obj_count: Range(5, 15)
obj_size_enabled: False
obj_scale: Uniform(0.75, 1.25)
obj_vert_fov_loc: Uniform(0, 0.5)
obj_distance: Uniform(3, 10)
obj_rot: (Normal(0, 45), Normal(0, 45), Uniform(0, 360))
obj_class_id: 1
obj_physics: True
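# Note: expressions such as Range(a, b), Uniform(lo, hi), Normal(mean, std) and Choice([...])
# are parameter samplers; the composer draws fresh values from them when building each scene.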
# colorful ceiling lights
lights:
light_count: Range(0, 2)
light_coord_camera_relative: False
light_coord: (Uniform(-2, 2), Uniform(-2, 2), 5)
light_color: Uniform((0, 0, 0), (255, 255, 255))
light_intensity: Uniform(0, 300000)
light_radius: 1
# warehouse scenario
scenario_model: /NVIDIA/Assets/Isaac/2022.1/Isaac/Environments/Simple_Warehouse/warehouse.usd
scenario_class_id: 0
# camera
camera_coord: (0, 0, Uniform(.20, 1))
camera_rot: (Normal(0, 1), 0, Uniform(0, 360))
# output
output_dir: dataset
num_scenes: 10
img_width: 1920
img_height: 1080
rgb: True
depth: True
semantic_seg: True
groundtruth_visuals: True
# simulate
physics_simulate_time: 2
| 1,029 | YAML | 16.457627 | 93 | 0.688047 |
ngzhili/SynTable/syntable_composer/parameters/flying_things_4d.yaml | # object groups inherited from flying_things_3d
objs:
inherit: objs
objs_color_dr:
inherit: objs_color_dr
objs_texture_dr:
inherit: objs_texture_dr
objs_material_dr:
inherit: objs_material_dr
midground_shapes:
inherit: midground_shapes
midground_shapes_material_dr:
inherit: midground_shapes_material_dr
background_shapes:
inherit: background_shapes
background_plane:
obj_vel: (0, 0, 0)
obj_rot_vel: (0, 0, 0)
inherit: background_plane
# global object movement parameters
obj_vel: Normal((0, 0, 0), (1, 1, 1))
obj_rot_vel: Normal((0, 0, 0), (20, 20, 20))
# light groups inherited from flying_things_3d
lights:
inherit: lights
lights_color:
inherit: lights_color
distant_light:
inherit: distant_light
camera_light:
inherit: camera_light
# camera movement parameters (uncomment to add)
# camera_vel: Normal((.30, 0, 0), (.10, .10, .10))
# camera_accel: Normal((0, 0, 0), (.05, .05, .05))
# camera_rot_vel: Normal((0, 0, 0), (.05, .05, .05))
# camera_movement_camera_relative: True
# sequence parameters
sequential: True
sequence_step_count: 20
sequence_step_time: Uniform(0.5, 1)
profiles:
- parameters/flying_things_3d.yaml
- parameters/profiles/base_groups.yaml
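# Note: the files listed under "profiles" supply the parameter groups that the "inherit" keys
# above refer to (objs, lights, flying_shapes, ...), following the composer's profile mechanism.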
| 1,206 | YAML | 20.553571 | 52 | 0.707297 |
ngzhili/SynTable/syntable_composer/parameters/flying_things_3d.yaml | # flying objects
objs:
obj_count: Range(0, 15)
inherit: flying_objs
# flying objects (color randomized)
objs_color_dr:
obj_color: Uniform((0, 0, 0), (255, 255, 255))
obj_count: Range(0, 10)
inherit: flying_objs
# flying objects (texture randomized)
objs_texture_dr:
obj_texture: Choice(["assets/textures/patterns.txt", "assets/textures/synthetic.txt"])
obj_texture_scale: Choice([0.1, 1])
obj_count: Range(0, 10)
inherit: flying_objs
# flying objects (material randomized)
objs_material_dr:
obj_material: Choice("assets/materials/materials.txt")
obj_count: Range(0, 10)
inherit: flying_objs
# flying midground shapes (texture randomized)
midground_shapes:
obj_texture: Choice(["assets/textures/patterns.txt", "assets/textures/synthetic.txt"])
obj_texture_scale: Choice([0.01, 1])
obj_count: Range(0, 5)
inherit: flying_shapes
# flying midground shapes (material randomized)
midground_shapes_material_dr:
obj_material: Choice("assets/materials/materials.txt")
obj_count: Range(0, 5)
inherit: flying_shapes
# flying background shapes (material randomized)
background_shapes:
obj_material: Choice("assets/materials/materials.txt")
obj_count: Range(0, 10)
obj_horiz_fov_loc: Uniform(-0.7, 0.7)
obj_vert_fov_loc: Uniform(-0.3, 0.7)
obj_size: Uniform(3, 5)
obj_distance: Uniform(20, 30)
inherit: flying_shapes
# background plane
background_plane:
obj_model: /NVIDIA/Assets/Isaac/2022.1/Isaac/Props/Shapes/plane.usd
obj_material: Choice("assets/materials/materials.txt")
obj_texture_rot: Uniform(0, 360)
obj_count: 1
obj_size: 5000
obj_distance: Uniform(30, 40)
obj_horiz_fov_loc: 0
obj_vert_fov_loc: 0
obj_rot: Normal((0, 90, 0), (10, 10, 10))
obj_class_id: 0
# flying lights
lights:
light_count: Range(1, 2)
light_color: (200, 200, 200)
inherit: flying_lights
# flying lights (colorful)
lights_color:
light_count: Range(0, 2)
light_color: Choice([(255, 0, 0), (0, 255, 0), (255, 255, 0), (255, 0, 255), (0, 255, 255)])
inherit: flying_lights
# sky light
distant_light:
light_distant: True
light_count: 1
light_color: Uniform((0, 0, 0), (255, 255, 255))
light_intensity: Uniform(2000, 10000)
light_rot: Normal((0, 0, 0), (20, 20, 20))
# light at camera coordinate
camera_light:
light_count: 1
light_color: Uniform((0, 0, 0), (255, 255, 255))
light_coord_camera_relative: True
light_distance: 0
light_intensity: Uniform(0, 100000)
light_radius: .50
# randomized floor
scenario_room_enabled: True
scenario_class_id: 0
floor: True
wall: False
ceiling: False
floor_size: 50
floor_material: Choice("assets/materials/materials.txt")
# camera
focal_length: 40
stereo: True
stereo_baseline: .20
camera_coord: Uniform((-2, -2, 1), (2, 2, 4))
camera_rot: Normal((0, 0, 0), (3, 3, 20))
# output
img_width: 1920
img_height: 1080
rgb: True
disparity: True
instance_seg: True
semantic_seg: True
bbox_2d_tight: True
groundtruth_visuals: True
groundtruth_stereo: False
profiles:
- parameters/profiles/base_groups.yaml
| 3,052 | YAML | 19.085526 | 94 | 0.695282 |
ngzhili/SynTable/syntable_composer/parameters/warehouse.yaml | # dropped warehouse objects
objects:
obj_model: Choice(["assets/models/warehouse.txt"])
obj_count: Range(5, 15)
obj_size_enabled: False
obj_scale: Uniform(0.75, 1.25)
obj_vert_fov_loc: Uniform(0, 0.5)
obj_distance: Uniform(3, 10)
obj_rot: (Normal(0, 45), Normal(0, 45), Uniform(0, 360))
obj_class_id: 1
obj_physics: True
# colorful ceiling lights
lights:
light_count: Range(0, 2)
light_coord_camera_relative: False
light_coord: (Uniform(-2, 2), Uniform(-2, 2), 5)
light_color: Uniform((0, 0, 0), (255, 255, 255))
light_intensity: Uniform(0, 300000)
light_radius: 1
# warehouse scenario
scenario_model: /NVIDIA/Assets/Isaac/2022.1/Isaac/Environments/Simple_Warehouse/warehouse.usd
scenario_class_id: 0
# camera
camera_coord: (0, 0, Uniform(.20, 1))
camera_rot: (Normal(0, 1), 0, Uniform(0, 360))
# output
output_dir: dataset
num_scenes: 10
img_width: 1920
img_height: 1080
rgb: True
depth: True
semantic_seg: True
groundtruth_visuals: True
# simulate
physics_simulate_time: 2
| 1,029 | YAML | 16.457627 | 93 | 0.688047 |
ngzhili/SynTable/syntable_composer/parameters/profiles/default.yaml | # Default parameters. Do not edit, move, or delete.
# default object parameters
obj_model: /NVIDIA/Assets/Isaac/2022.1/Isaac/Props/Forklift/forklift.usd
obj_color: ()
obj_texture: ""
obj_material: ""
obj_metallicness: float("NaN")
obj_reflectance: float("NaN")
obj_size_enabled: True
obj_size: 1
obj_scale: 1
obj_texture_scale: 1
obj_texture_rot: 0
obj_rot: (0, 0, 0)
obj_coord: (0, 0, 0)
obj_centered: True
obj_physics: False
obj_rot_camera_relative: True
obj_coord_camera_relative: True
obj_count: 0
obj_distance: Uniform(300, 800)
obj_horiz_fov_loc: Uniform(-1, 1)
obj_vert_fov_loc: Uniform(-1, 1)
obj_vel: (0, 0, 0)
obj_rot_vel: (0, 0, 0)
obj_accel: (0, 0, 0)
obj_rot_accel: (0, 0, 0)
obj_movement_obj_relative: False
obj_class_id: 1
# default light parameters
light_intensity: 100000
light_radius: 0.25
light_temp_enabled: False
light_color: (255, 255, 255)
light_temp: 6500
light_directed: False
light_directed_focus: 20
light_directed_focus_softness: 0
light_distant: False
light_camera_relative: True
light_rot: (0, 0, 0)
light_coord: (0, 0, 0)
light_count: 0
light_distance: Uniform(3, 8)
light_horiz_fov_loc: Uniform(-1, 1)
light_vert_fov_loc: Uniform(-1, 1)
light_coord_camera_relative: True
light_rot_camera_relative: True
light_vel: (0, 0, 0)
light_rot_vel: (0, 0, 0)
light_accel: (0, 0, 0)
light_rot_accel: (0, 0, 0)
light_movement_light_relative: False
# default scenario parameters
scenario_room_enabled: False
scenario_model: /NVIDIA/Assets/Isaac/2022.1/Isaac/Environments/Simple_Warehouse/warehouse.usd
scenario_class_id: 0
sky_texture: ""
sky_light_intensity: 1000
floor: True
wall: True
ceiling: True
wall_height: 20
floor_size: 20
floor_color: ()
wall_color: ()
ceiling_color: ()
floor_texture: ""
wall_texture: ""
ceiling_texture: ""
floor_texture_scale: 1
wall_texture_scale: 1
ceiling_texture_scale: 1
floor_texture_rot: 0
wall_texture_rot: 0
ceiling_texture_rot: 0
floor_material: ""
wall_material: ""
ceiling_material: ""
floor_reflectance: float("NaN")
wall_reflectance: float("NaN")
ceiling_reflectance: float("NaN")
floor_metallicness: float("NaN")
wall_metallicness: float("NaN")
ceiling_metallicness: float("NaN")
# default camera parameters
focal_length: 18.15
focus_distance: 4
horiz_aperture: 20.955
vert_aperture: 15.2908
f_stop: 0
stereo: False
stereo_baseline: 20
camera_coord: (0, 0, 50)
camera_rot: (0, 0, 0)
camera_vel: (0, 0, 0)
camera_rot_vel: (0, 0, 0)
camera_accel: (0, 0, 0)
camera_rot_accel: (0, 0, 0)
camera_movement_camera_relative: False
# default output parameters
output_dir: dataset
num_scenes: 10
img_width: 1280
img_height: 720
write_data: True
num_data_writer_threads: 4
sequential: False
sequence_step_count: 10
sequence_step_time: 1
rgb: True
depth: False
disparity: False
instance_seg: False
semantic_seg: False
bbox_2d_tight: False
bbox_2d_loose: False
bbox_3d: False
wireframe: False
groundtruth_stereo: False
groundtruth_visuals: False
# default model store parameters
nucleus_server: localhost
# default debug parameters
pause: 0
verbose: True
# simulation parameters
physics_simulate_time: 1
scene_units_in_meters: 1
path_tracing: False
samples_per_pixel_per_frame: 32 | 3,194 | YAML | 15.554404 | 93 | 0.725736 |
ngzhili/SynTable/syntable_composer/parameters/profiles/default1.yaml | # Default parameters. Do not edit, move, or delete.
# default object parameters
obj_model: /NVIDIA/Assets/Isaac/2022.1/Isaac/Props/Forklift/forklift.usd
obj_color: ()
obj_texture: ""
obj_material: ""
obj_metallicness: float("NaN")
obj_reflectance: float("NaN")
obj_size_enabled: True
obj_size: 1
obj_scale: 1
obj_texture_scale: 1
obj_texture_rot: 0
obj_rot: (0, 0, 0)
obj_coord: (0, 0, 0)
obj_centered: True
obj_physics: False
obj_rot_camera_relative: True
obj_coord_camera_relative: True
obj_count: 0
obj_distance: Uniform(300, 800)
obj_horiz_fov_loc: Uniform(-1, 1)
obj_vert_fov_loc: Uniform(-1, 1)
obj_vel: (0, 0, 0)
obj_rot_vel: (0, 0, 0)
obj_accel: (0, 0, 0)
obj_rot_accel: (0, 0, 0)
obj_movement_obj_relative: False
obj_class_id: 1
# default light parameters
light_intensity: 100000
light_radius: 0.5
light_temp_enabled: False
light_color: (255, 255, 255)
light_temp: 6500
light_directed: False
light_directed_focus: 20
light_directed_focus_softness: 0
light_height: 15
light_width: 15
light_distant: False
light_camera_relative: True
light_rot: (0, 0, 0)
light_coord: (0, 0, 0)
light_count: 0
light_distance: Uniform(3, 8)
light_horiz_fov_loc: Uniform(-1, 1)
light_vert_fov_loc: Uniform(-1, 1)
light_coord_camera_relative: True
light_rot_camera_relative: True
light_vel: (0, 0, 0)
light_rot_vel: (0, 0, 0)
light_accel: (0, 0, 0)
light_rot_accel: (0, 0, 0)
light_movement_light_relative: False
spherelight_hemisphere_radius_min: 1.5
spherelight_hemisphere_radius_max: 2.5
# default scenario parameters
scenario_room_enabled: False
scenario_model: /NVIDIA/Assets/Isaac/2022.1/Isaac/Environments/Simple_Warehouse/warehouse.usd
scenario_class_id: 0
sky_texture: ""
sky_light_intensity: 1000
floor: True
wall: True
ceiling: True
wall_height: 20
floor_size: 20
floor_color: ()
wall_color: ()
ceiling_color: ()
floor_texture: ""
wall_texture: ""
ceiling_texture: ""
floor_texture_scale: 1
wall_texture_scale: 1
ceiling_texture_scale: 1
floor_texture_rot: 0
wall_texture_rot: 0
ceiling_texture_rot: 0
floor_material: ""
wall_material: ""
ceiling_material: ""
floor_reflectance: float("NaN")
wall_reflectance: float("NaN")
ceiling_reflectance: float("NaN")
floor_metallicness: float("NaN")
wall_metallicness: float("NaN")
ceiling_metallicness: float("NaN")
# default camera parameters
focal_length: 18.15
focus_distance: 4
horiz_aperture: 20.955
vert_aperture: 15.2908
f_stop: 0
stereo: False
stereo_baseline: 20
camera_coord: (0, 0, 50)
camera_rot: (0, 0, 0)
camera_vel: (0, 0, 0)
camera_rot_vel: (0, 0, 0)
camera_accel: (0, 0, 0)
camera_rot_accel: (0, 0, 0)
camera_movement_camera_relative: False
cam_hemisphere_radius_min: 0.7
cam_hemisphere_radius_max: 1.4
# Camera and Light Parameters
auto_hemisphere_radius: False
# Scene Settings
max_obj_in_scene: 10
randomise_num_of_objs_in_scene: False
save_segmentation_data: False
save_background: False
checkpoint_interval: 10
# default output parameters
output_dir: dataset
num_scenes: 10
num_views: 1
img_width: 1280
img_height: 720
write_data: True
num_data_writer_threads: 4
sequential: False
sequence_step_count: 10
sequence_step_time: 1
rgb: True
depth: False
disparity: False
instance_seg: False
semantic_seg: False
bbox_2d_tight: False
bbox_2d_loose: False
bbox_3d: False
wireframe: False
groundtruth_stereo: False
groundtruth_visuals: False
# default model store parameters
nucleus_server: localhost
# default debug parameters
pause: 0
verbose: True
# simulation parameters
physics_simulate_time: 1
scene_units_in_meters: 1
path_tracing: False
samples_per_pixel_per_frame: 32
| 3,598 | YAML | 15.817757 | 93 | 0.732073 |
ngzhili/SynTable/syntable_composer/parameters/profiles/base_groups.yaml | flying_objs:
obj_model: Choice(["assets/models/warehouse.txt", "assets/models/hospital.txt", "assets/models/office.txt"])
obj_size: Uniform(.50, .75)
obj_distance: Uniform(4, 20)
flying_shapes:
obj_model: Choice(["assets/models/shapes.txt"])
obj_size: Uniform(1, 2)
obj_distance: Uniform(15, 25)
flying_lights:
light_intensity: Uniform(0, 100000)
light_radius: Uniform(.50, 1)
light_vert_fov_loc: Uniform(0, 1)
light_distance: Uniform(4, 15)
# global parameters
obj_rot: Uniform((0, 0, 0), (360, 360, 360))
obj_horiz_fov_loc: Uniform(-1, 1)
obj_vert_fov_loc: Uniform(-0.7, 1)
obj_metallicness: Uniform(0.1, 0.8)
obj_reflectance: Uniform(0.1, 0.8)
| 679 | YAML | 20.249999 | 110 | 0.680412 |
ngzhili/SynTable/mount_dir/parameters/profiles/default.yaml | # Default parameters. Do not edit, move, or delete.
# default object parameters
obj_model: /NVIDIA/Assets/Isaac/2022.1/Isaac/Props/Forklift/forklift.usd
obj_color: ()
obj_texture: ""
obj_material: ""
obj_metallicness: float("NaN")
obj_reflectance: float("NaN")
obj_size_enabled: True
obj_size: 1
obj_scale: 1
obj_texture_scale: 1
obj_texture_rot: 0
obj_rot: (0, 0, 0)
obj_coord: (0, 0, 0)
obj_centered: True
obj_physics: False
obj_rot_camera_relative: True
obj_coord_camera_relative: True
obj_count: 0
obj_distance: Uniform(300, 800)
obj_horiz_fov_loc: Uniform(-1, 1)
obj_vert_fov_loc: Uniform(-1, 1)
obj_vel: (0, 0, 0)
obj_rot_vel: (0, 0, 0)
obj_accel: (0, 0, 0)
obj_rot_accel: (0, 0, 0)
obj_movement_obj_relative: False
obj_class_id: 1
# default light parameters
light_intensity: 100000
light_radius: 0.25
light_temp_enabled: False
light_color: (255, 255, 255)
light_temp: 6500
light_directed: False
light_directed_focus: 20
light_directed_focus_softness: 0
light_distant: False
light_camera_relative: True
light_rot: (0, 0, 0)
light_coord: (0, 0, 0)
light_count: 0
light_distance: Uniform(3, 8)
light_horiz_fov_loc: Uniform(-1, 1)
light_vert_fov_loc: Uniform(-1, 1)
light_coord_camera_relative: True
light_rot_camera_relative: True
light_vel: (0, 0, 0)
light_rot_vel: (0, 0, 0)
light_accel: (0, 0, 0)
light_rot_accel: (0, 0, 0)
light_movement_light_relative: False
# default scenario parameters
scenario_room_enabled: False
scenario_model: /NVIDIA/Assets/Isaac/2022.1/Isaac/Environments/Simple_Warehouse/warehouse.usd
scenario_class_id: 0
sky_texture: ""
sky_light_intensity: 1000
floor: True
wall: True
ceiling: True
wall_height: 20
floor_size: 20
floor_color: ()
wall_color: ()
ceiling_color: ()
floor_texture: ""
wall_texture: ""
ceiling_texture: ""
floor_texture_scale: 1
wall_texture_scale: 1
ceiling_texture_scale: 1
floor_texture_rot: 0
wall_texture_rot: 0
ceiling_texture_rot: 0
floor_material: ""
wall_material: ""
ceiling_material: ""
floor_reflectance: float("NaN")
wall_reflectance: float("NaN")
ceiling_reflectance: float("NaN")
floor_metallicness: float("NaN")
wall_metallicness: float("NaN")
ceiling_metallicness: float("NaN")
# default camera parameters
focal_length: 18.15
focus_distance: 4
horiz_aperture: 20.955
vert_aperture: 15.2908
f_stop: 0
stereo: False
stereo_baseline: 20
camera_coord: (0, 0, 50)
camera_rot: (0, 0, 0)
camera_vel: (0, 0, 0)
camera_rot_vel: (0, 0, 0)
camera_accel: (0, 0, 0)
camera_rot_accel: (0, 0, 0)
camera_movement_camera_relative: False
# default output parameters
output_dir: dataset
num_scenes: 10
img_width: 1280
img_height: 720
write_data: True
num_data_writer_threads: 4
sequential: False
sequence_step_count: 10
sequence_step_time: 1
rgb: True
depth: False
disparity: False
instance_seg: False
semantic_seg: False
bbox_2d_tight: False
bbox_2d_loose: False
bbox_3d: False
wireframe: False
groundtruth_stereo: False
groundtruth_visuals: False
# default model store parameters
nucleus_server: localhost
# default debug parameters
pause: 0
verbose: True
# simulation parameters
physics_simulate_time: 1
scene_units_in_meters: 1
path_tracing: False
samples_per_pixel_per_frame: 32 | 3,194 | YAML | 15.554404 | 93 | 0.725736 |
ngzhili/SynTable/mount_dir/parameters/profiles/default1.yaml | # Default parameters. Do not edit, move, or delete.
# default object parameters
obj_model: /NVIDIA/Assets/Isaac/2022.1/Isaac/Props/Forklift/forklift.usd
obj_color: ()
obj_texture: ""
obj_material: ""
obj_metallicness: float("NaN")
obj_reflectance: float("NaN")
obj_size_enabled: True
obj_size: 1
obj_scale: 1
obj_texture_scale: 1
obj_texture_rot: 0
obj_rot: (0, 0, 0)
obj_coord: (0, 0, 0)
obj_centered: True
obj_physics: False
obj_rot_camera_relative: True
obj_coord_camera_relative: True
obj_count: 0
obj_distance: Uniform(300, 800)
obj_horiz_fov_loc: Uniform(-1, 1)
obj_vert_fov_loc: Uniform(-1, 1)
obj_vel: (0, 0, 0)
obj_rot_vel: (0, 0, 0)
obj_accel: (0, 0, 0)
obj_rot_accel: (0, 0, 0)
obj_movement_obj_relative: False
obj_class_id: 1
# default light parameters
light_intensity: 100000
light_radius: 0.5
light_temp_enabled: False
light_color: (255, 255, 255)
light_temp: 6500
light_directed: False
light_directed_focus: 20
light_directed_focus_softness: 0
light_height: 15
light_width: 15
light_distant: False
light_camera_relative: True
light_rot: (0, 0, 0)
light_coord: (0, 0, 0)
light_count: 0
light_distance: Uniform(3, 8)
light_horiz_fov_loc: Uniform(-1, 1)
light_vert_fov_loc: Uniform(-1, 1)
light_coord_camera_relative: True
light_rot_camera_relative: True
light_vel: (0, 0, 0)
light_rot_vel: (0, 0, 0)
light_accel: (0, 0, 0)
light_rot_accel: (0, 0, 0)
light_movement_light_relative: False
lights_hemisphere_radius_min: 1.5
lights_hemisphere_radius_max: 2.5
# default scenario parameters
scenario_room_enabled: False
scenario_model: /NVIDIA/Assets/Isaac/2022.1/Isaac/Environments/Simple_Warehouse/warehouse.usd
scenario_class_id: 0
sky_texture: ""
sky_light_intensity: 1000
floor: True
wall: True
ceiling: True
wall_height: 20
floor_size: 20
floor_color: ()
wall_color: ()
ceiling_color: ()
floor_texture: ""
wall_texture: ""
ceiling_texture: ""
floor_texture_scale: 1
wall_texture_scale: 1
ceiling_texture_scale: 1
floor_texture_rot: 0
wall_texture_rot: 0
ceiling_texture_rot: 0
floor_material: ""
wall_material: ""
ceiling_material: ""
floor_reflectance: float("NaN")
wall_reflectance: float("NaN")
ceiling_reflectance: float("NaN")
floor_metallicness: float("NaN")
wall_metallicness: float("NaN")
ceiling_metallicness: float("NaN")
# default camera parameters
focal_length: 18.15
focus_distance: 4
horiz_aperture: 20.955
vert_aperture: 15.2908
f_stop: 0
stereo: False
stereo_baseline: 20
camera_coord: (0, 0, 50)
camera_rot: (0, 0, 0)
camera_vel: (0, 0, 0)
camera_rot_vel: (0, 0, 0)
camera_accel: (0, 0, 0)
camera_rot_accel: (0, 0, 0)
camera_movement_camera_relative: False
cam_hemisphere_radius_min: 0.7
cam_hemisphere_radius_max: 1.4
# Camera and Light Parameters
auto_hemisphere_radius: False
# Scene Settings
max_obj_in_scene: 10
randomise_num_of_objs_in_scene: False
save_segmentation_data: False
save_background: False
checkpoint_interval: 10
# default output parameters
output_dir: dataset
num_scenes: 10
num_views: 1
img_width: 1280
img_height: 720
write_data: True
num_data_writer_threads: 4
sequential: False
sequence_step_count: 10
sequence_step_time: 1
rgb: True
depth: False
disparity: False
instance_seg: False
semantic_seg: False
bbox_2d_tight: False
bbox_2d_loose: False
bbox_3d: False
wireframe: False
groundtruth_stereo: False
groundtruth_visuals: False
# default model store parameters
nucleus_server: localhost
# default debug parameters
pause: 0
verbose: True
# simulation parameters
physics_simulate_time: 1
scene_units_in_meters: 1
path_tracing: False
samples_per_pixel_per_frame: 32
| 3,589 | YAML | 15.697674 | 93 | 0.731123 |
ngzhili/SynTable/mount_dir/parameters/profiles/base_groups.yaml | flying_objs:
obj_model: Choice(["assets/models/warehouse.txt", "assets/models/hospital.txt", "assets/models/office.txt"])
obj_size: Uniform(.50, .75)
obj_distance: Uniform(4, 20)
flying_shapes:
obj_model: Choice(["assets/models/shapes.txt"])
obj_size: Uniform(1, 2)
obj_distance: Uniform(15, 25)
flying_lights:
light_intensity: Uniform(0, 100000)
light_radius: Uniform(.50, 1)
light_vert_fov_loc: Uniform(0, 1)
light_distance: Uniform(4, 15)
# global parameters
obj_rot: Uniform((0, 0, 0), (360, 360, 360))
obj_horiz_fov_loc: Uniform(-1, 1)
obj_vert_fov_loc: Uniform(-0.7, 1)
obj_metallicness: Uniform(0.1, 0.8)
obj_reflectance: Uniform(0.1, 0.8)
| 679 | YAML | 20.249999 | 110 | 0.680412 |
selinaxiao/MeshToUsd/README.md | # Extension Project Template
This project was automatically generated.
- `app` - It is a folder link to the location of your *Omniverse Kit* based app.
- `exts` - It is a folder where you can add new extensions. It was automatically added to extension search path. (Extension Manager -> Gear Icon -> Extension Search Path).
Open this folder using Visual Studio Code. It will suggest installing a few extensions that improve the Python development experience.
Look for the "mesh.to.usd" extension in the Extension Manager and enable it. Try applying changes to any Python file; it will hot-reload and you can observe the results immediately.
Alternatively, you can launch your app from the console with this folder added to the extension search path and your extension enabled, e.g.:
```
> app\omni.code.bat --ext-folder exts --enable company.hello.world
```
# App Link Setup
If the `app` folder link doesn't exist or is broken, it can be created again. For a better developer experience, it is recommended to create a folder link named `app` to the *Omniverse Kit* app installed from the *Omniverse Launcher*. A convenience script is included.
Run:
```
> link_app.bat
```
If successful, you should see an `app` folder link in the root of this repo.
If multiple Omniverse apps are installed, the script will select the recommended one. Alternatively, you can explicitly pass an app:
```
> link_app.bat --app create
```
You can also just pass a path to create the link to:
```
> link_app.bat --path "C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4"
```
# Sharing Your Extensions
This folder is ready to be pushed to any git repository. Once pushed, a direct link to the git repository can be added to the *Omniverse Kit* extension search paths.
Link might look like this: `git://github.com/[user]/[your_repo].git?branch=main&dir=exts`
Notice that `exts` is the repo subfolder containing the extensions. More information can be found in the "Git URL as Extension Search Paths" section of the developer manual.
To add a link to your *Omniverse Kit* based app, go to: Extension Manager -> Gear Icon -> Extension Search Path
| 2,035 | Markdown | 37.415094 | 258 | 0.756265 |
selinaxiao/MeshToUsd/tools/scripts/link_app.py | import argparse
import json
import os
import sys
import packmanapi
import urllib3
def find_omniverse_apps():
http = urllib3.PoolManager()
try:
r = http.request("GET", "http://127.0.0.1:33480/components")
except Exception as e:
print(f"Failed retrieving apps from an Omniverse Launcher, maybe it is not installed?\nError: {e}")
sys.exit(1)
apps = {}
for x in json.loads(r.data.decode("utf-8")):
latest = x.get("installedVersions", {}).get("latest", "")
if latest:
for s in x.get("settings", []):
if s.get("version", "") == latest:
root = s.get("launch", {}).get("root", "")
apps[x["slug"]] = (x["name"], root)
break
return apps
def create_link(src, dst):
print(f"Creating a link '{src}' -> '{dst}'")
packmanapi.link(src, dst)
APP_PRIORITIES = ["code", "create", "view"]
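# Preferred order when auto-selecting an installed app; the first name found among the
# discovered apps is used (see the fallback lookup below).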
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Create folder link to Kit App installed from Omniverse Launcher")
parser.add_argument(
"--path",
help="Path to Kit App installed from Omniverse Launcher, e.g.: 'C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4'",
required=False,
)
parser.add_argument(
"--app", help="Name of Kit App installed from Omniverse Launcher, e.g.: 'code', 'create'", required=False
)
args = parser.parse_args()
path = args.path
if not path:
print("Path is not specified, looking for Omniverse Apps...")
apps = find_omniverse_apps()
if len(apps) == 0:
print(
"Can't find any Omniverse Apps. Use Omniverse Launcher to install one. 'Code' is the recommended app for developers."
)
sys.exit(0)
print("\nFound following Omniverse Apps:")
for i, slug in enumerate(apps):
name, root = apps[slug]
print(f"{i}: {name} ({slug}) at: '{root}'")
if args.app:
selected_app = args.app.lower()
if selected_app not in apps:
choices = ", ".join(apps.keys())
print(f"Passed app: '{selected_app}' is not found. Specify one of the following found Apps: {choices}")
sys.exit(0)
else:
selected_app = next((x for x in APP_PRIORITIES if x in apps), None)
if not selected_app:
selected_app = next(iter(apps))
print(f"\nSelected app: {selected_app}")
_, path = apps[selected_app]
if not os.path.exists(path):
print(f"Provided path doesn't exist: {path}")
else:
SCRIPT_ROOT = os.path.dirname(os.path.realpath(__file__))
create_link(f"{SCRIPT_ROOT}/../../app", path)
print("Success!")
| 2,814 | Python | 32.117647 | 133 | 0.562189 |
selinaxiao/MeshToUsd/tools/packman/config.packman.xml | <config remotes="cloudfront">
<remote2 name="cloudfront">
<transport actions="download" protocol="https" packageLocation="d4i3qtqj3r0z5.cloudfront.net/${name}@${version}" />
</remote2>
</config>
| 211 | XML | 34.333328 | 123 | 0.691943 |
selinaxiao/MeshToUsd/tools/packman/bootstrap/install_package.py | # Copyright 2019 NVIDIA CORPORATION
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import shutil
import sys
import tempfile
import zipfile
__author__ = "hfannar"
logging.basicConfig(level=logging.WARNING, format="%(message)s")
logger = logging.getLogger("install_package")
class TemporaryDirectory:
def __init__(self):
self.path = None
def __enter__(self):
self.path = tempfile.mkdtemp()
return self.path
def __exit__(self, type, value, traceback):
# Remove temporary data created
shutil.rmtree(self.path)
def install_package(package_src_path, package_dst_path):
with zipfile.ZipFile(package_src_path, allowZip64=True) as zip_file, TemporaryDirectory() as temp_dir:
zip_file.extractall(temp_dir)
# Recursively copy (temp_dir will be automatically cleaned up on exit)
try:
# Recursive copy is needed because both package name and version folder could be missing in
# target directory:
shutil.copytree(temp_dir, package_dst_path)
except OSError as exc:
logger.warning("Directory %s already present, packaged installation aborted" % package_dst_path)
else:
logger.info("Package successfully installed to %s" % package_dst_path)
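# Entry point: the packman bootstrap is expected to invoke this script as
# `python install_package.py <package_zip> <target_install_dir>` (argument roles taken from
# the sys.argv usage below).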
install_package(sys.argv[1], sys.argv[2])
| 1,844 | Python | 33.166666 | 108 | 0.703362 |