alankent/ordinary-depthmap-projection/exts/ordinary.depthmap.projection/ordinary/depthmap/projection/tests/test_hello_world.py

# NOTE:
# omni.kit.test - std python's unittest module with additional wrapping to add support for async/await tests
# For most things refer to unittest docs: https://docs.python.org/3/library/unittest.html
import omni.kit.test
# Extension for writing UI tests (simulate UI interaction)
import omni.kit.ui_test as ui_test
# Import the extension python module we are testing with an absolute import path, as if we were an external user (another extension)
import ordinary.depthmap.projection
# Having a test class derived from omni.kit.test.AsyncTestCase declared at the root of the module will make it auto-discoverable by omni.kit.test
class Test(omni.kit.test.AsyncTestCase):
# Before running each test
async def setUp(self):
pass
# After running each test
async def tearDown(self):
pass
# Actual test, notice it is an "async" function, so "await" can be used if needed
async def test_hello_public_function(self):
result = ordinary.depthmap.projection.some_public_function(4)
self.assertEqual(result, 256)
async def test_window_button(self):
# Find a label in our window
label = ui_test.find("My Window//Frame/**/Label[*]")
# Find buttons in our window
add_button = ui_test.find("My Window//Frame/**/Button[*].text=='Add'")
reset_button = ui_test.find("My Window//Frame/**/Button[*].text=='Reset'")
# Click reset button
await reset_button.click()
self.assertEqual(label.widget.text, "empty")
await add_button.click()
self.assertEqual(label.widget.text, "count: 1")
await add_button.click()
self.assertEqual(label.widget.text, "count: 2")

alankent/ordinary-depthmap-projection/exts/ordinary.depthmap.projection/config/extension.toml

[package]
# Semantic Versioning is used: https://semver.org/
version = "1.0.0"
# Lists people or organizations that are considered the "authors" of the package.
authors = ["NVIDIA"]
# The title and description fields are primarily for displaying extension info in UI
title = "ordinary depthmap projection"
description="A simple python extension example to use as a starting point for your extensions."
# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"
# URL of the extension source repository.
repository = ""
# One of categories for UI.
category = "Example"
# Keywords for the extension
keywords = ["kit", "example"]
# Location of change log file in target (final) folder of extension, relative to the root.
# More info on writing changelog: https://keepachangelog.com/en/1.0.0/
changelog="docs/CHANGELOG.md"
# Preview image and icon. Folder named "data" automatically goes in git lfs (see .gitattributes file).
# Preview image is shown in "Overview" of Extensions window. Screenshot of an extension might be a good preview image.
preview_image = "data/preview.png"
# Icon is shown in Extensions window, it is recommended to be square, of size 256x256.
icon = "data/icon.png"
# Use omni.ui to build simple UI
[dependencies]
"omni.kit.uiapp" = {}
# Main python module this extension provides, it will be publicly available as "import ordinary.depthmap.projection".
[[python.module]]
name = "ordinary.depthmap.projection"
[[test]]
# Extra dependencies only to be used during test run
dependencies = [
"omni.kit.ui_test" # UI testing extension
]

alankent/ordinary-depthmap-projection/exts/ordinary.depthmap.projection/docs/CHANGELOG.md

# Changelog
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [1.0.0] - 2021-04-26
- Initial version of extension UI template with a window

alankent/ordinary-depthmap-projection/exts/ordinary.depthmap.projection/docs/README.md

# Python Extension Example [ordinary.depthmap.projection]
This is an example of a pure Python Kit extension. It is intended to be copied and serve as a template for creating new extensions.

alankent/ordinary-depthmap-projection/exts/ordinary.depthmap.projection/docs/index.rst

ordinary.depthmap.projection
#############################
Example of Python only extension
.. toctree::
:maxdepth: 1
README
CHANGELOG
.. automodule:: ordinary.depthmap.projection
:platform: Windows-x86_64, Linux-x86_64
:members:
:undoc-members:
:show-inheritance:
:imported-members:
:exclude-members: contextmanager

alankent/ordinary-vrm-clean/README-ext.md

# Extension Project Template
This project was automatically generated.
- `app` - a folder link to the location of your *Omniverse Kit* based app.
- `exts` - a folder where you can add new extensions. It was automatically added to the extension search path. (Extension Manager -> Gear Icon -> Extension Search Path).
Open this folder using Visual Studio Code. It will suggest installing a few extensions that will make the Python experience better.
Look for the "ordinary" extension in the extension manager and enable it. Try applying changes to any Python file; it will hot-reload and you can observe the results immediately.
Alternatively, you can launch your app from console with this folder added to search path and your extension enabled, e.g.:
```
> app\omni.code.bat --ext-folder exts --enable company.hello.world
```
# App Link Setup
If the `app` folder link doesn't exist or is broken, it can be created again. For a better developer experience it is recommended to create a folder link named `app` to the *Omniverse Kit* app installed from *Omniverse Launcher*. A convenience script is included.
Run:
```
> link_app.bat
```
If successful you should see `app` folder link in the root of this repo.
If multiple Omniverse apps are installed, the script will select the recommended one. Or you can explicitly pass an app:
```
> link_app.bat --app create
```
You can also just pass a path to create the link to:
```
> link_app.bat --path "C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4"
```
# Sharing Your Extensions
This folder is ready to be pushed to any git repository. Once pushed, a direct link to the git repository can be added to the *Omniverse Kit* extension search paths.
A link might look like this: `git://github.com/[user]/[your_repo].git?branch=main&dir=exts`
Notice `exts` is the repo subfolder containing the extensions. More information can be found in the "Git URL as Extension Search Paths" section of the developers manual.
To add a link to your *Omniverse Kit* based app go into: Extension Manager -> Gear Icon -> Extension Search Path

alankent/ordinary-vrm-clean/README.md

# VRM Importer for NVIDIA Omniverse
This repository is a work in progress, but I am making it available in its
current messy form regardless. I hope to clean this up and improve it
over the coming months.
## What is NVIDIA Omniverse?
You can download NVIDIA Omniverse for free if you have an NVIDIA GPU.
You can create 3D scenes using USD (Universal Scene Description) originally
from Pixar, now open source (https://openusd.org/).
Omniverse uses USD as its native file format.
## What is this repo?
This repo contains an Omniverse Kit extension. Almost all the Omniverse
tools are built using Kit, their framework for app building.
See [README-ext.md](./README-ext.md) for more information.
This extension is not in a good or final place, but I got it to do something,
so I am sharing it with the world in case anyone else wants to give it a go.
## The UI looks like the default template extension UI
Well spotted. I have put no effort into the UI at this stage. I just wanted
to get it to do something. So there is a button called "Clean" which runs
the code and "Dump" which walks the current Stage and prints out the length
of all properties that hold arrays. (I was using the latter to work out the
lengths of all the point mesh details as part of my learnings.)
## So how do I use it?
I grab a `.vrm` file exported from [VRoid Studio](https://vroid.com/en/studio),
rename it to `.glb`, open in Omniverse USD Composer (formerly "Create"),
right click and "Convert to USD". I then open the USD file and click the
"Clean" button. It restructures things in the currently opened character
USD file.
[VRM](https://github.com/vrm-c) files are in GLB format (the binary form of
glTF) but follow some additional standards to help with interchange in VR apps
(like VR Chat and some VTuber software like [VSeeFace](https://vseeface.icu)).
Ultimately, I could imagine this extension becoming a `.vrm` file importer
extension for Omniverse. One day...
## Why is it necessary?
Using the above approach to import a GLB file has worked the best but
still suffers from problems in Omniverse:
* The root bone is lost during the GLB import process, making animation clips not work correctly
* [Audio2Face](https://www.nvidia.com/en-us/omniverse/apps/audio2face/) does not like the meshes VRoid Studio generates
* Need to add hair physics (not started)
* Need to add cloth physics for clothes, like skirts (not started)
## Why am I doing this?
I tell people I am trying to create an animated cartoon series to publish
on YouTube. What really happens is I get distracted geeking out on technology.
* I created a few episodes originally with 2D animation in Adobe Character Animator.
* I then created a few episodes using Unity as the rendering pipeline (HDRP).
* I am now exploring NVIDIA Omniverse for rendering.
I am trying to stick to free tools so others can give it a go and see if
they like it before investing money into commercial tools.
## Where can I learn more?
I blog at [extra-ordinary.tv/blog](https://extra-ordinary.tv/blog/)
* [First Steps for VRoid Studio characters in NVIDIA Omniverse](https://extra-ordinary.tv/2023/05/28/2902/)
* [VRoid Studio, meet NVIDIA Omniverse](https://extra-ordinary.tv/2023/05/10/vroid-studio-meet-nvidia-omniverse/)

alankent/ordinary-vrm-clean/tools/scripts/link_app.py

import argparse
import json
import os
import sys
import packmanapi
import urllib3
def find_omniverse_apps():
http = urllib3.PoolManager()
try:
r = http.request("GET", "http://127.0.0.1:33480/components")
except Exception as e:
print(f"Failed retrieving apps from an Omniverse Launcher, maybe it is not installed?\nError: {e}")
sys.exit(1)
apps = {}
for x in json.loads(r.data.decode("utf-8")):
latest = x.get("installedVersions", {}).get("latest", "")
if latest:
for s in x.get("settings", []):
if s.get("version", "") == latest:
root = s.get("launch", {}).get("root", "")
apps[x["slug"]] = (x["name"], root)
break
return apps
def create_link(src, dst):
print(f"Creating a link '{src}' -> '{dst}'")
packmanapi.link(src, dst)
APP_PRIORITIES = ["code", "create", "view"]
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Create folder link to Kit App installed from Omniverse Launcher")
parser.add_argument(
"--path",
help="Path to Kit App installed from Omniverse Launcher, e.g.: 'C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4'",
required=False,
)
parser.add_argument(
"--app", help="Name of Kit App installed from Omniverse Launcher, e.g.: 'code', 'create'", required=False
)
args = parser.parse_args()
path = args.path
if not path:
print("Path is not specified, looking for Omniverse Apps...")
apps = find_omniverse_apps()
if len(apps) == 0:
print(
"Can't find any Omniverse Apps. Use Omniverse Launcher to install one. 'Code' is the recommended app for developers."
)
sys.exit(0)
print("\nFound following Omniverse Apps:")
for i, slug in enumerate(apps):
name, root = apps[slug]
print(f"{i}: {name} ({slug}) at: '{root}'")
if args.app:
selected_app = args.app.lower()
if selected_app not in apps:
choices = ", ".join(apps.keys())
print(f"Passed app: '{selected_app}' is not found. Specify one of the following found Apps: {choices}")
sys.exit(0)
else:
selected_app = next((x for x in APP_PRIORITIES if x in apps), None)
if not selected_app:
selected_app = next(iter(apps))
print(f"\nSelected app: {selected_app}")
_, path = apps[selected_app]
if not os.path.exists(path):
print(f"Provided path doesn't exist: {path}")
else:
SCRIPT_ROOT = os.path.dirname(os.path.realpath(__file__))
create_link(f"{SCRIPT_ROOT}/../../app", path)
print("Success!")

alankent/ordinary-vrm-clean/tools/packman/config.packman.xml

<config remotes="cloudfront">
<remote2 name="cloudfront">
<transport actions="download" protocol="https" packageLocation="d4i3qtqj3r0z5.cloudfront.net/${name}@${version}" />
</remote2>
</config>

alankent/ordinary-vrm-clean/tools/packman/bootstrap/install_package.py

# Copyright 2019 NVIDIA CORPORATION
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import shutil
import sys
import tempfile
import zipfile
__author__ = "hfannar"
logging.basicConfig(level=logging.WARNING, format="%(message)s")
logger = logging.getLogger("install_package")
class TemporaryDirectory:
def __init__(self):
self.path = None
def __enter__(self):
self.path = tempfile.mkdtemp()
return self.path
def __exit__(self, type, value, traceback):
# Remove temporary data created
shutil.rmtree(self.path)
def install_package(package_src_path, package_dst_path):
with zipfile.ZipFile(package_src_path, allowZip64=True) as zip_file, TemporaryDirectory() as temp_dir:
zip_file.extractall(temp_dir)
# Recursively copy (temp_dir will be automatically cleaned up on exit)
try:
# Recursive copy is needed because both package name and version folder could be missing in
# target directory:
shutil.copytree(temp_dir, package_dst_path)
except OSError as exc:
logger.warning("Directory %s already present, packaged installation aborted" % package_dst_path)
else:
logger.info("Package successfully installed to %s" % package_dst_path)
install_package(sys.argv[1], sys.argv[2])

alankent/ordinary-vrm-clean/exts/ordinary/ordinary/extension.py

# Made using: https://youtu.be/eGxV_PGNpOg
# Lessons learned
# - work out your package name first (it affects directory structure)
# - DeletePrims references Sdf which is not imported for you
import omni.ext
import omni.ui as ui
import omni.kit.commands
from pxr import Usd, Sdf, Gf, UsdGeom
from .ExtractMeshes import ExtractMeshes
# Functions and vars are available to other extensions as usual in Python: `example.python_ext.some_public_function(x)`
def some_public_function(x: int):
print("[ordinary] some_public_function was called with x: ", x)
return x ** x
# Any class derived from `omni.ext.IExt` in top level module (defined in `python.modules` of `extension.toml`) will be
# instantiated when extension gets enabled and `on_startup(ext_id)` will be called. Later when extension gets disabled
# on_shutdown() is called.
class OrdinaryExtension(omni.ext.IExt):
# ext_id is current extension id. It can be used with extension manager to query additional information, like where
# this extension is located on filesystem.
def on_startup(self, ext_id):
self._window = ui.Window("VRM Import Cleanup v1", width=300, height=300)
with self._window.frame:
# TODO: Clean up the UI... one day.
with ui.VStack():
label = ui.Label("")
def on_click():
self.clean_up_prim()
label.text = "clicked"
def on_dump():
label.text = "empty"
# Debugging: Print hierarchy of all prims in stage, all attributes of prims that are arrays
self.dump_stage()
label.text = "dump"
with ui.HStack():
ui.Button("Clean", clicked_fn=on_click)
ui.Button("Dump", clicked_fn=on_dump)
def on_shutdown(self):
print("[ordinary] ordinary shutdown")
# The main body of the clean up code for VRoid Studio characters.
def clean_up_prim(self):
# VRoid Studio dependent code. This code has hard coded path names used by VRoid Studio characters.
# If needed, could clean this up to make more generic.
# But I am also hoping NVIDIA fix their GLB import code, so trying to minimize my effort.
# The problem is that the GLB importer anchors the bone hierarchy one level too deep.
# /World/Root/J_Bip_C_Hips0/Skeleton/J_Bip_C_Hips/... should have been one node higher, so Root was
# included under Skeleton. Without this, it thinks the Hips are the root bone (at height zero).
# To work around the problem, I go through all the joint lists and insert "Root/" at the start
# of the joint paths.
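# For example (an illustrative sketch, not values from a specific file):
#   before: joints = ["J_Bip_C_Hips", "J_Bip_C_Hips/J_Bip_C_Spine", ...]
#   after:  joints = ["Root", "Root/J_Bip_C_Hips", "Root/J_Bip_C_Hips/J_Bip_C_Spine", ...]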
ctx = omni.usd.get_context()
stage = ctx.get_stage()
# TODO: May have a go at this again, moving everything up so it's Root/Skeleton without the hips.
# root_prim = stage.GetPrimAtPath('/World/Root')
# root_prim.SetTypeName('SkelRoot')
# Move Skeleton directly under Root layer
# self.move_if_necessary(stage, '/World/Root/J_Bip_C_Hips0/Skeleton', '/World/Root/Skeleton')
# Move meshes directly under Root layer, next to Skeleton
# self.move_if_necessary(stage, '/World/Root/J_Bip_C_Hips0/Face_baked', '/World/Root/Face_baked')
# self.move_if_necessary(stage, '/World/Root/J_Bip_C_Hips0/Body_baked', '/World/Root/Body_baked')
# self.move_if_necessary(stage, '/World/Root/J_Bip_C_Hips0/Hair001_baked', '/World/Root/Hair001_baked')
# Patch the skeleton structure, if not done already. Add "Root" to the joint list.
skeleton_prim = stage.GetPrimAtPath('/World/Root/J_Bip_C_Hips0/Skeleton')
if skeleton_prim:
self.add_parent_to_skeleton_joint_list(skeleton_prim, 'Root')
for child in stage.GetPrimAtPath('/World/Root/J_Bip_C_Hips0').GetChildren():
if child.IsA(UsdGeom.Mesh):
self.add_parent_to_mesh_joint_list(child, 'Root')
# self.split_disconnected_meshes(stage, child)
if child.GetName().startswith("Face_"):
e = ExtractMeshes(stage)
e.extract_face_meshes(child)
if child.GetName().startswith("Hair"):
e = ExtractMeshes(stage)
e.extract_hair_meshes(child)
if child.GetName().startswith("Body_"):
e = ExtractMeshes(stage)
e.extract_body_meshes(child)
# Delete the dangling node (was old SkelRoot)
# self.delete_if_no_children(stage, '/World/Root/J_Bip_C_Hips0')
# Delete old skeleton if present.
if stage.GetPrimAtPath('/World/Root/J_Bip_C_Hips0/Skeleton/J_Bip_C_Hips'):
omni.kit.commands.execute('DeletePrims', paths=[Sdf.Path('/World/Root/J_Bip_C_Hips0/Skeleton/J_Bip_C_Hips')])
# If a prim exists at the source path, move it to the target path.
# Returns true if moved, false otherwise.
def move_if_necessary(self, stage, source_path, target_path):
if stage.GetPrimAtPath(source_path):
omni.kit.commands.execute(
'MovePrim',
path_from=source_path,
path_to=target_path,
keep_world_transform=False,
destructive=False)
return True
return False
# Delete the prim at the specified path if it exists and has no children.
# Returns true if deleted, false otherwise.
def delete_if_no_children(self, stage, path):
prim = stage.GetPrimAtPath(path)
if prim:
if not prim.GetChildren():
omni.kit.commands.execute(
'DeletePrims',
paths=[Sdf.Path(path)],
destructive=False)
return True
return False
def add_parent_to_skeleton_joint_list(self, skeleton_prim: Usd.Prim, parent_name):
"""
A skeleton has 3 attributes:
- uniform matrix4d[] bindTransforms = [( (1, 0, -0, 0), ...]
- uniform token[] joints = ["J_Bip_C_Hips", ...]
- uniform matrix4d[] restTransforms = [( (1, 0, -0, 0), ...]
In Omniverse Code etc, you can hover over the names in the "Raw USD Property" panel to get
more documentation on the above properties.
We need to insert the new parent at the front of the three lists, and prepend the name to the joint paths.
"""
# Get the attributes
joints: Usd.Attribute = skeleton_prim.GetAttribute('joints')
bindTransforms: Usd.Attribute = skeleton_prim.GetAttribute('bindTransforms')
restTransforms: Usd.Attribute = skeleton_prim.GetAttribute('restTransforms')
# If the first joint is already the parent name, there is nothing to do.
if joints.Get()[0] == parent_name:
return False
# TODO: I use raw USD functions here, but there is also omni.kit.commands.execute("ChangeProperty",...)
# if you want undo support...
# https://docs.omniverse.nvidia.com/prod_kit/prod_kit/programmer_ref/usd/properties/set-attribute.html#omniverse-kit-commands
joints.Set([parent_name] + [parent_name + '/' + jp for jp in joints.Get()])
# Insert an identity matrix at the start for the root node we added.
# ((1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1))
identity_matrix = Gf.Matrix4d(1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1)
if bindTransforms.IsValid():
    bindTransforms.Set([identity_matrix] + [x for x in bindTransforms.Get()])
if restTransforms.IsValid():
    restTransforms.Set([identity_matrix] + [x for x in restTransforms.Get()])
# The meshes have paths to bones as well - add "Root" to their paths as well.
def add_parent_to_mesh_joint_list(self, mesh_prim, parent_name):
if mesh_prim:
joints: Usd.Attribute = mesh_prim.GetAttribute('skel:joints')
# Don't touch empty strings, and don't add the prefix if it was already added. The first value might be an empty string, so check the second.
if not joints.Get()[1].startswith(parent_name):
joints.Set([jp if jp == "" else parent_name + '/' + jp for jp in joints.Get()])
return True
return False
# Going to delete this - ended up creating a separate class for this. It got tricky.
#def split_disconnected_meshes(self, stage: Usd.Stage, mesh_prim: UsdGeom.Mesh):
# """
# Look for child GeomSubset prims and see if they have discontiguous meshes.
# Audio2Face cannot work with such meshes, so we need to split them into separate
# GeomSubset prims.
# """
# mesh_indices = mesh_prim.GetAttribute('faceVertexIndices')
# for child in mesh_prim.GetChildren():
# if child.IsA(UsdGeom.Subset):
# subset_prim: UsdGeom.Subset = child
# subset_indices = child.GetAttribute('indices').Get()
# split = self.split_subset(mesh_indices, subset_indices)
# if len(split) > 1:
# print("Create new meshes for " + subset_prim)
# i = 0
# for split_subset in split:
# subset_path = subset_prim.GetPath() + "_" + i
# new_prim: UsdGeom.Subset = stage.DefinePrim(subset_path, usdType=UsdGeom.Subset)
# new_prim.CreateElementTypeAttr().Set(subset_prim.GetElementTypeAttr().Get())
# new_prim.CreateFamilyNameAttr().Set(subset_prim.GetFamilyNameAttr().Get())
# new_prim.CreateIndicesAttr().Set(split_subset)
# material_binding: UsdShade.MaterialBindingAPI = UsdShade.MaterialBindingAPI(new_prim)
# binding_targets = material_binding.GetMaterialBindSubsets()
# material_binding.CreateMaterialBindSubset().Set(UsdShade.MaterialBindingAPI(subset_prim).GetMaterialBindingTargets())
# i += 1
#
#def split_subset(self, mesh_indices, subset_indices):
# """
#    Given an array of mesh face vertex indices (multiply them by 3 to get the points index)
#    and an array of subset indices into the mesh indices array, return an array of
# disconnected submeshes.
# """
# Traverse the tree of prims, printing out selected attribute information.
# Useful for debugging.
def dump_stage(self):
ctx = omni.usd.get_context()
stage = ctx.get_stage()
for prim in stage.Traverse():
for attr in prim.GetAttributes():
try:
if len(attr.Get()) >= 50:
print(attr.GetPath(), len(attr.Get()))
except Exception:
pass

alankent/ordinary-vrm-clean/exts/ordinary/ordinary/__init__.py

from .extension import *

alankent/ordinary-vrm-clean/exts/ordinary/ordinary/MeshMaker.py

from pxr import Usd, Sdf, Gf, UsdGeom, UsdShade, UsdSkel

# This class creates a new Mesh by adding faces one at a time.
# When done, you ask it to create a new Mesh prim.
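# Typical usage (an illustrative sketch; the argument names mirror how ExtractMeshes drives this class):
#   mm = MeshMaker(stage, material, skeleton, skelJoints)
#   mm.add_face(points, jointIndices, jointWeights, i1, i2, i3, n1, n2, n3, st1, st2, st3)
#   mesh = mm.create_at_path('/World/Root/new_mesh')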
class MeshMaker:
def __init__(self, stage: Usd.Stage, material, skeleton, skelJoints):
self.stage = stage
self.material = material
self.faceVertexCounts = []
self.faceVertexIndices = []
self.normals = []
self.st = []
self.points = []
self.skeleton = skeleton
self.skelJoints = skelJoints
self.skelJointIndices = []
self.skelJointWeights = []
# Create a Mesh prim at the given prim path.
def create_at_path(self, prim_path) -> UsdGeom.Mesh:
# https://stackoverflow.com/questions/74462822/python-for-usd-map-a-texture-on-a-cube-so-every-face-have-the-same-image
mesh: UsdGeom.Mesh = UsdGeom.Mesh.Define(self.stage, prim_path)
mesh.CreateSubdivisionSchemeAttr().Set(UsdGeom.Tokens.none)
mesh.CreatePointsAttr(self.points)
mesh.CreateExtentAttr(UsdGeom.PointBased(mesh).ComputeExtent(mesh.GetPointsAttr().Get()))
mesh.CreateNormalsAttr(self.normals)
mesh.SetNormalsInterpolation(UsdGeom.Tokens.faceVarying)
mesh.CreateFaceVertexCountsAttr(self.faceVertexCounts)
mesh.CreateFaceVertexIndicesAttr(self.faceVertexIndices)
mesh.CreatePrimvar('st', Sdf.ValueTypeNames.TexCoord2fArray, UsdGeom.Tokens.faceVarying).Set(self.st)
ba: UsdSkel.BindingAPI = UsdSkel.BindingAPI(mesh)
ba.Apply(mesh.GetPrim())
ba.CreateGeomBindTransformAttr(Gf.Matrix4d(1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1))
ba.CreateSkeletonRel().SetTargets(self.skeleton)
ba.CreateJointsAttr(self.skelJoints)
ba.CreateJointIndicesPrimvar(False, elementSize=4).Set(self.skelJointIndices)
ba.CreateJointWeightsPrimvar(False, elementSize=4).Set(self.skelJointWeights)
UsdShade.MaterialBindingAPI(mesh).GetDirectBindingRel().SetTargets(self.material)
return mesh
# Add a new face (3 points with normals and mappings to part of the texture)
def add_face(self, points, jointIndices, jointWeights, pi1, pi2, pi3, normal1, normal2, normal3, st1, st2, st3):
self.faceVertexCounts.append(3)
self.faceVertexIndices.append(self.new_index_of_point(points, jointIndices, jointWeights, pi1))
self.faceVertexIndices.append(self.new_index_of_point(points, jointIndices, jointWeights, pi2))
self.faceVertexIndices.append(self.new_index_of_point(points, jointIndices, jointWeights, pi3))
self.normals.append(normal1)
self.normals.append(normal2)
self.normals.append(normal3)
self.st.append(st1)
self.st.append(st2)
self.st.append(st3)
# Given a point, find an existing points array entry and return its index, otherwise add another point
# and return the index of the new point.
# If adding a new point, also copy across the skeleton joint index and joint weight from the old point.
# TODO: This could be optimized with a lookup table of point->index.
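# One way that optimization could look (an untested sketch; self.point_to_index would be
# a new dict initialized to {} in __init__):
#   key = tuple(point)              # Gf.Vec3f is iterable, so a tuple of its components is hashable
#   if key in self.point_to_index:
#       return self.point_to_index[key]
#   ...append the point and joint data as below, then record:
#   self.point_to_index[key] = len(self.points) - 1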
def new_index_of_point(self, points, jointIndices, jointWeights, point_index):
point = points[point_index]
for i in range(0, len(self.points)):
if self.points[i] == point:
return i
self.points.append(point)
# Copy across the old joint information. This assumes element_size = 4.
self.skelJointIndices.append(jointIndices[point_index * 4])
self.skelJointIndices.append(jointIndices[point_index * 4 + 1])
self.skelJointIndices.append(jointIndices[point_index * 4 + 2])
self.skelJointIndices.append(jointIndices[point_index * 4 + 3])
self.skelJointWeights.append(jointWeights[point_index * 4])
self.skelJointWeights.append(jointWeights[point_index * 4 + 1])
self.skelJointWeights.append(jointWeights[point_index * 4 + 2])
self.skelJointWeights.append(jointWeights[point_index * 4 + 3])
return len(self.points) - 1

alankent/ordinary-vrm-clean/exts/ordinary/ordinary/ExtractMeshes.py

import typing
import omni.ext
import omni.ui as ui
import omni.kit.commands
from pxr import Usd, Sdf, Gf, UsdGeom, UsdShade, UsdSkel
from .MeshMaker import MeshMaker
import math
# Good resource https://github.com/NVIDIA-Omniverse/USD-Tutorials-And-Examples/blob/main/ColaboratoryNotebooks/usd_introduction.ipynb
# Also https://docs.omniverse.nvidia.com/prod_kit/prod_kit/programmer_ref/usd/transforms/get-world-transforms.html
#def get_world_translate(prim: Usd.Prim) -> Gf.Vec3d:
# """
# Get the local transformation of a prim using Xformable.
# See https://graphics.pixar.com/usd/release/api/class_usd_geom_xformable.html
# Args:
# prim: The prim to calculate the world transformation.
# Returns:
# - Translation vector.
# """
# xform = UsdGeom.Xformable(prim)
# time = Usd.TimeCode.Default() # The time at which we compute the bounding box
# world_transform: Gf.Matrix4d = xform.ComputeLocalToWorldTransform(time)
# translation: Gf.Vec3d = world_transform.ExtractTranslation()
# return translation
class ExtractMeshes:
def __init__(self, stage: Usd.Stage):
self.stage = stage
self.segment_map = None
# Return true if this Mesh is the Face mesh we want to convert.
# Names are like "Face_baked" and "Face__merged__Clone_".
#def mesh_is_face_mesh(mesh: UsdGeom.Mesh):
# name: str = mesh.GetName()
# if name.startswith("Face_"):
# return True
# else:
# return False
def extract_face_meshes(self, mesh: UsdGeom.Mesh):
# Run through the children sub-meshes
# "F00_000_00_FaceMouth_00_FACE" -- Needs splitting into upper teeth, lower teeth, mouth cavity, toungue
# "F00_000_00_EyeWhite_00_EYE"
# "F00_000_00_FaceEyeline_00_FACE"
# "F00_000_00_FaceEyelash_00_FACE"
# "F00_000_00_FaceBrow_00_FACE"
# "F00_000_00_EyeIris_00_EYE"
# "F00_000_00_EyeHighlight_00_EYE" -- Drop? Gone in new version.
# "F00_000_00_Face_00_SKIN" or "N00_000_00_Face_00_SKIN__Instance_"
# "F00_000_00_Face_00_SKIN_1"
# "F00_000_00_EyeExtra_01_EYE" -- Drop? Gone in new version.
for child in mesh.GetChildren():
if child.IsA(UsdGeom.Subset):
name: str = child.GetName()
if "_Face_" in name and "SKIN" in name:
self.extract_face_skin(mesh, child)
elif "_FaceMouth_" in name:
self.extract_mouth(mesh, child)
elif "_FaceEyeline_" in name:
self.extract_eyeline(mesh, child)
elif "_FaceEyelash_" in name:
self.extract_eyelash(mesh, child)
elif "_FaceBrow_" in name:
self.extract_eyebrow(mesh, child)
elif "_EyeWhite_" in name:
self.extract_eyewhites(mesh, child)
elif "_EyeIris" in name:
self.extract_irises(mesh, child)
def extract_hair_meshes(self, old_mesh: UsdGeom.Mesh):
# Run through the children sub-meshes and copy them to their own Mesh
n = 0
for child in old_mesh.GetChildren():
if child.IsA(UsdGeom.Subset):
material = UsdShade.MaterialBindingAPI(child).GetDirectBindingRel().GetTargets()
skeleton = UsdSkel.BindingAPI(old_mesh).GetSkeletonRel().GetTargets()
skelJoints = UsdSkel.BindingAPI(old_mesh).GetJointsAttr().Get()
new_mesh = MeshMaker(self.stage, material, skeleton, skelJoints)
self.copy_subset(new_mesh, old_mesh, child)
new_mesh.create_at_path(old_mesh.GetPath().GetParentPath().AppendChild('hair_' + str(n)))
n += 1
def extract_body_meshes(self, old_mesh: UsdGeom.Mesh):
# Run through the children sub-meshes and copy them to their own Mesh
n = 0
for child in old_mesh.GetChildren():
if child.IsA(UsdGeom.Subset):
material = UsdShade.MaterialBindingAPI(child).GetDirectBindingRel().GetTargets()
skeleton = UsdSkel.BindingAPI(old_mesh).GetSkeletonRel().GetTargets()
skelJoints = UsdSkel.BindingAPI(old_mesh).GetJointsAttr().Get()
new_mesh = MeshMaker(self.stage, material, skeleton, skelJoints)
self.copy_subset(new_mesh, old_mesh, child)
name: str = child.GetName()
if "_Body_" in name:
new_name = 'bodyskin'
elif "_Tops_" in name:
new_name = 'clothes_upper'
elif "_Bottoms_" in name:
new_name = 'clothes_lower'
elif "_Shoes_" in name:
new_name = 'shoes'
else:
new_name = 'body_' + str(n)
n += 1
new_mesh.create_at_path(old_mesh.GetPath().GetParentPath().AppendChild(new_name))
# Copy the whole mesh across for the face.
# The original mesh from VRoid contains points that are not used by any face.
def extract_face_skin(self, old_mesh: UsdGeom.Mesh, old_subset: UsdGeom.Subset):
self.make_mesh_from_subset(old_mesh, old_subset, 'face_skin')
# Extract multiple meshes from the mouth: upper teeth (needs joining), lower teeth (needs joining),
# tongue, and mouth cavity.
def extract_mouth(self, old_mesh: UsdGeom.Mesh, old_subset: UsdGeom.Subset):
# This one is the most tricky case. There is one "subset" for the mouth, which includes upper
# and lower teeth, as well as the mouth cavity. We need upper and lower teeth in their own
# meshes for Audio2Face to work.
# So, we segment the mesh, ignoring points that are not actually used by faces (there are a lot!).
# The first two connected meshes are for the inside and outside of the teeth.
# The next two are the lower teeth.
# Then there is a single connected mesh for the mouth cavity.
# Finally there are another two meshes for the top and bottom of the tongue.
num_segments = self.segment_mesh(old_mesh, old_subset)
if num_segments == 7:
material = UsdShade.MaterialBindingAPI(old_subset).GetDirectBindingRel().GetTargets()
skeleton = UsdSkel.BindingAPI(old_mesh).GetSkeletonRel().GetTargets()
skelJoints = UsdSkel.BindingAPI(old_mesh).GetJointsAttr().Get()
# 0 = inner upper teeth, 1 = outer upper teeth
new_mesh = MeshMaker(self.stage, material, skeleton, skelJoints)
self.copy_subset(new_mesh, old_mesh, old_subset, 0, 1)
new_mesh.create_at_path(old_mesh.GetPath().GetParentPath().AppendChild('upper_teeth'))
# 2 = inner lower teeth, 3 = outer lower teeth
new_mesh = MeshMaker(self.stage, material, skeleton, skelJoints)
self.copy_subset(new_mesh, old_mesh, old_subset, 2, 3)
new_mesh.create_at_path(old_mesh.GetPath().GetParentPath().AppendChild('lower_teeth'))
# 4 = mouth cavity
new_mesh = MeshMaker(self.stage, material, skeleton, skelJoints)
self.copy_subset(new_mesh, old_mesh, old_subset, 4, 4)
new_mesh.create_at_path(old_mesh.GetPath().GetParentPath().AppendChild('mouth_cavity'))
# 5 = upper tongue, 6 = lower tongue
new_mesh = MeshMaker(self.stage, material, skeleton, skelJoints)
self.copy_subset(new_mesh, old_mesh, old_subset, 5, 6)
new_mesh.create_at_path(old_mesh.GetPath().GetParentPath().AppendChild('tongue'))
self.clear_segment_map()
# Eyeline we could split into left and right eyes, but we don't need to.
# It uses a separate mesh with its own material.
def extract_eyeline(self, old_mesh: UsdGeom.Mesh, old_subset: UsdGeom.Subset):
self.make_mesh_from_subset(old_mesh, old_subset, 'eyeline')
# Eyelash we could split into left and right eyes, but we don't need to.
# It uses a separate mesh with its own material.
def extract_eyelash(self, old_mesh: UsdGeom.Mesh, old_subset: UsdGeom.Subset):
self.make_mesh_from_subset(old_mesh, old_subset, 'eyelash')
# Eyebrows we could split into left and right eyes, but we don't need to.
# It uses a separate mesh with its own material.
def extract_eyebrow(self, old_mesh: UsdGeom.Mesh, old_subset: UsdGeom.Subset):
self.make_mesh_from_subset(old_mesh, old_subset, 'eyebrow')
# Eyewhites are interesting. Rather than being a sphere or similar, there is actually a gap
# behind the irises, which causes shadows. I might want to adjust these one day, but they
# stop you from being able to look inside the head when the eyes move (the whites do not move).
def extract_eyewhites(self, old_mesh: UsdGeom.Mesh, old_subset: UsdGeom.Subset):
self.make_mesh_from_subset(old_mesh, old_subset, 'eyewhites')
# We need to extract the left and right eye irises. There are lots of other points and things
# that are not actually used, so we want to toss them.
def extract_irises(self, old_mesh: UsdGeom.Mesh, old_subset: UsdGeom.Subset):
self.segment_mesh(old_mesh, old_subset)
eyes = ["left_eye", "right_eye"]
for i in range(len(eyes)):
material = UsdShade.MaterialBindingAPI(old_subset).GetDirectBindingRel().GetTargets()
skeleton = UsdSkel.BindingAPI(old_mesh).GetSkeletonRel().GetTargets()
skelJoints = UsdSkel.BindingAPI(old_mesh).GetJointsAttr().Get()
new_mesh = MeshMaker(self.stage, material, skeleton, skelJoints)
self.copy_subset(new_mesh, old_mesh, old_subset, i, i)
# We do a bit more work because eyes need to rotate. So we insert an Xform above the
# Mesh for each of the two eyes. Its pivot is a bit behind the eyes, moved a bit towards the center of the head,
# as the front of the character is more rounded than a real head.
# We use the size of the iris to estimate how far behind the iris the pivot point needs to go.
pivot_prim_path = old_mesh.GetPath().GetParentPath().AppendChild(eyes[i] + "_pivot")
xformPrim = UsdGeom.Xform.Define(self.stage, pivot_prim_path)
eye_mesh: UsdGeom.Mesh = new_mesh.create_at_path(pivot_prim_path.AppendChild(eyes[i]))
extent = eye_mesh.GetExtentAttr().Get()
(x1,y1,z1) = extent[0]
(x2,y2,z2) = extent[1]
(dx,dy,dz) = ((x1+x2)/4.0, (y1+y2)/2.0, z2 - (y2-y1) * 2.0)
UsdGeom.XformCommonAPI(xformPrim).SetTranslate((dx, dy, dz))
UsdGeom.XformCommonAPI(eye_mesh).SetTranslate((-dx, -dy, -dz))
self.clear_segment_map()
# Create a new mesh from the given submesh (copy it all to a new Mesh)
def make_mesh_from_subset(self, old_mesh, old_subset, prim_name):
material = UsdShade.MaterialBindingAPI(old_subset).GetDirectBindingRel().GetTargets()
skeleton = UsdSkel.BindingAPI(old_mesh).GetSkeletonRel().GetTargets()
skelJoints = UsdSkel.BindingAPI(old_mesh).GetJointsAttr().Get()
new_mesh = MeshMaker(self.stage, material, skeleton, skelJoints)
self.copy_subset(new_mesh, old_mesh, old_subset)
new_mesh.create_at_path(old_mesh.GetPath().GetParentPath().AppendChild(prim_name))
# A GeomSubset holds an array of indices of which faces are used by this subset.
# But we need it in a separate Mesh for Audio2Face to be happy.
# So we run down the list of referenced faces, and add them to a completely new Mesh.
# This drops lots of unused points and cleans up the model.
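# For example (illustrative numbers): face 10 of a triangulated mesh reads
# faceVertexIndices[30..32]; each of those values indexes into "points", while
# normals and primvars:st are read per face-vertex at the same positions 30..32.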
def copy_subset(self, new_mesh: MeshMaker, old_mesh: UsdGeom.Mesh, old_subset: UsdGeom.Subset, segment1=None, segment2=None):
faceVertexIndices = old_mesh.GetAttribute('faceVertexIndices').Get()
points = old_mesh.GetAttribute('points').Get() # GetPointsAttr().Get()
normals = old_mesh.GetAttribute('normals').Get() # GetNormalsAttr().Get()
st = old_mesh.GetAttribute('primvars:st').Get()
jointIndices = old_mesh.GetAttribute('primvars:skel:jointIndices').Get()
jointWeights = old_mesh.GetAttribute('primvars:skel:jointWeights').Get()
for face_index in old_subset.GetAttribute('indices').Get(): # GetIndicesAttr().Get():
if self.segment_map is None or self.segment_map[face_index] == segment1 or self.segment_map[face_index] == segment2:
# This is hard coded for VRoid Studio in that it assumes each face is a triangle with 3 points.
# Pull the details of each triangle on the surface and add it to a new mesh, building it up from scratch.
pi1 = faceVertexIndices[face_index * 3]
pi2 = faceVertexIndices[face_index * 3 + 1]
pi3 = faceVertexIndices[face_index * 3 + 2]
n1 = normals[face_index * 3]
n2 = normals[face_index * 3 + 1]
n3 = normals[face_index * 3 + 2]
st1 = st[face_index * 3]
st2 = st[face_index * 3 + 1]
st3 = st[face_index * 3 + 2]
new_mesh.add_face(points, jointIndices, jointWeights, pi1, pi2, pi3, n1, n2, n3, st1, st2, st3)
# Clear the segment map.
def clear_segment_map(self):
self.segment_map = None
# Create a segment map by running through all the faces (and their vertices); any face
# that shares a point with a segment is considered to be part of that same segment.
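# For example (illustrative): given triangles A=(0,1,2), B=(2,3,4) and C=(7,8,9),
# A and B share point index 2 and land in segment 0, while C becomes its own segment 1.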
def segment_mesh(self, old_mesh: UsdGeom.Mesh, old_subset: UsdGeom.Subset):
# Work out some commonly used attributes.
faceVertexIndices = old_mesh.GetAttribute('faceVertexIndices').Get()
subset_indices = old_subset.GetAttribute('indices').Get()
num_faces = math.ceil(len(faceVertexIndices) / 3)
self.segment_map = [None] * num_faces
segment = 0
# Loop until we fail to find a face that has not been marked with a segment number.
while True:
i = 0
while i < len(subset_indices) and self.segment_map[subset_indices[i]] is not None:
i += 1
if i == len(subset_indices):
# Nothing found, so we are all done!
break
first_face = i
# Rather than adding the 3D coordinates, we add the index to the array of "points".
# If anything reuses the same index, then it's in the same contiguous mesh.
seen = set()
seen.add(faceVertexIndices[subset_indices[i] * 3])
seen.add(faceVertexIndices[subset_indices[i] * 3 + 1])
seen.add(faceVertexIndices[subset_indices[i] * 3 + 2])
# Note we index the segment map by face number, not by position within subset_indices.
self.segment_map[subset_indices[first_face]] = segment
# We have no guarantee on the order of faces, so we do a pass from front to end.
# If a face was added to the current segment, we retry until we fail to find anything.
found_one = True
while found_one:
found_one = False
i = first_face
while i < len(subset_indices):
face = subset_indices[i]
if self.segment_map[face] is None:
if faceVertexIndices[face * 3] in seen or faceVertexIndices[face * 3 + 1] in seen or faceVertexIndices[face * 3 + 2] in seen:
# One of the 3 points of the current Face shares a vertex with the current mesh,
# so add all 3 points as belonging to the mesh and mark that we did find at least one more.
seen.add(faceVertexIndices[face * 3])
seen.add(faceVertexIndices[face * 3 + 1])
seen.add(faceVertexIndices[face * 3 + 2])
self.segment_map[face] = segment
found_one = True
i += 1
segment += 1
return segment

alankent/ordinary-vrm-clean/exts/ordinary/ordinary/tests/__init__.py

from .test_hello_world import *

alankent/ordinary-vrm-clean/exts/ordinary/ordinary/tests/test_hello_world.py

# NOTE:
# omni.kit.test - std python's unittest module with additional wrapping to add support for async/await tests
# For most things refer to unittest docs: https://docs.python.org/3/library/unittest.html
import omni.kit.test
# Extension for writing UI tests (simulate UI interaction)
import omni.kit.ui_test as ui_test
# Import the extension python module we are testing with an absolute import path, as if we were an external user (another extension)
import ordinary
# Having a test class derived from omni.kit.test.AsyncTestCase declared at the root of the module will make it auto-discoverable by omni.kit.test
class Test(omni.kit.test.AsyncTestCase):
# Before running each test
async def setUp(self):
pass
# After running each test
async def tearDown(self):
pass
# Actual test, notice it is an "async" function, so "await" can be used if needed
async def test_hello_public_function(self):
result = ordinary.some_public_function(4)
self.assertEqual(result, 256)
async def test_window_button(self):
# Find a label in our window
label = ui_test.find("My Window//Frame/**/Label[*]")
# Find buttons in our window
add_button = ui_test.find("My Window//Frame/**/Button[*].text=='Add'")
reset_button = ui_test.find("My Window//Frame/**/Button[*].text=='Reset'")
# Click reset button
await reset_button.click()
self.assertEqual(label.widget.text, "empty")
await add_button.click()
self.assertEqual(label.widget.text, "count: 1")
await add_button.click()
self.assertEqual(label.widget.text, "count: 2")

alankent/ordinary-vrm-clean/exts/ordinary/config/extension.toml

[package]
# Semantic Versioning is used: https://semver.org/
version = "0.1.0"
# Lists people or organizations that are considered the "authors" of the package.
authors = ["Alan Kent"]
# The title and description fields are primarily for displaying extension info in UI
title = "First VRM Cleanup Attempt"
description="Importing a VRM file (renamed to .glb) as USD puts skeleton at wrong level. Try and clean it up."
# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"
# URL of the extension source repository.
repository = ""
# One of categories for UI.
category = "Example"
# Keywords for the extension
keywords = ["kit", "example"]
# Location of change log file in target (final) folder of extension, relative to the root.
# More info on writing changelog: https://keepachangelog.com/en/1.0.0/
changelog="docs/CHANGELOG.md"
# Preview image and icon. Folder named "data" automatically goes in git lfs (see .gitattributes file).
# Preview image is shown in "Overview" of Extensions window. Screenshot of an extension might be a good preview image.
preview_image = "data/preview.png"
# Icon is shown in Extensions window, it is recommended to be square, of size 256x256.
icon = "data/icon.png"
# Use omni.ui to build simple UI
[dependencies]
"omni.kit.uiapp" = {}
# Main python module this extension provides, it will be publicly available as "import ordinary".
# TODO: I created with wrong module name - directory structure needs fixing
[[python.module]]
name = "ordinary"
[[test]]
# Extra dependencies only to be used during test run
dependencies = [
"omni.kit.ui_test" # UI testing extension
]

alankent/ordinary-vrm-clean/exts/ordinary/docs/CHANGELOG.md

# Changelog
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [1.0.0] - 2021-04-26
- Initial version of extension UI template with a window

alankent/ordinary-vrm-clean/exts/ordinary/docs/README.md

# Python Extension Example [ordinary]
This is an example of a pure Python Kit extension. It is intended to be copied and serve as a template for creating new extensions.

alankent/ordinary-vrm-clean/exts/ordinary/docs/index.rst

ordinary
#############################
Example of Python only extension
.. toctree::
:maxdepth: 1
README
CHANGELOG
.. automodule:: ordinary
:platform: Windows-x86_64, Linux-x86_64
:members:
:undoc-members:
:show-inheritance:
:imported-members:
:exclude-members: contextmanager

swatchoncompany/fabricator-omniverse-extension/tools/scripts/link_app.py

import os
import argparse
import sys
import json
import packmanapi
import urllib3
def find_omniverse_apps():
http = urllib3.PoolManager()
try:
r = http.request("GET", "http://127.0.0.1:33480/components")
except Exception as e:
print(f"Failed retrieving apps from an Omniverse Launcher, maybe it is not installed?\nError: {e}")
sys.exit(1)
apps = {}
for x in json.loads(r.data.decode("utf-8")):
latest = x.get("installedVersions", {}).get("latest", "")
if latest:
for s in x.get("settings", []):
if s.get("version", "") == latest:
root = s.get("launch", {}).get("root", "")
apps[x["slug"]] = (x["name"], root)
break
return apps
def create_link(src, dst):
print(f"Creating a link '{src}' -> '{dst}'")
packmanapi.link(src, dst)
APP_PRIORITIES = ["code", "create", "view"]
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Create folder link to Kit App installed from Omniverse Launcher")
parser.add_argument(
"--path",
help="Path to Kit App installed from Omniverse Launcher, e.g.: 'C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4'",
required=False,
)
parser.add_argument(
"--app", help="Name of Kit App installed from Omniverse Launcher, e.g.: 'code', 'create'", required=False
)
args = parser.parse_args()
path = args.path
if not path:
print("Path is not specified, looking for Omniverse Apps...")
apps = find_omniverse_apps()
if len(apps) == 0:
print(
"Can't find any Omniverse Apps. Use Omniverse Launcher to install one. 'Code' is the recommended app for developers."
)
sys.exit(0)
print("\nFound following Omniverse Apps:")
for i, slug in enumerate(apps):
name, root = apps[slug]
print(f"{i}: {name} ({slug}) at: '{root}'")
if args.app:
selected_app = args.app.lower()
if selected_app not in apps:
choices = ", ".join(apps.keys())
print(f"Passed app: '{selected_app}' is not found. Specify one of the following found Apps: {choices}")
sys.exit(0)
else:
selected_app = next((x for x in APP_PRIORITIES if x in apps), None)
if not selected_app:
selected_app = next(iter(apps))
print(f"\nSelected app: {selected_app}")
_, path = apps[selected_app]
if not os.path.exists(path):
print(f"Provided path doesn't exist: {path}")
else:
SCRIPT_ROOT = os.path.dirname(os.path.realpath(__file__))
create_link(f"{SCRIPT_ROOT}/../../app", path)
print("Success!")

swatchoncompany/fabricator-omniverse-extension/tools/packman/config.packman.xml

<config remotes="cloudfront">
<remote2 name="cloudfront">
<transport actions="download" protocol="https" packageLocation="d4i3qtqj3r0z5.cloudfront.net/${name}@${version}" />
</remote2>
</config>

swatchoncompany/fabricator-omniverse-extension/tools/packman/bootstrap/install_package.py

# Copyright 2019 NVIDIA CORPORATION
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import zipfile
import tempfile
import sys
import shutil
__author__ = "hfannar"
logging.basicConfig(level=logging.WARNING, format="%(message)s")
logger = logging.getLogger("install_package")
class TemporaryDirectory:
def __init__(self):
self.path = None
def __enter__(self):
self.path = tempfile.mkdtemp()
return self.path
def __exit__(self, type, value, traceback):
# Remove temporary data created
shutil.rmtree(self.path)
def install_package(package_src_path, package_dst_path):
with zipfile.ZipFile(
package_src_path, allowZip64=True
) as zip_file, TemporaryDirectory() as temp_dir:
zip_file.extractall(temp_dir)
# Recursively copy (temp_dir will be automatically cleaned up on exit)
try:
# Recursive copy is needed because both package name and version folder could be missing in
# target directory:
shutil.copytree(temp_dir, package_dst_path)
except OSError as exc:
logger.warning(
"Directory %s already present, packaged installation aborted" % package_dst_path
)
else:
logger.info("Package successfully installed to %s" % package_dst_path)
install_package(sys.argv[1], sys.argv[2])

swatchoncompany/fabricator-omniverse-extension/exts/fabricator.extension/config/extension.toml

[package]
# Semantic Versioning is used: https://semver.org/
version = "1.0.1"
# The title and description fields are primarily for displaying extension info in UI
title = "Fabricator Omniverse Extension"
description = "fabricator omniverse extension"
# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"
# Path (relative to the root) of changelog
changelog = "docs/CHANGELOG.md"
# URL of the extension source repository.
repository = "https://github.com/swatchoncompany/fabricator-omniverse-extension"
# One of categories for UI.
category = "Generative AI"
# Keywords for the extension
keywords = ["swatchon", "fabricator", "3d fabric"]
# Icon to show in the extension manager
icon = "data/icon.png"
# Preview to show in the extension manager
preview_image = "data/preview.png"
# Use omni.ui to build simple UI
[dependencies]
"omni.kit.uiapp" = {}
# Main python module this extension provides, it will be publicly available as "import fabricator.extension".
[[python.module]]
name = "fabricator.extension"
[[test]]
# Extra dependencies only to be used during test run
dependencies = [
"omni.kit.ui_test" # UI testing extension
]
[python.pipapi]
requirements = ["requests","urllib3==1.26.12","chardet==3.0.4","charset_normalizer==2.1.1"]
use_online_index = true

swatchoncompany/fabricator-omniverse-extension/exts/fabricator.extension/docs/CHANGELOG.md

[no changelog]

swatchoncompany/fabricator-omniverse-extension/exts/fabricator.extension/docs/README.md

# Fabricator Extension

swatchoncompany/fabricator-omniverse-extension/exts/fabricator.extension/fabricator/extension/auth_service.py

import requests
class AuthService:
SIGN_IN_URL = 'https://gateway.vmod.com/library/customers/sign_in'
AUTHENTICATE_URL = 'https://gateway.vmod.com/library/customers/me'
WORKSPACES_URL = 'https://gateway.vmod.com/library/members/all_workspaces'
def __init__(self):
self.access_token = None
def sign_in(self, email: str, password: str) -> None:
try:
response = requests.post(AuthService.SIGN_IN_URL, json = { "email": email, "password": password })
response.raise_for_status()
self.access_token = response.json()['token']
except Exception:
raise Exception('Invalid credentials')
def is_authenticated(self) -> bool:
try:
response = requests.get(AuthService.AUTHENTICATE_URL, headers = { 'Authorization': f'Bearer {self.access_token}' })
response.raise_for_status()
return True
except Exception:
return False
def get_workspaces(self):
try:
response = requests.get(AuthService.WORKSPACES_URL, headers = { 'Authorization': f'Bearer {self.access_token}' })
response.raise_for_status()
return list(map(lambda x: { "id": x["workspace"]["id"], "name": x["workspace"]["name"] }, filter(lambda x: x["status"] == "active", response.json())))
except Exception:
    raise Exception('Invalid workspaces')
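# Example usage (an illustrative sketch; the credentials are placeholders and a real VMOD account is required):
#   auth = AuthService()
#   auth.sign_in("user@example.com", "password")
#   if auth.is_authenticated():
#       for ws in auth.get_workspaces():
#           print(ws["id"], ws["name"])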

swatchoncompany/fabricator-omniverse-extension/exts/fabricator.extension/fabricator/extension/extension.py

import omni.ext
import omni.ui as ui
from .auth_service import AuthService
from .fabricator_service import FabricatorService
from .file_service import FileService
# Functions and vars are available to other extensions as usual in Python: `example.python_ext.some_public_function(x)`
def some_public_function(x: int):
print(f"[fabricator.extension] some_public_function was called with {x}")
return x ** x
# Any class derived from `omni.ext.IExt` in top level module (defined in `python.modules` of `extension.toml`) will be
# instantiated when extension gets enabled and `on_startup(ext_id)` will be called. Later when extension gets disabled
# on_shutdown() is called.
class FabricatorExtension(omni.ext.IExt):
DEFAULT_WIDTH = 600
PER_PAGE = 15
# ext_id is current extension id. It can be used with extension manager to query additional information, like where
# this extension is located on filesystem.
def on_startup(self, ext_id):
print("[fabricator.extension] fabricator extensiohn startup")
self._window = ui.Window("Fabricator", width=FabricatorExtension.DEFAULT_WIDTH)
self.file_service = FileService()
self.file_service.create_dir()
self.auth_service = AuthService()
self.signin_email_input = None
self.signin_password_input = None
self._signin_error_msg = None
self.workspaces = []
self.current_workspace_idx = 0
self.workspace_combobox = None
self.fabricator_service = FabricatorService()
self.page_search_input = None
self.page = 1
self.max_page = 1
self.render_signin_page()
def on_shutdown(self):
print("[fabricator.extension] fabricator extensiohn shutdown")
self.file_service.remove_dir()
def current_workspace_id(self):
return self.workspaces[self.current_workspace_idx]["id"]
def render_signin_page(self):
def signin_btn_clicked():
email = self.signin_email_input.model.get_value_as_string()
password = self.signin_password_input.model.get_value_as_string()
try:
self.auth_service.sign_in(email, password)
self.fabricator_service.set_access_token(self.auth_service.access_token)
self.workspaces = self.auth_service.get_workspaces()
self.render_assets_page()
except Exception as e:
print(f"[fabricator.extension] {e}")
self._signin_error_msg = "Invalid Email or Password"
self.render_signin_page()
with self._window.frame:
with ui.VStack(width=FabricatorExtension.DEFAULT_WIDTH, spacing=10):
ui.Label("Sign In", alignment=ui.Alignment.CENTER, height=80, style={"font_size": 40})
ui.Label("with your VMOD account", alignment=ui.Alignment.CENTER_TOP, height=10)
ui.Label("If you don't have an account, create one at visit vmod.xyz", alignment=ui.Alignment.CENTER_TOP, height=40)
if self._signin_error_msg is not None:
ui.Label(self._signin_error_msg, alignment=ui.Alignment.CENTER, height=10, style={"color": "red"})
ui.Label("Email:", alignment=ui.Alignment.CENTER_BOTTOM, height=10)
self.signin_email_input = ui.StringField(placeholder="Email", height=20, style={"margin_width": 40})
ui.Label("Password:", alignment=ui.Alignment.CENTER_BOTTOM, height=10)
self.signin_password_input = ui.StringField(password_mode=True, height=20, style={"margin_width": 40})
ui.Button("Sign In", height=80, style={"margin_width": 100, "margin_height": 20}, clicked_fn=signin_btn_clicked)
def workspace_selector_component(self):
self.workspace_combobox = ui.ComboBox(self.current_workspace_idx, *list(map(lambda ws: ws["name"], self.workspaces)), height=10).model
def workspace_changed(model, item):
self.current_workspace_idx = model.get_item_value_model().as_int
self.page = 1
self.render_assets_page()
self.workspace_combobox.add_item_changed_fn(workspace_changed)
def render_assets_page(self):
try:
print(f"[fabricator.extension] render_assets_page")
assets, count = self.fabricator_service.load_assets(self.current_workspace_id(), self.page, FabricatorExtension.PER_PAGE)
self.max_page = count
with self._window.frame:
with ui.VStack(spacing=10, width=FabricatorExtension.DEFAULT_WIDTH, height=400):
self.workspace_selector_component()
with ui.VGrid(column_width=100, row_height=120, column_count=5, row_count=3):
for asset in assets:
file_path = self.file_service.save_file(f'{asset["code"]}.usda', asset["asset_url"])
self.asset_component(asset, file_path)
self.page_search_component()
except Exception as e:
print(f"[fabricator.extension] {e}")
self.render_signin_page()
def library_page(self):
with self._window.frame:
with ui.VStack(spacing=10, width=FabricatorExtension.DEFAULT_WIDTH, height=400):
self.gnb_component()
ui.Label("library!!")
def asset_component(self, asset, file_path):
def drag_fn(asset):
            image_url = asset["thumbnail_url"]
with ui.VStack():
ui.Image(image_url, width=100, height=100)
return file_path
with ui.VStack():
asset_name = asset["code"]
image_url = asset["thumbnail_url"]
ui.ImageWithProvider(image_url, width=100, height=100, style={"margin_width": 5},
drag_fn=lambda: drag_fn(asset))
ui.Label(asset_name, alignment=ui.Alignment.CENTER_TOP, width=100, height=20, style={"font_size": 15}, elided_text=True)
def page_search_component(self):
curr_page = self.page
def search_btn_handler():
m = self.page_search_input.model
search_page = m.get_value_as_int()
# if search_page == curr_page:
# return
if search_page < 1 or search_page > self.max_page:
print("[fabricator.extension] Invalid search range")
m.set_value(curr_page)
return
self.page = search_page
self.render_assets_page()
WIDTH = 100
HEIGHT = 20
with ui.Placer(offset_x=FabricatorExtension.DEFAULT_WIDTH / 2 - WIDTH / 2):
with ui.HStack(width=WIDTH, height=HEIGHT):
self.page_search_input = ui.IntField()
self.page_search_input.model.set_value(curr_page)
ui.Label(f" / {self.max_page}")
ui.Button("search", clicked_fn=search_btn_handler)
| 7,073 | Python | 42.937888 | 142 | 0.61346 |
swatchoncompany/fabricator-omniverse-extension/exts/fabricator.extension/fabricator/extension/__init__.py | from .extension import *
| 25 | Python | 11.999994 | 24 | 0.76 |
swatchoncompany/fabricator-omniverse-extension/exts/fabricator.extension/fabricator/extension/file_service.py | import os, requests, shutil
from pathlib import Path
class FileService:
image_ext = "png"
model_ext = "usdz"
def __init__(self) -> None:
self.dir_path = (Path.home() / 'temp').as_posix()
def create_dir(self):
if not os.path.exists(self.dir_path):
os.makedirs(self.dir_path)
def remove_dir(self):
if os.path.exists(self.dir_path):
shutil.rmtree(self.dir_path)
def save_file(self, file_name: str, url: str) -> str:
file_path = os.path.join(self.dir_path, f'{file_name}')
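        # Reuse a previously downloaded copy if one exists (a simple on-disk cache keyed by file name)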
if os.path.exists(file_path):
return file_path
binary_data = requests.get(url).content
with open(file_path, 'wb') as file_object:
file_object.write(binary_data)
return file_path | 848 | Python | 27.299999 | 63 | 0.596698 |
swatchoncompany/fabricator-omniverse-extension/exts/fabricator.extension/fabricator/extension/mock_auth_service.py | class MockAuthService:
def __init__(self):
self.access_token = None
    def sign_in(self, email, password) -> str:
        # Mirror the real service: store the token so callers can read access_token afterwards
        self.access_token = "mock_token"
        return "mock_token"
def is_authenticated(self) -> bool:
return True
def get_workspaces(self):
return [{ "id": 1, "name": "workspace1" }, { "id": 2, "name": "workspace2" }] | 346 | Python | 27.916664 | 85 | 0.552023 |
swatchoncompany/fabricator-omniverse-extension/exts/fabricator.extension/fabricator/extension/mock_fabricator_service.py | from pathlib import Path
import json
import math
class MockFabricatorService:
mock_data_path = Path(__file__).parent / 'mock.json'
mock2_data_path = Path(__file__).parent / 'mock2.json'
def __init__(self):
self.data = []
self.data2 = []
with open(MockFabricatorService.mock_data_path, 'r') as file:
self.data = json.load(file)
with open(MockFabricatorService.mock2_data_path, 'r') as file:
self.data2 = json.load(file)
def set_access_token(self, access_token):
self.access_token = access_token
def load_assets(self, workspace_id, page, limit):
data = self.data if (workspace_id == 1) else self.data2
assets = data[((page - 1) * limit):(page * limit)]
count = math.ceil(len(data) / limit)
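        # e.g. 43 mock assets with limit=15 -> ceil(43 / 15) = 3 pages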
return [assets, count]
| 843 | Python | 28.103447 | 70 | 0.60261 |
swatchoncompany/fabricator-omniverse-extension/exts/fabricator.extension/fabricator/extension/fabricator_service.py | import requests
import math
class FabricatorService:
ASSETS_URL = 'https://gateway.vmod.com/library/texture_generations/omniverse/list'
def set_access_token(self, access_token):
self.access_token = access_token
def load_assets(self, workspace_id, page, limit):
headers = { 'Authorization': f'Bearer {self.access_token}', 'current-workspace-id': workspace_id }
if page < 1:
page = 1
offset = (page - 1) * limit
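        # e.g. page=3, limit=15 -> offset=30, i.e. the first two pages of results are skipped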
try:
response = requests.get(f'{FabricatorService.ASSETS_URL}?offset={offset}&limit={limit}', headers = headers)
response.raise_for_status()
body = response.json()
count = max(math.ceil(body['count'] / limit), 1)
return [list(map(lambda d: { 'id': d['id'], 'code': d['code'], 'asset_url': d['usdaUrl'], 'thumbnail_url': d['thumbnail']['blackSmallUrl']}, body['data'])), count]
        except Exception as e:
            raise Exception('Invalid credentials') from e
swatchoncompany/fabricator-omniverse-extension/exts/fabricator.extension/fabricator/extension/tests/__init__.py | from .test_hello_world import * | 31 | Python | 30.999969 | 31 | 0.774194 |
swatchoncompany/fabricator-omniverse-extension/exts/fabricator.extension/fabricator/extension/tests/test_hello_world.py | # NOTE:
# omni.kit.test - std python's unittest module with additional wrapping to add support for async/await tests
# For most things refer to unittest docs: https://docs.python.org/3/library/unittest.html
import omni.kit.test
# Extension for writing UI tests (simulate UI interaction)
import omni.kit.ui_test as ui_test
# Import extension python module we are testing with absolute import path, as if we are external user (other extension)
import fabricator.extension
# Having a test class derived from omni.kit.test.AsyncTestCase declared at the root of the module will make it auto-discoverable by omni.kit.test
class Test(omni.kit.test.AsyncTestCase):
# Before running each test
async def setUp(self):
pass
# After running each test
async def tearDown(self):
pass
# Actual test, notice it is "async" function, so "await" can be used if needed
async def test_hello_public_function(self):
        result = fabricator.extension.some_public_function(4)
self.assertEqual(result, 256)
async def test_window_button(self):
# Find a label in our window
label = ui_test.find("My Window//Frame/**/Label[*]")
# Find buttons in our window
add_button = ui_test.find("My Window//Frame/**/Button[*].text=='Add'")
reset_button = ui_test.find("My Window//Frame/**/Button[*].text=='Reset'")
# Click reset button
await reset_button.click()
self.assertEqual(label.widget.text, "empty")
await add_button.click()
self.assertEqual(label.widget.text, "count: 1")
await add_button.click()
self.assertEqual(label.widget.text, "count: 2")
| 1,668 | Python | 34.510638 | 142 | 0.681055 |
aniketrajnish/Omniverse-Shakespeare-Project/README.md | # Extension Project Template
This project was automatically generated.
- `app` - It is a folder link to the location of your *Omniverse Kit* based app.
- `exts` - It is a folder where you can add new extensions. It was automatically added to extension search path. (Extension Manager -> Gear Icon -> Extension Search Path).
Open this folder using Visual Studio Code. It will suggest installing a few extensions that will make the Python experience better.
Look for the "makra.omniverse.shakespeare.project" extension in the Extension Manager and enable it. Try applying changes to any Python files; they will hot-reload and you can observe the results immediately.
Alternatively, you can launch your app from the console with this folder added to the search path and your extension enabled, e.g.:
```
> app\omni.code.bat --ext-folder exts --enable makra.omniverse.shakespeare.project
```
# App Link Setup
If the `app` folder link doesn't exist or is broken, it can be created again. For a better developer experience it is recommended to create a folder link named `app` to the *Omniverse Kit* app installed from *Omniverse Launcher*. A convenience script is included.
Run:
```
> link_app.bat
```
If successful you should see `app` folder link in the root of this repo.
If multiple Omniverse apps are installed, the script will select the recommended one. Or you can explicitly pass an app:
```
> link_app.bat --app create
```
You can also just pass a path to create the link to:
```
> link_app.bat --path "C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4"
```
# Sharing Your Extensions
This folder is ready to be pushed to any git repository. Once pushed, a direct link to the git repository can be added to *Omniverse Kit* extension search paths.
Link might look like this: `git://github.com/[user]/[your_repo].git?branch=main&dir=exts`
Notice `exts` is the repo subfolder containing the extensions. More information can be found in the "Git URL as Extension Search Paths" section of the developer manual.
To add a link to your *Omniverse Kit* based app, go to: Extension Manager -> Gear Icon -> Extension Search Path
| 2,059 | Markdown | 37.867924 | 258 | 0.758621 |
aniketrajnish/Omniverse-Shakespeare-Project/tools/scripts/link_app.py | import argparse
import json
import os
import sys
import packmanapi
import urllib3
def find_omniverse_apps():
http = urllib3.PoolManager()
try:
r = http.request("GET", "http://127.0.0.1:33480/components")
except Exception as e:
print(f"Failed retrieving apps from an Omniverse Launcher, maybe it is not installed?\nError: {e}")
sys.exit(1)
apps = {}
for x in json.loads(r.data.decode("utf-8")):
latest = x.get("installedVersions", {}).get("latest", "")
if latest:
for s in x.get("settings", []):
if s.get("version", "") == latest:
root = s.get("launch", {}).get("root", "")
apps[x["slug"]] = (x["name"], root)
break
return apps
def create_link(src, dst):
print(f"Creating a link '{src}' -> '{dst}'")
packmanapi.link(src, dst)
APP_PRIORITIES = ["code", "create", "view"]
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Create folder link to Kit App installed from Omniverse Launcher")
parser.add_argument(
"--path",
help="Path to Kit App installed from Omniverse Launcher, e.g.: 'C:/Users/bob/AppData/Local/ov/pkg/create-2021.3.4'",
required=False,
)
parser.add_argument(
"--app", help="Name of Kit App installed from Omniverse Launcher, e.g.: 'code', 'create'", required=False
)
args = parser.parse_args()
path = args.path
if not path:
print("Path is not specified, looking for Omniverse Apps...")
apps = find_omniverse_apps()
if len(apps) == 0:
print(
"Can't find any Omniverse Apps. Use Omniverse Launcher to install one. 'Code' is the recommended app for developers."
)
sys.exit(0)
print("\nFound following Omniverse Apps:")
for i, slug in enumerate(apps):
name, root = apps[slug]
print(f"{i}: {name} ({slug}) at: '{root}'")
if args.app:
selected_app = args.app.lower()
if selected_app not in apps:
choices = ", ".join(apps.keys())
print(f"Passed app: '{selected_app}' is not found. Specify one of the following found Apps: {choices}")
sys.exit(0)
else:
selected_app = next((x for x in APP_PRIORITIES if x in apps), None)
if not selected_app:
selected_app = next(iter(apps))
print(f"\nSelected app: {selected_app}")
_, path = apps[selected_app]
if not os.path.exists(path):
print(f"Provided path doesn't exist: {path}")
else:
SCRIPT_ROOT = os.path.dirname(os.path.realpath(__file__))
create_link(f"{SCRIPT_ROOT}/../../app", path)
print("Success!")
| 2,814 | Python | 32.117647 | 133 | 0.562189 |
aniketrajnish/Omniverse-Shakespeare-Project/tools/packman/config.packman.xml | <config remotes="cloudfront">
<remote2 name="cloudfront">
<transport actions="download" protocol="https" packageLocation="d4i3qtqj3r0z5.cloudfront.net/${name}@${version}" />
</remote2>
</config>
| 211 | XML | 34.333328 | 123 | 0.691943 |
aniketrajnish/Omniverse-Shakespeare-Project/tools/packman/bootstrap/install_package.py | # Copyright 2019 NVIDIA CORPORATION
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import shutil
import sys
import tempfile
import zipfile
__author__ = "hfannar"
logging.basicConfig(level=logging.WARNING, format="%(message)s")
logger = logging.getLogger("install_package")
class TemporaryDirectory:
def __init__(self):
self.path = None
def __enter__(self):
self.path = tempfile.mkdtemp()
return self.path
def __exit__(self, type, value, traceback):
# Remove temporary data created
shutil.rmtree(self.path)
def install_package(package_src_path, package_dst_path):
with zipfile.ZipFile(package_src_path, allowZip64=True) as zip_file, TemporaryDirectory() as temp_dir:
zip_file.extractall(temp_dir)
# Recursively copy (temp_dir will be automatically cleaned up on exit)
try:
# Recursive copy is needed because both package name and version folder could be missing in
# target directory:
shutil.copytree(temp_dir, package_dst_path)
except OSError as exc:
logger.warning("Directory %s already present, packaged installation aborted" % package_dst_path)
else:
logger.info("Package successfully installed to %s" % package_dst_path)
install_package(sys.argv[1], sys.argv[2])
| 1,844 | Python | 33.166666 | 108 | 0.703362 |
aniketrajnish/Omniverse-Shakespeare-Project/exts/makra.omniverse.shakespeare.project/makra/omniverse/shakespeare/project/extension.py | import os
import omni.ext
import omni.ui as ui
from omni.kit.window.file_importer import get_file_importer
from .gemini import gemini
from .convai import convai
class ShakespeareProjectExtension(omni.ext.IExt):
def on_startup(self, ext_id):
print("[Shakespeare Project] Startup")
self.initUI()
def initUI(self):
self._window = ui.Window("Shakespeare Project", width=400, height=300)
with self._window.frame:
with ui.VStack():
self.selectImgBtn = ui.Button("Select Image", clicked_fn=self.selectImage, width=100, height=30)
ui.Spacer(height=10)
self.imgWidget = ui.Image(width=320, height=180, fill_policy=ui.FillPolicy.PRESERVE_ASPECT_FIT)
def selectImage(self):
fileImporter = get_file_importer()
fileImporter.show_window(
title="Import File",
import_handler=self.onFileSelected,
file_extension_types=[
("jpg", "JPEG image"),
("jpeg", "JPEG image"),
("png", "PNG image"),
("webp", "WebP image"),
("heic", "HEIC image"),
("heif", "HEIF image")
],
import_button_label="Select"
)
def onFileSelected(self, filename, dirname, selections):
if selections:
filepath = os.path.join(dirname, selections[0])
print(f"Selected file: {filepath}")
self.processImage(filepath)
convai.appendToCharBackstory(self.geminiResponse)
def processImage(self, imgPath):
self.imgWidget.source_url = f"file:///{imgPath.replace(os.sep, '/')}"
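        # e.g. on Windows, "C:\imgs\bard.png" becomes "file:///C:/imgs/bard.png"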
self.geminiResponse = gemini.getGeminiResponse(imgPath)
print(f"Gemini Response: {self.geminiResponse}")
def on_shutdown(self):
print("[Shakespeare Project] Shutdown")
if self._window:
self._window.destroy()
| 1,935 | Python | 35.528301 | 112 | 0.595349 |
aniketrajnish/Omniverse-Shakespeare-Project/exts/makra.omniverse.shakespeare.project/makra/omniverse/shakespeare/project/__init__.py | from .extension import *
| 25 | Python | 11.999994 | 24 | 0.76 |
aniketrajnish/Omniverse-Shakespeare-Project/exts/makra.omniverse.shakespeare.project/makra/omniverse/shakespeare/project/convai/convai.py | import os
import requests
import json
import configparser
def loadConvaiConfig():
configPath = os.path.join(os.path.dirname(__file__), 'convai.env')
if not os.path.exists(configPath):
raise FileNotFoundError("Convai configuration file not found.")
config = configparser.ConfigParser()
config.read(configPath)
try:
convaiConfig = {
'apiKey': config.get('CONVAI', 'API_KEY'),
'characterId': config.get('CONVAI', 'CHARACTER_ID'),
'channel': config.get('CONVAI', 'CHANNEL'),
'actions': config.get('CONVAI', 'ACTIONS'),
'sessionId': config.get('CONVAI', 'SESSION_ID'),
'baseBackstory' : config.get('CONVAI', 'BASE_BACKSTORY').replace("\\n", "\n")
}
except configparser.NoOptionError as e:
raise KeyError(f"Missing configuration key in convai.env: {e}")
return convaiConfig
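# A minimal sketch of the expected convai.env layout (placeholder values only,
# not real credentials; the section and keys match the ones read above):
#
# [CONVAI]
# API_KEY = <your-convai-api-key>
# CHARACTER_ID = <your-character-id>
# CHANNEL = <grpc-channel-address>
# ACTIONS = Dances, Jumps
# SESSION_ID = <last-session-id-or-empty>
# BASE_BACKSTORY = You are a helpful character.\nSpeak politely.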
# def fetchCurrCharBackstory():
# config = loadConvaiConfig()
# url = "https://api.convai.com/character/get"
# payload = json.dumps({
# "charID": config['characterId']
# })
# headers = {
# 'CONVAI-API-KEY': config['apiKey'],
# 'Content-Type': 'application/json'
# }
# response = requests.post(url, headers=headers, data=payload)
# if response.status_code == 200:
# data = response.json()
# return data.get('backstory', "No backstory found.")
# else:
# print(f"Failed to fetch character details: {response.status_code} - {response.text}")
# return None
def updateCharBackstory(newBackstory):
config = loadConvaiConfig()
url = "https://api.convai.com/character/update"
payload = json.dumps({
"charID": config['characterId'],
"backstory": newBackstory
})
headers = {
'CONVAI-API-KEY': config['apiKey'],
'Content-Type': 'application/json'
}
response = requests.post(url, headers=headers, data=payload)
if response.status_code == 200:
print("Character updated successfully.")
else:
print(f"Failed to update character: {response.status_code} - {response.text}")
def appendToCharBackstory(backstoryUpdate):
config = loadConvaiConfig()
currBackstory = config['baseBackstory']
if currBackstory:
newBackstory = f"{currBackstory}\n{backstoryUpdate}"
updateCharBackstory(newBackstory) | 2,409 | Python | 33.428571 | 95 | 0.62308 |
aniketrajnish/Omniverse-Shakespeare-Project/exts/makra.omniverse.shakespeare.project/makra/omniverse/shakespeare/project/convai/extension.py | import math, os
import asyncio
import numpy as np
import omni.ext
import carb.events
import omni.ui as ui
import configparser
import pyaudio
import grpc
from .rpc import service_pb2 as convai_service_msg
from .rpc import service_pb2_grpc as convai_service
from .convai_audio_player import ConvaiAudioPlayer
from typing import Generator
import io
from pydub import AudioSegment
import threading
import traceback
import time
from collections import deque
import random
from functools import partial
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 12000
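# At these settings one CHUNK read is 1024 frames at 12 kHz, 16-bit mono:
# roughly 85 ms of audio (2048 bytes) per mic read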
def log(text: str, warning: bool =False):
print(f"[convai] {'[Warning]' if warning else ''} {text}")
class ConvaiExtension(omni.ext.IExt):
WINDOW_NAME = "Convai"
MENU_PATH = f"Window/{WINDOW_NAME}"
def on_startup(self, ext_id: str):
self.IsCapturingAudio = False
self.on_new_frame_sub = None
self.channel_address = None
self.channel = None
self.SessionID = None
self.channelState = grpc.ChannelConnectivity.IDLE
self.client = None
self.ConvaiGRPCGetResponseProxy = None
self.PyAudio = pyaudio.PyAudio()
self.stream = None
self.Tick = False
self.TickThread = None
self.ConvaiAudioPlayer = ConvaiAudioPlayer(self._on_start_talk_callback, self._on_stop_talk_callback)
self.LastReadyTranscription = ""
self.ResponseTextBuffer = ""
self.OldCharacterID = ""
self.response_UI_Label_text = ""
self.action_UI_Label_text = "<Action>"
self.transcription_UI_Label_text = ""
        # self.response_UI_Label_text = "<Response will appear here>"
self.response_UI_Label_text = "" # Turn off response text due to unknown crash
self.StartTalking_Btn_text = "Start Talking"
self.StartTalking_Btn_state = True
self.UI_Lock = threading.Lock()
self.Mic_Lock = threading.Lock()
self.UI_update_counter = 0
self.on_new_update_sub = None
ui.Workspace.set_show_window_fn(ConvaiExtension.WINDOW_NAME, partial(self.show_window, None))
ui.Workspace.show_window(ConvaiExtension.WINDOW_NAME)
# # Put the new menu
        self._menu = None  # ensure the attribute exists for on_shutdown even if no editor menu is available
        editor_menu = omni.kit.ui.get_editor_menu()
if editor_menu:
self._menu = editor_menu.add_item(
ConvaiExtension.MENU_PATH, self.show_window, toggle=True, value=True
)
# self.show_window(None, True)
self.read_channel_address_from_config()
self.create_channel()
log("ConvaiExtension started")
def setup_UI(self):
self._window = ui.Window(ConvaiExtension.WINDOW_NAME, width=300, height=300)
self._window.set_visibility_changed_fn(self._visiblity_changed_fn)
with self._window.frame:
with ui.VStack():
with ui.HStack(height = ui.Length(30)):
l = ui.Label("Convai API key")
self.APIKey_input_UI = ui.StringField()
ui.Spacer(height=5)
with ui.HStack(height = ui.Length(30)):
l = ui.Label("Character ID")
self.CharID_input_UI = ui.StringField()
ui.Spacer(height=5)
# with ui.HStack(height = ui.Length(30)):
# l = ui.Label("Session(Leave empty for 1st time)")
# self.session_input_UI = ui.StringField()
# ui.Spacer(height=5)
with ui.HStack(height = ui.Length(30)):
l = ui.Label("Comma seperated actions")
self.actions_input_UI = ui.StringField()
self.actions_input_UI.set_tooltip("e.g. Dances, Jumps")
ui.Spacer(height=5)
# self.response_UI_Label = ui.Label("", height = ui.Length(60), word_wrap = True)
# self.response_UI_Label.alignment = ui.Alignment.CENTER
self.action_UI_Label = ui.Label("<Action>", height = ui.Length(30), word_wrap = False)
self.action_UI_Label.alignment = ui.Alignment.CENTER
ui.Spacer(height=5)
self.StartTalking_Btn = ui.Button("Start Talking", clicked_fn=lambda: self.on_start_talking_btn_click(), height = ui.Length(30))
self.transcription_UI_Label = ui.Label("", height = ui.Length(60), word_wrap = True)
self.transcription_UI_Label.alignment = ui.Alignment.CENTER
if self.on_new_update_sub is None:
self.on_new_update_sub = (
omni.kit.app.get_app()
.get_update_event_stream()
.create_subscription_to_pop(self._on_UI_update_event, name="convai new UI update")
)
self.read_UI_from_config()
return self._window
def _on_UI_update_event(self, e):
        if self.UI_update_counter > 1000:
self.UI_update_counter = 0
self.UI_update_counter += 1
if self._window is None:
return
if self.UI_Lock.locked():
log("UI_Lock is locked", 1)
return
with self.UI_Lock:
# self.response_UI_Label.text = str(self.response_UI_Label_text)
self.action_UI_Label.text = str(self.action_UI_Label_text)
self.transcription_UI_Label.text = str(self.transcription_UI_Label_text)
self.StartTalking_Btn.text = self.StartTalking_Btn_text
self.StartTalking_Btn.enabled = self.StartTalking_Btn_state
def start_tick(self):
if self.Tick:
log("Tick already started", 1)
return
self.Tick = True
self.TickThread = threading.Thread(target=self._on_tick)
self.TickThread.start()
def stop_tick(self):
if self.TickThread and self.Tick:
self.Tick = False
self.TickThread.join()
def read_channel_address_from_config(self):
config = configparser.ConfigParser()
config.read(os.path.join(__location__, 'convai.env'))
self.channel_address = config.get("CONVAI", "CHANNEL")
def read_UI_from_config(self):
config = configparser.ConfigParser()
config.read(os.path.join(__location__, 'convai.env'))
api_key = config.get("CONVAI", "API_KEY")
self.APIKey_input_UI.model.set_value(api_key)
character_id = config.get("CONVAI", "CHARACTER_ID")
self.CharID_input_UI.model.set_value(character_id)
actions_text = config.get("CONVAI", "ACTIONS")
self.actions_input_UI.model.set_value(actions_text)
def save_config(self):
config = configparser.ConfigParser()
config.read(os.path.join(__location__, 'convai.env'))
config.set("CONVAI", "API_KEY", self.APIKey_input_UI.model.get_value_as_string())
config.set("CONVAI", "CHARACTER_ID", self.CharID_input_UI.model.get_value_as_string())
config.set("CONVAI", "ACTIONS", self.actions_input_UI.model.get_value_as_string())
# config.set("CONVAI", "CHANNEL", self.channel_address)
with open(os.path.join(__location__, 'convai.env'), 'w') as file:
config.write(file)
def create_channel(self):
if (self.channel):
log("gRPC channel already created")
return
self.channel = grpc.secure_channel(self.channel_address, grpc.ssl_channel_credentials())
# self.channel.subscribe(self.on_channel_state_change, True)
log("Created gRPC channel")
def close_channel(self):
if (self.channel):
self.channel.close()
self.channel = None
log("close_channel - Closed gRPC channel")
else:
log("close_channel - gRPC channel already closed")
def on_start_talking_btn_click(self):
if (self.IsCapturingAudio):
# Change UI
with self.UI_Lock:
self.StartTalking_Btn_text = "Processing..."
# self.StartTalking_Btn_text = "Start Talking"
self.StartTalking_Btn_state = False
# Reset response UI text
self.response_UI_Label_text = ""
# Do one last mic read
self.read_mic_and_send_to_grpc(True)
# time.sleep(0.01)
# Stop Mic
self.stop_mic()
else:
# Reset Session ID if Character ID changes
if self.OldCharacterID != self.CharID_input_UI.model.get_value_as_string():
self.OldCharacterID = self.CharID_input_UI.model.get_value_as_string()
self.SessionID = ""
with self.UI_Lock:
# Reset transcription UI text
self.transcription_UI_Label_text = ""
self.LastReadyTranscription = ""
# Change Btn text
self.StartTalking_Btn_text = "Stop"
# Open Mic stream
self.start_mic()
# Stop any on-going audio
self.ConvaiAudioPlayer.stop()
# Save API key, character ID and session ID
self.save_config()
# Create gRPC stream
self.ConvaiGRPCGetResponseProxy = ConvaiGRPCGetResponseProxy(self)
def on_shutdown(self):
self.clean_grpc_stream()
self.close_channel()
self.stop_tick()
if self._menu:
self._menu = None
if self._window:
self._window.destroy()
self._window = None
# Deregister the function that shows the window from omni.ui
ui.Workspace.set_show_window_fn(ConvaiExtension.WINDOW_NAME, None)
log("ConvaiExtension shutdown")
def start_mic(self):
        if self.IsCapturingAudio:
log("start_mic - mic is already capturing audio", 1)
return
self.stream = self.PyAudio.open(format=FORMAT,
channels=CHANNELS,
rate=RATE,
input=True,
frames_per_buffer=CHUNK)
self.IsCapturingAudio = True
self.start_tick()
log("start_mic - Started Recording")
def stop_mic(self):
        if not self.IsCapturingAudio:
log("stop_mic - mic has not started yet", 1)
return
self.stop_tick()
if self.stream:
self.stream.stop_stream()
self.stream.close()
else:
log("stop_mic - could not close mic stream since it is None", 1)
self.IsCapturingAudio = False
log("stop_mic - Stopped Recording")
def clean_grpc_stream(self):
if self.ConvaiGRPCGetResponseProxy:
self.ConvaiGRPCGetResponseProxy.Parent = None
del self.ConvaiGRPCGetResponseProxy
self.ConvaiGRPCGetResponseProxy = None
# self.close_channel()
def on_transcription_received(self, Transcription: str, IsTranscriptionReady: bool, IsFinal: bool):
'''
Called when user transcription is received
'''
        with self.UI_Lock:
            self.transcription_UI_Label_text = self.LastReadyTranscription + " " + Transcription
if IsTranscriptionReady:
self.LastReadyTranscription = self.LastReadyTranscription + " " + Transcription
def on_data_received(self, ReceivedText: str, ReceivedAudio: bytes, SampleRate: int, IsFinal: bool):
'''
Called when new text and/or Audio data is received
'''
self.ResponseTextBuffer += str(ReceivedText)
if IsFinal:
with self.UI_Lock:
self.response_UI_Label_text = self.ResponseTextBuffer
self.transcription_UI_Label_text = self.ResponseTextBuffer
self.ResponseTextBuffer = ""
self.ConvaiAudioPlayer.append_to_stream(ReceivedAudio)
return
def on_actions_received(self, Action: str):
'''
Called when actions are received
'''
# Action.replace(".", "")
        with self.UI_Lock:
            for InputAction in self.parse_actions():
                # log (f"on_actions_received: {Action} - {InputAction} - {InputAction.find(Action)}")
                if Action.find(InputAction) >= 0:
                    self.action_UI_Label_text = InputAction
                    self.fire_event(InputAction)
                    return
            self.action_UI_Label_text = "None"
def on_session_ID_received(self, SessionID: str):
'''
Called when new SessionID is received
'''
self.SessionID = SessionID
def on_finish(self):
'''
Called when the response stream is done
'''
self.ConvaiGRPCGetResponseProxy = None
with self.UI_Lock:
self.StartTalking_Btn_text = "Start Talking"
self.StartTalking_Btn_state = True
self.clean_grpc_stream()
log("Received on_finish")
def on_failure(self, ErrorMessage: str):
'''
Called when there is an unsuccessful response
'''
log(f"on_failure called with message: {ErrorMessage}", 1)
with self.UI_Lock:
self.transcription_UI_Label_text = "ERROR: Please double check API key and the character ID - Send logs to [email protected] for further assistance."
self.stop_mic()
self.on_finish()
def _on_tick(self):
while self.Tick:
time.sleep(0.1)
            if not self.IsCapturingAudio or self.ConvaiGRPCGetResponseProxy is None:
continue
self.read_mic_and_send_to_grpc(False)
def _on_start_talk_callback(self):
self.fire_event("start")
log("Character Started Talking")
def _on_stop_talk_callback(self):
self.fire_event("stop")
log("Character Stopped Talking")
def read_mic_and_send_to_grpc(self, LastWrite):
with self.Mic_Lock:
if self.stream:
data = self.stream.read(CHUNK)
else:
log("read_mic_and_send_to_grpc - could not read mic stream since it is none", 1)
data = bytes()
if self.ConvaiGRPCGetResponseProxy:
self.ConvaiGRPCGetResponseProxy.write_audio_data_to_send(data, LastWrite)
else:
log("read_mic_and_send_to_grpc - ConvaiGRPCGetResponseProxy is not valid", 1)
def fire_event(self, event_name):
def registered_event_name(event_name):
"""Returns the internal name used for the given custom event name"""
n = "omni.graph.action." + event_name
return carb.events.type_from_string(n)
reg_event_name = registered_event_name(event_name)
message_bus = omni.kit.app.get_app().get_message_bus_event_stream()
message_bus.push(reg_event_name, payload={})
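        # e.g. fire_event("Dances") pushes the event "omni.graph.action.Dances" onto the app
        # message bus, where a graph node subscribed to that custom event name can react to it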
def parse_actions(self):
actions = ["None"] + self.actions_input_UI.model.get_value_as_string().split(',')
actions = [a.lstrip(" ").rstrip(" ") for a in actions]
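        # e.g. an input of "Dances, Jumps" yields ["None", "Dances", "Jumps"]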
return actions
def show_window(self, menu, value):
# with self.UI_Lock:
if value:
self.setup_UI()
self._window.set_visibility_changed_fn(self._visiblity_changed_fn)
else:
if self._window:
self._window.visible = False
def _visiblity_changed_fn(self, visible):
# with self.UI_Lock:
# Called when the user pressed "X"
self._set_menu(visible)
if not visible:
# Destroy the window, since we are creating new window
# in show_window
asyncio.ensure_future(self._destroy_window_async())
def _set_menu(self, value):
"""Set the menu to create this window on and off"""
editor_menu = omni.kit.ui.get_editor_menu()
if editor_menu:
editor_menu.set_value(ConvaiExtension.MENU_PATH, value)
async def _destroy_window_async(self):
# with self.UI_Lock:
# wait one frame, this is due to the one frame defer
# in Window::_moveToMainOSWindow()
await omni.kit.app.get_app().next_update_async()
if self._window:
self._window.destroy()
self._window = None
class ConvaiGRPCGetResponseProxy:
def __init__(self, Parent: ConvaiExtension):
self.Parent = Parent
self.AudioBuffer = deque(maxlen=4096*2)
self.InformOnDataReceived = False
self.LastWriteReceived = False
self.client = None
self.NumberOfAudioBytesSent = 0
self.call = None
self._write_task = None
self._read_task = None
# self._main_task = asyncio.ensure_future(self.activate())
self.activate()
log("ConvaiGRPCGetResponseProxy constructor")
def activate(self):
# Validate API key
if (len(self.Parent.APIKey_input_UI.model.get_value_as_string()) == 0):
self.Parent.on_failure("API key is empty")
return
# Validate Character ID
if (len(self.Parent.CharID_input_UI.model.get_value_as_string()) == 0):
self.Parent.on_failure("Character ID is empty")
return
# Validate Channel
if self.Parent.channel is None:
log("grpc - self.Parent.channel is None", 1)
self.Parent.on_failure("gRPC channel was not created")
return
# Create the stub
self.client = convai_service.ConvaiServiceStub(self.Parent.channel)
threading.Thread(target=self.init_stream).start()
def init_stream(self):
log("grpc - stream initialized")
try:
for response in self.client.GetResponse(self.create_getGetResponseRequests()):
if response.HasField("audio_response"):
log("gRPC - audio_response: {} {} {}".format(response.audio_response.audio_config, response.audio_response.text_data, response.audio_response.end_of_response))
log("gRPC - session_id: {}".format(response.session_id))
self.Parent.on_session_ID_received(response.session_id)
self.Parent.on_data_received(
response.audio_response.text_data,
response.audio_response.audio_data,
response.audio_response.audio_config.sample_rate_hertz,
response.audio_response.end_of_response)
elif response.HasField("action_response"):
log(f"gRPC - action_response: {response.action_response.action}")
self.Parent.on_actions_received(response.action_response.action)
elif response.HasField("user_query"):
log(f"gRPC - user_query: {response.user_query}")
self.Parent.on_transcription_received(response.user_query.text_data, response.user_query.is_final, response.user_query.end_of_response)
else:
log("Stream Message: {}".format(response))
time.sleep(0.1)
except Exception as e:
if 'response' in locals() and response is not None and response.HasField("audio_response"):
self.Parent.on_failure(f"gRPC - Exception caught in loop: {str(e)} - Stream Message: {response}")
else:
self.Parent.on_failure(f"gRPC - Exception caught in loop: {str(e)}")
traceback.print_exc()
return
self.Parent.on_finish()
def create_initial_GetResponseRequest(self)-> convai_service_msg.GetResponseRequest:
action_config = convai_service_msg.ActionConfig(
classification = 'singlestep',
context_level = 1
)
action_config.actions[:] = self.Parent.parse_actions()
action_config.objects.append(
convai_service_msg.ActionConfig.Object(
name = "dummy",
description = "A dummy object."
)
)
log(f"gRPC - actions parsed: {action_config.actions}")
action_config.characters.append(
convai_service_msg.ActionConfig.Character(
name = "User",
bio = "Person playing the game and asking questions."
)
)
get_response_config = convai_service_msg.GetResponseRequest.GetResponseConfig(
character_id = self.Parent.CharID_input_UI.model.get_value_as_string(),
api_key = self.Parent.APIKey_input_UI.model.get_value_as_string(),
audio_config = convai_service_msg.AudioConfig(
sample_rate_hertz = RATE
),
action_config = action_config
)
if self.Parent.SessionID and self.Parent.SessionID != "":
get_response_config.session_id = self.Parent.SessionID
return convai_service_msg.GetResponseRequest(get_response_config = get_response_config)
def create_getGetResponseRequests(self)-> Generator[convai_service_msg.GetResponseRequest, None, None]:
req = self.create_initial_GetResponseRequest()
yield req
# for i in range(0, 10):
        while True:
IsThisTheFinalWrite = False
GetResponseData = None
if (0): # check if this is a text request
pass
else:
data, IsThisTheFinalWrite = self.consume_from_audio_buffer()
if len(data) == 0 and IsThisTheFinalWrite == False:
time.sleep(0.05)
continue
# Load the audio data to the request
self.NumberOfAudioBytesSent += len(data)
# if len(data):
# log(f"len(data) = {len(data)}")
GetResponseData = convai_service_msg.GetResponseRequest.GetResponseData(audio_data = data)
# Prepare the request
req = convai_service_msg.GetResponseRequest(get_response_data = GetResponseData)
yield req
if IsThisTheFinalWrite:
log(f"gRPC - Done Writing - {self.NumberOfAudioBytesSent} audio bytes sent")
break
time.sleep(0.1)
def write_audio_data_to_send(self, Data: bytes, LastWrite: bool):
self.AudioBuffer.append(Data)
if LastWrite:
self.LastWriteReceived = True
log(f"gRPC LastWriteReceived")
# if self.InformOnDataReceived:
# # Inform of new data to send
# self._write_task = asyncio.ensure_future(self.write_stream())
# # Reset
# self.InformOnDataReceived = False
def finish_writing(self):
self.write_audio_data_to_send(bytes(), True)
def consume_from_audio_buffer(self):
Length = len(self.AudioBuffer)
IsThisTheFinalWrite = False
data = bytes()
        if Length:
            # append() adds to the right, so consume from the left to keep the audio in FIFO order
            data = self.AudioBuffer.popleft()
# self.AudioBuffer = bytes()
if self.LastWriteReceived and Length == 0:
IsThisTheFinalWrite = True
else:
IsThisTheFinalWrite = False
if IsThisTheFinalWrite:
log(f"gRPC Consuming last mic write")
return data, IsThisTheFinalWrite
def __del__(self):
self.Parent = None
# if self._main_task:
# self._main_task.cancel()
# if self._write_task:
# self._write_task.cancel()
# if self._read_task:
# self._read_task.cancel()
# if self.call:
# self.call.cancel()
log("ConvaiGRPCGetResponseProxy Destructor")
| 23,850 | Python | 36.4427 | 179 | 0.584151 |
aniketrajnish/Omniverse-Shakespeare-Project/exts/makra.omniverse.shakespeare.project/makra/omniverse/shakespeare/project/convai/__init__.py | from .extension import *
| 25 | Python | 11.999994 | 24 | 0.76 |
aniketrajnish/Omniverse-Shakespeare-Project/exts/makra.omniverse.shakespeare.project/makra/omniverse/shakespeare/project/convai/convai_audio_player.py | # from .extension import ConvaiExtension, log
# from test import ConvaiExtension, log
import pyaudio
from pydub import AudioSegment
import io
class ConvaiAudioPlayer:
def __init__(self, start_taking_callback, stop_talking_callback):
self.start_talking_callback = start_taking_callback
self.stop_talking_callback = stop_talking_callback
self.AudioSegment = None
self.pa = pyaudio.PyAudio()
self.pa_stream = None
self.IsPlaying = False
def append_to_stream(self, data: bytes):
segment = AudioSegment.from_wav(io.BytesIO(data)).fade_in(100).fade_out(100)
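        # the 100 ms fade-in/fade-out smooths the seams between consecutive streamed chunks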
if self.AudioSegment is None:
self.AudioSegment = segment
else:
self.AudioSegment._data += segment._data
self.play()
def play(self):
if self.IsPlaying:
return
print("ConvaiAudioPlayer - Started playing")
self.start_talking_callback()
self.pa_stream = self.pa.open(
format=pyaudio.get_format_from_width(self.AudioSegment.sample_width),
channels=self.AudioSegment.channels,
rate=self.AudioSegment.frame_rate,
output=True,
stream_callback=self.stream_callback
)
self.IsPlaying = True
def pause(self):
'''
Pause playing
'''
self.IsPlaying = False
def stop(self):
'''
Pause playing and clear audio
'''
self.pause()
self.AudioSegment = None
def stream_callback(self, in_data, frame_count, time_info, status_flags):
if not self.IsPlaying:
frames = bytes()
else:
frames = self.consume_frames(frame_count)
if self.AudioSegment and len(frames) < frame_count*self.AudioSegment.frame_width:
print("ConvaiAudioPlayer - Stopped playing")
self.stop_talking_callback()
self.IsPlaying = False
return frames, pyaudio.paComplete
else:
return frames, pyaudio.paContinue
def consume_frames(self, count: int):
if self.AudioSegment is None:
return bytes()
FrameEnd = self.AudioSegment.frame_width*count
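        # e.g. 16-bit mono audio has a frame_width of 2, so count=1024 frames -> FrameEnd=2048 bytes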
if FrameEnd > len(self.AudioSegment._data):
return bytes()
FramesToReturn = self.AudioSegment._data[0:FrameEnd]
if FrameEnd == len(self.AudioSegment._data):
self.AudioSegment._data = bytes()
else:
self.AudioSegment._data = self.AudioSegment._data[FrameEnd:]
# print("self.AudioSegment._data = self.AudioSegment._data[FrameEnd:]")
return FramesToReturn
if __name__ == '__main__':
import time
import pyaudio
import grpc
from rpc import service_pb2 as convai_service_msg
from rpc import service_pb2_grpc as convai_service
from typing import Generator
import io
from pydub import AudioSegment
import configparser
CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 16000
RECORD_SECONDS = 3
p = pyaudio.PyAudio()
stream = p.open(format=FORMAT,
channels=CHANNELS,
rate=RATE,
input=True,
frames_per_buffer=CHUNK)
    audio_player = ConvaiAudioPlayer(lambda: None, lambda: None)  # the constructor expects start/stop talking callbacks
def start_mic():
global stream
        stream = p.open(format=FORMAT,
                        channels=CHANNELS,
                        rate=RATE,
                        input=True,
                        frames_per_buffer=CHUNK)
print("start_mic - Started Recording")
def stop_mic():
global stream
if stream:
stream.stop_stream()
stream.close()
else:
print("stop_mic - could not close mic stream since it is None")
return
print("stop_mic - Stopped Recording")
def getGetResponseRequests(api_key: str, character_id: str, session_id: str = "") -> Generator[convai_service_msg.GetResponseRequest, None, None]:
action_config = convai_service_msg.ActionConfig(
classification = 'multistep',
context_level = 1
)
action_config.actions[:] = ["fetch", "jump", "dance", "swim"]
action_config.objects.append(
convai_service_msg.ActionConfig.Object(
name = "ball",
description = "A round object that can bounce around."
)
)
action_config.objects.append(
convai_service_msg.ActionConfig.Object(
name = "water",
description = "Liquid found in oceans, seas and rivers that you can swim in. You can also drink it."
)
)
action_config.characters.append(
convai_service_msg.ActionConfig.Character(
name = "User",
bio = "Person playing the game and asking questions."
)
)
action_config.characters.append(
convai_service_msg.ActionConfig.Character(
name = "Learno",
bio = "A medieval farmer from a small village."
)
)
get_response_config = convai_service_msg.GetResponseRequest.GetResponseConfig(
character_id = character_id,
api_key = api_key,
audio_config = convai_service_msg.AudioConfig(
sample_rate_hertz = 16000
),
action_config = action_config
)
# session_id = "f50b7bf00ad50f5c2c22065965948c16"
if session_id != "":
get_response_config.session_id = session_id
yield convai_service_msg.GetResponseRequest(
get_response_config = get_response_config
)
for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
data = stream.read(CHUNK)
yield convai_service_msg.GetResponseRequest(
get_response_data = convai_service_msg.GetResponseRequest.GetResponseData(
audio_data = data
)
)
stream.stop_stream()
stream.close()
print("* recording stopped")
config = configparser.ConfigParser()
config.read("exts\convai\convai\convai.env")
api_key = config.get("CONVAI", "API_KEY")
character_id = config.get("CONVAI", "CHARACTER_ID")
channel_address = config.get("CONVAI", "CHANNEL")
channel = grpc.secure_channel(channel_address, grpc.ssl_channel_credentials())
client = convai_service.ConvaiServiceStub(channel)
for response in client.GetResponse(getGetResponseRequests(api_key, character_id)):
if response.HasField("audio_response"):
print("Stream Message: {} {} {}".format(response.session_id, response.audio_response.audio_config, response.audio_response.text_data))
audio_player.append_to_stream(response.audio_response.audio_data)
else:
print("Stream Message: {}".format(response))
p.terminate()
# start_mic()
time.sleep(10)
# while 1:
# audio_player = ConvaiAudioPlayer(None)
# # data = stream.read(CHUNK)
# # _, data = scipy.io.wavfile.read("F:/Work/Convai/Tests/Welcome.wav")
# f = open("F:/Work/Convai/Tests/Welcome.wav", "rb")
# data = f.read()
# print(type(data))
# audio_player.append_to_stream(data)
# time.sleep(0.2)
# break
# # stop_mic()
# time.sleep(2)
# with keyboard.Listener(on_press=on_press,on_release=on_release):
# while(1):
# time.sleep(0.1)
# continue
# print("running") | 7,714 | Python | 32.986784 | 150 | 0.577651 |
aniketrajnish/Omniverse-Shakespeare-Project/exts/makra.omniverse.shakespeare.project/makra/omniverse/shakespeare/project/convai/rpc/service_pb2_grpc.py | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
from . import service_pb2 as service__pb2
class ConvaiServiceStub(object):
"""Missing associated documentation comment in .proto file."""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.Hello = channel.unary_unary(
'/service.ConvaiService/Hello',
request_serializer=service__pb2.HelloRequest.SerializeToString,
response_deserializer=service__pb2.HelloResponse.FromString,
)
self.HelloStream = channel.stream_stream(
'/service.ConvaiService/HelloStream',
request_serializer=service__pb2.HelloRequest.SerializeToString,
response_deserializer=service__pb2.HelloResponse.FromString,
)
self.SpeechToText = channel.stream_stream(
'/service.ConvaiService/SpeechToText',
request_serializer=service__pb2.STTRequest.SerializeToString,
response_deserializer=service__pb2.STTResponse.FromString,
)
self.GetResponse = channel.stream_stream(
'/service.ConvaiService/GetResponse',
request_serializer=service__pb2.GetResponseRequest.SerializeToString,
response_deserializer=service__pb2.GetResponseResponse.FromString,
)
self.GetResponseSingle = channel.unary_stream(
'/service.ConvaiService/GetResponseSingle',
request_serializer=service__pb2.GetResponseRequestSingle.SerializeToString,
response_deserializer=service__pb2.GetResponseResponse.FromString,
)
class ConvaiServiceServicer(object):
"""Missing associated documentation comment in .proto file."""
def Hello(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def HelloStream(self, request_iterator, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def SpeechToText(self, request_iterator, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def GetResponse(self, request_iterator, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def GetResponseSingle(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_ConvaiServiceServicer_to_server(servicer, server):
rpc_method_handlers = {
'Hello': grpc.unary_unary_rpc_method_handler(
servicer.Hello,
request_deserializer=service__pb2.HelloRequest.FromString,
response_serializer=service__pb2.HelloResponse.SerializeToString,
),
'HelloStream': grpc.stream_stream_rpc_method_handler(
servicer.HelloStream,
request_deserializer=service__pb2.HelloRequest.FromString,
response_serializer=service__pb2.HelloResponse.SerializeToString,
),
'SpeechToText': grpc.stream_stream_rpc_method_handler(
servicer.SpeechToText,
request_deserializer=service__pb2.STTRequest.FromString,
response_serializer=service__pb2.STTResponse.SerializeToString,
),
'GetResponse': grpc.stream_stream_rpc_method_handler(
servicer.GetResponse,
request_deserializer=service__pb2.GetResponseRequest.FromString,
response_serializer=service__pb2.GetResponseResponse.SerializeToString,
),
'GetResponseSingle': grpc.unary_stream_rpc_method_handler(
servicer.GetResponseSingle,
request_deserializer=service__pb2.GetResponseRequestSingle.FromString,
response_serializer=service__pb2.GetResponseResponse.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'service.ConvaiService', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
# This class is part of an EXPERIMENTAL API.
class ConvaiService(object):
"""Missing associated documentation comment in .proto file."""
@staticmethod
def Hello(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/service.ConvaiService/Hello',
service__pb2.HelloRequest.SerializeToString,
service__pb2.HelloResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def HelloStream(request_iterator,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.stream_stream(request_iterator, target, '/service.ConvaiService/HelloStream',
service__pb2.HelloRequest.SerializeToString,
service__pb2.HelloResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def SpeechToText(request_iterator,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.stream_stream(request_iterator, target, '/service.ConvaiService/SpeechToText',
service__pb2.STTRequest.SerializeToString,
service__pb2.STTResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def GetResponse(request_iterator,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.stream_stream(request_iterator, target, '/service.ConvaiService/GetResponse',
service__pb2.GetResponseRequest.SerializeToString,
service__pb2.GetResponseResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def GetResponseSingle(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_stream(request, target, '/service.ConvaiService/GetResponseSingle',
service__pb2.GetResponseRequestSingle.SerializeToString,
service__pb2.GetResponseResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
| 8,631 | Python | 42.376884 | 111 | 0.636543 |
aniketrajnish/Omniverse-Shakespeare-Project/exts/makra.omniverse.shakespeare.project/makra/omniverse/shakespeare/project/gemini/helpers.py | import os
import json
def loadConfig(file):
try:
with open(file, 'r') as f:
return json.load(f)
except FileNotFoundError:
raise Exception(f"Could not find config file: {file}")
except json.JSONDecodeError:
raise Exception(f"Could not parse config file: {file}")
def currPath():
return os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
def determineMimeType(fileName):
ext = fileName.split(".")[-1].lower()
if ext == 'jpg':
return 'image/jpeg'
supportedExts = ["jpg", "jpeg", "png", "webp", "heic", "heif"]
if ext not in supportedExts:
raise Exception(f"Unsupported file extension: {ext}")
else:
return f"image/{ext}"
| 777 | Python | 25.827585 | 81 | 0.616474 |
aniketrajnish/Omniverse-Shakespeare-Project/exts/makra.omniverse.shakespeare.project/makra/omniverse/shakespeare/project/gemini/image.py | import base64
class ImageHandler:
@staticmethod
def encodeImg(imgPath):
with open(imgPath, "rb") as imgFile:
return base64.b64encode(imgFile.read()).decode("utf-8") | 193 | Python | 26.714282 | 67 | 0.663212 |
aniketrajnish/Omniverse-Shakespeare-Project/exts/makra.omniverse.shakespeare.project/makra/omniverse/shakespeare/project/gemini/gemini.py | import requests
import json
import os
import configparser
from . import image
from . import helpers
def loadGeminiConfig():
configPath = os.path.join(helpers.currPath(), 'gemini.env')
if not os.path.exists(configPath):
raise FileNotFoundError("Gemini configuration file not found.")
config = configparser.ConfigParser()
config.read(configPath)
try:
geminiConfig = {
'baseUrl': config.get('GEMINI', 'BASE_URL'),
'apiKey': config.get('GEMINI', 'API_KEY'),
'model': config.get('GEMINI', 'MODEL'),
'prompt': config.get('GEMINI', 'PROMPT')
}
except configparser.NoOptionError as e:
raise KeyError(f"Missing configuration key in gemini.env: {e}")
return geminiConfig
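# A minimal sketch of the expected gemini.env layout (illustrative placeholders;
# the BASE_URL and MODEL values here are assumptions, use the ones for your account):
#
# [GEMINI]
# BASE_URL = https://generativelanguage.googleapis.com/v1beta/models
# API_KEY = <your-gemini-api-key>
# MODEL = gemini-pro-vision
# PROMPT = Describe the person in this image.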
def getGeminiResponse(imgPath):
geminiConfig = loadGeminiConfig()
url = f"{geminiConfig['baseUrl']}/{geminiConfig['model']}:generateContent?key={geminiConfig['apiKey']}"
base64Img = image.ImageHandler.encodeImg(imgPath)
headers = {
"Content-Type": "application/json"
}
data = json.dumps({
"contents": [
{
"parts": [
{"text": geminiConfig["prompt"]},
{
"inline_data": {
"mime_type": helpers.determineMimeType(imgPath),
"data": base64Img
}
}
]
}
]
})
try:
response = requests.post(url, headers=headers, data=data)
if response.status_code != 200:
return f"Error: {response.status_code} - {response.text}"
result = response.json()
return result["candidates"][0]["content"]["parts"][0]["text"]
except requests.exceptions.RequestException as e:
return f"Request failed: {str(e)}" | 1,910 | Python | 28.859375 | 107 | 0.551309 |
aniketrajnish/Omniverse-Shakespeare-Project/exts/makra.omniverse.shakespeare.project/makra/omniverse/shakespeare/project/tests/__init__.py | from .test_hello_world import * | 31 | Python | 30.999969 | 31 | 0.774194 |
aniketrajnish/Omniverse-Shakespeare-Project/exts/makra.omniverse.shakespeare.project/makra/omniverse/shakespeare/project/tests/test_hello_world.py | # NOTE:
# omni.kit.test - std python's unittest module with additional wrapping to add support for async/await tests
# For most things refer to unittest docs: https://docs.python.org/3/library/unittest.html
import omni.kit.test
# Extension for writing UI tests (simulate UI interaction)
import omni.kit.ui_test as ui_test
# Import extension python module we are testing with absolute import path, as if we are external user (other extension)
import makra.omniverse.shakespeare.project
# Having a test class derived from omni.kit.test.AsyncTestCase declared at the root of the module will make it auto-discoverable by omni.kit.test
class Test(omni.kit.test.AsyncTestCase):
# Before running each test
async def setUp(self):
pass
# After running each test
async def tearDown(self):
pass
# Actual test, notice it is "async" function, so "await" can be used if needed
async def test_hello_public_function(self):
result = makra.omniverse.shakespeare.project.some_public_function(4)
self.assertEqual(result, 256)
async def test_window_button(self):
# Find a label in our window
label = ui_test.find("My Window//Frame/**/Label[*]")
# Find buttons in our window
add_button = ui_test.find("My Window//Frame/**/Button[*].text=='Add'")
reset_button = ui_test.find("My Window//Frame/**/Button[*].text=='Reset'")
# Click reset button
await reset_button.click()
self.assertEqual(label.widget.text, "empty")
await add_button.click()
self.assertEqual(label.widget.text, "count: 1")
await add_button.click()
self.assertEqual(label.widget.text, "count: 2")
| 1,706 | Python | 35.319148 | 142 | 0.686987 |
aniketrajnish/Omniverse-Shakespeare-Project/exts/makra.omniverse.shakespeare.project/config/extension.toml | [package]
# Semantic Versioning is used: https://semver.org/
version = "1.0.0"
# Lists people or organizations that are considered the "authors" of the package.
authors = ["NVIDIA"]
# The title and description fields are primarily for displaying extension info in UI
title = "makra omniverse shakespeare project"
description="A simple python extension example to use as a starting point for your extensions."
# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"
# URL of the extension source repository.
repository = ""
# One of categories for UI.
category = "Example"
# Keywords for the extension
keywords = ["kit", "example"]
# Location of change log file in target (final) folder of extension, relative to the root.
# More info on writing changelog: https://keepachangelog.com/en/1.0.0/
changelog="docs/CHANGELOG.md"
# Preview image and icon. Folder named "data" automatically goes in git lfs (see .gitattributes file).
# Preview image is shown in "Overview" of Extensions window. Screenshot of an extension might be a good preview image.
preview_image = "data/preview.png"
# Icon is shown in Extensions window, it is recommended to be square, of size 256x256.
icon = "data/icon.png"
# Use omni.ui to build simple UI
[dependencies]
"omni.kit.uiapp" = {}
# Main python module this extension provides, it will be publicly available as "import makra.omniverse.shakespeare.project".
[[python.module]]
name = "makra.omniverse.shakespeare.project"
[[test]]
# Extra dependencies only to be used during test run
dependencies = [
"omni.kit.ui_test" # UI testing extension
]
[python.pipapi]
requirements = [
"requests",
"scipy",
"wavio",
"sounddevice",
"requests",
"googleapis-common-protos",
"grpcio==1.51.1",
"grpcio-tools==1.51.1",
"protobuf==4.21.10",
"PyAudio==0.2.12",
"pydub==0.25.1"
]
use_online_index = true
ignore_import_check = false | 1,947 | TOML | 28.515151 | 124 | 0.722137 |
aniketrajnish/Omniverse-Shakespeare-Project/exts/makra.omniverse.shakespeare.project/docs/CHANGELOG.md | # Changelog
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [1.0.0] - 2021-04-26
- Initial version of extension UI template with a window
| 178 | Markdown | 18.888887 | 80 | 0.702247 |
aniketrajnish/Omniverse-Shakespeare-Project/exts/makra.omniverse.shakespeare.project/docs/README.md | # Python Extension Example [makra.omniverse.shakespeare.project]
This is an example of a pure Python Kit extension. It is intended to be copied and to serve as a template for creating new extensions.
| 194 | Markdown | 37.999992 | 126 | 0.798969 |
aniketrajnish/Omniverse-Shakespeare-Project/exts/makra.omniverse.shakespeare.project/docs/index.rst | makra.omniverse.shakespeare.project
###################################
Example of a Python-only extension
.. toctree::
:maxdepth: 1
README
CHANGELOG
.. automodule::"makra.omniverse.shakespeare.project"
:platform: Windows-x86_64, Linux-x86_64
:members:
:undoc-members:
:show-inheritance:
:imported-members:
:exclude-members: contextmanager
| 371 | reStructuredText | 16.714285 | 52 | 0.6469 |
NVIDIA/warp/build_lib.py | # Copyright (c) 2022 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
# This script is an 'offline' build of the core warp runtime libraries
# designed to be executed as part of CI / developer workflows, not
# as part of the user runtime (since it requires CUDA toolkit, etc)
import sys
if sys.version_info < (3, 7):
raise Exception("Warp requires Python 3.7 minimum")
import argparse
import glob
import os
import shutil
from warp.build_dll import build_dll, find_host_compiler, set_msvc_env, verbose_cmd
from warp.context import export_builtins
parser = argparse.ArgumentParser(description="Warp build script")
parser.add_argument("--msvc_path", type=str, help="Path to MSVC compiler (optional if already on PATH)")
parser.add_argument("--sdk_path", type=str, help="Path to WinSDK (optional if already on PATH)")
parser.add_argument("--cuda_path", type=str, help="Path to CUDA SDK")
parser.add_argument(
"--mode",
type=str,
default="release",
help="Build configuration, default 'release'",
choices=["release", "debug"],
)
# Note argparse.BooleanOptionalAction can be used here when Python 3.9+ becomes the minimum supported version
parser.add_argument("--verbose", action="store_true", help="Verbose building output, default enabled")
parser.add_argument("--no_verbose", dest="verbose", action="store_false")
parser.set_defaults(verbose=True)
parser.add_argument(
"--verify_fp",
action="store_true",
help="Verify kernel inputs and outputs are finite after each launch, default disabled",
)
parser.add_argument("--no_verify_fp", dest="verify_fp", action="store_false")
parser.set_defaults(verify_fp=False)
parser.add_argument("--fast_math", action="store_true", help="Enable fast math on library, default disabled")
parser.add_argument("--no_fast_math", dest="fast_math", action="store_false")
parser.set_defaults(fast_math=False)
parser.add_argument("--quick", action="store_true", help="Only generate PTX code, disable CUTLASS ops")
parser.add_argument("--build_llvm", action="store_true", help="Build Clang/LLVM compiler from source, default disabled")
parser.add_argument("--no_build_llvm", dest="build_llvm", action="store_false")
parser.set_defaults(build_llvm=False)
parser.add_argument(
"--llvm_source_path", type=str, help="Path to the LLVM project source code (optional, repo cloned if not set)"
)
parser.add_argument("--debug_llvm", action="store_true", help="Enable LLVM compiler code debugging, default disabled")
parser.add_argument("--no_debug_llvm", dest="debug_llvm", action="store_false")
parser.set_defaults(debug_llvm=False)
parser.add_argument("--standalone", action="store_true", help="Use standalone LLVM-based JIT compiler, default enabled")
parser.add_argument("--no_standalone", dest="standalone", action="store_false")
parser.set_defaults(standalone=True)
args = parser.parse_args()
# set build output path off this file
base_path = os.path.dirname(os.path.realpath(__file__))
build_path = os.path.join(base_path, "warp")
print(args)
verbose_cmd = args.verbose
def find_cuda_sdk():
# check environment variables
for env in ["WARP_CUDA_PATH", "CUDA_HOME", "CUDA_PATH"]:
cuda_sdk = os.environ.get(env)
if cuda_sdk is not None:
print(f"Using CUDA Toolkit path '{cuda_sdk}' provided through the '{env}' environment variable")
return cuda_sdk
# use which/where to locate the nvcc compiler program
nvcc = shutil.which("nvcc")
if nvcc is not None:
cuda_sdk = os.path.dirname(os.path.dirname(nvcc)) # strip the executable name and bin folder
print(f"Using CUDA Toolkit path '{cuda_sdk}' found through 'which nvcc'")
return cuda_sdk
# check default paths
if os.name == "nt":
cuda_paths = glob.glob("C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v*.*")
if len(cuda_paths) >= 1:
cuda_sdk = cuda_paths[0]
print(f"Using CUDA Toolkit path '{cuda_sdk}' found at default path")
return cuda_sdk
else:
usr_local_cuda = "/usr/local/cuda"
if os.path.exists(usr_local_cuda):
cuda_sdk = usr_local_cuda
print(f"Using CUDA Toolkit path '{cuda_sdk}' found at default path")
return cuda_sdk
return None
# setup CUDA Toolkit path
if sys.platform == "darwin":
args.cuda_path = None
else:
if not args.cuda_path:
args.cuda_path = find_cuda_sdk()
# setup MSVC and WinSDK paths
if os.name == "nt":
if args.msvc_path or args.sdk_path:
# user provided MSVC and Windows SDK
assert args.msvc_path and args.sdk_path, "--msvc_path and --sdk_path must be used together."
args.host_compiler = set_msvc_env(msvc_path=args.msvc_path, sdk_path=args.sdk_path)
else:
# attempt to find MSVC in environment (will set vcvars)
args.host_compiler = find_host_compiler()
if not args.host_compiler:
print("Warp build error: Could not find MSVC compiler")
sys.exit(1)
# return platform specific shared library name
def lib_name(name):
if sys.platform == "win32":
return f"{name}.dll"
elif sys.platform == "darwin":
return f"lib{name}.dylib"
else:
return f"{name}.so"
def generate_exports_header_file():
"""Generates warp/native/exports.h, which lets built-in functions be callable from outside kernels"""
# set build output path off this file
export_path = os.path.join(base_path, "warp", "native", "exports.h")
try:
with open(export_path, "w") as f:
export_builtins(f)
print(f"Finished writing {export_path}")
except FileNotFoundError:
print(f"Error: The file '{export_path}' was not found.")
except PermissionError:
print(f"Error: Permission denied. Unable to write to '{export_path}'.")
except OSError as e:
print(f"Error: An OS-related error occurred: {e}")
except Exception as e:
print(f"An unexpected error occurred: {e}")
try:
    # Generate warp/native/exports.h
generate_exports_header_file()
# build warp.dll
cpp_sources = [
"native/warp.cpp",
"native/crt.cpp",
"native/error.cpp",
"native/cuda_util.cpp",
"native/mesh.cpp",
"native/hashgrid.cpp",
"native/reduce.cpp",
"native/runlength_encode.cpp",
"native/sort.cpp",
"native/sparse.cpp",
"native/volume.cpp",
"native/marching.cpp",
"native/cutlass_gemm.cpp",
]
warp_cpp_paths = [os.path.join(build_path, cpp) for cpp in cpp_sources]
if args.cuda_path is None:
print("Warning: CUDA toolchain not found, building without CUDA support")
warp_cu_path = None
else:
warp_cu_path = os.path.join(build_path, "native/warp.cu")
warp_dll_path = os.path.join(build_path, f"bin/{lib_name('warp')}")
build_dll(args, dll_path=warp_dll_path, cpp_paths=warp_cpp_paths, cu_path=warp_cu_path)
# build warp-clang.dll
if args.standalone:
import build_llvm
if args.build_llvm:
build_llvm.build_from_source(args)
build_llvm.build_warp_clang(args, lib_name("warp-clang"))
except Exception as e:
# output build error
print(f"Warp build error: {e}")
# report error
sys.exit(1)
| 7,690 | Python | 34.118721 | 120 | 0.673602 |
NVIDIA/warp/build_llvm.py | import os
import subprocess
import sys
from warp.build_dll import *
# set build output path off this file
base_path = os.path.dirname(os.path.realpath(__file__))
build_path = os.path.join(base_path, "warp")
llvm_project_path = os.path.join(base_path, "external/llvm-project")
llvm_build_path = os.path.join(llvm_project_path, "out/build/")
llvm_install_path = os.path.join(llvm_project_path, "out/install/")
# Fetch prebuilt Clang/LLVM libraries
def fetch_prebuilt_libraries(arch):
if os.name == "nt":
packman = "tools\\packman\\packman.cmd"
packages = {"x86_64": "15.0.7-windows-x86_64-ptx-vs142"}
else:
packman = "./tools/packman/packman"
if sys.platform == "darwin":
packages = {
"aarch64": "15.0.7-darwin-aarch64-macos11",
"x86_64": "15.0.7-darwin-x86_64-macos11",
}
else:
packages = {
"aarch64": "15.0.7-linux-aarch64-gcc7.5",
"x86_64": "18.1.3-linux-x86_64-gcc9.4",
}
subprocess.check_call(
[
packman,
"install",
"-l",
f"./_build/host-deps/llvm-project/release-{arch}",
"clang+llvm-warp",
packages[arch],
]
)
def build_from_source_for_arch(args, arch, llvm_source):
# Check out the LLVM project Git repository, unless it already exists
if not os.path.exists(llvm_source):
# Install dependencies
subprocess.check_call([sys.executable, "-m", "pip", "install", "gitpython"])
subprocess.check_call([sys.executable, "-m", "pip", "install", "cmake"])
subprocess.check_call([sys.executable, "-m", "pip", "install", "ninja"])
from git import Repo
repo_url = "https://github.com/llvm/llvm-project.git"
print(f"Cloning LLVM project from {repo_url}...")
shallow_clone = True # https://github.blog/2020-12-21-get-up-to-speed-with-partial-clone-and-shallow-clone/
version = "18.1.3"
if shallow_clone:
repo = Repo.clone_from(
repo_url,
to_path=llvm_source,
single_branch=True,
branch=f"llvmorg-{version}",
depth=1,
)
else:
repo = Repo.clone_from(repo_url, to_path=llvm_source)
repo.git.checkout(f"tags/llvmorg-{version}", "-b", f"llvm-{version}")
print(f"Using LLVM project source from {llvm_source}")
# CMake supports Debug, Release, RelWithDebInfo, and MinSizeRel builds
if args.mode == "release":
msvc_runtime = "MultiThreaded"
# prefer smaller size over aggressive speed
cmake_build_type = "MinSizeRel"
else:
msvc_runtime = "MultiThreadedDebug"
# When args.mode == "debug" we build a Debug version of warp.dll but
# we generally don't want warp-clang.dll to be a slow Debug version.
if args.debug_llvm:
cmake_build_type = "Debug"
else:
# The GDB/LLDB debugger observes the __jit_debug_register_code symbol
# defined by the LLVM JIT, for which it needs debug info.
cmake_build_type = "RelWithDebInfo"
# Location of cmake and ninja installed through pip (see build.bat / build.sh)
python_bin = "python/Scripts" if sys.platform == "win32" else "python/bin"
os.environ["PATH"] = os.path.join(base_path, "_build/target-deps/" + python_bin) + os.pathsep + os.environ["PATH"]
if arch == "aarch64":
target_backend = "AArch64"
else:
target_backend = "X86"
if sys.platform == "darwin":
host_triple = f"{arch}-apple-macos11"
osx_architectures = arch # build one architecture only
abi_version = ""
elif os.name == "nt":
host_triple = f"{arch}-pc-windows"
osx_architectures = ""
abi_version = ""
else:
host_triple = f"{arch}-pc-linux"
osx_architectures = ""
abi_version = "-fabi-version=13" # GCC 8.2+
llvm_path = os.path.join(llvm_source, "llvm")
build_path = os.path.join(llvm_build_path, f"{args.mode}-{arch}")
install_path = os.path.join(llvm_install_path, f"{args.mode}-{arch}")
# Build LLVM and Clang
# fmt: off
cmake_gen = [
"cmake",
"-S", llvm_path,
"-B", build_path,
"-G", "Ninja",
"-D", f"CMAKE_BUILD_TYPE={cmake_build_type}",
"-D", f"CMAKE_MSVC_RUNTIME_LIBRARY={msvc_runtime}",
"-D", f"LLVM_TARGETS_TO_BUILD={target_backend};NVPTX",
"-D", "LLVM_ENABLE_PROJECTS=clang",
"-D", "LLVM_ENABLE_ZLIB=FALSE",
"-D", "LLVM_ENABLE_ZSTD=FALSE",
"-D", "LLVM_ENABLE_TERMINFO=FALSE",
"-D", "LLVM_BUILD_LLVM_C_DYLIB=FALSE",
"-D", "LLVM_BUILD_RUNTIME=FALSE",
"-D", "LLVM_BUILD_RUNTIMES=FALSE",
"-D", "LLVM_BUILD_TOOLS=FALSE",
"-D", "LLVM_BUILD_UTILS=FALSE",
"-D", "LLVM_INCLUDE_BENCHMARKS=FALSE",
"-D", "LLVM_INCLUDE_DOCS=FALSE",
"-D", "LLVM_INCLUDE_EXAMPLES=FALSE",
"-D", "LLVM_INCLUDE_RUNTIMES=FALSE",
"-D", "LLVM_INCLUDE_TESTS=FALSE",
"-D", "LLVM_INCLUDE_TOOLS=TRUE", # Needed by Clang
"-D", "LLVM_INCLUDE_UTILS=FALSE",
"-D", f"CMAKE_CXX_FLAGS=-D_GLIBCXX_USE_CXX11_ABI=0 {abi_version}", # The pre-C++11 ABI is still the default on the CentOS 7 toolchain
"-D", f"CMAKE_INSTALL_PREFIX={install_path}",
"-D", f"LLVM_HOST_TRIPLE={host_triple}",
"-D", f"CMAKE_OSX_ARCHITECTURES={osx_architectures}",
# Disable unused tools and features
"-D", "CLANG_BUILD_TOOLS=FALSE",
"-D", "LLVM_ENABLE_PLUGINS=FALSE",
"-D", "CLANG_PLUGIN_SUPPORT=FALSE",
"-D", "CLANG_ENABLE_ARCMT=FALSE",
"-D", "CLANG_ENABLE_STATIC_ANALYZER=FALSE",
"-D", "CLANG_TOOLING_BUILD_AST_INTROSPECTION=FALSE",
"-D", "CLANG_TOOL_AMDGPU_ARCH_BUILD=FALSE",
"-D", "CLANG_TOOL_APINOTES_TEST_BUILD=FALSE",
"-D", "CLANG_TOOL_ARCMT_TEST_BUILD=FALSE",
"-D", "CLANG_TOOL_CLANG_CHECK_BUILD=FALSE",
"-D", "CLANG_TOOL_CLANG_DIFF_BUILD=FALSE",
"-D", "CLANG_TOOL_CLANG_EXTDEF_MAPPING_BUILD=FALSE",
"-D", "CLANG_TOOL_CLANG_FORMAT_BUILD=FALSE",
"-D", "CLANG_TOOL_CLANG_FORMAT_VS_BUILD=FALSE",
"-D", "CLANG_TOOL_CLANG_FUZZER_BUILD=FALSE",
"-D", "CLANG_TOOL_CLANG_IMPORT_TEST_BUILD=FALSE",
"-D", "CLANG_TOOL_CLANG_LINKER_WRAPPER_BUILD=FALSE",
"-D", "CLANG_TOOL_CLANG_NVLINK_WRAPPER_BUILD=FALSE",
"-D", "CLANG_TOOL_CLANG_OFFLOAD_BUNDLER_BUILD=FALSE",
"-D", "CLANG_TOOL_CLANG_OFFLOAD_PACKAGER_BUILD=FALSE",
"-D", "CLANG_TOOL_CLANG_OFFLOAD_WRAPPER_BUILD=FALSE",
"-D", "CLANG_TOOL_CLANG_REFACTOR_BUILD=FALSE",
"-D", "CLANG_TOOL_CLANG_RENAME_BUILD=FALSE",
"-D", "CLANG_TOOL_CLANG_REPL_BUILD=FALSE",
"-D", "CLANG_TOOL_CLANG_SCAN_DEPS_BUILD=FALSE",
"-D", "CLANG_TOOL_CLANG_SHLIB_BUILD=FALSE",
"-D", "CLANG_TOOL_C_ARCMT_TEST_BUILD=FALSE",
"-D", "CLANG_TOOL_C_INDEX_TEST_BUILD=FALSE",
"-D", "CLANG_TOOL_DIAGTOOL_BUILD=FALSE",
"-D", "CLANG_TOOL_DRIVER_BUILD=FALSE",
"-D", "CLANG_TOOL_LIBCLANG_BUILD=FALSE",
"-D", "CLANG_TOOL_SCAN_BUILD_BUILD=FALSE",
"-D", "CLANG_TOOL_SCAN_BUILD_PY_BUILD=FALSE",
"-D", "CLANG_TOOL_CLANG_OFFLOAD_BUNDLER_BUILD=FALSE",
"-D", "CLANG_TOOL_SCAN_VIEW_BUILD=FALSE",
"-D", "LLVM_ENABLE_BINDINGS=FALSE",
"-D", "LLVM_ENABLE_OCAMLDOC=FALSE",
"-D", "LLVM_TOOL_BUGPOINT_BUILD=FALSE",
"-D", "LLVM_TOOL_BUGPOINT_PASSES_BUILD=FALSE",
"-D", "LLVM_TOOL_CLANG_BUILD=FALSE",
"-D", "LLVM_TOOL_DSYMUTIL_BUILD=FALSE",
"-D", "LLVM_TOOL_DXIL_DIS_BUILD=FALSE",
"-D", "LLVM_TOOL_GOLD_BUILD=FALSE",
"-D", "LLVM_TOOL_LLC_BUILD=FALSE",
"-D", "LLVM_TOOL_LLDB_BUILD=FALSE",
"-D", "LLVM_TOOL_LLI_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_AR_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_AS_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_AS_FUZZER_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_BCANALYZER_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_CAT_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_CFI_VERIFY_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_CONFIG_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_COV_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_CVTRES_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_CXXDUMP_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_CXXFILT_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_CXXMAP_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_C_TEST_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_DEBUGINFOD_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_DEBUGINFOD_FIND_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_DIFF_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_DIS_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_DIS_FUZZER_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_DLANG_DEMANGLE_FUZZER_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_DWARFDUMP_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_DWARFUTIL_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_DWP_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_EXEGESIS_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_EXTRACT_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_GO_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_GSYMUTIL_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_IFS_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_ISEL_FUZZER_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_ITANIUM_DEMANGLE_FUZZER_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_JITLINK_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_JITLISTENER_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_LIBTOOL_DARWIN_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_LINK_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_LIPO_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_LTO2_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_LTO_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_MCA_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_MC_ASSEMBLE_FUZZER_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_MC_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_MC_DISASSEMBLE_FUZZER_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_MICROSOFT_DEMANGLE_FUZZER_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_ML_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_MODEXTRACT_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_MT_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_NM_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_OBJCOPY_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_OBJDUMP_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_OPT_FUZZER_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_OPT_REPORT_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_PDBUTIL_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_PROFDATA_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_PROFGEN_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_RC_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_READOBJ_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_REDUCE_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_REMARK_SIZE_DIFF_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_RTDYLD_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_RUST_DEMANGLE_FUZZER_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_SHLIB_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_SIM_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_SIZE_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_SPECIAL_CASE_LIST_FUZZER_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_SPLIT_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_STRESS_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_STRINGS_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_SYMBOLIZER_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_TAPI_DIFF_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_TLI_CHECKER_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_UNDNAME_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_XRAY_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_YAML_NUMERIC_PARSER_FUZZER_BUILD=FALSE",
"-D", "LLVM_TOOL_LLVM_YAML_PARSER_FUZZER_BUILD=FALSE",
"-D", "LLVM_TOOL_LTO_BUILD=FALSE",
"-D", "LLVM_TOOL_OBJ2YAML_BUILD=FALSE",
"-D", "LLVM_TOOL_OPT_BUILD=FALSE",
"-D", "LLVM_TOOL_OPT_VIEWER_BUILD=FALSE",
"-D", "LLVM_TOOL_REMARKS_SHLIB_BUILD=FALSE",
"-D", "LLVM_TOOL_SANCOV_BUILD=FALSE",
"-D", "LLVM_TOOL_SANSTATS_BUILD=FALSE",
"-D", "LLVM_TOOL_SPLIT_FILE_BUILD=FALSE",
"-D", "LLVM_TOOL_VERIFY_USELISTORDER_BUILD=FALSE",
"-D", "LLVM_TOOL_VFABI_DEMANGLE_FUZZER_BUILD=FALSE",
"-D", "LLVM_TOOL_XCODE_TOOLCHAIN_BUILD=FALSE",
"-D", "LLVM_TOOL_YAML2OBJ_BUILD=FALSE",
]
# fmt: on
subprocess.check_call(cmake_gen, stderr=subprocess.STDOUT)
cmake_build = ["cmake", "--build", build_path]
subprocess.check_call(cmake_build, stderr=subprocess.STDOUT)
cmake_install = ["cmake", "--install", build_path]
subprocess.check_call(cmake_install, stderr=subprocess.STDOUT)
def build_from_source(args):
print("Building Clang/LLVM from source...")
if args.llvm_source_path is not None:
llvm_source = args.llvm_source_path
else:
llvm_source = llvm_project_path
# build for the machine's architecture
build_from_source_for_arch(args, machine_architecture(), llvm_source)
# for Apple systems also cross-compile for building a universal binary
if sys.platform == "darwin":
if machine_architecture() == "x86_64":
build_from_source_for_arch(args, "aarch64", llvm_source)
else:
build_from_source_for_arch(args, "x86_64", llvm_source)
# build warp-clang.dll
def build_warp_clang_for_arch(args, lib_name, arch):
try:
cpp_sources = [
"native/clang/clang.cpp",
"native/crt.cpp",
]
clang_cpp_paths = [os.path.join(build_path, cpp) for cpp in cpp_sources]
clang_dll_path = os.path.join(build_path, f"bin/{lib_name}")
if args.build_llvm:
# obtain Clang and LLVM libraries from the local build
install_path = os.path.join(llvm_install_path, f"{args.mode}-{arch}")
libpath = os.path.join(install_path, "lib")
else:
# obtain Clang and LLVM libraries from packman
fetch_prebuilt_libraries(arch)
libpath = os.path.join(base_path, f"_build/host-deps/llvm-project/release-{arch}/lib")
libs = []
for _, _, libraries in os.walk(libpath):
libs.extend(libraries)
break # just the top level contains library files
if os.name == "nt":
libs.append("Version.lib")
libs.append("Ws2_32.lib")
libs.append(f'/LIBPATH:"{libpath}"')
else:
libs = [f"-l{lib[3:-2]}" for lib in libs if os.path.splitext(lib)[1] == ".a"]
if sys.platform == "darwin":
libs += libs # prevents unresolved symbols due to link order
else:
libs.insert(0, "-Wl,--start-group")
libs.append("-Wl,--end-group")
libs.append(f"-L{libpath}")
libs.append("-lpthread")
libs.append("-ldl")
if sys.platform != "darwin":
libs.append("-lrt")
build_dll_for_arch(
args,
dll_path=clang_dll_path,
cpp_paths=clang_cpp_paths,
cu_path=None,
libs=libs,
arch=arch,
mode=args.mode if args.build_llvm else "release",
)
except Exception as e:
# output build error
print(f"Warp Clang/LLVM build error: {e}")
# report error
sys.exit(1)
def build_warp_clang(args, lib_name):
if sys.platform == "darwin":
# create a universal binary by combining x86-64 and AArch64 builds
build_warp_clang_for_arch(args, lib_name + "-x86_64", "x86_64")
build_warp_clang_for_arch(args, lib_name + "-aarch64", "aarch64")
dylib_path = os.path.join(build_path, f"bin/{lib_name}")
run_cmd(f"lipo -create -output {dylib_path} {dylib_path}-x86_64 {dylib_path}-aarch64")
os.remove(f"{dylib_path}-x86_64")
os.remove(f"{dylib_path}-aarch64")
else:
build_warp_clang_for_arch(args, lib_name, machine_architecture())
| 16,081 | Python | 40.989556 | 142 | 0.577763 |
NVIDIA/warp/PACKAGING.md | # Release Instructions
## Versioning
Versions take the format X.Y.Z, similar to [Python itself](https://devguide.python.org/developer-workflow/development-cycle/#devcycle):
- Increments in X are reserved for major reworks of the project causing disruptive incompatibility (or reaching the 1.0 milestone).
- Increments in Y are for regular releases with a new set of features.
- Increments in Z are for bug fixes. In principle there are no new features. The Z component can be omitted when it is 0 or not relevant.
This is similar to [Semantic Versioning](https://semver.org/) but less strict around backward compatibility.
Like with Python, some breaking changes can be present between minor versions if well documented and gradually introduced.
Note that prior to 0.11.0 this schema was not strictly adhered to.
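As an illustration, versions under this scheme order naturally when parsed as integer tuples. The following is a minimal sketch (the `parse_version` helper is hypothetical, not part of the release tooling):

```python
def parse_version(v: str) -> tuple:
    """Parse 'X.Y' or 'X.Y.Z' into a comparable tuple; a missing Z counts as 0."""
    parts = [int(p) for p in v.split(".")]
    while len(parts) < 3:
        parts.append(0)
    return tuple(parts)

# A bug-fix release follows its minor release, which precedes the 1.0 milestone.
assert parse_version("0.11") < parse_version("0.11.1") < parse_version("1.0.0")
```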
## Repositories
Development happens internally on a GitLab repository (part of the Omniverse group), while releases are made public on GitHub.
This document uses the following Git remote names:
- **omniverse**: `git remote add omniverse https://gitlab-master.nvidia.com/omniverse/warp.git`
- **github**: `git remote add github https://github.com/NVIDIA/warp.git`
Currently, all feature branches get merged into the `main` branch of the **omniverse** repo and then GitLab push-mirrors
the changes over to GitHub (nominally within five minutes). This mirroring process also pushes all tags
(only tags beginning with `v` are allowed to be created) and branches beginning with `release-`.
The status of push mirroring can be checked under **Settings** :arrow_right: **Repository** on GitLab.
## GitLab Release Branch
1) Create a branch in your fork repository from which a merge request will be opened to bump the version string
and create the public-facing changelogs for the release.
2) Search & replace the current version string from `VERSION.md`.
We want to keep the Omniverse extensions' version in sync with the library so update the strings found in the `exts` folder as well.
The version string currently appears in the following two files, but there could be more in the future:
- `omni.warp/config/extension.toml`
- `omni.warp.core/config/extension.toml`
   Be sure *not* to update previous strings in `CHANGELOG.md` (a helper sketch for auditing this appears after this list).
3) Update `CHANGELOG.md` from Git history (since the last release branch). Only list user-facing changes.
The entire development team should all be helping to keep this file up-to-date, so verify that all changes users
should know about are included.
The changelogs from the Omniverse extensions found in `exts` are kept in sync with the one from the library, so update them all at the same time and list any change made to the extensions.
4) Open an MR on GitLab to merge this branch into `main`. Send a message in `#omni-warp-dev` to the `@warp-team`
asking for a review of the merge request's changes.
5) Merge the branch into `main` after waiting a reasonable amount of time for the team to review and approve the MR.
6) For new `X.Y` versions, create a release branch (note `.Z` maintenance versions remain on the same branch):
`git checkout -b release-X.Y [<start-point>]`
If branching from an older revision or reusing a branch, make sure to cherry-pick the version and changelog update.
7) Make any release-specific changes (e.g. disable/remove features not ready yet).
8) :warning: Keep in mind that branches pushed to the **omniverse** repository beginning with `release-` are
   automatically mirrored to GitHub. :warning:
Push the new release branch to **omniverse** when it is in a state ready for CI testing.
9) Check that the last revision on the release branch passes GitLab CI tests. A pipeline should have been automatically
created after pushing the branch in the previous step:
<https://gitlab-master.nvidia.com/omniverse/warp/-/pipelines>
Fix issues until all tests pass. Cherry-pick fixes for `main` where applicable.
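For auditing step (2), a quick search for the previous version string can catch files that were missed. This is a hedged sketch (the `OLD` value is a placeholder, and `CHANGELOG.md` files are skipped because their historical entries must keep old version numbers):

```python
import pathlib

OLD = "1.1.1"  # placeholder: the version string being replaced
for path in pathlib.Path(".").rglob("*"):
    if path.suffix in {".md", ".toml", ".py"} and path.name != "CHANGELOG.md":
        try:
            if OLD in path.read_text(encoding="utf-8"):
                print(path)
        except (UnicodeDecodeError, OSError):
            pass  # skip binary files and unreadable entries
```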
## Creating a GitHub Release Package
1) Wait for the (latest) packages to appear in:
<https://gitlab-master.nvidia.com/omniverse/warp/-/packages/>
2) Download the `.whl` files for each supported platform and move them into an empty folder.
3) Run tests for at least one platform:
- Run `python -m pip install warp_lang-<version>-<platform-tag>.whl`
- Run `python -m warp.tests`
   Check that the correct version number gets printed (see the smoke-test sketch after this list).
4) If tests fail, make fixes on `release-X.Y` and where necessary cherry-pick to `main` before repeating from step (1).
5) Tag the release with `vX.Y.Z` on `release-X.Y` and push to `omniverse`.
   Both the tag and the release branch will be automatically mirrored to GitHub.
It is safest to push *just* the new tag using `git push omniverse vX.Y.Z`.
In case of a mistake, a tag already pushed to `omniverse` can be deleted from the GitLab UI.
The bad tag must also be deleted from the GitHub UI if it was mirrored there.
6) Create a new release on [GitHub](https://github.com/NVIDIA/warp) with a tag and title of `vX.Y.Z` and
upload the `.whl` artifacts as attachments. Use the changelog updates as the description.
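The smoke test in step (3) can be as simple as the following sketch, assuming the wheel was installed into the current environment:

```python
# Post-install sanity check: Warp should import, initialize, and report the release version.
import warp as wp

wp.init()
print(wp.config.version)  # should print the X.Y.Z being released
```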
## Upload a PyPI Release
First time:
- Create a [PyPI](https://pypi.org/) account.
- [Create a Token](https://pypi.org/manage/account/#api-tokens) for uploading to the `warp-lang` project (store it somewhere safe).
- Get an admin (<[email protected]>) to give you write access to the project.
Per release:
Run `python -m twine upload *` from the `.whl` packages folder (on Windows make sure to use `cmd` shell; Git Bash doesn't work).
- username: `__token__`
- password: `(your token string from PyPI)`
## Publishing the Omniverse Extensions
1) Ensure that the version strings and `CHANGELOG.md` files in the `exts` folder are in sync with the ones from the library.
2) Wait for the (latest) packages to appear in:
<https://gitlab-master.nvidia.com/omniverse/warp/-/packages/>
3) Download `kit-extensions.zip` to your computer.
4) Extract it to a clean folder and check the extensions inside of Kit:
- Run `omni.create.sh --ext-folder /path/to/artifacts/exts --enable omni.warp-X.Y.Z --enable omni.warp.core-X.Y.Z`
- Ensure that the example scenes are working as expected
- Run test suites for both extensions
5) If tests fail, make fixes on `release-X.Y` and where necessary cherry-pick to `main` before repeating from step (2).
6) If all tests passed:
- `kit --ext-folder /path/to/artifacts/exts --publish omni.warp.core-X.Y.Z`
- `kit --ext-folder /path/to/artifacts/exts --publish omni.warp-X.Y.Z`
7) Ensure that the release is tagged with `vX.Y.Z` on both `omniverse/release-X.Y` and `github/release-X.Y`.
## Automated processes
The following is just for your information. These steps run automatically in CI/CD pipelines, but can be replicated manually if needed:
### Building the documentation
The contents of <https://nvidia.github.io/warp/> are generated by a GitHub pipeline which runs `python build_docs.py` (prerequisites: `pip install -r docs/requirements.txt`).
### Building pip wheels
The GitLab pipeline's `create pypi wheels` Job (part of the `package` Stage) combines artifacts from each platform build, moving the contents of `warp/bin` to platform- and architecture-specific
subfolders; e.g. `warp/bin/linux-x86_64` and `warp/bin/linux-aarch64` both contain `warp.so` and `warp-clang.so` files.
Pip wheels are then built using:
```bash
python -m build --wheel -C--build-option=-Pwindows-x86_64
python -m build --wheel -C--build-option=-Plinux-x86_64
python -m build --wheel -C--build-option=-Plinux-aarch64
python -m build --wheel -C--build-option=-Pmacos-universal
```
Selecting the correct library files for each wheel happens in [`setup.py`](setup.py).
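Before invoking the builds, the combined `warp/bin` layout can be sanity-checked with a short sketch like this (the folder and library names follow the conventions described above; treat them as assumptions):

```python
import pathlib

# Expected shared libraries per platform subfolder of warp/bin.
expected = {
    "windows-x86_64": {"warp.dll", "warp-clang.dll"},
    "linux-x86_64": {"warp.so", "warp-clang.so"},
    "linux-aarch64": {"warp.so", "warp-clang.so"},
    "macos-universal": {"libwarp.dylib", "libwarp-clang.dylib"},
}
for folder, libs in expected.items():
    found = {p.name for p in pathlib.Path("warp/bin", folder).glob("*")}
    missing = libs - found
    if missing:
        print(f"{folder}: missing {sorted(missing)}")
```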
| 7,743 | Markdown | 44.552941 | 194 | 0.74816 |
NVIDIA/warp/pyproject.toml | [build-system]
requires = ["setuptools>=61", "build", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "warp-lang"
requires-python = ">=3.7" # 3.9 recommended
authors = [{ name = "NVIDIA", email = "[email protected]" }]
description = "A Python framework for high-performance simulation and graphics programming"
license = { text = "NVIDIA Software License" }
classifiers = [
"Programming Language :: Python :: 3.7", # Deprecated
"Programming Language :: Python :: 3.8", # Deprecated
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
]
dependencies = ["numpy"]
dynamic = ["version", "readme"]
[project.urls]
GitHub = "https://github.com/NVIDIA/warp"
Documentation = "https://nvidia.github.io/warp"
Changelog = "https://github.com/NVIDIA/warp/blob/main/CHANGELOG.md"
[project.optional-dependencies]
dev = ["pre-commit", "ruff", "nvtx", "furo", "sphinx-copybutton", "coverage[toml]"]
extras = ['usd-core', 'matplotlib', 'pyglet']
[tool.setuptools.packages.find]
include = ["warp*"]
[tool.setuptools.dynamic]
version = { attr = "warp.config.version" }
readme = { file = ["README.md"], content-type = "text/markdown" }
[tool.ruff]
cache-dir = ".cache/ruff"
line-length = 120
indent-width = 4
extend-exclude = [
"warp/native/cutlass/",
"warp/thirdparty/appdirs.py",
"warp/thirdparty/dlpack.py",
"tools",
"stubs.py",
]
[tool.ruff.lint]
select = [
"E", # pycodestyle errors
"I", # isort
"F", # pyflakes
"W", # pycodestyle warnings
"B", # flake8-bugbear
"C4", # flake8-comprehensions
]
ignore = [
"E501", # Many lines are over 120 characters already
"E741", # Warp often uses l as a variable name
"F403", # Allow wildcard imports
"F405", # Related to use of wildcard imports
"F811", # Warp often uses overloads
]
[tool.ruff.lint.per-file-ignores]
"__init__.py" = ["F401"]
"warp/tests/*.py" = ["F841"]
[tool.ruff.lint.pydocstyle]
convention = "google"
[tool.ruff.format]
quote-style = "double"
indent-style = "space"
skip-magic-trailing-comma = false
line-ending = "auto"
docstring-code-format = true
[tool.coverage.run]
source = ["warp", "warp.sim", "warp.render"]
disable_warnings = [
"module-not-measured",
"module-not-imported",
"no-data-collected",
"couldnt-parse",
]
[tool.coverage.report]
exclude_lines = [
"pragma: no cover",
"@wp",
"@warp",
"if 0:",
"if __name__ == .__main__.:",
]
omit = [
"*/warp/thirdparty/*",
"*/warp/examples/*",
"*/warp/tests/*",
"*/warp/fem/*",
"appdirs.py",
"render_opengl.py",
"build_dll.py",
"config.py",
"stubs.py",
]
| 2,800 | TOML | 24.463636 | 91 | 0.633571 |
NVIDIA/warp/CONTRIBUTING.md | # Contributing to NVIDIA Warp
Contributions and PRs from the community are welcome and are taken under the
terms described in the **9. Feedback** section of the [license](LICENSE.md).
## Forking and Branch Naming
The first step in developing for Warp is to create a fork of the Warp repository.
- GitHub community developers can fork the [GitHub Warp repository](https://github.com/NVIDIA/warp).
- NVIDIA developers can fork the [GitLab Warp repository](https://gitlab-master.nvidia.com/omniverse/warp).
Features should be developed on a branch with the following naming scheme:
user/feature-name
For example:
mmacklin/cuda-bvh-optimizations
## Opening a Merge Request
The following guidelines were originally written for NVIDIA developers
working on Warp using the internal GitLab repository. Developers working
on GitHub should generally follow this process, replacing the GitLab-specific
components with their GitHub counterparts.
When you're ready to submit your changes, please follow these steps to create a Merge Request (MR):
1. **Create MR**: Submit your MR against the Warp repo.
Ensure your MR has a descriptive title that clearly states the purpose of the changes.
2. **Add a Detailed Description**: Your MR should include a brief description covering:
- Summary of changes.
- Areas affected by the changes.
- The problem being solved.
- Any limitations or non-handled areas in the changes.
   - A link to the JIRA or GitHub issue it is addressing.
3. **Pre-Review Checklist**: The following should be checked before assigning reviews:
- Unit / regression tests are written.
- Docs have been updated.
- Use `ruff check` and `ruff format --check` to check for code quality issues.
The GitLab pipeline will fail if there are issues.
     Exclusions may be used as appropriate, e.g. `# noqa: F841` or `# fmt: skip` (see the sketch after this list).
- The GitLab CI/CD pipeline for the merge request is successful.
4. **Assign Reviewers**: Select one or more reviewers from the owners list below to review your changes.
Use the **Assignees** field to indicate reviewers who must _all_ approve the MR before it can be merged.
Additional reviewers whose approvals are not required can be listed in the **Reviewers** field.
5. **Address Reviewer Comments**: Respond to all reviewer feedback. Be open to revising your approach based on their suggestions.
Once you have addressed a comment then reply to notify reviewers.
_Do not_ resolve the thread yourself, this makes it harder for the reviewer to verify what has been changed.
If a reviewer has already approved the MR, you may self-resolve any of their outstanding threads in the interest of convenience.
6. **Final Steps for Merging**: Before your MR can be merged, ensure that:
- All reviewer comments are resolved.
- All mandatory reviewers (in the **Assignees** field) have approved the MR.
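For reference, the lint and formatting exclusions mentioned in step 3 are applied per line. A minimal sketch (the variable names are made up, not code from the repository):

```python
def example():
    # Intentionally unused local, kept for debugger inspection.
    scratch = [i * i for i in range(4)]  # noqa: F841
    # Keep this hand-aligned layout rather than letting the formatter reflow it.
    identity = [[1.0, 0.0], [0.0, 1.0]]  # fmt: skip
```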
## Reviewer Guidelines
As a reviewer, your role is crucial in maintaining the quality of the NVIDIA Warp library. Here's what to look for in an MR:
1. **Bug and Regression Checks**: If the MR addresses any bugs or regressions, verify that new unit tests are added to prevent future regressions.
2. **Code Style and Conventions**: The code should generally adhere to PEP8 standards. However, if the surrounding code deviates from these standards, prioritize existing conventions. Avoid introducing new styles, layouts, or terminology for existing concepts.
3. **Documentation**: Check for appropriate documentation of new features. This includes docstrings and updates to the User Manual. Note that documentation is auto-generated for each MR, so contributors should not submit built documentation files.
4. **Review Thoroughly**: Take your time with the review.
- Consider if there's a simpler or better solution, ask clarifying questions or add comments if the intention is not clear.
- Consider the impact on the user experience, ease of use, intuitiveness, and consistency.
- Beware of breaking changes, even if the API does not change, does it break semantics existing users may be relying on?
Once you are satisfied with a thread resolution you should mark it as resolved. All threads must be resolved for the MR to be merged.
## Feature Owners
If you're contributing to a specific area of NVIDIA Warp, please consult the relevant feature owners:
- **Public API**: MilesM + relevant area owner from below
- **Code Generation**: NicolasC, MilesM
- **Platform Support (macOS, Tegra)**: NicolasC
- **CI/CD**: EricS, NicolasC, ZachC
- **CUDA, MGPU**: LukaszW, EricS
- **Kit Extensions**: ChristopherC
- **Torch/dlpack Interop**: LukaszW, ZachC
- **warp.sim**: EricH, MilesM
- **warp.fem**: GillesD
- **warp.optim**: GillesD, JonathanL
- **NanoVDB**: GregK
- **Testing/Packaging/Deployment**: EricS, NicolasC, LukaszW
Thank you for your contributions to making NVIDIA Warp a great tool for developers!
| 4,908 | Markdown | 49.091836 | 260 | 0.762429 |
NVIDIA/warp/setup.py | import argparse
import os
import pathlib
import platform
import shutil
import sys
from typing import NamedTuple
import setuptools
from wheel.bdist_wheel import bdist_wheel
# Parse --build-option arguments meant for the bdist_wheel command. We have to parse these
# ourselves because when bdist_wheel runs it's too late to select a subset of libraries for package_data.
parser = argparse.ArgumentParser()
parser.add_argument("command")
parser.add_argument(
"--platform", "-P", type=str, default="", help="Wheel platform: windows|linux|macos-x86_64|aarch64|universal"
)
args = parser.parse_known_args()[0]
# returns a canonical machine architecture string
# - "x86_64" for x86-64, aka. AMD64, aka. x64
# - "aarch64" for AArch64, aka. ARM64
def machine_architecture() -> str:
machine = platform.machine()
if machine == "x86_64" or machine == "AMD64":
return "x86_64"
if machine == "aarch64" or machine == "arm64":
return "aarch64"
raise RuntimeError(f"Unrecognized machine architecture {machine}")
def machine_os() -> str:
if sys.platform == "win32":
return "windows"
if sys.platform == "linux":
return "linux"
if sys.platform == "darwin":
return "macos"
raise RuntimeError(f"Unrecognized system platform {sys.platform}")
class Platform(NamedTuple):
os: str
arch: str
fancy_name: str
extension: str
tag: str
def name(self) -> str:
return self.os + "-" + self.arch
platforms = [
Platform("windows", "x86_64", "Windows x86-64", ".dll", "win_amd64"),
Platform("linux", "x86_64", "Linux x86-64", ".so", "manylinux2014_x86_64"),
Platform("linux", "aarch64", "Linux AArch64", ".so", "manylinux2014_aarch64"),
Platform("macos", "universal", "macOS universal", ".dylib", "macosx_10_13_universal2"),
]
class Library(NamedTuple):
file: str
directory: str
platform: Platform
# Enumerate warp/bin libraries
def detect_warp_libraries():
detected_libraries = set()
warp_bin = pathlib.Path("warp/bin")
for file in warp_bin.rglob("*.*"):
for p in platforms:
if os.path.splitext(file.name)[1] == p.extension:
# If this is a local build, assume we want a wheel for this machine's architecture
if file.parent.name == "bin" and (p.arch == machine_architecture() or p.arch == "universal"):
detected_libraries.add(Library(file.name, "bin/", p))
else:
# Expect libraries to be in a subdirectory named after the wheel platform
platform_name = p.name()
if file.parent.name == platform_name:
detected_libraries.add(Library(file.name, "bin/" + platform_name + "/", p))
if len(detected_libraries) == 0:
raise Exception("No libraries found in warp/bin. Please run build_lib.py first.")
return detected_libraries
detected_libraries = detect_warp_libraries()
detected_platforms = {lib.platform for lib in detected_libraries}
wheel_platform = None # The one platform for which we're building a wheel
if args.command == "bdist_wheel":
if args.platform != "":
for p in platforms:
if args.platform == p.name():
wheel_platform = p
print(f"Platform argument specified for building {p.fancy_name} wheel")
break
if wheel_platform is None:
print(f"Platform argument '{args.platform}' not recognized")
elif wheel_platform not in detected_platforms:
print(f"No libraries found for {wheel_platform.fancy_name}")
print("Falling back to auto-detection")
wheel_platform = None
if wheel_platform is None:
if len(detected_platforms) > 1:
print("Libraries for multiple platforms were detected.")
print(
"Run `python -m build --wheel -C--build-option=-P[windows|linux|macos]-[x86_64|aarch64|universal]` to select a specific one."
)
        # Select the libraries corresponding to this machine's platform
for p in platforms:
if p.os == machine_os() and p.arch == machine_architecture():
wheel_platform = p
break
if wheel_platform is None:
# Just pick the first one
wheel_platform = next(iter(detected_platforms))
print("Creating Warp wheel for " + wheel_platform.fancy_name)
# Binary wheel distribution builds assume that the platform you're building on will be the platform
# of the package. This class overrides the platform tag.
# https://packaging.python.org/en/latest/specifications/platform-compatibility-tags
class WarpBDistWheel(bdist_wheel):
# Even though we parse the platform argument ourselves, we need to declare it here as well so
# setuptools.Command can validate the command line options.
user_options = bdist_wheel.user_options + [
("platform=", "P", "Wheel platform: windows|linux|macos-x86_64|aarch64|universal"),
]
def initialize_options(self):
super().initialize_options()
self.platform = ""
def get_tag(self):
if wheel_platform is not None:
# The wheel's complete tag format is {python tag}-{abi tag}-{platform tag}.
return "py3", "none", wheel_platform.tag
else:
# The target platform was not overridden. Fall back to base class behavior.
return bdist_wheel.get_tag(self)
def run(self):
super().run()
# Clean up so we can re-invoke `py -m build --wheel -C--build-option=--platform=...`
# See https://github.com/pypa/setuptools/issues/1871 for details.
shutil.rmtree("./build", ignore_errors=True)
shutil.rmtree("./warp_lang.egg-info", ignore_errors=True)
# Distributions are identified as non-pure (i.e. containing non-Python code, or binaries) if the
# setuptools.setup() `ext_modules` parameter is not empty, but this assumes building extension
# modules from source through the Python build. This class provides an override for prebuilt binaries:
class BinaryDistribution(setuptools.Distribution):
def has_ext_modules(self):
return True
def get_warp_libraries(platform):
libraries = []
for library in detected_libraries:
if library.platform == platform:
src = "warp/" + library.directory + library.file
dst = "warp/bin/" + library.file
if src != dst:
shutil.copyfile(src, dst)
libraries.append("bin/" + library.file)
return libraries
if wheel_platform is not None:
warp_binary_libraries = get_warp_libraries(wheel_platform)
else:
warp_binary_libraries = [] # Not needed during egg_info command
setuptools.setup(
package_data={
"": [
"native/*.cpp",
"native/*.cu",
"native/*.h",
"native/clang/*.cpp",
"native/nanovdb/*.h",
"tests/assets/*",
"examples/assets/*",
]
+ warp_binary_libraries,
},
distclass=BinaryDistribution,
cmdclass={
"bdist_wheel": WarpBDistWheel,
},
)
| 7,228 | Python | 34.092233 | 141 | 0.630742 |
NVIDIA/warp/CHANGELOG.md | # CHANGELOG
## [Upcoming Release] - 2024-??-??
- Add a not-a-number floating-point constant that can be used as `wp.NAN` or `wp.nan`.
- Add `wp.isnan()`, `wp.isinf()`, and `wp.isfinite()` for scalars, vectors, matrices, etc.
- Improve kernel cache reuse by hashing just the local module constants. Previously, a
module's hash was affected by all constants declared in a Warp program.
- Revised module compilation process to allow multiple processes to use the same kernel cache directory.
  Cached kernels will now be stored in a hash-specific subdirectory.
- Add runtime checks for `wp.MarchingCubes` on field dimensions and size
- Fix memory leak in mesh BVH ([GH-225](https://github.com/NVIDIA/warp/issues/225))
- Use C++17 with NVCC when building the Warp library and user kernels
- Increase PTX target architecture up to `sm_75` (from `sm_70`), enabling Turing ISA features
- Extended NanoVDB support (see `warp.Volume`):
- Add support for data-agnostic index grids, allocation at voxel granularity
- New `volume_lookup_index`, `volume_sample_index` and generic `volume_sample`/`volume_lookup`/`volume_store` kernel-level functions
- Zero-copy aliasing of in-memory grids, support for multi-grid buffers
- Grid introspection and blind data access capabilities
- warp.fem can now work directly on NanoVDB grids using `warp.fem.Nanogrid`
- Fixed `volume_sample_v` and `volume_store_*` adjoints
- Prevent `volume_store` from overwriting grid background values
- Improve validation of user-provided fields and values in warp.fem
## [1.1.1] - 2024-05-24
- Implicitly initialize Warp when first required
- Speed up `omni.warp.core`'s startup time
## [1.1.0] - 2024-05-09
- Support returning a value from `@wp.func_native` CUDA functions using type hints
- Improved differentiability of the `wp.sim.FeatherstoneIntegrator`
- Fix gradient propagation for rigid body contacts in `wp.sim.collide()`
- Added support for event-based timing, see `wp.ScopedTimer()`
- Added Tape visualization and debugging functions, see `wp.Tape.visualize()`
- Support constructing Warp arrays from objects that define the `__cuda_array_interface__` attribute
- Support copying a struct to another device, use `struct.to(device)` to migrate struct arrays
- Allow rigid shapes to not have any collisions with other shapes in `wp.sim.Model`
- Change default test behavior to test redundant GPUs (up to 2x)
- Test each example in an individual subprocess
- Polish and optimize various examples and tests
- Allow non-contiguous point arrays to be passed to `wp.HashGrid.build()`
- Upgrade LLVM to 18.1.3 for from-source builds and Linux x86-64 builds
- Build DLL source code as C++17 and require GCC 9.4 as a minimum
- Array clone, assign, and copy are now differentiable
- Use `Ruff` for formatting and linting
- Various documentation improvements (infinity, math constants, etc.)
- Improve URDF importer, handle joint armature
- Allow builtins.bool to be used in Warp data structures
- Use external gradient arrays in backward passes when passed to `wp.launch()`
- Add Conjugate Residual linear solver, see `wp.optim.linear.cr()`
- Fix propagation of gradients on aliased copy of variables in kernels
- Facilitate debugging and speed up `import warp` by avoiding raising any exceptions during import
- Improve support for nested vec/mat assignments in structs
- Recommend Python 3.9 or higher, which is required for JAX and soon PyTorch.
- Support gradient propagation for indexing sliced multi-dimensional arrays, i.e. `a[i][j]` vs. `a[i, j]`
- Provide an informative message if setting DLL C-types failed, instructing to try rebuilding the library
## [1.0.3] - 2024-04-17
- Add a `support_level` entry to the configuration file of the extensions
## [1.0.2] - 2024-03-22
- Make examples runnable from any location
- Fix the examples not running directly from their Python file
- Add the example gallery to the documentation
- Update `README.md` examples USD location
- Update `example_graph_capture.py` description
## [1.0.1] - 2024-03-15
- Document Device `total_memory` and `free_memory`
- Documentation for allocators, streams, peer access, and generics
- Changed example output directory to current working directory
- Added `python -m warp.examples.browse` for browsing the examples folder
- Print where the USD stage file is being saved
- Added `examples/optim/example_walker.py` sample
- Make the drone example not specific to USD
- Reduce the time taken to run some examples
- Optimise rendering points with a single colour
- Clarify an error message around needing USD
- Raise exception when module is unloaded during graph capture
- Added `wp.synchronize_event()` for blocking the host thread until a recorded event completes
- Flush C print buffers when ending `stdout` capture
- Remove more unneeded CUTLASS files
- Allow setting mempool release threshold as a fractional value
## [1.0.0] - 2024-03-07
- Add `FeatherstoneIntegrator` which provides more stable simulation of articulated rigid body dynamics in generalized coordinates (`State.joint_q` and `State.joint_qd`)
- Introduce `warp.sim.Control` struct to store control inputs for simulations (optional, by default the `Model` control inputs are used as before); integrators now have a different simulation signature: `integrator.simulate(model: Model, state_in: State, state_out: State, dt: float, control: Control)`
- `joint_act` can now behave in 3 modes: with `joint_axis_mode` set to `JOINT_MODE_FORCE` it behaves as a force/torque, with `JOINT_MODE_VELOCITY` it behaves as a velocity target, and with `JOINT_MODE_POSITION` it behaves as a position target; `joint_target` has been removed
- Add adhesive contact to Euler integrators via `Model.shape_materials.ka` which controls the contact distance at which the adhesive force is applied
- Improve handling of visual/collision shapes in URDF importer so visual shapes are not involved in contact dynamics
- Experimental JAX kernel callback support
- Improve module load exception message
- Add `wp.ScopedCapture`
- Removing `enable_backward` warning for callables
- Copy docstrings and annotations from wrapped kernels, functions, structs
## [0.15.1] - 2024-03-05
- Add examples assets to the wheel packages
- Fix broken image link in documentation
- Fix codegen for custom grad functions calling their respective forward functions
- Fix custom grad function handling for functions that have no outputs
- Fix issues when `wp.config.quiet = True`
## [0.15.0] - 2024-03-04
- Add thumbnails to examples gallery
- Apply colored lighting to examples
- Moved `examples` directory under `warp/`
- Add example usage to `python -m warp.tests --help`
- Adding `torch.autograd.function` example + docs
- Add error-checking to array shapes during creation
- Adding `example_graph_capture`
- Add a Diffsim Example of a Drone
- Fix `verify_fp` causing compiler errors and support CPU kernels
- Fix to enable `matmul` to be called in CUDA graph capture
- Enable mempools by default
- Update `wp.launch` to support tuple args
- Fix BiCGSTAB and GMRES producing NaNs when converging early
- Fix warning about backward codegen being disabled in `test_fem`
- Fix `assert_np_equal` when NaN's and tolerance are involved
- Improve error message to discern between CUDA being disabled or not supported
- Support cross-module functions with user-defined gradients
- Suppress superfluous CUDA error when ending capture after errors
- Make output during initialization atomic
- Add `warp.config.max_unroll`, fix custom gradient unrolling
- Support native replay snippets using `@wp.func_native(snippet, replay_snippet=replay_snippet)`
- Look for the CUDA Toolkit in default locations if the `CUDA_PATH` environment variable or `--cuda_path` build option are not used
- Added `wp.ones()` to efficiently create one-initialized arrays
- Rename `wp.config.graph_capture_module_load_default` to `wp.config.enable_graph_capture_module_load_by_default`
## [0.14.0] - 2024-02-19
- Add support for CUDA pooled (stream-ordered) allocators
- Support memory allocation during graph capture
- Support copying non-contiguous CUDA arrays during graph capture
- Improved memory allocation/deallocation performance with pooled allocators
- Use `wp.config.enable_mempools_at_init` to enable pooled allocators during Warp initialization (if supported)
- `wp.is_mempool_supported()` - check if a device supports pooled allocators
- `wp.is_mempool_enabled()`, `wp.set_mempool_enabled()` - enable or disable pooled allocators per device
- `wp.set_mempool_release_threshold()`, `wp.get_mempool_release_threshold()` - configure memory pool release threshold
- Add support for direct memory access between devices
- Improved peer-to-peer memory transfer performance if access is enabled
- Caveat: enabling peer access may impact memory allocation/deallocation performance and increase memory consumption
- `wp.is_peer_access_supported()` - check if the memory of a device can be accessed by a peer device
- `wp.is_peer_access_enabled()`, `wp.set_peer_access_enabled()` - manage peer access for memory allocated using default CUDA allocators
- `wp.is_mempool_access_supported()` - check if the memory pool of a device can be accessed by a peer device
- `wp.is_mempool_access_enabled()`, `wp.set_mempool_access_enabled()` - manage access for memory allocated using pooled CUDA allocators
- Refined stream synchronization semantics
- `wp.ScopedStream` can synchronize with the previous stream on entry and/or exit (only sync on entry by default)
  - Functions taking an optional stream argument perform no implicit synchronization for maximum performance (e.g., `wp.copy()`, `wp.launch()`, `wp.capture_launch()`)
- Support for passing a custom `deleter` argument when constructing arrays
- Deprecation of `owner` argument - use `deleter` to transfer ownership
- Optimizations for various core API functions (e.g., `wp.zeros()`, `wp.full()`, and more)
- Fix `wp.matmul()` to always use the correct CUDA context
- Fix memory leak in BSR transpose
- Fix stream synchronization issues when copying non-contiguous arrays
- API change: `wp.matmul()` no longer accepts a device as a parameter; instead, it infers the correct device from the arrays being multiplied
- Updated DLPack utilities to the latest published standard
- External arrays can be imported into Warp directly, e.g., `wp.from_dlpack(external_array)`
- Warp arrays can be exported to consumer frameworks directly, e.g., `jax.dlpack.from_dlpack(warp_array)`
- Added CUDA stream synchronization for CUDA arrays
- The original DLPack protocol can still be used for better performance when stream synchronization is not required, see interoperability docs for details
- `warp.to_dlpack()` is about 3-4x faster in common cases
- `warp.from_dlpack()` is about 2x faster when called with a DLPack capsule
- Fixed a small CPU memory leak related to DLPack interop
- Improved performance of creating arrays
## [0.13.1] - 2024-02-22
- Ensure that the results from the `Noise Deform` are deterministic across different Kit sessions
## [0.13.0] - 2024-02-16
- Update the license to *NVIDIA Software License*, allowing commercial use (see `LICENSE.md`)
- Add `CONTRIBUTING.md` guidelines (for NVIDIA employees)
- Hash CUDA `snippet` and `adj_snippet` strings to fix caching
- Fix `build_docs.py` on Windows
- Add missing `.py` extension to `warp/tests/walkthrough_debug`
- Allow `wp.bool` usage in vector and matrix types
## [0.12.0] - 2024-02-05
- Add a warning when the `enable_backward` setting is set to `False` upon calling `wp.Tape.backward()`
- Fix kernels not being recompiled as expected when defined using a closure
- Change the kernel cache appauthor subdirectory to just "NVIDIA"
- Ensure that gradients attached to PyTorch tensors have compatible strides when calling `wp.from_torch()`
- Add a `Noise Deform` node for OmniGraph that deforms points using a perlin/curl noise
## [0.11.0] - 2024-01-23
- Re-release 1.0.0-beta.7 as a non-pre-release 0.11.0 version so it gets selected by `pip install warp-lang`.
- Introducing a new versioning and release process, detailed in `PACKAGING.md` and resembling that of [Python itself](https://devguide.python.org/developer-workflow/development-cycle/#devcycle):
- The 0.11 release(s) can be found on the `release-0.11` branch.
- Point releases (if any) go on the same minor release branch and only contain bug fixes, not new features.
- The `public` branch, previously used to merge releases into and corresponding with the GitHub `main` branch, is retired.
## [1.0.0-beta.7] - 2024-01-23
- Ensure captures are always enclosed in `try`/`finally`
- Only include .py files from the warp subdirectory into wheel packages
- Fix an extension's sample node failing at parsing some version numbers
- Allow examples to run without USD when possible
- Add a setting to disable the main Warp menu in Kit
- Add iterative linear solvers, see `wp.optim.linear.cg`, `wp.optim.linear.bicgstab`, `wp.optim.linear.gmres`, and `wp.optim.linear.LinearOperator`
- Improve error messages around global variables
- Improve error messages around mat/vec assignments
- Support conversion of scalars to native/ctypes, e.g.: `float(wp.float32(1.23))` or `ctypes.c_float(wp.float32(1.23))`
- Add a constant for infinity, see `wp.inf`
- Add a FAQ entry about array assignments
- Add a mass spring cage diff simulation example, see `examples/example_diffsim_mass_spring_cage.py`
- Add `-s`, `--suite` option for only running tests belonging to the given suites
- Fix common spelling mistakes
- Fix indentation of generated code
- Show deprecation warnings only once
- Improve `wp.render.OpenGLRenderer`
- Create the extension's symlink to the *core library* at runtime
- Fix some built-ins failing to compile the backward pass when nested inside if/else blocks
- Update examples with the new variants of the mesh query built-ins
- Fix type members that weren't zero-initialized
- Fix missing adjoint function for `wp.mesh_query_ray()`
## [1.0.0-beta.6] - 2024-01-10
- Do not create CPU copy of grad array when calling `array.numpy()`
- Fix `assert_np_equal()` bug
- Support Linux AArch64 platforms, including Jetson/Tegra devices
- Add parallel testing runner (invoke with `python -m warp.tests`, use `warp/tests/unittest_serial.py` for serial testing)
- Fix support for function calls in `range()`
- `wp.matmul()` adjoints now accumulate
- Expand available operators (e.g. vector @ matrix, scalar as dividend) and improve support for calling native built-ins
- Fix multi-gpu synchronization issue in `sparse.py`
- Add depth rendering to `wp.render.OpenGLRenderer`, document `wp.render`
- Make `wp.atomic_min()`, `wp.atomic_max()` differentiable
- Fix error reporting using the exact source segment
- Add user-friendly mesh query overloads, returning a struct instead of overwriting parameters
- Address multiple differentiability issues
- Fix backpropagation for returning array element references
- Support passing the return value to adjoints
- Add point basis space and explicit point-based quadrature for `wp.fem`
- Support overriding the LLVM project source directory path using `build_lib.py --build_llvm --llvm_source_path=`
- Fix the error message for accessing non-existing attributes
- Flatten faces array for Mesh constructor in URDF parser
## [1.0.0-beta.5] - 2023-11-22
- Fix for kernel caching when function argument types change
- Fix code-gen ordering of dependent structs
- Fix for `wp.Mesh` build on MGPU systems
- Fix for name clash bug with adjoint code: https://github.com/NVIDIA/warp/issues/154
- Add `wp.frac()` for returning the fractional part of a floating point value
- Add support for custom native CUDA snippets using `@wp.func_native` decorator
- Add support for batched matmul with batch size > 2^16-1
- Add support for transposed CUTLASS `wp.matmul()` and additional error checking
- Add support for quad and hex meshes in `wp.fem`
- Detect and warn when C++ runtime doesn't match compiler during build, e.g.: ``libstdc++.so.6: version `GLIBCXX_3.4.30' not found``
- Documentation update for `wp.BVH`
- Documentation and simplified API for runtime kernel specialization `wp.Kernel`
## [1.0.0-beta.4] - 2023-11-01
- Add `wp.cbrt()` for cube root calculation
- Add `wp.mesh_furthest_point_no_sign()` to compute furthest point on a surface from a query point
- Add support for GPU BVH builds, 10-100x faster than CPU builds for large meshes
- Add support for chained comparisons, i.e.: `0 < x < 2` (a short sketch follows this list)
- Add support for running `wp.fem` examples headless
- Fix for unit test determinism
- Fix for possible GC collection of array during graph capture
- Fix for `wp.utils.array_sum()` output initialization when used with vector types
- Coverage and documentation updates
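
A minimal sketch of the chained-comparison support mentioned above; the kernel and array names are illustrative:

```python
import warp as wp

@wp.kernel
def in_unit_band(x: wp.array(dtype=float), flags: wp.array(dtype=int)):
    tid = wp.tid()
    # chained comparison, equivalent to (0.0 < x[tid]) and (x[tid] < 2.0)
    if 0.0 < x[tid] < 2.0:
        flags[tid] = 1
    else:
        flags[tid] = 0
```
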
## [1.0.0-beta.3] - 2023-10-19
- Add support for code coverage scans (test_coverage.py), coverage at 85% in `omni.warp.core`
- Add support for named component access for vector types, e.g.: `a = v.x`
- Add support for lvalue expressions, e.g.: `array[i] += b`
- Add casting constructors for matrix and vector types
- Add support for the `type()` operator, which can be used to return a value's type inside kernels
- Add support for grid-stride kernels to support kernels with > 2^31-1 thread blocks
- Fix for multi-process initialization warnings
- Fix alignment issues with empty `wp.struct`
- Fix for return statement warning with tuple-returning functions
- Fix for `wp.batched_matmul()` registering the wrong function in the Tape
- Fix and document for `wp.sim` forward + inverse kinematics
- Fix for `wp.func` to return a default value if function does not return on all control paths
- Refactor `wp.fem` support for new basis functions, decoupled function spaces
- Optimizations for `wp.noise` functions, up to 10x faster in most cases
- Optimizations for `type_size_in_bytes()` used in array construction
### Breaking Changes
- To support grid-stride kernels, `wp.tid()` can no longer be called inside `wp.func` functions.
## [1.0.0-beta.2] - 2023-09-01
- Fix for passing bool into `wp.func` functions
- Fix for deprecation warnings appearing on `stderr`, now redirected to `stdout`
- Fix for using `for i in wp.hash_grid_query(..)` syntax (the pattern is sketched below)
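
For context, a minimal sketch of the `for i in wp.hash_grid_query(..)` pattern the fix above refers to; the grid dimensions, radius, and point data are illustrative assumptions:

```python
import numpy as np
import warp as wp

@wp.kernel
def count_neighbors(grid: wp.uint64,
                    points: wp.array(dtype=wp.vec3),
                    radius: float,
                    counts: wp.array(dtype=int)):
    tid = wp.tid()
    p = points[tid]
    n = int(0)
    # visit candidate points from nearby grid cells, then filter by distance
    for index in wp.hash_grid_query(grid, p, radius):
        if wp.length(points[index] - p) < radius:
            n += 1
    counts[tid] = n

num_points = 1024
points = wp.array(np.random.rand(num_points, 3), dtype=wp.vec3, device="cuda")
counts = wp.zeros(num_points, dtype=int, device="cuda")

grid = wp.HashGrid(dim_x=128, dim_y=128, dim_z=128, device="cuda")
grid.build(points=points, radius=0.1)

wp.launch(count_neighbors, dim=num_points,
          inputs=[grid.id, points, 0.1, counts], device="cuda")
```
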
## [1.0.0-beta.1] - 2023-08-29
- Fix for `wp.float16` being passed as kernel arguments
- Fix for compile errors with kernels using structs in backward pass
- Fix for `wp.Mesh.refit()` not being CUDA graph capturable due to synchronous temp. allocs
- Fix for the dynamic texture example demo in Kit flickering / crashing on MGPU setups by reusing `ui.DynamicImageProvider` instances
- Fix for a regression that disabled bundle change tracking in samples
- Fix for incorrect surface velocities when meshes are deforming in `OgnClothSimulate`
- Fix for incorrect lower-case when setting USD stage "up_axis" in examples
- Fix for incompatible gradient types when wrapping PyTorch tensor as a vector or matrix type
- Fix for adding open edges when building cloth constraints from meshes in `wp.sim.ModelBuilder.add_cloth_mesh()`
- Add support for `wp.fabricarray` to directly access Fabric data from Warp kernels, see https://docs.omniverse.nvidia.com/kit/docs/usdrt/latest/docs/usdrt_prim_selection.html for examples
- Add support for user defined gradient functions, see `@wp.func_replay`, and `@wp.func_grad` decorators
- Add support for more OG attribute types in `omni.warp.from_omni_graph()`
- Add support for creating NanoVDB `wp.Volume` objects from dense NumPy arrays
- Add support for `wp.volume_sample_grad_f()` which returns the value + gradient efficiently from an NVDB volume
- Add support for LLVM fp16 intrinsics for half-precision arithmetic
- Add implementation of stochastic gradient descent, see `wp.optim.SGD`
- Add `wp.fem` framework for solving weak-form PDE problems (see https://nvidia.github.io/warp/modules/fem.html)
- Optimizations for `omni.warp` extension load time (2.2s to 625ms cold start)
- Make all `omni.ui` dependencies optional so that Warp unit tests can run headless
- Deprecation of `wp.tid()` outside of kernel functions, users should pass `tid()` values to `wp.func` functions explicitly
- Deprecation of `wp.sim.Model.flatten()` for returning all contained tensors from the model
- Add support for clamping particle max velocity in `wp.sim.Model.particle_max_velocity`
- Remove dependency on `urdfpy` package, improve MJCF parser handling of default values
## [0.10.1] - 2023-07-25
- Fix for large multidimensional kernel launches (> 2^32 threads)
- Fix for module hashing with generics
- Fix for unrolling loops with break or continue statements (will skip unrolling)
- Fix for passing boolean arguments to build_lib.py (previously ignored)
- Fix build warnings on Linux
- Fix for creating array of structs from NumPy structured array
- Fix for regression on kernel load times in Kit when using `wp.sim`
- Update `wp.array.reshape()` to handle `-1` dimensions
- Update margin used for mesh queries when using `wp.sim.create_soft_body_contacts()`
- Improvements to gradient handling with `wp.from_torch()`, `wp.to_torch()` plus documentation
## [0.10.0] - 2023-07-05
- Add support for macOS universal binaries (x86 + aarch64) for M1+ support
- Add additional methods for SDF generation; see the following new methods:
- `wp.mesh_query_point_nosign()` - closest point query with no sign determination
- `wp.mesh_query_point_sign_normal()` - closest point query with sign from angle-weighted normal
- `wp.mesh_query_point_sign_winding_number()` - closest point query with fast winding number sign determination
- Add CSR/BSR sparse matrix support, see `wp.sparse` module:
- `wp.sparse.BsrMatrix`
- `wp.sparse.bsr_zeros()`, `wp.sparse.bsr_set_from_triplets()` for construction
  - `wp.sparse.bsr_mm()`, `wp.sparse.bsr_mv()` for matrix-matrix and matrix-vector products respectively
- Add array-wide utilities:
- `wp.utils.array_scan()` - prefix sum (inclusive or exclusive)
- `wp.utils.array_sum()` - sum across array
- `wp.utils.radix_sort_pairs()` - in-place radix sort (key,value) pairs
- Add support for calling `@wp.func` functions from Python (outside of kernel scope); see the sketch at the end of this section
- Add support for recording kernel launches using a `wp.Launch` object that can be replayed with low overhead, use `wp.launch(..., record_cmd=True)` to generate a command object
- Optimizations for `wp.struct` kernel arguments, up to 20x faster launches for kernels with large structs or number of params
- Refresh USD samples to use bundle based workflow + change tracking
- Add Python API for manipulating mesh and point bundle data in OmniGraph, see `omni.warp.nodes` module, see `omni.warp.nodes.mesh_create_bundle()`, `omni.warp.nodes.mesh_get_points()`, etc
- Improvements to `wp.array`:
- Fix a number of array methods misbehaving with empty arrays
- Fix a number of bugs and memory leaks related to gradient arrays
- Fix array construction when creating arrays in pinned memory from a data source in pageable memory
  - `wp.empty()` no longer zeroes out memory and returns an uninitialized array, as intended
- `array.zero_()` and `array.fill_()` work with non-contiguous arrays
- Support wrapping non-contiguous NumPy arrays without a copy
- Support preserving the outer dimensions of NumPy arrays when wrapping them as Warp arrays of vector or matrix types
- Improve PyTorch and DLPack interop with Warp arrays of arbitrary vectors and matrices
- `array.fill_()` can now take lists or other sequences when filling arrays of vectors or matrices, e.g. `arr.fill_([[1, 2], [3, 4]])`
- `array.fill_()` now works with arrays of structs (pass a struct instance)
- `wp.copy()` gracefully handles copying between non-contiguous arrays on different devices
- Add `wp.full()` and `wp.full_like()`, e.g., `a = wp.full(shape, value)`
- Add optional `device` argument to `wp.empty_like()`, `wp.zeros_like()`, `wp.full_like()`, and `wp.clone()`
- Add `indexedarray` methods `.zero_()`, `.fill_()`, and `.assign()`
- Fix `indexedarray` methods `.numpy()` and `.list()`
- Fix `array.list()` to work with arrays of any Warp data type
- Fix `array.list()` synchronization issue with CUDA arrays
- `array.numpy()` called on an array of structs returns a structured NumPy array with named fields
- Improve the performance of creating arrays
- Fix for `Error: No module named 'omni.warp.core'` when running some Kit configurations (e.g.: stubgen)
- Fix for `wp.struct` instance address being included in module content hash
- Fix codegen with overridden function names
- Fix for kernel hashing so it occurs after code generation and before loading to fix a bug with stale kernel cache
- Fix for `wp.BVH.refit()` when executed on the CPU
- Fix adjoint of `wp.struct` constructor
- Fix element accessors for `wp.float16` vectors and matrices in Python
- Fix `wp.float16` members in structs
- Remove deprecated `wp.ScopedCudaGuard()`, please use `wp.ScopedDevice()` instead
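
A minimal sketch of calling a `@wp.func` function from the Python interpreter, as added above; the function names are illustrative:

```python
import warp as wp

@wp.func
def sqr(x: float):
    return x * x

@wp.kernel
def apply_sqr(a: wp.array(dtype=float)):
    tid = wp.tid()
    a[tid] = sqr(a[tid])  # usable inside kernels, as before

# the same function can now also be evaluated directly from Python
print(sqr(3.0))  # 9.0
```
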
## [0.9.0] - 2023-06-01
- Add support for in-place modifications to vector, matrix, and struct types inside kernels (will warn during backward pass with `wp.verbose` if using gradients)
- Add support for step-through VSCode debugging of kernel code with standalone LLVM compiler, see `wp.breakpoint()`, and `walkthrough_debug.py`
- Add support for default values on built-in functions
- Add support for multi-valued `@wp.func` functions
- Add support for `pass`, `continue`, and `break` statements
- Add missing `__sincos_stret` symbol for macOS
- Add support for gradient propagation through `wp.Mesh.points`, and other cases where arrays are passed to native functions
- Add support for Python `@` operator as an alias for `wp.matmul()`
- Add XPBD support for particle-particle collision
- Add support for individual particle radii: `ModelBuilder.add_particle` has a new `radius` argument, `Model.particle_radius` is now a Warp array
- Add per-particle flags as a `Model.particle_flags` Warp array, introduce `PARTICLE_FLAG_ACTIVE` to define whether a particle is being simulated and participates in contact dynamics
- Add support for Python bitwise operators `&`, `|`, `~`, `<<`, `>>`
- Switch to using standalone LLVM compiler by default for `cpu` devices
- Split `omni.warp` into `omni.warp.core` for Omniverse applications that want to use the Warp Python module with minimal additional dependencies
- Disable kernel gradient generation by default inside Omniverse for improved compile times
- Fix for bounds checking on element access of vector/matrix types
- Fix for stream initialization when a custom (non-primary) external CUDA context has been set on the calling thread
- Fix for duplicate `@wp.struct` registration during hot reload
- Fix for array `unot()` operator so kernel writers can use `if not array:` syntax
- Fix for case where dynamic loops are nested within unrolled loops
- Change `wp.hash_grid_point_id()` to return -1 if the `wp.HashGrid` has not been reserved before
- Deprecate `wp.Model.soft_contact_distance` which is now replaced by `wp.Model.particle_radius`
- Deprecate single scalar particle radius (should be a per-particle array)
## [0.8.2] - 2023-04-21
- Add `ModelBuilder.soft_contact_max` to control the maximum number of soft contacts that can be registered. Use `Model.allocate_soft_contacts(new_count)` to change count on existing `Model` objects.
- Add support for `bool` parameters
- Add support for logical boolean operators with `int` types
- Fix for `wp.quat()` default constructor
- Fix conditional reassignments
- Add sign determination using angle weighted normal version of `wp.mesh_query_point()` as `wp.mesh_query_sign_normal()`
- Add sign determination using winding number of `wp.mesh_query_point()` as `wp.mesh_query_sign_winding_number()`
- Add query point without sign determination `wp.mesh_query_no_sign()`
## [0.8.1] - 2023-04-13
- Fix for regression when passing flattened numeric lists as matrix arguments to kernels
- Fix for regressions when passing `wp.struct` types with uninitialized (`None`) member attributes
## [0.8.0] - 2023-04-05
- Add `Texture Write` node for updating dynamic RTX textures from Warp kernels / nodes
- Add multi-dimensional kernel support to Warp Kernel Node
- Add `wp.load_module()` to pre-load specific modules (pass `recursive=True` to load recursively)
- Add `wp.poisson()` for sampling Poisson distributions
- Add support for UsdPhysics schema see `wp.sim.parse_usd()`
- Add XPBD rigid body implementation plus diff. simulation examples
- Add support for standalone CPU compilation (no host-compiler) with an LLVM backend, enable with `--standalone` build option
- Add support for per-timer color in `wp.ScopedTimer()`
- Add support for row-based construction of matrix types outside of kernels
- Add support for setting and getting row vectors for Python matrices, see `matrix.get_row()`, `matrix.set_row()`
- Add support for instantiating `wp.struct` types within kernels
- Add support for indexed arrays, `slice = array[indices]` will now generate a sparse slice of array data
- Add support for generic kernel params, use `def compute(param: Any):`
- Add support for `with wp.ScopedDevice("cuda") as device:` syntax (same for `wp.ScopedStream()`, `wp.Tape()`); see the sketch at the end of this section
- Add support for creating custom length vector/matrices inside kernels, see `wp.vector()`, and `wp.matrix()`
- Add support for creating identity matrices in kernels with, e.g.: `I = wp.identity(n=3, dtype=float)`
- Add support for unary plus operator (`wp.pos()`)
- Add support for `wp.constant` variables to be used directly in Python without having to use `.val` member
- Add support for nested `wp.struct` types
- Add support for returning `wp.struct` from functions
- Add `--quick` build for faster local dev. iteration (uses a reduced set of SASS arches)
- Add optional `requires_grad` parameter to `wp.from_torch()` to override gradient allocation
- Add type hints for generic vector / matrix types in Python stubs
- Add support for custom user function recording in `wp.Tape()`
- Add support for registering CUTLASS `wp.matmul()` with tape backward pass
- Add support for grids with > 2^31 threads (each dimension may be up to INT_MAX in length)
- Add CPU fallback for `wp.matmul()`
- Optimizations for `wp.launch()`, up to 3x faster launches in common cases
- Fix `wp.randf()` conversion to float to reduce bias for uniform sampling
- Fix capture of `wp.func` and `wp.constant` types from inside Python closures
- Fix for CUDA on WSL
- Fix for matrices in structs
- Fix for transpose indexing for some non-square matrices
- Enable Python faulthandler by default
- Update to VS2019
### Breaking Changes
- `wp.constant` variables can now be treated as their true type, accessing the underlying value through `constant.val` is no longer supported
- `wp.sim.model.ground_plane` is now a `wp.array` to support gradient, users should call `builder.set_ground_plane()` to create the ground
- `wp.sim` capsule, cones, and cylinders are now aligned with the default USD up-axis
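
For illustration, a minimal sketch of the `wp.ScopedDevice` syntax added in this release; the device alias and array shape are assumptions:

```python
import warp as wp

# allocations and launches inside the scope target the given device
with wp.ScopedDevice("cuda:0"):
    a = wp.zeros(1024, dtype=wp.vec3)

# the same context-manager pattern applies to wp.ScopedStream() and wp.Tape()
```
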
## [0.7.2] - 2023-02-15
- Reduce test time for vec/math types
- Clean-up CUDA disabled build pipeline
- Remove extension.gen.toml to make Kit packages Python version independent
- Handle additional cases for array indexing inside Python
## [0.7.1] - 2023-02-14
- Disabling some slow tests for Kit
- Make unit tests run on first GPU only by default
## [0.7.0] - 2023-02-13
- Add support for arbitrary length / type vector and matrices e.g.: `wp.vec(length=7, dtype=wp.float16)`, see `wp.vec()`, and `wp.mat()` (a sketch follows this list)
- Add support for `array.flatten()`, `array.reshape()`, and `array.view()` with NumPy semantics
- Add support for slicing `wp.array` types in Python
- Add `wp.from_ptr()` helper to construct arrays from an existing allocation
- Add support for `break` statements in ranged-for and while loops (backward pass support currently not implemented)
- Add built-in mathematic constants, see `wp.pi`, `wp.e`, `wp.log2e`, etc.
- Add built-in conversion between degrees and radians, see `wp.degrees()`, `wp.radians()`
- Add security pop-up for Kernel Node
- Improve error handling for kernel return values
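
A minimal sketch of the arbitrary-length vector types added above; using the custom type as an array dtype is shown here as an assumption that follows the generic vector semantics:

```python
import warp as wp

# create a 7-component half-precision vector type outside of kernels
vec7h = wp.vec(length=7, dtype=wp.float16)

v = vec7h()                    # zero-initialized instance
a = wp.zeros(16, dtype=vec7h)  # custom types can also back Warp arrays
```
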
## [0.6.3] - 2023-01-31
- Add DLPack utilities, see `wp.from_dlpack()`, `wp.to_dlpack()`
- Add Jax utilities, see `wp.from_jax()`, `wp.to_jax()`, `wp.device_from_jax()`, `wp.device_to_jax()`
- Fix for Linux Kit extensions OM-80132, OM-80133
## [0.6.2] - 2023-01-19
- Updated `wp.from_torch()` to support more data types
- Updated `wp.from_torch()` to automatically determine the target Warp data type if not specified
- Updated `wp.from_torch()` to support non-contiguous tensors with arbitrary strides
- Add CUTLASS integration for dense GEMMs, see `wp.matmul()` and `wp.matmul_batched()`
- Add QR and Eigen decompositions for `mat33` types, see `wp.qr3()`, and `wp.eig3()`
- Add default (zero) constructors for matrix types
- Add a flag to suppress all output except errors and warnings (set `wp.config.quiet = True`)
- Skip recompilation when Kernel Node attributes are edited
- Allow optional attributes for Kernel Node
- Allow disabling backward pass code-gen on a per-kernel basis, use `@wp.kernel(enable_backward=False)`
- Replace Python `imp` package with `importlib`
- Fix for quaternion slerp gradients (`wp.quat_slerp()`)
## [0.6.1] - 2022-12-05
- Fix for non-CUDA builds
- Fix strides computation in array_t constructor, fixes a bug with accessing mesh indices through mesh.indices[]
- Disable backward pass code generation for kernel node (4-6x faster compilation)
- Switch to linbuild for universal Linux binaries (affects TeamCity builds only)
## [0.6.0] - 2022-11-28
- Add support for CUDA streams, see `wp.Stream`, `wp.get_stream()`, `wp.set_stream()`, `wp.synchronize_stream()`, `wp.ScopedStream` (see the sketch at the end of this section)
- Add support for CUDA events, see `wp.Event`, `wp.record_event()`, `wp.wait_event()`, `wp.wait_stream()`, `wp.Stream.record_event()`, `wp.Stream.wait_event()`, `wp.Stream.wait_stream()`
- Add support for PyTorch stream interop, see `wp.stream_from_torch()`, `wp.stream_to_torch()`
- Add support for allocating host arrays in pinned memory for asynchronous data transfers, use `wp.array(..., pinned=True)` (default is non-pinned)
- Add support for direct conversions between all scalar types, e.g.: `x = wp.uint8(wp.float64(3.0))`
- Add per-module option to enable fast math, use `wp.set_module_options({"fast_math": True})`, fast math is now *disabled* by default
- Add support for generating CUBIN kernels instead of PTX on systems with older drivers
- Add user preference options for CUDA kernel output ("ptx" or "cubin", e.g.: `wp.config.cuda_output = "ptx"` or per-module `wp.set_module_options({"cuda_output": "ptx"})`)
- Add kernel node for OmniGraph
- Add `wp.quat_slerp()`, `wp.quat_to_axis_angle()`, `wp.rotate_rodriquez()` and adjoints for all remaining quaternion operations
- Add support for unrolling for-loops when range is a `wp.constant`
- Add support for arithmetic operators on built-in vector / matrix types outside of `wp.kernel`
- Add support for multiple solution variables in `wp.optim` Adam optimization
- Add nested attribute support for `wp.struct` attributes
- Add missing adjoint implementations for spatial math types, and document all functions with missing adjoints
- Add support for retrieving NanoVDB tiles and voxel size, see `wp.Volume.get_tiles()`, and `wp.Volume.get_voxel_size()`
- Add support for store operations on integer NanoVDB volumes, see `wp.volume_store_i()`
- Expose `wp.Mesh` points, indices, as arrays inside kernels, see `wp.mesh_get()`
- Optimizations for `wp.array` construction, 2-3x faster on average
- Optimizations for URDF import
- Fix various deployment issues by statically linking with all CUDA libs
- Update warp.so/warp.dll to CUDA Toolkit 11.5
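
A minimal sketch of the stream APIs added in this release; the kernel, device alias, and sizes are illustrative:

```python
import warp as wp

@wp.kernel
def inc(a: wp.array(dtype=float)):
    tid = wp.tid()
    a[tid] = a[tid] + 1.0

stream = wp.Stream("cuda:0")

# work issued inside the scope is ordered on `stream` rather than the
# device's default stream
with wp.ScopedStream(stream):
    a = wp.zeros(1024, dtype=float)
    wp.launch(inc, dim=a.size, inputs=[a])

wp.synchronize_stream(stream)
```
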
## [0.5.1] - 2022-11-01
- Fix for unit tests in Kit
## [0.5.0] - 2022-10-31
- Add smoothed particle hydrodynamics (SPH) example, see `example_sph.py`
- Add support for accessing `array.shape` inside kernels, e.g.: `width = arr.shape[0]`
- Add dependency tracking to hot-reload modules if dependencies were modified
- Add lazy acquisition of CUDA kernel contexts (save ~300Mb of GPU memory in MGPU environments)
- Add BVH object, see `wp.Bvh` and `bvh_query_ray()`, `bvh_query_aabb()` functions
- Add component index operations for `spatial_vector`, `spatial_matrix` types
- Add `wp.lerp()` and `wp.smoothstep()` builtins
- Add `wp.optim` module with implementation of the Adam optimizer for float and vector types
- Add support for transient Python modules (fix for Houdini integration)
- Add `wp.length_sq()`, `wp.trace()` for vector / matrix types respectively
- Add missing adjoints for `wp.quat_rpy()`, `wp.determinant()`
- Add `wp.atomic_min()`, `wp.atomic_max()` operators
- Add vectorized version of `wp.sim.model.add_cloth_mesh()`
- Add NVDB volume allocation API, see `wp.Volume.allocate()`, and `wp.Volume.allocate_by_tiles()`
- Add NVDB volume write methods, see `wp.volume_store_i()`, `wp.volume_store_f()`, `wp.volume_store_v()`
- Add MGPU documentation
- Add example showing how to compute Jacobian of multiple environments in parallel, see `example_jacobian_ik.py`
- Add `wp.Tape.zero()` support for `wp.struct` types
- Make SampleBrowser an optional dependency for Kit extension
- Make `wp.Mesh` object accept both 1d and 2d arrays of face vertex indices
- Fix for reloading of class member kernel / function definitions using `importlib.reload()`
- Fix for hashing of `wp.constants()` not invalidating kernels
- Fix for reload when multiple `.ptx` versions are present
- Improved error reporting during code-gen
## [0.4.3] - 2022-09-20
- Update all samples to use GPU interop path by default
- Fix for arrays > 2GB in length
- Add support for per-vertex USD mesh colors with `wp.render` class
## [0.4.2] - 2022-09-07
- Register Warp samples to the sample browser in Kit
- Add NDEBUG flag to release mode kernel builds
- Fix for particle solver node when using a large number of particles
- Fix for broken cameras in Warp sample scenes
## [0.4.1] - 2022-08-30
- Add geometry sampling methods, see `wp.sample_unit_cube()`, `wp.sample_unit_disk()`, etc
- Add `wp.lower_bound()` for searching sorted arrays
- Add an option for disabling code-gen of backward pass to improve compilation times, see `wp.set_module_options({"enable_backward": False})`, True by default
- Fix for using Warp from Script Editor or when module does not have a `__file__` attribute
- Fix for hot reload of modules containing `wp.func()` definitions
- Fix for debug flags not being set correctly on CUDA when `wp.config.mode == "debug"`, this enables bounds checking on CUDA kernels in debug mode
- Fix for code gen of functions that do not return a value
## [0.4.0] - 2022-08-09
- Fix for FP16 conversions on GPUs without hardware support
- Fix for `runtime = None` errors when reloading the Warp module
- Fix for PTX architecture version when running with older drivers, see `wp.config.ptx_target_arch`
- Fix for USD imports from `__init__.py`, defer them to individual functions that need them
- Fix for robustness issues with sign determination for `wp.mesh_query_point()`
- Fix for `wp.HashGrid` memory leak when creating/destroying grids
- Add CUDA version checks for toolkit and driver
- Add support for cross-module `@wp.struct` references
- Support running even if CUDA initialization failed, use `wp.is_cuda_available()` to check availability
- Statically linking with the CUDA runtime library to avoid deployment issues
### Breaking Changes
- Removed `wp.runtime` reference from the top-level module, as it should be considered private
## [0.3.2] - 2022-07-19
- Remove Torch import from `__init__.py`, defer import to `wp.from_torch()`, `wp.to_torch()`
## [0.3.1] - 2022-07-12
- Fix for marching cubes reallocation after initialization
- Add support for closest point between line segment tests, see `wp.closest_point_edge_edge()` builtin
- Add support for per-triangle elasticity coefficients in simulation, see `wp.sim.ModelBuilder.add_cloth_mesh()`
- Add support for specifying default device, see `wp.set_device()`, `wp.get_device()`, `wp.ScopedDevice`
- Add support for multiple GPUs (e.g., `"cuda:0"`, `"cuda:1"`), see `wp.get_cuda_devices()`, `wp.get_cuda_device_count()`, `wp.get_cuda_device()`
- Add support for explicitly targeting the current CUDA context using device alias `"cuda"`
- Add support for using arbitrary external CUDA contexts, see `wp.map_cuda_device()`, `wp.unmap_cuda_device()`
- Add PyTorch device aliasing functions, see `wp.device_from_torch()`, `wp.device_to_torch()`
### Breaking Changes
- A CUDA device is used by default, if available (aligned with `wp.get_preferred_device()`)
- `wp.ScopedCudaGuard` is deprecated, use `wp.ScopedDevice` instead
- `wp.synchronize()` now synchronizes all devices; for finer-grained control, use `wp.synchronize_device()`
- Device alias `"cuda"` now refers to the current CUDA context, rather than a specific device like `"cuda:0"` or `"cuda:1"`
## [0.3.0] - 2022-07-08
- Add support for FP16 storage type, see `wp.float16`
- Add support for per-dimension byte strides, see `wp.array.strides`
- Add support for passing Python classes as kernel arguments, see `@wp.struct` decorator (a sketch follows this list)
- Add additional bounds checks for builtin matrix types
- Add additional floating point checks, see `wp.config.verify_fp`
- Add interleaved user source with generated code to aid debugging
- Add generalized GPU marching cubes implementation, see `wp.MarchingCubes` class
- Add additional scalar*matrix vector operators
- Add support for retrieving a single row from builtin types, e.g.: `r = m33[i]`
- Add `wp.log2()` and `wp.log10()` builtins
- Add support for quickly instancing `wp.sim.ModelBuilder` objects to improve env. creation performance for RL
- Remove custom CUB version and improve compatibility with CUDA 11.7
- Fix to preserve external user-gradients when calling `wp.Tape.zero()`
- Fix to only allocate gradient of a Torch tensor if `requires_grad=True`
- Fix for missing `wp.mat22` constructor adjoint
- Fix for ray-cast precision in edge case on GPU (watertightness issue)
- Fix for kernel hot-reload when definition changes
- Fix for NVCC warnings on Linux
- Fix for generated function names when kernels are defined as class functions
- Fix for reload of generated CPU kernel code on Linux
- Fix for example scripts to output USD at 60 timecodes per-second (better Kit compatibility)
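
A minimal sketch of passing a Python class to a kernel via the `@wp.struct` decorator mentioned above; the field names are illustrative:

```python
import warp as wp

@wp.struct
class Particle:
    pos: wp.vec3
    mass: float

@wp.kernel
def total_mass(particles: wp.array(dtype=Particle), out: wp.array(dtype=float)):
    tid = wp.tid()
    # struct members are accessed with regular attribute syntax
    wp.atomic_add(out, 0, particles[tid].mass)
```
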
## [0.2.3] - 2022-06-13
- Fix for incorrect 4d array bounds checking
- Fix for `wp.constant` changes not updating module hash
- Fix for stale CUDA kernel cache when CPU kernels launched first
- Array gradients are now allocated along with the arrays and accessible as `wp.array.grad`; users should take care to always call `wp.Tape.zero()` to clear gradients between different invocations of `wp.Tape.backward()` (a sketch follows this section)
- Added `wp.array.fill_()` to set all entries to a scalar value (4-byte values only currently)
### Breaking Changes
- Tape `capture` option has been removed, users can now capture tapes inside existing CUDA graphs (e.g.: inside Torch)
- Scalar loss arrays should now explicitly set `requires_grad=True` at creation time
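
A minimal sketch combining the gradient and `requires_grad` changes above; the kernel and values are illustrative:

```python
import warp as wp

@wp.kernel
def sum_sq(x: wp.array(dtype=float), loss: wp.array(dtype=float)):
    tid = wp.tid()
    wp.atomic_add(loss, 0, x[tid] * x[tid])

x = wp.array([1.0, 2.0, 3.0], dtype=float, requires_grad=True)
loss = wp.zeros(1, dtype=float, requires_grad=True)  # per the breaking change

tape = wp.Tape()
with tape:
    wp.launch(sum_sq, dim=x.size, inputs=[x, loss])

tape.backward(loss=loss)
print(x.grad.numpy())  # gradients live alongside the array as wp.array.grad
tape.zero()            # clear gradients between invocations of backward()
```
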
## [0.2.2] - 2022-05-30
- Fix for `from import *` inside Warp initialization
- Fix for body space velocity when using deforming Mesh objects with scale
- Fix for noise gradient discontinuities affecting `wp.curlnoise()`
- Fix for `wp.from_torch()` to correctly preserve shape
- Fix for URDF parser incorrectly passing density to scale parameter
- Optimizations for startup time from 3s -> 0.3s
- Add support for custom kernel cache location, Warp will now store generated binaries in the user's application directory
- Add support for cross-module function references, e.g.: call another module's `@wp.func` functions
- Add support for overloading `@wp.func` functions based on argument type
- Add support for calling built-in functions directly from Python interpreter outside kernels (experimental)
- Add support for auto-complete and docstring lookup for builtins in IDEs like VSCode, PyCharm, etc
- Add support for doing partial array copies, see `wp.copy()` for details
- Add support for accessing mesh data directly in kernels, see `wp.mesh_get_point()`, `wp.mesh_get_index()`, `wp.mesh_eval_face_normal()`
- Change to only compile for targets where kernel is launched (e.g.: will not compile CPU unless explicitly requested)
### Breaking Changes
- Builtin methods such as `wp.quat_identity()` now call the Warp native implementation directly and will return a `wp.quat` object instead of NumPy array
- NumPy implementations of many builtin methods have been moved to `wp.utils` and will be deprecated
- Local `@wp.func` functions should not be namespaced when called, e.g.: previously `wp.myfunc()` would work even if `myfunc()` was not a builtin
- Removed `wp.rpy2quat()`, please use `wp.quat_rpy()` instead
## [0.2.1] - 2022-05-11
- Fix for unit tests in Kit
## [0.2.0] - 2022-05-02
### Warp Core
- Fix for unrolling loops with negative bounds
- Fix for unresolved symbol `hash_grid_build_device()` not found when lib is compiled without CUDA support
- Fix for failure to load nvrtc-builtins64_113.dll when user has a newer CUDA toolkit installed on their machine
- Fix for conversion of Torch tensors to `wp.array` with a vector dtype (incorrect row count)
- Fix for `warp.dll` not found on some Windows installations
- Fix for macOS builds on Clang 13.x
- Fix for step-through debugging of kernels on Linux
- Add argument type checking for user defined `@wp.func` functions
- Add support for custom iterable types, supports ranges, hash grid, and mesh query objects
- Add support for multi-dimensional arrays, for example use `x = array[i,j,k]` syntax to address a 3-dimensional array
- Add support for multi-dimensional kernel launches, use `launch(kernel, dim=(i,j,k), ...` and `i,j,k = wp.tid()` to obtain thread indices (see the sketch at the end of this subsection)
- Add support for bounds-checking array memory accesses in debug mode, use `wp.config.mode = "debug"` to enable
- Add support for differentiating through dynamic and nested for-loops
- Add support for evaluating MLP neural network layers inside kernels with custom activation functions, see `wp.mlp()`
- Add additional NVDB sampling methods and adjoints, see `wp.volume_sample_i()`, `wp.volume_sample_f()`, and `wp.volume_sample_vec()`
- Add support for loading zlib compressed NVDB volumes, see `wp.Volume.load_from_nvdb()`
- Add support for triangle intersection testing, see `wp.intersect_tri_tri()`
- Add support for NVTX profile zones in `wp.ScopedTimer()`
- Add support for additional transform and quaternion math operations, see `wp.inverse()`, `wp.quat_to_matrix()`, `wp.quat_from_matrix()`
- Add fast math (`--fast-math`) to kernel compilation by default
- Add `wp.torch` import by default (if PyTorch is installed)
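
A minimal sketch of the multi-dimensional array and launch support above; the pattern and sizes are illustrative:

```python
import warp as wp

@wp.kernel
def checkerboard(a: wp.array2d(dtype=float)):
    i, j = wp.tid()  # one thread index per launch dimension
    a[i, j] = float((i + j) % 2)

a = wp.zeros((64, 64), dtype=float)
wp.launch(checkerboard, dim=(64, 64), inputs=[a])
```
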
### Warp Kit
- Add Kit menu for browsing Warp documentation and example scenes under 'Window->Warp'
- Fix for OgnParticleSolver.py example when collider is coming from Read Prim into Bundle node
### Warp Sim
- Fix for joint attachment forces
- Fix for URDF importer and floating base support
- Add examples showing how to use differentiable forward kinematics to solve inverse kinematics
- Add examples for URDF cartpole and quadruped simulation
### Breaking Changes
- `wp.volume_sample_world()` is now replaced by `wp.volume_sample_f/i/vec()` which operate in index (local) space. Users should use `wp.volume_world_to_index()` to transform points from world space to index space before sampling.
- `wp.mlp()` expects multi-dimensional arrays instead of one-dimensional arrays for inference, all other semantics remain the same as earlier versions of this API.
- `wp.array.length` member has been removed, please use `wp.array.shape` to access array dimensions, or use `wp.array.size` to get total element count
- Marking `dense_gemm()`, `dense_chol()`, etc methods as experimental until we revisit them
## [0.1.25] - 2022-03-20
- Add support for class methods to be Warp kernels
- Add HashGrid reserve() so it can be used with CUDA graphs
- Add support for CUDA graph capture of tape forward/backward passes
- Add support for Python 3.8.x and 3.9.x
- Add hyperbolic trigonometric functions, see `wp.tanh()`, `wp.sinh()`, `wp.cosh()`
- Add support for floored division on integer types
- Move tests into core library so they can be run in Kit environment
## [0.1.24] - 2022-03-03
### Warp Core
- Add NanoVDB support, see `wp.volume_sample*()` methods
- Add support for reading compile-time constants in kernels, see `wp.constant()`
- Add support for __cuda_array_interface__ protocol for zero-copy interop with PyTorch, see `wp.torch.to_torch()`
- Add support for additional numeric types, i8, u8, i16, u16, etc
- Add better checks for device strings during allocation / launch
- Add support for sampling random numbers with a normal distribution, see `wp.randn()`
- Upgrade to CUDA 11.3
- Update example scenes to Kit 103.1
- Deduce array dtype from np.array when one is not provided
- Fix for ranged for loops with negative step sizes
- Fix for 3d and 4d spherical gradient distributions
## [0.1.23] - 2022-02-17
### Warp Core
- Fix for generated code folder being removed during Showroom installation
- Fix for macOS support
- Fix for dynamic for-loop code gen edge case
- Add procedural noise primitives, see `wp.noise()`, `wp.pnoise()`, `wp.curlnoise()`
- Move simulation helpers out of tests into `wp.sim` module
## [0.1.22] - 2022-02-14
### Warp Core
- Fix for .so reloading on Linux
- Fix for while loop code-gen in some edge cases
- Add rounding functions `wp.round()`, `wp.rint()`, `wp.trunc()`, `wp.floor()`, `wp.ceil()`
- Add support for printing strings and formatted strings from kernels
- Add MSVC compiler version detection and require minimum
### Warp Sim
- Add support for universal and compound joint types
## [0.1.21] - 2022-01-19
### Warp Core
- Fix for exception on shutdown in empty `wp.array` objects
- Fix for hot reload of CPU kernels in Kit
- Add hash grid primitive for point-based spatial queries, see `wp.hash_grid_query()`, `wp.hash_grid_query_next()`
- Add new PRNG methods using PCG-based generators, see `wp.rand_init()`, `wp.randf()`, `wp.randi()`
- Add support for AABB mesh queries, see `wp.mesh_query_aabb()`, `wp.mesh_query_aabb_next()`
- Add support for all Python `range()` loop variants
- Add builtin vec2 type and additional math operators, `wp.pow()`, `wp.tan()`, `wp.atan()`, `wp.atan2()`
- Remove dependency on CUDA driver library at build time
- Remove unused NVRTC binary dependencies (50mb smaller Linux distribution)
### Warp Sim
- Bundle import of multiple shapes for simulation nodes
- New OgnParticleVolume node for sampling shapes -> particles
- New OgnParticleSolver node for DEM style granular materials
## [0.1.20] - 2021-11-02
- Updates to the ripple solver for GTC (support for multiple colliders, buoyancy, etc)
## [0.1.19] - 2021-10-15
- Publish from 2021.3 to avoid omni.graph database incompatibilities
## [0.1.18] - 2021-10-08
- Enable Linux support (tested on 20.04)
## [0.1.17] - 2021-09-30
- Fix for 3x3 SVD adjoint
- Fix for A6000 GPU (bump compute model to sm_52 minimum)
- Fix for .dll unload on rebuild
- Fix for possible array destruction warnings on shutdown
- Rename spatial_transform -> transform
- Documentation update
## [0.1.16] - 2021-09-06
- Fix for case where simple assignments (a = b) incorrectly generated reference rather than value copy
- Handle passing zero-length (empty) arrays to kernels
## [0.1.15] - 2021-09-03
- Add additional math library functions (asin, etc)
- Add builtin 3x3 SVD support
- Add support for named constants (True, False, None)
- Add support for if/else statements (differentiable)
- Add custom memset kernel to avoid CPU overhead of cudaMemset()
- Add rigid body joint model to `wp.sim` (based on Brax)
- Add Linux, MacOS support in core library
- Fix for incorrectly treating pure assignment as reference instead of value copy
- Remove the need to transfer arrays to the CPU before NumPy conversion (now done implicitly)
- Update the example OgnRipple wave equation solver to use bundles
## [0.1.14] - 2021-08-09
- Fix for out-of-bounds memory access in CUDA BVH
- Better error checking after kernel launches (use `wp.config.verify_cuda=True`)
- Fix for vec3 normalize adjoint code
## [0.1.13] - 2021-07-29
- Remove OgnShrinkWrap.py test node
## [0.1.12] - 2021-07-29
- Switch to Woop et al.'s watertight ray-tri intersection test
- Disable --fast-math in CUDA compilation step for improved precision
## [0.1.11] - 2021-07-28
- Fix for `wp.mesh_query_ray()` returning incorrect t-value
## [0.1.10] - 2021-07-28
- Fix for OV extension fwatcher filters to avoid hot-reload loop due to OGN regeneration
## [0.1.9] - 2021-07-21
- Fix for loading sibling DLL paths
- Better type checking for built-in function arguments
- Added runtime docs, can now list all builtins using `wp.print_builtins()`
## [0.1.8] - 2021-07-14
- Fix for hot-reload of CUDA kernels
- Add Tape object for replaying differentiable kernels
- Add helpers for Torch interop (convert `torch.Tensor` to `wp.Array`)
## [0.1.7] - 2021-07-05
- Switch to NVRTC for CUDA runtime
- Allow running without host compiler
- Disable asserts in kernel release mode (small perf. improvement)
## [0.1.6] - 2021-06-14
- Look for CUDA toolchain in target-deps
## [0.1.5] - 2021-06-14
- Rename OgLang -> Warp
- Improve CUDA environment error checking
- Clean-up some logging, add verbose mode (`wp.config.verbose`)
## [0.1.4] - 2021-06-10
- Add support for mesh raycast
## [0.1.3] - 2021-06-09
- Add support for unary negation operator
- Add support for mutating variables during dynamic loops (non-differentiable)
- Add support for in-place operators
- Improve kernel cache start up times (avoids adjointing before cache check)
- Update README.md with requirements / examples
## [0.1.2] - 2021-06-03
- Add support for querying mesh velocities
- Add CUDA graph support, see `wp.capture_begin()`, `wp.capture_end()`, `wp.capture_launch()` (a sketch follows this list)
- Add explicit initialization phase, `wp.init()`
- Add variational Euler solver (sim)
- Add contact caching, switch to nonlinear friction model (sim)
- Fix for Linux/macOS support
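
A minimal sketch of the CUDA graph capture API added here; it requires a CUDA-capable device, and the kernel and replay count are illustrative:

```python
import warp as wp

@wp.kernel
def damp(a: wp.array(dtype=float)):
    tid = wp.tid()
    a[tid] = a[tid] * 0.5

wp.init()
a = wp.zeros(1024, dtype=float, device="cuda")

# record a series of launches once...
wp.capture_begin()
wp.launch(damp, dim=a.size, inputs=[a], device="cuda")
graph = wp.capture_end()

# ...then replay them with minimal per-launch overhead
for _ in range(10):
    wp.capture_launch(graph)
```
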
## [0.1.1] - 2021-05-18
- Fix bug with conflicting CUDA contexts
## [0.1.0] - 2021-05-17
- Initial publish for alpha testing
| 54,394 | Markdown | 56.257895 | 302 | 0.753557 |
NVIDIA/warp/build_docs.py | # Copyright (c) 2022 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
import os
import shutil
import subprocess
from warp.context import export_functions_rst, export_stubs
base_path = os.path.dirname(os.path.realpath(__file__))
# generate stubs for autocomplete
with open(os.path.join(base_path, "warp", "stubs.py"), "w") as stub_file:
    export_stubs(stub_file)
# code formatting of stubs.py
subprocess.run(["ruff", "format", "--verbose", os.path.join(base_path, "warp", "stubs.py")])
with open(os.path.join(base_path, "docs", "modules", "functions.rst"), "w") as function_ref:
    export_functions_rst(function_ref)
source_dir = os.path.join(base_path, "docs")
output_dir = os.path.join(base_path, "docs", "_build", "html")
# Clean previous HTML output
if os.path.exists(output_dir):
    shutil.rmtree(output_dir)
command = ["sphinx-build", "-W", "-b", "html", source_dir, output_dir]
subprocess.run(command, check=True)
print("Finished")
| 1,306 | Python | 33.394736 | 92 | 0.7366 |
NVIDIA/warp/codecov.yml | coverage:
status:
project: # More options at https://docs.codecov.com/docs/commit-status
default:
target: auto #default
threshold: "5"
base: auto
comment:
behavior: default
require_changes: false # if true: only post the comment if coverage changes
  hide_project_coverage: false # [true :: only show coverage on the git diff aka patch coverage]
require_base: false # [true :: must have a base report to post]
require_head: true # [true :: must have a head report to post]
| 519 | YAML | 33.666664 | 97 | 0.682081 |
NVIDIA/warp/VERSION.md | 1.1.1
| 6 | Markdown | 2.499999 | 5 | 0.5 |
NVIDIA/warp/repo.toml | ########################################################################################################################
# Repo tool base settings
########################################################################################################################
[repo]
# Repository Name
name = "warp"
########################################################################################################################
# Build tool setup
########################################################################################################################
[repo_build]
# List of packman projects to pull (in order)
fetch.packman_host_files_to_pull = [
"${root}/deps/host-deps.packman.xml",
]
fetch.packman_target_files_to_pull = [
"${root}/deps/target-deps.packman.xml",
]
# Extensions precache
fetch.after_pull_commands = [
]
[repo_build_number]
enabled = true
| 889 | TOML | 28.666666 | 120 | 0.313836 |
NVIDIA/warp/SECURITY.md | # Security
NVIDIA is dedicated to the security and trust of our software products and services, including all source code repositories managed through our organization.
If you need to report a security issue, please use the appropriate contact points outlined below. **Please do not report security vulnerabilities through GitHub.**
## Reporting Potential Security Vulnerability in an NVIDIA Product
To report a potential security vulnerability in any NVIDIA product:
- Web: [Security Vulnerability Submission Form](https://www.nvidia.com/object/submit-security-vulnerability.html)
- E-Mail: <[email protected]>
- We encourage you to use the following PGP key for secure email communication: [NVIDIA public PGP Key for communication](https://www.nvidia.com/en-us/security/pgp-key)
- Please include the following information:
- Product/Driver name and version/branch that contains the vulnerability
- Type of vulnerability (code execution, denial of service, buffer overflow, etc.)
- Instructions to reproduce the vulnerability
- Proof-of-concept or exploit code
- Potential impact of the vulnerability, including how an attacker could exploit the vulnerability
While NVIDIA currently does not have a bug bounty program, we do offer acknowledgement when an externally reported security issue is addressed under our coordinated vulnerability disclosure policy. Please visit our [Product Security Incident Response Team (PSIRT)](https://www.nvidia.com/en-us/security/psirt-policies/) policies page for more information.
## NVIDIA Product Security
For all security-related concerns, please visit NVIDIA's Product Security portal at <https://www.nvidia.com/en-us/security>
| 1,697 | Markdown | 64.30769 | 355 | 0.798468 |
NVIDIA/warp/README.md | [](https://badge.fury.io/py/warp-lang)

[](https://pepy.tech/project/warp-lang)
[](https://codecov.io/github/NVIDIA/warp)

[](https://discord.com/invite/nvidiaomniverse)
# NVIDIA Warp
Warp is a Python framework for writing high-performance simulation and graphics code. Warp takes
regular Python functions and JIT compiles them to efficient kernel code that can run on the CPU or GPU.
Warp is designed for spatial computing and comes with a rich set of primitives that make it easy to write
programs for physics simulation, perception, robotics, and geometry processing. In addition, Warp kernels
are differentiable and can be used as part of machine-learning pipelines with frameworks such as PyTorch and JAX.
Please refer to the project [Documentation](https://nvidia.github.io/warp/) for API and language reference and [CHANGELOG.md](./CHANGELOG.md) for release history.
<div align="center">
<img src="https://github.com/NVIDIA/warp/raw/main/docs/img/header.jpg">
<p><i>A selection of physical simulations computed with Warp</i></p>
</div>
## Installing
Python version 3.9 or newer is recommended. Warp can run on x86-64 and ARMv8 CPUs on Windows, Linux, and macOS. GPU support requires a CUDA capable NVIDIA GPU and driver (minimum GeForce GTX 9xx).
The easiest way to install Warp is from [PyPI](https://pypi.org/project/warp-lang/):
    pip install warp-lang
You can also use `pip install warp-lang[extras]` to install additional dependencies for running examples and USD-related features.
Pre-built binary packages are also available on the [Releases](https://github.com/NVIDIA/warp/releases) page. To install in your local Python environment run the following command from the download directory:
    pip install warp_lang-<version and platform>.whl
## Getting Started
An example first program that computes the lengths of random 3D vectors is given below:
```python
import warp as wp
import numpy as np
num_points = 1024
@wp.kernel
def length(points: wp.array(dtype=wp.vec3),
           lengths: wp.array(dtype=float)):
    # thread index
    tid = wp.tid()
    # compute distance of each point from origin
    lengths[tid] = wp.length(points[tid])
# allocate an array of 3d points
points = wp.array(np.random.rand(num_points, 3), dtype=wp.vec3)
lengths = wp.zeros(num_points, dtype=float)
# launch kernel
wp.launch(kernel=length,
          dim=len(points),
          inputs=[points, lengths])
print(lengths)
```
## Running Examples
The `examples` directory contains a number of scripts that show how to implement different simulation methods using the Warp API. Most examples will generate USD files containing time-sampled animations (stored in the current working directory). Before running examples, users should ensure that the ``usd-core``, ``matplotlib``, and ``pyglet`` packages are installed using:
    pip install warp-lang[extras]
Or can be manually installed with:
    pip install usd-core matplotlib pyglet
Examples can be run from the command-line as follows:
    python -m warp.examples.<example_subdir>.<example>
To browse the example source code, you can open the directory where the files are located like this:
    python -m warp.examples.browse
Most examples can be run on either the CPU or a CUDA-capable device, but a handful require a CUDA-capable device. These are marked at the top of the example script.
USD files can be viewed or rendered inside [NVIDIA Omniverse](https://developer.nvidia.com/omniverse), Pixar's UsdView, and Blender. Note that Preview in macOS is not recommended as it has limited support for time-sampled animations.
Built-in unit tests can be run from the command-line as follows:
    python -m warp.tests
### examples/core
<table>
<tbody>
<tr>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_dem.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/core_dem.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_fluid.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/core_fluid.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_graph_capture.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/core_graph_capture.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_marching_cubes.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/core_marching_cubes.png"></a></td>
</tr>
<tr>
<td align="center">dem</td>
<td align="center">fluid</td>
<td align="center">graph capture</td>
<td align="center">marching cubes</td>
</tr>
<tr>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_mesh.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/core_mesh.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_nvdb.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/core_nvdb.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_raycast.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/core_raycast.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_raymarch.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/core_raymarch.png"></a></td>
</tr>
<tr>
<td align="center">mesh</td>
<td align="center">nvdb</td>
<td align="center">raycast</td>
<td align="center">raymarch</td>
</tr>
<tr>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_sph.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/core_sph.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_torch.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/core_torch.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/core/example_wave.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/core_wave.png"></a></td>
<td></td>
</tr>
<tr>
<td align="center">sph</td>
<td align="center">torch</td>
<td align="center">wave</td>
<td align="center"></td>
</tr>
</tbody>
</table>
### examples/fem
<table>
<tbody>
<tr>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_apic_fluid.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/fem_apic_fluid.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_convection_diffusion.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/fem_convection_diffusion.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_diffusion_3d.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/fem_diffusion_3d.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_diffusion.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/fem_diffusion.png"></a></td>
</tr>
<tr>
<td align="center">apic fluid</td>
<td align="center">convection diffusion</td>
<td align="center">diffusion 3d</td>
<td align="center">diffusion</td>
</tr>
<tr>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_mixed_elasticity.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/fem_mixed_elasticity.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_navier_stokes.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/fem_navier_stokes.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_stokes_transfer.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/fem_stokes_transfer.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/fem/example_stokes.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/fem_stokes.png"></a></td>
</tr>
<tr>
<td align="center">mixed elasticity</td>
<td align="center">navier stokes</td>
<td align="center">stokes transfer</td>
<td align="center">stokes</td>
</tr>
</tbody>
</table>
### examples/optim
<table>
<tbody>
<tr>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/optim/example_bounce.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/optim_bounce.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/optim/example_cloth_throw.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/optim_cloth_throw.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/optim/example_diffray.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/optim_diffray.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/optim/example_drone.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/optim_drone.png"></a></td>
</tr>
<tr>
<td align="center">bounce</td>
<td align="center">cloth throw</td>
<td align="center">diffray</td>
<td align="center">drone</td>
</tr>
<tr>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/optim/example_inverse_kinematics.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/optim_inverse_kinematics.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/optim/example_spring_cage.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/optim_spring_cage.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/optim/example_trajectory.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/optim_trajectory.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/optim/example_walker.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/optim_walker.png"></a></td>
</tr>
<tr>
<td align="center">inverse kinematics</td>
<td align="center">spring cage</td>
<td align="center">trajectory</td>
<td align="center">walker</td>
</tr>
</tbody>
</table>
### examples/sim
<table>
<tbody>
<tr>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/sim/example_cartpole.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/sim_cartpole.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/sim/example_cloth.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/sim_cloth.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/sim/example_granular.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/sim_granular.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/sim/example_granular_collision_sdf.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/sim_granular_collision_sdf.png"></a></td>
</tr>
<tr>
<td align="center">cartpole</td>
<td align="center">cloth</td>
<td align="center">granular</td>
<td align="center">granular collision sdf</td>
</tr>
<tr>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/sim/example_jacobian_ik.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/sim_jacobian_ik.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/sim/example_quadruped.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/sim_quadruped.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/sim/example_rigid_chain.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/sim_rigid_chain.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/sim/example_rigid_contact.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/sim_rigid_contact.png"></a></td>
</tr>
<tr>
<td align="center">jacobian ik</td>
<td align="center">quadruped</td>
<td align="center">rigid chain</td>
<td align="center">rigid contact</td>
</tr>
<tr>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/sim/example_rigid_force.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/sim_rigid_force.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/sim/example_rigid_gyroscopic.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/sim_rigid_gyroscopic.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/sim/example_rigid_soft_contact.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/sim_rigid_soft_contact.png"></a></td>
<td><a href="https://github.com/NVIDIA/warp/tree/main/warp/examples/sim/example_soft_body.py"><img src="https://github.com/NVIDIA/warp/raw/main/docs/img/examples/sim_soft_body.png"></a></td>
</tr>
<tr>
<td align="center">rigid force</td>
<td align="center">rigid gyroscopic</td>
<td align="center">rigid soft contact</td>
<td align="center">soft body</td>
</tr>
</tbody>
</table>
## Building
For developers who want to build the library themselves, the following tools are required:
* Microsoft Visual Studio 2019 or later (Windows)
* GCC 9.4 or later (Linux)
* CUDA Toolkit 11.5 or higher
* [Git LFS](https://git-lfs.github.com/) installed
After cloning the repository, users should run:
```
python build_lib.py
```
This will generate the `warp.dll` (Windows) or `warp.so` (Linux) core library. It will search for the CUDA Toolkit in the default install directory. This path can be overridden by setting the `CUDA_PATH` environment variable. Alternatively, the path to the CUDA Toolkit can be passed to the build command as `--cuda_path="..."`. After building, the Warp package should be installed using:
```
pip install -e .
```
This ensures that subsequent modifications to the library will be reflected in the Python package.
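For illustration, here are the two documented ways of pointing the build at a CUDA Toolkit outside the default location (the install path below is only an example, and the environment-variable form shown assumes a POSIX shell):

```
# Via the environment variable:
CUDA_PATH=/usr/local/cuda-11.5 python build_lib.py

# Or via the command-line argument:
python build_lib.py --cuda_path="/usr/local/cuda-11.5"
```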
## Learn More
Please see the following resources for additional background on Warp:
* [Product Page](https://developer.nvidia.com/warp-python)
* [GTC 2022 Presentation](https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s41599)
* [GTC 2021 Presentation](https://www.nvidia.com/en-us/on-demand/session/gtcspring21-s31838)
* [SIGGRAPH Asia 2021 Differentiable Simulation Course](https://dl.acm.org/doi/abs/10.1145/3476117.3483433)
* [GTC 2024 Presentation](https://www.nvidia.com/en-us/on-demand/session/gtc24-s63345/)
The underlying technology in Warp has been used in a number of research projects at NVIDIA including the following publications:
* Accelerated Policy Learning with Parallel Differentiable Simulation - Xu, J., Makoviychuk, V., Narang, Y., Ramos, F., Matusik, W., Garg, A., & Macklin, M. [(2022)](https://short-horizon-actor-critic.github.io)
* DiSECt: Differentiable Simulator for Robotic Cutting - Heiden, E., Macklin, M., Narang, Y., Fox, D., Garg, A., & Ramos, F [(2021)](https://github.com/NVlabs/DiSECt)
* gradSim: Differentiable Simulation for System Identification and Visuomotor Control - Murthy, J. Krishna, Miles Macklin, Florian Golemo, Vikram Voleti, Linda Petrini, Martin Weiss, Breandan Considine et al. [(2021)](https://gradsim.github.io)
## Frequently Asked Questions
See the [FAQ](https://nvidia.github.io/warp/faq.html) in the Warp documentation.
## Support
Problems, questions, and feature requests can be opened on [GitHub Issues](https://github.com/NVIDIA/warp/issues).
The Warp team also monitors the **#warp** channel on the public [Omniverse Discord](https://discord.com/invite/nvidiaomniverse) server, come chat with us!
## Versioning
Versions take the format X.Y.Z, similar to [Python itself](https://devguide.python.org/developer-workflow/development-cycle/#devcycle):
* Increments in X are reserved for major reworks of the project causing disruptive incompatibility (or reaching the 1.0 milestone).
* Increments in Y are for regular releases with a new set of features.
* Increments in Z are for bug fixes; in principle they introduce no new features. The Z component can be omitted if it is 0 or not relevant.
This is similar to [Semantic Versioning](https://semver.org/) but less strict around backward compatibility.
As with Python, some breaking changes can be present between minor versions if they are well documented and gradually introduced.
Note that prior to 0.11.0 this schema was not strictly adhered to.
## License
Warp is provided under the NVIDIA Software License, please see [LICENSE.md](./LICENSE.md) for full license text.
## Contributing
Contributions and pull requests from the community are welcome and are taken under the
terms described in the **9. Feedback** section of the [license](LICENSE.md).
[CONTRIBUTING.md](./CONTRIBUTING.md) provides additional information on how to open a pull request for Warp.
## Citing
If you use Warp in your research please use the following citation:
```bibtex
@misc{warp2022,
  title = {Warp: A High-performance Python Framework for GPU Simulation and Graphics},
  author = {Miles Macklin},
  month = {March},
  year = {2022},
  note = {NVIDIA GPU Technology Conference (GTC)},
  howpublished = {\url{https://github.com/nvidia/warp}}
}
```
| 19,041 | Markdown | 56.183183 | 382 | 0.68946 |
NVIDIA/warp/deps/target-deps.packman.xml | <project toolsVersion="5.0">
  <!-- Import Kit SDK target-deps xml file to steal some deps from it:
<import path="../_build/${platform}/${config}/kit/dev/deps/target-deps.packman.xml">
<filter include="pybind11" />
<filter include="fmt" />
<filter include="python" />
</import>
-->
<!-- Import Rtx plugins deps
<import path="../_build/target-deps/rtx_plugins/deps/target-deps.packman.xml">
<filter include="carb_sdk_plugins" />
</import>
-->
  <!-- Pull those deps of the same version as in Kit SDK. Override linkPath to point correctly; other properties can also be overridden, including the version.
<dependency name="carb_sdk_plugins" linkPath="../_build/target-deps/carb_sdk_plugins" tags="non-redist" />
<dependency name="pybind11" linkPath="../_build/target-deps/pybind11" />
<dependency name="fmt" linkPath="../_build/target-deps/fmt" />
<dependency name="python" linkPath="../_build/target-deps/python" />
-->
<dependency name="python" linkPath="../_build/target-deps/python" tags="slim-package">
<package name="python" version="3.9.18+nv1-windows-x86_64" platforms="windows-x86_64"/>
<!-- https://teamcity.nvidia.com/project.html?projectId=Omniverse_Externals_Python -->
<package name="python" version="3.9.18+nv1-linux-x86_64" platforms="linux-x86_64"/>
<package name="python" version="3.9.16+nv1-linux-aarch64" platforms="linux-aarch64"/>
<package name="python" version="3.9.16+nv1-macos-universal" platforms="macos-x86_64"/>
</dependency>
<dependency name="cuda" linkPath="../_build/target-deps/cuda">
<package name="cuda" version="11.5.2_496.13-46d75baa-windows-x86_64" platforms="windows-x86_64"/>
<package name="cuda" version="11.5.2_495.29-46d75baa-linux-x86_64" platforms="linux-x86_64"/>
<package name="cuda" version="11.8.0_520.61-abe3d9d7-linux-aarch64" platforms="linux-aarch64"/>
</dependency>
</project>
| 1,913 | XML | 46.849999 | 153 | 0.685834 |
NVIDIA/warp/deps/repo-deps.packman.xml | <project toolsVersion="5.0">
<dependency name="repo_man" linkPath="../_repo/deps/repo_man">
<package name="repo_man" version="1.57.12"/>
</dependency>
<dependency name="repo_build" linkPath="../_repo/deps/repo_build">
<package name="repo_build" version="0.62.7"/>
</dependency>
</project>
| 305 | XML | 32.999996 | 68 | 0.659016 |
NVIDIA/warp/deps/host-deps.packman.xml | <project toolsVersion="5.0">
<dependency name="msvc" linkPath="../_build/host-deps/msvc">
<package name="msvc" version="2019-16.11.24" platforms="windows-x86_64" />
</dependency>
<dependency name="winsdk" linkPath="../_build/host-deps/winsdk">
<package name="winsdk" version="10.17763" platforms="windows-x86_64"/>
</dependency>
<dependency name="linbuild" linkPath="../_build/host-deps/linbuild">
<package name="linbuild" version="3.3.3-${platform}" platforms="linux-x86_64 linux-aarch64" />
</dependency>
</project>
| 542 | XML | 44.249996 | 98 | 0.686347 |
NVIDIA/warp/tools/ci/publishing/build_nodes_info.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Script to build the node.json OGN file that lists the extension's nodes."""
import json
import os
def gather_nodes_info(
ext_path: str,
ext_name: str,
) -> dict:
# fmt: off
ogn_file_paths = tuple(
os.path.join(dir_path, file_name)
for (dir_path, _, file_names) in os.walk(ext_path)
for file_name in file_names if file_name.endswith(".ogn")
)
# fmt: on
nodes_info = {}
for file_path in ogn_file_paths:
with open(file_path) as file:
data = json.load(file)
node_key = next(iter(data.keys()))
node_data = data[node_key]
nodes_info[node_key] = {
"description": node_data.get("description", ""),
"version": node_data.get("version", 1),
"uiName": node_data.get("uiName", ""),
"extension": ext_name,
"language": node_data.get("language", ""),
}
return {"nodes": nodes_info}
if __name__ == "__main__":
here = os.path.dirname(__file__)
root_path = os.path.abspath(os.path.join(here, "..", "..", ".."))
ext_path = os.path.join(root_path, "exts", "omni.warp")
ogn_path = os.path.join(ext_path, "ogn")
nodes_info_path = os.path.join(ogn_path, "nodes.json")
nodes_info = gather_nodes_info(ext_path, "omni.warp")
os.makedirs(ogn_path, exist_ok=True)
with open(nodes_info_path, "w") as file:
json.dump(nodes_info, file, indent=4)
| 1,898 | Python | 33.527272 | 78 | 0.611697 |
NVIDIA/warp/tools/repoman/repoman.py | import os
import sys
import io
import contextlib
import packmanapi
REPO_ROOT = os.path.join(os.path.dirname(os.path.realpath(__file__)), "../..")
REPO_DEPS_FILE = os.path.join(REPO_ROOT, "deps/repo-deps.packman.xml")
def bootstrap():
"""
Bootstrap all omni.repo modules.
    Pulls dependencies with packman from deps/repo-deps.packman.xml and adds them all to Python's sys.path to enable importing.
"""
#with contextlib.redirect_stdout(io.StringIO()):
deps = packmanapi.pull(REPO_DEPS_FILE)
for dep_path in deps.values():
if dep_path not in sys.path:
sys.path.append(dep_path)
if __name__ == "__main__":
bootstrap()
import omni.repo.man
omni.repo.man.main(REPO_ROOT)
| 703 | Python | 23.275861 | 100 | 0.661451 |
NVIDIA/warp/tools/packman/packmanconf.py | # Use this file to bootstrap packman into your Python environment (3.7.x). Simply
# add the path by doing sys.insert to where packmanconf.py is located and then execute:
#
# >>> import packmanconf
# >>> packmanconf.init()
#
# It will use the configured remote(s) and the version of packman in the same folder,
# giving you full access to the packman API via the following module
#
# >> import packmanapi
# >> dir(packmanapi)
import os
import platform
import sys
def init():
"""Call this function to initialize the packman configuration.
Calls to the packman API will work after successfully calling this function.
Note:
This function only needs to be called once during the execution of your
program. Calling it repeatedly is harmless but wasteful.
Compatibility with your Python interpreter is checked and upon failure
the function will report what is required.
Example:
>>> import packmanconf
>>> packmanconf.init()
>>> import packmanapi
>>> packmanapi.set_verbosity_level(packmanapi.VERBOSITY_HIGH)
"""
major = sys.version_info.major
minor = sys.version_info.minor
patch = sys.version_info.micro
if major == 3 and (minor == 10 or (minor == 11 and patch <= 2)):
# we are good
pass
else:
raise RuntimeError(
f"This version of packman requires Python 3.10.0 up to 3.11.2, "
f"but {major}.{minor}.{patch} was provided"
)
conf_dir = os.path.dirname(os.path.abspath(__file__))
os.environ["PM_INSTALL_PATH"] = conf_dir
packages_root = get_packages_root(conf_dir)
version = get_version(conf_dir)
module_dir = get_module_dir(conf_dir, packages_root, version)
sys.path.insert(1, module_dir)
def get_packages_root(conf_dir: str) -> str:
root = os.getenv("PM_PACKAGES_ROOT")
if not root:
platform_name = platform.system()
if platform_name == "Windows":
drive, _ = os.path.splitdrive(conf_dir)
root = os.path.join(drive, "packman-repo")
elif platform_name == "Darwin":
# macOS
root = os.path.join(
os.path.expanduser("~"), "Library/Application Support/packman-cache"
)
elif platform_name == "Linux":
try:
cache_root = os.environ["XDG_HOME_CACHE"]
except KeyError:
cache_root = os.path.join(os.path.expanduser("~"), ".cache")
return os.path.join(cache_root, "packman")
else:
raise RuntimeError(f"Unsupported platform '{platform_name}'")
# make sure the path exists:
os.makedirs(root, exist_ok=True)
return root
def get_module_dir(conf_dir: str, packages_root: str, version: str) -> str:
module_dir = os.path.join(packages_root, "packman-common", version)
if not os.path.exists(module_dir):
import tempfile
tf = tempfile.NamedTemporaryFile(delete=False)
target_name = tf.name
tf.close()
url = f"http://bootstrap.packman.nvidia.com/packman-common@{version}.zip"
print(f"Downloading '{url}' ...")
import urllib.request
urllib.request.urlretrieve(url, target_name)
from importlib.machinery import SourceFileLoader
# import module from path provided
script_path = os.path.join(conf_dir, "bootstrap", "install_package.py")
ip = SourceFileLoader("install_package", script_path).load_module()
print("Unpacking ...")
ip.install_common_module(target_name, module_dir)
os.unlink(tf.name)
return module_dir
def get_version(conf_dir: str):
path = os.path.join(conf_dir, "packman")
if not os.path.exists(path): # in dev repo fallback
path += ".sh"
with open(path, "rt", encoding="utf8") as launch_file:
for line in launch_file.readlines():
if "PM_PACKMAN_VERSION" in line:
_, value = line.split("=")
return value.strip()
raise RuntimeError(f"Unable to find 'PM_PACKMAN_VERSION' in '{path}'")
| 4,086 | Python | 35.168141 | 87 | 0.627998 |
NVIDIA/warp/tools/packman/config.packman.xml | <config remotes="cloudfront urm">
<remote2 name="cloudfront">
<transport actions="download" protocol="http" packageLocation="d4i3qtqj3r0z5.cloudfront.net/${name}@${version}" />
<transport actions="list" protocol="https" packageLocation="omnipackages.nvidia.com/api/v1/list/cloudfront" />
<transport actions="upload" protocol="s3" packageLocation="packages-for-cloudfront" />
<transport actions="get-tag" protocol="https" packageLocation="omnipackages.nvidia.com/api/v1/tags/cloudfront" />
</remote2>
<remote2 name="urm">
<transport actions="download" protocol="https" packageLocation="urm.nvidia.com/artifactory/ct-omniverse-generic/pkgs/${name}/${name}@${version}" />
<transport actions="list" protocol="https" packageLocation="omnipackages.nvidia.com/api/v1/list/artifactory" />
</remote2>
</config>
| 834 | XML | 63.230764 | 151 | 0.738609 |
NVIDIA/warp/tools/packman/bootstrap/install_package.py | # Copyright 2019 NVIDIA CORPORATION
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import zipfile
import tempfile
import sys
import os
import stat
import time
import hashlib
from typing import Any, Callable, Union
RENAME_RETRY_COUNT = 100
RENAME_RETRY_DELAY = 0.1
logging.basicConfig(level=logging.WARNING, format="%(message)s")
logger = logging.getLogger("install_package")
def remove_directory_item(path):
if os.path.islink(path) or os.path.isfile(path):
try:
os.remove(path)
except PermissionError:
# make sure we have access and try again:
os.chmod(path, stat.S_IRWXU)
os.remove(path)
else:
# try first to delete the dir because this will work for folder junctions, otherwise we would follow the junctions and cause destruction!
clean_out_folder = False
try:
# make sure we have access preemptively - this is necessary because recursing into a directory without permissions
# will only lead to heart ache
os.chmod(path, stat.S_IRWXU)
os.rmdir(path)
except OSError:
clean_out_folder = True
if clean_out_folder:
# we should make sure the directory is empty
names = os.listdir(path)
for name in names:
fullname = os.path.join(path, name)
remove_directory_item(fullname)
# now try to again get rid of the folder - and not catch if it raises:
os.rmdir(path)
class StagingDirectory:
def __init__(self, staging_path):
self.staging_path = staging_path
self.temp_folder_path = None
os.makedirs(staging_path, exist_ok=True)
def __enter__(self):
self.temp_folder_path = tempfile.mkdtemp(prefix="ver-", dir=self.staging_path)
return self
def get_temp_folder_path(self):
return self.temp_folder_path
    # This function renames the temp staging folder to folder_name; the parent path must already exist.
def promote_and_rename(self, folder_name):
abs_dst_folder_name = os.path.join(self.staging_path, folder_name)
os.rename(self.temp_folder_path, abs_dst_folder_name)
def __exit__(self, type, value, traceback):
# Remove temp staging folder if it's still there (something went wrong):
path = self.temp_folder_path
if os.path.isdir(path):
remove_directory_item(path)
def rename_folder(staging_dir: StagingDirectory, folder_name: str):
try:
staging_dir.promote_and_rename(folder_name)
except OSError as exc:
# if we failed to rename because the folder now exists we can assume that another packman process
# has managed to update the package before us - in all other cases we re-raise the exception
abs_dst_folder_name = os.path.join(staging_dir.staging_path, folder_name)
if os.path.exists(abs_dst_folder_name):
logger.warning(
f"Directory {abs_dst_folder_name} already present, package installation already completed"
)
else:
raise
def call_with_retry(
op_name: str, func: Callable, retry_count: int = 3, retry_delay: float = 20
) -> Any:
retries_left = retry_count
while True:
try:
return func()
except (OSError, IOError) as exc:
logger.warning(f"Failure while executing {op_name} [{str(exc)}]")
if retries_left:
retry_str = "retry" if retries_left == 1 else "retries"
logger.warning(
f"Retrying after {retry_delay} seconds"
f" ({retries_left} {retry_str} left) ..."
)
time.sleep(retry_delay)
else:
logger.error("Maximum retries exceeded, giving up")
raise
retries_left -= 1
def rename_folder_with_retry(staging_dir: StagingDirectory, folder_name):
dst_path = os.path.join(staging_dir.staging_path, folder_name)
call_with_retry(
f"rename {staging_dir.get_temp_folder_path()} -> {dst_path}",
lambda: rename_folder(staging_dir, folder_name),
RENAME_RETRY_COUNT,
RENAME_RETRY_DELAY,
)
def generate_sha256_for_file(file_path: Union[str, os.PathLike]) -> str:
"""Returns the SHA-256 hex digest for the file at `file_path`"""
    hasher = hashlib.sha256()
    # Read the file in binary mode and update the hash object with data
    with open(file_path, "rb") as file:
        for chunk in iter(lambda: file.read(4096), b""):
            hasher.update(chunk)
    return hasher.hexdigest()
def install_common_module(package_path, install_path):
COMMON_SHA256 = "d4117f80ecc6dcc36444e04da85b125a4269f2abfe59a8984150138ad7d832c1"
package_sha256 = generate_sha256_for_file(package_path)
if package_sha256 != COMMON_SHA256:
raise RuntimeError(
f"Package at '{package_path}' must have a sha256 of '{COMMON_SHA256}' "
f"but was found to have '{package_sha256}'"
)
staging_path, version = os.path.split(install_path)
with StagingDirectory(staging_path) as staging_dir:
output_folder = staging_dir.get_temp_folder_path()
with zipfile.ZipFile(package_path, allowZip64=True) as zip_file:
zip_file.extractall(output_folder)
# attempt the rename operation
rename_folder_with_retry(staging_dir, version)
print(f"Package successfully installed to {install_path}")
if __name__ == "__main__":
executable_paths = os.getenv("PATH")
paths_list = executable_paths.split(os.path.pathsep) if executable_paths else []
target_path_np = os.path.normpath(sys.argv[2])
target_path_np_nc = os.path.normcase(target_path_np)
for exec_path in paths_list:
if os.path.normcase(os.path.normpath(exec_path)) == target_path_np_nc:
raise RuntimeError(f"packman will not install to executable path '{exec_path}'")
install_common_module(sys.argv[1], target_path_np)
| 6,575 | Python | 37.01156 | 145 | 0.647757 |
NVIDIA/warp/exts/omni.warp/PACKAGE-LICENSES/omni.warp-LICENSE.md | Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
NVIDIA CORPORATION and its licensors retain all intellectual property
and proprietary rights in and to this software, related documentation
and any modifications thereto. Any use, reproduction, disclosure or
distribution of this software and related documentation without an express
license agreement from NVIDIA CORPORATION is strictly prohibited. | 412 | Markdown | 57.999992 | 74 | 0.839806 |
NVIDIA/warp/exts/omni.warp/config/extension.toml | [package]
# Semantic Versioning is used: https://semver.org/
version = "1.1.1"
authors = ["NVIDIA"]
title = "Warp"
description="Warp OmniGraph Nodes and Sample Scenes"
readme = "docs/README.md"
repository="https://github.com/nvidia/warp"
category = "graph"
keywords = ["kit", "omnigraph", "warp", "simulation"]
changelog="docs/CHANGELOG.md"
python.import_mode = "ParallelThread"
preview_image = "data/preview.png"
icon = "data/icon.png"
# Watch the .ogn files for hot reloading (only works for Python files)
[fswatcher.patterns]
include = ["*.ogn", "*.py"]
exclude = ["Ogn*Database.py", "*/ogn*"]
[dependencies]
"omni.graph" = {}
"omni.graph.action" = {}
"omni.graph.core" = {}
"omni.graph.nodes" = {}
"omni.graph.ui" = {optional=true}
"omni.kit.actions.core" = {}
"omni.kit.browser.sample" = {optional = true}
"omni.kit.menu.utils" = {optional = true}
"omni.kit.property.usd" = {optional = true}
"omni.kit.widget.searchfield" = {optional = true}
"omni.kit.widget.text_editor" = {optional = true}
"omni.kit.window.property" = {optional = true}
"omni.timeline" = {}
"omni.ui" = {optional = true}
"omni.usd" = {}
"omni.warp.core" = {version = "1.1.1", exact = true}
[[python.module]]
name = "omni.warp._extension"
[[python.module]]
name = "omni.warp.nodes"
[settings]
exts."omni.warp".enable_backward = false
exts."omni.warp".enable_menu = true
| 1,350 | TOML | 26.571428 | 70 | 0.677037 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/__init__.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Public Python API exposed by the omni.warp.nodes package."""
__all__ = [
"AttrTracking",
"NodeTimer",
"basis_curves_copy_bundle",
"basis_curves_create_bundle",
"basis_curves_get_curve_count",
"basis_curves_get_curve_vertex_counts",
"basis_curves_get_display_color",
"basis_curves_get_local_extent",
"basis_curves_get_point_count",
"basis_curves_get_points",
"basis_curves_get_widths",
"basis_curves_get_world_extent",
"bundle_get_attr",
"bundle_get_child_count",
"bundle_get_prim_type",
"bundle_get_world_xform",
"bundle_has_changed",
"bundle_have_attrs_changed",
"bundle_set_prim_type",
"bundle_set_world_xform",
"device_get_cuda_compute",
"from_omni_graph_ptr",
"from_omni_graph",
"mesh_create_bundle",
"mesh_copy_bundle",
"mesh_get_display_color",
"mesh_get_face_count",
"mesh_get_face_vertex_counts",
"mesh_get_face_vertex_indices",
"mesh_get_local_extent",
"mesh_get_normals",
"mesh_get_point_count",
"mesh_get_points",
"mesh_triangulate",
"mesh_get_uvs",
"mesh_get_velocities",
"mesh_get_vertex_count",
"mesh_get_world_extent",
"points_create_bundle",
"points_copy_bundle",
"points_get_display_color",
"points_get_local_extent",
"points_get_masses",
"points_get_point_count",
"points_get_points",
"points_get_velocities",
"points_get_widths",
"points_get_world_extent",
"type_convert_og_to_warp",
"type_convert_sdf_name_to_warp",
"type_convert_sdf_name_to_og",
]
from omni.warp.nodes._impl.attributes import (
AttrTracking,
from_omni_graph,
from_omni_graph_ptr,
)
from omni.warp.nodes._impl.basis_curves import (
basis_curves_copy_bundle,
basis_curves_create_bundle,
basis_curves_get_curve_count,
basis_curves_get_curve_vertex_counts,
basis_curves_get_display_color,
basis_curves_get_local_extent,
basis_curves_get_point_count,
basis_curves_get_points,
basis_curves_get_widths,
basis_curves_get_world_extent,
)
from omni.warp.nodes._impl.bundles import (
bundle_get_attr,
bundle_get_child_count,
bundle_get_prim_type,
bundle_get_world_xform,
bundle_has_changed,
bundle_have_attrs_changed,
)
from omni.warp.nodes._impl.common import (
NodeTimer,
device_get_cuda_compute,
type_convert_og_to_warp,
type_convert_sdf_name_to_og,
type_convert_sdf_name_to_warp,
)
from omni.warp.nodes._impl.mesh import (
mesh_copy_bundle,
mesh_create_bundle,
mesh_get_display_color,
mesh_get_face_count,
mesh_get_face_vertex_counts,
mesh_get_face_vertex_indices,
mesh_get_local_extent,
mesh_get_normals,
mesh_get_point_count,
mesh_get_points,
mesh_get_uvs,
mesh_get_velocities,
mesh_get_vertex_count,
mesh_get_world_extent,
mesh_triangulate,
)
from omni.warp.nodes._impl.points import (
points_copy_bundle,
points_create_bundle,
points_get_display_color,
points_get_local_extent,
points_get_masses,
points_get_point_count,
points_get_points,
points_get_velocities,
points_get_widths,
points_get_world_extent,
)
| 3,623 | Python | 27.992 | 76 | 0.678443 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/common.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""General helpers for this extension."""
from enum import Enum
from typing import (
Any,
Optional,
Union,
)
import omni.graph.core as og
import warp as wp
# General
# ------------------------------------------------------------------------------
class IntEnum(int, Enum):
"""Base class for integer enumerators with labels."""
def __new__(cls, value, label):
obj = int.__new__(cls, value)
obj._value_ = value
obj.label = label
return obj
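# A minimal usage sketch of the labeled enumerator above (illustrative only;
# this 'Axis' enum is not part of the extension):
#
#     class Axis(IntEnum):
#         X = (0, "X axis")
#         Y = (1, "Y axis")
#
#     assert Axis.X == 0 and Axis.X.label == "X axis"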
# Timer
# ------------------------------------------------------------------------------
class NodeTimer(object):
"""Context wrapping Warp's scoped timer for use with nodes."""
def __init__(self, name: str, db: Any, active: bool = False) -> None:
name = "{}:{}".format(db.node.get_prim_path(), name)
self.timer = wp.ScopedTimer(name, active=active, synchronize=True)
    def __enter__(self) -> "NodeTimer":
self.timer.__enter__()
return self
def __exit__(self, type: Any, value: Any, traceback: Any) -> None:
self.timer.__exit__(type, value, traceback)
# Device
# ------------------------------------------------------------------------------
def device_get_cuda_compute() -> wp.context.Device:
"""Retrieves the preferred CUDA device for computing purposes."""
query_fn = getattr(og, "get_compute_cuda_device", None)
cuda_device_idx = 0 if query_fn is None else query_fn()
return wp.get_device(f"cuda:{cuda_device_idx}")
# Types
# ------------------------------------------------------------------------------
_BaseDType = og.BaseDataType
_AttrRole = og.AttributeRole
# fmt: off
_DATA_TYPES_MAPPING = (
("bool" , (_BaseDType.BOOL , 1, _AttrRole.NONE ), "int8" ),
("color3f" , (_BaseDType.FLOAT , 3, _AttrRole.COLOR ), "vec3" ),
("color4f" , (_BaseDType.FLOAT , 4, _AttrRole.COLOR ), "vec4" ),
("double" , (_BaseDType.DOUBLE, 1, _AttrRole.NONE ), "float64"),
("float" , (_BaseDType.FLOAT , 1, _AttrRole.NONE ), "float32"),
("float2" , (_BaseDType.FLOAT , 2, _AttrRole.NONE ), "vec2" ),
("float3" , (_BaseDType.FLOAT , 3, _AttrRole.NONE ), "vec3" ),
("float4" , (_BaseDType.FLOAT , 4, _AttrRole.NONE ), "vec4" ),
("int" , (_BaseDType.INT , 1, _AttrRole.NONE ), "int32" ),
("int64" , (_BaseDType.INT64 , 1, _AttrRole.NONE ), "int64" ),
("matrix2d" , (_BaseDType.DOUBLE, 4, _AttrRole.MATRIX ), "mat22d" ),
("matrix3d" , (_BaseDType.DOUBLE, 9, _AttrRole.MATRIX ), "mat33d" ),
("matrix4d" , (_BaseDType.DOUBLE, 16, _AttrRole.MATRIX ), "mat44d" ),
("normal3f" , (_BaseDType.FLOAT , 3, _AttrRole.NORMAL ), "vec3" ),
("point3f" , (_BaseDType.FLOAT , 3, _AttrRole.POSITION ), "vec3" ),
("quatf" , (_BaseDType.FLOAT , 4, _AttrRole.QUATERNION), "quat" ),
("texCoord2f", (_BaseDType.FLOAT , 2, _AttrRole.TEXCOORD ), "vec2" ),
("texCoord3f", (_BaseDType.FLOAT , 3, _AttrRole.TEXCOORD ), "vec3" ),
("timecode" , (_BaseDType.DOUBLE, 1, _AttrRole.TIMECODE ), "float64"),
("token" , (_BaseDType.TOKEN , 1, _AttrRole.NONE ), "uint64" ),
("uchar" , (_BaseDType.UCHAR , 1, _AttrRole.NONE ), "uint8" ),
("uint" , (_BaseDType.UINT , 1, _AttrRole.NONE ), "uint32" ),
("uint64" , (_BaseDType.UINT64, 1, _AttrRole.NONE ), "uint64" ),
("vector3f" , (_BaseDType.FLOAT , 3, _AttrRole.VECTOR ), "vec3" ),
)
_SDF_DATA_TYPE_TO_OG = {k: v for (k, v, _) in _DATA_TYPES_MAPPING}
_SDF_DATA_TYPE_NAME_TO_WARP = {k: v for (k, _, v) in _DATA_TYPES_MAPPING}
_OG_DATA_TYPE_TO_WARP = {k: v for (_, k, v) in _DATA_TYPES_MAPPING}
# fmt: on
SUPPORTED_OG_DATA_TYPES = tuple(
og.Type(base_data_type, tuple_count=tuple_count, array_depth=0, role=role)
for base_data_type, tuple_count, role in _OG_DATA_TYPE_TO_WARP.keys()
)
SUPPORTED_OG_ARRAY_TYPES = tuple(
og.Type(base_data_type, tuple_count=tuple_count, array_depth=1, role=role)
for base_data_type, tuple_count, role in _OG_DATA_TYPE_TO_WARP.keys()
)
SUPPORTED_OG_TYPES = SUPPORTED_OG_DATA_TYPES + SUPPORTED_OG_ARRAY_TYPES
SUPPORTED_SDF_DATA_TYPE_NAMES = tuple(_SDF_DATA_TYPE_NAME_TO_WARP.keys())
def get_warp_type_from_data_type_name(
data_type_name: str,
dim_count: int = 0,
as_str: bool = False,
str_namespace: Optional[str] = "wp",
):
if as_str:
prefix = "" if str_namespace is None else "{}.".format(str_namespace)
if dim_count == 0:
return "{prefix}{dtype}".format(prefix=prefix, dtype=data_type_name)
if dim_count == 1:
return "{prefix}array(dtype={prefix}{dtype})".format(
prefix=prefix,
dtype=data_type_name,
)
return "{prefix}array(dtype={prefix}{dtype}, ndim={ndim})".format(
prefix=prefix,
dtype=data_type_name,
ndim=dim_count,
)
dtype = getattr(wp.types, data_type_name)
if dim_count == 0:
return dtype
if dim_count == 1:
return wp.array(dtype=dtype)
return wp.array(dtype=dtype, ndim=dim_count)
def type_convert_og_to_warp(
og_type: og.Type,
dim_count: Optional[int] = None,
as_str: bool = False,
str_namespace: Optional[str] = "wp",
) -> Union[Any, str]:
"""Converts an OmniGraph type into a compatible Warp type."""
data_type_name = _OG_DATA_TYPE_TO_WARP.get(
(og_type.base_type, og_type.tuple_count, og_type.role),
)
if data_type_name is None:
raise RuntimeError("Unsupported attribute type '{}'.".format(og_type))
if dim_count is None:
dim_count = og_type.array_depth
return get_warp_type_from_data_type_name(
data_type_name,
dim_count=dim_count,
as_str=as_str,
str_namespace=str_namespace,
)
def type_convert_sdf_name_to_warp(
sdf_type_name: str,
dim_count: Optional[int] = None,
as_str: bool = False,
str_namespace: Optional[str] = "wp",
) -> Union[Any, str]:
"""Converts a Sdf type name into a compatible Warp type."""
if sdf_type_name.endswith("[]"):
sdf_type_name = sdf_type_name[:-2]
if dim_count is None:
dim_count = 1
elif dim_count is None:
dim_count = 0
data_type_name = _SDF_DATA_TYPE_NAME_TO_WARP.get(sdf_type_name)
if data_type_name is None:
raise RuntimeError("Unsupported attribute type '{}'.".format(sdf_type_name))
return get_warp_type_from_data_type_name(
data_type_name,
dim_count=dim_count,
as_str=as_str,
str_namespace=str_namespace,
)
def type_convert_sdf_name_to_og(
sdf_type_name: str,
is_array: Optional[bool] = None,
) -> og.Type:
"""Converts a Sdf type name into its corresponding OmniGraph type."""
if sdf_type_name.endswith("[]"):
sdf_type_name = sdf_type_name[:-2]
if is_array is None:
is_array = True
elif is_array is None:
is_array = False
data_type = _SDF_DATA_TYPE_TO_OG.get(sdf_type_name)
if data_type is None:
raise RuntimeError("Unsupported attribute type '{}'.".format(sdf_type_name))
base_data_type, tuple_count, role = data_type
return og.Type(
base_data_type,
tuple_count=tuple_count,
array_depth=int(is_array),
role=role,
)
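# Usage sketch for the converters above (illustrative; assumes an OmniGraph
# runtime is available):
#
#     og_type = og.Type(og.BaseDataType.FLOAT, tuple_count=3, array_depth=1,
#                       role=og.AttributeRole.POSITION)
#     type_convert_og_to_warp(og_type)  # -> wp.array(dtype=wp.vec3)
#     type_convert_sdf_name_to_warp("point3f[]", as_str=True)  # -> "wp.array(dtype=wp.vec3)"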
| 7,881 | Python | 33.876106 | 84 | 0.572136 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/OgnGridCreate.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Node creating a geometry mesh grid."""
import traceback
import omni.graph.core as og
import omni.warp.nodes
from omni.warp.nodes._impl.kernels.grid_create import grid_create_launch_kernel
from omni.warp.nodes.ogn.OgnGridCreateDatabase import OgnGridCreateDatabase
import warp as wp
PROFILING = False
# Internal State
# ------------------------------------------------------------------------------
class InternalState:
"""Internal state for the node."""
def __init__(self) -> None:
self.is_valid = False
self.attr_tracking = omni.warp.nodes.AttrTracking(
(
"transform",
"size",
"dims",
),
)
# Compute
# ------------------------------------------------------------------------------
def compute(db: OgnGridCreateDatabase) -> None:
"""Evaluates the node."""
db.outputs.mesh.changes().activate()
if not db.outputs.mesh.valid:
return
state = db.internal_state
if state.is_valid and not state.attr_tracking.have_attrs_changed(db):
return
# Compute the mesh's topology counts.
face_count = db.inputs.dims[0] * db.inputs.dims[1]
vertex_count = face_count * 4
point_count = (db.inputs.dims[0] + 1) * (db.inputs.dims[1] + 1)
# Create a new geometry mesh within the output bundle.
omni.warp.nodes.mesh_create_bundle(
db.outputs.mesh,
point_count,
vertex_count,
face_count,
xform=db.inputs.transform,
create_normals=True,
create_uvs=True,
)
with omni.warp.nodes.NodeTimer("grid_create", db, active=PROFILING):
# Evaluate the kernel.
grid_create_launch_kernel(
omni.warp.nodes.mesh_get_points(db.outputs.mesh),
omni.warp.nodes.mesh_get_face_vertex_counts(db.outputs.mesh),
omni.warp.nodes.mesh_get_face_vertex_indices(db.outputs.mesh),
omni.warp.nodes.mesh_get_normals(db.outputs.mesh),
omni.warp.nodes.mesh_get_uvs(db.outputs.mesh),
db.inputs.size.tolist(),
db.inputs.dims.tolist(),
)
state.attr_tracking.update_state(db)
# Node Entry Point
# ------------------------------------------------------------------------------
class OgnGridCreate:
"""Node."""
@staticmethod
def internal_state() -> InternalState:
return InternalState()
@staticmethod
def compute(db: OgnGridCreateDatabase) -> None:
device = omni.warp.nodes.device_get_cuda_compute()
try:
with wp.ScopedDevice(device):
compute(db)
except Exception:
db.log_error(traceback.format_exc())
db.internal_state.is_valid = False
return
db.internal_state.is_valid = True
# Fire the execution for the downstream nodes.
db.outputs.execOut = og.ExecutionAttributeState.ENABLED
| 3,357 | Python | 28.2 | 80 | 0.596962 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/mesh.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Helpers to author mesh geometries represented as OmniGraph bundles."""
from typing import Optional
import numpy as np
import omni.graph.core as og
from omni.warp.nodes._impl.attributes import (
attr_get,
attr_get_array_on_gpu,
attr_set,
)
from omni.warp.nodes._impl.bundles import (
bundle_copy_attr_value,
bundle_create_attr,
bundle_create_child,
bundle_create_metadata_attr,
bundle_get_attr,
bundle_set_prim_type,
bundle_set_world_xform,
)
from omni.warp.nodes._impl.points import (
points_get_display_color,
points_get_local_extent,
points_get_points,
points_get_velocities,
points_get_world_extent,
)
import warp as wp
def mesh_create_bundle(
dst_bundle: og.BundleContents,
point_count: int,
vertex_count: int,
face_count: int,
xform: Optional[np.ndarray] = None,
create_display_color: bool = False,
create_normals: bool = False,
create_uvs: bool = False,
child_idx: int = 0,
) -> None:
"""Creates and initializes mesh attributes within a bundle."""
child_bundle = bundle_create_child(dst_bundle, child_idx)
bundle_create_attr(
child_bundle,
"points",
og.Type(
og.BaseDataType.FLOAT,
tuple_count=3,
array_depth=1,
role=og.AttributeRole.POSITION,
),
size=point_count,
)
bundle_create_attr(
child_bundle,
"faceVertexCounts",
og.Type(
og.BaseDataType.INT,
tuple_count=1,
array_depth=1,
role=og.AttributeRole.NONE,
),
size=face_count,
)
bundle_create_attr(
child_bundle,
"faceVertexIndices",
og.Type(
og.BaseDataType.INT,
tuple_count=1,
array_depth=1,
role=og.AttributeRole.NONE,
),
size=vertex_count,
)
bundle_set_prim_type(dst_bundle, "Mesh", child_idx=child_idx)
if xform is not None:
bundle_set_world_xform(dst_bundle, xform, child_idx=child_idx)
if create_display_color:
bundle_create_attr(
child_bundle,
"primvars:displayColor",
og.Type(
og.BaseDataType.FLOAT,
tuple_count=3,
array_depth=1,
role=og.AttributeRole.COLOR,
),
size=point_count,
)
interp_attr = bundle_create_metadata_attr(
child_bundle,
"primvars:displayColor",
"interpolation",
og.Type(
og.BaseDataType.TOKEN,
tuple_count=1,
array_depth=0,
role=og.AttributeRole.NONE,
),
)
attr_set(interp_attr, "vertex")
if create_normals:
bundle_create_attr(
child_bundle,
"normals",
og.Type(
og.BaseDataType.FLOAT,
tuple_count=3,
array_depth=1,
role=og.AttributeRole.NORMAL,
),
size=vertex_count,
)
if create_uvs:
bundle_create_attr(
child_bundle,
"primvars:st",
og.Type(
og.BaseDataType.FLOAT,
tuple_count=2,
array_depth=1,
role=og.AttributeRole.TEXCOORD,
),
size=vertex_count,
)
def mesh_copy_bundle(
dst_bundle: og.BundleContents,
src_bundle: og.BundleContents,
deep_copy: bool = False,
child_idx: int = 0,
) -> None:
"""Creates and initializes mesh attributes from an existing bundle."""
dst_child_bundle = bundle_create_child(dst_bundle, child_idx)
src_child_bundle = src_bundle.bundle.get_child_bundle(child_idx)
dst_child_bundle.copy_bundle(src_child_bundle)
if deep_copy:
bundle_copy_attr_value(dst_child_bundle, src_child_bundle, "points", wp.vec3)
bundle_copy_attr_value(dst_child_bundle, src_child_bundle, "faceVertexCounts", int)
bundle_copy_attr_value(dst_child_bundle, src_child_bundle, "faceVertexIndices", int)
bundle_copy_attr_value(dst_child_bundle, src_child_bundle, "primvars:displayColor", wp.vec3)
bundle_copy_attr_value(dst_child_bundle, src_child_bundle, "normals", wp.vec3)
bundle_copy_attr_value(dst_child_bundle, src_child_bundle, "primvars:st", wp.vec2)
def mesh_get_point_count(
bundle: og.BundleContents,
child_idx: int = 0,
) -> int:
"""Retrieves the number of points."""
return bundle_get_attr(bundle, "points", child_idx).size()
def mesh_get_vertex_count(
bundle: og.BundleContents,
child_idx: int = 0,
) -> int:
"""Retrieves the number of vertices."""
attr = bundle_get_attr(bundle, "faceVertexCounts", child_idx)
return int(np.sum(attr_get(attr)))
def mesh_get_face_count(
bundle: og.BundleContents,
child_idx: int = 0,
) -> int:
"""Retrieves the number of faces."""
return bundle_get_attr(bundle, "faceVertexCounts", child_idx).size()
def mesh_get_points(
bundle: og.BundleContents,
child_idx: int = 0,
) -> wp.array(dtype=wp.vec3):
"""Retrieves the bundle points attribute as a Warp array."""
return points_get_points(bundle, child_idx=child_idx)
def mesh_get_velocities(
bundle: og.BundleContents,
child_idx: int = 0,
) -> wp.array(dtype=wp.vec3):
"""Retrieves the bundle velocities attribute as a Warp array."""
return points_get_velocities(bundle, child_idx=child_idx)
def mesh_get_normals(
bundle: og.BundleContents,
child_idx: int = 0,
) -> wp.array(dtype=wp.vec3):
"""Retrieves the bundle normals attribute as a Warp array."""
attr = bundle_get_attr(bundle, "normals", child_idx)
return attr_get_array_on_gpu(attr, wp.vec3, read_only=bundle.read_only)
def mesh_get_uvs(
bundle: og.BundleContents,
child_idx: int = 0,
) -> wp.array(dtype=wp.vec2):
"""Retrieves the bundle UVs attribute as a Warp array."""
attr = bundle_get_attr(bundle, "primvars:st", child_idx)
return attr_get_array_on_gpu(attr, wp.vec2, read_only=bundle.read_only)
def mesh_get_display_color(
bundle: og.BundleContents,
child_idx: int = 0,
) -> wp.array(dtype=wp.vec3):
"""Retrieves the bundle display color attribute as a Warp array."""
return points_get_display_color(bundle, child_idx=child_idx)
def mesh_get_face_vertex_counts(
bundle: og.BundleContents,
child_idx: int = 0,
) -> wp.array(dtype=int):
"""Retrieves the bundle face vertex counts attribute as a Warp array."""
attr = bundle_get_attr(bundle, "faceVertexCounts", child_idx)
return attr_get_array_on_gpu(attr, int, read_only=bundle.read_only)
def mesh_get_face_vertex_indices(
bundle: og.BundleContents,
child_idx: int = 0,
) -> wp.array(dtype=int):
"""Retrieves the bundle face vertex indices attribute as a Warp array."""
attr = bundle_get_attr(bundle, "faceVertexIndices", child_idx)
return attr_get_array_on_gpu(attr, int, read_only=bundle.read_only)
def mesh_get_local_extent(
bundle: og.BundleContents,
child_idx: int = 0,
) -> np.ndarray:
"""Retrieves the local extent of the geometry mesh."""
return points_get_local_extent(bundle, child_idx=child_idx)
def mesh_get_world_extent(
bundle: og.BundleContents,
axis_aligned: bool = False,
child_idx: int = 0,
) -> np.ndarray:
"""Retrieves the world extent of the geometry mesh."""
return points_get_world_extent(
bundle,
axis_aligned=axis_aligned,
child_idx=child_idx,
)
def mesh_triangulate(
bundle: og.BundleContents,
child_idx: int = 0,
) -> wp.array(dtype=int):
"""Computes a triangulated version of the face vertex indices."""
counts = mesh_get_face_vertex_counts(bundle, child_idx=child_idx).numpy()
if np.all(counts == 3):
return mesh_get_face_vertex_indices(bundle, child_idx=child_idx)
indices = mesh_get_face_vertex_indices(bundle, child_idx=child_idx).numpy()
tri_face_count = np.sum(np.subtract(counts, 2))
out = np.empty(tri_face_count * 3, dtype=int)
dst_offset = 0
src_offset = 0
for count in counts:
for i in range(count - 2):
out[dst_offset] = indices[src_offset]
out[dst_offset + 1] = indices[src_offset + i + 1]
out[dst_offset + 2] = indices[src_offset + i + 2]
dst_offset += 3
src_offset += count
return wp.array(out, dtype=int, copy=True)
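# Worked example of the fan triangulation above (illustrative): a single quad
# with faceVertexCounts=[4] and faceVertexIndices=[0, 1, 2, 3] yields
# tri_face_count = (4 - 2) = 2 triangles, emitted as [0, 1, 2, 0, 2, 3].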
| 8,953 | Python | 29.455782 | 100 | 0.623702 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/attributes.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Helpers to author OmniGraph attributes."""
import ctypes
import functools
import inspect
import math
import operator
from typing import (
Any,
Optional,
Sequence,
Union,
)
import numpy as np
import omni.graph.core as og
from omni.warp.nodes._impl.common import type_convert_og_to_warp
import warp as wp
ATTR_BUNDLE_TYPE = og.Type(
og.BaseDataType.RELATIONSHIP,
1,
0,
og.AttributeRole.BUNDLE,
)
# Names
# ------------------------------------------------------------------------------
_ATTR_PORT_TYPES = (
og.AttributePortType.ATTRIBUTE_PORT_TYPE_INPUT,
og.AttributePortType.ATTRIBUTE_PORT_TYPE_OUTPUT,
og.AttributePortType.ATTRIBUTE_PORT_TYPE_STATE,
)
_ATTR_NAME_FMTS = {x: "{}:{{}}".format(og.get_port_type_namespace(x)) for x in _ATTR_PORT_TYPES}
def attr_join_name(
port_type: og.AttributePortType,
base_name: str,
) -> str:
"""Build an attribute name by prefixing it with its port type."""
return _ATTR_NAME_FMTS[port_type].format(base_name)
def attr_get_base_name(
attr: og.Attribute,
) -> str:
"""Retrieves an attribute base name."""
name = attr.get_name()
if (
attr.get_type_name() == "bundle"
and (attr.get_port_type() == og.AttributePortType.ATTRIBUTE_PORT_TYPE_OUTPUT)
and name.startswith("outputs_")
):
# Output bundles are a bit special because they are in fact implemented
# as USD primitives, and USD doesn't support the colon symbol `:` in
# primitive names, thus output bundles are prefixed with `outputs_` in
# OmniGraph instead of `outputs:` like everything else.
return name[8:]
return name.split(":")[-1]
def attr_get_name(
attr: og.Attribute,
) -> str:
"""Retrieves an attribute name."""
name = attr.get_name()
if (
attr.get_type_name() == "bundle"
and (attr.get_port_type() == og.AttributePortType.ATTRIBUTE_PORT_TYPE_OUTPUT)
and name.startswith("outputs_")
):
# Output bundles are a bit special because they are in fact implemented
# as USD primitives, and USD doesn't support the colon symbol `:` in
# primitive names, thus output bundles are prefixed with `outputs_` in
# OmniGraph instead of `outputs:` like everything else.
return attr_join_name(
og.AttributePortType.ATTRIBUTE_PORT_TYPE_OUTPUT,
name[8:],
)
return name
# Values
# ------------------------------------------------------------------------------
def attr_get(
attr: og.AttributeData,
) -> Any:
"""Retrieves the value from an attribute living on the CPU."""
return attr.get(on_gpu=False)
def attr_set(
attr: og.AttributeData,
value: Any,
) -> None:
"""Sets the given value onto an array attribute living on the CPU."""
attr.set(value, on_gpu=False)
def attr_get_array_on_gpu(
attr: og.AttributeData,
dtype: type,
read_only: bool = True,
) -> wp.array:
"""Retrieves the value of an array attribute living on the GPU."""
attr.gpu_ptr_kind = og.PtrToPtrKind.CPU
(ptr, _) = attr.get_array(
on_gpu=True,
get_for_write=not read_only,
reserved_element_count=0 if read_only else attr.size(),
)
return from_omni_graph_ptr(ptr, (attr.size(),), dtype=dtype)
def attr_cast_array_to_warp(
    value: Union[np.ndarray, og.DataWrapper],
dtype: type,
shape: Sequence[int],
device: wp.context.Device,
) -> wp.array:
"""Casts an attribute array value to its corresponding warp type."""
if device.is_cpu:
return wp.array(
value,
dtype=dtype,
shape=shape,
device=device,
)
elif device.is_cuda:
return from_omni_graph_ptr(
value.memory,
shape=shape,
dtype=dtype,
device=device,
)
raise AssertionError("Unexpected device '{}'.".format(device.alias))
# Tracking
# ------------------------------------------------------------------------------
class AttrTracking:
"""Attributes state for tracking changes."""
def __init__(self, names: Sequence[str]) -> None:
self._names = names
self._state = [None] * len(names)
def have_attrs_changed(self, db: og.Database) -> bool:
"""Compare the current attribute values with the internal state."""
for i, name in enumerate(self._names):
cached_value = self._state[i]
current_value = getattr(db.inputs, name)
if isinstance(current_value, np.ndarray):
if not np.array_equal(current_value, cached_value):
return True
elif current_value != cached_value:
return True
return False
def update_state(self, db: og.Database) -> None:
"""Updates the internal state with the current attribute values."""
for i, name in enumerate(self._names):
current_value = getattr(db.inputs, name)
if isinstance(current_value, np.ndarray):
self._state[i] = current_value.copy()
else:
self._state[i] = current_value
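# Usage sketch (illustrative): a node can skip recomputation when none of the
# tracked input attributes changed since the previous evaluation.
#
#     tracking = AttrTracking(("size", "dims"))
#     if not tracking.have_attrs_changed(db):
#         return
#     ...  # recompute outputs here
#     tracking.update_state(db)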
# High-level Helper
# ------------------------------------------------------------------------------
def from_omni_graph_ptr(ptr, shape, dtype=None, device=None):
return wp.array(
dtype=dtype,
ptr=0 if ptr == 0 else ctypes.cast(ptr, ctypes.POINTER(ctypes.c_size_t)).contents.value,
shape=shape,
device=device,
requires_grad=False,
)
def from_omni_graph(
value: Union[np.ndarray, og.DataWrapper, og.AttributeData, og.DynamicAttributeAccess],
dtype: Optional[type] = None,
shape: Optional[Sequence[int]] = None,
device: Optional[wp.context.Device] = None,
) -> wp.array:
"""Casts an OmniGraph array data to its corresponding Warp type."""
def from_data_wrapper(
data: og.DataWrapper,
dtype: Optional[type],
shape: Optional[Sequence[int]],
device: Optional[wp.context.Device],
) -> wp.array:
if data.gpu_ptr_kind != og.PtrToPtrKind.CPU:
raise RuntimeError("All pointers must live on the CPU, make sure to set 'cudaPointers' to 'cpu'.")
elif not data.is_array:
raise RuntimeError("The attribute data isn't an array.")
if dtype is None:
base_type = type_convert_og_to_warp(
og.Type(
data.dtype.base_type,
tuple_count=data.dtype.tuple_count,
array_depth=0,
role=og.AttributeRole.MATRIX if data.dtype.is_matrix_type() else og.AttributeRole.NONE,
),
)
dim_count = len(data.shape)
if dim_count == 1:
dtype = base_type
elif dim_count == 2:
dtype = wp.types.vector(length=data.shape[1], dtype=base_type)
elif dim_count == 3:
dtype = wp.types.matrix(shape=(data.shape[1], data.shape[2]), dtype=base_type)
else:
raise RuntimeError("Arrays with more than 3 dimensions are not supported.")
arr_size = data.shape[0] * data.dtype.size
element_size = wp.types.type_size_in_bytes(dtype)
if shape is None:
# Infer a shape compatible with the dtype.
for i in range(len(data.shape)):
if functools.reduce(operator.mul, data.shape[: i + 1]) * element_size == arr_size:
shape = data.shape[: i + 1]
break
if shape is None:
if arr_size % element_size != 0:
raise RuntimeError(
"Cannot infer a size matching the Warp data type '{}' with " "an array size of '{}' bytes.".format(
dtype.__name__, arr_size
)
)
shape = (arr_size // element_size,)
src_device = wp.get_device(str(data.device))
dst_device = device
return from_omni_graph_ptr(
data.memory,
shape=shape,
dtype=dtype,
device=src_device,
).to(dst_device)
def from_attr_data(
data: og.AttributeData,
dtype: Optional[type],
shape: Optional[Sequence[int]],
device: Optional[wp.context.Device],
) -> wp.array:
if data.gpu_valid():
on_gpu = True
elif data.cpu_valid():
on_gpu = False
else:
raise RuntimeError("The attribute data isn't valid.")
if on_gpu:
data_type = data.get_type()
base_type = type_convert_og_to_warp(
og.Type(
data_type.base_type,
tuple_count=data_type.tuple_count,
array_depth=0,
role=data_type.role,
),
)
if dtype is None:
dtype = base_type
arr_size = data.size() * wp.types.type_size_in_bytes(base_type)
element_size = wp.types.type_size_in_bytes(dtype)
if shape is None:
# Infer a shape compatible with the dtype.
if data_type.is_matrix_type():
dim = math.isqrt(data_type.tuple_count)
arr_shape = (data.size(), dim, dim)
else:
arr_shape = (data.size(), data_type.tuple_count)
for i in range(len(arr_shape)):
if functools.reduce(operator.mul, arr_shape[: i + 1]) * element_size == arr_size:
shape = arr_shape[: i + 1]
break
if shape is None:
if arr_size % element_size != 0:
raise RuntimeError(
"Cannot infer a size matching the Warp data type '{}' with "
"an array size of '{}' bytes.".format(dtype.__name__, arr_size)
)
shape = (arr_size // element_size,)
data.gpu_ptr_kind = og.PtrToPtrKind.CPU
(ptr, _) = data.get_array(
on_gpu=True,
get_for_write=not data.is_read_only(),
reserved_element_count=0 if data.is_read_only() else data.size(),
)
src_device = wp.get_device("cuda")
dst_device = device
return from_omni_graph_ptr(
ptr,
shape=shape,
dtype=dtype,
device=src_device,
).to(dst_device)
else:
arr = data.get_array(
on_gpu=False,
get_for_write=not data.is_read_only(),
reserved_element_count=0 if data.is_read_only() else data.size(),
)
return wp.from_numpy(arr, dtype=dtype, shape=shape, device=device)
if isinstance(value, np.ndarray):
return wp.from_numpy(value, dtype=dtype, shape=shape, device=device)
elif isinstance(value, og.DataWrapper):
return from_data_wrapper(value, dtype, shape, device)
elif isinstance(value, og.AttributeData):
return from_attr_data(value, dtype, shape, device)
elif og.DynamicAttributeAccess in inspect.getmro(type(getattr(value, "_parent", None))):
if device is None:
device = wp.get_device()
if device.is_cpu:
return wp.from_numpy(value.cpu, dtype=dtype, shape=shape, device=device)
elif device.is_cuda:
return from_data_wrapper(value.gpu, dtype, shape, device)
else:
raise AssertionError("Unexpected device '{}'.".format(device.alias))
return None
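# Usage sketch (illustrative; the 'points' attribute name and 'my_kernel' are
# assumptions, not part of this module):
#
#     arr = from_omni_graph(db.inputs.points, dtype=wp.vec3)
#     wp.launch(my_kernel, dim=len(arr), inputs=[arr])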
| 12,155 | Python | 32.30411 | 119 | 0.558042 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/OgnNoiseDeform.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Node deforming points using a noise."""
import hashlib
import traceback
import omni.graph.core as og
import omni.warp.nodes
from omni.warp.nodes.ogn.OgnNoiseDeformDatabase import OgnNoiseDeformDatabase
import warp as wp
USE_GRAPH = True
PROFILING = False
FUNC_PERLIN = wp.constant(0)
FUNC_CURL = wp.constant(1)
FUNC_MAPPING = {
"perlin": FUNC_PERLIN,
"curl": FUNC_CURL,
}
UP_AXIS_MAPPING = {
"+X": (0, 1.0),
"+Y": (1, 1.0),
"+Z": (2, 1.0),
"-X": (0, -1.0),
"-Y": (1, -1.0),
"-Z": (2, -1.0),
}
# Kernels
# -----------------------------------------------------------------------------
@wp.kernel(enable_backward=False)
def deform_noise_kernel(
points: wp.array(dtype=wp.vec3),
partial: bool,
axis: int,
axis_sign: float,
falloff_begin: float,
falloff_end: float,
falloff: float,
func: int,
cell_size: float,
offset: float,
amplitude: wp.vec3,
seed: wp.uint32,
out_points: wp.array(dtype=wp.vec3),
):
"""Kernel to deform points using a noise."""
tid = wp.tid()
seed = wp.rand_init(int(seed))
pos = points[tid]
noise_pos = wp.vec3(pos / cell_size)
if func == FUNC_PERLIN:
displacement = wp.vec3(
wp.noise(
seed,
wp.vec4(
noise_pos[0],
noise_pos[1],
noise_pos[2],
offset,
),
),
wp.noise(
seed,
wp.vec4(
noise_pos[0],
noise_pos[1],
noise_pos[2],
offset + 1234.5,
),
),
wp.noise(
seed,
wp.vec4(
noise_pos[0],
noise_pos[1],
noise_pos[2],
offset + 6789.0,
),
),
)
elif func == FUNC_CURL:
displacement = wp.curlnoise(
seed,
wp.vec4(
noise_pos[0],
noise_pos[1],
noise_pos[2],
offset,
),
)
    if partial:
        if falloff < 1e-3:
            # Degenerate falloff band: hard cutoff at the base plane.
            if (pos[axis] - falloff_begin) * axis_sign > 0:
                influence = 1.0
            else:
                influence = 0.0
        else:
            # Linear ramp going from 0 at the base plane to 1 at the end of
            # the falloff band, taking the axis direction into account.
            if axis_sign < 0.0:
                dist = wp.clamp(pos[axis], falloff_end, falloff_begin)
            else:
                dist = wp.clamp(pos[axis], falloff_begin, falloff_end)
            influence = axis_sign * (dist - falloff_begin) / falloff
    else:
        influence = 1.0
displacement[0] *= amplitude[0] * influence
displacement[1] *= amplitude[1] * influence
displacement[2] *= amplitude[2] * influence
out_points[tid] = pos + displacement
# Compute
# ------------------------------------------------------------------------------
def compute(db: OgnNoiseDeformDatabase) -> None:
"""Evaluates the node."""
# Copy the input primitives bundle.
db.outputs.prims = db.inputs.prims
partial = db.inputs.mode == "partial"
func = FUNC_MAPPING[db.inputs.func]
(axis, axis_sign) = UP_AXIS_MAPPING[db.inputs.upAxis]
falloff_begin = db.inputs.base * axis_sign
falloff_end = (db.inputs.base + db.inputs.falloff) * axis_sign
time_offset = db.inputs.time * db.inputs.speed
amplitude = db.inputs.axisAmplitude * db.inputs.amplitude
prim_count = omni.warp.nodes.bundle_get_child_count(db.inputs.prims)
for i in range(prim_count):
# Retrieve the input and output point data.
in_points = omni.warp.nodes.mesh_get_points(
db.inputs.prims,
child_idx=i,
)
out_points = omni.warp.nodes.mesh_get_points(
db.outputs.prims,
child_idx=i,
)
        # Compute a unique seed for the given primitive by hashing its path.
# We cannot directly use the child index since bundle child ordering
# is currently not guaranteed and can change between sessions, which makes
# the result non-deterministic.
prim_path_attr = omni.warp.nodes.bundle_get_attr(db.inputs.prims, "sourcePrimPath", i)
prim_path = prim_path_attr.get(on_gpu=False)
prim_seed = int(hashlib.md5(prim_path.encode("utf-8")).hexdigest(), 16)
# Evaluate the kernel once per point.
wp.launch(
deform_noise_kernel,
dim=len(in_points),
inputs=(
in_points,
partial,
axis,
axis_sign,
falloff_begin,
falloff_end,
db.inputs.falloff,
func,
db.inputs.cellSize,
time_offset,
amplitude,
db.inputs.seed + prim_seed * 1234,
),
outputs=(out_points,),
)
# Node Entry Point
# ------------------------------------------------------------------------------
class OgnNoiseDeform:
"""Node."""
@staticmethod
def compute(db: OgnNoiseDeformDatabase) -> None:
device = omni.warp.nodes.device_get_cuda_compute()
try:
with wp.ScopedDevice(device):
compute(db)
except Exception:
db.log_error(traceback.format_exc())
return
# Fire the execution for the downstream nodes.
db.outputs.execOut = og.ExecutionAttributeState.ENABLED
| 5,971 | Python | 27.438095 | 94 | 0.516496 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/OgnSampleMeshDeform.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Sample node deforming a geometry mesh."""
import traceback
import omni.graph.core as og
import omni.warp.nodes
from omni.warp.nodes.ogn.OgnSampleMeshDeformDatabase import OgnSampleMeshDeformDatabase
import warp as wp
PROFILING = False
# Kernels
# -----------------------------------------------------------------------------
@wp.kernel(enable_backward=False)
def deform_mesh_kernel(
points: wp.array(dtype=wp.vec3),
time: float,
out_points: wp.array(dtype=wp.vec3),
):
"""Kernel to deform a geometry mesh."""
tid = wp.tid()
pos = points[tid]
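    # Displace along Y with a sine wave travelling along X; the 0.1 frequency
    # and 10.0 amplitude are arbitrary values chosen for this sample.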
displacement = wp.vec3(0.0, wp.sin(time + pos[0] * 0.1) * 10.0, 0.0)
out_points[tid] = pos + displacement
# Compute
# ------------------------------------------------------------------------------
def compute(db: OgnSampleMeshDeformDatabase) -> None:
"""Evaluates the node."""
if not db.inputs.mesh.valid or not db.outputs.mesh.valid:
return
# Copy the input geometry mesh bundle and read its contents.
db.outputs.mesh = db.inputs.mesh
# Retrieve the input and output point data.
points = omni.warp.nodes.mesh_get_points(db.inputs.mesh)
out_points = omni.warp.nodes.mesh_get_points(db.outputs.mesh)
with omni.warp.nodes.NodeTimer("deform_mesh", db, active=PROFILING):
# Evaluate the kernel once per point.
wp.launch(
kernel=deform_mesh_kernel,
dim=len(points),
inputs=[
points,
db.inputs.time,
],
outputs=[
out_points,
],
)
# Node Entry Point
# ------------------------------------------------------------------------------
class OgnSampleMeshDeform:
"""Node."""
@staticmethod
def compute(db: OgnSampleMeshDeformDatabase) -> None:
device = omni.warp.nodes.device_get_cuda_compute()
try:
with wp.ScopedDevice(device):
compute(db)
except Exception:
db.log_error(traceback.format_exc())
return
# Fire the execution for the downstream nodes.
db.outputs.execOut = og.ExecutionAttributeState.ENABLED
| 2,612 | Python | 28.033333 | 87 | 0.595712 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/__init__.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Node collection and private implementation for the corresponding API."""
| 500 | Python | 54.666661 | 76 | 0.812 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/OgnParticlesFromMesh.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Node generating particles inside a mesh."""
import traceback
from typing import Tuple
import numpy as np
import omni.graph.core as og
import omni.warp.nodes
from omni.warp.nodes.ogn.OgnParticlesFromMeshDatabase import OgnParticlesFromMeshDatabase
import warp as wp
PROFILING = False
# Kernels
# ------------------------------------------------------------------------------
@wp.kernel(enable_backward=False)
def transform_points_kernel(
points: wp.array(dtype=wp.vec3),
xform: wp.mat44,
out_points: wp.array(dtype=wp.vec3),
):
tid = wp.tid()
out_points[tid] = wp.transform_point(xform, points[tid])
@wp.kernel(enable_backward=False)
def sample_mesh_kernel(
mesh: wp.uint64,
grid_lower_bound: wp.vec3,
max_points: int,
min_sdf: float,
max_sdf: float,
spacing: float,
spacing_jitter: float,
seed: int,
out_point_count: wp.array(dtype=int),
out_points: wp.array(dtype=wp.vec3),
):
tid = wp.tid()
x, y, z = wp.tid()
# Retrieve the cell's center position.
cell_pos = (
grid_lower_bound
+ wp.vec3(
float(x) + 0.5,
float(y) + 0.5,
float(z) + 0.5,
)
* spacing
)
# Query the closest location on the mesh.
max_dist = 1000.0
query = wp.mesh_query_point(mesh, cell_pos, max_dist)
if not query.result:
return
    # Evaluate the position of the closest mesh location found.
mesh_pos = wp.mesh_eval_position(mesh, query.face, query.u, query.v)
# Check that the cell's distance to the mesh location is within
# the desired range.
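    # `query.sign` is negative when the cell lies inside the mesh, making
    # `dist` a signed distance to the surface.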
dist = wp.length(cell_pos - mesh_pos) * query.sign
if dist < min_sdf or dist > max_sdf:
return
# Increment the counter of valid point locations found.
point_index = wp.atomic_add(out_point_count, 0, 1)
    if point_index >= max_points:
return
# Compute the spacing jitter value while making sure it's normalized
# in a range [-1, 1].
rng = wp.rand_init(seed, tid)
jitter = wp.vec3(
wp.randf(rng) * 2.0 - 1.0,
wp.randf(rng) * 2.0 - 1.0,
wp.randf(rng) * 2.0 - 1.0,
)
# Store the point position.
out_points[point_index] = cell_pos + jitter * spacing_jitter
# Internal State
# ------------------------------------------------------------------------------
class InternalState:
"""Internal state for the node."""
def __init__(self) -> None:
self.mesh = None
self.is_valid = False
self.attr_tracking = omni.warp.nodes.AttrTracking(
(
"transform",
"seed",
"minSdf",
"maxSdf",
"radius",
"spacing",
"spacingJitter",
"mass",
"velocityDir",
"velocityAmount",
"maxPoints",
),
)
def needs_initialization(self, db: OgnParticlesFromMeshDatabase) -> bool:
"""Checks if the internal state needs to be (re)initialized."""
if not self.is_valid:
return True
if omni.warp.nodes.bundle_has_changed(db.inputs.mesh):
return True
return False
def initialize(self, db: OgnParticlesFromMeshDatabase) -> bool:
"""Initializes the internal state."""
point_count = omni.warp.nodes.mesh_get_point_count(db.inputs.mesh)
xform = omni.warp.nodes.bundle_get_world_xform(db.inputs.mesh)
# Transform the mesh's point positions into world space.
world_point_positions = wp.empty(point_count, dtype=wp.vec3)
wp.launch(
kernel=transform_points_kernel,
dim=point_count,
inputs=[
omni.warp.nodes.mesh_get_points(db.inputs.mesh),
xform.T,
],
outputs=[
world_point_positions,
],
)
# Initialize Warp's mesh instance, which requires
# a triangulated topology.
face_vertex_indices = omni.warp.nodes.mesh_triangulate(db.inputs.mesh)
mesh = wp.Mesh(
points=world_point_positions,
velocities=wp.zeros(point_count, dtype=wp.vec3),
indices=face_vertex_indices,
)
# Store the class members.
self.mesh = mesh
return True
# Compute
# ------------------------------------------------------------------------------
def spawn_particles(db: OgnParticlesFromMeshDatabase) -> Tuple[wp.array, int]:
"""Spawns the particles by filling the given point positions array."""
# Initialize an empty array that will hold the particle positions.
points = wp.empty(db.inputs.maxPoints, dtype=wp.vec3)
# Retrieve the mesh's aligned bounding box.
extent = omni.warp.nodes.mesh_get_world_extent(
db.inputs.mesh,
axis_aligned=True,
)
# Compute the emitter's bounding box size.
extent_size = extent[1] - extent[0]
# Infer the emitter's grid dimensions from its bounding box size and
# the requested spacing.
spacing = max(db.inputs.spacing, 1e-6)
dims = (extent_size / spacing).astype(int) + 1
dims = np.maximum(dims, 1)
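    # For example (assumed numbers), an extent of 100 units sampled with a
    # spacing of 10 yields 11 candidate cells along that axis.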
# Add one particle per grid cell located within the mesh geometry.
point_count = wp.zeros(1, dtype=int)
wp.launch(
kernel=sample_mesh_kernel,
dim=dims,
inputs=[
db.internal_state.mesh.id,
extent[0],
db.inputs.maxPoints,
db.inputs.minSdf,
db.inputs.maxSdf,
spacing,
db.inputs.spacingJitter,
db.inputs.seed,
],
outputs=[
point_count,
points,
],
)
# Retrieve the actual number of particles created.
point_count = min(int(point_count.numpy()[0]), db.inputs.maxPoints)
return (points, point_count)
def compute(db: OgnParticlesFromMeshDatabase) -> None:
"""Evaluates the node."""
db.outputs.particles.changes().activate()
if not db.inputs.mesh.valid or not db.outputs.particles.valid:
return
state = db.internal_state
# Initialize the internal state if it hasn't been already.
if state.needs_initialization(db):
if not state.initialize(db):
return
elif not state.attr_tracking.have_attrs_changed(db):
return
with omni.warp.nodes.NodeTimer("spawn_particles", db, active=PROFILING):
# Spawn new particles inside the mesh.
(points, point_count) = spawn_particles(db)
# Create a new geometry points within the output bundle.
omni.warp.nodes.points_create_bundle(
db.outputs.particles,
point_count,
xform=db.inputs.transform,
create_masses=True,
create_velocities=True,
create_widths=True,
)
# Copy the point positions onto the output bundle.
wp.copy(
omni.warp.nodes.points_get_points(db.outputs.particles),
points,
count=point_count,
)
if point_count:
velocities = omni.warp.nodes.points_get_velocities(db.outputs.particles)
if db.inputs.velocityAmount < 1e-6:
velocities.fill_(0.0)
else:
# Retrieve the mesh's world transformation.
xform = omni.warp.nodes.bundle_get_world_xform(db.inputs.mesh)
# Retrieve the normalized velocity direction.
vel = db.inputs.velocityDir
vel /= np.linalg.norm(vel)
# Transform the velocity local direction with the mesh's world
# rotation matrix to get the velocity direction in world space.
vel = np.dot(xform[:3, :3].T, vel)
# Scale the result to get the velocity's magnitude.
vel *= db.inputs.velocityAmount
# Store the velocities in the output bundle.
velocities.fill_(wp.vec3(vel))
# Store the radius in the output bundle.
widths = omni.warp.nodes.points_get_widths(db.outputs.particles)
widths.fill_(db.inputs.radius * 2.0)
# Store the mass in the output bundle.
masses = omni.warp.nodes.points_get_masses(db.outputs.particles)
masses.fill_(db.inputs.mass)
state.attr_tracking.update_state(db)
# Node Entry Point
# ------------------------------------------------------------------------------
class OgnParticlesFromMesh:
"""Node."""
@staticmethod
def internal_state() -> InternalState:
return InternalState()
@staticmethod
def compute(db: OgnParticlesFromMeshDatabase) -> None:
device = omni.warp.nodes.device_get_cuda_compute()
try:
with wp.ScopedDevice(device):
compute(db)
except Exception:
db.log_error(traceback.format_exc())
db.internal_state.is_valid = False
return
db.internal_state.is_valid = True
# Fire the execution for the downstream nodes.
db.outputs.execOut = og.ExecutionAttributeState.ENABLED
| 9,467 | Python | 28.867508 | 89 | 0.591106 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/kernel.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Backend implementation for kernel node(s)."""
from __future__ import annotations
import functools
import hashlib
import importlib.util
import json
import operator
import os
import tempfile
from enum import IntFlag
from typing import (
Any,
Callable,
Mapping,
NamedTuple,
Optional,
Sequence,
Tuple,
Union,
)
import omni.graph.core as og
from omni.warp.nodes._impl.attributes import (
ATTR_BUNDLE_TYPE,
attr_cast_array_to_warp,
attr_get_base_name,
attr_get_name,
attr_join_name,
)
from omni.warp.nodes._impl.common import (
IntEnum,
get_warp_type_from_data_type_name,
type_convert_og_to_warp,
)
import warp as wp
_ATTR_PORT_TYPE_INPUT = og.AttributePortType.ATTRIBUTE_PORT_TYPE_INPUT
_ATTR_PORT_TYPE_OUTPUT = og.AttributePortType.ATTRIBUTE_PORT_TYPE_OUTPUT
EXPLICIT_SOURCE = "explicit"
# Enumerators
# ------------------------------------------------------------------------------
class UserAttributesEvent(IntFlag):
"""User attributes event."""
NONE = 0
CREATED = 1 << 0
REMOVED = 1 << 1
class OutputArrayShapeSource(IntEnum):
"""Method to infer the shape of output attribute arrays."""
AS_INPUT_OR_AS_KERNEL = (0, "as input if any, or as kernel")
AS_KERNEL = (1, "as kernel")
class OutputBundleTypeSource(IntEnum):
"""Method to infer the type of output attribute bundles."""
AS_INPUT = (0, "as input if any")
AS_INPUT_OR_EXPLICIT = (1, "as input if any, or explicit")
EXPLICIT = (2, "explicit")
class ArrayAttributeFormat(IntEnum):
"""Format describing how attribute arrays are defined on the node."""
RAW = (0, "raw")
BUNDLE = (1, "bundle")
# User Attributes Description
# ------------------------------------------------------------------------------
class UserAttributeDesc(NamedTuple):
"""Description of an attribute added dynamically by users through the UI.
This struct is what the Attribute Editor UI passes to the node in order to
communicate any attribute metadata.
"""
port_type: og.AttributePortType
base_name: str
data_type_name: str
is_array: bool
array_format: ArrayAttributeFormat
array_shape_source: Union[None, OutputArrayShapeSource]
optional: bool
@classmethod
def deserialize(
cls,
        data: Mapping[str, Any],
) -> Optional[UserAttributeDesc]:
"""Creates a new instance based on a serialized representation."""
# Retrieve the port type. It's invalid not to have any set.
port_type = data.get("port_type")
if port_type is None:
return None
port_type = og.AttributePortType(port_type)
# Define sensible default values.
# Although this class requires all of its member values to be explicitly
# defined upon initialization, it's possible that the incoming data was
# serialized with an older version of this class, in which case we might
# want to try filling any gap.
values = {
"array_format": ArrayAttributeFormat.RAW,
"array_shape_source": (
OutputArrayShapeSource.AS_INPUT_OR_AS_KERNEL if port_type == _ATTR_PORT_TYPE_OUTPUT else None
),
"optional": False,
}
# Override the default values with the incoming data.
values.update({k: v for k, v in data.items() if k in cls._fields})
# Ensure that the member values are set using their rightful types.
values.update(
{
"port_type": port_type,
"array_format": ArrayAttributeFormat(values["array_format"]),
"array_shape_source": (
None
if values["array_shape_source"] is None
else OutputArrayShapeSource(values["array_shape_source"])
),
}
)
try:
# This might error in case some members are still missing.
return cls(**values)
except TypeError:
return None
@property
def name(self) -> str:
"""Retrieves the attribute's name prefixed with its port type."""
return attr_join_name(self.port_type, self.base_name)
@property
def type(self) -> og.Attribute:
"""Retrieves OmniGraph's attribute type."""
return og.AttributeType.type_from_sdf_type_name(self.type_name)
@property
def type_name(self) -> str:
"""Retrieves OmniGraph's attribute type name."""
if self.is_array:
return "{}[]".format(self.data_type_name)
return self.data_type_name
    def serialize(self) -> Mapping[str, Any]:
"""Converts this instance into a serialized representation."""
return self._replace(
port_type=int(self.port_type),
)._asdict()
def deserialize_user_attribute_descs(
data: str,
) -> Mapping[str, UserAttributeDesc]:
"""Deserializes a string into a mapping of (name, desc)."""
descs = {attr_join_name(x["port_type"], x["base_name"]): UserAttributeDesc.deserialize(x) for x in json.loads(data)}
# Filter out any invalid description.
return {k: v for k, v in descs.items() if v is not None}
def serialize_user_attribute_descs(
descs: Mapping[str, UserAttributeDesc],
) -> str:
"""Serializes a mapping of (name, desc) into a string."""
return json.dumps(tuple(x.serialize() for x in descs.values()))
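# NOTE: Illustrative sketch of the (de)serialization round-trip; this helper is
# not used by the node itself, and the attribute values below are assumptions.
def _example_attr_desc_roundtrip() -> None:
    desc = UserAttributeDesc(
        port_type=_ATTR_PORT_TYPE_INPUT,
        base_name="points",
        data_type_name="float3",
        is_array=True,
        array_format=ArrayAttributeFormat.RAW,
        array_shape_source=None,
        optional=False,
    )
    # Serialize to a JSON string and back; the description survives intact.
    data = serialize_user_attribute_descs({desc.name: desc})
    assert deserialize_user_attribute_descs(data)[desc.name] == desc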
# User Attributes Information
# ------------------------------------------------------------------------------
class OutputAttributeInfo(NamedTuple):
"""Information relating to an output node attribute."""
array_shape_source: Optional[OutputArrayShapeSource]
bundle_type_source: Optional[OutputBundleTypeSource]
bundle_type_explicit: Optional[str] = None
class AttributeInfo(NamedTuple):
"""Information relating to a node attribute.
This struct contains all the metadata required by the node to initialize
and evaluate. This includes compiling the kernel and initializing the Inputs
and Outputs structs that are then passed to the kernel as parameters.
    We don't directly store the array shape, if any, since it may vary between
    evaluations of the node's compute. Instead, we store which method to use
    to infer the array's shape and let the node determine the actual shape
    during each compute step.
Note
----
The `warp_type` member represents the type of the kernel parameter
    corresponding to that attribute. If the attribute is a bundle, then it is
expected to be a `wp.struct` holding the values of the bundle, unless
the bundle is of type :class:`Array`, in which case `warp_type` should be
a standard `wp.array`.
"""
port_type: og.AttributePortType
base_name: str
og_type: og.Type
warp_type: type
output: Optional[OutputAttributeInfo] = None
@property
def name(self) -> str:
return attr_join_name(self.port_type, self.base_name)
@property
def og_data_type(self) -> og.Type:
return og.Type(
self.og_type.base_type,
tuple_count=self.og_type.tuple_count,
array_depth=0,
role=self.og_type.role,
)
@property
def is_array(self) -> bool:
return self.og_type.array_depth > 0
@property
def is_bundle(self) -> bool:
return self.og_type == ATTR_BUNDLE_TYPE
@property
def dim_count(self) -> int:
if self.is_array:
return self.warp_type.ndim
return 0
@property
def warp_data_type(self) -> type:
if self.is_array:
return self.warp_type.dtype
return self.warp_type
@property
def warp_type_name(self) -> str:
if self.is_bundle:
return self.warp_type.cls.__name__
return get_warp_type_from_data_type_name(
self.warp_data_type.__name__,
dim_count=self.dim_count,
as_str=True,
)
@property
def warp_data_type_name(self) -> str:
if self.is_bundle:
return self.warp_type.cls.__name__
return get_warp_type_from_data_type_name(
self.warp_data_type.__name__,
dim_count=0,
as_str=True,
)
def gather_attribute_infos(
node: og.Node,
db_inputs: Any,
db_outputs: Any,
attr_descs: Mapping[str, UserAttributeDesc],
kernel_dim_count: int,
) -> Mapping[og.AttributePortType, Tuple[AttributeInfo, ...]]:
"""Gathers the information for each user attribute.
See also: :class:`AttributeInfo`.
"""
def extract_partial_info_from_attr(attr: og.Attribute) -> Tuple[Any, ...]:
"""Extract a partial information set from an attribute."""
name = attr_get_name(attr)
base_name = attr_get_base_name(attr)
og_type = attr.get_resolved_type()
is_array = og_type.array_depth > 0
return (name, base_name, og_type, is_array)
# Retrieve the user attributes defined on the node.
attrs = tuple(x for x in node.get_attributes() if x.is_dynamic())
# Gather the information for the input attributes.
input_attr_infos = []
for attr in attrs:
if attr.get_port_type() != _ATTR_PORT_TYPE_INPUT:
continue
(name, base_name, og_type, is_array) = extract_partial_info_from_attr(attr)
og_data_type = og.Type(
og_type.base_type,
tuple_count=og_type.tuple_count,
array_depth=0,
role=og_type.role,
)
input_attr_infos.append(
AttributeInfo(
port_type=_ATTR_PORT_TYPE_INPUT,
base_name=base_name,
og_type=og_type,
warp_type=type_convert_og_to_warp(
og_data_type,
dim_count=int(is_array),
),
)
)
# Gather the information for the output attributes.
output_attr_infos = []
for attr in attrs:
if attr.get_port_type() != _ATTR_PORT_TYPE_OUTPUT:
continue
(name, base_name, og_type, is_array) = extract_partial_info_from_attr(attr)
desc = attr_descs.get(name)
if desc is None:
# Fallback for nodes created before the attribute description
# feature was implemented.
array_shape_source = OutputArrayShapeSource.AS_INPUT_OR_AS_KERNEL
else:
array_shape_source = desc.array_shape_source
if array_shape_source == OutputArrayShapeSource.AS_INPUT_OR_AS_KERNEL:
# Check if we have an input attribute with a matching name,
# in which case we use its array dimension count.
try:
dim_count = next(x.dim_count for x in input_attr_infos if x.base_name == base_name)
except StopIteration:
# Fallback to using the kernel's dimension count.
dim_count = kernel_dim_count
elif array_shape_source == OutputArrayShapeSource.AS_KERNEL:
dim_count = kernel_dim_count
else:
raise AssertionError("Unexpected array shape source method '{}'.".format(array_shape_source))
og_data_type = og.Type(
og_type.base_type,
tuple_count=og_type.tuple_count,
array_depth=0,
role=og_type.role,
)
output_attr_infos.append(
AttributeInfo(
port_type=_ATTR_PORT_TYPE_OUTPUT,
base_name=base_name,
og_type=og_type,
warp_type=type_convert_og_to_warp(
og_data_type,
dim_count=dim_count,
),
output=OutputAttributeInfo(
array_shape_source=array_shape_source,
bundle_type_source=OutputBundleTypeSource.AS_INPUT,
),
)
)
return {
_ATTR_PORT_TYPE_INPUT: tuple(input_attr_infos),
_ATTR_PORT_TYPE_OUTPUT: tuple(output_attr_infos),
}
# Kernel Code
# ------------------------------------------------------------------------------
_STRUCT_DECLARATION_CODE_TEMPLATE = """@wp.struct
class {name}:
{members}
"""
def _generate_struct_declaration_code(warp_struct: wp.struct) -> str:
"""Generates the code declaring a Warp struct."""
lines = []
for label, var in warp_struct.vars.items():
warp_type = var.type
if isinstance(warp_type, wp.array):
warp_data_type = warp_type.dtype
dim_count = warp_type.ndim
else:
warp_data_type = warp_type
dim_count = 0
warp_type_name = get_warp_type_from_data_type_name(
warp_data_type.__name__,
dim_count=dim_count,
as_str=True,
)
lines.append(" {}: {}".format(label, warp_type_name))
return _STRUCT_DECLARATION_CODE_TEMPLATE.format(
name=warp_struct.cls.__name__,
members="\n".join(lines),
)
_HEADER_CODE_TEMPLATE = """import warp as wp
{declarations}
@wp.struct
class Inputs:
{inputs}
pass
@wp.struct
class Outputs:
{outputs}
pass
"""
def _generate_header_code(
attr_infos: Mapping[og.AttributePortType, Tuple[AttributeInfo, ...]],
) -> str:
"""Generates the code header based on the node's attributes."""
# Retrieve all the Warp struct types corresponding to bundle attributes.
struct_types = {x.warp_type_name: x.warp_type for _, v in attr_infos.items() for x in v if x.is_bundle}
# Generate the code that declares the Warp structs found.
declarations = [""]
declarations.extend(_generate_struct_declaration_code(x) for _, x in struct_types.items())
# Generate the lines of code declaring the members for each port type.
lines = {k: tuple(" {}: {}".format(x.base_name, x.warp_type_name) for x in v) for k, v in attr_infos.items()}
# Return the template code populated with the members.
return _HEADER_CODE_TEMPLATE.format(
declarations="\n".join(declarations),
inputs="\n".join(lines.get(_ATTR_PORT_TYPE_INPUT, ())),
outputs="\n".join(lines.get(_ATTR_PORT_TYPE_OUTPUT, ())),
)
def _get_user_code(code_provider: str, code_str: str, code_file: str) -> str:
"""Retrieves the code provided by the user."""
if code_provider == "embedded":
return code_str
if code_provider == "file":
with open(code_file, "r") as f:
return f.read()
raise AssertionError("Unexpected code provider '{}'.".format(code_provider))
# Kernel Module
# ------------------------------------------------------------------------------
def _load_code_as_module(code: str, name: str) -> Any:
"""Loads a Python module from the given source code."""
# It's possible to use the `exec()` built-in function to create and
# populate a Python module with the source code defined in a string,
# however warp requires access to the source code of the kernel's
# function, which is only available when the original source file
# pointed by the function attribute `__code__.co_filename` can
# be opened to read the lines corresponding to that function.
# As such, we must write the source code into a temporary file
# on disk before importing it as a module and having the function
# turned into a kernel by warp's mechanism.
# Create a temporary file.
file, file_path = tempfile.mkstemp(suffix=".py")
try:
# Save the embedded code into the temporary file.
with os.fdopen(file, "w") as f:
f.write(code)
# Import the temporary file as a Python module.
spec = importlib.util.spec_from_file_location(name, file_path)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
finally:
        # The resulting Python module is stored in memory as a bytecode
# object and the kernel function has already been parsed by warp
# as long as it was correctly decorated, so it's now safe to
# clean-up the temporary file.
os.remove(file_path)
return module
def initialize_kernel_module(
attr_infos: Mapping[og.AttributePortType, Tuple[AttributeInfo, ...]],
code_provider: str,
code_str: str,
code_file: str,
) -> wp.context.Module:
# Ensure that all output parameters are arrays. Writing to non-array
# types is not supported as per CUDA's design.
invalid_attrs = tuple(x.name for x in attr_infos[_ATTR_PORT_TYPE_OUTPUT] if not x.is_array and not x.is_bundle)
if invalid_attrs:
raise RuntimeError(
"Output attributes are required to be arrays or bundles but "
"the following attributes are not: {}.".format(", ".join(invalid_attrs))
)
# Retrieve the kernel code to evaluate.
code_header = _generate_header_code(attr_infos)
user_code = _get_user_code(code_provider, code_str, code_file)
code = "{}\n{}".format(code_header, user_code)
# Create a Python module made of the kernel code.
# We try to keep its name unique to ensure that it's not clashing with
# other kernel modules from the same session.
uid = hashlib.blake2b(bytes(code, encoding="utf-8"), digest_size=8)
module_name = "warp-kernelnode-{}".format(uid.hexdigest())
kernel_module = _load_code_as_module(code, module_name)
# Validate the module's contents.
if not hasattr(kernel_module, "compute"):
raise RuntimeError("The code must define a kernel function named 'compute'.")
if not isinstance(kernel_module.compute, wp.context.Kernel):
raise RuntimeError("The 'compute' function must be decorated with '@wp.kernel'.")
# Configure warp to only compute the forward pass.
wp.set_module_options({"enable_backward": False}, module=kernel_module)
return kernel_module
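# NOTE: For reference, user code passed to this node is expected to define a
# kernel named `compute` operating on the generated `Inputs`/`Outputs` structs.
# An assumed example, where `points` and `scaled` are user-created attributes:
#
#     @wp.kernel
#     def compute(inputs: Inputs, outputs: Outputs):
#         tid = wp.tid()
#         outputs.scaled[tid] = inputs.points[tid] * 2.0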
# Data I/O
# ------------------------------------------------------------------------------
def _infer_output_array_shape(
attr_info: AttributeInfo,
input_attr_infos: Tuple[AttributeInfo, ...],
kernel_inputs: Any,
kernel_shape: Sequence[int],
) -> Tuple[int, ...]:
if attr_info.output.array_shape_source == OutputArrayShapeSource.AS_INPUT_OR_AS_KERNEL:
# Check if we have an input attribute with a matching name,
# in which case we use its array shape.
try:
ref_attr_base_name = next(
x.base_name
for x in input_attr_infos
if (x.base_name == attr_info.base_name and x.is_array and x.dim_count == attr_info.dim_count)
)
return getattr(kernel_inputs, ref_attr_base_name).shape
except StopIteration:
# Fallback to using the kernel's shape.
return tuple(kernel_shape)
if attr_info.output.array_shape_source == OutputArrayShapeSource.AS_KERNEL:
return tuple(kernel_shape)
raise AssertionError("Unexpected array shape source method '{}'.".format(attr_info.output.array_shape_source))
class KernelArgsConfig(NamedTuple):
"""Configuration for resolving kernel arguments."""
input_bundle_handlers: Optional[Mapping[str, Callable]] = None
output_bundle_handlers: Optional[Mapping[str, Callable]] = None
def get_kernel_args(
db_inputs: Any,
db_outputs: Any,
attr_infos: Mapping[og.AttributePortType, Tuple[AttributeInfo, ...]],
kernel_module: Any,
kernel_shape: Sequence[int],
device: Optional[wp.context.Device] = None,
config: Optional[KernelArgsConfig] = None,
) -> Tuple[Any, Any]:
"""Retrieves the in/out argument values to pass to the kernel."""
if device is None:
device = wp.get_device()
if config is None:
config = KernelArgsConfig()
# Initialize the kernel's input data.
inputs = kernel_module.Inputs()
for info in attr_infos[_ATTR_PORT_TYPE_INPUT]:
# Retrieve the input attribute value and cast it to
# the corresponding warp type.
if info.is_array:
value = getattr(db_inputs, info.base_name)
# The array value might define 2 dimensions when tuples such as
# wp.vec3 are used as data type, so we preserve only the first
# dimension to retrieve the actual shape since OmniGraph only
            # supports 1D arrays anyway.
shape = value.shape[:1]
value = attr_cast_array_to_warp(
value,
info.warp_data_type,
shape,
device,
)
elif info.is_bundle:
raise NotImplementedError("Bundle attributes are not yet supported.")
else:
value = getattr(db_inputs, info.base_name)
# Store the result in the inputs struct.
setattr(inputs, info.base_name, value)
# Initialize the kernel's output data.
outputs = kernel_module.Outputs()
for info in attr_infos[_ATTR_PORT_TYPE_OUTPUT]:
# Retrieve the output attribute value and cast it to the corresponding
# warp type.
if info.is_array:
shape = _infer_output_array_shape(
info,
attr_infos[_ATTR_PORT_TYPE_INPUT],
inputs,
kernel_shape,
)
# Allocate a buffer for the array.
size = functools.reduce(operator.mul, shape)
setattr(db_outputs, "{}_size".format(info.base_name), size)
value = getattr(db_outputs, info.base_name)
value = attr_cast_array_to_warp(
value,
info.warp_data_type,
shape,
device,
)
elif info.is_bundle:
raise NotImplementedError("Bundle attributes are not yet supported.")
else:
raise AssertionError("Output attributes are expected to be arrays or bundles.")
# Store the result in the outputs struct.
setattr(outputs, info.base_name, value)
return (inputs, outputs)
def write_output_attrs(
db_outputs: Any,
attr_infos: Mapping[og.AttributePortType, Tuple[AttributeInfo, ...]],
kernel_outputs: Any,
device: Optional[wp.context.Device] = None,
) -> None:
"""Writes the output values to the node's attributes."""
if device is None:
device = wp.get_device()
if device.is_cuda:
# CUDA attribute arrays are directly being written to by Warp.
return
for info in attr_infos[_ATTR_PORT_TYPE_OUTPUT]:
value = getattr(kernel_outputs, info.base_name)
setattr(db_outputs, info.base_name, value)
# Validation
# ------------------------------------------------------------------------------
def validate_input_arrays(
node: og.Node,
attr_infos: Mapping[og.AttributePortType, Tuple[AttributeInfo, ...]],
kernel_inputs: Any,
) -> None:
"""Validates array input attributes."""
for info in attr_infos[_ATTR_PORT_TYPE_INPUT]:
value = getattr(kernel_inputs, info.base_name)
if not isinstance(value, wp.array):
continue
# Ensure that all array input attributes are not NULL,
# unless they are set as being optional.
attr = og.Controller.attribute(info.name, node)
if not attr.is_optional_for_compute and not value.ptr:
raise RuntimeError("Empty value for non-optional attribute '{}'.".format(info.name))
# Node's Internal State
# ------------------------------------------------------------------------------
class InternalStateBase:
"""Base class for the node's internal state."""
def __init__(self) -> None:
self._code_provider = None
self._code_str = None
self._code_file = None
self._code_file_timestamp = None
self.attr_infos = None
self.kernel_module = None
self.is_valid = False
def needs_initialization(
self,
db: Any,
check_file_modified_time: bool,
) -> bool:
"""Checks if the internal state needs to be (re)initialized."""
if self.is_valid:
# If everything is in order, we only need to recompile the kernel
# when attributes are removed, since adding new attributes is not
# a breaking change.
if self.kernel_module is None or UserAttributesEvent.REMOVED & db.state.userAttrsEvent:
return True
else:
# If something previously went wrong, we always recompile the kernel
# when attributes are edited, in case it might fix code that
# errored out due to referencing a non-existing attribute.
if db.state.userAttrsEvent != UserAttributesEvent.NONE:
return True
if self.attr_infos is None:
return True
if self._code_provider != db.inputs.codeProvider:
return True
if self._code_provider == "embedded":
if self._code_str != db.inputs.codeStr:
return True
elif self._code_provider == "file":
if self._code_file != db.inputs.codeFile or (
check_file_modified_time and (self._code_file_timestamp != os.path.getmtime(self._code_file))
):
return True
else:
raise AssertionError("Unexpected code provider '{}'.".format(self._code_provider))
return False
def initialize(self, db: Any) -> bool:
"""Initialize the internal state and recompile the kernel."""
# Cache the node attribute values relevant to this internal state.
# They're the ones used to check whether this state is outdated or not.
self._code_provider = db.inputs.codeProvider
self._code_str = db.inputs.codeStr
self._code_file = db.inputs.codeFile
return True
| 26,288 | Python | 32.617647 | 120 | 0.608072 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/OgnParticlesSimulate.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Node simulating particles."""
import traceback
from math import inf
import numpy as np
import omni.graph.core as og
import omni.timeline
import omni.warp.nodes
from omni.warp.nodes.ogn.OgnParticlesSimulateDatabase import OgnParticlesSimulateDatabase
import warp as wp
USE_GRAPH = True
PROFILING = False
# Kernels
# ------------------------------------------------------------------------------
@wp.kernel(enable_backward=False)
def query_max_value_kernel(
values: wp.array(dtype=float),
out_max: wp.array(dtype=float),
):
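    # One thread per input value; the atomic max reduces them into out_max[0].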
wp.atomic_max(out_max, 0, values[wp.tid()])
@wp.kernel(enable_backward=False)
def compute_particles_inv_mass_kernel(
masses: wp.array(dtype=float),
out_inv_masses: wp.array(dtype=float),
):
tid = wp.tid()
out_inv_masses[tid] = 1.0 / masses[tid]
@wp.kernel(enable_backward=False)
def compute_particles_radius_kernel(
widths: wp.array(dtype=float),
out_radii: wp.array(dtype=float),
):
tid = wp.tid()
out_radii[tid] = widths[tid] * 0.5
@wp.kernel(enable_backward=False)
def transform_points_kernel(
points: wp.array(dtype=wp.vec3),
xform: wp.mat44,
out_points: wp.array(dtype=wp.vec3),
):
tid = wp.tid()
out_points[tid] = wp.transform_point(xform, points[tid])
@wp.kernel(enable_backward=False)
def update_collider_kernel(
points_0: wp.array(dtype=wp.vec3),
points_1: wp.array(dtype=wp.vec3),
xform_0: wp.mat44,
xform_1: wp.mat44,
sim_dt: float,
out_points: wp.array(dtype=wp.vec3),
out_velocities: wp.array(dtype=wp.vec3),
):
tid = wp.tid()
point_0 = wp.transform_point(xform_0, points_0[tid])
point_1 = wp.transform_point(xform_1, points_1[tid])
out_points[tid] = point_0
out_velocities[tid] = (point_1 - point_0) / sim_dt
@wp.kernel(enable_backward=False)
def update_particles_kernel(
points_0: wp.array(dtype=wp.vec3),
xform: wp.mat44,
out_points: wp.array(dtype=wp.vec3),
out_velocities: wp.array(dtype=wp.vec3),
):
tid = wp.tid()
point = wp.transform_point(xform, points_0[tid])
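    # Add the position delta induced by the transform change to the velocity
    # so that the solver carries the motion over (note: not divided by dt).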
diff = point - points_0[tid]
out_points[tid] = point
out_velocities[tid] = out_velocities[tid] + diff
# Internal State
# ------------------------------------------------------------------------------
class InternalState:
"""Internal state for the node."""
def __init__(self) -> None:
self.sim_dt = None
self.sim_tick = None
self.model = None
self.integrator = None
self.state_0 = None
self.state_1 = None
self.xform = None
self.collider_xform = None
self.collider_mesh = None
self.collider_points_0 = None
self.collider_points_1 = None
self.graph = None
self.sim_enabled = True
self.time = 0.0
self.is_valid = False
self.attr_tracking = omni.warp.nodes.AttrTracking(
(
"substepCount",
"gravity",
"globalScale",
"contactElasticStiffness",
"contactFrictionStiffness",
"contactFrictionCoeff",
"contactDampingStiffness",
"particlesQueryRange",
"particlesContactAdhesion",
"particlesContactCohesion",
"colliderContactDistance",
"colliderContactQueryRange",
"groundEnabled",
"groundAltitude",
),
)
def needs_initialization(self, db: OgnParticlesSimulateDatabase) -> bool:
"""Checks if the internal state needs to be (re)initialized."""
if not self.is_valid or not db.inputs.enabled or not self.sim_enabled:
return True
if self.attr_tracking.have_attrs_changed(db):
return True
if db.inputs.time < self.time:
# Reset the simulation when we're rewinding.
return True
return False
def initialize(
self,
db: OgnParticlesSimulateDatabase,
device: wp.context.Device,
) -> bool:
"""Initializes the internal state."""
        # Lazily import warp.sim here to avoid slowing down extension loading.
import warp.sim
# Compute the simulation time step.
timeline = omni.timeline.get_timeline_interface()
sim_rate = timeline.get_ticks_per_second()
sim_dt = 1.0 / sim_rate
# Initialize Warp's simulation model builder.
builder = wp.sim.ModelBuilder()
# Retrieve some data from the particles points.
points = omni.warp.nodes.points_get_points(db.inputs.particles)
xform = omni.warp.nodes.bundle_get_world_xform(db.inputs.particles)
# Transform the particles point positions into world space.
world_points = wp.empty(len(points), dtype=wp.vec3)
wp.launch(
kernel=transform_points_kernel,
dim=len(points),
inputs=[
points,
xform.T,
],
outputs=[
world_points,
],
)
if db.inputs.collider.valid:
# Retrieve some data from the collider mesh.
collider_points = omni.warp.nodes.mesh_get_points(db.inputs.collider)
collider_xform = omni.warp.nodes.bundle_get_world_xform(db.inputs.collider)
# Transform the collider point positions into world space.
collider_world_points = wp.empty(
len(collider_points),
dtype=wp.vec3,
)
wp.launch(
kernel=transform_points_kernel,
dim=len(collider_points),
inputs=[
collider_points,
collider_xform.T,
],
outputs=[
collider_world_points,
],
)
# Initialize Warp's mesh instance, which requires
# triangulated meshes.
collider_face_vertex_indices = omni.warp.nodes.mesh_triangulate(
db.inputs.collider,
)
collider_mesh = wp.sim.Mesh(
collider_world_points.numpy(),
collider_face_vertex_indices.numpy(),
compute_inertia=False,
)
# Register the collider geometry mesh into Warp's simulation model
# builder.
builder.add_shape_mesh(
body=-1,
mesh=collider_mesh,
pos=(0.0, 0.0, 0.0),
rot=(0.0, 0.0, 0.0, 1.0),
scale=(1.0, 1.0, 1.0),
)
# Store the collider's point positions as internal state.
collider_points_0 = wp.empty_like(collider_points)
collider_points_1 = wp.empty_like(collider_points)
wp.copy(collider_points_0, collider_points)
wp.copy(collider_points_1, collider_points)
# Store the class members.
self.collider_xform = collider_xform.copy()
self.collider_mesh = collider_mesh
self.collider_points_0 = collider_points_0
self.collider_points_1 = collider_points_1
else:
self.collider_mesh = None
# Register the ground.
builder.set_ground_plane(
offset=-db.inputs.groundAltitude,
ke=db.inputs.contactElasticStiffness * db.inputs.globalScale,
kd=db.inputs.contactDampingStiffness * db.inputs.globalScale,
kf=db.inputs.contactFrictionStiffness * db.inputs.globalScale,
mu=db.inputs.contactFrictionCoeff,
)
# Build the simulation model.
model = builder.finalize()
# Register the input particles into the system.
model.particle_count = omni.warp.nodes.points_get_point_count(db.inputs.particles)
model.particle_q = world_points
model.particle_qd = omni.warp.nodes.points_get_velocities(db.inputs.particles)
model.particle_mass = omni.warp.nodes.points_get_masses(db.inputs.particles)
model.particle_inv_mass = wp.empty_like(model.particle_mass)
wp.launch(
compute_particles_inv_mass_kernel,
dim=model.particle_count,
inputs=[model.particle_mass],
outputs=[model.particle_inv_mass],
)
widths = omni.warp.nodes.points_get_widths(db.inputs.particles)
model.particle_radius = wp.empty_like(widths)
wp.launch(
compute_particles_radius_kernel,
dim=model.particle_count,
inputs=[widths],
outputs=[model.particle_radius],
)
model.particle_flags = wp.empty(model.particle_count, dtype=wp.uint32)
model.particle_flags.fill_(warp.sim.model.PARTICLE_FLAG_ACTIVE.value)
max_width = wp.array((-inf,), dtype=float)
wp.launch(
query_max_value_kernel,
dim=model.particle_count,
inputs=[widths],
outputs=[max_width],
)
model.particle_max_radius = float(max_width.numpy()[0]) * 0.5
# Allocate a single contact per particle.
model.allocate_soft_contacts(model.particle_count)
# Initialize the integrator.
integrator = wp.sim.SemiImplicitIntegrator()
# Set the model properties.
model.ground = db.inputs.groundEnabled
model.gravity = db.inputs.gravity
model.particle_adhesion = db.inputs.particlesContactAdhesion
model.particle_cohesion = db.inputs.particlesContactCohesion
model.particle_ke = db.inputs.contactElasticStiffness * db.inputs.globalScale
model.particle_kf = db.inputs.contactFrictionStiffness * db.inputs.globalScale
model.particle_mu = db.inputs.contactFrictionCoeff
model.particle_kd = db.inputs.contactDampingStiffness * db.inputs.globalScale
model.soft_contact_ke = db.inputs.contactElasticStiffness * db.inputs.globalScale
model.soft_contact_kf = db.inputs.contactFrictionStiffness * db.inputs.globalScale
model.soft_contact_mu = db.inputs.contactFrictionCoeff
model.soft_contact_kd = db.inputs.contactDampingStiffness * db.inputs.globalScale
model.soft_contact_margin = db.inputs.colliderContactDistance * db.inputs.colliderContactQueryRange
# Store the class members.
self.sim_dt = sim_dt
self.sim_tick = 0
self.model = model
self.integrator = integrator
self.state_0 = model.state()
self.state_1 = model.state()
self.xform = xform.copy()
if USE_GRAPH:
# Create the CUDA graph. We first manually load the necessary
# modules to avoid the capture to load all the modules that are
# registered and possibly not relevant.
wp.load_module(device=device)
wp.load_module(module=warp.sim, device=device, recursive=True)
wp.capture_begin(force_module_load=False)
try:
step(db)
finally:
self.graph = wp.capture_end()
self.attr_tracking.update_state(db)
return True
# Compute
# ------------------------------------------------------------------------------
def update_collider(
db: OgnParticlesSimulateDatabase,
) -> None:
"""Updates the collider state."""
state = db.internal_state
points = omni.warp.nodes.mesh_get_points(db.inputs.collider)
xform = omni.warp.nodes.bundle_get_world_xform(db.inputs.collider)
# Swap the previous and current collider point positions.
(state.collider_points_0, state.collider_points_1) = (
state.collider_points_1,
state.collider_points_0,
)
# Store the current point positions.
wp.copy(state.collider_points_1, points)
# Retrieve the previous and current world transformations.
xform_0 = state.collider_xform
xform_1 = xform
# Update the internal point positions and velocities.
wp.launch(
kernel=update_collider_kernel,
dim=len(state.collider_mesh.vertices),
inputs=[
state.collider_points_0,
state.collider_points_1,
xform_0.T,
xform_1.T,
state.sim_dt,
],
outputs=[
state.collider_mesh.mesh.points,
state.collider_mesh.mesh.velocities,
],
)
# Refit the BVH.
state.collider_mesh.mesh.refit()
# Update the state members.
state.collider_xform = xform.copy()
def update_particles(
db: OgnParticlesSimulateDatabase,
) -> None:
"""Updates the particles state."""
state = db.internal_state
xform = omni.warp.nodes.bundle_get_world_xform(db.inputs.particles)
# Retrieve the previous and current world transformations.
xform_0 = state.xform
xform_1 = xform
# Update the internal point positions and velocities.
wp.launch(
kernel=update_particles_kernel,
dim=len(state.state_0.particle_q),
inputs=[
state.state_0.particle_q,
np.matmul(np.linalg.inv(xform_0), xform_1).T,
],
outputs=[
state.state_0.particle_q,
state.state_0.particle_qd,
],
)
# Update the state members.
state.xform = xform.copy()
def step(db: OgnParticlesSimulateDatabase) -> None:
"""Steps through the simulation."""
state = db.internal_state
sim_dt = state.sim_dt / db.inputs.substepCount
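    # Substepping: each integrator call advances a fraction of the frame time,
    # which improves stability for stiff contact parameters.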
# Run the collision detection once per frame.
wp.sim.collide(state.model, state.state_0)
for _ in range(db.inputs.substepCount):
state.state_0.clear_forces()
state.integrator.simulate(
state.model,
state.state_0,
state.state_1,
sim_dt,
)
# Swap the previous and current states.
(state.state_0, state.state_1) = (state.state_1, state.state_0)
def simulate(db: OgnParticlesSimulateDatabase) -> None:
"""Simulates the particles at the current time."""
state = db.internal_state
state.model.particle_grid.build(
state.state_0.particle_q,
state.model.particle_max_radius * db.inputs.particlesQueryRange,
)
if USE_GRAPH:
wp.capture_launch(state.graph)
else:
step(db)
def compute(db: OgnParticlesSimulateDatabase, device: wp.context.Device) -> None:
"""Evaluates the node."""
if not db.inputs.particles.valid or not db.outputs.particles.valid:
return
state = db.internal_state
if not db.inputs.enabled:
# Pass through the data.
db.outputs.particles = db.inputs.particles
# Store whether the simulation was last enabled.
state.sim_enabled = False
return
if state.needs_initialization(db):
# Initialize the internal state if it hasn't been already.
# We want to use the input particles geometry as the initial state
# of the simulation so we copy its bundle to the output one.
db.outputs.particles = db.inputs.particles
if not state.initialize(db, device):
return
else:
# We skip the simulation if it has just been initialized.
if state.sim_tick == 0 and omni.warp.nodes.bundle_has_changed(db.inputs.particles):
if not state.initialize(db, device):
return
if (
db.inputs.collider.valid
and state.collider_mesh is not None
and omni.warp.nodes.bundle_has_changed(db.inputs.collider)
):
# The collider might be animated so we need to update its state.
update_collider(db)
if omni.warp.nodes.bundle_have_attrs_changed(db.inputs.particles, ("worldMatrix",)):
update_particles(db)
with omni.warp.nodes.NodeTimer("simulate", db, active=PROFILING):
# Run the particles simulation at the current time.
simulate(db)
with omni.warp.nodes.NodeTimer("transform_points_to_local_space", db, active=PROFILING):
# Retrieve some data from the particles points.
xform = omni.warp.nodes.bundle_get_world_xform(db.inputs.particles)
# Transform the particles point positions back into local space
# and store them into the bundle.
out_points = omni.warp.nodes.points_get_points(db.outputs.particles)
wp.launch(
kernel=transform_points_kernel,
dim=len(out_points),
inputs=[
state.state_0.particle_q,
np.linalg.inv(xform).T,
],
outputs=[
out_points,
],
)
# Increment the simulation tick.
state.sim_tick += 1
# Store whether the simulation was last enabled.
state.sim_enabled = True
# Store the current time.
state.time = db.inputs.time
# Node Entry Point
# ------------------------------------------------------------------------------
class OgnParticlesSimulate:
"""Node."""
@staticmethod
def internal_state() -> InternalState:
return InternalState()
@staticmethod
def compute(db: OgnParticlesSimulateDatabase) -> None:
device = omni.warp.nodes.device_get_cuda_compute()
try:
with wp.ScopedDevice(device):
compute(db, device)
except Exception:
db.log_error(traceback.format_exc())
db.internal_state.is_valid = False
return
db.internal_state.is_valid = True
# Fire the execution for the downstream nodes.
db.outputs.execOut = og.ExecutionAttributeState.ENABLED
| 18,154 | Python | 31.075972 | 107 | 0.598546 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/OgnKernel.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Warp kernel exposed as an OmniGraph node."""
import traceback
from typing import Tuple
import omni.graph.core as og
import omni.graph.tools.ogn as ogn
import omni.timeline
from omni.warp.nodes._impl.attributes import attr_join_name
from omni.warp.nodes._impl.kernel import (
EXPLICIT_SOURCE,
InternalStateBase,
UserAttributesEvent,
deserialize_user_attribute_descs,
gather_attribute_infos,
get_kernel_args,
initialize_kernel_module,
validate_input_arrays,
write_output_attrs,
)
from omni.warp.nodes.ogn.OgnKernelDatabase import OgnKernelDatabase
import warp as wp
QUIET_DEFAULT = wp.config.quiet
ATTR_PORT_TYPE_INPUT = og.AttributePortType.ATTRIBUTE_PORT_TYPE_INPUT
ATTR_PORT_TYPE_OUTPUT = og.AttributePortType.ATTRIBUTE_PORT_TYPE_OUTPUT
# Internal State
# ------------------------------------------------------------------------------
class InternalState(InternalStateBase):
"""Internal state for the node."""
def __init__(self) -> None:
super().__init__()
self.attr_tracking = omni.warp.nodes.AttrTracking(
("dimCount",),
)
def needs_initialization(
self,
db: OgnKernelDatabase,
check_file_modified_time: bool,
) -> bool:
"""Checks if the internal state needs to be (re)initialized."""
if super().needs_initialization(
db,
check_file_modified_time=check_file_modified_time,
):
return True
if self.attr_tracking.have_attrs_changed(db):
return True
return False
def initialize(
self,
db: OgnKernelDatabase,
kernel_dim_count: int,
) -> bool:
"""Initializes the internal state and recompile the kernel."""
if not super().initialize(db):
return False
# Retrieve the user attribute descriptions, if any.
attr_descs = deserialize_user_attribute_descs(db.state.userAttrDescs)
# Gather the information about each attribute to pass to the kernel.
attr_infos = gather_attribute_infos(
db.node,
db.inputs,
db.outputs,
attr_descs,
kernel_dim_count,
)
try:
kernel_module = initialize_kernel_module(
attr_infos,
self._code_provider,
self._code_str,
self._code_file,
)
except Exception:
db.log_error(traceback.format_exc())
return False
# Define the base class members.
self.attr_infos = attr_infos
self.kernel_module = kernel_module
self.attr_tracking.update_state(db)
return True
# Compute
# ------------------------------------------------------------------------------
def infer_kernel_shape(
db: OgnKernelDatabase,
) -> Tuple[int, ...]:
"""Infers the shape of the kernel."""
source = db.inputs.dimSource
if source == EXPLICIT_SOURCE:
dim_count = min(max(db.inputs.dimCount, 0), wp.types.ARRAY_MAX_DIMS)
return tuple(max(getattr(db.inputs, "dim{}".format(i + 1)), 0) for i in range(dim_count))
try:
value = getattr(db.inputs, source)
except AttributeError as e:
raise RuntimeError(
"The attribute '{}' used to source the dimension doesn't exist.".format(
attr_join_name(ATTR_PORT_TYPE_INPUT, source)
)
) from e
try:
return (value.shape[0],)
except AttributeError as e:
raise RuntimeError(
"The attribute '{}' used to source the dimension isn't an array.".format(
attr_join_name(ATTR_PORT_TYPE_INPUT, source)
)
) from e
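# NOTE: As an assumed example, setting `dimSource` to "points" makes the kernel
# shape `(inputs:points.shape[0],)`, whereas the explicit source reads the
# `dimCount` and `dim1..dimN` attributes instead.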
def compute(db: OgnKernelDatabase, device: wp.context.Device) -> None:
"""Evaluates the node."""
db.set_dynamic_attribute_memory_location(
on_gpu=device.is_cuda,
gpu_ptr_kind=og.PtrToPtrKind.CPU,
)
# Infer the kernels's shape.
kernel_shape = infer_kernel_shape(db)
# Ensure that our internal state is correctly initialized.
timeline = omni.timeline.get_timeline_interface()
if db.internal_state.needs_initialization(db, timeline.is_stopped()):
if not db.internal_state.initialize(db, len(kernel_shape)):
return
db.internal_state.is_valid = True
# Exit early if there are no outputs defined.
if not db.internal_state.attr_infos[ATTR_PORT_TYPE_OUTPUT]:
return
# Retrieve the inputs and outputs argument values to pass to the kernel.
inputs, outputs = get_kernel_args(
db.inputs,
db.outputs,
db.internal_state.attr_infos,
db.internal_state.kernel_module,
kernel_shape,
)
# Ensure that all array input values are valid.
validate_input_arrays(db.node, db.internal_state.attr_infos, inputs)
# Launch the kernel.
wp.launch(
db.internal_state.kernel_module.compute,
dim=kernel_shape,
inputs=[inputs],
outputs=[outputs],
)
# Write the output values to the node's attributes.
write_output_attrs(db.outputs, db.internal_state.attr_infos, outputs)
# Node Entry Point
# ------------------------------------------------------------------------------
class OgnKernel:
"""Warp's kernel node."""
@staticmethod
def internal_state() -> InternalState:
return InternalState()
@staticmethod
def initialize(graph_context: og.GraphContext, node: og.Node) -> None:
# Populate the devices tokens.
attr = og.Controller.attribute("inputs:device", node)
if attr.get_metadata(ogn.MetadataKeys.ALLOWED_TOKENS) is None:
cuda_devices = [x.alias for x in wp.get_cuda_devices()]
attr.set_metadata(ogn.MetadataKeys.ALLOWED_TOKENS, ",".join(["cpu", "cuda"] + cuda_devices))
@staticmethod
def compute(db: OgnKernelDatabase) -> None:
try:
if db.inputs.device == "cuda":
device = omni.warp.nodes.device_get_cuda_compute()
else:
device = wp.get_device(db.inputs.device)
except Exception:
# Fallback to a default device.
# This can happen due to a scene being authored on a device
# (e.g.: `cuda:1`) that is not available to another user opening
# that same scene.
device = omni.warp.nodes.device_get_cuda_compute()
try:
with wp.ScopedDevice(device):
compute(db, device)
except Exception:
db.internal_state.is_valid = False
db.log_error(traceback.format_exc())
wp.config.quiet = True
return
else:
wp.config.quiet = QUIET_DEFAULT
# Reset the user attributes event since it has now been processed.
db.state.userAttrsEvent = UserAttributesEvent.NONE
# Fire the execution for the downstream nodes.
db.outputs.execOut = og.ExecutionAttributeState.ENABLED
| 7,465 | Python | 30.50211 | 104 | 0.607636 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/OgnMeshFromVolume.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Node converting a volume into a geometry mesh."""
import traceback
import omni.graph.core as og
import omni.warp.nodes
from omni.warp.nodes import from_omni_graph_ptr
from omni.warp.nodes.ogn.OgnMeshFromVolumeDatabase import OgnMeshFromVolumeDatabase
import warp as wp
PROFILING = False
# Kernels
# ------------------------------------------------------------------------------
@wp.kernel(enable_backward=False)
def transform_points_kernel(
points: wp.array(dtype=wp.vec3),
center: wp.vec3,
scale: wp.vec3,
out_points: wp.array(dtype=wp.vec3),
):
"""Transform the points with the given offset and scale values."""
tid = wp.tid()
pos = points[tid]
pos = pos - center
pos = wp.cw_mul(pos, scale)
out_points[tid] = pos
# Internal State
# ------------------------------------------------------------------------------
class InternalState:
"""Internal state for the node."""
def __init__(self) -> None:
self.mc = None
self.is_valid = False
self.attr_tracking = omni.warp.nodes.AttrTracking(
(
"dim1",
"dim2",
"dim3",
"maxPoints",
"maxTriangles",
),
)
def needs_initialization(self, db: OgnMeshFromVolumeDatabase) -> bool:
"""Checks if the internal state needs to be (re)initialized."""
if not self.is_valid:
return True
if self.attr_tracking.have_attrs_changed(db):
return True
return False
def initialize(self, db: OgnMeshFromVolumeDatabase) -> bool:
"""Initializes the internal state."""
# Initialize Warp's marching cubes helper.
mc = wp.MarchingCubes(
int(db.inputs.dim1),
int(db.inputs.dim2),
int(db.inputs.dim3),
db.inputs.maxPoints,
db.inputs.maxTriangles,
)
# Store the class members.
self.mc = mc
self.attr_tracking.update_state(db)
return True
# Compute
# ------------------------------------------------------------------------------
def compute(db: OgnMeshFromVolumeDatabase) -> None:
"""Evaluates the node."""
db.outputs.mesh.changes().activate()
if not db.inputs.data.memory or db.inputs.data.shape[0] == 0:
return
state = db.internal_state
# Initialize the internal state if it hasn't been already.
if state.needs_initialization(db):
if not state.initialize(db):
return
dims = (db.inputs.dim1, db.inputs.dim2, db.inputs.dim3)
size = dims[0] * dims[1] * dims[2]
if db.inputs.data.shape[0] != size:
        raise RuntimeError(
            "The length of the input array data doesn't match the given size: `{} != {}`.".format(
                db.inputs.data.shape[0], size
            )
        )
# Alias the incoming memory to a Warp array.
data = from_omni_graph_ptr(
db.inputs.data.memory,
shape=dims,
dtype=float,
)
with omni.warp.nodes.NodeTimer("surface_mesh", db, active=PROFILING):
# Let Warp's marching cubes helper generate the mesh surface at
# the given ISO threshold value.
state.mc.surface(data, db.inputs.threshold)
# The generated surface is triangulated, so we have 3 vertices per face.
    face_count = len(state.mc.indices) // 3
vertex_count = len(state.mc.indices)
point_count = len(state.mc.verts)
if not point_count or not vertex_count or not face_count:
return
# Warp's marching cubes helper allocates its own arrays to store
# the resulting mesh geometry but, eventually, we need to write that data
# to OmniGraph, so we create a new geometry mesh within the output bundle.
omni.warp.nodes.mesh_create_bundle(
db.outputs.mesh,
point_count,
vertex_count,
face_count,
xform=db.inputs.transform,
)
out_points = omni.warp.nodes.mesh_get_points(db.outputs.mesh)
out_face_vertex_counts = omni.warp.nodes.mesh_get_face_vertex_counts(
db.outputs.mesh,
)
out_face_vertex_indices = omni.warp.nodes.mesh_get_face_vertex_indices(
db.outputs.mesh,
)
# Copy the data to the output geometry mesh bundle.
wp.copy(out_points, state.mc.verts)
wp.copy(out_face_vertex_indices, state.mc.indices)
# Set all faces to be triangles.
out_face_vertex_counts.fill_(3)
# Transform the mesh to fit the given center and size values.
center = (
dims[0] * 0.5,
dims[1] * 0.5,
dims[2] * 0.5,
)
scale = (
db.inputs.size[0] / dims[0],
db.inputs.size[1] / dims[1],
db.inputs.size[2] / dims[2],
)
with omni.warp.nodes.NodeTimer("transform_points", db, active=PROFILING):
wp.launch(
transform_points_kernel,
dim=point_count,
inputs=[
state.mc.verts,
center,
scale,
],
outputs=[
out_points,
],
)
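# A minimal sketch of the same marching cubes flow outside of OmniGraph,
# assuming a CUDA-capable device; the dimensions and buffer sizes below are
# illustrative:
#
#   import warp as wp
#   dim = 64
#   mc = wp.MarchingCubes(dim, dim, dim, 2**20, 2**20)
#   field = wp.zeros((dim, dim, dim), dtype=float)  # fill with an SDF
#   mc.surface(field, 0.0)  # extract the iso-surface at threshold 0.0
#   # mc.verts and mc.indices now hold the triangulated surface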
# Node Entry Point
# ------------------------------------------------------------------------------
class OgnMeshFromVolume:
"""Node."""
@staticmethod
def internal_state() -> InternalState:
return InternalState()
@staticmethod
def compute(db: OgnMeshFromVolumeDatabase) -> None:
device = omni.warp.nodes.device_get_cuda_compute()
try:
with wp.ScopedDevice(device):
compute(db)
except Exception:
db.log_error(traceback.format_exc())
db.internal_state.is_valid = False
return
db.internal_state.is_valid = True
# Fire the execution for the downstream nodes.
db.outputs.execOut = og.ExecutionAttributeState.ENABLED
| 6,300 | Python | 27.640909 | 106 | 0.579683 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/OgnFixedTime.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Node retrieving a time with a fixed time step."""
import omni.timeline
class OgnFixedTimeState:
def __init__(self):
self.time = 0.0
self.initialized = False
class OgnFixedTime:
"""Node."""
@staticmethod
def internal_state():
return OgnFixedTimeState()
@staticmethod
def compute(db) -> bool:
"""Compute the outputs from the current input"""
timeline = omni.timeline.get_timeline_interface()
context = db.internal_state
if not context.initialized:
context.time = db.inputs.start
context.initialized = True
db.outputs.time = context.time
if timeline.is_playing():
context.time += 1.0 / db.inputs.fps
return True
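# Note: while the timeline is playing, the node advances time by a fixed
# 1/fps step per evaluation, which keeps downstream simulations deterministic
# regardless of the wall-clock frame rate.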
| 1,188 | Python | 26.651162 | 76 | 0.680135 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/OgnSampleProceduralVolume.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Sample node generating a procedural volume."""
import traceback
import omni.graph.core as og
import omni.warp.nodes
from omni.warp.nodes.ogn.OgnSampleProceduralVolumeDatabase import OgnSampleProceduralVolumeDatabase
import warp as wp
MIN_RES = 8
PROFILING = False
# Kernels
# ------------------------------------------------------------------------------
@wp.func
def sdf_create_box(pos: wp.vec3, size: wp.vec3):
"""Creates a SDF box primitive."""
# https://iquilezles.org/articles/distfunctions
q = wp.vec3(
wp.abs(pos[0]) - size[0],
wp.abs(pos[1]) - size[1],
wp.abs(pos[2]) - size[2],
)
qp = wp.vec3(wp.max(q[0], 0.0), wp.max(q[1], 0.0), wp.max(q[2], 0.0))
return wp.length(qp) + wp.min(wp.max(q[0], wp.max(q[1], q[2])), 0.0)
@wp.func
def sdf_create_torus(pos: wp.vec3, major_radius: float, minor_radius: float):
"""Creates a SDF torus primitive."""
# https://iquilezles.org/articles/distfunctions
q = wp.vec2(wp.length(wp.vec2(pos[0], pos[2])) - major_radius, pos[1])
return wp.length(q) - minor_radius
@wp.func
def sdf_translate(pos: wp.vec3, offset: wp.vec3):
"""Translates a SDF position vector with an offset."""
return pos - offset
@wp.func
def sdf_rotate(pos: wp.vec3, angles: wp.vec3):
"""Rotates a SDF position vector using Euler angles."""
rot = wp.quat_rpy(
wp.radians(angles[0]),
wp.radians(angles[1]),
wp.radians(angles[2]),
)
return wp.quat_rotate_inv(rot, pos)
@wp.func
def sdf_smooth_min(a: float, b: float, radius: float):
"""Creates a SDF torus primitive."""
# https://iquilezles.org/articles/smin
h = wp.max(radius - wp.abs(a - b), 0.0) / radius
return wp.min(a, b) - h * h * h * radius * (1.0 / 6.0)
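# The cubic smooth minimum above blends the two distance fields whenever they
# are within 'radius' of each other and reduces to a plain minimum otherwise.
# An illustrative evaluation (not taken from the node itself): with a = 0.5,
# b = 0.6, and radius = 1.0, h = 0.9 and the result is
# 0.5 - 0.9^3 / 6 ≈ 0.3785, slightly below the plain minimum of 0.5.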
@wp.kernel(enable_backward=False)
def generate_volume_kernel(
torus_altitude: float,
torus_major_radius: float,
torus_minor_radius: float,
smooth_min_radius: float,
dim: int,
time: float,
out_data: wp.array3d(dtype=float),
):
"""Kernel to generate a SDF volume based on primitives."""
i, j, k = wp.tid()
# Retrieve the position of the current cell in a normalized [-1, 1] range
# for each dimension.
pos = wp.vec3(
2.0 * ((float(i) + 0.5) / float(dim)) - 1.0,
2.0 * ((float(j) + 0.5) / float(dim)) - 1.0,
2.0 * ((float(k) + 0.5) / float(dim)) - 1.0,
)
box = sdf_create_box(
sdf_translate(pos, wp.vec3(0.0, -0.7, 0.0)),
wp.vec3(0.9, 0.3, 0.9),
)
torus = sdf_create_torus(
sdf_rotate(
sdf_translate(pos, wp.vec3(0.0, torus_altitude, 0.0)),
wp.vec3(wp.sin(time) * 90.0, wp.cos(time) * 45.0, 0.0),
),
torus_major_radius,
torus_minor_radius,
)
out_data[i, j, k] = sdf_smooth_min(box, torus, smooth_min_radius)
# Compute
# ------------------------------------------------------------------------------
def compute(db: OgnSampleProceduralVolumeDatabase) -> None:
"""Evaluates the node."""
# Enforce a minimum dimension or else nothing's left to draw.
dim = max(db.inputs.dim, MIN_RES)
db.outputs.data_size = dim * dim * dim
data = omni.warp.nodes.from_omni_graph(
db.outputs.data,
dtype=float,
shape=(dim, dim, dim),
)
with omni.warp.nodes.NodeTimer("generate_volume", db, active=PROFILING):
wp.launch(
kernel=generate_volume_kernel,
dim=data.shape,
inputs=[
db.inputs.torusAltitude,
db.inputs.torusMajorRadius,
db.inputs.torusMinorRadius,
db.inputs.smoothMinRadius,
dim,
db.inputs.time,
],
outputs=[
data,
],
)
# Node Entry Point
# ------------------------------------------------------------------------------
class OgnSampleProceduralVolume:
"""Node."""
@staticmethod
def compute(db: OgnSampleProceduralVolumeDatabase) -> None:
device = omni.warp.nodes.device_get_cuda_compute()
try:
with wp.ScopedDevice(device):
compute(db)
except Exception:
db.log_error(traceback.format_exc())
return
# Fire the execution for the downstream nodes.
db.outputs.execOut = og.ExecutionAttributeState.ENABLED
| 4,827 | Python | 28.802469 | 99 | 0.576963 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/OgnWaveSolve.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Node creating a grid geometry simulated with a wave equation solver."""
import traceback
from math import sqrt
import numpy as np
import omni.graph.core as og
import omni.timeline
import omni.warp.nodes
from omni.warp.nodes._impl.kernels.grid_create import grid_create_launch_kernel
from omni.warp.nodes.ogn.OgnWaveSolveDatabase import OgnWaveSolveDatabase
import warp as wp
PROFILING = False
# Kernels
# -----------------------------------------------------------------------------
@wp.func
def sample_height(
height_map: wp.array(dtype=float),
x: int,
z: int,
point_count_x: int,
point_count_z: int,
):
# Clamp to the grid's bounds.
x = wp.clamp(x, 0, point_count_x - 1)
z = wp.clamp(z, 0, point_count_z - 1)
return height_map[z * point_count_x + x]
@wp.func
def laplacian(
height_map: wp.array(dtype=float),
x: int,
z: int,
point_count_x: int,
point_count_z: int,
):
# See https://en.wikipedia.org/wiki/Wave_equation.
ddx = (
sample_height(height_map, x + 1, z, point_count_x, point_count_z)
- sample_height(height_map, x, z, point_count_x, point_count_z) * 2.0
+ sample_height(height_map, x - 1, z, point_count_x, point_count_z)
)
ddz = (
sample_height(height_map, x, z + 1, point_count_x, point_count_z)
- sample_height(height_map, x, z, point_count_x, point_count_z) * 2.0
+ sample_height(height_map, x, z - 1, point_count_x, point_count_z)
)
return ddx + ddz
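# 'simulate_kernel' below integrates the height field with an explicit
# finite-difference form of the wave equation. A sketch of the update it
# performs, where 'd' is the Laplacian scaled by 1/cell_size^2:
#
#   h_new = 2.0 * h1 - h0 + (speed * d - damping * (h1 - h0)) * dt * dt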
@wp.kernel(enable_backward=False)
def displace_kernel(
point_count_x: int,
center_x: float,
center_z: float,
radius: float,
amplitude: float,
time: float,
out_height_map_0: wp.array(dtype=float),
out_height_map_1: wp.array(dtype=float),
):
tid = wp.tid()
x = tid % point_count_x
z = tid // point_count_x
dx = float(x) - center_x
dz = float(z) - center_z
dist_sq = float(dx * dx + dz * dz)
if dist_sq < radius * radius:
height = amplitude * wp.sin(time)
out_height_map_0[tid] = height
out_height_map_1[tid] = height
@wp.kernel(enable_backward=False)
def simulate_kernel(
point_count_x: int,
point_count_z: int,
inv_cell_size: float,
speed: float,
damping: float,
dt: float,
height_map_1: wp.array(dtype=float),
out_height_map_0: wp.array(dtype=float),
):
tid = wp.tid()
x = tid % point_count_x
z = tid // point_count_x
d = laplacian(height_map_1, x, z, point_count_x, point_count_z)
d *= inv_cell_size * inv_cell_size
    # Integrate and write the result into the 'previous' height map buffer
    # since it will then be swapped to become the 'current' one.
h0 = out_height_map_0[tid]
h1 = height_map_1[tid]
out_height_map_0[tid] = h1 * 2.0 - h0 + (d * speed - (h1 - h0) * damping) * dt * dt
@wp.kernel(enable_backward=False)
def update_mesh_kernel(
height_map: wp.array(dtype=float),
out_points: wp.array(dtype=wp.vec3),
):
tid = wp.tid()
height = height_map[tid]
pos = out_points[tid]
out_points[tid] = wp.vec3(pos[0], height, pos[2])
# Internal State
# ------------------------------------------------------------------------------
class InternalState:
"""Internal state for the node."""
def __init__(self) -> None:
self.height_map_0 = None
self.height_map_1 = None
self.time = 0.0
self.is_valid = False
self.attr_tracking = omni.warp.nodes.AttrTracking(
(
"size",
"cellSize",
),
)
def needs_initialization(self, db: OgnWaveSolveDatabase) -> bool:
"""Checks if the internal state needs to be (re)initialized."""
if not self.is_valid:
return True
if self.attr_tracking.have_attrs_changed(db):
return True
if db.inputs.time < self.time:
# Reset the simulation when we're rewinding.
return True
return False
def initialize(
self,
db: OgnWaveSolveDatabase,
dims: np.ndarray,
) -> bool:
"""Initializes the internal state."""
point_count = omni.warp.nodes.mesh_get_point_count(db.outputs.mesh)
# Initialize a double buffering for the height map.
height_map_0 = wp.zeros(point_count, dtype=float)
height_map_1 = wp.zeros(point_count, dtype=float)
# Build the grid mesh.
grid_create_launch_kernel(
omni.warp.nodes.mesh_get_points(db.outputs.mesh),
omni.warp.nodes.mesh_get_face_vertex_counts(db.outputs.mesh),
omni.warp.nodes.mesh_get_face_vertex_indices(db.outputs.mesh),
omni.warp.nodes.mesh_get_normals(db.outputs.mesh),
omni.warp.nodes.mesh_get_uvs(db.outputs.mesh),
db.inputs.size.tolist(),
dims.tolist(),
update_topology=True,
)
# Store the class members.
self.height_map_0 = height_map_0
self.height_map_1 = height_map_1
self.attr_tracking.update_state(db)
return True
# Compute
# ------------------------------------------------------------------------------
def displace(
db: OgnWaveSolveDatabase,
dims: np.ndarray,
cell_size: np.ndarray,
) -> None:
"""Displaces the height map with the collider."""
state = db.internal_state
# Retrieve some data from the grid mesh.
xform = omni.warp.nodes.bundle_get_world_xform(db.outputs.mesh)
# Retrieve some data from the collider mesh.
collider_xform = omni.warp.nodes.bundle_get_world_xform(db.inputs.collider)
collider_extent = omni.warp.nodes.mesh_get_world_extent(
db.inputs.collider,
axis_aligned=True,
)
# Retrieve the collider's position in the grid's object space.
collider_pos = np.pad(collider_xform[3][:3], (0, 1), constant_values=1)
collider_pos = np.dot(np.linalg.inv(xform).T, collider_pos)
# Compute the collider's radius.
collider_radius = np.amax(collider_extent[1] - collider_extent[0]) * 0.5
# Determine the point around which the grid will be displaced.
center_x = (dims[0] + 1) * 0.5 - float(collider_pos[0]) / cell_size[0]
center_z = (dims[1] + 1) * 0.5 - float(collider_pos[2]) / cell_size[1]
# Clamp the deformation center to the grid's bounds.
center_x = max(0, min(dims[0], center_x))
center_z = max(0, min(dims[1], center_z))
# Apply the displacement if the collider is in contact with the grid.
contact_radius_sq = (collider_radius**2) - (abs(collider_pos[1]) ** 2)
if contact_radius_sq > 0:
cell_size_uniform = (cell_size[0] + cell_size[1]) * 0.5
center_radius = sqrt(contact_radius_sq) / cell_size_uniform
wp.launch(
kernel=displace_kernel,
dim=omni.warp.nodes.mesh_get_point_count(db.outputs.mesh),
inputs=[
dims[0] + 1,
center_x,
center_z,
center_radius,
db.inputs.amplitude,
db.inputs.time,
],
outputs=[
state.height_map_0,
state.height_map_1,
],
)
def simulate(
db: OgnWaveSolveDatabase,
dims: np.ndarray,
cell_size: np.ndarray,
    sim_dt: float,
) -> None:
"""Solves the wave simulation."""
state = db.internal_state
cell_size_uniform = (cell_size[0] + cell_size[1]) * 0.5
wp.launch(
kernel=simulate_kernel,
dim=omni.warp.nodes.mesh_get_point_count(db.outputs.mesh),
inputs=[
dims[0] + 1,
dims[1] + 1,
1.0 / cell_size_uniform,
db.inputs.speed,
db.inputs.damping,
sim_dt,
state.height_map_1,
],
outputs=[
state.height_map_0,
],
)
# Swap the height map buffers
state.height_map_0, state.height_map_1 = (
state.height_map_1,
state.height_map_0,
)
def update_mesh(db: OgnWaveSolveDatabase) -> None:
"""Updates the output grid mesh."""
state = db.internal_state
wp.launch(
kernel=update_mesh_kernel,
dim=omni.warp.nodes.mesh_get_point_count(db.outputs.mesh),
inputs=[
state.height_map_1,
],
outputs=[
omni.warp.nodes.mesh_get_points(db.outputs.mesh),
],
)
def compute(db: OgnWaveSolveDatabase) -> None:
"""Evaluates the node."""
db.outputs.mesh.changes().activate()
if not db.outputs.mesh.valid:
return
state = db.internal_state
# Compute the number of divisions.
dims = (db.inputs.size / db.inputs.cellSize).astype(int)
# Compute the mesh's topology counts.
face_count = dims[0] * dims[1]
vertex_count = face_count * 4
point_count = (dims[0] + 1) * (dims[1] + 1)
# Create a new geometry mesh within the output bundle.
omni.warp.nodes.mesh_create_bundle(
db.outputs.mesh,
point_count,
vertex_count,
face_count,
xform=db.inputs.transform,
create_normals=True,
create_uvs=True,
)
if state.needs_initialization(db):
# Initialize the internal state if it hasn't been already.
if not state.initialize(db, dims):
return
else:
        # Only step the simulation once the state has been initialized on
        # a previous frame.
# Retrieve the simulation's delta time.
timeline = omni.timeline.get_timeline_interface()
sim_rate = timeline.get_ticks_per_second()
sim_dt = 1.0 / sim_rate
    # Infer the size of each cell from the overall grid size and the number
    # of divisions.
cell_size = db.inputs.size / dims
if db.inputs.collider.valid:
with omni.warp.nodes.NodeTimer("displace", db, active=PROFILING):
# Deform the grid with a displacement value if the collider
# is in contact with it.
displace(db, dims, cell_size)
with omni.warp.nodes.NodeTimer("simulate", db, active=PROFILING):
# Simulate the ripples using the wave equation.
simulate(db, dims, cell_size, sim_dt)
with omni.warp.nodes.NodeTimer("update_mesh", db, active=PROFILING):
# Update the mesh points with the height map resulting from
# the displacement and simulation steps.
update_mesh(db)
# Store the current time.
state.time = db.inputs.time
# Node Entry Point
# ------------------------------------------------------------------------------
class OgnWaveSolve:
"""Node."""
@staticmethod
def internal_state() -> InternalState:
return InternalState()
@staticmethod
def compute(db: OgnWaveSolveDatabase) -> None:
device = omni.warp.nodes.device_get_cuda_compute()
try:
with wp.ScopedDevice(device):
compute(db)
except Exception:
db.log_error(traceback.format_exc())
db.internal_state.is_valid = False
return
db.internal_state.is_valid = True
# Fire the execution for the downstream nodes.
db.outputs.execOut = og.ExecutionAttributeState.ENABLED
| 11,701 | Python | 28.255 | 87 | 0.586787 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/bundles.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Helpers to author OmniGraph bundles."""
from typing import (
Optional,
Sequence,
)
import numpy as np
import omni.graph.core as og
from omni.warp.nodes._impl.attributes import (
attr_get,
attr_get_array_on_gpu,
attr_set,
)
import warp as wp
# High-level Bundle API (og.BundleContents)
# ------------------------------------------------------------------------------
def bundle_get_child_count(
bundle: og.BundleContents,
) -> int:
"""Retrieves the number of children defined for a bundle."""
return bundle.bundle.get_child_bundle_count()
def bundle_get_prim_type(
bundle: og.BundleContents,
child_idx: int = 0,
) -> str:
"""Retrieves the primitive type."""
attr = bundle_get_attr(bundle, "sourcePrimType", child_idx)
return attr_get(attr)
def bundle_set_prim_type(
bundle: og.BundleContents,
prim_type: str,
child_idx: int = 0,
) -> None:
"""Sets the primitive type."""
child_bundle = bundle_create_child(bundle, child_idx)
attr = bundle_create_attr(
child_bundle,
"sourcePrimType",
og.Type(
og.BaseDataType.TOKEN,
tuple_count=1,
array_depth=0,
role=og.AttributeRole.NONE,
),
)
attr_set(attr, prim_type)
def bundle_get_world_xform(
bundle: og.BundleContents,
child_idx: int = 0,
) -> np.ndarray:
"""Retrieves the world transformation matrix."""
attr = bundle_get_attr(bundle, "worldMatrix", child_idx)
if attr is None:
return np.identity(4)
return attr_get(attr).reshape(4, 4)
def bundle_set_world_xform(
bundle: og.BundleContents,
xform: np.ndarray,
child_idx: int = 0,
) -> None:
"""Sets the bundle's world transformation matrix."""
child_bundle = bundle_create_child(bundle, child_idx)
attr = bundle_create_attr(
child_bundle,
"worldMatrix",
og.Type(
og.BaseDataType.DOUBLE,
tuple_count=16,
array_depth=0,
role=og.AttributeRole.MATRIX,
),
)
attr_set(attr, xform)
def bundle_create_child(
bundle: og.BundleContents,
child_idx: int = 0,
) -> og.IBundle2:
"""Creates a single child bundle if it doesn't already exist."""
if child_idx < bundle.bundle.get_child_bundle_count():
return bundle.bundle.get_child_bundle(child_idx)
return bundle.bundle.create_child_bundle("prim{}".format(child_idx))
def bundle_get_attr(
bundle: og.BundleContents,
name: str,
child_idx: int = 0,
) -> Optional[og.AttributeData]:
"""Retrieves a bundle attribute from its name."""
if bundle.bundle.get_child_bundle_count():
attr = bundle.bundle.get_child_bundle(child_idx).get_attribute_by_name(name)
else:
attr = bundle.bundle.get_attribute_by_name(name)
if not attr.is_valid():
return None
return attr
def bundle_has_changed(
bundle: og.BundleContents,
child_idx: int = 0,
) -> bool:
"""Checks whether the contents of the bundle has changed."""
with bundle.changes() as bundle_changes:
child_bundle = bundle.bundle.get_child_bundle(child_idx)
return bundle_changes.get_change(child_bundle) != og.BundleChangeType.NONE
def bundle_have_attrs_changed(
bundle: og.BundleContents,
attr_names: Sequence[str],
child_idx: int = 0,
) -> bool:
"""Checks whether the contents of a bundle's attributes have changed."""
with bundle.changes() as bundle_changes:
child_bundle = bundle.bundle.get_child_bundle(child_idx)
for attr_name in attr_names:
attr = child_bundle.get_attribute_by_name(attr_name)
if bundle_changes.get_change(attr) != og.BundleChangeType.NONE:
return True
return False
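# A short usage sketch of the high-level API above, as it might appear in a
# node's compute function; the 'db' handle and attribute names are
# assumptions made for illustration:
#
#   xform = bundle_get_world_xform(db.inputs.mesh)  # 4x4 NumPy matrix
#   if bundle_have_attrs_changed(db.inputs.mesh, ("points",)):
#       ...  # re-run any expensive per-point processing
#   bundle_set_world_xform(db.outputs.mesh, xform)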
# Low-level Bundle API (og.IBundle2)
# ------------------------------------------------------------------------------
def bundle_create_attr(
bundle: og.IBundle2,
name: str,
og_type: og.Type,
size: int = 0,
) -> og.AttributeData:
"""Creates a bundle attribute if it doesn't already exist."""
attr = bundle.get_attribute_by_name(name)
if attr.is_valid() and attr.get_type() == og_type and attr.size() == size:
return attr
return bundle.create_attribute(name, og_type, element_count=size)
def bundle_create_metadata_attr(
bundle: og.IBundle2,
name: str,
field_name: str,
og_type: og.Type,
) -> og.AttributeData:
"""Creates a bundle metadata attribute if it doesn't already exist."""
attr = bundle.get_attribute_metadata_by_name(name, field_name)
if attr.is_valid() and attr.get_type() == og_type:
return attr
return bundle.create_attribute_metadata(name, field_name, og_type)
def bundle_copy_attr_value(
dst_bundle: og.IBundle2,
src_bundle: og.IBundle2,
name: str,
dtype: og.Type,
) -> None:
"""Copies an attribute value from one bundle to another."""
dst_attr = dst_bundle.get_attribute_by_name(name)
src_attr = src_bundle.get_attribute_by_name(name)
if not dst_attr.is_valid() or not src_attr.is_valid():
return
wp.copy(
attr_get_array_on_gpu(dst_attr, dtype, read_only=False),
attr_get_array_on_gpu(src_attr, dtype, read_only=True),
)
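# The low-level helpers above operate directly on og.IBundle2 handles, which
# is what the high-level og.BundleContents wrappers ultimately call into.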
| 5,707 | Python | 27.54 | 84 | 0.634834 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/OgnSamplePrimFlocking.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Sample node that simulates flocking behaviors by animating prim attributes."""
import math
import traceback
import carb.settings
import numpy as np
import omni.graph.core as og
import omni.kit.app
import omni.usd
import omni.warp.nodes
import usdrt
from omni.warp.nodes.ogn.OgnSamplePrimFlockingDatabase import OgnSamplePrimFlockingDatabase
import warp as wp
# device used for flocking simulation
MAIN_DEVICE = "cuda:0"
# device used for updating colors
COLOR_DEVICE = "cpu"
# Kernels
# -----------------------------------------------------------------------------
@wp.struct
class Boid:
vel: wp.vec3f
wander_angles: wp.vec2f
mass: float
group: int
@wp.struct
class Obstacle:
pos: wp.vec3f
radius: float
@wp.struct
class World:
lower: wp.vec3f
upper: wp.vec3f
grid: wp.uint64
seed: int
biases: wp.mat33f
obstacles: wp.array(dtype=Obstacle)
@wp.kernel(enable_backward=False)
def copy_positions(dst: wp.array(dtype=wp.vec3f), src: wp.fabricarray(dtype=wp.vec3d)):
tid = wp.tid()
pos = src[tid]
dst[tid] = wp.vec3f(float(pos[0]), float(pos[1]), float(pos[2]))
@wp.kernel(enable_backward=False)
def assign_colors(
glows: wp.array(dtype=float),
groups: wp.array(dtype=int),
color_ramps: wp.array2d(dtype=wp.vec3f),
colors: wp.fabricarrayarray(dtype=wp.vec3f),
):
tid = wp.tid()
glow = glows[tid]
group = groups[tid]
if glow < 0.4:
alpha = glow / 0.4
colors[tid][0] = (1.0 - alpha) * color_ramps[group, 0] + alpha * color_ramps[group, 1]
elif glow < 0.8:
alpha = (glow - 0.4) / 0.4
colors[tid][0] = (1.0 - alpha) * color_ramps[group, 1] + alpha * color_ramps[group, 2]
else:
alpha = (glow - 0.8) / 0.2
colors[tid][0] = (1.0 - alpha) * color_ramps[group, 2] + alpha * color_ramps[group, 3]
@wp.func
def intersect_ray_sphere(origin: wp.vec3f, dir: wp.vec3f, center: wp.vec3f, radius: float):
    to_sphere = center - origin
    # Distance along the ray to the point closest to the sphere's center.
    tc = wp.dot(to_sphere, dir)
    if tc < 0.0:
        return tc
    # Squared distance from the sphere's center to the ray.
    d_sq = wp.length_sq(to_sphere) - tc * tc
    if d_sq > radius * radius:
        # The ray misses the sphere.
        return -999999.0
    ts = wp.sqrt(radius * radius - d_sq)
    return tc - ts
@wp.kernel(enable_backward=False)
def boids(
boids: wp.array(dtype=Boid),
world: World,
dt: float,
positions: wp.fabricarray(dtype=wp.vec3d),
orientations: wp.fabricarray(dtype=wp.quatf),
glows: wp.array(dtype=float),
):
tid = wp.tid()
boid = boids[tid]
old_pos = positions[tid]
old_rot = orientations[tid]
pos = wp.vec3(float(old_pos[0]), float(old_pos[1]), float(old_pos[2]))
vel = boid.vel
forward = wp.quat_rotate(old_rot, wp.vec3f(1.0, 0.0, 0.0))
force = wp.vec3f(0.0)
# obstacle avoidance
depenetration_force = 100.0
avoidance_dist = 20.0
avoidance_force = 20.0
obstacles = world.obstacles
num_obstacles = obstacles.shape[0]
for i in range(num_obstacles):
obstacle = obstacles[i]
to_obstacle = obstacle.pos - pos
# use padded radius
radius = obstacle.radius + 2.0
if wp.length(to_obstacle) < radius:
# depenetration
force += depenetration_force * wp.normalize(-to_obstacle)
else:
# avoidance
t = intersect_ray_sphere(pos, forward, obstacle.pos, radius)
if t > 0.0 and t < avoidance_dist:
intersection_point = pos + t * forward
out = intersection_point - obstacle.pos
force += avoidance_force * (1.0 - t / avoidance_dist) * wp.normalize(out)
# wander
r = 10.0
s0 = wp.sin(boid.wander_angles[0])
c0 = wp.cos(boid.wander_angles[0])
s1 = wp.sin(boid.wander_angles[1])
c1 = wp.cos(boid.wander_angles[1])
p = wp.vec3f(r * s0 * s1, r * s0 * c1, r * c0)
offset = r + 1.0
target = pos + wp.quat_rotate(old_rot, wp.vec3f(offset, 0.0, 0.0) + p)
wander_force = 7.0
force += wander_force * wp.normalize(target - pos)
state = wp.rand_init(world.seed, tid)
angle0 = boid.wander_angles[0] + wp.pi * (0.1 - 0.2 * wp.randf(state))
angle1 = boid.wander_angles[1] + wp.pi * (0.1 - 0.2 * wp.randf(state))
boid.wander_angles = wp.vec2f(angle0, angle1)
cohesion_radius = 15.0
cohesion_force = 20.0
separation_radius = 10.0
separation_force = 100.0
# flocking
query = wp.hash_grid_query(world.grid, pos, cohesion_radius)
num_neighbors = int(0)
num_align_neighbors = int(0)
num_cohesion_neighbors = float(0)
num_decohesion_neighbors = float(0)
cohesion_pos_sum = wp.vec3f(0.0)
decohesion_pos_sum = wp.vec3f(0.0)
vel_sum = wp.vec3f(0.0)
for index in query:
if index != tid:
other = boids[index]
other_pos64 = positions[index]
other_pos = wp.vec3f(float(other_pos64[0]), float(other_pos64[1]), float(other_pos64[2]))
dist = wp.length(pos - other_pos)
if dist < cohesion_radius:
to_other = wp.normalize(other_pos - pos)
# separation
if dist < separation_radius:
force -= separation_force * (1.0 - dist / separation_radius) * to_other
# cohesion
bias = world.biases[boid.group, other.group]
if bias > 0.0:
cohesion_pos_sum += bias * other_pos
num_cohesion_neighbors += bias
else:
decohesion_pos_sum -= bias * other_pos
num_decohesion_neighbors -= bias
# alignment
if other.group == boid.group:
vel_sum += bias * other.vel
num_align_neighbors += 1
num_neighbors += 1
# align
if num_align_neighbors > 0:
vel_avg = vel_sum / float(num_align_neighbors)
force += vel_avg - vel
# cohere
if num_cohesion_neighbors > 0.0:
cohesion_pos_avg = cohesion_pos_sum / float(num_cohesion_neighbors)
force += cohesion_force * wp.normalize(cohesion_pos_avg - pos)
# decohere (group separation)
if num_decohesion_neighbors > 0.0:
decohesion_pos_avg = decohesion_pos_sum / float(num_decohesion_neighbors)
force += cohesion_force * wp.normalize(pos - decohesion_pos_avg)
# boundaries
boundary_force = 20.0
if pos[0] >= world.upper[0]:
force += wp.vec3f(-boundary_force, 0.0, 0.0)
if pos[0] <= world.lower[0]:
force += wp.vec3f(boundary_force, 0.0, 0.0)
if pos[1] >= world.upper[1]:
force += wp.vec3f(0.0, -0.5 * boundary_force, 0.0)
if pos[1] <= world.lower[1]:
force += wp.vec3f(0.0, 5.0 * boundary_force, 0.0)
if pos[2] >= world.upper[2]:
force += wp.vec3f(0.0, 0.0, -boundary_force)
if pos[2] <= world.lower[2]:
force += wp.vec3f(0.0, 0.0, boundary_force)
vel += dt * force / boid.mass
# clamp speed
max_speed = 15.0
speed_sq = wp.length_sq(vel)
if speed_sq > max_speed * max_speed:
vel = max_speed * wp.normalize(vel)
# update position
pos += dt * vel
positions[tid] = wp.vec3d(wp.float64(pos[0]), wp.float64(pos[1]), wp.float64(pos[2]))
# update orientation
dq = wp.quat_between_vectors(forward, vel)
orientations[tid] = wp.normalize(dq * orientations[tid])
# save velocity
boid.vel = vel
boids[tid] = boid
# update glow as an exponentially weighted moving average to keep it smooth
glow = wp.min(1.0, float(num_neighbors) / 25.0)
glow_alpha = 0.25
glows[tid] = glow_alpha * glow + (1.0 - glow_alpha) * glows[tid]
# Internal State
# ------------------------------------------------------------------------------
class InternalState:
"""Internal state for the node."""
def __init__(self) -> None:
self.initialized = False
def initialize(self, device):
# requirement checks
ext_mgr = omni.kit.app.get_app().get_extension_manager()
# make sure USDRT is enabled
usdrt_ext_name = "usdrt.scenegraph"
if not ext_mgr.is_extension_enabled(usdrt_ext_name):
raise RuntimeError(f"This sample requires the '{usdrt_ext_name}' extension to be enabled")
# check USDRT version to make sure we have a working SelectPrims()
usdrt_ext_id = ext_mgr.get_enabled_extension_id(usdrt_ext_name)
usdrt_version_string = ext_mgr.get_extension_dict(usdrt_ext_id)["package"]["version"].split("-")[0]
usdrt_version = tuple(int(v) for v in usdrt_version_string.split("."))
if usdrt_version < (7, 3, 0):
raise RuntimeError(
f"USDRT version 7.3.0 is required, found {usdrt_version_string}. Please update to a newer version of Kit to run this sample."
)
# check if FSD is enabled
settings = carb.settings.get_settings()
is_fsd_enabled = settings.get_as_bool("/app/useFabricSceneDelegate")
if not is_fsd_enabled:
print("***")
print("*** Flocking demo warning: The Fabric Scene Delegate is not enabled.")
print("*** Some features, like color animation, may not work.")
print("*** You can enable FSD in Preferences->Rendering.")
print("***")
stage_id = omni.usd.get_context().get_stage_id()
usdrt_stage = usdrt.Usd.Stage.Attach(stage_id)
# import to Fabric
for _prim in usdrt_stage.Traverse():
pass
# set up for Fabric interop
boid_root = usdrt_stage.GetPrimAtPath(usdrt.Sdf.Path("/World/Boids"))
boid_prims = boid_root.GetChildren()
for prim in boid_prims:
pos = prim.GetAttribute("xformOp:translate").Get()
prim.CreateAttribute("_worldPosition", usdrt.Sdf.ValueTypeNames.Double3, True).Set(pos)
prim.CreateAttribute("_worldOrientation", usdrt.Sdf.ValueTypeNames.Quatf, True).Set(
usdrt.Gf.Quatf(1, 0, 0, 0)
)
prim.CreateAttribute("_worldScale", usdrt.Sdf.ValueTypeNames.Float3, True).Set(usdrt.Gf.Vec3f(1, 1, 1))
prim.CreateAttribute("primvars:_emissive", usdrt.Sdf.ValueTypeNames.Float3Array, True).Set(
usdrt.Vt.Vec3fArray([usdrt.Gf.Vec3f(1, 0, 1)])
)
# create a custom tag for the boids (could use applied schema too)
prim.CreateAttribute("BoidTag", usdrt.Sdf.ValueTypeNames.AppliedSchemaTypeTag, True)
num_boids = len(boid_prims)
self.stage = usdrt_stage
self.require_schemas = ["BoidTag"]
self.transform_attrs = [
(usdrt.Sdf.ValueTypeNames.Double3, "_worldPosition", usdrt.Usd.Access.ReadWrite),
(usdrt.Sdf.ValueTypeNames.Quatf, "_worldOrientation", usdrt.Usd.Access.ReadWrite),
]
self.color_attrs = [
(usdrt.Sdf.ValueTypeNames.Float3Array, "primvars:_emissive", usdrt.Usd.Access.ReadWrite),
]
npboids = np.zeros(num_boids, dtype=Boid.numpy_dtype())
angles = math.pi - 2 * math.pi * np.random.rand(num_boids)
vx = 20 * np.sin(angles)
vz = 20 * np.cos(angles)
npboids["vel"][:, 0] = vx
npboids["vel"][:, 2] = vz
npboids["wander_angles"][:, 0] = math.pi * np.random.rand(num_boids)
npboids["wander_angles"][:, 1] = 2 * math.pi * np.random.rand(num_boids)
min_mass = 1.0
max_mass = 2.0
npboids["mass"][:] = min_mass + (max_mass - min_mass) * np.random.rand(num_boids)
# we can have up to 3 groups currently, but that can be easily extended
self.num_groups = 2
npboids["group"] = np.random.randint(self.num_groups, size=num_boids)
num_obstacles = 3
npobstacles = np.zeros(num_obstacles, dtype=Obstacle.numpy_dtype())
npobstacles["pos"][0] = (-20, 30, -40)
npobstacles["radius"][0] = 40
npobstacles["pos"][1] = (90, 30, 30)
npobstacles["radius"][1] = 30
npobstacles["pos"][2] = (-100, 30, 60)
npobstacles["radius"][2] = 25
self.grid = wp.HashGrid(dim_x=32, dim_y=32, dim_z=32, device=device)
biases = wp.mat33f(-1.0)
for i in range(self.num_groups):
biases[i, i] = 1.0
world = World()
world.lower = (-120, 20, -90)
world.upper = (120, 40, 90)
world.grid = self.grid.id
world.seed = 0
world.biases = biases
world.obstacles = wp.array(npobstacles, dtype=Obstacle, device=device)
self.world = world
self.num_boids = num_boids
self.boids = wp.array(npboids, dtype=Boid, device=device)
# color ramps per group
color_ramps = [
[[0.3, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.5, 0.0], [1.0, 1.0, 0.5]],
[[0.0, 0.0, 0.3], [0.0, 0.0, 1.0], [0.0, 0.5, 1.0], [0.5, 1.0, 1.0]],
[[0.0, 0.3, 0.0], [0.0, 1.0, 0.0], [0.0, 1.0, 0.5], [0.8, 1.0, 0.8]],
]
# copy of positions used for updating the spatial grid
self.positions = wp.zeros(num_boids, dtype=wp.vec3f, device=device)
# color ramps are only used on the COLOR_DEVICE
self.color_ramps_c = wp.array(color_ramps, dtype=wp.vec3f, device=COLOR_DEVICE)
# keep a copy of group assignments on the COLOR_DEVICE
self.groups_c = wp.array(npboids["group"], device=COLOR_DEVICE)
# if we use different devices, the glow array must be copied on each update
if COLOR_DEVICE == device:
# use the same glow array on each device, no copying needed
self.glows_c = wp.zeros(num_boids, dtype=float, device=device)
self.glows_m = self.glows_c
elif COLOR_DEVICE == "cpu" or device == "cpu":
# use a pinned host array for async copying glows between devices
glows_h = wp.zeros(num_boids, dtype=float, device="cpu", pinned=True)
if COLOR_DEVICE == "cpu":
self.glows_c = glows_h
self.glows_m = wp.zeros_like(glows_h, device=device)
else:
self.glows_c = wp.zeros_like(glows_h, device=COLOR_DEVICE)
self.glows_m = glows_h
else:
# two different CUDA devices
self.glows_c = wp.zeros(num_boids, dtype=float, device=COLOR_DEVICE)
self.glows_m = wp.zeros(num_boids, dtype=float, device=device)
# ...but that's currently not supported in Kit
raise ValueError("Multiple GPUs not supported yet")
self.time = 0.0
self.min_group_think = 3.0
self.max_group_think = 10.0
self.next_group_think = self.min_group_think + (self.max_group_think - self.min_group_think) * np.random.rand()
self.frameno = 0
self.initialized = True
# Compute
# ------------------------------------------------------------------------------
def compute(db: OgnSamplePrimFlockingDatabase) -> None:
"""Evaluates the node."""
state = db.internal_state
device = wp.get_device()
if not state.initialized:
state.initialize(device)
state.frameno += 1
# get transform attributes
selection = state.stage.SelectPrims(
require_applied_schemas=state.require_schemas, require_attrs=state.transform_attrs, device=str(device)
)
fpos = wp.fabricarray(data=selection, attrib="_worldPosition")
frot = wp.fabricarray(data=selection, attrib="_worldOrientation")
# use fixed dt for stability
dt = 1.0 / 60.0
state.time += dt
# copy positions to a contiguous array and convert to vec3f so they can be used to update the spatial grid
wp.launch(copy_positions, dim=state.num_boids, inputs=[state.positions, fpos])
# grid cell radius should be a bit bigger than query radius
cell_radius = 20.0
state.grid.build(state.positions, cell_radius)
state.world.seed = state.frameno
# step the flocking simulation
wp.launch(boids, dim=state.num_boids, inputs=[state.boids, state.world, dt, fpos, frot, state.glows_m])
# async copy from main device and remember the stream so we can sync later
if COLOR_DEVICE != device:
if device.is_cuda:
work_stream = device.stream
else:
work_stream = wp.get_stream(COLOR_DEVICE)
wp.copy(state.glows_c, state.glows_m, stream=work_stream)
else:
work_stream = None
# get color attributes
color_selection = state.stage.SelectPrims(
require_applied_schemas=state.require_schemas, require_attrs=state.color_attrs, device=COLOR_DEVICE
)
fcolor = wp.fabricarray(data=color_selection, attrib="primvars:_emissive")
# occasionally update group biases (whether they are attracted or repelled from each other)
if state.num_groups > 1 and state.time >= state.next_group_think:
# pick two random groups
group0 = np.random.randint(state.num_groups)
group1 = np.random.randint(state.num_groups)
while group0 == group1:
group1 = np.random.randint(state.num_groups)
# bias towards intra-group separation, but also allow attraction
state.world.biases[group0, group1] = 1.0 - 5.0 * np.random.rand()
state.world.biases[group1, group0] = 1.0 - 5.0 * np.random.rand()
state.next_group_think += (
state.min_group_think + (state.max_group_think - state.min_group_think) * np.random.rand()
)
if work_stream is not None:
# wait for async GPU work to complete
wp.synchronize_stream(work_stream)
# update colors
wp.launch(
assign_colors,
dim=state.num_boids,
inputs=[state.glows_c, state.groups_c, state.color_ramps_c, fcolor],
device=COLOR_DEVICE,
)
# Node Entry Point
# ------------------------------------------------------------------------------
class OgnSamplePrimFlocking:
"""Node."""
@staticmethod
def internal_state() -> InternalState:
return InternalState()
@staticmethod
def compute(db: OgnSamplePrimFlockingDatabase) -> None:
device = wp.get_device(MAIN_DEVICE)
try:
with wp.ScopedDevice(device):
compute(db)
except Exception:
db.log_error(traceback.format_exc())
return
# Fire the execution for the downstream nodes.
db.outputs.execOut = og.ExecutionAttributeState.ENABLED
| 18,870 | Python | 33.186594 | 142 | 0.595337 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/basis_curves.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Helpers to author basis curves geometries represented as OmniGraph bundles."""
from typing import Optional
import numpy as np
import omni.graph.core as og
from omni.warp.nodes._impl.attributes import (
attr_get_array_on_gpu,
attr_set,
)
from omni.warp.nodes._impl.bundles import (
bundle_copy_attr_value,
bundle_create_attr,
bundle_create_child,
bundle_create_metadata_attr,
bundle_get_attr,
bundle_set_prim_type,
bundle_set_world_xform,
)
from omni.warp.nodes._impl.points import (
points_get_display_color,
points_get_local_extent,
points_get_points,
points_get_widths,
points_get_world_extent,
)
import warp as wp
# Public API
# ------------------------------------------------------------------------------
def basis_curves_create_bundle(
dst_bundle: og.BundleContents,
point_count: int,
curve_count: int,
type: Optional[str] = None,
basis: Optional[str] = None,
wrap: Optional[str] = None,
xform: Optional[np.ndarray] = None,
create_display_color: bool = False,
create_widths: bool = False,
child_idx: int = 0,
) -> None:
"""Creates and initializes point cloud attributes within a bundle."""
child_bundle = bundle_create_child(dst_bundle, child_idx)
bundle_create_attr(
child_bundle,
"points",
og.Type(
og.BaseDataType.FLOAT,
tuple_count=3,
array_depth=1,
role=og.AttributeRole.POSITION,
),
size=point_count,
)
bundle_create_attr(
child_bundle,
"curveVertexCounts",
og.Type(
og.BaseDataType.INT,
tuple_count=1,
array_depth=1,
role=og.AttributeRole.NONE,
),
size=curve_count,
)
if type is not None:
attr = bundle_create_attr(
child_bundle,
"type",
og.Type(
og.BaseDataType.TOKEN,
tuple_count=1,
array_depth=0,
role=og.AttributeRole.NONE,
),
)
attr_set(attr, type)
if basis is not None:
attr = bundle_create_attr(
child_bundle,
"basis",
og.Type(
og.BaseDataType.TOKEN,
tuple_count=1,
array_depth=0,
role=og.AttributeRole.NONE,
),
)
attr_set(attr, basis)
if wrap is not None:
attr = bundle_create_attr(
child_bundle,
"warp",
og.Type(
og.BaseDataType.TOKEN,
tuple_count=1,
array_depth=0,
role=og.AttributeRole.NONE,
),
)
attr_set(attr, wrap)
bundle_set_prim_type(dst_bundle, "BasisCurves", child_idx=child_idx)
if xform is not None:
bundle_set_world_xform(dst_bundle, xform, child_idx=child_idx)
if create_display_color:
bundle_create_attr(
child_bundle,
"primvars:displayColor",
og.Type(
og.BaseDataType.FLOAT,
tuple_count=3,
array_depth=1,
role=og.AttributeRole.COLOR,
),
size=point_count,
)
interp_attr = bundle_create_metadata_attr(
child_bundle,
"primvars:displayColor",
"interpolation",
og.Type(
og.BaseDataType.TOKEN,
tuple_count=1,
array_depth=0,
role=og.AttributeRole.NONE,
),
)
attr_set(interp_attr, "vertex")
if create_widths:
bundle_create_attr(
child_bundle,
"widths",
og.Type(
og.BaseDataType.FLOAT,
tuple_count=1,
array_depth=1,
role=og.AttributeRole.NONE,
),
size=point_count,
)
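# A short usage sketch, as it might appear in a node's compute function; the
# 'db' handle and the attribute values are assumptions made for illustration:
#
#   basis_curves_create_bundle(
#       db.outputs.curves,
#       point_count=64,
#       curve_count=4,
#       type="cubic",
#       basis="bspline",
#       wrap="nonperiodic",
#       create_widths=True,
#   )
#   points = basis_curves_get_points(db.outputs.curves)  # Warp vec3 array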
def basis_curves_copy_bundle(
dst_bundle: og.BundleContents,
src_bundle: og.BundleContents,
deep_copy: bool = False,
child_idx: int = 0,
) -> None:
"""Creates and initializes points attributes from an existing bundle."""
dst_child_bundle = bundle_create_child(dst_bundle, child_idx)
src_child_bundle = src_bundle.bundle.get_child_bundle(child_idx)
dst_child_bundle.copy_bundle(src_child_bundle)
if deep_copy:
bundle_copy_attr_value(dst_child_bundle, src_child_bundle, "points", wp.vec3)
bundle_copy_attr_value(dst_child_bundle, src_child_bundle, "curveVertexCounts", int)
bundle_copy_attr_value(dst_child_bundle, src_child_bundle, "primvars:displayColor", wp.vec3)
bundle_copy_attr_value(dst_child_bundle, src_child_bundle, "widths", float)
def basis_curves_get_point_count(
bundle: og.BundleContents,
child_idx: int = 0,
) -> int:
"""Retrieves the number of points."""
return bundle_get_attr(bundle, "points", child_idx).size()
def basis_curves_get_curve_count(
bundle: og.BundleContents,
child_idx: int = 0,
) -> int:
"""Retrieves the number of curves."""
return bundle_get_attr(bundle, "curveVertexCounts", child_idx).size()
def basis_curves_get_points(
bundle: og.BundleContents,
child_idx: int = 0,
) -> wp.array(dtype=wp.vec3):
"""Retrieves the bundle points attribute as a Warp array."""
return points_get_points(bundle, child_idx=child_idx)
def basis_curves_get_widths(
bundle: og.BundleContents,
child_idx: int = 0,
) -> wp.array(dtype=float):
"""Retrieves the bundle widths attribute as a Warp array."""
return points_get_widths(bundle, child_idx=child_idx)
def basis_curves_get_display_color(
bundle: og.BundleContents,
child_idx: int = 0,
) -> wp.array(dtype=wp.vec3):
"""Retrieves the bundle display color attribute as a Warp array."""
return points_get_display_color(bundle, child_idx=child_idx)
def basis_curves_get_curve_vertex_counts(
bundle: og.BundleContents,
child_idx: int = 0,
) -> wp.array(dtype=int):
"""Retrieves the bundle curve vertex counts attribute as a Warp array."""
attr = bundle_get_attr(bundle, "curveVertexCounts", child_idx)
return attr_get_array_on_gpu(attr, int, read_only=bundle.read_only)
def basis_curves_get_local_extent(
bundle: og.BundleContents,
child_idx: int = 0,
) -> np.ndarray:
"""Retrieves the local extent of the geometry points."""
return points_get_local_extent(bundle, child_idx=child_idx)
def basis_curves_get_world_extent(
bundle: og.BundleContents,
axis_aligned: bool = False,
child_idx: int = 0,
) -> np.ndarray:
"""Retrieves the world extent of the geometry points."""
return points_get_world_extent(
bundle,
axis_aligned=axis_aligned,
child_idx=child_idx,
)
| 7,277 | Python | 28.465587 | 100 | 0.597087 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/points.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Helpers to author point cloud geometries represented as OmniGraph bundles."""
from math import inf
from typing import Optional
import numpy as np
import omni.graph.core as og
from omni.warp.nodes._impl.attributes import (
attr_get,
attr_get_array_on_gpu,
attr_set,
)
from omni.warp.nodes._impl.bundles import (
bundle_copy_attr_value,
bundle_create_attr,
bundle_create_child,
bundle_create_metadata_attr,
bundle_get_attr,
bundle_get_world_xform,
bundle_set_prim_type,
bundle_set_world_xform,
)
import warp as wp
# Public API
# ------------------------------------------------------------------------------
def points_create_bundle(
dst_bundle: og.BundleContents,
point_count: int,
xform: Optional[np.ndarray] = None,
create_display_color: bool = False,
create_masses: bool = False,
create_velocities: bool = False,
create_widths: bool = False,
child_idx: int = 0,
) -> None:
"""Creates and initializes point cloud attributes within a bundle."""
child_bundle = bundle_create_child(dst_bundle, child_idx)
bundle_create_attr(
child_bundle,
"points",
og.Type(
og.BaseDataType.FLOAT,
tuple_count=3,
array_depth=1,
role=og.AttributeRole.POSITION,
),
size=point_count,
)
bundle_set_prim_type(dst_bundle, "Points", child_idx=child_idx)
if xform is not None:
bundle_set_world_xform(dst_bundle, xform, child_idx=child_idx)
if create_display_color:
bundle_create_attr(
child_bundle,
"primvars:displayColor",
og.Type(
og.BaseDataType.FLOAT,
tuple_count=3,
array_depth=1,
role=og.AttributeRole.COLOR,
),
size=point_count,
)
interp_attr = bundle_create_metadata_attr(
child_bundle,
"primvars:displayColor",
"interpolation",
og.Type(
og.BaseDataType.TOKEN,
tuple_count=1,
array_depth=0,
role=og.AttributeRole.NONE,
),
)
attr_set(interp_attr, "vertex")
if create_masses:
bundle_create_attr(
child_bundle,
"masses",
og.Type(
og.BaseDataType.FLOAT,
tuple_count=1,
array_depth=1,
role=og.AttributeRole.NONE,
),
size=point_count,
)
if create_velocities:
bundle_create_attr(
child_bundle,
"velocities",
og.Type(
og.BaseDataType.FLOAT,
tuple_count=3,
array_depth=1,
role=og.AttributeRole.VECTOR,
),
size=point_count,
)
if create_widths:
bundle_create_attr(
child_bundle,
"widths",
og.Type(
og.BaseDataType.FLOAT,
tuple_count=1,
array_depth=1,
role=og.AttributeRole.NONE,
),
size=point_count,
)
def points_copy_bundle(
dst_bundle: og.BundleContents,
src_bundle: og.BundleContents,
deep_copy: bool = False,
child_idx: int = 0,
) -> None:
"""Creates and initializes points attributes from an existing bundle."""
dst_child_bundle = bundle_create_child(dst_bundle, child_idx)
src_child_bundle = src_bundle.bundle.get_child_bundle(child_idx)
dst_child_bundle.copy_bundle(src_child_bundle)
if deep_copy:
bundle_copy_attr_value(dst_child_bundle, src_child_bundle, "points", wp.vec3)
bundle_copy_attr_value(dst_child_bundle, src_child_bundle, "primvars:displayColor", wp.vec3)
bundle_copy_attr_value(dst_child_bundle, src_child_bundle, "masses", float)
bundle_copy_attr_value(dst_child_bundle, src_child_bundle, "velocities", wp.vec3)
bundle_copy_attr_value(dst_child_bundle, src_child_bundle, "widths", float)
def points_get_point_count(
bundle: og.BundleContents,
child_idx: int = 0,
) -> int:
"""Retrieves the number of points."""
return bundle_get_attr(bundle, "points", child_idx).size()
def points_get_points(
bundle: og.BundleContents,
child_idx: int = 0,
) -> wp.array(dtype=wp.vec3):
"""Retrieves the bundle points attribute as a Warp array."""
attr = bundle_get_attr(bundle, "points", child_idx)
return attr_get_array_on_gpu(attr, wp.vec3, read_only=bundle.read_only)
def points_get_velocities(
bundle: og.BundleContents,
child_idx: int = 0,
) -> wp.array(dtype=wp.vec3):
"""Retrieves the bundle velocities attribute as a Warp array."""
attr = bundle_get_attr(bundle, "velocities", child_idx)
return attr_get_array_on_gpu(attr, wp.vec3, read_only=bundle.read_only)
def points_get_widths(
bundle: og.BundleContents,
child_idx: int = 0,
) -> wp.array(dtype=float):
"""Retrieves the bundle widths attribute as a Warp array."""
attr = bundle_get_attr(bundle, "widths", child_idx)
return attr_get_array_on_gpu(attr, float, read_only=bundle.read_only)
def points_get_masses(
bundle: og.BundleContents,
child_idx: int = 0,
) -> wp.array(dtype=float):
"""Retrieves the bundle masses attribute as a Warp array."""
attr = bundle_get_attr(bundle, "masses", child_idx)
return attr_get_array_on_gpu(attr, float, read_only=bundle.read_only)
def points_get_display_color(
bundle: og.BundleContents,
child_idx: int = 0,
) -> wp.array(dtype=wp.vec3):
"""Retrieves the bundle display color attribute as a Warp array."""
attr = bundle_get_attr(bundle, "primvars:displayColor", child_idx)
return attr_get_array_on_gpu(attr, wp.vec3, read_only=bundle.read_only)
def points_get_local_extent(
bundle: og.BundleContents,
child_idx: int = 0,
) -> np.ndarray:
"""Retrieves the local extent of the geometry points."""
# Some standard workflows include a single 'extent' attribute when defining
# geometry primitives on the stage.
attr = bundle_get_attr(bundle, "extent", child_idx)
if attr is not None:
return attr_get(attr)
# Alternatively, the ReadPrims node offers an option to compute the bounding
# box which results in a triple of 'bboxMinCorner', 'bboxMaxCorner',
# and 'bboxTransform' attributes.
min_attr = bundle_get_attr(bundle, "bboxMinCorner", child_idx)
max_attr = bundle_get_attr(bundle, "bboxMaxCorner", child_idx)
if min_attr is not None and max_attr is not None:
return np.stack(
(
attr_get(min_attr),
attr_get(max_attr),
),
)
# The last resort is to compute the extent ourselves from
# the point positions.
points = points_get_points(bundle, child_idx=child_idx)
min_extent = wp.array((+inf, +inf, +inf), dtype=wp.vec3)
max_extent = wp.array((-inf, -inf, -inf), dtype=wp.vec3)
wp.launch(
_compute_extent_kernel,
dim=len(points),
inputs=[points],
outputs=[min_extent, max_extent],
)
return np.concatenate((min_extent.numpy(), max_extent.numpy()))
def points_get_world_extent(
bundle: og.BundleContents,
axis_aligned: bool = False,
child_idx: int = 0,
) -> np.ndarray:
"""Retrieves the world extent of the geometry points."""
extent = points_get_local_extent(bundle, child_idx=child_idx)
xform = bundle_get_world_xform(bundle, child_idx=child_idx)
if axis_aligned:
points = np.array(
(
(extent[0][0], extent[0][1], extent[0][2]),
(extent[0][0], extent[0][1], extent[1][2]),
(extent[0][0], extent[1][1], extent[0][2]),
(extent[0][0], extent[1][1], extent[1][2]),
(extent[1][0], extent[1][1], extent[1][2]),
(extent[1][0], extent[0][1], extent[1][2]),
(extent[1][0], extent[1][1], extent[0][2]),
(extent[1][0], extent[0][1], extent[0][2]),
),
)
else:
points = extent
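    # Promote the extent corners to homogeneous coordinates and transform them
    # into world space before taking the axis-aligned min/max.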
points = np.pad(points, ((0, 0), (0, 1)), constant_values=1)
points = np.dot(xform.T, points[:, :, None]).squeeze()[:-1, :].T
return np.array(
(
np.amin(points, axis=0),
np.amax(points, axis=0),
)
)
# Private Helpers
# ------------------------------------------------------------------------------
@wp.kernel(enable_backward=False)
def _compute_extent_kernel(
points: wp.array(dtype=wp.vec3),
out_min_extent: wp.array(dtype=wp.vec3),
out_max_extent: wp.array(dtype=wp.vec3),
):
"""Computes the extent of a point cloud."""
tid = wp.tid()
wp.atomic_min(out_min_extent, 0, points[tid])
wp.atomic_max(out_max_extent, 0, points[tid])
| 9,333 | Python | 31.186207 | 100 | 0.592843 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/OgnTextureWrite.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Node writing a dynamic texture."""
import ctypes
import traceback
import omni.graph.core as og
import omni.warp.nodes
from omni.warp.nodes.ogn.OgnTextureWriteDatabase import OgnTextureWriteDatabase
import warp as wp
try:
import omni.ui as ui
except ImportError:
ui = None
# Internal State
# ------------------------------------------------------------------------------
class InternalState:
"""Internal state for the node."""
def __init__(self) -> None:
self.texture_provider = None
self.is_valid = False
self.attr_tracking = omni.warp.nodes.AttrTracking(
("uri",),
)
def needs_initialization(self, db: OgnTextureWriteDatabase) -> bool:
"""Checks if the internal state needs to be (re)initialized."""
if not self.is_valid:
return True
if self.attr_tracking.have_attrs_changed(db):
return True
return False
def initialize(
self,
db: OgnTextureWriteDatabase,
) -> bool:
"""Initializes the internal state."""
uri = db.inputs.uri
if not uri.startswith("dynamic://"):
return False
texture_provider = ui.DynamicTextureProvider(uri[10:])
# Store the class members.
self.texture_provider = texture_provider
self.attr_tracking.update_state(db)
return True
# Compute
# ------------------------------------------------------------------------------
def compute(db: OgnTextureWriteDatabase) -> None:
"""Evaluates the node."""
if ui is None:
db.log_warning("Cannot write dynamic textures in headless mode.")
return
if not db.inputs.data.memory or db.inputs.data.shape[0] == 0:
return
state = db.internal_state
if state.needs_initialization(db):
# Initialize the internal state if it hasn't been already.
if not state.initialize(db):
return
dim_count = min(max(db.inputs.dimCount, 0), wp.types.ARRAY_MAX_DIMS)
resolution = tuple(max(getattr(db.inputs, "dim{}".format(i + 1)), 0) for i in range(dim_count))
# We need to dereference OG's attribute pointer to get the actual pointer
# to the data.
data_ptr = ctypes.cast(db.inputs.data.memory, ctypes.POINTER(ctypes.c_size_t)).contents.value
# Write the texture to the provider.
state.texture_provider.set_bytes_data_from_gpu(
data_ptr,
resolution,
format=ui.TextureFormat.RGBA32_SFLOAT,
)
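# A sketch of how the written texture might be consumed, assuming a UsdShade
# material whose shader accepts a texture asset path; the input name is an
# assumption made for illustration:
#
#   shader.CreateInput("diffuse_texture", Sdf.ValueTypeNames.Asset).Set(
#       "dynamic://my_texture"
#   )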
# Node Entry Point
# ------------------------------------------------------------------------------
class OgnTextureWrite:
"""Dynamic texture write node."""
@staticmethod
def internal_state() -> InternalState:
return InternalState()
@staticmethod
def compute(db: OgnTextureWriteDatabase) -> None:
try:
compute(db)
except Exception:
db.log_error(traceback.format_exc())
db.internal_state.is_valid = False
return
db.internal_state.is_valid = True
# Trigger the execution for the downstream nodes.
db.outputs.execOut = og.ExecutionAttributeState.ENABLED
| 3,608 | Python | 26.976744 | 99 | 0.612251 |
NVIDIA/warp/exts/omni.warp/omni/warp/nodes/_impl/OgnClothSimulate.py | # Copyright (c) 2023 NVIDIA CORPORATION. All rights reserved.
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Node simulating cloth."""
import traceback
import numpy as np
import omni.graph.core as og
import omni.timeline
import omni.warp.nodes
from omni.warp.nodes.ogn.OgnClothSimulateDatabase import OgnClothSimulateDatabase
import warp as wp
USE_GRAPH = True
PROFILING = False
# Kernels
# ------------------------------------------------------------------------------
@wp.kernel(enable_backward=False)
def transform_points_kernel(
points: wp.array(dtype=wp.vec3),
xform: wp.mat44,
out_points: wp.array(dtype=wp.vec3),
):
tid = wp.tid()
out_points[tid] = wp.transform_point(xform, points[tid])
@wp.kernel(enable_backward=False)
def update_collider_kernel(
points_0: wp.array(dtype=wp.vec3),
points_1: wp.array(dtype=wp.vec3),
xform_0: wp.mat44,
xform_1: wp.mat44,
sim_dt: float,
out_points: wp.array(dtype=wp.vec3),
out_velocities: wp.array(dtype=wp.vec3),
):
tid = wp.tid()
point_0 = wp.transform_point(xform_0, points_0[tid])
point_1 = wp.transform_point(xform_1, points_1[tid])
out_points[tid] = point_0
out_velocities[tid] = (point_1 - point_0) / sim_dt
@wp.kernel(enable_backward=False)
def update_cloth_kernel(
points_0: wp.array(dtype=wp.vec3),
xform: wp.mat44,
out_points: wp.array(dtype=wp.vec3),
out_velocities: wp.array(dtype=wp.vec3),
):
tid = wp.tid()
point = wp.transform_point(xform, points_0[tid])
diff = point - points_0[tid]
out_points[tid] = point
out_velocities[tid] = out_velocities[tid] + diff
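# Advances the contact points along their velocities over one substep.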
@wp.kernel
def update_contacts_kernel(
points: wp.array(dtype=wp.vec3),
velocities: wp.array(dtype=wp.vec3),
sim_dt: float,
out_points: wp.array(dtype=wp.vec3),
):
tid = wp.tid()
out_points[tid] = points[tid] + velocities[tid] * sim_dt
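# Gathers the particle positions referenced by the spring indices to build
# the visualization curve points.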
@wp.kernel(enable_backward=False)
def basis_curve_points_from_springs_kernel(
points: wp.array(dtype=wp.vec3),
indices: wp.array(dtype=int),
out_points: wp.array(dtype=wp.vec3),
):
tid = wp.tid()
idx = indices[tid]
out_points[tid] = points[idx]
# Internal State
# ------------------------------------------------------------------------------
class InternalState:
"""Internal state for the node."""
def __init__(self) -> None:
self.sim_dt = None
self.sim_tick = None
self.model = None
self.integrator = None
self.state_0 = None
self.state_1 = None
self.xform = None
self.collider_xform = None
self.collider_mesh = None
self.collider_points_0 = None
self.collider_points_1 = None
self.graph = None
self.visualization_enabled = False
self.sim_enabled = True
self.time = 0.0
self.is_valid = False
self.attr_tracking = omni.warp.nodes.AttrTracking(
(
"substepCount",
"gravity",
"globalScale",
"contactElasticStiffness",
"contactFrictionStiffness",
"contactFrictionCoeff",
"contactDampingStiffness",
"clothDensity",
"clothTriElasticStiffness",
"clothTriAreaStiffness",
"clothTriDampingStiffness",
"clothEdgeBendingStiffness",
"clothEdgeDampingStiffness",
"colliderContactDistance",
"colliderContactQueryRange",
"groundEnabled",
"groundAltitude",
),
)
def needs_initialization(self, db: OgnClothSimulateDatabase) -> bool:
"""Checks if the internal state needs to be (re)initialized."""
if not self.is_valid or not db.inputs.enabled or not self.sim_enabled:
return True
if self.attr_tracking.have_attrs_changed(db):
return True
if db.inputs.time < self.time:
# Reset the simulation when we're rewinding.
return True
return False
def initialize(
self,
db: OgnClothSimulateDatabase,
device: wp.context.Device,
) -> bool:
"""Initializes the internal state."""
# Lazy load warp.sim here to not slow down extension loading.
import warp.sim
# Compute the simulation time step.
timeline = omni.timeline.get_timeline_interface()
sim_rate = timeline.get_ticks_per_second()
sim_dt = 1.0 / sim_rate
# Initialize Warp's simulation model builder.
builder = wp.sim.ModelBuilder()
# Retrieve some data from the cloth mesh.
points = omni.warp.nodes.mesh_get_points(db.inputs.cloth)
xform = omni.warp.nodes.bundle_get_world_xform(db.inputs.cloth)
# Transform the cloth point positions into world space.
world_points = wp.empty(len(points), dtype=wp.vec3)
wp.launch(
kernel=transform_points_kernel,
dim=len(points),
inputs=[
points,
xform.T,
],
outputs=[
world_points,
],
)
# Register the cloth geometry mesh into Warp's simulation model builder,
# which requires triangulated meshes.
face_vertex_indices = omni.warp.nodes.mesh_triangulate(db.inputs.cloth)
builder.add_cloth_mesh(
pos=(0.0, 0.0, 0.0),
rot=(0.0, 0.0, 0.0, 1.0),
scale=1.0,
vel=(0.0, 0.0, 0.0),
vertices=world_points.numpy(),
indices=face_vertex_indices.numpy(),
density=db.inputs.clothDensity,
tri_ke=db.inputs.clothTriElasticStiffness * db.inputs.globalScale,
tri_ka=db.inputs.clothTriAreaStiffness * db.inputs.globalScale,
tri_kd=db.inputs.clothTriDampingStiffness * db.inputs.globalScale,
tri_drag=db.inputs.clothTriDrag * db.inputs.globalScale,
tri_lift=db.inputs.clothTriLift * db.inputs.globalScale,
edge_ke=db.inputs.clothEdgeBendingStiffness * db.inputs.globalScale,
edge_kd=db.inputs.clothEdgeDampingStiffness * db.inputs.globalScale,
)
# Set a uniform mass to avoid large discrepancies.
avg_mass = np.mean(builder.particle_mass)
builder.particle_mass = np.full(
(len(builder.particle_mass),),
avg_mass,
)
# Register any spring constraint.
for src_idx, dst_idx in db.inputs.springIndexPairs:
builder.add_spring(
src_idx,
dst_idx,
ke=db.inputs.springElasticStiffness,
kd=db.inputs.springDampingStiffness,
control=1.0,
)
if db.inputs.collider.valid:
# Retrieve some data from the collider mesh.
collider_points = omni.warp.nodes.mesh_get_points(db.inputs.collider)
collider_xform = omni.warp.nodes.bundle_get_world_xform(db.inputs.collider)
# Transform the collider point position into world space.
collider_world_points = wp.empty(
len(collider_points),
dtype=wp.vec3,
)
wp.launch(
kernel=transform_points_kernel,
dim=len(collider_points),
inputs=[
collider_points,
collider_xform.T,
],
outputs=[
collider_world_points,
],
)
# Initialize Warp's mesh instance, which requires
# triangulated meshes.
collider_face_vertex_indices = omni.warp.nodes.mesh_triangulate(
db.inputs.collider,
)
collider_mesh = wp.sim.Mesh(
collider_world_points.numpy(),
collider_face_vertex_indices.numpy(),
compute_inertia=False,
)
# Register the collider geometry mesh into Warp's simulation model
# builder.
builder.add_shape_mesh(
body=-1,
mesh=collider_mesh,
pos=(0.0, 0.0, 0.0),
rot=(0.0, 0.0, 0.0, 1.0),
scale=(1.0, 1.0, 1.0),
density=0.0,
ke=0.0,
kd=0.0,
kf=0.0,
mu=0.0,
)
# Store the collider's point positions as internal state.
collider_points_0 = wp.empty_like(collider_points)
collider_points_1 = wp.empty_like(collider_points)
wp.copy(collider_points_0, collider_points)
wp.copy(collider_points_1, collider_points)
# Store the class members.
self.collider_xform = collider_xform.copy()
self.collider_mesh = collider_mesh
self.collider_points_0 = collider_points_0
self.collider_points_1 = collider_points_1
else:
self.collider_mesh = None
# Register the ground.
builder.set_ground_plane(
offset=-db.inputs.groundAltitude + db.inputs.colliderContactDistance,
ke=db.inputs.contactElasticStiffness * db.inputs.globalScale,
kd=db.inputs.contactDampingStiffness * db.inputs.globalScale,
kf=db.inputs.contactFrictionStiffness * db.inputs.globalScale,
mu=db.inputs.contactFrictionCoeff,
)
# Build the simulation model.
model = builder.finalize()
# Allocate a single contact per particle.
model.allocate_soft_contacts(model.particle_count)
# Initialize the integrator.
integrator = wp.sim.SemiImplicitIntegrator()
# Set the model properties.
model.ground = db.inputs.groundEnabled
model.gravity = db.inputs.gravity
model.soft_contact_ke = db.inputs.contactElasticStiffness * db.inputs.globalScale
model.soft_contact_kf = db.inputs.contactFrictionStiffness * db.inputs.globalScale
model.soft_contact_mu = db.inputs.contactFrictionCoeff
model.soft_contact_kd = db.inputs.contactDampingStiffness * db.inputs.globalScale
model.soft_contact_margin = db.inputs.colliderContactDistance * db.inputs.colliderContactQueryRange
model.particle_radius.fill_(db.inputs.colliderContactDistance)
# Store the class members.
self.sim_dt = sim_dt
self.sim_tick = 0
self.model = model
self.integrator = integrator
self.state_0 = model.state()
self.state_1 = model.state()
self.xform = xform.copy()
if USE_GRAPH:
            # Create the CUDA graph. We first manually load the necessary
            # modules so that the capture doesn't load every registered
            # module, including ones that aren't relevant here.
wp.load_module(device=device)
wp.load_module(module=warp.sim, device=device, recursive=True)
wp.capture_begin(force_module_load=False)
try:
step(db)
finally:
self.graph = wp.capture_end()
self.attr_tracking.update_state(db)
return True
# Compute
# ------------------------------------------------------------------------------
def update_collider(
db: OgnClothSimulateDatabase,
) -> None:
"""Updates the collider state."""
state = db.internal_state
points = omni.warp.nodes.mesh_get_points(db.inputs.collider)
xform = omni.warp.nodes.bundle_get_world_xform(db.inputs.collider)
# Swap the previous and current collider point positions.
(state.collider_points_0, state.collider_points_1) = (
state.collider_points_1,
state.collider_points_0,
)
# Store the current point positions.
wp.copy(state.collider_points_1, points)
# Retrieve the previous and current world transformations.
xform_0 = state.collider_xform
xform_1 = xform
# Update the internal point positions and velocities.
wp.launch(
kernel=update_collider_kernel,
dim=len(state.collider_mesh.vertices),
inputs=[
state.collider_points_0,
state.collider_points_1,
xform_0.T,
xform_1.T,
state.sim_dt,
],
outputs=[
state.collider_mesh.mesh.points,
state.collider_mesh.mesh.velocities,
],
)
# Refit the BVH.
state.collider_mesh.mesh.refit()
# Update the state members.
state.collider_xform = xform.copy()
def update_cloth(
db: OgnClothSimulateDatabase,
) -> None:
"""Updates the cloth state."""
state = db.internal_state
xform = omni.warp.nodes.bundle_get_world_xform(db.inputs.cloth)
# Retrieve the previous and current world transformations.
xform_0 = state.xform
xform_1 = xform
# Update the internal point positions and velocities.
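    # The relative transform inv(xform_0) @ xform_1 maps points from the
    # previous frame's space into the current one.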
wp.launch(
kernel=update_cloth_kernel,
dim=len(state.state_0.particle_q),
inputs=[
state.state_0.particle_q,
np.matmul(np.linalg.inv(xform_0), xform_1).T,
],
outputs=[
state.state_0.particle_q,
state.state_0.particle_qd,
],
)
# Update the state members.
state.xform = xform.copy()
def step(db: OgnClothSimulateDatabase) -> None:
"""Steps through the simulation."""
state = db.internal_state
sim_dt = state.sim_dt / db.inputs.substepCount
# Run the collision detection once per frame.
wp.sim.collide(state.model, state.state_0)
for _ in range(db.inputs.substepCount):
state.state_0.clear_forces()
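        # Move the collider contact points along their velocities so they
        # track the animated collider during the substep.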
wp.launch(
update_contacts_kernel,
state.model.soft_contact_max,
inputs=[
state.model.soft_contact_body_pos,
state.model.soft_contact_body_vel,
sim_dt,
],
outputs=[
state.model.soft_contact_body_pos,
],
)
state.integrator.simulate(
state.model,
state.state_0,
state.state_1,
sim_dt,
)
# Swap the previous and current states.
(state.state_0, state.state_1) = (state.state_1, state.state_0)
def simulate(db: OgnClothSimulateDatabase) -> None:
"""Simulates the cloth at the current time."""
state = db.internal_state
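    # Replay the pre-captured CUDA graph when available, which avoids
    # re-launching each kernel individually; otherwise step the simulation
    # directly.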
if USE_GRAPH:
wp.capture_launch(state.graph)
else:
step(db)
def compute(db: OgnClothSimulateDatabase, device: wp.context.Device) -> None:
"""Evaluates the node."""
if not db.inputs.cloth.valid or not db.outputs.cloth.valid:
return
state = db.internal_state
if not db.inputs.enabled:
# Pass through the data.
db.outputs.cloth = db.inputs.cloth
# Store whether the simulation was last enabled.
state.sim_enabled = False
return
if state.needs_initialization(db):
# Initialize the internal state if it hasn't been already.
# We want to use the input cloth geometry as the initial state
# of the simulation so we copy its bundle to the output one.
db.outputs.cloth = db.inputs.cloth
if not state.initialize(db, device):
return
else:
        # The cloth might have been initialized while its point data was
        # still undefined, so re-initialize if the input geometry changed
        # before the first simulated tick.
if state.sim_tick == 0 and omni.warp.nodes.bundle_has_changed(db.inputs.cloth):
if not state.initialize(db, device):
return
if (
db.inputs.collider.valid
and state.collider_mesh is not None
and omni.warp.nodes.bundle_has_changed(db.inputs.collider)
):
# The collider might be animated so we need to update its state.
update_collider(db)
if omni.warp.nodes.bundle_have_attrs_changed(db.inputs.cloth, ("worldMatrix",)):
update_cloth(db)
with omni.warp.nodes.NodeTimer("simulate", db, active=PROFILING):
# Run the cloth simulation at the current time.
simulate(db)
with omni.warp.nodes.NodeTimer("transform_points_to_local_space", db, active=PROFILING):
# Retrieve some data from the cloth mesh.
xform = omni.warp.nodes.bundle_get_world_xform(db.inputs.cloth)
# Transform the cloth point positions back into local space
# and store them into the bundle.
out_points = omni.warp.nodes.points_get_points(db.outputs.cloth)
wp.launch(
kernel=transform_points_kernel,
dim=len(out_points),
inputs=[
state.state_0.particle_q,
np.linalg.inv(xform).T,
],
outputs=[
out_points,
],
)
# Increment the simulation tick.
state.sim_tick += 1
# Clear any previous visualization data.
if state.visualization_enabled:
db.outputs.visualization.bundle.clear_contents()
state.visualization_enabled = False
# Each type of visualization goes into its own primitive.
visualization_prim = 0
# Visualize the spring constraints as curves.
if db.inputs.springVisualize and db.inputs.springIndexPairs.size > 0:
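        # Flatten the (N, 2) spring index pairs into a 1D array usable by the kernel.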
spring_indices = omni.warp.nodes.from_omni_graph(
db.inputs.springIndexPairs, dtype=int, shape=(db.inputs.springIndexPairs.size,)
)
# Create a new set of geometry curves within the output bundle.
omni.warp.nodes.basis_curves_create_bundle(
db.outputs.visualization,
len(db.inputs.springIndexPairs) * 2,
len(db.inputs.springIndexPairs),
type="linear",
xform=omni.warp.nodes.bundle_get_world_xform(db.outputs.cloth),
create_display_color=True,
create_widths=True,
child_idx=visualization_prim,
)
# Set the curve point positions by looking up the model's particles
# with the spring indices.
out_points = omni.warp.nodes.basis_curves_get_points(
db.outputs.visualization,
child_idx=visualization_prim,
)
wp.launch(
kernel=basis_curve_points_from_springs_kernel,
dim=len(out_points),
inputs=[
state.state_0.particle_q,
spring_indices,
],
outputs=[
out_points,
],
)
# Set the number of points per curve. Each curve represents a constraint
# between 2 points, so we set them all to a length of 2.
out_counts = omni.warp.nodes.basis_curves_get_curve_vertex_counts(
db.outputs.visualization,
child_idx=visualization_prim,
)
out_counts.fill_(2)
# Set the curve widths.
out_widths = omni.warp.nodes.basis_curves_get_widths(
db.outputs.visualization,
child_idx=visualization_prim,
)
out_widths.fill_(db.inputs.springVisualizeWidth)
# Set the curve colours.
out_colors = omni.warp.nodes.basis_curves_get_display_color(
db.outputs.visualization,
child_idx=visualization_prim,
)
out_colors.fill_(wp.vec3(db.inputs.springVisualizeColor))
# Store whether any visualization was last enabled.
state.visualization_enabled = True
# Store whether the simulation was last enabled.
state.sim_enabled = True
# Store the current time.
state.time = db.inputs.time
# Node Entry Point
# ------------------------------------------------------------------------------
class OgnClothSimulate:
"""Node."""
@staticmethod
def internal_state() -> InternalState:
return InternalState()
@staticmethod
def compute(db: OgnClothSimulateDatabase) -> None:
device = omni.warp.nodes.device_get_cuda_compute()
try:
with wp.ScopedDevice(device):
compute(db, device)
except Exception:
db.log_error(traceback.format_exc())
db.internal_state.is_valid = False
return
db.internal_state.is_valid = True
# Fire the execution for the downstream nodes.
db.outputs.execOut = og.ExecutionAttributeState.ENABLED
| 20,777 | Python | 31.314152 | 107 | 0.585551 |