.. omniverse-code/kit/exts/omni.syntheticdata/ogn/docs/OgnSdRenderVarPtr.rst

.. _omni_syntheticdata_SdRenderVarPtr_2:
.. _omni_syntheticdata_SdRenderVarPtr:
.. ================================================================================
.. THIS PAGE IS AUTO-GENERATED. DO NOT MANUALLY EDIT.
.. ================================================================================
:orphan:
.. meta::
:title: Sd Render Var Ptr
:keywords: lang-en omnigraph node graph:action syntheticdata sd-render-var-ptr
Sd Render Var Ptr
=================
.. <description>
Synthetic Data node exposing the raw pointer data of a renderVar.
.. </description>
Installation
------------
To use this node, enable :ref:`omni.syntheticdata<ext_omni_syntheticdata>` in the Extension Manager.
Inputs
------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"inputs:exec", "``execution``", "Trigger", "None"
"inputs:renderResults", "``uint64``", "Render results pointer", "0"
"inputs:renderVar", "``token``", "Name of the renderVar", ""
Outputs
-------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"outputs:bufferSize", "``uint64``", "Size (in bytes) of the buffer (0 if the input is a texture)", "None"
"outputs:cudaDeviceIndex", "``int``", "Index of the device where the data lives (-1 for host data)", "-1"
"outputs:dataPtr", "``uint64``", "Pointer to the raw data (cuda device pointer or host pointer)", "0"
"Received (*outputs:exec*)", "``execution``", "Executes when the event is received", "None"
"outputs:format", "``uint64``", "Format", "None"
"outputs:height", "``uint``", "Height (0 if the input is a buffer)", "None"
"outputs:strides", "``int[2]``", "Strides (in bytes) ([0,0] if the input is a buffer)", "None"
"outputs:width", "``uint``", "Width (0 if the input is a buffer)", "None"
Metadata
--------
.. csv-table::
:header: "Name", "Value"
:widths: 30,70
"Unique ID", "omni.syntheticdata.SdRenderVarPtr"
"Version", "2"
"Extension", "omni.syntheticdata"
"Has State?", "False"
"Implementation Language", "C++"
"Default Memory Type", "cpu"
"Generated Code Exclusions", "tests"
"Categories", "graph:action"
"Generated Class Name", "OgnSdRenderVarPtrDatabase"
"Python Module", "omni.syntheticdata"
.. omniverse-code/kit/exts/omni.syntheticdata/ogn/docs/OgnSdTestRenderProductCamera.rst

.. _omni_syntheticdata_SdTestRenderProductCamera_1:
.. _omni_syntheticdata_SdTestRenderProductCamera:
.. ================================================================================
.. THIS PAGE IS AUTO-GENERATED. DO NOT MANUALLY EDIT.
.. ================================================================================
:orphan:
.. meta::
:title: Sd Test Render Product Camera
:keywords: lang-en omnigraph node graph:simulation,graph:postRender,graph:action,internal:test syntheticdata sd-test-render-product-camera
Sd Test Render Product Camera
=============================
.. <description>
Synthetic Data node to test the renderProduct camera pipeline
.. </description>
Installation
------------
To use this node, enable :ref:`omni.syntheticdata<ext_omni_syntheticdata>` in the Extension Manager.
Inputs
------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"inputs:cameraApertureOffset", "``float[2]``", "Camera horizontal and vertical aperture offset", "[0.0, 0.0]"
"inputs:cameraApertureSize", "``float[2]``", "Camera horizontal and vertical aperture", "[0.0, 0.0]"
"inputs:cameraFStop", "``float``", "Camera fStop", "0.0"
"inputs:cameraFisheyeParams", "``float[]``", "Camera fisheye projection parameters", "[]"
"inputs:cameraFocalLength", "``float``", "Camera focal length", "0.0"
"inputs:cameraFocusDistance", "``float``", "Camera focus distance", "0.0"
"inputs:cameraModel", "``int``", "Camera model (pinhole or fisheye models)", "0"
"inputs:cameraNearFar", "``float[2]``", "Camera near/far clipping range", "[0.0, 0.0]"
"inputs:cameraProjection", "``matrixd[4]``", "Camera projection matrix", "[[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]]"
"inputs:cameraViewTransform", "``matrixd[4]``", "Camera view matrix", "[[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]]"
"inputs:exec", "``execution``", "Trigger", "None"
"inputs:height", "``uint``", "Height of the frame", "0"
"inputs:metersPerSceneUnit", "``float``", "Scene units to meters scale", "0.0"
"inputs:renderProductCameraPath", "``token``", "RenderProduct camera prim path", ""
"inputs:renderProductResolution", "``int[2]``", "RenderProduct resolution", "[0, 0]"
"inputs:renderResults", "``uint64``", "OnDemand connection : pointer to render product results", "0"
"renderProduct (*inputs:rp*)", "``uint64``", "PostRender connection : pointer to render product for this view", "0"
"inputs:stage", "``token``", "Stage in {simulation, postrender, ondemand}", ""
"inputs:traceError", "``bool``", "If true print an error message when the frame numbers are out-of-sync", "False"
"inputs:width", "``uint``", "Width of the frame", "0"
Outputs
-------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"outputs:test", "``bool``", "Test value : false if failed", "None"
Metadata
--------
.. csv-table::
:header: "Name", "Value"
:widths: 30,70
"Unique ID", "omni.syntheticdata.SdTestRenderProductCamera"
"Version", "1"
"Extension", "omni.syntheticdata"
"Has State?", "False"
"Implementation Language", "C++"
"Default Memory Type", "cpu"
"Generated Code Exclusions", "tests"
"__tokens", "[""simulation"", ""postRender"", ""onDemand""]"
"Categories", "graph:simulation,graph:postRender,graph:action,internal:test"
"Generated Class Name", "OgnSdTestRenderProductCameraDatabase"
"Python Module", "omni.syntheticdata"
.. omniverse-code/kit/exts/omni.syntheticdata/ogn/docs/OgnSdOnNewRenderProductFrame.rst

.. _omni_syntheticdata_SdOnNewRenderProductFrame_1:
.. _omni_syntheticdata_SdOnNewRenderProductFrame:
.. ================================================================================
.. THIS PAGE IS AUTO-GENERATED. DO NOT MANUALLY EDIT.
.. ================================================================================
:orphan:
.. meta::
:title: Sd On New Render Product Frame
:keywords: lang-en omnigraph node graph:action,flowControl syntheticdata sd-on-new-render-product-frame
Sd On New Render Product Frame
==============================
.. <description>
Synthetic Data postprocess node to execute pipeline after the NewFrame event has been received on the given renderProduct
.. </description>
Installation
------------
To use this node, enable :ref:`omni.syntheticdata<ext_omni_syntheticdata>` in the Extension Manager.
Inputs
------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"Received (*inputs:exec*)", "``execution``", "Executes for each newFrame event received", "None"
"inputs:renderProductDataPtrs", "``uint64[]``", "HydraRenderProduct data pointers.", "[]"
"inputs:renderProductPath", "``token``", "Path of the renderProduct to wait for being rendered", ""
"inputs:renderProductPaths", "``token[]``", "Render product path tokens.", "[]"
Outputs
-------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"outputs:cudaStream", "``uint64``", "Cuda stream", "None"
"Received (*outputs:exec*)", "``execution``", "Executes for each newFrame event received", "None"
"outputs:renderProductPath", "``token``", "Path of the renderProduct to wait for being rendered", "None"
"outputs:renderResults", "``uint64``", "Render results", "None"
Metadata
--------
.. csv-table::
:header: "Name", "Value"
:widths: 30,70
"Unique ID", "omni.syntheticdata.SdOnNewRenderProductFrame"
"Version", "1"
"Extension", "omni.syntheticdata"
"Has State?", "False"
"Implementation Language", "C++"
"Default Memory Type", "cpu"
"Generated Code Exclusions", "None"
"Categories", "graph:action,flowControl"
"Generated Class Name", "OgnSdOnNewRenderProductFrameDatabase"
"Python Module", "omni.syntheticdata"
.. omniverse-code/kit/exts/omni.syntheticdata/ogn/docs/OgnSdFabricTimeRangeExecution.rst

.. _omni_syntheticdata_SdFabricTimeRangeExecution_1:
.. _omni_syntheticdata_SdFabricTimeRangeExecution:
.. ================================================================================
.. THIS PAGE IS AUTO-GENERATED. DO NOT MANUALLY EDIT.
.. ================================================================================
:orphan:
.. meta::
:title: Sd Fabric Time Range Execution
:keywords: lang-en omnigraph node graph:postRender,graph:action syntheticdata sd-fabric-time-range-execution
Sd Fabric Time Range Execution
==============================
.. <description>
Read a rational time range from Fabric or RenderVars and signal its execution if the current time falls within this range. The range is [begin, end); that is, the end time does not belong to the range.
.. </description>
Installation
------------
To use this node, enable :ref:`omni.syntheticdata<ext_omni_syntheticdata>` in the Extension Manager.
Inputs
------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"inputs:exec", "``execution``", "Trigger", "None"
"inputs:gpu", "``uint64``", "Pointer to shared context containing gpu foundations.", "0"
"inputs:renderResults", "``uint64``", "Render results", "0"
"inputs:timeRangeBeginDenominatorToken", "``token``", "Attribute name of the range begin time denominator", "timeRangeStartDenominator"
"inputs:timeRangeBeginNumeratorToken", "``token``", "Attribute name of the range begin time numerator", "timeRangeStartNumerator"
"inputs:timeRangeEndDenominatorToken", "``token``", "Attribute name of the range end time denominator", "timeRangeEndDenominator"
"inputs:timeRangeEndNumeratorToken", "``token``", "Attribute name of the range end time numerator", "timeRangeEndNumerator"
"inputs:timeRangeName", "``token``", "Time range name used to read from the Fabric or RenderVars.", ""
Outputs
-------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"outputs:exec", "``execution``", "Trigger", "None"
"outputs:timeRangeBeginDenominator", "``uint64``", "Time denominator of the last time range change (begin)", "None"
"outputs:timeRangeBeginNumerator", "``int64``", "Time numerator of the last time range change (begin)", "None"
"outputs:timeRangeEndDenominator", "``uint64``", "Time denominator of the last time range change (end)", "None"
"outputs:timeRangeEndNumerator", "``int64``", "Time numerator of the last time range change (end)", "None"
Metadata
--------
.. csv-table::
:header: "Name", "Value"
:widths: 30,70
"Unique ID", "omni.syntheticdata.SdFabricTimeRangeExecution"
"Version", "1"
"Extension", "omni.syntheticdata"
"Has State?", "False"
"Implementation Language", "C++"
"Default Memory Type", "cpu"
"Generated Code Exclusions", "None"
"Categories", "graph:postRender,graph:action"
"Generated Class Name", "OgnSdFabricTimeRangeExecutionDatabase"
"Python Module", "omni.syntheticdata"
.. omniverse-code/kit/exts/omni.syntheticdata/ogn/docs/OgnSdRenderVarDisplayTexture.rst

.. _omni_syntheticdata_SdRenderVarDisplayTexture_2:
.. _omni_syntheticdata_SdRenderVarDisplayTexture:
.. ================================================================================
.. THIS PAGE IS AUTO-GENERATED. DO NOT MANUALLY EDIT.
.. ================================================================================
:orphan:
.. meta::
:title: Sd Render Var Display Texture
:keywords: lang-en omnigraph node graph:action,rendering,internal syntheticdata sd-render-var-display-texture
Sd Render Var Display Texture
=============================
.. <description>
Synthetic Data node to expose texture resource of a visualization render variable
.. </description>
Installation
------------
To use this node, enable :ref:`omni.syntheticdata<ext_omni_syntheticdata>` in the Extension Manager.
Inputs
------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"inputs:exec", "``execution``", "Trigger", "None"
"inputs:renderResults", "``uint64``", "Render results pointer", "0"
"inputs:renderVarDisplay", "``token``", "Name of the renderVar", ""
Outputs
-------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"outputs:cudaPtr", "``uint64``", "Display texture CUDA pointer", "None"
"outputs:exec", "``execution``", "Trigger", "None"
"outputs:format", "``uint64``", "Display texture format", "None"
"outputs:height", "``uint``", "Display texture height", "None"
"outputs:referenceTimeDenominator", "``uint64``", "Reference time represented as a rational number : denominator", "None"
"outputs:referenceTimeNumerator", "``int64``", "Reference time represented as a rational number : numerator", "None"
"outputs:rpResourcePtr", "``uint64``", "Display texture RpResource pointer", "None"
"outputs:width", "``uint``", "Display texture width", "None"
Metadata
--------
.. csv-table::
:header: "Name", "Value"
:widths: 30,70
"Unique ID", "omni.syntheticdata.SdRenderVarDisplayTexture"
"Version", "2"
"Extension", "omni.syntheticdata"
"Has State?", "False"
"Implementation Language", "C++"
"Default Memory Type", "cpu"
"Generated Code Exclusions", "None"
"Categories", "graph:action,rendering,internal"
"Generated Class Name", "OgnSdRenderVarDisplayTextureDatabase"
"Python Module", "omni.syntheticdata"
.. omniverse-code/kit/exts/omni.syntheticdata/ogn/docs/OgnSdPostRenderVarTextureToBuffer.rst

.. _omni_syntheticdata_SdPostRenderVarTextureToBuffer_1:
.. _omni_syntheticdata_SdPostRenderVarTextureToBuffer:
.. ================================================================================
.. THIS PAGE IS AUTO-GENERATED. DO NOT MANUALLY EDIT.
.. ================================================================================
:orphan:
.. meta::
:title: Sd Post Render Var Texture To Buffer
:keywords: lang-en omnigraph node graph:postRender,rendering syntheticdata sd-post-render-var-texture-to-buffer
Sd Post Render Var Texture To Buffer
====================================
.. <description>
Expose a device texture renderVar as a buffer renderVar.
.. </description>
Installation
------------
To use this node, enable :ref:`omni.syntheticdata<ext_omni_syntheticdata>` in the Extension Manager.
Inputs
------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"inputs:exec", "``execution``", "Trigger", "None"
"inputs:gpu", "``uint64``", "Pointer to shared context containing gpu foundations", "0"
"inputs:renderVar", "``token``", "Name of the device renderVar to expose on the host", ""
"inputs:renderVarBufferSuffix", "``string``", "Suffix appended to the renderVar name", "buffer"
"inputs:rp", "``uint64``", "Pointer to render product for this view", "0"
Outputs
-------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"outputs:exec", "``execution``", "Trigger", "None"
"outputs:renderVar", "``token``", "Name of the resulting renderVar on the host", "None"
Metadata
--------
.. csv-table::
:header: "Name", "Value"
:widths: 30,70
"Unique ID", "omni.syntheticdata.SdPostRenderVarTextureToBuffer"
"Version", "1"
"Extension", "omni.syntheticdata"
"Has State?", "False"
"Implementation Language", "C++"
"Default Memory Type", "cpu"
"Generated Code Exclusions", "None"
"Categories", "graph:postRender,rendering"
"Generated Class Name", "OgnSdPostRenderVarTextureToBufferDatabase"
"Python Module", "omni.syntheticdata"
.. omniverse-code/kit/exts/omni.syntheticdata/ogn/docs/OgnSdPostInstanceMapping.rst

.. _omni_syntheticdata_SdPostInstanceMapping_2:
.. _omni_syntheticdata_SdPostInstanceMapping:
.. ================================================================================
.. THIS PAGE IS AUTO-GENERATED. DO NOT MANUALLY EDIT.
.. ================================================================================
:orphan:
.. meta::
:title: Sd Post Instance Mapping
:keywords: lang-en omnigraph node graph:postRender,rendering syntheticdata sd-post-instance-mapping
Sd Post Instance Mapping
========================
.. <description>
Synthetic Data node to compute and store scene instances semantic hierarchy information
.. </description>
Installation
------------
To use this node, enable :ref:`omni.syntheticdata<ext_omni_syntheticdata>` in the Extension Manager.
Inputs
------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"inputs:exec", "``execution``", "Trigger", "None"
"gpuFoundations (*inputs:gpu*)", "``uint64``", "Pointer to shared context containing gpu foundations", "0"
"renderProduct (*inputs:rp*)", "``uint64``", "Pointer to render product for this view", "0"
"inputs:semanticFilterName", "``token``", "Name of the semantic filter to apply to the semanticLabelToken", "default"
Outputs
-------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"outputs:exec", "``execution``", "Trigger", "None"
"outputs:instanceMapSDCudaPtr", "``uint64``", "cuda uint16_t buffer pointer of size numInstances containing the instance parent semantic index", "None"
"outputs:instanceMappingInfoSDPtr", "``uint64``", "uint buffer pointer containing the following information : [ numInstances, minInstanceId, numSemantics, minSemanticId, numProtoSemantic, lastUpdateTimeNumeratorHigh, lastUpdateTimeNumeratorLow, , lastUpdateTimeDenominatorHigh, lastUpdateTimeDenominatorLow ]", "None"
"outputs:instancePrimTokenSDCudaPtr", "``uint64``", "cuda uint64_t buffer pointer of size numInstances containing the instance path token", "None"
"outputs:lastUpdateTimeDenominator", "``uint64``", "Time denominator of the last time the data has changed", "None"
"outputs:lastUpdateTimeNumerator", "``int64``", "Time numerator of the last time the data has changed", "None"
"outputs:semanticLabelTokenSDCudaPtr", "``uint64``", "cuda uint64_t buffer pointer of size numSemantics containing the semantic label token", "None"
"outputs:semanticLocalTransformSDCudaPtr", "``uint64``", "cuda float44 buffer pointer of size numSemantics containing the local semantic transform", "None"
"outputs:semanticMapSDCudaPtr", "``uint64``", "cuda uint16_t buffer pointer of size numSemantics containing the semantic parent semantic index", "None"
"outputs:semanticPrimTokenSDCudaPtr", "``uint64``", "cuda uint32_t buffer pointer of size numSemantics containing the prim part of the semantic path token", "None"
"outputs:semanticWorldTransformSDCudaPtr", "``uint64``", "cuda float44 buffer pointer of size numSemantics containing the world semantic transform", "None"
Metadata
--------
.. csv-table::
:header: "Name", "Value"
:widths: 30,70
"Unique ID", "omni.syntheticdata.SdPostInstanceMapping"
"Version", "2"
"Extension", "omni.syntheticdata"
"Has State?", "False"
"Implementation Language", "C++"
"Default Memory Type", "cpu"
"Generated Code Exclusions", "None"
"__tokens", "[""InstanceMappingInfoSDhost"", ""SemanticMapSD"", ""SemanticMapSDhost"", ""SemanticPrimTokenSD"", ""SemanticPrimTokenSDhost"", ""InstanceMapSD"", ""InstanceMapSDhost"", ""InstancePrimTokenSD"", ""InstancePrimTokenSDhost"", ""SemanticLabelTokenSD"", ""SemanticLabelTokenSDhost"", ""SemanticLocalTransformSD"", ""SemanticLocalTransformSDhost"", ""SemanticWorldTransformSD"", ""SemanticWorldTransformSDhost""]"
"Categories", "graph:postRender,rendering"
"Generated Class Name", "OgnSdPostInstanceMappingDatabase"
"Python Module", "omni.syntheticdata"
.. omniverse-code/kit/exts/omni.syntheticdata/ogn/docs/OgnSdInstanceMapping.rst

.. _omni_syntheticdata_SdInstanceMapping_2:
.. _omni_syntheticdata_SdInstanceMapping:
.. ================================================================================
.. THIS PAGE IS AUTO-GENERATED. DO NOT MANUALLY EDIT.
.. ================================================================================
:orphan:
.. meta::
:title: Sd Instance Mapping
:keywords: lang-en omnigraph node graph:action syntheticdata sd-instance-mapping
Sd Instance Mapping
===================
.. <description>
Synthetic Data node to expose the scene instances semantic hierarchy information
.. </description>
Installation
------------
To use this node, enable :ref:`omni.syntheticdata<ext_omni_syntheticdata>` in the Extension Manager.
Inputs
------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"inputs:exec", "``execution``", "Trigger", "None"
"inputs:lazy", "``bool``", "Compute outputs only when connected to a downstream node", "True"
"inputs:renderResults", "``uint64``", "Render results pointer", "0"
Outputs
-------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"Received (*outputs:exec*)", "``execution``", "Executes when the event is received", "None"
"outputs:sdIMInstanceSemanticMap", "``uchar[]``", "Raw array of uint16_t of size sdIMNumInstances*sdIMMaxSemanticHierarchyDepth containing the mapping from the instances index to their inherited semantic entities", "None"
"outputs:sdIMInstanceTokens", "``token[]``", "Instance array containing the token for every instances", "None"
"outputs:sdIMLastUpdateTimeDenominator", "``uint64``", "Time denominator of the last time the data has changed", "None"
"outputs:sdIMLastUpdateTimeNumerator", "``int64``", "Time numerator of the last time the data has changed", "None"
"outputs:sdIMMaxSemanticHierarchyDepth", "``uint``", "Maximal number of semantic entities inherited by an instance", "None"
"outputs:sdIMMinInstanceIndex", "``uint``", "Instance id of the first instance in the instance arrays", "None"
"outputs:sdIMMinSemanticIndex", "``uint``", "Semantic id of the first semantic entity in the semantic arrays", "None"
"outputs:sdIMNumInstances", "``uint``", "Number of instances in the instance arrays", "None"
"outputs:sdIMNumSemanticTokens", "``uint``", "Number of semantics token including the semantic entity path, the semantic entity types and if the number of semantic types is greater than one a ", "None"
"outputs:sdIMNumSemantics", "``uint``", "Number of semantic entities in the semantic arrays", "None"
"outputs:sdIMSemanticLocalTransform", "``float[]``", "Semantic array of 4x4 float matrices containing the transform from world to local space for every semantic entity", "None"
"outputs:sdIMSemanticTokenMap", "``token[]``", "Semantic array of token of size numSemantics * numSemanticTypes containing the mapping from the semantic entities to the semantic entity path and semantic types", "None"
"outputs:sdIMSemanticWorldTransform", "``float[]``", "Semantic array of 4x4 float matrices containing the transform from local to world space for every semantic entity", "None"
Metadata
--------
.. csv-table::
:header: "Name", "Value"
:widths: 30,70
"Unique ID", "omni.syntheticdata.SdInstanceMapping"
"Version", "2"
"Extension", "omni.syntheticdata"
"Has State?", "False"
"Implementation Language", "C++"
"Default Memory Type", "cpu"
"Generated Code Exclusions", "None"
"__tokens", "[""InstanceMappingInfoSDhost"", ""InstanceMapSDhost"", ""SemanticLabelTokenSDhost"", ""InstancePrimTokenSDhost"", ""SemanticLocalTransformSDhost"", ""SemanticWorldTransformSDhost""]"
"Categories", "graph:action"
"Generated Class Name", "OgnSdInstanceMappingDatabase"
"Python Module", "omni.syntheticdata"
.. omniverse-code/kit/exts/omni.syntheticdata/ogn/docs/OgnSdInstanceMappingPtr.rst

.. _omni_syntheticdata_SdInstanceMappingPtr_2:
.. _omni_syntheticdata_SdInstanceMappingPtr:
.. ================================================================================
.. THIS PAGE IS AUTO-GENERATED. DO NOT MANUALLY EDIT.
.. ================================================================================
:orphan:
.. meta::
:title: Sd Instance Mapping Ptr
:keywords: lang-en omnigraph node graph:action syntheticdata sd-instance-mapping-ptr
Sd Instance Mapping Ptr
=======================
.. <description>
Synthetic Data node to expose the scene instances semantic hierarchy information
.. </description>
Installation
------------
To use this node, enable :ref:`omni.syntheticdata<ext_omni_syntheticdata>` in the Extension Manager.
Inputs
------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"inputs:cudaPtr", "``bool``", "If true, return cuda device pointer instead of host pointer", "False"
"inputs:exec", "``execution``", "Trigger", "None"
"inputs:renderResults", "``uint64``", "Render results pointer", "0"
"inputs:semanticFilerTokens", "``token[]``", "Tokens identifying the semantic filters applied to the output semantic labels. Each token should correspond to an activated SdSemanticFilter node", "[]"
Outputs
-------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"outputs:cudaDeviceIndex", "``int``", "If the data is on the device it is the cuda index of this device otherwise it is set to -1", "-1"
"Received (*outputs:exec*)", "``execution``", "Executes when the event is received", "None"
"outputs:instanceMapPtr", "``uint64``", "Array pointer of numInstances uint16_t containing the semantic index of the instance prim first semantic prim parent", "None"
"outputs:instancePrimPathPtr", "``uint64``", "Array pointer of numInstances uint64_t containing the prim path tokens for every instance prims", "None"
"outputs:lastUpdateTimeDenominator", "``uint64``", "Time denominator of the last time the data has changed", "None"
"outputs:lastUpdateTimeNumerator", "``int64``", "Time numerator of the last time the data has changed", "None"
"outputs:minInstanceIndex", "``uint``", "Instance index of the first instance prim in the instance arrays", "None"
"outputs:minSemanticIndex", "``uint``", "Semantic index of the first semantic prim in the semantic arrays", "None"
"outputs:numInstances", "``uint``", "Number of instances prim in the instance arrays", "None"
"outputs:numSemantics", "``uint``", "Number of semantic prim in the semantic arrays", "None"
"outputs:semanticLabelTokenPtrs", "``uint64[]``", "Array containing for every input semantic filters the corresponding array pointer of numSemantics uint64_t representing the semantic label of the semantic prim", "None"
"outputs:semanticLocalTransformPtr", "``uint64``", "Array pointer of numSemantics 4x4 float matrices containing the transform from world to object space for every semantic prims", "None"
"outputs:semanticMapPtr", "``uint64``", "Array pointer of numSemantics uint16_t containing the semantic index of the semantic prim first semantic prim parent", "None"
"outputs:semanticPrimPathPtr", "``uint64``", "Array pointer of numSemantics uint32_t containing the prim part of the prim path tokens for every semantic prims", "None"
"outputs:semanticWorldTransformPtr", "``uint64``", "Array pointer of numSemantics 4x4 float matrices containing the transform from local to world space for every semantic entity", "None"
Metadata
--------
.. csv-table::
:header: "Name", "Value"
:widths: 30,70
"Unique ID", "omni.syntheticdata.SdInstanceMappingPtr"
"Version", "2"
"Extension", "omni.syntheticdata"
"Has State?", "False"
"Implementation Language", "C++"
"Default Memory Type", "cpu"
"Generated Code Exclusions", "None"
"__tokens", "[""InstanceMappingInfoSDhost"", ""InstancePrimTokenSDhost"", ""InstancePrimTokenSD"", ""SemanticPrimTokenSDhost"", ""SemanticPrimTokenSD"", ""InstanceMapSDhost"", ""InstanceMapSD"", ""SemanticMapSDhost"", ""SemanticMapSD"", ""SemanticWorldTransformSDhost"", ""SemanticWorldTransformSD"", ""SemanticLocalTransformSDhost"", ""SemanticLocalTransformSD"", ""SemanticLabelTokenSDhost"", ""SemanticLabelTokenSD""]"
"Categories", "graph:action"
"Generated Class Name", "OgnSdInstanceMappingPtrDatabase"
"Python Module", "omni.syntheticdata"
.. omniverse-code/kit/exts/omni.syntheticdata/ogn/docs/OgnSdTestSimFabricTimeRange.rst

.. _omni_syntheticdata_SdTestSimFabricTimeRange_1:
.. _omni_syntheticdata_SdTestSimFabricTimeRange:
.. ================================================================================
.. THIS PAGE IS AUTO-GENERATED. DO NOT MANUALLY EDIT.
.. ================================================================================
:orphan:
.. meta::
:title: Sd Test Sim Fabric Time Range
:keywords: lang-en omnigraph node graph:simulation,internal,event compute-on-request syntheticdata sd-test-sim-fabric-time-range
Sd Test Sim Fabric Time Range
=============================
.. <description>
Testing node: on request, write/update a Fabric time range of a given number of frames starting at the current simulation time.
.. </description>
Installation
------------
To use this node, enable :ref:`omni.syntheticdata<ext_omni_syntheticdata>` in the Extension Manager.
Inputs
------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"inputs:numberOfFrames", "``uint64``", "Number of frames to writes.", "0"
"inputs:timeRangeBeginDenominatorToken", "``token``", "Attribute name of the range begin time denominator", "timeRangeStartDenominator"
"inputs:timeRangeBeginNumeratorToken", "``token``", "Attribute name of the range begin time numerator", "timeRangeStartNumerator"
"inputs:timeRangeEndDenominatorToken", "``token``", "Attribute name of the range end time denominator", "timeRangeEndDenominator"
"inputs:timeRangeEndNumeratorToken", "``token``", "Attribute name of the range end time numerator", "timeRangeEndNumerator"
"inputs:timeRangeName", "``token``", "Time range name used to write to the Fabric.", "TestSimFabricTimeRangeSD"
Outputs
-------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"outputs:exec", "``execution``", "Trigger", "None"
Metadata
--------
.. csv-table::
:header: "Name", "Value"
:widths: 30,70
"Unique ID", "omni.syntheticdata.SdTestSimFabricTimeRange"
"Version", "1"
"Extension", "omni.syntheticdata"
"Has State?", "False"
"Implementation Language", "C++"
"Default Memory Type", "cpu"
"Generated Code Exclusions", "None"
"__tokens", "[""fc_exportToRingbuffer""]"
"Categories", "graph:simulation,internal,event"
"Generated Class Name", "OgnSdTestSimFabricTimeRangeDatabase"
"Python Module", "omni.syntheticdata"
.. omniverse-code/kit/exts/omni.syntheticdata/ogn/docs/OgnSdPostCompRenderVarTextures.rst

.. _omni_syntheticdata_SdPostCompRenderVarTextures_1:
.. _omni_syntheticdata_SdPostCompRenderVarTextures:
.. ================================================================================
.. THIS PAGE IS AUTO-GENERATED. DO NOT MANUALLY EDIT.
.. ================================================================================
:orphan:
.. meta::
:title: Sd Post Comp Render Var Textures
:keywords: lang-en omnigraph node graph:postRender,rendering,internal syntheticdata sd-post-comp-render-var-textures
Sd Post Comp Render Var Textures
================================
.. <description>
Synthetic Data node to compose a front renderVar texture into a back renderVar texture
.. </description>
Installation
------------
To use this node, enable :ref:`omni.syntheticdata<ext_omni_syntheticdata>` in the Extension Manager.
Inputs
------
.. csv-table::
:header: "Name", "Type", "Description", "Default"
:widths: 20, 20, 50, 10
"inputs:cudaPtr", "``uint64``", "Front texture CUDA pointer", "0"
"inputs:format", "``uint64``", "Front texture format", "0"
"gpuFoundations (*inputs:gpu*)", "``uint64``", "Pointer to shared context containing gpu foundations", "0"
"inputs:height", "``uint``", "Front texture height", "0"
"inputs:mode", "``token``", "Mode : grid, line", "line"
"inputs:parameters", "``float[3]``", "Parameters", "[0, 0, 0]"
"inputs:renderVar", "``token``", "Name of the back RenderVar", "LdrColor"
"renderProduct (*inputs:rp*)", "``uint64``", "Pointer to render product for this view", "0"
"inputs:width", "``uint``", "Front texture width", "0"
Metadata
--------
.. csv-table::
:header: "Name", "Value"
:widths: 30,70
"Unique ID", "omni.syntheticdata.SdPostCompRenderVarTextures"
"Version", "1"
"Extension", "omni.syntheticdata"
"Has State?", "False"
"Implementation Language", "C++"
"Default Memory Type", "cpu"
"Generated Code Exclusions", "None"
"__tokens", "[""line"", ""grid""]"
"Categories", "graph:postRender,rendering,internal"
"Generated Class Name", "OgnSdPostCompRenderVarTexturesDatabase"
"Python Module", "omni.syntheticdata"
omniverse-code/kit/exts/omni.syntheticdata/ogn/docs/OgnSdPostSemantic3dBoundingBoxFilter.rst
.. _omni_syntheticdata_SdPostSemantic3dBoundingBoxFilter_1:
.. _omni_syntheticdata_SdPostSemantic3dBoundingBoxFilter:
.. ================================================================================
.. THIS PAGE IS AUTO-GENERATED. DO NOT MANUALLY EDIT.
.. ================================================================================
:orphan:
.. meta::
:title: Sd Post Semantic3D Bounding Box Filter
:keywords: lang-en omnigraph node graph:postRender,rendering syntheticdata sd-post-semantic3d-bounding-box-filter
Sd Post Semantic3D Bounding Box Filter
======================================
.. <description>
Synthetic Data node to cull the semantic 3d bounding boxes.
.. </description>
Installation
------------
To use this node enable :ref:`omni.syntheticdata<ext_omni_syntheticdata>` in the Extension Manager.
Inputs
------
.. csv-table::
:header: "Name", "Type", "Descripton", "Default"
:widths: 20, 20, 50, 10
"inputs:exec", "``execution``", "Trigger", "None"
"gpuFoundations (*inputs:gpu*)", "``uint64``", "Pointer to shared context containing gpu foundations", "0"
"inputs:instanceMappingInfoSDPtr", "``uint64``", "uint buffer pointer containing the following information : [numInstances, minInstanceId, numSemantics, minSemanticId, numProtoSemantic]", "0"
"inputs:metersPerSceneUnit", "``float``", "Scene units to meters scale", "0.01"
"renderProduct (*inputs:rp*)", "``uint64``", "Pointer to render product for this view", "0"
"inputs:sdSemBBox3dCamCornersCudaPtr", "``uint64``", "Cuda buffer containing the projection of the 3d bounding boxes on the camera plane represented as a float3=(u,v,z,a) for each bounding box corners", "0"
"inputs:sdSemBBoxInfosCudaPtr", "``uint64``", "Cuda buffer containing valid bounding boxes infos", "0"
"inputs:viewportNearFar", "``float[2]``", "near and far plane (in scene units) used to clip the 3d bounding boxes.", "[0.0, -1.0]"
Outputs
-------
.. csv-table::
:header: "Name", "Type", "Descripton", "Default"
:widths: 20, 20, 50, 10
"outputs:exec", "``execution``", "Trigger", "None"
"outputs:sdSemBBoxInfosCudaPtr", "``uint64``", "Cuda buffer containing valid bounding boxes infos", "None"
Metadata
--------
.. csv-table::
:header: "Name", "Value"
:widths: 30,70
"Unique ID", "omni.syntheticdata.SdPostSemantic3dBoundingBoxFilter"
"Version", "1"
"Extension", "omni.syntheticdata"
"Has State?", "False"
"Implementation Language", "C++"
"Default Memory Type", "cpu"
"Generated Code Exclusions", "None"
"__tokens", "[""SemanticBoundingBox3DInfosSD"", ""SemanticBoundingBox3DFilterInfosSD""]"
"Categories", "graph:postRender,rendering"
"Generated Class Name", "OgnSdPostSemantic3dBoundingBoxFilterDatabase"
"Python Module", "omni.syntheticdata"
omniverse-code/kit/exts/omni.syntheticdata/ogn/docs/OgnSdSimInstanceMapping.rst
.. _omni_syntheticdata_SdSimInstanceMapping_1:
.. _omni_syntheticdata_SdSimInstanceMapping:
.. ================================================================================
.. THIS PAGE IS AUTO-GENERATED. DO NOT MANUALLY EDIT.
.. ================================================================================
:orphan:
.. meta::
:title: Sd Sim Instance Mapping
:keywords: lang-en omnigraph node graph:simulation,internal syntheticdata sd-sim-instance-mapping
Sd Sim Instance Mapping
=======================
.. <description>
Synthetic Data node to update and cache the instance mapping data
.. </description>
Installation
------------
To use this node enable :ref:`omni.syntheticdata<ext_omni_syntheticdata>` in the Extension Manager.
Inputs
------
.. csv-table::
:header: "Name", "Type", "Descripton", "Default"
:widths: 20, 20, 50, 10
"inputs:needTransform", "``bool``", "If true compute the semantic entities world and object transforms", "True"
"inputs:semanticFilterPredicate", "``token``", "The semantic filter predicate : a disjunctive normal form of semantic type and label", "*:*"
Outputs
-------
.. csv-table::
:header: "Name", "Type", "Descripton", "Default"
:widths: 20, 20, 50, 10
"outputs:exec", "``execution``", "Trigger", "None"
"outputs:semanticFilterPredicate", "``token``", "The semantic filter predicate in normalized form", "None"
Metadata
--------
.. csv-table::
:header: "Name", "Value"
:widths: 30,70
"Unique ID", "omni.syntheticdata.SdSimInstanceMapping"
"Version", "1"
"Extension", "omni.syntheticdata"
"Has State?", "False"
"Implementation Language", "C++"
"Default Memory Type", "cpu"
"Generated Code Exclusions", "None"
"Categories", "graph:simulation,internal"
"Generated Class Name", "OgnSdSimInstanceMappingDatabase"
"Python Module", "omni.syntheticdata"
omniverse-code/kit/exts/omni.syntheticdata/ogn/docs/OgnSdFrameIdentifier.rst
.. _omni_syntheticdata_SdFrameIdentifier_1:
.. _omni_syntheticdata_SdFrameIdentifier:
.. ================================================================================
.. THIS PAGE IS AUTO-GENERATED. DO NOT MANUALLY EDIT.
.. ================================================================================
:orphan:
.. meta::
:title: Sd Frame Identifier
:keywords: lang-en omnigraph node graph:postRender,graph:action syntheticdata sd-frame-identifier
Sd Frame Identifier
===================
.. <description>
Synthetic Data node to expose pipeline frame identifier.
.. </description>
Installation
------------
To use this node enable :ref:`omni.syntheticdata<ext_omni_syntheticdata>` in the Extension Manager.
Inputs
------
.. csv-table::
:header: "Name", "Type", "Descripton", "Default"
:widths: 20, 20, 50, 10
"inputs:exec", "``execution``", "Trigger", "None"
"inputs:renderResults", "``uint64``", "Render results", "0"
Outputs
-------
.. csv-table::
:header: "Name", "Type", "Descripton", "Default"
:widths: 20, 20, 50, 10
"outputs:durationDenominator", "``uint64``", "Duration denominator. Only valid if eConstantFramerateFrameNumber", "0"
"outputs:durationNumerator", "``int64``", "Duration numerator. Only valid if eConstantFramerateFrameNumber.", "0"
"Received (*outputs:exec*)", "``execution``", "Executes for each newFrame event received", "None"
"outputs:externalTimeOfSimNs", "``int64``", "External time in Ns. Only valid if eConstantFramerateFrameNumber.", "-1"
"outputs:frameNumber", "``int64``", "Frame number. Valid if eFrameNumber or eConstantFramerateFrameNumber.", "-1"
"outputs:rationalTimeOfSimDenominator", "``uint64``", "rational time of simulation denominator.", "0"
"outputs:rationalTimeOfSimNumerator", "``int64``", "rational time of simulation numerator.", "0"
"outputs:sampleTimeOffsetInSimFrames", "``uint64``", "Sample time offset. Only valid if eConstantFramerateFrameNumber.", "0"
"outputs:type", "``token``", "Type of the frame identifier.", "NoFrameNumber"
"", "*allowedTokens*", "NoFrameNumber,FrameNumber,ConstantFramerateFrameNumber", ""
Metadata
--------
.. csv-table::
:header: "Name", "Value"
:widths: 30,70
"Unique ID", "omni.syntheticdata.SdFrameIdentifier"
"Version", "1"
"Extension", "omni.syntheticdata"
"Has State?", "False"
"Implementation Language", "C++"
"Default Memory Type", "cpu"
"Generated Code Exclusions", "None"
"Categories", "graph:postRender,graph:action"
"Generated Class Name", "OgnSdFrameIdentifierDatabase"
"Python Module", "omni.syntheticdata"
omniverse-code/kit/exts/omni.syntheticdata/ogn/docs/OgnSdTestPrintRawArray.rst
.. _omni_syntheticdata_SdTestPrintRawArray_1:
.. _omni_syntheticdata_SdTestPrintRawArray:
.. ================================================================================
.. THIS PAGE IS AUTO-GENERATED. DO NOT MANUALLY EDIT.
.. ================================================================================
:orphan:
.. meta::
:title: Sd Test Print Raw Array
:keywords: lang-en omnigraph node graph:action,internal:test syntheticdata sd-test-print-raw-array
Sd Test Print Raw Array
=======================
.. <description>
Synthetic Data test node printing the input linear array
.. </description>
Installation
------------
To use this node enable :ref:`omni.syntheticdata<ext_omni_syntheticdata>` in the Extension Manager.
Inputs
------
.. csv-table::
:header: "Name", "Type", "Descripton", "Default"
:widths: 20, 20, 50, 10
"inputs:bufferSize", "``uint``", "Size (in bytes) of the buffer (0 if the input is a texture)", "0"
"inputs:data", "``uchar[]``", "Buffer array data", "[]"
"inputs:dataFileBaseName", "``token``", "Basename of the output npy file", "/tmp/sdTestRawArray"
"inputs:elementCount", "``int``", "Number of array element", "1"
"inputs:elementType", "``token``", "Type of the array element", "uint8"
"inputs:exec", "``execution``", "Trigger", "None"
"inputs:height", "``uint``", "Height (0 if the input is a buffer)", "0"
"inputs:mode", "``token``", "Mode in [printFormatted, printReferences, testReferences]", "printFormatted"
"inputs:randomSeed", "``int``", "Random seed", "0"
"inputs:referenceNumUniqueRandomValues", "``int``", "Number of reference unique random values to compare", "7"
"inputs:referenceSWHFrameNumbers", "``uint[]``", "Reference swhFrameNumbers relative to the first one", "[11, 17, 29]"
"inputs:referenceTolerance", "``float``", "Reference tolerance", "0.1"
"inputs:referenceValues", "``float[]``", "Reference data point values", "[]"
"inputs:swhFrameNumber", "``uint64``", "Frame number", "0"
"inputs:width", "``uint``", "Width (0 if the input is a buffer)", "0"
Outputs
-------
.. csv-table::
:header: "Name", "Type", "Descripton", "Default"
:widths: 20, 20, 50, 10
"Received (*outputs:exec*)", "``execution``", "Executes when the event is received", "None"
"outputs:swhFrameNumber", "``uint64``", "FrameNumber just rendered", "None"
State
-----
.. csv-table::
:header: "Name", "Type", "Descripton", "Default"
:widths: 20, 20, 50, 10
"state:initialSWHFrameNumber", "``int64``", "Initial swhFrameNumber", "-1"
Metadata
--------
.. csv-table::
:header: "Name", "Value"
:widths: 30,70
"Unique ID", "omni.syntheticdata.SdTestPrintRawArray"
"Version", "1"
"Extension", "omni.syntheticdata"
"Has State?", "True"
"Implementation Language", "Python"
"Default Memory Type", "cpu"
"Generated Code Exclusions", "tests"
"__tokens", "[""uint16"", ""int16"", ""uint32"", ""int32"", ""float32"", ""token"", ""printFormatted"", ""printReferences"", ""writeToDisk""]"
"Categories", "graph:action,internal:test"
"Generated Class Name", "OgnSdTestPrintRawArrayDatabase"
"Python Module", "omni.syntheticdata"
omniverse-code/kit/exts/omni.syntheticdata/ogn/docs/OgnSdOnNewFrame.rst
.. _omni_syntheticdata_SdOnNewFrame_1:
.. _omni_syntheticdata_SdOnNewFrame:
.. ================================================================================
.. THIS PAGE IS AUTO-GENERATED. DO NOT MANUALLY EDIT.
.. ================================================================================
:orphan:
.. meta::
:title: Sd On New Frame
:keywords: lang-en omnigraph node graph:action,flowControl syntheticdata sd-on-new-frame
Sd On New Frame
===============
.. <description>
Synthetic Data postprocess node to execute pipeline after the NewFrame event has been received
.. </description>
Installation
------------
To use this node enable :ref:`omni.syntheticdata<ext_omni_syntheticdata>` in the Extension Manager.
Outputs
-------
.. csv-table::
:header: "Name", "Type", "Descripton", "Default"
:widths: 20, 20, 50, 10
"outputs:cudaStream", "``uint64``", "Cuda stream", "None"
"outputs:exec", "``execution``", "Executes for each newFrame event received", "None"
"outputs:referenceTimeDenominator", "``uint64``", "Reference time represented as a rational number : denominator", "None"
"outputs:referenceTimeNumerator", "``int64``", "Reference time represented as a rational number : numerator", "None"
"outputs:renderProductDataPtrs", "``uint64[]``", "HydraRenderProduct data pointer.", "None"
"outputs:renderProductPaths", "``token[]``", "Render product path tokens.", "None"
Metadata
--------
.. csv-table::
:header: "Name", "Value"
:widths: 30,70
"Unique ID", "omni.syntheticdata.SdOnNewFrame"
"Version", "1"
"Extension", "omni.syntheticdata"
"Has State?", "False"
"Implementation Language", "C++"
"Default Memory Type", "cpu"
"Generated Code Exclusions", "None"
"Categories", "graph:action,flowControl"
"Generated Class Name", "OgnSdOnNewFrameDatabase"
"Python Module", "omni.syntheticdata"
omniverse-code/kit/exts/omni.syntheticdata/ogn/docs/OgnSdTestStageManipulationScenarii.rst
.. _omni_syntheticdata_SdTestStageManipulationScenarii_1:
.. _omni_syntheticdata_SdTestStageManipulationScenarii:
.. ================================================================================
.. THIS PAGE IS AUTO-GENERATED. DO NOT MANUALLY EDIT.
.. ================================================================================
:orphan:
.. meta::
:title: Sd Test Stage Manipulation Scenarii
:keywords: lang-en omnigraph node graph:simulation,internal:test syntheticdata sd-test-stage-manipulation-scenarii
Sd Test Stage Manipulation Scenarii
===================================
.. <description>
Synthetic Data test node applying randomly some predefined stage manipulation scenarii
.. </description>
Installation
------------
To use this node enable :ref:`omni.syntheticdata<ext_omni_syntheticdata>` in the Extension Manager.
Inputs
------
.. csv-table::
:header: "Name", "Type", "Descripton", "Default"
:widths: 20, 20, 50, 10
"inputs:randomSeed", "``int``", "Random seed", "0"
"inputs:worldPrimPath", "``token``", "Path of the world prim : contains every modifiable prim, cannot be modified", ""
State
-----
.. csv-table::
:header: "Name", "Type", "Descripton", "Default"
:widths: 20, 20, 50, 10
"state:frameNumber", "``uint64``", "Current frameNumber (number of invocations)", "0"
Metadata
--------
.. csv-table::
:header: "Name", "Value"
:widths: 30,70
"Unique ID", "omni.syntheticdata.SdTestStageManipulationScenarii"
"Version", "1"
"Extension", "omni.syntheticdata"
"Has State?", "True"
"Implementation Language", "Python"
"Default Memory Type", "cpu"
"Generated Code Exclusions", "tests"
"Categories", "graph:simulation,internal:test"
"Generated Class Name", "OgnSdTestStageManipulationScenariiDatabase"
"Python Module", "omni.syntheticdata"
omniverse-code/kit/exts/omni.syntheticdata/ogn/docs/OgnSdRenderProductCamera.rst
.. _omni_syntheticdata_SdRenderProductCamera_2:
.. _omni_syntheticdata_SdRenderProductCamera:
.. ================================================================================
.. THIS PAGE IS AUTO-GENERATED. DO NOT MANUALLY EDIT.
.. ================================================================================
:orphan:
.. meta::
:title: Sd Render Product Camera
:keywords: lang-en omnigraph node graph:postRender,graph:action syntheticdata sd-render-product-camera
Sd Render Product Camera
========================
.. <description>
Synthetic Data node to expose the camera data
.. </description>
Installation
------------
To use this node enable :ref:`omni.syntheticdata<ext_omni_syntheticdata>` in the Extension Manager.
Inputs
------
.. csv-table::
:header: "Name", "Type", "Descripton", "Default"
:widths: 20, 20, 50, 10
"inputs:exec", "``execution``", "Trigger", "None"
"inputs:gpu", "``uint64``", "Pointer to shared context containing gpu foundations.", "0"
"inputs:renderProductPath", "``token``", "RenderProduct prim path", ""
"inputs:renderResults", "``uint64``", "Render results", "0"
Outputs
-------
.. csv-table::
:header: "Name", "Type", "Descripton", "Default"
:widths: 20, 20, 50, 10
"outputs:cameraApertureOffset", "``float[2]``", "Camera horizontal and vertical aperture offset", "None"
"outputs:cameraApertureSize", "``float[2]``", "Camera horizontal and vertical aperture", "None"
"outputs:cameraFStop", "``float``", "Camera fStop", "None"
"outputs:cameraFisheyeParams", "``float[]``", "Camera fisheye projection parameters", "None"
"outputs:cameraFocalLength", "``float``", "Camera focal length", "None"
"outputs:cameraFocusDistance", "``float``", "Camera focus distance", "None"
"outputs:cameraModel", "``int``", "Camera model (pinhole or fisheye models)", "None"
"outputs:cameraNearFar", "``float[2]``", "Camera near/far clipping range", "None"
"outputs:cameraProjection", "``matrixd[4]``", "Camera projection matrix", "None"
"outputs:cameraViewTransform", "``matrixd[4]``", "Camera view matrix", "None"
"Received (*outputs:exec*)", "``execution``", "Executes for each newFrame event received", "None"
"outputs:metersPerSceneUnit", "``float``", "Scene units to meters scale", "None"
"outputs:renderProductResolution", "``int[2]``", "RenderProduct resolution", "None"
Metadata
--------
.. csv-table::
:header: "Name", "Value"
:widths: 30,70
"Unique ID", "omni.syntheticdata.SdRenderProductCamera"
"Version", "2"
"Extension", "omni.syntheticdata"
"Has State?", "False"
"Implementation Language", "C++"
"Default Memory Type", "cpu"
"Generated Code Exclusions", "None"
"__tokens", "[""RenderProductCameraSD""]"
"Categories", "graph:postRender,graph:action"
"Generated Class Name", "OgnSdRenderProductCameraDatabase"
"Python Module", "omni.syntheticdata"
omniverse-code/kit/exts/omni.syntheticdata/omni/syntheticdata/ogn/OgnSdTestStageSynchronizationDatabase.py
"""Support for simplified access to data on nodes of type omni.syntheticdata.SdTestStageSynchronization
Synthetic Data node to test the pipeline stage synchronization
"""
import omni.graph.core as og
import omni.graph.core._omni_graph_core as _og
import omni.graph.tools.ogn as ogn
class OgnSdTestStageSynchronizationDatabase(og.Database):
"""Helper class providing simplified access to data on nodes of type omni.syntheticdata.SdTestStageSynchronization
Class Members:
node: Node being evaluated
Attribute Value Properties:
Inputs:
inputs.exec
inputs.gpu
inputs.randomMaxProcessingTimeUs
inputs.randomSeed
inputs.renderResults
inputs.rp
inputs.swhFrameNumber
inputs.tag
inputs.traceError
Outputs:
outputs.exec
outputs.fabricSWHFrameNumber
outputs.swhFrameNumber
"""
# Imprint the generator and target ABI versions in the file for JIT generation
GENERATOR_VERSION = (1, 41, 3)
TARGET_VERSION = (2, 139, 12)
# This is an internal object that provides per-class storage of a per-node data dictionary
PER_NODE_DATA = {}
# This is an internal object that describes unchanging attributes in a generic way
# The values in this list are in no particular order, as a per-attribute tuple
# Name, Type, ExtendedTypeIndex, UiName, Description, Metadata,
# Is_Required, DefaultValue, Is_Deprecated, DeprecationMsg
# You should not need to access any of this data directly, use the defined database interfaces
INTERFACE = og.Database._get_interface([
('inputs:exec', 'execution', 0, None, 'OnDemand connection : trigger', {}, True, None, False, ''),
('inputs:gpu', 'uint64', 0, 'gpuFoundations', 'PostRender connection : pointer to shared context containing gpu foundations', {}, True, 0, False, ''),
('inputs:randomMaxProcessingTimeUs', 'uint', 0, None, 'Maximum number of microseconds to randomly (uniformly) wait for in order to simulate varying workload', {ogn.MetadataKeys.DEFAULT: '0'}, True, 0, False, ''),
('inputs:randomSeed', 'uint', 0, None, 'Random seed for the randomization', {ogn.MetadataKeys.DEFAULT: '0'}, True, 0, False, ''),
('inputs:renderResults', 'uint64', 0, None, 'OnDemand connection : pointer to render product results', {}, True, 0, False, ''),
('inputs:rp', 'uint64', 0, 'renderProduct', 'PostRender connection : pointer to render product for this view', {}, True, 0, False, ''),
('inputs:swhFrameNumber', 'uint64', 0, None, 'Fabric frame number', {}, True, 0, False, ''),
('inputs:tag', 'token', 0, None, 'A tag to identify the node', {}, True, "", False, ''),
('inputs:traceError', 'bool', 0, None, 'If true print an error message when the frame numbers are out-of-sync', {ogn.MetadataKeys.DEFAULT: 'false'}, True, False, False, ''),
('outputs:exec', 'execution', 0, None, 'OnDemand connection : trigger', {}, True, None, False, ''),
('outputs:fabricSWHFrameNumber', 'uint64', 0, None, 'Fabric frame number from the fabric', {}, True, None, False, ''),
('outputs:swhFrameNumber', 'uint64', 0, None, 'Fabric frame number', {}, True, None, False, ''),
])
@classmethod
def _populate_role_data(cls):
"""Populate a role structure with the non-default roles on this node type"""
role_data = super()._populate_role_data()
role_data.inputs.exec = og.AttributeRole.EXECUTION
role_data.outputs.exec = og.AttributeRole.EXECUTION
return role_data
class ValuesForInputs(og.DynamicAttributeAccess):
LOCAL_PROPERTY_NAMES = { }
"""Helper class that creates natural hierarchical access to input attributes"""
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
self._batchedReadAttributes = []
self._batchedReadValues = []
@property
def exec(self):
data_view = og.AttributeValueHelper(self._attributes.exec)
return data_view.get()
@exec.setter
def exec(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.exec)
data_view = og.AttributeValueHelper(self._attributes.exec)
data_view.set(value)
@property
def gpu(self):
data_view = og.AttributeValueHelper(self._attributes.gpu)
return data_view.get()
@gpu.setter
def gpu(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.gpu)
data_view = og.AttributeValueHelper(self._attributes.gpu)
data_view.set(value)
@property
def randomMaxProcessingTimeUs(self):
data_view = og.AttributeValueHelper(self._attributes.randomMaxProcessingTimeUs)
return data_view.get()
@randomMaxProcessingTimeUs.setter
def randomMaxProcessingTimeUs(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.randomMaxProcessingTimeUs)
data_view = og.AttributeValueHelper(self._attributes.randomMaxProcessingTimeUs)
data_view.set(value)
@property
def randomSeed(self):
data_view = og.AttributeValueHelper(self._attributes.randomSeed)
return data_view.get()
@randomSeed.setter
def randomSeed(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.randomSeed)
data_view = og.AttributeValueHelper(self._attributes.randomSeed)
data_view.set(value)
@property
def renderResults(self):
data_view = og.AttributeValueHelper(self._attributes.renderResults)
return data_view.get()
@renderResults.setter
def renderResults(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.renderResults)
data_view = og.AttributeValueHelper(self._attributes.renderResults)
data_view.set(value)
@property
def rp(self):
data_view = og.AttributeValueHelper(self._attributes.rp)
return data_view.get()
@rp.setter
def rp(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.rp)
data_view = og.AttributeValueHelper(self._attributes.rp)
data_view.set(value)
@property
def swhFrameNumber(self):
data_view = og.AttributeValueHelper(self._attributes.swhFrameNumber)
return data_view.get()
@swhFrameNumber.setter
def swhFrameNumber(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.swhFrameNumber)
data_view = og.AttributeValueHelper(self._attributes.swhFrameNumber)
data_view.set(value)
@property
def tag(self):
data_view = og.AttributeValueHelper(self._attributes.tag)
return data_view.get()
@tag.setter
def tag(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.tag)
data_view = og.AttributeValueHelper(self._attributes.tag)
data_view.set(value)
@property
def traceError(self):
data_view = og.AttributeValueHelper(self._attributes.traceError)
return data_view.get()
@traceError.setter
def traceError(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.traceError)
data_view = og.AttributeValueHelper(self._attributes.traceError)
data_view.set(value)
def _prefetch(self):
readAttributes = self._batchedReadAttributes
newValues = _og._prefetch_input_attributes_data(readAttributes)
if len(readAttributes) == len(newValues):
self._batchedReadValues = newValues
class ValuesForOutputs(og.DynamicAttributeAccess):
LOCAL_PROPERTY_NAMES = { }
"""Helper class that creates natural hierarchical access to output attributes"""
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
self._batchedWriteValues = { }
@property
def exec(self):
data_view = og.AttributeValueHelper(self._attributes.exec)
return data_view.get()
@exec.setter
def exec(self, value):
data_view = og.AttributeValueHelper(self._attributes.exec)
data_view.set(value)
@property
def fabricSWHFrameNumber(self):
data_view = og.AttributeValueHelper(self._attributes.fabricSWHFrameNumber)
return data_view.get()
@fabricSWHFrameNumber.setter
def fabricSWHFrameNumber(self, value):
data_view = og.AttributeValueHelper(self._attributes.fabricSWHFrameNumber)
data_view.set(value)
@property
def swhFrameNumber(self):
data_view = og.AttributeValueHelper(self._attributes.swhFrameNumber)
return data_view.get()
@swhFrameNumber.setter
def swhFrameNumber(self, value):
data_view = og.AttributeValueHelper(self._attributes.swhFrameNumber)
data_view.set(value)
def _commit(self):
_og._commit_output_attributes_data(self._batchedWriteValues)
self._batchedWriteValues = { }
class ValuesForState(og.DynamicAttributeAccess):
"""Helper class that creates natural hierarchical access to state attributes"""
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
def __init__(self, node):
super().__init__(node)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_INPUT)
self.inputs = OgnSdTestStageSynchronizationDatabase.ValuesForInputs(node, self.attributes.inputs, dynamic_attributes)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_OUTPUT)
self.outputs = OgnSdTestStageSynchronizationDatabase.ValuesForOutputs(node, self.attributes.outputs, dynamic_attributes)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_STATE)
self.state = OgnSdTestStageSynchronizationDatabase.ValuesForState(node, self.attributes.state, dynamic_attributes)
omniverse-code/kit/exts/omni.syntheticdata/omni/syntheticdata/ogn/OgnSdFabricTimeRangeExecutionDatabase.py
"""Support for simplified access to data on nodes of type omni.syntheticdata.SdFabricTimeRangeExecution
Read a rational time range from Fabric or RenderVars and signal its execution if the current time fall within this range.
The range is [begin, end), that is, the end time does not belong to the range.
"""
import omni.graph.core as og
import omni.graph.core._omni_graph_core as _og
import omni.graph.tools.ogn as ogn
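The half-open range semantics stated in the module docstring ([begin, end) over rational times) can be checked exactly by cross-multiplication. The helper below is an illustrative sketch, not part of the generated class:

```python
def time_in_range(t_num, t_den, begin_num, begin_den, end_num, end_den):
    """True if rational time t_num/t_den lies in [begin, end).

    Assumes positive denominators, so the comparisons can be cross-multiplied
    without floating-point error.
    """
    after_begin = t_num * begin_den >= begin_num * t_den
    before_end = t_num * end_den < end_num * t_den
    return after_begin and before_end

print(time_in_range(1, 2, 0, 1, 1, 1))  # → True   (1/2 is inside [0, 1))
print(time_in_range(1, 1, 0, 1, 1, 1))  # → False  (the end time is excluded)
```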
class OgnSdFabricTimeRangeExecutionDatabase(og.Database):
"""Helper class providing simplified access to data on nodes of type omni.syntheticdata.SdFabricTimeRangeExecution
Class Members:
node: Node being evaluated
Attribute Value Properties:
Inputs:
inputs.exec
inputs.gpu
inputs.renderResults
inputs.timeRangeBeginDenominatorToken
inputs.timeRangeBeginNumeratorToken
inputs.timeRangeEndDenominatorToken
inputs.timeRangeEndNumeratorToken
inputs.timeRangeName
Outputs:
outputs.exec
outputs.timeRangeBeginDenominator
outputs.timeRangeBeginNumerator
outputs.timeRangeEndDenominator
outputs.timeRangeEndNumerator
"""
# Imprint the generator and target ABI versions in the file for JIT generation
GENERATOR_VERSION = (1, 41, 3)
TARGET_VERSION = (2, 139, 12)
# This is an internal object that provides per-class storage of a per-node data dictionary
PER_NODE_DATA = {}
# This is an internal object that describes unchanging attributes in a generic way
# The values in this list are in no particular order, as a per-attribute tuple
# Name, Type, ExtendedTypeIndex, UiName, Description, Metadata,
# Is_Required, DefaultValue, Is_Deprecated, DeprecationMsg
# You should not need to access any of this data directly, use the defined database interfaces
INTERFACE = og.Database._get_interface([
('inputs:exec', 'execution', 0, None, 'Trigger', {}, True, None, False, ''),
('inputs:gpu', 'uint64', 0, None, 'Pointer to shared context containing gpu foundations.', {}, True, 0, False, ''),
('inputs:renderResults', 'uint64', 0, None, 'Render results', {}, True, 0, False, ''),
('inputs:timeRangeBeginDenominatorToken', 'token', 0, None, 'Attribute name of the range begin time denominator', {ogn.MetadataKeys.DEFAULT: '"timeRangeStartDenominator"'}, True, "timeRangeStartDenominator", False, ''),
('inputs:timeRangeBeginNumeratorToken', 'token', 0, None, 'Attribute name of the range begin time numerator', {ogn.MetadataKeys.DEFAULT: '"timeRangeStartNumerator"'}, True, "timeRangeStartNumerator", False, ''),
('inputs:timeRangeEndDenominatorToken', 'token', 0, None, 'Attribute name of the range end time denominator', {ogn.MetadataKeys.DEFAULT: '"timeRangeEndDenominator"'}, True, "timeRangeEndDenominator", False, ''),
('inputs:timeRangeEndNumeratorToken', 'token', 0, None, 'Attribute name of the range end time numerator', {ogn.MetadataKeys.DEFAULT: '"timeRangeEndNumerator"'}, True, "timeRangeEndNumerator", False, ''),
('inputs:timeRangeName', 'token', 0, None, 'Time range name used to read from the Fabric or RenderVars.', {}, True, "", False, ''),
('outputs:exec', 'execution', 0, None, 'Trigger', {}, True, None, False, ''),
('outputs:timeRangeBeginDenominator', 'uint64', 0, None, 'Time denominator of the last time range change (begin)', {}, True, None, False, ''),
('outputs:timeRangeBeginNumerator', 'int64', 0, None, 'Time numerator of the last time range change (begin)', {}, True, None, False, ''),
('outputs:timeRangeEndDenominator', 'uint64', 0, None, 'Time denominator of the last time range change (end)', {}, True, None, False, ''),
('outputs:timeRangeEndNumerator', 'int64', 0, None, 'Time numerator of the last time range change (end)', {}, True, None, False, ''),
])
@classmethod
def _populate_role_data(cls):
"""Populate a role structure with the non-default roles on this node type"""
role_data = super()._populate_role_data()
role_data.inputs.exec = og.AttributeRole.EXECUTION
role_data.outputs.exec = og.AttributeRole.EXECUTION
return role_data
class ValuesForInputs(og.DynamicAttributeAccess):
LOCAL_PROPERTY_NAMES = { }
"""Helper class that creates natural hierarchical access to input attributes"""
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
self._batchedReadAttributes = []
self._batchedReadValues = []
@property
def exec(self):
data_view = og.AttributeValueHelper(self._attributes.exec)
return data_view.get()
@exec.setter
def exec(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.exec)
data_view = og.AttributeValueHelper(self._attributes.exec)
data_view.set(value)
@property
def gpu(self):
data_view = og.AttributeValueHelper(self._attributes.gpu)
return data_view.get()
@gpu.setter
def gpu(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.gpu)
data_view = og.AttributeValueHelper(self._attributes.gpu)
data_view.set(value)
@property
def renderResults(self):
data_view = og.AttributeValueHelper(self._attributes.renderResults)
return data_view.get()
@renderResults.setter
def renderResults(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.renderResults)
data_view = og.AttributeValueHelper(self._attributes.renderResults)
data_view.set(value)
@property
def timeRangeBeginDenominatorToken(self):
data_view = og.AttributeValueHelper(self._attributes.timeRangeBeginDenominatorToken)
return data_view.get()
@timeRangeBeginDenominatorToken.setter
def timeRangeBeginDenominatorToken(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.timeRangeBeginDenominatorToken)
data_view = og.AttributeValueHelper(self._attributes.timeRangeBeginDenominatorToken)
data_view.set(value)
@property
def timeRangeBeginNumeratorToken(self):
data_view = og.AttributeValueHelper(self._attributes.timeRangeBeginNumeratorToken)
return data_view.get()
@timeRangeBeginNumeratorToken.setter
def timeRangeBeginNumeratorToken(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.timeRangeBeginNumeratorToken)
data_view = og.AttributeValueHelper(self._attributes.timeRangeBeginNumeratorToken)
data_view.set(value)
@property
def timeRangeEndDenominatorToken(self):
data_view = og.AttributeValueHelper(self._attributes.timeRangeEndDenominatorToken)
return data_view.get()
@timeRangeEndDenominatorToken.setter
def timeRangeEndDenominatorToken(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.timeRangeEndDenominatorToken)
data_view = og.AttributeValueHelper(self._attributes.timeRangeEndDenominatorToken)
data_view.set(value)
@property
def timeRangeEndNumeratorToken(self):
data_view = og.AttributeValueHelper(self._attributes.timeRangeEndNumeratorToken)
return data_view.get()
@timeRangeEndNumeratorToken.setter
def timeRangeEndNumeratorToken(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.timeRangeEndNumeratorToken)
data_view = og.AttributeValueHelper(self._attributes.timeRangeEndNumeratorToken)
data_view.set(value)
@property
def timeRangeName(self):
data_view = og.AttributeValueHelper(self._attributes.timeRangeName)
return data_view.get()
@timeRangeName.setter
def timeRangeName(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.timeRangeName)
data_view = og.AttributeValueHelper(self._attributes.timeRangeName)
data_view.set(value)
def _prefetch(self):
readAttributes = self._batchedReadAttributes
newValues = _og._prefetch_input_attributes_data(readAttributes)
if len(readAttributes) == len(newValues):
self._batchedReadValues = newValues
class ValuesForOutputs(og.DynamicAttributeAccess):
        """Helper class that creates natural hierarchical access to output attributes"""
        LOCAL_PROPERTY_NAMES = { }
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
self._batchedWriteValues = { }
@property
def exec(self):
data_view = og.AttributeValueHelper(self._attributes.exec)
return data_view.get()
@exec.setter
def exec(self, value):
data_view = og.AttributeValueHelper(self._attributes.exec)
data_view.set(value)
@property
def timeRangeBeginDenominator(self):
data_view = og.AttributeValueHelper(self._attributes.timeRangeBeginDenominator)
return data_view.get()
@timeRangeBeginDenominator.setter
def timeRangeBeginDenominator(self, value):
data_view = og.AttributeValueHelper(self._attributes.timeRangeBeginDenominator)
data_view.set(value)
@property
def timeRangeBeginNumerator(self):
data_view = og.AttributeValueHelper(self._attributes.timeRangeBeginNumerator)
return data_view.get()
@timeRangeBeginNumerator.setter
def timeRangeBeginNumerator(self, value):
data_view = og.AttributeValueHelper(self._attributes.timeRangeBeginNumerator)
data_view.set(value)
@property
def timeRangeEndDenominator(self):
data_view = og.AttributeValueHelper(self._attributes.timeRangeEndDenominator)
return data_view.get()
@timeRangeEndDenominator.setter
def timeRangeEndDenominator(self, value):
data_view = og.AttributeValueHelper(self._attributes.timeRangeEndDenominator)
data_view.set(value)
@property
def timeRangeEndNumerator(self):
data_view = og.AttributeValueHelper(self._attributes.timeRangeEndNumerator)
return data_view.get()
@timeRangeEndNumerator.setter
def timeRangeEndNumerator(self, value):
data_view = og.AttributeValueHelper(self._attributes.timeRangeEndNumerator)
data_view.set(value)
def _commit(self):
_og._commit_output_attributes_data(self._batchedWriteValues)
self._batchedWriteValues = { }
class ValuesForState(og.DynamicAttributeAccess):
"""Helper class that creates natural hierarchical access to state attributes"""
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
def __init__(self, node):
super().__init__(node)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_INPUT)
self.inputs = OgnSdFabricTimeRangeExecutionDatabase.ValuesForInputs(node, self.attributes.inputs, dynamic_attributes)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_OUTPUT)
self.outputs = OgnSdFabricTimeRangeExecutionDatabase.ValuesForOutputs(node, self.attributes.outputs, dynamic_attributes)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_STATE)
self.state = OgnSdFabricTimeRangeExecutionDatabase.ValuesForState(node, self.attributes.state, dynamic_attributes)
# omniverse-code/kit/exts/omni.syntheticdata/omni/syntheticdata/ogn/OgnSdRenderVarDisplayTextureDatabase.py
"""Support for simplified access to data on nodes of type omni.syntheticdata.SdRenderVarDisplayTexture
Synthetic Data node to expose texture resource of a visualization render variable
"""
import omni.graph.core as og
import omni.graph.core._omni_graph_core as _og
import omni.graph.tools.ogn as ogn
class OgnSdRenderVarDisplayTextureDatabase(og.Database):
"""Helper class providing simplified access to data on nodes of type omni.syntheticdata.SdRenderVarDisplayTexture
Class Members:
node: Node being evaluated
Attribute Value Properties:
Inputs:
inputs.exec
inputs.renderResults
inputs.renderVarDisplay
Outputs:
outputs.cudaPtr
outputs.exec
outputs.format
outputs.height
outputs.referenceTimeDenominator
outputs.referenceTimeNumerator
outputs.rpResourcePtr
outputs.width
"""
# Imprint the generator and target ABI versions in the file for JIT generation
GENERATOR_VERSION = (1, 41, 3)
TARGET_VERSION = (2, 139, 12)
# This is an internal object that provides per-class storage of a per-node data dictionary
PER_NODE_DATA = {}
# This is an internal object that describes unchanging attributes in a generic way
# The values in this list are in no particular order, as a per-attribute tuple
# Name, Type, ExtendedTypeIndex, UiName, Description, Metadata,
# Is_Required, DefaultValue, Is_Deprecated, DeprecationMsg
# You should not need to access any of this data directly, use the defined database interfaces
INTERFACE = og.Database._get_interface([
('inputs:exec', 'execution', 0, None, 'Trigger', {}, True, None, False, ''),
('inputs:renderResults', 'uint64', 0, None, 'Render results pointer', {}, True, 0, False, ''),
('inputs:renderVarDisplay', 'token', 0, None, 'Name of the renderVar', {}, True, "", False, ''),
('outputs:cudaPtr', 'uint64', 0, None, 'Display texture CUDA pointer', {}, True, None, False, ''),
('outputs:exec', 'execution', 0, None, 'Trigger', {}, True, None, False, ''),
('outputs:format', 'uint64', 0, None, 'Display texture format', {}, True, None, False, ''),
('outputs:height', 'uint', 0, None, 'Display texture height', {}, True, None, False, ''),
('outputs:referenceTimeDenominator', 'uint64', 0, None, 'Reference time represented as a rational number : denominator', {}, True, None, False, ''),
('outputs:referenceTimeNumerator', 'int64', 0, None, 'Reference time represented as a rational number : numerator', {}, True, None, False, ''),
('outputs:rpResourcePtr', 'uint64', 0, None, 'Display texture RpResource pointer', {}, True, None, False, ''),
('outputs:width', 'uint', 0, None, 'Display texture width', {}, True, None, False, ''),
])
@classmethod
def _populate_role_data(cls):
"""Populate a role structure with the non-default roles on this node type"""
role_data = super()._populate_role_data()
role_data.inputs.exec = og.AttributeRole.EXECUTION
role_data.outputs.exec = og.AttributeRole.EXECUTION
return role_data
class ValuesForInputs(og.DynamicAttributeAccess):
        """Helper class that creates natural hierarchical access to input attributes"""
        LOCAL_PROPERTY_NAMES = { }
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
self._batchedReadAttributes = []
self._batchedReadValues = []
@property
def exec(self):
data_view = og.AttributeValueHelper(self._attributes.exec)
return data_view.get()
@exec.setter
def exec(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.exec)
data_view = og.AttributeValueHelper(self._attributes.exec)
data_view.set(value)
@property
def renderResults(self):
data_view = og.AttributeValueHelper(self._attributes.renderResults)
return data_view.get()
@renderResults.setter
def renderResults(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.renderResults)
data_view = og.AttributeValueHelper(self._attributes.renderResults)
data_view.set(value)
@property
def renderVarDisplay(self):
data_view = og.AttributeValueHelper(self._attributes.renderVarDisplay)
return data_view.get()
@renderVarDisplay.setter
def renderVarDisplay(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.renderVarDisplay)
data_view = og.AttributeValueHelper(self._attributes.renderVarDisplay)
data_view.set(value)
def _prefetch(self):
readAttributes = self._batchedReadAttributes
newValues = _og._prefetch_input_attributes_data(readAttributes)
if len(readAttributes) == len(newValues):
self._batchedReadValues = newValues
class ValuesForOutputs(og.DynamicAttributeAccess):
        """Helper class that creates natural hierarchical access to output attributes"""
        LOCAL_PROPERTY_NAMES = { }
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
self._batchedWriteValues = { }
@property
def cudaPtr(self):
data_view = og.AttributeValueHelper(self._attributes.cudaPtr)
return data_view.get()
@cudaPtr.setter
def cudaPtr(self, value):
data_view = og.AttributeValueHelper(self._attributes.cudaPtr)
data_view.set(value)
@property
def exec(self):
data_view = og.AttributeValueHelper(self._attributes.exec)
return data_view.get()
@exec.setter
def exec(self, value):
data_view = og.AttributeValueHelper(self._attributes.exec)
data_view.set(value)
@property
def format(self):
data_view = og.AttributeValueHelper(self._attributes.format)
return data_view.get()
@format.setter
def format(self, value):
data_view = og.AttributeValueHelper(self._attributes.format)
data_view.set(value)
@property
def height(self):
data_view = og.AttributeValueHelper(self._attributes.height)
return data_view.get()
@height.setter
def height(self, value):
data_view = og.AttributeValueHelper(self._attributes.height)
data_view.set(value)
@property
def referenceTimeDenominator(self):
data_view = og.AttributeValueHelper(self._attributes.referenceTimeDenominator)
return data_view.get()
@referenceTimeDenominator.setter
def referenceTimeDenominator(self, value):
data_view = og.AttributeValueHelper(self._attributes.referenceTimeDenominator)
data_view.set(value)
@property
def referenceTimeNumerator(self):
data_view = og.AttributeValueHelper(self._attributes.referenceTimeNumerator)
return data_view.get()
@referenceTimeNumerator.setter
def referenceTimeNumerator(self, value):
data_view = og.AttributeValueHelper(self._attributes.referenceTimeNumerator)
data_view.set(value)
@property
def rpResourcePtr(self):
data_view = og.AttributeValueHelper(self._attributes.rpResourcePtr)
return data_view.get()
@rpResourcePtr.setter
def rpResourcePtr(self, value):
data_view = og.AttributeValueHelper(self._attributes.rpResourcePtr)
data_view.set(value)
@property
def width(self):
data_view = og.AttributeValueHelper(self._attributes.width)
return data_view.get()
@width.setter
def width(self, value):
data_view = og.AttributeValueHelper(self._attributes.width)
data_view.set(value)
def _commit(self):
_og._commit_output_attributes_data(self._batchedWriteValues)
self._batchedWriteValues = { }
class ValuesForState(og.DynamicAttributeAccess):
"""Helper class that creates natural hierarchical access to state attributes"""
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
def __init__(self, node):
super().__init__(node)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_INPUT)
self.inputs = OgnSdRenderVarDisplayTextureDatabase.ValuesForInputs(node, self.attributes.inputs, dynamic_attributes)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_OUTPUT)
self.outputs = OgnSdRenderVarDisplayTextureDatabase.ValuesForOutputs(node, self.attributes.outputs, dynamic_attributes)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_STATE)
self.state = OgnSdRenderVarDisplayTextureDatabase.ValuesForState(node, self.attributes.state, dynamic_attributes)
# omniverse-code/kit/exts/omni.syntheticdata/omni/syntheticdata/ogn/OgnSdPostCompRenderVarTexturesDatabase.py
"""Support for simplified access to data on nodes of type omni.syntheticdata.SdPostCompRenderVarTextures
Synthetic Data node to compose a front renderVar texture into a back renderVar texture
"""
import numpy
import omni.graph.core as og
import omni.graph.core._omni_graph_core as _og
import omni.graph.tools.ogn as ogn
class OgnSdPostCompRenderVarTexturesDatabase(og.Database):
"""Helper class providing simplified access to data on nodes of type omni.syntheticdata.SdPostCompRenderVarTextures
Class Members:
node: Node being evaluated
Attribute Value Properties:
Inputs:
inputs.cudaPtr
inputs.format
inputs.gpu
inputs.height
inputs.mode
inputs.parameters
inputs.renderVar
inputs.rp
inputs.width
Predefined Tokens:
tokens.line
tokens.grid
"""
# Imprint the generator and target ABI versions in the file for JIT generation
GENERATOR_VERSION = (1, 41, 3)
TARGET_VERSION = (2, 139, 12)
# This is an internal object that provides per-class storage of a per-node data dictionary
PER_NODE_DATA = {}
# This is an internal object that describes unchanging attributes in a generic way
# The values in this list are in no particular order, as a per-attribute tuple
# Name, Type, ExtendedTypeIndex, UiName, Description, Metadata,
# Is_Required, DefaultValue, Is_Deprecated, DeprecationMsg
# You should not need to access any of this data directly, use the defined database interfaces
INTERFACE = og.Database._get_interface([
('inputs:cudaPtr', 'uint64', 0, None, 'Front texture CUDA pointer', {}, True, 0, False, ''),
('inputs:format', 'uint64', 0, None, 'Front texture format', {}, True, 0, False, ''),
('inputs:gpu', 'uint64', 0, 'gpuFoundations', 'Pointer to shared context containing gpu foundations', {}, True, 0, False, ''),
('inputs:height', 'uint', 0, None, 'Front texture height', {}, True, 0, False, ''),
('inputs:mode', 'token', 0, None, 'Mode : grid, line', {ogn.MetadataKeys.DEFAULT: '"line"'}, True, "line", False, ''),
('inputs:parameters', 'float3', 0, None, 'Parameters', {ogn.MetadataKeys.DEFAULT: '[0, 0, 0]'}, True, [0, 0, 0], False, ''),
('inputs:renderVar', 'token', 0, None, 'Name of the back RenderVar', {ogn.MetadataKeys.DEFAULT: '"LdrColor"'}, True, "LdrColor", False, ''),
('inputs:rp', 'uint64', 0, 'renderProduct', 'Pointer to render product for this view', {}, True, 0, False, ''),
('inputs:width', 'uint', 0, None, 'Front texture width', {}, True, 0, False, ''),
])
class tokens:
line = "line"
grid = "grid"
class ValuesForInputs(og.DynamicAttributeAccess):
        """Helper class that creates natural hierarchical access to input attributes"""
        LOCAL_PROPERTY_NAMES = { }
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
self._batchedReadAttributes = []
self._batchedReadValues = []
@property
def cudaPtr(self):
data_view = og.AttributeValueHelper(self._attributes.cudaPtr)
return data_view.get()
@cudaPtr.setter
def cudaPtr(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.cudaPtr)
data_view = og.AttributeValueHelper(self._attributes.cudaPtr)
data_view.set(value)
@property
def format(self):
data_view = og.AttributeValueHelper(self._attributes.format)
return data_view.get()
@format.setter
def format(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.format)
data_view = og.AttributeValueHelper(self._attributes.format)
data_view.set(value)
@property
def gpu(self):
data_view = og.AttributeValueHelper(self._attributes.gpu)
return data_view.get()
@gpu.setter
def gpu(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.gpu)
data_view = og.AttributeValueHelper(self._attributes.gpu)
data_view.set(value)
@property
def height(self):
data_view = og.AttributeValueHelper(self._attributes.height)
return data_view.get()
@height.setter
def height(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.height)
data_view = og.AttributeValueHelper(self._attributes.height)
data_view.set(value)
@property
def mode(self):
data_view = og.AttributeValueHelper(self._attributes.mode)
return data_view.get()
@mode.setter
def mode(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.mode)
data_view = og.AttributeValueHelper(self._attributes.mode)
data_view.set(value)
@property
def parameters(self):
data_view = og.AttributeValueHelper(self._attributes.parameters)
return data_view.get()
@parameters.setter
def parameters(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.parameters)
data_view = og.AttributeValueHelper(self._attributes.parameters)
data_view.set(value)
@property
def renderVar(self):
data_view = og.AttributeValueHelper(self._attributes.renderVar)
return data_view.get()
@renderVar.setter
def renderVar(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.renderVar)
data_view = og.AttributeValueHelper(self._attributes.renderVar)
data_view.set(value)
@property
def rp(self):
data_view = og.AttributeValueHelper(self._attributes.rp)
return data_view.get()
@rp.setter
def rp(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.rp)
data_view = og.AttributeValueHelper(self._attributes.rp)
data_view.set(value)
@property
def width(self):
data_view = og.AttributeValueHelper(self._attributes.width)
return data_view.get()
@width.setter
def width(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.width)
data_view = og.AttributeValueHelper(self._attributes.width)
data_view.set(value)
def _prefetch(self):
readAttributes = self._batchedReadAttributes
newValues = _og._prefetch_input_attributes_data(readAttributes)
if len(readAttributes) == len(newValues):
self._batchedReadValues = newValues
class ValuesForOutputs(og.DynamicAttributeAccess):
        """Helper class that creates natural hierarchical access to output attributes"""
        LOCAL_PROPERTY_NAMES = { }
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
self._batchedWriteValues = { }
def _commit(self):
_og._commit_output_attributes_data(self._batchedWriteValues)
self._batchedWriteValues = { }
class ValuesForState(og.DynamicAttributeAccess):
"""Helper class that creates natural hierarchical access to state attributes"""
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
def __init__(self, node):
super().__init__(node)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_INPUT)
self.inputs = OgnSdPostCompRenderVarTexturesDatabase.ValuesForInputs(node, self.attributes.inputs, dynamic_attributes)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_OUTPUT)
self.outputs = OgnSdPostCompRenderVarTexturesDatabase.ValuesForOutputs(node, self.attributes.outputs, dynamic_attributes)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_STATE)
self.state = OgnSdPostCompRenderVarTexturesDatabase.ValuesForState(node, self.attributes.state, dynamic_attributes)
# omniverse-code/kit/exts/omni.syntheticdata/omni/syntheticdata/ogn/OgnSdTextureToLinearArrayDatabase.py
"""Support for simplified access to data on nodes of type omni.syntheticdata.SdTextureToLinearArray
SyntheticData node to copy the input texture into a linear array buffer
"""
import numpy
import omni.graph.core as og
import omni.graph.core._omni_graph_core as _og
import omni.graph.tools.ogn as ogn
class OgnSdTextureToLinearArrayDatabase(og.Database):
"""Helper class providing simplified access to data on nodes of type omni.syntheticdata.SdTextureToLinearArray
Class Members:
node: Node being evaluated
Attribute Value Properties:
Inputs:
inputs.cudaMipmappedArray
inputs.format
inputs.height
inputs.hydraTime
inputs.mipCount
inputs.outputHeight
inputs.outputWidth
inputs.simTime
inputs.stream
inputs.width
Outputs:
outputs.data
outputs.height
outputs.hydraTime
outputs.simTime
outputs.stream
outputs.width
"""
# Imprint the generator and target ABI versions in the file for JIT generation
GENERATOR_VERSION = (1, 41, 3)
TARGET_VERSION = (2, 139, 12)
# This is an internal object that provides per-class storage of a per-node data dictionary
PER_NODE_DATA = {}
# This is an internal object that describes unchanging attributes in a generic way
# The values in this list are in no particular order, as a per-attribute tuple
# Name, Type, ExtendedTypeIndex, UiName, Description, Metadata,
# Is_Required, DefaultValue, Is_Deprecated, DeprecationMsg
# You should not need to access any of this data directly, use the defined database interfaces
INTERFACE = og.Database._get_interface([
('inputs:cudaMipmappedArray', 'uint64', 0, None, 'Pointer to the CUDA Mipmapped Array', {}, True, 0, False, ''),
('inputs:format', 'uint64', 0, None, 'Format', {}, True, 0, False, ''),
('inputs:height', 'uint', 0, None, 'Height', {}, True, 0, False, ''),
('inputs:hydraTime', 'double', 0, None, 'Hydra time in stage', {}, True, 0.0, False, ''),
('inputs:mipCount', 'uint', 0, None, 'Mip Count', {}, True, 0, False, ''),
('inputs:outputHeight', 'uint', 0, None, 'Requested output height', {ogn.MetadataKeys.DEFAULT: '0'}, True, 0, False, ''),
('inputs:outputWidth', 'uint', 0, None, 'Requested output width', {ogn.MetadataKeys.DEFAULT: '0'}, True, 0, False, ''),
('inputs:simTime', 'double', 0, None, 'Simulation time', {}, True, 0.0, False, ''),
('inputs:stream', 'uint64', 0, None, 'Pointer to the CUDA Stream', {}, True, 0, False, ''),
('inputs:width', 'uint', 0, None, 'Width', {}, True, 0, False, ''),
('outputs:data', 'float4[]', 0, None, 'Buffer array data', {ogn.MetadataKeys.MEMORY_TYPE: 'cuda', ogn.MetadataKeys.DEFAULT: '[]'}, True, [], False, ''),
('outputs:height', 'uint', 0, None, 'Buffer array height', {}, True, None, False, ''),
('outputs:hydraTime', 'double', 0, None, 'Hydra time in stage', {}, True, None, False, ''),
('outputs:simTime', 'double', 0, None, 'Simulation time', {}, True, None, False, ''),
('outputs:stream', 'uint64', 0, None, 'Pointer to the CUDA Stream', {}, True, None, False, ''),
('outputs:width', 'uint', 0, None, 'Buffer array width', {}, True, None, False, ''),
])
class ValuesForInputs(og.DynamicAttributeAccess):
        """Helper class that creates natural hierarchical access to input attributes"""
        LOCAL_PROPERTY_NAMES = { }
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
self._batchedReadAttributes = []
self._batchedReadValues = []
@property
def cudaMipmappedArray(self):
data_view = og.AttributeValueHelper(self._attributes.cudaMipmappedArray)
return data_view.get()
@cudaMipmappedArray.setter
def cudaMipmappedArray(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.cudaMipmappedArray)
data_view = og.AttributeValueHelper(self._attributes.cudaMipmappedArray)
data_view.set(value)
@property
def format(self):
data_view = og.AttributeValueHelper(self._attributes.format)
return data_view.get()
@format.setter
def format(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.format)
data_view = og.AttributeValueHelper(self._attributes.format)
data_view.set(value)
@property
def height(self):
data_view = og.AttributeValueHelper(self._attributes.height)
return data_view.get()
@height.setter
def height(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.height)
data_view = og.AttributeValueHelper(self._attributes.height)
data_view.set(value)
@property
def hydraTime(self):
data_view = og.AttributeValueHelper(self._attributes.hydraTime)
return data_view.get()
@hydraTime.setter
def hydraTime(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.hydraTime)
data_view = og.AttributeValueHelper(self._attributes.hydraTime)
data_view.set(value)
@property
def mipCount(self):
data_view = og.AttributeValueHelper(self._attributes.mipCount)
return data_view.get()
@mipCount.setter
def mipCount(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.mipCount)
data_view = og.AttributeValueHelper(self._attributes.mipCount)
data_view.set(value)
@property
def outputHeight(self):
data_view = og.AttributeValueHelper(self._attributes.outputHeight)
return data_view.get()
@outputHeight.setter
def outputHeight(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.outputHeight)
data_view = og.AttributeValueHelper(self._attributes.outputHeight)
data_view.set(value)
@property
def outputWidth(self):
data_view = og.AttributeValueHelper(self._attributes.outputWidth)
return data_view.get()
@outputWidth.setter
def outputWidth(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.outputWidth)
data_view = og.AttributeValueHelper(self._attributes.outputWidth)
data_view.set(value)
@property
def simTime(self):
data_view = og.AttributeValueHelper(self._attributes.simTime)
return data_view.get()
@simTime.setter
def simTime(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.simTime)
data_view = og.AttributeValueHelper(self._attributes.simTime)
data_view.set(value)
@property
def stream(self):
data_view = og.AttributeValueHelper(self._attributes.stream)
return data_view.get()
@stream.setter
def stream(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.stream)
data_view = og.AttributeValueHelper(self._attributes.stream)
data_view.set(value)
@property
def width(self):
data_view = og.AttributeValueHelper(self._attributes.width)
return data_view.get()
@width.setter
def width(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.width)
data_view = og.AttributeValueHelper(self._attributes.width)
data_view.set(value)
def _prefetch(self):
readAttributes = self._batchedReadAttributes
newValues = _og._prefetch_input_attributes_data(readAttributes)
if len(readAttributes) == len(newValues):
self._batchedReadValues = newValues
class ValuesForOutputs(og.DynamicAttributeAccess):
        """Helper class that creates natural hierarchical access to output attributes"""
        LOCAL_PROPERTY_NAMES = { }
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
self.data_size = 0
self._batchedWriteValues = { }
@property
def data(self):
data_view = og.AttributeValueHelper(self._attributes.data)
return data_view.get(reserved_element_count=self.data_size, on_gpu=True)
@data.setter
def data(self, value):
data_view = og.AttributeValueHelper(self._attributes.data)
data_view.set(value, on_gpu=True)
self.data_size = data_view.get_array_size()
@property
def height(self):
data_view = og.AttributeValueHelper(self._attributes.height)
return data_view.get()
@height.setter
def height(self, value):
data_view = og.AttributeValueHelper(self._attributes.height)
data_view.set(value)
@property
def hydraTime(self):
data_view = og.AttributeValueHelper(self._attributes.hydraTime)
return data_view.get()
@hydraTime.setter
def hydraTime(self, value):
data_view = og.AttributeValueHelper(self._attributes.hydraTime)
data_view.set(value)
@property
def simTime(self):
data_view = og.AttributeValueHelper(self._attributes.simTime)
return data_view.get()
@simTime.setter
def simTime(self, value):
data_view = og.AttributeValueHelper(self._attributes.simTime)
data_view.set(value)
@property
def stream(self):
data_view = og.AttributeValueHelper(self._attributes.stream)
return data_view.get()
@stream.setter
def stream(self, value):
data_view = og.AttributeValueHelper(self._attributes.stream)
data_view.set(value)
@property
def width(self):
data_view = og.AttributeValueHelper(self._attributes.width)
return data_view.get()
@width.setter
def width(self, value):
data_view = og.AttributeValueHelper(self._attributes.width)
data_view.set(value)
def _commit(self):
_og._commit_output_attributes_data(self._batchedWriteValues)
self._batchedWriteValues = { }
class ValuesForState(og.DynamicAttributeAccess):
"""Helper class that creates natural hierarchical access to state attributes"""
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
def __init__(self, node):
super().__init__(node)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_INPUT)
self.inputs = OgnSdTextureToLinearArrayDatabase.ValuesForInputs(node, self.attributes.inputs, dynamic_attributes)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_OUTPUT)
self.outputs = OgnSdTextureToLinearArrayDatabase.ValuesForOutputs(node, self.attributes.outputs, dynamic_attributes)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_STATE)
self.state = OgnSdTextureToLinearArrayDatabase.ValuesForState(node, self.attributes.state, dynamic_attributes)
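The generated accessor classes above all follow the same property pattern: each attribute property wraps the raw attribute in a value helper that reads or writes the underlying store. Below is a minimal, self-contained sketch of that pattern in plain Python; `FakeAttributeValueHelper` and `FakeValuesForOutputs` are illustrative stand-ins, not the real omni.graph API.

```python
class FakeAttributeValueHelper:
    """Toy stand-in for og.AttributeValueHelper: reads/writes one attribute."""

    _store = {}  # shared backing store keyed by attribute name

    def __init__(self, attribute):
        self._attribute = attribute

    def get(self):
        return FakeAttributeValueHelper._store.get(self._attribute)

    def set(self, value):
        FakeAttributeValueHelper._store[self._attribute] = value


class FakeValuesForOutputs:
    """Mirrors the generated ValuesForOutputs shape for a single attribute."""

    @property
    def width(self):
        return FakeAttributeValueHelper("outputs:width").get()

    @width.setter
    def width(self, value):
        FakeAttributeValueHelper("outputs:width").set(value)


outputs = FakeValuesForOutputs()
outputs.width = 1280
print(outputs.width)  # 1280
```

In the real generated code the helper talks to the graph context (on CPU or GPU); the sketch only reproduces the property-per-attribute shape.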
# File: omniverse-code/kit/exts/omni.syntheticdata/omni/syntheticdata/ogn/OgnSdTestInstanceMappingDatabase.py
"""Support for simplified access to data on nodes of type omni.syntheticdata.SdTestInstanceMapping
Synthetic Data node to test the instance mapping pipeline
"""
import numpy
import omni.graph.core as og
import omni.graph.core._omni_graph_core as _og
import omni.graph.tools.ogn as ogn
class OgnSdTestInstanceMappingDatabase(og.Database):
"""Helper class providing simplified access to data on nodes of type omni.syntheticdata.SdTestInstanceMapping
Class Members:
node: Node being evaluated
Attribute Value Properties:
Inputs:
inputs.exec
inputs.instanceMapPtr
inputs.instancePrimPathPtr
inputs.minInstanceIndex
inputs.minSemanticIndex
inputs.numInstances
inputs.numSemantics
inputs.semanticLabelTokenPtrs
inputs.semanticLocalTransformPtr
inputs.semanticMapPtr
inputs.semanticPrimPathPtr
inputs.semanticWorldTransformPtr
inputs.stage
inputs.swhFrameNumber
inputs.testCaseIndex
Outputs:
outputs.exec
outputs.semanticFilterPredicate
outputs.success
Predefined Tokens:
tokens.simulation
tokens.postRender
tokens.onDemand
"""
# Imprint the generator and target ABI versions in the file for JIT generation
GENERATOR_VERSION = (1, 41, 3)
TARGET_VERSION = (2, 139, 12)
# This is an internal object that provides per-class storage of a per-node data dictionary
PER_NODE_DATA = {}
# This is an internal object that describes unchanging attributes in a generic way
# The values in this list are in no particular order, as a per-attribute tuple
# Name, Type, ExtendedTypeIndex, UiName, Description, Metadata,
# Is_Required, DefaultValue, Is_Deprecated, DeprecationMsg
# You should not need to access any of this data directly, use the defined database interfaces
INTERFACE = og.Database._get_interface([
('inputs:exec', 'execution', 0, None, 'Trigger', {}, True, None, False, ''),
        ('inputs:instanceMapPtr', 'uint64', 0, None, 'Array pointer of numInstances uint16_t containing the semantic index of the first semantic prim parent of each instance prim', {}, True, 0, False, ''),
        ('inputs:instancePrimPathPtr', 'uint64', 0, None, 'Array pointer of numInstances uint64_t containing the prim path tokens for every instance prim', {}, True, 0, False, ''),
('inputs:minInstanceIndex', 'uint', 0, None, 'Instance index of the first instance prim in the instance arrays', {}, True, 0, False, ''),
('inputs:minSemanticIndex', 'uint', 0, None, 'Semantic index of the first semantic prim in the semantic arrays', {}, True, 0, False, ''),
        ('inputs:numInstances', 'uint', 0, None, 'Number of instance prims in the instance arrays', {}, True, 0, False, ''),
        ('inputs:numSemantics', 'uint', 0, None, 'Number of semantic prims in the semantic arrays', {}, True, 0, False, ''),
        ('inputs:semanticLabelTokenPtrs', 'uint64[]', 0, None, 'Array containing, for every input semantic filter, the corresponding array pointer of numSemantics uint64_t representing the semantic label of the semantic prim', {}, True, [], False, ''),
        ('inputs:semanticLocalTransformPtr', 'uint64', 0, None, 'Array pointer of numSemantics 4x4 float matrices containing the transform from world to object space for every semantic prim', {}, True, 0, False, ''),
        ('inputs:semanticMapPtr', 'uint64', 0, None, 'Array pointer of numSemantics uint16_t containing the semantic index of the first semantic prim parent of each semantic prim', {}, True, 0, False, ''),
        ('inputs:semanticPrimPathPtr', 'uint64', 0, None, 'Array pointer of numSemantics uint32_t containing the prim part of the prim path tokens for every semantic prim', {}, True, 0, False, ''),
('inputs:semanticWorldTransformPtr', 'uint64', 0, None, 'Array pointer of numSemantics 4x4 float matrices containing the transform from local to world space for every semantic entity', {}, True, 0, False, ''),
        ('inputs:stage', 'token', 0, None, 'Stage in {simulation, postRender, onDemand}', {}, True, "", False, ''),
('inputs:swhFrameNumber', 'uint64', 0, None, 'Fabric frame number', {}, True, 0, False, ''),
('inputs:testCaseIndex', 'int', 0, None, 'Test case index', {ogn.MetadataKeys.DEFAULT: '-1'}, True, -1, False, ''),
('outputs:exec', 'execution', 0, 'Received', 'Executes when the event is received', {}, True, None, False, ''),
('outputs:semanticFilterPredicate', 'token', 0, None, 'The semantic filter predicate : a disjunctive normal form of semantic type and label', {}, True, None, False, ''),
('outputs:success', 'bool', 0, None, 'Test value : false if failed', {}, True, None, False, ''),
])
class tokens:
simulation = "simulation"
postRender = "postRender"
onDemand = "onDemand"
@classmethod
def _populate_role_data(cls):
"""Populate a role structure with the non-default roles on this node type"""
role_data = super()._populate_role_data()
role_data.inputs.exec = og.AttributeRole.EXECUTION
role_data.outputs.exec = og.AttributeRole.EXECUTION
return role_data
class ValuesForInputs(og.DynamicAttributeAccess):
LOCAL_PROPERTY_NAMES = { }
"""Helper class that creates natural hierarchical access to input attributes"""
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
self._batchedReadAttributes = []
self._batchedReadValues = []
@property
def exec(self):
data_view = og.AttributeValueHelper(self._attributes.exec)
return data_view.get()
@exec.setter
def exec(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.exec)
data_view = og.AttributeValueHelper(self._attributes.exec)
data_view.set(value)
@property
def instanceMapPtr(self):
data_view = og.AttributeValueHelper(self._attributes.instanceMapPtr)
return data_view.get()
@instanceMapPtr.setter
def instanceMapPtr(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.instanceMapPtr)
data_view = og.AttributeValueHelper(self._attributes.instanceMapPtr)
data_view.set(value)
@property
def instancePrimPathPtr(self):
data_view = og.AttributeValueHelper(self._attributes.instancePrimPathPtr)
return data_view.get()
@instancePrimPathPtr.setter
def instancePrimPathPtr(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.instancePrimPathPtr)
data_view = og.AttributeValueHelper(self._attributes.instancePrimPathPtr)
data_view.set(value)
@property
def minInstanceIndex(self):
data_view = og.AttributeValueHelper(self._attributes.minInstanceIndex)
return data_view.get()
@minInstanceIndex.setter
def minInstanceIndex(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.minInstanceIndex)
data_view = og.AttributeValueHelper(self._attributes.minInstanceIndex)
data_view.set(value)
@property
def minSemanticIndex(self):
data_view = og.AttributeValueHelper(self._attributes.minSemanticIndex)
return data_view.get()
@minSemanticIndex.setter
def minSemanticIndex(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.minSemanticIndex)
data_view = og.AttributeValueHelper(self._attributes.minSemanticIndex)
data_view.set(value)
@property
def numInstances(self):
data_view = og.AttributeValueHelper(self._attributes.numInstances)
return data_view.get()
@numInstances.setter
def numInstances(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.numInstances)
data_view = og.AttributeValueHelper(self._attributes.numInstances)
data_view.set(value)
@property
def numSemantics(self):
data_view = og.AttributeValueHelper(self._attributes.numSemantics)
return data_view.get()
@numSemantics.setter
def numSemantics(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.numSemantics)
data_view = og.AttributeValueHelper(self._attributes.numSemantics)
data_view.set(value)
@property
def semanticLabelTokenPtrs(self):
data_view = og.AttributeValueHelper(self._attributes.semanticLabelTokenPtrs)
return data_view.get()
@semanticLabelTokenPtrs.setter
def semanticLabelTokenPtrs(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.semanticLabelTokenPtrs)
data_view = og.AttributeValueHelper(self._attributes.semanticLabelTokenPtrs)
data_view.set(value)
self.semanticLabelTokenPtrs_size = data_view.get_array_size()
@property
def semanticLocalTransformPtr(self):
data_view = og.AttributeValueHelper(self._attributes.semanticLocalTransformPtr)
return data_view.get()
@semanticLocalTransformPtr.setter
def semanticLocalTransformPtr(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.semanticLocalTransformPtr)
data_view = og.AttributeValueHelper(self._attributes.semanticLocalTransformPtr)
data_view.set(value)
@property
def semanticMapPtr(self):
data_view = og.AttributeValueHelper(self._attributes.semanticMapPtr)
return data_view.get()
@semanticMapPtr.setter
def semanticMapPtr(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.semanticMapPtr)
data_view = og.AttributeValueHelper(self._attributes.semanticMapPtr)
data_view.set(value)
@property
def semanticPrimPathPtr(self):
data_view = og.AttributeValueHelper(self._attributes.semanticPrimPathPtr)
return data_view.get()
@semanticPrimPathPtr.setter
def semanticPrimPathPtr(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.semanticPrimPathPtr)
data_view = og.AttributeValueHelper(self._attributes.semanticPrimPathPtr)
data_view.set(value)
@property
def semanticWorldTransformPtr(self):
data_view = og.AttributeValueHelper(self._attributes.semanticWorldTransformPtr)
return data_view.get()
@semanticWorldTransformPtr.setter
def semanticWorldTransformPtr(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.semanticWorldTransformPtr)
data_view = og.AttributeValueHelper(self._attributes.semanticWorldTransformPtr)
data_view.set(value)
@property
def stage(self):
data_view = og.AttributeValueHelper(self._attributes.stage)
return data_view.get()
@stage.setter
def stage(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.stage)
data_view = og.AttributeValueHelper(self._attributes.stage)
data_view.set(value)
@property
def swhFrameNumber(self):
data_view = og.AttributeValueHelper(self._attributes.swhFrameNumber)
return data_view.get()
@swhFrameNumber.setter
def swhFrameNumber(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.swhFrameNumber)
data_view = og.AttributeValueHelper(self._attributes.swhFrameNumber)
data_view.set(value)
@property
def testCaseIndex(self):
data_view = og.AttributeValueHelper(self._attributes.testCaseIndex)
return data_view.get()
@testCaseIndex.setter
def testCaseIndex(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.testCaseIndex)
data_view = og.AttributeValueHelper(self._attributes.testCaseIndex)
data_view.set(value)
def _prefetch(self):
readAttributes = self._batchedReadAttributes
newValues = _og._prefetch_input_attributes_data(readAttributes)
if len(readAttributes) == len(newValues):
self._batchedReadValues = newValues
class ValuesForOutputs(og.DynamicAttributeAccess):
LOCAL_PROPERTY_NAMES = { }
"""Helper class that creates natural hierarchical access to output attributes"""
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
self._batchedWriteValues = { }
@property
def exec(self):
data_view = og.AttributeValueHelper(self._attributes.exec)
return data_view.get()
@exec.setter
def exec(self, value):
data_view = og.AttributeValueHelper(self._attributes.exec)
data_view.set(value)
@property
def semanticFilterPredicate(self):
data_view = og.AttributeValueHelper(self._attributes.semanticFilterPredicate)
return data_view.get()
@semanticFilterPredicate.setter
def semanticFilterPredicate(self, value):
data_view = og.AttributeValueHelper(self._attributes.semanticFilterPredicate)
data_view.set(value)
@property
def success(self):
data_view = og.AttributeValueHelper(self._attributes.success)
return data_view.get()
@success.setter
def success(self, value):
data_view = og.AttributeValueHelper(self._attributes.success)
data_view.set(value)
def _commit(self):
_og._commit_output_attributes_data(self._batchedWriteValues)
self._batchedWriteValues = { }
class ValuesForState(og.DynamicAttributeAccess):
"""Helper class that creates natural hierarchical access to state attributes"""
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
def __init__(self, node):
super().__init__(node)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_INPUT)
self.inputs = OgnSdTestInstanceMappingDatabase.ValuesForInputs(node, self.attributes.inputs, dynamic_attributes)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_OUTPUT)
self.outputs = OgnSdTestInstanceMappingDatabase.ValuesForOutputs(node, self.attributes.outputs, dynamic_attributes)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_STATE)
self.state = OgnSdTestInstanceMappingDatabase.ValuesForState(node, self.attributes.state, dynamic_attributes)
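The `_prefetch` method above illustrates the batched-read idea used by all of these generated databases: queued attribute reads are resolved in one bulk call, and the cached values are committed only if every queued attribute resolved. A hedged plain-Python sketch of that control flow follows; the `bulk_fetch` function is a toy stand-in, not `_og._prefetch_input_attributes_data`.

```python
def bulk_fetch(attributes, store):
    """Toy bulk resolver: look up every queued attribute in one pass."""
    return [store[name] for name in attributes if name in store]


class BatchedInputs:
    def __init__(self, store):
        self._store = store
        self._batched_read_attributes = ["inputs:numInstances", "inputs:numSemantics"]
        self._batched_read_values = []

    def _prefetch(self):
        new_values = bulk_fetch(self._batched_read_attributes, self._store)
        # Commit only when every queued read resolved, as the generated code does.
        if len(self._batched_read_attributes) == len(new_values):
            self._batched_read_values = new_values


inputs = BatchedInputs({"inputs:numInstances": 4, "inputs:numSemantics": 2})
inputs._prefetch()
print(inputs._batched_read_values)  # [4, 2]
```

The length check is the important detail: a partially resolved batch leaves the cached values untouched rather than committing an inconsistent snapshot.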
# File: omniverse-code/kit/exts/omni.syntheticdata/omni/syntheticdata/ogn/OgnSdFrameIdentifierDatabase.py
"""Support for simplified access to data on nodes of type omni.syntheticdata.SdFrameIdentifier
Synthetic Data node to expose pipeline frame identifier.
"""
import omni.graph.core as og
import omni.graph.core._omni_graph_core as _og
import omni.graph.tools.ogn as ogn
class OgnSdFrameIdentifierDatabase(og.Database):
"""Helper class providing simplified access to data on nodes of type omni.syntheticdata.SdFrameIdentifier
Class Members:
node: Node being evaluated
Attribute Value Properties:
Inputs:
inputs.exec
inputs.renderResults
Outputs:
outputs.durationDenominator
outputs.durationNumerator
outputs.exec
outputs.externalTimeOfSimNs
outputs.frameNumber
outputs.rationalTimeOfSimDenominator
outputs.rationalTimeOfSimNumerator
outputs.sampleTimeOffsetInSimFrames
outputs.type
Predefined Tokens:
tokens.NoFrameNumber
tokens.FrameNumber
tokens.ConstantFramerateFrameNumber
"""
# Imprint the generator and target ABI versions in the file for JIT generation
GENERATOR_VERSION = (1, 41, 3)
TARGET_VERSION = (2, 139, 12)
# This is an internal object that provides per-class storage of a per-node data dictionary
PER_NODE_DATA = {}
# This is an internal object that describes unchanging attributes in a generic way
# The values in this list are in no particular order, as a per-attribute tuple
# Name, Type, ExtendedTypeIndex, UiName, Description, Metadata,
# Is_Required, DefaultValue, Is_Deprecated, DeprecationMsg
# You should not need to access any of this data directly, use the defined database interfaces
INTERFACE = og.Database._get_interface([
('inputs:exec', 'execution', 0, None, 'Trigger', {}, True, None, False, ''),
('inputs:renderResults', 'uint64', 0, None, 'Render results', {}, True, 0, False, ''),
('outputs:durationDenominator', 'uint64', 0, None, 'Duration denominator.\nOnly valid if eConstantFramerateFrameNumber', {ogn.MetadataKeys.DEFAULT: '0'}, True, 0, False, ''),
('outputs:durationNumerator', 'int64', 0, None, 'Duration numerator.\nOnly valid if eConstantFramerateFrameNumber.', {ogn.MetadataKeys.DEFAULT: '0'}, True, 0, False, ''),
('outputs:exec', 'execution', 0, 'Received', 'Executes for each newFrame event received', {}, True, None, False, ''),
('outputs:externalTimeOfSimNs', 'int64', 0, None, 'External time in Ns.\nOnly valid if eConstantFramerateFrameNumber.', {ogn.MetadataKeys.DEFAULT: '-1'}, True, -1, False, ''),
('outputs:frameNumber', 'int64', 0, None, 'Frame number.\nValid if eFrameNumber or eConstantFramerateFrameNumber.', {ogn.MetadataKeys.DEFAULT: '-1'}, True, -1, False, ''),
        ('outputs:rationalTimeOfSimDenominator', 'uint64', 0, None, 'Rational time of simulation denominator.', {ogn.MetadataKeys.DEFAULT: '0'}, True, 0, False, ''),
        ('outputs:rationalTimeOfSimNumerator', 'int64', 0, None, 'Rational time of simulation numerator.', {ogn.MetadataKeys.DEFAULT: '0'}, True, 0, False, ''),
('outputs:sampleTimeOffsetInSimFrames', 'uint64', 0, None, 'Sample time offset.\nOnly valid if eConstantFramerateFrameNumber.', {ogn.MetadataKeys.DEFAULT: '0'}, True, 0, False, ''),
('outputs:type', 'token', 0, None, 'Type of the frame identifier.', {ogn.MetadataKeys.ALLOWED_TOKENS: 'NoFrameNumber,FrameNumber,ConstantFramerateFrameNumber', ogn.MetadataKeys.ALLOWED_TOKENS_RAW: '["NoFrameNumber", "FrameNumber", "ConstantFramerateFrameNumber"]', ogn.MetadataKeys.DEFAULT: '"NoFrameNumber"'}, True, "NoFrameNumber", False, ''),
])
class tokens:
NoFrameNumber = "NoFrameNumber"
FrameNumber = "FrameNumber"
ConstantFramerateFrameNumber = "ConstantFramerateFrameNumber"
@classmethod
def _populate_role_data(cls):
"""Populate a role structure with the non-default roles on this node type"""
role_data = super()._populate_role_data()
role_data.inputs.exec = og.AttributeRole.EXECUTION
role_data.outputs.exec = og.AttributeRole.EXECUTION
return role_data
class ValuesForInputs(og.DynamicAttributeAccess):
LOCAL_PROPERTY_NAMES = { }
"""Helper class that creates natural hierarchical access to input attributes"""
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
self._batchedReadAttributes = []
self._batchedReadValues = []
@property
def exec(self):
data_view = og.AttributeValueHelper(self._attributes.exec)
return data_view.get()
@exec.setter
def exec(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.exec)
data_view = og.AttributeValueHelper(self._attributes.exec)
data_view.set(value)
@property
def renderResults(self):
data_view = og.AttributeValueHelper(self._attributes.renderResults)
return data_view.get()
@renderResults.setter
def renderResults(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.renderResults)
data_view = og.AttributeValueHelper(self._attributes.renderResults)
data_view.set(value)
def _prefetch(self):
readAttributes = self._batchedReadAttributes
newValues = _og._prefetch_input_attributes_data(readAttributes)
if len(readAttributes) == len(newValues):
self._batchedReadValues = newValues
class ValuesForOutputs(og.DynamicAttributeAccess):
LOCAL_PROPERTY_NAMES = { }
"""Helper class that creates natural hierarchical access to output attributes"""
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
self._batchedWriteValues = { }
@property
def durationDenominator(self):
data_view = og.AttributeValueHelper(self._attributes.durationDenominator)
return data_view.get()
@durationDenominator.setter
def durationDenominator(self, value):
data_view = og.AttributeValueHelper(self._attributes.durationDenominator)
data_view.set(value)
@property
def durationNumerator(self):
data_view = og.AttributeValueHelper(self._attributes.durationNumerator)
return data_view.get()
@durationNumerator.setter
def durationNumerator(self, value):
data_view = og.AttributeValueHelper(self._attributes.durationNumerator)
data_view.set(value)
@property
def exec(self):
data_view = og.AttributeValueHelper(self._attributes.exec)
return data_view.get()
@exec.setter
def exec(self, value):
data_view = og.AttributeValueHelper(self._attributes.exec)
data_view.set(value)
@property
def externalTimeOfSimNs(self):
data_view = og.AttributeValueHelper(self._attributes.externalTimeOfSimNs)
return data_view.get()
@externalTimeOfSimNs.setter
def externalTimeOfSimNs(self, value):
data_view = og.AttributeValueHelper(self._attributes.externalTimeOfSimNs)
data_view.set(value)
@property
def frameNumber(self):
data_view = og.AttributeValueHelper(self._attributes.frameNumber)
return data_view.get()
@frameNumber.setter
def frameNumber(self, value):
data_view = og.AttributeValueHelper(self._attributes.frameNumber)
data_view.set(value)
@property
def rationalTimeOfSimDenominator(self):
data_view = og.AttributeValueHelper(self._attributes.rationalTimeOfSimDenominator)
return data_view.get()
@rationalTimeOfSimDenominator.setter
def rationalTimeOfSimDenominator(self, value):
data_view = og.AttributeValueHelper(self._attributes.rationalTimeOfSimDenominator)
data_view.set(value)
@property
def rationalTimeOfSimNumerator(self):
data_view = og.AttributeValueHelper(self._attributes.rationalTimeOfSimNumerator)
return data_view.get()
@rationalTimeOfSimNumerator.setter
def rationalTimeOfSimNumerator(self, value):
data_view = og.AttributeValueHelper(self._attributes.rationalTimeOfSimNumerator)
data_view.set(value)
@property
def sampleTimeOffsetInSimFrames(self):
data_view = og.AttributeValueHelper(self._attributes.sampleTimeOffsetInSimFrames)
return data_view.get()
@sampleTimeOffsetInSimFrames.setter
def sampleTimeOffsetInSimFrames(self, value):
data_view = og.AttributeValueHelper(self._attributes.sampleTimeOffsetInSimFrames)
data_view.set(value)
@property
def type(self):
data_view = og.AttributeValueHelper(self._attributes.type)
return data_view.get()
@type.setter
def type(self, value):
data_view = og.AttributeValueHelper(self._attributes.type)
data_view.set(value)
def _commit(self):
_og._commit_output_attributes_data(self._batchedWriteValues)
self._batchedWriteValues = { }
class ValuesForState(og.DynamicAttributeAccess):
"""Helper class that creates natural hierarchical access to state attributes"""
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
def __init__(self, node):
super().__init__(node)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_INPUT)
self.inputs = OgnSdFrameIdentifierDatabase.ValuesForInputs(node, self.attributes.inputs, dynamic_attributes)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_OUTPUT)
self.outputs = OgnSdFrameIdentifierDatabase.ValuesForOutputs(node, self.attributes.outputs, dynamic_attributes)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_STATE)
self.state = OgnSdFrameIdentifierDatabase.ValuesForState(node, self.attributes.state, dynamic_attributes)
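The predefined `tokens` class in each database gives callers named constants to compare against instead of raw string literals. The sketch below shows how an `outputs.type` value from the frame-identifier node above would typically be tested against those tokens; the `has_frame_number` helper is illustrative, not part of the generated API.

```python
class FrameIdentifierTokens:
    """Copy of the predefined tokens from OgnSdFrameIdentifierDatabase."""
    NoFrameNumber = "NoFrameNumber"
    FrameNumber = "FrameNumber"
    ConstantFramerateFrameNumber = "ConstantFramerateFrameNumber"


def has_frame_number(type_value: str) -> bool:
    """True when the outputs.type token carries a usable frame number."""
    return type_value in (
        FrameIdentifierTokens.FrameNumber,
        FrameIdentifierTokens.ConstantFramerateFrameNumber,
    )


print(has_frame_number("FrameNumber"))    # True
print(has_frame_number("NoFrameNumber"))  # False
```

Comparing against the token attributes rather than inline strings keeps call sites in sync with the node's `ALLOWED_TOKENS` metadata.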
# File: omniverse-code/kit/exts/omni.syntheticdata/omni/syntheticdata/ogn/OgnSdPostInstanceMappingDatabase.py
"""Support for simplified access to data on nodes of type omni.syntheticdata.SdPostInstanceMapping
Synthetic Data node to compute and store scene instances semantic hierarchy information
"""
import omni.graph.core as og
import omni.graph.core._omni_graph_core as _og
import omni.graph.tools.ogn as ogn
class OgnSdPostInstanceMappingDatabase(og.Database):
"""Helper class providing simplified access to data on nodes of type omni.syntheticdata.SdPostInstanceMapping
Class Members:
node: Node being evaluated
Attribute Value Properties:
Inputs:
inputs.exec
inputs.gpu
inputs.rp
inputs.semanticFilterName
Outputs:
outputs.exec
outputs.instanceMapSDCudaPtr
outputs.instanceMappingInfoSDPtr
outputs.instancePrimTokenSDCudaPtr
outputs.lastUpdateTimeDenominator
outputs.lastUpdateTimeNumerator
outputs.semanticLabelTokenSDCudaPtr
outputs.semanticLocalTransformSDCudaPtr
outputs.semanticMapSDCudaPtr
outputs.semanticPrimTokenSDCudaPtr
outputs.semanticWorldTransformSDCudaPtr
Predefined Tokens:
tokens.InstanceMappingInfoSDhost
tokens.SemanticMapSD
tokens.SemanticMapSDhost
tokens.SemanticPrimTokenSD
tokens.SemanticPrimTokenSDhost
tokens.InstanceMapSD
tokens.InstanceMapSDhost
tokens.InstancePrimTokenSD
tokens.InstancePrimTokenSDhost
tokens.SemanticLabelTokenSD
tokens.SemanticLabelTokenSDhost
tokens.SemanticLocalTransformSD
tokens.SemanticLocalTransformSDhost
tokens.SemanticWorldTransformSD
tokens.SemanticWorldTransformSDhost
"""
# Imprint the generator and target ABI versions in the file for JIT generation
GENERATOR_VERSION = (1, 41, 3)
TARGET_VERSION = (2, 139, 12)
# This is an internal object that provides per-class storage of a per-node data dictionary
PER_NODE_DATA = {}
# This is an internal object that describes unchanging attributes in a generic way
# The values in this list are in no particular order, as a per-attribute tuple
# Name, Type, ExtendedTypeIndex, UiName, Description, Metadata,
# Is_Required, DefaultValue, Is_Deprecated, DeprecationMsg
# You should not need to access any of this data directly, use the defined database interfaces
INTERFACE = og.Database._get_interface([
('inputs:exec', 'execution', 0, None, 'Trigger', {}, True, None, False, ''),
('inputs:gpu', 'uint64', 0, 'gpuFoundations', 'Pointer to shared context containing gpu foundations', {}, True, 0, False, ''),
('inputs:rp', 'uint64', 0, 'renderProduct', 'Pointer to render product for this view', {}, True, 0, False, ''),
('inputs:semanticFilterName', 'token', 0, None, 'Name of the semantic filter to apply to the semanticLabelToken', {ogn.MetadataKeys.DEFAULT: '"default"'}, True, "default", False, ''),
('outputs:exec', 'execution', 0, None, 'Trigger', {}, True, None, False, ''),
('outputs:instanceMapSDCudaPtr', 'uint64', 0, None, 'cuda uint16_t buffer pointer of size numInstances containing the instance parent semantic index', {}, True, None, False, ''),
        ('outputs:instanceMappingInfoSDPtr', 'uint64', 0, None, 'uint buffer pointer containing the following information:\n[ numInstances, minInstanceId, numSemantics, minSemanticId, numProtoSemantic,\n lastUpdateTimeNumeratorHigh, lastUpdateTimeNumeratorLow, lastUpdateTimeDenominatorHigh, lastUpdateTimeDenominatorLow ]', {}, True, None, False, ''),
('outputs:instancePrimTokenSDCudaPtr', 'uint64', 0, None, 'cuda uint64_t buffer pointer of size numInstances containing the instance path token', {}, True, None, False, ''),
('outputs:lastUpdateTimeDenominator', 'uint64', 0, None, 'Time denominator of the last time the data has changed', {}, True, None, False, ''),
('outputs:lastUpdateTimeNumerator', 'int64', 0, None, 'Time numerator of the last time the data has changed', {}, True, None, False, ''),
('outputs:semanticLabelTokenSDCudaPtr', 'uint64', 0, None, 'cuda uint64_t buffer pointer of size numSemantics containing the semantic label token', {}, True, None, False, ''),
('outputs:semanticLocalTransformSDCudaPtr', 'uint64', 0, None, 'cuda float44 buffer pointer of size numSemantics containing the local semantic transform', {}, True, None, False, ''),
('outputs:semanticMapSDCudaPtr', 'uint64', 0, None, 'cuda uint16_t buffer pointer of size numSemantics containing the semantic parent semantic index', {}, True, None, False, ''),
('outputs:semanticPrimTokenSDCudaPtr', 'uint64', 0, None, 'cuda uint32_t buffer pointer of size numSemantics containing the prim part of the semantic path token', {}, True, None, False, ''),
('outputs:semanticWorldTransformSDCudaPtr', 'uint64', 0, None, 'cuda float44 buffer pointer of size numSemantics containing the world semantic transform', {}, True, None, False, ''),
])
class tokens:
InstanceMappingInfoSDhost = "InstanceMappingInfoSDhost"
SemanticMapSD = "SemanticMapSD"
SemanticMapSDhost = "SemanticMapSDhost"
SemanticPrimTokenSD = "SemanticPrimTokenSD"
SemanticPrimTokenSDhost = "SemanticPrimTokenSDhost"
InstanceMapSD = "InstanceMapSD"
InstanceMapSDhost = "InstanceMapSDhost"
InstancePrimTokenSD = "InstancePrimTokenSD"
InstancePrimTokenSDhost = "InstancePrimTokenSDhost"
SemanticLabelTokenSD = "SemanticLabelTokenSD"
SemanticLabelTokenSDhost = "SemanticLabelTokenSDhost"
SemanticLocalTransformSD = "SemanticLocalTransformSD"
SemanticLocalTransformSDhost = "SemanticLocalTransformSDhost"
SemanticWorldTransformSD = "SemanticWorldTransformSD"
SemanticWorldTransformSDhost = "SemanticWorldTransformSDhost"
@classmethod
def _populate_role_data(cls):
"""Populate a role structure with the non-default roles on this node type"""
role_data = super()._populate_role_data()
role_data.inputs.exec = og.AttributeRole.EXECUTION
role_data.outputs.exec = og.AttributeRole.EXECUTION
return role_data
class ValuesForInputs(og.DynamicAttributeAccess):
LOCAL_PROPERTY_NAMES = { }
"""Helper class that creates natural hierarchical access to input attributes"""
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
self._batchedReadAttributes = []
self._batchedReadValues = []
@property
def exec(self):
data_view = og.AttributeValueHelper(self._attributes.exec)
return data_view.get()
@exec.setter
def exec(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.exec)
data_view = og.AttributeValueHelper(self._attributes.exec)
data_view.set(value)
@property
def gpu(self):
data_view = og.AttributeValueHelper(self._attributes.gpu)
return data_view.get()
@gpu.setter
def gpu(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.gpu)
data_view = og.AttributeValueHelper(self._attributes.gpu)
data_view.set(value)
@property
def rp(self):
data_view = og.AttributeValueHelper(self._attributes.rp)
return data_view.get()
@rp.setter
def rp(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.rp)
data_view = og.AttributeValueHelper(self._attributes.rp)
data_view.set(value)
@property
def semanticFilterName(self):
data_view = og.AttributeValueHelper(self._attributes.semanticFilterName)
return data_view.get()
@semanticFilterName.setter
def semanticFilterName(self, value):
if self._setting_locked:
raise og.ReadOnlyError(self._attributes.semanticFilterName)
data_view = og.AttributeValueHelper(self._attributes.semanticFilterName)
data_view.set(value)
def _prefetch(self):
readAttributes = self._batchedReadAttributes
newValues = _og._prefetch_input_attributes_data(readAttributes)
if len(readAttributes) == len(newValues):
self._batchedReadValues = newValues
    class ValuesForOutputs(og.DynamicAttributeAccess):
        """Helper class that creates natural hierarchical access to output attributes"""
        LOCAL_PROPERTY_NAMES = { }
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
self._batchedWriteValues = { }
@property
def exec(self):
data_view = og.AttributeValueHelper(self._attributes.exec)
return data_view.get()
@exec.setter
def exec(self, value):
data_view = og.AttributeValueHelper(self._attributes.exec)
data_view.set(value)
@property
def instanceMapSDCudaPtr(self):
data_view = og.AttributeValueHelper(self._attributes.instanceMapSDCudaPtr)
return data_view.get()
@instanceMapSDCudaPtr.setter
def instanceMapSDCudaPtr(self, value):
data_view = og.AttributeValueHelper(self._attributes.instanceMapSDCudaPtr)
data_view.set(value)
@property
def instanceMappingInfoSDPtr(self):
data_view = og.AttributeValueHelper(self._attributes.instanceMappingInfoSDPtr)
return data_view.get()
@instanceMappingInfoSDPtr.setter
def instanceMappingInfoSDPtr(self, value):
data_view = og.AttributeValueHelper(self._attributes.instanceMappingInfoSDPtr)
data_view.set(value)
@property
def instancePrimTokenSDCudaPtr(self):
data_view = og.AttributeValueHelper(self._attributes.instancePrimTokenSDCudaPtr)
return data_view.get()
@instancePrimTokenSDCudaPtr.setter
def instancePrimTokenSDCudaPtr(self, value):
data_view = og.AttributeValueHelper(self._attributes.instancePrimTokenSDCudaPtr)
data_view.set(value)
@property
def lastUpdateTimeDenominator(self):
data_view = og.AttributeValueHelper(self._attributes.lastUpdateTimeDenominator)
return data_view.get()
@lastUpdateTimeDenominator.setter
def lastUpdateTimeDenominator(self, value):
data_view = og.AttributeValueHelper(self._attributes.lastUpdateTimeDenominator)
data_view.set(value)
@property
def lastUpdateTimeNumerator(self):
data_view = og.AttributeValueHelper(self._attributes.lastUpdateTimeNumerator)
return data_view.get()
@lastUpdateTimeNumerator.setter
def lastUpdateTimeNumerator(self, value):
data_view = og.AttributeValueHelper(self._attributes.lastUpdateTimeNumerator)
data_view.set(value)
@property
def semanticLabelTokenSDCudaPtr(self):
data_view = og.AttributeValueHelper(self._attributes.semanticLabelTokenSDCudaPtr)
return data_view.get()
@semanticLabelTokenSDCudaPtr.setter
def semanticLabelTokenSDCudaPtr(self, value):
data_view = og.AttributeValueHelper(self._attributes.semanticLabelTokenSDCudaPtr)
data_view.set(value)
@property
def semanticLocalTransformSDCudaPtr(self):
data_view = og.AttributeValueHelper(self._attributes.semanticLocalTransformSDCudaPtr)
return data_view.get()
@semanticLocalTransformSDCudaPtr.setter
def semanticLocalTransformSDCudaPtr(self, value):
data_view = og.AttributeValueHelper(self._attributes.semanticLocalTransformSDCudaPtr)
data_view.set(value)
@property
def semanticMapSDCudaPtr(self):
data_view = og.AttributeValueHelper(self._attributes.semanticMapSDCudaPtr)
return data_view.get()
@semanticMapSDCudaPtr.setter
def semanticMapSDCudaPtr(self, value):
data_view = og.AttributeValueHelper(self._attributes.semanticMapSDCudaPtr)
data_view.set(value)
@property
def semanticPrimTokenSDCudaPtr(self):
data_view = og.AttributeValueHelper(self._attributes.semanticPrimTokenSDCudaPtr)
return data_view.get()
@semanticPrimTokenSDCudaPtr.setter
def semanticPrimTokenSDCudaPtr(self, value):
data_view = og.AttributeValueHelper(self._attributes.semanticPrimTokenSDCudaPtr)
data_view.set(value)
@property
def semanticWorldTransformSDCudaPtr(self):
data_view = og.AttributeValueHelper(self._attributes.semanticWorldTransformSDCudaPtr)
return data_view.get()
@semanticWorldTransformSDCudaPtr.setter
def semanticWorldTransformSDCudaPtr(self, value):
data_view = og.AttributeValueHelper(self._attributes.semanticWorldTransformSDCudaPtr)
data_view.set(value)
def _commit(self):
_og._commit_output_attributes_data(self._batchedWriteValues)
self._batchedWriteValues = { }
class ValuesForState(og.DynamicAttributeAccess):
"""Helper class that creates natural hierarchical access to state attributes"""
def __init__(self, node: og.Node, attributes, dynamic_attributes: og.DynamicAttributeInterface):
"""Initialize simplified access for the attribute data"""
context = node.get_graph().get_default_graph_context()
super().__init__(context, node, attributes, dynamic_attributes)
def __init__(self, node):
super().__init__(node)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_INPUT)
self.inputs = OgnSdPostInstanceMappingDatabase.ValuesForInputs(node, self.attributes.inputs, dynamic_attributes)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_OUTPUT)
self.outputs = OgnSdPostInstanceMappingDatabase.ValuesForOutputs(node, self.attributes.outputs, dynamic_attributes)
dynamic_attributes = self.dynamic_attribute_data(node, og.AttributePortType.ATTRIBUTE_PORT_TYPE_STATE)
self.state = OgnSdPostInstanceMappingDatabase.ValuesForState(node, self.attributes.state, dynamic_attributes)
# File: omniverse-code/kit/exts/omni.syntheticdata/omni/syntheticdata/scripts/menu.py
# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
__all__ = ["SynthDataMenuContainer"]
from omni.kit.viewport.menubar.core import (
ComboBoxModel,
ComboBoxItem,
ComboBoxMenuDelegate,
CheckboxMenuDelegate,
IconMenuDelegate,
SliderMenuDelegate,
ViewportMenuContainer,
ViewportMenuItem,
ViewportMenuSeparator
)
from .SyntheticData import SyntheticData
from .visualizer_window import VisualizerWindow
import carb
import omni.ui as ui
from pathlib import Path
import weakref
from typing import Dict
ICON_PATH = Path(carb.tokens.get_tokens_interface().resolve("${omni.syntheticdata}")).joinpath("data")
UI_STYLE = {"Menu.Item.Icon::SyntheticData": {"image_url": str(ICON_PATH.joinpath("sensor_icon.svg"))}}
class SensorAngleModel(ui.AbstractValueModel):
def __init__(self, getter, setter, *args, **kwargs):
super().__init__(*args, **kwargs)
self.__getter = getter
self.__setter = setter
def destroy(self):
self.__getter = None
self.__setter = None
def get_value_as_float(self) -> float:
return self.__getter()
def get_value_as_int(self) -> int:
return int(self.get_value_as_float())
def set_value(self, value):
value = float(value)
if self.get_value_as_float() != value:
self.__setter(value)
self._value_changed()
class SensorVisualizationModel(ui.AbstractValueModel):
def __init__(self, sensor: str, visualizer_window, *args, **kwargs):
super().__init__(*args, **kwargs)
self.__sensor = sensor
self.__visualizer_window = visualizer_window
def get_value_as_bool(self) -> bool:
try:
return bool(self.__sensor in self.__visualizer_window.visualization_activation)
except:
return False
def get_value_as_int(self) -> int:
return 1 if self.get_value_as_bool() else 0
def set_value(self, enabled):
enabled = bool(enabled)
if self.get_value_as_bool() != enabled:
self.__visualizer_window.on_sensor_item_clicked(enabled, self.__sensor)
self._value_changed()
def sensor(self):
return self.__sensor
class MenuContext:
def __init__(self, viewport_api):
self.__visualizer_window = VisualizerWindow(f"{viewport_api.id}", viewport_api)
self.__hide_on_click = False
self.__sensor_models = set()
def destroy(self):
self.__sensor_models = set()
self.__visualizer_window.close()
@property
def hide_on_click(self) -> bool:
return self.__hide_on_click
def add_render_settings_items(self):
render_product_combo_model = self.__visualizer_window.render_product_combo_model
if render_product_combo_model:
ViewportMenuItem(
"RenderProduct",
delegate=ComboBoxMenuDelegate(model=render_product_combo_model),
hide_on_click=self.__hide_on_click,
)
render_var_combo_model = self.__visualizer_window.render_var_combo_model
if render_var_combo_model:
ViewportMenuItem(
"RenderVar",
delegate=ComboBoxMenuDelegate(model=render_var_combo_model),
hide_on_click=self.__hide_on_click,
)
def add_angles_items(self):
render_var_combo_model = self.__visualizer_window.render_var_combo_model
if render_var_combo_model:
ViewportMenuItem(
name="Angle",
hide_on_click=self.__hide_on_click,
delegate=SliderMenuDelegate(
model=SensorAngleModel(render_var_combo_model.get_combine_angle,
render_var_combo_model.set_combine_angle),
min=-100.0,
max=100.0,
tooltip="Set Combine Angle",
),
)
ViewportMenuItem(
name="X",
hide_on_click=self.__hide_on_click,
delegate=SliderMenuDelegate(
model=SensorAngleModel(render_var_combo_model.get_combine_divide_x,
render_var_combo_model.set_combine_divide_x),
min=-100.0,
max=100.0,
tooltip="Set Combine Divide X",
),
)
ViewportMenuItem(
name="Y",
hide_on_click=self.__hide_on_click,
delegate=SliderMenuDelegate(
model=SensorAngleModel(render_var_combo_model.get_combine_divide_y,
render_var_combo_model.set_combine_divide_y),
min=-100.0,
max=100.0,
tooltip="Set Combine Divide Y",
),
)
def add_sensor_selection(self):
for sensor_label, sensor in SyntheticData.get_registered_visualization_template_names_for_display():
model = SensorVisualizationModel(sensor, self.__visualizer_window)
self.__sensor_models.add(model)
ViewportMenuItem(
name=sensor_label,
hide_on_click=self.__hide_on_click,
delegate=CheckboxMenuDelegate(model=model, tooltip=f'Enable "{sensor}" visualization')
)
if SyntheticData.get_visualization_template_name_default_activation(sensor):
model.set_value(True)
def clear_all(self, *args, **kwargs):
for smodel in self.__sensor_models:
smodel.set_value(False)
def set_as_default(self, *args, **kwargs):
for smodel in self.__sensor_models:
SyntheticData.set_visualization_template_name_default_activation(smodel.sensor(), smodel.get_value_as_bool())
def reset_to_default(self, *args, **kwargs):
default_sensors = []
for _, sensor in SyntheticData.get_registered_visualization_template_names_for_display():
if SyntheticData.get_visualization_template_name_default_activation(sensor):
default_sensors.append(sensor)
for smodel in self.__sensor_models:
smodel.set_value(smodel.sensor() in default_sensors)
def show_window(self, *args, **kwargs):
self.__visualizer_window.toggle_enable_visualization()
class SynthDataMenuContainer(ViewportMenuContainer):
def __init__(self):
super().__init__(name="SyntheticData",
visible_setting_path="/exts/omni.syntheticdata/menubar/visible",
order_setting_path="/exts/omni.syntheticdata/menubar/order",
                         delegate=IconMenuDelegate("SyntheticData"),  # tooltip="Synthetic Data Sensors"
style=UI_STYLE)
self.__menu_context: Dict[str, MenuContext] = {}
def __del__(self):
self.destroy()
def destroy(self):
for menu_ctx in self.__menu_context.values():
menu_ctx.destroy()
self.__menu_context = {}
super().destroy()
def build_fn(self, desc: dict):
viewport_api = desc.get("viewport_api")
if not viewport_api:
return
viewport_api_id = viewport_api.id
menu_ctx = self.__menu_context.get(viewport_api_id)
if menu_ctx:
menu_ctx.destroy()
menu_ctx = MenuContext(viewport_api)
self.__menu_context[viewport_api_id] = menu_ctx
with self:
menu_ctx.add_render_settings_items()
ViewportMenuSeparator()
menu_ctx.add_angles_items()
ViewportMenuSeparator()
menu_ctx.add_sensor_selection()
if carb.settings.get_settings().get_as_bool("/exts/omni.syntheticdata/menubar/showSensorDefaultButton"):
ViewportMenuSeparator()
ViewportMenuItem(name="Set as default", hide_on_click=menu_ctx.hide_on_click, onclick_fn=menu_ctx.set_as_default)
ViewportMenuItem(name="Reset to default", hide_on_click=menu_ctx.hide_on_click, onclick_fn=menu_ctx.reset_to_default)
ViewportMenuSeparator()
ViewportMenuItem(name="Clear All", hide_on_click=menu_ctx.hide_on_click, onclick_fn=menu_ctx.clear_all)
ViewportMenuItem(name="Show Window", hide_on_click=menu_ctx.hide_on_click, onclick_fn=menu_ctx.show_window)
super().build_fn(desc)
def clear_all(self):
for menu_ctx in self.__menu_context.values():
menu_ctx.clear_all()
# File: omniverse-code/kit/exts/omni.syntheticdata/omni/syntheticdata/tests/pipeline/test_instance_mapping_update.py
import carb
import os.path
from pxr import Gf, UsdGeom, UsdLux, Sdf
import omni.hydratexture
import omni.kit.test
from omni.syntheticdata import SyntheticData, SyntheticDataStage
from ..utils import add_semantics
# Test the instance mapping update Fabric flag
class TestInstanceMappingUpdate(omni.kit.test.AsyncTestCase):
def __init__(self, methodName: str) -> None:
super().__init__(methodName=methodName)
        # Dictionary containing the pair (file_path, reference_data). If the reference data is None, only the existence of the file is validated.
self._golden_references = {}
def _texture_render_product_path(self, hydra_texture) -> str:
'''Return a string to the UsdRender.Product used by the texture'''
render_product = hydra_texture.get_render_product_path()
if render_product and (not render_product.startswith('/')):
render_product = '/Render/RenderProduct_' + render_product
return render_product
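# As a sanity sketch, the path normalization above can be reproduced by a
# standalone helper (hypothetical name, no omni.* dependency):

```python
def normalize_render_product_path(render_product: str) -> str:
    """Prefix relative render-product names with the /Render scope, as above."""
    if render_product and not render_product.startswith("/"):
        render_product = "/Render/RenderProduct_" + render_product
    return render_product

# Relative names get the scope prefix; absolute paths pass through unchanged.
print(normalize_render_product_path("TEX0"))  # /Render/RenderProduct_TEX0
```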
def _assert_count_equal(self, counter_template_name, count):
count_output = SyntheticData.Get().get_node_attributes(
counter_template_name,
["outputs:count"],
self._render_product_path
)
assert "outputs:count" in count_output
assert count_output["outputs:count"] == count
def _activate_fabric_time_range(self) -> None:
sdg_iface = SyntheticData.Get()
if not sdg_iface.is_node_template_registered("TestSimFabricTimeRange"):
sdg_iface.register_node_template(
SyntheticData.NodeTemplate(
SyntheticDataStage.ON_DEMAND,
"omni.syntheticdata.SdTestSimFabricTimeRange"
),
template_name="TestSimFabricTimeRange"
)
sdg_iface.activate_node_template(
"TestSimFabricTimeRange",
attributes={"inputs:timeRangeName":"testFabricTimeRangeTrigger"}
)
if not sdg_iface.is_node_template_registered("TestPostRenderFabricTimeRange"):
sdg_iface.register_node_template(
SyntheticData.NodeTemplate(
SyntheticDataStage.POST_RENDER,
"omni.syntheticdata.SdFabricTimeRangeExecution",
[
SyntheticData.NodeConnectionTemplate(
SyntheticData.renderer_template_name(),
attributes_mapping=
{
"outputs:rp": "inputs:renderResults",
"outputs:gpu": "inputs:gpu"
}
)
]
),
template_name="TestPostRenderFabricTimeRange"
)
sdg_iface.activate_node_template(
"TestPostRenderFabricTimeRange",
0,
[self._render_product_path],
attributes={"inputs:timeRangeName":"testFabricTimeRangeTrigger"}
)
if not sdg_iface.is_node_template_registered("TestPostProcessFabricTimeRange"):
sdg_iface.register_node_template(
SyntheticData.NodeTemplate(
SyntheticDataStage.ON_DEMAND,
"omni.syntheticdata.SdFabricTimeRangeExecution",
[
SyntheticData.NodeConnectionTemplate("PostProcessDispatch"),
SyntheticData.NodeConnectionTemplate("TestPostRenderFabricTimeRange")
]
),
template_name="TestPostProcessFabricTimeRange"
)
sdg_iface.activate_node_template(
"TestPostProcessFabricTimeRange",
0,
[self._render_product_path],
attributes={"inputs:timeRangeName":"testFabricTimeRangeTrigger"}
)
if not sdg_iface.is_node_template_registered("TestPostProcessFabricTimeRangeCounter"):
sdg_iface.register_node_template(
SyntheticData.NodeTemplate(
SyntheticDataStage.ON_DEMAND,
"omni.graph.action.Counter",
[
SyntheticData.NodeConnectionTemplate(
"TestPostProcessFabricTimeRange",
attributes_mapping={"outputs:exec": "inputs:execIn"}
)
]
),
template_name="TestPostProcessFabricTimeRangeCounter"
)
sdg_iface.activate_node_template(
"TestPostProcessFabricTimeRangeCounter",
0,
[self._render_product_path]
)
def _activate_instance_mapping_update(self) -> None:
sdg_iface = SyntheticData.Get()
if not sdg_iface.is_node_template_registered("TestPostProcessInstanceMappingUpdate"):
sdg_iface.register_node_template(
SyntheticData.NodeTemplate(
SyntheticDataStage.ON_DEMAND,
"omni.syntheticdata.SdTimeChangeExecution",
[
SyntheticData.NodeConnectionTemplate("InstanceMappingPtr"),
SyntheticData.NodeConnectionTemplate("PostProcessDispatch")
]
),
template_name="TestPostProcessInstanceMappingUpdate"
)
if not sdg_iface.is_node_template_registered("TestPostProcessInstanceMappingUpdateCounter"):
sdg_iface.register_node_template(
SyntheticData.NodeTemplate(
SyntheticDataStage.ON_DEMAND,
"omni.graph.action.Counter",
[
SyntheticData.NodeConnectionTemplate(
"TestPostProcessInstanceMappingUpdate",
attributes_mapping={"outputs:exec": "inputs:execIn"}
)
]
),
template_name="TestPostProcessInstanceMappingUpdateCounter"
)
sdg_iface.activate_node_template(
"TestPostProcessInstanceMappingUpdateCounter",
0,
[self._render_product_path]
)
async def _request_fabric_time_range_trigger(self, number_of_frames=1):
sdg_iface = SyntheticData.Get()
sdg_iface.set_node_attributes("TestSimFabricTimeRange",{"inputs:numberOfFrames":number_of_frames})
sdg_iface.request_node_execution("TestSimFabricTimeRange")
await omni.kit.app.get_app().next_update_async()
async def setUp(self):
"""Called at the begining of every tests"""
self._settings = carb.settings.acquire_settings_interface()
self._hydra_texture_factory = omni.hydratexture.acquire_hydra_texture_factory_interface()
self._usd_context_name = ''
self._usd_context = omni.usd.get_context(self._usd_context_name)
await self._usd_context.new_stage_async()
# renderer
renderer = "rtx"
if renderer not in self._usd_context.get_attached_hydra_engine_names():
omni.usd.add_hydra_engine(renderer, self._usd_context)
# create the hydra textures
self._hydra_texture_0 = self._hydra_texture_factory.create_hydra_texture(
"TEX0",
1920,
1080,
self._usd_context_name,
hydra_engine_name=renderer,
is_async=self._settings.get("/app/asyncRendering")
)
self._hydra_texture_rendered_counter = 0
def on_hydra_texture_0(event: carb.events.IEvent):
self._hydra_texture_rendered_counter += 1
self._hydra_texture_rendered_counter_sub = self._hydra_texture_0.get_event_stream().create_subscription_to_push_by_type(
omni.hydratexture.EVENT_TYPE_DRAWABLE_CHANGED,
on_hydra_texture_0,
name='async rendering test drawable update',
)
stage = omni.usd.get_context().get_stage()
world_prim = UsdGeom.Xform.Define(stage,"/World")
UsdGeom.Xformable(world_prim).AddTranslateOp().Set((0, 0, 0))
UsdGeom.Xformable(world_prim).AddRotateXYZOp().Set((0, 0, 0))
self._render_product_path = self._texture_render_product_path(self._hydra_texture_0)
await omni.syntheticdata.sensors.next_render_simulation_async(self._render_product_path)
await omni.syntheticdata.sensors.next_render_simulation_async(self._render_product_path)
async def tearDown(self):
"""Called at the end of every tests"""
self._hydra_texture_rendered_counter_sub = None
self._hydra_texture_0 = None
self._usd_context.close_stage()
omni.usd.release_all_hydra_engines(self._usd_context)
self._hydra_texture_factory = None
self._settings = None
wait_iterations = 6
for _ in range(wait_iterations):
await omni.kit.app.get_app().next_update_async()
async def test_case_0(self):
"""Test case 0 : no time range"""
self._activate_fabric_time_range()
await omni.syntheticdata.sensors.next_render_simulation_async(self._render_product_path, 11)
self._assert_count_equal("TestPostProcessFabricTimeRangeCounter", 0)
async def test_case_1(self):
"""Test case 1 : setup a time range of 5 frames"""
self._activate_fabric_time_range()
await omni.syntheticdata.sensors.next_render_simulation_async(self._render_product_path)
await self._request_fabric_time_range_trigger(5)
await omni.syntheticdata.sensors.next_render_simulation_async(self._render_product_path, 11)
self._assert_count_equal("TestPostProcessFabricTimeRangeCounter", 5)
async def test_case_2(self):
"""Test case 2 : initial instance mapping setup"""
self._activate_instance_mapping_update()
await omni.syntheticdata.sensors.next_render_simulation_async(self._render_product_path, 11)
self._assert_count_equal("TestPostProcessInstanceMappingUpdateCounter", 1)
async def test_case_3(self):
"""Test case 3 : setup an instance mapping with 1, 2, 3, 4 changes"""
stage = omni.usd.get_context().get_stage()
self._activate_instance_mapping_update()
await omni.syntheticdata.sensors.next_render_simulation_async(self._render_product_path, 1)
self._assert_count_equal("TestPostProcessInstanceMappingUpdateCounter", 1)
sphere_prim = stage.DefinePrim("/World/Sphere", "Sphere")
add_semantics(sphere_prim, "sphere")
await omni.syntheticdata.sensors.next_render_simulation_async(self._render_product_path, 3)
self._assert_count_equal("TestPostProcessInstanceMappingUpdateCounter", 2)
sub_sphere_prim = stage.DefinePrim("/World/Sphere/Sphere", "Sphere")
await omni.syntheticdata.sensors.next_render_simulation_async(self._render_product_path, 5)
self._assert_count_equal("TestPostProcessInstanceMappingUpdateCounter", 3)
add_semantics(sub_sphere_prim, "sphere")
await omni.syntheticdata.sensors.next_render_simulation_async(self._render_product_path, 1)
self._assert_count_equal("TestPostProcessInstanceMappingUpdateCounter", 4) | 11,297 | Python | 45.303279 | 146 | 0.610605 |
# File: omniverse-code/kit/exts/omni.syntheticdata/omni/syntheticdata/tests/sensors/test_display_rendervar.py
# NOTE:
# omni.kit.test - std python's unittest module with additional wrapping to add support for async/await tests
# For most things refer to unittest docs: https://docs.python.org/3/library/unittest.html
import os
import unittest
import omni.kit.test
from omni.kit.viewport.utility import get_active_viewport
from omni.syntheticdata import SyntheticData
# Test the semantic filter
class TestDisplayRenderVar(omni.kit.test.AsyncTestCase):
def __init__(self, methodName: str) -> None:
super().__init__(methodName=methodName)
async def setUp(self):
await omni.usd.get_context().new_stage_async()
self.render_product_path = get_active_viewport().render_product_path
await omni.kit.app.get_app().next_update_async()
async def wait_for_frames(self):
wait_iterations = 6
for _ in range(wait_iterations):
await omni.kit.app.get_app().next_update_async()
async def test_valid_ldrcolor_texture(self):
SyntheticData.Get().activate_node_template("LdrColorDisplay", 0, [self.render_product_path])
await self.wait_for_frames()
display_output_names = ["outputs:rpResourcePtr", "outputs:width", "outputs:height", "outputs:format"]
display_outputs = SyntheticData.Get().get_node_attributes("LdrColorDisplay", display_output_names, self.render_product_path)
assert(display_outputs and all(o in display_outputs for o in display_output_names) and display_outputs["outputs:rpResourcePtr"] != 0 and display_outputs["outputs:format"] == 11)
SyntheticData.Get().deactivate_node_template("LdrColorDisplay", 0, [self.render_product_path])
async def test_valid_bbox3d_texture(self):
SyntheticData.Get().activate_node_template("BoundingBox3DDisplay", 0, [self.render_product_path])
await self.wait_for_frames()
display_output_names = ["outputs:rpResourcePtr", "outputs:width", "outputs:height", "outputs:format"]
display_outputs = SyntheticData.Get().get_node_attributes("BoundingBox3DDisplay", display_output_names, self.render_product_path)
assert(display_outputs and all(o in display_outputs for o in display_output_names) and display_outputs["outputs:rpResourcePtr"] != 0 and display_outputs["outputs:format"] == 11)
SyntheticData.Get().deactivate_node_template("BoundingBox3DDisplay", 0, [self.render_product_path])
async def test_valid_cam3dpos_texture(self):
SyntheticData.Get().activate_node_template("Camera3dPositionDisplay", 0, [self.render_product_path])
await self.wait_for_frames()
display_output_names = ["outputs:rpResourcePtr", "outputs:width", "outputs:height", "outputs:format"]
display_outputs = SyntheticData.Get().get_node_attributes("Camera3dPositionDisplay", display_output_names, self.render_product_path)
assert(display_outputs and all(o in display_outputs for o in display_output_names) and display_outputs["outputs:rpResourcePtr"] != 0 and display_outputs["outputs:format"] == 11)
SyntheticData.Get().deactivate_node_template("Camera3dPositionDisplay", 0, [self.render_product_path])
# File: omniverse-code/kit/exts/omni.syntheticdata/omni/syntheticdata/tests/sensors/test_cross_correspondence.py
# NOTE:
# omni.kit.test - std python's unittest module with additional wrapping to add support for async/await tests
# For most things refer to unittest docs: https://docs.python.org/3/library/unittest.html
import os
import math
import asyncio
from PIL import Image
from time import time
from pathlib import Path
import carb
import numpy as np
from numpy.lib.arraysetops import unique
import omni.kit.test
from pxr import Gf, UsdGeom
from omni.kit.viewport.utility import get_active_viewport, next_viewport_frame_async, create_viewport_window
# Import extension python module we are testing with absolute import path, as if we are external user (other extension)
import omni.syntheticdata as syn
from ..utils import add_semantics
FILE_DIR = os.path.dirname(os.path.realpath(__file__))
TIMEOUT = 200
cameras = ["/World/Cameras/CameraFisheyeLeft", "/World/Cameras/CameraPinhole", "/World/Cameras/CameraFisheyeRight"]
# Having a test class derived from omni.kit.test.AsyncTestCase declared at the root of the module makes it auto-discoverable by omni.kit.test
# This test has to run last and thus it's prefixed as such to force that:
# - This is because it has to create additional viewports which makes the test
# get stuck if it's not the last one in the OV process session
class ZZHasToRunLast_TestCrossCorrespondence(omni.kit.test.AsyncTestCase):
def __init__(self, methodName: str) -> None:
super().__init__(methodName=methodName)
self.golden_image_path = Path(os.path.dirname(os.path.abspath(__file__))) / ".." / "data" / "golden"
self.output_image_path = Path(os.path.dirname(os.path.abspath(__file__))) / ".." / "data" / "output"
self.StdDevTolerance = 0.1
self.sensorViewport = None
# Before running each test
async def setUp(self):
global cameras
np.random.seed(1234)
# Load the scene
scenePath = os.path.join(FILE_DIR, "../data/scenes/cross_correspondence.usda")
await omni.usd.get_context().open_stage_async(scenePath)
await omni.kit.app.get_app().next_update_async()
# Get the main-viewport as the sensor-viewport
self.sensorViewport = get_active_viewport()
await next_viewport_frame_async(self.sensorViewport)
# Setup viewports
resolution = self.sensorViewport.resolution
viewport_windows = [None] * 2
x_pos, y_pos = 12, 75
for i in range(len(viewport_windows)):
viewport_windows[i] = create_viewport_window(width=resolution[0], height=resolution[1], position_x=x_pos, position_y=y_pos)
viewport_windows[i].width = 500
viewport_windows[i].height = 500
x_pos += 500
# Setup cameras
self.sensorViewport.camera_path = cameras[0]
for i in range(len(viewport_windows)):
viewport_windows[i].viewport_api.camera_path = cameras[i + 1]
async def test_golden_image_rt_cubemap(self):
settings = carb.settings.get_settings()
settings.set_string("/rtx/rendermode", "RaytracedLighting")
settings.set_bool("/rtx/fishEye/useCubemap", True)
await omni.kit.app.get_app().next_update_async()
# Use default viewport for sensor target as otherwise sensor enablement doesn't work
# also the test will get stuck
# Initialize Sensor
await syn.sensors.create_or_retrieve_sensor_async(
self.sensorViewport, syn._syntheticdata.SensorType.CrossCorrespondence
)
# Render one frame
await syn.sensors.next_sensor_data_async(self.sensorViewport,True)
data = syn.sensors.get_cross_correspondence(self.sensorViewport)
golden_image = np.load(self.golden_image_path / "cross_correspondence.npz")["array"]
# normalize xy (uv offset) to zw channels' value range
# x100 seems like a good number to bring uv offset to ~1
data[:, [0, 1]] *= 100
golden_image[:, [0, 1]] *= 100
std_dev = np.sqrt(np.square(data - golden_image).astype(float).mean())
if std_dev >= self.StdDevTolerance:
if not os.path.isdir(self.output_image_path):
os.mkdir(self.output_image_path)
np.savez_compressed(self.output_image_path / "cross_correspondence.npz", array=data)
golden_image = ((golden_image + 1.0) / 2) * 255
data = ((data + 1.0) / 2) * 255
Image.fromarray(golden_image.astype(np.uint8), "RGBA").save(
self.output_image_path / "cross_correspondence_golden.png"
)
Image.fromarray(data.astype(np.uint8), "RGBA").save(self.output_image_path / "cross_correspondence.png")
self.assertTrue(std_dev < self.StdDevTolerance)
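# The golden-image comparison above is a root-mean-square deviation check; a
# minimal, self-contained sketch of the same metric (illustrative arrays only):

```python
import numpy as np

def rms_deviation(data: np.ndarray, golden: np.ndarray) -> float:
    """Root-mean-square deviation, matching the std_dev expression above."""
    return float(np.sqrt(np.square(data - golden).astype(float).mean()))

# A uniform 0.05 offset yields an RMS deviation of 0.05, inside the 0.1 tolerance.
a = np.zeros((4, 4))
b = np.full((4, 4), 0.05)
assert rms_deviation(a, b) < 0.1
```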
async def test_golden_image_rt_non_cubemap(self):
settings = carb.settings.get_settings()
settings.set_string("/rtx/rendermode", "RaytracedLighting")
settings.set_bool("/rtx/fishEye/useCubemap", False)
await omni.kit.app.get_app().next_update_async()
# Use default viewport for sensor target as otherwise sensor enablement doesn't work
# also the test will get stuck
# Initialize Sensor
await syn.sensors.create_or_retrieve_sensor_async(
self.sensorViewport, syn._syntheticdata.SensorType.CrossCorrespondence
)
# Render one frame
await syn.sensors.next_sensor_data_async(self.sensorViewport,True)
data = syn.sensors.get_cross_correspondence(self.sensorViewport)
golden_image = np.load(self.golden_image_path / "cross_correspondence.npz")["array"]
# normalize xy (uv offset) to zw channels' value range
# x100 seems like a good number to bring uv offset to ~1
data[:, [0, 1]] *= 100
golden_image[:, [0, 1]] *= 100
std_dev = np.sqrt(np.square(data - golden_image).astype(float).mean())
if std_dev >= self.StdDevTolerance:
if not os.path.isdir(self.output_image_path):
os.mkdir(self.output_image_path)
np.savez_compressed(self.output_image_path / "cross_correspondence.npz", array=data)
golden_image = ((golden_image + 1.0) / 2) * 255
data = ((data + 1.0) / 2) * 255
Image.fromarray(golden_image.astype(np.uint8), "RGBA").save(
self.output_image_path / "cross_correspondence_golden.png"
)
Image.fromarray(data.astype(np.uint8), "RGBA").save(self.output_image_path / "cross_correspondence.png")
self.assertTrue(std_dev < self.StdDevTolerance)
async def test_golden_image_pt(self):
settings = carb.settings.get_settings()
settings.set_string("/rtx/rendermode", "PathTracing")
settings.set_int("/rtx/pathtracing/spp", 32)
settings.set_bool("/rtx/fishEye/useCubemap", False)
await omni.kit.app.get_app().next_update_async()
# Use default viewport for sensor target as otherwise sensor enablement doesn't work
# also the test will get stuck
# Initialize Sensor
await syn.sensors.create_or_retrieve_sensor_async(
self.sensorViewport, syn._syntheticdata.SensorType.CrossCorrespondence
)
# Render one frame
await syn.sensors.next_sensor_data_async(self.sensorViewport,True)
data = syn.sensors.get_cross_correspondence(self.sensorViewport)
golden_image = np.load(self.golden_image_path / "cross_correspondence.npz")["array"]
# normalize xy (uv offset) to zw channels' value range
# x100 seems like a good number to bring uv offset to ~1
data[:, [0, 1]] *= 100
golden_image[:, [0, 1]] *= 100
std_dev = np.sqrt(np.square(data - golden_image).astype(float).mean())
if std_dev >= self.StdDevTolerance:
if not os.path.isdir(self.output_image_path):
os.mkdir(self.output_image_path)
np.savez_compressed(self.output_image_path / "cross_correspondence.npz", array=data)
golden_image = ((golden_image + 1.0) / 2) * 255
data = ((data + 1.0) / 2) * 255
Image.fromarray(golden_image.astype(np.uint8), "RGBA").save(
self.output_image_path / "cross_correspondence_golden.png"
)
Image.fromarray(data.astype(np.uint8), "RGBA").save(self.output_image_path / "cross_correspondence.png")
self.assertTrue(std_dev < self.StdDevTolerance)
async def test_same_position(self):
global cameras
        # Make sure our cross-correspondence values converge around 0 when the target and reference cameras are
# in the same position
settings = carb.settings.get_settings()
settings.set_string("/rtx/rendermode", "PathTracing")
settings.set_int("/rtx/pathtracing/spp", 32)
settings.set_bool("/rtx/fishEye/useCubemap", False)
# Use default viewport for sensor target as otherwise sensor enablement doesn't work
# also the test will get stuck
# Move both cameras to the same position
camera_left = omni.usd.get_context().get_stage().GetPrimAtPath(cameras[0])
camera_right = omni.usd.get_context().get_stage().GetPrimAtPath(cameras[2])
UsdGeom.XformCommonAPI(camera_left).SetTranslate(Gf.Vec3d(-10, 4, 0))
UsdGeom.XformCommonAPI(camera_right).SetTranslate(Gf.Vec3d(-10, 4, 0))
await omni.kit.app.get_app().next_update_async()
# Initialize Sensor
await syn.sensors.create_or_retrieve_sensor_async(
self.sensorViewport, syn._syntheticdata.SensorType.CrossCorrespondence
)
# Render one frame
await syn.sensors.next_sensor_data_async(self.sensorViewport,True)
raw_data = syn.sensors.get_cross_correspondence(self.sensorViewport)
# Get histogram parameters
du_scale = float(raw_data.shape[1] - 1)
dv_scale = float(raw_data.shape[0] - 1)
du_img = raw_data[:, :, 0] * du_scale
dv_img = raw_data[:, :, 1] * dv_scale
# Clear all invalid pixels by setting them to 10000.0
invalid_mask = (raw_data[:, :, 2] == -1)
du_img[invalid_mask] = 10000.0
dv_img[invalid_mask] = 10000.0
# Selection mask
du_selected = (du_img >= -1.0) & (du_img < 1.0)
dv_selected = (dv_img >= -1.0) & (dv_img < 1.0)
# Calculate bins
bins = np.arange(-1.0, 1.0 + 0.1, 0.1)
# calculate histograms for cross correspondence values along eacheach axis
hist_du, edges_du = np.histogram(du_img[du_selected], bins=bins)
hist_dv, edges_dv = np.histogram(dv_img[dv_selected], bins=bins)
# ensure the (0.0, 0.0) bins contain the most values
self.assertTrue(np.argmax(hist_du) == 10)
self.assertTrue(np.argmax(hist_dv) == 10)
# After running each test
async def tearDown(self):
pass
| 10,904 | Python | 42.795181 | 141 | 0.646735 |
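The bin-index assertions in the cross-correspondence test above rely on `np.arange(-1.0, 1.0 + 0.1, 0.1)` producing edges where bin 10 starts at 0.0, so values converging to zero land in that bin. A standalone sketch of the same check, using made-up offset samples rather than real sensor output:

```python
import numpy as np

# Hypothetical uv-offset samples clustered around zero, standing in for sensor output.
offsets = np.array([0.01, -0.02, 0.03, 0.0, 0.05, -0.04, 0.02])

# Same binning as the test: edges every 0.1 from -1.0; bin index 10 covers [0.0, 0.1).
bins = np.arange(-1.0, 1.0 + 0.1, 0.1)
hist, edges = np.histogram(offsets, bins=bins)

# Values converging around zero make the bin starting at 0.0 the fullest one.
assert np.argmax(hist) == 10
```

The same argmax-on-bin-10 condition is what the test asserts for both the du and dv histograms.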
omniverse-code/kit/exts/omni.syntheticdata/omni/syntheticdata/tests/sensors/test_rendervar_buff_host_ptr.py | # NOTE:
#   omni.kit.test - std python's unittest module with additional wrapping to add support for async/await tests
#   For most things refer to unittest docs: https://docs.python.org/3/library/unittest.html
import unittest

import numpy as np
import ctypes

import omni.kit.test
from omni.gpu_foundation_factory import TextureFormat
from omni.kit.viewport.utility import get_active_viewport
from pxr import UsdGeom, UsdLux

# Import the extension python module we are testing with an absolute import path, as if we are an external user (other extension)
import omni.syntheticdata as syn

from ..utils import add_semantics


# Test the following SyntheticData nodes:
#   - SdPostRenderVarTextureToBuffer : node to convert a texture device rendervar into a buffer device rendervar
#   - SdPostRenderVarToHost : node to read back a device rendervar into a host rendervar
#   - SdRenderVarPtr : node to expose, in the action graph, raw device / host pointers on the renderVars
#
# The tests consist of pulling the ptr data and comparing it with the data output by:
#   - SdRenderVarToRawArray
#
class TestRenderVarBuffHostPtr(omni.kit.test.AsyncTestCase):
    _tolerance = 1.1
    _outputs_ptr = ["outputs:dataPtr", "outputs:width", "outputs:height", "outputs:bufferSize", "outputs:format", "outputs:strides"]
    _outputs_arr = ["outputs:data", "outputs:width", "outputs:height", "outputs:bufferSize", "outputs:format"]

    @staticmethod
    def _texture_element_size(texture_format):
        if texture_format == int(TextureFormat.RGBA16_SFLOAT):
            return 8
        elif texture_format == int(TextureFormat.RGBA32_SFLOAT):
            return 16
        elif texture_format == int(TextureFormat.R32_SFLOAT):
            return 4
        elif texture_format == int(TextureFormat.RGBA8_UNORM):
            return 4
        elif texture_format == int(TextureFormat.R32_UINT):
            return 4
        else:
            return 0

    @staticmethod
    def _assert_equal_tex_infos(out_a, out_b):
        assert (
            (out_a["outputs:width"] == out_b["outputs:width"])
            and (out_a["outputs:height"] == out_b["outputs:height"])
            and (out_a["outputs:format"] == out_b["outputs:format"])
        )

    @staticmethod
    def _assert_equal_buff_infos(out_a, out_b):
        assert out_a["outputs:bufferSize"] == out_b["outputs:bufferSize"]

    @staticmethod
    def _assert_equal_data(data_a, data_b):
        assert np.amax(np.square(data_a - data_b)) < TestRenderVarBuffHostPtr._tolerance

    def _get_raw_array(self, render_var):
        ptr_outputs = syn.SyntheticData.Get().get_node_attributes(
            render_var + "ExportRawArray", TestRenderVarBuffHostPtr._outputs_arr, self.render_product
        )
        is_texture = ptr_outputs["outputs:width"] > 0
        if is_texture:
            elem_size = TestRenderVarBuffHostPtr._texture_element_size(ptr_outputs["outputs:format"])
            arr_shape = (ptr_outputs["outputs:height"], ptr_outputs["outputs:width"], elem_size)
            ptr_outputs["outputs:data"] = ptr_outputs["outputs:data"].reshape(arr_shape)
        return ptr_outputs

    def _get_ptr_array(self, render_var, ptr_suffix):
        ptr_outputs = syn.SyntheticData.Get().get_node_attributes(
            render_var + ptr_suffix, TestRenderVarBuffHostPtr._outputs_ptr, self.render_product
        )
        c_ptr = ctypes.cast(ptr_outputs["outputs:dataPtr"], ctypes.POINTER(ctypes.c_ubyte))
        is_texture = ptr_outputs["outputs:width"] > 0
        if is_texture:
            elem_size = TestRenderVarBuffHostPtr._texture_element_size(ptr_outputs["outputs:format"])
            arr_shape = (ptr_outputs["outputs:height"], ptr_outputs["outputs:width"], elem_size)
            arr_strides = ptr_outputs["outputs:strides"]
            buffer_size = arr_strides[1] * arr_shape[1]
            arr_strides = (arr_strides[1], arr_strides[0], 1)
            data_ptr = np.ctypeslib.as_array(c_ptr, shape=(buffer_size,))
            data_ptr = np.lib.stride_tricks.as_strided(data_ptr, shape=arr_shape, strides=arr_strides)
        else:
            data_ptr = np.ctypeslib.as_array(c_ptr, shape=(ptr_outputs["outputs:bufferSize"],))
        ptr_outputs["outputs:dataPtr"] = data_ptr
        return ptr_outputs

    def _assert_equal_rv_ptr(self, render_var: str, ptr_suffix: str, texture=None):
        arr_out = self._get_raw_array(render_var)
        ptr_out = self._get_ptr_array(render_var, ptr_suffix)
        if texture is not None:
            if texture:
                TestRenderVarBuffHostPtr._assert_equal_tex_infos(arr_out, ptr_out)
            else:
                TestRenderVarBuffHostPtr._assert_equal_buff_infos(arr_out, ptr_out)
        TestRenderVarBuffHostPtr._assert_equal_data(arr_out["outputs:data"], ptr_out["outputs:dataPtr"])

    def _assert_equal_rv_ptr_size(self, render_var: str, ptr_suffix: str, arr_size: int):
        ptr_out = self._get_ptr_array(render_var, ptr_suffix)
        data_ptr = ptr_out["outputs:dataPtr"]
        # Helper for setting the value: print the size if None
        if arr_size is None:
            print(f"EqualRVPtrSize : {render_var} = {data_ptr.size}")
        else:
            assert data_ptr.size == arr_size

    def _assert_equal_rv_arr(self, render_var: str, ptr_suffix: str, texture=None):
        arr_out_a = self._get_raw_array(render_var)
        arr_out_b = self._get_raw_array(render_var + ptr_suffix)
        if texture is not None:
            if texture:
                TestRenderVarBuffHostPtr._assert_equal_tex_infos(arr_out_a, arr_out_b)
            else:
                TestRenderVarBuffHostPtr._assert_equal_buff_infos(arr_out_a, arr_out_b)
        TestRenderVarBuffHostPtr._assert_equal_data(
            arr_out_a["outputs:data"].flatten(), arr_out_b["outputs:data"].flatten()
        )

    def _assert_executed_rv_ptr(self, render_var: str, ptr_suffix: str):
        ptr_outputs = syn.SyntheticData.Get().get_node_attributes(
            render_var + ptr_suffix, ["outputs:exec"], self.render_product
        )
        assert ptr_outputs["outputs:exec"] > 0

    def __init__(self, methodName: str) -> None:
        super().__init__(methodName=methodName)

    async def setUp(self):
        await omni.usd.get_context().new_stage_async()
        stage = omni.usd.get_context().get_stage()

        world_prim = UsdGeom.Xform.Define(stage, "/World")
        UsdGeom.Xformable(world_prim).AddTranslateOp().Set((0, 0, 0))
        UsdGeom.Xformable(world_prim).AddRotateXYZOp().Set((0, 0, 0))

        sphere_prim = stage.DefinePrim("/World/Sphere", "Sphere")
        add_semantics(sphere_prim, "sphere")
        UsdGeom.Xformable(sphere_prim).AddTranslateOp().Set((0, 0, 0))
        UsdGeom.Xformable(sphere_prim).AddScaleOp().Set((77, 77, 77))
        UsdGeom.Xformable(sphere_prim).AddRotateXYZOp().Set((-90, 0, 0))
        sphere_prim.GetAttribute("primvars:displayColor").Set([(1, 0.3, 1)])

        capsule0_prim = stage.DefinePrim("/World/Sphere/Capsule0", "Capsule")
        add_semantics(capsule0_prim, "capsule0")
        UsdGeom.Xformable(capsule0_prim).AddTranslateOp().Set((3, 0, 0))
        UsdGeom.Xformable(capsule0_prim).AddRotateXYZOp().Set((0, 0, 0))
        capsule0_prim.GetAttribute("primvars:displayColor").Set([(0.3, 1, 0)])

        capsule1_prim = stage.DefinePrim("/World/Sphere/Capsule1", "Capsule")
        add_semantics(capsule1_prim, "capsule1")
        UsdGeom.Xformable(capsule1_prim).AddTranslateOp().Set((-3, 0, 0))
        UsdGeom.Xformable(capsule1_prim).AddRotateXYZOp().Set((0, 0, 0))
        capsule1_prim.GetAttribute("primvars:displayColor").Set([(0, 1, 0.3)])

        capsule2_prim = stage.DefinePrim("/World/Sphere/Capsule2", "Capsule")
        add_semantics(capsule2_prim, "capsule2")
        UsdGeom.Xformable(capsule2_prim).AddTranslateOp().Set((0, 3, 0))
        UsdGeom.Xformable(capsule2_prim).AddRotateXYZOp().Set((0, 0, 0))
        capsule2_prim.GetAttribute("primvars:displayColor").Set([(0.7, 0.1, 0.4)])

        capsule3_prim = stage.DefinePrim("/World/Sphere/Capsule3", "Capsule")
        add_semantics(capsule3_prim, "capsule3")
        UsdGeom.Xformable(capsule3_prim).AddTranslateOp().Set((0, -3, 0))
        UsdGeom.Xformable(capsule3_prim).AddRotateXYZOp().Set((0, 0, 0))
        capsule3_prim.GetAttribute("primvars:displayColor").Set([(0.1, 0.7, 0.4)])

        spherelight = UsdLux.SphereLight.Define(stage, "/SphereLight")
        spherelight.GetIntensityAttr().Set(30000)
        spherelight.GetRadiusAttr().Set(30)

        self.viewport = get_active_viewport()
        self.render_product = self.viewport.render_product_path
        await omni.kit.app.get_app().next_update_async()

    async def test_host_arr(self):
        render_vars = [
            "BoundingBox2DLooseSD",
            "SemanticLocalTransformSD",
        ]
        for render_var in render_vars:
            syn.SyntheticData.Get().activate_node_template(render_var + "ExportRawArray", 0, [self.render_product])
            syn.SyntheticData.Get().activate_node_template(render_var + "hostExportRawArray", 0, [self.render_product])
        await syn.sensors.next_render_simulation_async(self.render_product, 1)
        for render_var in render_vars:
            self._assert_equal_rv_arr(render_var, "host", False)

    async def test_host_ptr_size(self):
        render_vars = {
            "BoundingBox3DSD": 576,
            "BoundingBox2DLooseSD": 144,
            "SemanticLocalTransformSD": 320,
            "Camera3dPositionSD": 14745600,
            "SemanticMapSD": 10,
            "InstanceSegmentationSD": 3686400,
            "SemanticBoundingBox3DCamExtentSD": 120,
            "SemanticBoundingBox3DFilterInfosSD": 24,
        }
        for render_var in render_vars:
            syn.SyntheticData.Get().activate_node_template(render_var + "hostPtr", 0, [self.render_product])
        await syn.sensors.next_render_simulation_async(self.render_product, 1)
        for render_var, arr_size in render_vars.items():
            self._assert_equal_rv_ptr_size(render_var, "hostPtr", arr_size)

    async def test_buff_arr(self):
        render_vars = [
            "Camera3dPositionSD",
            "DistanceToImagePlaneSD",
        ]
        for render_var in render_vars:
            syn.SyntheticData.Get().activate_node_template(render_var + "ExportRawArray", 0, [self.render_product])
            syn.SyntheticData.Get().activate_node_template(render_var + "buffExportRawArray", 0, [self.render_product])
        await syn.sensors.next_render_simulation_async(self.render_product, 1)
        for render_var in render_vars:
            self._assert_equal_rv_arr(render_var, "buff")

    async def test_host_ptr(self):
        render_vars = [
            "BoundingBox2DTightSD",
            "BoundingBox3DSD",
            "InstanceMapSD",
        ]
        for render_var in render_vars:
            syn.SyntheticData.Get().activate_node_template(render_var + "ExportRawArray", 0, [self.render_product])
            syn.SyntheticData.Get().activate_node_template(render_var + "hostPtr", 0, [self.render_product])
        await syn.sensors.next_render_simulation_async(self.render_product, 1)
        for render_var in render_vars:
            self._assert_equal_rv_ptr(render_var, "hostPtr", False)
            self._assert_executed_rv_ptr(render_var, "hostPtr")

    async def test_host_ptr_tex(self):
        render_vars = [
            "NormalSD",
            "DistanceToCameraSD",
        ]
        for render_var in render_vars:
            syn.SyntheticData.Get().activate_node_template(render_var + "ExportRawArray", 0, [self.render_product])
            syn.SyntheticData.Get().activate_node_template(render_var + "hostPtr", 0, [self.render_product])
        await syn.sensors.next_render_simulation_async(self.render_product, 1)
        for render_var in render_vars:
            self._assert_equal_rv_ptr(render_var, "hostPtr", True)

    async def test_buff_host_ptr(self):
        render_vars = [
            "LdrColorSD",
            "InstanceSegmentationSD",
        ]
        for render_var in render_vars:
            syn.SyntheticData.Get().activate_node_template(render_var + "ExportRawArray", 0, [self.render_product])
            syn.SyntheticData.Get().activate_node_template(render_var + "buffhostPtr", 0, [self.render_product])
        await syn.sensors.next_render_simulation_async(self.render_product, 1)
        for render_var in render_vars:
            self._assert_equal_rv_ptr(render_var, "buffhostPtr", True)

    async def test_empty_semantic_host_ptr(self):
        await omni.usd.get_context().new_stage_async()
        self.viewport = get_active_viewport()
        self.render_product = self.viewport.render_product_path
        await omni.kit.app.get_app().next_update_async()

        render_vars = [
            "BoundingBox2DTightSD",
            "BoundingBox3DSD",
            "InstanceMapSD",
        ]
        for render_var in render_vars:
            syn.SyntheticData.Get().activate_node_template(render_var + "hostPtr", 0, [self.render_product])
        await syn.sensors.next_render_simulation_async(self.render_product, 1)
        for render_var in render_vars:
            self._assert_executed_rv_ptr(render_var, "hostPtr")

    # After running each test
    async def tearDown(self):
        pass
| 13,257 | Python | 48.103704 | 156 | 0.646225 |
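The `_get_ptr_array` helper in the file above turns a raw `outputs:dataPtr` address into a NumPy view via `ctypes` and `np.lib.stride_tricks.as_strided`. A minimal standalone sketch of that pattern, substituting an ordinary NumPy buffer for renderer-owned memory (the 4x3 single-byte "texture" and its strides are illustrative, not real render-var values):

```python
import ctypes
import numpy as np

# Backing buffer standing in for renderer-owned memory: a 4x3 texture of 1-byte texels.
height, width, elem_size = 4, 3, 1
backing = np.arange(height * width, dtype=np.uint8)
raw_ptr = backing.ctypes.data  # integer address, like a node's outputs:dataPtr

# Re-wrap the address as a ctypes pointer, then as a flat byte view (no copy).
c_ptr = ctypes.cast(raw_ptr, ctypes.POINTER(ctypes.c_ubyte))
flat = np.ctypeslib.as_array(c_ptr, shape=(height * width * elem_size,))

# Reinterpret as (height, width, elem_size) with explicit byte strides.
strides = (width * elem_size, elem_size, 1)
view = np.lib.stride_tricks.as_strided(flat, shape=(height, width, elem_size), strides=strides)

assert view.shape == (4, 3, 1)
assert int(view[2, 1, 0]) == int(backing[2 * width + 1])  # same memory, reindexed
```

Note that `as_strided` performs no bounds checking, which is why the test above derives the shape and strides from the node's own outputs before wrapping the pointer.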
omniverse-code/kit/exts/omni.usd.schema.audio/pxr/AudioSchema/__init__.py | #!/usr/bin/env python3
#
# Copyright (c) 2020-2021, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
import omni.log
omni.log.warn("pxr.AudioSchema is deprecated - please use pxr.OmniAudioSchema instead")
from pxr.OmniAudioSchema import *
| 597 | Python | 34.176469 | 87 | 0.80067 |
omniverse-code/kit/exts/omni.usd.schema.audio/pxr/OmniAudioSchema/_omniAudioSchema.pyi | from __future__ import annotations
import pxr.OmniAudioSchema._omniAudioSchema
import typing
import Boost.Python
import pxr.OmniAudioSchema
import pxr.Usd
import pxr.UsdGeom

__all__ = [
    "Listener",
    "OmniListener",
    "OmniSound",
    "Sound",
    "Tokens"
]


class Listener(OmniListener, pxr.UsdGeom.Xformable, pxr.UsdGeom.Imageable, pxr.Usd.Typed, pxr.Usd.SchemaBase, Boost.Python.instance):
    @staticmethod
    def Define(*args, **kwargs) -> None: ...
    @staticmethod
    def Get(*args, **kwargs) -> None: ...
    @staticmethod
    def GetSchemaAttributeNames(*args, **kwargs) -> None: ...
    @staticmethod
    def _GetStaticTfType(*args, **kwargs) -> None: ...
    __instance_size__ = 40
    pass


class OmniListener(pxr.UsdGeom.Xformable, pxr.UsdGeom.Imageable, pxr.Usd.Typed, pxr.Usd.SchemaBase, Boost.Python.instance):
    @staticmethod
    def CreateConeAnglesAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreateConeLowPassFilterAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreateConeVolumesAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreateOrientationFromViewAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def Define(*args, **kwargs) -> None: ...
    @staticmethod
    def Get(*args, **kwargs) -> None: ...
    @staticmethod
    def GetConeAnglesAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetConeLowPassFilterAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetConeVolumesAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetOrientationFromViewAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetSchemaAttributeNames(*args, **kwargs) -> None: ...
    @staticmethod
    def _GetStaticTfType(*args, **kwargs) -> None: ...
    __instance_size__ = 40
    pass


class Sound(OmniSound, pxr.UsdGeom.Xformable, pxr.UsdGeom.Imageable, pxr.Usd.Typed, pxr.Usd.SchemaBase, Boost.Python.instance):
    @staticmethod
    def Define(*args, **kwargs) -> None: ...
    @staticmethod
    def Get(*args, **kwargs) -> None: ...
    @staticmethod
    def GetSchemaAttributeNames(*args, **kwargs) -> None: ...
    @staticmethod
    def _GetStaticTfType(*args, **kwargs) -> None: ...
    __instance_size__ = 40
    pass


class OmniSound(pxr.UsdGeom.Xformable, pxr.UsdGeom.Imageable, pxr.Usd.Typed, pxr.Usd.SchemaBase, Boost.Python.instance):
    @staticmethod
    def CreateAttenuationRangeAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreateAttenuationTypeAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreateAuralModeAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreateConeAnglesAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreateConeLowPassFilterAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreateConeVolumesAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreateEnableDistanceDelayAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreateEnableDopplerAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreateEnableInterauralDelayAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreateEndTimeAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreateFilePathAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreateGainAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreateLoopCountAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreateMediaOffsetEndAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreateMediaOffsetStartAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreatePriorityAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreateStartTimeAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def CreateTimeScaleAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def Define(*args, **kwargs) -> None: ...
    @staticmethod
    def Get(*args, **kwargs) -> None: ...
    @staticmethod
    def GetAttenuationRangeAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetAttenuationTypeAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetAuralModeAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetConeAnglesAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetConeLowPassFilterAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetConeVolumesAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetEnableDistanceDelayAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetEnableDopplerAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetEnableInterauralDelayAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetEndTimeAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetFilePathAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetGainAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetLoopCountAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetMediaOffsetEndAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetMediaOffsetStartAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetPriorityAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetSchemaAttributeNames(*args, **kwargs) -> None: ...
    @staticmethod
    def GetStartTimeAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def GetTimeScaleAttr(*args, **kwargs) -> None: ...
    @staticmethod
    def _GetStaticTfType(*args, **kwargs) -> None: ...
    __instance_size__ = 40
    pass


class Tokens(Boost.Python.instance):
    attenuationRange = 'attenuationRange'
    attenuationType = 'attenuationType'
    auralMode = 'auralMode'
    coneAngles = 'coneAngles'
    coneLowPassFilter = 'coneLowPassFilter'
    coneVolumes = 'coneVolumes'
    default_ = 'default'
    enableDistanceDelay = 'enableDistanceDelay'
    enableDoppler = 'enableDoppler'
    enableInterauralDelay = 'enableInterauralDelay'
    endTime = 'endTime'
    filePath = 'filePath'
    gain = 'gain'
    inverse = 'inverse'
    linear = 'linear'
    linearSquare = 'linearSquare'
    loopCount = 'loopCount'
    mediaOffsetEnd = 'mediaOffsetEnd'
    mediaOffsetStart = 'mediaOffsetStart'
    nonSpatial = 'nonSpatial'
    off = 'off'
    on = 'on'
    orientationFromView = 'orientationFromView'
    priority = 'priority'
    spatial = 'spatial'
    startTime = 'startTime'
    timeScale = 'timeScale'
    pass


__MFB_FULL_PACKAGE_NAME = 'omniAudioSchema'
| 6,388 | unknown | 34.494444 | 133 | 0.651534 |
omniverse-code/kit/exts/omni.rtx.ujitsoprocessors/omni/rtx/ujitsoprocessors/_ujitsoprocessors.pyi | """pybind11 omni.rtx.ujitsoprocessors bindings"""
from __future__ import annotations
import omni.rtx.ujitsoprocessors._ujitsoprocessors
import typing

__all__ = [
    "IUJITSOProcessors",
    "acquire_ujitsoprocessors_interface",
    "release_ujitsoprocessors_interface"
]


class IUJITSOProcessors():
    def runHTTPJobs(self, arg0: str, arg1: str) -> str: ...
    pass


def acquire_ujitsoprocessors_interface(plugin_name: str = None, library_path: str = None) -> IUJITSOProcessors:
    pass


def release_ujitsoprocessors_interface(arg0: IUJITSOProcessors) -> None:
    pass
| 574 | unknown | 27.749999 | 111 | 0.736934 |
omniverse-code/kit/exts/omni.rtx.ujitsoprocessors/omni/rtx/ujitsoprocessors/__init__.py | from ._ujitsoprocessors import *
| 33 | Python | 15.999992 | 32 | 0.787879 |
omniverse-code/kit/exts/omni.rtx.ujitsoprocessors/omni/rtx/ujitsoprocessors/tests/__init__.py | from .test_ujitsoprocessors import *
| 37 | Python | 17.999991 | 36 | 0.810811 |
omniverse-code/kit/exts/omni.rtx.ujitsoprocessors/omni/rtx/ujitsoprocessors/tests/test_ujitsoprocessors.py | ## Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
##
## NVIDIA CORPORATION and its licensors retain all intellectual property
## and proprietary rights in and to this software, related documentation
## and any modifications thereto. Any use, reproduction, disclosure or
## distribution of this software and related documentation without an express
## license agreement from NVIDIA CORPORATION is strictly prohibited.
##
import omni.kit.app
from omni.kit.test import AsyncTestCase
import pathlib
import omni.rtx.ujitsoprocessors
try:
    import omni.kit.test

    TestClass = omni.kit.test.AsyncTestCase
    TEST_DIR = str(pathlib.Path(__file__).parent.parent.parent.parent.parent.parent.parent)
except ImportError:
    from . import TestRtScenegraph

    TestClass = TestRtScenegraph
    TEST_DIR = "."


class TestConverter(AsyncTestCase):
    async def setUp(self):
        self._iface = omni.rtx.ujitsoprocessors.acquire_ujitsoprocessors_interface()

    async def tearDown(self):
        pass

    async def test_001_startstop(self):
        print(self._iface.runHTTPJobs("", ""))
| 1,104 | Python | 29.694444 | 91 | 0.744565 |
omniverse-code/kit/exts/omni.kit.window.privacy/omni/kit/window/privacy/__init__.py | from .privacy_window import *
| 30 | Python | 14.499993 | 29 | 0.766667 |
omniverse-code/kit/exts/omni.kit.window.privacy/omni/kit/window/privacy/privacy_window.py | import toml
import os
import asyncio
import weakref
import carb
import carb.settings
import carb.tokens
import omni.client
import omni.kit.app
import omni.kit.ui
import omni.ext
from omni import ui
WINDOW_NAME = "About"
class PrivacyWindow:
    def __init__(self, privacy_file=None):
        if not privacy_file:
            privacy_file = carb.tokens.get_tokens_interface().resolve("${omni_config}/privacy.toml")
        privacy_data = toml.load(privacy_file) if os.path.exists(privacy_file) else {}

        # NVIDIA employee?
        privacy_dict = privacy_data.get("privacy", {})
        userId = privacy_dict.get("userId", None)
        if not userId or not userId.endswith("@nvidia.com"):
            return

        # Already set?
        extraDiagnosticDataOptIn = privacy_dict.get("extraDiagnosticDataOptIn", None)
        if extraDiagnosticDataOptIn:
            return

        self._window = ui.Window(
            "Privacy",
            flags=ui.WINDOW_FLAGS_NO_TITLE_BAR | ui.WINDOW_FLAGS_NO_RESIZE | ui.WINDOW_FLAGS_NO_MOVE,
            auto_resize=True,
        )

        def on_ok_clicked():
            allow_for_public_build = self._cb.model.as_bool
            carb.log_info(f"writing privacy file: '{privacy_file}'. allow_for_public_build: {allow_for_public_build}")
            privacy_data.setdefault("privacy", {})["extraDiagnosticDataOptIn"] = (
                "externalBuilds" if allow_for_public_build else "internalBuilds"
            )
            with open(privacy_file, "w") as f:
                toml.dump(privacy_data, f)
            self._window.visible = False

        text = """
By accessing or using Omniverse Beta, you agree to share usage and performance data as well as diagnostic data, including crash data. This will help us to optimize our features, prioritize development, and make positive changes for future development.

As an NVIDIA employee, your email address will be associated with any collected diagnostic data from internal builds to help us improve Omniverse. Below, you can optionally choose to associate your email address with any collected diagnostic data from publicly available builds. Please contact us at [email protected] for any questions.
"""
        with self._window.frame:
            with ui.ZStack(width=0, height=0):
                with ui.VStack(style={"margin": 5}, height=0):
                    ui.Label(text, width=600, style={"font_size": 18}, word_wrap=True)
                    ui.Separator()
                    with ui.HStack(height=0):
                        self._cb = ui.CheckBox(width=0, height=0)
                        self._cb.model.set_value(True)
                        ui.Label(
                            "Associate my @nvidia.com email address with diagnostic data from publicly available builds.",
                            value=True,
                            style={"font_size": 16},
                        )
                    with ui.HStack(height=0):
                        ui.Spacer()
                        ui.Button("OK", width=100, clicked_fn=lambda: on_ok_clicked())
                        ui.Spacer()
        self._window.visible = True
        self.on_ok_clicked = on_ok_clicked


async def _create_window(ext_weak):
    # Wait for a few frames to get everything into a working state first
    for _ in range(5):
        await omni.kit.app.get_app().next_update_async()
    if ext_weak():
        ext_weak()._window = PrivacyWindow()


class Extension(omni.ext.IExt):
    def on_startup(self, ext_id):
        asyncio.ensure_future(_create_window(weakref.ref(self)))

    def on_shutdown(self):
        self._window = None
omniverse-code/kit/exts/omni.kit.window.privacy/omni/kit/window/privacy/tests/__init__.py | from .test_window_privacy import *
| 35 | Python | 16.999992 | 34 | 0.771429 |
omniverse-code/kit/exts/omni.kit.window.privacy/omni/kit/window/privacy/tests/test_window_privacy.py | import tempfile
import toml
import omni.kit.test
import omni.kit.window.privacy
class TestPrivacyWindow(omni.kit.test.AsyncTestCase):
    async def test_privacy(self):
        with tempfile.TemporaryDirectory() as tmp_dir:
            privacy_file = f"{tmp_dir}/privacy.toml"

            def write_data(data):
                with open(privacy_file, "w") as f:
                    toml.dump(data, f)

            # NVIDIA user, first time
            write_data({"privacy": {"userId": "[email protected]"}})
            w = omni.kit.window.privacy.PrivacyWindow(privacy_file)
            self.assertIsNotNone(w._window)
            w.on_ok_clicked()
            data = toml.load(privacy_file)
            self.assertEqual(data["privacy"]["extraDiagnosticDataOptIn"], "externalBuilds")

            # NVIDIA user, second time
            w = omni.kit.window.privacy.PrivacyWindow(privacy_file)
            self.assertFalse(hasattr(w, "_window"))

            # NVIDIA user, first time, checkbox off
            write_data({"privacy": {"userId": "[email protected]"}})
            w = omni.kit.window.privacy.PrivacyWindow(privacy_file)
            self.assertIsNotNone(w._window)
            w._cb.model.set_value(False)
            w.on_ok_clicked()
            data = toml.load(privacy_file)
            self.assertEqual(data["privacy"]["extraDiagnosticDataOptIn"], "internalBuilds")

            # Non-NVIDIA user
            write_data({"privacy": {"userId": "[email protected]"}})
            w = omni.kit.window.privacy.PrivacyWindow(privacy_file)
            self.assertFalse(hasattr(w, "_window"))
| 1,602 | Python | 34.622221 | 91 | 0.591136 |
omniverse-code/kit/exts/omni.kit.test/omni/kit/test/ext_utils.py | from __future__ import annotations
from collections import defaultdict
from contextlib import suppress
from types import ModuleType
from typing import Dict, Tuple
import fnmatch
import sys
import unittest
import carb
import omni.kit.app
from .unittests import get_tests_to_remove_from_modules
# Type for collecting module information corresponding to extensions
ModuleMap_t = Dict[str, Tuple[str, bool]]
# ==============================================================================================================
def get_module_to_extension_map() -> ModuleMap_t:
    """Returns a dictionary mapping the names of Python modules in an extension to (OwningExtension, EnabledState),
    e.g. for this extension it would contain {"omni.kit.test": ("omni.kit.test", True)}.
    It will be expanded to include the implicit test modules added by the test management.
    """
    module_map = {}
    manager = omni.kit.app.get_app().get_extension_manager()
    for extension in manager.fetch_extension_summaries():
        ext_id = None
        enabled = False
        with suppress(KeyError):
            if extension["enabled_version"]["enabled"]:
                ext_id = extension["enabled_version"]["id"]
                enabled = True
        if ext_id is None:
            try:
                ext_id = extension["latest_version"]["id"]
            except KeyError:
                # Silently skip any extensions that do not have enough information to identify them
                continue
        ext_dict = manager.get_extension_dict(ext_id)
        # Look for the defined Python modules, skipping processing of any extensions that have no Python modules
        try:
            module_list = ext_dict["python"]["module"]
        except (KeyError, TypeError):
            continue
        # Walk through the list of all modules independently as there is no guarantee they are related.
        for module in module_list:
            # Some modules do not have names, only paths - just ignore them
            with suppress(KeyError):
                module_map[module["name"]] = (ext_id, enabled)
                # Add the two test modules that are explicitly added by the testing system
                if not module["name"].endswith(".tests"):
                    module_map[module["name"] + ".tests"] = (ext_id, enabled)
                    module_map[module["name"] + ".ogn.tests"] = (ext_id, enabled)
    return module_map


# ==============================================================================================================
def extension_from_test_name(test_name: str, module_map: ModuleMap_t) -> tuple[str, bool, str, bool] | None:
    """Given a test name, return None if the extension couldn't be inferred from the name, otherwise a tuple
    containing the name of the owning extension, a boolean indicating if it is currently enabled, a string
    indicating in which Python module the test was found, and a boolean indicating if that module is currently
    imported, or None if it was not.

    Args:
        test_name: Full name of the test to look up
        module_map: Module-to-extension mapping. Passed in for sharing, as it is expensive to compute.

    The algorithm walks backwards from the full name to find the maximum-length Python import module known to be
    part of an extension that is part of the test name. It does this because the exact import paths can be nested
    or not nested, e.g. omni.kit.window.tests is not part of omni.kit.window.

    Extracting the extension from the test name is a little tricky but all of the information is available. Here is
    how it attempts to decompose a sample test name:

    .. code-block:: text

        omni.graph.nodes.tests.tests_for_samples.TestsForSamples.test_for_sample
        +--------------+ +---+ +---------------+ +-------------+ +-------------+
          Import path     |           |                 |               |
              Testing subdirectory    |                 |               |
                                  Test File        Test Class      Test Name

    Each extension has a list of import paths of Python modules it explicitly defines, and in addition it will add
    implicit imports for .tests and .ogn.tests submodules that are not explicitly listed in the extension dictionary.
    With this structure the user could have done any of these imports:

    .. code-block:: python

        import omni.graph.nodes
        import omni.graph.nodes.tests
        import omni.graph.nodes.test_for_samples

    Each nested one may or may not have been exposed by the parent so it is important to do a greedy match.
    This is how the process of decoding works for this test:

    .. code-block:: text

        Split the test name on "."
            ["omni", "graph", "nodes", "tests", "tests_for_samples", "TestsForSamples", "test_for_sample"]
        Starting at the entire list, recursively remove one element until a match in the module dictionary is found
            Fail: "omni.graph.nodes.tests.tests_for_samples.TestsForSamples.test_for_sample"
            Fail: "omni.graph.nodes.tests.tests_for_samples.TestsForSamples"
            Fail: "omni.graph.nodes.tests.tests_for_samples"
            Succeed: "omni.graph.nodes.tests"
        If no success, or if sys.modules does not contain the found module:
            Return the extension id, enabled state, and None for the module
        Else:
            Check the module recursively for exposed attributes with the rest of the names. In this example:
                file_object = getattr(module, "tests_for_samples")
                class_object = getattr(file_object, "TestsForSamples")
                test_object = getattr(class_object, "test_for_sample")
            If test_object is valid:
                Return the extension id, enabled state, and the found module
            Else:
                Return the extension id, enabled state, and None for the module
    """
    class_elements = test_name.split(".")
    for el in range(len(class_elements), 0, -1):
        check_module = ".".join(class_elements[:el])
        # If the extension owned the module then process it, otherwise continue up to the parent element
        try:
            (ext_id, is_enabled) = module_map[check_module]
        except KeyError:
            continue
        # The module was found in an extension definition but not imported into the Python namespace yet
        try:
            module = sys.modules[check_module]
        except KeyError:
            return (ext_id, is_enabled, check_module, False)
        # This checks to make sure that the actual test is registered in the module that was found.
        # e.g. if the full name is omni.graph.nodes.tests.TestStuff.test_stuff then we would expect to find
        # a module named "omni.graph.nodes.tests" that contains "TestStuff", and the object "TestStuff" will
        # in turn contain "test_stuff".
        sub_module = module
        for elem in class_elements[el:]:
            sub_module = getattr(sub_module, elem, None)
            if sub_module is None:
                break
        return (ext_id, is_enabled, module, sub_module is not None)
    return None
# ==============================================================================================================
def test_only_extension_dependencies(ext_id: str) -> set[str]:
"""Returns a set of extensions with test-only dependencies on the given one.
Not currently used as dynamically enabling random extensions is not stable enough to use here yet.
"""
test_only_extensions = set() # Set of extensions that are enabled only in testing mode
manager = omni.kit.app.get_app().get_extension_manager()
ext_dict = manager.get_extension_dict(ext_id)
# Find the test-only dependencies that may also not be enabled yet
if ext_dict and "test" in ext_dict:
for test_info in ext_dict["test"]:
try:
new_extensions = test_info["dependencies"]
except (KeyError, TypeError):
new_extensions = []
for new_extension in new_extensions:
with suppress(KeyError):
test_only_extensions.add(new_extension)
return test_only_extensions
# ==============================================================================================================
def decompose_test_list(
test_list: list[str]
) -> tuple[list[unittest.TestCase], set[str], set[str], defaultdict[str, set[str]]]:
"""Read in the given log file and return the list of tests that were run, in the order in which they were run.
TODO: Move this outside the core omni.kit.test area as it requires external knowledge
If any modules containing the tests in the log are not currently available then they are reported for the user
to intervene and most likely enable the owning extensions.
Args:
test_list: List of tests to decompose and find modules and extensions for
Returns:
Tuple of (tests, not_found, extensions, modules) gleaned from the log file
tests: List of unittest.TestCase for all tests named in the log file, in the order they appeared
not_found: Name of tests whose location could not be determined, or that did not exist
extensions: Name of extensions containing modules that look like they contain tests from "not_found"
modules: Map of extension to list of modules where the extension is enabled but the module potentially
containing the tests from "not_found" has not been imported.
"""
module_map = get_module_to_extension_map()
not_found = set() # The set of full names of tests whose module was not found
# Map of enabled extensions to modules in them that contain tests in the log but which are not imported
modules_not_imported = defaultdict(set)
test_names = []
modules_found = set() # Modules matching the tests
extensions_to_enable = set()
# Walk the test list and parse out all of the test run information
for test_name in test_list:
test_info = extension_from_test_name(test_name, module_map)
if test_info is None:
not_found.add(test_name)
else:
(ext_id, ext_enabled, module, module_imported) = test_info
if ext_enabled:
if module is None:
not_found.add(test_name)
elif not module_imported:
modules_not_imported[ext_id].add(module)
else:
test_names.append(test_name)
modules_found.add(module)
else:
extensions_to_enable.add(ext_id)
# Carefully find all of the desired test cases, preserving the order in which they were encountered since that is
# a key feature of reading tests from a log
test_mapping: dict[str, unittest.TestCase] = {} # Mapping of test name onto discovered test case for running
for module in modules_found:
# Find all of the individual tests in the TestCase classes and add those that match any of the disabled patterns
# Uses get_tests_to_remove_from_modules because it considers possible tests and ogn.tests submodules
test_cases = get_tests_to_remove_from_modules([module])
for test_case in test_cases:
if test_case.id() in test_names:
test_mapping[test_case.id()] = test_case
tests: list[unittest.TestCase] = []
for test_name in test_names:
if test_name in test_mapping:
tests.append(test_mapping[test_name])
return (tests, not_found, extensions_to_enable, modules_not_imported)
# ==============================================================================================================
def find_disabled_tests() -> list[unittest.TestCase]:
"""Scan the existing tests and the extension.toml to find all tests that are currently disabled"""
manager = omni.kit.app.get_app().get_extension_manager()
# Find the per-extension list of (python_modules, disabled_patterns).
def __get_disabled_patterns() -> list[tuple[list[ModuleType], list[str]]]:
disabled_patterns = []
summaries = manager.fetch_extension_summaries()
for extension in summaries:
try:
if not extension["enabled_version"]["enabled"]:
continue
except KeyError:
carb.log_info(f"Could not find enabled state of extension {extension}")
continue
ext_id = extension["enabled_version"]["id"]
ext_dict = manager.get_extension_dict(ext_id)
# Look for the defined Python modules
modules = []
with suppress(KeyError, TypeError):
modules += [sys.modules[module_info["name"]] for module_info in ext_dict["python"]["module"]]
# Look for unreliable tests
regex_list = []
with suppress(KeyError, TypeError):
test_info = ext_dict["test"]
for test_details in test_info or []:
with suppress(KeyError):
regex_list += test_details["pythonTests"]["unreliable"]
if regex_list:
disabled_patterns.append((modules, regex_list))
return disabled_patterns
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def _match(to_match: str, pattern: str) -> bool:
"""Match that supports wildcards and '!' to invert it"""
should_match = True
if pattern.startswith("!"):
pattern = pattern[1:]
should_match = False
return should_match == fnmatch.fnmatch(to_match, pattern)
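# Illustration (not part of the module): a leading "!" inverts the wildcard
# match, so "!*test_baz" matches any test id that does NOT end in "test_baz".

```python
import fnmatch

def match(to_match: str, pattern: str) -> bool:
    # Same rule as _match above: a leading "!" inverts the wildcard match
    should_match = True
    if pattern.startswith("!"):
        pattern = pattern[1:]
        should_match = False
    return should_match == fnmatch.fnmatch(to_match, pattern)

print(match("omni.foo.TestA.test_bar", "*test_bar"))   # True
print(match("omni.foo.TestA.test_bar", "!*test_baz"))  # True
print(match("omni.foo.TestA.test_baz", "!*test_baz"))  # False
```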
tests = []
for modules, regex_list in __get_disabled_patterns():
# Find all of the individual tests in the TestCase classes and add those that match any of the disabled patterns
# Uses get_tests_to_remove_from_modules because it considers possible tests and ogn.tests submodules
test_cases = get_tests_to_remove_from_modules(modules)
for test_case in test_cases:
for regex in regex_list:
if _match(test_case.id(), regex):
tests.append(test_case)
break
return tests
| 14,477 | Python | 46.782178 | 120 | 0.60344 |
omniverse-code/kit/exts/omni.kit.test/omni/kit/test/ext_test_generator.py | # WARNING: This file/interface is likely to be removed after transition from Legacy Viewport is complete.
# See merge_requests/17255
__all__ = ["get_tests_to_run"]
# Rewrite a Legacy Viewport test to launch with Viewport backend only
def _get_viewport_next_test_config(test):
# Check for flag to run test only on legacy Viewport
if test.config.get("viewport_legacy_only", False):
return None
# Get the test dependencies
test_dependencies = test.config.get("dependencies", tuple())
# Check if legacy Viewport is in the test dependency list
if "omni.kit.window.viewport" not in test_dependencies:
return None
# Deep copy the config dependencies
test_dependencies = list(test_dependencies)
# Re-write any 'omni.kit.window.viewport' dependency as 'omni.kit.viewport.window'
for i in range(len(test_dependencies)):
cur_dep = test_dependencies[i]
if cur_dep == "omni.kit.window.viewport":
test_dependencies[i] = "omni.kit.viewport.window"
# Shallow copy of the config
test_config = test.config.copy()
# set the config name
test_config["name"] = "viewport_next"
# Replace the dependencies
test_config["dependencies"] = test_dependencies
# Add any additional args by deep copying the arg list
test_args = list(test.config.get("args", tuple()))
test_args.append("--/exts/omni.kit.viewport.window/startup/windowName=Viewport")
# Replace the args
test_config["args"] = test_args
# TODO: Error if legacy Viewport somehow still inserting itself into the test run
# Return the new config
return test_config
def get_tests_to_run(test, ExtTest, run_context, is_parallel_run: bool, valid: bool):
"""For a test gather all unique test-runs that should be invoked"""
# First run the additional tests, followed by the original / base-case
# No need to run additional tests if the original is not valid
if valid:
# Currently the only usage of this method is to have legacy Viewport tests run against the new Viewport
additional_tests = {
'viewport_next': _get_viewport_next_test_config
}
for test_name, get_config in additional_tests.items():
new_config = get_config(test)
if new_config:
yield ExtTest(
ext_id=test.ext_id,
ext_info=test.ext_info,
test_config=new_config,
test_id=f"{test.test_id}-{test_name}",
is_parallel_run=is_parallel_run,
run_context=run_context,
test_app=test.test_app,
valid=valid
)
# Run the original / base-case
yield test
| 2,762 | Python | 35.84 | 106 | 0.640116 |
omniverse-code/kit/exts/omni.kit.test/omni/kit/test/teamcity.py | import os
import sys
import time
import pathlib
from functools import lru_cache
_quote = {"'": "|'", "|": "||", "\n": "|n", "\r": "|r", "[": "|[", "]": "|]"}
def escape_value(value):
return "".join(_quote.get(x, x) for x in value)
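# Sketch of the escaping rule above, applied to a string containing the
# characters TeamCity service messages require to be quoted:

```python
_quote = {"'": "|'", "|": "||", "\n": "|n", "\r": "|r", "[": "|[", "]": "|]"}

def escape(value: str) -> str:
    # Replace each reserved character with its "|"-prefixed escape sequence
    return "".join(_quote.get(x, x) for x in value)

print(escape("a|b\n[c]"))  # a||b|n|[c|]
```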
@lru_cache()
def is_running_in_teamcity():
return bool(os.getenv("TEAMCITY_VERSION"))
@lru_cache()
def get_teamcity_build_url() -> str:
teamcity_url = os.getenv("TEAMCITY_BUILD_URL")
if teamcity_url and not teamcity_url.startswith("http"):
teamcity_url = "https://" + teamcity_url
return teamcity_url or ""
# TeamCity Service messages documentation
# https://www.jetbrains.com/help/teamcity/service-messages.html
def teamcity_publish_artifact(artifact_path: str, stream=sys.stdout):
if not is_running_in_teamcity():
return
tc_message = f"##teamcity[publishArtifacts '{escape_value(artifact_path)}']\n"
stream.write(tc_message)
stream.flush()
def teamcity_log_fail(teamCityName, msg, stream=sys.stdout):
if not is_running_in_teamcity():
return
tc_message = f"##teamcity[testFailed name='{teamCityName}' message='{teamCityName} failed. Reason {msg}. Check artifacts for logs']\n"
stream.write(tc_message)
stream.flush()
def teamcity_test_retry_support(enabled: bool, stream=sys.stdout):
"""
With this option enabled, the successful run of a test will mute its previous failure,
which means that TeamCity will mute a test if it fails and then succeeds within the same build.
Such tests will not affect the build status.
"""
if not is_running_in_teamcity():
return
retry_support = str(bool(enabled)).lower()
tc_message = f"##teamcity[testRetrySupport enabled='{retry_support}']\n"
stream.write(tc_message)
stream.flush()
def teamcity_show_image(label: str, image_path: str, stream=sys.stdout):
if not is_running_in_teamcity():
return
tc_message = f"##teamcity[testMetadata type='image' name='{label}' value='{image_path}']\n"
stream.write(tc_message)
stream.flush()
def teamcity_publish_image_artifact(src_path: str, dest_path: str, inline_image_label: str = None, stream=sys.stdout):
if not is_running_in_teamcity():
return
tc_message = f"##teamcity[publishArtifacts '{src_path} => {dest_path}']\n"
stream.write(tc_message)
if inline_image_label:
result_path = str(pathlib.PurePath(dest_path).joinpath(os.path.basename(src_path)).as_posix())
teamcity_show_image(inline_image_label, result_path)
stream.flush()
def teamcity_message(message_name, stream=sys.stdout, **properties):
if not is_running_in_teamcity():
return
current_time = time.time()
(current_time_int, current_time_fraction) = divmod(current_time, 1)
current_time_struct = time.localtime(current_time_int)
timestamp = time.strftime("%Y-%m-%dT%H:%M:%S.", current_time_struct) + "%03d" % (int(current_time_fraction * 1000))
message = "##teamcity[%s timestamp='%s'" % (message_name, timestamp)
for k in sorted(properties.keys()):
value = properties[k]
if value is None:
continue
message += f" {k}='{escape_value(str(value))}'"
message += "]\n"
# Python may buffer it for a long time, flushing helps to see real-time result
stream.write(message)
stream.flush()
# Based on metadata message for TC:
# https://www.jetbrains.com/help/teamcity/reporting-test-metadata.html#Reporting+Additional+Test+Data
def teamcity_metadata_message(metadata_value, stream=sys.stdout, metadata_name="", metadata_testname=""):
teamcity_message(
"testMetadata",
stream=stream,
testName=metadata_testname,
name=metadata_name,
value=metadata_value,
)
def teamcity_status(text, status: str = "success", stream=sys.stdout):
teamcity_message("buildStatus", stream=stream, text=text, status=status)
| 3,924 | Python | 30.910569 | 138 | 0.667686 |
omniverse-code/kit/exts/omni.kit.test/omni/kit/test/repo_test_context.py | import json
import logging
import os
logger = logging.getLogger(__name__)
class RepoTestContext: # pragma: no cover
def __init__(self):
self.context = None
repo_test_context_file = os.environ.get("REPO_TEST_CONTEXT", None)
if repo_test_context_file and os.path.exists(repo_test_context_file):
print("Found repo test context file:", repo_test_context_file)
with open(repo_test_context_file) as f:
self.context = json.load(f)
logger.info("repo test context: %s", self.context)
def get(self):
return self.context
| 609 | Python | 28.047618 | 77 | 0.627258 |
omniverse-code/kit/exts/omni.kit.test/omni/kit/test/nvdf.py | import itertools
import json
import logging
import os
import re
import sys
import time
import urllib.error
import urllib.request
from collections import defaultdict
from functools import lru_cache
from pathlib import Path
from typing import Dict, List, Tuple
import carb.settings
import omni.kit.app
from .gitlab import get_gitlab_build_url, is_running_in_gitlab
from .teamcity import get_teamcity_build_url, is_running_in_teamcity
from .utils import call_git, get_global_test_output_path, is_running_on_ci
logger = logging.getLogger(__name__)
@lru_cache()
def get_nvdf_report_filepath() -> str:
return os.path.join(get_global_test_output_path(), "nvdf_data.json")
def _partition(pred, iterable):
"""Use a predicate to partition entries into false entries and true entries"""
t1, t2 = itertools.tee(iterable)
return itertools.filterfalse(pred, t1), filter(pred, t2)
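# A self-contained sketch of the partition helper above: tee the iterable and
# return (entries where pred is False, entries where pred is True).

```python
import itertools

def partition(pred, iterable):
    # Same shape as _partition above
    t1, t2 = itertools.tee(iterable)
    return itertools.filterfalse(pred, t1), filter(pred, t2)

odds, evens = partition(lambda x: x % 2 == 0, [1, 2, 3, 4, 5])
print(list(odds), list(evens))  # [1, 3, 5] [2, 4]
```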
def _get_json_data(report_data: List[Dict[str, str]], app_info: dict, ci_info: dict) -> Dict:
"""Transform report_data into json data
Input:
{"event": "start", "test_id": "omni.kit.viewport", "ext_name": "omni.kit.viewport", "start_time": 1664831914.002093}
{"event": "stop", "test_id": "omni.kit.viewport", "ext_name": "omni.kit.viewport", "passed": true, "skipped": false, "stop_time": 1664831927.1145973, "duration": 13.113}
...
Output:
{
"omni.kit.viewport+omni.kit.viewport": {
"app": { ... },
"teamcity": { ... },
"test": {
"b_passed": false,
"d_duration": 1.185,
"s_ext_name": "omni.kit.viewport",
"s_name": "omni.kit.viewport",
"s_type": "exttest",
"ts_start_time": 1664893802530,
"ts_stop_time": 1664893803715
"result": { ... },
},
},
}
"""
def _aggregate_json_data(data: dict, config=""):
test_id = data["test_id"] + config
# increment retries count - that info is not available in the report_data, start count at 0 (-1 + 1 = 0)
# by keeping all passed results we can know if all retries failed and set consecutive_failure to true
if data["event"] == "start":
retries = test_retries.get(test_id, -1) + 1
test_retries[test_id] = retries
data["retries"] = retries
else:
retries = test_retries.get(test_id, 0)
if data["event"] == "stop":
test_results[test_id].append(data.get("passed", False))
test_id += f"{CONCAT_CHAR}{retries}"
test_data = json_data.get(test_id, {}).get("test", {})
test_data.update(data)
# special case for time, convert it to nvdf ts_ format right away
if "start_time" in test_data:
test_data["ts_start_time"] = int(test_data.pop("start_time") * 1000)
if "stop_time" in test_data:
test_data["ts_stop_time"] = int(test_data.pop("stop_time") * 1000)
# event is discarded
if "event" in test_data:
test_data.pop("event")
# init passed to false if needed, it can be missing if a test crashes (no stop event)
if "passed" not in test_data:
test_data["passed"] = False
json_data.update({test_id: {"app": app_info, "ci": ci_info, "test": test_data}})
CONCAT_CHAR = "|"
MIN_CONSECUTIVE_FAILURES = 3 # this value is in sync with repo.toml testExtMaxTestRunCount=3
test_retries: Dict[str, int] = {}
test_results = defaultdict(list)
exttest, unittest = _partition(lambda data: data["test_type"] == "unittest", report_data)
# add exttests - group by name + retry count
json_data: Dict[str, str] = {}
for data in exttest:
_aggregate_json_data(data)
# second loop to only keep exttest with results
for key, data in list(json_data.items()):
if not data.get("test", {}).get("result"):
del json_data[key]
# add all unittests - group by name + config + retry count
for data in unittest:
config = data["ext_test_id"].rsplit(CONCAT_CHAR, maxsplit=1)
config = f"{CONCAT_CHAR}{config[1]}" if len(config) > 1 else ""
_aggregate_json_data(data, config)
# second loop to tag all consecutive failures (when all results are false and equal or above the retry count)
for key, data in json_data.items():
results = test_results.get(key.rsplit(CONCAT_CHAR, maxsplit=1)[0])
all_failures = results and not any(results) and len(results) >= MIN_CONSECUTIVE_FAILURES - 1
if all_failures:
data["test"]["consecutive_failure"] = all_failures
return json_data
def _can_post_to_nvdf() -> bool:
if omni.kit.app.get_app().is_app_external():
logger.info("nvdf is disabled for external build")
return False
if not is_running_on_ci():
logger.info("nvdf posting only enabled on CI")
return False
return True
def post_to_nvdf(report_data: List[Dict[str, str]]):
if not report_data or not _can_post_to_nvdf():
return
try:
app_info = get_app_info()
ci_info = _get_ci_info()
json_data = _get_json_data(report_data, app_info, ci_info)
with open(get_nvdf_report_filepath(), "w") as f:
json.dump(json_data, f, skipkeys=True, sort_keys=True, indent=4)
# convert json_data to nvdf form and add to list
json_array = []
for data in json_data.values():
data["ts_created"] = int(time.time() * 1000)
json_array.append(to_nvdf_form(data))
# post all results in one request
project = "omniverse-kit-tests-results-v2"
json_str = json.dumps(json_array, skipkeys=True)
_post_json(project, json_str)
# print(json_str) # uncomment to debug
except Exception as e:
logger.warning(f"Exception occurred: {e}")
def post_coverage_to_nvdf(coverage_data: Dict[str, Dict]):
if not coverage_data or not _can_post_to_nvdf():
return
try:
app_info = get_app_info()
ci_info = _get_ci_info()
# convert json_data to nvdf form and add to list
json_array = []
for data in coverage_data.values():
data["ts_created"] = int(time.time() * 1000)
data["app"] = app_info
data["ci"] = ci_info
json_array.append(to_nvdf_form(data))
# post all results in one request
project = "omniverse-kit-tests-coverage-v2"
json_str = json.dumps(json_array, skipkeys=True)
_post_json(project, json_str)
# print(json_str) # uncomment to debug
except Exception as e:
logger.warning(f"Exception occurred: {e}")
def _post_json(project: str, json_str: str):
url = f"https://gpuwa.nvidia.com/dataflow/{project}/posting"
try:
resp = None
req = urllib.request.Request(url)
req.add_header("Content-Type", "application/json; charset=utf-8")
json_data_bytes = json_str.encode("utf-8") # needs to be bytes
# use a short 10 seconds timeout to avoid taking too much time in case of problems
resp = urllib.request.urlopen(req, json_data_bytes, timeout=10)
except (urllib.error.URLError, json.JSONDecodeError) as e:
logger.warning(f"Error sending request to nvdf, response: {resp}, exception: {e}")
def query_nvdf(query: str) -> dict:
project = "df-omniverse-kit-tests-results-v2*"
url = f"https://gpuwa.nvidia.com:443/elasticsearch/{project}/_search"
try:
resp = None
req = urllib.request.Request(url)
req.add_header("Content-Type", "application/json; charset=utf-8")
json_data = json.dumps(query).encode("utf-8")
# use a short 10 seconds timeout to avoid taking too much time in case of problems
with urllib.request.urlopen(req, data=json_data, timeout=10) as resp:
return json.loads(resp.read())
except (urllib.error.URLError, json.JSONDecodeError) as e:
logger.warning(f"Request error to nvdf, response: {resp}, exception: {e}")
return {}
@lru_cache()
def _detect_kit_branch_and_mr(full_kit_version: str) -> Tuple[str, int]:
match = re.search(r"^([^\+]+)\+([^\.]+)", full_kit_version)
if match is None:
logger.warning(f"Cannot detect kit SDK branch from: {full_kit_version}")
branch = "Unknown"
else:
if match[2] == "release":
branch = f"release/{match[1]}"
else:
branch = match[2]
# merge requests will be named mr1234 with 1234 being the merge request number
if branch.startswith("mr") and branch[2:].isdigit():
mr = int(branch[2:])
branch = "" # if we have an mr we don't have the branch name
else:
mr = 0
return branch, mr
@lru_cache()
def _find_repository_info() -> str:
"""Get repo remote origin url, fallback on yaml if not found"""
res = call_git(["config", "--get", "remote.origin.url"])
remote_url = res.stdout.strip("\n") if res and res.returncode == 0 else ""
if remote_url:
return remote_url
# Attempt to find the repository from yaml file
kit_root = Path(sys.argv[0]).parent
if kit_root.stem.lower() != "kit":
info_yaml = kit_root.joinpath("INFO.yaml")
if not info_yaml.exists():
info_yaml = kit_root.joinpath("PACKAGE-INFO.yaml")
if info_yaml.exists():
repo_re = re.compile(r"^Repository\s*:\s*(.+)$", re.MULTILINE)
content = info_yaml.read_text()
matches = repo_re.findall(content)
if len(matches) == 1:
return matches[0].strip()
return ""
@lru_cache()
def get_app_info() -> Dict:
"""This should be part of omni.kit.app.
Example response:
{
"app_name": "omni.app.full.kit",
"app_version": "1.0.1",
"kit_version_full": "103.1+release.10030.f5f9dcab.tc",
"kit_version": "103.1",
"kit_build_number": 10030,
"branch": "master"
"config": "release",
"platform": "windows-x86_64",
"python_version": "cp37"
}
"""
app = omni.kit.app.get_app()
ver = app.get_build_version() # eg 103.1+release.10030.f5f9dcab.tc
branch, mr = _detect_kit_branch_and_mr(ver)
settings = carb.settings.get_settings()
info = {
"app_name": settings.get("/app/name"),
"app_name_full": settings.get("/app/window/title") or settings.get("/app/name"),
"app_version": settings.get("/app/version"),
"branch": branch,
"merge_request": mr,
"git_hash": ver.rsplit(".", 2)[1],
"git_remote_url": _find_repository_info(),
"kit_version_full": ver,
"kit_version": ver.split("+", 1)[0],
"kit_build_number": int(ver.rsplit(".", 3)[1]),
}
info.update(app.get_platform_info())
return info
@lru_cache()
def _get_ci_info() -> Dict:
info = {
"ci_name": "local",
}
if is_running_in_teamcity():
info.update(
{
"ci_name": "teamcity",
"build_id": os.getenv("TEAMCITY_BUILD_ID") or "",
"build_config_name": os.getenv("TEAMCITY_BUILDCONF_NAME") or "",
"build_url": get_teamcity_build_url(),
"project_name": os.getenv("TEAMCITY_PROJECT_NAME") or "",
}
)
elif is_running_in_gitlab():
info.update(
{
"ci_name": "gitlab",
"build_id": os.getenv("CI_PIPELINE_ID") or "",
"build_config_name": os.getenv("CI_JOB_NAME") or "",
"build_url": get_gitlab_build_url(),
"project_name": os.getenv("CI_PROJECT_NAME") or "",
}
)
# todo : support github
return info
def to_nvdf_form(data: dict) -> Dict:
"""Convert dict to NVDF-compliant form.
https://confluence.nvidia.com/display/nvdataflow/NVDataFlow#NVDataFlow-PostingPayload
"""
reserved = {"ts_created", "_id"}
prefixes = {str: "s_", float: "d_", int: "l_", bool: "b_", list: "obj_", tuple: "obj_"}
key_illegal_pattern = "[!@#$%^&*.]+"
def _convert(d):
result = {}
try:
for key, value in d.items():
key = re.sub(key_illegal_pattern, "_", key)
if key in reserved:
result[key] = value
elif key.startswith("ts_"):
result[key] = value
elif isinstance(value, dict):
# note that nvdf docs state this should prefix with 'obj_', but without works also.
# We choose not to as it matches up with existing fields from kit benchmarking
result[key] = _convert(value)
elif hasattr(value, "__dict__"):
# support for Classes
result[key] = _convert(value.__dict__)
elif isinstance(value, (list, tuple)):
_type = type(value[0]) if value else str
result[prefixes[_type] + key] = value
elif isinstance(value, (str, float, int, bool)):
result[prefixes[type(value)] + key] = value
else:
raise ValueError(f"Type {type(value)} not supported in nvdf (data: {data})")
return result
except Exception as e:
raise Exception(f"Exception for {key} {value} -> {e}")
return _convert(data)
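# Minimal sketch of the type-prefixing convention implemented above.  Bool
# values get "b_" rather than "l_" because the lookup keys on the exact type
# (type(True) is bool, not int).

```python
import re

PREFIXES = {str: "s_", float: "d_", int: "l_", bool: "b_"}

def nvdf_keys(d: dict) -> dict:
    out = {}
    for key, value in d.items():
        # Replace illegal key characters, then prefix leaf keys by value type
        key = re.sub("[!@#$%^&*.]+", "_", key)
        if isinstance(value, dict):
            out[key] = nvdf_keys(value)
        else:
            out[PREFIXES[type(value)] + key] = value
    return out

print(nvdf_keys({"test": {"passed": True, "duration": 1.5, "retries": 2, "name": "t1"}}))
# {'test': {'b_passed': True, 'd_duration': 1.5, 'l_retries': 2, 's_name': 't1'}}
```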
def remove_nvdf_form(data: dict):
prefixes = ["s_", "d_", "l_", "b_"]
def _convert(d):
result = {}
try:
for key, value in d.items():
if isinstance(value, dict):
# note that nvdf docs state this should prefix with 'obj_', but without works also.
# We choose not to as it matches up with existing fields from kit benchmarking
result[key] = _convert(value)
elif hasattr(value, "__dict__"):
# support for Classes
result[key] = _convert(value.__dict__)
elif isinstance(value, (list, tuple, str, float, int, bool)):
if key[:2] in prefixes:
key = key[2:]
result[key] = value
else:
raise ValueError(f"Type {type(value)} not supported in nvdf (data: {data})")
return result
except Exception as e:
raise Exception(f"Exception for {key} {value} -> {e}")
return _convert(data)
| 14,649 | Python | 35.901763 | 174 | 0.569117 |
omniverse-code/kit/exts/omni.kit.test/omni/kit/test/__init__.py | import asyncio
import omni.kit.app
import omni.ext
from .async_unittest import AsyncTestCase
from .async_unittest import AsyncTestCaseFailOnLogError
from .async_unittest import AsyncTestSuite
from .utils import get_setting, get_global_test_output_path, get_test_output_path
from .ext_utils import decompose_test_list
from .ext_utils import extension_from_test_name
from .ext_utils import find_disabled_tests
from .ext_utils import get_module_to_extension_map
from .ext_utils import test_only_extension_dependencies
from . import unittests
from .unittests import get_tests_from_modules
from .unittests import get_tests_to_remove_from_modules
from .unittests import run_tests
from .unittests import get_tests
from .unittests import remove_from_dynamic_test_cache
from .exttests import run_ext_tests, shutdown_ext_tests
from .exttests import ExtTest, ExtTestResult
from .test_reporters import TestRunStatus
from .test_reporters import add_test_status_report_cb
from .test_reporters import remove_test_status_report_cb
from .test_coverage import PyCoverageCollector
from .test_populators import DEFAULT_POPULATOR_NAME, TestPopulator, TestPopulateAll, TestPopulateDisabled
from .reporter import generate_report
try:
from omni.kit.omni_test_registry import omni_test_registry
except ImportError:
# omni_test_registry is copied at build time into the omni.kit.test extension directory in _build
pass
async def _auto_run_tests(run_tests_and_exit: bool):
# Skip 2 updates to make sure all extensions loaded and initialized
await omni.kit.app.get_app().next_update_async()
await omni.kit.app.get_app().next_update_async()
# Run Extension tests?
# This part runs on the parent Kit Process that triggers all extension tests
test_exts = list(get_setting("/exts/omni.kit.test/testExts", default=[]))
if len(test_exts) > 0:
# Quit on finish:
def on_finish(result: bool):
# generate coverage report at the end?
if get_setting("/exts/omni.kit.test/testExtGenerateCoverageReport", default=False):
generate_report()
returncode = 0 if result else 21
omni.kit.app.get_app().post_quit(returncode)
exclude_exts = list(get_setting("/exts/omni.kit.test/excludeExts", default=[]))
run_ext_tests(test_exts, on_finish_fn=on_finish, exclude_exts=exclude_exts)
return
# Print tests?
# This part runs on the child Kit Process to print the number of extension tests
if len(test_exts) == 0 and get_setting("/exts/omni.kit.test/printTestsAndQuit", default=False):
unittests.print_tests()
omni.kit.app.get_app().post_quit(0)
return
# Run python tests?
# This part runs on the child Kit Process that performs the extension tests
if run_tests_and_exit:
tests_filter = get_setting("/exts/omni.kit.test/runTestsFilter", default="")
from unittest.result import TestResult
# Quit on finish:
def on_finish(result: TestResult):
returncode = 0 if result.wasSuccessful() else 13
cpp_test_res = get_setting("/exts/omni.kit.test/~cppTestResult", default=None)
if cpp_test_res is not None:
returncode += cpp_test_res
if not get_setting("/exts/omni.kit.test/doNotQuit", default=False):
omni.kit.app.get_app().post_quit(returncode)
unittests.run_tests(unittests.get_tests(tests_filter), on_finish)
class _TestAutoRunner(omni.ext.IExt):
"""Automatically run tests based on setting"""
def __init__(self):
super().__init__()
self._py_coverage = PyCoverageCollector()
def on_startup(self):
# Report generate mode?
if get_setting("/exts/omni.kit.test/testExtGenerateReport", default=False):
generate_report()
omni.kit.app.get_app().post_quit(0)
return
# Otherwise: regular test run
run_tests_and_exit = get_setting("/exts/omni.kit.test/runTestsAndQuit", default=False)
ui_mode = get_setting("/exts/omni.kit.test/testExtUIMode", default=False)
# If launching a Python test then start test coverage subsystem (might do nothing depending on the settings)
if run_tests_and_exit or ui_mode:
self._py_coverage.startup()
def on_app_ready(e):
asyncio.ensure_future(_auto_run_tests(run_tests_and_exit))
self._app_ready_sub = (
omni.kit.app.get_app()
.get_startup_event_stream()
.create_subscription_to_pop_by_type(
omni.kit.app.EVENT_APP_READY, on_app_ready, name="omni.kit.test start tests"
)
)
def on_shutdown(self):
# Stop coverage and generate report if it's started.
self._py_coverage.shutdown()
shutdown_ext_tests()
| 4,851 | Python | 39.099173 | 116 | 0.683983 |
omniverse-code/kit/exts/omni.kit.test/omni/kit/test/sampling.py | import datetime
import logging
import random
from statistics import mean
from .nvdf import get_app_info, query_nvdf
from .utils import clamp, get_setting, is_running_on_ci
logger = logging.getLogger(__name__)
class SamplingFactor:
LOWER_BOUND = 0.0
UPPER_BOUND = 1.0
MID_POINT = 0.5
class Sampling:
"""Basic Tests Sampling support"""
AGG_TEST_IDS = "test_ids"
AGG_LAST_PASSED = "last_passed"
LAST_PASSED_COUNT = 3
TEST_IDS_COUNT = 1000
DAYS = 4
def __init__(self, app_info: dict):
self.tests_sample = []
self.tests_run_count = []
self.query_result = False
self.app_info = app_info
def run_query(self, extension_name: str, unittests: list, running_on_ci: bool):
# when running locally skip the nvdf query
if running_on_ci:
try:
self.query_result = self._query_nvdf(extension_name, unittests)
except Exception as e:
logger.warning(f"Exception while doing nvdf query: {e}")
else:
self.query_result = True
# populate test list if empty, can happen both locally and on CI
if self.query_result and not self.tests_sample:
self.tests_sample = unittests
self.tests_run_count = [SamplingFactor.MID_POINT] * len(self.tests_sample)
def get_tests_to_skip(self, sampling_factor: float) -> list:
if not self.query_result:
return []
weights = self._calculate_weights()
samples_count = len(self.tests_sample)
# Grab (1.0 - sampling factor) to get the list of tests to skip
sampling_factor = SamplingFactor.UPPER_BOUND - sampling_factor
sampling_count = clamp(int(sampling_factor * float(samples_count)), 0, samples_count)
# use sampling seed if available
seed = int(get_setting("/exts/omni.kit.test/testExtSamplingSeed", default=-1))
if seed >= 0:
random.seed(seed)
sampled_tests = self._random_choices_no_replace(
population=self.tests_sample,
weights=weights,
k=sampling_count,
)
return sampled_tests
def _query_nvdf(self, extension_name: str, unittests: list) -> bool: # pragma: no cover
query = self._es_query(extension_name, days=self.DAYS, hours=0)
r = query_nvdf(query)
for aggs in r.get("aggregations", {}).get(self.AGG_TEST_IDS, {}).get("buckets", {}):
key = aggs.get("key")
if key not in unittests:
continue
hits = aggs.get(self.AGG_LAST_PASSED, {}).get("hits", {}).get("hits", [])
if not hits:
continue
all_failed = False
for hit in hits:
passed = hit["_source"]["test"]["b_passed"]
all_failed = all_failed or not passed
# consecutive failed tests cannot be skipped
if all_failed:
continue
self.tests_sample.append(key)
self.tests_run_count.append(aggs.get("doc_count", 0))
return True
def _random_choices_no_replace(self, population, weights, k) -> list:
"""Similar to numpy.random.Generator.choice() with replace=False"""
weights = list(weights)
positions = range(len(population))
indices = []
while True:
needed = k - len(indices)
if not needed:
break
for i in random.choices(positions, weights, k=needed):
if weights[i]:
weights[i] = SamplingFactor.LOWER_BOUND
indices.append(i)
return [population[i] for i in indices]
def _calculate_weights(self) -> list:
"""Simple weight adjusting to make sure all tests run an equal amount of times"""
samples_min = min(self.tests_run_count)
samples_max = max(self.tests_run_count)
samples_width = samples_max - samples_min
samples_mean = mean(self.tests_run_count)
def _calculate_weight(test_count: int):
if samples_width == 0:
return SamplingFactor.MID_POINT
weight = SamplingFactor.MID_POINT + (samples_mean - float(test_count)) / float(samples_width)
            # clamp to [0.05, 0.95] rather than [0.0, 1.0] so no weight ever reaches the bounds, keeping a better random distribution
return clamp(
weight,
SamplingFactor.LOWER_BOUND + 0.05,
SamplingFactor.UPPER_BOUND - 0.05,
)
return [_calculate_weight(c) for c in self.tests_run_count]
def _es_query(self, extension_name: str, days: int, hours: int) -> dict:
target_date = datetime.datetime.utcnow() - datetime.timedelta(days=days, hours=hours)
kit_version = self.app_info["kit_version"]
platform = self.app_info["platform"]
branch = self.app_info["branch"]
merge_request = self.app_info["merge_request"]
query = {
"aggs": {
self.AGG_TEST_IDS: {
"terms": {"field": "test.s_test_id", "order": {"_count": "desc"}, "size": self.TEST_IDS_COUNT},
"aggs": {
self.AGG_LAST_PASSED: {
"top_hits": {
"_source": "test.b_passed",
"size": self.LAST_PASSED_COUNT,
"sort": [{"ts_created": {"order": "desc"}}],
}
}
},
}
},
"size": 0,
"query": {
"bool": {
"filter": [
{"match_all": {}},
{"term": {"test.s_ext_test_id": extension_name}},
{"term": {"app.s_kit_version": kit_version}},
{"term": {"app.s_platform": platform}},
{"term": {"app.s_branch": branch}},
{"term": {"app.l_merge_request": merge_request}},
{
"range": {
"ts_created": {
"gte": target_date.isoformat() + "Z",
"format": "strict_date_optional_time",
}
}
},
],
}
},
}
return query
def get_tests_sampling_to_skip(extension_name: str, sampling_factor: float, unittests: list) -> list: # pragma: no cover
"""Return a list of tests that can be skipped for a given extension based on a sampling factor
    When using test sampling we have to run:
    1) all new tests (not found on nvdf)
    2) all failed tests (i.e. only consecutive failures; flaky tests are not considered)
    3) sampling tests (sampling factor * number of tests)
    By applying (1 - sampling factor) we get a list of tests to skip, which is guaranteed not to contain any test
    from point 1 or 2.
"""
ts = Sampling(get_app_info())
ts.run_query(extension_name, unittests, is_running_on_ci())
return ts.get_tests_to_skip(sampling_factor)
| 7,260 | Python | 37.015707 | 121 | 0.52865 |
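The weighted draw-without-replacement in `_random_choices_no_replace()` can be sketched standalone. This is a re-implementation for illustration (not the module's API), using the same trick of zeroing a drawn index's weight so `random.choices()` cannot return it again:

```python
import random


def choices_no_replace(population, weights, k):
    # Draw with replacement via random.choices(), then zero out each drawn
    # index's weight so it cannot be selected again -- same approach as the
    # helper above. The caller must keep k <= the number of positive weights,
    # otherwise random.choices() raises ValueError once all weights hit zero.
    weights = list(weights)
    positions = range(len(population))
    indices = []
    while len(indices) < k:
        for i in random.choices(positions, weights, k=k - len(indices)):
            if weights[i]:
                weights[i] = 0.0
                indices.append(i)
    return [population[i] for i in indices]


random.seed(7)
sample = choices_no_replace(["a", "b", "c", "d"], [0.5, 0.5, 0.5, 0.5], k=2)
```

Duplicates drawn within one `random.choices()` call are filtered by the `if weights[i]` guard, so the loop only re-draws for the positions still missing.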
omniverse-code/kit/exts/omni.kit.test/omni/kit/test/test_populators.py | """Support for the population of a test list from various configurable sources"""
from __future__ import annotations
import abc
import unittest
from .ext_utils import find_disabled_tests
from .unittests import get_tests
__all__ = [
"DEFAULT_POPULATOR_NAME",
"TestPopulator",
"TestPopulateAll",
"TestPopulateDisabled",
]
# The name of the default populator, implemented with TestPopulateAll
DEFAULT_POPULATOR_NAME = "All Tests"
# ==============================================================================================================
class TestPopulator(abc.ABC):
"""Base class for the objects used to populate the initial list of tests, before filtering."""
def __init__(self, name: str, description: str):
"""Set up the populator with the important information it needs for getting tests from some location
Args:
name: Name of the populator, which can be used for a menu
description: Verbose description of the populator, which can be used for the tooltip of the menu item
"""
self.name: str = name
self.description: str = description
self.tests: list[unittest.TestCase] = [] # Remembers the tests it retrieves for later use
# --------------------------------------------------------------------------------------------------------------
def destroy(self):
"""Opportunity to clean up any allocated resources"""
pass
# --------------------------------------------------------------------------------------------------------------
@abc.abstractmethod
def get_tests(self, call_when_done: callable):
"""Populate the internal list of raw tests and then call the provided function when it has been done.
The callable takes one optional boolean 'canceled' that is only True if the test retrieval was not done.
"""
# ==============================================================================================================
class TestPopulateAll(TestPopulator):
"""Implementation of the TestPopulator that returns a list of all tests known to Kit"""
def __init__(self):
super().__init__(
DEFAULT_POPULATOR_NAME,
"Use all of the tests in currently enabled extensions that pass the filters",
)
def get_tests(self, call_when_done: callable):
self.tests = get_tests()
call_when_done()
# ==============================================================================================================
class TestPopulateDisabled(TestPopulator):
"""Implementation of the TestPopulator that returns a list of all tests disabled by their extension.toml file"""
def __init__(self):
super().__init__(
"Disabled Tests",
"Use all tests from enabled extensions whose extension.toml flags them as disabled",
)
def get_tests(self, call_when_done: callable):
self.tests = find_disabled_tests()
call_when_done()
| 3,004 | Python | 39.608108 | 116 | 0.542943 |
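A new test source plugs in by subclassing `TestPopulator`, filling `self.tests`, and invoking the completion callback. Since the module above is not importable here, this sketch re-declares a minimal stand-in base with the same shape (`MiniPopulator` and `StaticPopulator` are hypothetical names):

```python
import abc
import unittest


class MiniPopulator(abc.ABC):
    # Minimal stand-in for TestPopulator: same name/description/tests shape.
    def __init__(self, name: str, description: str):
        self.name = name
        self.description = description
        self.tests = []

    @abc.abstractmethod
    def get_tests(self, call_when_done):
        """Fill self.tests, then invoke call_when_done()."""


class StaticPopulator(MiniPopulator):
    """A populator backed by a fixed, hand-picked list of tests."""

    def __init__(self, tests):
        super().__init__("Static Tests", "A fixed, hand-picked list of tests")
        self._source = tests

    def get_tests(self, call_when_done):
        self.tests = list(self._source)
        call_when_done()


class _Dummy(unittest.TestCase):
    def test_nothing(self):
        pass


pop = StaticPopulator([_Dummy("test_nothing")])
done = []
pop.get_tests(lambda: done.append(True))
```

The callback-based `get_tests` contract exists so populators that fetch tests asynchronously (or can be canceled) fit the same interface as the synchronous ones.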
omniverse-code/kit/exts/omni.kit.test/omni/kit/test/crash_process.py | def _error(stream, msg):
stream.write(f"[error] [{__file__}] {msg}\n")
def _crash_process_win(pid):
# fmt: off
import ctypes
POINTER = ctypes.POINTER
LPVOID = ctypes.c_void_p
PVOID = LPVOID
HANDLE = LPVOID
PHANDLE = POINTER(HANDLE)
ULONG = ctypes.c_ulong
SIZE_T = ctypes.c_size_t
LONG = ctypes.c_long
NTSTATUS = LONG
DWORD = ctypes.c_uint32
ACCESS_MASK = DWORD
INVALID_HANDLE_VALUE = ctypes.c_void_p(-1).value
BOOL = ctypes.c_int
byref = ctypes.byref
long = int
WAIT_TIMEOUT = 0x102
WAIT_FAILED = 0xFFFFFFFF
WAIT_OBJECT_0 = 0
STANDARD_RIGHTS_ALL = long(0x001F0000)
SPECIFIC_RIGHTS_ALL = long(0x0000FFFF)
SYNCHRONIZE = long(0x00100000)
STANDARD_RIGHTS_REQUIRED = long(0x000F0000)
PROCESS_ALL_ACCESS = (STANDARD_RIGHTS_REQUIRED | SYNCHRONIZE | 0xFFFF)
THREAD_CREATE_FLAGS_SKIP_THREAD_ATTACH = long(0x00000002)
def NT_SUCCESS(x): return x >= 0
windll = ctypes.windll
# HANDLE WINAPI OpenProcess(
# IN DWORD dwDesiredAccess,
# IN BOOL bInheritHandle,
# IN DWORD dwProcessId
# );
_OpenProcess = windll.kernel32.OpenProcess
_OpenProcess.argtypes = [DWORD, BOOL, DWORD]
_OpenProcess.restype = HANDLE
# NTSTATUS NtCreateThreadEx(
# OUT PHANDLE hThread,
# IN ACCESS_MASK DesiredAccess,
# IN PVOID ObjectAttributes,
# IN HANDLE ProcessHandle,
# IN PVOID lpStartAddress,
# IN PVOID lpParameter,
# IN ULONG Flags,
# IN SIZE_T StackZeroBits,
# IN SIZE_T SizeOfStackCommit,
# IN SIZE_T SizeOfStackReserve,
# OUT PVOID lpBytesBuffer
# );
_NtCreateThreadEx = windll.ntdll.NtCreateThreadEx
_NtCreateThreadEx.argtypes = [PHANDLE, ACCESS_MASK, PVOID, HANDLE, PVOID, PVOID, ULONG, SIZE_T, SIZE_T, SIZE_T, PVOID]
_NtCreateThreadEx.restype = NTSTATUS
# DWORD WINAPI WaitForSingleObject(
# HANDLE hHandle,
# DWORD dwMilliseconds
# );
_WaitForSingleObject = windll.kernel32.WaitForSingleObject
_WaitForSingleObject.argtypes = [HANDLE, DWORD]
_WaitForSingleObject.restype = DWORD
hProcess = _OpenProcess(
PROCESS_ALL_ACCESS,
0, # bInheritHandle
pid
)
if not hProcess:
raise ctypes.WinError()
# this injects a new thread into the process running the test code. this thread starts executing at address 0,
# causing a crash.
#
# alternatives considered:
#
# DebugBreakProcess(): in order for DebugBreakProcess() to send the breakpoint, a debugger must be attached. this
# can be accomplished with DebugActiveProcess()/WaitForDebugEvent()/ContinueDebugEvent(). unfortunately, when a
# debugger is attached, UnhandledExceptionFilter() is ignored. UnhandledExceptionFilter() is where the test process
# runs the crash dump code.
#
# CreateRemoteThread(): this approach does not work if the target process is stuck waiting for the loader lock.
#
# the solution below uses NtCreateThreadEx to create the faulting thread in the test process. unlike
# CreateRemoteThread(), NtCreateThreadEx accepts the THREAD_CREATE_FLAGS_SKIP_THREAD_ATTACH flag which skips
# THREAD_ATTACH in DllMain thereby avoiding the loader lock.
hThread = HANDLE(INVALID_HANDLE_VALUE)
status = _NtCreateThreadEx(
byref(hThread),
(STANDARD_RIGHTS_ALL | SPECIFIC_RIGHTS_ALL),
0, # ObjectAttributes
hProcess,
0, # lpStartAddress (calls into null causing a crash)
0, # lpParameter
THREAD_CREATE_FLAGS_SKIP_THREAD_ATTACH,
0, # StackZeroBits
        0, # SizeOfStackCommit (must be 0 to crash)
0, # SizeOfStackReserve
0, # lpBytesBuffer
)
if not NT_SUCCESS(status):
raise OSError(None, "NtCreateThreadEx failed", None, status)
waitTimeMs = 30 * 1000
status = _WaitForSingleObject(hProcess, waitTimeMs)
if status == WAIT_TIMEOUT:
raise TimeoutError("timed out while waiting for target process to exit")
elif status == WAIT_FAILED:
raise ctypes.WinError()
elif status != WAIT_OBJECT_0:
raise OSError(None, "failed to wait for target process to exit", None, status)
# fmt: on
def crash_process(process, stream):
"""
Triggers a crash dump in the test process, terminating the process.
Returns True if the test process was terminated, False if the process is still running.
"""
import os
assert process
pid = process.pid
if os.name == "nt":
try:
_crash_process_win(pid)
except Exception as e:
_error(stream, f"Failed crashing timed out process: {pid}. Error: {e}")
else:
import signal
try:
process.send_signal(signal.SIGABRT)
process.wait(timeout=30) # seconds
except Exception as e:
_error(stream, f"Failed crashing timed out process: {pid}. Error: {e}")
return not process.is_running()
| 5,281 | Python | 34.449664 | 122 | 0.625071 |
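The POSIX branch of `crash_process()` boils down to SIGABRT plus a bounded wait. A minimal standalone sketch, assuming a POSIX host and using `subprocess.Popen` directly instead of the psutil process object the module receives:

```python
import os
import signal
import subprocess
import sys


def abort_process(proc, timeout=30):
    # Mirror the non-Windows branch of crash_process(): SIGABRT makes the
    # target run its abort handlers (and dump core where enabled), then we
    # wait for it to exit.
    proc.send_signal(signal.SIGABRT)
    proc.wait(timeout=timeout)
    return proc.returncode


if os.name == "nt":
    # Windows needs the NtCreateThreadEx injection shown above; skip the demo.
    code = -signal.SIGABRT
else:
    # A stand-in for a hung test process.
    child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
    code = abort_process(child)
```

`subprocess` encodes death-by-signal as a negative return code, so a SIGABRT kill surfaces as `-signal.SIGABRT` on Linux.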
omniverse-code/kit/exts/omni.kit.test/omni/kit/test/utils.py | import glob
import hashlib
import os
import shutil
import sys
from datetime import datetime
from functools import lru_cache
from pathlib import Path
from typing import List, Tuple
import carb
import carb.settings
import carb.tokens
import omni.ext
from .gitlab import is_running_in_gitlab
from .teamcity import is_running_in_teamcity
_settings_iface = None
def get_setting(path, default=None):
global _settings_iface
if not _settings_iface:
_settings_iface = carb.settings.get_settings()
setting = _settings_iface.get(path)
return setting if setting is not None else default
def get_local_timestamp():
return (
# ':' is not path-friendly on windows
datetime.now()
.isoformat(timespec="seconds")
.replace(":", "-")
)
@lru_cache()
def _split_argv() -> Tuple[List[str], List[str]]:
"""Return list of argv before `--` and after (processed and unprocessed)"""
try:
index = sys.argv.index("--")
return list(sys.argv[:index]), list(sys.argv[index + 1 :])
except ValueError:
return list(sys.argv), []
def get_argv() -> List[str]:
return _split_argv()[0]
def get_unprocessed_argv() -> List[str]:
return _split_argv()[1]
def resolve_path(path, root) -> str:
path = carb.tokens.get_tokens_interface().resolve(path)
if not os.path.isabs(path):
path = os.path.join(root, path)
return os.path.normpath(path)
@lru_cache()
def _get_passed_test_output_path():
return get_setting("/exts/omni.kit.test/testOutputPath", default=None)
@lru_cache()
def get_global_test_output_path():
"""Get global extension test output path. It is shared for all extensions."""
    # If inside the test process, we have the testoutput folder for the actual extension, just go one folder up:
output_path = _get_passed_test_output_path()
if output_path:
return os.path.abspath(os.path.join(output_path, ".."))
# If inside ext test runner process, use setting:
output_path = carb.tokens.get_tokens_interface().resolve(
get_setting("/exts/omni.kit.test/testExtOutputPath", default="")
)
output_path = os.path.abspath(output_path)
return output_path
@lru_cache()
def get_test_output_path():
"""Get local extension test output path. It is unique for each extension test process."""
output_path = _get_passed_test_output_path()
    # If not passed, we are probably not inside a test process; default to global
if not output_path:
return get_global_test_output_path()
output_path = os.path.abspath(carb.tokens.get_tokens_interface().resolve(output_path))
return output_path
@lru_cache()
def get_ext_test_id() -> str:
return str(get_setting("/exts/omni.kit.test/extTestId", default=""))
def cleanup_folder(path):
try:
for p in glob.glob(f"{path}/*"):
if os.path.isdir(p):
if omni.ext.is_link(p):
omni.ext.destroy_link(p)
else:
shutil.rmtree(p)
else:
os.remove(p)
except Exception as exc: # pylint: disable=broad-except
carb.log_warn(f"Unable to clean up files: {path}: {exc}")
def ext_id_to_fullname(ext_id: str) -> str:
return omni.ext.get_extension_name(ext_id)
def clamp(value, min_value, max_value):
return max(min(value, max_value), min_value)
@lru_cache()
def is_running_on_ci():
return is_running_in_teamcity() or is_running_in_gitlab()
def call_git(args, cwd=None):
import subprocess
cmd = ["git"] + args
carb.log_verbose("run process: {}".format(cmd))
try:
res = subprocess.run(cmd, cwd=cwd, capture_output=True, text=True)
if res.returncode != 0:
carb.log_warn(f"Error running process: {cmd}. Result: {res}. Stderr: {res.stderr}")
return res
except FileNotFoundError:
carb.log_warn("Failed calling git")
except PermissionError:
carb.log_warn("No permission to execute git")
def _hash_file_impl(path, hash, as_text):
mode = "r" if as_text else "rb"
encoding = "utf-8" if as_text else None
with open(path, mode, encoding=encoding) as f:
while True:
data = f.readline().encode("utf-8") if as_text else f.read(65536)
if not data:
break
hash.update(data)
def hash_file(path, hash):
# Try as text first, to avoid CRLF/LF mismatch on both platforms
try:
return _hash_file_impl(path, hash, as_text=True)
except UnicodeDecodeError:
return _hash_file_impl(path, hash, as_text=False)
def sha1_path(path, hash_length=16) -> str:
exclude_files = ["extension.gen.toml"]
hash = hashlib.sha1()
if os.path.isfile(path):
hash_file(path, hash)
else:
for p in glob.glob(f"{path}/**", recursive=True):
if not os.path.isfile(p) or os.path.basename(p) in exclude_files:
continue
hash_file(p, hash)
return hash.hexdigest()[:hash_length]
def sha1_list(strings: List[str], hash_length=16) -> str:
hash = hashlib.sha1()
for s in strings:
hash.update(s.encode("utf-8"))
return hash.hexdigest()[:hash_length]
| 5,195 | Python | 27.23913 | 95 | 0.635226 |
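Two of the pure helpers above are easy to pin down with examples. These are re-implementations for illustration, matching the bodies of `clamp()` and `sha1_list()` in the file:

```python
import hashlib


def clamp(value, min_value, max_value):
    # Same expression as utils.clamp(): apply the upper bound first, then the lower.
    return max(min(value, max_value), min_value)


def sha1_list(strings, hash_length=16):
    # Digest of a list of strings fed into one running SHA-1, truncated
    # like utils.sha1_list().
    h = hashlib.sha1()
    for s in strings:
        h.update(s.encode("utf-8"))
    return h.hexdigest()[:hash_length]
```

Note that feeding the strings into one running hash makes the digest order-sensitive, but not boundary-sensitive: `["ab"]` and `["a", "b"]` hash identically.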
omniverse-code/kit/exts/omni.kit.test/omni/kit/test/test_reporters.py | from enum import Enum
from typing import Callable, Any
class TestRunStatus(Enum):
UNKNOWN = 0
RUNNING = 1
PASSED = 2
FAILED = 3
_callbacks = []
def add_test_status_report_cb(callback: Callable[[str, TestRunStatus, Any], None]):
"""Add callback to be called when tests start, fail, pass."""
global _callbacks
_callbacks.append(callback)
def remove_test_status_report_cb(callback: Callable[[str, TestRunStatus, Any], None]):
"""Remove callback to be called when tests start, fail, pass."""
global _callbacks
_callbacks.remove(callback)
def _test_status_report(test_id: str, status: TestRunStatus, **kwargs):
for cb in _callbacks:
cb(test_id, status, **kwargs)
| 720 | Python | 23.033333 | 86 | 0.679167 |
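Usage of the callback registry above is symmetric: add a callback, receive `(test_id, status, **kwargs)` on every report, remove it when done. Re-declared here so the sketch runs standalone:

```python
from enum import Enum


class TestRunStatus(Enum):
    UNKNOWN = 0
    RUNNING = 1
    PASSED = 2
    FAILED = 3


_callbacks = []


def add_test_status_report_cb(callback):
    _callbacks.append(callback)


def remove_test_status_report_cb(callback):
    _callbacks.remove(callback)


def _test_status_report(test_id, status, **kwargs):
    for cb in _callbacks:
        cb(test_id, status, **kwargs)


events = []


def on_status(test_id, status, **kwargs):
    events.append((test_id, status))


add_test_status_report_cb(on_status)
_test_status_report("exttest.foo", TestRunStatus.RUNNING)
_test_status_report("exttest.foo", TestRunStatus.PASSED)
remove_test_status_report_cb(on_status)
_test_status_report("exttest.foo", TestRunStatus.FAILED)  # no listener any more
```

A callback that was removed never sees later reports, which is why `OmniTestResult` only forwards to `on_status_report_fn` while a run is active.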
omniverse-code/kit/exts/omni.kit.test/omni/kit/test/async_unittest.py | """Async version of python unittest module.
AsyncTestCase, AsyncTestSuite and AsyncTextTestRunner classes were copied from python unittest source and async/await
keywords were added.
There are two ways of registering tests, which must all be in the 'tests' submodule of your python module.
1. 'from X import *" from every file containing tests
2. Add the line 'scan_for_test_modules = True' in your __init__.py file to pick up tests in every file starting
with 'test_'
"""
import asyncio
import time
import unittest
import warnings
from unittest.case import _Outcome
import carb
import omni.kit.app
from .reporter import TestReporter
from .test_reporters import TestRunStatus
from .utils import get_ext_test_id, is_running_on_ci
KEY_FAILING_TESTS = "Failing tests"
STARTED_UNITTEST = "started "
async def await_or_call(func):
"""
Awaits on function if it is a coroutine, calls it otherwise.
"""
if asyncio.iscoroutinefunction(func):
await func()
else:
func()
class LogErrorChecker:
"""Automatically subscribes to logging events and monitors if error were produced during the test."""
def __init__(self):
# Setup this test case to fail if any error is produced
self._error_count = 0
def on_log_event(e):
if e.payload["level"] >= carb.logging.LEVEL_ERROR:
self._error_count = self._error_count + 1
self._log_stream = omni.kit.app.get_app().get_log_event_stream()
self._log_sub = self._log_stream.create_subscription_to_pop(on_log_event, name="test log event")
def shutdown(self):
self._log_stream = None
self._log_sub = None
def get_error_count(self):
self._log_stream.pump()
return self._error_count
class AsyncTestCase(unittest.TestCase):
"""Base class for all async test cases.
Derive from it to make your tests auto discoverable. Test methods must start with `test_` prefix.
Test cases allow for generation and/or adaptation of tests at runtime. See testing_exts_python.md for more details.
"""
    # If true, the test will check for Carbonite log messages and fail if any message at error level or higher was produced during the test.
fail_on_log_error = False
async def run(self, result=None):
# Log error checker
self._log_error_checker = None
if self.fail_on_log_error:
carb.log_warn(
"[DEPRECATION WARNING] `AsyncTestCaseFailOnLogError` is deprecated. Replace with `AsyncTestCase`. Errors are captured from stdout by an external test runner process now."
)
# Make sure log buffer pumped:
await omni.kit.app.get_app().next_update_async()
self._log_error_checker = LogErrorChecker()
orig_result = result
if result is None:
result = self.defaultTestResult()
startTestRun = getattr(result, "startTestRun", None)
if startTestRun is not None:
startTestRun()
result.startTest(self)
testMethod = getattr(self, self._testMethodName)
if getattr(self.__class__, "__unittest_skip__", False) or getattr(testMethod, "__unittest_skip__", False):
# If the class or method was skipped.
try:
skip_why = getattr(self.__class__, "__unittest_skip_why__", "") or getattr(
testMethod, "__unittest_skip_why__", ""
)
self._addSkip(result, self, skip_why)
finally:
result.stopTest(self)
return
expecting_failure_method = getattr(testMethod, "__unittest_expecting_failure__", False)
expecting_failure_class = getattr(self, "__unittest_expecting_failure__", False)
expecting_failure = expecting_failure_class or expecting_failure_method
outcome = _Outcome(result)
try:
self._outcome = outcome
with outcome.testPartExecutor(self):
await await_or_call(self.setUp)
if outcome.success:
outcome.expecting_failure = expecting_failure
with outcome.testPartExecutor(self, isTest=True):
await await_or_call(testMethod)
outcome.expecting_failure = False
with outcome.testPartExecutor(self):
await await_or_call(self.tearDown)
# Log error checks
if self._log_error_checker:
await omni.kit.app.get_app().next_update_async()
error_count = self._log_error_checker.get_error_count()
if error_count > 0:
self.fail(f"Test failure because of {error_count} error message(s) logged during it.")
self.doCleanups()
for test, reason in outcome.skipped:
self._addSkip(result, test, reason)
self._feedErrorsToResult(result, outcome.errors)
if outcome.success:
if expecting_failure:
if outcome.expectedFailure:
self._addExpectedFailure(result, outcome.expectedFailure)
else:
self._addUnexpectedSuccess(result)
else:
result.addSuccess(self)
return result
finally:
if self._log_error_checker:
self._log_error_checker.shutdown()
result.stopTest(self)
if orig_result is None:
stopTestRun = getattr(result, "stopTestRun", None)
if stopTestRun is not None:
stopTestRun()
# explicitly break reference cycles:
# outcome.errors -> frame -> outcome -> outcome.errors
# outcome.expectedFailure -> frame -> outcome -> outcome.expectedFailure
outcome.errors.clear()
outcome.expectedFailure = None
# clear the outcome, no more needed
self._outcome = None
class AsyncTestCaseFailOnLogError(AsyncTestCase):
"""Test Case which automatically subscribes to logging events and fails if any error were produced during the test.
This class is for backward compatibility, you can also just change value of `fail_on_log_error`.
"""
# Enable failure on error
fail_on_log_error = True
class OmniTestResult(unittest.TextTestResult):
def __init__(self, stream, descriptions, verbosity):
# If we are running under CI we will use default unittest reporter with higher verbosity.
if not is_running_on_ci():
verbosity = 2
super(OmniTestResult, self).__init__(stream, descriptions, verbosity)
self.reporter = TestReporter(stream)
self.on_status_report_fn = None
def _report_status(self, *args, **kwargs):
if self.on_status_report_fn:
self.on_status_report_fn(*args, **kwargs)
@staticmethod
def get_tc_test_id(test):
if isinstance(test, str):
return test
# Use dash as a clear visual separator of 3 parts:
test_id = "%s - %s - %s" % (test.__class__.__module__, test.__class__.__qualname__, test._testMethodName)
# Dots have special meaning in TC, replace with /
test_id = test_id.replace(".", "/")
ext_test_id = get_ext_test_id()
if ext_test_id:
            # In the context of an extension test it has its own test id. Convert to TC form by getting rid of dots.
ext_test_id = ext_test_id.replace(".", "+")
test_id = f"{ext_test_id}.{test_id}"
return test_id
def addSuccess(self, test):
super(OmniTestResult, self).addSuccess(test)
def addError(self, test, err, *k):
super(OmniTestResult, self).addError(test, err)
fail_message = self._get_error_message(test, "Error", self.errors)
self.report_fail(test, "Error", err, fail_message)
def addFailure(self, test, err, *k):
super(OmniTestResult, self).addFailure(test, err)
fail_message = self._get_error_message(test, "Fail", self.failures)
self.report_fail(test, "Failure", err, fail_message)
def report_fail(self, test, fail_type: str, err, fail_message: str):
tc_test_id = self.get_tc_test_id(test)
test_id = test.id()
# pass the failing test info back to the ext testing framework in parent proc
self.stream.write(f"##omni.kit.test[append, {KEY_FAILING_TESTS}, {test_id}]\n")
self.reporter.unittest_fail(test_id, tc_test_id, fail_type, fail_message)
self._report_status(test_id, TestRunStatus.FAILED, fail_message=fail_message)
def _get_error_message(self, test, fail_type: str, errors: list) -> str:
# In python/Lib/unittest/result.py the failures are reported with _exc_info_to_string() that is private.
# To get the same result we grab the latest errors/failures from `self.errors[-1]` or `self.failures[-1]`
# In python/Lib/unittest/runner.py from the `printErrorList` function we also copied the logic here.
exc_info = errors[-1][1] if errors[-1] else ""
error_msg = []
error_msg.append(self.separator1)
error_msg.append(f"{fail_type.upper()}: {self.getDescription(test)}")
error_msg.append(self.separator2)
error_msg.append(exc_info)
return "\n".join(error_msg)
def startTest(self, test):
super(OmniTestResult, self).startTest(test)
tc_test_id = self.get_tc_test_id(test)
test_id = test.id()
self.stream.write("\n")
# python tests can start but never finish (crash, time out, etc)
# track it from the parent proc with a pragma message (see _extract_metadata_pragma in exttests.py)
self.stream.write(f"##omni.kit.test[set, {test_id}, {STARTED_UNITTEST}{tc_test_id}]\n")
self.reporter.unittest_start(test_id, tc_test_id, captureStandardOutput="true")
self._report_status(test_id, TestRunStatus.RUNNING)
def stopTest(self, test):
super(OmniTestResult, self).stopTest(test)
tc_test_id = self.get_tc_test_id(test)
test_id = test.id()
# test finished, delete it from the metadata
self.stream.write(f"##omni.kit.test[del, {test_id}]\n")
# test._outcome is None when test is skipped using decorator.
        # When skipped using self.skipTest() it contains a list of skipped test cases
skipped = test._outcome is None or bool(test._outcome.skipped)
# self.skipped last index contains the current skipped test, name is at index 0, reason at index 1
skip_reason = self.skipped[-1][1] if skipped and self.skipped else ""
# skipped tests are marked as "passed" not to confuse reporting down the line
passed = test._outcome.success if test._outcome and not skipped else True
self.reporter.unittest_stop(test_id, tc_test_id, passed=passed, skipped=skipped, skip_reason=skip_reason)
if passed:
self._report_status(test_id, TestRunStatus.PASSED)
class TeamcityTestResult(OmniTestResult):
def __init__(self, stream, descriptions, verbosity):
carb.log_warn("[DEPRECATION WARNING] `TeamcityTestResult` is deprecated. Replace with `OmniTestResult`.")
super(TeamcityTestResult, self).__init__(stream, descriptions, verbosity)
class AsyncTextTestRunner(unittest.TextTestRunner):
"""A test runner class that displays results in textual form.
It prints out the names of tests as they are run, errors as they
occur, and a summary of the results at the end of the test run.
"""
async def run(self, test, on_status_report_fn=None):
"Run the given test case or test suite."
result = self._makeResult()
unittest.signals.registerResult(result)
result.failfast = self.failfast
result.buffer = self.buffer
result.tb_locals = self.tb_locals
result.on_status_report_fn = on_status_report_fn
with warnings.catch_warnings():
if self.warnings:
# if self.warnings is set, use it to filter all the warnings
warnings.simplefilter(self.warnings)
# if the filter is 'default' or 'always', special-case the
# warnings from the deprecated unittest methods to show them
# no more than once per module, because they can be fairly
# noisy. The -Wd and -Wa flags can be used to bypass this
# only when self.warnings is None.
if self.warnings in ["default", "always"]:
warnings.filterwarnings(
"module", category=DeprecationWarning, message=r"Please use assert\w+ instead."
)
startTime = time.time()
startTestRun = getattr(result, "startTestRun", None)
if startTestRun is not None:
startTestRun()
try:
await test(result)
finally:
stopTestRun = getattr(result, "stopTestRun", None)
if stopTestRun is not None:
stopTestRun()
stopTime = time.time()
timeTaken = stopTime - startTime
result.printErrors()
if hasattr(result, "separator2"):
self.stream.writeln(result.separator2)
run = result.testsRun
self.stream.writeln("Ran %d test%s in %.3fs" % (run, run != 1 and "s" or "", timeTaken))
self.stream.writeln()
expectedFails = unexpectedSuccesses = skipped = 0
try:
results = map(len, (result.expectedFailures, result.unexpectedSuccesses, result.skipped))
except AttributeError:
pass
else:
expectedFails, unexpectedSuccesses, skipped = results
infos = []
if not result.wasSuccessful():
self.stream.write("FAILED")
failed, errored = len(result.failures), len(result.errors)
if failed:
infos.append("failures=%d" % failed)
if errored:
infos.append("errors=%d" % errored)
else:
self.stream.write("OK")
if skipped:
infos.append("skipped=%d" % skipped)
if expectedFails:
infos.append("expected failures=%d" % expectedFails)
if unexpectedSuccesses:
infos.append("unexpected successes=%d" % unexpectedSuccesses)
if infos:
self.stream.writeln(" (%s)" % (", ".join(infos),))
else:
self.stream.write("\n")
return result
def _isnotsuite(test):
"A crude way to tell apart testcases and suites with duck-typing"
try:
iter(test)
except TypeError:
return True
return False
class AsyncTestSuite(unittest.TestSuite):
"""A test suite is a composite test consisting of a number of TestCases.
For use, create an instance of TestSuite, then add test case instances.
When all tests have been added, the suite can be passed to a test
runner, such as TextTestRunner. It will run the individual test cases
in the order in which they were added, aggregating the results. When
subclassing, do not forget to call the base class constructor.
"""
async def run(self, result, debug=False):
topLevel = False
if getattr(result, "_testRunEntered", False) is False:
result._testRunEntered = topLevel = True
for index, test in enumerate(self):
if result.shouldStop:
break
if _isnotsuite(test):
self._tearDownPreviousClass(test, result)
self._handleModuleFixture(test, result)
self._handleClassSetUp(test, result)
result._previousTestClass = test.__class__
if getattr(test.__class__, "_classSetupFailed", False) or getattr(result, "_moduleSetUpFailed", False):
continue
if not debug:
await test(result)
else:
await test.debug()
if self._cleanup:
self._removeTestAtIndex(index)
if topLevel:
self._tearDownPreviousClass(None, result)
self._handleModuleTearDown(result)
result._testRunEntered = False
return result
| 16,411 | Python | 39.927681 | 186 | 0.613796 |
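The `await_or_call()` helper is the piece that lets a single runner accept both sync and async `setUp`/`tearDown`/test methods. A self-contained sketch of the same dispatch (the hook names are hypothetical, stdlib only):

```python
import asyncio


async def await_or_call(func):
    # Same dispatch as the helper above: await coroutine functions,
    # call plain callables directly.
    if asyncio.iscoroutinefunction(func):
        await func()
    else:
        func()


calls = []


def sync_setup():  # hypothetical synchronous hook
    calls.append("sync")


async def async_setup():  # hypothetical asynchronous hook
    calls.append("async")


async def run_hooks():
    await await_or_call(sync_setup)
    await await_or_call(async_setup)


asyncio.run(run_hooks())
```

Because the dispatch is decided per callable, a test class can freely mix `def setUp` with `async def test_*` and the runner never has to know which is which.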
omniverse-code/kit/exts/omni.kit.test/omni/kit/test/reporter.py | import fnmatch
import glob
import json
import os
import platform
import shutil
import sys
import time
import xml.etree.ElementTree as ET
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from functools import lru_cache
from pathlib import Path
from typing import Dict, List, Optional
import carb
import carb.settings
import carb.tokens
import psutil
from .nvdf import post_coverage_to_nvdf, post_to_nvdf
from .teamcity import teamcity_message, teamcity_publish_artifact, teamcity_status
from .test_coverage import generate_coverage_report
from .utils import (
ext_id_to_fullname,
get_ext_test_id,
get_global_test_output_path,
get_setting,
get_test_output_path,
is_running_on_ci,
)
CURRENT_PATH = Path(__file__).parent
HTML_PATH = CURRENT_PATH.parent.parent.parent.joinpath("html")
REPORT_FILENAME = "report.jsonl"
RESULTS_FILENAME = "results.xml"
@lru_cache()
def get_report_filepath():
return os.path.join(get_test_output_path(), REPORT_FILENAME)
@lru_cache()
def get_results_filepath():
return os.path.join(get_test_output_path(), RESULTS_FILENAME)
def _load_report_data(report_path):
data = []
with open(report_path, "r") as f:
for line in f:
data.append(json.loads(line))
return data
def _get_tc_test_id(test_id):
return test_id.replace(".", "+")
class TestReporter:
"""Combines TC reports to stdout and JSON lines report to a file"""
def __init__(self, stream=sys.stdout):
self._stream = stream
self._timers = {}
self._report_filepath = get_report_filepath()
self.unreliable_tests = get_setting("/exts/omni.kit.test/unreliableTests", default=[])
self.parallel_run = get_setting("/exts/omni.kit.test/parallelRun", default=False)
def _get_duration(self, test_id: str) -> float:
try:
duration = round(time.time() - self._timers.pop(test_id), 3)
except KeyError:
duration = 0.0
return duration
def _is_unreliable(self, test_id):
return any(fnmatch.fnmatch(test_id, p) for p in self.unreliable_tests)
def set_output_path(self, output_path: str):
self._report_filepath = os.path.join(output_path, REPORT_FILENAME)
def _write_report(self, data: dict):
if self._report_filepath:
with open(self._report_filepath, "a") as f:
f.write(json.dumps(data))
f.write("\n")
def unittest_start(self, test_id, tc_test_id, captureStandardOutput="false"):
teamcity_message(
"testStarted", stream=self._stream, name=tc_test_id, captureStandardOutput=captureStandardOutput
)
self._timers[test_id] = time.time()
self._write_report(
{
"event": "start",
"test_type": "unittest",
"test_id": test_id,
"ext_test_id": get_ext_test_id(),
"unreliable": self._is_unreliable(test_id),
"parallel_run": self.parallel_run,
"start_time": time.time(),
}
)
def unittest_stop(self, test_id, tc_test_id, passed=False, skipped=False, skip_reason=""):
if skipped:
teamcity_message("testIgnored", stream=self._stream, name=tc_test_id, message=skip_reason)
teamcity_message("testFinished", stream=self._stream, name=tc_test_id)
self._write_report(
{
"event": "stop",
"test_type": "unittest",
"test_id": test_id,
"ext_test_id": get_ext_test_id(),
"passed": passed,
"skipped": skipped,
"skip_reason": skip_reason,
"stop_time": time.time(),
"duration": self._get_duration(test_id),
}
)
def unittest_fail(self, test_id, tc_test_id, fail_type: str, fail_message: str):
teamcity_message("testFailed", stream=self._stream, name=tc_test_id, fail_type=fail_type, message=fail_message)
self._write_report(
{
"event": "fail",
"test_type": "unittest",
"test_id": test_id,
"ext_test_id": get_ext_test_id(),
"fail_type": fail_type,
"message": fail_message,
}
)
def exttest_start(self, test_id, tc_test_id, ext_id, ext_name, captureStandardOutput="false", report=True):
teamcity_message(
"testStarted", stream=self._stream, name=tc_test_id, captureStandardOutput=captureStandardOutput
)
if report:
self._timers[test_id] = time.time()
self._write_report(
{
"event": "start",
"test_type": "exttest",
"test_id": test_id,
"ext_id": ext_id,
"ext_name": ext_name,
"start_time": time.time(),
}
)
def exttest_stop(self, test_id, tc_test_id, passed=False, skipped=False, report=True):
if skipped:
teamcity_message("testIgnored", stream=self._stream, name=tc_test_id, message="skipped")
teamcity_message("testFinished", stream=self._stream, name=tc_test_id)
if report:
self._write_report(
{
"event": "stop",
"test_type": "exttest",
"test_id": test_id,
"passed": passed,
"skipped": skipped,
"stop_time": time.time(),
"duration": self._get_duration(test_id),
}
)
def exttest_fail(self, test_id, tc_test_id, fail_type: str, fail_message: str):
teamcity_message("testFailed", stream=self._stream, name=tc_test_id, fail_type=fail_type, message=fail_message)
self._write_report(
{
"event": "fail",
"test_type": "exttest",
"test_id": test_id,
"fail_type": fail_type,
"message": fail_message,
}
)
def report_result(self, test):
"""Write tests results data we want to later show on the html report and in elastic"""
res = defaultdict(dict)
res["config"] = test.config
res["retries"] = test.retries
res["timeout"] = test.timeout if test.timeout else 0
ext_info = test.ext_info
ext_dict = ext_info.get_dict()
res["state"]["enabled"] = ext_dict.get("state", {}).get("enabled", False)
res["package"]["version"] = ext_dict.get("package", {}).get("version", "")
res.update(vars(test.result))
change = {}
if test.change_analyzer_result:
change["skip"] = test.change_analyzer_result.should_skip_test
change["startup_sequence_hash"] = test.change_analyzer_result.startup_sequence_hash
change["tested_ext_hash"] = test.change_analyzer_result.tested_ext_hash
change["kernel_version"] = test.change_analyzer_result.kernel_version
self._write_report(
{
"event": "result",
"test_type": "exttest",
"test_id": test.test_id,
"ext_id": test.ext_id,
"ext_name": test.ext_name,
"test_bucket": test.bucket_name,
"unreliable": test.config.get("unreliable", False),
"parallel_run": get_setting("/exts/omni.kit.test/parallelRun", default=False),
"change_analyzer": change,
"result": res,
}
)
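# A minimal, self-contained sketch (not used by the module; file name and data
# are illustrative) of the JSON Lines pattern TestReporter relies on: one
# json.dumps() per line on write, one json.loads() per line on read.
def _example_jsonl_roundtrip(path):
    import json
    events = [{"event": "start", "test_id": "t1"}, {"event": "stop", "test_id": "t1"}]
    with open(path, "a") as f:
        for e in events:
            f.write(json.dumps(e))
            f.write("\n")
    with open(path, "r") as f:
        return [json.loads(line) for line in f]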
# TODO: this function should be rewritten to avoid any guessing
def _get_extension_name(path: str, ext_id_to_name: dict):
# if ext_id is in the path return that extension name
for k, v in ext_id_to_name.items():
if k in path:
return v
p = Path(path)
for i, e in enumerate(p.parts):
if e == "exts" or e == "extscore":
if p.parts[i + 1][0:1].isdigit():
return ext_id_to_fullname(p.parts[i + 2])
else:
return p.parts[i + 1]
elif e == "extscache" or e == "extsPhysics":
# exts from cache will be named like this: omni.ramp-103.0.10+103.1.wx64.r.cp37
# exts from physics will be named like this: omni.physx-1.5.0-5.1
return ext_id_to_fullname(p.parts[i + 1])
elif e == "extensions":
# on linux we'll have paths from source/extensions/<ext_name>
return p.parts[i + 1]
carb.log_warn(f"Could not get extension name for {path}")
return "_unsorted"
class ExtCoverage:
def __init__(self):
self.ext_id: str = ""
self.ext_name: str = ""
self.covered_lines = []
self.num_statements = []
self.test_result = {}
def mean_cov(self):
statements = self.sum_statements()
if statements == 0:
return 0
return (self.sum_covered_lines() / statements) * 100.0
def sum_covered_lines(self):
return sum(self.covered_lines)
def sum_statements(self):
return sum(self.num_statements)
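# The percentage computed by ExtCoverage.mean_cov above reduces to the plain
# ratio below; this standalone helper (illustrative only, not used by the
# module) mirrors that arithmetic.
def _example_mean_cov(covered_lines, num_statements):
    statements = sum(num_statements)
    if statements == 0:
        return 0
    return (sum(covered_lines) / statements) * 100.0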
# Note that the combined coverage data will 'merge' (or 'lose') the test config because the coverage is reported
# at the filename level. For example an extension with 2 configs, omni.kit.renderer.core [default, compatibility]
# will produce 2 .pycov files, but in the combined report (json) it will be merged per source file, so no way to know
# what was the coverage for default vs compatibility, we'll get to coverage for all of omni.kit.renderer.core tests
def _build_ext_coverage(coverage_data: dict, ext_id_to_name: dict) -> Dict[str, ExtCoverage]:
exts = defaultdict(ExtCoverage)
for file, info in coverage_data["files"].items():
ext_name = _get_extension_name(file, ext_id_to_name)
exts[ext_name].ext_name = ext_name
exts[ext_name].covered_lines.append(info["summary"]["covered_lines"])
exts[ext_name].num_statements.append(info["summary"]["num_statements"])
return exts
def _report_unreliable_tests(report_data):
    # Dummy test to group all "unreliable" tests and report them (convenience for the TC UI)
unreliable_failed = [r for r in report_data if r["event"] == "result" and r["result"]["unreliable_fail"] == 1]
reporter = TestReporter()
total = len(unreliable_failed)
if total > 0:
dummy_test_id = "UNRELIABLE_TESTS"
summary = ""
for r in unreliable_failed:
test_result = r["result"]
summary += " [{0:5.1f}s] {1} (Count: {2})\n".format(
test_result["duration"], r["test_id"], test_result["test_count"]
)
reporter.unittest_start(dummy_test_id, dummy_test_id)
message = f"There are {total} tests that fail, but marked as unreliable:\n{summary}"
reporter.unittest_fail(dummy_test_id, dummy_test_id, "Error", message)
print(message)
reporter.unittest_stop(dummy_test_id, dummy_test_id)
def _build_test_data_html(report_data):
    # Consider retries: start, fail, start, (nothing) -> success.
    # For each fail event, look back and record whether that test will eventually pass;
    # it is convenient for the code below to know ahead of time if a test will pass.
started_tests = {}
for e in report_data:
if e["event"] == "start":
started_tests[e["test_id"]] = e
elif e["event"] == "fail":
started_tests[e["test_id"]]["will_pass"] = False
results = {item["test_id"]: item["result"] for item in report_data if item["event"] == "result"}
RESULT_EMOJI = {True: "✅", False: "❌"}
COLOR_CLASS = {True: "add-green-color", False: "add-red-color"}
unreliable = False
html_data = '<ul class="test_list">\n'
depth = 0
for e in report_data:
if e["event"] == "start":
# reset depth if needed (missing stop event)
if e["test_type"] == "exttest" and depth > 0:
depth -= 1
while depth > 0:
html_data += "</ul>\n"
depth -= 1
depth += 1
if depth > 1:
html_data += "<ul>\n"
test_id = e["test_id"]
passed = e.get("will_pass", True)
extra = ""
attr = ""
# Root test ([[test]] entry)
# Reset unreliable marker
if depth == 1:
unreliable = False
# Get more stats about the whole [[test]] run
if test_id in results:
test_result = results[test_id]
extra += " [{0:5.1f}s]".format(test_result["duration"])
unreliable = bool(test_result["unreliable"])
passed = test_result["passed"]
style_class = COLOR_CLASS[passed]
if unreliable:
extra += " <b>[unreliable]</b>"
style_class = "add-yellow-color unreliable"
html_data += '<li class="{0}" {4}>{3} {1} {2}</li>\n'.format(
style_class, extra, test_id, RESULT_EMOJI[passed], attr
)
if e["event"] == "stop":
depth -= 1
if depth > 0:
html_data += "</ul>\n"
html_data += "</ul>\n"
return html_data
def _post_build_status(report_data: list):
exts = {item["ext_id"] for item in report_data if item["event"] == "result"}
# there could be retry events, so only count unique tests:
tests_started = {item["test_id"] for item in report_data if item["event"] == "start"}
tests_passed = {item["test_id"] for item in report_data if item["event"] == "stop" and item["passed"]}
total_count = len(tests_started)
fail_count = total_count - len(tests_passed)
if fail_count:
status = "failure"
text = f"{fail_count} tests failed out of {total_count}"
else:
status = "success"
text = f"All {total_count} tests passed"
text += " (extensions tested: {}).".format(len(exts))
teamcity_status(text=text, status=status)
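# A self-contained sketch (synthetic events, not used by the module) of the set
# arithmetic used above: unique started tests minus unique passed stops gives
# the failure count, which naturally ignores retries of the same test_id.
def _example_fail_count(report_data):
    started = {e["test_id"] for e in report_data if e["event"] == "start"}
    passed = {e["test_id"] for e in report_data if e["event"] == "stop" and e.get("passed")}
    return len(started), len(started) - len(passed)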
def _calculate_durations(report_data: list):
"""
    Calculate the startup time of each extension and the time taken by each individual test.
    We count the time between the extension's start_time and the start_time of its first test.
"""
ext_startup_time = {}
ext_startup_time_found = {}
ext_tests_time = {}
for d in report_data:
test_id = d["test_id"]
test_type = d["test_type"]
ext_test_id = d.get("ext_test_id", None)
if d["event"] == "start":
if test_type == "exttest":
if not ext_startup_time_found.get(test_id):
start_time = d["start_time"]
ext_startup_time[test_id] = start_time
else:
if not ext_startup_time_found.get(ext_test_id):
t = ext_startup_time.get(ext_test_id, 0.0)
ext_startup_time[ext_test_id] = round(d["start_time"] - t, 2)
ext_startup_time_found[ext_test_id] = True
elif d["event"] == "stop":
if test_type == "unittest":
t = ext_tests_time.get(ext_test_id, 0.0)
t += d.get("duration", 0.0)
ext_tests_time[ext_test_id] = t
elif d["event"] == "result":
test_result = d.get("result", None)
if test_result:
# it's possible an extension has no tests, so we set startup_duration = duration
if ext_startup_time_found.get(test_id, False) is True:
t = ext_startup_time.get(test_id, 0.0)
test_result["startup_duration"] = t
else:
test_result["startup_duration"] = test_result.get("duration", 0.0)
# update duration of all tests
test_result["tests_duration"] = ext_tests_time.get(test_id, 0.0)
# ratios
test_result["startup_ratio"] = 0.0
test_result["tests_ratio"] = 0.0
if test_result["tests_duration"] != 0.0:
test_result["startup_ratio"] = (test_result["startup_duration"] / test_result["duration"]) * 100.0
test_result["tests_ratio"] = (test_result["tests_duration"] / test_result["duration"]) * 100.0
def generate_report():
"""After running tests this function will generate html report / post to nvdf / publish artifacts"""
# at this point all kit processes should be finished
if is_running_on_ci():
_kill_kit_processes()
try:
print("\nGenerating a Test Report...")
_generate_report_internal()
except Exception as e:
import traceback
print(f"Exception while running generate_report(): {e}, callstack: {traceback.format_exc()}")
def _kill_kit_processes():
"""Kill all Kit processes except self"""
kit_process_name = carb.tokens.get_tokens_interface().resolve("${exe-filename}")
for proc in psutil.process_iter():
if proc.pid == os.getpid():
continue
try:
if proc.name() == kit_process_name:
carb.log_warn(
"Killing a Kit process that is still running:\n"
f" PID: {proc.pid}\n"
f" Command line: {proc.cmdline()}"
)
proc.terminate()
except psutil.AccessDenied as e:
carb.log_warn(f"Access denied: {e}")
except psutil.ZombieProcess as e:
carb.log_warn(f"Encountered a zombie process: {e}")
except psutil.NoSuchProcess as e:
carb.log_warn(f"Process no longer exists: {e}")
        except Exception as e:
            carb.log_warn(f"An error occurred: {e}")
def _generate_report_internal():
# Get Test report and publish it
report_data = []
# combine report from various test runs (each process has own file, for parallel run)
for report_file in glob.glob(get_global_test_output_path() + "/*/" + REPORT_FILENAME):
report_data.extend(_load_report_data(report_file))
if not report_data:
return
# generate combined file
combined_report_path = get_global_test_output_path() + "/report_combined.jsonl"
with open(combined_report_path, "w") as f:
f.write(json.dumps(report_data))
teamcity_publish_artifact(combined_report_path)
# TC Build status
_post_build_status(report_data)
# Dummy test report
_report_unreliable_tests(report_data)
# Prepare output path
output_path = get_global_test_output_path()
os.makedirs(output_path, exist_ok=True)
# calculate durations (startup, total, etc)
_calculate_durations(report_data)
# post to elasticsearch
post_to_nvdf(report_data)
# write junit xml
_write_junit_results(report_data)
# get coverage results and generate html report
merged_results, coverage_results = _load_coverage_results(report_data)
html = _generate_html_report(report_data, merged_results)
# post coverage results
post_coverage_to_nvdf(_get_coverage_for_nvdf(merged_results, coverage_results))
# write and publish html report
_write_html_report(html, output_path)
# publish all test output to TC in the end:
teamcity_publish_artifact(f"{output_path}/**/*")
def _load_coverage_results(report_data, read_coverage=True) -> tuple[dict, dict]:
# build a map of extension id to extension name
ext_id_to_name = {}
for item in report_data:
if item["event"] == "result":
ext_id_to_name[item["ext_id"]] = item["ext_name"]
# Get data coverage per extension (configs are merged)
coverage_results = defaultdict(ExtCoverage)
if read_coverage:
coverage_result = generate_coverage_report()
if coverage_result and coverage_result.json_path:
            with open(coverage_result.json_path) as f:
                coverage_data = json.load(f)
            coverage_results = _build_ext_coverage(coverage_data, ext_id_to_name)
# combine test results and coverage data, key is the test_id (separates extensions per config)
merged_results = defaultdict(ExtCoverage)
for item in report_data:
if item["event"] == "result":
test_id = item["test_id"]
ext_id = item["ext_id"]
ext_name = item["ext_name"]
merged_results[test_id].ext_id = ext_id
merged_results[test_id].ext_name = ext_name
merged_results[test_id].test_result = item["result"]
cov = coverage_results.get(ext_name)
if cov:
merged_results[test_id].covered_lines = cov.covered_lines
merged_results[test_id].num_statements = cov.num_statements
return merged_results, coverage_results
def _get_coverage_for_nvdf(merged_results: dict, coverage_results: dict) -> dict:
json_data = {}
    for ext_name in coverage_results:
# grab the matching result
result: ExtCoverage = merged_results.get(ext_name)
if not result:
            # in rare cases the default name of a test config can be different; search for the extension name instead
res: ExtCoverage
for res in merged_results.values():
if res.ext_name == ext_name:
result = res
break
test_result = result.test_result if result else None
if not test_result:
continue
test_data = {
"ext_id": result.ext_id,
"ext_name": ext_name,
}
test_data.update(test_result)
json_data.update({ext_name: {"test": test_data}})
return json_data
def _generate_html_report(report_data, merged_results):
html = ""
with open(os.path.join(HTML_PATH, "template.html"), "r") as f:
html = f.read()
class Color(Enum):
RED = 0
GREEN = 1
YELLOW = 2
def get_color(var, threshold: tuple, inverse=False, warning_only=False) -> Color:
if var == "":
return None
if inverse is True:
if float(var) >= threshold[0]:
return Color.RED
elif float(var) >= threshold[1]:
return Color.YELLOW
elif not warning_only:
return Color.GREEN
else:
if float(var) <= threshold[0]:
return Color.RED
elif float(var) <= threshold[1]:
return Color.YELLOW
elif not warning_only:
return Color.GREEN
def get_td(var, color: Color = None):
if color is Color.RED:
return f"<td ov-red>{var}</td>\n"
elif color is Color.GREEN:
return f"<td ov-green>{var}</td>\n"
elif color is Color.YELLOW:
return f"<td ov-yellow>{var}</td>\n"
else:
return f"<td>{var}</td>\n"
coverage_enabled = get_setting("/exts/omni.kit.test/pyCoverageEnabled", default=False)
coverage_threshold = get_setting("/exts/omni.kit.test/pyCoverageThreshold", default=75)
# disable coverage button when not needed
if not coverage_enabled:
html = html.replace(
"""<button class="tablinks" onclick="openTab(event, 'Coverage')">Coverage</button>""",
"""<button disabled class="tablinks" onclick="openTab(event, 'Coverage')">Coverage</button>""",
)
# Build test run data
html = html.replace("%%test_data%%", _build_test_data_html(report_data))
# Build extension table
html_data = ""
for test_id, info in sorted(merged_results.items()):
r = info.test_result
        waiver = bool(r.get("config", {}).get("waiver"))
passed = r.get("passed", False)
test_count = r.get("test_count", 0)
duration = round(r.get("duration", 0.0), 1)
startup_duration = round(r.get("startup_duration", 0.0), 1)
startup_ratio = round(r.get("startup_ratio", 0.0), 1)
tests_duration = round(r.get("tests_duration", 0.0), 1)
tests_ratio = round(r.get("tests_ratio", 0.0), 1)
timeout = round(r.get("timeout", 0), 0)
timeout_ratio = 0
if timeout != 0:
timeout_ratio = round((duration / timeout) * 100.0, 0)
# an extension can override pyCoverageEnabled / pyCoverageThreshold
ext_coverage_enabled = bool(r.get("config", {}).get("pyCoverageEnabled", coverage_enabled))
ext_coverage_threshold = int(r.get("config", {}).get("pyCoverageThreshold", coverage_threshold))
ext_coverage_threshold_low = int(ext_coverage_threshold * (2 / 3))
# coverage data
num_statements = info.sum_statements()
num_covered_lines = info.sum_covered_lines()
cov_percent = round(info.mean_cov(), 2)
# add those calculated values to our results
py_coverage = {
"lines_total": num_statements,
"lines_tested": num_covered_lines,
"cov_percent": float(cov_percent),
"cov_threshold": ext_coverage_threshold,
"enabled": bool(coverage_enabled and ext_coverage_enabled),
}
info.test_result["pyCoverage"] = py_coverage
html_data += "<tr>\n"
html_data += get_td(test_id)
html_data += get_td(r.get("package", {}).get("version", ""))
html_data += get_td(waiver, Color.GREEN if waiver is True else None)
html_data += get_td(passed, Color.GREEN if bool(passed) is True else Color.RED)
html_data += get_td(test_count, Color.GREEN if waiver is True else get_color(test_count, (0, 5)))
# color code tests duration: >=60 seconds is red and >=30 seconds is yellow
html_data += get_td(str(duration), get_color(duration, (60, 30), inverse=True, warning_only=True))
html_data += get_td(str(startup_duration))
html_data += get_td(str(startup_ratio))
html_data += get_td(str(tests_duration))
html_data += get_td(str(tests_ratio))
html_data += get_td(str(timeout))
html_data += get_td(str(timeout_ratio), get_color(timeout_ratio, (90, 75), inverse=True, warning_only=True))
html_data += get_td(bool(coverage_enabled and ext_coverage_enabled))
if coverage_enabled and ext_coverage_enabled:
html_data += get_td(ext_coverage_threshold)
html_data += get_td(num_statements)
html_data += get_td(num_covered_lines)
html_data += get_td(
cov_percent,
Color.GREEN
if waiver is True
else get_color(cov_percent, (ext_coverage_threshold_low, ext_coverage_threshold)),
)
print(f" > Coverage for {test_id} is {cov_percent}%")
else:
for _ in range(4):
html_data += get_td("-")
html_data += "</tr>\n"
html = html.replace("%%table_data%%", html_data)
return html
def _write_html_report(html, output_path):
REPORT_NAME = "index.html"
REPORT_FOLDER_NAME = "test_report"
report_dir = os.path.join(output_path, REPORT_FOLDER_NAME)
os.makedirs(report_dir, exist_ok=True)
with open(os.path.join(report_dir, REPORT_NAME), "w") as f:
f.write(html)
print(f" > Full report available here {f.name}")
if not is_running_on_ci():
import webbrowser
webbrowser.open(f.name)
# copy javascript/css files
shutil.copyfile(os.path.join(HTML_PATH, "script.js"), os.path.join(report_dir, "script.js"))
shutil.copyfile(os.path.join(HTML_PATH, "style.css"), os.path.join(report_dir, "style.css"))
shutil.make_archive(os.path.join(output_path, REPORT_FOLDER_NAME), "zip", report_dir)
teamcity_publish_artifact(os.path.join(output_path, "*.zip"))
@dataclass
class Stats:
passed: int = 0
failure: int = 0
error: int = 0
skipped: int = 0
def get_total(self):
return self.passed + self.failure + self.error + self.skipped
def _write_junit_results(report_data: list):
"""Write a JUnit XML from our report data"""
testcases = []
testsuites = ET.Element("testsuites")
start_time = datetime.now()
last_failure = {"message": "", "fail_type": ""}
stats = Stats()
for data in report_data:
test_id = data["test_id"]
test_type = data["test_type"]
ext_test_id = data.get("ext_test_id", test_id)
if data["event"] == "start":
if test_type == "exttest":
start_time = datetime.fromtimestamp(data["start_time"])
elif data["event"] == "fail":
last_failure = data
elif data["event"] == "stop":
# create a testcase for each stop event (for both exttest and unittest)
testcase = ET.Element("testcase", name=test_id, classname=ext_test_id, time=f"{data['duration']:.3f}")
if data.get("skipped"):
stats.skipped += 1
node = ET.SubElement(testcase, "skipped")
node.text = data.get("skip_reason", "")
elif data.get("passed"):
stats.passed += 1
else:
# extension tests failures are of type Error
if test_type == "exttest":
stats.error += 1
node = ET.SubElement(testcase, "error")
elif last_failure["fail_type"] == "Failure":
stats.failure += 1
node = ET.SubElement(testcase, "failure")
else:
stats.error += 1
node = ET.SubElement(testcase, "error")
node.text = last_failure["message"]
testcases.append(testcase)
# extension test stop - gather all testcases and add test suite
if test_type == "exttest":
testsuite = ET.Element(
"testsuite",
name=test_id,
failures=str(stats.failure),
errors=str(stats.error),
skipped=str(stats.skipped),
tests=str(stats.get_total()),
time=f"{data['duration']:.3f}",
timestamp=start_time.isoformat(),
hostname=platform.node(),
)
testsuite.extend(testcases)
testsuites.append(testsuite)
# reset things between test suites
testcases = []
last_failure = {"message": "", "fail_type": ""}
stats = Stats()
# write our file
ET.indent(testsuites)
with open(get_results_filepath(), "w", encoding="utf-8") as f:
f.write(ET.tostring(testsuites, encoding="unicode", xml_declaration=True))
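# A minimal, self-contained sketch (names are illustrative, not part of the
# module) of the JUnit XML shape emitted above: a <testsuites> root, one
# <testsuite> per extension test, and one <testcase> per stop event, with
# <failure>/<error>/<skipped> children for non-passing cases.
def _example_junit_xml():
    import xml.etree.ElementTree as ET
    testsuites = ET.Element("testsuites")
    testsuite = ET.SubElement(testsuites, "testsuite", name="omni.example", tests="1", failures="1")
    testcase = ET.SubElement(testsuite, "testcase", name="test_sample", time="0.100")
    failure = ET.SubElement(testcase, "failure")
    failure.text = "assertion failed"
    return ET.tostring(testsuites, encoding="unicode")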
# omniverse-code/kit/exts/omni.kit.test/omni/kit/test/test_coverage.py
import os
import shutil
from datetime import datetime
from functools import lru_cache
from pathlib import Path
import carb.settings
import coverage
import omni.kit.app
from .teamcity import teamcity_publish_artifact
from .utils import get_global_test_output_path, get_setting
# For Coverage.py to be able to combine data it's required that combined reports have the same prefix in the filenames.
# From https://coverage.readthedocs.io/en/coverage-6.1.2/api_coverage.htm:
# "All coverage data files whose name starts with data_file (from the coverage() constructor) will be read,
# and combined together into the current measurements."
COV_OUTPUT_DATAFILE_PREFIX = "py_cov"
COV_OUTPUT_DATAFILE_EXTENSION = ".pycov"
CURRENT_PATH = Path(__file__).parent
HTML_LOCAL_PATH = CURRENT_PATH.parent.parent.parent.joinpath("html", "coverage")
class PyCoverageCollectorSettings:
def __init__(self):
self.enabled = False
self.output_dir = None
self.filter: list = None
self.omit: list = None
self.include_modules = False
self.include_dependencies = False
self.include_test_dependencies = False
@lru_cache()
def _get_coverage_output_dir():
return os.path.join(get_global_test_output_path(), "pycov")
def read_coverage_collector_settings() -> PyCoverageCollectorSettings:
result = PyCoverageCollectorSettings()
result.enabled = get_setting("/exts/omni.kit.test/pyCoverageEnabled", True)
result.output_dir = _get_coverage_output_dir()
result.filter = get_setting("/exts/omni.kit.test/pyCoverageFilter", None)
result.omit = get_setting("/exts/omni.kit.test/pyCoverageOmit", None)
result.include_modules = get_setting("/exts/omni.kit.test/pyCoverageIncludeModules", False)
result.include_dependencies = get_setting("/exts/omni.kit.test/pyCoverageIncludeDependencies", False)
result.include_test_dependencies = get_setting("/exts/omni.kit.test/pyCoverageIncludeTestDependencies", False)
return result
class PyCoverageCollector:
"""Initializes code coverage collections and saves collected data at Python interpreter exit"""
class PyCoverageSettings:
def __init__(self):
self.filter = None
self.omit = None
self.output_data_path_prefix = None
self.output_datafile_suffix = None
def __init__(self):
self._coverage = None
def _read_collector_settings(self) -> PyCoverageSettings:
"""
        Reads coverage settings and returns a non-None PyCoverageSettings if Python coverage is required
"""
app_name = get_setting("/app/name")
collector_settings = read_coverage_collector_settings()
if not collector_settings.enabled:
print(f"'{app_name}' has disabled Python coverage in settings")
return None
if collector_settings.output_dir is None:
print(f"Output directory for Python coverage isn't set. Skipping Python coverage for '{app_name}'.")
return None
result = self.PyCoverageSettings()
result.filter = collector_settings.filter
result.omit = collector_settings.omit
filename_timestamp = app_name + f"_{datetime.now():%Y-%m-%d_%H-%M-%S-%f}"
# PyCoverage combines report files that have the same prefix so adding the same prefix to created reports
result.output_data_path_prefix = os.path.normpath(
os.path.join(collector_settings.output_dir, COV_OUTPUT_DATAFILE_PREFIX)
)
result.output_datafile_suffix = filename_timestamp + COV_OUTPUT_DATAFILE_EXTENSION
return result
def startup(self):
# Reading settings to check if it's needed to start Python coverage
# It's needed to be done as soon as possible to properly collect data
self._settings = self._read_collector_settings()
if self._settings is not None:
self._coverage = coverage.Coverage(
source=self._settings.filter,
omit=self._settings.omit,
data_file=self._settings.output_data_path_prefix,
data_suffix=self._settings.output_datafile_suffix,
)
self._coverage.config.disable_warnings = [
"module-not-measured",
"module-not-imported",
"no-data-collected",
"couldnt-parse",
]
self._coverage.start()
            # Register for app shutdown to finalize the coverage.
            # With fast shutdown, an extension's shutdown function is not called, so the
            # following subscription gives us a chance to save the coverage report.
if carb.settings.get_settings().get("/app/fastShutdown"):
self._shutdown_subs = (
omni.kit.app.get_app()
.get_shutdown_event_stream()
.create_subscription_to_pop_by_type(
omni.kit.app.POST_QUIT_EVENT_TYPE, self.shutdown, name="omni.kit.test::coverage", order=1000
)
)
else:
self._shutdown_subs = None
def shutdown(self, _=None):
if self._coverage is not None:
self._coverage.stop()
try:
# Note: trying to save report in non-internal format in the "atexit" handler will result in error
self._coverage.save()
except coverage.misc.CoverageException as err:
print(f"Couldn't save Coverage report in internal format: {err}")
self._coverage = None
self._settings = None
self._shutdown_subs = None
class PyCoverageReporterSettings:
def __init__(self):
self.source_dir = None
self.output_to_std = False
self.output_to_json = False
self.output_to_html = False
self.combine_previous_data = False
def read_coverage_reporter_settings() -> PyCoverageReporterSettings:
coverage_enabled = get_setting("/exts/omni.kit.test/pyCoverageEnabled", True)
if not coverage_enabled:
return None
pyCoverageFormats = [s.lower() for s in get_setting("/exts/omni.kit.test/pyCoverageFormats", ["json"])]
output_to_std = "stdout" in pyCoverageFormats
output_to_json = "json" in pyCoverageFormats
output_to_html = "html" in pyCoverageFormats
    # Check whether any Python coverage report format is required
    if not output_to_std and not output_to_json and not output_to_html:
return None
source_dir = _get_coverage_output_dir()
if not os.path.exists(source_dir):
return None
result = PyCoverageReporterSettings()
result.source_dir = source_dir
result.output_to_std = output_to_std
result.output_to_json = output_to_json
result.output_to_html = output_to_html
result.combine_previous_data = get_setting("/exts/omni.kit.test/pyCoverageCombinedReport", False)
return result
def _report_single_coverage_result(
cov: coverage,
src_path: str,
std_output: bool = True,
json_output_file: str = None,
title: str = None,
html_output_path: str = None,
):
"""
Creates single report and returns path for created json file (or None if it wasn't created)
"""
try:
# Note: parameter 'keep' sets if read files will be removed afterwards
# setting it to true as they might be used to regenerate overall coverage report
cov.combine(data_paths=[src_path], keep=True)
# Note: ignore errors is needed to ignore some of the errors when coverage fails to process
# .../PythonExtension.cpp::shutdown() or some other file
if std_output:
print()
print("=" * 60)
title = title if title is not None else "Python coverage report"
print(title)
print()
cov.report(ignore_errors=True)
print("=" * 60)
if json_output_file is not None:
cov.json_report(outfile=json_output_file, ignore_errors=True)
if html_output_path is not None:
cov.html_report(directory=html_output_path, ignore_errors=True)
except coverage.misc.CoverageException as err:
print(f"Couldn't create coverage report for '{src_path}': {err}")
def _modify_html_report(output_path: str):
# modify coverage html file to have a larger and clearer filter for extensions
html = ""
with open(os.path.join(output_path, "index.html"), 'r') as file:
html = file.read()
with open(os.path.join(HTML_LOCAL_PATH, "modify.html"), 'r') as file:
# find_replace [0] is the line to find, [1] the line to replace and [2] the line to add
find_replace = file.read().splitlines()
html = html.replace(find_replace[0], find_replace[1] + '\n' + find_replace[2])
with open(os.path.join(output_path, "index.html"), 'w') as file:
file.write(html)
# overwrite coverage css/js files
shutil.copyfile(os.path.join(HTML_LOCAL_PATH, "new_style.css"), os.path.join(output_path, "style.css"))
shutil.copyfile(os.path.join(HTML_LOCAL_PATH, "new_script.js"), os.path.join(output_path, "coverage_html.js"))
class PyCoverageReporterResult:
def __init__(self):
self.html_path = None
self.json_path = None
def report_coverage_results(reporter_settings: PyCoverageReporterSettings = None) -> PyCoverageReporterResult:
"""
Processes previously collected coverage data according to settings in the 'reporter_settings'
"""
result = PyCoverageReporterResult()
if reporter_settings is None:
return result
if (
not reporter_settings.output_to_std
and not reporter_settings.output_to_json
and not reporter_settings.output_to_html
):
print("No output report options selected for the coverage results. No result report generated.")
return result
# use global configuration file
config_file = str(CURRENT_PATH.joinpath(".coveragerc"))
# A helper file required by coverage for combining already existing reports
cov_internal_file = os.path.join(reporter_settings.source_dir, COV_OUTPUT_DATAFILE_PREFIX)
cov = coverage.Coverage(source=None, data_file=cov_internal_file, config_file=config_file)
cov.config.disable_warnings = ["module-not-measured", "module-not-imported", "no-data-collected", "couldnt-parse"]
if reporter_settings.combine_previous_data:
result.json_path = (
os.path.join(reporter_settings.source_dir, "combined_py_coverage" + COV_OUTPUT_DATAFILE_EXTENSION + ".json")
if reporter_settings.output_to_json
else None
)
result.html_path = (
os.path.join(reporter_settings.source_dir, "combined_py_coverage_html")
if reporter_settings.output_to_html
else None
)
_report_single_coverage_result(
cov,
reporter_settings.source_dir,
reporter_settings.output_to_std,
result.json_path,
html_output_path=result.html_path,
)
if result.html_path and os.path.exists(result.html_path):
# slightly modify the html report for our needs
_modify_html_report(result.html_path)
            # add the folder to a zip file, which will be used on TeamCity
shutil.make_archive(os.path.join(reporter_settings.source_dir, "coverage"), "zip", result.html_path)
        if result.json_path and not os.path.exists(result.json_path):
            result.json_path = None
else:
internal_reports = [
file for file in os.listdir(reporter_settings.source_dir) if file.endswith(COV_OUTPUT_DATAFILE_EXTENSION)
]
for cur_file in internal_reports:
cov.erase()
processed_filename = (
cur_file[len(COV_OUTPUT_DATAFILE_PREFIX) + 1 :]
if cur_file.startswith(COV_OUTPUT_DATAFILE_PREFIX)
else cur_file
)
json_path = None
if reporter_settings.output_to_json:
json_path = os.path.join(reporter_settings.source_dir, processed_filename + ".json")
title = None
if reporter_settings.output_to_std:
title, _ = os.path.splitext(processed_filename)
title = f"Python coverage report for '{title}'"
_report_single_coverage_result(
cov,
os.path.join(reporter_settings.source_dir, cur_file),
reporter_settings.output_to_std,
json_path,
title,
)
# Cleanup of intermediate data
cov.erase()
return result
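The else-branch above strips a per-extension prefix (plus one separator character) from each intermediate coverage data file before reporting it. A standalone sketch of that naming logic follows; note the prefix value `".coverage"` is only an assumption for illustration, since the real `COV_OUTPUT_DATAFILE_PREFIX` constant is defined elsewhere in this module.

```python
# Hypothetical prefix value; the real COV_OUTPUT_DATAFILE_PREFIX is
# defined elsewhere in this module.
COV_OUTPUT_DATAFILE_PREFIX = ".coverage"

def processed_filename(cur_file: str) -> str:
    # Mirrors cur_file[len(COV_OUTPUT_DATAFILE_PREFIX) + 1:] in the loop
    # above: drop the prefix plus its trailing separator character.
    if cur_file.startswith(COV_OUTPUT_DATAFILE_PREFIX):
        return cur_file[len(COV_OUTPUT_DATAFILE_PREFIX) + 1:]
    return cur_file
```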
def generate_coverage_report() -> PyCoverageReporterResult:
# processing coverage data
result = PyCoverageReporterResult()
coverage_collector_settings = read_coverage_collector_settings()
# automatically enable coverage if we detect a pycov directory present when generating a report
if os.path.exists(coverage_collector_settings.output_dir):
carb.settings.get_settings().set("/exts/omni.kit.test/pyCoverageEnabled", True)
coverage_collector_settings.enabled = True
if coverage_collector_settings.enabled:
coverage_reporter_settings = read_coverage_reporter_settings()
result = report_coverage_results(coverage_reporter_settings)
teamcity_publish_artifact(os.path.join(coverage_collector_settings.output_dir, "*.zip"))
return result
# File: omniverse-code/kit/exts/omni.kit.test/omni/kit/test/flaky.py
import datetime
import logging
import os
from collections import defaultdict
import carb
from .utils import get_test_output_path
from .nvdf import get_app_info, query_nvdf
logger = logging.getLogger(__name__)
FLAKY_TESTS_QUERY_DAYS = 30
class FlakyTestAnalyzer:
"""Basic Flaky Tests Analyzer"""
AGG_TEST_IDS = "ids"
AGG_LAST_EXT_CONFIG = "config"
BUCKET_PASSED = "passed"
BUCKET_FAILED = "failed"
tests_failed = set()
ext_failed = defaultdict(list)
def __init__(
self, ext_test_id: str = "*", query_days=FLAKY_TESTS_QUERY_DAYS, exclude_consecutive_failure: bool = True
):
self.ext_test_id = ext_test_id
self.query_days = query_days
self.exclude_consecutive_failure = exclude_consecutive_failure
self.app_info = get_app_info()
self.query_result = self._query_nvdf()
def should_skip_test(self) -> bool:
if not self.query_result:
carb.log_info(f"{self.ext_test_id} query error - skipping test")
return True
if len(self.tests_failed) == 0:
carb.log_info(f"{self.ext_test_id} has no failed tests in last {self.query_days} days - skipping test")
return True
return False
def get_flaky_tests(self, ext_id: str) -> list:
return self.ext_failed.get(ext_id, [])
def generate_playlist(self) -> str:
test_output_path = get_test_output_path()
os.makedirs(test_output_path, exist_ok=True)
filename = "flakytest_" + self.ext_test_id.replace(".", "_").replace(":", "-")
        filepath = os.path.join(test_output_path, f"{filename}_playlist.log")
if self._write_playlist(filepath):
return filepath
def _write_playlist(self, filepath: str) -> bool:
try:
with open(filepath, "w") as f:
f.write("\n".join(self.tests_failed))
return True
except IOError as e:
carb.log_warn(f"Error writing to {filepath} -> {e}")
return False
def _query_nvdf(self) -> bool:
query = self._es_query(days=self.query_days, hours=0)
r = query_nvdf(query)
for aggs in r.get("aggregations", {}).get(self.AGG_TEST_IDS, {}).get("buckets", {}):
test_id = aggs.get("key")
test_config = aggs.get("config", {}).get("hits", {}).get("hits")
if not test_config or not test_config[0]:
continue
test_config = test_config[0]
ext_test_id = test_config.get("fields", {}).get("test.s_ext_test_id")
if not ext_test_id or not ext_test_id[0]:
continue
ext_test_id = ext_test_id[0]
passed = aggs.get(self.BUCKET_PASSED, {}).get("doc_count", 0)
failed = aggs.get(self.BUCKET_FAILED, {}).get("doc_count", 0)
ratio = 0
if passed != 0 and failed != 0:
ratio = failed / (passed + failed)
carb.log_info(
f"{test_id} passed: {passed} failed: {failed} ({ratio * 100:.2f}% fail rate) in last {self.query_days} days"
)
if failed == 0:
continue
self.ext_failed[ext_test_id].append(
{"test_id": test_id, "passed": passed, "failed": failed, "ratio": ratio}
)
self.tests_failed.add(test_id)
return True
def _es_query(self, days: int, hours: int) -> dict:
target_date = datetime.datetime.utcnow() - datetime.timedelta(days=days, hours=hours)
kit_version = self.app_info["kit_version"]
carb.log_info(f"NVDF query for {self.ext_test_id} on Kit {kit_version}, last {days} days")
query = {
"aggs": {
self.AGG_TEST_IDS: {
"terms": {"field": "test.s_test_id", "order": {self.BUCKET_FAILED: "desc"}, "size": 1000},
"aggs": {
self.AGG_LAST_EXT_CONFIG: {
"top_hits": {
"fields": [{"field": "test.s_ext_test_id"}],
"_source": False,
"size": 1,
"sort": [{"ts_created": {"order": "desc"}}],
}
},
self.BUCKET_PASSED: {
"filter": {
"bool": {
"filter": [{"term": {"test.b_passed": True}}],
}
}
},
self.BUCKET_FAILED: {
"filter": {
"bool": {
"filter": [{"term": {"test.b_passed": False}}],
}
}
},
},
}
},
"size": 0,
"query": {
"bool": {
# filter out consecutive failure
# not (test.b_consecutive_failure : * and test.b_consecutive_failure : true)
"must_not": {
"bool": {
"filter": [
{
"bool": {
"should": [{"exists": {"field": "test.b_consecutive_failure"}}],
"minimum_should_match": 1,
}
},
{
"bool": {
"should": [
{"term": {"test.b_consecutive_failure": self.exclude_consecutive_failure}}
],
"minimum_should_match": 1,
}
},
]
}
},
"filter": [
{"term": {"test.s_ext_test_id": self.ext_test_id}},
{"term": {"test.s_test_type": "unittest"}},
{"term": {"test.b_skipped": False}},
{"term": {"test.b_unreliable": False}},
{"term": {"test.b_parallel_run": False}}, # Exclude parallel_run results
{"term": {"app.s_kit_version": kit_version}},
{"term": {"app.l_merge_request": 0}}, # Should we enable flaky tests from MR? For now excluded.
{
"range": {
"ts_created": {
"gte": target_date.isoformat() + "Z",
"format": "strict_date_optional_time",
}
}
},
],
}
},
}
return query
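`_query_nvdf` above derives a fail rate from the passed/failed bucket counts returned by the aggregation. A minimal standalone sketch of that computation; as in the source, the ratio is left at zero unless both buckets are non-empty.

```python
def fail_ratio(passed: int, failed: int) -> float:
    # Mirrors the ratio computed in _query_nvdf above: failures as a
    # fraction of all recorded runs, but only when both buckets have data.
    ratio = 0.0
    if passed != 0 and failed != 0:
        ratio = failed / (passed + failed)
    return ratio
```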
# File: omniverse-code/kit/exts/omni.kit.test/omni/kit/test/unittests.py
import asyncio
import fnmatch
import os
import random
import sys
import traceback
import unittest
from contextlib import suppress
from glob import glob
from importlib import import_module
from itertools import islice
from os.path import basename, dirname, isfile, join, splitext
from types import ModuleType
from typing import Callable, List
import carb
import carb.tokens
import omni.kit.app
from .async_unittest import AsyncTestSuite, AsyncTextTestRunner, OmniTestResult
from .exttests import RunExtTests
from .reporter import TestReporter
from .sampling import SamplingFactor, get_tests_sampling_to_skip
from .teamcity import teamcity_message
from .test_reporters import _test_status_report
from .utils import get_ext_test_id, get_setting, get_test_output_path
def _import_if_exist(module: str):
try:
return import_module(module)
except ModuleNotFoundError as e:
# doesn't exist if that is what we trying to import or namespace
if e.name == module or module.startswith(e.name + "."):
return None
carb.log_error(
f"Failed to import python module with tests: {module}. Error: {e}. Traceback:\n{traceback.format_exc()}"
)
except Exception as e:
carb.log_error(
f"Failed to import python module with tests: {module}. Error: {e}. Traceback:\n{traceback.format_exc()}"
)
def _get_enabled_extension_modules(filter_fn: Callable[[str], bool] = None):
manager = omni.kit.app.get_app().get_extension_manager()
# For each extension get each python module it declares
module_names = manager.get_enabled_extension_module_names()
sys_modules = set()
for name in module_names:
if name in sys.modules:
if filter_fn and not filter_fn(name):
continue
sys_modules.add(sys.modules[name])
# Automatically look for and import '[some_module].tests' and '[some_module].ogn.tests' so that extensions
# don't have to put tests into config files and import them all the time.
for test_submodule in [f"{name}.tests", f"{name}.ogn.tests"]:
if filter_fn and not filter_fn(test_submodule):
continue
if test_submodule in sys.modules:
sys_modules.add(sys.modules[test_submodule])
else:
test_module = _import_if_exist(test_submodule)
if test_module:
sys_modules.add(test_module)
return sys_modules
# ----------------------------------------------------------------------
SCANNED_TEST_MODULES = {} # Dictionary of moduleName: [dynamicTestModules]
_EXTENSION_DISABLED_HOOK = None # Hook for monitoring extension state changes, to keep the auto-populated list synced
_LOG = bool(os.getenv("TESTS_DEBUG")) # Environment variable to enable debugging of the test registration
# ----------------------------------------------------------------------
def remove_from_dynamic_test_cache(module_root):
"""Get the list of tests dynamically added to the given module directory (via "scan_for_test_modules")"""
global SCANNED_TEST_MODULES
for module_suffix in ["", ".tests", ".ogn.tests"]:
module_name = module_root + module_suffix
tests_to_remove = SCANNED_TEST_MODULES.get(module_name, [])
if tests_to_remove:
if _LOG:
print(f"Removing {len(tests_to_remove)} tests from {module_name}")
del SCANNED_TEST_MODULES[module_name]
# ----------------------------------------------------------------------
def _on_ext_disabled(ext_id, *_):
"""Callback executed when an extension has been disabled - scan for tests to remove"""
config = omni.kit.app.get_app().get_extension_manager().get_extension_dict(ext_id)
for node in ("module", "modules"):
with suppress(KeyError):
for module in config["python"][node]:
remove_from_dynamic_test_cache(module["name"])
# ----------------------------------------------------------------------
def dynamic_test_modules(module_root: str, module_file: str) -> List[ModuleType]:
"""Import all of the test modules and return a list of the imports so that automatic test recognition works
The normal test recognition mechanism relies on knowing all of the file names at build time. This function is
used to support automatic recognition of all test files in a certain directory at run time.
Args:
module_root: Name of the module for which tests are being imported, usually just __name__ of the caller
module_file: File from which the import is happening, usually just __file__ of the caller
Usage:
In the directory containing your tests add this line to the __init__.py file (creating the file if necessary):
scan_for_test_modules = True
It will pick up any Python files names testXXX.py or TestXXX.py and scan them for tests when the extension
is loaded.
Important:
The __init__.py file must be imported with the extension. If you have a .tests module or .ogn.tests module
underneath your main module this will happen automatically for you.
Returns:
List of modules that were added, each pointing to a file in which tests are contained
"""
global _EXTENSION_DISABLED_HOOK
global SCANNED_TEST_MODULES
if module_root in SCANNED_TEST_MODULES:
return SCANNED_TEST_MODULES[module_root]
modules_imported = []
for module_name in [basename(f) for f in glob(join(dirname(module_file), "*.py")) if isfile(f)]:
if module_name != "__init__" and module_name.lower().startswith("test"):
imported_module = f"{module_root}.{splitext(module_name)[0]}"
modules_imported.append(import_module(imported_module))
SCANNED_TEST_MODULES[module_root] = modules_imported
# This is a singleton initialization. If ever any test modules are scanned then from then on monitor for an
# extension being disabled so that the cached list can be cleared for rebuilding on the next run.
if _EXTENSION_DISABLED_HOOK is None:
hooks = omni.kit.app.get_app().get_extension_manager().get_hooks()
_EXTENSION_DISABLED_HOOK = hooks.create_extension_state_change_hook(
_on_ext_disabled,
omni.ext.ExtensionStateChangeType.BEFORE_EXTENSION_DISABLE,
ext_dict_path="python",
hook_name="python.unit_tests",
)
return modules_imported
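`dynamic_test_modules` above imports every `testXXX.py`/`TestXXX.py` file that sits next to the `__init__.py`. A standalone sketch of just the name filter it applies (re-stated here on full paths so the snippet runs on its own):

```python
from os.path import basename, splitext

def is_test_module(path: str) -> bool:
    # Mirrors the filter above: skip __init__ and keep testXXX/TestXXX files.
    name = splitext(basename(path))[0]
    return name != "__init__" and name.lower().startswith("test")

candidates = [
    "/ext/tests/__init__.py",
    "/ext/tests/test_core.py",
    "/ext/tests/TestUI.py",
    "/ext/tests/helpers.py",
]
kept = [c for c in candidates if is_test_module(c)]
```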
# ==============================================================================================================
def get_tests_to_remove_from_modules(modules, log=_LOG):
"""Return the list of tests to be removed when a module is unloaded.
This includes all tests registered or dynamically discovered from the list of modules and their .tests or
.ogn.tests submodules. Keeping this separate from get_tests_from_modules() allows the import of all three related
modules, while preventing duplication of their tests when all extension module tests are requested.
Args:
        modules: List of modules to collect removable tests from, together with their .tests and .ogn.tests submodules
"""
all_modules = modules
all_modules += [module.tests for module in modules if hasattr(module, "tests")]
all_modules += [module.ogn.tests for module in modules if hasattr(module, "ogn") and hasattr(module.ogn, "tests")]
return get_tests_from_modules(all_modules, log)
# ==============================================================================================================
def get_tests_from_modules(modules, log=_LOG):
"""Return the list of tests registered or dynamically discovered from the list of modules"""
loader = unittest.TestLoader()
loader.suiteClass = AsyncTestSuite
tests = []
for module in modules:
if log:
carb.log_warn(f"Getting tests from module {module.__name__}")
suite = loader.loadTestsFromModule(module)
test_count = suite.countTestCases()
if test_count > 0:
if log:
carb.log_warn(f"Found {test_count} tests in {module.__name__}")
for t in suite:
tests += t._tests
if "scan_for_test_modules" in module.__dict__:
if log:
carb.log_warn(f"Scanning for test modules in {module.__name__} loaded from {module.__file__}")
for extra_module in dynamic_test_modules(module.__name__, module.__file__):
if log:
carb.log_warn(f" Processing additional module {extra_module}")
extra_suite = loader.loadTestsFromModule(extra_module)
extra_count = extra_suite.countTestCases()
if extra_count > 0:
if log:
carb.log_warn(f"Found {extra_count} additional tests added through {extra_module.__name__}")
for extra_test in extra_suite:
tests += extra_test._tests
# Some tests can be generated at runtime out of discovered ones. For example, we can leverage that to duplicate
# tests for different configurations.
for t in islice(tests, 0, len(tests)):
generate_extra = getattr(t, "generate_extra_tests", None)
if callable(generate_extra):
generated = generate_extra()
if generated:
tests += generated
return tests
def get_tests_from_enabled_extensions():
include_tests = get_setting("/exts/omni.kit.test/includeTests", default=[])
exclude_tests = get_setting("/exts/omni.kit.test/excludeTests", default=[])
def include_test(test_id: str) -> bool:
return any(fnmatch.fnmatch(test_id, p) for p in include_tests) and not any(
fnmatch.fnmatch(test_id, p) for p in exclude_tests
)
    # Filter modules before importing. That allows test-only modules and dependencies, which would fail to import
    # in a non-test environment. The tricky part is the filtering itself. For includeTests = "omni.foo.test_abc_def_*"
    # we want to match the `omni.foo` module, but not the `omni.foo_test_abc` test id. Thus module filtering is more
    # permissive and checks "starts with" too.
def include_module(module: str) -> bool:
def match_module(module, pattern):
return fnmatch.fnmatch(module, pattern) or pattern.startswith(module)
return any(match_module(module, p) for p in include_tests)
modules = _get_enabled_extension_modules(filter_fn=include_module)
return (t for t in get_tests_from_modules(modules) if include_test(t.id()))
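The comment above explains why module filtering must be more permissive than test-id filtering. A standalone sketch of the `include_module`/`match_module` logic, with an example `includeTests` pattern assumed for illustration:

```python
import fnmatch

def match_module(module: str, pattern: str) -> bool:
    # A module is kept if it matches the pattern outright, or if the
    # pattern could name a test inside it (the pattern starts with the
    # module name).
    return fnmatch.fnmatch(module, pattern) or pattern.startswith(module)

pattern = "omni.foo.test_abc_def_*"            # example includeTests value
keep_foo = match_module("omni.foo", pattern)   # kept: pattern starts with it
keep_bar = match_module("omni.bar", pattern)   # dropped: unrelated module
```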
def _get_tests_from_file(filepath: str) -> list:
test_list = []
try:
with open(filepath) as f:
test_list = f.read().splitlines()
except IOError as e:
carb.log_warn(f"Error opening file {filepath} -> {e}")
return test_list
def _get_tests_override(tests: list) -> list:
"""Apply some override/modifiers to get the proper list of tests in that order:
1. Add/Remove unreliable tests depending on testExtRunUnreliableTests value
2. Get list of failed tests if present (if enabled, used with retry-on-failure)
3. Get list of tests from a file (if enabled, generated when running tests)
4. Get list of tests from sampling (if enabled)
5. Shuffle (random order) is applied last
"""
def is_unreliable_test(test_id: str) -> bool:
return any(fnmatch.fnmatch(test_id, p) for p in unreliable_tests)
unreliable_tests = get_setting("/exts/omni.kit.test/unreliableTests", default=[])
run_unreliable_tests = get_setting("/exts/omni.kit.test/testExtRunUnreliableTests", default=0)
if run_unreliable_tests == RunExtTests.RELIABLE_ONLY:
tests = [t for t in tests if not is_unreliable_test(t.id())]
elif run_unreliable_tests == RunExtTests.UNRELIABLE_ONLY:
tests = [t for t in tests if is_unreliable_test(t.id())]
failed_tests = get_setting("/exts/omni.kit.test/retryFailedTests", default=[])
tests_filepath = get_setting("/exts/omni.kit.test/runTestsFromFile", default="")
sampling_factor = float(get_setting("/exts/omni.kit.test/samplingFactor", default=SamplingFactor.UPPER_BOUND))
shuffle_tests = bool(get_setting("/exts/omni.kit.test/testExtRandomOrder", default=False))
if failed_tests:
tests = [t for t in tests if t.id() in failed_tests]
elif tests_filepath:
tests_from_file = _get_tests_from_file(tests_filepath)
tests = [t for t in tests if t.id() in tests_from_file]
tests.sort(key=lambda x: tests_from_file.index(x.id()))
elif tests and sampling_factor != SamplingFactor.UPPER_BOUND:
sampling = get_tests_sampling_to_skip(get_ext_test_id(), sampling_factor, [t.id() for t in tests])
skipped_tests = [t for t in tests if t.id() in sampling]
print(
"----------------------------------------\n"
f"Tests Sampling Factor set to {int(sampling_factor * 100)}% "
f"(each test should run every ~{int(1.0 / sampling_factor)} runs)\n"
)
teamcity_message("message", text=f"Tests Sampling Factor set to {int(sampling_factor * 100)}%")
# Add unittest.skip function (decorator) to all skipped tests if not skipped already.
# It will provide an explicit reason why the test was skipped.
for t in skipped_tests:
test_method = getattr(t, t._testMethodName)
if not getattr(test_method, "__unittest_skip__", False):
setattr(t, t._testMethodName, unittest.skip("Skipped by Sampling")(test_method))
if shuffle_tests:
seed = int(get_setting("/exts/omni.kit.test/testExtSamplingSeed", default=-1))
if seed >= 0:
random.seed(seed)
random.shuffle(tests)
return tests
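The shuffle step above honors a fixed `testExtSamplingSeed` so that a randomized test order can be reproduced. A minimal sketch showing that seeding makes the shuffled order deterministic across runs:

```python
import random

tests = [f"test_{i}" for i in range(10)]

def shuffled(seq, seed):
    # Seed the global RNG the way a fixed testExtSamplingSeed would,
    # then shuffle a copy of the list.
    out = list(seq)
    random.seed(seed)
    random.shuffle(out)
    return out

run_a = shuffled(tests, seed=7)
run_b = shuffled(tests, seed=7)   # identical order on a repeated run
```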
def get_tests(tests_filter="") -> List:
"""Default function to get all current tests.
    It gathers tests from all enabled extensions and applies the include and exclude settings to filter them.
Args:
tests_filter(str): Additional filter string to apply on list of tests.
Returns:
List of tests.
"""
if "*" not in tests_filter:
tests_filter = f"*{tests_filter}*"
# Find all tests in loaded extensions and filter with patterns using settings above:
tests = [t for t in get_tests_from_enabled_extensions() if fnmatch.fnmatch(t.id(), tests_filter)]
tests = _get_tests_override(tests)
return tests
def _setup_output_path(test_output_path: str):
tokens = carb.tokens.get_tokens_interface()
os.makedirs(test_output_path, exist_ok=True)
tokens.set_value("test_output", test_output_path)
def _write_tests_playlist(test_output_path: str, tests: list):
n = 1
filepath = test_output_path
app_name = get_setting("/app/name", "exttest")
while os.path.exists(filepath):
filepath = os.path.join(test_output_path, f"{app_name}_playlist_{n}.log")
n += 1
try:
with open(filepath, "w") as f:
for t in tests:
f.write(f"{t.id()}\n")
except IOError as e:
carb.log_warn(f"Error writing to {filepath} -> {e}")
def run_tests_in_modules(modules, on_finish_fn=None):
run_tests(get_tests_from_modules(modules, True), on_finish_fn)
def run_tests(tests=None, on_finish_fn=None, on_status_report_fn=None):
if tests is None:
tests = get_tests()
test_output_path = get_test_output_path()
_setup_output_path(test_output_path)
_write_tests_playlist(test_output_path, tests)
loader = unittest.TestLoader()
loader.suiteClass = AsyncTestSuite
suite = AsyncTestSuite()
suite.addTests(tests)
def on_status_report(*args, **kwargs):
if on_status_report_fn:
on_status_report_fn(*args, **kwargs)
_test_status_report(*args, **kwargs)
# Use our own TC reporter:
AsyncTextTestRunner.resultclass = OmniTestResult
runner = AsyncTextTestRunner(verbosity=2, stream=sys.stdout)
async def run():
result = await runner.run(suite, on_status_report)
if on_finish_fn:
on_finish_fn(result)
print("========================================")
print("========================================")
print(f"Running Tests (count: {len(tests)}):")
print("========================================")
asyncio.ensure_future(run())
def print_tests():
tests = get_tests()
print("========================================")
print(f"Printing All Tests (count: {len(tests)}):")
print("========================================")
reporter = TestReporter()
for t in tests:
reporter.unittest_start(t.id(), t.id())
print(t.id())
reporter.unittest_stop(t.id(), t.id())
print("========================================")
# File: omniverse-code/kit/exts/omni.kit.test/omni/kit/test/exttests.py
from __future__ import annotations
import asyncio
import fnmatch
import io
import multiprocessing
import os
import pprint
import random
import re
import subprocess
import sys
import time
from collections import defaultdict
from enum import IntEnum
from typing import Dict, List, Set, Tuple
import carb.dictionary
import carb.settings
import carb.tokens
import omni.kit.app
import psutil
from .async_unittest import KEY_FAILING_TESTS, STARTED_UNITTEST
from .code_change_analyzer import CodeChangeAnalyzer
from .crash_process import crash_process
from .flaky import FLAKY_TESTS_QUERY_DAYS, FlakyTestAnalyzer
from .repo_test_context import RepoTestContext
from .reporter import TestReporter
from .sampling import SamplingFactor
from .teamcity import is_running_in_teamcity, teamcity_message, teamcity_test_retry_support
from .test_coverage import read_coverage_collector_settings
from .test_reporters import TestRunStatus, _test_status_report
from .utils import (
clamp,
cleanup_folder,
ext_id_to_fullname,
get_argv,
get_global_test_output_path,
get_local_timestamp,
get_setting,
get_unprocessed_argv,
is_running_on_ci,
resolve_path,
)
BEGIN_SEPARATOR = "\n{0} [EXTENSION TEST START: {{0}}] {0}\n".format("|" * 30)
END_SEPARATOR = "\n{0} [EXTENSION TEST {{0}}: {{1}}] {0}\n".format("|" * 30)
DEFAULT_TEST_NAME = "default"
def _error(stream, msg):
stream.write(f"[error] [{__file__}] {msg}\n")
_debug_log = bool(os.getenv("OMNI_KIT_TEST_DEBUG", default=False))
_asyncio_process_was_terminated = False
def _debug(stream, msg):
if _debug_log:
stream.write(f"[info] [{__file__}] {msg}\n")
def matched_patterns(s: str, patterns: List[str]) -> List[str]:
return [p for p in patterns if fnmatch.fnmatch(s, p)]
def match(s: str, patterns: List[str]) -> bool:
return len(matched_patterns(s, patterns)) > 0
def escape_for_fnmatch(s: str) -> str:
return s.replace("[", "[[]")
def unescape_fnmatch(s: str) -> str:
return s.replace("[[]", "[")
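The two helpers above exist because `[` opens a character class in fnmatch patterns, so a literal bracket in a fail pattern must be escaped as `[[]`. A quick self-contained demonstration (the helpers are re-declared so the snippet runs on its own):

```python
import fnmatch

def escape_for_fnmatch(s: str) -> str:
    return s.replace("[", "[[]")

def unescape_fnmatch(s: str) -> str:
    return s.replace("[[]", "[")

line = "failure in test[1]"
# Unescaped, "[1]" is a character class matching the single char "1",
# so the literal text "test[1]" does not match.
raw_match = fnmatch.fnmatch(line, "*test[1]*")
# Escaped, "[[]" matches a literal "[" and the pattern behaves as intended.
esc_match = fnmatch.fnmatch(line, escape_for_fnmatch("*test[1]*"))
```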
class FailPatterns:
def __init__(self, include=[], exclude=[]):
self.include = [escape_for_fnmatch(s.lower()) for s in include]
self.exclude = [escape_for_fnmatch(s.lower()) for s in exclude]
def merge(self, patterns: FailPatterns):
self.include += patterns.include
self.exclude += patterns.exclude
def match_line(self, line: str) -> Tuple[str, str, bool]:
line_lower = line.lower()
include_matched = match(line_lower, self.include)
exclude_matched = match(line_lower, self.exclude)
if include_matched and not exclude_matched:
patterns = matched_patterns(line_lower, self.include)
patterns = [unescape_fnmatch(p) for p in patterns]
return ", ".join(patterns), line.strip(), exclude_matched
return "", "", exclude_matched
def __str__(self):
return pprint.pformat(vars(self))
class RunExtTests(IntEnum):
RELIABLE_ONLY = 0
UNRELIABLE_ONLY = 1
BOTH = 2
class RetryStrategy:
NO_RETRY = "no-retry"
RETRY_ON_FAILURE = "retry-on-failure"
ITERATIONS = "iterations"
RERUN_UNTIL_FAILURE = "rerun-until-failure"
# CI strategy, default to no-retry when testing locally
RETRY_ON_FAILURE_CI_ONLY = "retry-on-failure-ci-only"
class SamplingContext:
ANY = "any"
LOCAL = "local"
CI = "ci"
class TestRunContext:
def __init__(self):
# Setup output path for test data
self.output_path = get_global_test_output_path()
os.makedirs(self.output_path, exist_ok=True)
print("Test output path: {}".format(self.output_path))
self.coverage_mode = get_setting("/exts/omni.kit.test/testExtGenerateCoverageReport", default=False) or (
"--coverage" in get_argv()
)
# clean output folder?
clean_output = get_setting("/exts/omni.kit.test/testExtCleanOutputPath", default=False)
if clean_output:
cleanup_folder(self.output_path)
self.shared_patterns = FailPatterns(
get_setting("/exts/omni.kit.test/stdoutFailPatterns/include", default=[]),
get_setting("/exts/omni.kit.test/stdoutFailPatterns/exclude", default=[]),
)
self.trim_stdout_on_success = bool(get_setting("/exts/omni.kit.test/testExtTrimStdoutOnSuccess", default=False))
self.trim_excluded_messages = bool(
get_setting("/exts/omni.kit.test/stdoutFailPatterns/trimExcludedMessages", default=False)
)
self.retry_strategy = get_setting("/exts/omni.kit.test/testExtRetryStrategy", default=RetryStrategy.NO_RETRY)
self.max_test_run = int(get_setting("/exts/omni.kit.test/testExtMaxTestRunCount", default=1))
if self.retry_strategy == RetryStrategy.RETRY_ON_FAILURE_CI_ONLY:
if is_running_on_ci():
self.retry_strategy = RetryStrategy.RETRY_ON_FAILURE
self.max_test_run = 3
else:
self.retry_strategy = RetryStrategy.NO_RETRY
self.max_test_run = 1
self.run_unreliable_tests = RunExtTests(get_setting("/exts/omni.kit.test/testExtRunUnreliableTests", default=0))
self.run_flaky_tests = get_setting("/exts/omni.kit.test/testExtRunFlakyTests", default=False)
self.start_ts = get_local_timestamp()
self.repo_test_context = RepoTestContext()
self.change_analyzer = None
if get_setting("/exts/omni.kit.test/testExtCodeChangeAnalyzerEnabled", default=False) and is_running_on_ci():
self.change_analyzer = CodeChangeAnalyzer(self.repo_test_context)
def _prepare_ext_for_testing(ext_name, stream=sys.stdout):
manager = omni.kit.app.get_app().get_extension_manager()
ext_id = None
ext_info_local = manager.get_extension_dict(ext_name)
if ext_info_local:
return ext_info_local
ext_info_remote = manager.get_registry_extension_dict(ext_name)
if ext_info_remote:
ext_id = ext_info_remote["package/id"]
else:
versions = manager.fetch_extension_versions(ext_name)
if len(versions) > 0:
ext_id = versions[0]["id"]
else:
_error(stream, f"Can't find extension: {ext_name} to run extension test on.")
return None
ext_info_local = manager.get_extension_dict(ext_id)
is_local = ext_info_local is not None
if not is_local:
if not manager.pull_extension(ext_id):
_error(stream, f"Failed to pull extension: {ext_id} to run extension test on.")
return None
ext_info_local = manager.get_extension_dict(ext_id)
if not ext_info_local:
_error(stream, f"Failed to get extension dict: {ext_id} while preparing extension for testing.")
return ext_info_local
def _prepare_app_for_testing(stream) -> Tuple[str, str]:
"""Returns path to app (kit file) and short name of an app."""
test_app = get_setting("/exts/omni.kit.test/testExtApp", default=None)
test_app = carb.tokens.get_tokens_interface().resolve(test_app)
# Test app can be either path to kit file or extension id (to optionally download and use extension as an app)
if test_app.endswith(".kit") or "/" in test_app:
return (test_app, "")
app_ext_info = _prepare_ext_for_testing(test_app, stream)
if app_ext_info:
return (app_ext_info["path"], test_app)
return (None, test_app)
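`_prepare_app_for_testing` above first decides whether the `testExtApp` setting names a kit file path or an extension id to resolve. A standalone sketch of that check:

```python
def is_kit_file_path(test_app: str) -> bool:
    # Mirrors the branch above: a .kit file, or anything containing a
    # path separator, is treated as a direct app path; otherwise the
    # value is an extension id to be looked up and possibly pulled.
    return test_app.endswith(".kit") or "/" in test_app

path_case = is_kit_file_path("apps/omni.app.test.kit")  # kit file path
ext_case = is_kit_file_path("omni.kit.renderer.core")   # extension id
```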
class ExtTestResult:
def __init__(self):
self.duration = 0.0
self.test_count = 0
self.unreliable = 0
self.unreliable_fail = 0
self.fail = 0
self.passed = True
class TestApp:
def __init__(self, stream):
self.path, self.name = _prepare_app_for_testing(stream)
self.is_empty = not self.name
class ExtTest:
def __init__(
self,
ext_id: str,
ext_info: carb.dictionary.Item,
test_config: Dict,
test_id: str,
is_parallel_run: bool,
run_context: TestRunContext,
test_app: TestApp,
valid=True,
):
self.context = run_context
self.ext_id = ext_id
self.ext_name = ext_id_to_fullname(ext_id)
self.test_id = test_id
self.app_name = ""
        # TC treats dots as separators when filtering tests in the UI, so replace them.
self.tc_test_id = test_id.replace(".", "+") + ".[PROCESS CHECK]"
self.bucket_name = get_setting("/exts/omni.kit.test/testExtTestBucket", default="")
self.unreliable = False
self.skip = False
self.allow_sampling = True
self.args: List[str] = []
self.patterns = FailPatterns()
self.timeout = -1
self.result = ExtTestResult()
self.retries = 0
self.buffer_stdout = bool(is_parallel_run) or bool(self.context.trim_stdout_on_success)
self.stdout = io.StringIO() if self.buffer_stdout else sys.stdout
self.log_file = ""
self.parallelizable = True
self.reporter = TestReporter(self.stdout)
self.test_app = test_app
self.config = test_config
self.ext_info = ext_info
self.output_path = ""
self.valid = bool(valid and self.ext_info)
self.change_analyzer_result = None
self.failed_tests = []
if self.valid:
self._fill_ext_test()
def _fill_ext_test(self):
self.args = [get_argv()[0]]
self.app_name = "exttest_" + self.test_id.replace(".", "_").replace(":", "-")
ui_mode = get_setting("/exts/omni.kit.test/testExtUIMode", default=False) or ("--dev" in get_argv())
print_mode = get_setting("/exts/omni.kit.test/printTestsAndQuit", default=False)
use_kit_file_as_app = get_setting("/exts/omni.kit.test/testExtUseKitFileAsApp", default=True)
coverage_mode = self.context.coverage_mode
self.ext_id = self.ext_info["package/id"]
self.ext_name = self.ext_info["package/name"]
is_kit_file = self.ext_info.get("isKitFile", False)
# If extension is kit file just run startup test without using a test app
ext_path = self.ext_info.get("path", "")
if is_kit_file and use_kit_file_as_app:
self.args += [ext_path]
else:
self.args += [self.test_app.path, "--enable", self.ext_id]
# test output dir
self.output_path = f"{self.context.output_path}/{self.app_name}"
if not os.path.exists(self.output_path):
os.makedirs(self.output_path)
self.reporter.set_output_path(self.output_path)
# current ts (not precise as test run can be delayed relative to this moment)
ts = get_local_timestamp()
self.log_file = f"{self.output_path}/{self.app_name}_{ts}_0.log"
self.args += [
"--/log/flushStandardStreamOutput=1",
"--/app/name=" + self.app_name,
f"--/log/file='{self.log_file}'",
f"--/exts/omni.kit.test/testOutputPath='{self.output_path}'",
f"--/exts/omni.kit.test/extTestId='{self.test_id}'",
f"--/crashreporter/dumpDir='{self.output_path}'",
"--/crashreporter/preserveDump=1",
"--/crashreporter/gatherUserStory=0", # don't pop up the GUI on crash
"--/rtx-transient/dlssg/enabled=false", # OM-97205: Disable DLSS-G for now globally, so L40 tests will all pass. DLSS-G tests will have to enable it
]
# also set extTestId on the parent process - needed when calling unittest_* functions from exttests.py
carb.settings.get_settings().set_string("/exts/omni.kit.test/extTestId", self.test_id)
# Pass all exts folders
ext_folders = list(get_setting("/app/exts/folders", default=[]))
ext_folders += list(get_setting("/persistent/app/exts/userFolders", default=[]))
for folder in ext_folders:
self.args += ["--ext-folder", folder]
# Profiler trace enabled ?
default_profiling = get_setting("/exts/omni.kit.test/testExtEnableProfiler", default=False)
profiling = self.config.get("profiling", default_profiling)
if profiling:
self.args += [
"--/plugins/carb.profiler-cpu.plugin/saveProfile=1",
"--/plugins/carb.profiler-cpu.plugin/compressProfile=1",
"--/app/profileFromStart=1",
f"--/plugins/carb.profiler-cpu.plugin/filePath='{self.output_path}/ct_{self.app_name}_{ts}.gz'",
]
# Timeout for the process
default_timeout = int(get_setting("/exts/omni.kit.test/testExtDefaultTimeout", default=300))
max_timeout = int(get_setting("/exts/omni.kit.test/testExtMaxTimeout", default=0))
self.timeout = self.config.get("timeout", default_timeout)
# Clamp timeout if needed
if max_timeout > 0 and self.timeout > max_timeout:
self.timeout = max_timeout
# [[test]] can be marked as unreliable - meaning it will not run any of its tests unless unreliable tests are run
self.unreliable = self.config.get("unreliable", self.config.get("flaky", False))
# python tests to include
include_tests = list(self.config.get("pythonTests", {}).get("include", []))
exclude_tests = list(self.config.get("pythonTests", {}).get("exclude", []))
unreliable_tests = list(self.config.get("pythonTests", {}).get("unreliable", []))
# When running unreliable tests:
# 1. if the [[test]] is set as unreliable run all python tests (override the `unreliable_tests` list)
# 2. if running unreliable tests - set unreliable to true and disable sampling
if self.unreliable:
unreliable_tests = ["*"]
self.allow_sampling = False
elif unreliable_tests and self.context.run_unreliable_tests != RunExtTests.RELIABLE_ONLY:
self.unreliable = True
self.allow_sampling = False
        # Check if we run flaky tests - if we do, grab the failing test list as a playlist
if self.context.run_flaky_tests:
self.allow_sampling = False
query_days = int(get_setting("/exts/omni.kit.test/flakyTestsQueryDays", default=FLAKY_TESTS_QUERY_DAYS))
flaky_test_analyzer = FlakyTestAnalyzer(self.test_id, query_days)
if flaky_test_analyzer.should_skip_test():
self.skip = True
elif self.config.get("samplingFactor") == SamplingFactor.UPPER_BOUND:
pass # if an extension has disabled tests sampling we run all tests
else:
file = flaky_test_analyzer.generate_playlist()
if file:
self.args += [f"--/exts/omni.kit.test/runTestsFromFile='{file}'"]
def get_python_modules(ext_info: carb.dictionary.Item):
python_dict = ext_info.get("python", {})
if isinstance(python_dict, dict):
python_modules = python_dict.get("module", []) + python_dict.get("modules", [])
for m in python_modules:
module = m.get("name")
if module:
yield module
        # By default, if the extension has python modules, use them to fill in the test mask. Can be overridden with an explicit test list.
        # Do that only for pure extension tests, so that for tests inside an app, extensions can opt in to add more tests gradually.
python_modules_names = []
python_modules_names.extend(get_python_modules(self.ext_info))
if len(include_tests) == 0 and self.test_app.is_empty:
include_tests.extend(["{}.*".format(e) for e in python_modules_names])
# Cpp test libraries
test_libraries = self.config.get("cppTests", {}).get("libraries", [])
test_libraries = [resolve_path(library, ext_path) for library in test_libraries]
# If extension has tests -> run python (or cpp) tests, otherwise just do startup test
if len(include_tests) > 0 or len(test_libraries) > 0:
# We need kit.test as a test runner then
self.args += ["--enable", "omni.kit.test"]
if ui_mode:
self.args += [
"--enable",
"omni.kit.window.tests",
"--enable",
"omni.kit.window.extensions",
"--enable",
"omni.kit.renderer.core",
"--/exts/omni.kit.window.tests/openWindow=1",
"--/exts/omni.kit.test/testExtUIMode=1",
]
self.timeout = None # No timeout in that case
elif print_mode:
self.args += ["--/exts/omni.kit.test/printTestsAndQuit=true"]
else:
self.args += ["--/exts/omni.kit.test/runTestsAndQuit=true"]
for i, test_mask in enumerate(include_tests):
self.args += [f"--/exts/omni.kit.test/includeTests/{i}='{test_mask}'"]
for i, test_mask in enumerate(exclude_tests):
self.args += [f"--/exts/omni.kit.test/excludeTests/{i}='{test_mask}'"]
for i, test_mask in enumerate(unreliable_tests):
self.args += [f"--/exts/omni.kit.test/unreliableTests/{i}='{test_mask}'"]
for i, test_library in enumerate(test_libraries):
self.args += [f"--/exts/omni.kit.test/testLibraries/{i}='{test_library}'"]
else:
self.args += ["--/app/quitAfter=10", "--/crashreporter/gatherUserStory=0"]
# Reduce output on TC to make log shorter. Mostly that removes long extension startup/shutdown lists. We have
# that information in log files attached to artifacts anyway.
if is_running_on_ci():
self.args += ["--/app/enableStdoutOutput=0"]
# Test filtering (support shorter version)
argv = get_argv()
filter_value = _parse_arg_shortcut(argv, "-f")
if filter_value:
self.args += [f"--/exts/omni.kit.test/runTestsFilter='{filter_value}'"]
# Pass some args down the line:
self.args += _propagate_args(argv, "--portable")
self.args += _propagate_args(argv, "--portable-root", True)
self.args += _propagate_args(argv, "--allow-root")
self.args += _propagate_args(argv, "-d")
self.args += _propagate_args(argv, "-v")
self.args += _propagate_args(argv, "-vv")
self.args += _propagate_args(argv, "--wait-debugger")
self.args += _propagate_args(argv, "--/exts/omni.kit.test/runTestsFilter", starts_with=True)
self.args += _propagate_args(argv, "--/exts/omni.kit.test/runTestsFromFile", starts_with=True)
self.args += _propagate_args(argv, "--/exts/omni.kit.test/testExtRunUnreliableTests", starts_with=True)
self.args += _propagate_args(argv, "--/exts/omni.kit.test/doNotQuit", starts_with=True)
self.args += _propagate_args(argv, "--/exts/omni.kit.test/parallelRun", starts_with=True)
self.args += _propagate_args(argv, "--/telemetry/mode", starts_with=True)
self.args += _propagate_args(argv, "--/crashreporter/data/testName", starts_with=True)
def is_arg_prefix_present(args, prefix: str):
for arg in args:
if arg.startswith(prefix):
return True
return False
# make sure to set the telemetry mode to 'test' if it hasn't explicitly been overridden
# to something else. This prevents structured log events generated from tests from
# unintentionally polluting the telemetry analysis data.
if not is_arg_prefix_present(self.args, "--/telemetry/mode"):
self.args += ["--/telemetry/mode=test"]
# make sure to pass on the test name that was given in the settings if it was not
# explicitly given on the command line.
if not is_arg_prefix_present(self.args, "--/crashreporter/data/testName"):
test_name_setting = get_setting("/crashreporter/data/testName")
            if test_name_setting is not None:
self.args += [f"--/crashreporter/data/testName=\"{test_name_setting}\""]
# Read default coverage settings
default_coverage_settings = read_coverage_collector_settings()
# Sets if python test coverage enabled or disabled
py_coverage_enabled = self.config.get("pyCoverageEnabled", default_coverage_settings.enabled or coverage_mode)
        # This must be set explicitly for the child test process:
        # if the main process gets this setting from the command line and it differs from
        # the values in the configuration files, we must pass it to the child process. There
        # is no way to know whether the value came from the command line, so
        # always set it explicitly for the child process.
self.args += [f"--/exts/omni.kit.test/pyCoverageEnabled={py_coverage_enabled}"]
if py_coverage_enabled:
self.allow_sampling = False
py_coverage_filter = default_coverage_settings.filter or []
py_coverage_deps_omit = []
# If custom filter is specified, only use that list
custom_filter = self.config.get("pyCoverageFilter", None)
if custom_filter:
py_coverage_filter = custom_filter
else:
# Append all python modules
if self.config.get("pyCoverageIncludeModules", default_coverage_settings.include_modules):
for m in python_modules_names:
py_coverage_filter.append(m)
# Append all python modules from the dependencies
dependencies = [
{
"setting": "pyCoverageIncludeDependencies",
"default": default_coverage_settings.include_dependencies,
"config": self.ext_info,
},
{
"setting": "pyCoverageIncludeTestDependencies",
"default": default_coverage_settings.include_test_dependencies,
"config": self.config,
},
]
for d in dependencies:
if not self.config.get(d["setting"], d["default"]):
continue
deps = d["config"].get("dependencies", [])
manager = omni.kit.app.get_app().get_extension_manager()
for ext_d in manager.get_extensions():
if ext_d["name"] not in deps:
continue
ext_info = manager.get_extension_dict(ext_d["id"])
py_coverage_filter.extend(get_python_modules(ext_info))
# also look for omit in dependencies
test_info = ext_info.get("test", None)
                        if isinstance(test_info, (list, tuple)):
for t in test_info:
for cov_omit in t.get("pyCoverageOmit", []):
cov_omit = cov_omit.replace("\\", "/")
if not os.path.isabs(cov_omit) and not cov_omit.startswith("*/"):
cov_omit = "*/" + cov_omit
py_coverage_deps_omit.append(cov_omit)
if len(py_coverage_filter) > 0:
for i, cov_filter in enumerate(py_coverage_filter):
self.args += [f"--/exts/omni.kit.test/pyCoverageFilter/{i}='{cov_filter}'"]
# omit files/path for coverage
default_py_coverage_omit = default_coverage_settings.omit or []
py_coverage_omit = list(self.config.get("pyCoverageOmit", default_py_coverage_omit))
py_coverage_omit.extend(py_coverage_deps_omit)
if len(py_coverage_omit) > 0:
for i, cov_omit in enumerate(py_coverage_omit):
cov_omit = cov_omit.replace("\\", "/")
if not os.path.isabs(cov_omit) and not cov_omit.startswith("*/"):
cov_omit = "*/" + cov_omit
self.args += [f"--/exts/omni.kit.test/pyCoverageOmit/{i}='{cov_omit}'"]
# in coverage mode we generate a report at the end, need to set the settings on the parent process
if coverage_mode:
carb.settings.get_settings().set("/exts/omni.kit.test/pyCoverageEnabled", py_coverage_enabled)
carb.settings.get_settings().set("/exts/omni.kit.test/testExtGenerateCoverageReport", True)
# Extra extensions to run
exts_to_enable = [self.ext_id]
for ext in self.config.get("dependencies", []):
self.args += ["--enable", ext]
exts_to_enable.append(ext)
# Check if skipped by code change analyzer based on extensions it is about to enable
if self.context.change_analyzer:
self.change_analyzer_result = self.context.change_analyzer.analyze(
self.test_id, self.ext_name, exts_to_enable
)
if self.change_analyzer_result.should_skip_test:
self.skip = True
if not self.context.change_analyzer.allow_sampling():
self.allow_sampling = False
# Tests Sampling per extension
default_sampling = float(
get_setting("/exts/omni.kit.test/testExtSamplingFactor", default=SamplingFactor.UPPER_BOUND)
)
sampling_factor = clamp(
self.config.get("samplingFactor", default_sampling), SamplingFactor.LOWER_BOUND, SamplingFactor.UPPER_BOUND
)
if sampling_factor == SamplingFactor.UPPER_BOUND:
self.allow_sampling = False
if self.allow_sampling and self._use_tests_sampling():
self.args += [f"--/exts/omni.kit.test/samplingFactor={sampling_factor}"]
# tests random order
random_order = get_setting("/exts/omni.kit.test/testExtRandomOrder", default=False)
if random_order:
self.args += ["--/exts/omni.kit.test/testExtRandomOrder=1"]
# Test Sampling Seed
seed = int(get_setting("/exts/omni.kit.test/testExtSamplingSeed", default=-1))
if seed >= 0:
self.args += [f"--/exts/omni.kit.test/testExtSamplingSeed={seed}"]
# Extra args
self.args += list(get_setting("/exts/omni.kit.test/testExtArgs", default=[]))
# Extra args
self.args += self.config.get("args", [])
# if in ui mode we need to remove --no-window
if ui_mode:
self.args = [a for a in self.args if a != "--no-window"]
# Build fail patterns
self.patterns = FailPatterns(
self.config.get("stdoutFailPatterns", {}).get("include", []),
self.config.get("stdoutFailPatterns", {}).get("exclude", []),
)
self.patterns.merge(self.context.shared_patterns)
# Pass all unprocessed argv down the line at the very end. They can also have another `--` potentially.
unprocessed_argv = get_unprocessed_argv()
if unprocessed_argv:
self.args += unprocessed_argv
# Other settings
self.parallelizable = self.config.get("parallelizable", True)
def _pre_test_run(self, test_run: int, retry_strategy: RetryStrategy):
"""Update arguments that must change between each test run"""
if test_run > 0:
for index, arg in enumerate(self.args):
# make sure to use a different log file if we run tests multiple times
if arg.startswith("--/log/file="):
ts = get_local_timestamp()
self.log_file = f"{self.output_path}/{self.app_name}_{ts}_{test_run}.log"
self.args[index] = f"--/log/file='{self.log_file}'"
# make sure to use a different random seed if present, only valid on some retry strategies
if retry_strategy == RetryStrategy.ITERATIONS or retry_strategy == RetryStrategy.RERUN_UNTIL_FAILURE:
if arg.startswith("--/exts/omni.kit.test/testExtSamplingSeed="):
random_seed = random.randint(0, 2**16)
self.args[index] = f"--/exts/omni.kit.test/testExtSamplingSeed={random_seed}"
def _use_tests_sampling(self) -> bool:
external_build = get_setting("/privacy/externalBuild")
if external_build:
return False
use_sampling = get_setting("/exts/omni.kit.test/useSampling", default=True)
if not use_sampling:
return False
        # note: bool() of any non-empty string is True, so parse the env var explicitly
        use_sampling = os.getenv("OMNI_KIT_TEST_USE_SAMPLING", default="1").lower() not in ("0", "false")
if not use_sampling:
return False
sampling_context = get_setting("/exts/omni.kit.test/testExtSamplingContext")
if sampling_context == SamplingContext.CI and is_running_on_ci():
return True
elif sampling_context == SamplingContext.LOCAL and not is_running_on_ci():
return True
return sampling_context == SamplingContext.ANY
def get_cmd(self) -> str:
return " ".join(self.args)
def on_start(self):
self.result = ExtTestResult()
self.reporter.exttest_start(self.test_id, self.tc_test_id, self.ext_id, self.ext_name)
self.stdout.write(BEGIN_SEPARATOR.format(self.test_id))
def on_finish(self, test_result):
self.stdout.write(END_SEPARATOR.format("PASSED" if test_result else "FAILED", self.test_id))
self.reporter.exttest_stop(self.test_id, self.tc_test_id, passed=test_result)
def on_fail(self, fail_message):
        # TC service messages can't match a failure with a test start message when there are other tests in between.
        # As a workaround, stop the test and start it again (send those messages). The test then has 2 block
        # entries in the log, but gets reported as failed correctly.
if is_running_in_teamcity():
self.reporter.exttest_stop(self.test_id, self.tc_test_id, report=False)
self.reporter.exttest_start(self.test_id, self.tc_test_id, self.ext_id, self.ext_name, report=False)
self.reporter.exttest_fail(self.test_id, self.tc_test_id, "Error", fail_message)
self.stdout.write(f"{fail_message}\n")
def _kill_process_recursive(pid, stream):
def _output(msg: str):
teamcity_message("message", text=msg)
stream.write(msg)
def _terminate(proc: psutil.Process):
try:
proc.terminate()
except psutil.AccessDenied as e:
_error(stream, f"Access denied: {e}")
except psutil.ZombieProcess as e:
_error(stream, f"Encountered a zombie process: {e}")
except psutil.NoSuchProcess as e:
_error(stream, f"Process no longer exists: {e}")
        except Exception as e:
_error(stream, f"An error occurred: {str(e)}")
try:
process = psutil.Process(pid)
# kill all children of test process (if any)
for proc in process.children(recursive=True):
if crash_process(proc, stream):
_output(f"\nTest Process Timed out, crashing child test process to collect callstack, PID: {proc.pid}\n\n")
else:
_output(
f"\nAttempt to crash child test process to collect callstack failed. Killing child test process, PID: {proc.pid}\n\n"
)
_terminate(proc)
# kill the test process itself
if crash_process(process, stream):
_output(f"\nTest Process Timed out, crashing test process to collect callstack, PID: {process.pid}\n\n")
else:
_output(
f"\nAttempt to crash test process to collect callstack failed. Killing test process, PID: {process.pid}\n\n"
)
_terminate(process)
except psutil.NoSuchProcess as e:
_error(stream, f"Process no longer exists: {e}")
global _asyncio_process_was_terminated
_asyncio_process_was_terminated = True
PRAGMA_REGEX = re.compile(r"^##omni\.kit\.test\[(.*)\]")
def _extract_metadata_pragma(line, metadata):
"""
    Test subprocesses can print specially formatted pragmas that are picked up here as extra fields
    and printed into the status report. Pragmas must be at the start of the line and should
    be the only thing on that line.
Format:
##omni.kit.test[op, key, value]
op = operation type, either "set", "append" or "del" (str)
key = name of the key (str)
value = string value (str)
Examples:
# set a value
##omni.kit.test[set, foo, this is a message and spaces are allowed]
# append a value to a list
##omni.kit.test[append, bah, test-13]
"""
match = PRAGMA_REGEX.match(line)
if not match:
return False
body = match.groups()[0]
args = body.split(",")
args = [x.strip() for x in args]
if not args:
return False
op = args[0]
args = args[1:]
if op in ("set", "append"):
if len(args) != 2:
return False
key, value = args
if op == "set":
metadata[key] = value
elif op == "append":
metadata.setdefault(key, []).append(value)
elif op == "del":
if len(args) != 1:
return False
key = args[0]
del metadata[key]
else:
return False # unsupported pragma op
return True
async def _run_test_process(test: ExtTest) -> Tuple[int, List[str], Dict]:
"""Run test process and read stdout (use PIPE)."""
returncode = 0
fail_messages = []
fail_patterns = defaultdict(list)
test_run_metadata = {}
proc = None
try:
test.stdout.write(f">>> running process: {test.get_cmd()}\n")
_debug(test.stdout, f"fail patterns: {test.patterns}")
async def run_proc():
nonlocal proc
proc = await asyncio.create_subprocess_exec(
*test.args,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
bufsize=0,
)
async for line in proc.stdout:
suppress_line = False
line = line.decode(errors="replace").replace("\r\n", "\n").replace("\r", "\n")
# Check for failure on certain stdout message (like log errors)
nonlocal fail_messages
pattern_str, messages, exclude_matched = test.patterns.match_line(line)
if pattern_str and messages:
fail_patterns[pattern_str].append(messages)
# Check for special pragmas printed by the child proc that tell us to add custom
# fields to the formatted status report
try:
if _extract_metadata_pragma(line, test_run_metadata):
suppress_line = True
except: # noqa
pass
# grab the number of tests
m = re.match(r"(?:Running|Printing All) Tests \(count: (\d+)\)", line, re.M)
if m:
try:
test.result.test_count = int(m.group(1))
except: # noqa
pass
# replace with some generic message to avoid confusion when people search for [error] etc.
if exclude_matched and test.context.trim_excluded_messages:
line = "[...line contained error that was excluded by omni.kit.test...]\n"
if not suppress_line:
test.stdout.write("|| " + line)
await proc.wait()
nonlocal returncode
returncode = proc.returncode
proc = None
await asyncio.wait_for(run_proc(), timeout=test.timeout)
except subprocess.CalledProcessError as e:
returncode = e.returncode
fail_messages.append(f"subprocess.CalledProcessError was raised: {e.output}")
except asyncio.TimeoutError:
returncode = 15
fail_messages.append(
f"Process timed out (timeout: {test.timeout} seconds), terminating. Check artifacts for .dmp files."
)
if proc:
_kill_process_recursive(proc.pid, test.stdout)
except NotImplementedError as e:
fail_messages.append(
f"The asyncio loop does not implement subprocess. This is known to happen when using SelectorEventLoop on Windows, exception {e}"
)
# loop all pattern matches and put them on top of the fail messages
pattern_messages = []
for pattern, messages in fail_patterns.items():
pattern_messages.append(f"Matched {len(messages)} fail pattern '{pattern}' in stdout: ")
for msg in messages:
pattern_messages.append(f" '{msg}'")
fail_messages = pattern_messages + fail_messages
# return code failure check.
if returncode == 13:
# 13 - is code we return when python test fails
failing_tests_cnt = max(len(test_run_metadata.get(KEY_FAILING_TESTS, [])), 1)
fail_messages.append(f"{failing_tests_cnt} test(s) failed.")
elif returncode == 15:
# 15 - is code we return when a test process timeout, fail_message already added
pass
elif returncode != 0:
# other return codes usually mean crash
fail_messages.append("Process might have crashed or timed out.")
# Check if any unittests were started but never completed (crashed/timed out/etc.)
    # When a test crashes, the 'stop' message is missing, making test results harder to read; add it manually.
for key, value in test_run_metadata.items():
        if isinstance(value, str) and value.startswith(STARTED_UNITTEST):
test_id = key
tc_test_id = value.replace(STARTED_UNITTEST, "", 1)
test.reporter.unittest_fail(
test_id,
tc_test_id,
"Error",
f"Test started but never finished, test: {tc_test_id}. Test likely crashed or timed out.",
)
test.reporter.unittest_stop(test_id, tc_test_id)
return (returncode, fail_messages, test_run_metadata)
def _propagate_args(argv, arg_name, has_value=False, starts_with=False):
args = []
for i, arg in enumerate(argv):
        if arg == arg_name or (starts_with and arg.startswith(arg_name)):
            args += [arg]
            # guard against arg_name being the last token on the command line
            if has_value and i + 1 < len(argv):
                args += [argv[i + 1]]
return args
def _parse_arg_shortcut(argv, arg_name):
for i, arg in enumerate(argv):
        if arg == arg_name and i + 1 < len(argv):
            return argv[i + 1]
return None
def _get_test_configs_for_ext(ext_info, name_filter=None) -> List[Dict]:
test_config = ext_info.get("test", None)
configs = []
if not test_config:
# no [[test]] entry
configs.append({})
elif isinstance(test_config, dict):
# [test] entry
configs.append(test_config)
    elif isinstance(test_config, (list, tuple)):
# [[test]] entry
if len(test_config) == 0:
configs.append({})
else:
configs.extend(test_config)
# Filter those matching the name filter
configs = [t for t in configs if not name_filter or match(t.get("name", DEFAULT_TEST_NAME), [name_filter])]
# Filter out disabled
configs = [t for t in configs if t.get("enabled", True)]
return configs
def is_matching_list(ext_id, ext_name, ext_list):
return any(fnmatch.fnmatch(ext_id, p) or fnmatch.fnmatch(ext_name, p) for p in ext_list)
def _build_exts_set(exts: List[str], exclude: List[str], use_registry: bool, match_version_as_string: bool) -> List[str]:
manager = omni.kit.app.get_app().get_extension_manager()
all_exts = manager.get_extensions()
if use_registry:
manager.sync_registry()
all_exts += manager.get_registry_extensions()
def is_match_ext(ext_id, ext_name, ext_def):
return (fnmatch.fnmatch(ext_id, ext_def) or fnmatch.fnmatch(ext_name, ext_def)) and not is_matching_list(
ext_id, ext_name, exclude
)
exts_to_test = set()
for ext_def in exts:
# Empty string is same as "all"
if ext_def == "":
ext_def = "*"
# If wildcard is used, match all
if "*" in ext_def:
exts_to_test.update([e["id"] for e in all_exts if is_match_ext(e["id"], e["name"], ext_def)])
else:
# Otherwise use extension manager to get matching version and pick highest one (they are sorted)
ext_ids = [v["id"] for v in manager.fetch_extension_versions(ext_def)]
if match_version_as_string:
ext_ids = [v for v in ext_ids if v.startswith(ext_def)]
            # Take the highest version; if we are not using the registry, skip versions that are not available locally:
for ext_id in ext_ids:
if use_registry or manager.get_extension_dict(ext_id) is not None:
exts_to_test.add(ext_id)
break
return sorted(exts_to_test)
def _format_cmdline(cmdline: str) -> str:
"""Format commandline printed from CI so that we can run it locally"""
cmdline = cmdline.replace("\\", "/").replace("//", "/")
if is_running_on_ci():
exe_path = cmdline.split(" ")[0]
index = exe_path.find("/_build/")
if index != -1:
path_to_remove = exe_path[:index]
cmdline = (
cmdline.replace(path_to_remove, ".")
.replace(path_to_remove.lower(), ".")
.replace(path_to_remove.replace("/", "\\"), ".")
)
return cmdline
def _get_test_cmdline(ext_name: str, failed_tests: list = None) -> list:
"""Return an example cmdline to run extension tests or a single unittest"""
cmdline = []
try:
shell_ext = carb.tokens.get_tokens_interface().resolve("${shell_ext}")
kit_exe = carb.tokens.get_tokens_interface().resolve("${kit}")
path_to_kit = _format_cmdline(os.path.relpath(kit_exe, os.getcwd()))
if not path_to_kit.startswith("./"):
path_to_kit = f"./{path_to_kit}"
test_file = f"{path_to_kit}/tests-{ext_name}{shell_ext}"
if failed_tests:
test_name = failed_tests[0].rsplit(".")[-1]
cmdline.append(f" Cmdline to run a single unittest: {test_file} -f *{test_name}")
cmdline.append(f" Cmdline to run the extension tests: {test_file}")
except: # noqa
pass
return cmdline
async def gather_with_concurrency(n, *tasks):
semaphore = asyncio.Semaphore(n)
async def sem_task(task):
async with semaphore:
return await task
return await asyncio.gather(*(sem_task(task) for task in tasks))
async def run_serial_and_parallel_tasks(parallel_tasks, serial_tasks, max_parallel_tasks: int):
for r in serial_tasks:
yield await r
for r in await gather_with_concurrency(max_parallel_tasks, *parallel_tasks):
yield r
async def _run_ext_test(run_context: TestRunContext, test: ExtTest, on_status_report_fn):
def _print_info(mode: str, result_str: str = None):
if result_str is None:
result_str = "succeeded" if test.result.passed else "failed"
print(f"{test.test_id} test {result_str} ({mode} {test_run + 1} out of {run_context.max_test_run})")
teamcity_test_retry_support(run_context.retry_strategy == RetryStrategy.RETRY_ON_FAILURE)
# Allow retrying tests multiple times:
for test_run in range(run_context.max_test_run):
is_last_try = (test_run == run_context.max_test_run - 1) or (
run_context.retry_strategy == RetryStrategy.NO_RETRY
)
retry_failed_tests = run_context.retry_strategy == RetryStrategy.RETRY_ON_FAILURE
test._pre_test_run(test_run, run_context.retry_strategy)
test = await _run_ext_test_once(test, on_status_report_fn, is_last_try, retry_failed_tests)
# depending on the retry strategy we might continue or exit the loop
if run_context.retry_strategy == RetryStrategy.NO_RETRY:
# max_test_run is ignored in no-retry strategy
break
elif run_context.retry_strategy == RetryStrategy.RETRY_ON_FAILURE:
# retry on failure - stop at first success otherwise continue
result_str = "succeeded"
if not test.result.passed:
result_str = "failed" if is_last_try else "failed, retrying..."
_print_info("attempt", result_str)
if test.result.passed:
break
else:
test.retries += 1
elif run_context.retry_strategy == RetryStrategy.ITERATIONS:
# iterations - continue until the end
_print_info("iteration")
elif run_context.retry_strategy == RetryStrategy.RERUN_UNTIL_FAILURE:
# rerun until failure - stop at first failure otherwise continue
_print_info("rerun")
if not test.result.passed:
break
else:
_error(sys.stderr, f"Invalid retry strategy '{run_context.retry_strategy}'")
return test
async def _run_ext_test_once(test: ExtTest, on_status_report_fn, is_last_try: bool, retry_failed_tests: bool):
ext = test.ext_id
if on_status_report_fn:
on_status_report_fn(test.test_id, TestRunStatus.RUNNING)
# Starting test
test.on_start()
err_messages = []
metadata = {}
cmd = ""
returncode = 0
if test.valid:
cmd = test.get_cmd()
# Run process
start_time = time.time()
returncode, err_messages, metadata = await _run_test_process(test)
test.result.duration = round(time.time() - start_time, 2)
else:
err_messages.append(f"Failed to run process for extension testing (ext: {ext}).")
if test.unreliable:
test.result.unreliable = 1
# Grab failed tests
test.failed_tests = list(metadata.pop(KEY_FAILING_TESTS, []))
for key, value in list(metadata.items()):
        if isinstance(value, str) and value.startswith(STARTED_UNITTEST):
test_id = key
test.failed_tests.append(test_id + " (started but never finished)")
del metadata[key]
if retry_failed_tests:
# remove failed tests from previous run if any
test.args = [item for item in test.args if not item.startswith("--/exts/omni.kit.test/retryFailedTests")]
        # Only retry failed tests if all conditions are met:
        # - retry-on-failure strategy selected
        # - metadata with failing tests is present
        # - extension tests reported failures but no crash (return code 13)
        # - at least one retry left to do (i.e. not the last retry)
if test.failed_tests and returncode == 13 and not is_last_try:
# add new failed tests as args for the next run
for i, test_id in enumerate(test.failed_tests):
test.args.append(f"--/exts/omni.kit.test/retryFailedTests/{i}='{test_id}'")
    # Assume success; mark the overall run as failed below if any error messages were collected
test.result.passed = True
if len(err_messages) > 0:
spaces_8 = " " * 8
spaces_12 = " " * 12
messages_str = f"\n{spaces_8}".join([""] + err_messages)
fail_message_lines = [
"",
"[fail] Extension Test failed. Details:",
f" Cmdline: {_format_cmdline(cmd)}",
]
fail_message_lines += _get_test_cmdline(test.ext_name, test.failed_tests)
fail_message_lines += [
f" Return code: {returncode} ({returncode & (2**31-1):#010x})",
f" Failure reason(s): {messages_str}",
]
details_message_lines = [" Details:"]
if metadata:
details_message_lines.append(f"{spaces_8}Metadata:")
for key, value in sorted(metadata.items()):
details_message_lines.append(f"{spaces_12}{key}: {value}")
if test.failed_tests:
messages_str = f"\n{spaces_12}".join([""] + test.failed_tests)
details_message_lines.append(f"{spaces_8}{KEY_FAILING_TESTS}: {messages_str}")
if not omni.kit.app.get_app().is_app_external():
url = f"http://omnitests.nvidia.com/?query={test.test_id}"
details_message_lines.append(f"{spaces_8}Test history:")
details_message_lines.append(f"{spaces_12}{url}")
fail_message = "\n".join(fail_message_lines + details_message_lines)
test.result.passed = False
if test.unreliable:
test.result.unreliable_fail = 1
test.stdout.write("[fail] Extension test failed, but marked as unreliable.\n")
else:
test.result.fail = 1
test.stdout.write("[fail] Extension test failed.\n")
if is_last_try:
test.on_fail(fail_message)
if on_status_report_fn:
on_status_report_fn(test.test_id, TestRunStatus.FAILED, fail_message=fail_message, ext_test=test)
else:
test.stdout.write("[ ok ] Extension test passed.\n")
test.on_finish(test.result.passed)
if test.result.passed and on_status_report_fn:
on_status_report_fn(test.test_id, TestRunStatus.PASSED, ext_test=test)
# dump stdout, acts as stdout sync point for parallel run
if test.stdout != sys.stdout:
if test.context.trim_stdout_on_success and test.result.passed:
for line in test.stdout.getvalue().splitlines():
                # We still want to print all service messages so the number of tests is reported correctly on TC.
if "##teamcity[" in line:
sys.stdout.write(line)
sys.stdout.write("\n")
sys.stdout.write(
f"[omni.kit.test] Stdout was trimmed. Look for the Kit log file '{test.log_file}' in TC artifacts for the full output.\n"
)
else:
sys.stdout.write(test.stdout.getvalue())
sys.stdout.flush()
# reset test.stdout (io.StringIO)
test.stdout.truncate(0)
test.stdout.seek(0)
return test
def _build_test_id(test_type: str, ext: str, app: str = "", test_name: str = "") -> str:
s = ""
if test_type:
s += f"{test_type}:"
s += ext_id_to_fullname(ext)
if test_name and test_name != DEFAULT_TEST_NAME:
s += f"-{test_name}"
if app:
s += f"_app:{app}"
return s
async def _run_ext_tests(exts, on_status_report_fn, exclude_exts, only_list=False) -> bool:
run_context = TestRunContext()
use_registry = get_setting("/exts/omni.kit.test/testExtUseRegistry", default=False)
match_version_as_string = get_setting("/exts/omni.kit.test/testExtMatchVersionAsString", default=False)
test_type = get_setting("/exts/omni.kit.test/testExtTestType", default="exttest")
# Test Name filtering (support shorter version)
test_name_filter = _parse_arg_shortcut(get_argv(), "-n")
if not test_name_filter:
test_name_filter = get_setting("/exts/omni.kit.test/testExtTestNameFilter", default="")
max_parallel_procs = int(get_setting("/exts/omni.kit.test/testExtMaxParallelProcesses", default=-1))
if max_parallel_procs <= 0:
max_parallel_procs = multiprocessing.cpu_count()
exts_to_test = _build_exts_set(exts, exclude_exts, use_registry, match_version_as_string)
# Prepare an app:
test_app = TestApp(sys.stdout)
def fail_all(fail_message):
reporter = TestReporter(sys.stdout)
for ext in exts:
message = fail_message.format(ext)
test_id = _build_test_id(test_type, ext, test_app.name)
tc_test_id = test_id.replace(".", "+") + ".[PROCESS CHECK]"
_error(sys.stderr, message)
# add start / fail / stop messages for TC + our own reporter
reporter.exttest_start(test_id, tc_test_id, ext, ext)
reporter.exttest_fail(test_id, tc_test_id, fail_type="Error", fail_message=message)
reporter.exttest_stop(test_id, tc_test_id, passed=False)
if on_status_report_fn:
on_status_report_fn(test_id, TestRunStatus.FAILED, fail_message=message)
# If no extensions found report query entries as failures
if len(exts_to_test) == 0:
fail_all("Can't find any extension matching: '{0}'.")
# If no app found report query entries as failures
if not test_app.path:
fail_all(f"Can't find app: {test_app.name}")
exts_to_test = []
# Prepare test run tasks, put into separate serial and parallel queues
parallel_tasks = []
serial_tasks = []
is_parallel_run = max_parallel_procs > 1 and len(exts_to_test) > 1
exts_issues = []
total = 0
for ext in exts_to_test:
ext_info = _prepare_ext_for_testing(ext)
if ext_info:
test_configs = _get_test_configs_for_ext(ext_info, test_name_filter)
unique_test_names = set()
for test_config in test_configs:
valid = True
test_name = test_config.get("name", DEFAULT_TEST_NAME)
if test_name in unique_test_names:
_error(
sys.stderr,
f"Extension {ext} has multiple [[test]] entries with the same 'name' attribute. Each name must be unique; the default is '{DEFAULT_TEST_NAME}'",
)
valid = False
else:
unique_test_names.add(test_name)
total += 1
# Build test id.
test_id = _build_test_id(test_type, ext, test_app.name, test_name)
if only_list:
print(f"test_id: '{test_id}'")
continue
test = ExtTest(
ext,
ext_info,
test_config,
test_id,
is_parallel_run,
run_context=run_context,
test_app=test_app,
valid=valid,
)
# fmt: off
# both means we run all tests (reliable and unreliable)
# otherwise we either run reliable tests only or unreliable tests only, so we skip accordingly
if run_context.run_unreliable_tests != RunExtTests.BOTH and int(run_context.run_unreliable_tests) != int(test.unreliable):
test_unreliable = "unreliable" if test.unreliable else "reliable"
run_unreliable = "unreliable" if run_context.run_unreliable_tests == RunExtTests.UNRELIABLE_ONLY else "reliable"
print(f"[INFO] {test_id} skipped because it's marked as {test_unreliable} and we currently run all {run_unreliable} tests")
total -= 1
continue
# fmt: on
# Test skipped itself? (it should have explained it already by now)
if test.skip:
total -= 1
continue
# A single test may be invoked in more than one way, gather them all
from .ext_test_generator import get_tests_to_run
for test_instance in get_tests_to_run(test, ExtTest, run_context, is_parallel_run, valid):
task = _run_ext_test(run_context, test_instance, on_status_report_fn)
if test_instance.parallelizable:
parallel_tasks.append(task)
else:
serial_tasks.append(task)
else:
exts_issues.append(ext)
intro = f"Running {total} Extension Test Process(es)."
if run_context.run_unreliable_tests == RunExtTests.UNRELIABLE_ONLY:
intro = "[Unreliable Tests Run] " + intro
print(intro)
# Actual test run:
finished_tests: List[ExtTest] = []
fail_count = 0
unreliable_fail_count = 0
unreliable_total = 0
async for test in run_serial_and_parallel_tasks(parallel_tasks, serial_tasks, max_parallel_procs):
unreliable_total += test.result.unreliable
unreliable_fail_count += test.result.unreliable_fail
fail_count += test.result.fail
finished_tests.append(test)
if only_list:
print(f"Found {total} test processes to run.")
return True
return_result = True
def generate_summary():
for test in finished_tests:
if test.result.passed:
if test.retries > 0:
res_str = "[retry ok]"
else:
res_str = "[ ok ]"
else:
res_str = "[ fail ]"
if test.result.unreliable:
res_str += " [unreliable]"
res_str += f" [{test.result.duration:5.1f}s]"
res_str += f" {test.test_id}"
res_str += f" (Count: {test.result.test_count})"
yield f"{res_str}"
for ext in exts_issues:
res_str = f"[ fail ] {ext} (extension registry issue)"
yield f"{res_str}"
def get_failed_tests():
all_failed_tests = [t for test in finished_tests for t in test.failed_tests]
if all_failed_tests:
yield f"\nFailing tests (Count: {len(all_failed_tests)}) :"
for test_name in all_failed_tests:
yield f" - {test_name}"
# Print summary
test_results_file = os.path.join(run_context.output_path, "ext_test_results.txt")
with open(test_results_file, "a") as f:
def report(line):
print(line)
f.write(line + "\n")
report("\n")
report("=" * 60)
report(f"Extension Tests Run Summary (Date: {run_context.start_ts})")
report("=" * 60)
report(" app: {}".format(test_app.name if not test_app.is_empty else "[empty]"))
report(f" retry strategy: {run_context.retry_strategy}," f" max test run: {run_context.max_test_run}")
report("=" * 60)
for line in generate_summary():
report(line)
for line in get_failed_tests():
report(line)
report("=" * 60)
report("=" * 60)
if unreliable_total > 0:
report(
f"UNRELIABLE TESTS REPORT: {unreliable_fail_count} unreliable test processes failed out of {unreliable_total}."
)
# Exit with non-zero code on failure
if fail_count > 0 or len(exts_issues) > 0:
if fail_count > 0:
report(f"[ERROR] {fail_count} test processes failed out of {total}.")
if len(exts_issues) > 0:
report(f"[ERROR] {len(exts_issues)} extension registry issue(s).")
return_result = False
else:
report(f"[OK] All {total} test processes returned 0.")
# Report all results
for test in finished_tests:
test.reporter.report_result(test)
return return_result
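`run_serial_and_parallel_tasks`, awaited in the loop above, is imported from elsewhere in the package. As a rough, self-contained sketch — an assumption about its shape, not the actual Kit implementation — such a helper can cap in-flight parallel tasks with a semaphore and then await the serial queue in order:

```python
import asyncio

async def run_serial_and_parallel_tasks(parallel_tasks, serial_tasks, max_parallel):
    # Bound the number of in-flight parallel tasks with a semaphore.
    sem = asyncio.Semaphore(max_parallel)

    async def bounded(coro):
        async with sem:
            return await coro

    # Yield parallel results as they complete...
    for fut in asyncio.as_completed([bounded(t) for t in parallel_tasks]):
        yield await fut
    # ...then run the serial tasks one at a time, in submission order.
    for task in serial_tasks:
        yield await task
```

Results from the parallel queue arrive in completion order; the serial queue preserves submission order, which matches how the caller above accumulates per-test results.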
def run_ext_tests(test_exts, on_finish_fn=None, on_status_report_fn=None, exclude_exts=[]):
def on_status_report(*args, **kwargs):
if on_status_report_fn:
on_status_report_fn(*args, **kwargs)
_test_status_report(*args, **kwargs)
async def run():
result = await _run_ext_tests(test_exts, on_status_report, exclude_exts)
if on_finish_fn:
on_finish_fn(result)
return asyncio.ensure_future(run())
def shutdown_ext_tests():
# When running extension tests and killing the process after timeout, asyncio hangs somewhere in python shutdown.
# Explicitly closing event loop here helps with that.
if _asyncio_process_was_terminated:
def exception_handler(_, exc):
print(f"Asyncio exception on shutdown: {exc}")
asyncio.get_event_loop().set_exception_handler(exception_handler)
asyncio.get_event_loop().close()
# omniverse-code/kit/exts/omni.kit.test/omni/kit/test/code_change_analyzer.py
import json
import os
import omni.kit.app
import logging
from typing import List
from .repo_test_context import RepoTestContext
from .utils import sha1_path, sha1_list, get_global_test_output_path
logger = logging.getLogger(__name__)
KNOWN_EXT_SOURCE_PATH = ["kit/source/extensions/", "source/extensions/"]
# We know for sure that this hash changes all the time and is used in many tests; we don't want it to mess with our logic for now
STARTUP_SEQUENCE_EXCLUDE = ["omni.rtx.shadercache.d3d12", "omni.rtx.shadercache.vulkan"]
def _get_extension_hash(path):
path = os.path.normpath(path)
hash_cache_file = f"{get_global_test_output_path()}/exts_hash.json"
ext_hashes = {}
# cache hash calculation in a file to speed up things (it's slow)
try:
with open(hash_cache_file, "r") as f:
ext_hashes = json.load(f)
except FileNotFoundError:
pass
except Exception as e:
logger.warn(f"Failed to load extension hashes from {hash_cache_file}, error: {e}")
ext_hash = ext_hashes.get(path, None)
if ext_hash:
return ext_hash
ext_hash = sha1_path(path)
# re-read the cache file in case it changed while the hash was being computed (parallel runs), then update it
try:
with open(hash_cache_file, "r") as f:
ext_hashes = json.load(f)
except FileNotFoundError:
pass
except Exception as e:
logger.warn(f"Failed to load extension hashes from {hash_cache_file}, error: {e}")
ext_hashes[path] = ext_hash
with open(hash_cache_file, "w") as f:
json.dump(ext_hashes, f)
return ext_hash
def _get_extension_name_for_file(file):
for path in KNOWN_EXT_SOURCE_PATH:
if file.startswith(path):
ext = file[len(path) :].split("/")[0]
return ext
return None
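The path-to-extension mapping above is easy to exercise in isolation. This self-contained sketch copies the function and its prefix table from this module (renamed without the leading underscore):

```python
# Copied from this module: map a changed file path to the extension that owns it.
KNOWN_EXT_SOURCE_PATH = ["kit/source/extensions/", "source/extensions/"]

def get_extension_name_for_file(file):
    for path in KNOWN_EXT_SOURCE_PATH:
        if file.startswith(path):
            # The first path component after the known prefix is the extension name.
            return file[len(path):].split("/")[0]
    return None

print(get_extension_name_for_file("source/extensions/omni.foo/omni/foo/impl.py"))  # omni.foo
print(get_extension_name_for_file("docs/CHANGELOG.md"))  # None
```

A path outside the known extension roots returns `None`, which is what makes `_gather_changed_extensions` fall back to running all tests.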
def _print(str, *argv):
print(f"[omni.kit.test.code_change_analyzer] {str}", *argv)
class ChangeAnalyzerResult:
def __init__(self):
self.should_skip_test = False
self.startup_sequence = []
self.startup_sequence_hash = ""
self.tested_ext_hash = ""
self.kernel_version = ""
class CodeChangeAnalyzer:
"""repo_test can provide (when running in an MR on TC) a list of changed files via an env var.
Check whether ONLY extensions changed. If any change is outside `source/extensions` -> run all tests.
If ONLY extensions changed, then for each test solve the list of ALL enabled extensions and check against that list.
"""
def __init__(self, repo_test_context: RepoTestContext):
self._allow_sampling = True
self._allow_skipping = False
self._changed_extensions = self._gather_changed_extensions(repo_test_context)
def _gather_changed_extensions(self, repo_test_context: RepoTestContext):
data = repo_test_context.get()
if data:
changed_files = data.get("changed_files", [])
if changed_files:
self._allow_skipping = True
changed_extensions = set()
for file in changed_files:
ext = _get_extension_name_for_file(file)
if ext:
logger.info(f"Changed path: {file} is an extension: {ext}")
changed_extensions.add(ext)
elif self._allow_skipping:
_print("All tests will run. At least one changed file is not in an extension:", file)
self._allow_skipping = False
self._allow_sampling = False
if self._allow_skipping:
ext_list_str = "\n".join(("\t - " + e for e in changed_extensions))
_print(f"Only tests that use those extensions will run. Changed extensions:\n{ext_list_str}")
return changed_extensions
logger.info("No changed files provided")
return set()
def get_changed_extensions(self) -> List[str]:
return list(self._changed_extensions)
def allow_sampling(self) -> bool:
return self._allow_sampling
def _build_startup_sequence(self, result: ChangeAnalyzerResult, ext_name: str, exts: List):
result.kernel_version = omni.kit.app.get_app().get_kernel_version()
result.startup_sequence = [("kernel", result.kernel_version)]
for ext in exts:
if ext["name"] in STARTUP_SEQUENCE_EXCLUDE:
continue
path = ext.get("path", None)
if path:
hash = _get_extension_hash(path)
result.startup_sequence.append((ext["name"], hash))
if ext["name"] == ext_name:
result.tested_ext_hash = hash
# Hash whole startup sequence
result.startup_sequence_hash = sha1_list([hash for ext, hash in result.startup_sequence])
def analyze(self, test_id: str, ext_name: str, exts_to_enable: List[str]) -> ChangeAnalyzerResult:
result = ChangeAnalyzerResult()
result.should_skip_test = False
# Ask manager for extension startup sequence
manager = omni.kit.app.get_app().get_extension_manager()
solve_result, exts, err = manager.solve_extensions(
exts_to_enable, add_enabled=False, return_only_disabled=False
)
if not solve_result:
logger.warn(f"Failed to solve dependencies for extension(s): {exts_to_enable}, error: {err}")
return result
# Build hashes for a startup sequence
self._build_startup_sequence(result, ext_name, exts)
if not self._allow_skipping:
return result
if not self._changed_extensions:
return result
for ext in exts:
if ext["name"] in self._changed_extensions:
_print(f"{test_id} test will run because it uses the changed extension:", ext["name"])
self._allow_sampling = False
return result
_print(
f"{test_id} skipped by code change analyzer. Extensions enabled in this test were not changed in this MR."
)
result.should_skip_test = True
return result
# omniverse-code/kit/exts/omni.kit.test/omni/kit/test/gitlab.py
import os
from functools import lru_cache
# GitLab CI/CD variables :
# https://docs.gitlab.com/ee/ci/variables/predefined_variables.html
@lru_cache()
def is_running_in_gitlab():
return bool(os.getenv("GITLAB_CI"))
@lru_cache()
def get_gitlab_build_url() -> str:
return os.getenv("CI_PIPELINE_URL") or ""
# omniverse-code/kit/exts/omni.kit.test/omni/kit/test/tests/test_reporter.py
from pathlib import Path
import omni.kit.test
from ..reporter import _calculate_durations, _load_report_data, _load_coverage_results, _generate_html_report
CURRENT_PATH = Path(__file__).parent
DATA_TESTS_PATH = CURRENT_PATH.parent.parent.parent.parent.joinpath("data/tests")
class TestReporter(omni.kit.test.AsyncTestCase):
async def test_success_report_data(self):
"""
omni_kit_test_success_report.jsonl contains the report.jsonl of a successful run testing omni.kit.test
"""
path = DATA_TESTS_PATH.joinpath("omni_kit_test_success_report.jsonl")
report_data = _load_report_data(path)
self.assertEqual(len(report_data), 11)
result = report_data[10]
test_result = result.get("result", None)
self.assertNotEqual(test_result, None)
# make sure durations are good
_calculate_durations(report_data)
startup_duration = test_result["startup_duration"]
tests_duration = test_result["tests_duration"]
self.assertAlmostEqual(startup_duration, 1.040, places=3)
self.assertAlmostEqual(tests_duration, 0.007, places=3)
# make sure our ratio are good
duration = test_result["duration"]
startup_ratio = test_result["startup_ratio"]
tests_ratio = test_result["tests_ratio"]
self.assertAlmostEqual(startup_ratio, 100 * (startup_duration / duration), places=3)
self.assertAlmostEqual(tests_ratio, 100 * (tests_duration / duration), places=3)
async def test_fail_report_data(self):
"""
omni_kit_test_fail_report.jsonl contains the report.jsonl of a failed run of testing omni.kit.test
with a few failed tests and also a test that crash
"""
path = DATA_TESTS_PATH.joinpath("omni_kit_test_fail_report.jsonl")
report_data = _load_report_data(path)
self.assertEqual(len(report_data), 18)
result = report_data[17]
test_result = result.get("result", None)
self.assertNotEqual(test_result, None)
# make sure durations are good
_calculate_durations(report_data)
startup_duration = test_result["startup_duration"]
tests_duration = test_result["tests_duration"]
self.assertAlmostEqual(startup_duration, 0.950, places=3)
self.assertAlmostEqual(tests_duration, 0.006, places=3)
# make sure our ratio are good
duration = test_result["duration"]
startup_ratio = test_result["startup_ratio"]
tests_ratio = test_result["tests_ratio"]
self.assertAlmostEqual(startup_ratio, 100 * (startup_duration / duration), places=3)
self.assertAlmostEqual(tests_ratio, 100 * (tests_duration / duration), places=3)
async def test_html_report(self):
path = DATA_TESTS_PATH.joinpath("omni_kit_test_success_report.jsonl")
report_data = _load_report_data(path)
_calculate_durations(report_data)
merged_results, _ = _load_coverage_results(report_data, read_coverage=False)
html = _generate_html_report(report_data, merged_results)
# total duration is 1.32 seconds, in the hmtl report we keep 1 decimal so it will be shown as 1.3
self.assertTrue(html.find("<td>1.3</td>") != -1)
# startup duration will be 78.8 %
self.assertTrue(html.find("<td>78.8</td>") != -1)
# omniverse-code/kit/exts/omni.kit.test/omni/kit/test/tests/test_kit_test.py
import unittest
import carb
import omni.kit.app
import omni.kit.test
# uncomment for dev work
# import unittest
class TestKitTest(omni.kit.test.AsyncTestCase):
async def test_test_settings(self):
# See [[test]] section
carb.log_error("This message will not fail the test because it is excluded in [[test]]")
self.assertEqual(carb.settings.get_settings().get("/extra_arg_passed/param"), 123)
async def test_test_other_settings(self):
self.assertEqual(carb.settings.get_settings().get("/extra_arg_passed/param"), 456)
async def test_that_is_excluded(self):
self.fail("Should not be called")
async def test_get_test(self):
if any("test_that_is_unreliable" in t.id() for t in omni.kit.test.get_tests()):
self.skipTest("Skipping if test_that_is_unreliable ran")
self.assertSetEqual(
{t.id() for t in omni.kit.test.get_tests()},
set(
[
"omni.kit.test.tests.test_kit_test.TestKitTest.test_are_async",
"omni.kit.test.tests.test_kit_test.TestKitTest.test_can_be_skipped_1",
"omni.kit.test.tests.test_kit_test.TestKitTest.test_can_be_skipped_2",
"omni.kit.test.tests.test_kit_test.TestKitTest.test_can_be_sync",
"omni.kit.test.tests.test_kit_test.TestKitTest.test_get_test",
"omni.kit.test.tests.test_kit_test.TestKitTest.test_test_settings",
"omni.kit.test.tests.test_kit_test.TestKitTest.test_with_metadata",
"omni.kit.test.tests.test_kit_test.TestKitTest.test_with_subtest",
"omni.kit.test.tests.test_lookups.TestLookups.test_lookups",
"omni.kit.test.tests.test_nvdf.TestNVDF.test_convert_advanced_types",
"omni.kit.test.tests.test_nvdf.TestNVDF.test_convert_basic_types",
"omni.kit.test.tests.test_nvdf.TestNVDF.test_convert_reserved_types",
"omni.kit.test.tests.test_reporter.TestReporter.test_fail_report_data",
"omni.kit.test.tests.test_reporter.TestReporter.test_html_report",
"omni.kit.test.tests.test_reporter.TestReporter.test_success_report_data",
"omni.kit.test.tests.test_sampling.TestSampling.test_sampling_factor_one",
"omni.kit.test.tests.test_sampling.TestSampling.test_sampling_factor_point_five",
"omni.kit.test.tests.test_sampling.TestSampling.test_sampling_factor_zero",
"omni.kit.test.tests.test_sampling.TestSampling.test_with_fake_nvdf_query",
]
),
)
self.assertListEqual(
[t.id() for t in omni.kit.test.get_tests(tests_filter="test_settings")],
[
"omni.kit.test.tests.test_kit_test.TestKitTest.test_test_settings",
],
)
async def test_are_async(self):
app = omni.kit.app.get_app()
update = app.get_update_number()
await app.next_update_async()
self.assertEqual(app.get_update_number(), update + 1)
def test_can_be_sync(self):
self.assertTrue(True)
@unittest.skip("Skip test with @unittest.skip")
async def test_can_be_skipped_1(self):
self.assertTrue(False)
async def test_can_be_skipped_2(self):
self.skipTest("Skip test with self.skipTest")
self.assertTrue(False)
# subTest will get fixes in python 3.11, see https://bugs.python.org/issue25894
async def test_with_subtest(self):
with self.subTest(msg="subtest example"):
self.assertTrue(True)
async def test_with_metadata(self):
"""This is an example to use metadata"""
print("##omni.kit.test[set, my_key, This line will be printed if the test fails]")
self.assertTrue(True)
async def test_that_is_unreliable(self):
"""This test will not run unless we run unreliable tests"""
self.assertTrue(True) # we don't make it fail when running unreliable tests
# Development tests - uncomment when doing dev work to test all ways a test can succeed / fail
# async def test_success(self):
# self.assertTrue(True)
# async def test_fail_1(self):
# self.assertTrue(False)
# async def test_fail_2(self):
# raise Exception("fuff")
# self.assertTrue(False)
# will crash with stack overflow
# async def test_fail_3(self):
# __import__("sys").setrecursionlimit(100000000)
# def crash():
# crash()
# crash()
# omniverse-code/kit/exts/omni.kit.test/omni/kit/test/tests/test_nvdf.py
import omni.kit.test
from ..nvdf import remove_nvdf_form, to_nvdf_form
class TestNVDF(omni.kit.test.AsyncTestCase):
async def test_convert_basic_types(self):
d = {
"some_boolean": True,
"some_int": -123,
"some_float": 0.001,
"array_of_string": ["a", "b"],
}
nv = to_nvdf_form(d)
self.assertDictEqual(
nv, {"b_some_boolean": True, "l_some_int": -123, "d_some_float": 0.001, "s_array_of_string": ["a", "b"]}
)
r = remove_nvdf_form(nv)
self.assertDictEqual(d, r)
async def test_convert_advanced_types(self):
class myClass:
def __init__(self, int_value: int, float_value: float) -> None:
self.cl_int: int = int_value
self.cl_float: float = float_value
m = myClass(12, 0.1)
d = {
"some_list": [3, 4],
"some_tuple": (1, 2),
"some_class": m,
}
nv = to_nvdf_form(d)
self.assertDictEqual(
nv, {"l_some_list": [3, 4], "l_some_tuple": (1, 2), "some_class": {"l_cl_int": 12, "d_cl_float": 0.1}}
)
d["some_class"] = m.__dict__
r = remove_nvdf_form(nv)
self.assertDictEqual(d, r)
async def test_convert_reserved_types(self):
d = {
"ts_anything": 2992929,
"ts_created": 56555,
"_id": 69988,
}
nv = to_nvdf_form(d)
self.assertDictEqual(
nv, {"ts_anything": 2992929, "ts_created": 56555, "_id": 69988}
)
r = remove_nvdf_form(nv)
self.assertDictEqual(d, r)
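The assertions above encode a key-prefix convention: `b_` for bools, `l_` for ints and numeric lists, `d_` for floats, `s_` for string lists, with reserved keys (`ts_*`, `_id`) passing through unchanged. The real `to_nvdf_form` lives in `..nvdf`; the sketch below is inferred from these tests only, not the actual implementation:

```python
# Inferred from the test expectations above -- NOT the real ..nvdf implementation.
def to_nvdf_form_sketch(d):
    def prefixed(key, value):
        if key.startswith("ts_") or key == "_id":
            return key  # reserved keys keep their names
        if isinstance(value, bool):  # check bool before int: bool subclasses int
            return "b_" + key
        if isinstance(value, int):
            return "l_" + key
        if isinstance(value, float):
            return "d_" + key
        if isinstance(value, (list, tuple)):
            if value and all(isinstance(x, str) for x in value):
                return "s_" + key
            return "l_" + key
        return key

    out = {}
    for key, value in d.items():
        if hasattr(value, "__dict__"):  # plain objects recurse via their attributes
            out[key] = to_nvdf_form_sketch(value.__dict__)
        elif isinstance(value, dict):
            out[key] = to_nvdf_form_sketch(value)
        else:
            out[prefixed(key, value)] = value
    return out
```

Nested class instances and dicts keep their original key and are converted recursively, matching the `some_class` expectation in `test_convert_advanced_types`.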
# omniverse-code/kit/exts/omni.kit.test/omni/kit/test/tests/__init__.py
from .test_kit_test import *
from .test_lookups import *
from .test_nvdf import *
from .test_reporter import *
from .test_sampling import *
# omniverse-code/kit/exts/omni.kit.test/omni/kit/test/tests/test_lookups.py
"""Test the functionality used by the test runner."""
import omni.kit.app
import omni.kit.test
class TestLookups(omni.kit.test.AsyncTestCase):
async def test_lookups(self):
"""Oddly self-referencing test that uses the test runner test lookup utility to confirm that the utility
finds this test.
"""
manager = omni.kit.app.get_app().get_extension_manager()
my_extension_id = manager.get_enabled_extension_id("omni.kit.test")
module_map = omni.kit.test.get_module_to_extension_map()
self.assertTrue("omni.kit.test" in module_map)
extension_info = module_map["omni.kit.test"]
self.assertEqual((my_extension_id, True), extension_info)
this_test_info = omni.kit.test.extension_from_test_name("omni.kit.test.TestLookups.test_lookups", module_map)
self.assertIsNotNone(this_test_info)
this_test_info_no_module = tuple(e for i, e in enumerate(this_test_info) if i != 2)
self.assertEqual((my_extension_id, True, False), this_test_info_no_module)
# omniverse-code/kit/exts/omni.kit.test/omni/kit/test/tests/test_sampling.py
import urllib.error
from contextlib import suppress
import omni.kit.test
from ..nvdf import get_app_info
from ..sampling import Sampling
class TestSampling(omni.kit.test.AsyncTestCase):
def setUp(self):
self.sampling = Sampling(get_app_info())
self.unittests = ["test_one", "test_two", "test_three", "test_four"]
async def test_sampling_factor_zero(self):
self.sampling.run_query("omni.foo", self.unittests, running_on_ci=False)
samples = self.sampling.get_tests_to_skip(0.0)
# will return the same list but with a different order
self.assertEqual(len(samples), len(self.unittests))
async def test_sampling_factor_one(self):
self.sampling.run_query("omni.foo", self.unittests, running_on_ci=False)
samples = self.sampling.get_tests_to_skip(1.0)
self.assertListEqual(samples, [])
async def test_sampling_factor_point_five(self):
self.sampling.run_query("omni.foo", self.unittests, running_on_ci=False)
samples = self.sampling.get_tests_to_skip(0.5)
self.assertEqual(len(samples), len(self.unittests) / 2)
async def test_with_fake_nvdf_query(self):
with suppress(urllib.error.URLError):
self.sampling.run_query("omni.foo", self.unittests, running_on_ci=True)
samples = self.sampling.get_tests_to_skip(0.5)
if self.sampling.query_result is True:
self.assertEqual(len(samples), len(self.unittests) / 2)
else:
self.assertListEqual(samples, [])
# omniverse-code/kit/exts/omni.kit.test/omni/kit/omni_test_registry/omni_test_registry.py
def omni_test_registry(*args, **kwargs):
"""
The decorator for Python tests.
NOTE: currently passing in the test uuid as a kwarg 'guid'
"""
def decorator(func):
func.guid = kwargs.get("guid", None)
return func
return decorator
.. omniverse-code/kit/exts/omni.kit.test/docs/omni_test_registry.rst

:orphan:
.. _omni.kit.omni_test_registry:
omni.kit.omni_test_registry
###########################
This extension pulls in the `repo_test GUID decorator <https://gitlab-master.nvidia.com/omniverse/repo/repo_test/-/tree/main/omni/repo/test/guid>`_ via the `omniverse_test packman package <http://packman.ov.nvidia.com/packages/omniverse_test>`_ that enables the tagging of tests with GUID metadata. This GUID is then used for tracking tests through renames and relocations.
It is imported in all Python unittest test modules that use omni.kit.test, and the decorator is applied to test methods/functions with a GUID:
.. code:: python
import omni.kit.test
def test_itelemetry_generic_events():
"""Test name + GUID pulled from omni.kit.telemetry for example.
"""
pass
**Issues?**
Please reach out to @rafal karp or @chris morrell on Slack, or visit the #ct-omni-repoman Slack channel.
.. omniverse-code/kit/exts/omni.kit.test/docs/index.rst

omni.kit.test
###########################
Python asyncio-centric testing system.
To create a test, derive from :class:`omni.kit.test.AsyncTestCase` and add a method whose name starts with ``test_``, as in :mod:`unittest`. The method can be either async or a regular function.
.. code:: python
import omni.kit.test
class MyTest(omni.kit.test.AsyncTestCase):
async def setUp(self):
pass
async def tearDown(self):
pass
# Actual test; notice it is an "async" function, so "await" can be used if needed
async def test_hello(self):
self.assertEqual(10, 10)
Test class must be defined in "tests" submodule of your public extension module. For example if your ``extension.toml`` defines:
.. code:: toml
[[python.module]]
name = "omni.foo"
``omni.foo.tests.MyTest`` would be the path to your test. The test system will automatically discover and import the ``omni.foo.tests`` module. Using the ``tests`` submodule of your extension module is the recommended way to organize tests. It keeps tests together with the extension, but not too tightly coupled to the module they test, so they can import that module by absolute path (e.g. ``import omni.foo``) and test it the way a user will see it.
Refer to the ``omni.example.hello`` extension for the simplest example of an extension with a Python test.
Settings
**********
For the settings refer to ``extension.toml`` file:
.. literalinclude:: ../config/extension.toml
:language: toml
They can be used to filter tests, run them automatically, and quit.
API Reference
***************
.. automodule:: omni.kit.test
:platform: Windows-x86_64, Linux-x86_64
:members:
:undoc-members:
:imported-members:
:exclude-members: contextlib, suppress
# omniverse-code/kit/exts/omni.kit.test_suite.menu/omni/kit/test_suite/menu/tests/__init__.py
from .context_menu_bind_material_listview import *
# omniverse-code/kit/exts/omni.kit.test_suite.menu/omni/kit/test_suite/menu/tests/context_menu_bind_material_listview.py
## Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
##
## NVIDIA CORPORATION and its licensors retain all intellectual property
## and proprietary rights in and to this software, related documentation
## and any modifications thereto. Any use, reproduction, disclosure or
## distribution of this software and related documentation without an express
## license agreement from NVIDIA CORPORATION is strictly prohibited.
##
import omni.kit.test
import os
import omni.kit.app
import omni.usd
import omni.kit.commands
from omni.kit.test.async_unittest import AsyncTestCase
from omni.kit import ui_test
from pxr import Sdf, Usd, UsdShade
from omni.kit.material.library.test_helper import MaterialLibraryTestHelper
from omni.kit.test_suite.helpers import (
open_stage,
get_test_data_path,
select_prims,
delete_prim_path_children,
arrange_windows
)
from omni.kit.window.content_browser.test_helper import ContentBrowserTestHelper
class ContextMenuBindMaterialListview(AsyncTestCase):
# Before running each test
async def setUp(self):
await arrange_windows("Stage", 300.0)
await open_stage(get_test_data_path(__name__, "bound_shapes.usda"))
async def test_l1_context_menu_bind_material_listview(self):
await ui_test.find("Stage").focus()
await ui_test.find("Content").focus()
# grid_view_enabled = True doesn't work with item_offset
to_select = ["/World/Cube", "/World/Sphere", "/World/Cylinder"]
stage = omni.usd.get_context().get_stage()
material_test_helper = MaterialLibraryTestHelper()
content_browser_helper = ContentBrowserTestHelper()
await content_browser_helper.toggle_grid_view_async(False)
mdl_list = await omni.kit.material.library.get_mdl_list_async()
for mtl_name, mdl_path, submenu in mdl_list:
# delete any materials in looks
await delete_prim_path_children("/World/Looks")
# get content browser file
await content_browser_helper.navigate_to_async(mdl_path)
await ui_test.human_delay(10)
item = await content_browser_helper.get_treeview_item_async(os.path.basename(mdl_path))
self.assertIsNotNone(item)
# get content browser treeview
content_treeview = ui_test.find("Content//Frame/**/TreeView[*].identifier=='content_browser_treeview'")
# select prims
await select_prims(to_select)
# right click content browser
await content_treeview.right_click(item.center)
# click on context menu item
await ui_test.select_context_menu("Bind material to selected prim(s)")
# use create material dialog
await material_test_helper.handle_create_material_dialog(mdl_path, mtl_name)
# verify item(s)
for prim_path in to_select:
prim = stage.GetPrimAtPath(prim_path)
bound_material, _ = UsdShade.MaterialBindingAPI(prim).ComputeBoundMaterial()
self.assertTrue(bound_material.GetPrim().IsValid())
self.assertEqual(bound_material.GetPrim().GetPrimPath().pathString, f"/World/Looks/{mtl_name}")
.. omniverse-code/kit/exts/omni.kit.test_suite.menu/docs/index.rst

omni.kit.test_suite.menu
########################
menu tests
.. toctree::
:maxdepth: 1
CHANGELOG
# omniverse-code/kit/exts/omni.kit.viewport.rtx/config/extension.toml
[package]
# Semantic Versioning is used: https://semver.org/
version = "104.0.0"
# Lists people or organizations that are considered the "authors" of the package.
authors = ["NVIDIA"]
# The title and description fields are primarly for displaying extension info in UI
title = "Viewport RTX Bundle"
description="Extension to make the RTX Realtime and Pathtraced renderer and settings available for the Viewport."
# URL of the extension source repository.
repository = ""
# Keywords for the extension
keywords = ["kit", "ui", "viewport", "hydra", "rtx", "render"]
# Location of change log file in target (final) folder of extension, relative to the root.
# More info on writing changelog: https://keepachangelog.com/en/1.0.0/
changelog="docs/CHANGELOG.md"
# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"
# Icon is shown in Extensions window, it is recommended to be square, of size 256x256.
icon = "data/icon.png"
preview_image = "data/preview.png"
category = "Viewport"
[dependencies]
# Load the RTX renderer extension
"omni.hydra.rtx" = {}
# Load the RTX renderer settings extension
"omni.rtx.window.settings" = { }
# Main python module this extension provides, it will be publicly available as "import omni.kit.viewport.rtx".
# [[python.module]]
# name = "omni.kit.viewport.rtx"
[settings]
# When including multiple omni.kit.viewport.XXX renderer extensions,
# the app should set this to a comma-delimited list of all renderers to enable and which one to start up with
# renderer.enabled "rtx,iray,pxr"
# renderer.active = "rtx"
# Make sure the renderer is enabled and active
renderer.enabled = "rtx"
renderer.active = "rtx"
[[test]]
# This is just a collection of extensions; they should be tested individually for now
waiver = ""
<!-- omniverse-code/kit/exts/omni.kit.viewport.rtx/docs/CHANGELOG.md -->
# CHANGELOG
This document records all notable changes to ``omni.kit.viewport.rtx`` extension.
This project adheres to `Semantic Versioning <https://semver.org/>`_.
## [104.0.0] - 2022-05-04
### Added
- Initial version
<!-- omniverse-code/kit/exts/omni.kit.viewport.rtx/docs/README.md -->
# Viewport RTX Extension [omni.kit.viewport.rtx]
Extension to make the RTX Realtime and Pathtraced renderer and settings available for the Viewport.
# omniverse-code/kit/exts/omni.kit.viewport.pxr/config/extension.toml
[package]
# Semantic Versioning is used: https://semver.org/
version = "104.0.1"
# Lists people or organizations that are considered the "authors" of the package.
authors = ["NVIDIA"]
# The title and description fields are primarly for displaying extension info in UI
title = "Viewport External Renderers Bundle"
description="Extension to make external HydraDelegate renderers and settings available for the Viewport."
# URL of the extension source repository.
repository = ""
# Keywords for the extension
keywords = ["kit", "ui", "viewport", "hydra", "storm", "render", "pxr", "pixar", "render delegate"]
# Location of change log file in target (final) folder of extension, relative to the root.
# More info on writing changelog: https://keepachangelog.com/en/1.0.0/
changelog="docs/CHANGELOG.md"
# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"
# Icon is shown in Extensions window, it is recommended to be square, of size 256x256.
icon = "data/icon.png"
preview_image = "data/preview.png"
category = "Viewport"
[dependencies]
# Load the Pixar render delegate extension
"omni.hydra.pxr" = {}
# Load the settings extension for those renderers
"omni.hydra.pxr.settings" = { }
# Main python module this extension provides, it will be publicly available as "import omni.kit.viewport.pxr".
# [[python.module]]
# name = "omni.kit.viewport.pxr"
[settings]
# When including multiple omni.kit.viewport.XXX renderer extensions,
# the app should set this to a comma-delimited list of all renderers to enable and which to start up with
# renderer.enabled = "rtx,iray,pxr"
# renderer.active = "rtx"
# Make sure the renderer is enabled and active
renderer.enabled = "pxr"
renderer.active = "pxr"
# External renderer extensions might append to this list, so put Storm in as valid and enabled.
# The final application can always override this to explicitly disable Storm if desired.
pxr.renderers="HdStormRendererPlugin:GL"
[[test]]
# This is just a collection of extensions; they should be tested individually for now
waiver = ""
| 2,066 | TOML | 35.263157 | 110 | 0.75363 |
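The commented guidance in the two extension configs above can be sketched as a hypothetical app-level `[settings]` fragment; the `renderer.enabled` / `renderer.active` keys follow the pattern the configs document, and the particular renderer list is illustrative only:

```toml
[settings]
# Enable both the RTX and external Hydra delegate renderer bundles,
# starting the app with RTX active.
renderer.enabled = "rtx,pxr"
renderer.active = "rtx"
```

An app including several `omni.kit.viewport.XXX` renderer extensions would set this once at the application level rather than relying on whichever extension's default loads last.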
omniverse-code/kit/exts/omni.kit.viewport.pxr/docs/CHANGELOG.md | # CHANGELOG
This document records all notable changes to the ``omni.kit.viewport.pxr`` extension.
This project adheres to `Semantic Versioning <https://semver.org/>`_.
## [104.0.1] - 2022-08-11
### Changed
- Default to Storm being available and enabled if other renderers start enabling themselves during startup.
## [104.0.0] - 2022-05-04
### Added
- Initial version
| 367 | Markdown | 27.30769 | 107 | 0.730245 |
omniverse-code/kit/exts/omni.kit.viewport.pxr/docs/README.md | # Viewport Hydra Delegate Extension [omni.kit.viewport.pxr]
Extension to make external HydraDelegate renderers and settings available for the Viewport.
| 152 | Markdown | 49.999983 | 91 | 0.835526 |
omniverse-code/kit/exts/omni.ujitso.python/omni/ujitso/_ujitso.pyi | from __future__ import annotations
import omni.ujitso._ujitso
import typing
import numpy
_Shape = typing.Tuple[int, ...]
__all__ = [
"Agent",
"AgentConfigFlags",
"BuildContext",
"BuildHandle",
"BuildJob",
"DataGrid",
"DataStoreUtils",
"Default",
"DependencyContext",
"DependencyHandle",
"DependencyJob",
"DynamicRequest",
"ExternalStorage",
"ForceRemoteTasks",
"IAgent",
"IDataGrid",
"IDataStore",
"IFactory",
"IHTTPFactory",
"IInProcessFactory",
"ILocalDataStore",
"INucleusDataStore",
"IRegistry",
"IService",
"ITCPFactory",
"ITaskAgent",
"ITaskService",
"KeyToken",
"KeyTokenEx",
"MatchContext",
"MatchResult",
"None",
"OperationResult",
"Processor",
"ProcessorInformation",
"Request",
"RequestCallbackData",
"RequestFilter",
"RequestHandle",
"RequestTokenType",
"RequestType",
"ResultHandle",
"TCPAgentConfigFlags",
"TIME_OUT_INFINITE",
"UjitsoUtils",
"UseRoundRobinServerScheduling",
"ValidationType",
"WaitForConnectionsBeforeLaunch",
"acquire_agent_interface",
"acquire_data_grid_interface",
"acquire_factory_interface",
"acquire_http_factory_interface",
"acquire_in_progress_factory_interface",
"acquire_local_data_store_interface",
"acquire_nucleus_data_store_interface",
"acquire_registry_interface",
"acquire_service_interface",
"acquire_tcp_factory_interface",
"release_agent_interface",
"release_data_grid_interface",
"release_factory_interface",
"release_http_factory_interface",
"release_in_progress_factory_interface",
"release_local_data_store_interface",
"release_nucleus_data_store_interface",
"release_registry_interface",
"release_service_interface",
"release_tcp_factory_interface"
]
class Agent():
@property
def agent(self) -> carb::ujitso::IAgent:
"""
:type: carb::ujitso::IAgent
"""
@property
def factory(self) -> carb::ujitso::IFactory:
"""
:type: carb::ujitso::IFactory
"""
@property
def registry(self) -> IRegistry:
"""
:type: IRegistry
"""
@property
def service(self) -> carb::ujitso::IService:
"""
:type: carb::ujitso::IService
"""
@property
def store(self) -> IDataStore:
"""
:type: IDataStore
"""
@property
def taskAgent(self) -> ITaskAgent:
"""
:type: ITaskAgent
"""
@property
def taskService(self) -> ITaskService:
"""
:type: ITaskService
"""
pass
class AgentConfigFlags():
"""
Members:
None
ForceRemoteTasks
Default
"""
def __eq__(self, other: object) -> bool: ...
def __getstate__(self) -> int: ...
def __hash__(self) -> int: ...
def __index__(self) -> int: ...
def __init__(self, value: int) -> None: ...
def __int__(self) -> int: ...
def __ne__(self, other: object) -> bool: ...
def __repr__(self) -> str: ...
def __setstate__(self, state: int) -> None: ...
@property
def name(self) -> str:
"""
:type: str
"""
@property
def value(self) -> int:
"""
:type: int
"""
Default: omni.ujitso._ujitso.AgentConfigFlags # value = <AgentConfigFlags.None: 1>
ForceRemoteTasks: omni.ujitso._ujitso.AgentConfigFlags # value = <AgentConfigFlags.ForceRemoteTasks: 2>
None: omni.ujitso._ujitso.AgentConfigFlags # value = <AgentConfigFlags.None: 1>
__members__: dict # value = {'None': <AgentConfigFlags.None: 1>, 'ForceRemoteTasks': <AgentConfigFlags.ForceRemoteTasks: 2>, 'Default': <AgentConfigFlags.None: 1>}
pass
class BuildContext():
@property
def agent(self) -> Agent:
"""
:type: Agent
"""
@property
def processor(self) -> carb::ujitso::Processor:
"""
:type: carb::ujitso::Processor
"""
pass
class BuildHandle():
def __init__(self, value: int = 0) -> None: ...
@property
def value(self) -> int:
"""
:type: int
"""
@value.setter
def value(self, arg0: int) -> None:
pass
pass
class BuildJob():
@staticmethod
def __init__(*args, **kwargs) -> typing.Any: ...
@property
def request(self) -> Request:
"""
:type: Request
"""
@request.setter
def request(self, arg0: Request) -> None:
pass
pass
class DataGrid():
@property
def iface(self) -> carb::dad::IDataGrid:
"""
:type: carb::dad::IDataGrid
"""
pass
class DataStoreUtils():
@staticmethod
def copy_data_block(*args, **kwargs) -> typing.Any: ...
pass
class DependencyContext():
@property
def agent(self) -> Agent:
"""
:type: Agent
"""
@property
def processor(self) -> carb::ujitso::Processor:
"""
:type: carb::ujitso::Processor
"""
pass
class DependencyHandle():
def __init__(self, value: int = 0) -> None: ...
@property
def value(self) -> int:
"""
:type: int
"""
@value.setter
def value(self, arg0: int) -> None:
pass
pass
class DependencyJob():
@staticmethod
def __init__(*args, **kwargs) -> typing.Any: ...
@property
def request(self) -> Request:
"""
:type: Request
"""
@request.setter
def request(self, arg0: Request) -> None:
pass
pass
class DynamicRequest():
@typing.overload
def __init__(self) -> None: ...
@typing.overload
def __init__(self, arg0: Request) -> None: ...
def add(self, arg0: KeyTokenEx) -> int: ...
def add_buffer(self, arg0: KeyTokenEx, arg1: RequestTokenType, arg2: numpy.ndarray[numpy.uint8]) -> int: ...
def add_double(self, arg0: KeyTokenEx, arg1: float) -> int: ...
def add_float(self, arg0: KeyTokenEx, arg1: float) -> int: ...
def add_int(self, arg0: KeyTokenEx, arg1: int) -> int: ...
def add_int16(self, arg0: KeyTokenEx, arg1: int) -> int: ...
def add_int64(self, arg0: KeyTokenEx, arg1: int) -> int: ...
def add_key_token(self, arg0: KeyTokenEx, arg1: KeyToken) -> int: ...
def add_string(self, arg0: KeyTokenEx, arg1: str) -> int: ...
def add_uint(self, arg0: KeyTokenEx, arg1: int) -> int: ...
def add_uint16(self, arg0: KeyTokenEx, arg1: int) -> int: ...
def add_uint64(self, arg0: KeyTokenEx, arg1: int) -> int: ...
def add_uint8(self, arg0: KeyTokenEx, arg1: int) -> int: ...
def copy(self, arg0: KeyTokenEx, arg1: DynamicRequest) -> bool: ...
def find_key(self, arg0: KeyTokenEx) -> tuple: ...
def get_as_double(self, arg0: KeyTokenEx) -> tuple: ...
def get_as_float(self, arg0: KeyTokenEx) -> tuple: ...
def get_as_int16(self, arg0: KeyTokenEx) -> tuple: ...
def get_as_int32(self, arg0: KeyTokenEx) -> tuple: ...
def get_as_int64(self, arg0: KeyTokenEx) -> tuple: ...
def get_as_key_token(self, arg0: KeyTokenEx) -> tuple: ...
def get_as_string(self, arg0: KeyTokenEx) -> tuple: ...
def get_as_uint16(self, arg0: KeyTokenEx) -> tuple: ...
def get_as_uint32(self, arg0: KeyTokenEx) -> tuple: ...
def get_as_uint64(self, arg0: KeyTokenEx) -> tuple: ...
def get_as_uint8(self, arg0: KeyTokenEx) -> tuple: ...
def get_key(self, arg0: int) -> KeyToken: ...
def get_request(self) -> Request: ...
def get_type(self, arg0: int) -> RequestTokenType: ...
def remove_key(self, arg0: KeyTokenEx) -> bool: ...
def replace_key(self, arg0: KeyTokenEx, arg1: KeyTokenEx) -> bool: ...
def replace_value_double(self, arg0: KeyTokenEx, arg1: float) -> bool: ...
def replace_value_float(self, arg0: KeyTokenEx, arg1: float) -> bool: ...
def replace_value_int(self, arg0: KeyTokenEx, arg1: int) -> bool: ...
def replace_value_int16(self, arg0: KeyTokenEx, arg1: int) -> bool: ...
def replace_value_int64(self, arg0: KeyTokenEx, arg1: int) -> bool: ...
def replace_value_key_token(self, arg0: KeyTokenEx, arg1: KeyToken) -> bool: ...
def replace_value_string(self, arg0: KeyTokenEx, arg1: str) -> bool: ...
def replace_value_uint(self, arg0: KeyTokenEx, arg1: int) -> bool: ...
def replace_value_uint16(self, arg0: KeyTokenEx, arg1: int) -> bool: ...
def replace_value_uint64(self, arg0: KeyTokenEx, arg1: int) -> bool: ...
def replace_value_uint8(self, arg0: KeyTokenEx, arg1: int) -> bool: ...
def reserve(self, arg0: int) -> None: ...
def size(self) -> int: ...
pass
class ExternalStorage():
def __init__(self, arg0: numpy.ndarray[numpy.uint64]) -> None: ...
@property
def values(self) -> numpy.ndarray[numpy.uint64]:
"""
:type: numpy.ndarray[numpy.uint64]
"""
@values.setter
def values(self, arg1: numpy.ndarray[numpy.uint64]) -> None:
pass
pass
class IAgent():
def destroy_request(self, arg0: RequestHandle) -> OperationResult: ...
def get_request_external_data(self, arg0: ResultHandle) -> tuple: ...
def get_request_meta_data(self, arg0: ResultHandle) -> tuple: ...
def get_request_result(self, arg0: RequestHandle) -> tuple: ...
def get_request_storage_context(self, arg0: RequestHandle) -> tuple: ...
def request_build(self, arg0: Agent, arg1: Request, arg2: RequestCallbackData) -> tuple: ...
def validate_request_external_data(self, arg0: RequestHandle, arg1: ResultHandle, arg2: numpy.ndarray[bool], arg3: bool) -> OperationResult: ...
def wait_all(self, arg0: Agent, arg1: int) -> OperationResult: ...
def wait_request(self, arg0: RequestHandle, arg1: int) -> OperationResult: ...
pass
class IDataGrid():
def create_data_grid(self) -> DataGrid: ...
def destroy_data_grid(self, arg0: DataGrid) -> None: ...
pass
class IDataStore():
class RetrieveFlags():
"""
Members:
EN_RETRIEVEFLAG_NONE
EN_RETRIEVEFLAG_SYNC
EN_RETRIEVEFLAG_SIZE_ONLY
EN_RETRIEVEFLAG_EXISTENCE_ONLY
EN_RETRIEVEFLAG_LOCAL
EN_RETRIEVEFLAG_CLUSTER
"""
def __eq__(self, other: object) -> bool: ...
def __getstate__(self) -> int: ...
def __hash__(self) -> int: ...
def __index__(self) -> int: ...
def __init__(self, value: int) -> None: ...
def __int__(self) -> int: ...
def __ne__(self, other: object) -> bool: ...
def __repr__(self) -> str: ...
def __setstate__(self, state: int) -> None: ...
@property
def name(self) -> str:
"""
:type: str
"""
@property
def value(self) -> int:
"""
:type: int
"""
EN_RETRIEVEFLAG_CLUSTER: omni.ujitso._ujitso.IDataStore.RetrieveFlags # value = <RetrieveFlags.EN_RETRIEVEFLAG_CLUSTER: 16>
EN_RETRIEVEFLAG_EXISTENCE_ONLY: omni.ujitso._ujitso.IDataStore.RetrieveFlags # value = <RetrieveFlags.EN_RETRIEVEFLAG_EXISTENCE_ONLY: 4>
EN_RETRIEVEFLAG_LOCAL: omni.ujitso._ujitso.IDataStore.RetrieveFlags # value = <RetrieveFlags.EN_RETRIEVEFLAG_LOCAL: 8>
EN_RETRIEVEFLAG_NONE: omni.ujitso._ujitso.IDataStore.RetrieveFlags # value = <RetrieveFlags.EN_RETRIEVEFLAG_NONE: 0>
EN_RETRIEVEFLAG_SIZE_ONLY: omni.ujitso._ujitso.IDataStore.RetrieveFlags # value = <RetrieveFlags.EN_RETRIEVEFLAG_SIZE_ONLY: 2>
EN_RETRIEVEFLAG_SYNC: omni.ujitso._ujitso.IDataStore.RetrieveFlags # value = <RetrieveFlags.EN_RETRIEVEFLAG_SYNC: 1>
__members__: dict # value = {'EN_RETRIEVEFLAG_NONE': <RetrieveFlags.EN_RETRIEVEFLAG_NONE: 0>, 'EN_RETRIEVEFLAG_SYNC': <RetrieveFlags.EN_RETRIEVEFLAG_SYNC: 1>, 'EN_RETRIEVEFLAG_SIZE_ONLY': <RetrieveFlags.EN_RETRIEVEFLAG_SIZE_ONLY: 2>, 'EN_RETRIEVEFLAG_EXISTENCE_ONLY': <RetrieveFlags.EN_RETRIEVEFLAG_EXISTENCE_ONLY: 4>, 'EN_RETRIEVEFLAG_LOCAL': <RetrieveFlags.EN_RETRIEVEFLAG_LOCAL: 8>, 'EN_RETRIEVEFLAG_CLUSTER': <RetrieveFlags.EN_RETRIEVEFLAG_CLUSTER: 16>}
pass
class StoreFlags():
"""
Members:
EN_STOREFLAG_DEFAULT
EN_STOREFLAG_NO_LOCAL
EN_STOREFLAG_LOCAL_INCONSISTENT
EN_STOREFLAG_NO_CLUSTER
EN_STOREFLAG_CLUSTER_INCONSISTENT
EN_STOREFLAG_NO_REMOTE
EN_STOREFLAG_REMOTE_INCONSISTENT
EN_STOREFLAG_PERSISTENCE_PRIORITY_LOW
EN_STOREFLAG_PERSISTENCE_PRIORITY_MEDIUM
EN_STOREFLAG_PERSISTENCE_PRIORITY_HIGH
"""
def __eq__(self, other: object) -> bool: ...
def __getstate__(self) -> int: ...
def __hash__(self) -> int: ...
def __index__(self) -> int: ...
def __init__(self, value: int) -> None: ...
def __int__(self) -> int: ...
def __ne__(self, other: object) -> bool: ...
def __repr__(self) -> str: ...
def __setstate__(self, state: int) -> None: ...
@property
def name(self) -> str:
"""
:type: str
"""
@property
def value(self) -> int:
"""
:type: int
"""
EN_STOREFLAG_CLUSTER_INCONSISTENT: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_CLUSTER_INCONSISTENT: 8>
EN_STOREFLAG_DEFAULT: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_DEFAULT: 0>
EN_STOREFLAG_LOCAL_INCONSISTENT: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_LOCAL_INCONSISTENT: 2>
EN_STOREFLAG_NO_CLUSTER: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_NO_CLUSTER: 4>
EN_STOREFLAG_NO_LOCAL: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_NO_LOCAL: 1>
EN_STOREFLAG_NO_REMOTE: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_NO_REMOTE: 16>
EN_STOREFLAG_PERSISTENCE_PRIORITY_HIGH: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_PERSISTENCE_PRIORITY_HIGH: 192>
EN_STOREFLAG_PERSISTENCE_PRIORITY_LOW: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_PERSISTENCE_PRIORITY_LOW: 64>
EN_STOREFLAG_PERSISTENCE_PRIORITY_MEDIUM: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_PERSISTENCE_PRIORITY_MEDIUM: 128>
EN_STOREFLAG_REMOTE_INCONSISTENT: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_REMOTE_INCONSISTENT: 32>
__members__: dict # value = {'EN_STOREFLAG_DEFAULT': <StoreFlags.EN_STOREFLAG_DEFAULT: 0>, 'EN_STOREFLAG_NO_LOCAL': <StoreFlags.EN_STOREFLAG_NO_LOCAL: 1>, 'EN_STOREFLAG_LOCAL_INCONSISTENT': <StoreFlags.EN_STOREFLAG_LOCAL_INCONSISTENT: 2>, 'EN_STOREFLAG_NO_CLUSTER': <StoreFlags.EN_STOREFLAG_NO_CLUSTER: 4>, 'EN_STOREFLAG_CLUSTER_INCONSISTENT': <StoreFlags.EN_STOREFLAG_CLUSTER_INCONSISTENT: 8>, 'EN_STOREFLAG_NO_REMOTE': <StoreFlags.EN_STOREFLAG_NO_REMOTE: 16>, 'EN_STOREFLAG_REMOTE_INCONSISTENT': <StoreFlags.EN_STOREFLAG_REMOTE_INCONSISTENT: 32>, 'EN_STOREFLAG_PERSISTENCE_PRIORITY_LOW': <StoreFlags.EN_STOREFLAG_PERSISTENCE_PRIORITY_LOW: 64>, 'EN_STOREFLAG_PERSISTENCE_PRIORITY_MEDIUM': <StoreFlags.EN_STOREFLAG_PERSISTENCE_PRIORITY_MEDIUM: 128>, 'EN_STOREFLAG_PERSISTENCE_PRIORITY_HIGH': <StoreFlags.EN_STOREFLAG_PERSISTENCE_PRIORITY_HIGH: 192>}
pass
EN_RETRIEVEFLAG_CLUSTER: omni.ujitso._ujitso.IDataStore.RetrieveFlags # value = <RetrieveFlags.EN_RETRIEVEFLAG_CLUSTER: 16>
EN_RETRIEVEFLAG_EXISTENCE_ONLY: omni.ujitso._ujitso.IDataStore.RetrieveFlags # value = <RetrieveFlags.EN_RETRIEVEFLAG_EXISTENCE_ONLY: 4>
EN_RETRIEVEFLAG_LOCAL: omni.ujitso._ujitso.IDataStore.RetrieveFlags # value = <RetrieveFlags.EN_RETRIEVEFLAG_LOCAL: 8>
EN_RETRIEVEFLAG_NONE: omni.ujitso._ujitso.IDataStore.RetrieveFlags # value = <RetrieveFlags.EN_RETRIEVEFLAG_NONE: 0>
EN_RETRIEVEFLAG_SIZE_ONLY: omni.ujitso._ujitso.IDataStore.RetrieveFlags # value = <RetrieveFlags.EN_RETRIEVEFLAG_SIZE_ONLY: 2>
EN_RETRIEVEFLAG_SYNC: omni.ujitso._ujitso.IDataStore.RetrieveFlags # value = <RetrieveFlags.EN_RETRIEVEFLAG_SYNC: 1>
EN_STOREFLAG_CLUSTER_INCONSISTENT: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_CLUSTER_INCONSISTENT: 8>
EN_STOREFLAG_DEFAULT: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_DEFAULT: 0>
EN_STOREFLAG_LOCAL_INCONSISTENT: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_LOCAL_INCONSISTENT: 2>
EN_STOREFLAG_NO_CLUSTER: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_NO_CLUSTER: 4>
EN_STOREFLAG_NO_LOCAL: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_NO_LOCAL: 1>
EN_STOREFLAG_NO_REMOTE: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_NO_REMOTE: 16>
EN_STOREFLAG_PERSISTENCE_PRIORITY_HIGH: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_PERSISTENCE_PRIORITY_HIGH: 192>
EN_STOREFLAG_PERSISTENCE_PRIORITY_LOW: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_PERSISTENCE_PRIORITY_LOW: 64>
EN_STOREFLAG_PERSISTENCE_PRIORITY_MEDIUM: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_PERSISTENCE_PRIORITY_MEDIUM: 128>
EN_STOREFLAG_REMOTE_INCONSISTENT: omni.ujitso._ujitso.IDataStore.StoreFlags # value = <StoreFlags.EN_STOREFLAG_REMOTE_INCONSISTENT: 32>
pass
class IFactory():
def create_agent(self, arg0: DataGrid, arg1: IDataStore, arg2: ITaskAgent, arg3: ITaskService, arg4: AgentConfigFlags) -> Agent: ...
def destroy_agent(self, arg0: Agent) -> None: ...
pass
class IHTTPFactory():
def create_agent(self, arg0: str) -> ITaskAgent: ...
def create_service(self) -> ITaskService: ...
def run_http_jobs(self, arg0: ITaskService, arg1: str, arg2: str) -> str: ...
pass
class IInProcessFactory():
def create_agent(self) -> ITaskAgent: ...
def get_service(self, arg0: ITaskAgent) -> ITaskService: ...
pass
class ILocalDataStore():
def create(self, arg0: str, arg1: int) -> IDataStore: ...
def destroy(self, arg0: IDataStore) -> None: ...
pass
class INucleusDataStore():
def create(self, remote_cache_path: str, remote_cache_discovery_path: str, use_cache_discovery_for_writes: bool = True) -> IDataStore: ...
def destroy(self, arg0: IDataStore) -> None: ...
pass
class IRegistry():
class GlobalKeyToken():
"""
Members:
PATH
VERSION
TIME
FLAGS
PARAM0
PARAM1
PARAM2
PARAM3
CUSTOM_START
"""
def __eq__(self, other: object) -> bool: ...
def __getstate__(self) -> int: ...
def __hash__(self) -> int: ...
def __index__(self) -> int: ...
def __init__(self, value: int) -> None: ...
def __int__(self) -> int: ...
def __ne__(self, other: object) -> bool: ...
def __repr__(self) -> str: ...
def __setstate__(self, state: int) -> None: ...
@property
def name(self) -> str:
"""
:type: str
"""
@property
def value(self) -> int:
"""
:type: int
"""
CUSTOM_START: omni.ujitso._ujitso.IRegistry.GlobalKeyToken # value = <GlobalKeyToken.CUSTOM_START: 65536>
FLAGS: omni.ujitso._ujitso.IRegistry.GlobalKeyToken # value = <GlobalKeyToken.FLAGS: 3>
PARAM0: omni.ujitso._ujitso.IRegistry.GlobalKeyToken # value = <GlobalKeyToken.PARAM0: 4>
PARAM1: omni.ujitso._ujitso.IRegistry.GlobalKeyToken # value = <GlobalKeyToken.PARAM1: 5>
PARAM2: omni.ujitso._ujitso.IRegistry.GlobalKeyToken # value = <GlobalKeyToken.PARAM2: 6>
PARAM3: omni.ujitso._ujitso.IRegistry.GlobalKeyToken # value = <GlobalKeyToken.PARAM3: 7>
PATH: omni.ujitso._ujitso.IRegistry.GlobalKeyToken # value = <GlobalKeyToken.PATH: 0>
TIME: omni.ujitso._ujitso.IRegistry.GlobalKeyToken # value = <GlobalKeyToken.TIME: 2>
VERSION: omni.ujitso._ujitso.IRegistry.GlobalKeyToken # value = <GlobalKeyToken.VERSION: 1>
__members__: dict # value = {'PATH': <GlobalKeyToken.PATH: 0>, 'VERSION': <GlobalKeyToken.VERSION: 1>, 'TIME': <GlobalKeyToken.TIME: 2>, 'FLAGS': <GlobalKeyToken.FLAGS: 3>, 'PARAM0': <GlobalKeyToken.PARAM0: 4>, 'PARAM1': <GlobalKeyToken.PARAM1: 5>, 'PARAM2': <GlobalKeyToken.PARAM2: 6>, 'PARAM3': <GlobalKeyToken.PARAM3: 7>, 'CUSTOM_START': <GlobalKeyToken.CUSTOM_START: 65536>}
pass
@staticmethod
def register_processor(*args, **kwargs) -> None: ...
@staticmethod
def unregister_processor(*args, **kwargs) -> None: ...
CUSTOM_START: omni.ujitso._ujitso.IRegistry.GlobalKeyToken # value = <GlobalKeyToken.CUSTOM_START: 65536>
FLAGS: omni.ujitso._ujitso.IRegistry.GlobalKeyToken # value = <GlobalKeyToken.FLAGS: 3>
PARAM0: omni.ujitso._ujitso.IRegistry.GlobalKeyToken # value = <GlobalKeyToken.PARAM0: 4>
PARAM1: omni.ujitso._ujitso.IRegistry.GlobalKeyToken # value = <GlobalKeyToken.PARAM1: 5>
PARAM2: omni.ujitso._ujitso.IRegistry.GlobalKeyToken # value = <GlobalKeyToken.PARAM2: 6>
PARAM3: omni.ujitso._ujitso.IRegistry.GlobalKeyToken # value = <GlobalKeyToken.PARAM3: 7>
PATH: omni.ujitso._ujitso.IRegistry.GlobalKeyToken # value = <GlobalKeyToken.PATH: 0>
TIME: omni.ujitso._ujitso.IRegistry.GlobalKeyToken # value = <GlobalKeyToken.TIME: 2>
VERSION: omni.ujitso._ujitso.IRegistry.GlobalKeyToken # value = <GlobalKeyToken.VERSION: 1>
pass
class IService():
def add_dependency(self, arg0: Agent, arg1: DependencyHandle, arg2: Request) -> tuple: ...
def add_request_tuple_input(self, arg0: Agent, arg1: DependencyHandle, arg2: Request, arg3: bool, arg4: bool) -> OperationResult: ...
def allocate_meta_data_storage(self, arg0: Agent, arg1: BuildHandle, arg2: int) -> tuple: ...
def get_dependencies(self, arg0: Agent, arg1: BuildHandle) -> tuple: ...
def get_external_data(self, arg0: Agent, arg1: BuildHandle) -> tuple: ...
def get_meta_data(self, arg0: Agent, arg1: BuildHandle) -> tuple: ...
def set_storage_context(self, arg0: Agent, arg1: DependencyHandle, arg2: str) -> OperationResult: ...
def store_external_data(self, arg0: Agent, arg1: BuildHandle, arg2: typing.List[numpy.ndarray[numpy.uint8]], arg3: typing.List[ValidationType]) -> OperationResult: ...
pass
class ITCPFactory():
def create_agent(self, addressesAndPorts: typing.List[typing.Tuple[str, int]] = [], flags: TCPAgentConfigFlags = TCPAgentConfigFlags.UseRoundRobinServerScheduling) -> ITaskAgent: ...
def create_service(self, port: int = 0) -> ITaskService: ...
def get_service_ip(self, arg0: ITaskService) -> str: ...
pass
class ITaskAgent():
def destroy(self) -> None: ...
pass
class ITaskService():
def destroy(self) -> None: ...
pass
class KeyToken():
def __eq__(self, arg0: KeyToken) -> bool: ...
def __init__(self, value: int = 0) -> None: ...
@property
def value(self) -> int:
"""
:type: int
"""
@value.setter
def value(self, arg0: int) -> None:
pass
STATIC_STRING_HASH_MARKER = 2147483648
__hash__ = None
pass
class KeyTokenEx():
@typing.overload
def __eq__(self, arg0: KeyTokenEx) -> bool: ...
@typing.overload
def __eq__(self, arg0: KeyToken) -> bool: ...
@typing.overload
def __init__(self, arg0: IRegistry.GlobalKeyToken) -> None: ...
@typing.overload
def __init__(self, arg0: str) -> None: ...
@typing.overload
def __init__(self, arg0: KeyToken) -> None: ...
@property
def value(self) -> int:
"""
:type: int
"""
@value.setter
def value(self, arg0: int) -> None:
pass
__hash__ = None
pass
class MatchContext():
@property
def agent(self) -> Agent:
"""
:type: Agent
"""
@property
def processor(self) -> carb::ujitso::Processor:
"""
:type: carb::ujitso::Processor
"""
pass
class MatchResult():
"""
Members:
FAILURE
LOWEST_PRIORITY
NORMAL_PRIORITY
HIGHEST_PRIORITY
"""
def __eq__(self, other: object) -> bool: ...
def __getstate__(self) -> int: ...
def __hash__(self) -> int: ...
def __index__(self) -> int: ...
def __init__(self, value: int) -> None: ...
def __int__(self) -> int: ...
def __ne__(self, other: object) -> bool: ...
def __repr__(self) -> str: ...
def __setstate__(self, state: int) -> None: ...
@property
def name(self) -> str:
"""
:type: str
"""
@property
def value(self) -> int:
"""
:type: int
"""
FAILURE: omni.ujitso._ujitso.MatchResult # value = <MatchResult.FAILURE: 0>
HIGHEST_PRIORITY: omni.ujitso._ujitso.MatchResult # value = <MatchResult.HIGHEST_PRIORITY: 2000>
LOWEST_PRIORITY: omni.ujitso._ujitso.MatchResult # value = <MatchResult.LOWEST_PRIORITY: 1>
NORMAL_PRIORITY: omni.ujitso._ujitso.MatchResult # value = <MatchResult.NORMAL_PRIORITY: 1000>
__members__: dict # value = {'FAILURE': <MatchResult.FAILURE: 0>, 'LOWEST_PRIORITY': <MatchResult.LOWEST_PRIORITY: 1>, 'NORMAL_PRIORITY': <MatchResult.NORMAL_PRIORITY: 1000>, 'HIGHEST_PRIORITY': <MatchResult.HIGHEST_PRIORITY: 2000>}
pass
class OperationResult():
"""
Members:
SUCCESS
FAILURE
OVERFLOW_ERROR
INVALIDHANDLE_ERROR
NOPROCESSOR_ERROR
NOTFOUND_ERROR
NOTBUILT_ERROR
INVALIDMETADATA_ERROR
OUTOFMEMORY_ERROR
DATAVALIDATION_ERROR
INTERNAL
"""
def __eq__(self, other: object) -> bool: ...
def __getstate__(self) -> int: ...
def __hash__(self) -> int: ...
def __index__(self) -> int: ...
def __init__(self, value: int) -> None: ...
def __int__(self) -> int: ...
def __ne__(self, other: object) -> bool: ...
def __repr__(self) -> str: ...
def __setstate__(self, state: int) -> None: ...
@property
def name(self) -> str:
"""
:type: str
"""
@property
def value(self) -> int:
"""
:type: int
"""
DATAVALIDATION_ERROR: omni.ujitso._ujitso.OperationResult # value = <OperationResult.DATAVALIDATION_ERROR: 10>
FAILURE: omni.ujitso._ujitso.OperationResult # value = <OperationResult.FAILURE: 1>
INTERNAL: omni.ujitso._ujitso.OperationResult # value = <OperationResult.INTERNAL: 65535>
INVALIDHANDLE_ERROR: omni.ujitso._ujitso.OperationResult # value = <OperationResult.INVALIDHANDLE_ERROR: 3>
INVALIDMETADATA_ERROR: omni.ujitso._ujitso.OperationResult # value = <OperationResult.INVALIDMETADATA_ERROR: 7>
NOPROCESSOR_ERROR: omni.ujitso._ujitso.OperationResult # value = <OperationResult.NOPROCESSOR_ERROR: 4>
NOTBUILT_ERROR: omni.ujitso._ujitso.OperationResult # value = <OperationResult.NOTBUILT_ERROR: 6>
NOTFOUND_ERROR: omni.ujitso._ujitso.OperationResult # value = <OperationResult.NOTFOUND_ERROR: 5>
OUTOFMEMORY_ERROR: omni.ujitso._ujitso.OperationResult # value = <OperationResult.OUTOFMEMORY_ERROR: 8>
OVERFLOW_ERROR: omni.ujitso._ujitso.OperationResult # value = <OperationResult.OVERFLOW_ERROR: 2>
SUCCESS: omni.ujitso._ujitso.OperationResult # value = <OperationResult.SUCCESS: 0>
__members__: dict # value = {'SUCCESS': <OperationResult.SUCCESS: 0>, 'FAILURE': <OperationResult.FAILURE: 1>, 'OVERFLOW_ERROR': <OperationResult.OVERFLOW_ERROR: 2>, 'INVALIDHANDLE_ERROR': <OperationResult.INVALIDHANDLE_ERROR: 3>, 'NOPROCESSOR_ERROR': <OperationResult.NOPROCESSOR_ERROR: 4>, 'NOTFOUND_ERROR': <OperationResult.NOTFOUND_ERROR: 5>, 'NOTBUILT_ERROR': <OperationResult.NOTBUILT_ERROR: 6>, 'INVALIDMETADATA_ERROR': <OperationResult.INVALIDMETADATA_ERROR: 7>, 'OUTOFMEMORY_ERROR': <OperationResult.OUTOFMEMORY_ERROR: 8>, 'DATAVALIDATION_ERROR': <OperationResult.DATAVALIDATION_ERROR: 10>, 'INTERNAL': <OperationResult.INTERNAL: 65535>}
pass
class Processor():
@staticmethod
def __init__(*args, **kwargs) -> typing.Any: ...
pass
class ProcessorInformation():
def __init__(self, arg0: str, arg1: int, arg2: int) -> None: ...
@property
def name(self) -> str:
"""
:type: str
"""
@name.setter
def name(self, arg1: str) -> None:
pass
@property
def remoteExecutionBatchHint(self) -> int:
"""
:type: int
"""
@remoteExecutionBatchHint.setter
def remoteExecutionBatchHint(self, arg0: int) -> None:
pass
@property
def version(self) -> int:
"""
:type: int
"""
@version.setter
def version(self, arg0: int) -> None:
pass
pass
class Request():
def __init__(self, type: RequestType = ..., token_values: bytes = b'') -> None: ...
@property
def tokenValues(self) -> bytes:
"""
:type: bytes
"""
@property
def type(self) -> RequestType:
"""
:type: RequestType
"""
pass
class RequestCallbackData():
def __init__(self, callback: typing.Callable[[object, object, RequestHandle, ResultHandle, OperationResult], None] = None, callback_context_0: object = None, callback_context_1: object = None) -> None: ...
@property
def callbackContext0(self) -> object:
"""
:type: object
"""
@property
def callbackContext1(self) -> object:
"""
:type: object
"""
pass
class RequestFilter():
def __init__(self, keys: numpy.ndarray[KeyToken] = array([], dtype=[('value', '<u4')])) -> None: ...
@property
def count(self) -> int:
"""
:type: int
"""
@property
def keys(self) -> numpy.ndarray[KeyToken]:
"""
:type: numpy.ndarray[KeyToken]
"""
pass
class RequestHandle():
def __init__(self, value: int = 0) -> None: ...
@property
def value(self) -> int:
"""
:type: int
"""
@value.setter
def value(self, arg0: int) -> None:
pass
pass
class RequestTokenType():
def __eq__(self, arg0: RequestTokenType) -> bool: ...
def __init__(self, size: int = 0, type_name_hash: int = 0) -> None: ...
@property
def size(self) -> int:
"""
:type: int
"""
@size.setter
def size(self, arg0: int) -> None:
pass
@property
def typeNameHash(self) -> int:
"""
:type: int
"""
@typeNameHash.setter
def typeNameHash(self, arg0: int) -> None:
pass
__hash__ = None
pass
class RequestType():
def __init__(self, keys: numpy.ndarray[KeyToken] = array([], dtype=[('value', '<u4')]), types: numpy.ndarray[RequestTokenType] = array([], dtype=[('size', '<u4'), ('typeNameHash', '<u4')])) -> None: ...
@property
def count(self) -> int:
"""
:type: int
"""
@property
def keys(self) -> numpy.ndarray[KeyToken]:
"""
:type: numpy.ndarray[KeyToken]
"""
@property
def types(self) -> numpy.ndarray[RequestTokenType]:
"""
:type: numpy.ndarray[RequestTokenType]
"""
pass
class ResultHandle():
def __init__(self, value: int = 0) -> None: ...
@property
def value(self) -> int:
"""
:type: int
"""
@value.setter
def value(self, arg0: int) -> None:
pass
pass
class TCPAgentConfigFlags():
"""
Members:
None
WaitForConnectionsBeforeLaunch
UseRoundRobinServerScheduling
Default
"""
def __eq__(self, other: object) -> bool: ...
def __getstate__(self) -> int: ...
def __hash__(self) -> int: ...
def __index__(self) -> int: ...
def __init__(self, value: int) -> None: ...
def __int__(self) -> int: ...
def __ne__(self, other: object) -> bool: ...
def __repr__(self) -> str: ...
def __setstate__(self, state: int) -> None: ...
@property
def name(self) -> str:
"""
:type: str
"""
@property
def value(self) -> int:
"""
:type: int
"""
Default: omni.ujitso._ujitso.TCPAgentConfigFlags # value = <TCPAgentConfigFlags.UseRoundRobinServerScheduling: 2>
None: omni.ujitso._ujitso.TCPAgentConfigFlags # value = <TCPAgentConfigFlags.None: 1>
UseRoundRobinServerScheduling: omni.ujitso._ujitso.TCPAgentConfigFlags # value = <TCPAgentConfigFlags.UseRoundRobinServerScheduling: 2>
WaitForConnectionsBeforeLaunch: omni.ujitso._ujitso.TCPAgentConfigFlags # value = <TCPAgentConfigFlags.WaitForConnectionsBeforeLaunch: 4>
__members__: dict # value = {'None': <TCPAgentConfigFlags.None: 1>, 'WaitForConnectionsBeforeLaunch': <TCPAgentConfigFlags.WaitForConnectionsBeforeLaunch: 4>, 'UseRoundRobinServerScheduling': <TCPAgentConfigFlags.UseRoundRobinServerScheduling: 2>, 'Default': <TCPAgentConfigFlags.UseRoundRobinServerScheduling: 2>}
pass
class UjitsoUtils():
@staticmethod
def get_request_value_double(arg0: Agent, arg1: Request, arg2: KeyTokenEx, arg3: float) -> float: ...
@staticmethod
def get_request_value_float(arg0: Agent, arg1: Request, arg2: KeyTokenEx, arg3: float) -> float: ...
@staticmethod
def get_request_value_int(arg0: Agent, arg1: Request, arg2: KeyTokenEx, arg3: int) -> int: ...
@staticmethod
def get_request_value_int16(arg0: Agent, arg1: Request, arg2: KeyTokenEx, arg3: int) -> int: ...
@staticmethod
def get_request_value_int64(arg0: Agent, arg1: Request, arg2: KeyTokenEx, arg3: int) -> int: ...
@staticmethod
def get_request_value_key_token(arg0: Agent, arg1: Request, arg2: KeyTokenEx, arg3: KeyToken) -> KeyToken: ...
@staticmethod
def get_request_value_string(arg0: Agent, arg1: Request, arg2: KeyTokenEx, arg3: str) -> str: ...
@staticmethod
def get_request_value_uint(arg0: Agent, arg1: Request, arg2: KeyTokenEx, arg3: int) -> int: ...
@staticmethod
def get_request_value_uint16(arg0: Agent, arg1: Request, arg2: KeyTokenEx, arg3: int) -> int: ...
@staticmethod
def get_request_value_uint64(arg0: Agent, arg1: Request, arg2: KeyTokenEx, arg3: int) -> int: ...
@staticmethod
def get_request_value_uint8(arg0: Agent, arg1: Request, arg2: KeyTokenEx, arg3: int) -> int: ...
@staticmethod
def make_double_request_token_type() -> RequestTokenType: ...
@staticmethod
def make_float_request_token_type() -> RequestTokenType: ...
@staticmethod
def make_int16_request_token_type() -> RequestTokenType: ...
@staticmethod
def make_int32_request_token_type() -> RequestTokenType: ...
@staticmethod
def make_int64_request_token_type() -> RequestTokenType: ...
@staticmethod
def make_key_token_request_token_type() -> RequestTokenType: ...
@staticmethod
def make_uint16_request_token_type() -> RequestTokenType: ...
@staticmethod
def make_uint32_request_token_type() -> RequestTokenType: ...
@staticmethod
def make_uint64_request_token_type() -> RequestTokenType: ...
@staticmethod
def make_uint8_request_token_type() -> RequestTokenType: ...
@staticmethod
def make_void_request_token_type() -> RequestTokenType: ...
pass
class ValidationType():
"""
Members:
MANDATORY
DEFERRED
NONE
"""
def __eq__(self, other: object) -> bool: ...
def __getstate__(self) -> int: ...
def __hash__(self) -> int: ...
def __index__(self) -> int: ...
def __init__(self, value: int) -> None: ...
def __int__(self) -> int: ...
def __ne__(self, other: object) -> bool: ...
def __repr__(self) -> str: ...
def __setstate__(self, state: int) -> None: ...
@property
def name(self) -> str:
"""
:type: str
"""
@property
def value(self) -> int:
"""
:type: int
"""
DEFERRED: omni.ujitso._ujitso.ValidationType # value = <ValidationType.DEFERRED: 1>
MANDATORY: omni.ujitso._ujitso.ValidationType # value = <ValidationType.MANDATORY: 0>
NONE: omni.ujitso._ujitso.ValidationType # value = <ValidationType.NONE: 2>
__members__: dict # value = {'MANDATORY': <ValidationType.MANDATORY: 0>, 'DEFERRED': <ValidationType.DEFERRED: 1>, 'NONE': <ValidationType.NONE: 2>}
pass
def acquire_agent_interface(plugin_name: str = None, library_path: str = None) -> IAgent:
pass
def acquire_data_grid_interface(plugin_name: str = None, library_path: str = None) -> IDataGrid:
pass
def acquire_factory_interface(plugin_name: str = None, library_path: str = None) -> IFactory:
pass
def acquire_http_factory_interface(plugin_name: str = None, library_path: str = None) -> IHTTPFactory:
pass
def acquire_in_progress_factory_interface(plugin_name: str = None, library_path: str = None) -> IInProcessFactory:
pass
def acquire_local_data_store_interface(plugin_name: str = None, library_path: str = None) -> ILocalDataStore:
pass
def acquire_nucleus_data_store_interface(plugin_name: str = None, library_path: str = None) -> INucleusDataStore:
pass
def acquire_registry_interface(plugin_name: str = None, library_path: str = None) -> IRegistry:
pass
def acquire_service_interface(plugin_name: str = None, library_path: str = None) -> IService:
pass
def acquire_tcp_factory_interface(plugin_name: str = None, library_path: str = None) -> ITCPFactory:
pass
def release_agent_interface(arg0: IAgent) -> None:
pass
def release_data_grid_interface(arg0: IDataGrid) -> None:
pass
def release_factory_interface(arg0: IFactory) -> None:
pass
def release_http_factory_interface(arg0: IHTTPFactory) -> None:
pass
def release_in_progress_factory_interface(arg0: IInProcessFactory) -> None:
pass
def release_local_data_store_interface(arg0: ILocalDataStore) -> None:
pass
def release_nucleus_data_store_interface(arg0: INucleusDataStore) -> None:
pass
def release_registry_interface(arg0: IRegistry) -> None:
pass
def release_service_interface(arg0: IService) -> None:
pass
def release_tcp_factory_interface(arg0: ITCPFactory) -> None:
pass
Default: omni.ujitso._ujitso.AgentConfigFlags # value = <AgentConfigFlags.None: 1>
ForceRemoteTasks: omni.ujitso._ujitso.AgentConfigFlags # value = <AgentConfigFlags.ForceRemoteTasks: 2>
None: omni.ujitso._ujitso.AgentConfigFlags # value = <AgentConfigFlags.None: 1>
TIME_OUT_INFINITE = 4294967295
UseRoundRobinServerScheduling: omni.ujitso._ujitso.TCPAgentConfigFlags # value = <TCPAgentConfigFlags.UseRoundRobinServerScheduling: 2>
WaitForConnectionsBeforeLaunch: omni.ujitso._ujitso.TCPAgentConfigFlags # value = <TCPAgentConfigFlags.WaitForConnectionsBeforeLaunch: 4>
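The flag values above can be mirrored with a stdlib `IntFlag` for illustration. This is not the real pybind11 enum: the binding's `None` member is not a legal Python identifier for a stdlib enum, so it is renamed `NONE` here.

```python
from enum import IntFlag

# Illustration only: mirrors the TCPAgentConfigFlags values shown above.
class TCPAgentConfigFlags(IntFlag):
    NONE = 1  # the binding names this member "None"
    UseRoundRobinServerScheduling = 2
    WaitForConnectionsBeforeLaunch = 4

# Flags combine bitwise, as with the C++ enum they wrap.
combined = (TCPAgentConfigFlags.UseRoundRobinServerScheduling
            | TCPAgentConfigFlags.WaitForConnectionsBeforeLaunch)
assert int(combined) == 6
assert TCPAgentConfigFlags.UseRoundRobinServerScheduling in combined
```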
omniverse-code/kit/exts/omni.ujitso.python/omni/ujitso/__init__.py
from ._ujitso import *
omniverse-code/kit/exts/omni.ujitso.python/omni/ujitso/tests/test_bindings.py
# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
import numpy as np
import omni.kit.test
from omni.ujitso import *
class TestBindings(omni.kit.test.AsyncTestCase):
"""Test bindings in this extension"""
async def test_operation_result(self):
"""validate binding of OperationResult"""
self.assertEqual(len(OperationResult.__members__), 11)
self.assertEqual(int(OperationResult.SUCCESS), 0)
self.assertEqual(int(OperationResult.FAILURE), 1)
self.assertEqual(int(OperationResult.OVERFLOW_ERROR), 2)
self.assertEqual(int(OperationResult.INVALIDHANDLE_ERROR), 3)
self.assertEqual(int(OperationResult.NOPROCESSOR_ERROR), 4)
self.assertEqual(int(OperationResult.NOTFOUND_ERROR), 5)
self.assertEqual(int(OperationResult.NOTBUILT_ERROR), 6)
self.assertEqual(int(OperationResult.INVALIDMETADATA_ERROR), 7)
self.assertEqual(int(OperationResult.OUTOFMEMORY_ERROR), 8)
self.assertEqual(int(OperationResult.DATAVALIDATION_ERROR), 10)
self.assertEqual(int(OperationResult.INTERNAL), 0xffff)
async def test_validation_type(self):
"""validate binding of ValidationType"""
self.assertEqual(len(ValidationType.__members__), 3)
self.assertEqual(int(ValidationType.MANDATORY), 0)
self.assertEqual(int(ValidationType.DEFERRED), 1)
self.assertEqual(int(ValidationType.NONE), 2)
async def test_match_result(self):
"""validate binding of MatchResult"""
self.assertEqual(len(MatchResult.__members__), 4)
self.assertEqual(int(MatchResult.FAILURE), 0)
self.assertEqual(int(MatchResult.LOWEST_PRIORITY), 1)
self.assertEqual(int(MatchResult.NORMAL_PRIORITY), 1000)
self.assertEqual(int(MatchResult.HIGHEST_PRIORITY), 2000)
async def test_global_key_token(self):
"""validate binding of GlobalKeyToken"""
self.assertEqual(len(IRegistry.GlobalKeyToken.__members__), 9)
self.assertEqual(int(IRegistry.PATH), 0)
self.assertEqual(int(IRegistry.VERSION), 1)
self.assertEqual(int(IRegistry.TIME), 2)
self.assertEqual(int(IRegistry.FLAGS), 3)
self.assertEqual(int(IRegistry.PARAM0), 4)
self.assertEqual(int(IRegistry.PARAM1), 5)
self.assertEqual(int(IRegistry.PARAM2), 6)
self.assertEqual(int(IRegistry.PARAM3), 7)
self.assertEqual(int(IRegistry.CUSTOM_START), 1 << 16)
self.assertEqual(int(IRegistry.GlobalKeyToken.PATH), 0)
self.assertEqual(int(IRegistry.GlobalKeyToken.VERSION), 1)
self.assertEqual(int(IRegistry.GlobalKeyToken.TIME), 2)
self.assertEqual(int(IRegistry.GlobalKeyToken.FLAGS), 3)
self.assertEqual(int(IRegistry.GlobalKeyToken.PARAM0), 4)
self.assertEqual(int(IRegistry.GlobalKeyToken.PARAM1), 5)
self.assertEqual(int(IRegistry.GlobalKeyToken.PARAM2), 6)
self.assertEqual(int(IRegistry.GlobalKeyToken.PARAM3), 7)
self.assertEqual(int(IRegistry.GlobalKeyToken.CUSTOM_START), 1 << 16)
async def test_retrieve_flags(self):
"""validate binding of RetrieveFlags"""
self.assertEqual(len(IDataStore.RetrieveFlags.__members__), 6)
self.assertEqual(int(IDataStore.EN_RETRIEVEFLAG_NONE), 0)
self.assertEqual(int(IDataStore.EN_RETRIEVEFLAG_SYNC), 1 << 0)
self.assertEqual(int(IDataStore.EN_RETRIEVEFLAG_SIZE_ONLY), 1 << 1)
self.assertEqual(int(IDataStore.EN_RETRIEVEFLAG_EXISTENCE_ONLY), 1 << 2)
self.assertEqual(int(IDataStore.EN_RETRIEVEFLAG_LOCAL), 1 << 3)
self.assertEqual(int(IDataStore.EN_RETRIEVEFLAG_CLUSTER), 1 << 4)
self.assertEqual(int(IDataStore.RetrieveFlags.EN_RETRIEVEFLAG_NONE), 0)
self.assertEqual(int(IDataStore.RetrieveFlags.EN_RETRIEVEFLAG_SYNC), 1 << 0)
self.assertEqual(int(IDataStore.RetrieveFlags.EN_RETRIEVEFLAG_SIZE_ONLY), 1 << 1)
self.assertEqual(int(IDataStore.RetrieveFlags.EN_RETRIEVEFLAG_EXISTENCE_ONLY), 1 << 2)
self.assertEqual(int(IDataStore.RetrieveFlags.EN_RETRIEVEFLAG_LOCAL), 1 << 3)
self.assertEqual(int(IDataStore.RetrieveFlags.EN_RETRIEVEFLAG_CLUSTER), 1 << 4)
async def test_store_flags(self):
"""validate binding of StoreFlags"""
self.assertEqual(len(IDataStore.StoreFlags.__members__), 10)
self.assertEqual(int(IDataStore.EN_STOREFLAG_DEFAULT), 0)
self.assertEqual(int(IDataStore.EN_STOREFLAG_NO_LOCAL), 1 << 0)
self.assertEqual(int(IDataStore.EN_STOREFLAG_LOCAL_INCONSISTENT), 1 << 1)
self.assertEqual(int(IDataStore.EN_STOREFLAG_NO_CLUSTER), 1 << 2)
self.assertEqual(int(IDataStore.EN_STOREFLAG_CLUSTER_INCONSISTENT), 1 << 3)
self.assertEqual(int(IDataStore.EN_STOREFLAG_NO_REMOTE), 1 << 4)
self.assertEqual(int(IDataStore.EN_STOREFLAG_REMOTE_INCONSISTENT), 1 << 5)
self.assertEqual(int(IDataStore.EN_STOREFLAG_PERSISTENCE_PRIORITY_LOW), 1 << 6)
self.assertEqual(int(IDataStore.EN_STOREFLAG_PERSISTENCE_PRIORITY_MEDIUM), 2 << 6)
self.assertEqual(int(IDataStore.EN_STOREFLAG_PERSISTENCE_PRIORITY_HIGH), 3 << 6)
self.assertEqual(int(IDataStore.StoreFlags.EN_STOREFLAG_DEFAULT), 0)
self.assertEqual(int(IDataStore.StoreFlags.EN_STOREFLAG_NO_LOCAL), 1 << 0)
self.assertEqual(int(IDataStore.StoreFlags.EN_STOREFLAG_LOCAL_INCONSISTENT), 1 << 1)
self.assertEqual(int(IDataStore.StoreFlags.EN_STOREFLAG_NO_CLUSTER), 1 << 2)
self.assertEqual(int(IDataStore.StoreFlags.EN_STOREFLAG_CLUSTER_INCONSISTENT), 1 << 3)
self.assertEqual(int(IDataStore.StoreFlags.EN_STOREFLAG_NO_REMOTE), 1 << 4)
self.assertEqual(int(IDataStore.StoreFlags.EN_STOREFLAG_REMOTE_INCONSISTENT), 1 << 5)
self.assertEqual(int(IDataStore.StoreFlags.EN_STOREFLAG_PERSISTENCE_PRIORITY_LOW), 1 << 6)
self.assertEqual(int(IDataStore.StoreFlags.EN_STOREFLAG_PERSISTENCE_PRIORITY_MEDIUM), 2 << 6)
self.assertEqual(int(IDataStore.StoreFlags.EN_STOREFLAG_PERSISTENCE_PRIORITY_HIGH), 3 << 6)
async def test_request_handle(self):
"""validate binding of RequestHandle"""
# validate the default value
handle = RequestHandle()
self.assertEqual(handle.value, 0)
        # only non-negative integers are accepted
        valid_values = [1, (1 << 64) - 1, 0]
        invalid_values = [-1, -1.0, 1.0, 1 << 64]
# validate setter with valid values
for val in valid_values:
handle.value = val
self.assertEqual(handle.value, val)
# validate setter with invalid values
for val in invalid_values:
with self.assertRaises(TypeError):
handle.value = val
        # only non-negative integers are accepted
        valid_indices = [1, (1 << 32) - 1, 0]
# validate initialization with valid values
for val in valid_values:
handle = RequestHandle(val)
self.assertEqual(handle.value, val)
handle = RequestHandle(value = val)
self.assertEqual(handle.value, val)
# validate initialization with invalid values
for val in invalid_values:
with self.assertRaises(TypeError):
handle = RequestHandle(val)
with self.assertRaises(TypeError):
handle = RequestHandle(value = val)
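The boundary values used above are easy to get wrong in Python, because the shift operator binds more loosely than subtraction: an unparenthesized `1 << 64 - 1` is `1 << 63`, not the uint64 maximum.

```python
# Operator precedence pitfall around maximum unsigned values:
# "<<" binds more loosely than "-", so 1 << 64 - 1 parses as 1 << (64 - 1).
assert 1 << 64 - 1 == 1 << 63
# The intended maxima need parentheses:
assert (1 << 64) - 1 == 0xFFFFFFFFFFFFFFFF  # max uint64
assert (1 << 32) - 1 == 0xFFFFFFFF          # max uint32
```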
async def test_key_token(self):
"""validate binding of KeyToken"""
# validate the default value
token = KeyToken()
self.assertEqual(token.value, 0)
        # only non-negative integers are accepted
        valid_values = [1, (1 << 32) - 1, 0]
        invalid_values = [-1, -1.0, 1.0, 1 << 32]
# validate setter with valid values
for val in valid_values:
token.value = val
self.assertEqual(token.value, val)
# validate setter with invalid values
for val in invalid_values:
with self.assertRaises(TypeError):
token.value = val
# the value shouldn't change
self.assertEqual(token.value, valid_values[-1])
# validate initialization with valid values
for val in valid_values:
token = KeyToken(val)
self.assertEqual(token.value, val)
token = KeyToken(value = val)
self.assertEqual(token.value, val)
# validate initialization with invalid values
for val in invalid_values:
with self.assertRaises(TypeError):
token = KeyToken(val)
with self.assertRaises(TypeError):
token = KeyToken(value = val)
# validate the value of STATIC_STRING_HASH_MARKER
self.assertEqual(KeyToken.STATIC_STRING_HASH_MARKER, 0x80000000)
# can't set attribute
with self.assertRaises(AttributeError):
KeyToken.STATIC_STRING_HASH_MARKER = 0
# validate __eq__
for val in valid_values:
self.assertEqual(KeyToken(val), KeyToken(val))
self.assertNotEqual(KeyToken(val), KeyToken(2))
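The `STATIC_STRING_HASH_MARKER` checked above is the top bit of a 32-bit token value, which (judging by its name) presumably flags tokens derived from static string hashes. A small sketch of that bit arithmetic, with a hypothetical hash value:

```python
# STATIC_STRING_HASH_MARKER is the most significant bit of a 32-bit token.
MARKER = 0x80000000
assert MARKER == 1 << 31

# Hypothetical hashed-string token value (not from the real hash function).
hashed = 0x1234ABCD | MARKER
assert hashed & MARKER            # marker bit is set on the combined value
assert hashed & 0x7FFFFFFF == 0x1234ABCD  # clearing it recovers the hash bits
```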
async def test_key_token_ex(self):
"""validate binding of KeyTokenEx"""
key_token_ex = KeyTokenEx(IRegistry.GlobalKeyToken.PATH)
self.assertEqual(key_token_ex.value, IRegistry.GlobalKeyToken.PATH)
key_token_ex = KeyTokenEx(IRegistry.PATH)
self.assertEqual(key_token_ex.value, IRegistry.PATH)
key_token_ex = KeyTokenEx("")
key_token_ex = KeyTokenEx("test")
key_token_ex = KeyTokenEx(KeyToken())
self.assertEqual(key_token_ex.value, 0)
        valid_values = [0, 1, (1 << 32) - 1]
for val in valid_values:
key_token_ex_a = KeyTokenEx(KeyToken(val))
self.assertEqual(key_token_ex_a.value, val)
key_token_ex_b = KeyTokenEx(KeyToken())
key_token_ex_b.value = val
self.assertEqual(key_token_ex_b.value, val)
self.assertTrue(key_token_ex_a == key_token_ex_b)
self.assertTrue(key_token_ex_b == key_token_ex_a)
self.assertFalse(key_token_ex_a != key_token_ex_b)
self.assertFalse(key_token_ex_b != key_token_ex_a)
key_token = KeyToken(val)
self.assertTrue(key_token_ex_a == key_token)
self.assertTrue(key_token == key_token_ex_a)
self.assertFalse(key_token_ex_a != key_token)
self.assertFalse(key_token != key_token_ex_a)
key_token_100 = KeyToken(100)
self.assertFalse(key_token_ex_a == key_token_100)
self.assertFalse(key_token_100 == key_token_ex_a)
self.assertTrue(key_token_ex_a != key_token_100)
self.assertTrue(key_token_100 != key_token_ex_a)
key_token_ex_100 = KeyTokenEx(key_token_100)
self.assertFalse(key_token_ex_a == key_token_ex_100)
self.assertFalse(key_token_ex_100 == key_token_ex_a)
self.assertTrue(key_token_ex_a != key_token_ex_100)
self.assertTrue(key_token_ex_100 != key_token_ex_a)
with self.assertRaises(TypeError):
key_token_ex = KeyTokenEx()
with self.assertRaises(TypeError):
key_token_ex = KeyTokenEx(0)
# This following line will end up calling C++ KeyTokenEx::KeyTokenEx(const char* str) with `str` being nullptr.
# It will lead to crash but it is a problem of the C++ implementation rather than the Python binding code.
#
# key_token_ex = KeyTokenEx(None)
async def test_dynamic_request(self):
"""validate binding of DynamicRequest"""
# validate the default constructor
dynamic_request = DynamicRequest()
self.assertEqual(dynamic_request.size(), 0)
request = dynamic_request.get_request()
self.assertEqual(request.type.count, 0)
self.assertEqual(request.type.keys.tolist(), [])
self.assertEqual(request.type.types.tolist(), [])
self.assertEqual(request.tokenValues, b'')
# validate addition of different data types
buf = [12, 13, 14]
buf_token_type = UjitsoUtils.make_uint8_request_token_type()
buf_token_type.size = len(buf)
key_value_pairs = [
(KeyTokenEx("void"),),
(KeyTokenEx("key token"), KeyToken(1)),
(KeyTokenEx("8-bit unsigned interger"), 2),
(KeyTokenEx("16-bit signed integer"), 3),
(KeyTokenEx("16-bit unsigned integer"), 4),
(KeyTokenEx("32-bit signed integer"), 5),
(KeyTokenEx("32-bit unsigned integer"), 6),
(KeyTokenEx("64-bit signed integer"), 7),
(KeyTokenEx("64-bit unsigned integer"), 8),
(KeyTokenEx("double precision floating point"), 9.0),
(KeyTokenEx("single precision floating point"), 10.0),
(KeyTokenEx("string"), "11"),
(KeyTokenEx("buffer"), buf_token_type, buf)]
token_type_size_list = [0, 4, 1, 2, 2, 4, 4, 8, 8, 8, 4, 3, len(buf)]
self.assertEqual(len(key_value_pairs), len(token_type_size_list))
dynamic_request.reserve(len(key_value_pairs))
counter = 0
index = dynamic_request.add(*(key_value_pairs[counter]))
self.assertEqual(index, counter)
self.assertEqual(dynamic_request.size(), counter + 1)
counter += 1
index = dynamic_request.add_key_token(*(key_value_pairs[counter]))
self.assertEqual(index, counter)
self.assertEqual(dynamic_request.size(), counter + 1)
counter += 1
index = dynamic_request.add_uint8(*(key_value_pairs[counter]))
self.assertEqual(index, counter)
self.assertEqual(dynamic_request.size(), counter + 1)
counter += 1
index = dynamic_request.add_int16(*(key_value_pairs[counter]))
self.assertEqual(index, counter)
self.assertEqual(dynamic_request.size(), counter + 1)
counter += 1
index = dynamic_request.add_uint16(*(key_value_pairs[counter]))
self.assertEqual(index, counter)
self.assertEqual(dynamic_request.size(), counter + 1)
counter += 1
index = dynamic_request.add_int(*(key_value_pairs[counter]))
self.assertEqual(index, counter)
self.assertEqual(dynamic_request.size(), counter + 1)
counter += 1
index = dynamic_request.add_uint(*(key_value_pairs[counter]))
self.assertEqual(index, counter)
self.assertEqual(dynamic_request.size(), counter + 1)
counter += 1
index = dynamic_request.add_int64(*(key_value_pairs[counter]))
self.assertEqual(index, counter)
self.assertEqual(dynamic_request.size(), counter + 1)
counter += 1
index = dynamic_request.add_uint64(*(key_value_pairs[counter]))
self.assertEqual(index, counter)
self.assertEqual(dynamic_request.size(), counter + 1)
counter += 1
index = dynamic_request.add_double(*(key_value_pairs[counter]))
self.assertEqual(index, counter)
self.assertEqual(dynamic_request.size(), counter + 1)
counter += 1
index = dynamic_request.add_float(*(key_value_pairs[counter]))
self.assertEqual(index, counter)
self.assertEqual(dynamic_request.size(), counter + 1)
counter += 1
index = dynamic_request.add_string(*(key_value_pairs[counter]))
self.assertEqual(index, counter)
self.assertEqual(dynamic_request.size(), counter + 1)
counter += 1
index = dynamic_request.add_buffer(*(key_value_pairs[counter]))
self.assertEqual(index, counter)
self.assertEqual(dynamic_request.size(), counter + 1)
for i in range(dynamic_request.size()):
key_value_pair = key_value_pairs[i]
token_type_size = token_type_size_list[i]
key = key_value_pair[0]
found, request_token_type = dynamic_request.find_key(key)
self.assertTrue(found)
self.assertEqual(request_token_type.size, token_type_size)
self.assertEqual(dynamic_request.get_key(i), key)
self.assertEqual(dynamic_request.get_type(i), request_token_type)
# validate reading of existent requests through matching template.
counter = 1
found, value = dynamic_request.get_as_key_token(key_value_pairs[counter][0])
self.assertTrue(found)
self.assertEqual(value, key_value_pairs[counter][1])
counter += 1
found, value = dynamic_request.get_as_uint8(key_value_pairs[counter][0])
self.assertTrue(found)
self.assertEqual(value, key_value_pairs[counter][1])
counter += 1
found, value = dynamic_request.get_as_int16(key_value_pairs[counter][0])
self.assertTrue(found)
self.assertEqual(value, key_value_pairs[counter][1])
counter += 1
found, value = dynamic_request.get_as_uint16(key_value_pairs[counter][0])
self.assertTrue(found)
self.assertEqual(value, key_value_pairs[counter][1])
counter += 1
found, value = dynamic_request.get_as_int32(key_value_pairs[counter][0])
self.assertTrue(found)
self.assertEqual(value, key_value_pairs[counter][1])
counter += 1
found, value = dynamic_request.get_as_uint32(key_value_pairs[counter][0])
self.assertTrue(found)
self.assertEqual(value, key_value_pairs[counter][1])
counter += 1
found, value = dynamic_request.get_as_int64(key_value_pairs[counter][0])
self.assertTrue(found)
self.assertEqual(value, key_value_pairs[counter][1])
counter += 1
found, value = dynamic_request.get_as_uint64(key_value_pairs[counter][0])
self.assertTrue(found)
self.assertEqual(value, key_value_pairs[counter][1])
counter += 1
found, value = dynamic_request.get_as_double(key_value_pairs[counter][0])
self.assertTrue(found)
self.assertEqual(value, key_value_pairs[counter][1])
counter += 1
found, value = dynamic_request.get_as_float(key_value_pairs[counter][0])
self.assertTrue(found)
self.assertEqual(value, key_value_pairs[counter][1])
counter += 1
found, value = dynamic_request.get_as_string(key_value_pairs[counter][0])
self.assertTrue(found)
self.assertEqual(value, key_value_pairs[counter][1])
# validate reading of an existent request through mismatching template.
found, value = dynamic_request.get_as_uint8(key_value_pairs[counter][0])
self.assertFalse(found)
self.assertEqual(value, 0)
found, value = dynamic_request.get_as_uint8(key_value_pairs[0][0])
self.assertFalse(found)
self.assertEqual(value, 0)
        # validate reading of a nonexistent request.
nonexistent_key_token = KeyTokenEx("nonexistent")
found, value = dynamic_request.get_as_key_token(nonexistent_key_token)
self.assertFalse(found)
self.assertEqual(value, KeyToken())
found, value = dynamic_request.get_as_uint8(nonexistent_key_token)
self.assertFalse(found)
self.assertEqual(value, 0)
found, value = dynamic_request.get_as_int16(nonexistent_key_token)
self.assertFalse(found)
self.assertEqual(value, 0)
found, value = dynamic_request.get_as_uint16(nonexistent_key_token)
self.assertFalse(found)
self.assertEqual(value, 0)
found, value = dynamic_request.get_as_int32(nonexistent_key_token)
self.assertFalse(found)
self.assertEqual(value, 0)
found, value = dynamic_request.get_as_uint32(nonexistent_key_token)
self.assertFalse(found)
self.assertEqual(value, 0)
found, value = dynamic_request.get_as_int64(nonexistent_key_token)
self.assertFalse(found)
self.assertEqual(value, 0)
found, value = dynamic_request.get_as_uint64(nonexistent_key_token)
self.assertFalse(found)
self.assertEqual(value, 0)
found, value = dynamic_request.get_as_double(nonexistent_key_token)
self.assertFalse(found)
self.assertEqual(value, 0.0)
found, value = dynamic_request.get_as_float(nonexistent_key_token)
self.assertFalse(found)
self.assertEqual(value, 0.0)
found, value = dynamic_request.get_as_string(nonexistent_key_token)
self.assertFalse(found)
self.assertEqual(value, None)
# validate the constructor with arguments
new_dynamic_request = DynamicRequest(Request())
self.assertEqual(new_dynamic_request.size(), 0)
request = new_dynamic_request.get_request()
self.assertEqual(request.type.count, 0)
self.assertEqual(request.type.keys.tolist(), [])
self.assertEqual(request.type.types.tolist(), [])
self.assertEqual(request.tokenValues, b'')
# validate copying of nonexistent requests between DynamicRequest instances
found = new_dynamic_request.copy(nonexistent_key_token, dynamic_request)
self.assertFalse(found)
self.assertEqual(new_dynamic_request.size(), 0)
# validate copying of existent requests between DynamicRequest instances
existent_key_token = KeyTokenEx("8-bit unsigned interger")
found = new_dynamic_request.copy(existent_key_token, dynamic_request)
self.assertTrue(found)
self.assertEqual(new_dynamic_request.size(), 1)
# validate key replacement of a nonexistent request
new_key_token = KeyTokenEx("unsigned char")
found = new_dynamic_request.replace_key(nonexistent_key_token, new_key_token)
self.assertFalse(found)
# validate key replacement of an existent request
found = new_dynamic_request.replace_key(existent_key_token, new_key_token)
self.assertTrue(found)
found, value = new_dynamic_request.get_as_uint8(new_key_token)
self.assertTrue(found)
self.assertEqual(value, 2)
# validate value replacement
found = new_dynamic_request.replace_value_uint8(new_key_token, 100)
self.assertTrue(found)
# validate removal of a nonexistent request
found = new_dynamic_request.remove_key(nonexistent_key_token)
self.assertFalse(found)
self.assertEqual(new_dynamic_request.size(), 1)
# validate removal of an existent request
found = new_dynamic_request.remove_key(new_key_token)
self.assertTrue(found)
self.assertEqual(new_dynamic_request.size(), 0)
async def test_request_token_type(self):
"""validate binding of RequestTokenType"""
# validate the default value
request_token_type = RequestTokenType()
self.assertEqual(request_token_type.size, 0)
self.assertEqual(request_token_type.typeNameHash, 0)
        # only non-negative integers are accepted
        valid_values = [1, (1 << 32) - 1, 0]
        invalid_values = [-1, -1.0, 1.0, 1 << 32]
# validate setter with valid values
for val in valid_values:
request_token_type.size = val
self.assertEqual(request_token_type.size, val)
request_token_type.typeNameHash = val
self.assertEqual(request_token_type.typeNameHash, val)
# validate setter with invalid values
for val in invalid_values:
with self.assertRaises(TypeError):
request_token_type.size = val
with self.assertRaises(TypeError):
request_token_type.typeNameHash = val
# the value shouldn't change
self.assertEqual(request_token_type.size, valid_values[-1])
self.assertEqual(request_token_type.typeNameHash, valid_values[-1])
# validate initialization with valid values
for val in valid_values:
request_token_type = RequestTokenType(val, val)
self.assertEqual(request_token_type.size, val)
self.assertEqual(request_token_type.typeNameHash, val)
request_token_type = RequestTokenType(size = val, type_name_hash= val)
self.assertEqual(request_token_type.size, val)
self.assertEqual(request_token_type.typeNameHash, val)
# validate initialization with invalid values
for val in invalid_values:
with self.assertRaises(TypeError):
request_token_type = KeyToken(val, val)
with self.assertRaises(TypeError):
request_token_type = KeyToken(value = val, type_name_hash = val)
async def test_make_request_token_type(self):
"""validate bindings of template function makeRequestTokenType()"""
request_token_type = UjitsoUtils.make_key_token_request_token_type()
self.assertEqual(request_token_type.size, 4)
self.assertNotEqual(request_token_type.typeNameHash, 0)
request_token_type = UjitsoUtils.make_uint8_request_token_type()
self.assertEqual(request_token_type.size, 1)
self.assertNotEqual(request_token_type.typeNameHash, 0)
request_token_type = UjitsoUtils.make_int16_request_token_type()
self.assertEqual(request_token_type.size, 2)
self.assertNotEqual(request_token_type.typeNameHash, 0)
request_token_type = UjitsoUtils.make_uint16_request_token_type()
self.assertEqual(request_token_type.size, 2)
self.assertNotEqual(request_token_type.typeNameHash, 0)
request_token_type = UjitsoUtils.make_int32_request_token_type()
self.assertEqual(request_token_type.size, 4)
self.assertNotEqual(request_token_type.typeNameHash, 0)
request_token_type = UjitsoUtils.make_uint32_request_token_type()
self.assertEqual(request_token_type.size, 4)
self.assertNotEqual(request_token_type.typeNameHash, 0)
request_token_type = UjitsoUtils.make_int64_request_token_type()
self.assertEqual(request_token_type.size, 8)
self.assertNotEqual(request_token_type.typeNameHash, 0)
request_token_type = UjitsoUtils.make_uint64_request_token_type()
self.assertEqual(request_token_type.size, 8)
self.assertNotEqual(request_token_type.typeNameHash, 0)
request_token_type = UjitsoUtils.make_double_request_token_type()
self.assertEqual(request_token_type.size, 8)
self.assertNotEqual(request_token_type.typeNameHash, 0)
request_token_type = UjitsoUtils.make_float_request_token_type()
self.assertEqual(request_token_type.size, 4)
self.assertNotEqual(request_token_type.typeNameHash, 0)
request_token_type = UjitsoUtils.make_void_request_token_type()
self.assertEqual(request_token_type.size, 0)
self.assertEqual(request_token_type.typeNameHash, 0)
async def test_get_request_value(self):
"""validate bindings of template function getRequestValue()"""
# validate whether these functions are available in UjitsoUtils.
self.assertTrue(hasattr(UjitsoUtils, "get_request_value_key_token"))
self.assertTrue(hasattr(UjitsoUtils, "get_request_value_uint8"))
self.assertTrue(hasattr(UjitsoUtils, "get_request_value_int16"))
self.assertTrue(hasattr(UjitsoUtils, "get_request_value_uint16"))
self.assertTrue(hasattr(UjitsoUtils, "get_request_value_int"))
self.assertTrue(hasattr(UjitsoUtils, "get_request_value_uint"))
self.assertTrue(hasattr(UjitsoUtils, "get_request_value_int64"))
self.assertTrue(hasattr(UjitsoUtils, "get_request_value_uint64"))
self.assertTrue(hasattr(UjitsoUtils, "get_request_value_double"))
self.assertTrue(hasattr(UjitsoUtils, "get_request_value_float"))
self.assertTrue(hasattr(UjitsoUtils, "get_request_value_string"))
async def test_request_type(self):
"""validate binding of RequestType"""
# validate the default constructor
request_type = RequestType()
self.assertEqual(request_type.count, 0)
self.assertEqual(request_type.keys.tolist(), [])
self.assertEqual(request_type.types.tolist(), [])
# can't set attribute
with self.assertRaises(AttributeError):
request_type.count = 1
with self.assertRaises(AttributeError):
request_type.keys = []
with self.assertRaises(AttributeError):
request_type.types = []
valid_args_list = [
([], []),
([(0,), (1,), (2,)], [(1, 2), (11, 12), (21, 22)]),
]
for keys, types in valid_args_list:
count = len(keys)
self.assertEqual(count, len(types))
request_type = RequestType(keys, types)
self.assertEqual(request_type.count, count)
self.assertEqual(request_type.keys.tolist(), keys)
self.assertEqual(request_type.types.tolist(), types)
# The array size of keys and types doesn't match
with self.assertRaises(ValueError):
request_type = RequestType([(0,), (1,), (2,)], [])
async def test_request_filter(self):
"""validate binding of RequestFilter"""
# validate the default constructor
request_filter = RequestFilter()
self.assertEqual(request_filter.count, 0)
self.assertEqual(request_filter.keys.tolist(), [])
# can't set attribute
with self.assertRaises(AttributeError):
request_filter.count = 1
with self.assertRaises(AttributeError):
request_filter.keys = []
valid_args_list = [
[],
[(0,), (1,), (2,)],
]
for keys in valid_args_list:
request_filter = RequestFilter(keys)
self.assertEqual(request_filter.count, len(keys))
self.assertEqual(request_filter.keys.tolist(), keys)
async def test_request(self):
"""validate binding of Request"""
# validate the default constructor
request = Request()
self.assertEqual(request.type.count, 0)
self.assertEqual(request.type.keys.tolist(), [])
self.assertEqual(request.type.types.tolist(), [])
self.assertEqual(request.tokenValues, b'')
# can't set attribute
with self.assertRaises(AttributeError):
request.type = RequestType()
with self.assertRaises(AttributeError):
request.tokenValues = b''
request = Request(RequestType(), b'')
self.assertEqual(request.type.count, 0)
self.assertEqual(request.type.keys.tolist(), [])
self.assertEqual(request.type.types.tolist(), [])
self.assertEqual(request.tokenValues, b'')
# validate non-default arguments
keys = [(0,), (1,), (2,)]
types = [(1, 2), (2, 12), (3, 22)]
request_type = RequestType(keys, types)
token_values = b'\x01\x02\x03\x04\x05\x06'
request_size = sum([t[0] for t in types])
self.assertEqual(request_size, len(token_values))
request = Request(request_type, token_values)
self.assertEqual(request.type.count, 3)
self.assertEqual(request.type.keys.tolist(), keys)
self.assertEqual(request.type.types.tolist(), types)
self.assertEqual(request.tokenValues, token_values)
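The size check in the test above is consistent with `tokenValues` being a back-to-back packing: each `(size, typeNameHash)` pair in `types` consumes `size` bytes, in declaration order. A self-contained sketch of that layout (an assumption about the packing, not taken from the binding's documentation):

```python
# Same fixture data as the test above.
keys = [(0,), (1,), (2,)]
types = [(1, 2), (2, 12), (3, 22)]  # (size, typeNameHash) per key
token_values = b'\x01\x02\x03\x04\x05\x06'

# Walk the byte string, taking `size` bytes per key in order.
offset = 0
slices = {}
for key, (size, _hash) in zip(keys, types):
    slices[key] = token_values[offset:offset + size]
    offset += size

assert offset == len(token_values)     # sizes account for every byte
assert slices[(0,)] == b'\x01'
assert slices[(1,)] == b'\x02\x03'
assert slices[(2,)] == b'\x04\x05\x06'
```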
async def test_agent(self):
"""validate binding of Agent"""
with self.assertRaises(TypeError):
agent = Agent()
async def test_request_callback_data(self):
"""validate binding of RequestCallbackData"""
# validate the default constructor
request_callback_data = RequestCallbackData()
self.assertEqual(request_callback_data.callbackContext0, None)
self.assertEqual(request_callback_data.callbackContext1, None)
# can't read and write the callback attribute
with self.assertRaises(AttributeError):
callback = request_callback_data.callback
with self.assertRaises(AttributeError):
request_callback_data.callback = None
class _Context:
def __init__(self, name):
self.name = name
def print_name(self):
print(self.name)
def _callback(callback_context0, callback_context1, request_handle, request_index, operation_result):
callback_context0.print_name()
callback_context1.print_name()
context0 = _Context("context0")
context1 = _Context("context1")
request_callback_data = RequestCallbackData(_callback, context0, context1)
self.assertEqual(request_callback_data.callbackContext0, context0)
self.assertEqual(request_callback_data.callbackContext1, context1)
async def test_external_storage(self):
"""validate binding of ExternalStorage"""
values = list(range(4))
external_storage = ExternalStorage(values)
self.assertEqual(external_storage.values.tolist(), values)
external_storage = ExternalStorage(tuple(values))
self.assertEqual(external_storage.values.tolist(), values)
values = list(range(100, 104))
external_storage.values = values
self.assertEqual(external_storage.values.tolist(), values)
external_storage.values = tuple(values)
self.assertEqual(external_storage.values.tolist(), values)
with self.assertRaises(TypeError):
external_storage.values = None
with self.assertRaises(ValueError):
external_storage.values = list(range(3))
with self.assertRaises(ValueError):
external_storage.values = list(range(5))
with self.assertRaises(TypeError):
external_storage = ExternalStorage()
with self.assertRaises(TypeError):
external_storage = ExternalStorage(*values)
with self.assertRaises(ValueError):
external_storage = ExternalStorage(list(range(3)))
with self.assertRaises(ValueError):
external_storage = ExternalStorage(list(range(5)))
async def test_service_interface(self):
"""validate binding of IService"""
service_interface = acquire_service_interface()
self.assertIsNotNone(service_interface)
release_service_interface(service_interface)
async def test_local_data_store_interface(self):
""" validate binding of ILocalDataStore"""
local_data_store_interface = acquire_local_data_store_interface()
self.assertIsNotNone(local_data_store_interface)
data_store = local_data_store_interface.create("test", 1024)
self.assertIsNotNone(data_store)
local_data_store_interface.destroy(data_store)
release_local_data_store_interface(local_data_store_interface)
async def test_nucleus_data_store_interface(self):
""" validate binding of INucleusDataStore"""
nucleus_data_store_interface = acquire_nucleus_data_store_interface()
self.assertIsNotNone(nucleus_data_store_interface)
data_store = nucleus_data_store_interface.create("test", "test", False)
self.assertIsNotNone(data_store)
nucleus_data_store_interface.destroy(data_store)
release_nucleus_data_store_interface(nucleus_data_store_interface)
async def test_processor_information(self):
""" validate binding of ProcessorInformation"""
valid_args_list = [(None, 0, 0), ("", 0, 0), ("test1", 0, 0), ("test2", 1, 1)]
for name, version, remote_execution_batch_hint in valid_args_list:
processor_information = ProcessorInformation(name, version, remote_execution_batch_hint)
self.assertEqual(processor_information.name, name)
self.assertEqual(processor_information.version, version)
self.assertEqual(processor_information.remoteExecutionBatchHint, remote_execution_batch_hint)
with self.assertRaises(TypeError):
processor_information = ProcessorInformation()
invalid_args_list = [("test1", -1, 0), ("test2", 0, -1), ("test3", 0.5, 0), ("test4", 0, 0.5)]
for name, version, remote_execution_batch_hint in invalid_args_list:
with self.assertRaises(TypeError):
processor_information = ProcessorInformation(name, version, remote_execution_batch_hint)
processor_information = ProcessorInformation(None, 0, 0)
for name, version, remote_execution_batch_hint in valid_args_list:
processor_information.name = name
processor_information.version = version
processor_information.remoteExecutionBatchHint = remote_execution_batch_hint
self.assertEqual(processor_information.name, name)
self.assertEqual(processor_information.version, version)
self.assertEqual(processor_information.remoteExecutionBatchHint, remote_execution_batch_hint)
processor_information = ProcessorInformation(None, 0, 0)
for name, version, remote_execution_batch_hint in invalid_args_list:
with self.assertRaises(TypeError):
processor_information.name = name
processor_information.version = version
processor_information.remoteExecutionBatchHint = remote_execution_batch_hint
async def test_processor(self):
""" validate binding of Processor"""
processor = Processor(None, None, None, None)
        # It is fine to initialize with functions whose signatures don't match; an exception is raised later, when these callbacks are triggered.
        processor = Processor(lambda: None, lambda: None, lambda: None, lambda: None)
with self.assertRaises(TypeError):
processor = Processor()
with self.assertRaises(TypeError):
processor = Processor(None, None, None)
with self.assertRaises(TypeError):
processor = Processor("", "", "", "")
with self.assertRaises(TypeError):
processor = Processor(0, 0, 0, 0)
async def test_match_context(self):
""" validate binding of MatchContext"""
with self.assertRaises(TypeError):
match_context = MatchContext()
async def test_dependency_context(self):
""" validate binding of DependencyContext"""
with self.assertRaises(TypeError):
dependency_context = DependencyContext()
async def test_build_context(self):
""" validate binding of BuildContext"""
with self.assertRaises(TypeError):
build_context = BuildContext()
async def test_dependency_handle(self):
"""validate binding of DependencyHandle"""
# validate the default value
handle = DependencyHandle()
self.assertEqual(handle.value, 0)
        # only non-negative integers that fit in an unsigned 64-bit value are accepted
        # (note the parentheses: 1<<64 - 1 would parse as 1 << 63 due to operator precedence)
        valid_values = [1, (1 << 64) - 1, 0]
        invalid_values = [-1, -1.0, 1.0, 1 << 64]
# validate setter with valid values
for val in valid_values:
handle.value = val
self.assertEqual(handle.value, val)
# validate setter with invalid values
for val in invalid_values:
with self.assertRaises(TypeError):
handle.value = val
# the value shouldn't change
self.assertEqual(handle.value, valid_values[-1])
# validate initialization with valid values
for val in valid_values:
handle = DependencyHandle(val)
self.assertEqual(handle.value, val)
            handle = DependencyHandle(value=val)
self.assertEqual(handle.value, val)
# validate initialization with invalid values
for val in invalid_values:
with self.assertRaises(TypeError):
handle = DependencyHandle(val)
with self.assertRaises(TypeError):
                handle = DependencyHandle(value=val)
async def test_build_handle(self):
"""validate binding of BuildHandle"""
# validate the default value
handle = BuildHandle()
self.assertEqual(handle.value, 0)
        # only non-negative integers that fit in an unsigned 64-bit value are accepted
        # (note the parentheses: 1<<64 - 1 would parse as 1 << 63 due to operator precedence)
        valid_values = [1, (1 << 64) - 1, 0]
        invalid_values = [-1, -1.0, 1.0, 1 << 64]
# validate setter with valid values
for val in valid_values:
handle.value = val
self.assertEqual(handle.value, val)
# validate setter with invalid values
for val in invalid_values:
with self.assertRaises(TypeError):
handle.value = val
# the value shouldn't change
self.assertEqual(handle.value, valid_values[-1])
# validate initialization with valid values
for val in valid_values:
handle = BuildHandle(val)
self.assertEqual(handle.value, val)
            handle = BuildHandle(value=val)
self.assertEqual(handle.value, val)
# validate initialization with invalid values
for val in invalid_values:
with self.assertRaises(TypeError):
handle = BuildHandle(val)
with self.assertRaises(TypeError):
                handle = BuildHandle(value=val)
async def test_dependency_job(self):
""" validate binding of DependencyJob"""
# validate default values
dependency_job = DependencyJob()
request = dependency_job.request
self.assertEqual(request.type.count, 0)
self.assertEqual(request.type.keys.tolist(), [])
self.assertEqual(request.type.types.tolist(), [])
self.assertEqual(request.tokenValues, b'')
keys = [(0,), (1,), (2,)]
types = [(1, 2), (2, 12), (3, 22)]
request_type = RequestType(keys, types)
token_values = b'\x11\x12\x13\x14\x15\x16'
request_size = sum([t[0] for t in types])
self.assertEqual(request_size, len(token_values))
# validate setter and getter
dependency_job.request = Request(request_type, token_values)
request = dependency_job.request
self.assertEqual(request.type.count, len(keys))
self.assertEqual(request.type.keys.tolist(), keys)
self.assertEqual(request.type.types.tolist(), types)
self.assertEqual(request.tokenValues, token_values)
# validate resetting of requests
dependency_job.request = Request()
request = dependency_job.request
self.assertEqual(request.type.count, 0)
self.assertEqual(request.type.keys.tolist(), [])
self.assertEqual(request.type.types.tolist(), [])
self.assertEqual(request.tokenValues, b'')
with self.assertRaises(TypeError):
dependency_job.request = None
async def test_build_job(self):
""" validate binding of BuildJob"""
# validate default values
build_job = BuildJob()
request = build_job.request
self.assertEqual(request.type.count, 0)
self.assertEqual(request.type.keys.tolist(), [])
self.assertEqual(request.type.types.tolist(), [])
self.assertEqual(request.tokenValues, b'')
keys = [(0,), (1,), (2,)]
types = [(1, 2), (2, 12), (3, 22)]
request_type = RequestType(keys, types)
token_values = b'\x11\x12\x13\x14\x15\x16'
request_size = sum([t[0] for t in types])
self.assertEqual(request_size, len(token_values))
# validate setter and getter
request = Request(request_type, token_values)
build_job.request = request
request = build_job.request
self.assertEqual(request.type.count, len(keys))
self.assertEqual(request.type.keys.tolist(), keys)
self.assertEqual(request.type.types.tolist(), types)
self.assertEqual(request.tokenValues, token_values)
# validate resetting of requests
build_job.request = Request()
request = build_job.request
self.assertEqual(request.type.count, 0)
self.assertEqual(request.type.keys.tolist(), [])
self.assertEqual(request.type.types.tolist(), [])
self.assertEqual(request.tokenValues, b'')
with self.assertRaises(TypeError):
build_job.request = None
async def test_data_grid(self):
""" validate binding of DataGrid"""
with self.assertRaises(TypeError):
data_grid = DataGrid()
async def test_data_grid_interface(self):
"""validate binding of IDataGrid"""
data_grid_interface = acquire_data_grid_interface()
self.assertIsNotNone(data_grid_interface)
data_grid = data_grid_interface.create_data_grid()
self.assertIsNotNone(data_grid)
self.assertIsNotNone(data_grid.iface)
# can't set attribute
with self.assertRaises(AttributeError):
data_grid.iface = None
data_grid_interface.destroy_data_grid(data_grid)
release_data_grid_interface(data_grid_interface)
async def test_in_progress_factory_interface(self):
""" validate binding of IInProgressFactory"""
factory_interface = acquire_in_progress_factory_interface()
self.assertIsNotNone(factory_interface)
task_agent = factory_interface.create_agent()
self.assertIsNotNone(task_agent)
task_service = factory_interface.get_service(task_agent)
self.assertIsNotNone(task_service)
task_service.destroy()
task_agent.destroy()
release_in_progress_factory_interface(factory_interface)
async def test_tcp_factory_interface(self):
""" validate binding of ITCPFactory"""
factory_interface = acquire_tcp_factory_interface()
self.assertIsNotNone(factory_interface)
port = 1113
task_service = factory_interface.create_service(port)
self.assertIsNotNone(task_service)
address_and_port = ("127.0.0.1", port)
addresses = [address_and_port]
task_agent = factory_interface.create_agent(addresses)
self.assertIsNotNone(task_agent)
service_ip = factory_interface.get_service_ip(task_service)
self.assertIsNotNone(service_ip)
self.assertTrue(isinstance(service_ip, str))
task_service.destroy()
task_agent.destroy()
release_tcp_factory_interface(factory_interface)
async def test_http_factory_interface(self):
""" validate binding of IHTTPFactory"""
factory_interface = acquire_http_factory_interface()
self.assertIsNotNone(factory_interface)
task_agent = factory_interface.create_agent("test")
self.assertIsNotNone(task_agent)
task_service = factory_interface.create_service()
self.assertIsNotNone(task_service)
result = factory_interface.run_http_jobs(task_service, "desc", "store_path")
self.assertIsNotNone(result)
self.assertTrue(isinstance(result, str))
task_service.destroy()
task_agent.destroy()
release_http_factory_interface(factory_interface)
omniverse-code/kit/exts/omni.ujitso.python/omni/ujitso/tests/__init__.py

scan_for_test_modules = True
"""The presence of this object causes the test runner to automatically scan the directory for unit test cases"""
omniverse-code/kit/exts/omni.ujitso.python/omni/ujitso/tests/test_UJITSO.py

# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
import numpy as np
from omni.kit.test import AsyncTestCase
from omni.ujitso import *
kTokenTypeUSDPath = KeyToken(11)
kTokenTypeRTXPackedMeshLODs = KeyToken(12)
kTokenTypeRTXPackedMeshLOD = KeyToken(13)
kTokenTypeMeshLOD = KeyToken(14)
kTokenTypeTriangulatedMesh = KeyToken(15)
kTokenTypeMesh = KeyToken(16)
kTokenMeshReduceFactor = KeyToken(17)
class ProcessorImpl(Processor):
def __init__(self, agent, key_token, get_info_impl, match_impl, gather_dependencies_impl, build_impl):
super().__init__(get_info_impl, match_impl, gather_dependencies_impl, build_impl)
self._agent = agent
request_filter = RequestFilter([(key_token.value,)])
agent.registry.register_processor(self, request_filter)
def __del__(self):
self._agent.registry.unregister_processor(self)
@staticmethod
def _match_impl(match_context, request_array):
return OperationResult.SUCCESS, MatchResult.NORMAL_PRIORITY
@staticmethod
def _build_impl(build_context, build_job, build_handle):
return OperationResult.SUCCESS
class ProcessorRTXPackedMeshLODs(ProcessorImpl):
def __init__(self, agent, key_token):
super().__init__(agent, key_token, ProcessorRTXPackedMeshLODs._get_info_impl, ProcessorRTXPackedMeshLODs._match_impl, ProcessorRTXPackedMeshLODs._gather_dependencies_impl, ProcessorRTXPackedMeshLODs._build_impl)
@staticmethod
def _get_info_impl(processor):
return OperationResult.SUCCESS, ProcessorInformation("ProcessorRTXPackedMeshLODs", 1, 1)
@staticmethod
def _gather_dependencies_impl(dependency_context, dependency_job, dependency_handle):
dynamic_request = DynamicRequest(dependency_job.request)
dynamic_request.replace_key(KeyTokenEx(kTokenTypeRTXPackedMeshLODs), KeyTokenEx(kTokenTypeRTXPackedMeshLOD))
dynamic_request.add_float(KeyTokenEx(kTokenMeshReduceFactor), 1.0)
dependency_context.agent.service.add_dependency(dependency_context.agent, dependency_handle, dynamic_request.get_request())
dynamic_request.replace_value_float(KeyTokenEx(kTokenMeshReduceFactor), 0.3)
dependency_context.agent.service.add_dependency(dependency_context.agent, dependency_handle, dynamic_request.get_request())
dynamic_request.replace_value_float(KeyTokenEx(kTokenMeshReduceFactor), 0.1)
dependency_context.agent.service.add_dependency(dependency_context.agent, dependency_handle, dynamic_request.get_request())
return OperationResult.SUCCESS
class ProcessorRTXPackedMeshLOD(ProcessorImpl):
def __init__(self, agent, key_token):
super().__init__(agent, key_token, ProcessorRTXPackedMeshLOD._get_info_impl, ProcessorRTXPackedMeshLOD._match_impl, ProcessorRTXPackedMeshLOD._gather_dependencies_impl, ProcessorRTXPackedMeshLOD._build_impl)
@staticmethod
def _get_info_impl(processor):
return OperationResult.SUCCESS, ProcessorInformation("ProcessorRTXPackedMeshLOD", 1, 1)
@staticmethod
def _gather_dependencies_impl(dependency_context, dependency_job, dependency_handle):
dynamic_request = DynamicRequest(dependency_job.request)
dynamic_request.replace_key(KeyTokenEx(kTokenTypeRTXPackedMeshLOD), KeyTokenEx(kTokenTypeMeshLOD))
dependency_context.agent.service.add_dependency(dependency_context.agent, dependency_handle, dynamic_request.get_request())
return OperationResult.SUCCESS
class ProcessorMeshLOD(ProcessorImpl):
def __init__(self, agent, key_token):
super().__init__(agent, key_token, ProcessorMeshLOD._get_info_impl, ProcessorMeshLOD._match_impl, ProcessorMeshLOD._gather_dependencies_impl, ProcessorMeshLOD._build_impl)
@staticmethod
def _get_info_impl(processor):
return OperationResult.SUCCESS, ProcessorInformation("ProcessorMeshLOD", 1, 1)
@staticmethod
def _gather_dependencies_impl(dependency_context, dependency_job, dependency_handle):
dynamic_request = DynamicRequest(dependency_job.request)
dynamic_request.replace_key(KeyTokenEx(kTokenTypeMeshLOD), KeyTokenEx(kTokenTypeTriangulatedMesh))
dynamic_request.remove_key(KeyTokenEx(kTokenMeshReduceFactor))
dependency_context.agent.service.add_dependency(dependency_context.agent, dependency_handle, dynamic_request.get_request())
return OperationResult.SUCCESS
class ProcessorTriangulatedMesh(ProcessorImpl):
def __init__(self, agent, key_token):
super().__init__(agent, key_token, ProcessorTriangulatedMesh._get_info_impl, ProcessorTriangulatedMesh._match_impl, ProcessorTriangulatedMesh._gather_dependencies_impl, ProcessorTriangulatedMesh._build_impl)
@staticmethod
def _get_info_impl(processor):
return OperationResult.SUCCESS, ProcessorInformation("ProcessorTriangulatedMesh", 1, 1)
@staticmethod
def _gather_dependencies_impl(dependency_context, dependency_job, dependency_handle):
dynamic_request = DynamicRequest(dependency_job.request)
dynamic_request.replace_key(KeyTokenEx(kTokenTypeTriangulatedMesh), KeyTokenEx(kTokenTypeMesh))
dependency_context.agent.service.add_dependency(dependency_context.agent, dependency_handle, dynamic_request.get_request())
return OperationResult.SUCCESS
class ProcessorMesh(ProcessorImpl):
def __init__(self, agent, key_token):
super().__init__(agent, key_token, ProcessorMesh._get_info_impl, ProcessorMesh._match_impl, ProcessorMesh._gather_dependencies_impl, ProcessorMesh._build_impl)
@staticmethod
def _get_info_impl(processor):
return OperationResult.SUCCESS, ProcessorInformation("ProcessorMesh", 1, 1)
@staticmethod
def _gather_dependencies_impl(dependency_context, dependency_job, dependency_handle):
return OperationResult.SUCCESS
class TestUJITSO(AsyncTestCase):
"""Test UJITSO (ported from TestUJITSO.cpp)"""
async def test_ujitso_agent(self):
"""UJITSO agent test"""
self._test_UJITSO()
def _test_UJITSO(self, task_agent = None, task_service = None):
factory_interface = acquire_factory_interface()
self.assertIsNotNone(factory_interface)
data_grid_interface = acquire_data_grid_interface()
self.assertIsNotNone(data_grid_interface)
data_grid = data_grid_interface.create_data_grid()
self.assertIsNotNone(data_grid)
local_data_store_interface = acquire_local_data_store_interface()
self.assertIsNotNone(local_data_store_interface)
data_store = local_data_store_interface.create(None, 0)
self.assertIsNotNone(data_store)
agent = factory_interface.create_agent(data_grid, data_store, task_agent, task_service, AgentConfigFlags.Default)
self.assertIsNotNone(agent)
self.assertIsNotNone(agent.agent)
self._test_registry(agent)
self._test_request(agent)
self._test_lod_mockup(agent)
self._test_build(agent, data_grid, data_store)
factory_interface.destroy_agent(agent)
data_grid_interface.destroy_data_grid(data_grid)
local_data_store_interface.destroy(data_store)
release_factory_interface(factory_interface)
release_local_data_store_interface(local_data_store_interface)
release_data_grid_interface(data_grid_interface)
def _test_registry(self, agent):
string1 = KeyTokenEx("Test1")
string2 = KeyTokenEx("Test2")
string3 = KeyTokenEx("Test1")
self.assertTrue(string1 != string2)
self.assertTrue(string2 != string3)
self.assertTrue(string1 == string3)
def _test_request(self, agent):
agent_interface = agent.agent
keys = [(123,)]
key_token_rtype = UjitsoUtils.make_key_token_request_token_type()
types = [(key_token_rtype.size, key_token_rtype.typeNameHash)]
request_type = RequestType(keys, types)
key_token_dtype = np.dtype([('value', np.uint32)])
key_token_value = [1]
request_values = np.array(key_token_value, key_token_dtype).tobytes()
request = Request(request_type, request_values)
request_callback_data = RequestCallbackData()
operation_result, request_handle = agent_interface.request_build(agent, request, request_callback_data)
self.assertEqual(operation_result, OperationResult.SUCCESS)
self.assertIsNotNone(request_handle)
self.assertNotEqual(request_handle.value, 0)
operation_result = agent_interface.wait_all(agent, TIME_OUT_INFINITE)
self.assertEqual(operation_result, OperationResult.SUCCESS)
operation_result, result_handle = agent_interface.get_request_result(request_handle)
self.assertEqual(operation_result, OperationResult.NOPROCESSOR_ERROR)
operation_result = agent_interface.destroy_request(request_handle)
self.assertEqual(operation_result, OperationResult.SUCCESS)
operation_result = agent_interface.destroy_request(request_handle)
self.assertEqual(operation_result, OperationResult.INVALIDHANDLE_ERROR)
operation_result, result_handle = agent_interface.get_request_result(request_handle)
self.assertEqual(operation_result, OperationResult.INVALIDHANDLE_ERROR)
def _test_lod_mockup(self, agent):
agent_interface = agent.agent
procPackedLODs = ProcessorRTXPackedMeshLODs(agent, kTokenTypeRTXPackedMeshLODs)
procPackedLOD = ProcessorRTXPackedMeshLOD(agent, kTokenTypeRTXPackedMeshLOD)
procMeshLOD = ProcessorMeshLOD(agent, kTokenTypeMeshLOD)
procTriangulatedMesh = ProcessorTriangulatedMesh(agent, kTokenTypeTriangulatedMesh)
procMesh = ProcessorMesh(agent, kTokenTypeMesh)
dynamic_request = DynamicRequest()
dynamic_request.add(KeyTokenEx(kTokenTypeRTXPackedMeshLODs))
dynamic_request.add_string(KeyTokenEx(kTokenTypeUSDPath), "/Meshes/TestMesh")
request = dynamic_request.get_request()
self.assertIsNotNone(request)
request_callback_data = RequestCallbackData()
operation_result, request_handle = agent_interface.request_build(agent, request, request_callback_data)
self.assertEqual(operation_result, OperationResult.SUCCESS)
self.assertIsNotNone(request_handle)
self.assertNotEqual(request_handle.value, 0)
operation_result = agent_interface.wait_all(agent, TIME_OUT_INFINITE)
self.assertEqual(operation_result, OperationResult.SUCCESS)
agent_interface.destroy_request(request_handle)
def _test_build(self, agent, data_grid, data_store):
class ProcessorBuild(ProcessorImpl):
def __init__(self, agent, key_token):
super().__init__(agent, key_token, ProcessorBuild._get_info_impl, ProcessorBuild._match_impl, ProcessorBuild._gather_dependencies_impl, ProcessorBuild._build_impl)
@staticmethod
def _get_info_impl(processor):
return OperationResult.SUCCESS, ProcessorInformation("ProcessorBuild", 1, 1)
@staticmethod
def _gather_dependencies_impl(dependency_context, dependency_job, dependency_handle):
agent = dependency_context.agent
service = agent.service
dynamic_request = DynamicRequest(dependency_job.request)
service.add_request_tuple_input(agent, dependency_handle, dynamic_request.get_request(), False, False)
service.set_storage_context(agent, dependency_handle, "TestStorageContext")
return OperationResult.SUCCESS
@staticmethod
def _build_impl(build_context, build_job, build_handle):
agent = build_context.agent
string_value = UjitsoUtils.get_request_value_string(agent, build_job.request, KeyTokenEx("String"), None)
int_value = UjitsoUtils.get_request_value_int(agent, build_job.request, KeyTokenEx("IntParam"), 0)
dtype = np.dtype('uint32')
elem_size = dtype.itemsize
operation_result, metadata = agent.service.allocate_meta_data_storage(agent, build_handle, elem_size * 255)
self.assertEqual(operation_result, OperationResult.SUCCESS)
# reinterpret data type
metadata_array = np.frombuffer(metadata, dtype)
for i in range(len(metadata_array)):
metadata_array[i] = int_value
external_data = [np.frombuffer(string_value.encode(), dtype=np.uint8)]
validation_data = [ValidationType.MANDATORY]
operation_result = agent.service.store_external_data(agent, build_handle, external_data, validation_data)
return OperationResult.SUCCESS
kIntValue = 0x102
kStringValue = "MyTestStringForFun"
kTokenRequest = KeyTokenEx("String")
test_processor = ProcessorBuild(agent, kTokenRequest)
dynamic_request = DynamicRequest()
dynamic_request.add_int(KeyTokenEx("IntParam"), kIntValue)
dynamic_request.add_string(KeyTokenEx("String"), kStringValue)
request = dynamic_request.get_request()
request_callback_data = RequestCallbackData()
agent_interface = agent.agent
operation_result, request_handle = agent_interface.request_build(agent, request, request_callback_data)
self.assertEqual(operation_result, OperationResult.SUCCESS)
self.assertIsNotNone(request_handle)
self.assertNotEqual(request_handle.value, 0)
operation_result = agent_interface.wait_request(request_handle, TIME_OUT_INFINITE)
self.assertEqual(operation_result, OperationResult.SUCCESS)
operation_result, result_handle = agent_interface.get_request_result(request_handle)
operation_result, metadata = agent_interface.get_request_meta_data(result_handle)
self.assertEqual(operation_result, OperationResult.SUCCESS)
# reinterpret data type
metadata_array = np.frombuffer(metadata, dtype='uint32')
for i in range(len(metadata_array)):
self.assertEqual(metadata_array[i], kIntValue)
operation_result, storages = agent_interface.get_request_external_data(result_handle)
self.assertEqual(operation_result, OperationResult.SUCCESS)
self.assertEqual(len(storages), 1)
operation_result = agent_interface.validate_request_external_data(request_handle, result_handle, [True], True)
self.assertEqual(operation_result, OperationResult.SUCCESS)
storage = storages[0]
operation_result, data_block = DataStoreUtils.copy_data_block(data_store, "", storage)
string_value = ''.join(chr(v) for v in data_block)
self.assertEqual(string_value, kStringValue)
operation_result, storage_context = agent_interface.get_request_storage_context(request_handle)
self.assertEqual(operation_result, OperationResult.SUCCESS)
self.assertEqual(storage_context, "TestStorageContext")
agent_interface.destroy_request(request_handle)
omniverse-code/kit/exts/omni.kit.autocapture/omni/kit/autocapture/scripts/extension.py

import os
import importlib
import carb
import carb.settings
try:
import omni.renderer_capture
omni_renderer_capture_present = True
except ImportError:
omni_renderer_capture_present = False
import omni.ext
import omni.kit.app
from omni.hydra.engine.stats import HydraEngineStats
class Extension(omni.ext.IExt):
    def __init__(self):
        super().__init__()
def _set_default_settings(self):
self._settings.set_default_int("/app/captureFrame/startFrame", -1)
self._settings.set_default("/app/captureFrame/startMultipleFrame/0", -1)
self._settings.set_default_bool("/app/captureFrame/closeApplication", False)
self._settings.set_default_string("/app/captureFrame/fileName", "no-filename-specified")
self._settings.set_default_string("/app/captureFrame/outputPath", "")
self._settings.set_default_bool("/app/captureFrame/setAlphaTo1", True)
self._settings.set_default_bool("/app/captureFrame/saveFps", False)
self._settings.set_default_bool("/app/captureFrame/hdr", False)
self._settings.set_default_int("/app/captureFrame/asyncBufferSizeMB", 2048)
self._settings.set_default_bool("/renderer/gpuProfiler/record", False)
self._settings.set_default_int("/renderer/gpuProfiler/maxIndent", 1)
def on_startup(self):
self._settings = carb.settings.get_settings()
self._set_default_settings()
self._app = omni.kit.app.get_app()
try:
module_omni_usd = importlib.import_module('omni.usd')
self._usd_context = module_omni_usd.get_context()
self._opened_state = module_omni_usd.StageState.OPENED
except ImportError:
self._usd_context = None
self._opened_state = None
if omni_renderer_capture_present:
self._renderer_capture = omni.renderer_capture.acquire_renderer_capture_interface()
self._renderer_capture.start_frame_updates()
else:
self._renderer_capture = None
carb.log_error("Autocapture initialization failed: renderer.capture extension should be present!")
return
# Initial configuration
self._frame_no = 0
        # The app may exit before the last image has been saved; wait up to _quitFrameCounter frames before quitting
self._quitFrameCounter = 10
self._multiple_frame_no = 0
self._start_frame = self._settings.get("/app/captureFrame/startFrame")
self._start_multiple_frame = self._settings.get("/app/captureFrame/startMultipleFrame")
self._close_app = self._settings.get("/app/captureFrame/closeApplication")
self._file_name = self._settings.get("/app/captureFrame/fileName")
self._output_path = self._settings.get("/app/captureFrame/outputPath")
if len(self._output_path) == 0:
module_carb_tokens = importlib.import_module('carb.tokens')
self._output_path = module_carb_tokens.get_tokens_interface().resolve("${kit}") + "/../../../outputs/"
self._record_gpu_performance = self._settings.get("/renderer/gpuProfiler/record")
self._recording_max_indent = self._settings.get("/renderer/gpuProfiler/maxIndent")
self._gpu_perf = []
# viewport_api = get_active_viewport()
# self.__stats = HydraEngineStats(viewport_api.usd_context_name, viewport_api.hydra_engine)
self.__stats = HydraEngineStats()
self._count_loading_frames = False
self._next_frame_exit = False
if self._start_frame > 0 or self._start_multiple_frame[0] > 0:
def on_post_update(e: carb.events.IEvent):
if not self._app.is_app_ready():
return
if self._next_frame_exit:
if self._quitFrameCounter <= 0:
self._app.post_quit()
self._quitFrameCounter = self._quitFrameCounter - 1
return None
count_frame = True
if not self._count_loading_frames and self._usd_context is not None:
if self._usd_context.get_stage_state() != self._opened_state:
                        count_frame = False  # don't count frames while the stage is still loading
if count_frame:
if self._record_gpu_performance and self.__stats:
frame_perf = self.__stats.get_nested_gpu_profiler_result(self._recording_max_indent)
dev_count = len(frame_perf)
has_data = False
for dev_idx in range(dev_count):
if len(frame_perf[dev_idx]) > 0:
has_data = True
break
if has_data:
if len(self._gpu_perf) == 0:
self._gpu_perf.extend(frame_perf)
for dev_idx in range(dev_count):
self._gpu_perf[dev_idx] = {}
for dev_idx in range(dev_count):
self._gpu_perf[dev_idx]["frame %d" % (self._frame_no)] = frame_perf[dev_idx]
self._frame_no += 1
if self._start_frame > 0:
if self._frame_no >= self._start_frame:
self._renderer_capture.capture_next_frame_swapchain(self._output_path + self._file_name)
self._next_frame_exit = self._close_app
if self._start_multiple_frame[0] > 0 and self._multiple_frame_no < len(self._start_multiple_frame):
                    if self._frame_no >= self._start_multiple_frame[self._multiple_frame_no]:
                        self._renderer_capture.capture_next_frame_swapchain(self._output_path + self._file_name + "_" + str(self._start_multiple_frame[self._multiple_frame_no]))
self._multiple_frame_no += 1
if self._multiple_frame_no >= len(self._start_multiple_frame):
self._next_frame_exit = self._close_app
self._post_update_subs = self._app.get_post_update_event_stream().create_subscription_to_pop(on_post_update, name="Autocapture post-update")
def on_shutdown(self):
if self._record_gpu_performance:
json_filename = self._output_path + self._file_name + ".json"
dump_json = {}
dev_count = len(self._gpu_perf)
for dev_idx in range(dev_count):
dump_json["GPU-%d" % (dev_idx)] = self._gpu_perf[dev_idx]
import json
with open(json_filename, 'w', encoding='utf-8') as json_file:
json.dump(dump_json, json_file, ensure_ascii=False, indent=4)
self._gpu_perf = None
self._settings = None
self._usd_context = None
self._opened_state = None
self._renderer_capture = None
self._post_update_subs = None
omniverse-code/kit/exts/omni.kit.autocapture/docs/index.rst

omni.kit.autocapture
#########################
.. automodule:: omni.kit.autocapture
:platform: Windows-x86_64, Linux-x86_64, Linux-aarch64
:members:
:undoc-members:
:imported-members:
omniverse-code/kit/exts/omni.kit.property.render/omni/kit/property/render/extension.py

# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
import omni.ext
# Any class derived from `omni.ext.IExt` in top level module (defined in `python.modules` of `extension.toml`) will be
# instantiated when extension gets enabled and `on_startup(ext_id)` will be called. Later when extension gets disabled
# on_shutdown() is called.
class RenderPropertiesExtension(omni.ext.IExt):
# ext_id is current extension id. It can be used with extension manager to query additional information, like where
# this extension is located on filesystem.
def on_startup(self, ext_id):
self.__wg_registered = False
# Register custom UI for Property widget
self.__register_widgets()
def on_shutdown(self):
self.__unregister_widgets()
def __register_widgets(self):
import omni.kit.window.property as p
from omni.kit.property.usd.usd_property_widget import MultiSchemaPropertiesWidget
from .product_schema import ProductSchemaAttributesWidget
from pxr import UsdRender
w = p.get_window()
if w:
self.__wg_registered = True
w.register_widget(
'prim',
'rendersettings_base',
MultiSchemaPropertiesWidget('Render Settings', UsdRender.Settings, [UsdRender.SettingsBase], group_api_schemas = True),
)
w.register_widget(
'prim',
'renderproduct_base',
ProductSchemaAttributesWidget('Render Product',
UsdRender.Product,
[UsdRender.SettingsBase],
include_list=["camera", "orderedVars"],
exclude_list=["aspectRatioConformPolicy", "dataWindowNDC", "instantaneousShutter", "pixelAspectRatio", "productName", "productType"],
group_api_schemas = True),
)
w.register_widget(
'prim',
'rendervar_base',
MultiSchemaPropertiesWidget('Render Var', UsdRender.Var, [UsdRender.Var], group_api_schemas = True),
)
def __unregister_widgets(self):
if self.__wg_registered:
import omni.kit.window.property as p
w = p.get_window()
if w:
w.unregister_widget('prim', 'rendersettings_base')
w.unregister_widget('prim', 'renderproduct_base')
w.unregister_widget('prim', 'rendervar_base')
self.__wg_registered = False
| 3,015 | Python | 45.399999 | 179 | 0.605307 |
omniverse-code/kit/exts/omni.kit.property.render/omni/kit/property/render/__init__.py | from .extension import RenderPropertiesExtension
| 49 | Python | 23.999988 | 48 | 0.897959 |
omniverse-code/kit/exts/omni.kit.property.render/omni/kit/property/render/product_schema.py | import carb
import omni.ext
from typing import List, Sequence
from pxr import Kind, Sdf, Usd, UsdGeom, Vt, UsdRender
from omni.kit.property.usd.usd_property_widget import MultiSchemaPropertiesWidget, UsdPropertyUiEntry
class ProductSchemaAttributesWidget(MultiSchemaPropertiesWidget):
def __init__(self, title: str, schema, schema_subclasses: list, include_list: list = [], exclude_list: list = [], api_schemas: Sequence[str] = None, group_api_schemas: bool = False):
super().__init__(title, schema, schema_subclasses, include_list, exclude_list, api_schemas, group_api_schemas)
def on_new_payload(self, payload):
"""
See PropertyWidget.on_new_payload
"""
if not super().on_new_payload(payload):
return False
if not self._payload or len(self._payload) == 0:
return False
used = []
for prim_path in self._payload:
prim = self._get_prim(prim_path)
if not prim or not prim.IsA(self._schema):
return False
used += [attr for attr in prim.GetAttributes() if attr.GetName() in self._schema_attr_names and not attr.IsHidden()]
return used
def _customize_props_layout(self, attrs):
from omni.kit.property.usd.custom_layout_helper import (
CustomLayoutFrame,
CustomLayoutGroup,
CustomLayoutProperty,
)
from omni.kit.window.property.templates import (
SimplePropertyWidget,
LABEL_WIDTH,
LABEL_HEIGHT,
HORIZONTAL_SPACING,
)
frame = CustomLayoutFrame(hide_extra=False)
anchor_prim = self._get_prim(self._payload[-1])
with frame:
with CustomLayoutGroup("Render Product"):
CustomLayoutProperty("resolution", "Resolution")
CustomLayoutProperty("camera", "Camera")
CustomLayoutProperty("orderedVars", "Ordered Vars")
# https://github.com/PixarAnimationStudios/USD/commit/dbbe38b94e6bf113acbb9db4c85622fe12a344a5
if hasattr(UsdRender.Tokens, 'disableMotionBlur'):
CustomLayoutProperty("disableMotionBlur", "Disable Motion Blur")
return frame.apply(attrs)
| 2,268 | Python | 38.120689 | 186 | 0.632275 |
omniverse-code/kit/exts/omni.kit.property.render/omni/kit/property/render/tests/test_render_properties.py | ## Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
##
## NVIDIA CORPORATION and its licensors retain all intellectual property
## and proprietary rights in and to this software, related documentation
## and any modifications thereto. Any use, reproduction, disclosure or
## distribution of this software and related documentation without an express
## license agreement from NVIDIA CORPORATION is strictly prohibited.
##
import omni.kit.test
import pathlib
import omni.kit.app
import omni.ui as ui
from omni.ui.tests.test_base import OmniUiTest
from omni.kit import ui_test
from omni.kit.test_suite.helpers import wait_stage_loading
class TestRenderPropertiesWidget(OmniUiTest):
# Before running each test
async def setUp(self):
await super().setUp()
from omni.kit.property.usd.usd_attribute_widget import UsdPropertiesWidget
import omni.kit.window.property as p
self._w = p.get_window()
extension_path = omni.kit.app.get_app().get_extension_manager().get_extension_path_by_module(__name__)
test_data_path = pathlib.Path(extension_path).joinpath("data").joinpath("tests")
self.__golden_img_dir = test_data_path.absolute().joinpath('golden_img').absolute()
self.__usd_path = str(test_data_path.joinpath('render_prim_test.usda').absolute())
# After running each test
async def tearDown(self):
await super().tearDown()
# Test(s)
async def __test_render_prim_ui(self, prim_name):
usd_context = omni.usd.get_context()
await self.docked_test_window(
window=self._w._window,
width=450,
height=650,
restore_window = ui.Workspace.get_window('Layer') or ui.Workspace.get_window('Stage'),
restore_position = ui.DockPosition.BOTTOM)
await usd_context.open_stage_async(self.__usd_path)
await wait_stage_loading()
# NOTE: cannot do DomeLight as it contains a file path which is build specific
# Select the prim.
usd_context.get_selection().set_selected_prim_paths([f'/World/RenderTest/{prim_name}'], True)
# Need to wait for an additional frames for omni.ui rebuild to take effect
await ui_test.human_delay(10)
await self.finalize_test(golden_img_dir=self.__golden_img_dir, golden_img_name=f'test_{prim_name}_ui.png')
# Test(s)
async def test_rendersettings_ui(self):
await self.__test_render_prim_ui('rendersettings1')
async def test_renderproduct_ui(self):
await self.__test_render_prim_ui('renderproduct1')
async def test_rendervar_ui(self):
await self.__test_render_prim_ui('rendervar1')
| 2,685 | Python | 37.371428 | 114 | 0.686406 |
omniverse-code/kit/exts/omni.kit.property.render/docs/CHANGELOG.md | # Changelog
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [1.1.0] - 2022-02-02
### Changes
- Updated to new AOV design
## [1.0.0] - 2021-10-12
### Initial Version
- Initial Version
| 222 | Markdown | 17.583332 | 80 | 0.657658 |
omniverse-code/kit/exts/omni.kit.property.render/docs/README.md | # omni.kit.property.render
## Introduction
Property window extensions are for viewing and editing Usd Prim Attributes
## This extension supports editing of these Usd Types;
- UsdRenderSettings
- UsdRenderProduct
- UsdRenderVar
## also groups applied API's on the above types when availbale
| 296 | Markdown | 18.799999 | 74 | 0.790541 |
omniverse-code/kit/exts/omni.kit.property.render/docs/index.rst | omni.kit.property.render
###########################
Property Render Settings Values
.. toctree::
:maxdepth: 1
CHANGELOG
| 132 | reStructuredText | 10.083332 | 31 | 0.560606 |
omniverse-code/kit/exts/omni.kit.widget.highlight_label/config/extension.toml | [package]
# Semantic Versionning is used: https://semver.org/
version = "1.0.0"
# Lists people or organizations that are considered the "authors" of the package.
authors = ["NVIDIA"]
# The title and description fields are primarly for displaying extension info in UI
title = "Highlight widgets"
description="A label widget to show highlight word."
# URL of the extension source repository.
repository = ""
# Keywords for the extension
keywords = ["kit", "ui", "widget", "label", "highlight"]
# Location of change log file in target (final) folder of extension, relative to the root.
# More info on writing changelog: https://keepachangelog.com/en/1.0.0/
changelog="docs/CHANGELOG.rst"
# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"
# Preview image and icon. Folder named "data" automatically goes in git lfs (see .gitattributes file).
# Preview image is shown in "Overview" of Extensions window. Screenshot of an extension might be a good preview image.
preview_image = "data/preview.png"
# Icon is shown in Extensions window, it is recommended to be square, of size 256x256.
icon = "data/icon.png"
category = "Internal"
# Dependencies on other extensions:
[dependencies]
"omni.ui" = {}
# Main python module this extension provides, it will be publicly available as "import omni.kit.widget.highlight_label".
[[python.module]]
name = "omni.kit.widget.highlight_label"
[settings]
[[test]]
args = [
"--no-window",
"--/app/window/dpiScaleOverride=1.0",
"--/app/window/scaleToMonitor=false",
]
dependencies = [
"omni.kit.renderer.core",
"omni.kit.renderer.capture",
"omni.kit.ui_test",
]
| 1,683 | TOML | 29.071428 | 118 | 0.726679 |
omniverse-code/kit/exts/omni.kit.widget.highlight_label/omni/kit/widget/highlight_label/style.py | from omni.ui import color as cl
cl.highlight_default = cl.shade(cl('#848484'))
cl.highlight_highlight = cl.shade(cl('#DFCB4A'))
cl.highlight_selected = cl.shade(cl("#1F2123"))
UI_STYLE = {
"HighlightLabel": {"color": cl.highlight_default},
"HighlightLabel:selected": {"color": cl.highlight_selected},
"HighlightLabel::highlight": {"color": cl.highlight_highlight},
}
| 381 | Python | 30.833331 | 67 | 0.695538 |
omniverse-code/kit/exts/omni.kit.widget.highlight_label/omni/kit/widget/highlight_label/__init__.py | from .highlight_label import HighlightLabel
| 44 | Python | 21.499989 | 43 | 0.863636 |
omniverse-code/kit/exts/omni.kit.widget.highlight_label/omni/kit/widget/highlight_label/highlight_label.py | import carb
import math
from omni import ui
from typing import Optional, Dict
from .style import UI_STYLE
def split_selection(text, selection, match_case: bool = False):
"""
Split the given text into substrings for drawing highlighted text. The result starts with unselected text.
Example: "helloworld" "o" -> ["hell", "o", "w", "o", "rld"]
Example: "helloworld" "helloworld" -> ["", "helloworld"]
"""
if not selection:
return [text, ""]
else:
origin_text = text
        if not match_case:
            selection = selection.lower()
            text = text.lower()
        # A full match returns early with no trailing empty chunk, matching the
        # second docstring example for both match_case modes.
        if text == selection:
            return ["", origin_text]
selection_len = len(selection)
result = []
while True:
found = text.find(selection)
result.append(origin_text if found < 0 else origin_text[:found])
if found < 0:
break
else:
result.append(origin_text[found : found + selection_len])
text = text[found + selection_len :]
origin_text = origin_text[found + selection_len :]
return result
class HighlightLabel:
"""
    A label widget that can highlight a word within its text.
    Args:
        text (str): String of the label.
    Keyword Args:
        highlight (Optional[str]): Word to highlight.
        match_case (bool): Match the highlight word case-sensitively. Default False.
        width (ui.Length): Widget width. Default ui.Fraction(1).
        height (ui.Length): Widget height. Default 0.
        style (Dict): Custom style dictionary.
"""
def __init__(
self,
text: str,
highlight: Optional[str] = None,
match_case: bool = False,
width: ui.Length=ui.Fraction(1),
height: ui.Length=0,
style: Dict = {}
):
self._container: Optional[ui.HStack] = None
self.__text = text
self.__hightlight = highlight
self.__match_case = match_case
self.__width = width
self.__height = height
self.__style = UI_STYLE.copy()
self.__style.update(style)
self._build_ui()
def _build_ui(self):
if not self._container:
self._container = ui.HStack(width=self.__width, height=self.__height, style=self.__style)
else:
self._container.clear()
if not self.__hightlight:
with self._container:
ui.Label(
self.__text,
width=0,
name="",
style_type_name_override="HighlightLabel",
)
else:
selection_chain = split_selection(self.__text, self.__hightlight, match_case=self.__match_case)
labelnames_chain = ["", "highlight"]
# Extend the label names depending on the size of the selection chain. Example, if it was [a, b]
# and selection_chain is [z,y,x,w], it will become [a, b, a, b].
labelnames_chain *= int(math.ceil(len(selection_chain) / len(labelnames_chain)))
with self._container:
for current_text, current_name in zip(selection_chain, labelnames_chain):
if not current_text:
continue
ui.Label(
current_text,
width=0,
name=current_name,
style_type_name_override="HighlightLabel",
)
@property
def widget(self) -> Optional[ui.HStack]:
return self._container
@property
    def visible(self) -> bool:
"""
Widget visibility
"""
return self._container.visible
@visible.setter
def visible(self, value: bool) -> None:
self._container.visible = value
@property
def text(self) -> str:
return self.__text
@text.setter
def text(self, value: str) -> None:
self.__text = value
self._build_ui()
    @property
    def highlight(self) -> Optional[str]:
        return self.__hightlight
    @highlight.setter
    def highlight(self, value: Optional[str]) -> None:
        self.__hightlight = value
        self._build_ui()
| 4,346 | Python | 28.773972 | 108 | 0.546019 |
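The widget's core logic — splitting the text on the highlight word and alternating style names — can be reproduced as a small pure-Python sketch. This is a standalone illustration built on `re`, not the code path the widget itself uses:

```python
import math
import re

def split_highlight(text, selection, match_case=False):
    # Split while keeping the matches, like split_selection above:
    # a capturing group in re.split preserves the highlighted chunks.
    if not selection:
        return [text, ""]
    flags = 0 if match_case else re.IGNORECASE
    return re.split("(%s)" % re.escape(selection), text, flags=flags)

def styled_chunks(chain, names=("", "highlight")):
    # Repeat the two style names to cover the chain, then drop empty chunks,
    # mirroring the loop in HighlightLabel._build_ui.
    names = list(names) * int(math.ceil(len(chain) / len(names)))
    return [(chunk, name) for chunk, name in zip(chain, names) if chunk]

chunks = styled_chunks(split_highlight("helloworld", "o"))
# chunks -> [("hell", ""), ("o", "highlight"), ("w", ""), ("o", "highlight"), ("rld", "")]
```

Each `("text", "name")` pair corresponds to one `ui.Label` the widget would create, with `name="highlight"` selecting the highlight color from the style table.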
omniverse-code/kit/exts/omni.kit.widget.highlight_label/omni/kit/widget/highlight_label/tests/test_ui.py | ## Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
##
## NVIDIA CORPORATION and its licensors retain all intellectual property
## and proprietary rights in and to this software, related documentation
## and any modifications thereto. Any use, reproduction, disclosure or
## distribution of this software and related documentation without an express
## license agreement from NVIDIA CORPORATION is strictly prohibited.
##
import omni.ui as ui
from omni.ui.tests.test_base import OmniUiTest
from .. import HighlightLabel
from pathlib import Path
CURRENT_PATH = Path(__file__).parent
TEST_DATA_PATH = CURRENT_PATH.parent.parent.parent.parent.parent.joinpath("data").joinpath("tests")
TEST_WIDTH = 400
TEST_HEIGHT = 200
CUSTOM_UI_STYLE = {
"HighlightLabel": {"color": 0xFFFFFFFF},
"HighlightLabel::highlight": {"color": 0xFF0000FF},
}
class HighlightLabelTestCase(OmniUiTest):
# Before running each test
async def setUp(self):
await super().setUp()
self._golden_img_dir = TEST_DATA_PATH.absolute().joinpath("golden_img").absolute()
# After running each test
async def tearDown(self):
await super().tearDown()
async def test_general(self):
"""Testing general look of SearchField"""
window = await self.create_test_window(width=TEST_WIDTH, height=TEST_HEIGHT)
with window.frame:
with ui.VStack(spacing=10):
HighlightLabel("No highlight")
HighlightLabel("Highlight All", highlight="Highlight All")
HighlightLabel("Highlight 'gh'", highlight="gh")
label = HighlightLabel("Highlight 't' via property")
label.highlight = "t"
HighlightLabel("Highlight 'H' MATCH Case", highlight="H", match_case=True)
HighlightLabel("Match Case All", highlight="Match Case All", match_case=True)
HighlightLabel("Highlight style CUSTOM", highlight="style", style=CUSTOM_UI_STYLE)
await self.docked_test_window(window=window, width=TEST_WIDTH, height=TEST_HEIGHT)
await self.finalize_test(golden_img_dir=self._golden_img_dir, golden_img_name="highlight_label.png")
| 2,189 | Python | 42.799999 | 108 | 0.687985 |
omniverse-code/kit/exts/omni.kit.widget.highlight_label/docs/CHANGELOG.rst | # CHANGELOG
This document records all notable changes to the ``omni.kit.widget.highlight_label`` extension.
This project adheres to `Semantic Versioning <https://semver.org/>`_.
## [1.0.0] - 2022-10-10
### Added
- Initial version implementation
| 241 | reStructuredText | 20.999998 | 87 | 0.73029 |
omniverse-code/kit/exts/omni.kit.test_suite.stage_window/omni/kit/test_suite/stage_window/tests/visibility_toggle.py | ## Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
##
## NVIDIA CORPORATION and its licensors retain all intellectual property
## and proprietary rights in and to this software, related documentation
## and any modifications thereto. Any use, reproduction, disclosure or
## distribution of this software and related documentation without an express
## license agreement from NVIDIA CORPORATION is strictly prohibited.
##
import omni.kit.test
import math
import omni.usd
import omni.kit.app
from pxr import UsdGeom
from omni.kit.test.async_unittest import AsyncTestCase
from omni.kit import ui_test
from omni.kit.test_suite.helpers import open_stage, get_test_data_path, wait_stage_loading, get_prims, arrange_windows
class VisibilityToggleUsdStage(AsyncTestCase):
# Before running each test
async def setUp(self):
await arrange_windows("Stage", 512)
await open_stage(get_test_data_path(__name__, "bound_shapes.usda"))
# After running each test
async def tearDown(self):
await wait_stage_loading()
async def test_l1_eye_visibility_icon(self):
await ui_test.find("Content").focus()
usd_context = omni.usd.get_context()
stage = usd_context.get_stage()
await wait_stage_loading()
def verify_prim_state(expected):
stage = omni.usd.get_context().get_stage()
for prim_path in [prim.GetPath().pathString for prim in stage.TraverseAll() if not omni.usd.is_hidden_type(prim)]:
if not "Looks" in prim_path:
self.assertEqual(UsdGeom.Imageable(stage.GetPrimAtPath(prim_path)).ComputeVisibility(), expected[prim_path])
        # verify default state
verify_prim_state({"/World": "inherited", "/World/defaultLight": "inherited", "/World/Cone": "inherited", "/World/Cube": "inherited", "/World/Sphere": "inherited", "/World/Cylinder": "inherited"})
# build table of eye buttons
stage_widget = ui_test.find("Stage//Frame/**/ScrollingFrame/TreeView[*].visible==True")
widgets = {}
for w in stage_widget.find_all("**/Label[*]"):
widget_name = w.widget.text if w.widget.text != "World (defaultPrim)" else "World"
for wtb in stage_widget.find_all(f"**/ToolButton[*]"):
if math.isclose(wtb.widget.screen_position_y, w.widget.screen_position_y):
widgets[widget_name] = wtb
break
# click world eye & verify
await widgets["World"].click()
verify_prim_state({"/World": "invisible", "/World/defaultLight": "invisible", "/World/Cone": "invisible", "/World/Cube": "invisible", "/World/Sphere": "invisible", "/World/Cylinder": "invisible"})
await widgets["World"].click()
verify_prim_state({"/World": "inherited", "/World/defaultLight": "inherited", "/World/Cone": "inherited", "/World/Cube": "inherited", "/World/Sphere": "inherited", "/World/Cylinder": "inherited"})
        # click individual prim eye & verify
for prim_name in ["defaultLight", "Cone", "Cube", "Sphere", "Cylinder"]:
expected = {"/World": "inherited", "/World/defaultLight": "inherited", "/World/Cone": "inherited", "/World/Cube": "inherited", "/World/Sphere": "inherited", "/World/Cylinder": "inherited"}
verify_prim_state(expected)
await widgets[prim_name].click()
expected[f"/World/{prim_name}"] = "invisible"
verify_prim_state(expected)
await widgets[prim_name].click()
| 3,523 | Python | 50.823529 | 204 | 0.652853 |
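The widget-matching trick in the test above — pairing each row label with the eye `ToolButton` that shares its vertical screen position — can be sketched as plain Python. Hypothetical `(name, y)` tuples stand in for the real widget references:

```python
import math

def pair_by_row(labels, buttons):
    # labels, buttons: sequences of (name, screen_position_y) tuples.
    # Each label is paired with the first button on (approximately) the same row.
    pairs = {}
    for label, label_y in labels:
        for button, button_y in buttons:
            if math.isclose(label_y, button_y):
                pairs[label] = button
                break
    return pairs

rows = pair_by_row([("World", 24.0), ("Cube", 48.0)], [("eye0", 24.0), ("eye1", 48.0)])
# rows -> {"World": "eye0", "Cube": "eye1"}
```

Matching on y-position rather than widget hierarchy keeps the test robust to intermediate containers between the label and its eye button.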
omniverse-code/kit/exts/omni.kit.test_suite.stage_window/omni/kit/test_suite/stage_window/tests/stage_menu_create_custom_materials.py | ## Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
##
## NVIDIA CORPORATION and its licensors retain all intellectual property
## and proprietary rights in and to this software, related documentation
## and any modifications thereto. Any use, reproduction, disclosure or
## distribution of this software and related documentation without an express
## license agreement from NVIDIA CORPORATION is strictly prohibited.
##
import omni.kit.test
import omni.kit.app
import omni.usd
from omni.ui.tests.test_base import OmniUiTest
from omni.kit.test.async_unittest import AsyncTestCase
from omni.kit import ui_test
from pxr import Sdf, UsdShade
from omni.kit.test_suite.helpers import open_stage, get_test_data_path, select_prims, wait_stage_loading, delete_prim_path_children, arrange_windows
from omni.kit.material.library.test_helper import MaterialLibraryTestHelper
class TestCreateMenuContextMenu(OmniUiTest):
# Before running each test
async def setUp(self):
await arrange_windows()
await open_stage(get_test_data_path(__name__, "bound_shapes.usda"))
# After running each test
async def tearDown(self):
await wait_stage_loading()
DATA = ["/World/Cube", "/World/Cone", "/World/Sphere", "/World/Cylinder"]
async def test_l1_stage_menu_create_custom_materials(self):
stage_window = ui_test.find("Stage")
await stage_window.focus()
usd_context = omni.usd.get_context()
stage = usd_context.get_stage()
to_select = self.DATA
# test custom materials
material_test_helper = MaterialLibraryTestHelper()
for material_url, mtl_name in [("mahogany_floorboards.mdl", "mahogany_floorboards"), ("multi_hair.mdl", "OmniHair_Green"), ("multi_hair.mdl", "OmniHair_Brown")]:
# delete any materials in looks
await delete_prim_path_children("/World/Looks")
# select prims
await select_prims(to_select)
# right click on Cube
stage_widget = ui_test.find("Stage//Frame/**/ScrollingFrame/TreeView[*].visible==True")
await stage_widget.find(f"**/StringField[*].model.path=='{to_select[0]}'").right_click()
# click on context menu item
await ui_test.select_context_menu("Create/Material/Add MDL File", offset=ui_test.Vec2(50, 10))
# use add material dialog
mdl_path = get_test_data_path(__name__, f"mtl/{material_url}")
await material_test_helper.handle_add_material_dialog(mdl_path, mtl_name)
# wait for material to load & UI to refresh
await wait_stage_loading()
# verify item(s)
for prim_path in to_select:
prim = stage.GetPrimAtPath(prim_path)
bound_material, _ = UsdShade.MaterialBindingAPI(prim).ComputeBoundMaterial()
self.assertTrue(bound_material.GetPrim().IsValid())
self.assertEqual(bound_material.GetPrim().GetPrimPath().pathString, f"/World/Looks/{mtl_name}")
| 3,042 | Python | 43.101449 | 169 | 0.672584 |