status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 674 | ["langchain/vectorstores/faiss.py", "tests/integration_tests/vectorstores/test_faiss.py"] | test_faiss_with_metadatas: key mismatch in assert | https://github.com/hwchase17/langchain/blob/236ae93610a8538d3d0044fc29379c481acc6789/tests/integration_tests/vectorstores/test_faiss.py#L54
This test will fail because `FAISS.from_texts` will assign uuid4s as keys in its docstore, while `expected_docstore` has string numbers as keys. | https://github.com/langchain-ai/langchain/issues/674 | https://github.com/langchain-ai/langchain/pull/676 | e45f7e40e80d9b47fb51853f0c672e747735b951 | e04b063ff40d7f70eaa91f135729071de60b219d | 2023-01-21T16:02:54Z | python | 2023-01-22T00:08:14Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,774 | ["onnx/checker.py", "onnx/helper.py", "onnx/test/checker_test.py"] | Resurrect check function context logic | https://github.com/onnx/onnx/pull/4928, previously removed in https://github.com/onnx/onnx/pull/5693 | https://github.com/onnx/onnx/issues/5774 | https://github.com/onnx/onnx/pull/5778 | 5be7f3164ba0b2c323813264ceb0ae7e929d2350 | cd35663188f91c9b9474a4979e080cb80f7a0087 | 2023-11-28T17:16:23Z | python | 2023-11-30T23:22:34Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,761 | ["onnx/common/file_utils.h", "onnx/test/checker_test.py"] | Support Unicode characters in proto paths | # Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
1. the onnx file name contains Chinese characters, such as **“士大夫.onnx”**
2. ```
output_file_path = 'D:\\士大夫.onnx'
onnx.checker.check_model(output_file_path)
```
3. after running, it throws a **ValidationError**
```
onnx.onnx_cpp2py_export.checker.ValidationError: Unable to open proto file: D:\士大夫.onnx. Please check if it is a valid proto.
[fail_check("Unable to open proto file: ", proto_path, ". Please check if it is a valid proto. ");]
```
https://github.com/onnx/onnx/blob/b60f69412abb5393ab819b936b473f83867f6c87/onnx/common/file_utils.h#L21
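One possible Python-side workaround, sketched on the assumption that only the C++ path-opening code mishandles Unicode: let Python open the file and check the in-memory proto, so no path reaches the C++ layer.
```python
import onnx

# Workaround sketch: Python handles Unicode paths fine, so open the file
# here and check the in-memory ModelProto instead of passing the path down.
with open("D:\\士大夫.onnx", "rb") as f:
    model = onnx.load(f)
onnx.checker.check_model(model)
```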
### System information
Windows
onnx version 1.15.0
| https://github.com/onnx/onnx/issues/5761 | https://github.com/onnx/onnx/pull/5806 | cb6844a12a5794b743abe8b49abcb0f97177cb2d | 67c456ba4747412afb44158a1a889c0fc3349641 | 2023-11-17T10:15:54Z | python | 2023-12-15T19:05:14Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,755 | ["onnx/checker.py"] | check_function requires contexts as arguments which breaks backward compatibility | https://github.com/onnx/onnx/pull/5693 added required parameters to the `check_function` function in checker which breaks backward compatibility. Should we provide default contexts to `check_function` as well?
| https://github.com/onnx/onnx/issues/5755 | https://github.com/onnx/onnx/pull/5757 | ab5bdf8d6d77432cf7892ff702d926b8582b2704 | ede2c77d322b8ae92f6b2100bce88846296594c6 | 2023-11-13T23:22:02Z | python | 2023-11-14T15:27:20Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,696 | ["MANIFEST.in", "pyproject.toml"] | onnx-weekly distribution missing pyi files | It looks like the onnx-weekly distribution is missing pyi files.
cc @jcwchen @liqunfu | https://github.com/onnx/onnx/issues/5696 | https://github.com/onnx/onnx/pull/5697 | abb445738d5052b5226c697142e5277f8669dba1 | 8f4507545f72b251871e188215b6583f513ead1e | 2023-10-23T18:24:33Z | python | 2023-10-23T22:14:45Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,668 | ["onnx/checker.cc"] | musl build fails due to stat64 call | # Bug Report
### Describe the bug
The usage of `stat64()` currently breaks the build on non-glibc Linux systems (e.g. musl), as those systems, like Apple platforms and WASM, do not support LFS64.
### System information
- OS Platform and Distribution: Alpine Linux 64-bit amd64 (3.18)
- ONNX version: 1.14.1
- Python version: n/a
- GCC/Compiler version: 12.2.1
- CMake version: 3.26.5
- Protobuf version: 3.21.12
- Visual Studio version: n/a
### Reproduction instructions
See [aports!52138](https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/52138) and the associated [pipeline log](https://gitlab.alpinelinux.org/leso-kn/aports/-/jobs/1143766#L1155) for reproduction reference.
### Expected behavior
checker.cc should use `stat()` instead of `stat64()` on non-glibc linux systems.
### Notes
Build log:
```log
/usr/bin/g++ -DEIGEN_MPL2_ONLY -DONNX_ML=1 -DONNX_NAMESPACE=onnx -DONNX_USE_LITE_PROTO=1 -DORT_ENABLE_STREAM -D_GNU_SOURCE -D__ONNX_DISABLE_STATIC_REGISTRATION -D__ONNX_NO_DOC_STRINGS -D__STDC_FORMAT_MACROS -I/builds/leso-kn/aports/testing/onnxruntime/src/onnxruntime-1.16.1/build/_deps/onnx-src -I/builds/leso-kn/aports/testing/onnxruntime/src/onnxruntime-1.16.1/build/_deps/onnx-build -Os -fstack-clash-protection -Wformat -Werror=format-security -D_GLIBCXX_ASSERTIONS=1 -fno-plt -O2 -Wno-deprecated-declarations -flto=auto -ffunction-sections -fdata-sections -DCPUINFO_SUPPORTED -Wnon-virtual-dtor -std=gnu++17 -fPIC -Wall -Wextra -MD -MT _deps/onnx-build/CMakeFiles/onnx.dir/onnx/checker.cc.o -MF _deps/onnx-build/CMakeFiles/onnx.dir/onnx/checker.cc.o.d -o _deps/onnx-build/CMakeFiles/onnx.dir/onnx/checker.cc.o -c /builds/leso-kn/aports/testing/onnxruntime/src/onnxruntime-1.16.1/build/_deps/onnx-src/onnx/checker.cc
/builds/leso-kn/aports/testing/onnxruntime/src/onnxruntime-1.16.1/build/_deps/onnx-src/onnx/checker.cc: In function 'void onnx::checker::check_tensor(const onnx::TensorProto&, const CheckerContext&)':
/builds/leso-kn/aports/testing/onnxruntime/src/onnxruntime-1.16.1/build/_deps/onnx-src/onnx/checker.cc:191:23: error: aggregate 'onnx::checker::check_tensor(const onnx::TensorProto&, const CheckerContext&)::stat64 buffer' has incomplete type and cannot be defined
191 | struct stat64 buffer; // All POSIX except APPLE have stat64
| ^~~~~~
/builds/leso-kn/aports/testing/onnxruntime/src/onnxruntime-1.16.1/build/_deps/onnx-src/onnx/checker.cc:192:48: error: invalid use of incomplete type 'struct onnx::checker::check_tensor(const onnx::TensorProto&, const CheckerContext&)::stat64'
192 | if (stat64((data_path).c_str(), &buffer) != 0) {
| ^
/builds/leso-kn/aports/testing/onnxruntime/src/onnxruntime-1.16.1/build/_deps/onnx-src/onnx/checker.cc:191:16: note: forward declaration of 'struct onnx::checker::check_tensor(const onnx::TensorProto&, const CheckerContext&)::stat64'
191 | struct stat64 buffer; // All POSIX except APPLE have stat64
| ^~~~~~
```
| https://github.com/onnx/onnx/issues/5668 | https://github.com/onnx/onnx/pull/5669 | 34cc49dc5a2b16d1c7ffd0b3be4ee74c5ce50a6d | f556cffc40dc769a2c8e3bb5fd9a90f1e7fbb7eb | 2023-10-14T13:01:34Z | python | 2023-10-15T04:37:20Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,662 | ["onnx/test/reference_evaluator_backend_test.py"] | Why are there two backend tests for the reference runtime? | There are onnx/test/test_backend_reference.py and reference_evaluator_backend_test.py. I wonder how they are related? cc @xadupre | https://github.com/onnx/onnx/issues/5662 | https://github.com/onnx/onnx/pull/5667 | b61d2a8ab7a7e60eb386a387c73e449dd3461b3e | 34cc49dc5a2b16d1c7ffd0b3be4ee74c5ce50a6d | 2023-10-12T01:21:57Z | python | 2023-10-14T08:06:15Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,659 | ["cmake/summary.cmake", "workflow_scripts/protobuf/build_protobuf_win.ps1"] | Windows release build error | Errors like
```
onnx.lib(checker.obj) : error LNK2001: unresolved external symbol "public: int __thiscall google::protobuf::RepeatedField<int>::size(void)const " (?size@?$RepeatedField@H@protobuf@google@@QBEHXZ) [D:\a\onnx\onnx\onnx\.setuptools-cmake-build\onnx_cpp2py_export.vcxproj]
```
https://github.com/onnx/onnx/actions/runs/6450822236/job/17510642914
The latest weekly for Windows was not published due to this. | https://github.com/onnx/onnx/issues/5659 | https://github.com/onnx/onnx/pull/5678 | bbe70113df1fb4fcdce92040fc6d23fe891068ee | 9bf7833888989eeb53ad759b0b62469e670f7bf7 | 2023-10-10T23:52:18Z | python | 2023-10-20T01:09:02Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,616 | ["README.md"] | ONNX PyPI page has broken links | # Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
The first and the last links on the screenshot are broken, there may be others.

### System information
### Reproduction instructions
Go to: https://pypi.org/project/onnx/
### Expected behavior
The links should work
| https://github.com/onnx/onnx/issues/5616 | https://github.com/onnx/onnx/pull/5663 | c2e38cb97d2001c5563f2707c28ebca753f04e85 | b61d2a8ab7a7e60eb386a387c73e449dd3461b3e | 2023-09-22T20:21:16Z | python | 2023-10-12T23:21:15Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,577 | [".azure-pipelines/Linux-CI.yml", ".azure-pipelines/MacOS-CI.yml", ".azure-pipelines/Windows-CI.yml"] | Pipeline Badges display "never built" | Currently our github shows
 :
Could we update the links that the status shown? Maybe the visible setting was changed in Azure? Or should we switch to display badges from the github pipelines?
| https://github.com/onnx/onnx/issues/5577 | https://github.com/onnx/onnx/pull/5581 | 02a41bf1031defac78bd482328381f137ca99137 | fb31e11b9f1f09029aede6f23213286341a1ae34 | 2023-09-10T09:14:17Z | python | 2023-09-13T18:22:54Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,552 | ["onnx/__init__.py", "onnx/external_data_helper.py"] | Absolute path should be rejected in `onnx.save_model(localtion=)` | # Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
The Python API `onnx.save_model` should reject the call if the `location` argument is provided as an absolute path, since the [proto](https://github.com/onnx/onnx/blob/main/onnx/onnx.proto#L600) says
```
// - "location" (required) - POSIX filesystem path relative to the directory where the ONNX
// protobuf model was stored
```
A model whose external data uses an absolute path is likely to fail to load if moved around.
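A minimal sketch of the kind of guard that could run before writing external data; the function name and exact error wording are assumptions, and only `os.path.isabs` is the real API:
```python
import os

def validate_location(location: str) -> None:
    # Sketch: reject absolute paths up front so the saved model stays
    # relocatable, per the relative-path requirement quoted above.
    if os.path.isabs(location):
        raise ValueError(
            "The external data path you provided is an absolute path, "
            "please use a relative path."
        )

validate_location("repro.data")  # ok
# validate_location("/home/zhenhuaw/playground/dl/graph/matmul/repro.data")  # raises
```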
### System information
Not system dependent.
### Reproduction instructions
Run the attached script to generate the model file.
The path should be rejected.
*Note*: you need to change the path to your own local path to reproduce.
```py
import numpy as np
import onnx
from onnx import helper
from onnx import numpy_helper
from onnx import AttributeProto, TensorProto, GraphProto
M = 8
N = 16
K = 64
A = helper.make_tensor_value_info('A', TensorProto.FLOAT, [M, K])
b_data = np.random.uniform(-1, 1, size=K*N).reshape((K, N)).astype('float32')
B = numpy_helper.from_array(b_data, name='B')
C = helper.make_tensor_value_info('C', TensorProto.FLOAT, [M, N])
op = helper.make_node(
'MatMul',
['A', 'B'],
['C'],
)
g = helper.make_graph(
[op],
'test-model',
[A],
[C],
initializer=[B],
)
m = helper.make_model(g, producer_name='onnx-example')
onnx.save_model(m, 'repro.onnx', save_as_external_data=True, location='/home/zhenhuaw/playground/dl/graph/matmul/repro.data', size_threshold=0)
```
### Expected behavior
* Raise error like "The external data path you provided is an absolute path, please use relative path.".
* No model generated.
### Notes
<!-- Any additional information -->
| https://github.com/onnx/onnx/issues/5552 | https://github.com/onnx/onnx/pull/5566 | d2f1a483c8c3fddc1e37071a6dc2d65a4c64d5eb | 3a76db8234263858cf1456fc4f332263204e1e58 | 2023-08-31T10:03:59Z | python | 2023-09-04T15:34:53Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,520 | ["docs/Changelog.md", "docs/Operators.md", "onnx/defs/controlflow/defs.cc", "onnx/defs/controlflow/old.cc", "onnx/reference/ops/op_if.py", "onnx/test/reference_evaluator_test.py"] | Non scalar as condition to If op | # Ask a Question
### Question
I wrote an ONNX If op test case. When the condition input is np.array([1, 2, 0, 3]).astype(bool), the output is the then_branch; when the input is np.array([0, 2, 0, 3]).astype(bool), the output is the else_branch. So from the results it looks like the first element decides True or False, but in the ONNX source code op_if.py:
```
if len(cond.shape) > 0:
    try:
        evaluated_condition = all(cond)
```
`all` means checking whether every item in a list is True, so this doesn't match the experimental result, because [1, 2, 0, 3] contains a 0.
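For comparison, the operator spec treats `cond` as a scalar; a sketch of the unambiguous input (usable with the session built in the script below):
```python
import numpy as np

# A rank-0 boolean avoids the first-element-vs-all() ambiguity entirely.
scalar_cond = np.array(True)
# output = sess.run([], {"cond": scalar_cond})  # sess as constructed below
```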
### Further information
- Relevant Area: <!--e.g., model usage, backend, best practices, converters, shape_inference, version_converter, training, test, operators, IR, ONNX Hub, data preprocessing, CI pipelines. -->
- Is this issue related to a specific model?
**Model name**: <!-- *e.g. mnist* -->
**Model opset**: <!-- *e.g. 17* -->
```
# Define input and output tensors
cond = onnx.helper.make_tensor_value_info('cond', onnx.TensorProto.BOOL, [])
res = onnx.helper.make_tensor_value_info('res', onnx.TensorProto.FLOAT, [5])
# Define then and else output tensors
then_out = onnx.helper.make_tensor_value_info('then_out', onnx.TensorProto.FLOAT, [5])
else_out = onnx.helper.make_tensor_value_info('else_out', onnx.TensorProto.FLOAT, [5])
# Define constant nodes for then and else branches
x = np.array([1, 2, 3, 4, 5]).astype(np.float32)
y = np.array([5, 4, 3, 2, 1]).astype(np.float32)
then_const_node = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=['then_out'],
value=onnx.numpy_helper.from_array(x),
)
else_const_node = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=['else_out'],
value=onnx.numpy_helper.from_array(y),
)
# Define then and else subgraphs
then_body = onnx.helper.make_graph(
[then_const_node],
'then_body',
[],
[then_out]
)
else_body = onnx.helper.make_graph(
[else_const_node],
'else_body',
[],
[else_out]
)
# Define If node
if_node = onnx.helper.make_node(
'If',
inputs=['cond'],
outputs=['res'],
then_branch=then_body,
else_branch=else_body,
)
graph_def = helper.make_graph([if_node],
'if_model',
inputs=[cond],
outputs=[res])
# Convert the model to ONNX format
model = helper.make_model(graph_def, producer_name='onnx-example', opset_imports=[helper.make_opsetid("", 11)])
onnx.save(model, 'if_model.onnx')
# Print the model
# print(model)
# Run the model using onnxruntime
sess = onnxruntime.InferenceSession(model.SerializeToString())
cond_tensor = np.array([1, 2, 0, 3]).astype(bool)
print("cond_tensor: ", cond_tensor)
# print("all(cond_tensor): ", all(cond_tensor))
input_data = {'cond': cond_tensor}
output = sess.run([], input_data)
print("if_output: ", output)
```
### Notes
| https://github.com/onnx/onnx/issues/5520 | https://github.com/onnx/onnx/pull/5617 | 09a0cbba0256e4da1c953df5feb932d436e291eb | c2418f7a163f690e05d354b7ea7d3cb8455e5106 | 2023-08-23T09:08:46Z | python | 2023-09-25T16:21:37Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,470 | ["onnx/defs/shape_inference.cc", "onnx/defs/shape_inference.h"] | "Source" and "target" unclear in shape inference error messages | In shape inference, error messages often contain references to "source" and "target", which are unclear from a user's point of view. I suggest changing to "inferred" and "specified".
https://github.com/onnx/onnx/blob/0d68f4995a49a62f4bccd69c83edf4f3d4dcf2b2/onnx/defs/shape_inference.cc#L123
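A small repro sketch, assuming the wording surfaces when a declared shape disagrees with the inferred one under strict mode (the exact message text may differ):
```python
import onnx
from onnx import TensorProto, helper

# Declared output shape (3,) conflicts with the inferred shape (2,).
node = helper.make_node("Identity", ["x"], ["y"])
graph = helper.make_graph(
    [node], "g",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [2])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [3])],
)
model = helper.make_model(graph)
try:
    onnx.shape_inference.infer_shapes(model, strict_mode=True)
except onnx.shape_inference.InferenceError as e:
    print(e)  # strict mode raises on the inferred-vs-declared mismatch
```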
| https://github.com/onnx/onnx/issues/5470 | https://github.com/onnx/onnx/pull/5494 | efd95c364da37fefa4e131bc5566f77731045d69 | 8a475b34cb3875df311a46f57571646498f5bda7 | 2023-08-04T02:34:15Z | python | 2023-08-16T22:18:46Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,463 | ["onnx/backend/test/runner/__init__.py", "onnx/cpp2py_export.cc", "onnx/shape_inference/implementation.cc", "onnx/shape_inference/implementation.h", "onnx/test/cpp/shape_inference_test.cc", "onnx/test/inference_function_test.py", "onnx/test/parser_test.py", "onnx/test/test_backend_test.py"] | strict_mode is set to False when InferShapesImpl is called with a subgraph | # Bug Report
### Is the issue related to model conversion?
NO
### Describe the bug
When InferShapesImpl is called with a subgraph, ShapeInferenceOptions (options) is reset (https://github.com/onnx/onnx/blob/main/onnx/shape_inference/implementation.cc#L1030). So even if a user wants to do shape inference in strict mode, it still passes when a subgraph fails shape inference.
This falsely implies that shape inference is successful.
Note that there is another related issue (https://github.com/onnx/onnx/issues/4986) which shall be addressed too: for a given model, it may be known that some subgraphs will never get invoked, and in that case shape inference shall pass. This is actually the case for the example model. But again, these are two issues that both need to be addressed.
### System information
NA
### Reproduction instructions
```py
model = onnx.load("./onnx/backend/test/data/node/test_affine_grid_2d_expanded/model.onnx")
model_shape = onnx.shape_inference.infer_shapes(model, check_type=True, strict_mode=True)
```
The ONNX model is in the repo, located at onnx/backend/test/data/node/test_affine_grid_2d_expanded/model.onnx.
### Expected behavior
The repro code shall fail.
### Notes
| https://github.com/onnx/onnx/issues/5463 | https://github.com/onnx/onnx/pull/5488 | 9183bbb4b9d15063ffe6bdaaf7a7f9cb34c1ea3c | 1679c4ac75aeb148df6628c8b303d0b308b69891 | 2023-08-02T19:51:29Z | python | 2023-08-21T23:22:31Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,442 | ["onnx/helper.py", "onnx/test/helper_test.py"] | `helper.make_attribute` should print attribute names in error messages | https://github.com/onnx/onnx/blob/9a17d4d64c210b86107e152d98539fc93aa8f329/onnx/helper.py#L911
prints the enum as ints: `TypeError: Inferred attribute type 2 mismatched with specified type 1`, which is not very readable. We should print the enum names instead. | https://github.com/onnx/onnx/issues/5442 | https://github.com/onnx/onnx/pull/5517 | c4f68dda7043257adbed03c1517aa51d026cfb45 | 12d9cefa3708b4df2559c9089b6c45ca41195b35 | 2023-07-24T18:04:00Z | python | 2023-08-23T20:07:52Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,392 | ["docs/docsgen/source/intro/python.md"] | Remove reference to deprecated APIs in docs | E.g. TENSOR_TYPE_TO_NP_TYPE in https://onnx.ai/onnx/intro/python.html | https://github.com/onnx/onnx/issues/5392 | https://github.com/onnx/onnx/pull/5593 | b82bd2def37bc97a1301f3ca604bf6357e94bd3b | 821d156046a96566e7bb8456696c04716bd27f27 | 2023-07-04T13:41:06Z | python | 2023-09-14T21:49:13Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,379 | [".github/workflows/lint.yml", "CONTRIBUTING.md", "README.md", "docs/AddNewOp.md", "docs/CONTRIBUTING.md"] | Include instructions on how to update the documentation in `CONTRIBUTING.md` | We have many PRs that tries to fix typos etc. in op documentation not knowing to regenerate documentation or otherwise did the change in the wrong place. We should document this in `CONTRIBUTING.md` and be able to point contributors to it in the ci.
Or do we have one somewhere already?
cc @jcwchen | https://github.com/onnx/onnx/issues/5379 | https://github.com/onnx/onnx/pull/5584 | 55ffe4e9de916732213471630857f6a920e59e1d | b82bd2def37bc97a1301f3ca604bf6357e94bd3b | 2023-06-29T01:10:16Z | python | 2023-09-13T18:30:45Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,377 | [".github/workflows/lint.yml", "docs/CONTRIBUTING.md", "onnx/defs/gen_doc.py", "tools/update_doc.bat", "tools/update_doc.sh"] | [Feature request] Generate the ONNX and ONNX-ML operator doc in one run | ### System information
Main branch
### What is the problem that this feature solves?
Previously ONNX CI does not run `ONNX_ML=0 python onnx/defs/gen_doc.py` to correctly update `Operators.md`: https://github.com/onnx/onnx/pull/5367. It easily brings confusion and I don't see benefit of making doc generation separate.
### Alternatives considered
_No response_
### Describe the feature
Just like an old closed PR: https://github.com/onnx/onnx/pull/3467: Generate ONNX and ONNX-ML operator doc in one run for simplicity and prevent confusions.
### Will this influence the current api (Y/N)?
Y. `python onnx/defs/gen_doc.py` will produce documents for ONNX and ONNX-ML in one shot.
### Feature Area
Doc generation
### Are you willing to contribute it (Y/N)
Yes
### Notes
_No response_ | https://github.com/onnx/onnx/issues/5377 | https://github.com/onnx/onnx/pull/5381 | b481d930077c529056d1f43af8782f76420ab231 | 5e78d2010169e8229b9a33e42abcfc92b2c6fccb | 2023-06-29T00:11:30Z | python | 2023-07-04T16:22:52Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,346 | ["onnx/defs/shape_inference.h"] | shape_inference.h compilation failure | https://github.com/onnx/onnx/blob/main/onnx/defs/shape_inference.h
I encountered compilation failure to due to the missing "#include <algorithm>" for the use of std::transform in the code. PTAL? | https://github.com/onnx/onnx/issues/5346 | https://github.com/onnx/onnx/pull/5363 | 8a980683df9acbcb82dc3385fc7eb8cce4ed840f | 0715646d1fb1eed830999c8a43dce532ead08491 | 2023-06-20T16:40:01Z | python | 2023-06-27T22:22:28Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,324 | ["onnx/defs/shape_inference.cc", "onnx/defs/shape_inference.h", "onnx/defs/tensor/defs.cc", "onnx/defs/tensor/old.cc", "onnx/test/shape_inference_test.py"] | Shape Inference for reshape with dynamic shape | # Bug Report
Currently, shape inference for [Reshape](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Reshape) intentionally does not succeed for dynamic shapes. In cases where the "shape" parameter has an inferred shape that is a value (and not a [symbolic param](https://github.com/onnx/onnx/blob/main/onnx/onnx.proto3#L664-L665)), we can determine the output rank.
Currently, however shape inference does not succeed [does not currently happen](https://github.com/onnx/onnx/blob/3dafb54b11a496cd933de17fd303324d0686fc3b/onnx/test/shape_inference_test.py#L503-L511).
In the provided example copied in below, we should be able to infer a shape of `(None, None)` rather than just `None`.
```
def test_reshape_dynamic_shape(self) -> None:
graph = self._make_graph(
[("x", TensorProto.UINT8, (2, 4, 3)), ("shape", TensorProto.INT64, (2,))],
[make_node("Reshape", ["x", "shape"], ["y"])],
[],
)
self._assert_inferred(
graph, [make_tensor_value_info("y", TensorProto.UINT8, None)]
)
```
If `"shape"` is either `None` (i.e. shape cannot be inferred) or it is symbolic (e.g. `("M",)`) we should still yield `None` as currently done.
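Concretely, since `shape` here has a static length of 2, the rank of `y` is known even though its dimensions are not; a sketch of the assertion the test could make instead, reusing the helpers quoted above:
```python
# Hypothetical expected result once rank inference lands: rank 2 is known
# from shape's static length (2,); the dimensions themselves stay unknown.
self._assert_inferred(
    graph, [make_tensor_value_info("y", TensorProto.UINT8, (None, None))]
)
```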
| https://github.com/onnx/onnx/issues/5324 | https://github.com/onnx/onnx/pull/5327 | e2e97ccc36211e72c3607d46f71782542d1df5ee | 391f5703abe1a0d5c8cbb0b38df079b0bacec079 | 2023-06-16T14:47:58Z | python | 2023-06-21T01:10:29Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,292 | ["docs/docsgen/source/api/checker.md", "docs/docsgen/source/api/defs.md"] | Update documentation for OpSchema | ### System information
_No response_
### What is the problem that this feature solves?
_No response_
### Alternatives considered
_No response_
### Describe the feature
- FormalParameter
- Etc.
### Will this influence the current api (Y/N)?
_No response_
### Feature Area
_No response_
### Are you willing to contribute it (Y/N)
Yes
### Notes
_No response_ | https://github.com/onnx/onnx/issues/5292 | https://github.com/onnx/onnx/pull/5297 | 4eaaac1af37625341cd48a89a9017d5094c5f2f2 | 0f536363edbfdc9653a544c48f69c2c194b38daa | 2023-06-06T23:20:19Z | python | 2023-06-13T03:24:06Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,282 | ["docs/Changelog.md", "docs/Operators.md", "onnx/defs/math/defs.cc"] | typo in documentation of Einsum operator | # Bug Report
https://github.com/onnx/onnx/blob/main/docs/Operators.md#Einsum
Einsum
An einsum of the form term1, term2 -> output-term produces an output tensor using the following equation
output[output-term] = reduce-sum( input1[term1] * input2[term] ) <<<< needs to be: * input2[term2] )
where the reduce-sum performs a summation over all the indices occurring in the input terms (term1, term2) that do not occur in the output-term.
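A quick numpy illustration of that reduce-sum rule (my sketch, not part of the original doc): in `"ij,jk->ik"`, the index `j` appears in both inputs but not in the output, so it is summed over.
```python
import numpy as np

a = np.random.rand(2, 3)
b = np.random.rand(3, 4)
out = np.einsum("ij,jk->ik", a, b)  # sum over the shared index j
assert np.allclose(out, a @ b)      # equivalent to a matrix product
```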
This is a small typo, however it can cause unnecessary confusion. | https://github.com/onnx/onnx/issues/5282 | https://github.com/onnx/onnx/pull/5537 | 934e80c09a23c97ac0db427899dd48e3420b691f | bb153e35324dabf350093755751b4d78b36cdd90 | 2023-06-02T22:30:39Z | python | 2023-08-28T23:55:07Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,216 | ["onnx/helper.py", "onnx/test/helper_test.py"] | `onnx.helper.make_attribute` could not infer type correctly from empty list | # Bug Report
### Is the issue related to model conversion?
### Describe the bug
Since `all([])` is always `True`, passing an empty list to `make_attribute` will always produce a list of ints
https://github.com/onnx/onnx/blob/a02166572855083b3b6f5a329ba438f8cd8635f9/onnx/helper.py#L858-L860
### System information
### Reproduction instructions
```python
onnx.helper.make_attribute("attr_name", []) # Always .type == INTS which isn't always expected
```
### Expected behavior
For an empty list, the `make_attribute` method should fail with a type inference error, or there should be an explicitly typed `make_attribute` method
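A sketch of what the explicitly typed variant could look like, assuming a hypothetical `attr_type` keyword on `make_attribute`:
```python
from onnx import AttributeProto, helper

# Hypothetical explicit override: with it, [] is unambiguous; without it,
# the helper would be expected to raise rather than silently pick INTS.
attr = helper.make_attribute("attr_name", [], attr_type=AttributeProto.FLOATS)
assert attr.type == AttributeProto.FLOATS
```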
### Notes
| https://github.com/onnx/onnx/issues/5216 | https://github.com/onnx/onnx/pull/5220 | 2e9a6757ad33ef4bc0c4a4ecc61837f79885a82a | 997c1d286aca527db0bc4baaf565feb76b74d329 | 2023-05-10T07:53:27Z | python | 2023-05-14T22:58:20Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,212 | ["onnx/defs/generator/old.cc", "onnx/shape_inference/implementation.cc", "onnx/shape_inference/implementation.h", "onnx/test/data_propagation_test.py", "onnx/test/model_inference_test.py", "onnx/test/shape_inference_test.py"] | Segfault running shape inference with `data_prop=True` on model with nested function. | # Bug Report
### Is the issue related to model conversion?
No. The checker passed; the inliner passed.
Same shape inference call passed on inlined model.
### Describe the bug
As title
### System information
ONNX version: 1.15 dbeee2465b8628ddf742e8ff52144b424f3c45db
### Reproduction instructions
Model can be downloaded [here](https://microsoft-my.sharepoint.com/:u:/p/bowbao/EZVQDNP6-wpBqmls_ScqhgYB9iQJe_P0vbsZct55Gtw7MQ?e=9MBbVO).
```
import onnx
import onnx.shape_inference
import onnx.inliner
model = onnx.load("maf_mnist.onnx")
onnx.checker.check_model(model)
print("Pass checker!")
# Works fine
onnx.shape_inference.infer_shapes(model, data_prop=False)
print("Success with data_prop=False !")
inlined_model = onnx.inliner.inline_local_functions(model)
print("Success with inlining!")
onnx.shape_inference.infer_shapes(inlined_model, data_prop=True)
print("Success with data_prop=True on inlined model !")
# Segfaults
onnx.shape_inference.infer_shapes(model, data_prop=True)
```
### Expected behavior
### Notes
| https://github.com/onnx/onnx/issues/5212 | https://github.com/onnx/onnx/pull/5223 | 1d17a4090995ab86824f7c8fe41d7bd019076a55 | 213b525a51ead28961d9b4f2764b08c7e336bf2c | 2023-05-08T22:01:18Z | python | 2023-05-16T20:54:06Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,182 | [".gitignore", "onnx/__init__.py", "onnx/checker.py", "onnx/shape_inference.py", "onnx/test/basic_test.py", "onnx/test/test_external_data.py", "onnx/utils.py"] | [Feature request] Add possibility to pass `pathlib.Path` where filepath is expected | ### System information
name : onnx
version : 1.13.1
description : Open Neural Network Exchange
dependencies
- numpy >=1.16.6
- protobuf >=3.20.2,<4
- typing-extensions >=3.6.2.1
required by
- onnxconverter-common *
- skl2onnx >=1.2.1
### What is the problem that this feature solves?
`onnx` file handling accepts only `IO[bytes] | str`. If a Python `pathlib.Path` is passed, static type checkers such as `pyright` and `pylance` emit errors, which is incorrect.
```python
import onnx
import pathlib
model = onnx.load(pathlib.Path("model.onnx"))
```
Causes `PyRight` error
```
Argument of type "Path" cannot be assigned to parameter "f" of type "IO[bytes] | str" in function "load_model"
Type "Path" cannot be assigned to type "IO[bytes] | str"
"Path" is incompatible with "IO[bytes]"
"Path" is incompatible with "str" (reportGeneralTypeIssues)
```
Commonly, using `pathlib` is considered preferable to `str`, as it reduces the risk of manual error.
### Alternatives considered
One can avert this error by
```python
import pathlib
import onnx
import typing
modelpath = pathlib.Path("model.onnx")
model = onnx.load(str(modelpath))
# or
with open(modelpath, "rb") as fobj:
model = onnx.load(fobj)
# or
model = onnx.load(typing.cast(str, modelpath))
# or doing over-engineered stuff with pathlib.Path
```
Or one can use manual path handling with `str`, which, as previously mentioned, is often considered non-optimal.
### Describe the feature
1. Using `pathlib` to handle paths is considered best practice by many.
2. Would remove unnecessary object transformations to satisfy type checkers
3. Reduces `# type: ignore` usage in user code base.
These, in general, should benefit general code quality.
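A sketch of what the widened hint could look like; `FileRef` and `load_model_path` are illustrative names, not existing onnx APIs:
```python
import os
import pathlib
from typing import IO, Union

# Illustrative alias: adding os.PathLike lets pathlib.Path type-check.
FileRef = Union[IO[bytes], str, os.PathLike]

def load_model_path(f: FileRef) -> None:
    # os.fspath collapses any PathLike to str at the boundary.
    path = os.fspath(f) if isinstance(f, os.PathLike) else f
    print(f"would load from {path!r}")

load_model_path(pathlib.Path("model.onnx"))  # no type-checker complaint
```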
### Will this influence the current api (Y/N)?
No. The current code that expects a path should work when passed a `pathlib.Path`. This feature should only affect the type hints.
### Feature Area
best practices, type hinting
### Are you willing to contribute it (Y/N)
Yes
### Notes
This change is very minor and could probably be done in a few hours by a "replace all": `Union[IO[bytes], str]` -> `Union[IO[bytes], str, Path]`.
closed | onnx/onnx | https://github.com/onnx/onnx | 5,128 | ["docs/docsgen/source/_templates/layout.html", "docs/docsgen/source/_templates/sidebar-nav-bs.html", "docs/docsgen/source/conf.py", "docs/docsgen/source/onnx_sphinx.py", "docs/docsgen/source/requirements.txt"] | Cast-19 table in onnx.ai docs is broken | # Bug Report
### Is the issue related to model conversion?
### Describe the bug
The tables in https://onnx.ai/onnx/operators/onnx__Cast.html#cast-19 are broken, but the markdown version in https://github.com/onnx/onnx/blob/main/docs/Operators.md#Cast is fine.
### System information
### Reproduction instructions
See https://onnx.ai/onnx/operators/onnx__Cast.html#cast-19
### Expected behavior
### Notes
I don't know why, but the markdown syntax and the sphinx syntax seem to have some differences?
closed | onnx/onnx | https://github.com/onnx/onnx | 5,093 | [".github/workflows/clang_tidy_review.yml", ".github/workflows/clang_tidy_review_post.yml"] | [CI failure] The new added Post clang-tidy review comments does not work | # Bug Report
### Is the issue related to model conversion?
No.
### Describe the bug
CI failure: the newly added Post clang-tidy review comments workflow does not work: https://github.com/onnx/onnx/blob/main/.github/workflows/clang_tidy_review_post.yml
| https://github.com/onnx/onnx/issues/5093 | https://github.com/onnx/onnx/pull/5164 | 3ccb8eac6f8cfbeb014d60641613e0fe2e29f4e9 | 159a159c6b21abb3e4b864fa743e16be0a326bd0 | 2023-04-05T18:11:43Z | python | 2023-04-21T01:01:05Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,055 | ["setup.cfg"] | pytest: `--current-env has been renamed to --nbval-current-env` | "--current-env has been renamed to --nbval-current-env" from a deprecation warning. We should remove the config instead and add it when runing pytest. (Remove pytest.ini) | https://github.com/onnx/onnx/issues/5055 | https://github.com/onnx/onnx/pull/5065 | ff9f6c429383690c19ad87271815178dff361397 | 0d1f14422cf8912e561bcb02d455917935ff3346 | 2023-03-28T15:13:08Z | python | 2023-04-18T18:57:54Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 5,018 | ["onnx/checker.py"] | Refine docs for check_model | Current version:
> Check the consistency of a model. An exception is raised if the test fails.
It would be good if we documented the kinds of checks done and the type of exception raised, so users know what to catch, and also clarified that it runs shape inference when `strict` is True. (Right now it says `if True, the function checks shapes can be inferred`.)
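A sketch of the expanded docstring being asked for; the exception name matches `onnx.checker.ValidationError`, while the rest of the wording is illustrative:
```python
def check_model(model, full_check=False):
    """Check the consistency of a model.

    Checks include (illustrative list): well-formed proto fields, valid
    opset imports, and per-node conformance to the operator schemas.

    Raises:
        onnx.checker.ValidationError: if the model is invalid.

    When ``full_check`` is True, shape inference is also run and must
    succeed for the check to pass.
    """
```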
Should we default `strict` to `True`? @jcwchen | https://github.com/onnx/onnx/issues/5018 | https://github.com/onnx/onnx/pull/5736 | 2bb6e5f46f46c71f5ad25c6ee7307885e896f081 | 02eb25e6512cbff983a55422f8d60baf4d1df2b5 | 2023-03-20T19:44:07Z | python | 2023-11-03T16:53:42Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,992 | ["CMakeLists.txt", "README.md"] | error: static assertion failed: Protobuf only supports C++14 and newer | # Bug Report
### Describe the bug
The build fails because the protobuf header is built with -std=gnu++11 when it requires C++17.
I added -std=c++20 to CXXFLAGS, but it is added earlier than -std=gnu++11 and doesn't take effect.
```
[ 10%] Building CXX object CMakeFiles/onnx_proto.dir/onnx/onnx-ml.pb.cc.o
/depot/gcc-12.2.0/bin/g++ -DONNX_API="__attribute__((__visibility__(\"default\")))" -DONNX_ML=1 -DONNX_NAMESPACE=onnx -D__STDC_FORMAT_MACROS -I/u/yuriv/nn-trainer/deps/onnx -isystem /u/yuriv/local/include -std=c++20 -Wnon-virtual-dtor -O3 -DNDEBUG -fPIC -fvisibility=hidden -fvisibility-inlines-hidden -std=gnu++11 -MD -MT CMakeFiles/onnx_proto.dir/onnx/onnx-ml.pb.cc.o -MF CMakeFiles/onnx_proto.dir/onnx/onnx-ml.pb.cc.o.d -o CMakeFiles/onnx_proto.dir/onnx/onnx-ml.pb.cc.o -c /u/yuriv/nn-trainer/deps/onnx/onnx/onnx-ml.pb.cc
In file included from /u/yuriv/nn-trainer/deps/onnx/onnx/onnx-ml.pb.h:11,
from /u/yuriv/nn-trainer/deps/onnx/onnx/onnx-ml.pb.cc:4:
/u/yuriv/local/include/google/protobuf/port_def.inc:200:15: error: static assertion failed: Protobuf only supports C++14 and newer.
200 | static_assert(PROTOBUF_CPLUSPLUS_MIN(201402L), "Protobuf only supports C++14 and newer.");
| ^~~~~~~~~~~~~~~~~~~~~~
/u/yuriv/local/include/google/protobuf/port_def.inc:200:15: note: the comparison reduces to ‘(201103 >= 201402)’
In file included from /u/yuriv/local/include/google/protobuf/stubs/common.h:44,
from /u/yuriv/local/include/google/protobuf/io/coded_stream.h:130,
from /u/yuriv/nn-trainer/deps/onnx/onnx/onnx-ml.pb.h:24:
```
### System information
- OS Platform and Distribution: Linux (3.10.0-1160.31.1.el7.x86_64)
- ONNX version (*e.g. 1.13*): current master
- Python version: 3.9
- GCC/Compiler version (if compiling from source): gcc-12.2.0
- CMake version: 3.22.3
- Protobuf version: protobuf's master rev. 626c7e7734662b7017402a0be137e2409c828b00
| https://github.com/onnx/onnx/issues/4992 | https://github.com/onnx/onnx/pull/5119 | d89c53ef8a7f1c9bdefc4c1e32d5623a3e06a9e1 | a979e756133167e0bdf5af23441cca4d189e805c | 2023-03-14T17:46:02Z | python | 2023-04-13T17:48:27Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,964 | ["docs/docsgen/source/intro/python.rst"] | [CI] The pipeline to generate and publish ONNX docs failed after IR bumped | # Bug Report
### Is the issue related to model conversion?
### Describe the bug
The pipeline to generate and publish ONNX docs [failed](https://github.com/onnx/onnx/actions/runs/4320587058/jobs/7540968637) after the [IR bump](https://github.com/onnx/onnx/pull/4911).
### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 20.04*): ubuntu-latest
- ONNX version (*e.g. 1.13*): latest main branch
- Python version: 3.10
### Expected behavior
The CI should be green and produce correct web content.
### Notes
cc @liqunfu @xadupre | https://github.com/onnx/onnx/issues/4964 | https://github.com/onnx/onnx/pull/4969 | 71119c9a012b667a8d8fee8747f557bfa0779868 | f335fa2ea5c664ecc4ca71a94979a1fa2649cfe5 | 2023-03-06T22:45:30Z | python | 2023-03-14T00:25:54Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,930 | [".github/workflows/release_linux_aarch64.yml", ".github/workflows/release_linux_x86_64.yml", ".github/workflows/release_mac.yml", ".github/workflows/release_win.yml", "requirements-release.txt"] | [Announce] Move onnx-weekly package from TestPyPI to PyPI | ### Announce
Previously [onnx-weekly packages](https://test.pypi.org/project/onnx-weekly/) were published on TestPyPI to enable experimentation and early testing, but we have moved the onnx-weekly package from TestPyPI to PyPI. Now the onnx-weekly package is on both [TestPyPI](https://test.pypi.org/project/onnx-weekly/) and [PyPI](https://pypi.org/project/onnx-weekly/). Going forward we will still upload onnx-weekly wheels to TestPyPI until the next ONNX release (probably 1.14.0). Please start using onnx-weekly from PyPI instead. Thank you!
- TODO item: stop uploading onnx-weekly wheel to TestPyPI after next ONNX 1.14.0 release
### Further information
- Relevant Area: Python package
### Notes
Here is the merged PR to achieve this goal: https://github.com/onnx/onnx/pull/4897. | https://github.com/onnx/onnx/issues/4930 | https://github.com/onnx/onnx/pull/5233 | 1d814e204b92642412953d47c100d3ff6a29fc23 | 751496fe8282575beb5a5b612dabadaff8034a8f | 2023-02-22T17:16:33Z | python | 2023-05-18T17:38:57Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,878 | ["pyproject.toml"] | pip install in an environment without setuptools fails | # Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
Installing from source in an environment without setuptools will fail. We should specify this requirement in pyproject.toml so pip picks it up and installs it automatically.
Reference: https://github.com/justinchuby/onnx-script/blob/main/pyproject.toml#L4-L5
### System information
ONNX 1.14
| https://github.com/onnx/onnx/issues/4878 | https://github.com/onnx/onnx/pull/5531 | c77397e046022faed614d43594bd13b7ec28fab7 | b8482d71eca8d35414e125a5cfbfc16797367fe4 | 2023-02-09T19:59:26Z | python | 2023-08-27T02:16:20Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,870 | ["onnx/checker.py", "onnx/cpp2py_export.cc", "onnx/onnx_cpp2py_export/checker.pyi", "onnx/test/checker_test.py"] | [Feature request] Expose lexical scope context in Python checker | ### System information
Latest
### What is the problem that this feature solves?
Currently lexical scope context is not exposed in Python onnx.checker.
### Alternatives considered
_No response_
### Describe the feature
Follow up of https://github.com/onnx/onnx/pull/4720. Expose lexical scope context in Python onnx.checker. See https://github.com/onnx/onnx/blob/3747442528c820ab8dd41111ef3e9ab1a4da6062/onnx/cpp2py_export.cc#L378
### Will this influence the current api (Y/N)?
Y. Extended parameters will be added.
### Feature Area
checker
### Are you willing to contribute it (Y/N)
Yes
### Notes
_No response_ | https://github.com/onnx/onnx/issues/4870 | https://github.com/onnx/onnx/pull/5693 | 2cf3749d475e513cb9a2124f89308e214d4020db | 799dcfe6d381b6d968004147b6571564885dfa24 | 2023-02-08T18:17:11Z | python | 2023-11-09T01:26:57Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,857 | ["onnx/backend/test/case/model/__init__.py", "onnx/backend/test/data/real/test_bvlc_alexnet/data.json", "onnx/backend/test/data/real/test_densenet121/data.json", "onnx/backend/test/data/real/test_inception_v1/data.json", "onnx/backend/test/data/real/test_inception_v2/data.json", "onnx/backend/test/data/real/test_resnet50/data.json", "onnx/backend/test/data/real/test_shufflenet/data.json", "onnx/backend/test/data/real/test_squeezenet/data.json", "onnx/backend/test/data/real/test_vgg19/data.json", "onnx/backend/test/data/real/test_zfnet512/data.json"] | s3.amazonaws.com/download.onnx/models are forbidden for download | # Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
s3.amazonaws.com/download.onnx/models are forbidden for download. People will see test failures when testing ONNX (pytest under the onnx directory) in a fresh environment (including current CI pipelines), because the model download link is broken, e.g., https://s3.amazonaws.com/download.onnx/models/opset_10/resnet50.tar.gz.
### System information
### Reproduction instructions
```
cd onnx
pytest
```
### Expected behavior
pytest should always succeed.
### Notes
| https://github.com/onnx/onnx/issues/4857 | https://github.com/onnx/onnx/pull/4865 | ef2a28073b523d5a8925b03bd897834ab8ddc458 | 358807f2f79b534d1dd76566e2c886540d7716bd | 2023-02-06T19:24:01Z | python | 2023-02-07T22:42:44Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,850 | ["onnx/reference/ops/_op.py", "onnx/reference/ops/op_reduce_sum.py", "onnx/test/reference_evaluator_test.py"] | [Reference runtime] ReduceSum an integer is required (got type RefAttrName) | While running the gpt2 model:
```
ReduceSum(self_5, dim_13) -> result
F
================================ FAILURES =================================
_______________ TestFxToOnnxWithOnnxRuntime.test_gpt2_tiny ________________
Traceback (most recent call last):
File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.9/site-packages/onnx/reference/ops/op_reduce_sum.py", line 43, in _run
res = np.sum(x, axis=axes, keepdims=keepdims, dtype=x.dtype)
File "<__array_function__ internals>", line 180, in sum
File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.9/site-packages/numpy/core/fromnumeric.py", line 2298, in sum
return _wrapreduction(a, np.add, 'sum', axis, dtype, out, keepdims=keepdims,
File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.9/site-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
TypeError: an integer is required (got type RefAttrName)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.9/site-packages/onnx/reference/op_run.py", line 421, in run
res = self._run(*args, **kwargs)
File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.9/site-packages/onnx/reference/op_run.py", line 574, in _run
results = self.impl_.run(None, feeds, attributes=attributes)
File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.9/site-packages/onnx/reference/reference_evaluator.py", line 457, in run
outputs = node.run(*inputs, **linked_attributes)
File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.9/site-packages/onnx/reference/ops/op_reduce_sum.py", line 20, in run
res = self._run(x, axes=axes, keepdims=keepdims)
File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.9/site-packages/onnx/reference/ops/op_reduce_sum.py", line 46, in _run
raise TypeError(
TypeError: Unable to reduce shape (1, 2, 3, 3) with axes=(-1,) and keepdims=RefAttrName('keepdim').
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/justinchu/dev/pytorch/test/onnx/test_fx_to_onnx_with_onnxruntime.py", line 126, in test_gpt2_tiny
ort_outputs = self._run_ort(onnx_model, (input_ids, attention_mask))
File "/home/justinchu/dev/pytorch/test/onnx/test_fx_to_onnx_with_onnxruntime.py", line 30, in _run_ort
return session.run(
File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.9/site-packages/onnx/reference/reference_evaluator.py", line 457, in run
outputs = node.run(*inputs, **linked_attributes)
File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.9/site-packages/onnx/reference/op_run.py", line 423, in run
raise TypeError(
TypeError: Issues with types [<class 'numpy.ndarray'>, <class 'numpy.ndarray'>] and attributes ['dtype', 'keepdim'] and linked attributes=[] (operator 'OpFunction').
``` | https://github.com/onnx/onnx/issues/4850 | https://github.com/onnx/onnx/pull/4856 | ded7e3a27449750fb429b0f88a494e10fd555be7 | 3747442528c820ab8dd41111ef3e9ab1a4da6062 | 2023-02-03T22:48:25Z | python | 2023-02-08T18:04:03Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,820 | [".azure-pipelines/Linux-CI.yml", ".azure-pipelines/MacOS-CI.yml", "onnx/defs/tensor_proto_util.cc"] | [Feature request] CI: compile with UBSan and ASan | ### System information
_No response_
### What is the problem that this feature solves?
UBSan finds undefined behavior in https://github.com/pytorch/pytorch/issues/93174. We can use it in our CI (maybe)? https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html
### Alternatives considered
_No response_
### Describe the feature
_No response_
### Will this influence the current api (Y/N)?
_No response_
### Feature Area
_No response_
### Are you willing to contribute it (Y/N)
None
### Notes
_No response_ | https://github.com/onnx/onnx/issues/4820 | https://github.com/onnx/onnx/pull/4823 | a90d7029f68690928a0737e290fbc4b497b4acf4 | 1e09a7b0ae3c19e003ef39e3c88c0bf625c2d97d | 2023-01-28T04:17:51Z | python | 2023-02-16T16:22:23Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,799 | ["docs/Changelog.md", "docs/Operators.md", "onnx/defs/nn/defs.cc"] | Changlog.md is broken, showing raw markdown | # Bug Report
### Is the issue related to model conversion?
Nope.
### Describe the bug
[Changelog.md](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#version-15-of-the-default-onnx-operator-set) is broken below the section "Version 15 of the default ONNX operator set", showing raw markdown (e.g. `<dt><tt>epsilon</tt> : float (default is 1e-05)</dt>`) after https://github.com/onnx/onnx/pull/4747.
### System information
NA - applies to website generally.
### Reproduction instructions
Visit the URL https://github.com/onnx/onnx/blob/main/docs/Changelog.md#version-15-of-the-default-onnx-operator-set, and scroll down.
### Expected behavior
The markdown appears as formatted HTML, like the sections above.
[update] Jacky Chen and Xavier Dupre are aware of this. | https://github.com/onnx/onnx/issues/4799 | https://github.com/onnx/onnx/pull/4801 | 7c03457a061a25df3fb8c92b80d011bff1fc7486 | bcbac252792a9d2ba8b3229e24600f9fb3aaeec5 | 2023-01-23T22:47:32Z | python | 2023-01-24T20:00:18Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,762 | ["VERSION_NUMBER", "docs/OnnxReleases.md"] | Bump onnx version in main | ONNX nightly shares the same version with the current release (1.13). This makes it harder for consuming packages to adjust behavior based on onnx versions. Would it make sense to bump the version right after release? | https://github.com/onnx/onnx/issues/4762 | https://github.com/onnx/onnx/pull/4772 | 0fe2e20ba1c852f04296a3d80eec508bac2fd13c | 85fb958c6fec31678f60aedc6d964405c09dec20 | 2023-01-11T00:43:29Z | python | 2023-01-18T20:43:13Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,754 | ["docs/docsgen/source/onnx_sphinx.py"] | Diff in operators documentation is in wrong direction | # Bug Report
### Is the issue related to model conversion?
No.
### Describe the bug
There is new documentation for operators on [onnx.ai](https://onnx.ai/onnx/operators/index.html), and it is pretty cool, thank you!
Also it can show a diff between any two versions, but for now this diff is reversed. Look at [any of them](https://onnx.ai/onnx/operators/text_diff_Split_13_18.html): text in green was deleted with the version upgrade, and text in red was added (just compare to the latest version).
### System information
Firefox browser.
### Reproduction instructions
Open [any diff page](https://onnx.ai/onnx/operators/text_diff_Slice_11_13.html). Compare [old](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Slice-11) and [new](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Slice-13) versions manually.
### Expected behavior
On the diff page, text in green should be what was added between the two versions, and text in red what was deleted.
### Notes
—
| https://github.com/onnx/onnx/issues/4754 | https://github.com/onnx/onnx/pull/4773 | a685adb84365b91fadb34d663a631bc46a3309cb | 9dd64dc74d7db856fcf9f7bff9c07e5cbe013036 | 2023-01-09T11:57:21Z | python | 2023-01-13T18:17:53Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,724 | ["docs/docsgen/source/conf.py", "docs/docsgen/source/expect_onnxruntime.rst", "docs/docsgen/source/onnx_sphinx.py"] | Attribute types and default values missing in docs at onnx.ai | # Bug Report
### Describe the bug
The HTML version of the ONNX operator specs at https://onnx.ai/onnx/operators/index.html is missing types and default values for attributes. Compare the [HTML version of Resize](https://onnx.ai/onnx/operators/onnx__Resize.html#resize-18) against the [markdown version](https://github.com/onnx/onnx/blob/main/docs/Operators.md#resize). This is annoying, as the HTML version loads faster and is otherwise more convenient to browse.
### Expected behavior
The HTML version of the operator documentation should include types and default values for attributes. | https://github.com/onnx/onnx/issues/4724 | https://github.com/onnx/onnx/pull/4748 | e5722a03ef8fbb6068dbc64201931ae12fbf535e | ba018568508166f75fe78130ff17dafcbce92e55 | 2022-12-27T08:20:02Z | python | 2023-01-10T23:49:09Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,706 | ["onnx/mapping.py"] | DeprecatedWarningDict not printing the replacement | # Bug Report
### Describe the bug
https://github.com/onnx/onnx/blob/45f508bb7c3206cf504b923d54076e42d1ed591c/onnx/mapping.py#L86-L105
In line 100, the string should be an f-string.
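A tiny runnable illustration of the difference (the variable names are mine, not the ones in mapping.py):
```python
name, repl = "OLD_NAME", "NEW_NAME"
# Bug: a plain string leaves the placeholders literal.
print("{name} is deprecated; use {repl} instead.")
# Fix: the f prefix actually interpolates the variables.
print(f"{name} is deprecated; use {repl} instead.")
```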
| https://github.com/onnx/onnx/issues/4706 | https://github.com/onnx/onnx/pull/4707 | 261394b6bd5d41964923fedde6dcecccdf1c3fae | 8929745333226cdbb7180e56e2c5ce6cc6b75ab5 | 2022-12-15T06:33:46Z | python | 2023-01-12T17:03:03Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,703 | ["onnx/defs/schema.cc", "onnx/onnx_cpp2py_export/defs.pyi", "onnx/test/schema_test.py"] | Referencing `_function_body` of operator schema causes SEGV | # Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
Referencing `_function_body` of an operator schema causes a SEGV error like the following:
```
$ python3
Python 3.10.8 (main, Dec 1 2022, 11:46:45) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import onnx
>>> onnx.__version__
'1.13.0'
>>> onnx.defs.get_schema("Selu").has_function
True
>>> onnx.defs.get_schema("Selu")._function_body
zsh: segmentation fault python3
```
### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 16.04*): macOS with brew installed python3, linux with python3 installed by pyenv
- ONNX version (*e.g. 1.7*): 1.13.0
- Python version: 3.9.15, 3.10.8
- GCC/Compiler version (if compiling from source):
- CMake version:
- Protobuf version: 3.20.3(pip show protobuf)
- Visual Studio version (if applicable):
### Reproduction instructions
```bash
python3 -c 'import onnx; onnx.defs.get_schema("Selu")._function_body'
```
### Expected behavior
Referencing the `_function_body` property of a schema should not cause a SEGV.
| https://github.com/onnx/onnx/issues/4703 | https://github.com/onnx/onnx/pull/4711 | f335fa2ea5c664ecc4ca71a94979a1fa2649cfe5 | 67a7080b2cfbd7dafc448967664c8a740b512e1c | 2022-12-13T15:09:43Z | python | 2023-03-14T14:55:38Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,700 | ["docs/docsgen/source/api/checker.rst", "onnx/checker.py", "onnx/cpp2py_export.cc", "onnx/onnx_cpp2py_export/checker.pyi", "onnx/test/checker_test.py"] | [Feature request] ONNX checker checks function proto validity | ### System information
_No response_
### What is the problem that this feature solves?
ONNX checker currently checks model proto validity with `check_model`. It would be very helpful to have a similar API that checks function proto validity.
This greatly helps testing function libraries built with ONNX script.
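A sketch of how such an API could be used, assuming a `check_function` entry point that mirrors `check_model` (the name and exact signature are assumptions for illustration):
```python
import onnx
from onnx import helper

# Build a small FunctionProto: y = x + 1.
const_one = helper.make_node(
    "Constant", [], ["one"],
    value=helper.make_tensor("one_t", onnx.TensorProto.FLOAT, [], [1.0]),
)
add = helper.make_node("Add", ["x", "one"], ["y"])

add_one = helper.make_function(
    domain="custom",
    fname="AddOne",
    inputs=["x"],
    outputs=["y"],
    nodes=[const_one, add],
    opset_imports=[helper.make_operatorsetid("", 18)],
)

# Hypothetical check, analogous to onnx.checker.check_model(model).
onnx.checker.check_function(add_one)
```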
### Alternatives considered
Wrap the function in a model and check that instead; but not all functions can be directly converted into a Model.
### Describe the feature
_No response_
### Will this influence the current api (Y/N)?
_No response_
### Feature Area
checker
### Are you willing to contribute it (Y/N)
None
### Notes
_No response_ | https://github.com/onnx/onnx/issues/4700 | https://github.com/onnx/onnx/pull/4720 | d776f5b21d84a6f8eb7d95caddc40f72c56eea27 | 479eaf9f8ef870f946ca5ce7b143e522b4b882c0 | 2022-12-09T20:25:36Z | python | 2023-01-27T01:06:06Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,687 | ["onnx/compose.py", "onnx/test/compose_test.py"] | onnx.compose.add_prefix_graph won't rename subgraph | # Bug Report
### Describe the bug
onnx.compose.add_prefix_graph won't rename names inside subgraphs, such as If branches (and presumably Loop bodies as well).
This means it can break ONNX graphs that contain If / Loop nodes.
### System information
Google Colab CPU
### Reproduction instructions
https://gist.github.com/Yosshi999/e6fda24ddc0e26687a2540c5db32d658
### Expected behavior
onnx.checker.check_graph passes.
### Notes
The fix is for this function to call the renaming routine recursively when it meets If / Loop nodes, as sketched below.
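A minimal sketch of that recursion (a skeleton only; it shows where the recursion has to happen, not the full renaming logic of `add_prefix_graph`):
```python
from onnx import AttributeProto, GraphProto

def prefix_names_recursive(graph: GraphProto, prefix: str) -> None:
    for node in graph.node:
        node.name = prefix + node.name
        # ... rename node inputs/outputs/initializers here, as add_prefix_graph does ...
        for attr in node.attribute:
            if attr.type == AttributeProto.GRAPH:      # e.g. If branches, Loop bodies
                prefix_names_recursive(attr.g, prefix)
            elif attr.type == AttributeProto.GRAPHS:
                for sub in attr.graphs:
                    prefix_names_recursive(sub, prefix)
```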
| https://github.com/onnx/onnx/issues/4687 | https://github.com/onnx/onnx/pull/4718 | 85fb958c6fec31678f60aedc6d964405c09dec20 | aa8170f7123c65e8409468c63d5e9664e3f168d8 | 2022-12-03T11:23:46Z | python | 2023-01-18T21:29:59Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,619 | ["onnx/defs/schema.cc"] | [Bug] Checker misses data type mismatch for Max | # Bug Report
### Is the issue related to model conversion?
Partially. PyTorch generates a wrong ONNX model, but `onnx.checker` still did not catch it (I use `onnx.checker.check_model(model, full_check=True)` to skip wrong models).
### Describe the bug
The minimized model looks like this:
```python
"""
graph par1 (
%i0[DOUBLE, 2x2x1x1]
%/mlist.0/Mul_1_output_0[FLOAT, scalar]
) {
%/Max_output_0 = Max(%i0, %/mlist.0/Mul_1_output_0)
return %/Max_output_0
}
"""
```
Basically, it is `Max(<double>[2,2,1,1], <float>[scalar])`. The checker returns successfully when checking the model. Maybe there are some missed cases/conditions in the checker.
### System information
- ONNX version: 1.12
- Python version: 3.9
- OS: Ubuntu 22.04
### Reproduction instructions
```python
import onnx
X = onnx.helper.make_tensor_value_info("X", onnx.TensorProto.DOUBLE, (2, 2, 1, 1))
Y = onnx.helper.make_tensor_value_info("Y", onnx.TensorProto.FLOAT, ())
Z = onnx.helper.make_tensor_value_info("Z", onnx.TensorProto.DOUBLE, (2, 2, 1, 1))
model = onnx.helper.make_model(
onnx.helper.make_graph(
nodes=[
onnx.helper.make_node("Max", ["X", "Y"], ["Z"]),
],
name="bar",
inputs=[X, Y],
outputs=[Z],
initializer=[],
value_info=[],
),
producer_name="foo",
)
model = onnx.shape_inference.infer_shapes(model, strict_mode=True)
onnx.checker.check_model(model, full_check=True)
```
### Expected behavior
It should fail but did not, and later this error occurs when using ONNX Runtime.
cc: @jcwchen Thanks! | https://github.com/onnx/onnx/issues/4619 | https://github.com/onnx/onnx/pull/4622 | 1c0797f8a06cc4f66d93251f9f914a2505a3190e | 76b865e84c8e0df6f3831bbbb9bfda8d489a3281 | 2022-10-24T15:50:30Z | python | 2022-11-03T16:17:10Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,580 | ["docs/Operators.md", "docs/TestCoverage.md", "onnx/backend/test/case/node/resize.py", "onnx/backend/test/data/node/test_resize_tf_crop_and_resize_axes_2_3/model.onnx", "onnx/backend/test/data/node/test_resize_tf_crop_and_resize_axes_3_2/model.onnx", "onnx/defs/tensor/utils.cc"] | Shapes of intermediate tensors | # Ask a Question
### Question
In an ONNX graph, I can see the tensor shapes for the inputs and outputs. Is there a way to know what shapes the intermediate tensors are? I consulted the shape inference documentation, but it looks like there is only a C++ API.
### Further information
- Relevant Area: shape_inference
### Notes
Model exported from pytorch
```python
import torch
from torch import nn
class Upsampling(nn.Module):
def __init__(self):
super().__init__()
self.upsample = nn.Upsample(size=[2,2], mode='nearest')
def forward(self, input, weight):
weight = self.upsample(weight)
return torch.nn.functional.conv2d(input, weight)
torch.onnx.export(Upsampling(), (torch.tensor([[[[1,2],[1,2]]]], dtype=torch.float32), torch.tensor([[[[1]]]], dtype=torch.float32)), "test.onnx")
```
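For reference, shape inference is also exposed in Python: `onnx.shape_inference.infer_shapes` records the inferred shapes of intermediate tensors in `graph.value_info`. A small sketch using the model exported above:
```python
import onnx

model = onnx.load("test.onnx")
inferred = onnx.shape_inference.infer_shapes(model)

# Intermediate tensors are recorded in value_info after inference.
for vi in inferred.graph.value_info:
    dims = [d.dim_value if d.HasField("dim_value") else d.dim_param
            for d in vi.type.tensor_type.shape.dim]
    print(vi.name, dims)
```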
| https://github.com/onnx/onnx/issues/4580 | https://github.com/onnx/onnx/pull/4582 | 8669fad0247799f4d8683550eec749974b4f5338 | 4637098e881d0d2f63caa5e945a9fdc620e11e06 | 2022-10-10T15:03:10Z | python | 2022-10-12T23:53:54Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,549 | ["docs/PythonAPIOverview.md", "onnx/helper.py", "onnx/mapping.py", "onnx/numpy_helper.py", "onnx/onnx-ml.proto", "onnx/onnx-ml.proto3", "onnx/onnx.in.proto", "onnx/onnx.proto", "onnx/onnx.proto3", "onnx/test/helper_test.py"] | printable_graph error and INVALID_GRAPH (ONNXRuntime) when using bf16 datatype | # Bug Report
### Is the issue related to model conversion?
No, ONNX checker does not report issues
### Describe the bug
`onnx.helper.printable_graph` throws a KeyError when loading a model with the bfloat16 data type, but `onnx.checker.check_model` does not give any error. ONNX Runtime also throws an invalid-graph error when loading the model.
The errors are as follows (I also include these in the `Reproduction instructions`):
```
File /usr/local/lib/python3.8/dist-packages/onnx/helper.py:817, in printable_attribute(attr, subgraphs)
814 content.append("<Tensor>")
815 else:
816 # special case to print scalars
--> 817 field = STORAGE_TENSOR_TYPE_TO_FIELD[attr.t.data_type]
818 content.append(f'<Scalar Tensor {str(getattr(attr.t, field))}>')
819 elif attr.HasField("g"):
KeyError: 16
```
This may result in the INVALID_GRAPH error when using the ONNXRuntime to create an InferenceSession from it.
```
File /usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py:384, in InferenceSession._create_inference_session(self, providers, provider_options, disabled_optimizers)
382 session_options = self._sess_options if self._sess_options else C.get_default_session_options()
383 if self._model_path:
--> 384 sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
385 else:
386 sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from ./pegasus_cnn/encoder.onnx failed:This is an invalid model. Type Error: Type 'tensor(bfloat16)' of input parameter (onnx::Pow_284) of operator (Pow) in node (Pow_349) is invalid.
```
### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 16.04*): Linux Ubuntu 20.04
- ONNX version (*e.g., 1.7*): 1.12.0 (installed from pip)
- Python version: 3.8.10
- ONNX Runtime version (*e.g. 1.7*): 1.12.0
### Reproduction instructions
```
from onnxruntime import InferenceSession
import onnx
import torch
from transformers import AutoTokenizer,PegasusForConditionalGeneration
model_path = "google/pegasus-cnn_dailymail"
tokenizer = AutoTokenizer.from_pretrained(model_path)
pytorch_model_bf16: PegasusForConditionalGeneration = PegasusForConditionalGeneration.from_pretrained(model_path, torch_dtype=torch.bfloat16).eval()
# Export onnx model
output_path = "./pegasus_cnn/encoder.onnx"
export_text = "This is for test"
input_ids = tokenizer(export_text,truncation=True,return_tensors="pt").input_ids
def export_encoder(model, args, exported_model_path):
model.eval()
with torch.no_grad():
_ = torch.onnx._export(model,
args,
exported_model_path,
export_params=True,
opset_version=14,
input_names=['input_ids'],
output_names=['hidden_states'],
dynamic_axes={
'input_ids': {0:'batch', 1: 'sequence'},
'hidden_states': {0:'batch', 1: 'sequence'},
})
export_encoder(pytorch_model_bf16.model.encoder, input_ids, output_path)
encoder_model_graph = onnx.load(output_path)
# No error from check_model
onnx.checker.check_model(encoder_model_graph)
```
```
# printable_graph error
print(onnx.helper.printable_graph(encoder_model_graph.graph))
```
```
File /usr/local/lib/python3.8/dist-packages/onnx/helper.py:817, in printable_attribute(attr, subgraphs)
    814 content.append("<Tensor>")
    815 else:
    816     # special case to print scalars
--> 817     field = STORAGE_TENSOR_TYPE_TO_FIELD[attr.t.data_type]
    818     content.append(f'<Scalar Tensor {str(getattr(attr.t, field))}>')
    819 elif attr.HasField("g"):

KeyError: 16
```
```
# OnnxRuntime create inference session error
encoder_session = InferenceSession(output_path, providers=["CUDAExecutionProvider"])
```
File /usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py:384, in InferenceSession._create_inference_session(self, providers, provider_options, disabled_optimizers)
382 session_options = self._sess_options if self._sess_options else C.get_default_session_options()
383 if self._model_path:
--> 384 sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
385 else:
386 sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from ./pegasus_cnn/encoder.onnx failed:This is an invalid model. Type Error: Type 'tensor(bfloat16)' of input parameter (onnx::Pow_284) of operator (Pow) in node (Pow_349) is invalid.
### Expected behavior
bfloat16 should be supported by ONNX's latest version (v1.12)
### Notes
The `STORAGE_TENSOR_TYPE_TO_FIELD` dict in [mapping.py](https://github.com/onnx/onnx/blob/f0d607478424ab9a8d3751a6e15d48c817350f8b/onnx/mapping.py#L56) seems to be missing the `TensorProto.BFLOAT16` key.
I'm not sure whether the ONNXRuntime error is related to this. It's weird that `onnx.checker.check_model` does not throw any error, but ONNXRuntime says it's an invalid graph.
If I load the model using fp32 datatype, no error appears.
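A sketch of what the suspected missing entry could look like, assuming bfloat16 scalar payloads are stored in `int32_data` (that storage field is an assumption here):
```python
from onnx import TensorProto
from onnx.mapping import STORAGE_TENSOR_TYPE_TO_FIELD

# Hypothetical patch: register a storage field for bfloat16 so that
# printable_graph no longer hits "KeyError: 16" on scalar bfloat16 tensors.
STORAGE_TENSOR_TYPE_TO_FIELD[TensorProto.BFLOAT16] = "int32_data"
```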
| https://github.com/onnx/onnx/issues/4549 | https://github.com/onnx/onnx/pull/4551 | de2459972d3f834dba24a586982d1ffce5532e0b | 46b96275554b0d978dd5c8ba786cc81dabd1d25a | 2022-09-27T18:43:10Z | python | 2022-09-30T18:54:30Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,539 | ["setup.py"] | Build error with the latest setuptool | # Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
Building from source with `pip install -e .` resulted in an error with the latest setuptools.
Running
```sh
SETUPTOOLS_ENABLE_FEATURES="legacy-editable" pip install -e .
```
works.
### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 16.04*): Linux Ubuntu
- ONNX version (*e.g. 1.7*): master
- Python version: 3.10.7
- GCC/Compiler version (if compiling from source):
- CMake version:
- Protobuf version:
- Visual Studio version (if applicable):
```
Package Version Editable project location
---------------------------- --------- -------------------------
absl-py 1.2.0
astunparse 1.6.3
cachetools 5.2.0
certifi 2022.9.14
charset-normalizer 2.1.1
contourpy 1.0.5
cycler 0.11.0
etils 0.8.0
flatbuffers 2.0.7
fonttools 4.37.3
gast 0.4.0
google-auth 2.11.1
google-auth-oauthlib 0.4.6
google-pasta 0.2.0
grpcio 1.48.1
h5py 3.7.0
idna 3.4
importlib-resources 5.9.0
jax 0.3.17
jaxlib 0.3.15
joblib 1.2.0
keras 2.10.0
Keras-Preprocessing 1.1.2
kiwisolver 1.4.4
libclang 14.0.6
Markdown 3.4.1
MarkupSafe 2.1.1
matplotlib 3.6.0
numpy 1.23.3
oauthlib 3.2.1
onnx 1.12.0 /workspaces/onnx
opt-einsum 3.3.0
packaging 21.3
pandas 1.5.0
Pillow 9.2.0
pip 22.2.2
plotly 5.10.0
protobuf 3.19.5
pyasn1 0.4.8
pyasn1-modules 0.2.8
pyparsing 3.0.9
python-dateutil 2.8.2
pytz 2022.2.1
requests 2.28.1
requests-oauthlib 1.3.1
rsa 4.9
scikit-learn 1.1.2
scipy 1.9.1
seaborn 0.12.0
setuptools 65.3.0
six 1.16.0
tenacity 8.0.1
tensorboard 2.10.0
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
tensorflow 2.10.0
tensorflow-estimator 2.10.0
tensorflow-io-gcs-filesystem 0.27.0
termcolor 2.0.1
threadpoolctl 3.1.0
torch 1.12.1
typing_extensions 4.3.0
urllib3 1.26.12
Werkzeug 2.2.2
wheel 0.37.1
wrapt 1.14.1
zipp 3.8.1
```
### Reproduction instructions
`pip install -e .`
### Notes
```
Building wheels for collected packages: onnx
Building editable for onnx (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building editable for onnx (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [114 lines of output]
running editable_wheel
creating /tmp/pip-wheel-2mha55tp/tmpkm5xxdfk/onnx.egg-info
writing /tmp/pip-wheel-2mha55tp/tmpkm5xxdfk/onnx.egg-info/PKG-INFO
writing dependency_links to /tmp/pip-wheel-2mha55tp/tmpkm5xxdfk/onnx.egg-info/dependency_links.txt
writing entry points to /tmp/pip-wheel-2mha55tp/tmpkm5xxdfk/onnx.egg-info/entry_points.txt
writing requirements to /tmp/pip-wheel-2mha55tp/tmpkm5xxdfk/onnx.egg-info/requires.txt
writing top-level names to /tmp/pip-wheel-2mha55tp/tmpkm5xxdfk/onnx.egg-info/top_level.txt
writing manifest file '/tmp/pip-wheel-2mha55tp/tmpkm5xxdfk/onnx.egg-info/SOURCES.txt'
reading manifest file '/tmp/pip-wheel-2mha55tp/tmpkm5xxdfk/onnx.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.c' under directory 'onnx'
adding license file 'LICENSE'
writing manifest file '/tmp/pip-wheel-2mha55tp/tmpkm5xxdfk/onnx.egg-info/SOURCES.txt'
creating '/tmp/pip-wheel-2mha55tp/tmpkm5xxdfk/onnx-1.12.0.dist-info'
adding license file "LICENSE" (matched pattern "LICENSE")
creating /tmp/pip-wheel-2mha55tp/tmpkm5xxdfk/onnx-1.12.0.dist-info/WHEEL
/tmp/pip-build-env-5rbu00q6/overlay/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build_py
running create_version
running cmake_build
Extra cmake args: ['-DONNX_USE_PROTOBUF_SHARED_LIBS=ON']
Using cmake args: ['/usr/bin/cmake', '-DPYTHON_INCLUDE_DIR=/usr/local/python/3.10.7/include/python3.10', '-DPYTHON_EXECUTABLE=/usr/local/python/3.10.7/bin/python3.10', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-310-x86_64-linux-gnu.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '-DONNX_USE_PROTOBUF_SHARED_LIBS=ON', '/workspaces/onnx']
Generated: /workspaces/onnx/.setuptools-cmake-build/onnx/onnx-ml.proto
Generated: /workspaces/onnx/.setuptools-cmake-build/onnx/onnx-operators-ml.proto
Generated: /workspaces/onnx/.setuptools-cmake-build/onnx/onnx-data.proto
-- Could NOT find pybind11 (missing: pybind11_DIR)
-- pybind11 v2.9.1
--
-- ******** Summary ********
-- CMake version : 3.16.3
-- CMake command : /usr/bin/cmake
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler version : 9.4.0
-- CXX flags : -Wnon-virtual-dtor
-- Build type : Release
-- Compile definitions : __STDC_FORMAT_MACROS
-- CMAKE_PREFIX_PATH :
-- CMAKE_INSTALL_PREFIX : /usr/local
-- CMAKE_MODULE_PATH :
--
-- ONNX version : 1.12.0
-- ONNX NAMESPACE : onnx
-- ONNX_USE_LITE_PROTO : OFF
-- USE_PROTOBUF_SHARED_LIBS : ON
-- Protobuf_USE_STATIC_LIBS : OFF
-- ONNX_DISABLE_EXCEPTIONS : OFF
-- ONNX_WERROR : OFF
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
--
-- Protobuf compiler : /usr/bin/protoc
-- Protobuf includes : /usr/include
-- Protobuf libraries : /usr/lib/x86_64-linux-gnu/libprotobuf.so;-lpthread
-- BUILD_ONNX_PYTHON : ON
-- Python version :
-- Python executable : /usr/local/python/3.10.7/bin/python3.10
-- Python includes : /usr/local/python/3.10.7/include/python3.10
-- Configuring done
-- Generating done
-- Build files have been written to: /workspaces/onnx/.setuptools-cmake-build
[ 3%] Built target gen_onnx_proto
[ 6%] Built target gen_onnx_data_proto
[ 9%] Built target gen_onnx_operators_proto
[ 24%] Built target onnx_proto
[ 96%] Built target onnx
[100%] Built target onnx_cpp2py_export
running build_ext
copying /workspaces/onnx/.setuptools-cmake-build/onnx_cpp2py_export.cpython-310-x86_64-linux-gnu.so -> /tmp/tmpksx5ht08.build-lib/onnx
Traceback (most recent call last):
File "/tmp/pip-build-env-5rbu00q6/overlay/lib/python3.10/site-packages/setuptools/_distutils/file_util.py", line 40, in _copy_file_contents
fdst = open(dst, 'wb')
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpksx5ht08.build-lib/onnx/onnx_cpp2py_export.cpython-310-x86_64-linux-gnu.so'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/pip-build-env-5rbu00q6/overlay/lib/python3.10/site-packages/setuptools/command/editable_wheel.py", line 140, in run
self._create_wheel_file(bdist_wheel)
File "/tmp/pip-build-env-5rbu00q6/overlay/lib/python3.10/site-packages/setuptools/command/editable_wheel.py", line 330, in _create_wheel_file
files, mapping = self._run_build_commands(dist_name, unpacked, lib, tmp)
File "/tmp/pip-build-env-5rbu00q6/overlay/lib/python3.10/site-packages/setuptools/command/editable_wheel.py", line 261, in _run_build_commands
self._run_build_subcommands()
File "/tmp/pip-build-env-5rbu00q6/overlay/lib/python3.10/site-packages/setuptools/command/editable_wheel.py", line 288, in _run_build_subcommands
self.run_command(name)
File "/tmp/pip-build-env-5rbu00q6/overlay/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 319, in run_command
self.distribution.run_command(command)
File "/tmp/pip-build-env-5rbu00q6/overlay/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command
super().run_command(command)
File "/tmp/pip-build-env-5rbu00q6/overlay/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 992, in run_command
cmd_obj.run()
File "<string>", line 255, in run
File "/tmp/pip-build-env-5rbu00q6/overlay/lib/python3.10/site-packages/setuptools/command/build_ext.py", line 84, in run
_build_ext.run(self)
File "/tmp/pip-build-env-5rbu00q6/overlay/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 346, in run
self.build_extensions()
File "<string>", line 272, in build_extensions
File "/tmp/pip-build-env-5rbu00q6/overlay/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 351, in copy_file
return file_util.copy_file(
File "/tmp/pip-build-env-5rbu00q6/overlay/lib/python3.10/site-packages/setuptools/_distutils/file_util.py", line 163, in copy_file
_copy_file_contents(src, dst)
File "/tmp/pip-build-env-5rbu00q6/overlay/lib/python3.10/site-packages/setuptools/_distutils/file_util.py", line 42, in _copy_file_contents
raise DistutilsFileError(
distutils.errors.DistutilsFileError: could not create '/tmp/tmpksx5ht08.build-lib/onnx/onnx_cpp2py_export.cpython-310-x86_64-linux-gnu.so': No such file or directory
error: Support for editable installs via PEP 660 was recently introduced
in `setuptools`. If you are seeing this error, please report to:
https://github.com/pypa/setuptools/issues
Meanwhile you can try the legacy behavior by setting an
environment variable and trying to install again:
SETUPTOOLS_ENABLE_FEATURES="legacy-editable"
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building editable for onnx
Failed to build onnx
ERROR: Could not build wheels for onnx, which is required to install pyproject.toml-based projects
```
| https://github.com/onnx/onnx/issues/4539 | https://github.com/onnx/onnx/pull/5558 | 70d146a0b7ebc990b34a70325ab745614dc002cb | a39995c1236aed6a4a967f97853cd3fe08121b28 | 2022-09-23T02:37:36Z | python | 2023-09-07T19:08:18Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,538 | ["pyproject.toml", "setup.cfg"] | Move mypy config to pyproject.toml | Now that we can use a newer version of mypy, we should move mypy config to pyproject.toml. | https://github.com/onnx/onnx/issues/4538 | https://github.com/onnx/onnx/pull/4542 | 15e287026417c3acbffbdbd12c3473466a8a6bd9 | 96e5c3c73deb0b08fe3775257fde59bdcfaf19f9 | 2022-09-23T02:30:28Z | python | 2022-10-01T16:26:18Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,532 | ["requirements.txt"] | Relax the protobuf version requirement. | # Bug Report
### Describe the bug
Protobuf now has 3.20.2, so I recommend writing `<4` instead:
- https://github.com/onnx/onnx/blob/895593af6ac17d4fe2749091659083d3b1246cae/requirements.txt#L2
- https://pypi.org/project/protobuf/3.20.2/ | https://github.com/onnx/onnx/issues/4532 | https://github.com/onnx/onnx/pull/4535 | d97f747ea9e6024ca9d6a246009e5022a5cf8de7 | e9fa40e08c360026f5c3a19d1b8f248283ee5af1 | 2022-09-22T11:26:05Z | python | 2022-10-01T00:13:20Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,515 | ["onnx/version_converter/convert.cc"] | Explicit error message when converting version for opsets from non-official domain | ### System information
The latest main branch: a252e651fda8e73935827505a148146459c2e06d.
### What is the problem that this feature solves?
Currently the version_converter only says `unordered_map::at: key not found` when converting a model that uses a non-official opset, for instance every quantized model in [ONNX Model Zoo](https://github.com/onnx/models) (*-int8.onnx). The error message should be more explicit.
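A minimal reproduction sketch (the model file is illustrative; any model importing a non-official domain such as `com.microsoft` hits this):
```python
import onnx
from onnx import version_converter

model = onnx.load("resnet50-v1-12-int8.onnx")  # illustrative quantized model
# Fails with the opaque "unordered_map::at: key not found" instead of a
# message naming the unsupported domain.
converted = version_converter.convert_version(model, 13)
```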
### Alternatives considered
_No response_
### Describe the feature
Make the error message more explicit.
### Will this influence the current api (Y/N)?
N
### Feature Area
version_converter
### Are you willing to contribute it (Y/N)
Yes
### Notes
_No response_ | https://github.com/onnx/onnx/issues/4515 | https://github.com/onnx/onnx/pull/4521 | e9fa40e08c360026f5c3a19d1b8f248283ee5af1 | 15e287026417c3acbffbdbd12c3473466a8a6bd9 | 2022-09-15T15:54:09Z | python | 2022-10-01T03:23:53Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,472 | ["CMakeLists.txt", "tools/protoc-gen-mypy.bat", "tools/protoc-gen-mypy.sh.in"] | Install from pip fails due to spaces in path | # Bug Report
### Describe the bug
I'm trying to run `pip install onnx`
and I get the following error:
```
[ 9%] Built target onnxifi_wrapper
/private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/.setuptools-cmake-build/tools/protoc-gen-mypy.sh: line 4: /Volumes/Sabrent: No such file or directory
--mypy_out: protoc-gen-mypy: Plugin failed with status code 127.
gmake[2]: *** [CMakeFiles/gen_onnx_proto.dir/build.make:74: onnx/onnx-ml.pb.cc] Error 1
gmake[1]: *** [CMakeFiles/Makefile2:113: CMakeFiles/gen_onnx_proto.dir/all] Error 2
gmake: *** [Makefile:136: all] Error 2
```
The full path to my python venv is `/Volumes/Sabrent Media/Documents/Source/Python/pytorch_env/`.
The full error message is...
```
Collecting onnx
Using cached onnx-1.12.0.tar.gz (10.1 MB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: numpy>=1.16.6 in ./lib/python3.10/site-packages (from onnx) (1.23.1)
Requirement already satisfied: protobuf<=3.20.1,>=3.12.2 in ./lib/python3.10/site-packages (from onnx) (3.19.4)
Requirement already satisfied: typing-extensions>=3.6.2.1 in ./lib/python3.10/site-packages (from onnx) (4.3.0)
Building wheels for collected packages: onnx
Building wheel for onnx (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [124 lines of output]
fatal: not a git repository (or any of the parent directories): .git
running bdist_wheel
running build
running build_py
running create_version
running cmake_build
Using cmake args: ['/opt/local/bin/cmake', '-DPYTHON_INCLUDE_DIR=/opt/local/Library/Frameworks/Python.framework/Versions/3.10/include/python3.10', '-DPYTHON_EXECUTABLE=/Volumes/Sabrent Media/Documents/Source/Python/pytorch_env/bin/python', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-310-darwin.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb']
-- The C compiler identification is AppleClang 13.1.6.13160021
-- The CXX compiler identification is AppleClang 13.1.6.13160021
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found PythonInterp: /Volumes/Sabrent Media/Documents/Source/Python/pytorch_env/bin/python (found version "3.10.4")
-- Found PythonLibs: /opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/libpython3.10.dylib (found version "3.10.4")
-- Found Protobuf: /Volumes/Sabrent Media/Documents/Source/Python/pytorch_env/lib/libprotobuf.dylib (found version "3.21.5")
Generated: /private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/.setuptools-cmake-build/onnx/onnx-ml.proto
Generated: /private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/.setuptools-cmake-build/onnx/onnx-operators-ml.proto
Generated: /private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/.setuptools-cmake-build/onnx/onnx-data.proto
-- Could NOT find pybind11 (missing: pybind11_DIR)
-- pybind11 v2.9.1
-- Found PythonLibs: /opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/libpython3.10.dylib
-- Performing Test HAS_FLTO
-- Performing Test HAS_FLTO - Success
-- Performing Test HAS_FLTO_THIN
-- Performing Test HAS_FLTO_THIN - Success
--
-- ******** Summary ********
-- CMake version : 3.22.4
-- CMake command : /opt/local/bin/cmake
-- System : Darwin
-- C++ compiler : /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++
-- C++ compiler version : 13.1.6.13160021
-- CXX flags : -Wnon-virtual-dtor
-- Build type : Release
-- Compile definitions : __STDC_FORMAT_MACROS
-- CMAKE_PREFIX_PATH :
-- CMAKE_INSTALL_PREFIX : /usr/local
-- CMAKE_MODULE_PATH :
--
-- ONNX version : 1.12.0
-- ONNX NAMESPACE : onnx
-- ONNX_USE_LITE_PROTO : OFF
-- USE_PROTOBUF_SHARED_LIBS : OFF
-- Protobuf_USE_STATIC_LIBS : ON
-- ONNX_DISABLE_EXCEPTIONS : OFF
-- ONNX_WERROR : OFF
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
-- ONNXIFI_ENABLE_EXT : OFF
--
-- Protobuf compiler : /Volumes/Sabrent Media/Documents/Source/Python/pytorch_env/bin/protoc
-- Protobuf includes : /Volumes/Sabrent Media/Documents/Source/Python/pytorch_env/include
-- Protobuf libraries : /Volumes/Sabrent Media/Documents/Source/Python/pytorch_env/lib/libprotobuf.dylib
-- BUILD_ONNX_PYTHON : ON
-- Python version :
-- Python executable : /Volumes/Sabrent Media/Documents/Source/Python/pytorch_env/bin/python
-- Python includes : /opt/local/Library/Frameworks/Python.framework/Versions/3.10/include/python3.10
-- Configuring done
-- Generating done
-- Build files have been written to: /private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/.setuptools-cmake-build
[ 4%] Building C object CMakeFiles/onnxifi_dummy.dir/onnx/onnxifi_dummy.c.o
[ 4%] Building C object CMakeFiles/onnxifi_loader.dir/onnx/onnxifi_loader.c.o
[ 4%] Running gen_proto.py on onnx/onnx.in.proto
Processing /private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/onnx/onnx.in.proto
Writing /private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/.setuptools-cmake-build/onnx/onnx-ml.proto
Writing /private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/.setuptools-cmake-build/onnx/onnx-ml.proto3
generating /private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/.setuptools-cmake-build/onnx/onnx_pb.py
[ 5%] Running C++ protocol buffer compiler on /private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/.setuptools-cmake-build/onnx/onnx-ml.proto
[ 7%] Linking C static library libonnxifi_loader.a
[ 8%] Linking C shared library libonnxifi_dummy.dylib
[ 8%] Built target onnxifi_loader
[ 8%] Built target onnxifi_dummy
[ 9%] Building C object CMakeFiles/onnxifi_wrapper.dir/onnx/onnxifi_wrapper.c.o
[ 11%] Linking C shared module libonnxifi.dylib
[ 11%] Built target onnxifi_wrapper
/private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/.setuptools-cmake-build/tools/protoc-gen-mypy.sh: line 4: /Volumes/Sabrent: No such file or directory
--mypy_out: protoc-gen-mypy: Plugin failed with status code 127.
gmake[2]: *** [CMakeFiles/gen_onnx_proto.dir/build.make:74: onnx/onnx-ml.pb.cc] Error 1
gmake[1]: *** [CMakeFiles/Makefile2:113: CMakeFiles/gen_onnx_proto.dir/all] Error 2
gmake: *** [Makefile:136: all] Error 2
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/setup.py", line 332, in <module>
setuptools.setup(
File "/Volumes/Sabrent Media/Documents/Source/Python/pytorch_env/lib/python3.10/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/core.py", line 148, in setup
dist.run_commands()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Volumes/Sabrent Media/Documents/Source/Python/pytorch_env/lib/python3.10/site-packages/wheel/bdist_wheel.py", line 299, in run
self.run_command('build')
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/setup.py", line 223, in run
self.run_command("cmake_build")
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/setup.py", line 217, in run
subprocess.check_call(build_args)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/opt/local/bin/cmake', '--build', '.', '--', '-j', '8']' returned non-zero exit status 2.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for onnx
Running setup.py clean for onnx
Failed to build onnx
Installing collected packages: onnx
Running setup.py install for onnx ... error
error: subprocess-exited-with-error
× Running setup.py install for onnx did not run successfully.
│ exit code: 1
╰─> [98 lines of output]
fatal: not a git repository (or any of the parent directories): .git
running install
running build
running build_py
running create_version
running cmake_build
Using cmake args: ['/opt/local/bin/cmake', '-DPYTHON_INCLUDE_DIR=/opt/local/Library/Frameworks/Python.framework/Versions/3.10/include/python3.10', '-DPYTHON_EXECUTABLE=/Volumes/Sabrent Media/Documents/Source/Python/pytorch_env/bin/python', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-310-darwin.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb']
Generated: /private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/.setuptools-cmake-build/onnx/onnx-ml.proto
Generated: /private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/.setuptools-cmake-build/onnx/onnx-operators-ml.proto
Generated: /private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/.setuptools-cmake-build/onnx/onnx-data.proto
-- Could NOT find pybind11 (missing: pybind11_DIR)
-- pybind11 v2.9.1
--
-- ******** Summary ********
-- CMake version : 3.22.4
-- CMake command : /opt/local/bin/cmake
-- System : Darwin
-- C++ compiler : /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++
-- C++ compiler version : 13.1.6.13160021
-- CXX flags : -Wnon-virtual-dtor
-- Build type : Release
-- Compile definitions : __STDC_FORMAT_MACROS
-- CMAKE_PREFIX_PATH :
-- CMAKE_INSTALL_PREFIX : /usr/local
-- CMAKE_MODULE_PATH :
--
-- ONNX version : 1.12.0
-- ONNX NAMESPACE : onnx
-- ONNX_USE_LITE_PROTO : OFF
-- USE_PROTOBUF_SHARED_LIBS : OFF
-- Protobuf_USE_STATIC_LIBS : ON
-- ONNX_DISABLE_EXCEPTIONS : OFF
-- ONNX_WERROR : OFF
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
-- ONNXIFI_ENABLE_EXT : OFF
--
-- Protobuf compiler : /Volumes/Sabrent Media/Documents/Source/Python/pytorch_env/bin/protoc
-- Protobuf includes : /Volumes/Sabrent Media/Documents/Source/Python/pytorch_env/include
-- Protobuf libraries : /Volumes/Sabrent Media/Documents/Source/Python/pytorch_env/lib/libprotobuf.dylib
-- BUILD_ONNX_PYTHON : ON
-- Python version :
-- Python executable : /Volumes/Sabrent Media/Documents/Source/Python/pytorch_env/bin/python
-- Python includes : /opt/local/Library/Frameworks/Python.framework/Versions/3.10/include/python3.10
-- Configuring done
-- Generating done
-- Build files have been written to: /private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/.setuptools-cmake-build
Consolidate compiler generated dependencies of target onnxifi_dummy
Consolidate compiler generated dependencies of target onnxifi_loader
[ 1%] Running C++ protocol buffer compiler on /private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/.setuptools-cmake-build/onnx/onnx-ml.proto
[ 7%] Built target onnxifi_dummy
[ 7%] Built target onnxifi_loader
Consolidate compiler generated dependencies of target onnxifi_wrapper
[ 9%] Built target onnxifi_wrapper
/private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/.setuptools-cmake-build/tools/protoc-gen-mypy.sh: line 4: /Volumes/Sabrent: No such file or directory
--mypy_out: protoc-gen-mypy: Plugin failed with status code 127.
gmake[2]: *** [CMakeFiles/gen_onnx_proto.dir/build.make:74: onnx/onnx-ml.pb.cc] Error 1
gmake[1]: *** [CMakeFiles/Makefile2:113: CMakeFiles/gen_onnx_proto.dir/all] Error 2
gmake: *** [Makefile:136: all] Error 2
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/setup.py", line 332, in <module>
setuptools.setup(
File "/Volumes/Sabrent Media/Documents/Source/Python/pytorch_env/lib/python3.10/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/core.py", line 148, in setup
dist.run_commands()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Volumes/Sabrent Media/Documents/Source/Python/pytorch_env/lib/python3.10/site-packages/setuptools/command/install.py", line 61, in run
return orig.install.run(self)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/command/install.py", line 568, in run
self.run_command('build')
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/setup.py", line 223, in run
self.run_command("cmake_build")
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/private/var/folders/mq/w85wcgsx58g9xjz_66mj8bk40000gn/T/pip-install-19y8i46a/onnx_08b9cb443db945b9b10e5844dcd22bbb/setup.py", line 217, in run
subprocess.check_call(build_args)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/opt/local/bin/cmake', '--build', '.', '--', '-j', '8']' returned non-zero exit status 2.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> onnx
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
```
### System information
- MacOS 12.5
- ONNX version 1.12.0
- Python version: 3.10
- GCC/Compiler version (if compiling from source): Apple clang version 13.1.6 (clang-1316.0.21.2.5)
- CMake version: cmake version 3.22.4
- Protobuf version: libprotoc 3.21.5
### Reproduction instructions
Not sure, but I suspect: creating a venv on a path with a space in it, manually compiling protobuf-cpp
so it installs in the venv path, and then running `pip install onnx`.
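The failing log line (`protoc-gen-mypy.sh: line 4: /Volumes/Sabrent: No such file or directory`) is the classic unquoted-path problem; a small illustration of why the path splits at the space (the path is abbreviated):
```python
import shlex

python_path = "/Volumes/Sabrent Media/pytorch_env/bin/python"

unquoted = f"exec {python_path}"
# Word splitting: the shell would try to execute "/Volumes/Sabrent",
# exactly as seen in the build log.
print(unquoted.split()[1])                  # /Volumes/Sabrent
# Quoting keeps the path intact.
print(f"exec {shlex.quote(python_path)}")   # exec '/Volumes/Sabrent Media/...'
```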
### Expected behavior
onnx to compile successfully
| https://github.com/onnx/onnx/issues/4472 | https://github.com/onnx/onnx/pull/4473 | 68859d3387b1c25ee0e231316cbf34580852bb85 | 8133ea176ca6aeb28a26a6f3e8e0687ed3c99a8e | 2022-08-25T20:14:18Z | python | 2022-08-30T15:05:19Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,468 | [".github/workflows/codeql.yml"] | [Tracking] Retire LGTM.com and use GitHub code scanning | # Feature Request
### System information
Latest main branch
### What is the problem that this feature solves?
<!-- Please detail the discrepancy with our current functionality. -->
LGTM.com will be unavailable.
### Describe the feature
<!-- Why is this feature necessary? What does it accomplish? -->
https://github.blog/2022-08-15-the-next-step-for-lgtm-com-github-code-scanning/ LGTM.com will be shut down in Dec. 2022. ONNX should use GitHub code scanning instead
### Will this influence the current api (Y/N)?
<!-- If yes, how? -->
N
### Feature Area
<!-- Which area in ONNX does this impact? e.g., model usage, backend, best practices, converters, shape_inference, version_converter, training, test, operators, IR, ONNX Hub, data preprocessing, CI pipelines. --> CI pipeplines
### Are you willing to contribute it (Y/N):
Y
| https://github.com/onnx/onnx/issues/4468 | https://github.com/onnx/onnx/pull/4524 | 895593af6ac17d4fe2749091659083d3b1246cae | 6626cac52557ab2e99fbc0e66480bd05cebbdaaf | 2022-08-24T21:59:44Z | python | 2022-09-23T15:28:49Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,459 | [".github/workflows/clang_tidy_review_post.yml", ".github/workflows/lint.yml"] | Add .clang-tidy check in CI pipelines | # Feature Request
### System information
Latest main branch
### What is the problem that this feature solves?
Follow-up work for https://github.com/onnx/onnx/pull/4391.
### Describe the feature
ONNX just introduced a .clang-tidy file for developers. It would be better if ONNX also had a CI job running it to make sure the codebase follows it.
### Will this influence the current api (Y/N)?
N
### Feature Area
CI pipelines
| https://github.com/onnx/onnx/issues/4459 | https://github.com/onnx/onnx/pull/5041 | 878d59b4c78e8afeaf06c53301c8460d8c4cf7f8 | 5a7f21cc6109881d41dbe067e4ea861582d43d1b | 2022-08-20T00:14:49Z | python | 2023-04-04T22:33:29Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,439 | [".github/workflows/release_linux_aarch64.yml", ".github/workflows/release_linux_x86_64.yml", ".github/workflows/release_mac.yml", ".github/workflows/release_win.yml", ".github/workflows/weekly_mac_ci.yml", ".github/workflows/win_no_exception_ci.yml"] | [Notice] Windows Release Python 3.10 CI fails occasionally due to missing MSVC tools | # Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
Please ignore this failure for now while developing ONNX PRs: the Windows release CI with Python 3.10 + x86 fails occasionally due to missing MSVC tools for building NumPy from source (similar error: https://github.com/numpy/numpy/issues/12016). Newer NumPy only supports Python 3.8+, but ONNX still supports Python 3.7 and is therefore pinned to the old NumPy 1.21.5, which does not have a prebuilt wheel for Python 3.10. Consequently, every release CI using Python 3.10 needs to build NumPy from source, and it is easy to hit issues doing so if the CI machines lack some dependencies. Python 3.7's EOL is 27 Jun 2023, so for now I will still focus on fixing the build-from-source issue for NumPy 1.21.5.
### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 16.04*): Windows
- ONNX version (*e.g. 1.7*): latest main branch
- Python version: 3.10
### Reproduction instructions
https://github.com/onnx/onnx/actions/workflows/release_win.yml
### Expected behavior
All CIs should always succeed.
| https://github.com/onnx/onnx/issues/4439 | https://github.com/onnx/onnx/pull/4440 | f0d824c0ead2825f7c24790c05f985ad1fb909f2 | 97e7d242d50c7dd3391388471101436390d94f90 | 2022-08-12T16:28:23Z | python | 2022-08-15T18:36:01Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,436 | [".github/workflows/release_mac.yml"] | Add Apple Silicon support (M1/M2 machines): provide universal2 wheel | # Feature Request
### System information
ONNX version (you are using): the latest main branch
### What is the problem that this feature solves?
Currently there is no prebuilt wheel for Apple Silicon machines like M1/M2 on PyPI. Users need to build ONNX from source on their own.
### Describe the alternatives you have considered
Directly provide xxx_macosx_11_0_arm64.whl instead; but currently GitHub Actions does not provide Apple Silicon machines (all of ONNX's released wheels are built with GitHub Actions).
### Describe the feature
Add Apple Silicon support (M1/M2 machines) by providing a universal2 wheel. Universal2 wheels are fat binaries that support both 64-bit Intel and Apple processors. Although GitHub Actions currently lacks Apple Silicon machines, ONNX can provide a universal2 wheel for now, because it is at least testable in CIs. In the future, ONNX will further provide direct support like `xxx_macosx_11_0_arm64.whl` once GitHub Actions provides Apple Silicon machines. More related discussion: https://github.com/onnx/onnx/issues/4317#issuecomment-1194376761.
### Will this influence the current api?
No
### Feature Area
cc @daquexian
| https://github.com/onnx/onnx/issues/4436 | https://github.com/onnx/onnx/pull/4642 | 951b8505fee7229cd6bd558e0b2924011da6ef60 | 70c4a7d905cca13f6116efb31a6bc2a34ce8a31a | 2022-08-10T23:26:48Z | python | 2022-11-13T17:09:17Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,425 | ["onnx/common/ir_pb_converter.cc"] | Is this line a mistake? Is onnx model with (ir_version = 1) support now ? | https://github.com/onnx/onnx/blob/eb634addcbb23ec1baf4d548d826bf3932527533/onnx/common/ir_pb_converter.cc#L374-L379
Now in onnx.proto3, the enum Version starts at zero (`_START_VERSION = 0;`), so I think the code below in ir_pb_converter.cc may be what was intended:
```cpp
else if (mp.ir_version() == 0) {
  return nullptr;
}
```
https://github.com/onnx/onnx/blob/eb634addcbb23ec1baf4d548d826bf3932527533/onnx/onnx.proto3#L50-L63
| https://github.com/onnx/onnx/issues/4425 | https://github.com/onnx/onnx/pull/4429 | fa2d5ccdaa43f40cf0ad14caf748c75057e41955 | 7275515fd009fde08abf65092210e85158b61456 | 2022-08-10T03:17:31Z | python | 2022-08-11T14:10:44Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,422 | ["onnx/checker.cc", "onnx/checker.h", "onnx/shape_inference/implementation.cc"] | Make checker with full_check consistent for giving ModelProto or model path | # Feature Request
### System information
ONNX version (you are using): latest main branch
### What is the problem that this feature solves?
Making the checker behavior consistent can prevent confusion. And checker should not modify the model in place.
### Describe the alternatives you have considered
Keep the same behavior as before.
### Describe the feature
Current behavior:
- `check_model` + `full_check` with a model path (Python and C++ API): the original ONNX model file is overwritten by the inferred model.
- `check_model` + `full_check` with a ModelProto (Python API): the original model is not updated.
- `check_model` + `full_check` with a ModelProto (C++ API): the original ModelProto is overwritten by the inferred model.

IMO, we should make the checker with full_check consistent whether given a ModelProto or a model path: do not update the given model at all. The checker should just check the model without changing it. Related discussion: https://github.com/onnx/onnx/pull/4386#discussion_r937159690.
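A small sketch of the invariant being requested (the model path is illustrative):
```python
import onnx

model = onnx.load("model.onnx")
before = model.SerializeToString()

onnx.checker.check_model(model, full_check=True)

# Desired behavior: checking must not mutate its input, regardless of
# whether a ModelProto or a path was passed, in Python or C++.
assert model.SerializeToString() == before
```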
### Will this influence the current api?
Yes. checker won't update the given model anyway.
### Feature Area
Which area in ONNX does this impact? (*e.g. model usage, backend, best practices, converters, shape_inference, version_converter, training, test, operators*): checker
### Are you willing to contribute it (Y/N):
Y
| https://github.com/onnx/onnx/issues/4422 | https://github.com/onnx/onnx/pull/4456 | e6547e8b41578c766dd2ccf298a01b2b1b6335cd | 751f80c03817bb910af780ffb928853aea43ad36 | 2022-08-09T16:02:41Z | python | 2022-08-23T10:18:30Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,390 | ["onnx/hub.py", "onnx/test/hub_test.py"] | Enable ONNX Hub to download test_data_set from ONNX Model Zoo | # Feature Request
### System information
ONNX version (you are using): 1.12
### What is the problem that this feature solves?
It would be great if ONNX Hub can download test_data_set. It will be helpful for testing ONNX Model Zoo.
### Describe the feature
Currently users can only download models through ONNX Hub. There is no easy API for them to download the test_data_set from the .tar.gz archives.
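A sketch of what the requested API might look like; `download_model_with_test_data` is an assumed name for illustration:
```python
from onnx import hub

model = hub.load("resnet50")  # today: downloads the model only

# Hypothetical addition: also fetch the test_data_set_* directories shipped
# in the model's .tar.gz archive (function name is an assumption).
data_dir = hub.download_model_with_test_data("resnet50")
```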
### Will this influence the current api?
New API will be added.
### Feature Area
Which area in ONNX does this impact? (*e.g. model usage, backend, best practices, converters, shape_inference, version_converter, training, test, operators*): ONNX Hub for ONNX Model Zoo
### Are you willing to contribute it (Y/N):
Y. I can do it if I have more bandwidth in the future. Anyone interested is welcome to contribute.
| https://github.com/onnx/onnx/issues/4390 | https://github.com/onnx/onnx/pull/4741 | 9dd64dc74d7db856fcf9f7bff9c07e5cbe013036 | 474c0b64ccd913101c4dc7108b3dea4fd1f51de8 | 2022-07-28T17:03:14Z | python | 2023-01-14T01:33:51Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,387 | [".github/workflows/manylinux/entrypoint.sh", ".github/workflows/release_linux_aarch64.yml", ".github/workflows/release_linux_x86_64.yml", ".github/workflows/release_mac.yml", ".github/workflows/release_win.yml", ".github/workflows/win_no_exception_ci.yml", "requirements-release.txt"] | [Tracking] Verify Python 3.11 support | It is going to be shipped in two months. | https://github.com/onnx/onnx/issues/4387 | https://github.com/onnx/onnx/pull/4490 | 46334f894301127869024dbb4e587c83d76569c1 | 466edb7992da7d4eac62e972e6082587c1410a78 | 2022-07-27T13:47:11Z | python | 2022-11-16T22:06:57Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,384 | [".github/workflows/weekly_mac_ci.yml", "docs/CIPipelines.md", "workflow_scripts/test_model_zoo.py"] | Make test_model_zoo CI use ONNX Hub directly to enhance test coverage | # Feature Request
### System information
ONNX version (you are using): develop
### What is the problem that this feature solves?
Increase test coverage for ONNX Hub, and simplify the existing [download code](https://github.com/onnx/onnx/blob/main/workflow_scripts/test_model_zoo.py) by using ONNX Hub directly.
### Describe the alternatives you have considered
Write additional code to test ONNX Hub.
### Describe the feature
The code for testing ONNX Model Zoo models was written before ONNX Hub was introduced. Since we can now easily download ONNX Model Zoo models with ONNX Hub, ideally we should use it directly instead of maintaining additional code; doing so also enhances test coverage for ONNX Hub. Recently there have been some issues (for instance, https://github.com/onnx/models/issues/546) in onnx.hub, so it would be great to have a CI that checks onnx.hub regularly. The [weekly CI](https://github.com/onnx/onnx/blob/ae3c87bca9e5bf233a07b8c9677a3cc957d43604/.github/workflows/weekly_mac_ci.yml#L64) seems like a good place.
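A sketch of what the simplified weekly test loop could look like when driven by the Hub (illustrative; error handling and reporting omitted):
```python
from onnx import checker, hub

for info in hub.list_models():
    model = hub.load(info.model)  # downloads (and caches) from the zoo
    checker.check_model(model)    # exercises onnx.hub and the model itself
```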
### Will this influence the current api?
No
### Feature Area
Which area in ONNX does this impact? (*e.g. model usage, backend, best practices, converters, shape_inference, version_converter, training, test, operators*): test
### Are you willing to contribute it (Y/N):
Y. I can do it if I have more bandwidth in the future. Anyone interested is welcome to contribute!
| https://github.com/onnx/onnx/issues/4384 | https://github.com/onnx/onnx/pull/5267 | ba9d0060dfb2577fc7dbbd910ca53b698ec68393 | ee36eedffb0d70c2ebd3aa3778a796005bed4764 | 2022-07-26T20:33:41Z | python | 2023-06-05T04:34:49Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,380 | ["onnx/defs/tensor/utils.cc", "onnx/test/shape_inference_test.py"] | New Resize check for sizes and scales breaks some existing models | ### Question
https://github.com/onnx/onnx/pull/4126 introduces a new Resize check that `sizes` and `scales` cannot exist at the same time. Although I think it's a valid check, some existing models do have `sizes` and `scales` simultaneously. Take yolov4 from the ONNX Model Zoo as an example:

Its Resize has `scales` and `sizes` inputs at the same time, but its `scales` input is an empty tensor. The same thing happens in fcn-resnet as well. You can see the shape inference failures in this [weekly CI with ONNX Model Zoo](https://github.com/onnx/onnx/runs/7491394257?check_suite_focus=true).
To make shape inference compatible with existing models, is it possible to relax the check here? For example, further check whether `scales` or `sizes` is an empty tensor, and do not throw an error if both exist but one of them is empty; see the sketch below.
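A Python-style sketch of the relaxed condition (illustrative only; the actual check lives in the C++ shape-inference code):
```python
from typing import Optional

import numpy as np

def resize_scales_sizes_ok(scales: Optional[np.ndarray],
                           sizes: Optional[np.ndarray]) -> bool:
    # Treat an empty tensor the same as an absent input, so models that pass
    # an empty scales next to sizes (e.g. yolov4) still pass shape inference.
    has_scales = scales is not None and scales.size > 0
    has_sizes = sizes is not None and sizes.size > 0
    return has_scales != has_sizes  # exactly one effectively provided
```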
cc @onnx/sig-operators-approvers @jantonguirao
### Further information
- Relevant Area (*e.g. model usage, backend, best practices, converters, shape_inference, version_converter, training, test, operators*):
Shape inference
- Is this issue related to a specific model?
**Model name** (*e.g. mnist*): yolov4, fcn-resnet
**Model opset** (*e.g. 7*): 11, 12 (Resize)
| https://github.com/onnx/onnx/issues/4380 | https://github.com/onnx/onnx/pull/4388 | b47107e163a8a5bb1cd056af5a37fa181812d183 | eb634addcbb23ec1baf4d548d826bf3932527533 | 2022-07-26T16:07:06Z | python | 2022-08-06T06:21:53Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,374 | ["docs/IR.md", "onnx/helper.py"] | Referencing function attributes in body | ### Question
Is there currently a standard way (as described in the [IR](https://github.com/onnx/onnx/blob/main/docs/IR.md)) to reference function attributes inside the function body (i.e. the nodes)?
### Further information
- Relevant Area: IR, model definition, functions
### Notes
The internal implementation uses the field `ref_attr_name` (for example used [here](https://github.com/onnx/onnx/blob/b064f0a88cbe2fb75081dc8d25adde942ad9a916/onnx/defs/parser.cc#L386), called from e.g. `MeanVarianceNormalization`'s [implementation](https://github.com/onnx/onnx/blob/64e79b78a410e69a804f4f198bbaf45193fb3c32/onnx/defs/nn/defs.cc#L2181)) in the language parser, but this does not seem to be documented in the standard nor exposed in `onnx.helper`.
I've found a workaround for accessing function attributes when defining a function in Python: Directly use the protobuf interface for creating an `AttributeProto`:
```py
import onnx

def ref_attr(name: str, ref_name: str, value_type: onnx.AttributeProto.AttributeType):
    # Build an AttributeProto whose value is a reference to a function attribute
    attr = onnx.AttributeProto()
    attr.name = name
    attr.ref_attr_name = ref_name
    attr.type = value_type
    return attr
```
This can be used like so, explicitly adding an `AttributeProto` to `NodeProto.attribute`:
```py
fun_shift_node = onnx.helper.make_node(
'BitShift',
['A', 'L'],
['B']
)
# onnx.helper does not support referencing function attributes, do it manually (to specify direction):
fun_shift_node.attribute.append(ref_attr("direction", "dir", onnx.AttributeProto.STRING))
fun = onnx.helper.make_function(
'test',
'FunBitShift',
['A', 'L'],
['B'],
nodes=[fun_shift_node],
opset_imports=[onnx.helper.make_operatorsetid('', 16)],
attributes=["dir"]
) # fun<dir>: (A, L) -> B
```
Then we can specify the attribute when calling the function normally:
```py
call_shift_node = onnx.helper.make_node(
'FunBitShift',
['D', 'K'],
['R'],
domain='test',
dir="LEFT"
) # R = FunBitShift<dir="LEFT">(D, K)
```
I found no alternate solution that avoids doing this outside the helper.
| https://github.com/onnx/onnx/issues/4374 | https://github.com/onnx/onnx/pull/4393 | 66480ce6ab0f4c56e27bb120f246c3ddd0d1e19e | 5e1c1bbe63d95165cc85e816da906578ad637b68 | 2022-07-25T12:22:59Z | python | 2022-08-02T17:06:38Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,215 | ["onnx/shape_inference/implementation.cc"] | shape_inference failed with ONNX1.11.0 | # Bug Report
### Describe the bug
With ONNX 1.11.0, shape inference fails with:
`[TypeInferenceError] Cannot infer type and shape for node name resnetv17_conv0_fwd. No opset import for domain optype Conv`
It works with ONNX 1.10.0.
### System information
- ONNX version (*e.g. 1.7*): 1.11.0
### Reproduction instructions
- Describe the code to reproduce the behavior.
```
import onnx
from onnx import shape_inference
model = onnx.load('model.onnx')
shape_inference.infer_shapes(model)
```
model path: https://github.com/onnx/models/raw/main/vision/classification/resnet/model/resnet50-v1-12.onnx | https://github.com/onnx/onnx/issues/4215 | https://github.com/onnx/onnx/pull/4224 | 6a89ca2e7bb3222684d14d23dbdb1c8cc1bd9ad2 | 0362e56f2a8582f28b9e631d3388d9bdec6dd509 | 2022-05-24T06:09:59Z | python | 2022-06-04T15:22:34Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,178 | ["onnx/version_converter/convert.h"] | SIGSEGV when convert_version | Hello everybody! When I try to convert the opset version of any ONNX model to:
- 14-16 (when onnx==1.8 is installed),
- 15-16 (when onnx==1.9 is installed),
- 16 (when onnx==1.10 is installed)
with `convert_version` imported from `onnx.onnx_cpp2py_export.version_converter`, I get the following error: Process finished with exit code 139 (interrupted by signal 11: SIGSEGV). I do not face this error when converting to a target version below 14, 15, and 16 respectively. I installed onnx with `pip install onnx==...` and onnxruntime with `pip install onnxruntime==1.11.1` in all these cases.
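A minimal reproduction through the public wrapper (assuming any valid local model.onnx) would be:
```python
import onnx
from onnx import version_converter

model = onnx.load("model.onnx")  # any valid model
converted = version_converter.convert_version(model, 16)  # segfaults as described (exit code 139)
```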
Can you help me resolve this, or tell me what information I should provide? | https://github.com/onnx/onnx/issues/4178 | https://github.com/onnx/onnx/pull/4182 | c759bce40e40c4e093f7864debaa2a16d33191f0 | 77bdf2d3a2b0499d1649fe18f05b107daf7f4969 | 2022-05-02T14:46:11Z | python | 2022-05-16T19:36:14Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,144 | ["third_party/benchmark"] | Failed to build benchmark with gcc-11 | # Bug Report
### Is the issue related to model conversion?
No.
### Describe the bug
Failed to compile the built-in benchmark, potentially due to a missing header.
Please check whether this can be fixed by an update or whether the upstream needs a patch.
```
#9 4659.1 [29/87] Building CXX object CMakeFiles/onnx.dir/onnx/common/status.cc.o
#9 4659.8 [30/87] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark.dir/benchmark_register.cc.o
#9 4659.8 FAILED: third_party/benchmark/src/CMakeFiles/benchmark.dir/benchmark_register.cc.o
#9 4659.8 ccache /opt/rh/devtoolset-11/root/usr/bin/g++ -DHAVE_POSIX_REGEX -DHAVE_STD_REGEX -DHAVE_STEADY_CLOCK -Dbenchmark_EXPORTS -I/tmp/scratch/onnx/third_party/benchmark/include -I/tmp/scratch/onnx/third_party/benchmark/src -I/tmp/scratch/onnx/third_
party/benchmark/src/../include -fdebug-prefix-map='/tmp/scratch'='/usr/local/src' -g -Wnon-virtual-dtor -std=c++11 -Wall -Wextra -Wshadow -pedantic -pedantic-errors -Wfloat-equal -fstrict-aliasing -Wstrict-aliasing -flto -O3 -DNDEBUG -Werror -
fPIC -std=gnu++11 -MD -MT third_party/benchmark/src/CMakeFiles/benchmark.dir/benchmark_register.cc.o -MF third_party/benchmark/src/CMakeFiles/benchmark.dir/benchmark_register.cc.o.d -o third_party/benchmark/src/CMakeFiles/benchmark.dir/benchmark_register
.cc.o -c /tmp/scratch/onnx/third_party/benchmark/src/benchmark_register.cc
#9 4659.8 In file included from ../third_party/benchmark/src/benchmark_register.cc:15:
#9 4659.8 ../third_party/benchmark/src/benchmark_register.h: In function ‘void AddRange(std::vector<T>*, T, T, int)’:
#9 4659.8 ../third_party/benchmark/src/benchmark_register.h:17:30: error: ‘numeric_limits’ is not a member of ‘std’
#9 4659.8 17 | static const T kmax = std::numeric_limits<T>::max();
#9 4659.8 | ^~~~~~~~~~~~~~
#9 4659.8 ../third_party/benchmark/src/benchmark_register.h:17:46: error: expected primary-expression before ‘>’ token
#9 4659.8 17 | static const T kmax = std::numeric_limits<T>::max();
#9 4659.8 | ^
#9 4659.8 ../third_party/benchmark/src/benchmark_register.h:17:49: error: ‘::max’ has not been declared; did you mean ‘std::max’?
#9 4659.8 17 | static const T kmax = std::numeric_limits<T>::max();
#9 4659.8 | ^~~
#9 4659.8 | std::max
#9 4659.8 In file included from /opt/rh/devtoolset-11/root/usr/include/c++/11/algorithm:62,
#9 4659.8 from ../third_party/benchmark/include/benchmark/benchmark.h:175,
#9 4659.8 from ../third_party/benchmark/src/internal_macros.h:4,
#9 4659.8 from ../third_party/benchmark/src/check.h:8,
#9 4659.8 from ../third_party/benchmark/src/benchmark_register.h:6,
#9 4659.8 from ../third_party/benchmark/src/benchmark_register.cc:15:
#9 4659.8 /opt/rh/devtoolset-11/root/usr/include/c++/11/bits/stl_algo.h:3467:5: note: ‘std::max’ declared here
#9 4659.8 3467 | max(initializer_list<_Tp> __l, _Compare __comp)
#9 4659.8 | ^~~
#9 4660.4 [31/87] Building CXX object CMakeFiles/onnx.dir/onnx/defs/attr_proto_util.cc.o
```
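The unresolved `std::numeric_limits` under GCC 11 usually means a header that used to be pulled in transitively no longer is. If so, the upstream patch is likely a one-line include (a sketch, not verified against the benchmark sources):
```cpp
// third_party/benchmark/src/benchmark_register.h (sketch)
#include <limits>  // GCC 11 no longer includes this transitively; needed for std::numeric_limits
```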
### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 16.04*): CentOS 7
- ONNX version (*e.g. 1.7*): main
- Python version: 3.8 (scl rh-python38)
- GCC/Compiler version (if compiling from source): gcc-11 (scl devtoolset-11) | https://github.com/onnx/onnx/issues/4144 | https://github.com/onnx/onnx/pull/4145 | 1b8a9e3c6565e9a287fec40def0031f944f9c235 | 61ce4dfbe7be3a89fa5d32bf0ac9d64a872601c9 | 2022-04-15T08:32:30Z | python | 2022-04-20T20:39:00Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,131 | ["onnx/shape_inference/implementation.cc"] | refactor shape inference | null | https://github.com/onnx/onnx/issues/4131 | https://github.com/onnx/onnx/pull/4127 | 61ce4dfbe7be3a89fa5d32bf0ac9d64a872601c9 | bd849e86cb47e75b765ad120b1a1a05ef5cf8c10 | 2022-04-12T18:50:45Z | python | 2022-04-21T20:31:21Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,116 | ["docs/Changelog.md", "docs/Operators.md", "onnx/defs/tensor/defs.cc"] | Incorrect wording in Slice operator's spec | # Bug Report
### Describe the bug
In https://github.com/onnx/onnx/pull/3908, the following line was added:
The clamping for the adjusted `ends[i]` depends on the sign of `steps[i]` and must
accommodate copying 0 through `dims[axes[i]]` elements, so for positive stepping
`end[axes[i]]` is clamped to `[0, dims[axes[i]]]`, while for negative stepping it
is clamped to `[-1, ends[i]-1]`.
I think the final `...ends[i]-1` must read `dims[axes[i]] - 1` if I am understanding correctly
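A quick NumPy sanity check of the intended clamping (values chosen ad hoc):
```python
import numpy as np

x = np.array([1, 2, 3])
# With a negative step, the exclusive end must clamp to the position just
# before index 0 so that all dims[axes[i]] elements can be copied:
print(x[2::-1])    # [3 2 1]
print(x[2:-4:-1])  # [3 2 1] as well: the out-of-range -4 is clamped
```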
| https://github.com/onnx/onnx/issues/4116 | https://github.com/onnx/onnx/pull/4117 | 2ab133404afce34552aaccd86e7023e1fb9a60d2 | 50ba2e5d6404fa0618a261684cc4095376d0af2b | 2022-04-06T00:21:32Z | python | 2022-04-12T21:05:09Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,115 | ["docs/Changelog.md", "docs/Operators.md", "onnx/defs/tensor/defs.cc"] | Seeking clarity on Slice operator's clamping | # Ask a Question
### Question
In https://github.com/onnx/onnx/pull/3908/ which seems to have clarified Slice's operation, there seems to be a contradiction with how Numpy does clamping (the numpy link reference is also added to the op's spec) versus what is prescribed by the clarification. Furthermore, the clarification is without an opset bump for Slice (indicating that this was the behavior all along).
Prescribed start clamping:
Then `start[axes[i]]` is the adjusted `starts[i]` clamped into range of valid indices, i.e. `[0, dims[axes[i]]-1]`.
This seems to indicate that starts clamping is slicing direction oblivious.
Here is an illustrative example from Numpy which indicates starts clamping is slicing direction dependent.
```python
>>> x = np.array([1, 2, 3])
```
Positive stepping:
```python
>>> x[5:6:1]
array([], dtype=int32)
```
Here 5 is not clamped to the valid range (which would make it 2); clamping would have produced a non-empty result.
Negative stepping:
```python
>>> x[5:0:-1]
array([3, 2])
```
Here 5 is clamped to the valid range (2), which yields the printed value.
| https://github.com/onnx/onnx/issues/4115 | https://github.com/onnx/onnx/pull/4117 | 2ab133404afce34552aaccd86e7023e1fb9a60d2 | 50ba2e5d6404fa0618a261684cc4095376d0af2b | 2022-04-06T00:06:19Z | python | 2022-04-12T21:05:09Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,078 | ["onnx/common/ir_pb_converter.cc"] | Version converter crashes if a value_info's name does not exist in the graph | # Bug Report
### Describe the bug
The version converter crashes on an ONNX model (attached) without reporting any error. The root cause is [here](https://github.com/onnx/onnx/blob/94e2f64551ded652df53a7e9111031e8aabddaee/onnx/common/ir_pb_converter.cc#L357). The code assumes the value_info's name already exists in the graph. Although this might be an invalid model (the value_info seems unused), the version converter should not crash without any error or warning.
cc @skottmckay
### Reproduction instructions
- Describe the code to reproduce the behavior.
```
import onnx
from onnx import version_converter
m = onnx.load('pt_mobilenet_v2.ssi.basic.onnx')
version_converter.convert_version(m, 15)
...
```
- Attach the ONNX model to the issue (where applicable)
[pt_mobilenet_v2.ssi.basic.onnx.zip](https://github.com/onnx/onnx/files/8326917/pt_mobilenet_v2.ssi.basic.onnx.zip)
### Expected behavior
The model should be successfully converted without crash.
### Notes
Will submit a PR for it soon.
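The fix should amount to guarding the lookup instead of assuming the name resolves (a C++ sketch; `value_by_name_of` is a stand-in for whatever map the converter actually keeps):
```cpp
// onnx/common/ir_pb_converter.cc (sketch)
auto it = value_by_name_of.find(value_info.name());
if (it == value_by_name_of.end()) {
  // Unknown value_info name: skip it (or fail with a message) instead of crashing.
  continue;
}
```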
| https://github.com/onnx/onnx/issues/4078 | https://github.com/onnx/onnx/pull/4079 | dad79aca26a276a7b8175b9add0614b5ebaa6f89 | b70a19c78ea087d0eb485820881d2e3ee3408c4f | 2022-03-22T18:29:55Z | python | 2022-03-30T17:51:53Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,058 | ["community/sigs.md"] | Slack URL is broken | https://slack.lfai.foundation/ is broken. Is there any way to join the slack channel? | https://github.com/onnx/onnx/issues/4058 | https://github.com/onnx/onnx/pull/5086 | 30619cd473119c94088e20762089e54443cdac50 | 1d814e204b92642412953d47c100d3ff6a29fc23 | 2022-03-07T06:45:47Z | python | 2023-05-18T15:28:50Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,053 | ["requirements.txt"] | Failed to install latest version of onnx-weekly nightly job. | # Bug Report
### Describe the bug
In some cases, when we run the command `pip install --index-url https://test.pypi.org/simple/ onnx-weekly` to install the latest onnx-weekly nightly build, it fails to install the latest version and only an old version, 1.10.1.dev20210830, gets installed. Some logs are shown below:
```
:~/Desktop/code/tensorflow-onnx$ pip install --index-url https://test.pypi.org/simple/ onnx-weekly
Looking in indexes: https://test.pypi.org/simple/
WARNING: Keyring is skipped due to an exception: Failed to create the collection: Prompt dismissed..
Collecting onnx-weekly
Downloading https://test-files.pythonhosted.org/packages/19/56/84b1eb141b6129ffeac8c0c3a76e4170771d003479857ef1d855fa3b09b9/onnx_weekly-1.11.0.dev20220228-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.8 MB)
|████████████████████████████████| 12.8 MB 1.0 MB/s
Downloading https://test-files.pythonhosted.org/packages/3d/13/392cc9c308014001ce5d74e914bdb8d7a917b23f2edb29ac56d4b28c0ed6/onnx_weekly-1.11.0.dev20220221-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.8 MB)
|████████████████████████████████| 12.8 MB 1.1 MB/s
Downloading https://test-files.pythonhosted.org/packages/30/af/2a359cc38da90572c000ff10ac4e939d3423e7d1215300afcfe07044acc5/onnx-weekly-1.11.0.dev20220214.tar.gz (9.9 MB)
|████████████████████████████████| 9.9 MB 87 kB/s
ERROR: Command errored out with exit status 1:
command: /home/anaconda3/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-_u6v0gfw/onnx-weekly_9e3e31dab4a743679efa5a3fc060864b/setup.py'"'"'; __file__='"'"'/tmp/pip-install-_u6v0gfw/onnx-weekly_9e3e31dab4a743679efa5a3fc060864b/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-oheeqf44
cwd: /tmp/pip-install-_u6v0gfw/onnx-weekly_9e3e31dab4a743679efa5a3fc060864b/
Complete output (31 lines):
fatal: not a git repository (or any of the parent directories): .git
Traceback (most recent call last):
File "/home/anaconda3/lib/python3.8/site-packages/setuptools/installer.py", line 75, in fetch_build_egg
subprocess.check_call(cmd)
File "/home/anaconda3/lib/python3.8/subprocess.py", line 364, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/home/anaconda3/bin/python', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/tmp/tmpm3gv4xk1', '--quiet', 'pytest-runner']' died with <Signals.SIGSEGV: 11>.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-_u6v0gfw/onnx-weekly_9e3e31dab4a743679efa5a3fc060864b/setup.py", line 336, in <module>
setuptools.setup(
File "/home/anaconda3/lib/python3.8/site-packages/setuptools/__init__.py", line 152, in setup
_install_setup_requires(attrs)
File "/home/anaconda3/lib/python3.8/site-packages/setuptools/__init__.py", line 147, in _install_setup_requires
dist.fetch_build_eggs(dist.setup_requires)
File "/home/anaconda3/lib/python3.8/site-packages/setuptools/dist.py", line 686, in fetch_build_eggs
resolved_dists = pkg_resources.working_set.resolve(
File "/home/anaconda3/lib/python3.8/site-packages/pkg_resources/__init__.py", line 766, in resolve
dist = best[req.key] = env.best_match(
File "/home/anaconda3/lib/python3.8/site-packages/pkg_resources/__init__.py", line 1051, in best_match
return self.obtain(req, installer)
File "/home/anaconda3/lib/python3.8/site-packages/pkg_resources/__init__.py", line 1063, in obtain
return installer(requirement)
File "/home/anaconda3/lib/python3.8/site-packages/setuptools/dist.py", line 745, in fetch_build_egg
return fetch_build_egg(self, req)
File "/home/anaconda3/lib/python3.8/site-packages/setuptools/installer.py", line 77, in fetch_build_egg
raise DistutilsError(str(e)) from e
distutils.errors.DistutilsError: Command '['/home/anaconda3/bin/python', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/tmp/tmpm3gv4xk1', '--quiet', 'pytest-runner']' died with <Signals.SIGSEGV: 11>.
...
```
### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 16.04*): Linux Ubuntu 20.04
- ONNX version (*e.g. 1.7*): nightly build
- Python version: 3.8
### Reproduction instructions
- Execute below command in some environments:
```
pip install --index-url https://test.pypi.org/simple/ onnx-weekly
```
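From the log, pip falls back to building the `.tar.gz` sdist (whose build requires `pytest-runner`, which segfaults here). A possible workaround (an assumption, not verified) is to refuse sdists so pip keeps the newest wheel:
```
pip install --index-url https://test.pypi.org/simple/ --only-binary :all: onnx-weekly
```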
### Expected behavior
The installed onnx-weekly version should be the first (newest) one: onnx_weekly-1.11.0.dev20220228-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.
### Notes
All dependency versions:
+ pip freeze --all
absl-py==0.15.0
astunparse==1.6.3
attrs==21.4.0
cachetools==4.2.4
certifi==2021.10.8
charset-normalizer==2.0.12
clang==5.0
coverage==6.3.2
flatbuffers==1.12
gast==0.4.0
google-auth==1.35.0
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
graphviz==0.19.1
grpcio==1.44.0
h5py==3.1.0
idna==3.3
importlib-metadata==4.11.1
iniconfig==1.1.1
keras==2.6.0
Keras-Preprocessing==1.1.2
Markdown==3.3.6
numpy==1.19.5
oauthlib==3.2.0
onnx-weekly==1.10.1.dev20210830
onnxruntime==1.9.0
onnxruntime-extensions==0.3.1
opt-einsum==3.3.0
packaging==21.3
pandas==1.4.1
parameterized==0.8.1
Pillow==9.0.1
pip==21.2.4
pluggy==1.0.0
protobuf==3.19.4
py==1.11.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing==3.0.7
pytest==7.0.1
pytest-cov==3.0.0
pytest-runner==6.0.0
python-dateutil==2.8.2
pytz==2021.3
PyYAML==6.0
requests==2.27.1
requests-oauthlib==1.3.1
rsa==4.8
setuptools==58.0.4
six==1.15.0
tensorboard==2.6.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow==2.6.2
tensorflow-estimator==2.6.0
tensorflow-hub==0.12.0
tensorflow-text==2.6.0
tensorflowjs==3.13.0
termcolor==1.1.0
tf2onnx==1.10.0
tomli==2.0.1
typing-extensions==3.7.4.3
urllib3==1.26.8
Werkzeug==2.0.3
wheel==0.37.1
wrapt==1.12.1
zipp==3.7.0 | https://github.com/onnx/onnx/issues/4053 | https://github.com/onnx/onnx/pull/4059 | acc127219b45bc27b0180b1fdc08299eac81b167 | 7a01a9683ef432d5ade0aab88c7e6c52f9581e5b | 2022-03-04T16:42:42Z | python | 2022-03-10T21:28:51Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,049 | [".github/workflows/manylinux/entrypoint.sh", ".github/workflows/manylinux/test_package_i686.sh", ".github/workflows/release_linux_i686.yml", "docs/CIPipelines.md", "docs/OnnxReleases.md"] | ONNX does not work properly with Python 3.10 on Linux i686 environment | # Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
Please note that ONNX does not work properly with Python 3.10 in a Linux i686 environment. With the Python 3.10 i686 ONNX wheel, `onnx/backend/test/cmd_tools.py generate-data` somehow takes forever. It might be because NumPy does not support Python 3.10 on i686.
### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 16.04*): manylinux2014-i686
- ONNX version (*e.g. 1.7*): 1.11.0 (latest main branch)
- Python version: 3.10
### Reproduction instructions
Follow the steps [here](https://github.com/onnx/onnx/blob/main/.github/workflows/release_linux_i686.yml) to build ONNX. Then using `onnx/backend/test/cmd_tools.py generate-data` will take forever.
### Expected behavior
`onnx/backend/test/cmd_tools.py generate-data` should generate data shortly.
| https://github.com/onnx/onnx/issues/4049 | https://github.com/onnx/onnx/pull/4111 | 00738d2032ca0a39d9124b02085a90adda96d018 | 4b3e3f95b32fbe605e0052fc7d55612907ca3098 | 2022-03-02T03:19:49Z | python | 2022-05-03T01:04:57Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,043 | ["docs/Syntax.md", "onnx/defs/parser.cc", "onnx/defs/parser.h", "onnx/test/cpp/parser_test.cc"] | Support for SequenceProto in ONNX parser | # Ask a Question
### Question
I'd like to write ONNX models including sequence inputs/outputs. I haven't seen any examples in the code base, or support for it in parser.cc.
Are there any plans to support `SequenceProto` in ONNX parser?
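For what it's worth, sequence-typed graph inputs and outputs can already be built through the Python helper; it is only the textual parser that lacks a syntax for them. A minimal sketch:
```python
import onnx
from onnx import helper, TensorProto

seq_type = helper.make_sequence_type_proto(
    helper.make_tensor_type_proto(TensorProto.FLOAT, ["N"]))
graph = helper.make_graph(
    [helper.make_node("SequenceLength", ["seqs"], ["len"])],
    "seq_example",
    [helper.make_value_info("seqs", seq_type)],
    [helper.make_tensor_value_info("len", TensorProto.INT64, [])],
)
model = helper.make_model(graph, opset_imports=[helper.make_operatorsetid("", 16)])
onnx.checker.check_model(model)
```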
### Further information
- Relevant Area (*e.g. model usage, backend, best practices, converters, shape_inference, version_converter, training, test, operators*):
ONNX parser
- Is this issue related to a specific model?
No
### Notes
N/A
| https://github.com/onnx/onnx/issues/4043 | https://github.com/onnx/onnx/pull/4136 | 5a9251a5f96ce44981936db38495d3f57ff1c128 | 5b1346e30d66e4ec550f6b63c3883b258a2e8e3e | 2022-02-28T14:47:59Z | python | 2022-05-05T05:37:03Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,030 | ["requirements-lintrunner.txt"] | Slight different between pytorch model and its onnx model | # Ask a Question
### Question
I converted my BERT model to ONNX and compared inference on the same input text; however, the results are slightly different.
```
result1: ....-0.6521257758140564, 0.30830827355384827, -0.7473717927932739
result2: ....-0.6521257758140564, 0.3083084225654602, -0.7473714351654053
```
Why the difference? Shouldn't they be the same?
### Further information
transformers==4.9.2
onnxruntime==1.10.0
### Notes
My code snippet:
```python
def use_onnx():
    from transformers import AutoTokenizer
    from onnxruntime import InferenceSession

    tokenizer = AutoTokenizer.from_pretrained("/home/text_embeddings/result/raw_model/")
    session = InferenceSession("./raw_model-onnx/model.onnx")
    test_text = "喜欢唱 歌 也喜欢交朋友"
    # Tokenize to NumPy tensors so they can be fed to onnxruntime directly
    inputs = tokenizer(test_text, padding=True, truncation=True, max_length=64, return_tensors="np")
    outputs = session.run(output_names=["pooler_output"], input_feed=dict(inputs))
    print(outputs[0][0].tolist())

def use_torch():
    import torch
    from transformers import AutoModel, AutoTokenizer

    model_path = "/home/text_embeddings/result/raw_model/"
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModel.from_pretrained(model_path)
    texts = "喜欢唱 歌 也喜欢交朋友"
    inputs = tokenizer(texts, padding=True, truncation=True, max_length=64, return_tensors="pt")
    with torch.no_grad():
        embeddings = model(**inputs, output_hidden_states=True, return_dict=True)
    print(embeddings.pooler_output.tolist())
```
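For what it's worth, differences at the 1e-6/1e-7 level are expected float32 noise: the two runtimes order and fuse the matrix multiplications differently, so rounding differs. The usual way to compare is a tolerance check (`torch_out` / `ort_out` are placeholder names for the two result arrays above):
```python
import numpy as np

# torch_out / ort_out stand in for the two embedding arrays printed above
np.testing.assert_allclose(torch_out, ort_out, rtol=1e-3, atol=1e-5)
```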
| https://github.com/onnx/onnx/issues/4030 | https://github.com/onnx/onnx/pull/5787 | e63475bf1c59c7ee4ee6395ccd186499d637ee27 | 7e0c267e9fdf229550957b7420b9f753890a7f44 | 2022-02-20T12:37:21Z | python | 2023-12-01T16:13:41Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,024 | ["requirements-lintrunner.txt"] | issue in converting from ONNX to Tensorflow due to Resize | ### Question
Not sure if there's a way to fix this; if someone could point me to a solution, or at least a direction to investigate, that would be great. So:
I've successfully converted my model from MXNet to ONNX, but the original model has 2 layers such as:
```
"op": "UpSampling",
"name": "rf_c3_upsampling",
"attrs": {
"num_args": "1",
"sample_type": "nearest",
"scale": "2",
"workspace": "512"
},
"inputs": [[227, 0, 0]]
```
This operation is automatically converted in ONNX into a `Resize` operation with `coordinate_transformation_mode=half_pixel`, and this is preventing me from completing the conversion to my target framework, `Tensorflow`; in fact I get the following error:
```
RuntimeError: Resize coordinate_transformation_mode=half_pixel and mode=nearest is not supported in Tensorflow.
```
Is there a way to convert a layer like the one above into some form that is compatible with TensorFlow (Lite)?
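One workaround worth trying (an assumption, not a verified numerical equivalence): MXNet's nearest-neighbor 2x UpSampling typically corresponds to `coordinate_transformation_mode="asymmetric"`, which onnx-tf can represent. A sketch that rewrites the attribute in place:
```python
import onnx

model = onnx.load("model.onnx")
for node in model.graph.node:
    if node.op_type == "Resize":
        for attr in node.attribute:
            if attr.name == "coordinate_transformation_mode":
                attr.s = b"asymmetric"  # attribute strings are bytes
onnx.save(model, "model_patched.onnx")
```
Verify the patched model's outputs against the original before relying on it.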
### Further information
I'm using:
```
tensorflow-addons 0.15.0
tensorflow-gpu 2.7.0
tensorflow-probability 0.15.0
onnx 1.10.2
onnx-tf 1.9.0
onnxruntime 1.10.0
```
I have already tried all opsets but couldn't succeed. Please help.
| https://github.com/onnx/onnx/issues/4024 | https://github.com/onnx/onnx/pull/5787 | e63475bf1c59c7ee4ee6395ccd186499d637ee27 | 7e0c267e9fdf229550957b7420b9f753890a7f44 | 2022-02-16T17:36:28Z | python | 2023-12-01T16:13:41Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,009 | ["docs/Changelog.md", "docs/Operators.md", "onnx/defs/controlflow/defs.cc", "onnx/defs/controlflow/old.cc"] | Spec for Scan from opset 9+ has unused type constraint 'I' | # Bug Report
### Describe the bug
The ONNX spec for Scan should have dropped the 'I' type constraint in opset 9 when the sequence lengths input was removed.
Is it possible to update the spec to remove this unused constraint from Scan 9, 11 and 16?
Use case: The spurious constraint makes it harder to validate that an implementation matches the spec. | https://github.com/onnx/onnx/issues/4009 | https://github.com/onnx/onnx/pull/4012 | 850a81b0b77786bf99ea90580242b084f86a6235 | c940fa3fea84948e46603cab2f86467291443beb | 2022-02-11T06:58:23Z | python | 2022-02-24T21:44:33Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 4,008 | ["requirements-lintrunner.txt"] | Could I change the opset version of the ONNX model? | # Ask a Question
### Question
Could I change the opset version of the ONNX model?
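Yes: the built-in version converter does exactly this (a minimal sketch; file names are placeholders):
```python
import onnx
from onnx import version_converter

model = onnx.load("model.onnx")
converted = version_converter.convert_version(model, 15)  # target opset version
onnx.save(converted, "model_opset15.onnx")
```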
Thank you!
| https://github.com/onnx/onnx/issues/4008 | https://github.com/onnx/onnx/pull/5787 | e63475bf1c59c7ee4ee6395ccd186499d637ee27 | 7e0c267e9fdf229550957b7420b9f753890a7f44 | 2022-02-11T06:52:40Z | python | 2023-12-01T16:13:41Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,991 | ["onnx/checker.cc", "onnx/common/path.cc", "onnx/common/path.h", "onnx/test/cpp/common_path_test.cc", "onnx/test/test_external_data.py"] | The onnx runtime allow to load the external_data from the file outside the folder | The `external_data` field of a tensor proto can contain a path to a file outside the model's current directory or the user-provided directory, for example "../../../etc/passwd".
There is no validation of this in this function: https://github.com/onnx/onnx/blob/96516aecd4c110b0ac57eba08ac236ebf7205728/onnx/checker.cc#L129
The Python library has a `_sanitize_path` function with some basic restrictions, but it does not apply when the default onnxruntime package is used to execute the model.
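For illustration, even a coarse guard in the C++ checker would block the obvious traversal (a sketch; the variable name is hypothetical, and a real fix should canonicalize the joined path rather than string-match):
```cpp
// sketch for onnx/checker.cc
if (external_data_path.find("..") != std::string::npos) {
  fail_check("external_data path must stay inside the model directory: ",
             external_data_path);
}
```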
I can provide POC and create a patch by request.
| https://github.com/onnx/onnx/issues/3991 | https://github.com/onnx/onnx/pull/4400 | eb634addcbb23ec1baf4d548d826bf3932527533 | f369b0e859024095d721f1d1612da5a8fa38988d | 2022-02-06T18:21:58Z | python | 2022-08-10T15:35:50Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,968 | [".azure-pipelines/Windows-CI.yml", "onnx/backend/test/report/coverage.py", "onnx/helper.py", "setup.py"] | mypy typecheck (mypy version now is 0.760 in main branch) cannot work with Python 3.8 and 3.9 | # Bug Report
mypy typecheck (mypy 0.760 on the main branch) cannot work with Python 3.8 and 3.9; it throws a few check errors. This is not caught by CI (Azure Pipelines) because the mypy typecheck only runs with Python 3.6 and 3.7. We should add 3.8 and 3.9 to the Azure pipeline to cover this.
### Reproduction instructions
```
python setup.py typecheck
...
```
### Expected behavior
Should not produce any error. | https://github.com/onnx/onnx/issues/3968 | https://github.com/onnx/onnx/pull/3966 | 83fa57c74edfd13ddac9548b8a12f9e3e2ed05bd | b5de5867f0e5ecb471baed763db7fc8836ccb201 | 2022-01-24T22:26:05Z | python | 2022-01-31T18:29:23Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,959 | ["docs/Changelog.md", "onnx/backend/test/case/node/resize.py", "onnx/backend/test/data/node/test_resize_downsample_sizes_nearest_tf_half_pixel_for_nn/model.onnx", "onnx/defs/tensor/old.cc"] | Resize-11 scales input should be optional | The specification for Resize-11 says the `scales` input is required / single, but the text description says it should be optional. From [the doc](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#resize-11):
> <dt><tt>scales</tt> : tensor(float)</dt>
> <dd>The scale array along each dimension. It takes value greater than 0. If it's less than 1, it's sampling down, otherwise, it's > upsampling. The number of elements of 'scales' should be the same as the rank of input 'X'. Only one of 'scales' and 'sizes' can be specified. If 'size' is needed, the user can use an empty string as the name of 'scales' in this operator's input list.</dd>
However, if I construct a model with `scales` set to an empty string, the checker fails. See repro code below.
## Repro instructions
```python
import onnx
from google.protobuf import text_format
text_proto = """
ir_version: 5
producer_name: "backend-test"
graph {
node {
output: "ROI"
op_type: "Constant"
attribute {
name: "value"
t {
data_type: 1
float_data: 0.0
name: "const_tensor"
}
type: TENSOR
}
}
node {
input: "X"
input: "ROI"
input: ""
input: "sizes"
output: "Y"
op_type: "Resize"
attribute {
name: "coordinate_transformation_mode"
s: "tf_half_pixel_for_nn"
type: STRING
}
attribute {
name: "mode"
s: "nearest"
type: STRING
}
}
name: "resize_11_bug"
input {
name: "X"
type {
tensor_type {
elem_type: 1
shape {
dim {
dim_value: 1
}
dim {
dim_value: 1
}
dim {
dim_value: 4
}
dim {
dim_value: 4
}
}
}
}
}
input {
name: "sizes"
type {
tensor_type {
elem_type: 7
shape {
dim {
dim_value: 4
}
}
}
}
}
output {
name: "Y"
type {
tensor_type {
elem_type: 1
shape {
dim {
dim_value: 1
}
dim {
dim_value: 1
}
dim {
dim_value: 3
}
dim {
dim_value: 2
}
}
}
}
}
}
opset_import {
version: 11
}
"""
model = text_format.Parse(text_proto, onnx.onnx_ml_pb2.ModelProto())
onnx.checker.check_model(model)
```
### expected behavior
Checker doesn't complain
### actual behavior
```
ValidationError: Node ()'s input 2 is marked single but has an empty string in the graph
==> Context: Bad node spec for node. Name: OpType: Resize
```
| https://github.com/onnx/onnx/issues/3959 | https://github.com/onnx/onnx/pull/3976 | e4e17979171b0318205978fbb77d4316d692e771 | f0c4ee2b46b3d00c005d1ecd9440cfc151f4fd1d | 2022-01-21T23:36:52Z | python | 2022-02-02T19:20:36Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,956 | ["requirements-lintrunner.txt"] | Why is `sorted` attribute of Unique an int instead of bool | The operator Unique has an attribute `sorted` of type int, which has only two legal values, 0 or 1. Why wasn't type `bool` chosen for this attribute? Is there a notion that more values might be allowed in the future, or is it just for compatibility with another framework?
| https://github.com/onnx/onnx/issues/3956 | https://github.com/onnx/onnx/pull/5728 | 21300b77782fe207584f1cebb6120f71c7146e20 | fd699fbb4f69a0d2d12129013145c9f89a8c7eb0 | 2022-01-20T15:45:07Z | python | 2023-11-01T18:21:52Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,955 | [".github/workflows/release_mac.yml", "docs/CIPipelines.md"] | [Tracking] Upgrade MacOS to 11.0 from 10.15 in Mac release pipeline | Since MacOS-latest now uses 11.0 by default in GitHub Action, Mac release pipeline should start to use 11.0: https://github.com/onnx/onnx/blob/5668578e64f026f1e59771dc5427d9984680ca9d/.github/workflows/release_mac.yml#L20
Create this issue for tracking this task. | https://github.com/onnx/onnx/issues/3955 | https://github.com/onnx/onnx/pull/4379 | 5f3fb1f1bd0989a0b25c347ad0346dfcce3c2cd4 | 55d3b80f3ea1261f3f74f2e35844c204de85835a | 2022-01-19T22:42:12Z | python | 2022-08-04T00:22:05Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,944 | ["onnx/test/data_propagation_test.py", "onnx/test/shape_inference_test.py"] | Refactor `shape_inference_test.py` and separate shape inference tests with data propagation. | # Feature Request
Related to https://github.com/onnx/onnx/pull/3784#pullrequestreview-852061959
Refactor `onnx/test/shape_inference_test.py` and create `onnx/test/data_propagation_test.py` for testing shape inference with data propagation.
### What is the problem that this feature solves?
`onnx/test/shape_inference_test.py` is already quite huge and it should focus on shape inference tests without data propagation.
### Will this influence the current api?
No. This is just a refactoring of tests and does not affect apis.
### Are you willing to contribute it (Y/N):
Yes. | https://github.com/onnx/onnx/issues/3944 | https://github.com/onnx/onnx/pull/4302 | d6a90f63852534a8051679b1b73b1355bfad41a2 | ca5f4dc0f6114d57e98cbc0b70859baa4623d20c | 2022-01-14T00:12:17Z | python | 2022-06-27T16:59:50Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,942 | ["requirements-lintrunner.txt"] | Mapping input variables to model input variables | I have been tasked with creating an ONNX model consumer for a database. Users will be able to load models into the database and use the model to score a table of input data. The question I have is how do you map the columns of a table to the necessary inputs of the model?
For instance, if one user created a model in Python on the iris dataset and shared that model with a coworker, does the user have to manually tell the coworker the exact order of the input variables, or is this information stored somewhere within the ONNX model?
When looking at a model I generated in Python, I only see the input listed as a float array of 4 values, but I don't see anything that gives the ordering of those values, say, sepal_length, sepal_width, petal_length, and petal_width.
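What the model does store is each graph input's name, element type, and shape; what it generally does not store is per-feature column names inside a single input tensor, unless the exporter wrote them into `doc_string` or `metadata_props`. Reading the declared inputs looks like this (the file name is a placeholder):
```python
import onnx

model = onnx.load("iris.onnx")
for inp in model.graph.input:
    tt = inp.type.tensor_type
    dims = [d.dim_value or d.dim_param for d in tt.shape.dim]
    print(inp.name, tt.elem_type, dims)
```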
I'm familiar with PMML and the inputs are spelled out rather clearly and I'm hoping the same exists in the onnx format.
Thanks!
| https://github.com/onnx/onnx/issues/3942 | https://github.com/onnx/onnx/pull/5728 | 21300b77782fe207584f1cebb6120f71c7146e20 | fd699fbb4f69a0d2d12129013145c9f89a8c7eb0 | 2022-01-13T19:44:53Z | python | 2023-11-01T18:21:52Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,937 | ["requirements-lintrunner.txt"] | python -m transformers.onnx --model=bert-base-cased onnx/bert-base-cased/ | Based on [SO post](https://stackoverflow.com/q/70652768/17840900).
Kernel: `conda_pytorch_p36`. I performed Restart & Run All, and refreshed the file view in the working directory.
I'm following along with this [code tutorial][1], the first Python code module.
```
python -m transformers.onnx --model=bert-base-cased onnx/bert-base-cased/
```
Console:
```
python -m transformers.onnx --model=bert-base-cased onnx/bert-base-cased/
File "<ipython-input-1-0652039cb84d>", line 1
python -m transformers.onnx --model=bert-base-cased onnx/bert-base-cased/
^
SyntaxError: invalid syntax
```
Terminal:
```
sh-4.2$ python -m transformers.onnx --model=bert-base-cased onnx/bert-base-cased/
/home/ec2-user/anaconda3/envs/JupyterSystemEnv/bin/python: Error while finding module specification for 'transformers.onnx' (ModuleNotFoundError: No module named 'transformers')
```
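Both failures look environmental rather than ONNX-related (an assumption): the first cell runs a shell command as Python code, and the terminal's Python environment lacks `transformers`. In a notebook the usual fix is:
```
%pip install transformers[onnx]
!python -m transformers.onnx --model=bert-base-cased onnx/bert-base-cased/
```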
[1]: https://huggingface.co/docs/transformers/master/serialization | https://github.com/onnx/onnx/issues/3937 | https://github.com/onnx/onnx/pull/5728 | 21300b77782fe207584f1cebb6120f71c7146e20 | fd699fbb4f69a0d2d12129013145c9f89a8c7eb0 | 2022-01-12T12:50:40Z | python | 2023-11-01T18:21:52Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,936 | [".azure-pipelines/Linux-CI.yml", ".azure-pipelines/MacOS-CI.yml", ".azure-pipelines/Windows-CI.yml", "docs/CIPipelines.md", "docs/CONTRIBUTING.md", "docs/TypeAnnotations.md", "setup.py", "tools/style.sh"] | Is this concat operation supported by ONNX? | # Source
PaddlePaddle->onnx->onnx-sim
onnx version:11.0
# Describe
The input feature map and two learned feature maps are concatenated along a dimension.
# New Operator

# Info for one of the two learned feature maps:

# Confused point
How does this work?
Any suggestions for how to refactor it with traditional operations in Caffe?
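If the node is just a `Concat` whose extra inputs are learned tensors stored as graph initializers (which the screenshots suggest), an equivalent construction is straightforward (a sketch with made-up shapes):
```python
import numpy as np
from onnx import helper, numpy_helper

# Learned tensors become graph initializers feeding Concat alongside the
# runtime input X (append w1/w2 to graph.initializer when building the graph):
w1 = numpy_helper.from_array(np.zeros((1, 4, 8, 8), np.float32), name="w1")
w2 = numpy_helper.from_array(np.zeros((1, 4, 8, 8), np.float32), name="w2")
concat = helper.make_node("Concat", ["X", "w1", "w2"], ["Y"], axis=1)
```
In Caffe terms this is likely a plain Concat whose extra inputs are fixed tensors (e.g., supplied via Input or DummyData layers).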
# Thanks
I hope I can get your help!
Thanks !
| https://github.com/onnx/onnx/issues/3936 | https://github.com/onnx/onnx/pull/3988 | 9521ab161cfa555b93bf2c2084dc69cc50e30356 | 6411194f01dcc767d60740535cd415e4730f1761 | 2022-01-12T03:24:06Z | python | 2022-03-23T23:30:58Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,924 | ["docs/Changelog.md", "onnx/backend/test/case/node/resize.py", "onnx/backend/test/data/node/test_resize_downsample_sizes_nearest_tf_half_pixel_for_nn/model.onnx", "onnx/defs/tensor/old.cc"] | Resize tf_half_pixel_for_nn | # Bug Report
### Describe the bug
The `Resize` operator had a `coordinate_transformation_mode` attribute value `tf_half_pixel_for_nn` introduced in opset version 11, but removed in version 13.
Yet there still exists the `onnx/backend/test/data/node/test_resize_downsample_sizes_nearest_tf_half_pixel_for_nn/` backend test with the opset version 13 and that value set to the attribute.
### Expected behavior
Either the quoted test should be removed or the option is re-introduced in the documentation for `Resize`.
Possibly the test could remain, if the opset version is reduced back to 11 for it - but it looks like the strategy for ONNX tests is to remove old opset version tests from the HEAD of the repo, so this doesn't sound like the correct solution | https://github.com/onnx/onnx/issues/3924 | https://github.com/onnx/onnx/pull/3976 | e4e17979171b0318205978fbb77d4316d692e771 | f0c4ee2b46b3d00c005d1ecd9440cfc151f4fd1d | 2022-01-06T17:51:53Z | python | 2022-02-02T19:20:36Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,909 | ["requirements-lintrunner.txt"] | `tf.function` causes `onnx.checker.check_model` to fail, while onnxruntime runs fine | # Bug Report
When a TensorFlow model contains a `tf.function`-decorated function with a for loop in it, the tf->onnx conversion yields warnings:
```
WARNING:tensorflow:From /Users/amit/Programs/lammps/kim/kliff/venv/lib/python3.7/site-packages/tf2onnx/tf_loader.py:706: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.extract_sub_graph`
Cannot infer shape for model/ex_layer/PartitionedCall/while: model/ex_layer/PartitionedCall/while:3
Cannot infer shape for model/ex_layer/PartitionedCall/Identity: model/ex_layer/PartitionedCall/Identity:0
Cannot infer shape for Func/model/ex_layer/PartitionedCall/output/_3: Func/model/ex_layer/PartitionedCall/output/_3:0
Cannot infer shape for Identity: Identity:0
missing output shape for while/Identity_3:0
missing output shape for while/Identity_3:0
missing output shape for while/Identity_3:0
missing output shape for while/Identity_3:0
...
```
And as the obtained model is run through onnxruntime it runs fine, but model checker gives the following error
```
Traceback (most recent call last):
File "failed_example.py", line 85, in <module>
onnx.checker.check_model(onnx.load("tmp.onnx"))
File "venv/lib/python3.7/site-packages/onnx/checker.py", line 106, in check_model
C.check_model(protobuf_string)
onnx.onnx_cpp2py_export.checker.ValidationError: Field 'shape' of type is required but missing.
```
Netron does not show any appreciable difference between the model with and without the decorated function. I guess the error comes from the fact that the for loop is converted to a separate while-loop subgraph whose input shape is not defined. It does work perfectly without the `tf.function` decorator. I am putting minimal replication code below.
I think it is related to following issues:
- https://github.com/onnx/onnx/issues/2932
- https://github.com/onnx/onnx/issues/2492
- https://github.com/onnx/onnx/pull/2937
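A small diagnostic (a sketch; it only walks GRAPH-typed attributes) to locate which value triggers the missing-shape check:
```python
import onnx

# Walk the main graph and all subgraphs (Loop/If bodies) and report
# tensor-typed values that are missing the 'shape' field the checker requires.
def find_missing_shapes(graph, path="main"):
    for vi in list(graph.input) + list(graph.output) + list(graph.value_info):
        if vi.type.HasField("tensor_type") and not vi.type.tensor_type.HasField("shape"):
            print(f"{path}: {vi.name} has no shape")
    for node in graph.node:
        for attr in node.attribute:
            if attr.type == onnx.AttributeProto.GRAPH:
                find_missing_shapes(attr.g, f"{path}/{node.name or node.op_type}")

find_missing_shapes(onnx.load("tmp.onnx").graph)
```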
### Is the issue related to model conversion?
Yes and no. While the model is generated by TF, there is inconsistency between model checker and onnxruntime.
### Describe the bug
The ONNX model checker gives an error for an undefined shape, while ONNX Runtime runs the model fine.
### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 16.04*): Mac OSX Mojave
- ONNX version (*e.g. 1.7*): 1.10.2
- Python version: 3.7.7
- GCC/Compiler version (if compiling from source):
- CMake version:
- Protobuf version:
- Visual Studio version (if applicable):
### Reproduction instructions
- Describe the code to reproduce the behavior.
```python
import tensorflow as tf
import numpy as np
import sys
import onnx
import onnxruntime
import tf2onnx
# =============================================================================
# Layer and its herler functions
# COMMENT IT OUT TO PASS ONNX CHECK
@tf.function(
input_signature=[
tf.TensorSpec(shape=[None,None], dtype=tf.int32),
tf.TensorSpec(shape=[None,None], dtype=tf.float32),
tf.TensorSpec(shape=None, dtype=tf.float32),
])
def extra_function(
list1,
list2,
accum_var
):
some_num = 4
num_iter = tf.size(list1)//some_num
for i in range(num_iter):
xyz_i = list2[0, i * 3 : (i + 1) * 3]
accum_var += tf.reduce_sum(xyz_i)
return accum_var
class ExLayer(tf.keras.layers.Layer):
def __init__(self):
super().__init__()
# Doesnt tf.function also create graphs out of called functions?
# however it does not seem to do that if `call` function is decorated
# @tf.function(
# input_signature=[
# tf.TensorSpec(shape=[None,None], dtype=tf.float32),
# tf.TensorSpec(shape=[None,None], dtype=tf.int32),
# ])
def call(self, list2,list1):
accum_var = tf.constant(0.0)
accum_var = extra_function( list1, list2, accum_var)
return accum_var
# =============================================================================
# =============================================================================
# Example implementation
layer1 = tf.keras.layers.Input(shape=(1,))
layer2 = tf.keras.layers.Input(shape=(1,), dtype=tf.int32)
EL = ExLayer()(layer1,layer2)
model = tf.keras.models.Model(inputs=[layer1, layer2], outputs=EL)
# Define input data
list2_tf = tf.constant([[0.,0.,0.,1.,1.,1.,2.,2.,2.,3.,3.,3.]],dtype=tf.float32)
list1_tf = tf.constant([[0,1,2,-1,1,0,2,-1,2,0,1,-1]],dtype=tf.int32)
list2_np = np.array([[0.,0.,0.,1.,1.,1.,2.,2.,2.,3.,3.,3.]],dtype=np.float32)
list1_np = np.array([[0,1,2,-1,1,0,2,-1,2,0,1,-1]],dtype=np.int32)
# Save to onnx
model_proto, external_tensor_storage = tf2onnx.convert.from_keras(model,
input_signature=[
tf.TensorSpec(shape=[None,None], dtype=tf.float32, name="list2"),
tf.TensorSpec(shape=[None,None], dtype=tf.int32, name="list1")
],
opset=11,
output_path="tmp.onnx")
# Load onnx runtime session
ort_session = onnxruntime.InferenceSession("tmp.onnx")
inputs = {"list2":list2_np, "list1":list1_np}
print("===================================================")
print("Original model evaluation:")
print(model([list2_tf,list1_tf]))
print("ORT session evaluation")
print(ort_session.run(None, inputs))
print("===================================================")
# Check with model checker
onnx.checker.check_model(onnx.load("tmp.onnx"))
```
| https://github.com/onnx/onnx/issues/3909 | https://github.com/onnx/onnx/pull/5728 | 21300b77782fe207584f1cebb6120f71c7146e20 | fd699fbb4f69a0d2d12129013145c9f89a8c7eb0 | 2021-12-25T06:20:01Z | python | 2023-11-01T18:21:52Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,884 | ["onnx/checker.cc", "onnx/checker.h", "onnx/checker.py", "onnx/cpp2py_export.cc", "onnx/onnx_cpp2py_export/checker.pyi"] | Make C++ and Python checker API consistent | Python checker API supports `full_check` arg:
https://github.com/onnx/onnx/blob/fa6f8cfdce3d86346e8a7494f3062b98416c85fb/onnx/checker.py#L94
C++ does not.
It'd be nice for them to be consistent. | https://github.com/onnx/onnx/issues/3884 | https://github.com/onnx/onnx/pull/4386 | 8910f00b5976000e79111325bcdaad1a62811cde | 4239e3965ab95ee657d381ae8cbd346e696131e3 | 2021-12-08T23:35:31Z | python | 2022-08-05T15:20:16Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,873 | ["requirements-lintrunner.txt"] | extract_model, why model is not valid? | 
Hello!
I have this model and I want to cut it into two parts via `onnx.utils.extract_model`.
My second model has these inputs and outputs:
`inputs_names = ['627', '668', '633', '656', '639', '645', '667']`
`output_names = ['boxes', 'scores']`
To be honest, the model looks fine (the `627` input is the ReLU output from the input model):

But I get an error at the last stage, when onnx checks my model.
So what do you think this is: a bug, or am I doing something wrong? Thanks! | https://github.com/onnx/onnx/issues/3873 | https://github.com/onnx/onnx/pull/5633 | f107e04dd6671aadf1a2f1236e1846169a520f05 | 9c34c062dbc0d53a58195d789aff2f39ccc9bfeb | 2021-12-01T05:46:04Z | python | 2023-10-02T03:02:56Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,854 | ["docs/Changelog.md", "docs/Operators.md", "onnx/defs/tensor/defs.cc", "onnx/defs/tensor/old.cc", "onnx/version_converter/adapters/softmax_12_13.h"] | Shape inference for Reshape op should catch empty shape error | # Bug Report
Original issue: https://github.com/onnx/onnx/issues/3728
### Reproduction instructions
- Describe the code to reproduce the behavior.
```
import onnx
from onnx import version_converter
model = onnx.load('bertsquad-8.onnx')
model_opset_15 = version_converter.convert_version(model, 15) # onnx/models
# onnx.save(model_opset_15, "bertsquad-8_opset_15.onnx") # For C++ side to load
inferred_model = onnx.shape_inference.infer_shapes(model_opset_15, True, True)
...
```
- Attach the ONNX model to the issue (where applicable)
https://github.com/onnx/models/blob/master/text/machine_comprehension/bert-squad/model/bertsquad-8.onnx
### Expected behavior
Shape inference should complain that a Reshape node cannot have an empty shape.
| https://github.com/onnx/onnx/issues/3854 | https://github.com/onnx/onnx/pull/3861 | ae3e6b80e63fbef0abda96ce698d31c70cd5e43d | dba3bd832f4d740d991d367535bab5c3634455a7 | 2021-11-18T19:58:57Z | python | 2022-01-25T01:20:53Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,853 | ["docs/Changelog.md", "docs/Operators.md", "onnx/defs/tensor/defs.cc", "onnx/defs/tensor/old.cc", "onnx/version_converter/adapters/softmax_12_13.h"] | Version converter for Softmax 12 to 13 should not produce a Reshape node with empty shape | # Bug Report
Original issue: https://github.com/onnx/onnx/issues/3728
The code for repro:
```
import onnx
from onnx import version_converter
model = onnx.load('bertsquad-8.onnx')
model_opset_15 = version_converter.convert_version(model, 15) # from onnx/models
# onnx.save(model_opset_15, "bertsquad-8_opset_15.onnx") # For C++ side to load
inferred_model = onnx.shape_inference.infer_shapes(model_opset_15, True, True, True)
```
Model link: https://github.com/onnx/models/blob/master/text/machine_comprehension/bert-squad/model/bertsquad-8.onnx
The error message:
```
InferenceError: [ShapeInferenceError] Shape inference error(s): (op_type:MatMul, node name: bert/encoder/layer_0/attention/self/MatMul_1): [ShapeInferenceError] Input tensors of wrong rank (0).
(op_type:MatMul, node name: bert/encoder/layer_1/attention/self/MatMul_1): [ShapeInferenceError] Input tensors of wrong rank (0).
(op_type:MatMul, node name: bert/encoder/layer_2/attention/self/MatMul_1): [ShapeInferenceError] Input tensors of wrong rank (0).
(op_type:MatMul, node name: bert/encoder/layer_3/attention/self/MatMul_1): [ShapeInferenceError] Input tensors of wrong rank (0).
(op_type:MatMul, node name: bert/encoder/layer_4/attention/self/MatMul_1): [ShapeInferenceError] Input tensors of wrong rank (0).
(op_type:MatMul, node name: bert/encoder/layer_5/attention/self/MatMul_1): [ShapeInferenceError] Input tensors of wrong rank (0).
(op_type:MatMul, node name: bert/encoder/layer_6/attention/self/MatMul_1): [ShapeInferenceError] Input tensors of wrong rank (0).
(op_type:MatMul, node name: bert/encoder/layer_7/attention/self/MatMul_1): [ShapeInferenceError] Input tensors of wrong rank (0).
(op_type:MatMul, node name: bert/encoder/layer_8/attention/self/MatMul_1): [ShapeInferenceError] Input tensors of wrong rank (0).
(op_type:MatMul, node name: bert/encoder/layer_9/attention/self/MatMul_1): [ShapeInferenceError] Input tensors of wrong rank (0).
(op_type:MatMul, node name: bert/encoder/layer_10/attention/self/MatMul_1): [ShapeInferenceError] Input tensors of wrong rank (0).
(op_type:MatMul, node name: bert/encoder/layer_11/attention/self/MatMul_1): [ShapeInferenceError] Input tensors of wrong rank (0).
```
After version conversion, the `shape` input of the Reshape node should not be an empty tensor:

| https://github.com/onnx/onnx/issues/3853 | https://github.com/onnx/onnx/pull/3861 | ae3e6b80e63fbef0abda96ce698d31c70cd5e43d | dba3bd832f4d740d991d367535bab5c3634455a7 | 2021-11-18T19:48:01Z | python | 2022-01-25T01:20:53Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,840 | ["docs/CONTRIBUTING.md"] | Sync build documentation | There are at least 2 places in the onnx repo which contain build instructions. While one is up to date, the other isn't. We need to sync all documentation: make sure there is one source, and have all other docs simply link to it.
Document with latest instructions: https://github.com/onnx/onnx#build-onnx-from-source
Outdated doc: https://github.com/onnx/onnx/blob/master/docs/CONTRIBUTING.md#development | https://github.com/onnx/onnx/issues/3840 | https://github.com/onnx/onnx/pull/3859 | c9f53d00932ab60e381a71789a85db0463a8d9e0 | 8cf41aa91bddb592ddefaeb98e7ab8f4d4e60c08 | 2021-11-12T19:34:34Z | python | 2021-12-09T01:14:34Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,823 | ["requirements-lintrunner.txt"] | onnx model import to tensorflow is superficial at best. | https://github.com/onnx/tutorials/blob/master/tutorials/OnnxTensorflowImport.ipynb
Inspecting the code, it does not really import into TensorFlow; I'm not even sure why it is called importing to TF.
The imported, so-called `tf_rep` appears to be ONNX's own object.
It does not fit into a Keras model:
https://www.tensorflow.org/api_docs/python/tf/keras/Model
```python
print("Loading onnx model...")
model_import = onnx.load('output/p297.onnx')
model = prepare(model_import)
...
model.evaluate(X_test, y_test)
```
```
Traceback (most recent call last):
  File "<dense>.py", line 68, in <module>
    model.evaluate(X_test, y_test)
AttributeError: 'TensorflowRep' object has no attribute 'evaluate'
```
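For what it's worth, a `TensorflowRep` is onnx-tf's own wrapper rather than a `tf.keras.Model`; as far as I can tell, inference goes through its `run` method (a sketch, reusing the variables above):
```python
# TensorflowRep exposes run() for inference; it has no Keras evaluate()/fit().
preds = model.run(X_test)  # 'model' is the prepare() result from above
```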
I wondered if there is a dedicated Keras conversion, but unfortunately https://github.com/onnx/tutorials has conversion from ONNX to Keras while the scoring section has none.
| https://github.com/onnx/onnx/issues/3823 | https://github.com/onnx/onnx/pull/5633 | f107e04dd6671aadf1a2f1236e1846169a520f05 | 9c34c062dbc0d53a58195d789aff2f39ccc9bfeb | 2021-11-03T17:15:43Z | python | 2023-10-02T03:02:56Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,771 | ["docs/Operators.md", "docs/TestCoverage.md", "onnx/backend/test/case/node/onehot.py"] | type of depth input of OneHot in backend node test | # Bug Report
### Describe the bug
In operators.md, the depth input of OneHot is `Scalar specifying the number of classes in one-hot tensor.`.
In the test script, onehot.py, the parameter for this input is scalar at [line 42]( https://github.com/onnx/onnx/blob/2e048660ffa8243596aaf3338e60c7c0575458f2/onnx/backend/test/case/node/onehot.py#L42)
and a one-element array in other cases, e.g. [line 62](https://github.com/onnx/onnx/blob/2e048660ffa8243596aaf3338e60c7c0575458f2/onnx/backend/test/case/node/onehot.py#L62)
Is this inconsistency in type a bug or intentional variance?
| https://github.com/onnx/onnx/issues/3771 | https://github.com/onnx/onnx/pull/3774 | 955f4ba5cf3a18f8ca474ffd9ac11008b3c3be8b | d0151d78c2ba53c3c71d05608b1d892ada20507d | 2021-10-13T01:57:22Z | python | 2021-10-22T19:52:57Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,748 | ["docs/Operators.md", "docs/TestCoverage.md", "onnx/backend/test/case/node/if.py", "onnx/backend/test/case/node/loop.py", "onnx/backend/test/data/node/test_if_opt/model.onnx", "onnx/backend/test/data/node/test_loop16_seq_none/model.onnx", "onnx/backend/test/data/node/test_loop16_seq_none/test_data_set_0/input_2.pb", "onnx/backend/test/data/node/test_loop16_seq_none/test_data_set_0/output_0.pb", "onnx/defs/shape_inference.cc", "onnx/defs/shape_inference.h"] | Test model in node test `test_if_opt` is invalid | # Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
The else subgraph's output is not produced by any node in the else subgraph. A similar issue exists in the then subgraph.
https://github.com/onnx/onnx/blob/5282dd787b2c913110c26f34de7b624cf7ab9c01/onnx/backend/test/case/node/if.py#L200
### System information
N/A
### Reproduction instructions
N/A
### Expected behavior
Valid test model for backend testing
### Notes
N/A | https://github.com/onnx/onnx/issues/3748 | https://github.com/onnx/onnx/pull/3756 | 1259d16ff6715e78f3b7c0602d5dcb982b893383 | be76ca7148396176784ba8733133b9fb1186ea0d | 2021-09-29T03:38:42Z | python | 2021-10-20T18:50:15Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,747 | ["docs/Operators.md", "docs/TestCoverage.md", "onnx/backend/test/case/node/if.py", "onnx/backend/test/case/node/loop.py", "onnx/backend/test/data/node/test_if_opt/model.onnx", "onnx/backend/test/data/node/test_loop16_seq_none/model.onnx", "onnx/backend/test/data/node/test_loop16_seq_none/test_data_set_0/input_2.pb", "onnx/backend/test/data/node/test_loop16_seq_none/test_data_set_0/output_0.pb", "onnx/defs/shape_inference.cc", "onnx/defs/shape_inference.h"] | Test model in node test `test_loop16_seq_none` is invalid | # Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
The `If` node subgraphs (both the `then` and `else` subgraphs) within the `Loop` have subgraph outputs not produced by any node's outputs.
Then branch:

The subgraph output is named `seq_out`, whereas the only node produces an output named `seq_empty`. Basically, nothing feeds the subgraph output `seq_out`.
A similar issue exists in the Else branch.
### System information
N/A
### Reproduction instructions
N/A
### Expected behavior
Valid test model for backend testing
### Notes
N/A
| https://github.com/onnx/onnx/issues/3747 | https://github.com/onnx/onnx/pull/3756 | 1259d16ff6715e78f3b7c0602d5dcb982b893383 | be76ca7148396176784ba8733133b9fb1186ea0d | 2021-09-29T03:01:02Z | python | 2021-10-20T18:50:15Z |
closed | onnx/onnx | https://github.com/onnx/onnx | 3,744 | ["onnx/defs/quantization/defs.cc", "onnx/defs/quantization/old.cc", "onnx/defs/tensor/defs.cc", "onnx/defs/tensor/old.cc", "onnx/defs/tensor_proto_util.cc", "onnx/test/shape_inference_test.py"] | ONNX shape inference for QuantizeLinear fails on empty string as zero_point | # Bug Report
### Describe the bug
An ONNX model with QuantizeLinear (opset 13) passes shape inference when it has no input for zero_point, but fails if the input is the empty string. Trailing optional inputs should be able to be expressed in either way.
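A minimal illustration of the two spellings that should be equivalent (a sketch):
```python
from onnx import helper

# Trailing optional input omitted entirely:
n1 = helper.make_node("QuantizeLinear", ["x", "scale"], ["y"])
# Same input given as an empty string; per the IR spec this also means "absent",
# but shape inference reportedly rejects this form:
n2 = helper.make_node("QuantizeLinear", ["x", "scale", ""], ["y"])
```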
onnx version 1.10.1 | https://github.com/onnx/onnx/issues/3744 | https://github.com/onnx/onnx/pull/3806 | d7c546b874c2fde25e2a337f04eb1f6e08abfc62 | f61fd848e33c52d89dff41c1ff32e3d9355ec959 | 2021-09-29T00:28:22Z | python | 2021-11-10T15:03:01Z |