Dataset schema (column name, dtype, observed range):

| Column | Dtype | Range / classes |
|---|---|---|
| status | string (classes) | 1 value |
| repo_name | string | length 9 to 24 |
| repo_url | string | length 28 to 43 |
| issue_id | int64 | 1 to 104k |
| updated_files | string | length 8 to 1.76k |
| title | string | length 4 to 369 |
| body | string | length 0 to 254k |
| issue_url | string | length 37 to 56 |
| pull_url | string | length 37 to 54 |
| before_fix_sha | string | length 40 |
| after_fix_sha | string | length 40 |
| report_datetime | timestamp[ns, tz=UTC] | |
| language | string (classes) | 5 values |
| commit_datetime | timestamp[us, tz=UTC] | |
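For readers who want to query these records programmatically, here is a minimal pandas sketch; the file name `issues.parquet` is a hypothetical placeholder for wherever the rows are stored:

```python
import pandas as pd

# Load the issue/fix records; "issues.parquet" is a placeholder path.
df = pd.read_parquet("issues.parquet")

# Inspect the columns described above and a few sample titles.
print(df.dtypes)
print(df[["repo_name", "issue_id", "title"]].head())

# Example query: issues whose fix touched a single file
# (updated_files is a JSON-like list serialized as a string).
single_file = df[df["updated_files"].str.count(",") == 0]
print(len(single_file), "issues were fixed by changing one file")
```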
closed
onnx/onnx
https://github.com/onnx/onnx
3,742
["onnx/common/ir.h"]
memory leaks issue
# Bug Report

ASAN reports a memory leak. If I apply the patch below, the leak disappears:

```
--- a/onnx/common/ir.h
+++ b/onnx/common/ir.h
@@ -1212,6 +1212,7 @@ private:
   void freeValue(Value * v) {
     auto it = all_values.find(v);
     ONNX_ASSERT(it != all_values.end());
+    delete *it;
     all_values.erase(it);
   }
 };
```

Could someone please look at this issue? Thank you.

```
=================================================================
==27762==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 9856 byte(s) in 77 object(s) allocated from:
    #0 0x7f1fe341b448 in operator new(unsigned long) (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xe0448)
    #1 0x7f1fe303e8ab in onnx::Node::addOutput() (/home/chijinchao/workspace/GraphGen/build/core/import/libgraphgen_import.so+0x15e8ab)
    #2 0x7f1fe3040d7d in onnx::Graph::create(onnx::Symbol, unsigned long) (/home/chijinchao/workspace/GraphGen/build/core/import/libgraphgen_import.so+0x160d7d)
    #3 0x7f1fdf06213f in onnx::graphProtoToGraph(onnx::GraphProto const&, bool) third_party/onnx/onnx-src/onnx/common/ir_pb_converter.cc:243
    #4 0x7f1fdf064210 in onnx::ImportModelProto(onnx::ModelProto const&) third_party/onnx/onnx-src/onnx/common/ir_pb_converter.cc:338
    .......

SUMMARY: AddressSanitizer: 54784 byte(s) leaked in 588 allocation(s).
```
https://github.com/onnx/onnx/issues/3742
https://github.com/onnx/onnx/pull/3760
a56a110c2c4e3b85114cb918b9c27dac211ab969
1897d1bed5a2cd56fcdfef3fd4e0ba2c4fe6a362
2021-09-28T08:36:32Z
python
2021-10-19T05:50:33Z
closed
onnx/onnx
https://github.com/onnx/onnx
3,740
["docs/Operators.md", "docs/TestCoverage.md", "onnx/backend/test/case/node/if.py", "onnx/backend/test/case/node/loop.py", "onnx/backend/test/data/node/test_if_opt/model.onnx", "onnx/backend/test/data/node/test_loop16_seq_none/model.onnx", "onnx/backend/test/data/node/test_loop16_seq_none/test_data_set_0/input_2.pb", "onnx/backend/test/data/node/test_loop16_seq_none/test_data_set_0/output_0.pb", "onnx/defs/shape_inference.cc", "onnx/defs/shape_inference.h"]
Loop operator's type inference doesn't handle propagating optional types from subgraph output to Loop output
# Bug Report

### Is the issue related to model conversion?
No

### Describe the bug
When this line of code is hit for the optional type:
https://github.com/onnx/onnx/blob/9f978e1ffbd4ba29263f9fdae592e2256b80ec2a/onnx/defs/controlflow/defs.cc#L358
it breaks, because the function doesn't support optional types yet:
https://github.com/onnx/onnx/blob/9f978e1ffbd4ba29263f9fdae592e2256b80ec2a/onnx/defs/shape_inference.h#L287

### System information
N/A

### Reproduction instructions
N/A

### Expected behavior
N/A

### Notes
N/A
https://github.com/onnx/onnx/issues/3740
https://github.com/onnx/onnx/pull/3756
1259d16ff6715e78f3b7c0602d5dcb982b893383
be76ca7148396176784ba8733133b9fb1186ea0d
2021-09-27T20:04:42Z
python
2021-10-20T18:50:15Z
closed
onnx/onnx
https://github.com/onnx/onnx
3,565
["onnx/defs/tensor/old.cc"]
infer_shapes fails but onnxruntime works
# Bug Report

### Is the issue related to model conversion?
onnx raises an exception while running ``infer_shapes`` (``onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] (op_type:Sqrt, node name: ComplexAbsoutput__19): [ShapeInferenceError] Inferred shape and existing shape differ in dimension 1: (4) vs (3)``).

### Describe the bug
The following code fails (onnx graph: [bug_shape_inference.zip](https://github.com/onnx/onnx/files/6785879/bug_shape_inference.zip)):

```
import onnx

with open('bug_shape_inference.onnx', 'rb') as f:  # see attached file
    model_proto = onnx.load(f)
onnx.checker.check_model(model_proto, full_check=True)
```

Exception:

```
onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] (op_type:Sqrt, node name: ComplexAbsoutput__19): [ShapeInferenceError] Inferred shape and existing shape differ in dimension 1: (4) vs (3)
```

However, onnxruntime is able to compute the output:

```
from onnxruntime import InferenceSession
import numpy

mat = numpy.random.randn(3, 4).astype(numpy.float32)
sess = InferenceSession('bug_shape_inference.onnx')
print(sess.run(None, {'input:0': mat}))
```

It produces a matrix (3, 3) as declared in the onnx graph:

```
output {
  name: "output:0"
  type {
    tensor_type {
      elem_type: 1
      shape {
        dim { dim_value: 3 }
        dim { dim_value: 3 }
      }
    }
  }
}
```

The error message seems to say onnx expects an output with shape (4,3) or (3,4) when it should be (3,3).

### System information
- OS Platform and Distribution (*Windows, Ubuntu 20.04*):
- ONNX version (*e.g. 1.7*): 1.9.0
- Python version: 3.9.5
- Visual Studio version (if applicable): 2019

### Reproduction instructions
See the bug description.

### Expected behavior
I would expect infer_shapes to succeed and find a shape equal to (3,3).
https://github.com/onnx/onnx/issues/3565
https://github.com/onnx/onnx/pull/3810
94fb49259018fc9ed95732bd7eb609bd518b0ee8
f98b19a9390943d74ce7de30de1219aea8fff914
2021-07-08T16:19:38Z
python
2021-12-08T21:29:24Z
closed
onnx/onnx
https://github.com/onnx/onnx
3,562
["onnx/backend/test/runner/__init__.py"]
onnx-tensorflow onnx backend test of case test_identity_sequence_cpu failed
# Bug Report

### Is the issue related to model conversion?
No

### Describe the bug
When running the ONNX backend test case test_identity_sequence_cpu with onnx-tensorflow, I hit the error below:

```
======================================================================
ERROR: test_identity_sequence_cpu (__main__.OnnxBackendNodeModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/anaconda3/envs/otf/lib/python3.7/site-packages/onnx-1.9.0-py3.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 250, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/opt/anaconda3/envs/otf/lib/python3.7/site-packages/onnx-1.9.0-py3.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 308, in run
    atol=model_test.atol)
  File "/opt/anaconda3/envs/otf/lib/python3.7/site-packages/onnx-1.9.0-py3.7-linux-x86_64.egg/onnx/backend/test/runner/__init__.py", line 172, in assert_similar_outputs
    np.testing.assert_equal(outputs[i].dtype, ref_outputs[i].dtype)
AttributeError: 'list' object has no attribute 'dtype'
----------------------------------------------------------------------
Ran 1 test in 0.970s

FAILED (errors=1)
```

### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 16.04*):
- ONNX version: 1.9.0 and master
- Python version: 3.7

### Reproduction instructions

```
import onnx
from onnx_tf.backend import TensorflowBackend

backend_test = onnx.backend.test.runner.Runner(TensorflowBackend, __name__)

if __name__ == '__main__':
    unittest.main()
...
```

### Expected behavior
The test case should pass (once onnx_tf implements the operator correctly).

### Notes
The root cause is that assert_similar_outputs() in onnx/backend/test/runner/__init__.py does not handle a sequence/list as the model/node output; it always checks the dtype of the output:

```
np.testing.assert_equal(outputs[i].dtype, ref_outputs[i].dtype)
```
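As an illustration of the kind of check the Notes section says is missing, here is a minimal sketch that branches on list (sequence) outputs before touching `.dtype`; it is not the actual patch in the linked PR, just a sketch of the idea:

```python
import numpy as np

def assert_similar_outputs_sketch(outputs, ref_outputs, rtol=1e-3, atol=1e-7):
    # Illustrative only: recurse into sequence (list) outputs, and only
    # compare .dtype once both sides are ndarrays.
    assert len(outputs) == len(ref_outputs)
    for out, ref in zip(outputs, ref_outputs):
        if isinstance(ref, list):
            assert isinstance(out, list)
            assert_similar_outputs_sketch(out, ref, rtol, atol)
        else:
            out, ref = np.asarray(out), np.asarray(ref)
            np.testing.assert_equal(out.dtype, ref.dtype)
            np.testing.assert_allclose(out, ref, rtol=rtol, atol=atol)
```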
https://github.com/onnx/onnx/issues/3562
https://github.com/onnx/onnx/pull/3563
21d16d2bcb493687eec2c2beefc7dde89d54a1e2
cc7b5e7a5b7b569ec47f9ca36d379e1ea0b83c3b
2021-07-08T10:00:52Z
python
2021-07-12T20:50:35Z
closed
onnx/onnx
https://github.com/onnx/onnx
3,530
["CMakeLists.txt"]
Multiple libraries (such as onnx) set WX when building with MSVC even when ONNX_WERROR is set to OFF
I understand from https://github.com/onnx/onnx/issues/1936 that this may be "by design"; however, I see two problems with this:

1) It is inconsistent across platforms, and a cursory look at the options leads one to believe that by setting ONNX_WERROR to OFF, warnings won't be treated as errors. This (at least in my case) cost time tracking down where the issue came from. At the very least, ONNX_WERROR should be documented correctly to indicate that even when it is OFF, WX will be passed to the compiler for some libraries.

2) Warnings are treated as errors that the user doesn't really have control over. For example, onnx-mlir includes onnx and uses the same version of protobuf as onnx or newer. When building onnx-mlir, we get the following warnings treated as errors (among others):

```
third_party\onnx\onnx\onnx-data.pb.cc(104): error C2220: the following warning is treated as an error
third_party\onnx\onnx\onnx-data.pb.cc(104): warning C4125: decimal digit terminates octal escape sequence
```

Since this is building unmodified onnx (1.9.0), the build should be green, but it is not. So as a user of onnx, onnx-mlir has to explicitly disable those warnings or accept build failures with MSVC.
https://github.com/onnx/onnx/issues/3530
https://github.com/onnx/onnx/pull/3548
c79ab4c78599eb7f09cee8bf2d53af8250ed7dc9
f24c062126225cddb49aecdedfebe7135b35ef50
2021-06-15T19:43:03Z
python
2021-07-01T23:02:41Z
closed
onnx/onnx
https://github.com/onnx/onnx
3,515
["onnx/defs/tensor/defs.cc"]
Squeeze shape inference failure with symbolic dimension
```
import onnx
from onnx import version_converter, helper, TensorProto, shape_inference
import numpy as np

nodes = [helper.make_node('Squeeze', ['in', 'axes'], ['out'])]
inputs = [
    helper.make_tensor_value_info('in', TensorProto.FLOAT, [1, 2, 'hello']),
    helper.make_tensor_value_info('axes', TensorProto.INT64, [1])
]
outputs = [helper.make_tensor_value_info('out', TensorProto.FLOAT, (2, 'hello'))]
initializers = [helper.make_tensor('axes', TensorProto.INT64, [1], [0])]
model = helper.make_model(helper.make_graph(nodes, "g", inputs, outputs, initializers))
shape_inference.infer_shapes(model)
```

Shape inference fails with

```
(op_type:Squeeze): [ShapeInferenceError] Inferred shape and existing shape differ in rank: (0) vs (2)
```

Seems like this stopped working with https://github.com/onnx/onnx/pull/3465
https://github.com/onnx/onnx/issues/3515
https://github.com/onnx/onnx/pull/3516
d08b3e951be607b9638ab84c340fabec1fdbec83
7a2bd3567762c0bc6b941c70fa43ba780a192f27
2021-06-04T16:40:25Z
python
2021-06-15T18:42:47Z
closed
onnx/onnx
https://github.com/onnx/onnx
3,505
["onnx/defs/quantization/defs.cc", "onnx/test/shape_inference_test.py"]
DynamicQuantizeLinear function does not have shape inference function
# Bug Report

### Is the issue related to model conversion?
No

### Describe the bug
The DynamicQuantizeLinear function op does not have a shape inference function defined. In its absence, the function body is used to infer shapes for the op; although that works as a fallback, it hurts performance.

### Expected behavior
Add a shape inference function for DynamicQuantizeLinear.

### Notes
None.
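For context, here is a small repro sketch showing what full shape inference should produce for this op; the expected shapes (y matching x, scale and zero point as scalars) are an assumption based on the operator spec, and routing y through an Identity is only so it appears in value_info:

```python
import onnx
from onnx import helper, TensorProto, shape_inference

# 'y' feeds an Identity so it shows up in value_info after inference.
dql = helper.make_node('DynamicQuantizeLinear', ['x'], ['y', 'y_scale', 'y_zero_point'])
ident = helper.make_node('Identity', ['y'], ['y_out'])
graph = helper.make_graph(
    [dql, ident], 'dql_repro',
    [helper.make_tensor_value_info('x', TensorProto.FLOAT, [3, 4])],
    [helper.make_tensor_value_info('y_out', TensorProto.UINT8, [3, 4])])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid('', 11)])

inferred = shape_inference.infer_shapes(model)
# With a dedicated shape inference function, 'y' should come back as uint8[3, 4]
# and 'y_scale' / 'y_zero_point' as scalars.
print(inferred.graph.value_info)
```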
https://github.com/onnx/onnx/issues/3505
https://github.com/onnx/onnx/pull/3539
e6ea9356fb4ba62515fa1cc9591eab5130c2e5ab
3c857c57826cf434626cb8c99f0b9517efb62451
2021-05-27T06:25:16Z
python
2021-06-22T17:27:51Z
closed
onnx/onnx
https://github.com/onnx/onnx
3,439
["onnx/defs/tensor/defs.cc", "onnx/shape_inference/implementation.cc"]
Shape inference for Unique assigns shape to empty string when optional outputs are provided.
A value_info entry is added for the empty string, which is not correct. ONNX version 1.8.1.
https://www.dropbox.com/s/ljyo821c0b2miv5/test_unsorted_segment_sum.onnx?dl=1
https://github.com/onnx/onnx/issues/3439
https://github.com/onnx/onnx/pull/3815
34dbc3e3a8f4709d96e7ceba3ac38510f4ed2ff3
fa6f8cfdce3d86346e8a7494f3062b98416c85fb
2021-04-19T23:21:26Z
python
2021-11-22T22:40:13Z
closed
onnx/onnx
https://github.com/onnx/onnx
3,428
["docs/Operators.md", "docs/TestCoverage.md", "onnx/backend/test/case/node/roialign.py", "onnx/backend/test/data/node/test_roialign/test_data_set_0/output_0.pb", "onnx/backend/test/data/node/test_roialign_identity_region/model.onnx", "onnx/backend/test/data/node/test_roialign_identity_region/test_data_set_0/input_0.pb", "onnx/backend/test/data/node/test_roialign_identity_region/test_data_set_0/input_1.pb", "onnx/backend/test/data/node/test_roialign_identity_region/test_data_set_0/input_2.pb", "onnx/backend/test/data/node/test_roialign_identity_region/test_data_set_0/output_0.pb"]
[Operator] RoiAlign backend test case's output is misaligned
# Bug Report ### Is the issue related to model conversion? No. ### Describe the bug The [RoiAlign operator](https://github.com/onnx/onnx/blob/master/docs/Operators.md#RoiAlign), per the [Mask R-CNN paper](https://arxiv.org/abs/1703.06870) and Facebook Research's [Detectron 2 implementation](https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/roi_align.py) aligns sampling points over the center of the pixels, but the [solitary test case](https://github.com/onnx/onnx/blob/master/onnx/backend/test/case/node/roialign.py) in the ONNX backend has output that is misaligned by half an input pixel. After comparing to various references (table below under **Notes**) and reading more into the history of this operator, I see this is actually an old bug (which Detectron since fixed and PyTorch also fixed with aligned=True) which applied an offset the output subsample by 0.5 but forgot to adjust the input sample to compensate (see their comment in the [code](https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/roi_align.py): "_the original roi_align (aligned=False) does not subtract the 0.5 when computing neighboring pixel indices and therefore it uses pixels with a slightly incorrect alignment (relative to our pixel model) when performing bilinear interpolation_" and the [test case](https://github.com/facebookresearch/detectron2/blob/master/tests/layers/test_roi_align.py#L15)). &nbsp; | equation ----------|---------- Correct | `input.x = (output.x + 0.5) * computedOutputToInputScale + scaledCurrentRoi.x - 0.5` | Incorrect | `input.x = (output.x + 0.5) * computedOutputToInputScale + scaledCurrentRoi.x - 0.0` | ![image](https://user-images.githubusercontent.com/1809166/110403224-95ce2500-8031-11eb-80c0-92732fa247d2.png) We should fix the existing [ONNX backend test case](https://github.com/onnx/onnx/blob/master/onnx/backend/test/case/node/roialign.py) accordingly. Otherwise even a simple case of cropping an identity region from the input doesn't match. e.g. a new test case: ```python # Crop a 2x2 region from the 4x5 input. @staticmethod def export_roialign_identity_region(): # type: () -> None node = onnx.helper.make_node( "RoiAlign", inputs=["X", "rois", "batch_indices"], outputs=["Y"], spatial_scale=1.0, output_height=2, output_width=2, sampling_ratio=1, ) X = np.array( [ [ [ [ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9,10,11], # 2x2 region of interest: 9,10 [12,13,14,15], # 13,14 [16,17,18,19], ] ] ], dtype=np.float32, ) batch_indices = np.array([0], dtype=np.int64) rois = np.array([[1, 2, 3, 4]], dtype=np.float32) # (num_rois, C, output_height, output_width) Y = np.array( [ [ [ [ 9,10], # Should be identity scale/translation window into the input, [13,14], # and not [[11.5,12.5], [15.5,16.5]]. ] ] ], dtype=np.float32, ) expect(node, inputs=[X, rois, batch_indices], outputs=[Y], name="test_roialign_identity_region") ``` [roi_align_identity_region.zip](https://github.com/onnx/onnx/files/6322528/roi_align_identity_region.zip) ### System information - OS Platform and Distribution: Windows 10 20H2 19042.867 - ONNX version: 1.8.1 - Python version: 3.9.1 - GCC/Compiler version: VS2019 - CMake version: NA - Protobuf version: NA - Visual Studio version (if applicable): VS2019 ### Reproduction instructions - Run the existing ONNX back test case. ### Expected behavior The test case should bilinearly sample from pixel centers in *both* input and output pixels, not half of one and zero of the other. e.g. 
![image](https://user-images.githubusercontent.com/1809166/110191631-5d82d880-7dde-11eb-938b-da13a22eda97.png) ### Notes I compared the bounding box output for the [Mask R-CNN model](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/mask-rcnn/model) using the baseline output in the tar.gz vs the half-pixel aligned output from the ONNX Runtime DirectML execution provider, and I feel the alignment is tighter and more accurately aligned, with fewer random false positives in the half-pixel sampled implementation. Other data scientists have also complained online that the old bug in PyTorch negatively affected their object detection results. ![image](https://user-images.githubusercontent.com/1809166/114975841-c75cbc00-9e39-11eb-8b7f-23e4ceb11d22.png) ### Framework comparisons ``` Input Tensor = <-------> 0.0 1.0 2.0 3.0 4.0 5.0 6.0 |.5 |.5 |.5 |.5 |.5 |.5 | 0.0___ |_|_|_|_|_|_|_|_|_|_|_|_| 1.0___[| 0,| 1,| 2,| 3,| 4,| 5 ] /|\ 2.0___[|10,┃11,┃12,┃13,|14,|15 ] \|/ 3.0___[|20,┃21,┃22,┃23,|24,|25 ] 4.0___[|30,|31,|32,|33,|34,|35 ] 5.0___[|40,|41,|42,|43,|44,|45 ] 6.0___[|50,|51,|52,|53,|54,|55 ] Active region of interest = [[1.0, 1.0, 3.0, 3.0], ...] // The first window is 2x2 over the input elements, covering values [[11,12],[21,22]]. Output tensor size = [4,4] ``` Image | Source | Output 4x4, from first 2x2 region -- | -- | -- ![image](https://user-images.githubusercontent.com/1809166/115476907-ac05fe00-a1f7-11eb-8b06-42d19070e5b8.png) | ✔ FB Research [Detectron 2](https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/roi_align.py) (MaskedRCNN paper) | [ 8.25,  8.75,  9.25,  9.75],<br/>[13.25, 13.75, 14.25, 14.75],<br/>[18.25, 18.75, 19.25, 19.75],<br/>[23.25, 23.75, 24.25, 24.75] ![image](https://user-images.githubusercontent.com/1809166/115476907-ac05fe00-a1f7-11eb-8b06-42d19070e5b8.png) | ✔ ONNX Runtime DirectML EP (ROI_ALIGN 0) | [ 8.25,  8.75,  9.25,  9.75],<br/>[13.25, 13.75, 14.25, 14.75],<br/>[18.25, 18.75, 19.25, 19.75],<br/>[23.25, 23.75, 24.25, 24.75] ![image](https://user-images.githubusercontent.com/1809166/115476907-ac05fe00-a1f7-11eb-8b06-42d19070e5b8.png) | ✔ ONNX Runtime 1.7 CPU [Resize](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Resize) + Slice<br/>coordinate_transformation_mode=half | [ 8.25,  8.75,  9.25,  9.75],<br/>[13.25, 13.75, 14.25, 14.75],<br/>[18.25, 18.75, 19.25, 19.75],<br/>[23.25, 23.75, 24.25, 24.75] ![image](https://user-images.githubusercontent.com/1809166/115476907-ac05fe00-a1f7-11eb-8b06-42d19070e5b8.png) | ✔ [`torchvision.ops.roi_align(aligned=True…)`](https://pytorch.org/vision/0.8/_modules/torchvision/ops/roi_align.html) | [ 8.25,  8.75,  9.25,  9.75],<br/>[13.25, 13.75, 14.25, 14.75],<br/>[18.25, 18.75, 19.25, 19.75],<br/>[23.25, 23.75, 24.25, 24.75] ![image](https://user-images.githubusercontent.com/1809166/115477637-3733c380-a1f9-11eb-8205-1e2a8593e43c.png) | ✖ [`torchvision.ops.roi_align(aligned=False…)`](https://pytorch.org/vision/0.8/_modules/torchvision/ops/roi_align.html)<br/>*deprecated, legacy flag still exists | [13.75, 14.25, 14.75, 15.25],<br/>[18.75, 19.25, 19.75, 20.25],<br/>[23.75, 24.25, 24.75, 25.25],<br/>[28.75, 29.25, 29.75, 30.25] ![image](https://user-images.githubusercontent.com/1809166/115477637-3733c380-a1f9-11eb-8205-1e2a8593e43c.png) | ✖ ONNX Runtime 1.7 CPU EP RoiAlign<br/>[PR \#7354](https://github.com/microsoft/onnxruntime/pull/7354) [Issue \#6921](https://github.com/microsoft/onnxruntime/issues/6921) | [13.75, 14.25, 14.75, 15.25],<br/>[18.75, 19.25, 19.75, 
20.25],<br/>[23.75, 24.25, 24.75, 25.25],<br/>[28.75, 29.25, 29.75, 30.25] ![image](https://user-images.githubusercontent.com/1809166/115477913-d3f66100-a1f9-11eb-9cd5-ceecaa7e5f1e.png) | 🤨 [`tf.image.crop_and_resize(…)`](https://www.tensorflow.org/api_docs/python/tf/image/crop_and_resize)<br/>*Note boxes are normalized 0 to 1 (so /5 each ROI element) | [11.00, 11.66, 12.33, 13.00],<br/>[17.66, 18.33, 19.00, 19.66],<br/>[24.33, 25.00, 25.66, 26.33],<br/>[31.00, 31.66, 32.33, 33.00] ![image](https://user-images.githubusercontent.com/1809166/115477913-d3f66100-a1f9-11eb-9cd5-ceecaa7e5f1e.png) | 🤨 `tf.image.resize_bilinear(align_corners=True…)`<br/> + `tf.slice` | [11.00, 11.66, 12.33, 13.00],<br/>[17.66, 18.33, 19.00, 19.66],<br/>[24.33, 25.00, 25.66, 26.33],<br/>[31.00, 31.66, 32.33, 33.00] ![image](https://user-images.githubusercontent.com/1809166/115477858-b923ec80-a1f9-11eb-90be-a5ea5944ab49.png) | 🤨 `tf.image.resize_bilinear(align_corners=False…)`<br/> + `tf.slice` | [11.00, 11.50, 12.00, 12.50],<br/>[16.00, 16.50, 17.00, 17.50],<br/>[21.00, 21.50, 22.00, 22.50],<br/>[26.00, 26.50, 27.00, 27.50] ![image](https://user-images.githubusercontent.com/1809166/115476907-ac05fe00-a1f7-11eb-8b06-42d19070e5b8.png) | ✔ `tf.image.resize_bilinear(half_pixel_centers=True…)`<br/> + `tf.slice` | [ 8.25,  8.75,  9.25,  9.75],<br/>[13.25, 13.75, 14.25, 14.75],<br/>[18.25, 18.75, 19.25, 19.75],<br/>[23.25, 23.75, 24.25, 24.75] ---------- Facebook research's [Detectron 2 test code](https://github.com/facebookresearch/detectron2/blob/master/tests/layers/test_roi_align.py#L15): ```python class ROIAlignTest(unittest.TestCase): def test_forward_output(self): input = np.arange(25).reshape(5, 5).astype("float32") """ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 """ output = self._simple_roialign(input, [1, 1, 3, 3], (4, 4), aligned=False) output_correct = self._simple_roialign(input, [1, 1, 3, 3], (4, 4), aligned=True) # without correction: old_results = [ [7.5, 8, 8.5, 9], [10, 10.5, 11, 11.5], [12.5, 13, 13.5, 14], [15, 15.5, 16, 16.5], ] # with 0.5 correction: correct_results = [ [4.5, 5.0, 5.5, 6.0], [7.0, 7.5, 8.0, 8.5], [9.5, 10.0, 10.5, 11.0], [12.0, 12.5, 13.0, 13.5], ] # This is an upsampled version of [[6, 7], [11, 12]] ... 
``` ---------- PyTorch sample code: ```python # pip install torch==1.7.1+cpu torchvision==0.8.2+cpu torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html import torch import torchvision print("PyTorch version:", torch.__version__) input = [[[[ 0, 1, 2, 3, 4, 5], # NCHW [10,11,12,13,14,15], [20,21,22,23,24,25], [30,31,32,33,34,35], [40,41,42,43,44,45], [50,51,52,53,54,55]]]] boxes = [[0, 1,1,3,3]] output_size = [4,4] aligned=True # Correct #aligned=False # Legacy setting sampling_ratio=1 spatial_scale=1 # https://pytorch.org/vision/0.8/_modules/torchvision/ops/roi_align.html output = torchvision.ops.roi_align( torch.tensor(input, dtype=torch.float), torch.tensor(boxes, dtype=torch.float), output_size, spatial_scale=spatial_scale, sampling_ratio=sampling_ratio, aligned=aligned ) torch.set_printoptions(sci_mode=False) print(input) print(boxes) print(output) ``` ---------- TensorFlow sample code: ```python # pip install tensorflow-gpu==1.15.0 import os import tensorflow.compat.v1 as tf input = [[ # NHWC [[ 0.], [ 1.], [ 2.], [ 3.], [ 4.], [ 5.]], [[10.], [11.], [12.], [13.], [14.], [15.]], [[20.], [21.], [22.], [23.], [24.], [25.]], [[30.], [31.], [32.], [33.], [34.], [35.]], [[40.], [41.], [42.], [43.], [44.], [45.]], [[50.], [51.], [52.], [53.], [54.], [55.]] ]] boxes = [[1/5,1/5,3/5,3/5],[3/5,3/5,4/5,4/5]] # Normalized 0.0 to 1.0 (where 1.0 = width - 1 and height - 1) box_indices = [0, 0] # Batch indices per corresponding region crop_size = [4, 4] # Output tensor size HW print("TensorFlow version:", tf.__version__) # 1.15.0 (cpu/cuda) # Using half_pixel_centers=True is correct (not align_corners=True) output_size = [6*2, 6*2] resize_output = tf.image.resize_bilinear(tf.constant(input), output_size, align_corners=False, half_pixel_centers=True) resize_bilinear_slice_output = tf.slice(resize_output, [0,2,2,0], [1,4,4,1]) # Note crop_and_resize doesn't scale the image boundaries to pixel centers, but always to corners, # and there is sadly no flag to influence this (unlike resize_bilinear). method = 'bilinear' extrapolation_value = 0 crop_and_resize_output = tf.image.crop_and_resize( image=tf.constant(input, dtype=tf.float32), # NHWC boxes=tf.constant(boxes, dtype=tf.float32), box_ind=tf.constant(box_indices, dtype=tf.int32), crop_size=tf.constant(crop_size, dtype=tf.int32), method=method, extrapolation_value=extrapolation_value ) with tf.Session(config=config) as session: with np.printoptions(precision=3, suppress=True): print("input:\n", input) print("crop_and_resize:\n", session.run(crop_and_resize_output)) print("resize_bilinear_and_slice:\n", session.run(resize_bilinear_slice_output)) ```
https://github.com/onnx/onnx/issues/3428
https://github.com/onnx/onnx/pull/3429
1880e91cd899076240117efb594fa4c9ab144f2c
82ddc566b950d05fdccb70c4579930f4014ece7f
2021-04-16T05:45:17Z
python
2021-04-30T22:06:56Z
closed
onnx/onnx
https://github.com/onnx/onnx
3,409
["docs/Changelog.md", "docs/Operators.md", "onnx/defs/math/defs.cc", "onnx/defs/math/old.cc", "onnx/defs/operator_sets.h", "onnx/defs/schema.h"]
Pow support for bfloat16
I see that with opset 13, bfloat16 support has been added to [Pow](https://github.com/onnx/onnx/blob/master/docs/Operators.md#pow); however, only the first operand `T` has been updated to support bfloat16. Should the second operand `T1` be updated as well?

Reproduced here from https://github.com/onnx/onnx/blob/master/docs/Operators.md#pow

```
T : tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16)
  Constrain input X and output types to float/int tensors.
T1 : tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double)
  Constrain input Y types to float/int tensors.
```
https://github.com/onnx/onnx/issues/3409
https://github.com/onnx/onnx/pull/3412
fe1d4946ccde54669dbabb47d4bec0aa14e92471
a64c2ed232392e2586dc8320710841eee62874df
2021-04-07T19:16:18Z
python
2021-04-24T23:56:26Z
closed
onnx/onnx
https://github.com/onnx/onnx
3,345
[".azure-pipelines/Linux-CI.yml", ".azure-pipelines/Windows-CI.yml", ".github/workflows/manylinux/entrypoint.sh", ".github/workflows/release_win.yml", ".github/workflows/weekly_mac_ci.yml", ".github/workflows/win_no_exception_ci.yml", "CMakeLists.txt", "README.md", "cmake/summary.cmake"]
ONNX_USE_PROTOBUF_SHARED_LIBS should control Protobuf_USE_STATIC_LIBS
# Bug Report

### Describe the bug
ONNX_USE_PROTOBUF_SHARED_LIBS is set independently of Protobuf_USE_STATIC_LIBS even though the two depend on each other. Even the documentation says:

If ONNX_USE_PROTOBUF_SHARED_LIBS is ON then Protobuf_USE_STATIC_LIBS must be OFF and USE_MSVC_STATIC_RUNTIME must be 0.
If ONNX_USE_PROTOBUF_SHARED_LIBS is OFF then Protobuf_USE_STATIC_LIBS must be ON.

Rather than relying on the user to set Protobuf_USE_STATIC_LIBS when calling cmake, it should be set before the call to find_package for protobuf, based on the value of ONNX_USE_PROTOBUF_SHARED_LIBS. This way the user does not have to rely on their knowledge of what cmake does when find_package is called.

### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 16.04*): Windows 10
- ONNX version (*e.g. 1.7*): 1.7
- Python version: 3.8
- GCC/Compiler version (if compiling from source): VS2019
- CMake version: 3.19
- Protobuf version: 3.12.3
- Visual Studio version (if applicable): VS2019

### Reproduction instructions
1. Checkout protobuf 3.12.3
2. Correctly build static libraries with cmake and VS2019:
```
-G "Visual Studio 16 2019" -A x64 -T host=x64 -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/path/to/protobuf/install -Dprotobuf_BUILD_EXAMPLES=OFF -Dprotobuf_BUILD_TESTS=OFF -Dprotobuf_MSVC_STATIC_RUNTIME=OFF -Dprotobuf_WITH_ZLIB=OFF
```
3. Checkout onnx-mlir
4. Build with cmake and VS2019:
```
-G "Visual Studio 16 2019" -A x64 -T host=x64 -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH=/path/to/protobuf/install
```

### Expected behavior
Build succeeds.

### Actual behavior
The build fails with multiple linker errors:

```
LNK2001: unresolved external symbol "__declspec(dllimport) ..."
LNK2019: unresolved external symbol "__declspec(dllimport) ..."
```

because neither onnx-mlir nor onnx defines Protobuf_USE_STATIC_LIBS, and cmake silently defines PROTOBUF_USE_DLLS even though our protobuf is a static library.
https://github.com/onnx/onnx/issues/3345
https://github.com/onnx/onnx/pull/3550
6c77a411215305c1a3f355f8c2e4633c19514690
31f0fd6103e231664d1cfd3b25264f672f5de960
2021-03-20T00:37:41Z
python
2021-07-02T21:24:44Z
closed
onnx/onnx
https://github.com/onnx/onnx
3,328
["onnx/test/version_converter_test.py", "onnx/version_converter/convert.cc", "onnx/version_converter/convert.h"]
Version converter doesn't convert nodes in subgraphs
The following script compares updating an `Unsqueeze` operator from a top-level graph vs an `Unsqueeze` operator from the body attribute of a `Loop` operator. It uses files from the node tests (update value of `node_tests_dir` to reproduce): ``` import onnx from onnx import version_converter, shape_inference node_tests_dir = '/home/matteo/Git/onnx/onnx/backend/test/data/node/' unsqueeze_test = 'test_unsqueeze_axis_3/' loop_test = 'test_loop11/' print('**** Unsqueeze model from Unsqueeze test') path = node_tests_dir + unsqueeze_test + 'model.onnx' model = onnx.load(path) print('Before conversion:') print('opset version: ' + str(model.opset_import[0].version)) print(model.graph.node[0]) print('After conversion:') converted = version_converter.convert_version(model, 13) print('opset version: ' + str(converted.opset_import[0].version)) print(converted.graph.node[0]) print('') print('**** Unsqueeze model from Loop test') path = node_tests_dir + loop_test + 'model.onnx' model = onnx.load(path) print('Before conversion:') print('opset version: ' + str(model.opset_import[0].version)) print(model.graph.node[0].attribute[0].g.node[4]) print('After conversion:') converted = version_converter.convert_version(model, 13) print('opset version: ' + str(converted.opset_import[0].version)) print(converted.graph.node[0].attribute[0].g.node[4]) print('') print('**** Shape inference on converted Loop model') shape_inference.infer_shapes(converted) ``` The output is: ``` **** Unsqueeze model from Unsqueeze test Before conversion: opset version: 11 input: "x" output: "y" op_type: "Unsqueeze" attribute { name: "axes" ints: 3 type: INTS } After conversion: opset version: 13 input: "x" input: "4" output: "y" op_type: "Unsqueeze" **** Unsqueeze model from Loop test Before conversion: opset version: 11 input: "iter_count" output: "slice_start" op_type: "Unsqueeze" attribute { name: "axes" ints: 0 type: INTS } After conversion: opset version: 13 input: "iter_count" output: "slice_start" op_type: "Unsqueeze" attribute { name: "axes" ints: 0 type: INTS } **** Shape inference on converted Loop model Traceback (most recent call last): File "script.py", line 33, in <module> shape_inference.infer_shapes(converted) File "/home/matteo/.local/lib/python3.8/site-packages/onnx-1.8.1-py3.8-linux-x86_64.egg/onnx/shape_inference.py", line 37, in infer_shapes inferred_model_str = C.infer_shapes(model_str, check_type, strict_mode) RuntimeError: input 1 is out of bounds ``` It can be seen that in the second case the `Unsqueeze` operator has not been updated, producing a faulty model which makes shape inference fail. So it seems that the version converter doesn't bother to recurse into the `Loop` subgraphs. Is this a problem with every operator supporting subgraphs?
https://github.com/onnx/onnx/issues/3328
https://github.com/onnx/onnx/pull/3474
9373cfc412b4d4ed54ea0e6d32aaf3722ee30493
ada1eed304bebbb441f732d583f118825eefa0fa
2021-03-14T15:40:09Z
python
2021-05-18T17:19:31Z
closed
onnx/onnx
https://github.com/onnx/onnx
3,314
["docs/Changelog.md", "docs/Operators.md", "docs/TestCoverage.md", "onnx/backend/test/case/node/averagepool.py", "onnx/backend/test/case/node/lppool.py", "onnx/backend/test/case/node/maxpool.py", "onnx/backend/test/case/node/pool_op_common.py", "onnx/defs/nn/defs.cc", "onnx/defs/nn/old.cc"]
Confused about AveragePool op ceil_mode attribute
# Ask a Question

### Question
While studying the AveragePool op, I got confused about the ceil_mode attribute. [AveragePool](https://github.com/onnx/onnx/blob/master/docs/Operators.md#AveragePool) says:

> ceil_mode : int (default is 0)
> Whether to use ceil or floor (default) to compute the output shape.

But I notice that when auto_pad is set, only ceil is used to compute the output shape. Is the ceil_mode attribute only used when auto_pad is 'NOTSET'?

### Further information
- This is the Python code I found that computes the output shape; it also only uses ceil: [get_output_shape](https://github.com/onnx/onnx/blob/1cbd549193661c7e9c870c4e9595144d0c6971c5/onnx/backend/test/case/node/pool_op_common.py#L24)
- I notice PyTorch's AvgPool2d op has ceil_mode ([AvgPool2d](https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html)). My guess is that the ONNX ceil_mode attribute is only used when auto_pad is 'NOTSET', and that when auto_pad is not 'NOTSET' the effective ceil_mode is 1. Is that correct? Thanks very much.
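For reference, a small sketch of the output-size arithmetic the question is about, assuming the standard pooling formula used with explicit padding (auto_pad == 'NOTSET'); ceil_mode only changes the rounding step:

```python
import math

def pool_output_dim(input_size, kernel, stride, pad_begin=0, pad_end=0, ceil_mode=0):
    # Output-size formula for explicit padding: ceil_mode switches floor vs. ceil.
    numer = input_size + pad_begin + pad_end - kernel
    rounded = math.ceil(numer / stride) if ceil_mode else math.floor(numer / stride)
    return rounded + 1

# Example: 7-wide input, kernel 2, stride 2, no padding.
print(pool_output_dim(7, 2, 2, ceil_mode=0))  # 3 (floor)
print(pool_output_dim(7, 2, 2, ceil_mode=1))  # 4 (ceil)
```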
https://github.com/onnx/onnx/issues/3314
https://github.com/onnx/onnx/pull/5248
efa11e6461960825ae5fc23ce48893ac25853bad
57057ac48f1b87c01789888e11a3c2bf8852e12c
2021-03-05T06:15:49Z
python
2023-06-15T04:28:58Z
closed
onnx/onnx
https://github.com/onnx/onnx
3,255
["onnx/test/checker_test.py"]
Not able to run inference with If operator
# Bug Report

### Is the issue related to model conversion?
No

### Describe the bug
https://github.com/onnx/onnx/blob/master/onnx/test/checker_test.py#L254

We created a model based on the above test case (test_nested_graph). The ONNX model was created successfully, but when running inference on the generated model we got the error below.

The code we're running:

```
model_path = "models/test1.onnx"
input_data = [np.array([True]), np.array([1.0, 2.0]).astype(np.float32)]
onnx_sess = onnxruntime.InferenceSession(model_path)
pred = onnx_sess.run(None, input_data)
print(pred)
```

```
Traceback (most recent call last):
  File "create_test_model.py", line 299, in <module>
    onnx_sess = onnxruntime.InferenceSession(model_out_path)
  File "lib/python3.7/site-packages/onnxruntime/capi/session.py", line 195, in __init__
    self._create_inference_session(providers, provider_options)
  File "lib/python3.7/site-packages/onnxruntime/capi/session.py", line 200, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from models/test1.onnx failed:Node () Op (If) [TypeInferenceError] Graph attribute inferencing failed: Size mismatch validating subgraph inputs. Got 0 inputs but subgraph has 1 inputs and requires 1 inputs. Either provide all subgraph inputs, or just the required inputs.
```

### System information
- OS Platform and Distribution: Mac OS 10.14.6
- ONNX version: 1.7.0
- Python version: 3.7.8
- GCC/Compiler version (if compiling from source):
- CMake version:
- Protobuf version:
- Visual Studio version (if applicable):

### Reproduction instructions
See the description above.

### Expected behavior
Expect the ONNX session to be created and the inference run to return a float result.

### Notes
None.
https://github.com/onnx/onnx/issues/3255
https://github.com/onnx/onnx/pull/3798
d0151d78c2ba53c3c71d05608b1d892ada20507d
5f1a8079561a0b677c9b8f82ecabff368051e79d
2021-02-02T01:44:50Z
python
2021-10-26T16:45:41Z
closed
onnx/onnx
https://github.com/onnx/onnx
3,237
["onnx/backend/test/report/coverage.py"]
Operator coverage report broken in ONNX 1.8
# Bug Report

### Describe the bug
I’m trying to debug a problem we noticed on the [ONNX Scoreboard](https://github.com/onnx/backend-scoreboard). For a while now the operator coverage report has not been updating. While debugging this I noticed that the coverage report is broken because it can’t find a schema for Adagrad (and perhaps other ops). The error message I’m seeing is this:

```
RuntimeError: No schema registered for 'Adagrad'!
```

You can see [complete logs on Azure](https://dev.azure.com/michalkarzynski/michalkarzynski/_build/results?buildId=995&view=logs&j=89a03e46-fbf8-5c6e-11d2-f1a7ba9bbbde&t=d35e1a0e-28e7-52d8-f96e-e1252e0b4d87&l=4748).

### System information
- OS Platform and Distribution: Linux Ubuntu 20.04
- ONNX version: 1.8
- Python version: Python 3.6.9

### Reproduction instructions
You can download the [Dockerfile for ONNX Runtime](https://github.com/onnx/backend-scoreboard/blob/master/runtimes/onnx-runtime/stable/Dockerfile) from ONNX Scoreboard, or check out the entire repo.

```
docker build -t scoreboard/onnx -f runtimes/onnx-runtime/stable/Dockerfile .
docker run --name onnx-runtime --env-file setup/env.list -v `pwd`/results/onnx-runtime/stable:/root/results scoreboard/onnx
```

Internally this runs:

```
pytest test_backend.py --onnx_backend=onnxruntime.backend.backend -k 'not OnnxBackendRealModelTest' --tb=no
```

using the following [test file](https://github.com/onnx/backend-scoreboard/blob/master/test/test_backend.py).

### Expected behavior
We would expect a file named [`nodes.csv`](https://github.com/onnx/backend-scoreboard/blob/master/results/onnx-runtime/stable/nodes.csv) to be generated.

### Notes
Instead we see `RuntimeError: No schema registered for 'Adagrad'!`. Why is the Adagrad schema not being registered for the operator coverage report?
https://github.com/onnx/onnx/issues/3237
https://github.com/onnx/onnx/pull/3256
6544dba74a4a0c9d937507b6f8f6a4cb20234701
b4d69b49f4216ebbe8857bd76e7c1a77665ee44f
2021-01-25T16:17:51Z
python
2021-02-08T09:42:34Z
closed
onnx/onnx
https://github.com/onnx/onnx
3,228
["docs/ExternalData.md", "docs/PythonAPIOverview.md", "onnx/__init__.py", "onnx/external_data_helper.py", "onnx/test/test_external_data.py"]
Updates to external data helpers
Add a new API for converting a model to external data. Today the conversion happens in two steps:

```
external_data_helper.convert_model_to_external_data(<model>, <all_tensors_to_one_file>, <size_threshold>)
save_model(model, output_path)
```

We want to add another API which combines the two steps:

```
save_model_to_external_data(<model>, <output_path>, <all_tensors_to_one_file>, <size_threshold>, <convert_attributes>)
```
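A usage sketch of the two-step flow as it exists today versus the single call this issue proposes; the combined function name and signature are the issue's proposal, not an existing API, and `big_model.onnx` is a placeholder path:

```python
import onnx
from onnx.external_data_helper import convert_model_to_external_data

model = onnx.load("big_model.onnx")  # placeholder path

# Today: two explicit steps.
convert_model_to_external_data(model, all_tensors_to_one_file=True, size_threshold=1024)
onnx.save_model(model, "big_model_external.onnx")

# Proposed combined call (hypothetical, per this issue; shown commented out):
# onnx.save_model_to_external_data(model, "big_model_external.onnx",
#                                  all_tensors_to_one_file=True,
#                                  size_threshold=1024,
#                                  convert_attributes=False)
```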
https://github.com/onnx/onnx/issues/3228
https://github.com/onnx/onnx/pull/3280
39df720cca9c4a833a3de3a6bc1d4667c1a8e58b
522b000546e5ed986fb2724ab2099d5a3219c586
2021-01-19T23:18:14Z
python
2021-03-24T20:02:31Z
closed
onnx/onnx
https://github.com/onnx/onnx
3,186
["onnx/defs/shape_inference.cc", "onnx/defs/shape_inference.h", "onnx/shape_inference/implementation.cc", "onnx/shape_inference/implementation.h"]
Opset conversion
# Bug Report ### Is the issue related to model conversion? Yes and no. Yes, in the sense that I've converted a model to ONNX. No, because I'm now trying to convert the opset to 7 ### Describe the bug I've converted a multiclass model to onnx with the target opset set to 7. When I tried to use it as with asp.net, I got an error saying it was opset 1 and that I needed to convert it. I'm now trying to convert it from opset 1 to opset 7 but I'm getting ``` RuntimeError: [TypeInferenceError] type case unsupported. existing=5 inferred=5 (op_type:ZipMap, node name: ZipMap): [TypeInferenceError] type case unsupported. existing=5 inferred=5 ``` **Code used to convert sklearn model** ``` initial_type = [('float_input', FloatTensorType([1, dim]))] onnx_model = convert_sklearn(lr, initial_types=initial_type, target_opset=12) with open("Modele/LR_2.onnx", "wb") as f: f.write(onnx_model.SerializeToString()) ``` **Code used to convert opset** ``` original_model = onnx.load(model_path) print('The model before conversion:\n{}'.format(original_model)) # Apply the version conversion on the original model converted_model = version_converter.convert_version(original_model, 7) print('The model after conversion:\n{}'.format(converted_model)) ``` **Output :** ``` floats: 0.9884291 floats: 1.063458 floats: 0.6337186 floats: -0.42541054 floats: -0.0067295996 floats: 0.34725153 floats: 0.0008511626 floats: 0.009505493 floats: 0.18251349 floats: -0.1303715 type: FLOATS } attribute { name: "multi_class" i: 1 type: INT } attribute { name: "post_transform" s: "SOFTMAX" type: STRING } domain: "ai.onnx.ml" } node { input: "probability_tensor" output: "probabilities" name: "Normalizer" op_type: "Normalizer" attribute { name: "norm" s: "L1" type: STRING } domain: "ai.onnx.ml" } node { input: "label" output: "output_label" name: "Identity" op_type: "Identity" domain: "" } node { input: "probabilities" output: "output_probability" name: "ZipMap" op_type: "ZipMap" attribute { ...... type: STRINGS } domain: "ai.onnx.ml" } name: "c50b4cd9856940b3924f51048cff27c5" input { name: "float_input" type { tensor_type { elem_type: 1 shape { dim { dim_value: 1 } dim { dim_value: 14525 } } } } } output { name: "output_label" type { tensor_type { elem_type: 8 shape { dim { } } } } } output { name: "output_probability" type { sequence_type { elem_type { map_type { key_type: 8 value_type { tensor_type { elem_type: 1 } } } } } } } } opset_import { domain: "" version: 1 } opset_import { domain: "ai.onnx.ml" version: 1 } Traceback (most recent call last): converted_model = version_converter.convert_version(original_model, 7) converted_model_str = C.convert_version(model_str, target_version) RuntimeError: [TypeInferenceError] type case unsupported. existing=5 inferred=5 (op_type:ZipMap, node name: ZipMap): [TypeInferenceError] type case unsupported. existing=5 inferred=5 ```
https://github.com/onnx/onnx/issues/3186
https://github.com/onnx/onnx/pull/3772
780b03610021d58c166b3c113b849899845421db
f9a21106141940ec89fb338895cbc4f573344e0d
2020-12-28T11:46:18Z
python
2021-10-28T04:28:26Z
closed
onnx/onnx
https://github.com/onnx/onnx
3,084
["CMakeLists.txt", "third_party/pybind11"]
Cannot build with pybind11 2.6.0
# Bug Report

### Describe the bug
We cannot build with pybind11 2.6, released recently; 2.5 works fine. Basically a `-I` for `Python.h` is missing. A bug has been filed with them: https://github.com/pybind/pybind11/issues/2632. They mentioned it could be caused by incorrect usage in onnx's CMake. Could you take a look?

### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 16.04*): CentOS 7 / Ubuntu 18.04
- ONNX version (*e.g. 1.7*): Built from latest release tag (1.7.0)
- Python version: distro stock python3
- GCC/Compiler version (if compiling from source): gcc-9 / gcc-8
- CMake version: Built from latest release tag
- Protobuf version: Built from latest release tag
https://github.com/onnx/onnx/issues/3084
https://github.com/onnx/onnx/pull/3105
356108796c72f151fb4980e8c377b766265e76e4
50cea73255696c7d8b9013ffa38e46338716078f
2020-11-02T09:24:50Z
python
2022-04-06T21:08:11Z
closed
onnx/onnx
https://github.com/onnx/onnx
3,017
[".azure-pipelines/Linux-CI.yml", ".azure-pipelines/Windows-CI.yml", ".github/workflows/manylinux/entrypoint.sh", ".github/workflows/release_win.yml", ".github/workflows/weekly_mac_ci.yml", ".github/workflows/win_no_exception_ci.yml", "CMakeLists.txt", "README.md", "cmake/summary.cmake"]
Wrong Installation Guide in README.md
# Bug Report

### Describe the bug
The installation guide in https://github.com/onnx/onnx#linux-and-macos is wrong :| It should tell users how to build onnx from source, but `pip install onnx` there will directly install the pre-compiled onnx package from PyPI. In my opinion, a `--no-binary` command-line argument is needed: https://pip.pypa.io/en/stable/reference/pip_install/#cmdoption-no-binary
https://github.com/onnx/onnx/issues/3017
https://github.com/onnx/onnx/pull/3550
6c77a411215305c1a3f355f8c2e4633c19514690
31f0fd6103e231664d1cfd3b25264f672f5de960
2020-09-13T13:09:40Z
python
2021-07-02T21:24:44Z
closed
onnx/onnx
https://github.com/onnx/onnx
3,013
["onnx/defs/controlflow/defs.cc", "onnx/defs/controlflow/old.cc"]
Shape inference issue with Loop output
# Bug Report

### Is the issue related to model conversion?
No

### Describe the bug
Reported by @BowenBao. Adding the issue here to be able to track it and reference it in a PR.

[loop_repro.zip](https://github.com/onnx/onnx/files/5211869/loop_repro.zip)

Attached are two models, loop_pass and loop_fail. Download them and run the script:

```
import onnxruntime
import torch

x = torch.randn(3, 4, 5)
y = torch.randn(4, 5)
sess = onnxruntime.InferenceSession('model.onnx')
ort_out = sess.run(None, {sess.get_inputs()[0].name: x.numpy(),
                          sess.get_inputs()[1].name: y.numpy()})
```

Observe the following error message:

```
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node (Concat_33) Op (Concat) [ShapeInferenceError] axis must be in [-rank, rank-1].
```

However, inspecting the model it is obvious the input has rank 3, which makes the range valid for axis=2.
https://github.com/onnx/onnx/issues/3013
https://github.com/onnx/onnx/pull/3014
04357970e33cd3730c8b2b3e726fb470f323979b
b8f6254bf133cf894611eb0f0293720473cf3518
2020-09-11T23:42:18Z
python
2020-09-14T22:03:27Z
closed
onnx/onnx
https://github.com/onnx/onnx
2,991
["docs/Changelog.md", "docs/Operators.md", "onnx/defs/tensor/defs.cc"]
ScatterElements and ScatterND input 'updates' should be differentiable
as title.
https://github.com/onnx/onnx/issues/2991
https://github.com/onnx/onnx/pull/3028
63369697867ada4c88881175edc081728f445570
b2ed660d0a065b8346816f2c3a95d79ca79b88c9
2020-09-02T06:17:02Z
python
2020-09-23T20:24:27Z
closed
onnx/onnx
https://github.com/onnx/onnx
2,965
["onnx/defs/tensor/utils.cc", "onnx/shape_inference/implementation.cc", "onnx/test/shape_inference_test.py"]
Shape inference on the Torchvision’s Mask R-CNN causes a segmentation fault
# Bug Report

### Is the issue related to model conversion?
Probably not.

### Describe the bug
When I try to run shape inference on the Torchvision Mask R-CNN, it causes a segfault. However, `check_model` doesn't return any warning/exception.

### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 16.04*): Linux Ubuntu 18.04.4
- ONNX version (*e.g. 1.7*): 1.6.0
- Python version: 3.6.10
- PyTorch version: 1.6.0a0+9907a3e
- Torchvision version: 0.7.0a0
- GCC/Compiler version (if compiling from source): N/A
- CMake version: 3.14.0
- Protobuf version: 3.12.2
- Visual Studio version (if applicable): N/A

### Reproduction instructions

```
import torch
import torchvision
import onnx

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()
x = [torch.rand(3, 800, 800)]
torch.onnx.export(model, x, "mask_rcnn.onnx", opset_version=11)

onnx_model = onnx.load("mask_rcnn.onnx")
onnx.checker.check_model(onnx_model)
onnx.shape_inference.infer_shapes(onnx_model)
```

### Expected behavior
Shape inference doesn't cause any errors.
https://github.com/onnx/onnx/issues/2965
https://github.com/onnx/onnx/pull/2983
1be9a7de8f720cebbd794d596dc5cfc4d5b1859c
53057782d19a59548705802fca756d87dee032e2
2020-08-19T11:40:19Z
python
2020-08-28T23:19:50Z
closed
onnx/onnx
https://github.com/onnx/onnx
2,481
["CMakeLists.txt"]
Build broken: Unknown generator option: dllexport_decl
@orionr @linkerzhang @zrphercule Run on macOS Catalina 10.15.1 with protobuf 3.11.0: ``` git clone https://github.com/onnx/onnx.git && cd onnx git submodule update --init --recursive python3 setup.py install ``` Log: ``` ~/onnx: python3 setup.py install running install running bdist_egg running egg_info creating onnx.egg-info writing onnx.egg-info/PKG-INFO writing dependency_links to onnx.egg-info/dependency_links.txt writing entry points to onnx.egg-info/entry_points.txt writing requirements to onnx.egg-info/requires.txt writing top-level names to onnx.egg-info/top_level.txt writing manifest file 'onnx.egg-info/SOURCES.txt' reading manifest file 'onnx.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'onnx.egg-info/SOURCES.txt' installing library code to build/bdist.macosx-10.15-x86_64/egg running install_lib running build_py running create_version running cmake_build -- The C compiler identification is AppleClang 11.0.0.11000033 -- The CXX compiler identification is AppleClang 11.0.0.11000033 -- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -- works -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Detecting C compile features -- Detecting C compile features - done -- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Detecting CXX compile features -- Detecting CXX compile features - done -- Found Protobuf: /usr/local/lib/libprotobuf.dylib (found version "3.11.0") Generated: /Users/x/onnx/.setuptools-cmake-build/onnx/onnx-ml.proto Generated: /Users/x/onnx/.setuptools-cmake-build/onnx/onnx-operators-ml.proto CMake Warning at CMakeLists.txt:406 (find_package): By not providing "Findpybind11.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "pybind11", but CMake did not find one. Could not find a package configuration file provided by "pybind11" (requested version 2.2) with any of the following names: pybind11Config.cmake pybind11-config.cmake Add the installation prefix of "pybind11" to CMAKE_PREFIX_PATH or set "pybind11_DIR" to a directory containing one of the above files. If "pybind11" provides a separate development package or SDK, be sure it has been installed. 
-- -- ******** Summary ******** -- CMake version : 3.15.5 -- CMake command : /usr/local/Cellar/cmake/3.15.5/bin/cmake -- System : Darwin -- C++ compiler : /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -- C++ compiler version : 11.0.0.11000033 -- CXX flags : -Wnon-virtual-dtor -- Build type : Release -- Compile definitions : -- CMAKE_PREFIX_PATH : -- CMAKE_INSTALL_PREFIX : /usr/local -- CMAKE_MODULE_PATH : -- -- ONNX version : 1.6.0 -- ONNX NAMESPACE : onnx -- ONNX_BUILD_TESTS : OFF -- ONNX_BUILD_BENCHMARKS : OFF -- ONNX_USE_LITE_PROTO : OFF -- ONNXIFI_DUMMY_BACKEND : OFF -- ONNXIFI_ENABLE_EXT : OFF -- -- Protobuf compiler : /usr/local/bin/protoc -- Protobuf includes : /usr/local/include -- Protobuf libraries : /usr/local/lib/libprotobuf.dylib -- BUILD_ONNX_PYTHON : ON -- Python version : -- Python executable : /usr/local/opt/python/bin/python3.7 -- Python includes : /usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/include/python3.7m -- Configuring done -- Generating done -- Build files have been written to: /Users/x/onnx/.setuptools-cmake-build Scanning dependencies of target gen_onnx_proto Scanning dependencies of target onnxifi_loader Scanning dependencies of target onnxifi_dummy [ 1%] Running gen_proto.py on onnx/onnx.in.proto [ 4%] Building C object CMakeFiles/onnxifi_loader.dir/onnx/onnxifi_loader.c.o [ 4%] Building C object CMakeFiles/onnxifi_dummy.dir/onnx/onnxifi_dummy.c.o /Users/x/onnx/onnx/onnxifi_dummy.c:173:21: warning: incompatible function pointer types assigning to 'onnxExtensionFunctionPointer' (aka 'int (*)(void)') from 'onnxStatus (*)(onnxBackendID, const char *, onnxExtensionFunctionPointer *)' (aka 'int (*)(void *, const char *, int (**)(void))') [-Wincompatible-function-pointer-types] *function = &onnxGetExtensionFunctionAddress; ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /Users/x/onnx/onnx/onnxifi_dummy.c:176:21: warning: incompatible function pointer types assigning to 'onnxExtensionFunctionPointer' (aka 'int (*)(void)') from 'onnxStatus (*)(onnxGraph, uint32_t, const onnxTensorDescriptorV1 *, uint32_t, const onnxTensorDescriptorV1 *, onnxMemoryFenceV1 *)' (aka 'int (*)(void *, unsigned int, const struct onnxTensorDescriptorV1 *, unsigned int, const struct onnxTensorDescriptorV1 *, struct onnxMemoryFenceV1 *)') [-Wincompatible-function-pointer-types] *function = &onnxSetIOAndRunGraph; ^ ~~~~~~~~~~~~~~~~~~~~~ Processing /Users/x/onnx/onnx/onnx.in.proto Writing /Users/x/onnx/.setuptools-cmake-build/onnx/onnx-ml.proto Writing /Users/x/onnx/.setuptools-cmake-build/onnx/onnx-ml.proto3 generating /Users/x/onnx/.setuptools-cmake-build/onnx/onnx_pb.py 2 warnings generated. [ 6%] Linking C static library libonnxifi_loader.a [ 9%] Linking C shared library libonnxifi_dummy.dylib [ 9%] Running C++ protocol buffer compiler on /Users/x/onnx/.setuptools-cmake-build/onnx/onnx-ml.proto [ 9%] Built target onnxifi_loader --python_out: onnx/onnx-ml.proto: Unknown generator option: dllexport_decl make[2]: *** [onnx/onnx-ml.pb.cc] Error 1 make[1]: *** [CMakeFiles/gen_onnx_proto.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs.... 
Scanning dependencies of target onnxifi_wrapper [ 9%] Built target onnxifi_dummy [ 11%] Building C object CMakeFiles/onnxifi_wrapper.dir/onnx/onnxifi_wrapper.c.o [ 13%] Linking C shared module libonnxifi.dylib [ 13%] Built target onnxifi_wrapper make: *** [all] Error 2 Traceback (most recent call last): File "setup.py", line 336, in <module> 'backend-test-tools = onnx.backend.test.cmd_tools:main', File "/usr/local/lib/python3.7/site-packages/setuptools/__init__.py", line 145, in setup return distutils.core.setup(**attrs) File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/distutils/core.py", line 148, in setup dist.run_commands() File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/distutils/dist.py", line 966, in run_commands self.run_command(cmd) File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/usr/local/lib/python3.7/site-packages/setuptools/command/install.py", line 67, in run self.do_egg_install() File "/usr/local/lib/python3.7/site-packages/setuptools/command/install.py", line 109, in do_egg_install self.run_command('bdist_egg') File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/usr/local/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 172, in run cmd = self.call_command('install_lib', warn_dir=0) File "/usr/local/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 158, in call_command self.run_command(cmdname) File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/usr/local/lib/python3.7/site-packages/setuptools/command/install_lib.py", line 11, in run self.build() File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/distutils/command/install_lib.py", line 105, in build self.run_command('build_py') File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "setup.py", line 213, in run self.run_command('cmake_build') File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "setup.py", line 207, in run subprocess.check_call(build_args) File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py", line 363, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['/usr/local/bin/cmake', '--build', '.', '--', '-j', '16']' returned non-zero exit status 2. ```
https://github.com/onnx/onnx/issues/2481
https://github.com/onnx/onnx/pull/2482
477a9b87715d614f8b7540a69c144b177275baa2
57ebc587fcf3913b4be93653b0dd58c686447298
2019-11-29T19:59:12Z
python
2019-12-17T01:01:14Z
closed
onnx/onnx
https://github.com/onnx/onnx
2,449
["docs/Changelog.md", "docs/Operators.md", "onnx/defs/tensor/defs.cc", "onnx/defs/tensor/old.cc", "onnx/version_converter/adapters/softmax_12_13.h"]
Spec unclear: Can a scalar be Reshaped to a 1D tensor and vice versa?
A scalar has shape [ ] and rank 0. Can the Reshape operator be applied to it with a shape of [ 1 ]? And can a 1D tensor of shape [ 1 ] have Reshape applied with a shape of [ ]?
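As a point of comparison only (not part of the original question): NumPy allows reshaping in both directions, since the element count of 1 is preserved; the issue asks whether ONNX's Reshape is intended to behave the same way.

```python
import numpy as np

scalar = np.array(3.14, dtype=np.float32)      # rank 0, shape ()
as_1d = scalar.reshape([1])                    # rank 1, shape (1,)
back = as_1d.reshape([])                       # rank 0 again, shape ()
print(scalar.shape, as_1d.shape, back.shape)   # () (1,) ()
```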
https://github.com/onnx/onnx/issues/2449
https://github.com/onnx/onnx/pull/3861
ae3e6b80e63fbef0abda96ce698d31c70cd5e43d
dba3bd832f4d740d991d367535bab5c3634455a7
2019-11-12T17:59:26Z
python
2022-01-25T01:20:53Z
closed
onnx/onnx
https://github.com/onnx/onnx
2,446
["onnx/optimizer/passes/fuse_consecutive_transposes.h", "onnx/test/optimizer_test.py"]
onnx optimizer cannot fuse consecutive transposes correctly.
The following is a simple repro: ``` import onnx from onnx import helper, TensorProto from onnx import optimizer, shape_inference # Preprocessing: create a model with two nodes, Y's shape is unknown node1 = helper.make_node('Transpose', ['X'], ['Y'], perm=[1, 0, 2]) node2 = helper.make_node('Transpose', ['Y'], ['Z'], perm=[0, 2, 1]) graph = helper.make_graph( [node1, node2], 'two-transposes', [helper.make_tensor_value_info('X', TensorProto.FLOAT, (2, 3, 4))], [helper.make_tensor_value_info('Z', TensorProto.FLOAT, (3, 4, 2))], ) original_model = helper.make_model(graph, producer_name='onnx-examples') # Apply shape inference on the model and check it inferred_model = shape_inference.infer_shapes(original_model) onnx.checker.check_model(inferred_model) # Pick one pass as example passes = ['fuse_consecutive_transposes'] # Apply the optimization on the original model and check it optimized_model = optimizer.optimize(original_model, passes) onnx.checker.check_model(shape_inference.infer_shapes(optimized_model)) # error happens here in infer_shapes print('The model after optimization:\n{}'.format(optimized_model)) ``` The optimized model fails to pass infer_shapes. I looked into it: the two consecutive Transpose ops with perms [1,0,2] and [0,2,1] should be fused to [1,2,0], but onnx produces a fused perm of [2,0,1].
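A minimal NumPy check (not from the issue) of the permutation arithmetic claimed above: applying perm [1,0,2] and then [0,2,1] composes to the single perm built as p1[p2[i]], which is [1,2,0], not [2,0,1].

```python
import numpy as np

x = np.arange(24).reshape(2, 3, 4)
p1, p2 = [1, 0, 2], [0, 2, 1]

two_step = np.transpose(np.transpose(x, p1), p2)
fused = [p1[i] for i in p2]                      # composes to [1, 2, 0]
assert fused == [1, 2, 0]
assert np.array_equal(two_step, np.transpose(x, fused))
# the perm reported in the bug does not reproduce the two-step result
assert not np.array_equal(two_step, np.transpose(x, [2, 0, 1]))
```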
https://github.com/onnx/onnx/issues/2446
https://github.com/onnx/onnx/pull/2471
ad1f5567b6c00cfde71a2c0e466b07101d8c84c2
cdc8b86196ad4d20e9e7e530b3323f23324babea
2019-11-12T07:15:22Z
python
2019-12-05T06:04:00Z
closed
onnx/onnx
https://github.com/onnx/onnx
2,339
["third_party/pybind11"]
import paddle.fluid & import onnx causes a core dump
https://github.com/microsoft/onnxruntime/issues/1541 https://github.com/pybind/pybind11/issues/1262 Updating pybind11 to the latest version can solve this.
https://github.com/onnx/onnx/issues/2339
https://github.com/onnx/onnx/pull/2340
7840504d4d6f3125f6ae4dff256381632cf84d4f
1aa176e05b722b408b83aaf2b2a893c9fbaa37d7
2019-09-19T17:27:48Z
python
2019-09-19T20:35:31Z
closed
onnx/onnx
https://github.com/onnx/onnx
2,333
["docs/Changelog.md", "docs/Operators.md", "onnx/defs/nn/defs.cc"]
ConvTranspose's output_padding attribute requires clarification
ConvTranspose's output_padding attribute is defined as follows: > The **zero-padding** added to one side of the output. This is also called adjs/adjustment in some frameworks. (Emphasis mine.) The most likely interpretation of this is that a positive output_padding would add rows/columns of zeros to one side of the output tensor. This is misleading, as the apparent intention of this attribute is to match PyTorch's definition, which is different. PyTorch instead defines it as follows: https://pytorch.org/docs/stable/nn.html#convtranspose2d > The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sizes of the input. This is set so that when a Conv2d and a ConvTranspose2d are initialized with same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, Conv2d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. **Note that output_padding is only used to find output shape, but does not actually add zero-padding to output.** (Emphasis mine.) In other words in PyTorch, output_padding affects the output shape (just like it does in ONNX) but the added rows/columns *are not zero'd* - they participate in the ConvTranspose just like the other elements of the output tensor. The definition of output_padding in ConvTranspose should therefore probably be clarified to say something closer to how PyTorch defines it - that is, that it participates in shape inference of the output tensor, but doesn't actually zero out those added rows/columns. (This was discussed earlier in an offline thread with @spandantiwari and @pranavsharma)
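A small illustrative sketch (based on the PyTorch documentation quoted above, not on text from this issue): output_padding only enters the output-shape arithmetic and does not zero out any rows or columns.

```python
# Per-dimension output size of ConvTranspose as PyTorch documents it:
# out = (in - 1) * stride - 2 * padding + dilation * (kernel - 1) + 1 + output_padding
def conv_transpose_out_dim(in_size, kernel, stride=1, padding=0, output_padding=0, dilation=1):
    return (in_size - 1) * stride - 2 * padding + dilation * (kernel - 1) + 1 + output_padding

# With stride 2, kernel 3, padding 1, inputs of size 5 and 6 both map to size 3 under a plain Conv;
# output_padding=1 selects the larger inverse shape without actually zero-padding the output.
print(conv_transpose_out_dim(3, kernel=3, stride=2, padding=1, output_padding=0))  # 5
print(conv_transpose_out_dim(3, kernel=3, stride=2, padding=1, output_padding=1))  # 6
```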
https://github.com/onnx/onnx/issues/2333
https://github.com/onnx/onnx/pull/2343
a20ba2f1dc3fbd1109fcc67bf42247531832bb60
c5af774aa950f6a5cf1a7b5f37d014fb2fbd82bc
2019-09-18T22:22:40Z
python
2019-09-21T00:52:48Z
closed
onnx/onnx
https://github.com/onnx/onnx
1,949
["onnx/defs/tensor/defs.cc", "onnx/defs/tensor_proto_util.cc", "onnx/defs/tensor_proto_util.h", "onnx/test/shape_inference_test.py"]
shape inference bug of op "Slice 10"
This shape inference function fails even in simple cases, for example an ONNX graph that contains only one Slice node whose inputs (such as "starts") are all constant nodes except "data".
https://github.com/onnx/onnx/issues/1949
https://github.com/onnx/onnx/pull/1950
6fb0775d85226668048360cacba4763deb80d948
414dbc731506e21c70b4600352a161c528233a57
2019-04-18T03:10:59Z
python
2019-04-19T07:35:37Z
closed
onnx/onnx
https://github.com/onnx/onnx
1,934
["docs/IR.md"]
Case sensitivity of operator names and attributes
The spec should be clear about case sensitivity when matching names of operators, attributes, and attribute string enums. That includes NodeProto::op_type, AttributeProto::name, and AttributeProto::s/strings. ## Applicable documentation: https://github.com/onnx/onnx/blob/master/docs/IR.md https://github.com/onnx/onnx/blob/master/docs/Operators.md#RNN https://github.com/onnx/onnx/blob/master/onnx/onnx.proto ## Context: We’ve been matching all of these in WinML kernels atop DirectML with exact case sensitivity (simpler/faster/less ambiguous), but we encountered a failing model exported from PyTorch where the RNN activation attribute “Tanh” was “tanh” instead :/. Granted, I see this as a bug in the exporter because the exporter should use the same case as the spec lists them, but the spec should make it clear in any case whether "tanh" should match too.
https://github.com/onnx/onnx/issues/1934
https://github.com/onnx/onnx/pull/4096
61c36aa4250ca607c194fc55070e95db2b95761b
ee4888c24510787bb8c61ebcba43ef252744e648
2019-04-12T21:55:54Z
python
2022-03-29T15:30:28Z
closed
onnx/onnx
https://github.com/onnx/onnx
1,823
["onnx/backend/test/case/model/shrink.py", "onnx/backend/test/data/node/test_shrink_hard/model.onnx", "onnx/backend/test/data/node/test_shrink_soft/model.onnx", "onnx/backend/test/data/simple/test_shrink/model.onnx"]
test data for shrink_test is of incorrect shape
Both onnx/backend/test/data/simple/test_shrink/test_data_set_0/input_0.pb and onnx/backend/test/data/simple/test_shrink/test_data_set_0/output_0.pb have shape [5], whereas the graph input and output in onnx/backend/test/data/simple/test_shrink/model.onnx have shape [1,5]. In test_shrink_soft and test_shrink_hard, the test input data, the test output data, the graph input defined in the model, and the graph output defined in the model all have a consistent shape of [5]. Tagging @zrphercule as fyi
https://github.com/onnx/onnx/issues/1823
https://github.com/onnx/onnx/pull/1825
b37fc6df4e426650cb0d48fc69519d0cae1584ae
80346bd748fc8e586c70e89c50befe7d7f8c6383
2019-02-19T22:33:37Z
python
2019-02-20T22:54:18Z
closed
onnx/onnx
https://github.com/onnx/onnx
1,775
["docs/Changelog.md", "docs/Operators.md", "onnx/defs/logical/defs.cc"]
doc issue for Greater op
In Opset 9, it is no longer limited to ```float tensors```. https://github.com/onnx/onnx/blob/15c33c945851907411619f599900c3852108e7e3/onnx/defs/logical/defs.cc#L82-L94
https://github.com/onnx/onnx/issues/1775
https://github.com/onnx/onnx/pull/1811
0628582506ede89f94faa80977d86e3f7e19739c
4f064a16359f256d8f7ba13f957a9013451c319a
2019-01-29T06:58:25Z
python
2019-02-14T16:02:58Z
closed
onnx/onnx
https://github.com/onnx/onnx
1,755
["docs/Changelog.md", "docs/Operators.md", "onnx/defs/tensor/defs.cc"]
Inaccurate doc for Squeeze
For the `axes` attribute of `Squeeze`, the doc describes it as: ``` List of positive integers, indicate the dimensions to squeeze. ``` But based on the implementations in Caffe2 and Microsoft's onnxruntime, the axes are 0-based indices rather than positive ints.
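A NumPy analogue (not from the issue) showing why 0 must be a legal axis value, i.e. why the list is 0-based rather than "positive":

```python
import numpy as np

x = np.zeros((1, 3, 1, 5))
# axis 0 is a valid dimension to squeeze, so "positive integers" is misleading
print(np.squeeze(x, axis=(0, 2)).shape)  # (3, 5)
```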
https://github.com/onnx/onnx/issues/1755
https://github.com/onnx/onnx/pull/1905
a8b45d62de57ab9b25d7764e38e44f5e9e9c027a
079c2639f9bb79b1774d1e3bfa05b0c093816ca7
2019-01-21T20:28:38Z
python
2019-04-03T06:21:37Z
closed
onnx/onnx
https://github.com/onnx/onnx
1,735
["onnx/defs/tensor/defs.cc", "onnx/test/shape_inference_test.py"]
Shape inference fails with a Split node with a split attribute
When a `split` attribute is set to a `Split` node, `onnx.shape_inference.infer_shapes` fails to infer its output shapes. ``` import onnx import onnx.shape_inference op = onnx.helper.make_node("Split", inputs=["a"], outputs=["b", "c", "d"], axis=0) i = onnx.helper.make_tensor_value_info(name="a", elem_type=onnx.TensorProto.FLOAT, shape=[6]) model = onnx.helper.make_model(onnx.helper.make_graph([op], "graph", [i], [])) print(onnx.shape_inference.infer_shapes(model)) # contains proper value_info op = onnx.helper.make_node("Split", inputs=["a"], outputs=["b", "c", "d"], axis=0, split=[1, 2, 3]) i = onnx.helper.make_tensor_value_info(name="a", elem_type=onnx.TensorProto.FLOAT, shape=[6]) model = onnx.helper.make_model(onnx.helper.make_graph([op], "graph", [i], [])) print(onnx.shape_inference.infer_shapes(model)) # the shapes of b, c, and d are not inferred ``` While the former example outputs the correct shapes (three `[2]`), the resulting `value_info` of the latter example do not contain dimension information. This will happen even if the tensor `a` is divided evenly (by setting `split=[2, 2, 2]`). I'm using ONNX 1.3.0.
https://github.com/onnx/onnx/issues/1735
https://github.com/onnx/onnx/pull/2328
684fc1bcf29da1986ffa5a9c33bef0565bb1a894
dfa4384c3f8fe9d804032f992e6f818fc54152b0
2019-01-14T22:01:07Z
python
2020-01-14T23:57:28Z
closed
onnx/onnx
https://github.com/onnx/onnx
1,689
["docs/Changelog.md", "docs/Operators.md", "onnx/defs/experiments/defs.cc", "onnx/defs/gen_doc.py", "onnx/defs/operator_sets.h"]
Either DynamicSlice is experimental or part of opset 9. Which one is it?
The op is classified under [experimental][1] while it's [mentioned][2] that it's part of opset 9. [1]: https://github.com/onnx/onnx/blob/master/docs/Operators.md#experimental-dynamicslice [2]: https://github.com/onnx/onnx/blob/master/docs/Operators.md#version-122
https://github.com/onnx/onnx/issues/1689
https://github.com/onnx/onnx/pull/1696
0c8d857bb162431912b255d5c0e773fb7c131a65
da7c50cb0e5948e43b9d7aa72489b1e040090461
2018-12-12T03:06:08Z
python
2018-12-19T15:53:31Z
closed
onnx/onnx
https://github.com/onnx/onnx
1,661
[".github/workflows/codeql.yml", ".github/workflows/lint.yml", ".github/workflows/scorecard.yml"]
The specification for SVMClassifier is not precise enough
There are N columns for N classes if the operator outputs probabilities. But what if N > 2 and the operator outputs raw scores for the N(N-1)/2 support vectors? libsvm returns N(N-1)/2 columns; scikit-learn normalizes the scores to get N columns. The spec should choose one behavior or add a parameter to specify which one is expected.
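An illustration of the divergence described above using scikit-learn (assuming scikit-learn is available; this is not ONNX code): for N = 10 classes, the raw pairwise decision values have N(N-1)/2 = 45 columns, while the one-vs-rest-shaped output has N columns.

```python
from sklearn.datasets import load_digits
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)                 # N = 10 classes
ovo = SVC(decision_function_shape="ovo").fit(X, y)  # raw pairwise decision values, libsvm-style
ovr = SVC(decision_function_shape="ovr").fit(X, y)  # aggregated to one score per class

print(ovo.decision_function(X[:1]).shape)  # (1, 45) -> N*(N-1)/2 columns
print(ovr.decision_function(X[:1]).shape)  # (1, 10) -> N columns
```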
https://github.com/onnx/onnx/issues/1661
https://github.com/onnx/onnx/pull/5186
b662b2c04dd1ef665132a98ed27c2c32c35d4643
ee593e5af421e93770dd454f68a5154793c9c7ae
2018-11-29T16:20:29Z
python
2023-05-01T17:05:24Z
closed
onnx/onnx
https://github.com/onnx/onnx
1,564
["onnx/common/interned_strings.h", "onnx/examples/optimize_onnx.ipynb", "onnx/optimizer/pass_registry.h", "onnx/optimizer/passes/eliminate_nop_dropout.h", "onnx/optimizer/passes/fuse_consecutive_reduce_unsqueeze.h", "onnx/test/optimizer_test.py"]
Optimization: Fuse {Reduction[keepdim=0]->Unsqueeze} to {Reduction[keepdim=1]}
https://github.com/onnx/onnx/issues/1564
https://github.com/onnx/onnx/pull/1565
6db386e4ccd1a63a367489e5ebaf4433fc3204e3
ea694bf5cbccf34e08f36a8d9365d9011b72aeb4
2018-10-31T01:28:41Z
python
2018-11-20T08:22:25Z
closed
onnx/onnx
https://github.com/onnx/onnx
1,556
["onnx/common/interned_strings.h", "onnx/examples/optimize_onnx.ipynb", "onnx/optimizer/pass_registry.h", "onnx/optimizer/passes/fuse_consecutive_concats.h", "onnx/test/optimizer_test.py"]
Optimization Pass: Fusing consecutive concats
I was looking at an exported model at work and saw the following behavior: ![image](https://user-images.githubusercontent.com/4429794/47581637-cdc3ea00-d906-11e8-9d24-f7d0f2e39417.png) In this case we can replace the 3 concats with a single concat along axis=0. I saw this type of pattern in multiple places of the DAG, and it seems to be a byproduct of PyTorch's export of GRU (or maybe all RNN) ops.
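A minimal NumPy sketch (not from the issue) of the rewrite being proposed: nested concatenations along the same axis collapse into a single concatenation.

```python
import numpy as np

a, b, c = (np.random.randn(2, 3) for _ in range(3))
step1 = np.concatenate([a, b], axis=0)        # first Concat
step2 = np.concatenate([step1, c], axis=0)    # second Concat consuming the first
fused = np.concatenate([a, b, c], axis=0)     # the single fused Concat
assert np.array_equal(step2, fused)
```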
https://github.com/onnx/onnx/issues/1556
https://github.com/onnx/onnx/pull/1557
3265c1e1d953b2638059ddcd55cfc8dee4c24cd7
736dc863dbf002af2509dad2f030dd864ab2a196
2018-10-26T17:11:54Z
python
2018-10-30T06:41:42Z
closed
onnx/onnx
https://github.com/onnx/onnx
1,521
["onnx/common/interned_strings.h", "onnx/examples/optimize_onnx.ipynb", "onnx/optimizer/pass_registry.h", "onnx/optimizer/passes/eliminate_nop_dropout.h", "onnx/test/optimizer_test.py"]
Optimization Pass: Dropout with ratio 0.0 should be a Nop
https://github.com/onnx/onnx/issues/1521
https://github.com/onnx/onnx/pull/1555
f958d5a8f8581f5a1cf362724ef9f33054ca5ff4
3265c1e1d953b2638059ddcd55cfc8dee4c24cd7
2018-10-18T00:43:08Z
python
2018-10-29T05:20:41Z
closed
onnx/onnx
https://github.com/onnx/onnx
1,429
["onnx/common/interned_strings.h", "onnx/examples/optimize_onnx.ipynb", "onnx/optimizer/pass_registry.h", "onnx/optimizer/passes/eliminate_nop_monotone_argmax.h", "onnx/test/optimizer_test.py"]
Optimization Pass: A monotonically increasing function is redundant before an argmax calculation
It's standard to use softmax to generate a probability distribution at the end of a model. If we want to get the predicted class we end up with `net(x).softmax(dim=1).argmax(dim=1)`. In reality, though, because softmax is a monotonically increasing operation, we can replace the expression with `net(x).argmax(dim=1)`.
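Softmax applies the increasing function exp and a shared positive normalizer along the class axis, so it preserves ordering and hence the argmax; a quick NumPy check (not from the issue):

```python
import numpy as np

def softmax(x, axis=1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

logits = np.random.randn(8, 10)
assert np.array_equal(softmax(logits).argmax(axis=1), logits.argmax(axis=1))
```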
https://github.com/onnx/onnx/issues/1429
https://github.com/onnx/onnx/pull/1519
7b7b6ec4fdf21039c1e77814d1a1cb7c2d658e9a
f958d5a8f8581f5a1cf362724ef9f33054ca5ff4
2018-09-19T21:46:42Z
python
2018-10-26T04:27:30Z
closed
onnx/onnx
https://github.com/onnx/onnx
1,313
["onnx/backend/test/case/node/mvn.py", "onnx/backend/test/data/node/test_mvn/model.onnx", "onnx/backend/test/data/node/test_mvn/test_data_set_0/input_0.pb", "onnx/backend/test/data/node/test_mvn/test_data_set_0/output_0.pb"]
Test Cases for FuncMeanVarianceNormalization Missing
Test cases for FuncMeanVarianceNormalization are missing. This means that, on a separate branch of ONNX, when test data is generated, relevant tests start failing (schema for FuncMeanVarianceNormalization can't be found).
https://github.com/onnx/onnx/issues/1313
https://github.com/onnx/onnx/pull/1335
e509836f45ebbb7b9b977405d411eb042e24163d
ddcaf8adc3ed0d6c40d5c2691ad231593c40df61
2018-08-22T17:37:06Z
python
2018-08-28T17:24:12Z
closed
onnx/onnx
https://github.com/onnx/onnx
1,194
["CMakeLists.txt", "onnx/__init__.py", "onnx/onnx-operators-ml.proto", "onnx/onnx-operators-ml.proto3", "onnx/onnx-operators.in.proto", "onnx/onnx-operators.proto", "onnx/onnx-operators.proto3", "tools/protoc-gen-mypy.py"]
ModuleNotFoundError in generated onnx_operators_pb2.py
I need to import a class defined in operators.proto. Trying to add `from .onnx_operators_pb import *` in __init__.py produces the following error: ``` >>> import onnx Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\\onnx\onnx\__init__.py", line 8, in <module> from .onnx_operators_pb import * File "C:\\onnx\onnx\onnx_operators_pb.py", line 8, in <module> from .onnx_operators_pb2 import * # noqa File "C:\\onnx\onnx\onnx_operators_pb2.py", line 17, in <module> import onnx_pb2 as onnx__pb2 ModuleNotFoundError: No module named 'onnx_pb2' ``` If I manually change ` import onnx_pb2 as onnx__pb2` to ` import onnx.onnx_pb2 as onnx__pb2` it won't crash. Any thoughts on how to bypass it?
https://github.com/onnx/onnx/issues/1194
https://github.com/onnx/onnx/pull/1198
17818ea03ae575e5c0777d69402918ce2f623902
2e7099ee7c37b196c197c9a084a97698a41da232
2018-07-13T00:49:39Z
python
2018-07-16T22:30:13Z
closed
onnx/onnx
https://github.com/onnx/onnx
1,100
["docs/IR.md"]
Missing/wrong documentation for opset_import/OperatorSetId
The OperatorSetId is not defined in https://github.com/onnx/onnx/blob/master/docs/IR.md > opset_import | OperatorSetId | A collection of operator set identifiers made available to the model. An implementation must support all operators in the set or reject the model. `operator set identifiers` may be assumed to be a simple string. In fact it is a `{domain, version}` struct.
https://github.com/onnx/onnx/issues/1100
https://github.com/onnx/onnx/pull/4039
c940fa3fea84948e46603cab2f86467291443beb
5a6628799ee4fa31edc4451886363f6d47095caa
2018-06-08T12:07:17Z
python
2022-02-28T20:43:55Z
closed
onnx/onnx
https://github.com/onnx/onnx
859
[".github/workflows/stale.yml"]
Request for adding Pack/Unpack to onnx (i.e., the tf.stack/tf.unstack ops)
After the discussion with @guschmue, we think adding Pack/Unpack to onnx is necessary for implementing TensorFlow's StridedSlice op. I wish tf.stack/tf.unstack could be added to onnx.
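An illustrative NumPy sketch of the requested semantics (an assumption mirroring tf.stack/tf.unstack, not anything from the issue itself):

```python
import numpy as np

a, b = np.zeros((2, 3)), np.ones((2, 3))
packed = np.stack([a, b], axis=0)                         # tf.stack-like Pack: shape (2, 2, 3)
unpacked = [packed[i] for i in range(packed.shape[0])]    # tf.unstack-like Unpack along axis 0
assert packed.shape == (2, 2, 3)
assert all(np.array_equal(x, y) for x, y in zip(unpacked, [a, b]))
```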
https://github.com/onnx/onnx/issues/859
https://github.com/onnx/onnx/pull/4917
5849e3ed84fad5fbfe18215a0a053f1fcbea47b4
dad6338fc2bf28c353d7c797122d9b63190e8495
2018-04-30T21:59:43Z
python
2023-02-21T14:53:32Z
closed
onnx/onnx
https://github.com/onnx/onnx
708
["cmake/Utils.cmake", "setup.py"]
CMake setup fails on local dev build
See the error from cmake when building using `pip install -e .`. Looking into it now ``` ERROR Failed to remove indentation from: """ from distutils import sysconfig print(sysconfig.get_python_lib(prefix='')) """ Python dedent failed with error code: The system cannot find the file specified CMake Error at cmake/Utils.cmake:31 (message): Python dedent failed with error code: The system cannot find the file specified ```
https://github.com/onnx/onnx/issues/708
https://github.com/onnx/onnx/pull/709
6734224600c78621e516e50a1e17fd1cbab9cb0b
93f0d401612ee76afc2924fee0e14a01e27662f8
2018-04-04T17:27:45Z
python
2018-04-04T20:13:31Z
closed
onnx/onnx
https://github.com/onnx/onnx
518
[".github/workflows/stale.yml"]
torch.onnx
When I execute `python -c "import torch.onnx"`: `bainuo@bainuo-SYS-7048GR-TR:~$ python -c "import torch.onnx" Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: No module named onnx` What should I do?
https://github.com/onnx/onnx/issues/518
https://github.com/onnx/onnx/pull/4917
5849e3ed84fad5fbfe18215a0a053f1fcbea47b4
dad6338fc2bf28c353d7c797122d9b63190e8495
2018-02-07T03:12:30Z
python
2023-02-21T14:53:32Z
closed
onnx/onnx
https://github.com/onnx/onnx
455
[".github/workflows/stale.yml"]
[PR 400 Follow Up] Tune the IR API
Further discussion is needed.
https://github.com/onnx/onnx/issues/455
https://github.com/onnx/onnx/pull/4917
5849e3ed84fad5fbfe18215a0a053f1fcbea47b4
dad6338fc2bf28c353d7c797122d9b63190e8495
2018-01-24T06:43:37Z
python
2023-02-21T14:53:32Z
closed
onnx/onnx
https://github.com/onnx/onnx
405
[".github/workflows/stale.yml"]
Gemm requires (MxK), (KxN) inputs but no reshape is inserted in onnx models.
I'm facing a problem when working with "bvlc_alexnet/model.pb" from [here](https://github.com/onnx/models/tree/master/bvlc_alexnet). It includes a *Gemm* node with 3 inputs: - The output of the previous node: pool_5, whose shape is [1, 256, 6, 6] - A weights tensor present in the initializers, with shape [4096, 9216] - A bias tensor present in the initializers, with shape [4096] The attrs of the node are: - Broadcasting = 1 : it's ok to broadcast the bias tensor. - transB = 1 : it's ok to transpose the weights tensor. In the [doc](https://github.com/onnx/onnx/blob/master/docs/Operators.md#gemm) we can read: > where input tensor A has dimension (M X K) , input tensor B has dimension (K X N), input tensor C and output tensor Y have dimension (M X N) So the tensor B will have shape [9216, 4096] after being transposed. So the input A (the output of the previous node) MUST have the shape [1, 256\*6\*6] = [1, 9216]. This will produce a tensor of shape [1, 4096] which we can add to the bias tensor (input C). According to me, a reshape is missing between the pool_5 node and the Gemm node.
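A NumPy sketch (not from the issue) of the shape arithmetic described above, showing why a Reshape/Flatten to [1, 9216] is needed between pool_5 and the Gemm:

```python
import numpy as np

pool5 = np.random.randn(1, 256, 6, 6).astype(np.float32)  # output of the previous node
B = np.random.randn(4096, 9216).astype(np.float32)        # weights initializer; transB=1 in the model
C = np.random.randn(4096).astype(np.float32)              # bias initializer, broadcast over rows

A = pool5.reshape(1, -1)       # (1, 256*6*6) = (1, 9216) -- the reshape the issue says is missing
Y = A @ B.T + C                # Gemm with transB=1 and broadcast bias -> (1, 4096)
assert Y.shape == (1, 4096)
```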
https://github.com/onnx/onnx/issues/405
https://github.com/onnx/onnx/pull/4917
5849e3ed84fad5fbfe18215a0a053f1fcbea47b4
dad6338fc2bf28c353d7c797122d9b63190e8495
2018-01-09T14:08:26Z
python
2023-02-21T14:53:32Z
closed
onnx/onnx
https://github.com/onnx/onnx
374
["docs/Changelog.md", "docs/Operators.md", "onnx/defs/schema.h", "onnx/defs/tensor/defs.cc", "onnx/defs/tensor/old.cc"]
Concat Axis
Unless my protobuf implementation is up the spout (which is possible) the squeezenet model has no axis parameter in the Concat nodes. There is no default indicated. Squeezenet should be concatting on channels. What does no axis parameter indicate since the docs show no default?
https://github.com/onnx/onnx/issues/374
https://github.com/onnx/onnx/pull/390
938ce339fd3e7dc5eb6a8f6332f77762ff5eaaaf
535c8c1ded151c772051dd8f7a0887752c2d4fd5
2017-12-14T22:42:31Z
python
2018-02-20T22:43:32Z
closed
onnx/onnx
https://github.com/onnx/onnx
263
[".github/workflows/auto_update_doc.yml"]
Remove pip install dependency on protoc
When the ONNX Python package becomes a dependency of a project, then currently the Protocol Buffers compiler becomes a dependency. When installing the ONNX Python package many users get the following error: ``` Could not find "protoc" executable! ``` This is caused by the missing Protocol Buffers compiler. On some systems it's easy to install Protocol Buffers, but on Centos or Ubuntu 14.04 this is not so simple. Packaged versions of Protocol Buffers are older than 2.6.1, so users get these errors: ``` onnx.proto:308:7: Expected "required", "optional", or "repeated". onnx.proto:308:19: Missing field number. onnx.proto:330:3: Expected "required", "optional", or "repeated". onnx.proto:330:15: Missing field number. ``` Would it be possible to remove the dependency on `protoc` from the `pip install` process? Could the PyPI package be distributed with a precompiled form of the proto?
https://github.com/onnx/onnx/issues/263
https://github.com/onnx/onnx/pull/5730
4b7049dae24a8d233a66d2a378aafd7442896bd3
f97f16fbb41bbf3b1a1ce5e063f9d8773ae342b8
2017-11-15T16:43:10Z
python
2023-11-01T19:04:54Z
closed
onnx/onnx
https://github.com/onnx/onnx
91
[".github/workflows/clang_tidy_review.yml", ".github/workflows/clang_tidy_review_post.yml"]
Reference exports (Model zoo)
It may be a good idea to add a set of models serialized in the `onnx.proto` format to the `onnx/examples` directory. This could serve as a reference for people testing ONNX or writing their own import/export functions. What do you think? I would propose starting with a set of very simple models, getting progressively more and more complex: a simple addition (a+b+c), linear regression, logistic regression, MLP and then more advanced models such as AlexNet, VGG, ResNet, etc.
https://github.com/onnx/onnx/issues/91
https://github.com/onnx/onnx/pull/5503
329970e15867fe830377ec43bb562c90f1892481
e90f9f2d051fac6f299ff32fcb218a635dd76615
2017-10-09T10:43:56Z
python
2023-08-14T16:11:18Z
closed
onnx/onnx
https://github.com/onnx/onnx
90
[".github/workflows/clang_tidy_review.yml", ".github/workflows/clang_tidy_review_post.yml"]
Table of contents for operators
It can be sometimes tricky to jump to the definition of an operator like "Pad", where the string "pad" occurs multiple times in Operators.md. A modest usability improvement would be to add a table of contents, so that string search first discovers operator names, before a full text search.
https://github.com/onnx/onnx/issues/90
https://github.com/onnx/onnx/pull/5478
584624997b7e07b3bc80e7b437e04c124399f379
48f28c3eb6ebdb70e010d92ce1f33d987cf56dce
2017-10-08T15:44:56Z
python
2023-08-07T16:27:34Z
closed
onnx/onnx
https://github.com/onnx/onnx
78
["onnx/defs/data_type_utils.cc", "onnx/onnx-ml.proto", "onnx/onnx-ml.proto3", "onnx/onnx.in.proto", "onnx/onnx.proto", "onnx/onnx.proto3"]
Protobuf naming consistency / nested protos
We've been keeping an eye on annoyances when we adjust PyTorch for latest changes in the ONNX proto, and we noticed some for https://github.com/onnx/onnx/pull/51 * Not all protobuf messages have the `Proto` suffix anymore; in particular, the nested protobuf `Dimension` doesn't have the suffix. This makes it look strange when compared with all the other protobuf messages. Can we make this consistent? * Additionally, `TypeProto` defines nested protobufs. I wasn't sure how this was going to look in tooling, but we have discovered what happens with nanopb: you now get extremely ugly protobuf names like `TypeProto_TensorTypeProto`. This is especially ugly because `Type` is redundantly specified twice. Can we just put it at the top level of the protobuf?
https://github.com/onnx/onnx/issues/78
https://github.com/onnx/onnx/pull/215
3a8c7988fa551ad8d21bdf14df8afe3884a7e9fd
211336180e1da62752027a7926ba24b4b7ae5a11
2017-10-05T17:43:24Z
python
2017-11-14T00:49:47Z
closed
onnx/onnx
https://github.com/onnx/onnx
53
[".github/workflows/pages.yml"]
ir_version, publisher_tag, and publisher_version should appear as the initial fields in GraphProto
It's typical to put versioning fields at the beginning of a file/message format to allow implementations to efficiently sniff the version prior to parsing the entire buffer. Doing so would be a breaking change, but it's still quite early and we may have time.
https://github.com/onnx/onnx/issues/53
https://github.com/onnx/onnx/pull/5049
0cb435406195e1921343c675bee47bee60359c5d
06536487cb5f7e863dc11598d4693f6d3a8848a7
2017-09-24T22:11:16Z
python
2023-03-27T21:48:35Z
closed
onnx/onnx
https://github.com/onnx/onnx
28
["requirements-lintrunner.txt"]
ImportError: No module named version
With the latest `__version__` support commits, if you `pip install -e onnx` from source: ``` >>> import onnx Traceback (most recent call last): File "<stdin>", line 1, in <module> File "onnx/__init__.py", line 9, in <module> from .version import version as __version__ ImportError: No module named version ``` I think this is because we are not generating version.py for develop builds. Fix here probably must be applied to onnx-caffe2 as well.
https://github.com/onnx/onnx/issues/28
https://github.com/onnx/onnx/pull/5726
fd699fbb4f69a0d2d12129013145c9f89a8c7eb0
4b7049dae24a8d233a66d2a378aafd7442896bd3
2017-09-15T20:43:34Z
python
2023-11-01T18:40:46Z
closed
onnx/onnx
https://github.com/onnx/onnx
27
["requirements-lintrunner.txt"]
Operator: Softmax
Requested in https://github.com/pytorch/pytorch/issues/2747
https://github.com/onnx/onnx/issues/27
https://github.com/onnx/onnx/pull/5726
fd699fbb4f69a0d2d12129013145c9f89a8c7eb0
4b7049dae24a8d233a66d2a378aafd7442896bd3
2017-09-15T14:14:34Z
python
2023-11-01T18:40:46Z
closed
onnx/onnx
https://github.com/onnx/onnx
26
["requirements-lintrunner.txt"]
Operator: SubConstant/AddConstant
Add/subtract a constant from all elements of a tensor. Note that MulConstant is handled as "Scale" atm.
https://github.com/onnx/onnx/issues/26
https://github.com/onnx/onnx/pull/5332
2843c9e8ccd6757019952a0dad0efcc7b2dcd0c5
42e091c2e628d896043086d173c0dbf321b78ffc
2017-09-15T00:55:04Z
python
2023-06-20T21:32:30Z
closed
apache/dubbo
https://github.com/apache/dubbo
13,492
["dubbo-rpc/dubbo-rpc-rest/src/main/java/org/apache/dubbo/rpc/protocol/rest/extension/resteasy/filter/ResteasyRequestContainerFilterAdapter.java"]
memory leak in rest
I got this error log: ``` 2023-11-23 19:23:26.786 ERROR 27 --- [NettyServerWorker-thread-1] io.netty.util.ResourceLeakDetector : LEAK: ByteBuf.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information. Recent access records: Created at: io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:385) io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187) io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:173) io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:107) org.apache.dubbo.rpc.protocol.rest.extension.resteasy.ResteasyContext.convertHttpRequestToContainerRequestContext(ResteasyContext.java:81) org.apache.dubbo.rpc.protocol.rest.extension.resteasy.filter.ResteasyRequestContainerFilterAdapter.filter(ResteasyRequestContainerFilterAdapter.java:54) org.apache.dubbo.rpc.protocol.rest.handler.NettyHttpHandler.executeFilters(NettyHttpHandler.java:131) org.apache.dubbo.rpc.protocol.rest.handler.NettyHttpHandler.handle(NettyHttpHandler.java:82) org.apache.dubbo.rpc.protocol.rest.netty.RestHttpRequestDecoder.lambda$decode$0(RestHttpRequestDecoder.java:70) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) org.apache.dubbo.common.threadlocal.InternalRunnable.run(InternalRunnable.java:41) java.lang.Thread.run(Thread.java:748) ``` the reason: org.apache.dubbo.rpc.protocol.rest.extension.resteasy.filter.ResteasyRequestContainerFilterAdapter#filter creates a DubboPreMatchContainerRequestContext. DubboPreMatchContainerRequestContext's NettyHttpRequest field contains a ByteBuf. ResteasyRequestContainerFilterAdapter#filter doesn't release the ByteBuf before exit. solution: call containerRequestContext.getHttpRequest().releaseContentBuffer() before exit. like ``` finally { containerRequestContext.getHttpRequest().releaseContentBuffer(); } ```
https://github.com/apache/dubbo/issues/13492
https://github.com/apache/dubbo/pull/13514
75f740df0a80dd869d17b9ceb577c35de356c041
092a1bd619ece992521d469bfc761fb2d9da56d0
2023-12-13T09:34:04Z
java
2023-12-20T13:06:23Z
closed
apache/dubbo
https://github.com/apache/dubbo
13,443
["dubbo-rpc/dubbo-rpc-triple/src/main/java/org/apache/dubbo/rpc/TriRpcStatus.java", "dubbo-rpc/dubbo-rpc-triple/src/test/java/org/apache/dubbo/rpc/TriRpcStatusTest.java"]
Caused by: com.alibaba.fastjson2.JSONException: not support none serializable class org.apache.dubbo.rpc.TriRpcStatus
<!-- If you need to report a security issue please visit https://github.com/apache/dubbo/security/policy --> - [ ] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate. ### Environment * Dubbo version: 3.2.8 * fastjson version: 2.0.41 * Java version: 1.8 ### Steps to reproduce this issue TriRpcStatus does not implement the Serializable interface, so when execution reaches result.setException(status.asException()); the following error is reported: Caused by: com.alibaba.fastjson2.JSONException: not support none serializable class org.apache.dubbo.rpc.TriRpcStatus ![image](https://github.com/apache/dubbo/assets/15622677/285a8f15-0f8d-4956-b2a3-9c97c313666a) Pls. provide [GitHub address] to reproduce this issue. ### Expected Behavior <!-- What do you expect from the above steps?--> ### Actual Behavior <!-- What actually happens? --> If there is an exception, please attach the exception trace: ``` Just put your stack trace here! ```
https://github.com/apache/dubbo/issues/13443
https://github.com/apache/dubbo/pull/13453
58fa12079578b7def49fe6c386c1bcce97f2123a
3e1310bea880add83510855a4506595ca488245e
2023-11-30T09:05:45Z
java
2023-12-04T04:48:27Z
closed
apache/dubbo
https://github.com/apache/dubbo
13,436
["dubbo-rpc/dubbo-rpc-triple/src/main/java/org/apache/dubbo/rpc/protocol/tri/TripleInvoker.java"]
3.2.7 invoking remote service in generic mode causes java.lang.ClassNotFoundException
<!-- If you need to report a security issue please visit https://github.com/apache/dubbo/security/policy --> - [x] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate. ### Environment * Dubbo version: `3.2.7` * Operating System version: `Linux 4.19.91-009.ali4000.alios7.x86_64 #1 SMP Mon Jan 25 10:47:38 CST 2021 x86_64 x86_64 x86_64 GNU/Linux` * Java version: `OpenJDK Runtime Environment (Alibaba 8.4.7) (build 1.8.0_152-b187) OpenJDK 64-Bit Server VM (Alibaba 8.4.7) (build 25.152-b187, mixed mode)` ### Steps to reproduce this issue 1. define generic reference config ```java referenceConfig.setGeneric("true"); referenceConfig.setAsync(false); genericService = referenceConfig.get(); ``` 2. do invocation ```java // argTypes contain remote-defined-only class genericService.$invoke(method, argTypes, args); ``` Pls. provide [GitHub address] to reproduce this issue. ### Expected Behavior Succeed in invoking remote service in generic mode. ### Actual Behavior Failed to invoke remote in generic mode, exception occurred in client side. ```text Caused by: org.apache.dubbo.rpc.StatusRpcException: INTERNAL : Call aborted cause client exception at org.apache.dubbo.rpc.TriRpcStatus.asException(TriRpcStatus.java:214) ~[dubbo-3.2.7.jar:3.2.7] at org.apache.dubbo.rpc.protocol.tri.TripleInvoker.doInvoke(TripleInvoker.java:166) ~[dubbo-3.2.7.jar:3.2.7] at org.apache.dubbo.rpc.protocol.AbstractInvoker.doInvokeAndReturn(AbstractInvoker.java:243) ~[dubbo-3.2.7.jar:3.2.7] at org.apache.dubbo.rpc.protocol.AbstractInvoker.invoke(AbstractInvoker.java:187) ~[dubbo-3.2.7.jar:3.2.7] ... 295 common frames omitted Caused by: java.lang.IllegalStateException: Not found class com.***.***DTO, cause: com.***.***DTO at org.apache.dubbo.common.utils.ReflectUtils.forName(ReflectUtils.java:685) ~[dubbo-3.2.7.jar:3.2.7] at org.apache.dubbo.rpc.support.RpcUtils.getParameterTypes(RpcUtils.java:169) ~[dubbo-3.2.7.jar:3.2.7] at org.apache.dubbo.rpc.protocol.tri.TripleInvoker.invokeUnary(TripleInvoker.java:251) ~[dubbo-3.2.7.jar:3.2.7] at org.apache.dubbo.rpc.protocol.tri.TripleInvoker.doInvoke(TripleInvoker.java:150) ~[dubbo-3.2.7.jar:3.2.7] ... 297 common frames omitted Caused by: java.lang.ClassNotFoundException: com.***.***DTO at org.springframework.boot.web.embedded.tomcat.TomcatEmbeddedWebappClassLoader.loadClass(TomcatEmbeddedWebappClassLoader.java:72) ~[spring-boot-2.5.12.jar:2.5.12] at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1215) ~[tomcat-embed-core-9.0.60.jar:9.0.60] at java.lang.Class.forName0(Native Method) ~[na:1.8.0_152] at java.lang.Class.forName(Class.java:348) ~[na:1.8.0_152] at org.apache.dubbo.common.utils.ReflectUtils.name2class(ReflectUtils.java:786) ~[dubbo-3.2.7.jar:3.2.7] at org.apache.dubbo.common.utils.ReflectUtils.name2class(ReflectUtils.java:706) ~[dubbo-3.2.7.jar:3.2.7] at org.apache.dubbo.common.utils.ReflectUtils.forName(ReflectUtils.java:683) ~[dubbo-3.2.7.jar:3.2.7] ... 300 common frames omitted ``` ### Cause `org.apache.dubbo.rpc.protocol.tri.TripleInvoker#invokeUnary` When method descriptor is generic, it tries to find remote-defined-only class in client side, which causes the `ClassNotFoundException`. 
```java if (methodDescriptor instanceof StubMethodDescriptor) { pureArgument = invocation.getArguments()[0]; } else { if (methodDescriptor.isGeneric()) { Object[] args = new Object[3]; args[0] = RpcUtils.getMethodName(invocation); // here args[1] = Arrays.stream(RpcUtils.getParameterTypes(invocation)).map(Class::getName).toArray(String[]::new); args[2] = RpcUtils.getArguments(invocation); pureArgument = args; } else { pureArgument = invocation.getArguments(); } } ```
https://github.com/apache/dubbo/issues/13436
https://github.com/apache/dubbo/pull/13442
841ddb6e1f82d01f41a2c2a009de15a818c472be
d552cfca51a07765b080a84b8bddd044a694bc98
2023-11-29T09:28:06Z
java
2023-12-01T02:25:03Z
closed
apache/dubbo
https://github.com/apache/dubbo
13,363
["dubbo-rpc/dubbo-rpc-triple/src/main/java/org/apache/dubbo/rpc/protocol/tri/ReflectionPackableMethod.java", "dubbo-rpc/dubbo-rpc-triple/src/test/java/org/apache/dubbo/rpc/protocol/tri/DataWrapper.java", "dubbo-rpc/dubbo-rpc-triple/src/test/java/org/apache/dubbo/rpc/protocol/tri/DescriptorService.java", "dubbo-rpc/dubbo-rpc-triple/src/test/java/org/apache/dubbo/rpc/protocol/tri/ReflectionPackableMethodTest.java"]
INTERNAL exception occurred when sending generic parameter in streaming
* Dubbo version: 3.2.5 * Operating System version: windows11 * Java version: 17 If the generic parameter of a StreamObserver in streaming communication mode is itself generic, the error "INTERNAL : Call aborted cause client exception" is reported at runtime. See the code screenshots below for details. ![1700016850144](https://github.com/apache/dubbo/assets/33255974/f9c5dd27-e718-4080-ac75-c5d88ee0e79e) ![1700016959488](https://github.com/apache/dubbo/assets/33255974/6475ea78-1006-4c39-9f34-cf9bc465c1f2)
https://github.com/apache/dubbo/issues/13363
https://github.com/apache/dubbo/pull/13446
0d879666b4fc6163f42fcec2838633bb9f99a482
5554706256c9daf550f36858e5bc3d17f3ec6143
2023-11-15T02:56:19Z
java
2023-12-06T02:57:09Z
closed
apache/dubbo
https://github.com/apache/dubbo
13,222
["dubbo-rpc/dubbo-rpc-triple/src/main/java/org/apache/dubbo/rpc/protocol/tri/ReflectionPackableMethod.java", "dubbo-rpc/dubbo-rpc-triple/src/main/java/org/apache/dubbo/rpc/protocol/tri/TripleInvoker.java", "dubbo-rpc/dubbo-rpc-triple/src/main/java/org/apache/dubbo/rpc/protocol/tri/TripleProtocol.java", "dubbo-rpc/dubbo-rpc-triple/src/main/java/org/apache/dubbo/rpc/protocol/tri/call/ReflectionAbstractServerCall.java", "dubbo-rpc/dubbo-rpc-triple/src/test/java/org/apache/dubbo/rpc/protocol/tri/call/ReflectionServerCallTest.java"]
PackableMethod shared across services causes serialization error
<!-- If you need to report a security issue please visit https://github.com/apache/dubbo/security/policy --> - [ ] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate. ### Steps to reproduce this issue In `org.apache.dubbo.rpc.protocol.tri.ReflectionPackableMethod#init`: ```java public static ReflectionPackableMethod init(MethodDescriptor methodDescriptor, URL url) { Object stored = methodDescriptor.getAttribute(METHOD_ATTR_PACK); if (stored != null) { return (ReflectionPackableMethod) stored; } final String serializeName = UrlUtils.serializationOrDefault(url); final Collection<String> allSerialize = UrlUtils.allSerializations(url); ReflectionPackableMethod reflectionPackableMethod = new ReflectionPackableMethod( methodDescriptor, url, serializeName, allSerialize); methodDescriptor.addAttribute(METHOD_ATTR_PACK, reflectionPackableMethod); return reflectionPackableMethod; } ``` Triple caches the `ReflectionPackableMethod` in the `MethodDescriptor`. But the `MethodDescriptor` is shared across different services. This can cause: 1. Consumer A expects to use serialization X and consumer B expects to use serialization Y 2. Consumer A initializes first and writes its own serialization (X) into the `MethodDescriptor` 3. Consumer B will then directly use X due to the cache ### Proper solution Cache `ReflectionPackableMethod` in `TripleInvoker` instead of `MethodDescriptor`. ### Note This is very easy to trigger in the generic invoke scenario.
https://github.com/apache/dubbo/issues/13222
https://github.com/apache/dubbo/pull/13224
02e486a1dccef10a03b81575fb3b7158d3cd9299
f995ef506c2e3306f9cbcd4bc33ad169a3da0b7e
2023-10-17T09:49:21Z
java
2023-10-18T03:39:33Z
closed
apache/dubbo
https://github.com/apache/dubbo
13,142
["dubbo-compiler/src/main/resources/Dubbo3InterfaceStub.mustache", "dubbo-compiler/src/main/resources/Dubbo3TripleInterfaceStub.mustache", "dubbo-compiler/src/main/resources/Dubbo3TripleStub.mustache"]
With the triple protocol, overloaded provider methods cause errors on the consumer side and an NPE on the provider side
### Environment * Dubbo version: 3.2 * Operating System version: windows 10 * Java version: 1.8 ### Steps to reproduce this issue 1. Provide overloaded methods on the provider side as shown below ![1695871853510](https://github.com/apache/dubbo/assets/14833956/b5a8a389-cb75-49e7-8150-63147065e75f) 2. Call both overloaded methods from the consumer side 3. The provider reports an NPE ![1695871935061](https://github.com/apache/dubbo/assets/14833956/44dde66c-2a5d-4904-afb5-9b709c74b059) ### Expected Behavior The calls are expected to succeed normally ### Actual Behavior consumer ![image](https://github.com/apache/dubbo/assets/14833956/740c67c1-fde1-4106-b066-0fa257804441) provider ![1695871991526](https://github.com/apache/dubbo/assets/14833956/537c0413-0e82-467e-9838-fe33fe45dbed) ``` Just put your stack trace here! ```
https://github.com/apache/dubbo/issues/13142
https://github.com/apache/dubbo/pull/13385
883195aeb9c92fb7e7566846e8403906f9ae027e
4d5d9d8afc5d6d05010b8915ec72bdf9418ef02b
2023-09-28T03:33:22Z
java
2023-11-23T04:49:16Z
closed
apache/dubbo
https://github.com/apache/dubbo
13,135
["dubbo-cluster/pom.xml", "dubbo-metrics/dubbo-metrics-api/src/main/java/org/apache/dubbo/metrics/model/MetricsSupport.java"]
dubbo 3.2 branch source code: dubbo-cluster build fails
### Environment * Dubbo version: 3.2 * Operating System version: mac 10.15 * Java version: JDK 8, JDK 15, or JDK 17 ### Steps to reproduce this issue 1. mvn clean install 2. the build fails with an error in the dubbo-cluster module ### Expected Behavior <!-- What do you expect from the above steps?--> ### Actual Behavior <!-- What actually happens? --> If there is an exception, please attach the exception trace: error log ``` Please refer to /Users/lucas/github/dubbo/dubbo-cluster/target/surefire-reports for the individual test results. Please refer to dump files (if any exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream. ```
https://github.com/apache/dubbo/issues/13135
https://github.com/apache/dubbo/pull/13139
c4120feb8fdeb7005168b01b694417027ee84584
47821429807654b14010a672d20eca3d00243ffe
2023-09-27T02:09:38Z
java
2023-10-13T09:43:30Z
closed
apache/dubbo
https://github.com/apache/dubbo
13,133
["dubbo-config/dubbo-config-api/src/main/java/org/apache/dubbo/config/bootstrap/builders/InternalServiceConfigBuilder.java"]
MetadataService should not be registered to registry
![image](https://github.com/apache/dubbo/assets/9292748/47acd7f8-08c2-480c-a6ff-0cbce847fb56) Steps to reproduce: - Configure `-Ddubbo.registry.address=xxx`
https://github.com/apache/dubbo/issues/13133
https://github.com/apache/dubbo/pull/13134
bd5d158d1bb4295837b4b79e08f6ae5230b315c1
414608245329fbafd00a68dd1d53a0cd97ad3b6f
2023-09-26T09:21:01Z
java
2023-09-26T10:55:08Z
closed
apache/dubbo
https://github.com/apache/dubbo
13,078
["dubbo-remoting/dubbo-remoting-api/src/main/java/org/apache/dubbo/remoting/Constants.java", "dubbo-remoting/dubbo-remoting-netty4/src/main/java/org/apache/dubbo/remoting/transport/netty4/NettyConnectionClient.java", "dubbo-rpc/dubbo-rpc-triple/src/main/java/org/apache/dubbo/rpc/protocol/tri/TripleHttp2Protocol.java", "dubbo-rpc/dubbo-rpc-triple/src/main/java/org/apache/dubbo/rpc/protocol/tri/transport/TripleGoAwayHandler.java"]
Http2NoMoreStreamIdsException with the tri protocol
* Dubbo version: 3.2.4, using the tri protocol. The exception appears after the service has been running for several days; once it occurs the errors keep being reported and the service cannot recover by itself, so currently the only workaround is an automatic or manual restart to reconnect. The problem may be that the (unary mode) stream is not being reused; our initial suspicion is a unary stream reuse issue.
https://github.com/apache/dubbo/issues/13078
https://github.com/apache/dubbo/pull/13228
3f07e683a184670ebfc0ef48c268836931b39a75
02e486a1dccef10a03b81575fb3b7158d3cd9299
2023-09-19T03:12:25Z
java
2023-10-18T03:38:57Z
closed
apache/dubbo
https://github.com/apache/dubbo
13,042
["dubbo-common/src/main/java/org/apache/dubbo/common/threadpool/support/AbortPolicyWithReport.java"]
AbortPolicyWithReport performs the thread dump multiple times under high concurrency
<!-- If you need to report a security issue please visit https://github.com/apache/dubbo/security/policy --> - [x] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate. ### Environment * Dubbo version: 2.7.x, 3.x * Operating System version: ALL * Java version: ALL ### Steps to reproduce this issue Occasionally reproducible under highly concurrent requests ### Expected Behavior A thread dump should only be taken again after the configured interval, e.g. 10 minutes ### Actual Behavior Below are the log records from the server; you can see that the thread dumps taken when the Dubbo thread pool is exhausted do not follow the expected 10-minute interval ``` -rw-rw-r-- 1 root root 695929 Sep 1 19:30 Dubbo_JStack.log.2023-09-01_19:30:39 -rw-rw-r-- 1 root root 1068779 Sep 3 09:00 Dubbo_JStack.log.2023-09-03_09:00:35 -rw-rw-r-- 1 root root 2326331 Sep 3 23:53 Dubbo_JStack.log.2023-09-03_23:53:21 -rw-rw-r-- 1 root root 1081798 Sep 5 09:00 Dubbo_JStack.log.2023-09-05_09:00:35 -rw-rw-r-- 1 root root 1652182 Sep 5 09:00 Dubbo_JStack.log.2023-09-05_09:00:38 -rw-rw-r-- 1 root root 956693 Sep 9 19:30 Dubbo_JStack.log.2023-09-09_19:30:34 -rw-rw-r-- 1 root root 1515179 Sep 9 19:30 Dubbo_JStack.log.2023-09-09_19:30:37 -rw-rw-r-- 1 root root 981556 Sep 10 09:00 Dubbo_JStack.log.2023-09-10_09:00:38 -rw-rw-r-- 1 root root 1443054 Sep 10 09:00 Dubbo_JStack.log.2023-09-10_09:00:42 -rw-rw-r-- 1 root root 999105 Sep 11 09:00 Dubbo_JStack.log.2023-09-11_09:00:38 -rw-rw-r-- 1 root root 1500287 Sep 11 09:00 Dubbo_JStack.log.2023-09-11_09:00:43 ``` ### Cause analysis Acquiring the lock in the following order can produce the behavior above; thread switching combined with setting lastPrintTime in the wrong order causes this problem ``` thread1: 1. passes the 10-minute interval check 3. acquires the lock 4. thread dump (STW) 5. releases the lock 7. sets lastPrintTime thread2: 2. passes the 10-minute interval check 6. acquires the lock 8. thread dump (STW) 9. releases the lock ``` The source code is as follows https://github.com/giraffe-tree/dubbo/blob/3.2/dubbo-common/src/main/java/org/apache/dubbo/common/threadpool/support/AbortPolicyWithReport.java ```java //dump every 10 minutes if (now - lastPrintTime < TEN_MINUTES_MILLS) { return; } if (!guard.tryAcquire()) { return; } ExecutorService pool = Executors.newSingleThreadExecutor(); pool.execute(() -> { // ... code omitted try (FileOutputStream jStackStream = new FileOutputStream( new File(dumpPath, "Dubbo_JStack.log" + "." + dateStr))) { jstack(jStackStream); } catch (Exception t) { logger.error(COMMON_UNEXPECTED_CREATE_DUMP, "", "", "dump jStack error", t); } finally { guard.release(); } lastPrintTime = System.currentTimeMillis(); }); // must shut down the thread pool, otherwise it will lead to OOM pool.shutdown(); ``` ### Proposed fix 1. After acquiring the lock, check the time condition again 2. Set lastPrintTime before releasing the lock ```java if (now - lastPrintTime < TEN_MINUTES_MILLS) { return; } if (!guard.tryAcquire()) { return; } // 1. check the time condition again here if (System.currentTimeMillis() - lastPrintTime < TEN_MINUTES_MILLS) { return; } ExecutorService pool = Executors.newSingleThreadExecutor(); pool.execute(() -> { // ... code omitted try (FileOutputStream jStackStream = new FileOutputStream( new File(dumpPath, "Dubbo_JStack.log" + "." + dateStr))) { jstack(jStackStream); } catch (Exception t) { logger.error(COMMON_UNEXPECTED_CREATE_DUMP, "", "", "dump jStack error", t); } finally { // 2. set lastPrintTime before releasing the lock lastPrintTime = System.currentTimeMillis(); guard.release(); } }); // must shut down the thread pool, otherwise it will lead to OOM pool.shutdown(); ```
https://github.com/apache/dubbo/issues/13042
https://github.com/apache/dubbo/pull/13043
e887bab83b3582351a92b8a893610a46545517b8
d9db031469c27eb2e1c7438cf92e217b7f6817c4
2023-09-11T15:07:48Z
java
2023-09-13T01:54:48Z
closed
apache/dubbo
https://github.com/apache/dubbo
13,030
["dubbo-filter/dubbo-filter-validation/src/main/java/org/apache/dubbo/validation/filter/ValidationFilter.java", "dubbo-filter/dubbo-filter-validation/src/main/java/org/apache/dubbo/validation/support/jvalidation/JValidator.java", "dubbo-filter/dubbo-filter-validation/src/main/java/org/apache/dubbo/validation/support/jvalidation/JValidatorNew.java", "dubbo-filter/dubbo-filter-validation/src/test/java/org/apache/dubbo/validation/support/jvalidation/JValidatorTest.java", "dubbo-filter/dubbo-filter-validation/src/test/java/org/apache/dubbo/validation/support/jvalidation/mock/JValidatorTestTarget.java"]
Support ValidationFilter to display parameter names when validation fails
<!-- If you need to report a security issue please visit https://github.com/apache/dubbo/security/policy --> - [x] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate. - [x] I have searched the [release notes](https://github.com/apache/dubbo/releases) of this repository and believe that this is not a duplicate. ## Describe the feature <!-- Please also discuss possible business value -->
https://github.com/apache/dubbo/issues/13030
https://github.com/apache/dubbo/pull/13029
d9db031469c27eb2e1c7438cf92e217b7f6817c4
8a509e9601fab4af278859c9d947fca8f5f0fa27
2023-09-08T19:16:08Z
java
2023-09-13T09:12:25Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,945
["dubbo-remoting/dubbo-remoting-netty4/src/main/java/org/apache/dubbo/remoting/transport/netty4/NettyConnectionClient.java", "dubbo-rpc/dubbo-rpc-triple/src/main/java/org/apache/dubbo/rpc/protocol/tri/TripleHttp2Protocol.java", "dubbo-rpc/dubbo-rpc-triple/src/main/java/org/apache/dubbo/rpc/protocol/tri/TriplePingPongHandler.java"]
When the provider suddenly loses network or power, the client cannot fail over
Dubbo version 3.2.4. Test procedure: the provider's network is shut off, and the node does not disappear from the Nacos platform within 15 minutes, so the consumer can still obtain the invoker. If the client could determine that the provider's invoker is no longer connectable, it would remove that invoker from the global invoker list and perform failover. However, after the provider suddenly loses its network, the Dubbo consumer still judges the connection to be alive, so it cannot remove the invoker and cannot fail over. Question: when the provider suddenly loses power or network, is it because the TCP connection cannot be closed normally that the consumer fails to detect the dead connection? <img width="916" alt="image" src="https://github.com/apache/dubbo/assets/139483558/38486a89-695a-4792-83a6-51f783f5abf2">
https://github.com/apache/dubbo/issues/12945
https://github.com/apache/dubbo/pull/12955
a4ca07087e0bd4c3358ec7f095b868cc707dda32
b84c1386bba2ac306a002b1a283cdb952b456047
2023-08-22T10:09:33Z
java
2023-10-11T03:17:41Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,927
["dubbo-rpc/dubbo-rpc-injvm/src/main/java/org/apache/dubbo/rpc/protocol/injvm/InjvmInvoker.java"]
A project generated by the scaffold reports an error at startup
<!-- If you need to report a security issue please visit https://github.com/apache/dubbo/security/policy --> - [ ] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate. ### Environment * Dubbo version: 3.2.4 * Operating System version: windows 11 * Java version: 17.0.7 使用dubbo 手脚架生成的项目没有改任何代码启动报错 java.lang.reflect.InaccessibleObjectException: Unable to make field final int java.math.BigInteger.signum accessible: module java.base does not "opens java.math" to unnamed module @73a1e9a9 at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354) at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297) at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:178) at java.base/java.lang.reflect.Field.setAccessible(Field.java:172) at com.alibaba.com.caucho.hessian.io.JavaDeserializer.getFieldMap(JavaDeserializer.java:340) at com.alibaba.com.caucho.hessian.io.JavaDeserializer.<init>(JavaDeserializer.java:80) at com.alibaba.com.caucho.hessian.io.BigIntegerDeserializer.<init>(BigIntegerDeserializer.java:25) at com.alibaba.com.caucho.hessian.io.SerializerFactory.<clinit>(SerializerFactory.java:142) at org.apache.dubbo.common.serialize.hessian2.Hessian2FactoryManager.createDefaultSerializerFactory(Hessian2FactoryManager.java:81) at org.apache.dubbo.common.serialize.hessian2.Hessian2FactoryManager.createSerializerFactory(Hessian2FactoryManager.java:77) at org.apache.dubbo.common.serialize.hessian2.Hessian2FactoryManager.getSerializerFactory(Hessian2FactoryManager.java:62) at org.apache.dubbo.common.serialize.hessian2.Hessian2ObjectOutput.<init>(Hessian2ObjectOutput.java:44) at org.apache.dubbo.common.serialize.hessian2.Hessian2Serialization.serialize(Hessian2Serialization.java:57) at org.apache.dubbo.common.serialize.DefaultSerializationExceptionWrapper.serialize(DefaultSerializationExceptionWrapper.java:51) at org.apache.dubbo.rpc.protocol.injvm.DefaultParamDeepCopyUtil.copy(DefaultParamDeepCopyUtil.java:45) at org.apache.dubbo.rpc.protocol.injvm.InjvmInvoker.recreateInvocation(InjvmInvoker.java:245) at org.apache.dubbo.rpc.protocol.injvm.InjvmInvoker.doInvoke(InjvmInvoker.java:119) at org.apache.dubbo.rpc.protocol.AbstractInvoker.doInvokeAndReturn(AbstractInvoker.java:243) at org.apache.dubbo.rpc.protocol.AbstractInvoker.invoke(AbstractInvoker.java:187) at org.apache.dubbo.rpc.listener.ListenerInvokerWrapper.invoke(ListenerInvokerWrapper.java:71) at org.apache.dubbo.metrics.filter.MetricsFilter.invoke(MetricsFilter.java:63) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:334) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CallbackRegistrationInvoker.invoke(FilterChainBuilder.java:196) at org.apache.dubbo.rpc.protocol.ReferenceCountInvokerWrapper.invoke(ReferenceCountInvokerWrapper.java:78) at org.apache.dubbo.rpc.cluster.support.AbstractClusterInvoker.invokeWithContext(AbstractClusterInvoker.java:383) at org.apache.dubbo.rpc.cluster.support.FailoverClusterInvoker.doInvoke(FailoverClusterInvoker.java:80) at org.apache.dubbo.rpc.cluster.support.AbstractClusterInvoker.invoke(AbstractClusterInvoker.java:344) at org.apache.dubbo.rpc.cluster.router.RouterSnapshotFilter.invoke(RouterSnapshotFilter.java:46) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:334) at 
org.apache.dubbo.monitor.support.MonitorFilter.invoke(MonitorFilter.java:108) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:334) at org.apache.dubbo.rpc.cluster.filter.support.MetricsClusterFilter.invoke(MetricsClusterFilter.java:50) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:334) at org.apache.dubbo.rpc.protocol.dubbo.filter.FutureFilter.invoke(FutureFilter.java:52) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:334) at org.apache.dubbo.spring.security.filter.ContextHolderParametersSelectedTransferFilter.invoke(ContextHolderParametersSelectedTransferFilter.java:41) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:334) at org.apache.dubbo.rpc.cluster.filter.support.ConsumerClassLoaderFilter.invoke(ConsumerClassLoaderFilter.java:40) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:334) at org.apache.dubbo.rpc.cluster.filter.support.ConsumerContextFilter.invoke(ConsumerContextFilter.java:118) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:334) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CallbackRegistrationInvoker.invoke(FilterChainBuilder.java:196) at org.apache.dubbo.rpc.cluster.support.wrapper.AbstractCluster$ClusterFilterInvoker.invoke(AbstractCluster.java:91) at org.apache.dubbo.rpc.cluster.support.wrapper.MockClusterInvoker.invoke(MockClusterInvoker.java:103) at org.apache.dubbo.rpc.cluster.support.wrapper.ScopeClusterInvoker.invoke(ScopeClusterInvoker.java:163) at org.apache.dubbo.rpc.proxy.InvocationUtil.invoke(InvocationUtil.java:57) at org.apache.dubbo.rpc.proxy.InvokerInvocationHandler.invoke(InvokerInvocationHandler.java:75) at cn.charlab.dubbotest.dubbo.api.DemoServiceDubboProxy0.sayHello(DemoServiceDubboProxy0.java) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at org.apache.dubbo.config.spring.util.LazyTargetInvocationHandler.invoke(LazyTargetInvocationHandler.java:58) at cn.charlab.dubbotest.dubbo.api.DemoServiceDubboProxy0.sayHello(DemoServiceDubboProxy0.java) at cn.charlab.dubbotest.dubbo.consumer.Consumer.run(Consumer.java:18) at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:769) at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:753) at org.springframework.boot.SpringApplication.run(SpringApplication.java:317) at org.springframework.boot.SpringApplication.run(SpringApplication.java:1304) at org.springframework.boot.SpringApplication.run(SpringApplication.java:1293) at cn.charlab.dubbotest.DubbotestApplication.main(DubbotestApplication.java:12)
https://github.com/apache/dubbo/issues/12927
https://github.com/apache/dubbo/pull/12938
0c6b1ee17ada37b1363421f29cc6275dc27d716e
c2352df095c86b50496eea7d31d569fe02f8ebac
2023-08-21T05:56:58Z
java
2023-08-23T02:19:10Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,916
["dubbo-remoting/dubbo-remoting-http/src/main/java/org/apache/dubbo/remoting/http/restclient/HttpClientRestClient.java"]
3.2.5: NPE when the rest protocol uses apache-http-client
<!-- If you need to report a security issue please visit https://github.com/apache/dubbo/security/policy --> - [ ] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate. ### Environment * Dubbo version: 3.2.5 * Operating System version: MAC OS * Java version: 17 ### Steps to reproduce this issue 1. Define a rest interface and configure client=apache-http-client ``` ** * File management service interface * * @author chaoxin.lu * @since 2018/6/28 */ @RequestMapping("/oss") public interface FileSvc { /** * Get a file * * @param bucket * @param key * @return */ @GetMapping @ResponseBody R<FileEntry> getObject(@RequestParam("bucket") String bucket, @RequestParam("key") String key); } ``` ``` dubbo: protocol: id: ${DUBBO_PROTOCOL:rest} name: ${DUBBO_PROTOCOL:rest} client: ${DUBBO_CLIENT:apache-http-client} port: ${DUBBO_PORT:20895} server: ${DUBBO_PROTOCOL_SERVER:netty} ``` 3. The service consumer is as follows ![image](https://github.com/apache/dubbo/assets/13815910/efbe78a3-b17e-4a2e-9dcb-3087021c368e) Pls. provide [GitHub address] to reproduce this issue. ### Expected Behavior <!-- What do you expect from the above steps?--> ### Actual Behavior <!-- What actually happens? --> If there is an exception, please attach the exception trace: ``` Caused by: java.lang.NullPointerException: Cannot invoke "org.apache.http.Header.getValue()" because the return value of "org.apache.http.client.methods.CloseableHttpResponse.getFirstHeader(String)" is null at org.apache.dubbo.remoting.http.restclient.HttpClientRestClient$1.getContentType(HttpClientRestClient.java:96) at org.apache.dubbo.rpc.protocol.rest.RestInvoker.lambda$doInvoke$0(RestInvoker.java:105) at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) at java.base/java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:887) at java.base/java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2325) at org.apache.dubbo.rpc.protocol.rest.RestInvoker.doInvoke(RestInvoker.java:87) at org.apache.dubbo.rpc.protocol.AbstractInvoker.doInvokeAndReturn(AbstractInvoker.java:243) at org.apache.dubbo.rpc.protocol.AbstractInvoker.invoke(AbstractInvoker.java:187) ... 97 more ``` The NPE occurs at the following point ![image](https://github.com/apache/dubbo/assets/13815910/a55a548a-23e6-4f5f-8a2e-b9ddb54ff17f) The problem does not occur with the default okhttp client ![image](https://github.com/apache/dubbo/assets/13815910/22640e32-2bee-4ff5-a296-dbdf1077e1f7)
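The stack trace shows that getFirstHeader("Content-Type") legitimately returns null when the provider sends no Content-Type header. A minimal, hedged sketch of the kind of null guard that avoids this NPE follows; the helper name and the empty-string fallback are illustrative assumptions, not the exact change made in the linked PR.

```java
import org.apache.http.Header;
import org.apache.http.client.methods.CloseableHttpResponse;

// Illustrative helper: read Content-Type defensively so a missing header
// yields an empty string instead of a NullPointerException.
final class ContentTypeReader {
    static String getContentType(CloseableHttpResponse response) {
        Header header = response.getFirstHeader("Content-Type");
        // getFirstHeader returns null when the header is absent, so guard
        // before calling getValue().
        return header == null ? "" : header.getValue();
    }
}
```

For a response with no Content-Type header, ContentTypeReader.getContentType(response) returns "" instead of throwing.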
https://github.com/apache/dubbo/issues/12916
https://github.com/apache/dubbo/pull/12984
cdd7a405641c994a55d85690b87291867bee6820
35746aaf5a3e6eab9181fedb37fd8cc00793469c
2023-08-16T11:20:26Z
java
2023-09-05T03:22:21Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,779
["dubbo-spring-boot/dubbo-spring-boot-compatible/autoconfigure/src/main/java/org/apache/dubbo/spring/boot/env/DubboDefaultPropertiesEnvironmentPostProcessor.java", "dubbo-spring-boot/dubbo-spring-boot-compatible/autoconfigure/src/test/java/org/apache/dubbo/spring/boot/env/DubboDefaultPropertiesEnvironmentPostProcessorTest.java"]
Disabling QOS via "dubbo.application.qos-enable=false" in an external configuration file does not take effect
<!-- If you need to report a security issue please visit https://github.com/apache/dubbo/security/policy --> - [ ] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate. ### Environment * Dubbo version: 3.2.4 * Operating System version: win10 * Java version: 17 ### Steps to reproduce this issue 1. Start the dubbo service with the JVM argument -Ddubbo.properties.file=/data/dubbo/dubbo.properties 2. Configure dubbo.application.qos-enable=false in /data/dubbo/dubbo.properties 3. Check whether the QOS port is open (it is open) 4. Inspect the qos-enable property via the monitoring endpoint http://localhost:9002/actuator/dubbo/properties (it is still true) ``` { "dubbo.application.name": "hp-demo-infra-service", "dubbo.application.organization": "HP-Socket", "dubbo.application.owner": "Kingfisher", "dubbo.application.qos-enable": "true", "dubbo.application.qos-port": "7002", "dubbo.application.version": "0.0.1-SNAPSHOT", "dubbo.config.multiple": "true", "dubbo.properties.file": "/data/dubbo/dubbo.properties", "dubbo.protocol.name": "dubbo", "dubbo.protocol.port": "6002", "dubbo.registry.file": "/data/dubbo/.cache/dubbo-registry_hp-demo-infra-service.properties", "dubbo.resolve.file": "/data/dubbo/dubbo-resolve.properties", "dubbo.scan.base-packages": "com.github.hpsocket.demo" } ``` In contrast, the configuration does take effect when set via the JVM argument "-Ddubbo.application.qos-enable=false" or when "dubbo.application.qos-enable: false" is defined in application.yml.
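To narrow down which property source wins, a small diagnostic bean can print the value Spring actually resolves at runtime. This is a hypothetical troubleshooting probe (the class name is invented), not part of the fix in the linked PR.

```java
import org.springframework.boot.CommandLineRunner;
import org.springframework.core.env.Environment;
import org.springframework.stereotype.Component;

// Hypothetical diagnostic bean: prints the value Spring resolves for
// qos-enable, which helps confirm whether the external dubbo.properties
// file is being overridden by another property source.
@Component
public class QosEnableProbe implements CommandLineRunner {
    private final Environment environment;

    public QosEnableProbe(Environment environment) {
        this.environment = environment;
    }

    @Override
    public void run(String... args) {
        System.out.println("dubbo.application.qos-enable = "
                + environment.getProperty("dubbo.application.qos-enable"));
    }
}
```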
https://github.com/apache/dubbo/issues/12779
https://github.com/apache/dubbo/pull/12861
b87746b19fb99b60b3e6973b3b50e3daca04eb27
7290ae59bc69577b5b902dc0f6e161eb4c74db6b
2023-07-24T02:38:58Z
java
2023-08-16T13:01:28Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,773
["dubbo-common/src/main/java/org/apache/dubbo/common/constants/CommonConstants.java", "dubbo-monitor/dubbo-monitor-api/src/main/java/org/apache/dubbo/monitor/support/CallbackConsumerContextFilter.java", "dubbo-monitor/dubbo-monitor-api/src/main/resources/META-INF/dubbo/internal/org.apache.dubbo.rpc.Filter", "dubbo-rpc/dubbo-rpc-dubbo/src/main/java/org/apache/dubbo/rpc/protocol/dubbo/CallbackServiceCodec.java"]
Callback mode rpc context loss, which also affects link tracking
<!-- If you need to report a security issue please visit https://github.com/apache/dubbo/security/policy --> - [x] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate. ### Environment * Dubbo version: 3.1.x * Operating System version: mac os\Linux * Java version: 1.8 ### Steps to reproduce this issue 1. Server ```java public interface DemoApi { String methodA(); String testCallbackMethod(String name, ApiCallback callback); interface ApiCallback { String callbackMethod(); } } @DubboService(methods = @Method(name="testCallbackMethod", arguments = {@Argument(index = 1, type = "DemoApi$ApiCallback", callback = true)})) public class DemoProvider implements DemoApi { @Override public String methodA() { return "ok"; } @Override public String testCallbackMethod(String name, ApiCallback callback) { String s = callback.callbackMethod(); return "ok"; } } ``` 2. Consumer ```java @RestController public class TestController{ @DubboReference DemoApi demo; private DemoApi.ApiCallback callback = new DemoApi.ApiCallback() { @Override public String callbackMethod() { //rpc context has been lost System.out.println("callback:" + RpcContext.getContext().getAttachment("test")); //This is just a simple simulation problem, in fact, it may call other services return demo.methodA(); } }; @GetMapping("/testDubboCallback") public ResponseEntity<String> testDubboCallback() { RpcContext.getContext().setAttachment("test", "abc"); return ResponseEntity.ok(demo.testCallbackMethod("param" ,callback)); } } ``` Pls. provide [GitHub address] to reproduce this issue. ### Expected Behavior <!-- What do you expect from the above steps?--> RpcContext passed normally, sky tracking is normal. ### Actual Behavior <!-- What actually happens? --> rpc context loss, the tracking has been broken. If there is an exception, please attach the exception trace: ``` Just put your stack trace here! ```
https://github.com/apache/dubbo/issues/12773
https://github.com/apache/dubbo/pull/12866
bac06929f21e42ba8fc574f3e829341432d571e2
78a4dbad6e5ed195acebc4d2eb18588c79563367
2023-07-21T07:04:20Z
java
2023-08-09T08:05:27Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,765
["dubbo-config/dubbo-config-spring/src/main/java/org/apache/dubbo/config/spring/beans/factory/annotation/ServiceAnnotationPostProcessor.java"]
Question about the log warning for the @Service annotation
<!-- If you need to report a security issue please visit https://github.com/apache/dubbo/security/policy --> - [ ] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate. ## Ask your question here ![6MJU8B7(S{ID~1IHN}U(@ W](https://github.com/apache/dubbo/assets/31852897/c424399e-b890-427d-acd0-e8f839a0f57e) We now use the @DubboService annotation everywhere, yet every startup still logs a warning that no classes annotated with @Service were found by the scan. Isn't that a bit unnecessary?
https://github.com/apache/dubbo/issues/12765
https://github.com/apache/dubbo/pull/12981
8595711e99f9102eb1055a80a4698e118c82cfb7
cdd7a405641c994a55d85690b87291867bee6820
2023-07-20T09:22:04Z
java
2023-09-05T02:50:35Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,698
["dubbo-dependencies-bom/pom.xml"]
Does changing a consumer's subscription mode in the Nacos config center take effect in real time in Dubbo 3.0?
In Dubbo 3.0, does changing a consumer's subscription mode in the Nacos config center take effect in real time? For example, if it starts out in dual-subscription mode and is then switched to application-level registration only, does the service have to be restarted for the change to take effect?
https://github.com/apache/dubbo/issues/12698
https://github.com/apache/dubbo/pull/12304
ce6e5a6eedd9c8916e90d31d2fe5b732789f546b
53255e3f8e0cce46c0e6f9e5261ad24eb9d38509
2023-07-08T10:37:32Z
java
2023-05-16T04:53:40Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,658
["dubbo-remoting/dubbo-remoting-api/src/main/java/org/apache/dubbo/remoting/api/connection/AbstractConnectionClient.java", "dubbo-remoting/dubbo-remoting-netty4/src/main/java/org/apache/dubbo/remoting/transport/netty4/NettyConnectionClient.java"]
dubbo provider cannot reconnect service without registry center since 3.2.1
Description: a dubbo provider project with springboot invokes dubbo service, defining referenced service like this: ``` @DubboReference( check = false,url = "tri://localhost:50051") private HelloService helloService; ``` The consumer repeatedly calls the service with a delay of one second after each call. ``` @Scheduled(fixedDelay = 1000) public void run() { String message = helloService.sayHello("lucky"); System.out.println(message); } ``` key config items in application.properties like this : ``` dubbo.registry.address: N/A ``` Question: The above code can run normally in dubbo-3.1.7, when starting the service consumer first, and then starting the service provider. At 3.2.1, the service consumer could not successfully reconnect to the service provider. According to the source code, the implementation of the ConnectionListener class, which is responsible for reconnecting, has changed. The number of connection references in the code changed from positive to 0, causing the logic to terminate early. Why is that? Partial Code of Reconnection:
https://github.com/apache/dubbo/issues/12658
https://github.com/apache/dubbo/pull/12737
bd76d044ba7118409a4bb52c1fa2a0f2cbc9e7a3
630e14ae404d71dbd7229e3205468b3069b60676
2023-07-03T11:35:31Z
java
2023-07-17T03:35:03Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,650
["dubbo-common/src/main/java/org/apache/dubbo/common/logger/log4j2/Log4j2LoggerAdapter.java", "dubbo-common/src/main/java/org/apache/dubbo/common/logger/slf4j/Slf4jLogger.java", "dubbo-common/src/main/java/org/apache/dubbo/common/logger/slf4j/Slf4jLoggerAdapter.java"]
NPE when using telnet to inspect the "logging framework runtime" after the application starts
<!-- If you need to report a security issue please visit https://github.com/apache/dubbo/security/policy --> - [x] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate. ### Environment * Dubbo version: 3.1.9 * Operating System version: Windows * Java version: 11.0.18 ### Steps to reproduce this issue 1. Start an ordinary springboot+dubbo application normally, using the default logback with no logging options configured 2. Use the QoS command `loggerInfo` to query the logging configuration 3. No correct result is returned, and the application console throws an NPE ```bash telnet localhost 22222 ___ __ __ ___ ___ ____ / _ \ / / / // _ ) / _ ) / __ \ / // // /_/ // _ |/ _ |/ /_/ / /____/ \____//____//____/ \____/ dubbo>loggerInfo loggerInfo :fail to execute commandContext by null ``` ### Expected Behavior <!-- What do you expect from the above steps?--> The application's logging configuration is returned normally; if some parameters are not configured, a default value is provided (e.g. Log level: INFO) ### Actual Behavior <!-- What actually happens? --> If there is an exception, please attach the exception trace: ``` 14:15:36.417 logback [qos-worker-3-1] INFO o.a.d.q.c.DefaultCommandExecutor - [DUBBO] [Dubbo QoS] Command Process start. Command: loggerInfo, Args: [], Remote Address: /0:0:0:0:0:0:0:1:63671, dubbo version: 3.1.9, current host: 10.201.50.35 No such logging.properties in classpath for jdk logging config! 14:15:36.450 logback [qos-worker-3-1] INFO o.a.d.q.c.DefaultCommandExecutor - [DUBBO] [Dubbo QoS] Command Process Failed. Command: loggerInfo, Args: [], Remote Address: /0:0:0:0:0:0:0:1:63671, dubbo version: 3.1.9, current host: 10.201.50.35 java.lang.NullPointerException: null at org.apache.dubbo.qos.command.impl.LoggerInfo.execute(LoggerInfo.java:35) at org.apache.dubbo.qos.command.DefaultCommandExecutor.execute(DefaultCommandExecutor.java:57) at org.apache.dubbo.qos.server.handler.TelnetProcessHandler.channelRead0(TelnetProcessHandler.java:59) at org.apache.dubbo.qos.server.handler.TelnetProcessHandler.channelRead0(TelnetProcessHandler.java:40) 14:15:36.451 logback [qos-worker-3-1] ERROR o.a.d.q.s.h.TelnetProcessHandler - [DUBBO] execute commandContext got exception org.apache.dubbo.qos.command.CommandContext@740b26f8, dubbo version: 3.1.9, current host: 10.201.50.35, error code: 7-6. This may be caused by , go to https://dubbo.apache.org/faq/7/6 to find instructions. java.lang.NullPointerException: null at org.apache.dubbo.qos.command.impl.LoggerInfo.execute(LoggerInfo.java:35) at org.apache.dubbo.qos.command.DefaultCommandExecutor.execute(DefaultCommandExecutor.java:57) at org.apache.dubbo.qos.server.handler.TelnetProcessHandler.channelRead0(TelnetProcessHandler.java:59) at org.apache.dubbo.qos.server.handler.TelnetProcessHandler.channelRead0(TelnetProcessHandler.java:40) ``` Breakpoint debugging shows that line 33 of `org.apache.dubbo.qos.command.impl.LoggerInfo` returns null ![image](https://github.com/apache/dubbo/assets/10182210/3107c62c-077c-4614-b701-0dafb0fe4864) After changing the log level with `switchLogLevel`, querying with `loggerInfo` works normally again ``` dubbo>loggerInfo loggerInfo :fail to execute commandContext by null dubbo>switchLogLevel INFO OK dubbo>loggerInfo Available logger adapters: [slf4j, jcl, log4j2, jdk]. Current Adapter: [slf4j]. Log level: INFO ```
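The expected behaviour above asks for a default (INFO) when no explicit level is configured. A hedged sketch of that idea follows; the Level enum and helper are stand-ins for the real Dubbo logger SPI types, not a quotation of the actual patch.

```java
// Illustrative null-safe lookup: fall back to INFO when the logging adapter
// has no explicit level configured, so QoS commands such as loggerInfo never
// dereference a missing level. The Level enum here is a local stand-in.
enum Level { ALL, TRACE, DEBUG, INFO, WARN, ERROR, OFF }

final class LoggerLevels {
    static Level levelOrDefault(Level configured) {
        return configured != null ? configured : Level.INFO;
    }
}
```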
https://github.com/apache/dubbo/issues/12650
https://github.com/apache/dubbo/pull/12671
7c7f3db772589932bd58534fba8b1864d46d394b
879fd811283c28a5fcb4dfebd383e05b5e500cb0
2023-07-03T06:27:11Z
java
2023-07-25T13:18:30Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,646
["dubbo-registry/dubbo-registry-zookeeper/src/main/java/org/apache/dubbo/registry/zookeeper/ZookeeperRegistry.java"]
Could setting the faultTolerant parameter to false in ZookeeperRegistry#doRegister be a problem?
<!-- If you need to report a security issue please visit https://github.com/apache/dubbo/security/policy --> - [x] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate. ## Ask your question here In [this issue](https://github.com/apache/dubbo/issues/12607), after the provider recovers its zk connection the log is as follows ``` 2023-06-27 19:24:53.473 [main-EventThread] INFO org.apache.curator.framework.state.ConnectionStateManager [postState:251] - State change: RECONNECTED ``` FailbackRegistry tries to recover the registration, calls addFailedRegistered and puts the task into the HashedWheelTimer; when the HashedWheelTimer finally retries the registration, it reports that the node already exists ``` at org.apache.zookeeper.KeeperException.create(KeeperException.java:119) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783) at org.apache.curator.framework.imps.CreateBuilderImpl$17.call(CreateBuilderImpl.java:1176) at org.apache.curator.framework.imps.CreateBuilderImpl$17.call(CreateBuilderImpl.java:1156) at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:64) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:100) at org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:1153) at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:607) at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:597) at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:575) at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:51) at org.apache.dubbo.remoting.zookeeper.curator.CuratorZookeeperClient.createEphemeral(CuratorZookeeperClient.java:138) at org.apache.dubbo.remoting.zookeeper.AbstractZookeeperClient.create(AbstractZookeeperClient.java:87) at org.apache.dubbo.registry.zookeeper.ZookeeperRegistry.doRegister(ZookeeperRegistry.java:161) at org.apache.dubbo.registry.retry.FailedRegisteredTask.doRetry(FailedRegisteredTask.java:37) at org.apache.dubbo.registry.retry.AbstractRetryTask.run(AbstractRetryTask.java:129) at org.apache.dubbo.common.timer.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:653) at org.apache.dubbo.common.timer.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:732) at org.apache.dubbo.common.timer.HashedWheelTimer$Worker.run(HashedWheelTimer.java:454) at java.base/java.lang.Thread.run(Thread.java:833) ``` This problem is similar to [the one Guazi (the used-car platform) ran into](https://cn.dubbo.apache.org/zh-cn/blog/2019/01/05/dubbo-%e5%9c%a8%e7%93%9c%e5%ad%90%e4%ba%8c%e6%89%8b%e8%bd%a6%e7%9a%84%e5%ae%9e%e8%b7%b5/). That code does not seem to be present in the 3.0 code base; the faultTolerant parameter is used instead. Looking at the source, AbstractZookeeperClient's create() is only called with faultTolerant=false where ZookeeperRegistry invokes doRegister. The commit log shows that after issue #11036 the faultTolerant flag in doRegister was changed from true to false.
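A hedged sketch of the fault-tolerant behaviour the question is pointing at: when re-registering an ephemeral node after a reconnect, a "node already exists" error from ZooKeeper can be swallowed instead of failing the retry. The helper interface below is illustrative and is not the code of ZookeeperRegistry itself.

```java
import org.apache.zookeeper.KeeperException;

// Illustrative retry handling: treat NodeExists as a successful
// re-registration, mirroring what faultTolerant=true would tolerate.
final class EphemeralRegisterHelper {
    interface ZkCreate {
        void create(String path) throws KeeperException;
    }

    static void registerTolerant(ZkCreate client, String path) throws KeeperException {
        try {
            client.create(path);
        } catch (KeeperException.NodeExistsException ignored) {
            // The old ephemeral node has not expired yet (or was re-created);
            // the retry can be considered successful.
        }
    }
}
```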
https://github.com/apache/dubbo/issues/12646
https://github.com/apache/dubbo/pull/12740
7a51c2379f4f7c4c6c6ed2ce53ae178304cbcc7b
c54e1b63f975d18797a85046cf23dfcc989d3456
2023-07-03T01:07:44Z
java
2023-07-19T06:55:18Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,620
["dubbo-config/dubbo-config-spring/src/main/java/org/apache/dubbo/config/spring/context/DubboDeployApplicationListener.java"]
Bug in dubbo 3.1.5: thread deadlock during startup causes the service to fail to start
We ran into a problem when upgrading to dubbo3: the project uses the @PostConstruct annotation, and the method it calls is itself annotated with @Scheduled, which causes a thread deadlock. Log information: ``` "Thread-54" #617 daemon prio=5 os_prio=0 tid=0x00007f55ca668000 nid=0x26d waiting for monitor entry [0x00007f5554af6000] java.lang.Thread.State: BLOCKED (on object monitor) at org.apache.dubbo.config.spring.ReferenceBean.getCallProxy(ReferenceBean.java:351) [[[ problem introduced by dubbo 3.1.5 ]]] - waiting to lock <0x000000069092aa18> (a java.util.concurrent.ConcurrentHashMap) at org.apache.dubbo.config.spring.ReferenceBean.access$100(ReferenceBean.java:100) at org.apache.dubbo.config.spring.ReferenceBean$DubboReferenceLazyInitTargetSource.createObject(ReferenceBean.java:359) at org.springframework.aop.target.AbstractLazyCreationTargetSource.getTarget(AbstractLazyCreationTargetSource.java:86) - locked <0x00000005d7866948> (a org.apache.dubbo.config.spring.ReferenceBean$DubboReferenceLazyInitTargetSource) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:192) at com.sun.proxy.$Proxy73.queryAllData(Unknown Source) at com.huasheng.core.client.basicData.EtfDataServiceClient.queryAllData$original$f6OsWsrc(EtfDataServiceClient.java:41) at com.huasheng.core.client.basicData.EtfDataServiceClient.queryAllData$original$f6OsWsrc$accessor$ggthnbjE(EtfDataServiceClient.java) at com.huasheng.core.client.basicData.EtfDataServiceClient$auxiliary$x5YqXG7G.call(Unknown Source) at org.apache.skywalking.apm.agent.core.plugin.interceptor.enhance.InstMethodsInter.intercept(InstMethodsInter.java:86) at com.huasheng.core.client.basicData.EtfDataServiceClient.queryAllData(EtfDataServiceClient.java) at com.huasheng.task.job.hk.HqBasicDataRefreshJob.refreshindexEtf$original$UrAXt9j7(HqBasicDataRefreshJob.java:1013) at com.huasheng.task.job.hk.HqBasicDataRefreshJob.refreshindexEtf$original$UrAXt9j7$accessor$WzXLmL3m(HqBasicDataRefreshJob.java) at com.huasheng.task.job.hk.HqBasicDataRefreshJob$auxiliary$J3bNxn2m.call(Unknown Source) at org.apache.skywalking.apm.agent.core.plugin.interceptor.enhance.InstMethodsInter.intercept(InstMethodsInter.java:86) at com.huasheng.task.job.hk.HqBasicDataRefreshJob.refreshindexEtf(HqBasicDataRefreshJob.java) at com.huasheng.task.job.hk.HqBasicDataRefreshJob.lambda$indexEtfJob$4(HqBasicDataRefreshJob.java:1006) at com.huasheng.task.job.hk.HqBasicDataRefreshJob$$Lambda$452/1550461532.run(Unknown Source) at java.lang.Thread.run(Thread.java:748) Locked ownable synchronizers: - None ```
https://github.com/apache/dubbo/issues/12620
https://github.com/apache/dubbo/pull/12659
78ca5e523cd6969a0b7085e51d3996fd8448a55c
1c5265073bb55d7a697438a2bae944377fb89343
2023-06-29T07:01:24Z
java
2023-07-14T01:52:11Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,545
["dubbo-cluster/src/main/java/org/apache/dubbo/rpc/cluster/support/wrapper/ScopeClusterInvoker.java", "dubbo-rpc/dubbo-rpc-injvm/src/main/java/org/apache/dubbo/rpc/protocol/injvm/InjvmInvoker.java"]
dubbo 3.2.2: reflection exception when invoking over injvm
<!-- If you need to report a security issue please visit https://github.com/apache/dubbo/security/policy --> - [ ] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate. ### Environment * Dubbo version: 3.2.2 * Operating System version: xxx * Java version: 17 * spring cloud alibaba 2022 ### Steps to reproduce this issue ``` java.lang.reflect.InaccessibleObjectException: Unable to make field private byte java.lang.StackTraceElement.format accessible: module java.base does not "opens java.lang" to unnamed module @74751b3 at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354) at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297) at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:178) at java.base/java.lang.reflect.Field.setAccessible(Field.java:172) at com.alibaba.com.caucho.hessian.io.JavaDeserializer.getFieldMap(JavaDeserializer.java:340) at com.alibaba.com.caucho.hessian.io.JavaDeserializer.<init>(JavaDeserializer.java:80) at com.alibaba.com.caucho.hessian.io.StackTraceElementDeserializer.<init>(StackTraceElementDeserializer.java:56) at com.alibaba.com.caucho.hessian.io.SerializerFactory.<clinit>(SerializerFactory.java:189) at org.apache.dubbo.common.serialize.hessian2.Hessian2FactoryManager.createDefaultSerializerFactory(Hessian2FactoryManager.java:81) at org.apache.dubbo.common.serialize.hessian2.Hessian2FactoryManager.createSerializerFactory(Hessian2FactoryManager.java:77) at org.apache.dubbo.common.serialize.hessian2.Hessian2FactoryManager.getSerializerFactory(Hessian2FactoryManager.java:62) at org.apache.dubbo.common.serialize.hessian2.Hessian2ObjectOutput.<init>(Hessian2ObjectOutput.java:44) at org.apache.dubbo.common.serialize.hessian2.Hessian2Serialization.serialize(Hessian2Serialization.java:57) at org.apache.dubbo.rpc.protocol.injvm.DefaultParamDeepCopyUtil.copy(DefaultParamDeepCopyUtil.java:45) at org.apache.dubbo.rpc.protocol.injvm.InjvmInvoker.rebuildValue(InjvmInvoker.java:263) at org.apache.dubbo.rpc.protocol.injvm.InjvmInvoker.doInvoke(InjvmInvoker.java:172) at org.apache.dubbo.rpc.protocol.AbstractInvoker.doInvokeAndReturn(AbstractInvoker.java:242) at org.apache.dubbo.rpc.protocol.AbstractInvoker.invoke(AbstractInvoker.java:186) at org.apache.dubbo.rpc.listener.ListenerInvokerWrapper.invoke(ListenerInvokerWrapper.java:71) at org.dromara.common.dubbo.filter.DubboRequestFilter.invoke(DubboRequestFilter.java:42) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331) at org.apache.dubbo.validation.filter.ValidationFilter.invoke(ValidationFilter.java:98) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331) at com.alibaba.dubbo.rpc.Invoker$CompatibleInvoker.invoke(Invoker.java:75) at io.seata.integration.dubbo.alibaba.AlibabaDubboTransactionPropagationFilter.invoke(AlibabaDubboTransactionPropagationFilter.java:45) at com.alibaba.dubbo.rpc.Filter.invoke(Filter.java:31) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331) at org.apache.dubbo.seata.SeataTransactionPropagationConsumerFilter.invoke(SeataTransactionPropagationConsumerFilter.java:53) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331) at 
org.dromara.common.mybatis.filter.DubboDataPermissionFilter.invoke(DubboDataPermissionFilter.java:25) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331) at org.apache.dubbo.metrics.filter.MetricsFilter.invoke(MetricsFilter.java:58) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331) at cn.dev33.satoken.context.dubbo.filter.SaTokenDubboConsumerFilter.invoke(SaTokenDubboConsumerFilter.java:42) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CallbackRegistrationInvoker.invoke(FilterChainBuilder.java:194) at org.apache.dubbo.rpc.protocol.ReferenceCountInvokerWrapper.invoke(ReferenceCountInvokerWrapper.java:78) at org.apache.dubbo.rpc.cluster.support.AbstractClusterInvoker.invokeWithContext(AbstractClusterInvoker.java:382) at org.apache.dubbo.rpc.cluster.support.FailoverClusterInvoker.doInvoke(FailoverClusterInvoker.java:80) at org.apache.dubbo.rpc.cluster.support.AbstractClusterInvoker.invoke(AbstractClusterInvoker.java:343) at com.alibaba.csp.sentinel.adapter.dubbo3.SentinelDubboConsumerFilter.syncInvoke(SentinelDubboConsumerFilter.java:82) at com.alibaba.csp.sentinel.adapter.dubbo3.SentinelDubboConsumerFilter.invoke(SentinelDubboConsumerFilter.java:66) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331) at org.apache.dubbo.rpc.cluster.router.RouterSnapshotFilter.invoke(RouterSnapshotFilter.java:46) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331) at org.apache.dubbo.monitor.support.MonitorFilter.invoke(MonitorFilter.java:108) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331) at org.apache.dubbo.rpc.cluster.filter.support.MetricsClusterFilter.invoke(MetricsClusterFilter.java:50) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331) at org.apache.dubbo.rpc.protocol.dubbo.filter.FutureFilter.invoke(FutureFilter.java:52) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331) at com.alibaba.csp.sentinel.adapter.dubbo3.DubboAppContextFilter.invoke(DubboAppContextFilter.java:47) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331) at org.apache.dubbo.rpc.cluster.filter.support.ObservationSenderFilter.invoke(ObservationSenderFilter.java:61) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331) at org.apache.dubbo.spring.security.filter.ContextHolderParametersSelectedTransferFilter.invoke(ContextHolderParametersSelectedTransferFilter.java:41) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331) at org.apache.dubbo.rpc.cluster.filter.support.ConsumerClassLoaderFilter.invoke(ConsumerClassLoaderFilter.java:40) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331) at org.apache.dubbo.rpc.cluster.filter.support.ConsumerContextFilter.invoke(ConsumerContextFilter.java:118) at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331) at 
org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CallbackRegistrationInvoker.invoke(FilterChainBuilder.java:194) at org.apache.dubbo.rpc.cluster.support.wrapper.AbstractCluster$ClusterFilterInvoker.invoke(AbstractCluster.java:91) at org.apache.dubbo.rpc.cluster.support.wrapper.ScopeClusterInvoker.invoke(ScopeClusterInvoker.java:149) at org.apache.dubbo.registry.client.migration.MigrationInvoker.invoke(MigrationInvoker.java:286) at org.apache.dubbo.rpc.proxy.InvocationUtil.invoke(InvocationUtil.java:57) at org.apache.dubbo.rpc.proxy.InvokerInvocationHandler.invoke(InvokerInvocationHandler.java:75) at org.dromara.system.api.RemoteLogServiceDubboProxy3.saveLog(RemoteLogServiceDubboProxy3.java) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:216) at jdk.proxy2/jdk.proxy2.$Proxy271.saveLog(Unknown Source) at org.dromara.common.log.event.LogEventListener.saveLog(LogEventListener.java:39) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343) at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:750) at org.springframework.aop.interceptor.AsyncExecutionInterceptor.lambda$invoke$0(AsyncExecutionInterceptor.java:115) at java.base/java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:264) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java) at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:833) ``` Pls. provide [GitHub address] to reproduce this issue. ### Expected Behavior <!-- What do you expect from the above steps?--> ### Actual Behavior <!-- What actually happens? --> If there is an exception, please attach the exception trace: ``` Just put your stack trace here! ```
https://github.com/apache/dubbo/issues/12545
https://github.com/apache/dubbo/pull/12553
3160602072c198728cca48e449984a4e596e7c37
a4e9c9efa4ec79ca6e371141762c3da84718c65f
2023-06-16T01:40:13Z
java
2023-06-19T01:34:29Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,540
["dubbo-cluster/src/main/java/org/apache/dubbo/rpc/cluster/router/condition/config/ProviderAppStateRouter.java"]
dubbo 3.2.2: after the provider starts it logs a pile of warnings: "This may be caused by condition router get providerApplication is empty"
<!-- If you need to report a security issue please visit https://github.com/apache/dubbo/security/policy --> - [ ] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate. ### Environment ![image](https://github.com/apache/dubbo/assets/31852897/6ee9e08e-aad3-48e0-a16c-39393c20d8b2) Reproduces 100% of the time, on every startup * Dubbo version: 3.2.2 * Operating System version: xxx * Java version: 17 * nacos 2.2.1 * springcloud alibaba 2022-RC2 ### Steps to reproduce this issue 1. xxx 2. xxx 3. xxx Pls. provide [GitHub address] to reproduce this issue. ### Expected Behavior <!-- What do you expect from the above steps?--> ### Actual Behavior <!-- What actually happens? --> If there is an exception, please attach the exception trace: ``` Just put your stack trace here! ```
https://github.com/apache/dubbo/issues/12540
https://github.com/apache/dubbo/pull/12729
68e624f7643eeb1aa136a382e4432f3b77002f87
ff60c870edc0878874ae64d1c7c1c5c18a13c519
2023-06-15T05:24:26Z
java
2023-07-15T02:15:38Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,532
["dubbo-metadata/dubbo-metadata-api/src/main/java/org/apache/dubbo/metadata/report/identifier/MetadataIdentifier.java", "dubbo-metadata/dubbo-metadata-api/src/main/java/org/apache/dubbo/metadata/report/support/AbstractMetadataReport.java"]
Metrics output unified interface and serviceKey
<!-- If you need to report a security issue please visit https://github.com/apache/dubbo/security/policy --> When outputting to the micrometer, uniformly use serviceKey. ![image](https://github.com/apache/dubbo/assets/38374721/5891ff68-83bf-4298-a48f-a6a1910421bc)
https://github.com/apache/dubbo/issues/12532
https://github.com/apache/dubbo/pull/12586
27d9ce383640359924f98a4ccb66fb6f3f3140d2
2c82cf7dfccd435be7f8de7346a937a27f2d0a9a
2023-06-14T12:18:08Z
java
2023-06-24T09:02:15Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,526
["dubbo-config/dubbo-config-api/src/main/java/org/apache/dubbo/config/ReferenceConfig.java", "dubbo-config/dubbo-config-api/src/main/java/org/apache/dubbo/config/ServiceConfig.java"]
The methods parameter on the provider's registered URL contains duplicate method names
<!-- If you need to report a security issue please visit https://github.com/apache/dubbo/security/policy --> - [x] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate. ### Environment * Dubbo version: 3.1.11 * Operating System version: MAC * Java version: 1.8 ### Steps to reproduce this issue 1. With version 3.1.11, define an interface that provides a service and contains X methods with the same name but different parameters; the methods parameter on the provider's URL then contains the same method name multiple times 2. When a 2.6.x consumer references this service, methodInvokerMap stores X duplicate invoker addresses under the same method name; with the default random load balancing this is equivalent to that provider having X times the weight, so it takes X times the load 3. Right after upgrading to dubbo3, if one machine receives X times the traffic of the others, the extra pressure can be a risk ![duplicate](https://github.com/apache/dubbo/assets/3276231/fa4c2aab-e13c-4bd8-8b8a-3bfada07175c) ![addMap](https://github.com/apache/dubbo/assets/3276231/79b59170-020a-482e-82e2-0ebd6e162d67) Pls. provide [GitHub address] to reproduce this issue. ### Expected Behavior Remove duplicate method names when generating the methods parameter of the provider URL ### Actual Behavior Method names that are identical but have different parameters appear multiple times on the URL; the red circle shows the duplication on one machine during a canary release ![methods](https://github.com/apache/dubbo/assets/3276231/6c27459e-dc1b-4ad8-b1bf-57c51115c4d8)
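A hedged sketch of the de-duplication the expected behaviour describes: overloaded methods collapse to a single entry when the comma-separated methods value is built. The helper and its name are illustrative; the real change lives in ReferenceConfig/ServiceConfig.

```java
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.stream.Collectors;

// Illustrative builder: each method name appears exactly once in the
// "methods" parameter, even when the interface declares several overloads.
final class MethodsParam {
    static String build(Class<?> serviceInterface) {
        return Arrays.stream(serviceInterface.getMethods())
                .map(Method::getName)
                .collect(Collectors.toCollection(LinkedHashSet::new))
                .stream()
                .collect(Collectors.joining(","));
    }
}
```

For an interface with sayHello(String) and sayHello(String, int), this yields "sayHello" once instead of twice.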
https://github.com/apache/dubbo/issues/12526
https://github.com/apache/dubbo/pull/12527
a613cae2f3d6d42e555575d46c142ef60ec30729
467622aac6b62408c1f5861ddec2880cd74e88cc
2023-06-14T06:05:48Z
java
2023-06-18T05:45:06Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,493
["dubbo-rpc/dubbo-rpc-dubbo/src/main/java/org/apache/dubbo/rpc/protocol/dubbo/DubboInvoker.java", "dubbo-rpc/dubbo-rpc-injvm/src/main/java/org/apache/dubbo/rpc/protocol/injvm/InjvmInvoker.java", "dubbo-rpc/dubbo-rpc-triple/src/main/java/org/apache/dubbo/rpc/protocol/tri/TripleInvoker.java"]
FutureContext not released in time causes frequent GC
<!-- If you need to report a security issue please visit https://github.com/apache/dubbo/security/policy --> - [x] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate. ### Environment * Dubbo version: 3.0.12 3.1.8 * Java version: 1.8 ### Steps to reproduce this issue 1. Set the parameter future.sync.set=false on both the consumer and the provider via an environment variable 2. The provider returns a fairly large response payload 3. With 3.0.12, calls over the dubbo protocol do not cause frequent GC, but triple does 4. With 3.1.8, calls over both the dubbo and triple protocols cause frequent GC Pls. provide [GitHub address] to reproduce this issue. ### Expected Behavior When the consumer makes dubbo or triple requests with future.sync.set=false, the result should not be kept in the InternalThreadLocal, so the frequent GC problem no longer occurs <!-- What do you expect from the above steps?--> ### Actual Behavior With future.sync.set=false set everywhere: **Version 3.0.12:** > Dubbo protocol calls clear the FutureContext in time; the reason it takes effect <img width="1135" alt="image" src="https://github.com/apache/dubbo/assets/33999699/cca05d9d-dacd-4cf9-95d9-a72f9e20533d"> and the FutureContext is cleared again later while the response is processed <img width="1088" alt="image" src="https://github.com/apache/dubbo/assets/33999699/c63dff14-6814-41ac-8ad2-d35968f4cd8a"> and because future.sync.set=false the FutureContext is not set again afterwards, which resolves the frequent GC situation > Triple protocol calls: the reason Triple does not clear the FutureContext in time <img width="1027" alt="image" src="https://github.com/apache/dubbo/assets/33999699/2a71560a-2053-4d9e-8f88-07e976e7c713"> and the InternalThreadLocal is not cleared after the response is received; although future.sync.set=false, the FutureContext has already been put into the InternalThreadLocal, which causes frequent GC **Version 3.1.8:** Neither the Dubbo nor the Triple protocol clears the FutureContext in time > Dubbo protocol call <img width="1147" alt="image" src="https://github.com/apache/dubbo/assets/33999699/6b170820-1088-423f-8634-563944f6b00b"> > Triple protocol call: the reason Triple does not clear it is the same as in 3.0.12 <!-- What actually happens? -->
https://github.com/apache/dubbo/issues/12493
https://github.com/apache/dubbo/pull/12534
8474bf5415c6a7e76f0d9a9682399c9fa34d8a1e
5d76667808720eec49d6d970a57145142b228634
2023-06-11T16:52:51Z
java
2023-06-30T12:33:17Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,476
["dubbo-rpc/dubbo-rpc-api/src/main/java/org/apache/dubbo/rpc/protocol/AbstractProtocol.java", "dubbo-rpc/dubbo-rpc-api/src/main/java/org/apache/dubbo/rpc/protocol/AbstractProxyProtocol.java", "dubbo-rpc/dubbo-rpc-rest/src/main/java/org/apache/dubbo/rpc/protocol/rest/RestProtocol.java"]
java.lang.NoSuchFieldError: serverMap
<img width="739" alt="image" src="https://github.com/apache/dubbo/assets/12210038/ead31890-dc6e-4a82-a4d0-14c323f756cf"> <img width="473" alt="image" src="https://github.com/apache/dubbo/assets/12210038/85708f4f-4989-4324-9fb9-0102b2561927"> <img width="557" alt="image" src="https://github.com/apache/dubbo/assets/12210038/e5965563-cca4-4ade-95e4-d1b4211cdaec"> ``` <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>3.1.0</version> <relativePath/> <!-- lookup parent from repository --> </parent> <groupId>com.example</groupId> <artifactId>provider</artifactId> <version>0.0.1-SNAPSHOT</version> <name>provider</name> <description>provider</description> <properties> <java.version>17</java.version> <dubbo.version>3.2.2</dubbo.version> </properties> <dependencies> <dependency> <groupId>javax.servlet</groupId> <artifactId>javax.servlet-api</artifactId> <version>4.0.1</version> </dependency> <dependency> <groupId>com.caucho</groupId> <artifactId>hessian</artifactId> <version>4.0.66</version> </dependency> <dependency> <groupId>org.apache.dubbo.extensions</groupId> <artifactId>dubbo-rpc-hessian</artifactId> <version>1.0.1</version> </dependency> <dependency> <groupId>com.dubbo.test</groupId> <artifactId>api</artifactId> <version>1.0-SNAPSHOT</version> </dependency> <dependency> <groupId>org.apache.dubbo</groupId> <artifactId>dubbo-spring-boot-starter</artifactId> </dependency> <dependency> <groupId>org.apache.dubbo</groupId> <artifactId>dubbo-dependencies-zookeeper-curator5</artifactId> <type>pom</type> <exclusions> <exclusion> <groupId>org.slf4j</groupId> <artifactId>slf4j-reload4j</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> </dependencies> <dependencyManagement> <dependencies> <dependency> <groupId>org.apache.dubbo</groupId> <artifactId>dubbo-spring-boot-starter</artifactId> <version>${dubbo.version}</version> </dependency> <dependency> <groupId>org.apache.dubbo</groupId> <artifactId>dubbo-bom</artifactId> <version>${dubbo.version}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>org.apache.dubbo</groupId> <artifactId>dubbo-dependencies-zookeeper-curator5</artifactId> <version>${dubbo.version}</version> <type>pom</type> </dependency> </dependencies> </dependencyManagement> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </project> ```
https://github.com/apache/dubbo/issues/12476
https://github.com/apache/dubbo/pull/12507
dd94c1561ade672cc253eef79828f6d2daf5a676
2cb12ff9a6444b69f6b7f1b3cac7532be93e6083
2023-06-07T15:01:54Z
java
2023-06-12T12:51:13Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,473
["dubbo-config/dubbo-config-spring/src/main/java/org/apache/dubbo/config/spring/schema/DubboBeanDefinitionParser.java", "dubbo-config/dubbo-config-spring/src/test/java/org/apache/dubbo/config/spring/schema/DubboNamespaceHandlerTest.java", "dubbo-spring-boot/dubbo-spring-boot-compatible/autoconfigure/src/test/resources/META-INF/spring/dubbo-context.xml"]
Spring bean conflict of dubbo:method
<!-- If you need to report a security issue please visit https://github.com/apache/dubbo/security/policy --> - [ ] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate. ### Environment * Dubbo version: xxx * Operating System version: xxx * Java version: xxx ### Steps to reproduce this issue ```xml <bean id="sayHi" class="org.apache.dubbo.samples.provider.GreetingsServiceImpl"/> <dubbo:service id="GreetingsServiceRef1" interface="org.apache.dubbo.samples.api.GreetingsService" ref="sayHi"> <dubbo:method name="sayHi" timeout="1000" retries="3"/> </dubbo:service> ``` Pls. provide [GitHub address] to reproduce this issue. ### Expected Behavior <!-- What do you expect from the above steps?--> ### Actual Behavior <!-- What actually happens? --> If there is an exception, please attach the exception trace: ``` Just put your stack trace here! ```
https://github.com/apache/dubbo/issues/12473
https://github.com/apache/dubbo/pull/12474
04d132fee1cf23f6c0eada7131c9b5ab18b8ded7
a758a1e7e0efd5d9465cde6088c04f25fa11aa4a
2023-06-07T08:55:46Z
java
2023-06-08T04:47:57Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,434
["dubbo-rpc/dubbo-rpc-triple/src/main/java/org/apache/dubbo/rpc/protocol/tri/call/BiStreamServerCallListener.java", "dubbo-rpc/dubbo-rpc-triple/src/main/java/org/apache/dubbo/rpc/protocol/tri/transport/TripleServerConnectionHandler.java"]
With dubbo triple reactor communication, why does the provider-side stream's doFinally not receive a termination signal after the calling process is killed?
Hello, I would like to ask about a program that communicates using the dubbo triple reactor style: after the calling process is killed and stops, why does the doFinally of the stream established on the provider side not receive a termination signal? Ideally the provider-side stream's doFinally should receive the signal so that my program can do some resource cleanup, but in practice it currently does not. Hoping for an explanation, thanks! Environment: dubbo :3.1.5 protobuf-java: 3.22.0 rpc.proto file: ``` syntax = "proto3"; option java_multiple_files = true; package com.protocol.grpc; message Request { int64 id = 1; string sessionId = 2; string userId = 3; string ns = 4; string cmd = 5; bool binary = 6; bytes payload = 7; } enum ResponseType { RESPONSE = 0; PUSH = 1; } message Response { int64 id = 1; string sessionId = 2; string userId = 3; string ns = 4; string cmd = 5; bool binary = 6; bytes payload = 7; bool success = 8; string msg = 9; ResponseType type = 10; } service DataModelService { rpc connect(stream Request) returns (stream Response); } ``` Below is simple provider code: after a connection is established via connect, every received message is echoed back as-is. ``` @Slf4j @DubboService(group = "test-sse") @RequiredArgsConstructor public class SseDataModelServiceProvider extends DubboDataModelServiceTriple.DataModelServiceImplBase { @Override public Flux<Response> connect(Flux<Request> request) { return request.doFinally(signalType -> { log.info("connect is finally, signalType:{}", signalType); //TODO // do some resource cleanup }).doOnNext(requestMsg -> { log.info("get data: {}", requestMsg); }).flatMap(requestMsg -> Flux.just(Response .newBuilder() .setId(requestMsg.getId()) .setSessionId(requestMsg.getSessionId()) .setUserId(requestMsg.getUserId()) .setBinary(requestMsg.getBinary()) .setType(ResponseType.RESPONSE) .setCmd(requestMsg.getCmd()) .setNs(requestMsg.getNs()) .setSuccess(true) .setPayload(ByteString.copyFrom(("payload").getBytes(StandardCharsets.UTF_8))).build()) ); } } ```
https://github.com/apache/dubbo/issues/12434
https://github.com/apache/dubbo/pull/12451
00d8a46cd54dff3687f32ade67c0bcddc3285e07
eba976fec63eb1ccbedd5751162e2f2a888391df
2023-05-31T07:30:02Z
java
2023-06-07T03:03:26Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,415
["dubbo-common/src/main/java/org/apache/dubbo/common/utils/ToStringUtils.java", "dubbo-rpc/dubbo-rpc-api/src/main/java/org/apache/dubbo/rpc/support/AccessLogData.java"]
triple protocol config accesslog="true" leads to StackOverflowError
* Dubbo version: 3.1.5 * Operating System version: xxx * Java version: jdk8 A Go client invokes a Java provider. The exception is not printed on the Java side, only on the Go client side. AccessLogFilter converts the request parameter (a proto-generated Java class) to JSON, which leads to a cyclic invocation: ![image](https://github.com/apache/dubbo/assets/5662648/2a6edb13-fdb8-460d-bac7-1c2927e8c88f)
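A hedged sketch of one way to avoid the recursion: render protobuf-generated arguments with protobuf's own text form instead of generic JSON reflection. The helper is illustrative and is not the code of ToStringUtils/AccessLogData in the linked PR.

```java
import com.google.protobuf.Message;

// Illustrative formatter for access-log arguments: protobuf-generated classes
// contain self-referencing internals that generic JSON reflection can loop on,
// so fall back to Message#toString(), which prints the proto text format.
final class AccessLogArgFormatter {
    static String format(Object arg) {
        if (arg instanceof Message) {
            return ((Message) arg).toString();
        }
        return String.valueOf(arg);
    }
}
```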
https://github.com/apache/dubbo/issues/12415
https://github.com/apache/dubbo/pull/12427
f60ac914f463034df4a50e6f871ddfb146c3b541
74315babbeb4513c8b4ded4b40b0c254060ae3ca
2023-05-29T03:34:59Z
java
2023-06-03T06:47:23Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,359
["dubbo-registry/dubbo-registry-api/src/main/java/org/apache/dubbo/registry/integration/RegistryDirectory.java"]
Misleading exception message when the protocol does not exist
### Environment * Dubbo version: 3.2.1 * Operating System version: mac m1 * Java version: 1.8 ### Steps to reproduce this issue 1. Pull the latest source code of dubbo3.2 2. Start the demo provider project, dubbo-springboot-demo-provider 3. Start the demo comsumer project, dubbo-springboot-demo-comsuer。Configure as follow: ``` dubbo.consumer.protocol: unknown //does not exist ``` ### Expected Behavior Comuser fails to start and reports that the protocol does not exist ### Actual Behavior Consumer failed to start and printed an unrelated exception ``` [20/05/23 23:43:14:894 CST] main ERROR deploy.DefaultModuleDeployer: [DUBBO] Model start failed: Dubbo Module[1.1.1] start failed: java.lang.IllegalStateException: Failed to check the status of the service org.apache.dubbo.springboot.demo.DemoService. No provider available for the service org.apache.dubbo.springboot.demo.DemoService from the url fgh://192.168.1.105/org.apache.dubbo.springboot.demo.DemoService?application=dubbo-springboot-demo-consumer&background=false&dubbo=2.0.2&executor-management-mode=isolation&file-cache=true&interface=org.apache.dubbo.springboot.demo.DemoService&methods=sayHello,sayHelloAsync&pid=24800&protocol=fgh&qos.enable=true&qos.port=33333&register.ip=192.168.1.105&release=&side=consumer&sticky=false&timestamp=1684597394299&unloadClusterRelated=false to the consumer 192.168.1.105 use dubbo version , dubbo version: , current host: 192.168.1.105, error code: 5-14. This may be caused by , go to https://dubbo.apache.org/faq/5/14 to find instructions. java.lang.IllegalStateException: Failed to check the status of the service org.apache.dubbo.springboot.demo.DemoService. No provider available for the service org.apache.dubbo.springboot.demo.DemoService from the url fgh://192.168.1.105/org.apache.dubbo.springboot.demo.DemoService?application=dubbo-springboot-demo-consumer&background=false&dubbo=2.0.2&executor-management-mode=isolation&file-cache=true&interface=org.apache.dubbo.springboot.demo.DemoService&methods=sayHello,sayHelloAsync&pid=24800&protocol=fgh&qos.enable=true&qos.port=33333&register.ip=192.168.1.105&release=&side=consumer&sticky=false&timestamp=1684597394299&unloadClusterRelated=false to the consumer 192.168.1.105 use dubbo version at org.apache.dubbo.config.ReferenceConfig.checkInvokerAvailable(ReferenceConfig.java:647) at org.apache.dubbo.config.ReferenceConfig.init(ReferenceConfig.java:311) at org.apache.dubbo.config.ReferenceConfig.get(ReferenceConfig.java:232) at org.apache.dubbo.config.utils.SimpleReferenceCache.destroyReference(SimpleReferenceCache.java:265) at org.apache.dubbo.config.utils.SimpleReferenceCache.destroy(SimpleReferenceCache.java:218) at org.apache.dubbo.config.utils.SimpleReferenceCache.destroy(SimpleReferenceCache.java:242) at org.apache.dubbo.config.deploy.DefaultModuleDeployer.lambda$referServices$6(DefaultModuleDeployer.java:444) at java.util.concurrent.ConcurrentHashMap$ValuesView.forEach(ConcurrentHashMap.java:4707) at org.apache.dubbo.config.deploy.DefaultModuleDeployer.referServices(DefaultModuleDeployer.java:419) at org.apache.dubbo.config.deploy.DefaultModuleDeployer.startSync(DefaultModuleDeployer.java:173) at org.apache.dubbo.config.deploy.DefaultModuleDeployer.start(DefaultModuleDeployer.java:145) at org.apache.dubbo.config.spring.context.DubboDeployApplicationListener.onContextRefreshedEvent(DubboDeployApplicationListener.java:113) at org.apache.dubbo.config.spring.context.DubboDeployApplicationListener.onApplicationEvent(DubboDeployApplicationListener.java:102) at 
org.apache.dubbo.config.spring.context.DubboDeployApplicationListener.onApplicationEvent(DubboDeployApplicationListener.java:47) at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:176) at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:169) at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:143) at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:421) at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:378) at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:940) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:586) at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:731) at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:408) at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) at org.springframework.boot.SpringApplication.run(SpringApplication.java:1303) at org.springframework.boot.SpringApplication.run(SpringApplication.java:1292) at org.apache.dubbo.springboot.demo.consumer.ConsumerApplication.main(ConsumerApplication.java:38) ```
https://github.com/apache/dubbo/issues/12359
https://github.com/apache/dubbo/pull/12361
3fa518505fc54be59e4306300891e54db4610240
bde608ec3cf0cad32e2704e89e008a83bc27c46c
2023-05-20T16:06:48Z
java
2023-06-12T12:47:02Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,353
["dubbo-remoting/dubbo-remoting-netty4/src/main/java/org/apache/dubbo/remoting/transport/netty4/NettyChannel.java", "dubbo-remoting/dubbo-remoting-netty4/src/main/java/org/apache/dubbo/remoting/transport/netty4/NettyConfigOperator.java"]
With port protocol multiplexing and encodeInIOThread=false, the server cannot respond to the client properly
### Environment

* Dubbo version: 3.2.0
* Operating System version: macOS 10.15.7
* Java version: 1.8

### Steps to reproduce this issue

Service provider:

```java
public class ApiProvider {
    public static void main(String[] args) {
        ServiceConfig<GreeterService> serviceConfig = new ServiceConfig<>();
        serviceConfig.setInterface(GreeterService.class);
        serviceConfig.setRef(new GreeterServiceImpl());
        ProtocolConfig protocol = new ProtocolConfig("tri", 20881);
        protocol.setExtProtocol("dubbo");
        ApplicationConfig applicationConfig = new ApplicationConfig("dubbo-demo-triple-api-provider");
        applicationConfig.setRegisterMode("instance");
        applicationConfig.setMetadataType("remote");
        applicationConfig.setEnableFileCache(false);
        DubboBootstrap bootstrap = DubboBootstrap.getInstance();
        bootstrap.application(applicationConfig)
            .registry(new RegistryConfig("zookeeper://127.0.0.1:2181"))
            .protocol(protocol)
            .service(serviceConfig)
            .start()
            .await();
    }
}
```

Service consumer:

```java
public class ApiConsumer {
    public static void main(String[] args) throws InterruptedException {
        System.setProperty("dubbo.application.service-discovery.migration", "FORCE_APPLICATION");
        ReferenceConfig<GreeterService> referenceConfig = new ReferenceConfig<>();
        referenceConfig.setInterface(GreeterService.class);
        referenceConfig.setCheck(false);
        referenceConfig.setTimeout(3000);
        referenceConfig.setProtocol(CommonConstants.DUBBO);
        referenceConfig.setRetries(0);
        ApplicationConfig applicationConfig = new ApplicationConfig("dubbo-demo-triple-api-consumer");
        applicationConfig.setMetadataType("remote");
        applicationConfig.setEnableFileCache(false);
        DubboBootstrap bootstrap = DubboBootstrap.getInstance();
        bootstrap.application(applicationConfig)
            .registry(new RegistryConfig("zookeeper://127.0.0.1:2181"))
            .protocol(new ProtocolConfig(CommonConstants.DUBBO, -1))
            .reference(referenceConfig)
            .start();
        referenceConfig.get().sayHello(HelloRequest.newBuilder().setName("123").build());
    }
}
```

### Expected Behavior

The provider responds to the consumer normally.

### Actual Behavior

Consumer-side stack trace:

```
[19/05/23 03:10:58:058 CST] DubboShutdownHook INFO deploy.DefaultModuleDeployer: [DUBBO] Dubbo Module[1.1.1] is stopping., dubbo version: 3.2.0, current host: 10.55.38.81
Exception in thread "main" org.apache.dubbo.rpc.RpcException: Failed to invoke the method sayHello in the service org.apache.dubbo.demo.GreeterService. Tried 1 times of the providers [10.55.38.81:20881] (1/1) from the registry 127.0.0.1:2181 on the consumer 10.55.38.81 using the dubbo version 3.2.0. Last error is: Invoke remote method timeout.
method: sayHello, provider: DefaultServiceInstance{serviceName='dubbo-demo-triple-api-provider', host='10.55.38.81', port=20881, enabled=true, healthy=true, metadata={dubbo.endpoints=[{"port":20881,"protocol":"dubbo"},{"port":20881,"protocol":"tri"}], dubbo.metadata.revision=0261e84951c0c038110acb6ff1edbb32, dubbo.metadata.storage-type=remote, timestamp=1684480151231}}, service{name='org.apache.dubbo.demo.GreeterService',group='null',version='null',protocol='dubbo',port='20881',params={executor-management-mode=isolation, side=provider, file-cache=false, release=3.2.0, methods=sayHello,sayHelloAsync,sayMultiHello1,sayMultiHello2, deprecated=false, dubbo=2.0.2, interface=org.apache.dubbo.demo.GreeterService, service-name-mapping=true, register-mode=instance, generic=false, ispuserver=true, metadata-type=remote, application=dubbo-demo-triple-api-provider, prefer.serialization=fastjson2,hessian2, background=false, dynamic=true, anyhost=true},}, cause: org.apache.dubbo.remoting.TimeoutException: Waiting server-side response timeout by scan timer. start time: 2023-05-19 15:10:55.126, end time: 2023-05-19 15:10:58.173, client elapsed: 124 ms, server elapsed: 2923 ms, timeout: 3000 ms, request: Request [id=0, version=2.0.2, twoWay=true, event=false, broken=false, mPayload=0, data=RpcInvocation [methodName=sayHello, parameterTypes=[class org.apache.dubbo.demo.hello.HelloRequest]]], channel: /10.55.38.81:56152 -> /10.55.38.81:20881
	at org.apache.dubbo.rpc.cluster.support.FailoverClusterInvoker.doInvoke(FailoverClusterInvoker.java:115)
	at org.apache.dubbo.rpc.cluster.support.AbstractClusterInvoker.invoke(AbstractClusterInvoker.java:341)
	at org.apache.dubbo.rpc.cluster.router.RouterSnapshotFilter.invoke(RouterSnapshotFilter.java:46)
	at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331)
	at org.apache.dubbo.monitor.support.MonitorFilter.invoke(MonitorFilter.java:101)
	at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331)
	at org.apache.dubbo.rpc.cluster.filter.support.MetricsClusterFilter.invoke(MetricsClusterFilter.java:51)
	at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331)
	at org.apache.dubbo.rpc.protocol.dubbo.filter.FutureFilter.invoke(FutureFilter.java:52)
	at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331)
	at org.apache.dubbo.rpc.cluster.filter.support.ObservationSenderFilter.invoke(ObservationSenderFilter.java:61)
	at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331)
	at org.apache.dubbo.rpc.cluster.filter.support.ConsumerClassLoaderFilter.invoke(ConsumerClassLoaderFilter.java:40)
	at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331)
	at org.apache.dubbo.rpc.cluster.filter.support.ConsumerContextFilter.invoke(ConsumerContextFilter.java:118)
	at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331)
	at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CallbackRegistrationInvoker.invoke(FilterChainBuilder.java:194)
	at org.apache.dubbo.rpc.cluster.support.wrapper.AbstractCluster$ClusterFilterInvoker.invoke(AbstractCluster.java:91)
	at org.apache.dubbo.rpc.cluster.support.wrapper.MockClusterInvoker.invoke(MockClusterInvoker.java:103)
	at org.apache.dubbo.rpc.cluster.support.wrapper.ScopeClusterInvoker.invoke(ScopeClusterInvoker.java:131)
	at org.apache.dubbo.registry.client.migration.MigrationInvoker.invoke(MigrationInvoker.java:286)
	at org.apache.dubbo.rpc.proxy.InvocationUtil.invoke(InvocationUtil.java:57)
	at org.apache.dubbo.rpc.proxy.InvokerInvocationHandler.invoke(InvokerInvocationHandler.java:75)
	at org.apache.dubbo.demo.GreeterServiceDubboProxy0.sayHello(GreeterServiceDubboProxy0.java)
	at org.apache.dubbo.demo.consumer.ApiConsumer.main(ApiConsumer.java:58)
Caused by: java.util.concurrent.ExecutionException: org.apache.dubbo.remoting.TimeoutException: Waiting server-side response timeout by scan timer. start time: 2023-05-19 15:10:55.126, end time: 2023-05-19 15:10:58.173, client elapsed: 124 ms, server elapsed: 2923 ms, timeout: 3000 ms, request: Request [id=0, version=2.0.2, twoWay=true, event=false, broken=false, mPayload=0, data=RpcInvocation [methodName=sayHello, parameterTypes=[class org.apache.dubbo.demo.hello.HelloRequest]]], channel: /10.55.38.81:56152 -> /10.55.38.81:20881
	at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
	at org.apache.dubbo.rpc.AsyncRpcResult.get(AsyncRpcResult.java:208)
	at org.apache.dubbo.rpc.protocol.AbstractInvoker.waitForResultIfSync(AbstractInvoker.java:286)
	at org.apache.dubbo.rpc.protocol.AbstractInvoker.invoke(AbstractInvoker.java:189)
	at org.apache.dubbo.rpc.listener.ListenerInvokerWrapper.invoke(ListenerInvokerWrapper.java:71)
	at org.apache.dubbo.metrics.filter.MetricsFilter.invoke(MetricsFilter.java:56)
	at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331)
	at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CallbackRegistrationInvoker.invoke(FilterChainBuilder.java:194)
	at org.apache.dubbo.rpc.protocol.ReferenceCountInvokerWrapper.invoke(ReferenceCountInvokerWrapper.java:78)
	at org.apache.dubbo.rpc.cluster.support.AbstractClusterInvoker.invokeWithContext(AbstractClusterInvoker.java:380)
	at org.apache.dubbo.rpc.cluster.support.FailoverClusterInvoker.doInvoke(FailoverClusterInvoker.java:81)
	... 24 more
Caused by: org.apache.dubbo.remoting.TimeoutException: Waiting server-side response timeout by scan timer. start time: 2023-05-19 15:10:55.126, end time: 2023-05-19 15:10:58.173, client elapsed: 124 ms, server elapsed: 2923 ms, timeout: 3000 ms, request: Request [id=0, version=2.0.2, twoWay=true, event=false, broken=false, mPayload=0, data=RpcInvocation [methodName=sayHello, parameterTypes=[class org.apache.dubbo.demo.hello.HelloRequest]]], channel: /10.55.38.81:56152 -> /10.55.38.81:20881
	at org.apache.dubbo.remoting.exchange.support.DefaultFuture.doReceived(DefaultFuture.java:217)
	at org.apache.dubbo.remoting.exchange.support.DefaultFuture.received(DefaultFuture.java:181)
	at org.apache.dubbo.remoting.exchange.support.DefaultFuture$TimeoutCheckTask.notifyTimeout(DefaultFuture.java:291)
	at org.apache.dubbo.remoting.exchange.support.DefaultFuture$TimeoutCheckTask.lambda$run$0(DefaultFuture.java:278)
	at org.apache.dubbo.common.threadpool.ThreadlessExecutor$RunnableWrapper.run(ThreadlessExecutor.java:141)
	at org.apache.dubbo.common.threadpool.ThreadlessExecutor.waitAndDrain(ThreadlessExecutor.java:70)
	at org.apache.dubbo.rpc.AsyncRpcResult.get(AsyncRpcResult.java:202)
	... 33 more
```

When the client establishes a connection and the server creates the NettyChannel, the primary protocol is tri, so the internal codec created for the channel is org.apache.dubbo.remoting.api.pu.DefaultCodec, which is an empty implementation.

```java
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
    super.channelActive(ctx);
    // creates the NettyChannel; with the tri protocol the internal codec resolved here is DefaultCodec2
    NettyChannel channel = NettyChannel.getOrAddChannel(ctx.channel(), url, handler);
}
```

After the client protocol has been determined, org.apache.dubbo.remoting.transport.netty4.NettyConfigOperator#configChannelHandler does replace the codec in the pipeline with DubboCountCodec:

```java
if (!(codec2 instanceof DefaultCodec)) {
    NettyCodecAdapter codec = new NettyCodecAdapter(codec2, channel.getUrl(), handler);
    ((NettyChannel) channel).getNioChannel().pipeline().addLast(
        codec.getDecoder()
    ).addLast(
        codec.getEncoder()
    );
}
```

However, org.apache.dubbo.remoting.transport.netty4.NettyChannel#send is optimized to encode on the business thread, and the codec held inside NettyChannel is still the empty implementation. As a result, the outputMessage finally submitted to Netty is an empty buffer, the client never receives the server's response, and the call ends with a timeout.

```java
public void send(Object message, boolean sent) throws RemotingException {
    Object outputMessage = message;
    if (!encodeInIOThread) {
        ByteBuf buf = channel.alloc().buffer();
        ChannelBuffer buffer = new NettyBackedChannelBuffer(buf);
        codec.encode(this, buffer, message); // empty implementation
        outputMessage = buf; // empty buffer
    }
```
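The analysis above suggests that the business-thread encoding path should not be taken while the channel-local codec is still the empty DefaultCodec placeholder. Below is a minimal sketch of that guard, offered purely as an illustration of the idea; it is not necessarily the change made in the linked pull request.

```java
// Hedged sketch: only pre-encode on the business thread when the channel-local codec is a
// real protocol codec; if it is still the DefaultCodec placeholder, pass the message through
// so the pipeline's DubboCountCodec encoder (installed by NettyConfigOperator) produces the bytes.
public void send(Object message, boolean sent) throws RemotingException {
    Object outputMessage = message;
    if (!encodeInIOThread && !(codec instanceof DefaultCodec)) {
        ByteBuf buf = channel.alloc().buffer();
        ChannelBuffer buffer = new NettyBackedChannelBuffer(buf);
        codec.encode(this, buffer, message);
        outputMessage = buf;
    }
    // ... remainder unchanged: hand outputMessage to the underlying Netty channel
}
```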
https://github.com/apache/dubbo/issues/12353
https://github.com/apache/dubbo/pull/12355
08727e8e270ce355a51417d9604dfd2954b6fe6b
1671332337caffe68d61a3afb47bc4277c7601ee
2023-05-19T07:23:04Z
java
2023-05-20T05:54:29Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,334
["dubbo-cluster/src/main/java/org/apache/dubbo/rpc/cluster/support/wrapper/ScopeClusterInvoker.java", "dubbo-cluster/src/test/java/org/apache/dubbo/rpc/cluster/support/wrapper/ScopeClusterInvokerTest.java"]
Dubbo 3.2.0 broadcast + injvm does not work as expected
- [x] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate.

### Environment

* Dubbo version: 3.2.0 / 3.2.1
* Operating System version: Windows 10
* Java version: OpenJDK 17.0.6

### Steps to reproduce this issue

```java
public interface RemoteCacheSyncService {
    void flushCache();
}

@DubboService
public class RemoteCacheSyncServiceImpl implements RemoteCacheSyncService {
    @Override
    public void flushCache() {
        // some code
    }
}
```

We have 3 microservices, referred to as `A`, `B`, and `C`. Each service contains the above code, but only `C` holds a `@DubboReference` to the interface:

```java
@DubboReference(cluster = ClusterRules.BROADCAST)
RemoteCacheSyncService remoteCacheSyncService;
```

### Expected Behavior

Call **ALL** implementations in **ALL** microservices.

### Actual Behavior

However, since there is a service implementation of the interface in the **same application** (`C`), Dubbo uses the `injvm` protocol, so only the implementation located in `C` is invoked.

Same issue as https://github.com/apache/dubbo/issues/6842; however, it came back again.
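If forcing the call onto the remote path is acceptable until the ScopeClusterInvoker behavior is corrected, one possible workaround — a sketch that assumes the standard `scope` attribute of `@DubboReference` is honored in this version — is to disable the in-JVM shortcut on the reference:

```java
// Hedged sketch: force remote invocation so the broadcast cluster fans out to every provider
// instead of being short-circuited to the in-JVM implementation inside application C.
@DubboReference(cluster = ClusterRules.BROADCAST, scope = "remote")
RemoteCacheSyncService remoteCacheSyncService;
```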
https://github.com/apache/dubbo/issues/12334
https://github.com/apache/dubbo/pull/12347
c22db81a8bd26fb1a1cd0e41b04cb0bbe421cec5
c76e97dc616509fe6f6b2bef3ab922642af7cd06
2023-05-16T13:52:18Z
java
2023-05-19T07:31:54Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,326
["dubbo-kubernetes/src/main/java/org/apache/dubbo/registry/kubernetes/util/KubernetesConfigUtils.java"]
k8s client http2Disable default value is wrong
### Environment

* Dubbo version: 3.2.0
* Operating System version: macOS 10.15.7
* Java version: 1.8

### Steps to reproduce this issue

### Expected Behavior

The default value for `http2Disable` should be taken from `base.isHttp2Disable()`:

```java
public class KubernetesConfigUtils {
    public static Config createKubernetesConfig(URL url) {
        Config base = Config.autoConfigure(null);
        return new ConfigBuilder(base)
            // ...
            .withTrustCerts(url.getParameter(TRUST_CERTS, base.isTrustCerts()))
            .withHttp2Disable(url.getParameter(HTTP2_DISABLE, base.isHttp2Disable()))
            .build();
    }
}
```

### Actual Behavior

The `trustCerts` default value is mistakenly passed as the default for `http2Disable`:

```java
public class KubernetesConfigUtils {
    public static Config createKubernetesConfig(URL url) {
        Config base = Config.autoConfigure(null);
        return new ConfigBuilder(base)
            // ...
            .withTrustCerts(url.getParameter(TRUST_CERTS, base.isTrustCerts()))
            .withHttp2Disable(url.getParameter(HTTP2_DISABLE, base.isTrustCerts())) // bug
            .build();
    }
}
```
https://github.com/apache/dubbo/issues/12326
https://github.com/apache/dubbo/pull/12328
a0e43cbcfe5fb8fd54635ef93adb08bb7fa96a51
1511ca09912492309bc2a093863eb969223f84b2
2023-05-16T09:27:12Z
java
2023-05-18T08:20:30Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,321
["dubbo-config/dubbo-config-api/src/main/java/org/apache/dubbo/config/deploy/DefaultApplicationDeployer.java", "dubbo-config/dubbo-config-api/src/test/java/org/apache/dubbo/config/deploy/DefaultApplicationDeployerTest.java", "dubbo-metrics/dubbo-metrics-default/src/main/java/org/apache/dubbo/metrics/collector/DefaultMetricsCollector.java"]
Dubbo 3.2.0 fails to start with an error
- [ ] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate.

### Environment

* Dubbo version: 3.2.0
* Operating System version: Windows 10
* Java version: 17

### Steps to reproduce this issue

Spring Boot version 3.0.6; the application fails during startup.

### Expected Behavior

The application starts normally.

### Actual Behavior

Startup fails with the following exception trace:

```
java.lang.NoClassDefFoundError: io/prometheus/client/exporter/HttpConnectionFactory
	at org.apache.dubbo.metrics.prometheus.PrometheusMetricsReporterFactory.createMetricsReporter(PrometheusMetricsReporterFactory.java:43)
	at org.apache.dubbo.metrics.report.MetricsReporterFactory$Adaptive.createMetricsReporter(MetricsReporterFactory$Adaptive.java)
	at org.apache.dubbo.config.deploy.DefaultApplicationDeployer.initMetricsReporter(DefaultApplicationDeployer.java:392)
	at org.apache.dubbo.config.deploy.DefaultApplicationDeployer.initialize(DefaultApplicationDeployer.java:216)
	at org.apache.dubbo.config.deploy.DefaultModuleDeployer.prepare(DefaultModuleDeployer.java:488)
	at org.apache.dubbo.config.spring.context.DubboConfigApplicationListener.initDubboConfigBeans(DubboConfigApplicationListener.java:73)
	at org.apache.dubbo.config.spring.context.DubboConfigApplicationListener.onApplicationEvent(DubboConfigApplicationListener.java:59)
	at org.apache.dubbo.config.spring.context.DubboConfigApplicationListener.onApplicationEvent(DubboConfigApplicationListener.java:37)
	at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:176)
	at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:169)
	at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:143)
	at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:131)
	at org.springframework.context.support.AbstractApplicationContext.registerListeners(AbstractApplicationContext.java:880)
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:581)
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:146)
	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:732)
	at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:434)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:310)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1304)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1293)
	at com.crown.amish.user.UserApplication.main(UserApplication.java:17)
Caused by: java.lang.ClassNotFoundException: io.prometheus.client.exporter.HttpConnectionFactory
	at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641)
	at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)
	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:520)
	... 21 common frames omitted
```
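The trace shows startup aborting inside initMetricsReporter because an optional Prometheus exporter class is absent from the classpath. As an illustration only (the helper name below is hypothetical, and this may differ from the actual fix), reporter creation could be treated as best-effort so a missing optional dependency does not fail the whole deployment:

```java
// Hedged sketch: keep metrics optional. `createReporterInternal` is a hypothetical stand-in
// for the existing reporter-creation logic in DefaultApplicationDeployer#initMetricsReporter.
private void initMetricsReporter() {
    try {
        createReporterInternal();
    } catch (NoClassDefFoundError | RuntimeException e) {
        // Missing exporter classes should degrade to "no metrics", not a failed startup.
        logger.warn("Metrics reporter disabled, optional exporter dependency missing: " + e);
    }
}
```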
https://github.com/apache/dubbo/issues/12321
https://github.com/apache/dubbo/pull/12349
6bd2174f01ef266c652845da5d7e520b6e486df6
c22db81a8bd26fb1a1cd0e41b04cb0bbe421cec5
2023-05-16T03:44:12Z
java
2023-05-19T07:30:18Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,320
["README.md"]
Some Flaky tests
- [X] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate.

### Environment

* Dubbo version: xxx
* Operating System version: ubuntu 22.04
* Java version: 11

### Comments

Hello,

We tried running your project and discovered that it contains some flaky tests (i.e., tests that nondeterministically pass and fail). We found these tests to fail more frequently when running them on certain machines of ours.

To prevent others from running this project and its tests on machines that may produce flaky tests, we suggest adding information to the README.md file indicating the minimum resource configuration for running the tests of this project, so as to prevent observation of test flakiness.

If we run this project on a machine with 1 CPU and 1 GB RAM, we observe flaky tests. We found that the tests in this project did not have any flaky tests when we ran them on machines with 2 CPUs and 2 GB RAM.

Here is a list of the tests we have identified and their likelihood of failure on a system with less than the recommended 2 CPUs and 2 GB RAM:

1. org.apache.dubbo.rpc.cluster.support.MergeableClusterInvokerTest#testGetMenuSuccessfully (4 out of 10)
2. org.apache.dubbo.rpc.protocol.dubbo.RpcFilterTest#testRpcFilter (1 out of 10)
3. org.apache.dubbo.rpc.filter.ExecuteLimitFilterTest#testMoreThanExecuteLimitInvoke (2 out of 10)
4. org.apache.dubbo.rpc.filter.ActiveLimitFilterTest#testInvokeGreaterActives (1 out of 10)
5. org.apache.dubbo.rpc.protocol.rest.SpringMvcRestProtocolTest#testRestProtocol (7 out of 10)
6. org.apache.dubbo.rpc.protocol.rest.SpringMvcRestProtocolTest#testRestProtocolWithContextPath (7 out of 10)
7. org.apache.dubbo.rpc.protocol.dubbo.MultiThreadTest#testDubboMultiThreadInvoke (4 out of 10)

Please let me know if you would like us to create a pull request on this matter (possibly to the readme of this project). Thank you for your attention to this matter. We hope that our recommendations will be helpful in improving the quality and performance of your project, especially for others to use.

## Reproducing

~~~Dockerfile
FROM maven:3.5.4-jdk-11
WORKDIR /home/
RUN git clone https://github.com/apache/dubbo && \
    cd dubbo && \
    git checkout acd4212f597813411b6634377f96ac2bb3888a07
WORKDIR /home/dubbo
RUN mvn install -DskipTests
ENTRYPOINT ["mvn", "test", "-fn"]
~~~

Build the image:

~~~bash
$> mkdir tmp
$> cp Dockerfile tmp
$> cd tmp
$> docker build -t dubbo .  # estimated time of build 3m
~~~

Running: this configuration likely prevents flakiness (no flakiness in 10 runs)

~~~bash
$> docker run --rm --memory=2g --cpus=2 --memory-swap=-1 dubbo | tee output.txt
$> grep "Failures:" output.txt # checking results
~~~

This other configuration – similar to the previous – can't prevent flaky tests (observation in 10 runs)

~~~bash
$> docker run --rm --memory=1g --cpus=1 --memory-swap=-1 dubbo | tee output2.txt
$> grep "Failures:" output2.txt # checking results
~~~
https://github.com/apache/dubbo/issues/12320
https://github.com/apache/dubbo/pull/12391
9eec5973fc5261c8ad08103aee9e16e5313c3e16
c7bc74557cfa037d6209b10764233cf1ce32eef8
2023-05-15T18:59:10Z
java
2023-05-29T06:48:24Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,315
["dubbo-config/dubbo-config-api/src/main/java/org/apache/dubbo/config/deploy/DefaultModuleDeployer.java"]
DefaultModuleDeployer start fails, but MetadataService is still exported
### Environment

* Dubbo version: 3.2.0
* Operating System version: macOS 10.15.7
* Java version: 1.8

### Steps to reproduce this issue

Make the module start fail in any way.

### Expected Behavior

When the module fails to start, MetadataService should not be exported; that is, DefaultApplicationDeployer#prepareApplicationInstance should not be executed.

### Actual Behavior

org.apache.dubbo.config.deploy.DefaultModuleDeployer#onModuleFailed passes the wrong state STARTED, which causes DefaultApplicationDeployer#prepareApplicationInstance to be executed anyway:

```java
private void onModuleFailed(String msg, Throwable ex) {
    try {
        setFailed(ex);
        logger.error(CONFIG_FAILED_START_MODEL, "", "", "Model start failed: " + msg, ex);
        applicationDeployer.notifyModuleChanged(moduleModel, DeployState.STARTED);
    } finally {
        completeStartFuture(false);
    }
}
```
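A minimal sketch of the direction the report points at — notify the application deployer with a failure state rather than STARTED. Whether DeployState.FAILED is the exact constant used by the actual fix is an assumption here:

```java
// Hedged sketch: report the module as failed so the application deployer does not go on
// to prepare the application instance (and export MetadataService) for a broken module.
private void onModuleFailed(String msg, Throwable ex) {
    try {
        setFailed(ex);
        logger.error(CONFIG_FAILED_START_MODEL, "", "", "Model start failed: " + msg, ex);
        applicationDeployer.notifyModuleChanged(moduleModel, DeployState.FAILED); // assumed constant
    } finally {
        completeStartFuture(false);
    }
}
```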
https://github.com/apache/dubbo/issues/12315
https://github.com/apache/dubbo/pull/12316
1511ca09912492309bc2a093863eb969223f84b2
5fb15a130d8f3b5503c93afbeb9892eaa0238f7a
2023-05-15T08:51:26Z
java
2023-05-18T09:45:14Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,290
["dubbo-plugin/dubbo-qos/src/main/java/org/apache/dubbo/qos/command/impl/InvokeTelnet.java"]
The telnet invoke command does not clean up the ThreadLocal, causing request/response pairing errors
- [x] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate.

### Environment

* Dubbo version: 3.2.0, 2.7.x, etc.
* Operating System version: Windows 10 (irrelevant)
* Java version: Java 8 (irrelevant)

### Steps to reproduce this issue

1. Write the interface implementation class on the provider side and use RpcContext.startAsync() for asynchronous operations.
2. On the consumer side, call this interface in a loop and verify whether the returned result corresponds to the request parameter.
3. Then use the telnet invoke command to call this interface and observe whether the consumer side encounters an incorrect request/result pairing.
4. Then close the consumer and observe whether the AsyncContext value returned by RpcContext.startAsync() on the provider side is the same object across multiple telnet invoke executions.

### Expected Behavior

1. The consumer always gets the right response.
2. RpcContext.startAsync() returns a different instance for each request.

### Actual Behavior

1. The consumer gets a mismatched response (only appeared in versions before 3.x).
2. RpcContext.startAsync() returns the same object across multiple requests.

---

When using RpcContext.startAsync() to return results asynchronously on the provider side, the context (RpcContext, AsyncContext, etc.) is stored in a ThreadLocal by default. Under normal circumstances, the ContextFilter clears the data in the ThreadLocal. However, when the request is triggered by telnet invoke, ContextFilter is not executed, so the ThreadLocal is not cleaned up. When this thread handles the next request, it obtains the context of the previous request, leading to problems.

In versions before 3.x, this can cause consumers to get the wrong data. For example, request B gets the result of request A (same interface, same method), or a request for calling interface C gets the result of interface D (different interface, different method).

In the 3.x version, since the invoke command is assigned to QoS and executed in a separate thread pool, there is no issue of response pairing errors with normal consumer requests (if they can later be configured to share the same thread pool, this risk remains). However, it is still unreasonable not to clean up the ThreadLocal after use.

In addition, telnet invoke cannot obtain results asynchronously, which makes operations and maintenance harder. It could be optimized to obtain the result synchronously, making it easier to troubleshoot problems through telnet invoke.
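A minimal sketch of the cleanup idea described above, assuming the QoS/telnet invoke handler can wrap its call in a try/finally; `doTelnetInvoke` is a hypothetical helper and `RpcContext.removeContext()` is used for illustration, so this may not be the exact call the real fix performs:

```java
// Hedged sketch: clear the per-thread RPC context state after a telnet/QoS invoke completes,
// mirroring what ContextFilter does for regular requests, so the worker thread does not
// leak the previous invocation's RpcContext/AsyncContext to the next request.
Object doTelnetInvoke(Invoker<?> invoker, RpcInvocation invocation) {
    try {
        return invoker.invoke(invocation).recreate();
    } catch (Throwable t) {
        throw new RuntimeException("telnet invoke failed", t);
    } finally {
        RpcContext.removeContext(); // drop ThreadLocal state left behind by the invocation
    }
}
```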
https://github.com/apache/dubbo/issues/12290
https://github.com/apache/dubbo/pull/12291
7c66d1a968d0388dc3fdbc8e2ff7ee54ffdc2c17
c916b82ab5fbd2ad9afedbed453c36a2a9865564
2023-05-10T16:27:20Z
java
2023-05-11T11:07:45Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,248
["dubbo-rpc/dubbo-rpc-api/src/main/java/org/apache/dubbo/rpc/filter/GenericFilter.java", "dubbo-rpc/dubbo-rpc-api/src/main/java/org/apache/dubbo/rpc/filter/GenericImplFilter.java"]
A generic call in bean mode that passes a null parameter reports an NPE
### Environment

* Dubbo version: 3.1
* Operating System version: macOS (M1)
* Java version: 1.8

### Steps to reproduce this issue

1. Use the following code to make a generic call in bean mode.
2. Interface:

```java
public interface DemoService {
    String sayHello(String name);
}
```

3. Generic reference:

```java
ReferenceConfig<GenericService> referenceConfig = new ReferenceConfig<>();
referenceConfig.setInterface(DemoService.class.getCanonicalName());
applicationConfig.setRegistry(registryConfig);
referenceConfig.setApplication(applicationConfig);
referenceConfig.setGeneric(GENERIC_SERIALIZATION_BEAN);
GenericService genericService = referenceConfig.get();
JavaBeanDescriptor result = (JavaBeanDescriptor) genericService.$invoke(
    "sayHello",
    new String[]{"java.lang.String"},
    new Object[]{JavaBeanSerializeUtil.serialize(null)});
```

### Expected Behavior

The call succeeds.

### Actual Behavior

If there is an exception, please attach the exception trace:

```
Caused by: java.lang.NullPointerException
	at org.apache.dubbo.rpc.filter.GenericImplFilter.invoke(GenericImplFilter.java:126)
	at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:327)
	at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CallbackRegistrationInvoker.invoke(FilterChainBuilder.java:194)
	at org.apache.dubbo.rpc.protocol.ReferenceCountInvokerWrapper.invoke(ReferenceCountInvokerWrapper.java:78)
	at org.apache.dubbo.rpc.cluster.support.AbstractClusterInvoker.invokeWithContext(AbstractClusterInvoker.java:379)
	at org.apache.dubbo.rpc.cluster.support.FailoverClusterInvoker.doInvoke(FailoverClusterInvoker.java:81)
```
https://github.com/apache/dubbo/issues/12248
https://github.com/apache/dubbo/pull/12249
da9fbbdbf8b46a35af9a202f733b5c4d11f6b807
838b7c9c4b24b5ab3f93607cca4cd49ef7aed4de
2023-05-07T07:56:36Z
java
2023-05-07T23:41:04Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,202
["dubbo-remoting/dubbo-remoting-netty4/src/main/java/org/apache/dubbo/remoting/transport/netty4/NettyChannelHandler.java", "dubbo-remoting/dubbo-remoting-netty4/src/main/java/org/apache/dubbo/remoting/transport/netty4/NettyPortUnificationServer.java", "dubbo-remoting/dubbo-remoting-netty4/src/main/java/org/apache/dubbo/remoting/transport/netty4/NettyPortUnificationServerHandler.java"]
Memory leak after running for two or three days following an upgrade from Dubbo 2.7.x to 3.2.x
After upgrading from Dubbo 2.7.x to 3.2.x, the application develops a memory leak after running for two or three days. The leak is in the private static member variable CHANNEL_MAP of org.apache.dubbo.remoting.transport.netty4.NettyChannel, which ends up holding more than 550,000 entries, about 1.1 GB in size. Eventually AbstractInvoker.waitForResultIfSync (backed by threadlessExecutor.waitAndDrain) starts to stall: for example, a service call that normally returns within 100 ms begins, after two or three days of uptime, to hit a two-to-three-second stall with roughly 20% probability.
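A minimal sketch of the kind of cleanup the report implies is missing — evicting entries from NettyChannel's static CHANNEL_MAP when a connection goes inactive. The handler shape below is an assumption for illustration, not the actual patch:

```java
// Hedged sketch: remove the NettyChannel entry for a closed connection so the static
// CHANNEL_MAP cannot grow without bound as connections churn over several days.
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
    NettyChannel channel = NettyChannel.getOrAddChannel(ctx.channel(), url, handler);
    try {
        handler.disconnected(channel);
    } finally {
        NettyChannel.removeChannelIfDisconnected(ctx.channel()); // evict the dead channel
        super.channelInactive(ctx);
    }
}
```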
https://github.com/apache/dubbo/issues/12202
https://github.com/apache/dubbo/pull/12212
3552347d44d700b7413650dc279c90a91cf25504
830c460c0a2db7ad6d5caed9a0b7e2890952da77
2023-04-27T02:19:37Z
java
2023-04-28T09:14:01Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,153
[".licenserc.yaml", "NOTICE", "dubbo-metrics/dubbo-metrics-api/src/main/java/org/apache/dubbo/metrics/aggregate/DubboAbstractTDigest.java", "dubbo-metrics/dubbo-metrics-api/src/main/java/org/apache/dubbo/metrics/aggregate/DubboMergingDigest.java", "dubbo-metrics/dubbo-metrics-api/src/main/java/org/apache/dubbo/metrics/aggregate/TimeWindowQuantile.java", "dubbo-metrics/dubbo-metrics-api/src/test/java/org/apache/dubbo/metrics/aggregate/TimeWindowQuantileTest.java", "pom.xml"]
Dubbo 3.2: enabling metrics reporting throws ArrayIndexOutOfBoundsException
- [ ] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate.

### Environment

* Dubbo version: 3.2.0
* Operating System version: Linux
* Java version: JDK 1.8

### Steps to reproduce this issue

1. Turn on metrics:

```yaml
metrics:
  protocol: prometheus
  export-service-port: x0003
  enable-jvm: true
  enable-threadpool: true
  enable-registry: true
  enable-metadata: true
```

2. Run a pressure test with 100 threads.

If there is an exception, please attach the exception trace:

```
Caused by: java.lang.ArrayIndexOutOfBoundsException: null
	com.tdunning.math.stats.MergingDigest.merge(MergingDigest.java:381)
	com.tdunning.math.stats.MergingDigest.mergeNewValues(MergingDigest.java:363)
	com.tdunning.math.stats.MergingDigest.mergeNewValues(MergingDigest.java:353)
	com.tdunning.math.stats.MergingDigest.add(MergingDigest.java:259)
	com.tdunning.math.stats.MergingDigest.add(MergingDigest.java:251)
	com.tdunning.math.stats.AbstractTDigest.add(AbstractTDigest.java:129)
	org.apache.dubbo.metrics.aggregate.TimeWindowQuantile.add(TimeWindowQuantile.java:51)
	org.apache.dubbo.metrics.collector.AggregateMetricsCollector.onRTEvent(AggregateMetricsCollector.java:92)
	org.apache.dubbo.metrics.collector.AggregateMetricsCollector.onEvent(AggregateMetricsCollector.java:82)
	org.apache.dubbo.metrics.event.SimpleMetricsEventMulticaster.publishEvent(SimpleMetricsEventMulticaster.java:48)
	org.apache.dubbo.metrics.collector.sample.MethodMetricsSampler.lambda$rtConfigure$3(MethodMetricsSampler.java:61)
	org.apache.dubbo.metrics.collector.sample.SimpleMetricsCountSampler.addRT(SimpleMetricsCountSampler.java:120)
	org.apache.dubbo.metrics.filter.MethodMetricsInterceptor.rtTime(MethodMetricsInterceptor.java:97)
	org.apache.dubbo.metrics.filter.MethodMetricsInterceptor.onCompleted(MethodMetricsInterceptor.java:101)
	org.apache.dubbo.metrics.filter.MethodMetricsInterceptor.afterMethod(MethodMetricsInterceptor.java:58)
	org.apache.dubbo.metrics.filter.MetricsFilter.onResponse(MetricsFilter.java:64)
	org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CallbackRegistrationInvoker.lambda$invoke$1(FilterChainBuilder.java:218)
	org.apache.dubbo.rpc.AsyncRpcResult.lambda$whenCompleteWithContext$0(AsyncRpcResult.java:228)
	java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
	java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:792)
	java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2153)
	org.apache.dubbo.rpc.AsyncRpcResult.whenCompleteWithContext(AsyncRpcResult.java:224)
	org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CallbackRegistrationInvoker.invoke(FilterChainBuilder.java:195)
	org.apache.dubbo.rpc.protocol.ReferenceCountInvokerWrapper.invoke(ReferenceCountInvokerWrapper.java:78)
	org.apache.dubbo.rpc.cluster.support.AbstractClusterInvoker.invokeWithContext(AbstractClusterInvoker.java:380)
	org.apache.dubbo.rpc.cluster.support.FailoverClusterInvoker.doInvoke(FailoverClusterInvoker.java:81)
	org.apache.dubbo.rpc.cluster.support.AbstractClusterInvoker.invoke(AbstractClusterInvoker.java:341)
	org.apache.dubbo.rpc.cluster.router.RouterSnapshotFilter.invoke(RouterSnapshotFilter.java:46)
	org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331)
	org.apache.dubbo.monitor.support.MonitorFilter.invoke$original$vkd7zZMh(MonitorFilter.java:101)
	org.apache.dubbo.monitor.support.MonitorFilter.invoke$original$vkd7zZMh$accessor$y8nuUX1o(MonitorFilter.java)
	org.apache.dubbo.monitor.support.MonitorFilter$auxiliary$YZgq7bd2.call(Unknown Source)
	org.apache.skywalking.apm.agent.core.plugin.interceptor.enhance.InstMethodsInter.intercept(InstMethodsInter.java:86)
	org.apache.dubbo.monitor.support.MonitorFilter.invoke(MonitorFilter.java)
	org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331)
	org.apache.dubbo.rpc.cluster.filter.support.MetricsClusterFilter.invoke(MetricsClusterFilter.java:51)
	org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331)
	org.apache.dubbo.rpc.protocol.dubbo.filter.FutureFilter.invoke(FutureFilter.java:52)
	org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331)
	org.apache.dubbo.rpc.cluster.specifyaddress.AddressSpecifyClusterFilter.invoke(AddressSpecifyClusterFilter.java:40)
	org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331)
	org.apache.dubbo.spring.security.filter.ContextHolderParametersSelectedTransferFilter.invoke(ContextHolderParametersSelectedTransferFilter.java:41)
	org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331)
	org.apache.dubbo.rpc.cluster.filter.support.ConsumerClassLoaderFilter.invoke(ConsumerClassLoaderFilter.java:40)
	org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331)
	com.mea.pay.common.dubbo.DubboConsumerContextFilter.invoke(DubboConsumerContextFilter.java:90)
	org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331)
	org.apache.dubbo.rpc.cluster.filter.support.ConsumerContextFilter.invoke(ConsumerContextFilter.java:118)
	org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:331)
	org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CallbackRegistrationInvoker.invoke(FilterChainBuilder.java:194)
	org.apache.dubbo.rpc.cluster.support.wrapper.AbstractCluster$ClusterFilterInvoker.invoke(AbstractCluster.java:91)
	org.apache.dubbo.rpc.cluster.support.wrapper.MockClusterInvoker.invoke(MockClusterInvoker.java:103)
	com.mea.pay.common.dubbo.TagInvoker.invoke(TagInvoker.java:45)
	org.apache.dubbo.rpc.cluster.support.wrapper.ScopeClusterInvoker.invoke(ScopeClusterInvoker.java:131)
	org.apache.dubbo.registry.client.migration.MigrationInvoker.invoke(MigrationInvoker.java:284)
	... 91 common frames omitted
```
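The trace points at concurrent writers racing inside the shared t-digest that TimeWindowQuantile delegates to; MergingDigest is not thread-safe, so two RPC threads recording response times at once can corrupt its internal merge buffers. A minimal illustration of the general remedy — serializing access to the digest — is sketched below; the actual fix in the linked pull request may take a different shape (for example, a dedicated thread-safe digest implementation):

```java
import com.tdunning.math.stats.MergingDigest;
import com.tdunning.math.stats.TDigest;

// Hedged sketch: a wrapper that serializes add()/quantile() so many concurrent RPC threads
// cannot corrupt the MergingDigest's internal merge buffers while recording response times.
final class SynchronizedDigest {
    private final TDigest digest = new MergingDigest(100.0); // compression value is illustrative

    synchronized void add(double value) {
        digest.add(value);
    }

    synchronized double quantile(double q) {
        return digest.quantile(q);
    }
}
```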
https://github.com/apache/dubbo/issues/12153
https://github.com/apache/dubbo/pull/12223
c5f8ce64787cbb17628ac5176947d46a457ee0ee
8b65828767ece40aa54fee1bd155b3f7a6185fac
2023-04-21T01:41:50Z
java
2023-05-06T15:02:01Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,132
["dubbo-metrics/dubbo-metrics-api/src/main/java/org/apache/dubbo/metrics/model/key/MetricsKey.java", "dubbo-metrics/dubbo-metrics-default/src/main/java/org/apache/dubbo/metrics/collector/AggregateMetricsCollector.java"]
Add P50 and P90 RT metrics
- [ ] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate.

## Describe the proposal
https://github.com/apache/dubbo/issues/12132
https://github.com/apache/dubbo/pull/12156
b7fc2b9a6a6e281a3e2fdd32a3a41e83910173e1
a0e43cbcfe5fb8fd54635ef93adb08bb7fa96a51
2023-04-19T06:17:59Z
java
2023-05-18T07:38:52Z
closed
apache/dubbo
https://github.com/apache/dubbo
12,131
["dubbo-common/src/main/java/org/apache/dubbo/config/AbstractInterfaceConfig.java", "dubbo-config/dubbo-config-api/src/main/java/org/apache/dubbo/config/deploy/DefaultMetricsServiceExporter.java", "dubbo-metrics/dubbo-metrics-default/src/main/java/org/apache/dubbo/metrics/collector/HistogramMetricsCollector.java"]
Enable aggregation and histogram by default
- [ ] I have searched the [issues](https://github.com/apache/dubbo/issues) of this repository and believe that this is not a duplicate.

## Describe the proposal

@wxbty PTAL
https://github.com/apache/dubbo/issues/12131
https://github.com/apache/dubbo/pull/12137
3fd0a5a922f42c8914a70d0a89134e9f55782dd9
55a8940ad1a98d8ad78da67d8b73856768e09a53
2023-04-19T06:17:27Z
java
2023-06-01T09:19:25Z