diff (stringlengths 41–2.03M) | msg (stringlengths 1–1.5k, ⌀) | repo (stringlengths 5–40) | sha (stringlengths 40–40) | time (stringlengths 20–20)
---|---|---|---|---|
mmm a / tensorflow / opensource_only . files <nl> ppp b / tensorflow / opensource_only . files <nl> <nl> - tensorflow / api_template . __init__ . py <nl> tensorflow / __init__ . py <nl> + tensorflow / api_template . __init__ . py <nl> tensorflow / api_template_v1 . __init__ . py <nl> tensorflow / compat_template . __init__ . py <nl> tensorflow / compat_template_v1 . __init__ . py <nl> tensorflow / contrib / mpi / BUILD <nl> tensorflow / python / autograph / core / config . py <nl> - tensorflow / python / tpu / profiler / pip_package / README <nl> - tensorflow / python / tpu / profiler / pip_package / setup . py <nl> tensorflow / python / tpu / profiler / pip_package / BUILD <nl> + tensorflow / python / tpu / profiler / pip_package / README <nl> tensorflow / python / tpu / profiler / pip_package / build_pip_package . sh <nl> + tensorflow / python / tpu / profiler / pip_package / setup . py <nl> tensorflow / stream_executor / build_defs . bzl <nl> - tensorflow / third_party / __init__ . py <nl> tensorflow / third_party / BUILD <nl> + tensorflow / third_party / __init__ . py <nl> tensorflow / third_party / android / BUILD <nl> - tensorflow / third_party / android / android_configure . BUILD . tpl <nl> tensorflow / third_party / android / android . bzl . tpl <nl> + tensorflow / third_party / android / android_configure . BUILD . tpl <nl> tensorflow / third_party / android / android_configure . bzl <nl> - tensorflow / third_party / astor . BUILD <nl> tensorflow / third_party / arm_neon_2_x86_sse . BUILD <nl> + tensorflow / third_party / astor . BUILD <nl> tensorflow / third_party / backports_weakref . BUILD <nl> tensorflow / third_party / boringssl / BUILD <nl> tensorflow / third_party / clang_toolchain / BUILD <nl> tensorflow / third_party / clang_toolchain / cc_configure_clang . bzl <nl> tensorflow / third_party / clang_toolchain / download_clang . bzl <nl> - tensorflow / third_party / cub . BUILD <nl> tensorflow / third_party / codegen . 
BUILD <nl> + tensorflow / third_party / com_google_absl . BUILD <nl> tensorflow / third_party / common . bzl <nl> + tensorflow / third_party / cub . BUILD <nl> tensorflow / third_party / curl . BUILD <nl> tensorflow / third_party / cython . BUILD <nl> + tensorflow / third_party / double_conversion . BUILD <nl> tensorflow / third_party / eigen . BUILD <nl> - tensorflow / third_party / eigen3 / Eigen / LU <nl> - tensorflow / third_party / eigen3 / Eigen / Eigenvalues <nl> - tensorflow / third_party / eigen3 / Eigen / SVD <nl> - tensorflow / third_party / eigen3 / Eigen / Core <nl> + tensorflow / third_party / eigen3 / BUILD <nl> tensorflow / third_party / eigen3 / Eigen / Cholesky <nl> + tensorflow / third_party / eigen3 / Eigen / Core <nl> + tensorflow / third_party / eigen3 / Eigen / Eigenvalues <nl> + tensorflow / third_party / eigen3 / Eigen / LU <nl> tensorflow / third_party / eigen3 / Eigen / QR <nl> - tensorflow / third_party / eigen3 / BUILD <nl> + tensorflow / third_party / eigen3 / Eigen / SVD <nl> tensorflow / third_party / eigen3 / LICENSE <nl> tensorflow / third_party / eigen3 / gpu_packet_math . patch <nl> tensorflow / third_party / eigen3 / unsupported / Eigen / CXX11 / Tensor <nl> - tensorflow / third_party / eigen3 / unsupported / Eigen / CXX11 / ThreadPool <nl> tensorflow / third_party / eigen3 / unsupported / Eigen / CXX11 / FixedPoint <nl> - tensorflow / third_party / eigen3 / unsupported / Eigen / CXX11 / src / FixedPoint / MatMatProduct . h <nl> - tensorflow / third_party / eigen3 / unsupported / Eigen / CXX11 / src / FixedPoint / PacketMathAVX2 . h <nl> + tensorflow / third_party / eigen3 / unsupported / Eigen / CXX11 / ThreadPool <nl> tensorflow / third_party / eigen3 / unsupported / Eigen / CXX11 / src / FixedPoint / FixedPointTypes . h <nl> tensorflow / third_party / eigen3 / unsupported / Eigen / CXX11 / src / FixedPoint / MatMatProductNEON . 
h <nl> + tensorflow / third_party / eigen3 / unsupported / Eigen / CXX11 / src / FixedPoint / MatMatProduct . h <nl> + tensorflow / third_party / eigen3 / unsupported / Eigen / CXX11 / src / FixedPoint / MatMatProductAVX2 . h <nl> + tensorflow / third_party / eigen3 / unsupported / Eigen / CXX11 / src / FixedPoint / PacketMathAVX2 . h <nl> tensorflow / third_party / eigen3 / unsupported / Eigen / CXX11 / src / FixedPoint / MatVecProduct . h <nl> tensorflow / third_party / eigen3 / unsupported / Eigen / CXX11 / src / FixedPoint / TypeCastingAVX2 . h <nl> tensorflow / third_party / eigen3 / unsupported / Eigen / CXX11 / src / FixedPoint / TypeCastingAVX512 . h <nl> tensorflow / third_party / eigen3 / unsupported / Eigen / CXX11 / src / FixedPoint / PacketMathAVX512 . h <nl> - tensorflow / third_party / eigen3 / unsupported / Eigen / CXX11 / src / FixedPoint / MatMatProductAVX2 . h <nl> - tensorflow / third_party / eigen3 / unsupported / Eigen / SpecialFunctions <nl> tensorflow / third_party / eigen3 / unsupported / Eigen / MatrixFunctions <nl> - tensorflow / third_party / double_conversion . BUILD <nl> - tensorflow / third_party / com_google_absl . BUILD <nl> - tensorflow / third_party / fft2d / LICENSE <nl> - tensorflow / third_party / fft2d / fft2d . h <nl> - tensorflow / third_party / fft2d / fft2d . BUILD <nl> - tensorflow / third_party / fft2d / fft . h <nl> - tensorflow / third_party / fft2d / BUILD <nl> + tensorflow / third_party / eigen3 / unsupported / Eigen / SpecialFunctions <nl> tensorflow / third_party / enum34 . BUILD <nl> tensorflow / third_party / farmhash . BUILD <nl> - tensorflow / third_party / gif . BUILD <nl> + tensorflow / third_party / fft2d / BUILD <nl> + tensorflow / third_party / fft2d / LICENSE <nl> + tensorflow / third_party / fft2d / fft . h <nl> + tensorflow / third_party / fft2d / fft2d . BUILD <nl> + tensorflow / third_party / fft2d / fft2d . h <nl> + tensorflow / third_party / functools32 . BUILD <nl> tensorflow / third_party / gast . 
BUILD <nl> + tensorflow / third_party / git / BUILD . tpl <nl> tensorflow / third_party / git / BUILD <nl> tensorflow / third_party / git / git_configure . bzl <nl> - tensorflow / third_party / git / BUILD . tpl <nl> - tensorflow / third_party / functools32 . BUILD <nl> - tensorflow / third_party / googleapis . BUILD <nl> + tensorflow / third_party / gif . BUILD <nl> tensorflow / third_party / gpus / BUILD <nl> tensorflow / third_party / gpus / crosstool / BUILD <nl> tensorflow / third_party / gpus / crosstool / BUILD . tpl <nl> tensorflow / third_party / gpus / crosstool / clang / bin / crosstool_wrapper_driver_is_not_ <nl> tensorflow / third_party / gpus / crosstool / clang / bin / crosstool_wrapper_driver_rocm . tpl <nl> tensorflow / third_party / gpus / crosstool / windows / msvc_wrapper_for_nvcc . py . tpl <nl> tensorflow / third_party / gpus / cuda / BUILD <nl> - tensorflow / third_party / gpus / cuda / LICENSE <nl> tensorflow / third_party / gpus / cuda / BUILD . tpl <nl> - tensorflow / third_party / gpus / cuda / cuda_config . h . tpl <nl> tensorflow / third_party / gpus / cuda / BUILD . windows . tpl <nl> + tensorflow / third_party / gpus / cuda / LICENSE <nl> tensorflow / third_party / gpus / cuda / build_defs . bzl . tpl <nl> + tensorflow / third_party / gpus / cuda / cuda_config . h . tpl <nl> tensorflow / third_party / gpus / find_cuda_config . py <nl> + tensorflow / third_party / gpus / cuda_configure . bzl <nl> tensorflow / third_party / gpus / rocm / BUILD <nl> - tensorflow / third_party / gpus / rocm / rocm_config . h . tpl <nl> tensorflow / third_party / gpus / rocm / BUILD . tpl <nl> tensorflow / third_party / gpus / rocm / build_defs . bzl . tpl <nl> - tensorflow / third_party / gpus / cuda_configure . bzl <nl> + tensorflow / third_party / gpus / rocm / rocm_config . h . tpl <nl> tensorflow / third_party / gpus / rocm_configure . bzl <nl> + tensorflow / third_party / googleapis . 
BUILD <nl> tensorflow / third_party / grpc / BUILD <nl> tensorflow / third_party / icu / udata . patch <nl> tensorflow / third_party / kafka / BUILD <nl> tensorflow / third_party / kafka / config . patch <nl> tensorflow / third_party / jsoncpp . BUILD <nl> tensorflow / third_party / libxsmm . BUILD <nl> - tensorflow / third_party / llvm / expand_cmake_vars . py <nl> - tensorflow / third_party / llvm / llvm . autogenerated . BUILD <nl> + tensorflow / third_party / linenoise . BUILD <nl> tensorflow / third_party / llvm / BUILD <nl> + tensorflow / third_party / llvm / llvm . autogenerated . BUILD <nl> + tensorflow / third_party / llvm / expand_cmake_vars . py <nl> tensorflow / third_party / llvm / llvm . bzl <nl> - tensorflow / third_party / linenoise . BUILD <nl> + tensorflow / third_party / lmdb . BUILD <nl> tensorflow / third_party / mkl / BUILD <nl> - tensorflow / third_party / mkl / build_defs . bzl <nl> tensorflow / third_party / mkl / LICENSE <nl> tensorflow / third_party / mkl / MKL_LICENSE <nl> + tensorflow / third_party / mkl / build_defs . bzl <nl> tensorflow / third_party / mkl / mkl . BUILD <nl> - tensorflow / third_party / lmdb . BUILD <nl> - tensorflow / third_party / mkl_dnn / mkldnn . BUILD <nl> tensorflow / third_party / mkl_dnn / LICENSE <nl> + tensorflow / third_party / mkl_dnn / mkldnn . BUILD <nl> tensorflow / third_party / mpi / . gitignore <nl> tensorflow / third_party / mpi / BUILD <nl> tensorflow / third_party / mpi_collectives / BUILD <nl> tensorflow / third_party / nanopb . BUILD <nl> tensorflow / third_party / nccl / BUILD <nl> - tensorflow / third_party / nccl / archive . patch <nl> tensorflow / third_party / nccl / LICENSE <nl> - tensorflow / third_party / nccl / nccl_configure . bzl <nl> - tensorflow / third_party / nccl / system . BUILD . tpl <nl> tensorflow / third_party / nccl / archive . BUILD <nl> + tensorflow / third_party / nccl / archive . patch <nl> + tensorflow / third_party / nccl / nccl_configure . 
bzl <nl> tensorflow / third_party / nccl / build_defs . bzl . tpl <nl> + tensorflow / third_party / nccl / system . BUILD . tpl <nl> + tensorflow / third_party / ngraph / BUILD <nl> tensorflow / third_party / ngraph / LICENSE <nl> tensorflow / third_party / ngraph / NGRAPH_LICENSE <nl> - tensorflow / third_party / ngraph / BUILD <nl> tensorflow / third_party / ngraph / build_defs . bzl <nl> - tensorflow / third_party / ngraph / ngraph . BUILD <nl> - tensorflow / third_party / ngraph / tbb . BUILD <nl> tensorflow / third_party / ngraph / ngraph_tf . BUILD <nl> tensorflow / third_party / ngraph / nlohmann_json . BUILD <nl> + tensorflow / third_party / ngraph / ngraph . BUILD <nl> + tensorflow / third_party / ngraph / tbb . BUILD <nl> tensorflow / third_party / opt_einsum . BUILD <nl> - tensorflow / third_party / png_fix_rpi . patch <nl> tensorflow / third_party / pcre . BUILD <nl> - tensorflow / third_party / pprof . BUILD <nl> tensorflow / third_party / png . BUILD <nl> + tensorflow / third_party / png_fix_rpi . patch <nl> + tensorflow / third_party / pprof . BUILD <nl> tensorflow / third_party / protobuf / BUILD <nl> tensorflow / third_party / py / BUILD <nl> tensorflow / third_party / py / BUILD . tpl <nl> tensorflow / third_party / py / numpy / BUILD <nl> tensorflow / third_party / py / python_configure . bzl <nl> - tensorflow / third_party / python_runtime / BUILD <nl> tensorflow / third_party / pybind11 . BUILD <nl> + tensorflow / third_party / python_runtime / BUILD <nl> tensorflow / third_party / repo . bzl <nl> - tensorflow / third_party / sycl / crosstool / BUILD <nl> tensorflow / third_party / six . BUILD <nl> - tensorflow / third_party / swig . BUILD <nl> - tensorflow / third_party / sqlite . BUILD <nl> tensorflow / third_party / snappy . BUILD <nl> - tensorflow / third_party / systemlibs / absl_py . BUILD <nl> - tensorflow / third_party / systemlibs / BUILD . tpl <nl> + tensorflow / third_party / sqlite . BUILD <nl> + tensorflow / third_party / swig . 
BUILD <nl> + tensorflow / third_party / sycl / crosstool / BUILD <nl> tensorflow / third_party / systemlibs / BUILD <nl> + tensorflow / third_party / systemlibs / BUILD . tpl <nl> + tensorflow / third_party / systemlibs / absl_py . BUILD <nl> + tensorflow / third_party / systemlibs / absl_py . absl . flags . BUILD <nl> tensorflow / third_party / systemlibs / absl_py . absl . testing . BUILD <nl> - tensorflow / third_party / systemlibs / cython . BUILD <nl> + tensorflow / third_party / systemlibs / astor . BUILD <nl> tensorflow / third_party / systemlibs / boringssl . BUILD <nl> tensorflow / third_party / systemlibs / build_defs . bzl . tpl <nl> - tensorflow / third_party / systemlibs / google_cloud_cpp . BUILD <nl> - tensorflow / third_party / systemlibs / absl_py . absl . flags . BUILD <nl> - tensorflow / third_party / systemlibs / gif . BUILD <nl> - tensorflow / third_party / systemlibs / swig . BUILD <nl> tensorflow / third_party / systemlibs / curl . BUILD <nl> - tensorflow / third_party / systemlibs / lmdb . BUILD <nl> + tensorflow / third_party / systemlibs / cython . BUILD <nl> + tensorflow / third_party / systemlibs / double_conversion . BUILD <nl> tensorflow / third_party / systemlibs / gast . BUILD <nl> - tensorflow / third_party / systemlibs / astor . BUILD <nl> - tensorflow / third_party / systemlibs / png . BUILD <nl> + tensorflow / third_party / systemlibs / gif . BUILD <nl> + tensorflow / third_party / systemlibs / google_cloud_cpp . BUILD <nl> tensorflow / third_party / systemlibs / google_cloud_cpp . google . cloud . bigtable . BUILD <nl> - tensorflow / third_party / systemlibs / six . BUILD <nl> + tensorflow / third_party / systemlibs / googleapis . BUILD <nl> + tensorflow / third_party / systemlibs / jsoncpp . BUILD <nl> + tensorflow / third_party / systemlibs / grpc . BUILD <nl> + tensorflow / third_party / systemlibs / lmdb . BUILD <nl> + tensorflow / third_party / systemlibs / nsync . 
BUILD <nl> tensorflow / third_party / systemlibs / opt_einsum . BUILD <nl> + tensorflow / third_party / systemlibs / png . BUILD <nl> + tensorflow / third_party / systemlibs / pcre . BUILD <nl> + tensorflow / third_party / systemlibs / protobuf . BUILD <nl> + tensorflow / third_party / systemlibs / protobuf . bzl <nl> + tensorflow / third_party / systemlibs / re2 . BUILD <nl> + tensorflow / third_party / systemlibs / six . BUILD <nl> tensorflow / third_party / systemlibs / snappy . BUILD <nl> - tensorflow / third_party / systemlibs / double_conversion . BUILD <nl> + tensorflow / third_party / systemlibs / sqlite . BUILD <nl> + tensorflow / third_party / systemlibs / swig . BUILD <nl> tensorflow / third_party / systemlibs / syslibs_configure . bzl <nl> - tensorflow / third_party / systemlibs / re2 . BUILD <nl> - tensorflow / third_party / systemlibs / zlib . BUILD <nl> - tensorflow / third_party / systemlibs / protobuf . bzl <nl> tensorflow / third_party / systemlibs / termcolor . BUILD <nl> - tensorflow / third_party / systemlibs / grpc . BUILD <nl> - tensorflow / third_party / systemlibs / sqlite . BUILD <nl> - tensorflow / third_party / systemlibs / pcre . BUILD <nl> - tensorflow / third_party / systemlibs / nsync . BUILD <nl> - tensorflow / third_party / systemlibs / googleapis . BUILD <nl> - tensorflow / third_party / systemlibs / jsoncpp . BUILD <nl> - tensorflow / third_party / systemlibs / protobuf . BUILD <nl> + tensorflow / third_party / systemlibs / zlib . BUILD <nl> tensorflow / third_party / tensorrt / BUILD <nl> tensorflow / third_party / tensorrt / BUILD . tpl <nl> tensorflow / third_party / tensorrt / LICENSE <nl> tensorflow / third_party / tensorrt / build_defs . bzl . tpl <nl> tensorflow / third_party / tensorrt / tensorrt / include / tensorrt_config . h . tpl <nl> tensorflow / third_party / tensorrt / tensorrt_configure . bzl <nl> tensorflow / third_party / termcolor . BUILD <nl> + tensorflow / third_party / tflite_mobilenet_float . 
BUILD <nl> tensorflow / third_party / tflite_mobilenet . BUILD <nl> tensorflow / third_party / tflite_mobilenet_quant . BUILD <nl> tensorflow / third_party / tflite_ovic_testdata . BUILD <nl> - tensorflow / third_party / tflite_mobilenet_float . BUILD <nl> tensorflow / third_party / tflite_smartreply . BUILD <nl> tensorflow / third_party / toolchains / BUILD <nl> - tensorflow / third_party / toolchains / clang6 / clang . BUILD <nl> - tensorflow / third_party / toolchains / clang6 / CROSSTOOL . tpl <nl> tensorflow / third_party / toolchains / clang6 / BUILD <nl> - tensorflow / third_party / toolchains / clang6 / repo . bzl <nl> + tensorflow / third_party / toolchains / clang6 / CROSSTOOL . tpl <nl> tensorflow / third_party / toolchains / clang6 / README . md <nl> + tensorflow / third_party / toolchains / clang6 / clang . BUILD <nl> + tensorflow / third_party / toolchains / clang6 / repo . bzl <nl> tensorflow / third_party / toolchains / cpus / arm / BUILD <nl> - tensorflow / third_party / toolchains / cpus / arm / arm_compiler_configure . bzl <nl> tensorflow / third_party / toolchains / cpus / arm / cc_config . bzl . tpl <nl> + tensorflow / third_party / toolchains / cpus / arm / arm_compiler_configure . bzl <nl> tensorflow / third_party / toolchains / cpus / py / BUILD <nl> tensorflow / third_party / toolchains / cpus / py3 / BUILD <nl> tensorflow / third_party / toolchains / preconfig / centos6 / cuda10 . 0 - cudnn7 / cuda / BUILD <nl> tensorflow / third_party / toolchains / preconfig / centos6 / cuda10 . 0 - cudnn7 / cuda / build_defs . bzl <nl> - tensorflow / third_party / toolchains / preconfig / centos6 / cuda10 . 1 - cudnn7 / cuda / build_defs . bzl <nl> tensorflow / third_party / toolchains / preconfig / centos6 / cuda10 . 1 - cudnn7 / cuda / BUILD <nl> + tensorflow / third_party / toolchains / preconfig / centos6 / cuda10 . 1 - cudnn7 / cuda / build_defs . 
bzl <nl> + tensorflow / third_party / toolchains / preconfig / centos6 / gcc7 / BUILD <nl> tensorflow / third_party / toolchains / preconfig / centos6 / gcc7 / cc_toolchain_config . bzl <nl> tensorflow / third_party / toolchains / preconfig / centos6 / gcc7 / dummy_toolchain . bzl <nl> - tensorflow / third_party / toolchains / preconfig / centos6 / gcc7 / BUILD <nl> - tensorflow / third_party / toolchains / preconfig / centos6 / gcc7 - nvcc - cuda10 . 0 / cc_toolchain_config . bzl <nl> tensorflow / third_party / toolchains / preconfig / centos6 / gcc7 - nvcc - cuda10 . 0 / BUILD <nl> - tensorflow / third_party / toolchains / preconfig / centos6 / gcc7 - nvcc - cuda10 . 1 / cc_toolchain_config . bzl <nl> + tensorflow / third_party / toolchains / preconfig / centos6 / gcc7 - nvcc - cuda10 . 0 / cc_toolchain_config . bzl <nl> tensorflow / third_party / toolchains / preconfig / centos6 / gcc7 - nvcc - cuda10 . 1 / BUILD <nl> + tensorflow / third_party / toolchains / preconfig / centos6 / gcc7 - nvcc - cuda10 . 1 / cc_toolchain_config . bzl <nl> tensorflow / third_party / toolchains / preconfig / centos6 / py / BUILD <nl> tensorflow / third_party / toolchains / preconfig / centos6 / py3 / BUILD <nl> tensorflow / third_party / toolchains / preconfig / centos6 / tensorrt5 / BUILD <nl> tensorflow / third_party / toolchains / preconfig / centos6 / tensorrt5 / build_defs . bzl <nl> - tensorflow / third_party / toolchains / preconfig / generate / containers . bzl <nl> - tensorflow / third_party / toolchains / preconfig / generate / workspace . bzl <nl> tensorflow / third_party / toolchains / preconfig / generate / archives . bzl <nl> - tensorflow / third_party / toolchains / preconfig / generate / generate . bzl <nl> tensorflow / third_party / toolchains / preconfig / generate / BUILD <nl> - tensorflow / third_party / toolchains / preconfig / ubuntu14 . 04 / cuda10 . 0 - cudnn7 / cuda / build_defs . 
bzl <nl> + tensorflow / third_party / toolchains / preconfig / generate / generate . bzl <nl> + tensorflow / third_party / toolchains / preconfig / generate / containers . bzl <nl> + tensorflow / third_party / toolchains / preconfig / generate / workspace . bzl <nl> tensorflow / third_party / toolchains / preconfig / ubuntu14 . 04 / cuda10 . 0 - cudnn7 / cuda / BUILD <nl> - tensorflow / third_party / toolchains / preconfig / ubuntu14 . 04 / gcc - nvcc - cuda10 . 0 / cc_toolchain_config . bzl <nl> + tensorflow / third_party / toolchains / preconfig / ubuntu14 . 04 / cuda10 . 0 - cudnn7 / cuda / build_defs . bzl <nl> tensorflow / third_party / toolchains / preconfig / ubuntu14 . 04 / gcc - nvcc - cuda10 . 0 / BUILD <nl> + tensorflow / third_party / toolchains / preconfig / ubuntu14 . 04 / gcc - nvcc - cuda10 . 0 / cc_toolchain_config . bzl <nl> tensorflow / third_party / toolchains / preconfig / ubuntu14 . 04 / py3 / BUILD <nl> - tensorflow / third_party / toolchains / preconfig / ubuntu14 . 04 / tensorrt5 / build_defs . bzl <nl> tensorflow / third_party / toolchains / preconfig / ubuntu14 . 04 / tensorrt5 / BUILD <nl> - tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / clang / dummy_toolchain . bzl <nl> + tensorflow / third_party / toolchains / preconfig / ubuntu14 . 04 / tensorrt5 / build_defs . bzl <nl> tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / clang / BUILD <nl> tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / clang / cc_toolchain_config . bzl <nl> + tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / clang / dummy_toolchain . bzl <nl> tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / cuda10 . 0 - cudnn7 / cuda / BUILD <nl> tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / cuda10 . 0 - cudnn7 / cuda / build_defs . bzl <nl> - tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / gcc5 - rocm / cc_toolchain_config . 
bzl <nl> tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / gcc5 - rocm / BUILD <nl> - tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / gcc7_manylinux2010 / dummy_toolchain . bzl <nl> + tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / gcc5 - rocm / cc_toolchain_config . bzl <nl> tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / gcc7_manylinux2010 / BUILD <nl> tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / gcc7_manylinux2010 / cc_toolchain_config . bzl <nl> - tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / gcc7_manylinux2010 - nvcc - cuda10 . 0 / cc_toolchain_config . bzl <nl> + tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / gcc7_manylinux2010 / dummy_toolchain . bzl <nl> tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / gcc7_manylinux2010 - nvcc - cuda10 . 0 / BUILD <nl> + tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / gcc7_manylinux2010 - nvcc - cuda10 . 0 / cc_toolchain_config . bzl <nl> tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / py / BUILD <nl> tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / py3 / BUILD <nl> tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / py3_opt / BUILD <nl> tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / rocm / rocm / build_defs . bzl <nl> tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / tensorrt5 / BUILD <nl> tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / tensorrt5 . 1 / BUILD <nl> tensorflow / third_party / toolchains / preconfig / ubuntu16 . 04 / tensorrt5 . 1 / build_defs . 
bzl <nl> - tensorflow / third_party / toolchains / preconfig / win_1803 / bazel_025 / BUILD <nl> tensorflow / third_party / toolchains / preconfig / win_1803 / BUILD <nl> + tensorflow / third_party / toolchains / preconfig / win_1803 / bazel_025 / BUILD <nl> tensorflow / third_party / toolchains / preconfig / win_1803 / py36 / BUILD <nl> - tensorflow / third_party / toolchains / remote / BUILD . tpl <nl> tensorflow / third_party / toolchains / remote / BUILD <nl> + tensorflow / third_party / toolchains / remote / BUILD . tpl <nl> tensorflow / third_party / toolchains / remote / configure . bzl <nl> tensorflow / third_party / toolchains / remote / execution . bzl . tpl <nl> tensorflow / third_party / wrapt . BUILD <nl> tensorflow / third_party / zlib . BUILD <nl> tensorflow / tools / ci_build / remote / BUILD <nl> tensorflow / tools / def_file_filter / BUILD <nl> - tensorflow / tools / def_file_filter / def_file_filter_configure . bzl <nl> - tensorflow / tools / def_file_filter / def_file_filter . py . tpl <nl> tensorflow / tools / def_file_filter / BUILD . tpl <nl> - tensorflow / tools / lib_package / concat_licenses . sh <nl> - tensorflow / tools / lib_package / libtensorflow_test . c <nl> + tensorflow / tools / def_file_filter / def_file_filter . py . tpl <nl> + tensorflow / tools / def_file_filter / def_file_filter_configure . bzl <nl> tensorflow / tools / lib_package / BUILD <nl> + tensorflow / tools / lib_package / README . md <nl> tensorflow / tools / lib_package / LibTensorFlowTest . java <nl> + tensorflow / tools / lib_package / concat_licenses . sh <nl> tensorflow / tools / lib_package / libtensorflow_java_test . sh <nl> + tensorflow / tools / lib_package / libtensorflow_test . c <nl> tensorflow / tools / lib_package / libtensorflow_test . sh <nl> - tensorflow / tools / lib_package / README . md <nl> - tensorflow / tools / pip_package / README <nl> - tensorflow / tools / pip_package / check_load_py_test . 
py <nl> tensorflow / tools / pip_package / BUILD <nl> - tensorflow / tools / pip_package / simple_console_for_windows . py <nl> - tensorflow / tools / pip_package / simple_console . py <nl> tensorflow / tools / pip_package / MANIFEST . in <nl> - tensorflow / tools / pip_package / setup . py <nl> - tensorflow / tools / pip_package / pip_smoke_test . py <nl> + tensorflow / tools / pip_package / README <nl> tensorflow / tools / pip_package / build_pip_package . sh <nl> - tensorflow / virtual_root_template_v2 . __init__ . py <nl> + tensorflow / tools / pip_package / check_load_py_test . py <nl> + tensorflow / tools / pip_package / pip_smoke_test . py <nl> + tensorflow / tools / pip_package / setup . py <nl> + tensorflow / tools / pip_package / simple_console . py <nl> + tensorflow / tools / pip_package / simple_console_for_windows . py <nl> tensorflow / virtual_root_template_v1 . __init__ . py <nl> + tensorflow / virtual_root_template_v2 . __init__ . py <nl> llvm / llvm / projects / google_mlir / WORKSPACE <nl> \ No newline at end of file <nl> mmm a / tensorflow / python / framework / python_op_gen_internal . cc <nl> ppp b / tensorflow / python / framework / python_op_gen_internal . cc <nl> void GenerateLowerCaseOpName ( const string & str , string * result ) { <nl> const int last_index = str . size ( ) - 1 ; <nl> for ( int i = 0 ; i < = last_index ; + + i ) { <nl> const char c = str [ i ] ; <nl> + / / Convert namespace separators ( ' > ' characters ) to joiners <nl> + if ( c = = ' > ' ) { <nl> + result - > push_back ( joiner ) ; <nl> + continue ; <nl> + } <nl> + <nl> / / Emit a joiner only if a previous - lower - to - now - upper or a <nl> / / now - upper - to - next - lower transition happens . <nl> if ( isupper ( c ) & & ( i > 0 ) ) { <nl>
|
Implementing RFC : Allow Op names of the form RepoName > OpName .
|
tensorflow/tensorflow
|
7806b0233d00a71fcf8a2170e7a1e4819b366e45
|
2019-08-19T21:29:49Z
|
mmm a / src / MainWindow . cpp <nl> ppp b / src / MainWindow . cpp <nl> void MainWindow : : jumpToRow ( const QString & table , QString column , const QByteArra <nl> return ; <nl> <nl> / / Jump to table <nl> - populateTable ( table ) ; <nl> + ui - > comboBrowseTable - > setCurrentIndex ( ui - > comboBrowseTable - > findText ( table ) ) ; <nl> + populateTable ( ) ; <nl> <nl> / / Set filter <nl> ui - > dataTable - > filterHeader ( ) - > setFilter ( column_index + 1 , value ) ; <nl>
|
Bugfix
|
sqlitebrowser/sqlitebrowser
|
efd4fc482061267fd1aa756be436d7adbe994627
|
2016-08-16T20:46:28Z
|
mmm a / RELEASE . md <nl> ppp b / RELEASE . md <nl> <nl> - # Changes since last release <nl> + # Release 0 . 7 . 0 <nl> <nl> - # # Breaking changes to the API <nl> + # # Major Features and Improvements <nl> + <nl> + * Allow using any installed Cuda > = 7 . 0 and cuDNN > = R2 , and add support <nl> + for cuDNN R4 <nl> + * Added a ` contrib / ` directory for unsupported or experimental features , <nl> + including higher level ` layers ` module <nl> + * Added an easy way to add and dynamically load user - defined ops <nl> + * Built out a good suite of tests , things should break less ! <nl> + * Added ` MetaGraphDef ` which makes it easier to save graphs with metadata <nl> + * Added assignments for " Deep Learning with TensorFlow " udacity course <nl> + <nl> + <nl> + # # Bug Fixes and Other Changes <nl> + <nl> + * Added a versioning framework for ` GraphDef ` s to ensure compatibility <nl> + * Enforced Python 3 compatibility <nl> + * Internal changes now show up as sensibly separated commits <nl> + * Open - sourced the doc generator <nl> + * Un - fork Eigen <nl> + * Simplified the ` BUILD ` files and cleaned up C + + headers <nl> + * TensorFlow can now be used as a submodule in another bazel build <nl> + * New ops ( e . g . , ` * fft ` , ` * _matrix_solve ` ) <nl> + * Support for more data types in many ops <nl> + * Performance improvements <nl> + * Various bugfixes <nl> + * Documentation fixes and improvements <nl> + <nl> + <nl> + # # Breaking Changes to the API <nl> <nl> * ` AdjustContrast ` kernel deprecated , new kernel ` AdjustContrastv2 ` takes and <nl> outputs float only . ` adjust_contrast ` now takes all data types . <nl> <nl> * Renamed ` tf . test . GetTempDir ` and ` tf . test . IsBuiltWithCuda ` to <nl> ` tf . test . get_temp_dir ` and ` tf . test . is_built_with_cuda ` for PEP - 8 <nl> compatibility . 
<nl> - <nl> - <nl> - # # Bug fixes <nl> - <nl> + * ` parse_example ` ' s interface has changed , the old interface is accessible in <nl> + ` legacy_parse_example ` ( same for related functions ) . <nl> + * New ` Variable ` s are not added to the same collection several times even if <nl> + a list with duplicates is passed to the constructor . <nl> * The Python API will now properly set the ` list ` member of ` AttrValue ` in <nl> constructed ` GraphDef ` messages for empty lists . The serialization of some <nl> graphs will change , but the change is both forwards and backwards compatible . <nl> It will break tests that compare a generated ` GraphDef ` to a golden serialized <nl> - ` GraphDef ` . <nl> + ` GraphDef ` ( which is discouraged ) . <nl> + <nl> + <nl> + # # Thanks to our Contributors <nl> + <nl> + This release contains contributions from many people at Google , as well as : <nl> + <nl> + Akiomi Kamakura , Alex Vig , Alexander Rosenberg Johansen , Andre Cruz , Arun Ahuja , <nl> + Bart Coppens , Bernardo Pires , Carl Vondrick , Cesar Salgado , Chen Yu , <nl> + Christian Jauvin , Damien Aymeric , Dan Vanderkam , Denny Britz , Dongjoon Hyun , <nl> + Eren Güven , Erik Erwitt , Fabrizio Milo , G . 
Hussain Chinoy , Jim Fleming , <nl> + Joao Felipe Santos , Jonas Meinertz Hansen , Joshi Rekha , Julian Viereck , <nl> + Keiji Ariyama , Kenton Lee , Krishna Sankar , Kristina Chodorow , Linchao Zhu , <nl> + Lukas Krecan , Mark Borgerding , Mark Daoust , Moussa Taifi , <nl> + Nathan Howell , Naveen Sundar Govindarajulu , Nick Sweeting , Niklas Riekenbrauck , <nl> + Olivier Grisel , Patrick Christ , Povilas Liubauskas , Rainer Wasserfuhr , <nl> + Romain Thouvenin , Sagan Bolliger , Sam Abrahams , Taehoon Kim , Timothy J Laurent , <nl> + Vlad Zavidovych , Yangqing Jia , Yi - Lin Juang , Yuxin Wu , Zachary Lipton , <nl> + Zero Chen , Alan Wu , @ brchiu , @ emmjaykay , @ jalammar , @ Mandar - Shinde , <nl> + @ nsipplswezey , @ ninotoshi , @ panmari , @ prolearner and @ rizzomichaelg . <nl> + <nl> + We are also grateful to all who filed issues or helped resolve them , asked and <nl> + answered questions , and were part of inspiring discussions . <nl> <nl> <nl> # Release 0 . 6 . 0 <nl> <nl> come in later releases . <nl> <nl> <nl> - # # Bug fixes <nl> + # # Bug Fixes <nl> <nl> * Lots of fixes to documentation and tutorials , many contributed <nl> by the public . <nl> <nl> * 271 closed issues on github issues . <nl> <nl> - # # Backwards - incompatible changes <nl> + # # Backwards - Incompatible Changes <nl> <nl> * ` tf . nn . fixed_unigram_candidate_sampler ` changed its default ' distortion ' <nl> attribute from 0 . 0 to 1 . 0 . This was a bug in the original release <nl> mmm a / tensorflow / contrib / layers / __init__ . py <nl> ppp b / tensorflow / contrib / layers / __init__ . py <nl> <nl> # See the License for the specific language governing permissions and <nl> # limitations under the License . <nl> # = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> - " " " layers : A module containing a higher level NN interface . 
" " " <nl> + " " " Ops for building neural network layers , regularizers , summaries , etc . <nl> + <nl> + # # Higher level ops for building neural network layers . <nl> + <nl> + This package provides several ops that take care of creating variables that are <nl> + used internally in a consistent way and provide the building blocks for many <nl> + common machine learning algorithms . <nl> + <nl> + @ @ convolution2d <nl> + @ @ fully_connected <nl> + <nl> + Aliases for fully_connected which set a default activation function are <nl> + available : ` relu ` , ` relu6 ` and ` linear ` . <nl> + <nl> + # # Regularizers <nl> + <nl> + Regularization can help prevent overfitting . These have the signature <nl> + ` fn ( weights ) ` . The loss is typically added to ` tf . GraphKeys . REGULARIZATION_LOSS ` <nl> + <nl> + @ @ l1_regularizer <nl> + @ @ l2_regularizer <nl> + <nl> + # # Initializers <nl> + <nl> + Initializers are used to initialize variables with sensible values given their <nl> + size , data type , and purpose . <nl> + <nl> + @ @ xavier_initializer <nl> + @ @ xavier_initializer_conv2d <nl> + <nl> + # # Summaries <nl> + <nl> + Helper functions to summarize specific variables or ops . <nl> + <nl> + @ @ summarize_activation <nl> + @ @ summarize_tensor <nl> + @ @ summarize_tensors <nl> + @ @ summarize_collection <nl> + <nl> + The layers module defines convenience functions ` summarize_variables ` , <nl> + ` summarize_weights ` and ` summarize_biases ` , which set the ` collection ` argument <nl> + of ` summarize_collection ` to ` VARIABLES ` , ` WEIGHTS ` and ` BIASES ` , respectively . <nl> + <nl> + @ @ summarize_activations <nl> + <nl> + " " " <nl> <nl> from __future__ import absolute_import <nl> from __future__ import division <nl> mmm a / tensorflow / contrib / layers / python / layers / layers . py <nl> ppp b / tensorflow / contrib / layers / python / layers / layers . 
py <nl> <nl> # = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> <nl> # pylint : disable = g - short - docstring - punctuation <nl> - " " " # # Higher level ops related to regularization and building layers . <nl> - <nl> - This package provides several ops that take care of creating variables that are <nl> - used internally in a consistent way and provide the building blocks for many <nl> - common machine learning algorithms . <nl> - <nl> - @ @ convolution2d <nl> - @ @ fully_connected <nl> - <nl> - Aliases for fully_connected which set a default activation function are <nl> - available : ` relu ` , ` relu6 ` and ` linear ` . <nl> - <nl> - # # Regularizers <nl> - <nl> - Regularization can help prevent overfitting . <nl> - These have the signature ` fn ( weights ) ` . The loss is typically added to <nl> - ` tf . GraphKeys . REGULARIZATION_LOSS ` <nl> - <nl> - @ @ l1_regularizer <nl> - @ @ l2_regularizer <nl> - <nl> - # # Initializers <nl> - <nl> - Initializers are used to initialize variables with sensible values given their <nl> - size , data type , and purpose . <nl> - <nl> - @ @ xavier_initializer <nl> - @ @ xavier_initializer_conv2d <nl> - <nl> - # # Summaries <nl> - <nl> - Helper functions to summarize specific variables or ops . <nl> - <nl> - @ @ summarize_activation <nl> - @ @ summarize_tensor <nl> - @ @ summarize_tensors <nl> - @ @ summarize_collection <nl> - @ @ summarize_variables <nl> - @ @ summarize_weights <nl> - @ @ summarize_biases <nl> - @ @ summarize_activations <nl> - <nl> - " " " <nl> + " " " Higher level ops for building layers . " " " <nl> <nl> from __future__ import absolute_import <nl> from __future__ import division <nl> mmm a / tensorflow / contrib / layers / python / layers / summaries . py <nl> ppp b / tensorflow / contrib / layers / python / layers / summaries . 
py <nl> <nl> def _assert_summary_tag_unique ( tag ) : <nl> for summary in ops . get_collection ( ops . GraphKeys . SUMMARIES ) : <nl> old_tag = tensor_util . constant_value ( summary . op . inputs [ 0 ] ) <nl> - if tag = = str ( old_tag ) : <nl> + if tag . encode ( ) = = old_tag : <nl> raise ValueError ( ' Conflict with summary tag : % s exists on summary % s % s ' % <nl> ( tag , summary , old_tag ) ) <nl> <nl> mmm a / tensorflow / contrib / util / __init__ . py <nl> ppp b / tensorflow / contrib / util / __init__ . py <nl> <nl> # limitations under the License . <nl> # = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> <nl> - " " " contrib module containing volatile or experimental utility code . " " " <nl> + " " " Utilities for dealing with Tensors . <nl> + <nl> + # # Miscellaneous Utility Functions <nl> + <nl> + @ @ constant_value <nl> + @ @ make_tensor_proto <nl> + <nl> + " " " <nl> <nl> from __future__ import absolute_import <nl> from __future__ import division <nl> mmm a / tensorflow / core / ops / image_ops . cc <nl> ppp b / tensorflow / core / ops / image_ops . cc <nl> Bounding boxes are supplied and returned as ` [ y_min , x_min , y_max , x_max ] ` . The <nl> bounding box coordinates are floats in ` [ 0 . 0 , 1 . 0 ] ` relative to the width and <nl> height of the underlying image . <nl> <nl> - Example : <nl> - <nl> - ` ` ` <nl> - # Generate a single distorted bounding box . <nl> - begin , size , bbox_for_draw = tf . image . sample_distorted_bounding_box ( <nl> - tf . shape ( image ) , <nl> - bounding_boxes = bounding_boxes ) <nl> - <nl> - # Draw the bounding box in an image summary . <nl> - image_with_box = tf . image . draw_bounding_boxes ( tf . expand_dims ( image , 0 ) , <nl> - bbox_for_draw ) <nl> - tf . image_summary ( ' images_with_box ' , image_with_box ) <nl> - <nl> - # Employ the bounding box to distort the image . 
<nl> - distorted_image = tf . slice ( image , begin , size ) <nl> - ` ` ` <nl> + For example , <nl> + <nl> + # Generate a single distorted bounding box . <nl> + begin , size , bbox_for_draw = tf . image . sample_distorted_bounding_box ( <nl> + tf . shape ( image ) , <nl> + bounding_boxes = bounding_boxes ) <nl> + <nl> + # Draw the bounding box in an image summary . <nl> + image_with_box = tf . image . draw_bounding_boxes ( tf . expand_dims ( image , 0 ) , <nl> + bbox_for_draw ) <nl> + tf . image_summary ( ' images_with_box ' , image_with_box ) <nl> + <nl> + # Employ the bounding box to distort the image . <nl> + distorted_image = tf . slice ( image , begin , size ) <nl> <nl> Note that if no bounding box information is available , setting <nl> ` use_image_if_no_bounding_boxes = true ` will assume there is a single implicit <nl> mmm a / tensorflow / core / public / version . h <nl> ppp b / tensorflow / core / public / version . h <nl> limitations under the License . <nl> / / TensorFlow uses semantic versioning , see http : / / semver . org / . <nl> <nl> # define TF_MAJOR_VERSION 0 <nl> - # define TF_MINOR_VERSION 6 <nl> + # define TF_MINOR_VERSION 7 <nl> # define TF_PATCH_VERSION 0 <nl> <nl> / / TF_VERSION_SUFFIX is non - empty for pre - releases ( e . g . " - alpha " , " - alpha . 1 " , <nl> mmm a / tensorflow / g3doc / api_docs / cc / ClassEnv . md <nl> ppp b / tensorflow / g3doc / api_docs / cc / ClassEnv . md <nl> <nl> - # Class ` tensorflow : : Env ` <nl> + # ` class tensorflow : : Env ` <nl> <nl> An interface used by the tensorflow implementation to access operating system functionality like the filesystem etc . <nl> <nl> Callers may wish to provide a custom Env object to get fine grain control . <nl> <nl> All Env implementations are safe for concurrent access from multiple threads without any external synchronization . 
<nl> <nl> - # # Member Summary <nl> - <nl> - * [ ` tensorflow : : Env : : Env ( ) ` ] ( # tensorflow_Env_Env ) <nl> - * [ ` tensorflow : : Env : : ~ Env ( ) ` ] ( # tensorflow_Env_Env ) <nl> - * [ ` virtual Status tensorflow : : Env : : NewRandomAccessFile ( const string & fname , RandomAccessFile * * result ) = 0 ` ] ( # virtual_Status_tensorflow_Env_NewRandomAccessFile ) <nl> - * Creates a brand new random access read - only file with the specified name . <nl> - * [ ` virtual Status tensorflow : : Env : : NewWritableFile ( const string & fname , WritableFile * * result ) = 0 ` ] ( # virtual_Status_tensorflow_Env_NewWritableFile ) <nl> - * Creates an object that writes to a new file with the specified name . <nl> - * [ ` virtual Status tensorflow : : Env : : NewAppendableFile ( const string & fname , WritableFile * * result ) = 0 ` ] ( # virtual_Status_tensorflow_Env_NewAppendableFile ) <nl> - * Creates an object that either appends to an existing file , or writes to a new file ( if the file does not exist to begin with ) . <nl> - * [ ` virtual bool tensorflow : : Env : : FileExists ( const string & fname ) = 0 ` ] ( # virtual_bool_tensorflow_Env_FileExists ) <nl> - * Returns true iff the named file exists . <nl> - * [ ` virtual Status tensorflow : : Env : : GetChildren ( const string & dir , std : : vector < string > * result ) = 0 ` ] ( # virtual_Status_tensorflow_Env_GetChildren ) <nl> - * Stores in * result the names of the children of the specified directory . The names are relative to " dir " . <nl> - * [ ` virtual Status tensorflow : : Env : : DeleteFile ( const string & fname ) = 0 ` ] ( # virtual_Status_tensorflow_Env_DeleteFile ) <nl> - * Deletes the named file . <nl> - * [ ` virtual Status tensorflow : : Env : : CreateDir ( const string & dirname ) = 0 ` ] ( # virtual_Status_tensorflow_Env_CreateDir ) <nl> - * Creates the specified directory . 
<nl> - * [ ` virtual Status tensorflow : : Env : : DeleteDir ( const string & dirname ) = 0 ` ] ( # virtual_Status_tensorflow_Env_DeleteDir ) <nl> - * Deletes the specified directory . <nl> - * [ ` virtual Status tensorflow : : Env : : GetFileSize ( const string & fname , uint64 * file_size ) = 0 ` ] ( # virtual_Status_tensorflow_Env_GetFileSize ) <nl> - * Stores the size of ` fname ` in ` * file_size ` . <nl> - * [ ` virtual Status tensorflow : : Env : : RenameFile ( const string & src , const string & target ) = 0 ` ] ( # virtual_Status_tensorflow_Env_RenameFile ) <nl> - * Renames file src to target . If target already exists , it will be replaced . <nl> - * [ ` virtual uint64 tensorflow : : Env : : NowMicros ( ) = 0 ` ] ( # virtual_uint64_tensorflow_Env_NowMicros ) <nl> - * Returns the number of micro - seconds since some fixed point in time . Only useful for computing deltas of time . <nl> - * [ ` virtual void tensorflow : : Env : : SleepForMicroseconds ( int micros ) = 0 ` ] ( # virtual_void_tensorflow_Env_SleepForMicroseconds ) <nl> - * Sleeps / delays the thread for the prescribed number of micro - seconds . <nl> - * [ ` virtual Thread * tensorflow : : Env : : StartThread ( const ThreadOptions & thread_options , const string & name , std : : function < void ( ) > fn ) TF_MUST_USE_RESULT = 0 ` ] ( # virtual_Thread_tensorflow_Env_StartThread ) <nl> - * Returns a new thread that is running fn ( ) and is identified ( for debugging / performance - analysis ) by " name " . 
<nl> - * [ ` virtual void tensorflow : : Env : : SchedClosure ( std : : function < void ( ) > closure ) = 0 ` ] ( # virtual_void_tensorflow_Env_SchedClosure ) <nl> - * [ ` virtual void tensorflow : : Env : : SchedClosureAfter ( int micros , std : : function < void ( ) > closure ) = 0 ` ] ( # virtual_void_tensorflow_Env_SchedClosureAfter ) <nl> - * [ ` virtual Status tensorflow : : Env : : LoadLibrary ( const char * library_filename , void * * handle ) = 0 ` ] ( # virtual_Status_tensorflow_Env_LoadLibrary ) <nl> - * [ ` virtual Status tensorflow : : Env : : GetSymbolFromLibrary ( void * handle , const char * symbol_name , void * * symbol ) = 0 ` ] ( # virtual_Status_tensorflow_Env_GetSymbolFromLibrary ) <nl> - * [ ` static Env * tensorflow : : Env : : Default ( ) ` ] ( # static_Env_tensorflow_Env_Default ) <nl> - * Returns a default environment suitable for the current operating system . <nl> - <nl> - # # Member Details <nl> + # # # Member Details <nl> <nl> # # # # ` tensorflow : : Env : : Env ( ) ` { # tensorflow_Env_Env } <nl> <nl> mmm a / tensorflow / g3doc / api_docs / cc / ClassEnvWrapper . md <nl> ppp b / tensorflow / g3doc / api_docs / cc / ClassEnvWrapper . md <nl> <nl> - # Class ` tensorflow : : EnvWrapper ` <nl> + # ` class tensorflow : : EnvWrapper ` <nl> <nl> An implementation of Env that forwards all calls to another Env . <nl> <nl> May be useful to clients who wish to override just part of the functionality of another Env . <nl> <nl> - # # Member Summary <nl> - <nl> - * [ ` tensorflow : : EnvWrapper : : EnvWrapper ( Env * t ) ` ] ( # tensorflow_EnvWrapper_EnvWrapper ) <nl> - * Initializes an EnvWrapper that delegates all calls to * t . <nl> - * [ ` tensorflow : : EnvWrapper : : ~ EnvWrapper ( ) ` ] ( # tensorflow_EnvWrapper_EnvWrapper ) <nl> - * [ ` Env * tensorflow : : EnvWrapper : : target ( ) const ` ] ( # Env_tensorflow_EnvWrapper_target ) <nl> - * Returns the target to which this Env forwards all calls . 
<nl> - * [ ` Status tensorflow : : EnvWrapper : : NewRandomAccessFile ( const string & f , RandomAccessFile * * r ) override ` ] ( # Status_tensorflow_EnvWrapper_NewRandomAccessFile ) <nl> - * Creates a brand new random access read - only file with the specified name . <nl> - * [ ` Status tensorflow : : EnvWrapper : : NewWritableFile ( const string & f , WritableFile * * r ) override ` ] ( # Status_tensorflow_EnvWrapper_NewWritableFile ) <nl> - * Creates an object that writes to a new file with the specified name . <nl> - * [ ` Status tensorflow : : EnvWrapper : : NewAppendableFile ( const string & f , WritableFile * * r ) override ` ] ( # Status_tensorflow_EnvWrapper_NewAppendableFile ) <nl> - * Creates an object that either appends to an existing file , or writes to a new file ( if the file does not exist to begin with ) . <nl> - * [ ` bool tensorflow : : EnvWrapper : : FileExists ( const string & f ) override ` ] ( # bool_tensorflow_EnvWrapper_FileExists ) <nl> - * Returns true iff the named file exists . <nl> - * [ ` Status tensorflow : : EnvWrapper : : GetChildren ( const string & dir , std : : vector < string > * r ) override ` ] ( # Status_tensorflow_EnvWrapper_GetChildren ) <nl> - * Stores in * result the names of the children of the specified directory . The names are relative to " dir " . <nl> - * [ ` Status tensorflow : : EnvWrapper : : DeleteFile ( const string & f ) override ` ] ( # Status_tensorflow_EnvWrapper_DeleteFile ) <nl> - * Deletes the named file . <nl> - * [ ` Status tensorflow : : EnvWrapper : : CreateDir ( const string & d ) override ` ] ( # Status_tensorflow_EnvWrapper_CreateDir ) <nl> - * Creates the specified directory . <nl> - * [ ` Status tensorflow : : EnvWrapper : : DeleteDir ( const string & d ) override ` ] ( # Status_tensorflow_EnvWrapper_DeleteDir ) <nl> - * Deletes the specified directory . 
<nl> - * [ ` Status tensorflow : : EnvWrapper : : GetFileSize ( const string & f , uint64 * s ) override ` ] ( # Status_tensorflow_EnvWrapper_GetFileSize ) <nl> - * Stores the size of ` fname ` in ` * file_size ` . <nl> - * [ ` Status tensorflow : : EnvWrapper : : RenameFile ( const string & s , const string & t ) override ` ] ( # Status_tensorflow_EnvWrapper_RenameFile ) <nl> - * Renames file src to target . If target already exists , it will be replaced . <nl> - * [ ` uint64 tensorflow : : EnvWrapper : : NowMicros ( ) override ` ] ( # uint64_tensorflow_EnvWrapper_NowMicros ) <nl> - * Returns the number of micro - seconds since some fixed point in time . Only useful for computing deltas of time . <nl> - * [ ` void tensorflow : : EnvWrapper : : SleepForMicroseconds ( int micros ) override ` ] ( # void_tensorflow_EnvWrapper_SleepForMicroseconds ) <nl> - * Sleeps / delays the thread for the prescribed number of micro - seconds . <nl> - * [ ` Thread * tensorflow : : EnvWrapper : : StartThread ( const ThreadOptions & thread_options , const string & name , std : : function < void ( ) > fn ) override ` ] ( # Thread_tensorflow_EnvWrapper_StartThread ) <nl> - * Returns a new thread that is running fn ( ) and is identified ( for debugging / performance - analysis ) by " name " . 
<nl> - * [ ` void tensorflow : : EnvWrapper : : SchedClosure ( std : : function < void ( ) > closure ) override ` ] ( # void_tensorflow_EnvWrapper_SchedClosure ) <nl> - * [ ` void tensorflow : : EnvWrapper : : SchedClosureAfter ( int micros , std : : function < void ( ) > closure ) override ` ] ( # void_tensorflow_EnvWrapper_SchedClosureAfter ) <nl> - * [ ` Status tensorflow : : EnvWrapper : : LoadLibrary ( const char * library_filename , void * * handle ) override ` ] ( # Status_tensorflow_EnvWrapper_LoadLibrary ) <nl> - * [ ` Status tensorflow : : EnvWrapper : : GetSymbolFromLibrary ( void * handle , const char * symbol_name , void * * symbol ) override ` ] ( # Status_tensorflow_EnvWrapper_GetSymbolFromLibrary ) <nl> - <nl> - # # Member Details <nl> + # # # Member Details <nl> <nl> # # # # ` tensorflow : : EnvWrapper : : EnvWrapper ( Env * t ) ` { # tensorflow_EnvWrapper_EnvWrapper } <nl> <nl> mmm a / tensorflow / g3doc / api_docs / cc / ClassPartialTensorShape . md <nl> ppp b / tensorflow / g3doc / api_docs / cc / ClassPartialTensorShape . md <nl> <nl> - # Class ` tensorflow : : PartialTensorShape ` <nl> + # ` class tensorflow : : PartialTensorShape ` <nl> <nl> Manages the partially known dimensions of a Tensor and their sizes . <nl> <nl> <nl> <nl> - # # Member Summary <nl> - <nl> - * [ ` tensorflow : : PartialTensorShape : : PartialTensorShape ( gtl : : ArraySlice < int64 > dim_sizes ) ` ] ( # tensorflow_PartialTensorShape_PartialTensorShape ) <nl> - * Construct a ` PartialTensorShape ` from the provided sizes . 
REQUIRES : ` dim_sizes [ i ] > = 0 ` <nl> - * [ ` tensorflow : : PartialTensorShape : : PartialTensorShape ( std : : initializer_list < int64 > dim_sizes ) ` ] ( # tensorflow_PartialTensorShape_PartialTensorShape ) <nl> - * [ ` tensorflow : : PartialTensorShape : : PartialTensorShape ( const TensorShapeProto & proto ) ` ] ( # tensorflow_PartialTensorShape_PartialTensorShape ) <nl> - * REQUIRES : ` IsValid ( proto ) ` <nl> - * [ ` PartialTensorShape tensorflow : : PartialTensorShape : : Concatenate ( int64 size ) const ` ] ( # PartialTensorShape_tensorflow_PartialTensorShape_Concatenate ) <nl> - * [ ` PartialTensorShape tensorflow : : PartialTensorShape : : Concatenate ( const PartialTensorShape & shape ) const ` ] ( # PartialTensorShape_tensorflow_PartialTensorShape_Concatenate ) <nl> - * [ ` Status tensorflow : : PartialTensorShape : : MergeWith ( const PartialTensorShape & shape , PartialTensorShape * result ) const ` ] ( # Status_tensorflow_PartialTensorShape_MergeWith ) <nl> - * [ ` int tensorflow : : PartialTensorShape : : dims ( ) const ` ] ( # int_tensorflow_PartialTensorShape_dims ) <nl> - * Return the number of dimensions in the tensor . <nl> - * [ ` bool tensorflow : : PartialTensorShape : : IsFullyDefined ( ) const ` ] ( # bool_tensorflow_PartialTensorShape_IsFullyDefined ) <nl> - * Return true iff the rank and all of the dimensions are well defined . <nl> - * [ ` bool tensorflow : : PartialTensorShape : : IsCompatibleWith ( const PartialTensorShape & shape ) const ` ] ( # bool_tensorflow_PartialTensorShape_IsCompatibleWith ) <nl> - * [ ` bool tensorflow : : PartialTensorShape : : IsCompatibleWith ( const TensorShape & shape ) const ` ] ( # bool_tensorflow_PartialTensorShape_IsCompatibleWith ) <nl> - * [ ` int64 tensorflow : : PartialTensorShape : : dim_size ( int d ) const ` ] ( # int64_tensorflow_PartialTensorShape_dim_size ) <nl> - * Returns the number of elements in dimension ` d ` . 
REQUIRES : ` 0 < = d < dims ( ) ` <nl> - * [ ` gtl : : ArraySlice < int64 > tensorflow : : PartialTensorShape : : dim_sizes ( ) const ` ] ( # gtl_ArraySlice_int64_tensorflow_PartialTensorShape_dim_sizes ) <nl> - * Returns sizes of all dimensions . <nl> - * [ ` void tensorflow : : PartialTensorShape : : AsProto ( TensorShapeProto * proto ) const ` ] ( # void_tensorflow_PartialTensorShape_AsProto ) <nl> - * Fill ` * proto ` from ` * this ` . <nl> - * [ ` bool tensorflow : : PartialTensorShape : : AsTensorShape ( TensorShape * tensor_shape ) const ` ] ( # bool_tensorflow_PartialTensorShape_AsTensorShape ) <nl> - * [ ` string tensorflow : : PartialTensorShape : : DebugString ( ) const ` ] ( # string_tensorflow_PartialTensorShape_DebugString ) <nl> - * For error messages . <nl> - * [ ` bool tensorflow : : PartialTensorShape : : IsValid ( const TensorShapeProto & proto ) ` ] ( # bool_tensorflow_PartialTensorShape_IsValid ) <nl> - * Returns ` true ` iff ` proto ` is a valid partial tensor shape . <nl> - * [ ` Status tensorflow : : PartialTensorShape : : IsValidShape ( const TensorShapeProto & proto ) ` ] ( # Status_tensorflow_PartialTensorShape_IsValidShape ) <nl> - * [ ` string tensorflow : : PartialTensorShape : : DebugString ( const TensorShapeProto & proto ) ` ] ( # string_tensorflow_PartialTensorShape_DebugString ) <nl> - * [ ` Status tensorflow : : PartialTensorShape : : MakePartialShape ( const T * dims , int n , PartialTensorShape * out ) ` ] ( # Status_tensorflow_PartialTensorShape_MakePartialShape ) <nl> - * Returns a ` PartialTensorShape ` whose dimensions are ` dims [ 0 ] ` , ` dims [ 1 ] ` , . . . , ` dims [ n - 1 ] ` . Values of - 1 are considered " unknown " . <nl> - <nl> - # # Member Details <nl> + # # # Member Details <nl> + <nl> + # # # # ` tensorflow : : PartialTensorShape : : PartialTensorShape ( ) ` { # tensorflow_PartialTensorShape_PartialTensorShape } <nl> + <nl> + Construct an unknown ` PartialTensorShape ` . 
<nl> + <nl> + <nl> <nl> # # # # ` tensorflow : : PartialTensorShape : : PartialTensorShape ( gtl : : ArraySlice < int64 > dim_sizes ) ` { # tensorflow_PartialTensorShape_PartialTensorShape } <nl> <nl> Merges all the dimensions from ` shape ` . Returns ` InvalidArgument ` error if eithe <nl> <nl> # # # # ` int tensorflow : : PartialTensorShape : : dims ( ) const ` { # int_tensorflow_PartialTensorShape_dims } <nl> <nl> - Return the number of dimensions in the tensor . <nl> <nl> <nl> + Return the number of dimensions in the tensor . If the number of dimensions is unknown , return - 1 . <nl> <nl> # # # # ` bool tensorflow : : PartialTensorShape : : IsFullyDefined ( ) const ` { # bool_tensorflow_PartialTensorShape_IsFullyDefined } <nl> <nl> mmm a / tensorflow / g3doc / api_docs / cc / ClassPartialTensorShapeUtils . md <nl> ppp b / tensorflow / g3doc / api_docs / cc / ClassPartialTensorShapeUtils . md <nl> <nl> - # Class ` tensorflow : : PartialTensorShapeUtils ` <nl> + # ` class tensorflow : : PartialTensorShapeUtils ` <nl> <nl> Static helper routines for ` PartialTensorShape ` . Includes a few common predicates on a partially known tensor shape . 
<nl> <nl> <nl> <nl> - # # Member Summary <nl> - <nl> - * [ ` static string tensorflow : : PartialTensorShapeUtils : : PartialShapeListString ( const gtl : : ArraySlice < PartialTensorShape > & shapes ) ` ] ( # static_string_tensorflow_PartialTensorShapeUtils_PartialShapeListString ) <nl> - * [ ` static bool tensorflow : : PartialTensorShapeUtils : : AreCompatible ( const gtl : : ArraySlice < PartialTensorShape > & shapes0 , const gtl : : ArraySlice < PartialTensorShape > & shapes1 ) ` ] ( # static_bool_tensorflow_PartialTensorShapeUtils_AreCompatible ) <nl> - <nl> - # # Member Details <nl> + # # # Member Details <nl> <nl> # # # # ` static string tensorflow : : PartialTensorShapeUtils : : PartialShapeListString ( const gtl : : ArraySlice < PartialTensorShape > & shapes ) ` { # static_string_tensorflow_PartialTensorShapeUtils_PartialShapeListString } <nl> <nl> mmm a / tensorflow / g3doc / api_docs / cc / ClassRandomAccessFile . md <nl> ppp b / tensorflow / g3doc / api_docs / cc / ClassRandomAccessFile . md <nl> <nl> - # Class ` tensorflow : : RandomAccessFile ` <nl> + # ` class tensorflow : : RandomAccessFile ` <nl> <nl> A file abstraction for randomly reading the contents of a file . <nl> <nl> <nl> <nl> - # # Member Summary <nl> - <nl> - * [ ` tensorflow : : RandomAccessFile : : RandomAccessFile ( ) ` ] ( # tensorflow_RandomAccessFile_RandomAccessFile ) <nl> - * [ ` tensorflow : : RandomAccessFile : : ~ RandomAccessFile ( ) ` ] ( # tensorflow_RandomAccessFile_RandomAccessFile ) <nl> - * [ ` virtual Status tensorflow : : RandomAccessFile : : Read ( uint64 offset , size_t n , StringPiece * result , char * scratch ) const = 0 ` ] ( # virtual_Status_tensorflow_RandomAccessFile_Read ) <nl> - * Reads up to ` n ` bytes from the file starting at ` offset ` . 
<nl> - <nl> - # # Member Details <nl> + # # # Member Details <nl> <nl> # # # # ` tensorflow : : RandomAccessFile : : RandomAccessFile ( ) ` { # tensorflow_RandomAccessFile_RandomAccessFile } <nl> <nl> A file abstraction for randomly reading the contents of a file . <nl> <nl> <nl> <nl> - # # # # ` virtual Status tensorflow : : RandomAccessFile : : Read ( uint64 offset , size_t n , StringPiece * result , char * scratch ) const = 0 ` { # virtual_Status_tensorflow_RandomAccessFile_Read } <nl> + # # # # ` virtual Status tensorflow : : RandomAccessFile : : Read ( uint64 offset , size_t n , StringPiece * result , char * scratch ) const = 0 ` { # virtual_Status_tensorflow_RandomAccessFile_Read } <nl> <nl> Reads up to ` n ` bytes from the file starting at ` offset ` . <nl> <nl> mmm a / tensorflow / g3doc / api_docs / cc / ClassSession . md <nl> ppp b / tensorflow / g3doc / api_docs / cc / ClassSession . md <nl> <nl> - # Class ` tensorflow : : Session ` <nl> + # ` class tensorflow : : Session ` <nl> <nl> A Session instance lets a caller drive a TensorFlow graph computation . <nl> <nl> When a Session is created with a given target , a new Session object is bound to <nl> <nl> Example : <nl> <nl> - ` ` ` c + + tensorflow : : GraphDef graph ; <nl> - / / . . . Create or load graph into " graph " . <nl> - <nl> - / / This example uses the default options which connects <nl> - / / to a local runtime . <nl> - tensorflow : : SessionOptions options ; <nl> - std : : unique_ptr < tensorflow : : Session > <nl> - session ( tensorflow : : NewSession ( options ) ) ; <nl> - <nl> - / / Create the session with this graph . <nl> - tensorflow : : Status s = session - > Create ( graph ) ; <nl> - if ( ! s . ok ( ) ) { . . . } <nl> - <nl> - / / Run the graph and fetch the first output of the " output " <nl> - / / operation , and also run to but do not return anything <nl> - / / for the " update_state " operation . 
<nl> - std : : vector < tensorflow : : Tensor > outputs ; <nl> - s = session - > Run ( { } , { " output : 0 " } , { " update_state " } , & outputs ) ; <nl> - if ( ! s . ok ( ) ) { . . . } <nl> - <nl> - / / Map the output as a flattened float tensor , and do something <nl> - / / with it . <nl> - auto output_tensor = outputs [ 0 ] . flat < float > ( ) ; <nl> - if ( output_tensor ( 0 ) > 0 . 5 ) { . . . } <nl> - <nl> - / / Close the session to release the resources associated with <nl> - / / this session . <nl> - session - > Close ( ) ; <nl> - <nl> - ` ` ` <nl> + { c + + } tensorflow : : GraphDef graph ; / / . . . Create or load graph into " graph " . / / This example uses the default options which connects / / to a local runtime . tensorflow : : SessionOptions options ; std : : unique_ptr < tensorflow : : Session > session ( tensorflow : : NewSession ( options ) ) ; / / Create the session with this graph . tensorflow : : Status s = session - > Create ( graph ) ; if ( ! s . ok ( ) ) { . . . } / / Run the graph and fetch the first output of the " output " / / operation , and also run to but do not return anything / / for the " update_state " operation . std : : vector < tensorflow : : Tensor > outputs ; s = session - > Run ( { } , { " output : 0 " } , { " update_state " } , & outputs ) ; if ( ! s . ok ( ) ) { . . . } / / Map the output as a flattened float tensor , and do something / / with it . auto output_tensor = outputs [ 0 ] . flat < float > ( ) ; if ( output_tensor ( 0 ) > 0 . 5 ) { . . . } / / Close the session to release the resources associated with / / this session . session - > Close ( ) ; <nl> <nl> A Session allows concurrent calls to Run ( ) , though a Session must be created / extended by a single thread . <nl> <nl> Only one thread must call Close ( ) , and Close ( ) must only be called after all other calls to Run ( ) have returned . 
<nl> <nl> - # # Member Summary <nl> - <nl> - * [ ` virtual Status tensorflow : : Session : : Create ( const GraphDef & graph ) = 0 ` ] ( # virtual_Status_tensorflow_Session_Create ) <nl> - * Create the graph to be used for the session . <nl> - * [ ` virtual Status tensorflow : : Session : : Extend ( const GraphDef & graph ) = 0 ` ] ( # virtual_Status_tensorflow_Session_Extend ) <nl> - * Adds operations to the graph that is already registered with the Session . <nl> - * [ ` virtual Status tensorflow : : Session : : Run ( const std : : vector < std : : pair < string , Tensor > > & inputs , const std : : vector < string > & output_tensor_names , const std : : vector < string > & target_node_names , std : : vector < Tensor > * outputs ) = 0 ` ] ( # virtual_Status_tensorflow_Session_Run ) <nl> - * Runs the graph with the provided input tensors and fills ` outputs ` for the endpoints specified in ` output_tensor_names ` . Runs to but does not return Tensors for the nodes in ` target_node_names ` . <nl> - * [ ` virtual Status tensorflow : : Session : : Close ( ) = 0 ` ] ( # virtual_Status_tensorflow_Session_Close ) <nl> - * Closes this session . <nl> - * [ ` virtual tensorflow : : Session : : ~ Session ( ) ` ] ( # virtual_tensorflow_Session_Session ) <nl> - <nl> - # # Member Details <nl> + # # # Member Details <nl> <nl> # # # # ` virtual Status tensorflow : : Session : : Create ( const GraphDef & graph ) = 0 ` { # virtual_Status_tensorflow_Session_Create } <nl> <nl> REQUIRES : The name of each Tensor of the input or output must match a " Tensor en <nl> <nl> REQUIRES : outputs is not nullptr if ` output_tensor_names ` is non - empty . 
<nl> <nl> + # # # # ` virtual Status tensorflow : : Session : : PRunSetup ( const std : : vector < string > & input_names , const std : : vector < string > & output_names , const std : : vector < string > & target_nodes , string * handle ) ` { # virtual_Status_tensorflow_Session_PRunSetup } <nl> + <nl> + Sets up a graph for partial execution . All future feeds and fetches are specified by & apos ; input_names & apos ; and & apos ; output_names & apos ; . Returns & apos ; handle & apos ; that can be used to perform a sequence of partial feeds and fetches . NOTE : This API is still experimental and may change . <nl> + <nl> + <nl> + <nl> + # # # # ` virtual Status tensorflow : : Session : : PRun ( const string & handle , const std : : vector < std : : pair < string , Tensor > > & inputs , const std : : vector < string > & output_names , std : : vector < Tensor > * outputs ) ` { # virtual_Status_tensorflow_Session_PRun } <nl> + <nl> + Continues the pending execution specified by & apos ; handle & apos ; with the provided input tensors and fills ` outputs ` for the endpoints specified in ` output_names ` . NOTE : This API is still experimental and may change . <nl> + <nl> + <nl> + <nl> # # # # ` virtual Status tensorflow : : Session : : Close ( ) = 0 ` { # virtual_Status_tensorflow_Session_Close } <nl> <nl> Closes this session . <nl> mmm a / tensorflow / g3doc / api_docs / cc / ClassStatus . md <nl> ppp b / tensorflow / g3doc / api_docs / cc / ClassStatus . md <nl> <nl> - # Class ` tensorflow : : Status ` <nl> + # ` class tensorflow : : Status ` <nl> <nl> <nl> <nl> <nl> <nl> - # # Member Summary <nl> - <nl> - * [ ` tensorflow : : Status : : Status ( ) ` ] ( # tensorflow_Status_Status ) <nl> - * Create a success status . 
<nl> - * [ ` tensorflow : : Status : : ~ Status ( ) ` ] ( # tensorflow_Status_Status ) <nl> - * [ ` tensorflow : : Status : : Status ( tensorflow : : error : : Code code , tensorflow : : StringPiece msg ) ` ] ( # tensorflow_Status_Status ) <nl> - * Create a status with the specified error code and msg as a human - readable string containing more detailed information . <nl> - * [ ` tensorflow : : Status : : Status ( const Status & s ) ` ] ( # tensorflow_Status_Status ) <nl> - * Copy the specified status . <nl> - * [ ` void tensorflow : : Status : : operator = ( const Status & s ) ` ] ( # void_tensorflow_Status_operator_ ) <nl> - * [ ` bool tensorflow : : Status : : ok ( ) const ` ] ( # bool_tensorflow_Status_ok ) <nl> - * Returns true iff the status indicates success . <nl> - * [ ` tensorflow : : error : : Code tensorflow : : Status : : code ( ) const ` ] ( # tensorflow_error_Code_tensorflow_Status_code ) <nl> - * [ ` const string & tensorflow : : Status : : error_message ( ) const ` ] ( # const_string_tensorflow_Status_error_message ) <nl> - * [ ` bool tensorflow : : Status : : operator = = ( const Status & x ) const ` ] ( # bool_tensorflow_Status_operator_ ) <nl> - * [ ` bool tensorflow : : Status : : operator ! = ( const Status & x ) const ` ] ( # bool_tensorflow_Status_operator_ ) <nl> - * [ ` void tensorflow : : Status : : Update ( const Status & new_status ) ` ] ( # void_tensorflow_Status_Update ) <nl> - * If ` ok ( ) ` , stores ` new_status ` into ` * this ` . If ` ! ok ( ) ` , preserves the current status , but may augment with additional information about ` new_status ` . <nl> - * [ ` string tensorflow : : Status : : ToString ( ) const ` ] ( # string_tensorflow_Status_ToString ) <nl> - * Return a string representation of this status suitable for printing . Returns the string ` " OK " ` for success . 
<nl> - * [ ` return tensorflow : : Status : : OK ( ) ` ] ( # return_tensorflow_Status_OK ) <nl> - <nl> - # # Member Details <nl> + # # # Member Details <nl> <nl> # # # # ` tensorflow : : Status : : Status ( ) ` { # tensorflow_Status_Status } <nl> <nl> mmm a / tensorflow / g3doc / api_docs / cc / ClassTensor . md <nl> ppp b / tensorflow / g3doc / api_docs / cc / ClassTensor . md <nl> <nl> - # Class ` tensorflow : : Tensor ` <nl> + # ` class tensorflow : : Tensor ` <nl> <nl> Represents an n - dimensional array of values . <nl> <nl> <nl> <nl> - # # Member Summary <nl> - <nl> - * [ ` tensorflow : : Tensor : : Tensor ( ) ` ] ( # tensorflow_Tensor_Tensor ) <nl> - * Default Tensor constructor . Creates a 1 - dimension , 0 - element float tensor . <nl> - * [ ` tensorflow : : Tensor : : Tensor ( DataType type , const TensorShape & shape ) ` ] ( # tensorflow_Tensor_Tensor ) <nl> - * Creates a Tensor of the given ` type ` and ` shape ` . <nl> - * [ ` tensorflow : : Tensor : : Tensor ( Allocator * a , DataType type , const TensorShape & shape ) ` ] ( # tensorflow_Tensor_Tensor ) <nl> - * Creates a tensor with the input ` type ` and ` shape ` , using the allocator ` a ` to allocate the underlying buffer . <nl> - * [ ` tensorflow : : Tensor : : Tensor ( Allocator * a , DataType type , const TensorShape & shape , const AllocationAttributes & allocation_attr ) ` ] ( # tensorflow_Tensor_Tensor ) <nl> - * Creates a tensor with the input ` type ` and ` shape ` , using the allocator ` a ` and the specified " allocation_attr " to allocate the underlying buffer . <nl> - * [ ` tensorflow : : Tensor : : Tensor ( DataType type ) ` ] ( # tensorflow_Tensor_Tensor ) <nl> - * Creates an uninitialized Tensor of the given data type . <nl> - * [ ` tensorflow : : Tensor : : Tensor ( const Tensor & other ) ` ] ( # tensorflow_Tensor_Tensor ) <nl> - * [ ` tensorflow : : Tensor : : ~ Tensor ( ) ` ] ( # tensorflow_Tensor_Tensor ) <nl> - * Copy constructor . 
<nl> - * [ ` DataType tensorflow : : Tensor : : dtype ( ) const ` ] ( # DataType_tensorflow_Tensor_dtype ) <nl> - * Returns the data type . <nl> - * [ ` const TensorShape & tensorflow : : Tensor : : shape ( ) const ` ] ( # const_TensorShape_tensorflow_Tensor_shape ) <nl> - * Returns the shape of the tensor . <nl> - * [ ` int tensorflow : : Tensor : : dims ( ) const ` ] ( # int_tensorflow_Tensor_dims ) <nl> - * Convenience accessor for the tensor shape . <nl> - * [ ` int64 tensorflow : : Tensor : : dim_size ( int d ) const ` ] ( # int64_tensorflow_Tensor_dim_size ) <nl> - * Convenience accessor for the tensor shape . <nl> - * [ ` int64 tensorflow : : Tensor : : NumElements ( ) const ` ] ( # int64_tensorflow_Tensor_NumElements ) <nl> - * Convenience accessor for the tensor shape . <nl> - * [ ` bool tensorflow : : Tensor : : IsSameSize ( const Tensor & b ) const ` ] ( # bool_tensorflow_Tensor_IsSameSize ) <nl> - * [ ` bool tensorflow : : Tensor : : SharesBufferWith ( const Tensor & b ) const ` ] ( # bool_tensorflow_Tensor_SharesBufferWith ) <nl> - * [ ` size_t tensorflow : : Tensor : : BufferHash ( ) const ` ] ( # size_t_tensorflow_Tensor_BufferHash ) <nl> - * [ ` bool tensorflow : : Tensor : : IsInitialized ( ) const ` ] ( # bool_tensorflow_Tensor_IsInitialized ) <nl> - * Has this Tensor been initialized ? <nl> - * [ ` size_t tensorflow : : Tensor : : TotalBytes ( ) const ` ] ( # size_t_tensorflow_Tensor_TotalBytes ) <nl> - * Returns the estimated memory usage of this tensor . <nl> - * [ ` Tensor & tensorflow : : Tensor : : operator = ( const Tensor & other ) ` ] ( # Tensor_tensorflow_Tensor_operator_ ) <nl> - * Assign operator . This tensor shares other & apos ; s underlying storage . <nl> - * [ ` bool tensorflow : : Tensor : : CopyFrom ( const Tensor & other , const TensorShape & shape ) TF_MUST_USE_RESULT ` ] ( # bool_tensorflow_Tensor_CopyFrom ) <nl> - * Copy the other tensor into this tensor and reshape it . 
<nl> - * [ ` Tensor tensorflow : : Tensor : : Slice ( int64 dim0_start , int64 dim0_limit ) const ` ] ( # Tensor_tensorflow_Tensor_Slice ) <nl> - * Slice this tensor along the 1st dimension . <nl> - * [ ` bool tensorflow : : Tensor : : FromProto ( const TensorProto & other ) TF_MUST_USE_RESULT ` ] ( # bool_tensorflow_Tensor_FromProto ) <nl> - * Parse ` other ` and construct the tensor . <nl> - * [ ` bool tensorflow : : Tensor : : FromProto ( Allocator * a , const TensorProto & other ) TF_MUST_USE_RESULT ` ] ( # bool_tensorflow_Tensor_FromProto ) <nl> - * [ ` void tensorflow : : Tensor : : AsProtoField ( TensorProto * proto ) const ` ] ( # void_tensorflow_Tensor_AsProtoField ) <nl> - * Fills in ` proto ` with ` * this ` tensor & apos ; s content . <nl> - * [ ` void tensorflow : : Tensor : : AsProtoTensorContent ( TensorProto * proto ) const ` ] ( # void_tensorflow_Tensor_AsProtoTensorContent ) <nl> - * [ ` TTypes < T > : : Vec tensorflow : : Tensor : : vec ( ) ` ] ( # TTypes_T_Vec_tensorflow_Tensor_vec ) <nl> - * Return the tensor data as an ` Eigen : : Tensor ` with the type and sizes of this ` Tensor ` . <nl> - * [ ` TTypes < T > : : Matrix tensorflow : : Tensor : : matrix ( ) ` ] ( # TTypes_T_Matrix_tensorflow_Tensor_matrix ) <nl> - * [ ` TTypes < T , NDIMS > : : Tensor tensorflow : : Tensor : : tensor ( ) ` ] ( # TTypes_T_NDIMS_Tensor_tensorflow_Tensor_tensor ) <nl> - * [ ` TTypes < T > : : Flat tensorflow : : Tensor : : flat ( ) ` ] ( # TTypes_T_Flat_tensorflow_Tensor_flat ) <nl> - * Return the tensor data as an ` Eigen : : Tensor ` of the data type and a specified shape . 
<nl> - * [ ` TTypes < T > : : UnalignedFlat tensorflow : : Tensor : : unaligned_flat ( ) ` ] ( # TTypes_T_UnalignedFlat_tensorflow_Tensor_unaligned_flat ) <nl> - * [ ` TTypes < T > : : Matrix tensorflow : : Tensor : : flat_inner_dims ( ) ` ] ( # TTypes_T_Matrix_tensorflow_Tensor_flat_inner_dims ) <nl> - * [ ` TTypes < T > : : Matrix tensorflow : : Tensor : : flat_outer_dims ( ) ` ] ( # TTypes_T_Matrix_tensorflow_Tensor_flat_outer_dims ) <nl> - * [ ` TTypes < T , NDIMS > : : Tensor tensorflow : : Tensor : : shaped ( gtl : : ArraySlice < int64 > new_sizes ) ` ] ( # TTypes_T_NDIMS_Tensor_tensorflow_Tensor_shaped ) <nl> - * [ ` TTypes < T , NDIMS > : : UnalignedTensor tensorflow : : Tensor : : unaligned_shaped ( gtl : : ArraySlice < int64 > new_sizes ) ` ] ( # TTypes_T_NDIMS_UnalignedTensor_tensorflow_Tensor_unaligned_shaped ) <nl> - * [ ` TTypes < T > : : Scalar tensorflow : : Tensor : : scalar ( ) ` ] ( # TTypes_T_Scalar_tensorflow_Tensor_scalar ) <nl> - * Return the Tensor data as a ` TensorMap ` of fixed size 1 : ` TensorMap < TensorFixedSize < T , 1 > > ` . <nl> - * [ ` TTypes < T > : : ConstVec tensorflow : : Tensor : : vec ( ) const ` ] ( # TTypes_T_ConstVec_tensorflow_Tensor_vec ) <nl> - * Const versions of all the methods above . 
<nl> - * [ ` TTypes < T > : : ConstMatrix tensorflow : : Tensor : : matrix ( ) const ` ] ( # TTypes_T_ConstMatrix_tensorflow_Tensor_matrix ) <nl> - * [ ` TTypes < T , NDIMS > : : ConstTensor tensorflow : : Tensor : : tensor ( ) const ` ] ( # TTypes_T_NDIMS_ConstTensor_tensorflow_Tensor_tensor ) <nl> - * [ ` TTypes < T > : : ConstFlat tensorflow : : Tensor : : flat ( ) const ` ] ( # TTypes_T_ConstFlat_tensorflow_Tensor_flat ) <nl> - * [ ` TTypes < T > : : UnalignedConstFlat tensorflow : : Tensor : : unaligned_flat ( ) const ` ] ( # TTypes_T_UnalignedConstFlat_tensorflow_Tensor_unaligned_flat ) <nl> - * [ ` TTypes < T > : : ConstMatrix tensorflow : : Tensor : : flat_inner_dims ( ) const ` ] ( # TTypes_T_ConstMatrix_tensorflow_Tensor_flat_inner_dims ) <nl> - * [ ` TTypes < T > : : ConstMatrix tensorflow : : Tensor : : flat_outer_dims ( ) const ` ] ( # TTypes_T_ConstMatrix_tensorflow_Tensor_flat_outer_dims ) <nl> - * [ ` TTypes < T , NDIMS > : : ConstTensor tensorflow : : Tensor : : shaped ( gtl : : ArraySlice < int64 > new_sizes ) const ` ] ( # TTypes_T_NDIMS_ConstTensor_tensorflow_Tensor_shaped ) <nl> - * [ ` TTypes < T , NDIMS > : : UnalignedConstTensor tensorflow : : Tensor : : unaligned_shaped ( gtl : : ArraySlice < int64 > new_sizes ) const ` ] ( # TTypes_T_NDIMS_UnalignedConstTensor_tensorflow_Tensor_unaligned_shaped ) <nl> - * [ ` TTypes < T > : : ConstScalar tensorflow : : Tensor : : scalar ( ) const ` ] ( # TTypes_T_ConstScalar_tensorflow_Tensor_scalar ) <nl> - * [ ` string tensorflow : : Tensor : : SummarizeValue ( int64 max_entries ) const ` ] ( # string_tensorflow_Tensor_SummarizeValue ) <nl> - * Render the first ` max_entries ` values in ` * this ` into a string . <nl> - * [ ` string tensorflow : : Tensor : : DebugString ( ) const ` ] ( # string_tensorflow_Tensor_DebugString ) <nl> - * A human - readable summary of the tensor suitable for debugging . 
<nl> - * [ ` void tensorflow : : Tensor : : FillDescription ( TensorDescription * description ) const ` ] ( # void_tensorflow_Tensor_FillDescription ) <nl> - * [ ` StringPiece tensorflow : : Tensor : : tensor_data ( ) const ` ] ( # StringPiece_tensorflow_Tensor_tensor_data ) <nl> - * Returns a ` StringPiece ` mapping the current tensor & apos ; s buffer . <nl> - <nl> - # # Member Details <nl> + # # # Member Details <nl> <nl> # # # # ` tensorflow : : Tensor : : Tensor ( ) ` { # tensorflow_Tensor_Tensor } <nl> <nl> Returns the shape of the tensor . <nl> <nl> Convenience accessor for the tensor shape . <nl> <nl> - For all shape accessors , see comments for relevant methods of ` TensorShape ` in ` tensor_shape . h ` . <nl> + For all shape accessors , see comments for relevant methods of ` TensorShape ` in ` tensor_shape . h ` . <nl> <nl> # # # # ` int64 tensorflow : : Tensor : : dim_size ( int d ) const ` { # int64_tensorflow_Tensor_dim_size } <nl> <nl> Use these methods when you know the data type and the number of dimensions of th <nl> <nl> Example : <nl> <nl> - ` ` ` c + + typedef float T ; <nl> - Tensor my_mat ( . . . built with Shape { rows : 3 , cols : 5 } . . . ) ; <nl> - auto mat = my_mat . matrix < T > ( ) ; / / 2D Eigen : : Tensor , 3 x 5 . <nl> - auto mat = my_mat . tensor < T , 2 > ( ) ; / / 2D Eigen : : Tensor , 3 x 5 . <nl> - auto vec = my_mat . vec < T > ( ) ; / / CHECK fails as my_mat is 2D . <nl> - auto vec = my_mat . tensor < T , 3 > ( ) ; / / CHECK fails as my_mat is 2D . <nl> - auto mat = my_mat . matrix < int32 > ( ) ; / / CHECK fails as type mismatch . <nl> - <nl> - ` ` ` <nl> + { c + + } typedef float T ; Tensor my_mat ( . . . built with Shape { rows : 3 , cols : 5 } . . . ) ; auto mat = my_mat . matrix < T > ( ) ; / / 2D Eigen : : Tensor , 3 x 5 . auto mat = my_mat . tensor < T , 2 > ( ) ; / / 2D Eigen : : Tensor , 3 x 5 . auto vec = my_mat . vec < T > ( ) ; / / CHECK fails as my_mat is 2D . auto vec = my_mat . 
tensor < T , 3 > ( ) ; / / CHECK fails as my_mat is 2D . auto mat = my_mat . matrix < int32 > ( ) ; / / CHECK fails as type mismatch . <nl> <nl> # # # # ` TTypes < T > : : Matrix tensorflow : : Tensor : : matrix ( ) ` { # TTypes_T_Matrix_tensorflow_Tensor_matrix } <nl> <nl> These methods allow you to access the data with the dimensions and sizes of your <nl> <nl> Example : <nl> <nl> - ` ` ` c + + typedef float T ; <nl> - Tensor my_ten ( . . . built with Shape { planes : 4 , rows : 3 , cols : 5 } . . . ) ; <nl> - / / 1D Eigen : : Tensor , size 60 : <nl> - auto flat = my_ten . flat < T > ( ) ; <nl> - / / 2D Eigen : : Tensor 12 x 5 : <nl> - auto inner = my_ten . flat_inner_dims < T > ( ) ; <nl> - / / 2D Eigen : : Tensor 4 x 15 : <nl> - auto outer = my_ten . shaped < T , 2 > ( { 4 , 15 } ) ; <nl> - / / CHECK fails , bad num elements : <nl> - auto outer = my_ten . shaped < T , 2 > ( { 4 , 8 } ) ; <nl> - / / 3D Eigen : : Tensor 6 x 5 x 2 : <nl> - auto weird = my_ten . shaped < T , 3 > ( { 6 , 5 , 2 } ) ; <nl> - / / CHECK fails , type mismatch : <nl> - auto bad = my_ten . flat < int32 > ( ) ; <nl> - <nl> - ` ` ` <nl> + { c + + } typedef float T ; Tensor my_ten ( . . . built with Shape { planes : 4 , rows : 3 , cols : 5 } . . . ) ; / / 1D Eigen : : Tensor , size 60 : auto flat = my_ten . flat < T > ( ) ; / / 2D Eigen : : Tensor 12 x 5 : auto inner = my_ten . flat_inner_dims < T > ( ) ; / / 2D Eigen : : Tensor 4 x 15 : auto outer = my_ten . shaped < T , 2 > ( { 4 , 15 } ) ; / / CHECK fails , bad num elements : auto outer = my_ten . shaped < T , 2 > ( { 4 , 8 } ) ; / / 3D Eigen : : Tensor 6 x 5 x 2 : auto weird = my_ten . shaped < T , 3 > ( { 6 , 5 , 2 } ) ; / / CHECK fails , type mismatch : auto bad = my_ten . 
flat < int32 > ( ) ; <nl> <nl> # # # # ` TTypes < T > : : UnalignedFlat tensorflow : : Tensor : : unaligned_flat ( ) ` { # TTypes_T_UnalignedFlat_tensorflow_Tensor_unaligned_flat } <nl> <nl> The returned ` StringPiece ` may point to memory location on devices that the CP <nl> <nl> NOTE : The underlying tensor buffer is refcounted , so the lifetime of the contents mapped by the ` StringPiece ` matches the lifetime of the buffer ; callers should arrange to make sure the buffer does not get destroyed while the ` StringPiece ` is still used . <nl> <nl> - REQUIRES : ` DataTypeCanUseMemcpy ( dtype ( ) ) ` . <nl> + REQUIRES : ` DataTypeCanUseMemcpy ( dtype ( ) ) ` . <nl> mmm a / tensorflow / g3doc / api_docs / cc / ClassTensorShape . md <nl> ppp b / tensorflow / g3doc / api_docs / cc / ClassTensorShape . md <nl> <nl> - # Class ` tensorflow : : TensorShape ` <nl> - <nl> - Manages the dimensions of a Tensor and their sizes . <nl> - <nl> - <nl> - <nl> - # # Member Summary <nl> - <nl> - * [ ` tensorflow : : TensorShape : : TensorShape ( gtl : : ArraySlice < int64 > dim_sizes ) ` ] ( # tensorflow_TensorShape_TensorShape ) <nl> - * Construct a ` TensorShape ` from the provided sizes . REQUIRES : ` dim_sizes [ i ] > = 0 ` <nl> - * [ ` tensorflow : : TensorShape : : TensorShape ( std : : initializer_list < int64 > dim_sizes ) ` ] ( # tensorflow_TensorShape_TensorShape ) <nl> - * [ ` tensorflow : : TensorShape : : TensorShape ( const TensorShapeProto & proto ) ` ] ( # tensorflow_TensorShape_TensorShape ) <nl> - * REQUIRES : ` IsValid ( proto ) ` <nl> - * [ ` tensorflow : : TensorShape : : TensorShape ( ) ` ] ( # tensorflow_TensorShape_TensorShape ) <nl> - * [ ` void tensorflow : : TensorShape : : Clear ( ) ` ] ( # void_tensorflow_TensorShape_Clear ) <nl> - * Clear a tensor shape . <nl> - * [ ` void tensorflow : : TensorShape : : AddDim ( int64 size ) ` ] ( # void_tensorflow_TensorShape_AddDim ) <nl> - * Add a dimension to the end ( " inner - most " ) . 
REQUIRES : ` size > = 0 ` <nl> - * [ ` void tensorflow : : TensorShape : : AppendShape ( const TensorShape & shape ) ` ] ( # void_tensorflow_TensorShape_AppendShape ) <nl> - * Appends all the dimensions from ` shape ` . <nl> - * [ ` void tensorflow : : TensorShape : : InsertDim ( int d , int64 size ) ` ] ( # void_tensorflow_TensorShape_InsertDim ) <nl> - * Insert a dimension somewhere in the ` TensorShape ` . REQUIRES : ` 0 < = d < = dims ( ) ` REQUIRES : ` size > = 0 ` <nl> - * [ ` void tensorflow : : TensorShape : : set_dim ( int d , int64 size ) ` ] ( # void_tensorflow_TensorShape_set_dim ) <nl> - * Modifies the size of the dimension ` d ` to be ` size ` REQUIRES : ` 0 < = d < dims ( ) ` REQUIRES : ` size > = 0 ` <nl> - * [ ` void tensorflow : : TensorShape : : RemoveDim ( int d ) ` ] ( # void_tensorflow_TensorShape_RemoveDim ) <nl> - * Removes dimension ` d ` from the ` TensorShape ` . REQUIRES : ` 0 < = d < dims ( ) ` <nl> - * [ ` int tensorflow : : TensorShape : : dims ( ) const ` ] ( # int_tensorflow_TensorShape_dims ) <nl> - * Return the number of dimensions in the tensor . <nl> - * [ ` int64 tensorflow : : TensorShape : : dim_size ( int d ) const ` ] ( # int64_tensorflow_TensorShape_dim_size ) <nl> - * Returns the number of elements in dimension ` d ` . REQUIRES : ` 0 < = d < dims ( ) ` <nl> - * [ ` gtl : : ArraySlice < int64 > tensorflow : : TensorShape : : dim_sizes ( ) const ` ] ( # gtl_ArraySlice_int64_tensorflow_TensorShape_dim_sizes ) <nl> - * Returns sizes of all dimensions . <nl> - * [ ` int64 tensorflow : : TensorShape : : num_elements ( ) const ` ] ( # int64_tensorflow_TensorShape_num_elements ) <nl> - * Returns the number of elements in the tensor . 
<nl> - * [ ` bool tensorflow : : TensorShape : : IsSameSize ( const TensorShape & b ) const ` ] ( # bool_tensorflow_TensorShape_IsSameSize ) <nl> - * [ ` bool tensorflow : : TensorShape : : operator = = ( const TensorShape & b ) const ` ] ( # bool_tensorflow_TensorShape_operator_ ) <nl> - * [ ` void tensorflow : : TensorShape : : AsProto ( TensorShapeProto * proto ) const ` ] ( # void_tensorflow_TensorShape_AsProto ) <nl> - * Fill ` * proto ` from ` * this ` . <nl> - * [ ` Eigen : : DSizes < Eigen : : DenseIndex , NDIMS > tensorflow : : TensorShape : : AsEigenDSizes ( ) const ` ] ( # Eigen_DSizes_Eigen_DenseIndex_NDIMS_tensorflow_TensorShape_AsEigenDSizes ) <nl> - * Fill ` * dsizes ` from ` * this ` . <nl> - * [ ` Eigen : : DSizes < Eigen : : DenseIndex , NDIMS > tensorflow : : TensorShape : : AsEigenDSizesWithPadding ( ) const ` ] ( # Eigen_DSizes_Eigen_DenseIndex_NDIMS_tensorflow_TensorShape_AsEigenDSizesWithPadding ) <nl> - * [ ` TensorShapeIter tensorflow : : TensorShape : : begin ( ) const ` ] ( # TensorShapeIter_tensorflow_TensorShape_begin ) <nl> - * For iterating through the dimensions . <nl> - * [ ` TensorShapeIter tensorflow : : TensorShape : : end ( ) const ` ] ( # TensorShapeIter_tensorflow_TensorShape_end ) <nl> - * [ ` string tensorflow : : TensorShape : : DebugString ( ) const ` ] ( # string_tensorflow_TensorShape_DebugString ) <nl> - * For error messages . <nl> - * [ ` string tensorflow : : TensorShape : : ShortDebugString ( ) const ` ] ( # string_tensorflow_TensorShape_ShortDebugString ) <nl> - * Same as DebugString ( ) <nl> - * [ ` bool tensorflow : : TensorShape : : IsValid ( const TensorShapeProto & proto ) ` ] ( # bool_tensorflow_TensorShape_IsValid ) <nl> - * Returns ` true ` iff ` proto ` is a valid tensor shape . 
<nl> - * [ ` Status tensorflow : : TensorShape : : IsValidShape ( const TensorShapeProto & proto ) ` ] ( # Status_tensorflow_TensorShape_IsValidShape ) <nl> - * [ ` string tensorflow : : TensorShape : : ShortDebugString ( const TensorShapeProto & proto ) ` ] ( # string_tensorflow_TensorShape_ShortDebugString ) <nl> - <nl> - # # Member Details <nl> + # ` class tensorflow : : TensorShape ` <nl> + <nl> + <nl> + <nl> + <nl> + <nl> + # # # Member Details <nl> + <nl> + # # # # ` uint8 tensorflow : : TensorShape : : buf [ 16 ] [ 16 ] ` { # uint8_tensorflow_TensorShape_buf_16_ } <nl> + <nl> + <nl> + <nl> + <nl> + <nl> + # # # # ` Rep64 * tensorflow : : TensorShape : : unused_aligner ` { # Rep64_tensorflow_TensorShape_unused_aligner } <nl> + <nl> + <nl> + <nl> + <nl> <nl> # # # # ` tensorflow : : TensorShape : : TensorShape ( gtl : : ArraySlice < int64 > dim_sizes ) ` { # tensorflow_TensorShape_TensorShape } <nl> <nl> REQUIRES : ` IsValid ( proto ) ` <nl> <nl> Create a tensor shape with no dimensions and one element , which you can then call ` AddDim ( ) ` on . <nl> <nl> + # # # # ` tensorflow : : TensorShape : : ~ TensorShape ( ) ` { # tensorflow_TensorShape_TensorShape } <nl> + <nl> + <nl> + <nl> + <nl> + <nl> + # # # # ` tensorflow : : TensorShape : : TensorShape ( const TensorShape & b ) ` { # tensorflow_TensorShape_TensorShape } <nl> + <nl> + Copy the specified shape . <nl> + <nl> + <nl> + <nl> + # # # # ` void tensorflow : : TensorShape : : operator = ( const TensorShape & b ) ` { # void_tensorflow_TensorShape_operator_ } <nl> + <nl> + <nl> + <nl> + <nl> + <nl> # # # # ` void tensorflow : : TensorShape : : Clear ( ) ` { # void_tensorflow_TensorShape_Clear } <nl> <nl> Clear a tensor shape . <nl> Returns the number of elements in dimension ` d ` . 
REQUIRES : ` 0 < = d < dims ( ) ` <nl> <nl> <nl> <nl> - # # # # ` gtl : : ArraySlice < int64 > tensorflow : : TensorShape : : dim_sizes ( ) const ` { # gtl_ArraySlice_int64_tensorflow_TensorShape_dim_sizes } <nl> + # # # # ` gtl : : InlinedVector < int64 , 4 > tensorflow : : TensorShape : : dim_sizes ( ) const ` { # gtl_InlinedVector_int64_4_tensorflow_TensorShape_dim_sizes } <nl> <nl> Returns sizes of all dimensions . <nl> <nl> For error messages . <nl> <nl> <nl> <nl> - # # # # ` string tensorflow : : TensorShape : : ShortDebugString ( ) const ` { # string_tensorflow_TensorShape_ShortDebugString } <nl> + # # # # ` void tensorflow : : TensorShape : : DumpRep ( ) const ` { # void_tensorflow_TensorShape_DumpRep } <nl> + <nl> <nl> - Same as DebugString ( ) <nl> <nl> <nl> <nl> Returns ` true ` iff ` proto ` is a valid tensor shape . <nl> <nl> Returns ` OK ` iff ` proto ` is a valid tensor shape , and a descriptive error status otherwise . <nl> <nl> - # # # # ` string tensorflow : : TensorShape : : ShortDebugString ( const TensorShapeProto & proto ) ` { # string_tensorflow_TensorShape_ShortDebugString } <nl> + # # # # ` string tensorflow : : TensorShape : : DebugString ( const TensorShapeProto & proto ) ` { # string_tensorflow_TensorShape_DebugString } <nl> <nl> <nl> <nl> - Same as ` TensorShape ( proto ) . ShortDebugString ( ) ` but doesn & apos ; t crash for invalid protos . <nl> + Same as ` TensorShape ( proto ) . DebugString ( ) ` but doesn & apos ; t crash for invalid protos . <nl> mmm a / tensorflow / g3doc / api_docs / cc / ClassTensorShapeUtils . md <nl> ppp b / tensorflow / g3doc / api_docs / cc / ClassTensorShapeUtils . md <nl> <nl> - # Class ` tensorflow : : TensorShapeUtils ` <nl> + # ` class tensorflow : : TensorShapeUtils ` <nl> <nl> Static helper routines for ` TensorShape ` . Includes a few common predicates on a tensor shape . 
<nl> <nl> <nl> <nl> - # # Member Summary <nl> - <nl> - * [ ` static bool tensorflow : : TensorShapeUtils : : IsScalar ( const TensorShape & shape ) ` ] ( # static_bool_tensorflow_TensorShapeUtils_IsScalar ) <nl> - * [ ` static bool tensorflow : : TensorShapeUtils : : IsVector ( const TensorShape & shape ) ` ] ( # static_bool_tensorflow_TensorShapeUtils_IsVector ) <nl> - * [ ` static bool tensorflow : : TensorShapeUtils : : IsVectorOrHigher ( const TensorShape & shape ) ` ] ( # static_bool_tensorflow_TensorShapeUtils_IsVectorOrHigher ) <nl> - * [ ` static bool tensorflow : : TensorShapeUtils : : IsMatrix ( const TensorShape & shape ) ` ] ( # static_bool_tensorflow_TensorShapeUtils_IsMatrix ) <nl> - * [ ` static bool tensorflow : : TensorShapeUtils : : IsMatrixOrHigher ( const TensorShape & shape ) ` ] ( # static_bool_tensorflow_TensorShapeUtils_IsMatrixOrHigher ) <nl> - * [ ` static Status tensorflow : : TensorShapeUtils : : MakeShape ( const T * dims , int n , TensorShape * out ) ` ] ( # static_Status_tensorflow_TensorShapeUtils_MakeShape ) <nl> - * Returns a ` TensorShape ` whose dimensions are ` dims [ 0 ] ` , ` dims [ 1 ] ` , . . . , ` dims [ n - 1 ] ` . <nl> - * [ ` static string tensorflow : : TensorShapeUtils : : ShapeListString ( const gtl : : ArraySlice < TensorShape > & shapes ) ` ] ( # static_string_tensorflow_TensorShapeUtils_ShapeListString ) <nl> - * [ ` bool tensorflow : : TensorShapeUtils : : StartsWith ( const TensorShape & shape0 , const TensorShape & shape1 ) ` ] ( # bool_tensorflow_TensorShapeUtils_StartsWith ) <nl> - <nl> - # # Member Details <nl> + # # # Member Details <nl> <nl> # # # # ` static bool tensorflow : : TensorShapeUtils : : IsScalar ( const TensorShape & shape ) ` { # static_bool_tensorflow_TensorShapeUtils_IsScalar } <nl> <nl> mmm a / tensorflow / g3doc / api_docs / cc / ClassThread . md <nl> ppp b / tensorflow / g3doc / api_docs / cc / ClassThread . 
md <nl> <nl> - # Class ` tensorflow : : Thread ` <nl> + # ` class tensorflow : : Thread ` <nl> <nl> <nl> <nl> <nl> <nl> - # # Member Summary <nl> - <nl> - * [ ` tensorflow : : Thread : : Thread ( ) ` ] ( # tensorflow_Thread_Thread ) <nl> - * [ ` tensorflow : : Thread : : ~ Thread ( ) ` ] ( # tensorflow_Thread_Thread ) <nl> - * Blocks until the thread of control stops running . <nl> - <nl> - # # Member Details <nl> + # # # Member Details <nl> <nl> # # # # ` tensorflow : : Thread : : Thread ( ) ` { # tensorflow_Thread_Thread } <nl> <nl> mmm a / tensorflow / g3doc / api_docs / cc / ClassWritableFile . md <nl> ppp b / tensorflow / g3doc / api_docs / cc / ClassWritableFile . md <nl> <nl> - # Class ` tensorflow : : WritableFile ` <nl> + # ` class tensorflow : : WritableFile ` <nl> <nl> A file abstraction for sequential writing . <nl> <nl> The implementation must provide buffering since callers may append small fragments at a time to the file . <nl> <nl> - # # Member Summary <nl> - <nl> - * [ ` tensorflow : : WritableFile : : WritableFile ( ) ` ] ( # tensorflow_WritableFile_WritableFile ) <nl> - * [ ` tensorflow : : WritableFile : : ~ WritableFile ( ) ` ] ( # tensorflow_WritableFile_WritableFile ) <nl> - * [ ` virtual Status tensorflow : : WritableFile : : Append ( const StringPiece & data ) = 0 ` ] ( # virtual_Status_tensorflow_WritableFile_Append ) <nl> - * [ ` virtual Status tensorflow : : WritableFile : : Close ( ) = 0 ` ] ( # virtual_Status_tensorflow_WritableFile_Close ) <nl> - * [ ` virtual Status tensorflow : : WritableFile : : Flush ( ) = 0 ` ] ( # virtual_Status_tensorflow_WritableFile_Flush ) <nl> - * [ ` virtual Status tensorflow : : WritableFile : : Sync ( ) = 0 ` ] ( # virtual_Status_tensorflow_WritableFile_Sync ) <nl> - <nl> - # # Member Details <nl> + # # # Member Details <nl> <nl> # # # # ` tensorflow : : WritableFile : : WritableFile ( ) ` { # tensorflow_WritableFile_WritableFile } <nl> <nl> mmm a / tensorflow / g3doc / api_docs / cc / 
StructSessionOptions . md <nl> ppp b / tensorflow / g3doc / api_docs / cc / StructSessionOptions . md <nl> <nl> - # Struct ` tensorflow : : SessionOptions ` <nl> + # ` struct tensorflow : : SessionOptions ` <nl> <nl> Configuration information for a Session . <nl> <nl> <nl> <nl> - # # Member Summary <nl> - <nl> - * [ ` Env * tensorflow : : SessionOptions : : env ` ] ( # Env_tensorflow_SessionOptions_env ) <nl> - * The environment to use . <nl> - * [ ` string tensorflow : : SessionOptions : : target ` ] ( # string_tensorflow_SessionOptions_target ) <nl> - * The TensorFlow runtime to connect to . <nl> - * [ ` ConfigProto tensorflow : : SessionOptions : : config ` ] ( # ConfigProto_tensorflow_SessionOptions_config ) <nl> - * Configuration options . <nl> - * [ ` tensorflow : : SessionOptions : : SessionOptions ( ) ` ] ( # tensorflow_SessionOptions_SessionOptions ) <nl> - <nl> - # # Member Details <nl> + # # # Member Details <nl> <nl> # # # # ` Env * tensorflow : : SessionOptions : : env ` { # Env_tensorflow_SessionOptions_env } <nl> <nl> mmm a / tensorflow / g3doc / api_docs / cc / StructState . md <nl> ppp b / tensorflow / g3doc / api_docs / cc / StructState . md <nl> <nl> - # Struct ` tensorflow : : Status : : State ` <nl> + # ` struct tensorflow : : Status : : State ` <nl> <nl> <nl> <nl> <nl> <nl> - # # Member Summary <nl> - <nl> - * [ ` tensorflow : : error : : Code tensorflow : : Status : : State : : code ` ] ( # tensorflow_error_Code_tensorflow_Status_State_code ) <nl> - * [ ` string tensorflow : : Status : : State : : msg ` ] ( # string_tensorflow_Status_State_msg ) <nl> - <nl> - # # Member Details <nl> + # # # Member Details <nl> <nl> # # # # ` tensorflow : : error : : Code tensorflow : : Status : : State : : code ` { # tensorflow_error_Code_tensorflow_Status_State_code } <nl> <nl> mmm a / tensorflow / g3doc / api_docs / cc / StructTF_Buffer . md <nl> ppp b / tensorflow / g3doc / api_docs / cc / StructTF_Buffer . 
md <nl> <nl> - # Struct ` TF_Buffer ` <nl> + # ` struct TF_Buffer ` <nl> <nl> <nl> <nl> <nl> <nl> - # # Member Summary <nl> - <nl> - * [ ` const void * TF_Buffer : : data ` ] ( # const_void_TF_Buffer_data ) <nl> - * [ ` size_t TF_Buffer : : length ` ] ( # size_t_TF_Buffer_length ) <nl> - <nl> - # # Member Details <nl> + # # # Member Details <nl> <nl> # # # # ` const void * TF_Buffer : : data ` { # const_void_TF_Buffer_data } <nl> <nl> mmm a / tensorflow / g3doc / api_docs / cc / StructTensorShapeDim . md <nl> ppp b / tensorflow / g3doc / api_docs / cc / StructTensorShapeDim . md <nl> <nl> - # Struct ` tensorflow : : TensorShapeDim ` <nl> + # ` struct tensorflow : : TensorShapeDim ` <nl> <nl> <nl> <nl> <nl> <nl> - # # Member Summary <nl> + # # # Member Details <nl> <nl> - * [ ` int tensorflow : : TensorShapeDim : : size ` ] ( # int_tensorflow_TensorShapeDim_size ) <nl> - * [ ` tensorflow : : TensorShapeDim : : TensorShapeDim ( int64 s ) ` ] ( # tensorflow_TensorShapeDim_TensorShapeDim ) <nl> - <nl> - # # Member Details <nl> - <nl> - # # # # ` int tensorflow : : TensorShapeDim : : size ` { # int_tensorflow_TensorShapeDim_size } <nl> + # # # # ` int64 tensorflow : : TensorShapeDim : : size ` { # int64_tensorflow_TensorShapeDim_size } <nl> <nl> <nl> <nl> mmm a / tensorflow / g3doc / api_docs / cc / StructThreadOptions . md <nl> ppp b / tensorflow / g3doc / api_docs / cc / StructThreadOptions . md <nl> <nl> - # Struct ` tensorflow : : ThreadOptions ` <nl> + # ` struct tensorflow : : ThreadOptions ` <nl> <nl> Options to configure a Thread . <nl> <nl> Note that the options are all hints , and the underlying implementation may choose to ignore it . <nl> <nl> - # # Member Summary <nl> - <nl> - * [ ` size_t tensorflow : : ThreadOptions : : stack_size ` ] ( # size_t_tensorflow_ThreadOptions_stack_size ) <nl> - * Thread stack size to use ( in bytes ) . 
<nl> - * [ ` size_t tensorflow : : ThreadOptions : : guard_size ` ] ( # size_t_tensorflow_ThreadOptions_guard_size ) <nl> - * Guard area size to use near thread stacks to use ( in bytes ) <nl> - <nl> - # # Member Details <nl> + # # # Member Details <nl> <nl> # # # # ` size_t tensorflow : : ThreadOptions : : stack_size ` { # size_t_tensorflow_ThreadOptions_stack_size } <nl> <nl> mmm a / tensorflow / g3doc / api_docs / cc / index . md <nl> ppp b / tensorflow / g3doc / api_docs / cc / index . md <nl> write the graph to a file . <nl> <nl> # # Env <nl> <nl> - * [ tensorflow : : Env ] ( ClassEnv . md ) <nl> - * [ tensorflow : : RandomAccessFile ] ( ClassRandomAccessFile . md ) <nl> - * [ tensorflow : : WritableFile ] ( ClassWritableFile . md ) <nl> - * [ tensorflow : : EnvWrapper ] ( ClassEnvWrapper . md ) <nl> + * [ tensorflow : : Env ] ( classEnv . md ) <nl> + * [ tensorflow : : RandomAccessFile ] ( classRandomAccessFile . md ) <nl> + * [ tensorflow : : WritableFile ] ( classWritableFile . md ) <nl> + * [ tensorflow : : EnvWrapper ] ( classEnvWrapper . md ) <nl> <nl> # # Session <nl> <nl> - * [ tensorflow : : Session ] ( ClassSession . md ) <nl> - * [ tensorflow : : SessionOptions ] ( StructSessionOptions . md ) <nl> + * [ tensorflow : : Session ] ( classSession . md ) <nl> + * [ tensorflow : : SessionOptions ] ( structSessionOptions . md ) <nl> <nl> # # Status <nl> <nl> - * [ tensorflow : : Status ] ( ClassStatus . md ) <nl> - * [ tensorflow : : Status : : State ] ( StructState . md ) <nl> + * [ tensorflow : : Status ] ( classStatus . md ) <nl> + * [ tensorflow : : Status : : State ] ( structState . md ) <nl> <nl> # # Tensor <nl> <nl> - * [ tensorflow : : Tensor ] ( ClassTensor . md ) <nl> - * [ tensorflow : : TensorShape ] ( ClassTensorShape . md ) <nl> - * [ tensorflow : : TensorShapeDim ] ( StructTensorShapeDim . md ) <nl> - * [ tensorflow : : TensorShapeUtils ] ( ClassTensorShapeUtils . 
md ) <nl> - * [ tensorflow : : PartialTensorShape ] ( ClassPartialTensorShape . md ) <nl> - * [ tensorflow : : PartialTensorShapeUtils ] ( ClassPartialTensorShapeUtils . md ) <nl> - * [ TF_Buffer ] ( StructTF_Buffer . md ) <nl> + * [ tensorflow : : Tensor ] ( classTensor . md ) <nl> + * [ tensorflow : : TensorShape ] ( classTensorShape . md ) <nl> + * [ tensorflow : : TensorShapeDim ] ( structTensorShapeDim . md ) <nl> + * [ tensorflow : : TensorShapeUtils ] ( classTensorShapeUtils . md ) <nl> + * [ tensorflow : : PartialTensorShape ] ( classPartialTensorShape . md ) <nl> + * [ tensorflow : : PartialTensorShapeUtils ] ( classPartialTensorShapeUtils . md ) <nl> + * [ TF_Buffer ] ( structTF_Buffer . md ) <nl> <nl> # # Thread <nl> <nl> - * [ tensorflow : : Thread ] ( ClassThread . md ) <nl> - * [ tensorflow : : ThreadOptions ] ( StructThreadOptions . md ) <nl> + * [ tensorflow : : Thread ] ( classThread . md ) <nl> + * [ tensorflow : : ThreadOptions ] ( structThreadOptions . md ) <nl> <nl> - <nl> - <nl> - < div class = ' sections - order ' style = " display : none ; " > <nl> - < ! - - <nl> - < ! - - ClassEnv . md - - > <nl> - < ! - - ClassRandomAccessFile . md - - > <nl> - < ! - - ClassWritableFile . md - - > <nl> - < ! - - ClassEnvWrapper . md - - > <nl> - < ! - - ClassSession . md - - > <nl> - < ! - - StructSessionOptions . md - - > <nl> - < ! - - ClassStatus . md - - > <nl> - < ! - - StructState . md - - > <nl> - < ! - - ClassTensor . md - - > <nl> - < ! - - ClassTensorShape . md - - > <nl> - < ! - - StructTensorShapeDim . md - - > <nl> - < ! - - ClassTensorShapeUtils . md - - > <nl> - < ! - - ClassPartialTensorShape . md - - > <nl> - < ! - - ClassPartialTensorShapeUtils . md - - > <nl> - < ! - - StructTF_Buffer . md - - > <nl> - < ! - - ClassThread . md - - > <nl> - < ! - - StructThreadOptions . 
md - - > <nl> mmm > <nl> - < / div > <nl> mmm a / tensorflow / g3doc / api_docs / leftnav_files <nl> ppp b / tensorflow / g3doc / api_docs / leftnav_files <nl> python / python_io . md <nl> python / nn . md <nl> python / client . md <nl> python / train . md <nl> - # # # [ C + + API ] ( / api_docs / cc / index . md ) <nl> + python / script_ops . md <nl> + python / test . md <nl> + python / contrib . layers . md <nl> + python / contrib . util . md <nl> + > > > [ C + + API ] ( / api_docs / cc / index . md ) <nl> cc / ClassEnv . md <nl> cc / ClassRandomAccessFile . md <nl> cc / ClassWritableFile . md <nl> cc / ClassTensor . md <nl> cc / ClassTensorShape . md <nl> cc / StructTensorShapeDim . md <nl> cc / ClassTensorShapeUtils . md <nl> + cc / ClassPartialTensorShape . md <nl> + cc / ClassPartialTensorShapeUtils . md <nl> cc / ClassThread . md <nl> - cc / StructThreadOptions . md <nl> \ No newline at end of file <nl> + cc / StructThreadOptions . md <nl> mmm a / tensorflow / g3doc / api_docs / python / array_ops . md <nl> ppp b / tensorflow / g3doc / api_docs / python / array_ops . md <nl> <nl> # Tensor Transformations <nl> <nl> Note : Functions taking ` Tensor ` arguments can also take anything accepted by <nl> - [ ` tf . convert_to_tensor ` ] ( . . / . . / api_docs / python / framework . md # convert_to_tensor ) . <nl> + [ ` tf . convert_to_tensor ` ] ( framework . md # convert_to_tensor ) . <nl> <nl> [ TOC ] <nl> <nl> mmm a / tensorflow / g3doc / api_docs / python / constant_op . md <nl> ppp b / tensorflow / g3doc / api_docs / python / constant_op . md <nl> <nl> # Constants , Sequences , and Random Values <nl> <nl> Note : Functions taking ` Tensor ` arguments can also take anything accepted by <nl> - [ ` tf . convert_to_tensor ` ] ( . . / . . / api_docs / python / framework . md # convert_to_tensor ) . <nl> + [ ` tf . convert_to_tensor ` ] ( framework . md # convert_to_tensor ) . <nl> <nl> [ TOC ] <nl> <nl> new file mode 100644 <nl> index 0000000000000 . . 
d9351e47edf9b <nl> mmm / dev / null <nl> ppp b / tensorflow / g3doc / api_docs / python / contrib . layers . md <nl> <nl> + < ! - - This file is machine generated : DO NOT EDIT ! - - > <nl> + <nl> + # Layers ( contrib ) <nl> + [ TOC ] <nl> + <nl> + Ops for building neural network layers , regularizers , summaries , etc . <nl> + <nl> + # # Higher level ops for building neural network layers . <nl> + <nl> + This package provides several ops that take care of creating variables that are <nl> + used internally in a consistent way and provide the building blocks for many <nl> + common machine learning algorithms . <nl> + <nl> + - - - <nl> + <nl> + # # # ` tf . contrib . layers . convolution2d ( x , num_output_channels , kernel_size , activation_fn = None , stride = ( 1 , 1 ) , padding = ' SAME ' , weight_init = _initializer , bias_init = _initializer , name = None , weight_collections = None , bias_collections = None , output_collections = None , weight_regularizer = None , bias_regularizer = None ) ` { # convolution2d } <nl> + <nl> + Adds the parameters for a conv2d layer and returns the output . <nl> + <nl> + A neural network convolution layer is generally defined as : <nl> + \ \ ( y = f ( conv2d ( w , x ) + b ) \ \ ) where * * f * * is given by ` activation_fn ` , <nl> + * * conv2d * * is ` tf . nn . conv2d ` and ` x ` has shape <nl> + ` [ batch , height , width , channels ] ` . The output of this op is of shape <nl> + ` [ batch , out_height , out_width , num_output_channels ] ` , where ` out_width ` and <nl> + ` out_height ` are determined by the ` padding ` argument . See ` conv2D ` for <nl> + details . <nl> + <nl> + This op creates ` w ` and optionally ` b ` and adds various summaries that can be <nl> + useful for visualizing learning or diagnosing training problems . Bias can be <nl> + disabled by setting ` bias_init ` to ` None ` . <nl> + <nl> + The variable creation is compatible with ` tf . variable_scope ` and so can be <nl> + reused with ` tf . 
variable_scope ` or ` tf . make_template ` . <nl> + <nl> + Most of the details of variable creation can be controlled by specifying the <nl> + initializers ( ` weight_init ` and ` bias_init ` ) and which collections to place <nl> + the created variables in ( ` weight_collections ` and ` bias_collections ` ) . <nl> + <nl> + A per layer regularization can be specified by setting ` weight_regularizer ` . <nl> + This is only applied to weights and not the bias . <nl> + <nl> + # # # # # Args : <nl> + <nl> + <nl> + * < b > ` x ` < / b > : A 4 - D input ` Tensor ` . <nl> + * < b > ` num_output_channels ` < / b > : The number of output channels ( i . e . the size of the <nl> + last dimension of the output ) . <nl> + * < b > ` kernel_size ` < / b > : A length 2 ` list ` or ` tuple ` containing the kernel size . <nl> + * < b > ` activation_fn ` < / b > : A function that requires a single Tensor that is applied as a <nl> + non - linearity . <nl> + * < b > ` stride ` < / b > : A length 2 ` list ` or ` tuple ` specifying the stride of the sliding <nl> + window across the image . <nl> + * < b > ` padding ` < / b > : A ` string ` from : " SAME " , " VALID " . The type of padding algorithm to <nl> + use . <nl> + * < b > ` weight_init ` < / b > : An optional initialization . If not specified , uses Xavier <nl> + initialization ( see ` tf . learn . xavier_initializer ` ) . <nl> + * < b > ` bias_init ` < / b > : An initializer for the bias , defaults to 0 . Set to ` None ` in order <nl> + to disable bias . <nl> + * < b > ` name ` < / b > : The name for this operation is used to name operations and to find <nl> + variables . If specified it must be unique for this scope , otherwise a <nl> + unique name starting with " convolution2d " will be created . See <nl> + ` tf . variable_op_scope ` for details . <nl> + * < b > ` weight_collections ` < / b > : List of graph collections to which weights are added . 
<nl> + * < b > ` bias_collections ` < / b > : List of graph collections to which biases are added . <nl> + * < b > ` output_collections ` < / b > : List of graph collections to which outputs are added . <nl> + * < b > ` weight_regularizer ` < / b > : A regularizer like the result of <nl> + ` l1_regularizer ` or ` l2_regularizer ` . Used for weights . <nl> + * < b > ` bias_regularizer ` < / b > : A regularizer like the result of <nl> + ` l1_regularizer ` or ` l2_regularizer ` . Used for biases . <nl> + <nl> + # # # # # Returns : <nl> + <nl> + The result of applying a 2 - D convolutional layer . <nl> + <nl> + # # # # # Raises : <nl> + <nl> + <nl> + * < b > ` ValueError ` < / b > : If ` kernel_size ` or ` stride ` are not length 2 . <nl> + <nl> + <nl> + - - - <nl> + <nl> + # # # ` tf . contrib . layers . fully_connected ( x , num_output_units , activation_fn = None , weight_init = _initializer , bias_init = _initializer , name = None , weight_collections = ( ' weights ' , ) , bias_collections = ( ' biases ' , ) , output_collections = ( ' activations ' , ) , weight_regularizer = None , bias_regularizer = None ) ` { # fully_connected } <nl> + <nl> + Adds the parameters for a fully connected layer and returns the output . <nl> + <nl> + A fully connected layer is generally defined as a matrix multiply : <nl> + ` y = f ( w * x + b ) ` where ` f ` is given by ` activation_fn ` . If <nl> + ` activation_fn ` is ` None ` , the result of ` y = w * x + b ` is <nl> + returned . <nl> + <nl> + This op creates ` w ` and optionally ` b ` . Bias ( ` b ` ) can be disabled by setting <nl> + ` bias_init ` to ` None ` . <nl> + <nl> + The variable creation is compatible with ` tf . variable_scope ` and so can be <nl> + reused with ` tf . variable_scope ` or ` tf . make_template ` . 
<nl> + <nl> + Most of the details of variable creation can be controlled by specifying the <nl> + initializers ( ` weight_init ` and ` bias_init ` ) and which collections to place <nl> + the created variables in ( ` weight_collections ` and ` bias_collections ` ; note that <nl> + the variables are always added to the ` VARIABLES ` collection ) . The output of <nl> + the layer can be placed in custom collections using ` output_collections ` . <nl> + The collections arguments default to ` WEIGHTS ` , ` BIASES ` and ` ACTIVATIONS ` , <nl> + respectively . <nl> + <nl> + A per layer regularization can be specified by setting ` weight_regularizer ` <nl> + and ` bias_regularizer ` , which are applied to the weights and biases <nl> + respectively , and whose output is added to the ` REGULARIZATION_LOSSES ` <nl> + collection . <nl> + <nl> + # # # # # Args : <nl> + <nl> + <nl> + * < b > ` x ` < / b > : The input ` Tensor ` . <nl> + * < b > ` num_output_units ` < / b > : The size of the output . <nl> + * < b > ` activation_fn ` < / b > : A function that requires a single Tensor that is applied as a <nl> + non - linearity . If None is used , do not apply any activation . <nl> + * < b > ` weight_init ` < / b > : An optional weight initialization , defaults to <nl> + ` xavier_initializer ` . <nl> + * < b > ` bias_init ` < / b > : An initializer for the bias , defaults to 0 . Set to ` None ` in <nl> + order to disable bias . <nl> + * < b > ` name ` < / b > : The name for this operation is used to name operations and to find <nl> + variables . If specified it must be unique for this scope , otherwise a <nl> + unique name starting with " fully_connected " will be created . See <nl> + ` tf . variable_op_scope ` for details . <nl> + * < b > ` weight_collections ` < / b > : List of graph collections to which weights are added . <nl> + * < b > ` bias_collections ` < / b > : List of graph collections to which biases are added .
<nl> + * < b > ` output_collections ` < / b > : List of graph collections to which outputs are added . <nl> + * < b > ` weight_regularizer ` < / b > : A regularizer like the result of <nl> + ` l1_regularizer ` or ` l2_regularizer ` . Used for weights . <nl> + * < b > ` bias_regularizer ` < / b > : A regularizer like the result of <nl> + ` l1_regularizer ` or ` l2_regularizer ` . Used for biases . <nl> + <nl> + # # # # # Returns : <nl> + <nl> + The output of the fully connected layer . <nl> + <nl> + <nl> + <nl> + Aliases for fully_connected which set a default activation function are <nl> + available : ` relu ` , ` relu6 ` and ` linear ` . <nl> + <nl> + # # Regularizers <nl> + <nl> + Regularization can help prevent overfitting . These have the signature <nl> + ` fn ( weights ) ` . The loss is typically added to ` tf . GraphKeys . REGULARIZATION_LOSSES ` . <nl> + <nl> + - - - <nl> + <nl> + # # # ` tf . contrib . layers . l1_regularizer ( scale ) ` { # l1_regularizer } <nl> + <nl> + Returns a function that can be used to apply L1 regularization to weights . <nl> + <nl> + L1 regularization encourages sparsity . <nl> + <nl> + # # # # # Args : <nl> + <nl> + <nl> + * < b > ` scale ` < / b > : A scalar multiplier ` Tensor ` . 0 . 0 disables the regularizer . <nl> + <nl> + # # # # # Returns : <nl> + <nl> + A function with signature ` l1 ( weights , name = None ) ` that applies L1 <nl> + regularization . <nl> + <nl> + # # # # # Raises : <nl> + <nl> + <nl> + * < b > ` ValueError ` < / b > : If scale is outside of the range [ 0 . 0 , 1 . 0 ] or if scale is not a <nl> + float . <nl> + <nl> + <nl> + - - - <nl> + <nl> + # # # ` tf . contrib . layers . l2_regularizer ( scale ) ` { # l2_regularizer } <nl> + <nl> + Returns a function that can be used to apply L2 regularization to weights . <nl> + <nl> + Small values of L2 can help prevent overfitting the training data . <nl> + <nl> + # # # # # Args : <nl> + <nl> + <nl> + * < b > ` scale ` < / b > : A scalar multiplier ` Tensor ` . 0 .
0 disables the regularizer . <nl> + <nl> + # # # # # Returns : <nl> + <nl> + A function with signature ` l2 ( weights , name = None ) ` that applies L2 <nl> + regularization . <nl> + <nl> + # # # # # Raises : <nl> + <nl> + <nl> + * < b > ` ValueError ` < / b > : If scale is outside of the range [ 0 . 0 , 1 . 0 ] or if scale is not a <nl> + float . <nl> + <nl> + <nl> + <nl> + # # Initializers <nl> + <nl> + Initializers are used to initialize variables with sensible values given their <nl> + size , data type , and purpose . <nl> + <nl> + - - - <nl> + <nl> + # # # ` tf . contrib . layers . xavier_initializer ( uniform = True , seed = None , dtype = tf . float32 ) ` { # xavier_initializer } <nl> + <nl> + Returns an initializer performing " Xavier " initialization for weights . <nl> + <nl> + This function implements the weight initialization from : <nl> + <nl> + Xavier Glorot and Yoshua Bengio ( 2010 ) : <nl> + Understanding the difficulty of training deep feedforward neural <nl> + networks . International conference on artificial intelligence and <nl> + statistics . <nl> + <nl> + This initializer is designed to keep the scale of the gradients roughly the <nl> + same in all layers . In uniform distribution this ends up being the range : <nl> + ` x = sqrt ( 6 . / ( in + out ) ) ; [ - x , x ] ` and for normal distribution a standard <nl> + deviation of ` sqrt ( 3 . / ( in + out ) ) ` is used . <nl> + <nl> + The returned initializer assumes that the shape of the weight matrix to be <nl> + initialized is ` [ in , out ] ` . <nl> + <nl> + # # # # # Args : <nl> + <nl> + <nl> + * < b > ` uniform ` < / b > : Whether to use uniform or normal distributed random initialization . <nl> + * < b > ` seed ` < / b > : A Python integer . Used to create random seeds . See <nl> + [ ` set_random_seed ` ] ( . . / . . / api_docs / python / constant_op . md # set_random_seed ) <nl> + for behavior . <nl> + * < b > ` dtype ` < / b > : The data type . Only floating point types are supported . 
<nl> + <nl> + # # # # # Returns : <nl> + <nl> + An initializer for a 2 - D weight matrix . <nl> + <nl> + # # # # # Raises : <nl> + <nl> + <nl> + * < b > ` TypeError ` < / b > : If dtype is not a floating point type . <nl> + <nl> + <nl> + - - - <nl> + <nl> + # # # ` tf . contrib . layers . xavier_initializer_conv2d ( uniform = True , seed = None , dtype = tf . float32 ) ` { # xavier_initializer_conv2d } <nl> + <nl> + Returns an " Xavier " initializer for 2D convolution weights . <nl> + <nl> + For details on the initialization performed , see ` xavier_initializer ` . This <nl> + function initializes a convolution weight variable which is assumed to be 4 - D . <nl> + The first two dimensions are expected to be the kernel size , the third <nl> + dimension is the number of input channels , and the last dimension is the <nl> + number of output channels . <nl> + <nl> + The number of inputs is therefore ` shape [ 0 ] * shape [ 1 ] * shape [ 2 ] ` , and the number <nl> + of outputs is ` shape [ 0 ] * shape [ 1 ] * shape [ 3 ] ` . <nl> + <nl> + # # # # # Args : <nl> + <nl> + <nl> + * < b > ` uniform ` < / b > : Whether to use uniform or normal distributed random initialization . <nl> + * < b > ` seed ` < / b > : A Python integer . Used to create random seeds . See <nl> + [ ` set_random_seed ` ] ( . . / . . / api_docs / python / constant_op . md # set_random_seed ) <nl> + for behavior . <nl> + * < b > ` dtype ` < / b > : The data type . Only floating point types are supported . <nl> + <nl> + # # # # # Returns : <nl> + <nl> + An initializer for a 4 - D weight matrix . <nl> + <nl> + # # # # # Raises : <nl> + <nl> + <nl> + * < b > ` TypeError ` < / b > : If dtype is not a floating point type . <nl> + <nl> + <nl> + <nl> + # # Summaries <nl> + <nl> + Helper functions to summarize specific variables or ops . <nl> + <nl> + - - - <nl> + <nl> + # # # ` tf . contrib . layers . summarize_activation ( op ) ` { # summarize_activation } <nl> + <nl> + Summarize an activation . 
<nl> + <nl> + This applies the given activation and adds useful summaries specific to the <nl> + activation . <nl> + <nl> + # # # # # Args : <nl> + <nl> + <nl> + * < b > ` op ` < / b > : The tensor to summarize ( assumed to be a layer activation ) . <nl> + <nl> + # # # # # Returns : <nl> + <nl> + The summary op created to summarize ` op ` . <nl> + <nl> + <nl> + - - - <nl> + <nl> + # # # ` tf . contrib . layers . summarize_tensor ( tensor ) ` { # summarize_tensor } <nl> + <nl> + Summarize a tensor using a suitable summary type . <nl> + <nl> + This function adds a summary op for ` tensor ` . The type of summary depends on <nl> + the shape of ` tensor ` . For scalars , a ` scalar_summary ` is created , for all <nl> + other tensors , ` histogram_summary ` is used . <nl> + <nl> + # # # # # Args : <nl> + <nl> + <nl> + * < b > ` tensor ` < / b > : The tensor to summarize <nl> + <nl> + # # # # # Returns : <nl> + <nl> + The summary op created . <nl> + <nl> + <nl> + - - - <nl> + <nl> + # # # ` tf . contrib . layers . summarize_tensors ( tensors , summarizer = summarize_tensor ) ` { # summarize_tensors } <nl> + <nl> + Summarize a set of tensors . <nl> + <nl> + <nl> + - - - <nl> + <nl> + # # # ` tf . contrib . layers . summarize_collection ( collection , name_filter = None , summarizer = summarize_tensor ) ` { # summarize_collection } <nl> + <nl> + Summarize a graph collection of tensors , possibly filtered by name . <nl> + <nl> + <nl> + <nl> + The layers module defines convenience functions ` summarize_variables ` , <nl> + ` summarize_weights ` and ` summarize_biases ` , which set the ` collection ` argument <nl> + of ` summarize_collection ` to ` VARIABLES ` , ` WEIGHTS ` and ` BIASES ` , respectively . <nl> + <nl> + - - - <nl> + <nl> + # # # ` tf . contrib . layers . summarize_activations ( name_filter = None , summarizer = summarize_activation ) ` { # summarize_activations } <nl> + <nl> + Summarize activations , using ` summarize_activation ` to summarize . 
<nl> + <nl> + <nl> + <nl> + # # Other Functions and Classes <nl> + - - - <nl> + <nl> + # # # ` tf . contrib . layers . assert_same_float_dtype ( tensors = None , dtype = None ) ` { # assert_same_float_dtype } <nl> + <nl> + Validate and return float type based on ` tensors ` and ` dtype ` . <nl> + <nl> + For ops such as matrix multiplication , inputs and weights must be of the <nl> + same float type . This function validates that all ` tensors ` are the same type , <nl> + validates that type is ` dtype ` ( if supplied ) , and returns the type . Type must <nl> + be ` dtypes . float32 ` or ` dtypes . float64 ` . If neither ` tensors ` nor <nl> + ` dtype ` is supplied , default to ` dtypes . float32 ` . <nl> + <nl> + # # # # # Args : <nl> + <nl> + <nl> + * < b > ` tensors ` < / b > : Tensors of input values . Can include ` None ` elements , which will be <nl> + ignored . <nl> + * < b > ` dtype ` < / b > : Expected type . <nl> + <nl> + # # # # # Returns : <nl> + <nl> + Validated type . <nl> + <nl> + # # # # # Raises : <nl> + <nl> + <nl> + * < b > ` ValueError ` < / b > : if neither ` tensors ` nor ` dtype ` is supplied , or result is not <nl> + float . <nl> + <nl> + <nl> new file mode 100644 <nl> index 0000000000000 . . 26b71172cbd14 <nl> mmm / dev / null <nl> ppp b / tensorflow / g3doc / api_docs / python / contrib . util . md <nl> <nl> + < ! - - This file is machine generated : DO NOT EDIT ! - - > <nl> + <nl> + # Utilities ( contrib ) <nl> + [ TOC ] <nl> + <nl> + Utilities for dealing with Tensors . <nl> + <nl> + # # Miscellaneous Utility Functions <nl> + <nl> + - - - <nl> + <nl> + # # # ` tf . contrib . util . constant_value ( tensor ) ` { # constant_value } <nl> + <nl> + Returns the constant value of the given tensor , if efficiently calculable . <nl> + <nl> + This function attempts to partially evaluate the given tensor , and <nl> + returns its value as a numpy ndarray if this succeeds . 
<nl> + <nl> + TODO ( mrry ) : Consider whether this function should use a registration <nl> + mechanism like gradients and ShapeFunctions , so that it is easily <nl> + extensible . <nl> + <nl> + # # # # # Args : <nl> + <nl> + <nl> + * < b > ` tensor ` < / b > : The Tensor to be evaluated . <nl> + <nl> + # # # # # Returns : <nl> + <nl> + A numpy ndarray containing the constant value of the given ` tensor ` , <nl> + or None if it cannot be calculated . <nl> + <nl> + # # # # # Raises : <nl> + <nl> + <nl> + * < b > ` TypeError ` < / b > : if tensor is not an ops . Tensor . <nl> + <nl> + <nl> + - - - <nl> + <nl> + # # # ` tf . contrib . util . make_tensor_proto ( values , dtype = None , shape = None ) ` { # make_tensor_proto } <nl> + <nl> + Create a TensorProto . <nl> + <nl> + # # # # # Args : <nl> + <nl> + <nl> + * < b > ` values ` < / b > : Values to put in the TensorProto . <nl> + * < b > ` dtype ` < / b > : Optional tensor_pb2 DataType value . <nl> + * < b > ` shape ` < / b > : List of integers representing the dimensions of tensor . <nl> + <nl> + # # # # # Returns : <nl> + <nl> + A TensorProto . Depending on the type , it may contain data in the <nl> + " tensor_content " attribute , which is not directly useful to Python programs . <nl> + To access the values you should convert the proto back to a numpy ndarray <nl> + with tensor_util . MakeNdarray ( proto ) . <nl> + <nl> + # # # # # Raises : <nl> + <nl> + <nl> + * < b > ` TypeError ` < / b > : if unsupported types are provided . <nl> + * < b > ` ValueError ` < / b > : if arguments have inappropriate values . <nl> + <nl> + make_tensor_proto accepts " values " of a python scalar , a python list , a <nl> + numpy ndarray , or a numpy scalar . <nl> + <nl> + If " values " is a python scalar or a python list , make_tensor_proto <nl> + first convert it to numpy ndarray . If dtype is None , the <nl> + conversion tries its best to infer the right numpy data <nl> + type . 
Otherwise , the resulting numpy array has a compatible data <nl> + type with the given dtype . <nl> + <nl> + In either case above , the numpy ndarray ( either the caller provided <nl> + or the auto converted ) must have the compatible type with dtype . <nl> + <nl> + make_tensor_proto then converts the numpy array to a tensor proto . <nl> + <nl> + If " shape " is None , the resulting tensor proto represents the numpy <nl> + array precisely . <nl> + <nl> + Otherwise , " shape " specifies the tensor ' s shape and the numpy array <nl> + can not have more elements than what " shape " specifies . <nl> + <nl> + <nl> mmm a / tensorflow / g3doc / api_docs / python / control_flow_ops . md <nl> ppp b / tensorflow / g3doc / api_docs / python / control_flow_ops . md <nl> <nl> # Control Flow <nl> <nl> Note : Functions taking ` Tensor ` arguments can also take anything accepted by <nl> - [ ` tf . convert_to_tensor ` ] ( . . / . . / api_docs / python / framework . md # convert_to_tensor ) . <nl> + [ ` tf . convert_to_tensor ` ] ( framework . md # convert_to_tensor ) . <nl> <nl> [ TOC ] <nl> <nl> mmm a / tensorflow / g3doc / api_docs / python / image . md <nl> ppp b / tensorflow / g3doc / api_docs / python / image . md <nl> <nl> # Images <nl> <nl> Note : Functions taking ` Tensor ` arguments can also take anything accepted by <nl> - [ ` tf . convert_to_tensor ` ] ( . . / . . / api_docs / python / framework . md # convert_to_tensor ) . <nl> + [ ` tf . convert_to_tensor ` ] ( framework . md # convert_to_tensor ) . <nl> <nl> [ TOC ] <nl> <nl> Note that this implementation is limited : <nl> <nl> <nl> <nl> + # # Working with Bounding Boxes <nl> + <nl> + - - - <nl> + <nl> + # # # ` tf . image . draw_bounding_boxes ( images , boxes , name = None ) ` { # draw_bounding_boxes } <nl> + <nl> + Draw bounding boxes on a batch of images . <nl> + <nl> + Outputs a copy of ` images ` but draws on top of the pixels zero or more bounding <nl> + boxes specified by the locations in ` boxes ` . 
The coordinates of each <nl> + bounding box in ` boxes ` are encoded as ` [ y_min , x_min , y_max , x_max ] ` . The <nl> + bounding box coordinates are floats in ` [ 0 . 0 , 1 . 0 ] ` relative to the width and <nl> + height of the underlying image . <nl> + <nl> + For example , if an image is 100 x 200 pixels and the bounding box is <nl> + ` [ 0 . 1 , 0 . 5 , 0 . 2 , 0 . 9 ] ` , the bottom - left and upper - right coordinates of the <nl> + bounding box will be ` ( 10 , 40 ) ` to ` ( 50 , 180 ) ` . <nl> + <nl> + Parts of the bounding box may fall outside the image . <nl> + <nl> + # # # # # Args : <nl> + <nl> + <nl> + * < b > ` images ` < / b > : A ` Tensor ` of type ` float32 ` . <nl> + 4 - D with shape ` [ batch , height , width , depth ] ` . A batch of images . <nl> + * < b > ` boxes ` < / b > : A ` Tensor ` of type ` float32 ` . <nl> + 3 - D with shape ` [ batch , num_bounding_boxes , 4 ] ` containing bounding <nl> + boxes . <nl> + * < b > ` name ` < / b > : A name for the operation ( optional ) . <nl> + <nl> + # # # # # Returns : <nl> + <nl> + A ` Tensor ` of type ` float32 ` . <nl> + 4 - D with the same shape as ` images ` . The batch of input images with <nl> + bounding boxes drawn on the images . <nl> + <nl> + <nl> + - - - <nl> + <nl> + # # # ` tf . image . sample_distorted_bounding_box ( image_size , bounding_boxes , seed = None , seed2 = None , min_object_covered = None , aspect_ratio_range = None , area_range = None , max_attempts = None , use_image_if_no_bounding_boxes = None , name = None ) ` { # sample_distorted_bounding_box } <nl> + <nl> + Generate a single randomly distorted bounding box for an image . <nl> + <nl> + Bounding box annotations are often supplied in addition to ground - truth labels <nl> + in image recognition or object localization tasks . A common technique for <nl> + training such a system is to randomly distort an image while preserving <nl> + its content , i . e . * data augmentation * .
This Op outputs a randomly distorted <nl> + localization of an object , i . e . bounding box , given an ` image_size ` , <nl> + ` bounding_boxes ` and a series of constraints . <nl> + <nl> + The output of this Op is a single bounding box that may be used to crop the <nl> + original image . The output is returned as 3 tensors : ` begin ` , ` size ` and <nl> + ` bboxes ` . The first 2 tensors can be fed directly into ` tf . slice ` to crop the <nl> + image . The latter may be supplied to ` tf . image . draw_bounding_box ` to visualize <nl> + what the bounding box looks like . <nl> + <nl> + Bounding boxes are supplied and returned as ` [ y_min , x_min , y_max , x_max ] ` . The <nl> + bounding box coordinates are floats in ` [ 0 . 0 , 1 . 0 ] ` relative to the width and <nl> + height of the underlying image . <nl> + <nl> + For example , <nl> + <nl> + # Generate a single distorted bounding box . <nl> + begin , size , bbox_for_draw = tf . image . sample_distorted_bounding_box ( <nl> + tf . shape ( image ) , <nl> + bounding_boxes = bounding_boxes ) <nl> + <nl> + # Draw the bounding box in an image summary . <nl> + image_with_box = tf . image . draw_bounding_boxes ( tf . expand_dims ( image , 0 ) , <nl> + bbox_for_draw ) <nl> + tf . image_summary ( ' images_with_box ' , image_with_box ) <nl> + <nl> + # Employ the bounding box to distort the image . <nl> + distorted_image = tf . slice ( image , begin , size ) <nl> + <nl> + Note that if no bounding box information is available , setting <nl> + ` use_image_if_no_bounding_boxes = true ` will assume there is a single implicit <nl> + bounding box covering the whole image . If ` use_image_if_no_bounding_boxes ` is <nl> + false and no bounding boxes are supplied , an error is raised . <nl> + <nl> + # # # # # Args : <nl> + <nl> + <nl> + * < b > ` image_size ` < / b > : A ` Tensor ` . Must be one of the following types : ` uint8 ` , ` int8 ` , ` int16 ` , ` int32 ` , ` int64 ` . 
<nl> + 1 - D , containing ` [ height , width , channels ] ` . <nl> + * < b > ` bounding_boxes ` < / b > : A ` Tensor ` of type ` float32 ` . <nl> + 3 - D with shape ` [ batch , N , 4 ] ` describing the N bounding boxes <nl> + associated with the image . <nl> + * < b > ` seed ` < / b > : An optional ` int ` . Defaults to ` 0 ` . <nl> + If either ` seed ` or ` seed2 ` are set to non - zero , the random number <nl> + generator is seeded by the given ` seed ` . Otherwise , it is seeded by a random <nl> + seed . <nl> + * < b > ` seed2 ` < / b > : An optional ` int ` . Defaults to ` 0 ` . <nl> + A second seed to avoid seed collision . <nl> + * < b > ` min_object_covered ` < / b > : An optional ` float ` . Defaults to ` 0 . 1 ` . <nl> + The cropped area of the image must contain at least this <nl> + fraction of any bounding box supplied . <nl> + * < b > ` aspect_ratio_range ` < / b > : An optional list of ` floats ` . Defaults to ` [ 0 . 75 , 1 . 33 ] ` . <nl> + The cropped area of the image must have an aspect ratio = <nl> + width / height within this range . <nl> + * < b > ` area_range ` < / b > : An optional list of ` floats ` . Defaults to ` [ 0 . 05 , 1 ] ` . <nl> + The cropped area of the image must contain a fraction of the <nl> + supplied image within this range . <nl> + * < b > ` max_attempts ` < / b > : An optional ` int ` . Defaults to ` 100 ` . <nl> + Number of attempts at generating a cropped region of the image <nl> + of the specified constraints . After ` max_attempts ` failures , return the entire <nl> + image . <nl> + * < b > ` use_image_if_no_bounding_boxes ` < / b > : An optional ` bool ` . Defaults to ` False ` . <nl> + Controls behavior if no bounding boxes supplied . <nl> + If true , assume an implicit bounding box covering the whole input . If false , <nl> + raise an error . <nl> + * < b > ` name ` < / b > : A name for the operation ( optional ) .
+
+##### Returns:
+
+  A tuple of `Tensor` objects (begin, size, bboxes).
+
+* <b>`begin`</b>: A `Tensor`. Has the same type as `image_size`. 1-D, containing `[offset_height, offset_width, 0]`. Provide as input to
+  `tf.slice`.
+* <b>`size`</b>: A `Tensor`. Has the same type as `image_size`. 1-D, containing `[target_height, target_width, -1]`. Provide as input to
+  `tf.slice`.
+* <b>`bboxes`</b>: A `Tensor` of type `float32`. 3-D with shape `[1, 1, 4]` containing the distorted bounding box.
+  Provide as input to `tf.image.draw_bounding_boxes`.
+
+
+
## Other Functions and Classes
- - -

--- a/tensorflow/g3doc/api_docs/python/index.md
+++ b/tensorflow/g3doc/api_docs/python/index.md

* [`crop_to_bounding_box`](../../api_docs/python/image.md#crop_to_bounding_box)
* [`decode_jpeg`](../../api_docs/python/image.md#decode_jpeg)
* [`decode_png`](../../api_docs/python/image.md#decode_png)
+* [`draw_bounding_boxes`](../../api_docs/python/image.md#draw_bounding_boxes)
* [`encode_jpeg`](../../api_docs/python/image.md#encode_jpeg)
* [`encode_png`](../../api_docs/python/image.md#encode_png)
* [`extract_glimpse`](../../api_docs/python/image.md#extract_glimpse)

* [`resize_nearest_neighbor`](../../api_docs/python/image.md#resize_nearest_neighbor)
* [`rgb_to_grayscale`](../../api_docs/python/image.md#rgb_to_grayscale)
* [`rgb_to_hsv`](../../api_docs/python/image.md#rgb_to_hsv)
+* [`sample_distorted_bounding_box`](../../api_docs/python/image.md#sample_distorted_bounding_box)
* [`saturate_cast`](../../api_docs/python/image.md#saturate_cast)
* [`transpose_image`](../../api_docs/python/image.md#transpose_image)

* [`is_built_with_cuda`](../../api_docs/python/test.md#is_built_with_cuda)
* [`main`](../../api_docs/python/test.md#main)

+* **[Layers (contrib)](../../api_docs/python/contrib.layers.md)**:
+  * [`assert_same_float_dtype`](../../api_docs/python/contrib.layers.md#assert_same_float_dtype)
+  * [`convolution2d`](../../api_docs/python/contrib.layers.md#convolution2d)
+  * [`fully_connected`](../../api_docs/python/contrib.layers.md#fully_connected)
+  * [`l1_regularizer`](../../api_docs/python/contrib.layers.md#l1_regularizer)
+  * [`l2_regularizer`](../../api_docs/python/contrib.layers.md#l2_regularizer)
+  * [`summarize_activation`](../../api_docs/python/contrib.layers.md#summarize_activation)
+  * [`summarize_activations`](../../api_docs/python/contrib.layers.md#summarize_activations)
+  * [`summarize_collection`](../../api_docs/python/contrib.layers.md#summarize_collection)
+  * [`summarize_tensor`](../../api_docs/python/contrib.layers.md#summarize_tensor)
+  * [`summarize_tensors`](../../api_docs/python/contrib.layers.md#summarize_tensors)
+  * [`xavier_initializer`](../../api_docs/python/contrib.layers.md#xavier_initializer)
+  * [`xavier_initializer_conv2d`](../../api_docs/python/contrib.layers.md#xavier_initializer_conv2d)
+
+* **[Utilities (contrib)](../../api_docs/python/contrib.util.md)**:
+  * [`constant_value`](../../api_docs/python/contrib.util.md#constant_value)
+  * [`make_tensor_proto`](../../api_docs/python/contrib.util.md#make_tensor_proto)
+
--- a/tensorflow/g3doc/api_docs/python/io_ops.md
+++ b/tensorflow/g3doc/api_docs/python/io_ops.md

# Inputs and Readers

Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](../../api_docs/python/framework.md#convert_to_tensor).
+[`tf.convert_to_tensor`](framework.md#convert_to_tensor).

[TOC]

--- a/tensorflow/g3doc/api_docs/python/math_ops.md
+++ b/tensorflow/g3doc/api_docs/python/math_ops.md

# Math

Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](../../api_docs/python/framework.md#convert_to_tensor).
+[`tf.convert_to_tensor`](framework.md#convert_to_tensor).

[TOC]

--- a/tensorflow/g3doc/api_docs/python/nn.md
+++ b/tensorflow/g3doc/api_docs/python/nn.md

# Neural Network

Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](../../api_docs/python/framework.md#convert_to_tensor).
+[`tf.convert_to_tensor`](framework.md#convert_to_tensor).

[TOC]

expression `tf.nn.softmax(tf.matmul(inputs, weights) + biases)`.
See our [Candidate Sampling Algorithms Reference]
(../../extras/candidate_sampling.pdf)

-Also see Section 3 of http://arxiv.org/abs/1412.2007 for the math.
+Also see Section 3 of [Jean et al., 2014](http://arxiv.org/abs/1412.2007)
+([pdf](http://arxiv.org/pdf/1412.2007.pdf)) for the math.

##### Args:

--- a/tensorflow/g3doc/api_docs/python/script_ops.md
+++ b/tensorflow/g3doc/api_docs/python/script_ops.md

# Wraps python functions

Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](../../api_docs/python/framework.md#convert_to_tensor).
+[`tf.convert_to_tensor`](framework.md#convert_to_tensor).

[TOC]

--- a/tensorflow/g3doc/api_docs/python/sparse_ops.md
+++ b/tensorflow/g3doc/api_docs/python/sparse_ops.md

# Sparse Tensors

Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](../../api_docs/python/framework.md#convert_to_tensor).
+[`tf.convert_to_tensor`](framework.md#convert_to_tensor).

[TOC]

--- a/tensorflow/g3doc/api_docs/python/state_ops.md
+++ b/tensorflow/g3doc/api_docs/python/state_ops.md

# Variables

Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](../../api_docs/python/framework.md#convert_to_tensor).
+[`tf.convert_to_tensor`](framework.md#convert_to_tensor).

[TOC]

to keep the scale intact, where `dim = W.shape[0]` (the size of the input).
A similar calculation for convolutional networks gives an analogous result
with `dim` equal to the product of the first 3 dimensions. When
nonlinearities are present, we need to multiply this by a constant `factor`.
-See <https://arxiv.org/pdf/1412.6558v3.pdf> for deeper motivation, experiments
+See [Sussillo et al., 2014](https://arxiv.org/abs/1412.6558)
+([pdf](http://arxiv.org/pdf/1412.6558.pdf)) for deeper motivation, experiments
and the calculation of constants. In section 2.3 there, the constants were
numerically computed: for a linear layer it's 1.0, relu: ~1.43, tanh: ~1.15.

This operation computes
This operation outputs `ref` after the update is done.
This makes it easier to chain operations that need to use the reset value.

-If `indices` contains duplicate entries, lexicographically later entries
-override earlier entries.
+If values in `ref` is to be updated more than once, because there are
+duplicate entires in `indices`, the order at which the updates happen
+for each value is undefined.

Requires `updates.shape = indices.shape + ref.shape[1:]`.

--- a/tensorflow/g3doc/api_docs/python/train.md
+++ b/tensorflow/g3doc/api_docs/python/train.md
Construct a new Momentum optimizer.

Optimizer that implements the Adam algorithm.

-See this [paper](http://arxiv.org/pdf/1412.6980v7.pdf).
+See [Kingma et. al., 2014](http://arxiv.org/abs/1412.6980)
+([pdf](http://arxiv.org/pdf/1412.6980.pdf)).

- - -

otherwise they're all shrunk by the global ratio.
Any of the entries of `t_list` that are of type `None` are ignored.

This is the correct way to perform gradient clipping (for example, see
-R. Pascanu, T. Mikolov, and Y. Bengio, "On the difficulty of training
-Recurrent Neural Networks". http://arxiv.org/abs/1211.5063)
+[Pascanu et al., 2012](http://arxiv.org/abs/1211.5063)
+([pdf](http://arxiv.org/pdf/1211.5063.pdf))).

However, it is slower than `clip_by_norm()` because all the parameters must be
ready before the clipping operation can be performed.
a Python iterator that yields `Event` protocol buffers.
Example: Print the contents of an events file.

```python
-for e in tf.summary_iterator(path to events file):
+for e in tf.train.summary_iterator(path to events file):
    print(e)
```

Example: Print selected summary values.
# summary value tag 'loss'. These could have been added by calling
# `add_summary()`, passing the output of a scalar summary op created with
# with: `tf.scalar_summary(['loss'], loss_tensor)`.
-for e in tf.summary_iterator(path to events file):
+for e in tf.train.summary_iterator(path to events file):
    for v in e.summary.value:
        if v.tag == 'loss':
            print(v.simple_value)
--- a/tensorflow/g3doc/get_started/os_setup.md
+++ b/tensorflow/g3doc/get_started/os_setup.md
github source.

## Requirements

-The TensorFlow Python API currently supports Python 2.7 and Python 3.3+ from
-source.
+The TensorFlow Python API supports Python 2.7 and Python 3.3+.

-The GPU version (Linux only) currently requires the Cuda Toolkit 7.0 and cuDNN
-v2. Please see [Cuda installation](#optional-install-cuda-gpus-on-linux).
+The GPU version (Linux only) requires the Cuda Toolkit >= 7.0 and cuDNN >=
+v2. Please see [Cuda installation](#optional-install-cuda-gpus-on-linux)
+for details.

## Overview

Install TensorFlow:

```bash
# Ubuntu/Linux 64-bit, CPU only:
-$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.6.0-cp27-none-linux_x86_64.whl
+$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.7.0-py2-none-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled:
-$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.6.0-cp27-none-linux_x86_64.whl
+$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.7.0-py2-none-linux_x86_64.whl

# Mac OS X, CPU only:
$ sudo easy_install --upgrade six
-$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.6.0-py2-none-any.whl
+$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.7.0-py2-none-any.whl
```

For python3:

```bash
# Ubuntu/Linux 64-bit, CPU only:
-$ sudo pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.6.0-cp34-none-linux_x86_64.whl
+$ sudo pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.7.0-py3-none-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled:
-$ sudo pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.6.0-cp34-none-linux_x86_64.whl
+$ sudo pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.7.0-py3-none-linux_x86_64.whl

# Mac OS X, CPU only:
$ sudo easy_install --upgrade six
-$ sudo pip3 install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.6.0-py3-none-any.whl
+$ sudo pip3 install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.7.0-py3-none-any.whl
```


$ source ~/tensorflow/bin/activate.csh  # If using csh
(tensorflow)$  # Your prompt should change

# Ubuntu/Linux 64-bit, CPU only:
-(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.6.0-cp27-none-linux_x86_64.whl
+(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.7.0-py2-none-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled:
-(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.6.0-cp27-none-linux_x86_64.whl
+(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.7.0-py2-none-linux_x86_64.whl

# Mac OS X, CPU only:
-(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.6.0-py2-none-any.whl
+(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.7.0-py2-none-any.whl
```

and again for python3:
$ source ~/tensorflow/bin/activate.csh  # If using csh
(tensorflow)$  # Your prompt should change

# Ubuntu/Linux 64-bit, CPU only:
-(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.6.0-cp34-none-linux_x86_64.whl
+(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.7.0-py3-none-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled:
-(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.6.0-cp34-none-linux_x86_64.whl
+(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.7.0-py3-none-linux_x86_64.whl

# Mac OS X, CPU only:
-(tensorflow)$ pip3 install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.6.0-py3-none-any.whl
+(tensorflow)$ pip3 install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.7.0-py3-none-any.whl
```

With the Virtualenv environment activated, you can now
code.
* `b.gcr.io/tensorflow/tensorflow:latest-devel-gpu`: GPU Binary image plus source
code.

-We also have tags with `latest` replaced by a released version (eg `0.6.0-gpu`).
+We also have tags with `latest` replaced by a released version (e.g., `0.7.0-gpu`).

With Docker the installation is as follows:

Please specify the location of python. [Default is /usr/bin/python]:

#### Optional: Install CUDA (GPUs on Linux)

-In order to build or run TensorFlow with GPU support, both Cuda Toolkit 7.0 and
-cuDNN v2 from NVIDIA need to be installed.
+In order to build or run TensorFlow with GPU support, both NVIDIA's Cuda Toolkit (>= 7.0) and
+cuDNN (>= v2) need to be installed.

-TensorFlow GPU support requires having a GPU card with NVidia Compute Capability >= 3.5.
+TensorFlow GPU support requires having a GPU card with NVidia Compute Capability >= 3.0.
Supported cards include but are not limited to:

* NVidia Titan
Supported cards include but are not limited to:
* NVidia K20
* NVidia K40

-##### Download and install Cuda Toolkit 7.0
+##### Download and install Cuda Toolkit

-https://developer.nvidia.com/cuda-toolkit-70
+https://developer.nvidia.com/cuda-downloads

Install the toolkit into e.g. `/usr/local/cuda`

-##### Download and install cuDNN v2
+##### Download and install cuDNN

-https://developer.nvidia.com/rdp/cudnn-archive
+https://developer.nvidia.com/cudnn

Uncompress and copy the cuDNN files into the toolkit directory. Assuming the
-toolkit is installed in `/usr/local/cuda`:
+toolkit is installed in `/usr/local/cuda`, run the following commands (edited
+to reflect the cuDNN version you downloaded):

```bash
tar xvzf cudnn-6.5-linux-x64-v2.tgz
sudo chmod a+r /usr/local/cuda/lib64/libcudnn*
```

##### Configure TensorFlow's canonical view of Cuda libraries
+
When running the `configure` script from the root of your source tree, select
-the option `Y` when asked to build TensorFlow with GPU support.
+the option `Y` when asked to build TensorFlow with GPU support. If you have
+several versions of Cuda or cuDNN installed, you should definitely select
+one explicitly instead of relying on the system default. You should see
+prompts like the following:

```bash
$ ./configure
Please specify the location of python. [Default is /usr/bin/python]:
Do you wish to build TensorFlow with GPU support? [y/N] y
GPU support will be enabled for TensorFlow

-Please specify the location where CUDA 7.0 toolkit is installed. Refer to
-README.md for more details. [default is: /usr/local/cuda]: /usr/local/cuda
+Please specify the Cuda SDK version you want to use, e.g. 7.0. [Leave
+empty to use system default]: 7.5

-Please specify the location where the cuDNN v2 library is installed. Refer to
+Please specify the location where CUDA 7.5 toolkit is installed. Refer to
README.md for more details. [default is: /usr/local/cuda]: /usr/local/cuda

+Please specify the Cudnn version you want to use. [Leave empty to use system
+default]: 4.0.4
+
+Please specify the location where the cuDNN 4.0.4 library is installed. Refer to
+README.md for more details. [default is: /usr/local/cuda]: /usr/local/cudnn-r4-rc/
+
+Please specify a list of comma-separated Cuda compute capabilities you want to
+build with. You can find the compute capability of your device at:
+https://developer.nvidia.com/cuda-gpus.
+Please note that each additional compute capability significantly increases your
+build time and binary size. [Default is: \"3.5,5.2\"]: 3.5
+
Setting up Cuda include
Setting up Cuda lib64
Setting up Cuda bin
Configuration finished

This creates a canonical set of symbolic links to the Cuda libraries on your system.
Every time you change the Cuda library paths you need to run this step again before
-you invoke the bazel build command.
+you invoke the bazel build command. For the Cudnn libraries, use '6.5' for R2, '7.0'
+for R3, and '4.0.4' for R4-RC.
+

##### Build your target with GPU support
From the root of your source tree, run:
$ bazel-bin/tensorflow/cc/tutorials_example_trainer --use_gpu

Note that "--config=cuda" is needed to enable the GPU support.

-##### Enabling Cuda 3.0
-TensorFlow officially supports Cuda devices with 3.5 and 5.2 compute
-capabilities. In order to enable earlier Cuda devices such as Grid K520, you
-need to target Cuda 3.0. This can be done through TensorFlow unofficial
-settings with "configure".
-
-```bash
-$ TF_UNOFFICIAL_SETTING=1 ./configure
-
-# Same as the official settings above
-
-WARNING: You are configuring unofficial settings in TensorFlow. Because some
-external libraries are not backward compatible, these settings are largely
-untested and unsupported.
-
-Please specify a list of comma-separated Cuda compute capabilities you want to
-build with. You can find the compute capability of your device at:
-https://developer.nvidia.com/cuda-gpus.
-Please note that each additional compute capability significantly increases
-your build time and binary size. [Default is: "3.5,5.2"]: 3.0
-
-Setting up Cuda include
-Setting up Cuda lib64
-Setting up Cuda bin
-Setting up Cuda nvvm
-Configuration finished
-```
-
-##### Using a different Cuda SDK and Cudnn versions
-TensorFlow officially supports Cuda 7.0 and Cudnn V2 (6.5) at this point. In
-order to use a different Cuda SDK or Cudnn libraries, use the unofficial setting
-with "configure"
-
-```bash
-$ TF_UNOFFICIAL_SETTING=1 ./configure
-...
-Please specify the Cuda SDK version you want to use. [Default is 7.0]: 7.5
-Please specify the location where CUDA 7.5 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/local/cuda-7.5
-Please specify the Cudnn version you want to use. [Default is 6.5]: 4.0.4
-Please specify the location where cuDNN 4.0.4 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda-7.5]: /usr/local/cudnn-r4-rc/
-...
-Setting up Cuda include
-Setting up Cuda lib64
-Setting up Cuda bin
-Setting up Cuda nvvm
-Configuration finished
-```
-
-For the Cudnn libraries, use '6.5' for R2, '7.0' for R3, and '4.0.4' for
-R4-RC.
-
##### Known issues

* Although it is possible to build both Cuda and non-Cuda configs under the same
configs in the same source tree.

* You have to run configure before running bazel build. Otherwise, the build
will fail with a clear error message. In the future, we might consider making
-this more convenient by including the configure step in our build process,
-given necessary bazel new feature support.
+this more convenient by including the configure step in our build process.

### Installation for Mac OS X

$ bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_pack
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

# The name of the .whl file will depend on your platform.
-$ pip install /tmp/tensorflow_pkg/tensorflow-0.6.0-cp27-none-linux_x86_64.whl
+$ pip install /tmp/tensorflow_pkg/tensorflow-0.7.0-py2-none-linux_x86_64.whl
```

## Setting up TensorFlow for Development
ImportError: libcudart.so.7.0: cannot open shared object file: No such file or d
```

Make sure you followed the GPU installation [instructions](#optional-install-cuda-gpus-on-linux).
+If you built from source, and you left the Cuda or cuDNN version empty, try specifying them
+explicitly.

### Pip installation issues

--- a/tensorflow/python/BUILD
+++ b/tensorflow/python/BUILD
py_binary(
    srcs_version = "PY2AND3",
    deps = [
        ":docs",
-        ":platform",
        "//tensorflow:tensorflow_py",
    ],
)
new file mode 100644
index 0000000000000..3ec2b4dfa33d8
Binary files /dev/null and b/tensorflow/python/__init__.pyc differ
--- a/tensorflow/python/framework/docs.py
+++ b/tensorflow/python/framework/docs.py

Updates the documentation files.
"""
+
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
--- a/tensorflow/python/framework/gen_docs_combined.py
+++ b/tensorflow/python/framework/gen_docs_combined.py

FLAGS = tf.flags.FLAGS


-# TODO(josh11b, wicke): Remove the ../../api_docs/python/ once the
-# website can handle it.
PREFIX_TEXT = """
Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](../../api_docs/python/framework.md#convert_to_tensor).
+[`tf.convert_to_tensor`](framework.md#convert_to_tensor).
"""


def get_module_to_name():
-  return {tf: "tf",
-          tf.errors: "tf.errors",
-          tf.image: "tf.image",
-          tf.nn: "tf.nn",
-          tf.train: "tf.train",
-          tf.python_io: "tf.python_io",
-          tf.test: "tf.test",}
+  return {
+      tf: "tf",
+      tf.errors: "tf.errors",
+      tf.image: "tf.image",
+      tf.nn: "tf.nn",
+      tf.train: "tf.train",
+      tf.python_io: "tf.python_io",
+      tf.test: "tf.test",
+      tf.contrib.layers: "tf.contrib.layers",
+      tf.contrib.util: "tf.contrib.util",
+  }

def all_libraries(module_to_name, members, documented):
  # A list of (filename, docs.Library) pairs representing the individual files
def library(name, title, module=None, **args):
      "RankingExample", "SequenceExample"]),
      library("script_ops", "Wraps python functions", prefix=PREFIX_TEXT),
      library("test", "Testing", tf.test),
+      library("contrib.layers", "Layers (contrib)", tf.contrib.layers),
+      library("contrib.util", "Utilities (contrib)", tf.contrib.util),
  ]

_hidden_symbols = ["Event", "LogMessage", "Summary", "SessionLog", "xrange",
--- a/tensorflow/tensorboard/backend/tensorboard_handler_test.py
+++ b/tensorflow/tensorboard/backend/tensorboard_handler_test.py

from __future__ import division
from __future__ import print_function

+from six.moves import xrange
+
from tensorflow.python.platform import googletest
from tensorflow.tensorboard.backend import tensorboard_handler

--- a/tensorflow/tools/ci_build/builds/configured
+++ b/tensorflow/tools/ci_build/builds/configured
CONTAINER_TYPE=$(echo "$1" | tr '[:upper:]' '[:lower:]')
shift 1
COMMAND=("$@")

-export PYTHON_BIN_PATH="${PYTHON_BIN_PATH:-$(which python)}"
+export CI_BUILD_PYTHON="${CI_BUILD_PYTHON:-python}"
+export PYTHON_BIN_PATH="${PYTHON_BIN_PATH:-$(which ${CI_BUILD_PYTHON})}"
if [ "${CONTAINER_TYPE}" == "gpu" ]; then
  export TF_NEED_CUDA=1
else
--- a/tensorflow/tools/ci_build/builds/pip.sh
+++ b/tensorflow/tools/ci_build/builds/pip.sh
echo "Installing pip whl file: ${WHL_PATH}"

# Call pip install twice, first time with --upgrade and second time without it
# This addresses the sporadic test failures related to protobuf version
-${PYTHON_BIN_PATH} -m pip install -v --user --upgrade ${WHL_PATH} &&
+${PYTHON_BIN_PATH} -m pip install -v --user --upgrade ${WHL_PATH} numpy==1.8.2 &&
${PYTHON_BIN_PATH} -m pip install -v --user ${WHL_PATH} &&

# If NO_TEST_ON_INSTALL is set to any non-empty value, skip all Python
--- a/tensorflow/tools/ci_build/builds/with_the_same_user
+++ b/tensorflow/tools/ci_build/builds/with_the_same_user
getent group "${CI_BUILD_GID}" || addgroup --gid "${CI_BUILD_GID}" "${CI_BUILD_G
getent passwd "${CI_BUILD_UID}" || adduser --gid "${CI_BUILD_GID}" --uid "${CI_BUILD_UID}" \
--gecos "${CI_BUILD_USER} (generated by with_the_same_user script)" \
--disabled-password --home "${CI_BUILD_HOME}" --quiet "${CI_BUILD_USER}"
+sudo usermod -a -G sudo "${CI_BUILD_USER}"

cp /root/.bazelrc "${CI_BUILD_HOME}/.bazelrc"
chown "${CI_BUILD_UID}:${CI_BUILD_GID}" "${CI_BUILD_HOME}/.bazelrc"

-sudo -u "#${CI_BUILD_UID}" --preserve-env -H ${COMMAND[@]}
+sudo -u "#${CI_BUILD_UID}" --preserve-env "HOME=${CI_BUILD_HOME}" ${COMMAND[@]}
--- a/tensorflow/tools/ci_build/ci_parameterized_build.sh
+++ b/tensorflow/tools/ci_build/ci_parameterized_build.sh
BAZEL_TARGET="//tensorflow/..."

##########################################################

+echo "Parameterized build starts at: $(date)"
+echo ""
+START_TIME=$(date +'%s')
+
# Convert all the required environment variables to lower case
TF_BUILD_CONTAINER_TYPE=$(to_lower ${TF_BUILD_CONTAINER_TYPE})
TF_BUILD_PYTHON_VERSION=$(to_lower ${TF_BUILD_PYTHON_VERSION})
echo "TF_BUILD_APPEND_CI_DOCKER_EXTRA_PARAMS="\
"${TF_BUILD_APPEND_CI_DOCKER_EXTRA_PARAMS}"
echo "TF_BUILD_APPEND_ARGUMENTS=${TF_BUILD_APPEND_ARGUMENTS}"
echo "TF_BUILD_BAZEL_TARGET=${TF_BUILD_BAZEL_TARGET}"
+echo "TF_BUILD_BAZEL_CLEAN=${TF_BUILD_BAZEL_CLEAN}"
echo "TF_BUILD_SERIAL_TESTS=${TF_BUILD_SERIAL_TESTS}"

# Process container type
if [[ ${TF_BUILD_IS_PIP} == "no_pip" ]]; then
  # The 1st (build) step will be done in parallel, as default
  # But the 2nd (test) step will be done serially.

-  BUILD_ONLY_CMD="${BAZEL_BUILD_ONLY_CMD} ${OPT_FLAG} "\
+  BUILD_ONLY_CMD="${BAZEL_BUILD_ONLY_CMD} ${OPT_FLAG} "\
"${TF_BUILD_APPEND_ARGUMENTS} ${BAZEL_TARGET}"
  echo "Build-only command: ${BUILD_ONLY_CMD}"

if [[ ${TF_BUILD_PYTHON_VERSION} == "python2" ]]; then
elif [[ ${TF_BUILD_PYTHON_VERSION} == "python3" ]]; then
  # Supply proper environment variable to select Python 3
  if [[ "${DO_DOCKER}" == "1" ]]; then
-    EXTRA_PARAMS="${EXTRA_PARAMS} -e PYTHON_BIN_PATH=/usr/bin/python3"
+    EXTRA_PARAMS="${EXTRA_PARAMS} -e CI_BUILD_PYTHON=python3"
  else
    # Determine the path to python3
    PYTHON3_PATH=$(which python3 | head -1)
else
else
    ${TMP_SCRIPT}
  fi
-fi &&
+fi && FAILURE=0 || FAILURE=1
+[[ ${FAILURE} == "0" ]] && RESULT="SUCCESS" || RESULT="FAILURE"

rm -f ${TMP_SCRIPT}
+
+END_TIME=$(date +'%s')
+echo ""
+echo "Parameterized build ends with ${RESULT} at: $(date)" \
+"(Elapsed time: $((${END_TIME} - ${START_TIME}))s)"
+
+exit ${FAILURE}
--- a/tensorflow/tools/docker/Dockerfile
+++ b/tensorflow/tools/docker/Dockerfile
RUN pip --no-cache-dir install \
    python -m ipykernel.kernelspec

# Install TensorFlow CPU version.
-ENV TENSORFLOW_VERSION 0.6.0
+ENV TENSORFLOW_VERSION 0.7.0
RUN pip --no-cache-dir install \
-    https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-${TENSORFLOW_VERSION}-cp27-none-linux_x86_64.whl
+    http://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-${TENSORFLOW_VERSION}-py2-none-linux_x86_64.whl

# Set up our notebook config.
COPY jupyter_notebook_config.py /root/.jupyter/
--- a/tensorflow/tools/docker/Dockerfile.devel
+++ b/tensorflow/tools/docker/Dockerfile.devel
RUN mkdir /bazel && \

RUN git clone --recursive https://github.com/tensorflow/tensorflow.git && \
    cd tensorflow && \
-    git checkout 0.6.0
+    git checkout r0.7
WORKDIR /tensorflow

# TODO(craigcitro): Don't install the pip package, since it makes it
--- a/tensorflow/tools/docker/Dockerfile.devel-gpu
+++ b/tensorflow/tools/docker/Dockerfile.devel-gpu
RUN mkdir /bazel && \

RUN git clone --recursive https://github.com/tensorflow/tensorflow.git && \
    cd tensorflow && \
-    git checkout 0.6.0
+    git checkout r0.7
WORKDIR /tensorflow

# Configure the build for our CUDA configuration.
--- a/tensorflow/tools/docker/Dockerfile.gpu
+++ b/tensorflow/tools/docker/Dockerfile.gpu
RUN pip --no-cache-dir install \
    python -m ipykernel.kernelspec

# Install TensorFlow GPU version.
-ENV TENSORFLOW_VERSION 0.6.0
+ENV TENSORFLOW_VERSION 0.7.0
RUN pip --no-cache-dir install \
-    https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-${TENSORFLOW_VERSION}-cp27-none-linux_x86_64.whl
+    http://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-${TENSORFLOW_VERSION}-py2-none-linux_x86_64.whl

# Set up our notebook config.
COPY jupyter_notebook_config.py /root/.jupyter/
--- a/tensorflow/tools/docs/gen_cc_md.py
+++ b/tensorflow/tools/docs/gen_cc_md.py

ANCHOR_RE = re.compile(r'\W+')

-PAGE_TEMPLATE = '''# {0} `{1}`
+PAGE_TEMPLATE = '''# `{0} {1}`

{2}

-## Member Summary
+### Member Details

-{3}
-
-## Member Details
-
-{4}'''
+{3}'''

INDEX_TEMPLATE = '''# TensorFlow C++ Session API reference documentation

def index_page(pages):
    all_md_files.append(pages[page_index].get_md_filename())
    pages.pop(page_index)

-  # Footer
-  lines.append('''
-
-<div class='sections-order' style="display:none;">
-<!--''')
-  lines.extend([('<!-- %s -->' % f) for f in all_md_files])
-  lines.extend(['-->', '</div>'])
  return '\n'.join(lines)


def __init__(self, xml_path, deftype):
    fulls = all_fulls(members)
    self.overview = page_overview(soup.find('compounddef'))
    self.page_text = PAGE_TEMPLATE.format(
-        self.type, self.name, self.overview, briefs, fulls)
+        self.type, self.name, self.overview, fulls)

  def get_text(self):
    return self.page_text
def main(unused_argv):
    if len(fname) < 6: continue
    newpage = None
    if fname[0:5] == 'class':
-      newpage = Page(os.path.join(FLAGS.src_dir, fname), 'Class')
+      newpage = Page(os.path.join(FLAGS.src_dir, fname), 'class')
    elif fname[0:6] == 'struct':
-      newpage = Page(os.path.join(FLAGS.src_dir, fname), 'Struct')
+      newpage = Page(os.path.join(FLAGS.src_dir, fname), 'struct')
    if newpage is not None and page_in_name_list(newpage, all_pages):
      pages.append(newpage)
      md_filename = newpage.get_md_filename()
def main(unused_argv):
  return 0

if __name__ == '__main__':
-  try:
-    argv = FLAGS(sys.argv)  # parse flags
-  except flags.FlagsError as e:
-    print('%s\\nUsage: %s ARGS\\n%s' % (e, sys.argv[0], FLAGS))
-    sys.exit(1)
-  main(argv)
+  main(sys.argv)
--- a/tensorflow/tools/docs/gen_docs.sh
+++ b/tensorflow/tools/docs/gen_docs.sh
# limitations under the License.
# ==============================================================================

-# This script needs to be run from the tensorflow/tools directory
+# This script needs to be run from the tensorflow/tools/docs directory
# Pass -a to also rebuild C++ docs. This requires doxygen.

set -e
--- a/tensorflow/tools/docs/tf-doxy_for_md-config
+++ b/tensorflow/tools/docs/tf-doxy_for_md-config
WARN_LOGFILE =
# spaces.
# Note: If this tag is empty the current directory is searched.

-INPUT = core/
+INPUT = core/framework core/lib/core core/platform core/public

# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
--- a/tensorflow/tools/pip_package/build_pip_package.sh
+++ b/tensorflow/tools/pip_package/build_pip_package.sh
function main() {
    exit 1
  fi
  cp -R \
-    bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/* \
+    bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/{tensorflow,external,google} \
    ${TMPDIR}
+  # TODO: We should have cleaner solution for this after 0.
7 release <nl> + rm - rf $ { TMPDIR } / external / eigen_archive <nl> <nl> cp tensorflow / tools / pip_package / MANIFEST . in $ { TMPDIR } <nl> cp tensorflow / tools / pip_package / README $ { TMPDIR } <nl> mmm a / tensorflow / tools / pip_package / setup . py <nl> ppp b / tensorflow / tools / pip_package / setup . py <nl> <nl> from setuptools . command . install import install as InstallCommandBase <nl> from setuptools . dist import Distribution <nl> <nl> - _VERSION = ' 0 . 6 . 0 ' <nl> + _VERSION = ' 0 . 7 . 0 ' <nl> <nl> REQUIRED_PACKAGES = [ <nl> ' numpy > = 1 . 8 . 2 ' , <nl> def find_files ( pattern , root ) : <nl> version = _VERSION , <nl> description = ' TensorFlow helps the tensors flow ' , <nl> long_description = ' ' , <nl> - url = ' http : / / tensorflow . com / ' , <nl> + url = ' http : / / tensorflow . org / ' , <nl> author = ' Google Inc . ' , <nl> author_email = ' opensource @ google . com ' , <nl> # Contained modules and scripts . <nl>
|
Merge pull request from tensorflow / merge - 0 . 7
|
tensorflow/tensorflow
|
6671d58ff9c60c724582e4e7a4ddaad0a0acda5a
|
2016-02-17T04:23:28Z
|
mmm a / Marlin / Configuration . h <nl> ppp b / Marlin / Configuration . h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! - - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . <nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . <nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / Configuration_adv . h <nl> ppp b / Marlin / Configuration_adv . h <nl> <nl> <nl> / * * <nl> * Additional options for Graphical Displays <nl> - * <nl> + * <nl> * Use the optimizations here to improve printing performance , <nl> * which can be adversely affected by graphical display drawing , <nl> * especially when doing several short moves , and when printing <nl> * on DELTA and SCARA machines . 
<nl> - * <nl> + * <nl> * Some of these options may result in the display lagging behind <nl> * controller events , as there is a trade - off between reliable <nl> * printing performance versus fast display updates . <nl> mmm a / Marlin / M100_Free_Mem_Chk . cpp <nl> ppp b / Marlin / M100_Free_Mem_Chk . cpp <nl> <nl> <nl> / * * <nl> * M100 Free Memory Watcher <nl> - * <nl> + * <nl> * This code watches the free memory block between the bottom of the heap and the top of the stack . <nl> * This memory block is initialized and watched via the M100 command . <nl> - * <nl> + * <nl> * M100 I Initializes the free memory block and prints vitals statistics about the area <nl> * M100 F Identifies how much of the free memory block remains free and unused . It also <nl> * detects and reports any corruption within the free memory block that may have <nl> <nl> * data that does not match the expected value . <nl> * M100 C x Corrupts x locations within the free memory block . This is useful to check the <nl> * correctness of the M100 F and M100 D commands . <nl> - * <nl> + * <nl> * Initial version by Roxy - 3DPrintBoard <nl> * / <nl> # define M100_FREE_MEMORY_DUMPER / / Comment out to remove Dump sub - command <nl> mmm a / Marlin / Marlin_main . cpp <nl> ppp b / Marlin / Marlin_main . cpp <nl> inline void gcode_M104 ( ) { <nl> <nl> if ( code_value_temp_abs ( ) > thermalManager . degHotend ( target_extruder ) ) LCD_MESSAGEPGM ( MSG_HEATING ) ; <nl> } <nl> - <nl> + <nl> # if ENABLED ( AUTOTEMP ) <nl> planner . autotemp_M104_M109 ( ) ; <nl> # endif <nl> mmm a / Marlin / endstops . cpp <nl> ppp b / Marlin / endstops . 
cpp <nl> void Endstops : : update ( ) { <nl> } \ <nl> } while ( 0 ) <nl> <nl> - # if ENABLED ( G38_PROBE_TARGET ) & & PIN_EXISTS ( Z_MIN ) / / If G38 command then check Z_MIN for every axis and every direction <nl> + # if ENABLED ( G38_PROBE_TARGET ) & & PIN_EXISTS ( Z_MIN ) / / If G38 command then check Z_MIN for every axis and every direction <nl> <nl> # define UPDATE_ENDSTOP ( AXIS , MINMAX ) do { \ <nl> _UPDATE_ENDSTOP ( AXIS , MINMAX , NOOP ) ; \ <nl> if ( G38_move ) _UPDATE_ENDSTOP ( Z , MIN , G38_endstop_hit = true ) ; \ <nl> } while ( 0 ) <nl> <nl> - # else <nl> + # else <nl> <nl> # define UPDATE_ENDSTOP ( AXIS , MINMAX ) _UPDATE_ENDSTOP ( AXIS , MINMAX , NOOP ) <nl> <nl> mmm a / Marlin / example_configurations / Cartesio / Configuration . h <nl> ppp b / Marlin / example_configurations / Cartesio / Configuration . h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! - - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . 
<nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . <nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / Cartesio / Configuration_adv . h <nl> ppp b / Marlin / example_configurations / Cartesio / Configuration_adv . h <nl> <nl> <nl> / * * <nl> * Additional options for Graphical Displays <nl> - * <nl> + * <nl> * Use the optimizations here to improve printing performance , <nl> * which can be adversely affected by graphical display drawing , <nl> * especially when doing several short moves , and when printing <nl> * on DELTA and SCARA machines . <nl> - * <nl> + * <nl> * Some of these options may result in the display lagging behind <nl> * controller events , as there is a trade - off between reliable <nl> * printing performance versus fast display updates . <nl> mmm a / Marlin / example_configurations / Felix / Configuration . h <nl> ppp b / Marlin / example_configurations / Felix / Configuration . h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! 
- - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . <nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . <nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / Felix / Configuration_adv . h <nl> ppp b / Marlin / example_configurations / Felix / Configuration_adv . h <nl> <nl> <nl> / * * <nl> * Additional options for Graphical Displays <nl> - * <nl> + * <nl> * Use the optimizations here to improve printing performance , <nl> * which can be adversely affected by graphical display drawing , <nl> * especially when doing several short moves , and when printing <nl> * on DELTA and SCARA machines . 
<nl> - * <nl> + * <nl> * Some of these options may result in the display lagging behind <nl> * controller events , as there is a trade - off between reliable <nl> * printing performance versus fast display updates . <nl> mmm a / Marlin / example_configurations / Felix / DUAL / Configuration . h <nl> ppp b / Marlin / example_configurations / Felix / DUAL / Configuration . h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! - - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . <nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . <nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / Hephestos / Configuration . h <nl> ppp b / Marlin / example_configurations / Hephestos / Configuration . 
h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! - - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . <nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . <nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / Hephestos / Configuration_adv . h <nl> ppp b / Marlin / example_configurations / Hephestos / Configuration_adv . h <nl> <nl> <nl> / * * <nl> * Additional options for Graphical Displays <nl> - * <nl> + * <nl> * Use the optimizations here to improve printing performance , <nl> * which can be adversely affected by graphical display drawing , <nl> * especially when doing several short moves , and when printing <nl> * on DELTA and SCARA machines . 
<nl> - * <nl> + * <nl> * Some of these options may result in the display lagging behind <nl> * controller events , as there is a trade - off between reliable <nl> * printing performance versus fast display updates . <nl> mmm a / Marlin / example_configurations / K8200 / Configuration . h <nl> ppp b / Marlin / example_configurations / K8200 / Configuration . h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! - - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . <nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . <nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / K8200 / Configuration_adv . h <nl> ppp b / Marlin / example_configurations / K8200 / Configuration_adv . 
h <nl> <nl> <nl> / * * <nl> * Additional options for Graphical Displays <nl> - * <nl> + * <nl> * Use the optimizations here to improve printing performance , <nl> * which can be adversely affected by graphical display drawing , <nl> * especially when doing several short moves , and when printing <nl> * on DELTA and SCARA machines . <nl> - * <nl> + * <nl> * Some of these options may result in the display lagging behind <nl> * controller events , as there is a trade - off between reliable <nl> * printing performance versus fast display updates . <nl> mmm a / Marlin / example_configurations / K8400 / Configuration . h <nl> ppp b / Marlin / example_configurations / K8400 / Configuration . h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! - - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . <nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . 
<nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / K8400 / Configuration_adv . h <nl> ppp b / Marlin / example_configurations / K8400 / Configuration_adv . h <nl> <nl> <nl> / * * <nl> * Additional options for Graphical Displays <nl> - * <nl> + * <nl> * Use the optimizations here to improve printing performance , <nl> * which can be adversely affected by graphical display drawing , <nl> * especially when doing several short moves , and when printing <nl> * on DELTA and SCARA machines . <nl> - * <nl> + * <nl> * Some of these options may result in the display lagging behind <nl> * controller events , as there is a trade - off between reliable <nl> * printing performance versus fast display updates . <nl> mmm a / Marlin / example_configurations / K8400 / Dual - head / Configuration . h <nl> ppp b / Marlin / example_configurations / K8400 / Dual - head / Configuration . h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! - - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . 
<nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . <nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / RepRapWorld / Megatronics / Configuration . h <nl> ppp b / Marlin / example_configurations / RepRapWorld / Megatronics / Configuration . h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! - - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . 
<nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . <nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / RigidBot / Configuration . h <nl> ppp b / Marlin / example_configurations / RigidBot / Configuration . h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! - - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . 
<nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . <nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / RigidBot / Configuration_adv . h <nl> ppp b / Marlin / example_configurations / RigidBot / Configuration_adv . h <nl> <nl> <nl> / * * <nl> * Additional options for Graphical Displays <nl> - * <nl> + * <nl> * Use the optimizations here to improve printing performance , <nl> * which can be adversely affected by graphical display drawing , <nl> * especially when doing several short moves , and when printing <nl> * on DELTA and SCARA machines . <nl> - * <nl> + * <nl> * Some of these options may result in the display lagging behind <nl> * controller events , as there is a trade - off between reliable <nl> * printing performance versus fast display updates . <nl> mmm a / Marlin / example_configurations / SCARA / Configuration . h <nl> ppp b / Marlin / example_configurations / SCARA / Configuration . h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! 
- - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . <nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . <nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / SCARA / Configuration_adv . h <nl> ppp b / Marlin / example_configurations / SCARA / Configuration_adv . h <nl> <nl> <nl> / * * <nl> * Additional options for Graphical Displays <nl> - * <nl> + * <nl> * Use the optimizations here to improve printing performance , <nl> * which can be adversely affected by graphical display drawing , <nl> * especially when doing several short moves , and when printing <nl> * on DELTA and SCARA machines . 
<nl> - * <nl> + * <nl> * Some of these options may result in the display lagging behind <nl> * controller events , as there is a trade - off between reliable <nl> * printing performance versus fast display updates . <nl> mmm a / Marlin / example_configurations / TAZ4 / Configuration . h <nl> ppp b / Marlin / example_configurations / TAZ4 / Configuration . h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! - - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . <nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . <nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / TAZ4 / Configuration_adv . h <nl> ppp b / Marlin / example_configurations / TAZ4 / Configuration_adv . 
h <nl> <nl> <nl> / * * <nl> * Additional options for Graphical Displays <nl> - * <nl> + * <nl> * Use the optimizations here to improve printing performance , <nl> * which can be adversely affected by graphical display drawing , <nl> * especially when doing several short moves , and when printing <nl> * on DELTA and SCARA machines . <nl> - * <nl> + * <nl> * Some of these options may result in the display lagging behind <nl> * controller events , as there is a trade - off between reliable <nl> * printing performance versus fast display updates . <nl> mmm a / Marlin / example_configurations / WITBOX / Configuration . h <nl> ppp b / Marlin / example_configurations / WITBOX / Configuration . h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! - - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . <nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . 
<nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / WITBOX / Configuration_adv . h <nl> ppp b / Marlin / example_configurations / WITBOX / Configuration_adv . h <nl> <nl> <nl> / * * <nl> * Additional options for Graphical Displays <nl> - * <nl> + * <nl> * Use the optimizations here to improve printing performance , <nl> * which can be adversely affected by graphical display drawing , <nl> * especially when doing several short moves , and when printing <nl> * on DELTA and SCARA machines . <nl> - * <nl> + * <nl> * Some of these options may result in the display lagging behind <nl> * controller events , as there is a trade - off between reliable <nl> * printing performance versus fast display updates . <nl> mmm a / Marlin / example_configurations / adafruit / ST7565 / Configuration . h <nl> ppp b / Marlin / example_configurations / adafruit / ST7565 / Configuration . h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! - - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . 
<nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . <nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / delta / biv2 . 5 / Configuration . h <nl> ppp b / Marlin / example_configurations / delta / biv2 . 5 / Configuration . h <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . <nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / delta / biv2 . 5 / Configuration_adv . h <nl> ppp b / Marlin / example_configurations / delta / biv2 . 5 / Configuration_adv . h <nl> <nl> <nl> / * * <nl> * Additional options for Graphical Displays <nl> - * <nl> + * <nl> * Use the optimizations here to improve printing performance , <nl> * which can be adversely affected by graphical display drawing , <nl> * especially when doing several short moves , and when printing <nl> * on DELTA and SCARA machines . 
<nl> - * <nl> + * <nl> * Some of these options may result in the display lagging behind <nl> * controller events , as there is a trade - off between reliable <nl> * printing performance versus fast display updates . <nl> mmm a / Marlin / example_configurations / delta / generic / Configuration . h <nl> ppp b / Marlin / example_configurations / delta / generic / Configuration . h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! - - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . <nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . <nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / delta / generic / Configuration_adv . h <nl> ppp b / Marlin / example_configurations / delta / generic / Configuration_adv . 
h <nl> <nl> <nl> / * * <nl> * Additional options for Graphical Displays <nl> - * <nl> + * <nl> * Use the optimizations here to improve printing performance , <nl> * which can be adversely affected by graphical display drawing , <nl> * especially when doing several short moves , and when printing <nl> * on DELTA and SCARA machines . <nl> - * <nl> + * <nl> * Some of these options may result in the display lagging behind <nl> * controller events , as there is a trade - off between reliable <nl> * printing performance versus fast display updates . <nl> mmm a / Marlin / example_configurations / delta / kossel_mini / Configuration . h <nl> ppp b / Marlin / example_configurations / delta / kossel_mini / Configuration . h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! - - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . <nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . 
<nl> / / Synthesizes intermediate points to produce a more detailed mesh . <nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / delta / kossel_mini / Configuration_adv . h <nl> ppp b / Marlin / example_configurations / delta / kossel_mini / Configuration_adv . h <nl> <nl> <nl> / * * <nl> * Additional options for Graphical Displays <nl> - * <nl> + * <nl> * Use the optimizations here to improve printing performance , <nl> * which can be adversely affected by graphical display drawing , <nl> * especially when doing several short moves , and when printing <nl> * on DELTA and SCARA machines . <nl> - * <nl> + * <nl> * Some of these options may result in the display lagging behind <nl> * controller events , as there is a trade - off between reliable <nl> * printing performance versus fast display updates . <nl> mmm a / Marlin / example_configurations / delta / kossel_pro / Configuration . h <nl> ppp b / Marlin / example_configurations / delta / kossel_pro / Configuration . h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! - - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . 
<nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . <nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / delta / kossel_pro / Configuration_adv . h <nl> ppp b / Marlin / example_configurations / delta / kossel_pro / Configuration_adv . h <nl> <nl> <nl> / * * <nl> * Additional options for Graphical Displays <nl> - * <nl> + * <nl> * Use the optimizations here to improve printing performance , <nl> * which can be adversely affected by graphical display drawing , <nl> * especially when doing several short moves , and when printing <nl> * on DELTA and SCARA machines . <nl> - * <nl> + * <nl> * Some of these options may result in the display lagging behind <nl> * controller events , as there is a trade - off between reliable <nl> * printing performance versus fast display updates . <nl> mmm a / Marlin / example_configurations / delta / kossel_xl / Configuration . h <nl> ppp b / Marlin / example_configurations / delta / kossel_xl / Configuration . h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! 
- - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . <nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . <nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / delta / kossel_xl / Configuration_adv . h <nl> ppp b / Marlin / example_configurations / delta / kossel_xl / Configuration_adv . h <nl> <nl> <nl> / * * <nl> * Additional options for Graphical Displays <nl> - * <nl> + * <nl> * Use the optimizations here to improve printing performance , <nl> * which can be adversely affected by graphical display drawing , <nl> * especially when doing several short moves , and when printing <nl> * on DELTA and SCARA machines . 
<nl> - * <nl> + * <nl> * Some of these options may result in the display lagging behind <nl> * controller events , as there is a trade - off between reliable <nl> * printing performance versus fast display updates . <nl> mmm a / Marlin / example_configurations / makibox / Configuration . h <nl> ppp b / Marlin / example_configurations / makibox / Configuration . h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! - - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . <nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . <nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / makibox / Configuration_adv . h <nl> ppp b / Marlin / example_configurations / makibox / Configuration_adv . 
h <nl> <nl> <nl> / * * <nl> * Additional options for Graphical Displays <nl> - * <nl> + * <nl> * Use the optimizations here to improve printing performance , <nl> * which can be adversely affected by graphical display drawing , <nl> * especially when doing several short moves , and when printing <nl> * on DELTA and SCARA machines . <nl> - * <nl> + * <nl> * Some of these options may result in the display lagging behind <nl> * controller events , as there is a trade - off between reliable <nl> * printing performance versus fast display updates . <nl> mmm a / Marlin / example_configurations / tvrrug / Round2 / Configuration . h <nl> ppp b / Marlin / example_configurations / tvrrug / Round2 / Configuration . h <nl> <nl> <nl> / * * <nl> * - - NORMAL IS 4 . 7kohm PULLUP ! - - 1kohm pullup can be used on hotend sensor , using correct resistor and table <nl> - * <nl> + * <nl> * Temperature sensors available : <nl> * <nl> * - 3 : thermocouple with MAX31855 ( only for sensor 0 ) <nl> <nl> * 60 : 100k Maker ' s Tool Works Kapton Bed Thermistor beta = 3950 <nl> * 66 : 4 . 7M High Temperature thermistor from Dyze Design <nl> * 70 : the 100K thermistor found in the bq Hephestos 2 <nl> - * <nl> + * <nl> * 1k ohm pullup tables - This is atypical , and requires changing out the 4 . 7k pullup for 1k . <nl> * ( but gives greater accuracy and more stable PID ) <nl> * 51 : 100k thermistor - EPCOS ( 1k pullup ) <nl> * 52 : 200k thermistor - ATC Semitec 204GT - 2 ( 1k pullup ) <nl> * 55 : 100k thermistor - ATC Semitec 104GT - 2 ( Used in ParCan & J - Head ) ( 1k pullup ) <nl> - * <nl> + * <nl> * 1047 : Pt1000 with 4k7 pullup <nl> * 1010 : Pt1000 with 1k pullup ( non standard ) <nl> * 147 : Pt100 with 4k7 pullup <nl> <nl> / / The height can be set with M420 Z < height > <nl> # define ENABLE_LEVELING_FADE_HEIGHT <nl> <nl> - / / <nl> + / / <nl> / / Experimental Subdivision of the grid by Catmull - Rom method . <nl> / / Synthesizes intermediate points to produce a more detailed mesh . 
<nl> - / / <nl> + / / <nl> / / # define ABL_BILINEAR_SUBDIVISION <nl> # if ENABLED ( ABL_BILINEAR_SUBDIVISION ) <nl> / / Number of subdivisions between probe points <nl> mmm a / Marlin / example_configurations / tvrrug / Round2 / Configuration_adv . h <nl> ppp b / Marlin / example_configurations / tvrrug / Round2 / Configuration_adv . h <nl> <nl> <nl> / * * <nl> * Additional options for Graphical Displays <nl> - * <nl> + * <nl> * Use the optimizations here to improve printing performance , <nl> * which can be adversely affected by graphical display drawing , <nl> * especially when doing several short moves , and when printing <nl> * on DELTA and SCARA machines . <nl> - * <nl> + * <nl> * Some of these options may result in the display lagging behind <nl> * controller events , as there is a trade - off between reliable <nl> * printing performance versus fast display updates . <nl> mmm a / Marlin / language_tr . h <nl> ppp b / Marlin / language_tr . h <nl> <nl> # define MSG_HOME_OFFSETS_APPLIED _UxGT ( " Offset Tamam " ) / / Offset Tamam <nl> # define MSG_SET_ORIGIN _UxGT ( " Sıfır Belirle " ) / / Sıfır Belirle <nl> # define MSG_PREHEAT_1 _UxGT ( " Ön Isınma PLA " ) / / Ön Isınma PLA <nl> - # define MSG_PREHEAT_1_N MSG_PREHEAT_1 _UxGT ( " " ) / / <nl> + # define MSG_PREHEAT_1_N MSG_PREHEAT_1 _UxGT ( " " ) / / <nl> # define MSG_PREHEAT_1_ALL MSG_PREHEAT_1 _UxGT ( " Tüm " ) / / Tüm <nl> # define MSG_PREHEAT_1_BEDONLY MSG_PREHEAT_1 _UxGT ( " Tabla " ) / / Tabla <nl> # define MSG_PREHEAT_1_SETTINGS MSG_PREHEAT_1 _UxGT ( " Ayar " ) / / Ayar <nl> # define MSG_PREHEAT_2 _UxGT ( " Ön Isınma ABS " ) / / Ön Isınma ABS <nl> - # define MSG_PREHEAT_2_N MSG_PREHEAT_2 _UxGT ( " " ) / / <nl> + # define MSG_PREHEAT_2_N MSG_PREHEAT_2 _UxGT ( " " ) / / <nl> # define MSG_PREHEAT_2_ALL MSG_PREHEAT_2 _UxGT ( " Tüm " ) / / Tüm <nl> # define MSG_PREHEAT_2_BEDONLY MSG_PREHEAT_2 _UxGT ( " Tabla " ) / / Tabla <nl> # define MSG_PREHEAT_2_SETTINGS MSG_PREHEAT_2 _UxGT ( " Ayar " ) / / Ayar <nl> 
<nl> # define MSG_MAX _UxGT ( " " ) LCD_STR_THERMOMETER _UxGT ( " Max " ) / / Max <nl> # define MSG_FACTOR _UxGT ( " " ) LCD_STR_THERMOMETER _UxGT ( " Çarpan " ) / / Çarpan <nl> # define MSG_AUTOTEMP _UxGT ( " Autotemp " ) / / Autotemp <nl> - # define MSG_ON _UxGT ( " On " ) / / On <nl> + # define MSG_ON _UxGT ( " On " ) / / On <nl> # define MSG_OFF _UxGT ( " Off " ) / / Off <nl> # define MSG_PID_P _UxGT ( " PID - P " ) / / PID - P <nl> # define MSG_PID_I _UxGT ( " PID - I " ) / / PID - I <nl> mmm a / Marlin / pins . h <nl> ppp b / Marlin / pins . h <nl> <nl> # include " pins_MEGATRONICS_3 . h " <nl> # elif MB ( MEGATRONICS_31 ) <nl> # define MEGATRONICS_31 <nl> - # include " pins_MEGATRONICS_3 . h " <nl> + # include " pins_MEGATRONICS_3 . h " <nl> # elif MB ( OMCA_A ) <nl> # include " pins_OMCA_A . h " <nl> # elif MB ( OMCA ) <nl> mmm a / Marlin / pins_MEGATRONICS_3 . h <nl> ppp b / Marlin / pins_MEGATRONICS_3 . h <nl> <nl> # define LCD_PINS_D5 30 <nl> # define LCD_PINS_D6 39 <nl> # define LCD_PINS_D7 15 <nl> - <nl> + <nl> # define SHIFT_CLK 43 <nl> # define SHIFT_LD 35 <nl> # define SHIFT_OUT 34 <nl> mmm a / Marlin / pins_RIGIDBOARD_V2 . h <nl> ppp b / Marlin / pins_RIGIDBOARD_V2 . h <nl> <nl> <nl> # define DAC_STEPPER_SENSE 0 . 05 / / sense resistors on rigidboard stepper chips are . 05 value <nl> # define DAC_STEPPER_ADDRESS 0 <nl> - # define DAC_STEPPER_MAX 4096 / / was 5000 but max allowable value is actually 4096 <nl> + # define DAC_STEPPER_MAX 4096 / / was 5000 but max allowable value is actually 4096 <nl> # define DAC_STEPPER_VREF 1 / / internal Vref , gain 2x = 4 . 096V <nl> # define DAC_STEPPER_GAIN 1 / / value of 1 here sets gain of 2 <nl> # define DAC_DISABLE_PIN 42 / / set low to enable DAC <nl> mmm a / Marlin / planner . cpp <nl> ppp b / Marlin / planner . 
cpp <nl> void Planner : : _buffer_line ( const float & a , const float & b , const float & c , const <nl> const float target_float [ XYZE ] = { a , b , c , e } , <nl> de_float = target_float [ E_AXIS ] - position_float [ E_AXIS ] , <nl> mm_D_float = sqrt ( sq ( target_float [ X_AXIS ] - position_float [ X_AXIS ] ) + sq ( target_float [ Y_AXIS ] - position_float [ Y_AXIS ] ) ) ; <nl> - <nl> + <nl> memcpy ( position_float , target_float , sizeof ( position_float ) ) ; <nl> # endif <nl> <nl> void Planner : : _buffer_line ( const float & a , const float & b , const float & c , const <nl> if ( accel * block - > steps [ AXIS ] > comp ) accel = comp / block - > steps [ AXIS ] ; \ <nl> } \ <nl> } while ( 0 ) <nl> - <nl> + <nl> # define LIMIT_ACCEL_FLOAT ( AXIS , INDX ) do { \ <nl> if ( block - > steps [ AXIS ] & & max_acceleration_steps_per_s2 [ AXIS + INDX ] < accel ) { \ <nl> const float comp = ( float ) max_acceleration_steps_per_s2 [ AXIS + INDX ] * ( float ) block - > step_event_count ; \ <nl> void Planner : : _buffer_line ( const float & a , const float & b , const float & c , const <nl> / / Use LIN_ADVANCE for blocks if all these are true : <nl> / / <nl> / / esteps : We have E steps todo ( a printing move ) <nl> - / / <nl> + / / <nl> / / block - > steps [ X_AXIS ] | | block - > steps [ Y_AXIS ] : We have a movement in XY direction ( i . e . , not retract / prime ) . <nl> - / / <nl> + / / <nl> / / extruder_advance_k : There is an advance factor set . <nl> - / / <nl> + / / <nl> / / block - > steps [ E_AXIS ] ! = block - > step_event_count : A problem occurs if the move before a retract is too small . <nl> / / In that case , the retract and move will be executed together . <nl> / / This leads to too many advance steps due to a huge e_acceleration . <nl> mmm a / Marlin / planner . h <nl> ppp b / Marlin / planner . 
h <nl> typedef struct { <nl> # if ENABLED ( BARICUDA ) <nl> uint32_t valve_pressure , e_to_p_pressure ; <nl> # endif <nl> - <nl> + <nl> uint32_t segment_time ; <nl> <nl> } block_t ; <nl> class Planner { <nl> * Nominal speed of previous path line segment <nl> * / <nl> static float previous_nominal_speed ; <nl> - <nl> + <nl> / * * <nl> * Limit where 64bit math is necessary for acceleration calculation <nl> * / <nl> class Planner { <nl> / / Segment times ( in µs ) . Used for speed calculations <nl> static long axis_segment_time [ 2 ] [ 3 ] ; <nl> # endif <nl> - <nl> + <nl> # if ENABLED ( LIN_ADVANCE ) <nl> static float position_float [ NUM_AXIS ] ; <nl> static float extruder_advance_k ; <nl> class Planner { <nl> # define ARG_Z const float & lz <nl> <nl> # endif <nl> - <nl> + <nl> # if ENABLED ( LIN_ADVANCE ) <nl> void advance_M905 ( const float & k ) ; <nl> # endif <nl> mmm a / Marlin / stepper . cpp <nl> ppp b / Marlin / stepper . cpp <nl> void Stepper : : isr ( ) { <nl> CBI ( TIMSK0 , OCIE0B ) ; / / Temperature ISR <nl> DISABLE_STEPPER_DRIVER_INTERRUPT ( ) ; <nl> sei ( ) ; <nl> - <nl> + <nl> if ( cleaning_buffer_counter ) { <nl> - - cleaning_buffer_counter ; <nl> current_block = NULL ; <nl> void Stepper : : isr ( ) { <nl> # endif <nl> } <nl> # endif <nl> - <nl> + <nl> # if ENABLED ( ADVANCE ) | | ENABLED ( LIN_ADVANCE ) <nl> / / If we have esteps to execute , fire the next advance_isr " now " <nl> if ( e_steps [ TOOL_E_INDEX ] ) OCR0A = TCNT0 + 2 ; <nl> mmm a / Marlin / stepper . h <nl> ppp b / Marlin / stepper . h <nl> class Stepper { <nl> acc_step_rate = current_block - > initial_rate ; <nl> acceleration_time = calc_timer ( acc_step_rate ) ; <nl> OCR1A = acceleration_time ; <nl> - <nl> + <nl> # if ENABLED ( LIN_ADVANCE ) <nl> if ( current_block - > use_advance_lead ) { <nl> current_estep_rate [ current_block - > active_extruder ] = ( ( unsigned long ) acc_step_rate * current_block - > abs_adv_steps_multiplier8 ) > > 17 ; <nl> mmm a / Marlin / stepper_dac . 
cpp <nl> ppp b / Marlin / stepper_dac . cpp <nl> <nl> <nl> static float dac_perc ( int8_t n ) { return 100 . 0 * mcp4728_getValue ( dac_order [ n ] ) * ( 1 . 0 / ( DAC_STEPPER_MAX ) ) ; } <nl> static float dac_amps ( int8_t n ) { return mcp4728_getDrvPct ( dac_order [ n ] ) * ( DAC_STEPPER_MAX ) * 0 . 125 * ( 1 . 0 / ( DAC_STEPPER_SENSE ) ) ; } <nl> - <nl> + <nl> int16_t dac_current_get_percent ( AxisEnum axis ) { return mcp4728_getDrvPct ( dac_order [ axis ] ) ; } <nl> void dac_current_set_percents ( int16_t pct [ XYZE ] ) { <nl> LOOP_XYZE ( i ) dac_channel_pct [ i ] = pct [ dac_order [ i ] ] ; <nl> <nl> SERIAL_ECHO_START ; <nl> SERIAL_ECHOLNPGM ( " Stepper current values in % ( Amps ) : " ) ; <nl> SERIAL_ECHO_START ; <nl> - SERIAL_ECHOPAIR ( " X : " , dac_perc ( X_AXIS ) ) ; <nl> + SERIAL_ECHOPAIR ( " X : " , dac_perc ( X_AXIS ) ) ; <nl> SERIAL_ECHOPAIR ( " ( " , dac_amps ( X_AXIS ) ) ; <nl> SERIAL_ECHOPAIR ( " ) Y : " , dac_perc ( Y_AXIS ) ) ; <nl> SERIAL_ECHOPAIR ( " ( " , dac_amps ( Y_AXIS ) ) ; <nl> mmm a / Marlin / temperature . cpp <nl> ppp b / Marlin / temperature . cpp <nl> void Temperature : : isr ( ) { <nl> if ( ! endstop_monitor_count ) endstop_monitor ( ) ; / / report changes in endstop status <nl> } <nl> # endif <nl> - <nl> + <nl> SBI ( TIMSK0 , OCIE0B ) ; / / re - enable Temperature ISR <nl> } <nl>
|
Remove unnecessary tabs and spaces
|
MarlinFirmware/Marlin
|
069c6b38ddbff725ab26411c6cedef9984d4d377
|
2016-12-15T15:21:32Z
|
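The Marlin record above is a pure whitespace cleanup: every hunk only strips trailing tabs and spaces after `*`, `//`, and code lines. A minimal sketch of how such trailing whitespace can be stripped mechanically (a hypothetical helper, not part of the commit itself):

```python
import re

def strip_trailing_whitespace(text: str) -> str:
    """Remove trailing tabs/spaces from every line, preserving line breaks.

    The first alternative strips runs of spaces/tabs immediately before a
    newline; the second handles a final line with no trailing newline.
    """
    return re.sub(r"[ \t]+(?=\n)|[ \t]+$", "", text)
```

Running a pass like this over a tree produces exactly the kind of diff shown above: every changed line is byte-identical except for the removed trailing whitespace.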
mmm a / tests / include / skipif . inc <nl> ppp b / tests / include / skipif . inc <nl> function skip_if_no_process_affinity ( ) <nl> } <nl> } <nl> <nl> + function skip_if_in_valgrind ( string $ reason = ' valgrind is too slow ' ) <nl> + { <nl> + skip ( $ reason , getenv ( ' USE_ZEND_ALLOC ' ) = = = ' 0 ' ) ; <nl> + } <nl> + <nl> function skip_if_in_travis ( string $ reason = ' not support in travis ' ) <nl> { <nl> skip ( $ reason , file_exists ( ' / . travisenv ' ) ) ; <nl> mmm a / tests / swoole_mysql_coro / big_big_data . phpt <nl> ppp b / tests / swoole_mysql_coro / big_big_data . phpt <nl> <nl> - - TEST - - <nl> swoole_mysql_coro : select huge data from db ( 10M ~ 64M ) <nl> - - SKIPIF - - <nl> - < ? php require __DIR__ . ' / . . / include / skipif . inc ' ; ? > <nl> + < ? php <nl> + require __DIR__ . ' / . . / include / skipif . inc ' ; <nl> + skip_if_in_valgrind ( ) ; <nl> + ? > <nl> - - FILE - - <nl> < ? php <nl> require __DIR__ . ' / . . / include / bootstrap . php ' ; <nl>
|
Add skip_if_in_valgrind.
|
swoole/swoole-src
|
07e83f901d11f47be8308f0e829c2e12b0b77d25
|
2018-12-17T04:59:12Z
|
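The new `skip_if_in_valgrind()` helper in the Swoole record keys off `USE_ZEND_ALLOC=0`, the conventional environment setting when running PHP under valgrind (disabling Zend's custom allocator so valgrind can see real allocations). A rough Python analog of the same environment-based skip pattern — the variable name follows the PHP convention above; the helper and its use of `unittest.SkipTest` are illustrative, not part of the commit:

```python
import os
import unittest

def in_valgrind(env=os.environ) -> bool:
    # Swoole's convention: test runs under valgrind export USE_ZEND_ALLOC=0
    # so the Zend allocator does not mask memory errors from the tool.
    return env.get("USE_ZEND_ALLOC") == "0"

def skip_if_in_valgrind(reason="valgrind is too slow", env=os.environ):
    """Raise the framework's skip exception when running under valgrind."""
    if in_valgrind(env):
        raise unittest.SkipTest(reason)
```

The `big_big_data.phpt` change then becomes a one-line call in the test's SKIPIF section, mirroring the existing `skip_if_in_travis()` helper.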
mmm a / lib / Rest / Endpoint . cpp <nl> ppp b / lib / Rest / Endpoint . cpp <nl> Endpoint * Endpoint : : factory ( const Endpoint : : EndpointType type , <nl> <nl> if ( copy [ 0 ] = = ' [ ' ) { <nl> / / ipv6 <nl> - found = copy . find ( " ] : " , 1 ) ; <nl> - if ( found ! = string : : npos & & found + 2 < copy . size ( ) ) { <nl> + found = copy . find ( " ] : " , 1 ) ; <nl> + if ( found ! = string : : npos & & found > 2 & & found + 2 < copy . size ( ) ) { <nl> / / hostname and port ( e . g . [ address ] : port ) <nl> uint16_t port = ( uint16_t ) StringUtils : : uint32 ( copy . substr ( found + 2 ) ) ; <nl> <nl> - return new EndpointIpV6 ( type , protocol , encryption , specification , listenBacklog , copy . substr ( 0 , found + 1 ) , port ) ; <nl> + return new EndpointIpV6 ( type , protocol , encryption , specification , listenBacklog , copy . substr ( 1 , found - 1 ) , port ) ; <nl> } <nl> <nl> found = copy . find ( " ] " , 1 ) ; <nl> EndpointIp : : EndpointIp ( const Endpoint : : EndpointType type , <nl> const std : : string & host , <nl> const uint16_t port ) : <nl> Endpoint ( type , domainType , protocol , encryption , specification , listenBacklog ) , _host ( host ) , _port ( port ) { <nl> - <nl> + <nl> assert ( domainType = = DOMAIN_IPV4 | | domainType = = Endpoint : : DOMAIN_IPV6 ) ; <nl> } <nl> <nl>
|
bugfix IPv6 endpoint
|
arangodb/arangodb
|
5c3e10de70e230e900832b1bb7157bc2fd1f2034
|
2012-10-16T19:14:18Z
|
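The ArangoDB fix above corrects the IPv6 branch of the endpoint parser: the old code handed `EndpointIpV6` the host *with* its surrounding brackets (`substr(0, found + 1)`), while the fix passes only the text inside them (`substr(1, found - 1)`) and adds a `found > 2` guard against an empty `[]` host. The corrected parsing rule, sketched in Python (a hypothetical helper mirroring the fixed C++ logic, not the actual ArangoDB API):

```python
def parse_ipv6_endpoint(spec: str):
    """Parse '[host]:port' into (host, port), host without brackets.

    The original bug returned '[host]' (brackets included) instead of 'host'.
    """
    if not spec.startswith("["):
        raise ValueError("not a bracketed IPv6 endpoint")
    found = spec.find("]:", 1)
    # Reject missing ']:', an empty '[]' host, and a missing port.
    if found < 2 or found + 2 >= len(spec):
        raise ValueError("malformed endpoint: %r" % spec)
    host = spec[1:found]          # text inside the brackets
    port = int(spec[found + 2:])  # text after ']:'
    return host, port
```

Note the sketch only enforces a non-empty host; the C++ guard `found > 2` is slightly stricter.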
mmm a / ExampleSetups / G2P / setups / global . lstm . config <nl> ppp b / ExampleSetups / G2P / setups / global . lstm . config <nl> <nl> WorkDir = d : / exp / lts <nl> # DataDir = d : / data / lts <nl> DataDir = d : / data / ltsdbg <nl> + NdlDir = c : / dev / cntk5 / ExampleSetups / G2P / setups <nl> PredictionModelFeatureDir = \ \ speechstore5 \ transient \ kaishengy \ exp \ lts \ result \ expbilstmce300n \ s4 <nl> ExpDir = \ \ speechstore5 \ transient \ kaishengy \ exp \ lts \ result \ explstm <nl> OutDir = $ ExpDir $ <nl>
|
Update global config for NDL BiLSTM setup
|
microsoft/CNTK
|
80c0ccb742628e6fd91db0628d0c6ea9a2ff2e75
|
2015-04-25T20:01:41Z
|
mmm a / x64_dbg_dbg / thread . cpp <nl> ppp b / x64_dbg_dbg / thread . cpp <nl> void ThreadGetList ( THREADLIST * List ) <nl> SHARED_ACQUIRE ( LockThreads ) ; <nl> <nl> / / <nl> - / / This function converts a C + + std : : vector to a C - style THREADLIST [ ] . <nl> + / / This function converts a C + + std : : unordered_map to a C - style THREADLIST [ ] . <nl> / / Also assume BridgeAlloc zeros the returned buffer . <nl> / / <nl> int count = ( int ) threadList . size ( ) ; <nl> void ThreadGetList ( THREADLIST * List ) <nl> List - > list = ( THREADALLINFO * ) BridgeAlloc ( count * sizeof ( THREADALLINFO ) ) ; <nl> <nl> / / Fill out the list data <nl> - for ( int i = 0 ; i < count ; i + + ) <nl> + int index = 0 ; <nl> + <nl> + for ( auto & itr : threadList ) <nl> { <nl> - HANDLE threadHandle = threadList [ i ] . Handle ; <nl> + HANDLE threadHandle = itr . second . Handle ; <nl> <nl> - / / Get the debugger ' s current thread index <nl> + / / Get the debugger ' s active thread index <nl> if ( threadHandle = = hActiveThread ) <nl> - List - > CurrentThread = i ; <nl> + List - > CurrentThread = index ; <nl> <nl> - memcpy ( & List - > list [ i ] . BasicInfo , & threadList [ i ] , sizeof ( THREADINFO ) ) ; <nl> + memcpy ( & List - > list [ index ] . BasicInfo , & itr . second , sizeof ( THREADINFO ) ) ; <nl> <nl> - List - > list [ i ] . ThreadCip = GetContextDataEx ( threadHandle , UE_CIP ) ; <nl> - List - > list [ i ] . SuspendCount = ThreadGetSuspendCount ( threadHandle ) ; <nl> - List - > list [ i ] . Priority = ThreadGetPriority ( threadHandle ) ; <nl> - List - > list [ i ] . WaitReason = ThreadGetWaitReason ( threadHandle ) ; <nl> - List - > list [ i ] . LastError = getLastErrorFromTeb ( threadList [ i ] . ThreadLocalBase ) ; <nl> + List - > list [ index ] . ThreadCip = GetContextDataEx ( threadHandle , UE_CIP ) ; <nl> + List - > list [ index ] . SuspendCount = ThreadGetSuspendCount ( threadHandle ) ; <nl> + List - > list [ index ] . 
Priority = ThreadGetPriority ( threadHandle ) ; <nl> + List - > list [ index ] . WaitReason = ThreadGetWaitReason ( threadHandle ) ; <nl> + List - > list [ index ] . LastError = getLastErrorFromTeb ( itr . second . ThreadLocalBase ) ; <nl> + index + + ; <nl> } <nl> } <nl> <nl>
|
DBG: Fix a bug in ThreadGetList (totally wasn't my fault)
|
x64dbg/x64dbg
|
1cb2217a7a0f730432f27d9e54946866821fc914
|
2015-07-12T02:55:57Z
|
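The x64dbg fix above follows from `threadList` becoming a `std::unordered_map`: indexed access (`threadList[i]`) no longer means "the i-th thread", so the loop switches to range iteration over the map with a manual `index` counter, and the active-thread position is recorded as entries are written out. The same pattern in Python terms (a mapping has no positional indexing, so position must be tracked during iteration; the helper is illustrative only):

```python
def build_thread_list(threads: dict, active_handle):
    """Flatten a handle -> info mapping into a list, recording the output
    position of the active thread as it is encountered (the manual `index`
    counter in the C++ fix)."""
    out, current = [], -1
    for index, (handle, info) in enumerate(threads.items()):
        if handle == active_handle:
            current = index  # position in the flattened output, not a map key
        out.append(info)
    return out, current
```

This keeps `CurrentThread` consistent with the order items land in the output array, which is exactly what the original vector-indexed loop silently assumed.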
mmm a / modules / stitching / src / matchers . cpp <nl> ppp b / modules / stitching / src / matchers . cpp <nl> struct DistIdxPair <nl> <nl> struct MatchPairsBody : ParallelLoopBody <nl> { <nl> - MatchPairsBody ( const MatchPairsBody & other ) <nl> - : matcher ( other . matcher ) , features ( other . features ) , <nl> - pairwise_matches ( other . pairwise_matches ) , near_pairs ( other . near_pairs ) { } <nl> - <nl> MatchPairsBody ( FeaturesMatcher & _matcher , const vector < ImageFeatures > & _features , <nl> vector < MatchesInfo > & _pairwise_matches , vector < pair < int , int > > & _near_pairs ) <nl> : matcher ( _matcher ) , features ( _features ) , <nl>
|
Merge pull request from SpecLad : matchers - ctor
|
opencv/opencv
|
1b689a743175751134af7f4f84c3d13049a1f58e
|
2013-06-10T11:06:31Z
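The OpenCV patch above deletes a hand-written copy constructor from a `ParallelLoopBody` functor that simply copied every member, which is exactly what the implicitly generated copy constructor already does, even for reference members. A sketch of why the removal is safe (hypothetical `SumBody`, not OpenCV's real matcher body):

```cpp
#include <cassert>
#include <vector>

// A body holding references, as cv::ParallelLoopBody subclasses often do.
// The compiler-generated copy constructor copies each reference member
// (binding the copy to the same referents), so a hand-written member-wise
// copy ctor like the one the patch removes is redundant.
struct SumBody {
    const std::vector<int>& data;   // reference member
    long long& total;               // shared accumulator

    SumBody(const std::vector<int>& d, long long& t) : data(d), total(t) {}
    // No user copy constructor: the implicit one is equivalent to the
    // member-wise version deleted in the diff above.

    void operator()(int begin, int end) const {
        for (int i = begin; i < end; i++) total += data[i];
    }
};
```

Copies made by a parallel scheduler therefore still observe the same vectors and write through the same references as the original body.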
|
mmm a / src / layer / arm / convolutiondepthwise_3x3_int8 . h <nl> ppp b / src / layer / arm / convolutiondepthwise_3x3_int8 . h <nl> static void convdw3x3s1_int8_requant_neon ( const Mat & bottom_blob , Mat & top_blob , <nl> " vcvt . f32 . s32 q8 , q8 \ n " <nl> / / top_f32 = top_f32 * scale_int <nl> " vmul . f32 q0 , q7 , q14 \ n " <nl> - " vmul . f32 q1 , q8 , q14 \ n " <nl> + " vmul . f32 q4 , q8 , q14 \ n " <nl> / / top_f32 = top_f32 + bias <nl> " vadd . f32 q0 , q0 , q13 \ n " <nl> - " vadd . f32 q1 , q1 , q13 \ n " <nl> + " vadd . f32 q4 , q4 , q13 \ n " <nl> / / top_f32 = top_f32 * scale_out <nl> " vmul . f32 q0 , q0 , q15 \ n " <nl> - " vmul . f32 q1 , q1 , q15 \ n " <nl> + " vmul . f32 q4 , q4 , q15 \ n " <nl> / / top_f32 - > top_s32 <nl> " vcvtr . s32 . f32 s0 , s0 \ n " <nl> " vcvtr . s32 . f32 s1 , s1 \ n " <nl> " vcvtr . s32 . f32 s2 , s2 \ n " <nl> " vcvtr . s32 . f32 s3 , s3 \ n " <nl> - " vcvtr . s32 . f32 s4 , s4 \ n " <nl> - " vcvtr . s32 . f32 s5 , s5 \ n " <nl> - " vcvtr . s32 . f32 s6 , s6 \ n " <nl> - " vcvtr . s32 . f32 s7 , s7 \ n " <nl> + " vcvtr . s32 . f32 s16 , s16 \ n " <nl> + " vcvtr . s32 . f32 s17 , s17 \ n " <nl> + " vcvtr . s32 . f32 s18 , s18 \ n " <nl> + " vcvtr . s32 . f32 s19 , s19 \ n " <nl> / / top_s32 - > top_s16 <nl> " vqmovn . s32 d14 , q0 \ n " <nl> - " vqmovn . s32 d15 , q1 \ n " <nl> + " vqmovn . s32 d15 , q4 \ n " <nl> / / top_s16 - > top_s8 <nl> " vqmovn . s16 d14 , q7 \ n " <nl> / / save top_s8 <nl> static void convdw3x3s1_int8_requant_neon ( const Mat & bottom_blob , Mat & top_blob , <nl> " vcvt . f32 . s32 q12 , q12 \ n " <nl> / / top_f32 = top_f32 * scale_int <nl> " vmul . f32 q0 , q11 , q14 \ n " <nl> - " vmul . f32 q1 , q12 , q14 \ n " <nl> + " vmul . f32 q4 , q12 , q14 \ n " <nl> / / top_f32 = top_f32 + bias <nl> " vadd . f32 q0 , q0 , q13 \ n " <nl> - " vadd . f32 q1 , q1 , q13 \ n " <nl> + " vadd . f32 q4 , q4 , q13 \ n " <nl> / / top_f32 = top_f32 * scale_out <nl> " vmul . 
f32 q0 , q0 , q15 \ n " <nl> - " vmul . f32 q1 , q1 , q15 \ n " <nl> + " vmul . f32 q4 , q4 , q15 \ n " <nl> / / top_f32 - > top_s32 <nl> " vcvtr . s32 . f32 s0 , s0 \ n " <nl> " vcvtr . s32 . f32 s1 , s1 \ n " <nl> " vcvtr . s32 . f32 s2 , s2 \ n " <nl> " vcvtr . s32 . f32 s3 , s3 \ n " <nl> - " vcvtr . s32 . f32 s4 , s4 \ n " <nl> - " vcvtr . s32 . f32 s5 , s5 \ n " <nl> - " vcvtr . s32 . f32 s6 , s6 \ n " <nl> - " vcvtr . s32 . f32 s7 , s7 \ n " <nl> + " vcvtr . s32 . f32 s16 , s16 \ n " <nl> + " vcvtr . s32 . f32 s17 , s17 \ n " <nl> + " vcvtr . s32 . f32 s18 , s18 \ n " <nl> + " vcvtr . s32 . f32 s19 , s19 \ n " <nl> / / top_s32 - > top_s16 <nl> " vqmovn . s32 d14 , q0 \ n " <nl> - " vqmovn . s32 d15 , q1 \ n " <nl> + " vqmovn . s32 d15 , q4 \ n " <nl> / / top_s16 - > top_s8 <nl> " vqmovn . s16 d14 , q7 \ n " <nl> / / save top_s8 <nl> static void convdw3x3s1_int8_requant_neon ( const Mat & bottom_blob , Mat & top_blob , <nl> " r " ( bias0 ) , / / % 17 <nl> " r " ( scale_requant_in ) , / / % 18 <nl> " r " ( scale_requant_out ) / / % 19 <nl> - : " cc " , " memory " , " q0 " , " q1 " , " q5 " , " q6 " , " q7 " , " q8 " , " q9 " , " q10 " , " q11 " , " q12 " , " q13 " , " q14 " , " q15 " <nl> + : " cc " , " memory " , " q0 " , " q4 " , " q5 " , " q6 " , " q7 " , " q8 " , " q9 " , " q10 " , " q11 " , " q12 " , " q13 " , " q14 " , " q15 " <nl> ) ; <nl> } <nl> # endif / / __aarch64__ <nl> static void convdw3x3s1_int8_requant_neon ( const Mat & bottom_blob , Mat & top_blob , <nl> " vcvt . f32 . s32 q8 , q8 \ n " <nl> / / top_f32 = top_f32 * scale_int <nl> " vmul . f32 q0 , q7 , q14 \ n " <nl> - " vmul . f32 q1 , q8 , q14 \ n " <nl> + " vmul . f32 q4 , q8 , q14 \ n " <nl> / / top_f32 = top_f32 + bias <nl> " vadd . f32 q0 , q0 , q13 \ n " <nl> - " vadd . f32 q1 , q1 , q13 \ n " <nl> + " vadd . f32 q4 , q4 , q13 \ n " <nl> / / top_f32 = top_f32 * scale_out <nl> " vmul . f32 q0 , q0 , q15 \ n " <nl> - " vmul . f32 q1 , q1 , q15 \ n " <nl> + " vmul . 
f32 q4 , q4 , q15 \ n " <nl> / / top_f32 - > top_s32 <nl> " vcvtr . s32 . f32 s0 , s0 \ n " <nl> " vcvtr . s32 . f32 s1 , s1 \ n " <nl> " vcvtr . s32 . f32 s2 , s2 \ n " <nl> " vcvtr . s32 . f32 s3 , s3 \ n " <nl> - " vcvtr . s32 . f32 s4 , s4 \ n " <nl> - " vcvtr . s32 . f32 s5 , s5 \ n " <nl> - " vcvtr . s32 . f32 s6 , s6 \ n " <nl> - " vcvtr . s32 . f32 s7 , s7 \ n " <nl> + " vcvtr . s32 . f32 s16 , s16 \ n " <nl> + " vcvtr . s32 . f32 s17 , s17 \ n " <nl> + " vcvtr . s32 . f32 s18 , s18 \ n " <nl> + " vcvtr . s32 . f32 s19 , s19 \ n " <nl> / / top_s32 - > top_s16 <nl> " vqmovn . s32 d14 , q0 \ n " <nl> - " vqmovn . s32 d15 , q1 \ n " <nl> + " vqmovn . s32 d15 , q4 \ n " <nl> / / top_s16 - > top_s8 <nl> " vqmovn . s16 d14 , q7 \ n " <nl> / / save top_s8 <nl> static void convdw3x3s1_int8_requant_neon ( const Mat & bottom_blob , Mat & top_blob , <nl> " r " ( bias0 ) , / / % 13 <nl> " r " ( scale_requant_in ) , / / % 14 <nl> " r " ( scale_requant_out ) / / % 15 <nl> - : " cc " , " memory " , " q0 " , " q1 " , " q5 " , " q6 " , " q7 " , " q8 " , " q9 " , " q10 " , " q11 " , " q12 " , " q13 " , " q14 " , " q15 " <nl> + : " cc " , " memory " , " q0 " , " q4 " , " q5 " , " q6 " , " q7 " , " q8 " , " q9 " , " q10 " , " q11 " , " q12 " , " q13 " , " q14 " , " q15 " <nl> ) ; <nl> } <nl> # endif / / __aarch64__ <nl> static void convdw3x3s2_int8_requant_neon ( const Mat & bottom_blob , Mat & top_blob , <nl> " vcvt . f32 . s32 q8 , q8 \ n " <nl> / / top_f32 = top_f32 * scale_int <nl> " vmul . f32 q0 , q7 , q12 \ n " <nl> - " vmul . f32 q1 , q8 , q12 \ n " <nl> + " vmul . f32 q4 , q8 , q12 \ n " <nl> / / top_f32 = top_f32 + bias <nl> " vadd . f32 q0 , q0 , q11 \ n " <nl> - " vadd . f32 q1 , q1 , q11 \ n " <nl> + " vadd . f32 q4 , q4 , q11 \ n " <nl> / / top_f32 = top_f32 * scale_out <nl> " vmul . f32 q0 , q0 , q13 \ n " <nl> - " vmul . f32 q1 , q1 , q13 \ n " <nl> + " vmul . f32 q4 , q4 , q13 \ n " <nl> / / top_f32 - > top_s32 <nl> " vcvtr . s32 . 
f32 s0 , s0 \ n " <nl> " vcvtr . s32 . f32 s1 , s1 \ n " <nl> " vcvtr . s32 . f32 s2 , s2 \ n " <nl> " vcvtr . s32 . f32 s3 , s3 \ n " <nl> - " vcvtr . s32 . f32 s4 , s4 \ n " <nl> - " vcvtr . s32 . f32 s5 , s5 \ n " <nl> - " vcvtr . s32 . f32 s6 , s6 \ n " <nl> - " vcvtr . s32 . f32 s7 , s7 \ n " <nl> + " vcvtr . s32 . f32 s16 , s16 \ n " <nl> + " vcvtr . s32 . f32 s17 , s17 \ n " <nl> + " vcvtr . s32 . f32 s18 , s18 \ n " <nl> + " vcvtr . s32 . f32 s19 , s19 \ n " <nl> / / top_s32 - > top_s16 <nl> " vqmovn . s32 d14 , q0 \ n " <nl> - " vqmovn . s32 d15 , q1 \ n " <nl> + " vqmovn . s32 d15 , q4 \ n " <nl> / / top_s16 - > top_s8 <nl> " vqmovn . s16 d14 , q7 \ n " <nl> / / save top_s8 <nl> static void convdw3x3s2_int8_requant_neon ( const Mat & bottom_blob , Mat & top_blob , <nl> " r " ( bias0 ) , / / % 13 <nl> " r " ( scale_requant_in ) , / / % 14 <nl> " r " ( scale_requant_out ) / / % 15 <nl> - : " cc " , " memory " , " q0 " , " q1 " , " q5 " , " q6 " , " q7 " , " q8 " , " q9 " , " q10 " , " q11 " , " q12 " , " q13 " , " q14 " , " q15 " <nl> + : " cc " , " memory " , " q0 " , " q4 " , " q5 " , " q6 " , " q7 " , " q8 " , " q9 " , " q10 " , " q11 " , " q12 " , " q13 " , " q14 " , " q15 " <nl> ) ; <nl> } <nl> # endif / / __aarch64__ <nl>
|
arm - hisim100 - linux make error convolutiondepthwise_3x3_int8 . h for asm ( )
|
Tencent/ncnn
|
66d7ac246317ee233695f6db01cf0e5991233f5d
|
2019-10-31T09:33:54Z
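The ncnn patch above moves the assembly temporary from `q1` to `q4`. On 32-bit ARM the NEON register file is aliased: `q<n>` overlaps `d<2n>`/`d<2n+1>`, and for `q0`-`q7` it also overlaps `s<4n>`..`s<4n+3>`, so the `vcvtr` writes into `s4`-`s7` were silently clobbering `q1`, while `s16`-`s19` live inside `q4`, the register the rewritten code (and its clobber list) actually owns. A small helper making that aliasing rule explicit, as a sketch rather than anything from ncnn itself:

```cpp
#include <cassert>
#include <utility>

// Map a 32-bit ARM NEON quad register to the single-precision registers it
// aliases: q<n> = d<2n>,d<2n+1>; for n < 8, q<n> also = s<4n>..s<4n+3>.
// q8-q15 have no s-register view (s0-s31 only cover d0-d15).
std::pair<int, int> sRangeOfQ(int q)
{
    if (q < 0 || q > 7)
        return {-1, -1};            // no s-register alias
    return {4 * q, 4 * q + 3};
}
```

Running the rule over the registers in the diff: `q1` aliases `s4`-`s7` (the old conversion targets) and `q4` aliases `s16`-`s19` (the new ones).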
|
mmm a / xbmc / cores / VideoRenderers / BaseRenderer . cpp <nl> ppp b / xbmc / cores / VideoRenderers / BaseRenderer . cpp <nl> bool CBaseRenderer : : FindResolutionFromOverride ( float fps , float & weight , bool fa <nl> <nl> for ( size_t j = ( int ) RES_DESKTOP ; j < g_settings . m_ResInfo . size ( ) ; j + + ) <nl> { <nl> - if ( g_settings . m_ResInfo [ j ] . iWidth = = g_settings . m_ResInfo [ m_resolution ] . iWidth <nl> - & & g_settings . m_ResInfo [ j ] . iHeight = = g_settings . m_ResInfo [ m_resolution ] . iHeight <nl> - & & g_settings . m_ResInfo [ j ] . iScreen = = g_settings . m_ResInfo [ m_resolution ] . iScreen ) <nl> + if ( g_settings . m_ResInfo [ j ] . iScreenWidth = = g_settings . m_ResInfo [ m_resolution ] . iScreenWidth <nl> + & & g_settings . m_ResInfo [ j ] . iScreenHeight = = g_settings . m_ResInfo [ m_resolution ] . iScreenHeight <nl> + & & g_settings . m_ResInfo [ j ] . iScreen = = g_settings . m_ResInfo [ m_resolution ] . iScreen ) <nl> { <nl> if ( g_settings . m_ResInfo [ j ] . fRefreshRate < = override . refreshmax <nl> & & g_settings . m_ResInfo [ j ] . fRefreshRate > = override . refreshmin ) <nl> void CBaseRenderer : : FindResolutionFromFpsMatch ( float fps , float & weight ) <nl> for ( size_t i = ( int ) RES_DESKTOP ; i < g_settings . m_ResInfo . size ( ) ; i + + ) <nl> { <nl> if ( MathUtils : : round_int ( g_settings . m_ResInfo [ i ] . fRefreshRate ) = = 60 <nl> - & & g_settings . m_ResInfo [ i ] . iWidth = = g_settings . m_ResInfo [ m_resolution ] . iWidth <nl> - & & g_settings . m_ResInfo [ i ] . iHeight = = g_settings . m_ResInfo [ m_resolution ] . iHeight <nl> - & & g_settings . m_ResInfo [ i ] . iScreen = = g_settings . m_ResInfo [ m_resolution ] . iScreen ) <nl> + & & g_settings . m_ResInfo [ i ] . iScreenWidth = = g_settings . m_ResInfo [ m_resolution ] . iScreenWidth <nl> + & & g_settings . m_ResInfo [ i ] . iScreenHeight = = g_settings . m_ResInfo [ m_resolution ] . iScreenHeight <nl> + & & g_settings . m_ResInfo [ i ] . 
iScreen = = g_settings . m_ResInfo [ m_resolution ] . iScreen ) <nl> { <nl> if ( fabs ( g_settings . m_ResInfo [ i ] . fRefreshRate - 60 . 0 ) < fabs ( g_settings . m_ResInfo [ m_resolution ] . fRefreshRate - 60 . 0 ) ) <nl> m_resolution = ( RESOLUTION ) i ; <nl> void CBaseRenderer : : FindResolutionFromFpsMatch ( float fps , float & weight ) <nl> CLog : : Log ( LOGDEBUG , " 60 hertz refreshrate not available , choosing highest " ) ; <nl> for ( size_t i = ( int ) RES_DESKTOP ; i < g_settings . m_ResInfo . size ( ) ; i + + ) <nl> { <nl> - if ( g_settings . m_ResInfo [ i ] . fRefreshRate > g_settings . m_ResInfo [ m_resolution ] . fRefreshRate <nl> - & & g_settings . m_ResInfo [ i ] . iWidth = = g_settings . m_ResInfo [ m_resolution ] . iWidth <nl> - & & g_settings . m_ResInfo [ i ] . iHeight = = g_settings . m_ResInfo [ m_resolution ] . iHeight <nl> - & & g_settings . m_ResInfo [ i ] . iScreen = = g_settings . m_ResInfo [ m_resolution ] . iScreen ) <nl> + if ( g_settings . m_ResInfo [ i ] . fRefreshRate > g_settings . m_ResInfo [ m_resolution ] . fRefreshRate <nl> + & & g_settings . m_ResInfo [ i ] . iScreenWidth = = g_settings . m_ResInfo [ m_resolution ] . iScreenWidth <nl> + & & g_settings . m_ResInfo [ i ] . iScreenHeight = = g_settings . m_ResInfo [ m_resolution ] . iScreenHeight <nl> + & & g_settings . m_ResInfo [ i ] . iScreen = = g_settings . m_ResInfo [ m_resolution ] . iScreen ) <nl> { <nl> m_resolution = ( RESOLUTION ) i ; <nl> } <nl> RESOLUTION CBaseRenderer : : FindClosestResolution ( float fps , float multiplier , RES <nl> { <nl> RESOLUTION_INFO & curr = g_settings . m_ResInfo [ current ] ; <nl> <nl> - int iWidth = curr . iWidth ; <nl> - int iHeight = curr . iHeight ; <nl> + int iScreenWidth = curr . iScreenWidth ; <nl> + int iScreenHeight = curr . 
iScreenHeight ; <nl> float fRefreshRate = fps ; <nl> <nl> / * <nl> RESOLUTION CBaseRenderer : : FindClosestResolution ( float fps , float multiplier , RES <nl> <nl> if ( m_iFlags & CONF_FLAGS_FORMAT_SBS ) <nl> { <nl> - iWidth / = 2 ; <nl> + iScreenWidth / = 2 ; <nl> fRefreshRate * = 2 ; <nl> } <nl> else if ( m_iFlags & CONF_FLAGS_FORMAT_TB ) <nl> { <nl> - iHeight / = 2 ; <nl> + iScreenHeight / = 2 ; <nl> fRefreshRate * = 2 ; <nl> } <nl> <nl> RESOLUTION CBaseRenderer : : FindClosestResolution ( float fps , float multiplier , RES <nl> <nl> / / discard resolutions that are not the same width and height <nl> / / or have a too low refreshrate <nl> - if ( info . iWidth ! = iWidth <nl> - | | info . iHeight ! = iHeight <nl> + if ( info . iScreenWidth ! = iScreenWidth <nl> + | | info . iScreenHeight ! = iScreenHeight <nl> | | info . iScreen ! = curr . iScreen <nl> | | info . fRefreshRate < ( fRefreshRate * multiplier / 1 . 001 ) - 0 . 001 ) <nl> continue ; <nl>
|
Merge pull request from popcornmix / clamped_resolution_fix
|
xbmc/xbmc
|
687b3ec0949d57ed8f8f78606e751c64c64625b4
|
2013-01-10T15:13:21Z
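The XBMC change above switches the comparisons from the GUI dimensions (`iWidth`/`iHeight`) to the display mode's physical dimensions (`iScreenWidth`/`iScreenHeight`) when hunting for a mode with a better refresh rate. A simplified sketch of that selection loop, with a hypothetical `ResInfo` stand-in for `RESOLUTION_INFO`:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Simplified stand-in for XBMC's RESOLUTION_INFO.
struct ResInfo { int screenWidth, screenHeight; float refreshRate; };

// Return the index of the mode with the same physical screen dimensions as
// modes[current] and the refresh rate closest to fps; -1 if none matches.
// Hypothetical helper, not the actual CBaseRenderer API.
int FindClosestRefresh(const std::vector<ResInfo>& modes, int current, float fps)
{
    int best = -1;
    float bestDiff = 1e9f;
    for (size_t i = 0; i < modes.size(); i++)
    {
        if (modes[i].screenWidth  != modes[current].screenWidth ||
            modes[i].screenHeight != modes[current].screenHeight)
            continue;               // different physical mode: skip it
        float diff = std::fabs(modes[i].refreshRate - fps);
        if (diff < bestDiff) { bestDiff = diff; best = static_cast<int>(i); }
    }
    return best;
}
```

Comparing physical dimensions keeps a 720p GUI running on a 1080p display from discarding every 1080p refresh-rate candidate.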
|
mmm a / cocos / 3d / CCAnimationCurve . h <nl> ppp b / cocos / 3d / CCAnimationCurve . h <nl> NS_CC_BEGIN <nl> enum class EvaluateType <nl> { <nl> LINEAR , <nl> - NEAR , <nl> + NEARE , <nl> QUAT_SLERP , <nl> USER_FUNCTION , <nl> } ; <nl> mmm a / cocos / 3d / CCAnimationCurve . inl <nl> ppp b / cocos / 3d / CCAnimationCurve . inl <nl> void AnimationCurve < componentSize > : : evaluate ( float time , float * dst , EvaluateTyp <nl> } <nl> } <nl> break ; <nl> - case EvaluateType : : NEAR : <nl> + case EvaluateType : : NEARE : <nl> { <nl> float * src = t > 0 . 5f ? toValue : fromValue ; <nl> memcpy ( dst , src , _componentSizeByte ) ; <nl> mmm a / cocos / editor - support / cocostudio / CCSSceneReader . cpp <nl> ppp b / cocos / editor - support / cocostudio / CCSSceneReader . cpp <nl> THE SOFTWARE . <nl> # include " cocostudio / CocoStudio . h " <nl> # include " ui / CocosGUI . h " <nl> # include " audio / include / SimpleAudioEngine . h " <nl> - # include " ObjectFactory . h " <nl> + # include " base / ObjectFactory . h " <nl> <nl> using namespace cocos2d ; <nl> using namespace ui ; <nl> mmm a / cocos / editor - support / cocostudio / CocoLoader . h <nl> ppp b / cocos / editor - support / cocostudio / CocoLoader . h <nl> <nl> # include < cstdio > <nl> # include < stdint . h > <nl> # include " ExtensionMacros . h " <nl> - # include " rapidjson . h " <nl> - # include " document . h " <nl> + # include " json / rapidjson . h " <nl> + # include " json / document . h " <nl> <nl> # pragma pack ( 4 ) <nl> <nl> struct stExpCocoObjectDesc <nl> m_szName = m_szName + ( uint64_t ) pStringMemoryAddr ; <nl> m_pAttribDescArray = m_pAttribDescArray + ( uint64_t ) pAttribMemoryAddr ; <nl> stExpCocoAttribDesc * tpAttribDescArray = ( stExpCocoAttribDesc * ) m_pAttribDescArray ; <nl> - for ( int i = 0 ; i < m_nAttribNum ; i + + ) <nl> + for ( uint32_t i = 0 ; i < m_nAttribNum ; i + + ) <nl> { <nl> tpAttribDescArray [ i ] . 
ReBuild ( pStringMemoryAddr ) ; <nl> } <nl> mmm a / cocos / editor - support / cocostudio / WidgetReader / ListViewReader / ListViewReader . cpp <nl> ppp b / cocos / editor - support / cocostudio / WidgetReader / ListViewReader / ListViewReader . cpp <nl> namespace cocostudio <nl> std : : string value = stChildArray [ i ] . GetValue ( ) ; <nl> <nl> if ( key = = " direction " ) { <nl> - listView - > setDirection ( ( ScrollView : : Direction ) valueToFloat ( value ) ) ; <nl> + listView - > setDirection ( ( ScrollView : : Direction ) valueToInt ( value ) ) ; <nl> } <nl> else if ( key = = " gravity " ) { <nl> listView - > setGravity ( ( ListView : : Gravity ) valueToInt ( value ) ) ; <nl> mmm a / cocos / editor - support / cocostudio / WidgetReader / PageViewReader / PageViewReader . cpp <nl> ppp b / cocos / editor - support / cocostudio / WidgetReader / PageViewReader / PageViewReader . cpp <nl> <nl> <nl> # include " PageViewReader . h " <nl> # include " ui / UIPageView . h " <nl> + # include " ui / UILayout . h " <nl> + # include " cocostudio / CocoLoader . h " <nl> <nl> USING_NS_CC ; <nl> using namespace ui ; <nl> namespace cocostudio <nl> } <nl> <nl> void PageViewReader : : setPropsFromBinary ( cocos2d : : ui : : Widget * widget , CocoLoader * pCocoLoader , stExpCocoNode * pCocoNode ) <nl> - { <nl> - LayoutReader : : setPropsFromBinary ( widget , pCocoLoader , pCocoNode ) ; <nl> + { <nl> + LayoutReader : : setPropsFromBinary ( widget , pCocoLoader , pCocoNode ) ; <nl> } <nl> <nl> void PageViewReader : : setPropsFromJsonDictionary ( Widget * widget , const rapidjson : : Value & options ) <nl> mmm a / cocos / editor - support / cocostudio / WidgetReader / PageViewReader / PageViewReader . h <nl> ppp b / cocos / editor - support / cocostudio / WidgetReader / PageViewReader / PageViewReader . 
h <nl> namespace cocostudio <nl> static void purge ( ) ; <nl> <nl> virtual void setPropsFromJsonDictionary ( cocos2d : : ui : : Widget * widget , const rapidjson : : Value & options ) ; <nl> - virtual void setPropsFromBinary ( cocos2d : : ui : : Widget * widget , CocoLoader * pCocoLoader , stExpCocoNode * pCocoNode ) ; <nl> + virtual void setPropsFromBinary ( cocos2d : : ui : : Widget * widget , CocoLoader * pCocoLoader , stExpCocoNode * pCocoNode ) ; <nl> <nl> } ; <nl> } <nl> mmm a / cocos / editor - support / cocostudio / WidgetReader / ScrollViewReader / ScrollViewReader . cpp <nl> ppp b / cocos / editor - support / cocostudio / WidgetReader / ScrollViewReader / ScrollViewReader . cpp <nl> namespace cocostudio <nl> else if ( key = = " innerHeight " ) { <nl> innerHeight = valueToFloat ( value ) ; <nl> } else if ( key = = " direction " ) { <nl> - scrollView - > setDirection ( ( ScrollView : : Direction ) valueToFloat ( value ) ) ; <nl> + scrollView - > setDirection ( ( ScrollView : : Direction ) valueToInt ( value ) ) ; <nl> } else if ( key = = " bounceEnable " ) { <nl> scrollView - > setBounceEnabled ( valueToBool ( value ) ) ; <nl> } <nl> mmm a / cocos / editor - support / cocostudio / proj . win32 / libCocosStudio . vcxproj <nl> ppp b / cocos / editor - support / cocostudio / proj . win32 / libCocosStudio . vcxproj <nl> <nl> < ClCompile Include = " . . \ CCTransformHelp . cpp " / > <nl> < ClCompile Include = " . . \ CCTween . cpp " / > <nl> < ClCompile Include = " . . \ CCUtilMath . cpp " / > <nl> + < ClCompile Include = " . . \ CocoLoader . cpp " / > <nl> < ClCompile Include = " . . \ DictionaryHelper . cpp " / > <nl> < ClCompile Include = " . . \ TriggerBase . cpp " / > <nl> < ClCompile Include = " . . \ TriggerMng . cpp " / > <nl> <nl> < ClInclude Include = " . . \ CCTransformHelp . h " / > <nl> < ClInclude Include = " . . \ CCTween . h " / > <nl> < ClInclude Include = " . . \ CCUtilMath . h " / > <nl> + < ClInclude Include = " . . \ CocoLoader . 
h " / > <nl> + < ClInclude Include = " . . \ CocoStudio . h " / > <nl> < ClInclude Include = " . . \ DictionaryHelper . h " / > <nl> < ClInclude Include = " . . \ ObjectFactory . h " / > <nl> < ClInclude Include = " . . \ TriggerBase . h " / > <nl> mmm a / cocos / editor - support / cocostudio / proj . win32 / libCocosStudio . vcxproj . filters <nl> ppp b / cocos / editor - support / cocostudio / proj . win32 / libCocosStudio . vcxproj . filters <nl> <nl> < ClCompile Include = " . . \ WidgetReader \ PageViewReader \ PageViewReader . cpp " > <nl> < Filter > reader \ WidgetReader \ PageViewReader < / Filter > <nl> < / ClCompile > <nl> + < ClCompile Include = " . . \ CocoLoader . cpp " > <nl> + < Filter > json < / Filter > <nl> + < / ClCompile > <nl> < / ItemGroup > <nl> < ItemGroup > <nl> < ClInclude Include = " . . \ CCComAttribute . h " > <nl> <nl> < ClInclude Include = " . . \ WidgetReader \ PageViewReader \ PageViewReader . h " > <nl> < Filter > reader \ WidgetReader \ PageViewReader < / Filter > <nl> < / ClInclude > <nl> + < ClInclude Include = " . . \ CocoLoader . h " > <nl> + < Filter > json < / Filter > <nl> + < / ClInclude > <nl> + < ClInclude Include = " . . \ CocoStudio . h " > <nl> + < Filter > json < / Filter > <nl> + < / ClInclude > <nl> < / ItemGroup > <nl> - < / Project > <nl> + < / Project > <nl> \ No newline at end of file <nl> mmm a / cocos / renderer / CCMeshCommand . h <nl> ppp b / cocos / renderer / CCMeshCommand . h <nl> NS_CC_BEGIN <nl> <nl> class GLProgramState ; <nl> class GLProgram ; <nl> - class Uniform ; <nl> + struct Uniform ; <nl> <nl> / / it is a common mesh <nl> class MeshCommand : public RenderCommand <nl>
|
Merge pull request from geron - cn / guagnhuiv3
|
cocos2d/cocos2d-x
|
0b677a86d7a15b44cbd3ba2b7f9b8215a84757ca
|
2014-06-18T06:47:11Z
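Among the cocos2d-x fixes above, the reader changes replace `valueToFloat` with `valueToInt` when casting a serialized value to an enum: parsing an integral enum through a float invites truncation surprises, so the value is read as an int before the cast. (The `NEAR` → `NEARE` rename is a separate fix, likely dodging the `NEAR` macro in Windows headers.) A sketch of the int-based enum parse, with a hypothetical `Direction` enum rather than the real `ScrollView::Direction`:

```cpp
#include <cassert>
#include <string>

// Hypothetical Direction enum standing in for ScrollView::Direction.
enum class Direction { None = 0, Vertical = 1, Horizontal = 2, Both = 3 };

// Parse the serialized value as an integer, then cast to the enum.
// std::stoi mirrors the valueToInt helper used in the patch.
Direction parseDirection(const std::string& value)
{
    return static_cast<Direction>(std::stoi(value));
}
```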
|
mmm a / xbmc / video / VideoDatabase . cpp <nl> ppp b / xbmc / video / VideoDatabase . cpp <nl> bool CVideoDatabase : : GetResumePoint ( CVideoInfoTag & tag ) <nl> return match ; <nl> } <nl> <nl> - CVideoInfoTag CVideoDatabase : : GetDetailsForMovie ( auto_ptr < Dataset > & pDS , bool needsCast / * = false * / ) <nl> + CVideoInfoTag CVideoDatabase : : GetDetailsForMovie ( auto_ptr < Dataset > & pDS , bool needsCast / * = false * / , bool needsStreamDetails / * = true * / ) <nl> { <nl> - return GetDetailsForMovie ( pDS - > get_sql_record ( ) , needsCast ) ; <nl> + return GetDetailsForMovie ( pDS - > get_sql_record ( ) , needsCast , needsStreamDetails ) ; <nl> } <nl> <nl> - CVideoInfoTag CVideoDatabase : : GetDetailsForMovie ( const dbiplus : : sql_record * const record , bool needsCast / * = false * / ) <nl> + CVideoInfoTag CVideoDatabase : : GetDetailsForMovie ( const dbiplus : : sql_record * const record , bool needsCast / * = false * / , bool needsStreamDetails / * = true * / ) <nl> { <nl> CVideoInfoTag details ; <nl> <nl> CVideoInfoTag CVideoDatabase : : GetDetailsForMovie ( const dbiplus : : sql_record * cons <nl> GetCommonDetails ( record , details ) ; <nl> movieTime + = XbmcThreads : : SystemClockMillis ( ) - time ; time = XbmcThreads : : SystemClockMillis ( ) ; <nl> <nl> - GetStreamDetails ( details ) ; <nl> + if ( needsStreamDetails ) <nl> + GetStreamDetails ( details ) ; <nl> <nl> if ( needsCast ) <nl> { <nl> bool CVideoDatabase : : GetSetsByWhere ( const CStdString & strBaseDir , const Filter & <nl> } <nl> <nl> / / add the movie ' s details to the set <nl> - it - > second . movies . push_back ( GetDetailsForMovie ( m_pDS ) ) ; <nl> + it - > second . movies . push_back ( GetDetailsForMovie ( m_pDS , false , false ) ) ; <nl> <nl> m_pDS - > next ( ) ; <nl> } <nl> mmm a / xbmc / video / VideoDatabase . h <nl> ppp b / xbmc / video / VideoDatabase . 
h <nl> class CVideoDatabase : public CDatabase <nl> <nl> void DeleteStreamDetails ( int idFile ) ; <nl> CVideoInfoTag GetDetailsByTypeAndId ( VIDEODB_CONTENT_TYPE type , int id ) ; <nl> - CVideoInfoTag GetDetailsForMovie ( std : : auto_ptr < dbiplus : : Dataset > & pDS , bool needsCast = false ) ; <nl> - CVideoInfoTag GetDetailsForMovie ( const dbiplus : : sql_record * const record , bool needsCast = false ) ; <nl> + CVideoInfoTag GetDetailsForMovie ( std : : auto_ptr < dbiplus : : Dataset > & pDS , bool needsCast = false , bool needsStreamDetails = true ) ; <nl> + CVideoInfoTag GetDetailsForMovie ( const dbiplus : : sql_record * const record , bool needsCast = false , bool needsStreamDetails = true ) ; <nl> CVideoInfoTag GetDetailsForTvShow ( std : : auto_ptr < dbiplus : : Dataset > & pDS , bool needsCast = false ) ; <nl> CVideoInfoTag GetDetailsForTvShow ( const dbiplus : : sql_record * const record , bool needsCast = false ) ; <nl> CVideoInfoTag GetDetailsForEpisode ( std : : auto_ptr < dbiplus : : Dataset > & pDS , bool needsCast = false ) ; <nl>
|
videodb : do not retrieve streamdetails for set items - the actual streamdetails will be queried for the single items later on
|
xbmc/xbmc
|
30e9f4580dba814807fa31c1fbdfa4d2c6bc7777
|
2012-07-07T07:36:23Z
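The VideoDatabase change above makes an expensive sub-query opt-out through a defaulted parameter: existing call sites keep their behaviour, while the bulk set-listing path passes `false` to skip the per-item stream-details fetch. A sketch of that pattern with hypothetical `Movie`/loader names, not the real XBMC API:

```cpp
#include <cassert>
#include <string>

// Hypothetical record type; hasStreamDetails stands in for the data that
// GetStreamDetails() would populate.
struct Movie { std::string title; bool hasStreamDetails = false; };

// New optional work is gated behind a parameter that defaults to the old
// behaviour, so only callers that opt out (e.g. set listings) change.
Movie GetDetailsForMovie(const std::string& title,
                         bool needsCast = false,
                         bool needsStreamDetails = true)
{
    Movie m{title};
    (void)needsCast;                  // cast loading elided in this sketch
    if (needsStreamDetails)
        m.hasStreamDetails = true;    // stands in for GetStreamDetails()
    return m;
}
```

Adding the flag at the end of the parameter list keeps every existing two-argument call compiling unchanged.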
|
mmm a / src / arm / lithium - arm . cc <nl> ppp b / src / arm / lithium - arm . cc <nl> void LChunkBuilder : : VisitInstruction ( HInstruction * current ) { <nl> LInstruction * instr = current - > CompileToLithium ( this ) ; <nl> <nl> if ( instr ! = NULL ) { <nl> + / / Associate the hydrogen instruction first , since we may need it for <nl> + / / the ClobbersRegisters ( ) or ClobbersDoubleRegisters ( ) calls below . <nl> + instr - > set_hydrogen_value ( current ) ; <nl> + <nl> # if DEBUG <nl> / / Make sure that the lithium instruction has either no fixed register <nl> / / constraints in temps or the result OR no uses that are only used at <nl> void LChunkBuilder : : VisitInstruction ( HInstruction * current ) { <nl> if ( FLAG_stress_environments & & ! instr - > HasEnvironment ( ) ) { <nl> instr = AssignEnvironment ( instr ) ; <nl> } <nl> - instr - > set_hydrogen_value ( current ) ; <nl> chunk_ - > AddInstruction ( instr , current_block_ ) ; <nl> } <nl> current_instruction_ = old_current ; <nl> mmm a / src / arm / lithium - arm . h <nl> ppp b / src / arm / lithium - arm . h <nl> class LInstruction : public ZoneObject { <nl> / / Interface to the register allocator and iterators . <nl> bool ClobbersTemps ( ) const { return IsCall ( ) ; } <nl> bool ClobbersRegisters ( ) const { return IsCall ( ) ; } <nl> - bool ClobbersDoubleRegisters ( ) const { return IsCall ( ) ; } <nl> + virtual bool ClobbersDoubleRegisters ( ) const { return IsCall ( ) ; } <nl> <nl> / / Interface to the register allocator and iterators . 
<nl> bool IsMarkedAsCall ( ) const { return IsCall ( ) ; } <nl> class LCallRuntime V8_FINAL : public LTemplateInstruction < 1 , 1 , 0 > { <nl> DECLARE_CONCRETE_INSTRUCTION ( CallRuntime , " call - runtime " ) <nl> DECLARE_HYDROGEN_ACCESSOR ( CallRuntime ) <nl> <nl> + virtual bool ClobbersDoubleRegisters ( ) const V8_OVERRIDE { <nl> + return save_doubles ( ) = = kDontSaveFPRegs ; <nl> + } <nl> + <nl> const Runtime : : Function * function ( ) const { return hydrogen ( ) - > function ( ) ; } <nl> int arity ( ) const { return hydrogen ( ) - > argument_count ( ) ; } <nl> + SaveFPRegsMode save_doubles ( ) const { return hydrogen ( ) - > save_doubles ( ) ; } <nl> } ; <nl> <nl> <nl> mmm a / src / arm / lithium - codegen - arm . cc <nl> ppp b / src / arm / lithium - codegen - arm . cc <nl> void LCodeGen : : CallCodeGeneric ( Handle < Code > code , <nl> <nl> void LCodeGen : : CallRuntime ( const Runtime : : Function * function , <nl> int num_arguments , <nl> - LInstruction * instr ) { <nl> + LInstruction * instr , <nl> + SaveFPRegsMode save_doubles ) { <nl> ASSERT ( instr ! = NULL ) ; <nl> LPointerMap * pointers = instr - > pointer_map ( ) ; <nl> ASSERT ( pointers ! = NULL ) ; <nl> RecordPosition ( pointers - > position ( ) ) ; <nl> <nl> - __ CallRuntime ( function , num_arguments ) ; <nl> + __ CallRuntime ( function , num_arguments , save_doubles ) ; <nl> + <nl> RecordSafepointWithLazyDeopt ( instr , RECORD_SIMPLE_SAFEPOINT ) ; <nl> } <nl> <nl> mmm a / src / arm / lithium - codegen - arm . h <nl> ppp b / src / arm / lithium - codegen - arm . h <nl> class LCodeGen V8_FINAL BASE_EMBEDDED { <nl> <nl> void CallRuntime ( const Runtime : : Function * function , <nl> int num_arguments , <nl> - LInstruction * instr ) ; <nl> + LInstruction * instr , <nl> + SaveFPRegsMode save_doubles = kDontSaveFPRegs ) ; <nl> <nl> void CallRuntime ( Runtime : : FunctionId id , <nl> int num_arguments , <nl> mmm a / src / arm / macro - assembler - arm . 
cc <nl> ppp b / src / arm / macro - assembler - arm . cc <nl> void MacroAssembler : : GetLeastBitsFromInt32 ( Register dst , <nl> <nl> <nl> void MacroAssembler : : CallRuntime ( const Runtime : : Function * f , <nl> - int num_arguments ) { <nl> + int num_arguments , <nl> + SaveFPRegsMode save_doubles ) { <nl> / / All parameters are on the stack . r0 has the return value after call . <nl> <nl> / / If the expected number of arguments of the runtime function is <nl> void MacroAssembler : : CallRuntime ( const Runtime : : Function * f , <nl> / / smarter . <nl> mov ( r0 , Operand ( num_arguments ) ) ; <nl> mov ( r1 , Operand ( ExternalReference ( f , isolate ( ) ) ) ) ; <nl> - CEntryStub stub ( 1 ) ; <nl> - CallStub ( & stub ) ; <nl> - } <nl> - <nl> - <nl> - void MacroAssembler : : CallRuntime ( Runtime : : FunctionId fid , int num_arguments ) { <nl> - CallRuntime ( Runtime : : FunctionForId ( fid ) , num_arguments ) ; <nl> - } <nl> - <nl> - <nl> - void MacroAssembler : : CallRuntimeSaveDoubles ( Runtime : : FunctionId id ) { <nl> - const Runtime : : Function * function = Runtime : : FunctionForId ( id ) ; <nl> - mov ( r0 , Operand ( function - > nargs ) ) ; <nl> - mov ( r1 , Operand ( ExternalReference ( function , isolate ( ) ) ) ) ; <nl> - CEntryStub stub ( 1 , kSaveFPRegs ) ; <nl> + CEntryStub stub ( 1 , save_doubles ) ; <nl> CallStub ( & stub ) ; <nl> } <nl> <nl> mmm a / src / arm / macro - assembler - arm . h <nl> ppp b / src / arm / macro - assembler - arm . h <nl> class MacroAssembler : public Assembler { <nl> void TailCallStub ( CodeStub * stub , Condition cond = al ) ; <nl> <nl> / / Call a runtime routine . 
<nl> - void CallRuntime ( const Runtime : : Function * f , int num_arguments ) ; <nl> - void CallRuntimeSaveDoubles ( Runtime : : FunctionId id ) ; <nl> + void CallRuntime ( const Runtime : : Function * f , <nl> + int num_arguments , <nl> + SaveFPRegsMode save_doubles = kDontSaveFPRegs ) ; <nl> + void CallRuntimeSaveDoubles ( Runtime : : FunctionId id ) { <nl> + const Runtime : : Function * function = Runtime : : FunctionForId ( id ) ; <nl> + CallRuntime ( function , function - > nargs , kSaveFPRegs ) ; <nl> + } <nl> <nl> / / Convenience function : Same as above , but takes the fid instead . <nl> - void CallRuntime ( Runtime : : FunctionId fid , int num_arguments ) ; <nl> + void CallRuntime ( Runtime : : FunctionId id , int num_arguments ) { <nl> + CallRuntime ( Runtime : : FunctionForId ( id ) , num_arguments ) ; <nl> + } <nl> <nl> / / Convenience function : call an external reference . <nl> void CallExternalReference ( const ExternalReference & ext , <nl> mmm a / src / hydrogen - instructions . cc <nl> ppp b / src / hydrogen - instructions . cc <nl> void HCallNewArray : : PrintDataTo ( StringStream * stream ) { <nl> <nl> void HCallRuntime : : PrintDataTo ( StringStream * stream ) { <nl> stream - > Add ( " % o " , * name ( ) ) ; <nl> + if ( save_doubles ( ) = = kSaveFPRegs ) { <nl> + stream - > Add ( " [ save doubles ] " ) ; <nl> + } <nl> stream - > Add ( " # % d " , argument_count ( ) ) ; <nl> } <nl> <nl> mmm a / src / hydrogen - instructions . h <nl> ppp b / src / hydrogen - instructions . 
h
@@ class HCallRuntime V8_FINAL : public HCall<1> {
   HValue* context() { return OperandAt(0); }
   const Runtime::Function* function() const { return c_function_; }
   Handle<String> name() const { return name_; }
+  SaveFPRegsMode save_doubles() const { return save_doubles_; }
+  void set_save_doubles(SaveFPRegsMode save_doubles) {
+    save_doubles_ = save_doubles;
+  }
 
   virtual Representation RequiredInputRepresentation(int index) V8_OVERRIDE {
     return Representation::Tagged();
@@ class HCallRuntime V8_FINAL : public HCall<1> {
                Handle<String> name,
                const Runtime::Function* c_function,
                int argument_count)
-      : HCall<1>(argument_count), c_function_(c_function), name_(name) {
+      : HCall<1>(argument_count), c_function_(c_function), name_(name),
+        save_doubles_(kDontSaveFPRegs) {
     SetOperandAt(0, context);
   }
 
   const Runtime::Function* c_function_;
   Handle<String> name_;
+  SaveFPRegsMode save_doubles_;
 };
 
 
--- a/src/hydrogen.h
+++ b/src/hydrogen.h
@@ inline HInstruction* HGraphBuilder::AddUncasted<HReturn>(HConstant* value) {
 }
 
 
+template<>
+inline HInstruction* HGraphBuilder::AddUncasted<HCallRuntime>(
+    Handle<String> name,
+    const Runtime::Function* c_function,
+    int argument_count) {
+  HCallRuntime* instr = New<HCallRuntime>(name, c_function, argument_count);
+  if (graph()->info()->IsStub()) {
+    // When compiling code stubs, we don't want to save all double registers
+    // upon entry to the stub, but instead have the call runtime instruction
+    // save the double registers only on-demand (in the fallback case).
+    instr->set_save_doubles(kSaveFPRegs);
+  }
+  AddInstruction(instr);
+  return instr;
+}
+
+
 template<>
 inline HInstruction* HGraphBuilder::NewUncasted<HContext>() {
   return HContext::New(zone());
--- a/src/ia32/lithium-codegen-ia32.cc
+++ b/src/ia32/lithium-codegen-ia32.cc
@@ void LCodeGen::CallCode(Handle<Code> code,
 
 void LCodeGen::CallRuntime(const Runtime::Function* fun,
                            int argc,
-                           LInstruction* instr) {
+                           LInstruction* instr,
+                           SaveFPRegsMode save_doubles) {
   ASSERT(instr != NULL);
   ASSERT(instr->HasPointerMap());
   LPointerMap* pointers = instr->pointer_map();
   RecordPosition(pointers->position());
 
-  __ CallRuntime(fun, argc);
+  __ CallRuntime(fun, argc, save_doubles);
 
   RecordSafepointWithLazyDeopt(instr, RECORD_SIMPLE_SAFEPOINT);
 
@@ void LCodeGen::DoCallNewArray(LCallNewArray* instr) {
 
 
 void LCodeGen::DoCallRuntime(LCallRuntime* instr) {
-  CallRuntime(instr->function(), instr->arity(), instr);
+  CallRuntime(instr->function(), instr->arity(), instr, instr->save_doubles());
 }
 
 
--- a/src/ia32/lithium-codegen-ia32.h
+++ b/src/ia32/lithium-codegen-ia32.h
@@ class LCodeGen V8_FINAL BASE_EMBEDDED {
 
   void CallRuntime(const Runtime::Function* fun,
                    int argc,
-                   LInstruction* instr);
+                   LInstruction* instr,
+                   SaveFPRegsMode save_doubles = kDontSaveFPRegs);
 
   void CallRuntime(Runtime::FunctionId id,
                    int argc,
--- a/src/ia32/lithium-ia32.cc
+++ b/src/ia32/lithium-ia32.cc
@@ void LChunkBuilder::VisitInstruction(HInstruction* current) {
   LInstruction* instr = current->CompileToLithium(this);
 
   if (instr != NULL) {
+    // Associate the hydrogen instruction first, since we may need it for
+    // the ClobbersRegisters() or ClobbersDoubleRegisters() calls below.
+    instr->set_hydrogen_value(current);
+
 #if DEBUG
     // Make sure that the lithium instruction has either no fixed register
     // constraints in temps or the result OR no uses that are only used at
@@ void LChunkBuilder::VisitInstruction(HInstruction* current) {
       clobber->set_hydrogen_value(current);
       chunk_->AddInstruction(clobber, current_block_);
     }
-    instr->set_hydrogen_value(current);
     chunk_->AddInstruction(instr, current_block_);
   }
   current_instruction_ = old_current;
--- a/src/ia32/lithium-ia32.h
+++ b/src/ia32/lithium-ia32.h
@@ class LCallRuntime V8_FINAL : public LTemplateInstruction<1, 1, 0> {
   DECLARE_CONCRETE_INSTRUCTION(CallRuntime, "call-runtime")
   DECLARE_HYDROGEN_ACCESSOR(CallRuntime)
 
+  virtual bool ClobbersDoubleRegisters() const V8_OVERRIDE {
+    return save_doubles() == kDontSaveFPRegs;
+  }
+
   const Runtime::Function* function() const { return hydrogen()->function(); }
   int arity() const { return hydrogen()->argument_count(); }
+  SaveFPRegsMode save_doubles() const { return hydrogen()->save_doubles(); }
 };
 
 
--- a/src/ia32/macro-assembler-ia32.cc
+++ b/src/ia32/macro-assembler-ia32.cc
@@ void MacroAssembler::IndexFromHash(Register hash, Register index) {
 }
 
 
-void MacroAssembler::CallRuntime(Runtime::FunctionId id, int num_arguments) {
-  CallRuntime(Runtime::FunctionForId(id), num_arguments);
-}
-
-
-void MacroAssembler::CallRuntimeSaveDoubles(Runtime::FunctionId id) {
-  const Runtime::Function* function = Runtime::FunctionForId(id);
-  Set(eax, Immediate(function->nargs));
-  mov(ebx, Immediate(ExternalReference(function, isolate())));
-  CEntryStub ces(1, CpuFeatures::IsSupported(SSE2) ? kSaveFPRegs
-                                                   : kDontSaveFPRegs);
-  CallStub(&ces);
-}
-
-
 void MacroAssembler::CallRuntime(const Runtime::Function* f,
-                                 int num_arguments) {
+                                 int num_arguments,
+                                 SaveFPRegsMode save_doubles) {
   // If the expected number of arguments of the runtime function is
   // constant, we check that the actual number of arguments match the
   // expectation.
@@ void MacroAssembler::CallRuntime(const Runtime::Function* f,
   // smarter.
   Set(eax, Immediate(num_arguments));
   mov(ebx, Immediate(ExternalReference(f, isolate())));
-  CEntryStub ces(1);
+  CEntryStub ces(1, CpuFeatures::IsSupported(SSE2) ? save_doubles
+                                                   : kDontSaveFPRegs);
   CallStub(&ces);
 }
 
--- a/src/ia32/macro-assembler-ia32.h
+++ b/src/ia32/macro-assembler-ia32.h
@@ class MacroAssembler : public Assembler {
   void StubReturn(int argc);
 
   // Call a runtime routine.
-  void CallRuntime(const Runtime::Function* f, int num_arguments);
-  void CallRuntimeSaveDoubles(Runtime::FunctionId id);
+  void CallRuntime(const Runtime::Function* f,
+                   int num_arguments,
+                   SaveFPRegsMode save_doubles = kDontSaveFPRegs);
+  void CallRuntimeSaveDoubles(Runtime::FunctionId id) {
+    const Runtime::Function* function = Runtime::FunctionForId(id);
+    CallRuntime(function, function->nargs, kSaveFPRegs);
+  }
 
   // Convenience function: Same as above, but takes the fid instead.
-  void CallRuntime(Runtime::FunctionId id, int num_arguments);
+  void CallRuntime(Runtime::FunctionId id, int num_arguments) {
+    CallRuntime(Runtime::FunctionForId(id), num_arguments);
+  }
 
   // Convenience function: call an external reference.
   void CallExternalReference(ExternalReference ref, int num_arguments);
--- a/src/x64/lithium-codegen-x64.cc
+++ b/src/x64/lithium-codegen-x64.cc
@@ void LCodeGen::CallCode(Handle<Code> code,
 
 void LCodeGen::CallRuntime(const Runtime::Function* function,
                            int num_arguments,
-                           LInstruction* instr) {
+                           LInstruction* instr,
+                           SaveFPRegsMode save_doubles) {
   ASSERT(instr != NULL);
   ASSERT(instr->HasPointerMap());
   LPointerMap* pointers = instr->pointer_map();
   RecordPosition(pointers->position());
 
-  __ CallRuntime(function, num_arguments);
+  __ CallRuntime(function, num_arguments, save_doubles);
+
   RecordSafepointWithLazyDeopt(instr, RECORD_SIMPLE_SAFEPOINT, 0);
 }
 
@@ void LCodeGen::DoCallNewArray(LCallNewArray* instr) {
 
 
 void LCodeGen::DoCallRuntime(LCallRuntime* instr) {
-  CallRuntime(instr->function(), instr->arity(), instr);
+  CallRuntime(instr->function(), instr->arity(), instr, instr->save_doubles());
 }
 
 
--- a/src/x64/lithium-codegen-x64.h
+++ b/src/x64/lithium-codegen-x64.h
@@ class LCodeGen V8_FINAL BASE_EMBEDDED {
 
   void CallRuntime(const Runtime::Function* function,
                    int num_arguments,
-                   LInstruction* instr);
+                   LInstruction* instr,
+                   SaveFPRegsMode save_doubles = kDontSaveFPRegs);
 
   void CallRuntime(Runtime::FunctionId id,
                    int num_arguments,
--- a/src/x64/lithium-x64.cc
+++ b/src/x64/lithium-x64.cc
@@ void LChunkBuilder::VisitInstruction(HInstruction* current) {
   LInstruction* instr = current->CompileToLithium(this);
 
   if (instr != NULL) {
+    // Associate the hydrogen instruction first, since we may need it for
+    // the ClobbersRegisters() or ClobbersDoubleRegisters() calls below.
+    instr->set_hydrogen_value(current);
+
 #if DEBUG
     // Make sure that the lithium instruction has either no fixed register
     // constraints in temps or the result OR no uses that are only used at
@@ void LChunkBuilder::VisitInstruction(HInstruction* current) {
     if (FLAG_stress_environments && !instr->HasEnvironment()) {
       instr = AssignEnvironment(instr);
     }
-    instr->set_hydrogen_value(current);
     chunk_->AddInstruction(instr, current_block_);
   }
   current_instruction_ = old_current;
--- a/src/x64/lithium-x64.h
+++ b/src/x64/lithium-x64.h
@@ class LInstruction : public ZoneObject {
   // Interface to the register allocator and iterators.
   bool ClobbersTemps() const { return IsCall(); }
   bool ClobbersRegisters() const { return IsCall(); }
-  bool ClobbersDoubleRegisters() const { return IsCall(); }
+  virtual bool ClobbersDoubleRegisters() const { return IsCall(); }
 
   virtual void SetDeferredLazyDeoptimizationEnvironment(LEnvironment* env) { }
 
@@ class LCallRuntime V8_FINAL : public LTemplateInstruction<1, 0, 0> {
   DECLARE_CONCRETE_INSTRUCTION(CallRuntime, "call-runtime")
   DECLARE_HYDROGEN_ACCESSOR(CallRuntime)
 
+  virtual bool ClobbersDoubleRegisters() const V8_OVERRIDE {
+    return save_doubles() == kDontSaveFPRegs;
+  }
+
   const Runtime::Function* function() const { return hydrogen()->function(); }
   int arity() const { return hydrogen()->argument_count(); }
+  SaveFPRegsMode save_doubles() const { return hydrogen()->save_doubles(); }
 };
 
 
--- a/src/x64/macro-assembler-x64.cc
+++ b/src/x64/macro-assembler-x64.cc
@@ void MacroAssembler::IndexFromHash(Register hash, Register index) {
 }
 
 
-void MacroAssembler::CallRuntime(Runtime::FunctionId id, int num_arguments) {
-  CallRuntime(Runtime::FunctionForId(id), num_arguments);
-}
-
-
-void MacroAssembler::CallRuntimeSaveDoubles(Runtime::FunctionId id) {
-  const Runtime::Function* function = Runtime::FunctionForId(id);
-  Set(rax, function->nargs);
-  LoadAddress(rbx, ExternalReference(function, isolate()));
-  CEntryStub ces(1, kSaveFPRegs);
-  CallStub(&ces);
-}
-
-
 void MacroAssembler::CallRuntime(const Runtime::Function* f,
-                                 int num_arguments) {
+                                 int num_arguments,
+                                 SaveFPRegsMode save_doubles) {
   // If the expected number of arguments of the runtime function is
   // constant, we check that the actual number of arguments match the
   // expectation.
@@ void MacroAssembler::CallRuntime(const Runtime::Function* f,
   // smarter.
   Set(rax, num_arguments);
   LoadAddress(rbx, ExternalReference(f, isolate()));
-  CEntryStub ces(f->result_size);
+  CEntryStub ces(f->result_size, save_doubles);
   CallStub(&ces);
 }
 
--- a/src/x64/macro-assembler-x64.h
+++ b/src/x64/macro-assembler-x64.h
@@ class MacroAssembler : public Assembler {
   void StubReturn(int argc);
 
   // Call a runtime routine.
-  void CallRuntime(const Runtime::Function* f, int num_arguments);
+  void CallRuntime(const Runtime::Function* f,
+                   int num_arguments,
+                   SaveFPRegsMode save_doubles = kDontSaveFPRegs);
 
   // Call a runtime function and save the value of XMM registers.
-  void CallRuntimeSaveDoubles(Runtime::FunctionId id);
+  void CallRuntimeSaveDoubles(Runtime::FunctionId id) {
+    const Runtime::Function* function = Runtime::FunctionForId(id);
+    CallRuntime(function, function->nargs, kSaveFPRegs);
+  }
 
   // Convenience function: Same as above, but takes the fid instead.
-  void CallRuntime(Runtime::FunctionId id, int num_arguments);
+  void CallRuntime(Runtime::FunctionId id, int num_arguments) {
+    CallRuntime(Runtime::FunctionForId(id), num_arguments);
+  }
 
   // Convenience function: call an external reference.
   void CallExternalReference(const ExternalReference& ext,
|
Lazily save double registers for HCallRuntime instructions within Hydrogen code stubs.
|
v8/v8
|
f1c28e77ffdcda7c2f1affc2fcf40982466385e8
|
2013-10-01T11:56:42Z
|
--- a/fdbserver/Knobs.cpp
+++ b/fdbserver/Knobs.cpp
@@ ServerKnobs::ServerKnobs(bool randomize, ClientKnobs* clientKnobs) {
 
 	// Data distribution
 	init( RETRY_RELOCATESHARD_DELAY, 0.1 );
-	init( DATA_DISTRIBUTION_FAILURE_REACTION_TIME, 10.0 ); if( randomize && BUGGIFY ) DATA_DISTRIBUTION_FAILURE_REACTION_TIME = 1.0;
+	init( DATA_DISTRIBUTION_FAILURE_REACTION_TIME, 60.0 ); if( randomize && BUGGIFY ) DATA_DISTRIBUTION_FAILURE_REACTION_TIME = 1.0;
 	bool buggifySmallShards = randomize && BUGGIFY;
 	init( MIN_SHARD_BYTES, 200000 ); if( buggifySmallShards ) MIN_SHARD_BYTES = 40000; // FIXME: data distribution tracker (specifically StorageMetrics) relies on this number being larger than the maximum size of a key value pair
 	init( SHARD_BYTES_RATIO, 4 );
|
Merge pull request from alexmiller-apple/dd-failure-time
|
apple/foundationdb
|
460af919139cb40822bdf836bf69258edd392952
|
2019-06-21T00:33:16Z
|
--- a/src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp
+++ b/src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp
@@ MergeTreeDataPartWriterCompact::MergeTreeDataPartWriterCompact(
     const MergeTreeIndexGranularity & index_granularity_)
     : IMergeTreeDataPartWriter(
         data_part_, columns_list_, metadata_snapshot_, indices_to_recalc_, marks_file_extension_, default_codec_, settings_, index_granularity_)
+    , plain_file(data_part->volume->getDisk()->writeFile(
+        part_path + MergeTreeDataPartCompact::DATA_FILE_NAME_WITH_EXTENSION,
+        settings.max_compress_block_size,
+        WriteMode::Rewrite,
+        settings.estimated_size,
+        settings.aio_threshold))
+    , plain_hashing(*plain_file)
+    , marks_file(data_part->volume->getDisk()->writeFile(
+        part_path + MergeTreeDataPartCompact::DATA_FILE_NAME + marks_file_extension_,
+        4096,
+        WriteMode::Rewrite))
+    , marks(*marks_file)
 {
-    using DataPart = MergeTreeDataPartCompact;
-    String data_file_name = DataPart::DATA_FILE_NAME;
-
-    stream = std::make_unique<Stream>(
-        data_file_name,
-        data_part->volume->getDisk(),
-        part_path + data_file_name, DataPart::DATA_FILE_EXTENSION,
-        part_path + data_file_name, marks_file_extension,
-        default_codec,
-        settings.max_compress_block_size,
-        settings.estimated_size,
-        settings.aio_threshold);
+    const auto & storage_columns = metadata_snapshot->getColumns();
+    for (const auto & column : columns_list)
+        compressed_streams[column.name] = std::make_unique<CompressedStream>(
+            plain_hashing, storage_columns.getCodecOrDefault(column.name, default_codec));
 }
 
 void MergeTreeDataPartWriterCompact::write(
@@ void MergeTreeDataPartWriterCompact::writeBlock(const Block & block)
 
     for (const auto & column : columns_list)
     {
-        /// There could already be enough data to compress into the new block.
-        if (stream->compressed.offset() >= settings.min_compress_block_size)
-            stream->compressed.next();
+        auto & stream = compressed_streams[column.name];
 
-        writeIntBinary(stream->plain_hashing.count(), stream->marks);
-        writeIntBinary(stream->compressed.offset(), stream->marks);
+        writeIntBinary(plain_hashing.count(), marks);
+        writeIntBinary(UInt64(0), marks);
 
         writeColumnSingleGranule(block.getByName(column.name), current_row, rows_to_write);
+        stream->hashing_buf.next();
     }
 
     ++from_mark;
@@ void MergeTreeDataPartWriterCompact::writeBlock(const Block & block)
         index_granularity.appendMark(rows_written);
     }
 
-    writeIntBinary(rows_to_write, stream->marks);
+    writeIntBinary(rows_to_write, marks);
 }
 
     next_index_offset = 0;
@@ void MergeTreeDataPartWriterCompact::writeColumnSingleGranule(const ColumnWithTy
     IDataType::SerializeBinaryBulkStatePtr state;
     IDataType::SerializeBinaryBulkSettings serialize_settings;
 
-    serialize_settings.getter = [this](IDataType::SubstreamPath) -> WriteBuffer * { return &stream->compressed; };
+    serialize_settings.getter = [this, &column](IDataType::SubstreamPath) -> WriteBuffer * { return &compressed_streams.at(column.name)->hashing_buf; };
     serialize_settings.position_independent_encoding = true;
     serialize_settings.low_cardinality_max_dictionary_size = 0;
 
@@ void MergeTreeDataPartWriterCompact::finishDataSerialization(IMergeTreeDataPart:
     {
         for (size_t i = 0; i < columns_list.size(); ++i)
         {
-            writeIntBinary(stream->plain_hashing.count(), stream->marks);
-            writeIntBinary(stream->compressed.offset(), stream->marks);
+            writeIntBinary(plain_hashing.count(), marks);
+            writeIntBinary(UInt64(0), marks);
         }
-        writeIntBinary(0ULL, stream->marks);
+        writeIntBinary(UInt64(0), marks);
     }
 
-    stream->finalize();
-    stream->addToChecksums(checksums);
-    stream.reset();
+    plain_file->next();
+    marks.next();
+    addToChecksums(checksums);
 }
 
 static void fillIndexGranularityImpl(
@@ void MergeTreeDataPartWriterCompact::fillIndexGranularity(size_t index_granulari
         rows_in_block);
 }
 
+void MergeTreeDataPartWriterCompact::addToChecksums(MergeTreeDataPartChecksums & checksums)
+{
+    using uint128 = CityHash_v1_0_2::uint128;
+
+    String data_file_name = MergeTreeDataPartCompact::DATA_FILE_NAME_WITH_EXTENSION;
+    String marks_file_name = MergeTreeDataPartCompact::DATA_FILE_NAME + marks_file_extension;
+
+    checksums.files[data_file_name].is_compressed = true;
+    size_t uncompressed_size = 0;
+    uint128 uncompressed_hash{0, 0};
+
+    for (const auto & [_, stream] : compressed_streams)
+    {
+        uncompressed_size += stream->hashing_buf.count();
+        uncompressed_hash = CityHash_v1_0_2::CityHash128WithSeed(
+            reinterpret_cast<char *>(&uncompressed_hash), sizeof(uncompressed_hash), uncompressed_hash);
+    }
+
+    checksums.files[data_file_name].uncompressed_size = uncompressed_size;
+    checksums.files[data_file_name].uncompressed_hash = uncompressed_hash;
+    checksums.files[data_file_name].file_size = plain_hashing.count();
+    checksums.files[data_file_name].file_hash = plain_hashing.getHash();
+
+    checksums.files[marks_file_name].file_size = marks.count();
+    checksums.files[marks_file_name].file_hash = marks.getHash();
+}
+
 void MergeTreeDataPartWriterCompact::ColumnsBuffer::add(MutableColumns && columns)
 {
     if (accumulated_columns.empty())
--- a/src/Storages/MergeTree/MergeTreeDataPartWriterCompact.h
+++ b/src/Storages/MergeTree/MergeTreeDataPartWriterCompact.h
@@ class MergeTreeDataPartWriterCompact : public IMergeTreeDataPartWriter
 
     void writeBlock(const Block & block);
 
-    StreamPtr stream;
+    void addToChecksums(MergeTreeDataPartChecksums & checksumns);
 
     Block header;
 
@@ class MergeTreeDataPartWriterCompact : public IMergeTreeDataPartWriter
     };
 
     ColumnsBuffer columns_buffer;
+
+    /// compressed -> compressed_buf -> plain_hashing -> plain_file
+    std::unique_ptr<WriteBufferFromFileBase> plain_file;
+    HashingWriteBuffer plain_hashing;
+
+    struct CompressedStream
+    {
+        CompressedWriteBuffer compressed_buf;
+        HashingWriteBuffer hashing_buf;
+
+        CompressedStream(WriteBuffer & buf, const CompressionCodecPtr & codec)
+            : compressed_buf(buf, codec), hashing_buf(compressed_buf) {}
+    };
+
+    std::unordered_map<String, std::unique_ptr<CompressedStream>> compressed_streams;
+
+    /// marks -> marks_file
+    std::unique_ptr<WriteBufferFromFileBase> marks_file;
+    HashingWriteBuffer marks;
 };
 
 }
new file mode 100644
index 00000000000..982c45a26e3
--- /dev/null
+++ b/tests/queries/0_stateless/01375_compact_parts_codecs.reference
+12000 11890
+11965 11890
+5858 11890
new file mode 100644
index 00000000000..467745c6fa2
--- /dev/null
+++ b/tests/queries/0_stateless/01375_compact_parts_codecs.sql
+DROP TABLE IF EXISTS codecs;
+
+CREATE TABLE codecs (id UInt32, val UInt32, s String)
+    ENGINE = MergeTree ORDER BY id
+    SETTINGS min_rows_for_wide_part = 10000;
+INSERT INTO codecs SELECT number, number, toString(number) FROM numbers(1000);
+SELECT sum(data_compressed_bytes), sum(data_uncompressed_bytes)
+    FROM system.parts
+    WHERE table = 'codecs' AND database = currentDatabase();
+
+DROP TABLE codecs;
+
+CREATE TABLE codecs (id UInt32 CODEC(NONE), val UInt32 CODEC(NONE), s String CODEC(NONE))
+    ENGINE = MergeTree ORDER BY id
+    SETTINGS min_rows_for_wide_part = 10000;
+INSERT INTO codecs SELECT number, number, toString(number) FROM numbers(1000);
+SELECT sum(data_compressed_bytes), sum(data_uncompressed_bytes)
+    FROM system.parts
+    WHERE table = 'codecs' AND database = currentDatabase();
+
+DROP TABLE codecs;
+
+CREATE TABLE codecs (id UInt32, val UInt32 CODEC(Delta, ZSTD), s String CODEC(ZSTD))
+    ENGINE = MergeTree ORDER BY id
+    SETTINGS min_rows_for_wide_part = 10000;
+INSERT INTO codecs SELECT number, number, toString(number) FROM numbers(1000);
+SELECT sum(data_compressed_bytes), sum(data_uncompressed_bytes)
+    FROM system.parts
+    WHERE table = 'codecs' AND database = currentDatabase();
+
+DROP TABLE codecs;
|
support codecs in compact parts
|
ClickHouse/ClickHouse
|
d6434f61dc7b08072862d4d10ea6fa9da781b6c1
|
2020-07-07T00:15:02Z
|
--- a/src/GUI.cpp
+++ b/src/GUI.cpp
@@ void GUI::on_actionCreate_torrent_triggered() {
   }
 }
 
-bool GUI::checkForModals() const {
-  // Returns true if there are any modal windows visible
-  QList<QDialog*> dialog_list = findChildren<QDialog*>();
-  QList<QDialog*>::const_iterator i;
-
-  for (i = dialog_list.begin(); i != dialog_list.constEnd(); i++) {
-    if ((*i)->isModal() && (*i)->isVisible()) {
-      return true;
-    }
-  }
-
-  return false;
-}
-
 bool GUI::event(QEvent *e) {
   if (e->type() == QEvent::WindowStateChange) {
     qDebug("Window change event");
@@ bool GUI::event(QEvent *e) {
       qDebug("minimisation");
       QSettings settings(QString::fromUtf8("qBittorrent"), QString::fromUtf8("qBittorrent"));
       if (systrayIcon && settings.value(QString::fromUtf8("Preferences/General/MinimizeToTray"), false).toBool()) {
-        if (checkForModals())
-        {
-          qDebug("Minimize to Tray enabled, but not hiding because of a modal dialog");
-        }
-        else
-        {
+        if (!qApp->activeWindow() || !qApp->activeWindow()->isModal()) {
           qDebug("Minimize to Tray enabled, hiding!");
           e->accept();
           QTimer::singleShot(0, this, SLOT(hide()));
-          return true;
+        return true;
         }
       }
     }
--- a/src/GUI.h
+++ b/src/GUI.h
@@ class GUI : public QMainWindow, private Ui::MainWindow {
   ~GUI();
   // Methods
   int getCurrentTabIndex() const;
-  bool checkForModals() const;
   TransferListWidget* getTransferList() const { return transferList; }
   QMenu* getTrayIconMenu();
 
|
Attempt to make Ishan's patch lighter (the new patch may not take into consideration all use cases but it seems to work ok)
|
qbittorrent/qBittorrent
|
bce7959332018477eda2b5dd725ff6aed6ef71fe
|
2010-06-10T19:54:20Z
|
--- a/hphp/runtime/base/shared/concurrent_shared_store.cpp
+++ b/hphp/runtime/base/shared/concurrent_shared_store.cpp
@@ SharedVariant* ConcurrentTableSharedStore::unserialize(CStrRef key,
     VariableUnserializer vu(sval->sAddr, sval->getSerializedSize(), sType);
     Variant v;
     v.unserialize(&vu);
-    sval->var = SharedVariant::Create(v, sval->isSerializedObj());
+    sval->var = new SharedVariant(v, sval->isSerializedObj());
     stats_on_add(key.get(), sval, 0, true, true); // delayed prime
     return sval->var;
   } catch (Exception& e) {
@@ bool ConcurrentTableSharedStore::constructPrime(CStrRef v, KeyValuePair& item,
       return false;
     }
   }
-  item.value = SharedVariant::Create(v, serialized);
+  item.value = new SharedVariant(v, serialized);
   return true;
 }
 
@@ bool ConcurrentTableSharedStore::constructPrime(CVarRef v,
       return false;
     }
   }
-  item.value = SharedVariant::Create(v, false);
+  item.value = new SharedVariant(v, false);
   return true;
 }
 
--- a/hphp/runtime/base/shared/concurrent_shared_store.h
+++ b/hphp/runtime/base/shared/concurrent_shared_store.h
@@ namespace HPHP {
 
 class ConcurrentTableSharedStore : public SharedStore {
 public:
-  ConcurrentTableSharedStore(int id)
+  explicit ConcurrentTableSharedStore(int id)
     : SharedStore(id), m_lockingFlag(false), m_purgeCounter(0) {}
 
   virtual int size() {
@@ class ConcurrentTableSharedStore : public SharedStore {
 
 protected:
   virtual SharedVariant* construct(CVarRef v) {
-    return SharedVariant::Create(v, false);
+    return new SharedVariant(v, false);
   }
 
   struct charHashCompare {
--- a/hphp/runtime/base/shared/immutable_map.cpp
+++ b/hphp/runtime/base/shared/immutable_map.cpp
@@ ImmutableMap* ImmutableMap::Create(ArrayData* arr,
 
   try {
     for (ArrayIter it(arr); !it.end(); it.next()) {
-      SharedVariant* key = SharedVariant::Create(it.first(), false, true,
-                                                 unserializeObj);
-      SharedVariant* val = SharedVariant::Create(it.secondRef(), false, true,
-                                                 unserializeObj);
-      ret->add(ret->m.m_num, key, val);
+      ret->add(ret->m.m_num, it.first(), it.secondRef(), unserializeObj);
       ++ret->m.m_num;
     }
   } catch (...) {
@@ HOT_FUNC
 void ImmutableMap::Destroy(ImmutableMap* map) {
   Bucket* buckets = map->buckets();
   for (int i = 0; i < map->m.m_num; i++) {
-    delete buckets[i].key;
-    delete buckets[i].val;
+    (*(SharedVariant*)&buckets[i].val).~SharedVariant();
   }
   free(map);
 }
 
 HOT_FUNC
-void ImmutableMap::add(int pos, SharedVariant *key, SharedVariant *val) {
+void ImmutableMap::addVal(int pos, int hash_pos,
+                          CVarRef val, bool unserializeObj) {
   // NOTE: no check on duplication because we assume the original array has no
   // duplication
   Bucket* bucket = buckets() + pos;
-  bucket->key = key;
-  bucket->val = val;
-  int hash_pos =
-    (key->is(KindOfInt64) ?
-     key->intData() : key->getStringData()->hash()) & m.m_capacity_mask;
-
+  new (&bucket->val) SharedVariant(val, false, true, unserializeObj);
   int& hp = hash()[hash_pos];
   bucket->next = hp;
   hp = pos;
 }
 
+HOT_FUNC
+void ImmutableMap::add(int pos, CVarRef key, CVarRef val, bool unserializeObj) {
+  int64_t ikey;
+  StringData* skey;
+  int32_t hash;
+  Bucket* b = buckets() + pos;
+
+  switch (key.getType()) {
+    case KindOfInt64: {
+      hash = ikey = key.toInt64();
+      b->setIntKey(ikey);
+      break;
+    }
+    case KindOfString: {
+      skey = StringData::GetStaticString(key.getStringData());
+      goto static_case;
+    }
+    case KindOfStaticString: {
+      skey = key.getStringData();
+    static_case:
+      hash = skey->hash();
+      b->setStrKey(skey, hash);
+      break;
+    }
+    default: not_reached();
+  }
+  addVal(pos, hash & m.m_capacity_mask, val, unserializeObj);
+}
+
+#define STR_HASH(x) (int32_t(x) | 0x80000000)
+
 HOT_FUNC
 int ImmutableMap::indexOf(const StringData* key) {
-  strhash_t h = key->hash();
+  strhash_t h = STR_HASH(key->hash());
   int bucket = hash()[h & m.m_capacity_mask];
   Bucket* b = buckets();
   while (bucket != -1) {
-    if (!b[bucket].key->is(KindOfInt64) &&
-        key->same(b[bucket].key->getStringData())) {
+    Bucket* cand = &b[bucket];
+    if (cand->hash() == h && (cand->skey == key || key->same(cand->skey))) {
       return bucket;
     }
     bucket = b[bucket].next;
@@ int ImmutableMap::indexOf(int64_t key) {
   int bucket = hash()[key & m.m_capacity_mask];
   Bucket* b = buckets();
   while (bucket != -1) {
-    if (b[bucket].key->is(KindOfInt64) &&
-        key == b[bucket].key->intData()) {
+    if (b[bucket].hasIntKey() && key == b[bucket].ikey) {
       return bucket;
     }
     bucket = b[bucket].next;
--- a/hphp/runtime/base/shared/immutable_map.h
+++ b/hphp/runtime/base/shared/immutable_map.h
@@
 namespace HPHP {
 ///////////////////////////////////////////////////////////////////////////////
 
-class SharedVariant;
 /**
  * an immutable map is a php-style array that can take strings and
  * ints as keys. the map also stores the order in which the elements
@@ class ImmutableMap {
   int indexOf(const StringData* key);
   int indexOf(int64_t key);
 
-  SharedVariant* getKeyIndex(int index) {
+  Variant getKey(int index) {
     assert(index < size());
-    return buckets()[index].key;
+    Bucket* b = &buckets()[index];
+    if (b->hasIntKey()) {
+      return b->ikey;
+    } else {
+      return b->skey;
+    }
   }
 
-  SharedVariant* getValIndex(int index) {
+  SharedVariant* getValue(int index) {
     assert(index < size());
-    return buckets()[index].val;
+    return (SharedVariant*)&buckets()[index].val;
   }
 
   unsigned size() const {
@@ class ImmutableMap {
   static ImmutableMap* Create(ArrayData* arr,
                               bool unserializeObj);
   static void Destroy(ImmutableMap* im);
-private:
-  ImmutableMap() {}
-  ~ImmutableMap() {}
-  void add(int pos, SharedVariant *key, SharedVariant *val);
 
   struct Bucket {
     /** index of the next bucket, or -1 if the end of a chain */
     int next;
-    /** the value of this bucket */
-    SharedVariant* key;
-    SharedVariant* val;
+    /* similar to HphpArray::Elm */
+    union {
+      int64_t ikey;
+      StringData* skey;
+    };
+    // cannot declare SharedVariant here because of cyclic header
+    // includes
+    TypedValueAux val;
+    bool hasStrKey() const {
+      return val.hash() != 0;
+    }
+    bool hasIntKey() const {
+      return val.hash() == 0;
+    }
+    int32_t hash() const {
+      return val.hash();
+    }
+    void setStrKey(StringData* k, strhash_t h) {
+      skey = k;
+      val.hash() = int32_t(h) | 0x80000000;
+    }
+    void setIntKey(int64_t k) {
+      ikey = k;
+      val.hash() = 0;
+    }
   };
+
+private:
+  ImmutableMap() {}
+  ~ImmutableMap() {}
+  void addVal(int pos, int hash_pos, CVarRef val, bool unserializeObj);
+  void add(int pos, CVarRef key, CVarRef val, bool unserializeObj);
+
   /** index of the beginning of each hash chain */
   int* hash() const { return (int*)(this + 1); }
   /** buckets, stored in index order */
--- a/hphp/runtime/base/shared/shared_map.cpp
+++ b/hphp/runtime/base/shared/shared_map.cpp
@@ IMPLEMENT_SMART_ALLOCATION_HOT(SharedMap);
 ///////////////////////////////////////////////////////////////////////////////
 HOT_FUNC
 CVarRef SharedMap::getValueRef(ssize_t pos) const {
-  SharedVariant* sv = m_arr->getValue(pos);
+  SharedVariant* sv = getValueImpl(pos);
   DataType t = sv->getType();
   if (!IS_REFCOUNTED_TYPE(t)) return sv->asCVarRef();
   if (LIKELY(m_localCache != nullptr)) {
-    assert(unsigned(pos) < m_arr->arrCap());
+    assert(unsigned(pos) < size());
     TypedValue* tv = &m_localCache[pos];
     if (tv->m_type != KindOfUninit) return tvAsCVarRef(tv);
   } else {
     static_assert(KindOfUninit == 0, "must be 0 since we use smart_calloc");
-    unsigned cap = m_arr->arrCap();
-    m_localCache = (TypedValue*) smart_calloc(cap, sizeof(TypedValue));
+    m_localCache = (TypedValue*) smart_calloc(size(), sizeof(TypedValue));
   }
   TypedValue* tv = &m_localCache[pos];
   tvAsVariant(tv) = sv->toLocal();
@@ CVarRef SharedMap::getValueRef(ssize_t pos) const {
 HOT_FUNC
 SharedMap::~SharedMap() {
   if (m_localCache) {
-    for (TypedValue* tv = m_localCache, *end = tv + m_arr->arrCap();
+    for (TypedValue* tv = m_localCache, *end = tv + size();
          tv < end; ++tv) {
       tvRefcountedDecRef(tv);
     }
@@ SharedMap::~SharedMap() {
   }
 }
 
-bool SharedMap::exists(const StringData* k) const {
-  return m_arr->getIndex(k) != -1;
+ssize_t SharedMap::getIndex(const StringData* k) const {
+  if (isVector()) return -1;
+  return m_map->indexOf(k);
 }
 
-bool SharedMap::exists(int64_t k) const {
-  return m_arr->getIndex(k) != -1;
+ssize_t SharedMap::getIndex(int64_t k) const {
+  if (isVector()) {
+    if (k < 0 || (size_t)k >= m_vec->m_size) return -1;
+    return k;
+  }
+  return m_map->indexOf(k);
 }
 
-ssize_t SharedMap::getIndex(int64_t k) const {
-  return m_arr->getIndex(k);
+bool SharedMap::exists(const StringData* k) const {
+  return getIndex(k) != -1;
 }
 
-ssize_t SharedMap::getIndex(const StringData* k) const {
-  return m_arr->getIndex(k);
+bool SharedMap::exists(int64_t k) const {
+  return getIndex(k) != -1;
 }
 
 CVarRef SharedMap::get(const StringData* k, bool error /* = false */) const {
-  int index = m_arr->getIndex(k);
+  int index = getIndex(k);
   if (index == -1) {
     return error ? getNotFound(k) : null_variant;
   }
@@ CVarRef SharedMap::get(const StringData* k, bool error /* = false */) const {
 }
 
 CVarRef SharedMap::get(int64_t k, bool error /* = false */) const {
-  int index = m_arr->getIndex(k);
+  int index = getIndex(k);
   if (index == -1) {
     return error ? getNotFound(k) : null_variant;
   }
@@ ArrayData* SharedMap::prepend(CVarRef v, bool copy) {
 }
 
 ArrayData* SharedMap::escalate() const {
-  ArrayData* ret = m_arr->loadElems(*this);
+  ArrayData* ret = loadElems();
   assert(!
ret - > isStatic ( ) ) ; <nl> return ret ; <nl> } <nl> <nl> TypedValue * SharedMap : : nvGet ( int64_t k ) const { <nl> - int index = m_arr - > getIndex ( k ) ; <nl> + int index = getIndex ( k ) ; <nl> if ( index = = - 1 ) return nullptr ; <nl> return ( TypedValue * ) & getValueRef ( index ) ; <nl> } <nl> <nl> TypedValue * SharedMap : : nvGet ( const StringData * key ) const { <nl> - int index = m_arr - > getIndex ( key ) ; <nl> + int index = getIndex ( key ) ; <nl> if ( index = = - 1 ) return nullptr ; <nl> return ( TypedValue * ) & getValueRef ( index ) ; <nl> } <nl> <nl> void SharedMap : : nvGetKey ( TypedValue * out , ssize_t pos ) { <nl> - Variant k = m_arr - > getKey ( pos ) ; <nl> + Variant k = getKey ( pos ) ; <nl> TypedValue * tv = k . asTypedValue ( ) ; <nl> / / copy w / out clobbering out - > _count . <nl> out - > m_type = tv - > m_type ; <nl> TypedValue * SharedMap : : nvGetValueRef ( ssize_t pos ) { <nl> } <nl> <nl> TypedValue * SharedMap : : nvGetCell ( int64_t k ) const { <nl> - int index = m_arr - > getIndex ( k ) ; <nl> + int index = getIndex ( k ) ; <nl> return index ! = - 1 ? getValueRef ( index ) . getTypedAccessor ( ) : <nl> nvGetNotFound ( k ) ; <nl> } <nl> <nl> TypedValue * SharedMap : : nvGetCell ( const StringData * key ) const { <nl> - int index = m_arr - > getIndex ( key ) ; <nl> + int index = getIndex ( key ) ; <nl> return index ! = - 1 ? getValueRef ( index ) . getTypedAccessor ( ) : <nl> nvGetNotFound ( key ) ; <nl> } <nl> <nl> ArrayData * SharedMap : : escalateForSort ( ) { <nl> - ArrayData * ret = m_arr - > loadElems ( * this , true / * mapInit * / ) ; <nl> + ArrayData * ret = loadElems ( true / * mapInit * / ) ; <nl> assert ( ! ret - > isStatic ( ) ) ; <nl> return ret ; <nl> } <nl> <nl> + ArrayData * SharedMap : : loadElems ( bool mapInit / * = false * / ) const { <nl> + uint count = size ( ) ; <nl> + bool isVec = isVector ( ) ; <nl> + <nl> + auto ai = <nl> + mapInit ? ArrayInit ( count , ArrayInit : : mapInit ) : <nl> + isVec ? 
ArrayInit ( count , ArrayInit : : vectorInit ) : <nl> + ArrayInit ( count ) ; <nl> + <nl> + if ( isVec ) { <nl> + for ( uint i = 0 ; i < count ; i + + ) { <nl> + ai . set ( getValueRef ( i ) ) ; <nl> + } <nl> + } else { <nl> + for ( uint i = 0 ; i < count ; i + + ) { <nl> + ai . add ( m_map - > getKey ( i ) , getValueRef ( i ) , <nl> + true ) ; <nl> + } <nl> + } <nl> + ArrayData * elems = ai . create ( ) ; <nl> + if ( elems - > isStatic ( ) ) elems = elems - > copy ( ) ; <nl> + return elems ; <nl> + } <nl> + <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> } <nl> mmm a / hphp / runtime / base / shared / shared_map . h <nl> ppp b / hphp / runtime / base / shared / shared_map . h <nl> namespace HPHP { <nl> * / <nl> class SharedMap : public ArrayData { <nl> public : <nl> - SharedMap ( SharedVariant * source ) <nl> + explicit SharedMap ( SharedVariant * source ) <nl> : ArrayData ( kSharedMap ) <nl> - , m_arr ( source ) <nl> , m_localCache ( nullptr ) { <nl> + m_map = source - > getMap ( ) ; <nl> + m_isVector = source - > getIsVector ( ) ; <nl> } <nl> <nl> ~ SharedMap ( ) ; <nl> class SharedMap : public ArrayData { <nl> using ArrayData : : remove ; <nl> <nl> ssize_t vsize ( ) const { <nl> - return m_arr - > arrSize ( ) ; <nl> + return isVector ( ) ? m_vec - > m_size : m_map - > size ( ) ; <nl> } <nl> <nl> Variant getKey ( ssize_t pos ) const { <nl> - return m_arr - > getKey ( pos ) ; <nl> + if ( isVector ( ) ) { <nl> + assert ( pos < ( ssize_t ) m_vec - > m_size ) ; <nl> + return pos ; <nl> + } <nl> + return m_map - > getKey ( pos ) ; <nl> + } <nl> + <nl> + SharedVariant * getValueImpl ( ssize_t pos ) const { <nl> + return isVector ( ) ? 
m_vec - > getValue ( pos ) : m_map - > getValue ( pos ) ; <nl> } <nl> <nl> Variant getValue ( ssize_t pos ) const { return getValueRef ( pos ) ; } <nl> class SharedMap : public ArrayData { <nl> virtual ArrayData * escalate ( ) const ; <nl> virtual ArrayData * escalateForSort ( ) ; <nl> <nl> + ArrayData * loadElems ( bool mapInit = false ) const ; <nl> + <nl> private : <nl> - SharedVariant * m_arr ; <nl> + bool m_isVector ; <nl> + union { <nl> + ImmutableMap * m_map ; <nl> + VectorData * m_vec ; <nl> + } ; <nl> mutable TypedValue * m_localCache ; <nl> + bool isVector ( ) const { return m_isVector ; } <nl> } ; <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> mmm a / hphp / runtime / base / shared / shared_variant . cpp <nl> ppp b / hphp / runtime / base / shared / shared_variant . cpp <nl> SharedVariant : : SharedVariant ( CVarRef source , bool serialized , <nl> bool unserializeObj / * = false * / ) <nl> : m_flags ( 0 ) { <nl> assert ( ! serialized | | source . isString ( ) ) ; <nl> - m_count = 1 ; <nl> m_type = source . getType ( ) ; <nl> switch ( m_type ) { <nl> case KindOfBoolean : <nl> SharedVariant : : SharedVariant ( CVarRef source , bool serialized , <nl> setIsVector ( ) ; <nl> m_data . vec = new ( arr - > size ( ) ) VectorData ( ) ; <nl> for ( ArrayIter it ( arr ) ; ! it . end ( ) ; it . next ( ) ) { <nl> - SharedVariant * val = Create ( it . secondRef ( ) , false , true , <nl> - unserializeObj ) ; <nl> - m_data . vec - > vals ( ) [ m_data . vec - > m_size + + ] = val ; <nl> + m_data . vec - > add ( it . secondRef ( ) , unserializeObj ) ; <nl> } <nl> } else { <nl> m_data . 
map = ImmutableMap : : Create ( arr , unserializeObj ) ; <nl> SharedVariant : : ~ SharedVariant ( ) { <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> - HOT_FUNC <nl> - int SharedVariant : : getIndex ( const StringData * key ) { <nl> - assert ( is ( KindOfArray ) ) ; <nl> - if ( getIsVector ( ) ) return - 1 ; <nl> - return m_data . map - > indexOf ( key ) ; <nl> - } <nl> - <nl> - int SharedVariant : : getIndex ( int64_t key ) { <nl> - assert ( is ( KindOfArray ) ) ; <nl> - if ( getIsVector ( ) ) { <nl> - if ( key < 0 | | ( size_t ) key > = m_data . vec - > m_size ) return - 1 ; <nl> - return key ; <nl> - } <nl> - return m_data . map - > indexOf ( key ) ; <nl> - } <nl> - <nl> - Variant SharedVariant : : getKey ( ssize_t pos ) const { <nl> - assert ( is ( KindOfArray ) ) ; <nl> - if ( getIsVector ( ) ) { <nl> - assert ( pos < ( ssize_t ) m_data . vec - > m_size ) ; <nl> - return pos ; <nl> - } <nl> - return m_data . map - > getKeyIndex ( pos ) - > toLocal ( ) ; <nl> - } <nl> - <nl> - HOT_FUNC <nl> - SharedVariant * SharedVariant : : getValue ( ssize_t pos ) const { <nl> - assert ( is ( KindOfArray ) ) ; <nl> - if ( getIsVector ( ) ) { <nl> - assert ( pos < ( ssize_t ) m_data . vec - > m_size ) ; <nl> - return m_data . vec - > vals ( ) [ pos ] ; <nl> - } <nl> - return m_data . map - > getValIndex ( pos ) ; <nl> - } <nl> - <nl> - ArrayData * SharedVariant : : loadElems ( const SharedMap & sharedMap , <nl> - bool mapInit / * = false * / ) { <nl> - assert ( is ( KindOfArray ) ) ; <nl> - uint count = arrSize ( ) ; <nl> - bool isVector = getIsVector ( ) ; <nl> - <nl> - auto ai = <nl> - mapInit ? ArrayInit ( count , ArrayInit : : mapInit ) : <nl> - isVector ? ArrayInit ( count , ArrayInit : : vectorInit ) : <nl> - ArrayInit ( count ) ; <nl> - <nl> - if ( isVector ) { <nl> - for ( uint i = 0 ; i < count ; i + + ) { <nl> - ai . set ( sharedMap . 
getValueRef ( i ) ) ; <nl> - } <nl> - } else { <nl> - for ( uint i = 0 ; i < count ; i + + ) { <nl> - ai . add ( m_data . map - > getKeyIndex ( i ) - > toLocal ( ) , sharedMap . getValueRef ( i ) , <nl> - true ) ; <nl> - } <nl> - } <nl> - ArrayData * elems = ai . create ( ) ; <nl> - if ( elems - > isStatic ( ) ) elems = elems - > copy ( ) ; <nl> - return elems ; <nl> - } <nl> - <nl> - int SharedVariant : : countReachable ( ) const { <nl> - int count = 1 ; <nl> - if ( getType ( ) = = KindOfArray ) { <nl> - int size = arrSize ( ) ; <nl> - if ( ! getIsVector ( ) ) { <nl> - count + = size ; / / for keys <nl> - } <nl> - for ( int i = 0 ; i < size ; i + + ) { <nl> - SharedVariant * p = getValue ( i ) ; <nl> - count + = p - > countReachable ( ) ; / / for values <nl> - } <nl> - } <nl> - return count ; <nl> - } <nl> - <nl> - SharedVariant * SharedVariant : : Create <nl> - ( CVarRef source , bool serialized , bool inner / * = false * / , <nl> - bool unserializeObj / * = false * / ) { <nl> - return new SharedVariant ( source , serialized , inner , unserializeObj ) ; <nl> - } <nl> - <nl> SharedVariant * SharedVariant : : convertObj ( CVarRef var ) { <nl> if ( ! var . is ( KindOfObject ) | | getObjAttempted ( ) ) { <nl> return nullptr ; <nl> int32_t SharedVariant : : getSpaceUsage ( ) const { <nl> size + = sizeof ( VectorData ) + <nl> sizeof ( SharedVariant * ) * m_data . vec - > m_size ; <nl> for ( size_t i = 0 ; i < m_data . vec - > m_size ; i + + ) { <nl> - size + = m_data . vec - > vals ( ) [ i ] - > getSpaceUsage ( ) ; <nl> + size + = m_data . vec - > getValue ( i ) - > getSpaceUsage ( ) ; <nl> } <nl> } else { <nl> ImmutableMap * map = m_data . 
map ; <nl> size + = map - > getStructSize ( ) ; <nl> for ( int i = 0 ; i < map - > size ( ) ; i + + ) { <nl> - size + = map - > getKeyIndex ( i ) - > getSpaceUsage ( ) ; <nl> - size + = map - > getValIndex ( i ) - > getSpaceUsage ( ) ; <nl> + size + = sizeof ( int64_t ) ; <nl> + size + = map - > getValue ( i ) - > getSpaceUsage ( ) ; <nl> } <nl> } <nl> break ; <nl> void SharedVariant : : getStats ( SharedVariantStats * stats ) const { <nl> stats - > dataTotalSize = sizeof ( SharedVariant ) + sizeof ( VectorData ) ; <nl> stats - > dataTotalSize + = sizeof ( SharedVariant * ) * m_data . vec - > m_size ; <nl> for ( size_t i = 0 ; i < m_data . vec - > m_size ; i + + ) { <nl> - SharedVariant * v = m_data . vec - > vals ( ) [ i ] ; <nl> + SharedVariant * v = m_data . vec - > getValue ( i ) ; <nl> SharedVariantStats childStats ; <nl> v - > getStats ( & childStats ) ; <nl> stats - > addChildStats ( & childStats ) ; <nl> void SharedVariant : : getStats ( SharedVariantStats * stats ) const { <nl> stats - > dataTotalSize = sizeof ( SharedVariant ) + map - > getStructSize ( ) ; <nl> for ( int i = 0 ; i < map - > size ( ) ; i + + ) { <nl> SharedVariantStats childStats ; <nl> - map - > getKeyIndex ( i ) - > getStats ( & childStats ) ; <nl> + / / for key <nl> + childStats . dataSize = childStats . dataTotalSize = sizeof ( int64_t ) ; <nl> stats - > addChildStats ( & childStats ) ; <nl> - map - > getValIndex ( i ) - > getStats ( & childStats ) ; <nl> + map - > getValue ( i ) - > getStats ( & childStats ) ; <nl> stats - > addChildStats ( & childStats ) ; <nl> } <nl> } <nl> mmm a / hphp / runtime / base / shared / shared_variant . h <nl> ppp b / hphp / runtime / base / shared / shared_variant . 
h <nl> class SharedMap ; <nl> <nl> class SharedVariantStats ; <nl> <nl> + class VectorData ; <nl> + <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> <nl> class SharedVariant { <nl> class SharedVariant { <nl> bool unserializeObj = false ) ; <nl> ~ SharedVariant ( ) ; <nl> <nl> - / / Create will do the wrapped check before creating a SharedVariant <nl> - static SharedVariant * Create ( CVarRef source , bool serialized , <nl> - bool inner = false , <nl> - bool unserializeObj = false ) ; <nl> - <nl> bool is ( DataType d ) const { return m_type = = d ; } <nl> DataType getType ( ) const { return ( DataType ) m_type ; } <nl> CVarRef asCVarRef ( ) const { <nl> class SharedVariant { <nl> return m_data . str - > hash ( ) ; <nl> } <nl> <nl> - size_t arrSize ( ) const { <nl> - assert ( is ( KindOfArray ) ) ; <nl> - if ( getIsVector ( ) ) return m_data . vec - > m_size ; <nl> - return m_data . map - > size ( ) ; <nl> - } <nl> - <nl> - size_t arrCap ( ) const { <nl> - assert ( is ( KindOfArray ) ) ; <nl> - if ( getIsVector ( ) ) return m_data . vec - > m_size ; <nl> - return m_data . 
map - > capacity ( ) ; <nl> - } <nl> - <nl> - int getIndex ( int64_t key ) ; <nl> - int getIndex ( const StringData * key ) ; <nl> - <nl> - ArrayData * loadElems ( const SharedMap & sharedMap , <nl> - bool mapInit = false ) ; <nl> - <nl> - Variant getKey ( ssize_t pos ) const ; <nl> - <nl> - SharedVariant * getValue ( ssize_t pos ) const ; <nl> - <nl> / / implementing LeakDetectable <nl> void dump ( std : : string & out ) ; <nl> <nl> class SharedVariant { <nl> SharedVariant * convertObj ( CVarRef var ) ; <nl> bool isUnserializedObj ( ) { return getIsObj ( ) ; } <nl> <nl> - int countReachable ( ) const ; <nl> - <nl> private : <nl> - class VectorData { <nl> - public : <nl> - union { <nl> - size_t m_size ; <nl> - SharedVariant * m_align_dummy ; <nl> - } ; <nl> - <nl> - VectorData ( ) : m_size ( 0 ) { } <nl> - <nl> - ~ VectorData ( ) { <nl> - SharedVariant * * v = vals ( ) ; <nl> - for ( size_t i = 0 ; i < m_size ; i + + ) { <nl> - delete v [ i ] ; <nl> - } <nl> - } <nl> - SharedVariant * * vals ( ) { return ( SharedVariant * * ) ( this + 1 ) ; } <nl> - void * operator new ( size_t sz , int num ) { <nl> - assert ( sz = = sizeof ( VectorData ) ) ; <nl> - return malloc ( sizeof ( VectorData ) + num * sizeof ( SharedVariant * ) ) ; <nl> - } <nl> - void operator delete ( void * ptr ) { free ( ptr ) ; } <nl> - / / just to keep the compiler happy ; used if the constructor throws <nl> - void operator delete ( void * ptr , int num ) { free ( ptr ) ; } <nl> - } ; <nl> - <nl> / * <nl> * Keep the object layout binary compatible with Variant for primitive types . 
<nl> * We want to have compile time assertion to guard it but still want to have <nl> class SharedVariant { <nl> void setSerializedArray ( ) { m_flags | = SerializedArray ; } <nl> void clearSerializedArray ( ) { m_flags & = ~ SerializedArray ; } <nl> <nl> - bool getIsVector ( ) const { return ( bool ) ( m_flags & IsVector ) ; } <nl> void setIsVector ( ) { m_flags | = IsVector ; } <nl> void clearIsVector ( ) { m_flags & = ~ IsVector ; } <nl> <nl> class SharedVariant { <nl> bool getObjAttempted ( ) const { return ( bool ) ( m_flags & ObjAttempted ) ; } <nl> void setObjAttempted ( ) { m_flags | = ObjAttempted ; } <nl> void clearObjAttempted ( ) { m_flags & = ~ ObjAttempted ; } <nl> + <nl> + public : <nl> + bool getIsVector ( ) const { return ( bool ) ( m_flags & IsVector ) ; } <nl> + ImmutableMap * getMap ( ) const { return m_data . map ; } <nl> + } ; <nl> + <nl> + / * VectorData is used when when all the keys are integers <nl> + * in the sequential range 0 to n - 1 . * / <nl> + class VectorData { <nl> + public : <nl> + union { <nl> + size_t m_size ; <nl> + SharedVariant * m_align_dummy ; <nl> + } ; <nl> + <nl> + VectorData ( ) : m_size ( 0 ) { } <nl> + <nl> + ~ VectorData ( ) { <nl> + SharedVariant * v = vals ( ) ; <nl> + for ( size_t i = 0 ; i < m_size ; i + + ) { <nl> + v [ i ] . 
~ SharedVariant ( ) ; <nl> + } <nl> + } <nl> + SharedVariant * getValue ( ssize_t pos ) { <nl> + return & ( ( SharedVariant * ) ( this + 1 ) ) [ pos ] ; <nl> + } <nl> + SharedVariant * vals ( ) { return ( SharedVariant * ) ( this + 1 ) ; } <nl> + void * operator new ( size_t sz , int num ) { <nl> + assert ( sz = = sizeof ( VectorData ) ) ; <nl> + return malloc ( sizeof ( VectorData ) + num * sizeof ( SharedVariant ) ) ; <nl> + } <nl> + void add ( CVarRef val , bool unserializeObj ) { <nl> + / * placement new * / <nl> + new ( & vals ( ) [ m_size + + ] ) SharedVariant ( val , false , true , unserializeObj ) ; <nl> + } <nl> + void operator delete ( void * ptr ) { free ( ptr ) ; } <nl> + / / just to keep the compiler happy ; used if the constructor throws <nl> + void operator delete ( void * ptr , int num ) { free ( ptr ) ; } <nl> } ; <nl> <nl> class SharedVariantStats { <nl>
|
clean up SharedMap
|
facebook/hhvm
|
841670cd26d0e2e50607940ab118a2813b218de0
|
2013-05-28T17:30:15Z
|
mmm a / core / variant . h <nl> ppp b / core / variant . h <nl> typedef PoolVector < Color > PoolColorArray ; <nl> <nl> class Variant { <nl> public : <nl> + / / If this changes the table in variant_op must be updated <nl> enum Type { <nl> <nl> NIL , <nl> class Variant { <nl> <nl> Variant ( const IP_Address & p_address ) ; <nl> <nl> + / / If this changes the table in variant_op must be updated <nl> enum Operator { <nl> <nl> / / comparation <nl> class Variant { <nl> OP_GREATER_EQUAL , <nl> / / mathematic <nl> OP_ADD , <nl> - OP_SUBSTRACT , <nl> + OP_SUBTRACT , <nl> OP_MULTIPLY , <nl> OP_DIVIDE , <nl> OP_NEGATE , <nl> mmm a / core / variant_op . cpp <nl> ppp b / core / variant_op . cpp <nl> <nl> # include " object . h " <nl> # include " script_language . h " <nl> <nl> + # define CASE_TYPE_ALL ( PREFIX , OP ) \ <nl> + CASE_TYPE ( PREFIX , OP , INT ) \ <nl> + CASE_TYPE_ALL_BUT_INT ( PREFIX , OP ) <nl> + <nl> + # define CASE_TYPE_ALL_BUT_INT ( PREFIX , OP ) \ <nl> + CASE_TYPE ( PREFIX , OP , NIL ) \ <nl> + CASE_TYPE ( PREFIX , OP , BOOL ) \ <nl> + CASE_TYPE ( PREFIX , OP , REAL ) \ <nl> + CASE_TYPE ( PREFIX , OP , STRING ) \ <nl> + CASE_TYPE ( PREFIX , OP , VECTOR2 ) \ <nl> + CASE_TYPE ( PREFIX , OP , RECT2 ) \ <nl> + CASE_TYPE ( PREFIX , OP , VECTOR3 ) \ <nl> + CASE_TYPE ( PREFIX , OP , TRANSFORM2D ) \ <nl> + CASE_TYPE ( PREFIX , OP , PLANE ) \ <nl> + CASE_TYPE ( PREFIX , OP , QUAT ) \ <nl> + CASE_TYPE ( PREFIX , OP , RECT3 ) \ <nl> + CASE_TYPE ( PREFIX , OP , BASIS ) \ <nl> + CASE_TYPE ( PREFIX , OP , TRANSFORM ) \ <nl> + CASE_TYPE ( PREFIX , OP , COLOR ) \ <nl> + CASE_TYPE ( PREFIX , OP , NODE_PATH ) \ <nl> + CASE_TYPE ( PREFIX , OP , _RID ) \ <nl> + CASE_TYPE ( PREFIX , OP , OBJECT ) \ <nl> + CASE_TYPE ( PREFIX , OP , DICTIONARY ) \ <nl> + CASE_TYPE ( PREFIX , OP , ARRAY ) \ <nl> + CASE_TYPE ( PREFIX , OP , POOL_BYTE_ARRAY ) \ <nl> + CASE_TYPE ( PREFIX , OP , POOL_INT_ARRAY ) \ <nl> + CASE_TYPE ( PREFIX , OP , POOL_REAL_ARRAY ) \ <nl> + CASE_TYPE ( PREFIX , OP , 
POOL_STRING_ARRAY ) \ <nl> + CASE_TYPE ( PREFIX , OP , POOL_VECTOR2_ARRAY ) \ <nl> + CASE_TYPE ( PREFIX , OP , POOL_VECTOR3_ARRAY ) \ <nl> + CASE_TYPE ( PREFIX , OP , POOL_COLOR_ARRAY ) <nl> + <nl> + # ifdef __GNUC__ <nl> + # define TYPE ( PREFIX , OP , TYPE ) & & PREFIX # # _ # # OP # # _ # # TYPE <nl> + <nl> + / * clang - format off * / <nl> + # define TYPES ( PREFIX , OP ) { \ <nl> + TYPE ( PREFIX , OP , NIL ) , \ <nl> + TYPE ( PREFIX , OP , BOOL ) , \ <nl> + TYPE ( PREFIX , OP , INT ) , \ <nl> + TYPE ( PREFIX , OP , REAL ) , \ <nl> + TYPE ( PREFIX , OP , STRING ) , \ <nl> + TYPE ( PREFIX , OP , VECTOR2 ) , \ <nl> + TYPE ( PREFIX , OP , RECT2 ) , \ <nl> + TYPE ( PREFIX , OP , VECTOR3 ) , \ <nl> + TYPE ( PREFIX , OP , TRANSFORM2D ) , \ <nl> + TYPE ( PREFIX , OP , PLANE ) , \ <nl> + TYPE ( PREFIX , OP , QUAT ) , \ <nl> + TYPE ( PREFIX , OP , RECT3 ) , \ <nl> + TYPE ( PREFIX , OP , BASIS ) , \ <nl> + TYPE ( PREFIX , OP , TRANSFORM ) , \ <nl> + TYPE ( PREFIX , OP , COLOR ) , \ <nl> + TYPE ( PREFIX , OP , NODE_PATH ) , \ <nl> + TYPE ( PREFIX , OP , _RID ) , \ <nl> + TYPE ( PREFIX , OP , OBJECT ) , \ <nl> + TYPE ( PREFIX , OP , DICTIONARY ) , \ <nl> + TYPE ( PREFIX , OP , ARRAY ) , \ <nl> + TYPE ( PREFIX , OP , POOL_BYTE_ARRAY ) , \ <nl> + TYPE ( PREFIX , OP , POOL_INT_ARRAY ) , \ <nl> + TYPE ( PREFIX , OP , POOL_REAL_ARRAY ) , \ <nl> + TYPE ( PREFIX , OP , POOL_STRING_ARRAY ) , \ <nl> + TYPE ( PREFIX , OP , POOL_VECTOR2_ARRAY ) , \ <nl> + TYPE ( PREFIX , OP , POOL_VECTOR3_ARRAY ) , \ <nl> + TYPE ( PREFIX , OP , POOL_COLOR_ARRAY ) , \ <nl> + } <nl> + / * clang - format on * / <nl> + <nl> + # define CASES ( PREFIX ) static void * switch_table_ # # PREFIX [ 25 ] [ 27 ] = { \ <nl> + TYPES ( PREFIX , OP_EQUAL ) , \ <nl> + TYPES ( PREFIX , OP_NOT_EQUAL ) , \ <nl> + TYPES ( PREFIX , OP_LESS ) , \ <nl> + TYPES ( PREFIX , OP_LESS_EQUAL ) , \ <nl> + TYPES ( PREFIX , OP_GREATER ) , \ <nl> + TYPES ( PREFIX , OP_GREATER_EQUAL ) , \ <nl> + TYPES ( PREFIX , OP_ADD ) , \ <nl> + 
TYPES ( PREFIX , OP_SUBTRACT ) , \ <nl> + TYPES ( PREFIX , OP_MULTIPLY ) , \ <nl> + TYPES ( PREFIX , OP_DIVIDE ) , \ <nl> + TYPES ( PREFIX , OP_NEGATE ) , \ <nl> + TYPES ( PREFIX , OP_POSITIVE ) , \ <nl> + TYPES ( PREFIX , OP_MODULE ) , \ <nl> + TYPES ( PREFIX , OP_STRING_CONCAT ) , \ <nl> + TYPES ( PREFIX , OP_SHIFT_LEFT ) , \ <nl> + TYPES ( PREFIX , OP_SHIFT_RIGHT ) , \ <nl> + TYPES ( PREFIX , OP_BIT_AND ) , \ <nl> + TYPES ( PREFIX , OP_BIT_OR ) , \ <nl> + TYPES ( PREFIX , OP_BIT_XOR ) , \ <nl> + TYPES ( PREFIX , OP_BIT_NEGATE ) , \ <nl> + TYPES ( PREFIX , OP_AND ) , \ <nl> + TYPES ( PREFIX , OP_OR ) , \ <nl> + TYPES ( PREFIX , OP_XOR ) , \ <nl> + TYPES ( PREFIX , OP_NOT ) , \ <nl> + TYPES ( PREFIX , OP_IN ) , \ <nl> + } <nl> + <nl> + # define SWITCH ( PREFIX , op , val ) goto * switch_table_ # # PREFIX [ op ] [ val ] ; <nl> + # define SWITCH_OP ( PREFIX , OP , val ) <nl> + # define CASE_TYPE ( PREFIX , OP , TYPE ) PREFIX # # _ # # OP # # _ # # TYPE : <nl> + <nl> + # else <nl> + # define CASES ( PREFIX ) <nl> + # define SWITCH ( PREFIX , op , val ) switch ( op ) <nl> + # define SWITCH_OP ( PREFIX , OP , val ) \ <nl> + case OP : \ <nl> + switch ( val ) <nl> + # define CASE_TYPE ( PREFIX , OP , TYPE ) case TYPE : <nl> + # endif <nl> + <nl> Variant : : operator bool ( ) const { <nl> <nl> bool b ; <nl> bool Variant : : booleanize ( bool & r_valid ) const { <nl> <nl> r_valid = true ; <nl> switch ( type ) { <nl> - case NIL : return false ; <nl> - case BOOL : return _data . _bool ; <nl> - case INT : return _data . _int ; <nl> - case REAL : return _data . _real ; <nl> - case STRING : return ( * reinterpret_cast < const String * > ( _data . _mem ) ) ! = " " ; <nl> + case NIL : <nl> + return false ; <nl> + case BOOL : <nl> + return _data . _bool ; <nl> + case INT : <nl> + return _data . _int ; <nl> + case REAL : <nl> + return _data . _real ; <nl> + case STRING : <nl> + return ( * reinterpret_cast < const String * > ( _data . _mem ) ) ! 
= " " ; <nl> case VECTOR2 : <nl> case RECT2 : <nl> case TRANSFORM2D : <nl> bool Variant : : booleanize ( bool & r_valid ) const { <nl> case BASIS : <nl> case TRANSFORM : <nl> case COLOR : <nl> - case _RID : return ( * reinterpret_cast < const RID * > ( _data . _mem ) ) . is_valid ( ) ; <nl> - case OBJECT : return _get_obj ( ) . obj ; <nl> - case NODE_PATH : return ( * reinterpret_cast < const NodePath * > ( _data . _mem ) ) ! = NodePath ( ) ; <nl> + case _RID : <nl> + return ( * reinterpret_cast < const RID * > ( _data . _mem ) ) . is_valid ( ) ; <nl> + case OBJECT : <nl> + return _get_obj ( ) . obj ; <nl> + case NODE_PATH : <nl> + return ( * reinterpret_cast < const NodePath * > ( _data . _mem ) ) ! = NodePath ( ) ; <nl> case DICTIONARY : <nl> case ARRAY : <nl> case POOL_BYTE_ARRAY : <nl> bool Variant : : booleanize ( bool & r_valid ) const { <nl> case POOL_COLOR_ARRAY : <nl> r_valid = false ; <nl> return false ; <nl> - default : { } <nl> + default : { <nl> + } <nl> } <nl> <nl> return false ; <nl> bool Variant : : booleanize ( bool & r_valid ) const { <nl> return ; \ <nl> } <nl> <nl> - # define DEFAULT_OP_NUM ( m_op , m_name , m_type ) \ <nl> - case m_name : { \ <nl> + # define _RETURN_FAIL \ <nl> + { \ <nl> + r_valid = false ; \ <nl> + return ; \ <nl> + } <nl> + <nl> + # define DEFAULT_OP_NUM ( m_prefix , m_op_name , m_name , m_op , m_type ) \ <nl> + CASE_TYPE ( m_prefix , m_op_name , m_name ) { \ <nl> switch ( p_b . type ) { \ <nl> - case BOOL : _RETURN ( p_a . _data . m_type m_op p_b . _data . _bool ) ; \ <nl> case INT : _RETURN ( p_a . _data . m_type m_op p_b . _data . _int ) ; \ <nl> case REAL : _RETURN ( p_a . _data . m_type m_op p_b . _data . _real ) ; \ <nl> default : { } \ <nl> bool Variant : : booleanize ( bool & r_valid ) const { <nl> return ; \ <nl> } ; <nl> <nl> - # define DEFAULT_OP_NUM_NEG ( m_name , m_type ) \ <nl> - case m_name : { \ <nl> - \ <nl> - _RETURN ( - p_a . _data . 
m_type ) ; \ <nl> + # ifdef DEBUG_ENABLED <nl> + # define DEFAULT_OP_NUM_DIV ( m_prefix , m_op_name , m_name , m_type ) \ <nl> + CASE_TYPE ( m_prefix , m_op_name , m_name ) { \ <nl> + switch ( p_b . type ) { \ <nl> + case INT : { \ <nl> + if ( p_b . _data . _int = = 0 ) { \ <nl> + r_valid = false ; \ <nl> + _RETURN ( " Division By Zero " ) ; \ <nl> + } \ <nl> + _RETURN ( p_a . _data . m_type / p_b . _data . _int ) ; \ <nl> + } \ <nl> + case REAL : { \ <nl> + if ( p_b . _data . _real = = 0 ) { \ <nl> + r_valid = false ; \ <nl> + _RETURN ( " Division By Zero " ) ; \ <nl> + } \ <nl> + _RETURN ( p_a . _data . m_type / p_b . _data . _real ) ; \ <nl> + } \ <nl> + default : { } \ <nl> + } \ <nl> + r_valid = false ; \ <nl> + return ; \ <nl> + } ; <nl> + # else <nl> + # define DEFAULT_OP_NUM_DIV ( m_prefix , m_op_name , m_name , m_type ) \ <nl> + CASE_TYPE ( m_prefix , m_op_name , m_name ) { \ <nl> + switch ( p_b . type ) { \ <nl> + case INT : _RETURN ( p_a . _data . m_type / p_b . _data . _int ) ; \ <nl> + case REAL : _RETURN ( p_a . _data . m_type / p_b . _data . _real ) ; \ <nl> + default : { } \ <nl> + } \ <nl> + r_valid = false ; \ <nl> + return ; \ <nl> + } ; <nl> + # endif <nl> + <nl> + # define DEFAULT_OP_NUM_NEG ( m_prefix , m_op_name , m_name , m_type ) \ <nl> + CASE_TYPE ( m_prefix , m_op_name , m_name ) { \ <nl> + _RETURN ( - p_a . _data . m_type ) ; \ <nl> } ; <nl> <nl> - # define DEFAULT_OP_NUM_POS ( m_name , m_type ) \ <nl> - case m_name : { \ <nl> - \ <nl> - _RETURN ( p_a . _data . m_type ) ; \ <nl> + # define DEFAULT_OP_NUM_POS ( m_prefix , m_op_name , m_name , m_type ) \ <nl> + CASE_TYPE ( m_prefix , m_op_name , m_name ) { \ <nl> + _RETURN ( p_a . _data . m_type ) ; \ <nl> } ; <nl> <nl> - # define DEFAULT_OP_NUM_VEC ( m_op , m_name , m_type ) \ <nl> - case m_name : { \ <nl> + # define DEFAULT_OP_NUM_VEC ( m_prefix , m_op_name , m_name , m_op , m_type ) \ <nl> + CASE_TYPE ( m_prefix , m_op_name , m_name ) { \ <nl> switch ( p_b . 
type ) { \ <nl> - case BOOL : _RETURN ( p_a . _data . m_type m_op p_b . _data . _bool ) ; \ <nl> case INT : _RETURN ( p_a . _data . m_type m_op p_b . _data . _int ) ; \ <nl> case REAL : _RETURN ( p_a . _data . m_type m_op p_b . _data . _real ) ; \ <nl> case VECTOR2 : _RETURN ( p_a . _data . m_type m_op * reinterpret_cast < const Vector2 * > ( p_b . _data . _mem ) ) ; \ <nl> bool Variant : : booleanize ( bool & r_valid ) const { <nl> return ; \ <nl> } ; <nl> <nl> - # define DEFAULT_OP_STR ( m_op , m_name , m_type ) \ <nl> - case m_name : { \ <nl> + # define DEFAULT_OP_STR_REV ( m_prefix , m_op_name , m_name , m_op , m_type ) \ <nl> + CASE_TYPE ( m_prefix , m_op_name , m_name ) { \ <nl> + switch ( p_b . type ) { \ <nl> + case STRING : _RETURN ( * reinterpret_cast < const m_type * > ( p_b . _data . _mem ) m_op * reinterpret_cast < const String * > ( p_a . _data . _mem ) ) ; \ <nl> + case NODE_PATH : _RETURN ( * reinterpret_cast < const m_type * > ( p_b . _data . _mem ) m_op * reinterpret_cast < const NodePath * > ( p_a . _data . _mem ) ) ; \ <nl> + default : { } \ <nl> + } \ <nl> + r_valid = false ; \ <nl> + return ; \ <nl> + } ; <nl> + <nl> + # define DEFAULT_OP_STR ( m_prefix , m_op_name , m_name , m_op , m_type ) \ <nl> + CASE_TYPE ( m_prefix , m_op_name , m_name ) { \ <nl> switch ( p_b . type ) { \ <nl> case STRING : _RETURN ( * reinterpret_cast < const m_type * > ( p_a . _data . _mem ) m_op * reinterpret_cast < const String * > ( p_b . _data . _mem ) ) ; \ <nl> case NODE_PATH : _RETURN ( * reinterpret_cast < const m_type * > ( p_a . _data . _mem ) m_op * reinterpret_cast < const NodePath * > ( p_b . _data . _mem ) ) ; \ <nl> bool Variant : : booleanize ( bool & r_valid ) const { <nl> return ; \ <nl> } ; <nl> <nl> - # define DEFAULT_OP_LOCALMEM ( m_op , m_name , m_type ) \ <nl> - case m_name : { \ <nl> - switch ( p_b . type ) { \ <nl> - case m_name : _RETURN ( * reinterpret_cast < const m_type * > ( p_a . _data . 
_mem ) m_op * reinterpret_cast < const m_type * > ( p_b . _data . _mem ) ) ; \ <nl> - default : { } \ <nl> - } \ <nl> - r_valid = false ; \ <nl> - return ; \ <nl> - } <nl> + # define DEFAULT_OP_LOCALMEM_REV ( m_prefix , m_op_name , m_name , m_op , m_type ) \ <nl> + CASE_TYPE ( m_prefix , m_op_name , m_name ) { \ <nl> + if ( p_b . type = = m_name ) \ <nl> + _RETURN ( * reinterpret_cast < const m_type * > ( p_b . _data . _mem ) m_op * reinterpret_cast < const m_type * > ( p_a . _data . _mem ) ) ; \ <nl> + r_valid = false ; \ <nl> + return ; \ <nl> + } ; <nl> + <nl> + # define DEFAULT_OP_LOCALMEM ( m_prefix , m_op_name , m_name , m_op , m_type ) \ <nl> + CASE_TYPE ( m_prefix , m_op_name , m_name ) { \ <nl> + if ( p_b . type = = m_name ) \ <nl> + _RETURN ( * reinterpret_cast < const m_type * > ( p_a . _data . _mem ) m_op * reinterpret_cast < const m_type * > ( p_b . _data . _mem ) ) ; \ <nl> + r_valid = false ; \ <nl> + return ; \ <nl> + } ; <nl> <nl> - # define DEFAULT_OP_LOCALMEM_NEG ( m_name , m_type ) \ <nl> - case m_name : { \ <nl> + # define DEFAULT_OP_LOCALMEM_NEG ( m_prefix , m_op_name , m_name , m_type ) \ <nl> + CASE_TYPE ( m_prefix , m_op_name , m_name ) { \ <nl> _RETURN ( - * reinterpret_cast < const m_type * > ( p_a . _data . _mem ) ) ; \ <nl> } <nl> <nl> - # define DEFAULT_OP_LOCALMEM_POS ( m_name , m_type ) \ <nl> - case m_name : { \ <nl> - _RETURN ( * reinterpret_cast < const m_type * > ( p_a . _data . _mem ) ) ; \ <nl> + # define DEFAULT_OP_LOCALMEM_POS ( m_prefix , m_op_name , m_name , m_type ) \ <nl> + CASE_TYPE ( m_prefix , m_op_name , m_name ) { \ <nl> + _RETURN ( * reinterpret_cast < const m_type * > ( p_a . _data . _mem ) ) ; \ <nl> } <nl> <nl> - # define DEFAULT_OP_LOCALMEM_NUM ( m_op , m_name , m_type ) \ <nl> - case m_name : { \ <nl> + # define DEFAULT_OP_LOCALMEM_NUM ( m_prefix , m_op_name , m_name , m_op , m_type ) \ <nl> + CASE_TYPE ( m_prefix , m_op_name , m_name ) { \ <nl> switch ( p_b . 
type ) { \ <nl> case m_name : _RETURN ( * reinterpret_cast < const m_type * > ( p_a . _data . _mem ) m_op * reinterpret_cast < const m_type * > ( p_b . _data . _mem ) ) ; \ <nl> - case BOOL : _RETURN ( * reinterpret_cast < const m_type * > ( p_a . _data . _mem ) m_op p_b . _data . _bool ) ; \ <nl> case INT : _RETURN ( * reinterpret_cast < const m_type * > ( p_a . _data . _mem ) m_op p_b . _data . _int ) ; \ <nl> case REAL : _RETURN ( * reinterpret_cast < const m_type * > ( p_a . _data . _mem ) m_op p_b . _data . _real ) ; \ <nl> default : { } \ <nl> bool Variant : : booleanize ( bool & r_valid ) const { <nl> return ; \ <nl> } <nl> <nl> - # define DEFAULT_OP_PTRREF ( m_op , m_name , m_sub ) \ <nl> - case m_name : { \ <nl> - switch ( p_b . type ) { \ <nl> - case m_name : _RETURN ( * p_a . _data . m_sub m_op * p_b . _data . m_sub ) ; \ <nl> - default : { } \ <nl> - } \ <nl> - r_valid = false ; \ <nl> - return ; \ <nl> + # define DEFAULT_OP_PTRREF ( m_prefix , m_op_name , m_name , m_op , m_sub ) \ <nl> + CASE_TYPE ( m_prefix , m_op_name , m_name ) { \ <nl> + if ( p_b . type = = m_name ) \ <nl> + _RETURN ( * p_a . _data . m_sub m_op * p_b . _data . m_sub ) ; \ <nl> + r_valid = false ; \ <nl> + return ; \ <nl> } <nl> <nl> - # define DEFAULT_OP_ARRAY_EQ ( m_name , m_type ) \ <nl> - DEFAULT_OP_ARRAY_OP ( m_name , m_type , ! = , ! = , true , false , false ) <nl> - <nl> - # define DEFAULT_OP_ARRAY_LT ( m_name , m_type ) \ <nl> - DEFAULT_OP_ARRAY_OP ( m_name , m_type , < , ! = , false , a_len < array_b . size ( ) , true ) <nl> - <nl> - # define DEFAULT_OP_ARRAY_OP ( m_name , m_type , m_opa , m_opb , m_ret_def , m_ret_s , m_ret_f ) \ <nl> - case m_name : { \ <nl> - if ( p_a . type ! = p_b . type ) { \ <nl> - r_valid = false ; \ <nl> - return ; \ <nl> - } \ <nl> - const PoolVector < m_type > & array_a = * reinterpret_cast < const PoolVector < m_type > * > ( p_a . _data . 
_mem ) ; \ <nl> - const PoolVector < m_type > & array_b = * reinterpret_cast < const PoolVector < m_type > * > ( p_b . _data . _mem ) ; \ <nl> - \ <nl> - int a_len = array_a . size ( ) ; \ <nl> - if ( a_len m_opa array_b . size ( ) ) { \ <nl> - _RETURN ( m_ret_s ) ; \ <nl> - } else { \ <nl> - \ <nl> - PoolVector < m_type > : : Read ra = array_a . read ( ) ; \ <nl> - PoolVector < m_type > : : Read rb = array_b . read ( ) ; \ <nl> - \ <nl> - for ( int i = 0 ; i < a_len ; i + + ) { \ <nl> - if ( ra [ i ] m_opb rb [ i ] ) \ <nl> - _RETURN ( m_ret_f ) ; \ <nl> - } \ <nl> - \ <nl> - _RETURN ( m_ret_def ) ; \ <nl> - } \ <nl> + # define DEFAULT_OP_ARRAY_EQ ( m_prefix , m_op_name , m_name , m_type ) \ <nl> + DEFAULT_OP_ARRAY_OP ( m_prefix , m_op_name , m_name , m_type , ! = , ! = , true , false , false ) <nl> + <nl> + # define DEFAULT_OP_ARRAY_LT ( m_prefix , m_op_name , m_name , m_type ) \ <nl> + DEFAULT_OP_ARRAY_OP ( m_prefix , m_op_name , m_name , m_type , < , ! = , false , a_len < array_b . size ( ) , true ) <nl> + <nl> + # define DEFAULT_OP_ARRAY_GT ( m_prefix , m_op_name , m_name , m_type ) \ <nl> + DEFAULT_OP_ARRAY_OP ( m_prefix , m_op_name , m_name , m_type , > , ! = , false , a_len < array_b . size ( ) , true ) <nl> + <nl> + # define DEFAULT_OP_ARRAY_OP ( m_prefix , m_op_name , m_name , m_type , m_opa , m_opb , m_ret_def , m_ret_s , m_ret_f ) \ <nl> + CASE_TYPE ( m_prefix , m_op_name , m_name ) { \ <nl> + if ( p_a . type ! = p_b . type ) { \ <nl> + r_valid = false ; \ <nl> + return ; \ <nl> + } \ <nl> + const PoolVector < m_type > & array_a = * reinterpret_cast < const PoolVector < m_type > * > ( p_a . _data . _mem ) ; \ <nl> + const PoolVector < m_type > & array_b = * reinterpret_cast < const PoolVector < m_type > * > ( p_b . _data . _mem ) ; \ <nl> + \ <nl> + int a_len = array_a . size ( ) ; \ <nl> + if ( a_len m_opa array_b . size ( ) ) { \ <nl> + _RETURN ( m_ret_s ) ; \ <nl> + } else { \ <nl> + \ <nl> + PoolVector < m_type > : : Read ra = array_a . 
read ( ) ; \ <nl> + PoolVector < m_type > : : Read rb = array_b . read ( ) ; \ <nl> + \ <nl> + for ( int i = 0 ; i < a_len ; i + + ) { \ <nl> + if ( ra [ i ] m_opb rb [ i ] ) \ <nl> + _RETURN ( m_ret_f ) ; \ <nl> + } \ <nl> + \ <nl> + _RETURN ( m_ret_def ) ; \ <nl> + } \ <nl> } <nl> <nl> - # define DEFAULT_OP_ARRAY_ADD ( m_name , m_type ) \ <nl> - case m_name : { \ <nl> + # define DEFAULT_OP_ARRAY_ADD ( m_prefix , m_op_name , m_name , m_type ) \ <nl> + CASE_TYPE ( m_prefix , m_op_name , m_name ) { \ <nl> if ( p_a . type ! = p_b . type ) { \ <nl> r_valid = false ; \ <nl> _RETURN ( NIL ) ; \ <nl> bool Variant : : booleanize ( bool & r_valid ) const { <nl> return ; \ <nl> } <nl> <nl> - void Variant : : evaluate ( const Operator & p_op , const Variant & p_a , const Variant & p_b , Variant & r_ret , bool & r_valid ) { <nl> + void Variant : : evaluate ( const Operator & p_op , const Variant & p_a , <nl> + const Variant & p_b , Variant & r_ret , bool & r_valid ) { <nl> <nl> + CASES ( math ) ; <nl> r_valid = true ; <nl> <nl> - switch ( p_op ) { <nl> - <nl> - case OP_EQUAL : { <nl> - <nl> - if ( ( int ( p_a . type ) * int ( p_b . type ) ) = = 0 ) { <nl> - / / null case is an exception , one of both is null <nl> - if ( p_a . type = = p_b . type ) / / null against null is true <nl> - _RETURN ( true ) ; <nl> - / / only against object is allowed <nl> - if ( p_a . type = = Variant : : OBJECT ) { <nl> - _RETURN ( p_a . _get_obj ( ) . obj = = NULL ) ; <nl> - } else if ( p_b . type = = Variant : : OBJECT ) { <nl> + SWITCH ( math , p_op , p_a . type ) { <nl> + SWITCH_OP ( math , OP_EQUAL , p_a . type ) { <nl> + CASE_TYPE ( math , OP_EQUAL , NIL ) { <nl> + if ( p_b . type = = NIL ) _RETURN ( true ) ; <nl> + if ( p_b . type = = OBJECT ) <nl> _RETURN ( p_b . _get_obj ( ) . obj = = NULL ) ; <nl> - } <nl> - / / otherwise , always false <nl> _RETURN ( false ) ; <nl> } <nl> <nl> - switch ( p_a . type ) { <nl> + CASE_TYPE ( math , OP_EQUAL , BOOL ) { <nl> + if ( p_b . type ! 
= BOOL ) _RETURN ( false ) ; <nl> + _RETURN ( p_a . _data . _bool = = p_b . _data . _bool ) ; <nl> + } <nl> <nl> - case NIL : { <nl> + CASE_TYPE ( math , OP_EQUAL , OBJECT ) { <nl> + if ( p_b . type = = OBJECT ) <nl> + _RETURN ( ( p_a . _get_obj ( ) . obj = = p_b . _get_obj ( ) . obj ) ) ; <nl> + if ( p_b . type = = NIL ) <nl> + _RETURN ( p_a . _get_obj ( ) . obj = = NULL ) ; <nl> + } <nl> <nl> - _RETURN ( p_b . type = = NIL | | ( p_b . type = = Variant : : OBJECT & & ! p_b . _get_obj ( ) . obj ) ) ; <nl> - } break ; <nl> + CASE_TYPE ( math , OP_EQUAL , DICTIONARY ) { <nl> + if ( p_b . type ! = DICTIONARY ) <nl> + _RETURN ( false ) ; <nl> <nl> - DEFAULT_OP_NUM ( = = , BOOL , _bool ) ; <nl> - DEFAULT_OP_NUM ( = = , INT , _int ) ; <nl> - DEFAULT_OP_NUM ( = = , REAL , _real ) ; <nl> - DEFAULT_OP_STR ( = = , STRING , String ) ; <nl> - DEFAULT_OP_LOCALMEM ( = = , VECTOR2 , Vector2 ) ; <nl> - DEFAULT_OP_LOCALMEM ( = = , RECT2 , Rect2 ) ; <nl> - DEFAULT_OP_PTRREF ( = = , TRANSFORM2D , _transform2d ) ; <nl> - DEFAULT_OP_LOCALMEM ( = = , VECTOR3 , Vector3 ) ; <nl> - DEFAULT_OP_LOCALMEM ( = = , PLANE , Plane ) ; <nl> - DEFAULT_OP_LOCALMEM ( = = , QUAT , Quat ) ; <nl> - DEFAULT_OP_PTRREF ( = = , RECT3 , _rect3 ) ; <nl> - DEFAULT_OP_PTRREF ( = = , BASIS , _basis ) ; <nl> - DEFAULT_OP_PTRREF ( = = , TRANSFORM , _transform ) ; <nl> + const Dictionary * arr_a = reinterpret_cast < const Dictionary * > ( p_a . _data . _mem ) ; <nl> + const Dictionary * arr_b = reinterpret_cast < const Dictionary * > ( p_b . _data . _mem ) ; <nl> <nl> - DEFAULT_OP_LOCALMEM ( = = , COLOR , Color ) ; <nl> - DEFAULT_OP_STR ( = = , NODE_PATH , NodePath ) ; <nl> - DEFAULT_OP_LOCALMEM ( = = , _RID , RID ) ; <nl> - case OBJECT : { <nl> + _RETURN ( * arr_a = = * arr_b ) ; <nl> + } <nl> <nl> - if ( p_b . type = = OBJECT ) <nl> - _RETURN ( ( p_a . _get_obj ( ) . obj = = p_b . _get_obj ( ) . obj ) ) ; <nl> - if ( p_b . type = = NIL ) <nl> - _RETURN ( ! p_a . _get_obj ( ) . 
obj ) ; <nl> - } break ; <nl> + CASE_TYPE ( math , OP_EQUAL , ARRAY ) { <nl> + if ( p_b . type ! = ARRAY ) <nl> + _RETURN ( false ) ; <nl> <nl> - case DICTIONARY : { <nl> + const Array * arr_a = reinterpret_cast < const Array * > ( p_a . _data . _mem ) ; <nl> + const Array * arr_b = reinterpret_cast < const Array * > ( p_b . _data . _mem ) ; <nl> <nl> - if ( p_b . type ! = DICTIONARY ) <nl> + int l = arr_a - > size ( ) ; <nl> + if ( arr_b - > size ( ) ! = l ) <nl> + _RETURN ( false ) ; <nl> + for ( int i = 0 ; i < l ; i + + ) { <nl> + if ( ! ( ( * arr_a ) [ i ] = = ( * arr_b ) [ i ] ) ) { <nl> _RETURN ( false ) ; <nl> + } <nl> + } <nl> <nl> - const Dictionary * arr_a = reinterpret_cast < const Dictionary * > ( p_a . _data . _mem ) ; <nl> - const Dictionary * arr_b = reinterpret_cast < const Dictionary * > ( p_b . _data . _mem ) ; <nl> + _RETURN ( true ) ; <nl> + } <nl> + <nl> + DEFAULT_OP_NUM ( math , OP_EQUAL , INT , = = , _int ) ; <nl> + DEFAULT_OP_NUM ( math , OP_EQUAL , REAL , = = , _real ) ; <nl> + DEFAULT_OP_STR ( math , OP_EQUAL , STRING , = = , String ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_EQUAL , VECTOR2 , = = , Vector2 ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_EQUAL , RECT2 , = = , Rect2 ) ; <nl> + DEFAULT_OP_PTRREF ( math , OP_EQUAL , TRANSFORM2D , = = , _transform2d ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_EQUAL , VECTOR3 , = = , Vector3 ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_EQUAL , PLANE , = = , Plane ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_EQUAL , QUAT , = = , Quat ) ; <nl> + DEFAULT_OP_PTRREF ( math , OP_EQUAL , RECT3 , = = , _rect3 ) ; <nl> + DEFAULT_OP_PTRREF ( math , OP_EQUAL , BASIS , = = , _basis ) ; <nl> + DEFAULT_OP_PTRREF ( math , OP_EQUAL , TRANSFORM , = = , _transform ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_EQUAL , COLOR , = = , Color ) ; <nl> + DEFAULT_OP_STR ( math , OP_EQUAL , NODE_PATH , = = , NodePath ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_EQUAL , _RID , = = , RID ) ; <nl> + <nl> + DEFAULT_OP_ARRAY_EQ ( math , 
OP_EQUAL , POOL_BYTE_ARRAY , uint8_t ) ; <nl> + DEFAULT_OP_ARRAY_EQ ( math , OP_EQUAL , POOL_INT_ARRAY , int ) ; <nl> + DEFAULT_OP_ARRAY_EQ ( math , OP_EQUAL , POOL_REAL_ARRAY , real_t ) ; <nl> + DEFAULT_OP_ARRAY_EQ ( math , OP_EQUAL , POOL_STRING_ARRAY , String ) ; <nl> + DEFAULT_OP_ARRAY_EQ ( math , OP_EQUAL , POOL_VECTOR2_ARRAY , Vector2 ) ; <nl> + DEFAULT_OP_ARRAY_EQ ( math , OP_EQUAL , POOL_VECTOR3_ARRAY , Vector3 ) ; <nl> + DEFAULT_OP_ARRAY_EQ ( math , OP_EQUAL , POOL_COLOR_ARRAY , Color ) ; <nl> + } <nl> <nl> - _RETURN ( * arr_a = = * arr_b ) ; <nl> + SWITCH_OP ( math , OP_NOT_EQUAL , p_a . type ) { <nl> + CASE_TYPE ( math , OP_NOT_EQUAL , NIL ) { <nl> + if ( p_b . type = = NIL ) _RETURN ( false ) ; <nl> + if ( p_b . type = = OBJECT ) <nl> + _RETURN ( p_b . _get_obj ( ) . obj ! = NULL ) ; <nl> + _RETURN ( true ) ; <nl> + } <nl> <nl> - } break ; <nl> - case ARRAY : { <nl> + CASE_TYPE ( math , OP_NOT_EQUAL , BOOL ) { <nl> + if ( p_b . type ! = BOOL ) _RETURN ( true ) ; <nl> + _RETURN ( p_a . _data . _bool ! = p_b . _data . _bool ) ; <nl> + } <nl> <nl> - if ( p_b . type ! = ARRAY ) <nl> - _RETURN ( false ) ; <nl> + CASE_TYPE ( math , OP_NOT_EQUAL , OBJECT ) { <nl> + if ( p_b . type = = OBJECT ) <nl> + _RETURN ( ( p_a . _get_obj ( ) . obj ! = p_b . _get_obj ( ) . obj ) ) ; <nl> + if ( p_b . type = = NIL ) <nl> + _RETURN ( p_a . _get_obj ( ) . obj ! = NULL ) ; <nl> + } <nl> + <nl> + CASE_TYPE ( math , OP_NOT_EQUAL , DICTIONARY ) { <nl> + if ( p_b . type ! = DICTIONARY ) <nl> + _RETURN ( true ) ; <nl> + <nl> + const Dictionary * arr_a = reinterpret_cast < const Dictionary * > ( p_a . _data . _mem ) ; <nl> + const Dictionary * arr_b = reinterpret_cast < const Dictionary * > ( p_b . _data . _mem ) ; <nl> + <nl> + _RETURN ( ( * arr_a = = * arr_b ) = = false ) ; <nl> + } <nl> + <nl> + CASE_TYPE ( math , OP_NOT_EQUAL , ARRAY ) { <nl> + if ( p_b . type ! = ARRAY ) <nl> + _RETURN ( true ) ; <nl> <nl> - const Array * arr_a = reinterpret_cast < const Array * > ( p_a . 
_data . _mem ) ; <nl> - const Array * arr_b = reinterpret_cast < const Array * > ( p_b . _data . _mem ) ; <nl> + const Array * arr_a = reinterpret_cast < const Array * > ( p_a . _data . _mem ) ; <nl> + const Array * arr_b = reinterpret_cast < const Array * > ( p_b . _data . _mem ) ; <nl> <nl> - int l = arr_a - > size ( ) ; <nl> - if ( arr_b - > size ( ) ! = l ) <nl> + int l = arr_a - > size ( ) ; <nl> + if ( arr_b - > size ( ) ! = l ) <nl> + _RETURN ( true ) ; <nl> + for ( int i = 0 ; i < l ; i + + ) { <nl> + if ( ( ( * arr_a ) [ i ] = = ( * arr_b ) [ i ] ) ) { <nl> _RETURN ( false ) ; <nl> - for ( int i = 0 ; i < l ; i + + ) { <nl> - if ( ! ( ( * arr_a ) [ i ] = = ( * arr_b ) [ i ] ) ) { <nl> - _RETURN ( false ) ; <nl> - } <nl> } <nl> + } <nl> <nl> - _RETURN ( true ) ; <nl> + _RETURN ( true ) ; <nl> + } <nl> + <nl> + DEFAULT_OP_NUM ( math , OP_NOT_EQUAL , INT , ! = , _int ) ; <nl> + DEFAULT_OP_NUM ( math , OP_NOT_EQUAL , REAL , ! = , _real ) ; <nl> + DEFAULT_OP_STR ( math , OP_NOT_EQUAL , STRING , ! = , String ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_NOT_EQUAL , VECTOR2 , ! = , Vector2 ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_NOT_EQUAL , RECT2 , ! = , Rect2 ) ; <nl> + DEFAULT_OP_PTRREF ( math , OP_NOT_EQUAL , TRANSFORM2D , ! = , _transform2d ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_NOT_EQUAL , VECTOR3 , ! = , Vector3 ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_NOT_EQUAL , PLANE , ! = , Plane ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_NOT_EQUAL , QUAT , ! = , Quat ) ; <nl> + DEFAULT_OP_PTRREF ( math , OP_NOT_EQUAL , RECT3 , ! = , _rect3 ) ; <nl> + DEFAULT_OP_PTRREF ( math , OP_NOT_EQUAL , BASIS , ! = , _basis ) ; <nl> + DEFAULT_OP_PTRREF ( math , OP_NOT_EQUAL , TRANSFORM , ! = , _transform ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_NOT_EQUAL , COLOR , ! = , Color ) ; <nl> + DEFAULT_OP_STR ( math , OP_NOT_EQUAL , NODE_PATH , ! = , NodePath ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_NOT_EQUAL , _RID , ! 
= , RID ) ; <nl> + <nl> + CASE_TYPE ( math , OP_NOT_EQUAL , POOL_BYTE_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_NOT_EQUAL , POOL_INT_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_NOT_EQUAL , POOL_REAL_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_NOT_EQUAL , POOL_STRING_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_NOT_EQUAL , POOL_VECTOR2_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_NOT_EQUAL , POOL_VECTOR3_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_NOT_EQUAL , POOL_COLOR_ARRAY ) ; <nl> + _RETURN_FAIL ; <nl> + } <nl> <nl> - } break ; <nl> + SWITCH_OP ( math , OP_LESS , p_a . type ) { <nl> + CASE_TYPE ( math , OP_LESS , BOOL ) { <nl> + if ( p_b . type ! = BOOL ) <nl> + _RETURN_FAIL ; <nl> <nl> - DEFAULT_OP_ARRAY_EQ ( POOL_BYTE_ARRAY , uint8_t ) ; <nl> - DEFAULT_OP_ARRAY_EQ ( POOL_INT_ARRAY , int ) ; <nl> - DEFAULT_OP_ARRAY_EQ ( POOL_REAL_ARRAY , real_t ) ; <nl> - DEFAULT_OP_ARRAY_EQ ( POOL_STRING_ARRAY , String ) ; <nl> - DEFAULT_OP_ARRAY_EQ ( POOL_VECTOR2_ARRAY , Vector3 ) ; <nl> - DEFAULT_OP_ARRAY_EQ ( POOL_VECTOR3_ARRAY , Vector3 ) ; <nl> - DEFAULT_OP_ARRAY_EQ ( POOL_COLOR_ARRAY , Color ) ; <nl> + if ( p_a . _data . _bool = = p_b . _data . _bool ) <nl> + _RETURN ( false ) ; <nl> <nl> - case VARIANT_MAX : { <nl> - r_valid = false ; <nl> - return ; <nl> + if ( p_a . _data . _bool & & ! p_b . _data . _bool ) <nl> + _RETURN ( false ) ; <nl> <nl> - } break ; <nl> + _RETURN ( true ) ; <nl> } <nl> - } break ; <nl> - case OP_NOT_EQUAL : { <nl> - Variant res ; <nl> - evaluate ( OP_EQUAL , p_a , p_b , res , r_valid ) ; <nl> - if ( ! r_valid ) <nl> - return ; <nl> - if ( res . type = = BOOL ) <nl> - res . _data . _bool = ! res . _data . _bool ; <nl> - _RETURN ( res ) ; <nl> - <nl> - } break ; <nl> - case OP_LESS : { <nl> - <nl> - switch ( p_a . 
type ) { <nl> - <nl> - DEFAULT_OP_FAIL ( NIL ) ; <nl> - DEFAULT_OP_NUM ( < , BOOL , _bool ) ; <nl> - DEFAULT_OP_NUM ( < , INT , _int ) ; <nl> - DEFAULT_OP_NUM ( < , REAL , _real ) ; <nl> - DEFAULT_OP_STR ( < , STRING , String ) ; <nl> - DEFAULT_OP_LOCALMEM ( < , VECTOR2 , Vector2 ) ; <nl> - DEFAULT_OP_FAIL ( RECT2 ) ; <nl> - DEFAULT_OP_FAIL ( TRANSFORM2D ) ; <nl> - DEFAULT_OP_LOCALMEM ( < , VECTOR3 , Vector3 ) ; <nl> - DEFAULT_OP_FAIL ( PLANE ) ; <nl> - DEFAULT_OP_FAIL ( QUAT ) ; <nl> - DEFAULT_OP_FAIL ( RECT3 ) ; <nl> - DEFAULT_OP_FAIL ( BASIS ) ; <nl> - DEFAULT_OP_FAIL ( TRANSFORM ) ; <nl> - <nl> - DEFAULT_OP_FAIL ( COLOR ) ; <nl> - <nl> - DEFAULT_OP_FAIL ( NODE_PATH ) ; <nl> - DEFAULT_OP_LOCALMEM ( < , _RID , RID ) ; <nl> - case OBJECT : { <nl> - <nl> - if ( p_b . type = = OBJECT ) <nl> - _RETURN ( ( p_a . _get_obj ( ) . obj < p_b . _get_obj ( ) . obj ) ) ; <nl> - } break ; <nl> - DEFAULT_OP_FAIL ( DICTIONARY ) ; <nl> - case ARRAY : { <nl> - <nl> - if ( p_b . type ! = ARRAY ) <nl> - _RETURN ( false ) ; <nl> - <nl> - const Array * arr_a = reinterpret_cast < const Array * > ( p_a . _data . _mem ) ; <nl> - const Array * arr_b = reinterpret_cast < const Array * > ( p_b . _data . _mem ) ; <nl> <nl> - int l = arr_a - > size ( ) ; <nl> - if ( arr_b - > size ( ) < l ) <nl> - _RETURN ( false ) ; <nl> - for ( int i = 0 ; i < l ; i + + ) { <nl> - if ( ! ( ( * arr_a ) [ i ] < ( * arr_b ) [ i ] ) ) { <nl> - _RETURN ( true ) ; <nl> - } <nl> - } <nl> + CASE_TYPE ( math , OP_LESS , OBJECT ) { <nl> + if ( p_b . type = = OBJECT ) <nl> + _RETURN ( ( p_a . _get_obj ( ) . obj < p_b . _get_obj ( ) . obj ) ) ; <nl> + } <nl> <nl> + CASE_TYPE ( math , OP_LESS , ARRAY ) { <nl> + if ( p_b . type ! 
= ARRAY ) <nl> _RETURN ( false ) ; <nl> <nl> - } break ; <nl> - DEFAULT_OP_ARRAY_LT ( POOL_BYTE_ARRAY , uint8_t ) ; <nl> - DEFAULT_OP_ARRAY_LT ( POOL_INT_ARRAY , int ) ; <nl> - DEFAULT_OP_ARRAY_LT ( POOL_REAL_ARRAY , real_t ) ; <nl> - DEFAULT_OP_ARRAY_LT ( POOL_STRING_ARRAY , String ) ; <nl> - DEFAULT_OP_ARRAY_LT ( POOL_VECTOR2_ARRAY , Vector3 ) ; <nl> - DEFAULT_OP_ARRAY_LT ( POOL_VECTOR3_ARRAY , Vector3 ) ; <nl> - DEFAULT_OP_ARRAY_LT ( POOL_COLOR_ARRAY , Color ) ; <nl> - case VARIANT_MAX : { <nl> - r_valid = false ; <nl> - return ; <nl> + const Array * arr_a = reinterpret_cast < const Array * > ( p_a . _data . _mem ) ; <nl> + const Array * arr_b = reinterpret_cast < const Array * > ( p_b . _data . _mem ) ; <nl> <nl> - } break ; <nl> - } <nl> - <nl> - } break ; <nl> - case OP_LESS_EQUAL : { <nl> - <nl> - switch ( p_a . type ) { <nl> - <nl> - DEFAULT_OP_FAIL ( NIL ) ; <nl> - DEFAULT_OP_NUM ( < = , BOOL , _bool ) ; <nl> - DEFAULT_OP_NUM ( < = , INT , _int ) ; <nl> - DEFAULT_OP_NUM ( < = , REAL , _real ) ; <nl> - DEFAULT_OP_STR ( < = , STRING , String ) ; <nl> - DEFAULT_OP_LOCALMEM ( < = , VECTOR2 , Vector2 ) ; <nl> - DEFAULT_OP_FAIL ( RECT2 ) ; <nl> - DEFAULT_OP_FAIL ( TRANSFORM2D ) ; <nl> - DEFAULT_OP_LOCALMEM ( < = , VECTOR3 , Vector3 ) ; <nl> - DEFAULT_OP_FAIL ( PLANE ) ; <nl> - DEFAULT_OP_FAIL ( QUAT ) ; <nl> - DEFAULT_OP_FAIL ( RECT3 ) ; <nl> - DEFAULT_OP_FAIL ( BASIS ) ; <nl> - DEFAULT_OP_FAIL ( TRANSFORM ) ; <nl> - <nl> - DEFAULT_OP_FAIL ( COLOR ) ; <nl> - <nl> - DEFAULT_OP_FAIL ( NODE_PATH ) ; <nl> - DEFAULT_OP_LOCALMEM ( < = , _RID , RID ) ; <nl> - case OBJECT : { <nl> - <nl> - if ( p_b . type = = OBJECT ) <nl> - _RETURN ( ( p_a . _get_obj ( ) . obj < = p_b . _get_obj ( ) . 
obj ) ) ; <nl> - } break ; <nl> - DEFAULT_OP_FAIL ( DICTIONARY ) ; <nl> - DEFAULT_OP_FAIL ( ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_BYTE_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_INT_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_REAL_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_STRING_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_VECTOR2_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_VECTOR3_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_COLOR_ARRAY ) ; <nl> - case VARIANT_MAX : { <nl> - r_valid = false ; <nl> - return ; <nl> + int l = arr_a - > size ( ) ; <nl> + if ( arr_b - > size ( ) < l ) <nl> + _RETURN ( false ) ; <nl> + for ( int i = 0 ; i < l ; i + + ) { <nl> + if ( ! ( ( * arr_a ) [ i ] < ( * arr_b ) [ i ] ) ) { <nl> + _RETURN ( true ) ; <nl> + } <nl> + } <nl> <nl> - } break ; <nl> + _RETURN ( false ) ; <nl> } <nl> <nl> - } break ; <nl> - case OP_GREATER : { <nl> + DEFAULT_OP_NUM ( math , OP_LESS , INT , < , _int ) ; <nl> + DEFAULT_OP_NUM ( math , OP_LESS , REAL , < , _real ) ; <nl> + DEFAULT_OP_STR ( math , OP_LESS , STRING , < , String ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_LESS , VECTOR2 , < , Vector2 ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_LESS , VECTOR3 , < , Vector3 ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_LESS , _RID , < , RID ) ; <nl> + DEFAULT_OP_ARRAY_LT ( math , OP_LESS , POOL_BYTE_ARRAY , uint8_t ) ; <nl> + DEFAULT_OP_ARRAY_LT ( math , OP_LESS , POOL_INT_ARRAY , int ) ; <nl> + DEFAULT_OP_ARRAY_LT ( math , OP_LESS , POOL_REAL_ARRAY , real_t ) ; <nl> + DEFAULT_OP_ARRAY_LT ( math , OP_LESS , POOL_STRING_ARRAY , String ) ; <nl> + DEFAULT_OP_ARRAY_LT ( math , OP_LESS , POOL_VECTOR2_ARRAY , Vector2 ) ; <nl> + DEFAULT_OP_ARRAY_LT ( math , OP_LESS , POOL_VECTOR3_ARRAY , Vector3 ) ; <nl> + DEFAULT_OP_ARRAY_LT ( math , OP_LESS , POOL_COLOR_ARRAY , Color ) ; <nl> + <nl> + CASE_TYPE ( math , OP_LESS , NIL ) <nl> + CASE_TYPE ( math , OP_LESS , RECT2 ) <nl> + CASE_TYPE ( math , OP_LESS , TRANSFORM2D ) <nl> + CASE_TYPE ( math , OP_LESS , PLANE ) <nl> + CASE_TYPE ( math , OP_LESS ,
QUAT ) <nl> + CASE_TYPE ( math , OP_LESS , RECT3 ) <nl> + CASE_TYPE ( math , OP_LESS , BASIS ) <nl> + CASE_TYPE ( math , OP_LESS , TRANSFORM ) <nl> + CASE_TYPE ( math , OP_LESS , COLOR ) <nl> + CASE_TYPE ( math , OP_LESS , NODE_PATH ) <nl> + CASE_TYPE ( math , OP_LESS , DICTIONARY ) <nl> + _RETURN_FAIL ; <nl> + } <nl> <nl> - Variant res ; <nl> - evaluate ( OP_LESS , p_b , p_a , res , r_valid ) ; <nl> - if ( ! r_valid ) <nl> - return ; <nl> - _RETURN ( res ) ; <nl> + SWITCH_OP ( math , OP_LESS_EQUAL , p_a . type ) { <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , OBJECT ) { <nl> + if ( p_b . type = = OBJECT ) <nl> + _RETURN ( ( p_a . _get_obj ( ) . obj < = p_b . _get_obj ( ) . obj ) ) ; <nl> + } <nl> + <nl> + DEFAULT_OP_NUM ( math , OP_LESS_EQUAL , INT , < = , _int ) ; <nl> + DEFAULT_OP_NUM ( math , OP_LESS_EQUAL , REAL , < = , _real ) ; <nl> + DEFAULT_OP_STR ( math , OP_LESS_EQUAL , STRING , < = , String ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_LESS_EQUAL , VECTOR2 , < = , Vector2 ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_LESS_EQUAL , VECTOR3 , < = , Vector3 ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_LESS_EQUAL , _RID , < = , RID ) ; <nl> + <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , NIL ) <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , BOOL ) <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , RECT2 ) <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , TRANSFORM2D ) <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , PLANE ) <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , QUAT ) <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , RECT3 ) <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , BASIS ) <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , TRANSFORM ) <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , COLOR ) <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , NODE_PATH ) <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , DICTIONARY ) <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , ARRAY ) <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , POOL_BYTE_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , POOL_INT_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , 
POOL_REAL_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , POOL_STRING_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , POOL_VECTOR2_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , POOL_VECTOR3_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_LESS_EQUAL , POOL_COLOR_ARRAY ) ; <nl> + _RETURN_FAIL ; <nl> + } <nl> <nl> - } break ; <nl> - case OP_GREATER_EQUAL : { <nl> + SWITCH_OP ( math , OP_GREATER , p_a . type ) { <nl> + CASE_TYPE ( math , OP_GREATER , BOOL ) { <nl> + if ( p_b . type ! = BOOL ) <nl> + _RETURN_FAIL ; <nl> <nl> - Variant res ; <nl> - evaluate ( OP_LESS_EQUAL , p_b , p_a , res , r_valid ) ; <nl> - if ( ! r_valid ) <nl> - return ; <nl> - _RETURN ( res ) ; <nl> - } break ; <nl> - / / mathematic <nl> - case OP_ADD : { <nl> - switch ( p_a . type ) { <nl> - <nl> - DEFAULT_OP_FAIL ( NIL ) ; <nl> - DEFAULT_OP_NUM ( + , BOOL , _bool ) ; <nl> - DEFAULT_OP_NUM ( + , INT , _int ) ; <nl> - DEFAULT_OP_NUM ( + , REAL , _real ) ; <nl> - DEFAULT_OP_STR ( + , STRING , String ) ; <nl> - DEFAULT_OP_LOCALMEM ( + , VECTOR2 , Vector2 ) ; <nl> - DEFAULT_OP_FAIL ( RECT2 ) ; <nl> - DEFAULT_OP_FAIL ( TRANSFORM2D ) ; <nl> - DEFAULT_OP_LOCALMEM ( + , VECTOR3 , Vector3 ) ; <nl> - DEFAULT_OP_FAIL ( PLANE ) ; <nl> - DEFAULT_OP_LOCALMEM ( + , QUAT , Quat ) ; <nl> - DEFAULT_OP_FAIL ( RECT3 ) ; <nl> - DEFAULT_OP_FAIL ( BASIS ) ; <nl> - DEFAULT_OP_FAIL ( TRANSFORM ) ; <nl> - <nl> - DEFAULT_OP_LOCALMEM ( + , COLOR , Color ) ; <nl> - <nl> - DEFAULT_OP_FAIL ( NODE_PATH ) ; <nl> - DEFAULT_OP_FAIL ( _RID ) ; <nl> - DEFAULT_OP_FAIL ( OBJECT ) ; <nl> - DEFAULT_OP_FAIL ( DICTIONARY ) ; <nl> - <nl> - case ARRAY : { <nl> - if ( p_a . type ! = p_b . type ) { <nl> - r_valid = false ; <nl> - return ; <nl> - } <nl> - const Array & array_a = * reinterpret_cast < const Array * > ( p_a . _data . _mem ) ; <nl> - const Array & array_b = * reinterpret_cast < const Array * > ( p_b . _data . _mem ) ; <nl> - Array sum ; <nl> - int asize = array_a . size ( ) ; <nl> - int bsize = array_b . 
size ( ) ; <nl> - sum . resize ( asize + bsize ) ; <nl> - for ( int i = 0 ; i < asize ; i + + ) <nl> - sum [ i ] = array_a [ i ] ; <nl> - for ( int i = 0 ; i < bsize ; i + + ) <nl> - sum [ i + asize ] = array_b [ i ] ; <nl> - _RETURN ( sum ) ; <nl> - } <nl> - DEFAULT_OP_ARRAY_ADD ( POOL_BYTE_ARRAY , uint8_t ) ; <nl> - DEFAULT_OP_ARRAY_ADD ( POOL_INT_ARRAY , int ) ; <nl> - DEFAULT_OP_ARRAY_ADD ( POOL_REAL_ARRAY , real_t ) ; <nl> - DEFAULT_OP_ARRAY_ADD ( POOL_STRING_ARRAY , String ) ; <nl> - DEFAULT_OP_ARRAY_ADD ( POOL_VECTOR2_ARRAY , Vector2 ) ; <nl> - DEFAULT_OP_ARRAY_ADD ( POOL_VECTOR3_ARRAY , Vector3 ) ; <nl> - DEFAULT_OP_ARRAY_ADD ( POOL_COLOR_ARRAY , Color ) ; <nl> - case VARIANT_MAX : { <nl> - r_valid = false ; <nl> - return ; <nl> + if ( p_a . _data . _bool = = p_b . _data . _bool ) <nl> + _RETURN ( false ) ; <nl> <nl> - } break ; <nl> - } <nl> - } break ; <nl> - case OP_SUBSTRACT : { <nl> - switch ( p_a . type ) { <nl> - <nl> - DEFAULT_OP_FAIL ( NIL ) ; <nl> - DEFAULT_OP_NUM ( - , BOOL , _bool ) ; <nl> - DEFAULT_OP_NUM ( - , INT , _int ) ; <nl> - DEFAULT_OP_NUM ( - , REAL , _real ) ; <nl> - DEFAULT_OP_FAIL ( STRING ) ; <nl> - DEFAULT_OP_LOCALMEM ( - , VECTOR2 , Vector2 ) ; <nl> - DEFAULT_OP_FAIL ( RECT2 ) ; <nl> - DEFAULT_OP_FAIL ( TRANSFORM2D ) ; <nl> - DEFAULT_OP_LOCALMEM ( - , VECTOR3 , Vector3 ) ; <nl> - DEFAULT_OP_FAIL ( PLANE ) ; <nl> - DEFAULT_OP_LOCALMEM ( - , QUAT , Quat ) ; <nl> - DEFAULT_OP_FAIL ( RECT3 ) ; <nl> - DEFAULT_OP_FAIL ( BASIS ) ; <nl> - DEFAULT_OP_FAIL ( TRANSFORM ) ; <nl> - <nl> - DEFAULT_OP_LOCALMEM ( - , COLOR , Color ) ; <nl> - <nl> - DEFAULT_OP_FAIL ( NODE_PATH ) ; <nl> - DEFAULT_OP_FAIL ( _RID ) ; <nl> - DEFAULT_OP_FAIL ( OBJECT ) ; <nl> - DEFAULT_OP_FAIL ( DICTIONARY ) ; <nl> - DEFAULT_OP_FAIL ( ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_BYTE_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_INT_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_REAL_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_STRING_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_VECTOR2_ARRAY ) 
; <nl> - DEFAULT_OP_FAIL ( POOL_VECTOR3_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_COLOR_ARRAY ) ; <nl> - case VARIANT_MAX : { <nl> - r_valid = false ; <nl> - return ; <nl> + if ( ! p_a . _data . _bool & & p_b . _data . _bool ) <nl> + _RETURN ( false ) ; <nl> <nl> - } break ; <nl> + _RETURN ( true ) ; <nl> } <nl> - } break ; <nl> - case OP_MULTIPLY : { <nl> - switch ( p_a . type ) { <nl> <nl> - DEFAULT_OP_FAIL ( NIL ) ; <nl> - DEFAULT_OP_NUM ( * , BOOL , _bool ) ; <nl> - DEFAULT_OP_NUM_VEC ( * , INT , _int ) ; <nl> - DEFAULT_OP_NUM_VEC ( * , REAL , _real ) ; <nl> - DEFAULT_OP_FAIL ( STRING ) ; <nl> - DEFAULT_OP_LOCALMEM_NUM ( * , VECTOR2 , Vector2 ) ; <nl> - DEFAULT_OP_FAIL ( RECT2 ) ; <nl> - case TRANSFORM2D : { <nl> + CASE_TYPE ( math , OP_GREATER , OBJECT ) { <nl> + if ( p_b . type = = OBJECT ) <nl> + _RETURN ( ( p_a . _get_obj ( ) . obj > p_b . _get_obj ( ) . obj ) ) ; <nl> + } <nl> <nl> - if ( p_b . type = = TRANSFORM2D ) { <nl> - _RETURN ( * p_a . _data . _transform2d * * p_b . _data . _transform2d ) ; <nl> - } ; <nl> - if ( p_b . type = = VECTOR2 ) { <nl> - _RETURN ( p_a . _data . _transform2d - > xform ( * ( const Vector2 * ) p_b . _data . _mem ) ) ; <nl> - } ; <nl> - r_valid = false ; <nl> - return ; <nl> - } break ; <nl> - DEFAULT_OP_LOCALMEM_NUM ( * , VECTOR3 , Vector3 ) ; <nl> - DEFAULT_OP_FAIL ( PLANE ) ; <nl> - case QUAT : { <nl> - <nl> - switch ( p_b . type ) { <nl> - case VECTOR3 : { <nl> - <nl> - _RETURN ( reinterpret_cast < const Quat * > ( p_a . _data . _mem ) - > xform ( * ( const Vector3 * ) p_b . _data . _mem ) ) ; <nl> - } break ; <nl> - case QUAT : { <nl> - <nl> - _RETURN ( * reinterpret_cast < const Quat * > ( p_a . _data . _mem ) * * reinterpret_cast < const Quat * > ( p_b . _data . _mem ) ) ; <nl> - } break ; <nl> - case REAL : { <nl> - _RETURN ( * reinterpret_cast < const Quat * > ( p_a . _data . _mem ) * p_b . _data . 
_real ) ; <nl> - } break ; <nl> - default : { } <nl> - } ; <nl> - r_valid = false ; <nl> - return ; <nl> - } break ; <nl> - DEFAULT_OP_FAIL ( RECT3 ) ; <nl> - case BASIS : { <nl> + CASE_TYPE ( math , OP_GREATER , ARRAY ) { <nl> + if ( p_b . type ! = ARRAY ) <nl> + _RETURN ( false ) ; <nl> <nl> - switch ( p_b . type ) { <nl> - case VECTOR3 : { <nl> + const Array * arr_a = reinterpret_cast < const Array * > ( p_a . _data . _mem ) ; <nl> + const Array * arr_b = reinterpret_cast < const Array * > ( p_b . _data . _mem ) ; <nl> <nl> - _RETURN ( p_a . _data . _basis - > xform ( * ( const Vector3 * ) p_b . _data . _mem ) ) ; <nl> - } ; <nl> - case BASIS : { <nl> + int l = arr_a - > size ( ) ; <nl> + if ( arr_b - > size ( ) > l ) <nl> + _RETURN ( false ) ; <nl> + for ( int i = 0 ; i < l ; i + + ) { <nl> + if ( ( ( * arr_a ) [ i ] < ( * arr_b ) [ i ] ) ) { <nl> + _RETURN ( false ) ; <nl> + } <nl> + } <nl> <nl> - _RETURN ( * p_a . _data . _basis * * p_b . _data . _basis ) ; <nl> - } ; <nl> - default : { } <nl> - } ; <nl> + _RETURN ( true ) ; <nl> + } <nl> + <nl> + DEFAULT_OP_NUM ( math , OP_GREATER , INT , > , _int ) ; <nl> + DEFAULT_OP_NUM ( math , OP_GREATER , REAL , > , _real ) ; <nl> + DEFAULT_OP_STR_REV ( math , OP_GREATER , STRING , < , String ) ; <nl> + DEFAULT_OP_LOCALMEM_REV ( math , OP_GREATER , VECTOR2 , < , Vector2 ) ; <nl> + DEFAULT_OP_LOCALMEM_REV ( math , OP_GREATER , VECTOR3 , < , Vector3 ) ; <nl> + DEFAULT_OP_LOCALMEM_REV ( math , OP_GREATER , _RID , < , RID ) ; <nl> + DEFAULT_OP_ARRAY_GT ( math , OP_GREATER , POOL_BYTE_ARRAY , uint8_t ) ; <nl> + DEFAULT_OP_ARRAY_GT ( math , OP_GREATER , POOL_INT_ARRAY , int ) ; <nl> + DEFAULT_OP_ARRAY_GT ( math , OP_GREATER , POOL_REAL_ARRAY , real_t ) ; <nl> + DEFAULT_OP_ARRAY_GT ( math , OP_GREATER , POOL_STRING_ARRAY , String ) ; <nl> + DEFAULT_OP_ARRAY_GT ( math , OP_GREATER , POOL_VECTOR2_ARRAY , Vector2 ) ; <nl> + DEFAULT_OP_ARRAY_GT ( math , OP_GREATER , POOL_VECTOR3_ARRAY , Vector3 ) ; <nl> + DEFAULT_OP_ARRAY_GT (
math , OP_GREATER , POOL_COLOR_ARRAY , Color ) ; <nl> + <nl> + CASE_TYPE ( math , OP_GREATER , NIL ) <nl> + CASE_TYPE ( math , OP_GREATER , RECT2 ) <nl> + CASE_TYPE ( math , OP_GREATER , TRANSFORM2D ) <nl> + CASE_TYPE ( math , OP_GREATER , PLANE ) <nl> + CASE_TYPE ( math , OP_GREATER , QUAT ) <nl> + CASE_TYPE ( math , OP_GREATER , RECT3 ) <nl> + CASE_TYPE ( math , OP_GREATER , BASIS ) <nl> + CASE_TYPE ( math , OP_GREATER , TRANSFORM ) <nl> + CASE_TYPE ( math , OP_GREATER , COLOR ) <nl> + CASE_TYPE ( math , OP_GREATER , NODE_PATH ) <nl> + CASE_TYPE ( math , OP_GREATER , DICTIONARY ) <nl> + _RETURN_FAIL ; <nl> + } <nl> + <nl> + SWITCH_OP ( math , OP_GREATER_EQUAL , p_a . type ) { <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , OBJECT ) { <nl> + if ( p_b . type = = OBJECT ) <nl> + _RETURN ( ( p_a . _get_obj ( ) . obj > = p_b . _get_obj ( ) . obj ) ) ; <nl> + } <nl> + <nl> + DEFAULT_OP_NUM ( math , OP_GREATER_EQUAL , INT , > = , _int ) ; <nl> + DEFAULT_OP_NUM ( math , OP_GREATER_EQUAL , REAL , > = , _real ) ; <nl> + DEFAULT_OP_STR_REV ( math , OP_GREATER_EQUAL , STRING , < = , String ) ; <nl> + DEFAULT_OP_LOCALMEM_REV ( math , OP_GREATER_EQUAL , VECTOR2 , < = , Vector2 ) ; <nl> + DEFAULT_OP_LOCALMEM_REV ( math , OP_GREATER_EQUAL , VECTOR3 , < = , Vector3 ) ; <nl> + DEFAULT_OP_LOCALMEM_REV ( math , OP_GREATER_EQUAL , _RID , < = , RID ) ; <nl> + <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , NIL ) <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , BOOL ) <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , RECT2 ) <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , TRANSFORM2D ) <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , PLANE ) <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , QUAT ) <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , RECT3 ) <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , BASIS ) <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , TRANSFORM ) <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , COLOR ) <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , NODE_PATH ) <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , 
DICTIONARY ) <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , ARRAY ) <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , POOL_BYTE_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , POOL_INT_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , POOL_REAL_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , POOL_STRING_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , POOL_VECTOR2_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , POOL_VECTOR3_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_GREATER_EQUAL , POOL_COLOR_ARRAY ) ; <nl> + _RETURN_FAIL ; <nl> + } <nl> + <nl> + SWITCH_OP ( math , OP_ADD , p_a . type ) { <nl> + CASE_TYPE ( math , OP_ADD , ARRAY ) { <nl> + if ( p_a . type ! = p_b . type ) { <nl> r_valid = false ; <nl> return ; <nl> - } break ; <nl> - case TRANSFORM : { <nl> + } <nl> + const Array & array_a = * reinterpret_cast < const Array * > ( p_a . _data . _mem ) ; <nl> + const Array & array_b = * reinterpret_cast < const Array * > ( p_b . _data . _mem ) ; <nl> + Array sum ; <nl> + int asize = array_a . size ( ) ; <nl> + int bsize = array_b . size ( ) ; <nl> + sum . 
resize ( asize + bsize ) ; <nl> + for ( int i = 0 ; i < asize ; i + + ) <nl> + sum [ i ] = array_a [ i ] ; <nl> + for ( int i = 0 ; i < bsize ; i + + ) <nl> + sum [ i + asize ] = array_b [ i ] ; <nl> + _RETURN ( sum ) ; <nl> + } <nl> + <nl> + DEFAULT_OP_NUM ( math , OP_ADD , INT , + , _int ) ; <nl> + DEFAULT_OP_NUM ( math , OP_ADD , REAL , + , _real ) ; <nl> + DEFAULT_OP_STR ( math , OP_ADD , STRING , + , String ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_ADD , VECTOR2 , + , Vector2 ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_ADD , VECTOR3 , + , Vector3 ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_ADD , QUAT , + , Quat ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_ADD , COLOR , + , Color ) ; <nl> + <nl> + DEFAULT_OP_ARRAY_ADD ( math , OP_ADD , POOL_BYTE_ARRAY , uint8_t ) ; <nl> + DEFAULT_OP_ARRAY_ADD ( math , OP_ADD , POOL_INT_ARRAY , int ) ; <nl> + DEFAULT_OP_ARRAY_ADD ( math , OP_ADD , POOL_REAL_ARRAY , real_t ) ; <nl> + DEFAULT_OP_ARRAY_ADD ( math , OP_ADD , POOL_STRING_ARRAY , String ) ; <nl> + DEFAULT_OP_ARRAY_ADD ( math , OP_ADD , POOL_VECTOR2_ARRAY , Vector2 ) ; <nl> + DEFAULT_OP_ARRAY_ADD ( math , OP_ADD , POOL_VECTOR3_ARRAY , Vector3 ) ; <nl> + DEFAULT_OP_ARRAY_ADD ( math , OP_ADD , POOL_COLOR_ARRAY , Color ) ; <nl> + <nl> + CASE_TYPE ( math , OP_ADD , NIL ) <nl> + CASE_TYPE ( math , OP_ADD , BOOL ) <nl> + CASE_TYPE ( math , OP_ADD , RECT2 ) <nl> + CASE_TYPE ( math , OP_ADD , TRANSFORM2D ) <nl> + CASE_TYPE ( math , OP_ADD , PLANE ) <nl> + CASE_TYPE ( math , OP_ADD , RECT3 ) <nl> + CASE_TYPE ( math , OP_ADD , BASIS ) <nl> + CASE_TYPE ( math , OP_ADD , TRANSFORM ) <nl> + CASE_TYPE ( math , OP_ADD , NODE_PATH ) <nl> + CASE_TYPE ( math , OP_ADD , _RID ) <nl> + CASE_TYPE ( math , OP_ADD , OBJECT ) <nl> + CASE_TYPE ( math , OP_ADD , DICTIONARY ) <nl> + _RETURN_FAIL ; <nl> + } <nl> <nl> - switch ( p_b . type ) { <nl> - case VECTOR3 : { <nl> + SWITCH_OP ( math , OP_SUBTRACT , p_a . 
type ) { <nl> + DEFAULT_OP_NUM ( math , OP_SUBTRACT , INT , - , _int ) ; <nl> + DEFAULT_OP_NUM ( math , OP_SUBTRACT , REAL , - , _real ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_SUBTRACT , VECTOR2 , - , Vector2 ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_SUBTRACT , VECTOR3 , - , Vector3 ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_SUBTRACT , QUAT , - , Quat ) ; <nl> + DEFAULT_OP_LOCALMEM ( math , OP_SUBTRACT , COLOR , - , Color ) ; <nl> + <nl> + CASE_TYPE ( math , OP_SUBTRACT , NIL ) <nl> + CASE_TYPE ( math , OP_SUBTRACT , BOOL ) <nl> + CASE_TYPE ( math , OP_SUBTRACT , STRING ) <nl> + CASE_TYPE ( math , OP_SUBTRACT , RECT2 ) <nl> + CASE_TYPE ( math , OP_SUBTRACT , TRANSFORM2D ) <nl> + CASE_TYPE ( math , OP_SUBTRACT , PLANE ) <nl> + CASE_TYPE ( math , OP_SUBTRACT , RECT3 ) <nl> + CASE_TYPE ( math , OP_SUBTRACT , BASIS ) <nl> + CASE_TYPE ( math , OP_SUBTRACT , TRANSFORM ) <nl> + CASE_TYPE ( math , OP_SUBTRACT , NODE_PATH ) <nl> + CASE_TYPE ( math , OP_SUBTRACT , _RID ) <nl> + CASE_TYPE ( math , OP_SUBTRACT , OBJECT ) <nl> + CASE_TYPE ( math , OP_SUBTRACT , DICTIONARY ) <nl> + CASE_TYPE ( math , OP_SUBTRACT , ARRAY ) <nl> + CASE_TYPE ( math , OP_SUBTRACT , POOL_BYTE_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_SUBTRACT , POOL_INT_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_SUBTRACT , POOL_REAL_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_SUBTRACT , POOL_STRING_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_SUBTRACT , POOL_VECTOR2_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_SUBTRACT , POOL_VECTOR3_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_SUBTRACT , POOL_COLOR_ARRAY ) ; <nl> + _RETURN_FAIL ; <nl> + } <nl> <nl> - _RETURN ( p_a . _data . _transform - > xform ( * ( const Vector3 * ) p_b . _data . _mem ) ) ; <nl> - } ; <nl> - case TRANSFORM : { <nl> + SWITCH_OP ( math , OP_MULTIPLY , p_a . type ) { <nl> + CASE_TYPE ( math , OP_MULTIPLY , TRANSFORM2D ) { <nl> + if ( p_b . type = = TRANSFORM2D ) { <nl> + _RETURN ( * p_a . _data . _transform2d * * p_b . _data . _transform2d ) ; <nl> + } ; <nl> + if ( p_b . 
type = = VECTOR2 ) { <nl> + _RETURN ( p_a . _data . _transform2d - > xform ( * ( const Vector2 * ) p_b . _data . _mem ) ) ; <nl> + } ; <nl> + r_valid = false ; <nl> + return ; <nl> + } <nl> + <nl> + CASE_TYPE ( math , OP_MULTIPLY , QUAT ) { <nl> + switch ( p_b . type ) { <nl> + case VECTOR3 : { <nl> <nl> - _RETURN ( * p_a . _data . _transform * * p_b . _data . _transform ) ; <nl> - } ; <nl> - default : { } <nl> + _RETURN ( reinterpret_cast < const Quat * > ( p_a . _data . _mem ) - > xform ( * ( const Vector3 * ) p_b . _data . _mem ) ) ; <nl> + } break ; <nl> + case QUAT : { <nl> + <nl> + _RETURN ( * reinterpret_cast < const Quat * > ( p_a . _data . _mem ) * * reinterpret_cast < const Quat * > ( p_b . _data . _mem ) ) ; <nl> + } break ; <nl> + case REAL : { <nl> + _RETURN ( * reinterpret_cast < const Quat * > ( p_a . _data . _mem ) * p_b . _data . _real ) ; <nl> + } break ; <nl> + default : { } <nl> + } ; <nl> + r_valid = false ; <nl> + return ; <nl> + } <nl> + <nl> + CASE_TYPE ( math , OP_MULTIPLY , BASIS ) { <nl> + switch ( p_b . type ) { <nl> + case VECTOR3 : { <nl> + <nl> + _RETURN ( p_a . _data . _basis - > xform ( * ( const Vector3 * ) p_b . _data . _mem ) ) ; <nl> } ; <nl> - r_valid = false ; <nl> - return ; <nl> - } break ; <nl> - DEFAULT_OP_LOCALMEM_NUM ( * , COLOR , Color ) ; <nl> - <nl> - DEFAULT_OP_FAIL ( NODE_PATH ) ; <nl> - DEFAULT_OP_FAIL ( _RID ) ; <nl> - DEFAULT_OP_FAIL ( OBJECT ) ; <nl> - DEFAULT_OP_FAIL ( DICTIONARY ) ; <nl> - DEFAULT_OP_FAIL ( ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_BYTE_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_INT_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_REAL_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_STRING_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_VECTOR2_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_VECTOR3_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_COLOR_ARRAY ) ; <nl> - case VARIANT_MAX : { <nl> - r_valid = false ; <nl> - return ; <nl> + case BASIS : { <nl> <nl> - } break ; <nl> + _RETURN ( * p_a . _data . _basis * * p_b . _data . 
_basis ) ; <nl> + } ; <nl> + default : { } <nl> + } ; <nl> + r_valid = false ; <nl> + return ; <nl> } <nl> - } break ; <nl> - case OP_DIVIDE : { <nl> - switch ( p_a . type ) { <nl> <nl> - DEFAULT_OP_FAIL ( NIL ) ; <nl> - DEFAULT_OP_NUM ( / , BOOL , _bool ) ; <nl> - case INT : { <nl> - switch ( p_b . type ) { <nl> - case BOOL : { <nl> - int64_t b = p_b . _data . _bool ; <nl> - if ( b = = 0 ) { <nl> + CASE_TYPE ( math , OP_MULTIPLY , TRANSFORM ) { <nl> + switch ( p_b . type ) { <nl> + case VECTOR3 : { <nl> <nl> - r_valid = false ; <nl> - _RETURN ( " Division By False " ) ; <nl> - } <nl> - _RETURN ( p_a . _data . _int / b ) ; <nl> + _RETURN ( p_a . _data . _transform - > xform ( * ( const Vector3 * ) p_b . _data . _mem ) ) ; <nl> + } ; <nl> + case TRANSFORM : { <nl> <nl> - } break ; <nl> - case INT : { <nl> - int64_t b = p_b . _data . _int ; <nl> - if ( b = = 0 ) { <nl> + _RETURN ( * p_a . _data . _transform * * p_b . _data . _transform ) ; <nl> + } ; <nl> + default : { } <nl> + } ; <nl> + r_valid = false ; <nl> + return ; <nl> + } <nl> <nl> - r_valid = false ; <nl> - _RETURN ( " Division By Zero " ) ; <nl> - } <nl> - _RETURN ( p_a . _data . 
_int / b ) ; <nl> + DEFAULT_OP_NUM_VEC ( math , OP_MULTIPLY , INT , * , _int ) ; <nl> + DEFAULT_OP_NUM_VEC ( math , OP_MULTIPLY , REAL , * , _real ) ; <nl> + DEFAULT_OP_LOCALMEM_NUM ( math , OP_MULTIPLY , VECTOR2 , * , Vector2 ) ; <nl> + DEFAULT_OP_LOCALMEM_NUM ( math , OP_MULTIPLY , VECTOR3 , * , Vector3 ) ; <nl> + DEFAULT_OP_LOCALMEM_NUM ( math , OP_MULTIPLY , COLOR , * , Color ) ; <nl> + <nl> + CASE_TYPE ( math , OP_MULTIPLY , NIL ) <nl> + CASE_TYPE ( math , OP_MULTIPLY , BOOL ) <nl> + CASE_TYPE ( math , OP_MULTIPLY , STRING ) <nl> + CASE_TYPE ( math , OP_MULTIPLY , RECT2 ) <nl> + CASE_TYPE ( math , OP_MULTIPLY , PLANE ) <nl> + CASE_TYPE ( math , OP_MULTIPLY , RECT3 ) <nl> + CASE_TYPE ( math , OP_MULTIPLY , NODE_PATH ) <nl> + CASE_TYPE ( math , OP_MULTIPLY , _RID ) <nl> + CASE_TYPE ( math , OP_MULTIPLY , OBJECT ) <nl> + CASE_TYPE ( math , OP_MULTIPLY , DICTIONARY ) <nl> + CASE_TYPE ( math , OP_MULTIPLY , ARRAY ) <nl> + CASE_TYPE ( math , OP_MULTIPLY , POOL_BYTE_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_MULTIPLY , POOL_INT_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_MULTIPLY , POOL_REAL_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_MULTIPLY , POOL_STRING_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_MULTIPLY , POOL_VECTOR2_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_MULTIPLY , POOL_VECTOR3_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_MULTIPLY , POOL_COLOR_ARRAY ) ; <nl> + _RETURN_FAIL ; <nl> + } <nl> <nl> - } break ; <nl> - case REAL : _RETURN ( p_a . _data . _int / p_b . _data . _real ) ; <nl> - default : { } <nl> - } <nl> + SWITCH_OP ( math , OP_DIVIDE , p_a . type ) { <nl> + CASE_TYPE ( math , OP_DIVIDE , QUAT ) { <nl> + if ( p_b . type ! 
= REAL ) { <nl> r_valid = false ; <nl> return ; <nl> - } ; <nl> - DEFAULT_OP_NUM ( / , REAL , _real ) ; <nl> - DEFAULT_OP_FAIL ( STRING ) ; <nl> - DEFAULT_OP_LOCALMEM_NUM ( / , VECTOR2 , Vector2 ) ; <nl> - DEFAULT_OP_FAIL ( RECT2 ) ; <nl> - DEFAULT_OP_FAIL ( TRANSFORM2D ) ; <nl> - DEFAULT_OP_LOCALMEM_NUM ( / , VECTOR3 , Vector3 ) ; <nl> - DEFAULT_OP_FAIL ( PLANE ) ; <nl> - case QUAT : { <nl> - if ( p_b . type ! = REAL ) { <nl> - r_valid = false ; <nl> - return ; <nl> - } <nl> - _RETURN ( * reinterpret_cast < const Quat * > ( p_a . _data . _mem ) / p_b . _data . _real ) ; <nl> - } break ; <nl> - DEFAULT_OP_FAIL ( RECT3 ) ; <nl> - DEFAULT_OP_FAIL ( BASIS ) ; <nl> - DEFAULT_OP_FAIL ( TRANSFORM ) ; <nl> - <nl> - DEFAULT_OP_LOCALMEM_NUM ( / , COLOR , Color ) ; <nl> - <nl> - DEFAULT_OP_FAIL ( NODE_PATH ) ; <nl> - DEFAULT_OP_FAIL ( _RID ) ; <nl> - DEFAULT_OP_FAIL ( OBJECT ) ; <nl> - DEFAULT_OP_FAIL ( DICTIONARY ) ; <nl> - DEFAULT_OP_FAIL ( ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_BYTE_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_INT_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_REAL_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_STRING_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_VECTOR2_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_VECTOR3_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_COLOR_ARRAY ) ; <nl> - case VARIANT_MAX : { <nl> + } <nl> + # ifdef DEBUG_ENABLED <nl> + if ( p_b . _data . _real = = 0 ) { <nl> r_valid = false ; <nl> - return ; <nl> + _RETURN ( " Division By Zero " ) ; <nl> + } <nl> + # endif <nl> + _RETURN ( <nl> + * reinterpret_cast < const Quat * > ( p_a . _data . _mem ) / p_b . _data . 
_real ) ; <nl> + } <nl> + <nl> + DEFAULT_OP_NUM_DIV ( math , OP_DIVIDE , INT , _int ) ; <nl> + DEFAULT_OP_NUM_DIV ( math , OP_DIVIDE , REAL , _real ) ; <nl> + DEFAULT_OP_LOCALMEM_NUM ( math , OP_DIVIDE , VECTOR2 , / , Vector2 ) ; <nl> + DEFAULT_OP_LOCALMEM_NUM ( math , OP_DIVIDE , VECTOR3 , / , Vector3 ) ; <nl> + DEFAULT_OP_LOCALMEM_NUM ( math , OP_DIVIDE , COLOR , / , Color ) ; <nl> + <nl> + CASE_TYPE ( math , OP_DIVIDE , NIL ) <nl> + CASE_TYPE ( math , OP_DIVIDE , BOOL ) <nl> + CASE_TYPE ( math , OP_DIVIDE , STRING ) <nl> + CASE_TYPE ( math , OP_DIVIDE , RECT2 ) <nl> + CASE_TYPE ( math , OP_DIVIDE , TRANSFORM2D ) <nl> + CASE_TYPE ( math , OP_DIVIDE , PLANE ) <nl> + CASE_TYPE ( math , OP_DIVIDE , RECT3 ) <nl> + CASE_TYPE ( math , OP_DIVIDE , BASIS ) <nl> + CASE_TYPE ( math , OP_DIVIDE , TRANSFORM ) <nl> + CASE_TYPE ( math , OP_DIVIDE , NODE_PATH ) <nl> + CASE_TYPE ( math , OP_DIVIDE , _RID ) <nl> + CASE_TYPE ( math , OP_DIVIDE , OBJECT ) <nl> + CASE_TYPE ( math , OP_DIVIDE , DICTIONARY ) <nl> + CASE_TYPE ( math , OP_DIVIDE , ARRAY ) <nl> + CASE_TYPE ( math , OP_DIVIDE , POOL_BYTE_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_DIVIDE , POOL_INT_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_DIVIDE , POOL_REAL_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_DIVIDE , POOL_STRING_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_DIVIDE , POOL_VECTOR2_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_DIVIDE , POOL_VECTOR3_ARRAY ) ; <nl> + CASE_TYPE ( math , OP_DIVIDE , POOL_COLOR_ARRAY ) ; <nl> + _RETURN_FAIL ; <nl> + } <nl> <nl> - } break ; <nl> - } <nl> - <nl> - } break ; <nl> - case OP_POSITIVE : { <nl> - / / Simple case when user defines variable as + value . <nl> - switch ( p_a . 
type ) { <nl> - <nl> - DEFAULT_OP_FAIL ( NIL ) ; <nl> - DEFAULT_OP_FAIL ( STRING ) ; <nl> - DEFAULT_OP_FAIL ( RECT2 ) ; <nl> - DEFAULT_OP_FAIL ( TRANSFORM2D ) ; <nl> - DEFAULT_OP_FAIL ( RECT3 ) ; <nl> - DEFAULT_OP_FAIL ( BASIS ) ; <nl> - DEFAULT_OP_FAIL ( TRANSFORM ) ; <nl> - DEFAULT_OP_NUM_POS ( BOOL , _bool ) ; <nl> - DEFAULT_OP_NUM_POS ( INT , _int ) ; <nl> - DEFAULT_OP_NUM_POS ( REAL , _real ) ; <nl> - DEFAULT_OP_LOCALMEM_POS ( VECTOR3 , Vector3 ) ; <nl> - DEFAULT_OP_LOCALMEM_POS ( PLANE , Plane ) ; <nl> - DEFAULT_OP_LOCALMEM_POS ( QUAT , Quat ) ; <nl> - DEFAULT_OP_LOCALMEM_POS ( VECTOR2 , Vector2 ) ; <nl> - <nl> - DEFAULT_OP_FAIL ( COLOR ) ; <nl> - <nl> - DEFAULT_OP_FAIL ( NODE_PATH ) ; <nl> - DEFAULT_OP_FAIL ( _RID ) ; <nl> - DEFAULT_OP_FAIL ( OBJECT ) ; <nl> - DEFAULT_OP_FAIL ( DICTIONARY ) ; <nl> - DEFAULT_OP_FAIL ( ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_BYTE_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_INT_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_REAL_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_STRING_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_VECTOR2_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_VECTOR3_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_COLOR_ARRAY ) ; <nl> - case VARIANT_MAX : { <nl> - r_valid = false ; <nl> - return ; <nl> + SWITCH_OP ( math , OP_POSITIVE , p_a . 
type ) { <nl> + DEFAULT_OP_NUM_POS ( math , OP_POSITIVE , INT , _int ) ; <nl> + DEFAULT_OP_NUM_POS ( math , OP_POSITIVE , REAL , _real ) ; <nl> + DEFAULT_OP_LOCALMEM_POS ( math , OP_POSITIVE , VECTOR3 , Vector3 ) ; <nl> + DEFAULT_OP_LOCALMEM_POS ( math , OP_POSITIVE , PLANE , Plane ) ; <nl> + DEFAULT_OP_LOCALMEM_POS ( math , OP_POSITIVE , QUAT , Quat ) ; <nl> + DEFAULT_OP_LOCALMEM_POS ( math , OP_POSITIVE , VECTOR2 , Vector2 ) ; <nl> + <nl> + CASE_TYPE ( math , OP_POSITIVE , NIL ) <nl> + CASE_TYPE ( math , OP_POSITIVE , BOOL ) <nl> + CASE_TYPE ( math , OP_POSITIVE , STRING ) <nl> + CASE_TYPE ( math , OP_POSITIVE , RECT2 ) <nl> + CASE_TYPE ( math , OP_POSITIVE , TRANSFORM2D ) <nl> + CASE_TYPE ( math , OP_POSITIVE , RECT3 ) <nl> + CASE_TYPE ( math , OP_POSITIVE , BASIS ) <nl> + CASE_TYPE ( math , OP_POSITIVE , TRANSFORM ) <nl> + CASE_TYPE ( math , OP_POSITIVE , COLOR ) <nl> + CASE_TYPE ( math , OP_POSITIVE , NODE_PATH ) <nl> + CASE_TYPE ( math , OP_POSITIVE , _RID ) <nl> + CASE_TYPE ( math , OP_POSITIVE , OBJECT ) <nl> + CASE_TYPE ( math , OP_POSITIVE , DICTIONARY ) <nl> + CASE_TYPE ( math , OP_POSITIVE , ARRAY ) <nl> + CASE_TYPE ( math , OP_POSITIVE , POOL_BYTE_ARRAY ) <nl> + CASE_TYPE ( math , OP_POSITIVE , POOL_INT_ARRAY ) <nl> + CASE_TYPE ( math , OP_POSITIVE , POOL_REAL_ARRAY ) <nl> + CASE_TYPE ( math , OP_POSITIVE , POOL_STRING_ARRAY ) <nl> + CASE_TYPE ( math , OP_POSITIVE , POOL_VECTOR2_ARRAY ) <nl> + CASE_TYPE ( math , OP_POSITIVE , POOL_VECTOR3_ARRAY ) <nl> + CASE_TYPE ( math , OP_POSITIVE , POOL_COLOR_ARRAY ) <nl> + _RETURN_FAIL ; <nl> + } <nl> <nl> - } break ; <nl> - } <nl> - } break ; <nl> - case OP_NEGATE : { <nl> - switch ( p_a . 
type ) { <nl> - <nl> - DEFAULT_OP_FAIL ( NIL ) ; <nl> - DEFAULT_OP_NUM_NEG ( BOOL , _bool ) ; <nl> - DEFAULT_OP_NUM_NEG ( INT , _int ) ; <nl> - DEFAULT_OP_NUM_NEG ( REAL , _real ) ; <nl> - DEFAULT_OP_FAIL ( STRING ) ; <nl> - DEFAULT_OP_LOCALMEM_NEG ( VECTOR2 , Vector2 ) ; <nl> - DEFAULT_OP_FAIL ( RECT2 ) ; <nl> - DEFAULT_OP_FAIL ( TRANSFORM2D ) ; <nl> - DEFAULT_OP_LOCALMEM_NEG ( VECTOR3 , Vector3 ) ; <nl> - DEFAULT_OP_LOCALMEM_NEG ( PLANE , Plane ) ; <nl> - DEFAULT_OP_LOCALMEM_NEG ( QUAT , Quat ) ; <nl> - DEFAULT_OP_FAIL ( RECT3 ) ; <nl> - DEFAULT_OP_FAIL ( BASIS ) ; <nl> - DEFAULT_OP_FAIL ( TRANSFORM ) ; <nl> - <nl> - DEFAULT_OP_LOCALMEM_NEG ( COLOR , Color ) ; <nl> - <nl> - DEFAULT_OP_FAIL ( NODE_PATH ) ; <nl> - DEFAULT_OP_FAIL ( _RID ) ; <nl> - DEFAULT_OP_FAIL ( OBJECT ) ; <nl> - DEFAULT_OP_FAIL ( DICTIONARY ) ; <nl> - DEFAULT_OP_FAIL ( ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_BYTE_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_INT_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_REAL_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_STRING_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_VECTOR2_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_VECTOR3_ARRAY ) ; <nl> - DEFAULT_OP_FAIL ( POOL_COLOR_ARRAY ) ; <nl> - case VARIANT_MAX : { <nl> - r_valid = false ; <nl> - return ; <nl> + SWITCH_OP ( math , OP_NEGATE , p_a . 
type ) { <nl> + DEFAULT_OP_NUM_NEG ( math , OP_NEGATE , INT , _int ) ; <nl> + DEFAULT_OP_NUM_NEG ( math , OP_NEGATE , REAL , _real ) ; <nl> + <nl> + DEFAULT_OP_LOCALMEM_NEG ( math , OP_NEGATE , VECTOR2 , Vector2 ) ; <nl> + DEFAULT_OP_LOCALMEM_NEG ( math , OP_NEGATE , VECTOR3 , Vector3 ) ; <nl> + DEFAULT_OP_LOCALMEM_NEG ( math , OP_NEGATE , PLANE , Plane ) ; <nl> + DEFAULT_OP_LOCALMEM_NEG ( math , OP_NEGATE , QUAT , Quat ) ; <nl> + DEFAULT_OP_LOCALMEM_NEG ( math , OP_NEGATE , COLOR , Color ) ; <nl> + <nl> + CASE_TYPE ( math , OP_NEGATE , NIL ) <nl> + CASE_TYPE ( math , OP_NEGATE , BOOL ) <nl> + CASE_TYPE ( math , OP_NEGATE , STRING ) <nl> + CASE_TYPE ( math , OP_NEGATE , RECT2 ) <nl> + CASE_TYPE ( math , OP_NEGATE , TRANSFORM2D ) <nl> + CASE_TYPE ( math , OP_NEGATE , RECT3 ) <nl> + CASE_TYPE ( math , OP_NEGATE , BASIS ) <nl> + CASE_TYPE ( math , OP_NEGATE , TRANSFORM ) <nl> + CASE_TYPE ( math , OP_NEGATE , NODE_PATH ) <nl> + CASE_TYPE ( math , OP_NEGATE , _RID ) <nl> + CASE_TYPE ( math , OP_NEGATE , OBJECT ) <nl> + CASE_TYPE ( math , OP_NEGATE , DICTIONARY ) <nl> + CASE_TYPE ( math , OP_NEGATE , ARRAY ) <nl> + CASE_TYPE ( math , OP_NEGATE , POOL_BYTE_ARRAY ) <nl> + CASE_TYPE ( math , OP_NEGATE , POOL_INT_ARRAY ) <nl> + CASE_TYPE ( math , OP_NEGATE , POOL_REAL_ARRAY ) <nl> + CASE_TYPE ( math , OP_NEGATE , POOL_STRING_ARRAY ) <nl> + CASE_TYPE ( math , OP_NEGATE , POOL_VECTOR2_ARRAY ) <nl> + CASE_TYPE ( math , OP_NEGATE , POOL_VECTOR3_ARRAY ) <nl> + CASE_TYPE ( math , OP_NEGATE , POOL_COLOR_ARRAY ) <nl> + _RETURN_FAIL ; <nl> + } <nl> <nl> - } break ; <nl> - } <nl> - } break ; <nl> - case OP_MODULE : { <nl> - if ( p_a . type = = INT & & p_b . type = = INT ) { <nl> + SWITCH_OP ( math , OP_MODULE , p_a . type ) { <nl> + CASE_TYPE ( math , OP_MODULE , INT ) { <nl> + if ( p_b . type ! = INT ) { <nl> + _RETURN_FAIL ; <nl> + } <nl> # ifdef DEBUG_ENABLED <nl> if ( p_b . _data . 
_int = = 0 ) { <nl> r_valid = false ; <nl> void Variant : : evaluate ( const Operator & p_op , const Variant & p_a , const Variant & <nl> } <nl> # endif <nl> _RETURN ( p_a . _data . _int % p_b . _data . _int ) ; <nl> + } <nl> <nl> - } else if ( p_a . type = = STRING ) { <nl> - const String * format = reinterpret_cast < const String * > ( p_a . _data . _mem ) ; <nl> + CASE_TYPE ( math , OP_MODULE , STRING ) { <nl> + const String * format = <nl> + reinterpret_cast < const String * > ( p_a . _data . _mem ) ; <nl> <nl> String result ; <nl> bool error ; <nl> if ( p_b . type = = ARRAY ) { <nl> / / e . g . " frog % s % d " % [ " fish " , 12 ] <nl> - const Array * args = reinterpret_cast < const Array * > ( p_b . _data . _mem ) ; <nl> + const Array * args = <nl> + reinterpret_cast < const Array * > ( p_b . _data . _mem ) ; <nl> result = format - > sprintf ( * args , & error ) ; <nl> } else { <nl> / / e . g . " frog % d " % 12 <nl> void Variant : : evaluate ( const Operator & p_op , const Variant & p_a , const Variant & <nl> _RETURN ( result ) ; <nl> } <nl> <nl> - r_valid = false ; <nl> - return ; <nl> + CASE_TYPE ( math , OP_MODULE , NIL ) <nl> + CASE_TYPE ( math , OP_MODULE , BOOL ) <nl> + CASE_TYPE ( math , OP_MODULE , REAL ) <nl> + CASE_TYPE ( math , OP_MODULE , VECTOR2 ) <nl> + CASE_TYPE ( math , OP_MODULE , RECT2 ) <nl> + CASE_TYPE ( math , OP_MODULE , VECTOR3 ) <nl> + CASE_TYPE ( math , OP_MODULE , TRANSFORM2D ) <nl> + CASE_TYPE ( math , OP_MODULE , PLANE ) <nl> + CASE_TYPE ( math , OP_MODULE , QUAT ) <nl> + CASE_TYPE ( math , OP_MODULE , RECT3 ) <nl> + CASE_TYPE ( math , OP_MODULE , BASIS ) <nl> + CASE_TYPE ( math , OP_MODULE , TRANSFORM ) <nl> + CASE_TYPE ( math , OP_MODULE , COLOR ) <nl> + CASE_TYPE ( math , OP_MODULE , NODE_PATH ) <nl> + CASE_TYPE ( math , OP_MODULE , _RID ) <nl> + CASE_TYPE ( math , OP_MODULE , OBJECT ) <nl> + CASE_TYPE ( math , OP_MODULE , DICTIONARY ) <nl> + CASE_TYPE ( math , OP_MODULE , ARRAY ) <nl> + CASE_TYPE ( math , OP_MODULE , 
POOL_BYTE_ARRAY ) <nl> + CASE_TYPE ( math , OP_MODULE , POOL_INT_ARRAY ) <nl> + CASE_TYPE ( math , OP_MODULE , POOL_REAL_ARRAY ) <nl> + CASE_TYPE ( math , OP_MODULE , POOL_STRING_ARRAY ) <nl> + CASE_TYPE ( math , OP_MODULE , POOL_VECTOR2_ARRAY ) <nl> + CASE_TYPE ( math , OP_MODULE , POOL_VECTOR3_ARRAY ) <nl> + CASE_TYPE ( math , OP_MODULE , POOL_COLOR_ARRAY ) <nl> + _RETURN_FAIL ; <nl> + } <nl> <nl> - } break ; <nl> - case OP_STRING_CONCAT : { <nl> + SWITCH_OP ( math , OP_STRING_CONCAT , p_a . type ) { <nl> + CASE_TYPE_ALL ( math , OP_STRING_CONCAT ) <nl> <nl> _RETURN ( p_a . operator String ( ) + p_b . operator String ( ) ) ; <nl> - } break ; <nl> - / / bitwise <nl> - case OP_SHIFT_LEFT : { <nl> - if ( p_a . type = = INT & & p_b . type = = INT ) <nl> - _RETURN ( p_a . _data . _int < < p_b . _data . _int ) ; <nl> + } <nl> <nl> - r_valid = false ; <nl> - return ; <nl> + SWITCH_OP ( math , OP_SHIFT_LEFT , p_a . type ) { <nl> + CASE_TYPE ( math , OP_SHIFT_LEFT , INT ) { <nl> + if ( p_b . type ! = INT ) <nl> + _RETURN_FAIL ; <nl> + _RETURN ( p_a . _data . _int < < p_b . _data . _int ) ; <nl> + } <nl> + CASE_TYPE_ALL_BUT_INT ( math , OP_SHIFT_LEFT ) <nl> + _RETURN_FAIL ; <nl> + } <nl> <nl> - } break ; <nl> - case OP_SHIFT_RIGHT : { <nl> - if ( p_a . type = = INT & & p_b . type = = INT ) <nl> + SWITCH_OP ( math , OP_SHIFT_RIGHT , p_a . type ) { <nl> + CASE_TYPE ( math , OP_SHIFT_RIGHT , INT ) { <nl> + if ( p_b . type ! = INT ) <nl> + _RETURN_FAIL ; <nl> _RETURN ( p_a . _data . _int > > p_b . _data . _int ) ; <nl> + } <nl> + CASE_TYPE_ALL_BUT_INT ( math , OP_SHIFT_RIGHT ) <nl> + _RETURN_FAIL ; <nl> + } <nl> <nl> - r_valid = false ; <nl> - return ; <nl> - <nl> - } break ; <nl> - case OP_BIT_AND : { <nl> - if ( p_a . type = = INT & & p_b . type = = INT ) <nl> + SWITCH_OP ( math , OP_BIT_AND , p_a . type ) { <nl> + CASE_TYPE ( math , OP_BIT_AND , INT ) { <nl> + if ( p_b . type ! = INT ) <nl> + _RETURN_FAIL ; <nl> _RETURN ( p_a . _data . _int & p_b . _data . 
_int ) ; <nl> + } <nl> + CASE_TYPE_ALL_BUT_INT ( math , OP_BIT_AND ) <nl> + _RETURN_FAIL ; <nl> + } <nl> <nl> - r_valid = false ; <nl> - return ; <nl> - <nl> - } break ; <nl> - case OP_BIT_OR : { <nl> - <nl> - if ( p_a . type = = INT & & p_b . type = = INT ) <nl> + SWITCH_OP ( math , OP_BIT_OR , p_a . type ) { <nl> + CASE_TYPE ( math , OP_BIT_OR , INT ) { <nl> + if ( p_b . type ! = INT ) <nl> + _RETURN_FAIL ; <nl> _RETURN ( p_a . _data . _int | p_b . _data . _int ) ; <nl> + } <nl> + CASE_TYPE_ALL_BUT_INT ( math , OP_BIT_OR ) <nl> + _RETURN_FAIL ; <nl> + } <nl> <nl> - r_valid = false ; <nl> - return ; <nl> - <nl> - } break ; <nl> - case OP_BIT_XOR : { <nl> - <nl> - if ( p_a . type = = INT & & p_b . type = = INT ) <nl> + SWITCH_OP ( math , OP_BIT_XOR , p_a . type ) { <nl> + CASE_TYPE ( math , OP_BIT_XOR , INT ) { <nl> + if ( p_b . type ! = INT ) <nl> + _RETURN_FAIL ; <nl> _RETURN ( p_a . _data . _int ^ p_b . _data . _int ) ; <nl> + } <nl> + CASE_TYPE_ALL_BUT_INT ( math , OP_BIT_XOR ) <nl> + _RETURN_FAIL ; <nl> + } <nl> <nl> - r_valid = false ; <nl> - return ; <nl> - <nl> - } break ; <nl> - case OP_BIT_NEGATE : { <nl> - <nl> - if ( p_a . type = = INT ) <nl> + SWITCH_OP ( math , OP_BIT_NEGATE , p_a . type ) { <nl> + CASE_TYPE ( math , OP_BIT_NEGATE , INT ) { <nl> _RETURN ( ~ p_a . _data . _int ) ; <nl> + } <nl> + CASE_TYPE_ALL_BUT_INT ( math , OP_BIT_NEGATE ) <nl> + _RETURN_FAIL ; <nl> + } <nl> <nl> - r_valid = false ; <nl> - return ; <nl> - <nl> - } break ; <nl> - / / logic <nl> - case OP_AND : { <nl> - bool l = p_a . booleanize ( r_valid ) ; <nl> - if ( ! r_valid ) <nl> - return ; <nl> - bool r = p_b . booleanize ( r_valid ) ; <nl> - if ( ! r_valid ) <nl> - return ; <nl> + SWITCH_OP ( math , OP_AND , p_a . type ) { <nl> + CASE_TYPE_ALL ( math , OP_AND ) { <nl> + bool l = p_a . booleanize ( r_valid ) ; <nl> + if ( ! r_valid ) <nl> + return ; <nl> + bool r = p_b . booleanize ( r_valid ) ; <nl> + if ( ! 
r_valid ) <nl> + return ; <nl> <nl> - _RETURN ( l & & r ) ; <nl> - } break ; <nl> - case OP_OR : { <nl> - bool l = p_a . booleanize ( r_valid ) ; <nl> - if ( ! r_valid ) <nl> - return ; <nl> - bool r = p_b . booleanize ( r_valid ) ; <nl> - if ( ! r_valid ) <nl> - return ; <nl> + _RETURN ( l & & r ) ; <nl> + } <nl> + } <nl> <nl> - _RETURN ( l | | r ) ; <nl> + SWITCH_OP ( math , OP_OR , p_a . type ) { <nl> + CASE_TYPE_ALL ( math , OP_OR ) { <nl> + bool l = p_a . booleanize ( r_valid ) ; <nl> + if ( ! r_valid ) <nl> + return ; <nl> + bool r = p_b . booleanize ( r_valid ) ; <nl> + if ( ! r_valid ) <nl> + return ; <nl> <nl> - } break ; <nl> - case OP_XOR : { <nl> - bool l = p_a . booleanize ( r_valid ) ; <nl> - if ( ! r_valid ) <nl> - return ; <nl> - bool r = p_b . booleanize ( r_valid ) ; <nl> - if ( ! r_valid ) <nl> - return ; <nl> + _RETURN ( l | | r ) ; <nl> + } <nl> + } <nl> <nl> - _RETURN ( ( l | | r ) & & ! ( l & & r ) ) ; <nl> - } break ; <nl> - case OP_NOT : { <nl> + SWITCH_OP ( math , OP_XOR , p_a . type ) { <nl> + CASE_TYPE_ALL ( math , OP_XOR ) { <nl> + bool l = p_a . booleanize ( r_valid ) ; <nl> + if ( ! r_valid ) <nl> + return ; <nl> + bool r = p_b . booleanize ( r_valid ) ; <nl> + if ( ! r_valid ) <nl> + return ; <nl> <nl> - bool l = p_a . booleanize ( r_valid ) ; <nl> - if ( ! r_valid ) <nl> - return ; <nl> - _RETURN ( ! l ) ; <nl> + _RETURN ( ( l | | r ) & & ! ( l & & r ) ) ; <nl> + } <nl> + } <nl> <nl> - } break ; <nl> - case OP_IN : { <nl> + SWITCH_OP ( math , OP_NOT , p_a . type ) { <nl> + CASE_TYPE_ALL ( math , OP_NOT ) { <nl> + bool l = p_a . booleanize ( r_valid ) ; <nl> + if ( ! r_valid ) <nl> + return ; <nl> + _RETURN ( ! l ) ; <nl> + } <nl> + } <nl> <nl> + SWITCH_OP ( math , OP_IN , p_a . type ) { <nl> + CASE_TYPE_ALL ( math , OP_IN ) <nl> _RETURN ( p_b . 
in ( p_a , & r_valid ) ) ; <nl> - <nl> - } break ; <nl> - case OP_MAX : { <nl> - <nl> - r_valid = false ; <nl> - ERR_FAIL ( ) ; <nl> } <nl> } <nl> - <nl> - r_valid = false ; <nl> } <nl> <nl> void Variant : : set_named ( const StringName & p_index , const Variant & p_value , bool * r_valid ) { <nl> void Variant : : set ( const Variant & p_index , const Variant & p_value , bool * r_valid ) <nl> DEFAULT_OP_DVECTOR_SET ( POOL_VECTOR2_ARRAY , Vector2 , p_value . type ! = Variant : : VECTOR2 ) / / 25 <nl> DEFAULT_OP_DVECTOR_SET ( POOL_VECTOR3_ARRAY , Vector3 , p_value . type ! = Variant : : VECTOR3 ) <nl> DEFAULT_OP_DVECTOR_SET ( POOL_COLOR_ARRAY , Color , p_value . type ! = Variant : : COLOR ) <nl> - default : return ; <nl> + default : <nl> + return ; <nl> } <nl> } <nl> <nl> Variant Variant : : get ( const Variant & p_index , bool * r_valid ) const { <nl> DEFAULT_OP_DVECTOR_GET ( POOL_VECTOR2_ARRAY , Vector2 ) / / 25 <nl> DEFAULT_OP_DVECTOR_GET ( POOL_VECTOR3_ARRAY , Vector3 ) <nl> DEFAULT_OP_DVECTOR_GET ( POOL_COLOR_ARRAY , Color ) <nl> - default : return Variant ( ) ; <nl> + default : <nl> + return Variant ( ) ; <nl> } <nl> <nl> return Variant ( ) ; <nl> bool Variant : : iter_init ( Variant & r_iter , bool & valid ) const { <nl> return true ; <nl> <nl> } break ; <nl> - default : { } <nl> + default : { <nl> + } <nl> } <nl> <nl> valid = false ; <nl> void Variant : : blend ( const Variant & a , const Variant & b , float c , Variant & r_dst ) <nl> r_dst = Color ( r , g , b , a ) ; <nl> } <nl> return ; <nl> - default : { r_dst = c < 0 . 5 ? a : b ; } <nl> + default : { <nl> + r_dst = c < 0 . 5 ? a : b ; <nl> + } <nl> return ; <nl> } <nl> } <nl> mmm a / modules / gdscript / gd_compiler . cpp <nl> ppp b / modules / gdscript / gd_compiler . 
cpp <nl> int GDCompiler : : _parse_assign_right_expression ( CodeGen & codegen , const GDParser : <nl> switch ( p_expression - > op ) { <nl> <nl> case GDParser : : OperatorNode : : OP_ASSIGN_ADD : var_op = Variant : : OP_ADD ; break ; <nl> - case GDParser : : OperatorNode : : OP_ASSIGN_SUB : var_op = Variant : : OP_SUBSTRACT ; break ; <nl> + case GDParser : : OperatorNode : : OP_ASSIGN_SUB : var_op = Variant : : OP_SUBTRACT ; break ; <nl> case GDParser : : OperatorNode : : OP_ASSIGN_MUL : var_op = Variant : : OP_MULTIPLY ; break ; <nl> case GDParser : : OperatorNode : : OP_ASSIGN_DIV : var_op = Variant : : OP_DIVIDE ; break ; <nl> case GDParser : : OperatorNode : : OP_ASSIGN_MOD : var_op = Variant : : OP_MODULE ; break ; <nl> int GDCompiler : : _parse_expression ( CodeGen & codegen , const GDParser : : Node * p_expr <nl> if ( ! _create_binary_operator ( codegen , on , Variant : : OP_ADD , p_stack_level ) ) return - 1 ; <nl> } break ; <nl> case GDParser : : OperatorNode : : OP_SUB : { <nl> - if ( ! _create_binary_operator ( codegen , on , Variant : : OP_SUBSTRACT , p_stack_level ) ) return - 1 ; <nl> + if ( ! _create_binary_operator ( codegen , on , Variant : : OP_SUBTRACT , p_stack_level ) ) return - 1 ; <nl> } break ; <nl> case GDParser : : OperatorNode : : OP_MUL : { <nl> if ( ! _create_binary_operator ( codegen , on , Variant : : OP_MULTIPLY , p_stack_level ) ) return - 1 ; <nl> mmm a / modules / gdscript / gd_editor . cpp <nl> ppp b / modules / gdscript / gd_editor . 
cpp <nl> static bool _guess_expression_type ( GDCompletionContext & context , const GDParser : <nl> Variant : : Operator vop = Variant : : OP_MAX ; <nl> switch ( op - > op ) { <nl> case GDParser : : OperatorNode : : OP_ADD : vop = Variant : : OP_ADD ; break ; <nl> - case GDParser : : OperatorNode : : OP_SUB : vop = Variant : : OP_SUBSTRACT ; break ; <nl> + case GDParser : : OperatorNode : : OP_SUB : vop = Variant : : OP_SUBTRACT ; break ; <nl> case GDParser : : OperatorNode : : OP_MUL : vop = Variant : : OP_MULTIPLY ; break ; <nl> case GDParser : : OperatorNode : : OP_DIV : vop = Variant : : OP_DIVIDE ; break ; <nl> case GDParser : : OperatorNode : : OP_MOD : vop = Variant : : OP_MODULE ; break ; <nl> mmm a / modules / gdscript / gd_parser . cpp <nl> ppp b / modules / gdscript / gd_parser . cpp <nl> GDParser : : Node * GDParser : : _reduce_expression ( Node * p_node , bool p_to_const ) { <nl> _REDUCE_BINARY ( Variant : : OP_ADD ) ; <nl> } break ; <nl> case OperatorNode : : OP_SUB : { <nl> - _REDUCE_BINARY ( Variant : : OP_SUBSTRACT ) ; <nl> + _REDUCE_BINARY ( Variant : : OP_SUBTRACT ) ; <nl> } break ; <nl> case OperatorNode : : OP_MUL : { <nl> _REDUCE_BINARY ( Variant : : OP_MULTIPLY ) ; <nl> mmm a / modules / visual_script / visual_script_expression . cpp <nl> ppp b / modules / visual_script / visual_script_expression . 
cpp <nl> VisualScriptExpression : : ENode * VisualScriptExpression : : _parse_expression ( ) { <nl> case TK_OP_OR : op = Variant : : OP_OR ; break ; <nl> case TK_OP_NOT : op = Variant : : OP_NOT ; break ; <nl> case TK_OP_ADD : op = Variant : : OP_ADD ; break ; <nl> - case TK_OP_SUB : op = Variant : : OP_SUBSTRACT ; break ; <nl> + case TK_OP_SUB : op = Variant : : OP_SUBTRACT ; break ; <nl> case TK_OP_MUL : op = Variant : : OP_MULTIPLY ; break ; <nl> case TK_OP_DIV : op = Variant : : OP_DIVIDE ; break ; <nl> case TK_OP_MOD : op = Variant : : OP_MODULE ; break ; <nl> VisualScriptExpression : : ENode * VisualScriptExpression : : _parse_expression ( ) { <nl> case Variant : : OP_MODULE : priority = 2 ; break ; <nl> <nl> case Variant : : OP_ADD : priority = 3 ; break ; <nl> - case Variant : : OP_SUBSTRACT : priority = 3 ; break ; <nl> + case Variant : : OP_SUBTRACT : priority = 3 ; break ; <nl> <nl> case Variant : : OP_SHIFT_LEFT : priority = 4 ; break ; <nl> case Variant : : OP_SHIFT_RIGHT : priority = 4 ; break ; <nl> mmm a / modules / visual_script / visual_script_func_nodes . cpp <nl> ppp b / modules / visual_script / visual_script_func_nodes . cpp <nl> class VisualScriptNodeInstancePropertySet : public VisualScriptNodeInstance { <nl> value = Variant : : evaluate ( Variant : : OP_ADD , value , p_argument ) ; <nl> } break ; <nl> case VisualScriptPropertySet : : ASSIGN_OP_SUB : { <nl> - value = Variant : : evaluate ( Variant : : OP_SUBSTRACT , value , p_argument ) ; <nl> + value = Variant : : evaluate ( Variant : : OP_SUBTRACT , value , p_argument ) ; <nl> } break ; <nl> case VisualScriptPropertySet : : ASSIGN_OP_MUL : { <nl> value = Variant : : evaluate ( Variant : : OP_MULTIPLY , value , p_argument ) ; <nl> mmm a / modules / visual_script / visual_script_nodes . cpp <nl> ppp b / modules / visual_script / visual_script_nodes . 
cpp <nl> void register_visual_script_nodes ( ) { <nl> VisualScriptLanguage : : singleton - > add_register_func ( " operators / compare / greater_equal " , create_op_node < Variant : : OP_GREATER_EQUAL > ) ; <nl> / / mathematic <nl> VisualScriptLanguage : : singleton - > add_register_func ( " operators / math / add " , create_op_node < Variant : : OP_ADD > ) ; <nl> - VisualScriptLanguage : : singleton - > add_register_func ( " operators / math / subtract " , create_op_node < Variant : : OP_SUBSTRACT > ) ; <nl> + VisualScriptLanguage : : singleton - > add_register_func ( " operators / math / subtract " , create_op_node < Variant : : OP_SUBTRACT > ) ; <nl> VisualScriptLanguage : : singleton - > add_register_func ( " operators / math / multiply " , create_op_node < Variant : : OP_MULTIPLY > ) ; <nl> VisualScriptLanguage : : singleton - > add_register_func ( " operators / math / divide " , create_op_node < Variant : : OP_DIVIDE > ) ; <nl> VisualScriptLanguage : : singleton - > add_register_func ( " operators / math / negate " , create_op_node < Variant : : OP_NEGATE > ) ; <nl>
|
Move Variant : : evaluate ( ) switch to computed goto
|
godotengine/godot
|
137f8a58a8f2a6c356ef00e5371ff144c8a89fb0
|
2017-09-17T20:49:23Z
|
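The Godot record above corrects the spelling of `Variant::OP_SUBTRACT` at every site that maps a parser or tokenizer opcode to a `Variant` operator enum before dispatching to `Variant::evaluate()`. A minimal sketch of that token-to-operator dispatch pattern, in Python with hypothetical names (the table keys mirror the enum spellings from the diff; none of this is Godot's actual API):

```python
import operator

# Hypothetical mirror of the dispatch the diffs above perform: each parser
# token name maps to one binary operation, and evaluation is a single
# table lookup (as Variant::evaluate() does on the enum value).
OP_TABLE = {
    "OP_ADD": operator.add,
    "OP_SUBTRACT": operator.sub,   # the identifier the commit corrects
    "OP_MULTIPLY": operator.mul,
    "OP_DIVIDE": operator.truediv,
    "OP_MODULE": operator.mod,
}

def evaluate(op_name, a, b):
    # A misspelled key (e.g. "OP_SUBSTRACT") would raise KeyError here,
    # which is the Python analogue of the compile error the rename fixes.
    return OP_TABLE[op_name](a, b)

print(evaluate("OP_SUBTRACT", 7, 2))  # 5
```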
mmm a / xbmc / interfaces / Builtins . cpp <nl> ppp b / xbmc / interfaces / Builtins . cpp <nl> const BUILT_IN commands [ ] = { <nl> # if defined ( __APPLE__ ) <nl> { " RunAppleScript " , true , " Run the specified AppleScript command " } , <nl> # endif <nl> - { " RunPlugin " , true , " Run the specified plugin . This command is deprecated , use PlayMedia instead " } , <nl> + { " RunPlugin " , true , " Run the specified plugin " } , <nl> { " RunAddon " , true , " Run the specified plugin / script " } , <nl> { " Extract " , true , " Extracts the specified archive " } , <nl> { " PlayMedia " , true , " Play the specified media file ( or playlist ) " } , <nl> int CBuiltins : : Execute ( const CStdString & execString ) <nl> } <nl> else if ( execute . Equals ( " runplugin " ) ) <nl> { <nl> - CLog : : Log ( LOGWARNING , " RunPlugin ( ) is deprecated , use PlayMedia ( ) instead " ) ; <nl> - <nl> if ( params . size ( ) ) <nl> { <nl> - CStdString cmd ( execString ) ; <nl> - cmd . Replace ( " RunPlugin " , " PlayMedia " ) ; <nl> - return Execute ( cmd ) ; <nl> + CFileItem item ( params [ 0 ] ) ; <nl> + if ( ! item . m_bIsFolder ) <nl> + { <nl> + item . m_strPath = params [ 0 ] ; <nl> + CPluginDirectory : : RunScriptWithParams ( item . m_strPath ) ; <nl> + } <nl> } <nl> else <nl> { <nl>
|
Revert " changed : deprecate the RunPlugin ( ) built - in "
|
xbmc/xbmc
|
1abd0b4a7e9caf2a7a162e997c4962b6523c85f8
|
2011-03-28T23:38:48Z
|
mmm a / include / swift / SIL / MemAccessUtils . h <nl> ppp b / include / swift / SIL / MemAccessUtils . h <nl> struct AccessPathWithBase { <nl> / / <nl> / / base may be invalid for global_addr - > address_to_pointer - > phi patterns . <nl> / / FIXME : add a structural requirement to SIL so base is always valid in OSSA . <nl> + / / ! ! ! make this a PtrIntPair with a the access kind <nl> SILValue base ; <nl> <nl> / / / \ p address identifies the object seen by any memory operation that <nl>
|
MemAccessUtils comment
|
apple/swift
|
9c69d0242d4cf191e06ae8d2122e1265f9ddc4cc
|
2020-10-16T22:00:10Z
|
mmm a / tensorflow / python / client / quantize_training . i <nl> ppp b / tensorflow / python / client / quantize_training . i <nl> static PyObject * DoQuantizeTrainingOnGraphDefHelper ( <nl> tensorflow : : DoQuantizeTrainingOnSerializedGraphDef ( input_graph , num_bits , & result ) ; <nl> if ( ! status . ok ( ) ) { <nl> Set_TF_Status_from_Status ( out_status , status ) ; <nl> - return Py_None ; <nl> + Py_RETURN_NONE ; <nl> } <nl> PyObject * py_str = PyBytes_FromStringAndSize ( result . data ( ) , result . size ( ) ) ; <nl> if ( ! py_str ) { <nl> Set_TF_Status_from_Status ( out_status , <nl> tensorflow : : Status ( tensorflow : : error : : INTERNAL , <nl> " Failed to generate serialized string of the rewritten graph . " ) ) ; <nl> - return Py_None ; <nl> + Py_RETURN_NONE ; <nl> } <nl> <nl> return py_str ; <nl>
|
Fix reference counting of Py_None in SWIG wrappers .
|
tensorflow/tensorflow
|
ebd381708b31c5aeb8845e5742de123934de3aab
|
2016-09-07T03:32:42Z
|
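The TensorFlow record above replaces `return Py_None` with `Py_RETURN_NONE`, because returning the `None` singleton from C without first incrementing its reference count lets the interpreter's count drift low and eventually corrupts `None` itself; the macro expands to `Py_INCREF(Py_None); return Py_None;`. The invariant being protected is visible from pure Python: a small sketch (standard library only) showing that `None` is an ordinary reference-counted object. Note that on interpreters with immortal objects (CPython 3.12+) the observed count may simply stay constant.

```python
import sys

# sys.getrefcount reports the interpreter's reference count for an object.
before = sys.getrefcount(None)

# Take many additional references to the None singleton; each list slot
# is one more reference that a well-behaved C extension would have
# accounted for with Py_INCREF.
extra_refs = [None] * 1000

after = sys.getrefcount(None)

# The count never decreases while the extra references are alive; a C
# function returning None without Py_INCREF breaks exactly this invariant.
assert after >= before
```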
mmm a / src / runtime / vm / translator / translator - x64 - vector . cpp <nl> ppp b / src / runtime / vm / translator / translator - x64 - vector . cpp <nl> void TranslatorX64 : : emitSetProp ( const Tracelet & t , <nl> <nl> m_regMap . allocInputReg ( * m_curNI , kRhsIdx ) ; <nl> PhysReg rhsReg = getReg ( val . location ) ; <nl> + LazyScratchReg tmp ( m_regMap ) ; <nl> if ( val . isVariant ( ) ) { <nl> - emitDeref ( a , rhsReg , rhsReg ) ; <nl> + tmp . alloc ( ) ; <nl> + emitDeref ( a , rhsReg , * tmp ) ; <nl> + rhsReg = * tmp ; <nl> } <nl> <nl> const bool incRef = true ; <nl> new file mode 100644 <nl> index 00000000000 . . 6b8d55a1a7c <nl> mmm / dev / null <nl> ppp b / src / test / vm / setmcrash . php <nl> <nl> + < ? php <nl> + <nl> + class X { <nl> + private $ foo ; <nl> + function foo ( & $ b ) { <nl> + $ this - > foo = $ b ; <nl> + } <nl> + } <nl> + <nl> + $ x = new X ; <nl> + $ t = null ; <nl> + $ x - > foo ( $ t ) ; <nl> + <nl> new file mode 100644 <nl> index 00000000000 . . e69de29bb2d <nl>
|
Dont trash an owned register
|
facebook/hhvm
|
83ad91782d3bb56b63de4210fc7e7e5a01a9729e
|
2012-10-11T18:46:19Z
|
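The PyTorch record that follows adds `tensorexpr` simplifier tests for nested `Min`/`Max` rewrites such as `Max(5, Max(x, 8)) => Max(x, 8)`: when both operands of the outer node are a constant and a nested op over the same kind, the constants fold into one. A toy sketch of that one rule (hypothetical `Var`/`Max` classes, not the `tensorexpr` IR; the real simplifier also tracks `propagate_nans` and handles many more shapes):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Max:
    lhs: object
    rhs: object

def simplify_max(node):
    """Fold a constant into a nested Max: Max(c1, Max(x, c2)) => Max(x, max(c1, c2)).

    Mirrors the Max(5, Max(x, 8)) => Max(x, 8) cases exercised in
    testSimplifyNestedMax; returns the node unchanged if the pattern
    does not apply.
    """
    if isinstance(node, Max) and isinstance(node.lhs, int) and isinstance(node.rhs, Max):
        inner = node.rhs
        if isinstance(inner.rhs, int):
            return Max(inner.lhs, max(node.lhs, inner.rhs))
    return node

expr = Max(5, Max(Var("x"), 8))
print(simplify_max(expr))  # Max(lhs=Var(name='x'), rhs=8)
```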
mmm a / test / cpp / tensorexpr / test_boundsinference . cpp <nl> ppp b / test / cpp / tensorexpr / test_boundsinference . cpp <nl> void testMergeSymbolicBounds ( ) { <nl> res = mergeTensorAccesses ( info ) ; <nl> ASSERT_EQ ( res [ a . data ( ) ] . size ( ) , 1 ) ; <nl> pair = boundAsStringPair ( res [ a . data ( ) ] [ 0 ] ) ; <nl> - ASSERT_EQ ( pair . first , " Min ( Z , X , 1 ) " ) ; <nl> + ASSERT_EQ ( pair . first , " Min ( X , Z , 1 ) " ) ; <nl> ASSERT_EQ ( pair . second , " Y " ) ; <nl> <nl> / / If either side is only one apart , they must be adjacent . <nl> void testMergeSymbolicBounds ( ) { <nl> res = mergeTensorAccesses ( info ) ; <nl> ASSERT_EQ ( res [ a . data ( ) ] . size ( ) , 1 ) ; <nl> pair = boundAsStringPair ( res [ a . data ( ) ] [ 0 ] ) ; <nl> - ASSERT_EQ ( pair . first , " Min ( Z , X , 1 ) " ) ; <nl> + ASSERT_EQ ( pair . first , " Min ( X , Z , 1 ) " ) ; <nl> ASSERT_EQ ( pair . second , " Y " ) ; <nl> <nl> / / If either side is 2 apart , they may not be overlapping . <nl> void testMergeSymbolicAdjacent ( ) { <nl> ASSERT_EQ ( res [ a . data ( ) ] . size ( ) , 1 ) ; <nl> pair = boundAsStringPair ( res [ a . data ( ) ] [ 0 ] ) ; <nl> ASSERT_EQ ( pair . first , " 5 " ) ; <nl> - ASSERT_EQ ( pair . second , " Max ( Y , X , 1 ) " ) ; <nl> + ASSERT_EQ ( pair . second , " Max ( X , Y , 1 ) " ) ; <nl> <nl> info . clear ( ) ; <nl> info [ a . data ( ) ] . push_back ( { kLoad , { X . node ( ) } , { new IntImm ( 6 ) } } ) ; <nl> void testMergeSymbolicAdjacent ( ) { <nl> res = mergeTensorAccesses ( info ) ; <nl> ASSERT_EQ ( res [ a . data ( ) ] . size ( ) , 1 ) ; <nl> pair = boundAsStringPair ( res [ a . data ( ) ] [ 0 ] ) ; <nl> - ASSERT_EQ ( pair . first , " Min ( Y , X , 1 ) " ) ; <nl> + ASSERT_EQ ( pair . first , " Min ( X , Y , 1 ) " ) ; <nl> ASSERT_EQ ( pair . second , " 6 " ) ; <nl> } <nl> <nl> mmm a / test / cpp / tensorexpr / test_simplify . cpp <nl> ppp b / test / cpp / tensorexpr / test_simplify . 
cpp <nl> using SimpleIRExprEval = ExprEval < SimpleIREvaluator > ; <nl> ASSERT_EQ ( node_ - > name_hint ( ) , name ) ; \ <nl> } <nl> <nl> + # define IS_BINOP_W_VARS ( T , node , name , v1 , v2 ) \ <nl> + const T * name = nullptr ; \ <nl> + { \ <nl> + name = dynamic_cast < const T * > ( node ) ; \ <nl> + ASSERT_NE ( nullptr , name ) ; \ <nl> + IS_VAR_WITH_NAME ( name - > lhs ( ) , v1 ) ; \ <nl> + IS_VAR_WITH_NAME ( name - > rhs ( ) , v2 ) ; \ <nl> + } <nl> + <nl> + # define IS_BINOP_W_CONST ( T , node , name , v , c ) \ <nl> + const T * name = nullptr ; \ <nl> + { \ <nl> + name = dynamic_cast < const T * > ( node ) ; \ <nl> + ASSERT_NE ( nullptr , name ) ; \ <nl> + IS_VAR_WITH_NAME ( name - > lhs ( ) , v ) ; \ <nl> + IS_IMM_WITH_VAL ( Int , name - > rhs ( ) , c ) ; \ <nl> + } <nl> + <nl> void testConstantFoldSimple ( ) { <nl> KernelScope kernel_scope ; <nl> ExprHandle a ( 2 . 0f ) ; <nl> void testSimplifySymbolicMinMax ( ) { <nl> } <nl> } <nl> <nl> + void testSimplifyNestedMax ( ) { <nl> + KernelScope kernel_scope ; <nl> + VarHandle x ( " x " , kInt ) ; <nl> + VarHandle y ( " y " , kInt ) ; <nl> + VarHandle z ( " z " , kInt ) ; <nl> + <nl> + { <nl> + / / Max ( x + y , x + y ) = > x + y <nl> + ExprHandle body = Max : : make ( x + y , x + y , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_BINOP_W_VARS ( Add , simplified . node ( ) , add , " y " , " x " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( x + y , Max ( x + y , z ) ) = > Max ( y + x , z ) <nl> + ExprHandle body = Max : : make ( x + y , Max : : make ( x + y , z , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Max , simplified . 
node ( ) , max ) ; <nl> + IS_BINOP_W_VARS ( Add , max - > lhs ( ) , add , " y " , " x " ) ; <nl> + IS_VAR_WITH_NAME ( max - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( x + y , Max ( z , x + y ) ) = > Max ( y + x , z ) <nl> + ExprHandle body = Max : : make ( x + y , Max : : make ( z , x + y , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Max , simplified . node ( ) , max ) ; <nl> + IS_BINOP_W_VARS ( Add , max - > lhs ( ) , add , " y " , " x " ) ; <nl> + IS_VAR_WITH_NAME ( max - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( Max ( x + y , z ) , x + y ) = > Max ( y + x , z ) <nl> + ExprHandle body = Max : : make ( Max : : make ( x + y , z , true ) , x + y , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Max , simplified . node ( ) , max ) ; <nl> + IS_BINOP_W_VARS ( Add , max - > lhs ( ) , add , " y " , " x " ) ; <nl> + IS_VAR_WITH_NAME ( max - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( Max ( z , x + y ) , x + y ) = > Max ( y + x , z ) <nl> + ExprHandle body = Max : : make ( Max : : make ( z , x + y , true ) , x + y , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Max , simplified . node ( ) , max ) ; <nl> + IS_BINOP_W_VARS ( Add , max - > lhs ( ) , add , " y " , " x " ) ; <nl> + IS_VAR_WITH_NAME ( max - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( Max ( x , y ) , x ) = > Max ( Max ( x , y ) , x ) <nl> + / / Nested Max ops with different propagate_nans should not be simplified . <nl> + ExprHandle body = Max : : make ( Max : : make ( x , y , true ) , x , false ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Max , simplified . 
node ( ) , max ) ; <nl> + IS_BINOP_W_VARS ( Max , max - > lhs ( ) , max1 , " x " , " y " ) ; <nl> + ASSERT_TRUE ( max1 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( max - > rhs ( ) , " x " ) ; <nl> + ASSERT_FALSE ( max - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( Min ( x , y ) , Min ( x , z ) ) = > Min ( x , Max ( y , z ) ) <nl> + ExprHandle body = <nl> + Max : : make ( Min : : make ( x , y , true ) , Min : : make ( x , z , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Min , simplified . node ( ) , min ) ; <nl> + IS_VAR_WITH_NAME ( min - > lhs ( ) , " x " ) ; <nl> + IS_BINOP_W_VARS ( Max , min - > rhs ( ) , max , " y " , " z " ) ; <nl> + ASSERT_TRUE ( max - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( Min ( x , y ) , Min ( z , x ) ) = > Min ( x , Max ( y , z ) ) <nl> + ExprHandle body = <nl> + Max : : make ( Min : : make ( x , y , true ) , Min : : make ( z , x , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Min , simplified . node ( ) , min ) ; <nl> + IS_VAR_WITH_NAME ( min - > lhs ( ) , " x " ) ; <nl> + IS_BINOP_W_VARS ( Max , min - > rhs ( ) , max , " y " , " z " ) ; <nl> + ASSERT_TRUE ( max - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( Min ( y , x ) , Min ( x , z ) ) = > Min ( x , Max ( y , z ) ) <nl> + ExprHandle body = <nl> + Max : : make ( Min : : make ( y , x , true ) , Min : : make ( x , z , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Min , simplified . 
node ( ) , min ) ; <nl> + IS_VAR_WITH_NAME ( min - > lhs ( ) , " x " ) ; <nl> + IS_BINOP_W_VARS ( Max , min - > rhs ( ) , max , " y " , " z " ) ; <nl> + ASSERT_TRUE ( max - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( Min ( y , x ) , Min ( z , x ) ) = > Min ( x , Max ( y , z ) ) <nl> + ExprHandle body = <nl> + Max : : make ( Min : : make ( y , x , true ) , Min : : make ( z , x , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Min , simplified . node ( ) , min ) ; <nl> + IS_VAR_WITH_NAME ( min - > lhs ( ) , " x " ) ; <nl> + IS_BINOP_W_VARS ( Max , min - > rhs ( ) , max , " y " , " z " ) ; <nl> + ASSERT_TRUE ( max - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( Min ( y , x ) , Min ( z , x ) ) = > Max ( Min ( x , z ) , Min ( x , y ) ) <nl> + / / When all the ops in the pattern do not have the same propagate_nans , <nl> + / / it should not be simplified . <nl> + ExprHandle body = <nl> + Max : : make ( Min : : make ( y , x , true ) , Min : : make ( z , x , false ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Max , simplified . node ( ) , max ) ; <nl> + IS_BINOP_W_VARS ( Min , max - > lhs ( ) , min1 , " x " , " z " ) ; <nl> + ASSERT_FALSE ( min1 - > propagate_nans ( ) ) ; <nl> + IS_BINOP_W_VARS ( Min , max - > rhs ( ) , min2 , " x " , " y " ) ; <nl> + ASSERT_TRUE ( min2 - > propagate_nans ( ) ) ; <nl> + ASSERT_TRUE ( max - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( 5 , Max ( x , 8 ) ) = > Max ( x , 8 ) <nl> + ExprHandle body = Max : : make ( 5 , Max : : make ( x , 8 , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_BINOP_W_CONST ( Max , simplified . 
node ( ) , max , " x " , 8 ) ; <nl> + ASSERT_TRUE ( max - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( 8 , Max ( x , 5 ) ) = > Max ( x , 8 ) <nl> + ExprHandle body = Max : : make ( 8 , Max : : make ( x , 5 , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_BINOP_W_CONST ( Max , simplified . node ( ) , max , " x " , 8 ) ; <nl> + ASSERT_TRUE ( max - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( Max ( x , 8 ) , 5 ) = > Max ( x , 8 ) <nl> + ExprHandle body = Max : : make ( Max : : make ( x , 8 , true ) , 5 , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_BINOP_W_CONST ( Max , simplified . node ( ) , max , " x " , 8 ) ; <nl> + ASSERT_TRUE ( max - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( Max ( x , 5 ) , 8 ) = > Max ( x , 8 ) <nl> + ExprHandle body = Max : : make ( Max : : make ( x , 5 , true ) , 8 , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_BINOP_W_CONST ( Max , simplified . node ( ) , max , " x " , 8 ) ; <nl> + ASSERT_TRUE ( max - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( 5 , Max ( x , Max ( y , Max ( z , 8 ) ) ) ) = > Max ( Max ( Max ( x , 8 ) , y ) , z ) <nl> + ExprHandle body = Max : : make ( <nl> + 5 , Max : : make ( x , Max : : make ( y , Max : : make ( z , 8 , true ) , true ) , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Max , simplified . 
node ( ) , max1 ) ; <nl> + IS_NODE_WITH_NAME ( Max , max1 - > lhs ( ) , max2 ) ; <nl> + IS_BINOP_W_CONST ( Max , max2 - > lhs ( ) , max3 , " x " , 8 ) ; <nl> + ASSERT_TRUE ( max3 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( max2 - > rhs ( ) , " y " ) ; <nl> + IS_VAR_WITH_NAME ( max1 - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( 8 , Max ( Max ( y , Max ( z , 5 ) ) , x ) ) = > Max ( Max ( Max ( x , 8 ) , y ) , z ) <nl> + ExprHandle body = Max : : make ( <nl> + 8 , Max : : make ( Max : : make ( y , Max : : make ( z , 5 , true ) , true ) , x , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Max , simplified . node ( ) , max1 ) ; <nl> + IS_NODE_WITH_NAME ( Max , max1 - > lhs ( ) , max2 ) ; <nl> + IS_BINOP_W_CONST ( Max , max2 - > lhs ( ) , max3 , " x " , 8 ) ; <nl> + ASSERT_TRUE ( max3 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( max2 - > rhs ( ) , " y " ) ; <nl> + IS_VAR_WITH_NAME ( max1 - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( 5 , Max ( Max ( Max ( z , 8 ) , y ) , x ) ) = > Max ( Max ( Max ( x , 8 ) , y ) , z ) <nl> + ExprHandle body = Max : : make ( <nl> + 5 , Max : : make ( Max : : make ( Max : : make ( z , 8 , true ) , y , true ) , x , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Max , simplified . 
node ( ) , max1 ) ; <nl> + IS_NODE_WITH_NAME ( Max , max1 - > lhs ( ) , max2 ) ; <nl> + IS_BINOP_W_CONST ( Max , max2 - > lhs ( ) , max3 , " x " , 8 ) ; <nl> + ASSERT_TRUE ( max3 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( max2 - > rhs ( ) , " y " ) ; <nl> + IS_VAR_WITH_NAME ( max1 - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( Max ( x , Max ( y , Max ( 5 , z ) ) ) , 8 ) = > Max ( Max ( Max ( x , 8 ) , y ) , z ) <nl> + ExprHandle body = Max : : make ( <nl> + Max : : make ( x , Max : : make ( y , Max : : make ( 5 , z , true ) , true ) , true ) , 8 , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Max , simplified . node ( ) , max1 ) ; <nl> + IS_NODE_WITH_NAME ( Max , max1 - > lhs ( ) , max2 ) ; <nl> + IS_BINOP_W_CONST ( Max , max2 - > lhs ( ) , max3 , " x " , 8 ) ; <nl> + ASSERT_TRUE ( max3 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( max2 - > rhs ( ) , " y " ) ; <nl> + IS_VAR_WITH_NAME ( max1 - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( Max ( Max ( y , Max ( 8 , z ) ) , x ) , 5 ) = > Max ( Max ( Max ( x , 8 ) , y ) , z ) <nl> + ExprHandle body = Max : : make ( <nl> + Max : : make ( Max : : make ( y , Max : : make ( z , 8 , true ) , true ) , x , true ) , 5 , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Max , simplified . 
node ( ) , max1 ) ; <nl> + IS_NODE_WITH_NAME ( Max , max1 - > lhs ( ) , max2 ) ; <nl> + IS_BINOP_W_CONST ( Max , max2 - > lhs ( ) , max3 , " x " , 8 ) ; <nl> + ASSERT_TRUE ( max3 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( max2 - > rhs ( ) , " y " ) ; <nl> + IS_VAR_WITH_NAME ( max1 - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( Max ( Max ( Max ( 5 , z ) , y ) , x ) , 8 ) = > Max ( Max ( Max ( x , 8 ) , y ) , z ) <nl> + ExprHandle body = Max : : make ( <nl> + Max : : make ( Max : : make ( Max : : make ( z , 5 , true ) , y , true ) , x , true ) , 8 , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Max , simplified . node ( ) , max1 ) ; <nl> + IS_NODE_WITH_NAME ( Max , max1 - > lhs ( ) , max2 ) ; <nl> + IS_BINOP_W_CONST ( Max , max2 - > lhs ( ) , max3 , " x " , 8 ) ; <nl> + ASSERT_TRUE ( max3 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( max2 - > rhs ( ) , " y " ) ; <nl> + IS_VAR_WITH_NAME ( max1 - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( Max ( Max ( Max ( z , 5 ) , y ) , x ) , 8 ) = > Max ( Max ( x , Max ( Max ( z , 5 ) , y ) ) , 8 ) <nl> + / / Do not simplify when all the Max ops do not have the same <nl> + / / propagate_nans . <nl> + ExprHandle body = Max : : make ( <nl> + Max : : make ( Max : : make ( Max : : make ( z , 5 , true ) , y , false ) , x , true ) , <nl> + 8 , <nl> + false ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Max , simplified . 
node ( ) , max1 ) ; <nl> + IS_NODE_WITH_NAME ( Max , max1 - > lhs ( ) , max2 ) ; <nl> + IS_VAR_WITH_NAME ( max2 - > lhs ( ) , " x " ) ; <nl> + IS_NODE_WITH_NAME ( Max , max2 - > rhs ( ) , max3 ) ; <nl> + IS_BINOP_W_CONST ( Max , max3 - > lhs ( ) , max4 , " z " , 5 ) ; <nl> + ASSERT_TRUE ( max4 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( max3 - > rhs ( ) , " y " ) ; <nl> + ASSERT_FALSE ( max3 - > propagate_nans ( ) ) ; <nl> + ASSERT_TRUE ( max2 - > propagate_nans ( ) ) ; <nl> + IS_IMM_WITH_VAL ( Int , max1 - > rhs ( ) , 8 ) ; <nl> + ASSERT_FALSE ( max1 - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( 8 , Max ( Max ( x , 5 ) , Max ( y , z ) ) ) = > Max ( Max ( Max ( x , 8 ) , y ) , z ) <nl> + ExprHandle body = Max : : make ( <nl> + 8 , Max : : make ( Max : : make ( x , 5 , true ) , Max : : make ( y , z , true ) , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Max , simplified . node ( ) , max1 ) ; <nl> + IS_NODE_WITH_NAME ( Max , max1 - > lhs ( ) , max2 ) ; <nl> + IS_BINOP_W_CONST ( Max , max2 - > lhs ( ) , max3 , " x " , 8 ) ; <nl> + ASSERT_TRUE ( max3 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( max2 - > rhs ( ) , " y " ) ; <nl> + IS_VAR_WITH_NAME ( max1 - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Max ( Max ( Max ( x , 5 ) , Max ( y , z ) ) , 8 ) = > Max ( Max ( Max ( x , 8 ) , y ) , z ) <nl> + ExprHandle body = Max : : make ( <nl> + Max : : make ( Max : : make ( x , 5 , true ) , Max : : make ( y , z , true ) , true ) , 8 , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Max , simplified . 
node ( ) , max1 ) ; <nl> + IS_NODE_WITH_NAME ( Max , max1 - > lhs ( ) , max2 ) ; <nl> + IS_BINOP_W_CONST ( Max , max2 - > lhs ( ) , max3 , " x " , 8 ) ; <nl> + ASSERT_TRUE ( max3 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( max2 - > rhs ( ) , " y " ) ; <nl> + IS_VAR_WITH_NAME ( max1 - > rhs ( ) , " z " ) ; <nl> + } <nl> + } <nl> + <nl> + void testSimplifyNestedMin ( ) { <nl> + KernelScope kernel_scope ; <nl> + VarHandle x ( " x " , kInt ) ; <nl> + VarHandle y ( " y " , kInt ) ; <nl> + VarHandle z ( " z " , kInt ) ; <nl> + <nl> + { <nl> + / / Min ( x + y , x + y ) = > x + y <nl> + ExprHandle body = Min : : make ( x + y , x + y , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_BINOP_W_VARS ( Add , simplified . node ( ) , add , " y " , " x " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( x + y , Min ( x + y , z ) ) = > Min ( y + x , z ) <nl> + ExprHandle body = Min : : make ( x + y , Min : : make ( x + y , z , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Min , simplified . node ( ) , min ) ; <nl> + IS_BINOP_W_VARS ( Add , min - > lhs ( ) , add , " y " , " x " ) ; <nl> + IS_VAR_WITH_NAME ( min - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( x + y , Min ( z , x + y ) ) = > Min ( y + x , z ) <nl> + ExprHandle body = Min : : make ( x + y , Min : : make ( z , x + y , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Min , simplified . 
node ( ) , min ) ; <nl> + IS_BINOP_W_VARS ( Add , min - > lhs ( ) , add , " y " , " x " ) ; <nl> + IS_VAR_WITH_NAME ( min - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( Min ( x + y , z ) , x + y ) = > Min ( y + x , z ) <nl> + ExprHandle body = Min : : make ( Min : : make ( x + y , z , true ) , x + y , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Min , simplified . node ( ) , min ) ; <nl> + IS_BINOP_W_VARS ( Add , min - > lhs ( ) , add , " y " , " x " ) ; <nl> + IS_VAR_WITH_NAME ( min - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( Min ( z , x + y ) , x + y ) = > Min ( y + x , z ) <nl> + ExprHandle body = Min : : make ( Min : : make ( z , x + y , true ) , x + y , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Min , simplified . node ( ) , min ) ; <nl> + IS_BINOP_W_VARS ( Add , min - > lhs ( ) , add , " y " , " x " ) ; <nl> + IS_VAR_WITH_NAME ( min - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( Min ( x , y ) , x ) = > Min ( Min ( x , y ) , x ) <nl> + / / Nested Min ops with different propagate_nans should not be simplified . <nl> + ExprHandle body = Min : : make ( Min : : make ( x , y , true ) , x , false ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Min , simplified . 
node ( ) , min1 ) ; <nl> + IS_BINOP_W_VARS ( Min , min1 - > lhs ( ) , min2 , " x " , " y " ) ; <nl> + ASSERT_TRUE ( min2 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( min1 - > rhs ( ) , " x " ) ; <nl> + ASSERT_FALSE ( min1 - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( Max ( x , y ) , Max ( x , z ) ) = > Max ( x , Min ( y , z ) ) <nl> + ExprHandle body = <nl> + Min : : make ( Max : : make ( x , y , true ) , Max : : make ( x , z , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Max , simplified . node ( ) , max ) ; <nl> + IS_VAR_WITH_NAME ( max - > lhs ( ) , " x " ) ; <nl> + IS_BINOP_W_VARS ( Min , max - > rhs ( ) , min , " y " , " z " ) ; <nl> + ASSERT_TRUE ( min - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( Max ( x , y ) , Max ( z , x ) ) = > Max ( x , Min ( y , z ) ) <nl> + ExprHandle body = <nl> + Min : : make ( Max : : make ( x , y , true ) , Max : : make ( z , x , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Max , simplified . node ( ) , max ) ; <nl> + IS_VAR_WITH_NAME ( max - > lhs ( ) , " x " ) ; <nl> + IS_BINOP_W_VARS ( Min , max - > rhs ( ) , min , " y " , " z " ) ; <nl> + ASSERT_TRUE ( min - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( Max ( y , x ) , Max ( x , z ) ) = > Max ( x , Min ( y , z ) ) <nl> + ExprHandle body = <nl> + Min : : make ( Max : : make ( y , x , true ) , Max : : make ( x , z , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Max , simplified . 
node ( ) , max ) ; <nl> + IS_VAR_WITH_NAME ( max - > lhs ( ) , " x " ) ; <nl> + IS_BINOP_W_VARS ( Min , max - > rhs ( ) , min , " y " , " z " ) ; <nl> + ASSERT_TRUE ( min - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( Max ( y , x ) , Max ( z , x ) ) = > Max ( x , Min ( y , z ) ) <nl> + ExprHandle body = <nl> + Min : : make ( Max : : make ( y , x , true ) , Max : : make ( z , x , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Max , simplified . node ( ) , max ) ; <nl> + IS_VAR_WITH_NAME ( max - > lhs ( ) , " x " ) ; <nl> + IS_BINOP_W_VARS ( Min , max - > rhs ( ) , min , " y " , " z " ) ; <nl> + ASSERT_TRUE ( min - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( Max ( y , x ) , Max ( z , x ) ) = > Min ( Max ( x , z ) , Max ( x , y ) ) <nl> + / / When all the ops in the pattern do not have the same propagate_nans , <nl> + / / it should not be simplified . <nl> + ExprHandle body = <nl> + Min : : make ( Max : : make ( y , x , true ) , Max : : make ( z , x , false ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Min , simplified . node ( ) , min ) ; <nl> + IS_BINOP_W_VARS ( Max , min - > lhs ( ) , max1 , " x " , " z " ) ; <nl> + ASSERT_FALSE ( max1 - > propagate_nans ( ) ) ; <nl> + IS_BINOP_W_VARS ( Max , min - > rhs ( ) , max2 , " x " , " y " ) ; <nl> + ASSERT_TRUE ( max2 - > propagate_nans ( ) ) ; <nl> + ASSERT_TRUE ( min - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( 5 , Min ( x , 8 ) ) = > Min ( x , 8 ) <nl> + ExprHandle body = Min : : make ( 5 , Min : : make ( x , 8 , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_BINOP_W_CONST ( Min , simplified . 
node ( ) , min , " x " , 5 ) ; <nl> + ASSERT_TRUE ( min - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( 8 , Min ( x , 5 ) ) = > Min ( x , 8 ) <nl> + ExprHandle body = Min : : make ( 8 , Min : : make ( x , 5 , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_BINOP_W_CONST ( Min , simplified . node ( ) , min , " x " , 5 ) ; <nl> + ASSERT_TRUE ( min - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( Min ( x , 8 ) , 5 ) = > Min ( x , 8 ) <nl> + ExprHandle body = Min : : make ( Min : : make ( x , 8 , true ) , 5 , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_BINOP_W_CONST ( Min , simplified . node ( ) , min , " x " , 5 ) ; <nl> + ASSERT_TRUE ( min - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( Min ( x , 5 ) , 8 ) = > Min ( x , 8 ) <nl> + ExprHandle body = Min : : make ( Min : : make ( x , 5 , true ) , 8 , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_BINOP_W_CONST ( Min , simplified . node ( ) , min , " x " , 5 ) ; <nl> + ASSERT_TRUE ( min - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( 5 , Min ( x , Min ( y , Min ( z , 8 ) ) ) ) = > Min ( Min ( Min ( x , 5 ) , y ) , z ) <nl> + ExprHandle body = Min : : make ( <nl> + 5 , Min : : make ( x , Min : : make ( y , Min : : make ( z , 8 , true ) , true ) , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Min , simplified . 
node ( ) , min1 ) ; <nl> + IS_NODE_WITH_NAME ( Min , min1 - > lhs ( ) , min2 ) ; <nl> + IS_BINOP_W_CONST ( Min , min2 - > lhs ( ) , min3 , " x " , 5 ) ; <nl> + ASSERT_TRUE ( min3 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( min2 - > rhs ( ) , " y " ) ; <nl> + IS_VAR_WITH_NAME ( min1 - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( 5 , Min ( Min ( y , Min ( z , 8 ) ) , x ) ) = > Min ( Min ( Min ( x , 5 ) , y ) , z ) <nl> + ExprHandle body = Min : : make ( <nl> + 5 , Min : : make ( Min : : make ( y , Min : : make ( z , 8 , true ) , true ) , x , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Min , simplified . node ( ) , min1 ) ; <nl> + IS_NODE_WITH_NAME ( Min , min1 - > lhs ( ) , min2 ) ; <nl> + IS_BINOP_W_CONST ( Min , min2 - > lhs ( ) , min3 , " x " , 5 ) ; <nl> + ASSERT_TRUE ( min3 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( min2 - > rhs ( ) , " y " ) ; <nl> + IS_VAR_WITH_NAME ( min1 - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( 5 , Min ( Min ( Min ( z , 8 ) , y ) , x ) ) = > Min ( Min ( Min ( x , 5 ) , y ) , z ) <nl> + ExprHandle body = Min : : make ( <nl> + 5 , Min : : make ( Min : : make ( Min : : make ( z , 8 , true ) , y , true ) , x , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Min , simplified . 
node ( ) , min1 ) ; <nl> + IS_NODE_WITH_NAME ( Min , min1 - > lhs ( ) , min2 ) ; <nl> + IS_BINOP_W_CONST ( Min , min2 - > lhs ( ) , min3 , " x " , 5 ) ; <nl> + ASSERT_TRUE ( min3 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( min2 - > rhs ( ) , " y " ) ; <nl> + IS_VAR_WITH_NAME ( min1 - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( Min ( x , Min ( y , Min ( 8 , z ) ) ) , 5 ) = > Min ( Min ( Min ( x , 5 ) , y ) , z ) <nl> + ExprHandle body = Min : : make ( <nl> + Min : : make ( x , Min : : make ( y , Min : : make ( 8 , z , true ) , true ) , true ) , 5 , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Min , simplified . node ( ) , min1 ) ; <nl> + IS_NODE_WITH_NAME ( Min , min1 - > lhs ( ) , min2 ) ; <nl> + IS_BINOP_W_CONST ( Min , min2 - > lhs ( ) , min3 , " x " , 5 ) ; <nl> + ASSERT_TRUE ( min3 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( min2 - > rhs ( ) , " y " ) ; <nl> + IS_VAR_WITH_NAME ( min1 - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( Min ( Min ( y , Min ( 8 , z ) ) , x ) , 5 ) = > Min ( Min ( Min ( x , 5 ) , y ) , z ) <nl> + ExprHandle body = Min : : make ( <nl> + Min : : make ( Min : : make ( y , Min : : make ( z , 8 , true ) , true ) , x , true ) , 5 , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Min , simplified . 
node ( ) , min1 ) ; <nl> + IS_NODE_WITH_NAME ( Min , min1 - > lhs ( ) , min2 ) ; <nl> + IS_BINOP_W_CONST ( Min , min2 - > lhs ( ) , min3 , " x " , 5 ) ; <nl> + ASSERT_TRUE ( min3 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( min2 - > rhs ( ) , " y " ) ; <nl> + IS_VAR_WITH_NAME ( min1 - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( Min ( Min ( Min ( 8 , z ) , y ) , x ) , 5 ) = > Min ( Min ( Min ( x , 5 ) , y ) , z ) <nl> + ExprHandle body = Min : : make ( <nl> + Min : : make ( Min : : make ( Min : : make ( z , 8 , true ) , y , true ) , x , true ) , 5 , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Min , simplified . node ( ) , min1 ) ; <nl> + IS_NODE_WITH_NAME ( Min , min1 - > lhs ( ) , min2 ) ; <nl> + IS_BINOP_W_CONST ( Min , min2 - > lhs ( ) , min3 , " x " , 5 ) ; <nl> + ASSERT_TRUE ( min3 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( min2 - > rhs ( ) , " y " ) ; <nl> + IS_VAR_WITH_NAME ( min1 - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( Min ( Min ( Min ( z , 5 ) , y ) , x ) , 8 ) = > Min ( Min ( x , Min ( Min ( z , 5 ) , y ) ) , 8 ) <nl> + / / Do not simplify when all the Min ops do not have the same <nl> + / / propagate_nans . <nl> + ExprHandle body = Min : : make ( <nl> + Min : : make ( Min : : make ( Min : : make ( z , 5 , true ) , y , false ) , x , true ) , <nl> + 8 , <nl> + false ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Min , simplified . 
node ( ) , min1 ) ; <nl> + IS_NODE_WITH_NAME ( Min , min1 - > lhs ( ) , min2 ) ; <nl> + IS_VAR_WITH_NAME ( min2 - > lhs ( ) , " x " ) ; <nl> + IS_NODE_WITH_NAME ( Min , min2 - > rhs ( ) , min3 ) ; <nl> + IS_BINOP_W_CONST ( Min , min3 - > lhs ( ) , min4 , " z " , 5 ) ; <nl> + ASSERT_TRUE ( min4 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( min3 - > rhs ( ) , " y " ) ; <nl> + ASSERT_FALSE ( min3 - > propagate_nans ( ) ) ; <nl> + ASSERT_TRUE ( min2 - > propagate_nans ( ) ) ; <nl> + IS_IMM_WITH_VAL ( Int , min1 - > rhs ( ) , 8 ) ; <nl> + ASSERT_FALSE ( min1 - > propagate_nans ( ) ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( 8 , Min ( Min ( x , 5 ) , Min ( y , z ) ) ) = > Min ( Min ( Min ( x , 5 ) , y ) , z ) <nl> + ExprHandle body = Min : : make ( <nl> + 8 , Min : : make ( Min : : make ( x , 5 , true ) , Min : : make ( y , z , true ) , true ) , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Min , simplified . node ( ) , min1 ) ; <nl> + IS_NODE_WITH_NAME ( Min , min1 - > lhs ( ) , min2 ) ; <nl> + IS_BINOP_W_CONST ( Min , min2 - > lhs ( ) , min3 , " x " , 5 ) ; <nl> + ASSERT_TRUE ( min3 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( min2 - > rhs ( ) , " y " ) ; <nl> + IS_VAR_WITH_NAME ( min1 - > rhs ( ) , " z " ) ; <nl> + } <nl> + <nl> + { <nl> + / / Min ( Min ( Min ( x , 5 ) , Min ( y , z ) ) , 8 ) = > Min ( Min ( Min ( x , 5 ) , y ) , z ) <nl> + ExprHandle body = Min : : make ( <nl> + Min : : make ( Min : : make ( x , 5 , true ) , Min : : make ( y , z , true ) , true ) , 8 , true ) ; <nl> + ExprHandle simplified = IRSimplifier : : simplify ( body ) ; <nl> + <nl> + IS_NODE_WITH_NAME ( Min , simplified . 
node ( ) , min1 ) ; <nl> + IS_NODE_WITH_NAME ( Min , min1 - > lhs ( ) , min2 ) ; <nl> + IS_BINOP_W_CONST ( Min , min2 - > lhs ( ) , min3 , " x " , 5 ) ; <nl> + ASSERT_TRUE ( min3 - > propagate_nans ( ) ) ; <nl> + IS_VAR_WITH_NAME ( min2 - > rhs ( ) , " y " ) ; <nl> + IS_VAR_WITH_NAME ( min1 - > rhs ( ) , " z " ) ; <nl> + } <nl> + } <nl> + <nl> void testSimplifyWontReorderFloat ( ) { <nl> KernelScope kernel_scope ; <nl> <nl> mmm a / test / cpp / tensorexpr / tests . h <nl> ppp b / test / cpp / tensorexpr / tests . h <nl> namespace jit { <nl> _ ( SimplifyIfComponents ) \ <nl> _ ( SimplifyOpaqueTerms ) \ <nl> _ ( SimplifySymbolicMinMax ) \ <nl> + _ ( SimplifyNestedMax ) \ <nl> + _ ( SimplifyNestedMin ) \ <nl> _ ( SimplifyWontReorderFloat ) \ <nl> _ ( SimplifyRoundModPattern ) \ <nl> _ ( SimplifyRoundModPatternFactorization ) \ <nl> mmm a / torch / csrc / jit / tensorexpr / expr . h <nl> ppp b / torch / csrc / jit / tensorexpr / expr . h <nl> enum IRNodeType { <nl> kPolynomial , <nl> kTerm , <nl> kRoundOff , <nl> + kMaxTerm , <nl> + kMinTerm , <nl> kNone , <nl> kExtra <nl> } ; <nl> mmm a / torch / csrc / jit / tensorexpr / hash_provider . cpp <nl> ppp b / torch / csrc / jit / tensorexpr / hash_provider . 
cpp <nl> void HashProvider : : visit ( const Polynomial * v ) { <nl> <nl> putHash ( v , hash ) ; <nl> } <nl> + <nl> + void HashProvider : : visit ( const MaxTerm * v ) { <nl> + CACHE_GUARD ( ) ; <nl> + SimplifierHashType hash = hash_combine ( " maxterm " ) ; <nl> + if ( v - > scalar ( ) ) { <nl> + v - > scalar ( ) - > accept ( this ) ; <nl> + hash = hash_combine ( hash , hashOf ( v - > scalar ( ) ) ) ; <nl> + } <nl> + <nl> + for ( auto * c : v - > variables ( ) ) { <nl> + c - > accept ( this ) ; <nl> + hash = hash_combine ( hash , hashOf ( c ) ) ; <nl> + } <nl> + <nl> + putHash ( v , hash ) ; <nl> + } <nl> + <nl> + void HashProvider : : visit ( const MinTerm * v ) { <nl> + CACHE_GUARD ( ) ; <nl> + SimplifierHashType hash = hash_combine ( " minterm " ) ; <nl> + if ( v - > scalar ( ) ) { <nl> + v - > scalar ( ) - > accept ( this ) ; <nl> + hash = hash_combine ( hash , hashOf ( v - > scalar ( ) ) ) ; <nl> + } <nl> + <nl> + for ( auto * c : v - > variables ( ) ) { <nl> + c - > accept ( this ) ; <nl> + hash = hash_combine ( hash , hashOf ( c ) ) ; <nl> + } <nl> + <nl> + putHash ( v , hash ) ; <nl> + } <nl> + <nl> } / / namespace tensorexpr <nl> } / / namespace jit <nl> } / / namespace torch <nl> mmm a / torch / csrc / jit / tensorexpr / hash_provider . h <nl> ppp b / torch / csrc / jit / tensorexpr / hash_provider . h <nl> class TORCH_API HashProvider : public IRVisitor { <nl> void visit ( const Cond * v ) override ; <nl> void visit ( const Term * v ) override ; <nl> void visit ( const Polynomial * v ) override ; <nl> + void visit ( const MaxTerm * v ) override ; <nl> + void visit ( const MinTerm * v ) override ; <nl> <nl> template < typename . . . Types > <nl> SimplifierHashType hash_combine ( const Types & . . . args ) { <nl> mmm a / torch / csrc / jit / tensorexpr / ir . h <nl> ppp b / torch / csrc / jit / tensorexpr / ir . 
h <nl> class Intrinsics : public CallNode < Intrinsics > { <nl> <nl> class Polynomial ; <nl> class Term ; <nl> + class MaxTerm ; <nl> + class MinTerm ; <nl> <nl> class FunctionCall ; <nl> <nl> mmm a / torch / csrc / jit / tensorexpr / ir_mutator . cpp <nl> ppp b / torch / csrc / jit / tensorexpr / ir_mutator . cpp <nl> const Expr * IRMutator : : mutate ( const RoundOff * v ) { <nl> v - > lhs ( ) - > accept_mutator ( this ) , v - > rhs ( ) - > accept_mutator ( this ) ) ; <nl> } <nl> <nl> + const Expr * IRMutator : : mutate ( const MaxTerm * v ) { <nl> + const Expr * newScalar = nullptr ; <nl> + if ( v - > scalar ( ) ) { <nl> + newScalar = v - > scalar ( ) - > accept_mutator ( this ) ; <nl> + } <nl> + <nl> + std : : vector < const Expr * > variables ; <nl> + for ( const auto * t : v - > variables ( ) ) { <nl> + variables . push_back ( t - > accept_mutator ( this ) ) ; <nl> + } <nl> + return new MaxTerm ( v - > hasher ( ) , newScalar , v - > propagate_nans ( ) , variables ) ; <nl> + } <nl> + <nl> + const Expr * IRMutator : : mutate ( const MinTerm * v ) { <nl> + const Expr * newScalar = nullptr ; <nl> + if ( v - > scalar ( ) ) { <nl> + newScalar = v - > scalar ( ) - > accept_mutator ( this ) ; <nl> + } <nl> + <nl> + std : : vector < const Expr * > variables ; <nl> + for ( const auto * t : v - > variables ( ) ) { <nl> + variables . push_back ( t - > accept_mutator ( this ) ) ; <nl> + } <nl> + return new MinTerm ( v - > hasher ( ) , newScalar , v - > propagate_nans ( ) , variables ) ; <nl> + } <nl> + <nl> const Expr * IRMutator : : mutate ( const ReduceOp * v ) { <nl> const Expr * buf_new_expr = v - > accumulator ( ) - > accept_mutator ( this ) ; <nl> const Buf * buf_new = dynamic_cast < const Buf * > ( buf_new_expr ) ; <nl> mmm a / torch / csrc / jit / tensorexpr / ir_mutator . h <nl> ppp b / torch / csrc / jit / tensorexpr / ir_mutator . 
h <nl> class Stmt ; <nl> class Term ; <nl> class Polynomial ; <nl> class RoundOff ; <nl> + class MaxTerm ; <nl> + class MinTerm ; <nl> class ReduceOp ; <nl> class AtomicAdd ; <nl> class SyncThreads ; <nl> class TORCH_API IRMutator { <nl> virtual const Expr * mutate ( const Term * v ) ; <nl> virtual const Expr * mutate ( const Polynomial * v ) ; <nl> virtual const Expr * mutate ( const RoundOff * v ) ; <nl> + virtual const Expr * mutate ( const MaxTerm * v ) ; <nl> + virtual const Expr * mutate ( const MinTerm * v ) ; <nl> <nl> virtual const Expr * mutate ( const ReduceOp * v ) ; <nl> <nl> mmm a / torch / csrc / jit / tensorexpr / ir_printer . cpp <nl> ppp b / torch / csrc / jit / tensorexpr / ir_printer . cpp <nl> void IRPrinter : : visit ( const RoundOff * v ) { <nl> os ( ) < < " ) " ; <nl> } <nl> <nl> + void IRPrinter : : visit ( const MaxTerm * v ) { <nl> + os ( ) < < " MaxTerm ( " ; <nl> + if ( v - > scalar ( ) ) { <nl> + v - > scalar ( ) - > accept ( this ) ; <nl> + os ( ) < < " , " ; <nl> + } <nl> + for ( size_t i = 0 ; i < v - > variables ( ) . size ( ) ; + + i ) { <nl> + v - > variables ( ) [ i ] - > accept ( this ) ; <nl> + if ( i < v - > variables ( ) . size ( ) - 1 ) { <nl> + os ( ) < < " , " ; <nl> + } <nl> + } <nl> + os ( ) < < " ) " ; <nl> + } <nl> + <nl> + void IRPrinter : : visit ( const MinTerm * v ) { <nl> + os ( ) < < " MinTerm ( " ; <nl> + if ( v - > scalar ( ) ) { <nl> + v - > scalar ( ) - > accept ( this ) ; <nl> + os ( ) < < " , " ; <nl> + } <nl> + for ( size_t i = 0 ; i < v - > variables ( ) . size ( ) ; + + i ) { <nl> + v - > variables ( ) [ i ] - > accept ( this ) ; <nl> + if ( i < v - > variables ( ) . size ( ) - 1 ) { <nl> + os ( ) < < " , " ; <nl> + } <nl> + } <nl> + os ( ) < < " ) " ; <nl> + } <nl> + <nl> void IRPrinter : : visit ( const ReduceOp * v ) { <nl> os ( ) < < " ReduceOp ( " ; <nl> os ( ) < < * v - > accumulator ( ) < < " , " ; <nl> mmm a / torch / csrc / jit / tensorexpr / ir_printer . 
h <nl> ppp b / torch / csrc / jit / tensorexpr / ir_printer . h <nl> class TORCH_API IRPrinter : public IRVisitor { <nl> void visit ( const Term * v ) override ; <nl> void visit ( const Polynomial * v ) override ; <nl> void visit ( const RoundOff * v ) override ; <nl> + void visit ( const MaxTerm * v ) override ; <nl> + void visit ( const MinTerm * v ) override ; <nl> void visit ( const ReduceOp * v ) override ; <nl> <nl> void visit ( const AtomicAdd * v ) override ; <nl> mmm a / torch / csrc / jit / tensorexpr / ir_simplifier . cpp <nl> ppp b / torch / csrc / jit / tensorexpr / ir_simplifier . cpp <nl> void Polynomial : : sort ( ) { <nl> } ) ; <nl> } <nl> <nl> + void MaxTerm : : uniquefy ( ) { <nl> + std : : sort ( <nl> + variables_ . begin ( ) , variables_ . end ( ) , [ & ] ( const Expr * a , const Expr * b ) { <nl> + return hasher_ . hash ( a ) < hasher_ . hash ( b ) ; <nl> + } ) ; <nl> + auto it = std : : unique ( <nl> + variables_ . begin ( ) , variables_ . end ( ) , [ & ] ( const Expr * a , const Expr * b ) { <nl> + return hasher_ . hash ( a ) = = hasher_ . hash ( b ) ; <nl> + } ) ; <nl> + variables_ . resize ( std : : distance ( variables_ . begin ( ) , it ) ) ; <nl> + } <nl> + <nl> + void MinTerm : : uniquefy ( ) { <nl> + std : : sort ( <nl> + variables_ . begin ( ) , variables_ . end ( ) , [ & ] ( const Expr * a , const Expr * b ) { <nl> + return hasher_ . hash ( a ) < hasher_ . hash ( b ) ; <nl> + } ) ; <nl> + auto it = std : : unique ( <nl> + variables_ . begin ( ) , variables_ . end ( ) , [ & ] ( const Expr * a , const Expr * b ) { <nl> + return hasher_ . hash ( a ) = = hasher_ . hash ( b ) ; <nl> + } ) ; <nl> + variables_ . resize ( std : : distance ( variables_ . 
begin ( ) , it ) ) ; <nl> + } <nl> + <nl> / / Handles optimization cases for Broadcast / Ramp + / - Broadcast / Ramp <nl> template < class Op > <nl> const Expr * combineMultilane ( const Expr * lhs , const Expr * rhs ) { <nl> const Expr * PolynomialTransformer : : mutate ( const Div * v ) { <nl> return new Div ( lhs_new , rhs_new ) ; <nl> } <nl> <nl> + namespace { <nl> + <nl> + / / Combines two MinTerm / MaxTerm expressions into one . <nl> + / / The first type on the template refers to the op , as in Min or Max and the <nl> + / / second type refers to the corresponding term , as in MinTerm or MaxTerm . <nl> + template < class Op , class OpTerm > <nl> + const Expr * combineMinMaxTerms ( <nl> + const Expr * lhs , <nl> + const Expr * rhs , <nl> + bool propagate_nans , <nl> + HashProvider & hasher ) { <nl> + auto combine_scalars = [ & ] ( const Expr * c1 , const Expr * c2 ) - > const Expr * { <nl> + if ( c1 & & c2 ) { <nl> + return evaluateOp ( new Op ( c1 , c2 , propagate_nans ) ) ; <nl> + } <nl> + if ( c1 ) { <nl> + return c1 ; <nl> + } <nl> + return c2 ; <nl> + } ; <nl> + <nl> + auto combine_opterms = [ & ] ( const OpTerm * m1 , const OpTerm * m2 ) { <nl> + const Expr * scalar = combine_scalars ( m1 - > scalar ( ) , m2 - > scalar ( ) ) ; <nl> + std : : vector < const Expr * > variables ; <nl> + for ( auto v : m1 - > variables ( ) ) { <nl> + variables . push_back ( v ) ; <nl> + } <nl> + for ( auto v : m2 - > variables ( ) ) { <nl> + variables . 
push_back ( v ) ; <nl> + } <nl> + return new OpTerm ( hasher , scalar , propagate_nans , std : : move ( variables ) ) ; <nl> + } ; <nl> + <nl> + auto add_expr_to_opterm = [ & ] ( const Expr * expr , const OpTerm * opterm ) { <nl> + const Expr * scalar = nullptr ; <nl> + std : : vector < const Expr * > variables ; <nl> + if ( opterm ) { <nl> + scalar = opterm - > scalar ( ) ; <nl> + variables = opterm - > variables ( ) ; <nl> + } <nl> + if ( expr - > isConstant ( ) ) { <nl> + scalar = combine_scalars ( scalar , expr ) ; <nl> + } else { <nl> + variables . push_back ( expr ) ; <nl> + } <nl> + return new OpTerm ( hasher , scalar , propagate_nans , std : : move ( variables ) ) ; <nl> + } ; <nl> + <nl> + const OpTerm * lhs_opterm = dynamic_cast < const OpTerm * > ( lhs ) ; <nl> + const OpTerm * rhs_opterm = dynamic_cast < const OpTerm * > ( rhs ) ; <nl> + if ( lhs_opterm & & lhs_opterm - > propagate_nans ( ) ! = propagate_nans ) { <nl> + return new Op ( lhs , rhs , propagate_nans ) ; <nl> + } <nl> + if ( rhs_opterm & & rhs_opterm - > propagate_nans ( ) ! = propagate_nans ) { <nl> + return new Op ( lhs , rhs , propagate_nans ) ; <nl> + } <nl> + <nl> + if ( lhs_opterm & & rhs_opterm ) { <nl> + return combine_opterms ( lhs_opterm , rhs_opterm ) ; <nl> + } else if ( lhs_opterm ) { <nl> + return add_expr_to_opterm ( rhs , lhs_opterm ) ; <nl> + } else if ( rhs_opterm ) { <nl> + return add_expr_to_opterm ( lhs , rhs_opterm ) ; <nl> + } <nl> + return add_expr_to_opterm ( rhs , add_expr_to_opterm ( lhs , nullptr ) ) ; <nl> + } <nl> + <nl> + / / Returns true if op is one of the 2 operands in opterm and also returns <nl> + / / the other op of opterm in other_op . <nl> + template < class OpTerm > <nl> + bool isOperandInMinMaxTerm ( <nl> + const OpTerm * opterm , <nl> + const Expr * op , <nl> + HashProvider & hasher , <nl> + const Expr * * other_op ) { <nl> + if ( opterm - > variables ( ) . size ( ) ! 
= 2 ) { <nl> + return false ; <nl> + } <nl> + auto lhs = opterm - > variables ( ) [ 0 ] ; <nl> + auto rhs = opterm - > variables ( ) [ 1 ] ; <nl> + auto op_hash = hasher . hash ( op ) ; <nl> + if ( hasher . hash ( lhs ) = = op_hash ) { <nl> + * other_op = rhs ; <nl> + return true ; <nl> + } else if ( hasher . hash ( rhs ) = = op_hash ) { <nl> + * other_op = lhs ; <nl> + return true ; <nl> + } <nl> + return false ; <nl> + } ; <nl> + <nl> + / / Simplifies the nested min - max pattern like : <nl> + / / * Max ( Min ( x , y ) , Min ( x , z ) ) = > Min ( x , Max ( y , z ) ) <nl> + / / * Min ( Max ( x , y ) , Max ( x , z ) ) = > Max ( x , Min ( y , z ) ) <nl> + / / This function is called while processing the outer Min / Max ops . <nl> + / / At that point the inner Min / Max ops would have been converted to <nl> + / / MinTerm / MaxTerm as appropriate . So , this function checks for those <nl> + / / term expressions in the given lhs and rhs . <nl> + / / <nl> + / / The first type of the template must be the term type corresponding to the <nl> + / / outer op ( e . g . MaxTerm ) and the second type of the template must be the term <nl> + / / type corresponding to the expected inner op ( e . g . MinTerm ) . <nl> + template < class OpTerm , class OtherOpTerm > <nl> + bool simplifyNestedMinMax ( <nl> + const Expr * lhs , <nl> + const Expr * rhs , <nl> + bool propagate_nans , <nl> + HashProvider & hasher , <nl> + const Expr * * new_op ) { <nl> + auto lhs_opterm = dynamic_cast < const OtherOpTerm * > ( lhs ) ; <nl> + auto rhs_opterm = dynamic_cast < const OtherOpTerm * > ( rhs ) ; <nl> + if ( lhs_opterm & & rhs_opterm & & <nl> + lhs_opterm - > propagate_nans ( ) = = propagate_nans & & <nl> + rhs_opterm - > propagate_nans ( ) = = propagate_nans ) { <nl> + if ( ! lhs_opterm - > scalar ( ) & & ! rhs_opterm - > scalar ( ) ) { <nl> + if ( lhs_opterm - > variables ( ) . size ( ) = = 2 & & <nl> + rhs_opterm - > variables ( ) . 
size ( ) = = 2 ) { <nl> + auto rhs_v1 = rhs_opterm - > variables ( ) [ 0 ] ; <nl> + auto rhs_v2 = rhs_opterm - > variables ( ) [ 1 ] ; <nl> + const Expr * new_op_lhs ; <nl> + if ( isOperandInMinMaxTerm < OtherOpTerm > ( <nl> + lhs_opterm , rhs_v1 , hasher , & new_op_lhs ) ) { <nl> + auto inner_op = <nl> + new OpTerm ( hasher , nullptr , propagate_nans , new_op_lhs , rhs_v2 ) ; <nl> + * new_op = new OtherOpTerm ( <nl> + hasher , nullptr , propagate_nans , rhs_v1 , inner_op ) ; <nl> + return true ; <nl> + } <nl> + if ( isOperandInMinMaxTerm < OtherOpTerm > ( <nl> + lhs_opterm , rhs_v2 , hasher , & new_op_lhs ) ) { <nl> + auto inner_op = <nl> + new OpTerm ( hasher , nullptr , propagate_nans , new_op_lhs , rhs_v1 ) ; <nl> + * new_op = new OtherOpTerm ( <nl> + hasher , nullptr , propagate_nans , rhs_v2 , inner_op ) ; <nl> + return true ; <nl> + } <nl> + } <nl> + } <nl> + } <nl> + return false ; <nl> + } <nl> + <nl> + } / / namespace <nl> + <nl> const Expr * PolynomialTransformer : : mutate ( const Max * v ) { <nl> const Expr * lhs_new = v - > lhs ( ) - > accept_mutator ( this ) ; <nl> const Expr * rhs_new = v - > rhs ( ) - > accept_mutator ( this ) ; <nl> const Expr * PolynomialTransformer : : mutate ( const Max * v ) { <nl> return evaluateOp ( new Max ( lhs_new , rhs_new , v - > propagate_nans ( ) ) ) ; <nl> } <nl> <nl> + / / If diff is constant , return the appropriate operand . <nl> const Expr * diff = new Sub ( lhs_new , rhs_new ) ; <nl> diff = diff - > accept_mutator ( this ) ; <nl> - if ( ! 
diff - > isConstant ( ) ) { <nl> - return new Max ( lhs_new , rhs_new , v - > propagate_nans ( ) ) ; <nl> + if ( diff - > isConstant ( ) ) { <nl> + if ( immediateAs < int > ( diff ) > 0 ) { <nl> + return lhs_new ; <nl> + } <nl> + return rhs_new ; <nl> } <nl> <nl> - if ( immediateAs < int > ( diff ) > 0 ) { <nl> - return lhs_new ; <nl> + / / Max ( Min ( x , y ) , Min ( x , z ) ) = > Min ( x , Max ( y , z ) ) <nl> + const Expr * new_op ; <nl> + if ( simplifyNestedMinMax < MaxTerm , MinTerm > ( <nl> + lhs_new , rhs_new , v - > propagate_nans ( ) , hasher_ , & new_op ) ) { <nl> + return new_op ; <nl> } <nl> <nl> - return rhs_new ; <nl> + return combineMinMaxTerms < Max , MaxTerm > ( <nl> + lhs_new , rhs_new , v - > propagate_nans ( ) , hasher_ ) ; <nl> } <nl> <nl> const Expr * PolynomialTransformer : : mutate ( const Min * v ) { <nl> const Expr * PolynomialTransformer : : mutate ( const Min * v ) { <nl> return evaluateOp ( new Min ( lhs_new , rhs_new , v - > propagate_nans ( ) ) ) ; <nl> } <nl> <nl> + / / If diff is constant , return the appropriate operand . <nl> const Expr * diff = new Sub ( lhs_new , rhs_new ) ; <nl> diff = diff - > accept_mutator ( this ) ; <nl> - if ( ! 
diff - > isConstant ( ) ) { <nl> - return new Min ( lhs_new , rhs_new , v - > propagate_nans ( ) ) ; <nl> + if ( diff - > isConstant ( ) ) { <nl> + if ( immediateAs < int > ( diff ) < 0 ) { <nl> + return lhs_new ; <nl> + } <nl> + return rhs_new ; <nl> } <nl> <nl> - if ( immediateAs < int > ( diff ) < 0 ) { <nl> - return lhs_new ; <nl> + / / Min ( Max ( x , y ) , Max ( x , z ) ) = > Max ( x , Min ( y , z ) ) <nl> + const Expr * new_op ; <nl> + if ( simplifyNestedMinMax < MinTerm , MaxTerm > ( <nl> + lhs_new , rhs_new , v - > propagate_nans ( ) , hasher_ , & new_op ) ) { <nl> + return new_op ; <nl> } <nl> <nl> - return rhs_new ; <nl> + return combineMinMaxTerms < Min , MinTerm > ( <nl> + lhs_new , rhs_new , v - > propagate_nans ( ) , hasher_ ) ; <nl> } <nl> <nl> const Expr * PolynomialTransformer : : mutate ( const CompareSelect * v ) { <nl> const Expr * TermExpander : : mutate ( const Polynomial * v ) { <nl> return lastNode ; <nl> } <nl> <nl> + const Expr * TermExpander : : mutate ( const MaxTerm * v ) { <nl> + const auto & variables = v - > variables ( ) ; <nl> + if ( variables . empty ( ) ) { <nl> + if ( ! v - > scalar ( ) ) { <nl> + / / This case should never happen because MaxTerm will be created only <nl> + / / on valid Max expressions . <nl> + throw std : : logic_error ( " empty maxterm op " ) ; <nl> + } <nl> + return v - > scalar ( ) ; <nl> + } <nl> + const Expr * max ; <nl> + if ( v - > scalar ( ) ) { <nl> + max = new Max ( variables [ 0 ] , v - > scalar ( ) , v - > propagate_nans ( ) ) ; <nl> + } else { <nl> + max = variables [ 0 ] ; <nl> + } <nl> + for ( size_t i = 1 ; i < variables . size ( ) ; i + + ) { <nl> + max = new Max ( max , variables [ i ] , v - > propagate_nans ( ) ) ; <nl> + } <nl> + return max - > accept_mutator ( this ) ; <nl> + } <nl> + <nl> + const Expr * TermExpander : : mutate ( const MinTerm * v ) { <nl> + const auto & variables = v - > variables ( ) ; <nl> + if ( variables . empty ( ) ) { <nl> + if ( ! 
v - > scalar ( ) ) { <nl> + / / This case should never happen because MinTerm will be created only <nl> + / / on valid Min expressions . <nl> + throw std : : logic_error ( " empty minterm op " ) ; <nl> + } <nl> + return v - > scalar ( ) ; <nl> + } <nl> + const Expr * min ; <nl> + if ( v - > scalar ( ) ) { <nl> + min = new Min ( variables [ 0 ] , v - > scalar ( ) , v - > propagate_nans ( ) ) ; <nl> + } else { <nl> + min = variables [ 0 ] ; <nl> + } <nl> + for ( size_t i = 1 ; i < variables . size ( ) ; i + + ) { <nl> + min = new Min ( min , variables [ i ] , v - > propagate_nans ( ) ) ; <nl> + } <nl> + return min - > accept_mutator ( this ) ; <nl> + } <nl> + <nl> / / Expands RoundOff ( x , y ) = > Term ( 1 , Div ( x , y ) , y ) , which will later be expanded <nl> / / to Mul ( Div ( x , y ) , y ) . <nl> const Expr * TermExpander : : mutate ( const RoundOff * v ) { <nl> mmm a / torch / csrc / jit / tensorexpr / ir_simplifier . h <nl> ppp b / torch / csrc / jit / tensorexpr / ir_simplifier . h <nl> class RoundOff : public BinaryOpNode < RoundOff > { <nl> : BinaryOpNode ( lhs , rhs , IRNodeType : : kRoundOff ) { } <nl> } ; <nl> <nl> + class MaxTerm : public ExprNode < MaxTerm > { <nl> + public : <nl> + template < class . . . Args > <nl> + MaxTerm ( HashProvider & hasher , const Expr * s , bool p , Args . . . ts ) <nl> + : ExprNodeBase ( s ? promoteTypesVar ( s , ts . . . ) : promoteTypesVar ( ts . . . ) ) , <nl> + scalar_ ( s ) , <nl> + hasher_ ( hasher ) , <nl> + propagate_nans_ ( p ) { <nl> + addComponent ( ts . . . ) ; <nl> + uniquefy ( ) ; <nl> + } <nl> + <nl> + MaxTerm ( <nl> + HashProvider & hasher , <nl> + const Expr * s , <nl> + bool p , <nl> + std : : vector < const Expr * > v ) <nl> + : ExprNodeBase ( s ? 
promoteTypesVec ( s , v ) : promoteTypesVec ( v ) ) , <nl> + variables_ ( std : : move ( v ) ) , <nl> + scalar_ ( s ) , <nl> + hasher_ ( hasher ) , <nl> + propagate_nans_ ( p ) { <nl> + uniquefy ( ) ; <nl> + } <nl> + <nl> + bool propagate_nans ( ) const { <nl> + return propagate_nans_ ; <nl> + } <nl> + <nl> + const Expr * scalar ( ) const { <nl> + return scalar_ ; <nl> + } <nl> + const std : : vector < const Expr * > & variables ( ) const { <nl> + return variables_ ; <nl> + } <nl> + HashProvider & hasher ( ) const { <nl> + return hasher_ ; <nl> + } <nl> + <nl> + private : <nl> + std : : vector < const Expr * > variables_ ; <nl> + const Expr * scalar_ ; <nl> + HashProvider & hasher_ ; <nl> + bool propagate_nans_ ; <nl> + <nl> + void addComponent ( ) { } <nl> + void addComponent ( const Expr * e ) { <nl> + variables_ . push_back ( e ) ; <nl> + } <nl> + template < class . . . Es > <nl> + void addComponent ( const Expr * e , Es . . . es ) { <nl> + addComponent ( e ) ; <nl> + addComponent ( es . . . ) ; <nl> + } <nl> + <nl> + / / Uniquefy the terms using their hash . <nl> + void uniquefy ( ) ; <nl> + } ; <nl> + <nl> + class MinTerm : public ExprNode < MinTerm > { <nl> + public : <nl> + template < class . . . Args > <nl> + MinTerm ( HashProvider & hasher , const Expr * s , bool p , Args . . . ts ) <nl> + : ExprNodeBase ( s ? promoteTypesVar ( s , ts . . . ) : promoteTypesVar ( ts . . . ) ) , <nl> + scalar_ ( s ) , <nl> + hasher_ ( hasher ) , <nl> + propagate_nans_ ( p ) { <nl> + addComponent ( ts . . . ) ; <nl> + uniquefy ( ) ; <nl> + } <nl> + <nl> + MinTerm ( <nl> + HashProvider & hasher , <nl> + const Expr * s , <nl> + bool p , <nl> + std : : vector < const Expr * > v ) <nl> + : ExprNodeBase ( s ? 
promoteTypesVec ( s , v ) : promoteTypesVec ( v ) ) , <nl> + variables_ ( std : : move ( v ) ) , <nl> + scalar_ ( s ) , <nl> + hasher_ ( hasher ) , <nl> + propagate_nans_ ( p ) { <nl> + uniquefy ( ) ; <nl> + } <nl> + <nl> + bool propagate_nans ( ) const { <nl> + return propagate_nans_ ; <nl> + } <nl> + <nl> + const Expr * scalar ( ) const { <nl> + return scalar_ ; <nl> + } <nl> + const std : : vector < const Expr * > & variables ( ) const { <nl> + return variables_ ; <nl> + } <nl> + HashProvider & hasher ( ) const { <nl> + return hasher_ ; <nl> + } <nl> + <nl> + private : <nl> + std : : vector < const Expr * > variables_ ; <nl> + const Expr * scalar_ ; <nl> + HashProvider & hasher_ ; <nl> + bool propagate_nans_ ; <nl> + <nl> + void addComponent ( ) { } <nl> + void addComponent ( const Expr * e ) { <nl> + variables_ . push_back ( e ) ; <nl> + } <nl> + template < class . . . Es > <nl> + void addComponent ( const Expr * e , Es . . . es ) { <nl> + addComponent ( e ) ; <nl> + addComponent ( es . . . ) ; <nl> + } <nl> + <nl> + / / Uniquefy the terms using their hash . <nl> + void uniquefy ( ) ; <nl> + } ; <nl> + <nl> / / Stmt simplification should occur in both modes . <nl> class TORCH_API IRSimplifierBase : public IRMutator { <nl> public : <nl> class TORCH_API TermExpander : public IRSimplifierBase { <nl> / / Expand Polynomials out to a series of Adds . <nl> const Expr * mutate ( const Polynomial * v ) override ; <nl> <nl> + / / Expand MaxTerms to a series of Max ops . <nl> + const Expr * mutate ( const MaxTerm * v ) override ; <nl> + <nl> + / / Expand MinTerms to a series of Min ops . <nl> + const Expr * mutate ( const MinTerm * v ) override ; <nl> + <nl> / / Expand RoundOff to it ' s component : Mul ( Div ( lhs , rhs ) , rhs ) . <nl> const Expr * mutate ( const RoundOff * v ) override ; <nl> <nl> mmm a / torch / csrc / jit / tensorexpr / ir_visitor . cpp <nl> ppp b / torch / csrc / jit / tensorexpr / ir_visitor . 
cpp <nl> void IRVisitor : : visit ( const RoundOff * v ) { <nl> v - > rhs ( ) - > accept ( this ) ; <nl> } <nl> <nl> + void IRVisitor : : visit ( const MaxTerm * v ) { <nl> + v - > scalar ( ) - > accept ( this ) ; <nl> + for ( auto * t : v - > variables ( ) ) { <nl> + t - > accept ( this ) ; <nl> + } <nl> + } <nl> + <nl> + void IRVisitor : : visit ( const MinTerm * v ) { <nl> + v - > scalar ( ) - > accept ( this ) ; <nl> + for ( auto * t : v - > variables ( ) ) { <nl> + t - > accept ( this ) ; <nl> + } <nl> + } <nl> + <nl> void IRVisitor : : visit ( const ReduceOp * v ) { <nl> v - > accumulator ( ) - > accept ( this ) ; <nl> v - > body ( ) . node ( ) - > accept ( this ) ; <nl> mmm a / torch / csrc / jit / tensorexpr / ir_visitor . h <nl> ppp b / torch / csrc / jit / tensorexpr / ir_visitor . h <nl> class Cond ; <nl> class Term ; <nl> class Polynomial ; <nl> class RoundOff ; <nl> + class MaxTerm ; <nl> + class MinTerm ; <nl> class ReduceOp ; <nl> class AtomicAdd ; <nl> class SyncThreads ; <nl> class TORCH_API IRVisitor { <nl> virtual void visit ( const Term * v ) ; <nl> virtual void visit ( const Polynomial * v ) ; <nl> virtual void visit ( const RoundOff * v ) ; <nl> + virtual void visit ( const MaxTerm * v ) ; <nl> + virtual void visit ( const MinTerm * v ) ; <nl> virtual void visit ( const ReduceOp * v ) ; <nl> virtual void visit ( const AtomicAdd * v ) ; <nl> virtual void visit ( const SyncThreads * v ) ; <nl>
|
Simplify nested Min and Max patterns. ( )
|
pytorch/pytorch
|
ad7a2eb1c9f71331afcbcd10f0b39cb2d6fa4259
|
2020-09-14T20:24:46Z
|
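The PyTorch diff above adds `MaxTerm`/`MinTerm` nodes so the simplifier can flatten nested Min/Max chains, fold their constants, and apply rewrites like `Max(Min(x, y), Min(x, z)) => Min(x, Max(y, z))`. As a standalone sanity sketch (not part of the PR, and working on plain integer values rather than IR nodes), the value-level identities behind those rewrites can be checked exhaustively over a small range:

```cpp
#include <algorithm>
#include <cassert>

// Value-level check of two rewrite rules exercised by the tests above:
//  1. Max(Min(x, y), Min(x, z)) == Min(x, Max(y, z))   (distributivity)
//  2. Min(5, Min(Min(y, Min(z, 8)), x)) == Min(Min(Min(x, 5), y), z)
//     (nested Min chains flatten and constants fold: min(5, 8) == 5)
bool nestedMinMaxRulesHold() {
  // Range straddles the constants 5 and 8 so constant folding is exercised.
  for (int x = -2; x <= 10; ++x) {
    for (int y = -2; y <= 10; ++y) {
      for (int z = -2; z <= 10; ++z) {
        int lhs1 = std::max(std::min(x, y), std::min(x, z));
        int rhs1 = std::min(x, std::max(y, z));
        if (lhs1 != rhs1) {
          return false;
        }
        int lhs2 = std::min(5, std::min(std::min(y, std::min(z, 8)), x));
        int rhs2 = std::min(std::min(std::min(x, 5), y), z);
        if (lhs2 != rhs2) {
          return false;
        }
      }
    }
  }
  return true;
}
```

Note this only confirms the rewrites preserve value semantics for ordinary numbers; the `propagate_nans` flag exists because the equivalence is more delicate for floating point, which is why the simplifier refuses to combine terms whose `propagate_nans` flags differ.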
mmm a / dbms / src / Interpreters / Context . cpp <nl> ppp b / dbms / src / Interpreters / Context . cpp <nl> void Context : : setFormatSchemaPath ( const String & path ) <nl> shared - > format_schema_path = path ; <nl> } <nl> <nl> - Context : : getSampleBlockCacheType & Context : : getSampleBlockCache ( ) const <nl> + Context : : SampleBlockCache & Context : : getSampleBlockCache ( ) const <nl> { <nl> - return getQueryContext ( ) . get_sample_block_cache ; <nl> + return getQueryContext ( ) . sample_block_cache ; <nl> } <nl> <nl> std : : shared_ptr < ActionLocksManager > Context : : getActionLocksManager ( ) <nl> mmm a / dbms / src / Interpreters / Context . h <nl> ppp b / dbms / src / Interpreters / Context . h <nl> class Context <nl> UInt64 session_close_cycle = 0 ; <nl> bool session_is_used = false ; <nl> <nl> + using SampleBlockCache = std : : unordered_map < std : : string , Block > ; <nl> + mutable SampleBlockCache sample_block_cache ; <nl> + <nl> using DatabasePtr = std : : shared_ptr < IDatabase > ; <nl> using Databases = std : : map < String , std : : shared_ptr < IDatabase > > ; <nl> <nl> class Context <nl> / / / User name and session identifier . Named sessions are local to users . <nl> using SessionKey = std : : pair < String , String > ; <nl> <nl> - using getSampleBlockCacheType = std : : unordered_map < std : : string , Block > ; <nl> - mutable Context : : getSampleBlockCacheType get_sample_block_cache ; <nl> - getSampleBlockCacheType & getSampleBlockCache ( ) const ; <nl> + SampleBlockCache & getSampleBlockCache ( ) const ; <nl> <nl> private : <nl> / * * Check if the current client has access to the specified database . <nl>
|
Fixed extremely bad code
|
ClickHouse/ClickHouse
|
21508df7c20c41ae49672b143ab3e2b7d4869950
|
2018-07-06T00:27:47Z
|
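The ClickHouse change is a naming cleanup: the cache's type alias becomes a noun (`SampleBlockCache` instead of `getSampleBlockCacheType`, which was named after its accessor) and the member drops the `get_` prefix. A minimal standalone sketch of the resulting pattern, with `Block` reduced to a stub (the real one holds columns):

```cpp
#include <string>
#include <unordered_map>

// Stub standing in for DB::Block.
struct Block {
    int rows = 0;
};

class Context {
public:
    // Noun-style alias for the cache type, as in the cleaned-up code.
    using SampleBlockCache = std::unordered_map<std::string, Block>;

    // Per-context cache of sample blocks keyed by query text; the member is
    // mutable so const callers can still populate the cache.
    SampleBlockCache & getSampleBlockCache() const {
        return sample_block_cache;
    }

private:
    mutable SampleBlockCache sample_block_cache;
};
```

The `mutable` member behind a const accessor mirrors the original: caching is an implementation detail that should not force every caller to hold a non-const `Context`.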
mmm a / Telegram / SourceFiles / platform / linux / specific_linux . cpp <nl> ppp b / Telegram / SourceFiles / platform / linux / specific_linux . cpp <nl> void PortalAutostart ( bool autostart , bool silent = false ) { <nl> QVariantMap options ; <nl> options [ " reason " ] = tr : : lng_settings_auto_start ( tr : : now ) ; <nl> options [ " autostart " ] = autostart ; <nl> - options [ " commandline " ] = QStringList ( { <nl> + options [ " commandline " ] = QStringList { <nl> cExeName ( ) , <nl> + qsl ( " - workdir " ) , <nl> + cWorkingDir ( ) , <nl> qsl ( " - autostart " ) <nl> - } ) ; <nl> + } ; <nl> options [ " dbus - activatable " ] = false ; <nl> <nl> auto message = QDBusMessage : : createMethodCall ( <nl> uint FileChooserPortalVersion ( ) { <nl> } <nl> # endif / / ! DESKTOP_APP_DISABLE_DBUS_INTEGRATION <nl> <nl> + QString EscapeShellInLauncher ( const QString & content ) { <nl> + return EscapeShell ( content . toUtf8 ( ) ) . replace ( ' \ \ ' , " \ \ \ \ " ) ; <nl> + } <nl> + <nl> QString FlatpakID ( ) { <nl> static const auto Result = [ ] { <nl> if ( ! qEnvironmentVariableIsEmpty ( " FLATPAK_ID " ) ) { <nl> bool GenerateDesktopFile ( <nl> <nl> QFile target ( targetFile ) ; <nl> if ( target . open ( QIODevice : : WriteOnly ) ) { <nl> - if ( ! Core : : UpdaterDisabled ( ) ) { <nl> - fileText = fileText . replace ( <nl> - QRegularExpression ( <nl> - qsl ( " ^ TryExec = . * $ " ) , <nl> - QRegularExpression : : MultilineOption ) , <nl> - qsl ( " TryExec = " ) <nl> - + QFile : : encodeName ( cExeDir ( ) + cExeName ( ) ) <nl> - . replace ( ' \ \ ' , " \ \ \ \ " ) ) ; <nl> - fileText = fileText . replace ( <nl> - QRegularExpression ( <nl> - qsl ( " ^ Exec = . * $ " ) , <nl> - QRegularExpression : : MultilineOption ) , <nl> - qsl ( " Exec = " ) <nl> - + EscapeShell ( QFile : : encodeName ( cExeDir ( ) + cExeName ( ) ) ) <nl> - . replace ( ' \ \ ' , " \ \ \ \ " ) <nl> - + ( args . isEmpty ( ) ? QString ( ) : ' ' + args ) ) ; <nl> - } else { <nl> - fileText = fileText . 
replace ( <nl> - QRegularExpression ( <nl> - qsl ( " ^ Exec = ( . * ) - - % u $ " ) , <nl> - QRegularExpression : : MultilineOption ) , <nl> - qsl ( " Exec = \ \ 1 " ) <nl> - + ( args . isEmpty ( ) ? QString ( ) : ' ' + args ) ) ; <nl> - } <nl> + fileText = fileText . replace ( <nl> + QRegularExpression ( <nl> + qsl ( " ^ TryExec = . * $ " ) , <nl> + QRegularExpression : : MultilineOption ) , <nl> + qsl ( " TryExec = % 1 " ) . arg ( <nl> + QString ( cExeDir ( ) + cExeName ( ) ) . replace ( ' \ \ ' , " \ \ \ \ " ) ) ) ; <nl> + <nl> + fileText = fileText . replace ( <nl> + QRegularExpression ( <nl> + qsl ( " ^ Exec = . * $ " ) , <nl> + QRegularExpression : : MultilineOption ) , <nl> + qsl ( " Exec = % 1 - workdir % 2 " ) . arg ( <nl> + EscapeShellInLauncher ( cExeDir ( ) + cExeName ( ) ) , <nl> + EscapeShellInLauncher ( cWorkingDir ( ) ) ) <nl> + + ( args . isEmpty ( ) ? QString ( ) : ' ' + args ) ) ; <nl> <nl> target . write ( fileText . toUtf8 ( ) ) ; <nl> target . close ( ) ; <nl> <nl> - if ( IsStaticBinary ( ) ) { <nl> + if ( ! Core : : UpdaterDisabled ( ) ) { <nl> DEBUG_LOG ( ( " App Info : removing old . desktop files " ) ) ; <nl> QFile : : remove ( qsl ( " % 1telegram . desktop " ) . arg ( targetPath ) ) ; <nl> QFile : : remove ( qsl ( " % 1telegramdesktop . desktop " ) . arg ( targetPath ) ) ; <nl> void RegisterCustomScheme ( bool force ) { <nl> <nl> GError * error = nullptr ; <nl> <nl> - const auto neededCommandlineBuilder = qsl ( " % 1 - - " ) <nl> - . arg ( ! Core : : UpdaterDisabled ( ) <nl> - ? cExeDir ( ) + cExeName ( ) <nl> - : cExeName ( ) ) ; <nl> + const auto neededCommandlineBuilder = qsl ( " % 1 - workdir % 2 - - " ) . arg ( <nl> + QString ( EscapeShell ( QFile : : encodeName ( cExeDir ( ) + cExeName ( ) ) ) ) , <nl> + QString ( EscapeShell ( QFile : : encodeName ( cWorkingDir ( ) ) ) ) ) ; <nl> <nl> const auto neededCommandline = qsl ( " % 1 % u " ) <nl> . arg ( neededCommandlineBuilder ) ; <nl>
|
Fix escaping in scheme creation on Linux and set - workdir
|
telegramdesktop/tdesktop
|
c8ce5dfa8bad172dbc35f15f2906defb1b04e322
|
2020-11-16T09:33:22Z
|
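The Telegram commit above builds an `Exec=` line for a `.desktop` launcher by shell-escaping each argument and then doubling backslashes, because the Desktop Entry Specification unescapes the value once (so `\\` in the file becomes `\` ) before the result is split as a shell command. A minimal sketch of that two-layer escaping, with hypothetical helper names that only mirror the shape of tdesktop's `EscapeShell`/`EscapeShellInLauncher` (this is not the project's actual implementation):

```cpp
#include <cassert>
#include <string>

// Sketch of POSIX shell quoting: wrap the argument in single quotes and
// rewrite each embedded quote as '\'' (close, escaped quote, reopen).
std::string escapeShell(const std::string &arg) {
    std::string out = "'";
    for (char c : arg) {
        if (c == '\'') {
            out += "'\\''";
        } else {
            out += c;
        }
    }
    out += "'";
    return out;
}

// Desktop Entry Spec layer: every literal backslash in an Exec= value must
// be written as "\\", since the value is unescaped once before shell-splitting.
std::string escapeShellInLauncher(const std::string &arg) {
    std::string out;
    for (char c : escapeShell(arg)) {
        if (c == '\\') {
            out += "\\\\";
        } else {
            out += c;
        }
    }
    return out;
}
```

For example, `escapeShellInLauncher("a\\b")` yields `'a\\b'`, which the launcher unescapes back to `'a\b'` before handing it to the shell.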
mmm a / libraries / wasm - jit / Source / Runtime / WAVMIntrinsics . cpp <nl> ppp b / libraries / wasm - jit / Source / Runtime / WAVMIntrinsics . cpp <nl> <nl> namespace Runtime <nl> { <nl> static void causeIntrensicException ( Exception : : Cause cause ) { <nl> - Platform : : immediately_exit ( std : : make_exception_ptr ( Exception { cause , std : : vector < std : : string > ( ) } ) ) ; <nl> + try { <nl> + Platform : : immediately_exit ( std : : make_exception_ptr ( Exception { cause , std : : vector < std : : string > ( ) } ) ) ; <nl> + } <nl> + catch ( . . . ) { <nl> + Platform : : immediately_exit ( std : : current_exception ( ) ) ; <nl> + } <nl> __builtin_unreachable ( ) ; <nl> - } <nl> + } <nl> <nl> template < typename Float > <nl> Float quietNaN ( Float value ) <nl> namespace Runtime <nl> <nl> DEFINE_INTRINSIC_FUNCTION3 ( wavmIntrinsics , indirectCallSignatureMismatch , indirectCallSignatureMismatch , none , i32 , index , i64 , expectedSignatureBits , i64 , tableBits ) <nl> { <nl> - TableInstance * table = reinterpret_cast < TableInstance * > ( tableBits ) ; <nl> - void * elementValue = table - > baseAddress [ index ] . value ; <nl> - const FunctionType * actualSignature = table - > baseAddress [ index ] . type ; <nl> - const FunctionType * expectedSignature = reinterpret_cast < const FunctionType * > ( ( Uptr ) expectedSignatureBits ) ; <nl> - std : : string ipDescription = " < unknown > " ; <nl> - LLVMJIT : : describeInstructionPointer ( reinterpret_cast < Uptr > ( elementValue ) , ipDescription ) ; <nl> - Log : : printf ( Log : : Category : : debug , " call_indirect signature mismatch : expected % s at index % u but got % s ( % s ) \ n " , <nl> - asString ( expectedSignature ) . c_str ( ) , <nl> - index , <nl> - actualSignature ? asString ( actualSignature ) . c_str ( ) : " nullptr " , <nl> - ipDescription . c_str ( ) <nl> - ) ; <nl> - causeIntrensicException ( elementValue = = nullptr ? 
Exception : : Cause : : undefinedTableElement : Exception : : Cause : : indirectCallSignatureMismatch ) ; <nl> + try { <nl> + TableInstance * table = reinterpret_cast < TableInstance * > ( tableBits ) ; <nl> + void * elementValue = table - > baseAddress [ index ] . value ; <nl> + const FunctionType * actualSignature = table - > baseAddress [ index ] . type ; <nl> + const FunctionType * expectedSignature = reinterpret_cast < const FunctionType * > ( ( Uptr ) expectedSignatureBits ) ; <nl> + std : : string ipDescription = " < unknown > " ; <nl> + LLVMJIT : : describeInstructionPointer ( reinterpret_cast < Uptr > ( elementValue ) , ipDescription ) ; <nl> + Log : : printf ( Log : : Category : : debug , " call_indirect signature mismatch : expected % s at index % u but got % s ( % s ) \ n " , <nl> + asString ( expectedSignature ) . c_str ( ) , <nl> + index , <nl> + actualSignature ? asString ( actualSignature ) . c_str ( ) : " nullptr " , <nl> + ipDescription . c_str ( ) <nl> + ) ; <nl> + causeIntrensicException ( elementValue = = nullptr ? Exception : : Cause : : undefinedTableElement : Exception : : Cause : : indirectCallSignatureMismatch ) ; <nl> + } <nl> + catch ( . . . ) { <nl> + Platform : : immediately_exit ( std : : current_exception ( ) ) ; <nl> + } <nl> } <nl> <nl> DEFINE_INTRINSIC_FUNCTION0 ( wavmIntrinsics , indirectCallIndexOutOfBounds , indirectCallIndexOutOfBounds , none ) <nl>
|
Protect against used wavmIntrensics exceptions hitting a brick wall on unwinding
|
EOSIO/eos
|
bbc57fd5047349d1b013bb493fe6fca49848ff7e
|
2019-04-15T20:49:26Z
|
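The EOS commit above wraps each WAVM intrinsic body in a catch-all so that a thrown exception is captured and routed to `Platform::immediately_exit` instead of unwinding through JIT-generated frames that have no handlers. A small stand-alone sketch of that guard pattern (the names `runGuarded` and `describe` are illustrative, not WAVM's API; the real code exits the process rather than returning a string):

```cpp
#include <cassert>
#include <exception>
#include <stdexcept>
#include <string>

// Describe a pending exception; a stand-in for what a crash handler would log.
std::string describe(std::exception_ptr eptr) {
    try {
        if (eptr) std::rethrow_exception(eptr);
        return "none";
    } catch (const std::exception &e) {
        return e.what();
    } catch (...) {
        return "unknown";
    }
}

// The pattern from the diff: run an intrinsic body under catch(...) so a
// throw can never unwind past this boundary; the exception is captured with
// std::current_exception() and funneled into a single exit path (here a
// returned string, in WAVM a call to Platform::immediately_exit).
template <typename Fn>
std::string runGuarded(Fn body) {
    try {
        body();
        return "ok";
    } catch (...) {
        return "fatal: " + describe(std::current_exception());
    }
}
```

The boundary matters because unwinding through frames without unwind tables (such as JIT output) is undefined behavior; converting the exception into a deterministic exit is the safe failure mode.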
mmm a / cyber / io / example / tcp_echo_server . cc <nl> ppp b / cyber / io / example / tcp_echo_server . cc <nl> int main ( int argc , char * argv [ ] ) { <nl> return ; <nl> } <nl> session . Listen ( 10 ) ; <nl> - auto conn_session = <nl> - session . Accept ( ( struct sockaddr * ) nullptr , nullptr ) ; <nl> + auto conn_session = session . Accept ( ( struct sockaddr * ) nullptr , nullptr ) ; <nl> std : : cout < < " accepted " < < std : : endl ; <nl> auto routine_name = <nl> " connected session " + std : : to_string ( Time : : Now ( ) . ToNanosecond ( ) ) ; <nl> mmm a / modules / common / math / linear_quadratic_regulator . cc <nl> ppp b / modules / common / math / linear_quadratic_regulator . cc <nl> void SolveLQRProblem ( const Matrix & A , const Matrix & B , const Matrix & Q , <nl> uint num_iteration = 0 ; <nl> double diff = std : : numeric_limits < double > : : max ( ) ; <nl> while ( num_iteration + + < max_num_iteration & & diff > tolerance ) { <nl> - Matrix P_next = AT * P * A - <nl> + Matrix P_next = <nl> + AT * P * A - <nl> ( AT * P * B + M ) * ( R + BT * P * B ) . inverse ( ) * ( BT * P * A + MT ) + Q ; <nl> / / check the difference between P and P_next <nl> diff = fabs ( ( P_next - P ) . maxCoeff ( ) ) ; <nl> mmm a / modules / common / math / linear_quadratic_regulator . h <nl> ppp b / modules / common / math / linear_quadratic_regulator . h <nl> namespace apollo { <nl> namespace common { <nl> namespace math { <nl> <nl> - <nl> / * * <nl> * @ brief Solver for discrete - time linear quadratic problem . 
<nl> * @ param A The system dynamic matrix <nl> namespace math { <nl> * / <nl> void SolveLQRProblem ( const Eigen : : MatrixXd & A , const Eigen : : MatrixXd & B , <nl> const Eigen : : MatrixXd & Q , const Eigen : : MatrixXd & R , <nl> - const Eigen : : MatrixXd & M , <nl> - const double tolerance , const uint max_num_iteration , <nl> - Eigen : : MatrixXd * ptr_K ) ; <nl> + const Eigen : : MatrixXd & M , const double tolerance , <nl> + const uint max_num_iteration , Eigen : : MatrixXd * ptr_K ) ; <nl> <nl> / * * <nl> * @ brief Solver for discrete - time linear quadratic problem . <nl> mmm a / modules / dreamview / backend / teleop / teleop . cc <nl> ppp b / modules / dreamview / backend / teleop / teleop . cc <nl> void TeleopService : : Start ( ) { <nl> void TeleopService : : RegisterMessageHandlers ( ) { <nl> / / Send current teleop status to the new client . <nl> websocket_ - > RegisterConnectionReadyHandler ( <nl> - [ this ] ( WebSocketHandler : : Connection * conn ) { <nl> - SendStatus ( conn ) ; <nl> - } ) ; <nl> + [ this ] ( WebSocketHandler : : Connection * conn ) { SendStatus ( conn ) ; } ) ; <nl> / / Start / Stop local and remote audio <nl> websocket_ - > RegisterMessageHandler ( <nl> " ToggleAudio " , <nl> mmm a / modules / planning / scenarios / park / emergency_pull_over / stage_slow_down . cc <nl> ppp b / modules / planning / scenarios / park / emergency_pull_over / stage_slow_down . cc <nl> Stage : : StageStatus EmergencyPullOverStageSlowDown : : Process ( <nl> / / set speed_limit to slow down <nl> const double adc_speed = <nl> common : : VehicleStateProvider : : Instance ( ) - > linear_velocity ( ) ; <nl> - const double target_speed = adc_speed - <nl> - scenario_config_ . max_stop_deceleration ( ) * <nl> - scenario_config_ . slow_down_deceleration_time ( ) ; <nl> + const double target_speed = <nl> + adc_speed - scenario_config_ . max_stop_deceleration ( ) * <nl> + scenario_config_ . 
slow_down_deceleration_time ( ) ; <nl> / / TODO ( all ) : to be updated <nl> if ( frame - > mutable_reference_line_info ( ) ) { <nl> auto * reference_line = <nl> mmm a / modules / planning / scenarios / scenario_manager . cc <nl> ppp b / modules / planning / scenarios / scenario_manager . cc <nl> void ScenarioManager : : UpdatePlanningContextPullOverScenario ( <nl> void ScenarioManager : : UpdatePlanningContextEmergencyPullOverScenario ( <nl> const Frame & frame , const ScenarioConfig : : ScenarioType & scenario_type ) { <nl> auto * emergency_pull_over = PlanningContext : : Instance ( ) <nl> - - > mutable_planning_status ( ) <nl> - - > mutable_emergency_pull_over ( ) ; <nl> + - > mutable_planning_status ( ) <nl> + - > mutable_emergency_pull_over ( ) ; <nl> <nl> if ( scenario_type ! = ScenarioConfig : : EMERGENCY_PULL_OVER ) { <nl> emergency_pull_over - > Clear ( ) ; <nl> mmm a / modules / planning / scenarios / scenario_manager . h <nl> ppp b / modules / planning / scenarios / scenario_manager . h <nl> class ScenarioManager final { <nl> void UpdatePlanningContextYieldSignScenario ( <nl> const Frame & frame , const ScenarioConfig : : ScenarioType & scenario_type ) ; <nl> <nl> - <nl> private : <nl> std : : unordered_map < ScenarioConfig : : ScenarioType , ScenarioConfig , <nl> std : : hash < int > > <nl> mmm a / modules / planning / tasks / deciders / path_bounds_decider / path_bounds_decider . cc <nl> ppp b / modules / planning / tasks / deciders / path_bounds_decider / path_bounds_decider . 
cc <nl> Status PathBoundsDecider : : Process ( <nl> - > mutable_planning_status ( ) <nl> - > mutable_pull_over ( ) ; <nl> auto * emergency_pull_over_status = PlanningContext : : Instance ( ) <nl> - - > mutable_planning_status ( ) <nl> - - > mutable_emergency_pull_over ( ) ; <nl> + - > mutable_planning_status ( ) <nl> + - > mutable_emergency_pull_over ( ) ; <nl> is_in_pull_over_scenario_ = pull_over_status - > is_in_pull_over_scenario ( ) ; <nl> is_in_emergency_pull_over_scenario_ = <nl> emergency_pull_over_status - > is_in_emergency_pull_over_scenario ( ) ; <nl> Status PathBoundsDecider : : GeneratePullOverPathBound ( <nl> - > mutable_planning_status ( ) <nl> - > mutable_pull_over ( ) ; <nl> auto * emergency_pull_over_status = PlanningContext : : Instance ( ) <nl> - - > mutable_planning_status ( ) <nl> - - > mutable_emergency_pull_over ( ) ; <nl> + - > mutable_planning_status ( ) <nl> + - > mutable_emergency_pull_over ( ) ; <nl> / / If already found a pull - over position , simply check if it ' s valid . <nl> int curr_idx = - 1 ; <nl> if ( emergency_pull_over_status - > has_position ( ) ) { <nl> curr_idx = IsPointWithinPathBound ( <nl> - reference_line_info , <nl> - emergency_pull_over_status - > position ( ) . x ( ) , <nl> - emergency_pull_over_status - > position ( ) . y ( ) , <nl> - * path_bound ) ; <nl> + reference_line_info , emergency_pull_over_status - > position ( ) . x ( ) , <nl> + emergency_pull_over_status - > position ( ) . y ( ) , * path_bound ) ; <nl> } else if ( pull_over_status - > is_feasible ( ) & & <nl> - pull_over_status - > has_position ( ) ) { <nl> + pull_over_status - > has_position ( ) ) { <nl> curr_idx = IsPointWithinPathBound ( <nl> - reference_line_info , <nl> - pull_over_status - > position ( ) . x ( ) , <nl> - pull_over_status - > position ( ) . y ( ) , <nl> - * path_bound ) ; <nl> + reference_line_info , pull_over_status - > position ( ) . x ( ) , <nl> + pull_over_status - > position ( ) . 
y ( ) , * path_bound ) ; <nl> } <nl> if ( curr_idx > = 0 ) { <nl> / / Trim path - bound properly . <nl> Status PathBoundsDecider : : GeneratePullOverPathBound ( <nl> emergency_pull_over_status - > set_theta ( std : : get < 2 > ( pull_over_configuration ) ) ; <nl> <nl> ADEBUG < < " Emergency Pull Over : x [ " <nl> - < < emergency_pull_over_status - > position ( ) . x ( ) <nl> - < < " ] y [ " < < emergency_pull_over_status - > position ( ) . y ( ) <nl> - < < " ] theta [ " < < emergency_pull_over_status - > theta ( ) < < " ] " ; <nl> + < < emergency_pull_over_status - > position ( ) . x ( ) < < " ] y [ " <nl> + < < emergency_pull_over_status - > position ( ) . y ( ) < < " ] theta [ " <nl> + < < emergency_pull_over_status - > theta ( ) < < " ] " ; <nl> <nl> } else if ( is_in_pull_over_scenario_ ) { <nl> pull_over_status - > Clear ( ) ; <nl> Status PathBoundsDecider : : GeneratePullOverPathBound ( <nl> <nl> curr_idx = std : : get < 3 > ( pull_over_configuration ) ; <nl> while ( static_cast < int > ( path_bound - > size ( ) ) - 1 > <nl> - curr_idx + kNumExtraTailBoundPoint ) { <nl> + curr_idx + kNumExtraTailBoundPoint ) { <nl> path_bound - > pop_back ( ) ; <nl> } <nl> for ( size_t idx = curr_idx + 1 ; idx < path_bound - > size ( ) ; + + idx ) { <nl> int PathBoundsDecider : : IsPointWithinPathBound ( <nl> } <nl> <nl> bool PathBoundsDecider : : FindDestinationPullOverS ( <nl> - const Frame & frame , <nl> - const ReferenceLineInfo & reference_line_info , <nl> + const Frame & frame , const ReferenceLineInfo & reference_line_info , <nl> const std : : vector < std : : tuple < double , double , double > > & path_bound , <nl> double * pull_over_s ) { <nl> / / destination_s based on routing_end <nl> bool PathBoundsDecider : : FindDestinationPullOverS ( <nl> } <nl> <nl> bool PathBoundsDecider : : FindEmergencyPullOverS ( <nl> - const ReferenceLineInfo & reference_line_info , <nl> - double * pull_over_s ) { <nl> + const ReferenceLineInfo & reference_line_info , double * pull_over_s 
) { <nl> const double adc_end_s = reference_line_info . AdcSlBoundary ( ) . end_s ( ) ; <nl> <nl> / / TODO ( all ) : to be implemented <nl> bool PathBoundsDecider : : FindEmergencyPullOverS ( <nl> } <nl> <nl> bool PathBoundsDecider : : SearchPullOverPosition ( <nl> - const Frame & frame , <nl> - const ReferenceLineInfo & reference_line_info , <nl> + const Frame & frame , const ReferenceLineInfo & reference_line_info , <nl> const std : : vector < std : : tuple < double , double , double > > & path_bound , <nl> std : : tuple < double , double , double , int > * const pull_over_configuration ) { <nl> double pull_over_s = 0 . 0 ; <nl> bool PathBoundsDecider : : SearchPullOverPosition ( <nl> } else { <nl> / / 1 . Locate the first point after emergency_pull_over s . <nl> while ( idx < static_cast < int > ( path_bound . size ( ) ) & & <nl> - std : : get < 0 > ( path_bound [ idx ] ) < pull_over_s ) { <nl> + std : : get < 0 > ( path_bound [ idx ] ) < pull_over_s ) { <nl> + + idx ; <nl> } <nl> } <nl> bool PathBoundsDecider : : SearchPullOverPosition ( <nl> / / 2 . Find a window that is close to road - edge . <nl> bool has_a_feasible_window = false ; <nl> while ( ( search_backward & & idx > = 0 & & <nl> - std : : get < 0 > ( path_bound [ idx ] ) - std : : get < 0 > ( path_bound . front ( ) ) > <nl> - pull_over_space_length ) | | <nl> + std : : get < 0 > ( path_bound [ idx ] ) - std : : get < 0 > ( path_bound . front ( ) ) > <nl> + pull_over_space_length ) | | <nl> ( ! search_backward & & idx < static_cast < int > ( path_bound . size ( ) ) & & <nl> - std : : get < 0 > ( path_bound . back ( ) ) - std : : get < 0 > ( path_bound [ idx ] ) ) > <nl> + std : : get < 0 > ( path_bound . 
back ( ) ) - std : : get < 0 > ( path_bound [ idx ] ) ) > <nl> pull_over_space_length ) { <nl> int j = idx ; <nl> bool is_feasible_window = true ; <nl> while ( ( search_backward & & j > = 0 & & <nl> - std : : get < 0 > ( path_bound [ idx ] ) - std : : get < 0 > ( path_bound [ j ] ) < <nl> - pull_over_space_length ) | | <nl> - ( ! search_backward & & j < static_cast < int > ( path_bound . size ( ) ) & & <nl> - std : : get < 0 > ( path_bound [ j ] ) - std : : get < 0 > ( path_bound [ idx ] ) < <nl> - pull_over_space_length ) ) { <nl> + std : : get < 0 > ( path_bound [ idx ] ) - std : : get < 0 > ( path_bound [ j ] ) < <nl> + pull_over_space_length ) | | <nl> + ( ! search_backward & & j < static_cast < int > ( path_bound . size ( ) ) & & <nl> + std : : get < 0 > ( path_bound [ j ] ) - std : : get < 0 > ( path_bound [ idx ] ) < <nl> + pull_over_space_length ) ) { <nl> double curr_s = std : : get < 0 > ( path_bound [ j ] ) ; <nl> double curr_right_bound = std : : fabs ( std : : get < 1 > ( path_bound [ j ] ) ) ; <nl> double curr_road_left_width = 0 ; <nl> mmm a / modules / planning / tasks / deciders / path_bounds_decider / path_bounds_decider . h <nl> ppp b / modules / planning / tasks / deciders / path_bounds_decider / path_bounds_decider . 
h <nl> class PathBoundsDecider : public Decider { <nl> const std : : vector < std : : tuple < double , double , double > > & path_bound ) ; <nl> <nl> bool FindDestinationPullOverS ( <nl> - const Frame & frame , <nl> - const ReferenceLineInfo & reference_line_info , <nl> + const Frame & frame , const ReferenceLineInfo & reference_line_info , <nl> const std : : vector < std : : tuple < double , double , double > > & path_bound , <nl> double * pull_over_s ) ; <nl> - bool FindEmergencyPullOverS ( <nl> - const ReferenceLineInfo & reference_line_info , double * pull_over_s ) ; <nl> + bool FindEmergencyPullOverS ( const ReferenceLineInfo & reference_line_info , <nl> + double * pull_over_s ) ; <nl> <nl> bool SearchPullOverPosition ( <nl> const Frame & frame , const ReferenceLineInfo & reference_line_info , <nl> mmm a / modules / planning / tasks / deciders / st_bounds_decider / st_obstacles_processor . cc <nl> ppp b / modules / planning / tasks / deciders / st_bounds_decider / st_obstacles_processor . cc <nl> Status STObstaclesProcessor : : MapObstaclesToSTBoundaries ( <nl> / / Obstacle doesn ' t appear on ST - Graph . <nl> continue ; <nl> } <nl> - auto boundary = STBoundary : : CreateInstanceAccurate ( <nl> - lower_points , upper_points ) ; <nl> + auto boundary = <nl> + STBoundary : : CreateInstanceAccurate ( lower_points , upper_points ) ; <nl> boundary . set_id ( obs_ptr - > Id ( ) ) ; <nl> if ( obs_ptr - > Trajectory ( ) . trajectory_point ( ) . empty ( ) ) { <nl> / / Obstacle is static . <nl> Status STObstaclesProcessor : : MapObstaclesToSTBoundaries ( <nl> / / Preprocess the obstacles for sweep - line algorithms . <nl> / / Fetch every obstacle ' s beginning end ending t - edges only . <nl> for ( auto it : obs_id_to_st_boundary_ ) { <nl> - obs_t_edges_ . emplace_back ( true , it . second . min_t ( ) , <nl> - it . second . min_s ( ) , it . second . max_s ( ) , it . first ) ; <nl> - obs_t_edges_ . emplace_back ( false , it . second . max_t ( ) , <nl> - it . second . 
min_s ( ) , it . second . max_s ( ) , it . first ) ; <nl> + obs_t_edges_ . emplace_back ( true , it . second . min_t ( ) , it . second . min_s ( ) , <nl> + it . second . max_s ( ) , it . first ) ; <nl> + obs_t_edges_ . emplace_back ( false , it . second . max_t ( ) , it . second . min_s ( ) , <nl> + it . second . max_s ( ) , it . first ) ; <nl> } <nl> / / Sort the edges . <nl> std : : sort ( obs_t_edges_ . begin ( ) , obs_t_edges_ . end ( ) , <nl> STObstaclesProcessor : : GetAllSTBoundaries ( ) { <nl> return obs_id_to_st_boundary_ ; <nl> } <nl> <nl> - bool STObstaclesProcessor : : GetSBoundsFromDecisions ( double t , <nl> - std : : vector < std : : pair < double , double > > * const available_s_bounds , <nl> - std : : vector < std : : vector < std : : pair < std : : string , ObjectDecisionType > > > * <nl> - const available_obs_decisions ) { <nl> + bool STObstaclesProcessor : : GetSBoundsFromDecisions ( <nl> + double t , std : : vector < std : : pair < double , double > > * const available_s_bounds , <nl> + std : : vector < std : : vector < std : : pair < std : : string , ObjectDecisionType > > > * const <nl> + available_obs_decisions ) { <nl> / / Sanity checks . <nl> available_s_bounds - > clear ( ) ; <nl> available_obs_decisions - > clear ( ) ; <nl> bool STObstaclesProcessor : : GetSBoundsFromDecisions ( double t , <nl> s_min = std : : fmin ( s_min , obs_s_max ) ; <nl> } <nl> } <nl> - if ( s_min > s_max ) <nl> + if ( s_min > s_max ) { <nl> return false ; <nl> + } <nl> <nl> / / For newly entering st_boundaries , determine possible new - boundaries . <nl> / / For apparent ones , make decisions directly . 
<nl> bool STObstaclesProcessor : : GetSBoundsFromDecisions ( double t , <nl> if ( std : : get < 2 > ( obs_t_edge ) > = s_max ) { <nl> obs_id_to_decision_ [ std : : get < 4 > ( obs_t_edge ) ] = <nl> DetermineObstacleDecision ( std : : get < 2 > ( obs_t_edge ) , <nl> - std : : get < 3 > ( obs_t_edge ) , <nl> - s_max ) ; <nl> + std : : get < 3 > ( obs_t_edge ) , s_max ) ; <nl> } else if ( std : : get < 3 > ( obs_t_edge ) < = s_min ) { <nl> obs_id_to_decision_ [ std : : get < 4 > ( obs_t_edge ) ] = <nl> DetermineObstacleDecision ( std : : get < 2 > ( obs_t_edge ) , <nl> - std : : get < 3 > ( obs_t_edge ) , <nl> - s_min ) ; <nl> + std : : get < 3 > ( obs_t_edge ) , s_min ) ; <nl> } else { <nl> ambiguous_t_edges . push_back ( obs_t_edge ) ; <nl> } <nl> bool STObstaclesProcessor : : GetSBoundsFromDecisions ( double t , <nl> } <nl> / / For ambiguous ones , enumerate all decisions and corresponding bounds . <nl> auto s_gaps = FindSGaps ( ambiguous_t_edges , s_min , s_max ) ; <nl> - if ( s_gaps . empty ( ) ) <nl> + if ( s_gaps . empty ( ) ) { <nl> return false ; <nl> + } <nl> for ( auto s_gap : s_gaps ) { <nl> available_s_bounds - > push_back ( s_gap ) ; <nl> std : : vector < std : : pair < std : : string , ObjectDecisionType > > obs_decisions ; <nl> bool STObstaclesProcessor : : GetSBoundsFromDecisions ( double t , <nl> std : : string obs_id = std : : get < 4 > ( obs_t_edge ) ; <nl> double obs_s_min = std : : get < 2 > ( obs_t_edge ) ; <nl> double obs_s_max = std : : get < 3 > ( obs_t_edge ) ; <nl> - obs_decisions . emplace_back ( obs_id , DetermineObstacleDecision ( <nl> - obs_s_min , obs_s_max , ( s_gap . first + s_gap . second ) / 2 . 0 ) ) ; <nl> + obs_decisions . emplace_back ( <nl> + obs_id , <nl> + DetermineObstacleDecision ( obs_s_min , obs_s_max , <nl> + ( s_gap . first + s_gap . second ) / 2 . 
0 ) ) ; <nl> } <nl> available_obs_decisions - > push_back ( obs_decisions ) ; <nl> } <nl> bool STObstaclesProcessor : : GetOverlappingS ( <nl> if ( pt_after_idx = = - 1 ) { <nl> pt_after_idx = static_cast < int > ( adc_path_points . size ( ) ) - 2 ; <nl> } <nl> - if ( pt_before_idx > = pt_after_idx ) <nl> + if ( pt_before_idx > = pt_after_idx ) { <nl> return false ; <nl> + } <nl> <nl> / / Detailed searching . <nl> bool has_overlapping = false ; <nl> bool STObstaclesProcessor : : IsADCOverlappingWithObstacle ( <nl> } <nl> <nl> std : : vector < std : : pair < double , double > > STObstaclesProcessor : : FindSGaps ( <nl> - const std : : vector < ObsTEdge > & obstacle_t_edges , <nl> - double s_min , double s_max ) { <nl> + const std : : vector < ObsTEdge > & obstacle_t_edges , double s_min , double s_max ) { <nl> std : : vector < std : : pair < double , int > > obs_s_edges ; <nl> for ( auto obs_t_edge : obstacle_t_edges ) { <nl> obs_s_edges . emplace_back ( std : : get < 2 > ( obs_t_edge ) , 1 ) ; <nl> std : : vector < std : : pair < double , double > > STObstaclesProcessor : : FindSGaps ( <nl> obs_s_edges . emplace_back ( s_min , 0 ) ; <nl> obs_s_edges . emplace_back ( s_max , 1 ) ; <nl> / / obs_s_edges . emplace_back ( std : : numeric_limits < double > : : max ( ) , 0 ) ; <nl> - std : : sort ( obs_s_edges . begin ( ) , obs_s_edges . end ( ) , <nl> - [ ] ( const std : : pair < double , int > & lhs , <nl> - const std : : pair < double , int > & rhs ) { <nl> - if ( lhs . first ! = rhs . first ) { <nl> - return lhs . first < rhs . first ; <nl> - } else { <nl> - return lhs . second > rhs . second ; <nl> - } <nl> - } ) ; <nl> + std : : sort ( <nl> + obs_s_edges . begin ( ) , obs_s_edges . end ( ) , <nl> + [ ] ( const std : : pair < double , int > & lhs , const std : : pair < double , int > & rhs ) { <nl> + if ( lhs . first ! = rhs . first ) { <nl> + return lhs . first < rhs . first ; <nl> + } else { <nl> + return lhs . second > rhs . 
second ; <nl> + } <nl> + } ) ; <nl> <nl> std : : vector < std : : pair < double , double > > s_gaps ; <nl> int num_st_obs = 1 ; <nl> std : : vector < std : : pair < double , double > > STObstaclesProcessor : : FindSGaps ( <nl> for ( auto obs_s_edge : obs_s_edges ) { <nl> if ( obs_s_edge . second = = 1 ) { <nl> num_st_obs + + ; <nl> - if ( num_st_obs = = 1 ) <nl> + if ( num_st_obs = = 1 ) { <nl> s_gaps . emplace_back ( prev_open_s , obs_s_edge . first ) ; <nl> + } <nl> } else { <nl> num_st_obs - - ; <nl> - if ( num_st_obs = = 0 ) <nl> + if ( num_st_obs = = 0 ) { <nl> prev_open_s = obs_s_edge . first ; <nl> + } <nl> } <nl> } <nl> <nl> mmm a / modules / planning / tasks / deciders / st_bounds_decider / st_obstacles_processor . h <nl> ppp b / modules / planning / tasks / deciders / st_bounds_decider / st_obstacles_processor . h <nl> class STObstaclesProcessor { <nl> std : : unordered_map < std : : string , STBoundary > GetAllSTBoundaries ( ) ; <nl> <nl> / * * @ brief Given a time t , get the lower and upper s - boundaries . <nl> - * If the boundary is well - defined based on decision made previously , <nl> - * fill " available_s_bounds " with only one boundary . <nl> - * Otherwise , fill " available_s_bounds with all candidates and <nl> - * " available_obs_decisions " with corresponding possible obstacle decisions . <nl> - * @ param Time t <nl> - * @ param The available s - boundaries to be filled up . <nl> - * @ param The corresponding possible obstacle decisions . <nl> - * @ return Whether we can get valid s - bounds . <nl> - * / <nl> - bool GetSBoundsFromDecisions ( double t , <nl> + * If the boundary is well - defined based on decision made previously , <nl> + * fill " available_s_bounds " with only one boundary . <nl> + * Otherwise , fill " available_s_bounds with all candidates and <nl> + * " available_obs_decisions " with corresponding possible obstacle decisions . <nl> + * @ param Time t <nl> + * @ param The available s - boundaries to be filled up . 
<nl> + * @ param The corresponding possible obstacle decisions . <nl> + * @ return Whether we can get valid s - bounds . <nl> + * / <nl> + bool GetSBoundsFromDecisions ( <nl> + double t , <nl> std : : vector < std : : pair < double , double > > * const available_s_bounds , <nl> - std : : vector < std : : vector < std : : pair < std : : string , ObjectDecisionType > > > * <nl> - const available_obs_decisions ) ; <nl> + std : : vector < <nl> + std : : vector < std : : pair < std : : string , ObjectDecisionType > > > * const <nl> + available_obs_decisions ) ; <nl> <nl> / * * @ brief Set the decision for a given obstacle . <nl> - * / <nl> - void SetObstacleDecision ( <nl> - const std : : string & obs_id , <nl> - const ObjectDecisionType & obs_decision ) ; <nl> + * / <nl> + void SetObstacleDecision ( const std : : string & obs_id , <nl> + const ObjectDecisionType & obs_decision ) ; <nl> <nl> / * * @ brief Set the decision for a list of obstacles . <nl> - * / <nl> + * / <nl> void SetObstacleDecision ( <nl> const std : : vector < std : : pair < std : : string , ObjectDecisionType > > & <nl> obstacle_decisions ) ; <nl> class STObstaclesProcessor { <nl> const double l_buffer ) const ; <nl> <nl> / * * @ brief Find the vertical ( s ) gaps of the st - graph . <nl> - * @ param Vector of obstacle - t - edges <nl> - * @ param The existing minimum s edge . <nl> - * @ param The existing maximum s edge . <nl> - * @ return A list of available s gaps for ADC to go . <nl> - * / <nl> + * @ param Vector of obstacle - t - edges <nl> + * @ param The existing minimum s edge . <nl> + * @ param The existing maximum s edge . <nl> + * @ return A list of available s gaps for ADC to go . 
<nl> + * / <nl> std : : vector < std : : pair < double , double > > FindSGaps ( <nl> const std : : vector < std : : tuple < int , double , double , double , std : : string > > & <nl> - obstacle_t_edges , double s_min , double s_max ) ; <nl> + obstacle_t_edges , <nl> + double s_min , double s_max ) ; <nl> <nl> / * * @ brief Based on obstacle position and prospective ADC position , <nl> - * determine the obstacle decision . <nl> - * @ param Obstacle ' s minimum s . <nl> - * @ param Obstacle ' s maximum s . <nl> - * @ param ADC ' s prospective position . <nl> - * @ return The decision for the given obstacle . <nl> - * / <nl> - ObjectDecisionType DetermineObstacleDecision ( <nl> - const double obs_s_min , const double obs_s_max , const double s ) const ; <nl> + * determine the obstacle decision . <nl> + * @ param Obstacle ' s minimum s . <nl> + * @ param Obstacle ' s maximum s . <nl> + * @ param ADC ' s prospective position . <nl> + * @ return The decision for the given obstacle . <nl> + * / <nl> + ObjectDecisionType DetermineObstacleDecision ( const double obs_s_min , <nl> + const double obs_s_max , <nl> + const double s ) const ; <nl> <nl> private : <nl> double planning_time_ ; <nl> mmm a / modules / planning / tasks / optimizers / path_time_heuristic / gridded_path_time_graph . cc <nl> ppp b / modules / planning / tasks / optimizers / path_time_heuristic / gridded_path_time_graph . cc <nl> void GriddedPathTimeGraph : : GetRowRange ( const StGraphPoint & point , <nl> <nl> const double max_acceleration = std : : abs ( vehicle_param_ . max_acceleration ( ) ) ; <nl> const double t_squared = unit_t_ * unit_t_ ; <nl> - const double s_upper_bound = <nl> - v0 * unit_t_ + acc_coeff * max_acceleration * t_squared + <nl> - point . point ( ) . s ( ) ; <nl> + const double s_upper_bound = v0 * unit_t_ + <nl> + acc_coeff * max_acceleration * t_squared + <nl> + point . point ( ) . s ( ) ; <nl> const auto next_highest_itr = <nl> std : : lower_bound ( spatial_distance_by_index_ . 
begin ( ) , <nl> spatial_distance_by_index_ . end ( ) , s_upper_bound ) ; <nl> void GriddedPathTimeGraph : : CalculateCostAt ( <nl> / / Use pre_v = ( pre_point . s - prepre_point . s ) / unit_t as previous v <nl> / / Current acc estimate : curr_a = ( curr_v - pre_v ) / unit_t <nl> / / = ( point . s + prepre_point . s - 2 * pre_point . s ) / ( unit_t * unit_t ) <nl> - const double curr_a = ( cost_cr . point ( ) . s ( ) + <nl> - pre_col [ r_pre ] . pre_point ( ) - > point ( ) . s ( ) - <nl> - 2 * pre_col [ r_pre ] . point ( ) . s ( ) ) / ( unit_t_ * unit_t_ ) ; <nl> + const double curr_a = <nl> + ( cost_cr . point ( ) . s ( ) + pre_col [ r_pre ] . pre_point ( ) - > point ( ) . s ( ) - <nl> + 2 * pre_col [ r_pre ] . point ( ) . s ( ) ) / <nl> + ( unit_t_ * unit_t_ ) ; <nl> if ( curr_a < gridded_path_time_graph_config_ . max_deceleration ( ) | | <nl> curr_a > gridded_path_time_graph_config_ . max_acceleration ( ) ) { <nl> continue ; <nl> void GriddedPathTimeGraph : : CalculateCostAt ( <nl> / / Use pre_v = ( pre_point . s - prepre_point . s ) / unit_t as previous v <nl> / / Current acc estimate : curr_a = ( curr_v - pre_v ) / unit_t <nl> / / = ( point . s + prepre_point . s - 2 * pre_point . s ) / ( unit_t * unit_t ) <nl> - const double curr_a = ( cost_cr . point ( ) . s ( ) + <nl> - pre_col [ r_pre ] . pre_point ( ) - > point ( ) . s ( ) - <nl> - 2 * pre_col [ r_pre ] . point ( ) . s ( ) ) / ( unit_t_ * unit_t_ ) ; <nl> + const double curr_a = <nl> + ( cost_cr . point ( ) . s ( ) + pre_col [ r_pre ] . pre_point ( ) - > point ( ) . s ( ) - <nl> + 2 * pre_col [ r_pre ] . point ( ) . s ( ) ) / <nl> + ( unit_t_ * unit_t_ ) ; <nl> if ( curr_a > vehicle_param_ . max_acceleration ( ) | | <nl> curr_a < vehicle_param_ . max_deceleration ( ) ) { <nl> continue ; <nl> mmm a / modules / planning / tasks / optimizers / path_time_heuristic / path_time_heuristic_optimizer . cc <nl> ppp b / modules / planning / tasks / optimizers / path_time_heuristic / path_time_heuristic_optimizer . 
cc <nl> Status PathTimeHeuristicOptimizer : : Process ( <nl> SpeedData * const speed_data ) { <nl> init_point_ = init_point ; <nl> <nl> - dp_st_speed_config_ = <nl> - reference_line_info_ - > IsChangeLanePath ( ) <nl> - ? speed_heuristic_config_ . lane_change_speed_config ( ) <nl> - : speed_heuristic_config_ . default_speed_config ( ) ; <nl> + dp_st_speed_config_ = reference_line_info_ - > IsChangeLanePath ( ) <nl> + ? speed_heuristic_config_ . lane_change_speed_config ( ) <nl> + : speed_heuristic_config_ . default_speed_config ( ) ; <nl> <nl> if ( path_data . discretized_path ( ) . empty ( ) ) { <nl> std : : string msg ( " Empty path data " ) ; <nl> mmm a / modules / prediction / container / adc_trajectory / adc_trajectory_container . cc <nl> ppp b / modules / prediction / container / adc_trajectory / adc_trajectory_container . cc <nl> void ADCTrajectoryContainer : : SetLaneSequence ( ) { <nl> void ADCTrajectoryContainer : : SetTargetLaneSequence ( ) { <nl> for ( const auto & lane : adc_trajectory_ . target_lane_id ( ) ) { <nl> if ( ! lane . id ( ) . empty ( ) ) { <nl> - if ( adc_target_lane_seq_ . empty ( ) | | lane . id ( ) ! = <nl> - adc_target_lane_seq_ . back ( ) ) { <nl> + if ( adc_target_lane_seq_ . empty ( ) | | <nl> + lane . id ( ) ! = adc_target_lane_seq_ . back ( ) ) { <nl> adc_target_lane_seq_ . emplace_back ( lane . id ( ) ) ; <nl> } <nl> } <nl> mmm a / modules / prediction / container / obstacles / obstacles_container . cc <nl> ppp b / modules / prediction / container / obstacles / obstacles_container . cc <nl> PredictionContainerMessage ObstaclesContainer : : GetContainerMessage ( ) { <nl> } <nl> PredictionObstacle prediction_obstacle = <nl> obstacle_ptr - > GeneratePredictionObstacle ( ) ; <nl> - container_message . add_prediction_obstacle ( ) - > CopyFrom ( <nl> - prediction_obstacle ) ; <nl> + container_message . 
add_prediction_obstacle ( ) - > CopyFrom ( prediction_obstacle ) ; <nl> } <nl> / / TODO ( kechxu ) add other info into prediction_obstacles if needed <nl> return container_message ; <nl> mmm a / modules / prediction / submodules / container_submodule . cc <nl> ppp b / modules / prediction / submodules / container_submodule . cc <nl> bool ContainerSubmodule : : Proc ( <nl> <nl> PredictionContainerMessage container_message = <nl> obstacles_container_ptr - > GetContainerMessage ( ) ; <nl> - container_writer_ - > Write ( std : : make_shared < PredictionContainerMessage > ( <nl> - container_message ) ) ; <nl> + container_writer_ - > Write ( <nl> + std : : make_shared < PredictionContainerMessage > ( container_message ) ) ; <nl> <nl> return true ; <nl> } <nl>
|
Robot : Code clean with clang - format .
|
ApolloAuto/apollo
|
377bb40c2ccef994e6fa94c091a19202b12bb818
|
2019-10-21T17:56:03Z
|
mmm a / cocos / scripting / lua / bindings / lua_cocos2dx_manual . cpp . REMOVED . git - id <nl> ppp b / cocos / scripting / lua / bindings / lua_cocos2dx_manual . cpp . REMOVED . git - id <nl> @ @ - 1 + 1 @ @ <nl> - c5f8d4a3ea721a2ecb36fe381430b7ac6fad1740 <nl> \ No newline at end of file <nl> + 29bde887dbd41a72f33704fb57cbab230d1a1688 <nl> \ No newline at end of file <nl> mmm a / tools / tolua / cocos2dx . ini <nl> ppp b / tools / tolua / cocos2dx . ini <nl> skip = Node : : [ setGLServerState description getUserObject . * UserData getGLServerS <nl> EGLView : : [ end swapBuffers ] , <nl> NewTextureAtlas : : [ * ] , <nl> DisplayLinkDirector : : [ mainLoop setAnimationInterval startAnimation stopAnimation ] , <nl> - RenderTexture : : [ listenToBackground listenToForeground ] <nl> + RenderTexture : : [ listenToBackground listenToForeground ] , <nl> + TMXTiledMap : : [ getPropertiesForGID ] <nl> <nl> rename_functions = SpriteFrameCache : : [ addSpriteFramesWithFile = addSpriteFrames getSpriteFrameByName = getSpriteFrame ] , <nl> ProgressTimer : : [ setReverseProgress = setReverseDirection ] , <nl>
|
Merge pull request from samuele3hu / developBinding
|
cocos2d/cocos2d-x
|
fe6ec12d595e66549325141b88e2d6603aff3f53
|
2014-02-19T08:22:43Z
|
mmm a / third_party / inspector_protocol / README . v8 <nl> ppp b / third_party / inspector_protocol / README . v8 <nl> Name : inspector protocol <nl> Short Name : inspector_protocol <nl> URL : https : / / chromium . googlesource . com / deps / inspector_protocol / <nl> Version : 0 <nl> - Revision : b13e24ccee66d7e0590ce1266db9c906e3648561 <nl> + Revision : a7423d8ca937e658ab3b85e3b02676bced145ba6 <nl> License : BSD <nl> License File : LICENSE <nl> Security Critical : no <nl> mmm a / third_party / inspector_protocol / code_generator . py <nl> ppp b / third_party / inspector_protocol / code_generator . py <nl> def main ( ) : <nl> " Array_h . template " , <nl> " DispatcherBase_h . template " , <nl> " Parser_h . template " , <nl> - " CBOR_h . template " , <nl> + " encoding_h . template " , <nl> ] <nl> <nl> protocol_cpp_templates = [ <nl> def main ( ) : <nl> " Object_cpp . template " , <nl> " DispatcherBase_cpp . template " , <nl> " Parser_cpp . template " , <nl> - " CBOR_cpp . template " , <nl> + " encoding_cpp . template " , <nl> ] <nl> <nl> forward_h_templates = [ <nl> mmm a / third_party / inspector_protocol / inspector_protocol . gni <nl> ppp b / third_party / inspector_protocol / inspector_protocol . gni <nl> template ( " inspector_protocol_generate " ) { <nl> invoker . config_file , <nl> " $ inspector_protocol_dir / lib / base_string_adapter_cc . template " , <nl> " $ inspector_protocol_dir / lib / base_string_adapter_h . template " , <nl> + " $ inspector_protocol_dir / lib / encoding_h . template " , <nl> + " $ inspector_protocol_dir / lib / encoding_cpp . template " , <nl> " $ inspector_protocol_dir / lib / Allocator_h . template " , <nl> " $ inspector_protocol_dir / lib / Array_h . template " , <nl> - " $ inspector_protocol_dir / lib / CBOR_h . template " , <nl> - " $ inspector_protocol_dir / lib / CBOR_cpp . template " , <nl> " $ inspector_protocol_dir / lib / DispatcherBase_cpp . template " , <nl> " $ inspector_protocol_dir / lib / DispatcherBase_h . 
template " , <nl> " $ inspector_protocol_dir / lib / ErrorSupport_cpp . template " , <nl> mmm a / third_party / inspector_protocol / inspector_protocol . gypi <nl> ppp b / third_party / inspector_protocol / inspector_protocol . gypi <nl> <nl> { <nl> ' variables ' : { <nl> ' inspector_protocol_files ' : [ <nl> + ' lib / encoding_h . template ' , <nl> + ' lib / encoding_cpp . template ' , <nl> ' lib / Allocator_h . template ' , <nl> ' lib / Array_h . template ' , <nl> - ' lib / CBOR_h . template ' , <nl> - ' lib / CBOR_cpp . template ' , <nl> ' lib / DispatcherBase_cpp . template ' , <nl> ' lib / DispatcherBase_h . template ' , <nl> ' lib / ErrorSupport_cpp . template ' , <nl> deleted file mode 100644 <nl> index d2375b6c621 . . 00000000000 <nl> mmm a / third_party / inspector_protocol / lib / CBOR_cpp . template <nl> ppp / dev / null <nl> <nl> - { # This template is generated by gen_cbor_templates . py . # } <nl> - / / Generated by lib / CBOR_cpp . template . <nl> - <nl> - / / Copyright 2019 The Chromium Authors . All rights reserved . <nl> - / / Use of this source code is governed by a BSD - style license that can be <nl> - / / found in the LICENSE file . <nl> - <nl> - <nl> - # include < cassert > <nl> - # include < limits > <nl> - <nl> - { % for namespace in config . protocol . namespace % } <nl> - namespace { { namespace } } { <nl> - { % endfor % } <nl> - <nl> - / / = = = = = encoding / cbor . cc = = = = = <nl> - <nl> - using namespace cbor ; <nl> - <nl> - namespace { <nl> - <nl> - / / See RFC 7049 Section 2 . 3 , Table 2 . 
<nl> - static constexpr uint8_t kEncodedTrue = <nl> - EncodeInitialByte ( MajorType : : SIMPLE_VALUE , 21 ) ; <nl> - static constexpr uint8_t kEncodedFalse = <nl> - EncodeInitialByte ( MajorType : : SIMPLE_VALUE , 20 ) ; <nl> - static constexpr uint8_t kEncodedNull = <nl> - EncodeInitialByte ( MajorType : : SIMPLE_VALUE , 22 ) ; <nl> - static constexpr uint8_t kInitialByteForDouble = <nl> - EncodeInitialByte ( MajorType : : SIMPLE_VALUE , 27 ) ; <nl> - <nl> - } / / namespace <nl> - <nl> - uint8_t EncodeTrue ( ) { return kEncodedTrue ; } <nl> - uint8_t EncodeFalse ( ) { return kEncodedFalse ; } <nl> - uint8_t EncodeNull ( ) { return kEncodedNull ; } <nl> - <nl> - uint8_t EncodeIndefiniteLengthArrayStart ( ) { <nl> - return kInitialByteIndefiniteLengthArray ; <nl> - } <nl> - <nl> - uint8_t EncodeIndefiniteLengthMapStart ( ) { <nl> - return kInitialByteIndefiniteLengthMap ; <nl> - } <nl> - <nl> - uint8_t EncodeStop ( ) { return kStopByte ; } <nl> - <nl> - namespace { <nl> - / / See RFC 7049 Table 3 and Section 2 . 4 . 4 . 2 . This is used as a prefix for <nl> - / / arbitrary binary data encoded as BYTE_STRING . <nl> - static constexpr uint8_t kExpectedConversionToBase64Tag = <nl> - EncodeInitialByte ( MajorType : : TAG , 22 ) ; <nl> - <nl> - / / When parsing CBOR , we limit recursion depth for objects and arrays <nl> - / / to this constant . <nl> - static constexpr int kStackLimit = 1000 ; <nl> - <nl> - / / Writes the bytes for | v | to | out | , starting with the most significant byte . <nl> - / / See also : https : / / commandcenter . blogspot . com / 2012 / 04 / byte - order - fallacy . 
html <nl> - template < typename T > <nl> - void WriteBytesMostSignificantByteFirst ( T v , std : : vector < uint8_t > * out ) { <nl> - for ( int shift_bytes = sizeof ( T ) - 1 ; shift_bytes > = 0 ; - - shift_bytes ) <nl> - out - > push_back ( 0xff & ( v > > ( shift_bytes * 8 ) ) ) ; <nl> - } <nl> - } / / namespace <nl> - <nl> - namespace cbor_internals { <nl> - / / Writes the start of a token with | type | . The | value | may indicate the size , <nl> - / / or it may be the payload if the value is an unsigned integer . <nl> - void WriteTokenStart ( MajorType type , uint64_t value , <nl> - std : : vector < uint8_t > * encoded ) { <nl> - if ( value < 24 ) { <nl> - / / Values 0 - 23 are encoded directly into the additional info of the <nl> - / / initial byte . <nl> - encoded - > push_back ( EncodeInitialByte ( type , / * additional_info = * / value ) ) ; <nl> - return ; <nl> - } <nl> - if ( value < = std : : numeric_limits < uint8_t > : : max ( ) ) { <nl> - / / Values 24 - 255 are encoded with one initial byte , followed by the value . <nl> - encoded - > push_back ( EncodeInitialByte ( type , kAdditionalInformation1Byte ) ) ; <nl> - encoded - > push_back ( value ) ; <nl> - return ; <nl> - } <nl> - if ( value < = std : : numeric_limits < uint16_t > : : max ( ) ) { <nl> - / / Values 256 - 65535 : 1 initial byte + 2 bytes payload . <nl> - encoded - > push_back ( EncodeInitialByte ( type , kAdditionalInformation2Bytes ) ) ; <nl> - WriteBytesMostSignificantByteFirst < uint16_t > ( value , encoded ) ; <nl> - return ; <nl> - } <nl> - if ( value < = std : : numeric_limits < uint32_t > : : max ( ) ) { <nl> - / / 32 bit uint : 1 initial byte + 4 bytes payload . <nl> - encoded - > push_back ( EncodeInitialByte ( type , kAdditionalInformation4Bytes ) ) ; <nl> - WriteBytesMostSignificantByteFirst < uint32_t > ( static_cast < uint32_t > ( value ) , <nl> - encoded ) ; <nl> - return ; <nl> - } <nl> - / / 64 bit uint : 1 initial byte + 8 bytes payload . 
<nl> - encoded - > push_back ( EncodeInitialByte ( type , kAdditionalInformation8Bytes ) ) ; <nl> - WriteBytesMostSignificantByteFirst < uint64_t > ( value , encoded ) ; <nl> - } <nl> - } / / namespace cbor_internals <nl> - <nl> - namespace { <nl> - / / Extracts sizeof ( T ) bytes from | in | to extract a value of type T <nl> - / / ( e . g . uint64_t , uint32_t , . . . ) , most significant byte first . <nl> - / / See also : https : / / commandcenter . blogspot . com / 2012 / 04 / byte - order - fallacy . html <nl> - template < typename T > <nl> - T ReadBytesMostSignificantByteFirst ( span < uint8_t > in ) { <nl> - assert ( static_cast < std : : size_t > ( in . size ( ) ) > = sizeof ( T ) ) ; <nl> - T result = 0 ; <nl> - for ( std : : size_t shift_bytes = 0 ; shift_bytes < sizeof ( T ) ; + + shift_bytes ) <nl> - result | = T ( in [ sizeof ( T ) - 1 - shift_bytes ] ) < < ( shift_bytes * 8 ) ; <nl> - return result ; <nl> - } <nl> - } / / namespace <nl> - <nl> - namespace cbor_internals { <nl> - int8_t ReadTokenStart ( span < uint8_t > bytes , MajorType * type , uint64_t * value ) { <nl> - if ( bytes . empty ( ) ) return - 1 ; <nl> - uint8_t initial_byte = bytes [ 0 ] ; <nl> - * type = MajorType ( ( initial_byte & kMajorTypeMask ) > > kMajorTypeBitShift ) ; <nl> - <nl> - uint8_t additional_information = initial_byte & kAdditionalInformationMask ; <nl> - if ( additional_information < 24 ) { <nl> - / / Values 0 - 23 are encoded directly into the additional info of the <nl> - / / initial byte . <nl> - * value = additional_information ; <nl> - return 1 ; <nl> - } <nl> - if ( additional_information = = kAdditionalInformation1Byte ) { <nl> - / / Values 24 - 255 are encoded with one initial byte , followed by the value . <nl> - if ( bytes . size ( ) < 2 ) return - 1 ; <nl> - * value = ReadBytesMostSignificantByteFirst < uint8_t > ( bytes . 
subspan ( 1 ) ) ; <nl> - return 2 ; <nl> - } <nl> - if ( additional_information = = kAdditionalInformation2Bytes ) { <nl> - / / Values 256 - 65535 : 1 initial byte + 2 bytes payload . <nl> - if ( static_cast < std : : size_t > ( bytes . size ( ) ) < 1 + sizeof ( uint16_t ) ) <nl> - return - 1 ; <nl> - * value = ReadBytesMostSignificantByteFirst < uint16_t > ( bytes . subspan ( 1 ) ) ; <nl> - return 3 ; <nl> - } <nl> - if ( additional_information = = kAdditionalInformation4Bytes ) { <nl> - / / 32 bit uint : 1 initial byte + 4 bytes payload . <nl> - if ( static_cast < std : : size_t > ( bytes . size ( ) ) < 1 + sizeof ( uint32_t ) ) <nl> - return - 1 ; <nl> - * value = ReadBytesMostSignificantByteFirst < uint32_t > ( bytes . subspan ( 1 ) ) ; <nl> - return 5 ; <nl> - } <nl> - if ( additional_information = = kAdditionalInformation8Bytes ) { <nl> - / / 64 bit uint : 1 initial byte + 8 bytes payload . <nl> - if ( static_cast < std : : size_t > ( bytes . size ( ) ) < 1 + sizeof ( uint64_t ) ) <nl> - return - 1 ; <nl> - * value = ReadBytesMostSignificantByteFirst < uint64_t > ( bytes . subspan ( 1 ) ) ; <nl> - return 9 ; <nl> - } <nl> - return - 1 ; <nl> - } <nl> - } / / namespace cbor_internals <nl> - <nl> - using cbor_internals : : WriteTokenStart ; <nl> - using cbor_internals : : ReadTokenStart ; <nl> - <nl> - void EncodeInt32 ( int32_t value , std : : vector < uint8_t > * out ) { <nl> - if ( value > = 0 ) { <nl> - WriteTokenStart ( MajorType : : UNSIGNED , value , out ) ; <nl> - } else { <nl> - uint64_t representation = static_cast < uint64_t > ( - ( value + 1 ) ) ; <nl> - WriteTokenStart ( MajorType : : NEGATIVE , representation , out ) ; <nl> - } <nl> - } <nl> - <nl> - void EncodeString16 ( span < uint16_t > in , std : : vector < uint8_t > * out ) { <nl> - uint64_t byte_length = static_cast < uint64_t > ( in . 
size_bytes ( ) ) ; <nl> - WriteTokenStart ( MajorType : : BYTE_STRING , byte_length , out ) ; <nl> - / / When emitting UTF16 characters , we always write the least significant byte <nl> - / / first ; this is because it ' s the native representation for X86 . <nl> - / / TODO ( johannes ) : Implement a more efficient thing here later , e . g . <nl> - / / casting * iff * the machine has this byte order . <nl> - / / The wire format for UTF16 chars will probably remain the same <nl> - / / ( least significant byte first ) since this way we can have <nl> - / / golden files , unittests , etc . that port easily and universally . <nl> - / / See also : <nl> - / / https : / / commandcenter . blogspot . com / 2012 / 04 / byte - order - fallacy . html <nl> - for ( const uint16_t two_bytes : in ) { <nl> - out - > push_back ( two_bytes ) ; <nl> - out - > push_back ( two_bytes > > 8 ) ; <nl> - } <nl> - } <nl> - <nl> - void EncodeString8 ( span < uint8_t > in , std : : vector < uint8_t > * out ) { <nl> - WriteTokenStart ( MajorType : : STRING , static_cast < uint64_t > ( in . size_bytes ( ) ) , <nl> - out ) ; <nl> - out - > insert ( out - > end ( ) , in . begin ( ) , in . end ( ) ) ; <nl> - } <nl> - <nl> - void EncodeFromLatin1 ( span < uint8_t > latin1 , std : : vector < uint8_t > * out ) { <nl> - for ( std : : ptrdiff_t ii = 0 ; ii < latin1 . size ( ) ; + + ii ) { <nl> - if ( latin1 [ ii ] < = 127 ) continue ; <nl> - / / If there ' s at least one non - ASCII char , convert to UTF8 . <nl> - std : : vector < uint8_t > utf8 ( latin1 . begin ( ) , latin1 . begin ( ) + ii ) ; <nl> - for ( ; ii < latin1 . size ( ) ; + + ii ) { <nl> - if ( latin1 [ ii ] < = 127 ) { <nl> - utf8 . push_back ( latin1 [ ii ] ) ; <nl> - } else { <nl> - / / 0xC0 means it ' s a UTF8 sequence with 2 bytes . <nl> - utf8 . push_back ( ( latin1 [ ii ] > > 6 ) | 0xc0 ) ; <nl> - utf8 . push_back ( ( latin1 [ ii ] | 0x80 ) & 0xbf ) ; <nl> - } <nl> - } <nl> - EncodeString8 ( span < uint8_t > ( utf8 . data ( ) , utf8 . 
size ( ) ) , out ) ; <nl> - return ; <nl> - } <nl> - EncodeString8 ( latin1 , out ) ; <nl> - } <nl> - <nl> - void EncodeFromUTF16 ( span < uint16_t > utf16 , std : : vector < uint8_t > * out ) { <nl> - / / If there ' s at least one non - ASCII char , encode as STRING16 ( UTF16 ) . <nl> - for ( uint16_t ch : utf16 ) { <nl> - if ( ch < = 127 ) continue ; <nl> - EncodeString16 ( utf16 , out ) ; <nl> - return ; <nl> - } <nl> - / / It ' s all US - ASCII , strip out every second byte and encode as UTF8 . <nl> - WriteTokenStart ( MajorType : : STRING , static_cast < uint64_t > ( utf16 . size ( ) ) , out ) ; <nl> - out - > insert ( out - > end ( ) , utf16 . begin ( ) , utf16 . end ( ) ) ; <nl> - } <nl> - <nl> - void EncodeBinary ( span < uint8_t > in , std : : vector < uint8_t > * out ) { <nl> - out - > push_back ( kExpectedConversionToBase64Tag ) ; <nl> - uint64_t byte_length = static_cast < uint64_t > ( in . size_bytes ( ) ) ; <nl> - WriteTokenStart ( MajorType : : BYTE_STRING , byte_length , out ) ; <nl> - out - > insert ( out - > end ( ) , in . begin ( ) , in . end ( ) ) ; <nl> - } <nl> - <nl> - / / A double is encoded with a specific initial byte <nl> - / / ( kInitialByteForDouble ) plus the 64 bits of payload for its value . <nl> - constexpr std : : ptrdiff_t kEncodedDoubleSize = 1 + sizeof ( uint64_t ) ; <nl> - <nl> - / / An envelope is encoded with a specific initial byte <nl> - / / ( kInitialByteForEnvelope ) , plus the start byte for a BYTE_STRING with a 32 <nl> - / / bit wide length , plus a 32 bit length for that string . <nl> - constexpr std : : ptrdiff_t kEncodedEnvelopeHeaderSize = 1 + 1 + sizeof ( uint32_t ) ; <nl> - <nl> - void EncodeDouble ( double value , std : : vector < uint8_t > * out ) { <nl> - / / The additional_info = 27 indicates 64 bits for the double follow . <nl> - / / See RFC 7049 Section 2 . 3 , Table 1 . 
<nl> - out - > push_back ( kInitialByteForDouble ) ; <nl> - union { <nl> - double from_double ; <nl> - uint64_t to_uint64 ; <nl> - } reinterpret ; <nl> - reinterpret . from_double = value ; <nl> - WriteBytesMostSignificantByteFirst < uint64_t > ( reinterpret . to_uint64 , out ) ; <nl> - } <nl> - <nl> - void EnvelopeEncoder : : EncodeStart ( std : : vector < uint8_t > * out ) { <nl> - assert ( byte_size_pos_ = = 0 ) ; <nl> - out - > push_back ( kInitialByteForEnvelope ) ; <nl> - out - > push_back ( kInitialByteFor32BitLengthByteString ) ; <nl> - byte_size_pos_ = out - > size ( ) ; <nl> - out - > resize ( out - > size ( ) + sizeof ( uint32_t ) ) ; <nl> - } <nl> - <nl> - bool EnvelopeEncoder : : EncodeStop ( std : : vector < uint8_t > * out ) { <nl> - assert ( byte_size_pos_ ! = 0 ) ; <nl> - / / The byte size is the size of the payload , that is , all the <nl> - / / bytes that were written past the byte size position itself . <nl> - uint64_t byte_size = out - > size ( ) - ( byte_size_pos_ + sizeof ( uint32_t ) ) ; <nl> - / / We store exactly 4 bytes , so at most INT32MAX , with most significant <nl> - / / byte first . <nl> - if ( byte_size > std : : numeric_limits < uint32_t > : : max ( ) ) return false ; <nl> - for ( int shift_bytes = sizeof ( uint32_t ) - 1 ; shift_bytes > = 0 ; <nl> - - - shift_bytes ) { <nl> - ( * out ) [ byte_size_pos_ + + ] = 0xff & ( byte_size > > ( shift_bytes * 8 ) ) ; <nl> - } <nl> - return true ; <nl> - } <nl> - <nl> - namespace { <nl> - class JSONToCBOREncoder : public JSONParserHandler { <nl> - public : <nl> - JSONToCBOREncoder ( std : : vector < uint8_t > * out , Status * status ) <nl> - : out_ ( out ) , status_ ( status ) { <nl> - * status_ = Status ( ) ; <nl> - } <nl> - <nl> - void HandleObjectBegin ( ) override { <nl> - envelopes_ . emplace_back ( ) ; <nl> - envelopes_ . back ( ) . 
EncodeStart ( out_ ) ; <nl> - out_ - > push_back ( kInitialByteIndefiniteLengthMap ) ; <nl> - } <nl> - <nl> - void HandleObjectEnd ( ) override { <nl> - out_ - > push_back ( kStopByte ) ; <nl> - assert ( ! envelopes_ . empty ( ) ) ; <nl> - envelopes_ . back ( ) . EncodeStop ( out_ ) ; <nl> - envelopes_ . pop_back ( ) ; <nl> - } <nl> - <nl> - void HandleArrayBegin ( ) override { <nl> - envelopes_ . emplace_back ( ) ; <nl> - envelopes_ . back ( ) . EncodeStart ( out_ ) ; <nl> - out_ - > push_back ( kInitialByteIndefiniteLengthArray ) ; <nl> - } <nl> - <nl> - void HandleArrayEnd ( ) override { <nl> - out_ - > push_back ( kStopByte ) ; <nl> - assert ( ! envelopes_ . empty ( ) ) ; <nl> - envelopes_ . back ( ) . EncodeStop ( out_ ) ; <nl> - envelopes_ . pop_back ( ) ; <nl> - } <nl> - <nl> - void HandleString8 ( span < uint8_t > chars ) override { <nl> - EncodeString8 ( chars , out_ ) ; <nl> - } <nl> - <nl> - void HandleString16 ( span < uint16_t > chars ) override { <nl> - for ( uint16_t ch : chars ) { <nl> - if ( ch > = 0x7f ) { <nl> - / / If there ' s at least one non - 7bit character , we encode as UTF16 . <nl> - EncodeString16 ( chars , out_ ) ; <nl> - return ; <nl> - } <nl> - } <nl> - std : : vector < uint8_t > sevenbit_chars ( chars . begin ( ) , chars . end ( ) ) ; <nl> - EncodeString8 ( span < uint8_t > ( sevenbit_chars . data ( ) , sevenbit_chars . size ( ) ) , <nl> - out_ ) ; <nl> - } <nl> - <nl> - void HandleBinary ( std : : vector < uint8_t > bytes ) override { <nl> - EncodeBinary ( span < uint8_t > ( bytes . data ( ) , bytes . size ( ) ) , out_ ) ; <nl> - } <nl> - <nl> - void HandleDouble ( double value ) override { EncodeDouble ( value , out_ ) ; } <nl> - <nl> - void HandleInt32 ( int32_t value ) override { EncodeInt32 ( value , out_ ) ; } <nl> - <nl> - void HandleBool ( bool value ) override { <nl> - / / See RFC 7049 Section 2 . 3 , Table 2 . <nl> - out_ - > push_back ( value ? 
kEncodedTrue : kEncodedFalse ) ; <nl> - } <nl> - <nl> - void HandleNull ( ) override { <nl> - / / See RFC 7049 Section 2 . 3 , Table 2 . <nl> - out_ - > push_back ( kEncodedNull ) ; <nl> - } <nl> - <nl> - void HandleError ( Status error ) override { <nl> - assert ( ! error . ok ( ) ) ; <nl> - * status_ = error ; <nl> - out_ - > clear ( ) ; <nl> - } <nl> - <nl> - private : <nl> - std : : vector < uint8_t > * out_ ; <nl> - std : : vector < EnvelopeEncoder > envelopes_ ; <nl> - Status * status_ ; <nl> - } ; <nl> - } / / namespace <nl> - <nl> - std : : unique_ptr < JSONParserHandler > NewJSONToCBOREncoder ( <nl> - std : : vector < uint8_t > * out , Status * status ) { <nl> - return std : : unique_ptr < JSONParserHandler > ( new JSONToCBOREncoder ( out , status ) ) ; <nl> - } <nl> - <nl> - namespace { <nl> - / / Below are three parsing routines for CBOR , which cover enough <nl> - / / to roundtrip JSON messages . <nl> - bool ParseMap ( int32_t stack_depth , CBORTokenizer * tokenizer , <nl> - JSONParserHandler * out ) ; <nl> - bool ParseArray ( int32_t stack_depth , CBORTokenizer * tokenizer , <nl> - JSONParserHandler * out ) ; <nl> - bool ParseValue ( int32_t stack_depth , CBORTokenizer * tokenizer , <nl> - JSONParserHandler * out ) ; <nl> - <nl> - void ParseUTF16String ( CBORTokenizer * tokenizer , JSONParserHandler * out ) { <nl> - std : : vector < uint16_t > value ; <nl> - span < uint8_t > rep = tokenizer - > GetString16WireRep ( ) ; <nl> - for ( std : : ptrdiff_t ii = 0 ; ii < rep . size ( ) ; ii + = 2 ) <nl> - value . push_back ( ( rep [ ii + 1 ] < < 8 ) | rep [ ii ] ) ; <nl> - out - > HandleString16 ( span < uint16_t > ( value . data ( ) , value . 
size ( ) ) ) ; <nl> - tokenizer - > Next ( ) ; <nl> - } <nl> - <nl> - bool ParseUTF8String ( CBORTokenizer * tokenizer , JSONParserHandler * out ) { <nl> - assert ( tokenizer - > TokenTag ( ) = = CBORTokenTag : : STRING8 ) ; <nl> - out - > HandleString8 ( tokenizer - > GetString8 ( ) ) ; <nl> - tokenizer - > Next ( ) ; <nl> - return true ; <nl> - } <nl> - <nl> - bool ParseValue ( int32_t stack_depth , CBORTokenizer * tokenizer , <nl> - JSONParserHandler * out ) { <nl> - if ( stack_depth > kStackLimit ) { <nl> - out - > HandleError ( <nl> - Status { Error : : CBOR_STACK_LIMIT_EXCEEDED , tokenizer - > Status ( ) . pos } ) ; <nl> - return false ; <nl> - } <nl> - / / Skip past the envelope to get to what ' s inside . <nl> - if ( tokenizer - > TokenTag ( ) = = CBORTokenTag : : ENVELOPE ) <nl> - tokenizer - > EnterEnvelope ( ) ; <nl> - switch ( tokenizer - > TokenTag ( ) ) { <nl> - case CBORTokenTag : : ERROR_VALUE : <nl> - out - > HandleError ( tokenizer - > Status ( ) ) ; <nl> - return false ; <nl> - case CBORTokenTag : : DONE : <nl> - out - > HandleError ( Status { Error : : CBOR_UNEXPECTED_EOF_EXPECTED_VALUE , <nl> - tokenizer - > Status ( ) . 
pos } ) ; <nl> - return false ; <nl> - case CBORTokenTag : : TRUE_VALUE : <nl> - out - > HandleBool ( true ) ; <nl> - tokenizer - > Next ( ) ; <nl> - return true ; <nl> - case CBORTokenTag : : FALSE_VALUE : <nl> - out - > HandleBool ( false ) ; <nl> - tokenizer - > Next ( ) ; <nl> - return true ; <nl> - case CBORTokenTag : : NULL_VALUE : <nl> - out - > HandleNull ( ) ; <nl> - tokenizer - > Next ( ) ; <nl> - return true ; <nl> - case CBORTokenTag : : INT32 : <nl> - out - > HandleInt32 ( tokenizer - > GetInt32 ( ) ) ; <nl> - tokenizer - > Next ( ) ; <nl> - return true ; <nl> - case CBORTokenTag : : DOUBLE : <nl> - out - > HandleDouble ( tokenizer - > GetDouble ( ) ) ; <nl> - tokenizer - > Next ( ) ; <nl> - return true ; <nl> - case CBORTokenTag : : STRING8 : <nl> - return ParseUTF8String ( tokenizer , out ) ; <nl> - case CBORTokenTag : : STRING16 : <nl> - ParseUTF16String ( tokenizer , out ) ; <nl> - return true ; <nl> - case CBORTokenTag : : BINARY : { <nl> - span < uint8_t > binary = tokenizer - > GetBinary ( ) ; <nl> - out - > HandleBinary ( std : : vector < uint8_t > ( binary . begin ( ) , binary . end ( ) ) ) ; <nl> - tokenizer - > Next ( ) ; <nl> - return true ; <nl> - } <nl> - case CBORTokenTag : : MAP_START : <nl> - return ParseMap ( stack_depth + 1 , tokenizer , out ) ; <nl> - case CBORTokenTag : : ARRAY_START : <nl> - return ParseArray ( stack_depth + 1 , tokenizer , out ) ; <nl> - default : <nl> - out - > HandleError ( <nl> - Status { Error : : CBOR_UNSUPPORTED_VALUE , tokenizer - > Status ( ) . pos } ) ; <nl> - return false ; <nl> - } <nl> - } <nl> - <nl> - / / | bytes | must start with the indefinite length array byte , so basically , <nl> - / / ParseArray may only be called after an indefinite length array has been <nl> - / / detected . 
<nl> - bool ParseArray ( int32_t stack_depth , CBORTokenizer * tokenizer , <nl> - JSONParserHandler * out ) { <nl> - assert ( tokenizer - > TokenTag ( ) = = CBORTokenTag : : ARRAY_START ) ; <nl> - tokenizer - > Next ( ) ; <nl> - out - > HandleArrayBegin ( ) ; <nl> - while ( tokenizer - > TokenTag ( ) ! = CBORTokenTag : : STOP ) { <nl> - if ( tokenizer - > TokenTag ( ) = = CBORTokenTag : : DONE ) { <nl> - out - > HandleError ( <nl> - Status { Error : : CBOR_UNEXPECTED_EOF_IN_ARRAY , tokenizer - > Status ( ) . pos } ) ; <nl> - return false ; <nl> - } <nl> - if ( tokenizer - > TokenTag ( ) = = CBORTokenTag : : ERROR_VALUE ) { <nl> - out - > HandleError ( tokenizer - > Status ( ) ) ; <nl> - return false ; <nl> - } <nl> - / / Parse value . <nl> - if ( ! ParseValue ( stack_depth , tokenizer , out ) ) return false ; <nl> - } <nl> - out - > HandleArrayEnd ( ) ; <nl> - tokenizer - > Next ( ) ; <nl> - return true ; <nl> - } <nl> - <nl> - / / | bytes | must start with the indefinite length array byte , so basically , <nl> - / / ParseArray may only be called after an indefinite length array has been <nl> - / / detected . <nl> - bool ParseMap ( int32_t stack_depth , CBORTokenizer * tokenizer , <nl> - JSONParserHandler * out ) { <nl> - assert ( tokenizer - > TokenTag ( ) = = CBORTokenTag : : MAP_START ) ; <nl> - out - > HandleObjectBegin ( ) ; <nl> - tokenizer - > Next ( ) ; <nl> - while ( tokenizer - > TokenTag ( ) ! = CBORTokenTag : : STOP ) { <nl> - if ( tokenizer - > TokenTag ( ) = = CBORTokenTag : : DONE ) { <nl> - out - > HandleError ( <nl> - Status { Error : : CBOR_UNEXPECTED_EOF_IN_MAP , tokenizer - > Status ( ) . pos } ) ; <nl> - return false ; <nl> - } <nl> - if ( tokenizer - > TokenTag ( ) = = CBORTokenTag : : ERROR_VALUE ) { <nl> - out - > HandleError ( tokenizer - > Status ( ) ) ; <nl> - return false ; <nl> - } <nl> - / / Parse key . <nl> - if ( tokenizer - > TokenTag ( ) = = CBORTokenTag : : STRING8 ) { <nl> - if ( ! 
ParseUTF8String(tokenizer, out)) return false;
-    } else if (tokenizer->TokenTag() == CBORTokenTag::STRING16) {
-      ParseUTF16String(tokenizer, out);
-    } else {
-      out->HandleError(
-          Status{Error::CBOR_INVALID_MAP_KEY, tokenizer->Status().pos});
-      return false;
-    }
-    // Parse value.
-    if (!ParseValue(stack_depth, tokenizer, out)) return false;
-  }
-  out->HandleObjectEnd();
-  tokenizer->Next();
-  return true;
-}
-}  // namespace
-
-void ParseCBOR(span<uint8_t> bytes, JSONParserHandler* json_out) {
-  if (bytes.empty()) {
-    json_out->HandleError(Status{Error::CBOR_NO_INPUT, 0});
-    return;
-  }
-  if (bytes[0] != kInitialByteForEnvelope) {
-    json_out->HandleError(Status{Error::CBOR_INVALID_START_BYTE, 0});
-    return;
-  }
-  CBORTokenizer tokenizer(bytes);
-  if (tokenizer.TokenTag() == CBORTokenTag::ERROR_VALUE) {
-    json_out->HandleError(tokenizer.Status());
-    return;
-  }
-  // We checked for the envelope start byte above, so the tokenizer
-  // must agree here, since it's not an error.
-  assert(tokenizer.TokenTag() == CBORTokenTag::ENVELOPE);
-  tokenizer.EnterEnvelope();
-  if (tokenizer.TokenTag() != CBORTokenTag::MAP_START) {
-    json_out->HandleError(
-        Status{Error::CBOR_MAP_START_EXPECTED, tokenizer.Status().pos});
-    return;
-  }
-  if (!ParseMap(/*stack_depth=*/1, &tokenizer, json_out)) return;
-  if (tokenizer.TokenTag() == CBORTokenTag::DONE) return;
-  if (tokenizer.TokenTag() == CBORTokenTag::ERROR_VALUE) {
-    json_out->HandleError(tokenizer.Status());
-    return;
-  }
-  json_out->HandleError(
-      Status{Error::CBOR_TRAILING_JUNK, tokenizer.Status().pos});
-}
-
-CBORTokenizer::CBORTokenizer(span<uint8_t> bytes) : bytes_(bytes) {
-  ReadNextToken(/*enter_envelope=*/false);
-}
-CBORTokenizer::~CBORTokenizer() {}
-
-CBORTokenTag CBORTokenizer::TokenTag() const { return token_tag_; }
-
-void CBORTokenizer::Next() {
-  if (token_tag_ == CBORTokenTag::ERROR_VALUE || token_tag_ == CBORTokenTag::DONE)
-    return;
-  ReadNextToken(/*enter_envelope=*/false);
-}
-
-void CBORTokenizer::EnterEnvelope() {
-  assert(token_tag_ == CBORTokenTag::ENVELOPE);
-  ReadNextToken(/*enter_envelope=*/true);
-}
-
-Status CBORTokenizer::Status() const { return status_; }
-
-int32_t CBORTokenizer::GetInt32() const {
-  assert(token_tag_ == CBORTokenTag::INT32);
-  // The range checks happen in ::ReadNextToken().
-  return static_cast<uint32_t>(
-      token_start_type_ == MajorType::UNSIGNED
-          ? token_start_internal_value_
-          : -static_cast<int64_t>(token_start_internal_value_) - 1);
-}
-
-double CBORTokenizer::GetDouble() const {
-  assert(token_tag_ == CBORTokenTag::DOUBLE);
-  union {
-    uint64_t from_uint64;
-    double to_double;
-  } reinterpret;
-  reinterpret.from_uint64 = ReadBytesMostSignificantByteFirst<uint64_t>(
-      bytes_.subspan(status_.pos + 1));
-  return reinterpret.to_double;
-}
-
-span<uint8_t> CBORTokenizer::GetString8() const {
-  assert(token_tag_ == CBORTokenTag::STRING8);
-  auto length = static_cast<std::ptrdiff_t>(token_start_internal_value_);
-  return bytes_.subspan(status_.pos + (token_byte_length_ - length), length);
-}
-
-span<uint8_t> CBORTokenizer::GetString16WireRep() const {
-  assert(token_tag_ == CBORTokenTag::STRING16);
-  auto length = static_cast<std::ptrdiff_t>(token_start_internal_value_);
-  return bytes_.subspan(status_.pos + (token_byte_length_ - length), length);
-}
-
-span<uint8_t> CBORTokenizer::GetBinary() const {
-  assert(token_tag_ == CBORTokenTag::BINARY);
-  auto length = static_cast<std::ptrdiff_t>(token_start_internal_value_);
-  return bytes_.subspan(status_.pos + (token_byte_length_ - length), length);
-}
-
-void CBORTokenizer::ReadNextToken(bool enter_envelope) {
-  if (enter_envelope) {
-    status_.pos += kEncodedEnvelopeHeaderSize;
-  } else {
-    status_.pos =
-        status_.pos == Status::npos() ? 0 : status_.pos + token_byte_length_;
-  }
-  status_.error = Error::OK;
-  if (status_.pos >= bytes_.size()) {
-    token_tag_ = CBORTokenTag::DONE;
-    return;
-  }
-  switch (bytes_[status_.pos]) {
-    case kStopByte:
-      SetToken(CBORTokenTag::STOP, 1);
-      return;
-    case kInitialByteIndefiniteLengthMap:
-      SetToken(CBORTokenTag::MAP_START, 1);
-      return;
-    case kInitialByteIndefiniteLengthArray:
-      SetToken(CBORTokenTag::ARRAY_START, 1);
-      return;
-    case kEncodedTrue:
-      SetToken(CBORTokenTag::TRUE_VALUE, 1);
-      return;
-    case kEncodedFalse:
-      SetToken(CBORTokenTag::FALSE_VALUE, 1);
-      return;
-    case kEncodedNull:
-      SetToken(CBORTokenTag::NULL_VALUE, 1);
-      return;
-    case kExpectedConversionToBase64Tag: {  // BINARY
-      int8_t bytes_read =
-          ReadTokenStart(bytes_.subspan(status_.pos + 1), &token_start_type_,
-                         &token_start_internal_value_);
-      int64_t token_byte_length = 1 + bytes_read + token_start_internal_value_;
-      if (-1 == bytes_read || token_start_type_ != MajorType::BYTE_STRING ||
-          status_.pos + token_byte_length > bytes_.size()) {
-        SetError(Error::CBOR_INVALID_BINARY);
-        return;
-      }
-      SetToken(CBORTokenTag::BINARY,
-               static_cast<std::ptrdiff_t>(token_byte_length));
-      return;
-    }
-    case kInitialByteForDouble: {  // DOUBLE
-      if (status_.pos + kEncodedDoubleSize > bytes_.size()) {
-        SetError(Error::CBOR_INVALID_DOUBLE);
-        return;
-      }
-      SetToken(CBORTokenTag::DOUBLE, kEncodedDoubleSize);
-      return;
-    }
-    case kInitialByteForEnvelope: {  // ENVELOPE
-      if (status_.pos + kEncodedEnvelopeHeaderSize > bytes_.size()) {
-        SetError(Error::CBOR_INVALID_ENVELOPE);
-        return;
-      }
-      // The envelope must be a byte string with 32 bit length.
-      if (bytes_[status_.pos + 1] != kInitialByteFor32BitLengthByteString) {
-        SetError(Error::CBOR_INVALID_ENVELOPE);
-        return;
-      }
-      // Read the length of the byte string.
-      token_start_internal_value_ = ReadBytesMostSignificantByteFirst<uint32_t>(
-          bytes_.subspan(status_.pos + 2));
-      // Make sure the payload is contained within the message.
-      if (token_start_internal_value_ + kEncodedEnvelopeHeaderSize +
-              status_.pos >
-          static_cast<std::size_t>(bytes_.size())) {
-        SetError(Error::CBOR_INVALID_ENVELOPE);
-        return;
-      }
-      auto length = static_cast<std::ptrdiff_t>(token_start_internal_value_);
-      SetToken(CBORTokenTag::ENVELOPE,
-               kEncodedEnvelopeHeaderSize + length);
-      return;
-    }
-    default: {
-      span<uint8_t> remainder =
-          bytes_.subspan(status_.pos, bytes_.size() - status_.pos);
-      assert(!remainder.empty());
-      int8_t token_start_length = ReadTokenStart(remainder, &token_start_type_,
-                                                 &token_start_internal_value_);
-      bool success = token_start_length != -1;
-      switch (token_start_type_) {
-        case MajorType::UNSIGNED:  // INT32.
-          if (!success || std::numeric_limits<int32_t>::max() <
-                              token_start_internal_value_) {
-            SetError(Error::CBOR_INVALID_INT32);
-            return;
-          }
-          SetToken(CBORTokenTag::INT32, token_start_length);
-          return;
-        case MajorType::NEGATIVE:  // INT32.
-          if (!success ||
-              std::numeric_limits<int32_t>::min() >
-                  -static_cast<int64_t>(token_start_internal_value_) - 1) {
-            SetError(Error::CBOR_INVALID_INT32);
-            return;
-          }
-          SetToken(CBORTokenTag::INT32, token_start_length);
-          return;
-        case MajorType::STRING: {  // STRING8.
-          if (!success || remainder.size() < static_cast<int64_t>(
-                                                 token_start_internal_value_)) {
-            SetError(Error::CBOR_INVALID_STRING8);
-            return;
-          }
-          auto length = static_cast<std::ptrdiff_t>(token_start_internal_value_);
-          SetToken(CBORTokenTag::STRING8, token_start_length + length);
-          return;
-        }
-        case MajorType::BYTE_STRING: {  // STRING16.
-          if (!success ||
-              remainder.size() <
-                  static_cast<int64_t>(token_start_internal_value_) ||
-              // Must be divisible by 2 since UTF16 is 2 bytes per character.
-              token_start_internal_value_ & 1) {
-            SetError(Error::CBOR_INVALID_STRING16);
-            return;
-          }
-          auto length = static_cast<std::ptrdiff_t>(token_start_internal_value_);
-          SetToken(CBORTokenTag::STRING16, token_start_length + length);
-          return;
-        }
-        case MajorType::ARRAY:
-        case MajorType::MAP:
-        case MajorType::TAG:
-        case MajorType::SIMPLE_VALUE:
-          SetError(Error::CBOR_UNSUPPORTED_VALUE);
-          return;
-      }
-    }
-  }
-}
-
-void CBORTokenizer::SetToken(CBORTokenTag token_tag,
-                             std::ptrdiff_t token_byte_length) {
-  token_tag_ = token_tag;
-  token_byte_length_ = token_byte_length;
-}
-
-void CBORTokenizer::SetError(Error error) {
-  token_tag_ = CBORTokenTag::ERROR_VALUE;
-  status_.error = error;
-}
-
-#if 0
-void DumpCBOR(span<uint8_t> cbor) {
-  std::string indent;
-  CBORTokenizer tokenizer(cbor);
-  while (true) {
-    fprintf(stderr, "%s", indent.c_str());
-    switch (tokenizer.TokenTag()) {
-      case CBORTokenTag::ERROR_VALUE:
-        fprintf(stderr, "ERROR {status.error=%d, status.pos=%ld}\n",
-                tokenizer.Status().error, tokenizer.Status().pos);
-        return;
-      case CBORTokenTag::DONE:
-        fprintf(stderr, "DONE\n");
-        return;
-      case CBORTokenTag::TRUE_VALUE:
-        fprintf(stderr, "TRUE_VALUE\n");
-        break;
-      case CBORTokenTag::FALSE_VALUE:
-        fprintf(stderr, "FALSE_VALUE\n");
-        break;
-      case CBORTokenTag::NULL_VALUE:
-        fprintf(stderr, "NULL_VALUE\n");
-        break;
-      case CBORTokenTag::INT32:
-        fprintf(stderr, "INT32 [%d]\n", tokenizer.GetInt32());
-        break;
-      case CBORTokenTag::DOUBLE:
-        fprintf(stderr, "DOUBLE [%lf]\n", tokenizer.GetDouble());
-        break;
-      case CBORTokenTag::STRING8: {
-        span<uint8_t> v = tokenizer.GetString8();
-        std::string t(v.begin(), v.end());
-        fprintf(stderr, "STRING8 [%s]\n", t.c_str());
-        break;
-      }
-      case CBORTokenTag::STRING16: {
-        span<uint8_t> v = tokenizer.GetString16WireRep();
-        std::string t(v.begin(), v.end());
-        fprintf(stderr, "STRING16 [%s]\n", t.c_str());
-        break;
-      }
-      case CBORTokenTag::BINARY: {
-        span<uint8_t> v = tokenizer.GetBinary();
-        std::string t(v.begin(), v.end());
-        fprintf(stderr, "BINARY [%s]\n", t.c_str());
-        break;
-      }
-      case CBORTokenTag::MAP_START:
-        fprintf(stderr, "MAP_START\n");
-        indent += "  ";
-        break;
-      case CBORTokenTag::ARRAY_START:
-        fprintf(stderr, "ARRAY_START\n");
-        indent += "  ";
-        break;
-      case CBORTokenTag::STOP:
-        fprintf(stderr, "STOP\n");
-        indent.erase(0, 2);
-        break;
-      case CBORTokenTag::ENVELOPE:
-        fprintf(stderr, "ENVELOPE\n");
-        tokenizer.EnterEnvelope();
-        continue;
-    }
-    tokenizer.Next();
-  }
-}
-#endif
-
-
-{% for namespace in config.protocol.namespace %}
-}  // namespace {{namespace}}
-{% endfor %}
-
--- a/third_party/inspector_protocol/lib/Values_cpp.template
+++ b/third_party/inspector_protocol/lib/Values_cpp.template
 static constexpr int kStackLimitValues = 1000;
 
 // Below are three parsing routines for CBOR, which cover enough
 // to roundtrip JSON messages.
-std::unique_ptr<DictionaryValue> parseMap(int32_t stack_depth, CBORTokenizer* tokenizer);
-std::unique_ptr<ListValue> parseArray(int32_t stack_depth, CBORTokenizer* tokenizer);
-std::unique_ptr<Value> parseValue(int32_t stack_depth, CBORTokenizer* tokenizer);
+std::unique_ptr<DictionaryValue> parseMap(int32_t stack_depth, cbor::CBORTokenizer* tokenizer);
+std::unique_ptr<ListValue> parseArray(int32_t stack_depth, cbor::CBORTokenizer* tokenizer);
+std::unique_ptr<Value> parseValue(int32_t stack_depth, cbor::CBORTokenizer* tokenizer);
 
 // |bytes| must start with the indefinite length array byte, so basically,
 // ParseArray may only be called after an indefinite length array has been
 // detected.
-std::unique_ptr<ListValue> parseArray(int32_t stack_depth, CBORTokenizer* tokenizer) {
-  DCHECK(tokenizer->TokenTag() == CBORTokenTag::ARRAY_START);
+std::unique_ptr<ListValue> parseArray(int32_t stack_depth, cbor::CBORTokenizer* tokenizer) {
+  DCHECK(tokenizer->TokenTag() == cbor::CBORTokenTag::ARRAY_START);
   tokenizer->Next();
   auto list = ListValue::create();
-  while (tokenizer->TokenTag() != CBORTokenTag::STOP) {
+  while (tokenizer->TokenTag() != cbor::CBORTokenTag::STOP) {
     // Error::CBOR_UNEXPECTED_EOF_IN_ARRAY
-    if (tokenizer->TokenTag() == CBORTokenTag::DONE) return nullptr;
-    if (tokenizer->TokenTag() == CBORTokenTag::ERROR_VALUE) return nullptr;
+    if (tokenizer->TokenTag() == cbor::CBORTokenTag::DONE) return nullptr;
+    if (tokenizer->TokenTag() == cbor::CBORTokenTag::ERROR_VALUE) return nullptr;
     // Parse value.
     auto value = parseValue(stack_depth, tokenizer);
     if (!value) return nullptr;
std::unique_ptr<ListValue> parseArray(int32_t stack_depth, CBORTokeni
 }
 
 std::unique_ptr<Value> parseValue(
-    int32_t stack_depth, CBORTokenizer* tokenizer) {
+    int32_t stack_depth, cbor::CBORTokenizer* tokenizer) {
   // Error::CBOR_STACK_LIMIT_EXCEEDED
   if (stack_depth > kStackLimitValues) return nullptr;
   // Skip past the envelope to get to what's inside.
-  if (tokenizer->TokenTag() == CBORTokenTag::ENVELOPE)
+  if (tokenizer->TokenTag() == cbor::CBORTokenTag::ENVELOPE)
     tokenizer->EnterEnvelope();
   switch (tokenizer->TokenTag()) {
-    case CBORTokenTag::ERROR_VALUE:
+    case cbor::CBORTokenTag::ERROR_VALUE:
       return nullptr;
-    case CBORTokenTag::DONE:
+    case cbor::CBORTokenTag::DONE:
       // Error::CBOR_UNEXPECTED_EOF_EXPECTED_VALUE
       return nullptr;
-    case CBORTokenTag::TRUE_VALUE: {
+    case cbor::CBORTokenTag::TRUE_VALUE: {
      std::unique_ptr<Value> value = FundamentalValue::create(true);
      tokenizer->Next();
      return value;
    }
-    case CBORTokenTag::FALSE_VALUE: {
+    case cbor::CBORTokenTag::FALSE_VALUE: {
      std::unique_ptr<Value> value = FundamentalValue::create(false);
      tokenizer->Next();
      return value;
    }
-    case CBORTokenTag::NULL_VALUE: {
+    case cbor::CBORTokenTag::NULL_VALUE: {
      std::unique_ptr<Value> value = FundamentalValue::null();
      tokenizer->Next();
      return value;
    }
-    case CBORTokenTag::INT32: {
+    case cbor::CBORTokenTag::INT32: {
      std::unique_ptr<Value> value = FundamentalValue::create(tokenizer->GetInt32());
      tokenizer->Next();
      return value;
    }
-    case CBORTokenTag::DOUBLE: {
+    case cbor::CBORTokenTag::DOUBLE: {
      std::unique_ptr<Value> value = FundamentalValue::create(tokenizer->GetDouble());
      tokenizer->Next();
      return value;
    }
-    case CBORTokenTag::STRING8: {
+    case cbor::CBORTokenTag::STRING8: {
      span<uint8_t> str = tokenizer->GetString8();
      std::unique_ptr<Value> value =
          StringValue::create(StringUtil::fromUTF8(str.data(), str.size()));
      tokenizer->Next();
      return value;
    }
-    case CBORTokenTag::STRING16: {
+    case cbor::CBORTokenTag::STRING16: {
      span<uint8_t> wire = tokenizer->GetString16WireRep();
      DCHECK_EQ(wire.size() & 1, 0);
      std::unique_ptr<Value> value = StringValue::create(StringUtil::fromUTF16(
std::unique_ptr<Value> parseValue(
      tokenizer->Next();
      return value;
    }
-    case CBORTokenTag::BINARY: {
+    case cbor::CBORTokenTag::BINARY: {
      span<uint8_t> payload = tokenizer->GetBinary();
      tokenizer->Next();
      return BinaryValue::create(Binary::fromSpan(payload.data(), payload.size()));
    }
-    case CBORTokenTag::MAP_START:
+    case cbor::CBORTokenTag::MAP_START:
      return parseMap(stack_depth + 1, tokenizer);
-    case CBORTokenTag::ARRAY_START:
+    case cbor::CBORTokenTag::ARRAY_START:
      return parseArray(stack_depth + 1, tokenizer);
    default:
      // Error::CBOR_UNSUPPORTED_VALUE
std::unique_ptr<Value> parseValue(
 // ParseArray may only be called after an indefinite length array has been
 // detected.
 std::unique_ptr<DictionaryValue> parseMap(
-    int32_t stack_depth, CBORTokenizer* tokenizer) {
+    int32_t stack_depth, cbor::CBORTokenizer* tokenizer) {
   auto dict = DictionaryValue::create();
   tokenizer->Next();
-  while (tokenizer->TokenTag() != CBORTokenTag::STOP) {
-    if (tokenizer->TokenTag() == CBORTokenTag::DONE) {
+  while (tokenizer->TokenTag() != cbor::CBORTokenTag::STOP) {
+    if (tokenizer->TokenTag() == cbor::CBORTokenTag::DONE) {
       // Error::CBOR_UNEXPECTED_EOF_IN_MAP
       return nullptr;
     }
-    if (tokenizer->TokenTag() == CBORTokenTag::ERROR_VALUE) return nullptr;
+    if (tokenizer->TokenTag() == cbor::CBORTokenTag::ERROR_VALUE) return nullptr;
     // Parse key.
     String key;
-    if (tokenizer->TokenTag() == CBORTokenTag::STRING8) {
+    if (tokenizer->TokenTag() == cbor::CBORTokenTag::STRING8) {
       span<uint8_t> key_span = tokenizer->GetString8();
       key = StringUtil::fromUTF8(key_span.data(), key_span.size());
       tokenizer->Next();
-    } else if (tokenizer->TokenTag() == CBORTokenTag::STRING16) {
+    } else if (tokenizer->TokenTag() == cbor::CBORTokenTag::STRING16) {
       return nullptr;  // STRING16 not supported yet.
     } else {
       // Error::CBOR_INVALID_MAP_KEY
std::unique_ptr<Value> Value::parseBinary(const uint8_t* data, size_t size) {
   if (bytes.empty()) return nullptr;
 
   // Error::CBOR_INVALID_START_BYTE
-  // TODO(johannes): EncodeInitialByteForEnvelope() method.
-  if (bytes[0] != 0xd8) return nullptr;
+  if (bytes[0] != cbor::InitialByteForEnvelope()) return nullptr;
 
-  CBORTokenizer tokenizer(bytes);
-  if (tokenizer.TokenTag() == CBORTokenTag::ERROR_VALUE) return nullptr;
+  cbor::CBORTokenizer tokenizer(bytes);
+  if (tokenizer.TokenTag() == cbor::CBORTokenTag::ERROR_VALUE) return nullptr;
 
   // We checked for the envelope start byte above, so the tokenizer
   // must agree here, since it's not an error.
-  DCHECK(tokenizer.TokenTag() == CBORTokenTag::ENVELOPE);
+  DCHECK(tokenizer.TokenTag() == cbor::CBORTokenTag::ENVELOPE);
   tokenizer.EnterEnvelope();
   // Error::MAP_START_EXPECTED
-  if (tokenizer.TokenTag() != CBORTokenTag::MAP_START) return nullptr;
+  if (tokenizer.TokenTag() != cbor::CBORTokenTag::MAP_START) return nullptr;
   std::unique_ptr<Value> result = parseMap(/*stack_depth=*/1, &tokenizer);
   if (!result) return nullptr;
-  if (tokenizer.TokenTag() == CBORTokenTag::DONE) return result;
-  if (tokenizer.TokenTag() == CBORTokenTag::ERROR_VALUE) return nullptr;
+  if (tokenizer.TokenTag() == cbor::CBORTokenTag::DONE) return result;
+  if (tokenizer.TokenTag() == cbor::CBORTokenTag::ERROR_VALUE) return nullptr;
   // Error::CBOR_TRAILING_JUNK
   return nullptr;
 }
void Value::writeJSON(StringBuilder* output) const
 
 void Value::writeBinary(std::vector<uint8_t>* bytes) const {
   DCHECK(m_type == TypeNull);
-  bytes->push_back(EncodeNull());
+  bytes->push_back(cbor::EncodeNull());
 }
 
 std::unique_ptr<Value> Value::clone() const
void FundamentalValue::writeJSON(StringBuilder* output) const
 void FundamentalValue::writeBinary(std::vector<uint8_t>* bytes) const {
   switch (type()) {
     case TypeDouble:
-      EncodeDouble(m_doubleValue, bytes);
+      cbor::EncodeDouble(m_doubleValue, bytes);
       return;
     case TypeInteger:
-      EncodeInt32(m_integerValue, bytes);
+      cbor::EncodeInt32(m_integerValue, bytes);
       return;
     case TypeBoolean:
-      bytes->push_back(m_boolValue ? EncodeTrue() : EncodeFalse());
+      bytes->push_back(m_boolValue ? cbor::EncodeTrue() : cbor::EncodeFalse());
       return;
     default:
       DCHECK(false);
namespace {
 // have LATIN1 on the wire, so we call EncodeFromLatin1 which
 // transcodes to UTF8 if needed.
 void EncodeString(const String& s, std::vector<uint8_t>* out) {
-  if (StringUtil::CharactersLatin1(s)) {
-    EncodeFromLatin1(span<uint8_t>(StringUtil::CharactersLatin1(s),
-                                   StringUtil::CharacterCount(s)),
-                     out);
+  if (StringUtil::CharacterCount(s) == 0) {
+    cbor::EncodeString8(span<uint8_t>(nullptr, 0), out);  // Empty string.
+  } else if (StringUtil::CharactersLatin1(s)) {
+    cbor::EncodeFromLatin1(span<uint8_t>(StringUtil::CharactersLatin1(s),
+                                         StringUtil::CharacterCount(s)),
+                           out);
   } else if (StringUtil::CharactersUTF16(s)) {
-    EncodeFromUTF16(span<uint16_t>(StringUtil::CharactersUTF16(s),
-                                   StringUtil::CharacterCount(s)),
-                    out);
+    cbor::EncodeFromUTF16(span<uint16_t>(StringUtil::CharactersUTF16(s),
+                                         StringUtil::CharacterCount(s)),
+                          out);
   } else if (StringUtil::CharactersUTF8(s)) {
-    EncodeString8(span<uint8_t>(StringUtil::CharactersUTF8(s),
-                                StringUtil::CharacterCount(s)),
-                  out);
-  } else {
-    EncodeString8(span<uint8_t>(nullptr, 0), out);  // Empty string.
+    cbor::EncodeString8(span<uint8_t>(StringUtil::CharactersUTF8(s),
+                                      StringUtil::CharacterCount(s)),
+                        out);
   }
 }
 }  // namespace
void BinaryValue::writeJSON(StringBuilder* output) const
 }
 
 void BinaryValue::writeBinary(std::vector<uint8_t>* bytes) const {
-  EncodeBinary(span<uint8_t>(m_binaryValue.data(), m_binaryValue.size()), bytes);
+  cbor::EncodeBinary(span<uint8_t>(m_binaryValue.data(),
+                                   m_binaryValue.size()), bytes);
 }
 
 std::unique_ptr<Value> BinaryValue::clone() const
void DictionaryValue::writeJSON(StringBuilder* output) const
 }
 
 void DictionaryValue::writeBinary(std::vector<uint8_t>* bytes) const {
-  EnvelopeEncoder encoder;
+  cbor::EnvelopeEncoder encoder;
   encoder.EncodeStart(bytes);
-  bytes->push_back(EncodeIndefiniteLengthMapStart());
+  bytes->push_back(cbor::EncodeIndefiniteLengthMapStart());
   for (size_t i = 0; i < m_order.size(); ++i) {
     const String& key = m_order[i];
     Dictionary::const_iterator value = m_data.find(key);
void DictionaryValue::writeBinary(std::vector<uint8_t>* bytes) const {
     EncodeString(key, bytes);
     value->second->writeBinary(bytes);
   }
-  bytes->push_back(EncodeStop());
+  bytes->push_back(cbor::EncodeStop());
   encoder.EncodeStop(bytes);
 }
 
void ListValue::writeJSON(StringBuilder* output) const
 }
 
 void ListValue::writeBinary(std::vector<uint8_t>* bytes) const {
-  EnvelopeEncoder encoder;
+  cbor::EnvelopeEncoder encoder;
   encoder.EncodeStart(bytes);
-  bytes->push_back(EncodeIndefiniteLengthArrayStart());
+  bytes->push_back(cbor::EncodeIndefiniteLengthArrayStart());
   for (size_t i = 0; i < m_data.size(); ++i) {
     m_data[i]->writeBinary(bytes);
   }
-  bytes->push_back(EncodeStop());
+  bytes->push_back(cbor::EncodeStop());
   encoder.EncodeStop(bytes);
 }
 
--- a/third_party/inspector_protocol/lib/base_string_adapter_cc.template
+++ b/third_party/inspector_protocol/lib/base_string_adapter_cc.template
bool AppendStringValueToMapBinary(base::StringPiece in,
   if (in.size() < 1 + 1 + 4 + 1 + 1)
     return false;
   const uint8_t* envelope = reinterpret_cast<const uint8_t*>(in.data());
-  if (cbor::kInitialByteForEnvelope != envelope[0])
+  if (cbor::InitialByteForEnvelope() != envelope[0])
     return false;
-  if (cbor::kInitialByteFor32BitLengthByteString != envelope[1])
+  if (cbor::InitialByteFor32BitLengthByteString() != envelope[1])
     return false;
-  if (cbor::kInitialByteIndefiniteLengthMap != envelope[6])
+  if (cbor::EncodeIndefiniteLengthMapStart() != envelope[6])
     return false;
 
   uint32_t envelope_size = ReadEnvelopeSize(envelope + 2);
   if (envelope_size + 2 + 4 != in.size())
     return false;
-  if (cbor::kStopByte != static_cast<uint8_t>(*in.rbegin()))
+  if (cbor::EncodeStop() != static_cast<uint8_t>(*in.rbegin()))
     return false;
 
   std::vector<uint8_t> encoded_entry;
   encoded_entry.reserve(1 + 4 + key.size() + 1 + 4 + value.size());
   span<uint8_t> key_span(
       reinterpret_cast<const uint8_t*>(key.data()), key.size());
-  EncodeString8(key_span, &encoded_entry);
+  cbor::EncodeString8(key_span, &encoded_entry);
   span<uint8_t> value_span(
       reinterpret_cast<const uint8_t*>(value.data()), value.size());
-  EncodeString8(value_span, &encoded_entry);
+  cbor::EncodeString8(value_span, &encoded_entry);
 
   out->clear();
   out->reserve(in.size() + encoded_entry.size());
   out->append(in.begin(), in.end() - 1);
   out->append(reinterpret_cast<const char*>(encoded_entry.data()),
               encoded_entry.size());
-  out->append(1, static_cast<char>(cbor::kStopByte));
+  out->append(1, static_cast<char>(cbor::EncodeStop()));
   std::size_t new_size = envelope_size + out->size() - in.size();
   if (new_size > static_cast<std::size_t>(
                      std::numeric_limits<uint32_t>::max())) {
new file mode 100644
index 00000000000..84251d9b916
--- /dev/null
+++ b/third_party/inspector_protocol/lib/encoding_cpp.template
+{# This template is generated by gen_cbor_templates.py. #}
+// Generated by lib/encoding_cpp.template.
+
+// Copyright 2019 The Chromium Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+
+#include <cassert>
+#include <cstring>
+#include <limits>
+#include <stack>
+
+{% for namespace in config.protocol.namespace %}
+namespace {{namespace}} {
+{% endfor %}
+
+// ===== encoding/encoding.cc =====
+
+namespace cbor {
+namespace {
+// Indicates the number of bits the "initial byte" needs to be shifted to the
+// right after applying |kMajorTypeMask| to produce the major type in the
+// lowermost bits.
+static constexpr uint8_t kMajorTypeBitShift = 5u;
+// Mask selecting the low-order 5 bits of the "initial byte", which is where
+// the additional information is encoded.
+static constexpr uint8_t kAdditionalInformationMask = 0x1f;
+// Mask selecting the high-order 3 bits of the "initial byte", which indicates
+// the major type of the encoded value.
+static constexpr uint8_t kMajorTypeMask = 0xe0;
+// Indicates the integer is in the following byte.
+static constexpr uint8_t kAdditionalInformation1Byte = 24u;
+// Indicates the integer is in the next 2 bytes.
+static constexpr uint8_t kAdditionalInformation2Bytes = 25u;
+// Indicates the integer is in the next 4 bytes.
+static constexpr uint8_t kAdditionalInformation4Bytes = 26u;
+// Indicates the integer is in the next 8 bytes.
+static constexpr uint8_t kAdditionalInformation8Bytes = 27u;
+
+// Encodes the initial byte, consisting of the |type| in the first 3 bits
+// followed by 5 bits of |additional_info|.
+constexpr uint8_t EncodeInitialByte(MajorType type, uint8_t additional_info) {
+  return (static_cast<uint8_t>(type) << kMajorTypeBitShift) |
+         (additional_info & kAdditionalInformationMask);
+}
+
+// TAG 24 indicates that what follows is a byte string which is
+// encoded in CBOR format. We use this as a wrapper for
+// maps and arrays, allowing us to skip them, because the
+// byte string carries its size (byte length).
+// https://tools.ietf.org/html/rfc7049#section-2.4.4.1
+static constexpr uint8_t kInitialByteForEnvelope =
+    EncodeInitialByte(MajorType::TAG, 24);
+// The initial byte for a byte string with at most 2^32 bytes
+// of payload. This is used for envelope encoding, even if
+// the byte string is shorter.
+static constexpr uint8_t kInitialByteFor32BitLengthByteString =
+    EncodeInitialByte(MajorType::BYTE_STRING, 26);
+
+// See RFC 7049 Section 2.2.1, indefinite length arrays / maps have additional
+// info = 31.
+static constexpr uint8_t kInitialByteIndefiniteLengthArray =
+    EncodeInitialByte(MajorType::ARRAY, 31);
+static constexpr uint8_t kInitialByteIndefiniteLengthMap =
+    EncodeInitialByte(MajorType::MAP, 31);
+// See RFC 7049 Section 2.3, Table 1; this is used for finishing indefinite
+// length maps / arrays.
+static constexpr uint8_t kStopByte =
+    EncodeInitialByte(MajorType::SIMPLE_VALUE, 31);
+
+// See RFC 7049 Section 2.3, Table 2.
+static constexpr uint8_t kEncodedTrue =
+    EncodeInitialByte(MajorType::SIMPLE_VALUE, 21);
+static constexpr uint8_t kEncodedFalse =
+    EncodeInitialByte(MajorType::SIMPLE_VALUE, 20);
+static constexpr uint8_t kEncodedNull =
+    EncodeInitialByte(MajorType::SIMPLE_VALUE, 22);
+static constexpr uint8_t kInitialByteForDouble =
+    EncodeInitialByte(MajorType::SIMPLE_VALUE, 27);
+
+// See RFC 7049 Table 3 and Section 2.4.4.2. This is used as a prefix for
+// arbitrary binary data encoded as BYTE_STRING.
+static constexpr uint8_t kExpectedConversionToBase64Tag =
+    EncodeInitialByte(MajorType::TAG, 22);
+
+// Writes the bytes for |v| to |out|, starting with the most significant byte.
+// See also: https://commandcenter.blogspot.com/2012/04/byte-order-fallacy.html
+template <typename T>
+void WriteBytesMostSignificantByteFirst(T v, std::vector<uint8_t>* out) {
+  for (int shift_bytes = sizeof(T) - 1; shift_bytes >= 0; --shift_bytes)
+    out->push_back(0xff & (v >> (shift_bytes * 8)));
+}
+
+// Extracts sizeof(T) bytes from |in| to extract a value of type T
+// (e.g. uint64_t, uint32_t, ...), most significant byte first.
+// See also: https://commandcenter.blogspot.com/2012/04/byte-order-fallacy.html
+template <typename T>
+T ReadBytesMostSignificantByteFirst(span<uint8_t> in) {
+  assert(static_cast<std::size_t>(in.size()) >= sizeof(T));
+  T result = 0;
+  for (std::size_t shift_bytes = 0; shift_bytes < sizeof(T); ++shift_bytes)
+    result |= T(in[sizeof(T) - 1 - shift_bytes]) << (shift_bytes * 8);
+  return result;
+}
+}  // namespace
+
+namespace internals {
+// Reads the start of a token with definitive size from |bytes|.
+// |type| is the major type as specified in RFC 7049 Section 2.1.
+// |value| is the payload (e.g. for MajorType::UNSIGNED) or is the size
+// (e.g. for BYTE_STRING).
+// If successful, returns the number of bytes read. Otherwise returns -1.
+int8_t ReadTokenStart(span<uint8_t> bytes, MajorType* type, uint64_t* value) {
+  if (bytes.empty())
+    return -1;
+  uint8_t initial_byte = bytes[0];
+  *type = MajorType((initial_byte & kMajorTypeMask) >> kMajorTypeBitShift);
+
+  uint8_t additional_information = initial_byte & kAdditionalInformationMask;
+  if (additional_information < 24) {
+    // Values 0-23 are encoded directly into the additional info of the
+    // initial byte.
+    *value = additional_information;
+    return 1;
+  }
+  if (additional_information == kAdditionalInformation1Byte) {
+    // Values 24-255 are encoded with one initial byte, followed by the value.
+    if (bytes.size() < 2)
+      return -1;
+    *value = ReadBytesMostSignificantByteFirst<uint8_t>(bytes.subspan(1));
+    return 2;
+  }
+  if (additional_information == kAdditionalInformation2Bytes) {
+    // Values 256-65535: 1 initial byte + 2 bytes payload.
+    if (static_cast<std::size_t>(bytes.size()) < 1 + sizeof(uint16_t))
+      return -1;
+    *value = ReadBytesMostSignificantByteFirst<uint16_t>(bytes.subspan(1));
+    return 3;
+  }
+  if (additional_information == kAdditionalInformation4Bytes) {
+    // 32 bit uint: 1 initial byte + 4 bytes payload.
+    if (static_cast<std::size_t>(bytes.size()) < 1 + sizeof(uint32_t))
+      return -1;
+    *value = ReadBytesMostSignificantByteFirst<uint32_t>(bytes.subspan(1));
+    return 5;
+  }
+  if (additional_information == kAdditionalInformation8Bytes) {
+    // 64 bit uint: 1 initial byte + 8 bytes payload.
+    if (static_cast<std::size_t>(bytes.size()) < 1 + sizeof(uint64_t))
+      return -1;
+    *value = ReadBytesMostSignificantByteFirst<uint64_t>(bytes.subspan(1));
+    return 9;
+  }
+  return -1;
+}
+
+// Writes the start of a token with |type|. The |value| may indicate the size,
+// or it may be the payload if the value is an unsigned integer.
+void WriteTokenStart(MajorType type,
+                     uint64_t value,
+                     std::vector<uint8_t>* encoded) {
+  if (value < 24) {
+    // Values 0-23 are encoded directly into the additional info of the
+    // initial byte.
+    encoded->push_back(EncodeInitialByte(type, /*additional_info=*/value));
+    return;
+  }
+  if (value <= std::numeric_limits<uint8_t>::max()) {
+    // Values 24-255 are encoded with one initial byte, followed by the value.
+    encoded->push_back(EncodeInitialByte(type, kAdditionalInformation1Byte));
+    encoded->push_back(value);
+    return;
+  }
+  if (value <= std::numeric_limits<uint16_t>::max()) {
+    // Values 256-65535: 1 initial byte + 2 bytes payload.
+    encoded->push_back(EncodeInitialByte(type, kAdditionalInformation2Bytes));
+    WriteBytesMostSignificantByteFirst<uint16_t>(value, encoded);
+    return;
+  }
+  if (value <= std::numeric_limits<uint32_t>::max()) {
+    // 32 bit uint: 1 initial byte + 4 bytes payload.
+    encoded->push_back(EncodeInitialByte(type, kAdditionalInformation4Bytes));
+    WriteBytesMostSignificantByteFirst<uint32_t>(static_cast<uint32_t>(value),
+                                                 encoded);
+    return;
+  }
+  // 64 bit uint: 1 initial byte + 8 bytes payload.
+  encoded->push_back(EncodeInitialByte(type, kAdditionalInformation8Bytes));
+  WriteBytesMostSignificantByteFirst<uint64_t>(value, encoded);
+}
+}  // namespace internals
+
+// =============================================================================
+// Detecting CBOR content
+// =============================================================================
+
+uint8_t InitialByteForEnvelope() {
+  return kInitialByteForEnvelope;
+}
+uint8_t InitialByteFor32BitLengthByteString() {
+  return kInitialByteFor32BitLengthByteString;
+}
+bool IsCBORMessage(span<uint8_t> msg) {
+  return msg.
size ( ) > = 6 & & msg [ 0 ] = = InitialByteForEnvelope ( ) & & <nl> + msg [ 1 ] = = InitialByteFor32BitLengthByteString ( ) ; <nl> + } <nl> + <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + / / Encoding individual CBOR items <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + <nl> + uint8_t EncodeTrue ( ) { <nl> + return kEncodedTrue ; <nl> + } <nl> + uint8_t EncodeFalse ( ) { <nl> + return kEncodedFalse ; <nl> + } <nl> + uint8_t EncodeNull ( ) { <nl> + return kEncodedNull ; <nl> + } <nl> + <nl> + uint8_t EncodeIndefiniteLengthArrayStart ( ) { <nl> + return kInitialByteIndefiniteLengthArray ; <nl> + } <nl> + <nl> + uint8_t EncodeIndefiniteLengthMapStart ( ) { <nl> + return kInitialByteIndefiniteLengthMap ; <nl> + } <nl> + <nl> + uint8_t EncodeStop ( ) { <nl> + return kStopByte ; <nl> + } <nl> + <nl> + void EncodeInt32 ( int32_t value , std : : vector < uint8_t > * out ) { <nl> + if ( value > = 0 ) { <nl> + internals : : WriteTokenStart ( MajorType : : UNSIGNED , value , out ) ; <nl> + } else { <nl> + uint64_t representation = static_cast < uint64_t > ( - ( value + 1 ) ) ; <nl> + internals : : WriteTokenStart ( MajorType : : NEGATIVE , representation , out ) ; <nl> + } <nl> + } <nl> + <nl> + void EncodeString16 ( span < uint16_t > in , std : : vector < uint8_t > * out ) { <nl> + uint64_t byte_length = static_cast < uint64_t > ( in . size_bytes ( ) ) ; <nl> + internals : : WriteTokenStart ( MajorType : : BYTE_STRING , byte_length , out ) ; <nl> + / / When emitting UTF16 characters , we always write the least significant byte <nl> + / / first ; this is because it ' s the native representation for X86 . <nl> + / / TODO ( johannes ) : Implement a more efficient thing here later , e . g . 
<nl> + / / casting * iff * the machine has this byte order . <nl> + / / The wire format for UTF16 chars will probably remain the same <nl> + / / ( least significant byte first ) since this way we can have <nl> + / / golden files , unittests , etc . that port easily and universally . <nl> + / / See also : <nl> + / / https : / / commandcenter . blogspot . com / 2012 / 04 / byte - order - fallacy . html <nl> + for ( const uint16_t two_bytes : in ) { <nl> + out - > push_back ( two_bytes ) ; <nl> + out - > push_back ( two_bytes > > 8 ) ; <nl> + } <nl> + } <nl> + <nl> + void EncodeString8 ( span < uint8_t > in , std : : vector < uint8_t > * out ) { <nl> + internals : : WriteTokenStart ( MajorType : : STRING , <nl> + static_cast < uint64_t > ( in . size_bytes ( ) ) , out ) ; <nl> + out - > insert ( out - > end ( ) , in . begin ( ) , in . end ( ) ) ; <nl> + } <nl> + <nl> + void EncodeFromLatin1 ( span < uint8_t > latin1 , std : : vector < uint8_t > * out ) { <nl> + for ( std : : ptrdiff_t ii = 0 ; ii < latin1 . size ( ) ; + + ii ) { <nl> + if ( latin1 [ ii ] < = 127 ) <nl> + continue ; <nl> + / / If there ' s at least one non - ASCII char , convert to UTF8 . <nl> + std : : vector < uint8_t > utf8 ( latin1 . begin ( ) , latin1 . begin ( ) + ii ) ; <nl> + for ( ; ii < latin1 . size ( ) ; + + ii ) { <nl> + if ( latin1 [ ii ] < = 127 ) { <nl> + utf8 . push_back ( latin1 [ ii ] ) ; <nl> + } else { <nl> + / / 0xC0 means it ' s a UTF8 sequence with 2 bytes . <nl> + utf8 . push_back ( ( latin1 [ ii ] > > 6 ) | 0xc0 ) ; <nl> + utf8 . push_back ( ( latin1 [ ii ] | 0x80 ) & 0xbf ) ; <nl> + } <nl> + } <nl> + EncodeString8 ( SpanFromVector ( utf8 ) , out ) ; <nl> + return ; <nl> + } <nl> + EncodeString8 ( latin1 , out ) ; <nl> + } <nl> + <nl> + void EncodeFromUTF16 ( span < uint16_t > utf16 , std : : vector < uint8_t > * out ) { <nl> + / / If there ' s at least one non - ASCII char , encode as STRING16 ( UTF16 ) . 
<nl> + for ( uint16_t ch : utf16 ) { <nl> + if ( ch < = 127 ) <nl> + continue ; <nl> + EncodeString16 ( utf16 , out ) ; <nl> + return ; <nl> + } <nl> + / / It ' s all US - ASCII , strip out every second byte and encode as UTF8 . <nl> + internals : : WriteTokenStart ( MajorType : : STRING , <nl> + static_cast < uint64_t > ( utf16 . size ( ) ) , out ) ; <nl> + out - > insert ( out - > end ( ) , utf16 . begin ( ) , utf16 . end ( ) ) ; <nl> + } <nl> + <nl> + void EncodeBinary ( span < uint8_t > in , std : : vector < uint8_t > * out ) { <nl> + out - > push_back ( kExpectedConversionToBase64Tag ) ; <nl> + uint64_t byte_length = static_cast < uint64_t > ( in . size_bytes ( ) ) ; <nl> + internals : : WriteTokenStart ( MajorType : : BYTE_STRING , byte_length , out ) ; <nl> + out - > insert ( out - > end ( ) , in . begin ( ) , in . end ( ) ) ; <nl> + } <nl> + <nl> + / / A double is encoded with a specific initial byte <nl> + / / ( kInitialByteForDouble ) plus the 64 bits of payload for its value . <nl> + constexpr std : : ptrdiff_t kEncodedDoubleSize = 1 + sizeof ( uint64_t ) ; <nl> + <nl> + / / An envelope is encoded with a specific initial byte <nl> + / / ( kInitialByteForEnvelope ) , plus the start byte for a BYTE_STRING with a 32 <nl> + / / bit wide length , plus a 32 bit length for that string . <nl> + constexpr std : : ptrdiff_t kEncodedEnvelopeHeaderSize = 1 + 1 + sizeof ( uint32_t ) ; <nl> + <nl> + void EncodeDouble ( double value , std : : vector < uint8_t > * out ) { <nl> + / / The additional_info = 27 indicates 64 bits for the double follow . <nl> + / / See RFC 7049 Section 2 . 3 , Table 1 . <nl> + out - > push_back ( kInitialByteForDouble ) ; <nl> + union { <nl> + double from_double ; <nl> + uint64_t to_uint64 ; <nl> + } reinterpret ; <nl> + reinterpret . from_double = value ; <nl> + WriteBytesMostSignificantByteFirst < uint64_t > ( reinterpret . 
to_uint64 , out ) ; <nl> + } <nl> + <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + / / cbor : : EnvelopeEncoder - for wrapping submessages <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + <nl> + void EnvelopeEncoder : : EncodeStart ( std : : vector < uint8_t > * out ) { <nl> + assert ( byte_size_pos_ = = 0 ) ; <nl> + out - > push_back ( kInitialByteForEnvelope ) ; <nl> + out - > push_back ( kInitialByteFor32BitLengthByteString ) ; <nl> + byte_size_pos_ = out - > size ( ) ; <nl> + out - > resize ( out - > size ( ) + sizeof ( uint32_t ) ) ; <nl> + } <nl> + <nl> + bool EnvelopeEncoder : : EncodeStop ( std : : vector < uint8_t > * out ) { <nl> + assert ( byte_size_pos_ ! = 0 ) ; <nl> + / / The byte size is the size of the payload , that is , all the <nl> + / / bytes that were written past the byte size position itself . <nl> + uint64_t byte_size = out - > size ( ) - ( byte_size_pos_ + sizeof ( uint32_t ) ) ; <nl> + / / We store exactly 4 bytes , so at most INT32MAX , with most significant <nl> + / / byte first . 
<nl> + if ( byte_size > std : : numeric_limits < uint32_t > : : max ( ) ) <nl> + return false ; <nl> + for ( int shift_bytes = sizeof ( uint32_t ) - 1 ; shift_bytes > = 0 ; <nl> + - - shift_bytes ) { <nl> + ( * out ) [ byte_size_pos_ + + ] = 0xff & ( byte_size > > ( shift_bytes * 8 ) ) ; <nl> + } <nl> + return true ; <nl> + } <nl> + <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + / / cbor : : NewCBOREncoder - for encoding from a streaming parser <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + <nl> + namespace { <nl> + class CBOREncoder : public StreamingParserHandler { <nl> + public : <nl> + CBOREncoder ( std : : vector < uint8_t > * out , Status * status ) <nl> + : out_ ( out ) , status_ ( status ) { <nl> + * status_ = Status ( ) ; <nl> + } <nl> + <nl> + void HandleMapBegin ( ) override { <nl> + envelopes_ . emplace_back ( ) ; <nl> + envelopes_ . back ( ) . EncodeStart ( out_ ) ; <nl> + out_ - > push_back ( kInitialByteIndefiniteLengthMap ) ; <nl> + } <nl> + <nl> + void HandleMapEnd ( ) override { <nl> + out_ - > push_back ( kStopByte ) ; <nl> + assert ( ! envelopes_ . empty ( ) ) ; <nl> + envelopes_ . back ( ) . EncodeStop ( out_ ) ; <nl> + envelopes_ . pop_back ( ) ; <nl> + } <nl> + <nl> + void HandleArrayBegin ( ) override { <nl> + envelopes_ . emplace_back ( ) ; <nl> + envelopes_ . back ( ) . EncodeStart ( out_ ) ; <nl> + out_ - > push_back ( kInitialByteIndefiniteLengthArray ) ; <nl> + } <nl> + <nl> + void HandleArrayEnd ( ) override { <nl> + out_ - > push_back ( kStopByte ) ; <nl> + assert ( ! envelopes_ . empty ( ) ) ; <nl> + envelopes_ . back ( ) . EncodeStop ( out_ ) ; <nl> + envelopes_ . 
pop_back ( ) ; <nl> + } <nl> + <nl> + void HandleString8 ( span < uint8_t > chars ) override { <nl> + EncodeString8 ( chars , out_ ) ; <nl> + } <nl> + <nl> + void HandleString16 ( span < uint16_t > chars ) override { <nl> + EncodeFromUTF16 ( chars , out_ ) ; <nl> + } <nl> + <nl> + void HandleBinary ( span < uint8_t > bytes ) override { EncodeBinary ( bytes , out_ ) ; } <nl> + <nl> + void HandleDouble ( double value ) override { EncodeDouble ( value , out_ ) ; } <nl> + <nl> + void HandleInt32 ( int32_t value ) override { EncodeInt32 ( value , out_ ) ; } <nl> + <nl> + void HandleBool ( bool value ) override { <nl> + / / See RFC 7049 Section 2 . 3 , Table 2 . <nl> + out_ - > push_back ( value ? kEncodedTrue : kEncodedFalse ) ; <nl> + } <nl> + <nl> + void HandleNull ( ) override { <nl> + / / See RFC 7049 Section 2 . 3 , Table 2 . <nl> + out_ - > push_back ( kEncodedNull ) ; <nl> + } <nl> + <nl> + void HandleError ( Status error ) override { <nl> + assert ( ! error . ok ( ) ) ; <nl> + * status_ = error ; <nl> + out_ - > clear ( ) ; <nl> + } <nl> + <nl> + private : <nl> + std : : vector < uint8_t > * out_ ; <nl> + std : : vector < EnvelopeEncoder > envelopes_ ; <nl> + Status * status_ ; <nl> + } ; <nl> + } / / namespace <nl> + <nl> + std : : unique_ptr < StreamingParserHandler > NewCBOREncoder ( <nl> + std : : vector < uint8_t > * out , <nl> + Status * status ) { <nl> + return std : : unique_ptr < StreamingParserHandler > ( new CBOREncoder ( out , status ) ) ; <nl> + } <nl> + <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + / / cbor : : CBORTokenizer - for parsing individual CBOR items <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + <nl> + CBORTokenizer : : CBORTokenizer ( span < uint8_t > bytes ) : bytes_ ( bytes ) { <nl> + 
ReadNextToken ( / * enter_envelope = * / false ) ; <nl> + } <nl> + CBORTokenizer : : ~ CBORTokenizer ( ) { } <nl> + <nl> + CBORTokenTag CBORTokenizer : : TokenTag ( ) const { <nl> + return token_tag_ ; <nl> + } <nl> + <nl> + void CBORTokenizer : : Next ( ) { <nl> + if ( token_tag_ = = CBORTokenTag : : ERROR_VALUE | | <nl> + token_tag_ = = CBORTokenTag : : DONE ) <nl> + return ; <nl> + ReadNextToken ( / * enter_envelope = * / false ) ; <nl> + } <nl> + <nl> + void CBORTokenizer : : EnterEnvelope ( ) { <nl> + assert ( token_tag_ = = CBORTokenTag : : ENVELOPE ) ; <nl> + ReadNextToken ( / * enter_envelope = * / true ) ; <nl> + } <nl> + <nl> + Status CBORTokenizer : : Status ( ) const { <nl> + return status_ ; <nl> + } <nl> + <nl> + int32_t CBORTokenizer : : GetInt32 ( ) const { <nl> + assert ( token_tag_ = = CBORTokenTag : : INT32 ) ; <nl> + / / The range checks happen in : : ReadNextToken ( ) . <nl> + return static_cast < int32_t > ( <nl> + token_start_type_ = = MajorType : : UNSIGNED <nl> + ? token_start_internal_value_ <nl> + : - static_cast < int64_t > ( token_start_internal_value_ ) - 1 ) ; <nl> + } <nl> + <nl> + double CBORTokenizer : : GetDouble ( ) const { <nl> + assert ( token_tag_ = = CBORTokenTag : : DOUBLE ) ; <nl> + union { <nl> + uint64_t from_uint64 ; <nl> + double to_double ; <nl> + } reinterpret ; <nl> + reinterpret . from_uint64 = ReadBytesMostSignificantByteFirst < uint64_t > ( <nl> + bytes_ . subspan ( status_ . pos + 1 ) ) ; <nl> + return reinterpret . to_double ; <nl> + } <nl> + <nl> + span < uint8_t > CBORTokenizer : : GetString8 ( ) const { <nl> + assert ( token_tag_ = = CBORTokenTag : : STRING8 ) ; <nl> + auto length = static_cast < std : : ptrdiff_t > ( token_start_internal_value_ ) ; <nl> + return bytes_ . subspan ( status_ . 
pos + ( token_byte_length_ - length ) , length ) ; <nl> + } <nl> + <nl> + span < uint8_t > CBORTokenizer : : GetString16WireRep ( ) const { <nl> + assert ( token_tag_ = = CBORTokenTag : : STRING16 ) ; <nl> + auto length = static_cast < std : : ptrdiff_t > ( token_start_internal_value_ ) ; <nl> + return bytes_ . subspan ( status_ . pos + ( token_byte_length_ - length ) , length ) ; <nl> + } <nl> + <nl> + span < uint8_t > CBORTokenizer : : GetBinary ( ) const { <nl> + assert ( token_tag_ = = CBORTokenTag : : BINARY ) ; <nl> + auto length = static_cast < std : : ptrdiff_t > ( token_start_internal_value_ ) ; <nl> + return bytes_ . subspan ( status_ . pos + ( token_byte_length_ - length ) , length ) ; <nl> + } <nl> + <nl> + span < uint8_t > CBORTokenizer : : GetEnvelopeContents ( ) const { <nl> + assert ( token_tag_ = = CBORTokenTag : : ENVELOPE ) ; <nl> + auto length = static_cast < std : : ptrdiff_t > ( token_start_internal_value_ ) ; <nl> + return bytes_ . subspan ( status_ . pos + kEncodedEnvelopeHeaderSize , length ) ; <nl> + } <nl> + <nl> + void CBORTokenizer : : ReadNextToken ( bool enter_envelope ) { <nl> + if ( enter_envelope ) { <nl> + status_ . pos + = kEncodedEnvelopeHeaderSize ; <nl> + } else { <nl> + status_ . pos = <nl> + status_ . pos = = Status : : npos ( ) ? 0 : status_ . pos + token_byte_length_ ; <nl> + } <nl> + status_ . error = Error : : OK ; <nl> + if ( status_ . pos > = bytes_ . size ( ) ) { <nl> + token_tag_ = CBORTokenTag : : DONE ; <nl> + return ; <nl> + } <nl> + switch ( bytes_ [ status_ . 
pos ] ) { <nl> + case kStopByte : <nl> + SetToken ( CBORTokenTag : : STOP , 1 ) ; <nl> + return ; <nl> + case kInitialByteIndefiniteLengthMap : <nl> + SetToken ( CBORTokenTag : : MAP_START , 1 ) ; <nl> + return ; <nl> + case kInitialByteIndefiniteLengthArray : <nl> + SetToken ( CBORTokenTag : : ARRAY_START , 1 ) ; <nl> + return ; <nl> + case kEncodedTrue : <nl> + SetToken ( CBORTokenTag : : TRUE_VALUE , 1 ) ; <nl> + return ; <nl> + case kEncodedFalse : <nl> + SetToken ( CBORTokenTag : : FALSE_VALUE , 1 ) ; <nl> + return ; <nl> + case kEncodedNull : <nl> + SetToken ( CBORTokenTag : : NULL_VALUE , 1 ) ; <nl> + return ; <nl> + case kExpectedConversionToBase64Tag : { / / BINARY <nl> + int8_t bytes_read = internals : : ReadTokenStart ( <nl> + bytes_ . subspan ( status_ . pos + 1 ) , & token_start_type_ , <nl> + & token_start_internal_value_ ) ; <nl> + int64_t token_byte_length = 1 + bytes_read + token_start_internal_value_ ; <nl> + if ( - 1 = = bytes_read | | token_start_type_ ! = MajorType : : BYTE_STRING | | <nl> + status_ . pos + token_byte_length > bytes_ . size ( ) ) { <nl> + SetError ( Error : : CBOR_INVALID_BINARY ) ; <nl> + return ; <nl> + } <nl> + SetToken ( CBORTokenTag : : BINARY , <nl> + static_cast < std : : ptrdiff_t > ( token_byte_length ) ) ; <nl> + return ; <nl> + } <nl> + case kInitialByteForDouble : { / / DOUBLE <nl> + if ( status_ . pos + kEncodedDoubleSize > bytes_ . size ( ) ) { <nl> + SetError ( Error : : CBOR_INVALID_DOUBLE ) ; <nl> + return ; <nl> + } <nl> + SetToken ( CBORTokenTag : : DOUBLE , kEncodedDoubleSize ) ; <nl> + return ; <nl> + } <nl> + case kInitialByteForEnvelope : { / / ENVELOPE <nl> + if ( status_ . pos + kEncodedEnvelopeHeaderSize > bytes_ . size ( ) ) { <nl> + SetError ( Error : : CBOR_INVALID_ENVELOPE ) ; <nl> + return ; <nl> + } <nl> + / / The envelope must be a byte string with 32 bit length . <nl> + if ( bytes_ [ status_ . pos + 1 ] ! 
= kInitialByteFor32BitLengthByteString ) { <nl> + SetError ( Error : : CBOR_INVALID_ENVELOPE ) ; <nl> + return ; <nl> + } <nl> + / / Read the length of the byte string . <nl> + token_start_internal_value_ = ReadBytesMostSignificantByteFirst < uint32_t > ( <nl> + bytes_ . subspan ( status_ . pos + 2 ) ) ; <nl> + / / Make sure the payload is contained within the message . <nl> + if ( token_start_internal_value_ + kEncodedEnvelopeHeaderSize + <nl> + status_ . pos > <nl> + static_cast < std : : size_t > ( bytes_ . size ( ) ) ) { <nl> + SetError ( Error : : CBOR_INVALID_ENVELOPE ) ; <nl> + return ; <nl> + } <nl> + auto length = static_cast < std : : ptrdiff_t > ( token_start_internal_value_ ) ; <nl> + SetToken ( CBORTokenTag : : ENVELOPE , kEncodedEnvelopeHeaderSize + length ) ; <nl> + return ; <nl> + } <nl> + default : { <nl> + span < uint8_t > remainder = <nl> + bytes_ . subspan ( status_ . pos , bytes_ . size ( ) - status_ . pos ) ; <nl> + assert ( ! remainder . empty ( ) ) ; <nl> + int8_t token_start_length = internals : : ReadTokenStart ( <nl> + remainder , & token_start_type_ , & token_start_internal_value_ ) ; <nl> + bool success = token_start_length ! = - 1 ; <nl> + switch ( token_start_type_ ) { <nl> + case MajorType : : UNSIGNED : / / INT32 . <nl> + if ( ! success | | std : : numeric_limits < int32_t > : : max ( ) < <nl> + token_start_internal_value_ ) { <nl> + SetError ( Error : : CBOR_INVALID_INT32 ) ; <nl> + return ; <nl> + } <nl> + SetToken ( CBORTokenTag : : INT32 , token_start_length ) ; <nl> + return ; <nl> + case MajorType : : NEGATIVE : / / INT32 . <nl> + if ( ! success | | <nl> + std : : numeric_limits < int32_t > : : min ( ) > <nl> + - static_cast < int64_t > ( token_start_internal_value_ ) - 1 ) { <nl> + SetError ( Error : : CBOR_INVALID_INT32 ) ; <nl> + return ; <nl> + } <nl> + SetToken ( CBORTokenTag : : INT32 , token_start_length ) ; <nl> + return ; <nl> + case MajorType : : STRING : { / / STRING8 . <nl> + if ( ! success | | remainder . 
size ( ) < static_cast < int64_t > ( <nl> + token_start_internal_value_ ) ) { <nl> + SetError ( Error : : CBOR_INVALID_STRING8 ) ; <nl> + return ; <nl> + } <nl> + auto length = <nl> + static_cast < std : : ptrdiff_t > ( token_start_internal_value_ ) ; <nl> + SetToken ( CBORTokenTag : : STRING8 , token_start_length + length ) ; <nl> + return ; <nl> + } <nl> + case MajorType : : BYTE_STRING : { / / STRING16 . <nl> + if ( ! success | | <nl> + remainder . size ( ) < <nl> + static_cast < int64_t > ( token_start_internal_value_ ) | | <nl> + / / Must be divisible by 2 since UTF16 is 2 bytes per character . <nl> + token_start_internal_value_ & 1 ) { <nl> + SetError ( Error : : CBOR_INVALID_STRING16 ) ; <nl> + return ; <nl> + } <nl> + auto length = <nl> + static_cast < std : : ptrdiff_t > ( token_start_internal_value_ ) ; <nl> + SetToken ( CBORTokenTag : : STRING16 , token_start_length + length ) ; <nl> + return ; <nl> + } <nl> + case MajorType : : ARRAY : <nl> + case MajorType : : MAP : <nl> + case MajorType : : TAG : <nl> + case MajorType : : SIMPLE_VALUE : <nl> + SetError ( Error : : CBOR_UNSUPPORTED_VALUE ) ; <nl> + return ; <nl> + } <nl> + } <nl> + } <nl> + } <nl> + <nl> + void CBORTokenizer : : SetToken ( CBORTokenTag token_tag , <nl> + std : : ptrdiff_t token_byte_length ) { <nl> + token_tag_ = token_tag ; <nl> + token_byte_length_ = token_byte_length ; <nl> + } <nl> + <nl> + void CBORTokenizer : : SetError ( Error error ) { <nl> + token_tag_ = CBORTokenTag : : ERROR_VALUE ; <nl> + status_ . 
error = error ; <nl> + } <nl> + <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + / / cbor : : ParseCBOR - for receiving streaming parser events for CBOR messages <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + <nl> + namespace { <nl> + / / When parsing CBOR , we limit recursion depth for objects and arrays <nl> + / / to this constant . <nl> + static constexpr int kStackLimit = 1000 ; <nl> + <nl> + / / Below are three parsing routines for CBOR , which cover enough <nl> + / / to roundtrip JSON messages . <nl> + bool ParseMap ( int32_t stack_depth , <nl> + CBORTokenizer * tokenizer , <nl> + StreamingParserHandler * out ) ; <nl> + bool ParseArray ( int32_t stack_depth , <nl> + CBORTokenizer * tokenizer , <nl> + StreamingParserHandler * out ) ; <nl> + bool ParseValue ( int32_t stack_depth , <nl> + CBORTokenizer * tokenizer , <nl> + StreamingParserHandler * out ) ; <nl> + <nl> + void ParseUTF16String ( CBORTokenizer * tokenizer , StreamingParserHandler * out ) { <nl> + std : : vector < uint16_t > value ; <nl> + span < uint8_t > rep = tokenizer - > GetString16WireRep ( ) ; <nl> + for ( std : : ptrdiff_t ii = 0 ; ii < rep . size ( ) ; ii + = 2 ) <nl> + value . push_back ( ( rep [ ii + 1 ] < < 8 ) | rep [ ii ] ) ; <nl> + out - > HandleString16 ( span < uint16_t > ( value . data ( ) , value . 
size ( ) ) ) ; <nl> + tokenizer - > Next ( ) ; <nl> + } <nl> + <nl> + bool ParseUTF8String ( CBORTokenizer * tokenizer , StreamingParserHandler * out ) { <nl> + assert ( tokenizer - > TokenTag ( ) = = CBORTokenTag : : STRING8 ) ; <nl> + out - > HandleString8 ( tokenizer - > GetString8 ( ) ) ; <nl> + tokenizer - > Next ( ) ; <nl> + return true ; <nl> + } <nl> + <nl> + bool ParseValue ( int32_t stack_depth , <nl> + CBORTokenizer * tokenizer , <nl> + StreamingParserHandler * out ) { <nl> + if ( stack_depth > kStackLimit ) { <nl> + out - > HandleError ( <nl> + Status { Error : : CBOR_STACK_LIMIT_EXCEEDED , tokenizer - > Status ( ) . pos } ) ; <nl> + return false ; <nl> + } <nl> + / / Skip past the envelope to get to what ' s inside . <nl> + if ( tokenizer - > TokenTag ( ) = = CBORTokenTag : : ENVELOPE ) <nl> + tokenizer - > EnterEnvelope ( ) ; <nl> + switch ( tokenizer - > TokenTag ( ) ) { <nl> + case CBORTokenTag : : ERROR_VALUE : <nl> + out - > HandleError ( tokenizer - > Status ( ) ) ; <nl> + return false ; <nl> + case CBORTokenTag : : DONE : <nl> + out - > HandleError ( Status { Error : : CBOR_UNEXPECTED_EOF_EXPECTED_VALUE , <nl> + tokenizer - > Status ( ) . 
pos } ) ; <nl> + return false ; <nl> + case CBORTokenTag : : TRUE_VALUE : <nl> + out - > HandleBool ( true ) ; <nl> + tokenizer - > Next ( ) ; <nl> + return true ; <nl> + case CBORTokenTag : : FALSE_VALUE : <nl> + out - > HandleBool ( false ) ; <nl> + tokenizer - > Next ( ) ; <nl> + return true ; <nl> + case CBORTokenTag : : NULL_VALUE : <nl> + out - > HandleNull ( ) ; <nl> + tokenizer - > Next ( ) ; <nl> + return true ; <nl> + case CBORTokenTag : : INT32 : <nl> + out - > HandleInt32 ( tokenizer - > GetInt32 ( ) ) ; <nl> + tokenizer - > Next ( ) ; <nl> + return true ; <nl> + case CBORTokenTag : : DOUBLE : <nl> + out - > HandleDouble ( tokenizer - > GetDouble ( ) ) ; <nl> + tokenizer - > Next ( ) ; <nl> + return true ; <nl> + case CBORTokenTag : : STRING8 : <nl> + return ParseUTF8String ( tokenizer , out ) ; <nl> + case CBORTokenTag : : STRING16 : <nl> + ParseUTF16String ( tokenizer , out ) ; <nl> + return true ; <nl> + case CBORTokenTag : : BINARY : { <nl> + out - > HandleBinary ( tokenizer - > GetBinary ( ) ) ; <nl> + tokenizer - > Next ( ) ; <nl> + return true ; <nl> + } <nl> + case CBORTokenTag : : MAP_START : <nl> + return ParseMap ( stack_depth + 1 , tokenizer , out ) ; <nl> + case CBORTokenTag : : ARRAY_START : <nl> + return ParseArray ( stack_depth + 1 , tokenizer , out ) ; <nl> + default : <nl> + out - > HandleError ( <nl> + Status { Error : : CBOR_UNSUPPORTED_VALUE , tokenizer - > Status ( ) . pos } ) ; <nl> + return false ; <nl> + } <nl> + } <nl> + <nl> + / / | bytes | must start with the indefinite length array byte , so basically , <nl> + / / ParseArray may only be called after an indefinite length array has been <nl> + / / detected . <nl> + bool ParseArray ( int32_t stack_depth , <nl> + CBORTokenizer * tokenizer , <nl> + StreamingParserHandler * out ) { <nl> + assert ( tokenizer - > TokenTag ( ) = = CBORTokenTag : : ARRAY_START ) ; <nl> + tokenizer - > Next ( ) ; <nl> + out - > HandleArrayBegin ( ) ; <nl> + while ( tokenizer - > TokenTag ( ) ! 
= CBORTokenTag : : STOP ) { <nl> + if ( tokenizer - > TokenTag ( ) = = CBORTokenTag : : DONE ) { <nl> + out - > HandleError ( <nl> + Status { Error : : CBOR_UNEXPECTED_EOF_IN_ARRAY , tokenizer - > Status ( ) . pos } ) ; <nl> + return false ; <nl> + } <nl> + if ( tokenizer - > TokenTag ( ) = = CBORTokenTag : : ERROR_VALUE ) { <nl> + out - > HandleError ( tokenizer - > Status ( ) ) ; <nl> + return false ; <nl> + } <nl> + / / Parse value . <nl> + if ( ! ParseValue ( stack_depth , tokenizer , out ) ) <nl> + return false ; <nl> + } <nl> + out - > HandleArrayEnd ( ) ; <nl> + tokenizer - > Next ( ) ; <nl> + return true ; <nl> + } <nl> + <nl> + / / | bytes | must start with the indefinite length map byte , so basically , <nl> + / / ParseMap may only be called after an indefinite length map has been <nl> + / / detected . <nl> + bool ParseMap ( int32_t stack_depth , <nl> + CBORTokenizer * tokenizer , <nl> + StreamingParserHandler * out ) { <nl> + assert ( tokenizer - > TokenTag ( ) = = CBORTokenTag : : MAP_START ) ; <nl> + out - > HandleMapBegin ( ) ; <nl> + tokenizer - > Next ( ) ; <nl> + while ( tokenizer - > TokenTag ( ) ! = CBORTokenTag : : STOP ) { <nl> + if ( tokenizer - > TokenTag ( ) = = CBORTokenTag : : DONE ) { <nl> + out - > HandleError ( <nl> + Status { Error : : CBOR_UNEXPECTED_EOF_IN_MAP , tokenizer - > Status ( ) . pos } ) ; <nl> + return false ; <nl> + } <nl> + if ( tokenizer - > TokenTag ( ) = = CBORTokenTag : : ERROR_VALUE ) { <nl> + out - > HandleError ( tokenizer - > Status ( ) ) ; <nl> + return false ; <nl> + } <nl> + / / Parse key . <nl> + if ( tokenizer - > TokenTag ( ) = = CBORTokenTag : : STRING8 ) { <nl> + if ( ! ParseUTF8String ( tokenizer , out ) ) <nl> + return false ; <nl> + } else if ( tokenizer - > TokenTag ( ) = = CBORTokenTag : : STRING16 ) { <nl> + ParseUTF16String ( tokenizer , out ) ; <nl> + } else { <nl> + out - > HandleError ( <nl> + Status { Error : : CBOR_INVALID_MAP_KEY , tokenizer - > Status ( ) . 
pos } ) ; <nl> + return false ; <nl> + } <nl> + / / Parse value . <nl> + if ( ! ParseValue ( stack_depth , tokenizer , out ) ) <nl> + return false ; <nl> + } <nl> + out - > HandleMapEnd ( ) ; <nl> + tokenizer - > Next ( ) ; <nl> + return true ; <nl> + } <nl> + } / / namespace <nl> + <nl> + void ParseCBOR ( span < uint8_t > bytes , StreamingParserHandler * out ) { <nl> + if ( bytes . empty ( ) ) { <nl> + out - > HandleError ( Status { Error : : CBOR_NO_INPUT , 0 } ) ; <nl> + return ; <nl> + } <nl> + if ( bytes [ 0 ] ! = kInitialByteForEnvelope ) { <nl> + out - > HandleError ( Status { Error : : CBOR_INVALID_START_BYTE , 0 } ) ; <nl> + return ; <nl> + } <nl> + CBORTokenizer tokenizer ( bytes ) ; <nl> + if ( tokenizer . TokenTag ( ) = = CBORTokenTag : : ERROR_VALUE ) { <nl> + out - > HandleError ( tokenizer . Status ( ) ) ; <nl> + return ; <nl> + } <nl> + / / We checked for the envelope start byte above , so the tokenizer <nl> + / / must agree here , since it ' s not an error . <nl> + assert ( tokenizer . TokenTag ( ) = = CBORTokenTag : : ENVELOPE ) ; <nl> + tokenizer . EnterEnvelope ( ) ; <nl> + if ( tokenizer . TokenTag ( ) ! = CBORTokenTag : : MAP_START ) { <nl> + out - > HandleError ( <nl> + Status { Error : : CBOR_MAP_START_EXPECTED , tokenizer . Status ( ) . pos } ) ; <nl> + return ; <nl> + } <nl> + if ( ! ParseMap ( / * stack_depth = * / 1 , & tokenizer , out ) ) <nl> + return ; <nl> + if ( tokenizer . TokenTag ( ) = = CBORTokenTag : : DONE ) <nl> + return ; <nl> + if ( tokenizer . TokenTag ( ) = = CBORTokenTag : : ERROR_VALUE ) { <nl> + out - > HandleError ( tokenizer . Status ( ) ) ; <nl> + return ; <nl> + } <nl> + out - > HandleError ( Status { Error : : CBOR_TRAILING_JUNK , tokenizer . Status ( ) . 
pos } ) ; <nl> + } <nl> + } / / namespace cbor <nl> + <nl> + namespace json { <nl> + <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + / / json : : NewJSONEncoder - for encoding streaming parser events as JSON <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + <nl> + namespace { <nl> + / / Prints | value | to | out | with 4 hex digits , most significant chunk first . <nl> + void PrintHex ( uint16_t value , std : : string * out ) { <nl> + for ( int ii = 3 ; ii > = 0 ; - - ii ) { <nl> + int four_bits = 0xf & ( value > > ( 4 * ii ) ) ; <nl> + out - > append ( 1 , four_bits + ( ( four_bits < = 9 ) ? ' 0 ' : ( ' a ' - 10 ) ) ) ; <nl> + } <nl> + } <nl> + <nl> + / / In the writer below , we maintain a stack of State instances . <nl> + / / It is just enough to emit the appropriate delimiters and brackets <nl> + / / in JSON . <nl> + enum class Container { <nl> + / / Used for the top - level , initial state . <nl> + NONE , <nl> + / / Inside a JSON object . <nl> + MAP , <nl> + / / Inside a JSON array . <nl> + ARRAY <nl> + } ; <nl> + class State { <nl> + public : <nl> + explicit State ( Container container ) : container_ ( container ) { } <nl> + void StartElement ( std : : string * out ) { <nl> + assert ( container_ ! = Container : : NONE | | size_ = = 0 ) ; <nl> + if ( size_ ! = 0 ) { <nl> + char delim = ( ! ( size_ & 1 ) | | container_ = = Container : : ARRAY ) ? 
' , ' : ' : ' ; <nl> + out - > append ( 1 , delim ) ; <nl> + } <nl> + + + size_ ; <nl> + } <nl> + Container container ( ) const { return container_ ; } <nl> + <nl> + private : <nl> + Container container_ = Container : : NONE ; <nl> + int size_ = 0 ; <nl> + } ; <nl> + <nl> + constexpr char kBase64Table [ ] = <nl> + " ABCDEFGHIJKLMNOPQRSTUVWXYZ " <nl> + " abcdefghijklmnopqrstuvwxyz0123456789 + / " ; <nl> + <nl> + void Base64Encode ( const span < uint8_t > & in , std : : string * out ) { <nl> + / / The following three cases are based on the tables in the example <nl> + / / section in https : / / en . wikipedia . org / wiki / Base64 . We process three <nl> + / / input bytes at a time , emitting 4 output bytes at a time . <nl> + std : : ptrdiff_t ii = 0 ; <nl> + <nl> + / / While possible , process three input bytes . <nl> + for ( ; ii + 3 < = in . size ( ) ; ii + = 3 ) { <nl> + uint32_t twentyfour_bits = ( in [ ii ] < < 16 ) | ( in [ ii + 1 ] < < 8 ) | in [ ii + 2 ] ; <nl> + out - > push_back ( kBase64Table [ ( twentyfour_bits > > 18 ) ] ) ; <nl> + out - > push_back ( kBase64Table [ ( twentyfour_bits > > 12 ) & 0x3f ] ) ; <nl> + out - > push_back ( kBase64Table [ ( twentyfour_bits > > 6 ) & 0x3f ] ) ; <nl> + out - > push_back ( kBase64Table [ twentyfour_bits & 0x3f ] ) ; <nl> + } <nl> + if ( ii + 2 < = in . size ( ) ) { / / Process two input bytes . <nl> + uint32_t twentyfour_bits = ( in [ ii ] < < 16 ) | ( in [ ii + 1 ] < < 8 ) ; <nl> + out - > push_back ( kBase64Table [ ( twentyfour_bits > > 18 ) ] ) ; <nl> + out - > push_back ( kBase64Table [ ( twentyfour_bits > > 12 ) & 0x3f ] ) ; <nl> + out - > push_back ( kBase64Table [ ( twentyfour_bits > > 6 ) & 0x3f ] ) ; <nl> + out - > push_back ( ' = ' ) ; / / Emit padding . <nl> + return ; <nl> + } <nl> + if ( ii + 1 < = in . size ( ) ) { / / Process a single input byte . 
<nl> + uint32_t twentyfour_bits = ( in [ ii ] < < 16 ) ; <nl> + out - > push_back ( kBase64Table [ ( twentyfour_bits > > 18 ) ] ) ; <nl> + out - > push_back ( kBase64Table [ ( twentyfour_bits > > 12 ) & 0x3f ] ) ; <nl> + out - > push_back ( ' = ' ) ; / / Emit padding . <nl> + out - > push_back ( ' = ' ) ; / / Emit padding . <nl> + } <nl> + } <nl> + <nl> + / / Implements a handler for JSON parser events to emit a JSON string . <nl> + class JSONEncoder : public StreamingParserHandler { <nl> + public : <nl> + JSONEncoder ( const Platform * platform , std : : string * out , Status * status ) <nl> + : platform_ ( platform ) , out_ ( out ) , status_ ( status ) { <nl> + * status_ = Status ( ) ; <nl> + state_ . emplace ( Container : : NONE ) ; <nl> + } <nl> + <nl> + void HandleMapBegin ( ) override { <nl> + if ( ! status_ - > ok ( ) ) <nl> + return ; <nl> + assert ( ! state_ . empty ( ) ) ; <nl> + state_ . top ( ) . StartElement ( out_ ) ; <nl> + state_ . emplace ( Container : : MAP ) ; <nl> + out_ - > append ( " { " ) ; <nl> + } <nl> + <nl> + void HandleMapEnd ( ) override { <nl> + if ( ! status_ - > ok ( ) ) <nl> + return ; <nl> + assert ( state_ . size ( ) > = 2 & & state_ . top ( ) . container ( ) = = Container : : MAP ) ; <nl> + state_ . pop ( ) ; <nl> + out_ - > append ( " } " ) ; <nl> + } <nl> + <nl> + void HandleArrayBegin ( ) override { <nl> + if ( ! status_ - > ok ( ) ) <nl> + return ; <nl> + state_ . top ( ) . StartElement ( out_ ) ; <nl> + state_ . emplace ( Container : : ARRAY ) ; <nl> + out_ - > append ( " [ " ) ; <nl> + } <nl> + <nl> + void HandleArrayEnd ( ) override { <nl> + if ( ! status_ - > ok ( ) ) <nl> + return ; <nl> + assert ( state_ . size ( ) > = 2 & & state_ . top ( ) . container ( ) = = Container : : ARRAY ) ; <nl> + state_ . pop ( ) ; <nl> + out_ - > append ( " ] " ) ; <nl> + } <nl> + <nl> + void HandleString16 ( span < uint16_t > chars ) override { <nl> + if ( ! status_ - > ok ( ) ) <nl> + return ; <nl> + state_ . top ( ) . 
StartElement ( out_ ) ; <nl> + out_ - > append ( " \ " " ) ; <nl> + for ( const uint16_t ch : chars ) { <nl> + if ( ch = = ' " ' ) { <nl> + out_ - > append ( " \ \ \ " " ) ; <nl> + } else if ( ch = = ' \ \ ' ) { <nl> + out_ - > append ( " \ \ \ \ " ) ; <nl> + } else if ( ch = = ' \ b ' ) { <nl> + out_ - > append ( " \ \ b " ) ; <nl> + } else if ( ch = = ' \ f ' ) { <nl> + out_ - > append ( " \ \ f " ) ; <nl> + } else if ( ch = = ' \ n ' ) { <nl> + out_ - > append ( " \ \ n " ) ; <nl> + } else if ( ch = = ' \ r ' ) { <nl> + out_ - > append ( " \ \ r " ) ; <nl> + } else if ( ch = = ' \ t ' ) { <nl> + out_ - > append ( " \ \ t " ) ; <nl> + } else if ( ch > = 32 & & ch < = 126 ) { <nl> + out_ - > append ( 1 , ch ) ; <nl> + } else { <nl> + out_ - > append ( " \ \ u " ) ; <nl> + PrintHex ( ch , out_ ) ; <nl> + } <nl> + } <nl> + out_ - > append ( " \ " " ) ; <nl> + } <nl> + <nl> + void HandleString8 ( span < uint8_t > chars ) override { <nl> + if ( ! status_ - > ok ( ) ) <nl> + return ; <nl> + state_ . top ( ) . StartElement ( out_ ) ; <nl> + out_ - > append ( " \ " " ) ; <nl> + for ( std : : ptrdiff_t ii = 0 ; ii < chars . 
size ( ) ; + + ii ) { <nl> + uint8_t c = chars [ ii ] ; <nl> + if ( c = = ' " ' ) { <nl> + out_ - > append ( " \ \ \ " " ) ; <nl> + } else if ( c = = ' \ \ ' ) { <nl> + out_ - > append ( " \ \ \ \ " ) ; <nl> + } else if ( c = = ' \ b ' ) { <nl> + out_ - > append ( " \ \ b " ) ; <nl> + } else if ( c = = ' \ f ' ) { <nl> + out_ - > append ( " \ \ f " ) ; <nl> + } else if ( c = = ' \ n ' ) { <nl> + out_ - > append ( " \ \ n " ) ; <nl> + } else if ( c = = ' \ r ' ) { <nl> + out_ - > append ( " \ \ r " ) ; <nl> + } else if ( c = = ' \ t ' ) { <nl> + out_ - > append ( " \ \ t " ) ; <nl> + } else if ( c > = 32 & & c < = 126 ) { <nl> + out_ - > append ( 1 , c ) ; <nl> + } else if ( c < 32 ) { <nl> + out_ - > append ( " \ \ u " ) ; <nl> + PrintHex ( static_cast < uint16_t > ( c ) , out_ ) ; <nl> + } else { <nl> + / / Inspect the leading byte to figure out how long the utf8 <nl> + / / byte sequence is ; while doing this initialize | codepoint | <nl> + / / with the first few bits . <nl> + / / See table in : https : / / en . wikipedia . org / wiki / UTF - 8 <nl> + / / byte one is 110x xxxx - > 2 byte utf8 sequence <nl> + / / byte one is 1110 xxxx - > 3 byte utf8 sequence <nl> + / / byte one is 1111 0xxx - > 4 byte utf8 sequence <nl> + uint32_t codepoint ; <nl> + int num_bytes_left ; <nl> + if ( ( c & 0xe0 ) = = 0xc0 ) { / / 2 byte utf8 sequence <nl> + num_bytes_left = 1 ; <nl> + codepoint = c & 0x1f ; <nl> + } else if ( ( c & 0xf0 ) = = 0xe0 ) { / / 3 byte utf8 sequence <nl> + num_bytes_left = 2 ; <nl> + codepoint = c & 0x0f ; <nl> + } else if ( ( c & 0xf8 ) = = 0xf0 ) { / / 4 byte utf8 sequence <nl> + codepoint = c & 0x07 ; <nl> + num_bytes_left = 3 ; <nl> + } else { <nl> + continue ; / / invalid leading byte <nl> + } <nl> + <nl> + / / If we have enough bytes in our input , decode the remaining ones <nl> + / / belonging to this Unicode character into | codepoint | . <nl> + if ( ii + num_bytes_left > chars . 
size ( ) ) <nl> + continue ; <nl> + while ( num_bytes_left > 0 ) { <nl> + c = chars [ + + ii ] ; <nl> + - - num_bytes_left ; <nl> + / / Check the next byte is a continuation byte , that is 10xx xxxx . <nl> + if ( ( c & 0xc0 ) ! = 0x80 ) <nl> + continue ; <nl> + codepoint = ( codepoint < < 6 ) | ( c & 0x3f ) ; <nl> + } <nl> + <nl> + / / Disallow overlong encodings for ascii characters , as these <nl> + / / would include " and other characters significant to JSON <nl> + / / string termination / control . <nl> + if ( codepoint < 0x7f ) <nl> + continue ; <nl> + / / Invalid in UTF8 , and can ' t be represented in UTF16 anyway . <nl> + if ( codepoint > 0x10ffff ) <nl> + continue ; <nl> + <nl> + / / So , now we transcode to UTF16 , <nl> + / / using the math described at https : / / en . wikipedia . org / wiki / UTF - 16 , <nl> + / / for either one or two 16 bit characters . <nl> + if ( codepoint < 0xffff ) { <nl> + out_ - > append ( " \ \ u " ) ; <nl> + PrintHex ( static_cast < uint16_t > ( codepoint ) , out_ ) ; <nl> + continue ; <nl> + } <nl> + codepoint - = 0x10000 ; <nl> + / / high surrogate <nl> + out_ - > append ( " \ \ u " ) ; <nl> + PrintHex ( static_cast < uint16_t > ( ( codepoint > > 10 ) + 0xd800 ) , out_ ) ; <nl> + / / low surrogate <nl> + out_ - > append ( " \ \ u " ) ; <nl> + PrintHex ( static_cast < uint16_t > ( ( codepoint & 0x3ff ) + 0xdc00 ) , out_ ) ; <nl> + } <nl> + } <nl> + out_ - > append ( " \ " " ) ; <nl> + } <nl> + <nl> + void HandleBinary ( span < uint8_t > bytes ) override { <nl> + if ( ! status_ - > ok ( ) ) <nl> + return ; <nl> + state_ . top ( ) . StartElement ( out_ ) ; <nl> + out_ - > append ( " \ " " ) ; <nl> + Base64Encode ( bytes , out_ ) ; <nl> + out_ - > append ( " \ " " ) ; <nl> + } <nl> + <nl> + void HandleDouble ( double value ) override { <nl> + if ( ! status_ - > ok ( ) ) <nl> + return ; <nl> + state_ . top ( ) . 
StartElement ( out_ ) ; <nl> + std : : unique_ptr < char [ ] > str_value = platform_ - > DToStr ( value ) ; <nl> + <nl> + / / DToStr may fail to emit a 0 before the decimal dot . E . g . this is <nl> + / / the case in base : : NumberToString in Chromium ( which is based on <nl> + / / dmg_fp ) . So , much like <nl> + / / https : / / cs . chromium . org / chromium / src / base / json / json_writer . cc <nl> + / / we probe for this and emit the leading 0 anyway if necessary . <nl> + const char * chars = str_value . get ( ) ; <nl> + if ( chars [ 0 ] = = ' . ' ) { <nl> + out_ - > append ( " 0 " ) ; <nl> + } else if ( chars [ 0 ] = = ' - ' & & chars [ 1 ] = = ' . ' ) { <nl> + out_ - > append ( " - 0 " ) ; <nl> + + + chars ; <nl> + } <nl> + out_ - > append ( chars ) ; <nl> + } <nl> + <nl> + void HandleInt32 ( int32_t value ) override { <nl> + if ( ! status_ - > ok ( ) ) <nl> + return ; <nl> + state_ . top ( ) . StartElement ( out_ ) ; <nl> + out_ - > append ( std : : to_string ( value ) ) ; <nl> + } <nl> + <nl> + void HandleBool ( bool value ) override { <nl> + if ( ! status_ - > ok ( ) ) <nl> + return ; <nl> + state_ . top ( ) . StartElement ( out_ ) ; <nl> + out_ - > append ( value ? " true " : " false " ) ; <nl> + } <nl> + <nl> + void HandleNull ( ) override { <nl> + if ( ! status_ - > ok ( ) ) <nl> + return ; <nl> + state_ . top ( ) . StartElement ( out_ ) ; <nl> + out_ - > append ( " null " ) ; <nl> + } <nl> + <nl> + void HandleError ( Status error ) override { <nl> + assert ( ! error . 
ok ( ) ) ; <nl> + * status_ = error ; <nl> + out_ - > clear ( ) ; <nl> + } <nl> + <nl> + private : <nl> + const Platform * platform_ ; <nl> + std : : string * out_ ; <nl> + Status * status_ ; <nl> + std : : stack < State > state_ ; <nl> + } ; <nl> + } / / namespace <nl> + <nl> + std : : unique_ptr < StreamingParserHandler > NewJSONEncoder ( const Platform * platform , <nl> + std : : string * out , <nl> + Status * status ) { <nl> + return std : : unique_ptr < StreamingParserHandler > ( <nl> + new JSONEncoder ( platform , out , status ) ) ; <nl> + } <nl> + <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + / / json : : ParseJSON - for receiving streaming parser events for JSON . <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + <nl> + namespace { <nl> + const int kStackLimit = 1000 ; <nl> + <nl> + enum Token { <nl> + ObjectBegin , <nl> + ObjectEnd , <nl> + ArrayBegin , <nl> + ArrayEnd , <nl> + StringLiteral , <nl> + Number , <nl> + BoolTrue , <nl> + BoolFalse , <nl> + NullToken , <nl> + ListSeparator , <nl> + ObjectPairSeparator , <nl> + InvalidToken , <nl> + NoInput <nl> + } ; <nl> + <nl> + const char * const kNullString = " null " ; <nl> + const char * const kTrueString = " true " ; <nl> + const char * const kFalseString = " false " ; <nl> + <nl> + template < typename Char > <nl> + class JsonParser { <nl> + public : <nl> + JsonParser ( const Platform * platform , StreamingParserHandler * handler ) <nl> + : platform_ ( platform ) , handler_ ( handler ) { } <nl> + <nl> + void Parse ( const Char * start , std : : size_t length ) { <nl> + start_pos_ = start ; <nl> + const Char * end = start + length ; <nl> + const Char * tokenEnd ; <nl> + ParseValue ( start , end , & tokenEnd , 0 ) ; <nl> + if ( tokenEnd ! 
= end ) { <nl> + HandleError ( Error : : JSON_PARSER_UNPROCESSED_INPUT_REMAINS , tokenEnd ) ; <nl> + } <nl> + } <nl> + <nl> + private : <nl> + bool CharsToDouble ( const uint16_t * chars , <nl> + std : : size_t length , <nl> + double * result ) { <nl> + std : : string buffer ; <nl> + buffer . reserve ( length + 1 ) ; <nl> + for ( std : : size_t ii = 0 ; ii < length ; + + ii ) { <nl> + bool is_ascii = ! ( chars [ ii ] & ~ 0x7F ) ; <nl> + if ( ! is_ascii ) <nl> + return false ; <nl> + buffer . push_back ( static_cast < char > ( chars [ ii ] ) ) ; <nl> + } <nl> + return platform_ - > StrToD ( buffer . c_str ( ) , result ) ; <nl> + } <nl> + <nl> + bool CharsToDouble ( const uint8_t * chars , std : : size_t length , double * result ) { <nl> + std : : string buffer ( reinterpret_cast < const char * > ( chars ) , length ) ; <nl> + return platform_ - > StrToD ( buffer . c_str ( ) , result ) ; <nl> + } <nl> + <nl> + static bool ParseConstToken ( const Char * start , <nl> + const Char * end , <nl> + const Char * * token_end , <nl> + const char * token ) { <nl> + / / | token | is \ 0 terminated , it ' s one of the constants at top of the file . <nl> + while ( start < end & & * token ! = ' \ 0 ' & & * start + + = = * token + + ) { <nl> + } <nl> + if ( * token ! = ' \ 0 ' ) <nl> + return false ; <nl> + * token_end = start ; <nl> + return true ; <nl> + } <nl> + <nl> + static bool ReadInt ( const Char * start , <nl> + const Char * end , <nl> + const Char * * token_end , <nl> + bool allow_leading_zeros ) { <nl> + if ( start = = end ) <nl> + return false ; <nl> + bool has_leading_zero = ' 0 ' = = * start ; <nl> + int length = 0 ; <nl> + while ( start < end & & ' 0 ' < = * start & & * start < = ' 9 ' ) { <nl> + + + start ; <nl> + + + length ; <nl> + } <nl> + if ( ! length ) <nl> + return false ; <nl> + if ( ! 
allow_leading_zeros & & length > 1 & & has_leading_zero ) <nl> + return false ; <nl> + * token_end = start ; <nl> + return true ; <nl> + } <nl> + <nl> + static bool ParseNumberToken ( const Char * start , <nl> + const Char * end , <nl> + const Char * * token_end ) { <nl> + / / We just grab the number here . We validate the size in DecodeNumber . <nl> + / / According to RFC4627 , a valid number is : [ minus ] int [ frac ] [ exp ] <nl> + if ( start = = end ) <nl> + return false ; <nl> + Char c = * start ; <nl> + if ( ' - ' = = c ) <nl> + + + start ; <nl> + <nl> + if ( ! ReadInt ( start , end , & start , / * allow_leading_zeros = * / false ) ) <nl> + return false ; <nl> + if ( start = = end ) { <nl> + * token_end = start ; <nl> + return true ; <nl> + } <nl> + <nl> + / / Optional fraction part <nl> + c = * start ; <nl> + if ( ' . ' = = c ) { <nl> + + + start ; <nl> + if ( ! ReadInt ( start , end , & start , / * allow_leading_zeros = * / true ) ) <nl> + return false ; <nl> + if ( start = = end ) { <nl> + * token_end = start ; <nl> + return true ; <nl> + } <nl> + c = * start ; <nl> + } <nl> + <nl> + / / Optional exponent part <nl> + if ( ' e ' = = c | | ' E ' = = c ) { <nl> + + + start ; <nl> + if ( start = = end ) <nl> + return false ; <nl> + c = * start ; <nl> + if ( ' - ' = = c | | ' + ' = = c ) { <nl> + + + start ; <nl> + if ( start = = end ) <nl> + return false ; <nl> + } <nl> + if ( ! ReadInt ( start , end , & start , / * allow_leading_zeros = * / true ) ) <nl> + return false ; <nl> + } <nl> + <nl> + * token_end = start ; <nl> + return true ; <nl> + } <nl> + <nl> + static bool ReadHexDigits ( const Char * start , <nl> + const Char * end , <nl> + const Char * * token_end , <nl> + int digits ) { <nl> + if ( end - start < digits ) <nl> + return false ; <nl> + for ( int i = 0 ; i < digits ; + + i ) { <nl> + Char c = * start + + ; <nl> + if ( ! 
( ( ' 0 ' < = c & & c < = ' 9 ' ) | | ( ' a ' < = c & & c < = ' f ' ) | | <nl> + ( ' A ' < = c & & c < = ' F ' ) ) ) <nl> + return false ; <nl> + } <nl> + * token_end = start ; <nl> + return true ; <nl> + } <nl> + <nl> + static bool ParseStringToken ( const Char * start , <nl> + const Char * end , <nl> + const Char * * token_end ) { <nl> + while ( start < end ) { <nl> + Char c = * start + + ; <nl> + if ( ' \ \ ' = = c ) { <nl> + if ( start = = end ) <nl> + return false ; <nl> + c = * start + + ; <nl> + / / Make sure the escaped char is valid . <nl> + switch ( c ) { <nl> + case ' x ' : <nl> + if ( ! ReadHexDigits ( start , end , & start , 2 ) ) <nl> + return false ; <nl> + break ; <nl> + case ' u ' : <nl> + if ( ! ReadHexDigits ( start , end , & start , 4 ) ) <nl> + return false ; <nl> + break ; <nl> + case ' \ \ ' : <nl> + case ' / ' : <nl> + case ' b ' : <nl> + case ' f ' : <nl> + case ' n ' : <nl> + case ' r ' : <nl> + case ' t ' : <nl> + case ' v ' : <nl> + case ' " ' : <nl> + break ; <nl> + default : <nl> + return false ; <nl> + } <nl> + } else if ( ' " ' = = c ) { <nl> + * token_end = start ; <nl> + return true ; <nl> + } <nl> + } <nl> + return false ; <nl> + } <nl> + <nl> + static bool SkipComment ( const Char * start , <nl> + const Char * end , <nl> + const Char * * comment_end ) { <nl> + if ( start = = end ) <nl> + return false ; <nl> + <nl> + if ( * start ! = ' / ' | | start + 1 > = end ) <nl> + return false ; <nl> + + + start ; <nl> + <nl> + if ( * start = = ' / ' ) { <nl> + / / Single line comment , read to newline . <nl> + for ( + + start ; start < end ; + + start ) { <nl> + if ( * start = = ' \ n ' | | * start = = ' \ r ' ) { <nl> + * comment_end = start + 1 ; <nl> + return true ; <nl> + } <nl> + } <nl> + * comment_end = end ; <nl> + / / Comment reaches end - of - input , which is fine . <nl> + return true ; <nl> + } <nl> + <nl> + if ( * start = = ' * ' ) { <nl> + Char previous = ' \ 0 ' ; <nl> + / / Block comment , read until end marker . 
<nl> + for ( + + start ; start < end ; previous = * start + + ) { <nl> + if ( previous = = ' * ' & & * start = = ' / ' ) { <nl> + * comment_end = start + 1 ; <nl> + return true ; <nl> + } <nl> + } <nl> + / / Block comment must close before end - of - input . <nl> + return false ; <nl> + } <nl> + <nl> + return false ; <nl> + } <nl> + <nl> + static bool IsSpaceOrNewLine ( Char c ) { <nl> + / / \ v = vertical tab ; \ f = form feed page break . <nl> + return c = = ' ' | | c = = ' \ n ' | | c = = ' \ v ' | | c = = ' \ f ' | | c = = ' \ r ' | | <nl> + c = = ' \ t ' ; <nl> + } <nl> + <nl> + static void SkipWhitespaceAndComments ( const Char * start , <nl> + const Char * end , <nl> + const Char * * whitespace_end ) { <nl> + while ( start < end ) { <nl> + if ( IsSpaceOrNewLine ( * start ) ) { <nl> + + + start ; <nl> + } else if ( * start = = ' / ' ) { <nl> + const Char * comment_end ; <nl> + if ( ! SkipComment ( start , end , & comment_end ) ) <nl> + break ; <nl> + start = comment_end ; <nl> + } else { <nl> + break ; <nl> + } <nl> + } <nl> + * whitespace_end = start ; <nl> + } <nl> + <nl> + static Token ParseToken ( const Char * start , <nl> + const Char * end , <nl> + const Char * * tokenStart , <nl> + const Char * * token_end ) { <nl> + SkipWhitespaceAndComments ( start , end , tokenStart ) ; <nl> + start = * tokenStart ; <nl> + <nl> + if ( start = = end ) <nl> + return NoInput ; <nl> + <nl> + switch ( * start ) { <nl> + case ' n ' : <nl> + if ( ParseConstToken ( start , end , token_end , kNullString ) ) <nl> + return NullToken ; <nl> + break ; <nl> + case ' t ' : <nl> + if ( ParseConstToken ( start , end , token_end , kTrueString ) ) <nl> + return BoolTrue ; <nl> + break ; <nl> + case ' f ' : <nl> + if ( ParseConstToken ( start , end , token_end , kFalseString ) ) <nl> + return BoolFalse ; <nl> + break ; <nl> + case ' [ ' : <nl> + * token_end = start + 1 ; <nl> + return ArrayBegin ; <nl> + case ' ] ' : <nl> + * token_end = start + 1 ; <nl> + return ArrayEnd ; <nl> + case
' , ' : <nl> + * token_end = start + 1 ; <nl> + return ListSeparator ; <nl> + case ' { ' : <nl> + * token_end = start + 1 ; <nl> + return ObjectBegin ; <nl> + case ' } ' : <nl> + * token_end = start + 1 ; <nl> + return ObjectEnd ; <nl> + case ' : ' : <nl> + * token_end = start + 1 ; <nl> + return ObjectPairSeparator ; <nl> + case ' 0 ' : <nl> + case ' 1 ' : <nl> + case ' 2 ' : <nl> + case ' 3 ' : <nl> + case ' 4 ' : <nl> + case ' 5 ' : <nl> + case ' 6 ' : <nl> + case ' 7 ' : <nl> + case ' 8 ' : <nl> + case ' 9 ' : <nl> + case ' - ' : <nl> + if ( ParseNumberToken ( start , end , token_end ) ) <nl> + return Number ; <nl> + break ; <nl> + case ' " ' : <nl> + if ( ParseStringToken ( start + 1 , end , token_end ) ) <nl> + return StringLiteral ; <nl> + break ; <nl> + } <nl> + return InvalidToken ; <nl> + } <nl> + <nl> + static int HexToInt ( Char c ) { <nl> + if ( ' 0 ' < = c & & c < = ' 9 ' ) <nl> + return c - ' 0 ' ; <nl> + if ( ' A ' < = c & & c < = ' F ' ) <nl> + return c - ' A ' + 10 ; <nl> + if ( ' a ' < = c & & c < = ' f ' ) <nl> + return c - ' a ' + 10 ; <nl> + assert ( false ) ; / / Unreachable . <nl> + return 0 ; <nl> + } <nl> + <nl> + static bool DecodeString ( const Char * start , <nl> + const Char * end , <nl> + std : : vector < uint16_t > * output ) { <nl> + if ( start = = end ) <nl> + return true ; <nl> + if ( start > end ) <nl> + return false ; <nl> + output - > reserve ( end - start ) ; <nl> + while ( start < end ) { <nl> + uint16_t c = * start + + ; <nl> + / / If the | Char | we ' re dealing with is really a byte , then <nl> + / / we have utf8 here , and we need to check for multibyte characters <nl> + / / and transcode them to utf16 ( either one or two utf16 chars ) . <nl> + if ( sizeof ( Char ) = = sizeof ( uint8_t ) & & c > = 0x7f ) { <nl> + / / Inspect the leading byte to figure out how long the utf8 <nl> + / / byte sequence is ; while doing this initialize | codepoint | <nl> + / / with the first few bits . <nl> + / / See table in : https : / / en . 
wikipedia . org / wiki / UTF - 8 <nl> + / / byte one is 110x xxxx - > 2 byte utf8 sequence <nl> + / / byte one is 1110 xxxx - > 3 byte utf8 sequence <nl> + / / byte one is 1111 0xxx - > 4 byte utf8 sequence <nl> + uint32_t codepoint ; <nl> + int num_bytes_left ; <nl> + if ( ( c & 0xe0 ) = = 0xc0 ) { / / 2 byte utf8 sequence <nl> + num_bytes_left = 1 ; <nl> + codepoint = c & 0x1f ; <nl> + } else if ( ( c & 0xf0 ) = = 0xe0 ) { / / 3 byte utf8 sequence <nl> + num_bytes_left = 2 ; <nl> + codepoint = c & 0x0f ; <nl> + } else if ( ( c & 0xf8 ) = = 0xf0 ) { / / 4 byte utf8 sequence <nl> + codepoint = c & 0x07 ; <nl> + num_bytes_left = 3 ; <nl> + } else { <nl> + return false ; / / invalid leading byte <nl> + } <nl> + <nl> + / / If we have enough bytes in our input , decode the remaining ones <nl> + / / belonging to this Unicode character into | codepoint | . <nl> + if ( start + num_bytes_left > end ) <nl> + return false ; <nl> + while ( num_bytes_left > 0 ) { <nl> + c = * start + + ; <nl> + - - num_bytes_left ; <nl> + / / Check the next byte is a continuation byte , that is 10xx xxxx . <nl> + if ( ( c & 0xc0 ) ! = 0x80 ) <nl> + return false ; <nl> + codepoint = ( codepoint < < 6 ) | ( c & 0x3f ) ; <nl> + } <nl> + <nl> + / / Disallow overlong encodings for ascii characters , as these <nl> + / / would include " and other characters significant to JSON <nl> + / / string termination / control . <nl> + if ( codepoint < 0x7f ) <nl> + return false ; <nl> + / / Invalid in UTF8 , and can ' t be represented in UTF16 anyway . <nl> + if ( codepoint > 0x10ffff ) <nl> + return false ; <nl> + <nl> + / / So , now we transcode to UTF16 , <nl> + / / using the math described at https : / / en . wikipedia . org / wiki / UTF - 16 , <nl> + / / for either one or two 16 bit characters .
<nl> + if ( codepoint < 0xffff ) { <nl> + output - > push_back ( codepoint ) ; <nl> + continue ; <nl> + } <nl> + codepoint - = 0x10000 ; <nl> + output - > push_back ( ( codepoint > > 10 ) + 0xd800 ) ; / / high surrogate <nl> + output - > push_back ( ( codepoint & 0x3ff ) + 0xdc00 ) ; / / low surrogate <nl> + continue ; <nl> + } <nl> + if ( ' \ \ ' ! = c ) { <nl> + output - > push_back ( c ) ; <nl> + continue ; <nl> + } <nl> + if ( start = = end ) <nl> + return false ; <nl> + c = * start + + ; <nl> + <nl> + if ( c = = ' x ' ) { <nl> + / / \ x is not supported . <nl> + return false ; <nl> + } <nl> + <nl> + switch ( c ) { <nl> + case ' " ' : <nl> + case ' / ' : <nl> + case ' \ \ ' : <nl> + break ; <nl> + case ' b ' : <nl> + c = ' \ b ' ; <nl> + break ; <nl> + case ' f ' : <nl> + c = ' \ f ' ; <nl> + break ; <nl> + case ' n ' : <nl> + c = ' \ n ' ; <nl> + break ; <nl> + case ' r ' : <nl> + c = ' \ r ' ; <nl> + break ; <nl> + case ' t ' : <nl> + c = ' \ t ' ; <nl> + break ; <nl> + case ' v ' : <nl> + c = ' \ v ' ; <nl> + break ; <nl> + case ' u ' : <nl> + c = ( HexToInt ( * start ) < < 12 ) + ( HexToInt ( * ( start + 1 ) ) < < 8 ) + <nl> + ( HexToInt ( * ( start + 2 ) ) < < 4 ) + HexToInt ( * ( start + 3 ) ) ; <nl> + start + = 4 ; <nl> + break ; <nl> + default : <nl> + return false ; <nl> + } <nl> + output - > push_back ( c ) ; <nl> + } <nl> + return true ; <nl> + } <nl> + <nl> + void ParseValue ( const Char * start , <nl> + const Char * end , <nl> + const Char * * value_token_end , <nl> + int depth ) { <nl> + if ( depth > kStackLimit ) { <nl> + HandleError ( Error : : JSON_PARSER_STACK_LIMIT_EXCEEDED , start ) ; <nl> + return ; <nl> + } <nl> + const Char * token_start ; <nl> + const Char * token_end ; <nl> + Token token = ParseToken ( start , end , & token_start , & token_end ) ; <nl> + switch ( token ) { <nl> + case NoInput : <nl> + HandleError ( Error : : JSON_PARSER_NO_INPUT , token_start ) ; <nl> + return ; <nl> + case InvalidToken : <nl> + HandleError ( Error : : 
JSON_PARSER_INVALID_TOKEN , token_start ) ; <nl> + return ; <nl> + case NullToken : <nl> + handler_ - > HandleNull ( ) ; <nl> + break ; <nl> + case BoolTrue : <nl> + handler_ - > HandleBool ( true ) ; <nl> + break ; <nl> + case BoolFalse : <nl> + handler_ - > HandleBool ( false ) ; <nl> + break ; <nl> + case Number : { <nl> + double value ; <nl> + if ( ! CharsToDouble ( token_start , token_end - token_start , & value ) ) { <nl> + HandleError ( Error : : JSON_PARSER_INVALID_NUMBER , token_start ) ; <nl> + return ; <nl> + } <nl> + if ( value > = std : : numeric_limits < int32_t > : : min ( ) & & <nl> + value < = std : : numeric_limits < int32_t > : : max ( ) & & <nl> + static_cast < int32_t > ( value ) = = value ) <nl> + handler_ - > HandleInt32 ( static_cast < int32_t > ( value ) ) ; <nl> + else <nl> + handler_ - > HandleDouble ( value ) ; <nl> + break ; <nl> + } <nl> + case StringLiteral : { <nl> + std : : vector < uint16_t > value ; <nl> + bool ok = DecodeString ( token_start + 1 , token_end - 1 , & value ) ; <nl> + if ( ! ok ) { <nl> + HandleError ( Error : : JSON_PARSER_INVALID_STRING , token_start ) ; <nl> + return ; <nl> + } <nl> + handler_ - > HandleString16 ( span < uint16_t > ( value . data ( ) , value . size ( ) ) ) ; <nl> + break ; <nl> + } <nl> + case ArrayBegin : { <nl> + handler_ - > HandleArrayBegin ( ) ; <nl> + start = token_end ; <nl> + token = ParseToken ( start , end , & token_start , & token_end ) ; <nl> + while ( token ! = ArrayEnd ) { <nl> + ParseValue ( start , end , & token_end , depth + 1 ) ; <nl> + if ( error_ ) <nl> + return ; <nl> + <nl> + / / After a list value , we expect a comma or the end of the list . 
<nl> + start = token_end ; <nl> + token = ParseToken ( start , end , & token_start , & token_end ) ; <nl> + if ( token = = ListSeparator ) { <nl> + start = token_end ; <nl> + token = ParseToken ( start , end , & token_start , & token_end ) ; <nl> + if ( token = = ArrayEnd ) { <nl> + HandleError ( Error : : JSON_PARSER_UNEXPECTED_ARRAY_END , token_start ) ; <nl> + return ; <nl> + } <nl> + } else if ( token ! = ArrayEnd ) { <nl> + / / Unexpected value after list value . Bail out . <nl> + HandleError ( Error : : JSON_PARSER_COMMA_OR_ARRAY_END_EXPECTED , <nl> + token_start ) ; <nl> + return ; <nl> + } <nl> + } <nl> + handler_ - > HandleArrayEnd ( ) ; <nl> + break ; <nl> + } <nl> + case ObjectBegin : { <nl> + handler_ - > HandleMapBegin ( ) ; <nl> + start = token_end ; <nl> + token = ParseToken ( start , end , & token_start , & token_end ) ; <nl> + while ( token ! = ObjectEnd ) { <nl> + if ( token ! = StringLiteral ) { <nl> + HandleError ( Error : : JSON_PARSER_STRING_LITERAL_EXPECTED , <nl> + token_start ) ; <nl> + return ; <nl> + } <nl> + std : : vector < uint16_t > key ; <nl> + if ( ! DecodeString ( token_start + 1 , token_end - 1 , & key ) ) { <nl> + HandleError ( Error : : JSON_PARSER_INVALID_STRING , token_start ) ; <nl> + return ; <nl> + } <nl> + handler_ - > HandleString16 ( span < uint16_t > ( key . data ( ) , key . size ( ) ) ) ; <nl> + start = token_end ; <nl> + <nl> + token = ParseToken ( start , end , & token_start , & token_end ) ; <nl> + if ( token ! = ObjectPairSeparator ) { <nl> + HandleError ( Error : : JSON_PARSER_COLON_EXPECTED , token_start ) ; <nl> + return ; <nl> + } <nl> + start = token_end ; <nl> + <nl> + ParseValue ( start , end , & token_end , depth + 1 ) ; <nl> + if ( error_ ) <nl> + return ; <nl> + start = token_end ; <nl> + <nl> + / / After a key / value pair , we expect a comma or the end of the <nl> + / / object . 
<nl> + token = ParseToken ( start , end , & token_start , & token_end ) ; <nl> + if ( token = = ListSeparator ) { <nl> + start = token_end ; <nl> + token = ParseToken ( start , end , & token_start , & token_end ) ; <nl> + if ( token = = ObjectEnd ) { <nl> + HandleError ( Error : : JSON_PARSER_UNEXPECTED_MAP_END , token_start ) ; <nl> + return ; <nl> + } <nl> + } else if ( token ! = ObjectEnd ) { <nl> + / / Unexpected value after last object value . Bail out . <nl> + HandleError ( Error : : JSON_PARSER_COMMA_OR_MAP_END_EXPECTED , <nl> + token_start ) ; <nl> + return ; <nl> + } <nl> + } <nl> + handler_ - > HandleMapEnd ( ) ; <nl> + break ; <nl> + } <nl> + <nl> + default : <nl> + / / We got a token that ' s not a value . <nl> + HandleError ( Error : : JSON_PARSER_VALUE_EXPECTED , token_start ) ; <nl> + return ; <nl> + } <nl> + <nl> + SkipWhitespaceAndComments ( token_end , end , value_token_end ) ; <nl> + } <nl> + <nl> + void HandleError ( Error error , const Char * pos ) { <nl> + assert ( error ! = Error : : OK ) ; <nl> + if ( ! error_ ) { <nl> + handler_ - > HandleError ( Status { error , pos - start_pos_ } ) ; <nl> + error_ = true ; <nl> + } <nl> + } <nl> + <nl> + const Char * start_pos_ = nullptr ; <nl> + bool error_ = false ; <nl> + const Platform * platform_ ; <nl> + StreamingParserHandler * handler_ ; <nl> + } ; <nl> + } / / namespace <nl> + <nl> + void ParseJSON ( const Platform * platform , <nl> + span < uint8_t > chars , <nl> + StreamingParserHandler * handler ) { <nl> + JsonParser < uint8_t > parser ( platform , handler ) ; <nl> + parser . Parse ( chars . data ( ) , chars . size ( ) ) ; <nl> + } <nl> + <nl> + void ParseJSON ( const Platform * platform , <nl> + span < uint16_t > chars , <nl> + StreamingParserHandler * handler ) { <nl> + JsonParser < uint16_t > parser ( platform , handler ) ; <nl> + parser . Parse ( chars . data ( ) , chars . size ( ) ) ; <nl> + } <nl> + } / / namespace json <nl> + <nl> + { % for namespace in config . protocol . 
namespace % } <nl> + } / / namespace { { namespace } } <nl> + { % endfor % } <nl> + <nl> similarity index 65 % <nl> rename from third_party / inspector_protocol / lib / CBOR_h . template <nl> rename to third_party / inspector_protocol / lib / encoding_h . template <nl> mmm a / third_party / inspector_protocol / lib / CBOR_h . template <nl> ppp b / third_party / inspector_protocol / lib / encoding_h . template <nl> <nl> { # This template is generated by gen_cbor_templates . py . # } <nl> - / / Generated by lib / CBOR_h . template . <nl> + / / Generated by lib / encoding_h . template . <nl> <nl> / / Copyright 2019 The Chromium Authors . All rights reserved . <nl> / / Use of this source code is governed by a BSD - style license that can be <nl> / / found in the LICENSE file . <nl> <nl> - # ifndef { { " _ " . join ( config . protocol . namespace ) } } _CBOR_h <nl> - # define { { " _ " . join ( config . protocol . namespace ) } } _CBOR_h <nl> + # ifndef { { " _ " . join ( config . protocol . namespace ) } } _encoding_h <nl> + # define { { " _ " . join ( config . protocol . namespace ) } } _encoding_h <nl> <nl> # include < cstddef > <nl> # include < cstdint > <nl> # include < memory > <nl> + # include < string > <nl> # include < vector > <nl> <nl> { % for namespace in config . protocol . namespace % } <nl> namespace { { namespace } } { <nl> { % endfor % } <nl> <nl> - / / = = = = = encoding / status . h = = = = = <nl> + / / = = = = = encoding / encoding . h = = = = = <nl> + <nl> + <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + / / span - sequence of bytes <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + <nl> + / / This template is similar to std : : span , which will be included in C + + 20 . 
Like <nl> + / / std : : span it uses ptrdiff_t , which is signed ( and thus a bit annoying <nl> + / / sometimes when comparing with size_t ) , but other than this it ' s much simpler . <nl> + template < typename T > <nl> + class span { <nl> + public : <nl> + using index_type = std : : ptrdiff_t ; <nl> + <nl> + span ( ) : data_ ( nullptr ) , size_ ( 0 ) { } <nl> + span ( const T * data , index_type size ) : data_ ( data ) , size_ ( size ) { } <nl> + <nl> + const T * data ( ) const { return data_ ; } <nl> + <nl> + const T * begin ( ) const { return data_ ; } <nl> + const T * end ( ) const { return data_ + size_ ; } <nl> + <nl> + const T & operator [ ] ( index_type idx ) const { return data_ [ idx ] ; } <nl> + <nl> + span < T > subspan ( index_type offset , index_type count ) const { <nl> + return span ( data_ + offset , count ) ; <nl> + } <nl> + <nl> + span < T > subspan ( index_type offset ) const { <nl> + return span ( data_ + offset , size_ - offset ) ; <nl> + } <nl> + <nl> + bool empty ( ) const { return size_ = = 0 ; } <nl> + <nl> + index_type size ( ) const { return size_ ; } <nl> + index_type size_bytes ( ) const { return size_ * sizeof ( T ) ; } <nl> + <nl> + private : <nl> + const T * data_ ; <nl> + index_type size_ ; <nl> + } ; <nl> + <nl> + template < typename T > <nl> + span < T > SpanFromVector ( const std : : vector < T > & v ) { <nl> + return span < T > ( v . data ( ) , v . size ( ) ) ; <nl> + } <nl> + <nl> + inline span < uint8_t > SpanFromStdString ( const std : : string & v ) { <nl> + return span < uint8_t > ( reinterpret_cast < const uint8_t * > ( v . data ( ) ) , v . size ( ) ) ; <nl> + } <nl> <nl> / / Error codes . 
<nl> enum class Error { <nl> enum class Error { <nl> JSON_PARSER_COMMA_OR_ARRAY_END_EXPECTED = 0x08 , <nl> JSON_PARSER_STRING_LITERAL_EXPECTED = 0x09 , <nl> JSON_PARSER_COLON_EXPECTED = 0x0a , <nl> - JSON_PARSER_UNEXPECTED_OBJECT_END = 0x0b , <nl> - JSON_PARSER_COMMA_OR_OBJECT_END_EXPECTED = 0x0c , <nl> + JSON_PARSER_UNEXPECTED_MAP_END = 0x0b , <nl> + JSON_PARSER_COMMA_OR_MAP_END_EXPECTED = 0x0c , <nl> JSON_PARSER_VALUE_EXPECTED = 0x0d , <nl> <nl> CBOR_INVALID_INT32 = 0x0e , <nl> struct Status { <nl> Status ( ) = default ; <nl> } ; <nl> <nl> - / / = = = = = encoding / span . h = = = = = <nl> - <nl> - / / This template is similar to std : : span , which will be included in C + + 20 . Like <nl> - / / std : : span it uses ptrdiff_t , which is signed ( and thus a bit annoying <nl> - / / sometimes when comparing with size_t ) , but other than this it ' s much simpler . <nl> - template < typename T > <nl> - class span { <nl> + / / Handler interface for parser events emitted by a streaming parser . <nl> + / / See cbor : : NewCBOREncoder , cbor : : ParseCBOR , json : : NewJSONEncoder , <nl> + / / json : : ParseJSON . 
<nl> + class StreamingParserHandler { <nl> public : <nl> - using index_type = std : : ptrdiff_t ; <nl> - <nl> - span ( ) : data_ ( nullptr ) , size_ ( 0 ) { } <nl> - span ( const T * data , index_type size ) : data_ ( data ) , size_ ( size ) { } <nl> - <nl> - const T * data ( ) const { return data_ ; } <nl> - <nl> - const T * begin ( ) const { return data_ ; } <nl> - const T * end ( ) const { return data_ + size_ ; } <nl> - <nl> - const T & operator [ ] ( index_type idx ) const { return data_ [ idx ] ; } <nl> - <nl> - span < T > subspan ( index_type offset , index_type count ) const { <nl> - return span ( data_ + offset , count ) ; <nl> - } <nl> - <nl> - span < T > subspan ( index_type offset ) const { <nl> - return span ( data_ + offset , size_ - offset ) ; <nl> - } <nl> - <nl> - bool empty ( ) const { return size_ = = 0 ; } <nl> - <nl> - index_type size ( ) const { return size_ ; } <nl> - index_type size_bytes ( ) const { return size_ * sizeof ( T ) ; } <nl> - <nl> - private : <nl> - const T * data_ ; <nl> - index_type size_ ; <nl> - } ; <nl> - <nl> - / / = = = = = encoding / json_parser_handler . h = = = = = <nl> - <nl> - / / Handler interface for JSON parser events . See also json_parser . h . 
<nl> - class JSONParserHandler { <nl> - public : <nl> - virtual ~ JSONParserHandler ( ) = default ; <nl> - virtual void HandleObjectBegin ( ) = 0 ; <nl> - virtual void HandleObjectEnd ( ) = 0 ; <nl> + virtual ~ StreamingParserHandler ( ) = default ; <nl> + virtual void HandleMapBegin ( ) = 0 ; <nl> + virtual void HandleMapEnd ( ) = 0 ; <nl> virtual void HandleArrayBegin ( ) = 0 ; <nl> virtual void HandleArrayEnd ( ) = 0 ; <nl> virtual void HandleString8 ( span < uint8_t > chars ) = 0 ; <nl> virtual void HandleString16 ( span < uint16_t > chars ) = 0 ; <nl> - virtual void HandleBinary ( std : : vector < uint8_t > bytes ) = 0 ; <nl> + virtual void HandleBinary ( span < uint8_t > bytes ) = 0 ; <nl> virtual void HandleDouble ( double value ) = 0 ; <nl> virtual void HandleInt32 ( int32_t value ) = 0 ; <nl> virtual void HandleBool ( bool value ) = 0 ; <nl> class JSONParserHandler { <nl> virtual void HandleError ( Status error ) = 0 ; <nl> } ; <nl> <nl> - / / = = = = = encoding / cbor_internals . h = = = = = <nl> - <nl> - namespace cbor { <nl> - enum class MajorType ; <nl> - } <nl> - <nl> - namespace cbor_internals { <nl> - <nl> - / / Reads the start of a token with definitive size from | bytes | . <nl> - / / | type | is the major type as specified in RFC 7049 Section 2 . 1 . <nl> - / / | value | is the payload ( e . g . for MajorType : : UNSIGNED ) or is the size <nl> - / / ( e . g . for BYTE_STRING ) . <nl> - / / If successful , returns the number of bytes read . Otherwise returns - 1 . <nl> - int8_t ReadTokenStart ( span < uint8_t > bytes , cbor : : MajorType * type , <nl> - uint64_t * value ) ; <nl> - <nl> - / / Writes the start of a token with | type | . The | value | may indicate the size , <nl> - / / or it may be the payload if the value is an unsigned integer . <nl> - void WriteTokenStart ( cbor : : MajorType type , uint64_t value , <nl> - std : : vector < uint8_t > * encoded ) ; <nl> - } / / namespace cbor_internals <nl> - <nl> - / / = = = = = encoding / cbor . 
h = = = = = <nl> - <nl> - <nl> namespace cbor { <nl> - <nl> - / / The major types from RFC 7049 Section 2 . 1 . <nl> - enum class MajorType { <nl> - UNSIGNED = 0 , <nl> - NEGATIVE = 1 , <nl> - BYTE_STRING = 2 , <nl> - STRING = 3 , <nl> - ARRAY = 4 , <nl> - MAP = 5 , <nl> - TAG = 6 , <nl> - SIMPLE_VALUE = 7 <nl> - } ; <nl> - <nl> - / / Indicates the number of bits the " initial byte " needs to be shifted to the <nl> - / / right after applying | kMajorTypeMask | to produce the major type in the <nl> - / / lowermost bits . <nl> - static constexpr uint8_t kMajorTypeBitShift = 5u ; <nl> - / / Mask selecting the low - order 5 bits of the " initial byte " , which is where <nl> - / / the additional information is encoded . <nl> - static constexpr uint8_t kAdditionalInformationMask = 0x1f ; <nl> - / / Mask selecting the high - order 3 bits of the " initial byte " , which indicates <nl> - / / the major type of the encoded value . <nl> - static constexpr uint8_t kMajorTypeMask = 0xe0 ; <nl> - / / Indicates the integer is in the following byte . <nl> - static constexpr uint8_t kAdditionalInformation1Byte = 24u ; <nl> - / / Indicates the integer is in the next 2 bytes . <nl> - static constexpr uint8_t kAdditionalInformation2Bytes = 25u ; <nl> - / / Indicates the integer is in the next 4 bytes . <nl> - static constexpr uint8_t kAdditionalInformation4Bytes = 26u ; <nl> - / / Indicates the integer is in the next 8 bytes . <nl> - static constexpr uint8_t kAdditionalInformation8Bytes = 27u ; <nl> - <nl> - / / Encodes the initial byte , consisting of the | type | in the first 3 bits <nl> - / / followed by 5 bits of | additional_info | . <nl> - constexpr uint8_t EncodeInitialByte ( MajorType type , uint8_t additional_info ) { <nl> - return ( static_cast < uint8_t > ( type ) < < kMajorTypeBitShift ) | <nl> - ( additional_info & kAdditionalInformationMask ) ; <nl> - } <nl> - <nl> - / / TAG 24 indicates that what follows is a byte string which is <nl> - / / encoded in CBOR format . 
We use this as a wrapper for <nl> - / / maps and arrays , allowing us to skip them , because the <nl> - / / byte string carries its size ( byte length ) . <nl> - / / https : / / tools . ietf . org / html / rfc7049 # section - 2 . 4 . 4 . 1 <nl> - static constexpr uint8_t kInitialByteForEnvelope = <nl> - EncodeInitialByte ( MajorType : : TAG , 24 ) ; <nl> - / / The initial byte for a byte string with at most 2 ^ 32 bytes <nl> - / / of payload . This is used for envelope encoding , even if <nl> - / / the byte string is shorter . <nl> - static constexpr uint8_t kInitialByteFor32BitLengthByteString = <nl> - EncodeInitialByte ( MajorType : : BYTE_STRING , 26 ) ; <nl> - <nl> - / / See RFC 7049 Section 2 . 2 . 1 , indefinite length arrays / maps have additional <nl> - / / info = 31 . <nl> - static constexpr uint8_t kInitialByteIndefiniteLengthArray = <nl> - EncodeInitialByte ( MajorType : : ARRAY , 31 ) ; <nl> - static constexpr uint8_t kInitialByteIndefiniteLengthMap = <nl> - EncodeInitialByte ( MajorType : : MAP , 31 ) ; <nl> - / / See RFC 7049 Section 2 . 3 , Table 1 ; this is used for finishing indefinite <nl> - / / length maps / arrays . <nl> - static constexpr uint8_t kStopByte = <nl> - EncodeInitialByte ( MajorType : : SIMPLE_VALUE , 31 ) ; <nl> - <nl> - } / / namespace cbor <nl> - <nl> / / The binary encoding for the inspector protocol follows the CBOR specification <nl> / / ( RFC 7049 ) . Additional constraints : <nl> / / - Only indefinite length maps and arrays are supported . <nl> static constexpr uint8_t kStopByte = <nl> / / as CBOR BYTE_STRING ( major type 2 ) . For such strings , the number of <nl> / / bytes encoded must be even . <nl> / / - UTF8 strings ( major type 3 ) are supported . <nl> - / / - 7 bit US - ASCII strings must always be encoded as UTF8 strings , not <nl> + / / - 7 bit US - ASCII strings must always be encoded as UTF8 strings , never <nl> / / as UTF16 strings . 
<nl> / / - Arbitrary byte arrays , in the inspector protocol called ' binary ' , <nl> / / are encoded as BYTE_STRING ( major type 2 ) , prefixed with a byte <nl> / / indicating base64 when rendered as JSON . <nl> <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + / / Detecting CBOR content <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + <nl> + / / The first byte for an envelope , which we use for wrapping dictionaries <nl> + / / and arrays ; and the byte that indicates a byte string with 32 bit length . <nl> + / / These two bytes start an envelope , and thereby also any CBOR message <nl> + / / produced or consumed by this protocol . See also | EnvelopeEncoder | below . <nl> + uint8_t InitialByteForEnvelope ( ) ; <nl> + uint8_t InitialByteFor32BitLengthByteString ( ) ; <nl> + <nl> + / / Checks whether | msg | is a cbor message . <nl> + bool IsCBORMessage ( span < uint8_t > msg ) ; <nl> + <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + / / Encoding individual CBOR items <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + <nl> + / / Some constants for CBOR tokens that only take a single byte on the wire . <nl> + uint8_t EncodeTrue ( ) ; <nl> + uint8_t EncodeFalse ( ) ; <nl> + uint8_t EncodeNull ( ) ; <nl> + uint8_t EncodeIndefiniteLengthArrayStart ( ) ; <nl> + uint8_t EncodeIndefiniteLengthMapStart ( ) ; <nl> + uint8_t EncodeStop ( ) ; <nl> + <nl> / / Encodes | value | as | UNSIGNED | ( major type 0 ) iff > = 0 , or | NEGATIVE | <nl> / / ( major type 1 ) iff < 0 . 
<nl> void EncodeInt32 ( int32_t value , std : : vector < uint8_t > * out ) ; <nl> void EncodeBinary ( span < uint8_t > in , std : : vector < uint8_t > * out ) ; <nl> / / with additional info = 27 , followed by 8 bytes in big endian . <nl> void EncodeDouble ( double value , std : : vector < uint8_t > * out ) ; <nl> <nl> - / / Some constants for CBOR tokens that only take a single byte on the wire . <nl> - uint8_t EncodeTrue ( ) ; <nl> - uint8_t EncodeFalse ( ) ; <nl> - uint8_t EncodeNull ( ) ; <nl> - uint8_t EncodeIndefiniteLengthArrayStart ( ) ; <nl> - uint8_t EncodeIndefiniteLengthMapStart ( ) ; <nl> - uint8_t EncodeStop ( ) ; <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + / / cbor : : EnvelopeEncoder - for wrapping submessages <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> <nl> / / An envelope indicates the byte length of a wrapped item . <nl> / / We use this for maps and array , which allows the decoder <nl> class EnvelopeEncoder { <nl> std : : size_t byte_size_pos_ = 0 ; <nl> } ; <nl> <nl> - / / This can be used to convert from JSON to CBOR , by passing the <nl> - / / return value to the routines in json_parser . h . The handler will encode into <nl> - / / | out | , and iff an error occurs it will set | status | to an error and clear <nl> - / / | out | . Otherwise , | status . ok ( ) | will be | true | . 
<nl> - std : : unique_ptr < JSONParserHandler > NewJSONToCBOREncoder ( <nl> - std : : vector < uint8_t > * out , Status * status ) ; <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + / / cbor : : NewCBOREncoder - for encoding from a streaming parser <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> <nl> - / / Parses a CBOR encoded message from | bytes | , sending JSON events to <nl> - / / | json_out | . If an error occurs , sends | out - > HandleError | , and parsing stops . <nl> - / / The client is responsible for discarding the already received information in <nl> - / / that case . <nl> - void ParseCBOR ( span < uint8_t > bytes , JSONParserHandler * json_out ) ; <nl> + / / This can be used to convert to CBOR , by passing the return value to a parser <nl> + / / that drives it . The handler will encode into | out | , and iff an error occurs <nl> + / / it will set | status | to an error and clear | out | . Otherwise , | status . ok ( ) | <nl> + / / will be | true | . <nl> + std : : unique_ptr < StreamingParserHandler > NewCBOREncoder ( <nl> + std : : vector < uint8_t > * out , <nl> + Status * status ) ; <nl> <nl> - / / Tags for the tokens within a CBOR message that CBORStream understands . <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + / / cbor : : CBORTokenizer - for parsing individual CBOR items <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + <nl> + / / Tags for the tokens within a CBOR message that CBORTokenizer understands . 
<nl> / / Note that this is not the same terminology as the CBOR spec ( RFC 7049 ) , <nl> / / but rather , our adaptation . For instance , we lump unsigned and signed <nl> / / major type into INT32 here ( and disallow values outside the int32_t range ) . <nl> enum class CBORTokenTag { <nl> DONE , <nl> } ; <nl> <nl> + / / The major types from RFC 7049 Section 2 . 1 . <nl> + enum class MajorType { <nl> + UNSIGNED = 0 , <nl> + NEGATIVE = 1 , <nl> + BYTE_STRING = 2 , <nl> + STRING = 3 , <nl> + ARRAY = 4 , <nl> + MAP = 5 , <nl> + TAG = 6 , <nl> + SIMPLE_VALUE = 7 <nl> + } ; <nl> + <nl> / / CBORTokenizer segments a CBOR message , presenting the tokens therein as <nl> / / numbers , strings , etc . This is not a complete CBOR parser , but makes it much <nl> / / easier to implement one ( e . g . ParseCBOR , above ) . It can also be used to parse <nl> class CBORTokenizer { <nl> / / To be called only if : : TokenTag ( ) = = CBORTokenTag : : BINARY . <nl> span < uint8_t > GetBinary ( ) const ; <nl> <nl> + / / To be called only if : : TokenTag ( ) = = CBORTokenTag : : ENVELOPE . 
<nl> + span < uint8_t > GetEnvelopeContents ( ) const ; <nl> + <nl> private : <nl> void ReadNextToken ( bool enter_envelope ) ; <nl> void SetToken ( CBORTokenTag token , std : : ptrdiff_t token_byte_length ) ; <nl> class CBORTokenizer { <nl> CBORTokenTag token_tag_ ; <nl> struct Status status_ ; <nl> std : : ptrdiff_t token_byte_length_ ; <nl> - cbor : : MajorType token_start_type_ ; <nl> + MajorType token_start_type_ ; <nl> uint64_t token_start_internal_value_ ; <nl> } ; <nl> <nl> - void DumpCBOR ( span < uint8_t > cbor ) ; <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + / / cbor : : ParseCBOR - for receiving streaming parser events for CBOR messages <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + <nl> + / / Parses a CBOR encoded message from | bytes | , sending events to <nl> + / / | out | . If an error occurs , sends | out - > HandleError | , and parsing stops . <nl> + / / The client is responsible for discarding the already received information in <nl> + / / that case . <nl> + void ParseCBOR ( span < uint8_t > bytes , StreamingParserHandler * out ) ; <nl> + <nl> + namespace internals { / / Exposed only for writing tests . <nl> + int8_t ReadTokenStart ( span < uint8_t > bytes , <nl> + cbor : : MajorType * type , <nl> + uint64_t * value ) ; <nl> + <nl> + void WriteTokenStart ( cbor : : MajorType type , <nl> + uint64_t value , <nl> + std : : vector < uint8_t > * encoded ) ; <nl> + } / / namespace internals <nl> + } / / namespace cbor <nl> + <nl> + namespace json { <nl> + / / Client code must provide an instance . Implementation should delegate <nl> + / / to whatever is appropriate . <nl> + class Platform { <nl> + public : <nl> + virtual ~ Platform ( ) = default ; <nl> + / / Parses | str | into | result | . 
Returns false iff there are <nl> + / / leftover characters or parsing errors . <nl> + virtual bool StrToD ( const char * str , double * result ) const = 0 ; <nl> + <nl> + / / Prints | value | in a format suitable for JSON . <nl> + virtual std : : unique_ptr < char [ ] > DToStr ( double value ) const = 0 ; <nl> + } ; <nl> <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + / / json : : NewJSONEncoder - for encoding streaming parser events as JSON <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + <nl> + / / Returns a handler object which will write ascii characters to | out | . <nl> + / / | status - > ok ( ) | will be false iff the handler routine HandleError ( ) is called . <nl> + / / In that case , we ' ll stop emitting output . <nl> + / / Except for calling the HandleError routine at any time , the client <nl> + / / code must call the Handle * methods in an order in which they ' d occur <nl> + / / in valid JSON ; otherwise we may crash ( the code uses assert ) . 
<nl> + std : : unique_ptr < StreamingParserHandler > NewJSONEncoder ( const Platform * platform , <nl> + std : : string * out , <nl> + Status * status ) ; <nl> + <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + / / json : : ParseJSON - for receiving streaming parser events for JSON <nl> + / / = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = <nl> + <nl> + void ParseJSON ( const Platform * platform , <nl> + span < uint8_t > chars , <nl> + StreamingParserHandler * handler ) ; <nl> + void ParseJSON ( const Platform * platform , <nl> + span < uint16_t > chars , <nl> + StreamingParserHandler * handler ) ; <nl> + } / / namespace json <nl> <nl> { % for namespace in config . protocol . namespace % } <nl> } / / namespace { { namespace } } <nl> { % endfor % } <nl> - # endif / / ! defined ( { { " _ " . join ( config . protocol . namespace ) } } _CBOR_h ) <nl> + # endif / / ! defined ( { { " _ " . join ( config . protocol . namespace ) } } _encoding_h ) <nl>
|
[DevTools] Roll inspector_protocol to a7423d8ca937e658ab3b85e3b02676bced145ba6.
|
v8/v8
|
1cb390b8750d6b0d08a59c0a18049c66aed5b5ed
|
2019-03-13T18:19:28Z
|
mmm a / Telegram / SourceFiles / core / sandbox . cpp <nl> ppp b / Telegram / SourceFiles / core / sandbox . cpp <nl> int Sandbox : : start ( ) { <nl> [ = ] { newInstanceConnected ( ) ; } ) ; <nl> <nl> crl : : on_main ( this , [ = ] { checkForQuit ( ) ; } ) ; <nl> - connect ( <nl> - this , <nl> - & QCoreApplication : : aboutToQuit , <nl> - [ = ] { closeApplication ( ) ; } ) ; <nl> + connect ( this , & QCoreApplication : : aboutToQuit , [ = ] { <nl> + customEnterFromEventLoop ( [ & ] { <nl> + closeApplication ( ) ; <nl> + } ) ; <nl> + } ) ; <nl> <nl> if ( cManyInstance ( ) ) { <nl> LOG ( ( " Many instance allowed , starting . . . " ) ) ; <nl> mmm a / Telegram / SourceFiles / ui / effects / animations . cpp <nl> ppp b / Telegram / SourceFiles / ui / effects / animations . cpp <nl> void Manager : : schedule ( ) { <nl> stopTimer ( ) ; <nl> <nl> _scheduled = true ; <nl> - Ui : : PostponeCall ( [ = ] { <nl> + Ui : : PostponeCall ( static_cast < QObject * > ( this ) , [ = ] { <nl> _scheduled = false ; <nl> if ( _forceImmediateUpdate ) { <nl> _forceImmediateUpdate = false ; <nl>
|
Fix crash in application closing.
|
telegramdesktop/tdesktop
|
1ab4dbe46613b1de50f59031833041a2e0c3a8e9
|
2019-04-06T08:12:24Z
|
mmm a / ports / pcl / CONTROL <nl> ppp b / ports / pcl / CONTROL <nl> <nl> Source : pcl <nl> - Version : 1 . 8 . 1 - 6 <nl> + Version : 1 . 8 . 1 - 7 <nl> Description : Point Cloud Library ( PCL ) is open source library for 2D / 3D image and point cloud processing . <nl> Build - Depends : boost , eigen3 , flann , qhull , vtk <nl> <nl> new file mode 100644 <nl> index 00000000000 . . 2d8bd1bd354 <nl> mmm / dev / null <nl> ppp b / ports / pcl / cmakelists . patch <nl> <nl> pppmmm a / CMakeLists . txt <nl> ppp + b / CMakeLists . txt <nl> + endif ( WITH_PNG ) <nl> + # Qhull <nl> + option ( WITH_QHULL " Include convex - hull operations " TRUE ) <nl> + if ( WITH_QHULL ) <nl> + - if ( NOT PCL_SHARED_LIBS OR WIN32 ) <nl> + + if ( NOT PCL_SHARED_LIBS ) <nl> + set ( QHULL_USE_STATIC ON ) <nl> + - endif ( NOT PCL_SHARED_LIBS OR WIN32 ) <nl> + + endif ( NOT PCL_SHARED_LIBS ) <nl> + find_package ( Qhull ) <nl> + if ( QHULL_FOUND ) <nl> + include_directories ( $ { QHULL_INCLUDE_DIRS } ) <nl> mmm a / ports / pcl / portfile . cmake <nl> ppp b / ports / pcl / portfile . cmake <nl> vcpkg_from_github ( <nl> <nl> vcpkg_apply_patches ( <nl> SOURCE_PATH $ { SOURCE_PATH } <nl> - PATCHES " $ { CMAKE_CURRENT_LIST_DIR } / config . patch " <nl> + PATCHES " $ { CMAKE_CURRENT_LIST_DIR } / cmakelists . patch " <nl> + " $ { CMAKE_CURRENT_LIST_DIR } / config . patch " <nl> " $ { CMAKE_CURRENT_LIST_DIR } / config_install . patch " <nl> " $ { CMAKE_CURRENT_LIST_DIR } / find_flann . patch " <nl> " $ { CMAKE_CURRENT_LIST_DIR } / find_qhull . patch " <nl> mmm a / ports / qhull / CONTROL <nl> ppp b / ports / qhull / CONTROL <nl> <nl> Source : qhull <nl> - Version : 2015 . 2 - 1 <nl> + Version : 2015 . 2 - 2 <nl> Description : computes the convex hull , Delaunay triangulation , Voronoi diagram <nl> mmm a / ports / qhull / portfile . cmake <nl> ppp b / ports / qhull / portfile . 
cmake <nl> <nl> - # Common Ambient Variables : <nl> - # CURRENT_BUILDTREES_DIR = $ { VCPKG_ROOT_DIR } \ buildtrees \ $ { PORT } <nl> - # CURRENT_PACKAGES_DIR = $ { VCPKG_ROOT_DIR } \ packages \ $ { PORT } _ $ { TARGET_TRIPLET } <nl> - # CURRENT_PORT_DIR = $ { VCPKG_ROOT_DIR } \ ports \ $ { PORT } <nl> - # PORT = current port name ( zlib , etc ) <nl> - # TARGET_TRIPLET = current triplet ( x86 - windows , x64 - windows - static , etc ) <nl> - # VCPKG_CRT_LINKAGE = C runtime linkage type ( static , dynamic ) <nl> - # VCPKG_LIBRARY_LINKAGE = target library linkage type ( static , dynamic ) <nl> - # VCPKG_ROOT_DIR = < C : \ path \ to \ current \ vcpkg > <nl> - # VCPKG_TARGET_ARCHITECTURE = target architecture ( x64 , x86 , arm ) <nl> - # <nl> - <nl> include ( vcpkg_common_functions ) <nl> <nl> vcpkg_from_github ( <nl> vcpkg_from_github ( <nl> <nl> vcpkg_configure_cmake ( <nl> SOURCE_PATH $ { SOURCE_PATH } <nl> - # PREFER_NINJA # Disable this option if project cannot be built with Ninja <nl> + PREFER_NINJA <nl> OPTIONS <nl> - DINCLUDE_INSTALL_DIR = $ { CURRENT_PACKAGES_DIR } / include <nl> - DMAN_INSTALL_DIR = $ { CURRENT_PACKAGES_DIR } / doc / qhull <nl> if ( VCPKG_LIBRARY_LINKAGE STREQUAL " static " ) <nl> file ( REMOVE $ { CURRENT_PACKAGES_DIR } / lib / qhull . lib $ { CURRENT_PACKAGES_DIR } / debug / lib / qhull_d . lib ) <nl> file ( REMOVE $ { CURRENT_PACKAGES_DIR } / lib / qhull_p . lib $ { CURRENT_PACKAGES_DIR } / debug / lib / qhull_pd . lib ) <nl> file ( REMOVE $ { CURRENT_PACKAGES_DIR } / lib / qhull_r . lib $ { CURRENT_PACKAGES_DIR } / debug / lib / qhull_rd . lib ) <nl> + else ( ) <nl> + file ( REMOVE $ { CURRENT_PACKAGES_DIR } / lib / qhullcpp . lib $ { CURRENT_PACKAGES_DIR } / debug / lib / qhullcpp_d . lib ) <nl> + file ( REMOVE $ { CURRENT_PACKAGES_DIR } / lib / qhullstatic . lib $ { CURRENT_PACKAGES_DIR } / debug / lib / qhullstatic_d . lib ) <nl> + file ( REMOVE $ { CURRENT_PACKAGES_DIR } / lib / qhullstatic_r . 
lib $ { CURRENT_PACKAGES_DIR } / debug / lib / qhullstatic_rd . lib ) <nl> endif ( ) <nl> <nl> - # Handle copyright <nl> file ( COPY $ { SOURCE_PATH } / README . txt DESTINATION $ { CURRENT_PACKAGES_DIR } / share / qhull ) <nl> file ( RENAME $ { CURRENT_PACKAGES_DIR } / share / qhull / README . txt $ { CURRENT_PACKAGES_DIR } / share / qhull / copyright ) <nl>
|
Merge pull request from UnaNancyOwen/fix_qhull
|
microsoft/vcpkg
|
94bd9dd66e9db88f965c8b270ea58f927685a317
|
2017-11-26T02:48:37Z
|
mmm a / dist / windows / installer - translations / arabic . nsi <nl> ppp b / dist / windows / installer - translations / arabic . nsi <nl> LangString inst_firewallinfo $ { LANG_ARABIC } " جاري اضافة القاعدة <nl> ; LangString inst_warning $ { LANG_ENGLISH } " qBittorrent is running . Please close the application before installing . " <nl> LangString inst_warning $ { LANG_ARABIC } " البرنامج يعمل . يرجى اغلاقه قبل البدء في التنصيب " <nl> ; LangString inst_uninstall_question $ { LANG_ENGLISH } " A previous installation was detected . It will be uninstalled without deleting user settings . " <nl> - LangString inst_uninstall_question $ { LANG_ARABIC } " A previous installation was detected . It will be uninstalled without deleting user settings . " <nl> + LangString inst_uninstall_question $ { LANG_ARABIC } " يوجد نسخة سابقة من البرنامج . سيتم إزالتها دون حذف إعدادات المستخدم " <nl> ; LangString inst_unist $ { LANG_ENGLISH } " Uninstalling previous version . " <nl> LangString inst_unist $ { LANG_ARABIC } " جاري ازالة النسخة السابقة من البرنامج " <nl> ; LangString launch_qbt $ { LANG_ENGLISH } " Launch qBittorrent . " <nl> LangString launch_qbt $ { LANG_ARABIC } " تشغيل البرنامج " <nl> ; LangString inst_requires_64bit $ { LANG_ENGLISH } " This installer works only in 64 - bit Windows versions . " <nl> - LangString inst_requires_64bit $ { LANG_ARABIC } " This installer works only in 64 - bit Windows versions . " <nl> + LangString inst_requires_64bit $ { LANG_ARABIC } " هذا المثبت يعمل فقط في نسخ ويندوز 64 بت " <nl> <nl> <nl> ; mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm <nl>
|
Installer Arabic language update
|
qbittorrent/qBittorrent
|
8c0577862b99bc32e5df44b0d20f1314c894c2f7
|
2017-06-05T16:59:31Z
|
mmm a / src / mongo / db / s / migration_source_manager . cpp <nl> ppp b / src / mongo / db / s / migration_source_manager . cpp <nl> Status MigrationSourceManager : : commitChunkMetadataOnConfig ( OperationContext * opC <nl> / / and subsequent callers will try to do a full refresh . <nl> const auto refreshStatus = [ & ] { <nl> try { <nl> - forceShardFilteringMetadataRefresh ( opCtx , getNss ( ) ) ; <nl> + forceShardFilteringMetadataRefresh ( opCtx , getNss ( ) , true ) ; <nl> return Status : : OK ( ) ; <nl> } catch ( const DBException & ex ) { <nl> return ex . toStatus ( ) ; <nl>
|
SERVER-36232 Ensure chunk migration commit and the subsequent refresh are causally consistent
|
mongodb/mongo
|
ba25922e6b2bffa60a8a4f3db8adca612da55e95
|
2018-07-23T14:42:44Z
|
--- a/README.md
+++ b/README.md
 Supported Attributes
 Name | Value
 ----:|------
 width, height | positive number
+minWidth, minHeight | positive number
+maxWidth, maxHeight | positive number
 left, right, top, bottom | number
 margin, marginLeft, marginRight, marginTop, marginBottom | number
 padding, paddingLeft, paddingRight, paddingTop, paddingBottom | positive number
--- a/src/JavaTranspiler.js
+++ b/src/JavaTranspiler.js
 function __transpileSingleTestToJava(code) {
       function(str, match1, match2) {
         return match1 + '.' + match2.toLowerCase();
       })
+    .replace( // style.maxDimensions[CSS_WIDTH] => style.maxWidth
+      /(style|layout)\.maxDimensions\[CSS_(WIDTH|HEIGHT)\]/g,
+      function(str, match1, match2) {
+        return match1 + '.max' + match2.substr(0, 1).toUpperCase() + match2.substr(1).toLowerCase();
+      })
+    .replace( // style.minDimensions[CSS_WIDTH] => style.minWidth
+      /(style|layout)\.minDimensions\[CSS_(WIDTH|HEIGHT)\]/g,
+      function(str, match1, match2) {
+        return match1 + '.min' + match2.substr(0, 1).toUpperCase() + match2.substr(1).toLowerCase();
+      })
     .replace( // layout.position[CSS_TOP] => layout.y
       /layout\.position\[CSS_(TOP|LEFT)\]/g,
       function(str, match1) {
--- a/src/Layout-test-utils.js
+++ b/src/Layout-test-utils.js
 var layoutTestUtils = (function() {
     var div = document.createElement('div');
     transfer(div, node, 'width', 'px');
     transfer(div, node, 'height', 'px');
+    transfer(div, node, 'minWidth', 'px');
+    transfer(div, node, 'minHeight', 'px');
+    transfer(div, node, 'maxWidth', 'px');
+    transfer(div, node, 'maxHeight', 'px');
     transfer(div, node, 'top', 'px');
     transfer(div, node, 'left', 'px');
     transfer(div, node, 'right', 'px');
--- a/src/Layout.c
+++ b/src/Layout.c
 void init_css_node(css_node_t *node) {
   node->style.dimensions[CSS_WIDTH] = CSS_UNDEFINED;
   node->style.dimensions[CSS_HEIGHT] = CSS_UNDEFINED;

+  node->style.minDimensions[CSS_WIDTH] = CSS_UNDEFINED;
+  node->style.minDimensions[CSS_HEIGHT] = CSS_UNDEFINED;
+
+  node->style.maxDimensions[CSS_WIDTH] = CSS_UNDEFINED;
+  node->style.maxDimensions[CSS_HEIGHT] = CSS_UNDEFINED;
+
   node->style.position[CSS_LEFT] = CSS_UNDEFINED;
   node->style.position[CSS_TOP] = CSS_UNDEFINED;
   node->style.position[CSS_RIGHT] = CSS_UNDEFINED;
 static float getPosition(css_node_t *node, css_position_t position) {
   return 0;
 }

+static float boundAxis(css_node_t *node, css_flex_direction_t axis, float value) {
+  float min = CSS_UNDEFINED;
+  float max = CSS_UNDEFINED;
+
+  if (axis == CSS_FLEX_DIRECTION_COLUMN) {
+    min = node->style.minDimensions[CSS_HEIGHT];
+    max = node->style.maxDimensions[CSS_HEIGHT];
+  } else if (axis == CSS_FLEX_DIRECTION_ROW) {
+    min = node->style.minDimensions[CSS_WIDTH];
+    max = node->style.maxDimensions[CSS_WIDTH];
+  }
+
+  float boundValue = value;
+
+  if (!isUndefined(max) && max >= 0.0 && boundValue > max) {
+    boundValue = max;
+  }
+  if (!isUndefined(min) && min >= 0.0 && boundValue < min) {
+    boundValue = min;
+  }
+
+  return boundValue;
+}
+
 // When the user specifically sets a value for width or height
 static void setDimensionFromStyle(css_node_t *node, css_flex_direction_t axis) {
   // The parent already computed us a width or height. We just skip it
 static void setDimensionFromStyle(css_node_t *node, css_flex_direction_t axis) {

   // The dimensions can never be smaller than the padding and border
   node->layout.dimensions[dim[axis]] = fmaxf(
-    node->style.dimensions[dim[axis]],
+    boundAxis(node, axis, node->style.dimensions[dim[axis]]),
     getPaddingAndBorderAxis(node, axis)
   );
 }
 static void layoutNodeImpl(css_node_t *node, float parentMaxWidth) {
         !isUndefined(node->layout.dimensions[dim[crossAxis]]) &&
         !isDimDefined(child, crossAxis)) {
       child->layout.dimensions[dim[crossAxis]] = fmaxf(
-        node->layout.dimensions[dim[crossAxis]] -
+        boundAxis(child, crossAxis, node->layout.dimensions[dim[crossAxis]] -
           getPaddingAndBorderAxis(node, crossAxis) -
-          getMarginAxis(child, crossAxis),
+          getMarginAxis(child, crossAxis)),
         // You never want to go smaller than padding
         getPaddingAndBorderAxis(child, crossAxis)
       );
 static void layoutNodeImpl(css_node_t *node, float parentMaxWidth) {
         isPosDefined(child, leading[axis]) &&
         isPosDefined(child, trailing[axis])) {
       child->layout.dimensions[dim[axis]] = fmaxf(
-        node->layout.dimensions[dim[axis]] -
-          getPaddingAndBorderAxis(node, axis) -
-          getMarginAxis(child, axis) -
-          getPosition(child, leading[axis]) -
-          getPosition(child, trailing[axis]),
+        boundAxis(child, axis, node->layout.dimensions[dim[axis]] -
+          getPaddingAndBorderAxis(node, axis) -
+          getMarginAxis(child, axis) -
+          getPosition(child, leading[axis]) -
+          getPosition(child, trailing[axis])),
         // You never want to go smaller than padding
         getPaddingAndBorderAxis(child, axis)
       );
 static void layoutNodeImpl(css_node_t *node, float parentMaxWidth) {
       totalFlexible += getFlex(child);

       // Even if we don't know its exact size yet, we already know the padding,
-      // border and margin. We'll use this partial information to compute the
-      // remaining space.
+      // border and margin. We'll use this partial information, which represents
+      // the smallest possible size for the child, to compute the remaining
+      // available space.
       nextContentDim = getPaddingAndBorderAxis(child, mainAxis) +
         getMarginAxis(child, mainAxis);
 static void layoutNodeImpl(css_node_t *node, float parentMaxWidth) {
     // remaining space
     if (flexibleChildrenCount != 0) {
       float flexibleMainDim = remainingMainDim / totalFlexible;
+      float baseMainDim;
+      float boundMainDim;
+
+      // Iterate over every child in the axis. If the flex share of remaining
+      // space doesn't meet min/max bounds, remove this child from flex
+      // calculations.
+      for (i = startLine; i < endLine; ++i) {
+        child = node->get_child(node->context, i);
+        if (isFlex(child)) {
+          baseMainDim = flexibleMainDim * getFlex(child) +
+            getPaddingAndBorderAxis(child, mainAxis);
+          boundMainDim = boundAxis(child, mainAxis, baseMainDim);
+
+          if (baseMainDim != boundMainDim) {
+            remainingMainDim -= boundMainDim;
+            totalFlexible -= getFlex(child);
+          }
+        }
+      }
+      flexibleMainDim = remainingMainDim / totalFlexible;

       // The non flexible children can overflow the container, in this case
       // we should just assume that there is no space available.
 static void layoutNodeImpl(css_node_t *node, float parentMaxWidth) {
         if (isFlex(child)) {
           // At this point we know the final size of the element in the main
           // dimension
-          child->layout.dimensions[dim[mainAxis]] = flexibleMainDim * getFlex(child) +
-            getPaddingAndBorderAxis(child, mainAxis);
+          child->layout.dimensions[dim[mainAxis]] = boundAxis(child, mainAxis,
+            flexibleMainDim * getFlex(child) + getPaddingAndBorderAxis(child, mainAxis)
+          );

           maxWidth = CSS_UNDEFINED;
           if (isDimDefined(node, CSS_FLEX_DIRECTION_ROW)) {
 static void layoutNodeImpl(css_node_t *node, float parentMaxWidth) {
       mainDim += betweenMainDim + getDimWithMargin(child, mainAxis);
       // The cross dimension is the max of the elements dimension since there
       // can only be one element in that cross dimension.
-      crossDim = fmaxf(crossDim, getDimWithMargin(child, crossAxis));
+      crossDim = fmaxf(crossDim, boundAxis(child, crossAxis, getDimWithMargin(child, crossAxis)));
     }
   }
 static void layoutNodeImpl(css_node_t *node, float parentMaxWidth) {
       containerMainAxis = fmaxf(
         // We're missing the last padding at this point to get the final
         // dimension
-        mainDim + getPaddingAndBorder(node, trailing[mainAxis]),
+        boundAxis(node, mainAxis, mainDim + getPaddingAndBorder(node, trailing[mainAxis])),
         // We can never assign a width smaller than the padding and borders
         getPaddingAndBorderAxis(node, mainAxis)
       );
 static void layoutNodeImpl(css_node_t *node, float parentMaxWidth) {
         // For the cross dim, we add both sides at the end because the value
         // is aggregate via a max function. Intermediate negative values
         // can mess this computation otherwise
-        crossDim + getPaddingAndBorderAxis(node, crossAxis),
+        boundAxis(node, crossAxis, crossDim + getPaddingAndBorderAxis(node, crossAxis)),
         getPaddingAndBorderAxis(node, crossAxis)
       );
     }
 static void layoutNodeImpl(css_node_t *node, float parentMaxWidth) {
       // previously.
       if (!isDimDefined(child, crossAxis)) {
         child->layout.dimensions[dim[crossAxis]] = fmaxf(
-          containerCrossAxis -
+          boundAxis(child, crossAxis, containerCrossAxis -
             getPaddingAndBorderAxis(node, crossAxis) -
-            getMarginAxis(child, crossAxis),
+            getMarginAxis(child, crossAxis)),
           // You never want to go smaller than padding
           getPaddingAndBorderAxis(child, crossAxis)
         );
 static void layoutNodeImpl(css_node_t *node, float parentMaxWidth) {
     node->layout.dimensions[dim[mainAxis]] = fmaxf(
       // We're missing the last padding at this point to get the final
       // dimension
-      linesMainDim + getPaddingAndBorder(node, trailing[mainAxis]),
+      boundAxis(node, mainAxis, linesMainDim + getPaddingAndBorder(node, trailing[mainAxis])),
       // We can never assign a width smaller than the padding and borders
       getPaddingAndBorderAxis(node, mainAxis)
     );
 static void layoutNodeImpl(css_node_t *node, float parentMaxWidth) {
       // For the cross dim, we add both sides at the end because the value
       // is aggregate via a max function. Intermediate negative values
       // can mess this computation otherwise
-      linesCrossDim + getPaddingAndBorderAxis(node, crossAxis),
+      boundAxis(node, crossAxis, linesCrossDim + getPaddingAndBorderAxis(node, crossAxis)),
       getPaddingAndBorderAxis(node, crossAxis)
     );
   }
 static void layoutNodeImpl(css_node_t *node, float parentMaxWidth) {
         isPosDefined(child, leading[axis]) &&
         isPosDefined(child, trailing[axis])) {
       child->layout.dimensions[dim[axis]] = fmaxf(
-        node->layout.dimensions[dim[axis]] -
-          getPaddingAndBorderAxis(node, axis) -
-          getMarginAxis(child, axis) -
-          getPosition(child, leading[axis]) -
-          getPosition(child, trailing[axis]),
+        boundAxis(child, axis, node->layout.dimensions[dim[axis]] -
+          getPaddingAndBorderAxis(node, axis) -
+          getMarginAxis(child, axis) -
+          getPosition(child, leading[axis]) -
+          getPosition(child, trailing[axis])
+        ),
         // You never want to go smaller than padding
         getPaddingAndBorderAxis(child, axis)
       );
--- a/src/Layout.h
+++ b/src/Layout.h
 typedef struct {
   float padding[4];
   float border[4];
   float dimensions[2];
+  float minDimensions[2];
+  float maxDimensions[2];
 } css_style_t;

 typedef struct css_node {
--- a/src/Layout.js
+++ b/src/Layout.js
 var computeLayout = (function() {
     return 0;
   }

+  function boundAxis(node, axis, value) {
+    var min = {
+      row: node.style['minWidth'],
+      column: node.style['minHeight']
+    }[axis];
+
+    var max = {
+      row: node.style['maxWidth'],
+      column: node.style['maxHeight']
+    }[axis];
+
+    var boundValue = value;
+    if (!isUndefined(max) && max >= 0 && boundValue > max) {
+      boundValue = max;
+    }
+    if (!isUndefined(min) && min >= 0 && boundValue < min) {
+      boundValue = min;
+    }
+    return boundValue;
+  }
+
   function fmaxf(a, b) {
     if (a > b) {
       return a;
 var computeLayout = (function() {

     // The dimensions can never be smaller than the padding and border
     node.layout[dim[axis]] = fmaxf(
-      node.style[dim[axis]],
+      boundAxis(node, axis, node.style[dim[axis]]),
       getPaddingAndBorderAxis(node, axis)
     );
   }
 var computeLayout = (function() {
         !isUndefined(node.layout[dim[crossAxis]]) &&
         !isDimDefined(child, crossAxis)) {
       child.layout[dim[crossAxis]] = fmaxf(
-        node.layout[dim[crossAxis]] -
+        boundAxis(child, crossAxis, node.layout[dim[crossAxis]] -
           getPaddingAndBorderAxis(node, crossAxis) -
-          getMarginAxis(child, crossAxis),
+          getMarginAxis(child, crossAxis)),
         // You never want to go smaller than padding
         getPaddingAndBorderAxis(child, crossAxis)
       );
 var computeLayout = (function() {
         isPosDefined(child, leading[axis]) &&
         isPosDefined(child, trailing[axis])) {
       child.layout[dim[axis]] = fmaxf(
-        node.layout[dim[axis]] -
-          getPaddingAndBorderAxis(node, axis) -
-          getMarginAxis(child, axis) -
-          getPosition(child, leading[axis]) -
-          getPosition(child, trailing[axis]),
+        boundAxis(child, axis, node.layout[dim[axis]] -
+          getPaddingAndBorderAxis(node, axis) -
+          getMarginAxis(child, axis) -
+          getPosition(child, leading[axis]) -
+          getPosition(child, trailing[axis])),
         // You never want to go smaller than padding
         getPaddingAndBorderAxis(child, axis)
       );
 var computeLayout = (function() {
       totalFlexible += getFlex(child);

       // Even if we don't know its exact size yet, we already know the padding,
-      // border and margin. We'll use this partial information to compute the
-      // remaining space.
+      // border and margin. We'll use this partial information, which represents
+      // the smallest possible size for the child, to compute the remaining
+      // available space.
       nextContentDim = getPaddingAndBorderAxis(child, mainAxis) +
         getMarginAxis(child, mainAxis);
 var computeLayout = (function() {
     // remaining space
     if (flexibleChildrenCount !== 0) {
       var /*float*/ flexibleMainDim = remainingMainDim / totalFlexible;
+      var /*float*/ baseMainDim;
+      var /*float*/ boundMainDim;
+
+      // Iterate over every child in the axis. If the flex share of remaining
+      // space doesn't meet min/max bounds, remove this child from flex
+      // calculations.
+      for (i = startLine; i < endLine; ++i) {
+        child = node.children[i];
+        if (isFlex(child)) {
+          baseMainDim = flexibleMainDim * getFlex(child) +
+            getPaddingAndBorderAxis(child, mainAxis);
+          boundMainDim = boundAxis(child, mainAxis, baseMainDim);
+
+          if (baseMainDim !== boundMainDim) {
+            remainingMainDim -= boundMainDim;
+            totalFlexible -= getFlex(child);
+          }
+        }
+      }
+      flexibleMainDim = remainingMainDim / totalFlexible;

       // The non flexible children can overflow the container, in this case
       // we should just assume that there is no space available.
 var computeLayout = (function() {
         if (isFlex(child)) {
           // At this point we know the final size of the element in the main
           // dimension
-          child.layout[dim[mainAxis]] = flexibleMainDim * getFlex(child) +
-            getPaddingAndBorderAxis(child, mainAxis);
+          child.layout[dim[mainAxis]] = boundAxis(child, mainAxis,
+            flexibleMainDim * getFlex(child) + getPaddingAndBorderAxis(child, mainAxis)
+          );

           maxWidth = CSS_UNDEFINED;
           if (isDimDefined(node, CSS_FLEX_DIRECTION_ROW)) {
 var computeLayout = (function() {
       mainDim += betweenMainDim + getDimWithMargin(child, mainAxis);
       // The cross dimension is the max of the elements dimension since there
       // can only be one element in that cross dimension.
-      crossDim = fmaxf(crossDim, getDimWithMargin(child, crossAxis));
+      crossDim = fmaxf(crossDim, boundAxis(child, crossAxis, getDimWithMargin(child, crossAxis)));
     }
   }
 var computeLayout = (function() {
       containerMainAxis = fmaxf(
         // We're missing the last padding at this point to get the final
         // dimension
-        mainDim + getPaddingAndBorder(node, trailing[mainAxis]),
+        boundAxis(node, mainAxis, mainDim + getPaddingAndBorder(node, trailing[mainAxis])),
         // We can never assign a width smaller than the padding and borders
         getPaddingAndBorderAxis(node, mainAxis)
       );
 var computeLayout = (function() {
         // For the cross dim, we add both sides at the end because the value
         // is aggregate via a max function. Intermediate negative values
         // can mess this computation otherwise
-        crossDim + getPaddingAndBorderAxis(node, crossAxis),
+        boundAxis(node, crossAxis, crossDim + getPaddingAndBorderAxis(node, crossAxis)),
         getPaddingAndBorderAxis(node, crossAxis)
       );
     }
 var computeLayout = (function() {
       // previously.
       if (!isDimDefined(child, crossAxis)) {
         child.layout[dim[crossAxis]] = fmaxf(
-          containerCrossAxis -
+          boundAxis(child, crossAxis, containerCrossAxis -
             getPaddingAndBorderAxis(node, crossAxis) -
-            getMarginAxis(child, crossAxis),
+            getMarginAxis(child, crossAxis)),
           // You never want to go smaller than padding
           getPaddingAndBorderAxis(child, crossAxis)
         );
 var computeLayout = (function() {
     node.layout[dim[mainAxis]] = fmaxf(
       // We're missing the last padding at this point to get the final
       // dimension
-      linesMainDim + getPaddingAndBorder(node, trailing[mainAxis]),
+      boundAxis(node, mainAxis, linesMainDim + getPaddingAndBorder(node, trailing[mainAxis])),
       // We can never assign a width smaller than the padding and borders
       getPaddingAndBorderAxis(node, mainAxis)
     );
 var computeLayout = (function() {
       // For the cross dim, we add both sides at the end because the value
       // is aggregate via a max function. Intermediate negative values
       // can mess this computation otherwise
-      linesCrossDim + getPaddingAndBorderAxis(node, crossAxis),
+      boundAxis(node, crossAxis, linesCrossDim + getPaddingAndBorderAxis(node, crossAxis)),
       getPaddingAndBorderAxis(node, crossAxis)
     );
   }
 var computeLayout = (function() {
         isPosDefined(child, leading[axis]) &&
         isPosDefined(child, trailing[axis])) {
       child.layout[dim[axis]] = fmaxf(
-        node.layout[dim[axis]] -
-          getPaddingAndBorderAxis(node, axis) -
-          getMarginAxis(child, axis) -
-          getPosition(child, leading[axis]) -
-          getPosition(child, trailing[axis]),
+        boundAxis(child, axis, node.layout[dim[axis]] -
+          getPaddingAndBorderAxis(node, axis) -
+          getMarginAxis(child, axis) -
+          getPosition(child, leading[axis]) -
+          getPosition(child, trailing[axis])
+        ),
         // You never want to go smaller than padding
         getPaddingAndBorderAxis(child, axis)
       );
--- a/src/__tests__/Layout-random-test.js
+++ b/src/__tests__/Layout-random-test.js
 describe('Random layout', function() {
     var node = {style: {}};
     randMinMax(node, 0.5, 'width', -100, 1000);
     randMinMax(node, 0.5, 'height', -100, 1000);
+    randMinMax(node, 0.5, 'minWidth', -100, 1000);
+    randMinMax(node, 0.5, 'minHeight', -100, 1000);
+    randMinMax(node, 0.5, 'maxWidth', -100, 1000);
+    randMinMax(node, 0.5, 'maxHeight', -100, 1000);
     randMinMax(node, 0.5, 'top', -10, 10);
     randMinMax(node, 0.5, 'left', -10, 10);
     randMinMax(node, 0.5, 'right', -10, 10);
--- a/src/__tests__/Layout-test.c
+++ b/src/__tests__/Layout-test.c
 int main()

     test("should layout flex wrap with a line bigger than container", root_node, root_layout);
   }
+
+  {
+    css_node_t *root_node = new_test_css_node();
+    {
+      css_node_t *node_0 = root_node;
+      node_0->style.dimensions[CSS_WIDTH] = 100;
+      node_0->style.dimensions[CSS_HEIGHT] = 200;
+      node_0->style.maxDimensions[CSS_WIDTH] = 90;
+      node_0->style.maxDimensions[CSS_HEIGHT] = 190;
+    }
+
+    css_node_t *root_layout = new_test_css_node();
+    {
+      css_node_t *node_0 = root_layout;
+      node_0->layout.position[CSS_TOP] = 0;
+      node_0->layout.position[CSS_LEFT] = 0;
+      node_0->layout.dimensions[CSS_WIDTH] = 90;
+      node_0->layout.dimensions[CSS_HEIGHT] = 190;
+    }
+
+    test("should use max bounds", root_node, root_layout);
+  }
+
+  {
+    css_node_t *root_node = new_test_css_node();
+    {
+      css_node_t *node_0 = root_node;
+      node_0->style.dimensions[CSS_WIDTH] = 100;
+      node_0->style.dimensions[CSS_HEIGHT] = 200;
+      node_0->style.minDimensions[CSS_WIDTH] = 110;
+      node_0->style.minDimensions[CSS_HEIGHT] = 210;
+    }
+
+    css_node_t *root_layout = new_test_css_node();
+    {
+      css_node_t *node_0 = root_layout;
+      node_0->layout.position[CSS_TOP] = 0;
+      node_0->layout.position[CSS_LEFT] = 0;
+      node_0->layout.dimensions[CSS_WIDTH] = 110;
+      node_0->layout.dimensions[CSS_HEIGHT] = 210;
+    }
+
+    test("should use min bounds", root_node, root_layout);
+  }
+
+  {
+    css_node_t *root_node = new_test_css_node();
+    {
+      css_node_t *node_0 = root_node;
+      node_0->style.dimensions[CSS_WIDTH] = 100;
+      node_0->style.dimensions[CSS_HEIGHT] = 200;
+      node_0->style.maxDimensions[CSS_WIDTH] = 90;
+      node_0->style.maxDimensions[CSS_HEIGHT] = 190;
+      node_0->style.minDimensions[CSS_WIDTH] = 110;
+      node_0->style.minDimensions[CSS_HEIGHT] = 210;
+    }
+
+    css_node_t *root_layout = new_test_css_node();
+    {
+      css_node_t *node_0 = root_layout;
+      node_0->layout.position[CSS_TOP] = 0;
+      node_0->layout.position[CSS_LEFT] = 0;
+      node_0->layout.dimensions[CSS_WIDTH] = 110;
+      node_0->layout.dimensions[CSS_HEIGHT] = 210;
+    }
+
+    test("should use min bounds over max bounds", root_node, root_layout);
+  }
+
+  {
+    css_node_t *root_node = new_test_css_node();
+    {
+      css_node_t *node_0 = root_node;
+      node_0->style.dimensions[CSS_WIDTH] = 100;
+      node_0->style.dimensions[CSS_HEIGHT] = 200;
+      node_0->style.maxDimensions[CSS_WIDTH] = 80;
+      node_0->style.maxDimensions[CSS_HEIGHT] = 180;
+      node_0->style.minDimensions[CSS_WIDTH] = 90;
+      node_0->style.minDimensions[CSS_HEIGHT] = 190;
+    }
+
+    css_node_t *root_layout = new_test_css_node();
+    {
+      css_node_t *node_0 = root_layout;
+      node_0->layout.position[CSS_TOP] = 0;
+      node_0->layout.position[CSS_LEFT] = 0;
+      node_0->layout.dimensions[CSS_WIDTH] = 90;
+      node_0->layout.dimensions[CSS_HEIGHT] = 190;
+    }
+
+    test("should use min bounds over max bounds and natural width", root_node, root_layout);
+  }
+
+  {
+    css_node_t *root_node = new_test_css_node();
+    {
+      css_node_t *node_0 = root_node;
+      node_0->style.dimensions[CSS_WIDTH] = 100;
+      node_0->style.dimensions[CSS_HEIGHT] = 200;
+      node_0->style.minDimensions[CSS_WIDTH] = -10;
+      node_0->style.minDimensions[CSS_HEIGHT] = -20;
+    }
+
+    css_node_t *root_layout = new_test_css_node();
+    {
+      css_node_t *node_0 = root_layout;
+      node_0->layout.position[CSS_TOP] = 0;
+      node_0->layout.position[CSS_LEFT] = 0;
+      node_0->layout.dimensions[CSS_WIDTH] = 100;
+      node_0->layout.dimensions[CSS_HEIGHT] = 200;
+    }
+
+    test("should ignore negative min bounds", root_node, root_layout);
+  }
+
+  {
+    css_node_t *root_node = new_test_css_node();
+    {
+      css_node_t *node_0 = root_node;
+      node_0->style.dimensions[CSS_WIDTH] = 100;
+      node_0->style.dimensions[CSS_HEIGHT] = 200;
+      node_0->style.maxDimensions[CSS_WIDTH] = -10;
+      node_0->style.maxDimensions[CSS_HEIGHT] = -20;
+    }
+
+    css_node_t *root_layout = new_test_css_node();
+    {
+      css_node_t *node_0 = root_layout;
+      node_0->layout.position[CSS_TOP] = 0;
+      node_0->layout.position[CSS_LEFT] = 0;
+      node_0->layout.dimensions[CSS_WIDTH] = 100;
+      node_0->layout.dimensions[CSS_HEIGHT] = 200;
+    }
+
+    test("should ignore negative max bounds", root_node, root_layout);
+  }
+
+  {
+    css_node_t *root_node = new_test_css_node();
+    {
+      css_node_t *node_0 = root_node;
+      node_0->style.maxDimensions[CSS_WIDTH] = 30;
+      node_0->style.maxDimensions[CSS_HEIGHT] = 10;
+      node_0->style.padding[CSS_LEFT] = 20;
+      node_0->style.padding[CSS_TOP] = 15;
+      node_0->style.padding[CSS_RIGHT] = 20;
+      node_0->style.padding[CSS_BOTTOM] = 15;
+    }
+
+    css_node_t *root_layout = new_test_css_node();
+    {
+      css_node_t *node_0 = root_layout;
+      node_0->layout.position[CSS_TOP] = 0;
+      node_0->layout.position[CSS_LEFT] = 0;
+      node_0->layout.dimensions[CSS_WIDTH] = 40;
+      node_0->layout.dimensions[CSS_HEIGHT] = 30;
+    }
+
+    test("should use padded size over max bounds", root_node, root_layout);
+  }
+
+  {
+    css_node_t *root_node = new_test_css_node();
+    {
+      css_node_t *node_0 = root_node;
+      node_0->style.minDimensions[CSS_WIDTH] = 50;
+      node_0->style.minDimensions[CSS_HEIGHT] = 40;
+      node_0->style.padding[CSS_LEFT] = 20;
+      node_0->style.padding[CSS_TOP] = 15;
+      node_0->style.padding[CSS_RIGHT] = 20;
+      node_0->style.padding[CSS_BOTTOM] = 15;
+    }
+
+    css_node_t *root_layout = new_test_css_node();
+    {
+      css_node_t *node_0 = root_layout;
+      node_0->layout.position[CSS_TOP] = 0;
+      node_0->layout.position[CSS_LEFT] = 0;
+      node_0->layout.dimensions[CSS_WIDTH] = 50;
+      node_0->layout.dimensions[CSS_HEIGHT] = 40;
+    }
+
+    test("should use min size over padded size", root_node, root_layout);
+  }
+
+  {
+    css_node_t *root_node = new_test_css_node();
+    {
+      css_node_t *node_0 = root_node;
+      node_0->style.flex_direction = CSS_FLEX_DIRECTION_ROW;
+      node_0->style.dimensions[CSS_WIDTH] = 300;
+      node_0->style.dimensions[CSS_HEIGHT] = 200;
+      init_css_node_children(node_0, 3);
+      {
+        css_node_t *node_1;
+        node_1 = node_0->get_child(node_0->context, 0);
+        node_1->style.flex = 1;
+        node_1 = node_0->get_child(node_0->context, 1);
+        node_1->style.flex = 1;
+        node_1->style.minDimensions[CSS_WIDTH] = 200;
+        node_1 = node_0->get_child(node_0->context, 2);
+        node_1->style.flex = 1;
+      }
+    }
+
+    css_node_t *root_layout = new_test_css_node();
+    {
+      css_node_t *node_0 = root_layout;
+      node_0->layout.position[CSS_TOP] = 0;
+      node_0->layout.position[CSS_LEFT] = 0;
+      node_0->layout.dimensions[CSS_WIDTH] = 300;
+      node_0->layout.dimensions[CSS_HEIGHT] = 200;
+      init_css_node_children(node_0, 3);
+      {
+        css_node_t *node_1;
+        node_1 = node_0->get_child(node_0->context, 0);
+        node_1->layout.position[CSS_TOP] = 0;
+        node_1->layout.position[CSS_LEFT] = 0;
+        node_1->layout.dimensions[CSS_WIDTH] = 50;
+        node_1->layout.dimensions[CSS_HEIGHT] = 200;
+        node_1 = node_0->get_child(node_0->context, 1);
+        node_1->layout.position[CSS_TOP] = 0;
+        node_1->layout.position[CSS_LEFT] = 50;
+        node_1->layout.dimensions[CSS_WIDTH] = 200;
+        node_1->layout.dimensions[CSS_HEIGHT] = 200;
+        node_1 = node_0->get_child(node_0->context, 2);
+        node_1->layout.position[CSS_TOP] = 0;
+        node_1->layout.position[CSS_LEFT] = 250;
+        node_1->layout.dimensions[CSS_WIDTH] = 50;
+        node_1->layout.dimensions[CSS_HEIGHT] = 200;
+      }
+    }
+
+    test("should override flex direction size with min bounds", root_node, root_layout);
+  }
+
+  {
+    css_node_t *root_node = new_test_css_node();
+    {
+      css_node_t *node_0 = root_node;
+      node_0->style.flex_direction = CSS_FLEX_DIRECTION_ROW;
+      node_0->style.dimensions[CSS_WIDTH] = 300;
+      node_0->style.dimensions[CSS_HEIGHT] = 200;
+      init_css_node_children(node_0, 3);
+      {
+        css_node_t *node_1;
+        node_1 = node_0->get_child(node_0->context, 0);
+        node_1->style.flex = 1;
+        node_1 = node_0->get_child(node_0->context, 1);
+        node_1->style.flex = 1;
+        node_1->style.maxDimensions[CSS_WIDTH] = 110;
+        node_1->style.minDimensions[CSS_WIDTH] = 90;
+        node_1 = node_0->get_child(node_0->context, 2);
+        node_1->style.flex = 1;
+      }
+    }
+
+    css_node_t *root_layout = new_test_css_node();
+    {
+      css_node_t *node_0 = root_layout;
+      node_0->layout.position[CSS_TOP] = 0;
+      node_0->layout.position[CSS_LEFT] = 0;
+      node_0->layout.dimensions[CSS_WIDTH] = 300;
+      node_0->layout.dimensions[CSS_HEIGHT] = 200;
+      init_css_node_children(node_0, 3);
+      {
+        css_node_t *node_1;
+        node_1 = node_0->get_child(node_0->context, 0);
+        node_1->layout.position[CSS_TOP] = 0;
+        node_1->layout.position[CSS_LEFT] = 0;
+        node_1->layout.dimensions[CSS_WIDTH] = 100;
+        node_1->layout.dimensions[CSS_HEIGHT] = 200;
+        node_1 = node_0->get_child(node_0->context, 1);
+        node_1->layout.position[CSS_TOP] = 0;
+        node_1->layout.position[CSS_LEFT] = 100;
+        node_1->layout.dimensions[CSS_WIDTH] = 100;
+        node_1->layout.dimensions[CSS_HEIGHT] = 200;
+        node_1 = node_0->get_child(node_0->context, 2);
+        node_1->layout.position[CSS_TOP] = 0;
+        node_1->layout.position[CSS_LEFT] = 200;
+        node_1->layout.dimensions[CSS_WIDTH] = 100;
+        node_1->layout.dimensions[CSS_HEIGHT] = 200;
+      }
+    }
+
+    test("should not override flex direction size within bounds", root_node, root_layout);
+  }
+
+  {
+    css_node_t *root_node = new_test_css_node();
+    {
+      css_node_t *node_0 = root_node;
+      node_0->style.flex_direction = CSS_FLEX_DIRECTION_ROW;
+      node_0->style.dimensions[CSS_WIDTH] = 300;
+      node_0->style.dimensions[CSS_HEIGHT] = 200;
+      init_css_node_children(node_0, 3);
+      {
+        css_node_t *node_1;
+        node_1 = node_0->get_child(node_0->context, 0);
+        node_1->style.flex = 1;
+        node_1 = node_0->get_child(node_0->context, 1);
+        node_1->style.flex = 1;
+        node_1->style.maxDimensions[CSS_WIDTH] = 60;
+        node_1 = node_0->get_child(node_0->context, 2);
+        node_1->style.flex = 1;
+      }
+    }
+
+    css_node_t *root_layout = new_test_css_node();
+    {
+      css_node_t *node_0 = root_layout;
+      node_0->layout.position[CSS_TOP] = 0;
+      node_0->layout.position[CSS_LEFT] = 0;
+      node_0->layout.dimensions[CSS_WIDTH] = 300;
+      node_0->layout.dimensions[CSS_HEIGHT] = 200;
+      init_css_node_children(node_0, 3);
+      {
+        css_node_t *node_1;
+        node_1 = node_0->get_child(node_0->context, 0);
+        node_1->layout.position[CSS_TOP] = 0;
+        node_1->layout.position[CSS_LEFT] = 0;
+        node_1->layout.dimensions[CSS_WIDTH] = 120;
+        node_1->layout.dimensions[CSS_HEIGHT] = 200;
+        node_1 = node_0->get_child(node_0->context, 1);
+        node_1->layout.position[CSS_TOP] = 0;
+        node_1->layout.position[CSS_LEFT] = 120;
+        node_1->layout.dimensions[CSS_WIDTH] = 60;
+        node_1->layout.dimensions[CSS_HEIGHT] = 200;
+        node_1 = node_0->get_child(node_0->context, 2);
+        node_1->layout.position[CSS_TOP] = 0;
+        node_1->layout.position[CSS_LEFT] = 180;
+        node_1->layout.dimensions[CSS_WIDTH] = 120;
+        node_1->layout.dimensions[CSS_HEIGHT] = 200;
+      }
+    }
+
+    test("should override flex direction size with max bounds", root_node, root_layout);
+  }
+
+  {
+    css_node_t *root_node = new_test_css_node();
+    {
+      css_node_t *node_0 = root_node;
+      node_0->style.dimensions[CSS_WIDTH] = 300;
+      node_0->style.dimensions[CSS_HEIGHT] = 200;
+      init_css_node_children(node_0, 1);
+      {
+        css_node_t *node_1;
+        node_1 = node_0->get_child(node_0->context, 0);
+        node_1->style.flex = 1;
+        node_1->style.maxDimensions[CSS_WIDTH] = 310;
+        node_1->style.minDimensions[CSS_WIDTH] = 290;
+      }
+    }
+
+    css_node_t *root_layout = new_test_css_node();
+    {
+      css_node_t *node_0 = root_layout;
+      node_0->layout.position[CSS_TOP] = 0;
+      node_0->layout.position[CSS_LEFT] = 0;
+      node_0->layout.dimensions[CSS_WIDTH] = 300;
+      node_0->layout.dimensions[CSS_HEIGHT] = 200;
+      init_css_node_children(node_0, 1);
+      {
+        css_node_t *node_1;
+        node_1 = node_0->get_child(node_0->context, 0);
+        node_1->layout.position[CSS_TOP] = 0;
+        node_1->layout.position[CSS_LEFT] = 0;
+        node_1->layout.dimensions[CSS_WIDTH] = 300;
+        node_1->layout.dimensions[CSS_HEIGHT] = 200;
+      }
+    }
+
+    test("should pre-fill child size within bounds", root_node, root_layout);
+  }
+
+  {
+    css_node_t *root_node = new_test_css_node();
+    {
+      css_node_t *node_0 = root_node;
+      node_0->style.dimensions[CSS_WIDTH] = 300;
+      node_0->style.dimensions[CSS_HEIGHT] = 200;
+      init_css_node_children(node_0, 1);
+      {
+        css_node_t *node_1;
+        node_1 = node_0->get_child(node_0->context, 0);
+        node_1->style.flex = 1;
+        node_1->style.maxDimensions[CSS_WIDTH] = 290;
+      }
+    }
+
+    css_node_t *root_layout = new_test_css_node();
+    {
+      css_node_t *node_0 = root_layout;
+      node_0->layout.position[CSS_TOP] = 0;
+      node_0->layout.position[CSS_LEFT] = 0;
+      node_0->layout.dimensions[CSS_WIDTH] = 300;
+      node_0->layout.dimensions[CSS_HEIGHT] = 200;
+      init_css_node_children(node_0, 1);
+      {
+        css_node_t *node_1;
+        node_1 = node_0->get_child(node_0->context, 0);
+        node_1->layout.position[CSS_TOP] = 0;
+        node_1->layout.position[CSS_LEFT] = 0;
+        node_1->layout.dimensions[CSS_WIDTH] = 290;
+        node_1->layout.dimensions[CSS_HEIGHT] = 200;
+      }
+    }
+
+    test("should pre-fill child size within max bound", root_node, root_layout);
+  }
+
+  {
+    css_node_t *root_node = new_test_css_node();
+    {
+      css_node_t *node_0 = root_node;
+      node_0->style.dimensions[CSS_WIDTH] = 300;
+      node_0->style.dimensions[CSS_HEIGHT] = 200;
+      init_css_node_children(node_0, 1);
+      {
+        css_node_t *node_1;
+        node_1 = node_0->get_child(node_0->context, 0);
+        node_1->style.flex = 1;
+        node_1->style.minDimensions[CSS_WIDTH] = 310;
+      }
+    }
+
+    css_node_t *root_layout = new_test_css_node();
+    {
+      css_node_t *node_0 = root_layout;
+      node_0->layout.position[CSS_TOP] = 0;
+      node_0->layout.position[CSS_LEFT] = 0;
+      node_0->layout.dimensions[CSS_WIDTH] = 300;
+      node_0->layout.dimensions[CSS_HEIGHT] = 200;
+      init_css_node_children(node_0, 1);
+      {
+        css_node_t *node_1;
+        node_1 = node_0->get_child(node_0->context, 0);
+        node_1->layout.position[CSS_TOP] = 0;
+        node_1->layout.position[CSS_LEFT] = 0;
+        node_1->layout.dimensions[CSS_WIDTH] = 310;
+        node_1->layout.dimensions[CSS_HEIGHT] = 200;
+      }
+    }
+
+    test("should pre-fill child size within min bounds", root_node, root_layout);
+  }
+
+  {
+    css_node_t *root_node = new_test_css_node();
+    {
+      css_node_t *node_0 = root_node;
+      node_0->style.maxDimensions[CSS_WIDTH] = 300;
+      node_0->style.maxDimensions[CSS_HEIGHT] = 700;
+      node_0->style.minDimensions[CSS_WIDTH] = 100;
+      node_0->style.
dimensions [ CSS_HEIGHT ] = 200 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should pre - fill child size within max bound " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + { <nl> + css_node_t * root_node = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_node ; <nl> + node_0 - > style . dimensions [ CSS_WIDTH ] = 300 ; <nl> + node_0 - > style . dimensions [ CSS_HEIGHT ] = 200 ; <nl> + init_css_node_children ( node_0 , 1 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > style . flex = 1 ; <nl> + node_1 - > style . minDimensions [ CSS_WIDTH ] = 310 ; <nl> + } <nl> + } <nl> + <nl> + css_node_t * root_layout = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_layout ; <nl> + node_0 - > layout . position [ CSS_TOP ] = 0 ; <nl> + node_0 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_0 - > layout . dimensions [ CSS_WIDTH ] = 300 ; <nl> + node_0 - > layout . dimensions [ CSS_HEIGHT ] = 200 ; <nl> + init_css_node_children ( node_0 , 1 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > layout . position [ CSS_TOP ] = 0 ; <nl> + node_1 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_1 - > layout . dimensions [ CSS_WIDTH ] = 310 ; <nl> + node_1 - > layout . dimensions [ CSS_HEIGHT ] = 200 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should pre - fill child size within min bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + { <nl> + css_node_t * root_node = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_node ; <nl> + node_0 - > style . maxDimensions [ CSS_WIDTH ] = 300 ; <nl> + node_0 - > style . maxDimensions [ CSS_HEIGHT ] = 700 ; <nl> + node_0 - > style . minDimensions [ CSS_WIDTH ] = 100 ; <nl> + node_0 - > style . 
minDimensions [ CSS_HEIGHT ] = 500 ; <nl> + init_css_node_children ( node_0 , 2 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > style . dimensions [ CSS_WIDTH ] = 200 ; <nl> + node_1 - > style . dimensions [ CSS_HEIGHT ] = 300 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 1 ) ; <nl> + node_1 - > style . dimensions [ CSS_WIDTH ] = 200 ; <nl> + node_1 - > style . dimensions [ CSS_HEIGHT ] = 300 ; <nl> + } <nl> + } <nl> + <nl> + css_node_t * root_layout = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_layout ; <nl> + node_0 - > layout . position [ CSS_TOP ] = 0 ; <nl> + node_0 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_0 - > layout . dimensions [ CSS_WIDTH ] = 200 ; <nl> + node_0 - > layout . dimensions [ CSS_HEIGHT ] = 600 ; <nl> + init_css_node_children ( node_0 , 2 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > layout . position [ CSS_TOP ] = 0 ; <nl> + node_1 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_1 - > layout . dimensions [ CSS_WIDTH ] = 200 ; <nl> + node_1 - > layout . dimensions [ CSS_HEIGHT ] = 300 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 1 ) ; <nl> + node_1 - > layout . position [ CSS_TOP ] = 300 ; <nl> + node_1 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_1 - > layout . dimensions [ CSS_WIDTH ] = 200 ; <nl> + node_1 - > layout . dimensions [ CSS_HEIGHT ] = 300 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should set parents size based on bounded children " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + { <nl> + css_node_t * root_node = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_node ; <nl> + node_0 - > style . maxDimensions [ CSS_WIDTH ] = 100 ; <nl> + node_0 - > style . 
maxDimensions [ CSS_HEIGHT ] = 500 ; <nl> + init_css_node_children ( node_0 , 2 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > style . dimensions [ CSS_WIDTH ] = 200 ; <nl> + node_1 - > style . dimensions [ CSS_HEIGHT ] = 300 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 1 ) ; <nl> + node_1 - > style . dimensions [ CSS_WIDTH ] = 200 ; <nl> + node_1 - > style . dimensions [ CSS_HEIGHT ] = 300 ; <nl> + } <nl> + } <nl> + <nl> + css_node_t * root_layout = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_layout ; <nl> + node_0 - > layout . position [ CSS_TOP ] = 0 ; <nl> + node_0 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_0 - > layout . dimensions [ CSS_WIDTH ] = 100 ; <nl> + node_0 - > layout . dimensions [ CSS_HEIGHT ] = 500 ; <nl> + init_css_node_children ( node_0 , 2 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > layout . position [ CSS_TOP ] = 0 ; <nl> + node_1 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_1 - > layout . dimensions [ CSS_WIDTH ] = 200 ; <nl> + node_1 - > layout . dimensions [ CSS_HEIGHT ] = 300 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 1 ) ; <nl> + node_1 - > layout . position [ CSS_TOP ] = 300 ; <nl> + node_1 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_1 - > layout . dimensions [ CSS_WIDTH ] = 200 ; <nl> + node_1 - > layout . dimensions [ CSS_HEIGHT ] = 300 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should set parents size based on max bounded children " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + { <nl> + css_node_t * root_node = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_node ; <nl> + node_0 - > style . minDimensions [ CSS_WIDTH ] = 300 ; <nl> + node_0 - > style . 
minDimensions [ CSS_HEIGHT ] = 700 ; <nl> + init_css_node_children ( node_0 , 2 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > style . dimensions [ CSS_WIDTH ] = 200 ; <nl> + node_1 - > style . dimensions [ CSS_HEIGHT ] = 300 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 1 ) ; <nl> + node_1 - > style . dimensions [ CSS_WIDTH ] = 200 ; <nl> + node_1 - > style . dimensions [ CSS_HEIGHT ] = 300 ; <nl> + } <nl> + } <nl> + <nl> + css_node_t * root_layout = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_layout ; <nl> + node_0 - > layout . position [ CSS_TOP ] = 0 ; <nl> + node_0 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_0 - > layout . dimensions [ CSS_WIDTH ] = 300 ; <nl> + node_0 - > layout . dimensions [ CSS_HEIGHT ] = 700 ; <nl> + init_css_node_children ( node_0 , 2 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > layout . position [ CSS_TOP ] = 0 ; <nl> + node_1 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_1 - > layout . dimensions [ CSS_WIDTH ] = 200 ; <nl> + node_1 - > layout . dimensions [ CSS_HEIGHT ] = 300 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 1 ) ; <nl> + node_1 - > layout . position [ CSS_TOP ] = 300 ; <nl> + node_1 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_1 - > layout . dimensions [ CSS_WIDTH ] = 200 ; <nl> + node_1 - > layout . dimensions [ CSS_HEIGHT ] = 300 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should set parents size based on min bounded children " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + { <nl> + css_node_t * root_node = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_node ; <nl> + node_0 - > style . align_items = CSS_ALIGN_STRETCH ; <nl> + node_0 - > style . 
dimensions [ CSS_WIDTH ] = 1000 ; <nl> + init_css_node_children ( node_0 , 1 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > style . dimensions [ CSS_HEIGHT ] = 100 ; <nl> + node_1 - > style . maxDimensions [ CSS_WIDTH ] = 1100 ; <nl> + node_1 - > style . maxDimensions [ CSS_HEIGHT ] = 110 ; <nl> + node_1 - > style . minDimensions [ CSS_WIDTH ] = 900 ; <nl> + node_1 - > style . minDimensions [ CSS_HEIGHT ] = 90 ; <nl> + } <nl> + } <nl> + <nl> + css_node_t * root_layout = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_layout ; <nl> + node_0 - > layout . position [ CSS_TOP ] = 0 ; <nl> + node_0 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_0 - > layout . dimensions [ CSS_WIDTH ] = 1000 ; <nl> + node_0 - > layout . dimensions [ CSS_HEIGHT ] = 100 ; <nl> + init_css_node_children ( node_0 , 1 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > layout . position [ CSS_TOP ] = 0 ; <nl> + node_1 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_1 - > layout . dimensions [ CSS_WIDTH ] = 1000 ; <nl> + node_1 - > layout . dimensions [ CSS_HEIGHT ] = 100 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should keep stretched size within bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + { <nl> + css_node_t * root_node = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_node ; <nl> + node_0 - > style . align_items = CSS_ALIGN_STRETCH ; <nl> + node_0 - > style . dimensions [ CSS_WIDTH ] = 1000 ; <nl> + init_css_node_children ( node_0 , 1 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > style . dimensions [ CSS_HEIGHT ] = 100 ; <nl> + node_1 - > style . maxDimensions [ CSS_WIDTH ] = 900 ; <nl> + node_1 - > style . 
maxDimensions [ CSS_HEIGHT ] = 90 ; <nl> + } <nl> + } <nl> + <nl> + css_node_t * root_layout = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_layout ; <nl> + node_0 - > layout . position [ CSS_TOP ] = 0 ; <nl> + node_0 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_0 - > layout . dimensions [ CSS_WIDTH ] = 1000 ; <nl> + node_0 - > layout . dimensions [ CSS_HEIGHT ] = 90 ; <nl> + init_css_node_children ( node_0 , 1 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > layout . position [ CSS_TOP ] = 0 ; <nl> + node_1 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_1 - > layout . dimensions [ CSS_WIDTH ] = 900 ; <nl> + node_1 - > layout . dimensions [ CSS_HEIGHT ] = 90 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should keep stretched size within max bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + { <nl> + css_node_t * root_node = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_node ; <nl> + node_0 - > style . align_items = CSS_ALIGN_STRETCH ; <nl> + node_0 - > style . dimensions [ CSS_WIDTH ] = 1000 ; <nl> + init_css_node_children ( node_0 , 1 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > style . dimensions [ CSS_HEIGHT ] = 100 ; <nl> + node_1 - > style . minDimensions [ CSS_WIDTH ] = 1100 ; <nl> + node_1 - > style . minDimensions [ CSS_HEIGHT ] = 110 ; <nl> + } <nl> + } <nl> + <nl> + css_node_t * root_layout = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_layout ; <nl> + node_0 - > layout . position [ CSS_TOP ] = 0 ; <nl> + node_0 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_0 - > layout . dimensions [ CSS_WIDTH ] = 1000 ; <nl> + node_0 - > layout . 
dimensions [ CSS_HEIGHT ] = 110 ; <nl> + init_css_node_children ( node_0 , 1 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > layout . position [ CSS_TOP ] = 0 ; <nl> + node_1 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_1 - > layout . dimensions [ CSS_WIDTH ] = 1100 ; <nl> + node_1 - > layout . dimensions [ CSS_HEIGHT ] = 110 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should keep stretched size within min bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + { <nl> + css_node_t * root_node = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_node ; <nl> + node_0 - > style . flex_direction = CSS_FLEX_DIRECTION_ROW ; <nl> + node_0 - > style . dimensions [ CSS_WIDTH ] = 1000 ; <nl> + init_css_node_children ( node_0 , 1 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > style . dimensions [ CSS_HEIGHT ] = 100 ; <nl> + node_1 - > style . minDimensions [ CSS_WIDTH ] = 100 ; <nl> + node_1 - > style . minDimensions [ CSS_HEIGHT ] = 110 ; <nl> + } <nl> + } <nl> + <nl> + css_node_t * root_layout = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_layout ; <nl> + node_0 - > layout . position [ CSS_TOP ] = 0 ; <nl> + node_0 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_0 - > layout . dimensions [ CSS_WIDTH ] = 1000 ; <nl> + node_0 - > layout . dimensions [ CSS_HEIGHT ] = 110 ; <nl> + init_css_node_children ( node_0 , 1 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > layout . position [ CSS_TOP ] = 0 ; <nl> + node_1 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_1 - > layout . dimensions [ CSS_WIDTH ] = 100 ; <nl> + node_1 - > layout . 
dimensions [ CSS_HEIGHT ] = 110 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should keep cross axis size within min bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + { <nl> + css_node_t * root_node = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_node ; <nl> + node_0 - > style . dimensions [ CSS_WIDTH ] = 1000 ; <nl> + node_0 - > style . dimensions [ CSS_HEIGHT ] = 1000 ; <nl> + init_css_node_children ( node_0 , 1 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > style . position_type = CSS_POSITION_ABSOLUTE ; <nl> + node_1 - > style . maxDimensions [ CSS_WIDTH ] = 500 ; <nl> + node_1 - > style . maxDimensions [ CSS_HEIGHT ] = 600 ; <nl> + node_1 - > style . position [ CSS_LEFT ] = 100 ; <nl> + node_1 - > style . position [ CSS_TOP ] = 100 ; <nl> + node_1 - > style . position [ CSS_RIGHT ] = 100 ; <nl> + node_1 - > style . position [ CSS_BOTTOM ] = 100 ; <nl> + } <nl> + } <nl> + <nl> + css_node_t * root_layout = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_layout ; <nl> + node_0 - > layout . position [ CSS_TOP ] = 0 ; <nl> + node_0 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_0 - > layout . dimensions [ CSS_WIDTH ] = 1000 ; <nl> + node_0 - > layout . dimensions [ CSS_HEIGHT ] = 1000 ; <nl> + init_css_node_children ( node_0 , 1 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > layout . position [ CSS_TOP ] = 100 ; <nl> + node_1 - > layout . position [ CSS_LEFT ] = 100 ; <nl> + node_1 - > layout . dimensions [ CSS_WIDTH ] = 500 ; <nl> + node_1 - > layout . 
dimensions [ CSS_HEIGHT ] = 600 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should layout node with position absolute , top and left and max bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + { <nl> + css_node_t * root_node = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_node ; <nl> + node_0 - > style . dimensions [ CSS_WIDTH ] = 1000 ; <nl> + node_0 - > style . dimensions [ CSS_HEIGHT ] = 1000 ; <nl> + init_css_node_children ( node_0 , 1 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > style . position_type = CSS_POSITION_ABSOLUTE ; <nl> + node_1 - > style . minDimensions [ CSS_WIDTH ] = 900 ; <nl> + node_1 - > style . minDimensions [ CSS_HEIGHT ] = 1000 ; <nl> + node_1 - > style . position [ CSS_LEFT ] = 100 ; <nl> + node_1 - > style . position [ CSS_TOP ] = 100 ; <nl> + node_1 - > style . position [ CSS_RIGHT ] = 100 ; <nl> + node_1 - > style . position [ CSS_BOTTOM ] = 100 ; <nl> + } <nl> + } <nl> + <nl> + css_node_t * root_layout = new_test_css_node ( ) ; <nl> + { <nl> + css_node_t * node_0 = root_layout ; <nl> + node_0 - > layout . position [ CSS_TOP ] = 0 ; <nl> + node_0 - > layout . position [ CSS_LEFT ] = 0 ; <nl> + node_0 - > layout . dimensions [ CSS_WIDTH ] = 1000 ; <nl> + node_0 - > layout . dimensions [ CSS_HEIGHT ] = 1000 ; <nl> + init_css_node_children ( node_0 , 1 ) ; <nl> + { <nl> + css_node_t * node_1 ; <nl> + node_1 = node_0 - > get_child ( node_0 - > context , 0 ) ; <nl> + node_1 - > layout . position [ CSS_TOP ] = 100 ; <nl> + node_1 - > layout . position [ CSS_LEFT ] = 100 ; <nl> + node_1 - > layout . dimensions [ CSS_WIDTH ] = 900 ; <nl> + node_1 - > layout . 
dimensions [ CSS_HEIGHT ] = 1000 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should layout node with position absolute , top and left and min bounds " , root_node , root_layout ) ; <nl> + } <nl> / * * END_GENERATED * * / <nl> return tests_finished ( ) ; <nl> } <nl> mmm a / src / __tests__ / Layout - test . js <nl> ppp b / src / __tests__ / Layout - test . js <nl> describe ( ' Layout ' , function ( ) { <nl> ) ; <nl> } ) ; <nl> <nl> + it ( ' should use max bounds ' , function ( ) { <nl> + testLayout ( <nl> + { style : { width : 100 , height : 200 , maxWidth : 90 , maxHeight : 190 } } , <nl> + { width : 90 , height : 190 , top : 0 , left : 0 } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should use min bounds ' , function ( ) { <nl> + testLayout ( <nl> + { style : { width : 100 , height : 200 , minWidth : 110 , minHeight : 210 } } , <nl> + { width : 110 , height : 210 , top : 0 , left : 0 } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should use min bounds over max bounds ' , function ( ) { <nl> + testLayout ( <nl> + { style : { width : 100 , height : 200 , minWidth : 110 , maxWidth : 90 , minHeight : 210 , maxHeight : 190 } } , <nl> + { width : 110 , height : 210 , top : 0 , left : 0 } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should use min bounds over max bounds and natural width ' , function ( ) { <nl> + testLayout ( <nl> + { style : { width : 100 , height : 200 , minWidth : 90 , maxWidth : 80 , minHeight : 190 , maxHeight : 180 } } , <nl> + { width : 90 , height : 190 , top : 0 , left : 0 } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should ignore negative min bounds ' , function ( ) { <nl> + testLayout ( <nl> + { style : { width : 100 , height : 200 , minWidth : - 10 , minHeight : - 20 } } , <nl> + { width : 100 , height : 200 , top : 0 , left : 0 } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should ignore negative max bounds ' , function ( ) { <nl> + testLayout ( <nl> + { style : { width : 100 , height : 200 , maxWidth : - 10 , maxHeight : - 20 } } , <nl> + 
{ width : 100 , height : 200 , top : 0 , left : 0 } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should use padded size over max bounds ' , function ( ) { <nl> + testLayout ( <nl> + { style : { paddingTop : 15 , paddingBottom : 15 , paddingLeft : 20 , paddingRight : 20 , maxWidth : 30 , maxHeight : 10 } } , <nl> + { width : 40 , height : 30 , top : 0 , left : 0 } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should use min size over padded size ' , function ( ) { <nl> + testLayout ( <nl> + { style : { paddingTop : 15 , paddingBottom : 15 , paddingLeft : 20 , paddingRight : 20 , minWidth : 50 , minHeight : 40 } } , <nl> + { width : 50 , height : 40 , top : 0 , left : 0 } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should override flex direction size with min bounds ' , function ( ) { <nl> + testLayout ( <nl> + { style : { width : 300 , height : 200 , flexDirection : ' row ' } , children : [ <nl> + { style : { flex : 1 } } , <nl> + { style : { flex : 1 , minWidth : 200 } } , <nl> + { style : { flex : 1 } } <nl> + ] } , <nl> + { width : 300 , height : 200 , top : 0 , left : 0 , children : [ <nl> + { width : 50 , height : 200 , top : 0 , left : 0 } , <nl> + { width : 200 , height : 200 , top : 0 , left : 50 } , <nl> + { width : 50 , height : 200 , top : 0 , left : 250 } <nl> + ] } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should not override flex direction size within bounds ' , function ( ) { <nl> + testLayout ( <nl> + { style : { width : 300 , height : 200 , flexDirection : ' row ' } , children : [ <nl> + { style : { flex : 1 } } , <nl> + { style : { flex : 1 , minWidth : 90 , maxWidth : 110 } } , <nl> + { style : { flex : 1 } } <nl> + ] } , <nl> + { width : 300 , height : 200 , top : 0 , left : 0 , children : [ <nl> + { width : 100 , height : 200 , top : 0 , left : 0 } , <nl> + { width : 100 , height : 200 , top : 0 , left : 100 } , <nl> + { width : 100 , height : 200 , top : 0 , left : 200 } <nl> + ] } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should override 
flex direction size with max bounds ' , function ( ) { <nl> + testLayout ( <nl> + { style : { width : 300 , height : 200 , flexDirection : ' row ' } , children : [ <nl> + { style : { flex : 1 } } , <nl> + { style : { flex : 1 , maxWidth : 60 } } , <nl> + { style : { flex : 1 } } <nl> + ] } , <nl> + { width : 300 , height : 200 , top : 0 , left : 0 , children : [ <nl> + { width : 120 , height : 200 , top : 0 , left : 0 } , <nl> + { width : 60 , height : 200 , top : 0 , left : 120 } , <nl> + { width : 120 , height : 200 , top : 0 , left : 180 } <nl> + ] } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should pre - fill child size within bounds ' , function ( ) { <nl> + testLayout ( <nl> + { style : { width : 300 , height : 200 } , children : [ <nl> + { style : { flex : 1 , minWidth : 290 , maxWidth : 310 } } , <nl> + ] } , <nl> + { width : 300 , height : 200 , top : 0 , left : 0 , children : [ <nl> + { width : 300 , height : 200 , top : 0 , left : 0 } , <nl> + ] } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should pre - fill child size within max bound ' , function ( ) { <nl> + testLayout ( <nl> + { style : { width : 300 , height : 200 } , children : [ <nl> + { style : { flex : 1 , maxWidth : 290 } } , <nl> + ] } , <nl> + { width : 300 , height : 200 , top : 0 , left : 0 , children : [ <nl> + { width : 290 , height : 200 , top : 0 , left : 0 } , <nl> + ] } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should pre - fill child size within min bounds ' , function ( ) { <nl> + testLayout ( <nl> + { style : { width : 300 , height : 200 } , children : [ <nl> + { style : { flex : 1 , minWidth : 310 } } , <nl> + ] } , <nl> + { width : 300 , height : 200 , top : 0 , left : 0 , children : [ <nl> + { width : 310 , height : 200 , top : 0 , left : 0 } , <nl> + ] } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should set parents size based on bounded children ' , function ( ) { <nl> + testLayout ( <nl> + { style : { minWidth : 100 , maxWidth : 300 , minHeight : 500 , maxHeight : 700 
} , children : [ <nl> + { style : { width : 200 , height : 300 } } , <nl> + { style : { width : 200 , height : 300 } } , <nl> + ] } , <nl> + { width : 200 , height : 600 , top : 0 , left : 0 , children : [ <nl> + { width : 200 , height : 300 , top : 0 , left : 0 } , <nl> + { width : 200 , height : 300 , top : 300 , left : 0 } , <nl> + ] } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should set parents size based on max bounded children ' , function ( ) { <nl> + testLayout ( <nl> + { style : { maxWidth : 100 , maxHeight : 500 } , children : [ <nl> + { style : { width : 200 , height : 300 } } , <nl> + { style : { width : 200 , height : 300 } } , <nl> + ] } , <nl> + { width : 100 , height : 500 , top : 0 , left : 0 , children : [ <nl> + { width : 200 , height : 300 , top : 0 , left : 0 } , <nl> + { width : 200 , height : 300 , top : 300 , left : 0 } , <nl> + ] } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should set parents size based on min bounded children ' , function ( ) { <nl> + testLayout ( <nl> + { style : { minWidth : 300 , minHeight : 700 } , children : [ <nl> + { style : { width : 200 , height : 300 } } , <nl> + { style : { width : 200 , height : 300 } } , <nl> + ] } , <nl> + { width : 300 , height : 700 , top : 0 , left : 0 , children : [ <nl> + { width : 200 , height : 300 , top : 0 , left : 0 } , <nl> + { width : 200 , height : 300 , top : 300 , left : 0 } , <nl> + ] } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should keep stretched size within bounds ' , function ( ) { <nl> + testLayout ( <nl> + { style : { width : 1000 , alignItems : ' stretch ' } , children : [ <nl> + { style : { height : 100 , minHeight : 90 , maxHeight : 110 , minWidth : 900 , maxWidth : 1100 } } <nl> + ] } , <nl> + { width : 1000 , height : 100 , top : 0 , left : 0 , children : [ <nl> + { width : 1000 , height : 100 , top : 0 , left : 0 } <nl> + ] } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should keep stretched size within max bounds ' , function ( ) { <nl> + testLayout ( 
<nl> + { style : { width : 1000 , alignItems : ' stretch ' } , children : [ <nl> + { style : { height : 100 , maxHeight : 90 , maxWidth : 900 } } <nl> + ] } , <nl> + { width : 1000 , height : 90 , top : 0 , left : 0 , children : [ <nl> + { width : 900 , height : 90 , top : 0 , left : 0 } <nl> + ] } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should keep stretched size within min bounds ' , function ( ) { <nl> + testLayout ( <nl> + { style : { width : 1000 , alignItems : ' stretch ' } , children : [ <nl> + { style : { height : 100 , minHeight : 110 , minWidth : 1100 } } <nl> + ] } , <nl> + { width : 1000 , height : 110 , top : 0 , left : 0 , children : [ <nl> + { width : 1100 , height : 110 , top : 0 , left : 0 } <nl> + ] } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should keep cross axis size within min bounds ' , function ( ) { <nl> + testLayout ( <nl> + { style : { width : 1000 , flexDirection : ' row ' } , children : [ <nl> + { style : { height : 100 , minHeight : 110 , minWidth : 100 } } <nl> + ] } , <nl> + { width : 1000 , height : 110 , top : 0 , left : 0 , children : [ <nl> + { width : 100 , height : 110 , top : 0 , left : 0 } <nl> + ] } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should layout node with position absolute , top and left and max bounds ' , function ( ) { <nl> + testLayout ( <nl> + { style : { width : 1000 , height : 1000 } , children : [ <nl> + { style : { position : ' absolute ' , top : 100 , left : 100 , bottom : 100 , right : 100 , maxWidth : 500 , maxHeight : 600 } } <nl> + ] } , <nl> + { width : 1000 , height : 1000 , top : 0 , left : 0 , children : [ <nl> + { width : 500 , height : 600 , top : 100 , left : 100 } , <nl> + ] } <nl> + ) ; <nl> + } ) ; <nl> + <nl> + it ( ' should layout node with position absolute , top and left and min bounds ' , function ( ) { <nl> + testLayout ( <nl> + { style : { width : 1000 , height : 1000 } , children : [ <nl> + { style : { position : ' absolute ' , top : 100 , left : 100 , bottom : 100 , right : 
100 , minWidth : 900 , minHeight : 1000 } } <nl> + ] } , <nl> + { width : 1000 , height : 1000 , top : 0 , left : 0 , children : [ <nl> + { width : 900 , height : 1000 , top : 100 , left : 100 } , <nl> + ] } <nl> + ) ; <nl> + } ) ; <nl> + <nl> } ) ; <nl> mmm a / src / java / src / com / facebook / csslayout / CSSStyle . java <nl> ppp b / src / java / src / com / facebook / csslayout / CSSStyle . java <nl> <nl> <nl> public float width = CSSConstants . UNDEFINED ; <nl> public float height = CSSConstants . UNDEFINED ; <nl> + <nl> + public float minWidth = CSSConstants . UNDEFINED ; <nl> + public float minHeight = CSSConstants . UNDEFINED ; <nl> + <nl> + public float maxWidth = CSSConstants . UNDEFINED ; <nl> + public float maxHeight = CSSConstants . UNDEFINED ; <nl> } <nl> mmm a / src / java / src / com / facebook / csslayout / LayoutEngine . java <nl> ppp b / src / java / src / com / facebook / csslayout / LayoutEngine . java <nl> private static float getPaddingAndBorderAxis ( CSSNode node , CSSFlexDirection axis <nl> getLeading ( axis ) ) + getPaddingAndBorder ( node , getTrailing ( axis ) ) ; <nl> } <nl> <nl> + private static float boundAxis ( CSSNode node , CSSFlexDirection axis , float value ) { <nl> + float min = CSSConstants . UNDEFINED ; <nl> + float max = CSSConstants . UNDEFINED ; <nl> + <nl> + if ( axis = = CSSFlexDirection . COLUMN ) { <nl> + min = node . style . minHeight ; <nl> + max = node . style . maxHeight ; <nl> + } else if ( axis = = CSSFlexDirection . ROW ) { <nl> + min = node . style . minWidth ; <nl> + max = node . style . maxWidth ; <nl> + } <nl> + <nl> + float boundValue = value ; <nl> + <nl> + if ( ! CSSConstants . isUndefined ( max ) & & max > = 0 . 0 & & boundValue > max ) { <nl> + boundValue = max ; <nl> + } <nl> + if ( ! CSSConstants . isUndefined ( min ) & & min > = 0 . 
0 & & boundValue < min ) { <nl> + boundValue = min ; <nl> + } <nl> + <nl> + return boundValue ; <nl> + } <nl> + <nl> private static void setDimensionFromStyle ( CSSNode node , CSSFlexDirection axis ) { <nl> / / The parent already computed us a width or height . We just skip it <nl> if ( ! CSSConstants . isUndefined ( getLayoutDimension ( node , getDim ( axis ) ) ) ) { <nl> private static void setDimensionFromStyle ( CSSNode node , CSSFlexDirection axis ) { <nl> <nl> / / The dimensions can never be smaller than the padding and border <nl> float maxLayoutDimension = Math . max ( <nl> - getStyleDimension ( node , getDim ( axis ) ) , <nl> + boundAxis ( node , axis , getStyleDimension ( node , getDim ( axis ) ) ) , <nl> getPaddingAndBorderAxis ( node , axis ) ) ; <nl> setLayoutDimension ( node , getDim ( axis ) , maxLayoutDimension ) ; <nl> } <nl> private static void layoutNodeImpl ( <nl> CSSLayoutContext layoutContext , <nl> CSSNode node , <nl> float parentMaxWidth ) { <nl> - <nl> for ( int i = 0 ; i < node . getChildCount ( ) ; i + + ) { <nl> node . getChildAt ( i ) . layout . resetResult ( ) ; <nl> } <nl> private static void layoutNodeImpl ( <nl> ! CSSConstants . isUndefined ( getLayoutDimension ( node , getDim ( crossAxis ) ) ) & & <nl> ! isDimDefined ( child , crossAxis ) ) { <nl> setLayoutDimension ( child , getDim ( crossAxis ) , Math . max ( <nl> - getLayoutDimension ( node , getDim ( crossAxis ) ) - <nl> + boundAxis ( child , crossAxis , getLayoutDimension ( node , getDim ( crossAxis ) ) - <nl> getPaddingAndBorderAxis ( node , crossAxis ) - <nl> - getMarginAxis ( child , crossAxis ) , <nl> + getMarginAxis ( child , crossAxis ) ) , <nl> / / You never want to go smaller than padding <nl> getPaddingAndBorderAxis ( child , crossAxis ) <nl> ) ) ; <nl> private static void layoutNodeImpl ( <nl> isPosDefined ( child , getLeading ( axis ) ) & & <nl> isPosDefined ( child , getTrailing ( axis ) ) ) { <nl> setLayoutDimension ( child , getDim ( axis ) , Math . 
max ( <nl> - getLayoutDimension ( node , getDim ( axis ) ) - <nl> - getPaddingAndBorderAxis ( node , axis ) - <nl> - getMarginAxis ( child , axis ) - <nl> - getPosition ( child , getLeading ( axis ) ) - <nl> - getPosition ( child , getTrailing ( axis ) ) , <nl> + boundAxis ( child , axis , getLayoutDimension ( node , getDim ( axis ) ) - <nl> + getPaddingAndBorderAxis ( node , axis ) - <nl> + getMarginAxis ( child , axis ) - <nl> + getPosition ( child , getLeading ( axis ) ) - <nl> + getPosition ( child , getTrailing ( axis ) ) ) , <nl> / / You never want to go smaller than padding <nl> getPaddingAndBorderAxis ( child , axis ) <nl> ) ) ; <nl> private static void layoutNodeImpl ( <nl> totalFlexible = totalFlexible + getFlex ( child ) ; <nl> <nl> / / Even if we don ' t know its exact size yet , we already know the padding , <nl> - / / border and margin . We ' ll use this partial information to compute the <nl> - / / remaining space . <nl> + / / border and margin . We ' ll use this partial information , which represents <nl> + / / the smallest possible size for the child , to compute the remaining <nl> + / / available space . <nl> nextContentDim = getPaddingAndBorderAxis ( child , mainAxis ) + <nl> getMarginAxis ( child , mainAxis ) ; <nl> <nl> private static void layoutNodeImpl ( <nl> / / remaining space <nl> if ( flexibleChildrenCount ! = 0 ) { <nl> float flexibleMainDim = remainingMainDim / totalFlexible ; <nl> + float baseMainDim ; <nl> + float boundMainDim ; <nl> + <nl> + / / Iterate over every child in the axis . If the flex share of remaining <nl> + / / space doesn ' t meet min / max bounds , remove this child from flex <nl> + / / calculations . <nl> + for ( i = startLine ; i < endLine ; + + i ) { <nl> + child = node . 
getChildAt ( i ) ; <nl> + if ( isFlex ( child ) ) { <nl> + baseMainDim = flexibleMainDim * getFlex ( child ) + <nl> + getPaddingAndBorderAxis ( child , mainAxis ) ; <nl> + boundMainDim = boundAxis ( child , mainAxis , baseMainDim ) ; <nl> + <nl> + if ( baseMainDim ! = boundMainDim ) { <nl> + remainingMainDim - = boundMainDim ; <nl> + totalFlexible - = getFlex ( child ) ; <nl> + } <nl> + } <nl> + } <nl> + flexibleMainDim = remainingMainDim / totalFlexible ; <nl> <nl> / / The non flexible children can overflow the container , in this case <nl> / / we should just assume that there is no space available . <nl> private static void layoutNodeImpl ( <nl> if ( isFlex ( child ) ) { <nl> / / At this point we know the final size of the element in the main <nl> / / dimension <nl> - setLayoutDimension ( child , getDim ( mainAxis ) , flexibleMainDim * getFlex ( child ) + <nl> - getPaddingAndBorderAxis ( child , mainAxis ) ) ; <nl> + setLayoutDimension ( child , getDim ( mainAxis ) , boundAxis ( child , mainAxis , <nl> + flexibleMainDim * getFlex ( child ) + getPaddingAndBorderAxis ( child , mainAxis ) <nl> + ) ) ; <nl> <nl> maxWidth = CSSConstants . UNDEFINED ; <nl> if ( isDimDefined ( node , CSSFlexDirection . ROW ) ) { <nl> private static void layoutNodeImpl ( <nl> mainDim = mainDim + betweenMainDim + getDimWithMargin ( child , mainAxis ) ; <nl> / / The cross dimension is the max of the elements dimension since there <nl> / / can only be one element in that cross dimension . <nl> - crossDim = Math . max ( crossDim , getDimWithMargin ( child , crossAxis ) ) ; <nl> + crossDim = Math . max ( crossDim , boundAxis ( child , crossAxis , getDimWithMargin ( child , crossAxis ) ) ) ; <nl> } <nl> } <nl> <nl> private static void layoutNodeImpl ( <nl> containerMainAxis = Math . 
max ( <nl> / / We ' re missing the last padding at this point to get the final <nl> / / dimension <nl> - mainDim + getPaddingAndBorder ( node , getTrailing ( mainAxis ) ) , <nl> + boundAxis ( node , mainAxis , mainDim + getPaddingAndBorder ( node , getTrailing ( mainAxis ) ) ) , <nl> / / We can never assign a width smaller than the padding and borders <nl> getPaddingAndBorderAxis ( node , mainAxis ) <nl> ) ; <nl> private static void layoutNodeImpl ( <nl> / / For the cross dim , we add both sides at the end because the value <nl> / / is aggregate via a max function . Intermediate negative values <nl> / / can mess this computation otherwise <nl> - crossDim + getPaddingAndBorderAxis ( node , crossAxis ) , <nl> + boundAxis ( node , crossAxis , crossDim + getPaddingAndBorderAxis ( node , crossAxis ) ) , <nl> getPaddingAndBorderAxis ( node , crossAxis ) <nl> ) ; <nl> } <nl> private static void layoutNodeImpl ( <nl> / / previously . <nl> if ( ! isDimDefined ( child , crossAxis ) ) { <nl> setLayoutDimension ( child , getDim ( crossAxis ) , Math . max ( <nl> - containerCrossAxis - <nl> + boundAxis ( child , crossAxis , containerCrossAxis - <nl> getPaddingAndBorderAxis ( node , crossAxis ) - <nl> - getMarginAxis ( child , crossAxis ) , <nl> + getMarginAxis ( child , crossAxis ) ) , <nl> / / You never want to go smaller than padding <nl> getPaddingAndBorderAxis ( child , crossAxis ) <nl> ) ) ; <nl> private static void layoutNodeImpl ( <nl> setLayoutDimension ( node , getDim ( mainAxis ) , Math . 
max ( <nl> / / We ' re missing the last padding at this point to get the final <nl> / / dimension <nl> - linesMainDim + getPaddingAndBorder ( node , getTrailing ( mainAxis ) ) , <nl> + boundAxis ( node , mainAxis , linesMainDim + getPaddingAndBorder ( node , getTrailing ( mainAxis ) ) ) , <nl> / / We can never assign a width smaller than the padding and borders <nl> getPaddingAndBorderAxis ( node , mainAxis ) <nl> ) ) ; <nl> private static void layoutNodeImpl ( <nl> / / For the cross dim , we add both sides at the end because the value <nl> / / is aggregate via a max function . Intermediate negative values <nl> / / can mess this computation otherwise <nl> - linesCrossDim + getPaddingAndBorderAxis ( node , crossAxis ) , <nl> + boundAxis ( node , crossAxis , linesCrossDim + getPaddingAndBorderAxis ( node , crossAxis ) ) , <nl> getPaddingAndBorderAxis ( node , crossAxis ) <nl> ) ) ; <nl> } <nl> private static void layoutNodeImpl ( <nl> isPosDefined ( child , getLeading ( axis ) ) & & <nl> isPosDefined ( child , getTrailing ( axis ) ) ) { <nl> setLayoutDimension ( child , getDim ( axis ) , Math . max ( <nl> - getLayoutDimension ( node , getDim ( axis ) ) - <nl> - getPaddingAndBorderAxis ( node , axis ) - <nl> - getMarginAxis ( child , axis ) - <nl> - getPosition ( child , getLeading ( axis ) ) - <nl> - getPosition ( child , getTrailing ( axis ) ) , <nl> + boundAxis ( child , axis , getLayoutDimension ( node , getDim ( axis ) ) - <nl> + getPaddingAndBorderAxis ( node , axis ) - <nl> + getMarginAxis ( child , axis ) - <nl> + getPosition ( child , getLeading ( axis ) ) - <nl> + getPosition ( child , getTrailing ( axis ) ) <nl> + ) , <nl> / / You never want to go smaller than padding <nl> getPaddingAndBorderAxis ( child , axis ) <nl> ) ) ; <nl> mmm a / src / java / tests / com / facebook / csslayout / LayoutEngineTest . java <nl> ppp b / src / java / tests / com / facebook / csslayout / LayoutEngineTest . 
java <nl> public void testCase94 ( ) <nl> <nl> test ( " should layout flex wrap with a line bigger than container " , root_node , root_layout ) ; <nl> } <nl> + <nl> + @ Test <nl> + public void testCase95 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . width = 100 ; <nl> + node_0 . style . height = 200 ; <nl> + node_0 . style . maxWidth = 90 ; <nl> + node_0 . style . maxHeight = 190 ; <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 90 ; <nl> + node_0 . layout . height = 190 ; <nl> + } <nl> + <nl> + test ( " should use max bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase96 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . width = 100 ; <nl> + node_0 . style . height = 200 ; <nl> + node_0 . style . minWidth = 110 ; <nl> + node_0 . style . minHeight = 210 ; <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 110 ; <nl> + node_0 . layout . height = 210 ; <nl> + } <nl> + <nl> + test ( " should use min bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase97 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . width = 100 ; <nl> + node_0 . style . height = 200 ; <nl> + node_0 . style . maxWidth = 90 ; <nl> + node_0 . style . maxHeight = 190 ; <nl> + node_0 . style . minWidth = 110 ; <nl> + node_0 . style . 
minHeight = 210 ; <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 110 ; <nl> + node_0 . layout . height = 210 ; <nl> + } <nl> + <nl> + test ( " should use min bounds over max bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase98 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . width = 100 ; <nl> + node_0 . style . height = 200 ; <nl> + node_0 . style . maxWidth = 80 ; <nl> + node_0 . style . maxHeight = 180 ; <nl> + node_0 . style . minWidth = 90 ; <nl> + node_0 . style . minHeight = 190 ; <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 90 ; <nl> + node_0 . layout . height = 190 ; <nl> + } <nl> + <nl> + test ( " should use min bounds over max bounds and natural width " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase99 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . width = 100 ; <nl> + node_0 . style . height = 200 ; <nl> + node_0 . style . minWidth = - 10 ; <nl> + node_0 . style . minHeight = - 20 ; <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 100 ; <nl> + node_0 . layout . 
height = 200 ; <nl> + } <nl> + <nl> + test ( " should ignore negative min bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase100 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . width = 100 ; <nl> + node_0 . style . height = 200 ; <nl> + node_0 . style . maxWidth = - 10 ; <nl> + node_0 . style . maxHeight = - 20 ; <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 100 ; <nl> + node_0 . layout . height = 200 ; <nl> + } <nl> + <nl> + test ( " should ignore negative max bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase101 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . maxWidth = 30 ; <nl> + node_0 . style . maxHeight = 10 ; <nl> + node_0 . style . padding [ Spacing . LEFT ] = 20 ; <nl> + node_0 . style . padding [ Spacing . TOP ] = 15 ; <nl> + node_0 . style . padding [ Spacing . RIGHT ] = 20 ; <nl> + node_0 . style . padding [ Spacing . BOTTOM ] = 15 ; <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 40 ; <nl> + node_0 . layout . height = 30 ; <nl> + } <nl> + <nl> + test ( " should use padded size over max bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase102 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . minWidth = 50 ; <nl> + node_0 . style . minHeight = 40 ; <nl> + node_0 . style . padding [ Spacing . 
LEFT ] = 20 ; <nl> + node_0 . style . padding [ Spacing . TOP ] = 15 ; <nl> + node_0 . style . padding [ Spacing . RIGHT ] = 20 ; <nl> + node_0 . style . padding [ Spacing . BOTTOM ] = 15 ; <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 50 ; <nl> + node_0 . layout . height = 40 ; <nl> + } <nl> + <nl> + test ( " should use min size over padded size " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase103 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . flexDirection = CSSFlexDirection . ROW ; <nl> + node_0 . style . width = 300 ; <nl> + node_0 . style . height = 200 ; <nl> + addChildren ( node_0 , 3 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . style . flex = 1 ; <nl> + node_1 = node_0 . getChildAt ( 1 ) ; <nl> + node_1 . style . flex = 1 ; <nl> + node_1 . style . minWidth = 200 ; <nl> + node_1 = node_0 . getChildAt ( 2 ) ; <nl> + node_1 . style . flex = 1 ; <nl> + } <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 300 ; <nl> + node_0 . layout . height = 200 ; <nl> + addChildren ( node_0 , 3 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . layout . y = 0 ; <nl> + node_1 . layout . x = 0 ; <nl> + node_1 . layout . width = 50 ; <nl> + node_1 . layout . height = 200 ; <nl> + node_1 = node_0 . getChildAt ( 1 ) ; <nl> + node_1 . layout . y = 0 ; <nl> + node_1 . layout . x = 50 ; <nl> + node_1 . layout . width = 200 ; <nl> + node_1 . layout . height = 200 ; <nl> + node_1 = node_0 . 
getChildAt ( 2 ) ; <nl> + node_1 . layout . y = 0 ; <nl> + node_1 . layout . x = 250 ; <nl> + node_1 . layout . width = 50 ; <nl> + node_1 . layout . height = 200 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should override flex direction size with min bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase104 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . flexDirection = CSSFlexDirection . ROW ; <nl> + node_0 . style . width = 300 ; <nl> + node_0 . style . height = 200 ; <nl> + addChildren ( node_0 , 3 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . style . flex = 1 ; <nl> + node_1 = node_0 . getChildAt ( 1 ) ; <nl> + node_1 . style . flex = 1 ; <nl> + node_1 . style . maxWidth = 110 ; <nl> + node_1 . style . minWidth = 90 ; <nl> + node_1 = node_0 . getChildAt ( 2 ) ; <nl> + node_1 . style . flex = 1 ; <nl> + } <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 300 ; <nl> + node_0 . layout . height = 200 ; <nl> + addChildren ( node_0 , 3 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . layout . y = 0 ; <nl> + node_1 . layout . x = 0 ; <nl> + node_1 . layout . width = 100 ; <nl> + node_1 . layout . height = 200 ; <nl> + node_1 = node_0 . getChildAt ( 1 ) ; <nl> + node_1 . layout . y = 0 ; <nl> + node_1 . layout . x = 100 ; <nl> + node_1 . layout . width = 100 ; <nl> + node_1 . layout . height = 200 ; <nl> + node_1 = node_0 . getChildAt ( 2 ) ; <nl> + node_1 . layout . y = 0 ; <nl> + node_1 . layout . x = 200 ; <nl> + node_1 . layout . width = 100 ; <nl> + node_1 . layout . 
height = 200 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should not override flex direction size within bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase105 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . flexDirection = CSSFlexDirection . ROW ; <nl> + node_0 . style . width = 300 ; <nl> + node_0 . style . height = 200 ; <nl> + addChildren ( node_0 , 3 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . style . flex = 1 ; <nl> + node_1 = node_0 . getChildAt ( 1 ) ; <nl> + node_1 . style . flex = 1 ; <nl> + node_1 . style . maxWidth = 60 ; <nl> + node_1 = node_0 . getChildAt ( 2 ) ; <nl> + node_1 . style . flex = 1 ; <nl> + } <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 300 ; <nl> + node_0 . layout . height = 200 ; <nl> + addChildren ( node_0 , 3 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . layout . y = 0 ; <nl> + node_1 . layout . x = 0 ; <nl> + node_1 . layout . width = 120 ; <nl> + node_1 . layout . height = 200 ; <nl> + node_1 = node_0 . getChildAt ( 1 ) ; <nl> + node_1 . layout . y = 0 ; <nl> + node_1 . layout . x = 120 ; <nl> + node_1 . layout . width = 60 ; <nl> + node_1 . layout . height = 200 ; <nl> + node_1 = node_0 . getChildAt ( 2 ) ; <nl> + node_1 . layout . y = 0 ; <nl> + node_1 . layout . x = 180 ; <nl> + node_1 . layout . width = 120 ; <nl> + node_1 . layout . 
height = 200 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should override flex direction size with max bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase106 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . width = 300 ; <nl> + node_0 . style . height = 200 ; <nl> + addChildren ( node_0 , 1 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . style . flex = 1 ; <nl> + node_1 . style . maxWidth = 310 ; <nl> + node_1 . style . minWidth = 290 ; <nl> + } <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 300 ; <nl> + node_0 . layout . height = 200 ; <nl> + addChildren ( node_0 , 1 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . layout . y = 0 ; <nl> + node_1 . layout . x = 0 ; <nl> + node_1 . layout . width = 300 ; <nl> + node_1 . layout . height = 200 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should pre - fill child size within bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase107 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . width = 300 ; <nl> + node_0 . style . height = 200 ; <nl> + addChildren ( node_0 , 1 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . style . flex = 1 ; <nl> + node_1 . style . maxWidth = 290 ; <nl> + } <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 300 ; <nl> + node_0 . layout . 
height = 200 ; <nl> + addChildren ( node_0 , 1 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . layout . y = 0 ; <nl> + node_1 . layout . x = 0 ; <nl> + node_1 . layout . width = 290 ; <nl> + node_1 . layout . height = 200 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should pre - fill child size within max bound " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase108 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . width = 300 ; <nl> + node_0 . style . height = 200 ; <nl> + addChildren ( node_0 , 1 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . style . flex = 1 ; <nl> + node_1 . style . minWidth = 310 ; <nl> + } <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 300 ; <nl> + node_0 . layout . height = 200 ; <nl> + addChildren ( node_0 , 1 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . layout . y = 0 ; <nl> + node_1 . layout . x = 0 ; <nl> + node_1 . layout . width = 310 ; <nl> + node_1 . layout . height = 200 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should pre - fill child size within min bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase109 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . maxWidth = 300 ; <nl> + node_0 . style . maxHeight = 700 ; <nl> + node_0 . style . minWidth = 100 ; <nl> + node_0 . style . minHeight = 500 ; <nl> + addChildren ( node_0 , 2 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . style . 
width = 200 ; <nl> + node_1 . style . height = 300 ; <nl> + node_1 = node_0 . getChildAt ( 1 ) ; <nl> + node_1 . style . width = 200 ; <nl> + node_1 . style . height = 300 ; <nl> + } <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 200 ; <nl> + node_0 . layout . height = 600 ; <nl> + addChildren ( node_0 , 2 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . layout . y = 0 ; <nl> + node_1 . layout . x = 0 ; <nl> + node_1 . layout . width = 200 ; <nl> + node_1 . layout . height = 300 ; <nl> + node_1 = node_0 . getChildAt ( 1 ) ; <nl> + node_1 . layout . y = 300 ; <nl> + node_1 . layout . x = 0 ; <nl> + node_1 . layout . width = 200 ; <nl> + node_1 . layout . height = 300 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should set parents size based on bounded children " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase110 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . maxWidth = 100 ; <nl> + node_0 . style . maxHeight = 500 ; <nl> + addChildren ( node_0 , 2 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . style . width = 200 ; <nl> + node_1 . style . height = 300 ; <nl> + node_1 = node_0 . getChildAt ( 1 ) ; <nl> + node_1 . style . width = 200 ; <nl> + node_1 . style . height = 300 ; <nl> + } <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 100 ; <nl> + node_0 . layout . height = 500 ; <nl> + addChildren ( node_0 , 2 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . 
getChildAt ( 0 ) ; <nl> + node_1 . layout . y = 0 ; <nl> + node_1 . layout . x = 0 ; <nl> + node_1 . layout . width = 200 ; <nl> + node_1 . layout . height = 300 ; <nl> + node_1 = node_0 . getChildAt ( 1 ) ; <nl> + node_1 . layout . y = 300 ; <nl> + node_1 . layout . x = 0 ; <nl> + node_1 . layout . width = 200 ; <nl> + node_1 . layout . height = 300 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should set parents size based on max bounded children " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase111 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . minWidth = 300 ; <nl> + node_0 . style . minHeight = 700 ; <nl> + addChildren ( node_0 , 2 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . style . width = 200 ; <nl> + node_1 . style . height = 300 ; <nl> + node_1 = node_0 . getChildAt ( 1 ) ; <nl> + node_1 . style . width = 200 ; <nl> + node_1 . style . height = 300 ; <nl> + } <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 300 ; <nl> + node_0 . layout . height = 700 ; <nl> + addChildren ( node_0 , 2 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . layout . y = 0 ; <nl> + node_1 . layout . x = 0 ; <nl> + node_1 . layout . width = 200 ; <nl> + node_1 . layout . height = 300 ; <nl> + node_1 = node_0 . getChildAt ( 1 ) ; <nl> + node_1 . layout . y = 300 ; <nl> + node_1 . layout . x = 0 ; <nl> + node_1 . layout . width = 200 ; <nl> + node_1 . layout . 
height = 300 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should set parents size based on min bounded children " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase112 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . alignItems = CSSAlign . STRETCH ; <nl> + node_0 . style . width = 1000 ; <nl> + addChildren ( node_0 , 1 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . style . height = 100 ; <nl> + node_1 . style . maxWidth = 1100 ; <nl> + node_1 . style . maxHeight = 110 ; <nl> + node_1 . style . minWidth = 900 ; <nl> + node_1 . style . minHeight = 90 ; <nl> + } <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 1000 ; <nl> + node_0 . layout . height = 100 ; <nl> + addChildren ( node_0 , 1 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . layout . y = 0 ; <nl> + node_1 . layout . x = 0 ; <nl> + node_1 . layout . width = 1000 ; <nl> + node_1 . layout . height = 100 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should keep stretched size within bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase113 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . alignItems = CSSAlign . STRETCH ; <nl> + node_0 . style . width = 1000 ; <nl> + addChildren ( node_0 , 1 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . style . height = 100 ; <nl> + node_1 . style . maxWidth = 900 ; <nl> + node_1 . style . 
maxHeight = 90 ; <nl> + } <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 1000 ; <nl> + node_0 . layout . height = 90 ; <nl> + addChildren ( node_0 , 1 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . layout . y = 0 ; <nl> + node_1 . layout . x = 0 ; <nl> + node_1 . layout . width = 900 ; <nl> + node_1 . layout . height = 90 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should keep stretched size within max bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase114 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . alignItems = CSSAlign . STRETCH ; <nl> + node_0 . style . width = 1000 ; <nl> + addChildren ( node_0 , 1 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . style . height = 100 ; <nl> + node_1 . style . minWidth = 1100 ; <nl> + node_1 . style . minHeight = 110 ; <nl> + } <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 1000 ; <nl> + node_0 . layout . height = 110 ; <nl> + addChildren ( node_0 , 1 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . layout . y = 0 ; <nl> + node_1 . layout . x = 0 ; <nl> + node_1 . layout . width = 1100 ; <nl> + node_1 . layout . 
height = 110 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should keep stretched size within min bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase115 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . flexDirection = CSSFlexDirection . ROW ; <nl> + node_0 . style . width = 1000 ; <nl> + addChildren ( node_0 , 1 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . style . height = 100 ; <nl> + node_1 . style . minWidth = 100 ; <nl> + node_1 . style . minHeight = 110 ; <nl> + } <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 1000 ; <nl> + node_0 . layout . height = 110 ; <nl> + addChildren ( node_0 , 1 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . layout . y = 0 ; <nl> + node_1 . layout . x = 0 ; <nl> + node_1 . layout . width = 100 ; <nl> + node_1 . layout . height = 110 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should keep cross axis size within min bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase116 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . width = 1000 ; <nl> + node_0 . style . height = 1000 ; <nl> + addChildren ( node_0 , 1 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . style . positionType = CSSPositionType . ABSOLUTE ; <nl> + node_1 . style . maxWidth = 500 ; <nl> + node_1 . style . maxHeight = 600 ; <nl> + node_1 . style . positionLeft = 100 ; <nl> + node_1 . style . positionTop = 100 ; <nl> + node_1 . style . positionRight = 100 ; <nl> + node_1 . style . 
positionBottom = 100 ; <nl> + } <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 1000 ; <nl> + node_0 . layout . height = 1000 ; <nl> + addChildren ( node_0 , 1 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . layout . y = 100 ; <nl> + node_1 . layout . x = 100 ; <nl> + node_1 . layout . width = 500 ; <nl> + node_1 . layout . height = 600 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should layout node with position absolute , top and left and max bounds " , root_node , root_layout ) ; <nl> + } <nl> + <nl> + @ Test <nl> + public void testCase117 ( ) <nl> + { <nl> + TestCSSNode root_node = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_node ; <nl> + node_0 . style . width = 1000 ; <nl> + node_0 . style . height = 1000 ; <nl> + addChildren ( node_0 , 1 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . style . positionType = CSSPositionType . ABSOLUTE ; <nl> + node_1 . style . minWidth = 900 ; <nl> + node_1 . style . minHeight = 1000 ; <nl> + node_1 . style . positionLeft = 100 ; <nl> + node_1 . style . positionTop = 100 ; <nl> + node_1 . style . positionRight = 100 ; <nl> + node_1 . style . positionBottom = 100 ; <nl> + } <nl> + } <nl> + <nl> + TestCSSNode root_layout = new TestCSSNode ( ) ; <nl> + { <nl> + TestCSSNode node_0 = root_layout ; <nl> + node_0 . layout . y = 0 ; <nl> + node_0 . layout . x = 0 ; <nl> + node_0 . layout . width = 1000 ; <nl> + node_0 . layout . height = 1000 ; <nl> + addChildren ( node_0 , 1 ) ; <nl> + { <nl> + TestCSSNode node_1 ; <nl> + node_1 = node_0 . getChildAt ( 0 ) ; <nl> + node_1 . layout . y = 100 ; <nl> + node_1 . layout . x = 100 ; <nl> + node_1 . layout . width = 900 ; <nl> + node_1 . layout . 
height = 1000 ; <nl> + } <nl> + } <nl> + <nl> + test ( " should layout node with position absolute , top and left and min bounds " , root_node , root_layout ) ; <nl> + } <nl> / * * END_GENERATED * * / <nl> } <nl> mmm a / src / transpile . js <nl> ppp b / src / transpile . js <nl> function printLayout ( test ) { <nl> addFloat ( node , ' flex ' , ' flex ' ) ; <nl> addFloat ( node , ' width ' , ' dimensions [ CSS_WIDTH ] ' ) ; <nl> addFloat ( node , ' height ' , ' dimensions [ CSS_HEIGHT ] ' ) ; <nl> + addFloat ( node , ' maxWidth ' , ' maxDimensions [ CSS_WIDTH ] ' ) ; <nl> + addFloat ( node , ' maxHeight ' , ' maxDimensions [ CSS_HEIGHT ] ' ) ; <nl> + addFloat ( node , ' minWidth ' , ' minDimensions [ CSS_WIDTH ] ' ) ; <nl> + addFloat ( node , ' minHeight ' , ' minDimensions [ CSS_HEIGHT ] ' ) ; <nl> addSpacing ( node , ' margin ' , ' ' ) ; <nl> addSpacing ( node , ' padding ' , ' ' ) ; <nl> addSpacing ( node , ' border ' , ' Width ' ) ; <nl> function transpileAnnotatedJStoC ( jsCode ) { <nl> . replace ( / \ . children \ . length / g , ' . children_count ' ) <nl> . replace ( / \ . width / g , ' . dimensions [ CSS_WIDTH ] ' ) <nl> . replace ( / \ . height / g , ' . dimensions [ CSS_HEIGHT ] ' ) <nl> + . replace ( / \ . maxWidth / g , ' . maxDimensions [ CSS_WIDTH ] ' ) <nl> + . replace ( / \ . maxHeight / g , ' . maxDimensions [ CSS_HEIGHT ] ' ) <nl> + . replace ( / \ . minWidth / g , ' . minDimensions [ CSS_WIDTH ] ' ) <nl> + . replace ( / \ . minHeight / g , ' . minDimensions [ CSS_HEIGHT ] ' ) <nl> . replace ( / layout \ [ dim / g , ' layout . dimensions [ dim ' ) <nl> . replace ( / layout \ [ pos / g , ' layout . position [ pos ' ) <nl> . replace ( / layout \ [ leading / g , ' layout . position [ leading ' ) <nl>
|
Merge pull request from freakboy3742 / minmax
|
facebook/yoga
|
b664517e52cbd99901b7e4b9a858d27d54243732
|
2015-03-31T15:43:37Z
|
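The yoga record above adds `minWidth`/`maxWidth` and `minHeight`/`maxHeight` handling, and its generated tests check one core behavior: a computed dimension is clamped into its min/max bounds, with min taking precedence. A minimal Python sketch of that clamping (the function name and signature are illustrative, not yoga's actual API):

```python
def bound_axis(value, min_value=None, max_value=None):
    """Clamp a computed layout dimension into optional min/max bounds.

    Mirrors the behavior the generated tests exercise: the max cap is
    applied first, then the min floor, so min wins when they conflict.
    """
    if max_value is not None and value > max_value:
        value = max_value
    if min_value is not None and value < min_value:
        value = min_value
    return value

# testCase115: a child with height 100 and minHeight 110 lays out at 110
# ("should keep cross axis size within min bounds").
assert bound_axis(100, min_value=110) == 110

# testCase116: an absolute child sized 800 by its insets is capped by
# maxWidth = 500 ("... top and left and max bounds").
assert bound_axis(800, max_value=500) == 500
```

The same helper explains testCase117, where min bounds (900/1000) override the inset-derived size.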
mmm a / resources / timezones / map_data . txt <nl> ppp b / resources / timezones / map_data . txt <nl> <nl> 3497 | Europe / Madrid <nl> 3498 | Europe / Madrid <nl> 350 | Europe / Gibraltar <nl> - 351 | Atlantic / Azores & Atlantic / Canary <nl> - 35121 | Atlantic / Canary <nl> - 35122 | Atlantic / Canary <nl> - 35123 | Atlantic / Canary <nl> - 35124 | Atlantic / Canary <nl> - 35125 | Atlantic / Canary <nl> - 35126 | Atlantic / Canary <nl> - 35127 | Atlantic / Canary <nl> - 35128 | Atlantic / Canary <nl> - 351291 | Atlantic / Canary <nl> + 351 | Europe / Lisbon & Atlantic / Azores & Atlantic / Madeira <nl> + 35121 | Europe / Lisbon <nl> + 35122 | Europe / Lisbon <nl> + 35123 | Europe / Lisbon <nl> + 35124 | Europe / Lisbon <nl> + 35125 | Europe / Lisbon <nl> + 35126 | Europe / Lisbon <nl> + 35127 | Europe / Lisbon <nl> + 35128 | Europe / Lisbon <nl> + 351291 | Atlantic / Madeira <nl> 351292 | Atlantic / Azores <nl> 351295 | Atlantic / Azores <nl> 351296 | Atlantic / Azores <nl>
|
Merge pull request from Talkdesk / fix - portuguese - timezones
|
google/libphonenumber
|
ea653c38ccc801b53e552bac1edf9587274e6b30
|
2015-02-20T09:28:14Z
|
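The libphonenumber record above remaps the Portuguese `351…` prefixes from Canary-Islands zones to `Europe/Lisbon`, `Atlantic/Madeira`, and `Atlantic/Azores`. Rows in `map_data.txt` resolve by longest matching prefix, so `351291` (Madeira) wins over the generic `351` entry. A hedged sketch of that resolution — the data rows are taken from the diff, but the lookup helper is illustrative, not libphonenumber's actual API:

```python
# Prefix -> timezone(s), as in the corrected map_data.txt rows.
PREFIX_MAP = {
    "351": ["Europe/Lisbon", "Atlantic/Azores", "Atlantic/Madeira"],
    "35121": ["Europe/Lisbon"],
    "351291": ["Atlantic/Madeira"],
    "351292": ["Atlantic/Azores"],
}

def timezones_for(number: str):
    """Return the timezones of the longest prefix matching the number."""
    for end in range(len(number), 0, -1):
        zones = PREFIX_MAP.get(number[:end])
        if zones:
            return zones
    return []

# Madeira's 291 area code beats the generic country-level 351 row.
assert timezones_for("351291123456") == ["Atlantic/Madeira"]
# A Lisbon-area 21 number falls to the 35121 row.
assert timezones_for("351211234567") == ["Europe/Lisbon"]
```

This longest-prefix behavior is why the fix only needed to rewrite the area-code rows rather than enumerate every number range.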
mmm a / docs / AccessControlInStdlib . rst <nl> ppp b / docs / AccessControlInStdlib . rst <nl> the following cases : <nl> <nl> * ` private ` and ` internal ` symbols nested within ` public ` types : : <nl> <nl> - public struct Dictionary { <nl> - var _representation : _DictionaryRepresentation <nl> - } <nl> + public struct Dictionary { <nl> + var _representation : _DictionaryRepresentation <nl> + } <nl> <nl> ` private ` modifier <nl> = = = = = = = = = = = = = = = = = = <nl>
|
[ docs ] fix broken ReST , which breaks build
|
apple/swift
|
6735788ed0c7576650551855f479bf093c4e0041
|
2014-07-23T12:41:16Z
|
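The broken-ReST fix above re-indents a literal block so it nests under the line ending in `::` — reStructuredText requires the block following `::` to be indented past the introducing text, or the build fails. A rough illustrative checker of just that rule (a deliberate simplification, not docutils' full parsing):

```python
def literal_blocks_indented(text: str) -> bool:
    """Very rough check: each line ending in '::' must be followed
    (after optional blank lines) by a more-indented line."""
    lines = text.splitlines()
    for i, line in enumerate(lines):
        if not line.rstrip().endswith("::"):
            continue
        indent = len(line) - len(line.lstrip())
        rest = [l for l in lines[i + 1:] if l.strip()]
        if rest and len(rest[0]) - len(rest[0].lstrip()) <= indent:
            return False  # literal block not nested under its '::' line
    return True

# The shape of the bug the commit fixes: flush-left code after '::'.
broken = "* nested within public types::\n\npublic struct Dictionary {\n"
fixed = "* nested within public types::\n\n    public struct Dictionary {\n"
assert not literal_blocks_indented(broken)
assert literal_blocks_indented(fixed)
```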
mmm a / jstests / core / apitest_dbcollection . js <nl> ppp b / jstests / core / apitest_dbcollection . js <nl> assert ( db . getCollection ( " test_db " ) . getIndexes ( ) . length = = 0 , 24 ) ; <nl> assert . neq ( undefined , t . totalIndexSize ( ) , <nl> ' db . collection . totalIndexSize ( ) cannot be undefined on a non - empty collection ' ) ; <nl> <nl> - if ( db . serverStatus ( ) storageEngine . name = = = ' mmapv1 ' ) { <nl> + if ( db . serverStatus ( ) . storageEngine . name = = = ' mmapv1 ' ) { <nl> / / Only in MMAPv1 do we guarantee that storageSize only changes when you write to a <nl> / / collection . <nl> assert . eq ( stats . storageSize , t . storageSize ( ) ) ; <nl>
|
Fix typo
|
mongodb/mongo
|
06b703935fae0354552a3596dd84e8b01f38d41d
|
2015-05-26T22:03:03Z
|
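The CNTK record that follows converts SWIG-wrapped accessor methods (`parameters()`, `outputs()`, `value()`, …) into Python properties by stacking `@property` on top of the existing `@typemap` decorator, so call sites drop the parentheses (`res.parameters` instead of `res.parameters()`). A minimal sketch of the pattern with stand-in classes — the `typemap` here is a placeholder forwarding decorator, not CNTK's real one:

```python
import functools

def typemap(f):
    """Stand-in for CNTK's @typemap; just forwards the call."""
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs)
    return wrapper

class _SwigFunction:
    """Plays the role of the SWIG-generated base class."""
    def outputs(self):
        return ["out0"]

class Function(_SwigFunction):
    @property          # @property must sit above @typemap, as in the diff
    @typemap
    def outputs(self):
        """List consisting of all output variables of this function."""
        return super().outputs()  # delegate to the wrapped accessor

f = Function()
assert f.outputs == ["out0"]      # attribute access, no trailing ()
```

Ordering matters: `@property` must be outermost so the descriptor wraps the typemapped function; reversing the two would hand `@typemap` a `property` object instead of a callable.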
mmm a / bindings / python / cntk / learner . py <nl> ppp b / bindings / python / cntk / learner . py <nl> def update ( self , gradient_values , training_sample_count ) : <nl> <nl> return super ( Learner , self ) . update ( var_nd_map , training_sample_count ) <nl> <nl> + @ property <nl> @ typemap <nl> def parameters ( self ) : <nl> ' ' ' <nl> - Returns : <nl> - the set of parameters associated with this learner . <nl> + The set of parameters associated with this learner . <nl> ' ' ' <nl> return super ( Learner , self ) . parameters ( ) <nl> <nl> def reset_learning_rate ( self , learning_rate ) : <nl> ' ' ' <nl> return super ( Learner , self ) . reset_learning_rate ( ) <nl> <nl> + @ property <nl> def learning_rate ( self ) : <nl> ' ' ' <nl> - Returns the learning rate . <nl> + The learning rate . <nl> ' ' ' <nl> return super ( Learner , self ) . learning_rate ( ) <nl> <nl> mmm a / bindings / python / cntk / ops / __init__ . py <nl> ppp b / bindings / python / cntk / ops / __init__ . py <nl> def input_variable ( shape , data_type = np . float32 , needs_gradient = True , is_sparse = F <nl> name ( ` str ` , optional ) : the name of the Function instance in the network <nl> <nl> Returns : <nl> - : class : ` cntk . ops . functions . Function ` <nl> + : class : ` cntk . ops . variabls . Variable ` <nl> ' ' ' <nl> from cntk . cntk_py import input_variable <nl> from . . utils import sanitize_shape , sanitize_dtype_cntk <nl> mmm a / bindings / python / cntk / ops / functions . py <nl> ppp b / bindings / python / cntk / ops / functions . py <nl> def __getattr__ ( self , name ) : <nl> if name in self . __dict__ : <nl> return self . __dict__ [ name ] <nl> <nl> - if len ( self . outputs ( ) ) = = 1 : <nl> - return getattr ( self . output ( ) , name ) <nl> + if len ( self . outputs ) = = 1 : <nl> + return getattr ( self . 
output , name ) <nl> <nl> raise AttributeError ( " ' % s ' object has no attribute ' % s ' " % <nl> ( type ( self ) , name ) ) <nl> <nl> + @ property <nl> @ typemap <nl> def arguments ( self ) : <nl> ' ' ' <nl> - Returns a list of all input variables of the Function that are not of type Parameter or Constant <nl> - <nl> - Returns : <nl> - ` list ` : list of input variables <nl> + List of all input variables of the Function that are not of type Parameter or Constant <nl> ' ' ' <nl> return super ( Function , self ) . arguments ( ) <nl> <nl> + @ property <nl> @ typemap <nl> def attributes ( self ) : <nl> ' ' ' <nl> - Get the attributes of the function <nl> - <nl> - Returns : <nl> - ` dict ` : dictionary of string , value pairs <nl> + List of the attributes of the function <nl> ' ' ' <nl> return super ( Function , self ) . attributes ( ) <nl> <nl> def clone ( self , method = CloneMethod . freeze , replacements = None ) : <nl> replacements = dict ( ) <nl> return super ( Function , self ) . clone ( method , replacements ) <nl> <nl> + @ property <nl> @ typemap <nl> def constants ( self ) : <nl> ' ' ' <nl> - Returns a list of all ` Constant ` variables of this ` Function ` <nl> - <nl> - Returns : <nl> - ` list ` : all ` Constant ` variables of this ` Function ` <nl> + List of all ` Constant ` variables of this ` Function ` <nl> ' ' ' <nl> return super ( Function , self ) . constants ( ) <nl> <nl> def eval ( self , arguments = None , device = None ) : <nl> Returns : <nl> ` bool ` : ` True ` if updates have been performed <nl> ' ' ' <nl> - _ , output_map = self . forward ( arguments or { } , self . outputs ( ) , device = device ) <nl> + _ , output_map = self . forward ( arguments or { } , self . 
outputs , device = device ) <nl> <nl> if len ( output_map ) > 1 : <nl> return output_map <nl> def forward ( self , arguments , outputs , keep_for_backward = None , device = None ) : <nl> ' ' ' <nl> Computes and stores the values of speficied variables in ` outputs ` , <nl> using provided ` arguments ` values corresponding to each leaf ` Variable ` <nl> - of the function whose is_input ( ) is true . <nl> + of the function whose ` is_input ` is ` True ` . <nl> <nl> Args : <nl> arguments ( ` dict ` or ` list ` or ` tuple ` ) : maps variables to their <nl> def forward ( self , arguments , outputs , keep_for_backward = None , device = None ) : <nl> from cntk import DeviceDescriptor <nl> device = DeviceDescriptor . use_default_device ( ) <nl> <nl> - in_var_map = sanitize_var_map ( self . arguments ( ) , arguments , <nl> + in_var_map = sanitize_var_map ( self . arguments , arguments , <nl> None , device ) <nl> output_map = { v : None for v in outputs } <nl> keep_for_backward = set ( keep_for_backward or { } ) <nl> def backward ( self , state , root_gradients , variables ) : <nl> Returns : <nl> ` dict ` : mapping of ` variables ` to NumPy arrays <nl> ' ' ' <nl> - root_gradients = sanitize_var_map ( self . outputs ( ) , root_gradients ) <nl> + root_gradients = sanitize_var_map ( self . outputs , root_gradients ) <nl> <nl> var_gradients = dict ( ( var , None ) for var in variables ) <nl> <nl> def backward ( self , state , root_gradients , variables ) : <nl> <nl> return var_gradients <nl> <nl> + @ property <nl> @ typemap <nl> def inputs ( self ) : <nl> ' ' ' <nl> - Returns all input variables of this function . <nl> - <nl> - <nl> - Returns : <nl> - ` list ` : all input variables of this function . <nl> + List of all input variables of this function . <nl> ' ' ' <nl> return super ( Function , self ) . inputs ( ) <nl> <nl> + @ property <nl> @ typemap <nl> def name ( self ) : <nl> ' ' ' <nl> - Returns the name of ' this ' function . 
<nl> - <nl> - <nl> - Returns : <nl> - ` str ` : the name of ' this ' function . <nl> + Name of this function <nl> ' ' ' <nl> return super ( Function , self ) . name ( ) <nl> <nl> + @ property <nl> @ typemap <nl> def op_name ( self ) : <nl> ' ' ' <nl> - Returns the name of the operation that this Function denotes <nl> - <nl> - <nl> - Returns : <nl> - ` str ` : the name of the operation that this Function denotes <nl> + Name of the operation that this Function performs <nl> ' ' ' <nl> return super ( Function , self ) . op_name ( ) <nl> <nl> - # @ typemap <nl> - # Function . output = lambda self : get_output_and_keep_reference ( self ) <nl> - # ' ' ' <nl> - # output <nl> - <nl> - # Args : <nl> - # self . replace_placeholders_internal ( ph_map ( ` ph_map : ` ) : text <nl> <nl> - # Returns : <nl> - # ` None ` : text <nl> - # ' ' ' <nl> - # kwargs = dict ( locals ( ) ) ; del kwargs [ ' self ' ] ; return super ( Function , <nl> - # self ) . output ( * * kwargs ) <nl> - <nl> - # @ typemap <nl> - # def output_internal ( self ) : <nl> - # ' ' ' <nl> - <nl> - # Returns : <nl> - # ` Variable ` : text <nl> - # ' ' ' <nl> - # return super ( Function , self ) . output_internal ( ) <nl> + @ property <nl> + @ typemap <nl> + def output ( self ) : <nl> + ' ' ' <nl> + The single output variable if there is only one , or raises an exception . <nl> + ' ' ' <nl> + return super ( Function , self ) . output ( ) <nl> <nl> + @ property <nl> @ typemap <nl> def outputs ( self ) : <nl> ' ' ' <nl> - Returns a list consisting of all output variables of this function . <nl> - <nl> - <nl> - Returns : <nl> - ` list ` : all output variables of this function <nl> + List consisting of all output variables of this function . <nl> ' ' ' <nl> return super ( Function , self ) . outputs ( ) <nl> <nl> + @ property <nl> @ typemap <nl> def parameters ( self ) : <nl> ' ' ' <nl> - Returns a list of all parameter variables of this function . 
<nl> - <nl> - Returns : <nl> - ` list ` : all parameter variables of this function . <nl> + List of all parameter variables of this function . <nl> ' ' ' <nl> return super ( Function , self ) . parameters ( ) <nl> <nl> + @ property <nl> @ typemap <nl> def placeholders ( self ) : <nl> ' ' ' <nl> - Returns a list of all placeholders variables of this function . <nl> - <nl> - <nl> - Returns : <nl> - ` list ` : all placeholders variables of this function <nl> + List of all placeholders variables of this function . <nl> ' ' ' <nl> return super ( Function , self ) . placeholders ( ) <nl> <nl> def replace_placeholder ( self , placeholderReplacement ) : <nl> In - place replace the only placeholder in the function graph with the specified replacement <nl> <nl> Args : <nl> - placeholderReplacement ( ` Variable ` ) : the variable that will replace the placeholder <nl> + placeholderReplacement ( : class : ` cntk . ops . variables . Variable ` ) : the variable that will replace the placeholder <nl> <nl> Returns : <nl> ` Function ` : itself <nl> def replace_placeholder ( self , placeholderReplacement ) : <nl> return super ( Function , self ) . replace_placeholder ( * * kwargs ) <nl> <nl> @ typemap <nl> - def restore_from_legacy_model ( self , modelFilePath ) : <nl> + def restore_from_model ( self , modelFilePath ) : <nl> ' ' ' <nl> Restore the models parameters from a saved model file <nl> <nl> def restore_from_legacy_model ( self , modelFilePath ) : <nl> del kwargs [ ' self ' ] <nl> return super ( Function , self ) . restore_from_legacy_model ( * * kwargs ) <nl> <nl> + @ property <nl> @ typemap <nl> def root_function ( self ) : <nl> ' ' ' <nl> - Returns the primitive function at the root of the graph of functions underlying this function . <nl> - <nl> - Returns : <nl> - ` Function ` : the primitive function at the root of the graph of functions underlying this function <nl> + The primitive function at the root of the graph of functions underlying this function . 
<nl> ' ' ' <nl> return super ( Function , self ) . root_function ( ) <nl> mmm a / bindings / python / cntk / ops / tests / function_tests . py <nl> ppp b / bindings / python / cntk / ops / tests / function_tests . py <nl> <nl> <nl> def test_variable_forwarding ( ) : <nl> op = constant ( value = 2 , shape = ( 3 , 4 ) ) + 1 <nl> - assert op . shape ( ) . dimensions ( ) = = ( 3 , 4 ) <nl> + assert op . shape = = ( 3 , 4 ) <nl> <nl> <nl> def test_replace_placeholders ( ) : <nl> def test_replace_placeholders ( ) : <nl> res2 = p + 2 <nl> from . . import plus <nl> func = plus ( res2 , 10 ) <nl> - res2 . replace_placeholders ( { p : func . output ( ) } ) <nl> + res2 . replace_placeholders ( { p : func . output } ) <nl> <nl> assert res2 . eval ( { i : [ 3 ] } ) = = [ 15 ] <nl> <nl> def test_cloning ( ) : <nl> <nl> # Test freeze <nl> cloned = res . clone ( CloneMethod . freeze ) <nl> - assert cloned . inputs ( ) [ 0 ] . name ( ) = = ' p ' <nl> - assert cloned . inputs ( ) [ 0 ] . uid ( ) ! = p . uid ( ) <nl> - assert cloned . inputs ( ) [ 1 ] . name ( ) = = ' i ' <nl> - assert cloned . inputs ( ) [ 1 ] . uid ( ) ! = i . uid ( ) <nl> + assert cloned . inputs [ 0 ] . name = = ' p ' <nl> + assert cloned . inputs [ 0 ] . uid ! = p . uid <nl> + assert cloned . inputs [ 1 ] . name = = ' i ' <nl> + assert cloned . inputs [ 1 ] . uid ! = i . uid <nl> <nl> # TODO test other methods <nl> <nl> mmm a / bindings / python / cntk / ops / tests / non_diff_test . py <nl> ppp b / bindings / python / cntk / ops / tests / non_diff_test . py <nl> def test_op_round ( operand , expected , device_id , precision ) : <nl> from . . import round <nl> _test_unary_op ( precision , device_id , round , operand , <nl> expected_forward , expected_backward ) <nl> + <nl> + def test_input_variable ( ) : <nl> + from . . import input_variable <nl> + i = input_variable ( shape = ( 2 , 3 ) , name = ' i ' ) <nl> + assert i . shape = = ( 2 , 3 ) <nl> + assert i . name = = ' i ' <nl> + assert len ( i . 
dynamic_axes ) = = 2 <nl> + <nl> mmm a / bindings / python / cntk / ops / tests / non_linear_test . py <nl> ppp b / bindings / python / cntk / ops / tests / non_linear_test . py <nl> def test_op_dropout ( shape , dropout_rate , device_id , precision ) : <nl> cntk_device ( device_id ) , <nl> backward_pass = True ) <nl> <nl> - resulted_non_zeros + = np . count_nonzero ( forward [ dropout_node . output ( ) ] ) <nl> + resulted_non_zeros + = np . count_nonzero ( forward [ dropout_node . output ] ) <nl> <nl> resulted_non_zeros / = count <nl> num_elements = np . multiply . reduce ( shape ) <nl> mmm a / bindings / python / cntk / ops / tests / variables_test . py <nl> ppp b / bindings / python / cntk / ops / tests / variables_test . py <nl> def test_variable_shape ( variable_type , shape ) : <nl> shape = ( shape , ) <nl> assert c . shape = = shape , variable_type <nl> <nl> + VALUES = [ <nl> + [ 1 ] , <nl> + [ [ 1 ] , [ 2 ] ] , <nl> + [ [ [ 1 , 2 ] , [ 3 , 4 ] , [ 5 , 6 ] ] , [ [ 1 , 2 ] , [ 3 , 4 ] , [ 5 , 6 ] ] ] <nl> + ] <nl> + <nl> + @ pytest . mark . parametrize ( " value " , VALUES ) <nl> + def test_constant_value ( value ) : <nl> + c = Constant ( value = value ) <nl> + assert np . allclose ( c . value , value ) <nl> + <nl> + @ pytest . mark . parametrize ( " value " , VALUES ) <nl> + def test_parameter_value ( value ) : <nl> + c = Parameter ( init = value ) <nl> + assert np . allclose ( c . value , value ) <nl> + <nl> mmm a / bindings / python / cntk / ops / variables . py <nl> ppp b / bindings / python / cntk / ops / variables . py <nl> def __init__ ( self , shape = None , data_type = None , needs_gradient = False , is_sparse = F <nl> super ( Variable , self ) . __init__ ( shape , is_sparse , <nl> dtype , needs_gradient , name , dynamic_axes ) <nl> <nl> + @ property <nl> @ typemap <nl> def dynamic_axes ( self ) : <nl> ' ' ' <nl> - Returns the dynamic axes of this variable <nl> - <nl> - Returns : <nl> - ` list ` : list of ` : class : cntk . 
Axis ` that are the dynamic_axes of this Variable <nl> + The dynamic axes of this variable . <nl> ' ' ' <nl> return super ( Variable , self ) . dynamic_axes ( ) <nl> <nl> def dtype ( self ) : <nl> ' ' ' <nl> return sanitize_precision ( self . get_data_type ( ) ) <nl> <nl> + @ property <nl> @ typemap <nl> def is_constant ( self ) : <nl> ' ' ' <nl> - Returns True if this variable is a constant and False otherwise <nl> - <nl> - Returns : <nl> - ` bool ` : True if this variable is a Constant and False otherwise <nl> + Whether this variable is a constant . <nl> ' ' ' <nl> return super ( Variable , self ) . is_constant ( ) <nl> <nl> + @ property <nl> @ typemap <nl> def is_input ( self ) : <nl> ' ' ' <nl> - Returns True if this variable is an input and False otherwise <nl> - <nl> - Returns : <nl> - ` bool ` : True if this variable is an input and False otherwise <nl> + Whether this variable is an input . <nl> ' ' ' <nl> return super ( Variable , self ) . is_input ( ) <nl> <nl> + @ property <nl> @ typemap <nl> def is_output ( self ) : <nl> ' ' ' <nl> - Returns True if this variable is an output and False otherwise <nl> - <nl> - Returns : <nl> - ` bool ` : True if this variable is an output and False otherwise <nl> + Whether this variable is an output . <nl> ' ' ' <nl> return super ( Variable , self ) . is_output ( ) <nl> <nl> + @ property <nl> @ typemap <nl> def is_parameter ( self ) : <nl> ' ' ' <nl> - Returns True if this variable is a parameter and False otherwise <nl> - <nl> - Returns : <nl> - ` bool ` : True if this variable is a parameter and False otherwise <nl> + Whether this variable is a parameter . <nl> ' ' ' <nl> return super ( Variable , self ) . 
is_parameter ( ) <nl> <nl> + @ property <nl> @ typemap <nl> def is_placeholder ( self ) : <nl> ' ' ' <nl> - Returns True if this variable is a placeholder and False otherwise <nl> - <nl> - Returns : <nl> - ` bool ` : True if this variable is a placeholder and False otherwise <nl> + Whether this variable is a placeholder . <nl> ' ' ' <nl> return super ( Variable , self ) . is_placeholder ( ) <nl> <nl> + @ property <nl> @ typemap <nl> def is_sparse ( self ) : <nl> ' ' ' <nl> - Returns True if this variable will be bound to sparse data and False otherwise <nl> - <nl> - Returns : <nl> - ` bool ` : True if this variable will be bound to sparse data <nl> + Whether this variable is sparse . <nl> ' ' ' <nl> return super ( Variable , self ) . is_sparse ( ) <nl> <nl> def is_sparse ( self ) : <nl> # ' ' ' <nl> # return super ( Variable , self ) . kind ( ) <nl> <nl> + @ property <nl> @ typemap <nl> def name ( self ) : <nl> ' ' ' <nl> - Returns the name of this variable <nl> - <nl> - Returns : <nl> - ` str ` : the name of this variable <nl> + The name of this variable . <nl> ' ' ' <nl> return super ( Variable , self ) . name ( ) <nl> <nl> + @ property <nl> @ typemap <nl> def needs_gradient ( self ) : <nl> ' ' ' <nl> - Returns True if gradient computation is enabled for this variable and False otherwise . <nl> - <nl> - Returns : <nl> - ` bool ` : True if gradient computation is enabled for this variable and False otherwise . <nl> + Whether this variable needs gradients . <nl> ' ' ' <nl> return super ( Variable , self ) . needs_gradient ( ) <nl> <nl> + @ property <nl> @ typemap <nl> def owner ( self ) : <nl> ' ' ' <nl> - Returns : <nl> - ` Function ` : the Function object which ' this ' variable is an ouptut of . <nl> + The function this variable is an output of . <nl> ' ' ' <nl> if self . 
is_output ( ) = = False : <nl> raise RuntimeError ( ' called owner ( ) on a variable that is not an output variable ' ) <nl> def shape ( self ) : <nl> ' ' ' <nl> return super ( Variable , self ) . shape ( ) . dimensions ( ) <nl> <nl> + @ property <nl> @ typemap <nl> def uid ( self ) : <nl> ' ' ' <nl> - Returns : <nl> - ` str ` : the internally generated unique name of the variable <nl> + The internally generated unique name of the variable . <nl> ' ' ' <nl> return super ( Variable , self ) . uid ( ) <nl> <nl> def __init__ ( self , shape = None , init = None , data_type = None , <nl> super ( Parameter , self ) . __init__ ( shape , data_type , init , <nl> device , name ) <nl> <nl> + @ property <nl> @ typemap <nl> def value ( self ) : <nl> ' ' ' <nl> - Returns : <nl> - ` NDArrayView ` : the current value of the parameter . <nl> + NumPy array of the value <nl> ' ' ' <nl> - return super ( Parameter , self ) . value ( ) <nl> + return super ( Parameter , self ) . value ( ) . to_numpy ( ) <nl> <nl> @ property <nl> def shape ( self ) : <nl> def __init__ ( self , shape = None , value = None , data_type = None , device = None , name = ' ' ) <nl> <nl> # TODO how to expose Scalar ? <nl> <nl> + @ property <nl> @ typemap <nl> def value ( self ) : <nl> ' ' ' <nl> - Returns : <nl> - ` NDArrayView ` : the value of the constant . <nl> + NumPy array of the value <nl> ' ' ' <nl> - return super ( Constant , self ) . value ( ) <nl> + return super ( Constant , self ) . value ( ) . to_numpy ( ) <nl> <nl> @ property <nl> def shape ( self ) : <nl> mmm a / bindings / python / cntk / tests / initializer_test . py <nl> ppp b / bindings / python / cntk / tests / initializer_test . py <nl> <nl> <nl> def _check ( init , name ) : <nl> p = parameter ( shape = ( 10 , 20 , 5 ) , init = init ) <nl> - val = p . value ( ) . to_numpy ( ) <nl> + val = p . value <nl> assert np . allclose ( np . average ( val ) , 0 , atol = 0 . 1 ) , name <nl> assert np . var ( val ) > 0 . 
01 , name <nl> <nl> mmm a / bindings / python / cntk / tests / learner_test . py <nl> ppp b / bindings / python / cntk / tests / learner_test . py <nl> def test_learner_init ( ) : <nl> <nl> res = i * w <nl> <nl> - learner = sgd ( res . parameters ( ) , lr = 0 . 1 ) <nl> + learner = sgd ( res . parameters , lr = 0 . 1 ) <nl> <nl> - learner_parameter = learner . parameters ( ) <nl> + learner_parameter = learner . parameters <nl> from . . ops . variables import Parameter <nl> param = learner_parameter . pop ( ) <nl> assert isinstance ( param , Parameter ) <nl> def test_learner_init ( ) : <nl> momentum_per_sample = momentums_per_sample ( <nl> np . exp ( - 1 . 0 / momentum_time_constant ) ) <nl> <nl> - momentum_sgd ( res . parameters ( ) , lr = 0 . 1 , momentums = momentum_per_sample ) <nl> + momentum_sgd ( res . parameters , lr = 0 . 1 , momentums = momentum_per_sample ) <nl> <nl> - nesterov ( res . parameters ( ) , lr = 0 . 1 , momentums = momentum_per_sample ) <nl> + nesterov ( res . parameters , lr = 0 . 1 , momentums = momentum_per_sample ) <nl> <nl> - adagrad ( res . parameters ( ) , lr = 0 . 1 , need_ave_multiplier = True ) <nl> + adagrad ( res . parameters , lr = 0 . 1 , need_ave_multiplier = True ) <nl> <nl> - fsadagrad ( res . parameters ( ) , lr = 0 . 1 , momentums = momentum_per_sample ) <nl> + fsadagrad ( res . parameters , lr = 0 . 1 , momentums = momentum_per_sample ) <nl> <nl> gamma , inc , dec , max , min = [ 0 . 1 ] * 5 <nl> - rmsprop ( res . parameters ( ) , 0 . 1 , gamma , inc , dec , max , min , True ) <nl> + rmsprop ( res . parameters , 0 . 1 , gamma , inc , dec , max , min , True ) <nl> <nl> def test_learner_update ( ) : <nl> i = input_variable ( shape = ( 1 , ) , <nl> def test_learner_update ( ) : <nl> w = parameter ( shape = ( 1 , ) , init = w_init ) <nl> res = i * w <nl> <nl> - learner = sgd ( res . parameters ( ) , lr = 0 . 1 ) <nl> + learner = sgd ( res . parameters , lr = 0 . 1 ) <nl> x = learner . update ( { w : np . asarray ( [ [ 2 . 
] ] , dtype = np . float32 ) } , 1 ) <nl> - assert w . value ( ) . to_numpy ( ) < w_init <nl> + assert w . value < w_init <nl> <nl> mmm a / bindings / python / cntk / trainer . py <nl> ppp b / bindings / python / cntk / trainer . py <nl> def train_minibatch ( self , arguments , outputs = None , device = None ) : <nl> ' ' ' <nl> if not device : <nl> device = DeviceDescriptor . use_default_device ( ) <nl> - arguments = sanitize_var_map ( self . model ( ) . arguments ( ) , arguments ) <nl> + arguments = sanitize_var_map ( self . model . arguments , arguments ) <nl> <nl> if outputs : <nl> output_map = { v : None for v in outputs } <nl> def test_minibatch ( self , arguments , seq_starts = None , device = None ) : <nl> ' ' ' <nl> if not device : <nl> device = DeviceDescriptor . use_default_device ( ) <nl> - arguments = sanitize_var_map ( self . model ( ) . arguments ( ) , arguments , <nl> + arguments = sanitize_var_map ( self . model . arguments , arguments , <nl> seq_starts ) <nl> <nl> return super ( Trainer , self ) . test_minibatch ( arguments , device ) <nl> def restore_from_checkpoint ( self , filename ) : <nl> <nl> super ( Trainer , self ) . restore_from_checkpoint ( filename ) <nl> <nl> + @ property <nl> @ typemap <nl> def model ( self ) : <nl> ' ' ' <nl> - Returns the model that the trainer is training . <nl> - <nl> - Returns : <nl> - : class : ` cntk . ops . functions . Function ` <nl> + The model that the trainer is training . <nl> ' ' ' <nl> return super ( Trainer , self ) . model ( ) <nl> <nl> + @ property <nl> @ typemap <nl> def loss_function ( self ) : <nl> ' ' ' <nl> - Returns the loss function that the trainer is using . <nl> - <nl> - Returns : <nl> - : class : ` cntk . ops . functions . Function ` <nl> + The loss function that the trainer is using . <nl> ' ' ' <nl> return super ( Trainer , self ) . 
loss_function ( ) <nl> <nl> + @ property <nl> @ typemap <nl> def evaluation_function ( self ) : <nl> ' ' ' <nl> - Returns the evaluation function that the trainer is using . <nl> - <nl> - Returns : <nl> - : class : ` cntk . ops . functions . Function ` <nl> + The evaluation function that the trainer is using . <nl> ' ' ' <nl> return super ( Trainer , self ) . evaluation_function ( ) <nl> <nl> + @ property <nl> @ typemap <nl> def parameter_learners ( self ) : <nl> ' ' ' <nl> - Returns the parameter learners that the trainer is using . <nl> - <nl> - Returns : <nl> - ` list ` of : class : ` cntk . learner . Learner ` <nl> + The parameter learners that the trainer is using . <nl> ' ' ' <nl> return super ( Trainer , self ) . parameter_learners ( ) <nl> <nl> + @ property <nl> def previous_minibatch_loss_average ( self ) : <nl> ' ' ' <nl> - Returns the average training loss per sample for the last minibatch trained <nl> - <nl> - Returns : <nl> - ` double ` <nl> + The average training loss per sample for the last minibatch trained <nl> ' ' ' <nl> return super ( Trainer , self ) . previous_minibatch_loss_average ( ) <nl> <nl> + @ property <nl> def previous_minibatch_evaluation_average ( self ) : <nl> ' ' ' <nl> - Returns the average evaluation criterion value per sample for the last minibatch trained <nl> - <nl> - Returns : <nl> - ` double ` <nl> + The average evaluation criterion value per sample for the last minibatch trained <nl> ' ' ' <nl> return super ( Trainer , self ) . previous_minibatch_evaluation_average ( ) <nl> <nl> + @ property <nl> def previous_minibatch_sample_count ( self ) : <nl> ' ' ' <nl> - Returns the number of samples in the last minibatch trained with <nl> - <nl> - Returns : <nl> - ` int ` <nl> + The number of samples in the last minibatch trained with <nl> ' ' ' <nl> return super ( Trainer , self ) . previous_minibatch_sample_count ( ) <nl> <nl> mmm a / bindings / python / cntk / utils / __init__ . 
py <nl> ppp b / bindings / python / cntk / utils / __init__ . py <nl> def sanitize_shape ( shape ) : <nl> <nl> def sanitize_input ( arg , fallback_dtype = np . float32 ) : <nl> " " " <nl> - Convert to Variable or Constant so that it can be passed as Variable to the <nl> + Convert to Variable so that it can be passed as Variable to the <nl> CNTK operators . <nl> * If ` arg ` is a NumPy array and its type is neither ` np . float32 ` nor <nl> ` np . float64 ` , it sets it to ` np . float32 ` . <nl> def sanitize_input ( arg , fallback_dtype = np . float32 ) : <nl> be returned . <nl> <nl> Args : <nl> - arg ( number , NumPy array , ` Variable ` , or ` Function ` ) : input <nl> - fallback_dtype ( numpy dtype ) : fallback dtype in case ` arg ` is a list <nl> + arg ( number , NumPy array , : cntk : ` cntk . ops . variables . Variable ` , or <nl> + : class : ` cntk . ops . functions . Function ` ) : input <nl> + fallback_dtype ( NumPy dtype ) : fallback dtype in case ` arg ` is a list <nl> <nl> - Returns : <nl> - Constant , if ` arg ` was a number or NumPy array . Variable otherwise . <nl> + Returns : <nl> + Leave Constant , Parameter , and Variable as is . Return Constant , if <nl> + ` arg ` was a number or NumPy array . Variable otherwise . <nl> " " " <nl> <nl> - from cntk . ops . variables import Constant , Variable <nl> + from cntk . ops . variables import Constant , Variable , Parameter <nl> from cntk . ops import constant <nl> <nl> # is it a Variable ? <nl> if isinstance ( arg , <nl> - ( Constant , Variable , cntk_py . Constant , cntk_py . Variable ) ) : <nl> + ( Constant , cntk_py . Constant , <nl> + Variable , cntk_py . Variable , <nl> + Parameter , cntk_py . Parameter ) ) : <nl> return arg <nl> <nl> # or a Function ? <nl> if isinstance ( arg , cntk_py . Function ) : <nl> try : <nl> - return arg . output ( ) <nl> + return arg . 
output <nl> except RuntimeError : <nl> raise ValueError ( <nl> ' the argument has more than one output , please provide the one you want ' ) <nl> def get_data_type ( * args ) : <nl> ' NumPy type " % s " is not supported ' % arg . dtype ) <nl> dtypes . add ( arg . dtype . type ) <nl> elif isinstance ( arg , cntk_py . Function ) : <nl> - var_outputs = arg . outputs ( ) <nl> + var_outputs = arg . outputs <nl> if len ( var_outputs ) > 1 : <nl> raise ValueError ( <nl> ' expected single output , but got % i ' % len ( var_outputs ) ) <nl> def sanitize_batch ( var , batch , seq_starts = None , data_type = None , device = None ) : <nl> raise ValueError ( ' only float32 and float64 are supported ' ) <nl> elif isinstance ( batch , list ) : <nl> if is_tensor_list ( batch ) : <nl> - use_mask = len ( var . dynamic_axes ( ) ) > 1 <nl> + use_mask = len ( var . dynamic_axes ) > 1 <nl> <nl> if device is None : <nl> device = cntk_py . DeviceDescriptor . use_default_device ( ) <nl> def sanitize_var_map ( op_arguments , arguments , precision = None , <nl> <nl> Args : <nl> op_arguments ( : class : ` cntk . ops . functions . Function ` ) : arguments of the root function . In <nl> - forward pass it is typically ` op . arguments ( ) ` , in backward mode it is <nl> - ` op . outputs ( ) ` <nl> + forward pass it is typically ` op . arguments ` , in backward mode it is <nl> + ` op . outputs ` <nl> arguments ( ` dict ` or ` list ` or ` tuple ` ) : maps variables to their <nl> input data . The interpretation depends on the input type : <nl> * ` dict ` : keys are input variable or names and values are the input data . <nl> def sanitize_var_map ( op_arguments , arguments , precision = None , <nl> arguments = dict ( zip ( op_arguments , arguments ) ) <nl> <nl> if isinstance ( arguments , dict ) : <nl> - arg_names = [ var . name ( ) for var in op_arguments ] <nl> + arg_names = [ var . name for var in op_arguments ] <nl> name_counter = collections . 
Counter ( arg_names ) <nl> <nl> - var_name_map = dict ( ( var . name ( ) , var ) for var in op_arguments ) <nl> + var_name_map = dict ( ( var . name , var ) for var in op_arguments ) <nl> else : <nl> raise ValueError ( ' type " % s " is not supported ' % type ( arguments ) ) <nl> <nl> def get_train_loss ( trainer ) : <nl> ' ' ' <nl> import copy <nl> # we copy the value so swig does not destroy it when we leave the scope <nl> - return copy . copy ( trainer . previous_minibatch_loss_average ( ) ) <nl> + return copy . copy ( trainer . previous_minibatch_loss_average ) <nl> <nl> <nl> def get_train_eval_criterion ( trainer ) : <nl> def get_train_eval_criterion ( trainer ) : <nl> ' ' ' <nl> import copy <nl> # we copy the value so swig does not destroy it when we leave the scope <nl> - return copy . copy ( trainer . previous_minibatch_evaluation_average ( ) ) <nl> + return copy . copy ( trainer . previous_minibatch_evaluation_average ) <nl> <nl> <nl> def ensure_dev ( ndav , dev ) : <nl> def eval ( op , arguments = None , precision = None , device = None , backward_pass = False ) : <nl> mapping of output variables to their values . <nl> ' ' ' <nl> <nl> - state , forward_output = op . forward ( arguments , op . outputs ( ) , op . outputs ( ) , device = device ) <nl> + state , forward_output = op . forward ( arguments , op . outputs , op . outputs , device = device ) <nl> <nl> if backward_pass : <nl> root_gradients = { v : ones_like ( o , precision ) for v , o in <nl> mmm a / bindings / python / doc / cntk . utils . rst <nl> ppp b / bindings / python / doc / cntk . utils . rst <nl> cntk . utils . persist module <nl> : undoc - members : <nl> : show - inheritance : <nl> <nl> + cntk . utils . swig_helper module <nl> + mmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> + <nl> + . . automodule : : cntk . utils . 
swig_helper <nl> + : members : <nl> + : undoc - members : <nl> + : show - inheritance : <nl> + <nl> <nl> Module contents <nl> mmmmmmmmmmmmmmm <nl> mmm a / bindings / python / examples / CifarResNet / CifarResNet . py <nl> ppp b / bindings / python / examples / CifarResNet / CifarResNet . py <nl> def cifar_resnet ( base_path , debug_output = False ) : <nl> <nl> # Instantiate the trainer object to drive the model training <nl> trainer = Trainer ( classifier_output , ce , pe , <nl> - [ sgd ( classifier_output . parameters ( ) , lr = 0 . 0078125 ) ] ) <nl> + [ sgd ( classifier_output . parameters , lr = 0 . 0078125 ) ] ) <nl> <nl> # Get minibatches of images to train with and perform model training <nl> mb_size = 32 <nl> mmm a / bindings / python / examples / MNIST / SimpleMNIST . py <nl> ppp b / bindings / python / examples / MNIST / SimpleMNIST . py <nl> def simple_mnist ( debug_output = False ) : <nl> labels_si = mb_source [ labels_stream_name ] <nl> <nl> # Instantiate the trainer object to drive the model training <nl> - trainer = Trainer ( netout , ce , pe , [ sgd ( netout . parameters ( ) , <nl> + trainer = Trainer ( netout , ce , pe , [ sgd ( netout . parameters , <nl> lr = 0 . 003125 ) ] ) <nl> <nl> # Get minibatches of images to train with and perform model training <nl> mmm a / bindings / python / examples / NumpyInterop / FeedForwardNet . py <nl> ppp b / bindings / python / examples / NumpyInterop / FeedForwardNet . py <nl> def ffnet ( debug_output = False ) : <nl> pe = classification_error ( netout , label ) <nl> <nl> # Instantiate the trainer object to drive the model training <nl> - trainer = Trainer ( netout , ce , pe , [ sgd ( netout . parameters ( ) , lr = 0 . 02 ) ] ) <nl> + trainer = Trainer ( netout , ce , pe , [ sgd ( netout . parameters , lr = 0 . 02 ) ] ) <nl> <nl> # Get minibatches of training data and perform model training <nl> minibatch_size = 25 <nl> mmm a / bindings / python / examples / Sequence2Sequence / Sequence2Sequence . 
py <nl> ppp b / bindings / python / examples / Sequence2Sequence / Sequence2Sequence . py <nl> def sequence_to_sequence_translator ( debug_output = False ) : <nl> encoder_outputH = stabilize ( input_sequence ) <nl> for i in range ( 0 , num_layers ) : <nl> ( encoder_outputH , encoder_outputC ) = LSTMP_component_with_self_stabilization ( <nl> - encoder_outputH . output ( ) , hidden_dim , hidden_dim , future_value , future_value ) <nl> + encoder_outputH . output , hidden_dim , hidden_dim , future_value , future_value ) <nl> <nl> thought_vectorH = sequence . first ( encoder_outputH ) <nl> thought_vectorC = sequence . first ( encoder_outputC ) <nl> def sequence_to_sequence_translator ( debug_output = False ) : <nl> isFirst , thought_vector_broadcastC , past_value ( operand ) ) <nl> <nl> ( decoder_outputH , encoder_outputC ) = LSTMP_component_with_self_stabilization ( <nl> - decoder_outputH . output ( ) , hidden_dim , hidden_dim , recurrence_hookH , recurrence_hookC ) <nl> + decoder_outputH . output , hidden_dim , hidden_dim , recurrence_hookH , recurrence_hookC ) <nl> <nl> decoder_output = decoder_outputH <nl> decoder_dim = hidden_dim <nl> def sequence_to_sequence_translator ( debug_output = False ) : <nl> clipping_threshold_per_sample = 2 . 3 <nl> gradient_clipping_with_truncation = True <nl> <nl> - trainer = Trainer ( z , ce , errs , [ momentum_sgd ( z . parameters ( ) , lr , momentum_per_sample , clipping_threshold_per_sample , gradient_clipping_with_truncation ) ] ) <nl> + trainer = Trainer ( z , ce , errs , [ momentum_sgd ( z . parameters , lr , momentum_per_sample , clipping_threshold_per_sample , gradient_clipping_with_truncation ) ] ) <nl> <nl> rel_path = r " . . / . . / . . / . . / Examples / SequenceToSequence / CMUDict / Data / cmudict - 0 . 7b . train - dev - 20 - 21 . ctf " <nl> path = os . path . join ( os . path . dirname ( os . path . 
abspath ( __file__ ) ) , rel_path ) <nl> mmm a / bindings / python / examples / SequenceClassification / SequenceClassification . py <nl> ppp b / bindings / python / examples / SequenceClassification / SequenceClassification . py <nl> <nl> def LSTM_sequence_classifer_net ( input , num_output_classes , embedding_dim , LSTM_dim , cell_dim ) : <nl> embedding_function = embedding ( input , embedding_dim ) <nl> LSTM_function = LSTMP_component_with_self_stabilization ( <nl> - embedding_function . output ( ) , LSTM_dim , cell_dim ) [ 0 ] <nl> + embedding_function . output , LSTM_dim , cell_dim ) [ 0 ] <nl> thought_vector = select_last ( LSTM_function ) <nl> <nl> return linear_layer ( thought_vector , num_output_classes ) <nl> def train_sequence_classifier ( debug_output = False ) : <nl> <nl> # Instantiate the trainer object to drive the model training <nl> trainer = Trainer ( classifier_output , ce , pe , <nl> - [ sgd ( classifier_output . parameters ( ) , lr = 0 . 0005 ) ] ) <nl> + [ sgd ( classifier_output . parameters , lr = 0 . 0005 ) ] ) <nl> <nl> # Get minibatches of sequences to train with and perform model training <nl> minibatch_size = 200 <nl> def train_sequence_classifier ( debug_output = False ) : <nl> import copy <nl> <nl> evaluation_average = copy . copy ( <nl> - trainer . previous_minibatch_evaluation_average ( ) ) <nl> - loss_average = copy . copy ( trainer . previous_minibatch_loss_average ( ) ) <nl> + trainer . previous_minibatch_evaluation_average ) <nl> + loss_average = copy . copy ( trainer . previous_minibatch_loss_average ) <nl> <nl> return evaluation_average , loss_average <nl> <nl> mmm a / bindings / python / examples / common / nn . py <nl> ppp b / bindings / python / examples / common / nn . 
py <nl> def LSTMP_cell_with_self_stabilization ( input , prev_output , prev_cell_state ) : <nl> <nl> def LSTMP_component_with_self_stabilization ( input , output_dim , cell_dim , recurrence_hookH = past_value , recurrence_hookC = past_value ) : <nl> dh = placeholder_variable ( <nl> - shape = ( output_dim ) , dynamic_axes = input . dynamic_axes ( ) ) <nl> + shape = ( output_dim ) , dynamic_axes = input . dynamic_axes ) <nl> dc = placeholder_variable ( <nl> - shape = ( cell_dim ) , dynamic_axes = input . dynamic_axes ( ) ) <nl> + shape = ( cell_dim ) , dynamic_axes = input . dynamic_axes ) <nl> <nl> LSTMCell = LSTMP_cell_with_self_stabilization ( input , dh , dc ) <nl> actualDh = recurrence_hookH ( LSTMCell [ 0 ] ) <nl> def LSTMP_component_with_self_stabilization ( input , output_dim , cell_dim , recurre <nl> # Form the recurrence loop by replacing the dh and dc placeholders with <nl> # the actualDh and actualDc <nl> LSTMCell [ 0 ] . replace_placeholders ( <nl> - { dh : actualDh . output ( ) , dc : actualDc . output ( ) } ) <nl> + { dh : actualDh . output , dc : actualDc . output } ) <nl> <nl> return ( LSTMCell [ 0 ] , LSTMCell [ 1 ] ) <nl> <nl>
|
Turn attribute-like methods into properties
|
microsoft/CNTK
|
06265830712eecf99d9edab37ba2189b8b8885db
|
2016-10-15T20:20:51Z
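The CNTK commit above replaces accessor-method calls such as `op.outputs()` and `var.name()` with property access (`op.outputs`, `var.name`). A minimal Python sketch of that conversion pattern follows; the `Function` class here is hypothetical, for illustration only, not CNTK's actual implementation:

```python
class Function:
    """Hypothetical stand-in for a CNTK-style Function object."""

    def __init__(self, outputs):
        self._outputs = tuple(outputs)

    @property
    def outputs(self):
        # Previously callers wrote fn.outputs(); after the change they
        # write fn.outputs and the property computes the value on access.
        return self._outputs

    @property
    def output(self):
        # Mirrors the single-output accessor in the diff: valid only
        # when the function has exactly one output.
        if len(self._outputs) != 1:
            raise RuntimeError(
                'the argument has more than one output, '
                'please provide the one you want')
        return self._outputs[0]
```

With `@property`, existing call sites only need their parentheses removed, which is exactly the mechanical change the diff applies across the examples.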
|
mmm a / src / mongo / db / repl / rs_sync . cpp <nl> ppp b / src / mongo / db / repl / rs_sync . cpp <nl> namespace replset { <nl> void SyncTail : : oplogApplication ( ) { <nl> while ( 1 ) { <nl> OpQueue ops ; <nl> - time_t lastTimeChecked = time ( 0 ) ; <nl> <nl> verify ( ! Lock : : isLocked ( ) ) ; <nl> <nl> + Timer batchTimer ; <nl> + int lastTimeChecked = 0 ; <nl> + <nl> / / always fetch a few ops first <nl> - <nl> / / tryPopAndWaitForMore returns true when we need to end a batch early <nl> while ( ! tryPopAndWaitForMore ( & ops ) & & <nl> ( ops . getSize ( ) < replBatchSizeBytes ) ) { <nl> namespace replset { <nl> return ; <nl> } <nl> <nl> - time_t now = time ( 0 ) ; <nl> + int now = batchTimer . seconds ( ) ; <nl> + <nl> + / / don ' t wait more than five seconds building up a batch <nl> + if ( ! ops . empty ( ) & & now > replBatchLimitSeconds ) <nl> + break ; <nl> <nl> / / occasionally check some things <nl> if ( ops . empty ( ) | | now > lastTimeChecked ) { <nl> namespace replset { <nl> / / on , we can get ops that are way ahead of the delay and this will <nl> / / make this thread sleep longer when handleSlaveDelay is called <nl> / / and apply ops much sooner than we like . <nl> - if ( opTimestampSecs > static_cast < unsigned int > ( now - slaveDelaySecs ) ) { <nl> + if ( opTimestampSecs > static_cast < unsigned int > ( time ( 0 ) - slaveDelaySecs ) ) { <nl> break ; <nl> } <nl> } <nl> mmm a / src / mongo / db / repl / rs_sync . h <nl> ppp b / src / mongo / db / repl / rs_sync . h <nl> namespace replset { <nl> / / Cap the batches using the limit on journal commits . <nl> / / This works out to be 100 MB ( 64 bit ) or 50 MB ( 32 bit ) <nl> static const unsigned int replBatchSizeBytes = dur : : UncommittedBytesLimit ; <nl> + static const int replBatchLimitSeconds = 5 ; <nl> <nl> / / Prefetch and write a deque of operations , using the supplied function . <nl> / / Initial Sync and Sync Tail each use a different function . <nl>
|
SERVER-6816 add a 5 sec time limit to replication batches
|
mongodb/mongo
|
2eeaad837463a929e1bd3d0488cf2884687e9359
|
2012-09-07T16:38:49Z
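The replication change above caps batch build time at `replBatchLimitSeconds` (5 s) in addition to the existing byte limit, so a slow trickle of ops can no longer delay application indefinitely. A hedged Python sketch of the same two-limit batching loop (constant names mirror the diff; everything else is illustrative, not mongod's actual code):

```python
import time

BATCH_LIMIT_BYTES = 100 * 1024 * 1024  # size cap, as in replBatchSizeBytes
BATCH_LIMIT_SECONDS = 5                # time cap added by the commit above


def build_batch(pop_op, clock=time.monotonic):
    """Accumulate ops until either the size or the time limit is hit."""
    ops, size = [], 0
    start = clock()
    while size < BATCH_LIMIT_BYTES:
        # don't wait more than five seconds building up a batch
        if ops and clock() - start > BATCH_LIMIT_SECONDS:
            break
        op = pop_op()
        if op is None:  # no more ops available right now
            break
        ops.append(op)
        size += len(op)
    return ops
```

Note the `ops and ...` guard: like the diff's `!ops.empty()` check, the loop never returns an empty batch solely because the timer expired.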
|
mmm a / cocos / scripting / lua - bindings / auto / api / Camera . lua <nl> ppp b / cocos / scripting / lua - bindings / auto / api / Camera . lua <nl> <nl> - - @ param self <nl> - - @ return Camera # Camera ret ( return value : cc . Camera ) <nl> <nl> mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm <nl> mmm Sets the position ( X , Y , and Z ) in its parent ' s coordinate system <nl> mmm @ function [ parent = # Camera ] setPosition3D <nl> mmm @ param self <nl> mmm @ param # vec3_table position <nl> mmm @ return Camera # Camera self ( return value : cc . Camera ) <nl> - <nl> return nil <nl> mmm a / cocos / scripting / lua - bindings / auto / api / Sprite3D . lua <nl> ppp b / cocos / scripting / lua - bindings / auto / api / Sprite3D . lua <nl> <nl> - - @ extend Node , BlendProtocol <nl> - - @ parent_module cc <nl> <nl> + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> + - - <nl> + - - @ function [ parent = # Sprite3D ] isForceDepthWrite <nl> + - - @ param self <nl> + - - @ return bool # bool ret ( return value : bool ) <nl> + <nl> mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> - - <nl> - - @ function [ parent = # Sprite3D ] setCullFaceEnabled <nl> <nl> - - @ param # int index <nl> - - @ return Mesh # Mesh ret ( return value : cc . Mesh ) <nl> <nl> + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> + - - Force to write to depth buffer , this is useful if you want to achieve effects like fading . <nl> + - - @ function [ parent = # Sprite3D ] setForceDepthWrite <nl> + - - @ param self <nl> + - - @ param # bool value <nl> + - - @ return Sprite3D # Sprite3D self ( return value : cc . Sprite3D ) <nl> + <nl> mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> - - get Mesh by Name , it returns the first one if there are more than one mesh with the same name <nl> - - @ function [ parent = # Sprite3D ] getMeshByName <nl> <nl> - - @ param # cc . GLProgramState glProgramState <nl> - - @ return Sprite3D # Sprite3D self ( return value : cc . 
Sprite3D ) <nl> <nl> + mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> + - - Executes an action , and returns the action that is executed . For Sprite3D special logic are needed to take care of Fading . < br > <nl> + - - This node becomes the action ' s target . Refer to Action : : getTarget ( ) < br > <nl> + - - warning Actions don ' t retain their target . < br > <nl> + - - return An Action pointer <nl> + - - @ function [ parent = # Sprite3D ] runAction <nl> + - - @ param self <nl> + - - @ param # cc . Action action <nl> + - - @ return Action # Action ret ( return value : cc . Action ) <nl> + <nl> mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - - <nl> - - just rember bind attributes <nl> - - @ function [ parent = # Sprite3D ] setGLProgram <nl> mmm a / cocos / scripting / lua - bindings / auto / lua_cocos2dx_3d_auto . cpp <nl> ppp b / cocos / scripting / lua - bindings / auto / lua_cocos2dx_3d_auto . cpp <nl> int lua_register_cocos2dx_3d_Skeleton3D ( lua_State * tolua_S ) <nl> return 1 ; <nl> } <nl> <nl> + int lua_cocos2dx_3d_Sprite3D_isForceDepthWrite ( lua_State * tolua_S ) <nl> + { <nl> + int argc = 0 ; <nl> + cocos2d : : Sprite3D * cobj = nullptr ; <nl> + bool ok = true ; <nl> + <nl> + # if COCOS2D_DEBUG > = 1 <nl> + tolua_Error tolua_err ; <nl> + # endif <nl> + <nl> + <nl> + # if COCOS2D_DEBUG > = 1 <nl> + if ( ! tolua_isusertype ( tolua_S , 1 , " cc . Sprite3D " , 0 , & tolua_err ) ) goto tolua_lerror ; <nl> + # endif <nl> + <nl> + cobj = ( cocos2d : : Sprite3D * ) tolua_tousertype ( tolua_S , 1 , 0 ) ; <nl> + <nl> + # if COCOS2D_DEBUG > = 1 <nl> + if ( ! cobj ) <nl> + { <nl> + tolua_error ( tolua_S , " invalid ' cobj ' in function ' lua_cocos2dx_3d_Sprite3D_isForceDepthWrite ' " , nullptr ) ; <nl> + return 0 ; <nl> + } <nl> + # endif <nl> + <nl> + argc = lua_gettop ( tolua_S ) - 1 ; <nl> + if ( argc = = 0 ) <nl> + { <nl> + if ( ! 
ok ) <nl> + { <nl> + tolua_error ( tolua_S , " invalid arguments in function ' lua_cocos2dx_3d_Sprite3D_isForceDepthWrite ' " , nullptr ) ; <nl> + return 0 ; <nl> + } <nl> + bool ret = cobj - > isForceDepthWrite ( ) ; <nl> + tolua_pushboolean ( tolua_S , ( bool ) ret ) ; <nl> + return 1 ; <nl> + } <nl> + luaL_error ( tolua_S , " % s has wrong number of arguments : % d , was expecting % d \ n " , " cc . Sprite3D : isForceDepthWrite " , argc , 0 ) ; <nl> + return 0 ; <nl> + <nl> + # if COCOS2D_DEBUG > = 1 <nl> + tolua_lerror : <nl> + tolua_error ( tolua_S , " # ferror in function ' lua_cocos2dx_3d_Sprite3D_isForceDepthWrite ' . " , & tolua_err ) ; <nl> + # endif <nl> + <nl> + return 0 ; <nl> + } <nl> int lua_cocos2dx_3d_Sprite3D_setCullFaceEnabled ( lua_State * tolua_S ) <nl> { <nl> int argc = 0 ; <nl> int lua_cocos2dx_3d_Sprite3D_getMeshByIndex ( lua_State * tolua_S ) <nl> <nl> return 0 ; <nl> } <nl> + int lua_cocos2dx_3d_Sprite3D_setForceDepthWrite ( lua_State * tolua_S ) <nl> + { <nl> + int argc = 0 ; <nl> + cocos2d : : Sprite3D * cobj = nullptr ; <nl> + bool ok = true ; <nl> + <nl> + # if COCOS2D_DEBUG > = 1 <nl> + tolua_Error tolua_err ; <nl> + # endif <nl> + <nl> + <nl> + # if COCOS2D_DEBUG > = 1 <nl> + if ( ! tolua_isusertype ( tolua_S , 1 , " cc . Sprite3D " , 0 , & tolua_err ) ) goto tolua_lerror ; <nl> + # endif <nl> + <nl> + cobj = ( cocos2d : : Sprite3D * ) tolua_tousertype ( tolua_S , 1 , 0 ) ; <nl> + <nl> + # if COCOS2D_DEBUG > = 1 <nl> + if ( ! cobj ) <nl> + { <nl> + tolua_error ( tolua_S , " invalid ' cobj ' in function ' lua_cocos2dx_3d_Sprite3D_setForceDepthWrite ' " , nullptr ) ; <nl> + return 0 ; <nl> + } <nl> + # endif <nl> + <nl> + argc = lua_gettop ( tolua_S ) - 1 ; <nl> + if ( argc = = 1 ) <nl> + { <nl> + bool arg0 ; <nl> + <nl> + ok & = luaval_to_boolean ( tolua_S , 2 , & arg0 , " cc . Sprite3D : setForceDepthWrite " ) ; <nl> + if ( ! 
ok ) <nl> + { <nl> + tolua_error ( tolua_S , " invalid arguments in function ' lua_cocos2dx_3d_Sprite3D_setForceDepthWrite ' " , nullptr ) ; <nl> + return 0 ; <nl> + } <nl> + cobj - > setForceDepthWrite ( arg0 ) ; <nl> + lua_settop ( tolua_S , 1 ) ; <nl> + return 1 ; <nl> + } <nl> + luaL_error ( tolua_S , " % s has wrong number of arguments : % d , was expecting % d \ n " , " cc . Sprite3D : setForceDepthWrite " , argc , 1 ) ; <nl> + return 0 ; <nl> + <nl> + # if COCOS2D_DEBUG > = 1 <nl> + tolua_lerror : <nl> + tolua_error ( tolua_S , " # ferror in function ' lua_cocos2dx_3d_Sprite3D_setForceDepthWrite ' . " , & tolua_err ) ; <nl> + # endif <nl> + <nl> + return 0 ; <nl> + } <nl> int lua_cocos2dx_3d_Sprite3D_getMeshByName ( lua_State * tolua_S ) <nl> { <nl> int argc = 0 ; <nl> int lua_register_cocos2dx_3d_Sprite3D ( lua_State * tolua_S ) <nl> tolua_cclass ( tolua_S , " Sprite3D " , " cc . Sprite3D " , " cc . Node " , nullptr ) ; <nl> <nl> tolua_beginmodule ( tolua_S , " Sprite3D " ) ; <nl> + tolua_function ( tolua_S , " isForceDepthWrite " , lua_cocos2dx_3d_Sprite3D_isForceDepthWrite ) ; <nl> tolua_function ( tolua_S , " setCullFaceEnabled " , lua_cocos2dx_3d_Sprite3D_setCullFaceEnabled ) ; <nl> tolua_function ( tolua_S , " setTexture " , lua_cocos2dx_3d_Sprite3D_setTexture ) ; <nl> tolua_function ( tolua_S , " getLightMask " , lua_cocos2dx_3d_Sprite3D_getLightMask ) ; <nl> int lua_register_cocos2dx_3d_Sprite3D ( lua_State * tolua_S ) <nl> tolua_function ( tolua_S , " removeAttachNode " , lua_cocos2dx_3d_Sprite3D_removeAttachNode ) ; <nl> tolua_function ( tolua_S , " getSkeleton " , lua_cocos2dx_3d_Sprite3D_getSkeleton ) ; <nl> tolua_function ( tolua_S , " getMeshByIndex " , lua_cocos2dx_3d_Sprite3D_getMeshByIndex ) ; <nl> + tolua_function ( tolua_S , " setForceDepthWrite " , lua_cocos2dx_3d_Sprite3D_setForceDepthWrite ) ; <nl> tolua_function ( tolua_S , " getMeshByName " , lua_cocos2dx_3d_Sprite3D_getMeshByName ) ; <nl> tolua_function ( tolua_S , " getAttachNode " 
, lua_cocos2dx_3d_Sprite3D_getAttachNode ) ; <nl> tolua_function ( tolua_S , " create " , lua_cocos2dx_3d_Sprite3D_create ) ; <nl> mmm a / cocos / scripting / lua - bindings / auto / lua_cocos2dx_3d_auto . hpp <nl> ppp b / cocos / scripting / lua - bindings / auto / lua_cocos2dx_3d_auto . hpp <nl> int register_all_cocos2dx_3d ( lua_State * tolua_S ) ; <nl> <nl> <nl> <nl> + <nl> + <nl> <nl> <nl> <nl>
|
Merge pull request from CocosRobot/update_lua_bindings_1422429538
|
cocos2d/cocos2d-x
|
757c4a6cdb4385db04986bfe7da99f01b164d2e6
|
2015-01-28T07:24:52Z
|
mmm a / dbms / src / Storages / StorageFactory . cpp <nl> ppp b / dbms / src / Storages / StorageFactory . cpp <nl> StoragePtr StorageFactory : : get ( <nl> } <nl> else if ( name = = " Dictionary " ) <nl> { <nl> - <nl> return StorageDictionary : : create ( <nl> table_name , context , query , columns , <nl> materialized_columns , alias_columns , column_defaults ) ; <nl>
|
Whitespace [#CLICKHOUSE-2].
|
ClickHouse/ClickHouse
|
e6739cc35d6aa30f6939146afc3e13b761c8acc5
|
2017-08-10T19:41:21Z
|
mmm a / include / swift / AST / DiagnosticsSema . def <nl> ppp b / include / swift / AST / DiagnosticsSema . def <nl> ERROR ( actor_isolated_witness_could_be_async_handler , none , <nl> " actor - isolated % 0 % 1 cannot be used to satisfy a protocol requirement ; " <nl> " did you mean to make it an asychronous handler ? " , <nl> ( DescriptiveDeclKind , DeclName ) ) <nl> + ERROR ( global_actor_isolated_requirement , none , <nl> + " % 0 % 1 must be isolated to the global actor % 2 to satisfy corresponding " <nl> + " requirement from protocol % 3 " , <nl> + ( DescriptiveDeclKind , DeclName , Type , Identifier ) ) <nl> + ERROR ( global_actor_isolated_witness , none , <nl> + " % 0 % 1 isolated to global actor % 2 can not satisfy corresponding " <nl> + " requirement from protocol % 3 " , <nl> + ( DescriptiveDeclKind , DeclName , Type , Identifier ) ) <nl> + ERROR ( global_actor_isolated_requirement_witness_conflict , none , <nl> + " % 0 % 1 isolated to global actor % 2 can not satisfy corresponding " <nl> + " requirement from protocol % 3 isolated to global actor % 4 " , <nl> + ( DescriptiveDeclKind , DeclName , Type , Identifier , Type ) ) <nl> <nl> ERROR ( actorisolated_let , none , <nl> " ' @ actorIsolated ' is meaningless on ' let ' declarations because " <nl> mmm a / lib / Sema / TypeCheckProtocol . cpp <nl> ppp b / lib / Sema / TypeCheckProtocol . cpp <nl> static void emitDeclaredHereIfNeeded ( DiagnosticEngine & diags , <nl> diags . diagnose ( value , diag : : decl_declared_here , value - > getName ( ) ) ; <nl> } <nl> <nl> + bool ConformanceChecker : : checkActorIsolation ( <nl> + ValueDecl * requirement , ValueDecl * witness ) { <nl> + / / Ensure that the witness is not actor - isolated in a manner that makes it <nl> + / / unsuitable as a witness . 
<nl> + Type witnessGlobalActor ; <nl> + switch ( auto witnessRestriction = <nl> + ActorIsolationRestriction : : forDeclaration ( witness ) ) { <nl> + case ActorIsolationRestriction : : ActorSelf : { <nl> + / / Actor - isolated witnesses cannot conform to protocol requirements . <nl> + bool canBeAsyncHandler = false ; <nl> + if ( auto witnessFunc = dyn_cast < FuncDecl > ( witness ) ) { <nl> + canBeAsyncHandler = ! witnessFunc - > isAsyncHandler ( ) & & <nl> + witnessFunc - > canBeAsyncHandler ( ) ; <nl> + } <nl> + auto diag = witness - > diagnose ( <nl> + canBeAsyncHandler <nl> + ? diag : : actor_isolated_witness_could_be_async_handler <nl> + : diag : : actor_isolated_witness , <nl> + witness - > getDescriptiveKind ( ) , witness - > getName ( ) ) ; <nl> + <nl> + if ( canBeAsyncHandler ) { <nl> + diag . fixItInsert ( <nl> + witness - > getAttributeInsertionLoc ( false ) , " @ asyncHandler " ) ; <nl> + } <nl> + <nl> + return true ; <nl> + } <nl> + <nl> + case ActorIsolationRestriction : : GlobalActor : { <nl> + / / Hang on to the global actor that ' s used for the witness . It will need <nl> + / / to match that of the requirement . <nl> + witnessGlobalActor = witness - > getDeclContext ( ) - > mapTypeIntoContext ( <nl> + witnessRestriction . getGlobalActor ( ) ) ; <nl> + break ; <nl> + } <nl> + <nl> + case ActorIsolationRestriction : : Unsafe : <nl> + case ActorIsolationRestriction : : LocalCapture : <nl> + break ; <nl> + <nl> + case ActorIsolationRestriction : : Unrestricted : <nl> + / / The witness is completely unrestricted , so ignore any annotations on <nl> + / / the requirement . <nl> + return false ; <nl> + } <nl> + <nl> + / / Check whether the requirement requires some particular actor isolation . 
<nl> + Type requirementGlobalActor ; <nl> + switch ( auto requirementIsolation = getActorIsolation ( requirement ) ) { <nl> + case ActorIsolation : : ActorInstance : <nl> + llvm_unreachable ( " There are not actor protocols " ) ; <nl> + <nl> + case ActorIsolation : : GlobalActor : { <nl> + auto requirementSubs = SubstitutionMap : : getProtocolSubstitutions ( <nl> + Proto , Adoptee , ProtocolConformanceRef ( Conformance ) ) ; <nl> + requirementGlobalActor = requirementIsolation . getGlobalActor ( ) <nl> + . subst ( requirementSubs ) ; <nl> + break ; <nl> + } <nl> + <nl> + case ActorIsolation : : Independent : <nl> + case ActorIsolation : : Unspecified : <nl> + break ; <nl> + } <nl> + <nl> + / / If neither has a global actor , we ' re done . <nl> + if ( ! witnessGlobalActor & & ! requirementGlobalActor ) <nl> + return false ; <nl> + <nl> + / / If the witness has a global actor but the requirement does not , we have <nl> + / / an isolation error . <nl> + if ( witnessGlobalActor & & ! requirementGlobalActor ) { <nl> + witness - > diagnose ( <nl> + diag : : global_actor_isolated_witness , witness - > getDescriptiveKind ( ) , <nl> + witness - > getName ( ) , witnessGlobalActor , Proto - > getName ( ) ) ; <nl> + requirement - > diagnose ( diag : : decl_declared_here , requirement - > getName ( ) ) ; <nl> + return true ; <nl> + } <nl> + <nl> + / / If the requirement has a global actor but the witness does not , we have <nl> + / / an isolation error . <nl> + / / <nl> + / / FIXME : Within a module , this will be an inference rule . <nl> + if ( requirementGlobalActor & & ! witnessGlobalActor ) { <nl> + witness - > diagnose ( <nl> + diag : : global_actor_isolated_requirement , witness - > getDescriptiveKind ( ) , <nl> + witness - > getName ( ) , requirementGlobalActor , Proto - > getName ( ) ) <nl> + . fixItInsert ( <nl> + witness - > getAttributeInsertionLoc ( / * forModifier = * / false ) , <nl> + " @ " + requirementGlobalActor . 
getString ( ) ) ; <nl> + requirement - > diagnose ( diag : : decl_declared_here , requirement - > getName ( ) ) ; <nl> + return true ; <nl> + } <nl> + <nl> + / / If both have global actors but they differ , this is an isolation error . <nl> + if ( ! witnessGlobalActor - > isEqual ( requirementGlobalActor ) ) { <nl> + witness - > diagnose ( <nl> + diag : : global_actor_isolated_requirement_witness_conflict , <nl> + witness - > getDescriptiveKind ( ) , witness - > getName ( ) , witnessGlobalActor , <nl> + Proto - > getName ( ) , requirementGlobalActor ) ; <nl> + requirement - > diagnose ( diag : : decl_declared_here , requirement - > getName ( ) ) ; <nl> + return true ; <nl> + } <nl> + <nl> + / / Everything is okay . <nl> + return false ; <nl> + } <nl> + <nl> bool ConformanceChecker : : checkObjCTypeErasedGenerics ( <nl> AssociatedTypeDecl * assocType , <nl> Type type , <nl> void ConformanceChecker : : resolveValueWitnesses ( ) { <nl> return ; <nl> } <nl> <nl> - / / Check for actor - isolation consistency . <nl> - switch ( auto restriction = <nl> - ActorIsolationRestriction : : forDeclaration ( witness ) ) { <nl> - case ActorIsolationRestriction : : ActorSelf : { <nl> - / / Actor - isolated witnesses cannot conform to protocol requirements . <nl> - bool canBeAsyncHandler = false ; <nl> - if ( auto witnessFunc = dyn_cast < FuncDecl > ( witness ) ) { <nl> - canBeAsyncHandler = ! witnessFunc - > isAsyncHandler ( ) & & <nl> - witnessFunc - > canBeAsyncHandler ( ) ; <nl> - } <nl> - auto diag = witness - > diagnose ( <nl> - canBeAsyncHandler <nl> - ? diag : : actor_isolated_witness_could_be_async_handler <nl> - : diag : : actor_isolated_witness , <nl> - witness - > getDescriptiveKind ( ) , witness - > getName ( ) ) ; <nl> - <nl> - if ( canBeAsyncHandler ) { <nl> - diag . 
fixItInsert ( <nl> - witness - > getAttributeInsertionLoc ( false ) , " @ asyncHandler " ) ; <nl> - } <nl> + if ( checkActorIsolation ( requirement , witness ) ) <nl> return ; <nl> - } <nl> - <nl> - case ActorIsolationRestriction : : GlobalActor : { <nl> - / / FIXME : Check against the requirement . This needs serious refactoring . <nl> - break ; <nl> - } <nl> - <nl> - case ActorIsolationRestriction : : Unrestricted : <nl> - case ActorIsolationRestriction : : Unsafe : <nl> - case ActorIsolationRestriction : : LocalCapture : <nl> - break ; <nl> - } <nl> <nl> / / Objective - C checking for @ objc requirements . <nl> if ( requirement - > isObjC ( ) & & <nl> mmm a / lib / Sema / TypeCheckProtocol . h <nl> ppp b / lib / Sema / TypeCheckProtocol . h <nl> class ConformanceChecker : public WitnessChecker { <nl> Type type , <nl> TypeDecl * typeDecl ) ; <nl> <nl> + / / / Check that the witness and requirement have compatible actor contexts . <nl> + / / / <nl> + / / / \ returns true if an error occurred , false otherwise . <nl> + bool checkActorIsolation ( ValueDecl * requirement , ValueDecl * witness ) ; <nl> + <nl> / / / Record a type witness . <nl> / / / <nl> / / / \ param assocType The associated type whose witness is being recorded . <nl> new file mode 100644 <nl> index 000000000000 . . 8144892e1276 <nl> mmm / dev / null <nl> ppp b / test / decl / class / actor / global_actor_conformance . 
swift <nl> <nl> + / / RUN : % target - typecheck - verify - swift - enable - experimental - concurrency <nl> + / / REQUIRES : concurrency <nl> + <nl> + import _Concurrency <nl> + <nl> + actor class SomeActor { } <nl> + <nl> + @ globalActor <nl> + struct GlobalActor { <nl> + static var shared : SomeActor { SomeActor ( ) } <nl> + } <nl> + <nl> + @ globalActor <nl> + struct GenericGlobalActor < T > { <nl> + static var shared : SomeActor { SomeActor ( ) } <nl> + } <nl> + <nl> + protocol P1 { <nl> + associatedtype Assoc <nl> + <nl> + @ GlobalActor func method1 ( ) / / expected - note { { declared here } } <nl> + @ GenericGlobalActor < Int > func method2 ( ) / / expected - note { { declared here } } <nl> + @ GenericGlobalActor < Assoc > func method3 ( ) <nl> + func method4 ( ) / / expected - note { { declared here } } <nl> + } <nl> + <nl> + protocol P2 { <nl> + @ GlobalActor func asyncMethod1 ( ) async <nl> + @ GenericGlobalActor < Int > func asyncMethod2 ( ) async <nl> + func asyncMethod3 ( ) async <nl> + } <nl> + <nl> + class C1 : P1 , P2 { <nl> + typealias Assoc = String <nl> + <nl> + / / FIXME : This will be inferred <nl> + func method1 ( ) { } / / expected - error { { instance method ' method1 ( ) ' must be isolated to the global actor ' GlobalActor ' to satisfy corresponding requirement from protocol ' P1 ' } } { { 3 - 3 = @ GlobalActor } } <nl> + <nl> + @ GenericGlobalActor < String > func method2 ( ) { } / / expected - error { { instance method ' method2 ( ) ' isolated to global actor ' GenericGlobalActor < String > ' can not satisfy corresponding requirement from protocol ' P1 ' isolated to global actor ' GenericGlobalActor < Int > ' } } <nl> + @ GenericGlobalActor < String > func method3 ( ) { } <nl> + @ GlobalActor func method4 ( ) { } / / expected - error { { instance method ' method4 ( ) ' isolated to global actor ' GlobalActor ' can not satisfy corresponding requirement from protocol ' P1 ' } } <nl> + <nl> + / / Okay : we can ignore the mismatch in global 
actor types for ' async ' methods . <nl> + func asyncMethod1 ( ) async { } <nl> + @ GenericGlobalActor < String > func asyncMethod2 ( ) async { } <nl> + @ GlobalActor func asyncMethod3 ( ) async { } <nl> + } <nl> + <nl> + <nl> + class C2 : P1 { <nl> + typealias Assoc = Int <nl> + <nl> + / / Okay : we can ignore the mismatch in global actor types for ' asyncHandler ' <nl> + / / methods . <nl> + @ asyncHandler func method1 ( ) { } <nl> + @ asyncHandler func method2 ( ) { } <nl> + @ asyncHandler func method3 ( ) { } <nl> + @ asyncHandler func method4 ( ) { } <nl> + } <nl>
|
[Concurrency] Implement global actor isolation checking for conformances.
|
apple/swift
|
2d8f0733345acbb2b1a2308510dd1ba1308fb645
|
2020-10-12T23:38:20Z
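The new `checkActorIsolation` above reduces to a small decision table: compare the witness's global actor with the requirement's and emit one of three diagnostics, or accept. A hedged Python sketch of just that table (the diagnostic identifiers mirror the diff; the function itself is illustrative, not the Swift type checker):

```python
def check_actor_isolation(witness_actor, requirement_actor):
    """Return the diagnostic the conformance checker would emit, or None
    when the witness satisfies the requirement.

    Each argument is the name of a global actor, or None when that
    declaration is not isolated to any global actor.
    """
    if witness_actor is None and requirement_actor is None:
        return None  # neither side is isolated: nothing to check
    if witness_actor is not None and requirement_actor is None:
        # isolated witness cannot satisfy an unisolated requirement
        return 'global_actor_isolated_witness'
    if requirement_actor is not None and witness_actor is None:
        # unisolated witness for an isolated requirement
        # (per the diff's FIXME, within a module this becomes inference)
        return 'global_actor_isolated_requirement'
    if witness_actor != requirement_actor:
        return 'global_actor_isolated_requirement_witness_conflict'
    return None  # same global actor on both sides: okay
```

The cases line up with the expected errors in `global_actor_conformance.swift`, e.g. `method2` hits the conflict case because `GenericGlobalActor<String>` differs from the requirement's `GenericGlobalActor<Int>`.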
|
mmm a / dlib / CMakeLists . txt <nl> ppp b / dlib / CMakeLists . txt <nl> if ( NOT TARGET dlib ) <nl> find_package ( CUDA 7 . 0 ) <nl> <nl> <nl> - if ( CUDA_FOUND ) <nl> + if ( CUDA_FOUND AND COMPILER_CAN_DO_CPP_11 ) <nl> <nl> set ( CUDA_HOST_COMPILATION_CPP ON ) <nl> # Note that we add __STRICT_ANSI__ to avoid freaking out nvcc with gcc specific <nl> mmm a / dlib / use_cpp_11 . cmake <nl> ppp b / dlib / use_cpp_11 . cmake <nl> if ( CMAKE_VERSION VERSION_LESS " 3 . 1 " ) <nl> message ( STATUS " C + + 11 activated . " ) <nl> add_global_compiler_switch ( " - std = gnu + + 11 " ) <nl> set ( COMPILER_CAN_DO_CPP_11 1 ) <nl> - elseif ( GCC_VERSION VERSION_GREATER 4 . 3 OR GCC_VERSION VERSION_EQUAL 4 . 3 ) <nl> - message ( STATUS " C + + 0x activated . " ) <nl> - add_global_compiler_switch ( " - std = gnu + + 0x " ) <nl> - set ( COMPILER_CAN_DO_CPP_11 1 ) <nl> endif ( ) <nl> elseif ( " $ { CMAKE_CXX_COMPILER_ID } " STREQUAL " Clang " ) <nl> execute_process ( COMMAND $ { CMAKE_CXX_COMPILER } - - version OUTPUT_VARIABLE clang_full_version_string ) <nl>
|
Apparently you need gcc 4.7 or newer for this stuff.
|
davisking/dlib
|
3a65e1c74cb720f181d015c3d59b171960c55d92
|
2015-10-18T03:07:23Z
|
mmm a / arangod / V8Server / ApplicationV8 . cpp <nl> ppp b / arangod / V8Server / ApplicationV8 . cpp <nl> bool ApplicationV8 : : prepare ( ) { <nl> } <nl> <nl> / / add v8 options <nl> - if ( _v8Options . size ( ) > 0 ) { <nl> + if ( ! _v8Options . empty ( ) ) { <nl> LOG_INFO ( " using V8 options ' % s ' " , _v8Options . c_str ( ) ) ; <nl> v8 : : V8 : : SetFlagsFromString ( _v8Options . c_str ( ) , ( int ) _v8Options . size ( ) ) ; <nl> } <nl> mmm a / arangosh / V8Client / arangosh . cpp <nl> ppp b / arangosh / V8Client / arangosh . cpp <nl> static string StartupPath = " " ; <nl> <nl> static bool UseCurrentModulePath = true ; <nl> <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + / / / @ brief options to pass to V8 <nl> + / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> + static std : : string V8Options ; <nl> + <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief javascript files to execute <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> static void JS_ImportCsvFile ( const v8 : : FunctionCallbackInfo < v8 : : Value > & args ) { <nl> / / extract the filename <nl> v8 : : String : : Utf8Value filename ( args [ 0 ] ) ; <nl> <nl> - if ( * filename = = 0 ) { <nl> - TRI_V8_THROW_TYPE_ERROR ( " < filename > must be an UTF - 8 filename " ) ; <nl> + if ( * filename = = nullptr ) { <nl> + TRI_V8_THROW_TYPE_ERROR ( " < filename > must be a UTF - 8 filename " ) ; <nl> } <nl> <nl> v8 : : String : : Utf8Value collection ( args [ 1 ] ) ; <nl> <nl> - if ( * collection = = 0 ) { <nl> - 
TRI_V8_THROW_TYPE_ERROR ( " < collection > must be an UTF - 8 filename " ) ; <nl> + if ( * collection = = nullptr ) { <nl> + TRI_V8_THROW_TYPE_ERROR ( " < collection > must be a UTF - 8 filename " ) ; <nl> } <nl> <nl> / / extract the options <nl> static void JS_ImportJsonFile ( const v8 : : FunctionCallbackInfo < v8 : : Value > & args ) <nl> / / extract the filename <nl> v8 : : String : : Utf8Value filename ( args [ 0 ] ) ; <nl> <nl> - if ( * filename = = 0 ) { <nl> - TRI_V8_THROW_TYPE_ERROR ( " < filename > must be an UTF - 8 filename " ) ; <nl> + if ( * filename = = nullptr ) { <nl> + TRI_V8_THROW_TYPE_ERROR ( " < filename > must be a UTF - 8 filename " ) ; <nl> } <nl> <nl> v8 : : String : : Utf8Value collection ( args [ 1 ] ) ; <nl> <nl> - if ( * collection = = 0 ) { <nl> - TRI_V8_THROW_TYPE_ERROR ( " < collection > must be an UTF8 filename " ) ; <nl> + if ( * collection = = nullptr ) { <nl> + TRI_V8_THROW_TYPE_ERROR ( " < collection > must be a UTF - 8 filename " ) ; <nl> } <nl> <nl> <nl> enum WRAP_CLASS_TYPES { WRAP_TYPE_CONNECTION = 1 } ; <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief parses the program options <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> + <nl> typedef enum __eRunMode { <nl> eInteractive , <nl> eExecuteScript , <nl> static vector < string > ParseProgramOptions ( int argc , char * args [ ] , eRunMode * run <nl> ( " javascript . startup - directory " , & StartupPath , " startup paths containing the JavaScript files " ) <nl> ( " javascript . unit - tests " , & UnitTests , " do not start as shell , run unit tests instead " ) <nl> ( " javascript . current - module - directory " , & UseCurrentModulePath , " add current directory to module path " ) <nl> + ( " javascript . 
v8 - options " , & V8Options , " options to pass to v8 " ) <nl> ( " jslint " , & JsLint , " do not start as shell , run jslint instead " ) <nl> ; <nl> <nl> static vector < string > ParseProgramOptions ( int argc , char * args [ ] , eRunMode * run <nl> <nl> BaseClient . parse ( options , description , " < options > " , argc , args , conf ) ; <nl> <nl> - / / set V8 options <nl> - v8 : : V8 : : SetFlagsFromCommandLine ( & argc , args , true ) ; <nl> - <nl> / / derive other paths from ` - - javascript . directory ` <nl> StartupModules = StartupPath + TRI_DIR_SEPARATOR_STR + " client " + TRI_DIR_SEPARATOR_STR + " modules ; " + <nl> StartupPath + TRI_DIR_SEPARATOR_STR + " common " + TRI_DIR_SEPARATOR_STR + " modules ; " + <nl> int run ( v8 : : Isolate * isolate , eRunMode runMode , bool promptError ) { <nl> return ( ok ) ? EXIT_SUCCESS : EXIT_FAILURE ; <nl> } <nl> <nl> + class BufferAllocator : public v8 : : ArrayBuffer : : Allocator { <nl> + public : <nl> + virtual void * Allocate ( size_t length ) { <nl> + void * data = AllocateUninitialized ( length ) ; <nl> + return data = = nullptr ? data : memset ( data , 0 , length ) ; <nl> + } <nl> + virtual void * AllocateUninitialized ( size_t length ) { return malloc ( length ) ; } <nl> + virtual void Free ( void * data , size_t ) { free ( data ) ; } <nl> + } ; <nl> <nl> / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / / <nl> / / / @ brief main <nl> int main ( int argc , char * args [ ] ) { <nl> <nl> v8 : : V8 : : InitializeICU ( ) ; <nl> v8 : : Platform * platform = v8 : : platform : : CreateDefaultPlatform ( ) ; <nl> + <nl> v8 : : V8 : : InitializePlatform ( platform ) ; <nl> + / / set V8 options <nl> + if ( ! V8Options . empty ( ) ) { <nl> + / / explicit option - - javascript . v8 - options used <nl> + v8 : : V8 : : SetFlagsFromString ( V8Options . c_str ( ) , ( int ) V8Options . 
size ( ) ) ; <nl> + } <nl> + else { <nl> + / / no explicit option used , now pass all command - line arguments to v8 <nl> + v8 : : V8 : : SetFlagsFromCommandLine ( & argc , args , true ) ; <nl> + } <nl> + <nl> + BufferAllocator bufferAllocator ; <nl> + v8 : : V8 : : SetArrayBufferAllocator ( & bufferAllocator ) ; <nl> + <nl> v8 : : Isolate * isolate = v8 : : Isolate : : New ( ) ; <nl> isolate - > Enter ( ) ; <nl> { <nl> mmm a / js / common / bootstrap / module - internal . js <nl> ppp b / js / common / bootstrap / module - internal . js <nl> <nl> <nl> context . output + = " undefined " ; <nl> <nl> - <nl> if ( useColor ) { <nl> context . output + = colors . COLOR_RESET ; <nl> } <nl> <nl> context . output + = colors . COLOR_RESET ; <nl> } <nl> } <nl> + else if ( typeof ( value ) = = = " symbol " ) { <nl> + if ( useColor ) { <nl> + context . output + = colors . COLOR_NULL ; <nl> + } <nl> + <nl> + context . output + = value . toString ( ) ; <nl> + <nl> + if ( useColor ) { <nl> + context . output + = colors . COLOR_RESET ; <nl> + } <nl> + } <nl> else { <nl> context . output + = String ( value ) ; <nl> } <nl>
|
ES6
|
arangodb/arangodb
|
d358b8008071f9cf8a2233b47a7650e3e85463b7
|
2014-12-20T03:02:24Z
|
mmm a / include / caffe / filler . hpp <nl> ppp b / include / caffe / filler . hpp <nl> <nl> <nl> namespace caffe { <nl> <nl> + / / / @ brief Fills a Blob with constant or randomly - generated data . <nl> template < typename Dtype > <nl> class Filler { <nl> public : <nl> class Filler { <nl> } ; / / class Filler <nl> <nl> <nl> + / / / @ brief Fills a Blob with constant values @ f $ x = 0 @ f $ . <nl> template < typename Dtype > <nl> class ConstantFiller : public Filler < Dtype > { <nl> public : <nl> class ConstantFiller : public Filler < Dtype > { <nl> } <nl> } ; <nl> <nl> + / / / @ brief Fills a Blob with uniformly distributed values @ f $ x \ sim U ( a , b ) @ f $ . <nl> template < typename Dtype > <nl> class UniformFiller : public Filler < Dtype > { <nl> public : <nl> class UniformFiller : public Filler < Dtype > { <nl> } <nl> } ; <nl> <nl> + / / / @ brief Fills a Blob with Gaussian - distributed values @ f $ x = a @ f $ . <nl> template < typename Dtype > <nl> class GaussianFiller : public Filler < Dtype > { <nl> public : <nl> class GaussianFiller : public Filler < Dtype > { <nl> shared_ptr < SyncedMemory > rand_vec_ ; <nl> } ; <nl> <nl> + / * * @ brief Fills a Blob with values @ f $ x \ in [ 0 , 1 ] @ f $ <nl> + * such that @ f $ \ forall i \ sum_j x_ { ij } = 1 @ f $ . <nl> + * / <nl> template < typename Dtype > <nl> class PositiveUnitballFiller : public Filler < Dtype > { <nl> public : <nl> class PositiveUnitballFiller : public Filler < Dtype > { <nl> } <nl> } ; <nl> <nl> - / / A filler based on the paper [ Bengio and Glorot 2010 ] : Understanding <nl> - / / the difficulty of training deep feedforward neuralnetworks , but does not <nl> - / / use the fan_out value . <nl> - / / <nl> - / / It fills the incoming matrix by randomly sampling uniform data from <nl> - / / [ - scale , scale ] where scale = sqrt ( 3 / fan_in ) where fan_in is the number <nl> - / / of input nodes . 
You should make sure the input blob has shape ( num , a , b , c ) <nl> - / / where a * b * c = fan_in . <nl> + / * * <nl> + * @ brief Fills a Blob with values @ f $ x \ sim U ( - a , + a ) @ f $ where @ f $ a @ f $ <nl> + * is set inversely proportional to the number of incoming nodes . <nl> + * <nl> + * A Filler based on the paper [ Bengio and Glorot 2010 ] : Understanding <nl> + * the difficulty of training deep feedforward neuralnetworks , but does not <nl> + * use the fan_out value . <nl> + * <nl> + * It fills the incoming matrix by randomly sampling uniform data from <nl> + * [ - scale , scale ] where scale = sqrt ( 3 / fan_in ) where fan_in is the number <nl> + * of input nodes . You should make sure the input blob has shape ( num , a , b , c ) <nl> + * where a * b * c = fan_in . <nl> + * <nl> + * TODO ( dox ) : make notation in above comment consistent with rest & use LaTeX . <nl> + * / <nl> template < typename Dtype > <nl> class XavierFiller : public Filler < Dtype > { <nl> public : <nl> class XavierFiller : public Filler < Dtype > { <nl> } ; <nl> <nl> <nl> - / / A function to get a specific filler from the specification given in <nl> - / / FillerParameter . Ideally this would be replaced by a factory pattern , <nl> - / / but we will leave it this way for now . <nl> + / * * <nl> + * @ brief Get a specific filler from the specification given in FillerParameter . <nl> + * <nl> + * Ideally this would be replaced by a factory pattern , but we will leave it <nl> + * this way for now . <nl> + * / <nl> template < typename Dtype > <nl> Filler < Dtype > * GetFiller ( const FillerParameter & param ) { <nl> const std : : string & type = param . type ( ) ; <nl>
|
filler.hpp: add brief filler descriptions
|
BVLC/caffe
|
81eb2ebf0e8ab407c01d4702e6aaa17fc51108eb
|
2014-09-03T17:59:24Z
|
mmm a / x64_dbg_dbg / x64_dbg . cpp <nl> ppp b / x64_dbg_dbg / x64_dbg . cpp <nl> extern " C " DLL_EXPORT const char * _dbg_dbginit ( ) <nl> strcpy_s ( dbbasepath , dir ) ; / / debug directory <nl> strcat_s ( dbbasepath , " \ \ db " ) ; <nl> CreateDirectoryW ( StringUtils : : Utf8ToUtf16 ( dbbasepath ) . c_str ( ) , 0 ) ; / / create database directory <nl> - strcpy_s ( szSymbolCachePath , dir ) ; <nl> - strcat_s ( szSymbolCachePath , " \ \ symbols " ) ; <nl> + if ( ! BridgeSettingGet ( " Symbols " , " CachePath " , szSymbolCachePath ) ) <nl> + { <nl> + strcpy_s ( szSymbolCachePath , dir ) ; <nl> + strcat_s ( szSymbolCachePath , " \ \ symbols " ) ; <nl> + BridgeSettingSet ( " Symbols " , " CachePath " , szSymbolCachePath ) ; <nl> + } <nl> SetCurrentDirectoryW ( StringUtils : : Utf8ToUtf16 ( dir ) . c_str ( ) ) ; <nl> dputs ( " Allocating message stack . . . " ) ; <nl> gMsgStack = MsgAllocStack ( ) ; <nl> mmm a / x64_dbg_gui / Project / Src / Gui / SettingsDialog . cpp <nl> ppp b / x64_dbg_gui / Project / Src / Gui / SettingsDialog . cpp <nl> void SettingsDialog : : LoadSettings ( ) <nl> { <nl> ui - > chkSetJIT - > setDisabled ( true ) ; <nl> ui - > chkConfirmBeforeAtt - > setDisabled ( true ) ; <nl> - ui - > lbladminwarning - > setText ( QString ( " Warning : Run the debugger as Admin to enable JIT . " ) ) ; <nl> + ui - > lblAdminWarning - > setText ( QString ( " < font color = \ " red \ " > < b > Warning < / b > < / font > : Run the debugger as Admin to enable JIT . " ) ) ; <nl> } <nl> + else <nl> + ui - > lblAdminWarning - > setText ( " " ) ; <nl> } <nl> + char setting [ MAX_SETTING_SIZE ] = " " ; <nl> + if ( BridgeSettingGet ( " Symbols " , " DefaultStore " , setting ) ) <nl> + ui - > editSymbolStore - > setText ( QString ( setting ) ) ; <nl> + else <nl> + { <nl> + QString defaultStore = " http : / / msdl . microsoft . 
com / download / symbols " ; <nl> + ui - > editSymbolStore - > setText ( defaultStore ) ; <nl> + BridgeSettingSet ( " Symbols " , " DefaultStore " , defaultStore . toUtf8 ( ) . constData ( ) ) ; <nl> + } <nl> + if ( BridgeSettingGet ( " Symbols " , " CachePath " , setting ) ) <nl> + ui - > editSymbolCache - > setText ( QString ( setting ) ) ; <nl> <nl> bJitOld = settings . miscSetJIT ; <nl> bJitAutoOld = settings . miscSetJITAuto ; <nl> void SettingsDialog : : SaveSettings ( ) <nl> DbgCmdExecDirect ( " setjitauto off " ) ; <nl> } <nl> } <nl> + if ( settings . miscSymbolStore ) <nl> + BridgeSettingSet ( " Symbols " , " DefaultStore " , ui - > editSymbolStore - > text ( ) . toUtf8 ( ) . constData ( ) ) ; <nl> + if ( settings . miscSymbolCache ) <nl> + BridgeSettingSet ( " Symbols " , " CachePath " , ui - > editSymbolCache - > text ( ) . toUtf8 ( ) . constData ( ) ) ; <nl> <nl> Config ( ) - > load ( ) ; <nl> DbgSettingsUpdated ( ) ; <nl> void SettingsDialog : : on_chkTabBetweenMnemonicAndArguments_stateChanged ( int arg1 ) <nl> { <nl> settings . disasmTabBetweenMnemonicAndArguments = arg1 = = Qt : : Checked ; <nl> } <nl> + <nl> + void SettingsDialog : : on_editSymbolStore_textEdited ( const QString & arg1 ) <nl> + { <nl> + settings . miscSymbolStore = true ; <nl> + } <nl> + <nl> + void SettingsDialog : : on_editSymbolCache_textEdited ( const QString & arg1 ) <nl> + { <nl> + settings . miscSymbolCache = true ; <nl> + } <nl> mmm a / x64_dbg_gui / Project / Src / Gui / SettingsDialog . h <nl> ppp b / x64_dbg_gui / Project / Src / Gui / SettingsDialog . 
h <nl> private slots : <nl> / / Misc tab <nl> void on_chkSetJIT_stateChanged ( int arg1 ) ; <nl> void on_chkConfirmBeforeAtt_stateChanged ( int arg1 ) ; <nl> + void on_editSymbolStore_textEdited ( const QString & arg1 ) ; <nl> + void on_editSymbolCache_textEdited ( const QString & arg1 ) ; <nl> <nl> private : <nl> / / enums <nl> private slots : <nl> / / Misc Tab <nl> bool miscSetJIT ; <nl> bool miscSetJITAuto ; <nl> + bool miscSymbolStore ; <nl> + bool miscSymbolCache ; <nl> } ; <nl> <nl> / / variables <nl> mmm a / x64_dbg_gui / Project / Src / Gui / SettingsDialog . ui <nl> ppp b / x64_dbg_gui / Project / Src / Gui / SettingsDialog . ui <nl> <nl> < attribute name = " title " > <nl> < string > Misc < / string > <nl> < / attribute > <nl> - < layout class = " QVBoxLayout " name = " verticalLayout_3 " > <nl> - < item > <nl> - < widget class = " QCheckBox " name = " chkSetJIT " > <nl> - < property name = " text " > <nl> - < string > Set x64dbg as Just In Time Debugger < / string > <nl> - < / property > <nl> - < / widget > <nl> - < / item > <nl> - < item > <nl> - < layout class = " QHBoxLayout " name = " horizontalLayout " > <nl> - < item > <nl> - < widget class = " QLabel " name = " label " > <nl> - < property name = " text " > <nl> - < string > JIT : < / string > <nl> - < / property > <nl> - < / widget > <nl> - < / item > <nl> - < item > <nl> - < widget class = " QLineEdit " name = " editJIT " > <nl> - < property name = " readOnly " > <nl> - < bool > true < / bool > <nl> - < / property > <nl> - < / widget > <nl> - < / item > <nl> - < / layout > <nl> - < / item > <nl> - < item > <nl> - < widget class = " QCheckBox " name = " chkConfirmBeforeAtt " > <nl> - < property name = " text " > <nl> - < string > Confirm before attaching < / string > <nl> - < / property > <nl> - < / widget > <nl> - < / item > <nl> - < item > <nl> - < spacer name = " verticalSpacer_3 " > <nl> - < property name = " orientation " > <nl> - < enum > Qt : : Vertical < / enum > <nl> - < / property > <nl> 
- < property name = " sizeHint " stdset = " 0 " > <nl> - < size > <nl> - < width > 20 < / width > <nl> - < height > 40 < / height > <nl> - < / size > <nl> - < / property > <nl> - < / spacer > <nl> - < / item > <nl> - < item > <nl> - < widget class = " QLabel " name = " lbladminwarning " > <nl> - < property name = " text " > <nl> - < string / > <nl> - < / property > <nl> - < / widget > <nl> - < / item > <nl> - < / layout > <nl> + < widget class = " QWidget " name = " layoutWidget " > <nl> + < property name = " geometry " > <nl> + < rect > <nl> + < x > 11 < / x > <nl> + < y > 13 < / y > <nl> + < width > 341 < / width > <nl> + < height > 145 < / height > <nl> + < / rect > <nl> + < / property > <nl> + < layout class = " QVBoxLayout " name = " verticalLayout_9 " > <nl> + < item > <nl> + < layout class = " QHBoxLayout " name = " horizontalLayout_4 " > <nl> + < item > <nl> + < layout class = " QVBoxLayout " name = " verticalLayout_3 " > <nl> + < item > <nl> + < widget class = " QLabel " name = " label_2 " > <nl> + < property name = " text " > <nl> + < string > Symbol Store : < / string > <nl> + < / property > <nl> + < / widget > <nl> + < / item > <nl> + < item > <nl> + < widget class = " QLabel " name = " label_3 " > <nl> + < property name = " text " > <nl> + < string > Symbol Path : < / string > <nl> + < / property > <nl> + < / widget > <nl> + < / item > <nl> + < / layout > <nl> + < / item > <nl> + < item > <nl> + < layout class = " QVBoxLayout " name = " verticalLayout_8 " > <nl> + < item > <nl> + < widget class = " QLineEdit " name = " editSymbolStore " / > <nl> + < / item > <nl> + < item > <nl> + < widget class = " QLineEdit " name = " editSymbolCache " / > <nl> + < / item > <nl> + < / layout > <nl> + < / item > <nl> + < / layout > <nl> + < / item > <nl> + < item > <nl> + < widget class = " QCheckBox " name = " chkSetJIT " > <nl> + < property name = " text " > <nl> + < string > Set x64dbg as Just In Time Debugger < / string > <nl> + < / property > <nl> + < / widget > 
<nl> + < / item > <nl> + < item > <nl> + < layout class = " QHBoxLayout " name = " horizontalLayout " > <nl> + < item > <nl> + < widget class = " QLabel " name = " label " > <nl> + < property name = " text " > <nl> + < string > JIT : < / string > <nl> + < / property > <nl> + < / widget > <nl> + < / item > <nl> + < item > <nl> + < widget class = " QLineEdit " name = " editJIT " > <nl> + < property name = " readOnly " > <nl> + < bool > true < / bool > <nl> + < / property > <nl> + < / widget > <nl> + < / item > <nl> + < / layout > <nl> + < / item > <nl> + < item > <nl> + < widget class = " QCheckBox " name = " chkConfirmBeforeAtt " > <nl> + < property name = " text " > <nl> + < string > Confirm before attaching < / string > <nl> + < / property > <nl> + < / widget > <nl> + < / item > <nl> + < item > <nl> + < widget class = " QLabel " name = " lblAdminWarning " > <nl> + < property name = " text " > <nl> + < string > & lt ; font color = & quot ; red & quot ; & gt ; DIE SCUM ! & lt ; / font & gt ; < / string > <nl> + < / property > <nl> + < / widget > <nl> + < / item > <nl> + < / layout > <nl> + < / widget > <nl> < / widget > <nl> < / widget > <nl> < / item > <nl> mmm a / x64_dbg_gui / Project / Src / Gui / StatusLabel . cpp <nl> ppp b / x64_dbg_gui / Project / Src / Gui / StatusLabel . cpp <nl> <nl> # include " StatusLabel . h " <nl> + # include < QTextDocument > <nl> <nl> StatusLabel : : StatusLabel ( QStatusBar * parent ) : QLabel ( parent ) <nl> { <nl> void StatusLabel : : logUpdate ( QString message ) <nl> / / only show the last line in the status label <nl> QStringList lineList = labelText . split ( QChar ( ' \ n ' ) , QString : : SkipEmptyParts ) ; <nl> if ( lineList . size ( ) ) <nl> - setText ( lineList [ lineList . length ( ) - 1 ] ) ; <nl> + setText ( Qt : : convertFromPlainText ( lineList [ lineList . 
length ( ) - 1 ] ) ) ; <nl> else <nl> - setText ( labelText ) ; <nl> + setText ( Qt : : convertFromPlainText ( labelText ) ) ; <nl> this - > repaint ( ) ; <nl> } <nl>
|
DBG: resolved issue
|
x64dbg/x64dbg
|
10ef4a841f84c5e85eb950cfa7721bdd2ac821b8
|
2015-07-13T03:55:21Z
|
mmm a / vendor / brightray <nl> ppp b / vendor / brightray <nl> @ @ - 1 + 1 @ @ <nl> - Subproject commit 57022bdad1b22f3d0dbd3c3680adc54bc3f6d384 <nl> + Subproject commit ddfebd06326a956145dfde6ed5f863396953da6d <nl>
|
Upgrade brightray for
|
electron/electron
|
185d3a7c02a5984788d1c65ab9bc4eedbbbf2bf5
|
2014-11-06T11:09:10Z
|
mmm a / tensorflow / python / training / experimental / loss_scale_optimizer . py <nl> ppp b / tensorflow / python / training / experimental / loss_scale_optimizer . py <nl> <nl> from tensorflow . python . ops import math_ops <nl> from tensorflow . python . training import optimizer <nl> from tensorflow . python . training . experimental import loss_scale as loss_scale_module <nl> + from tensorflow . python . util import deprecation <nl> from tensorflow . python . util . tf_export import tf_export <nl> <nl> <nl> - @ tf_export ( v1 = [ ' train . experimental . MixedPrecisionLossScaleOptimizer ' ] ) <nl> + @ deprecation . deprecated_endpoints ( <nl> + ' train . experimental . MixedPrecisionLossScaleOptimizer ' ) <nl> + @ tf_export ( v1 = [ ' mixed_precision . MixedPrecisionLossScaleOptimizer ' , <nl> + ' train . experimental . MixedPrecisionLossScaleOptimizer ' ] ) <nl> class MixedPrecisionLossScaleOptimizer ( optimizer . Optimizer ) : <nl> " " " An optimizer that applies loss scaling . <nl> <nl> mmm a / tensorflow / python / training / experimental / mixed_precision . py <nl> ppp b / tensorflow / python / training / experimental / mixed_precision . py <nl> <nl> from tensorflow . python . training import optimizer <nl> from tensorflow . python . training . experimental import loss_scale_optimizer as loss_scale_optimizer_v1 <nl> from tensorflow . python . training . experimental import mixed_precision_global_state <nl> + from tensorflow . python . util import deprecation <nl> from tensorflow . python . util . tf_export import tf_export <nl> <nl> <nl> def _wrap_optimizer ( opt , loss_scale , use_v1_behavior ) : <nl> ' tf . keras . optimizers . Optimizer , but got : % s ' % opt ) <nl> <nl> <nl> + @ deprecation . deprecated ( <nl> + ' 2020 - 11 - 30 ' , <nl> + ' Use tf . keras . mixed_precision . There is a guide at ' <nl> + ' https : / / www . tensorflow . org / guide / mixed_precision . Alternatively , ' <nl> + ' ` tf . compat . v1 . mixed_precision . 
enable_mixed_precision_graph_rewrite ` can ' <nl> + ' be used , but this is not recommended for TF2 code . ' ) <nl> @ tf_export ( ' train . experimental . enable_mixed_precision_graph_rewrite ' , v1 = [ ] ) <nl> def enable_mixed_precision_graph_rewrite ( opt , loss_scale = ' dynamic ' ) : <nl> " " " Enable mixed precision via a graph rewrite . <nl> def enable_mixed_precision_graph_rewrite ( opt , loss_scale = ' dynamic ' ) : <nl> use_v1_behavior = False ) <nl> <nl> <nl> - @ tf_export ( v1 = [ ' train . experimental . enable_mixed_precision_graph_rewrite ' ] ) <nl> + @ deprecation . deprecated_endpoints ( <nl> + ' train . experimental . enable_mixed_precision_graph_rewrite ' ) <nl> + @ tf_export ( v1 = [ ' mixed_precision . enable_mixed_precision_graph_rewrite ' , <nl> + ' train . experimental . enable_mixed_precision_graph_rewrite ' ] ) <nl> def enable_mixed_precision_graph_rewrite_v1 ( opt , loss_scale = ' dynamic ' ) : <nl> " " " Enable mixed precision via a graph rewrite . <nl> <nl> def _enable_mixed_precision_graph_rewrite_base ( opt , loss_scale , <nl> return opt <nl> <nl> <nl> + @ deprecation . deprecated ( <nl> + ' 2020 - 11 - 30 ' , <nl> + ' Use tf . keras . mixed_precision . There is a guide at ' <nl> + ' https : / / www . tensorflow . org / guide / mixed_precision . Alternatively , ' <nl> + ' ` tf . compat . v1 . mixed_precision . disable_mixed_precision_graph_rewrite ` can ' <nl> + ' be used , but this is not recommended for TF2 code . ' ) <nl> @ tf_export ( ' train . experimental . disable_mixed_precision_graph_rewrite ' , v1 = [ ] ) <nl> def disable_mixed_precision_graph_rewrite ( ) : <nl> " " " Disables the mixed precision graph rewrite . <nl> def disable_mixed_precision_graph_rewrite ( ) : <nl> mixed_precision_global_state . mixed_precision_graph_rewrite_is_enabled = False <nl> <nl> <nl> - @ tf_export ( v1 = [ ' train . experimental . disable_mixed_precision_graph_rewrite ' ] ) <nl> + @ deprecation . deprecated_endpoints ( <nl> + ' train . 
experimental . disable_mixed_precision_graph_rewrite ' ) <nl> + @ tf_export ( v1 = [ ' mixed_precision . disable_mixed_precision_graph_rewrite ' , <nl> + ' train . experimental . disable_mixed_precision_graph_rewrite ' ] ) <nl> def disable_mixed_precision_graph_rewrite_v1 ( ) : <nl> " " " Disables the mixed precision graph rewrite . <nl> <nl> new file mode 100644 <nl> index 0000000000000 . . f1e49106bf3a7 <nl> mmm / dev / null <nl> ppp b / tensorflow / tools / api / golden / v1 / tensorflow . mixed_precision . - mixed - precision - loss - scale - optimizer . pbtxt <nl> <nl> + path : " tensorflow . mixed_precision . MixedPrecisionLossScaleOptimizer " <nl> + tf_class { <nl> + is_instance : " < class \ ' tensorflow . python . training . experimental . loss_scale_optimizer . MixedPrecisionLossScaleOptimizer \ ' > " <nl> + is_instance : " < class \ ' tensorflow . python . training . optimizer . Optimizer \ ' > " <nl> + is_instance : " < class \ ' tensorflow . python . training . tracking . base . 
Trackable \ ' > " <nl> + is_instance : " < type \ ' object \ ' > " <nl> + member { <nl> + name : " GATE_GRAPH " <nl> + mtype : " < type \ ' int \ ' > " <nl> + } <nl> + member { <nl> + name : " GATE_NONE " <nl> + mtype : " < type \ ' int \ ' > " <nl> + } <nl> + member { <nl> + name : " GATE_OP " <nl> + mtype : " < type \ ' int \ ' > " <nl> + } <nl> + member_method { <nl> + name : " __init__ " <nl> + argspec : " args = [ \ ' self \ ' , \ ' opt \ ' , \ ' loss_scale \ ' ] , varargs = None , keywords = None , defaults = None " <nl> + } <nl> + member_method { <nl> + name : " apply_gradients " <nl> + argspec : " args = [ \ ' self \ ' , \ ' grads_and_vars \ ' , \ ' global_step \ ' , \ ' name \ ' ] , varargs = None , keywords = None , defaults = [ \ ' None \ ' , \ ' None \ ' ] , " <nl> + } <nl> + member_method { <nl> + name : " compute_gradients " <nl> + argspec : " args = [ \ ' self \ ' , \ ' loss \ ' , \ ' var_list \ ' , \ ' gate_gradients \ ' , \ ' aggregation_method \ ' , \ ' colocate_gradients_with_ops \ ' , \ ' grad_loss \ ' ] , varargs = None , keywords = None , defaults = [ \ ' None \ ' , \ ' 1 \ ' , \ ' None \ ' , \ ' False \ ' , \ ' None \ ' ] , " <nl> + } <nl> + member_method { <nl> + name : " get_name " <nl> + argspec : " args = [ \ ' self \ ' ] , varargs = None , keywords = None , defaults = None " <nl> + } <nl> + member_method { <nl> + name : " get_slot " <nl> + argspec : " args = [ \ ' self \ ' , \ ' var \ ' , \ ' name \ ' ] , varargs = None , keywords = None , defaults = None " <nl> + } <nl> + member_method { <nl> + name : " get_slot_names " <nl> + argspec : " args = [ \ ' self \ ' ] , varargs = None , keywords = None , defaults = None " <nl> + } <nl> + member_method { <nl> + name : " minimize " <nl> + argspec : " args = [ \ ' self \ ' , \ ' loss \ ' , \ ' global_step \ ' , \ ' var_list \ ' , \ ' gate_gradients \ ' , \ ' aggregation_method \ ' , \ ' colocate_gradients_with_ops \ ' , \ ' name \ ' , \ ' grad_loss \ ' ] , varargs = None , keywords = None , 
defaults = [ \ ' None \ ' , \ ' None \ ' , \ ' 1 \ ' , \ ' None \ ' , \ ' False \ ' , \ ' None \ ' , \ ' None \ ' ] , " <nl> + } <nl> + member_method { <nl> + name : " variables " <nl> + argspec : " args = [ \ ' self \ ' ] , varargs = None , keywords = None , defaults = None " <nl> + } <nl> + } <nl> mmm a / tensorflow / tools / api / golden / v1 / tensorflow . mixed_precision . pbtxt <nl> ppp b / tensorflow / tools / api / golden / v1 / tensorflow . mixed_precision . pbtxt <nl> tf_module { <nl> name : " LossScale " <nl> mtype : " < type \ ' type \ ' > " <nl> } <nl> + member { <nl> + name : " MixedPrecisionLossScaleOptimizer " <nl> + mtype : " < type \ ' type \ ' > " <nl> + } <nl> member { <nl> name : " experimental " <nl> mtype : " < type \ ' module \ ' > " <nl> } <nl> + member_method { <nl> + name : " disable_mixed_precision_graph_rewrite " <nl> + argspec : " args = [ ] , varargs = None , keywords = None , defaults = None " <nl> + } <nl> + member_method { <nl> + name : " enable_mixed_precision_graph_rewrite " <nl> + argspec : " args = [ \ ' opt \ ' , \ ' loss_scale \ ' ] , varargs = None , keywords = None , defaults = [ \ ' dynamic \ ' ] , " <nl> + } <nl> } <nl> mmm a / tensorflow / tools / compatibility / renames_v2 . py <nl> ppp b / tensorflow / tools / compatibility / renames_v2 . py <nl> <nl> ' tf . compat . v1 . metrics . true_positives_at_thresholds ' , <nl> ' tf . min_max_variable_partitioner ' : <nl> ' tf . compat . v1 . min_max_variable_partitioner ' , <nl> + ' tf . mixed_precision . MixedPrecisionLossScaleOptimizer ' : <nl> + ' tf . compat . v1 . mixed_precision . MixedPrecisionLossScaleOptimizer ' , <nl> + ' tf . mixed_precision . disable_mixed_precision_graph_rewrite ' : <nl> + ' tf . compat . v1 . mixed_precision . disable_mixed_precision_graph_rewrite ' , <nl> + ' tf . mixed_precision . enable_mixed_precision_graph_rewrite ' : <nl> + ' tf . compat . v1 . mixed_precision . enable_mixed_precision_graph_rewrite ' , <nl> ' tf . mod ' : <nl> ' tf . math . 
floormod ' , <nl> ' tf . model_variables ' : <nl>
|
Deprecate enable_mixed_precision_graph_rewrite function.
|
tensorflow/tensorflow
|
011228639301a8ed60d39c94ab096b074091bb4a
|
2020-10-20T18:38:55Z
|
mmm a / platform / isim / detect . py <nl> ppp b / platform / isim / detect . py <nl> def get_opts ( ) : <nl> return [ <nl> ( ' ISIMPLATFORM ' , ' name of the iphone platform ' , ' iPhoneSimulator ' ) , <nl> ( ' ISIMPATH ' , ' the path to iphone toolchain ' , ' / Applications / Xcode . app / Contents / Developer / Platforms / $ { ISIMPLATFORM } . platform ' ) , <nl> - ( ' ISIMSDK ' , ' path to the iphone SDK ' , ' $ ISIMPATH / Developer / SDKs / $ { ISIMPLATFORM } 7 . 1 . sdk ' ) , <nl> + ( ' ISIMSDK ' , ' path to the iphone SDK ' , ' $ ISIMPATH / Developer / SDKs / $ { ISIMPLATFORM } . sdk ' ) , <nl> ( ' game_center ' , ' Support for game center ' , ' yes ' ) , <nl> ( ' store_kit ' , ' Support for in - app store ' , ' yes ' ) , <nl> ( ' ios_gles22_override ' , ' Force GLES2 . 0 on iOS ' , ' yes ' ) , <nl>
|
Merge pull request from umxprime/fix/isim-sdk-path
|
godotengine/godot
|
b1ec72a799af9fc608d84fb1ac7613c8eba7670d
|
2015-02-10T00:38:38Z
|
mmm a / test / lit . cfg <nl> ppp b / test / lit . cfg <nl> if test_options : <nl> config . swift_test_options + = ' ' <nl> config . swift_test_options + = test_options <nl> <nl> - config . sil_test_options = os . environ . get ( ' SIL_TEST_OPTIONS ' ) <nl> - if not config . sil_test_options : <nl> - config . sil_test_options = ' ' <nl> + config . swift_frontend_test_options = os . environ . get ( ' SWIFT_FRONTEND_TEST_OPTIONS ' , ' ' ) <nl> + config . swift_driver_test_options = os . environ . get ( ' SWIFT_DRIVER_TEST_OPTIONS ' , ' ' ) <nl> + config . sil_test_options = os . environ . get ( ' SIL_TEST_OPTIONS ' , ' ' ) <nl> <nl> clang_module_cache_path = tempfile . mkdtemp ( prefix = " swift - testsuite - clang - module - cache " ) <nl> mcp_opt = " - module - cache - path % r " % clang_module_cache_path <nl> config . substitutions . append ( ( ' % { python } ' , sys . executable ) ) <nl> config . substitutions . append ( ( ' % mcp_opt ' , mcp_opt ) ) <nl> config . substitutions . append ( ( ' % swift_driver_plain ' , " % r " % config . swift ) ) <nl> config . substitutions . append ( ( ' % swiftc_driver_plain ' , " % r " % config . swiftc ) ) <nl> - config . substitutions . append ( ( ' % swift_driver ' , " env SDKROOT = % r % s % s " % ( config . swift , mcp_opt , config . swift_test_options ) ) ) <nl> - config . substitutions . append ( ( ' % swiftc_driver ' , " env SDKROOT = % r % s % s " % ( config . swiftc , mcp_opt , config . swift_test_options ) ) ) <nl> + config . substitutions . append ( ( ' % swift_driver ' , " env SDKROOT = % r % s % s % s " % ( config . swift , mcp_opt , config . swift_test_options , config . swift_driver_test_options ) ) ) <nl> + config . substitutions . append ( ( ' % swiftc_driver ' , " env SDKROOT = % r % s % s % s " % ( config . swiftc , mcp_opt , config . swift_test_options , config . swift_driver_test_options ) ) ) <nl> config . substitutions . append ( ( ' % sil - opt ' , " % r % s % s " % ( config . 
sil_opt , mcp_opt , config . sil_test_options ) ) ) <nl> config . substitutions . append ( ( ' % sil - func - extractor ' , " % r % s " % ( config . sil_func_extractor , mcp_opt ) ) ) <nl> config . substitutions . append ( ( ' % sil - llvm - gen ' , " % r % s " % ( config . sil_llvm_gen , mcp_opt ) ) ) <nl> config . substitutions . append ( ( ' % llvm - dis ' , config . llvm_dis ) ) <nl> # This must come after all substitutions containing " % swift " . <nl> config . substitutions . append ( <nl> ( ' % swift ' , <nl> - " % r - frontend % s - disable - objc - attr - requires - foundation - module % s " <nl> - % ( config . swift , mcp_opt , config . swift_test_options ) ) ) <nl> + " % r - frontend % s - disable - objc - attr - requires - foundation - module % s % s " <nl> + % ( config . swift , mcp_opt , config . swift_test_options , config . swift_frontend_test_options ) ) ) <nl> <nl> config . clang_include_dir = \ <nl> os . path . join ( os . path . dirname ( os . path . dirname ( config . swift ) ) , ' include ' ) <nl> if run_vendor = = ' apple ' : <nl> ( run_cpu , run_os , run_vers , clang_mcp_opt ) ) <nl> <nl> config . target_build_swift = ( <nl> - " % s % s % s - F % s - Xlinker - rpath - Xlinker % s % s % s % s " % <nl> + " % s % s % s - F % s - Xlinker - rpath - Xlinker % s % s % s % s % s " % <nl> ( xcrun_prefix , config . swiftc , target_options , <nl> extra_frameworks_dir , <nl> " / tmp / swifttest - device / lib " , <nl> sdk_overlay_linker_opt , config . swift_test_options , <nl> + config . swift_driver_test_options , <nl> swift_execution_tests_extra_flags ) ) <nl> config . target_run = " unsupported " <nl> <nl> if run_vendor = = ' apple ' : <nl> ( run_cpu , run_os , run_vers , clang_mcp_opt ) ) <nl> <nl> config . target_build_swift = ( <nl> - " % s % s % s - F % s % s % s % s " % <nl> + " % s % s % s - F % s % s % s % s % s " % <nl> ( xcrun_prefix , config . swiftc , target_options , <nl> extra_frameworks_dir , <nl> sdk_overlay_linker_opt , config . 
swift_test_options , <nl> + config . swift_driver_test_options , <nl> swift_execution_tests_extra_flags ) ) <nl> # FIXME : allow specification of simulator and version <nl> # <nl> if run_vendor = = ' apple ' : <nl> ( run_cpu , run_os , run_vers , clang_mcp_opt ) ) <nl> <nl> config . target_build_swift = ( <nl> - " % s % s % s - F % s - Xlinker - rpath - Xlinker % s % s % s % s " <nl> + " % s % s % s - F % s - Xlinker - rpath - Xlinker % s % s % s % s % s " <nl> % ( xcrun_prefix , config . swiftc , target_options , <nl> extra_frameworks_dir , extra_frameworks_dir , <nl> sdk_overlay_linker_opt , config . swift_test_options , <nl> + config . swift_driver_test_options , <nl> swift_execution_tests_extra_flags ) ) <nl> config . target_run = " " <nl> <nl> if ' interpret ' in lit_config . params : <nl> target_run_base = ( <nl> - ' % s % s % s - module - name main % s % s ' <nl> + ' % s % s % s - module - name main % s % s % s ' <nl> % ( xcrun_prefix , config . swift , target_options , <nl> config . swift_test_options , <nl> + config . swift_driver_test_options , <nl> swift_execution_tests_extra_flags ) ) <nl> config . target_run_simple_swift = ( <nl> " % s % % s " % ( target_run_base ) ) <nl> if run_vendor = = ' apple ' : <nl> " % s ld - L % s " % <nl> ( xcrun_prefix , os . path . join ( test_resource_dir , config . target_sdk_name ) ) ) <nl> config . target_swift_frontend = ( <nl> - " % s - frontend % s - sdk % s % s " % <nl> + " % s - frontend % s - sdk % s % s % s " % <nl> ( config . swiftc , target_options , config . variant_sdk , <nl> - config . swift_test_options ) ) <nl> + config . swift_test_options , config . swift_frontend_test_options ) ) <nl> subst_target_swift_frontend_mock_sdk = ( <nl> - " % s - frontend % s - sdk % s % s " % <nl> + " % s - frontend % s - sdk % s % s % s " % <nl> ( config . swiftc , target_options_for_mock_sdk , config . variant_sdk , <nl> - config . swift_test_options ) ) <nl> + config . swift_test_options , config . 
swift_frontend_test_options ) ) <nl> config . target_swift_modulewrap = ( <nl> ' % s - modulewrap - target % s ' % <nl> ( config . swiftc , config . variant_triple ) ) <nl> elif run_os in [ ' linux - gnu ' , ' linux - gnueabihf ' , ' freebsd ' , ' windows - cygnus ' , ' wi <nl> config . target_runtime = " native " <nl> config . target_swift_autolink_extract = inferSwiftBinary ( " swift - autolink - extract " ) <nl> config . target_build_swift = ( <nl> - ' % s - target % s % s % s % s % s ' <nl> + ' % s - target % s % s % s % s % s % s ' <nl> % ( config . swiftc , config . variant_triple , resource_dir_opt , mcp_opt , <nl> - config . swift_test_options , swift_execution_tests_extra_flags ) ) <nl> + config . swift_test_options , config . swift_driver_test_options , <nl> + swift_execution_tests_extra_flags ) ) <nl> config . target_codesign = " echo " <nl> config . target_build_swift_dylib = ( <nl> " % s - parse - as - library - emit - library - o ' \ \ 1 ' " <nl> % ( config . target_build_swift ) ) <nl> config . target_swift_frontend = ( <nl> - ' % s - frontend - target % s % s % s % s ' <nl> + ' % s - frontend - target % s % s % s % s % s ' <nl> % ( config . swift , config . variant_triple , resource_dir_opt , mcp_opt , <nl> - config . swift_test_options ) ) <nl> + config . swift_test_options , config . swift_frontend_test_options ) ) <nl> subst_target_swift_frontend_mock_sdk = config . target_swift_frontend <nl> subst_target_swift_frontend_mock_sdk_after = " " <nl> config . target_run = ' ' <nl> if ' interpret ' in lit_config . params : <nl> target_run_base = ( <nl> - ' % s - target % s % s % s - module - name main % s % s ' <nl> + ' % s - target % s % s % s - module - name main % s % s % s ' <nl> % ( config . swift , config . variant_triple , resource_dir_opt , <nl> mcp_opt , config . swift_test_options , <nl> + config . swift_driver_test_options , <nl> swift_execution_tests_extra_flags ) ) <nl> config . 
target_run_simple_swift = ( <nl> ' % s % % s ' % ( target_run_base ) ) <nl> elif run_os = = ' linux - androideabi ' : <nl> " arm - linux - androideabi " , <nl> " { } . x " . format ( config . android_ndk_gcc_version ) ) ) <nl> config . target_build_swift = ( <nl> - ' % s - target % s - sdk % s % s - Xlinker - pie % s % s % s % s ' <nl> + ' % s - target % s - sdk % s % s - Xlinker - pie % s % s % s % s % s ' <nl> % ( config . swiftc , config . variant_triple , config . variant_sdk , <nl> android_linker_opt , resource_dir_opt , mcp_opt , <nl> - config . swift_test_options , swift_execution_tests_extra_flags ) ) <nl> + config . swift_test_options , <nl> + config . swift_driver_test_options , swift_execution_tests_extra_flags ) ) <nl> config . target_codesign = " echo " <nl> config . target_build_swift_dylib = ( <nl> " % s - parse - as - library - emit - library - o ' \ \ 1 ' " <nl> % ( config . target_build_swift ) ) <nl> config . target_swift_frontend = ( <nl> - ' % s - frontend - target % s - sdk % s % s % s % s ' <nl> + ' % s - frontend - target % s - sdk % s % s % s % s % s ' <nl> % ( config . swift , config . variant_triple , config . variant_sdk , <nl> - android_linker_opt , resource_dir_opt , mcp_opt ) ) <nl> + android_linker_opt , resource_dir_opt , mcp_opt , <nl> + config . swift_frontend_test_options ) ) <nl> subst_target_swift_frontend_mock_sdk = config . target_swift_frontend <nl> subst_target_swift_frontend_mock_sdk_after = " " <nl> config . target_run = os . path . join ( <nl> mmm a / validation - test / compiler_crashers_2_fixed / 0128 - rdar35088384 . swift <nl> ppp b / validation - test / compiler_crashers_2_fixed / 0128 - rdar35088384 . swift <nl> <nl> - / / RUN : % swift - target - frontend - typecheck - verify % s <nl> + / / RUN : % target - swift - frontend - typecheck - verify % s <nl> <nl> protocol Command { } <nl> <nl>
|
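The lit.cfg change in the diff above collapses a three-line get-then-check fallback into `os.environ.get(NAME, '')`, then threads the new option groups (`SWIFT_FRONTEND_TEST_OPTIONS`, `SWIFT_DRIVER_TEST_OPTIONS`) into each substitution string as an extra `%s`. A minimal Python sketch of that pattern follows; the function names and the simplified substitution format are illustrative, not the actual lit configuration:

```python
import os


def read_test_options(environ):
    """Read optional test-option env vars, defaulting to '' when unset.

    Mirrors the diff's pattern: os.environ.get(NAME, '') replaces an
    explicit get() followed by an if-not-set fallback assignment.
    """
    return {
        "frontend": environ.get("SWIFT_FRONTEND_TEST_OPTIONS", ""),
        "driver": environ.get("SWIFT_DRIVER_TEST_OPTIONS", ""),
        "sil": environ.get("SIL_TEST_OPTIONS", ""),
    }


def make_driver_substitution(swift_path, mcp_opt, test_options, driver_options):
    # Each option group gets its own %s slot, so an unset group degrades to
    # an extra space in the command line rather than a formatting error.
    return "env SDKROOT=%r %s %s %s" % (
        swift_path, mcp_opt, test_options, driver_options)


opts = read_test_options({"SWIFT_DRIVER_TEST_OPTIONS": "-v"})
cmd = make_driver_substitution(
    "/usr/bin/swift", "-module-cache-path /tmp/mcp", "", opts["driver"])
```

Keeping every group as a separate `%s` (rather than concatenating non-empty ones) is what lets the diff add new knobs without touching call sites that leave them empty.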
Merge pull request from gottesmm / pr - 4634c89f5bd84f2ca0600211216e7cc0ff934b4b
|
apple/swift
|
fd372081d94f2d896df27ab7446655f0fea7c7bf
|
2018-01-16T19:52:05Z
|
new file mode 100755 <nl> index 000000000000 . . 59e19b15d89e <nl> mmm / dev / null <nl> ppp b / jstests / geo_s2index . js <nl> <nl> + t = db . geo_s2index <nl> + t . drop ( ) <nl> + <nl> + pointA = { " type " : " Point " , " coordinates " : [ 40 , 5 ] } <nl> + t . insert ( { geo : pointA , nonGeo : [ " pointA " ] } ) <nl> + <nl> + pointD = { " type " : " Point " , " coordinates " : [ 41 . 001 , 6 . 001 ] } <nl> + t . insert ( { geo : pointD , nonGeo : [ " pointD " ] } ) <nl> + <nl> + someline = { " type " : " LineString " , " coordinates " : [ [ 40 , 5 ] , [ 41 , 6 ] ] } <nl> + t . insert ( { geo : someline , nonGeo : [ " someline " ] } ) <nl> + <nl> + pointB = { " type " : " Point " , " coordinates " : [ 41 , 6 ] } <nl> + t . insert ( { geo : pointB , nonGeo : [ " pointB " ] } ) <nl> + <nl> + pointC = { " type " : " Point " , " coordinates " : [ 41 , 6 ] } <nl> + t . insert ( { geo : pointC } ) <nl> + <nl> + somepoly = { " type " : " Polygon " , <nl> + " coordinates " : [ [ [ 40 , 5 ] , [ 40 , 6 ] , [ 41 , 6 ] , [ 41 , 5 ] , [ 40 , 5 ] ] ] } <nl> + t . insert ( { geo : somepoly , nonGeo : [ " somepoly " ] } ) <nl> + t . ensureIndex ( { geo : " s2d " , nonGeo : 1 } ) <nl> + <nl> + res = t . find ( { " geo " : { " $ intersect " : { " $ geometry " : pointA } } } ) ; <nl> + assert . eq ( res . count ( ) , 3 ) ; <nl> + <nl> + res = t . find ( { " geo " : { " $ intersect " : { " $ geometry " : pointB } } } ) ; <nl> + assert . eq ( res . count ( ) , 4 ) ; <nl> + <nl> + res = t . find ( { " geo " : { " $ intersect " : { " $ geometry " : pointD } } } ) ; <nl> + assert . eq ( res . count ( ) , 1 ) ; <nl> + <nl> + res = t . find ( { " geo " : { " $ intersect " : { " $ geometry " : someline } } } ) <nl> + assert . eq ( res . count ( ) , 5 ) ; <nl> + <nl> + res = t . find ( { " geo " : { " $ intersect " : { " $ geometry " : somepoly } } } ) <nl> + assert . eq ( res . count ( ) , 5 ) ; <nl> + <nl> + res = t . 
find ( { " geo " : { " $ intersect " : { " $ geometry " : somepoly } } } ) . limit ( 1 ) <nl> + assert . eq ( res . itcount ( ) , 1 ) ; <nl> + <nl> + res = t . find ( { " nonGeo " : " pointA " , <nl> + " geo " : { " $ intersect " : { " $ geometry " : somepoly } } } ) <nl> + assert . eq ( res . count ( ) , 1 ) ; <nl> mmm a / src / mongo / SConscript <nl> ppp b / src / mongo / SConscript <nl> serverOnlyFiles = [ " db / curop . cpp " , <nl> " db / explain . cpp " , <nl> " db / geo / 2d . cpp " , <nl> " db / geo / haystack . cpp " , <nl> + " db / geo / s2cursor . cpp " , <nl> + " db / geo / s2index . cpp " , <nl> " db / hashindex . cpp " , <nl> " db / ops / count . cpp " , <nl> " db / ops / delete . cpp " , <nl> env . StaticLibrary ( " serveronly " , serverOnlyFiles , <nl> LIBDEPS = [ " coreshard " , <nl> " dbcmdline " , <nl> " defaultversion " , <nl> - # " geojson " , <nl> + " geojson " , <nl> " geometry " , <nl> ' $ BUILD_DIR / third_party / shim_snappy ' ] ) <nl> <nl> mmm a / src / mongo / bson / bsonobj . h <nl> ppp b / src / mongo / bson / bsonobj . h <nl> namespace mongo { <nl> opELEM_MATCH = 0x12 , <nl> opNEAR = 0x13 , <nl> opWITHIN = 0x14 , <nl> - opMAX_DISTANCE = 0x15 <nl> + opMAX_DISTANCE = 0x15 , <nl> + opINTERSECT = 0x16 <nl> } ; <nl> <nl> / * * add all elements of the object to the specified vector * / <nl> mmm a / src / mongo / db / geo / geojsonparser . cpp <nl> ppp b / src / mongo / db / geo / geojsonparser . cpp <nl> <nl> # include " mongo / db / jsobj . h " <nl> # include " mongo / db / geo / geojsonparser . h " <nl> # include " third_party / s2 / s2 . h " <nl> + # include " third_party / s2 / s2cell . h " <nl> # include " third_party / s2 / s2latlng . h " <nl> # include " third_party / s2 / s2loop . h " <nl> # include " third_party / s2 / s2polygon . h " <nl> namespace mongo { <nl> return coordinates [ 0 ] . isNumber ( ) & & coordinates [ 1 ] . 
isNumber ( ) ; <nl> } <nl> <nl> + void GeoJSONParser : : parsePoint ( const BSONObj & obj , S2Cell * out ) { <nl> + const vector < BSONElement > & coords = obj . getFieldDotted ( GEOJSON_COORDINATES ) . Array ( ) ; <nl> + S2LatLng ll = S2LatLng : : FromDegrees ( coords [ 1 ] . Number ( ) , <nl> + coords [ 0 ] . Number ( ) ) . Normalized ( ) ; <nl> + * out = S2Cell ( ll ) ; <nl> + } <nl> + <nl> void GeoJSONParser : : parsePoint ( const BSONObj & obj , S2Point * out ) { <nl> const vector < BSONElement > & coords = obj . getFieldDotted ( GEOJSON_COORDINATES ) . Array ( ) ; <nl> * out = latLngToPoint ( coords ) ; <nl> mmm a / src / mongo / db / geo / geojsonparser . h <nl> ppp b / src / mongo / db / geo / geojsonparser . h <nl> <nl> # include < vector > <nl> # include " third_party / s2 / s2 . h " <nl> <nl> + class S2Cell ; <nl> class S2Polyline ; <nl> class S2Polygon ; <nl> <nl> namespace mongo { <nl> public : <nl> static bool isPoint ( const BSONObj & obj ) ; <nl> static void parsePoint ( const BSONObj & obj , S2Point * out ) ; <nl> + static void parsePoint ( const BSONObj & obj , S2Cell * out ) ; <nl> <nl> static bool isLineString ( const BSONObj & obj ) ; <nl> static void parseLineString ( const BSONObj & obj , S2Polyline * out ) ; <nl> new file mode 100644 <nl> index 000000000000 . . 43d19afdfdd4 <nl> mmm / dev / null <nl> ppp b / src / mongo / db / geo / s2common . h <nl> <nl> + / * * <nl> + * Copyright ( C ) 2012 10gen Inc . <nl> + * <nl> + * This program is free software : you can redistribute it and / or modify <nl> + * it under the terms of the GNU Affero General Public License , version 3 , <nl> + * as published by the Free Software Foundation . <nl> + * <nl> + * This program is distributed in the hope that it will be useful , <nl> + * but WITHOUT ANY WARRANTY ; without even the implied warranty of <nl> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE . See the <nl> + * GNU Affero General Public License for more details . 
<nl> + * <nl> + * You should have received a copy of the GNU Affero General Public License <nl> + * along with this program . If not , see < http : / / www . gnu . org / licenses / > . <nl> + * / <nl> + <nl> + # include " mongo / db / geo / geojsonparser . h " <nl> + # include " third_party / s2 / s2 . h " <nl> + # include " third_party / s2 / s2regioncoverer . h " <nl> + # include " third_party / s2 / s2cell . h " <nl> + # include " third_party / s2 / s2polyline . h " <nl> + # include " third_party / s2 / s2polygon . h " <nl> + # include " third_party / s2 / s2regioncoverer . h " <nl> + <nl> + # pragma once <nl> + <nl> + namespace mongo { <nl> + / / Used for passing geo data from the newCursor entry point to the S2Cursor class . <nl> + struct GeoQueryField { <nl> + GeoQueryField ( const string & f ) : field ( f ) , cell ( NULL ) , line ( NULL ) , polygon ( NULL ) { } <nl> + <nl> + / / Name of the field in the query . <nl> + string field ; <nl> + / / Only one of these should be non - NULL . S2Region is a superclass but it only supports <nl> + / / testing against S2Cells . We need the most specific class we can get . <nl> + / / Owned by S2Cursor . <nl> + S2Cell * cell ; <nl> + S2Polyline * line ; <nl> + S2Polygon * polygon ; <nl> + <nl> + / / The functions below are defined in s2cursor . cpp . <nl> + <nl> + / / Does this GeoQueryField intersect the provided data ? Sadly there is no common good way <nl> + / / to check this , so we do different things for all query / data pairs . <nl> + bool intersectsPoint ( const S2Cell & otherPoint ) ; <nl> + bool intersectsLine ( const S2Polyline & otherLine ) ; <nl> + bool intersectsPolygon ( const S2Polygon & otherPolygon ) ; <nl> + / / One region is not NULL and this returns it . <nl> + const S2Region & getRegion ( ) const ; <nl> + / / Delete the not NULL region . 
<nl> + void free ( ) ; <nl> + } ; <nl> + <nl> + struct S2IndexingParams { <nl> + / / Since we take the cartesian product when we generate keys for an insert , <nl> + / / we need a cap . <nl> + size_t maxKeysPerInsert ; <nl> + / / This is really an advisory parameter that we pass to the cover generator . The <nl> + / / finest / coarsest index level determine the required # of cells . <nl> + int maxCellsInCovering ; <nl> + / / What ' s the finest grained level that we ' ll index ? When we query for a point <nl> + / / we start at that - - we index nothing finer than this . <nl> + int finestIndexedLevel ; <nl> + / / And , what ' s the coarsest ? When we search in larger coverings we know we <nl> + / / can stop here - - we index nothing coarser than this . <nl> + int coarsestIndexedLevel ; <nl> + <nl> + string toString ( ) const { <nl> + stringstream ss ; <nl> + ss < < " maxKeysPerInsert : " < < maxKeysPerInsert < < endl ; <nl> + ss < < " maxCellsInCovering : " < < maxCellsInCovering < < endl ; <nl> + ss < < " finestIndexedLevel : " < < finestIndexedLevel < < endl ; <nl> + ss < < " coarsestIndexedLevel : " < < coarsestIndexedLevel < < endl ; <nl> + return ss . str ( ) ; <nl> + } <nl> + <nl> + void configureCoverer ( S2RegionCoverer * coverer ) const { <nl> + coverer - > set_min_level ( coarsestIndexedLevel ) ; <nl> + coverer - > set_max_level ( finestIndexedLevel ) ; <nl> + / / This is advisory ; the two above are strict . <nl> + coverer - > set_max_cells ( maxCellsInCovering ) ; <nl> + } <nl> + } ; <nl> + } / / namespace mongo <nl> new file mode 100644 <nl> index 000000000000 . . e5160c61cd92 <nl> mmm / dev / null <nl> ppp b / src / mongo / db / geo / s2cursor . cpp <nl> <nl> + / * * <nl> + * Copyright ( C ) 2012 10gen Inc . <nl> + * <nl> + * This program is free software : you can redistribute it and / or modify <nl> + * it under the terms of the GNU Affero General Public License , version 3 , <nl> + * as published by the Free Software Foundation . 
<nl> + * <nl> + * This program is distributed in the hope that it will be useful , <nl> + * but WITHOUT ANY WARRANTY ; without even the implied warranty of <nl> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE . See the <nl> + * GNU Affero General Public License for more details . <nl> + * <nl> + * You should have received a copy of the GNU Affero General Public License <nl> + * along with this program . If not , see < http : / / www . gnu . org / licenses / > . <nl> + * / <nl> + <nl> + # include " mongo / db / btree . h " <nl> + # include " mongo / db / index . h " <nl> + # include " mongo / db / matcher . h " <nl> + # include " mongo / db / geo / s2common . h " <nl> + # include " mongo / db / geo / s2cursor . h " <nl> + <nl> + namespace mongo { <nl> + / / Does this GeoQueryField intersect the provided data ? <nl> + bool GeoQueryField : : intersectsPoint ( const S2Cell & otherPoint ) { <nl> + if ( NULL ! = cell ) { <nl> + return cell - > MayIntersect ( otherPoint ) ; <nl> + } else if ( NULL ! = line ) { <nl> + return line - > MayIntersect ( otherPoint ) ; <nl> + } else { <nl> + return polygon - > MayIntersect ( otherPoint ) ; <nl> + } <nl> + } <nl> + <nl> + bool GeoQueryField : : intersectsLine ( const S2Polyline & otherLine ) { <nl> + if ( NULL ! = cell ) { <nl> + return otherLine . MayIntersect ( * cell ) ; <nl> + } else if ( NULL ! = line ) { <nl> + return otherLine . Intersects ( line ) ; <nl> + } else { <nl> + / / TODO ( hk ) : modify s2 library to just let us know if it intersected <nl> + / / rather than returning all this . <nl> + vector < S2Polyline * > clipped ; <nl> + polygon - > IntersectWithPolyline ( & otherLine , & clipped ) ; <nl> + bool ret = clipped . size ( ) > 0 ; <nl> + for ( size_t i = 0 ; i < clipped . size ( ) ; + + i ) delete clipped [ i ] ; <nl> + return ret ; <nl> + } <nl> + return false ; <nl> + } <nl> + <nl> + bool GeoQueryField : : intersectsPolygon ( const S2Polygon & otherPolygon ) { <nl> + if ( NULL ! 
= cell ) { <nl> + return otherPolygon . MayIntersect ( * cell ) ; <nl> + } else if ( NULL ! = line ) { <nl> + / / TODO ( hk ) : modify s2 library to just let us know if it intersected <nl> + / / rather than returning all this . <nl> + vector < S2Polyline * > clipped ; <nl> + otherPolygon . IntersectWithPolyline ( line , & clipped ) ; <nl> + bool ret = clipped . size ( ) > 0 ; <nl> + for ( size_t i = 0 ; i < clipped . size ( ) ; + + i ) delete clipped [ i ] ; <nl> + return ret ; <nl> + } else { <nl> + return otherPolygon . Intersects ( polygon ) ; <nl> + } <nl> + } <nl> + <nl> + const S2Region & GeoQueryField : : getRegion ( ) const { <nl> + if ( NULL ! = cell ) { <nl> + return * cell ; <nl> + } else if ( NULL ! = line ) { <nl> + return * line ; <nl> + } else if ( NULL ! = polygon ) { <nl> + return * polygon ; <nl> + } else { <nl> + / / TODO : freak out here . <nl> + return S2Cell ( ) ; <nl> + } <nl> + } <nl> + <nl> + void GeoQueryField : : free ( ) { <nl> + if ( NULL ! = cell ) { <nl> + delete cell ; <nl> + } else if ( NULL ! = line ) { <nl> + delete line ; <nl> + } else if ( NULL ! = polygon ) { <nl> + delete polygon ; <nl> + } <nl> + } <nl> + <nl> + S2Cursor : : S2Cursor ( const BSONObj & keyPattern , const IndexDetails * details , <nl> + const BSONObj & query , const vector < GeoQueryField > & fields , <nl> + const S2IndexingParams & params , int numWanted ) <nl> + : _details ( details ) , _fields ( fields ) , _params ( params ) , <nl> + _keyPattern ( keyPattern ) , _numToReturn ( numWanted ) { <nl> + BSONObjBuilder geoFieldsToNuke ; <nl> + for ( size_t i = 0 ; i < _fields . size ( ) ; + + i ) { <nl> + geoFieldsToNuke . append ( _fields [ i ] . field , " " ) ; <nl> + } <nl> + / / false means we want to filter OUT geoFieldsToNuke , not filter to include only that . <nl> + _filteredQuery = query . filterFieldsUndotted ( geoFieldsToNuke . obj ( ) , false ) ; <nl> + _matcher . 
reset ( new CoveredIndexMatcher ( _filteredQuery , keyPattern ) ) ; <nl> + } <nl> + <nl> + S2Cursor : : ~ S2Cursor ( ) { <nl> + / / We own these pointers . <nl> + for ( size_t i = 0 ; i < _fields . size ( ) ; + + i ) { <nl> + _fields [ i ] . free ( ) ; <nl> + } <nl> + } <nl> + <nl> + CoveredIndexMatcher * S2Cursor : : matcher ( ) const { return _matcher . get ( ) ; } <nl> + <nl> + bool S2Cursor : : ok ( ) { <nl> + if ( NULL = = _btreeCursor . get ( ) ) { <nl> + / / FieldRangeVector needs an IndexSpec so we make it one . <nl> + BSONObjBuilder specBuilder ; <nl> + BSONObjIterator i ( _keyPattern ) ; <nl> + while ( i . more ( ) ) { <nl> + BSONElement e = i . next ( ) ; <nl> + specBuilder . append ( e . fieldName ( ) , 1 ) ; <nl> + } <nl> + BSONObj spec = specBuilder . obj ( ) ; <nl> + IndexSpec specForFRV ( spec ) ; <nl> + / / All the magic is in makeUnifiedFRS . See below . <nl> + / / A lot of these arguments are opaque . <nl> + FieldRangeSet frs ( _details - > parentNS ( ) . c_str ( ) , makeUnifiedFRS ( ) , false , false ) ; <nl> + shared_ptr < FieldRangeVector > frv ( new FieldRangeVector ( frs , specForFRV , 1 ) ) ; <nl> + _btreeCursor . reset ( BtreeCursor : : make ( nsdetails ( _details - > parentNS ( ) . c_str ( ) ) , <nl> + * _details , frv , 1 ) ) ; <nl> + return advance ( ) ; <nl> + } <nl> + return _btreeCursor - > ok ( ) ; <nl> + } <nl> + <nl> + / / Make the FieldRangeSet of keys we look for . Here is an example : <nl> + / / regularfield1 : regularvalue1 , $ or [ { geo1 : { $ in [ parentcover1 , . . . ] } } , <nl> + / / { geo1 : { regex : ^ cover1 } } , <nl> + / / { geo1 : { regex : ^ cover2 } } , <nl> + / / As for what we put into the geo field , see lengthy comments below . <nl> + BSONObj S2Cursor : : makeUnifiedFRS ( ) { <nl> + BSONObjBuilder frsObjBuilder ; <nl> + frsObjBuilder . appendElements ( _filteredQuery ) ; <nl> + <nl> + S2RegionCoverer coverer ; <nl> + _params . configureCoverer ( & coverer ) ; <nl> + <nl> + for ( size_t i = 0 ; i < _fields . 
size ( ) ; + + i ) { <nl> + / / Get a set of coverings . We look for keys that have this covering as a prefix <nl> + / / ( meaning the key is contained within the covering ) . <nl> + vector < S2CellId > cover ; <nl> + coverer . GetCovering ( _fields [ i ] . getRegion ( ) , & cover ) ; <nl> + <nl> + BSONArrayBuilder orBuilder ; <nl> + <nl> + / / Look at the cells we cover ( and any cells they cover via prefix ) . Examine <nl> + / / everything which has our cover as a strict prefix of its key . Anything with our <nl> + / / cover as a strict prefix is contained within the cover and should be intersection <nl> + / / tested . <nl> + for ( size_t j = 0 ; j < cover . size ( ) ; + + j ) { <nl> + string regex = " ^ " + cover [ j ] . toString ( ) ; <nl> + orBuilder . append ( BSON ( _fields [ i ] . field < < BSON ( " $ regex " < < regex ) ) ) ; <nl> + } <nl> + <nl> + / / Look at the cells that cover us . We want to look at every cell that contains the <nl> + / / covering we would index on . We generate the would - index - with - this - covering and <nl> + / / find all the cells strictly containing the cells in that set , until we hit the <nl> + / / coarsest indexed cell . We use $ in , not a prefix match . Why not prefix ? Because <nl> + / / we ' ve already looked at everything finer or as fine as our initial covering . <nl> + / / <nl> + / / Say we have a fine point with cell id 212121 , we go up one , get 21212 , we don ' t <nl> + / / want to look at cells 21212 [ not - 1 ] because we know they ' re not going to intersect <nl> + / / with 212121 , but entries inserted with cell value 21212 ( no trailing digits ) may . <nl> + / / And we ' ve already looked at points with the cell id 211111 from the regex search <nl> + / / created above , so we only want things where the value of the last digit is not <nl> + / / stored ( and therefore could be 1 ) . <nl> + set < S2CellId > parentCells ; <nl> + for ( size_t j = 0 ; j < cover . 
size ( ) ; + + j ) { <nl> + for ( S2CellId id = cover [ j ] . parent ( ) ; <nl> + id . level ( ) > = _params . coarsestIndexedLevel ; id = id . parent ( ) ) { <nl> + parentCells . insert ( id ) ; <nl> + } <nl> + } <nl> + <nl> + / / Create the actual $ in statement . <nl> + BSONArrayBuilder inBuilder ; <nl> + for ( set < S2CellId > : : const_iterator it = parentCells . begin ( ) ; <nl> + it ! = parentCells . end ( ) ; + + it ) { <nl> + inBuilder . append ( it - > toString ( ) ) ; <nl> + } <nl> + orBuilder . append ( BSON ( _fields [ i ] . field < < BSON ( " $ in " < < inBuilder . arr ( ) ) ) ) ; <nl> + <nl> + / / Join the regexes with the in statement via an or . <nl> + / / TODO ( hk ) : see if this actually works with two geo fields or if they have <nl> + / / to be joined with an and or what . <nl> + frsObjBuilder . append ( " $ or " , orBuilder . arr ( ) ) ; <nl> + } <nl> + return frsObjBuilder . obj ( ) ; <nl> + } <nl> + <nl> + Record * S2Cursor : : _current ( ) { return _btreeCursor - > currLoc ( ) . rec ( ) ; } <nl> + BSONObj S2Cursor : : current ( ) { return _btreeCursor - > currLoc ( ) . obj ( ) ; } <nl> + DiskLoc S2Cursor : : currLoc ( ) { return _btreeCursor - > currLoc ( ) ; } <nl> + BSONObj S2Cursor : : currKey ( ) const { return _btreeCursor - > currKey ( ) ; } <nl> + DiskLoc S2Cursor : : refLoc ( ) { return DiskLoc ( ) ; } <nl> + long long S2Cursor : : nscanned ( ) { return _nscanned ; } <nl> + <nl> + / / This is the actual search . <nl> + bool S2Cursor : : advance ( ) { <nl> + if ( _numToReturn < = 0 ) { return false ; } <nl> + for ( ; _btreeCursor - > ok ( ) ; _btreeCursor - > advance ( ) ) { <nl> + if ( _seen . end ( ) ! = _seen . find ( _btreeCursor - > currLoc ( ) ) ) { continue ; } <nl> + _seen . insert ( _btreeCursor - > currLoc ( ) ) ; <nl> + + + _nscanned ; <nl> + <nl> + MatchDetails details ; <nl> + bool matched = _matcher - > matchesCurrent ( _btreeCursor . get ( ) , & details ) ; <nl> + if ( ! 
matched ) { continue ; } <nl> + <nl> + const BSONObj & indexedObj = _btreeCursor - > currLoc ( ) . obj ( ) ; <nl> + <nl> + size_t geoFieldsMatched = 0 ; <nl> + / / OK , cool , non - geo match satisfied . See if the object actually overlaps w / the geo <nl> + / / query fields . <nl> + for ( size_t i = 0 ; i < _fields . size ( ) ; + + i ) { <nl> + BSONElementSet geoFieldElements ; <nl> + indexedObj . getFieldsDotted ( _fields [ i ] . field , geoFieldElements ) ; <nl> + if ( geoFieldElements . empty ( ) ) { continue ; } <nl> + <nl> + bool match = false ; <nl> + <nl> + for ( BSONElementSet : : iterator oi = geoFieldElements . begin ( ) ; <nl> + oi ! = geoFieldElements . end ( ) ; + + oi ) { <nl> + const BSONObj & geoObj = oi - > Obj ( ) ; <nl> + if ( GeoJSONParser : : isPolygon ( geoObj ) ) { <nl> + S2Polygon shape ; <nl> + GeoJSONParser : : parsePolygon ( geoObj , & shape ) ; <nl> + match = _fields [ i ] . intersectsPolygon ( shape ) ; <nl> + } else if ( GeoJSONParser : : isLineString ( geoObj ) ) { <nl> + S2Polyline shape ; <nl> + GeoJSONParser : : parseLineString ( geoObj , & shape ) ; <nl> + match = _fields [ i ] . intersectsLine ( shape ) ; <nl> + } else if ( GeoJSONParser : : isPoint ( geoObj ) ) { <nl> + S2Cell point ; <nl> + GeoJSONParser : : parsePoint ( geoObj , & point ) ; <nl> + match = _fields [ i ] . intersectsPoint ( point ) ; <nl> + } <nl> + if ( match ) break ; <nl> + } <nl> + <nl> + if ( match ) { + + geoFieldsMatched ; } <nl> + } <nl> + <nl> + if ( geoFieldsMatched = = _fields . size ( ) ) { <nl> + / / We have a winner ! And we point at it . <nl> + - - _numToReturn ; <nl> + return true ; <nl> + } <nl> + } <nl> + return false ; <nl> + } <nl> + <nl> + / / TODO : yielding is very un - tested . <nl> + / / This is called when we ' re supposed to yield . <nl> + void S2Cursor : : noteLocation ( ) { <nl> + _btreeCursor - > noteLocation ( ) ; <nl> + _seen . clear ( ) ; <nl> + } <nl> + <nl> + / / Called when we ' re un - yielding . 
<nl> + void S2Cursor : : checkLocation ( ) { <nl> + _btreeCursor - > checkLocation ( ) ; <nl> + / / We are pointing at a valid btree location now , but it may not be a valid result . <nl> + / / This ensures that we ' re pointing at a valid result that satisfies the query . <nl> + <nl> + / / There is something subtle here : Say we point at something valid , and note the location <nl> + / / ( yield ) , then checkLocation ( unyield ) , when we call advance , we don ' t go past the object <nl> + / / that we were / are pointing at since we only do that if we ' ve seen it before ( that is , it ' s <nl> + / / in _seen , which we clear when we yield ) . <nl> + <nl> + advance ( ) ; <nl> + } <nl> + <nl> + void S2Cursor : : explainDetails ( BSONObjBuilder & b ) { <nl> + / / TODO ( hk ) : Dump more meaningful stats . <nl> + b < < " nscanned " < < _nscanned ; <nl> + } <nl> + } / / namespace mongo <nl> new file mode 100644 <nl> index 000000000000 . . e220f4070629 <nl> mmm / dev / null <nl> ppp b / src / mongo / db / geo / s2cursor . h <nl> <nl> + / * * <nl> + * Copyright ( C ) 2012 10gen Inc . <nl> + * <nl> + * This program is free software : you can redistribute it and / or modify <nl> + * it under the terms of the GNU Affero General Public License , version 3 , <nl> + * as published by the Free Software Foundation . <nl> + * <nl> + * This program is distributed in the hope that it will be useful , <nl> + * but WITHOUT ANY WARRANTY ; without even the implied warranty of <nl> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE . See the <nl> + * GNU Affero General Public License for more details . <nl> + * <nl> + * You should have received a copy of the GNU Affero General Public License <nl> + * along with this program . If not , see < http : / / www . gnu . org / licenses / > . <nl> + * / <nl> + <nl> + # include < vector > <nl> + # include " mongo / db / jsobj . h " <nl> + # include " mongo / db / commands . h " <nl> + # include " mongo / db / btree . 
h " <nl> + # include " mongo / db / cursor . h " <nl> + # include " mongo / db / diskloc . h " <nl> + # include " mongo / db / matcher . h " <nl> + # include " mongo / db / queryutil . h " <nl> + # include " mongo / db / geo / s2common . h " <nl> + <nl> + namespace mongo { <nl> + class S2Cursor : public Cursor { <nl> + public : <nl> + S2Cursor ( const BSONObj & keyPattern , const IndexDetails * details , const BSONObj & query , <nl> + const vector < GeoQueryField > & regions , const S2IndexingParams & params , <nl> + int numWanted ) ; <nl> + virtual ~ S2Cursor ( ) ; <nl> + virtual CoveredIndexMatcher * matcher ( ) const ; <nl> + <nl> + virtual bool supportYields ( ) { return true ; } <nl> + virtual bool supportGetMore ( ) { return true ; } <nl> + virtual bool isMultiKey ( ) const { return true ; } <nl> + virtual bool autoDedup ( ) const { return false ; } <nl> + virtual bool modifiedKeys ( ) const { return true ; } <nl> + virtual bool getsetdup ( DiskLoc loc ) { return false ; } <nl> + virtual string toString ( ) { return " S2Cursor " ; } <nl> + BSONObj indexKeyPattern ( ) { return _keyPattern ; } <nl> + virtual bool ok ( ) ; <nl> + virtual Record * _current ( ) ; <nl> + virtual BSONObj current ( ) ; <nl> + virtual DiskLoc currLoc ( ) ; <nl> + virtual bool advance ( ) ; <nl> + virtual BSONObj currKey ( ) const ; <nl> + virtual DiskLoc refLoc ( ) ; <nl> + virtual void noteLocation ( ) ; <nl> + virtual void checkLocation ( ) ; <nl> + virtual long long nscanned ( ) ; <nl> + virtual void explainDetails ( BSONObjBuilder & b ) ; <nl> + private : <nl> + / / Make an object that describes the restrictions on all possible valid keys . <nl> + / / It ' s kind of a monstrous object . Thanks , FieldRangeSet , for doing all the work <nl> + / / for us . <nl> + BSONObj makeUnifiedFRS ( ) ; <nl> + <nl> + / / Need this to make a FieldRangeSet . <nl> + const IndexDetails * _details ; <nl> + / / The query with the geo stuff taken out . We use this with a matcher . 
<nl> + BSONObj _filteredQuery ; <nl> + / / What geo regions are we looking for ? <nl> + vector < GeoQueryField > _fields ; <nl> + / / We use this for matching non - GEO stuff . <nl> + shared_ptr < CoveredIndexMatcher > _matcher ; <nl> + / / How were the keys created ? We need this to search for the right stuff . <nl> + S2IndexingParams _params ; <nl> + / / How many things did we scan / look at ? Not sure exactly how this is defined . <nl> + long long _nscanned ; <nl> + / / We have to pass this to the FieldRangeVector ctor ( in modified form ) . <nl> + BSONObj _keyPattern ; <nl> + / / How many docs do we want to return ? Starts with the # the user requests <nl> + / / and goes down . <nl> + int _numToReturn ; <nl> + <nl> + / / What have we checked so we don ' t repeat it and waste time ? <nl> + set < DiskLoc > _seen ; <nl> + / / This really does all the work / points into the btree . <nl> + scoped_ptr < BtreeCursor > _btreeCursor ; <nl> + } ; <nl> + } / / namespace mongo <nl> new file mode 100644 <nl> index 000000000000 . . c03d6dae906c <nl> mmm / dev / null <nl> ppp b / src / mongo / db / geo / s2index . cpp <nl> <nl> + / * * <nl> + * Copyright ( C ) 2012 10gen Inc . <nl> + * <nl> + * This program is free software : you can redistribute it and / or modify <nl> + * it under the terms of the GNU Affero General Public License , version 3 , <nl> + * as published by the Free Software Foundation . <nl> + * <nl> + * This program is distributed in the hope that it will be useful , <nl> + * but WITHOUT ANY WARRANTY ; without even the implied warranty of <nl> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE . See the <nl> + * GNU Affero General Public License for more details . <nl> + * <nl> + * You should have received a copy of the GNU Affero General Public License <nl> + * along with this program . If not , see < http : / / www . gnu . org / licenses / > . <nl> + * / <nl> + <nl> + # include " mongo / db / namespace - inl . h " <nl> + # include " mongo / db / jsobj . 
h " <nl> + # include " mongo / db / index . h " <nl> + # include " mongo / db / queryutil . h " <nl> + # include " mongo / db / geo / geojsonparser . h " <nl> + # include " mongo / db / geo / s2common . h " <nl> + # include " mongo / db / geo / s2cursor . h " <nl> + # include " third_party / s2 / s2 . h " <nl> + # include " third_party / s2 / s2cell . h " <nl> + # include " third_party / s2 / s2polygon . h " <nl> + # include " third_party / s2 / s2polyline . h " <nl> + # include " third_party / s2 / s2regioncoverer . h " <nl> + <nl> + namespace { <nl> + / / Used in a handful of places in GeoSphere2DType below . <nl> + static void keysFromRegion ( S2RegionCoverer * coverer , const S2Region & region , <nl> + vector < string > * out ) { <nl> + vector < S2CellId > covering ; <nl> + coverer - > GetCovering ( region , & covering ) ; <nl> + for ( size_t i = 0 ; i < covering . size ( ) ; + + i ) { <nl> + out - > push_back ( covering [ i ] . toString ( ) ) ; <nl> + } <nl> + } <nl> + } / / namespace <nl> + <nl> + namespace mongo { <nl> + <nl> + static const string SPHERE_2D_NAME = " s2d " ; <nl> + <nl> + class GeoSphere2DType : public IndexType { <nl> + public : <nl> + / / We keep track of what fields we ' ve indexed and if they ' re geo or not . <nl> + struct IndexedField { <nl> + enum Type { <nl> + GEO , <nl> + LITERAL <nl> + } ; <nl> + <nl> + Type type ; <nl> + string name ; <nl> + IndexedField ( Type t , const string & n ) : type ( t ) , name ( n ) { } <nl> + } ; <nl> + <nl> + GeoSphere2DType ( const IndexPlugin * plugin , const IndexSpec * spec , <nl> + const S2IndexingParams & params ) <nl> + : IndexType ( plugin , spec ) , _params ( params ) { <nl> + int geoFields = 0 ; <nl> + / / Categorize the fields we ' re indexing and make sure we have a geo field . <nl> + BSONObjIterator i ( spec - > keyPattern ) ; <nl> + while ( i . more ( ) ) { <nl> + BSONElement e = i . next ( ) ; <nl> + if ( e . type ( ) = = String & & SPHERE_2D_NAME = = e . valuestr ( ) ) { <nl> + _fields . 
push_back ( IndexedField ( IndexedField : : GEO , e . fieldName ( ) ) ) ; <nl> + + + geoFields ; <nl> + } else { <nl> + _fields . push_back ( IndexedField ( IndexedField : : LITERAL , e . fieldName ( ) ) ) ; <nl> + } <nl> + } <nl> + uassert ( 16450 , " Expect at least one geo field , spec = " + spec - > keyPattern . toString ( ) , <nl> + geoFields > = 1 ) ; <nl> + } <nl> + <nl> + virtual ~ GeoSphere2DType ( ) { } <nl> + <nl> + void getKeys ( const BSONObj & obj , BSONObjSet & keys ) const { <nl> + verify ( _fields . size ( ) > = 1 ) ; <nl> + <nl> + BSONObjSet keysToAdd ; <nl> + / / We output keys in the same order as the fields we index . <nl> + for ( size_t i = 0 ; i < _fields . size ( ) ; + + i ) { <nl> + const IndexedField & field = _fields [ i ] ; <nl> + <nl> + / / First , we get the keys that this field adds . Either they ' re added literally from <nl> + / / the value of the field , or they ' re transformed if the field is geo . <nl> + BSONElementSet fieldElements ; <nl> + obj . getFieldsDotted ( field . name , fieldElements ) ; <nl> + <nl> + BSONObj keysForThisField ; <nl> + if ( IndexedField : : GEO = = field . type ) { <nl> + keysForThisField = getGeoKeys ( fieldElements ) ; <nl> + } else if ( IndexedField : : LITERAL = = field . type ) { <nl> + keysForThisField = getLiteralKeys ( fieldElements ) ; <nl> + } else { <nl> + verify ( 0 ) ; <nl> + } <nl> + <nl> + / / We expect there to be _spec - > _missingField ( ) present in the keys if data is <nl> + / / missing . So , this should be non - empty . <nl> + verify ( ! keysForThisField . isEmpty ( ) ) ; <nl> + <nl> + / / We take the Cartesian product of all of the keys . This requires that we have <nl> + / / some keys to take the Cartesian product with . If keysToAdd . empty ( ) , we <nl> + / / initialize it . <nl> + if ( keysToAdd . empty ( ) ) { <nl> + / / This should only happen w / the first field . <nl> + verify ( 0 = = i ) ; <nl> + BSONObjIterator newIt ( keysForThisField ) ; <nl> + while ( newIt . 
more ( ) ) { <nl> + BSONObjBuilder b ; <nl> + b . append ( " " , newIt . next ( ) . String ( ) ) ; <nl> + keysToAdd . insert ( b . obj ( ) ) ; <nl> + } <nl> + continue ; <nl> + } <nl> + <nl> + BSONObjSet updatedKeysToAdd ; <nl> + for ( BSONObjSet : : const_iterator it = keysToAdd . begin ( ) ; it ! = keysToAdd . end ( ) ; <nl> + + + it ) { <nl> + <nl> + BSONObjIterator newIt ( keysForThisField ) ; <nl> + while ( newIt . more ( ) ) { <nl> + BSONObjBuilder b ; <nl> + b . appendElements ( * it ) ; <nl> + b . append ( " " , newIt . next ( ) . String ( ) ) ; <nl> + updatedKeysToAdd . insert ( b . obj ( ) ) ; <nl> + } <nl> + } <nl> + keysToAdd = updatedKeysToAdd ; <nl> + } <nl> + <nl> + if ( keysToAdd . size ( ) > _params . maxKeysPerInsert ) { <nl> + warning ( ) < < " insert of geo object generated lots of keys . \ n " <nl> + < < " consider creating larger buckets . obj = " <nl> + < < obj ; <nl> + } <nl> + <nl> + for ( BSONObjSet : : const_iterator it = keysToAdd . begin ( ) ; it ! = keysToAdd . end ( ) ; + + it ) { <nl> + keys . insert ( * it ) ; <nl> + } <nl> + } <nl> + <nl> + / / Entry point for a search . <nl> + virtual shared_ptr < Cursor > newCursor ( const BSONObj & query , const BSONObj & order , <nl> + int numWanted ) const { <nl> + / / I copied this from 2d . cpp . Guard against perversion . <nl> + if ( numWanted < 0 ) numWanted * = - 1 ; <nl> + if ( 0 = = numWanted ) numWanted = INT_MAX ; <nl> + <nl> + vector < GeoQueryField > regions ; <nl> + <nl> + / / Go through the fields that we index , and for each geo one , make a GeoQueryField <nl> + / / object for the S2Cursor class to do intersection testing / cover generating with . <nl> + for ( size_t i = 0 ; i < _fields . size ( ) ; + + i ) { <nl> + const IndexedField & field = _fields [ i ] ; <nl> + if ( IndexedField : : GEO ! = field . type ) { continue ; } <nl> + <nl> + / / Example of what we ' re trying to parse : <nl> + / / pointA = { " type " : " Point " , " coordinates " : [ 40 , 5 ] } <nl> + / / t . 
find ( { " geo " : { " $ intersect " : { " $ point " : pointA } } } ) <nl> + / / where field . name is " geo " <nl> + BSONElement e = query . getFieldDotted ( field . name ) ; <nl> + if ( e . eoo ( ) ) { continue ; } <nl> + <nl> + if ( ! e . isABSONObj ( ) ) { continue ; } <nl> + e = e . embeddedObject ( ) . firstElement ( ) ; <nl> + <nl> + if ( ! e . isABSONObj ( ) ) { continue ; } <nl> + <nl> + BSONObj : : MatchType matchType = static_cast < BSONObj : : MatchType > ( e . getGtLtOp ( ) ) ; <nl> + if ( BSONObj : : opINTERSECT ! = matchType & & BSONObj : : opWITHIN ! = matchType ) { <nl> + continue ; <nl> + } <nl> + <nl> + e = e . embeddedObject ( ) . firstElement ( ) ; <nl> + if ( ! e . isABSONObj ( ) ) { continue ; } <nl> + <nl> + BSONObj shapeObj = e . embeddedObject ( ) ; <nl> + if ( 2 ! = shapeObj . nFields ( ) ) { continue ; } <nl> + <nl> + GeoQueryField geoQueryField ( field . name ) ; <nl> + <nl> + if ( strcmp ( e . fieldName ( ) , " $ geometry " ) ) { continue ; } <nl> + if ( GeoJSONParser : : isPolygon ( shapeObj ) ) { <nl> + / / We can ' t really pass these things around willy - nilly except by ptr . <nl> + / / The cursor owns them . <nl> + geoQueryField . polygon = new S2Polygon ( ) ; <nl> + GeoJSONParser : : parsePolygon ( shapeObj , geoQueryField . polygon ) ; <nl> + } else if ( GeoJSONParser : : isPoint ( shapeObj ) ) { <nl> + geoQueryField . cell = new S2Cell ( ) ; <nl> + GeoJSONParser : : parsePoint ( shapeObj , geoQueryField . cell ) ; <nl> + } else if ( GeoJSONParser : : isLineString ( shapeObj ) ) { <nl> + geoQueryField . line = new S2Polyline ( ) ; <nl> + GeoJSONParser : : parseLineString ( shapeObj , geoQueryField . line ) ; <nl> + } else { <nl> + / / Maybe it ' s unknown geometry , maybe it ' s garbage . <nl> + / / TODO : alert the user ? <nl> + continue ; <nl> + } <nl> + regions . 
push_back ( geoQueryField ) ; <nl> + } <nl> + <nl> + S2Cursor * cursor = new S2Cursor ( keyPattern ( ) , getDetails ( ) , query , regions , _params , <nl> + numWanted ) ; <nl> + return shared_ptr < Cursor > ( cursor ) ; <nl> + } <nl> + <nl> + virtual IndexSuitability suitability ( const BSONObj & query , const BSONObj & order ) const { <nl> + for ( size_t i = 0 ; i < _fields . size ( ) ; + + i ) { <nl> + const IndexedField & field = _fields [ i ] ; <nl> + if ( IndexedField : : GEO ! = field . type ) { continue ; } <nl> + <nl> + BSONElement e = query . getFieldDotted ( field . name ) ; <nl> + if ( Object ! = e . type ( ) ) { continue ; } <nl> + / / we only support opWITHIN and opINTERSECT <nl> + / / getGtLtOp is horribly misnamed and really means get the operation . <nl> + switch ( e . embeddedObject ( ) . firstElement ( ) . getGtLtOp ( ) ) { <nl> + case BSONObj : : opWITHIN : <nl> + case BSONObj : : opINTERSECT : <nl> + return OPTIMAL ; <nl> + default : <nl> + return HELPFUL ; <nl> + } <nl> + } <nl> + return USELESS ; <nl> + } <nl> + <nl> + const IndexDetails * getDetails ( ) const { return _spec - > getDetails ( ) ; } <nl> + private : <nl> + / / Get the index keys for elements that are GeoJSON . <nl> + BSONObj getGeoKeys ( const BSONElementSet & elements ) const { <nl> + BSONArrayBuilder aBuilder ; <nl> + <nl> + S2RegionCoverer coverer ; <nl> + _params . configureCoverer ( & coverer ) ; <nl> + <nl> + / / See here for GeoJSON format : geojson . org / geojson - spec . html <nl> + for ( BSONElementSet : : iterator i = elements . begin ( ) ; i ! = elements . 
end ( ) ; + + i ) { <nl> + const BSONObj & obj = i - > Obj ( ) ; <nl> + <nl> + vector < string > cells ; <nl> + if ( GeoJSONParser : : isPolygon ( obj ) ) { <nl> + S2Polygon shape ; <nl> + GeoJSONParser : : parsePolygon ( obj , & shape ) ; <nl> + keysFromRegion ( & coverer , shape , & cells ) ; <nl> + } else if ( GeoJSONParser : : isLineString ( obj ) ) { <nl> + S2Polyline shape ; <nl> + GeoJSONParser : : parseLineString ( obj , & shape ) ; <nl> + keysFromRegion ( & coverer , shape , & cells ) ; <nl> + } else if ( GeoJSONParser : : isPoint ( obj ) ) { <nl> + S2Cell point ; <nl> + GeoJSONParser : : parsePoint ( obj , & point ) ; <nl> + keysFromRegion ( & coverer , point , & cells ) ; <nl> + } else { <nl> + / / TODO ( hk ) : report an error ? <nl> + } <nl> + <nl> + for ( vector < string > : : const_iterator it = cells . begin ( ) ; it ! = cells . end ( ) ; + + it ) { <nl> + aBuilder . append ( * it ) ; <nl> + } <nl> + } <nl> + <nl> + if ( 0 = = aBuilder . arrSize ( ) ) { <nl> + / / TODO ( hk ) : We use " " for empty . I should verify this actually works . <nl> + aBuilder . append ( " " ) ; <nl> + } <nl> + <nl> + return aBuilder . arr ( ) ; <nl> + } <nl> + <nl> + / / elements is a non - geo field . Add the values literally , expanding arrays . <nl> + BSONObj getLiteralKeys ( const BSONElementSet & elements ) const { <nl> + BSONArrayBuilder builder ; <nl> + if ( 0 = = elements . size ( ) ) { <nl> + builder . append ( " " ) ; <nl> + } else { <nl> + for ( BSONElementSet : : iterator i = elements . begin ( ) ; i ! = elements . end ( ) ; + + i ) { <nl> + builder . append ( * i ) ; <nl> + } <nl> + } <nl> + return builder . 
arr ( ) ; <nl> + } <nl> + <nl> + vector < IndexedField > _fields ; <nl> + S2IndexingParams _params ; <nl> + } ; <nl> + <nl> + class GeoSphere2DIndexPlugin : public IndexPlugin { <nl> + public : <nl> + GeoSphere2DIndexPlugin ( ) : IndexPlugin ( SPHERE_2D_NAME ) { } <nl> + <nl> + virtual IndexType * generate ( const IndexSpec * spec ) const { <nl> + S2IndexingParams params ; <nl> + / / TODO : parse params optionally from spec ? <nl> + params . maxKeysPerInsert = 100 ; <nl> + / / This is advisory . <nl> + params . maxCellsInCovering = 5 ; <nl> + const double radiusOfEarthInMeters = 6378 . 1 * 1000 . 0 ; <nl> + / / These are not advisory . <nl> + params . finestIndexedLevel = S2 : : kAvgEdge . GetClosestLevel ( 10 . 0 / radiusOfEarthInMeters ) ; <nl> + params . coarsestIndexedLevel = <nl> + S2 : : kAvgEdge . GetClosestLevel ( 1000000 . 0 / radiusOfEarthInMeters ) ; <nl> + return new GeoSphere2DType ( this , spec , params ) ; <nl> + } <nl> + } geoSphere2DIndexPlugin ; <nl> + } / / namespace mongo <nl> mmm a / src / mongo / db / jsobj . cpp <nl> ppp b / src / mongo / db / jsobj . 
cpp <nl> namespace mongo { <nl> } <nl> else if ( fn [ 1 ] = = ' t ' & & fn [ 2 ] = = ' y ' & & fn [ 3 ] = = ' p ' & & fn [ 4 ] = = ' e ' & & fn [ 5 ] = = 0 ) <nl> return BSONObj : : opTYPE ; <nl> - else if ( fn [ 1 ] = = ' i ' & & fn [ 2 ] = = ' n ' & & fn [ 3 ] = = 0 ) <nl> - return BSONObj : : opIN ; <nl> - else if ( fn [ 1 ] = = ' n ' & & fn [ 2 ] = = ' i ' & & fn [ 3 ] = = ' n ' & & fn [ 4 ] = = 0 ) <nl> + else if ( fn [ 1 ] = = ' i ' & & fn [ 2 ] = = ' n ' ) { <nl> + if ( 0 = = fn [ 3 ] ) { <nl> + return BSONObj : : opIN ; <nl> + } else if ( mongoutils : : str : : equals ( fn + 3 , " tersect " ) ) { <nl> + return BSONObj : : opINTERSECT ; <nl> + } <nl> + } else if ( fn [ 1 ] = = ' n ' & & fn [ 2 ] = = ' i ' & & fn [ 3 ] = = ' n ' & & fn [ 4 ] = = 0 ) <nl> return BSONObj : : NIN ; <nl> else if ( fn [ 1 ] = = ' a ' & & fn [ 2 ] = = ' l ' & & fn [ 3 ] = = ' l ' & & fn [ 4 ] = = 0 ) <nl> return BSONObj : : opALL ; <nl> mmm a / src / mongo / db / matcher . cpp <nl> ppp b / src / mongo / db / matcher . cpp <nl> namespace mongo { <nl> } <nl> case BSONObj : : opNEAR : <nl> case BSONObj : : opWITHIN : <nl> + case BSONObj : : opINTERSECT : <nl> case BSONObj : : opMAX_DISTANCE : <nl> break ; <nl> default : <nl> mmm a / src / mongo / db / queryutil . cpp <nl> ppp b / src / mongo / db / queryutil . cpp <nl> namespace mongo { <nl> case BSONObj : : opWITHIN : <nl> _special = " 2d " ; <nl> break ; <nl> + case BSONObj : : opINTERSECT : <nl> + _special = " s2d " ; <nl> + break ; <nl> case BSONObj : : opEXISTS : { <nl> if ( ! existsSpec ) { <nl> lower = upper = staticNull . firstElement ( ) ; <nl>
|
SERVER-2874 add s2 indexing and cursor
|
mongodb/mongo
|
b500d17c347044b6d38c135c7edc142e3ac68658
|
2012-11-05T20:31:27Z
|
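The `getKeys` logic in the s2index diff above builds compound index keys by taking the Cartesian product of the per-field key sets (`keysToAdd` crossed with `keysForThisField`). A minimal sketch of that combination step, with illustrative names not taken from the MongoDB source:

```python
def combined_index_keys(per_field_keys):
    """Combine the key strings generated for each indexed field into
    compound keys via a Cartesian product, mirroring the
    keysToAdd / updatedKeysToAdd loop in GeoSphere2DType::getKeys."""
    keys = [()]  # start with a single empty compound key
    for field_keys in per_field_keys:
        # Extend every compound key built so far with every key
        # this field contributes (the "updatedKeysToAdd" step).
        keys = [prev + (k,) for prev in keys for k in field_keys]
    return set(keys)
```

As in the original, the number of emitted keys is the product of the per-field key counts, which is why the diff warns when an insert generates too many keys.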
mmm a / xbmc / AppParamParser . cpp <nl> ppp b / xbmc / AppParamParser . cpp <nl> <nl> # include " AppParamParser . h " <nl> # include " GUIInfoManager . h " <nl> # include " PlayListPlayer . h " <nl> - # include " FileItem . h " <nl> # include " Application . h " <nl> # include " ApplicationMessenger . h " <nl> # include " settings / AdvancedSettings . h " <nl> mmm a / xbmc / Application . h <nl> ppp b / xbmc / Application . h <nl> <nl> # include " XBApplicationEx . h " <nl> <nl> # include " guilib / IMsgTargetCallback . h " <nl> - # include " threads / Condition . h " <nl> # include " utils / GlobalsHandling . h " <nl> # include " utils / StdString . h " <nl> <nl> mmm a / xbmc / ApplicationMessenger . h <nl> ppp b / xbmc / ApplicationMessenger . h <nl> <nl> * <nl> * / <nl> <nl> - # include " threads / CriticalSection . h " <nl> # include " utils / StdString . h " <nl> # include " guilib / WindowIDs . h " <nl> # include " threads / Thread . h " <nl> - # include " threads / Event . h " <nl> # include < boost / shared_ptr . hpp > <nl> <nl> # include < queue > <nl> mmm a / xbmc / FileItem . h <nl> ppp b / xbmc / FileItem . h <nl> <nl> # include " utils / ISortable . h " <nl> # include " XBDateTime . h " <nl> # include " utils / SortUtils . h " <nl> - # include " utils / LabelFormatter . h " <nl> # include " GUIPassword . h " <nl> # include " threads / CriticalSection . h " <nl> <nl> mmm a / xbmc / GUIInfoManager . h <nl> ppp b / xbmc / GUIInfoManager . h <nl> namespace INFO <nl> # define COMBINED_VALUES_START 100000 <nl> <nl> / / forward <nl> - class CInfoLabel ; <nl> class CGUIWindow ; <nl> namespace EPG { class CEpgInfoTag ; } <nl> <nl> mmm a / xbmc / NfoFile . h <nl> ppp b / xbmc / NfoFile . h <nl> <nl> <nl> # pragma once <nl> <nl> - # include " utils / XBMCTinyXML . h " <nl> # include " addons / Scraper . h " <nl> # include " utils / CharsetConverter . h " <nl> - # include " utils / XMLUtils . h " <nl> # include " utils / StdString . 
h " <nl> <nl> class CNfoFile <nl> mmm a / xbmc / TextureDatabase . h <nl> ppp b / xbmc / TextureDatabase . h <nl> <nl> <nl> # include " dbwrappers / Database . h " <nl> # include " TextureCacheJob . h " <nl> - # include " playlists / SmartPlayList . h " <nl> + # include " dbwrappers / DatabaseQuery . h " <nl> + # include " utils / DatabaseUtils . h " <nl> <nl> class CVariant ; <nl> <nl> mmm a / xbmc / ThumbnailCache . h <nl> ppp b / xbmc / ThumbnailCache . h <nl> <nl> * <nl> * / <nl> <nl> - # include " utils / StdString . h " <nl> - <nl> # include < map > <nl> <nl> class CCriticalSection ; <nl> - class CVideoInfoTag ; <nl> - namespace MUSIC_INFO <nl> - { <nl> - class CMusicInfoTag ; <nl> - } <nl> - class CAlbum ; <nl> - class CArtist ; <nl> - class CFileItem ; <nl> <nl> class CThumbnailCache <nl> { <nl> mmm a / xbmc / Util . h <nl> ppp b / xbmc / Util . h <nl> <nl> # define LEGAL_WIN32_COMPAT 1 <nl> # define LEGAL_FATX 2 <nl> <nl> - namespace XFILE <nl> - { <nl> - class IFileCallback ; <nl> - } <nl> - <nl> - class CFileItem ; <nl> class CFileItemList ; <nl> class CURL ; <nl> <nl> mmm a / xbmc / addons / AddonCallbacks . h <nl> ppp b / xbmc / addons / AddonCallbacks . h <nl> <nl> # include " cores / dvdplayer / DVDDemuxers / DVDDemuxUtils . h " <nl> # include " addons / include / xbmc_pvr_types . h " <nl> # include " addons / include / xbmc_codec_types . h " <nl> - # include " . . / . . / addons / library . xbmc . addon / libXBMC_addon . h " <nl> # include " . . / . . / addons / library . xbmc . gui / libXBMC_gui . h " <nl> <nl> typedef void ( * AddOnLogCallback ) ( void * addonData , const ADDON : : addon_log_t loglevel , const char * msg ) ; <nl> mmm a / xbmc / addons / GUIDialogAddonSettings . cpp <nl> ppp b / xbmc / addons / GUIDialogAddonSettings . cpp <nl> <nl> # include " utils / log . h " <nl> # include " Util . h " <nl> # include " URL . h " <nl> + # include " utils / XMLUtils . 
h " <nl> <nl> using namespace std ; <nl> using namespace ADDON ; <nl> mmm a / xbmc / addons / Skin . h <nl> ppp b / xbmc / addons / Skin . h <nl> <nl> # include " guilib / GUIIncludes . h " / / needed for the GUIInclude member <nl> # define CREDIT_LINE_LENGTH 50 <nl> <nl> - class TiXmlNode ; <nl> class CSetting ; <nl> <nl> namespace ADDON <nl>
|
Merge pull request from ace20022/includes_1
|
xbmc/xbmc
|
54baec4f4e8c55d99b1bf5acbe76071b83355e61
|
2014-08-05T21:03:15Z
|
mmm a / . circleci / config . yml <nl> ppp b / . circleci / config . yml <nl> setup_ci_environment : & setup_ci_environment <nl> <nl> if [ [ " $ { JOB_BASE_NAME } " = = * - test * | | " $ { JOB_BASE_NAME } " = = smoke * ] ] ; then <nl> if [ - n " $ { CUDA_VERSION } " ] ; then <nl> - wget ' https : / / s3 . amazonaws . com / ossci - linux / nvidia_driver / NVIDIA - Linux - x86_64 - 396 . 26 . run ' <nl> - sudo / bin / bash . / NVIDIA - Linux - x86_64 - 396 . 26 . run - s - - no - drm <nl> + wget ' https : / / s3 . amazonaws . com / ossci - linux / nvidia_driver / NVIDIA - Linux - x86_64 - 410 . 79 . run ' <nl> + sudo / bin / bash . / NVIDIA - Linux - x86_64 - 410 . 79 . run - s - - no - drm <nl> nvidia - smi <nl> fi <nl> fi <nl>
|
Fixing cuda100 smoke tests
|
pytorch/pytorch
|
62883a911c66fe7e57019afd44256f85b7b9eb41
|
2019-01-03T01:13:16Z
|
mmm a / PowerEditor / bin / change . log <nl> ppp b / PowerEditor / bin / change . log <nl> <nl> - Notepad + + v6 . 9 New feature and bug - fixes : <nl> - <nl> - 1 . Add " Folder as Workspace " feature . <nl> - 2 . Fix Notepad + + hanging issue while user uses touchscreen to activate Notepad + + window . <nl> - 3 . HTML auto - close tag enhancement : Prevent < br > , < hr > , < img > , < link > and < meta > from being closed automatically . <nl> - 4 . Project enhancement : Allows user defined extension to associate workspace file . <nl> - 5 . Make behavior of SHIFT + END and SHIFT + HOME more consistent when word wrapping is enabled . <nl> - 6 . Add new API NPPM_SAVEFILE ( for plugins ) to save any file , not only the focused one . <nl> - 7 . Add file extensions for FreePascal / Lazarus pascal , lex ( as C ) . <nl> - 8 . Update keywords for C , C + + , JavaScript , Python and YAML . <nl> + Notepad + + v6 . 9 . 2 new features and bug - fixes : <nl> <nl> + 1 . Add most wanted feature : Log Mornitoring ( tail - f ) . <nl> + 2 . Add new feature : Find in Finder . <nl> + 3 . Fix status bar display bug in high dpi environment . <nl> + 4 . Fix open in explorer problem while path contain unusual characters . <nl> + 5 . Fix smart highlighter issue after zoom or code folding change . <nl> <nl> Included plugins : <nl> <nl> mmm a / PowerEditor / installer / nppSetup . nsi <nl> ppp b / PowerEditor / installer / nppSetup . nsi <nl> <nl> ; Define the application name <nl> ! define APPNAME " Notepad + + " <nl> <nl> - ! define APPVERSION " 6 . 9 " <nl> + ! define APPVERSION " 6 . 9 . 2 " <nl> ! define APPNAMEANDVERSION " $ { APPNAME } v $ { APPVERSION } " <nl> ! define VERSION_MAJOR 6 <nl> - ! define VERSION_MINOR 9 <nl> + ! define VERSION_MINOR 92 <nl> <nl> ! define APPWEBSITE " http : / / notepad - plus - plus . org / " <nl> <nl> mmm a / PowerEditor / src / resource . h <nl> ppp b / PowerEditor / src / resource . 
h <nl> <nl> # pragma once <nl> <nl> <nl> - # define NOTEPAD_PLUS_VERSION TEXT ( " Notepad + + v6 . 9 " ) <nl> + # define NOTEPAD_PLUS_VERSION TEXT ( " Notepad + + v6 . 9 . 2 " ) <nl> <nl> / / should be X . Y : ie . if VERSION_DIGITALVALUE = = 4 , 7 , 1 , 0 , then X = 4 , Y = 71 <nl> / / ex : # define VERSION_VALUE TEXT ( " 5 . 63 \ 0 " ) <nl> - # define VERSION_VALUE TEXT ( " 6 . 9 \ 0 " ) <nl> - # define VERSION_DIGITALVALUE 6 , 9 , 0 , 0 <nl> + # define VERSION_VALUE TEXT ( " 6 . 92 \ 0 " ) <nl> + # define VERSION_DIGITALVALUE 6 , 9 , 2 , 0 <nl> <nl> <nl> <nl>
|
[RELEASE] Notepad++ 6.9.2 release
|
notepad-plus-plus/notepad-plus-plus
|
52392a0b8129881a1a81fcfc8af415c508ed4729
|
2016-05-17T23:47:03Z
|
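The installer diff above encodes version 6.9.2 as `VERSION_MAJOR 6` / `VERSION_MINOR 92`, matching the resource.h comment that the digital value should be read as X.Y with the trailing components concatenated into Y. A small sketch of that convention (the helper name is hypothetical, not part of the Notepad++ build):

```python
def nsis_version_fields(version):
    """Map a dotted version string onto the installer's two numeric
    fields: the first component becomes the major field, and all
    trailing components are concatenated into the minor field
    (so "6.9" -> (6, 9) and "6.9.2" -> (6, 92))."""
    parts = version.split(".")
    return int(parts[0]), int("".join(parts[1:]))
```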
mmm a / xbmc / cores / VideoPlayer / VideoPlayerAudio . cpp <nl> ppp b / xbmc / cores / VideoPlayer / VideoPlayerAudio . cpp <nl> int CVideoPlayerAudio : : DecodeFrame ( DVDAudioFrame & audioframe ) <nl> <nl> / / consider stream stalled if queue is empty <nl> / / we can ' t sync audio to clock with an empty queue <nl> - if ( ALLOW_AUDIO ( m_speed ) ) <nl> + if ( ALLOW_AUDIO ( m_speed ) & & ! m_stalled ) <nl> { <nl> timeout = 0 ; <nl> } <nl>
|
VideoPlayer: fix busy loop in audio if stream is stalled
|
xbmc/xbmc
|
0e6e1c1cbdc48f9e1c4759676717880a716762be
|
2015-12-25T08:22:58Z
|
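The one-line VideoPlayerAudio fix above adds `&& !m_stalled`, so an empty (stalled) queue no longer forces a zero timeout and a busy poll. A hedged sketch of the corrected decision (the non-zero default wait value is illustrative, not taken from the XBMC source):

```python
def decode_timeout(allow_audio, stalled, default_timeout=10):
    """Pick the queue-wait timeout: busy-poll (timeout 0) only when
    audio playback is allowed AND the stream is not stalled;
    otherwise keep a non-zero wait so a stalled stream cannot spin
    in a busy loop."""
    if allow_audio and not stalled:
        return 0
    return default_timeout
```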
mmm a / tensorflow / python / compiler / tensorrt / BUILD <nl> ppp b / tensorflow / python / compiler / tensorrt / BUILD <nl> tf_py_wrap_cc ( <nl> name = " wrap_conversion " , <nl> srcs = [ " trt_conversion . i " ] , <nl> copts = tf_copts ( ) , <nl> - swig_includes = [ <nl> - " / / tensorflow / python : platform / base . i " , <nl> - ] , <nl> deps = [ <nl> " / / tensorflow / compiler / tf2tensorrt : py_utils " , <nl> " / / tensorflow / compiler / tf2tensorrt : trt_conversion " , <nl> mmm a / tensorflow / python / compiler / tensorrt / trt_conversion . i <nl> ppp b / tensorflow / python / compiler / tensorrt / trt_conversion . i <nl> limitations under the License . <nl> % { <nl> # define SWIG_FILE_WITH_INIT <nl> % } <nl> - % include " std_string . i " <nl> - % include " tensorflow / python / platform / base . i " <nl> <nl> % { <nl> struct version_struct { <nl> PyObject * version_helper ( version_struct * in ) { <nl> <nl> % } <nl> <nl> - _LIST_OUTPUT_TYPEMAP ( int , PyLong_FromLong ) ; <nl> - <nl> % typemap ( out ) version_struct { <nl> PyObject * tuple = version_helper ( & $ 1 ) ; <nl> if ( ! tuple ) SWIG_fail ; <nl> _LIST_OUTPUT_TYPEMAP ( int , PyLong_FromLong ) ; <nl> # include " tensorflow / compiler / tf2tensorrt / utils / py_utils . h " <nl> % } <nl> <nl> - % ignoreall <nl> - % unignore get_linked_tensorrt_version ; <nl> - % unignore get_loaded_tensorrt_version ; <nl> - % unignore is_tensorrt_enabled ; <nl> + % ignore " " ; <nl> + % rename ( " % s " ) get_linked_tensorrt_version ; <nl> + % rename ( " % s " ) get_loaded_tensorrt_version ; <nl> + % rename ( " % s " ) is_tensorrt_enabled ; <nl> <nl> % { <nl> <nl> version_struct get_linked_tensorrt_version ( ) ; <nl> version_struct get_loaded_tensorrt_version ( ) ; <nl> bool is_tensorrt_enabled ( ) ; <nl> <nl> - % unignoreall <nl> + % rename ( " % s " ) " " ; <nl>
|
Cleanup dependencies: let trt_conversion.i not depend on base.i.
|
tensorflow/tensorflow
|
dafbe3b5f5b372b226aed9bcd533ee0d92f73fac
|
2019-03-11T22:38:55Z
|
mmm a / folly / container / detail / F14Table . h <nl> ppp b / folly / container / detail / F14Table . h <nl> class F14HashToken final { <nl> F14HashToken ( ) = default ; <nl> <nl> private : <nl> - using HashPair = std : : pair < std : : size_t , uint8_t > ; <nl> + using HashPair = std : : pair < std : : size_t , std : : size_t > ; <nl> <nl> explicit F14HashToken ( HashPair hp ) : hp_ ( hp ) { } <nl> explicit operator HashPair ( ) const { <nl> struct alignas ( kRequiredVectorAlignment ) F14Chunk { <nl> / / / / / / / / <nl> / / Tag filtering using NEON intrinsics <nl> <nl> - SparseMaskIter tagMatchIter ( uint8_t needle ) const { <nl> - FOLLY_SAFE_DCHECK ( ( needle & 0x80 ) ! = 0 , " " ) ; <nl> + SparseMaskIter tagMatchIter ( std : : size_t needle ) const { <nl> + FOLLY_SAFE_DCHECK ( needle > = 0x80 & & needle < 0x100 , " " ) ; <nl> uint8x16_t tagV = vld1q_u8 ( & tags_ [ 0 ] ) ; <nl> - auto needleV = vdupq_n_u8 ( needle ) ; <nl> + auto needleV = vdupq_n_u8 ( static_cast < uint8_t > ( needle ) ) ; <nl> auto eqV = vceqq_u8 ( tagV , needleV ) ; <nl> / / get info from every byte into the bottom half of every uint16_t <nl> / / by shifting right 4 , then round to get it into a 64 - bit vector <nl> struct alignas ( kRequiredVectorAlignment ) F14Chunk { <nl> return static_cast < TagVector const * > ( static_cast < void const * > ( & tags_ [ 0 ] ) ) ; <nl> } <nl> <nl> - SparseMaskIter tagMatchIter ( uint8_t needle ) const { <nl> - FOLLY_SAFE_DCHECK ( ( needle & 0x80 ) ! = 0 , " " ) ; <nl> + SparseMaskIter tagMatchIter ( std : : size_t needle ) const { <nl> + FOLLY_SAFE_DCHECK ( needle > = 0x80 & & needle < 0x100 , " " ) ; <nl> auto tagV = _mm_load_si128 ( tagVector ( ) ) ; <nl> - auto needleV = _mm_set1_epi8 ( needle ) ; <nl> + <nl> + / / TRICKY ! 
It may seem strange to have a std : : size_t needle and narrow <nl> + / / it at the last moment , rather than making HashPair : : second be a <nl> + / / uint8_t , but the latter choice sometimes leads to a performance <nl> + / / problem . <nl> + / / <nl> + / / On architectures with SSE2 but not AVX2 , _mm_set1_epi8 expands <nl> + / / to multiple instructions . One of those is a MOVD of either 4 or <nl> + / / 8 byte width . Only the bottom byte of that move actually affects <nl> + / / the result , but if a 1 - byte needle has been spilled then this will <nl> + / / be a 4 byte load . GCC 5 . 5 has been observed to reload needle <nl> + / / ( or perhaps fuse a reload and part of a previous static_cast ) <nl> + / / needle using a MOVZX with a 1 byte load in parallel with the MOVD . <nl> + / / This combination causes a failure of store - to - load forwarding , <nl> + / / which has a big performance penalty ( 60 nanoseconds per find on <nl> + / / a microbenchmark ) . Keeping needle > = 4 bytes avoids the problem <nl> + / / and also happens to result in slightly more compact assembly . <nl> + auto needleV = _mm_set1_epi8 ( static_cast < uint8_t > ( needle ) ) ; <nl> auto eqV = _mm_cmpeq_epi8 ( tagV , needleV ) ; <nl> auto mask = _mm_movemask_epi8 ( eqV ) & kFullMask ; <nl> return SparseMaskIter { mask } ; <nl> class F14Table : public Policy { <nl> / / 64 - bit <nl> static HashPair splitHash ( std : : size_t hash ) { <nl> static_assert ( sizeof ( std : : size_t ) = = sizeof ( uint64_t ) , " " ) ; <nl> - uint8_t tag ; <nl> + std : : size_t tag ; <nl> if ( ! Policy : : isAvalanchingHasher ( ) ) { <nl> # if FOLLY_SSE_PREREQ ( 4 , 2 ) <nl> / / SSE4 . 
2 CRC <nl> - auto c = _mm_crc32_u64 ( 0 , hash ) ; <nl> - tag = static_cast < uint8_t > ( ~ ( c > > 25 ) ) ; <nl> + std : : size_t c = _mm_crc32_u64 ( 0 , hash ) ; <nl> + tag = ( c > > 24 ) | 0x80 ; <nl> hash + = c ; <nl> # elif __ARM_FEATURE_CRC32 <nl> / / CRC is optional on armv8 ( - march = armv8 - a + crc ) , standard on armv8 . 1 <nl> - auto c = __crc32cd ( 0 , hash ) ; <nl> - tag = static_cast < uint8_t > ( ~ ( c > > 25 ) ) ; <nl> + std : : size_t c = __crc32cd ( 0 , hash ) ; <nl> + tag = ( c > > 24 ) | 0x80 ; <nl> hash + = c ; <nl> # else <nl> / / The mixer below is not fully avalanching for all 64 bits of <nl> class F14Table : public Policy { <nl> # endif <nl> hash = hi ^ lo ; <nl> hash * = kMul ; <nl> - tag = static_cast < uint8_t > ( hash > > 15 ) | 0x80 ; <nl> + tag = ( ( hash > > 15 ) & 0x7f ) | 0x80 ; <nl> hash > > = 22 ; <nl> # endif <nl> } else { <nl>
|
fix bad codegen by widening needle
|
facebook/folly
|
cc5a8cf0e5d2289a9172a206982942c6234bf65a
|
2018-08-03T20:20:52Z
|
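The `splitHash` change above keeps the tag in a full-size register (to avoid the store-to-load forwarding stall described in the commit) and forces it into the range [0x80, 0x100) with `(c >> 24) | 0x80`. A sketch of that tag derivation, taking the 32-bit CRC value as input (this is not folly's public API):

```python
def f14_tag(crc32_value):
    """Derive the F14 tag from a 32-bit CRC as in the patched
    SSE4.2 / ARM CRC paths: take the CRC's top byte and set bit 7,
    so every tag lies in [0x80, 0x100) and can never collide with
    an empty slot's 0x00 tag byte."""
    return ((crc32_value >> 24) & 0xFF) | 0x80
```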
mmm a / tensorflow / compiler / mlir / lite / ir / tfl_ops . cc <nl> ppp b / tensorflow / compiler / mlir / lite / ir / tfl_ops . cc <nl> limitations under the License . <nl> <nl> # include " tensorflow / compiler / mlir / lite / ir / tfl_ops . h " <nl> <nl> + # include < cstdint > <nl> + <nl> # include " llvm / ADT / APFloat . h " <nl> # include " llvm / ADT / APInt . h " <nl> # include " mlir / IR / Attributes . h " / / TF : local_config_mlir <nl> limitations under the License . <nl> # include " mlir / IR / StandardTypes . h " / / TF : local_config_mlir <nl> # include " mlir / IR / TypeUtilities . h " / / TF : local_config_mlir <nl> # include " mlir / StandardOps / Ops . h " / / TF : local_config_mlir <nl> + # include " mlir / Support / LLVM . h " / / TF : local_config_mlir <nl> # include " tensorflow / compiler / mlir / tensorflow / ir / tf_types . h " <nl> <nl> namespace mlir { <nl> OpFoldResult RangeOp : : fold ( ArrayRef < Attribute > operands ) { <nl> <nl> return nullptr ; <nl> } <nl> + <nl> + / / = = = mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - = = = / / <nl> + / / TransposeOp <nl> + / / = = = mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - = = = / / <nl> + <nl> + namespace { <nl> + <nl> + / / Computes the permutation of a constant ` input_tensor ` according to ` perm ` . <nl> + / / The function recursively traverses the dimensions of the output tensor in <nl> + / / a row - major order and writes the value in the output tensor into <nl> + / / ` new_values ` . <nl> + void ComputePermutation ( ElementsAttr input_tensor , ArrayRef < int32_t > perm , <nl> + ArrayRef < int64_t > output_shape , int num_dimensions , <nl> + int output_axis , std : : vector < uint64_t > * input_indices , <nl> + std : : vector < Attribute > * new_values ) { <nl> + / / Refer to the implementation of ` Transpose ` function in <nl> + / / tensorflow / lite / kernels / internal / reference / reference_ops . 
h <nl> + assert ( output_axis < num_dimensions ) ; <nl> + const int input_axis = perm [ output_axis ] ; <nl> + for ( int i = 0 ; i < output_shape [ output_axis ] ; + + i ) { <nl> + / / Update the input indices on ` input_axis ` . <nl> + input_indices - > at ( input_axis ) = i ; <nl> + / / Write the value from ` input_tensor ` if it is the last axis or <nl> + / / recurse into the next axis . <nl> + const bool is_last_axis = output_axis = = num_dimensions - 1 ; <nl> + if ( is_last_axis ) { <nl> + new_values - > push_back ( input_tensor . getValue ( * input_indices ) ) ; <nl> + } else { <nl> + ComputePermutation ( input_tensor , perm , output_shape , num_dimensions , <nl> + output_axis + 1 , input_indices , new_values ) ; <nl> + } <nl> + } <nl> + } <nl> + <nl> + } / / namespace <nl> + <nl> + OpFoldResult TransposeOp : : fold ( ArrayRef < Attribute > operands ) { <nl> + assert ( operands . size ( ) = = 2 ) ; <nl> + auto input_tensor = operands [ 0 ] . dyn_cast_or_null < ElementsAttr > ( ) ; <nl> + auto perm_tensor = operands [ 1 ] . dyn_cast_or_null < ElementsAttr > ( ) ; <nl> + if ( ! input_tensor | | ! perm_tensor ) return nullptr ; <nl> + <nl> + / / Do not try to fold elements attr of a quant type because <nl> + / / DenseElementsAttr does not support it . <nl> + if ( ! getType ( ) . cast < ShapedType > ( ) . getElementType ( ) . isIntOrFloat ( ) ) <nl> + return nullptr ; <nl> + <nl> + assert ( perm_tensor . getType ( ) . getRank ( ) = = 1 ) ; <nl> + const int num_dimensions = input_tensor . getType ( ) . getRank ( ) ; <nl> + assert ( perm_tensor . getType ( ) . getNumElements ( ) = = num_dimensions ) ; <nl> + <nl> + ArrayRef < int64_t > input_shape = input_tensor . getType ( ) . getShape ( ) ; <nl> + auto output_type = getType ( ) . cast < ShapedType > ( ) ; <nl> + <nl> + SmallVector < int32_t , 4 > perm ; <nl> + SmallVector < int64_t , 4 > output_shape ; <nl> + for ( int i = 0 ; i < num_dimensions ; + + i ) { <nl> + perm . push_back ( perm_tensor . 
getValue ( { static_cast < uint64_t > ( i ) } ) <nl> + . cast < IntegerAttr > ( ) <nl> + . getInt ( ) ) ; <nl> + output_shape . push_back ( input_shape [ perm [ i ] ] ) ; <nl> + <nl> + / / Check that the derived output shape matches the static shape . <nl> + assert ( ! output_type . hasStaticShape ( ) | | <nl> + output_type . getShape ( ) [ i ] = = output_shape [ i ] ) ; <nl> + } <nl> + <nl> + std : : vector < Attribute > new_values ; <nl> + new_values . reserve ( input_tensor . getType ( ) . getNumElements ( ) ) ; <nl> + std : : vector < uint64_t > input_indices ( num_dimensions ) ; <nl> + ComputePermutation ( input_tensor , perm , output_shape , num_dimensions , <nl> + / * output_axis = * / 0 , & input_indices , & new_values ) ; <nl> + auto result_type = <nl> + RankedTensorType : : get ( output_shape , output_type . getElementType ( ) ) ; <nl> + return DenseElementsAttr : : get ( result_type , new_values ) ; <nl> + } <nl> + <nl> / / = = = mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - = = = / / <nl> / / TableGen ' d op method definitions <nl> / / = = = mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm - = = = / / <nl> mmm a / tensorflow / compiler / mlir / lite / ir / tfl_ops . td <nl> ppp b / tensorflow / compiler / mlir / lite / ir / tfl_ops . td <nl> def TFL_TransposeOp : TFL_Op < " transpose " , <nl> let results = ( outs <nl> AnyTensor : $ y <nl> ) ; <nl> + <nl> + let hasFolder = 1 ; <nl> } <nl> <nl> def TFL_UnpackOp : TFL_Op < " unpack " , [ NoSideEffect ] > { <nl> mmm a / tensorflow / compiler / mlir / lite / tests / const - fold . mlir <nl> ppp b / tensorflow / compiler / mlir / lite / tests / const - fold . mlir <nl> func @ range_float_nonzero_base ( ) - > tensor < ? xf32 > { <nl> % 0 = " tfl . range " ( % cst , % cst_1 , % cst_2 ) : ( tensor < f32 > , tensor < f32 > , tensor < f32 > ) - > tensor < ? xf32 > <nl> return % 0 : tensor < ? 
xf32 > <nl> } <nl> + <nl> + / / CHECK - LABEL : @ transpose_no_fold <nl> + func @ transpose_no_fold ( % arg0 : tensor < 2xi32 > ) - > tensor < 2x2xi32 > { <nl> + % cst = constant dense < [ [ 0 , 1 ] , [ 2 , 3 ] ] > : tensor < 2x2xi32 > <nl> + <nl> + / / CHECK : tfl . transpose <nl> + % 0 = " tfl . transpose " ( % cst , % arg0 ) : ( tensor < 2x2xi32 > , tensor < 2xi32 > ) - > tensor < 2x2xi32 > <nl> + return % 0 : tensor < 2x2xi32 > <nl> + } <nl> + <nl> + / / CHECK - LABEL : @ transpose_1d <nl> + / / Basic 1D identity <nl> + func @ transpose_1d ( ) - > tensor < 3xi32 > { <nl> + % cst = constant dense < [ 1 , 2 , 3 ] > : tensor < 3xi32 > <nl> + % cst_perm = constant dense < 0 > : tensor < 1xi32 > <nl> + <nl> + / / CHECK : [ [ cst : % . * ] ] = constant dense < { { \ [ } } 1 , 2 , 3 ] > : tensor < 3xi32 > <nl> + / / CHECK : return [ [ cst ] ] <nl> + % 0 = " tfl . transpose " ( % cst , % cst_perm ) : ( tensor < 3xi32 > , tensor < 1xi32 > ) - > tensor < 3xi32 > <nl> + return % 0 : tensor < 3xi32 > <nl> + } <nl> + <nl> + / / CHECK - LABEL : @ transpose_dynamic <nl> + func @ transpose_dynamic ( ) - > tensor < ? xi32 > { <nl> + % cst = constant dense < [ 1 , 2 , 3 ] > : tensor < 3xi32 > <nl> + % cst_perm = constant dense < 0 > : tensor < 1xi32 > <nl> + <nl> + / / CHECK : [ [ cst : % . * ] ] = " tfl . pseudo_const " ( ) { value = dense < { { \ [ } } 1 , 2 , 3 ] > : tensor < 3xi32 > } : ( ) - > tensor < ? xi32 > <nl> + / / CHECK : return [ [ cst ] ] <nl> + % 0 = " tfl . transpose " ( % cst , % cst_perm ) : ( tensor < 3xi32 > , tensor < 1xi32 > ) - > tensor < ? xi32 > <nl> + return % 0 : tensor < ? xi32 > <nl> + } <nl> + <nl> + / / CHECK - LABEL : @ transpose_2d <nl> + func @ transpose_2d ( ) - > tensor < 2x2xi32 > { <nl> + % cst = constant dense < [ [ 0 , 1 ] , [ 2 , 3 ] ] > : tensor < 2x2xi32 > <nl> + % cst_perm = constant dense < [ 1 , 0 ] > : tensor < 2xi32 > <nl> + <nl> + / / CHECK : [ [ cst : % . 
* ] ] = constant dense < { { \ [ \ [ } } 0 , 2 ] , { { \ [ } } 1 , 3 ] ] > : tensor < 2x2xi32 > <nl> + / / CHECK : return [ [ cst ] ] <nl> + % 0 = " tfl . transpose " ( % cst , % cst_perm ) : ( tensor < 2x2xi32 > , tensor < 2xi32 > ) - > tensor < 2x2xi32 > <nl> + return % 0 : tensor < 2x2xi32 > <nl> + } <nl> + <nl> + / / CHECK - LABEL : @ transpose_2d_identity <nl> + func @ transpose_2d_identity ( ) - > tensor < 2x2xi32 > { <nl> + % cst = constant dense < [ [ 0 , 1 ] , [ 2 , 3 ] ] > : tensor < 2x2xi32 > <nl> + % cst_perm = constant dense < [ 0 , 1 ] > : tensor < 2xi32 > <nl> + <nl> + / / CHECK : [ [ cst : % . * ] ] = constant dense < { { \ [ \ [ } } 0 , 1 ] , { { \ [ } } 2 , 3 ] ] > : tensor < 2x2xi32 > <nl> + / / CHECK : return [ [ cst ] ] <nl> + % 0 = " tfl . transpose " ( % cst , % cst_perm ) : ( tensor < 2x2xi32 > , tensor < 2xi32 > ) - > tensor < 2x2xi32 > <nl> + return % 0 : tensor < 2x2xi32 > <nl> + } <nl> + <nl> + / / CHECK - LABEL : @ transpose_3d <nl> + / / A test case adopted from TransposeTest . Test3DInputConstTensor in <nl> + / / tensorflow / lite / kernels / transpose_test . cc <nl> + func @ transpose_3d ( ) - > tensor < 4x2x3xi32 > { <nl> + % cst = constant dense < [ [ [ 0 , 1 , 2 , 3 ] , [ 4 , 5 , 6 , 7 ] , [ 8 , 9 , 10 , 11 ] ] , [ [ 12 , 13 , 14 , 15 ] , [ 16 , 17 , 18 , 19 ] , [ 20 , 21 , 22 , 23 ] ] ] > : tensor < 2x3x4xi32 > <nl> + % cst_perm = constant dense < [ 2 , 0 , 1 ] > : tensor < 3xi32 > <nl> + <nl> + / / CHECK : [ [ cst : % . * ] ] = constant dense < { { \ [ \ [ \ [ } } 0 , 4 , 8 ] , { { \ [ } } 12 , 16 , 20 ] ] , { { \ [ \ [ } } 1 , 5 , 9 ] , { { \ [ } } 13 , 17 , 21 ] ] , { { \ [ \ [ } } 2 , 6 , 10 ] , { { \ [ } } 14 , 18 , 22 ] ] , { { \ [ \ [ } } 3 , 7 , 11 ] , { { \ [ } } 15 , 19 , 23 ] ] ] > : tensor < 4x2x3xi32 > <nl> + / / CHECK : return [ [ cst ] ] <nl> + % 0 = " tfl . transpose " ( % cst , % cst_perm ) : ( tensor < 2x3x4xi32 > , tensor < 3xi32 > ) - > tensor < 4x2x3xi32 > <nl> + return % 0 : tensor < 4x2x3xi32 > <nl> + } <nl>
|
Add a constant folder for tfl . transpose
|
tensorflow/tensorflow
|
bce3ce0c0b5b49cdb85f7cf5a8e12bc18b4b6da4
|
2019-08-02T00:49:52Z
|
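The `ComputePermutation` comment in the tfl.transpose folder above describes a recursive row-major traversal of the output tensor. The same algorithm can be sketched as a standalone Python model (a hypothetical re-implementation over flat lists for illustration, not the MLIR code itself):

```python
# Hypothetical Python model of the transpose constant folder: walk the
# output tensor in row-major order; at each output axis, the input axis
# being indexed is perm[output_axis].
def compute_permutation(flat_input, input_shape, perm, output_shape,
                        output_axis, input_indices, new_values):
    input_axis = perm[output_axis]
    for i in range(output_shape[output_axis]):
        input_indices[input_axis] = i
        if output_axis == len(output_shape) - 1:
            # Last axis: flatten the multi-index row-major and copy the value.
            flat = 0
            for idx, dim in zip(input_indices, input_shape):
                flat = flat * dim + idx
            new_values.append(flat_input[flat])
        else:
            compute_permutation(flat_input, input_shape, perm, output_shape,
                                output_axis + 1, input_indices, new_values)

def transpose_fold(flat_input, input_shape, perm):
    output_shape = [input_shape[p] for p in perm]
    new_values = []
    compute_permutation(flat_input, input_shape, perm, output_shape,
                        0, [0] * len(input_shape), new_values)
    return new_values, output_shape
```

On the 2x2 test case from const-fold.mlir above, `transpose_fold([0, 1, 2, 3], [2, 2], [1, 0])` produces the values `[0, 2, 1, 3]`, matching the `dense<[[0, 2], [1, 3]]>` CHECK line.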
mmm a / atom / browser / api / atom_api_session . cc <nl> ppp b / atom / browser / api / atom_api_session . cc <nl> <nl> # include " atom / browser / api / atom_api_cookies . h " <nl> # include " atom / browser / atom_browser_context . h " <nl> # include " atom / common / native_mate_converters / gurl_converter . h " <nl> - # include " base / thread_task_runner_handle . h " <nl> + # include " base / files / file_path . h " <nl> + # include " base / prefs / pref_service . h " <nl> # include " base / strings / string_util . h " <nl> + # include " base / thread_task_runner_handle . h " <nl> + # include " chrome / common / pref_names . h " <nl> # include " content / public / browser / browser_thread . h " <nl> # include " content / public / browser / storage_partition . h " <nl> # include " native_mate / callback . h " <nl> void Session : : SetProxy ( const std : : string & proxy , <nl> base : : Bind ( & SetProxyInIO , base : : Unretained ( getter ) , proxy , callback ) ) ; <nl> } <nl> <nl> + void Session : : SetDownloadPath ( const std : : string & path ) { <nl> + browser_context_ - > prefs ( ) - > SetFilePath ( prefs : : kDownloadDefaultDirectory , <nl> + base : : FilePath ( path ) ) ; <nl> + } <nl> + <nl> v8 : : Local < v8 : : Value > Session : : Cookies ( v8 : : Isolate * isolate ) { <nl> if ( cookies_ . IsEmpty ( ) ) { <nl> auto handle = atom : : api : : Cookies : : Create ( isolate , browser_context_ ) ; <nl> mate : : ObjectTemplateBuilder Session : : GetObjectTemplateBuilder ( <nl> . SetMethod ( " clearCache " , & Session : : ClearCache ) <nl> . SetMethod ( " clearStorageData " , & Session : : ClearStorageData ) <nl> . SetMethod ( " setProxy " , & Session : : SetProxy ) <nl> + . SetMethod ( " setDownloadPath " , & Session : : SetDownloadPath ) <nl> . SetProperty ( " cookies " , & Session : : Cookies ) ; <nl> } <nl> <nl> mmm a / atom / browser / api / atom_api_session . h <nl> ppp b / atom / browser / api / atom_api_session . 
h <nl> class Session : public mate : : TrackableObject < Session > { <nl> void ClearCache ( const net : : CompletionCallback & callback ) ; <nl> void ClearStorageData ( mate : : Arguments * args ) ; <nl> void SetProxy ( const std : : string & proxy , const base : : Closure & callback ) ; <nl> + void SetDownloadPath ( const std : : string & path ) ; <nl> v8 : : Local < v8 : : Value > Cookies ( v8 : : Isolate * isolate ) ; <nl> <nl> v8 : : Global < v8 : : Value > cookies_ ; <nl> mmm a / atom / browser / atom_browser_context . cc <nl> ppp b / atom / browser / atom_browser_context . cc <nl> content : : BrowserPluginGuestManager * AtomBrowserContext : : GetGuestManager ( ) { <nl> void AtomBrowserContext : : RegisterPrefs ( PrefRegistrySimple * pref_registry ) { <nl> pref_registry - > RegisterFilePathPref ( prefs : : kSelectFileLastDirectory , <nl> base : : FilePath ( ) ) ; <nl> + pref_registry - > RegisterFilePathPref ( prefs : : kDownloadDefaultDirectory , <nl> + base : : FilePath ( ) ) ; <nl> } <nl> <nl> } / / namespace atom <nl> mmm a / atom / browser / atom_download_manager_delegate . cc <nl> ppp b / atom / browser / atom_download_manager_delegate . cc <nl> <nl> <nl> # include < string > <nl> <nl> + # include " atom / browser / atom_browser_context . h " <nl> # include " atom / browser / native_window . h " <nl> # include " atom / browser / ui / file_dialog . h " <nl> # include " base / bind . h " <nl> # include " base / files / file_util . h " <nl> + # include " base / prefs / pref_service . h " <nl> + # include " chrome / common / pref_names . h " <nl> # include " content / public / browser / browser_context . h " <nl> # include " content / public / browser / browser_thread . h " <nl> # include " content / public / browser / download_manager . h " <nl> void AtomDownloadManagerDelegate : : OnDownloadPathGenerated ( <nl> return ; <nl> } <nl> <nl> + / / Remember the last selected download directory .
<nl> + AtomBrowserContext * browser_context = static_cast < AtomBrowserContext * > ( <nl> + download_manager_ - > GetBrowserContext ( ) ) ; <nl> + browser_context - > prefs ( ) - > SetFilePath ( prefs : : kDownloadDefaultDirectory , <nl> + path . DirName ( ) ) ; <nl> callback . Run ( path , <nl> content : : DownloadItem : : TARGET_DISPOSITION_PROMPT , <nl> content : : DOWNLOAD_DANGER_TYPE_NOT_DANGEROUS , path ) ; <nl> bool AtomDownloadManagerDelegate : : DetermineDownloadTarget ( <nl> const content : : DownloadTargetCallback & callback ) { <nl> DCHECK_CURRENTLY_ON ( content : : BrowserThread : : UI ) ; <nl> <nl> + AtomBrowserContext * browser_context = static_cast < AtomBrowserContext * > ( <nl> + download_manager_ - > GetBrowserContext ( ) ) ; <nl> + default_download_path_ = browser_context - > prefs ( ) - > GetFilePath ( <nl> + prefs : : kDownloadDefaultDirectory ) ; <nl> + / / If users didn ' t set download path , use ' Downloads ' directory by default . <nl> if ( default_download_path_ . empty ( ) ) { <nl> auto path = download_manager_ - > GetBrowserContext ( ) - > GetPath ( ) ; <nl> default_download_path_ = path . Append ( FILE_PATH_LITERAL ( " Downloads " ) ) ; <nl> mmm a / chromium_src / chrome / common / pref_names . cc <nl> ppp b / chromium_src / chrome / common / pref_names . cc <nl> <nl> namespace prefs { <nl> <nl> const char kSelectFileLastDirectory [ ] = " selectfile . last_directory " ; <nl> + const char kDownloadDefaultDirectory [ ] = " download . default_directory " ; <nl> <nl> } / / namespace prefs <nl> mmm a / chromium_src / chrome / common / pref_names . h <nl> ppp b / chromium_src / chrome / common / pref_names . h <nl> <nl> namespace prefs { <nl> <nl> extern const char kSelectFileLastDirectory [ ] ; <nl> + extern const char kDownloadDefaultDirectory [ ] ; <nl> <nl> } / / namespace prefs <nl>
|
Add ` session . setDownloadPath ` API .
|
electron/electron
|
fef53d18c4b694799b775bb53a764d8c803ceb2b
|
2015-07-26T08:51:27Z
|
mmm a / src / qt / guiutil . cpp <nl> ppp b / src / qt / guiutil . cpp <nl> void setupAddressWidget ( QValidatedLineEdit * widget , QWidget * parent ) <nl> widget - > setCheckValidator ( new BitcoinAddressCheckValidator ( parent ) ) ; <nl> } <nl> <nl> - void setupAmountWidget ( QLineEdit * widget , QWidget * parent ) <nl> - { <nl> - QDoubleValidator * amountValidator = new QDoubleValidator ( parent ) ; <nl> - amountValidator - > setDecimals ( 8 ) ; <nl> - amountValidator - > setBottom ( 0 . 0 ) ; <nl> - widget - > setValidator ( amountValidator ) ; <nl> - widget - > setAlignment ( Qt : : AlignRight | Qt : : AlignVCenter ) ; <nl> - } <nl> - <nl> bool parseBitcoinURI ( const QUrl & uri , SendCoinsRecipient * out ) <nl> { <nl> / / return if URI is not valid or is no bitcoin : URI <nl> mmm a / src / qt / guiutil . h <nl> ppp b / src / qt / guiutil . h <nl> namespace GUIUtil <nl> / / Return a monospace font <nl> QFont fixedPitchFont ( ) ; <nl> <nl> - / / Set up widgets for address and amounts <nl> + / / Set up widget for address <nl> void setupAddressWidget ( QValidatedLineEdit * widget , QWidget * parent ) ; <nl> - void setupAmountWidget ( QLineEdit * widget , QWidget * parent ) ; <nl> <nl> / / Parse " bitcoin : " URI into recipient object , return true on successful parsing <nl> bool parseBitcoinURI ( const QUrl & uri , SendCoinsRecipient * out ) ; <nl>
|
Qt : Remove unused method setupAmountWidget ( . . . )
|
bitcoin/bitcoin
|
3a0f8d795a96bdab4fffad7fcb00f73c615cf817
|
2018-03-25T19:15:08Z
|
mmm a / dbms / src / Functions / GatherUtils / Algorithms . h <nl> ppp b / dbms / src / Functions / GatherUtils / Algorithms . h <nl> void resizeDynamicSize ( ArraySource & & array_source , ValueSource & & value_source , <nl> while ( ! sink . isEnd ( ) ) <nl> { <nl> size_t row_num = array_source . rowNum ( ) ; <nl> - bool has_size = ! size_null_map | | ( size_null_map & & ( * size_null_map ) [ row_num ] ) ; <nl> + bool has_size = ! size_null_map | | ( * size_null_map ) [ row_num ] ; <nl> <nl> if ( has_size ) <nl> { <nl>
|
Simplified expression ( suggested by PVS - Studio )
|
ClickHouse/ClickHouse
|
5be06d855698af9cec7fb033a34a3e4c8902b425
|
2019-04-10T20:05:25Z
|
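The PVS-Studio simplification above rests on the boolean identity `!a || (a && b) == !a || b`. An exhaustive check of that identity (illustrative Python, not project code — `has_map`/`value` stand in for `size_null_map` and `(*size_null_map)[row_num]`):

```python
# Verify the identity behind the ClickHouse change over all four input
# combinations: !a || (a && b) is equivalent to !a || b.
def original(has_map, value):
    return (not has_map) or (has_map and value)

def simplified(has_map, value):
    return (not has_map) or value

all_equal = all(original(m, v) == simplified(m, v)
                for m in (False, True) for v in (False, True))
```

Note that short-circuiting is also preserved: in both forms the second operand is only evaluated when `has_map` is true.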
mmm a / src / MainWindow . cpp <nl> ppp b / src / MainWindow . cpp <nl> void MainWindow : : httpresponse ( QNetworkReply * reply ) <nl> <nl> if ( reply - > error ( ) = = QNetworkReply : : NoError ) <nl> { <nl> + / / Check for redirect <nl> + QVariant possibleRedirectUrl = <nl> + reply - > attribute ( QNetworkRequest : : RedirectionTargetAttribute ) ; <nl> + <nl> + if ( ! possibleRedirectUrl . toUrl ( ) . isEmpty ( ) ) <nl> + { <nl> + m_NetworkManager - > get ( QNetworkRequest ( possibleRedirectUrl . toUrl ( ) ) ) ; <nl> + return ; <nl> + } <nl> + <nl> / / first line of the currentrelease file contains a major . minor . patch version string <nl> QString sversion ( reply - > readLine ( ) ) ; <nl> <nl> QStringList versiontokens = sversion . split ( " . " ) ; <nl> + if ( versiontokens . size ( ) < 3 ) <nl> + return ; <nl> + <nl> int major = versiontokens [ 0 ] . toInt ( ) ; <nl> int minor = versiontokens [ 1 ] . toInt ( ) ; <nl> int patch = versiontokens [ 2 ] . toInt ( ) ; <nl>
|
updater : Fix a crash and handle redirects
|
sqlitebrowser/sqlitebrowser
|
26a23595b4ff285393949abaa6e55cafcff178fb
|
2014-04-28T20:12:17Z
|
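The crash fixed above came from indexing `versiontokens` without checking how many tokens the split produced. The defensive pattern, in Python terms (an illustrative sketch, not the Qt code):

```python
# Parse a "major.minor.patch" version line defensively: require at least
# three tokens before indexing, and reject non-numeric garbage (e.g. an
# HTML error page returned instead of the release file).
def parse_version(first_line):
    tokens = first_line.strip().split(".")
    if len(tokens) < 3:
        return None
    try:
        return tuple(int(t) for t in tokens[:3])
    except ValueError:
        return None
```

The redirect half of the fix follows the same principle: check `RedirectionTargetAttribute` and re-issue the request before assuming the body is the release file.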
mmm a / XBMC . xcodeproj / project . pbxproj <nl> ppp b / XBMC . xcodeproj / project . pbxproj <nl> <nl> F58E293911FFC103006F4D46 / * DVDInputStreamBluray . cpp in Sources * / = { isa = PBXBuildFile ; fileRef = F58E293711FFC103006F4D46 / * DVDInputStreamBluray . cpp * / ; } ; <nl> F592568810FBF2E100D2C91D / * ConvolutionKernels . cpp in Sources * / = { isa = PBXBuildFile ; fileRef = F592568710FBF2E100D2C91D / * ConvolutionKernels . cpp * / ; } ; <nl> F595994510E9F322004B58B3 / * DVDVideoCodecCrystalHD . cpp in Sources * / = { isa = PBXBuildFile ; fileRef = F595994410E9F322004B58B3 / * DVDVideoCodecCrystalHD . cpp * / ; } ; <nl> + F597B05B18A804E0005AADAE / * DVDVideoCodec . cpp in Sources * / = { isa = PBXBuildFile ; fileRef = F597B05A18A804E0005AADAE / * DVDVideoCodec . cpp * / ; } ; <nl> + F597B05C18A804E0005AADAE / * DVDVideoCodec . cpp in Sources * / = { isa = PBXBuildFile ; fileRef = F597B05A18A804E0005AADAE / * DVDVideoCodec . cpp * / ; } ; <nl> + F597B05D18A804E0005AADAE / * DVDVideoCodec . cpp in Sources * / = { isa = PBXBuildFile ; fileRef = F597B05A18A804E0005AADAE / * DVDVideoCodec . cpp * / ; } ; <nl> F59876C00FBA351D008EF4FB / * VideoReferenceClock . cpp in Sources * / = { isa = PBXBuildFile ; fileRef = F59876BF0FBA351D008EF4FB / * VideoReferenceClock . cpp * / ; } ; <nl> F59879080FBAA0C3008EF4FB / * QuartzCore . framework in Frameworks * / = { isa = PBXBuildFile ; fileRef = F59879070FBAA0C3008EF4FB / * QuartzCore . framework * / ; } ; <nl> F5987F050FBDF274008EF4FB / * DPMSSupport . cpp in Sources * / = { isa = PBXBuildFile ; fileRef = F5987F040FBDF274008EF4FB / * DPMSSupport . cpp * / ; } ; <nl> <nl> F592568710FBF2E100D2C91D / * ConvolutionKernels . cpp * / = { isa = PBXFileReference ; fileEncoding = 4 ; lastKnownFileType = sourcecode . cpp . cpp ; path = ConvolutionKernels . cpp ; sourceTree = " < group > " ; } ; <nl> F595994310E9F322004B58B3 / * DVDVideoCodecCrystalHD . 
h * / = { isa = PBXFileReference ; fileEncoding = 4 ; lastKnownFileType = sourcecode . c . h ; path = DVDVideoCodecCrystalHD . h ; sourceTree = " < group > " ; } ; <nl> F595994410E9F322004B58B3 / * DVDVideoCodecCrystalHD . cpp * / = { isa = PBXFileReference ; fileEncoding = 4 ; lastKnownFileType = sourcecode . cpp . cpp ; path = DVDVideoCodecCrystalHD . cpp ; sourceTree = " < group > " ; } ; <nl> + F597B05A18A804E0005AADAE / * DVDVideoCodec . cpp * / = { isa = PBXFileReference ; fileEncoding = 4 ; lastKnownFileType = sourcecode . cpp . cpp ; path = DVDVideoCodec . cpp ; sourceTree = " < group > " ; } ; <nl> F59876BE0FBA351D008EF4FB / * VideoReferenceClock . h * / = { isa = PBXFileReference ; fileEncoding = 4 ; lastKnownFileType = sourcecode . c . h ; path = VideoReferenceClock . h ; sourceTree = " < group > " ; } ; <nl> F59876BF0FBA351D008EF4FB / * VideoReferenceClock . cpp * / = { isa = PBXFileReference ; fileEncoding = 4 ; lastKnownFileType = sourcecode . cpp . cpp ; path = VideoReferenceClock . cpp ; sourceTree = " < group > " ; } ; <nl> F59879070FBAA0C3008EF4FB / * QuartzCore . framework * / = { isa = PBXFileReference ; lastKnownFileType = wrapper . framework ; name = QuartzCore . framework ; path = / System / Library / Frameworks / QuartzCore . framework ; sourceTree = " < absolute > " ; } ; <nl> <nl> F5F240EB110A4F76009126C6 / * CrystalHD . cpp * / , <nl> F5F240EC110A4F76009126C6 / * CrystalHD . h * / , <nl> E38E153B0D25F9F900618676 / * DllLibMpeg2 . h * / , <nl> + F597B05A18A804E0005AADAE / * DVDVideoCodec . cpp * / , <nl> E38E153C0D25F9F900618676 / * DVDVideoCodec . h * / , <nl> F595994410E9F322004B58B3 / * DVDVideoCodecCrystalHD . cpp * / , <nl> F595994310E9F322004B58B3 / * DVDVideoCodecCrystalHD . h * / , <nl> <nl> E38E20440D25F9FD00618676 / * DirectoryNodeTop100 . cpp in Sources * / , <nl> E38E20460D25F9FD00618676 / * DirectoryNodeYearAlbum . cpp in Sources * / , <nl> E38E20470D25F9FD00618676 / * DirectoryNodeYearSong . 
cpp in Sources * / , <nl> + F597B05B18A804E0005AADAE / * DVDVideoCodec . cpp in Sources * / , <nl> E38E20490D25F9FD00618676 / * QueryParams . cpp in Sources * / , <nl> E38E204A0D25F9FD00618676 / * MusicDatabaseDirectory . cpp in Sources * / , <nl> E38E204B0D25F9FD00618676 / * MusicSearchDirectory . cpp in Sources * / , <nl> <nl> DFF0F2C417528350002DA3A4 / * TextureGL . cpp in Sources * / , <nl> DFF0F2C517528350002DA3A4 / * TextureManager . cpp in Sources * / , <nl> DFF0F2C617528350002DA3A4 / * VisibleEffect . cpp in Sources * / , <nl> + F597B05D18A804E0005AADAE / * DVDVideoCodec . cpp in Sources * / , <nl> DFF0F2C717528350002DA3A4 / * XBTF . cpp in Sources * / , <nl> DFF0F2C817528350002DA3A4 / * XBTFReader . cpp in Sources * / , <nl> DFF0F2C917528350002DA3A4 / * GenericTouchActionHandler . cpp in Sources * / , <nl> <nl> E49911E1174E5D3700741B6D / * DVDInputStreamHTSP . cpp in Sources * / , <nl> E49911E2174E5D3700741B6D / * DVDInputStreamHttp . cpp in Sources * / , <nl> E49911E3174E5D3700741B6D / * DVDInputStreamMemory . cpp in Sources * / , <nl> + F597B05C18A804E0005AADAE / * DVDVideoCodec . cpp in Sources * / , <nl> E49911E4174E5D3700741B6D / * DVDInputStreamNavigator . cpp in Sources * / , <nl> E49911E5174E5D3700741B6D / * DVDInputStreamPVRManager . cpp in Sources * / , <nl> E49911E6174E5D3700741B6D / * DVDInputStreamRTMP . cpp in Sources * / , <nl>
|
XCode : Adjust to new DVDVideoCodec . cpp thx @ davilla
|
xbmc/xbmc
|
346d29e39f0381f67b77a502a7c8a3292ac6c7bc
|
2014-02-09T20:14:56Z
|
mmm a / imgui . cpp <nl> ppp b / imgui . cpp <nl> bool ImGui : : ButtonEx ( const char * label , const ImVec2 & size_arg , ImGuiButtonFlags <nl> const ImVec2 label_size = CalcTextSize ( label , NULL , true ) ; <nl> <nl> ImVec2 pos = window - > DC . CursorPos ; <nl> - if ( ( flags & ImGuiButtonFlags_AlignTextBaseLine ) & & style . FramePadding . y < window - > DC . CurrentLineTextBaseOffset ) <nl> + if ( ( flags & ImGuiButtonFlags_AlignTextBaseLine ) & & style . FramePadding . y < window - > DC . CurrentLineTextBaseOffset ) / / Try to vertically align buttons that are smaller / have no padding so that text baseline matches ( bit hacky , since it shouldn ' t be a flag ) <nl> pos . y + = window - > DC . CurrentLineTextBaseOffset - style . FramePadding . y ; <nl> ImVec2 size = CalcItemSize ( size_arg , label_size . x + style . FramePadding . x * 2 . 0f , label_size . y + style . FramePadding . y * 2 . 0f ) ; <nl> <nl>
|
Comment
|
ocornut/imgui
|
02cea0c3c3a7ac245f663e0ceb6f048eb0048405
|
2016-09-25T09:16:19Z
|
mmm a / tensorflow / core / grappler / optimizers / constant_folding . cc <nl> ppp b / tensorflow / core / grappler / optimizers / constant_folding . cc <nl> Status ConstantFolding : : SimplifyNode ( bool use_shape_info , NodeDef * node , <nl> } <nl> } <nl> } <nl> - <nl> / / Remove RandomShuffle op if it is scalar or first dimension is of size 1 . <nl> if ( use_shape_info & & IsRandomShuffle ( * node ) & & <nl> ! properties - > GetInputProperties ( node - > name ( ) ) . empty ( ) ) { <nl> Status ConstantFolding : : SimplifyNode ( bool use_shape_info , NodeDef * node , <nl> } <nl> } <nl> <nl> - / / Remove Reverse op over dimensions with size 1 . <nl> - if ( use_shape_info & & node - > op ( ) = = " ReverseV2 " & & <nl> - properties - > GetInputProperties ( node - > name ( ) ) . size ( ) > = 2 ) { <nl> - const auto & shape = properties - > GetInputProperties ( node - > name ( ) ) [ 0 ] . shape ( ) ; <nl> - if ( shape . unknown_rank ( ) ) { <nl> - / / Not optimizable . <nl> - return Status : : OK ( ) ; <nl> - } <nl> - const auto & a = properties - > GetInputProperties ( node - > name ( ) ) [ 1 ] ; <nl> - if ( TensorShape : : IsValid ( a . shape ( ) ) & & a . has_value ( ) ) { <nl> - Tensor axis ( a . dtype ( ) , a . shape ( ) ) ; <nl> - if ( ! axis . FromProto ( a . value ( ) ) ) { <nl> - return errors : : InvalidArgument ( " Cannot parse tensor from proto : " , <nl> - a . value ( ) . DebugString ( ) ) ; <nl> - } <nl> - std : : set < int > target_axes ; <nl> - for ( int j = 0 ; j < axis . NumElements ( ) ; + + j ) { <nl> - / / value of axis can be negative . <nl> - if ( axis . dtype ( ) = = DT_INT64 ) { <nl> - target_axes . insert ( ( axis . vec < int64 > ( ) ( j ) + shape . dim_size ( ) ) % <nl> - shape . dim_size ( ) ) ; <nl> - } else { <nl> - target_axes . insert ( ( axis . vec < int > ( ) ( j ) + shape . dim_size ( ) ) % <nl> - shape . 
dim_size ( ) ) ; <nl> - } <nl> - } <nl> - <nl> - / / The node is replaceable iff <nl> - / / unknown_rank = = false & & <nl> - / / ( dim_size = = 0 | | all dims have size 1 | | <nl> - / / all dims with > 1 size are not in target_axes ) <nl> - bool replaceable = ! shape . unknown_rank ( ) ; <nl> - for ( int j = 0 ; replaceable & & j < shape . dim_size ( ) ; + + j ) { <nl> - replaceable & = shape . dim ( j ) . size ( ) = = 1 | | <nl> - target_axes . find ( j ) = = target_axes . end ( ) ; <nl> - } <nl> - if ( replaceable ) { <nl> - ReplaceOperationWithIdentity ( 0 , * properties , node , optimized_graph ) ; <nl> - return Status : : OK ( ) ; <nl> - } <nl> - } <nl> + bool remove_reverse_successful = false ; <nl> + Status remove_reverse_status = <nl> + RemoveReverse ( * properties , use_shape_info , optimized_graph , node , <nl> + & remove_reverse_successful ) ; <nl> + if ( ! remove_reverse_status . ok ( ) ) { <nl> + return remove_reverse_status ; <nl> + } else if ( remove_reverse_successful ) { <nl> + return Status : : OK ( ) ; <nl> } <nl> <nl> bool simplify_slice_successful = false ; <nl> Status ConstantFolding : : SimplifyNode ( bool use_shape_info , NodeDef * node , <nl> return Status : : OK ( ) ; <nl> } <nl> <nl> + Status ConstantFolding : : RemoveReverse ( const GraphProperties & properties , <nl> + bool use_shape_info , <nl> + GraphDef * optimized_graph , NodeDef * node , <nl> + bool * success ) { <nl> + if ( use_shape_info & & node - > op ( ) = = " ReverseV2 " & & <nl> + properties . GetInputProperties ( node - > name ( ) ) . size ( ) > = 2 ) { <nl> + const auto & shape = properties . GetInputProperties ( node - > name ( ) ) [ 0 ] . shape ( ) ; <nl> + if ( shape . unknown_rank ( ) ) { <nl> + / / Not optimizable . <nl> + return Status : : OK ( ) ; <nl> + } <nl> + const auto & a = properties . GetInputProperties ( node - > name ( ) ) [ 1 ] ; <nl> + if ( TensorShape : : IsValid ( a . shape ( ) ) & & a . has_value ( ) ) { <nl> + Tensor axis ( a . dtype ( ) , a . 
shape ( ) ) ; <nl> + if ( ! axis . FromProto ( a . value ( ) ) ) { <nl> + return errors : : InvalidArgument ( " Cannot parse tensor from proto : " , <nl> + a . value ( ) . DebugString ( ) ) ; <nl> + } <nl> + std : : set < int > target_axes ; <nl> + for ( int j = 0 ; j < axis . NumElements ( ) ; + + j ) { <nl> + / / value of axis can be negative . <nl> + if ( axis . dtype ( ) = = DT_INT64 ) { <nl> + target_axes . insert ( ( axis . vec < int64 > ( ) ( j ) + shape . dim_size ( ) ) % <nl> + shape . dim_size ( ) ) ; <nl> + } else { <nl> + target_axes . insert ( ( axis . vec < int > ( ) ( j ) + shape . dim_size ( ) ) % <nl> + shape . dim_size ( ) ) ; <nl> + } <nl> + } <nl> + <nl> + / / The node is replaceable iff <nl> + / / unknown_rank = = false & & <nl> + / / ( dim_size = = 0 | | all dims have size 1 | | <nl> + / / all dims with > 1 size are not in target_axes ) <nl> + bool replaceable = ! shape . unknown_rank ( ) ; <nl> + for ( int j = 0 ; replaceable & & j < shape . dim_size ( ) ; + + j ) { <nl> + replaceable & = shape . dim ( j ) . size ( ) = = 1 | | <nl> + target_axes . find ( j ) = = target_axes . end ( ) ; <nl> + } <nl> + if ( replaceable ) { <nl> + ReplaceOperationWithIdentity ( 0 , properties , node , optimized_graph ) ; <nl> + * success = true ; <nl> + return Status : : OK ( ) ; <nl> + } <nl> + } <nl> + } <nl> + * success = false ; <nl> + return Status : : OK ( ) ; <nl> + } <nl> + <nl> Status ConstantFolding : : SimplifySlice ( const GraphProperties & properties , <nl> bool use_shape_info , <nl> GraphDef * optimized_graph , NodeDef * node , <nl> mmm a / tensorflow / core / grappler / optimizers / constant_folding . h <nl> ppp b / tensorflow / core / grappler / optimizers / constant_folding . h <nl> class ConstantFolding : public GraphOptimizer { <nl> / / Simplifies a Slice operation to an Identity operation if applicable . 
<nl> Status SimplifySlice ( const GraphProperties & properties , bool use_shape_info , <nl> GraphDef * optimized_graph , NodeDef * node , bool * success ) ; <nl> + <nl> + / / Removes Reverse op over dimensions with size 1 . <nl> + Status RemoveReverse ( const GraphProperties & properties , bool use_shape_info , <nl> + GraphDef * optimized_graph , NodeDef * node , bool * success ) ; <nl> / / Points to an externally provided device or to owned_device_ ; <nl> RewriterConfig : : Toggle opt_level_ ; <nl> DeviceBase * cpu_device_ ; <nl>
|
Extracts the ' remove reverse node ' optimization into its own method .
|
tensorflow/tensorflow
|
d40a5c7cc4d510902c8d2bb0981438c62397ab81
|
2018-05-25T23:46:28Z
|
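The `replaceable` condition inside `RemoveReverse` above — a ReverseV2 is a no-op when every reversed dimension has size 1 — can be modeled in a few lines of Python (a hypothetical sketch for the known-rank case, normalizing negative axes the same way the C++ does):

```python
# A reverse is replaceable by Identity iff every dimension either has
# size 1 or is not among the reversed axes.
def reverse_is_identity(shape, axes):
    rank = len(shape)
    # (axis + rank) % rank maps negative axes, e.g. -1 -> rank - 1.
    target_axes = {(a + rank) % rank for a in axes}
    return all(dim == 1 or j not in target_axes
               for j, dim in enumerate(shape))
```

For example, reversing axis 0 of a `[1, 5]` tensor is an identity, while reversing axis 1 is not.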
mmm a / include / swift / AST / DiagnosticsSema . def <nl> ppp b / include / swift / AST / DiagnosticsSema . def <nl> ERROR ( serialization_missing_dependencies , Fatal , <nl> ERROR ( serialization_circular_dependency , Fatal , <nl> " circular dependency between modules ' % 0 ' and % 1 " , <nl> ( StringRef , Identifier ) ) <nl> - ERROR ( serialization_missing_shadowed_module , Fatal , <nl> + ERROR ( serialization_missing_underlying_module , Fatal , <nl> " cannot load underlying module for % 0 " , ( Identifier ) ) <nl> ERROR ( serialization_name_mismatch , Fatal , <nl> " cannot load module ' % 0 ' as ' % 1 ' " , ( StringRef , StringRef ) ) <nl> mmm a / include / swift / Serialization / ModuleFile . h <nl> ppp b / include / swift / Serialization / ModuleFile . h <nl> class ModuleFile <nl> / / / A reference back to the AST representation of the file . <nl> FileUnit * FileContext = nullptr ; <nl> <nl> - / / / The module shadowed by this module , if any . <nl> - ModuleDecl * ShadowedModule = nullptr ; <nl> + / / / The module that this module is an overlay of , if any . <nl> + ModuleDecl * UnderlyingModule = nullptr ; <nl> <nl> / / / The module file data . <nl> std : : unique_ptr < llvm : : MemoryBuffer > ModuleInputBuffer ; <nl> class ModuleFile <nl> return Dependencies ; <nl> } <nl> <nl> - / / / The module shadowed by this module , if any . <nl> - ModuleDecl * getShadowedModule ( ) const { return ShadowedModule ; } <nl> + / / / The module that this module is an overlay for , if any . <nl> + ModuleDecl * getUnderlyingModule ( ) const { return UnderlyingModule ; } <nl> <nl> / / / Searches the module ' s top - level decls for the given identifier . <nl> void lookupValue ( DeclName name , SmallVectorImpl < ValueDecl * > & results ) ; <nl> mmm a / include / swift / Serialization / Validation . h <nl> ppp b / include / swift / Serialization / Validation . 
h <nl> enum class Status { <nl> MissingDependency , <nl> <nl> / / / The module file is an overlay for a Clang module , which can ' t be found . <nl> - MissingShadowedModule , <nl> + MissingUnderlyingModule , <nl> <nl> / / / The module file depends on a module that is still being loaded , i . e . <nl> / / / there is a circular dependency . <nl> mmm a / lib / Serialization / Deserialization . cpp <nl> ppp b / lib / Serialization / Deserialization . cpp <nl> ModuleDecl * ModuleFile : : getModule ( ArrayRef < Identifier > name , <nl> / / FIXME : duplicated from NameBinder : : getModule <nl> if ( name . size ( ) = = 1 & & <nl> name . front ( ) = = FileContext - > getParentModule ( ) - > getName ( ) ) { <nl> - if ( ! ShadowedModule & & allowLoading ) { <nl> + if ( ! UnderlyingModule & & allowLoading ) { <nl> auto importer = getContext ( ) . getClangModuleLoader ( ) ; <nl> assert ( importer & & " no way to import shadowed module " ) ; <nl> - ShadowedModule = importer - > loadModule ( SourceLoc ( ) , <nl> - { { name . front ( ) , SourceLoc ( ) } } ) ; <nl> + UnderlyingModule = importer - > loadModule ( SourceLoc ( ) , <nl> + { { name . front ( ) , SourceLoc ( ) } } ) ; <nl> } <nl> <nl> - return ShadowedModule ; <nl> + return UnderlyingModule ; <nl> } <nl> <nl> SmallVector < ImportDecl : : AccessPathElement , 4 > importPath ; <nl> mmm a / lib / Serialization / ModuleFile . cpp <nl> ppp b / lib / Serialization / ModuleFile . cpp <nl> Status ModuleFile : : associateWithFileContext ( FileUnit * file , <nl> } <nl> auto module = getModule ( modulePath , / * allowLoading * / true ) ; <nl> if ( ! module | | module - > failedToLoad ( ) ) { <nl> - / / If we ' re missing the module we ' re shadowing , treat that specially . <nl> + / / If we ' re missing the module we ' re an overlay for , treat that specially . <nl> if ( modulePath . size ( ) = = 1 & & <nl> modulePath . 
front ( ) = = file - > getParentModule ( ) - > getName ( ) ) { <nl> - return error ( Status : : MissingShadowedModule ) ; <nl> + return error ( Status : : MissingUnderlyingModule ) ; <nl> } <nl> <nl> / / Otherwise , continue trying to load dependencies , so that we can list <nl> TypeDecl * ModuleFile : : lookupNestedType ( Identifier name , <nl> } <nl> } <nl> <nl> - if ( ! ShadowedModule ) <nl> + if ( ! UnderlyingModule ) <nl> return nullptr ; <nl> <nl> - for ( FileUnit * file : ShadowedModule - > getFiles ( ) ) <nl> + for ( FileUnit * file : UnderlyingModule - > getFiles ( ) ) <nl> if ( auto * nestedType = file - > lookupNestedType ( name , parent ) ) <nl> return nestedType ; <nl> <nl> ModuleFile : : getOpaqueReturnTypeDecls ( SmallVectorImpl < OpaqueTypeDecl * > & results ) <nl> } <nl> <nl> void ModuleFile : : getDisplayDecls ( SmallVectorImpl < Decl * > & results ) { <nl> - if ( ShadowedModule ) <nl> - ShadowedModule - > getDisplayDecls ( results ) ; <nl> + if ( UnderlyingModule ) <nl> + UnderlyingModule - > getDisplayDecls ( results ) ; <nl> <nl> PrettyStackTraceModuleFile stackEntry ( * this ) ; <nl> getImportDecls ( results ) ; <nl> mmm a / lib / Serialization / SerializedModuleLoader . cpp <nl> ppp b / lib / Serialization / SerializedModuleLoader . cpp <nl> FileUnit * SerializedModuleLoaderBase : : loadAST ( <nl> / / necessarily mean it ' s " system " module . User can make their own overlay <nl> / / in non - system directory . <nl> / / Remove this block after we fix the test suite . <nl> - if ( auto shadowed = loadedModuleFile - > getShadowedModule ( ) ) <nl> + if ( auto shadowed = loadedModuleFile - > getUnderlyingModule ( ) ) <nl> if ( shadowed - > isSystemModule ( ) ) <nl> M . setIsSystemModule ( true ) ; <nl> <nl> void swift : : serialization : : diagnoseSerializedASTLoadFailure ( <nl> break ; <nl> } <nl> <nl> - case serialization : : Status : : MissingShadowedModule : { <nl> - Ctx . Diags . 
diagnose ( diagLoc , diag : : serialization_missing_shadowed_module , <nl> + case serialization : : Status : : MissingUnderlyingModule : { <nl> + Ctx . Diags . diagnose ( diagLoc , diag : : serialization_missing_underlying_module , <nl> ModuleName ) ; <nl> if ( Ctx . SearchPathOpts . SDKPath . empty ( ) & & <nl> llvm : : Triple ( llvm : : sys : : getProcessTriple ( ) ) . isMacOSX ( ) ) { <nl> void SerializedASTFile : : collectLinkLibraries ( <nl> } <nl> <nl> bool SerializedASTFile : : isSystemModule ( ) const { <nl> - if ( auto Mod = File . getShadowedModule ( ) ) { <nl> + if ( auto Mod = File . getUnderlyingModule ( ) ) { <nl> return Mod - > isSystemModule ( ) ; <nl> } <nl> return false ; <nl> StringRef SerializedASTFile : : getFilename ( ) const { <nl> } <nl> <nl> const clang : : Module * SerializedASTFile : : getUnderlyingClangModule ( ) const { <nl> - if ( auto * ShadowedModule = File . getShadowedModule ( ) ) <nl> - return ShadowedModule - > findUnderlyingClangModule ( ) ; <nl> + if ( auto * UnderlyingModule = File . getUnderlyingModule ( ) ) <nl> + return UnderlyingModule - > findUnderlyingClangModule ( ) ; <nl> return nullptr ; <nl> } <nl> <nl>
|
Merge remote - tracking branch ' origin / master ' into master - next
|
apple/swift
|
2bfaa13a8e975f5ed50ad2f2931bc39805c89f87
|
2019-05-13T17:30:09Z
|
mmm a / site / source / docs / porting / simd . rst <nl> ppp b / site / source / docs / porting / simd . rst <nl> Porting SIMD code targeting WebAssembly <nl> <nl> Emscripten supports the ` WebAssembly SIMD proposal < https : / / github . com / webassembly / simd / > ` _ when using the WebAssembly LLVM backend . To enable SIMD , pass the - msimd128 flag at compile time . This will also turn on LLVM ' s autovectorization passes , so no source modifications are necessary to benefit from SIMD . <nl> <nl> - At the source level , the GCC / Clang ` SIMD Vector Extensions < https : / / gcc . gnu . org / onlinedocs / gcc / Vector - Extensions . html > ` _ can be used and will be lowered to WebAssembly SIMD instructions where possible . A portable intrinsics header for WebAssembly SIMD is also being actively developed . <nl> + At the source level , the GCC / Clang ` SIMD Vector Extensions < https : / / gcc . gnu . org / onlinedocs / gcc / Vector - Extensions . html > ` _ can be used and will be lowered to WebAssembly SIMD instructions where possible . In addition , there is a portable intrinsics header file that can be used . <nl> + <nl> + . . code - block : : cpp <nl> + <nl> + # include < wasm_simd128 . h > <nl> + <nl> + Separate documentation for the intrinsics header is a work in progress , but its usage is straightforward and its source can be found at ` wasm_simd128 . h < https : / / github . com / emscripten - core / emscripten / blob / master / system / include / wasm_simd128 . h > ` _ . These intrinsics are under active development in parallel with the SIMD proposal and should not be considered any more stable than the proposal itself . <nl> <nl> WebAssembly SIMD is not supported when using the Fastcomp backend . <nl> <nl> new file mode 100644 <nl> index 00000000000 . . 89c1f6a6f80 <nl> mmm / dev / null <nl> ppp b / system / include / wasm_simd128 . h <nl> <nl> + / * <nl> + WebAssembly SIMD128 Intrinsics <nl> + * / <nl> + <nl> + # include < stdint .
h > <nl> + # include < stdbool . h > <nl> + <nl> + / / User - facing type <nl> + typedef int32_t v128_t __attribute__ ( ( __vector_size__ ( 16 ) , __aligned__ ( 16 ) ) ) ; <nl> + <nl> + / / Internal types determined by clang builtin definitions <nl> + typedef int32_t __v128_u __attribute__ ( ( __vector_size__ ( 16 ) , __aligned__ ( 1 ) ) ) ; <nl> + typedef char __i8x16 __attribute__ ( ( __vector_size__ ( 16 ) , __aligned__ ( 16 ) ) ) ; <nl> + typedef unsigned char __u8x16 __attribute__ ( ( __vector_size__ ( 16 ) , __aligned__ ( 16 ) ) ) ; <nl> + typedef short __i16x8 __attribute__ ( ( __vector_size__ ( 16 ) , __aligned__ ( 16 ) ) ) ; <nl> + typedef unsigned short __u16x8 __attribute__ ( ( __vector_size__ ( 16 ) , __aligned__ ( 16 ) ) ) ; <nl> + typedef int __i32x4 __attribute__ ( ( __vector_size__ ( 16 ) , __aligned__ ( 16 ) ) ) ; <nl> + typedef unsigned int __u32x4 __attribute__ ( ( __vector_size__ ( 16 ) , __aligned__ ( 16 ) ) ) ; <nl> + typedef long long __i64x2 __attribute__ ( ( __vector_size__ ( 16 ) , __aligned__ ( 16 ) ) ) ; <nl> + typedef unsigned long long __u64x2 __attribute__ ( ( __vector_size__ ( 16 ) , __aligned__ ( 16 ) ) ) ; <nl> + typedef float __f32x4 __attribute__ ( ( __vector_size__ ( 16 ) , __aligned__ ( 16 ) ) ) ; <nl> + typedef double __f64x2 __attribute__ ( ( __vector_size__ ( 16 ) , __aligned__ ( 16 ) ) ) ; <nl> + <nl> + # define __DEFAULT_FN_ATTRS __attribute__ ( ( __always_inline__ , __nodebug__ , __target__ ( " simd128 " ) , __min_vector_width__ ( 128 ) ) ) <nl> + <nl> + # ifdef __cplusplus <nl> + # include < type_traits > <nl> + # define __SAME_TYPE ( t1 , t2 ) ( std : : is_same < t1 , t2 > : : value ) <nl> + # else <nl> + # define __SAME_TYPE ( t1 , t2 ) ( __builtin_types_compatible_p ( t1 , t2 ) ) <nl> + # endif <nl> + <nl> + # define __REQUIRE_CONSTANT ( e , ty , msg ) _Static_assert ( __builtin_constant_p ( e ) & & __SAME_TYPE ( __typeof__ ( e ) , ty ) , msg ) <nl> + <nl> + / / v128 wasm_v128_load ( void * mem ) <nl> + static 
__inline__ v128_t __DEFAULT_FN_ATTRS wasm_v128_load ( const void * __mem ) { <nl> + / / UB - free unaligned access copied from xmmintrin . h <nl> + struct __wasm_v128_load_struct { <nl> + __v128_u __v ; <nl> + } __attribute__ ( ( __packed__ , __may_alias__ ) ) ; <nl> + return ( ( const struct __wasm_v128_load_struct * ) __mem ) - > __v ; <nl> + } <nl> + <nl> + / / wasm_v128_store ( void * mem , v128 a ) <nl> + static __inline__ void __DEFAULT_FN_ATTRS wasm_v128_store ( void * __mem , v128_t __a ) { <nl> + / / UB - free unaligned access copied from xmmintrin . h <nl> + struct __wasm_v128_store_struct { <nl> + __v128_u __v ; <nl> + } __attribute__ ( ( __packed__ , __may_alias__ ) ) ; <nl> + ( ( struct __wasm_v128_store_struct * ) __mem ) - > __v = __a ; <nl> + } <nl> + <nl> + / / wasm_i8x16_make ( . . . ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i8x16_make ( int8_t c0 , int8_t c1 , int8_t c2 , int8_t c3 , int8_t c4 , int8_t c5 , int8_t c6 , int8_t c7 , int8_t c8 , int8_t c9 , int8_t c10 , int8_t c11 , int8_t c12 , int8_t c13 , int8_t c14 , int8_t c15 ) { <nl> + return ( v128_t ) ( __i8x16 ) { c0 , c1 , c2 , c3 , c4 , c5 , c6 , c7 , c8 , c9 , c10 , c11 , c12 , c13 , c14 , c15 } ; <nl> + } <nl> + <nl> + / / wasm_i16x8_make ( . . . ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i16x8_make ( int16_t c0 , int16_t c1 , int16_t c2 , int16_t c3 , int16_t c4 , int16_t c5 , int16_t c6 , int16_t c7 ) { <nl> + return ( v128_t ) ( __i16x8 ) { c0 , c1 , c2 , c3 , c4 , c5 , c6 , c7 } ; <nl> + } <nl> + <nl> + / / wasm_i32x4_make ( . . . ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i32x4_make ( int32_t c0 , int32_t c1 , int32_t c2 , int32_t c3 ) { <nl> + return ( v128_t ) ( __i32x4 ) { c0 , c1 , c2 , c3 } ; <nl> + } <nl> + <nl> + / / wasm_f32x4_make ( . . . 
) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f32x4_make ( float c0 , float c1 , float c2 , float c3 ) { <nl> + return ( v128_t ) ( __f32x4 ) { c0 , c1 , c2 , c3 } ; <nl> + } <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + / / wasm_i64x2_make ( . . . ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i64x2_make ( int64_t c0 , int64_t c1 ) { <nl> + return ( v128_t ) ( __i64x2 ) { c0 , c1 } ; <nl> + } <nl> + <nl> + / / wasm_f64x2_make ( . . . ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f64x2_make ( double c0 , double c1 ) { <nl> + return ( v128_t ) ( __f64x2 ) { c0 , c1 } ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + / / v128_t wasm_i8x16_constant ( . . . ) <nl> + # define wasm_i8x16_const ( c0 , c1 , c2 , c3 , c4 , c5 , c6 , c7 , c8 , c9 , c10 , c11 , c12 , c13 , c14 , c15 ) \ <nl> + __extension__ ( { \ <nl> + __REQUIRE_CONSTANT ( c0 , int8_t , " expected constant int8_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c1 , int8_t , " expected constant int8_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c2 , int8_t , " expected constant int8_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c3 , int8_t , " expected constant int8_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c4 , int8_t , " expected constant int8_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c5 , int8_t , " expected constant int8_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c6 , int8_t , " expected constant int8_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c7 , int8_t , " expected constant int8_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c8 , int8_t , " expected constant int8_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c9 , int8_t , " expected constant int8_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c10 , int8_t , " expected constant int8_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c11 , int8_t , " expected constant int8_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c12 , int8_t , " expected constant int8_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c13 , int8_t , " expected constant int8_t " ) ; \ <nl> + 
__REQUIRE_CONSTANT ( c14 , int8_t , " expected constant int8_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c15 , int8_t , " expected constant int8_t " ) ; \ <nl> + ( v128_t ) ( __i8x16 ) { c0 , c1 , c2 , c3 , c4 , c5 , c6 , c7 , c8 , c9 , c10 , c11 , c12 , c13 , c14 , c15 } ; \ <nl> + } ) <nl> + <nl> + / / v128_t wasm_i16x8_constant ( . . . ) <nl> + # define wasm_i16x8_const ( c0 , c1 , c2 , c3 , c4 , c5 , c6 , c7 ) \ <nl> + __extension__ ( { \ <nl> + __REQUIRE_CONSTANT ( c0 , int16_t , " expected constant int16_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c1 , int16_t , " expected constant int16_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c2 , int16_t , " expected constant int16_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c3 , int16_t , " expected constant int16_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c4 , int16_t , " expected constant int16_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c5 , int16_t , " expected constant int16_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c6 , int16_t , " expected constant int16_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c7 , int16_t , " expected constant int16_t " ) ; \ <nl> + ( v128_t ) ( __i16x8 ) { c0 , c1 , c2 , c3 , c4 , c5 , c6 , c7 } ; \ <nl> + } ) <nl> + <nl> + / / v128_t wasm_i32x4_constant ( . . . ) <nl> + # define wasm_i32x4_const ( c0 , c1 , c2 , c3 ) \ <nl> + __extension__ ( { \ <nl> + __REQUIRE_CONSTANT ( c0 , int32_t , " expected constant int32_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c1 , int32_t , " expected constant int32_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c2 , int32_t , " expected constant int32_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c3 , int32_t , " expected constant int32_t " ) ; \ <nl> + ( v128_t ) ( __i32x4 ) { c0 , c1 , c2 , c3 } ; \ <nl> + } ) <nl> + <nl> + / / v128_t wasm_f32x4_constant ( . . . 
) <nl> + # define wasm_f32x4_const ( c0 , c1 , c2 , c3 ) \ <nl> + __extension__ ( { \ <nl> + __REQUIRE_CONSTANT ( c0 , float , " expected constant float " ) ; \ <nl> + __REQUIRE_CONSTANT ( c1 , float , " expected constant float " ) ; \ <nl> + __REQUIRE_CONSTANT ( c2 , float , " expected constant float " ) ; \ <nl> + __REQUIRE_CONSTANT ( c3 , float , " expected constant float " ) ; \ <nl> + ( v128_t ) ( __f32x4 ) { c0 , c1 , c2 , c3 } ; \ <nl> + } ) <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + / / v128_t wasm_i64x2_constant ( . . . ) <nl> + # define wasm_i64x2_const ( c0 , c1 ) \ <nl> + __extension__ ( { \ <nl> + __REQUIRE_CONSTANT ( c0 , int64_t , " expected constant int64_t " ) ; \ <nl> + __REQUIRE_CONSTANT ( c1 , int64_t , " expected constant int64_t " ) ; \ <nl> + ( v128_t ) ( __i64x2 ) { c0 , c1 } ; \ <nl> + } ) <nl> + <nl> + / / v128_t wasm_f64x2_constant ( . . . ) <nl> + # define wasm_f64x2_const ( c0 , c1 ) \ <nl> + __extension__ ( { \ <nl> + __REQUIRE_CONSTANT ( c0 , double , " expected constant double " ) ; \ <nl> + __REQUIRE_CONSTANT ( c1 , double , " expected constant double " ) ; \ <nl> + ( v128_t ) ( __f64x2 ) { c0 , c1 } ; \ <nl> + } ) <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + / / v128_t wasm_i8x16_splat ( int8_t a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i8x16_splat ( int8_t a ) { <nl> + return ( v128_t ) ( __i8x16 ) { a , a , a , a , a , a , a , a , a , a , a , a , a , a , a , a } ; <nl> + } <nl> + <nl> + / / int8_t wasm_i8x16_extract_lane ( v128_t a , imm i ) <nl> + # define wasm_i8x16_extract_lane ( a , i ) ( __builtin_wasm_extract_lane_s_i8x16 ( ( __i8x16 ) ( a ) , i ) ) <nl> + <nl> + / / int8_t wasm_u8x16_extract_lane ( v128_t a , imm i ) <nl> + # define wasm_u8x16_extract_lane ( a , i ) ( __builtin_wasm_extract_lane_u_i8x16 ( ( __i8x16 ) ( a ) , i ) ) <nl> + <nl> + / / v128_t wasm_i8x16_replace_lane ( v128_t a , imm i , int8_t b ) <nl> + # define wasm_i8x16_replace_lane ( a , i
, b ) \ <nl> + ( ( v128_t ) __builtin_wasm_replace_lane_i8x16 ( ( __i8x16 ) ( a ) , i , b ) ) <nl> + <nl> + / / v128_t wasm_i16x8_splat ( int16_t a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i16x8_splat ( int16_t a ) { <nl> + return ( v128_t ) ( __i16x8 ) { a , a , a , a , a , a , a , a } ; <nl> + } <nl> + <nl> + / / int16_t wasm_i16x8_extract_lane ( v128_t a , imm i ) <nl> + # define wasm_i16x8_extract_lane ( a , i ) ( __builtin_wasm_extract_lane_s_i16x8 ( ( __i16x8 ) ( a ) , i ) ) <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + / / int16_t wasm_u16x8_extract_lane ( v128_t a , imm i ) <nl> + # define wasm_u16x8_extract_lane ( a , i ) ( __builtin_wasm_extract_lane_u_i16x8 ( ( __i16x8 ) ( a ) , i ) ) <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + / / v128_t wasm_i16x8_replace_lane ( v128_t a , imm i , int16_t b ) <nl> + # define wasm_i16x8_replace_lane ( a , i , b ) \ <nl> + ( ( v128_t ) __builtin_wasm_replace_lane_i16x8 ( ( __i16x8 ) ( a ) , i , b ) ) <nl> + <nl> + / / v128_t wasm_i32x4_splat ( int32_t a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i32x4_splat ( int32_t a ) { <nl> + return ( v128_t ) ( __i32x4 ) { a , a , a , a } ; <nl> + } <nl> + <nl> + / / int32_t wasm_i32x4_extract_lane ( v128_t a , imm i ) <nl> + # define wasm_i32x4_extract_lane ( a , i ) ( __builtin_wasm_extract_lane_i32x4 ( ( __i32x4 ) ( a ) , i ) ) <nl> + <nl> + / / v128_t wasm_i32x4_replace_lane ( v128_t a , imm i , int32_t b ) <nl> + # define wasm_i32x4_replace_lane ( a , i , b ) \ <nl> + ( ( v128_t ) __builtin_wasm_replace_lane_i32x4 ( ( __i32x4 ) ( a ) , i , b ) ) <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + / / v128_t wasm_i64x2_splat ( int64_t a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i64x2_splat ( int64_t a ) { <nl> + return ( v128_t ) ( __i64x2 ) { a , a } ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + / / int64_t wasm_i64x2_extract_lane ( 
v128_t a , imm i ) <nl> + # define wasm_i64x2_extract_lane ( a , i ) ( __builtin_wasm_extract_lane_i64x2 ( ( __i64x2 ) ( a ) , i ) ) <nl> + <nl> + / / v128_t wasm_i64x2_replace_lane ( v128_t a , imm i , int64_t b ) <nl> + # define wasm_i64x2_replace_lane ( a , i , b ) \ <nl> + ( ( v128_t ) __builtin_wasm_replace_lane_i64x2 ( ( __i64x2 ) ( a ) , i , b ) ) <nl> + <nl> + / / v128_t wasm_f32x4_splat ( float a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f32x4_splat ( float a ) { <nl> + return ( v128_t ) ( __f32x4 ) { a , a , a , a } ; <nl> + } <nl> + <nl> + / / float wasm_f32x4_extract_lane ( v128_t a , imm i ) <nl> + # define wasm_f32x4_extract_lane ( a , i ) ( __builtin_wasm_extract_lane_f32x4 ( ( __f32x4 ) ( a ) , i ) ) <nl> + <nl> + / / v128_t wasm_f32x4_replace_lane ( v128_t a , imm i , float b ) <nl> + # define wasm_f32x4_replace_lane ( a , i , b ) \ <nl> + ( ( v128_t ) __builtin_wasm_replace_lane_f32x4 ( ( __f32x4 ) ( a ) , i , b ) ) <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + / / v128_t wasm_f64x2_splat ( double a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f64x2_splat ( double a ) { <nl> + return ( v128_t ) ( __f64x2 ) { a , a } ; <nl> + } <nl> + <nl> + / / double wasm_f64x2_extract_lane ( v128_t a , imm i ) <nl> + # define wasm_f64x2_extract_lane ( a , i ) ( __builtin_wasm_extract_lane_f64x2 ( ( __f64x2 ) ( a ) , i ) ) <nl> + <nl> + / / v128_t wasm_f64x2_replace_lane ( v128_t a , imm i , double b ) <nl> + # define wasm_f64x2_replace_lane ( a , i , b ) \ <nl> + ( ( v128_t ) __builtin_wasm_replace_lane_f64x2 ( ( __f64x2 ) ( a ) , i , b ) ) <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + / / v128_t wasm_i8x16_eq ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i8x16_eq ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __i8x16 ) a = = ( __i8x16 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i8x16_ne ( v128_t a , v128_t b ) <nl> + static
__inline__ v128_t __DEFAULT_FN_ATTRS wasm_i8x16_ne ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __i8x16 ) a ! = ( __i8x16 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i8x16_lt ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i8x16_lt ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __i8x16 ) a < ( __i8x16 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_u8x16_lt ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_u8x16_lt ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u8x16 ) a < ( __u8x16 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i8x16_gt ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i8x16_gt ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __i8x16 ) a > ( __i8x16 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_u8x16_gt ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_u8x16_gt ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u8x16 ) a > ( __u8x16 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i8x16_le ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i8x16_le ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __i8x16 ) a < = ( __i8x16 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_u8x16_le ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_u8x16_le ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u8x16 ) a < = ( __u8x16 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i8x16_ge ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i8x16_ge ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __i8x16 ) a > = ( __i8x16 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_u8x16_ge ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_u8x16_ge ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u8x16 ) a > = ( __u8x16 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i16x8_eq ( v128_t a
, v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i16x8_eq ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __i16x8 ) a = = ( __i16x8 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i16x8_ne ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i16x8_ne ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u16x8 ) a ! = ( __u16x8 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i16x8_lt ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i16x8_lt ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __i16x8 ) a < ( __i16x8 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_u16x8_lt ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_u16x8_lt ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u16x8 ) a < ( __u16x8 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i16x8_gt ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i16x8_gt ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __i16x8 ) a > ( __i16x8 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_u16x8_gt ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_u16x8_gt ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u16x8 ) a > ( __u16x8 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i16x8_le ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i16x8_le ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __i16x8 ) a < = ( __i16x8 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_u16x8_le ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_u16x8_le ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u16x8 ) a < = ( __u16x8 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i16x8_ge ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i16x8_ge ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __i16x8 ) a > = ( __i16x8 ) b ) ; <nl> + } <nl> + <nl> + / /
v128_t wasm_u16x8_ge ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_u16x8_ge ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u16x8 ) a > = ( __u16x8 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i32x4_eq ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i32x4_eq ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __i32x4 ) a = = ( __i32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i32x4_ne ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i32x4_ne ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __i32x4 ) a ! = ( __i32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i32x4_lt ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i32x4_lt ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __i32x4 ) a < ( __i32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_u32x4_lt ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_u32x4_lt ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u32x4 ) a < ( __u32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i32x4_gt ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i32x4_gt ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __i32x4 ) a > ( __i32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_u32x4_gt ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_u32x4_gt ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u32x4 ) a > ( __u32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i32x4_le ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i32x4_le ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __i32x4 ) a < = ( __i32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_u32x4_le ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_u32x4_le ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u32x4 ) a < = ( __u32x4 ) b ) ;
<nl> + } <nl> + <nl> + / / v128_t wasm_i32x4_ge ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i32x4_ge ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __i32x4 ) a > = ( __i32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_u32x4_ge ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_u32x4_ge ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u32x4 ) a > = ( __u32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f32x4_eq ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f32x4_eq ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f32x4 ) a = = ( __f32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f32x4_ne ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f32x4_ne ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f32x4 ) a ! = ( __f32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f32x4_lt ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f32x4_lt ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f32x4 ) a < ( __f32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f32x4_gt ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f32x4_gt ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f32x4 ) a > ( __f32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f32x4_le ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f32x4_le ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f32x4 ) a < = ( __f32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f32x4_ge ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f32x4_ge ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f32x4 ) a > = ( __f32x4 ) b ) ; <nl> + } <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + / / v128_t wasm_f64x2_eq ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f64x2_eq ( 
v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f64x2 ) a = = ( __f64x2 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f64x2_ne ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f64x2_ne ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f64x2 ) a ! = ( __f64x2 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f64x2_lt ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f64x2_lt ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f64x2 ) a < ( __f64x2 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f64x2_gt ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f64x2_gt ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f64x2 ) a > ( __f64x2 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f64x2_le ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f64x2_le ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f64x2 ) a < = ( __f64x2 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f64x2_ge ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f64x2_ge ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f64x2 ) a > = ( __f64x2 ) b ) ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + / / v128_t wasm_v128_not ( v128 a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_v128_not ( v128_t a ) { return ~ a ; } <nl> + <nl> + / / v128_t wasm_v128_and ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_v128_and ( v128_t a , v128_t b ) { return a & b ; } <nl> + <nl> + / / v128_t wasm_v128_or ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_v128_or ( v128_t a , v128_t b ) { return a | b ; } <nl> + <nl> + / / v128_t wasm_v128_xor ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_v128_xor ( v128_t a , v128_t b ) { return a ^ b ; } <nl> + <nl> + / / v128_t wasm_v128_bitselect ( v128_t a , v128_t 
b , v128_t mask ) <nl> + / / ` a ` is selected for each lane for which ` mask ` is nonzero . <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_v128_bitselect ( v128_t a , v128_t b , v128_t mask ) { <nl> + return ( v128_t ) __builtin_wasm_bitselect ( ( __i32x4 ) a , ( __i32x4 ) b , ( __i32x4 ) mask ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i8x16_neg ( v128_t a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i8x16_neg ( v128_t a ) { <nl> + return ( v128_t ) ( - ( __u8x16 ) a ) ; <nl> + } <nl> + <nl> + / / bool wasm_i8x16_any_true ( v128_t a ) <nl> + static __inline__ bool __DEFAULT_FN_ATTRS wasm_i8x16_any_true ( v128_t a ) { <nl> + return __builtin_wasm_any_true_i8x16 ( ( __i8x16 ) a ) ; <nl> + } <nl> + <nl> + / / bool wasm_i8x16_all_true ( v128_t a ) <nl> + static __inline__ bool __DEFAULT_FN_ATTRS wasm_i8x16_all_true ( v128_t a ) { <nl> + return __builtin_wasm_all_true_i8x16 ( ( __i8x16 ) a ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i8x16_shl ( v128_t a , int32_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i8x16_shl ( v128_t a , int32_t b ) { <nl> + return ( v128_t ) ( ( __i8x16 ) a < < b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i8x16_shr ( v128_t a , int32_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i8x16_shr ( v128_t a , int32_t b ) { <nl> + return ( v128_t ) ( ( __i8x16 ) a > > b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_u8x16_shr ( v128_t a , int32_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_u8x16_shr ( v128_t a , int32_t b ) { <nl> + return ( v128_t ) ( ( __u8x16 ) a > > b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i8x16_add ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i8x16_add ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u8x16 ) a + ( __u8x16 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_add_saturate ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i8x16_add_saturate ( v128_t a , v128_t b ) { <nl> + return (
v128_t ) __builtin_wasm_add_saturate_s_i8x16 ( ( __i8x16 ) a , ( __i8x16 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_add_saturate ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_u8x16_add_saturate ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) __builtin_wasm_add_saturate_u_i8x16 ( ( __i8x16 ) a , ( __i8x16 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i8x16_sub ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i8x16_sub ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u8x16 ) a - ( __u8x16 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_sub_saturate ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i8x16_sub_saturate ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) __builtin_wasm_sub_saturate_s_i8x16 ( ( __i8x16 ) a , ( __i8x16 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_sub_saturate ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_u8x16_sub_saturate ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) __builtin_wasm_sub_saturate_u_i8x16 ( ( __i8x16 ) a , ( __i8x16 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i8x16_mul ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i8x16_mul ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u8x16 ) a * ( __u8x16 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i16x8_neg ( v128_t a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i16x8_neg ( v128_t a ) { <nl> + return ( v128_t ) ( - ( __u16x8 ) a ) ; <nl> + } <nl> + <nl> + / / bool wasm_i16x8_any_true ( v128_t a ) <nl> + static __inline__ bool __DEFAULT_FN_ATTRS wasm_i16x8_any_true ( v128_t a ) { <nl> + return __builtin_wasm_any_true_i16x8 ( ( __i16x8 ) a ) ; <nl> + } <nl> + <nl> + / / bool wasm_i16x8_all_true ( v128_t a ) <nl> + static __inline__ bool __DEFAULT_FN_ATTRS wasm_i16x8_all_true ( v128_t a ) { <nl> + return __builtin_wasm_all_true_i16x8 ( ( __i16x8 ) a ) ; <nl> + } <nl> + <nl> + / / 
v128_t wasm_i16x8_shl ( v128_t a , int32_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i16x8_shl ( v128_t a , int32_t b ) { <nl> + return ( v128_t ) ( ( __i16x8 ) a < < b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i16x8_shr ( v128_t a , int32_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i16x8_shr ( v128_t a , int32_t b ) { <nl> + return ( v128_t ) ( ( __i16x8 ) a > > b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_u16x8_shr ( v128_t a , int32_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_u16x8_shr ( v128_t a , int32_t b ) { <nl> + return ( v128_t ) ( ( __u16x8 ) a > > b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i16x8_add ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i16x8_add ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u16x8 ) a + ( __u16x8 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i16x8_add_saturate ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i16x8_add_saturate ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) __builtin_wasm_add_saturate_s_i16x8 ( ( __i16x8 ) a , ( __i16x8 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_u16x8_add_saturate ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_u16x8_add_saturate ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) __builtin_wasm_add_saturate_u_i16x8 ( ( __i16x8 ) a , ( __i16x8 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i16x8_sub ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i16x8_sub ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __i16x8 ) a - ( __i16x8 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i16x8_sub_saturate ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i16x8_sub_saturate ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) __builtin_wasm_sub_saturate_s_i16x8 ( ( __i16x8 ) a , ( __i16x8 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_u16x8_sub_saturate ( v128_t a , v128_t b ) <nl> + static
__inline__ v128_t __DEFAULT_FN_ATTRS wasm_u16x8_sub_saturate ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) __builtin_wasm_sub_saturate_u_i16x8 ( ( __i16x8 ) a , ( __i16x8 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i16x8_mul ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i16x8_mul ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u16x8 ) a * ( __u16x8 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i32x4_neg ( v128_t a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i32x4_neg ( v128_t a ) { <nl> + return ( v128_t ) ( - ( __u32x4 ) a ) ; <nl> + } <nl> + <nl> + / / bool wasm_i32x4_any_true ( v128_t a ) <nl> + static __inline__ bool __DEFAULT_FN_ATTRS wasm_i32x4_any_true ( v128_t a ) { <nl> + return __builtin_wasm_any_true_i32x4 ( ( __i32x4 ) a ) ; <nl> + } <nl> + <nl> + / / bool wasm_i32x4_all_true ( v128_t a ) <nl> + static __inline__ bool __DEFAULT_FN_ATTRS wasm_i32x4_all_true ( v128_t a ) { <nl> + return __builtin_wasm_all_true_i32x4 ( ( __i32x4 ) a ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i32x4_shl ( v128_t a , int32_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i32x4_shl ( v128_t a , int32_t b ) { <nl> + return ( v128_t ) ( ( __i32x4 ) a < < b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i32x4_shr ( v128_t a , int32_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i32x4_shr ( v128_t a , int32_t b ) { <nl> + return ( v128_t ) ( ( __i32x4 ) a > > b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_u32x4_shr ( v128_t a , int32_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_u32x4_shr ( v128_t a , int32_t b ) { <nl> + return ( v128_t ) ( ( __u32x4 ) a > > b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i32x4_add ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i32x4_add ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u32x4 ) a + ( __u32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i32x4_sub ( v128_t a , v128_t b ) <nl> + static __inline__
v128_t __DEFAULT_FN_ATTRS wasm_i32x4_sub ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u32x4 ) a - ( __u32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i32x4_mul ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i32x4_mul ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u32x4 ) a * ( __u32x4 ) b ) ; <nl> + } <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + / / v128_t wasm_i64x2_neg ( v128_t a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i64x2_neg ( v128_t a ) { <nl> + return ( v128_t ) ( - ( __u64x2 ) a ) ; <nl> + } <nl> + <nl> + / / bool wasm_i64x2_any_true ( v128_t a ) <nl> + static __inline__ bool __DEFAULT_FN_ATTRS wasm_i64x2_any_true ( v128_t a ) { <nl> + return __builtin_wasm_any_true_i64x2 ( ( __i64x2 ) a ) ; <nl> + } <nl> + <nl> + / / bool wasm_i64x2_all_true ( v128_t a ) <nl> + static __inline__ bool __DEFAULT_FN_ATTRS wasm_i64x2_all_true ( v128_t a ) { <nl> + return __builtin_wasm_all_true_i64x2 ( ( __i64x2 ) a ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i64x2_shl ( v128_t a , int32_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i64x2_shl ( v128_t a , int32_t b ) { <nl> + return ( v128_t ) ( ( __i64x2 ) a < < ( int64_t ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i64x2_shr ( v128_t a , int32_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i64x2_shr ( v128_t a , int32_t b ) { <nl> + return ( v128_t ) ( ( __i64x2 ) a > > ( int64_t ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_u64x2_shr ( v128_t a , int32_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_u64x2_shr ( v128_t a , int32_t b ) { <nl> + return ( v128_t ) ( ( __u64x2 ) a > > ( int64_t ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i64x2_add ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i64x2_add ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u64x2 ) a + ( __u64x2 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_i64x2_sub ( v128_t a ,
v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_i64x2_sub ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __u64x2 ) a - ( __u64x2 ) b ) ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + / / v128_t wasm_f32x4_abs ( v128_t a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f32x4_abs ( v128_t a ) { <nl> + return ( v128_t ) __builtin_wasm_abs_f32x4 ( ( __f32x4 ) a ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f32x4_neg ( v128_t a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f32x4_neg ( v128_t a ) { <nl> + return ( v128_t ) ( - ( __f32x4 ) a ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f32x4_sqrt ( v128_t a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f32x4_sqrt ( v128_t a ) { <nl> + return ( v128_t ) __builtin_wasm_sqrt_f32x4 ( ( __f32x4 ) a ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f32x4_add ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f32x4_add ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f32x4 ) a + ( __f32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f32x4_sub ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f32x4_sub ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f32x4 ) a - ( __f32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f32x4_mul ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f32x4_mul ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f32x4 ) a * ( __f32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f32x4_div ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f32x4_div ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f32x4 ) a / ( __f32x4 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f32x4_min ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f32x4_min ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) __builtin_wasm_min_f32x4 ( ( __f32x4 ) a , ( __f32x4 ) b ) ; <nl> + } <nl>
+ <nl> + / / v128_t wasm_f32x4_max ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f32x4_max ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) __builtin_wasm_max_f32x4 ( ( __f32x4 ) a , ( __f32x4 ) b ) ; <nl> + } <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + / / v128_t wasm_f64x2_abs ( v128_t a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f64x2_abs ( v128_t a ) { <nl> + return ( v128_t ) __builtin_wasm_abs_f64x2 ( ( __f64x2 ) a ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f64x2_neg ( v128_t a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f64x2_neg ( v128_t a ) { <nl> + return ( v128_t ) ( - ( __f64x2 ) a ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f64x2_sqrt ( v128_t a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f64x2_sqrt ( v128_t a ) { <nl> + return ( v128_t ) __builtin_wasm_sqrt_f64x2 ( ( __f64x2 ) a ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f64x2_add ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f64x2_add ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f64x2 ) a + ( __f64x2 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f64x2_sub ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f64x2_sub ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f64x2 ) a - ( __f64x2 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f64x2_mul ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f64x2_mul ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f64x2 ) a * ( __f64x2 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f64x2_div ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f64x2_div ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) ( ( __f64x2 ) a / ( __f64x2 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f64x2_min ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f64x2_min ( v128_t a , v128_t b ) { <nl> + return ( v128_t )
__builtin_wasm_min_f64x2 ( ( __f64x2 ) a , ( __f64x2 ) b ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_f64x2_max ( v128_t a , v128_t b ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_f64x2_max ( v128_t a , v128_t b ) { <nl> + return ( v128_t ) __builtin_wasm_max_f64x2 ( ( __f64x2 ) a , ( __f64x2 ) b ) ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_trunc_saturate_i32x4_f32x4 ( v128_t a ) { <nl> + return ( v128_t ) __builtin_wasm_trunc_saturate_s_i32x4_f32x4 ( ( __f32x4 ) a ) ; <nl> + } <nl> + <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_trunc_saturate_u32x4_f32x4 ( v128_t a ) { <nl> + return ( v128_t ) __builtin_wasm_trunc_saturate_u_i32x4_f32x4 ( ( __f32x4 ) a ) ; <nl> + } <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_trunc_saturate_i64x2_f64x2 ( v128_t a ) { <nl> + return ( v128_t ) __builtin_wasm_trunc_saturate_s_i64x2_f64x2 ( ( __f64x2 ) a ) ; <nl> + } <nl> + <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_trunc_saturate_u64x2_f64x2 ( v128_t a ) { <nl> + return ( v128_t ) __builtin_wasm_trunc_saturate_u_i64x2_f64x2 ( ( __f64x2 ) a ) ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + / / v128_t wasm_convert_f32x4_i32x4 ( v128_t a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_convert_f32x4_i32x4 ( v128_t v ) { <nl> + return ( v128_t ) __builtin_convertvector ( ( __i32x4 ) v , __f32x4 ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_convert_f32x4_u32x4 ( v128_t a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_convert_f32x4_u32x4 ( v128_t v ) { <nl> + return ( v128_t ) __builtin_convertvector ( ( __u32x4 ) v , __f32x4 ) ; <nl> + } <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + / / v128_t wasm_convert_f64x2_i64x2 ( v128_t a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_convert_f64x2_i64x2 ( v128_t v ) { <nl> +
return ( v128_t ) __builtin_convertvector ( ( __i64x2 ) v , __f64x2 ) ; <nl> + } <nl> + <nl> + / / v128_t wasm_convert_f64x2_u64x2 ( v128_t a ) <nl> + static __inline__ v128_t __DEFAULT_FN_ATTRS wasm_convert_f64x2_u64x2 ( v128_t v ) { <nl> + return ( v128_t ) __builtin_convertvector ( ( __u64x2 ) v , __f64x2 ) ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + # define wasm_v8x16_shuffle ( \ <nl> + a , b , c0 , c1 , c2 , c3 , c4 , c5 , c6 , c7 , c8 , c9 , c10 , c11 , c12 , c13 , c14 , c15 ) \ <nl> + ( ( v128_t ) ( __builtin_shufflevector ( ( __u8x16 ) ( a ) , ( __u8x16 ) ( b ) , c0 , c1 , c2 , c3 , c4 , c5 , c6 , c7 , \ <nl> + c8 , c9 , c10 , c11 , c12 , c13 , c14 , c15 ) ) ) <nl> mmm a / tests / test_core . py <nl> ppp b / tests / test_core . py <nl> def test_wasm_builtin_simd ( self , js_engines ) : <nl> self . do_run ( open ( path_from_root ( ' tests ' , ' test_wasm_builtin_simd . c ' ) ) . read ( ) , ' Success ! ' , <nl> js_engines = js_engines ) <nl> self . emcc_args . append ( ' - munimplemented - simd128 ' ) <nl> - self . do_run ( open ( path_from_root ( ' tests ' , ' test_wasm_builtin_simd . c ' ) ) . read ( ) , ' Success ! ' , <nl> - js_engines = [ ] ) <nl> + self . build ( open ( path_from_root ( ' tests ' , ' test_wasm_builtin_simd . c ' ) ) . read ( ) , <nl> + self . get_dir ( ) , os . path . join ( self . get_dir ( ) , ' src . cpp ' ) ) <nl> + <nl> + @ wasm_simd <nl> + def test_wasm_intrinsics_simd ( self , js_engines ) : <nl> + self . emcc_args . extend ( [ ' - Wpedantic ' , ' - Werror ' , ' - Wall ' ] ) <nl> + self . do_run ( open ( path_from_root ( ' tests ' , ' test_wasm_intrinsics_simd . c ' ) ) . read ( ) , ' Success ! ' , <nl> + js_engines = js_engines ) <nl> + self . emcc_args . append ( ' - munimplemented - simd128 ' ) <nl> + self . build ( open ( path_from_root ( ' tests ' , ' test_wasm_intrinsics_simd . c ' ) ) . read ( ) , <nl> + self . get_dir ( ) , os . path . join ( self . get_dir ( ) , ' src . 
cpp ' ) ) <nl> <nl> @ asm_simd <nl> def test_simd ( self ) : <nl> mmm a / tests / test_wasm_builtin_simd . c <nl> ppp b / tests / test_wasm_builtin_simd . c <nl> static int failures = 0 ; <nl> } \ <nl> } ) <nl> <nl> - int EMSCRIPTEN_KEEPALIVE main ( int argc , char * * argv ) { <nl> + int EMSCRIPTEN_KEEPALIVE __attribute__ ( ( __optnone__ ) ) main ( int argc , char * * argv ) { <nl> { <nl> i8x16 vec = { 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 } ; <nl> expect_vec ( i8x16_load ( & vec ) , <nl> int EMSCRIPTEN_KEEPALIVE main ( int argc , char * * argv ) { <nl> expect_eq ( i8x16_all_true ( ( i8x16 ) { 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 } ) , 0 ) ; <nl> expect_eq ( i8x16_all_true ( ( i8x16 ) { 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 1 , 0 , 0 , 0 , 0 } ) , 0 ) ; <nl> expect_eq ( i8x16_all_true ( ( i8x16 ) { 1 , 1 , 1 , 1 , 1 , 0 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 } ) , 0 ) ; <nl> - expect_eq ( i8x16_all_true ( ( i8x16 ) { 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 } ) , 1 ) ; <nl> + / / https : / / bugs . chromium . org / p / v8 / issues / detail ? 
id = 9372 <nl> + / * expect_eq ( i8x16_all_true ( ( i8x16 ) { 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 } ) , 1 ) ; * / <nl> expect_vec ( <nl> i8x16_shl ( ( i8x16 ) { 0 , 1 , 2 , 4 , 8 , 16 , 32 , 64 , - 128 , 3 , 6 , 12 , 24 , 48 , 96 , - 64 } , 1 ) , <nl> ( ( i8x16 ) { 0 , 2 , 4 , 8 , 16 , 32 , 64 , - 128 , 0 , 6 , 12 , 24 , 48 , 96 , - 64 , - 128 } ) <nl> int EMSCRIPTEN_KEEPALIVE main ( int argc , char * * argv ) { <nl> expect_eq ( i16x8_all_true ( ( i16x8 ) { 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 } ) , 0 ) ; <nl> expect_eq ( i16x8_all_true ( ( i16x8 ) { 0 , 0 , 1 , 0 , 0 , 0 , 0 , 0 } ) , 0 ) ; <nl> expect_eq ( i16x8_all_true ( ( i16x8 ) { 1 , 1 , 1 , 1 , 1 , 0 , 1 , 1 } ) , 0 ) ; <nl> - expect_eq ( i16x8_all_true ( ( i16x8 ) { 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 } ) , 1 ) ; <nl> + / * expect_eq ( i16x8_all_true ( ( i16x8 ) { 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 } ) , 1 ) ; * / <nl> expect_vec ( <nl> i16x8_shl ( ( i16x8 ) { 0 , 8 , 16 , 128 , 256 , 2048 , 4096 , - 32768 } , 1 ) , <nl> ( ( i16x8 ) { 0 , 16 , 32 , 256 , 512 , 4096 , 8192 , 0 } ) <nl> int EMSCRIPTEN_KEEPALIVE main ( int argc , char * * argv ) { <nl> expect_eq ( i32x4_all_true ( ( i32x4 ) { 0 , 0 , 0 , 0 } ) , 0 ) ; <nl> expect_eq ( i32x4_all_true ( ( i32x4 ) { 0 , 0 , 1 , 0 } ) , 0 ) ; <nl> expect_eq ( i32x4_all_true ( ( i32x4 ) { 1 , 0 , 1 , 1 } ) , 0 ) ; <nl> - expect_eq ( i32x4_all_true ( ( i32x4 ) { 1 , 1 , 1 , 1 } ) , 1 ) ; <nl> + / * expect_eq ( i32x4_all_true ( ( i32x4 ) { 1 , 1 , 1 , 1 } ) , 1 ) ; * / <nl> expect_vec ( i32x4_shl ( ( i32x4 ) { 1 , 0x40000000 , 0x80000000 , - 1 } , 1 ) , ( ( i32x4 ) { 2 , 0x80000000 , 0 , - 2 } ) ) ; <nl> expect_vec ( i32x4_shl ( ( i32x4 ) { 1 , 0x40000000 , 0x80000000 , - 1 } , 32 ) , ( ( i32x4 ) { 1 , 0x40000000 , 0x80000000 , - 1 } ) ) ; <nl> expect_vec ( i32x4_shr_s ( ( i32x4 ) { 1 , 0x40000000 , 0x80000000 , - 1 } , 1 ) , ( ( i32x4 ) { 0 , 0x20000000 , 0xc0000000 , - 1 } ) ) ; <nl> new file mode 100644 <nl> index 00000000000 . . 
ad74b333e5a <nl> mmm / dev / null <nl> ppp b / tests / test_wasm_intrinsics_simd . c <nl> <nl> + # include < stdint . h > <nl> + # include < stdio . h > <nl> + # include < math . h > <nl> + # include < emscripten . h > <nl> + # include < wasm_simd128 . h > <nl> + <nl> + # define TESTFN EMSCRIPTEN_KEEPALIVE __attribute__ ( ( noinline ) ) <nl> + <nl> + v128_t TESTFN i8x16_load ( void * ptr ) { <nl> + return wasm_v128_load ( ptr ) ; <nl> + } <nl> + void TESTFN i8x16_store ( void * ptr , v128_t vec ) { <nl> + wasm_v128_store ( ptr , vec ) ; <nl> + } <nl> + v128_t TESTFN i8x16_const ( void ) { <nl> + return wasm_i8x16_const ( <nl> + ( int8_t ) 1 , ( int8_t ) 2 , ( int8_t ) 3 , ( int8_t ) 4 , <nl> + ( int8_t ) 5 , ( int8_t ) 6 , ( int8_t ) 7 , ( int8_t ) 8 , <nl> + ( int8_t ) 9 , ( int8_t ) 10 , ( int8_t ) 11 , ( int8_t ) 12 , <nl> + ( int8_t ) 13 , ( int8_t ) 14 , ( int8_t ) 15 , ( int8_t ) 16 <nl> + ) ; <nl> + } <nl> + v128_t TESTFN i16x8_const ( void ) { <nl> + return wasm_i16x8_const ( <nl> + ( int16_t ) 1 , ( int16_t ) 2 , ( int16_t ) 3 , ( int16_t ) 4 , <nl> + ( int16_t ) 5 , ( int16_t ) 6 , ( int16_t ) 7 , ( int16_t ) 8 <nl> + ) ; <nl> + } <nl> + v128_t TESTFN i32x4_const ( void ) { <nl> + return wasm_i32x4_const ( ( int32_t ) 1 , ( int32_t ) 2 , ( int32_t ) 3 , ( int32_t ) 4 ) ; <nl> + } <nl> + v128_t TESTFN f32x4_const ( void ) { <nl> + return wasm_f32x4_const ( 1 . f , 2 . f , 3 . f , 4 . f ) ; <nl> + } <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN i64x2_const ( void ) { <nl> + return wasm_i64x2_const ( ( int64_t ) 1 , ( int64_t ) 2 ) ; <nl> + } <nl> + v128_t TESTFN f64x2_const ( void ) { <nl> + return wasm_f64x2_const ( 1 . , 2 . 
) ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN i8x16_make ( int8_t first ) { <nl> + return wasm_i8x16_make ( <nl> + first , ( int8_t ) 2 , ( int8_t ) 3 , ( int8_t ) 4 , <nl> + ( int8_t ) 5 , ( int8_t ) 6 , ( int8_t ) 7 , ( int8_t ) 8 , <nl> + ( int8_t ) 9 , ( int8_t ) 10 , ( int8_t ) 11 , ( int8_t ) 12 , <nl> + ( int8_t ) 13 , ( int8_t ) 14 , ( int8_t ) 15 , ( int8_t ) 16 <nl> + ) ; <nl> + } <nl> + v128_t TESTFN i16x8_make ( int16_t first ) { <nl> + return wasm_i16x8_make ( <nl> + first , ( int16_t ) 2 , ( int16_t ) 3 , ( int16_t ) 4 , <nl> + ( int16_t ) 5 , ( int16_t ) 6 , ( int16_t ) 7 , ( int16_t ) 8 <nl> + ) ; <nl> + } <nl> + v128_t TESTFN i32x4_make ( int32_t first ) { <nl> + return wasm_i32x4_make ( first , ( int32_t ) 2 , ( int32_t ) 3 , ( int32_t ) 4 ) ; <nl> + } <nl> + v128_t TESTFN f32x4_make ( float first ) { <nl> + return wasm_f32x4_make ( first , 2 . f , 3 . f , 4 . f ) ; <nl> + } <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN i64x2_make ( int64_t first ) { <nl> + return wasm_i64x2_make ( first , ( int64_t ) 2 ) ; <nl> + } <nl> + v128_t TESTFN f64x2_make ( double first ) { <nl> + return wasm_f64x2_make ( first , 2 .
) ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN i8x16_shuffle_interleave_bytes ( v128_t x , v128_t y ) { <nl> + return wasm_v8x16_shuffle ( x , y , 0 , 17 , 2 , 19 , 4 , 21 , 6 , 23 , 8 , 25 , 10 , 27 , 12 , 29 , 14 , 31 ) ; <nl> + } <nl> + v128_t TESTFN i32x4_shuffle_reverse ( v128_t vec ) { <nl> + return wasm_v8x16_shuffle ( vec , vec , 12 , 13 , 14 , 15 , 8 , 9 , 10 , 11 , 4 , 5 , 6 , 7 , 0 , 1 , 2 , 3 ) ; <nl> + } <nl> + v128_t TESTFN i8x16_splat ( int32_t x ) { <nl> + return wasm_i8x16_splat ( x ) ; <nl> + } <nl> + int32_t TESTFN i8x16_extract_lane_s_first ( v128_t vec ) { <nl> + return wasm_i8x16_extract_lane ( vec , 0 ) ; <nl> + } <nl> + int32_t TESTFN i8x16_extract_lane_s_last ( v128_t vec ) { <nl> + return wasm_i8x16_extract_lane ( vec , 15 ) ; <nl> + } <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + uint32_t TESTFN i8x16_extract_lane_u_first ( v128_t vec ) { <nl> + return wasm_u8x16_extract_lane ( vec , 0 ) ; <nl> + } <nl> + uint32_t TESTFN i8x16_extract_lane_u_last ( v128_t vec ) { <nl> + return wasm_u8x16_extract_lane ( vec , 15 ) ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN i8x16_replace_lane_first ( v128_t vec , int32_t val ) { <nl> + return wasm_i8x16_replace_lane ( vec , 0 , val ) ; <nl> + } <nl> + v128_t TESTFN i8x16_replace_lane_last ( v128_t vec , int32_t val ) { <nl> + return wasm_i8x16_replace_lane ( vec , 15 , val ) ; <nl> + } <nl> + v128_t TESTFN i16x8_splat ( int32_t x ) { <nl> + return wasm_i16x8_splat ( x ) ; <nl> + } <nl> + int32_t TESTFN i16x8_extract_lane_s_first ( v128_t vec ) { <nl> + return wasm_i16x8_extract_lane ( vec , 0 ) ; <nl> + } <nl> + int32_t TESTFN i16x8_extract_lane_s_last ( v128_t vec ) { <nl> + return wasm_i16x8_extract_lane ( vec , 7 ) ; <nl> + } <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + int32_t TESTFN i16x8_extract_lane_u_first ( v128_t vec ) { <nl> + return
wasm_u16x8_extract_lane ( vec , 0 ) ; <nl> + } <nl> + int32_t TESTFN i16x8_extract_lane_u_last ( v128_t vec ) { <nl> + return wasm_u16x8_extract_lane ( vec , 7 ) ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN i16x8_replace_lane_first ( v128_t vec , int32_t val ) { <nl> + return wasm_i16x8_replace_lane ( vec , 0 , val ) ; <nl> + } <nl> + v128_t TESTFN i16x8_replace_lane_last ( v128_t vec , int32_t val ) { <nl> + return wasm_i16x8_replace_lane ( vec , 7 , val ) ; <nl> + } <nl> + v128_t TESTFN i32x4_splat ( int32_t x ) { <nl> + return wasm_i32x4_splat ( x ) ; <nl> + } <nl> + int32_t TESTFN i32x4_extract_lane_first ( v128_t vec ) { <nl> + return wasm_i32x4_extract_lane ( vec , 0 ) ; <nl> + } <nl> + int32_t TESTFN i32x4_extract_lane_last ( v128_t vec ) { <nl> + return wasm_i32x4_extract_lane ( vec , 3 ) ; <nl> + } <nl> + v128_t TESTFN i32x4_replace_lane_first ( v128_t vec , int32_t val ) { <nl> + return wasm_i32x4_replace_lane ( vec , 0 , val ) ; <nl> + } <nl> + v128_t TESTFN i32x4_replace_lane_last ( v128_t vec , int32_t val ) { <nl> + return wasm_i32x4_replace_lane ( vec , 3 , val ) ; <nl> + } <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN i64x2_splat ( int64_t x ) { <nl> + return wasm_i64x2_splat ( x ) ; <nl> + } <nl> + int64_t TESTFN i64x2_extract_lane_first ( v128_t vec ) { <nl> + return wasm_i64x2_extract_lane ( vec , 0 ) ; <nl> + } <nl> + int64_t TESTFN i64x2_extract_lane_last ( v128_t vec ) { <nl> + return wasm_i64x2_extract_lane ( vec , 1 ) ; <nl> + } <nl> + v128_t TESTFN i64x2_replace_lane_first ( v128_t vec , int64_t val ) { <nl> + return wasm_i64x2_replace_lane ( vec , 0 , val ) ; <nl> + } <nl> + v128_t TESTFN i64x2_replace_lane_last ( v128_t vec , int64_t val ) { <nl> + return wasm_i64x2_replace_lane ( vec , 1 , val ) ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN f32x4_splat ( float x ) { <nl> + return wasm_f32x4_splat ( x ) ; 
<nl> + } <nl> + float TESTFN f32x4_extract_lane_first ( v128_t vec ) { <nl> + return wasm_f32x4_extract_lane ( vec , 0 ) ; <nl> + } <nl> + float TESTFN f32x4_extract_lane_last ( v128_t vec ) { <nl> + return wasm_f32x4_extract_lane ( vec , 3 ) ; <nl> + } <nl> + v128_t TESTFN f32x4_replace_lane_first ( v128_t vec , float val ) { <nl> + return wasm_f32x4_replace_lane ( vec , 0 , val ) ; <nl> + } <nl> + v128_t TESTFN f32x4_replace_lane_last ( v128_t vec , float val ) { <nl> + return wasm_f32x4_replace_lane ( vec , 3 , val ) ; <nl> + } <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN f64x2_splat ( double x ) { <nl> + return wasm_f64x2_splat ( x ) ; <nl> + } <nl> + double TESTFN f64x2_extract_lane_first ( v128_t vec ) { <nl> + return wasm_f64x2_extract_lane ( vec , 0 ) ; <nl> + } <nl> + double TESTFN f64x2_extract_lane_last ( v128_t vec ) { <nl> + return wasm_f64x2_extract_lane ( vec , 1 ) ; <nl> + } <nl> + v128_t TESTFN f64x2_replace_lane_first ( v128_t vec , double val ) { <nl> + return wasm_f64x2_replace_lane ( vec , 0 , val ) ; <nl> + } <nl> + v128_t TESTFN f64x2_replace_lane_last ( v128_t vec , double val ) { <nl> + return wasm_f64x2_replace_lane ( vec , 1 , val ) ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN i8x16_eq ( v128_t x , v128_t y ) { <nl> + return wasm_i8x16_eq ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i8x16_ne ( v128_t x , v128_t y ) { <nl> + return wasm_i8x16_ne ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i8x16_lt_s ( v128_t x , v128_t y ) { <nl> + return wasm_i8x16_lt ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i8x16_lt_u ( v128_t x , v128_t y ) { <nl> + return wasm_u8x16_lt ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i8x16_gt_s ( v128_t x , v128_t y ) { <nl> + return wasm_i8x16_gt ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i8x16_gt_u ( v128_t x , v128_t y ) { <nl> + return wasm_u8x16_gt ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i8x16_le_s ( v128_t x , v128_t y ) { <nl> + return 
wasm_i8x16_le ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i8x16_le_u ( v128_t x , v128_t y ) { <nl> + return wasm_u8x16_le ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i8x16_ge_s ( v128_t x , v128_t y ) { <nl> + return wasm_i8x16_ge ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i8x16_ge_u ( v128_t x , v128_t y ) { <nl> + return wasm_u8x16_ge ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i16x8_eq ( v128_t x , v128_t y ) { <nl> + return wasm_i16x8_eq ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i16x8_ne ( v128_t x , v128_t y ) { <nl> + return wasm_i16x8_ne ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i16x8_lt_s ( v128_t x , v128_t y ) { <nl> + return wasm_i16x8_lt ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i16x8_lt_u ( v128_t x , v128_t y ) { <nl> + return wasm_u16x8_lt ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i16x8_gt_s ( v128_t x , v128_t y ) { <nl> + return wasm_i16x8_gt ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i16x8_gt_u ( v128_t x , v128_t y ) { <nl> + return wasm_u16x8_gt ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i16x8_le_s ( v128_t x , v128_t y ) { <nl> + return wasm_i16x8_le ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i16x8_le_u ( v128_t x , v128_t y ) { <nl> + return wasm_u16x8_le ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i16x8_ge_s ( v128_t x , v128_t y ) { <nl> + return wasm_i16x8_ge ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i16x8_ge_u ( v128_t x , v128_t y ) { <nl> + return wasm_u16x8_ge ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i32x4_eq ( v128_t x , v128_t y ) { <nl> + return wasm_i32x4_eq ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i32x4_ne ( v128_t x , v128_t y ) { <nl> + return wasm_i32x4_ne ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i32x4_lt_s ( v128_t x , v128_t y ) { <nl> + return wasm_i32x4_lt ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i32x4_lt_u ( v128_t x , v128_t y ) { <nl> + return wasm_u32x4_lt ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i32x4_gt_s ( v128_t x , v128_t y ) { <nl> + return wasm_i32x4_gt ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i32x4_gt_u ( v128_t x , 
v128_t y ) { <nl> + return wasm_u32x4_gt ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i32x4_le_s ( v128_t x , v128_t y ) { <nl> + return wasm_i32x4_le ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i32x4_le_u ( v128_t x , v128_t y ) { <nl> + return wasm_u32x4_le ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i32x4_ge_s ( v128_t x , v128_t y ) { <nl> + return wasm_i32x4_ge ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i32x4_ge_u ( v128_t x , v128_t y ) { <nl> + return wasm_u32x4_ge ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f32x4_eq ( v128_t x , v128_t y ) { <nl> + return wasm_f32x4_eq ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f32x4_ne ( v128_t x , v128_t y ) { <nl> + return wasm_f32x4_ne ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f32x4_lt ( v128_t x , v128_t y ) { <nl> + return wasm_f32x4_lt ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f32x4_gt ( v128_t x , v128_t y ) { <nl> + return wasm_f32x4_gt ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f32x4_le ( v128_t x , v128_t y ) { <nl> + return wasm_f32x4_le ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f32x4_ge ( v128_t x , v128_t y ) { <nl> + return wasm_f32x4_ge ( x , y ) ; <nl> + } <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN f64x2_eq ( v128_t x , v128_t y ) { <nl> + return wasm_f64x2_eq ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f64x2_ne ( v128_t x , v128_t y ) { <nl> + return wasm_f64x2_ne ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f64x2_lt ( v128_t x , v128_t y ) { <nl> + return wasm_f64x2_lt ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f64x2_gt ( v128_t x , v128_t y ) { <nl> + return wasm_f64x2_gt ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f64x2_le ( v128_t x , v128_t y ) { <nl> + return wasm_f64x2_le ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f64x2_ge ( v128_t x , v128_t y ) { <nl> + return wasm_f64x2_ge ( x , y ) ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN v128_not ( v128_t vec ) { <nl> + return wasm_v128_not ( vec ) ; <nl> + } <nl> + v128_t TESTFN v128_and ( v128_t
x , v128_t y ) { <nl> + return wasm_v128_and ( x , y ) ; <nl> + } <nl> + v128_t TESTFN v128_or ( v128_t x , v128_t y ) { <nl> + return wasm_v128_or ( x , y ) ; <nl> + } <nl> + v128_t TESTFN v128_xor ( v128_t x , v128_t y ) { <nl> + return wasm_v128_xor ( x , y ) ; <nl> + } <nl> + v128_t TESTFN v128_bitselect ( v128_t x , v128_t y , v128_t cond ) { <nl> + return wasm_v128_bitselect ( x , y , cond ) ; <nl> + } <nl> + v128_t TESTFN i8x16_neg ( v128_t vec ) { <nl> + return wasm_i8x16_neg ( vec ) ; <nl> + } <nl> + int32_t TESTFN i8x16_any_true ( v128_t vec ) { <nl> + return wasm_i8x16_any_true ( vec ) ; <nl> + } <nl> + int32_t TESTFN i8x16_all_true ( v128_t vec ) { <nl> + return wasm_i8x16_all_true ( vec ) ; <nl> + } <nl> + v128_t TESTFN i8x16_shl ( v128_t vec , int32_t shift ) { <nl> + return wasm_i8x16_shl ( vec , shift ) ; <nl> + } <nl> + v128_t TESTFN i8x16_shr_s ( v128_t vec , int32_t shift ) { <nl> + return wasm_i8x16_shr ( vec , shift ) ; <nl> + } <nl> + v128_t TESTFN i8x16_shr_u ( v128_t vec , int32_t shift ) { <nl> + return wasm_u8x16_shr ( vec , shift ) ; <nl> + } <nl> + v128_t TESTFN i8x16_add ( v128_t x , v128_t y ) { <nl> + return wasm_i8x16_add ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i8x16_add_saturate_s ( v128_t x , v128_t y ) { <nl> + return wasm_i8x16_add_saturate ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i8x16_add_saturate_u ( v128_t x , v128_t y ) { <nl> + return wasm_u8x16_add_saturate ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i8x16_sub ( v128_t x , v128_t y ) { <nl> + return wasm_i8x16_sub ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i8x16_sub_saturate_s ( v128_t x , v128_t y ) { <nl> + return wasm_i8x16_sub_saturate ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i8x16_sub_saturate_u ( v128_t x , v128_t y ) { <nl> + return wasm_u8x16_sub_saturate ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i8x16_mul ( v128_t x , v128_t y ) { <nl> + return wasm_i8x16_mul ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i16x8_neg ( v128_t vec ) { <nl> + return wasm_i16x8_neg ( vec ) 
; <nl> + } <nl> + bool TESTFN i16x8_any_true ( v128_t vec ) { <nl> + return wasm_i16x8_any_true ( vec ) ; <nl> + } <nl> + bool TESTFN i16x8_all_true ( v128_t vec ) { <nl> + return wasm_i16x8_all_true ( vec ) ; <nl> + } <nl> + v128_t TESTFN i16x8_shl ( v128_t vec , int32_t shift ) { <nl> + return wasm_i16x8_shl ( vec , shift ) ; <nl> + } <nl> + v128_t TESTFN i16x8_shr_s ( v128_t vec , int32_t shift ) { <nl> + return wasm_i16x8_shr ( vec , shift ) ; <nl> + } <nl> + v128_t TESTFN i16x8_shr_u ( v128_t vec , int32_t shift ) { <nl> + return wasm_u16x8_shr ( vec , shift ) ; <nl> + } <nl> + v128_t TESTFN i16x8_add ( v128_t x , v128_t y ) { <nl> + return wasm_i16x8_add ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i16x8_add_saturate_s ( v128_t x , v128_t y ) { <nl> + return wasm_i16x8_add_saturate ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i16x8_add_saturate_u ( v128_t x , v128_t y ) { <nl> + return wasm_u16x8_add_saturate ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i16x8_sub ( v128_t x , v128_t y ) { <nl> + return wasm_i16x8_sub ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i16x8_sub_saturate_s ( v128_t x , v128_t y ) { <nl> + return wasm_i16x8_sub_saturate ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i16x8_sub_saturate_u ( v128_t x , v128_t y ) { <nl> + return wasm_u16x8_sub_saturate ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i16x8_mul ( v128_t x , v128_t y ) { <nl> + return wasm_i16x8_mul ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i32x4_neg ( v128_t vec ) { <nl> + return wasm_i32x4_neg ( vec ) ; <nl> + } <nl> + int32_t TESTFN i32x4_any_true ( v128_t vec ) { <nl> + return wasm_i32x4_any_true ( vec ) ; <nl> + } <nl> + int32_t TESTFN i32x4_all_true ( v128_t vec ) { <nl> + return wasm_i32x4_all_true ( vec ) ; <nl> + } <nl> + v128_t TESTFN i32x4_shl ( v128_t vec , int32_t shift ) { <nl> + return wasm_i32x4_shl ( vec , shift ) ; <nl> + } <nl> + v128_t TESTFN i32x4_shr_s ( v128_t vec , int32_t shift ) { <nl> + return wasm_i32x4_shr ( vec , shift ) ; <nl> + } <nl> + v128_t TESTFN i32x4_shr_u ( 
v128_t vec , int32_t shift ) { <nl> + return wasm_u32x4_shr ( vec , shift ) ; <nl> + } <nl> + v128_t TESTFN i32x4_add ( v128_t x , v128_t y ) { <nl> + return wasm_i32x4_add ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i32x4_sub ( v128_t x , v128_t y ) { <nl> + return wasm_i32x4_sub ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i32x4_mul ( v128_t x , v128_t y ) { <nl> + return wasm_i32x4_mul ( x , y ) ; <nl> + } <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + <nl> + v128_t TESTFN i64x2_neg ( v128_t vec ) { <nl> + return wasm_i64x2_neg ( vec ) ; <nl> + } <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + bool TESTFN i64x2_any_true ( v128_t vec ) { <nl> + return wasm_i64x2_any_true ( vec ) ; <nl> + } <nl> + bool TESTFN i64x2_all_true ( v128_t vec ) { <nl> + return wasm_i64x2_all_true ( vec ) ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN i64x2_shl ( v128_t vec , int32_t shift ) { <nl> + return wasm_i64x2_shl ( vec , shift ) ; <nl> + } <nl> + <nl> + v128_t TESTFN i64x2_shr_s ( v128_t vec , int32_t shift ) { <nl> + return wasm_i64x2_shr ( vec , shift ) ; <nl> + } <nl> + v128_t TESTFN i64x2_shr_u ( v128_t vec , int32_t shift ) { <nl> + return wasm_u64x2_shr ( vec , shift ) ; <nl> + } <nl> + v128_t TESTFN i64x2_add ( v128_t x , v128_t y ) { <nl> + return wasm_i64x2_add ( x , y ) ; <nl> + } <nl> + v128_t TESTFN i64x2_sub ( v128_t x , v128_t y ) { <nl> + return wasm_i64x2_sub ( x , y ) ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN f32x4_abs ( v128_t vec ) { <nl> + return wasm_f32x4_abs ( vec ) ; <nl> + } <nl> + v128_t TESTFN f32x4_neg ( v128_t vec ) { <nl> + return wasm_f32x4_neg ( vec ) ; <nl> + } <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN f32x4_sqrt ( v128_t vec ) { <nl> + return wasm_f32x4_sqrt ( vec ) ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN f32x4_add ( 
v128_t x , v128_t y ) { <nl> + return wasm_f32x4_add ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f32x4_sub ( v128_t x , v128_t y ) { <nl> + return wasm_f32x4_sub ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f32x4_mul ( v128_t x , v128_t y ) { <nl> + return wasm_f32x4_mul ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f32x4_div ( v128_t x , v128_t y ) { <nl> + return wasm_f32x4_div ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f32x4_min ( v128_t x , v128_t y ) { <nl> + return wasm_f32x4_min ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f32x4_max ( v128_t x , v128_t y ) { <nl> + return wasm_f32x4_max ( x , y ) ; <nl> + } <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN f64x2_abs ( v128_t vec ) { <nl> + return wasm_f64x2_abs ( vec ) ; <nl> + } <nl> + v128_t TESTFN f64x2_neg ( v128_t vec ) { <nl> + return wasm_f64x2_neg ( vec ) ; <nl> + } <nl> + v128_t TESTFN f64x2_sqrt ( v128_t vec ) { <nl> + return wasm_f64x2_sqrt ( vec ) ; <nl> + } <nl> + v128_t TESTFN f64x2_add ( v128_t x , v128_t y ) { <nl> + return wasm_f64x2_add ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f64x2_sub ( v128_t x , v128_t y ) { <nl> + return wasm_f64x2_sub ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f64x2_mul ( v128_t x , v128_t y ) { <nl> + return wasm_f64x2_mul ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f64x2_div ( v128_t x , v128_t y ) { <nl> + return wasm_f64x2_div ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f64x2_min ( v128_t x , v128_t y ) { <nl> + return wasm_f64x2_min ( x , y ) ; <nl> + } <nl> + v128_t TESTFN f64x2_max ( v128_t x , v128_t y ) { <nl> + return wasm_f64x2_max ( x , y ) ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN i32x4_trunc_s_f32x4_sat ( v128_t vec ) { <nl> + return wasm_trunc_saturate_i32x4_f32x4 ( vec ) ; <nl> + } <nl> + v128_t TESTFN i32x4_trunc_u_f32x4_sat ( v128_t vec ) { <nl> + return wasm_trunc_saturate_u32x4_f32x4 ( vec ) ; <nl> + } <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN 
i64x2_trunc_s_f64x2_sat ( v128_t vec ) { <nl> + return wasm_trunc_saturate_i64x2_f64x2 ( vec ) ; <nl> + } <nl> + v128_t TESTFN i64x2_trunc_u_f64x2_sat ( v128_t vec ) { <nl> + return wasm_trunc_saturate_u64x2_f64x2 ( vec ) ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN f32x4_convert_s_i32x4 ( v128_t vec ) { <nl> + return wasm_convert_f32x4_i32x4 ( vec ) ; <nl> + } <nl> + v128_t TESTFN f32x4_convert_u_i32x4 ( v128_t vec ) { <nl> + return wasm_convert_f32x4_u32x4 ( vec ) ; <nl> + } <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + v128_t TESTFN f64x2_convert_s_i64x2 ( v128_t vec ) { <nl> + return wasm_convert_f64x2_i64x2 ( vec ) ; <nl> + } <nl> + v128_t TESTFN f64x2_convert_u_i64x2 ( v128_t vec ) { <nl> + return wasm_convert_f64x2_u64x2 ( vec ) ; <nl> + } <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + static int failures = 0 ; <nl> + <nl> + # define formatter ( x ) _Generic ( ( x ) , \ <nl> + char : " % d " , \ <nl> + unsigned char : " % d " , \ <nl> + short : " % d " , \ <nl> + unsigned short : " % d " , \ <nl> + int : " % d " , \ <nl> + unsigned int : " % d " , \ <nl> + long long : " % lld " , \ <nl> + unsigned long long : " % lld " , \ <nl> + bool : " % d " , \ <nl> + float : " % f " , \ <nl> + double : " % f " \ <nl> + ) <nl> + <nl> + # define err ( x ) fprintf ( stderr , formatter ( x ) , x ) <nl> + <nl> + # define eq ( a , b ) ( { \ <nl> + bool anan = _Generic ( ( a ) , \ <nl> + float : isnan ( a ) , \ <nl> + double : isnan ( a ) , \ <nl> + default : false ) ; \ <nl> + bool bnan = _Generic ( ( b ) , \ <nl> + float : isnan ( b ) , \ <nl> + double : isnan ( b ) , \ <nl> + default : false ) ; \ <nl> + ( ( anan & & bnan ) | | ( ! anan & & a = = b ) ) ; \ <nl> + } ) <nl> + <nl> + # define expect_eq ( _a , _b ) __extension__ ( { \ <nl> + __typeof__ ( _a ) a = ( _a ) , b = ( _b ) ; \ <nl> + if ( ! 
eq ( a , b ) ) { \ <nl> + failures + + ; \ <nl> + fprintf ( stderr , " line % d : expected " , __LINE__ ) ; \ <nl> + err ( b ) ; \ <nl> + fprintf ( stderr , " , got " ) ; \ <nl> + err ( a ) ; \ <nl> + fprintf ( stderr , " \ n " ) ; \ <nl> + } \ <nl> + } ) <nl> + <nl> + # define expect_vec ( _a , _b ) __extension__ ( { \ <nl> + __typeof__ ( _b ) a = ( __typeof__ ( _b ) ) ( _a ) , b = ( _b ) ; \ <nl> + bool err = false ; \ <nl> + size_t lanes = sizeof ( a ) / sizeof ( a [ 0 ] ) ; \ <nl> + for ( size_t i = 0 ; i < lanes ; i + + ) { \ <nl> + if ( ! eq ( a [ i ] , b [ i ] ) ) { \ <nl> + err = true ; \ <nl> + break ; \ <nl> + } \ <nl> + } \ <nl> + if ( err ) { \ <nl> + failures + + ; \ <nl> + fprintf ( stderr , " line % d : expected { " , __LINE__ ) ; \ <nl> + for ( size_t i = 0 ; i < lanes - 1 ; i + + ) { \ <nl> + err ( b [ i ] ) ; \ <nl> + fprintf ( stderr , " , " ) ; \ <nl> + } \ <nl> + err ( b [ lanes - 1 ] ) ; \ <nl> + fprintf ( stderr , " } , got { " ) ; \ <nl> + for ( size_t i = 0 ; i < lanes - 1 ; i + + ) { \ <nl> + err ( a [ i ] ) ; \ <nl> + fprintf ( stderr , " , " ) ; \ <nl> + } \ <nl> + err ( a [ lanes - 1 ] ) ; \ <nl> + fprintf ( stderr , " } \ n " ) ; \ <nl> + } \ <nl> + } ) <nl> + <nl> + # define i8x16 ( c1 , c2 , c3 , c4 , c5 , c6 , c7 , c8 , c9 , c10 , c11 , c12 , c13 , c14 , c15 , c16 ) \ <nl> + ( __extension__ ( char __attribute__ ( ( __vector_size__ ( 16 ) ) ) ) \ <nl> + { c1 , c2 , c3 , c4 , c5 , c6 , c7 , c8 , c9 , c10 , c11 , c12 , c13 , c14 , c15 , c16 } ) <nl> + <nl> + # define u8x16 ( c1 , c2 , c3 , c4 , c5 , c6 , c7 , c8 , c9 , c10 , c11 , c12 , c13 , c14 , c15 , c16 ) \ <nl> + ( __extension__ ( unsigned char __attribute__ ( ( __vector_size__ ( 16 ) ) ) ) \ <nl> + { c1 , c2 , c3 , c4 , c5 , c6 , c7 , c8 , c9 , c10 , c11 , c12 , c13 , c14 , c15 , c16 } ) <nl> + <nl> + # define i16x8 ( c1 , c2 , c3 , c4 , c5 , c6 , c7 , c8 ) \ <nl> + ( __extension__ ( short __attribute__ ( ( __vector_size__ ( 16 ) ) ) ) \ <nl> + { c1 , c2 , c3 , c4 , c5 , c6 , c7 
, c8 } ) <nl> + <nl> + # define u16x8 ( c1 , c2 , c3 , c4 , c5 , c6 , c7 , c8 ) \ <nl> + ( __extension__ ( unsigned short __attribute__ ( ( __vector_size__ ( 16 ) ) ) ) \ <nl> + { c1 , c2 , c3 , c4 , c5 , c6 , c7 , c8 } ) <nl> + <nl> + # define i32x4 ( c1 , c2 , c3 , c4 ) \ <nl> + ( __extension__ ( int __attribute__ ( ( __vector_size__ ( 16 ) ) ) ) { c1 , c2 , c3 , c4 } ) <nl> + <nl> + # define u32x4 ( c1 , c2 , c3 , c4 ) \ <nl> + ( __extension__ ( unsigned int __attribute__ ( ( __vector_size__ ( 16 ) ) ) ) { c1 , c2 , c3 , c4 } ) <nl> + <nl> + # define i64x2 ( c1 , c2 ) \ <nl> + ( __extension__ ( long long __attribute__ ( ( __vector_size__ ( 16 ) ) ) ) { c1 , c2 } ) <nl> + <nl> + # define u64x2 ( c1 , c2 ) \ <nl> + ( __extension__ ( unsigned long long __attribute__ ( ( __vector_size__ ( 16 ) ) ) ) { c1 , c2 } ) <nl> + <nl> + # define f32x4 ( c1 , c2 , c3 , c4 ) \ <nl> + ( __extension__ ( float __attribute__ ( ( __vector_size__ ( 16 ) ) ) ) { c1 , c2 , c3 , c4 } ) <nl> + <nl> + # define f64x2 ( c1 , c2 ) \ <nl> + ( __extension__ ( double __attribute__ ( ( __vector_size__ ( 16 ) ) ) ) { c1 , c2 } ) <nl> + <nl> + <nl> + int EMSCRIPTEN_KEEPALIVE __attribute__ ( ( __optnone__ ) ) main ( int argc , char * * argv ) { <nl> + { <nl> + v128_t vec = ( v128_t ) u8x16 ( 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 ) ; <nl> + expect_vec ( i8x16_load ( & vec ) , <nl> + i8x16 ( 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 , 3 ) ) ; <nl> + i8x16_store ( & vec , ( v128_t ) i8x16 ( 7 , 7 , 7 , 7 , 7 , 7 , 7 , 7 , 7 , 7 , 7 , 7 , 7 , 7 , 7 , 7 ) ) ; <nl> + expect_vec ( i8x16_load ( & vec ) , <nl> + i8x16 ( 7 , 7 , 7 , 7 , 7 , 7 , 7 , 7 , 7 , 7 , 7 , 7 , 7 , 7 , 7 , 7 ) ) ; <nl> + } <nl> + expect_vec ( i8x16_const ( ) , u8x16 ( 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 ) ) ; <nl> + expect_vec ( i16x8_const ( ) , u16x8 ( 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 ) ) ; <nl> + expect_vec ( i32x4_const ( ) , u32x4 ( 1 , 2 , 3 , 4 ) ) ; <nl> + 
expect_vec ( f32x4_const ( ) , f32x4 ( 1 . , 2 . , 3 . , 4 . ) ) ; <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + expect_vec ( i64x2_const ( ) , u64x2 ( 1 , 2 ) ) ; <nl> + expect_vec ( f64x2_const ( ) , f64x2 ( 1 . , 2 . ) ) ; <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + expect_vec ( i8x16_make ( 1 ) , u8x16 ( 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 ) ) ; <nl> + expect_vec ( i16x8_make ( 1 ) , u16x8 ( 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 ) ) ; <nl> + expect_vec ( i32x4_make ( 1 ) , u32x4 ( 1 , 2 , 3 , 4 ) ) ; <nl> + expect_vec ( f32x4_make ( 1 ) , f32x4 ( 1 . , 2 . , 3 . , 4 . ) ) ; <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + expect_vec ( i64x2_make ( 1 ) , u64x2 ( 1 , 2 ) ) ; <nl> + expect_vec ( f64x2_make ( 1 ) , f64x2 ( 1 . , 2 . ) ) ; <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + expect_vec ( <nl> + i8x16_shuffle_interleave_bytes ( <nl> + ( v128_t ) i8x16 ( 1 , 0 , 3 , 0 , 5 , 0 , 7 , 0 , 9 , 0 , 11 , 0 , 13 , 0 , 15 , 0 ) , <nl> + ( v128_t ) i8x16 ( 0 , 2 , 0 , 4 , 0 , 6 , 0 , 8 , 0 , 10 , 0 , 12 , 0 , 14 , 0 , 16 ) <nl> + ) , <nl> + i8x16 ( 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 ) <nl> + ) ; <nl> + expect_vec ( i32x4_shuffle_reverse ( ( v128_t ) i32x4 ( 1 , 2 , 3 , 4 ) ) , i32x4 ( 4 , 3 , 2 , 1 ) ) ; <nl> + <nl> + / / i8x16 lane accesses <nl> + expect_vec ( i8x16_splat ( 5 ) , i8x16 ( 5 , 5 , 5 , 5 , 5 , 5 , 5 , 5 , 5 , 5 , 5 , 5 , 5 , 5 , 5 , 5 ) ) ; <nl> + expect_vec ( i8x16_splat ( 257 ) , i8x16 ( 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 ) ) ; <nl> + expect_eq ( <nl> + i8x16_extract_lane_s_first ( <nl> + ( v128_t ) i8x16 ( - 1 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) <nl> + ) , <nl> + - 1 <nl> + ) ; <nl> + expect_eq ( <nl> + i8x16_extract_lane_s_last ( <nl> + ( v128_t ) i8x16 ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , - 1 ) <nl> + ) , <nl> + - 1 <nl> + 
) ; <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + expect_eq ( <nl> + i8x16_extract_lane_u_first ( <nl> + ( v128_t ) i8x16 ( - 1 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) <nl> + ) , <nl> + 255 <nl> + ) ; <nl> + expect_eq ( <nl> + i8x16_extract_lane_u_last ( <nl> + ( v128_t ) i8x16 ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , - 1 ) <nl> + ) , <nl> + 255 <nl> + ) ; <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + expect_vec ( <nl> + i8x16_replace_lane_first ( <nl> + ( v128_t ) i8x16 ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) , <nl> + 7 <nl> + ) , <nl> + i8x16 ( 7 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_replace_lane_last ( <nl> + ( v128_t ) i8x16 ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) , <nl> + 7 <nl> + ) , <nl> + i8x16 ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 7 ) <nl> + ) ; <nl> + <nl> + / / i16x8 lane accesses <nl> + expect_vec ( i16x8_splat ( 5 ) , i16x8 ( 5 , 5 , 5 , 5 , 5 , 5 , 5 , 5 ) ) ; <nl> + expect_vec ( i16x8_splat ( 65537 ) , i16x8 ( 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 ) ) ; <nl> + expect_eq ( i16x8_extract_lane_s_first ( ( v128_t ) i16x8 ( - 1 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) ) , - 1 ) ; <nl> + expect_eq ( i16x8_extract_lane_s_last ( ( v128_t ) i16x8 ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , - 1 ) ) , - 1 ) ; <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + expect_eq ( i16x8_extract_lane_u_first ( ( v128_t ) i16x8 ( - 1 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) ) , 65535 ) ; <nl> + expect_eq ( i16x8_extract_lane_u_last ( ( v128_t ) i16x8 ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , - 1 ) ) , 65535 ) ; <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + expect_vec ( <nl> + i16x8_replace_lane_first ( ( v128_t ) i16x8 ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) , 7 ) , <nl> + i16x8 ( 7 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + 
i16x8_replace_lane_last ( ( v128_t ) i16x8 ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) , 7 ) , <nl> + i16x8 ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 7 ) <nl> + ) ; <nl> + <nl> + / / i32x4 lane accesses <nl> + expect_vec ( i32x4_splat ( - 5 ) , i32x4 ( - 5 , - 5 , - 5 , - 5 ) ) ; <nl> + expect_eq ( i32x4_extract_lane_first ( ( v128_t ) i32x4 ( - 5 , 0 , 0 , 0 ) ) , - 5 ) ; <nl> + expect_eq ( i32x4_extract_lane_last ( ( v128_t ) i32x4 ( 0 , 0 , 0 , - 5 ) ) , - 5 ) ; <nl> + expect_vec ( <nl> + i32x4_replace_lane_first ( ( v128_t ) i32x4 ( 0 , 0 , 0 , 0 ) , 53 ) , <nl> + i32x4 ( 53 , 0 , 0 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i32x4_replace_lane_last ( ( v128_t ) i32x4 ( 0 , 0 , 0 , 0 ) , 53 ) , <nl> + i32x4 ( 0 , 0 , 0 , 53 ) <nl> + ) ; <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + / / i64x2 lane accesses <nl> + expect_vec ( i64x2_splat ( - 5 ) , i64x2 ( - 5 , - 5 ) ) ; <nl> + expect_eq ( i64x2_extract_lane_first ( ( v128_t ) i64x2 ( - 5 , 0 ) ) , - 5 ) ; <nl> + expect_eq ( i64x2_extract_lane_last ( ( v128_t ) i64x2 ( 0 , - 5 ) ) , - 5 ) ; <nl> + expect_vec ( i64x2_replace_lane_first ( ( v128_t ) i64x2 ( 0 , 0 ) , 53 ) , i64x2 ( 53 , 0 ) ) ; <nl> + expect_vec ( i64x2_replace_lane_last ( ( v128_t ) i64x2 ( 0 , 0 ) , 53 ) , i64x2 ( 0 , 53 ) ) ; <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + / / f32x4 lane accesses <nl> + expect_vec ( f32x4_splat ( - 5 ) , f32x4 ( - 5 , - 5 , - 5 , - 5 ) ) ; <nl> + expect_eq ( f32x4_extract_lane_first ( ( v128_t ) f32x4 ( - 5 , 0 , 0 , 0 ) ) , - 5 ) ; <nl> + expect_eq ( f32x4_extract_lane_last ( ( v128_t ) f32x4 ( 0 , 0 , 0 , - 5 ) ) , - 5 ) ; <nl> + expect_vec ( f32x4_replace_lane_first ( ( v128_t ) f32x4 ( 0 , 0 , 0 , 0 ) , 53 ) , f32x4 ( 53 , 0 , 0 , 0 ) ) ; <nl> + expect_vec ( f32x4_replace_lane_last ( ( v128_t ) f32x4 ( 0 , 0 , 0 , 0 ) , 53 ) , f32x4 ( 0 , 0 , 0 , 53 ) ) ; <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + / / f64x2 lane accesses <nl> + expect_vec ( f64x2_splat ( - 5 ) , 
f64x2 ( - 5 , - 5 ) ) ; <nl> + expect_eq ( f64x2_extract_lane_first ( ( v128_t ) f64x2 ( - 5 , 0 ) ) , - 5 ) ; <nl> + expect_eq ( f64x2_extract_lane_last ( ( v128_t ) f64x2 ( 0 , - 5 ) ) , - 5 ) ; <nl> + expect_vec ( f64x2_replace_lane_first ( ( v128_t ) f64x2 ( 0 , 0 ) , 53 ) , f64x2 ( 53 , 0 ) ) ; <nl> + expect_vec ( f64x2_replace_lane_last ( ( v128_t ) f64x2 ( 0 , 0 ) , 53 ) , f64x2 ( 0 , 53 ) ) ; <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + / / i8x16 comparisons <nl> + expect_vec ( <nl> + i8x16_eq ( <nl> + ( v128_t ) i8x16 ( 0 , 127 , 13 , 128 , 1 , 13 , 129 , 42 , 0 , 127 , 255 , 42 , 1 , 13 , 129 , 42 ) , <nl> + ( v128_t ) i8x16 ( 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 , 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 ) <nl> + ) , <nl> + u8x16 ( - 1 , 0 , - 1 , 0 , 0 , 0 , 0 , 0 , - 1 , 0 , 0 , - 1 , 0 , 0 , 0 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_ne ( <nl> + ( v128_t ) i8x16 ( 0 , 127 , 13 , 128 , 1 , 13 , 129 , 42 , 0 , 127 , 255 , 42 , 1 , 13 , 129 , 42 ) , <nl> + ( v128_t ) i8x16 ( 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 , 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 ) <nl> + ) , <nl> + u8x16 ( 0 , - 1 , 0 , - 1 , - 1 , - 1 , - 1 , - 1 , 0 , - 1 , - 1 , 0 , - 1 , - 1 , - 1 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_lt_s ( <nl> + ( v128_t ) i8x16 ( 0 , 127 , 13 , 128 , 1 , 13 , 129 , 42 , 0 , 127 , 255 , 42 , 1 , 13 , 129 , 42 ) , <nl> + ( v128_t ) i8x16 ( 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 , 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 ) <nl> + ) , <nl> + u8x16 ( 0 , 0 , 0 , - 1 , 0 , - 1 , - 1 , 0 , 0 , 0 , - 1 , 0 , 0 , - 1 , - 1 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_lt_u ( <nl> + ( v128_t ) u8x16 ( 0 , 127 , 13 , 128 , 1 , 13 , 129 , 42 , 0 , 127 , 255 , 42 , 1 , 13 , 129 , 42 ) , <nl> + ( v128_t ) u8x16 ( 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 , 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 ) <nl> + ) , <nl> + u8x16 ( 0 , - 1 , 0 , 0 , - 1 , - 1 , 0 , - 1 , 0 , - 1 , 0 , 0 , - 1 , - 1 , 0 , - 1 ) <nl> + ) ; 
<nl> + expect_vec ( <nl> + i8x16_gt_s ( <nl> + ( v128_t ) i8x16 ( 0 , 127 , 13 , 128 , 1 , 13 , 129 , 42 , 0 , 127 , 255 , 42 , 1 , 13 , 129 , 42 ) , <nl> + ( v128_t ) i8x16 ( 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 , 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 ) <nl> + ) , <nl> + u8x16 ( 0 , - 1 , 0 , 0 , - 1 , 0 , 0 , - 1 , 0 , - 1 , 0 , 0 , - 1 , 0 , 0 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_gt_u ( <nl> + ( v128_t ) u8x16 ( 0 , 127 , 13 , 128 , 1 , 13 , 129 , 42 , 0 , 127 , 255 , 42 , 1 , 13 , 129 , 42 ) , <nl> + ( v128_t ) u8x16 ( 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 , 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 ) <nl> + ) , <nl> + u8x16 ( 0 , 0 , 0 , - 1 , 0 , 0 , - 1 , 0 , 0 , 0 , - 1 , 0 , 0 , 0 , - 1 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_le_s ( <nl> + ( v128_t ) i8x16 ( 0 , 127 , 13 , 128 , 1 , 13 , 129 , 42 , 0 , 127 , 255 , 42 , 1 , 13 , 129 , 42 ) , <nl> + ( v128_t ) i8x16 ( 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 , 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 ) <nl> + ) , <nl> + u8x16 ( - 1 , 0 , - 1 , - 1 , 0 , - 1 , - 1 , 0 , - 1 , 0 , - 1 , - 1 , 0 , - 1 , - 1 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_le_u ( <nl> + ( v128_t ) i8x16 ( 0 , 127 , 13 , 128 , 1 , 13 , 129 , 42 , 0 , 127 , 255 , 42 , 1 , 13 , 129 , 42 ) , <nl> + ( v128_t ) i8x16 ( 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 , 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 ) <nl> + ) , <nl> + i8x16 ( - 1 , - 1 , - 1 , 0 , - 1 , - 1 , 0 , - 1 , - 1 , - 1 , 0 , - 1 , - 1 , - 1 , 0 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_ge_s ( <nl> + ( v128_t ) i8x16 ( 0 , 127 , 13 , 128 , 1 , 13 , 129 , 42 , 0 , 127 , 255 , 42 , 1 , 13 , 129 , 42 ) , <nl> + ( v128_t ) i8x16 ( 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 , 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 ) <nl> + ) , <nl> + u8x16 ( - 1 , - 1 , - 1 , 0 , - 1 , 0 , 0 , - 1 , - 1 , - 1 , 0 , - 1 , - 1 , 0 , 0 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_ge_u ( <nl> + ( v128_t ) i8x16 ( 0 , 127 , 13 , 128 , 1 , 13 , 129 , 42 , 0 , 
127 , 255 , 42 , 1 , 13 , 129 , 42 ) , <nl> + ( v128_t ) i8x16 ( 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 , 0 , 255 , 13 , 42 , 129 , 127 , 0 , 128 ) <nl> + ) , <nl> + i8x16 ( - 1 , 0 , - 1 , - 1 , 0 , 0 , - 1 , 0 , - 1 , 0 , - 1 , - 1 , 0 , 0 , - 1 , 0 ) <nl> + ) ; <nl> + <nl> + / / i16x8 comparisons <nl> + expect_vec ( <nl> + i16x8_eq ( <nl> + ( v128_t ) i16x8 ( 0 , 32767 , 13 , - 32768 , 1 , - 32767 , 42 , - 25536 ) , <nl> + ( v128_t ) i16x8 ( 0 , 13 , 1 , 32767 , - 32767 , 42 , - 25536 , 32767 ) <nl> + ) , <nl> + u16x8 ( - 1 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i16x8_ne ( <nl> + ( v128_t ) i16x8 ( 0 , 32767 , 13 , - 32768 , 1 , - 32767 , 42 , - 25536 ) , <nl> + ( v128_t ) i16x8 ( 0 , 13 , 1 , 32767 , - 32767 , 42 , - 25536 , 32767 ) <nl> + ) , <nl> + u16x8 ( 0 , - 1 , - 1 , - 1 , - 1 , - 1 , - 1 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i16x8_lt_s ( <nl> + ( v128_t ) i16x8 ( 0 , 32767 , 13 , - 32768 , 1 , - 32767 , 42 , - 25536 ) , <nl> + ( v128_t ) i16x8 ( 0 , 13 , 1 , 32767 , - 32767 , 42 , - 25536 , 32767 ) <nl> + ) , <nl> + u16x8 ( 0 , 0 , 0 , - 1 , 0 , - 1 , 0 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i16x8_lt_u ( <nl> + ( v128_t ) u16x8 ( 0 , 32767 , 13 , - 32768 , 1 , - 32767 , 42 , - 25536 ) , <nl> + ( v128_t ) u16x8 ( 0 , 13 , 1 , 32767 , - 32767 , 42 , - 25536 , 32767 ) <nl> + ) , <nl> + u16x8 ( 0 , 0 , 0 , 0 , - 1 , 0 , - 1 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i16x8_gt_s ( <nl> + ( v128_t ) i16x8 ( 0 , 32767 , 13 , - 32768 , 1 , - 32767 , 42 , - 25536 ) , <nl> + ( v128_t ) i16x8 ( 0 , 13 , 1 , 32767 , - 32767 , 42 , - 25536 , 32767 ) <nl> + ) , <nl> + u16x8 ( 0 , - 1 , - 1 , 0 , - 1 , 0 , - 1 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i16x8_gt_u ( <nl> + ( v128_t ) u16x8 ( 0 , 32767 , 13 , - 32768 , 1 , - 32767 , 42 , - 25536 ) , <nl> + ( v128_t ) u16x8 ( 0 , 13 , 1 , 32767 , - 32767 , 42 , - 25536 , 32767 ) <nl> + ) , <nl> + u16x8 ( 0 , - 1 , - 1 , - 1 , 0 , - 1 , 0 , - 1 ) <nl> + ) ; <nl> + expect_vec ( 
<nl> + i16x8_le_s ( <nl> + ( v128_t ) i16x8 ( 0 , 32767 , 13 , - 32768 , 1 , - 32767 , 42 , - 25536 ) , <nl> + ( v128_t ) i16x8 ( 0 , 13 , 1 , 32767 , - 32767 , 42 , - 25536 , 32767 ) <nl> + ) , <nl> + u16x8 ( - 1 , 0 , 0 , - 1 , 0 , - 1 , 0 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i16x8_le_u ( <nl> + ( v128_t ) u16x8 ( 0 , 32767 , 13 , - 32768 , 1 , - 32767 , 42 , - 25536 ) , <nl> + ( v128_t ) u16x8 ( 0 , 13 , 1 , 32767 , - 32767 , 42 , - 25536 , 32767 ) <nl> + ) , <nl> + u16x8 ( - 1 , 0 , 0 , 0 , - 1 , 0 , - 1 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i16x8_ge_s ( <nl> + ( v128_t ) i16x8 ( 0 , 32767 , 13 , - 32768 , 1 , - 32767 , 42 , - 25536 ) , <nl> + ( v128_t ) i16x8 ( 0 , 13 , 1 , 32767 , - 32767 , 42 , - 25536 , 32767 ) <nl> + ) , <nl> + u16x8 ( - 1 , - 1 , - 1 , 0 , - 1 , 0 , - 1 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i16x8_ge_u ( <nl> + ( v128_t ) u16x8 ( 0 , 32767 , 13 , - 32768 , 1 , - 32767 , 42 , - 25536 ) , <nl> + ( v128_t ) u16x8 ( 0 , 13 , 1 , 32767 , - 32767 , 42 , - 25536 , 32767 ) <nl> + ) , <nl> + u16x8 ( - 1 , - 1 , - 1 , - 1 , 0 , - 1 , 0 , - 1 ) <nl> + ) ; <nl> + <nl> + / / i32x4 comparisons <nl> + expect_vec ( <nl> + i32x4_eq ( ( v128_t ) i32x4 ( 0 , - 1 , 53 , - 7 ) , ( v128_t ) i32x4 ( 0 , 53 , - 7 , - 1 ) ) , <nl> + u32x4 ( - 1 , 0 , 0 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i32x4_ne ( ( v128_t ) i32x4 ( 0 , - 1 , 53 , - 7 ) , ( v128_t ) i32x4 ( 0 , 53 , - 7 , - 1 ) ) , <nl> + u32x4 ( 0 , - 1 , - 1 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i32x4_lt_s ( ( v128_t ) i32x4 ( 0 , - 1 , 53 , - 7 ) , ( v128_t ) i32x4 ( 0 , 53 , - 7 , - 1 ) ) , <nl> + u32x4 ( 0 , - 1 , 0 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i32x4_lt_u ( ( v128_t ) u32x4 ( 0 , - 1 , 53 , - 7 ) , ( v128_t ) u32x4 ( 0 , 53 , - 7 , - 1 ) ) , <nl> + u32x4 ( 0 , 0 , - 1 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i32x4_gt_s ( ( v128_t ) i32x4 ( 0 , - 1 , 53 , - 7 ) , ( v128_t ) i32x4 ( 0 , 53 , - 7 , - 1 ) ) , <nl> + u32x4 ( 0 , 0 , - 1 , 0 ) <nl> + ) ;
<nl> + expect_vec ( <nl> + i32x4_gt_u ( ( v128_t ) u32x4 ( 0 , - 1 , 53 , - 7 ) , ( v128_t ) u32x4 ( 0 , 53 , - 7 , - 1 ) ) , <nl> + u32x4 ( 0 , - 1 , 0 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i32x4_le_s ( ( v128_t ) i32x4 ( 0 , - 1 , 53 , - 7 ) , ( v128_t ) i32x4 ( 0 , 53 , - 7 , - 1 ) ) , <nl> + u32x4 ( - 1 , - 1 , 0 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i32x4_le_u ( ( v128_t ) u32x4 ( 0 , - 1 , 53 , - 7 ) , ( v128_t ) u32x4 ( 0 , 53 , - 7 , - 1 ) ) , <nl> + u32x4 ( - 1 , 0 , - 1 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i32x4_ge_s ( ( v128_t ) i32x4 ( 0 , - 1 , 53 , - 7 ) , ( v128_t ) i32x4 ( 0 , 53 , - 7 , - 1 ) ) , <nl> + u32x4 ( - 1 , 0 , - 1 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i32x4_ge_u ( ( v128_t ) u32x4 ( 0 , - 1 , 53 , - 7 ) , ( v128_t ) u32x4 ( 0 , 53 , - 7 , - 1 ) ) , <nl> + u32x4 ( - 1 , - 1 , 0 , 0 ) <nl> + ) ; <nl> + <nl> + / / f32x4 comparisons <nl> + expect_vec ( <nl> + f32x4_eq ( ( v128_t ) f32x4 ( 0 , - 1 , 1 , 0 ) , ( v128_t ) f32x4 ( 0 , 0 , - 1 , 1 ) ) , <nl> + u32x4 ( - 1 , 0 , 0 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_ne ( ( v128_t ) f32x4 ( 0 , - 1 , 1 , 0 ) , ( v128_t ) f32x4 ( 0 , 0 , - 1 , 1 ) ) , <nl> + u32x4 ( 0 , - 1 , - 1 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_lt ( ( v128_t ) f32x4 ( 0 , - 1 , 1 , 0 ) , ( v128_t ) f32x4 ( 0 , 0 , - 1 , 1 ) ) , <nl> + u32x4 ( 0 , - 1 , 0 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_gt ( ( v128_t ) f32x4 ( 0 , - 1 , 1 , 0 ) , ( v128_t ) f32x4 ( 0 , 0 , - 1 , 1 ) ) , <nl> + u32x4 ( 0 , 0 , - 1 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_le ( ( v128_t ) f32x4 ( 0 , - 1 , 1 , 0 ) , ( v128_t ) f32x4 ( 0 , 0 , - 1 , 1 ) ) , <nl> + u32x4 ( - 1 , - 1 , 0 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_ge ( ( v128_t ) f32x4 ( 0 , - 1 , 1 , 0 ) , ( v128_t ) f32x4 ( 0 , 0 , - 1 , 1 ) ) , <nl> + u32x4 ( - 1 , 0 , - 1 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_eq ( ( v128_t ) f32x4 ( NAN , 0 , NAN , INFINITY ) , ( v128_t ) f32x4 ( 0 , NAN ,
NAN , INFINITY ) ) , <nl> + u32x4 ( 0 , 0 , 0 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_ne ( ( v128_t ) f32x4 ( NAN , 0 , NAN , INFINITY ) , ( v128_t ) f32x4 ( 0 , NAN , NAN , INFINITY ) ) , <nl> + u32x4 ( - 1 , - 1 , - 1 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_lt ( ( v128_t ) f32x4 ( NAN , 0 , NAN , INFINITY ) , ( v128_t ) f32x4 ( 0 , NAN , NAN , INFINITY ) ) , <nl> + u32x4 ( 0 , 0 , 0 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_gt ( ( v128_t ) f32x4 ( NAN , 0 , NAN , INFINITY ) , ( v128_t ) f32x4 ( 0 , NAN , NAN , INFINITY ) ) , <nl> + u32x4 ( 0 , 0 , 0 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_le ( ( v128_t ) f32x4 ( NAN , 0 , NAN , INFINITY ) , ( v128_t ) f32x4 ( 0 , NAN , NAN , INFINITY ) ) , <nl> + u32x4 ( 0 , 0 , 0 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_ge ( ( v128_t ) f32x4 ( NAN , 0 , NAN , INFINITY ) , ( v128_t ) f32x4 ( 0 , NAN , NAN , INFINITY ) ) , <nl> + u32x4 ( 0 , 0 , 0 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_eq ( ( v128_t ) f32x4 ( - INFINITY , 0 , NAN , - INFINITY ) , ( v128_t ) f32x4 ( 0 , INFINITY , INFINITY , NAN ) ) , <nl> + u32x4 ( 0 , 0 , 0 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_ne ( ( v128_t ) f32x4 ( - INFINITY , 0 , NAN , - INFINITY ) , ( v128_t ) f32x4 ( 0 , INFINITY , INFINITY , NAN ) ) , <nl> + u32x4 ( - 1 , - 1 , - 1 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_lt ( ( v128_t ) f32x4 ( - INFINITY , 0 , NAN , - INFINITY ) , ( v128_t ) f32x4 ( 0 , INFINITY , INFINITY , NAN ) ) , <nl> + u32x4 ( - 1 , - 1 , 0 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_gt ( ( v128_t ) f32x4 ( - INFINITY , 0 , NAN , - INFINITY ) , ( v128_t ) f32x4 ( 0 , INFINITY , INFINITY , NAN ) ) , <nl> + u32x4 ( 0 , 0 , 0 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_le ( ( v128_t ) f32x4 ( - INFINITY , 0 , NAN , - INFINITY ) , ( v128_t ) f32x4 ( 0 , INFINITY , INFINITY , NAN ) ) , <nl> + u32x4 ( - 1 , - 1 , 0 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_ge ( ( v128_t ) f32x4 ( - 
INFINITY , 0 , NAN , - INFINITY ) , ( v128_t ) f32x4 ( 0 , INFINITY , INFINITY , NAN ) ) , <nl> + u32x4 ( 0 , 0 , 0 , 0 ) <nl> + ) ; <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + / / f64x2 comparisons <nl> + expect_vec ( f64x2_eq ( ( v128_t ) f64x2 ( 0 , 1 ) , ( v128_t ) f64x2 ( 0 , 0 ) ) , u64x2 ( - 1 , 0 ) ) ; <nl> + expect_vec ( f64x2_ne ( ( v128_t ) f64x2 ( 0 , 1 ) , ( v128_t ) f64x2 ( 0 , 0 ) ) , u64x2 ( 0 , - 1 ) ) ; <nl> + expect_vec ( f64x2_lt ( ( v128_t ) f64x2 ( 0 , 1 ) , ( v128_t ) f64x2 ( 0 , 0 ) ) , u64x2 ( 0 , 0 ) ) ; <nl> + expect_vec ( f64x2_gt ( ( v128_t ) f64x2 ( 0 , 1 ) , ( v128_t ) f64x2 ( 0 , 0 ) ) , u64x2 ( 0 , - 1 ) ) ; <nl> + expect_vec ( f64x2_le ( ( v128_t ) f64x2 ( 0 , 1 ) , ( v128_t ) f64x2 ( 0 , 0 ) ) , u64x2 ( - 1 , 0 ) ) ; <nl> + expect_vec ( f64x2_ge ( ( v128_t ) f64x2 ( 0 , 1 ) , ( v128_t ) f64x2 ( 0 , 0 ) ) , u64x2 ( - 1 , - 1 ) ) ; <nl> + expect_vec ( f64x2_eq ( ( v128_t ) f64x2 ( NAN , 0 ) , ( v128_t ) f64x2 ( INFINITY , INFINITY ) ) , u64x2 ( 0 , 0 ) ) ; <nl> + expect_vec ( f64x2_ne ( ( v128_t ) f64x2 ( NAN , 0 ) , ( v128_t ) f64x2 ( INFINITY , INFINITY ) ) , u64x2 ( - 1 , - 1 ) ) ; <nl> + expect_vec ( f64x2_lt ( ( v128_t ) f64x2 ( NAN , 0 ) , ( v128_t ) f64x2 ( INFINITY , INFINITY ) ) , u64x2 ( 0 , - 1 ) ) ; <nl> + expect_vec ( f64x2_gt ( ( v128_t ) f64x2 ( NAN , 0 ) , ( v128_t ) f64x2 ( INFINITY , INFINITY ) ) , u64x2 ( 0 , 0 ) ) ; <nl> + expect_vec ( f64x2_le ( ( v128_t ) f64x2 ( NAN , 0 ) , ( v128_t ) f64x2 ( INFINITY , INFINITY ) ) , u64x2 ( 0 , - 1 ) ) ; <nl> + expect_vec ( f64x2_ge ( ( v128_t ) f64x2 ( NAN , 0 ) , ( v128_t ) f64x2 ( INFINITY , INFINITY ) ) , u64x2 ( 0 , 0 ) ) ; <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + / / bitwise operations <nl> + expect_vec ( v128_not ( ( v128_t ) i32x4 ( 0 , - 1 , 0 , - 1 ) ) , ( v128_t ) i32x4 ( - 1 , 0 , - 1 , 0 ) ) ; <nl> + expect_vec ( <nl> + v128_and ( ( v128_t ) i32x4 ( 0 , 0 , - 1 , - 1 ) , ( v128_t ) i32x4 ( 0 , - 1 , 0 , - 1 ) ) , <nl>
+ i32x4 ( 0 , 0 , 0 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + v128_or ( ( v128_t ) i32x4 ( 0 , 0 , - 1 , - 1 ) , ( v128_t ) i32x4 ( 0 , - 1 , 0 , - 1 ) ) , <nl> + i32x4 ( 0 , - 1 , - 1 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + v128_xor ( ( v128_t ) i32x4 ( 0 , 0 , - 1 , - 1 ) , ( v128_t ) i32x4 ( 0 , - 1 , 0 , - 1 ) ) , <nl> + i32x4 ( 0 , - 1 , - 1 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + v128_bitselect ( <nl> + ( v128_t ) i32x4 ( 0xAAAAAAAA , 0xAAAAAAAA , 0xAAAAAAAA , 0xAAAAAAAA ) , <nl> + ( v128_t ) i32x4 ( 0xBBBBBBBB , 0xBBBBBBBB , 0xBBBBBBBB , 0xBBBBBBBB ) , <nl> + ( v128_t ) i32x4 ( 0xF0F0F0F0 , 0xFFFFFFFF , 0x00000000 , 0xFF00FF00 ) <nl> + ) , <nl> + i32x4 ( 0xABABABAB , 0xAAAAAAAA , 0xBBBBBBBB , 0xAABBAABB ) <nl> + ) ; <nl> + <nl> + / / i8x16 arithmetic <nl> + expect_vec ( <nl> + i8x16_neg ( ( v128_t ) i8x16 ( 0 , 1 , 42 , - 3 , - 56 , 127 , - 128 , - 126 , 0 , - 1 , - 42 , 3 , 56 , - 127 , - 128 , 126 ) ) , <nl> + i8x16 ( 0 , - 1 , - 42 , 3 , 56 , - 127 , - 128 , 126 , 0 , 1 , 42 , - 3 , - 56 , 127 , - 128 , - 126 ) <nl> + ) ; <nl> + expect_eq ( i8x16_any_true ( ( v128_t ) i8x16 ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) ) , 0 ) ; <nl> + expect_eq ( i8x16_any_true ( ( v128_t ) i8x16 ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 1 , 0 , 0 , 0 , 0 ) ) , 1 ) ; <nl> + expect_eq ( i8x16_any_true ( ( v128_t ) i8x16 ( 1 , 1 , 1 , 1 , 1 , 0 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 ) ) , 1 ) ; <nl> + expect_eq ( i8x16_any_true ( ( v128_t ) i8x16 ( 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 ) ) , 1 ) ; <nl> + expect_eq ( i8x16_all_true ( ( v128_t ) i8x16 ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) ) , 0 ) ; <nl> + expect_eq ( i8x16_all_true ( ( v128_t ) i8x16 ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 1 , 0 , 0 , 0 , 0 ) ) , 0 ) ; <nl> + expect_eq ( i8x16_all_true ( ( v128_t ) i8x16 ( 1 , 1 , 1 , 1 , 1 , 0 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 ) ) , 0 ) ; <nl> + / / https : / / bugs . 
chromium . org / p / v8 / issues / detail ? id = 9372 <nl> + / / expect_eq ( i8x16_all_true ( ( v128_t ) i8x16 ( 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 ) ) , 1 ) ; <nl> + expect_vec ( <nl> + i8x16_shl ( ( v128_t ) i8x16 ( 0 , 1 , 2 , 4 , 8 , 16 , 32 , 64 , - 128 , 3 , 6 , 12 , 24 , 48 , 96 , - 64 ) , 1 ) , <nl> + i8x16 ( 0 , 2 , 4 , 8 , 16 , 32 , 64 , - 128 , 0 , 6 , 12 , 24 , 48 , 96 , - 64 , - 128 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_shl ( ( v128_t ) i8x16 ( 0 , 1 , 2 , 4 , 8 , 16 , 32 , 64 , - 128 , 3 , 6 , 12 , 24 , 48 , 96 , - 64 ) , 8 ) , <nl> + i8x16 ( 0 , 1 , 2 , 4 , 8 , 16 , 32 , 64 , - 128 , 3 , 6 , 12 , 24 , 48 , 96 , - 64 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_shr_u ( ( v128_t ) u8x16 ( 0 , 1 , 2 , 4 , 8 , 16 , 32 , 64 , - 128 , 3 , 6 , 12 , 24 , 48 , 96 , - 64 ) , 1 ) , <nl> + u8x16 ( 0 , 0 , 1 , 2 , 4 , 8 , 16 , 32 , 64 , 1 , 3 , 6 , 12 , 24 , 48 , 96 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_shr_u ( ( v128_t ) u8x16 ( 0 , 1 , 2 , 4 , 8 , 16 , 32 , 64 , - 128 , 3 , 6 , 12 , 24 , 48 , 96 , - 64 ) , 8 ) , <nl> + u8x16 ( 0 , 1 , 2 , 4 , 8 , 16 , 32 , 64 , - 128 , 3 , 6 , 12 , 24 , 48 , 96 , - 64 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_shr_s ( ( v128_t ) i8x16 ( 0 , 1 , 2 , 4 , 8 , 16 , 32 , 64 , - 128 , 3 , 6 , 12 , 24 , 48 , 96 , - 64 ) , 1 ) , <nl> + i8x16 ( 0 , 0 , 1 , 2 , 4 , 8 , 16 , 32 , - 64 , 1 , 3 , 6 , 12 , 24 , 48 , - 32 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_shr_s ( ( v128_t ) i8x16 ( 0 , 1 , 2 , 4 , 8 , 16 , 32 , 64 , - 128 , 3 , 6 , 12 , 24 , 48 , 96 , - 64 ) , 8 ) , <nl> + i8x16 ( 0 , 1 , 2 , 4 , 8 , 16 , 32 , 64 , - 128 , 3 , 6 , 12 , 24 , 48 , 96 , - 64 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_add ( <nl> + ( v128_t ) i8x16 ( 0 , 42 , 255 , 128 , 127 , 129 , 6 , 29 , 103 , 196 , 231 , 142 , 17 , 250 , 1 , 73 ) , <nl> + ( v128_t ) i8x16 ( 3 , 231 , 1 , 128 , 129 , 6 , 103 , 17 , 42 , 29 , 73 , 42 , 0 , 255 , 127 , 142 ) <nl> + ) , <nl> + i8x16 ( 3 , 17 , 0 , 0 , 0 , 135 , 109 , 
46 , 145 , 225 , 48 , 184 , 17 , 249 , 128 , 215 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_add_saturate_s ( <nl> + ( v128_t ) i8x16 ( 0 , 42 , 255 , 128 , 127 , 129 , 6 , 29 , 103 , 196 , 231 , 142 , 17 , 250 , 1 , 73 ) , <nl> + ( v128_t ) i8x16 ( 3 , 231 , 1 , 128 , 129 , 6 , 103 , 17 , 42 , 29 , 73 , 42 , 0 , 255 , 127 , 142 ) <nl> + ) , <nl> + i8x16 ( 3 , 17 , 0 , 128 , 0 , 135 , 109 , 46 , 127 , 225 , 48 , 184 , 17 , 249 , 127 , 215 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_add_saturate_u ( <nl> + ( v128_t ) u8x16 ( 0 , 42 , 255 , 128 , 127 , 129 , 6 , 29 , 103 , 196 , 231 , 142 , 17 , 250 , 1 , 73 ) , <nl> + ( v128_t ) u8x16 ( 3 , 231 , 1 , 128 , 129 , 6 , 103 , 17 , 42 , 29 , 73 , 42 , 0 , 255 , 127 , 142 ) <nl> + ) , <nl> + u8x16 ( 3 , 255 , 255 , 255 , 255 , 135 , 109 , 46 , 145 , 225 , 255 , 184 , 17 , 255 , 128 , 215 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_sub ( <nl> + ( v128_t ) i8x16 ( 0 , 42 , 255 , 128 , 127 , 129 , 6 , 29 , 103 , 196 , 231 , 142 , 17 , 250 , 1 , 73 ) , <nl> + ( v128_t ) i8x16 ( 3 , 231 , 1 , 128 , 129 , 6 , 103 , 17 , 42 , 29 , 73 , 42 , 0 , 255 , 127 , 142 ) <nl> + ) , <nl> + i8x16 ( 253 , 67 , 254 , 0 , 254 , 123 , 159 , 12 , 61 , 167 , 158 , 100 , 17 , 251 , 130 , 187 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_sub_saturate_s ( <nl> + ( v128_t ) i8x16 ( 0 , 42 , 255 , 128 , 127 , 129 , 6 , 29 , 103 , 196 , 231 , 142 , 17 , 250 , 1 , 73 ) , <nl> + ( v128_t ) i8x16 ( 3 , 231 , 1 , 128 , 129 , 6 , 103 , 17 , 42 , 29 , 73 , 42 , 0 , 255 , 127 , 142 ) <nl> + ) , <nl> + i8x16 ( 253 , 67 , 254 , 0 , 127 , 128 , 159 , 12 , 61 , 167 , 158 , 128 , 17 , 251 , 130 , 127 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_sub_saturate_u ( <nl> + ( v128_t ) u8x16 ( 0 , 42 , 255 , 128 , 127 , 129 , 6 , 29 , 103 , 196 , 231 , 142 , 17 , 250 , 1 , 73 ) , <nl> + ( v128_t ) u8x16 ( 3 , 231 , 1 , 128 , 129 , 6 , 103 , 17 , 42 , 29 , 73 , 42 , 0 , 255 , 127 , 142 ) <nl> + ) , <nl> + u8x16 ( 0 , 0 , 254 , 0 , 0 , 123 , 0 , 12 , 61 , 167 , 158 
, 100 , 17 , 0 , 0 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i8x16_mul ( <nl> + ( v128_t ) i8x16 ( 0 , 42 , 255 , 128 , 127 , 129 , 6 , 29 , 103 , 196 , 231 , 142 , 17 , 250 , 1 , 73 ) , <nl> + ( v128_t ) i8x16 ( 3 , 231 , 1 , 128 , 129 , 6 , 103 , 17 , 42 , 29 , 73 , 42 , 0 , 255 , 127 , 142 ) <nl> + ) , <nl> + i8x16 ( 0 , 230 , 255 , 0 , 255 , 6 , 106 , 237 , 230 , 52 , 223 , 76 , 0 , 6 , 127 , 126 ) <nl> + ) ; <nl> + <nl> + / / i16x8 arithmetic <nl> + expect_vec ( <nl> + i16x8_neg ( ( v128_t ) i16x8 ( 0 , 1 , 42 , - 3 , - 56 , 32767 , - 32768 , 32766 ) ) , <nl> + i16x8 ( 0 , - 1 , - 42 , 3 , 56 , - 32767 , - 32768 , - 32766 ) <nl> + ) ; <nl> + expect_eq ( i16x8_any_true ( ( v128_t ) i16x8 ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) ) , 0 ) ; <nl> + expect_eq ( i16x8_any_true ( ( v128_t ) i16x8 ( 0 , 0 , 1 , 0 , 0 , 0 , 0 , 0 ) ) , 1 ) ; <nl> + expect_eq ( i16x8_any_true ( ( v128_t ) i16x8 ( 1 , 1 , 1 , 1 , 1 , 0 , 1 , 1 ) ) , 1 ) ; <nl> + expect_eq ( i16x8_any_true ( ( v128_t ) i16x8 ( 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 ) ) , 1 ) ; <nl> + expect_eq ( i16x8_all_true ( ( v128_t ) i16x8 ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) ) , 0 ) ; <nl> + expect_eq ( i16x8_all_true ( ( v128_t ) i16x8 ( 0 , 0 , 1 , 0 , 0 , 0 , 0 , 0 ) ) , 0 ) ; <nl> + expect_eq ( i16x8_all_true ( ( v128_t ) i16x8 ( 1 , 1 , 1 , 1 , 1 , 0 , 1 , 1 ) ) , 0 ) ; <nl> + / / expect_eq ( i16x8_all_true ( ( v128_t ) i16x8 ( 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 ) ) , 1 ) ; <nl> + expect_vec ( <nl> + i16x8_shl ( ( v128_t ) i16x8 ( 0 , 8 , 16 , 128 , 256 , 2048 , 4096 , - 32768 ) , 1 ) , <nl> + i16x8 ( 0 , 16 , 32 , 256 , 512 , 4096 , 8192 , 0 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i16x8_shl ( ( v128_t ) i16x8 ( 0 , 8 , 16 , 128 , 256 , 2048 , 4096 , - 32768 ) , 16 ) , <nl> + i16x8 ( 0 , 8 , 16 , 128 , 256 , 2048 , 4096 , - 32768 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i16x8_shr_u ( ( v128_t ) u16x8 ( 0 , 8 , 16 , 128 , 256 , 2048 , 4096 , - 32768 ) , 1 ) , <nl> + u16x8 ( 0 , 4 , 8 , 64 , 128 , 1024 , 2048 , 16384 ) <nl> + ) ; <nl> 
+ expect_vec ( <nl> + i16x8_shr_u ( ( v128_t ) u16x8 ( 0 , 8 , 16 , 128 , 256 , 2048 , 4096 , - 32768 ) , 16 ) , <nl> + u16x8 ( 0 , 8 , 16 , 128 , 256 , 2048 , 4096 , - 32768 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i16x8_shr_s ( ( v128_t ) i16x8 ( 0 , 8 , 16 , 128 , 256 , 2048 , 4096 , - 32768 ) , 1 ) , <nl> + i16x8 ( 0 , 4 , 8 , 64 , 128 , 1024 , 2048 , - 16384 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i16x8_shr_s ( ( v128_t ) i16x8 ( 0 , 8 , 16 , 128 , 256 , 2048 , 4096 , - 32768 ) , 16 ) , <nl> + i16x8 ( 0 , 8 , 16 , 128 , 256 , 2048 , 4096 , - 32768 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i16x8_add ( <nl> + ( v128_t ) i16x8 ( 0 , - 256 , - 32768 , 32512 , - 32512 , - 6400 , - 1536 , 32766 ) , <nl> + ( v128_t ) i16x8 ( 768 , 1 , - 32768 , - 32512 , 1536 , 18688 , - 256 , 2 ) <nl> + ) , <nl> + i16x8 ( 768 , - 255 , 0 , 0 , - 30976 , 12288 , - 1792 , - 32768 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i16x8_add_saturate_s ( <nl> + ( v128_t ) i16x8 ( 0 , - 256 , - 32768 , 32512 , - 32512 , - 6400 , - 1536 , 32766 ) , <nl> + ( v128_t ) i16x8 ( 768 , 1 , - 32768 , - 32512 , 1536 , 18688 , - 256 , 2 ) <nl> + ) , <nl> + i16x8 ( 768 , - 255 , - 32768 , 0 , - 30976 , 12288 , - 1792 , 32767 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i16x8_add_saturate_u ( <nl> + ( v128_t ) u16x8 ( 0 , - 256 , - 32768 , 32512 , - 32512 , - 6400 , - 1536 , 32766 ) , <nl> + ( v128_t ) u16x8 ( 768 , 1 , - 32768 , - 32512 , 1536 , 18688 , - 256 , 2 ) <nl> + ) , <nl> + u16x8 ( 768 , - 255 , - 1 , - 1 , - 30976 , - 1 , - 1 , - 32768 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i16x8_sub ( <nl> + ( v128_t ) i16x8 ( 0 , - 256 , - 32768 , 32512 , - 32512 , - 6400 , - 1536 , 32766 ) , <nl> + ( v128_t ) i16x8 ( 768 , 1 , - 32768 , - 32512 , 1536 , 18688 , - 256 , 2 ) <nl> + ) , <nl> + i16x8 ( - 768 , - 257 , 0 , - 512 , 31488 , - 25088 , - 1280 , 32764 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i16x8_sub_saturate_s ( <nl> + ( v128_t ) i16x8 ( 0 , - 256 , - 32768 , 32512 , - 32512 , - 6400 , - 1536 , 32766 ) , <nl> + 
( v128_t ) i16x8 ( 768 , 1 , - 32768 , - 32512 , 1536 , 18688 , - 256 , 2 ) <nl> + ) , <nl> + i16x8 ( - 768 , - 257 , 0 , 32767 , - 32768 , - 25088 , - 1280 , 32764 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i16x8_sub_saturate_u ( <nl> + ( v128_t ) u16x8 ( 0 , - 256 , - 32768 , 32512 , - 32512 , - 6400 , - 1536 , 32766 ) , <nl> + ( v128_t ) u16x8 ( 768 , 1 , - 32768 , - 32512 , 1536 , 18688 , - 256 , 2 ) <nl> + ) , <nl> + u16x8 ( 0 , - 257 , 0 , 0 , 31488 , - 25088 , 0 , 32764 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i16x8_mul ( <nl> + ( v128_t ) i16x8 ( 0 , - 256 , - 32768 , 32512 , - 32512 , - 6400 , - 1536 , 32766 ) , <nl> + ( v128_t ) i16x8 ( 768 , 1 , - 32768 , - 32512 , 1536 , 18688 , - 256 , 2 ) <nl> + ) , <nl> + i16x8 ( 0 , - 256 , 0 , 0 , 0 , 0 , 0 , - 4 ) <nl> + ) ; <nl> + <nl> + / / i32x4 arithmetic <nl> + expect_vec ( <nl> + i32x4_neg ( ( v128_t ) i32x4 ( 0 , 1 , 0x80000000 , 0x7fffffff ) ) , <nl> + i32x4 ( 0 , - 1 , 0x80000000 , 0x80000001 ) <nl> + ) ; <nl> + expect_eq ( i32x4_any_true ( ( v128_t ) i32x4 ( 0 , 0 , 0 , 0 ) ) , 0 ) ; <nl> + expect_eq ( i32x4_any_true ( ( v128_t ) i32x4 ( 0 , 0 , 1 , 0 ) ) , 1 ) ; <nl> + expect_eq ( i32x4_any_true ( ( v128_t ) i32x4 ( 1 , 0 , 1 , 1 ) ) , 1 ) ; <nl> + expect_eq ( i32x4_any_true ( ( v128_t ) i32x4 ( 1 , 1 , 1 , 1 ) ) , 1 ) ; <nl> + expect_eq ( i32x4_all_true ( ( v128_t ) i32x4 ( 0 , 0 , 0 , 0 ) ) , 0 ) ; <nl> + expect_eq ( i32x4_all_true ( ( v128_t ) i32x4 ( 0 , 0 , 1 , 0 ) ) , 0 ) ; <nl> + expect_eq ( i32x4_all_true ( ( v128_t ) i32x4 ( 1 , 0 , 1 , 1 ) ) , 0 ) ; <nl> + / / expect_eq ( i32x4_all_true ( ( v128_t ) i32x4 ( 1 , 1 , 1 , 1 ) ) , 1 ) ; <nl> + expect_vec ( <nl> + i32x4_shl ( ( v128_t ) i32x4 ( 1 , 0x40000000 , 0x80000000 , - 1 ) , 1 ) , <nl> + i32x4 ( 2 , 0x80000000 , 0 , - 2 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i32x4_shl ( ( v128_t ) i32x4 ( 1 , 0x40000000 , 0x80000000 , - 1 ) , 32 ) , <nl> + i32x4 ( 1 , 0x40000000 , 0x80000000 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i32x4_shr_s ( ( v128_t ) 
i32x4 ( 1 , 0x40000000 , 0x80000000 , - 1 ) , 1 ) , <nl> + i32x4 ( 0 , 0x20000000 , 0xc0000000 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i32x4_shr_s ( ( v128_t ) i32x4 ( 1 , 0x40000000 , 0x80000000 , - 1 ) , 32 ) , <nl> + i32x4 ( 1 , 0x40000000 , 0x80000000 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i32x4_shr_u ( ( v128_t ) u32x4 ( 1 , 0x40000000 , 0x80000000 , - 1 ) , 1 ) , <nl> + u32x4 ( 0 , 0x20000000 , 0x40000000 , 0x7fffffff ) <nl> + ) ; <nl> + expect_vec ( <nl> + i32x4_shr_u ( ( v128_t ) u32x4 ( 1 , 0x40000000 , 0x80000000 , - 1 ) , 32 ) , <nl> + u32x4 ( 1 , 0x40000000 , 0x80000000 , - 1 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i32x4_add ( ( v128_t ) i32x4 ( 0 , 0x80000001 , 42 , 5 ) , ( v128_t ) i32x4 ( 0 , 0x80000001 , 5 , 42 ) ) , <nl> + i32x4 ( 0 , 2 , 47 , 47 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i32x4_sub ( ( v128_t ) i32x4 ( 0 , 2 , 47 , 47 ) , ( v128_t ) i32x4 ( 0 , 0x80000001 , 42 , 5 ) ) , <nl> + i32x4 ( 0 , 0x80000001 , 5 , 42 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i32x4_mul ( ( v128_t ) i32x4 ( 0 , 0x80000001 , 42 , 5 ) , ( v128_t ) i32x4 ( 0 , 0x80000001 , 42 , 5 ) ) , <nl> + i32x4 ( 0 , 1 , 1764 , 25 ) <nl> + ) ; <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + / / i64x2 arithmetic <nl> + expect_vec ( i64x2_neg ( ( v128_t ) i64x2 ( 0x8000000000000000 , 42 ) ) , i64x2 ( 0x8000000000000000 , - 42 ) ) ; <nl> + expect_eq ( i64x2_any_true ( ( v128_t ) i64x2 ( 0 , 0 ) ) , 0 ) ; <nl> + expect_eq ( i64x2_any_true ( ( v128_t ) i64x2 ( 1 , 0 ) ) , 1 ) ; <nl> + expect_eq ( i64x2_any_true ( ( v128_t ) i64x2 ( 1 , 1 ) ) , 1 ) ; <nl> + expect_eq ( i64x2_all_true ( ( v128_t ) i64x2 ( 0 , 0 ) ) , 0 ) ; <nl> + expect_eq ( i64x2_all_true ( ( v128_t ) i64x2 ( 1 , 0 ) ) , 0 ) ; <nl> + / / expect_eq ( i64x2_all_true ( ( v128_t ) i64x2 ( 1 , 1 ) ) , 1 ) ; <nl> + <nl> + expect_vec ( i64x2_shl ( ( v128_t ) i64x2 ( 1 , 0x8000000000000000 ) , 1 ) , i64x2 ( 2 , 0 ) ) ; <nl> + expect_vec ( i64x2_shl ( ( v128_t ) i64x2 ( 1 , 0x8000000000000000 ) , 
64 ) , i64x2 ( 1 , 0x8000000000000000 ) ) ; <nl> + expect_vec ( i64x2_shr_s ( ( v128_t ) i64x2 ( 1 , 0x8000000000000000 ) , 1 ) , i64x2 ( 0 , 0xc000000000000000 ) ) ; <nl> + expect_vec ( i64x2_shr_s ( ( v128_t ) i64x2 ( 1 , 0x8000000000000000 ) , 64 ) , i64x2 ( 1 , 0x8000000000000000 ) ) ; <nl> + expect_vec ( i64x2_shr_u ( ( v128_t ) u64x2 ( 1 , 0x8000000000000000 ) , 1 ) , u64x2 ( 0 , 0x4000000000000000 ) ) ; <nl> + expect_vec ( i64x2_shr_u ( ( v128_t ) u64x2 ( 1 , 0x8000000000000000 ) , 64 ) , u64x2 ( 1 , 0x8000000000000000 ) ) ; <nl> + expect_vec ( <nl> + i64x2_add ( ( v128_t ) i64x2 ( 0x8000000000000001 , 42 ) , ( v128_t ) i64x2 ( 0x8000000000000001 , 0 ) ) , <nl> + i64x2 ( 2 , 42 ) <nl> + ) ; <nl> + expect_vec ( <nl> + i64x2_sub ( ( v128_t ) i64x2 ( 2 , 42 ) , ( v128_t ) i64x2 ( 0x8000000000000001 , 0 ) ) , <nl> + i64x2 ( 0x8000000000000001 , 42 ) <nl> + ) ; <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + / / f32x4 arithmetic <nl> + expect_vec ( f32x4_abs ( ( v128_t ) f32x4 ( - 0 . , NAN , - INFINITY , 5 ) ) , f32x4 ( 0 , NAN , INFINITY , 5 ) ) ; <nl> + expect_vec ( f32x4_neg ( ( v128_t ) f32x4 ( - 0 . , NAN , - INFINITY , 5 ) ) , f32x4 ( 0 , - NAN , INFINITY , - 5 ) ) ; <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + expect_vec ( f32x4_sqrt ( ( v128_t ) f32x4 ( - 0 . , NAN , INFINITY , 4 ) ) , f32x4 ( - 0 . 
, NAN , INFINITY , 2 ) ) ; <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + expect_vec ( <nl> + f32x4_add ( ( v128_t ) f32x4 ( NAN , - NAN , INFINITY , 42 ) , ( v128_t ) f32x4 ( 42 , INFINITY , INFINITY , 1 ) ) , <nl> + f32x4 ( NAN , - NAN , INFINITY , 43 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_sub ( ( v128_t ) f32x4 ( NAN , - NAN , INFINITY , 42 ) , ( v128_t ) f32x4 ( 42 , INFINITY , - INFINITY , 1 ) ) , <nl> + f32x4 ( NAN , - NAN , INFINITY , 41 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_mul ( ( v128_t ) f32x4 ( NAN , - NAN , INFINITY , 42 ) , ( v128_t ) f32x4 ( 42 , INFINITY , INFINITY , 2 ) ) , <nl> + f32x4 ( NAN , - NAN , INFINITY , 84 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_div ( ( v128_t ) f32x4 ( NAN , - NAN , INFINITY , 42 ) , ( v128_t ) f32x4 ( 42 , INFINITY , 2 , 2 ) ) , <nl> + f32x4 ( NAN , - NAN , INFINITY , 21 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_min ( ( v128_t ) f32x4 ( - 0 . , 0 , NAN , 5 ) , ( v128_t ) f32x4 ( 0 , - 0 . , 5 , NAN ) ) , <nl> + f32x4 ( - 0 . , - 0 . , NAN , NAN ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_max ( ( v128_t ) f32x4 ( - 0 . , 0 , NAN , 5 ) , ( v128_t ) f32x4 ( 0 , - 0 . , 5 , NAN ) ) , <nl> + f32x4 ( 0 , 0 , NAN , NAN ) <nl> + ) ; <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + / / f64x2 arithmetic <nl> + expect_vec ( f64x2_abs ( ( v128_t ) f64x2 ( - 0 . , NAN ) ) , f64x2 ( 0 , NAN ) ) ; <nl> + expect_vec ( f64x2_abs ( ( v128_t ) f64x2 ( - INFINITY , 5 ) ) , f64x2 ( INFINITY , 5 ) ) ; <nl> + <nl> + expect_vec ( f64x2_neg ( ( v128_t ) f64x2 ( - 0 . , NAN ) ) , f64x2 ( 0 , - NAN ) ) ; <nl> + expect_vec ( f64x2_neg ( ( v128_t ) f64x2 ( - INFINITY , 5 ) ) , f64x2 ( INFINITY , - 5 ) ) ; <nl> + <nl> + expect_vec ( f64x2_sqrt ( ( v128_t ) f64x2 ( - 0 . , NAN ) ) , f64x2 ( - 0 . 
, NAN ) ) ; <nl> + expect_vec ( f64x2_sqrt ( ( v128_t ) f64x2 ( INFINITY , 4 ) ) , f64x2 ( INFINITY , 2 ) ) ; <nl> + <nl> + expect_vec ( <nl> + f64x2_add ( ( v128_t ) f64x2 ( NAN , - NAN ) , ( v128_t ) f64x2 ( 42 , INFINITY ) ) , <nl> + f64x2 ( NAN , - NAN ) <nl> + ) ; <nl> + expect_vec ( <nl> + f64x2_add ( ( v128_t ) f64x2 ( INFINITY , 42 ) , ( v128_t ) f64x2 ( INFINITY , 1 ) ) , <nl> + f64x2 ( INFINITY , 43 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f64x2_sub ( ( v128_t ) f64x2 ( NAN , - NAN ) , ( v128_t ) f64x2 ( 42 , INFINITY ) ) , <nl> + f64x2 ( NAN , - NAN ) <nl> + ) ; <nl> + expect_vec ( <nl> + f64x2_sub ( ( v128_t ) f64x2 ( INFINITY , 42 ) , ( v128_t ) f64x2 ( - INFINITY , 1 ) ) , <nl> + f64x2 ( INFINITY , 41 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f64x2_mul ( ( v128_t ) f64x2 ( NAN , - NAN ) , ( v128_t ) f64x2 ( 42 , INFINITY ) ) , <nl> + f64x2 ( NAN , - NAN ) <nl> + ) ; <nl> + expect_vec ( <nl> + f64x2_mul ( ( v128_t ) f64x2 ( INFINITY , 42 ) , ( v128_t ) f64x2 ( INFINITY , 2 ) ) , <nl> + f64x2 ( INFINITY , 84 ) <nl> + ) ; <nl> + expect_vec ( <nl> + f64x2_div ( ( v128_t ) f64x2 ( NAN , - NAN ) , ( v128_t ) f64x2 ( 42 , INFINITY ) ) , <nl> + f64x2 ( NAN , - NAN ) <nl> + ) ; <nl> + expect_vec ( <nl> + f64x2_div ( ( v128_t ) f64x2 ( INFINITY , 42 ) , ( v128_t ) f64x2 ( 2 , 2 ) ) , <nl> + f64x2 ( INFINITY , 21 ) <nl> + ) ; <nl> + <nl> + expect_vec ( f64x2_min ( ( v128_t ) f64x2 ( - 0 . , 0 ) , ( v128_t ) f64x2 ( 0 , - 0 ) ) , f64x2 ( - 0 . , - 0 ) ) ; <nl> + expect_vec ( f64x2_min ( ( v128_t ) f64x2 ( NAN , 5 ) , ( v128_t ) f64x2 ( 5 , NAN ) ) , f64x2 ( NAN , NAN ) ) ; <nl> + expect_vec ( f64x2_max ( ( v128_t ) f64x2 ( - 0 . 
, 0 ) , ( v128_t ) f64x2 ( 0 , - 0 ) ) , f64x2 ( 0 , 0 ) ) ; <nl> + expect_vec ( f64x2_max ( ( v128_t ) f64x2 ( NAN , 5 ) , ( v128_t ) f64x2 ( 5 , NAN ) ) , f64x2 ( NAN , NAN ) ) ; <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + / / conversions <nl> + expect_vec ( <nl> + i32x4_trunc_s_f32x4_sat ( ( v128_t ) f32x4 ( 42 , NAN , INFINITY , - INFINITY ) ) , <nl> + i32x4 ( 42 , 0 , 2147483647 , - 2147483648ll ) <nl> + ) ; <nl> + expect_vec ( <nl> + i32x4_trunc_u_f32x4_sat ( ( v128_t ) f32x4 ( 42 , NAN , INFINITY , - INFINITY ) ) , <nl> + u32x4 ( 42 , 0 , 4294967295ull , 0 ) <nl> + ) ; <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + expect_vec ( i64x2_trunc_s_f64x2_sat ( ( v128_t ) f64x2 ( 42 , NAN ) ) , i64x2 ( 42 , 0 ) ) ; <nl> + expect_vec ( <nl> + i64x2_trunc_s_f64x2_sat ( ( v128_t ) f64x2 ( INFINITY , - INFINITY ) ) , <nl> + i64x2 ( 9223372036854775807ll , - 9223372036854775807ll - 1 ) <nl> + ) ; <nl> + expect_vec ( i64x2_trunc_u_f64x2_sat ( ( v128_t ) f64x2 ( 42 , NAN ) ) , u64x2 ( 42 , 0 ) ) ; <nl> + expect_vec ( <nl> + i64x2_trunc_u_f64x2_sat ( ( v128_t ) f64x2 ( INFINITY , - INFINITY ) ) , <nl> + u64x2 ( 18446744073709551615ull , 0 ) <nl> + ) ; <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + expect_vec ( <nl> + f32x4_convert_s_i32x4 ( ( v128_t ) i32x4 ( 0 , - 1 , 2147483647 , - 2147483647 - 1 ) ) , <nl> + f32x4 ( 0 , - 1 , 2147483648 . , - 2147483648 . ) <nl> + ) ; <nl> + expect_vec ( <nl> + f32x4_convert_u_i32x4 ( ( v128_t ) u32x4 ( 0 , - 1 , 2147483647 , - 2147483647 - 1 ) ) , <nl> + f32x4 ( 0 , 4294967296 . , 2147483648 . , 2147483648 . ) <nl> + ) ; <nl> + <nl> + # ifdef __wasm_unimplemented_simd128__ <nl> + <nl> + expect_vec ( f64x2_convert_s_i64x2 ( ( v128_t ) i64x2 ( 0 , - 1 ) ) , f64x2 ( 0 , - 1 ) ) ; <nl> + expect_vec ( <nl> + f64x2_convert_s_i64x2 ( ( v128_t ) i64x2 ( 9223372036854775807 , - 9223372036854775807 - 1 ) ) , <nl> + f64x2 ( 9223372036854775807 . , - 9223372036854775808 . 
) <nl> + ) ; <nl> + expect_vec ( <nl> + f64x2_convert_u_i64x2 ( ( v128_t ) u64x2 ( 0 , - 1 ) ) , <nl> + f64x2 ( 0 , 18446744073709551616 . ) <nl> + ) ; <nl> + expect_vec ( <nl> + f64x2_convert_u_i64x2 ( ( v128_t ) i64x2 ( 9223372036854775807 , - 9223372036854775808 . ) ) , <nl> + f64x2 ( 9223372036854775807 . , 9223372036854775808 . ) <nl> + ) ; <nl> + <nl> + # endif / / __wasm_unimplemented_simd128__ <nl> + <nl> + if ( failures = = 0 ) { <nl> + printf ( " Success ! \ n " ) ; <nl> + } else { <nl> + printf ( " Failed : ( \ n " ) ; <nl> + } <nl> + } <nl>
|
Wasm SIMD intrinsics ( )
|
emscripten-core/emscripten
|
7da0dc898a489e8186736f4d9d2ea2c22083e06d
|
2019-06-26T00:22:01Z
|
mmm a / src / Functions / FunctionsExternalDictionaries . h <nl> ppp b / src / Functions / FunctionsExternalDictionaries . h <nl> class FunctionDictHelper <nl> <nl> bool isDictGetFunctionInjective ( const Block & sample_block ) <nl> { <nl> + / / / Assume non - injective by default <nl> + if ( ! sample_block ) <nl> + return false ; <nl> + <nl> if ( sample_block . columns ( ) ! = 3 & & sample_block . columns ( ) ! = 4 ) <nl> throw Exception { " Function dictGet . . . takes 3 or 4 arguments " , ErrorCodes : : NUMBER_OF_ARGUMENTS_DOESNT_MATCH } ; <nl> <nl> mmm a / src / Functions / IFunction . h <nl> ppp b / src / Functions / IFunction . h <nl> class IFunctionBase <nl> * But we assume , that it is injective . This could be documented as implementation - specific behaviour . <nl> * <nl> * sample_block should contain data types of arguments and values of constants , if relevant . <nl> + * NOTE : to check is function injective with any arguments , you can pass <nl> + * empty block as sample_block ( since most of the time function will <nl> + * ignore it anyway , and creating arguments just for checking is <nl> + * function injective or not is overkill ) . <nl> * / <nl> virtual bool isInjective ( const Block & / * sample_block * / ) const { return false ; } <nl> <nl> mmm a / src / Interpreters / SyntaxAnalyzer . cpp <nl> ppp b / src / Interpreters / SyntaxAnalyzer . cpp <nl> void executeScalarSubqueries ( ASTPtr & query , const Context & context , size_t sub <nl> <nl> const std : : unordered_set < String > possibly_injective_function_names <nl> { <nl> + " dictGet " , <nl> " dictGetString " , <nl> " dictGetUInt8 " , <nl> " dictGetUInt16 " , <nl> void optimizeGroupBy ( ASTSelectQuery * select_query , const NameSet & source_colum <nl> continue ; <nl> } <nl> <nl> - const auto & dict_name = function - > arguments - > children [ 0 ] - > as < ASTLiteral & > ( ) . value . safeGet < String > ( ) ; <nl> - const auto & dict_ptr = context . getExternalDictionariesLoader ( ) . 
getDictionary ( dict_name ) ; <nl> - const auto & attr_name = function - > arguments - > children [ 1 ] - > as < ASTLiteral & > ( ) . value . safeGet < String > ( ) ; <nl> + const auto * dict_name_ast = function - > arguments - > children [ 0 ] - > as < ASTLiteral > ( ) ; <nl> + const auto * attr_name_ast = function - > arguments - > children [ 1 ] - > as < ASTLiteral > ( ) ; <nl> + if ( ! dict_name_ast | | ! attr_name_ast ) <nl> + { <nl> + + + i ; <nl> + continue ; <nl> + } <nl> + <nl> + const auto & dict_name = dict_name_ast - > value . safeGet < String > ( ) ; <nl> + const auto & attr_name = attr_name_ast - > value . safeGet < String > ( ) ; <nl> <nl> + const auto & dict_ptr = context . getExternalDictionariesLoader ( ) . getDictionary ( dict_name ) ; <nl> if ( ! dict_ptr - > isInjective ( attr_name ) ) <nl> { <nl> + + i ; <nl> new file mode 100644 <nl> index 00000000000 . . e69de29bb2d <nl> new file mode 100644 <nl> index 00000000000 . . 88a2b25c2db <nl> mmm / dev / null <nl> ppp b / tests / queries / 0_stateless / 01375_GROUP_BY_injective_elimination_dictGet_BAD_ARGUMENTS . sql <nl> @ @ - 0 , 0 + 1 @ @ <nl> + SELECT dictGetString ( concat ( ' default ' , ' . countryId ' ) , ' country ' , toUInt64 ( number ) ) AS country FROM numbers ( 2 ) GROUP BY country ; - - { serverError 36 ; } <nl> new file mode 100644 <nl> index 00000000000 . . 9459d4ba2a0 <nl> mmm / dev / null <nl> ppp b / tests / queries / 0_stateless / 01376_GROUP_BY_injective_elimination_dictGet . reference <nl> @ @ - 0 , 0 + 1 @ @ <nl> + 1 . 1 <nl> new file mode 100644 <nl> index 00000000000 . . 1c7a4d16f05 <nl> mmm / dev / null <nl> ppp b / tests / queries / 0_stateless / 01376_GROUP_BY_injective_elimination_dictGet . sql <nl> <nl> + - - https : / / github . com / ClickHouse / ClickHouse / issues / 11469 <nl> + SELECT dictGet ( ' default . 
countryId ' , ' country ' , toUInt64 ( number ) ) AS country FROM numbers ( 2 ) GROUP BY country ; - - { serverError 36 ; } <nl> + <nl> + <nl> + - - with real dictionary <nl> + DROP TABLE IF EXISTS dictdb_01376 . table_for_dict ; <nl> + DROP DICTIONARY IF EXISTS dictdb_01376 . dict_exists ; <nl> + DROP DATABASE IF EXISTS dictdb_01376 ; <nl> + <nl> + CREATE DATABASE dictdb_01376 ENGINE = Ordinary ; <nl> + <nl> + CREATE TABLE dictdb_01376 . table_for_dict <nl> + ( <nl> + key_column UInt64 , <nl> + value Float64 <nl> + ) <nl> + ENGINE = Memory ( ) ; <nl> + <nl> + INSERT INTO dictdb_01376 . table_for_dict VALUES ( 1 , 1 . 1 ) ; <nl> + <nl> + CREATE DICTIONARY IF NOT EXISTS dictdb_01376 . dict_exists <nl> + ( <nl> + key_column UInt64 , <nl> + value Float64 DEFAULT 77 . 77 <nl> + ) <nl> + PRIMARY KEY key_column <nl> + SOURCE ( CLICKHOUSE ( HOST ' localhost ' PORT 9000 USER ' default ' TABLE ' table_for_dict ' DB ' dictdb_01376 ' ) ) <nl> + LIFETIME ( 1 ) <nl> + LAYOUT ( FLAT ( ) ) ; <nl> + <nl> + SELECT dictGet ( ' dictdb_01376 . dict_exists ' , ' value ' , toUInt64 ( 1 ) ) as val FROM numbers ( 2 ) GROUP BY val ; <nl>
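The optimization this patch hardens can be sketched independently of ClickHouse: a GROUP BY key of the form f(x) may be replaced by x when f is known injective, but the rewrite must bail out (keeping the key as-is) whenever the arguments cannot be inspected as literals — the crash fixed here came from unconditionally assuming they could. A minimal sketch under those assumptions (names are hypothetical, not the ClickHouse API):

```cpp
#include <set>
#include <string>
#include <vector>

// A GROUP BY key: either a bare column (func empty) or a call func(args...).
struct Key {
    std::string func;
    std::vector<std::string> args;  // args[0] is the wrapped column for calls
};

// Replace func(x) by x when func is injective; otherwise keep the key.
// Mirrors the guard added in the patch: when we cannot inspect the
// arguments, we keep the key rather than assume anything about it.
std::vector<Key> optimize_group_by(const std::vector<Key> &keys,
                                   const std::set<std::string> &injective) {
    std::vector<Key> out;
    for (const Key &k : keys) {
        if (!k.func.empty() && injective.count(k.func) && !k.args.empty())
            out.push_back(Key{"", {k.args[0]}});
        else
            out.push_back(k);
    }
    return out;
}
```

Grouping by the bare column produces the same groups as grouping by an injective function of it, so the cheaper key is always safe to substitute.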
|
Merge pull request from azat / GROUP - BY - injective - elimination - dictGet - fixes
|
ClickHouse/ClickHouse
|
0c37fe9c7573d8add09ddd5987e03c1dfbe56ac4
|
2020-07-09T01:29:26Z
|
mmm a / include / swift / Parse / Parser . h <nl> ppp b / include / swift / Parse / Parser . h <nl> class Parser { <nl> } <nl> ParserResult < Expr > parseExprIs ( ) ; <nl> ParserResult < Expr > parseExprAs ( ) ; <nl> - NullablePtr < Expr > parseExprSequence ( Diag < > ID ) ; <nl> + ParserResult < Expr > parseExprSequence ( Diag < > ID ) ; <nl> ParserResult < Expr > parseExprPostfix ( Diag < > ID ) ; <nl> ParserResult < Expr > parseExprUnary ( Diag < > ID ) ; <nl> ParserResult < Expr > parseExprNew ( ) ; <nl> mmm a / lib / Parse / ParseExpr . cpp <nl> ppp b / lib / Parse / ParseExpr . cpp <nl> ParserResult < Expr > Parser : : parseExpr ( Diag < > Message , bool isExprBasic ) { <nl> / / Name binding will resolve whether it ' s in a valid pattern position . <nl> if ( isOnlyStartOfMatchingPattern ( ) ) { <nl> ParserResult < Pattern > pattern = parseMatchingPattern ( ) ; <nl> - if ( pattern . isNull ( ) ) <nl> - return nullptr ; <nl> if ( pattern . hasCodeCompletion ( ) ) <nl> return makeParserCodeCompletionResult < Expr > ( ) ; <nl> + if ( pattern . isNull ( ) ) <nl> + return nullptr ; <nl> return makeParserResult ( new ( Context ) UnresolvedPatternExpr ( pattern . get ( ) ) ) ; <nl> } <nl> <nl> - NullablePtr < Expr > expr = parseExprSequence ( Message ) ; <nl> + ParserResult < Expr > expr = parseExprSequence ( Message ) ; <nl> + if ( expr . hasCodeCompletion ( ) ) <nl> + return makeParserCodeCompletionResult < Expr > ( ) ; <nl> if ( expr . isNull ( ) ) <nl> return nullptr ; <nl> <nl> ParserResult < Expr > Parser : : parseExpr ( Diag < > Message , bool isExprBasic ) { <nl> / / Otherwise , the closure implicitly forms a call . <nl> Expr * arg = createArgWithTrailingClosure ( Context , SourceLoc ( ) , { } , <nl> nullptr , SourceLoc ( ) , closure ) ; <nl> - expr = new ( Context ) CallExpr ( expr . get ( ) , arg ) ; <nl> + expr = makeParserResult ( new ( Context ) CallExpr ( expr . 
get ( ) , arg ) ) ; <nl> } <nl> } <nl> <nl> ParserResult < Expr > Parser : : parseExprAs ( ) { <nl> / / / The sequencing for binary exprs is not structural , i . e . , binary operators <nl> / / / are not inherently right - associative . If present , ' ? ' and ' : ' tokens must <nl> / / / match . <nl> - NullablePtr < Expr > Parser : : parseExprSequence ( Diag < > Message ) { <nl> + ParserResult < Expr > Parser : : parseExprSequence ( Diag < > Message ) { <nl> SmallVector < Expr * , 8 > SequencedExprs ; <nl> SourceLoc startLoc = Tok . getLoc ( ) ; <nl> <nl> NullablePtr < Expr > Parser : : parseExprSequence ( Diag < > Message ) { <nl> <nl> while ( true ) { <nl> / / Parse a unary expression . <nl> - auto Primary = parseExprUnary ( Message ) ; <nl> + ParserResult < Expr > Primary = parseExprUnary ( Message ) ; <nl> + if ( Primary . hasCodeCompletion ( ) ) <nl> + return makeParserCodeCompletionResult < Expr > ( ) ; <nl> if ( Primary . isNull ( ) ) <nl> return nullptr ; <nl> SequencedExprs . push_back ( Primary . get ( ) ) ; <nl> NullablePtr < Expr > Parser : : parseExprSequence ( Diag < > Message ) { <nl> SourceLoc questionLoc = consumeToken ( ) ; <nl> <nl> / / Parse the middle expression of the ternary . <nl> - NullablePtr < Expr > middle <nl> - = parseExprSequence ( diag : : expected_expr_after_if_question ) ; <nl> + ParserResult < Expr > middle = <nl> + parseExprSequence ( diag : : expected_expr_after_if_question ) ; <nl> + if ( middle . hasCodeCompletion ( ) ) <nl> + return makeParserCodeCompletionResult < Expr > ( ) ; <nl> if ( middle . isNull ( ) ) <nl> return nullptr ; <nl> <nl> NullablePtr < Expr > Parser : : parseExprSequence ( Diag < > Message ) { <nl> if ( ! Tok . is ( tok : : colon ) ) { <nl> diagnose ( questionLoc , diag : : expected_colon_after_if_question ) ; <nl> <nl> - return new ( Context ) ErrorExpr ( { startLoc , <nl> - middle . get ( ) - > getSourceRange ( ) . 
End } ) ; <nl> + return makeParserErrorResult ( new ( Context ) ErrorExpr ( <nl> + { startLoc , middle . get ( ) - > getSourceRange ( ) . End } ) ) ; <nl> } <nl> <nl> SourceLoc colonLoc = consumeToken ( ) ; <nl> NullablePtr < Expr > Parser : : parseExprSequence ( Diag < > Message ) { <nl> <nl> / / If we saw no operators , don ' t build a sequence . <nl> if ( SequencedExprs . size ( ) = = 1 ) <nl> - return SequencedExprs [ 0 ] ; <nl> + return makeParserResult ( SequencedExprs [ 0 ] ) ; <nl> <nl> - return SequenceExpr : : create ( Context , SequencedExprs ) ; <nl> + return makeParserResult ( SequenceExpr : : create ( Context , SequencedExprs ) ) ; <nl> } <nl> <nl> / / / parseExprUnary <nl>
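The reordering above (checking hasCodeCompletion before isNull) matters because a result can be simultaneously null and in the code-completion state, and the two demand different recovery paths. A minimal stand-in for such a wrapper (hypothetical and heavily simplified relative to Swift's actual ParserResult):

```cpp
// Simplified stand-in for a ParserResult-like wrapper: a pointer plus a
// parser status bit. Callers should test code completion before nullness,
// since a code-completion result may carry no parsed value at all.
template <typename T>
class Result {
    T *ptr_;
    bool code_completion_;
public:
    explicit Result(T *p = nullptr, bool cc = false)
        : ptr_(p), code_completion_(cc) {}
    bool isNull() const { return ptr_ == nullptr; }
    bool hasCodeCompletion() const { return code_completion_; }
    T *get() const { return ptr_; }
};
```

Unlike a bare nullable pointer, the wrapper lets parse routines propagate "stop and complete here" distinctly from "parse failed", which is exactly what the converted parseExprSequence now does.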
|
parseExprSequence ( ) : use ParserResult
|
apple/swift
|
2bf010151b132df550fb6e54085706730c1fd883
|
2013-08-21T23:30:47Z
|
mmm a / Marlin / src / gcode / motion / G2_G3 . cpp <nl> ppp b / Marlin / src / gcode / motion / G2_G3 . cpp <nl> void GcodeSuite : : G2_G3 ( const bool clockwise ) { <nl> len = d2 . magnitude ( ) , / / Distance to mid - point of move from current <nl> h2 = ( r - len ) * ( r + len ) , / / factored to reduce rounding error <nl> h = ( h2 > = 0 ) ? SQRT ( h2 ) : 0 . 0f ; / / Distance to the arc pivot - point from midpoint <nl> - const xy_pos_t s = { - d2 . y , d2 . x } / len ; / / Unit vector along perpendicular bisector <nl> - arc_offset = d2 + s * e * h ; / / The calculated offset ( mid - point if | r | < = len ) <nl> + const xy_pos_t s = { - d2 . y , d2 . x } ; / / Perpendicular bisector . ( Divide by len for unit vector . ) <nl> + arc_offset = d2 + s / len * e * h ; / / The calculated offset ( mid - point if | r | < = len ) <nl> } <nl> } <nl> } <nl>
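Numerically, the fix keeps the perpendicular bisector unscaled and folds the 1/len division into the final offset product instead of pre-dividing the vector. A standalone sketch of the corrected computation (variable names follow the diff; the surrounding G2/G3 planner context is assumed):

```cpp
#include <cmath>

struct XY { double x, y; };

// Offset from the current position to the arc center for an R-form arc
// move: d2 is the half-chord, h the distance from the chord midpoint to
// the pivot, and s the (unnormalized) perpendicular bisector. Dividing by
// len once, at the end, reduces rounding error versus scaling s first.
XY arc_center_offset(XY cur, XY target, double r, double e) {
    const XY d2{(target.x - cur.x) * 0.5, (target.y - cur.y) * 0.5};
    const double len = std::hypot(d2.x, d2.y);     // distance to mid-point
    const double h2 = (r - len) * (r + len);       // r^2 - len^2, factored
    const double h = (h2 >= 0) ? std::sqrt(h2) : 0.0;
    const XY s{-d2.y, d2.x};                       // divide by len for unit vector
    return {d2.x + s.x / len * e * h, d2.y + s.y / len * e * h};
}
```

For a half-circle (chord length 2r) the offset degenerates to the chord midpoint, since h is zero; for shorter chords the s·h/len term pushes the center off to the side selected by e.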
|
Followup to " Fix G2 / G3 rounding " ( )
|
MarlinFirmware/Marlin
|
e79666a82b7fd907ed0210c034342f851daad1b8
|
2019-10-11T02:16:37Z
|
mmm a / docs / api / tray . md <nl> ppp b / docs / api / tray . md <nl> Sets the title displayed next to the tray icon in the status bar ( Support ANSI c <nl> <nl> Returns ` String ` - the title displayed next to the tray icon in the status bar <nl> <nl> - # # # # ` tray . setHighlightMode ( mode ) ` _macOS_ <nl> - <nl> - * ` mode ` String - Highlight mode with one of the following values : <nl> - * ` selection ` - Highlight the tray icon when it is clicked and also when <nl> - its context menu is open . This is the default . <nl> - * ` always ` - Always highlight the tray icon . <nl> - * ` never ` - Never highlight the tray icon . <nl> - <nl> - Sets when the tray ' s icon background becomes highlighted ( in blue ) . <nl> - <nl> - * * [ Deprecated ] ( breaking - changes . md # tray ) * * <nl> - <nl> - * * Note : * * You can use ` highlightMode ` with a [ ` BrowserWindow ` ] ( browser - window . md ) <nl> - by toggling between ` ' never ' ` and ` ' always ' ` modes when the window visibility <nl> - changes . <nl> - <nl> - ` ` ` javascript <nl> - const { BrowserWindow , Tray } = require ( ' electron ' ) <nl> - <nl> - const win = new BrowserWindow ( { width : 800 , height : 600 } ) <nl> - const tray = new Tray ( ' / path / to / my / icon ' ) <nl> - <nl> - tray . on ( ' click ' , ( ) = > { <nl> - win . isVisible ( ) ? win . hide ( ) : win . show ( ) <nl> - } ) <nl> - win . on ( ' show ' , ( ) = > { <nl> - tray . setHighlightMode ( ' always ' ) <nl> - } ) <nl> - win . on ( ' hide ' , ( ) = > { <nl> - tray . setHighlightMode ( ' never ' ) <nl> - } ) <nl> - ` ` ` <nl> - <nl> # # # # ` tray . setIgnoreDoubleClickEvents ( ignore ) ` _macOS_ <nl> <nl> * ` ignore ` Boolean <nl> mmm a / lib / browser / api / tray . js <nl> ppp b / lib / browser / api / tray . js <nl> const { Tray } = process . electronBinding ( ' tray ' ) <nl> <nl> Object . setPrototypeOf ( Tray . prototype , EventEmitter . prototype ) <nl> <nl> - / / Deprecations <nl> - Tray . prototype . 
setHighlightMode = deprecate . removeFunction ( Tray . prototype . setHighlightMode , ' setHighlightMode ' ) <nl> - <nl> module . exports = Tray <nl> mmm a / shell / browser / api / atom_api_tray . cc <nl> ppp b / shell / browser / api / atom_api_tray . cc <nl> <nl> # include " shell / common / node_includes . h " <nl> # include " ui / gfx / image / image . h " <nl> <nl> - namespace mate { <nl> - <nl> - template < > <nl> - struct Converter < electron : : TrayIcon : : HighlightMode > { <nl> - static bool FromV8 ( v8 : : Isolate * isolate , <nl> - v8 : : Local < v8 : : Value > val , <nl> - electron : : TrayIcon : : HighlightMode * out ) { <nl> - using HighlightMode = electron : : TrayIcon : : HighlightMode ; <nl> - std : : string mode ; <nl> - if ( ConvertFromV8 ( isolate , val , & mode ) ) { <nl> - if ( mode = = " always " ) { <nl> - * out = HighlightMode : : ALWAYS ; <nl> - return true ; <nl> - } <nl> - if ( mode = = " selection " ) { <nl> - * out = HighlightMode : : SELECTION ; <nl> - return true ; <nl> - } <nl> - if ( mode = = " never " ) { <nl> - * out = HighlightMode : : NEVER ; <nl> - return true ; <nl> - } <nl> - } <nl> - return false ; <nl> - } <nl> - } ; <nl> - } / / namespace mate <nl> - <nl> namespace electron { <nl> <nl> namespace api { <nl> std : : string Tray : : GetTitle ( ) { <nl> # endif <nl> } <nl> <nl> - void Tray : : SetHighlightMode ( TrayIcon : : HighlightMode mode ) { <nl> - tray_icon_ - > SetHighlightMode ( mode ) ; <nl> - } <nl> - <nl> void Tray : : SetIgnoreDoubleClickEvents ( bool ignore ) { <nl> # if defined ( OS_MACOSX ) <nl> tray_icon_ - > SetIgnoreDoubleClickEvents ( ignore ) ; <nl> void Tray : : BuildPrototype ( v8 : : Isolate * isolate , <nl> . SetMethod ( " setToolTip " , & Tray : : SetToolTip ) <nl> . SetMethod ( " setTitle " , & Tray : : SetTitle ) <nl> . SetMethod ( " getTitle " , & Tray : : GetTitle ) <nl> - . SetMethod ( " setHighlightMode " , & Tray : : SetHighlightMode ) <nl> . 
SetMethod ( " setIgnoreDoubleClickEvents " , <nl> & Tray : : SetIgnoreDoubleClickEvents ) <nl> . SetMethod ( " getIgnoreDoubleClickEvents " , <nl> mmm a / shell / browser / api / atom_api_tray . h <nl> ppp b / shell / browser / api / atom_api_tray . h <nl> class Tray : public mate : : TrackableObject < Tray > , public TrayIconObserver { <nl> void SetToolTip ( const std : : string & tool_tip ) ; <nl> void SetTitle ( const std : : string & title ) ; <nl> std : : string GetTitle ( ) ; <nl> - void SetHighlightMode ( TrayIcon : : HighlightMode mode ) ; <nl> void SetIgnoreDoubleClickEvents ( bool ignore ) ; <nl> bool GetIgnoreDoubleClickEvents ( ) ; <nl> void DisplayBalloon ( mate : : Arguments * args , const mate : : Dictionary & options ) ; <nl> mmm a / shell / browser / ui / tray_icon . cc <nl> ppp b / shell / browser / ui / tray_icon . cc <nl> TrayIcon : : ~ TrayIcon ( ) { } <nl> <nl> void TrayIcon : : SetPressedImage ( ImageType image ) { } <nl> <nl> - void TrayIcon : : SetHighlightMode ( TrayIcon : : HighlightMode mode ) { } <nl> - <nl> void TrayIcon : : DisplayBalloon ( ImageType icon , <nl> const base : : string16 & title , <nl> const base : : string16 & contents ) { } <nl> mmm a / shell / browser / ui / tray_icon . h <nl> ppp b / shell / browser / ui / tray_icon . h <nl> class TrayIcon { <nl> / / status icon ( e . g . Ubuntu Unity ) . <nl> virtual void SetToolTip ( const std : : string & tool_tip ) = 0 ; <nl> <nl> - / / Sets the status icon highlight mode . This only works on macOS . <nl> - enum class HighlightMode { <nl> - ALWAYS , / / Always highlight the tray icon <nl> - NEVER , / / Never highlight the tray icon <nl> - SELECTION / / Highlight the tray icon when clicked or the menu is opened <nl> - } ; <nl> - virtual void SetHighlightMode ( HighlightMode mode ) ; <nl> - <nl> # if defined ( OS_MACOSX ) <nl> / / Set / Get flag determining whether to ignore double click events . 
<nl> virtual void SetIgnoreDoubleClickEvents ( bool ignore ) = 0 ; <nl> mmm a / shell / browser / ui / tray_icon_cocoa . h <nl> ppp b / shell / browser / ui / tray_icon_cocoa . h <nl> <nl> <nl> namespace electron { <nl> <nl> - class TrayIconCocoa : public TrayIcon , public AtomMenuModel : : Observer { <nl> + class TrayIconCocoa : public TrayIcon { <nl> public : <nl> TrayIconCocoa ( ) ; <nl> ~ TrayIconCocoa ( ) override ; <nl> class TrayIconCocoa : public TrayIcon , public AtomMenuModel : : Observer { <nl> void SetToolTip ( const std : : string & tool_tip ) override ; <nl> void SetTitle ( const std : : string & title ) override ; <nl> std : : string GetTitle ( ) override ; <nl> - void SetHighlightMode ( TrayIcon : : HighlightMode mode ) override ; <nl> void SetIgnoreDoubleClickEvents ( bool ignore ) override ; <nl> bool GetIgnoreDoubleClickEvents ( ) override ; <nl> void PopUpOnUI ( AtomMenuModel * menu_model ) ; <nl> class TrayIconCocoa : public TrayIcon , public AtomMenuModel : : Observer { <nl> void SetContextMenu ( AtomMenuModel * menu_model ) override ; <nl> gfx : : Rect GetBounds ( ) override ; <nl> <nl> - protected : <nl> - / / AtomMenuModel : : Observer : <nl> - void OnMenuWillClose ( ) override ; <nl> - <nl> private : <nl> - / / Atom custom view for NSStatusItem . <nl> + / / Electron custom view for NSStatusItem . <nl> base : : scoped_nsobject < StatusItemView > status_item_view_ ; <nl> <nl> / / Status menu shown when right - clicking the system icon . <nl> base : : scoped_nsobject < AtomMenuController > menu_ ; <nl> <nl> - / / Used for unregistering observer . <nl> - AtomMenuModel * menu_model_ = nullptr ; / / weak ref . <nl> - <nl> base : : WeakPtrFactory < TrayIconCocoa > weak_factory_ ; <nl> <nl> DISALLOW_COPY_AND_ASSIGN ( TrayIconCocoa ) ; <nl> mmm a / shell / browser / ui / tray_icon_cocoa . mm <nl> ppp b / shell / browser / ui / tray_icon_cocoa . 
mm <nl> <nl> # include < string > <nl> # include < vector > <nl> <nl> - # include " base / mac / sdk_forward_declarations . h " <nl> # include " base / message_loop / message_loop . h " <nl> + # include " base / message_loop / message_pump_mac . h " <nl> # include " base / strings / sys_string_conversions . h " <nl> # include " base / task / post_task . h " <nl> # include " content / public / browser / browser_task_traits . h " <nl> # include " content / public / browser / browser_thread . h " <nl> - # include " shell / browser / mac / atom_application . h " <nl> # include " shell / browser / ui / cocoa / NSString + ANSI . h " <nl> # include " shell / browser / ui / cocoa / atom_menu_controller . h " <nl> - # include " ui / display / screen . h " <nl> # include " ui / events / cocoa / cocoa_event_utils . h " <nl> - # include " ui / gfx / image / image . h " <nl> # include " ui / gfx / mac / coordinate_conversion . h " <nl> # include " ui / native_theme / native_theme . h " <nl> <nl> - namespace { <nl> - <nl> - / / By default , macOS sets 4px to tray image as left and right padding margin . <nl> - const CGFloat kHorizontalMargin = 4 ; <nl> - / / macOS tends to make the title 2px lower . 
<nl> - const CGFloat kVerticalTitleMargin = 2 ; <nl> - <nl> - } / / namespace <nl> - <nl> @ interface StatusItemView : NSView { <nl> electron : : TrayIconCocoa * trayIcon_ ; / / weak <nl> AtomMenuController * menuController_ ; / / weak <nl> - electron : : TrayIcon : : HighlightMode highlight_mode_ ; <nl> BOOL ignoreDoubleClickEvents_ ; <nl> - BOOL forceHighlight_ ; <nl> - BOOL inMouseEventSequence_ ; <nl> - BOOL ANSI_ ; <nl> - base : : scoped_nsobject < NSImage > image_ ; <nl> - base : : scoped_nsobject < NSImage > alternateImage_ ; <nl> - base : : scoped_nsobject < NSString > title_ ; <nl> - base : : scoped_nsobject < NSMutableAttributedString > attributedTitle_ ; <nl> base : : scoped_nsobject < NSStatusItem > statusItem_ ; <nl> base : : scoped_nsobject < NSTrackingArea > trackingArea_ ; <nl> } <nl> - ( void ) dealloc { <nl> - ( id ) initWithIcon : ( electron : : TrayIconCocoa * ) icon { <nl> trayIcon_ = icon ; <nl> menuController_ = nil ; <nl> - highlight_mode_ = electron : : TrayIcon : : HighlightMode : : SELECTION ; <nl> ignoreDoubleClickEvents_ = NO ; <nl> - forceHighlight_ = NO ; <nl> - inMouseEventSequence_ = NO ; <nl> <nl> if ( ( self = [ super initWithFrame : CGRectZero ] ) ) { <nl> [ self registerForDraggedTypes : @ [ <nl> - ( id ) initWithIcon : ( electron : : TrayIconCocoa * ) icon { <nl> NSStatusItem * item = [ [ NSStatusBar systemStatusBar ] <nl> statusItemWithLength : NSVariableStatusItemLength ] ; <nl> statusItem_ . reset ( [ item retain ] ) ; <nl> - [ statusItem_ setView : self ] ; <nl> - / / Finalize setup by sizing our views <nl> + [ [ statusItem_ button ] addSubview : self ] ; / / inject custom view <nl> [ self updateDimensions ] ; <nl> - <nl> - / / Add NSTrackingArea for listening to mouseEnter , mouseExit , and mouseMove <nl> - / / events <nl> - trackingArea_ . 
reset ( [ [ NSTrackingArea alloc ] <nl> - initWithRect : [ self bounds ] <nl> - options : NSTrackingMouseEnteredAndExited | NSTrackingMouseMoved | <nl> - NSTrackingActiveAlways <nl> - owner : self <nl> - userInfo : nil ] ) ; <nl> - [ self addTrackingArea : trackingArea_ ] ; <nl> } <nl> return self ; <nl> } <nl> <nl> - ( void ) updateDimensions { <nl> - NSStatusBar * bar = [ NSStatusBar systemStatusBar ] ; <nl> - [ self setFrame : NSMakeRect ( 0 , 0 , [ self fullWidth ] , [ bar thickness ] ) ] ; <nl> - [ self setNeedsDisplay : YES ] ; <nl> + [ self setFrame : [ statusItem_ button ] . frame ] ; <nl> + } <nl> + <nl> + - ( void ) updateTrackingAreas { <nl> + / / Use NSTrackingArea for listening to mouseEnter , mouseExit , and mouseMove <nl> + / / events . <nl> + [ self removeTrackingArea : trackingArea_ ] ; <nl> + trackingArea_ . reset ( [ [ NSTrackingArea alloc ] <nl> + initWithRect : [ self bounds ] <nl> + options : NSTrackingMouseEnteredAndExited | NSTrackingMouseMoved | <nl> + NSTrackingActiveAlways <nl> + owner : self <nl> + userInfo : nil ] ) ; <nl> + [ self addTrackingArea : trackingArea_ ] ; <nl> } <nl> <nl> - ( void ) removeItem { <nl> - / / Turn off tracking events to prevent crash <nl> + / / Turn off tracking events to prevent crash . <nl> if ( trackingArea_ ) { <nl> [ self removeTrackingArea : trackingArea_ ] ; <nl> trackingArea_ . reset ( ) ; <nl> } <nl> [ [ NSStatusBar systemStatusBar ] removeStatusItem : statusItem_ ] ; <nl> - [ statusItem_ setView : nil ] ; <nl> + [ self removeFromSuperview ] ; <nl> statusItem_ . reset ( ) ; <nl> } <nl> <nl> - - ( void ) drawRect : ( NSRect ) dirtyRect { <nl> - / / Draw the tray icon and title that align with NSStatusItem , layout : <nl> - / / mmmmmmmmmmmmmmm - <nl> - / / | icon | title | <nl> - / / / mmmmmmmmmmmmmmm - <nl> - <nl> - CGFloat thickness = [ [ statusItem_ statusBar ] thickness ] ; <nl> - <nl> - / / Draw the system bar background . <nl> - [ statusItem_ drawStatusBarBackgroundInRect : self . 
bounds <nl> - withHighlight : [ self shouldHighlight ] ] ; <nl> - <nl> - / / Determine which image to use . <nl> - NSImage * image = image_ . get ( ) ; <nl> - if ( inMouseEventSequence_ & & alternateImage_ ) { <nl> - image = alternateImage_ . get ( ) ; <nl> - } <nl> - / / Apply the higlight color if the image is a template image . When this moves <nl> - / / to using the new [ NSStatusItem button ] API , this should work automagically . <nl> - if ( [ image isTemplate ] = = YES ) { <nl> - NSImage * imageWithColor = [ [ image copy ] autorelease ] ; <nl> - [ imageWithColor lockFocus ] ; <nl> - [ [ self colorWithHighlight : [ self isHighlighted ] ] set ] ; <nl> - CGRect imageBounds = CGRectMake ( 0 , 0 , image . size . width , image . size . height ) ; <nl> - NSRectFillUsingOperation ( imageBounds , NSCompositeSourceAtop ) ; <nl> - [ imageWithColor unlockFocus ] ; <nl> - image = imageWithColor ; <nl> - } <nl> - <nl> - / / Draw the image <nl> - [ image <nl> - drawInRect : CGRectMake ( roundf ( ( [ self iconWidth ] - image . size . width ) / 2 ) , <nl> - roundf ( ( thickness - image . size . height ) / 2 ) , <nl> - image . size . width , image . size . height ) ] ; <nl> - <nl> - if ( title_ ) { <nl> - / / Draw title . <nl> - NSRect titleDrawRect = NSMakeRect ( [ self iconWidth ] , - kVerticalTitleMargin , <nl> - [ self titleWidth ] , thickness ) ; <nl> - [ attributedTitle_ drawInRect : titleDrawRect ] ; <nl> - } <nl> - } <nl> - <nl> - - ( BOOL ) isDarkMode { <nl> - if ( @ available ( macOS 10 . 
14 , * ) ) { <nl> - return ui : : NativeTheme : : GetInstanceForNativeUi ( ) - > SystemDarkModeEnabled ( ) ; <nl> - } <nl> - NSUserDefaults * defaults = [ NSUserDefaults standardUserDefaults ] ; <nl> - NSString * mode = [ defaults stringForKey : @ " AppleInterfaceStyle " ] ; <nl> - return mode & & [ mode isEqualToString : @ " Dark " ] ; <nl> - } <nl> - <nl> - - ( BOOL ) isHighlighted { <nl> - BOOL highlight = [ self shouldHighlight ] ; <nl> - return highlight | [ self isDarkMode ] ; <nl> - } <nl> - <nl> - / / The width of the full status item . <nl> - - ( CGFloat ) fullWidth { <nl> - if ( title_ ) <nl> - return [ self iconWidth ] + [ self titleWidth ] + kHorizontalMargin ; <nl> - else <nl> - return [ self iconWidth ] ; <nl> - } <nl> - <nl> - / / The width of the icon . <nl> - - ( CGFloat ) iconWidth { <nl> - if ( ! image_ & & title_ ) <nl> - return kHorizontalMargin ; <nl> - CGFloat thickness = [ [ NSStatusBar systemStatusBar ] thickness ] ; <nl> - CGFloat imageHeight = [ image_ size ] . height ; <nl> - CGFloat imageWidth = [ image_ size ] . width ; <nl> - CGFloat iconWidth = imageWidth ; <nl> - if ( imageWidth < thickness ) { <nl> - / / Image ' s width must be larger than menu bar ' s height . <nl> - iconWidth = thickness ; <nl> - } else { <nl> - CGFloat verticalMargin = thickness - imageHeight ; <nl> - / / Image must have same horizontal vertical margin . <nl> - if ( verticalMargin > 0 & & imageWidth ! = imageHeight ) <nl> - iconWidth = imageWidth + verticalMargin ; <nl> - CGFloat horizontalMargin = thickness - imageWidth ; <nl> - / / Image must have at least kHorizontalMargin horizontal margin on each <nl> - / / side . <nl> - if ( horizontalMargin < 2 * kHorizontalMargin ) <nl> - iconWidth = imageWidth + 2 * kHorizontalMargin ; <nl> - } <nl> - return iconWidth ; <nl> - } <nl> - <nl> - / / The width of the title . <nl> - - ( CGFloat ) titleWidth { <nl> - if ( ! title_ ) <nl> - return 0 ; <nl> - return [ attributedTitle_ size ] . 
width ; <nl> - } <nl> - <nl> - - ( NSColor * ) colorWithHighlight : ( BOOL ) highlight { <nl> - return highlight ? [ NSColor whiteColor ] <nl> - : [ NSColor colorWithRed : 0 . 265625 <nl> - green : 0 . 25390625 <nl> - blue : 0 . 234375 <nl> - alpha : 1 . 0 ] ; <nl> - } <nl> - <nl> - ( void ) setImage : ( NSImage * ) image { <nl> - image_ . reset ( [ image copy ] ) ; <nl> + [ [ statusItem_ button ] setImage : [ image copy ] ] ; <nl> [ self updateDimensions ] ; <nl> } <nl> <nl> - ( void ) setAlternateImage : ( NSImage * ) image { <nl> - alternateImage_ . reset ( [ image copy ] ) ; <nl> - } <nl> - <nl> - - ( void ) setHighlight : ( electron : : TrayIcon : : HighlightMode ) mode { <nl> - highlight_mode_ = mode ; <nl> - [ self setNeedsDisplay : YES ] ; <nl> + [ [ statusItem_ button ] setAlternateImage : [ image copy ] ] ; <nl> } <nl> <nl> - ( void ) setIgnoreDoubleClickEvents : ( BOOL ) ignore { <nl> - ( BOOL ) getIgnoreDoubleClickEvents { <nl> } <nl> <nl> - ( void ) setTitle : ( NSString * ) title { <nl> + if ( [ title containsANSICodes ] ) { <nl> + [ [ statusItem_ button ] <nl> + setAttributedTitle : [ title attributedStringParsingANSICodes ] ] ; <nl> + } else { <nl> + [ [ statusItem_ button ] setTitle : [ title copy ] ] ; <nl> + } <nl> + <nl> + / / Fix icon margins . <nl> if ( title . length > 0 ) { <nl> - title_ . reset ( [ title copy ] ) ; <nl> - ANSI_ = [ title containsANSICodes ] ; <nl> + [ [ statusItem_ button ] setImagePosition : NSImageLeft ] ; <nl> } else { <nl> - title_ . 
reset ( ) ; <nl> - ANSI_ = NO ; <nl> + [ [ statusItem_ button ] setImagePosition : NSImageOnly ] ; <nl> } <nl> - [ self updateAttributedTitle ] ; <nl> + <nl> [ self updateDimensions ] ; <nl> } <nl> <nl> - ( NSString * ) title { <nl> - return title_ ; <nl> - } <nl> - <nl> - - ( void ) updateAttributedTitle { <nl> - NSDictionary * attributes = <nl> - @ { NSFontAttributeName : [ NSFont menuBarFontOfSize : 0 ] } ; <nl> - <nl> - if ( ANSI_ ) { <nl> - NSCharacterSet * whites = [ NSCharacterSet whitespaceCharacterSet ] ; <nl> - NSString * title = [ title_ stringByTrimmingCharactersInSet : whites ] ; <nl> - attributedTitle_ . reset ( [ title attributedStringParsingANSICodes ] ) ; <nl> - [ attributedTitle_ addAttributes : attributes <nl> - range : NSMakeRange ( 0 , [ attributedTitle_ length ] ) ] ; <nl> - return ; <nl> - } <nl> - <nl> - / / check title_ being nil <nl> - NSString * title = @ " " ; <nl> - if ( title_ ) <nl> - title = title_ ; <nl> - <nl> - attributedTitle_ . reset ( [ [ NSMutableAttributedString alloc ] <nl> - initWithString : title <nl> - attributes : attributes ] ) ; <nl> - <nl> - / / NSFontAttributeName : [ NSFont menuBarFontOfSize : 0 ] , <nl> - / / NSForegroundColorAttributeName : [ self colorWithHighlight : highlight ] <nl> - [ attributedTitle_ addAttributes : attributes <nl> - range : NSMakeRange ( 0 , [ attributedTitle_ length ] ) ] ; <nl> - [ attributedTitle_ addAttribute : NSForegroundColorAttributeName <nl> - value : [ self colorWithHighlight : [ self isHighlighted ] ] <nl> - range : NSMakeRange ( 0 , [ attributedTitle_ length ] ) ] ; <nl> + return [ statusItem_ button ] . title ; <nl> } <nl> <nl> - ( void ) setMenuController : ( AtomMenuController * ) menu { <nl> menuController_ = menu ; <nl> + [ statusItem_ setMenu : [ menuController_ menu ] ] ; <nl> } <nl> <nl> - ( void ) mouseDown : ( NSEvent * ) event { <nl> - inMouseEventSequence_ = YES ; <nl> - [ self setNeedsDisplay : YES ] ; <nl> + / / Pass click to superclass to show menu . 
Custom mouseUp handler won ' t be <nl> + / / invoked . <nl> + if ( menuController_ ) { <nl> + [ super mouseDown : event ] ; <nl> + } else { <nl> + [ [ statusItem_ button ] highlight : YES ] ; <nl> + } <nl> } <nl> <nl> - ( void ) mouseUp : ( NSEvent * ) event { <nl> - if ( ! inMouseEventSequence_ ) { <nl> - / / If the menu is showing , when user clicked the tray icon , the ` mouseDown ` <nl> - / / event will be dissmissed , we need to close the menu at this time . <nl> - [ self setNeedsDisplay : YES ] ; <nl> - return ; <nl> - } <nl> - inMouseEventSequence_ = NO ; <nl> - <nl> - / / Show menu when there is a context menu . <nl> - / / NB ( hokein ) : Make tray ' s behavior more like official one ' s . <nl> - / / When the tray icon gets clicked quickly multiple times , the <nl> - / / event . clickCount doesn ' t always return 1 . Instead , it returns a value that <nl> - / / counts the clicked times . <nl> - / / So we don ' t check the clickCount here , just pop up the menu for each click <nl> - / / event . <nl> - if ( menuController_ ) <nl> - [ statusItem_ popUpStatusItemMenu : [ menuController_ menu ] ] ; <nl> - <nl> - / / Don ' t emit click events when menu is showing . <nl> - if ( menuController_ ) <nl> - return ; <nl> + [ [ statusItem_ button ] highlight : NO ] ; <nl> <nl> / / If we are ignoring double click events , we should ignore the ` clickCount ` <nl> / / value and immediately emit a click event . <nl> - ( void ) mouseUp : ( NSEvent * ) event { <nl> trayIcon_ - > NotifyDoubleClicked ( <nl> gfx : : ScreenRectFromNSRect ( event . window . 
frame ) , <nl> ui : : EventFlagsFromModifiers ( [ event modifierFlags ] ) ) ; <nl> - <nl> - [ self setNeedsDisplay : YES ] ; <nl> } <nl> <nl> - ( void ) popUpContextMenu : ( electron : : AtomMenuModel * ) menu_model { <nl> - ( void ) popUpContextMenu : ( electron : : AtomMenuModel * ) menu_model { <nl> base : : scoped_nsobject < AtomMenuController > menuController ( <nl> [ [ AtomMenuController alloc ] initWithModel : menu_model <nl> useDefaultAccelerator : NO ] ) ; <nl> - forceHighlight_ = YES ; / / Should highlight when showing menu . <nl> - [ self setNeedsDisplay : YES ] ; <nl> - <nl> - [ statusItem_ popUpStatusItemMenu : [ menuController menu ] ] ; <nl> - forceHighlight_ = NO ; <nl> - [ self setNeedsDisplay : YES ] ; <nl> + / / Hacky way to mimic design of ordinary tray menu . <nl> + [ statusItem_ setMenu : [ menuController menu ] ] ; <nl> + [ [ statusItem_ button ] performClick : self ] ; <nl> + [ statusItem_ setMenu : [ menuController_ menu ] ] ; <nl> return ; <nl> } <nl> <nl> - ( void ) popUpContextMenu : ( electron : : AtomMenuModel * ) menu_model { <nl> / / Ensure the UI can update while the menu is fading out . <nl> base : : ScopedPumpMessagesInPrivateModes pump_private ; <nl> <nl> - / / Redraw the tray icon to show highlight if it is enabled . <nl> - [ self setNeedsDisplay : YES ] ; <nl> - <nl> - [ statusItem_ popUpStatusItemMenu : [ menuController_ menu ] ] ; <nl> - / / The popUpStatusItemMenu returns only after the showing menu is closed . <nl> - / / When it returns , we need to redraw the tray icon to not show highlight . 
<nl> - [ self setNeedsDisplay : YES ] ; <nl> + [ [ statusItem_ button ] performClick : self ] ; <nl> } <nl> } <nl> <nl> - ( BOOL ) performDragOperation : ( id < NSDraggingInfo > ) sender { <nl> return YES ; <nl> } <nl> <nl> - - ( void ) setNeedsDisplay : ( BOOL ) display { <nl> - [ self updateAttributedTitle ] ; <nl> - [ super setNeedsDisplay : display ] ; <nl> - } <nl> - <nl> - - ( BOOL ) shouldHighlight { <nl> - using HighlightMode = electron : : TrayIcon : : HighlightMode ; <nl> - switch ( highlight_mode_ ) { <nl> - case HighlightMode : : ALWAYS : <nl> - return true ; <nl> - case HighlightMode : : NEVER : <nl> - return false ; <nl> - case HighlightMode : : SELECTION : <nl> - BOOL isMenuOpen = menuController_ & & [ menuController_ isMenuOpen ] ; <nl> - return forceHighlight_ | | inMouseEventSequence_ | | isMenuOpen ; <nl> - } <nl> - } <nl> - <nl> @ end <nl> <nl> namespace electron { <nl> - ( BOOL ) shouldHighlight { <nl> <nl> TrayIconCocoa : : ~ TrayIconCocoa ( ) { <nl> [ status_item_view_ removeItem ] ; <nl> - if ( menu_model_ ) <nl> - menu_model_ - > RemoveObserver ( this ) ; <nl> } <nl> <nl> void TrayIconCocoa : : SetImage ( const gfx : : Image & image ) { <nl> - [ status_item_view_ setImage : image . IsEmpty ( ) ? nil : image . AsNSImage ( ) ] ; <nl> + [ status_item_view_ setImage : image . 
AsNSImage ( ) ] ; <nl> } <nl> <nl> void TrayIconCocoa : : SetPressedImage ( const gfx : : Image & image ) { <nl> - ( BOOL ) shouldHighlight { <nl> return base : : SysNSStringToUTF8 ( [ status_item_view_ title ] ) ; <nl> } <nl> <nl> - void TrayIconCocoa : : SetHighlightMode ( TrayIcon : : HighlightMode mode ) { <nl> - [ status_item_view_ setHighlight : mode ] ; <nl> - } <nl> - <nl> void TrayIconCocoa : : SetIgnoreDoubleClickEvents ( bool ignore ) { <nl> [ status_item_view_ setIgnoreDoubleClickEvents : ignore ] ; <nl> } <nl> - ( BOOL ) shouldHighlight { <nl> } <nl> <nl> void TrayIconCocoa : : SetContextMenu ( AtomMenuModel * menu_model ) { <nl> - / / Substribe to MenuClosed event . <nl> - if ( menu_model_ ) <nl> - menu_model_ - > RemoveObserver ( this ) ; <nl> - <nl> - menu_model_ = menu_model ; <nl> - <nl> if ( menu_model ) { <nl> - menu_model - > AddObserver ( this ) ; <nl> / / Create native menu . <nl> menu_ . reset ( [ [ AtomMenuController alloc ] initWithModel : menu_model <nl> useDefaultAccelerator : NO ] ) ; <nl> } else { <nl> menu_ . reset ( ) ; <nl> } <nl> - <nl> [ status_item_view_ setMenuController : menu_ . get ( ) ] ; <nl> } <nl> <nl> gfx : : Rect TrayIconCocoa : : GetBounds ( ) { <nl> - auto bounds = gfx : : ScreenRectFromNSRect ( [ status_item_view_ window ] . frame ) ; <nl> - return bounds ; <nl> - } <nl> - <nl> - void TrayIconCocoa : : OnMenuWillClose ( ) { <nl> - [ status_item_view_ setNeedsDisplay : YES ] ; <nl> + return gfx : : ScreenRectFromNSRect ( [ status_item_view_ window ] . frame ) ; <nl> } <nl> <nl> / / static <nl>
|
feat : migrate custom macOS tray view to native one ( )
|
electron/electron
|
47a38daee20ef77cc98bc13bb7a7c1d690fd635c
|
2019-07-31T17:52:50Z
|
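The `lib / browser / api / tray . js` hunk in the row above deletes the `deprecate . removeFunction` shim for `setHighlightMode`. As an illustration of that deprecation pattern, here is a hypothetical Python rendering (Electron's real helper lives in its internal `deprecate` module and its exact semantics are assumed here): the wrapper emits a warning on every call, then delegates, so old callers keep working for one release before the API disappears.

```python
import warnings


def remove_function(fn, name):
    """Sketch of a deprecate.removeFunction-style wrapper (hypothetical
    Python port, not Electron's actual code): warn that `name` is slated
    for removal, then delegate to the original function."""
    if fn is None:
        raise ValueError(f"'{name}' function is invalid or does not exist.")

    def wrapped(*args, **kwargs):
        # Surface the deprecation to the caller without breaking them yet.
        warnings.warn(f"'{name}' is deprecated and will be removed.",
                      DeprecationWarning, stacklevel=2)
        return fn(*args, **kwargs)

    return wrapped
```

In the original JavaScript, `Tray.prototype.setHighlightMode = deprecate.removeFunction(...)` played the same role; the diff above removes that shim because the migration window has ended.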
mmm a / examples / rigid_body . py <nl> ppp b / examples / rigid_body . py <nl> <nl> penalty = 1e4 <nl> damping = 0 . 01 <nl> <nl> - gradient_clip = 300 <nl> + gradient_clip = 30 <nl> spring_omega = 30 <nl> <nl> n_springs = 0 <nl> spring_anchor_a = ti . global_var ( ti . i32 ) <nl> spring_anchor_b = ti . global_var ( ti . i32 ) <nl> + # spring_length = - 1 means it is a joint <nl> spring_length = scalar ( ) <nl> spring_offset_a = vec ( ) <nl> spring_offset_b = vec ( ) <nl> def place ( ) : <nl> <nl> <nl> dt = 0 . 001 <nl> - learning_rate = 0 . 00001 <nl> + learning_rate = 0 . 0001 <nl> <nl> <nl> @ ti . func <nl> def apply_spring_force ( t : ti . i32 ) : <nl> for i in range ( n_springs ) : <nl> a = spring_anchor_a [ i ] <nl> b = spring_anchor_b [ i ] <nl> - pos_a , _ , rela_a = to_world ( t , a , spring_offset_a [ i ] ) <nl> - pos_b , _ , rela_b = to_world ( t , b , spring_offset_b [ i ] ) <nl> + pos_a , vel_a , rela_a = to_world ( t , a , spring_offset_a [ i ] ) <nl> + pos_b , vel_b , rela_b = to_world ( t , b , spring_offset_b [ i ] ) <nl> dist = pos_a - pos_b <nl> length = dist . norm ( ) + 1e - 4 <nl> <nl> def apply_spring_force ( t : ti . i32 ) : <nl> actuation + = bias [ i ] <nl> actuation = ti . tanh ( actuation ) <nl> <nl> - target_length = spring_length [ i ] * ( 0 . 7 + 0 . 6 * actuation ) <nl> + is_joint = spring_length [ i ] = = - 1 <nl> + <nl> + target_length = spring_length [ i ] * ( 0 . 8 + 0 . 4 * actuation ) <nl> + if is_joint : <nl> + target_length = 0 . 0 <nl> impulse = dt * ( length - target_length ) * spring_stiffness [ <nl> i ] / length * dist <nl> + <nl> + if is_joint : <nl> + rela_vel = vel_a - vel_b <nl> + rela_vel_norm = rela_vel . norm ( ) + 1e - 3 <nl> + impulse_dir = rela_vel / rela_vel_norm <nl> + impulse_contribution = inverse_mass [ a ] + ti . sqr ( <nl> + cross ( impulse_dir , rela_a ) ) * inverse_inertia [ <nl> + a ] + inverse_mass [ b ] + ti . 
sqr ( cross ( impulse_dir , <nl> + rela_b ) ) * \ <nl> + inverse_inertia [ <nl> + b ] <nl> + # project relative velocity <nl> + impulse + = rela_vel_norm / impulse_contribution * impulse_dir <nl> + <nl> apply_impulse ( t , a , - impulse , pos_a ) <nl> apply_impulse ( t , b , impulse , pos_b ) <nl> <nl> def main ( ) : <nl> add_object ( x = [ 0 . 2 , 0 . 15 ] , halfsize = [ 0 . 03 , 0 . 02 ] ) <nl> add_object ( x = [ 0 . 3 , 0 . 15 ] , halfsize = [ 0 . 03 , 0 . 02 ] ) <nl> add_object ( x = [ 0 . 4 , 0 . 15 ] , halfsize = [ 0 . 03 , 0 . 02 ] ) <nl> + add_object ( x = [ 0 . 4 , 0 . 3 ] , halfsize = [ 0 . 005 , 0 . 03 ] ) <nl> <nl> l = 0 . 12 <nl> s = 15 <nl> def main ( ) : <nl> add_spring ( 0 , 2 , [ 0 . 03 , 0 . 00 ] , [ 0 . 0 , 0 . 0 ] , l , s ) <nl> add_spring ( 0 , 3 , [ 0 . 03 , 0 . 00 ] , [ 0 . 0 , 0 . 0 ] , l , s ) <nl> add_spring ( 0 , 3 , [ 0 . 1 , 0 . 00 ] , [ 0 . 0 , 0 . 0 ] , l , s ) <nl> + add_spring ( 0 , 4 , [ 0 . 1 , 0 ] , [ 0 , - 0 . 05 ] , - 1 , s ) <nl> <nl> setup_robot ( ) <nl> <nl> for i in range ( n_springs ) : <nl> for j in range ( n_sin_waves ) : <nl> - weights [ i , j ] = np . random . randn ( ) * 0 . 01 <nl> + weights [ i , j ] = np . random . randn ( ) * 0 . 001 <nl> <nl> forward ( ' initial ' ) <nl> for iter in range ( 1000 ) : <nl> def main ( ) : <nl> for j in range ( n_sin_waves ) : <nl> weights [ i , j ] + = scale * weights . grad [ i , j ] <nl> bias [ i ] + = scale * bias . grad [ i ] <nl> - <nl> <nl> clear ( ) <nl> forward ( ' final ' ) <nl> new file mode 100644 <nl> index 00000000000 . . 8b137891791 <nl> mmm / dev / null <nl> ppp b / examples / robot_config . py <nl> @ @ - 0 , 0 + 1 @ @ <nl> + <nl>
|
joint
|
taichi-dev/taichi
|
57af2a910ff8add5ac1af5a78f45a485ed55920a
|
2019-09-14T03:16:19Z
|
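The joint handling added in the `rigid_body . py` diff above projects the relative anchor velocity onto an impulse whose magnitude is divided by the effective inverse mass (translational plus rotational terms). A standalone Python sketch of that 2D math, with hypothetical helper names (the real kernel runs inside Taichi's `@ti.func`):

```python
import math


def cross(a, b):
    # 2D scalar cross product, as used by the kernel above.
    return a[0] * b[1] - a[1] * b[0]


def joint_impulse(rela_vel, rela_a, rela_b,
                  inv_mass_a, inv_inertia_a,
                  inv_mass_b, inv_inertia_b):
    """Impulse that cancels the relative velocity at a joint anchor.

    Mirrors the is_joint branch in the diff: the direction is the
    normalized relative velocity, and the magnitude is |v_rel| divided
    by the effective inverse mass along that direction.
    """
    norm = math.hypot(rela_vel[0], rela_vel[1]) + 1e-3  # same epsilon as above
    d = (rela_vel[0] / norm, rela_vel[1] / norm)        # impulse direction
    # Effective inverse mass along d, including rotational contributions
    # from each body's inverse inertia at the anchor offsets.
    k = (inv_mass_a + cross(d, rela_a) ** 2 * inv_inertia_a
         + inv_mass_b + cross(d, rela_b) ** 2 * inv_inertia_b)
    s = norm / k
    return (s * d[0], s * d[1])
```

As in the kernel, the resulting impulse is applied with opposite signs to the two bodies (`apply_impulse(t, a, -impulse, pos_a)` and `apply_impulse(t, b, impulse, pos_b)`), driving the anchors toward zero relative velocity.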
mmm a / base / common / wide_integer_impl . h <nl> ppp b / base / common / wide_integer_impl . h <nl> struct integer < Bits , Signed > : : _impl <nl> constexpr int64_t min_int = std : : numeric_limits < int64_t > : : min ( ) ; <nl> <nl> constexpr long double max_int_long_double = static_cast < long double > ( max_int ) ; <nl> - constexpr long double min_int_long_double = static_cast < long double > ( min_int ) ; <nl> <nl> if ( ( rhs > 0 & & rhs < max_int ) | | <nl> ( rhs < 0 & & rhs > min_int ) ) <nl>
|
fix unused var
|
ClickHouse/ClickHouse
|
21dd9a3ec3eadccd484f873cdcf53f977d76164a
|
2020-11-10T17:35:05Z
|
mmm a / platform / haiku / os_haiku . cpp <nl> ppp b / platform / haiku / os_haiku . cpp <nl> void OS_Haiku : : set_window_title ( const String & p_title ) { <nl> } <nl> <nl> Size2 OS_Haiku : : get_window_size ( ) const { <nl> - ERR_PRINT ( " get_window_size ( ) NOT IMPLEMENTED " ) ; <nl> + BSize size = window - > Size ( ) ; <nl> + return Size2i ( size . IntegerWidth ( ) , size . IntegerHeight ( ) ) ; <nl> + } <nl> + <nl> + Point2 OS_Haiku : : get_window_position ( ) const { <nl> + BPoint point ( 0 , 0 ) ; <nl> + window - > ConvertToScreen ( & point ) ; <nl> + return Point2i ( point . x , point . y ) ; <nl> + } <nl> + <nl> + void OS_Haiku : : set_window_position ( const Point2 & p_position ) { <nl> + window - > MoveTo ( p_position . x , p_position . y ) ; <nl> } <nl> <nl> void OS_Haiku : : set_video_mode ( const VideoMode & p_video_mode , int p_screen ) { <nl> mmm a / platform / haiku / os_haiku . h <nl> ppp b / platform / haiku / os_haiku . h <nl> class OS_Haiku : public OS_Unix { <nl> <nl> virtual void set_window_title ( const String & p_title ) ; <nl> virtual Size2 get_window_size ( ) const ; <nl> + virtual Point2 get_window_position ( ) const ; <nl> + virtual void set_window_position ( const Point2 & p_position ) ; <nl> <nl> virtual void set_video_mode ( const VideoMode & p_video_mode , int p_screen = 0 ) ; <nl> virtual VideoMode get_video_mode ( int p_screen = 0 ) const ; <nl>
|
Haiku : implement get_window_size ( ) get / set_window_position ( )
|
godotengine/godot
|
b59e95ce1c9905af8c7d44b74082ac489520e9b2
|
2015-06-20T12:35:54Z
|
mmm a / osquery / tables / networking / curl_certificate . cpp <nl> ppp b / osquery / tables / networking / curl_certificate . cpp <nl> static void fillRow ( Row & r , X509 * cert , int dump_certificate ) { <nl> r [ " sha1_fingerprint " ] = ss . str ( ) ; <nl> } <nl> <nl> - r [ " certificate_version " ] = INTEGER ( certversion ( cert ) ) ; <nl> + r [ " version " ] = INTEGER ( certversion ( cert ) ) ; <nl> r [ " signature_algorithm " ] = signature_algorithm ( cert ) ; <nl> r [ " signature " ] = signature ( cert ) ; <nl> <nl> static void fillRow ( Row & r , X509 * cert , int dump_certificate ) { <nl> <nl> r [ " key_usage " ] = certificate_extensions ( cert , NID_key_usage ) ; <nl> r [ " extended_key_usage " ] = certificate_extensions ( cert , NID_ext_key_usage ) ; <nl> - r [ " certificate_policies " ] = <nl> - certificate_extensions ( cert , NID_certificate_policies ) ; <nl> + r [ " policies " ] = certificate_extensions ( cert , NID_certificate_policies ) ; <nl> <nl> r [ " subject_alternative_names " ] = <nl> certificate_extensions ( cert , NID_subject_alt_name ) ; <nl> static void fillRow ( Row & r , X509 * cert , int dump_certificate ) { <nl> r [ " subject_info_access " ] = certificate_extensions ( cert , NID_sinfo_access ) ; <nl> r [ " policy_mappings " ] = certificate_extensions ( cert , NID_policy_mappings ) ; <nl> <nl> - r [ " has_expired " ] = INTEGER ( has_cert_expired ( cert ) ) ; <nl> + r [ " has_expired " ] = has_cert_expired ( cert ) ? 
" 1 " : " 0 " ; <nl> <nl> r [ " basic_constraint " ] = certificate_extensions ( cert , NID_basic_constraints ) ; <nl> r [ " name_constraints " ] = certificate_extensions ( cert , NID_name_constraints ) ; <nl> static void fillRow ( Row & r , X509 * cert , int dump_certificate ) { <nl> <nl> / / check the dump_certificate flag and dump the certificate in PEM format <nl> if ( dump_certificate ) { <nl> - r [ " certificate_pem " ] = pem ( cert ) ; <nl> + r [ " pem " ] = pem ( cert ) ; <nl> } <nl> } <nl> <nl> mmm a / specs / curl_certificate . table <nl> ppp b / specs / curl_certificate . table <nl> schema ( [ <nl> Column ( " valid_to " , TEXT , " Period of validity end date " ) , <nl> Column ( " sha256_fingerprint " , TEXT , " SHA - 256 fingerprint " ) , <nl> Column ( " sha1_fingerprint " , TEXT , " SHA1 fingerprint " ) , <nl> - Column ( " certificate_version " , INTEGER , " Version Number " ) , <nl> + Column ( " version " , INTEGER , " Version Number " ) , <nl> Column ( " signature_algorithm " , TEXT , " Signature Algorithm " ) , <nl> Column ( " signature " , TEXT , " Signature " ) , <nl> Column ( " subject_key_identifier " , TEXT , " Subject Key Identifier " ) , <nl> Column ( " authority_key_identifier " , TEXT , " Authority Key Identifier " ) , <nl> Column ( " key_usage " , TEXT , " Usage of key in certificate " ) , <nl> Column ( " extended_key_usage " , TEXT , " Extended usage of key in certificate " ) , <nl> - Column ( " certificate_policies " , TEXT , " Certificate Policies " ) , <nl> + Column ( " policies " , TEXT , " Certificate Policies " ) , <nl> Column ( " subject_alternative_names " , TEXT , " Subject Alternative Name " ) , <nl> Column ( " issuer_alternative_names " , TEXT , " Issuer Alternative Name " ) , <nl> Column ( " info_access " , TEXT , " Authority Information Access " ) , <nl> Column ( " subject_info_access " , TEXT , " Subject Information Access " ) , <nl> Column ( " policy_mappings " , TEXT , " Policy Mappings " ) , <nl> - Column ( " 
certificate_has_expired " , INTEGER , " 1 if the certificate has expired , 0 otherwise " ) , <nl> + Column ( " has_expired " , INTEGER , " 1 if the certificate has expired , 0 otherwise " ) , <nl> Column ( " basic_constraint " , TEXT , " Basic Constraints " ) , <nl> Column ( " name_constraints " , TEXT , " Name Constraints " ) , <nl> Column ( " policy_constraints " , TEXT , " Policy Constraints " ) , <nl> Column ( " dump_certificate " , INTEGER , " Set this value to ' 1 ' to dump certificate " , additional = True , hidden = True ) , <nl> - Column ( " certificate_pem " , TEXT , " Certificate PEM format " ) <nl> + Column ( " pem " , TEXT , " Certificate PEM format " ) <nl> ] ) <nl> implementation ( " curl_certificate @ genTLSCertificate " ) <nl> examples ( [ <nl> mmm a / tests / integration / tables / curl_certificate . cpp <nl> ppp b / tests / integration / tables / curl_certificate . cpp <nl> <nl> * the LICENSE file found in the root directory of this source tree . <nl> * / <nl> <nl> - / / Sanity check integration test for curl_certificate <nl> - / / Spec file : specs / curl_certificate . table <nl> - <nl> # include < osquery / tests / integration / tables / helper . h > <nl> <nl> namespace osquery { <nl> namespace table_tests { <nl> <nl> class curlCertificate : public testing : : Test { <nl> - protected : <nl> - void SetUp ( ) override { <nl> - setUpEnvironment ( ) ; <nl> - } <nl> + protected : <nl> + void SetUp ( ) override { <nl> + setUpEnvironment ( ) ; <nl> + } <nl> } ; <nl> <nl> TEST_F ( curlCertificate , test_sanity ) { <nl> - / / 1 . Query data <nl> - auto const data = <nl> - execute_query ( " select * from curl_certificate where hostname = ' ' " ) ; <nl> - / / 2 . Check size before validation <nl> - / / ASSERT_GE ( data . size ( ) , 0ul ) ; <nl> - / / ASSERT_EQ ( data . size ( ) , 1ul ) ; <nl> - / / ASSERT_EQ ( data . size ( ) , 0ul ) ; <nl> - / / 3 . Build validation map <nl> - / / See helper . 
h for avaialbe flags <nl> - / / Or use custom DataCheck object <nl> - / / ValidationMap row_map = { <nl> - / / { " hostname " , NormalType } <nl> - / / { " common_name " , NormalType } <nl> - / / { " organization " , NormalType } <nl> - / / { " organization_unit " , NormalType } <nl> - / / { " serial_number " , NormalType } <nl> - / / { " issuer_common_name " , NormalType } <nl> - / / { " issuer_organization " , NormalType } <nl> - / / { " issuer_organization_unit " , NormalType } <nl> - / / { " valid_from " , NormalType } <nl> - / / { " valid_to " , NormalType } <nl> - / / { " sha256_fingerprint " , NormalType } <nl> - / / { " sha1_fingerprint " , NormalType } <nl> - / / } <nl> - / / 4 . Perform validation <nl> - / / validate_rows ( data , row_map ) ; <nl> + auto const data = execute_query ( <nl> + " select * from curl_certificate where hostname = ' www . github . com ' " ) ; <nl> + ASSERT_EQ ( data . size ( ) , 1ul ) ; <nl> + ValidationMap row_map = { { " hostname " , NormalType } , <nl> + { " common_name " , NormalType } , <nl> + { " organization " , NormalType } , <nl> + { " organization_unit " , NormalType } , <nl> + { " serial_number " , NormalType } , <nl> + { " issuer_common_name " , NormalType } , <nl> + { " issuer_organization " , NormalType } , <nl> + { " issuer_organization_unit " , NormalType } , <nl> + { " valid_from " , NormalType } , <nl> + { " valid_to " , NormalType } , <nl> + { " sha256_fingerprint " , NormalType } , <nl> + { " sha1_fingerprint " , NormalType } , <nl> + { " version " , IntType } , <nl> + { " signature_algorithm " , NormalType } , <nl> + { " signature " , NormalType } , <nl> + { " subject_key_identifier " , NormalType } , <nl> + { " authority_key_identifier " , NormalType } , <nl> + { " key_usage " , NormalType } , <nl> + { " extended_key_usage " , NormalType } , <nl> + { " policies " , NormalType } , <nl> + { " subject_alternative_names " , NormalType } , <nl> + { " issuer_alternative_names " , NormalType } , <nl> + { " 
info_access " , NormalType } , <nl> + { " subject_info_access " , NormalType } , <nl> + { " policy_mappings " , NormalType } , <nl> + { " has_expired " , IntType } , <nl> + { " basic_constraint " , NormalType } , <nl> + { " name_constraints " , NormalType } , <nl> + { " policy_constraints " , NormalType } , <nl> + { " pem " , NormalType } } ; <nl> + validate_rows ( data , row_map ) ; <nl> } <nl> <nl> } / / namespace table_tests <nl>
|
curl_certificate test ( )
|
osquery/osquery
|
dcf72523f7724e7b5608c953ba66e218dd39daa3
|
2020-07-26T20:38:59Z
|
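The rewritten integration test in the record above drives a ValidationMap over every column of the single returned row. A minimal Python sketch of that validate-rows pattern (function and type names are illustrative stand-ins, not osquery's actual C++ test helpers):

```python
def validate_rows(rows, row_map):
    """Check query rows against a column->type map, mirroring the
    ValidationMap idea in the test above.  osquery returns every value
    as text, so 'type' checks are really parse checks."""
    for row in rows:
        # Exactly the expected set of columns, no more, no fewer.
        assert set(row) == set(row_map), "column set mismatch"
        for col, kind in row_map.items():
            assert isinstance(row[col], str), f"{col} is not text"
            if kind == "IntType":
                assert row[col].lstrip("-").isdigit(), f"{col} is not an integer"

# Example: a row shaped like the renamed curl_certificate columns.
validate_rows(
    [{"version": "3", "has_expired": "0", "pem": "-----BEGIN CERTIFICATE-----"}],
    {"version": "IntType", "has_expired": "IntType", "pem": "NormalType"},
)
```

The same shape as the C++ test: assert on row count and column set first, then per-column type constraints.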
diff - - git a / code / cryptography / Rail fence Cipher / railfence . cpp b / code / cryptography / rail_fence_cipher / rail_fence . cpp <nl> similarity index 95 % <nl> rename from code / cryptography / Rail fence Cipher / railfence . cpp <nl> rename to code / cryptography / rail_fence_cipher / rail_fence . cpp <nl> mmm a / code / cryptography / Rail fence Cipher / railfence . cpp <nl> ppp b / code / cryptography / rail_fence_cipher / rail_fence . cpp <nl> <nl> # include < cstring > <nl> # include < string > <nl> # include < bits / stdc + + . h > <nl> - <nl> + / / Part of Cosmos by OpenGenus Foundation <nl> + / / Rails fence cipher - Cryptography <nl> using namespace std ; <nl> <nl> string Encryptor ( string msg , int key ) { <nl>
|
fixed location
|
OpenGenus/cosmos
|
268268c9b1662777129fa5e959bce255318c13c3
|
2017-10-11T21:03:38Z
|
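The renamed file above implements the rail fence cipher named in its added comment. A minimal Python sketch of the encryption side (plain zig-zag transposition; this is illustrative, not a port of the file's `Encryptor()`):

```python
def rail_fence_encrypt(msg, rails):
    """Rail fence cipher: write `msg` in a zig-zag across `rails`
    rows, then read the rows off left to right."""
    if rails <= 1:
        return msg
    rows = [[] for _ in range(rails)]
    row, step = 0, 1
    for ch in msg:
        rows[row].append(ch)
        if row == 0:
            step = 1            # bounce off the top rail
        elif row == rails - 1:
            step = -1           # bounce off the bottom rail
        row += step
    return "".join("".join(r) for r in rows)
```

With the classic 3-rail example, `rail_fence_encrypt("WEAREDISCOVEREDFLEEATONCE", 3)` reads the three rails off as `WECRLTEERDSOEEFEAOCAIVDEN`.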
mmm a / stdlib / public / core / Collection . swift <nl> ppp b / stdlib / public / core / Collection . swift <nl> extension Indexable { <nl> _precondition ( <nl> bounds . lowerBound < = range . lowerBound , <nl> " range . lowerBound is out of bounds : index designates a position before bounds . lowerBound " ) <nl> - _precondition ( <nl> - bounds . lowerBound < = range . upperBound , <nl> - " range . upperBound is out of bounds : index designates a position before bounds . lowerBound " ) <nl> - <nl> _precondition ( <nl> range . lowerBound < = bounds . upperBound , <nl> " range . lowerBound is out of bounds : index designates a position after bounds . upperBound " ) <nl> + <nl> + _precondition ( <nl> + bounds . lowerBound < = range . upperBound , <nl> + " range . upperBound is out of bounds : index designates a position before bounds . lowerBound " ) <nl> _precondition ( <nl> range . upperBound < = bounds . upperBound , <nl> - " range . lowerBound is out of bounds : index designates a position after bounds . upperBound " ) <nl> + " range . upperBound is out of bounds : index designates a position after bounds . upperBound " ) <nl> } <nl> <nl> @ warn_unused_result <nl>
|
New indexing model : reorder preconditions to make more sense
|
apple/swift
|
df73cd2f5d73402448f156d00362448db818a3d5
|
2016-03-29T23:34:55Z
|
mmm a / tensorflow / core / platform / default / logging . cc <nl> ppp b / tensorflow / core / platform / default / logging . cc <nl> class TFLogSinks { <nl> / / Up to 128 messages will be queued until a log sink is added . <nl> / / The queue will then be logged to the first added log sink . <nl> / / <nl> - / / NO_DEFAULT_LOGGER is not defined defined : <nl> + / / NO_DEFAULT_LOGGER is not defined : <nl> / / The messages will be logged using the default logger . The default logger <nl> / / will log to stdout on all platforms except for Android . On Androit the <nl> / / default Android logger will be used . <nl>
|
Fixed a typo .
|
tensorflow/tensorflow
|
3c212292416ea8d30ebfbeef2f4455ca0bb884bb
|
2020-10-23T14:28:16Z
|
mmm a / generic / THTensorMath . c <nl> ppp b / generic / THTensorMath . c <nl> void THTensor_ ( topk ) ( THTensor * rt_ , THLongTensor * ri_ , THTensor * t , long k , int <nl> long sliceSize = THTensor_ ( size ) ( t , dim ) ; <nl> THArgCheck ( k > 0 & & k < = sliceSize , 2 , " k not in range for dimension " ) ; <nl> <nl> - / * Just implement in terms of sort and narrow for now * / <nl> - THTensor * tmpResults = THTensor_ ( new ) ( ) ; <nl> - THLongTensor * tmpIndices = THLongTensor_new ( ) ; <nl> + THTensor * tmpResults = THTensor_ ( new ) ( ) ; <nl> + THTensor_ ( resize1d ) ( tmpResults , sliceSize ) ; <nl> + real * tmp__data = THTensor_ ( data ) ( tmpResults ) ; <nl> <nl> - THLongStorage * topKSize = THTensor_ ( newSizeOf ) ( t ) ; <nl> + THLongTensor * tmpIndices = THLongTensor_new ( ) ; <nl> + THLongTensor_resize1d ( tmpIndices , sliceSize ) ; <nl> + long * tmpi__data = THLongTensor_data ( tmpIndices ) ; <nl> + <nl> + THLongStorage * topKSize = THTensor_ ( newSizeOf ) ( t ) ; <nl> THLongStorage_set ( topKSize , dim , k ) ; <nl> THTensor_ ( resize ) ( rt_ , topKSize , NULL ) ; <nl> THLongTensor_resize ( ri_ , topKSize , NULL ) ; <nl> THLongStorage_free ( topKSize ) ; <nl> <nl> - THTensor_ ( sort ) ( tmpResults , tmpIndices , t , dim , dir ) ; <nl> - THTensor_ ( narrow ) ( tmpResults , NULL , dim , 0 , k ) ; <nl> - THLongTensor_narrow ( tmpIndices , NULL , dim , 0 , k ) ; <nl> + if ( dir ) { <nl> + / * k largest elements , descending order ( optional : see sorted ) * / <nl> + long K = sliceSize - k ; <nl> + TH_TENSOR_DIM_APPLY3 ( real , t , real , rt_ , long , ri_ , dim , <nl> + long i ; <nl> + for ( i = 0 ; i < sliceSize ; i + + ) <nl> + { <nl> + tmp__data [ i ] = t_data [ i * t_stride ] ; <nl> + tmpi__data [ i ] = i ; <nl> + } <nl> + if ( K > 0 ) <nl> + THTensor_ ( quickselect ) ( tmp__data , tmpi__data , K - 1 , sliceSize , 1 ) ; <nl> + if ( sorted ) <nl> + THTensor_ ( quicksortdescend ) ( tmp__data + K , tmpi__data + K , k , 1 ) ; <nl> + for ( i = 0 ; i < k 
; i + + ) <nl> + { <nl> + rt__data [ i * rt__stride ] = tmp__data [ i + K ] ; <nl> + ri__data [ i * ri__stride ] = tmpi__data [ i + K ] ; <nl> + } ) <nl> + } <nl> + else { <nl> + / * k smallest elements , ascending order ( optional : see sorted ) * / <nl> + TH_TENSOR_DIM_APPLY3 ( real , t , real , rt_ , long , ri_ , dim , <nl> + long i ; <nl> + for ( i = 0 ; i < sliceSize ; i + + ) <nl> + { <nl> + tmp__data [ i ] = t_data [ i * t_stride ] ; <nl> + tmpi__data [ i ] = i ; <nl> + } <nl> + THTensor_ ( quickselect ) ( tmp__data , tmpi__data , k - 1 , sliceSize , 1 ) ; <nl> + if ( sorted ) <nl> + THTensor_ ( quicksortascend ) ( tmp__data , tmpi__data , k - 1 , 1 ) ; <nl> + for ( i = 0 ; i < k ; i + + ) <nl> + { <nl> + rt__data [ i * rt__stride ] = tmp__data [ i ] ; <nl> + ri__data [ i * ri__stride ] = tmpi__data [ i ] ; <nl> + } ) <nl> + } <nl> <nl> - THTensor_ ( freeCopyTo ) ( tmpResults , rt_ ) ; <nl> - THLongTensor_freeCopyTo ( tmpIndices , ri_ ) ; <nl> + THTensor_ ( free ) ( tmpResults ) ; <nl> + THLongTensor_free ( tmpIndices ) ; <nl> } <nl> <nl> void THTensor_ ( tril ) ( THTensor * r_ , THTensor * t , long k ) <nl>
|
torch . topk : use quickselect + quicksort
|
pytorch/pytorch
|
3e467b1b21d84800848b287b4237810a5415186b
|
2016-01-23T17:02:54Z
|
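The patch above replaces sort-then-narrow with a quickselect pass followed by sorting only the k survivors. A minimal Python sketch of that select-then-sort idea (the `topk` signature here is illustrative, not `torch.topk`'s, and Wirth's selection stands in for the patch's `THTensor_(quickselect)`):

```python
def _quickselect(items, kidx):
    """Wirth's selection: reorder `items` in place so items[:kidx+1]
    hold the kidx+1 smallest keys (average O(n))."""
    lo, hi = 0, len(items) - 1
    while lo < hi:
        pivot = items[kidx][0]
        i, j = lo, hi
        while i <= j:
            while items[i][0] < pivot:
                i += 1
            while pivot < items[j][0]:
                j -= 1
            if i <= j:
                items[i], items[j] = items[j], items[i]
                i += 1
                j -= 1
        if j < kidx:
            lo = i
        if kidx < i:
            hi = j

def topk(values, k, largest=True, sorted_result=True):
    """Return (top-k values, their original indices): one O(n)
    selection pass, then sort only the k selected elements when a
    sorted result is requested -- the same shape as the C patch."""
    # Negate the key for `largest` so selection always takes the smallest.
    keyed = [((-v if largest else v), i, v) for i, v in enumerate(values)]
    _quickselect(keyed, k - 1)
    top = sorted(keyed[:k]) if sorted_result else keyed[:k]
    return [t[2] for t in top], [t[1] for t in top]
```

Compared with a full sort (O(n log n)), the selection pass is O(n) on average and only the k survivors pay the sorting cost, which is the motivation stated in the commit message.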
mmm a / xbmc / guilib / guiinfo / GUIControlsGUIInfo . cpp <nl> ppp b / xbmc / guilib / guiinfo / GUIControlsGUIInfo . cpp <nl> bool CGUIControlsGUIInfo : : GetBool ( bool & value , const CGUIListItem * gitem , int co <nl> const CGUIViewState * viewState = window - > GetViewState ( ) ; <nl> if ( viewState ) <nl> { <nl> - value = ( static_cast < unsigned int > ( viewState - > GetSortMethod ( ) . sortBy ) = = info . GetData2 ( ) ) ; <nl> + value = ( static_cast < int > ( viewState - > GetSortMethod ( ) . sortBy ) = = info . GetData2 ( ) ) ; <nl> return true ; <nl> } <nl> } <nl> bool CGUIControlsGUIInfo : : GetBool ( bool & value , const CGUIListItem * gitem , int co <nl> const CGUIViewState * viewState = window - > GetViewState ( ) ; <nl> if ( viewState ) <nl> { <nl> - value = ( static_cast < int > ( viewState - > GetSortOrder ( ) ) = = info . GetData1 ( ) ) ; <nl> + value = ( static_cast < unsigned int > ( viewState - > GetSortOrder ( ) ) = = info . GetData1 ( ) ) ; <nl> return true ; <nl> } <nl> } <nl>
|
Merge pull request from howie - f / fix - warning
|
xbmc/xbmc
|
63f9d2d8c128c4cc6e0367ce665ff59993e79d28
|
2020-09-10T19:42:37Z
|
mmm a / xbmc / storage / cdioSupport . cpp <nl> ppp b / xbmc / storage / cdioSupport . cpp <nl> CCdInfo * CCdIoSupport : : GetCdInfo ( char * cDeviceFileName ) <nl> if ( - 1 = = m_nFirstData ) <nl> m_nFirstData = i ; <nl> } <nl> + ti . nfsInfo = FS_NO_DATA ; <nl> ti . ms_offset = 0 ; <nl> ti . isofs_size = 0 ; <nl> ti . nJolietLevel = 0 ; <nl> ti . nFrames = : : cdio_get_track_lba ( cdio , i ) ; <nl> + ti . nMins = 0 ; <nl> + ti . nSecs = 0 ; <nl> + <nl> info - > SetTrackInformation ( i , ti ) ; <nl> / * skip to leadout ? * / <nl> if ( i = = m_nNumTracks ) <nl> CCdInfo * CCdIoSupport : : GetCdInfo ( char * cDeviceFileName ) <nl> ti . isofs_size = 0 ; <nl> ti . nJolietLevel = 0 ; <nl> ti . nFrames = : : cdio_get_track_lba ( cdio , i ) ; <nl> + ti . nMins = 0 ; <nl> + ti . nSecs = 0 ; <nl> info - > SetTrackInformation ( i + 1 , ti ) ; <nl> } <nl> case TRACK_FORMAT_ERROR : <nl>
|
Fix use of uninitialized data in cdioSupport . cpp ( again )
|
xbmc/xbmc
|
0bdcde44af07d1664a3eee70eb599ba21f1b088d
|
2012-09-06T01:56:07Z
|
mmm a / tensorflow / g3doc / get_started / os_setup . md <nl> ppp b / tensorflow / g3doc / get_started / os_setup . md <nl> $ git clone - - recurse - submodules https : / / github . com / tensorflow / tensorflow <nl> ` ` ` <nl> <nl> ` - - recurse - submodules ` is required to fetch the protobuf library that TensorFlow <nl> - depends on . <nl> + depends on . Note that these instructions will install the latest master branch <nl> + of tensorflow . If you want to install a specific branch ( such as a release branch ) , <nl> + pass ` - b < branchname > ` to the ` git clone ` command . <nl> <nl> # # # Installation for Linux <nl> <nl>
|
Note that master is installed with plain git clone
|
tensorflow/tensorflow
|
4fba0fd5a1abfe4c47df4b6512994b1e5313ee57
|
2016-01-27T18:55:18Z
|
mmm a / cmake / ConfigureCompiler . cmake <nl> ppp b / cmake / ConfigureCompiler . cmake <nl> else ( ) <nl> if ( GCC ) <nl> add_compile_options ( - Wno - pragmas ) <nl> add_compile_options ( - mavx ) <nl> - # add_compile_options ( - fno - builtin - memcpy ) <nl> + # Intentionally using builtin memcpy . G + + does a good job on small memcpy ' s when the size is known at runtime . <nl> + # If the size is not known , then it falls back on the memcpy that ' s available at runtime ( rte_memcpy , as of this <nl> + # writing ; see flow . cpp ) . <nl> + # <nl> + # The downside of the builtin memcpy is that it ' s slower at large copies , so if we spend a lot of time on large <nl> + # copies of sizes that are known at compile time , this might not be a win . See the output of performance / memcpy <nl> + # for more information . <nl> + # add_compile_options ( - fno - builtin - memcpy ) <nl> # Otherwise ` state [ [ maybe_unused ] ] int x ; ` will issue a warning . <nl> # https : / / stackoverflow . com / questions / 50646334 / maybe - unused - on - member - variable - gcc - warns - incorrectly - that - attribute - is <nl> add_compile_options ( - Wno - attributes ) <nl> mmm a / flow / flow . cpp <nl> ppp b / flow / flow . cpp <nl> <nl> # include " flow / flow . h " <nl> # include " flow / DeterministicRandom . h " <nl> # include " flow / UnitTest . h " <nl> + # include " flow / rte_memcpy . h " <nl> + # include " flow / folly_memcpy . h " <nl> # include < stdarg . 
h > <nl> # include < cinttypes > <nl> <nl> + void * rte_memcpy_noinline ( void * __restrict __dest , const void * __restrict __src , size_t __n ) { <nl> + return rte_memcpy ( __dest , __src , __n ) ; <nl> + } <nl> + <nl> + / / This compilation unit will be linked in to the main binary , so this should override glibc memcpy <nl> + __attribute__ ( ( visibility ( " default " ) ) ) void * memcpy ( void * __restrict __dest , const void * __restrict __src , size_t __n ) { <nl> + return rte_memcpy ( __dest , __src , __n ) ; <nl> + } <nl> + <nl> INetwork * g_network = 0 ; <nl> <nl> FILE * randLog = 0 ; <nl> mmm a / flow / rte_memcpy . h <nl> ppp b / flow / rte_memcpy . h <nl> extern " C " { <nl> static force_inline void * <nl> rte_memcpy ( void * dst , const void * src , size_t n ) ; <nl> <nl> + / / # define RTE_MACHINE_CPUFLAG_AVX512F <nl> + # define RTE_MACHINE_CPUFLAG_AVX2 <nl> + <nl> # ifdef RTE_MACHINE_CPUFLAG_AVX512F <nl> <nl> # define ALIGNMENT_MASK 0x3F <nl> mmm a / flow / test_memcpy . cpp <nl> ppp b / flow / test_memcpy . cpp <nl> <nl> # include < string . h > <nl> # include < stdlib . h > <nl> <nl> + # include " flow / folly_memcpy . h " <nl> # include " flow / rte_memcpy . h " <nl> # include " flow / IRandom . h " <nl> <nl> mmm a / flow / test_memcpy_perf . cpp <nl> ppp b / flow / test_memcpy_perf . cpp <nl> <nl> # include " flow / rte_memcpy . h " <nl> # include " flow / IRandom . h " <nl> # include " flow / UnitTest . h " <nl> + # include " flow / flow . h " <nl> <nl> extern " C " { <nl> - void * folly_memcpy ( void * dst , void * src , uint32_t length ) ; <nl> + void * folly_memcpy ( void * dst , const void * src , uint32_t length ) ; <nl> } <nl> <nl> + <nl> + void * rte_memcpy_noinline ( void * dst , const void * src , size_t length ) ; / / for performance comparisons <nl> + <nl> / * <nl> * Set this to the maximum buffer size you want to test . If it is 0 , then the <nl> * values in the buf_sizes [ ] array below will be used . 
<nl> do_uncached_write ( uint8_t * dst , int is_dst_cached , <nl> fill_addr_arrays ( dst_addrs , is_dst_cached , 0 , <nl> src_addrs , is_src_cached , 0 ) ; <nl> for ( j = 0 ; j < TEST_BATCH_SIZE ; j + + ) { <nl> - rte_memcpy ( dst + dst_addrs [ j ] , src + src_addrs [ j ] , size ) ; <nl> + memcpy ( dst + dst_addrs [ j ] , src + src_addrs [ j ] , size ) ; <nl> } <nl> } <nl> } <nl> do { \ <nl> src_addrs , is_src_cached , src_uoffset ) ; \ <nl> start_time = rte_rdtsc ( ) ; \ <nl> for ( t = 0 ; t < TEST_BATCH_SIZE ; t + + ) \ <nl> - rte_memcpy ( dst + dst_addrs [ t ] , src + src_addrs [ t ] , size ) ; \ <nl> + rte_memcpy_noinline ( dst + dst_addrs [ t ] , src + src_addrs [ t ] , size ) ; \ <nl> total_time + = rte_rdtsc ( ) - start_time ; \ <nl> } \ <nl> for ( iter = 0 ; iter < ( TEST_ITERATIONS / TEST_BATCH_SIZE ) ; iter + + ) { \ <nl>
|
Settle on using rte_memcpy when we do not know the copy size at runtime , and builtin memcpy otherwise
|
apple/foundationdb
|
e77f9701f32b32238d75db1e917cabcb1539ce9f
|
2020-06-02T22:06:57Z
|
mmm a / lib / IDE / CommentConversion . cpp <nl> ppp b / lib / IDE / CommentConversion . cpp <nl> struct CommentToXMLConverter { <nl> <nl> void printHeader ( const Header * H ) { <nl> llvm : : SmallString < 4 > Tag ; <nl> - llvm : : raw_svector_ostream S ( Tag ) ; <nl> - S < < " < h " < < H - > getLevel ( ) < < " > " ; <nl> - printRawHTML ( S . str ( ) ) ; <nl> + llvm : : raw_svector_ostream TagStream ( Tag ) ; <nl> + TagStream < < " < h " < < H - > getLevel ( ) < < " > " ; <nl> + printRawHTML ( TagStream . str ( ) ) ; <nl> for ( auto Child : H - > getChildren ( ) ) <nl> printASTNode ( Child ) ; <nl> <nl> - Tag . clear ( ) ; <nl> - S < < " < / h " < < H - > getLevel ( ) < < " > " ; <nl> - printRawHTML ( S . str ( ) ) ; <nl> + llvm : : SmallString < 5 > EndTag ; <nl> + llvm : : raw_svector_ostream EndTagStream ( EndTag ) ; <nl> + EndTagStream < < " < / h " < < H - > getLevel ( ) < < " > " ; <nl> + printRawHTML ( EndTagStream . str ( ) ) ; <nl> } <nl> <nl> void printHRule ( const HRule * HR ) { <nl> break ; <nl> <nl> void printHeader ( const Header * H ) { <nl> llvm : : SmallString < 4 > Tag ; <nl> - llvm : : raw_svector_ostream S ( Tag ) ; <nl> - S < < " < h " < < H - > getLevel ( ) < < " > " ; <nl> - print ( S . str ( ) ) ; <nl> + llvm : : raw_svector_ostream TagStream ( Tag ) ; <nl> + TagStream < < " < h " < < H - > getLevel ( ) < < " > " ; <nl> + print ( TagStream . str ( ) ) ; <nl> for ( auto Child : H - > getChildren ( ) ) <nl> printASTNode ( Child ) ; <nl> <nl> - Tag . clear ( ) ; <nl> - S < < " < / h " < < H - > getLevel ( ) < < " > " ; <nl> - print ( S . str ( ) ) ; <nl> + llvm : : SmallString < 5 > EndTag ; <nl> + llvm : : raw_svector_ostream EndTagStream ( EndTag ) ; <nl> + EndTagStream < < " < / h " < < H - > getLevel ( ) < < " > " ; <nl> + print ( EndTagStream . str ( ) ) ; <nl> } <nl> <nl> void printText ( const Text * T ) { <nl> mmm a / tools / swift - ide - test / swift - ide - test . cpp <nl> ppp b / tools / swift - ide - test / swift - ide - test . 
cpp <nl> static int doPrintComments ( const CompilerInvocation & InitInvok , <nl> CompilerInvocation Invocation ( InitInvok ) ; <nl> Invocation . addInputFilename ( SourceFilename ) ; <nl> Invocation . getLangOptions ( ) . AttachCommentsToDecls = true ; <nl> + Invocation . getLangOptions ( ) . EnableObjCAttrRequiresFoundation = false ; <nl> <nl> CompilerInstance CI ; <nl> / / Display diagnostics to stderr . <nl>
|
ASan build fix : overlapping memcpy and compiler flags in swift - ide - test
|
apple/swift
|
0ee38ba4aa7d4d82ef851771da289957271399f3
|
2015-04-26T20:02:00Z
|
mmm a / src / deoptimizer . cc <nl> ppp b / src / deoptimizer . cc <nl> Handle < Object > TranslatedState : : MaterializeAt ( int frame_index , <nl> } <nl> <nl> Handle < Object > map_object = materializer . At ( value_index ) ; <nl> - Handle < Map > map = <nl> - Map : : GeneralizeAllFieldRepresentations ( Handle < Map > : : cast ( map_object ) ) ; <nl> + Handle < Map > map = Map : : GeneralizeAllFields ( Handle < Map > : : cast ( map_object ) ) ; <nl> switch ( map - > instance_type ( ) ) { <nl> case MUTABLE_HEAP_NUMBER_TYPE : <nl> case HEAP_NUMBER_TYPE : { <nl> mmm a / src / map - updater . cc <nl> ppp b / src / map - updater . cc <nl> Handle < Map > MapUpdater : : Update ( ) { <nl> return result_map_ ; <nl> } <nl> <nl> - MapUpdater : : State MapUpdater : : CopyGeneralizeAllRepresentations ( <nl> - const char * reason ) { <nl> + MapUpdater : : State MapUpdater : : CopyGeneralizeAllFields ( const char * reason ) { <nl> StoreMode store_mode = <nl> modified_descriptor_ > = 0 ? FORCE_FIELD : ALLOW_IN_DESCRIPTOR ; <nl> - result_map_ = Map : : CopyGeneralizeAllRepresentations ( <nl> + result_map_ = Map : : CopyGeneralizeAllFields ( <nl> old_map_ , new_elements_kind_ , modified_descriptor_ , store_mode , new_kind_ , <nl> new_attributes_ , reason ) ; <nl> state_ = kEnd ; <nl> MapUpdater : : State MapUpdater : : FindRootMap ( ) { <nl> root_map_ = handle ( old_map_ - > FindRootMap ( ) , isolate_ ) ; <nl> int root_nof = root_map_ - > NumberOfOwnDescriptors ( ) ; <nl> if ( ! old_map_ - > EquivalentToForTransition ( * root_map_ ) ) { <nl> - return CopyGeneralizeAllRepresentations ( " GenAll_NotEquivalent " ) ; <nl> + return CopyGeneralizeAllFields ( " GenAll_NotEquivalent " ) ; <nl> } <nl> <nl> ElementsKind from_kind = root_map_ - > elements_kind ( ) ; <nl> MapUpdater : : State MapUpdater : : FindRootMap ( ) { <nl> to_kind ! = SLOW_SLOPPY_ARGUMENTS_ELEMENTS & & <nl> ! 
( IsTransitionableFastElementsKind ( from_kind ) & & <nl> IsMoreGeneralElementsKindTransition ( from_kind , to_kind ) ) ) { <nl> - return CopyGeneralizeAllRepresentations ( " GenAll_InvalidElementsTransition " ) ; <nl> + return CopyGeneralizeAllFields ( " GenAll_InvalidElementsTransition " ) ; <nl> } <nl> <nl> if ( modified_descriptor_ > = 0 & & modified_descriptor_ < root_nof ) { <nl> MapUpdater : : State MapUpdater : : FindRootMap ( ) { <nl> old_descriptors_ - > GetDetails ( modified_descriptor_ ) ; <nl> if ( old_details . kind ( ) ! = new_kind_ | | <nl> old_details . attributes ( ) ! = new_attributes_ ) { <nl> - return CopyGeneralizeAllRepresentations ( " GenAll_RootModification1 " ) ; <nl> + return CopyGeneralizeAllFields ( " GenAll_RootModification1 " ) ; <nl> } <nl> if ( ! new_representation_ . fits_into ( old_details . representation ( ) ) ) { <nl> - return CopyGeneralizeAllRepresentations ( " GenAll_RootModification2 " ) ; <nl> + return CopyGeneralizeAllFields ( " GenAll_RootModification2 " ) ; <nl> } <nl> if ( old_details . location ( ) ! = kField ) { <nl> - return CopyGeneralizeAllRepresentations ( " GenAll_RootModification3 " ) ; <nl> + return CopyGeneralizeAllFields ( " GenAll_RootModification3 " ) ; <nl> } <nl> DCHECK_EQ ( kData , old_details . kind ( ) ) ; <nl> DCHECK_EQ ( kData , new_kind_ ) ; <nl> MapUpdater : : State MapUpdater : : FindRootMap ( ) { <nl> FieldType * old_field_type = <nl> old_descriptors_ - > GetFieldType ( modified_descriptor_ ) ; <nl> if ( ! new_field_type_ - > NowIs ( old_field_type ) ) { <nl> - return CopyGeneralizeAllRepresentations ( " GenAll_RootModification4 " ) ; <nl> + return CopyGeneralizeAllFields ( " GenAll_RootModification4 " ) ; <nl> } <nl> } <nl> <nl> MapUpdater : : State MapUpdater : : FindTargetMap ( ) { <nl> if ( old_details . kind ( ) = = kAccessor & & <nl> ! EqualImmutableValues ( GetValue ( i ) , tmp_descriptors - > GetValue ( i ) ) ) { <nl> / / TODO ( ishell ) : mutable accessors are not implemented yet . 
<nl> - return CopyGeneralizeAllRepresentations ( " GenAll_Incompatible " ) ; <nl> + return CopyGeneralizeAllFields ( " GenAll_Incompatible " ) ; <nl> } <nl> / / Check if old location fits into tmp location . <nl> if ( ! LocationFitsInto ( old_details . location ( ) , tmp_details . location ( ) ) ) { <nl> MapUpdater : : State MapUpdater : : FindTargetMap ( ) { <nl> # endif <nl> if ( old_details . kind ( ) = = kAccessor & & <nl> ! EqualImmutableValues ( GetValue ( i ) , tmp_descriptors - > GetValue ( i ) ) ) { <nl> - return CopyGeneralizeAllRepresentations ( " GenAll_Incompatible " ) ; <nl> + return CopyGeneralizeAllFields ( " GenAll_Incompatible " ) ; <nl> } <nl> DCHECK ( ! tmp_map - > is_deprecated ( ) ) ; <nl> target_map_ = tmp_map ; <nl> MapUpdater : : State MapUpdater : : ConstructNewMap ( ) { <nl> / / could be inserted regardless of whether transitions array is full or not . <nl> if ( maybe_transition = = NULL & & <nl> ! TransitionArray : : CanHaveMoreTransitions ( split_map ) ) { <nl> - return CopyGeneralizeAllRepresentations ( " GenAll_CantHaveMoreTransitions " ) ; <nl> + return CopyGeneralizeAllFields ( " GenAll_CantHaveMoreTransitions " ) ; <nl> } <nl> <nl> old_map_ - > NotifyLeafMapLayoutChange ( ) ; <nl> mmm a / src / map - updater . h <nl> ppp b / src / map - updater . h <nl> class MapUpdater { <nl> / / When a requested reconfiguration can not be done the result is a copy <nl> / / of | old_map_ | where every field has | Tagged | representation and | Any | <nl> / / field type . This map is disconnected from the transition tree . <nl> - State CopyGeneralizeAllRepresentations ( const char * reason ) ; <nl> + State CopyGeneralizeAllFields ( const char * reason ) ; <nl> <nl> / / Returns name of a | descriptor | property . <nl> inline Name * GetKey ( int descriptor ) const ; <nl> mmm a / src / objects - inl . h <nl> ppp b / src / objects - inl . 
h <nl> void DescriptorArray : : SetSortedKey ( int descriptor_index , int pointer ) { <nl> } <nl> <nl> <nl> - void DescriptorArray : : SetRepresentation ( int descriptor_index , <nl> - Representation representation ) { <nl> - DCHECK ( ! representation . IsNone ( ) ) ; <nl> - PropertyDetails details = GetDetails ( descriptor_index ) ; <nl> - set ( ToDetailsIndex ( descriptor_index ) , <nl> - details . CopyWithRepresentation ( representation ) . AsSmi ( ) ) ; <nl> - } <nl> - <nl> - <nl> Object * * DescriptorArray : : GetValueSlot ( int descriptor_number ) { <nl> DCHECK ( descriptor_number < number_of_descriptors ( ) ) ; <nl> return RawFieldOfElementAt ( ToValueIndex ( descriptor_number ) ) ; <nl> mmm a / src / objects . cc <nl> ppp b / src / objects . cc <nl> int Map : : NumberOfFields ( ) { <nl> return result ; <nl> } <nl> <nl> - Handle < Map > Map : : CopyGeneralizeAllRepresentations ( <nl> - Handle < Map > map , ElementsKind elements_kind , int modify_index , <nl> - StoreMode store_mode , PropertyKind kind , PropertyAttributes attributes , <nl> - const char * reason ) { <nl> + void DescriptorArray : : GeneralizeAllFields ( ) { <nl> + int length = number_of_descriptors ( ) ; <nl> + for ( int i = 0 ; i < length ; i + + ) { <nl> + PropertyDetails details = GetDetails ( i ) ; <nl> + details = details . CopyWithRepresentation ( Representation : : Tagged ( ) ) ; <nl> + if ( details . location ( ) = = kField ) { <nl> + DCHECK_EQ ( kData , details . kind ( ) ) ; <nl> + SetValue ( i , FieldType : : Any ( ) ) ; <nl> + } <nl> + set ( ToDetailsIndex ( i ) , details . 
AsSmi ( ) ) ; <nl> + } <nl> + } <nl> + <nl> + Handle < Map > Map : : CopyGeneralizeAllFields ( Handle < Map > map , <nl> + ElementsKind elements_kind , <nl> + int modify_index , StoreMode store_mode , <nl> + PropertyKind kind , <nl> + PropertyAttributes attributes , <nl> + const char * reason ) { <nl> Isolate * isolate = map - > GetIsolate ( ) ; <nl> Handle < DescriptorArray > old_descriptors ( map - > instance_descriptors ( ) , isolate ) ; <nl> int number_of_own_descriptors = map - > NumberOfOwnDescriptors ( ) ; <nl> Handle < DescriptorArray > descriptors = <nl> DescriptorArray : : CopyUpTo ( old_descriptors , number_of_own_descriptors ) ; <nl> - <nl> - for ( int i = 0 ; i < number_of_own_descriptors ; i + + ) { <nl> - descriptors - > SetRepresentation ( i , Representation : : Tagged ( ) ) ; <nl> - if ( descriptors - > GetDetails ( i ) . location ( ) = = kField ) { <nl> - DCHECK_EQ ( kData , descriptors - > GetDetails ( i ) . kind ( ) ) ; <nl> - descriptors - > SetValue ( i , FieldType : : Any ( ) ) ; <nl> - } <nl> - } <nl> + descriptors - > GeneralizeAllFields ( ) ; <nl> <nl> Handle < LayoutDescriptor > new_layout_descriptor ( <nl> LayoutDescriptor : : FastPointerLayout ( ) , isolate ) ; <nl> Handle < Map > Map : : ReconfigureElementsKind ( Handle < Map > map , <nl> return mu . ReconfigureElementsKind ( new_elements_kind ) ; <nl> } <nl> <nl> - / / Generalize the representation of all DATA descriptors . <nl> - Handle < Map > Map : : GeneralizeAllFieldRepresentations ( <nl> - Handle < Map > map ) { <nl> + / / Generalize all fields and update the transition tree . <nl> + Handle < Map > Map : : GeneralizeAllFields ( Handle < Map > map ) { <nl> Isolate * isolate = map - > GetIsolate ( ) ; <nl> Handle < FieldType > any_type = FieldType : : Any ( isolate ) ; <nl> <nl> Handle < Map > Map : : CopyReplaceDescriptors ( <nl> CHECK ( maybe_name . 
ToHandle ( & name ) ) ; <nl> ConnectTransition ( map , result , name , simple_flag ) ; <nl> } else { <nl> - int length = descriptors - > number_of_descriptors ( ) ; <nl> - for ( int i = 0 ; i < length ; i + + ) { <nl> - descriptors - > SetRepresentation ( i , Representation : : Tagged ( ) ) ; <nl> - if ( descriptors - > GetDetails ( i ) . location ( ) = = kField ) { <nl> - DCHECK_EQ ( kData , descriptors - > GetDetails ( i ) . kind ( ) ) ; <nl> - descriptors - > SetValue ( i , FieldType : : Any ( ) ) ; <nl> - } <nl> - } <nl> + descriptors - > GeneralizeAllFields ( ) ; <nl> result - > InitializeDescriptors ( * descriptors , <nl> LayoutDescriptor : : FastPointerLayout ( ) ) ; <nl> } <nl> Handle < Map > Map : : ReconfigureExistingProperty ( Handle < Map > map , int descriptor , <nl> if ( ! map - > GetBackPointer ( ) - > IsMap ( ) ) { <nl> / / There is no benefit from reconstructing transition tree for maps without <nl> / / back pointers . <nl> - return CopyGeneralizeAllRepresentations ( <nl> - map , map - > elements_kind ( ) , descriptor , FORCE_FIELD , kind , attributes , <nl> - " GenAll_AttributesMismatchProtoMap " ) ; <nl> + return CopyGeneralizeAllFields ( map , map - > elements_kind ( ) , descriptor , <nl> + FORCE_FIELD , kind , attributes , <nl> + " GenAll_AttributesMismatchProtoMap " ) ; <nl> } <nl> <nl> if ( FLAG_trace_generalization ) { <nl> mmm a / src / objects . h <nl> ppp b / src / objects . h <nl> class DescriptorArray : public FixedArray { <nl> inline Name * GetSortedKey ( int descriptor_number ) ; <nl> inline int GetSortedKeyIndex ( int descriptor_number ) ; <nl> inline void SetSortedKey ( int pointer , int descriptor_number ) ; <nl> - inline void SetRepresentation ( int descriptor_number , <nl> - Representation representation ) ; <nl> <nl> / / Accessor for complete descriptor . 
<nl> inline void Get ( int descriptor_number , Descriptor * desc ) ; <nl> class DescriptorArray : public FixedArray { <nl> PropertyDetails details ) ; <nl> void Replace ( int descriptor_number , Descriptor * descriptor ) ; <nl> <nl> + / / Generalizes representation and field type of all field descriptors . <nl> + void GeneralizeAllFields ( ) ; <nl> + <nl> / / Append automatically sets the enumeration index . This should only be used <nl> / / to add descriptors in bulk at the end , followed by sorting the descriptor <nl> / / array . <nl> class Map : public HeapObject { <nl> int target_inobject , int target_unused , <nl> int * old_number_of_fields ) ; <nl> / / TODO ( ishell ) : moveit ! <nl> - static Handle < Map > GeneralizeAllFieldRepresentations ( Handle < Map > map ) ; <nl> + static Handle < Map > GeneralizeAllFields ( Handle < Map > map ) ; <nl> MUST_USE_RESULT static Handle < FieldType > GeneralizeFieldType ( <nl> Representation rep1 , Handle < FieldType > type1 , Representation rep2 , <nl> Handle < FieldType > type2 , Isolate * isolate ) ; <nl> class Map : public HeapObject { <nl> PropertyNormalizationMode mode ) ; <nl> <nl> / / TODO ( ishell ) : Move to MapUpdater . <nl> - static Handle < Map > CopyGeneralizeAllRepresentations ( <nl> + static Handle < Map > CopyGeneralizeAllFields ( <nl> Handle < Map > map , ElementsKind elements_kind , int modify_index , <nl> StoreMode store_mode , PropertyKind kind , PropertyAttributes attributes , <nl> const char * reason ) ; <nl> mmm a / test / cctest / test - field - type - tracking . cc <nl> ppp b / test / cctest / test - field - type - tracking . cc <nl> struct CheckUnrelated { <nl> <nl> / / Checks that given | map | is NOT deprecated , and | new_map | is a result of <nl> / / copy - generalize - all - representations . 
<nl> - struct CheckCopyGeneralizeAllRepresentations { <nl> + struct CheckCopyGeneralizeAllFields { <nl> void Check ( Handle < Map > map , Handle < Map > new_map , Expectations & expectations ) { <nl> CHECK ( ! map - > is_deprecated ( ) ) ; <nl> CHECK_NE ( * map , * new_map ) ; <nl> TEST ( ReconfigureDataFieldAttribute_AccConstantToAccFieldAfterTargetMap ) { <nl> <nl> TestConfig config ; <nl> if ( IS_ACCESSOR_FIELD_SUPPORTED ) { <nl> - CheckCopyGeneralizeAllRepresentations checker ; <nl> + CheckCopyGeneralizeAllFields checker ; <nl> TestReconfigureProperty_CustomPropertyAfterTargetMap ( config , checker ) ; <nl> } else { <nl> / / Currently we have a copy - generalize - all - representations case . <nl> - CheckCopyGeneralizeAllRepresentations checker ; <nl> + CheckCopyGeneralizeAllFields checker ; <nl> TestReconfigureProperty_CustomPropertyAfterTargetMap ( config , checker ) ; <nl> } <nl> } <nl>
|
[runtime] Add DescriptorArray::GeneralizeAllFields().
|
v8/v8
|
322a37856a786723cf995bb005f66b9f1573e6ce
|
2017-01-17T15:39:06Z
|
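The commit above renames several "generalize representation" entry points and adds `DescriptorArray::GeneralizeAllFields()`. A hypothetical, heavily simplified C++ sketch of the underlying idea: each field descriptor tracks a representation, and generalization joins two representations on a lattice (here reduced to a total order for illustration; V8's real lattice and `DescriptorArray` layout are more involved):

```cpp
#include <cassert>

// Simplified, hypothetical representation lattice: None < Smi < Double < Tagged.
// GeneralizeAllFields conceptually applies this join to every field descriptor.
enum class Representation { None = 0, Smi = 1, Double = 2, Tagged = 3 };

Representation Generalize(Representation a, Representation b) {
  // On this totally ordered sketch lattice, the join is simply the max.
  return a > b ? a : b;
}
```

The real `GeneralizeAllFields()` additionally widens each field's type to the most general `FieldType`; this sketch only shows the representation join.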
mmm a / include / swift / AST / Decl . h <nl> ppp b / include / swift / AST / Decl . h <nl> class ClassDecl : public NominalTypeDecl { <nl> struct SelfReferenceKind { <nl> bool result ; <nl> bool parameter ; <nl> + bool requirement ; <nl> bool other ; <nl> <nl> / / / The type does not refer to ' Self ' at all . <nl> static SelfReferenceKind None ( ) { <nl> - return SelfReferenceKind ( false , false , false ) ; <nl> + return SelfReferenceKind ( false , false , false , false ) ; <nl> } <nl> <nl> / / / The type refers to ' Self ' , but only as the result type of a method . <nl> static SelfReferenceKind Result ( ) { <nl> - return SelfReferenceKind ( true , false , false ) ; <nl> + return SelfReferenceKind ( true , false , false , false ) ; <nl> } <nl> <nl> / / / The type refers to ' Self ' , but only as the parameter type of a method . <nl> static SelfReferenceKind Parameter ( ) { <nl> - return SelfReferenceKind ( false , true , false ) ; <nl> + return SelfReferenceKind ( false , true , false , false ) ; <nl> + } <nl> + <nl> + / / / The type refers to ' Self ' within a same - type requiement . <nl> + static SelfReferenceKind Requirement ( ) { <nl> + return SelfReferenceKind ( false , false , true , false ) ; <nl> } <nl> <nl> / / / The type refers to ' Self ' in a position that is invariant . <nl> static SelfReferenceKind Other ( ) { <nl> - return SelfReferenceKind ( false , false , true ) ; <nl> + return SelfReferenceKind ( false , false , false , true ) ; <nl> } <nl> <nl> SelfReferenceKind flip ( ) const { <nl> - return SelfReferenceKind ( parameter , result , other ) ; <nl> + return SelfReferenceKind ( parameter , result , requirement , other ) ; <nl> } <nl> <nl> SelfReferenceKind operator | = ( SelfReferenceKind kind ) { <nl> result | = kind . result ; <nl> + requirement | = kind . requirement ; <nl> parameter | = kind . parameter ; <nl> other | = kind . 
other ; <nl> return * this ; <nl> } <nl> <nl> operator bool ( ) const { <nl> - return result | | parameter | | other ; <nl> + return result | | parameter | | requirement | | other ; <nl> } <nl> <nl> private : <nl> - SelfReferenceKind ( bool result , bool parameter , bool other ) <nl> - : result ( result ) , parameter ( parameter ) , other ( other ) { } <nl> + SelfReferenceKind ( bool result , bool parameter , bool requirement , bool other ) <nl> + : result ( result ) , parameter ( parameter ) , requirement ( requirement ) , <nl> + other ( other ) { } <nl> } ; <nl> <nl> / / / ProtocolDecl - A declaration of a protocol , for example : <nl> mmm a / include / swift / AST / DiagnosticsSema . def <nl> ppp b / include / swift / AST / DiagnosticsSema . def <nl> ERROR ( witness_self_non_subtype , none , <nl> " ( % 2 ) because it uses ' Self ' in a non - parameter , non - result type " <nl> " position " , <nl> ( Type , DeclName , Type ) ) <nl> + ERROR ( witness_self_same_type , none , <nl> + " % 0 % 1 in non - final class % 2 cannot be used to satisfy requirement % 3 % 4 " <nl> + " ( in protocol % 5 ) due to same - type requirement involving ' Self ' " , <nl> + ( DescriptiveDeclKind , DeclName , Type , DescriptiveDeclKind , <nl> + DeclName , Type ) ) <nl> + WARNING ( witness_self_same_type_warn , none , <nl> + " % 0 % 1 in non - final class % 2 cannot be used to satisfy requirement % 3 % 4 " <nl> + " ( in protocol % 5 ) due to same - type requirement involving ' Self ' " , <nl> + ( DescriptiveDeclKind , DeclName , Type , DescriptiveDeclKind , <nl> + DeclName , Type ) ) <nl> + NOTE ( witness_self_weaken_same_type , none , <nl> + " consider weakening the same - type requirement % 0 = = % 1 to a superclass " <nl> + " requirement " , ( Type , Type ) ) <nl> ERROR ( witness_requires_dynamic_self , none , <nl> " method % 0 in non - final class % 1 must return ` Self ` to conform to " <nl> " protocol % 2 " , <nl> mmm a / lib / AST / Decl . cpp <nl> ppp b / lib / AST / Decl . 
cpp <nl> findProtocolSelfReferences ( const ProtocolDecl * proto , Type type , <nl> return SelfReferenceKind : : None ( ) ; <nl> } <nl> <nl> + / / / Find Self references in a generic signature ' s same - type requirements . <nl> + static SelfReferenceKind <nl> + findProtocolSelfReferences ( const ProtocolDecl * protocol , <nl> + GenericSignature * genericSig ) { <nl> + if ( ! genericSig ) return SelfReferenceKind : : None ( ) ; <nl> + <nl> + auto selfTy = protocol - > getSelfInterfaceType ( ) ; <nl> + for ( const auto & req : genericSig - > getRequirements ( ) ) { <nl> + if ( req . getKind ( ) ! = RequirementKind : : SameType ) <nl> + continue ; <nl> + <nl> + if ( req . getFirstType ( ) - > isEqual ( selfTy ) | | <nl> + req . getSecondType ( ) - > isEqual ( selfTy ) ) <nl> + return SelfReferenceKind : : Requirement ( ) ; <nl> + } <nl> + <nl> + return SelfReferenceKind : : None ( ) ; <nl> + } <nl> + <nl> / / / Find Self references within the given requirement . <nl> SelfReferenceKind <nl> ProtocolDecl : : findProtocolSelfReferences ( const ValueDecl * value , <nl> ProtocolDecl : : findProtocolSelfReferences ( const ValueDecl * value , <nl> if ( type - > hasError ( ) ) <nl> return SelfReferenceKind : : None ( ) ; <nl> <nl> - if ( isa < AbstractFunctionDecl > ( value ) ) { <nl> + if ( auto func = dyn_cast < AbstractFunctionDecl > ( value ) ) { <nl> / / Skip the ' self ' parameter . <nl> type = type - > castTo < AnyFunctionType > ( ) - > getResult ( ) ; <nl> <nl> ProtocolDecl : : findProtocolSelfReferences ( const ValueDecl * value , <nl> return SelfReferenceKind : : Other ( ) ; <nl> } <nl> <nl> + / / Check the requirements of a generic function . 
<nl> + if ( func - > isGeneric ( ) ) { <nl> + if ( auto result = <nl> + : : findProtocolSelfReferences ( this , func - > getGenericSignature ( ) ) ) <nl> + return result ; <nl> + } <nl> + <nl> return : : findProtocolSelfReferences ( this , type , <nl> skipAssocTypes ) ; <nl> - } else if ( isa < SubscriptDecl > ( value ) ) { <nl> + } else if ( auto subscript = dyn_cast < SubscriptDecl > ( value ) ) { <nl> + / / Check the requirements of a generic subscript . <nl> + if ( subscript - > isGeneric ( ) ) { <nl> + if ( auto result = <nl> + : : findProtocolSelfReferences ( this , <nl> + subscript - > getGenericSignature ( ) ) ) <nl> + return result ; <nl> + } <nl> + <nl> return : : findProtocolSelfReferences ( this , type , <nl> skipAssocTypes ) ; <nl> } else { <nl> mmm a / lib / Sema / TypeCheckProtocol . cpp <nl> ppp b / lib / Sema / TypeCheckProtocol . cpp <nl> witnessHasImplementsAttrForRequirement ( ValueDecl * witness , <nl> return false ; <nl> } <nl> <nl> + / / / Determine the given witness has a same - type constraint constraining the <nl> + / / / given ' Self ' type , and return the <nl> + / / / <nl> + / / / \ returns None if there is no such constraint ; a non - empty optional that <nl> + / / / may have the \ c RequirementRepr for the actual constraint . <nl> + static Optional < RequirementRepr * > <nl> + getAdopteeSelfSameTypeConstraint ( ClassDecl * selfClass , ValueDecl * witness ) { <nl> + auto genericSig = <nl> + witness - > getInnermostDeclContext ( ) - > getGenericSignatureOfContext ( ) ; <nl> + if ( ! genericSig ) return None ; <nl> + <nl> + for ( const auto & req : genericSig - > getRequirements ( ) ) { <nl> + if ( req . getKind ( ) ! = RequirementKind : : SameType ) <nl> + continue ; <nl> + <nl> + if ( req . getFirstType ( ) - > getAnyNominal ( ) = = selfClass | | <nl> + req . getSecondType ( ) - > getAnyNominal ( ) = = selfClass ) { <nl> + / / Try to find the requirement - as - written . 
<nl> + GenericParamList * genericParams = nullptr ; <nl> + <nl> + if ( auto func = dyn_cast < AbstractFunctionDecl > ( witness ) ) <nl> + genericParams = func - > getGenericParams ( ) ; <nl> + else if ( auto subscript = dyn_cast < SubscriptDecl > ( witness ) ) <nl> + genericParams = subscript - > getGenericParams ( ) ; <nl> + if ( genericParams ) { <nl> + for ( auto & req : genericParams - > getRequirements ( ) ) { <nl> + if ( req . getKind ( ) ! = RequirementReprKind : : SameType ) <nl> + continue ; <nl> + <nl> + if ( req . getFirstType ( ) - > getAnyNominal ( ) = = selfClass | | <nl> + req . getSecondType ( ) - > getAnyNominal ( ) = = selfClass ) <nl> + return & req ; <nl> + } <nl> + } <nl> + <nl> + / / Form an optional ( nullptr ) to indicate that we don ' t have the <nl> + / / requirement itself . <nl> + return nullptr ; <nl> + } <nl> + } <nl> + <nl> + return None ; <nl> + } <nl> + <nl> ResolveWitnessResult <nl> ConformanceChecker : : resolveWitnessViaLookup ( ValueDecl * requirement ) { <nl> assert ( ! isa < AssociatedTypeDecl > ( requirement ) & & " Use resolveTypeWitnessVia * " ) ; <nl> ConformanceChecker : : resolveWitnessViaLookup ( ValueDecl * requirement ) { <nl> conformance - > getType ( ) ) ; <nl> } ) ; <nl> } <nl> + } else if ( selfKind . requirement ) { <nl> + if ( auto constraint = getAdopteeSelfSameTypeConstraint ( classDecl , <nl> + witness ) ) { <nl> + / / A " Self = = " constraint works incorrectly with subclasses . Complain . <nl> + auto proto = Conformance - > getProtocol ( ) ; <nl> + auto & diags = proto - > getASTContext ( ) . Diags ; <nl> + diags . diagnose ( witness - > getLoc ( ) , <nl> + proto - > getASTContext ( ) . LangOpts . isSwiftVersion3 ( ) <nl> + ? 
diag : : witness_self_same_type_warn <nl> + : diag : : witness_self_same_type , <nl> + witness - > getDescriptiveKind ( ) , <nl> + witness - > getFullName ( ) , <nl> + Conformance - > getType ( ) , <nl> + requirement - > getDescriptiveKind ( ) , <nl> + requirement - > getFullName ( ) , <nl> + proto - > getDeclaredType ( ) ) ; <nl> + <nl> + if ( auto requirementRepr = * constraint ) { <nl> + diags . diagnose ( requirementRepr - > getEqualLoc ( ) , <nl> + diag : : witness_self_weaken_same_type , <nl> + requirementRepr - > getFirstType ( ) , <nl> + requirementRepr - > getSecondType ( ) ) <nl> + . fixItReplace ( requirementRepr - > getEqualLoc ( ) , " : " ) ; <nl> + } <nl> + } <nl> } <nl> <nl> / / A non - final class can model a protocol requirement with a <nl> new file mode 100644 <nl> index 000000000000 . . 55871749989f <nl> mmm / dev / null <nl> ppp b / test / Compatibility / self_same_type . swift <nl> <nl> + / / RUN : % target - typecheck - verify - swift - swift - version 3 <nl> + <nl> + protocol P { <nl> + associatedtype T <nl> + } <nl> + <nl> + protocol Q { <nl> + func foo < T : P > ( _ : T , _ : T . T ) where T . T = = Self <nl> + } <nl> + <nl> + class C1 : Q { <nl> + func foo < T : P > ( _ : T , _ : C1 ) where T . T = = C1 { } / / expected - warning { { instance method ' foo ' in non - final class ' C1 ' cannot be used to satisfy requirement instance method ' foo ' ( in protocol ' Q ' ) due to same - type requirement involving ' Self ' } } } } <nl> + / / expected - note @ - 1 { { consider weakening the same - type requirement ' T . T ' = = ' C1 ' to a superclass requirement } } { { 41 - 43 = : } } <nl> + } <nl> + <nl> + final class C2 : Q { <nl> + func foo < T : P > ( _ : T , _ : C2 ) where T . T = = C2 { } <nl> + } <nl> + <nl> + class C3 : Q { <nl> + func foo < T : P > ( _ : T , _ : C3 ) where T . T : C3 { } <nl> + } <nl> mmm a / test / Interpreter / SDK / archive_attributes . swift <nl> ppp b / test / Interpreter / SDK / archive_attributes . 
swift <nl> <nl> <nl> / / REQUIRES : executable_test <nl> / / REQUIRES : objc_interop <nl> + / / REQUIRES : CPU = i386_or_x86_64 <nl> / / UNSUPPORTED : OS = tvos <nl> / / UNSUPPORTED : OS = watchos <nl> <nl> new file mode 100644 <nl> index 000000000000 . . 1891cd92374b <nl> mmm / dev / null <nl> ppp b / test / decl / protocol / conforms / self_same_type . swift <nl> <nl> + / / RUN : % target - typecheck - verify - swift - swift - version 4 <nl> + <nl> + protocol P { <nl> + associatedtype T <nl> + } <nl> + <nl> + protocol Q { <nl> + func foo < T : P > ( _ : T , _ : T . T ) where T . T = = Self <nl> + } <nl> + <nl> + class C1 : Q { <nl> + func foo < T : P > ( _ : T , _ : C1 ) where T . T = = C1 { } / / expected - error { { instance method ' foo ' in non - final class ' C1 ' cannot be used to satisfy requirement instance method ' foo ' ( in protocol ' Q ' ) due to same - type requirement involving ' Self ' } } } } <nl> + / / expected - note @ - 1 { { consider weakening the same - type requirement ' T . T ' = = ' C1 ' to a superclass requirement } } { { 41 - 43 = : } } <nl> + } <nl> + <nl> + final class C2 : Q { <nl> + func foo < T : P > ( _ : T , _ : C2 ) where T . T = = C2 { } <nl> + } <nl> + <nl> + class C3 : Q { <nl> + func foo < T : P > ( _ : T , _ : C3 ) where T . T : C3 { } <nl> + } <nl>
|
Merge remote-tracking branch 'origin/master' into master-next
|
apple/swift
|
a3e2512e2bbabdad11cc12ba4b8ae0aeabb365e3
|
2017-05-22T18:08:34Z
|
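The Swift diff above threads a new `requirement` flag through `SelfReferenceKind`, touching every factory, `operator|=`, and the boolean conversion. A reduced standalone C++ sketch of that flag-set pattern (field names taken from the diff; this struct is a stand-in for illustration, not the real Swift AST type):

```cpp
#include <cassert>

// Sketch of the extended SelfReferenceKind flag set: adding the `requirement`
// flag forces an update in every factory and in both operators, which is
// exactly the shape of the diff above.
struct SelfReferenceKind {
  bool result, parameter, requirement, other;

  static SelfReferenceKind None()        { return {false, false, false, false}; }
  static SelfReferenceKind Requirement() { return {false, false, true,  false}; }

  SelfReferenceKind& operator|=(SelfReferenceKind k) {
    result |= k.result;
    parameter |= k.parameter;
    requirement |= k.requirement;
    other |= k.other;
    return *this;
  }

  explicit operator bool() const {
    return result || parameter || requirement || other;
  }
};
```

The boolean conversion is what lets callers write `if (auto result = findProtocolSelfReferences(...))` in the diff: any set flag makes the kind truthy.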
mmm a / tensorflow / lite / delegates / gpu / cl / kernels / convolution_transposed . cc <nl> ppp b / tensorflow / lite / delegates / gpu / cl / kernels / convolution_transposed . cc <nl> limitations under the License . <nl> <nl> # include < string > <nl> # include < utility > <nl> + # include < vector > <nl> <nl> # include " absl / strings / substitute . h " <nl> # include " tensorflow / lite / delegates / gpu / cl / kernels / util . h " <nl> # include " tensorflow / lite / delegates / gpu / cl / kernels / work_group_picking . h " <nl> # include " tensorflow / lite / delegates / gpu / cl / precision . h " <nl> # include " tensorflow / lite / delegates / gpu / cl / tensor_type . h " <nl> + # include " tensorflow / lite / delegates / gpu / common / shape . h " <nl> # include " tensorflow / lite / delegates / gpu / common / status . h " <nl> <nl> namespace tflite { <nl> std : : string ConvolutionTransposed : : GenerateConvolutionTransposedCode ( <nl> auto src_desc = op_def . src_tensors [ 0 ] ; <nl> src_desc . SetTextureAddressMode ( TextureAddressMode : : ZERO ) ; <nl> AddSrcTensor ( " src_tensor " , src_desc ) ; <nl> - <nl> AddDstTensor ( " dst_tensor " , op_def . dst_tensors [ 0 ] ) ; <nl> <nl> - const auto src_tensor_type = op_def . src_tensors [ 0 ] . storage_type ; <nl> - bool image_buffer = src_tensor_type = = TensorStorageType : : IMAGE_BUFFER ; <nl> - bool manual_clamp = <nl> - image_buffer | | src_tensor_type = = TensorStorageType : : BUFFER ; <nl> + const auto & src_def = op_def . src_tensors [ 0 ] ; <nl> <nl> std : : string c = GetCommonDefines ( op_def . precision ) ; <nl> <nl> std : : string ConvolutionTransposed : : GenerateConvolutionTransposedCode ( <nl> auto generate_id = [ & ] ( const std : : string & x , const std : : string & y , <nl> const std : : string & z ) { <nl> std : : string id ; <nl> - if ( op_def . src_tensors [ 0 ] . HasAxis ( Axis : : WIDTH ) ) { <nl> + if ( src_def . 
HasAxis ( Axis : : WIDTH ) ) { <nl> id + = " _w " + x ; <nl> } <nl> - if ( op_def . src_tensors [ 0 ] . HasAxis ( Axis : : HEIGHT ) ) { <nl> + if ( src_def . HasAxis ( Axis : : HEIGHT ) ) { <nl> id + = " _h " + y ; <nl> } <nl> - if ( op_def . src_tensors [ 0 ] . HasAxis ( Axis : : DEPTH ) ) { <nl> + if ( src_def . HasAxis ( Axis : : DEPTH ) ) { <nl> id + = " _d " + z ; <nl> } <nl> return id ; <nl> std : : string ConvolutionTransposed : : GenerateConvolutionTransposedCode ( <nl> auto generate_check = [ & ] ( const std : : string & x , const std : : string & y , <nl> const std : : string & z ) { <nl> std : : string check ; <nl> - if ( op_def . src_tensors [ 0 ] . HasAxis ( Axis : : WIDTH ) ) { <nl> - check + = " in_x " + x ; <nl> - } <nl> - if ( op_def . src_tensors [ 0 ] . HasAxis ( Axis : : HEIGHT ) ) { <nl> - check + = " & & in_y " + y ; <nl> - } <nl> - if ( op_def . src_tensors [ 0 ] . HasAxis ( Axis : : DEPTH ) ) { <nl> - check + = " & & in_z " + z ; <nl> + const std : : vector < Axis > axes { Axis : : WIDTH , Axis : : HEIGHT , Axis : : DEPTH } ; <nl> + const std : : vector < std : : string > names { " in_x " , " in_y " , " in_z " } ; <nl> + const std : : vector < std : : string > coords { x , y , z } ; <nl> + for ( int i = 0 ; i < axes . size ( ) ; + + i ) { <nl> + const auto & axis = axes [ i ] ; <nl> + if ( src_def . HasAxis ( axis ) & & ! src_def . SupportsZeroClamp ( axis ) & & <nl> + block_size [ i ] ! = 1 ) { <nl> + if ( ! check . empty ( ) ) { <nl> + check + = " & & " ; <nl> + } <nl> + check + = names [ i ] + coords [ i ] ; <nl> + } <nl> } <nl> return check ; <nl> } ; <nl> std : : string ConvolutionTransposed : : GenerateConvolutionTransposedCode ( <nl> c + = " int ceil_x = dst_x / args . stride_x ; \ n " ; <nl> c + = " dst_x = ceil_x * args . stride_x * " + std : : to_string ( block_size . x ) + <nl> " + rem_x ; \ n " ; <nl> - if ( op_def . src_tensors [ 0 ] . HasAxis ( Axis : : DEPTH ) ) { <nl> + if ( src_def . 
HasAxis ( Axis : : DEPTH ) ) { <nl> c + = " int linear_id_y = get_global_id ( 1 ) ; \ n " ; <nl> c + = " int dst_y = linear_id_y % args . grid_size_y ; \ n " ; <nl> c + = " int dst_z = linear_id_y / args . grid_size_y ; \ n " ; <nl> std : : string ConvolutionTransposed : : GenerateConvolutionTransposedCode ( <nl> if ( weights_are_buffer ) { <nl> c + = " int f_base = dst_s * args . src_tensor . Slices ( ) * args . kernel_size_x " <nl> " * args . kernel_size_y " ; <nl> - if ( op_def . src_tensors [ 0 ] . HasAxis ( Axis : : DEPTH ) ) { <nl> + if ( src_def . HasAxis ( Axis : : DEPTH ) ) { <nl> c + = " * args . kernel_size_z " ; <nl> } <nl> c + = " ; \ n " ; <nl> std : : string ConvolutionTransposed : : GenerateConvolutionTransposedCode ( <nl> c + = <nl> " int src_y = ( kernel_first_dst_y + offset_y_strided ) / args . stride_y - " <nl> " offset_y ; \ n " ; <nl> - if ( op_def . src_tensors [ 0 ] . HasAxis ( Axis : : DEPTH ) ) { <nl> + if ( src_def . HasAxis ( Axis : : DEPTH ) ) { <nl> c + = " int kernel_first_dst_z = dst_z + args . padding_z ; \ n " ; <nl> c + = " int kernel_last_dst_z = kernel_first_dst_z - args . kernel_size_z ; \ n " ; <nl> c + = " int offset_z = abs ( args . padding_z ) ; \ n " ; <nl> std : : string ConvolutionTransposed : : GenerateConvolutionTransposedCode ( <nl> for ( int z = 0 ; z < block_size . z ; + + z ) { <nl> const std : : string zindex = std : : to_string ( z ) ; <nl> c + = " int sz " + zindex + " = src_z + " + zindex + " ; \ n " ; <nl> - if ( src_tensor_type ! = TensorStorageType : : TEXTURE_3D ) { <nl> + if ( ! src_def . SupportsZeroClamp ( Axis : : DEPTH ) ) { <nl> c + = " bool in_z " + zindex + " = sz " + zindex + " > = 0 & & sz " + <nl> zindex + " < args . src_tensor . Depth ( ) ; \ n " ; <nl> + if ( ! src_def . CanReadOutOfBorder ( Axis : : DEPTH ) ) { <nl> + c + = " sz " + zindex + " = clamp ( sz " + zindex + <nl> + " , 0 , args . src_tensor . Depth ( ) - 1 ) ; \ n " ; <nl> + } <nl> } <nl> } <nl> - if ( block_size . 
z = = 1 & & <nl> - ( src_tensor_type ! = TensorStorageType : : TEXTURE_3D ) ) { <nl> + if ( block_size . z = = 1 & & ! src_def . SupportsZeroClamp ( Axis : : DEPTH ) ) { <nl> c + = " if ( ! in_z0 ) continue ; \ n " ; <nl> } <nl> c + = " int kernel_z = kernel_first_dst_z - src_as_dst_z ; \ n " ; <nl> std : : string ConvolutionTransposed : : GenerateConvolutionTransposedCode ( <nl> for ( int y = 0 ; y < block_size . y ; + + y ) { <nl> const std : : string yindex = std : : to_string ( y ) ; <nl> const std : : string src_y = <nl> - op_def . src_tensors [ 0 ] . HasAxis ( Axis : : DEPTH ) ? " src_y_copy " : " src_y " ; <nl> + src_def . HasAxis ( Axis : : DEPTH ) ? " src_y_copy " : " src_y " ; <nl> c + = " int sy " + yindex + " = " + src_y + " + " + yindex + " ; \ n " ; <nl> - if ( manual_clamp ) { <nl> + if ( ! src_def . SupportsZeroClamp ( Axis : : HEIGHT ) ) { <nl> c + = " bool in_y " + yindex + " = sy " + yindex + " > = 0 & & sy " + <nl> yindex + " < args . src_tensor . Height ( ) ; \ n " ; <nl> - if ( ! image_buffer ) { <nl> + if ( ! src_def . CanReadOutOfBorder ( Axis : : HEIGHT ) ) { <nl> c + = " sy " + yindex + " = clamp ( sy " + yindex + <nl> " , 0 , args . src_tensor . Height ( ) - 1 ) ; \ n " ; <nl> } <nl> } <nl> } <nl> + if ( block_size . y = = 1 & & ! src_def . SupportsZeroClamp ( Axis : : HEIGHT ) ) { <nl> + c + = " if ( ! in_y0 ) continue ; \ n " ; <nl> + } <nl> c + = " int kernel_y = kernel_first_dst_y - src_as_dst_y ; \ n " ; <nl> c + = " int src_as_dst_x = src_x * args . stride_x ; \ n " ; <nl> c + = " int src_x_copy = src_x ; \ n " ; <nl> std : : string ConvolutionTransposed : : GenerateConvolutionTransposedCode ( <nl> for ( int x = 0 ; x < block_size . x ; + + x ) { <nl> const std : : string xindex = std : : to_string ( x ) ; <nl> c + = " int sx " + xindex + " = src_x_copy + " + xindex + " ; \ n " ; <nl> - if ( manual_clamp ) { <nl> + if ( ! src_def . 
SupportsZeroClamp ( Axis : : WIDTH ) ) { <nl> c + = " bool in_x " + xindex + " = sx " + xindex + " > = 0 & & sx " + <nl> xindex + " < args . src_tensor . Width ( ) ; \ n " ; <nl> - if ( ! image_buffer ) { <nl> + if ( ! src_def . CanReadOutOfBorder ( Axis : : WIDTH ) ) { <nl> c + = " sx " + xindex + " = clamp ( sx " + xindex + <nl> " , 0 , args . src_tensor . Width ( ) - 1 ) ; \ n " ; <nl> } <nl> } <nl> } <nl> + if ( block_size . x = = 1 & & ! src_def . SupportsZeroClamp ( Axis : : WIDTH ) ) { <nl> + c + = " if ( ! in_x0 ) continue ; \ n " ; <nl> + } <nl> for ( int z = 0 ; z < block_size . z ; + + z ) { <nl> const std : : string zind = std : : to_string ( z ) ; <nl> for ( int y = 0 ; y < block_size . y ; + + y ) { <nl> std : : string ConvolutionTransposed : : GenerateConvolutionTransposedCode ( <nl> const std : : string id = generate_id ( xind , yind , zind ) ; <nl> const std : : string check = generate_check ( xind , yind , zind ) ; <nl> std : : string coords = " sx " + xind + " , sy " + yind ; <nl> - if ( op_def . src_tensors [ 0 ] . HasAxis ( Axis : : DEPTH ) ) { <nl> + if ( src_def . HasAxis ( Axis : : DEPTH ) ) { <nl> coords + = " , sz " + zind ; <nl> } <nl> - c + = " args . src_tensor . GetAddress ( addr " + id + " , " + coords + <nl> - " , 0 ) ; \ n " ; <nl> - if ( image_buffer ) { <nl> + if ( src_def . IsLinear ( ) ) { <nl> + c + = " args . src_tensor . GetAddress ( addr " + id + " , " + coords + <nl> + " , 0 ) ; \ n " ; <nl> + } <nl> + if ( src_def . ReturnsZeroForNegOneRead ( ) ) { <nl> c + = " addr " + id + " = select ( - 1 , addr " + id + " , ( " + check + <nl> " ) ) ; \ n " ; <nl> c + = " int ds " + id + <nl> std : : string ConvolutionTransposed : : GenerateConvolutionTransposedCode ( <nl> } <nl> } <nl> } <nl> - if ( src_tensor_type = = TensorStorageType : : BUFFER ) { <nl> + if ( src_def . storage_type = = TensorStorageType : : BUFFER ) { <nl> c + = " int ds = args . src_tensor . SliceStride ( ) ; \ n " ; <nl> } <nl> - if ( block_size . 
x = = 1 & & block_size . y = = 1 & & manual_clamp ) { <nl> - c + = " if ( ! in_x0 | | ! in_y0 ) continue ; \ n " ; <nl> - } <nl> c + = " int kernel_x = kernel_first_dst_x - src_as_dst_x ; \ n " ; <nl> - if ( op_def . src_tensors [ 0 ] . HasAxis ( Axis : : DEPTH ) ) { <nl> + if ( src_def . HasAxis ( Axis : : DEPTH ) ) { <nl> c + = " int kernel_index = ( kernel_z * args . kernel_size_y + kernel_y ) " <nl> " * args . kernel_size_x + kernel_x ; \ n " ; <nl> } else { <nl> std : : string ConvolutionTransposed : : GenerateConvolutionTransposedCode ( <nl> for ( int x = 0 ; x < block_size . x ; + + x ) { <nl> const std : : string xind = std : : to_string ( x ) ; <nl> const std : : string id = generate_id ( xind , yind , zind ) ; <nl> - const std : : string check = generate_check ( xind , yind , zind ) ; <nl> - if ( image_buffer ) { <nl> - c + = " FLT4 src " + id + " = args . src_tensor . Read ( addr " + id + <nl> - " ) ; addr " + id + " + = ds " + id + " ; \ n " ; <nl> - } else if ( manual_clamp ) { <nl> - if ( conditional_read ) { <nl> - c + = " FLT4 src " + id + " = " + check + <nl> - " ? args . src_tensor . Read ( addr " + id + <nl> - " ) : ( FLT4 ) ( 0 . 0f ) ; addr " + id + " + = ds ; \ n " ; <nl> - } else { <nl> - c + = " FLT4 src " + id + " = args . src_tensor . Read ( addr " + id + <nl> - " ) * ( FLT ) ( " + check + " ) ; addr " + id + " + = ds ; \ n " ; <nl> + std : : string address ; <nl> + if ( src_def . IsLinear ( ) ) { <nl> + address = " addr " + id ; <nl> + } else { <nl> + address = " sx " + xind + " , sy " + yind ; <nl> + if ( src_def . HasAxis ( Axis : : DEPTH ) ) { <nl> + address + = " , sz " + zind ; <nl> } <nl> + address + = " , s " ; <nl> + } <nl> + if ( src_def . ReturnsZeroForNegOneRead ( ) ) { <nl> + c + = " FLT4 src " + id + " = args . src_tensor . Read ( " + address + <nl> + " ) ; " + address + " + = ds " + id + " ; \ n " ; <nl> } else { <nl> - std : : string coords = " sx " + xind + " , sy " + yind ; <nl> - if ( op_def . src_tensors [ 0 ] . 
HasAxis ( Axis : : DEPTH ) ) { <nl> - coords + = " , sz " + zind ; <nl> + const std : : string check = generate_check ( xind , yind , zind ) ; <nl> + if ( ! check . empty ( ) ) { <nl> + if ( conditional_read ) { <nl> + c + = " FLT4 src " + id + " = " + check + <nl> + " ? args . src_tensor . Read ( " + address + " ) : ( FLT4 ) ( 0 . 0f ) ; \ n " ; <nl> + } else { <nl> + c + = " FLT4 src " + id + " = args . src_tensor . Read ( " + <nl> + address + " ) * ( FLT ) ( " + check + " ) ; \ n " ; <nl> + } <nl> + } else { <nl> + c + = " FLT4 src " + id + " = args . src_tensor . Read ( " + <nl> + address + " ) ; \ n " ; <nl> + } <nl> + if ( src_def . IsLinear ( ) ) { <nl> + c + = " addr " + id + " + = ds ; \ n " ; <nl> } <nl> - c + = " FLT4 src " + id + " = args . src_tensor . Read ( " + coords + <nl> - " , s ) ; \ n " ; <nl> } <nl> } <nl> } <nl> std : : string ConvolutionTransposed : : GenerateConvolutionTransposedCode ( <nl> c + = " } \ n " ; <nl> c + = " } \ n " ; <nl> c + = " } \ n " ; <nl> - if ( op_def . src_tensors [ 0 ] . HasAxis ( Axis : : DEPTH ) ) { <nl> + if ( src_def . HasAxis ( Axis : : DEPTH ) ) { <nl> c + = " } \ n " ; <nl> } <nl> for ( int s = 0 ; s < block_size . w ; + + s ) { <nl> std : : string ConvolutionTransposed : : GenerateConvolutionTransposedCode ( <nl> c + = " { \ n " ; <nl> c + = " int xc = dst_x + args . stride_x * " + xind + " ; \ n " ; <nl> c + = " int yc = dst_y + args . stride_y * " + yind + " ; \ n " ; <nl> - if ( op_def . src_tensors [ 0 ] . HasAxis ( Axis : : DEPTH ) ) { <nl> + if ( src_def . HasAxis ( Axis : : DEPTH ) ) { <nl> c + = " int zc = dst_z + args . stride_z * " + zind + " ; \ n " ; <nl> checks + = " & & zc < args . dst_tensor . Depth ( ) " ; <nl> coords + = " , zc " ; <nl> mmm a / tensorflow / lite / delegates / gpu / cl / tensor_type . cc <nl> ppp b / tensorflow / lite / delegates / gpu / cl / tensor_type . 
cc <nl> void TensorDescriptor : : UploadData ( absl : : Span < const float > src ) { <nl> } <nl> } <nl> <nl> + bool TensorDescriptor : : SupportsZeroClamp ( const Axis & axis ) const { <nl> + switch ( storage_type ) { <nl> + case TensorStorageType : : UNKNOWN : <nl> + return false ; <nl> + case TensorStorageType : : BUFFER : <nl> + case TensorStorageType : : IMAGE_BUFFER : <nl> + return false ; <nl> + case TensorStorageType : : TEXTURE_ARRAY : <nl> + case TensorStorageType : : TEXTURE_2D : <nl> + case TensorStorageType : : SINGLE_TEXTURE_2D : <nl> + return axis = = Axis : : WIDTH | | axis = = Axis : : HEIGHT ; <nl> + case TensorStorageType : : TEXTURE_3D : <nl> + return axis = = Axis : : WIDTH | | axis = = Axis : : HEIGHT | | axis = = Axis : : DEPTH ; <nl> + } <nl> + } <nl> + <nl> + bool TensorDescriptor : : CanReadOutOfBorder ( const Axis & axis ) const { <nl> + switch ( storage_type ) { <nl> + case TensorStorageType : : UNKNOWN : <nl> + return false ; <nl> + case TensorStorageType : : BUFFER : <nl> + return false ; <nl> + case TensorStorageType : : IMAGE_BUFFER : <nl> + case TensorStorageType : : TEXTURE_2D : <nl> + case TensorStorageType : : TEXTURE_3D : <nl> + case TensorStorageType : : SINGLE_TEXTURE_2D : <nl> + case TensorStorageType : : TEXTURE_ARRAY : <nl> + return true ; <nl> + } <nl> + } <nl> + <nl> + bool TensorDescriptor : : IsLinear ( ) const { <nl> + return storage_type = = TensorStorageType : : BUFFER | | <nl> + storage_type = = TensorStorageType : : IMAGE_BUFFER ; <nl> + } <nl> + <nl> + bool TensorDescriptor : : ReturnsZeroForNegOneRead ( ) const { <nl> + return storage_type = = TensorStorageType : : IMAGE_BUFFER ; <nl> + } <nl> + <nl> namespace { <nl> int GetLinearIndex ( const TensorDescriptor & desc , const BHWDC & shape , int b , <nl> int x , int y , int d , int s , int sub_c ) { <nl> mmm a / tensorflow / lite / delegates / gpu / cl / tensor_type . h <nl> ppp b / tensorflow / lite / delegates / gpu / cl / tensor_type . 
h <nl> struct TensorDescriptor : public GPUObjectDescriptor { <nl> void UploadData ( const tflite : : gpu : : Tensor < HWC , DataType : : FLOAT32 > & src ) ; <nl> void UploadData ( const tflite : : gpu : : Tensor < Linear , DataType : : FLOAT32 > & src ) ; <nl> <nl> + bool SupportsZeroClamp ( const Axis & axis ) const ; <nl> + bool CanReadOutOfBorder ( const Axis & axis ) const ; <nl> + bool IsLinear ( ) const ; <nl> + <nl> + / / applicable only for types that : IsLinear - > true . <nl> + / / In this case for address we have 1d component - addr ( int ) <nl> + / / If for addr = = - 1 this linear storage type returns FLT4 ( 0 . 0 ) , this function <nl> + / / returns true , otherwise false <nl> + bool ReturnsZeroForNegOneRead ( ) const ; <nl> + <nl> DataType data_type = DataType : : UNKNOWN ; <nl> TensorStorageType storage_type = TensorStorageType : : UNKNOWN ; <nl> / / This field describes logical layout , actual ( physical ) GPU layout can be <nl>
|
Added new utility functions to TensorDescriptor for codegen simplification and generalization.
|
tensorflow/tensorflow
|
2cdb2b4d7619282a3c8787b38eceb8c37261e778
|
2020-09-02T23:53:06Z
|
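The new `TensorDescriptor` predicates gate which boundary-handling code the OpenCL kernel generator emits: storage types that zero-clamp out-of-range reads in hardware need no explicit `in_x`/`in_y`/`in_z` checks. A reduced C++ sketch (storage types and the `SupportsZeroClamp` cases follow the diff; `EmitBoundaryCheck` is a hypothetical helper showing how the kernels consume the predicate):

```cpp
#include <cassert>
#include <string>

enum class TensorStorageType {
  UNKNOWN, BUFFER, IMAGE_BUFFER, TEXTURE_2D, TEXTURE_3D,
  TEXTURE_ARRAY, SINGLE_TEXTURE_2D
};
enum class Axis { WIDTH, HEIGHT, DEPTH };

// Mirrors TensorDescriptor::SupportsZeroClamp from the diff: only texture
// storage types clamp out-of-range reads to zero, and only 3D textures do
// so along the DEPTH axis.
bool SupportsZeroClamp(TensorStorageType s, Axis axis) {
  switch (s) {
    case TensorStorageType::TEXTURE_2D:
    case TensorStorageType::TEXTURE_ARRAY:
    case TensorStorageType::SINGLE_TEXTURE_2D:
      return axis == Axis::WIDTH || axis == Axis::HEIGHT;
    case TensorStorageType::TEXTURE_3D:
      return axis == Axis::WIDTH || axis == Axis::HEIGHT ||
             axis == Axis::DEPTH;
    default:
      return false;  // UNKNOWN, BUFFER, IMAGE_BUFFER
  }
}

// Hypothetical codegen helper: emit an explicit in-bounds flag only when the
// storage type cannot zero-clamp the axis by itself.
std::string EmitBoundaryCheck(TensorStorageType s, Axis axis,
                              const std::string& coord,
                              const std::string& size) {
  if (SupportsZeroClamp(s, axis)) return "";  // hardware handles it
  return "bool in_" + coord + " = " + coord + " >= 0 && " + coord +
         " < " + size + ";\n";
}
```

This is the mechanism behind the diff's replacement of ad-hoc `manual_clamp`/`image_buffer` flags: each axis check is derived from the descriptor instead of from hard-coded storage-type comparisons.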
mmm a / src / global - handles . cc <nl> ppp b / src / global - handles . cc <nl> void GlobalHandles : : IterateNewSpaceWeakIndependentRoots ( ObjectVisitor * v ) { <nl> } <nl> <nl> <nl> + bool GlobalHandles : : IterateObjectGroups ( ObjectVisitor * v , <nl> + WeakSlotCallbackWithHeap can_skip ) { <nl> + int last = 0 ; <nl> + bool any_group_was_visited = false ; <nl> + for ( int i = 0 ; i < object_groups_ . length ( ) ; i + + ) { <nl> + ObjectGroup * entry = object_groups_ . at ( i ) ; <nl> + ASSERT ( entry ! = NULL ) ; <nl> + <nl> + Object * * * objects = entry - > objects_ ; <nl> + bool group_should_be_visited = false ; <nl> + for ( size_t j = 0 ; j < entry - > length_ ; j + + ) { <nl> + Object * object = * objects [ j ] ; <nl> + if ( object - > IsHeapObject ( ) ) { <nl> + if ( ! can_skip ( isolate_ - > heap ( ) , & object ) ) { <nl> + group_should_be_visited = true ; <nl> + break ; <nl> + } <nl> + } <nl> + } <nl> + <nl> + if ( ! group_should_be_visited ) { <nl> + object_groups_ [ last + + ] = entry ; <nl> + continue ; <nl> + } <nl> + <nl> + / / An object in the group requires visiting , so iterate over all <nl> + / / objects in the group . <nl> + for ( size_t j = 0 ; j < entry - > length_ ; + + j ) { <nl> + Object * object = * objects [ j ] ; <nl> + if ( object - > IsHeapObject ( ) ) { <nl> + v - > VisitPointer ( & object ) ; <nl> + any_group_was_visited = true ; <nl> + } <nl> + } <nl> + <nl> + / / Once the entire group has been iterated over , set the object <nl> + / / group to NULL so it won ' t be processed again . <nl> + entry - > Dispose ( ) ; <nl> + object_groups_ . at ( i ) = NULL ; <nl> + } <nl> + object_groups_ . Rewind ( last ) ; <nl> + return any_group_was_visited ; <nl> + } <nl> + <nl> + <nl> bool GlobalHandles : : PostGarbageCollectionProcessing ( <nl> GarbageCollector collector ) { <nl> / / Process weak global handle callbacks . This must be done after the <nl> mmm a / src / global - handles . h <nl> ppp b / src / global - handles . 
h <nl> class GlobalHandles { <nl> / / See the note above . <nl> void IterateNewSpaceWeakIndependentRoots ( ObjectVisitor * v ) ; <nl> <nl> + / / Iterate over objects in object groups that have at least one object <nl> + / / which requires visiting . The callback has to return true if objects <nl> + / / can be skipped and false otherwise . <nl> + bool IterateObjectGroups ( ObjectVisitor * v , WeakSlotCallbackWithHeap can_skip ) ; <nl> + <nl> / / Add an object group . <nl> / / Should be only used in GC callback function before a collection . <nl> / / All groups are destroyed after a garbage collection . <nl> mmm a / src / heap . cc <nl> ppp b / src / heap . cc <nl> void Heap : : Scavenge ( ) { <nl> <nl> new_space_front = DoScavenge ( & scavenge_visitor , new_space_front ) ; <nl> <nl> - while ( IterateObjectGroups ( & scavenge_visitor ) ) { <nl> + while ( isolate ( ) - > global_handles ( ) - > IterateObjectGroups ( <nl> + & scavenge_visitor , & IsUnscavengedHeapObject ) ) { <nl> new_space_front = DoScavenge ( & scavenge_visitor , new_space_front ) ; <nl> } <nl> isolate ( ) - > global_handles ( ) - > RemoveObjectGroups ( ) ; <nl> void Heap : : Scavenge ( ) { <nl> } <nl> <nl> <nl> - / / TODO ( mstarzinger ) : Unify this method with <nl> - / / MarkCompactCollector : : MarkObjectGroups ( ) . <nl> - bool Heap : : IterateObjectGroups ( ObjectVisitor * scavenge_visitor ) { <nl> - List < ObjectGroup * > * object_groups = <nl> - isolate ( ) - > global_handles ( ) - > object_groups ( ) ; <nl> - <nl> - int last = 0 ; <nl> - bool changed = false ; <nl> - for ( int i = 0 ; i < object_groups - > length ( ) ; i + + ) { <nl> - ObjectGroup * entry = object_groups - > at ( i ) ; <nl> - ASSERT ( entry ! = NULL ) ; <nl> - <nl> - Object * * * objects = entry - > objects_ ; <nl> - bool group_marked = false ; <nl> - for ( size_t j = 0 ; j < entry - > length_ ; j + + ) { <nl> - Object * object = * objects [ j ] ; <nl> - if ( object - > IsHeapObject ( ) ) { <nl> - if ( ! 
IsUnscavengedHeapObject ( this , & object ) ) { <nl> - group_marked = true ; <nl> - break ; <nl> - } <nl> - } <nl> - } <nl> - <nl> - if ( ! group_marked ) { <nl> - ( * object_groups ) [ last + + ] = entry ; <nl> - continue ; <nl> - } <nl> - <nl> - for ( size_t j = 0 ; j < entry - > length_ ; + + j ) { <nl> - Object * object = * objects [ j ] ; <nl> - if ( object - > IsHeapObject ( ) ) { <nl> - scavenge_visitor - > VisitPointer ( & object ) ; <nl> - changed = true ; <nl> - } <nl> - } <nl> - <nl> - entry - > Dispose ( ) ; <nl> - object_groups - > at ( i ) = NULL ; <nl> - } <nl> - object_groups - > Rewind ( last ) ; <nl> - return changed ; <nl> - } <nl> - <nl> - <nl> String * Heap : : UpdateNewSpaceReferenceInExternalStringTableEntry ( Heap * heap , <nl> Object * * p ) { <nl> MapWord first_word = HeapObject : : cast ( * p ) - > map_word ( ) ; <nl> mmm a / src / heap . h <nl> ppp b / src / heap . h <nl> class Heap { <nl> bool PerformGarbageCollection ( GarbageCollector collector , <nl> GCTracer * tracer ) ; <nl> <nl> - bool IterateObjectGroups ( ObjectVisitor * scavenge_visitor ) ; <nl> - <nl> inline void UpdateOldSpaceLimits ( ) ; <nl> <nl> / / Allocate an uninitialized object in map space . The behavior is identical <nl> mmm a / src / mark - compact . cc <nl> ppp b / src / mark - compact . cc <nl> bool MarkCompactCollector : : IsUnmarkedHeapObject ( Object * * p ) { <nl> } <nl> <nl> <nl> + bool MarkCompactCollector : : IsUnmarkedHeapObjectWithHeap ( Heap * heap , <nl> + Object * * p ) { <nl> + Object * o = * p ; <nl> + ASSERT ( o - > IsHeapObject ( ) ) ; <nl> + HeapObject * heap_object = HeapObject : : cast ( o ) ; <nl> + MarkBit mark = Marking : : MarkBitFrom ( heap_object ) ; <nl> + return ! mark . Get ( ) ; <nl> + } <nl> + <nl> + <nl> void MarkCompactCollector : : MarkSymbolTable ( ) { <nl> SymbolTable * symbol_table = heap ( ) - > symbol_table ( ) ; <nl> / / Mark the symbol table itself . 
<nl> void MarkCompactCollector : : MarkRoots ( RootMarkingVisitor * visitor ) { <nl> } <nl> <nl> <nl> - void MarkCompactCollector : : MarkObjectGroups ( ) { <nl> - List < ObjectGroup * > * object_groups = <nl> - heap ( ) - > isolate ( ) - > global_handles ( ) - > object_groups ( ) ; <nl> - <nl> - int last = 0 ; <nl> - for ( int i = 0 ; i < object_groups - > length ( ) ; i + + ) { <nl> - ObjectGroup * entry = object_groups - > at ( i ) ; <nl> - ASSERT ( entry ! = NULL ) ; <nl> - <nl> - Object * * * objects = entry - > objects_ ; <nl> - bool group_marked = false ; <nl> - for ( size_t j = 0 ; j < entry - > length_ ; j + + ) { <nl> - Object * object = * objects [ j ] ; <nl> - if ( object - > IsHeapObject ( ) ) { <nl> - HeapObject * heap_object = HeapObject : : cast ( object ) ; <nl> - MarkBit mark = Marking : : MarkBitFrom ( heap_object ) ; <nl> - if ( mark . Get ( ) ) { <nl> - group_marked = true ; <nl> - break ; <nl> - } <nl> - } <nl> - } <nl> - <nl> - if ( ! group_marked ) { <nl> - ( * object_groups ) [ last + + ] = entry ; <nl> - continue ; <nl> - } <nl> - <nl> - / / An object in the group is marked , so mark as grey all white heap <nl> - / / objects in the group . <nl> - for ( size_t j = 0 ; j < entry - > length_ ; + + j ) { <nl> - Object * object = * objects [ j ] ; <nl> - if ( object - > IsHeapObject ( ) ) { <nl> - HeapObject * heap_object = HeapObject : : cast ( object ) ; <nl> - MarkBit mark = Marking : : MarkBitFrom ( heap_object ) ; <nl> - MarkObject ( heap_object , mark ) ; <nl> - } <nl> - } <nl> - <nl> - / / Once the entire group has been colored grey , set the object group <nl> - / / to NULL so it won ' t be processed again . 
<nl> - entry - > Dispose ( ) ; <nl> - object_groups - > at ( i ) = NULL ; <nl> - } <nl> - object_groups - > Rewind ( last ) ; <nl> - } <nl> - <nl> - <nl> void MarkCompactCollector : : MarkImplicitRefGroups ( ) { <nl> List < ImplicitRefGroup * > * ref_groups = <nl> heap ( ) - > isolate ( ) - > global_handles ( ) - > implicit_ref_groups ( ) ; <nl> void MarkCompactCollector : : ProcessMarkingDeque ( ) { <nl> } <nl> <nl> <nl> - void MarkCompactCollector : : ProcessExternalMarking ( ) { <nl> + void MarkCompactCollector : : ProcessExternalMarking ( RootMarkingVisitor * visitor ) { <nl> bool work_to_do = true ; <nl> ASSERT ( marking_deque_ . IsEmpty ( ) ) ; <nl> while ( work_to_do ) { <nl> - MarkObjectGroups ( ) ; <nl> + heap ( ) - > isolate ( ) - > global_handles ( ) - > IterateObjectGroups ( <nl> + visitor , & IsUnmarkedHeapObjectWithHeap ) ; <nl> MarkImplicitRefGroups ( ) ; <nl> work_to_do = ! marking_deque_ . IsEmpty ( ) ; <nl> ProcessMarkingDeque ( ) ; <nl> void MarkCompactCollector : : MarkLiveObjects ( ) { <nl> / / The objects reachable from the roots are marked , yet unreachable <nl> / / objects are unmarked . Mark objects reachable due to host <nl> / / application specific logic . <nl> - ProcessExternalMarking ( ) ; <nl> + ProcessExternalMarking ( & root_visitor ) ; <nl> <nl> / / The objects reachable from the roots or object groups are marked , <nl> / / yet unreachable objects are unmarked . Mark objects reachable <nl> void MarkCompactCollector : : MarkLiveObjects ( ) { <nl> <nl> / / Repeat host application specific marking to mark unmarked objects <nl> / / reachable from the weak roots . <nl> - ProcessExternalMarking ( ) ; <nl> + ProcessExternalMarking ( & root_visitor ) ; <nl> <nl> AfterMarking ( ) ; <nl> } <nl> mmm a / src / mark - compact . h <nl> ppp b / src / mark - compact . h <nl> class MarkCompactCollector { <nl> / / symbol table are weak . 
<nl> void MarkSymbolTable ( ) ; <nl> <nl> - / / Mark objects in object groups that have at least one object in the <nl> - / / group marked . <nl> - void MarkObjectGroups ( ) ; <nl> - <nl> / / Mark objects in implicit references groups if their parent object <nl> / / is marked . <nl> void MarkImplicitRefGroups ( ) ; <nl> <nl> / / Mark all objects which are reachable due to host application <nl> / / logic like object groups or implicit references ' groups . <nl> - void ProcessExternalMarking ( ) ; <nl> + void ProcessExternalMarking ( RootMarkingVisitor * visitor ) ; <nl> <nl> / / Mark objects reachable ( transitively ) from objects in the marking stack <nl> / / or overflowed in the heap . <nl> class MarkCompactCollector { <nl> / / Callback function for telling whether the object * p is an unmarked <nl> / / heap object . <nl> static bool IsUnmarkedHeapObject ( Object * * p ) ; <nl> + static bool IsUnmarkedHeapObjectWithHeap ( Heap * heap , Object * * p ) ; <nl> <nl> / / Map transitions from a live map to a dead map must be killed . <nl> / / We replace them with a null descriptor , with the same key . <nl>
|
Unify object groups iteration in global handles .
|
v8/v8
|
a4c4862ed85dd464be15a6ae8ef56a496afea385
|
2012-12-04T10:23:43Z
|
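The v8 commit above folds two near-identical loops (`Heap::IterateObjectGroups` and `MarkCompactCollector::MarkObjectGroups`) into one `GlobalHandles::IterateObjectGroups` parameterized by a `can_skip` callback. The core pattern is an in-place filter: groups whose members can all be skipped are compacted toward the front of the list, while any group containing a member that cannot be skipped is fully visited and then dropped (the C++ disposes the entry and `Rewind`s the list). A minimal sketch of that pattern, using hypothetical `can_skip`/`visit` callables rather than v8's actual types:

```python
def iterate_groups(groups, can_skip, visit):
    """Filter-and-visit in place, mirroring the rewind idiom in the diff.

    Groups whose members can all be skipped are kept, compacted to the
    front; any group with a member that cannot be skipped is visited in
    full and then removed from the list.
    Returns True if any group was visited (the loop-until-fixpoint signal
    the Scavenge loop in the diff relies on).
    """
    last = 0
    any_group_was_visited = False
    for group in groups:
        if all(can_skip(obj) for obj in group):
            groups[last] = group      # keep: compact toward the front
            last += 1
            continue
        for obj in group:             # a member needs visiting: visit them all
            visit(obj)
            any_group_was_visited = True
    del groups[last:]                 # the Rewind(last) step: drop processed groups
    return any_group_was_visited
```

The caller runs this in a loop until it returns false, since visiting one group can make members of another group reachable; that is exactly why `Heap::Scavenge` in the diff wraps the call in a `while`.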
mmm a / src / node / interop / interop_client . js <nl> ppp b / src / node / interop / interop_client . js <nl> function unimplementedService ( client , done ) { <nl> client . unimplementedCall ( { } , function ( err , resp ) { <nl> assert ( err ) ; <nl> assert . strictEqual ( err . code , grpc . status . UNIMPLEMENTED ) ; <nl> - assert ( ! err . message ) ; <nl> done ( ) ; <nl> } ) ; <nl> } <nl> mmm a / src / python / grpcio_tests / tests / interop / client . py <nl> ppp b / src / python / grpcio_tests / tests / interop / client . py <nl> def _stub ( args ) : <nl> ( ( ' grpc . ssl_target_name_override ' , args . server_host_override , ) , ) ) <nl> else : <nl> channel = grpc . insecure_channel ( target ) <nl> - return test_pb2 . TestServiceStub ( channel ) <nl> + if args . test_case = = " unimplemented_service " : <nl> + return test_pb2 . UnimplementedServiceStub ( channel ) <nl> + else : <nl> + return test_pb2 . TestServiceStub ( channel ) <nl> <nl> <nl> def _test_case_from_arg ( test_case_arg ) : <nl> mmm a / src / python / grpcio_tests / tests / interop / methods . py <nl> ppp b / src / python / grpcio_tests / tests / interop / methods . py <nl> <nl> from src . proto . grpc . testing import messages_pb2 <nl> from src . proto . grpc . testing import test_pb2 <nl> <nl> + _INITIAL_METADATA_KEY = " x - grpc - test - echo - initial " <nl> + _TRAILING_METADATA_KEY = " x - grpc - test - echo - trailing - bin " <nl> + <nl> + def _maybe_echo_metadata ( servicer_context ) : <nl> + " " " Copies metadata from request to response if it is present . " " " <nl> + invocation_metadata = dict ( servicer_context . invocation_metadata ( ) ) <nl> + if _INITIAL_METADATA_KEY in invocation_metadata : <nl> + initial_metadatum = ( <nl> + _INITIAL_METADATA_KEY , invocation_metadata [ _INITIAL_METADATA_KEY ] ) <nl> + servicer_context . 
send_initial_metadata ( ( initial_metadatum , ) ) <nl> + if _TRAILING_METADATA_KEY in invocation_metadata : <nl> + trailing_metadatum = ( <nl> + _TRAILING_METADATA_KEY , invocation_metadata [ _TRAILING_METADATA_KEY ] ) <nl> + servicer_context . set_trailing_metadata ( ( trailing_metadatum , ) ) <nl> + <nl> + def _maybe_echo_status_and_message ( request , servicer_context ) : <nl> + " " " Sets the response context code and details if the request asks for them " " " <nl> + if request . HasField ( ' response_status ' ) : <nl> + servicer_context . set_code ( request . response_status . code ) <nl> + servicer_context . set_details ( request . response_status . message ) <nl> <nl> class TestService ( test_pb2 . TestServiceServicer ) : <nl> <nl> def EmptyCall ( self , request , context ) : <nl> + _maybe_echo_metadata ( context ) <nl> return empty_pb2 . Empty ( ) <nl> <nl> def UnaryCall ( self , request , context ) : <nl> - if request . HasField ( ' response_status ' ) : <nl> - context . set_code ( request . response_status . code ) <nl> - context . set_details ( request . response_status . message ) <nl> + _maybe_echo_metadata ( context ) <nl> + _maybe_echo_status_and_message ( request , context ) <nl> return messages_pb2 . SimpleResponse ( <nl> payload = messages_pb2 . Payload ( <nl> type = messages_pb2 . COMPRESSABLE , <nl> body = b ' \ x00 ' * request . response_size ) ) <nl> <nl> def StreamingOutputCall ( self , request , context ) : <nl> - if request . HasField ( ' response_status ' ) : <nl> - context . set_code ( request . response_status . code ) <nl> - context . set_details ( request . response_status . message ) <nl> + _maybe_echo_status_and_message ( request , context ) <nl> for response_parameters in request . response_parameters : <nl> yield messages_pb2 . StreamingOutputCallResponse ( <nl> payload = messages_pb2 . 
Payload ( <nl> def StreamingInputCall ( self , request_iterator , context ) : <nl> aggregated_payload_size = aggregate_size ) <nl> <nl> def FullDuplexCall ( self , request_iterator , context ) : <nl> + _maybe_echo_metadata ( context ) <nl> for request in request_iterator : <nl> - if request . HasField ( ' response_status ' ) : <nl> - context . set_code ( request . response_status . code ) <nl> - context . set_details ( request . response_status . message ) <nl> + _maybe_echo_status_and_message ( request , context ) <nl> for response_parameters in request . response_parameters : <nl> yield messages_pb2 . StreamingOutputCallResponse ( <nl> payload = messages_pb2 . Payload ( <nl> def HalfDuplexCall ( self , request_iterator , context ) : <nl> return self . FullDuplexCall ( request_iterator , context ) <nl> <nl> <nl> + def _expect_status_code ( call , expected_code ) : <nl> + if call . code ( ) ! = expected_code : <nl> + raise ValueError ( <nl> + ' expected code % s , got % s ' % ( expected_code , call . code ( ) ) ) <nl> + <nl> + <nl> + def _expect_status_details ( call , expected_details ) : <nl> + if call . details ( ) ! = expected_details : <nl> + raise ValueError ( <nl> + ' expected message % s , got % s ' % ( expected_details , call . details ( ) ) ) <nl> + <nl> + <nl> + def _validate_status_code_and_details ( call , expected_code , expected_details ) : <nl> + _expect_status_code ( call , expected_code ) <nl> + _expect_status_details ( call , expected_details ) <nl> + <nl> + <nl> + def _validate_payload_type_and_length ( response , expected_type , expected_length ) : <nl> + if response . payload . type is not expected_type : <nl> + raise ValueError ( <nl> + ' expected payload type % s , got % s ' % <nl> + ( expected_type , type ( response . payload . type ) ) ) <nl> + elif len ( response . payload . body ) ! = expected_length : <nl> + raise ValueError ( <nl> + ' expected payload body size % d , got % d ' % <nl> + ( expected_length , len ( response . payload . 
body ) ) ) <nl> + <nl> + <nl> def _large_unary_common_behavior ( <nl> stub , fill_username , fill_oauth_scope , call_credentials ) : <nl> + size = 314159 <nl> request = messages_pb2 . SimpleRequest ( <nl> - response_type = messages_pb2 . COMPRESSABLE , response_size = 314159 , <nl> + response_type = messages_pb2 . COMPRESSABLE , response_size = size , <nl> payload = messages_pb2 . Payload ( body = b ' \ x00 ' * 271828 ) , <nl> fill_username = fill_username , fill_oauth_scope = fill_oauth_scope ) <nl> response_future = stub . UnaryCall . future ( <nl> request , credentials = call_credentials ) <nl> response = response_future . result ( ) <nl> - if response . payload . type is not messages_pb2 . COMPRESSABLE : <nl> - raise ValueError ( <nl> - ' response payload type is " % s " ! ' % type ( response . payload . type ) ) <nl> - elif len ( response . payload . body ) ! = 314159 : <nl> - raise ValueError ( <nl> - ' response body of incorrect size % d ! ' % len ( response . payload . body ) ) <nl> - else : <nl> - return response <nl> + _validate_payload_type_and_length ( response , messages_pb2 . COMPRESSABLE , size ) <nl> + return response <nl> <nl> <nl> def _empty_unary ( stub ) : <nl> def _server_streaming ( stub ) : <nl> ) <nl> response_iterator = stub . StreamingOutputCall ( request ) <nl> for index , response in enumerate ( response_iterator ) : <nl> - if response . payload . type ! = messages_pb2 . COMPRESSABLE : <nl> - raise ValueError ( <nl> - ' response body of invalid type % s ! ' % response . payload . type ) <nl> - elif len ( response . payload . body ) ! = sizes [ index ] : <nl> - raise ValueError ( <nl> - ' response body of invalid size % d ! ' % len ( response . payload . body ) ) <nl> + _validate_payload_type_and_length ( <nl> + response , messages_pb2 . COMPRESSABLE , sizes [ index ] ) <nl> + <nl> <nl> def _cancel_after_begin ( stub ) : <nl> sizes = ( 27182 , 8 , 1828 , 45904 , ) <nl> def _ping_pong ( stub ) : <nl> payload = messages_pb2 . 
Payload ( body = b ' \ x00 ' * payload_size ) ) <nl> pipe . add ( request ) <nl> response = next ( response_iterator ) <nl> - if response . payload . type ! = messages_pb2 . COMPRESSABLE : <nl> - raise ValueError ( <nl> - ' response body of invalid type % s ! ' % response . payload . type ) <nl> - if len ( response . payload . body ) ! = response_size : <nl> - raise ValueError ( <nl> - ' response body of invalid size % d ! ' % len ( response . payload . body ) ) <nl> + _validate_payload_type_and_length ( <nl> + response , messages_pb2 . COMPRESSABLE , response_size ) <nl> <nl> <nl> def _cancel_after_first_response ( stub ) : <nl> def _empty_stream ( stub ) : <nl> <nl> <nl> def _status_code_and_message ( stub ) : <nl> - message = ' test status message ' <nl> + details = ' test status message ' <nl> code = 2 <nl> status = grpc . StatusCode . UNKNOWN # code = 2 <nl> + <nl> + # Test with a UnaryCall <nl> request = messages_pb2 . SimpleRequest ( <nl> response_type = messages_pb2 . COMPRESSABLE , <nl> response_size = 1 , <nl> payload = messages_pb2 . Payload ( body = b ' \ x00 ' ) , <nl> - response_status = messages_pb2 . EchoStatus ( code = code , message = message ) <nl> + response_status = messages_pb2 . EchoStatus ( code = code , message = details ) <nl> ) <nl> response_future = stub . UnaryCall . future ( request ) <nl> - if response_future . code ( ) ! = status : <nl> - raise ValueError ( <nl> - ' expected code % s , got % s ' % ( status , response_future . code ( ) ) ) <nl> - elif response_future . details ( ) ! = message : <nl> - raise ValueError ( <nl> - ' expected message % s , got % s ' % ( message , response_future . details ( ) ) ) <nl> + _validate_status_code_and_details ( response_future , status , details ) <nl> <nl> - request = messages_pb2 . StreamingOutputCallRequest ( <nl> + # Test with a FullDuplexCall <nl> + with _Pipe ( ) as pipe : <nl> + response_iterator = stub . FullDuplexCall ( pipe ) <nl> + request = messages_pb2 . 
StreamingOutputCallRequest ( <nl> + response_type = messages_pb2 . COMPRESSABLE , <nl> + response_parameters = ( <nl> + messages_pb2 . ResponseParameters ( size = 1 ) , ) , <nl> + payload = messages_pb2 . Payload ( body = b ' \ x00 ' ) , <nl> + response_status = messages_pb2 . EchoStatus ( code = code , message = details ) ) <nl> + pipe . add ( request ) # sends the initial request . <nl> + # Dropping out of with block closes the pipe <nl> + _validate_status_code_and_details ( response_iterator , status , details ) <nl> + <nl> + <nl> + def _unimplemented_method ( test_service_stub ) : <nl> + response_future = ( <nl> + test_service_stub . UnimplementedCall . future ( empty_pb2 . Empty ( ) ) ) <nl> + _expect_status_code ( response_future , grpc . StatusCode . UNIMPLEMENTED ) <nl> + <nl> + <nl> + def _unimplemented_service ( unimplemented_service_stub ) : <nl> + response_future = ( <nl> + unimplemented_service_stub . UnimplementedCall . future ( empty_pb2 . Empty ( ) ) ) <nl> + _expect_status_code ( response_future , grpc . StatusCode . UNIMPLEMENTED ) <nl> + <nl> + <nl> + def _custom_metadata ( stub ) : <nl> + initial_metadata_value = " test_initial_metadata_value " <nl> + trailing_metadata_value = " \ x0a \ x0b \ x0a \ x0b \ x0a \ x0b " <nl> + metadata = ( <nl> + ( _INITIAL_METADATA_KEY , initial_metadata_value ) , <nl> + ( _TRAILING_METADATA_KEY , trailing_metadata_value ) ) <nl> + <nl> + def _validate_metadata ( response ) : <nl> + initial_metadata = dict ( response . initial_metadata ( ) ) <nl> + if initial_metadata [ _INITIAL_METADATA_KEY ] ! = initial_metadata_value : <nl> + raise ValueError ( <nl> + ' expected initial metadata % s , got % s ' % ( <nl> + initial_metadata_value , initial_metadata [ _INITIAL_METADATA_KEY ] ) ) <nl> + trailing_metadata = dict ( response . trailing_metadata ( ) ) <nl> + if trailing_metadata [ _TRAILING_METADATA_KEY ] ! 
= trailing_metadata_value : <nl> + raise ValueError ( <nl> + ' expected trailing metadata % s , got % s ' % ( <nl> + trailing_metadata_value , initial_metadata [ _TRAILING_METADATA_KEY ] ) ) <nl> + <nl> + # Testing with UnaryCall <nl> + request = messages_pb2 . SimpleRequest ( <nl> response_type = messages_pb2 . COMPRESSABLE , <nl> - response_parameters = ( <nl> - messages_pb2 . ResponseParameters ( size = 1 ) , ) , <nl> - response_status = messages_pb2 . EchoStatus ( code = code , message = message ) ) <nl> - response_iterator = stub . StreamingOutputCall ( request ) <nl> - if response_future . code ( ) ! = status : <nl> - raise ValueError ( <nl> - ' expected code % s , got % s ' % ( status , response_iterator . code ( ) ) ) <nl> - elif response_future . details ( ) ! = message : <nl> - raise ValueError ( <nl> - ' expected message % s , got % s ' % ( message , response_iterator . details ( ) ) ) <nl> + response_size = 1 , <nl> + payload = messages_pb2 . Payload ( body = b ' \ x00 ' ) ) <nl> + response_future = stub . UnaryCall . future ( request , metadata = metadata ) <nl> + _validate_metadata ( response_future ) <nl> <nl> + # Testing with FullDuplexCall <nl> + with _Pipe ( ) as pipe : <nl> + response_iterator = stub . FullDuplexCall ( pipe , metadata = metadata ) <nl> + request = messages_pb2 . StreamingOutputCallRequest ( <nl> + response_type = messages_pb2 . COMPRESSABLE , <nl> + response_parameters = ( <nl> + messages_pb2 . ResponseParameters ( size = 1 ) , ) ) <nl> + pipe . add ( request ) # Sends the request <nl> + next ( response_iterator ) # Causes server to send trailing metadata <nl> + # Dropping out of the with block closes the pipe <nl> + _validate_metadata ( response_iterator ) <nl> <nl> def _compute_engine_creds ( stub , args ) : <nl> response = _large_unary_common_behavior ( stub , True , True , None ) <nl> class TestCase ( enum . 
Enum ) : <nl> CANCEL_AFTER_FIRST_RESPONSE = ' cancel_after_first_response ' <nl> EMPTY_STREAM = ' empty_stream ' <nl> STATUS_CODE_AND_MESSAGE = ' status_code_and_message ' <nl> + UNIMPLEMENTED_METHOD = ' unimplemented_method ' <nl> + UNIMPLEMENTED_SERVICE = ' unimplemented_service ' <nl> + CUSTOM_METADATA = " custom_metadata " <nl> COMPUTE_ENGINE_CREDS = ' compute_engine_creds ' <nl> OAUTH2_AUTH_TOKEN = ' oauth2_auth_token ' <nl> JWT_TOKEN_CREDS = ' jwt_token_creds ' <nl> def test_interoperability ( self , stub , args ) : <nl> _empty_stream ( stub ) <nl> elif self is TestCase . STATUS_CODE_AND_MESSAGE : <nl> _status_code_and_message ( stub ) <nl> + elif self is TestCase . UNIMPLEMENTED_METHOD : <nl> + _unimplemented_method ( stub ) <nl> + elif self is TestCase . UNIMPLEMENTED_SERVICE : <nl> + _unimplemented_service ( stub ) <nl> + elif self is TestCase . CUSTOM_METADATA : <nl> + _custom_metadata ( stub ) <nl> elif self is TestCase . COMPUTE_ENGINE_CREDS : <nl> _compute_engine_creds ( stub , args ) <nl> elif self is TestCase . OAUTH2_AUTH_TOKEN : <nl> mmm a / tools / run_tests / run_interop_tests . py <nl> ppp b / tools / run_tests / run_interop_tests . py <nl> def global_env ( self ) : <nl> ' PYTHONPATH ' : ' { } / src / python / gens ' . format ( DOCKER_WORKDIR_ROOT ) } <nl> <nl> def unimplemented_test_cases ( self ) : <nl> - return _SKIP_ADVANCED + _SKIP_COMPRESSION <nl> + return _SKIP_COMPRESSION <nl> <nl> def unimplemented_test_cases_server ( self ) : <nl> - return _SKIP_ADVANCED + _SKIP_COMPRESSION <nl> + return _SKIP_COMPRESSION <nl> <nl> def __str__ ( self ) : <nl> return ' python ' <nl>
|
Merge pull request from ncteisen / new_python_interop_tests
|
grpc/grpc
|
970078807f63726e32b30e1a263244a548d86e4c
|
2016-10-26T16:07:32Z
|
mmm a / tools / simulator / libsimulator / lib / ProjectConfig / ProjectConfig . cpp <nl> ppp b / tools / simulator / libsimulator / lib / ProjectConfig / ProjectConfig . cpp <nl> <nl> <nl> # include " ProjectConfig / ProjectConfig . h " <nl> # include " ProjectConfig / SimulatorConfig . h " <nl> + # include " cocostudio / LocalizationManager . h " <nl> <nl> # ifdef _MSC_VER <nl> # define strcasecmp _stricmp <nl>
|
Add missing include file in ProjectConfig . cpp .
|
cocos2d/cocos2d-x
|
c599f071a40dfc2482248a905901bde26acf9e6d
|
2016-03-08T06:36:01Z
|
mmm a / release / files_common <nl> ppp b / release / files_common <nl> selfdrive / test / test_fingerprints . py <nl> selfdrive / test / test_car_models . py <nl> <nl> selfdrive / ui / SConscript <nl> - selfdrive / ui / * . c <nl> selfdrive / ui / * . cc <nl> - selfdrive / ui / * . h <nl> selfdrive / ui / * . hpp <nl> selfdrive / ui / ui <nl> selfdrive / ui / spinner / Makefile <nl> mmm a / selfdrive / ui / SConscript <nl> ppp b / selfdrive / ui / SConscript <nl> src = [ ' ui . cc ' , ' paint . cc ' , ' sidebar . cc ' , ' # phonelibs / nanovg / nanovg . c ' ] <nl> libs = [ common , ' zmq ' , ' czmq ' , ' capnp ' , ' kj ' , ' m ' , cereal , messaging , gpucommon , visionipc ] <nl> <nl> if arch = = " aarch64 " : <nl> - src + = [ ' sound . cc ' , ' slplay . c ' ] <nl> + src + = [ ' sound . cc ' ] <nl> libs + = [ ' EGL ' , ' GLESv3 ' , ' gnustl_shared ' , ' log ' , ' utils ' , ' gui ' , ' hardware ' , ' ui ' , ' CB ' , ' gsl ' , ' adreno_utils ' , ' OpenSLES ' , ' cutils ' , ' uuid ' , ' OpenCL ' ] <nl> linkflags = [ ' - Wl , - rpath = / system / lib64 , - rpath = / system / comma / usr / lib ' ] <nl> else : <nl> mmm a / selfdrive / ui / linux . cc <nl> ppp b / selfdrive / ui / linux . cc <nl> int touch_read ( TouchState * s , int * out_x , int * out_y ) { <nl> <nl> # include " sound . hpp " <nl> <nl> - void ui_sound_init ( ) { } <nl> - void ui_sound_destroy ( ) { } <nl> - <nl> - void set_volume ( int volume ) { } <nl> - <nl> - void play_alert_sound ( AudibleAlert alert ) { } <nl> - void stop_alert_sound ( AudibleAlert alert ) { } <nl> + bool Sound : : init ( int volume ) { return true ; } <nl> + bool Sound : : play ( AudibleAlert alert , int repeat ) { return true ; } <nl> + void Sound : : stop ( ) { } <nl> + void Sound : : setVolume ( int volume , int timeout_seconds ) { } <nl> + AudibleAlert Sound : : currentPlaying ( ) { return AudibleAlert : : NONE ; } <nl> + Sound : : ~ Sound ( ) { } <nl> <nl> # include " common / visionimg . h " <nl> # include < sys / mman . 
h > <nl> deleted file mode 100644 <nl> index ddfbad56c5 . . 0000000000 <nl> mmm a / selfdrive / ui / slplay . c <nl> ppp / dev / null <nl> <nl> - # include < stdio . h > <nl> - # include < assert . h > <nl> - # include < unistd . h > <nl> - # include < stdlib . h > <nl> - # include < getopt . h > <nl> - <nl> - # include " common / timing . h " <nl> - # include " slplay . h " <nl> - <nl> - SLEngineItf engineInterface = NULL ; <nl> - SLObjectItf outputMix = NULL ; <nl> - SLObjectItf engine = NULL ; <nl> - uri_player players [ 32 ] = { { NULL , NULL , NULL } } ; <nl> - <nl> - uint64_t loop_start = 0 ; <nl> - uint64_t loop_start_ctx = 0 ; <nl> - <nl> - uri_player * get_player_by_uri ( const char * uri ) { <nl> - for ( uri_player * s = players ; s - > uri ! = NULL ; s + + ) { <nl> - if ( strcmp ( s - > uri , uri ) = = 0 ) { <nl> - return s ; <nl> - } <nl> - } <nl> - <nl> - return NULL ; <nl> - } <nl> - <nl> - uri_player * slplay_create_player_for_uri ( const char * uri , char * * error ) { <nl> - uri_player player = { uri , NULL , NULL } ; <nl> - <nl> - SLresult result ; <nl> - SLDataLocator_URI locUri = { SL_DATALOCATOR_URI , ( SLchar * ) uri } ; <nl> - SLDataFormat_MIME formatMime = { SL_DATAFORMAT_MIME , NULL , SL_CONTAINERTYPE_UNSPECIFIED } ; <nl> - SLDataSource audioSrc = { & locUri , & formatMime } ; <nl> - <nl> - SLDataLocator_OutputMix outMix = { SL_DATALOCATOR_OUTPUTMIX , outputMix } ; <nl> - SLDataSink audioSnk = { & outMix , NULL } ; <nl> - <nl> - result = ( * engineInterface ) - > CreateAudioPlayer ( engineInterface , & player . player , & audioSrc , & audioSnk , 0 , NULL , NULL ) ; <nl> - if ( result ! = SL_RESULT_SUCCESS ) { <nl> - * error = " Failed to create audio player " ; <nl> - return NULL ; <nl> - } <nl> - <nl> - result = ( * ( player . player ) ) - > Realize ( player . player , SL_BOOLEAN_FALSE ) ; <nl> - if ( result ! 
= SL_RESULT_SUCCESS ) { <nl> - * error = " Failed to realize audio player " ; <nl> - return NULL ; <nl> - } <nl> - <nl> - result = ( * ( player . player ) ) - > GetInterface ( player . player , SL_IID_PLAY , & ( player . playInterface ) ) ; <nl> - if ( result ! = SL_RESULT_SUCCESS ) { <nl> - * error = " Failed to get player interface " ; <nl> - return NULL ; <nl> - } <nl> - <nl> - result = ( * ( player . playInterface ) ) - > SetPlayState ( player . playInterface , SL_PLAYSTATE_PAUSED ) ; <nl> - if ( result ! = SL_RESULT_SUCCESS ) { <nl> - * error = " Failed to initialize playstate to SL_PLAYSTATE_PAUSED " ; <nl> - return NULL ; <nl> - } <nl> - <nl> - uri_player * p = players ; <nl> - while ( p - > uri ! = NULL ) { <nl> - p + + ; <nl> - } <nl> - * p = player ; <nl> - <nl> - return p ; <nl> - } <nl> - <nl> - void slplay_setup ( char * * error ) { <nl> - SLresult result ; <nl> - SLEngineOption engineOptions [ ] = { { SL_ENGINEOPTION_THREADSAFE , SL_BOOLEAN_TRUE } } ; <nl> - result = slCreateEngine ( & engine , 1 , engineOptions , 0 , NULL , NULL ) ; <nl> - if ( result ! = SL_RESULT_SUCCESS ) { <nl> - * error = " Failed to create OpenSL engine " ; <nl> - } <nl> - <nl> - result = ( * engine ) - > Realize ( engine , SL_BOOLEAN_FALSE ) ; <nl> - if ( result ! = SL_RESULT_SUCCESS ) { <nl> - * error = " Failed to realize OpenSL engine " ; <nl> - } <nl> - <nl> - result = ( * engine ) - > GetInterface ( engine , SL_IID_ENGINE , & engineInterface ) ; <nl> - if ( result ! = SL_RESULT_SUCCESS ) { <nl> - * error = " Failed to realize OpenSL engine " ; <nl> - } <nl> - <nl> - const SLInterfaceID ids [ 1 ] = { SL_IID_VOLUME } ; <nl> - const SLboolean req [ 1 ] = { SL_BOOLEAN_FALSE } ; <nl> - result = ( * engineInterface ) - > CreateOutputMix ( engineInterface , & outputMix , 1 , ids , req ) ; <nl> - if ( result ! 
= SL_RESULT_SUCCESS ) { <nl> - * error = " Failed to create output mix " ; <nl> - } <nl> - <nl> - result = ( * outputMix ) - > Realize ( outputMix , SL_BOOLEAN_FALSE ) ; <nl> - if ( result ! = SL_RESULT_SUCCESS ) { <nl> - * error = " Failed to realize output mix " ; <nl> - } <nl> - } <nl> - <nl> - void slplay_destroy ( ) { <nl> - for ( uri_player * player = players ; player - > uri ! = NULL ; player + + ) { <nl> - if ( player - > player ) { <nl> - ( * ( player - > player ) ) - > Destroy ( player - > player ) ; <nl> - } <nl> - } <nl> - <nl> - ( * outputMix ) - > Destroy ( outputMix ) ; <nl> - ( * engine ) - > Destroy ( engine ) ; <nl> - } <nl> - <nl> - void slplay_stop ( uri_player * player , char * * error ) { <nl> - SLPlayItf playInterface = player - > playInterface ; <nl> - SLresult result ; <nl> - <nl> - / / stop a loop <nl> - loop_start = 0 ; <nl> - <nl> - result = ( * playInterface ) - > SetPlayState ( playInterface , SL_PLAYSTATE_PAUSED ) ; <nl> - if ( result ! = SL_RESULT_SUCCESS ) { <nl> - * error = " Failed to set SL_PLAYSTATE_STOPPED " ; <nl> - return ; <nl> - } <nl> - } <nl> - <nl> - void slplay_stop_uri ( const char * uri , char * * error ) { <nl> - uri_player * player = get_player_by_uri ( uri ) ; <nl> - slplay_stop ( player , error ) ; <nl> - } <nl> - <nl> - void SLAPIENTRY slplay_callback ( SLPlayItf playItf , void * context , SLuint32 event ) { <nl> - uint64_t cb_loop_start = * ( ( uint64_t * ) context ) ; <nl> - if ( event = = SL_PLAYEVENT_HEADATEND & & cb_loop_start = = loop_start ) { <nl> - ( * playItf ) - > SetPlayState ( playItf , SL_PLAYSTATE_STOPPED ) ; <nl> - ( * playItf ) - > SetMarkerPosition ( playItf , 0 ) ; <nl> - ( * playItf ) - > SetPlayState ( playItf , SL_PLAYSTATE_PLAYING ) ; <nl> - } <nl> - } <nl> - <nl> - void slplay_play ( const char * uri , bool loop , char * * error ) { <nl> - SLresult result ; <nl> - <nl> - uri_player * player = get_player_by_uri ( uri ) ; <nl> - if ( player = = NULL ) { <nl> - player = 
slplay_create_player_for_uri ( uri , error ) ; <nl> - if ( * error ) { <nl> - return ; <nl> - } <nl> - } <nl> - <nl> - SLPlayItf playInterface = player - > playInterface ; <nl> - if ( loop ) { <nl> - loop_start = nanos_since_boot ( ) ; <nl> - loop_start_ctx = loop_start ; <nl> - result = ( * playInterface ) - > RegisterCallback ( playInterface , slplay_callback , & loop_start_ctx ) ; <nl> - if ( result ! = SL_RESULT_SUCCESS ) { <nl> - char error [ 64 ] ; <nl> - snprintf ( error , sizeof ( error ) , " Failed to register callback . % d " , result ) ; <nl> - * error = error [ 0 ] ; <nl> - return ; <nl> - } <nl> - <nl> - result = ( * playInterface ) - > SetCallbackEventsMask ( playInterface , SL_PLAYEVENT_HEADATEND ) ; <nl> - if ( result ! = SL_RESULT_SUCCESS ) { <nl> - * error = " Failed to set callback event mask " ; <nl> - return ; <nl> - } <nl> - } <nl> - <nl> - / / Reset the audio player <nl> - result = ( * playInterface ) - > ClearMarkerPosition ( playInterface ) ; <nl> - if ( result ! = SL_RESULT_SUCCESS ) { <nl> - * error = " Failed to clear marker position " ; <nl> - return ; <nl> - } <nl> - result = ( * playInterface ) - > SetPlayState ( playInterface , SL_PLAYSTATE_PAUSED ) ; <nl> - result = ( * playInterface ) - > SetPlayState ( playInterface , SL_PLAYSTATE_STOPPED ) ; <nl> - result = ( * playInterface ) - > SetPlayState ( playInterface , SL_PLAYSTATE_PLAYING ) ; <nl> - if ( result ! = SL_RESULT_SUCCESS ) { <nl> - * error = " Failed to set SL_PLAYSTATE_PLAYING " ; <nl> - } <nl> - } <nl> deleted file mode 100644 <nl> index f8c39ceeb7 . . 0000000000 <nl> mmm a / selfdrive / ui / slplay . h <nl> ppp / dev / null <nl> <nl> - # ifndef SLPLAY_H <nl> - # define SLPLAY_H <nl> - <nl> - # include < SLES / OpenSLES . h > <nl> - # include < SLES / OpenSLES_Android . h > <nl> - # include < stdbool . 
h > <nl> - <nl> - typedef struct { <nl> - const char * uri ; <nl> - SLObjectItf player ; <nl> - SLPlayItf playInterface ; <nl> - } uri_player ; <nl> - <nl> - void slplay_setup ( char * * error ) ; <nl> - uri_player * slplay_create_player_for_uri ( const char * uri , char * * error ) ; <nl> - void slplay_play ( const char * uri , bool loop , char * * error ) ; <nl> - void slplay_stop_uri ( const char * uri , char * * error ) ; <nl> - void slplay_destroy ( ) ; <nl> - <nl> - # endif <nl> - <nl> mmm a / selfdrive / ui / sound . cc <nl> ppp b / selfdrive / ui / sound . cc <nl> <nl> - # include < stdlib . h > <nl> - # include " sound . hpp " <nl> <nl> + # include " sound . hpp " <nl> + # include < math . h > <nl> + # include < stdlib . h > <nl> + # include < atomic > <nl> # include " common / swaglog . h " <nl> + # include " common / timing . h " <nl> + <nl> + # define LogOnError ( func , msg ) \ <nl> + if ( ( func ) ! = SL_RESULT_SUCCESS ) { LOGW ( msg ) ; } <nl> + <nl> + # define ReturnOnError ( func , msg ) \ <nl> + if ( ( func ) ! = SL_RESULT_SUCCESS ) { LOGW ( msg ) ; return false ; } <nl> + <nl> + static std : : map < AudibleAlert , const char * > sound_map { <nl> + { AudibleAlert : : CHIME_DISENGAGE , " . . / assets / sounds / disengaged . wav " } , <nl> + { AudibleAlert : : CHIME_ENGAGE , " . . / assets / sounds / engaged . wav " } , <nl> + { AudibleAlert : : CHIME_WARNING1 , " . . / assets / sounds / warning_1 . wav " } , <nl> + { AudibleAlert : : CHIME_WARNING2 , " . . / assets / sounds / warning_2 . wav " } , <nl> + { AudibleAlert : : CHIME_WARNING2_REPEAT , " . . / assets / sounds / warning_2 . wav " } , <nl> + { AudibleAlert : : CHIME_WARNING_REPEAT , " . . / assets / sounds / warning_repeat . wav " } , <nl> + { AudibleAlert : : CHIME_ERROR , " . . / assets / sounds / error . wav " } , <nl> + { AudibleAlert : : CHIME_PROMPT , " . . / assets / sounds / error . 
wav " } } ; <nl> + <nl> + struct Sound : : Player { <nl> + SLObjectItf player ; <nl> + SLPlayItf playItf ; <nl> + / / slplay_callback runs on a background thread , use atomic to ensure thread safe . <nl> + std : : atomic < int > repeat ; <nl> + } ; <nl> <nl> - typedef struct { <nl> - AudibleAlert alert ; <nl> - const char * uri ; <nl> - bool loop ; <nl> - } sound_file ; <nl> + bool Sound : : init ( int volume ) { <nl> + SLEngineOption engineOptions [ ] = { { SL_ENGINEOPTION_THREADSAFE , SL_BOOLEAN_TRUE } } ; <nl> + const SLInterfaceID ids [ 1 ] = { SL_IID_VOLUME } ; <nl> + const SLboolean req [ 1 ] = { SL_BOOLEAN_FALSE } ; <nl> + SLEngineItf engineInterface = NULL ; <nl> + ReturnOnError ( slCreateEngine ( & engine_ , 1 , engineOptions , 0 , NULL , NULL ) , " Failed to create OpenSL engine " ) ; <nl> + ReturnOnError ( ( * engine_ ) - > Realize ( engine_ , SL_BOOLEAN_FALSE ) , " Failed to realize OpenSL engine " ) ; <nl> + ReturnOnError ( ( * engine_ ) - > GetInterface ( engine_ , SL_IID_ENGINE , & engineInterface ) , " Failed to get OpenSL engine interface " ) ; <nl> + ReturnOnError ( ( * engineInterface ) - > CreateOutputMix ( engineInterface , & outputMix_ , 1 , ids , req ) , " Failed to create output mix " ) ; <nl> + ReturnOnError ( ( * outputMix_ ) - > Realize ( outputMix_ , SL_BOOLEAN_FALSE ) , " Failed to realize output mix " ) ; <nl> + <nl> + for ( auto & kv : sound_map ) { <nl> + SLDataLocator_URI locUri = { SL_DATALOCATOR_URI , ( SLchar * ) kv . 
second } ; <nl> + SLDataFormat_MIME formatMime = { SL_DATAFORMAT_MIME , NULL , SL_CONTAINERTYPE_UNSPECIFIED } ; <nl> + SLDataSource audioSrc = { & locUri , & formatMime } ; <nl> + SLDataLocator_OutputMix outMix = { SL_DATALOCATOR_OUTPUTMIX , outputMix_ } ; <nl> + SLDataSink audioSnk = { & outMix , NULL } ; <nl> + <nl> + SLObjectItf player = NULL ; <nl> + SLPlayItf playItf = NULL ; <nl> + ReturnOnError ( ( * engineInterface ) - > CreateAudioPlayer ( engineInterface , & player , & audioSrc , & audioSnk , 0 , NULL , NULL ) , " Failed to create audio player " ) ; <nl> + ReturnOnError ( ( * player ) - > Realize ( player , SL_BOOLEAN_FALSE ) , " Failed to realize audio player " ) ; <nl> + ReturnOnError ( ( * player ) - > GetInterface ( player , SL_IID_PLAY , & playItf ) , " Failed to get player interface " ) ; <nl> + ReturnOnError ( ( * playItf ) - > SetPlayState ( playItf , SL_PLAYSTATE_PAUSED ) , " Failed to initialize playstate to SL_PLAYSTATE_PAUSED " ) ; <nl> + <nl> + player_ [ kv . first ] = new Sound : : Player { player , playItf } ; <nl> + } <nl> <nl> - extern " C " { <nl> - # include " slplay . h " <nl> + setVolume ( volume ) ; <nl> + return true ; <nl> } <nl> <nl> - int last_volume = 0 ; <nl> - <nl> - void set_volume ( int volume ) { <nl> - if ( last_volume ! = volume ) { <nl> - char volume_change_cmd [ 64 ] ; <nl> - sprintf ( volume_change_cmd , " service call audio 3 i32 3 i32 % d i32 1 & " , volume ) ; <nl> - <nl> - / / 5 second timeout at 60fps <nl> - int volume_changed = system ( volume_change_cmd ) ; <nl> - last_volume = volume ; <nl> + AudibleAlert Sound : : currentPlaying ( ) { <nl> + if ( currentSound_ ! = AudibleAlert : : NONE ) { <nl> + auto playItf = player_ . 
at ( currentSound_ ) - > playItf ; <nl> + SLuint32 state ; <nl> + if ( SL_RESULT_SUCCESS = = ( * playItf ) - > GetPlayState ( playItf , & state ) & & <nl> + ( state = = SL_PLAYSTATE_STOPPED | | state = = SL_PLAYSTATE_PAUSED ) ) { <nl> + currentSound_ = AudibleAlert : : NONE ; <nl> + } <nl> } <nl> + return currentSound_ ; <nl> } <nl> <nl> - <nl> - sound_file sound_table [ ] = { <nl> - { cereal : : CarControl : : HUDControl : : AudibleAlert : : CHIME_DISENGAGE , " . . / assets / sounds / disengaged . wav " , false } , <nl> - { cereal : : CarControl : : HUDControl : : AudibleAlert : : CHIME_ENGAGE , " . . / assets / sounds / engaged . wav " , false } , <nl> - { cereal : : CarControl : : HUDControl : : AudibleAlert : : CHIME_WARNING1 , " . . / assets / sounds / warning_1 . wav " , false } , <nl> - { cereal : : CarControl : : HUDControl : : AudibleAlert : : CHIME_WARNING2 , " . . / assets / sounds / warning_2 . wav " , false } , <nl> - { cereal : : CarControl : : HUDControl : : AudibleAlert : : CHIME_WARNING2_REPEAT , " . . / assets / sounds / warning_2 . wav " , true } , <nl> - { cereal : : CarControl : : HUDControl : : AudibleAlert : : CHIME_WARNING_REPEAT , " . . / assets / sounds / warning_repeat . wav " , true } , <nl> - { cereal : : CarControl : : HUDControl : : AudibleAlert : : CHIME_ERROR , " . . / assets / sounds / error . wav " , false } , <nl> - { cereal : : CarControl : : HUDControl : : AudibleAlert : : CHIME_PROMPT , " . . / assets / sounds / error . wav " , false } , <nl> - { cereal : : CarControl : : HUDControl : : AudibleAlert : : NONE , NULL , false } , <nl> - } ; <nl> - <nl> - sound_file * get_sound_file ( AudibleAlert alert ) { <nl> - for ( sound_file * s = sound_table ; s - > alert ! 
= cereal : : CarControl : : HUDControl : : AudibleAlert : : NONE ; s + + ) { <nl> - if ( s - > alert = = alert ) { <nl> - return s ; <nl> - } <nl> + void SLAPIENTRY slplay_callback ( SLPlayItf playItf , void * context , SLuint32 event ) { <nl> + Sound : : Player * s = reinterpret_cast < Sound : : Player * > ( context ) ; <nl> + if ( event = = SL_PLAYEVENT_HEADATEND & & s - > repeat > 1 ) { <nl> + - - s - > repeat ; <nl> + ( * playItf ) - > SetPlayState ( playItf , SL_PLAYSTATE_STOPPED ) ; <nl> + ( * playItf ) - > SetMarkerPosition ( playItf , 0 ) ; <nl> + ( * playItf ) - > SetPlayState ( playItf , SL_PLAYSTATE_PLAYING ) ; <nl> } <nl> - <nl> - return NULL ; <nl> } <nl> <nl> - void play_alert_sound ( AudibleAlert alert ) { <nl> - sound_file * sound = get_sound_file ( alert ) ; <nl> - char * error = NULL ; <nl> + bool Sound : : play ( AudibleAlert alert , int repeat ) { <nl> + if ( currentSound_ ! = AudibleAlert : : NONE ) { <nl> + stop ( ) ; <nl> + } <nl> + auto player = player_ . at ( alert ) ; <nl> + SLPlayItf playItf = player - > playItf ; <nl> + player - > repeat = repeat ; <nl> + if ( player - > repeat > 0 ) { <nl> + ReturnOnError ( ( * playItf ) - > RegisterCallback ( playItf , slplay_callback , player ) , " Failed to register callback " ) ; <nl> + ReturnOnError ( ( * playItf ) - > SetCallbackEventsMask ( playItf , SL_PLAYEVENT_HEADATEND ) , " Failed to set callback event mask " ) ; <nl> + } <nl> <nl> - slplay_play ( sound - > uri , sound - > loop , & error ) ; <nl> - if ( error ) { <nl> - LOGW ( " error playing sound : % s " , error ) ; <nl> + / / Reset the audio player <nl> + ReturnOnError ( ( * playItf ) - > ClearMarkerPosition ( playItf ) , " Failed to clear marker position " ) ; <nl> + uint32_t states [ ] = { SL_PLAYSTATE_PAUSED , SL_PLAYSTATE_STOPPED , SL_PLAYSTATE_PLAYING } ; <nl> + for ( auto state : states ) { <nl> + ReturnOnError ( ( * playItf ) - > SetPlayState ( playItf , state ) , " Failed to set SL_PLAYSTATE_PLAYING " ) ; <nl> } <nl> + 
currentSound_ = alert ; <nl> + return true ; <nl> } <nl> <nl> - void stop_alert_sound ( AudibleAlert alert ) { <nl> - sound_file * sound = get_sound_file ( alert ) ; <nl> - char * error = NULL ; <nl> - <nl> - slplay_stop_uri ( sound - > uri , & error ) ; <nl> - if ( error ) { <nl> - LOGW ( " error stopping sound : % s " , error ) ; <nl> + void Sound : : stop ( ) { <nl> + if ( currentSound_ ! = AudibleAlert : : NONE ) { <nl> + auto player = player_ . at ( currentSound_ ) ; <nl> + player - > repeat = 0 ; <nl> + LogOnError ( ( * ( player - > playItf ) ) - > SetPlayState ( player - > playItf , SL_PLAYSTATE_PAUSED ) , " Failed to set SL_PLAYSTATE_PAUSED " ) ; <nl> + currentSound_ = AudibleAlert : : NONE ; <nl> } <nl> } <nl> <nl> - void ui_sound_init ( ) { <nl> - char * error = NULL ; <nl> - slplay_setup ( & error ) ; <nl> - if ( error ) goto fail ; <nl> - <nl> - for ( sound_file * s = sound_table ; s - > alert ! = cereal : : CarControl : : HUDControl : : AudibleAlert : : NONE ; s + + ) { <nl> - slplay_create_player_for_uri ( s - > uri , & error ) ; <nl> - if ( error ) goto fail ; <nl> + void Sound : : setVolume ( int volume , int timeout_seconds ) { <nl> + if ( last_volume_ = = volume ) return ; <nl> + <nl> + double current_time = nanos_since_boot ( ) ; <nl> + if ( ( current_time - last_set_volume_time_ ) > ( timeout_seconds * ( 1e + 9 ) ) ) { <nl> + char volume_change_cmd [ 64 ] ; <nl> + snprintf ( volume_change_cmd , sizeof ( volume_change_cmd ) , " service call audio 3 i32 3 i32 % d i32 1 & " , volume ) ; <nl> + system ( volume_change_cmd ) ; <nl> + last_volume_ = volume ; <nl> + last_set_volume_time_ = current_time ; <nl> } <nl> - return ; <nl> - <nl> - fail : <nl> - LOGW ( error ) ; <nl> - exit ( 1 ) ; <nl> } <nl> <nl> - void ui_sound_destroy ( ) { <nl> - slplay_destroy ( ) ; <nl> + Sound : : ~ Sound ( ) { <nl> + for ( auto & kv : player_ ) { <nl> + ( * ( kv . second - > player ) ) - > Destroy ( kv . second - > player ) ; <nl> + delete kv . 
second ; <nl> + } <nl> + if ( outputMix_ ) ( * outputMix_ ) - > Destroy ( outputMix_ ) ; <nl> + if ( engine_ ) ( * engine_ ) - > Destroy ( engine_ ) ; <nl> } <nl> - <nl> mmm a / selfdrive / ui / sound . hpp <nl> ppp b / selfdrive / ui / sound . hpp <nl> <nl> - # ifndef __SOUND_HPP <nl> - # define __SOUND_HPP <nl> - <nl> + # pragma once <nl> + # include < map > <nl> # include " cereal / gen / cpp / log . capnp . h " <nl> <nl> - typedef cereal : : CarControl : : HUDControl : : AudibleAlert AudibleAlert ; <nl> - <nl> - void ui_sound_init ( ) ; <nl> - void ui_sound_destroy ( ) ; <nl> + # if defined ( QCOM ) | | defined ( QCOM2 ) <nl> + # include < SLES / OpenSLES . h > <nl> + # include < SLES / OpenSLES_Android . h > <nl> + # endif <nl> <nl> - void set_volume ( int volume ) ; <nl> + typedef cereal : : CarControl : : HUDControl : : AudibleAlert AudibleAlert ; <nl> <nl> - void play_alert_sound ( AudibleAlert alert ) ; <nl> - void stop_alert_sound ( AudibleAlert alert ) ; <nl> + class Sound { <nl> + public : <nl> + Sound ( ) = default ; <nl> + bool init ( int volume ) ; <nl> + bool play ( AudibleAlert alert , int repeat = 0 ) ; <nl> + void stop ( ) ; <nl> + void setVolume ( int volume , int timeout_seconds = 5 ) ; <nl> + AudibleAlert currentPlaying ( ) ; <nl> + ~ Sound ( ) ; <nl> <nl> + private : <nl> + # if defined ( QCOM ) | | defined ( QCOM2 ) <nl> + SLObjectItf engine_ = nullptr ; <nl> + SLObjectItf outputMix_ = nullptr ; <nl> + int last_volume_ = 0 ; <nl> + double last_set_volume_time_ = 0 . ; <nl> + AudibleAlert currentSound_ = AudibleAlert : : NONE ; <nl> + struct Player ; <nl> + std : : map < AudibleAlert , Player * > player_ ; <nl> + friend void SLAPIENTRY slplay_callback ( SLPlayItf playItf , void * context , SLuint32 event ) ; <nl> # endif <nl> - <nl> + } ; <nl> deleted file mode 100755 <nl> index 802bdee8fc . . 0000000000 <nl> mmm a / selfdrive / ui / test / build_play_sound . sh <nl> ppp / dev / null <nl> <nl> - # ! 
/ bin / sh <nl> - clang - fPIC - o play_sound play_sound . c . . / slplay . c - I . . / . . / - I . . / - lOpenSLES - Wl , - rpath = / system / lib64 <nl> - <nl> deleted file mode 100644 <nl> index 030a9f25a5 . . 0000000000 <nl> mmm a / selfdrive / ui / test / play_sound . c <nl> ppp / dev / null <nl> <nl> - # include < stdio . h > <nl> - # include " slplay . h " <nl> - <nl> - void play_sound ( char * uri , int volume ) { <nl> - char * * error = NULL ; <nl> - printf ( " call slplay_setup \ n " ) ; <nl> - slplay_setup ( error ) ; <nl> - if ( error ) { printf ( " % s \ n " , * error ) ; return ; } <nl> - <nl> - printf ( " call slplay_create_player_for_uri \ n " ) ; <nl> - slplay_create_player_for_uri ( uri , error ) ; <nl> - if ( error ) { printf ( " % s \ n " , * error ) ; return ; } <nl> - <nl> - printf ( " call slplay_play \ n " ) ; <nl> - <nl> - while ( 1 ) { <nl> - char volume_change_cmd [ 64 ] ; <nl> - sprintf ( volume_change_cmd , " service call audio 3 i32 3 i32 % d i32 1 " , volume ) ; <nl> - system ( volume_change_cmd ) ; <nl> - <nl> - slplay_play ( uri , false , error ) ; <nl> - if ( error ) { printf ( " % s \ n " , * error ) ; return ; } <nl> - <nl> - sleep ( 1 ) ; <nl> - } <nl> - } <nl> - <nl> - int main ( int argc , char * argv [ ] ) { <nl> - int volume = 10 ; <nl> - if ( argc > 2 ) { <nl> - volume = atoi ( argv [ 2 ] ) ; <nl> - } <nl> - printf ( " setting volume to % d \ n " , volume ) ; <nl> - <nl> - play_sound ( argv [ 1 ] , volume ) ; <nl> - return 0 ; <nl> - } <nl> mmm a / selfdrive / ui / ui . cc <nl> ppp b / selfdrive / ui / ui . cc <nl> void handle_message ( UIState * s , SubMaster & sm ) { <nl> if ( ! scene . frontview ) { s - > controls_seen = true ; } <nl> <nl> auto alert_sound = scene . controls_state . getAlertSound ( ) ; <nl> - const auto sound_none = cereal : : CarControl : : HUDControl : : AudibleAlert : : NONE ; <nl> - if ( alert_sound ! = s - > alert_sound ) { <nl> - if ( s - > alert_sound ! 
= sound_none ) { <nl> - stop_alert_sound ( s - > alert_sound ) ; <nl> - } <nl> - if ( alert_sound ! = sound_none ) { <nl> - play_alert_sound ( alert_sound ) ; <nl> - s - > alert_type = scene . controls_state . getAlertType ( ) ; <nl> + if ( alert_sound ! = s - > sound . currentPlaying ( ) ) { <nl> + if ( alert_sound = = AudibleAlert : : NONE ) { <nl> + s - > sound . stop ( ) ; <nl> + } else { <nl> + s - > sound . play ( alert_sound ) ; <nl> } <nl> - s - > alert_sound = alert_sound ; <nl> } <nl> scene . alert_text1 = scene . controls_state . getAlertText1 ( ) ; <nl> scene . alert_text2 = scene . controls_state . getAlertText2 ( ) ; <nl> void handle_message ( UIState * s , SubMaster & sm ) { <nl> if ( ! s - > started ) { <nl> if ( s - > status ! = STATUS_STOPPED ) { <nl> update_status ( s , STATUS_STOPPED ) ; <nl> - s - > alert_sound_timeout = 0 ; <nl> s - > vision_seen = false ; <nl> s - > controls_seen = false ; <nl> s - > active_app = cereal : : UiLayoutState : : App : : HOME ; <nl> int main ( int argc , char * argv [ ] ) { <nl> TouchState touch = { 0 } ; <nl> touch_init ( & touch ) ; <nl> s - > touch_fd = touch . fd ; <nl> - ui_sound_init ( ) ; <nl> <nl> / / light sensor scaling params <nl> const bool LEON = util : : read_file ( " / proc / cmdline " ) . find ( " letv " ) ! = std : : string : : npos ; <nl> int main ( int argc , char * argv [ ] ) { <nl> <nl> const int MIN_VOLUME = LEON ? 12 : 9 ; <nl> const int MAX_VOLUME = LEON ? 15 : 12 ; <nl> + assert ( s - > sound . init ( MIN_VOLUME ) ) ; <nl> <nl> - set_volume ( MIN_VOLUME ) ; <nl> - s - > volume_timeout = 5 * UI_FREQ ; <nl> int draws = 0 ; <nl> <nl> s - > scene . satelliteCount = - 1 ; <nl> int main ( int argc , char * argv [ ] ) { <nl> should_swap = true ; <nl> } <nl> <nl> - if ( s - > volume_timeout > 0 ) { <nl> - s - > volume_timeout - - ; <nl> - } else { <nl> - int volume = fmin ( MAX_VOLUME , MIN_VOLUME + s - > scene . controls_state . 
getVEgo ( ) / 5 ) ; / / up one notch every 5 m / s <nl> - set_volume ( volume ) ; <nl> - s - > volume_timeout = 5 * UI_FREQ ; <nl> - } <nl> + s - > sound . setVolume ( fmin ( MAX_VOLUME , MIN_VOLUME + s - > scene . controls_state . getVEgo ( ) / 5 ) , 5 ) ; / / up one notch every 5 m / s <nl> <nl> / / If car is started and controlsState times out , display an alert <nl> if ( s - > controls_timeout > 0 ) { <nl> int main ( int argc , char * argv [ ] ) { <nl> <nl> s - > scene . alert_text1 = " TAKE CONTROL IMMEDIATELY " ; <nl> s - > scene . alert_text2 = " Controls Unresponsive " ; <nl> - <nl> ui_draw_vision_alert ( s , s - > scene . alert_size , s - > status , s - > scene . alert_text1 . c_str ( ) , s - > scene . alert_text2 . c_str ( ) ) ; <nl> <nl> - s - > alert_sound_timeout = 2 * UI_FREQ ; <nl> - s - > alert_sound = cereal : : CarControl : : HUDControl : : AudibleAlert : : CHIME_WARNING_REPEAT ; <nl> - play_alert_sound ( s - > alert_sound ) ; <nl> + s - > sound . play ( AudibleAlert : : CHIME_WARNING_REPEAT , 3 ) ; / / loop sound 3 times <nl> } <nl> - <nl> - s - > alert_sound_timeout - - ; <nl> s - > controls_seen = false ; <nl> } <nl> <nl> - / / stop playing alert sound <nl> - if ( ( ! s - > started | | ( s - > started & & s - > alert_sound_timeout = = 0 ) ) & & <nl> - s - > alert_sound ! 
= cereal : : CarControl : : HUDControl : : AudibleAlert : : NONE ) { <nl> - stop_alert_sound ( s - > alert_sound ) ; <nl> - s - > alert_sound = cereal : : CarControl : : HUDControl : : AudibleAlert : : NONE ; <nl> - } <nl> - <nl> read_param_timeout ( & s - > is_metric , " IsMetric " , & s - > is_metric_timeout ) ; <nl> read_param_timeout ( & s - > longitudinal_control , " LongitudinalControl " , & s - > longitudinal_control_timeout ) ; <nl> read_param_timeout ( & s - > limit_set_speed , " LimitSetSpeed " , & s - > limit_set_speed_timeout ) ; <nl> int main ( int argc , char * argv [ ] ) { <nl> } <nl> <nl> set_awake ( s , true ) ; <nl> - ui_sound_destroy ( ) ; <nl> <nl> / / wake up bg thread to exit <nl> pthread_mutex_lock ( & s - > lock ) ; <nl> mmm a / selfdrive / ui / ui . hpp <nl> ppp b / selfdrive / ui / ui . hpp <nl> <nl> - # ifndef _UI_H <nl> - # define _UI_H <nl> - <nl> + # pragma once <nl> # include " messaging . hpp " <nl> <nl> # ifdef __APPLE__ <nl> typedef struct UIState { <nl> <nl> / / timeouts <nl> int awake_timeout ; <nl> - int volume_timeout ; <nl> int controls_timeout ; <nl> - int alert_sound_timeout ; <nl> int speed_lim_off_timeout ; <nl> int is_metric_timeout ; <nl> int longitudinal_control_timeout ; <nl> typedef struct UIState { <nl> bool limit_set_speed ; <nl> float speed_lim_off ; <nl> bool is_ego_over_limit ; <nl> - std : : string alert_type ; <nl> - AudibleAlert alert_sound ; <nl> float alert_blinking_alpha ; <nl> bool alert_blinked ; <nl> bool started ; <nl> typedef struct UIState { <nl> model_path_vertices_data model_path_vertices [ MODEL_LANE_PATH_CNT * 2 ] ; <nl> <nl> track_vertices_data track_vertices [ 2 ] ; <nl> + <nl> + Sound sound ; <nl> } UIState ; <nl> <nl> / / API <nl> void ui_draw_image ( NVGcontext * vg , float x , float y , float w , float h , int image <nl> void ui_draw_rect ( NVGcontext * vg , float x , float y , float w , float h , NVGcolor color , float r = 0 , int width = 0 ) ; <nl> void ui_draw_rect ( NVGcontext * vg , 
float x , float y , float w , float h , NVGpaint & paint , float r = 0 ) ; <nl> void ui_nvg_init ( UIState * s ) ; <nl> - <nl> - # endif <nl>
|
Sound refactor ( )
|
commaai/openpilot
|
8cacc14b3180e831ed4fa0782102f973efe8851f
|
2020-06-15T22:26:38Z
|
mmm a / db / db_test . cc <nl> ppp b / db / db_test . cc <nl> namespace { <nl> void VerifyCompactionResult ( <nl> const ColumnFamilyMetaData & cf_meta , <nl> const std : : set < std : : string > & overlapping_file_numbers ) { <nl> + # ifndef NDEBUG <nl> for ( auto & level : cf_meta . levels ) { <nl> for ( auto & file : level . files ) { <nl> assert ( overlapping_file_numbers . find ( file . name ) = = <nl> overlapping_file_numbers . end ( ) ) ; <nl> } <nl> } <nl> + # endif <nl> } <nl> <nl> const SstFileMetaData * PickFileRandomly ( <nl>
|
Fix the build with - DNDEBUG .
|
facebook/rocksdb
|
d232cb156bf541db5105cc15319316e23bdef5d9
|
2014-12-22T23:06:18Z
|
mmm a / main / tests / test_oa_hash_map . cpp <nl> ppp b / main / tests / test_oa_hash_map . cpp <nl> MainLoop * test ( ) { <nl> / / stress test / test for issue # 22928 <nl> { <nl> OAHashMap < int , int > map ; <nl> - int dummy ; <nl> + int dummy = 0 ; <nl> const int N = 1000 ; <nl> uint32_t * keys = new uint32_t [ N ] ; <nl> <nl> mmm a / scene / resources / bit_mask . cpp <nl> ppp b / scene / resources / bit_mask . cpp <nl> static void fill_bits ( const BitMap * p_src , Ref < BitMap > & p_map , const Point2i & p_ <nl> int stack_size = 0 ; <nl> <nl> Point2i pos = p_pos ; <nl> - int next_i ; <nl> - int next_j ; <nl> + int next_i = 0 ; <nl> + int next_j = 0 ; <nl> <nl> bool reenter = true ; <nl> bool popped = false ; <nl>
|
Fix maybe - uninitialized warnings from GCC 4 . 8 . x
|
godotengine/godot
|
4d546164e749871c96b77f6c894a47302827b796
|
2018-12-17T11:42:26Z
|
mmm a / guilib / TextureDX . cpp <nl> ppp b / guilib / TextureDX . cpp <nl> void CDXTexture : : CreateTextureObject ( ) <nl> return ; <nl> } <nl> <nl> - m_texture . Create ( m_textureWidth , m_textureHeight , 1 , 0 , format , D3DPOOL_MANAGED ) ; <nl> + m_texture . Create ( m_textureWidth , m_textureHeight , 1 , g_Windowing . DefaultD3DUsage ( ) , format , g_Windowing . DefaultD3DPool ( ) ) ; <nl> } <nl> <nl> void CDXTexture : : DestroyTextureObject ( ) <nl> mmm a / xbmc / RenderSystemDX . cpp <nl> ppp b / xbmc / RenderSystemDX . cpp <nl> <nl> # include " D3DResource . h " <nl> # include " GUISettings . h " <nl> # include " AdvancedSettings . h " <nl> + # include " utils / SystemInfo . h " <nl> <nl> using namespace std ; <nl> <nl> + / / Dynamic loading of Direct3DCreate9Ex to keep compatibility with 2000 / XP . <nl> + typedef HRESULT ( WINAPI * LPDIRECT3DCREATE9EX ) ( UINT SDKVersion , IDirect3D9Ex * * ppD3D ) ; <nl> + static LPDIRECT3DCREATE9EX g_Direct3DCreate9Ex ; <nl> + static HMODULE g_D3D9ExHandle ; <nl> + <nl> + static bool LoadD3D9Ex ( ) <nl> + { <nl> + g_Direct3DCreate9Ex = ( LPDIRECT3DCREATE9EX ) GetProcAddress ( GetModuleHandle ( " d3d9 . dll " ) , " Direct3DCreate9Ex " ) ; <nl> + if ( g_Direct3DCreate9Ex = = NULL ) <nl> + return false ; <nl> + return true ; <nl> + } <nl> + <nl> CRenderSystemDX : : CRenderSystemDX ( ) : CRenderSystemBase ( ) <nl> { <nl> m_enumRenderingSystem = RENDERING_SYSTEM_DIRECTX ; <nl> CRenderSystemDX : : CRenderSystemDX ( ) : CRenderSystemBase ( ) <nl> m_adapter = D3DADAPTER_DEFAULT ; <nl> m_screenHeight = 0 ; <nl> m_systemFreq = CurrentHostFrequency ( ) ; <nl> + m_useD3D9Ex = false ; <nl> + m_defaultD3DUsage = 0 ; <nl> + m_defaultD3DPool = D3DPOOL_MANAGED ; <nl> <nl> ZeroMemory ( & m_D3DPP , sizeof ( D3DPRESENT_PARAMETERS ) ) ; <nl> } <nl> bool CRenderSystemDX : : InitRenderSystem ( ) <nl> m_renderCaps = 0 ; <nl> D3DADAPTER_IDENTIFIER9 AIdentifier ; <nl> <nl> + m_useD3D9Ex = ( g_sysinfo . 
IsVistaOrHigher ( ) & & LoadD3D9Ex ( ) ) ; <nl> m_pD3D = NULL ; <nl> <nl> - m_pD3D = Direct3DCreate9 ( D3D_SDK_VERSION ) ; <nl> - if ( m_pD3D = = NULL ) <nl> - return false ; <nl> - <nl> + if ( m_useD3D9Ex ) <nl> + { <nl> + HRESULT hr ; <nl> + if ( FAILED ( hr = g_Direct3DCreate9Ex ( D3D_SDK_VERSION , ( IDirect3D9Ex * * ) & m_pD3D ) ) ) <nl> + return false ; <nl> + CLog : : Log ( LOGDEBUG , " % s - using D3D9Ex " , __FUNCTION__ ) ; <nl> + } <nl> + else <nl> + { <nl> + m_pD3D = Direct3DCreate9 ( D3D_SDK_VERSION ) ; <nl> + if ( m_pD3D = = NULL ) <nl> + return false ; <nl> + } <nl> + <nl> UpdateMonitor ( ) ; <nl> <nl> if ( CreateDevice ( ) = = false ) <nl> bool CRenderSystemDX : : InitRenderSystem ( ) <nl> } <nl> m_maxTextureSize = min ( caps . MaxTextureWidth , caps . MaxTextureHeight ) ; <nl> <nl> + if ( caps . Caps2 & D3DCAPS2_DYNAMICTEXTURES ) <nl> + { <nl> + m_defaultD3DUsage = D3DUSAGE_DYNAMIC ; <nl> + m_defaultD3DPool = D3DPOOL_DEFAULT ; <nl> + CLog : : Log ( LOGDEBUG , " % s - using D3DCAPS2_DYNAMICTEXTURES " , __FUNCTION__ ) ; <nl> + } <nl> + else <nl> + { <nl> + m_defaultD3DUsage = 0 ; <nl> + m_defaultD3DPool = D3DPOOL_MANAGED ; <nl> + } <nl> + <nl> return true ; <nl> } <nl> <nl> void CRenderSystemDX : : BuildPresentParameters ( ) <nl> m_D3DPP . MultiSampleType = D3DMULTISAMPLE_NONE ; <nl> m_D3DPP . MultiSampleQuality = 0 ; <nl> <nl> + <nl> / / Try to create a 32 - bit depth , 8 - bit stencil <nl> if ( FAILED ( m_pD3D - > CheckDeviceFormat ( m_adapter , <nl> D3DDEVTYPE_HAL , m_D3DPP . BackBufferFormat , D3DUSAGE_DEPTHSTENCIL , <nl> void CRenderSystemDX : : BuildPresentParameters ( ) <nl> else <nl> m_D3DPP . AutoDepthStencilFormat = D3DFMT_D24X8 ; <nl> } <nl> + <nl> + if ( m_useD3D9Ex ) <nl> + { <nl> + ZeroMemory ( & m_D3DDMEX , sizeof ( D3DDISPLAYMODEEX ) ) ; <nl> + m_D3DDMEX . Size = sizeof ( D3DDISPLAYMODEEX ) ; <nl> + m_D3DDMEX . Width = m_D3DPP . BackBufferWidth ; <nl> + m_D3DDMEX . Height = m_D3DPP . BackBufferHeight ; <nl> + m_D3DDMEX . 
RefreshRate = m_D3DPP . FullScreen_RefreshRateInHz ; <nl> + m_D3DDMEX . Format = m_D3DPP . BackBufferFormat ; <nl> + m_D3DDMEX . ScanLineOrdering = D3DSCANLINEORDERING_PROGRESSIVE ; <nl> + } <nl> } <nl> <nl> bool CRenderSystemDX : : DestroyRenderSystem ( ) <nl> void CRenderSystemDX : : OnDeviceLost ( ) <nl> void CRenderSystemDX : : OnDeviceReset ( ) <nl> { <nl> CSingleLock lock ( m_resourceSection ) ; <nl> + <nl> if ( m_needNewDevice ) <nl> CreateDevice ( ) ; <nl> else <nl> { <nl> / / just need a reset <nl> - m_nDeviceStatus = m_pD3DDevice - > Reset ( & m_D3DPP ) ; <nl> + if ( m_useD3D9Ex ) <nl> + m_nDeviceStatus = ( ( IDirect3DDevice9Ex * ) m_pD3DDevice ) - > ResetEx ( & m_D3DPP , m_D3DPP . Windowed ? NULL : & m_D3DDMEX ) ; <nl> + else <nl> + m_nDeviceStatus = m_pD3DDevice - > Reset ( & m_D3DPP ) ; <nl> } <nl> <nl> if ( m_nDeviceStatus = = S_OK ) <nl> bool CRenderSystemDX : : CreateDevice ( ) <nl> <nl> BuildPresentParameters ( ) ; <nl> <nl> - hr = m_pD3D - > CreateDevice ( m_adapter , devType , m_hFocusWnd , <nl> - D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED , & m_D3DPP , & m_pD3DDevice ) ; <nl> - if ( FAILED ( hr ) ) <nl> + if ( m_useD3D9Ex ) <nl> + { <nl> + hr = ( ( IDirect3D9Ex * ) m_pD3D ) - > CreateDeviceEx ( m_adapter , devType , m_hFocusWnd , <nl> + D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED , & m_D3DPP , m_D3DPP . Windowed ? NULL : & m_D3DDMEX , ( IDirect3DDevice9Ex * * ) & m_pD3DDevice ) ; <nl> + <nl> + if ( FAILED ( hr ) ) <nl> + { <nl> + / / Try a second time , may fail the first time due to back buffer count , <nl> + / / which will be corrected down to 1 by the runtime <nl> + hr = ( ( IDirect3D9Ex * ) m_pD3D ) - > CreateDeviceEx ( m_adapter , devType , m_hFocusWnd , <nl> + D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED , & m_D3DPP , m_D3DPP . Windowed ? 
NULL : & m_D3DDMEX , ( IDirect3DDevice9Ex * * ) & m_pD3DDevice ) ; <nl> + if ( FAILED ( hr ) ) <nl> + { <nl> + hr = ( ( IDirect3D9Ex * ) m_pD3D ) - > CreateDeviceEx ( m_adapter , devType , m_hFocusWnd , <nl> + D3DCREATE_MIXED_VERTEXPROCESSING | D3DCREATE_MULTITHREADED , & m_D3DPP , m_D3DPP . Windowed ? NULL : & m_D3DDMEX , ( IDirect3DDevice9Ex * * ) & m_pD3DDevice ) ; <nl> + if ( FAILED ( hr ) ) <nl> + { <nl> + hr = ( ( IDirect3D9Ex * ) m_pD3D ) - > CreateDeviceEx ( m_adapter , devType , m_hFocusWnd , <nl> + D3DCREATE_SOFTWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED , & m_D3DPP , m_D3DPP . Windowed ? NULL : & m_D3DDMEX , ( IDirect3DDevice9Ex * * ) & m_pD3DDevice ) ; <nl> + } <nl> + if ( FAILED ( hr ) ) <nl> + return false ; <nl> + } <nl> + } <nl> + } <nl> + else <nl> { <nl> - / / Try a second time , may fail the first time due to back buffer count , <nl> - / / which will be corrected down to 1 by the runtime <nl> - hr = m_pD3D - > CreateDevice ( m_adapter , devType , m_hFocusWnd , <nl> + <nl> + hr = m_pD3D - > CreateDevice ( m_adapter , devType , m_hFocusWnd , <nl> D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED , & m_D3DPP , & m_pD3DDevice ) ; <nl> - if ( FAILED ( hr ) ) <nl> + <nl> + if ( FAILED ( hr ) ) <nl> { <nl> + / / Try a second time , may fail the first time due to back buffer count , <nl> + / / which will be corrected down to 1 by the runtime <nl> hr = m_pD3D - > CreateDevice ( m_adapter , devType , m_hFocusWnd , <nl> - D3DCREATE_MIXED_VERTEXPROCESSING | D3DCREATE_MULTITHREADED , & m_D3DPP , & m_pD3DDevice ) ; <nl> + D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED , & m_D3DPP , & m_pD3DDevice ) ; <nl> if ( FAILED ( hr ) ) <nl> { <nl> hr = m_pD3D - > CreateDevice ( m_adapter , devType , m_hFocusWnd , <nl> - D3DCREATE_SOFTWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED , & m_D3DPP , & m_pD3DDevice ) ; <nl> + D3DCREATE_MIXED_VERTEXPROCESSING | D3DCREATE_MULTITHREADED , & m_D3DPP , & m_pD3DDevice ) ; <nl> + if ( FAILED ( hr ) ) 
<nl> + { <nl> + hr = m_pD3D - > CreateDevice ( m_adapter , devType , m_hFocusWnd , <nl> + D3DCREATE_SOFTWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED , & m_D3DPP , & m_pD3DDevice ) ; <nl> + } <nl> + if ( FAILED ( hr ) ) <nl> + return false ; <nl> } <nl> - if ( FAILED ( hr ) ) <nl> - return false ; <nl> } <nl> } <nl> <nl> mmm a / xbmc / RenderSystemDX . h <nl> ppp b / xbmc / RenderSystemDX . h <nl> class CRenderSystemDX : public CRenderSystemBase <nl> LPDIRECT3DDEVICE9 Get3DDevice ( ) { return m_pD3DDevice ; } <nl> int GetBackbufferCount ( ) const { return m_D3DPP . BackBufferCount ; } <nl> <nl> + bool UseD3D9Ex ( ) { return m_useD3D9Ex ; } <nl> + DWORD DefaultD3DUsage ( ) { return m_defaultD3DUsage ; } <nl> + D3DPOOL DefaultD3DPool ( ) { return m_defaultD3DPool ; } <nl> + <nl> / * ! <nl> \ brief Register as a dependent of the DirectX Render System <nl> Resources should call this on construction if they ' re dependent on the Render System <nl> class CRenderSystemDX : public CRenderSystemBase <nl> unsigned int m_screenHeight ; <nl> <nl> D3DPRESENT_PARAMETERS m_D3DPP ; <nl> + D3DDISPLAYMODEEX m_D3DDMEX ; <nl> HWND m_hFocusWnd ; <nl> HWND m_hDeviceWnd ; <nl> unsigned int m_nBackBufferWidth ; <nl> class CRenderSystemDX : public CRenderSystemBase <nl> HRESULT m_nDeviceStatus ; <nl> IDirect3DStateBlock9 * m_stateBlock ; <nl> int64_t m_systemFreq ; <nl> + bool m_useD3D9Ex ; <nl> + DWORD m_defaultD3DUsage ; <nl> + D3DPOOL m_defaultD3DPool ; <nl> <nl> CCriticalSection m_resourceSection ; <nl> std : : vector < ID3DResource * > m_resources ; <nl> mmm a / xbmc / cores / VideoRenderers / WinRenderer . cpp <nl> ppp b / xbmc / cores / VideoRenderers / WinRenderer . cpp <nl> void CWinRenderer : : UpdateVideoFilter ( ) <nl> return ; <nl> } <nl> <nl> - if ( ! m_HQKernelTexture . Create ( 256 , 1 , 1 , 0 , D3DFMT_A16B16G16R16F , D3DPOOL_MANAGED ) ) <nl> + if ( ! m_HQKernelTexture . Create ( 256 , 1 , 1 , g_Windowing . DefaultD3DUsage ( ) , D3DFMT_A16B16G16R16F , g_Windowing . 
DefaultD3DPool ( ) ) ) <nl> { <nl> CLog : : Log ( LOGERROR , __FUNCTION__ " : Failed to create kernel texture . " ) ; <nl> g_application . m_guiDialogKaiToast . QueueNotification ( CGUIDialogKaiToast : : Error , " Video Renderering " , " Failed to init video scaler , falling back to bilinear scaling . " ) ; <nl> bool CWinRenderer : : CreateYV12Texture ( int index ) <nl> DeleteYV12Texture ( index ) ; <nl> <nl> SVideoBuffer & buf = m_VideoBuffers [ index ] ; <nl> - if ( ! buf . planes [ 0 ] . texture . Create ( m_sourceWidth , m_sourceHeight , 1 , 0 , D3DFMT_L8 , D3DPOOL_MANAGED ) <nl> - | | ! buf . planes [ 1 ] . texture . Create ( m_sourceWidth / 2 , m_sourceHeight / 2 , 1 , 0 , D3DFMT_L8 , D3DPOOL_MANAGED ) <nl> - | | ! buf . planes [ 2 ] . texture . Create ( m_sourceWidth / 2 , m_sourceHeight / 2 , 1 , 0 , D3DFMT_L8 , D3DPOOL_MANAGED ) ) <nl> + <nl> + if ( ! buf . planes [ 0 ] . texture . Create ( m_sourceWidth , m_sourceHeight , 1 , g_Windowing . DefaultD3DUsage ( ) , D3DFMT_L8 , g_Windowing . DefaultD3DPool ( ) ) <nl> + | | ! buf . planes [ 1 ] . texture . Create ( m_sourceWidth / 2 , m_sourceHeight / 2 , 1 , g_Windowing . DefaultD3DUsage ( ) , D3DFMT_L8 , g_Windowing . DefaultD3DPool ( ) ) <nl> + | | ! buf . planes [ 2 ] . texture . Create ( m_sourceWidth / 2 , m_sourceHeight / 2 , 1 , g_Windowing . DefaultD3DUsage ( ) , D3DFMT_L8 , g_Windowing . DefaultD3DPool ( ) ) ) <nl> { <nl> CLog : : Log ( LOGERROR , " Unable to create YV12 video texture % i " , index ) ; <nl> return false ; <nl>
|
changed : Use D3D9Ex on Vista and higher if available . Thx CrystalP for patch
|
xbmc/xbmc
|
7ad8e39cee2aeb9bf7baa07a529cf771127c4fc9
|
2010-05-01T11:37:56Z
|
--- a/dbms/tests/queries/0_stateless/00952_basic_constraints.reference
+++ b/dbms/tests/queries/0_stateless/00952_basic_constraints.reference
 1 2
-Exception ok
+ok
 1 2
-Exception ok
-Exception ok
+ok
+ok
 0 11
 7 18
--- a/dbms/tests/queries/0_stateless/00952_basic_constraints.sh
+++ b/dbms/tests/queries/0_stateless/00952_basic_constraints.sh
 CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
 . $CURDIR/../shell_config.sh
 
+EXCEPTION_SUCCESS_TEXT=ok
+
 $CLICKHOUSE_CLIENT --query="DROP TABLE IF EXISTS test_constraints;"
 
 $CLICKHOUSE_CLIENT --query="CREATE TABLE test_constraints
 $CLICKHOUSE_CLIENT --query="SELECT * FROM test_constraints;"
 # This one must throw and exception
 EXCEPTION_TEXT="Some constraints are not satisfied"
 $CLICKHOUSE_CLIENT --query="INSERT INTO test_constraints VALUES (3, 4), (1, 0);" 2>&1 \
-    | grep -q "$EXCEPTION_TEXT" && echo "Exception ok" || echo "Did not thrown an exception"
+    | grep -q "$EXCEPTION_TEXT" && echo "$EXCEPTION_SUCCESS_TEXT" || echo "Did not thrown an exception"
 $CLICKHOUSE_CLIENT --query="SELECT * FROM test_constraints;"
 
 $CLICKHOUSE_CLIENT --query="DROP TABLE test_constraints;"
ENGINE = MergeTree ORDER BY (a);"
 # This one must throw an exception
 EXCEPTION_TEXT="Some constraints are not satisfied"
 $CLICKHOUSE_CLIENT --query="INSERT INTO test_constraints VALUES (1, 2);" 2>&1 \
-    | grep -q "$EXCEPTION_TEXT" && echo "Exception ok" || echo "Did not thrown an exception"
+    | grep -q "$EXCEPTION_TEXT" && echo "$EXCEPTION_SUCCESS_TEXT" || echo "Did not thrown an exception"
 $CLICKHOUSE_CLIENT --query="SELECT * FROM test_constraints;"
 
 # This one must throw an exception
 EXCEPTION_TEXT="Some constraints are not satisfied"
 $CLICKHOUSE_CLIENT --query="INSERT INTO test_constraints VALUES (5, 16), (10, 11);" 2>&1 \
-    | grep -q "$EXCEPTION_TEXT" && echo "Exception ok" || echo "Did not thrown an exception"
+    | grep -q "$EXCEPTION_TEXT" && echo "$EXCEPTION_SUCCESS_TEXT" || echo "Did not thrown an exception"
 $CLICKHOUSE_CLIENT --query="SELECT * FROM test_constraints;"
 
 # This one must succeed
|
Removed word "exception" from test reference
|
ClickHouse/ClickHouse
|
ff6cdaeb9846ae63ddba399e215809f0be641793
|
2019-05-27T06:30:18Z
|
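The shell pattern in the diff above — run a statement that is expected to fail, `grep` its output for the exception text, and emit a fixed success token that the `.reference` file then matches — can be sketched in Python. This is an illustrative stand-in, not part of the ClickHouse test harness; the demo command simulates the failing `INSERT`:

```python
import subprocess
import sys

EXCEPTION_SUCCESS_TEXT = "ok"  # fixed token compared against the .reference file

def expect_failure(cmd, expected):
    """Run cmd and report the success token only when the combined
    stdout/stderr mentions the expected exception text (mirrors `grep -q`)."""
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    output = proc.stdout + proc.stderr
    return EXCEPTION_SUCCESS_TEXT if expected in output else "Did not throw an exception"

# A command that fails with a known message stands in for the INSERT
# that violates a table constraint.
cmd = f"{sys.executable} -c \"raise ValueError('Some constraints are not satisfied')\""
print(expect_failure(cmd, "Some constraints are not satisfied"))  # -> ok
```

Keeping the success token in one variable, as the commit does, means the test script and its reference output can be changed together in a single place.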
--- a/tensorflow/contrib/slim/python/slim/learning.py
+++ b/tensorflow/contrib/slim/python/slim/learning.py
def InitAssignFn(sess):
 
 from tensorflow.contrib.framework.python.ops import variables
 from tensorflow.python.framework import constant_op
+from tensorflow.python.framework import errors
 from tensorflow.python.framework import ops
 from tensorflow.python.ops import array_ops
 from tensorflow.python.ops import clip_ops
def train(
       save_model_secs=save_interval_secs,
       init_fn=init_fn)
 
-  with sv.managed_session(master, start_standard_services=False) as sess:
-    if is_chief:
-      if logdir:
-        sv.start_standard_services(sess)
-    elif startup_delay_steps > 0:
-      _wait_for_step(sess, global_step,
-                     min(startup_delay_steps, number_of_steps or sys.maxint))
-    sv.start_queue_runners(sess)
-    if is_chief and sync_optimizer:
-      sv.start_queue_runners(sess, [chief_queue_runner])
-
+  should_retry = True
+  while should_retry:
     try:
-      while not sv.should_stop():
-        total_loss, should_stop = train_step_fn(
-            sess, train_op, global_step, train_step_kwargs)
-        if should_stop:
-          break
-    finally:
-      if sv.is_chief and cleanup_op is not None:
-        sess.run(cleanup_op)
-
-    # This waits for service threads to finish.
-    sv.Stop()
-
-    if logdir and sv.is_chief:
-      logging.info('Finished training! Saving model to disk.')
-      sv.saver.save(sess, sv.save_path, global_step=sv.global_step)
-
-  return total_loss
+      should_retry = False
+      with sv.managed_session(master, start_standard_services=False) as sess:
+        logging.info('Starting Session.')
+        if is_chief:
+          if logdir:
+            sv.start_standard_services(sess)
+        elif startup_delay_steps > 0:
+          _wait_for_step(sess, global_step,
+                         min(startup_delay_steps,
+                             number_of_steps or sys.maxint))
+        sv.start_queue_runners(sess)
+        logging.info('Starting Queues.')
+        if is_chief and sync_optimizer:
+          sv.start_queue_runners(sess, [chief_queue_runner])
+        try:
+          while not sv.should_stop():
+            total_loss, should_stop = train_step_fn(
+                sess, train_op, global_step, train_step_kwargs)
+            if should_stop:
+              logging.info('Stopping Training.')
+              break
+          if logdir and sv.is_chief:
+            logging.info('Finished training! Saving model to disk.')
+            sv.saver.save(sess, sv.save_path, global_step=sv.global_step)
+        finally:
+          if sv.is_chief and cleanup_op is not None:
+            logging.info('About to execute sync_clean_up_op!')
+            sess.run(cleanup_op)
+
+    except errors.AbortedError:
+      # Always re-run on AbortedError as it indicates a restart of one of the
+      # distributed tensorflow servers.
+      logging.info('Retrying training!')
+      should_retry = True
+
+  return total_loss
|
Capture errors.AbortedError in slim.learning.train() which allows to recover from failure.
|
tensorflow/tensorflow
|
b104d4cae81d97343b12455b1acfdbf49dcee742
|
2016-07-21T04:33:01Z
|
--- a/tensorflow/python/eager/benchmarks_test.py
+++ b/tensorflow/python/eager/benchmarks_test.py
def _run(self, func, num_iters=10000, execution_mode=None):
         wall_time=mean_us,
         extras={"examples_per_sec": num_iters / total_time})
 
-  def benchmark_mirroring_off(self):
+  # TODO(gjn): Fix continuous benchmark runs
+  def _DISABLED_benchmark_mirroring_off(self):
     remote.connect_to_remote_host(self._cached_server_target)
 
     x = random_ops.random_uniform((2, 2)).cpu()
def func(m):
     context.context().mirroring_policy = context.MIRRORING_NONE
     self._run(lambda: func(x))
 
-  def benchmark_mirroring_on(self):
+  # TODO(gjn): Fix continuous benchmark runs
+  def _DISABLED_benchmark_mirroring_on(self):
     remote.connect_to_remote_host(self._cached_server_target)
 
     x = random_ops.random_uniform((2, 2)).cpu()
|
Disable remote mirroring benchmarks
|
tensorflow/tensorflow
|
c6dd9ce543924614119c2ba68e0d3788e8c420b6
|
2019-06-21T19:10:14Z
|
new file mode 100644
index 000000000000..a3b58ad99402
--- /dev/null
+++ b/contrib/gitian-keys/willyko-key.pgp
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+Version: GnuPG v1
+
+mQINBFgs/RoBEADFxycJTUvwqzBZZ0aBZXbmr8Ppd3EPrgBRd47k7uwanf7UFmvY
+Xt4gMEI+EdV0GuoQ0SeoAmQqc5Fxu3AQe2XFbiF+ZNNYT3+V/5GAzWsAH22ncQr0
+AuK95pPi+PZ+M2h669cq/RzFUXZDew0NobR2oBS5h6g3rgmmejVLRqnUpWkkSrqi
+aNgD2GSn8g820wM6LpdxcjTqmMpHHT5owAbv0UP3IcdtpBaS5McoUXK+OAdKK/Zw
+JQ0J1kx4vIyNwuPD3klziGQw8Izb/gFpWg8XaJmMhD5BxNuXJC58Bj9/sFTc0GDQ
+VKMFpYpNi8a6hLPFb4hMjYF77awoz57HtyOOsS03KO/57QE1htx+2NeDm4XkZSBk
++wrU3zgbtmOBcfzEHS/HrROksYDi+Qw3HZL98nfDEWNfsDzfhMZ9wHdM3NsR2xk6
+oNtX0CprS1n2Xr2AY9X1oNgiZCJaSftU67j3lr+9gHOH61ktxt3cUCDodUFjkpKn
+r1CQ2LB63AoUbwGMAeozdXZWzbXJAJbcH9G77zEi9rW0WA2yMSxTXHlpE9MS0UcE
+BVkIMv2b9iQzlhiS8jh8AiKFO1PuT26Cw52N/lSPhA81zw79pZfSYwKFICGHYfvw
+ozZeN9Q+PPl5tqi/3SExxlZKe8EmaveTrUfKMBS4lQO2gWe0bCFgLOIzIwARAQAB
+tB1XaWxseSBLbyA8d2lsbHlrQHN5c2NvaW4ub3JnPokCOAQTAQIAIgUCWCz9GgIb
+AwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AACgkQjjqPMkfby7+0wA//cX7Tc3Nz
+19ApwSbGfC8pJA/nSybcVivroJRftpzeOmYrVM084T9REvYwugl89djvxn6m96iQ
+kqoUGWhBVBtDReVCL7z53G42lHjemaFcxBhIazKxO0qvcc/UXUVOs2OdUbzObDFL
+dHO5xBVqEnW3sq+r4blsXR8U79B9IIri4+2hy4OoEjYv9DzBaaoaqU+F3mudXbmo
+R+hsWc+mklV++TX/kuw6EWT8tusFjXrfqqKcKPRPhbn48OSGWsEPc7yELf7pYFR8
+uDU40faJqkvQ83h5WMTDAhLxd/918ZitqBhjSP+7Humf2YhSto7YmtEWlbeAW+qy
+TcBYkK6SJh8Do3xZd/prFBKEu395n5VQKuLjXaOjqMc1oDHQyPJJjXSN5thLHvan
+z7nNLt2QZO/kxXITDdbWlktVe/WSoive7TuY4dGuX4Si2z9wyhFYxtZDsqE0qmqN
+jIDAZ7u8Qq/LGqpdjOmYr2fEwHe1yVIS+BtVGvtShkX+J+QPb8qBl1d7Ii5i5Afl
+GJoLLIUFkPcIRTYPZpppGSuqfyWdNnaasbLH44lxJisSMMw+fxZabt2bykYN/ZXa
+RP/ItDj81vklg+n6r4f/nZTF1r0UUy4LbSbBY15B4Xm0Tdvh1PMfj/w2q10l7bZB
+XLi9Z/QPaW7TyzaBuLkVckbVFn2nYnXfzHG5Ag0EWCz9GgEQALCgTibFnw+Q3PEL
+G5/peQcQqHxrPAB37HV39B1DedGhVUa6aGSUaLoNMyUjUX1HWN3mWFKTYVB4CH5Y
+xjaXUwxdwCZgBNe4TDglKFPuc+frlSTZxDVE9/fjArmrUP6TPU447ujspyngGLa5
+et5Uig/LxIX/+Mm0ZiYJxb1rMJwK998U1Ev1aHxgNjwTI2ehcKu8CAGOyflzh6a2
+iTBUmLfnQMv5248P2d4P8WDiPq61CWTYTMCFqHqkYKy7h9BYIuMajw3KsgOUNfL2
+1e9Ue8yv5UchZ+GDlBjidIkoK1nd2uJ0kPJkafLGWbcliJfvXxKliZnSbz1Cd4A0
+HDKKCwiuwSLy2aYbs7IRtAOyDER4+fjBcqtf0QTIvoAdNZ9gL64DKVaB58vuSixj
+K1i83XbTOt3q821HxxBrX9u6HP2E5kFdxT2KHDbisAWNP0rFnHVpjugehKFfZb6q
+jbDt3nQL5uCQ8gTNCd4fsoSK6KhCDjamDXlKmaGlxqwOV4W8ZwihoeGt690h7NIH
+h4eiSmMOej3or32lcDETEwrjA2PxvcFsikFc56hRkTaSyyBEH2xhkRrjXMqiQfH0
+j7iOY2PWpFEuu2HVzqe5dBXzn9sMIwxeNCxR/P+xHMqPUlgD1SXEYCNLvvzD6p0+
+kqSe7PiJoEIv351T3hnBhQ6rK0ChABEBAAGJAh8EGAECAAkFAlgs/RoCGwwACgkQ
+jjqPMkfby7/mQA//YsAOdDBl0GscB1PBNXi8VMlI7yG9cqiGrYnZX7h4wUoGEbPI
+jap/PixIsxBCf1BqBRDJdFyvzH9amLlcaVNdCyh6Yt1Pi8kassmz/kbIYgpbFkIL
+ES9N24N7BZ94P77OQy5wic+B4WqJnVrtKr9JBalgBSOMqtccYCma5Ew00mqp+FXM
+suDyBk2HXyl+u6/rRmqZ+BoU8iRpus9F80LFKGEsAgjLjKv68KmApzjunzsBotKk
+g9AsBk4ygbp+nECAtsxpbLMo4hPr4qWm7G4mU5g4xOK2chpAPeqyf0857RWgsXaO
+kjrUu/M5Hme2eIlXwBF14ac4QPnY1rlAIaulvXzmQnMYQFZiw9vaTOdqBFHjkh7T
+XYRAr589Woo25PfMJCbC+Rop6ku6sCFMorbBwojyRhFJnk9xsy5kP5D9IhkPAKu/
+/ABlei0xPOl/gCUUJP7aIikZgS5lAk1TSe/R+yV6ExNwudtLw1G0K2/sY3B4Xo3X
+Q1lTAQPlnAIeK/vlbttLZNIBWquw4cPAkPpIyjmE1dd6jGQdUyZE22uPBx+gpq1w
+AacmVLwvPMe1De0ilJOzj6KpXWBCwt0DWXWztovpBVcAC+qbTrZF9H5dllpqyzKt
+OvxzGssjrX4rDkOx7MyVa2tnXmeCuSN/RvlOUwPvf5zYM8Wh9g7fc6jcDQu5Ag0E
+WfkOfAEQAMNkzAQqSenpXtHsnuCqM1oMMF2kRzny/Jqh3q3BxZ8MHLDhoRRaTENu
+lA4APRXMNM/wlZJUSLX8wWBhufnsPtMf6OOVMZ4AVbXHjUgyJ7lO1zHdj0u6PpYP
+9gmHthIz7FF+cxHj4ziC4CmtRctrn+/U4MwYtNPhxkTnS26oOZes/HXMYSvQBMgT
+AP27GNOBiJRthjIEITvSvS0YZOxgLtWgGiks/pGUw5wm1rguuQVyZ1/LfXBooYJo
+u/v21AEjpuTg7JlwbqXr2k5LojAGq7AxDyWy21IW0E45Gog38zg/hwNll+hjRbSu
+pipf74WXR6xMMlW6A+XWUvElkicfDx8e9LJUnqWbZ+FL7X4SB54ZHNCvfo/8Ug1V
+2tiY9WbUZL9n5ZQHNlk3J+UK/KDvwey1VzKPFjpQNlfahhnppDGiCey+mERjI+75
+gPbk0ctOAEYXgLJjoonGX+iByAfY0YyJF281CtaK/sXQU+TzLLT15WET+gYGsJdY
+xh1PdPscNdSgYudvbKZoFnqUwEGEfD8dT5bjOphfY5+LvGUR2GuLNZpMidcduTYf
+SWAY/vQHQIJArXu29BKscm3tg6tzXu3l9p/bGIQUQB7obN91y3xD3BLICIPRGhKE
+924wxxCuH1vLKmxWDdAAxKo+rEdLJ/rbZnjWQENEFiJ114fBk2NVABEBAAGJBEQE
+GAEIAA8FAln5DnwCGwIFCQPCZwACKQkQjjqPMkfby7/BXSAEGQEIAAYFAln5DnwA
+CgkQYFGSo/6YSmANoQ//SbcKxkop2zA2HrWS4THcEJQwSJ0KGAN/VB83JQhoWThX
+CWxsFNJjBy7+rsoXd3wQG1/aN42nTuj+eh+R6WJJaqqnMqd52l4Kc1kJA6z4DGsy
+3azCDvyzibM0AkJyMJyYi6HRKjzA4M+xKR1HoT/NdQUP5CBUVfvMblSaOWiw4rja
+IhWcbgbQ+Zam/VaV5l1O90eaD9tL3twSfPLYZ/wkeO63jJKHBpI8fpMql/bLg9WD
+Au3h/lU63NWe5lZO1z/jIdfiTSvg8nu162vcOgmUCWo9spkybjJd0Mx6ZId79rVo
+58lwZ4QoaMgPGoVP67LyLOxJTIXeyG5xr1LxhMPMGbnBhlnMQrboLV9kPEL3raHE
+EEKDTtZimVK3ZxmfyBd6MDmwcL/K73xu/R8be9TgdwD8/BZJSOTkO87qZ82G9T7E
+oY5IHU+qd41/Yjbut7AVtAlCr5Lor31EYvZh3gI/H8uZFddOu37Ij7e9Fw2ywv3A
+wPks89tfOvahkfCOJ29znB+uQYpJ461jjhdkB7EHG4ae07M5rRtkNbIc3dqbnMhz
+VA3JpRJN77xPXV7uITHo1s+b50RvWmfYW91zvipaSZxbMLuGBMhn/1QaM1djLOYN
+JordDBwEr2bi5a063yUbZrk6ddECuyxndDHWDNr+Tqx6o7lmAT48UJ199zA4scbf
+2g/9EiRPGcRovsn1tUdjzfmWDxhrRV6F3rYJB1+i6Mqeg2iHHYxxiNDXcuWYXHQ/
+WPWLk5+lgh0rQbrE7InzEejoM0FIHzLTm0lSQpau50/PT2FiH6sOEEDyT2IhBtXX
+eOnKAi1IfGNMzEaPEY8PXH78dEGv0iXIgy4l8Bc57q09Z9R/OUi+Yb1p+S5F/aOi
+7Jd53GGE1bfBIlsMos092XoiMdvKmAczyCUIempKHUBPoqfJge77qk7zJKkyM3Dk
+VX0lXLdhj0PfslFrNf2uRF4uZkmfUV7peeD023c0/SVp3ILUAVds52yawi6Exv4a
+bbvhIw72fc31frCRBqc9HVsBraoozzE9bksG1MdNI3GgKxecOu9lldedlIqi4lO4
+7kTVDLEmcsQO+sSxkXQz2sMSD01CQndpPuhFNlqvVnfK+Kv8pSG37VzSSQz1nt5K
+w/fJBo4T/ztR7D9RzbSDxBP8Jjaa+UYabjab5HcE0JI4CpgmzIOB7qPVbYCn+LNX
+c8Xw5/9iTw+ayawl7PCGRfd14/OPRzI8vS0I9bF8AG84XM46yxAtYieH/9RI3b6/
+GiQYDkBNi6Kb1LfSzx8oKAkbMgiy4y3vWxLQnE34bAoXjGiYdAMliOsyGcvmnObD
+GmSTIlIqunq60CyhaUSIkl2VRhjzz0igfS9751XEvnjeXDc=
+=PVBi
+-----END PGP PUBLIC KEY BLOCK-----
|
Merge: Add gitian PGP key: willyko
|
bitcoin/bitcoin
|
595ec11d804f1f3a92c46f19586c283f9a9ffb4c
|
2017-11-17T13:19:41Z
|
--- a/src/Server/GRPCServer.cpp
+++ b/src/Server/GRPCServer.cpp
 #include "GRPCServer.h"
 #if USE_GRPC
 
+#include <Columns/ColumnString.h>
+#include <Columns/ColumnsNumber.h>
 #include <Common/CurrentThread.h>
 #include <Common/SettingsChanges.h>
 #include <DataStreams/AddingDefaultsBlockInputStream.h>
 #include <DataStreams/AsynchronousBlockInputStream.h>
 #include <Interpreters/Context.h>
+#include <Interpreters/InternalTextLogsQueue.h>
 #include <Interpreters/executeQuery.h>
 #include <IO/ConcatReadBuffer.h>
 #include <IO/ReadBufferFromString.h>
namespace
             str.append(str.empty() ? "" : ", ").append("extremes");
         if (result.has_progress())
             str.append(str.empty() ? "" : ", ").append("progress");
+        if (result.logs_size())
+            str.append(str.empty() ? "" : ", ").append("logs: ").append(std::to_string(result.logs_size())).append(" entries");
         if (result.has_exception())
             str.append(str.empty() ? "" : ", ").append("exception");
         return str;
namespace
 
         void finishQuery();
         void onException(const Exception & exception);
+        void onFatalError();
         void close();
 
         void readQueryInfo();
namespace
         void addProgressToResult();
         void addTotalsToResult(const Block & totals);
         void addExtremesToResult(const Block & extremes);
+        void addLogsToResult();
         void sendResult();
         void throwIfFailedToSendResult();
         void sendException(const Exception & exception);
namespace
 
         BlockIO io;
         Progress progress;
+        InternalTextLogsQueuePtr logs_queue;
 
         GRPCQueryInfo query_info; /// We reuse the same messages multiple times.
         GRPCResult result;
namespace
         query_context->applySettingsChanges(settings_changes);
         const Settings & settings = query_context->getSettingsRef();
 
+        /// Prepare for sending exceptions and logs.
         send_exception_with_stacktrace = query_context->getSettingsRef().calculate_text_stack_trace;
+        const auto client_logs_level = query_context->getSettingsRef().send_logs_level;
+        if (client_logs_level != LogsLevel::none)
+        {
+            logs_queue = std::make_shared<InternalTextLogsQueue>();
+            logs_queue->max_priority = Poco::Logger::parseLevel(client_logs_level.toString());
+            CurrentThread::attachInternalTextLogsQueue(logs_queue, client_logs_level);
+            CurrentThread::setFatalErrorCallback([this] { onFatalError(); });
+        }
 
         /// Set the current database if specified.
         if (!query_info.database().empty())
namespace
                 "Only the following fields can be set: input_data, next_query_info",
                 ErrorCodes::INVALID_GRPC_QUERY_INFO);
         }
-        LOG_DEBUG(log, "Received extra QueryInfo with input data: {} bytes", query_info.input_data().size());
+        LOG_DEBUG(log, "Received extra QueryInfo: input_data: {} bytes", query_info.input_data().size());
         need_input_data_from_query_info = true;
     }
 
namespace
                 after_send_progress.restart();
             }
 
+            addLogsToResult();
+
             bool has_output = write_buffer->offset();
-            if (has_output || result.has_progress())
+            if (has_output || result.has_progress() || result.logs_size())
                 sendResult();
 
             throwIfFailedToSendResult();
namespace
             after_send_progress.restart();
         }
 
+        addLogsToResult();
+
         bool has_output = write_buffer->offset();
-        if (has_output || result.has_progress())
+        if (has_output || result.has_progress() || result.logs_size())
             sendResult();
 
         throwIfFailedToSendResult();
namespace
         finalize = true;
         io.onFinish();
         query_scope->logPeakMemoryUsage();
+        addLogsToResult();
         sendResult();
         close();
 
namespace
 
         if (responder && !responder_finished)
         {
+            try
+            {
+                /// Try to send logs to client, but it could be risky too.
+                addLogsToResult();
+            }
+            catch (...)
+            {
+                LOG_WARNING(log, "Couldn't send logs to client");
+            }
+
             try
             {
                 sendException(exception);
namespace
         close();
     }
 
+    void Call::onFatalError()
+    {
+        if (responder && !responder_finished)
+        {
+            try
+            {
+                finalize = true;
+                addLogsToResult();
+                sendResult();
+            }
+            catch (...)
+            {
+            }
+        }
+    }
+
     void Call::close()
     {
         responder.reset();
namespace
         stream->writeSuffix();
     }
 
+    void Call::addLogsToResult()
+    {
+        if (!logs_queue)
+            return;
+
+        static_assert(::clickhouse::grpc::LOG_NONE == 0);
+        static_assert(::clickhouse::grpc::LOG_FATAL == static_cast<int>(Poco::Message::PRIO_FATAL));
+        static_assert(::clickhouse::grpc::LOG_CRITICAL == static_cast<int>(Poco::Message::PRIO_CRITICAL));
+        static_assert(::clickhouse::grpc::LOG_ERROR == static_cast<int>(Poco::Message::PRIO_ERROR));
+        static_assert(::clickhouse::grpc::LOG_WARNING == static_cast<int>(Poco::Message::PRIO_WARNING));
+        static_assert(::clickhouse::grpc::LOG_NOTICE == static_cast<int>(Poco::Message::PRIO_NOTICE));
+        static_assert(::clickhouse::grpc::LOG_INFORMATION == static_cast<int>(Poco::Message::PRIO_INFORMATION));
+        static_assert(::clickhouse::grpc::LOG_DEBUG == static_cast<int>(Poco::Message::PRIO_DEBUG));
+        static_assert(::clickhouse::grpc::LOG_TRACE == static_cast<int>(Poco::Message::PRIO_TRACE));
+
+        MutableColumns columns;
+        while (logs_queue->tryPop(columns))
+        {
+            if (columns.empty() || columns[0]->empty())
+                continue;
+
+            const auto & column_time = typeid_cast<const ColumnUInt32 &>(*columns[0]);
+            const auto & column_time_microseconds = typeid_cast<const ColumnUInt32 &>(*columns[1]);
+            const auto & column_query_id = typeid_cast<const ColumnString &>(*columns[3]);
+            const auto & column_thread_id = typeid_cast<const ColumnUInt64 &>(*columns[4]);
+            const auto & column_level = typeid_cast<const ColumnInt8 &>(*columns[5]);
+            const auto & column_source = typeid_cast<const ColumnString &>(*columns[6]);
+            const auto & column_text = typeid_cast<const ColumnString &>(*columns[7]);
+            size_t num_rows = column_time.size();
+
+            for (size_t row = 0; row != num_rows; ++row)
+            {
+                auto & log_entry = *result.add_logs();
+                log_entry.set_time(column_time.getElement(row));
+                log_entry.set_time_microseconds(column_time_microseconds.getElement(row));
+                StringRef query_id = column_query_id.getDataAt(row);
+                log_entry.set_query_id(query_id.data, query_id.size);
+                log_entry.set_thread_id(column_thread_id.getElement(row));
+                log_entry.set_level(static_cast<::clickhouse::grpc::LogsLevel>(column_level.getElement(row)));
+                StringRef source = column_source.getDataAt(row);
+                log_entry.set_source(source.data, source.size);
+                StringRef text = column_text.getDataAt(row);
+                log_entry.set_text(text.data, text.size);
+            }
+        }
+    }
+
     void Call::sendResult()
     {
         /// gRPC doesn't allow to write anything to a finished responder.
--- a/src/Server/grpc_protos/clickhouse_grpc.proto
+++ b/src/Server/grpc_protos/clickhouse_grpc.proto
message QueryInfo {
     bool next_query_info = 11;
 }
 
+enum LogsLevel {
+    LOG_NONE = 0;
+    LOG_FATAL = 1;
+    LOG_CRITICAL = 2;
+    LOG_ERROR = 3;
+    LOG_WARNING = 4;
+    LOG_NOTICE = 5;
+    LOG_INFORMATION = 6;
+    LOG_DEBUG = 7;
+    LOG_TRACE = 8;
+}
+
+message LogEntry {
+    uint32 time = 1;
+    uint32 time_microseconds = 2;
+    uint64 thread_id = 3;
+    string query_id = 4;
+    LogsLevel level = 5;
+    string source = 6;
+    string text = 7;
+}
+
 message Progress {
     uint64 read_rows = 1;
     uint64 read_bytes = 2;
message Result {
     string output = 1;
     string totals = 2;
     string extremes = 3;
-    Progress progress = 4;
-    Exception exception = 5;
+    repeated LogEntry logs = 4;
+    Progress progress = 5;
+    Exception exception = 6;
 }
 
 service ClickHouse {
--- a/tests/integration/test_grpc_protocol/test.py
+++ b/tests/integration/test_grpc_protocol/test.py
def query_and_get_extremes(*args, **kwargs):
         extremes += result.extremes
     return extremes
 
+def query_and_get_logs(*args, **kwargs):
+    logs = ""
+    for result in query_no_errors(*args, **kwargs):
+        for log_entry in result.logs:
+            # print(log_entry)
+            logs += log_entry.text + "\n"
+    return logs
+
 @pytest.fixture(scope="module", autouse=True)
 def start_cluster():
     cluster.start()
def test_errors_handling():
     query("CREATE TABLE t (a UInt8) ENGINE = Memory")
     e = query_and_get_error("CREATE TABLE t (a UInt8) ENGINE = Memory")
     assert "Table default.t already exists" in e.display_text
+
+def test_logs():
+    logs = query_and_get_logs("SELECT 1", settings={'send_logs_level': 'debug'})
+    assert "SELECT 1" in logs
+    assert "Read 1 rows" in logs
+    assert "Peak memory usage" in logs
|
Send logs via gRPC protocol too.
|
ClickHouse/ClickHouse
|
4f0405af93fb7d54eae010a52ef9d02c861bb779
|
2020-11-24T14:55:01Z
|
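The `query_and_get_logs` helper added by this commit simply concatenates `log_entry.text` across the streamed results. The accumulation logic can be shown self-contained with stand-in classes (the real `Result` and `LogEntry` types are generated from `clickhouse_grpc.proto`, and the sample log text below is illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class LogEntry:                      # stand-in for clickhouse_grpc.LogEntry
    level: int
    text: str

@dataclass
class Result:                        # stand-in for clickhouse_grpc.Result
    output: str = ""
    logs: list = field(default_factory=list)

def query_and_get_logs(results):
    """Collect log text from a stream of results, one line per entry,
    as the integration-test helper does."""
    logs = ""
    for result in results:
        for log_entry in result.logs:
            logs += log_entry.text + "\n"
    return logs

# A streamed query response: each Result may carry output, logs, or both.
stream = [
    Result(logs=[LogEntry(7, "executeQuery: SELECT 1")]),
    Result(output="1\n", logs=[LogEntry(7, "Read 1 rows, 1.00 B")]),
]
print(query_and_get_logs(stream))
```

Interleaving log entries with regular output in the same `Result` message is what lets the client print server logs while a long query is still running.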