st80968 | Hi @ptrblck
I tried a couple more things after updating llvm:
_ re-installing Xcode
_ pip3 install mkl
_ pip3 install mkl-devel
however the compilation still breaks at 75% as before …
in case it could help, I attach the initial checks of the compilation …
Building wheel torch-1.3.0a0+4fb5a7c
-- Building version 1.3.0a0+4fb5a7c
cmake -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/Users/adrienbitton/Desktop/pytorch/torch -DCMAKE_PREFIX_PATH=/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages -DNUMPY_INCLUDE_DIR=/usr/local/lib/python3.7/site-packages/numpy/core/include -DPYTHON_EXECUTABLE=/usr/local/opt/python/bin/python3.7 -DPYTHON_INCLUDE_DIR=/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/include/python3.7m -DPYTHON_LIBRARY=/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/Python.framework/Versions/3.7/Python -DTORCH_BUILD_VERSION=1.3.0a0+4fb5a7c -DUSE_CUDA=True -DUSE_NUMPY=True /Users/adrienbitton/Desktop/pytorch
-- The CXX compiler identification is AppleClang 8.0.0.8000042
-- The C compiler identification is AppleClang 8.0.0.8000042
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Not forcing any particular BLAS to be found
-- CLANG_VERSION_STRING: 8.0
-- Performing Test COMPILER_WORKS
-- Performing Test COMPILER_WORKS - Success
-- Performing Test SUPPORT_GLIBCXX_USE_C99
-- Performing Test SUPPORT_GLIBCXX_USE_C99 - Success
-- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED
-- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED - Success
-- std::exception_ptr is supported.
-- NUMA is disabled
-- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING
-- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING - Failed
-- Turning off deprecation warning due to glog.
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS - Success
-- Current compiler supports avx2 extension. Will build perfkernels.
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS - Failed
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Success
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Success
-- Performing Test COMPILER_SUPPORTS_RDYNAMIC
-- Performing Test COMPILER_SUPPORTS_RDYNAMIC - Success
-- Building using own protobuf under third_party per request.
-- Use custom protobuf build.
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - found
-- Found Threads: TRUE
-- Caffe2 protobuf include directory: $<BUILD_INTERFACE:/Users/adrienbitton/Desktop/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
-- Trying to find preferred BLAS backend of choice: MKL
-- MKL_THREADING = OMP
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of void*
-- Check size of void* - done
-- Looking for cblas_sgemm
-- Looking for cblas_sgemm - found
-- MKL libraries: /usr/local/lib/libmkl_intel_lp64.dylib;/usr/local/lib/libmkl_intel_thread.dylib;/usr/local/lib/libmkl_core.dylib;/usr/local/lib/libiomp5.dylib;/usr/lib/libpthread.dylib;/usr/lib/libm.dylib
-- MKL include directory: /usr/local/include
-- MKL OpenMP type: Intel
-- MKL OpenMP library: /usr/local/lib/libiomp5.dylib
-- The ASM compiler identification is AppleClang
-- Found assembler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang
-- Brace yourself, we are building NNPACK
-- Performing Test NNPACK_ARCH_IS_X86_32
-- Performing Test NNPACK_ARCH_IS_X86_32 - Failed
-- Found PythonInterp: /usr/local/opt/python/bin/python3.7 (found version "3.7.3")
-- NNPACK backend is x86-64
-- Failed to find LLVM FileCheck
-- Found Git: /usr/local/bin/git (found version "2.21.0")
-- git Version: v1.4.0-505be96a
-- Version: 1.4.0
-- Performing Test HAVE_CXX_FLAG_STD_CXX11
-- Performing Test HAVE_CXX_FLAG_STD_CXX11 - Success
-- Performing Test HAVE_CXX_FLAG_WALL
-- Performing Test HAVE_CXX_FLAG_WALL - Success
-- Performing Test HAVE_CXX_FLAG_WEXTRA
-- Performing Test HAVE_CXX_FLAG_WEXTRA - Success
-- Performing Test HAVE_CXX_FLAG_WSHADOW
-- Performing Test HAVE_CXX_FLAG_WSHADOW - Success
-- Performing Test HAVE_CXX_FLAG_WERROR
-- Performing Test HAVE_CXX_FLAG_WERROR - Success
-- Performing Test HAVE_CXX_FLAG_PEDANTIC
-- Performing Test HAVE_CXX_FLAG_PEDANTIC - Success
-- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS
-- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS - Success
-- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32
-- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32 - Success
-- Performing Test HAVE_CXX_FLAG_WFLOAT_EQUAL
-- Performing Test HAVE_CXX_FLAG_WFLOAT_EQUAL - Success
-- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING
-- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING - Success
-- Performing Test HAVE_CXX_FLAG_WNO_DEPRECATED_DECLARATIONS
-- Performing Test HAVE_CXX_FLAG_WNO_DEPRECATED_DECLARATIONS - Success
-- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING
-- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING - Success
-- Performing Test HAVE_CXX_FLAG_WD654
-- Performing Test HAVE_CXX_FLAG_WD654 - Failed
-- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY
-- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY - Success
-- Performing Test HAVE_THREAD_SAFETY_ATTRIBUTES
-- Performing Test HAVE_THREAD_SAFETY_ATTRIBUTES
-- Performing Test HAVE_THREAD_SAFETY_ATTRIBUTES -- failed to compile
-- Performing Test HAVE_CXX_FLAG_COVERAGE
-- Performing Test HAVE_CXX_FLAG_COVERAGE - Success
-- Performing Test HAVE_STD_REGEX
-- Performing Test HAVE_STD_REGEX
-- Performing Test HAVE_STD_REGEX -- success
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX -- success
-- Performing Test HAVE_STEADY_CLOCK
-- Performing Test HAVE_STEADY_CLOCK
-- Performing Test HAVE_STEADY_CLOCK -- success
CMake Warning at cmake/Dependencies.cmake:452 (message):
A compiler with AVX512 support is required for FBGEMM. Not compiling with
FBGEMM. Turn this warning off by USE_FBGEMM=OFF.
Call Stack (most recent call first):
CMakeLists.txt:362 (include)
-- Using third party subdirectory Eigen.
Python 3.7.3
-- Found PythonInterp: /usr/local/opt/python/bin/python3.7 (found suitable version "3.7.3", minimum required is "2.7")
-- Found PythonLibs: /usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/Python.framework/Versions/3.7/Python (found suitable version "3.7.3", minimum required is "2.7")
-- Could NOT find pybind11 (missing: pybind11_DIR)
-- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR)
-- Using third_party/pybind11.
-- Adding OpenMP CXX_FLAGS: -Xpreprocessor -fopenmp -I/usr/local/include
-- Will link against OpenMP libraries: /usr/local/lib/libiomp5.dylib
-- Found CUDA: /usr/local/cuda (found version "9.0")
-- Caffe2: CUDA detected: 9.0
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 9.0
-- Found CUDNN: /usr/local/cuda/lib/libcudnn.dylib
-- Found cuDNN: v7.0.4 (include: /usr/local/cuda/include, library: /usr/local/cuda/lib/libcudnn.dylib)
CMake Warning at cmake/public/utils.cmake:172 (message):
In the future we will require one to explicitly pass TORCH_CUDA_ARCH_LIST
to cmake instead of implicitly setting it as an env variable. This will
become a FATAL_ERROR in future version of pytorch.
Call Stack (most recent call first):
cmake/public/cuda.cmake:369 (torch_cuda_get_nvcc_gencode_flag)
cmake/Dependencies.cmake:828 (include)
CMakeLists.txt:362 (include)
-- Added CUDA NVCC flags for: -gencode;arch=compute_61,code=sm_61
-- Could NOT find CUB (missing: CUB_INCLUDE_DIR)
CMake Warning at cmake/Dependencies.cmake:1032 (message):
Metal is only used in ios builds.
Call Stack (most recent call first):
CMakeLists.txt:362 (include)
--
-- ******** Summary ********
-- CMake version : 3.14.5
-- CMake command : /usr/local/Cellar/cmake/3.14.5/bin/cmake
-- System : Darwin
-- C++ compiler : /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++
-- C++ compiler version : 8.0.0.8000042
-- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -Wno-deprecated-declarations -Xpreprocessor -fopenmp -I/usr/local/include -Wnon-virtual-dtor
-- Build type : Release
-- Compile definitions : TH_BLAS_MKL;ONNX_ML=1
-- CMAKE_PREFIX_PATH : /usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages;/usr/local/cuda
-- CMAKE_INSTALL_PREFIX : /Users/adrienbitton/Desktop/pytorch/torch
-- CMAKE_MODULE_PATH : /Users/adrienbitton/Desktop/pytorch/cmake/Modules;/Users/adrienbitton/Desktop/pytorch/cmake/public/../Modules_CUDA_fix
--
-- ONNX version : 1.5.0
-- ONNX NAMESPACE : onnx_torch
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNX_USE_LITE_PROTO : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
-- ONNXIFI_ENABLE_EXT : OFF
--
-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_ONNX_PYTHON : OFF
--
-- ******** Summary ********
-- CMake version : 3.14.5
-- CMake command : /usr/local/Cellar/cmake/3.14.5/bin/cmake
-- System : Darwin
-- C++ compiler : /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++
-- C++ compiler version : 8.0.0.8000042
-- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -Wno-deprecated-declarations -Xpreprocessor -fopenmp -I/usr/local/include -Wnon-virtual-dtor
-- Build type : Release
-- Compile definitions : TH_BLAS_MKL;ONNX_ML=1
-- CMAKE_PREFIX_PATH : /usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages;/usr/local/cuda
-- CMAKE_INSTALL_PREFIX : /Users/adrienbitton/Desktop/pytorch/torch
-- CMAKE_MODULE_PATH : /Users/adrienbitton/Desktop/pytorch/cmake/Modules;/Users/adrienbitton/Desktop/pytorch/cmake/public/../Modules_CUDA_fix
--
-- ONNX version : 1.4.1
-- ONNX NAMESPACE : onnx_torch
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNX_USE_LITE_PROTO : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
--
-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_ONNX_PYTHON : OFF
-- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor
-- Removing -DNDEBUG from compile flags
-- MAGMA not found. Compiling without MAGMA support
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- Looking for cpuid.h
-- Looking for cpuid.h - found
-- Performing Test HAVE_GCC_GET_CPUID
-- Performing Test HAVE_GCC_GET_CPUID - Success
-- Performing Test NO_GCC_EBX_FPIC_BUG
-- Performing Test NO_GCC_EBX_FPIC_BUG - Success
-- Performing Test C_HAS_AVX_1
-- Performing Test C_HAS_AVX_1 - Failed
-- Performing Test C_HAS_AVX_2
-- Performing Test C_HAS_AVX_2 - Success
-- Performing Test C_HAS_AVX2_1
-- Performing Test C_HAS_AVX2_1 - Failed
-- Performing Test C_HAS_AVX2_2
-- Performing Test C_HAS_AVX2_2 - Success
-- Performing Test CXX_HAS_AVX_1
-- Performing Test CXX_HAS_AVX_1 - Failed
-- Performing Test CXX_HAS_AVX_2
-- Performing Test CXX_HAS_AVX_2 - Success
-- Performing Test CXX_HAS_AVX2_1
-- Performing Test CXX_HAS_AVX2_1 - Failed
-- Performing Test CXX_HAS_AVX2_2
-- Performing Test CXX_HAS_AVX2_2 - Success
-- AVX compiler support found
-- AVX2 compiler support found
-- Performing Test BLAS_F2C_DOUBLE_WORKS
-- Performing Test BLAS_F2C_DOUBLE_WORKS - Failed
-- Performing Test BLAS_F2C_FLOAT_WORKS
-- Performing Test BLAS_F2C_FLOAT_WORKS - Success
-- Performing Test BLAS_USE_CBLAS_DOT
-- Performing Test BLAS_USE_CBLAS_DOT - Success
-- Found a library with BLAS API (mkl).
-- Found a library with LAPACK API (mkl).
disabling ROCM because NOT USE_ROCM is set
-- MIOpen not found. Compiling without MIOpen support
-- MKLDNN_THREADING = OMP:COMP
CMake Warning (dev) at third_party/ideep/mkl-dnn/cmake/options.cmake:33 (option):
Policy CMP0077 is not set: option() honors normal variables. Run "cmake
--help-policy CMP0077" for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
For compatibility with older versions of CMake, option is clearing the
normal variable 'MKLDNN_ENABLE_CONCURRENT_EXEC'.
Call Stack (most recent call first):
third_party/ideep/mkl-dnn/cmake/utils.cmake:24 (include)
third_party/ideep/mkl-dnn/CMakeLists.txt:74 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found OpenMP_C: -Xpreprocessor -fopenmp -I/usr/local/include (found version "4.0")
-- Found OpenMP_CXX: -Xpreprocessor -fopenmp -I/usr/local/include (found version "4.0")
-- Found OpenMP: TRUE (found version "4.0")
-- OpenMP lib: provided by compiler
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
-- VTune profiling environment is unset
-- Found MKL-DNN: TRUE
-- Looking for mmap
-- Looking for mmap - found
-- Looking for shm_open
-- Looking for shm_open - found
-- Looking for shm_unlink
-- Looking for shm_unlink - found
-- Looking for malloc_usable_size
-- Looking for malloc_usable_size - not found
-- Performing Test C_HAS_THREAD
-- Performing Test C_HAS_THREAD - Success
-- don't use NUMA
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Failed
-- Check size of long double
-- Check size of long double - done
-- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE
-- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE - Success
-- Performing Test COMPILER_SUPPORTS_FLOAT128
-- Performing Test COMPILER_SUPPORTS_FLOAT128 - Failed
-- Performing Test COMPILER_SUPPORTS_SSE2
-- Performing Test COMPILER_SUPPORTS_SSE2 - Success
-- Performing Test COMPILER_SUPPORTS_SSE4
-- Performing Test COMPILER_SUPPORTS_SSE4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX
-- Performing Test COMPILER_SUPPORTS_AVX - Success
-- Performing Test COMPILER_SUPPORTS_FMA4
-- Performing Test COMPILER_SUPPORTS_FMA4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX2
-- Performing Test COMPILER_SUPPORTS_AVX2 - Success
-- Performing Test COMPILER_SUPPORTS_SVE
-- Performing Test COMPILER_SUPPORTS_SVE - Failed
-- Performing Test COMPILER_SUPPORTS_AVX512F
-- Performing Test COMPILER_SUPPORTS_AVX512F - Failed
-- Performing Test COMPILER_SUPPORTS_OPENMP
-- Performing Test COMPILER_SUPPORTS_OPENMP - Failed
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Failed
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Success
-- Configuring build for SLEEF-v3.2
Target system: Darwin-16.7.0
Target processor: x86_64
Host system: Darwin-16.7.0
Host processor: x86_64
Detected C compiler: AppleClang @ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang
-- Using option `-Wall -Wno-unused -Wno-attributes -Wno-unused-result -ffp-contract=off -fno-math-errno -fno-trapping-math` to compile libsleef
-- Building shared libs : OFF
-- MPFR : LIB_MPFR-NOTFOUND
-- GMP : LIBGMP-NOTFOUND
-- RUNNING_ON_TRAVIS : 0
-- COMPILER_SUPPORTS_OPENMP :
-- NCCL operators skipped due to no CUDA support
-- Including IDEEP operators
-- Excluding image processing operators due to no opencv
-- Excluding video processing operators due to no opencv
-- MPI operators skipped due to no MPI support |
st80969 | I think I did at first (when following the readme again) but I am not sure …
I re-checked dependencies and re-cloned the source.
I will try again from scratch:
git submodule sync
git submodule update --init --recursive
LDFLAGS="-L/usr/local/opt/llvm/lib" CPPFLAGS="-I/usr/local/opt/llvm/include" TORCH_CUDA_ARCH_LIST=6.1 python3 setup.py install |
st80970 | Now that I have re-installed the dependencies and re-cloned the PyTorch source,
trying to compile from scratch leads to other errors …
It doesn't start showing the build percentage, but first goes with
Building CXX object caffe2/CMakeFiles
and it stops at [1970/3136].
I don't know why this new step is happening and why it is failing … any hint or known workaround? The only thing I found was setting BUILD_SHARED_LIBS=OFF or USE_NINJA=OFF; I am trying that now …
[1909/3136] Building CXX object caffe2/CMakeFiles/torch.dir/sgd/learning_rate_op.cc.o
In file included from ../caffe2/sgd/learning_rate_op.cc:1:
In file included from ../caffe2/sgd/learning_rate_op.h:6:
In file included from ../caffe2/core/context.h:9:
In file included from ../caffe2/core/allocator.h:3:
In file included from ../c10/core/CPUAllocator.h:7:
../c10/util/Logging.h:191:29: warning: comparison of integers of different signs: 'const unsigned long' and 'const int' [-Wsign-compare]
BINARY_COMP_HELPER(Greater, >)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~
../c10/util/Logging.h:184:11: note: expanded from macro 'BINARY_COMP_HELPER'
if (x op y) { \
~ ^ ~
../caffe2/sgd/learning_rate_op.h:143:7: note: in instantiation of function template specialization 'c10::enforce_detail::Greater<unsigned long, int>' requested here
CAFFE_ENFORCE_GT(
^
../c10/util/Logging.h:242:27: note: expanded from macro 'CAFFE_ENFORCE_GT'
CAFFE_ENFORCE_THAT_IMPL(Greater((x), (y)), #x " > " #y, __VA_ARGS__)
^
../caffe2/sgd/learning_rate_op.h:23:20: note: in instantiation of member function 'caffe2::LearningRateOp<float, caffe2::CPUContext>::createLearningRateFunctor' requested here
functor_.reset(createLearningRateFunctor(policy));
^
../c10/util/Registry.h:184:30: note: in instantiation of member function 'caffe2::LearningRateOp<float, caffe2::CPUContext>::LearningRateOp' requested here
return ObjectPtrType(new DerivedType(args...));
^
../caffe2/sgd/learning_rate_op.cc:4:1: note: in instantiation of function template specialization 'c10::Registerer<std::__1::basic_string<char>, std::__1::unique_ptr<caffe2::OperatorBase, std::__1::default_delete<caffe2::OperatorBase> >, const caffe2::OperatorDef &, caffe2::Workspace *>::DefaultCreator<caffe2::LearningRateOp<float, caffe2::CPUContext> >' requested here
REGISTER_CPU_OPERATOR(LearningRate, LearningRateOp<float, CPUContext>);
^
../caffe2/core/operator.h:1396:3: note: expanded from macro 'REGISTER_CPU_OPERATOR'
C10_REGISTER_CLASS(CPUOperatorRegistry, name, __VA_ARGS__)
^
../c10/util/Registry.h:279:3: note: expanded from macro 'C10_REGISTER_CLASS'
C10_REGISTER_TYPED_CLASS(RegistryName, #key, __VA_ARGS__)
^
../c10/util/Registry.h:238:33: note: expanded from macro 'C10_REGISTER_TYPED_CLASS'
Registerer##RegistryName::DefaultCreator<__VA_ARGS__>, \
^
In file included from ../caffe2/sgd/learning_rate_op.cc:1:
In file included from ../caffe2/sgd/learning_rate_op.h:8:
../caffe2/sgd/learning_rate_functors.h:279:11: warning: using integer absolute value function 'abs' when argument is of floating point type [-Wabsolute-value]
T x = abs(static_cast<T>(iter) / stepsize_ - 2 * cycle + 1);
^
../caffe2/sgd/learning_rate_functors.h:268:3: note: in instantiation of member function 'caffe2::CyclicalLearningRate<float>::operator()' requested here
CyclicalLearningRate(
^
../caffe2/sgd/learning_rate_op.h:179:18: note: in instantiation of member function 'caffe2::CyclicalLearningRate<float>::CyclicalLearningRate' requested here
return new CyclicalLearningRate<T>(base_lr_, max_lr, stepsize, decay);
^
../caffe2/sgd/learning_rate_op.h:23:20: note: in instantiation of member function 'caffe2::LearningRateOp<float, caffe2::CPUContext>::createLearningRateFunctor' requested here
functor_.reset(createLearningRateFunctor(policy));
^
../c10/util/Registry.h:184:30: note: in instantiation of member function 'caffe2::LearningRateOp<float, caffe2::CPUContext>::LearningRateOp' requested here
return ObjectPtrType(new DerivedType(args...));
^
../caffe2/sgd/learning_rate_op.cc:4:1: note: in instantiation of function template specialization 'c10::Registerer<std::__1::basic_string<char>, std::__1::unique_ptr<caffe2::OperatorBase, std::__1::default_delete<caffe2::OperatorBase> >, const caffe2::OperatorDef &, caffe2::Workspace *>::DefaultCreator<caffe2::LearningRateOp<float, caffe2::CPUContext> >' requested here
REGISTER_CPU_OPERATOR(LearningRate, LearningRateOp<float, CPUContext>);
^
../caffe2/core/operator.h:1396:3: note: expanded from macro 'REGISTER_CPU_OPERATOR'
C10_REGISTER_CLASS(CPUOperatorRegistry, name, __VA_ARGS__)
^
../c10/util/Registry.h:279:3: note: expanded from macro 'C10_REGISTER_CLASS'
C10_REGISTER_TYPED_CLASS(RegistryName, #key, __VA_ARGS__)
^
../c10/util/Registry.h:238:33: note: expanded from macro 'C10_REGISTER_TYPED_CLASS'
Registerer##RegistryName::DefaultCreator<__VA_ARGS__>, \
^
../caffe2/sgd/learning_rate_functors.h:279:11: note: use function 'std::abs' instead
T x = abs(static_cast<T>(iter) / stepsize_ - 2 * cycle + 1);
^~~
std::abs
../caffe2/sgd/learning_rate_functors.h:281:12: warning: using integer absolute value function 'abs' when argument is of floating point type [-Wabsolute-value]
(T(abs(max_lr_)) / T(abs(base_lr_)) - 1) * std::max(T(0.0), (1 - x)) *
^
../caffe2/sgd/learning_rate_functors.h:281:12: note: use function 'std::abs' instead
(T(abs(max_lr_)) / T(abs(base_lr_)) - 1) * std::max(T(0.0), (1 - x)) *
^~~
std::abs
../caffe2/sgd/learning_rate_functors.h:281:30: warning: using integer absolute value function 'abs' when argument is of floating point type [-Wabsolute-value]
(T(abs(max_lr_)) / T(abs(base_lr_)) - 1) * std::max(T(0.0), (1 - x)) *
^
../caffe2/sgd/learning_rate_functors.h:281:30: note: use function 'std::abs' instead
(T(abs(max_lr_)) / T(abs(base_lr_)) - 1) * std::max(T(0.0), (1 - x)) *
^~~
std::abs
4 warnings generated.
[1914/3136] Building CXX object caffe2/CMakeFiles/torch.dir/share/contrib/nnpack/conv_op.cc.o
../caffe2/share/contrib/nnpack/conv_op.cc:183:16: warning: unused variable 'output_channels' [-Wunused-variable]
const size_t output_channels = Y->dim32(1);
^
../caffe2/share/contrib/nnpack/conv_op.cc:181:16: warning: unused variable 'batch_size' [-Wunused-variable]
const size_t batch_size = X.dim32(0);
^
../caffe2/share/contrib/nnpack/conv_op.cc:155:13: warning: unused variable 'N' [-Wunused-variable]
const int N = X.dim32(0), C = X.dim32(1), H = X.dim32(2), W = X.dim32(3);
^
../caffe2/share/contrib/nnpack/conv_op.cc:182:16: warning: unused variable 'input_channels' [-Wunused-variable]
const size_t input_channels = X.dim32(1);
^
../caffe2/share/contrib/nnpack/conv_op.cc:175:27: warning: comparison of integers of different signs: 'size_type' (aka 'unsigned long') and 'const int' [-Wsign-compare]
if (dummyBias_.size() != M) {
~~~~~~~~~~~~~~~~~ ^ ~
In file included from ../caffe2/share/contrib/nnpack/conv_op.cc:7:
In file included from ../caffe2/core/context.h:9:
In file included from ../caffe2/core/allocator.h:3:
In file included from ../c10/core/CPUAllocator.h:7:
../c10/util/Logging.h:189:28: warning: comparison of integers of different signs: 'const unsigned long' and 'const int' [-Wsign-compare]
BINARY_COMP_HELPER(Equals, ==)
~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
../c10/util/Logging.h:184:11: note: expanded from macro 'BINARY_COMP_HELPER'
if (x op y) { \
~ ^ ~
../caffe2/share/contrib/nnpack/conv_op.cc:274:11: note: in instantiation of function template specialization 'c10::enforce_detail::Equals<unsigned long, int>' requested here
CAFFE_ENFORCE_EQ(transformedFilters_.size(), group_);
^
../c10/util/Logging.h:232:27: note: expanded from macro 'CAFFE_ENFORCE_EQ'
CAFFE_ENFORCE_THAT_IMPL(Equals((x), (y)), #x " == " #y, __VA_ARGS__)
^
6 warnings generated.
[1915/3136] Building CXX object caffe2/CMakeFiles/torch.dir/transforms/common_subexpression_elimination.cc.o
../caffe2/transforms/common_subexpression_elimination.cc:104:23: warning: comparison of integers of different signs: 'int' and 'size_type' (aka 'unsigned long') [-Wsign-compare]
for (int i = 0; i < subgraph.size(); i++) {
~ ^ ~~~~~~~~~~~~~~~
1 warning generated.
[1918/3136] Building CXX object caffe2/CMakeFiles/torch.dir/transforms/pattern_net_transform.cc.o
../caffe2/transforms/pattern_net_transform.cc:117:30: warning: comparison of integers of different signs: 'value_type' (aka 'int') and 'size_type' (aka 'unsigned long') [-Wsign-compare]
if (inverse_ops_[parent] < subgraph.size() &&
~~~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~
../caffe2/transforms/pattern_net_transform.cc:125:29: warning: comparison of integers of different signs: 'value_type' (aka 'int') and 'size_type' (aka 'unsigned long') [-Wsign-compare]
if (inverse_ops_[child] < subgraph.size() &&
~~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~
../caffe2/transforms/pattern_net_transform.cc:153:21: warning: comparison of integers of different signs: 'int' and 'size_type' (aka 'unsigned long') [-Wsign-compare]
for (int i = 0; i < match.size(); i++) {
~ ^ ~~~~~~~~~~~~
../caffe2/transforms/pattern_net_transform.cc:182:21: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
for (int i = 0; i < r_.size(); i++) {
~ ^ ~~~~~~~~~
4 warnings generated.
[1920/3136] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/autograd/generated/Functions.cpp.o
../torch/csrc/autograd/generated/Functions.cpp:401:8: warning: unused function 'cumprod_backward' [-Wunused-function]
Tensor cumprod_backward(const Tensor &grad, const Tensor &input, int64_t dim, optional<ScalarType> dtype) {
^
1 warning generated.
[1941/3136] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/autograd/profiler.cpp.o
../torch/csrc/autograd/profiler.cpp:72:26: warning: comparison of integers of different signs: 'int' and 'size_type' (aka 'unsigned long') [-Wsign-compare]
for(int i = 0; i < shapes.size(); i++) {
~ ^ ~~~~~~~~~~~~~
../torch/csrc/autograd/profiler.cpp:75:34: warning: comparison of integers of different signs: 'int' and 'size_type' (aka 'unsigned long') [-Wsign-compare]
for(int dim = 0; dim < shapes[i].size(); dim++) {
~~~ ^ ~~~~~~~~~~~~~~~~
../torch/csrc/autograd/profiler.cpp:77:22: warning: comparison of integers of different signs: 'int' and 'unsigned long' [-Wsign-compare]
if(dim < shapes[i].size() - 1)
~~~ ^ ~~~~~~~~~~~~~~~~~~~~
../torch/csrc/autograd/profiler.cpp:84:16: warning: comparison of integers of different signs: 'int' and 'unsigned long' [-Wsign-compare]
if(i < shapes.size() - 1)
~ ^ ~~~~~~~~~~~~~~~~~
4 warnings generated.
[1958/3136] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/interpreter.cpp.o
../torch/csrc/jit/interpreter.cpp:292:23: warning: moving a temporary object prevents copy elision [-Wpessimizing-move]
can_emit_inline = std::move(CanEmitInline(graph).can_emit_inline_);
^
../torch/csrc/jit/interpreter.cpp:292:23: note: remove std::move call here
can_emit_inline = std::move(CanEmitInline(graph).can_emit_inline_);
^~~~~~~~~~ ~
1 warning generated.
[1967/3136] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/alias_analysis.cpp.o
FAILED: caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/alias_analysis.cpp.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ -DAT_PARALLEL_OPENMP=1 -DCAFFE2_BUILD_MAIN_LIB -DCPUINFO_SUPPORTED_PLATFORM=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DIDEEP_USE_MKL -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_CUDA -D_FILE_OFFSET_BITS=64 -D_THP_CORE -Dtorch_EXPORTS -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/benchmark/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../caffe2/../torch/csrc/api -I../caffe2/../torch/csrc/api/include -I../caffe2/aten/src/TH -Icaffe2/aten/src/TH -I../caffe2/../torch/../aten/src -Icaffe2/aten/src -Icaffe2/../aten/src -Icaffe2/../aten/src/ATen -I../caffe2/../torch/csrc -I../caffe2/../torch/../third_party/miniz-2.0.8 -I../aten/src/TH -I../aten/../third_party/catch/single_include -I../aten/src/ATen/.. -Icaffe2/aten/src/ATen -I../third_party/miniz-2.0.8 -I../caffe2/core/nomnigraph/include -Icaffe2/aten/src/THC -I../aten/src/THC -I../aten/src/THCUNN -I../aten/src/ATen/cuda -I../c10/.. -Ithird_party/ideep/mkl-dnn/include -I../third_party/ideep/mkl-dnn/src/../include -I../third_party/QNNPACK/include -I../third_party/pthreadpool/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/src -I../third_party/QNNPACK/deps/clog/include -I../third_party/NNPACK/include -I../third_party/cpuinfo/include -I../third_party/FP16/include -I../c10/cuda/../.. -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/protobuf/src -isystem /usr/local/include -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -isystem ../cmake/../third_party/eigen -isystem /usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/include/python3.7m -isystem /usr/local/lib/python3.7/site-packages/numpy/core/include -isystem ../cmake/../third_party/pybind11/include -isystem /opt/rocm/hip/include -isystem /include -isystem ../cmake/../third_party/cub -isystem /usr/local/cuda/include -isystem ../third_party/ideep/mkl-dnn/include -isystem ../third_party/ideep/include -isystem include -Wno-deprecated -fvisibility-inlines-hidden -Wno-deprecated-declarations -Xpreprocessor -fopenmp -I/usr/local/include -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-typedef-redefinition -Wno-unknown-warning-option -Wno-unused-private-field -Wno-inconsistent-missing-override -Wno-aligned-allocation-unavailable -Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces -Qunused-arguments -fcolor-diagnostics -fno-math-errno -fno-trapping-math -Wno-unused-private-field -Wno-missing-braces -Wno-c++14-extensions -Wno-constexpr-not-const -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk -mmacosx-version-min=10.9 -fPIC -DCUDA_HAS_FP16=1 -DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -Wall -Wextra 
-Wno-unused-parameter -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-missing-braces -fvisibility=hidden -DCAFFE2_BUILD_MAIN_LIB -O2 -std=gnu++11 -MD -MT caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/alias_analysis.cpp.o -MF caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/alias_analysis.cpp.o.d -o caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/alias_analysis.cpp.o -c ../torch/csrc/jit/passes/alias_analysis.cpp
In file included from ../torch/csrc/jit/passes/alias_analysis.cpp:1:
In file included from ../torch/csrc/jit/passes/alias_analysis.h:3:
../c10/util/flat_hash_map.h:1367:24: error: no member named 'out_of_range' in namespace 'std'
throw std::out_of_range("Argument passed to at() was not in the map.");
~~~~~^
../c10/util/flat_hash_map.h:1374:24: error: no member named 'out_of_range' in namespace 'std'
throw std::out_of_range("Argument passed to at() was not in the map.");
~~~~~^
2 errors generated.
[1970/3136] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/batch_mm.cpp.o
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "setup.py", line 756, in <module>
build_deps()
File "setup.py", line 321, in build_deps
cmake=cmake)
File "/Users/adrienbitton/Desktop/pytorch/tools/build_pytorch_libs.py", line 63, in build_caffe2
cmake.build(my_env)
File "/Users/adrienbitton/Desktop/pytorch/tools/setup_helpers/cmake.py", line 331, in build
self.run(build_args, my_env)
File "/Users/adrienbitton/Desktop/pytorch/tools/setup_helpers/cmake.py", line 142, in run
check_call(command, cwd=self.build_dir, env=env)
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py", line 347, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '4']' returned non-zero exit status 1. |
st80971 | Setting BUILD_SHARED_LIBS=OFF breaks before 0% (similar to before, when it stopped at [1970/3136], but the total build size is smaller).
Setting USE_NINJA=OFF breaks at 78% (it was 75% before) with the following:
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/autograd/saved_variable.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/autograd/variable.cpp.o
4 warnings generated.
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/autodiff.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/attributes.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/argument_spec.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/pass_manager.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/pickler.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/graph_executor.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/import_source.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/import.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/pickle.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/import_export_helpers.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/instruction.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/interpreter.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/constants.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/node_hashing.cpp.o
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/ir.cpp.o
/Users/adrienbitton/Desktop/pytorch/torch/csrc/jit/interpreter.cpp:292:23: warning: moving a temporary object prevents copy elision [-Wpessimizing-move]
can_emit_inline = std::move(CanEmitInline(graph).can_emit_inline_);
^
/Users/adrienbitton/Desktop/pytorch/torch/csrc/jit/interpreter.cpp:292:23: note: remove std::move call here
can_emit_inline = std::move(CanEmitInline(graph).can_emit_inline_);
^~~~~~~~~~ ~
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/irparser.cpp.o
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/jit_log.cpp.o
1 warning generated.
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/operator.cpp.o
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/register_c10_ops.cpp.o
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/subgraph_matcher.cpp.o
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/symbolic_script.cpp.o
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/profiling_record.cpp.o
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/profiling_graph_executor_impl.cpp.o
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/alias_analysis.cpp.o
In file included from /Users/adrienbitton/Desktop/pytorch/torch/csrc/jit/passes/alias_analysis.cpp:1:
In file included from /Users/adrienbitton/Desktop/pytorch/torch/csrc/jit/passes/alias_analysis.h:3:
/Users/adrienbitton/Desktop/pytorch/c10/util/flat_hash_map.h:1367:24: error: no member named 'out_of_range' in namespace 'std'
throw std::out_of_range("Argument passed to at() was not in the map.");
~~~~~^
/Users/adrienbitton/Desktop/pytorch/c10/util/flat_hash_map.h:1374:24: error: no member named 'out_of_range' in namespace 'std'
throw std::out_of_range("Argument passed to at() was not in the map.");
~~~~~^
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/batch_mm.cpp.o
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/bailout_graph.cpp.o
2 errors generated.
make[2]: *** [caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/alias_analysis.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [caffe2/CMakeFiles/torch.dir/all] Error 2
make: *** [all] Error 2
Traceback (most recent call last):
File "setup.py", line 756, in <module>
build_deps()
File "setup.py", line 321, in build_deps
cmake=cmake)
File "/Users/adrienbitton/Desktop/pytorch/tools/build_pytorch_libs.py", line 63, in build_caffe2
cmake.build(my_env)
File "/Users/adrienbitton/Desktop/pytorch/tools/setup_helpers/cmake.py", line 331, in build
self.run(build_args, my_env)
File "/Users/adrienbitton/Desktop/pytorch/tools/setup_helpers/cmake.py", line 142, in run
check_call(command, cwd=self.build_dir, env=env)
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py", line 347, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '4']' returned non-zero exit status 2. |
st80972 | I see a very strange scenario for one parameter: its .grad is zero, while its value changes. What's wrong? Thanks. |
st80973 | Hi,
There are a few threads in this forum about this.
Things like L2 regularization or the momentum terms in Adam will change the parameters even when the gradients are 0. |
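(Not from the original thread: a minimal sketch illustrating the point above. With weight decay enabled, an Adam step still moves a parameter whose gradient is exactly zero.)
import torch
import torch.nn as nn
import torch.optim as optim

p = nn.Parameter(torch.ones(3))
opt = optim.Adam([p], lr=0.1, weight_decay=0.01)
p.grad = torch.zeros_like(p)             # gradient is exactly zero
before = p.detach().clone()
opt.step()
print(torch.equal(before, p.detach()))   # False: weight decay still changed the parameter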
st80974 | Hi albanD, is there any way to avoid changing the parameters and the relevant statistics in such cases, when the gradient is zero? |
st80975 | That depends on your optimizer really. Changing these parameters and statistics is the “right thing to do” if you really want to apply the optimizer.
If you want to freeze some weights manually, the right way to do it is not to give them to the optimizer, or you can try setting weight.grad = None. |
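(A hedged sketch of the two options mentioned above, using a made-up nn.Linear model; not code from the thread.)
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 2)

# Option 1: only hand the parameters you want to train to the optimizer
opt = optim.SGD([model.weight], lr=0.1)   # model.bias is never updated

# Option 2: keep everything in the optimizer, but clear the grads you want frozen
opt_all = optim.SGD(model.parameters(), lr=0.1)
# ... after loss.backward() and before opt_all.step():
model.bias.grad = None                    # optimizers skip parameters whose .grad is None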
st80976 | OS: Ubuntu 18.04
GPU: NVIDIA RTX 2060, NVIDIA-SMI 430.26, Driver Version: 430.26, CUDA Version: 10.2
Laptop: Lenovo Legion Y540
I set up my GPU for PyTorch based on this post: https://forums.fast.ai/t/successful-ubuntu-18-04-with-igpu-for-xserver-and-nvidia-gpu-for-cuda-work-setup/20128/9
I have a hybrid setup: the Intel integrated GPU drives the display and the NVIDIA GPU is used for deep learning. Since CUDA comes bundled with PyTorch, I haven't installed it separately. I installed PyTorch using: conda install pytorch torchvision cudatoolkit=10.0 -c pytorch.
$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation Device 3e9b
01:00.0 VGA compatible controller: NVIDIA Corporation Device 1f11 (rev a1)
torch.cuda.is_available() returns True only for a short while after I boot my machine. After that, it returns False.
I don't understand what the issue is. CUDA seems to be available only for a short window of time after I boot my machine. If I miss that window, I need to reboot to be able to use the GPU in PyTorch. Please help! |
st80977 | Could you check some power settings and make sure the GPU isn’t deactivated to save battery power? |
st80978 | I did, and I did not find any such setting. In the NVIDIA X Server application, the PRIME profile is set to ‘Intel (power save)’, but that is because I'm using the Intel card for the display.
I will experiment some more and see in what cases this happens. I will update this thread at a later time. |
st80979 | I've written a custom network, but somehow it does not learn; the gradients are always zero.
[ ] network: sigunet
class sigunet(nn.Module):
    def __init__(self, model, concate, m, n, kernel_size, pool_size, threshold, device, sequence_length=96):
        super(sigunet, self).__init__()
        self.model = model
        self.concate = concate
        self.m = m
        self.n = n
        self.kernel_size = kernel_size
        self.pool_size = pool_size
        self.threshold = threshold
        self.device = device
        self.loss_function = nn.CrossEntropyLoss(ignore_index=IGNORE_INDEX)
        self.sequence_length = sequence_length
        pass1_len = sequence_length
        pass2_len = self.pool_len(pass1_len, 2, 2)
        pass3_len = self.pool_len(pass2_len, 2, 2)
        self.level_1 = [conv1d(concate.output_size, m, kernel_size).cuda(), conv1d(m, m, kernel_size).cuda(), avg_pool(2)]
        self.level_2 = [conv1d(m, (m + n), kernel_size).cuda(), conv1d((m + n), (m + n), kernel_size).cuda(), avg_pool(2)]
        self.level_3 = [conv1d((m + n), (m + 2 * n), kernel_size).cuda(), conv1d((m + 2 * n), (m + 2 * n), kernel_size).cuda(),
                        avg_pool(2)]
        self.delevel_1 = [conv1d((m + 2 * n), (m + 3 * n), kernel_size).cuda(), conv1d((m + 3 * n), (m + 3 * n), kernel_size).cuda(),
                          deconv1d((m + 3 * n), (m + 2 * n), pass3_len, kernel_size, 2).cuda()]
        self.delevel_2 = [conv1d((2 * m + 4 * n), (m + 2 * n), kernel_size).cuda(), conv1d((m + 2 * n), (m + 2 * n), kernel_size).cuda(),
                          deconv1d((m + 2 * n), (m + n), pass2_len, kernel_size, 2).cuda()]
        self.delevel_3 = [conv1d((2 * m + 2 * n), (m + n), kernel_size).cuda(), conv1d((m + n), (m + n), kernel_size).cuda(),
                          deconv1d((m + n), m, pass1_len, kernel_size, 2).cuda()]
        self.finals = [conv1d((2 * m), m, kernel_size).cuda(), conv1d(m, 3, kernel_size, nn.Softmax(dim=1)).cuda()]

    def forward(self, inputs, targets):
        outputs = self.model(inputs)
        indexed_sequences, _ = inputs
        onehot = index2onehot(dim=self.concate.onehot_size, indexed_sequences=indexed_sequences, device=self.device)
        # the front two outputs are going to be ignored
        # encoded_sources: (batch_size, seq_len, embed_size)
        mlm_outputs, nsp_outputs, encoded_sources = outputs
        # Permute the axes to adapt to nn.Conv1d
        # encoded_sources: (batch_size, embed_size, seq_len)
        # https://discuss.pytorch.org/t/swap-axes-in-pytorch/970/2
        sigunet_input_ = self.concate((encoded_sources, onehot))
        sigunet_input = sigunet_input_.transpose(2, 1)
        out = self.level_1[0](sigunet_input)
        pass1 = self.level_1[1](out)
        out = self.level_1[2](pass1)
        out = self.level_2[0](out)
        pass2 = self.level_2[1](out)
        out = self.level_2[2](pass2)
        out = self.level_3[0](out)
        pass3 = self.level_3[1](out)
        out = self.level_3[2](pass3)
        out = self.delevel_1[0](out)
        out = self.delevel_1[1](out)
        out = self.delevel_1[2](out)
        out = torch.cat([out, pass3], dim=1)
        out = self.delevel_2[0](out)
        out = self.delevel_2[1](out)
        out = self.delevel_2[2](out)
        out = torch.cat([out, pass2], dim=1)
        out = self.delevel_3[0](out)
        out = self.delevel_3[1](out)
        out = self.delevel_3[2](out)
        out = torch.cat([out, pass1], dim=1)
        out = self.finals[0](out)
        out = self.finals[1](out)
        _out = out.transpose(2, 1)
        # Make it (batch_size, length, channels)
        #trans_out = out.transpose(2, 1)
        # erroneous
        #out, _ = torch.max(out, 2)
        predictions = self.pass_threshold(_out)
        loss = self.loss_function(_out.reshape(-1, 3), targets.reshape(-1))
        # predictions = self.detect_SignalPeptides(_out)
        return predictions, loss.unsqueeze(dim=0)
[ ] trainer
...
for param in self.loss_model.parameters():
    print(param.grad.data.sum())
...
The self.loss_model mentioned above is sigunet,
and the grads are all zeros.
I assume it could be some operations that are not back-propagable. |
st80980 | Could you try to use nn.ModuleList instead of Python lists to store your layers?
This will make sure all parameters are returned when calling model.parameters() and passing them to the optimizer.
However, this would not explain the gradients to be all zero, but might be a starter in debugging this issue. |
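(A minimal sketch of the suggested change, with made-up layer sizes; not code from the thread.)
import torch.nn as nn

class Block(nn.Module):
    def __init__(self):
        super(Block, self).__init__()
        # a plain Python list would NOT register these layers as submodules:
        # self.level_1 = [nn.Conv1d(8, 16, 3), nn.Conv1d(16, 16, 3)]
        # nn.ModuleList registers them, so .parameters(), .cuda(), .to() all see them
        self.level_1 = nn.ModuleList([nn.Conv1d(8, 16, 3), nn.Conv1d(16, 16, 3)])

    def forward(self, x):
        for layer in self.level_1:
            x = layer(x)
        return x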
st80981 | I think I've found the problem.
According to: Why "loss.backward()" didn't update parameters' gradient?
Adding a batch normalization layer could solve this. |
st80982 | Hi, I implemented a network which uses torch.cat() to concatenate the input. When I train this network, I found it is too slow. Is torch.cat inefficient, and are there other ways to do the same thing?
The following is the code, thank you
class AdaptationNN(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(AdaptationNN, self).__init__()
        self.fc1 = nn.Linear(input_dim, 1000, bias=False)
        self.fc1.weight = torch.nn.Parameter(torch.Tensor(1000, input_dim).uniform_(
            -np.sqrt(0.01 / (input_dim + 1000)), np.sqrt(0.01 / (input_dim + 1000))))
        self.fc1.bias = torch.nn.Parameter(torch.zeros(1000))
        self.bn1 = nn.BatchNorm1d(1000, momentum=0.05)
        self.act1 = nn.ReLU()
        # the size of the speaker code is 50
        self.fc2 = nn.Linear(1050, 1000, bias=False)
        self.fc2.weight = torch.nn.Parameter(torch.Tensor(1000, 1050).uniform_(
            -np.sqrt(0.01 / (1000 + 1050)), np.sqrt(0.01 / (1000 + 1050))))
        self.fc2.bias = torch.nn.Parameter(torch.zeros(1000))
        self.bn2 = nn.BatchNorm1d(1000, momentum=0.05)
        self.act2 = nn.ReLU()
        self.fc3 = nn.Linear(1050, output_dim, bias=True)
        self.fc3.weight = torch.nn.Parameter(torch.Tensor(output_dim, 1050).uniform_(
            -np.sqrt(0.01 / (1050 + output_dim)), np.sqrt(0.01 / (1050 + output_dim))))
        self.fc3.bias = torch.nn.Parameter(torch.zeros(output_dim))

    def forward(self, speaker_codes, input):
        x = self.act1(self.bn1(self.fc1(torch.cat((speaker_codes, input), 1))))
        x = self.act2(self.bn2(self.fc2(torch.cat((speaker_codes, x), 1))))
        x = self.fc3(torch.cat((speaker_codes, x), 1))
        return x |
st80983 | The following issue indicates that cat is slow on CPU: https://github.com/pytorch/pytorch/issues/18634
I found that a custom cat function is slightly faster on CPU:
def customCat(allTensors):
    totalSize = sum([t.shape[0] for t in allTensors])
    newTensor = torch.zeros(totalSize, allTensors[0].shape[1], allTensors[0].shape[2])
    counter = 0
    for t in allTensors:
        newTensor[counter:counter + t.shape[0]] = t
        counter += t.shape[0]
    return newTensor
It definitely isn’t a generic replacement for cat, and I haven’t tested this code.
You also have the option of doing concatenations in the DataLoader via a user-defined collate_fn. DataLoader will be able to parallelize the cat operations across multiple batches. It may hide any latency you see. |
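(A hedged sketch of the collate_fn route, with made-up tensor shapes; not code from the thread. The stacking then happens inside the DataLoader worker processes rather than in the forward pass.)
import torch
from torch.utils.data import DataLoader, TensorDataset

def my_collate(batch):
    # batch is a list of (features, target) tuples; combine them once per batch
    feats = torch.stack([item[0] for item in batch], dim=0)
    targets = torch.stack([item[1] for item in batch], dim=0)
    return feats, targets

dataset = TensorDataset(torch.randn(100, 5), torch.randn(100, 1))
loader = DataLoader(dataset, batch_size=8, num_workers=2, collate_fn=my_collate)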
st80984 | The chunks input is a list of nn.Sequential networks from a model I have divided up to run on multiple GPUs/CPUs. The device_list input is a list of devices, like for example ['cuda:0', 'cuda:1', 'cuda:2', 'cuda:3'], or ['cpu', 'cuda:0', 'cuda:1']. Both lists will have the same number of values.
Unfortunately I don’t have multiple GPUs, and online services can be rather expensive for multiple GPUs. So, I want to make sure that I have things right before I try to run the code. I created the following class to run a single model with a batch size of 1 across multiple devices. Basically each device is supposed to run part of a model before passing the output to the next device.
I put the chunks onto their devices:
for i, chunk in enumerate(chunks):
    chunk.to(device_list[i])
And then I pass them to my class:
class ModelParallelModel(nn.Sequential):
    def __init__(self, chunks, device_list):
        super(ModelParallelModel, self)
        self.chunks = chunks
        self.device_list = device_list
        print(str(len(self.chunks)))
        print(self.device_list)

    def forward(self, input):
        for i, chunk in enumerate(chunks):
            if i < len(chunks) - 1:
                input = chunk(input.to(device_list[i])).to(device_list[i+1])
            else:
                input = chunk(input.to(device_list[i]))
        return input
These lines of code are where I create the model and then try to run:
net = ModelParallelModel(chunks, device_list)
net(test_image)
Though I get this error:
4
['cuda:0', 'cuda:1', 'cuda:2', 'cuda:3']
Traceback (most recent call last):
File "test.py", line 465, in <module>
main()
File "test.py", line 164, in main
net(test_image)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 538, in __call__
for hook in self._forward_pre_hooks.values():
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 591, in __getattr__
type(self).__name__, name))
AttributeError: 'ModelParallelModel' object has no attribute '_forward_pre_hooks'
How do I fix this error? I can’t seem to find anything about what this error means, or how to fix it anywhere. |
st80985 | Solved by ptrblck in post #3
You are missing the __init__ call in super:
# yours
super(ModelParallelModel, self)
# fix to
super(ModelParallelModel, self).__init__()
However, I’m currently not sure, why you are deriving from nn.Sequential instead of nn.Module, as you are not using any attributes from nn.Sequential. |
st80986 | I was trying to follow this guide: https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html |
st80987 | You are missing the __init__ call in super:
# yours
super(ModelParallelModel, self)
# fix to
super(ModelParallelModel, self).__init__()
However, I’m currently not sure, why you are deriving from nn.Sequential instead of nn.Module, as you are not using any attributes from nn.Sequential. |
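(Putting both suggestions together, a hedged sketch of a corrected class, assuming the chunks were already moved to their devices as in the question; not code from the thread.)
import torch.nn as nn

class ModelParallelModel(nn.Module):
    def __init__(self, chunks, device_list):
        super(ModelParallelModel, self).__init__()
        self.chunks = nn.ModuleList(chunks)   # register the chunks as submodules
        self.device_list = device_list

    def forward(self, input):
        for i, chunk in enumerate(self.chunks):
            input = chunk(input.to(self.device_list[i]))  # move activations to each chunk's device
        return input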
st80988 | @ptrblck Thanks for the reply, I was able to get my code running on multiple GPUs!
I pull loss values from multiple layers (that can be on different devices) and add them together before running backward(). Currently, I have to use .to(device) to bring all these loss values to a single device to run backward. It looks a bit ugly and it's redundant when I'm using a single GPU/CPU, but I'm not sure if that's something I could change? DataParallel is for duplicating a model across different GPUs and CPUs, but could I use it only on the loss values so that I don't have to use .to(device)? If I can use it, would it slow down my code?
I’m also having an issue where GPU:0 always shows at least some usage. In the few tests I did with the same inputs and changing what layers went on which GPUs, I also had at least 850MiB of usage on GPU:0, even when it shouldn’t have been used at all.
Currently I make sure that my inputs and model are CUDA with the following code:
dtype = torch.cuda.FloatTensor
input_image.type(dtype)
model = model.cuda()
Do I need to do something else in order to prevent GPU:0 from always having extra usage? |
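(For reference, a minimal sketch of the loss-gathering pattern described above; the device names and loss values are made up, and in practice the tensors would come out of the model with requires_grad=True.)
import torch

losses = [torch.tensor(0.5, device='cuda:0'), torch.tensor(0.3, device='cuda:1')]
total = sum(l.to('cuda:0') for l in losses)   # gather the partial losses on one device
# total.backward()                            # gradients would flow back to each originating device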
st80989 | I am working on a toy dataset to play with. I am trying to calculate the loss via BCEWithLogitsLoss(), but the loss is decreasing very slowly, and the prediction given by the neural network is also not correct.
model = nn.Linear(1, 1)
input_tensor = th.tensor([[5.8], [6.0], [5.5], [4.5], [4.1], [3.5]], requires_grad=True)
target_tensor = th.tensor([[1.0], [1], [1], [0], [0], [0]], requires_grad=False)
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
for i in range(500):
    optimizer.zero_grad()
    predict_tensor = model(input_tensor)
    loss = criterion(predict_tensor, target_tensor)
    loss.backward()
    optimizer.step()
    print(loss.data)
Loss values:
tensor(1.7171)
tensor(1.3382)
tensor(1.0275)
tensor(0.8316)
tensor(0.7524)
tensor(0.7327)
tensor(0.7287)
tensor(0.7274)
tensor(0.7265)
tensor(0.7257)
tensor(0.7250)
tensor(0.7242)
tensor(0.7234)
tensor(0.7226)
tensor(0.7219)
tensor(0.7211)
tensor(0.7203)
tensor(0.7195)
tensor(0.7188)
tensor(0.7180)
tensor(0.7172)
tensor(0.7165)
tensor(0.7157)
tensor(0.7149)
tensor(0.7142)
tensor(0.7134)
tensor(0.7127)
tensor(0.7119)
tensor(0.7111)
tensor(0.7104)
tensor(0.7096)
tensor(0.7089)
tensor(0.7081)
tensor(0.7074)
tensor(0.7066)
tensor(0.7059)
tensor(0.7051)
tensor(0.7044)
tensor(0.7036)
tensor(0.7029)
tensor(0.7022)
tensor(0.7014)
tensor(0.7007)
tensor(0.6999)
tensor(0.6992)
tensor(0.6985)
tensor(0.6977)
tensor(0.6970)
tensor(0.6963)
tensor(0.6955)
tensor(0.6948)
tensor(0.6941)
tensor(0.6933)
tensor(0.6926)
tensor(0.6919)
tensor(0.6912)
tensor(0.6904)
tensor(0.6897)
tensor(0.6890)
tensor(0.6883)
tensor(0.6875)
tensor(0.6868)
tensor(0.6861)
tensor(0.6854)
tensor(0.6847)
tensor(0.6840)
tensor(0.6833)
tensor(0.6825)
tensor(0.6818)
tensor(0.6811)
tensor(0.6804)
tensor(0.6797)
tensor(0.6790)
tensor(0.6783)
tensor(0.6776)
tensor(0.6769)
tensor(0.6762)
tensor(0.6755)
tensor(0.6748)
tensor(0.6741)
tensor(0.6734)
tensor(0.6727)
tensor(0.6720)
tensor(0.6713)
tensor(0.6706)
tensor(0.6699)
tensor(0.6693)
tensor(0.6686)
tensor(0.6679)
tensor(0.6672)
tensor(0.6665)
tensor(0.6658)
tensor(0.6651)
tensor(0.6645)
tensor(0.6638)
tensor(0.6631)
tensor(0.6624)
tensor(0.6617)
tensor(0.6611)
tensor(0.6604)
tensor(0.6597)
tensor(0.6591)
tensor(0.6584)
tensor(0.6577)
tensor(0.6570)
tensor(0.6564)
tensor(0.6557)
tensor(0.6550)
tensor(0.6544)
tensor(0.6537)
tensor(0.6530)
tensor(0.6524)
tensor(0.6517)
tensor(0.6511)
tensor(0.6504)
tensor(0.6497)
...
tensor(0.4620)
tensor(0.4616)
tensor(0.4613)
print(model(th.tensor([80.5]))) gives tensor([139.4498], grad_fn=<AddBackward0>)
My model is giving logits as outputs and I want it to give me probabilities but if I add an activation function at the end, BCEWithLogitsLoss() would mess up because it expects logits as inputs.
For troubleshooting, here is a Google Colab notebook: https://colab.research.google.com/drive/1WjCcSv5nVXf-zD1mCEl17h5jp7V2Pooz |
st80990 | Solved by KFrank in post #2
Hi Mnauf!
I suspect that you are misunderstanding how to interpret the
predictions made by this network.
First, you are using, as you say, BCEWithLogitsLoss. Therefore you
are training your predictions to be “logits.” These are “raw scores,”
if you will, that are real numbers ranging from -i… |
st80991 | Hi Mnauf!
mnauf:
I am working on a toy dataset to play with. I am trying to calculate loss via BCEWithLogitsLoss(), but loss is decreasing very slowly. And prediction giving by Neural network also is not correct.
model = nn.Linear(1,1)
input_tensor = th.tensor([[5.8],[6.0],[5.5],[4.5],[4.1],[3.5]],requires_grad=True)
target_tensor = th.tensor([[1.0],[1],[1],[0],[0],[0]],requires_grad=False)
criterion = nn.BCEWithLogitsLoss()
...
Loss values:
tensor(1.7171)
tensor(1.3382)
...
tensor(0.4616)
tensor(0.4613)
print(model(th.tensor([80.5]))) gives tensor([139.4498], grad_fn=<AddBackward0>)
I suspect that you are misunderstanding how to interpret the
predictions made by this network.
First, you are using, as you say, BCEWithLogitsLoss. Therefore you
are training your predictions to be “logits.” These are “raw scores,”
if you will, that are real numbers ranging from -infinity to +infinity.
Values less than 0 predict class “0” and values greater than 0
predict class “1”.
(When pumped though a sigmoid function, they become predicted
probabilities of the sample in question being in the “1” class. We
generally convert that to a non-probabilistic prediction by saying
P < 0.5 --> class “0”, and P > 0.5 --> class “1”.)
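For reference, a minimal sketch of that conversion (names are illustrative, not from the original post):
probs = torch.sigmoid(model(input_tensor))   # logits -> probability of class "1"
preds = (probs > 0.5).long()                 # 1 if P > 0.5, else 0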
Second, your model is a simple (one-dimensional) linear function.
Therefore it can’t cluster predictions together – it can only get the
boundary between class “0” and class “1” right. (Because of this,
you will not ever be able to drive your loss to zero, even if your
prediction accuracy is perfect.) From your six data points that
boundary is somewhere around 5.0.
Anyway, your model works for me.
Note, I’ve run the below test using pytorch version 0.3.0, so I had
to tweak your code a little bit.
Here is my complete script:
import torch
print (torch.__version__)
torch.manual_seed (2019)
model = torch.nn.Linear (1, 1)
print ('model.weight =', model.weight.data[0][0])
print ('model.bias =', model.bias.data[0])
input_tensor = torch.autograd.Variable (torch.Tensor([[5.8],[6.0],[5.5],[4.5],[4.1],[3.5]]))
target_tensor = torch.autograd.Variable (torch.Tensor([[1.0],[1],[1],[0],[0],[0]]))
criterion = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr =0.1)
for i in range (500):
    optimizer.zero_grad()
    predict_tensor = model(input_tensor)
    loss = criterion(predict_tensor,target_tensor)
    loss.backward()
    optimizer.step()
    if (i + 1) % 100 == 0:
        print (loss.data[0])
        pd = predict_tensor.data
        print (pd[0][0], pd[1][0], pd[2][0], pd[3][0], pd[4][0], pd[5][0])
print ('model.weight =', model.weight.data[0][0])
print ('model.bias =', model.bias.data[0])
And here is the complete output:
>>> import torch
>>> print (torch.__version__)
0.3.0b0+591e73e
>>>
>>> torch.manual_seed (2019)
<torch._C.Generator object at 0x000001FFD89F5630>
>>>
>>> model = torch.nn.Linear (1, 1)
>>> print ('model.weight =', model.weight.data[0][0])
model.weight = -0.41710567474365234
>>> print ('model.bias =', model.bias.data[0])
model.bias = 0.1750929355621338
>>>
>>> input_tensor = torch.autograd.Variable (torch.Tensor([[5.8],[6.0],[5.5],[4.5],[4.1],[3.5]]))
>>> target_tensor = torch.autograd.Variable (torch.Tensor([[1.0],[1],[1],[0],[0],[0]]))
>>> criterion = torch.nn.BCEWithLogitsLoss()
>>> optimizer = torch.optim.SGD(model.parameters(), lr =0.1)
>>>
>>> for i in range (500):
... optimizer.zero_grad()
... predict_tensor = model(input_tensor)
... loss = criterion(predict_tensor,target_tensor)
... loss.backward()
... optimizer.step()
... if (i + 1) % 100 == 0:
... print (loss.data[0])
... pd = predict_tensor.data
... print (pd[0][0], pd[1][0], pd[2][0], pd[3][0], pd[4][0], pd[5][0])
...
0.6329450607299805
0.4733131527900696 0.5083065032958984 0.42082297801971436 0.2458558976650238 0.17586903274059296 0.07088879495859146
0.5748059749603271
0.5860824584960938 0.650752604007721 0.489077091217041 0.16572602093219757 0.03638556972146034 -0.15762503445148468
0.5252929329872131
0.6919865608215332 0.7841049432754517 0.5538088083267212 0.09321644902229309 -0.09102052450180054 -0.3673758804798126
0.48296624422073364
0.7910601496696472 0.9085955619812012 0.6147568225860596 0.027079343795776367 -0.20799170434474945 -0.5605981349945068
0.4465940296649933
0.8835896849632263 1.024709701538086 0.6719093322753906 -0.03369140625 -0.3159317672252655 -0.7392921447753906
>>> print ('model.weight =', model.weight.data[0][0])
model.weight = 0.7067369818687439
>>> print ('model.bias =', model.bias.data[0])
model.bias = -3.2145912647247314
The loss goes down systematically (but, as noted above, doesn’t
go to zero). And at the end of the run the prediction accuracy is
perfect on your set of six samples (with the predictions understood
as described above).
Good luck.
K. Frank
[Edit:]
Please let me correct an incorrect statement I made. I said that
you can’t drive the loss all the way to zero, but in fact you can.
As the “weight” in the model – the multiplicative factor in the linear
function – becomes larger and larger, the logits predicted by the
model get pushed out towards -infinity and +infinity. This will cause
the sigmoid (that is implicit in BCEWithLogitsLoss) to saturate at
0 and 1, so the predictions will become (increasingly close to) exactly
correct (provided the bias is adjusted accordingly, which the training
algorithm does), and the loss approaches zero.
Here are the last twenty loss values obtained by running Mnauf’s
training loop for 10,000 iterations:
0.06364566087722778
0.06364051252603531
0.06363534182310104
0.06363023072481155
0.06362506002187729
0.06361991167068481
0.06361475586891174
0.06360962986946106
0.06360447406768799
0.06359934061765671
0.06359419226646423
0.06358903646469116
0.06358392536640167
0.0635787770152092
0.06357365846633911
0.06356850266456604
0.06356338411569595
0.06355825066566467
0.06355313956737518
0.0635480210185051
So the loss does approach zero, although very slowly. Note, as the
sigmoid saturates, its gradients go to zero, so (with a fixed learning
rate) the training slows way down. |
st80992 | I work with a GPU server where many of us can use it and we should not use all GPUs for a single user. So I’m trying to do something that will be able to automatically use the GPU with the least memory used by other users.
I saw that it is possible to get the same GPU usage information as with “nvidia-smi”, either with functions from the nvgpu package (https://pypi.org/project/nvgpu/) or by using subprocess.call to execute the command “nvidia-smi” and place the output in a CSV file (see the thread “Why the two GPUs on my machine have the same ID, so that Pytorch can only choose one?”). So with that, I can know which GPU has the least memory used.
The only trouble is that the order of the devices of PyTorch is not the same as on “nvidia-smi” (can be seen by the order of names in my case). So I’m looking for a way to convert the GPU order from “nvidia-smi” to the one used by PyTorch to properly select the GPU with the least memory used.
Note: In my case, the server has 2 GPU “GTX 1080 Ti” and 1 GPU “TITAN RTX”, so I can see the difference between the names, but not between the 2 GPU “GTX 1080 Ti”. |
st80993 | Solved by Poisson_Chasseur in post #2
I found the solution to the problem that has already been solved as presented in the following links:
Order of CUDA devices
How to setting the GPU No. for training?
Gpu devices: nvidia-smi and cuda.get_device_name() output appear inconsistent
The solution is simply to make the first line… |
st80994 | I found the solution to the problem that has already been solved as presented in the following links:
Order of CUDA devices
How to setting the GPU No. for training?
Gpu devices: nvidia-smi and cuda.get_device_name() output appear inconsistent
The solution is simply to run the first line of the following code BEFORE calling any functions related to “torch.cuda”, since its initialization is performed only once.
import os
import torch

# Change the order so that it is the one used by "nvidia-smi" and not the
# one used by all other programs ("FASTEST_FIRST")
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
# Check that it is the same order as for "nvidia-smi":
[torch.cuda.get_device_name(i) for i in range(0, torch.cuda.device_count())] |
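A minimal sketch of then picking the GPU with the least used memory (the nvidia-smi query flags are standard options; the remaining names are illustrative):
import os
import subprocess
import torch

os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"   # keep PyTorch's device order in sync with nvidia-smi
out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"])
used_mib = [int(x) for x in out.decode().strip().split("\n")]   # used memory per GPU, in MiB
device = torch.device("cuda:{}".format(used_mib.index(min(used_mib))))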
st80995 | Hi,
For example, if I have bunch of feature maps, let’s say we have 100 feature map in the shape [128, 256, 256].
then I want to get the pixel-wise max feature map of this 100 feature.
the easiest way is to use torch.cat() to concatenate the total 100 feature map then use torch.max() to get the result. But my memory is not big enough to store the tensor from concatenation.
So, I think maybe I can use for-loop to get the max tensor. code is like this:
for i in range(100):
    if i == 0:
        out = feature[i]
    else:
        out = torch.max(out, feature[i])
but I find that memory is still growing. I believe it is because torch.max(out, feature[i]) also needs to allocate a contiguous block to hold the result. Is there a way to free this memory after the calculation? Ideally we don't need all of this memory for autograd; we only need to remember where each pixel's maximum came from, and then we can compute the gradient. |
st80996 | I assume feature is a list holding all tensors?
If that’s the case, you would have enough memory to at least store the data once.
Could you try to create a feature tensor in the first place instead of a list?
If you are working close to the memory limit, the memory peaks might yield an OOM error.
That being said, if you don’t need gradients for the calculation, you could wrap the code in a with torch.no_grad() block. |
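A minimal sketch of that suggestion applied to a running element-wise maximum (only valid if gradients are not needed; the feature list is a stand-in for the real data):
import torch

feature = [torch.randn(128, 256, 256) for _ in range(100)]   # stand-in data
with torch.no_grad():
    out = feature[0].clone()
    for f in feature[1:]:
        out = torch.max(out, f)   # intermediate results are freed since no graph is kept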
st80997 | Thanks for replying.
I find that my statement is misleading. Sorry about that, let me make the problem more clear.
Suppose I have 300 vectors stored in a tensor called feature, which has shape [300, 1024]. We also have an index tensor of size [300] that indicates which vectors should be sent into the max() operation to get the maximum tensor, so index is something like [1,0,1,0,0,1,…,0].
The easiest way to do this is just one line code:
out = torch.max(feature[torch.where(index == 1)], 0)
But, we have a limit memory, and I guess this part feature[torch.where(index == 1)] will allocate a new memory, which is not acceptable.
So, I think there is another way: first, allocate space in memory for storage maximum result. Then, use a for-loop to compare each feature with result one-by-one.
out = torch.zeros([1024])
for i in range(300):
    if index[i] == 1:
        out = torch.max(out, feature[i])
So, ideally, we only need to allocate one [1024]-sized buffer for the out tensor, which is acceptable for our GPU. (Both index and feature are tensors here, but we don't want to create a big intermediate variable, so we use a for-loop.)
However, I find that memory still grows at each iteration of the for-loop. I don't know why, or how to avoid it.
We need the out tensor to carry gradients for back-propagation, so I guess this is why PyTorch doesn't release the memory after torch.max(out, feature[i]). But ideally we would only need one tensor of the same size as out to remember where each max value came from, and then we could do backpropagation. Is there a way to do that? |
st80998 | Hello! This is not really a PyTorch question, and I apologize for this, but I hope someone can help me anyway. I have some images with a fixed background and a single object on them which is placed, in each image, at a different position on that background. I want to find a way to extract, in an unsupervised way, the positions of that object. For example, us, as humans, would record the x and y location of the object. Of course the NN doesn’t have a notion of x and y, but i would like, given an image, the NN to produce 2 numbers, that preserve as much as possible from the actual relative position of objects on the background. For example, if 3 objects are equally spaced on a straight line (in 3 of the images), I would like the 2 numbers produced by the NN for each of the 3 images to preserve this ordering, even if they won’t form a straight line. They can form a weird curve, but as long as the order is correct that can be topologically transformed to the right, straight line. Can someone suggest me any paper/architecture that did something similar? Thank you! |
st80999 | If I understand correctly, your problem is similar to the usual bounding-box based detection, but you want a point representing the center of the object rather than the bbox.
I am not sure I follow the second part though; how would that be different from detection? |
st81000 | Thank you for your reply. I was also thinking of using something similar to bounding boxes. However, don't they require supervised learning, i.e. don't you need to actually label the right boxes for the training set? I want to find the position in an unsupervised way. |
st81001 | A bit more information is needed to properly answer this question. What is the background, and what are the objects you are trying to identify? Depending on the simplicity of the task, you may be able to use generic non-neural network based computer vision techniques. For instance, if the background is a solid or completely stationary you could try background subtraction and clustering. If the objects are roughly circular, you could try using a Hough transform. See openCV packages and documentation for this.
However, in general, if the specific identities of the objects are important (i.e. if they are unique) you’ll likely have to used supervised learning to train a model to recognize these objects. Again, there may be an easy cheat depending on the details of the task but it’s hard to say based on your description. |
st81002 | The background and object can be anything. For example, assume I have 1000 pictures with a (fixed) forest background and a (fixed) cat image at different positions on each of these 1000 images. I want a NN able to output 2 numbers that are related to the real x and y of the cat position. By this I mean applying a (possibly highly non-linear) transformation on the 2 numbers output for each image, I can get the actual x and y of the cat. Of course I need the same transformation for each image and also that transformation function might be highly complicated, but what I want is an output that somehow stores the actual position of the cat. For example if the NN outputs for each image instead of (x,y), (log(sqrt(x)),atan(exp(y)) it’s fine as the functions are invertible. However I can’t hardcode this by hand, as later I might have another dataset (say dog on a bedroom background). So a good NN shouldn’t care too much about the object and background (assuming they are reasonable). Of course you would need to retrain it for each case, but for the purpose of my project, I don’t need a validation set. I just want the NN to discover the x and y on it’s own for the training data and that’s it (so basically highly overfitting would be ideal in this case). Thank you! |
st81003 | @smu226 “I want to find a way to extract, in an unsupervised way, the positions of that object”
You cannot expect a neural network to output ANY specific target (e.g. x y positions or any function thereof) without having trained it to predict such outputs. Neural networks are a supervised learning technique. Thus, your options are:
1.) train the model with labeled examples
2.) Use a non-trained image processing technique such as convolving the target with the image or background subtraction, or perhaps clustering by pixel values if the color intensities are different. |
st81004 | I don’t think that’s true. For example, one thing I tried was to use a bottleneck, made of these 2 nodes which I want as the output and force the NN to reconstruct the original image using these 2 numbers. Ideally the “encoder” (going from the image to the 2 numbers) would output some function of x and y and the “decoder” (going from the 2 numbers back to the original image) would learn the inverse of that function of x and y and place the object there. I did that and it works ok-ish, in the sense that for most of the points the ordering is correct, but there are quite a few whose position is pretty messed up. So my point is, this can be done without using labeled examples or non-NN techniques (probably having 10 or 100 times more data would fix the problem even with my approach?). However I was hoping for something more clever that just and encoder-decoder. This is the most obvious thing to try first, but I was hoping (and given the rate at which AI evolves I am pretty sure) that someone has a more effective way of doing this and someone can point me towards that paper/code. |
st81005 | For the encoder-decoder example you’ve listed above, you are in fact training the model in a supervised approach. The key insight here is that you use the same image for the input and the label, and this is used to train both the encoder and decoder portion of the model; the loss is backpropogated through both modules. While the encoder may learn something similar to x y coordinates if the hidden state is of size two, there’s no guarantee of exactly what it is learning.
It doesn't really seem that the information you've given is sufficient to provide you with the sort of solution you seem to be after. Is your dataset limited? If so, try doing some transforms to increase the size of your dataset, and this may yield better results from your encoder-decoder approach. |
st81006 | By unsupervised I meant that the images have no label. I didn’t think that what I am doing counts as supervised (what is unsupervised learning then?). The data set is not limited, I am generating the images, so I can generate as many as I want (i.e. I can place the object wherever I want on the background). But I am looking for something more data efficient, hence why I was asking for some new insight, beside the plain old encoder-decoder architecture. |
st81007 | smu226:
bottleneck, made of these 2 nodes which I want as the output and force the NN to reconstruct the original image using these 2 numbers
Nowadays bottleneck is very abused term, what do you mean by bottleneck. For instance, inside some Resnet models there is a bottleneck part.
You should keep trying if you are using convolution networks, because convolution is a pattern matching.
Do you have the image you are actually searching inside the bigger image? |
st81008 | Oh, sorry (I am quite new to the field in general)! By bottleneck I meant the middle layer between input and output which has only 2 nodes (and all the other layers have more than that). So basically all the useful information in the input image should be stored in this 2 nodes layer. The problem with CNN as far as I understand (I read an Uber paper too about this a while ago) is that they can identify the presence of an object in the image, but they are translationally invariant, so they can’t really locate the position of that object within the image (but I have no better tool so far, hence here I am). I do have the separately both the small image (object) and big image (background), as I am generating the overall combination of them (the position of the object on the background) for training, but I don’t want to use that information, as ideally I want to build a NN that can be used for cases where you don’t have access to that. Thank you! |
st81009 | If you are generating the images, I guess I don't understand why you don't just label them automatically during generation? That's the neatest way to get a function that maps your input space to your desired outputs. |
st81010 | That’s not the point… If I wanted a supervised learning approach that way, I wouldn’t be here asking for help. The point is that I want to apply this, later, to images for which I don’t have access to the actual x and y of the object, or to the background and the object separately. I use these generated images as a training set for different architectures, the fact that I know the real x and y in this case is irrelevant for the purpose of my project or for the purpose of my original question here. |
st81011 | You have an interesting use case and I assume you've already seen Unsupervised learning of foreground object detection?
Could this approach work, if you add some postprocessing, i.e. using the soft masks to get a center of mass for your object localization? |
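A minimal sketch of that post-processing idea, reducing a soft mask to an (x, y) estimate via its center of mass (the mask tensor is an assumption for illustration):
import torch

def soft_mask_centroid(mask):
    # mask: (H, W) tensor of non-negative values
    ys = torch.arange(mask.size(0), dtype=mask.dtype)
    xs = torch.arange(mask.size(1), dtype=mask.dtype)
    total = mask.sum()
    cy = (mask.sum(dim=1) * ys).sum() / total
    cx = (mask.sum(dim=0) * xs).sum() / total
    return cx, cy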
st81012 | You can also look at saliency filters and then get the coordinates with contour detection if you want a way without a NN; this would use OpenCV. |
st81013 | I am new to Pytorch and would appreciate some direction on how to create and use an LSTM cell with multiple additional gates. For example, I would like to implement the LSTM cell described in this paper. |
st81014 | You just take an LSTMCell implementation and modify it.
For example, you can copy the implementation here and modify it for your own purposes:
github.com/pytorch/benchmark/blob/09eaadc1d05ad442b1f0beb82babf875bbafb24b/rnns/fastrnns/cells.py#L25-L40
def lstm_cell(input, hidden, w_ih, w_hh, b_ih, b_hh):
    # type: (Tensor, Tuple[Tensor, Tensor], Tensor, Tensor, Tensor, Tensor) -> Tuple[Tensor, Tensor]
    hx, cx = hidden
    gates = torch.mm(input, w_ih.t()) + torch.mm(hx, w_hh.t()) + b_ih + b_hh
    ingate, forgetgate, cellgate, outgate = gates.chunk(4, 1)
    ingate = torch.sigmoid(ingate)
    forgetgate = torch.sigmoid(forgetgate)
    cellgate = torch.tanh(cellgate)
    outgate = torch.sigmoid(outgate)
    cy = (forgetgate * cx) + (ingate * cellgate)
    hy = outgate * torch.tanh(cy)
    return hy, cy |
st81015 | I am having a little difficulty getting the implementation of a custom LSTM cell a la https://github.com/pytorch/benchmark/tree/master/rnns/fastrnns to work. Please find my code with a minimal implementation below
custom RNN
import torch
from collections import namedtuple
'''
Define a creator as a function:
(options) -> (inputs, params, forward, backward_setup, backward)
inputs: the inputs to the returned 'forward'. One can call
forward(*inputs) directly.
params: List[Tensor] all requires_grad=True parameters.
forward: function / graph executor / module
One can call rnn(rnn_inputs) using the outputs of the creator.
backward_setup: backward_inputs = backward_setup(*outputs)
Then, we pass backward_inputs to backward. If None, then it is assumed to
be the identity function.
backward: Given `output = backward_setup(*forward(*inputs))`, performs
backpropagation. If None, then nothing happens.
fastrnns.bench times the forward and backward invocations.
'''
def lstm_cell(input, hidden, w_ih, w_hh, b_ih, b_hh):
    # type: (Tensor, Tuple[Tensor, Tensor], Tensor, Tensor, Tensor, Tensor) -> Tuple[Tensor, Tensor]
    hx, cx = hidden
    gates = torch.mm(input, w_ih.t()) + torch.mm(hx, w_hh.t()) + b_ih + b_hh
    ingate, forgetgate, cellgate, outgate = gates.chunk(4, 1)
    ingate = torch.sigmoid(ingate)
    forgetgate = torch.sigmoid(forgetgate)
    cellgate = torch.tanh(cellgate)
    outgate = torch.sigmoid(outgate)
    cy = (forgetgate * cx) + (ingate * cellgate)
    hy = outgate * torch.tanh(cy)
    return hy, cy

def varlen_lstm_creator(script=False, **kwargs):
    sequences, _, hidden, params, _ = varlen_lstm_inputs(
        return_module=False, **kwargs)
    inputs = [sequences, hidden] + params[0]
    return ModelDef(
        inputs=inputs,
        params=flatten_list(params),
        forward=varlen_lstm_factory(lstm_cell, script),
        backward_setup=varlen_lstm_backward_setup,
        backward=simple_backward)

ModelDef = namedtuple('ModelDef', [
    'inputs', 'params', 'forward', 'backward_setup', 'backward'])

def varlen_lstm_inputs(minlen=30, maxlen=100,
                       numLayers=1, inputSize=512, hiddenSize=512,
                       miniBatch=64, return_module=False, device='cpu',
                       seed=None, **kwargs):
    if seed is not None:
        torch.manual_seed(seed)
    lengths = torch.randint(
        low=minlen, high=maxlen, size=[miniBatch],
        dtype=torch.long, device=device)
    x = [torch.randn(length, inputSize, device=device)
         for length in lengths]
    hx = torch.randn(numLayers, miniBatch, hiddenSize, device=device)
    cx = torch.randn(numLayers, miniBatch, hiddenSize, device=device)
    lstm = torch.nn.LSTM(inputSize, hiddenSize, numLayers).to(device)
    if return_module:
        return x, lengths, (hx, cx), lstm.all_weights, lstm
    else:
        # NB: lstm.all_weights format:
        # wih, whh, bih, bhh = lstm.all_weights[layer]
        return x, lengths, (hx, cx), lstm.all_weights, None

def varlen_lstm_backward_setup(forward_output, seed=None):
    if seed:
        torch.manual_seed(seed)
    rnn_utils = torch.nn.utils.rnn
    sequences = forward_output[0]
    padded = rnn_utils.pad_sequence(sequences)
    grad = torch.randn_like(padded)
    return padded, grad

def varlen_lstm_factory(cell, script):
    def dynamic_rnn(sequences, hiddens, wih, whh, bih, bhh):
        # type: (List[Tensor], Tuple[Tensor, Tensor], Tensor, Tensor, Tensor, Tensor) -> Tuple[List[Tensor], Tuple[List[Tensor], List[Tensor]]]
        hx, cx = hiddens
        hxs = hx.unbind(1)
        cxs = cx.unbind(1)
        # List of: (output, hx, cx)
        outputs = []
        hx_outs = []
        cx_outs = []
        for batch in range(len(sequences)):
            output = []
            hy, cy = hxs[batch], cxs[batch]
            inputs = sequences[batch].unbind(0)
            for seq_idx in range(len(inputs)):
                hy, cy = cell(
                    inputs[seq_idx].unsqueeze(0), (hy, cy), wih, whh, bih, bhh)
                output += [hy]
            outputs += [torch.stack(output)]
            hx_outs += [hy.unsqueeze(0)]
            cx_outs += [cy.unsqueeze(0)]
        return outputs, (hx_outs, cx_outs)

    if script:
        cell = torch.jit.script(cell)
        dynamic_rnn = torch.jit.script(dynamic_rnn)
    return dynamic_rnn

def simple_backward(output, grad_output):
    return output.backward(grad_output)

# list[list[T]] -> list[T]
def flatten_list(lst):
    result = []
    for inner in lst:
        result.extend(inner)
    return result
a basic POF
import torch
import torch.nn as nn
from CustomRNN import varlen_lstm_creator
torch.manual_seed(1)
lstm = varlen_lstm_creator(inputSize=3, hiddenSize=3) # Input dim is 3, output dim is 3
inputs = [torch.randn(1, 3) for _ in range(5)] # make a sequence of length 5
# initialize the hidden state.
hidden = (torch.randn(1, 1, 3),
torch.randn(1, 1, 3))
for i in inputs:
    # Step through the sequence one element at a time.
    # after each step, hidden contains the hidden state.
    out, hidden = lstm(i.view(1, 1, -1), hidden)
inputs = torch.cat(inputs).view(len(inputs), 1, -1)
hidden = (torch.randn(1, 1, 3), torch.randn(1, 1, 3)) # clean out hidden state
out, hidden = lstm(inputs, hidden)
print(out)
print(hidden)
Error
TypeError: ‘ModelDef’ object is not callable
Any help is greatly appreciated! |
st81016 | The whole file / helper functions in there are not callable as-is, it’s part of a larger benchmark harness.
I meant that you probably want to just grab the cell functions themselves. |
st81017 | OK. But how do I integrate a cell function with for example torch.nn.LSTM? I appreciate your time. |
st81018 | You don't integrate a cell function with torch.nn.LSTM, you write a for loop around it.
Something like:
for t in range(timesteps):
    for b in range(batches):
        h, c = F.lstm_cell(input, weight, bias, h, c)
out = F.linear(weight_out, h) |
st81019 | This would be an example of such a loop that you would write: https://github.com/pytorch/benchmark/blob/09eaadc1d05ad442b1f0beb82babf875bbafb24b/rnns/fastrnns/factory.py#L165-L182 |
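For reference, a rough sketch of such a loop written around the lstm_cell function quoted earlier in this thread (tensor shapes and names are illustrative):
import torch

def run_lstm(inputs, hx, cx, w_ih, w_hh, b_ih, b_hh):
    # inputs: (seq_len, batch, input_size); hx, cx: (batch, hidden_size)
    outputs = []
    for t in range(inputs.size(0)):
        hx, cx = lstm_cell(inputs[t], (hx, cx), w_ih, w_hh, b_ih, b_hh)
        outputs.append(hx)
    return torch.stack(outputs), (hx, cx)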
st81020 | I was able to create and use a number of custom RNN/LSTM cells using https://github.com/NVIDIA/apex/tree/master/apex/RNN.
I am struggling with a custom LSTM cell that returns tuples for both the hidden and cell state. Please find minimal code below. Any help is appreciated.
class skipLSTMRNNCell(RNNCell):
    def __init__(self, input_size, hidden_size, bias=False, output_size=None):
        gate_multiplier = 4
        super(skipLSTMRNNCell, self).__init__(gate_multiplier, input_size, hidden_size, SkipLSTMCell, n_hidden_states=4,
                                              bias=bias, output_size=output_size)
        self.w_uh = Parameter(xavier_uniform(torch.Tensor(1, hidden_size)))
        self.b_uh = Parameter(torch.ones(1))
        self.reset_parameters()

    def forward(self, input):
        # if not inited or bsz has changed this will create hidden states
        self.init_hidden(input.size()[0])
        hidden_state = self.hidden[0] if self.n_hidden_states == 1 else self.hidden
        self.hidden = list(
            self.cell(input, hidden_state, self.w_ih, self.w_hh, self.w_uh, self.b_uh,
                      b_ih=self.b_ih, b_hh=self.b_hh)
        )
        if self.output_size != self.hidden_size:
            self.hidden[0] = F.linear(self.hidden[0], self.w_ho)
        return tuple(self.hidden)

    def new_like(self, new_input_size=None):
        if new_input_size is None:
            new_input_size = self.input_size
        return type(self)(
            new_input_size,
            self.hidden_size,
            self.bias,
            self.output_size)
cell
def SkipLSTMCell(input, hidden, w_ih, w_hh, w_uh, b_uh=None, b_ih=None, b_hh=None):
    # if num_layers != 1:
    #     raise RuntimeError("wrong num_layers: got {}, expected {}".format(num_layers, 1))
    # w_ih, w_hh = w_ih[0], w_hh[0]
    # b_ih = b_ih[0] if b_ih is not None else None
    # b_hh = b_hh[0] if b_hh is not None else None
    c_prev, h_prev, update_prob_prev, cum_update_prob_prev = hidden
    gates = F.linear(input, w_ih, b_ih) + F.linear(h_prev, w_hh, b_hh)
    ingate, forgetgate, cellgate, outgate = gates.chunk(4, 1)
    ingate = F.sigmoid(ingate)
    forgetgate = F.sigmoid(forgetgate)
    cellgate = torch.tanh(cellgate)
    outgate = F.sigmoid(outgate)
    new_c_tilde = (forgetgate * c_prev) + (ingate * cellgate)
    new_h_tilde = outgate * torch.tanh(new_c_tilde)
    # Compute value for the update prob
    new_update_prob_tilde = torch.sigmoid(F.linear(new_c_tilde, w_uh, b_uh))
    # Compute value for the update gate
    cum_update_prob = cum_update_prob_prev + torch.min(update_prob_prev, 1. - cum_update_prob_prev)
    # round
    bn = BinaryLayer()
    update_gate = bn(cum_update_prob)
    # Apply update gate
    new_c = update_gate * new_c_tilde + (1. - update_gate) * c_prev
    new_h = update_gate * new_h_tilde + (1. - update_gate) * h_prev
    new_cum_update_prob = update_gate * 0. + (1. - update_gate) * cum_update_prob
    new_update_prob = update_gate * new_update_prob_tilde + (1. - update_gate) * update_prob_prev
    new_state = (new_c, new_h, new_update_prob, new_cum_update_prob)
    new_output = (new_h, update_gate)
    return new_output, new_state |
st81021 | Can anyone explain what this line is doing?
ingate, forgetgate, cellgate, outgate = gates.chunk(4, 1) |
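For reference, chunk(4, 1) simply splits the stacked gate pre-activations along dimension 1 into four equal pieces, one per gate. A minimal sketch with illustrative sizes:
import torch

batch, hidden = 2, 5
gates = torch.randn(batch, 4 * hidden)               # all four gates stacked along dim 1
ingate, forgetgate, cellgate, outgate = gates.chunk(4, 1)
print(ingate.shape)                                   # torch.Size([2, 5])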
st81022 | This topic isn't strictly related to pytorch, but to computer vision. The 2D bounding box intersection over union (IOU) is relatively straightforward to calculate for a ground truth bounding box versus a model output, and thus translates neatly into a loss function. For 3D object detections, it would be nice to extend the IOU concept into 3D. Unfortunately, as far as I can tell there isn't a (simple) way to calculate the intersection between two 6-sided 3-dimensional volumes; even 2D IOU with arbitrary quadrilaterals instead of axis-constrained rectangles is somewhat difficult.
I am wondering if anyone knows either: a. how the 3D bounding box IOU is calculated (I have found papers that reference it but never with a description of how it is calculated) or b. what loss functions are used instead of one based on 3D volume IOU. |
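For the special case of axis-aligned 3D boxes, the 2D formula does extend directly, since the intersection of two axis-aligned boxes is itself an axis-aligned box. A minimal sketch (boxes given as (x1, y1, z1, x2, y2, z2); this does not cover the rotated-box case the question is really about):
def iou_3d_axis_aligned(a, b):
    # a, b: (x1, y1, z1, x2, y2, z2) with x1 < x2, y1 < y2, z1 < z2
    dx = min(a[3], b[3]) - max(a[0], b[0])
    dy = min(a[4], b[4]) - max(a[1], b[1])
    dz = min(a[5], b[5]) - max(a[2], b[2])
    inter = max(dx, 0.0) * max(dy, 0.0) * max(dz, 0.0)
    vol_a = (a[3] - a[0]) * (a[4] - a[1]) * (a[5] - a[2])
    vol_b = (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])
    return inter / (vol_a + vol_b - inter)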
st81023 | I'm running an off-policy RL algorithm with DeepMind's PySC2, and I am finding I'm quickly running out of GPU memory. My PC only has 4 GB of VRAM, so if this is a bad plan from the start, just let me know.
Essentially the run loop of the program goes:
Actor and critic initialised on gpu
observe environment
process observations (into cuda tensors, such as minimap_features, a 1 x 4 x 64 x 64 tensor)
actor.forward(processed observations)
store a bunch of information in the replay buffer (am storing, for example minimap_features.cpu())
The problem I'm having appears to be that the memory is never being deallocated, with cuda.memory_allocated and max_memory_allocated remaining (almost) the same as each other and constantly, linearly increasing. My guess is that something I am doing is maintaining references to previously calculated variables which should have been cleared. Any advice? (If you want to know more about a specific part of the code, I can show it.) |
st81024 | If you see an increase in memory usage, you are most likely right in maintaining some references to variables.
Could you check your code for storing tensors which are not detached from the computation graph, e.g. losses.append(loss)? This is often the cause of increasing memory usage, as loss still holds the whole computation graph, if you don’t call detach() on it. |
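A minimal sketch of the difference (the training loop and names are illustrative):
losses = []
for data, target in loader:                 # hypothetical training loop
    optimizer.zero_grad()
    loss = criterion(model(data), target)
    loss.backward()
    optimizer.step()
    losses.append(loss.detach())            # or loss.item(); appending loss itself keeps the whole graph alive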
st81025 | I solved this; it turns out that storing tensors in the replay buffer via .cpu() was keeping a reference, and using .cpu().data fixed this |
st81026 | Good to hear it’s fixed. However, the usage of the .data attribute is not recommended so I would use .detach() instead. |
st81027 | Hi,
It is mentioned in the documentation of an LSTM, that if batch_first = True for pack_padded_sequence input to LSTM (bi-directional), the last hidden state output is also of shape (batch, num_directions, hidden_size). However, the output of the last hidden state appears to be of shape (num_directions, batch, hidden_size), even though batch_first is set to true. This is the code:
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence, PackedSequence
from torch import nn
sequences = torch.FloatTensor([[1, 2, 0, 0, 0, 0], # length 2
[3, 4, 5, 0, 0, 0], # length 3
[5, 6, 0, 0, 0, 0], # length 2
[8, 9, 10, 11, 12, 0]
]) # length 5
seq_lengths = torch.LongTensor([2, 3, 2, 5])
rnn = nn.LSTM(1, 5,
batch_first=True,
bidirectional = True)
packed_sequences = pack_padded_sequence(sequences.unsqueeze(2),
lengths=seq_lengths,
batch_first=True,
enforce_sorted=False)
rnn_output, (hn,cn) = rnn(packed_sequences)
print(hn.shape) # torch.Size([2, 4, 5]). It should be torch.Size([4, 2, 5])? |
st81028 | Hi,
Unfortunately I have no experience with the recurrent modules and the pack padded sequence mechanics.
From what I see, could it be the case that this distinction batch_first is to compare with the sequence dimension that you can have sometimes? and that the bidirectional will always add an extra dimension of size 2 at position 0 on top of whatever you already had? |
st81029 | Thank you for your answer. Most probably this is what happens… However, even if the bidirectional is set to False, the output would still be of shape (1,4,5). Transposing the matrix to be (4,1,5) would solve the problem i believe. I just wanted to make sure i’m doing the right thing. Thanks a lot. |
st81030 | So after asking around, basically, hidden_state is just not influenced by the batch_first argument. So it will have this shape whatever you pass.
This inconsistency is tracked in the following issue. |
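A minimal sketch of working around it by permuting the hidden state into batch-first form yourself (matching the example above):
hn_batch_first = hn.permute(1, 0, 2)   # (batch, num_layers * num_directions, hidden_size)
print(hn_batch_first.shape)            # torch.Size([4, 2, 5])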
st81031 | I got an error doing this:
from flashtorch.utils import apply_transforms, load_image
from flashtorch.saliency import Backprop
train_dataset = torchvision.datasets.CIFAR10(root='./cifar10/',
train=True,
transform=transforms.Compose([
transforms.ToTensor()
]),
download=True)
...
backprop = Backprop(model)
backprop.visualize((train_dataset[0][0].unsqueeze(0)).to('cuda'), 6, guided=True, use_gpu=True)
where do I need to set to(‘cuda’)? |
st81032 | [image.png, 912×394: screenshot of the code and the "float division by zero" error]
The current situation is that we want to sum convolution-layer feature maps based on an index,
to get the sum over a certain part of the feature maps.
I can't understand the error shown in the image above.
If fake.item() were zero, then we should get a "division by zero" error, not a "float division by zero" error.
Is calling tensor.item() on a specific variable at various times dangerous? |
st81033 | If (fake + real) is equal to zero, then why do you think the problem comes from the .item() method ? |
st81034 | Because when Python divides an integer by zero, the name of the error is "division by zero", whereas when it divides a float by zero, the error is "float division by zero"
for example
“3/0” > “division by zero” error
“1.5/0” > “float division by zero” error
“0/0” > “division by zero” error |
st81035 | Hi @Yupjun
Try:
fake.dtype
If it returns torch.float32, fake.item() will return a Python float scalar.
And in Python:
>>> 0.0/0
ZeroDivisionError: float division by zero
so everything is expected here. |
st81036 | Hi,
I am reading a file that includes class labels (0 and 1), and I want to convert it to a long tensor to use CrossEntropy, using the code below:
import numpy as np
import torch
from collections import deque

def read_labels(filename):
    lists = deque()
    with open(filename, 'r') as input_file:
        lines_cache = input_file.readlines()
        for current_line in lines_cache:
            sp = current_line.split()
            sp = map(float, sp)
            sp = list(sp)
            lists.append(sp)
    current_label_array = np.array(lists)
    return current_label_array

current_label = read_labels('./train/labels1.txt')
current_label = torch.as_tensor(current_label, dtype=torch.long, device=torch.device("cuda:0"))
but I receive this error. How could I do it?
TypeError: can't convert np.ndarray of type numpy.object_. The only supported types are: float64, float32, float16, int64, int32, int16, int8, and uint8.
Thanks |
st81037 | Solved by ptrblck in post #2
Could you print the dtype of current_label before passing it to torch.as_tensor?
It seems you are reading mixed object types, which would create this error (i.e. not all your values are represented as a numerical dtype). |
st81038 | Could you print the dtype of current_label before passing it to torch.as_tensor?
It seems you are reading mixed object types, which would create this error (i.e. not all your values are represented as a numerical dtype). |
st81039 | Yes, you are right. It returns object. I checked the file. There are 2 empty lines at the end.
Thanks Ptrblck. |
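A minimal sketch of guarding against those blank lines inside read_labels (only the loop body changes):
for current_line in lines_cache:
    if not current_line.strip():   # skip empty lines so np.array gets a rectangular, numeric array
        continue
    sp = list(map(float, current_line.split()))
    lists.append(sp)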
st81040 | https://pytorch.org/docs/stable/tensorboard.html
I just copied the code of the first block, ran the code in a Jupyter notebook, and opened TensorBoard in the Anaconda prompt. Images work well, but the graph is strange.
[screenshots omitted: i.png, QQ截图20190920203832.png, f.png, showing the Images tab rendering correctly and the Graphs tab rendering incorrectly]
(torch version 1.2)
Is it incompatible with Windows 10? |
st81041 | Solved by ptrblck in post #2
This is a known issue and should be fixed in the nightly builds. Could you update and try your code again? |
st81042 | This is a known issue and should be fixed in the nightly builds. Could you update and try your code again? |
st81043 | I’m trying to do a simple line:
w = w.view(w.shape[0] * w.shape[2], w.shape[1], 1,1,1)
And I’m getting an error:
view size is not compatible with input tensor’s
Not only did I check the sizes a million times and confirm that they are compatible, it's also easy to see that the operation must be the same.
Is it because view is just looking at the memory differently, and maybe here it can't? Any ideas?
Thanks. |
st81044 | It’s hard it’s a big program, and it used to work before.
But I solved it using reshape.
Sorry for bothering. |
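For reference, view() only reinterprets existing memory, so it fails when the tensor's layout (strides) is no longer compatible with the requested shape, while reshape() copies when needed. A minimal sketch of the two equivalent fixes:
w = w.reshape(w.shape[0] * w.shape[2], w.shape[1], 1, 1, 1)
# or, keeping view():
w = w.contiguous().view(w.shape[0] * w.shape[2], w.shape[1], 1, 1, 1)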
st81045 | After I tried the solution from issue 1375, i.e. sudo rm -r ~/.nv, I still got the same error as before.
But the problem happened when I called the model’s forward() function instead of the steps of placing the input tensors on GPU.
The version of Python I use now is 3.5, with PyTorch 0.3.1 and cudnn 7.0.5. I have two GPUs on my computer and used the below snippet for specifying the GPU:
import os

opt_gpu = "0"  # note: os.environ values must be strings, not ints
os.environ["CUDA_VISIBLE_DEVICES"] = opt_gpu
The detailed errors are:
RuntimeError: cublas runtime error : library not initialized at /opt/conda/conda-bld/pytorch_1518241081361/work/torch/lib/THC/THCGeneral.c:405
/opt/conda/conda-bld/pytorch_1518241081361/work/torch/lib/THC/THCTensorScatterGather.cu:176: void THCudaTensor_scatterFillKernel(TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, Real, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = 3]: block: [0,0,0], thread: [106,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
/opt/conda/conda-bld/pytorch_1518241081361/work/torch/lib/THC/THCTensorScatterGather.cu:176: void THCudaTensor_scatterFillKernel(TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, Real, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = 3]: block: [0,0,0], thread: [360,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
/opt/conda/conda-bld/pytorch_1518241081361/work/torch/lib/THC/THCTensorScatterGather.cu:176: void THCudaTensor_scatterFillKernel(TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, Real, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = 3]: block: [1,0,0], thread: [80,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
Any advice? Thanks. |
st81046 | When I run
import torch
torch.cuda.FloatTensor([1.])
I seem to be getting the error: RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at /pytorch/aten/src/THC/THCGeneral.cpp:50.
I'm using Debian (Stretch). Checking the NVIDIA driver on the console:
/sbin/modinfo nvidia_current
returns:
filename: /lib/modules/4.9.0-11-amd64/updates/dkms/nvidia-current.ko
alias: char-major-195-*
version: 418.74
supported: external
license: NVIDIA
srcversion: AB4044DE27C9CA55579A110
when I do nvcc --version, I get:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Tue_Jun_12_23:07:04_CDT_2018
Cuda compilation tools, release 9.2, V9.2.148
On python, when I do
import torch
print(torch.version.cuda)
print(torch.cuda.device_count())
print(torch.cuda.is_available())
I get:
9.2.148
0
False
I've tried doing os.environ["CUDA_VISIBLE_DEVICES"] = '0', but it still doesn't work.
Any help would be much appreciated! |
st81047 | Did you update the NVIDIA drivers or CUDA recently? If so, I would recommend to restart the machine, as we’ve seen some similar errors in the past. |
st81048 | For example, I loaded a pretrained VGG16 model and want to keep the same architecture (BN, maxpool, ReLU, etc.) and only change several layers' filter sizes (3x3 -> 5x5).
Is this possible? How can I do this?
Thank you.
Son. |
st81049 | You can do that, but you will have to retrain the network or at least fine-tune it, as replaced convolutions won't have pretrained weights.
To do so, you have to check the network's code and replace those convolutions whose kernel is (3,3) with (5,5). |
st81050 | @JuanFMontesinos
Yeah right, that’s what I want to do, to reduce effort.
But I don't know how to replace the convolution kernel. |
st81051 | Hi, I'm having a look at the code and it seems the network is dynamically generated.
github.com/pytorch/vision/blob/f677ea31db8f45dbfec2fe5e519da82853815776/torchvision/models/vgg.py#L70
nn.init.constant_(m.bias, 0)
def make_layers(cfg, batch_norm=False):
    layers = []
    in_channels = 3
    for v in cfg:
        if v == 'M':
            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
        else:
            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
            if batch_norm:
                layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
            else:
                layers += [conv2d, nn.ReLU(inplace=True)]
            in_channels = v
    return nn.Sequential(*layers)

cfgs = {
    'A': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
If you replace kernel_size there, you would replace all the convolutions.
You can look for another implementation in which you can tune all the convolutions statically, however it will be a mess to load pytorch’s pretrained weights.
Regards
EDIT: you may be able to extend the function to arbitrarily choose convolution sizes for each layer, as the network is built here:
github.com
pytorch/vision/blob/f677ea31db8f45dbfec2fe5e519da82853815776/torchvision/models/vgg.py#L90
    'A': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'B': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'D': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
    'E': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}

def _vgg(arch, cfg, batch_norm, pretrained, progress, **kwargs):
    if pretrained:
        kwargs['init_weights'] = False
    model = VGG(make_layers(cfgs[cfg], batch_norm=batch_norm), **kwargs)
    if pretrained:
        state_dict = load_state_dict_from_url(model_urls[arch],
                                              progress=progress)
        model.load_state_dict(state_dict)
    return model

def vgg11(pretrained=False, progress=True, **kwargs):
    r"""VGG 11-layer model (configuration "A") from
    `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
If you make the make_layers function accept another parameter, you can set the kernel sizes as you want. |
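Building on that, a rough sketch of swapping individual 3x3 convolutions for 5x5 ones after loading the pretrained model (the chosen indices are purely illustrative, and the replaced layers lose their pretrained weights, so fine-tuning is needed):
import torch.nn as nn
from torchvision import models

model = models.vgg16(pretrained=True)
for idx in [0, 2]:                                   # hypothetical layer indices to replace
    old = model.features[idx]
    if isinstance(old, nn.Conv2d):
        model.features[idx] = nn.Conv2d(old.in_channels, old.out_channels,
                                        kernel_size=5, padding=2,
                                        stride=old.stride,
                                        bias=old.bias is not None)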
st81052 | Hi, I am currently trying out some style transfer code that uses activations of a pretrained network from torchvision.
Reference: Accessing intermediate layers of a pretrained network forward?
The issue is that I wish to try using an object detection network such as Faster R-CNN, in which case the definition of the network is somewhat different, e.g. there is no more '.features' attribute.
How do I access the parameters (weights, and activations in multiple layers if possible) in such a case? |
st81053 | Hi @pinata1337,
An easy way to find out is to print your model:
>>> print(frcnn)
FasterRCNN(
(transform): GeneralizedRCNNTransform()
(backbone): Sequential(
(0): ConvBNReLU(
(0): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
[...]
You can always access the submodules of a module by its named attribute (or index when the Module is a Sequential). So, in the case of Faster R-CNN, let's say you want to access the first Conv2d in the first ConvBNReLU in backbone:
>>> frcnn.backbone[0][0]
Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) |
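To also capture the activation a given submodule produces during a forward pass, a minimal sketch using a forward hook (the attribute access follows the printout above; the dictionary and names are illustrative):
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output
    return hook

frcnn.backbone[0][0].register_forward_hook(save_activation('backbone.0.0'))
# after calling frcnn(images), activations['backbone.0.0'] holds that layer's output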
st81054 | Much obliged for the help! Now I know how to access the layers in that network.
However, I still encountered other obstacles while trying to get activations for certain data at a certain layer of the network (e.g. the output).
When I tried to obtain activation of a layer of the network with the code:
net = models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()
for inputs, targets in trainloader:
    aaaa = net(inputs)
    bbbb = net(targets)
    loss = nn.MSELoss(aaaa, bbbb)
Errors pop out:
RuntimeError: The expanded size of the tensor (128) must match the existing size (800) at non-singleton dimension 2. Target sizes: [3, 128, 128]. Tensor sizes: [3, 800, 800]
I do not fully understand the meaning of this error, because a CNN should work for any image size (128 in my case) as long as they're consistent…
I think I might need to read some kind of manual about how to access the parameters of the network given the input data and the location of the wanted parameters; still looking.
Are there any insights you could share on this issue? |
st81055 | That’s because the tensor(or images in your case) shapes passed on to MSELoss is different.
One tensor is of shape (3, 128, 128) and the other is (3, 800, 800). Make sure ‘aaaa’ and ‘bbbb’ shapes’ match. |
st81056 | What is the simplest syntax to transform 2D tensor
A B
C D
into
A A B B
A A B B
C C D D
C C D D
Note they are parameter tensors, so I need autograd to back propagate gradient from latter into former.
Thanks! |
st81057 | Solved by zfzhang in post #2
I found a numpy.repeat()-like function in latest pytorch (1.1), but it is needed to be called twice:
z = x.repeat_interleave(2,dim=0).repeat_interleave(2,dim=1) |
st81058 | I found a numpy.repeat()-like function in latest pytorch (1.1), but it is needed to be called twice:
z = x.repeat_interleave(2,dim=0).repeat_interleave(2,dim=1) |
st81059 | Thanks for the hint! Kronecker product seems to be exactly what I need. But there seems no such corresponding function in Pytorch |
st81060 | zfzhang:
But there seems no such corresponding function in Pytorch
Kronecker Product
Is there a way to perform a Kronecker product between two matrices? I know there’s outer products for vectors, but is there something like that for 2D Tensors? |
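For reference, a 2-D Kronecker product is easy to build from broadcasting, and it stays differentiable, so it also satisfies the autograd requirement from the question (a minimal sketch that avoids any built-in kron):
import torch

def kron2d(a, b):
    # (m, n) kron (p, q) -> (m*p, n*q)
    m, n = a.shape
    p, q = b.shape
    return (a[:, None, :, None] * b[None, :, None, :]).reshape(m * p, n * q)

x = torch.tensor([[1., 2.], [3., 4.]], requires_grad=True)
print(kron2d(x, torch.ones(2, 2)))   # each entry expanded into a 2x2 block, as in the question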
st81061 | Hello, I installed PyTorch using:
conda install pytorch torchvision cudatoolkit=9.0 -c pytorch
However, it downloaded pytorch-1.1.0 instead of the latest version 1.2. How does that happen? |
st81062 | The PyTorch 1.2.0 binaries ship with CUDA9.2 and CUDA10.0, so you would have to change the cudatoolkit version to one of these. |
st81063 | Hi ptrblck, thanks. Do I have to update my CUDA version? On my Windows system, after typing nvcc --version, the information is: Cuda compilation tools, release 9.0, V9.0.176 |
st81064 | The binaries ship their CUDA and cudnn versions, so you don’t need to install CUDA locally (unless you would like to build from source). You need to install the NVIDIA drivers, though. |
st81065 | The reason is that we didn't build PyTorch 1.2 binaries with CUDA 9.0, so you should either install CUDA 9.2 or CUDA 10.0. Please see the install commands in https://pytorch.org. |
st81066 | I cannot find the Transformer module with PyTorch version 1.1.0. Can someone please help me get the latest libraries from PyTorch? |
st81067 | Solved by ptrblck in post #2
nn.Transformer was added in PyTorch 1.2.0, so you should update to the latest stable version.
Have a look at the install instructions for the binaries: link. |