This repository has been archived by the owner on Jul 26, 2022. It is now read-only.

PyTorch Failed to Build #4

Open
NiteKat opened this issue Dec 9, 2018 · 3 comments

Comments

@NiteKat

NiteKat commented Dec 9, 2018

Following the guide at https://torchcraft.github.io/TorchCraftAI/docs/install-windows.html, I am getting an error when running the command "python setup.py build". I am running the command from the pytorch folder. Both CUDA 10 and CUDA 9 (with patches) are installed. I've been told that CUDA 10 is not compatible, but the CUDA link in the guide originally brought me to version 10.

This is the last output of the command:
-- Configuring incomplete, errors occurred!
See also "E:/Program Files (x86)/BWAPI/CherryPi/TorchCraftAI/3rdparty/pytorch/build/CMakeFiles/CMakeOutput.log".
See also "E:/Program Files (x86)/BWAPI/CherryPi/TorchCraftAI/3rdparty/pytorch/build/CMakeFiles/CMakeError.log".

(base) E:\Program Files (x86)\BWAPI\CherryPi\TorchCraftAI\3rdparty\pytorch\build>IF ERRORLEVEL 1 exit 1
Failed to run 'tools\build_pytorch_libs.bat --use-cuda --use-fbgemm --use-nnpack --use-mkldnn --use-qnnpack caffe2'

Attached are the two log files mentioned in the error output.

CMakeError.log
CMakeOutput.log
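
(Side note, since both CUDA 9 and CUDA 10 are installed: it may be worth checking which toolkit the build actually picks up, because the first nvcc.exe on PATH typically wins during configuration. A minimal check from the same cmd prompt, paths being whatever your PATH contains:)

```shell
REM List every nvcc.exe visible on PATH; the first entry is the one CMake will find.
where nvcc
REM Print the version banner of that selected toolkit.
nvcc --version
```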

@ebetica
Contributor

ebetica commented Dec 10, 2018

Is it possible to paste the full output of the command? If it's hard for you to get the full output, the last page or two might be informative too. The logs don't seem useful to me; most of them are just compiling test programs to check whether compiler features exist. Alternatively, you can search the PyTorch issues to see if you can find anything similar to your case.

@NiteKat
Author

NiteKat commented Dec 13, 2018

Unfortunately, some of the output gets cut off because there is so much of it, but here is what I can access after the command finishes running.

-- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_sequential - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_sequential - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md - pthread]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - pthread]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl - guide - pthread - m]
-- Library mkl: not found
-- MKL library not found
-- Checking for [Accelerate]
-- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND
-- Checking for [vecLib]
-- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND
-- Checking for [openblas]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [openblas - pthread]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [libopenblas]
-- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran - pthread]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [acml - gfortran]
-- Library acml: BLAS_acml_LIBRARY-NOTFOUND
-- Checking for [ptf77blas - atlas - gfortran]
-- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND
-- Checking for [blas]
-- Library blas: BLAS_blas_LIBRARY-NOTFOUND
-- Cannot find a library with BLAS API. Not using BLAS.
CMake Warning at cmake/Dependencies.cmake:129 (message):
Target platform "Windows" is not supported in QNNPACK. Supported platforms
are Android, iOS, Linux, and macOS. Turn this warning off by
USE_QNNPACK=OFF.
Call Stack (most recent call first):
CMakeLists.txt:200 (include)

CMake Warning at cmake/External/nnpack.cmake:21 (message):
NNPACK not supported on MSVC yet. Turn this warning off by USE_NNPACK=OFF.
Call Stack (most recent call first):
cmake/Dependencies.cmake:195 (include)
CMakeLists.txt:200 (include)

CMake Warning at cmake/Dependencies.cmake:205 (message):
Not compiling with NNPACK. Suppress this warning with -DUSE_NNPACK=OFF
Call Stack (most recent call first):
CMakeLists.txt:200 (include)

-- Found PythonInterp: E:/ProgramData/Anaconda3/python.exe (found version "3.7")
-- git Version: v1.4.0-505be96a
-- Version: 1.4.0
-- Performing Test HAVE_STD_REGEX -- success
-- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_STEADY_CLOCK -- success
-- Using third party subdirectory Eigen.
Python 3.7.0
-- Setting Python's include dir to E:\ProgramData\Anaconda3\include from distutils.sysconfig
-- Found PythonInterp: E:/ProgramData/Anaconda3/python.exe (found suitable version "3.7", minimum required is "2.7")
-- NumPy ver. 1.15.4 found (include: E:/ProgramData/Anaconda3/lib/site-packages/numpy/core/include)
-- Could NOT find pybind11 (missing: pybind11_DIR)
-- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR)
-- Using third_party/pybind11.
-- Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.0 (found suitable version "10.0", minimum required is "7.0")
-- Caffe2: CUDA detected: 10.0
-- Caffe2: CUDA nvcc is: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.0/bin/nvcc.exe
-- Caffe2: CUDA toolkit directory: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.0
-- Caffe2: Header version is: 10.0
CMake Error at cmake/public/cuda.cmake:123 (file):
file failed to open for reading (No such file or directory):

\=//cudnn.h

Call Stack (most recent call first):
cmake/Dependencies.cmake:626 (include)
CMakeLists.txt:200 (include)

-- Found cuDNN: v? (include: =/, library: =/)
-- Automatic GPU detection failed. Building for common architectures.
-- Autodetected CUDA architecture(s): 3.0;3.5;5.0;5.2;6.0;6.1;7.0;7.0+PTX
-- Added CUDA NVCC flags for: -gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70
-- Could NOT find CUB (missing: CUB_INCLUDE_DIR)

-- ******** Summary ********
-- CMake version : 3.12.2
-- CMake command : E:/ProgramData/Anaconda3/Library/bin/cmake.exe
-- System : Windows
-- C++ compiler : E:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.11.25503/bin/HostX86/x64/cl.exe
-- C++ compiler version : 19.11.25548.2
-- CXX flags : /EHa
-- Build type : Release
-- Compile definitions :
-- CMAKE_PREFIX_PATH :
-- CMAKE_INSTALL_PREFIX : E:/Program Files (x86)/BWAPI/CherryPi/TorchCraftAI/3rdparty/pytorch/torch/lib/tmp_install
-- CMAKE_MODULE_PATH : E:/Program Files (x86)/BWAPI/CherryPi/TorchCraftAI/3rdparty/pytorch/cmake/Modules;E:/Program Files (x86)/BWAPI/CherryPi/TorchCraftAI/3rdparty/pytorch/cmake/public/../Modules_CUDA_fix

-- ONNX version : 1.3.0
-- ONNX NAMESPACE : onnx_torch
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNX_USE_LITE_PROTO : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF

-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_ONNX_PYTHON : OFF
-- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor
-- Removing -DNDEBUG from compile flags
-- Compiling with OpenMP support
-- MAGMA not found. Compiling without MAGMA support
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- AVX compiler support found
-- AVX2 compiler support found
-- Atomics: using MSVC intrinsics
-- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_sequential - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_sequential - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md - pthread]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - pthread]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl - guide - pthread - m]
-- Library mkl: not found
-- MKL library not found
-- Checking for [Accelerate]
-- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND
-- Checking for [vecLib]
-- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND
-- Checking for [openblas]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [openblas - pthread]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [libopenblas]
-- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran - pthread]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [acml - gfortran]
-- Library acml: BLAS_acml_LIBRARY-NOTFOUND
-- Checking for [ptf77blas - atlas - gfortran]
-- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND
-- Checking for [blas]
-- Library blas: BLAS_blas_LIBRARY-NOTFOUND
-- Cannot find a library with BLAS API. Not using BLAS.
-- LAPACK requires BLAS
-- Cannot find a library with LAPACK API. Not using LAPACK.
-- Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.0 (found suitable version "10.0", minimum required is "5.5")
disabling ROCM because NOT USE_ROCM is set
-- MIOpen not found. Compiling without MIOpen support
-- MKL-DNN needs omp 3+ which is not supported in MSVC so far
-- MKL-DNN needs omp 3+ which is not supported in MSVC so far
CMake Warning at cmake/Dependencies.cmake:1302 (MESSAGE):
MKLDNN could not be found.
Call Stack (most recent call first):
CMakeLists.txt:200 (include)

-- Using python found in E:\ProgramData\Anaconda3\python.exe
-- Using python found in E:\ProgramData\Anaconda3\python.exe
-- NCCL operators skipped due to no CUDA support
-- Excluding ideep operators as we are not using ideep
-- Excluding image processing operators due to no opencv
-- Excluding video processing operators due to no opencv
-- MPI operators skipped due to no MPI support
-- Include Observer library
-- Using Lib/site-packages as python relative installation path
-- Automatically generating missing init.py files.
-- A previous caffe2 cmake run already created the init.py files.
CMake Warning at CMakeLists.txt:388 (message):
Generated cmake files are only fully tested if one builds with system glog,
gflags, and protobuf. Other settings may generate files that are not well
tested.

--
-- ******** Summary ********
-- General:
-- CMake version : 3.12.2
-- CMake command : E:/ProgramData/Anaconda3/Library/bin/cmake.exe
-- System : Windows
-- C++ compiler : E:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.11.25503/bin/HostX86/x64/cl.exe
-- C++ compiler version : 19.11.25548.2
-- BLAS : MKL
-- CXX flags : /EHa -openmp /MP /bigobj
-- Build type : Release
-- Compile definitions : ONNX_NAMESPACE=onnx_torch;_CRT_SECURE_NO_DEPRECATE=1;USE_MSC_ATOMICS=1
-- CMAKE_PREFIX_PATH :
-- CMAKE_INSTALL_PREFIX : E:/Program Files (x86)/BWAPI/CherryPi/TorchCraftAI/3rdparty/pytorch/torch/lib/tmp_install

-- TORCH_VERSION : 1.0.0
-- CAFFE2_VERSION : 1.0.0
-- BUILD_ATEN_MOBILE : OFF
-- BUILD_ATEN_ONLY : OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : ON
-- Python version : 3.7
-- Python executable : E:/ProgramData/Anaconda3/python.exe
-- Pythonlibs version : 3.7.0
-- Python library : E:/ProgramData/Anaconda3/libs/python37.lib
-- Python includes : E:\ProgramData\Anaconda3\include
-- Python site-packages: Lib/site-packages
-- BUILD_CAFFE2_OPS : ON
-- BUILD_SHARED_LIBS : ON
-- BUILD_TEST : ON
-- USE_ASAN : OFF
-- USE_CUDA : 1
-- CUDA static link : OFF
-- USE_CUDNN : ON
-- CUDA version : 10.0
-- cuDNN version : ?
-- CUDA root directory : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.0
-- CUDA library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.0/lib/x64/cuda.lib
-- cudart library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.0/lib/x64/cudart_static.lib
-- cublas library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.0/lib/x64/cublas.lib
-- cufft library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.0/lib/x64/cufft.lib
-- curand library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.0/lib/x64/curand.lib
-- cuDNN library : =/
-- nvrtc : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.0/lib/x64/nvrtc.lib
-- CUDA include path : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.0/include
-- NVCC executable : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.0/bin/nvcc.exe
-- CUDA host compiler : $(VCInstallDir)Tools/MSVC/$(VCToolsVersion)/bin/Host$(Platform)/$(PlatformTarget)
-- USE_TENSORRT : OFF
-- USE_ROCM : OFF
-- USE_EIGEN_FOR_BLAS : ON
-- USE_FBGEMM : OFF
-- USE_FFMPEG : OFF
-- USE_GFLAGS : OFF
-- USE_GLOG : OFF
-- USE_LEVELDB : OFF
-- USE_LITE_PROTO : OFF
-- USE_LMDB : OFF
-- USE_METAL : OFF
-- USE_MKL : OFF
-- USE_MKLDNN : OFF
-- USE_MOBILE_OPENGL : OFF
-- USE_NCCL : OFF
-- USE_NNPACK : OFF
-- USE_NUMPY : ON
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENCV : OFF
-- USE_OPENMP : OFF
-- USE_PROF : OFF
-- USE_QNNPACK : OFF
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- USE_DISTRIBUTED : OFF
-- Public Dependencies : Threads::Threads
-- Private Dependencies : cpuinfo;fp16;aten_op_header_gen;onnxifi_loader
-- Configuring incomplete, errors occurred!
See also "E:/Program Files (x86)/BWAPI/CherryPi/TorchCraftAI/3rdparty/pytorch/build/CMakeFiles/CMakeOutput.log".
See also "E:/Program Files (x86)/BWAPI/CherryPi/TorchCraftAI/3rdparty/pytorch/build/CMakeFiles/CMakeError.log".

(base) E:\Program Files (x86)\BWAPI\CherryPi\TorchCraftAI\3rdparty\pytorch\build>IF ERRORLEVEL 1 exit 1
Failed to run 'tools\build_pytorch_libs.bat --use-cuda --use-fbgemm --use-nnpack --use-mkldnn --use-qnnpack caffe2'

(base) E:\Program Files (x86)\BWAPI\CherryPi\TorchCraftAI\3rdparty\pytorch>

@ebetica
Contributor

ebetica commented Dec 14, 2018

It looks like the same error as here. We will document that you apparently need to install cuDNN; we were not aware PyTorch had this bug on Windows :(

#2 (comment)
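
(For anyone hitting the same `file failed to open for reading: \=//cudnn.h` error: CMake found CUDA but no cuDNN header, so the cuDNN path variables were empty. A sketch of the usual fix on Windows, with example paths only — substitute your actual cuDNN download location and CUDA version:)

```shell
REM Option 1: copy the cuDNN files into the CUDA toolkit directory so the build
REM finds them automatically alongside the toolkit.
copy "E:\Downloads\cudnn\cuda\bin\cudnn64_7.dll" "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\bin"
copy "E:\Downloads\cudnn\cuda\include\cudnn.h"   "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\include"
copy "E:\Downloads\cudnn\cuda\lib\x64\cudnn.lib" "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\lib\x64"

REM Option 2: point the PyTorch build at the extracted cuDNN directly via
REM environment variables before re-running "python setup.py build".
set CUDNN_INCLUDE_DIR=E:\Downloads\cudnn\cuda\include
set CUDNN_LIB_DIR=E:\Downloads\cudnn\cuda\lib\x64
set CUDNN_LIBRARY=E:\Downloads\cudnn\cuda\lib\x64\cudnn.lib
```

Note that the cuDNN build you download must match the CUDA toolkit version the build selects (10.0 in the log above).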
