Package Details: caffe2 0.8.2_1.8.1-1

Git Clone URL: https://aur.archlinux.org/caffe2.git (read-only)
Package Base: caffe2
Description: A new lightweight, modular, and scalable deep learning framework (cpu only)
Upstream URL: https://caffe2.ai/
Keywords: ai artificial cuda intelligence nvidia
Licenses: BSD
Conflicts: caffe2-cpu, python-pytorch
Provides: caffe2-cpu
Replaces: caffe2-cpu
Submitter: dbermond
Maintainer: dbermond
Last Packager: dbermond
Votes: 0
Popularity: 0.000000
First Submitted: 2017-04-29 16:21 (UTC)
Last Updated: 2021-03-28 20:32 (UTC)

Required by (1)

Sources (43)

Pinned Comments

dbermond commented on 2018-08-22 18:15 (UTC)

Important notice:

This package now provides the non-cuda version (also known as the 'cpu only' build).

If you want caffe2 with cuda support, use the caffe2-cuda package. This new package naming scheme better reflects the package contents and matches the tensorflow package naming in the official repositories.

Latest Comments

dbermond commented on 2021-05-26 17:37 (UTC)

@roachsinai The caffe2 package is for cpu-only (non-cuda) computations. So, what you're calling caffe2-cpu is actually caffe2. There is a pinned comment at the top of this page explaining this.

roachsinai commented on 2021-05-26 17:26 (UTC)

@dbermond thanks for your reply.

So there are caffe2 and caffe2-cuda, but no caffe2-cpu?

dbermond commented on 2021-05-26 16:19 (UTC)

@roachsinai This is a split package that builds both caffe2 (non-cuda, cpu-only) and caffe2-cuda. That's why you see two cmake calls in the build() function.
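
For readers not familiar with split packages: a minimal sketch of how such a PKGBUILD is typically laid out is below. The flags, paths, and function bodies are illustrative assumptions, not the actual caffe2 PKGBUILD.

    pkgbase=caffe2
    pkgname=('caffe2' 'caffe2-cuda')

    build() {
        # cpu-only build tree
        cmake -B build -S pytorch -DCMAKE_INSTALL_PREFIX='/usr' -DUSE_CUDA='OFF'
        cmake --build build

        # cuda build tree
        cmake -B build-cuda -S pytorch -DCMAKE_INSTALL_PREFIX='/usr' -DUSE_CUDA='ON'
        cmake --build build-cuda
    }

    package_caffe2() {
        DESTDIR="$pkgdir" cmake --install build
    }

    package_caffe2-cuda() {
        DESTDIR="$pkgdir" cmake --install build-cuda
    }

Each package_*() function installs from its own build tree, so a single makepkg run produces both packages.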

roachsinai commented on 2021-05-26 05:44 (UTC)

Hi there. Since there is already a caffe2-cuda package, why does the PKGBUILD of caffe2 still include things like cmake -B build-cuda -S pytorch?

dbermond commented on 2021-03-06 17:09 (UTC)

@petronny This is now fixed.

petronny commented on 2020-08-01 13:56 (UTC)

[ 96%] Building CXX object binaries/CMakeFiles/make_cifar_db.dir/make_cifar_db.cc.o
/usr/bin/ld: warning: libQt5Test.so.5, needed by /usr/lib/libopencv_highgui.so.4.4, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libQt5OpenGL.so.5, needed by /usr/lib/libopencv_highgui.so.4.4, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libQt5Widgets.so.5, needed by /usr/lib/libopencv_highgui.so.4.4, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libQt5Gui.so.5, needed by /usr/lib/libopencv_highgui.so.4.4, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libQt5Core.so.5, needed by /usr/lib/libopencv_highgui.so.4.4, not found (try using -rpath or -rpath-link)
/usr/bin/ld: /usr/lib/libopencv_highgui.so.4.4: undefined reference to `QWidget::showNormal()@Qt_5'
/usr/bin/ld: /usr/lib/libopencv_highgui.so.4.4: undefined reference to `QTimer::start(int)@Qt_5'
/usr/bin/ld: /usr/lib/libopencv_highgui.so.4.4: undefined reference to `QMutex::lock()@Qt_5'
/usr/bin/ld: /usr/lib/libopencv_highgui.so.4.4: undefined reference to `QAbstractSlider::setPageStep(int)@Qt_5'

Please add missing dependencies.
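
For reference, the usual way to chase such a warning is to find which package owns the missing library and make sure that package is listed as a dependency. A sketch; qt5-base as the owning package is my assumption here, and the actual fix applied to this package may have been different:

    # find which package ships the missing library (needs a synced file database: pacman -Fy)
    pacman -F libQt5Core.so.5

    # then declare that package in the PKGBUILD, for example:
    depends+=('qt5-base')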

dbermond commented on 2019-06-29 18:41 (UTC)

@sleeping No need to delete caffe2 for now, because python-pytorch and caffe2 are different packages. caffe2 can be used by people who want only it, without pytorch.

And pytorch does not provide the caffe2 binary executables.

sleeping commented on 2019-06-24 09:22 (UTC)

Caffe2 was merged into PyTorch.

Should this package cease to exist?

dbermond commented on 2018-10-25 03:40 (UTC)

@petronny Yes, libibverbs was merged into rdma-core. Package updated.
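
In PKGBUILD terms this kind of update is just a dependency rename. A sketch, assuming libibverbs appeared directly in the depends array (the other entries are omitted):

    # before: libibverbs was a separate AUR package
    # depends=('libibverbs')

    # after: the libraries are now shipped by rdma-core from the official repositories
    depends=('rdma-core')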

petronny commented on 2018-10-22 05:46 (UTC)

libibverbs no longer exists in the AUR now ... Should we use rdma-core instead?

dbermond commented on 2018-08-22 18:15 (UTC)

Important notice:

This package now provides the non-cuda version (also known as the 'cpu only' build).

If you want caffe2 with cuda support, use the caffe2-cuda package. This new package naming scheme better reflects the package contents and matches the tensorflow package naming in the official repositories.

dbermond commented on 2018-06-03 15:57 (UTC)

@petronny Thank you for pointing this out. Pytorch checksum updated.

petronny commented on 2018-06-03 14:29 (UTC)

Wrong checksum for pytorch-0.4.0.tar.gz. Please update it to f91c059710f802c91bed8207f2d461851b1bc2d44f7cd6e9aaa548392db9412f
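
For anyone carrying a local copy of the PKGBUILD, the checksums can be regenerated instead of edited by hand. A generic sketch, not necessarily how the maintainer updates them:

    # regenerate all source checksums in the PKGBUILD in place (from pacman-contrib)
    updpkgsums

    # or print fresh checksum arrays and paste them in manually
    makepkg -g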

dbermond commented on 2018-04-30 19:27 (UTC)

PyTorch released its first stable version containing Caffe2.

Package updated to caffe2 0.8.2 from pytorch stable 0.4.0. Note that this package does not contain pytorch, but caffe2 only.

dbermond commented on 2018-04-30 19:26 (UTC)

PyTorch released its first stable version containing Caffe2.

Package updated to caffe2 0.8.2 from pytorch stable 0.4.0. Note that this package does not contain pytorch, but caffe2 only.

Note: gcc5 is needed for building. Fortunately, gcc5 was moved from AUR to the [community] official repository. No need to compile gcc5 from AUR for the time being.

dbermond commented on 2018-04-23 18:01 (UTC)

I have backported a patch to enable building with the newest version of 'gcc' from the official repositories (currently gcc7). Package is now building fine with gcc7.
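
Backported patches are normally applied from the PKGBUILD's prepare() function. A minimal sketch with a hypothetical patch file name (the real patch in this package is named differently):

    prepare() {
        cd "${srcdir}/caffe2-${pkgver}"
        # hypothetical file name, for illustration only
        patch -Np1 -i "${srcdir}/caffe2-gcc7-build.patch"
    }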

dbermond commented on 2018-04-23 17:55 (UTC) (edited on 2018-04-23 17:55 (UTC) by dbermond)

I have added patches for building with cuda 9.1. Package is finally building fine again.

Important notes:

  • gcc5 from the AUR is currently needed for building. It does not build with cuda 9.1 + gcc6 (this is a known upstream issue). Be warned that gcc5 from the AUR takes a lot of time to compile.

  • The Caffe2 source code moved to PyTorch repository, but currently there is no stable version containing Caffe2 there. It is yet to be released by upstream.

  • The currently available stable version of Caffe2 does not have python3 support; it is available only on the git master branch. If you want python3 support right now, please use the caffe2-git or caffe2-cpu-git packages.

dbermond commented on 2018-04-12 15:16 (UTC)

@daquexian Yes, I know, but there is no stable release that supports python3 yet. Python3 support is present in git master only.

Unfortunately, caffe2-git (and also caffe2) currently does not compile because upstream has an nvcc issue with cuda 9.1. Details here: https://github.com/caffe2/caffe2/issues/1459 (and also here: https://github.com/caffe2/caffe2/issues/1636).

If you want python3 support for caffe2 right now, the only option is to use the package caffe2-cpu-git.

daquexian commented on 2018-04-02 06:37 (UTC)

@dbermond, Caffe2 now supports Python3: https://github.com/caffe2/caffe2/issues/1720

dbermond commented on 2018-01-26 14:25 (UTC)

@ricefan123 Thank you for reporting this. Now fixed.

ricefan123 commented on 2018-01-26 14:06 (UTC)

The checksum for thirdparty-protobuf-3.1.0.tar.gz failed. Please fix it. Thank you.

dbermond commented on 2018-01-25 14:24 (UTC)

@ricefan123 This is a known upstream issue. Currently, caffe2 does not build with CUDA 9, not even the git master version. Developers are aware.

Details here: https://github.com/caffe2/caffe2/issues/1459

Unfortunately, the package is broken and we should wait for an upstream fix. If you really want to use caffe2, you'll need to downgrade CUDA to version 8.0. Otherwise, use caffe (the previous generation).
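
If you do decide to downgrade, the usual options on Arch are the local package cache or the Arch Linux Archive. A sketch; the exact file name depends on which cuda 8.0 release you still have or download:

    # reinstall an older cuda package still sitting in the local cache
    sudo pacman -U /var/cache/pacman/pkg/cuda-8.0*-x86_64.pkg.tar.xz

    # or fetch one from https://archive.archlinux.org/packages/c/cuda/ and install it the
    # same way, then add 'IgnorePkg = cuda' to /etc/pacman.conf so it is not upgraded again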

ricefan123 commented on 2018-01-23 04:27 (UTC)

Compilation error occurs again.

[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_abs_op.cu.o
/home/ricefan123/tmp/yaourt-tmp-ricefan123/aur-caffe2-git/src/caffe2-git/caffe2/core/context_gpu.h: In destructor ‘caffe2::CUDAContext::~CUDAContext()’:
/home/ricefan123/tmp/yaourt-tmp-ricefan123/aur-caffe2-git/src/caffe2-git/caffe2/core/context_gpu.h:149:22: warning: throw will always call terminate() [-Wterminate]
/home/ricefan123/tmp/yaourt-tmp-ricefan123/aur-caffe2-git/src/caffe2-git/caffe2/core/context_gpu.h:149:22: note: in C++11 destructors default to noexcept
/usr/lib/gcc/x86_64-pc-linux-gnu/6.4.1/include/c++/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
/usr/lib/gcc/x86_64-pc-linux-gnu/6.4.1/include/c++/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
/usr/lib/gcc/x86_64-pc-linux-gnu/6.4.1/include/c++/tuple:495:244: error: wrong number of template arguments (4, should be 2)
/usr/lib/gcc/x86_64-pc-linux-gnu/6.4.1/include/c++/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple()’ not a return-statement

[... lengthy template instantiation backtraces omitted; the same tuple errors repeat for each instantiation ...]

CMake Error at caffe2_gpu_generated_abs_op.cu.o.Release.cmake:275 (message):
  Error generating file /home/ricefan123/tmp/yaourt-tmp-ricefan123/aur-caffe2-git/src/caffe2-git/build/caffe2/CMakeFiles/caffe2_gpu.dir/operators/./caffe2_gpu_generated_abs_op.cu.o

make[2]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/build.make:72: caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_abs_op.cu.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:754: caffe2/CMakeFiles/caffe2_gpu.dir/all] Error 2
make: *** [Makefile:141: all] Error 2

dbermond commented on 2017-09-29 15:10 (UTC) (edited on 2017-09-29 15:22 (UTC) by dbermond)

@bertptrs You're right! Fixed. Thank you for reporting this!

bertptrs commented on 2017-09-29 11:28 (UTC)

It appears that the checksum for thirdparty-protobuf-3.1.0.tar.gz is wrong. I detect it as 1176b093a05f2f4f8264d9d33f886255418af5c1007811328185f385ba6f9a7c.

dbermond commented on 2017-09-13 20:49 (UTC)

cuda 8.0.61-3 fixes the glibc 2.26 issue. This package is now building fine again.

dbermond commented on 2017-09-09 17:09 (UTC)

It seems that CUDA/NVCC 8.0.61 is not compatible with glibc 2.26. As a result, currently this package cannot be compiled with gpu support as it should be. Use caffe2-cpu instead (cpu-only, without gpu support) until there is a repository or upstream fix. Please leave a reply if you find a workaround.

dbermond commented on 2017-09-09 12:40 (UTC) (edited on 2017-09-10 00:40 (UTC) by dbermond)

@petronny Hi. Thank you for reporting this. I could reproduce the issue and identified the cause as being the gloo compilation, not nccl directly. Compiling without gloo support (-DUSE_GLOO:BOOL='OFF') solves the issue. Updating the gloo commit to the latest git master also solves it, since upstream gloo removed nccl from its third-party dependencies. But another issue has just appeared: both caffe2 and gloo fail to compile with the newly released glibc 2.26. Since this glibc 2.26 issue is an unsolved upstream bug, I will update the package only after it is solved, since it will not compile anyway for now.
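
For anyone hitting the same nccl_external error locally before the package update, the workaround above amounts to adding one flag to the cmake call in the PKGBUILD's build() function; the surrounding arguments below are illustrative:

    # disable gloo support so cmake never reaches the nccl_external download step
    cmake .. -DCMAKE_INSTALL_PREFIX='/usr' -DUSE_GLOO:BOOL='OFF'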

petronny commented on 2017-09-08 14:58 (UTC)

Without nccl installed, I get:

-- Found OpenMP_C: -fopenmp (found version "4.0")
-- Found OpenMP_CXX: -fopenmp (found version "4.0")
-- Adding -fopenmp
-- CUDA detected: 8.0
-- Automatic GPU detection failed. Building for all known architectures.
-- Added CUDA NVCC flags for: sm_20 sm_21 sm_30 sm_35 sm_50 sm_52 sm_60 sm_61
-- Found libcuda: /opt/cuda/lib64/stubs/libcuda.so
-- Found libnvrtc: /opt/cuda/lib64/libnvrtc.so
-- Found CUDNN: /opt/cuda/include
-- Found cuDNN: v7.0.1 (include: /opt/cuda/include, library: /opt/cuda/lib64/libcudnn.so)
-- Could NOT find CUB (missing: CUB_INCLUDE_DIR)
-- Could NOT find Gloo (missing: Gloo_INCLUDE_DIR Gloo_LIBRARY)
-- Found hiredis: /usr/include/hiredis
-- Found hiredis (include: /usr/include/hiredis, library: /lib/libhiredis.so)
-- MPI include path: /usr/include
-- MPI libraries: /usr/lib/openmpi/libmpi_cxx.so /usr/lib/openmpi/libmpi.so
-- Found CUDA: /opt/cuda (found suitable version "8.0", minimum required is "7.0")
-- CUDA detected: 8.0
-- Found libcuda: /opt/cuda/lib64/stubs/libcuda.so
-- Found libnvrtc: /opt/cuda/lib64/libnvrtc.so
CMake Error at /usr/share/cmake-3.9/Modules/ExternalProject.cmake:2010 (message):
  No download info given for 'nccl_external' and its source directory:
  /build/caffe2/src/caffe2-0.8.1/third_party/gloo/third-party/nccl
  is not an existing non-empty directory. Please specify one of:
   * SOURCE_DIR with an existing non-empty directory
   * URL
   * GIT_REPOSITORY
   * HG_REPOSITORY
   * CVS_REPOSITORY and CVS_MODULE
   * SVN_REVISION
   * DOWNLOAD_COMMAND
Call Stack (most recent call first):
  /usr/share/cmake-3.9/Modules/ExternalProject.cmake:2565 (_ep_add_download_command)
  third_party/gloo/cmake/External/nccl.cmake:16 (ExternalProject_Add)
  third_party/gloo/cmake/Dependencies.cmake:53 (include)
  third_party/gloo/CMakeLists.txt:44 (include)

Adding nccl back to depends works.

dbermond commented on 2017-08-22 20:20 (UTC)

@wangqr Thank you for reporting this. I can reproduce the issue. It seems that the caffe2 python dependencies have slightly changed from when I first checked them. I will be updating this package and the other caffe2 packages on the AUR to fix this issue.

wangqr commented on 2017-08-22 08:34 (UTC) (edited on 2017-08-22 08:35 (UTC) by wangqr)

After installing this package I tried to import it, but got the following error:

Python 2.7.13 (default, Jul 21 2017, 03:24:34)
[GCC 7.1.1 20170630] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from caffe2.python import workspace, model_helper
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/caffe2/python/workspace.py", line 14, in <module>
    from past.builtins import basestring
ImportError: No module named past.builtins
>>>

Installing python2-future resolves the issue.
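
For anyone else hitting this, the fix described above is a one-liner; the depends line shows how it would look as a packaging fix rather than a user-side install:

    # user-side workaround
    sudo pacman -S python2-future

    # packaging fix: declare it as a runtime dependency in the PKGBUILD
    depends+=('python2-future')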

dbermond commented on 2017-08-09 17:25 (UTC)

caffe2 0.8.1 adds support for the newly released cudnn 7.0.

dbermond commented on 2017-08-08 10:31 (UTC)

@petronny I have already "done something": https://aur.archlinux.org/cgit/aur.git/commit/?h=caffe2&id=894fb0890375500b0629e13ff2ad457244eaf68f That's the workaround that I could manage while a proper patch cannot be applied. It requires downgrading cudnn to 6.0.21. In the meantime you can use caffe2-cpu, which does not have a cudnn dependency.

petronny commented on 2017-08-08 07:09 (UTC)

Hi, the cudnn in [community] has been updated to 7.0.1. Please do something. The package won't build now.

pdrocaldeira commented on 2017-07-24 22:05 (UTC)

@dbermond Thank YOU! Worked just fine. :)

dbermond commented on 2017-07-21 01:57 (UTC)

@pdrocaldeira Thank you for reporting it. This should be fixed as of version 0.7.0-13. All other caffe2 packages were fixed too.

pdrocaldeira commented on 2017-07-19 02:05 (UTC)

Can't install. Tried the -git package as well. Getting this error: g++-5: error: unrecognized command line option ‘-fno-plt’

dbermond commented on 2017-04-30 16:50 (UTC)

@dydokamil "Currently, Caffe2 is supporting 2.7 only and python3 support is coming." https://github.com/caffe2/caffe2/issues/315 If you need python3 right now, please use caffe (the previous generation).

dydokamil commented on 2017-04-30 09:41 (UTC)

No python3? :(