Package Details: python-apex-git 0.1.r478-1

Git Clone URL: https://aur.archlinux.org/python-apex-git.git (read-only)
Package Base: python-apex-git
Description: A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch
Upstream URL: https://github.com/NVIDIA/apex
Licenses: BSD
Conflicts: python-apex
Provides: python-apex
Submitter: leomao
Maintainer: leomao
Last Packager: leomao
Votes: 0
Popularity: 0.000000
First Submitted: 2018-12-14 06:07
Last Updated: 2019-08-16 11:05

Required by (1)

Sources (1)

Latest Comments


petronny commented on 2019-08-16 03:59

There is one more thing to fix before the build passes.

    ==> Building Python 3
    Traceback (most recent call last):
      File "setup.py", line 5, in <module>
        from pip._internal import main as pipmain
    ModuleNotFoundError: No module named 'pip'
    ==> ERROR: A failure occurred in build().
        Aborting...

Please add python-pip to makedepends.
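
For reference, a minimal sketch of the requested change; any entries beyond the ones mentioned in these comments are an assumption:

    # hypothetical makedepends line; other entries may differ
    makedepends=('git' 'python-pip')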

leomao commented on 2019-08-13 07:42

@petronny you're right. Fixed.

petronny commented on 2019-08-13 05:53

    No CUDA runtime is found, using CUDA_HOME='/opt/cuda'
    Traceback (most recent call last):
      File "setup.py", line 64, in <module>
        check_cuda_torch_binary_vs_bare_metal(torch.utils.cpp_extension.CUDA_HOME)
      File "setup.py", line 43, in check_cuda_torch_binary_vs_bare_metal
        torch_binary_major = torch.version.cuda.split(".")[0]
    AttributeError: 'NoneType' object has no attribute 'split'

Please change python-pytorch to python-pytorch-cuda.
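
For reference, a quick way to confirm the mismatch: torch.version.cuda is None when the installed PyTorch was built without CUDA support, which is exactly what the check in setup.py trips over.

    # prints None on a CPU-only PyTorch build, a CUDA version string otherwise
    python -c "import torch; print(torch.version.cuda)"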

leomao commented on 2019-08-05 04:03

@petronny Thanks for the suggestion. I updated the PKGBUILD.

petronny commented on 2019-08-05 03:59

Also, git should be in makedepends.
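
A minimal sketch (any other entries the PKGBUILD needs are omitted here):

    # git is required to clone the sources of any -git package
    makedepends=('git')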

petronny commented on 2019-08-04 05:48

This shouldn't be an 'any' package, since it depends on cuda.
Please set arch to ('x86_64').
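
That is, in the PKGBUILD:

    arch=('x86_64')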

leomao commented on 2019-04-12 09:55

Please check https://github.com/NVIDIA/apex/issues/212. Currently, I don't have a solution that works with the pytorch/pytorch-cuda packages in the community repo...

For now, I compile pytorch master myself...

drr21 commented on 2019-04-09 16:05

I get this warning when I use apex.amp:

    Warning: multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was: ImportError('/usr/lib/python3.7/site-packages/amp_C.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC1ENS_14SourceLocationERKSs')
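
That undefined symbol demangles to a c10::Error constructor, which usually points to a C++ ABI mismatch between the compiled extension and the installed torch (see hottea's comment below). As a debugging aid, one can check which ABI torch was built with; the attribute exists in PyTorch 1.x builds:

    # prints True if torch was built with the new C++11 ABI, False otherwise
    python -c "import torch; print(torch._C._GLIBCXX_USE_CXX11_ABI)"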

hottea commented on 2019-03-22 08:41

@leomao see syncbn for a syncbn example. Actually, see this issue: it seems that pytorch appends -D_GLIBCXX_USE_CXX11_ABI=0 to the compiler flags by default, and I don't see a way to override it. According to pytorch's PKGBUILD, nothing modifies this flag, so I believe pytorch is built with -D_GLIBCXX_USE_CXX11_ABI=0, which is the default behavior of the official pytorch configuration. It should therefore be fine to build the apex extension with the same flag, aka -D_GLIBCXX_USE_CXX11_ABI=0. However, it's not.

I tried building apex with -D_GLIBCXX_USE_CXX11_ABI=1 by manually replacing every -D_GLIBCXX_USE_CXX11_ABI=0 with -D_GLIBCXX_USE_CXX11_ABI=1 in /usr/lib/python3.7/site-packages/torch/utils/cpp_extension.py, and it works as expected. However, one shouldn't have to modify cpp_extension.py when building apex with devtools, right?
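
A sketch of the manual workaround described above, assuming Python 3.7's site-packages path; editing the installed torch like this is a diagnostic hack, not something a clean devtools build can depend on:

    # flip the ABI flag that pytorch's build helper injects (test only!)
    sed -i 's/-D_GLIBCXX_USE_CXX11_ABI=0/-D_GLIBCXX_USE_CXX11_ABI=1/g' \
        /usr/lib/python3.7/site-packages/torch/utils/cpp_extension.py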

leomao commented on 2019-02-25 03:10

Hi @hottea, thanks for reporting the issue. Could you provide a code snippet for testing? I just checked that the examples and tests ran without errors.