Package Details: nvidia-docker 2.10.0-1

Git Clone URL: https://aur.archlinux.org/nvidia-docker.git (read-only)
Package Base: nvidia-docker
Description: Build and run Docker containers leveraging NVIDIA GPUs
Upstream URL: https://github.com/NVIDIA/nvidia-docker
Keywords: cuda docker gpu nvidia
Licenses: BSD
Submitter: marcelhuber
Maintainer: jshap (kiendang)
Last Packager: kiendang
Votes: 36
Popularity: 0.047547
First Submitted: 2016-07-26 09:17 (UTC)
Last Updated: 2022-03-26 04:24 (UTC)

Pinned Comments

jshap commented on 2019-08-17 01:14 (UTC) (edited on 2019-08-19 15:51 (UTC) by jshap)

This package is now deprecated upstream: you can now use nvidia-container-toolkit together with Docker 19.03's new native GPU support to run NVIDIA-accelerated Docker containers without requiring nvidia-docker. I'm keeping the package alive for now because it still works, but in the future it may become fully unsupported upstream.

For more info, see: https://wiki.archlinux.org/index.php/Docker#Run_GPU_accelerated_Docker_containers_with_NVIDIA_GPUs
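As a quick illustration of that replacement setup (a sketch only; it assumes an AUR helper such as yay and the public nvidia/cuda image, so adjust to taste):

# install the toolkit and restart the docker daemon
yay -S nvidia-container-toolkit
sudo systemctl restart docker

# test GPU access with Docker 19.03's native --gpus flag
docker run --rm --gpus all nvidia/cuda:10.1-base nvidia-smi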

Latest Comments

ruro commented on 2019-08-07 21:40 (UTC)

Can confirm, just changing PKGBUILD to use depends=(docker nvidia-container-runtime) worked perfectly for me.
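For anyone patching this locally, the steps are roughly as follows (a sketch, assuming a clean clone of this AUR repo):

git clone https://aur.archlinux.org/nvidia-docker.git
cd nvidia-docker
# edit PKGBUILD so the depends line reads:
#   depends=(docker nvidia-container-runtime)
makepkg -si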

kiendang commented on 2019-07-30 00:34 (UTC)

@vanyasem @jshap70 if I'm not wrong, the correct dependencies should be depends=(docker nvidia-container-runtime). nvidia-container-runtime already depends on nvidia-container-toolkit. This follows the dependencies in the official packages for Ubuntu/CentOS.

jshap commented on 2019-07-29 16:38 (UTC)

@vanyasem you need to update your PKGBUILD to use depends=(docker nvidia-container-runtime nvidia-container-toolkit) or else new installs will continue to be broken.

lesto commented on 2019-07-29 09:27 (UTC)

This depends on nvidia-container-runtime, which depends on nvidia-container-toolkit, which conflicts with nvidia-docker.

jshap commented on 2019-07-28 01:45 (UTC)

this package is becoming deprecated, see: https://github.com/NVIDIA/nvidia-container-runtime/releases/tag/3.1.0

In its place you can use Docker 19.03's native GPU support via the --gpus flag, provided by the new nvidia-container-toolkit package.

For now nvidia-docker should continue to work; however, "in the future" the package will no longer be supported. See https://github.com/NVIDIA/nvidia-docker for all of the info.

cboden commented on 2019-05-05 11:01 (UTC) (edited on 2019-05-05 11:13 (UTC) by cboden)

Hi, I could not get it to work (nvidia-418.56-11, docker-1:18.09.5-1):

nvidia-docker run --rm nvidia/cuda:10.1-base nvidia-smi

docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:424: container init caused \"process_linux.go:407: running prestart hook 1 caused \\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig --device=all --compute --utility --require=cuda>=10.1 brand=tesla,driver>=384,driver<385 brand=tesla,driver>=410,driver<411 --pid=3804 /var/lib/docker/overlay2/c9f8ff84653c0a76927a284df4e2392ee898c278d049c03cf6232c3e5aa20e25/merged]\\nnvidia-container-cli: ldcache error: process /sbin/ldconfig failed with error code: 1\\n\\"\"": unknown.

What I found in /var/log/nvidia-container-runtime-hook.log:

I0505 11:03:49.106724 4833 nvc_mount.c:115] mounting /dev/nvidia-uvm-tools at /var/lib/docker/overlay2/2585ca7db234b07e9816b60f01d631e71cc8081494094796de92b7cc599d152d/merged/dev/nvidia-uvm-tools
I0505 11:03:49.106738 4833 nvc_mount.c:349] whitelisting device node 234:1
I0505 11:03:49.106796 4833 nvc_mount.c:115] mounting /dev/nvidia0 at /var/lib/docker/overlay2/2585ca7db234b07e9816b60f01d631e71cc8081494094796de92b7cc599d152d/merged/dev/nvidia0
I0505 11:03:49.106835 4833 nvc_mount.c:312] mounting /proc/driver/nvidia/gpus/0000:01:00.0 at /var/lib/docker/overlay2/2585ca7db234b07e9816b60f01d631e71cc8081494094796de92b7cc599d152d/merged/proc/driver/nvidia/gpus/0000:01:00.0
I0505 11:03:49.106852 4833 nvc_mount.c:349] whitelisting device node 195:0
I0505 11:03:49.106866 4833 nvc_ldcache.c:326] executing /sbin/ldconfig from host at /var/lib/docker/overlay2/2585ca7db234b07e9816b60f01d631e71cc8081494094796de92b7cc599d152d/merged
E0505 11:03:49.107702 1 nvc_ldcache.c:357] could not start /sbin/ldconfig: process execution failed: operation not permitted
I0505 11:03:49.117656 4833 nvc.c:311] shutting down library context
I0505 11:03:49.117965 4843 driver.c:183] terminating driver service
I0505 11:03:49.160038 4833 driver.c:224] driver service terminated successfully
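In case it helps others hitting the same ldcache error: one workaround reported for this class of failure (an assumption on my part, not something verified in this thread) is to point nvidia-container-cli at the distro's real ldconfig path in /etc/nvidia-container-runtime/config.toml:

[nvidia-container-cli]
# the shipped default is usually "@/sbin/ldconfig"; on Arch, /sbin is only a symlink
ldconfig = "@/usr/bin/ldconfig"

and then restart the docker service.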

SilverMight commented on 2018-06-13 01:05 (UTC) (edited on 2018-06-13 01:05 (UTC) by SilverMight)

Getting this in docker.service logs

level=error msg="get nvidia_driver_396.24: error looking up volume plugin nvidia-docker: plugin \"nvidia-docker\" not found"

vanyasem commented on 2018-06-07 04:46 (UTC)

Changed the package to use original sources

vanyasem commented on 2018-05-24 19:07 (UTC)

Thank you, I will fix it as soon as possible

eschwartz commented on 2018-05-23 23:05 (UTC)

Also, this is no longer an arch-dependent package; it's an 'any' package.

eschwartz commented on 2018-05-23 18:14 (UTC)

Please package this properly, using original sources instead of rpms.

There's no reason to use some prebuilt rpm just to cp these files:

https://github.com/NVIDIA/nvidia-docker/blob/master/nvidia-docker
https://github.com/NVIDIA/nvidia-docker/blob/master/daemon.json
https://github.com/NVIDIA/nvidia-docker/blob/master/LICENSE

rpm spec which cp's those files: https://github.com/NVIDIA/nvidia-docker/blob/master/rpm/SPECS/nvidia-docker2.spec
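For illustration, a source-based PKGBUILD for those three files could look roughly like this (a sketch only: the version, tag layout, checksums, and install paths are assumptions, not taken from this package):

pkgname=nvidia-docker
pkgver=2.0.3   # hypothetical version, for the sake of the example
pkgrel=1
pkgdesc='Build and run Docker containers leveraging NVIDIA GPUs'
arch=('any')
url='https://github.com/NVIDIA/nvidia-docker'
license=('BSD')
depends=('docker' 'nvidia-container-runtime')
backup=('etc/docker/daemon.json')
source=("https://raw.githubusercontent.com/NVIDIA/nvidia-docker/v${pkgver}/nvidia-docker"
        "https://raw.githubusercontent.com/NVIDIA/nvidia-docker/v${pkgver}/daemon.json"
        "https://raw.githubusercontent.com/NVIDIA/nvidia-docker/v${pkgver}/LICENSE")
sha256sums=('SKIP' 'SKIP' 'SKIP')   # replace with real checksums

package() {
  # wrapper script, docker daemon config registering the nvidia runtime, and license
  install -Dm755 nvidia-docker "${pkgdir}/usr/bin/nvidia-docker"
  install -Dm644 daemon.json "${pkgdir}/etc/docker/daemon.json"
  install -Dm644 LICENSE "${pkgdir}/usr/share/licenses/${pkgname}/LICENSE"
}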

vfbsilva commented on 2018-05-09 23:33 (UTC) (edited on 2018-05-09 23:33 (UTC) by vfbsilva)

Problems to run here:

nvidia-docker run --rm nvidia/cuda nvidia-smi

docker: Error response from daemon: create nvidia_driver_396.24: VolumeDriver.Create: internal error, check logs for details. See 'docker run --help

lukeyeager commented on 2018-04-12 20:07 (UTC)

@whenov got it, thanks for the context. For now, I'm going to leave the packages as they are. For one reason, each package has its own version number upstream, and I feel like it's useful to see that in my local package list.

If others agree that they'd prefer a single package, please chime in!

whenov commented on 2018-03-16 01:53 (UTC)

@lukeyeager, it's just that these dependency packages are unlikely to be the dependencies of any other packages. And for those who prefer to use AUR manually with makepkg, it's more convenient to have a single package.

lukeyeager commented on 2018-03-15 16:29 (UTC)

@whenov That's definitely possible. I was just following the upstream packaging convention. What's the upside of having a single package?

I'm also happy to let this package die and let the nvidia-docker v1 package maintainer take over. I haven't found time this week to update all the out-of-date dependencies of this package anyway.

whenov commented on 2018-03-15 16:05 (UTC)

Is it possible to merge nvidia-docker2 and its four dependencies into a single package?

mimoralea commented on 2017-08-23 13:17 (UTC)

@simon_sjw, your solution works nicely. Thanks for sharing.

simon_sjw commented on 2017-08-07 00:28 (UTC) (edited on 2017-08-25 12:04 (UTC) by simon_sjw)

On installing, testing with nvidia-docker run --rm nvidia/cuda nvidia-smi failed with an "invalid cross-device link" error. (When this is working correctly, you should get the status of your GPU as seen from within Docker.)

To fix: use systemctl status nvidia-docker to find the location of the service file. Then open it with your favourite editor and change

ExecStart=/usr/bin/nvidia-docker-plugin -s $SOCK_DIR

to

ExecStart=/usr/bin/nvidia-docker-plugin -s $SOCK_DIR -d /usr/local/nvidia-driver

The -d flag lets you ensure that the NVIDIA driver installation lives on the same partition as the volume directory of the Docker plugin (this is a known issue with nvidia-docker). See https://github.com/NVIDIA/nvidia-docker/issues/216 and https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker-plugin#known-limitations

After making this change, remember to stop, reload and start the service (or just reboot your machine).
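A tidier way to make the same change so it survives package upgrades is a systemd drop-in instead of editing the unit file in place (a sketch, assuming the unit is named nvidia-docker.service):

sudo systemctl edit nvidia-docker.service

and in the drop-in:

[Service]
# clear the packaged ExecStart before setting the new one with the -d flag
ExecStart=
ExecStart=/usr/bin/nvidia-docker-plugin -s $SOCK_DIR -d /usr/local/nvidia-driver

then sudo systemctl restart nvidia-docker.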

marcelhuber commented on 2017-02-24 09:56 (UTC)

@xoryouyou: Thank you for adapting the corresponding lines.

xoryouyou commented on 2017-02-13 11:40 (UTC)

Newest version is https://github.com/NVIDIA/nvidia-docker/releases/tag/v1.0.0

Changes needed to be made:

pkgver=1.0.0
source_x86_64=(https://github.com/NVIDIA/nvidia-docker/releases/download/v${pkgver}/nvidia-docker-${pkgver}-1.${arch}.rpm)
sha256sums_x86_64=('6669686952a190557ceccb272c97e9fc11f744d8e949e78c3a5854517a39e958')

tiagoshibata commented on 2016-10-02 14:21 (UTC)

Docker should be a dependency:

nvidia-docker run --name digits -d -p 5000:34448 nvidia-docker
2016/10/02 11:16:02 Error: failed to run docker command