Package Details: nvidia-docker 2.0.3-4

Git Clone URL: (read-only)
Package Base: nvidia-docker
Description: Build and run Docker containers leveraging NVIDIA GPUs
Upstream URL:
Keywords: cuda docker gpu nvidia
Licenses: BSD
Submitter: marcelhuber
Maintainer: vanyasem
Last Packager: vanyasem
Votes: 33
Popularity: 0.755942
First Submitted: 2016-07-26 09:17
Last Updated: 2018-06-07 04:50

Latest Comments


cboden commented on 2019-05-05 11:01

Hi, I could not get it to work (nvidia-418.56-11, docker-1:18.09.5-1):

nvidia-docker run --rm nvidia/cuda:10.1-base nvidia-smi

docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:424: container init caused \"process_linux.go:407: running prestart hook 1 caused \\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig --device=all --compute --utility --require=cuda>=10.1 brand=tesla,driver>=384,driver<385 brand=tesla,driver>=410,driver<411 --pid=3804 /var/lib/docker/overlay2/c9f8ff84653c0a76927a284df4e2392ee898c278d049c03cf6232c3e5aa20e25/merged]\\nnvidia-container-cli: ldcache error: process /sbin/ldconfig failed with error code: 1\\n\\"\"": unknown.

What I found in /var/log/nvidia-container-runtime-hook.log:

I0505 11:03:49.106724 4833 nvc_mount.c:115] mounting /dev/nvidia-uvm-tools at /var/lib/docker/overlay2/2585ca7db234b07e9816b60f01d631e71cc8081494094796de92b7cc599d152d/merged/dev/nvidia-uvm-tools
I0505 11:03:49.106738 4833 nvc_mount.c:349] whitelisting device node 234:1
I0505 11:03:49.106796 4833 nvc_mount.c:115] mounting /dev/nvidia0 at /var/lib/docker/overlay2/2585ca7db234b07e9816b60f01d631e71cc8081494094796de92b7cc599d152d/merged/dev/nvidia0
I0505 11:03:49.106835 4833 nvc_mount.c:312] mounting /proc/driver/nvidia/gpus/0000:01:00.0 at /var/lib/docker/overlay2/2585ca7db234b07e9816b60f01d631e71cc8081494094796de92b7cc599d152d/merged/proc/driver/nvidia/gpus/0000:01:00.0
I0505 11:03:49.106852 4833 nvc_mount.c:349] whitelisting device node 195:0
I0505 11:03:49.106866 4833 nvc_ldcache.c:326] executing /sbin/ldconfig from host at /var/lib/docker/overlay2/2585ca7db234b07e9816b60f01d631e71cc8081494094796de92b7cc599d152d/merged
E0505 11:03:49.107702 1 nvc_ldcache.c:357] could not start /sbin/ldconfig: process execution failed: operation not permitted
I0505 11:03:49.117656 4833 nvc.c:311] shutting down library context
I0505 11:03:49.117965 4843 driver.c:183] terminating driver service
I0505 11:03:49.160038 4833 driver.c:224] driver service terminated successfully
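The "could not start /sbin/ldconfig: operation not permitted" line is the actual failure: the prestart hook tries to execute the host's /sbin/ldconfig against the container rootfs and is refused. One workaround that has been suggested for this class of error (an assumption to verify on your own system, not a confirmed fix) is to point the runtime at the host's real ldconfig path in /etc/nvidia-container-runtime/config.toml:

```toml
# /etc/nvidia-container-runtime/config.toml (excerpt, hypothetical edit)
# The "@" prefix tells nvidia-container-cli to run the binary from the host.
# On Arch, /sbin is a symlink to /usr/bin, so naming /usr/bin/ldconfig
# directly may help; adjust to wherever ldconfig actually lives on your host.
[nvidia-container-cli]
ldconfig = "@/usr/bin/ldconfig"
```

After editing, restart the docker service and retry the nvidia-docker run command above.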

SilverMight commented on 2018-06-13 01:05

Getting this in docker.service logs

level=error msg="get nvidia_driver_396.24: error looking up volume plugin nvidia-docker: plugin \"nvidia-docker\" not found"

vanyasem commented on 2018-06-07 04:46

Changed the package to use original sources

vanyasem commented on 2018-05-24 19:07

Thank you, I will fix it as soon as possible

eschwartz commented on 2018-05-23 23:05

Also this is no longer an arch-dependent package, it's an any package.

eschwartz commented on 2018-05-23 18:14

Please package this properly, using original sources instead of rpms.

There's no reason to use some prebuilt rpm just to cp these files:

rpm spec which cp's those files:

vfbsilva commented on 2018-05-09 23:33

Problems running it here:

nvidia-docker run --rm nvidia/cuda nvidia-smi

docker: Error response from daemon: create nvidia_driver_396.24: VolumeDriver.Create: internal error, check logs for details. See 'docker run --help'.

lukeyeager commented on 2018-04-12 20:07

@whenov got it, thanks for the context. For now, I'm going to leave the packages as they are. For one reason, each package has its own version number upstream, and I feel like it's useful to see that in my local package list.

If others agree that they'd prefer a single package, please chime in!

whenov commented on 2018-03-16 01:53

@lukeyeager, it's just that these dependency packages are unlikely to be the dependencies of any other packages. And for those who prefer to use AUR manually with makepkg, it's more convenient to have a single package.

lukeyeager commented on 2018-03-15 16:29

@whenov That's definitely possible. I was just following the upstream packaging convention. What's the upside of having a single package?

I'm also happy to let this package die and let the nvidia-docker v1 package maintainer take over. I haven't found time this week to update all the out-of-date dependencies of this package anyway.