Package Details: nvidia-container-toolkit 1.9.0-1

Git Clone URL: https://aur.archlinux.org/nvidia-container-toolkit.git (read-only)
Package Base: nvidia-container-toolkit
Description: NVIDIA container runtime toolkit
Upstream URL: https://github.com/NVIDIA/nvidia-container-toolkit
Keywords: docker nvidia nvidia-docker runc
Licenses: Apache
Conflicts: nvidia-container-runtime, nvidia-container-runtime-hook
Replaces: nvidia-container-runtime-hook
Submitter: jshap
Maintainer: jshap (kiendang)
Last Packager: kiendang
Votes: 23
Popularity: 0.171098
First Submitted: 2019-07-28 01:19 (UTC)
Last Updated: 2022-03-26 04:23 (UTC)

Pinned Comments

jshap commented on 2019-07-28 01:43 (UTC) (edited on 2019-07-29 22:32 (UTC) by jshap)

see the release notes here for why this exists: https://github.com/NVIDIA/nvidia-container-runtime/releases/tag/3.1.0

tl;dr: nvidia-docker is deprecated because docker now has native gpu support, which this package is required to use. :)
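For example, once docker and this package are installed, a basic GPU smoke test (using the same CUDA image that shows up in later comments) would look something like:

$ docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi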

Latest Comments

acxz commented on 2022-02-13 04:58 (UTC) (edited on 2022-02-13 16:55 (UTC) by acxz)

Thanks @lahwaacz that fixed it! Here is the upstream issue: https://github.com/golang/go/issues/43505

lahwaacz commented on 2022-02-12 15:30 (UTC)

LTO needs to be disabled with options=('!lto') in the PKGBUILD. See https://lists.archlinux.org/pipermail/arch-dev-public/2021-December/030603.html
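For reference, a minimal sketch of what that looks like in a PKGBUILD (exact placement alongside the existing arrays is up to the maintainer):

# PKGBUILD (sketch): disable LTO so the Go linker doesn't receive -flto
options=('!lto')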

acxz commented on 2022-02-11 03:07 (UTC) (edited on 2022-02-11 03:08 (UTC) by acxz)

I am getting the following error when building:

==> Starting build()...
# github.com/NVIDIA/nvidia-container-toolkit/cmd/nvidia-container-toolkit
flag provided but not defined: -flto
usage: link [options] main.o
...

gnaggnoyil commented on 2021-12-29 09:40 (UTC)

@rezanmz Setting the corresponding environment variables in the right place in the PKGBUILD, rather than using go env -w, is better since it won't permanently change the user's Go environment settings after makepkg finishes. Of course, whether and how to change the PKGBUILD is up to the maintainer, though.
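For example, a rough sketch of what that could look like (the package's real build() has more steps and flags than shown here; this only illustrates where the variable would be set):

build() {
  # scope the setting to this build instead of persisting it with `go env -w`
  export GO111MODULE=auto
  GOPATH="${srcdir}/gopath" go build -v "$pkgname"
}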

rezanmz commented on 2021-12-28 20:47 (UTC)

@gnaggnoyil Thank you! I executed the command that you mentioned and it worked like a charm! I think this should be added to the PKGBUILD.

gnaggnoyil commented on 2021-12-28 19:47 (UTC)

@bsautner I'm sure the error mentioned by @rezanmz is not about missing dependencies. I believe it's due to GO111MODULE not being set to auto in go env; you probably didn't realize that you had executed go env -w GO111MODULE=auto before.

bsautner commented on 2021-11-18 14:09 (UTC)

This installed fine for me; you just have to install the dependencies listed in the dependency section here first.

rezanmz commented on 2021-10-08 11:50 (UTC)

@jshap

I can't build this package:


patching file config/config.toml.centos
cannot find package "github.com/NVIDIA/nvidia-container-toolkit/pkg" in any of:
        /usr/lib/go/src/github.com/NVIDIA/nvidia-container-toolkit/pkg (from $GOROOT)
        /home/reza/.cache/yay/nvidia-container-toolkit/src/gopath/src/github.com/NVIDIA/nvidia-container-toolkit/pkg (from $GOPATH)
==> ERROR: A failure occurred in build().
    Aborting...
error making: nvidia-container-toolkit

Is there a fix?

ng0177 commented on 2021-08-15 10:29 (UTC)

As a half-savvy user, I am stuck as to what to do in terms of step-by-step instructions. It is also fine to wait for an upstream fix without any manual intervention, as long as it comes within the next few months. I intend to run SimNet. Appreciate your good work.

lahwaacz commented on 2021-08-10 19:23 (UTC)

@jshap Have you changed your mind about disclosing the alternative workaround to users in the post_install message?

jshap commented on 2021-08-10 19:11 (UTC)

@lahwaacz I'm gonna assume you're having a bad day and give you the benefit of the doubt on the weirdly aggressive tone.

Firstly, I was specifically talking to the user before you about how to eliminate the error with their current setup, without even needing to reinstall, since they had already changed their kernel command-line arguments. I should have said "you can do like @lahwaacz said or you can ...", but I figured it was somewhat obvious.

In the general case I still believe the config file change by default is the correct one, as it doesn't require a system configuration change to run the program. I already answered this, and nothing in the program has changed that would change my mind. Feel free to disagree, but please at least be respectful about it.

lahwaacz commented on 2021-08-10 18:53 (UTC) (edited on 2021-08-10 18:55 (UTC) by lahwaacz)

@jshap That's not at all what I mentioned as the two alternatives. Congratulations on completely missing what your own post_install script says...

Again, the things from https://aur.archlinux.org/cgit/aur.git/commit/?h=nvidia-container-toolkit&id=269f783e9db636f6d19eca4f46b07308ba39c5df should be reverted in the PKGBUILD IMO, since even @jshap now seems to encourage the alternative solution (i.e. disabling the unified cgroup hierarchy).

jshap commented on 2021-08-10 14:27 (UTC)

Yes, like @lahwaacz said, you can either disable the patch from the AUR build, or edit /etc/nvidia-container-runtime/config.toml and either comment out no-cgroups = true or change it to no-cgroups = false.
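For the config.toml route, something like this should do it (a rough sketch; if the key in your file is commented out or spaced differently, adjust the pattern, and restart docker afterwards so the change takes effect):

# switch the toolkit back to using cgroups
sudo sed -i 's/^no-cgroups = true/no-cgroups = false/' /etc/nvidia-container-runtime/config.toml
sudo systemctl restart docker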

lahwaacz commented on 2021-08-02 19:35 (UTC)

@ng0177: read the post_install scriptlet added in https://aur.archlinux.org/cgit/aur.git/commit/?h=nvidia-container-toolkit&id=269f783e9db636f6d19eca4f46b07308ba39c5df, or revert the patch altogether, since you have disabled the unified cgroup hierarchy.

ng0177 commented on 2021-08-02 18:34 (UTC) (edited on 2021-08-02 18:37 (UTC) by ng0177)

/etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet loglevel=3 nowatchdog systemd.unified_cgroup_hierarchy=false"

sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=EndeavourOS-grub

docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
Failed to initialize NVML: Unknown Error

Have I missed anything towards the current workaround? Thanks!

lahwaacz commented on 2021-06-11 06:24 (UTC) (edited on 2021-06-11 06:25 (UTC) by lahwaacz)

@jshap Yes, you can't make assumptions about every user's system or requirements, which is also a reason why you shouldn't set no-cgroups = true without even mentioning an alternative. People had to reboot to switch from cgroups v1 to v2 in systemd, so I don't think "It also does not require a reboot." is a valid argument to support this change. Ultimately, it is the user who should decide if they want cgroups v1 or no cgroups at all.

As for the development of this tool, AFAIK there is no roadmap except for a comment claiming that it will take "at least 9 months" (since January), so we shouldn't expect a proper solution before October. This does not sound like a short term to me...

jshap commented on 2021-06-10 18:44 (UTC) (edited on 2021-06-10 18:45 (UTC) by jshap)

@lahwaacz The reason I chose to turn off cgroups in the toolkit, rather than force the system to cgroups v1, was that I didn't want to make assumptions about every user's system or about the requirements of other setups on it. It also does not require a reboot.

You're correct that I could probably add a note explaining the kernel option too; however, it's already sufficiently documented in the links I attached. Eventually the tool will be rewritten to not require cgroup usage directly and instead just operate through runc, so they're both short-term solutions anyway.

lahwaacz commented on 2021-06-06 10:33 (UTC) (edited on 2021-06-06 10:34 (UTC) by lahwaacz)

What is the reason for this package to prefer the workaround using no-cgroups = true, thus forcing users to manually specify the devices exposed to the container, rather than instructing users to set systemd.unified_cgroup_hierarchy=false on the kernel command line? At least the post-install message should mention both options.

jshap commented on 2021-04-09 23:34 (UTC)

Thanks for all the replies. I will be going through these to decide what the best option is for a fix in the package soon.

HedgehogCode commented on 2021-04-09 09:21 (UTC) (edited on 2021-04-09 09:22 (UTC) by HedgehogCode)

Note that you can add the nvidia devices to the container after using the no-cgroups=true hack, as mentioned in this [1] comment:

$ docker run --gpus all --device /dev/nvidia0 --device /dev/nvidia-uvm --device /dev/nvidia-uvm-tools --device /dev/nvidiactl nvidia/cuda:11.0-base nvidia-smi
Fri Apr  9 09:15:37 2021       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.67       Driver Version: 460.67       CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce GTX 1050    Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   53C    P0    N/A /  N/A |      0MiB /  4042MiB |      0%      Default |
[...]

[1] https://github.com/NVIDIA/nvidia-docker/issues/1447#issuecomment-757034464

adjama commented on 2021-04-07 15:02 (UTC)

@GeorgeRaven, @jshap

Unfortunately I have to confirm your issue; I also cannot run nvidia-smi in my containers and get the same error, Failed to initialize NVML: Unknown Error.

So maybe there is more to it than just the cgroups issue?

GeorgeRaven commented on 2021-04-07 14:47 (UTC) (edited on 2021-04-07 15:03 (UTC) by GeorgeRaven)

Hey @jshap and @adjama, I haven't tried setting the systemd.unified_cgroup_hierarchy=0 option yet, as I want to be on-site for this change, but I just wanted to let you know that @adjama's suggestion does work in that the containers can now be built and run; however, trying to use nvidia-smi inside the container still fails.

$ sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
Failed to initialize NVML: Unknown Error
$ sudo docker run --gpus all -t nvidia/cuda:11.0-base nvidia-debugdump -l
Error: nvmlInit(): Unknown Error

But thanks for the help, guys, that helps a lot. Based on the discussions, I have a feeling that turning the unified cgroup hierarchy off is probably the way to go once I can do it in person. If that fails, I will try this method and look for any relevant logs that could help diagnose this unknown error.

adjama commented on 2021-04-07 14:29 (UTC) (edited on 2021-04-07 14:59 (UTC) by adjama)

@jshap I came across an issue which might be related. When starting/running a docker container with nvidia support (--gpus all) I got this error message:

Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:495: container init caused: Running hook #0:: error running hook: exit status 1,
stdout: , stderr: nvidia-container-cli: container error: cgroup subsystem devices not found: unknown

Looks like this is already known to nvidia [1], and this [2] might also be related.

However, as mentioned in [2], I could fix my issue by editing /etc/nvidia-container-runtime/config.toml and changing #no-cgroups=false to no-cgroups=true. After a restart of docker.service everything worked as usual. Hope this helps (a shell sketch of this edit follows the links).

[1] https://github.com/NVIDIA/nvidia-docker/issues/1447
[2] https://github.com/NVIDIA/libnvidia-container/issues/111
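In shell terms, the edit described above is roughly (a sketch using the key exactly as written in the comment; the spacing in your config.toml may differ, so adjust the pattern):

# enable the no-cgroups workaround, then restart docker so it takes effect
sudo sed -i 's/^#no-cgroups=false/no-cgroups=true/' /etc/nvidia-container-runtime/config.toml
sudo systemctl restart docker.service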

jshap commented on 2021-04-06 19:22 (UTC)

@GeorgeRaven Glad you figured out the symbol issue. I think the cgroup problem might be related to the systemd.unified_cgroup_hierarchy=0 kernel command-line option. I haven't had time to investigate it yet because I've been busy, but give it a shot.

GeorgeRaven commented on 2021-04-06 10:45 (UTC) (edited on 2021-04-06 12:17 (UTC) by GeorgeRaven)

Hey @jshap, I'm having an issue with an undefined symbol; do you have any idea what could be the cause, or have you seen this before? At first I thought it was just something missing in this machine's LD_CONFIG, but comparing it to other machines (which work as expected), they all contain the same entries. Here is an example error via docker:

$ sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:495: container init caused: Running hook #0:: error running hook: exit status 127, stdout: , stderr: /usr/bin/nvidia-container-cli: symbol lookup error: /usr/bin/nvidia-container-cli: undefined symbol: nvc_nvcaps_device_from_proc_path, version NVC_1.0: unknown.

running nvidia-container-cli directly

$ /usr/bin/nvidia-container-cli
/usr/bin/nvidia-container-cli: symbol lookup error: /usr/bin/nvidia-container-cli: undefined symbol: nvc_nvcaps_device_from_proc_path, version NVC_1.0

If not, don't worry; I just thought I'd ask before submitting an issue, since it could be packaging related, or you may know better about Arch specifics.

nvidia 460.67-5, nvidia-container-toolkit 1.4.2-1, docker 1:20.10.5-1

EDIT:

Upon further inspection I found that the issue was an outdated version of libnvidia-container on this particular machine, only to then run into "nvidia-container-cli: container error: cgroup subsystem devices not found: unknown".

jshap commented on 2021-03-10 20:18 (UTC)

@jakobsg that's a usage issue. see: https://github.com/NVIDIA/nvidia-docker/issues/586 and https://github.com/NVIDIA/nvidia-docker/wiki/Frequently-Asked-Questions#is-opengl-supported and https://gitlab.com/nvidia/container-images/samples/-/blob/master/opengl/ubuntu16.04/glxgears/Dockerfile

jakobsg commented on 2021-03-10 19:33 (UTC) (edited on 2021-03-10 19:34 (UTC) by jakobsg)

Is this supposed to let me run an application like glxgears from my docker container once installed? Because I still get the same message after installation:


$ glxgears
X Error of failed request:  BadValue (integer parameter out of range for operation)
  Major opcode of failed request:  152 (GLX)
  Minor opcode of failed request:  3 (X_GLXCreateContext)
  Value in failed request:  0x0
  Serial number of failed request:  35
  Current serial number in output stream:  36

Any thoughts?

FallenWarrior2k commented on 2020-11-06 09:29 (UTC)

Please install the OCI hook definition so this can be used by e.g. Podman users without having to manually download the hook file.

Patch with the necessary changes to the PKGBUILD:

diff --git a/PKGBUILD b/PKGBUILD
index 4d567eb..384c129 100644
--- a/PKGBUILD
+++ b/PKGBUILD
@@ -42,6 +42,7 @@ build() {

 package() {
   install -D -m755 "${_srcdir}/${pkgname}" "$pkgdir/usr/bin/${pkgname}"
+  install -D -m644 "${_srcdir}/oci-nvidia-hook.json" "$pkgdir/usr/share/containers/oci/hooks.d/00-oci-nvidia-hook.json"
   pushd "$pkgdir/usr/bin/"
   ln -sf "${pkgname}" "nvidia-container-runtime-hook"
   popd

kiendang commented on 2020-07-14 11:44 (UTC)

@yosunpeng you can edit the PKGBUILD and add GOPROXY=https://goproxy.cn to the go build command

GOPROXY=https://goproxy.cn \
GOPATH="${srcdir}/gopath" \
  go build -v \
  ...

yosunpeng commented on 2020-07-14 07:53 (UTC)

I got an error while installing this package.

go: github.com/BurntSushi/toml@v0.3.1: Get "https://proxy.golang.org/github.com/%21burnt%21sushi/toml/@v/v0.3.1.mod": dial tcp: lookup proxy.golang.org: Temporary failure in name resolution
==> ERROR: A failure occurred in build().
    Aborting...

It seems that China's GFW blocks golang.org. Is there any way to skip this step or use a mirror server? Please help, thank you!

kipsora commented on 2020-07-02 18:10 (UTC) (edited on 2020-07-02 18:11 (UTC) by kipsora)

I also modified the checksums by hand and also got stuck on the _srcdir issue. I finally got my hands dirty by changing _srcdir to nvidia-container-toolkit-${pkgver} (and had to manually decompress the tarball). But that's only a workaround. Hope the maintainer of this package fixes these issues soon.

hanielxx commented on 2020-07-02 07:40 (UTC)

I've fixed it by changing the sha256sums manually and rebuilding.

Another problem is that _srcdir should be "nvidia-container-toolkit-${pkgver}".
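i.e. roughly this in the PKGBUILD (sketch, rest of the file unchanged):

_srcdir="nvidia-container-toolkit-${pkgver}"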

hanielxx commented on 2020-07-02 05:28 (UTC) (edited on 2020-07-02 05:37 (UTC) by hanielxx)

The following error occurred when I tried to install it:

$ yay -S nvidia-container-toolkit
:: Checking for conflicts...
:: Checking for inner conflicts...
[Aur: 1]  nvidia-container-toolkit-1.1.2-1

  1 nvidia-container-toolkit                 (Build Files Exist)
==> Packages to cleanBuild?
==> [N]one [A]ll [Ab]ort [I]nstalled [No]tInstalled or (1 2 3, 1-3, ^4)
==> n
:: PKGBUILD up to date, Skipping (1/1): nvidia-container-toolkit
  1 nvidia-container-toolkit                 (Build Files Exist)
==> Diffs to show?
==> [N]one [A]ll [Ab]ort [I]nstalled [No]tInstalled or (1 2 3, 1-3, ^4)
==> n
:: Parsing SRCINFO (1/1): nvidia-container-toolkit
==> Making package: nvidia-container-toolkit 1.1.2-1 (2020年07月**日 *****)
==> Retrieving sources...
  -> Downloading v1.1.2.tar.gz...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   138  100   138    0     0     31      0  0:00:04  0:00:04 --:--:--    31
100   139  100   139    0     0     28      0  0:00:04  0:00:04 --:--:--  135k
100 1703k    0 1703k    0     0  19245      0 --:--:--  0:01:30 --:--:-- 15403
==> Validating source files with sha256sums...
    v1.1.2.tar.gz ... FAILED
==> ERROR: One or more files did not pass the validity check!
Error downloading sources: nvidia-container-toolkit

What's more, the sha256sums generated by makepkg -g are not consistent with those in the PKGBUILD.

What can I do to install it?

I tried the fix from "(SOLVED) error: One or more files did not pass the validity check!", but it also failed.

glvr182 commented on 2020-03-11 16:08 (UTC) (edited on 2020-03-11 16:09 (UTC) by glvr182)

When GO111MODULE is set to on, the build failed with the following error: can't load package: package nvidia-container-toolkit is not in GOROOT (/usr/lib/go/src/nvidia-container-toolkit)

A way to fix this is to set GO111MODULE to off during the build step:

GOPATH="${srcdir}/gopath" GO111MODULE=off go build -v \
                            -buildmode=pie \
                            -gcflags "all=-trimpath=${PWD}" \
                            -asmflags "all=-trimpath=${PWD}" \
                            -ldflags "-extldflags ${LDFLAGS}" \
                            "$pkgname"

hantian_pang commented on 2019-12-13 04:29 (UTC) (edited on 2019-12-13 04:29 (UTC) by hantian_pang)

error log like this:

/usr/bin/ld: /home/pang/.cache/pikaur/build/libnvidia-container/src/libnvidia-container-1.0.5/deps/usr/lib/libelf.a(elf_scn.o): in function `_libelf_load_section_headers':
/home/pang/.cache/pikaur/build/libnvidia-container/src/libnvidia-container-1.0.5/deps/src/elftoolchain-0.7.1/libelf/elf_scn.c:72: undefined reference to `_libelf_fsize'
/usr/bin/ld: /home/pang/.cache/pikaur/build/libnvidia-container/src/libnvidia-container-1.0.5/deps/usr/lib/libelf.a(gelf_fsize.o): in function `elf32_fsize':
/home/pang/.cache/pikaur/build/libnvidia-container/src/libnvidia-container-1.0.5/deps/src/elftoolchain-0.7.1/libelf/gelf_fsize.c:37: undefined reference to `_libelf_fsize'
/usr/bin/ld: /home/pang/.cache/pikaur/build/libnvidia-container/src/libnvidia-container-1.0.5/deps/usr/lib/libelf.a(gelf_fsize.o): in function `elf64_fsize':
/home/pang/.cache/pikaur/build/libnvidia-container/src/libnvidia-container-1.0.5/deps/src/elftoolchain-0.7.1/libelf/gelf_fsize.c:43: undefined reference to `_libelf_fsize'
/usr/bin/ld: /home/pang/.cache/pikaur/build/libnvidia-container/src/libnvidia-container-1.0.5/deps/usr/lib/libelf.a(gelf_fsize.o): in function `gelf_fsize':
/home/pang/.cache/pikaur/build/libnvidia-container/src/libnvidia-container-1.0.5/deps/src/elftoolchain-0.7.1/libelf/gelf_fsize.c:56: undefined reference to `_libelf_fsize'
/usr/bin/ld: /home/pang/.cache/pikaur/build/libnvidia-container/src/libnvidia-container-1.0.5/deps/usr/lib/libelf.a(libelf_ehdr.o): in function `_libelf_load_extended':
/home/pang/.cache/pikaur/build/libnvidia-container/src/libnvidia-container-1.0.5/deps/src/elftoolchain-0.7.1/libelf/libelf_ehdr.c:52: undefined reference to `_libelf_fsize'
/usr/bin/ld: /home/pang/.cache/pikaur/build/libnvidia-container/src/libnvidia-container-1.0.5/deps/usr/lib/libelf.a(libelf_ehdr.o):/home/pang/.cache/pikaur/build/libnvidia-container/src/libnvidia-container-1.0.5/deps/src/elftoolchain-0.7.1/libelf/libelf_ehdr.c:138: more undefined references to `_libelf_fsize' follow

jshap commented on 2019-11-06 06:43 (UTC)

@ruro no worries, I actually forgot that gcc-go exists. I was just trying to make things cleaner, the other flags should already be handling it anyways so it's really a non-issue :)

ruro commented on 2019-11-06 06:03 (UTC) (edited on 2019-11-06 06:03 (UTC) by ruro)

@jshap ah, yes, my bad. I thought that I had checked that I had the latest go, but apparently I must have misread the version.

Although I am indeed not using Arch but Manjaro, I don't think that is the issue.

I have core/gcc-go 9.2.0-3 installed instead of community/go 2:1.13.4-1. Apparently, core/gcc-go provides go=1.12.2.

jshap commented on 2019-11-06 02:17 (UTC)

@ruro Guessing you're not on Arch? That flag has been available since go 1.13 :(

I'll remove it though.

ruro commented on 2019-11-05 23:20 (UTC) (edited on 2019-11-05 23:23 (UTC) by ruro)

Latest version doesn't build for me with the following error:

flag provided but not defined: -trimpath
usage: go build [-o output] [-i] [build flags] [packages]
Run 'go help build' for details.

Edit: after removing line 33 of the PKGBUILD, it seems to build just fine.

jshap commented on 2019-11-04 16:56 (UTC)

@ecly good catch, it was using go install but should have been using go build.

should be fixed now :)

ecly commented on 2019-11-04 14:32 (UTC)

This PKGBUILD is not compatible with Go installations that use a local $GOBIN environment variable, as with go install the resulting binary will be located in that directory rather than in the directory the PKGBUILD expects.

jshap commented on 2019-10-18 16:51 (UTC) (edited on 2019-10-18 16:51 (UTC) by jshap)

@darthdeus nvidia-smi reports the version of CUDA embedded in the driver, not the installed toolkit version. So the version number is based on the host's nvidia kernel module version, because docker is really mapping to that under the hood.

CUDA toolkits are compatible with any driver newer than themselves, so 9.0 will work on a 10.1 driver, but 10.1 won't work on a 9.0 driver, etc. So it's fine to continue using the 9.0 image.

you can check the toolkit version on the image with something like docker run --rm --gpus all nvidia/cuda:9.0-base cat /usr/local/cuda/version.txt

darthdeus commented on 2019-10-18 16:32 (UTC)

I'm not sure if I'm doing something wrong, but I have the regular cuda package installed from pacman, which has CUDA 10.1. I then tried to follow the wiki https://wiki.archlinux.org/index.php/Docker#With_NVIDIA_Container_Toolkit_(recommended), which says to run docker run --gpus all nvidia/cuda:9.0-base nvidia-smi. That works, but the output says CUDA Version: 10.1, even though the image name has 9.0 in it.

Am I doing something wrong? I thought the container would have CUDA based on the image, and not on my external CUDA installation. Or is it just a problem of nvidia-smi?

jshap commented on 2019-09-11 13:40 (UTC) (edited on 2019-09-11 13:41 (UTC) by jshap)

@tndev Yeah, it looks like the release was pulled for some reason? Give me a bit to ask about it.

It makes sense that you cannot downgrade past the runtime's v3.1.0 release, as the toolkit did not exist before that. Note that for Arch there is almost zero difference in the toolkit between v3.1.0 (toolkit v1.0.1) and v3.1.3 (toolkit v1.0.4), so if you have to build this package at (this repo's) commit 8378443f176f it should be fine.
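For anyone who needs it, building from that commit would look roughly like this (a sketch of the standard AUR checkout flow, using the clone URL from the top of this page and the commit hash given above):

git clone https://aur.archlinux.org/nvidia-container-toolkit.git
cd nvidia-container-toolkit
git checkout 8378443f176f
makepkg -si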

tndev commented on 2019-09-11 09:12 (UTC) (edited on 2019-09-11 09:14 (UTC) by tndev)

This package has been broken for me since (and including) commit 17b11a79498d. The latest version I can download from GitHub is 3.1.0 (https://github.com/NVIDIA/nvidia-container-runtime/archive/v3.1.0.tar.gz); for any newer version I get a 404 response from GitHub.

jshap commented on 2019-08-13 13:37 (UTC)

@liufei That output doesn't really make a whole lot of sense, to be honest... What command is it produced from? If you're not using makepkg directly, try it.

liufei commented on 2019-08-13 06:34 (UTC)

My install errors out and I don't know how to resolve it. The error looks like this; can you help me?

start build():

unicode/utf16
all=-trimpath=/home/liufei/.cache/yay/nvidia-container-toolkit/src:0:0: open all=-trimpath=/home/liufei/.cache/yay/nvidia-container-toolkit/src: no such file or directory
encoding
all=-trimpath=/home/liufei/.cache/yay/nvidia-container-toolkit/src:0:0: open all=-trimpath=/home/liufei/.cache/yay/nvidia-container-toolkit/src: no such file or directory
bytes
all=-trimpath=/home/liufei/.cache/yay/nvidia-container-toolkit/src:0:0: open all=-trimpath=/home/liufei/.cache/yay/nvidia-container-toolkit/src: no such file or directory
context
all=-trimpath=/home/liufei/.cache/yay/nvidia-container-toolkit/src:0:0: open all=-trimpath=/home/liufei/.cache/yay/nvidia-container-toolkit/src: no such file or directory
encoding/base64
all=-trimpath=/home/liufei/.cache/yay/nvidia-container-toolkit/src:0:0: open all=-trimpath=/home/liufei/.cache/yay/nvidia-container-toolkit/src: no such file or directory
sort
all=-trimpath=/home/liufei/.cache/yay/nvidia-container-toolkit/src:0:0: open all=-trimpath=/home/liufei/.cache/yay/nvidia-container-toolkit/src: no such file or directory
strings
all=-trimpath=/home/liufei/.cache/yay/nvidia-container-toolkit/src:0:0: open all=-trimpath=/home/liufei/.cache/yay/nvidia-container-toolkit/src: no such file or directory
log
all=-trimpath=/home/liufei/.cache/yay/nvidia-container-toolkit/src:0:0: open all=-trimpath=/home/liufei/.cache/yay/nvidia-container-toolkit/src: no such file or directory

jshap commented on 2019-07-29 22:32 (UTC)

@lesto good catch, should be fixed now :)

lesto commented on 2019-07-29 09:30 (UTC)

I get the warning: ==> WARNING: Package contains reference to $srcdir usr/bin/nvidia-container-toolkit

jshap commented on 2019-07-28 01:43 (UTC) (edited on 2019-07-29 22:32 (UTC) by jshap)

see the release notes here for why this exists: https://github.com/NVIDIA/nvidia-container-runtime/releases/tag/3.1.0

tl;dr: nvidia-docker is deprecated because docker now has native gpu support, which this package is required to use. :)