Package Details: python-tensorrt 10.1.0.27-1

Git Clone URL: https://aur.archlinux.org/tensorrt.git (read-only)
Package Base: tensorrt
Description: A platform for high-performance deep learning inference on NVIDIA hardware (python bindings and tools)
Upstream URL: https://developer.nvidia.com/tensorrt/
Keywords: ai artificial intelligence nvidia
Licenses: Apache-2.0, LicenseRef-custom
Provides: python-onnx-graphsurgeon, python-polygraphy, python-tensorflow-quantization
Submitter: dbermond
Maintainer: dbermond
Last Packager: dbermond
Votes: 17
Popularity: 0.85
First Submitted: 2018-07-29 16:17 (UTC)
Last Updated: 2024-06-18 20:08 (UTC)

Dependencies (18)

Sources (12)

Pinned Comments

dbermond commented on 2024-05-24 19:28 (UTC)

@FuzzyAtish TL;DR: it fails to build against cuda 12.5; downgrade cuda to 12.4.1 and it will work. The long story: there are two issues here. First, tensorrt 10.0.1 apparently does not support cuda 12.5, as you can see in the upstream documentation. Second, cuda 12.5 is not supported by the current version of the nvidia drivers available in the official repositories (cuda 12.5 was pushed without a driver that supports it); you can read more details about this on the Arch Linux cuda package issues page at this link.
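[Editor's note] The downgrade described above can be sketched roughly as follows. The pacman commands are illustrative and commented out; the Arch Linux Archive URL layout is real, but the exact pkgrel in the filename is an assumption — check your local cache or the archive for the actual file.

```shell
# Downgrade cuda to 12.4.1 from the local package cache, if still present
# (filename/pkgrel is a guess):
#   sudo pacman -U /var/cache/pacman/pkg/cuda-12.4.1-1-x86_64.pkg.tar.zst
# or fetch it from the Arch Linux Archive:
#   sudo pacman -U https://archive.archlinux.org/packages/c/cuda/cuda-12.4.1-1-x86_64.pkg.tar.zst
# Then hold it back so a routine -Syu does not re-upgrade it (/etc/pacman.conf):
#   IgnorePkg = cuda
# Sanity check that the target version really sorts below 12.5
# (sort -V does version-aware ordering):
printf '%s\n' 12.5.0 12.4.1 | sort -V | head -n1   # → 12.4.1
```

Once a driver supporting cuda 12.5 lands in the repositories, drop the IgnorePkg line and upgrade normally.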

Latest Comments

dbermond commented on 2024-07-27 01:17 (UTC)

@milianw I could compile your 'binsim.cu' source file using cuda 12.5.1 by running the exact same nvcc command that you posted in the mentioned nvidia thread. No errors, no warnings, and the 'binsimCUDA' executable builds fine. I cannot answer why you are getting these errors, and further discussing this here will be out of the scope of this AUR web page.

milianw commented on 2024-07-25 19:33 (UTC)

@dbermond: if gcc is not an issue, then why did I see the compile errors from the linked forum thread? I have gcc13 installed, but only:

$ ls /usr/include/c++/
14.1.1  v1

So gcc13 will still end up using libstdc++ headers from gcc14, which are incompatible. How is this supposed to work?

dbermond commented on 2024-07-25 13:24 (UTC)

@milianw Sure, I will be happy to update the package if you provide a fix for this issue, which I reported upstream on the same day 10.2 was released. There is also another one that I reported, but that one I could fix myself. Please note that if you fix the first issue, others may arise later in the compilation, or even in the python modules, so make sure to check everything. Regarding gcc usage in cuda: each cuda version uses a specific gcc version. cuda 12.5 uses gcc13 (not gcc14), so the gcc version is not a problem for us, since the cuda package is already using the correct one.
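[Editor's note] As a sketch of how that cuda/gcc pairing works in practice: nvcc can be pointed at a specific host compiler with its `-ccbin` flag, and it rejects host compilers newer than it supports. The cutoff value below (major version 13 for cuda 12.5, per the comment above) and the g++ path are assumptions for illustration:

```shell
# Pin nvcc to a specific host compiler (illustrative, commented out):
#   nvcc -ccbin /usr/bin/g++-13 -c roiAlignKernel.cu -o roiAlignKernel.cu.o
# nvcc itself refuses too-new host compilers; a stand-in for that check,
# assuming the maximum supported gcc major for cuda 12.5 is 13:
max_gcc=13
host_gcc=14   # e.g. obtained from: gcc -dumpversion | cut -d. -f1
if [ "$host_gcc" -gt "$max_gcc" ]; then
    echo "unsupported host compiler: gcc $host_gcc > $max_gcc"
fi
```

With `-ccbin` pointed at a supported gcc, the system's newest gcc (and its libstdc++ headers) is never involved in the nvcc build at all, which is why the packaged cuda pairing works regardless of what /usr/include/c++ contains.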

milianw commented on 2024-07-25 10:12 (UTC)

meh, just updating the versions won't be sufficient, since cuda (even 12.5, apparently) is not compatible with gcc 14 system includes: https://forums.developer.nvidia.com/t/cuda-12-4-nvcc-and-gcc-14-1-incompatibility/293295

milianw commented on 2024-07-25 08:57 (UTC)

there's now tensorrt 10.2.0.19 with cuda 12.5 support; this package could be updated accordingly

FuzzyAtish commented on 2024-05-24 19:41 (UTC)

Thank you for the great and speedy answer. I should probably have searched for the issue a bit first; apologies for being lazy.

FuzzyAtish commented on 2024-05-24 17:52 (UTC)

I'm getting the following error when building:

#17 2311.3 [ 36%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/roiAlignPlugin/roiAlignKernel.cu.o
#17 2320.2 /home/builder/.cache/yay/tensorrt/src/TensorRT/plugin/common/common.cuh(321): error: identifier "FLT_MAX" is undefined
#17 2320.2       float threadData(-FLT_MAX);
#17 2320.2                         ^
#17 2320.2
#17 2320.2 /home/builder/.cache/yay/tensorrt/src/TensorRT/plugin/common/common.cuh(373): error: identifier "FLT_MAX" is undefined
#17 2320.2       float threadData(-FLT_MAX);
#17 2320.2                         ^
#17 2320.2
#17 2320.3 2 errors detected in the compilation of "/home/builder/.cache/yay/tensorrt/src/TensorRT/plugin/roiAlignPlugin/roiAlignKernel.cu".
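[Editor's note] For context, FLT_MAX is defined in <cfloat>/<float.h>, and newer toolchains no longer pull that header in transitively. The downgrade in the pinned comment sidesteps this, but the generic fix for such an error is to add the include explicitly. A minimal local reproduction of that edit — the common.cuh below is a stand-in file created on the spot, not the real TensorRT source:

```shell
# Create a stand-in for the offending header, then insert the missing include
# at the top with GNU sed:
cat > common.cuh <<'EOF'
// ... plugin helpers ...
float threadData(-FLT_MAX);
EOF
sed -i '1i #include <cfloat>  // defines FLT_MAX' common.cuh
head -n1 common.cuh   # → #include <cfloat>  // defines FLT_MAX
```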

dbermond commented on 2024-04-04 18:45 (UTC)

@tbb @qsthy this is an issue with the python-onnx package, as it needs a rebuild against the latest protobuf. The latest protobuf version did not have a soname version bump, but it seems to be ABI incompatible, thus requiring a rebuild of some dependent packages. I've talked to the maintainer, and a fixed version is currently in [extra-testing]. Either wait for it to leave [extra-testing], or enable this repository. I've tested, and it's building fine with it.
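[Editor's note] Enabling [extra-testing] amounts to a pacman.conf fragment; ordering matters, since a testing repository must be listed above its stable counterpart. A sketch, using the standard Arch defaults:

```shell
# /etc/pacman.conf fragment — [extra-testing] must appear above [extra]:
#   [extra-testing]
#   Include = /etc/pacman.d/mirrorlist
# Then refresh the databases and pull the rebuilt package:
#   sudo pacman -Syu python-onnx
```

Remember to disable the testing repository again (and run a full -Syu) once the fixed package reaches [extra], or the system will keep tracking testing for everything.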