Package Details: python-tensorrt 10.14.1.48-1
| Git Clone URL: | https://aur.archlinux.org/tensorrt.git (read-only, click to copy) |
|---|---|
| Package Base: | tensorrt |
| Description: | A platform for high-performance deep learning inference on NVIDIA hardware (python bindings and tools) |
| Upstream URL: | https://developer.nvidia.com/tensorrt/ |
| Keywords: | ai artificial intelligence nvidia |
| Licenses: | Apache-2.0 AND LicenseRef-TensorRT-LICENSE-AGREEMENT AND LicenseRef-Python-TensorRT-LICENSE-AGREEMENT |
| Provides: | python-onnx-graphsurgeon, python-polygraphy, python-tensorflow-quantization |
| Submitter: | dbermond |
| Maintainer: | dbermond |
| Last Packager: | dbermond |
| Votes: | 21 |
| Popularity: | 0.154398 |
| First Submitted: | 2018-07-29 16:17 (UTC) |
| Last Updated: | 2025-11-08 20:20 (UTC) |
Dependencies (25)
- gcc-libs (gcc-libs-gitAUR, gccrs-libs-gitAUR, gcc-libs-snapshotAUR)
- glibc (glibc-gitAUR, glibc-eacAUR)
- python
- python-numpy (python-numpy-gitAUR, python-numpy1AUR, python-numpy-mkl-binAUR, python-numpy-mkl-tbbAUR, python-numpy-mklAUR)
- tensorrtAUR
- cmake (cmake3AUR, cmake-gitAUR) (make)
- cuda (cuda11.1AUR, cuda-12.2AUR, cuda12.0AUR, cuda11.4AUR, cuda11.4-versionedAUR, cuda12.0-versionedAUR, cuda-12.5AUR, cuda-12.9AUR) (make)
- cudnn (cudnn9.10-cuda12.9AUR) (make)
- git (git-gitAUR, git-glAUR) (make)
- nvidia-utils (nvidia-410xx-utilsAUR, nvidia-440xx-utilsAUR, nvidia-430xx-utilsAUR, nvidia-340xx-utilsAUR, nvidia-510xx-utilsAUR, nvidia-utils-teslaAUR, nvidia-470xx-utilsAUR, nvidia-550xx-utilsAUR, nvidia-390xx-utilsAUR, nvidia-vulkan-utilsAUR, nvidia-535xx-utilsAUR, nvidia-utils-betaAUR, nvidia-525xx-utilsAUR) (make)
- python (make)
- python-build (make)
- python-installer (make)
- python-ml-dtypes (make)
- python-onnx (make)
- python-setuptools (make)
- python-typing_extensions (make)
- python-wheel (make)
- python-coloredAUR (optional) – for onnx_graphsurgeon and polygraphy python modules
- python-ml-dtypes (optional) – for onnx_graphsurgeon python module
- python-onnx (optional) – for onnx_graphsurgeon python module
- python-onnxruntime (python-onnxruntime-cpu, python-onnxruntime-cuda, python-onnxruntime-opt-cuda, python-onnxruntime-opt-rocm, python-onnxruntime-rocm) (optional) – for onnx_graphsurgeon python module
- python-protobuf (python-protobuf-gitAUR, python-protobuf-21AUR) (optional) – for polygraphy python modules
- python-tensorflow-cuda (python-tensorflow-cuda-gitAUR, python-tensorflow-opt-cuda) (optional) – for polygraphy and tensorflow-quantization python modules
- python-tf2onnxAUR (optional) – for tensorflow-quantization python module
Required by (1)
Sources (12)
- 010-tensorrt-use-local-protobuf-sources.patch
- 020-tensorrt-fix-python.patch
- cub-nvlabs
- git+https://github.com/google/benchmark.git
- git+https://github.com/NVIDIA/TensorRT.git#tag=v10.14
- git+https://github.com/onnx/onnx-tensorrt.git
- git+https://github.com/onnx/onnx.git
- git+https://github.com/protocolbuffers/protobuf.git
- git+https://github.com/pybind/pybind11.git
- https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.14.1/tars/TensorRT-10.14.1.48.Linux.x86_64-gnu.cuda-13.0.tar.gz
- https://github.com/google/protobuf/releases/download/v3.20.1/protobuf-cpp-3.20.1.tar.gz
- TensorRT-LICENSE-AGREEMENT.txt
Latest Comments
idanka commented on 2025-10-16 23:29 (UTC)
@dbermond Sorry, Manjaro is up to date, but the latest cuda package installed is 12.9.1-2.
dbermond commented on 2025-10-14 20:42 (UTC)
@idanka I have just checked, and the package is building perfectly fine in an up-to-date Arch Linux system with cuda 13.0.2. The compute_110 architecture is supported by cuda 13, and I cannot reproduce your issue. Make sure that you have an up-to-date Arch Linux system with the latest packaged cuda version installed. The hardware is not relevant at build time, but only at run time, and you can build it even in a system without a nvidia gpu.
idanka commented on 2025-10-14 20:22 (UTC) (edited on 2025-10-14 20:24 (UTC) by idanka)
Update error nvidia rtx 3050
CMake will not be able to correctly generate this project. Call Stack (most recent call first): CMakeLists.txt:102 (project)
-- Configuring incomplete, errors occurred! ==> ERROR: An error occurred in build(). Aborting...
dbermond commented on 2025-10-11 14:09 (UTC)
@AngelSherry @gbin should be fixed by now.
AngelSherry commented on 2025-10-11 13:10 (UTC) (edited on 2025-10-11 13:16 (UTC) by AngelSherry)
PROCINFO["version"] is an internal feature of GNU awk (usable only within awk) and cannot control the output language of external commands such as pacman. It only yields the awk version number (e.g. 5.3.0) and cannot extract the cudnn version.

When a non-English locale is enabled in /etc/locale.gen and made the default (e.g. LANG=ja, LANG=zh-tw), pacman localizes all of its output, including `pacman -Qi`. That is why awk cannot find the original non-localized field names such as "Version": in `pacman -Qi` output, "Version" is localized to 版本 in Traditional Chinese and バージョン in Japanese. Use LANG=C to force pacman to output in English, which is the standard practice in the Arch Linux community:

```shell
LANG=C pacman -Qi 'cudnn' | awk '/^Version/ { print $3 }' | grep -oE '^[0-9]+\.[0-9]+'
```

The build part should change accordingly. This is the minimum change needed to ensure the PKGBUILD builds on non-English users' systems.
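The locale pitfall described above can be reproduced without pacman at all; a minimal sketch using simulated `pacman -Qi` output lines (the cudnn version string below is made up for illustration):

```shell
# Field label as pacman would print it under a Japanese locale vs. under LANG=C.
# The version value 9.10.2.21-1 is a made-up example.
localized='バージョン      : 9.10.2.21-1'
c_locale='Version         : 9.10.2.21-1'

# The PKGBUILD's /^Version/ filter matches nothing in the localized output:
printf '%s\n' "$localized" | awk '/^Version/ { print $3 }'

# Under LANG=C the field label matches and the major.minor pair is extracted:
printf '%s\n' "$c_locale" \
    | awk '/^Version/ { print $3 }' \
    | grep -oE '^[0-9]+\.[0-9]+'     # prints 9.10
```

This is why prefixing the query with `LANG=C` (or `env LANG=C`) makes the extraction locale-independent.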
gbin commented on 2025-10-11 12:17 (UTC)
@angelsherry
Maybe something that is not localized would be more robust?
AngelSherry commented on 2025-10-11 10:13 (UTC) (edited on 2025-10-11 12:52 (UTC) by AngelSherry)
This part will make build() fail if the user's default language is not English, because when you use `pacman -Qi` to query the version of cuda, the output is translated into the native language. As a result, awk cannot match anything and exits with an empty string (example: zh_TW).

So please consider adding `env LANG=C` before `pacman -Qi`.

dbermond commented on 2025-03-24 16:32 (UTC)
@ework thanks for pointing this out. I'll remove it in the next update; I think it does not hurt to leave it there for the time being.
ework commented on 2025-03-23 02:06 (UTC)
@dbermond, we fixed the exec stack issue in TensorRT 10.9 by linking with "-Wl,-z,noexecstack", so you can drop that patch if you'd like.
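Whether the executable-stack workaround is still needed can be verified from the GNU_STACK program header of the installed library; a quick sketch (the library path is an assumption, adjust it to where TensorRT is installed on your system):

```shell
# "RW" flags on the GNU_STACK segment mean the stack is non-executable
# (the -Wl,-z,noexecstack fix took effect); "RWE" would mean the object
# still requests an executable stack.
# /usr/lib/libnvinfer.so is an assumed install path for this package.
readelf -lW /usr/lib/libnvinfer.so | awk '/GNU_STACK/ { print $7 }'
```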
snoriman commented on 2025-02-24 08:43 (UTC)
I'm trying to build this tensorrt package but I'm running into an issue where Python.h isn't found. I have these packages installed:

Contents of /usr/include/:

The output of the `makepkg` command:
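For a missing Python.h, the expected header location can be checked directly from the interpreter's sysconfig data; a small sketch (assumes the Arch `python` package, which ships the development headers itself, with no separate -dev package):

```shell
# Ask the interpreter where its C API headers live, then check for Python.h.
inc=$(python3 -c 'import sysconfig; print(sysconfig.get_paths()["include"])')
echo "$inc"
test -f "$inc/Python.h" && echo "Python.h found" || echo "Python.h missing"
```

If the header is missing from that directory, the `python` package is likely broken or shadowed by another interpreter earlier in PATH.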