Package Details: python-tensorrt 10.6.0.26-1
| Git Clone URL: | https://aur.archlinux.org/tensorrt.git (read-only) |
|---|---|
| Package Base: | tensorrt |
| Description: | A platform for high-performance deep learning inference on NVIDIA hardware (python bindings and tools) |
| Upstream URL: | https://developer.nvidia.com/tensorrt/ |
| Keywords: | ai artificial intelligence nvidia |
| Licenses: | Apache-2.0, LicenseRef-custom |
| Provides: | python-onnx-graphsurgeon, python-polygraphy, python-tensorflow-quantization |
| Submitter: | dbermond |
| Maintainer: | dbermond |
| Last Packager: | dbermond |
| Votes: | 20 |
| Popularity: | 0.98 |
| First Submitted: | 2018-07-29 16:17 (UTC) |
| Last Updated: | 2024-11-08 22:21 (UTC) |
Dependencies (18)
- python (python37AUR, python311AUR, python310AUR)
- python-numpy (python-numpy-flameAUR, python-numpy-gitAUR, python-numpy1AUR, python-numpy-mkl-tbbAUR, python-numpy-mklAUR, python-numpy-mkl-binAUR)
- tensorrtAUR
- cmake (cmake-gitAUR) (make)
- cuda (cuda11.1AUR, cuda-12.2AUR, cuda12.0AUR, cuda11.4AUR, cuda11.4-versionedAUR, cuda12.0-versionedAUR) (make)
- cudnn (make)
- git (git-gitAUR, git-glAUR) (make)
- python (python37AUR, python311AUR, python310AUR) (make)
- python-build (make)
- python-installer (python-installer-gitAUR) (make)
- python-onnx (make)
- python-setuptools (make)
- python-wheel (make)
- python-onnx (optional) – for onnx_graphsurgeon python module
- python-onnxruntime (python-onnxruntime-opt, python-onnxruntime-opt-rocm, python-onnxruntime-rocm) (optional) – for onnx_graphsurgeon and polygraphy python modules
- python-protobuf (python-protobuf-gitAUR) (optional) – for polygraphy and tensorflow-quantization python modules
- python-tensorflow-cuda (python-tensorflow-cuda-gitAUR, python-tensorflow-opt-cuda) (optional) – for polygraphy python module
- python-tf2onnxAUR (optional) – for tensorflow-quantization python module
Required by (1)
Sources (13)
- 010-tensorrt-use-local-protobuf-sources.patch
- 020-tensorrt-fix-python.patch
- 030-tensorrt-onnx-tensorrt-disable-missing-source-file.patch
- cub-nvlabs
- git+https://github.com/google/benchmark.git
- git+https://github.com/NVIDIA/TensorRT.git#tag=v10.6.0
- git+https://github.com/onnx/onnx-tensorrt.git
- git+https://github.com/onnx/onnx.git
- git+https://github.com/pybind/pybind11.git
- https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.6.0/tars/TensorRT-10.6.0.26.Linux.x86_64-gnu.cuda-12.6.tar.gz
- https://github.com/google/protobuf/releases/download/v3.20.1/protobuf-cpp-3.20.1.tar.gz
- protobuf-protocolbuffers
- TensorRT-SLA.txt
Latest Comments
whiteLinux commented on 2024-11-10 05:12 (UTC)
030-tensorrt-onnx-tensorrt-disable-missing-source-file.patch is no longer needed. The formerly missing source file "errorHelpers.cpp" provides the definition of _ZTIN8onnx2trt16OnnxTrtExceptionE. The python-onnx package also includes this file, so installing python-onnx will fix the error.
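The undefined symbol discussed in this thread is a C++ typeinfo. A quick diagnostic sketch for checking it yourself (the library path is taken from the error reports below and may differ on your system):

```shell
# Demangle the symbol to see what it names (c++filt ships with binutils)
c++filt _ZTIN8onnx2trt16OnnxTrtExceptionE
# typeinfo for onnx2trt::OnnxTrtException

# Check whether the installed parser library actually defines it; if grep
# finds nothing, the symbol is indeed missing from that library
nm -D /usr/lib/libnvonnxparser.so.10 2>/dev/null | grep OnnxTrtException \
    || echo "symbol not defined in this library"
```

If the second command reports the symbol as missing, rebuilding or reinstalling python-onnx, as suggested in the comments, is the fix described here.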
dbermond commented on 2024-11-02 14:47 (UTC)
@jholmer try to rebuild the python-onnx package and use it. For rebuilding, use the PKGBUILD from the official repositories (can be found in this link). If it works, please open a bug report for python-onnx.
FuzzyAtish commented on 2024-10-31 11:07 (UTC)
@jholmer In the past, with similar errors, what worked for me was to reinstall the python-onnx package. I'm not saying it's a definite solution, but it could work.

jholmer commented on 2024-10-20 01:17 (UTC)

I am also receiving the "Could not load library libnvonnxparser.so.10: Unable to open library: libnvonnxparser.so.10 due to /usr/lib/libnvonnxparser.so.10: undefined symbol: _ZTIN8onnx2trt16OnnxTrtExceptionE" error at runtime. I have tried doing a completely clean build. I'm wondering if there is some sort of version incompatibility between this package and a dependency?
dbermond commented on 2024-09-19 21:26 (UTC)
@lu0se yes, the '.so' link to 'libnvonnxparser.so.10' is right, otherwise you would get a 'file not found' error. If you are getting a 'Using existing $srcdir/ tree' warning during makepkg, it means that you are not doing a clean build. You should use makepkg --cleanbuild/-C option for doing a clean build when building your packages, or build it in a clean chroot. Try to do it and see if it works.
lu0se commented on 2024-09-19 18:03 (UTC)
trtexec --onnx=rife_v4.10.onnx

```
&&&& RUNNING TensorRT.trtexec [TensorRT v100400] [b26] # trtexec --onnx=/usr/lib/vapoursynth/models/rife/rife_v4.10.onnx --device=0
[09/20/2024-02:01:38] [I] Start parsing network model.
[09/20/2024-02:01:38] [E] Could not load library libnvonnxparser.so.10: Unable to open library: libnvonnxparser.so.10 due to /usr/lib/libnvonnxparser.so.10: undefined symbol: _ZTIN8onnx2trt16OnnxTrtExceptionE
[09/20/2024-02:01:38] [E] Assertion failure: parser.onnxParser != nullptr
```

Is the libnvonnxparser.so.10 link right? Is it related to the "WARNING: Using existing $srcdir/ tree" message?
milianw commented on 2024-08-06 16:28 (UTC)
@dbermond: the forum post is not mine. I got the same/similar error when I tried to edit the PKGBUILD manually to try to build the newer tensorrt against cuda 12.5.
monarc99 commented on 2024-08-06 14:55 (UTC) (edited on 2024-08-06 15:05 (UTC) by monarc99)

The PKGBUILD has a commented-out section that compiles the python bindings:

# python bindings (fails to build with python 3.11) #local _pyver ...

Since my GPU (a 1060) is no longer supported by TensorRT 10, I had to get version 9 working and compile the python bindings for Python 3.12 myself. All I had to do was set one additional environment variable and adjust the install command.

In build() ... { ... local -x TENSORRT_MODULE="tensorrt" ... }

The generated wheel is located somewhere else, so the install command must be adapted accordingly:

In package_python-tensorrt() { ...
python -m installer --destdir="$pkgdir" "TensorRT/python/build/bindings_wheel/dist/"*.whl
... }

I cannot say whether everything is correct, but everything compiles and the models run (rife + upscale via trt). Posting in case someone might need it.
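Assembled from the snippets in the comment above, the two changes could look roughly like the following PKGBUILD fragment. This is a sketch, not the actual PKGBUILD: the elided parts stand for the existing function bodies, and only TENSORRT_MODULE and the installer invocation come from the comment itself.

```shell
build() {
    # ... existing build steps ...

    # extra environment variable so the upstream build produces
    # the tensorrt python bindings
    local -x TENSORRT_MODULE="tensorrt"

    # ... python bindings build steps ...
}

package_python-tensorrt() {
    # ... existing packaging steps ...

    # with this setup the generated wheel lands under bindings_wheel/dist,
    # so the install command points there
    python -m installer --destdir="$pkgdir" \
        "TensorRT/python/build/bindings_wheel/dist/"*.whl
}
```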
dbermond commented on 2024-07-27 01:17 (UTC)
@milianw I could compile your 'binsim.cu' source file using cuda 12.5.1 by running the exact same nvcc command that you posted in the mentioned nvidia thread. No errors, no warnings, and the 'binsimCUDA' executable builds fine. I cannot answer why you are getting these errors, and further discussing this here will be out of the scope of this AUR web page.
milianw commented on 2024-07-25 19:33 (UTC)
@dbermond: if gcc is not an issue, then why did I see the compile errors from the linked forum thread? I have gcc13 installed, but only
So gcc13 will still end up using libstdc++ headers from gcc14, which are incompatible. How is this supposed to work?