Package Details: python-tensorrt 10.7.0.23-1
| Git Clone URL | https://aur.archlinux.org/tensorrt.git (read-only) |
|---|---|
| Package Base | tensorrt |
| Description | A platform for high-performance deep learning inference on NVIDIA hardware (python bindings and tools) |
| Upstream URL | https://developer.nvidia.com/tensorrt/ |
| Keywords | ai artificial intelligence nvidia |
| Licenses | Apache-2.0, LicenseRef-custom |
| Provides | python-onnx-graphsurgeon, python-polygraphy, python-tensorflow-quantization |
| Submitter | dbermond |
| Maintainer | dbermond |
| Last Packager | dbermond |
| Votes | 20 |
| Popularity | 0.54 |
| First Submitted | 2018-07-29 16:17 (UTC) |
| Last Updated | 2024-12-07 14:13 (UTC) |
Dependencies (18)
- python (AUR alternatives: python37, python311, python310)
- python-numpy (AUR alternatives: python-numpy-flame, python-numpy-git, python-numpy-mkl-bin, python-numpy-mkl-tbb, python-numpy-mkl, python-numpy1)
- tensorrt (AUR)
- cmake (AUR alternative: cmake-git) (make)
- cuda (AUR alternatives: cuda11.1, cuda-12.2, cuda12.0, cuda11.4, cuda11.4-versioned, cuda12.0-versioned) (make)
- cudnn (make)
- git (AUR alternatives: git-git, git-gl) (make)
- python (AUR alternatives: python37, python311, python310) (make)
- python-build (make)
- python-installer (AUR alternative: python-installer-git) (make)
- python-onnx (make)
- python-setuptools (make)
- python-wheel (make)
- python-onnx (optional) – for the onnx_graphsurgeon python module
- python-onnxruntime (alternatives: python-onnxruntime-opt, python-onnxruntime-opt-rocm, python-onnxruntime-rocm) (optional) – for the onnx_graphsurgeon and polygraphy python modules
- python-protobuf (AUR alternative: python-protobuf-git) (optional) – for the polygraphy and tensorflow-quantization python modules
- python-tensorflow-cuda (alternatives: python-tensorflow-cuda-git (AUR), python-tensorflow-opt-cuda) (optional) – for the polygraphy python module
- python-tf2onnx (AUR) (optional) – for the tensorflow-quantization python module
Required by (1)
Sources (13)
- 010-tensorrt-use-local-protobuf-sources.patch
- 020-tensorrt-fix-python.patch
- 030-tensorrt-onnx-tensorrt-disable-missing-source-file.patch
- cub-nvlabs
- git+https://github.com/google/benchmark.git
- git+https://github.com/NVIDIA/TensorRT.git#tag=v10.7.0
- git+https://github.com/onnx/onnx-tensorrt.git
- git+https://github.com/onnx/onnx.git
- git+https://github.com/pybind/pybind11.git
- https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.7.0/tars/TensorRT-10.7.0.23.Linux.x86_64-gnu.cuda-12.6.tar.gz
- https://github.com/google/protobuf/releases/download/v3.20.1/protobuf-cpp-3.20.1.tar.gz
- protobuf-protocolbuffers
- TensorRT-SLA.txt
Latest Comments
dbermond commented on 2024-12-19 03:26 (UTC)
@sots removing the unneeded patch is already on my radar, thanks for pointing it out anyway. The number of build jobs (parallelism) is a user setting and should be configured in your 'makepkg.conf'.
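As a minimal sketch of the setting the maintainer refers to: parallel build jobs for makepkg are controlled by the MAKEFLAGS variable in /etc/makepkg.conf (or a per-user copy). The exact value below is only an example:

```shell
# Excerpt for /etc/makepkg.conf (or a user-level makepkg.conf).
# Pass a -jN flag to make so compilation uses all available cores;
# replace $(nproc) with a fixed number to limit parallelism.
MAKEFLAGS="-j$(nproc)"
```

With this in place, any `makepkg` run (including building this AUR package) inherits the job count without editing the PKGBUILD.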
sots commented on 2024-12-19 01:55 (UTC)
Recommended patch for PKGBUILD:
juliusk commented on 2024-12-03 01:21 (UTC) (edited on 2024-12-03 01:21 (UTC) by juliusk)
whiteLinux commented on 2024-11-10 05:12 (UTC)
030-tensorrt-onnx-tensorrt-disable-missing-source-file.patch is no longer needed. The file "errorHelpers.cpp" that the patch disables provides the _ZTIN8onnx2trt16OnnxTrtExceptionE definition. The python-onnx package also includes this file, so installing python-onnx will fix it.
dbermond commented on 2024-11-02 14:47 (UTC)
@jholmer try to rebuild the python-onnx package and use it. For rebuilding, use the PKGBUILD from the official repositories (it can be found in this link). If it works, please open a bug report for python-onnx.
FuzzyAtish commented on 2024-10-31 11:07 (UTC)
@jholmer In the past, with similar errors, what worked for me was to reinstall the python-onnx package. I'm not saying it's a definite solution, but it could work.

jholmer commented on 2024-10-20 01:17 (UTC)
I am also receiving the "Could not load library libnvonnxparser.so.10: Unable to open library: libnvonnxparser.so.10 due to /usr/lib/libnvonnxparser.so.10: undefined symbol: _ZTIN8onnx2trt16OnnxTrtExceptionE" error on runtime. I have tried doing a completely clean build. I'm wondering if there is some sort of version incompatibility between this package and a dependency?
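As a hypothetical diagnostic for the error above (not part of the package itself), the mangled symbol in the loader message can be demangled with `c++filt` from binutils to see what the parser library is actually missing:

```shell
# Demangle the reportedly undefined symbol (c++filt ships with binutils).
c++filt _ZTIN8onnx2trt16OnnxTrtExceptionE
# prints: typeinfo for onnx2trt::OnnxTrtException
```

One could then run something like `nm -D /usr/lib/libnvonnxparser.so.10 | grep OnnxTrtException` to check whether the installed library defines that typeinfo or merely references it as undefined (`U`), which would point at a version mismatch between the library and the package that should provide the symbol.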
dbermond commented on 2024-09-19 21:26 (UTC)
@lu0se yes, the '.so' link to 'libnvonnxparser.so.10' is right, otherwise you would get a 'file not found' error. If you are getting a 'Using existing $srcdir/ tree' warning during makepkg, it means that you are not doing a clean build. You should use makepkg --cleanbuild/-C option for doing a clean build when building your packages, or build it in a clean chroot. Try to do it and see if it works.
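A sketch of the clean-build workflow the maintainer describes, assuming you are in the directory containing this package's PKGBUILD (command fragment, not run here):

```shell
# Remove any existing $srcdir tree before building (-C / --cleanbuild),
# install missing build dependencies (-s / --syncdeps), and overwrite a
# previously built package (-f / --force).
makepkg --cleanbuild --syncdeps --force
```

Alternatively, building in a clean chroot (e.g. with devtools) avoids stale $srcdir state entirely.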
lu0se commented on 2024-09-19 18:03 (UTC)
trtexec --onnx=rife_v4.10.onnx
&&&& RUNNING TensorRT.trtexec [TensorRT v100400] [b26] # trtexec --onnx=/usr/lib/vapoursynth/models/rife/rife_v4.10.onnx --device=0
[09/20/2024-02:01:38] [I] Start parsing network model.
[09/20/2024-02:01:38] [E] Could not load library libnvonnxparser.so.10: Unable to open library: libnvonnxparser.so.10 due to /usr/lib/libnvonnxparser.so.10: undefined symbol: _ZTIN8onnx2trt16OnnxTrtExceptionE
[09/20/2024-02:01:38] [E] Assertion failure: parser.onnxParser != nullptr
Is the libnvonnxparser.so.10 link right? Is it related to "WARNING: Using existing $srcdir/ tree"?
milianw commented on 2024-08-06 16:28 (UTC)
@dbermond: the forum post is not mine. I got the same/similar error when I manually edited the PKGBUILD to try to build the newer tensorrt against cuda 12.5.