As mentioned in [1], LTO works again.
Also switch to the new way of configuring nvcc parallelism: upstream
introduced a similar mechanism in onnxruntime 1.10 [2], so my original
approach (added in [3]) was always being overridden.
[1] https://aur.archlinux.org/packages/python-onnxruntime#comment-886838
[2] https://github.com/microsoft/onnxruntime/pull/8974
[3] https://github.com/archlinuxcn/repo/commit/99c193b303a416811c9cdac6b12c30804ad5acb6
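Not from this commit, just an illustrative sketch: one generic way to
cap nvcc parallelism from a PKGBUILD is to pass nvcc's own --threads
flag (available since CUDA 11.2) through CMAKE_CUDA_FLAGS; the upstream
option introduced in [2] may be named differently.

    # Hypothetical PKGBUILD excerpt; paths and values are illustrative.
    build() {
      cd onnxruntime
      # Keep (make jobs) x (nvcc threads) within the builder's RAM.
      cmake -B build -S cmake \
        -DCMAKE_BUILD_TYPE=Release \
        -DCMAKE_CUDA_FLAGS='--threads 2'
      cmake --build build
    }
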
* Rename python-onnxruntime-cuda to onnxruntime-cuda - this split
  package does not actually contain Python libraries.
* Move optdepends to the correct package
* Drop the flatbuffers fix after upstream added compatibility back [1]
* Improve the onednn patch - fall back to the bundled onednn if the
  system one is missing & wrap other uses of `DNNL_DLL_PATH`
* Add the CUDA architectures introduced in CUDA 11.8; see [2] and the
  sketch after the references
* Refresh patches for 1.13
[1] https://github.com/google/flatbuffers/issues/7499
[2] https://github.com/archlinux/svntogit-community/commit/54642de5ba70f8c67ad6c16815dd31138ba47456
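For context, CUDA 11.8 added the sm_89 (Ada) and sm_90 (Hopper)
targets. A rough sketch of how the architecture list might be extended
in the PKGBUILD (the variable name and the exact list are illustrative,
not taken from the commit):

    # Hypothetical excerpt; match the list actually shipped by the package.
    _CUDA_ARCHITECTURES='52-real;61-real;75-real;86-real;89-real;90-virtual'
    cmake -B build -S cmake \
      -DCMAKE_CUDA_ARCHITECTURES="${_CUDA_ARCHITECTURES}"
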
See: 0dfd9f002580878cd55b40a4e5ffd6c06c4612a2
Addresses https://aur.archlinux.org/packages/python-onnxruntime#comment-886281
Upstream flatbuffers renamed the package from Flatbuffers to
FlatBuffers (capital B) [1].
Fixes the issue reported at
https://aur.archlinux.org/packages/python-onnxruntime#comment-879320
Also drop patches that are now part of the latest release.
[1] https://github.com/google/flatbuffers/pull/7378
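Purely illustrative (not the actual patch): with the renamed,
case-sensitive package name, a devendoring step that looks up the
system flatbuffers has to use the new spelling, along the lines of:

    # Hypothetical prepare() snippet; the target file is an assumption.
    sed -i 's/find_package(Flatbuffers/find_package(FlatBuffers/g' \
      cmake/CMakeLists.txt
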
* Update Python dependencies following upstream [1].
* Rebase the devendoring patches after upstream changes [2].
* Avoid wheel.vendored, which upstream has needed since [3] but which
  is devendored in Arch [4] (see the sketch after the references).
[1] https://github.com/microsoft/onnxruntime/pull/11522
[2] https://github.com/microsoft/onnxruntime/pull/11146
[3] https://github.com/microsoft/onnxruntime/pull/11834
[4] https://github.com/archlinux/svntogit-community/commit/e691288eda92fb5982ac5ac18f6459c5da560d7a
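A rough illustration of the kind of change meant by the last point
(the import path and target file are assumptions, not the actual
patch): point setup.py at the standalone packaging module instead of
wheel's vendored copy.

    # Hypothetical prepare() step.
    sed -i 's/from wheel\.vendored\.packaging/from packaging/' setup.py
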
* Make the CUDA build optional, as requested in [1] (see the sketch
  after the reference)
* Use a better fix for protobuf 3.20 compatibility
* Fix GCC 12 build errors
[1] https://aur.archlinux.org/pkgbase/python-onnxruntime#comment-858912
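Sketch of a common PKGBUILD pattern for such a toggle (the variable
name _build_cuda is made up for this example; pkgname and makedepends
are assumed to be defined as arrays above):

    # Hypothetical: set to 0 to skip the CUDA-enabled split packages.
    _build_cuda=1
    if (( _build_cuda )); then
      makedepends+=(cuda cudnn nccl)
      pkgname+=(python-onnxruntime-cuda)
    fi
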
Also add soname dependencies to guard against incompatible library
updates, and remove unneeded items from `depends` for
python-onnxruntime-cuda.
See: https://github.com/microsoft/onnxruntime/issues/11129
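For reference, the usual makepkg soname-dependency pattern looks
roughly like this (the exact package and library names are
illustrative):

    # In the package that ships the library:
    provides=(libonnxruntime.so)
    # In a dependent split package such as python-onnxruntime-cuda:
    depends=(onnxruntime libonnxruntime.so)
    # makepkg resolves the bare .so entries to versioned sonames at
    # build time, so an ABI-incompatible library update is caught by
    # the dependency check instead of breaking at runtime.
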
Fixes the issue mentioned at https://aur.archlinux.org/packages/python-onnxruntime#comment-854850
* Switch back from clang to gcc. Apparently upstream tests more on gcc
  than on clang, and there are several compatibility issues between
  onnxruntime and clang [1,2] as well as between CUDA and clang [3].
  On the other hand, the internal compiler errors from gcc have since
  been fixed.
* Add more optional dependencies for several sub-packages, as motivated
  by [4].
* Fix missing orttraining Python files, discovered while checking the
  optional dependencies.
* Don't hard-code the use of GNU make, as suggested in [4] (see the
  sketch after the references).
[1] https://github.com/microsoft/onnxruntime/pull/10014
[2] https://github.com/microsoft/onnxruntime/pull/10160
[3] https://forums.developer.nvidia.com/t/building-with-clang-cuda-11-3-0-works-but-with-cuda-11-3-1-fails-regression/182176
[4] https://aur.archlinux.org/packages/python-onnxruntime/#comment-843401
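One generic way to avoid hard-coding make (illustrative only, not
necessarily what this commit does) is to drive the build through
cmake's generator-agnostic interface:

    # Hypothetical build() excerpt: cmake invokes whichever tool the
    # configured generator uses (make, ninja, ...).
    cmake --build build --parallel "$(nproc)"
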
With LTO, linking fails after memory usage peaks at 33.8 GB [1]:
> CMakeFiles/onnxruntime_providers_cuda.dir/build/python-onnxruntime/src/onnxruntime/onnxruntime/core/providers/cuda/activation/activations.cc.o: file not recognized: file format not recognized
[1] https://build.archlinuxcn.org/grafana/d/000000003/memory?orgId=1&from=1627032600000&to=1627034400000
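For reference, the standard way to opt a PKGBUILD out of the
distro-wide LTO flags looks like this (the commit itself may do it
differently):

    # Keep makepkg from adding -flto to CFLAGS/CXXFLAGS for this package.
    options=(!lto)
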
Also enables orttraining and uses shared libs.
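If done through onnxruntime's CMake options, that combination looks
roughly like the following (the option names are my assumption; verify
against the version being packaged):

    # Hypothetical configure flags.
    cmake -B build -S cmake \
      -Donnxruntime_BUILD_SHARED_LIB=ON \
      -Donnxruntime_ENABLE_TRAINING=ON
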