path: root/install-orttraining-files.diff
author    Chih-Hsuan Yen  2021-12-31 14:06:49 +0800
committer Chih-Hsuan Yen  2021-12-31 14:07:10 +0800
commit    0185531906bda3a9aba93bbb0f3dcfeb0ae671ad (patch)
tree      c5101895f73be2e196ec65113b8dde92f72b256d /install-orttraining-files.diff
parent    a84b5454e84aed96aa78cc311c407d496b3855d6 (diff)
download  aur-0185531906bda3a9aba93bbb0f3dcfeb0ae671ad.tar.gz
various fixes/improvements
* Switch back from clang to gcc. Apparently upstream tests more on gcc than on clang, and there are several compatibility issues between onnxruntime and clang [1,2] as well as between CUDA and clang [3]. On the other hand, the internal compiler errors from gcc have been fixed.
* Add more optional dependencies for several sub-packages, as motivated by [4].
* Fix missing orttraining Python files, discovered while checking optional dependencies.
* Don't hard-code usage of GNU make, as suggested in [4].

[1] https://github.com/microsoft/onnxruntime/pull/10014
[2] https://github.com/microsoft/onnxruntime/pull/10160
[3] https://forums.developer.nvidia.com/t/building-with-clang-cuda-11-3-0-works-but-with-cuda-11-3-1-fails-regression/182176
[4] https://aur.archlinux.org/packages/python-onnxruntime/#comment-843401
Diffstat (limited to 'install-orttraining-files.diff')
-rw-r--r--  install-orttraining-files.diff  | 19
1 file changed, 19 insertions(+), 0 deletions(-)
diff --git a/install-orttraining-files.diff b/install-orttraining-files.diff
new file mode 100644
index 000000000000..e95601fcd183
--- /dev/null
+++ b/install-orttraining-files.diff
@@ -0,0 +1,19 @@
+--- a/setup.py 2021-12-29 22:44:09.924917943 +0800
++++ b/setup.py 2021-12-29 22:49:16.216878004 +0800
+@@ -355,7 +355,7 @@
+ 'Operating System :: Microsoft :: Windows',
+ 'Operating System :: MacOS'])
+
+-if enable_training:
++if True:
+ packages.extend(['onnxruntime.training',
+ 'onnxruntime.training.amp',
+ 'onnxruntime.training.optim',
+@@ -373,6 +373,7 @@
+ package_data['onnxruntime.training.ortmodule.torch_cpp_extensions.cuda.torch_gpu_allocator'] = ['*.cc']
+ package_data['onnxruntime.training.ortmodule.torch_cpp_extensions.cuda.fused_ops'] = \
+ ['*.cpp', '*.cu', '*.cuh', '*.h']
++if enable_training:
+ requirements_file = "requirements-training.txt"
+ # with training, we want to follow this naming convention:
+ # stable:
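The patch above changes `if enable_training:` to `if True:` so that the `onnxruntime.training` sub-packages are always added to the setuptools `packages` list, while the gate on `enable_training` is re-inserted just before the requirements-file selection so that only the training-specific requirements stay conditional. A minimal sketch of that logic (hypothetical variable values and a `requirements.txt` fallback; not the actual setup.py):

```python
# Sketch of the packaging logic the patch rearranges.
enable_training = False  # hypothetical build flag; off in this AUR build

packages = ["onnxruntime"]  # base package list passed to setuptools

# After the patch this runs unconditionally, so the orttraining Python
# files are installed even when the build itself has training disabled.
packages.extend([
    "onnxruntime.training",
    "onnxruntime.training.amp",
    "onnxruntime.training.optim",
])

# The requirements file remains gated on the training flag.
if enable_training:
    requirements_file = "requirements-training.txt"
else:
    requirements_file = "requirements.txt"  # assumed default
```

With `enable_training = False`, the training sub-packages still end up in `packages`, which is exactly the "missing orttraining Python files" fix described in the commit message.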