Package Details: baballonia v1.1.0.9-1

Git Clone URL: https://aur.archlinux.org/baballonia.git (read-only)
Package Base: baballonia
Description: A cross-platform, hardware-agnostic VR eye and face tracking application.
Upstream URL: https://github.com/Project-Babble/Baballonia
Keywords: babble eye face tracking vr
Licenses: LicenseRef-Babble Software Distribution License 1.0
Submitter: awh
Maintainer: awh
Last Packager: awh
Votes: 0
Popularity: 0.000000
First Submitted: 2025-10-29 02:56 (UTC)
Last Updated: 2026-02-02 18:06 (UTC)

Latest Comments


awh commented on 2026-01-31 23:01 (UTC)

@hype-vhs added - not sure why namcap didn't catch that.

hype-vhs commented on 2026-01-31 22:48 (UTC)

It fails to build in a clean chroot. Could you add unzip to makedepends?

exuvo commented on 2025-12-22 18:36 (UTC)

I built the new package and it works. I ran the trainer manually from a terminal with my previous training data and everything ran as expected on the GPU. Thanks for your hard work! Have a good Christmas!

awh commented on 2025-12-22 18:14 (UTC) (edited on 2025-12-22 18:15 (UTC) by awh)

I released v1.1.0.9rc3-3 - let me know if it works for you. The solution actually ends up being quite a bit nicer, since GPU support can be swapped in via alternative Arch packages that provide the dependency, and we don't have to build a giant blob in this package.

I might look at building BabbleCalibration from source at some point too, but for now it works.

exuvo commented on 2025-12-21 16:53 (UTC)

The one based on https://github.com/Project-Babble/BabbleTrainer/blob/main/requirements_gpu.txt (CUDA-only support, I think?) was too large to build on GitHub, according to the dev. The version under releases is still CPU-only for non-Windows.

As far as I understand, PyTorch has no build that supports both NVIDIA and AMD at the same time on non-Windows platforms. On Windows it uses a Microsoft DirectML layer in between to support multiple GPU vendors.

awh commented on 2025-12-20 19:42 (UTC) (edited on 2025-12-20 19:43 (UTC) by awh)

I pushed a pkgrel update to fetch the prerelease version of BabbleTrainer. If the prebuilt binary doesn't include the necessary libraries for Linux GPU support, then that should be fixed upstream, assuming there are no platform-specific requirements that mean it can't be built generically.

edit: or do you mean if those Arch packages are installed, the prebuilt version works fine?

exuvo commented on 2025-12-20 00:53 (UTC) (edited on 2025-12-20 00:53 (UTC) by exuvo)

BabbleTrainer changes are merged into main now. Instead of using the pre-built trainer from releases, do what I said in my previous comment. Needed Python dependencies from https://github.com/Project-Babble/BabbleTrainer/blob/main/requirements_gpu.txt: python-onnx python-onnxscript python-pillow python-tqdm python-opencv python-numpy python-pytorch

exuvo commented on 2025-12-15 18:36 (UTC) (edited on 2025-12-15 19:30 (UTC) by exuvo)

It seems BabbleTrainer requires write permissions to its own directory for the temp files it creates. It obviously does not have those permissions under /usr, nor should it.

I fixed it in https://github.com/Project-Babble/BabbleTrainer/pull/4, we just need to wait.
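The general shape of such a fix is to redirect scratch files to a writable location instead of the script's install directory. A minimal sketch (the helper name `temp_output_path` is hypothetical; the actual PR may do this differently):

```python
import os
import tempfile

def temp_output_path(filename):
    """Return a path for a scratch file in a writable location,
    rather than next to the (read-only) installed script.
    Hypothetical helper; the actual fix in PR #4 may differ."""
    scratch_dir = os.path.join(tempfile.gettempdir(), "babble_trainer")
    os.makedirs(scratch_dir, exist_ok=True)
    return os.path.join(scratch_dir, filename)

# Unlike a path under /usr/share, this directory is writable by the user.
path = temp_output_path("checkpoint.onnx")
```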

When that is merged: prepend BabbleTrainer/main.py with "#!/usr/bin/env python3", place the .py files in Calibration/Linux/Trainer, and symlink BabbleTrainer to main.py. "BabbleTrainer/babble_data/setup.py install" also needs to be run to install a Python package. Then add python-pytorch as a dependency so the user can pick the right python-pytorch-xxx for their GPU. This also saves some space, since the package would no longer include a copy of the Python libraries.

exuvo commented on 2025-12-14 23:07 (UTC) (edited on 2025-12-14 23:08 (UTC) by exuvo)

Getting the GPU working for the trainer literally only took this. I'll try to get it upstreamed, but the file will still only support either CUDA or ROCm if it is packaged as one file with libraries included.

$ git diff
diff --git a/main.py b/main.py
index b3d7d24..991727f 100644
--- a/main.py
+++ b/main.py
@@ -30,7 +30,11 @@ try:
     import torch_directml
     device = torch_directml.device()
 except:
-    device = "cpu"
+    try:
+        import torch.cuda
+        device = torch.device('cuda')
+    except:
+        device = "cpu"
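The fallback chain in the diff above boils down to: prefer DirectML, then CUDA, then CPU. A sketch of the same logic with the availability checks injected as flags so it can be exercised without a GPU (`select_device` is a hypothetical name, not from the actual patch):

```python
def select_device(directml_available, cuda_available):
    """Pick a compute backend, mirroring the try/except chain
    in the diff: DirectML first, then CUDA, then CPU fallback."""
    if directml_available:
        return "directml"
    if cuda_available:
        return "cuda"
    return "cpu"

# In real code the flags would come from the imports themselves, e.g.:
#   try: import torch_directml; dml_ok = True
#   except ImportError: dml_ok = False
#   cuda_ok = torch.cuda.is_available()
```

Note that the bare `except:` clauses in the diff also swallow errors other than a missing module; catching `ImportError` specifically and checking `torch.cuda.is_available()` would be more robust.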

exuvo commented on 2025-12-14 22:11 (UTC)

Ah, I missed that import line. Good catch.