Package Details: whisper.cpp 1.8.3-1

Git Clone URL: https://aur.archlinux.org/whisper.cpp.git (read-only)
Package Base: whisper.cpp
Description: Port of OpenAI's Whisper model in C/C++ (with OpenBLAS + Vulkan optimizations)
Upstream URL: https://github.com/ggerganov/whisper.cpp
Licenses: MIT
Submitter: robertfoster
Maintainer: robertfoster
Last Packager: robertfoster
Votes: 21
Popularity: 0.167839
First Submitted: 2023-03-10 17:32 (UTC)
Last Updated: 2026-01-17 01:17 (UTC)

Latest Comments


gwuensch commented on 2026-01-18 20:29 (UTC)

Now that libggml has releases and whisper.cpp has been updated to work with the latest release, I'm in favor of changing the dependency to libggml.
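If that switch happens, the PKGBUILD change itself would be small. A sketch, assuming the stable package is named libggml (as in aur/libggml mentioned further down the thread):

```shell
# Hypothetical PKGBUILD fragment for building against the system ggml
# stable package instead of the -git one:
depends=('libggml' 'sdl2-compat')   # previously: 'libggml-git'

# The build already passes -DWHISPER_USE_SYSTEM_GGML=1, so only the
# dependency line would need to change.
```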

gwuensch commented on 2026-01-15 22:10 (UTC)

It's just the old checksum from the previous version. Judging from the commit message, the version bump was automated, so can you please correct this and check whether it's the GitHub Action's fault, @robertfoster?

Marzal commented on 2026-01-15 21:44 (UTC)

I can't install 1.8.3.

With wget https://github.com/ggerganov/whisper.cpp/archive/v1.8.3.tar.gz I get 870ba21409cdf66697dc4db15ebdb13bc67037d76c7cc63756c81471d8f1731a v1.8.3.tar.gz

Package has sha256sums=('bcee25589bb8052d9e155369f6759a05729a2022d2a8085c1aa4345108523077' '5f880edae417c7083a9403260e5c381285e4c52ccc39f127c6510fdfa249c1ad')
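For reference, the checksums can be verified or regenerated locally. A sketch; updpkgsums is provided by pacman-contrib:

```shell
# Verify the tarball hash by hand:
curl -LO https://github.com/ggerganov/whisper.cpp/archive/v1.8.3.tar.gz
sha256sum v1.8.3.tar.gz

# Or, from the directory containing the PKGBUILD, regenerate every sum:
updpkgsums        # rewrites the sha256sums=() array in place
# makepkg -g      # alternative: print fresh sums to paste in manually
```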

gwuensch commented on 2026-01-08 19:32 (UTC) (edited on 2026-01-08 20:25 (UTC) by gwuensch)

Unfortunately it didn't work out; 0.9.4 is too old and the build fails:

whisper.cpp/src/whisper.cpp-1.8.2/examples/talk-llama/llama-model.cpp: In constructor ‘llm_build_apertus::llm_build_apertus(const llama_model&, const llm_graph_params&)’:
whisper.cpp/src/whisper.cpp-1.8.2/examples/talk-llama/llama-model.cpp:19330:43: error: ‘ggml_xielu’ was not declared in this scope; did you mean ‘ggml_silu’?
19330 |                 ggml_tensor * activated = ggml_xielu(ctx0, up, alpha_n_val, alpha_p_val, beta_val, eps_val);
      |                                           ^~~~~~~~~~
      |                                           ggml_silu
make[2]: *** [examples/talk-llama/CMakeFiles/whisper-talk-llama.dir/build.make:373: examples/talk-llama/CMakeFiles/whisper-talk-llama.dir/llama-model.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:733: examples/talk-llama/CMakeFiles/whisper-talk-llama.dir/all] Error 2
make: *** [Makefile:136: all] Error 2

You could either pin libggml to the version vendored with whisper.cpp 1.8.2, which is based on this commit, or to the latest working commit, which is this one according to @berif69361. whisper.cpp's master branch now includes ggml 0.9.5, so let's just hope they stick to these stable releases in the future.

I also checked if it would be possible to just backport some commits for ggml 0.9.5 compatibility, but this includes multiple commits made over several weeks, so probably not worth the hassle.

Edit: I can confirm that whisper.cpp 1.8.2 successfully builds with either ggml commit.
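Pinning a libggml package to a fixed commit could look like the following PKGBUILD fragment. A sketch only: the commit hash is a placeholder for one of the commits linked above, and the upstream repository is assumed to be github.com/ggml-org/ggml:

```shell
# Hypothetical fragment for a libggml PKGBUILD pinned to a known-good
# commit; replace <known-good-commit> with the actual hash.
source=("ggml::git+https://github.com/ggml-org/ggml.git#commit=<known-good-commit>")
sha256sums=('SKIP')   # git sources are pinned by the commit hash itself
```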

gwuensch commented on 2026-01-08 18:35 (UTC)

I expect 0.9.4 to work, but I'll check and report back.

xiota commented on 2026-01-08 17:41 (UTC) (edited on 2026-01-08 17:44 (UTC) by xiota)

@gwuensch Would you mind checking which version works okay? 0.9.4?

I'm willing to roll back the version as long as no one complains about epoch.

I'll also look at revising the -git package to make it easier to select a specific commit / version.

gwuensch commented on 2026-01-08 16:35 (UTC)

The latest stable ggml release (0.9.5) is still too new for the latest whisper.cpp release (1.8.2), but I'm also in favor of switching over. It should hopefully limit the amount of future incompatibilities.

xiota commented on 2026-01-08 06:29 (UTC)

The problem with using vendored ggml is it conflicts with other packages that use ggml.

libggml previously didn't have stable releases, but now it does. So I (re)created aur/libggml. I haven't tested it, but I expect that switching to the stable package would resolve the recently reported build issues.

gwuensch commented on 2026-01-08 03:47 (UTC)

@harre I'm not at all surprised that this works. The ggml version included with whisper.cpp is one from October, long before the commit in question. ggml has already been synced to a newer version on whisper's master branch, so I expect the incompatibility to be fixed too. There is just no release for that yet.

harre commented on 2026-01-08 03:09 (UTC)

I was able to get it working without system ggml with this change; I guess your system needs to support Vulkan, though. I'm on an AMD 7800 XT GPU and get quite fast whisper speeds, around 42x realtime on the large model.

diff --git a/PKGBUILD b/PKGBUILD
index c9119fc..9104fca 100644
--- a/PKGBUILD
+++ b/PKGBUILD
@@ -7,7 +7,7 @@ pkgdesc="Port of OpenAI's Whisper model in C/C++ (with OpenBLAS + Vulkan optimiz
 arch=('armv7h' 'aarch64' 'x86_64')
 url="https://github.com/ggerganov/whisper.cpp"
 license=("MIT")
-depends=('libggml-git' 'sdl2-compat')
+depends=('sdl2-compat')
 makedepends=(
   'cmake'
   'git'
@@ -31,7 +31,8 @@ build() {
     -DWHISPER_SDL2=1 \
     -DWHISPER_BUILD_SERVER=0 \
     -DWHISPER_BUILD_TESTS=0 \
-    -DWHISPER_USE_SYSTEM_GGML=1
+    -DWHISPER_USE_SYSTEM_GGML=0 \
+    -DGGML_VULKAN=1

   cmake --build "${srcdir}/build"
 }