Package Details: whisper.cpp-cuda 1.8.3-1

Git Clone URL: https://aur.archlinux.org/whisper.cpp-cuda.git (read-only)
Package Base: whisper.cpp-cuda
Description: Port of OpenAI's Whisper model in C/C++ (with NVIDIA CUDA optimizations)
Upstream URL: https://github.com/ggerganov/whisper.cpp
Licenses: MIT
Conflicts: whisper.cpp
Provides: whisper.cpp
Submitter: robertfoster
Maintainer: robertfoster
Last Packager: robertfoster
Votes: 1
Popularity: 0.019270
First Submitted: 2024-12-11 23:08 (UTC)
Last Updated: 2026-01-17 01:18 (UTC)

Required by (13)

Sources (2)

Latest Comments


Moilleadoir commented on 2026-01-04 04:13 (UTC)

According to upstream issue 2774, the problematic binaries should get a whisper- prefix.

demizer commented on 2026-01-03 19:34 (UTC)

As @WinTao says, building from git works.

# Maintainer: robertfoster

_pkgbase=whisper.cpp
pkgname="${_pkgbase}-cuda"
pkgver=1.8.2.r0.g12345678
pkgrel=1
pkgdesc="Port of OpenAI's Whisper model in C/C++ (with NVIDIA CUDA optimizations)"
arch=('armv7h' 'aarch64' 'x86_64')
url="https://github.com/ggerganov/whisper.cpp"
license=("MIT")
depends=('libggml-cuda-git' 'nvidia-utils' 'sdl2-compat')
conflicts=("${_pkgbase}")
provides=("${_pkgbase}")
makedepends=(
  'cmake'
  'git'
  'cuda'
)
source=(
  "${_pkgbase}::git+${url}.git"
  disable-deprecated.patch
)

pkgver() {
  cd "${srcdir}/${_pkgbase}"
  git describe --long --tags --abbrev=8 | sed 's/^v//;s/\([^-]*-g\)/r\1/;s/-/./g'
}

prepare() {
  cd "${srcdir}/${_pkgbase}"
  patch -Np1 -i "${srcdir}/disable-deprecated.patch"
}

build() {
  cmake \
    -B "${srcdir}/build" \
    -S "${srcdir}/${_pkgbase}" \
    -DCMAKE_INSTALL_PREFIX=/usr \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_CUDA_ARCHITECTURES=89-real \
    -DWHISPER_SDL2=1 \
    -DWHISPER_BUILD_SERVER=0 \
    -DWHISPER_BUILD_TESTS=0 \
    -DWHISPER_USE_SYSTEM_GGML=1

  cmake --build "${srcdir}/build"
}

package() {
  DESTDIR="${pkgdir}" cmake --install "${srcdir}/build"

  cp -r "${srcdir}/build/bin" "${pkgdir}/usr"
  install -Dm644 "${srcdir}/${_pkgbase}/LICENSE" \
    -t "${pkgdir}/usr/share/licenses/${pkgname}"
}

sha256sums=('SKIP'
            '5f880edae417c7083a9403260e5c381285e4c52ccc39f127c6510fdfa249c1ad')
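For reference, the pkgver() above rewrites `git describe --long --tags` output into a pacman-friendly version string. A minimal sketch of what the sed pipeline does (the describe string below is a made-up example, not a real whisper.cpp commit):

```shell
# Made-up `git describe --long --tags --abbrev=8` output
describe="v1.8.2-0-g12345678"

# The sed pipeline, as used in pkgver():
#   1. strip the leading "v"
#   2. prefix the commit count in "<count>-g<hash>" with "r"
#   3. turn the remaining hyphens into dots
pkgver=$(printf '%s\n' "$describe" \
  | sed 's/^v//;s/\([^-]*-g\)/r\1/;s/-/./g')

echo "$pkgver"   # 1.8.2.r0.g12345678
```

This yields version strings like the `1.8.2.r0.g12345678` shown in the PKGBUILD, which sort correctly for pacman across successive commits.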

WinTao commented on 2025-12-29 21:14 (UTC)

I'm hitting the same GGML_KQ_MASK_PAD missing-symbol error.

The master branch of whisper.cpp has a commit from 2025/12/12 that fixes this issue: https://github.com/ggml-org/whisper.cpp/commit/cd9b8c6d1889ff8d40c201cd3eeec1a2c1dc5f8d

But no new version has been released yet. Modifying the PKGBUILD to build from the current git master branch fixed it for me.

Another issue: installing the package creates binaries such as /usr/bin/stream (which conflicts with imagemagick) and /usr/bin/lsp (which sounds likely to conflict with something else). Maybe we shouldn't blindly install all the generated binaries but select them by name?
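One possible shape for that, as a hedged sketch (the paths and binary names below are illustrative stand-ins, not the actual build tree): install the example binaries one by one, giving unprefixed names a whisper- prefix, instead of copying build/bin wholesale into /usr/bin.

```shell
set -e
# Illustrative stand-in for the cmake build tree
demo=$(mktemp -d)
mkdir -p "$demo/build/bin"
touch "$demo/build/bin/stream" "$demo/build/bin/whisper-cli"

# Install each binary under the package's /usr/bin,
# prefixing any name that is not already whisper-*
for f in "$demo"/build/bin/*; do
  base=$(basename "$f")
  case "$base" in
    whisper-*) install -Dm755 "$f" "$demo/pkg/usr/bin/$base" ;;
    *)         install -Dm755 "$f" "$demo/pkg/usr/bin/whisper-$base" ;;
  esac
done

ls "$demo/pkg/usr/bin"   # whisper-cli  whisper-stream
```

This avoids shipping a bare /usr/bin/stream, so the conflict with imagemagick goes away without dropping any binaries.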

Moilleadoir commented on 2025-12-18 11:53 (UTC)

[  1%] Building CXX object src/CMakeFiles/whisper.dir/whisper.cpp.o
[  3%] Building CXX object examples/CMakeFiles/common-sdl.dir/common-sdl.cpp.o
In file included from /media/sonraí/ext4/pamac-build/whisper.cpp-cuda/src/whisper.cpp-1.8.2/src/../include/whisper.h:4,
                 from /media/sonraí/ext4/pamac-build/whisper.cpp-cuda/src/whisper.cpp-1.8.2/src/whisper.cpp:1:
/media/sonraí/ext4/pamac-build/whisper.cpp-cuda/src/whisper.cpp-1.8.2/src/whisper.cpp: In function ‘ggml_cgraph* whisper_build_graph_decoder(whisper_context&, whisper_state&, const whisper_batch&, bool, bool)’:
/media/sonraí/ext4/pamac-build/whisper.cpp-cuda/src/whisper.cpp-1.8.2/src/whisper.cpp:2504:101: error: ‘GGML_KQ_MASK_PAD’ was not declared in this scope
 2504 |     struct ggml_tensor * KQ_mask = ggml_new_tensor_3d(ctx0, GGML_TYPE_F32, n_kv, GGML_PAD(n_tokens, GGML_KQ_MASK_PAD), 1);
      |                                                                                                     ^~~~~~~~~~~~~~~~
/media/sonraí/ext4/pamac-build/whisper.cpp-cuda/src/whisper.cpp-1.8.2/src/whisper.cpp: In function ‘bool whisper_decode_internal(whisper_context&, whisper_state&, const whisper_batch&, int, bool, ggml_abort_callback, void*)’:
/media/sonraí/ext4/pamac-build/whisper.cpp-cuda/src/whisper.cpp-1.8.2/src/whisper.cpp:2928:63: error: ‘GGML_KQ_MASK_PAD’ was not declared in this scope
 2928 |                 for (int i = n_tokens; i < GGML_PAD(n_tokens, GGML_KQ_MASK_PAD); ++i) {
      |                                                               ^~~~~~~~~~~~~~~~
[  5%] Linking CXX static library libcommon-sdl.a
[  5%] Built target common-sdl
make[2]: *** [src/CMakeFiles/whisper.dir/build.make:79: src/CMakeFiles/whisper.dir/whisper.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:403: src/CMakeFiles/whisper.dir/all] Error 2
make: *** [Makefile:136: all] Error 2
==> ERROR: A failure occurred in build().
    Aborting...

luiscastro193 commented on 2025-10-16 09:33 (UTC)

It compiled fine for me

Kcchouette commented on 2025-10-16 09:29 (UTC) (edited on 2025-10-16 09:44 (UTC) by Kcchouette)

[ 86%] Linking CXX executable ../../../bin/wchess
[ 87%] Building CXX object examples/talk-llama/CMakeFiles/whisper-talk-llama.dir/llama-model-saver.cpp.o
[ 89%] Building CXX object examples/talk-llama/CMakeFiles/whisper-talk-llama.dir/llama-model.cpp.o
/var/tmp/pamac-build/whisper.cpp-cuda/src/whisper.cpp-1.8.2/examples/talk-llama/llama-model.cpp: In constructor ‘llm_build_apertus::llm_build_apertus(const llama_model&, const llm_graph_params&)’:
/var/tmp/pamac-build/whisper.cpp-cuda/src/whisper.cpp-1.8.2/examples/talk-llama/llama-model.cpp:19330:43: error: ‘ggml_xielu’ was not declared in this scope; did you mean ‘ggml_silu’?
19330 |                 ggml_tensor * activated = ggml_xielu(ctx0, up, alpha_n_val, alpha_p_val, beta_val, eps_val);
      |                                           ^~~~~~~~~~
      |                                           ggml_silu
make[2]: *** [examples/talk-llama/CMakeFiles/whisper-talk-llama.dir/build.make:373: examples/talk-llama/CMakeFiles/whisper-talk-llama.dir/llama-model.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:733: examples/talk-llama/CMakeFiles/whisper-talk-llama.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 89%] Built target wchess
make: *** [Makefile:136: all] Error 2
==> ERROR: A failure occurred in build().
    Aborting...

Edit: rebuilding libggml-cuda-git fixed the problem.

evorster commented on 2025-08-18 14:24 (UTC)

So close!

-- Installing: /home/evert/Aur/whisper.cpp-cuda/pkg/whisper.cpp-cuda/usr/lib/pkgconfig/whisper.pc
/home/evert/Aur/whisper.cpp-cuda/PKGBUILD: line 40: cd: /home/evert/Aur/whisper.cpp-cuda/src/build/bin: No such file or directory
==> ERROR: A failure occurred in package().
    Aborting...
 -> error making: whisper.cpp-cuda-exit status 4
 -> Failed to install the following packages. Manual intervention is required:
whisper.cpp-cuda - exit status 4

ashs commented on 2025-08-11 16:24 (UTC) (edited on 2025-08-11 16:26 (UTC) by ashs)

I get this error at 95%:

/home/ashish/.cache/yay/whisper.cpp-cuda/src/whisper.cpp-1.7.6/examples/talk-llama/llama-model.cpp:224:42: error: too few arguments to function ‘ggml_tensor* ggml_ssm_scan(ggml_context*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*)’
  224 |                 op_tensor = ggml_ssm_scan(ctx, s, x, dt, w, B, C);
      |                             ~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~

dbb commented on 2025-07-06 11:53 (UTC)

Fails to build in a clean chroot with:

/usr/bin/ld: warning: libcuda.so.1, needed by /usr/lib/libggml-cuda.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: /usr/lib/libggml-cuda.so: undefined reference to `cuMemCreate'
/usr/bin/ld: /usr/lib/libggml-cuda.so: undefined reference to `cuMemAddressReserve'
/usr/bin/ld: /usr/lib/libggml-cuda.so: undefined reference to `cuMemUnmap'
/usr/bin/ld: /usr/lib/libggml-cuda.so: undefined reference to `cuMemSetAccess'
/usr/bin/ld: /usr/lib/libggml-cuda.so: undefined reference to `cuDeviceGet'
/usr/bin/ld: /usr/lib/libggml-cuda.so: undefined reference to `cuMemAddressFree'
/usr/bin/ld: /usr/lib/libggml-cuda.so: undefined reference to `cuGetErrorString'
/usr/bin/ld: /usr/lib/libggml-cuda.so: undefined reference to `cuDeviceGetAttribute'
/usr/bin/ld: /usr/lib/libggml-cuda.so: undefined reference to `cuMemMap'
/usr/bin/ld: /usr/lib/libggml-cuda.so: undefined reference to `cuMemRelease'
/usr/bin/ld: /usr/lib/libggml-cuda.so: undefined reference to `cuMemGetAllocationGranularity'
collect2: error: ld returned 1 exit status
make[2]: *** [tests/CMakeFiles/test-vad.dir/build.make:107: bin/test-vad] Error 1
make[1]: *** [CMakeFiles/Makefile2:1418: tests/CMakeFiles/test-vad.dir/all] Error 2
make: *** [Makefile:146: all] Error 2

Had to add nvidia-utils to dependencies to get it to build.
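In PKGBUILD terms, that fix amounts to adding nvidia-utils (which ships libcuda.so.1, the library the linker could not find) to the depends array, matching what the current PKGBUILD now declares:

```shell
# nvidia-utils provides libcuda.so.1, which libggml-cuda.so links against
depends=('libggml-cuda-git' 'nvidia-utils' 'sdl2-compat')
```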

evorster commented on 2025-06-27 17:18 (UTC) (edited on 2025-06-27 17:18 (UTC) by evorster)

I now get this compile error:

[ 84%] Building CXX object examples/talk-llama/CMakeFiles/whisper-talk-llama.dir/unicode-data.cpp.o
/home/evert/Aur/whisper.cpp-cuda/src/whisper.cpp-1.7.6/examples/talk-llama/llama-graph.cpp: In member function ‘ggml_tensor* llm_graph_context::build_moe_ffn(ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, int64_t, int64_t, llm_ffn_op_type, bool, bool, float, llama_expert_gating_func_type, int) const’:
/home/evert/Aur/whisper.cpp-cuda/src/whisper.cpp-1.7.6/examples/talk-llama/llama-graph.cpp:722:34: error: ‘ggml_repeat_4d’ was not declared in this scope; did you mean ‘ggml_repeat’?
  722 |         ggml_tensor * repeated = ggml_repeat_4d(ctx0, cur, n_embd, n_expert_used, n_tokens, 1);
      |                                  ^~~~~~~~~~~~~~
      |                                  ggml_repeat
[ 85%] Linking CXX executable ../../bin/vad-speech-segments
make[2]: *** [examples/talk-llama/CMakeFiles/whisper-talk-llama.dir/build.make:205: examples/talk-llama/CMakeFiles/whisper-talk-llama.dir/llama-graph.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....