Package Details: whisper.cpp-vulkan 1.7.1-4

Git Clone URL: https://aur.archlinux.org/whisper.cpp.git (read-only)
Package Base: whisper.cpp
Description: Port of OpenAI's Whisper model in C/C++ (with Vulkan optimizations)
Upstream URL: https://github.com/ggerganov/whisper.cpp
Licenses: MIT
Conflicts: whisper.cpp
Provides: whisper.cpp
Submitter: robertfoster
Maintainer: robertfoster
Last Packager: robertfoster
Votes: 11
Popularity: 0.63
First Submitted: 2023-03-10 17:32 (UTC)
Last Updated: 2024-11-04 13:56 (UTC)

Latest Comments


homocomputeris commented on 2024-10-10 23:12 (UTC)

Where are models supposed to be placed?
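For what it's worth, whisper.cpp itself has no fixed model directory: the binary takes a model path via the -m flag, so models can live anywhere. A sketch under that assumption (the download URL is the upstream ggml model mirror; the directory and the binary name are illustrative and may differ between package versions):

```shell
# Keep models in a user directory of your choice; whisper.cpp only needs
# the path passed via -m at run time.
mkdir -p ~/.local/share/whisper.cpp
curl -L -o ~/.local/share/whisper.cpp/ggml-base.en.bin \
    https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin
# Binary name depends on the build (upstream has shipped it as `main` and
# later `whisper-cli`):
whisper-cli -m ~/.local/share/whisper.cpp/ggml-base.en.bin -f audio.wav
```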

dreieck commented on 2024-10-10 11:12 (UTC) (edited on 2024-10-13 10:12 (UTC) by dreieck)

Your packages llama.cpp-vulkan-git and whisper.cpp-clblas conflict with each other, which is not reflected in the conflicts array.

Please add the corresponding conflicts entry, or, to ensure compatibility, consider stripping out the bundled libggml files and depending on a separate libggml package instead:

error: failed to commit transaction (conflicting files)
/usr/include/ggml-alloc.h exists in both 'llama.cpp-vulkan-git' and 'whisper.cpp-clblas'
/usr/include/ggml-backend.h exists in both 'llama.cpp-vulkan-git' and 'whisper.cpp-clblas'
/usr/include/ggml-blas.h exists in both 'llama.cpp-vulkan-git' and 'whisper.cpp-clblas'
/usr/include/ggml-cann.h exists in both 'llama.cpp-vulkan-git' and 'whisper.cpp-clblas'
/usr/include/ggml-cuda.h exists in both 'llama.cpp-vulkan-git' and 'whisper.cpp-clblas'
/usr/include/ggml-kompute.h exists in both 'llama.cpp-vulkan-git' and 'whisper.cpp-clblas'
/usr/include/ggml-metal.h exists in both 'llama.cpp-vulkan-git' and 'whisper.cpp-clblas'
/usr/include/ggml-rpc.h exists in both 'llama.cpp-vulkan-git' and 'whisper.cpp-clblas'
/usr/include/ggml-sycl.h exists in both 'llama.cpp-vulkan-git' and 'whisper.cpp-clblas'
/usr/include/ggml-vulkan.h exists in both 'llama.cpp-vulkan-git' and 'whisper.cpp-clblas'
/usr/lib/libggml.so exists in both 'llama.cpp-vulkan-git' and 'whisper.cpp-clblas'
Errors occurred, no packages were upgraded.
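The requested fix would be a small PKGBUILD change. A sketch (hypothetical: the exact package list is the maintainer's call, and splitting the shared ggml files into a separate libggml package would make it unnecessary):

```shell
# PKGBUILD fragment: declare file-level conflicts with the other packages
# that ship the same ggml headers and libraries.
conflicts=('whisper.cpp' 'llama.cpp-vulkan-git')
provides=('whisper.cpp')
```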

Regards!

dreieck commented on 2024-10-10 11:06 (UTC)

You need to install the license file:

whisper.cpp-clblas E: Uncommon license identifiers such as 'MIT' require license files below /usr/share/licenses/whisper.cpp-clblas/ or switching to common license identifiers. Found 0/1 required license files.
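For Arch packages using an MIT license this is normally a single install line in package(). A sketch (the location of the LICENSE file inside the source tree is assumed):

```shell
# PKGBUILD package() fragment: install the MIT license text where namcap
# expects it, under /usr/share/licenses/$pkgname/.
package() {
  cd "$srcdir/whisper.cpp"
  install -Dm644 LICENSE "$pkgdir/usr/share/licenses/$pkgname/LICENSE"
}
```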

Regards and thanks for maintaining!

homocomputeris commented on 2024-10-07 22:17 (UTC)

Can the CUDA dependency be dropped somehow for Intel?
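For a local build it can: upstream's CMake exposes per-backend toggles. A sketch (flag names are upstream CMake options in recent releases; older releases used WHISPER_CUBLAS instead of GGML_CUDA, so check the README for your version):

```shell
# Build whisper.cpp without CUDA, with the Vulkan backend enabled instead:
cmake -B build -DGGML_CUDA=OFF -DGGML_VULKAN=ON
cmake --build build --config Release
```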

leuko commented on 2024-08-11 15:27 (UTC) (edited on 2024-08-11 16:08 (UTC) by leuko)

Using an AUR helper I got:

Build whisper.cpp with CUBlas (NVIDIA CUDA)
...
CMake Error at /usr/share/cmake/Modules/CMakeDetermineCompilerId.cmake:838 (message):
  Compiling the CUDA compiler identification source file
  "CMakeCUDACompilerId.cu" failed.

Building in a clean chroot solved the problem.
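For anyone hitting the same error, a clean-chroot build with Arch's devtools looks roughly like this (devtools must be installed; see the Arch wiki page "Building in a clean chroot"):

```shell
# Fetch the AUR package and build it in a throwaway chroot instead of the
# host system, avoiding stale-toolchain issues like the CUDA one above:
git clone https://aur.archlinux.org/whisper.cpp.git
cd whisper.cpp
extra-x86_64-build
```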

recallmenot commented on 2024-06-25 02:28 (UTC) (edited on 2024-06-25 02:49 (UTC) by recallmenot)

Ah, so I discovered the trick to using whisper.cpp-openvino: place the converted model and its XML in the same directory as the regular model: https://huggingface.co/twdragon/whisper.cpp-openvino/tree/main Then launch with --ov-e-device GPU added to the command line, otherwise it will run OpenVINO on the CPU.
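Spelled out as a command line (file names are illustrative; the -encoder-openvino files follow upstream's naming convention and must sit next to the ggml model, and the binary name depends on the build):

```shell
# models/ contains ggml-base.bin plus ggml-base-encoder-openvino.{bin,xml};
# --ov-e-device GPU moves the OpenVINO encoder off the CPU:
whisper-cli -m models/ggml-base.bin -f audio.wav --ov-e-device GPU
```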

But it still appears I'm doing something wrong, as I get no performance increase: the same 76s for my 708s test file as OpenVINO on the CPU, so about 9.3x speed with the base model. CPU load is still fairly high, and my UHD 770 (sorry, only an iGPU) shows only light and periodic usage by whisper.cpp in intel-gpu-top. My RAM is only DDR4 3600MHz, though. Using the regular whisper.cpp build is slower, but its model file is larger as well: 167s, so only 4.2x speed. I could imagine this is due to the OpenVINO model using smaller floats (= less precision)?

robertfoster commented on 2024-05-03 18:25 (UTC)

@Melon_bread @solarisfire hipblas support will be taken into consideration once it is referenced in the official README.md.

solarisfire commented on 2024-04-26 21:32 (UTC)

Build seems to be broken with latest rocm-llvm?

-- Build files have been written to: /home/solarisfire/.cache/yay/whisper.cpp/src/whisper.cpp-hipblas/build
[  0%] Built target json_cpp
[  5%] Building CXX object CMakeFiles/ggml-rocm.dir/ggml-cuda.cu.o
c++: error: language hip not recognized
c++: error: language hip not recognized
make[2]: *** [CMakeFiles/ggml-rocm.dir/build.make:76: CMakeFiles/ggml-rocm.dir/ggml-cuda.cu.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:211: CMakeFiles/ggml-rocm.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
==> ERROR: A failure occurred in build().
    Aborting...
 -> error making: whisper.cpp-exit status 4
 -> Failed to install the following packages. Manual intervention is required:
whisper.cpp-cublas - exit status 4

Melon_Bread commented on 2024-04-25 00:48 (UTC)

Is there any chance we can get a whisper.cpp-hipblas package since there is rocm/hipblas support for whisper.cpp in their cmake files? (Thank you for your packages)