Package Details: whisper.cpp-vulkan 1.7.1-4
| Git Clone URL | https://aur.archlinux.org/whisper.cpp.git (read-only) |
|---|---|
| Package Base | whisper.cpp |
| Description | Port of OpenAI's Whisper model in C/C++ (with Vulkan optimizations) |
| Upstream URL | https://github.com/ggerganov/whisper.cpp |
| Licenses | MIT |
| Conflicts | whisper.cpp |
| Provides | whisper.cpp |
| Submitter | robertfoster |
| Maintainer | robertfoster |
| Last Packager | robertfoster |
| Votes | 11 |
| Popularity | 0.63 |
| First Submitted | 2023-03-10 17:32 (UTC) |
| Last Updated | 2024-11-04 13:56 (UTC) |
Dependencies (10)
- gcc-libs (gcc-libs-git [AUR], gccrs-libs-git [AUR], gcc11-libs [AUR], gcc-libs-snapshot [AUR])
- glibc (glibc-git [AUR], glibc-linux4 [AUR], glibc-eac [AUR], glibc-eac-bin [AUR], glibc-eac-roco [AUR])
- vulkan-driver (nvidia-410xx-utils [AUR], nvidia-440xx-utils [AUR], nvidia-430xx-utils [AUR], swiftshader-git [AUR], amdvlk-debug [AUR], nvidia-vulkan-utils [AUR], amdvlk-2023q3.3 [AUR], amdvlk-2021q2.5 [AUR], vulkan-amdgpu-pro [AUR], nvidia-390xx-utils [AUR], amdvlk-git [AUR], vulkan-nouveau-git [AUR], mesa-minimal-git [AUR], mesa-git [AUR], vulkan-amdgpu-pro-legacy [AUR], nvidia-utils-tesla [AUR], amdvlk-bin [AUR], mesa-wsl2-git [AUR], nvidia-535xx-utils [AUR], nvidia-525xx-utils [AUR], nvidia-510xx-utils [AUR], nvidia-550xx-utils [AUR], nvidia-utils-beta [AUR], nvidia-470xx-utils [AUR], amdonly-gaming-vulkan-radeon-git [AUR], amdonly-gaming-vulkan-swrast-git [AUR], vulkan-radeon-amd-bc250 [AUR], amdvlk, nvidia-utils, vulkan-intel, vulkan-nouveau, vulkan-radeon, vulkan-swrast, vulkan-virtio)
- vulkan-icd-loader (vulkan-icd-loader-git [AUR])
- blas-openblas (make)
- cmake (cmake-git [AUR]) (make)
- cuda (cuda11.1 [AUR], cuda-12.2 [AUR], cuda12.0 [AUR], cuda11.4 [AUR], cuda11.4-versioned [AUR], cuda12.0-versioned [AUR]) (make)
- git (git-git [AUR], git-gl [AUR]) (make)
- openvino [AUR] (openvino-git [AUR]) (make)
- vulkan-icd-loader (vulkan-icd-loader-git [AUR]) (make)
Required by (1)
- shisper-git (requires whisper.cpp)
Latest Comments
homocomputeris commented on 2024-10-10 23:12 (UTC)
Where are models supposed to be placed?
dreieck commented on 2024-10-10 11:12 (UTC) (edited on 2024-10-13 10:12 (UTC) by dreieck)
Your packages

- llama.cpp-vulkan-git
- whisper.cpp-clblas

do conflict with each other, which is not reflected in the `conflicts` array. Please add the corresponding `conflicts` entry, or, to ensure compatibility, think about stripping out the `libggml` stuff and depending on `libggml` instead.

Regards!
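A minimal sketch of such an entry in the PKGBUILD (the exact list of conflicting packages is the maintainer's call; `whisper.cpp-clblas` is just the sibling build named above):

```sh
# Hypothetical PKGBUILD excerpt: declare sibling builds that ship the
# same files, so pacman refuses to install them alongside this package.
conflicts=('whisper.cpp' 'whisper.cpp-clblas')
provides=('whisper.cpp')
```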
dreieck commented on 2024-10-10 11:06 (UTC)
You need to install the license file:
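For an MIT-licensed package this usually means a line like the following in `package()` (a sketch of the standard Arch convention, assuming the upstream license file is named `LICENSE`):

```sh
# In package(): install the MIT license text where pacman expects it.
install -Dm644 LICENSE "$pkgdir/usr/share/licenses/$pkgname/LICENSE"
```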
Regards and thanks for maintaining!
homocomputeris commented on 2024-10-07 22:17 (UTC)
Can the CUDA dependency be dropped somehow for Intel?
leuko commented on 2024-08-11 15:27 (UTC) (edited on 2024-08-11 16:08 (UTC) by leuko)
Using an AUR helper I got:
Building in a clean chroot solved the problem.
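For reference, a clean-chroot build with devtools looks roughly like this (a sketch; `pkgctl build` from the devtools package is one way to do it):

```sh
# Fetch the AUR repo and build it in a freshly provisioned chroot,
# so packages installed on the host cannot leak into the build.
git clone https://aur.archlinux.org/whisper.cpp.git
cd whisper.cpp
pkgctl build
```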
recallmenot commented on 2024-06-25 02:28 (UTC) (edited on 2024-06-25 02:49 (UTC) by recallmenot)
Ah, so I discovered the trick to using whisper.cpp-openvino: place the converted model and its XML in the same directory as the regular model (see https://huggingface.co/twdragon/whisper.cpp-openvino/tree/main), then launch with `--ov-e-device GPU` added to the command line; otherwise it will run OpenVINO on the CPU.
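A sketch of the invocation just described (file names are illustrative; the `--ov-e-device` flag and the side-by-side model layout come from the comment above):

```sh
# The OpenVINO-converted encoder sits next to the regular ggml model:
#   models/ggml-base.bin
#   models/ggml-base-encoder-openvino.xml
#   models/ggml-base-encoder-openvino.bin
# Select the GPU as the OpenVINO encode device, else it falls back to CPU.
./main -m models/ggml-base.bin --ov-e-device GPU -f samples/audio.wav
```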
But it still appears I'm doing something wrong, since I get no performance increase: the same 76 s for my 708 s test file as OpenVINO on the CPU, so about 9.3x speed with the base model. CPU load is still fairly high, and my UHD 770 (sorry, only an iGPU) shows only light and periodic usage by whisper.cpp in intel-gpu-top. My RAM is only DDR4-3600, though. The regular whisper.cpp build is slower, but the model file is larger as well: 167 s, so only 4.2x speed. I could imagine this being due to the OpenVINO model using smaller floats (= less precision)?
robertfoster commented on 2024-05-03 18:25 (UTC)
@Melon_Bread @solarisfire hipblas support enablement will be taken into consideration once it is referenced in the official README.md.
solarisfire commented on 2024-04-26 21:32 (UTC)
The build seems to be broken with the latest rocm-llvm?
Melon_Bread commented on 2024-04-25 00:48 (UTC)
Is there any chance we can get a `whisper.cpp-hipblas` package, since there is ROCm/hipBLAS support for whisper.cpp in their CMake files? (Thank you for your packages)
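Enabling that support at build time would look roughly like this (a sketch; `WHISPER_HIPBLAS` is assumed to be the CMake option exposed by upstream's CMake files at the time, and may have been renamed since):

```sh
# Configure and build whisper.cpp with ROCm/hipBLAS offload enabled.
cmake -B build -DWHISPER_HIPBLAS=ON
cmake --build build --config Release
```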