Related issue: extra/python-pytorch/14
Package Details: whisper-git 2023.11.17.r2.gba3f3cd5-1
Git Clone URL: https://aur.archlinux.org/whisper-git.git (read-only)
Package Base: whisper-git
Description: General-purpose speech-recognition model by OpenAI
Upstream URL: https://github.com/openai/whisper
Licenses: MIT
Conflicts: whisper
Provides: whisper
Submitter: blinry
Maintainer: blinry (xiota)
Last Packager: xiota
Votes: 14
Popularity: 0.35
First Submitted: 2022-09-22 17:49 (UTC)
Last Updated: 2024-02-07 22:31 (UTC)
Dependencies (12)
- ffmpeg (ffmpeg-nvcodec-11-1-gitAUR, ffmpeg-amd-full-gitAUR, ffmpeg-cudaAUR, ffmpeg-full-gitAUR, ffmpeg-gitAUR, ffmpeg-fullAUR, ffmpeg-decklinkAUR, ffmpeg-headlessAUR, ffmpeg-amd-fullAUR, ffmpeg-libfdk_aacAUR, ffmpeg-obsAUR, ffmpeg-ffplayoutAUR)
- python-more-itertools
- python-numba (python-numba-gitAUR)
- python-pytorch (python-pytorch-mkl-gitAUR, python-pytorch-cuda-gitAUR, python-pytorch-mkl-cuda-gitAUR, python-pytorch-cxx11abiAUR, python-pytorch-cxx11abi-optAUR, python-pytorch-cxx11abi-cudaAUR, python-pytorch-cxx11abi-opt-cudaAUR, python-pytorch-cxx11abi-rocmAUR, python-pytorch-cxx11abi-opt-rocmAUR, python-pytorch-rocm-binAUR, python-pytorch-cuda, python-pytorch-opt, python-pytorch-opt-cuda, python-pytorch-opt-rocm, python-pytorch-rocm)
- python-tiktoken (python-tiktoken-gitAUR)
- python-tqdm
- git (git-gitAUR, git-glAUR) (make)
- python-build (make)
- python-installer (python-installer-gitAUR) (make)
- python-setuptools (make)
- python-wheel (make)
- tritonAUR (triton-gitAUR) (optional) – CUDA accelerated filters
Required by (1)
- python-stable-ts-git (requires whisper)
Sources (1)
xiota commented on 2024-05-02 14:47 (UTC)
Alpha97 commented on 2024-05-02 13:04 (UTC)
The error below occurs during normal execution; it works again after rolling back python-pytorch-opt-cuda (2.3.0-2 => 2.3.0-1).
I am not sure whether this is a problem with whisper itself; I am just leaving this workaround here for reference.
Traceback (most recent call last):
  File "/usr/bin/whisper", line 5, in <module>
    from whisper.transcribe import cli
  File "/usr/lib/python3.12/site-packages/whisper/__init__.py", line 8, in <module>
    import torch
  File "/usr/lib/python3.12/site-packages/torch/__init__.py", line 237, in <module>
    from torch._C import *  # noqa: F403
    ^^^^^^^^^^^^^^^^^^^^^^
ImportError: /usr/lib/libtorch_cpu.so: undefined symbol: cblas_gemm_f16f16f32
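An ImportError like this means libtorch_cpu.so was built against a BLAS that no longer exports cblas_gemm_f16f16f32. Whether a shared library resolves a given symbol can be checked with a short ctypes probe. This is only an illustrative sketch, not part of the package; it demonstrates against the running process's own libraries, and the libtorch path in the comment is the one from the traceback above.

```python
import ctypes

def exports_symbol(library, symbol: str) -> bool:
    """Return True if the loaded library resolves the named symbol."""
    lib = ctypes.CDLL(library)
    # Attribute access on a CDLL object calls dlsym(); a missing
    # symbol raises AttributeError, so hasattr() reports availability.
    return hasattr(lib, symbol)

# Demonstration against the running process (CDLL(None) == dlopen(NULL)).
# On the affected system one would instead probe:
#   exports_symbol("/usr/lib/libtorch_cpu.so", "cblas_gemm_f16f16f32")
print(exports_symbol(None, "printf"))
print(exports_symbol(None, "cblas_gemm_f16f16f32"))
```

If the probe returns False for the symbol named in the ImportError, the library and its BLAS provider were built against mismatched versions, which a rebuild or rollback resolves.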
xiota commented on 2024-02-11 05:09 (UTC)
@schaefer I would search for potentially related open and closed issues at github/pytorch. This issue might be worth watching: HIP error: shared object initialization failed
schaefer commented on 2024-02-10 07:38 (UTC)
@xiota Here is complete output: https://0x0.st/HdA8.cmd
xiota commented on 2024-02-08 21:33 (UTC)
@schaefer Would depend on the cause of the problem. Could you put a full log in a pastebin?
schaefer commented on 2024-02-08 13:43 (UTC) (edited on 2024-02-08 13:44 (UTC) by schaefer)
@xiota True, thank you; I just verified working status with python-pytorch. It no longer works with python-pytorch-opt-rocm, though, which means no GPU for me. Can you point me to where I should look for/open an issue?
xiota commented on 2024-02-07 23:08 (UTC)
@schaefer This package is working with python-pytorch. There may be some problem with another package. Something may need to be rebuilt or an issue reported somewhere.
schaefer commented on 2024-02-07 10:16 (UTC) (edited on 2024-02-07 10:19 (UTC) by schaefer)
Stopped working with the latest arch update to rocm 6.0.0
Traceback (most recent call last):
File "/usr/bin/whisper", line 8, in <module>
sys.exit(cli())
^^^^^
File "/usr/lib/python3.11/site-packages/whisper/transcribe.py", line 577, in cli
model = load_model(model_name, device=device, download_root=model_dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/whisper/__init__.py", line 151, in load_model
model.load_state_dict(checkpoint["model_state_dict"])
File "/usr/lib/python3.11/site-packages/torch/nn/modules/module.py", line 2153, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Whisper:
While copying the parameter named "encoder.blocks.0.attn.query.weight", whose dimensions in the model are torch.Size([768, 768]) and whose dimensions in the checkpoint are torch.Size([768, 768]), an exception occurred : ('HIP error: shared object initialization failed\nHIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\nFor debugging consider passing HIP_LAUNCH_BLOCKING=1.\nCompile with `TORCH_USE_HIP_DSA` to enable device-side assertions.\n',).
(... many more)
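The HIP failure above is raised only when the checkpoint tensors are copied onto the GPU, so running on the CPU is a slow but functional stopgap until the ROCm stack is fixed: the whisper CLI accepts a --device argument, and load_model() takes a device parameter (both visible in the traceback). A minimal fallback sketch; pick_device is a hypothetical helper name, not part of whisper:

```python
def pick_device() -> str:
    """Prefer the GPU, but fall back to CPU when torch is missing
    or its GPU backend is broken. pick_device is a made-up helper;
    it mirrors what `whisper --device cpu` does from the CLI."""
    try:
        import torch
        if torch.cuda.is_available():  # True for ROCm builds as well
            return "cuda"
    except Exception:  # torch not installed, or backend init failed
        pass
    return "cpu"

# Usage sketch (requires openai-whisper installed):
#   import whisper
#   model = whisper.load_model("base", device=pick_device())
print(pick_device())
```

From the shell, the equivalent workaround is simply passing `--device cpu` to the whisper command.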
xiota commented on 2023-09-24 04:11 (UTC)
@mapatrapa Package builds and works fine on my computer. You probably need to rebuild.
mapatrapa commented on 2023-09-23 21:21 (UTC)
Since the last rocm package update from the official Arch Linux repositories, whisper has not worked: I get a core dump when loading the model. Maybe whisper-git needs an update?
Pinned Comments
xiota commented on 2023-05-11 15:41 (UTC) (edited on 2023-09-24 04:35 (UTC) by xiota)
Updated package. Builds in clean chroot. Tested with a video.
If you have any issues with this package, rebuild in clean chroot before reporting.
To find packages that need to be rebuilt, run check-broken-packages (from AUR: check-broken-packages-pacman-hook-git).