Package Details: llama-cpp-rocm-git r1110.423db74-1

Git Clone URL: https://aur.archlinux.org/llama-cpp-rocm-git.git (read-only; a clone-and-build sketch follows the package details below)
Package Base: llama-cpp-rocm-git
Description: Port of Facebook's LLaMA model in C/C++ (with ROCm) (PR#1087)
Upstream URL: https://github.com/ggerganov/llama.cpp
Licenses: MIT
Conflicts: llama-cpp, llama.cpp
Provides: llama-cpp, llama.cpp
Submitter: ulyssesrr
Maintainer: ulyssesrr
Last Packager: ulyssesrr
Votes: 0
Popularity: 0.000000
First Submitted: 2023-08-22 16:25 (UTC)
Last Updated: 2023-08-22 16:25 (UTC)

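Installing from the AUR follows the usual makepkg flow; a minimal sketch of the standard steps, using the clone URL listed above (makepkg -s has pacman pull the dependencies declared by the PKGBUILD):

    git clone https://aur.archlinux.org/llama-cpp-rocm-git.git
    cd llama-cpp-rocm-git
    makepkg -si    # build the package and install it, syncing dependencies
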
Dependencies (3)

Required by (1)

Sources (1)

Latest Comments

edtoml commented on 2024-02-06 01:00 (UTC)

This package builds with the rocm 6.0.1 packages but cannot run models. It misreports the context size and then segfaults; it worked with rocm 5.7.1.

    [ed@grover Mixtral-8x7B-Instruct-v0.1-GGUF]$ llama.cpp -ngl 7 -i -ins -m ./mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf -c 16384
    main: warning: base model only supports context sizes no greater than 2048 tokens (16384 specified)
    main: build = 1163 (9035cfcd)
    main: seed = 1707181157
    ggml_init_cublas: found 1 ROCm devices:
      Device 0: AMD Radeon RX 6600 XT, compute capability 10.3
    Segmentation fault (core dumped)

CorvetteCole commented on 2023-10-18 20:22 (UTC)

You don't need to pull from the PR #1087 branch anymore; it has been merged upstream.
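Since the hipBLAS support from PR #1087 is now in upstream llama.cpp, a plain upstream build should enable ROCm directly; a minimal sketch, assuming the LLAMA_HIPBLAS Makefile option that the merged PR introduced:

    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    make LLAMA_HIPBLAS=1    # build with the ROCm/hipBLAS backend enabled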