Package Details: gpt4all-chat 3.2.1-1

Git Clone URL: https://aur.archlinux.org/gpt4all-chat.git (read-only)
Package Base: gpt4all-chat
Description: run open-source LLMs anywhere
Upstream URL: https://gpt4all.io
Keywords: chatgpt gpt llm
Licenses: MIT
Submitter: ZhangHua
Maintainer: ZhangHua
Last Packager: ZhangHua
Votes: 8
Popularity: 1.15
First Submitted: 2023-11-22 05:47 (UTC)
Last Updated: 2024-08-14 03:04 (UTC)

Latest Comments

javalsai commented on 2024-05-26 13:33 (UTC)

@raine and even after installing all those ~20GB of packages (including cuda, rocm...), in my case (AMD card) I get:

CMake Warning at /home/javalsai/.cache/paru/clone/gpt4all-chat/src/gpt4all-2.8.0/gpt4all-backend/CMakeLists.txt:71 (message):
  CUDA Toolkit not found.  To build without CUDA, use -DLLMODEL_CUDA=OFF.


CMake Error at /usr/share/cmake/Modules/Internal/CMakeCUDAFindToolkit.cmake:104 (message):
  Failed to find nvcc.

  Compiler requires the CUDA toolkit.  Please set the CUDAToolkit_ROOT
  variable.
Call Stack (most recent call first):
  /usr/share/cmake/Modules/CMakeDetermineCUDACompiler.cmake:85 (cmake_cuda_find_toolkit)
  /home/javalsai/.cache/paru/clone/gpt4all-chat/src/gpt4all-2.8.0/gpt4all-backend/CMakeLists.txt:73 (enable_language)

My guess is that there should be a card check, which then sets that flag depending on the detected GPU. The same check should also decide which packages to install: cuda is not going to do anything on my system and will just bloat it, whereas on an NVIDIA system it's likely already installed.

If that's not possible in the PKGBUILD format, couldn't it be put in the prepare() step, for example? Just make sure to install it as an optional package; I think I've seen some packages that do that.
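A vendor check along those lines could be sketched roughly as below. This is a hypothetical helper, not part of the PKGBUILD: the function name `_gpu_flags` is illustrative, and it only decides the CUDA flag from PCI device info (e.g. `lspci` output) read on stdin.

```shell
# Hypothetical sketch: choose the CMake CUDA flag from the detected GPU vendor.
# Reads PCI device listings (e.g. from `lspci`) on stdin and prints the flag.
_gpu_flags() {
    if grep -qi 'vga.*nvidia'; then
        echo "-DLLMODEL_CUDA=ON"
    else
        echo "-DLLMODEL_CUDA=OFF"
    fi
}

# Usage (illustrative): lspci | _gpu_flags
```

Whether makepkg's model (makedepends resolved before any function runs) allows acting on this at install time is a separate question; at best it could toggle the cmake flag inside build().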

raine commented on 2024-05-26 02:50 (UTC) (edited on 2024-05-26 02:50 (UTC) by raine)

Yes, I know that they are a part of makedepends.

However, my point is that, if you do as you suggest, every single compilation/update of gpt4all-chat will unnecessarily burn through a significant amount of SSD/NVMe lifetime. Not an issue for HDDs, but most people have moved on to SSDs, so it is a problem.

ZhangHua commented on 2024-05-26 01:43 (UTC)

@raine Those extra packages are only needed for building and will be removed after the build finishes successfully. They are pulled in because I enabled all supported backends for loading models: Kompute, Vulkan, CUDA, ROCm. The final program does not take up much space; the largest dependency is cuda, and it can be removed safely because I have made it optional.

raine commented on 2024-05-25 18:07 (UTC)

Version 2.8.0 is trying to install an unnecessary ~20GB of new packages (rocm) on my system with no AMD cards.

Maybe it is better to have separate packages for different GPU vendors?

ZhangHua commented on 2024-05-25 13:54 (UTC)

@gugah cuda is in the makedepends, so I don't think any build dependency is missing for 2.8.0. Maybe it is because you need to log out and log in again for cuda's changes to the environment variables to take effect. You can also build this package in a clean chroot by running makechrootpkg or extra-x86_64-build; those commands are available in the devtools package.
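For anyone unfamiliar with devtools, a clean-chroot build looks roughly like this (sketch only; the chroot path is arbitrary and the PKGBUILD directory is a placeholder):

```shell
# Command sketch for a clean-chroot build with devtools (requires root for setup).
mkdir -p ~/chroot
mkarchroot ~/chroot/root base-devel    # create the base chroot once
cd /path/to/gpt4all-chat               # directory containing the PKGBUILD
makechrootpkg -c -r ~/chroot           # -c: rebuild from a clean working copy
```

This sidesteps any stale environment variables in your login session, which is why it is a good cross-check for problems like the missing CUDA compiler above.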

As for the #include <algorithm> problem: before 2.8.0 this package was built with gcc14, which is why 002-fix-include-algorithm.diff was introduced. Since we now use cuda, gcc14 is replaced by gcc13, so the patch is not strictly needed until cuda also moves to gcc14. But applying the patch and building with gcc13 works fine, so the patch has not been removed.

gugah commented on 2024-05-25 13:41 (UTC) (edited on 2024-05-26 02:06 (UTC) by gugah)

@ZhangHua, maybe a build dependency is missing for 2.8.0? I'm getting:

-- Looking for a CUDA compiler - NOTFOUND
CMake Warning at /home/gugah/.cache/paru/clone/gpt4all-chat/src/gpt4all-2.8.0/gpt4all-backend/CMakeLists.txt:71 (message):
  CUDA Toolkit not found.  To build without CUDA, use -DLLMODEL_CUDA=OFF.

Even though CUDA is installed. Btw, upstream is fixing the #include <algorithm> error and some other missing includes that fail only with gcc14.

edit: I needed to logout/login before building gpt4all-chat as mentioned by the maintainer.

javalsai commented on 2024-05-23 22:20 (UTC)

For the record, after a recent update (somewhere in the last 7 days), I got a verbose error on gpt4all-chat-git while the compiler checked the environment, and I was finally able to compile gpt4all-chat by editing the PKGBUILD and adding -DLLMODEL_CUDA=OFF to the cmake options (I have an AMD card).
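Concretely, that edit amounts to adding the flag to the cmake call in build(). The sketch below is hedged: the surrounding options are illustrative placeholders, not the package's actual full option list.

```shell
# Illustrative build() fragment; only the -DLLMODEL_CUDA=OFF line is the actual fix.
build() {
    cmake -B build-chat -S "$srcdir/gpt4all-$pkgver/gpt4all-chat" \
        -DCMAKE_BUILD_TYPE=Release \
        -DLLMODEL_CUDA=OFF          # skip the CUDA backend on non-NVIDIA systems
    cmake --build build-chat
}
```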

ZhangHua commented on 2024-05-21 01:37 (UTC) (edited on 2024-05-21 01:42 (UTC) by ZhangHua)

@gugah I checked your patch and found that it works! Thank you so much for your help! I have created a new release so everyone using this package can benefit from it.

AUR does not support pull requests, so if you have any improvements to the repository, please contact the maintainers directly or just leave your patch in a comment.

gugah commented on 2024-05-20 15:16 (UTC) (edited on 2024-05-20 19:50 (UTC) by gugah)

@javalsai, I've also had the exact same error when building. I tried changing some settings in /etc/makepkg.conf without luck.

edit: A quick search points to a missing #include <algorithm> in gpt4all-backend/llamamodel.cpp. I'll check if it compiles with a simple patch.

edit 2: I was able to patch the build with

diff --git a/PKGBUILD b/PKGBUILD
index 4aa976a..d7bfe9c 100644
--- a/PKGBUILD
+++ b/PKGBUILD
@@ -12,6 +12,7 @@ makedepends=("cmake" "shaderc" "vulkan-tools" "vulkan-headers")
 source=(
     "$pkgname-$pkgver.tar.gz::https://github.com/nomic-ai/gpt4all/archive/refs/tags/v$pkgver.tar.gz"
     "001-change-binary-name.diff"
+    "002-fix-include-algorithm.diff"
 )
 declare -rAg _modules_name_map=(
     [gpt4all-backend/llama.cpp-mainline]=https://github.com/nomic-ai/llama.cpp/archive/a3f03b7e793ee611c4918235d4532ee535a9530d.tar.gz
@@ -35,6 +36,7 @@ do
 done
 sha256sums=('6849bfa2956019a3f24e350984fe9114b0c6e71932665640f770549d20721243'
             'c9f1242ff0dfd7367387d5e7d228b808cdb7f6a0a368ba37e326afb21c603a44'
+            '33353c4d0d7a5da7862c4965cf4e69452dda68d2dca184c38208cd6d20746913'
             'b47b1d8154a99304a406d564dfaad6dc91332b8bccc4ef15f1b2d2cce332b84b'
             '2fef47fc74c8ccc32b33b8c83f9833b6a4c02e09da8d688abb6ee35167652ea9')

@@ -65,6 +67,7 @@ prepare() {
         fi
     done
     patch -Np1 -i ../001-change-binary-name.diff
+    patch -Np1 -i ../002-fix-include-algorithm.diff
 }
 build() {
     cmake -B build-chat -S "$srcdir/gpt4all-$pkgver/gpt4all-chat" \

002-fix-include-algorithm.diff is trivial:

diff --git a/gpt4all-backend/llamamodel.cpp b/gpt4all-backend/llamamodel.cpp
index e88ad9f..b35bbdf 100644
--- a/gpt4all-backend/llamamodel.cpp
+++ b/gpt4all-backend/llamamodel.cpp
@@ -1,6 +1,7 @@
 #define LLAMAMODEL_H_I_KNOW_WHAT_I_AM_DOING_WHEN_INCLUDING_THIS_FILE
 #include "llamamodel_impl.h"

+#include <algorithm>
 #include <cassert>
 #include <cmath>
 #include <cstdio>

I would submit this fix as a PR, but I don't know where to, @ZhangHua.

javalsai commented on 2024-05-20 14:40 (UTC) (edited on 2024-05-20 14:40 (UTC) by javalsai)

@ZhangHua neither of those commands worked; both gave the same errors in the chroot environment. I think I read somewhere in the wiki that it reads config from /etc/makepkg.conf, so I'm starting to think it could be an issue with that, but as far as I know mine is pretty normal, nothing special that would break.

Anyways, I don't really need this package, so getting it working is not a priority for me; I'm just leaving this here since I didn't find any related issue. If nobody else reports this, it might just be me...