Package Details: gpt4all-chat 3.4.2-1

Git Clone URL: https://aur.archlinux.org/gpt4all-chat.git (read-only)
Package Base: gpt4all-chat
Description: run open-source LLMs anywhere
Upstream URL: https://gpt4all.io
Keywords: chatgpt gpt llm
Licenses: MIT
Submitter: ZhangHua
Maintainer: ZhangHua
Last Packager: ZhangHua
Votes: 8
Popularity: 0.51
First Submitted: 2023-11-22 05:47 (UTC)
Last Updated: 2024-10-17 05:30 (UTC)

Latest Comments

gugah commented on 2024-05-25 13:41 (UTC) (edited on 2024-05-26 02:06 (UTC) by gugah)

@ZhangHua, maybe a build dependency is missing for 2.8.0? I'm getting:

-- Looking for a CUDA compiler - NOTFOUND
CMake Warning at /home/gugah/.cache/paru/clone/gpt4all-chat/src/gpt4all-2.8.0/gpt4all-backend/CMakeLists.txt:71 (message):
  CUDA Toolkit not found.  To build without CUDA, use -DLLMODEL_CUDA=OFF.

This happens even though CUDA is installed. By the way, upstream is fixing the #include <algorithm> error and some other missing includes that fail only with GCC 14.

edit: I needed to log out and log back in before building gpt4all-chat, as mentioned by the maintainer.

javalsai commented on 2024-05-23 22:20 (UTC)

For the record, after some recent update (somewhere in the last 7 days), I got a more verbose error on gpt4all-chat-git while the compiler checked the environment. I was finally able to compile gpt4all-chat by editing the PKGBUILD and adding -DLLMODEL_CUDA=OFF to the cmake options (I have an AMD card).
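
In case it helps anyone else, here is a minimal sketch of a non-CUDA configure step. Only the -B/-S arguments and -DLLMODEL_CUDA=OFF come from what is shown in this thread; -DCMAKE_BUILD_TYPE=Release is an assumed extra, and any other options the PKGBUILD already passes should be kept.

# Sketch only: configure and build the chat UI without CUDA support.
# $srcdir and $pkgver are the usual makepkg variables available inside build().
cmake -B build-chat -S "$srcdir/gpt4all-$pkgver/gpt4all-chat" \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLMODEL_CUDA=OFF
cmake --build build-chat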

ZhangHua commented on 2024-05-21 01:37 (UTC) (edited on 2024-05-21 01:42 (UTC) by ZhangHua)

@gugah I checked your patch and found that it works! Thank you so much for your help! I have created a new release so everyone using this package can benefit from this patch.

The AUR does not support pull requests, so if you have any improvements to the repository, please contact the maintainers directly or just leave your patch in a comment.
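
If you want to prepare such a patch, one possible workflow (the output file name below is just an example) is to clone the AUR repo, edit it locally, and post the resulting diff:

# Clone the packaging repo, make local changes, then generate a diff to paste in a comment.
git clone https://aur.archlinux.org/gpt4all-chat.git
cd gpt4all-chat
# ...edit PKGBUILD, add extra .diff files, etc...
git add -A
git diff --cached > gpt4all-chat-improvement.diff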

gugah commented on 2024-05-20 15:16 (UTC) (edited on 2024-05-20 19:50 (UTC) by gugah)

@javalsai, I've also had the exact same error when building. I tried changing some settings in /etc/makepkg.conf, without luck.

edit: A quick search points to a missing #include <algorithm> in gpt4all-backend/llamamodel.cpp. I'll check if it compiles with a simple patch.

edit 2: I was able to patch the build with:

diff --git a/PKGBUILD b/PKGBUILD
index 4aa976a..d7bfe9c 100644
--- a/PKGBUILD
+++ b/PKGBUILD
@@ -12,6 +12,7 @@ makedepends=("cmake" "shaderc" "vulkan-tools" "vulkan-headers")
 source=(
     "$pkgname-$pkgver.tar.gz::https://github.com/nomic-ai/gpt4all/archive/refs/tags/v$pkgver.tar.gz"
     "001-change-binary-name.diff"
+    "002-fix-include-algorithm.diff"
 )
 declare -rAg _modules_name_map=(
     [gpt4all-backend/llama.cpp-mainline]=https://github.com/nomic-ai/llama.cpp/archive/a3f03b7e793ee611c4918235d4532ee535a9530d.tar.gz
@@ -35,6 +36,7 @@ do
 done
 sha256sums=('6849bfa2956019a3f24e350984fe9114b0c6e71932665640f770549d20721243'
             'c9f1242ff0dfd7367387d5e7d228b808cdb7f6a0a368ba37e326afb21c603a44'
+            '33353c4d0d7a5da7862c4965cf4e69452dda68d2dca184c38208cd6d20746913'
             'b47b1d8154a99304a406d564dfaad6dc91332b8bccc4ef15f1b2d2cce332b84b'
             '2fef47fc74c8ccc32b33b8c83f9833b6a4c02e09da8d688abb6ee35167652ea9')

@@ -65,6 +67,7 @@ prepare() {
         fi
     done
     patch -Np1 -i ../001-change-binary-name.diff
+    patch -Np1 -i ../002-fix-include-algorithm.diff
 }
 build() {
     cmake -B build-chat -S "$srcdir/gpt4all-$pkgver/gpt4all-chat" \

002-fix-include-algorithm.diff is trivial:

diff --git a/gpt4all-backend/llamamodel.cpp b/gpt4all-backend/llamamodel.cpp
index e88ad9f..b35bbdf 100644
--- a/gpt4all-backend/llamamodel.cpp
+++ b/gpt4all-backend/llamamodel.cpp
@@ -1,6 +1,7 @@
 #define LLAMAMODEL_H_I_KNOW_WHAT_I_AM_DOING_WHEN_INCLUDING_THIS_FILE
 #include "llamamodel_impl.h"

+#include <algorithm>
 #include <cassert>
 #include <cmath>
 #include <cstdio>

I would open a PR with this fix, but I don't know where to, @ZhangHua.
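
Side note: if you adapt this locally and add your own source file, you do not have to compute the sha256 by hand. updpkgsums from pacman-contrib rewrites the checksum array for you; this is a general tip, not something this PKGBUILD requires.

sudo pacman -S --needed pacman-contrib
updpkgsums    # run in the directory that contains the PKGBUILD and the new .diff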

javalsai commented on 2024-05-20 14:40 (UTC) (edited on 2024-05-20 14:40 (UTC) by javalsai)

@ZhangHua Neither of those commands worked; both gave the same errors in the chroot environment. I think I read somewhere in the wiki that it reads its config from /etc/makepkg.conf, so I'm starting to think it could be an issue with that, but as far as I know mine is pretty normal, with nothing special to break on.

Anyway, I don't really need this package, so getting it working isn't a priority for me. I'm just leaving this here since I didn't find any related issue; if nobody else reports it, it might just be me...

ZhangHua commented on 2024-05-15 01:09 (UTC)

@javalsai It is indeed an error, but I have no idea what causes it, because I am not familiar with C++. Maybe you can clone this AUR repo and use makechrootpkg or extra-x86_64-build to build this package in a clean chroot?

You can check https://wiki.archlinux.org/title/DeveloperWiki:Building_in_a_clean_chroot for more info about building packages in a clean chroot.
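
For reference, a typical clean-chroot run with devtools looks roughly like this (these are the standard commands from that wiki page; adjust for your setup):

sudo pacman -S --needed devtools
git clone https://aur.archlinux.org/gpt4all-chat.git
cd gpt4all-chat
extra-x86_64-build    # wraps makechrootpkg and builds in a throwaway chroot with only base-devel plus the declared deps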

javalsai commented on 2024-05-14 22:11 (UTC) (edited on 2024-05-14 22:13 (UTC) by javalsai)

I somehow manage to get a different compile error each time (and yes, I have base-devel installed):

/home/javalsai/.cache/paru/clone/gpt4all-chat/src/gpt4all-2.7.5/gpt4all-backend/llamamodel.cpp:45:21: error: no matching function for call to ‘find(std::vector<const char*>::const_iterator, std::vector<const char*>::const_iterator, const std::string&)’
   45 |     return std::find(EMBEDDING_ARCHES.begin(), EMBEDDING_ARCHES.end(), arch) < EMBEDDING_ARCHES.end();

/home/javalsai/.cache/paru/clone/gpt4all-chat/src/gpt4all-2.7.5/gpt4all-backend/llamamodel.cpp:674:20: error: ‘find_if’ is not a member of ‘std’; did you mean ‘find’?
  674 |     auto it = std::find_if(specs, std::end(specs),

/home/javalsai/.cache/paru/clone/gpt4all-chat/src/gpt4all-2.7.5/gpt4all-backend/llamamodel.cpp:892:22: error: ‘transform’ is not a member of ‘std’
  892 |                 std::transform(embd, embd_end, embd, [mean](double f){ return f - mean; });

etc. (they all seem to be the same kind of error, though)

ZhangHua commented on 2024-04-01 01:13 (UTC)

@nicholasr-ITSulu Installing base-devel is required for using the AUR. See https://wiki.archlinux.org/title/Arch_User_Repository#Getting_started for more info.
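
That wiki section boils down to making sure the build tools are present before running makepkg or an AUR helper:

sudo pacman -S --needed base-devel git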

nicholasr-ITSulu commented on 2024-04-01 00:42 (UTC) (edited on 2024-04-01 00:55 (UTC) by nicholasr-ITSulu)

After base-devel was installed on my system, gpt4all-chat was able to build successfully.

Please add base-devel to the dependencies to avoid this issue for future users. If this is neither possible nor advisable, please include a reason.

ZhangHua commented on 2024-03-31 11:17 (UTC)

@nicholasr-ITSulu patch is part of the base-devel group, which is assumed to be installed when building any AUR package, so it is not listed as a dependency here. Please install base-devel so that this package can be built successfully.