Package Details: gpt4all-chat 3.1.0-1

Git Clone URL: https://aur.archlinux.org/gpt4all-chat.git (read-only)
Package Base: gpt4all-chat
Description: run open-source LLMs anywhere
Upstream URL: https://gpt4all.io
Keywords: chatgpt gpt llm
Licenses: MIT
Submitter: ZhangHua
Maintainer: ZhangHua
Last Packager: ZhangHua
Votes: 7
Popularity: 1.12
First Submitted: 2023-11-22 05:47 (UTC)
Last Updated: 2024-07-25 00:57 (UTC)

Latest Comments


ZhangHua commented on 2024-07-03 13:11 (UTC) (edited on 2024-07-03 14:14 (UTC) by ZhangHua)

@AndyRTR I built this package with pkgctl build and it does have the problem you describe. I will do more research on how to make pkgctl happy, because I use makechrootpkg and it builds this PKGBUILD without problems.

But I must say this is not a zsh-only declare option; -r is supported by bash's declare builtin: https://www.gnu.org/software/bash/manual/html_node/Bash-Builtins.html

Edit: After some research, I found that the PKGBUILD is sourced when the code on line 334 of /usr/share/devtools/lib/build/build.sh runs, and sourced again on line 37 of /usr/share/devtools/lib/util/pacman.sh, so bash tries to overwrite a readonly variable and throws an error. Maybe this is pkgctl's problem?
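A minimal sketch of that failure mode, assuming the PKGBUILD declares a readonly associative array (the variable name is taken from the error message quoted further down; the key/value contents here are placeholders, and the real declaration may differ):

# Sourcing a file that declares a readonly variable works once, but fails
# the second time - which is what happens when devtools sources the PKGBUILD
# from both build.sh and pacman.sh.
cat > /tmp/snippet.sh <<'EOF'
declare -rA _modules_name_map=([example-key]=example-value)
EOF
source /tmp/snippet.sh   # first source: fine
source /tmp/snippet.sh   # second source: "_modules_name_map: readonly variable"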

AndyRTR commented on 2024-07-03 12:38 (UTC)

Please avoid using zsh-only declare options. Your PKGBUILD is not bash/sh compatible. See "man PKGBUILD". Please fix your package to allow an easy build for non-zsh users.

AndyRTR commented on 2024-06-09 07:44 (UTC)

Build fails in a clean chroot: PKGBUILD: line 19: _modules_name_map: readonly variable

ZhangHua commented on 2024-05-27 01:30 (UTC)

@javalsai No, cuda injects itself into your PATH environment variable; you can check here about that. If you are not in a hurry to use this package, I will split cuda and rocm support into separate packages, just as @raine suggests, so all you need to do is wait until 2.8.0-3 is released.
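For anyone who wants to verify this in their own shell, a small sketch (assuming the cuda package adds its tools via a profile script, which only takes effect in a fresh login session):

# Check whether the current session already sees the CUDA toolchain.
command -v nvcc || echo "nvcc not in PATH yet - log out and back in"
# Show any PATH entries that mention cuda.
echo "$PATH" | tr ':' '\n' | grep -i cuda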

raine commented on 2024-05-26 22:43 (UTC)

Indeed, I mean different PKGBUILDs. Yes, it is a bit more work, but I would say not too much, since all the PKGBUILDs are almost identical, and once you set them up, updating them is usually just a version-number bump.

It is not unusual to have separate packages in such a situation; there are many "-cuda", "-rocm", "-mkl", ... packages, precisely for this reason.
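To make the suggestion concrete, a rough sketch of what a separate -cuda PKGBUILD could look like (the package name, source layout, and LLMODEL_CUDA switch are assumptions rather than the maintainer's actual files; LLMODEL_CUDA is the option mentioned in the CMake output quoted further down):

# Hypothetical gpt4all-chat-cuda/PKGBUILD, abridged
pkgname=gpt4all-chat-cuda
pkgver=2.8.0
pkgrel=1
makedepends=(cmake cuda)          # only this variant pulls in the CUDA toolkit
provides=(gpt4all-chat)
conflicts=(gpt4all-chat)

build() {
  cmake -B build -S "gpt4all-$pkgver/gpt4all-chat" -DLLMODEL_CUDA=ON
  cmake --build build
}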

javalsai commented on 2024-05-26 14:59 (UTC)

@ZhangHua What changes to environment variables are you talking about? It's just a compilation flag, right?

ZhangHua commented on 2024-05-26 14:03 (UTC)

@javalsai Maybe it is because you need to log out and log back in for cuda's changes to environment variables to take effect. You can build this package in a clean chroot by running makechrootpkg or extra-x86_64-build; those commands are available in the devtools package.

@raine If I use split packages that still share the same PKGBUILD, cuda and rocm still have to be installed at build time. Maybe you mean uploading cuda and rocm support as separate PKGBUILDs? I think that would make the PKGBUILDs hard to maintain. What's more, the arrayfile package is built with cuda in makedepends and optdepends. But if you still think cuda and rocm should be split into separate PKGBUILDs, please let me know and I will do the splitting when I am free.
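For reference, this is roughly why a split PKGBUILD does not help here: makedepends belongs to the whole pkgbase, so every build still installs cuda and rocm in the chroot even for the CPU-only package (a sketch with placeholder bodies, not the real PKGBUILD):

pkgbase=gpt4all-chat
pkgname=(gpt4all-chat gpt4all-chat-cuda gpt4all-chat-rocm)
# makedepends is shared by all split packages of the pkgbase, so the CUDA
# and ROCm stacks are pulled into the build environment regardless of
# which split package you actually want.
makedepends=(cmake cuda rocm-hip-sdk)

package_gpt4all-chat() {
  : # CPU-only install steps
}
package_gpt4all-chat-cuda() {
  depends+=(cuda)
  : # CUDA-enabled install steps
}
package_gpt4all-chat-rocm() {
  depends+=(rocm-hip-sdk)
  : # ROCm-enabled install steps
}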

javalsai commented on 2024-05-26 13:33 (UTC)

@raine and even after installing all those ~20GB of packages (including cuda, rocm...), in my case (AMD card) I get:

CMake Warning at /home/javalsai/.cache/paru/clone/gpt4all-chat/src/gpt4all-2.8.0/gpt4all-backend/CMakeLists.txt:71 (message):
  CUDA Toolkit not found.  To build without CUDA, use -DLLMODEL_CUDA=OFF.


CMake Error at /usr/share/cmake/Modules/Internal/CMakeCUDAFindToolkit.cmake:104 (message):
  Failed to find nvcc.

  Compiler requires the CUDA toolkit.  Please set the CUDAToolkit_ROOT
  variable.
Call Stack (most recent call first):
  /usr/share/cmake/Modules/CMakeDetermineCUDACompiler.cmake:85 (cmake_cuda_find_toolkit)
  /home/javalsai/.cache/paru/clone/gpt4all-chat/src/gpt4all-2.8.0/gpt4all-backend/CMakeLists.txt:73 (enable_language)

My guess is that there should be a card check, with the flag then set depending on the GPU. But that check should also decide which packages to install: cuda is not going to do anything on my system and will just bloat it, whereas on an NVIDIA system it is likely already installed.

If that's not possible in the PKGBUILD format, couldn't it be done in the prepare() step (for example)? Just make sure to list it as an optional dependency; I think I've seen some packages that do that.
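One possible shape for such a check, purely as a sketch of this idea (the lspci-based detection and the _cuda variable are assumptions, and whether hardware auto-detection belongs in a PKGBUILD at all is debatable):

build() {
  # Enable CUDA only when an NVIDIA GPU is visible and nvcc is installed
  # (lspci comes from pciutils).
  local _cuda=OFF
  if lspci | grep -qi nvidia && command -v nvcc >/dev/null; then
    _cuda=ON
  fi
  cmake -B build -S "gpt4all-$pkgver/gpt4all-chat" -DLLMODEL_CUDA="$_cuda"
  cmake --build build
}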

raine commented on 2024-05-26 02:50 (UTC) (edited on 2024-05-26 02:50 (UTC) by raine)

Yes, I know that they are a part of makedepends.

However, my point is that, if you do as you suggest, every single compilation/update of gpt4all-chat will unnecessarily burn through a significant amount of SSD/NVMe lifetime. Not an issue for HDDs, but most people have moved on to SSDs, so it is a problem.