Package Details: gpt4all-chat-git 3.3.0.r22.g767189d-1

Git Clone URL: https://aur.archlinux.org/gpt4all-chat-git.git (read-only)
Package Base: gpt4all-chat-git
Description: Cross-platform Qt-based GUI for GPT4All
Upstream URL: https://github.com/nomic-ai/gpt4all
Keywords: chatgpt gptj.cpp gui llama.cpp offline
Licenses: MIT
Conflicts: gpt4all-chat
Provides: gpt4all-chat
Submitter: jgmdev
Maintainer: jgmdev
Last Packager: jgmdev
Votes: 14
Popularity: 0.018402
First Submitted: 2023-05-12 17:04 (UTC)
Last Updated: 2024-10-04 04:24 (UTC)

Latest Comments


ZhangHua commented on 2023-09-12 12:52 (UTC)

I think we may need to add -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON to cmake, because Vulkan cannot be initialized successfully in a headless environment like CI or a container. This results in a build failure when the package is built in a chroot environment.

quest commented on 2023-09-06 04:22 (UTC)

Builds just fine, but it tries to rebuild the same commit every time I do an update.

1  aur/gpt4all-chat-git  r1367.b6e38d6-1 -> r1367.b6e38d69-1
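The version string above hints at the cause: the abbreviated-hash suffix grew from seven to eight characters, so pacman sees a different pkgver for the same commit. A common way to avoid this in a -git PKGBUILD is to pin the abbreviation length inside pkgver(). A minimal sketch follows; the pkgver() format is hypothetical and demonstrated against a throwaway repository, not the actual PKGBUILD:

```shell
# Hypothetical pkgver() that pins the short-hash length to 7 characters,
# so the generated version stays stable for a given commit.
set -e

# Throwaway git repository just for demonstration.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=test -c user.email=test@example.com \
    commit -q --allow-empty -m 'initial commit'

pkgver() {
  # rNNN = commit count, then a fixed-width abbreviated hash
  printf 'r%s.%s' "$(git rev-list --count HEAD)" \
                  "$(git rev-parse --short=7 HEAD)"
}

pkgver
echo
```

Without an explicit length, `git rev-parse --short` picks the shortest unique abbreviation, which can grow as the repository gains objects and thereby change the pkgver for an unchanged commit.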

ReneS commented on 2023-09-05 17:13 (UTC)

What ZhangHua said. I don't know about glslc, but vulkan-tools is definitely a dependency now.

ZhangHua commented on 2023-09-05 11:46 (UTC)

This package requires vulkaninfo and glslc when building. This means we need to add vulkan-tools and shaderc to makedepends.

What's more, it checks Vulkan's version by default; since this is impossible in a chroot environment, we may need to pass -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON to cmake.

After the package was built, I ran namcap to check whether anything was forgotten. It says that fmt and python are required but not listed in depends, so I think we may also need to add them to the PKGBUILD.
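Taken together, the suggestions in this thread amount to PKGBUILD changes along these lines. This is only a sketch of a possible change, assuming a conventional cmake-based build() and the source path seen in the logs below; the real PKGBUILD's variable names and layout may differ:

```shell
# Sketch of proposed PKGBUILD additions (hypothetical, based on this thread):
makedepends=('git' 'cmake' 'vulkan-tools' 'shaderc')  # vulkaninfo + glslc needed at build time
depends=('fmt' 'python')                              # flagged as missing by namcap

build() {
  cmake -B build -S "$srcdir/gpt4all/gpt4all-chat" \
    -DCMAKE_BUILD_TYPE=Release \
    -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON  # skip the Vulkan probe, which fails in chroots
  cmake --build build
}
```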

abougouffa commented on 2023-09-05 11:23 (UTC)

Thank you for the quick fix; I confirm it is working now. Thanks, @jgmdev.

CrashTD commented on 2023-09-05 10:15 (UTC)

Thanks a lot for the quick reaction. Works fine now, and so far the old models still work, but I have only tested Wizard and Hermes.

jgmdev commented on 2023-09-05 04:17 (UTC)

@MetzKetz and @abougouffa: the install step should be fixed now. I had to patch some cmake install scripts, since they try to install non-existent files; this should be a temporary fix until upstream fixes their cmake setup. Also, WARNING: the newer version seems not to be compatible with older models.

abougouffa commented on 2023-09-04 23:36 (UTC) (edited on 2023-09-04 23:39 (UTC) by abougouffa)

Not compiling on my machine either!

-- Installing: /home/username/.cache/yay/gpt4all-chat-git/pkg/gpt4all-chat-git/opt/gpt4all-chat/lib/libkompute.so.0
-- Set runtime path of "/home/username/.cache/yay/gpt4all-chat-git/pkg/gpt4all-chat-git/opt/gpt4all-chat/lib/libkompute.so.0.8.1" to ""
-- Installing: /home/username/.cache/yay/gpt4all-chat-git/pkg/gpt4all-chat-git/opt/gpt4all-chat/lib/libkompute.so
-- Installing: /home/username/.cache/yay/gpt4all-chat-git/pkg/gpt4all-chat-git/opt/gpt4all-chat/lib/cmake/kompute/komputeConfig.cmake
CMake Error at llmodel/llama.cpp-mainline/kompute/src/cmake_install.cmake:83 (file):
  file INSTALL cannot find
  "/home/username/.cache/yay/gpt4all-chat-git/src/gpt4all/gpt4all-chat/build/llmodel/llama.cpp-mainline/kompute/kompute/komputeConfigVersion.cmake":
  No such file or directory.
Call Stack (most recent call first):
  llmodel/llama.cpp-mainline/kompute/cmake_install.cmake:57 (include)
  llmodel/cmake_install.cmake:47 (include)
  cmake_install.cmake:47 (include)

CrashTD commented on 2023-09-03 10:01 (UTC) (edited on 2023-09-03 10:02 (UTC) by CrashTD)

Fails to build

-- Installing: /build/gpt4all-chat-git/pkg/gpt4all-chat-git/opt/gpt4all-chat/lib/cmake/kompute/komputeConfig.cmake
CMake Error at llmodel/llama.cpp-mainline/kompute/src/cmake_install.cmake:83 (file):
  file INSTALL cannot find
  "/build/gpt4all-chat-git/src/gpt4all/gpt4all-chat/build/llmodel/llama.cpp-mainline/kompute/kompute/komputeConfigVersion.cmake":
  No such file or directory.
Call Stack (most recent call first):
  llmodel/llama.cpp-mainline/kompute/cmake_install.cmake:57 (include)
  llmodel/cmake_install.cmake:47 (include)
  cmake_install.cmake:47 (include)


make: *** [Makefile:120: install] Error 1
==> ERROR: A failure occurred in package().
    Aborting...

Tio commented on 2023-08-31 21:04 (UTC)

Not working for me either...

[Warning] (Thu Aug 31 23:01:18 2023): WARNING: Could not download models.json synchronously
[Debug] (Thu Aug 31 23:01:19 2023): deserializing chats took: 1 ms
[Warning] (Thu Aug 31 23:01:19 2023): QQmlApplicationEngine failed to load component
[Warning] (Thu Aug 31 23:01:19 2023): qrc:/gpt4all/main.qml: module "gtk2" is not installed
[Warning] (Thu Aug 31 23:01:19 2023): "ERROR: Modellist download failed with error code \"5-Operation canceled\""
[Warning] (Thu Aug 31 23:01:19 2023): QIODevice::read (QNetworkReplyHttpImpl): device not open
[Warning] (Thu Aug 31 23:01:19 2023): ERROR: Couldn't parse:  "" "illegal value"
[Warning] (Thu Aug 31 23:01:19 2023): QIODevice::read (QNetworkReplyHttpImpl): device not open
[Warning] (Thu Aug 31 23:01:19 2023): ERROR: Couldn't parse:  "" "illegal value"