| Name | Version | Votes | Popularity | Description | Maintainer | Last Updated |
|------|---------|-------|------------|-------------|------------|--------------|
| python-ollama | 0.2.0-1 | 3 | 1.47 | Ollama Python library | daniel_chesters | 2024-05-15 17:13 (UTC) |
| aichat | 0.17.0-1 | 4 | 1.46 | OpenAI, ChatGPT, ollama and more in your terminal | murlakatamenka | 2024-05-14 00:58 (UTC) |
| oterm | 0.2.9-1 | 4 | 1.32 | A text-based terminal client for Ollama | daniel_chesters | 2024-05-15 17:15 (UTC) |
| chatgpt.sh | 0.58.5-1 | 2 | 1.09 | Shell wrapper for OpenAI's ChatGPT, DALL-E, Whisper, and TTS. Features LocalAI, Ollama, Gemini and Mistral integration. | lilikoi | 2024-05-17 02:30 (UTC) |
| llama.cpp-git | b2698-1 | 6 | 0.83 | Port of Facebook's LLaMA model in C/C++ (with OpenBLAS CPU optimizations) | robertfoster | 2024-04-19 16:24 (UTC) |
| llama.cpp-cublas-git | b2698-1 | 6 | 0.83 | Port of Facebook's LLaMA model in C/C++ (with NVIDIA CUDA optimizations) | robertfoster | 2024-04-19 16:24 (UTC) |
| llama.cpp-clblas-git | b2698-1 | 6 | 0.83 | Port of Facebook's LLaMA model in C/C++ (with OpenCL optimizations) | robertfoster | 2024-04-19 16:24 (UTC) |
| llama.cpp-hipblas-git | b2698-1 | 6 | 0.83 | Port of Facebook's LLaMA model in C/C++ (with AMD ROCm optimizations) | robertfoster | 2024-04-19 16:24 (UTC) |
| llama.cpp-sycl-f16-git | b2698-1 | 6 | 0.83 | Port of Facebook's LLaMA model in C/C++ (with Intel SYCL GPU optimizations and F16) | robertfoster | 2024-04-19 16:24 (UTC) |
| llama.cpp-sycl-f32-git | b2698-1 | 6 | 0.83 | Port of Facebook's LLaMA model in C/C++ (with Intel SYCL GPU optimizations and F32) | robertfoster | 2024-04-19 16:24 (UTC) |
| llama.cpp-vulkan-git | b2698-1 | 6 | 0.83 | Port of Facebook's LLaMA model in C/C++ (with Vulkan GPU optimizations) | robertfoster | 2024-04-19 16:24 (UTC) |
| ollama-cuda-git | 0.1.30.gc2712b55-1 | 1 | 0.57 | Create, run and share large language models (LLMs) with CUDA | sr.team | 2024-04-01 12:42 (UTC) |
| python-ollama-git | 0.1.7.r0.gfcdf577-1 | 1 | 0.37 | Ollama Python library | daniel_chesters | 2024-03-26 10:40 (UTC) |
| ollama-rocm-git | 0.1.30.gc2712b55-1 | 1 | 0.32 | Create, run and share large language models (LLMs) with ROCm | sr.team | 2024-03-25 23:09 (UTC) |
| ollamamodelupdater | 1.0.1-1 | 2 | 0.28 | Tool to help you update your Ollama models | That1Calculator | 2024-02-08 03:53 (UTC) |
| oterm-git | 0.2.4.r0.ga0167f4-1 | 2 | 0.27 | A text-based terminal client for Ollama | daniel_chesters | 2024-03-26 00:16 (UTC) |
| llama-cpp | c3e53b4-1 | 1 | 0.14 | Port of Facebook's LLaMA model in C/C++ | Freed | 2023-08-24 11:40 (UTC) |
| llama-cpp-cuda | c3e53b4-1 | 1 | 0.14 | Port of Facebook's LLaMA model in C/C++ (with CUDA) | Freed | 2023-08-24 11:40 (UTC) |
| llama-cpp-opencl | c3e53b4-1 | 1 | 0.14 | Port of Facebook's LLaMA model in C/C++ (with OpenCL) | Freed | 2023-08-24 11:40 (UTC) |
| litellm-ollama | 4-3 | 2 | 0.10 | Metapackage to set up ollama models with an OpenAI-compatible API locally | orphan | 2024-01-30 03:59 (UTC) |
| ollamamodelupdater-bin | 1.0.1-1 | 1 | 0.07 | Tool to help you update your Ollama models | That1Calculator | 2024-02-08 03:53 (UTC) |
| oatmeal-bin | 0.13.0-1 | 1 | 0.04 | Terminal UI to chat with large language models (LLMs) using backends such as Ollama, and direct integrations with your favourite editor like Neovim! | dustinblackman | 2024-03-16 02:13 (UTC) |
| tlm | 1.1-1 | 0 | 0.00 | Local CLI copilot, powered by Code Llama. | matth | 2024-04-22 13:11 (UTC) |
| python-llama-cpp | 0.1.83-1 | 0 | 0.00 | Python bindings for llama.cpp | Freed | 2023-09-02 22:18 (UTC) |
| ollama-generic-git | 0.1.37+3.r2707.20240511.4ec7445a-1 | 0 | 0.00 | Create, run and share large language models (LLMs). CPU optimisation only. | dreieck | 2024-05-12 10:11 (UTC) |
| ollama-openmpi-git | 0.1.37+3.r2707.20240511.4ec7445a-1 | 0 | 0.00 | Create, run and share large language models (LLMs). CPU optimisation with OpenMPI. | dreieck | 2024-05-12 10:11 (UTC) |
| ollama-openblas-git | 0.1.37+3.r2707.20240511.4ec7445a-1 | 0 | 0.00 | Create, run and share large language models (LLMs). CPU optimisation with OpenBLAS. | dreieck | 2024-05-12 10:11 (UTC) |
| ollama-vulkan-git | 0.1.37+3.r2707.20240511.4ec7445a-1 | 0 | 0.00 | Create, run and share large language models (LLMs). With Vulkan backend. | dreieck | 2024-05-12 10:11 (UTC) |
| llamafile-git | 0.8.1.r341.9cf7363-2 | 0 | 0.00 | Distribute and run LLMs with a single file. | maintuner | 2024-04-30 17:01 (UTC) |
| llamafile-bin | 0.8.1-7 | 0 | 0.00 | Distribute and run LLMs with a single file. | maintuner | 2024-04-30 07:27 (UTC) |
| llamafile | 0.8.1-2 | 0 | 0.00 | Distribute and run LLMs with a single file. | maintuner | 2024-04-29 08:29 (UTC) |
| llama-cpp-rocm-git | r1110.423db74-1 | 0 | 0.00 | Port of Facebook's LLaMA model in C/C++ (with ROCm) (PR#1087) | ulyssesrr | 2023-08-22 16:25 (UTC) |
| llama-app-bin | 0.0.1-1 | 0 | 0.00 | A simple app to use LLaMA language models on your computer, built with Rust, llama-rs, Tauri and Vite. | AronYoung | 2023-06-23 13:21 (UTC) |
| libggml-git | r916.20240512.9149580-3 | 0 | 0.00 | Tensor library for machine learning. Used by llama.cpp and whisper.cpp. | dreieck | 2024-05-13 08:25 (UTC) |
| godmode-bin | 1.0.0_beta.10-3 | 0 | 0.00 | AI chat browser: fast, full webapp access to ChatGPT / Claude / Bard / Bing / Llama2! | zxp19821005 | 2024-03-22 02:19 (UTC) |
| godmode | 1.0.0_beta.10-5 | 0 | 0.00 | AI chat browser: fast, full webapp access to ChatGPT / Claude / Bard / Bing / Llama2! | zxp19821005 | 2024-03-22 02:25 (UTC) |
| alpaka-git | r304.df0b0c5-1 | 0 | 0.00 | Kirigami client for Ollama | Mailaender | 2024-04-22 20:30 (UTC) |
| alpaca-electron-git | 1.0.6.r3.g04bad63-1 | 0 | 0.00 | The simplest way to run Alpaca (and other LLaMA-based local LLMs) on your own computer | zxp19821005 | 2024-04-23 10:03 (UTC) |
| alpaca-electron-bin | 1.0.5-8 | 0 | 0.00 | The simplest way to run Alpaca (and other LLaMA-based local LLMs) on your own computer | zxp19821005 | 2024-03-18 22:37 (UTC) |
| air-git | 0.6.10-1 | 0 | 0.00 | A simple ChatGPT & llama.cpp command line with ANSI markdown display (written in Rust) | alesc | 2024-01-27 12:56 (UTC) |
| ai-writer | 1.2.0-6 | 0 | 0.00 | A markdown editor powered by AI (Ollama) | zxp19821005 | 2024-03-19 01:16 (UTC) |
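
The most popular entry in the table, python-ollama, packages the Ollama Python client library. As a minimal sketch of what it provides, the snippet below sends one chat message to a local model; it assumes an Ollama server is already running on its default port (http://localhost:11434) and that the model named in the call has been pulled beforehand (the model name `llama3` is an assumption, substitute any model available locally):

```python
# Minimal usage sketch for the python-ollama library (first row above).
# Assumes `ollama serve` is running on the default port and the model
# has been pulled, e.g. `ollama pull llama3`; the model name here is
# an assumption, use whichever model you have locally.
import ollama

# Single-turn chat: the client calls the local Ollama HTTP API and
# returns the assistant's reply.
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```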