Name | Version | Votes | Popularity | Description | Maintainer | Last Updated
stable-diffusion.cpp-cublas-git | r108.48bcce4-1 | 2 | 0.13 | Stable Diffusion in pure C/C++ (with NVIDIA CUDA optimizations) | robertfoster | 2024-03-26 10:14 (UTC)
stable-diffusion.cpp-hipblas-git | r108.48bcce4-1 | 2 | 0.13 | Stable Diffusion in pure C/C++ (with AMD ROCm optimizations) | robertfoster | 2024-03-26 10:14 (UTC)
brother-mfc-l2710dw | 4.0.0-1 | 6 | 0.14 | LPR and CUPS driver for the Brother MFC-L2710DW | robertfoster | 2018-08-19 07:31 (UTC)
python-flask-themes2 | 1.0.1-1 | 1 | 0.24 | Easily theme your Flask app | robertfoster | 2023-12-21 09:14 (UTC)
soulseekqt | 20221224-1 | 84 | 0.25 | A desktop client for the Soulseek peer-to-peer file sharing network | robertfoster | 2022-12-31 15:59 (UTC)
python-base58 | 2.1.1-1 | 10 | 0.37 | Bitcoin-compatible Base58 and Base58Check implementation | robertfoster | 2021-11-03 10:07 (UTC)
cie-middleware-git | 1.5.1.r6.fe57d43-1 | 3 | 0.49 | Middleware for the CIE (Carta di Identità Elettronica, the Italian electronic identity card) on Linux (my fork) | robertfoster | 2024-03-30 18:39 (UTC)
whatsie-git | 4.14.2.r0.gc478a7d-1 | 4 | 0.57 | Fast, lightweight WhatsApp client based on Qt's WebEngine, with lots of settings and packed goodies | robertfoster | 2023-12-01 09:02 (UTC)
dhewm3 | 1.5.3-1 | 22 | 0.59 | Doom 3 engine with native 64-bit support, SDL, and OpenAL | robertfoster | 2024-04-21 14:09 (UTC)
whatsie | 4.14.2-1 | 11 | 0.70 | Fast, lightweight WhatsApp client based on Qt's WebEngine, with lots of settings and packed goodies | robertfoster | 2023-12-01 08:58 (UTC)
llama.cpp-git | b2698-1 | 6 | 0.83 | Port of Facebook's LLaMA model in C/C++ (with OpenBLAS CPU optimizations) | robertfoster | 2024-04-19 16:24 (UTC)
llama.cpp-cublas-git | b2698-1 | 6 | 0.83 | Port of Facebook's LLaMA model in C/C++ (with NVIDIA CUDA optimizations) | robertfoster | 2024-04-19 16:24 (UTC)
llama.cpp-clblas-git | b2698-1 | 6 | 0.83 | Port of Facebook's LLaMA model in C/C++ (with OpenCL optimizations) | robertfoster | 2024-04-19 16:24 (UTC)
llama.cpp-hipblas-git | b2698-1 | 6 | 0.83 | Port of Facebook's LLaMA model in C/C++ (with AMD ROCm optimizations) | robertfoster | 2024-04-19 16:24 (UTC)
llama.cpp-sycl-f16-git | b2698-1 | 6 | 0.83 | Port of Facebook's LLaMA model in C/C++ (with Intel SYCL GPU optimizations and F16) | robertfoster | 2024-04-19 16:24 (UTC)
llama.cpp-sycl-f32-git | b2698-1 | 6 | 0.83 | Port of Facebook's LLaMA model in C/C++ (with Intel SYCL GPU optimizations and F32) | robertfoster | 2024-04-19 16:24 (UTC)
llama.cpp-vulkan-git | b2698-1 | 6 | 0.83 | Port of Facebook's LLaMA model in C/C++ (with Vulkan GPU optimizations) | robertfoster | 2024-04-19 16:24 (UTC)
whisper.cpp | 1.5.5-3 | 8 | 0.86 | Port of OpenAI's Whisper model in C/C++ (with OpenBLAS CPU optimizations) | robertfoster | 2024-04-27 14:59 (UTC)
whisper.cpp-cublas | 1.5.5-3 | 8 | 0.86 | Port of OpenAI's Whisper model in C/C++ (with NVIDIA CUDA optimizations) | robertfoster | 2024-04-27 14:59 (UTC)
whisper.cpp-clblas | 1.5.5-3 | 8 | 0.86 | Port of OpenAI's Whisper model in C/C++ (with OpenCL optimizations) | robertfoster | 2024-04-27 14:59 (UTC)
whisper.cpp-openvino | 1.5.5-3 | 8 | 0.86 | Port of OpenAI's Whisper model in C/C++ (with OpenVINO runtime) | robertfoster | 2024-04-27 14:59 (UTC)