Package Details: litellm-ollama 4-3
| Git Clone URL | https://aur.archlinux.org/litellm-ollama.git (read-only) |
|---|---|
| Package Base | litellm-ollama |
| Description | Metapackage to set up Ollama models with an OpenAI-compatible API locally |
| Upstream URL | None |
| Keywords | ai llm local openai server |
| Licenses | MIT |
| Submitter | shtrophic |
| Maintainer | None |
| Last Packager | dnim |
| Votes | 2 |
| Popularity | 0.002179 |
| First Submitted | 2023-10-08 16:16 (UTC) |
| Last Updated | 2024-01-30 03:59 (UTC) |
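Since the metapackage puts LiteLLM's proxy in front of Ollama, a running instance speaks the OpenAI-compatible HTTP API. As a minimal sketch of a local request (the model identifier ollama/mistral and port 8000 are assumptions for illustration; check the packaged service file for the actual defaults):

# "ollama/mistral" is a hypothetical model identifier; port 8000 is assumed
$ curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "ollama/mistral", "messages": [{"role": "user", "content": "Hello"}]}'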
Dependencies (8)
- gunicorn
- litellm (AUR)
- ollama (alternatives: ollama-generic-git (AUR), ollama-openmpi-git (AUR), ollama-openblas-git (AUR), ollama-vulkan-git (AUR), ollama-cuda-git (AUR), ollama-rocm-git (AUR), ollama-cuda, ollama-rocm)
- python-apscheduler (AUR)
- python-backoff (AUR)
- python-fastapi
- python-orjson (python-orjson-git (AUR))
- uvicorn
Latest Comments
lumnn commented on 2024-01-18 15:45 (UTC)
Thank you for this package.
I believe port 8000 is quite popular and may conflict with other software. If anyone is looking for the easiest way to change the port, the best option is to start/enable the service and then run
$ systemctl edit litellm-ollama@[model].service
and add the following content:

[Service]
Environment="PORT=--port='31000'"
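After saving the override, restarting the unit applies the new port. A rough sketch of a restart-and-verify check (the model name mistral is hypothetical, standing in for the [model] placeholder above; the /v1/models listing endpoint follows LiteLLM's OpenAI-compatible API, so confirm it against your installed version):

# "mistral" is a hypothetical model name; substitute the model your unit uses
$ sudo systemctl restart litellm-ollama@mistral.service
$ curl http://localhost:31000/v1/models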