01/10/2025 - Ollama
Today I did a couple of things that I should really write down here, so I have them documented.
- Installed Ollama on NixOS as a systemd daemon; the service runs on boot (see the config sketch after this list)
- Installed the SillyTavern web server as a frontend for Ollama
- Got some local LLM models from Hugging Face
- Hooked them all together
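For my own reference, the Ollama half is just the NixOS module. Here is a minimal sketch of what that part of the configuration looks like, assuming the `services.ollama` module from nixpkgs; exact option names can differ between nixpkgs versions, and the acceleration value depends on the GPU.

```nix
{ config, pkgs, ... }:

{
  # The NixOS module generates a systemd unit for Ollama that
  # starts on boot and serves the API on localhost:11434.
  services.ollama = {
    enable = true;
    # Optional GPU acceleration, depending on hardware:
    # acceleration = "cuda"; # NVIDIA
    # acceleration = "rocm"; # AMD
  };
}
```

Models are then pulled imperatively with `ollama pull <model>`; Ollama can also fetch GGUF models straight from Hugging Face using the `hf.co/<user>/<repo>` naming scheme.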
Also removed my Proton VPN from the Nix configuration and instead hooked it up to NetworkManager, so that GNOME's VPN interface can recognise it and I can actually turn it on and off now. Previously it started at boot via WireGuard, but disabling it meant going through systemd, and re-enabling it afterwards was not consistent. Now it works through GNOME and I can easily toggle it (sketch below).
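The change itself is small; a minimal sketch, assuming a WireGuard config file exported from Proton VPN:

```nix
{
  # NetworkManager owns the VPN now, so GNOME's network settings
  # can toggle it; the old wg-quick systemd unit is removed.
  networking.networkmanager.enable = true;
}
```

With that in place, the Proton WireGuard profile gets imported once, imperatively, with `nmcli connection import type wireguard file proton.conf` (filename hypothetical), after which it shows up as a switch in GNOME's VPN settings.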
The Ollama setup is giving me an idea: since SillyTavern connects to a URL, which at the moment is my localhost, I think that means I can host the model on my gaming PC with its better GPU and still run the web server from my laptop. A GPT-style model working over LAN.
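A sketch of the gaming-PC side, assuming it also runs NixOS with the same `services.ollama` module; the `host`/`port` options and the firewall rule are what I'd expect to need, but the option names are assumptions that may vary by nixpkgs version:

```nix
{
  services.ollama = {
    enable = true;
    # Listen on all interfaces instead of only localhost,
    # so the laptop can reach the API over the LAN.
    host = "0.0.0.0"; # assumed option name; older modules used listenAddress
    port = 11434;     # Ollama's default API port
  };

  # Allow LAN clients through the firewall.
  networking.firewall.allowedTCPPorts = [ 11434 ];
}
```

SillyTavern on the laptop would then point at `http://<gaming-pc-ip>:11434` instead of `http://localhost:11434`.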