
How well Ollama runs depends a lot on the hardware you run it on - you should aim for at least 12 GB of VRAM or unified memory to run models. I have one copy running in a Docker container on CPU under Linux and another running on the GPU of my Windows desktop, so I can give install advice for either OS if you'd like.
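For reference, the Docker setup is basically a one-liner. This is the standard CPU-only command from Ollama's Docker docs (the volume just keeps your downloaded models across container restarts):

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama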
While on your Wi-Fi, try navigating in your browser to your Windows computer's local IP address, followed by a colon and the port 11434. It would look something like this:
http://192.168.xx.xx:11434/
If it works, your browser will just load the text: Ollama is running
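If it doesn't load, one thing worth checking: by default Ollama only listens on localhost, so on the Windows machine you may need to set the OLLAMA_HOST environment variable to 0.0.0.0 and restart Ollama before other devices on your network can reach it. You can also test from another machine's terminal with something like:

curl http://192.168.xx.xx:11434/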
From there you just need to figure out how you want to interact with it. I personally pair it with OpenWebUI for the web interface.
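If you'd rather hit the API directly first, a minimal test looks like this (swap in whatever model you've actually pulled; llama3 here is just a placeholder):

curl http://192.168.xx.xx:11434/api/generate -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'

And if you go the OpenWebUI route, it also runs in Docker. A rough sketch based on their documented quick-start - point OLLAMA_BASE_URL at your Ollama machine's address, and the web UI ends up on port 3000:

docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=http://192.168.xx.xx:11434 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main

Then browse to port 3000 on whatever host you ran it on and it should pick up your models automatically.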