Setting Up Ollama

Ollama is the service that runs AI models on your machine. LanJAM connects to Ollama to power the chat — without it, nobody can send messages.

What is Ollama?

Ollama is a free, open-source tool that downloads and runs AI language models locally on your computer. Think of it as the engine that makes the AI work. LanJAM is the interface your family uses to talk to it.

Installing Ollama

macOS

brew install ollama

Or download the app from ollama.com.

Linux

curl -fsSL https://ollama.com/install.sh | sh

Windows

Download the installer from ollama.com and run it. Ollama installs as a background service.

Starting Ollama

After installation, Ollama needs to be running before LanJAM can use it:

  • macOS/Linux: Run ollama serve in a terminal, or launch the Ollama app.
  • Windows: Ollama starts automatically as a service after installation.
  • From LanJAM: Go to Admin > Status and click the Start Ollama button.

Checking if Ollama is running

Go to Admin > Status in LanJAM. The Ollama card will show:

  • Green tick — Ollama is connected and working
  • Red cross — Ollama is not reachable

You can also check from a terminal:

ollama list

If this returns a list of models (or an empty list), Ollama is running.
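
If the ollama command is not on your PATH, you can also check the HTTP endpoint that LanJAM talks to (Ollama listens on port 11434 by default):

curl http://localhost:11434/api/version

If Ollama is running, this prints a short JSON response with its version number. A "connection refused" error means it is not reachable.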

Common problems

"Ollama is unreachable"

This means LanJAM cannot connect to the Ollama service. Try these steps in order:

  1. Click "Start Ollama" on the Admin > Status page.
  2. Check if Ollama is installed — open a terminal and type ollama --version. If it says "command not found", Ollama is not installed.
  3. Start it manually — run ollama serve in a terminal.
  4. Restart after a reboot — Ollama does not always start automatically after your computer restarts. You may need to start it manually (on Linux, see the service commands after this list).
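
On Linux, the install script above normally registers Ollama as a systemd service named ollama, so you can also check it and make it start on every boot (a sketch assuming the standard install script was used):

systemctl status ollama

sudo systemctl enable --now ollama

The second command starts the service immediately and makes it start automatically after a reboot.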

"Start Ollama" button does not work

The Start button tries to launch Ollama on the same machine that runs LanJAM. This will not work if:

  • Ollama is not installed — You need to install it first (see above).
  • Ollama runs on a different machine — The Start button only works for Ollama installed on the same computer as LanJAM. If Ollama runs elsewhere, start it on that machine directly.
  • LanJAM is running inside Docker — The Docker container cannot start Ollama on the host. Start Ollama on the host machine manually (a sketch for pointing LanJAM in Docker at the host follows this list).
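
If LanJAM runs in Docker and Ollama runs on the host, the container needs an address that resolves to the host. A minimal sketch, assuming LanJAM reads the OLLAMA_HOST variable described in the next section; your-lanjam-image stands in for your actual LanJAM image or compose service:

docker run -e OLLAMA_HOST=http://host.docker.internal:11434 --add-host=host.docker.internal:host-gateway your-lanjam-image

The --add-host flag is only needed on Linux; Docker Desktop on macOS and Windows resolves host.docker.internal on its own.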

Ollama is on a different machine

If you run Ollama on a more powerful computer (for example, one with a GPU), make sure:

  1. Ollama is running on that machine (ollama serve).
  2. It is reachable from the LanJAM machine on port 11434 (there is a quick check after this list).
  3. In LanJAM, use the Remote tab in Add AI Model to connect to it, or set the OLLAMA_HOST environment variable to point to the remote machine (e.g. http://192.168.1.100:11434).
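
Note that Ollama only listens on 127.0.0.1 by default, so a remote LanJAM cannot reach it until it listens on the network. On the machine running Ollama:

OLLAMA_HOST=0.0.0.0 ollama serve

If Ollama runs as a system service on that machine, set OLLAMA_HOST=0.0.0.0 in the service's environment instead. Then, from the LanJAM machine, confirm it is reachable (using the example address from step 3; substitute your own):

curl http://192.168.1.100:11434/api/version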

Downloading your first model

Once Ollama is running, you need to download at least one AI model:

  1. Go to Admin > AI Models in LanJAM.
  2. Click Add AI Model.
  3. Choose a recommended model or type a custom name.
  4. Click Install and wait for the download to complete.
  5. Click Set Active on the installed model.
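
If you prefer the terminal, you can also download a model directly with Ollama; it should then appear under Admin > AI Models, where you still set it active as in step 5:

ollama pull llama3.2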

Popular starter models:

  • llama3.2 — Good all-rounder, works well on most machines
  • mistral — Fast and capable, lower memory usage
  • phi3 — Lightweight, good for machines with limited RAM