Ollama is an open-source tool that makes it easy to download and run large language models on your own computer. It exposes a local HTTP server that LMSA can connect to over your Wi-Fi network, keeping your conversations entirely on your local network.
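To get a concrete feel for that local server, you can poke it by hand. This is a sketch that assumes a default install where Ollama is already running on the PC:

```bash
# A default Ollama install listens on http://localhost:11434.
# The root path returns a short liveness message.
curl http://localhost:11434
# -> Ollama is running
```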
## Before you start
Both your PC and your Android device must be connected to the same Wi-Fi network. LMSA communicates with Ollama over your local network, so this is required.

## Set up Ollama on your PC
### Install Ollama
Download and install Ollama from ollama.com on your Windows, macOS, or Linux computer.
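After installation, you can confirm the CLI is available from a terminal; any version output confirms the install:

```bash
# Prints the installed Ollama version.
ollama --version
```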
### Configure Ollama to accept network connections
By default, Ollama only listens on `localhost`, which means your Android device cannot reach it. You need to set the `OLLAMA_HOST` environment variable to allow connections from other devices on your network.

- Windows: Open System Properties → Environment Variables and add a new system variable `OLLAMA_HOST` with the value `0.0.0.0`.
- macOS: Add the line `export OLLAMA_HOST=0.0.0.0` to your shell profile (`~/.zshrc` or `~/.bash_profile`) and restart your terminal.
- Linux (systemd): Edit the Ollama service override to set the same variable (see the sketch after this list), then run `sudo systemctl restart ollama`.
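The exact commands for the macOS and Linux steps were elided above; as a minimal sketch, assuming a standard install where the systemd unit is named `ollama.service`:

```bash
# macOS: persist the setting in your shell profile (use ~/.bash_profile for bash).
echo 'export OLLAMA_HOST=0.0.0.0' >> ~/.zshrc

# Linux (systemd): open an override file for the service...
sudo systemctl edit ollama.service
# ...and add these two lines in the editor that opens:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"

# Restart so the new environment takes effect.
sudo systemctl restart ollama
```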
### Pull a model
Open a terminal and pull a model with `ollama pull` (an example follows below). You can browse available models at ollama.com/library.
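The example command was elided above; as an illustration, `llama3.2` stands in for any model name from the library:

```bash
# Download a model from the Ollama library (llama3.2 is just an example).
ollama pull llama3.2

# List installed models to confirm the download.
ollama list
```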
### Find your PC's local IP address
You need the PC's local network IP address, not `localhost` or `127.0.0.1`.

- Windows: Open a terminal and run `ipconfig`. Look for the IPv4 Address under your Wi-Fi adapter (for example, `192.168.1.45`).
- macOS: Open System Settings → Wi-Fi → Details and copy the IP Address field.
- Linux: Run `ip addr show` and look for your Wi-Fi interface's `inet` address.
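On Linux you can narrow the output to IPv4 entries; the interface name and address below are examples and will differ on your machine:

```bash
# Show only IPv4 addresses; your Wi-Fi interface is often named wlan0 or wlp*.
ip -4 addr show | grep inet
# Example output (the 192.168.x.x line on your Wi-Fi interface is the one you want):
#   inet 192.168.1.45/24 brd 192.168.1.255 scope global dynamic wlan0
```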
## Connect LMSA to Ollama
### Enable Ollama mode
Tap Configure and turn on the Ollama toggle. This switches LMSA to Ollama’s API format, which is required for correct model discovery and chat.
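For reference, model discovery against a running Ollama server can be checked by hand. Both endpoints below are standard Ollama routes (which one LMSA queries internally is not stated here):

```bash
# Ollama's native model-list endpoint.
curl http://localhost:11434/api/tags

# The OpenAI-compatible equivalent, also served by Ollama.
curl http://localhost:11434/v1/models
```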
### Enter your PC's IP and port
Enter your PC's local IP address (for example, `192.168.1.45`) and port `11434` (Ollama's default port). Tap Save.

When Ollama mode is enabled, LMSA automatically uses Ollama's OpenAI-compatible API at `/v1/chat/completions`. You do not need to configure this manually.
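You can confirm that endpoint responds before involving the app at all; the IP and model name in this sketch are placeholders for your own values:

```bash
# Send a one-message chat to Ollama's OpenAI-compatible endpoint
# (192.168.1.45 and llama3.2 are examples; use your PC's IP and a pulled model).
curl http://192.168.1.45:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```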
## Troubleshooting
### LMSA can't connect to Ollama
- Confirm that you set `OLLAMA_HOST=0.0.0.0` and restarted Ollama after making the change.
- Make sure both devices are on the same Wi-Fi network.
- Verify that port `11434` is not blocked by a firewall on your PC.
- You can test from another terminal on your PC with `curl http://localhost:11434/api/tags`; if this returns a model list, Ollama is running correctly.
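If the firewall turns out to be the blocker on Windows, a rule like this sketch opens the port (run from an elevated prompt; the rule name is arbitrary):

```
netsh advfirewall firewall add rule name="Ollama" dir=in action=allow protocol=TCP localport=11434
```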
### No models appear in LMSA
At least one model must be pulled before LMSA can list them. Run `ollama pull <model-name>` and then refresh the connection in LMSA settings.
### Connection works but I can't select a model
LMSA detects the currently running Ollama model automatically. If no model is actively running, select one from the model picker in LMSA to load it.
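To check from the PC which model, if any, is currently loaded, Ollama ships a `ps` subcommand:

```bash
# Lists models currently loaded in memory; empty output means no model is running.
ollama ps
```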