LMSA supports any API that follows the OpenAI chat completions specification. This includes self-hosted inference servers such as vLLM, LocalAI, text-generation-webui with the OpenAI extension, or any private API endpoint that your organization runs. If it speaks OpenAI-compatible chat completions, LMSA can connect to it.

Documentation Index
Fetch the complete documentation index at: https://docs.lmsa.app/llms.txt
Use this file to discover all available pages before exploring further.
Connect LMSA to a custom endpoint
Enter the base URL
Tap Configure and enter the base URL of your API. For example:

http://192.168.1.10:8000

or, if your server already includes a version path:

http://192.168.1.10:8000/v1

LMSA accepts either format and will normalize it automatically. You can also paste a full chat completions URL (ending in /chat/completions) and LMSA will strip the suffix for you.

Enter an API key (optional)
If your server requires authentication, enter an API key in the API Key field. Leave it empty for servers that do not require authentication.
Enter a model ID (optional)
If your server does not expose a /models endpoint, enter a model ID manually in the Model ID field. LMSA will use this ID in chat requests instead of attempting to fetch the model list. If your server does expose /models, leave this field empty and LMSA will discover available models automatically.

The URL you enter must be reachable from your Android device. If you are connecting to a server on your local network, ensure both devices are on the same Wi-Fi network. If you are connecting to a remote server, ensure the URL is publicly accessible or reachable via VPN.
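The model-selection behavior described above can be sketched as a small function. This is an illustrative reconstruction, not LMSA's actual code, and the model names and the `resolve_model` helper are hypothetical:

```python
from typing import Callable, List, Optional

def resolve_model(manual_id: Optional[str],
                  fetch_models: Callable[[], List[str]]) -> Optional[str]:
    """Pick the model ID to use in chat requests.

    A manually entered ID takes priority and skips discovery;
    otherwise the server's /models listing is consulted and the
    first entry is used.
    """
    if manual_id:
        return manual_id
    models = fetch_models()
    return models[0] if models else None

# With a manual ID, the /models endpoint is never consulted:
print(resolve_model("llama-3.1-8b", lambda: []))    # llama-3.1-8b
# Without one, the first discovered model is used:
print(resolve_model(None, lambda: ["qwen2.5-7b"]))  # qwen2.5-7b
```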
URL normalization
LMSA is flexible about the URL format you provide. All of the following inputs resolve to the same endpoint:

| What you enter | What LMSA uses |
|---|---|
| http://192.168.1.10:8000 | http://192.168.1.10:8000/v1/chat/completions |
| http://192.168.1.10:8000/v1 | http://192.168.1.10:8000/v1/chat/completions |
| http://192.168.1.10:8000/v1/chat/completions | http://192.168.1.10:8000/v1/chat/completions |
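The normalization rules in the table above can be sketched as a small function. This is an illustrative reconstruction of the behavior, not LMSA's actual implementation:

```python
def normalize_endpoint(url: str) -> str:
    """Resolve a user-entered base URL to a full chat completions URL.

    Mirrors the table above: a bare host gets /v1 appended, and a
    pasted /chat/completions URL is accepted as-is.
    """
    url = url.rstrip("/")
    if url.endswith("/chat/completions"):
        return url
    if not url.endswith("/v1"):
        url += "/v1"
    return url + "/chat/completions"

# All three inputs from the table resolve to the same endpoint:
for entered in ("http://192.168.1.10:8000",
                "http://192.168.1.10:8000/v1",
                "http://192.168.1.10:8000/v1/chat/completions"):
    print(normalize_endpoint(entered))
```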
Troubleshooting
LMSA shows 'No endpoint configured' or can't connect
- Verify the URL is correct and that the server is running.
- Test the endpoint from a browser or curl to confirm it is accessible from your network.
- If connecting over Wi-Fi, make sure your Android device and the server are on the same network.
- If the server requires HTTPS, use https:// in the URL. Self-signed certificates may cause connection errors on Android.
No models appear in the model picker
Some servers do not expose a /models endpoint. In that case, enter the model ID manually in the Model ID field in LMSA’s Custom Endpoint settings. LMSA will use that ID for all chat requests.
Requests fail with an authentication error
Double-check the API key you entered in LMSA. The key is sent as a Bearer token in the Authorization header with every request.

The Smart Reply feature is not available when Custom Endpoint is selected as your connection type.
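For reference, an OpenAI-compatible request carries the key in a standard Authorization header. The sketch below illustrates that header construction under the assumptions stated in the comments; the key value is a placeholder and `build_headers` is a hypothetical helper, not part of LMSA:

```python
from typing import Dict, Optional

def build_headers(api_key: Optional[str]) -> Dict[str, str]:
    """Headers for a chat completions request.

    The Authorization header is only added when a key is
    configured, matching servers that accept anonymous access.
    """
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    return headers

print(build_headers("sk-example-key")["Authorization"])  # Bearer sk-example-key
```

If the server rejects this header with a 401 or 403 response, the key itself, not the URL, is usually the problem.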