OpenRouter is a cloud gateway that gives you access to over 100 AI models from providers like OpenAI, Anthropic, Google, Meta, and others — all through a single API key. LMSA connects directly to OpenRouter’s API using your key, with no intermediate servers involved.
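Under the hood, OpenRouter exposes an OpenAI-compatible chat completions endpoint, which is why one bearer key works for every model behind the gateway. The sketch below shows the shape of such a request; the model name and prompt are placeholders, and the endpoint is OpenRouter's public API, not anything specific to LMSA.

```python
import json
import os
import urllib.request

# OpenRouter's OpenAI-compatible chat completions endpoint.
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an authorized chat completion request; nothing is sent here."""
    payload = {
        "model": model,  # placeholder, e.g. "anthropic/claude-3.5-haiku"
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    # Only performs a network call if OPENROUTER_API_KEY is exported first.
    key = os.environ.get("OPENROUTER_API_KEY")
    if key:
        req = build_request(key, "anthropic/claude-3.5-haiku", "Say hello in one line.")
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])
```

The reply text arrives in the standard OpenAI response shape, under `choices[0].message.content`.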
Get an OpenRouter API key
Create an OpenRouter account
Go to openrouter.ai and sign up for a free account.
Create an API key
In the OpenRouter dashboard, open the Keys section and create a new API key. Copy it so you can paste it into LMSA.
Connect LMSA to OpenRouter
In LMSA’s settings, select OpenRouter as your connection type and enter your API key.
Choose a model
LMSA will load the full list of available OpenRouter models. Tap the model picker and select the model you want to use.
Free and paid models
OpenRouter offers a selection of free models that you can use without adding credits to your account. Paid models are billed directly on the OpenRouter side — LMSA does not handle any payments or subscriptions. Check openrouter.ai/models for the current pricing of each model.

The Smart Reply feature is not available when OpenRouter is selected as your connection type.
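If you are curious which models are currently free, OpenRouter publishes its catalog at a public endpoint that needs no API key, and (at the time of writing) marks free variants with a `:free` suffix on the model ID. A small sketch, assuming that suffix convention; the `FETCH_MODELS` guard is just a convenience added here to keep the script from making a network call by default:

```python
import json
import os
import urllib.request

# Public model catalog; no API key required to read it.
MODELS_URL = "https://openrouter.ai/api/v1/models"

def free_model_ids(listing: dict) -> list[str]:
    """Pick out free models from a /models response body.

    Assumes OpenRouter's current convention of a ":free" suffix
    on the IDs of free model variants.
    """
    return sorted(m["id"] for m in listing.get("data", []) if m["id"].endswith(":free"))

if __name__ == "__main__" and os.environ.get("FETCH_MODELS"):
    with urllib.request.urlopen(MODELS_URL) as resp:
        for mid in free_model_ids(json.load(resp)):
            print(mid)
```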
Privacy details
When you use OpenRouter, your chat messages travel from LMSA directly to OpenRouter’s API — no LMSA servers are involved. OpenRouter then routes your request to the underlying model provider (such as OpenAI or Anthropic), which handles it under its own privacy and data retention policies. Review OpenRouter’s privacy policy for details on how your data is handled.
Troubleshooting
No models load after entering my API key
Check that your API key is correct and has not been revoked. You can verify it in the OpenRouter dashboard under Keys. Also confirm that your Android device has an active internet connection.
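If you want to test a key outside the app, OpenRouter’s API exposes a key-info endpoint (`GET /api/v1/auth/key` at the time of writing): a 200 response means the key is accepted, while a 401 indicates a revoked or mistyped key. A minimal sketch of that check — this is OpenRouter’s public API, not something LMSA is known to call:

```python
import os
import urllib.error
import urllib.request

# Reports metadata for whichever key signs the request (per OpenRouter's docs).
KEY_INFO_URL = "https://openrouter.ai/api/v1/auth/key"

def authorized_request(api_key: str) -> urllib.request.Request:
    """Attach the bearer key; the request is not sent here."""
    return urllib.request.Request(
        KEY_INFO_URL, headers={"Authorization": f"Bearer {api_key}"}
    )

def key_is_valid(api_key: str) -> bool:
    try:
        with urllib.request.urlopen(authorized_request(api_key)):
            return True  # HTTP 200: key accepted
    except urllib.error.HTTPError as err:
        if err.code == 401:
            return False  # revoked or mistyped key
        raise  # anything else (rate limit, outage) is a different problem

if __name__ == "__main__":
    key = os.environ.get("OPENROUTER_API_KEY")
    if key:
        print("valid" if key_is_valid(key) else "invalid")
```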
I get an error when sending a message
Some models require credits on your OpenRouter account before you can use them. Log in to OpenRouter and check your credit balance. Free models do not require credits.
Responses are slower than expected
Response speed for cloud models depends on the provider’s infrastructure and the size of the selected model. Smaller models (for example, 7B or 8B parameter models) generally respond faster.