LMSA’s chat interface is the heart of the app. Every conversation you have — whether with a local model on your PC or a cloud model via OpenRouter — runs through the same clean, full-featured chat view. Messages are stored locally on your device, and the interface renders markdown, code, and math natively so responses look exactly as the model intended.
Starting a conversation
Start a new chat
Tap the new chat icon in the top bar (or open the sidebar and tap New Chat) to start a fresh conversation.
Select a model
Tap the model selector at the top of the screen. LMSA loads the list of available models from your connected provider. Tap any model to activate it.
Rich message rendering
LMSA renders AI responses with full formatting support — you are not reading raw text.
Markdown
Bold, italic, strikethrough, tables, ordered and unordered lists, block quotes, and horizontal rules all render natively inside message bubbles.
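For instance, a reply containing source text like the snippet below displays as formatted output rather than literal symbols. This is an illustrative example, not actual app output:

```markdown
**Bold**, *italic*, and ~~strikethrough~~ render inline.

| Feature | Supported |
| ------- | --------- |
| Tables  | Yes       |

> Block quotes render as indented quotations.

---
```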
Syntax-highlighted code blocks
Fenced code blocks are highlighted for their language automatically. Tap the copy icon on any code block to copy the contents to your clipboard.
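For example, a reply containing a fenced block tagged with its language name, such as the Python snippet below, renders with syntax colouring and a copy icon. The snippet itself is illustrative, not taken from the app:

```python
# An illustrative snippet of the kind a model might return.
def greet(name: str) -> str:
    """Build a short greeting string."""
    return f"Hello, {name}!"

print(greet("LMSA"))  # prints "Hello, LMSA!"
```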
LaTeX math (KaTeX)
Inline math uses single dollar signs: $E = mc^2$. Display math uses double dollar signs: $$\int_0^\infty e^{-x^2} dx = \frac{\sqrt{\pi}}{2}$$. LMSA renders both using KaTeX with no extra setup required.
Thinking and reasoning mode
Some models — such as DeepSeek-R1 — produce a chain-of-thought before their final answer. LMSA exposes this as a collapsible Reasoning Process section inside the message bubble.
- Show or hide thinking: Tap the [Hide] / [Show] toggle inside any reasoning block to collapse or expand it for that message. You can also set a global default in Settings → Hide thinking to collapse all reasoning blocks automatically.
- Reasoning level: For models that accept an explicit effort hint, go to Settings → Reasoning level and choose from Default, Low, Medium, High, or Disabled. This controls how much compute the model spends on internal reasoning before replying.
Reasoning mode only appears when your selected model produces <think> blocks. If you do not see it, your current model does not support chain-of-thought output.
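As a rough illustration of how such output can be handled, a response wrapped in <think> tags can be split into a reasoning section and a final answer. This is a minimal sketch assuming a single well-formed <think>…</think> block, not LMSA's actual implementation:

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Separate a <think> block from the final answer.

    Returns (reasoning, answer); reasoning is "" when the
    model emitted no <think> block.
    """
    match = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    if not match:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = (text[:match.start()] + text[match.end():]).strip()
    return reasoning, answer

reply = "<think>2 + 2 = 4; double-check: yes.</think>The answer is 4."
reasoning, answer = split_reasoning(reply)
print(reasoning)  # 2 + 2 = 4; double-check: yes.
print(answer)     # The answer is 4.
```

A renderer can then show the reasoning part inside the collapsible section and the answer part as the visible message body.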