LMSA lets you attach files directly to your messages so the AI can analyze, summarize, or answer questions about your documents. Text extraction, OCR, and PDF rendering all happen on your device — nothing is sent to an external processing service. Tap the attachment icon in the chat input bar to get started.
Supported file types
LMSA accepts a wide range of document and code formats:
Documents
Plain text (.txt), PDF (.pdf), HTML (.html), Markdown (.md), and CSV (.csv)
Code files
Python (.py), JavaScript (.js), JSON (.json), and other common source code formats
Images
JPEG, PNG, WebP, GIF, BMP, AVIF, APNG, and SVG — for vision-capable models
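The categories above can be sketched as a simple extension lookup. This is an illustrative sketch only; the sets, names, and function below are hypothetical, not LMSA's actual code.

```javascript
// Hypothetical classifier for the file categories listed above.
// The extension sets mirror the documented formats; they are not LMSA's API.
const DOCUMENT_EXTS = new Set(["txt", "pdf", "html", "md", "csv"]);
const CODE_EXTS = new Set(["py", "js", "json"]);
const IMAGE_EXTS = new Set(["jpg", "jpeg", "png", "webp", "gif", "bmp", "avif", "apng", "svg"]);

function classifyAttachment(filename) {
  // Take the text after the last dot as the extension, case-insensitively.
  const ext = filename.split(".").pop().toLowerCase();
  if (DOCUMENT_EXTS.has(ext)) return "document";
  if (CODE_EXTS.has(ext)) return "code";
  if (IMAGE_EXTS.has(ext)) return "image";
  return "unsupported";
}
```

For example, classifyAttachment("notes.md") would fall in the document category, while an image would only be useful with a vision-capable model.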
Attaching a file
Tap the attachment icon
In the chat input bar, tap the paperclip or attachment icon to open your device’s file picker.
Select your file
Browse to the file you want to attach and tap it. LMSA processes it immediately on your device.
How LMSA processes files
PDFs are rendered using PDF.js, which gives LMSA access to the document's text layer. For scanned PDFs or other image-based documents where no text layer exists, LMSA falls back to OCR.

OCR (optical character recognition) is handled by Tesseract.js, running entirely on your device, so you can extract text from scanned documents, photos of printed text, or screenshots without any data leaving your phone.

Images attached to a message are passed directly to the model if it supports vision inputs. The model can describe the image, answer questions about it, or use it as context.

Vision features require a model that supports image inputs. Check your model's documentation to confirm multimodal capability. If your model is text-only, attached images will not be interpreted visually.
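The text-layer-or-OCR decision described above can be sketched as a simple heuristic: if the text layer is empty or trivially short, the page is likely a scan and OCR is the fallback. The function name and character threshold here are assumptions for illustration, not LMSA's implementation.

```javascript
// Hypothetical heuristic: a scanned page typically yields an empty or
// whitespace-only text layer from PDF.js, so a very short extraction
// suggests the page needs OCR instead. The 20-character threshold is
// an illustrative assumption.
function needsOcrFallback(textLayer, minChars = 20) {
  return textLayer.trim().length < minChars;
}
```

In a full pipeline, a page whose extracted text fails this check would be rendered to an image and handed to Tesseract.js for on-device recognition.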