Empower your AI agents to find specific information in large, complex PDF documents using Nomic's layout model.
We built a web demo that lets users upload PDFs and ask questions about them. The demo runs layout detection, OCR, text embeddings, and LLM-based generation directly in the browser:

*(demo video: nomic-layout.mp4)*
To try it yourself, create a `.env.local` file in the project root and add your Muna access key. You can sign up at muna.ai/settings/developer to create a key:
```bash
# Muna access key
MUNA_ACCESS_KEY="muna_****"
```

Then start the Next.js development server:
```bash
# Run the web app
$ npm run dev
```

Ask your AI agent natural-language questions about your PDF documents and get precise, cited answers. The skill uses layout detection, OCR, and text embeddings to index every text region across your PDFs, then performs vector search to find the most relevant passages.
> [!TIP]
> It works equally well with born-digital PDFs and scanned documents.
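To make the retrieval step concrete, here is a minimal sketch of how indexed text regions could be ranked against a question embedding with cosine similarity. The `Region` shape, the pre-computed embeddings, and the `topK` helper are illustrative assumptions, not the skill's actual API:

```typescript
// Hypothetical shape for an indexed text region. In practice the skill's
// layout model and OCR produce the text, and an embedding model produces
// the vector; both are assumptions here.
interface Region {
  page: number;
  text: string;
  embedding: number[];
}

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k regions most similar to the query embedding.
function topK(query: number[], index: Region[], k = 3): Region[] {
  return [...index]
    .sort((r1, r2) => cosine(query, r2.embedding) - cosine(query, r1.embedding))
    .slice(0, k);
}
```

Because every region carries its page number, the top-ranked passages can be handed to the LLM along with their locations, which is what makes cited answers possible.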
First, install the skill in your AI agent:
```bash
# Install the Nomic Layout skill
$ npx skills add muna-ai/nomic-layout
```

Then create a `.env` file in your project root and add your Muna access key. You can sign up at muna.ai/settings/developer to create a key:
```bash
# Muna access key
MUNA_ACCESS_KEY="muna_****"
```

Finally, drop a few PDFs into the project directory and ask your AI agent a question:
```
> "What kind of hydraulic fluid should we use in maintenance?"
```

- Join the Muna community.
- Check out the Muna docs.
- Read the Muna blog.
- Reach out to us at hi@muna.ai.