Setup Wizard
The setup wizard runs automatically on first launch and walks you through installing Ollama (the local AI runtime) and downloading a model. The entire process takes 2-5 minutes.
What the Wizard Does
- Detects Ollama — checks if Ollama is already installed on your system
- Installs Ollama — downloads and runs the Ollama installer if it's missing
- Recommends a model — selects a model based on your available RAM and GPU
- Downloads the model — pulls the selected model via Ollama (typically 2-4 GB)
- Verifies — runs a test query to confirm everything works end-to-end
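The detection and verification steps above can be sketched in a few lines. This is an illustrative sketch, not FileMind's actual code; the helper names and the test prompt are assumptions.

```python
import shutil
import subprocess

def ollama_installed() -> bool:
    """Wizard step 1: check whether the `ollama` binary is on PATH."""
    return shutil.which("ollama") is not None

def verify_model(model: str) -> bool:
    """Wizard step 5 (hypothetical): run a trivial prompt end-to-end.

    Returns True if the model responds without error.
    """
    result = subprocess.run(
        ["ollama", "run", model, "Say OK"],
        capture_output=True, text=True, timeout=120,
    )
    return result.returncode == 0
```

If `ollama_installed()` returns False, the wizard proceeds to download and run the platform installer before pulling a model.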
Model Recommendations
FileMind recommends a model based on your hardware. The default is mistral for systems with 8+ GB RAM, or a smaller model for constrained hardware. You can change the model at any time in Configuration.
| RAM | Recommended Model | Size |
|---|---|---|
| 8 GB | llama3.2:3b | ~2 GB |
| 16 GB | mistral | ~4 GB |
| 32 GB+ | mistral or llama3.1:8b | ~4-5 GB |
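The recommendation logic implied by the table can be sketched as a simple RAM threshold check. The thresholds mirror the table above, but the fallback model for sub-8 GB systems is a guess; FileMind's real selection also weighs GPU availability.

```python
def recommend_model(ram_gb: float) -> str:
    """Map available RAM to an Ollama model name, mirroring the table above.

    Thresholds are from the table; the low-RAM fallback is illustrative.
    """
    if ram_gb >= 16:
        return "mistral"        # ~4 GB download
    if ram_gb >= 8:
        return "llama3.2:3b"    # ~2 GB download
    return "llama3.2:1b"        # hypothetical fallback for constrained hardware
```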
A dedicated GPU (NVIDIA CUDA or Apple Metal) significantly speeds up AI operations but is not required. FileMind works on CPU-only systems.
What the AI Model Does
FileMind uses the local AI model for three things:
- Metadata extraction fallback — when DOI lookup and heuristic parsing both fail, the LLM extracts title, authors, and year from PDF text. This is a last resort, not the primary method.
- Ask My Library (RAG) — answers your questions using content from your papers, with enforced citations.
- Query rewriting — optionally rewrites your search queries for better retrieval.
The LLM never generates filenames. Filenames are always composed deterministically from a template using validated metadata fields.
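Deterministic composition means the filename is a pure function of the template and the metadata. A minimal sketch, assuming a `str.format`-style template (FileMind's actual template syntax may differ):

```python
import re

def compose_filename(metadata: dict, template: str = "{year} - {authors} - {title}.pdf") -> str:
    """Fill a filename template from validated metadata fields.

    Purely deterministic: same metadata and template always yield the
    same name. Characters invalid on common filesystems are stripped.
    """
    name = template.format(**metadata)
    return re.sub(r'[<>:"/\\|?*]', "", name)

compose_filename({"year": 2021, "authors": "Smith et al.", "title": "Deep Learning"})
# → "2021 - Smith et al. - Deep Learning.pdf"
```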
Using a Cloud LLM Provider Instead
If you prefer to use a cloud LLM instead of a local model, FileMind supports OpenAI, Anthropic, and Google Gemini. Set the provider and API key in your config file:
```toml
[llm]
provider = "anthropic"
model = "claude-sonnet-4-20250514"
anthropic_api_key = "sk-ant-..."
```

When using a cloud provider, only extracted text snippets are sent to the API — never the full PDF. You can skip the Ollama installation entirely.
Re-running the Wizard
You can re-run the setup wizard at any time from Settings → Setup Wizard. This is useful if you want to switch models, reinstall Ollama, or verify your setup.