Setting Up Ollama
Ollama is required to run AI models locally. This guide walks you through installation step by step for each platform.
What is Ollama?
Ollama is a free, open-source tool that runs AI models on your computer. Think of it as the "engine" that powers Typilot's AI features. Without Ollama, Typilot can't generate responses.
Windows Installation
1. Visit ollama.com/download in your web browser
2. Click the "Download for Windows" button
3. Run the downloaded installer (OllamaSetup.exe)
4. Follow the installation wizard - it will install Ollama automatically
5. Ollama will start running in the background as a Windows service
6. You'll see Ollama in your system tray when it's running
Verify Installation:
Open Command Prompt and type: ollama --version
Note: Ollama runs automatically as a service on Windows, so you don't need to manually start it.
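As a quick sketch, assuming a default install on a recent version of Windows (which bundles curl.exe), both commands below should succeed in PowerShell:

```powershell
# Print the installed Ollama version
ollama --version

# Confirm the background service answers on its default port (11434)
curl.exe http://localhost:11434
```

If the second command prints a short status message (typically "Ollama is running"), the service is up.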
macOS Installation
1. Visit ollama.com/download in your web browser
2. Click "Download for macOS"
3. Extract the downloaded .zip file
4. Drag Ollama to your Applications folder
5. Double-click Ollama in Applications to launch it
6. Check your menu bar for the Ollama icon (you might need to allow it in System Preferences)
Verify Installation:
Open Terminal and type: ollama --version
Note: You may need to allow Ollama in System Preferences > Security & Privacy if it's blocked.
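For example, assuming the app lives in /Applications under the name "Ollama", these Terminal commands verify the install and relaunch the app if its menu bar icon is missing:

```bash
# Print the installed Ollama version
ollama --version

# Relaunch the menu-bar app if its icon is missing
open -a Ollama

# Confirm the background server responds on its default port
curl http://localhost:11434
```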
Linux Installation
1. Open your terminal
2. Run the installation script: curl -fsSL https://ollama.com/install.sh | sh
3. Or install via your package manager if available
4. Start the Ollama service: sudo systemctl start ollama (or launch it manually)
5. Enable it to start on boot: sudo systemctl enable ollama
Verify Installation:
Type in terminal: ollama --version
Note: On Linux, Ollama runs as a service. You may need sudo privileges to manage it.
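As a minimal sketch, assuming a systemd-based distribution where the install script registered the ollama service:

```bash
# Print the installed Ollama version
ollama --version

# Confirm the service is active under systemd
systemctl status ollama

# If your distribution doesn't use systemd, run the server in the foreground instead
ollama serve
```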
Quick Start After Installation
- Verify Ollama is running (check the system tray or run the verification command for your platform)
- Open Typilot and go to the Monitoring page
- Check that Ollama connection shows as "Connected"
- Go to the Models page and download your first model (recommended: llama3.2:3b), or pull it from the command line as shown below
- Start using AI commands!
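If you prefer the command line over the Models page, the same model can be pulled with the Ollama CLI; llama3.2:3b is roughly a 2 GB download, so the first pull can take a few minutes:

```bash
# Download the recommended starter model
ollama pull llama3.2:3b

# Optional: chat with it once to confirm everything works end to end
ollama run llama3.2:3b "Say hello in one sentence."
```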
Troubleshooting
Ollama not starting
- Check if Ollama is already running: Look for Ollama in your system tray (Windows/macOS) or run `systemctl status ollama` (Linux; see the log commands after this list)
- Restart your computer
- Reinstall Ollama
- Check firewall settings - Ollama uses port 11434
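On Linux, the service logs usually explain a failed start. A sketch, again assuming a systemd-based distribution:

```bash
# Show recent log output from the Ollama service
journalctl -u ollama --since "10 minutes ago"

# Restart the service after addressing whatever the logs reported
sudo systemctl restart ollama
```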
Typilot can't connect to Ollama
- Verify Ollama is running: Visit http://localhost:11434 in your browser - you should see Ollama's API response (or use curl, as shown after this list)
- Check the Ollama URL in Typilot Settings matches http://localhost:11434
- Try restarting both Ollama and Typilot
- Check if another application is using port 11434
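The same checks work from a terminal. A sketch, where /api/tags is the endpoint that lists locally installed models (lsof is macOS/Linux; on Windows, netstat -ano | findstr 11434 serves the same purpose):

```bash
# The root endpoint should return a short status message ("Ollama is running")
curl http://localhost:11434

# A deeper health check: list the models Ollama knows about
curl http://localhost:11434/api/tags

# See which process is bound to port 11434
lsof -i :11434
```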
Models not downloading
- Ensure you have a stable internet connection for the initial download
- Check available disk space - models can be several GB (see the commands after this list)
- Try downloading from Typilot's Models page instead of command line
- Verify Ollama is running and accessible
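To dig in from a terminal (macOS/Linux), the commands below check disk space and retry the download with full progress output; llama3.2:3b is used here as an example model name:

```bash
# Check free disk space before pulling a multi-gigabyte model
df -h

# Pull the model directly to see detailed progress and any errors
ollama pull llama3.2:3b

# Confirm the download completed
ollama list
```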
Still having issues? Check our comprehensive troubleshooting guide or contact support.