Setting Up Ollama

Ollama is required to run AI models locally. This guide walks you through installation step by step for each platform.

What is Ollama?

Ollama is a free, open-source tool that runs AI models on your computer. Think of it as the "engine" that powers Typilot's AI features. Without Ollama, Typilot can't generate responses.

Windows Installation

  1. Visit ollama.com/download in your web browser
  2. Click the "Download for Windows" button
  3. Run the downloaded installer (OllamaSetup.exe)
  4. Follow the installation wizard - it will install Ollama automatically
  5. Ollama will start running in the background as a Windows service
  6. You'll see Ollama in your system tray when it's running

Verify Installation:

Open Command Prompt and run: `ollama --version`

Note: Ollama runs automatically as a service on Windows, so you don't need to manually start it.

macOS Installation

  1. Visit ollama.com/download in your web browser
  2. Click "Download for macOS"
  3. Download the .zip file and extract it
  4. Drag Ollama to your Applications folder
  5. Double-click Ollama in Applications to launch it
  6. Check your menu bar for the Ollama icon (you might need to allow it in System Preferences)

Verify Installation:

Open Terminal and run: `ollama --version`

Note: You may need to allow Ollama in System Preferences > Security & Privacy if it's blocked.

Linux Installation

  1. Open your terminal
  2. Run the installation script: `curl -fsSL https://ollama.com/install.sh | sh`
  3. Or install via your package manager if available
  4. Start the Ollama service: `sudo systemctl start ollama` (or launch it manually with `ollama serve`)
  5. Enable it to start on boot: `sudo systemctl enable ollama`

Verify Installation:

In a terminal, run: `ollama --version`

Note: On Linux, Ollama runs as a service. You may need sudo privileges to manage it.
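The Linux steps above can be combined into a single terminal session. A sketch assuming curl is installed and the distribution uses systemd:

```shell
# Download and run the official install script from ollama.com
curl -fsSL https://ollama.com/install.sh | sh

# Start the service now and enable it on every boot
sudo systemctl enable --now ollama

# Verify: the CLI is on PATH and the service is active
ollama --version
systemctl status ollama --no-pager
```

`systemctl enable --now` is shorthand for the separate start and enable steps listed above.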

Quick Start After Installation

  1. Verify Ollama is running (check system tray or run verification command)
  2. Open Typilot and go to the Monitoring page
  3. Check that Ollama connection shows as "Connected"
  4. Go to the Models page and download your first model (recommended: llama3.2:3b)
  5. Start using AI commands!
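If you prefer the terminal over Typilot's Models page, step 4 can also be done with Ollama's own CLI (the model name matches the recommendation above):

```shell
# Download the recommended starter model directly with the Ollama CLI
ollama pull llama3.2:3b

# Confirm it downloaded, then try a quick one-off prompt
ollama list
ollama run llama3.2:3b "Reply with one short sentence."
```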

Troubleshooting

Ollama not starting

  • Check if Ollama is already running: Look for Ollama in your system tray (Windows/macOS) or run `systemctl status ollama` (Linux)
  • Restart your computer
  • Reinstall Ollama
  • Check firewall settings - Ollama uses port 11434
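To see whether anything is already bound to Ollama's default port, a small sketch for Linux/macOS shells (on Windows, `netstat -ano | findstr 11434` does the same job). `port_in_use` is just an illustrative helper name, not an Ollama command:

```shell
# port_in_use PORT - exit status 0 if a TCP listener is bound to PORT
port_in_use() {
  ss -ltn 2>/dev/null | grep -q ":$1 "
}

if port_in_use 11434; then
  echo "Port 11434 is in use (Ollama, or another app, is listening)"
else
  echo "Nothing is listening on 11434 - Ollama is probably not running"
fi
```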

Typilot can't connect to Ollama

  • Verify Ollama is running: Visit http://localhost:11434 in your browser - you should see Ollama's API response
  • Check the Ollama URL in Typilot Settings matches http://localhost:11434
  • Try restarting both Ollama and Typilot
  • Check if another application is using port 11434
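The "is Ollama reachable" check can be scripted as well. A minimal sketch that probes the default URL with curl; `check_ollama` is an illustrative helper, not part of Typilot or Ollama:

```shell
# check_ollama [URL] - prints Connected / Not reachable based on an HTTP probe
check_ollama() {
  if curl -fsS --max-time 3 "${1:-http://localhost:11434}" >/dev/null 2>&1; then
    echo "Connected"
  else
    echo "Not reachable"
  fi
}

check_ollama http://localhost:11434
```

If this prints "Not reachable" while Ollama appears to be running, check the port and firewall items above.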

Models not downloading

  • Ensure you have a stable internet connection for the initial download
  • Check available disk space - models can be several GB
  • Try downloading from Typilot's Models page instead of command line
  • Verify Ollama is running and accessible
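To see the actual error behind a failed download, it can help to check disk space and retry the pull from a terminal. By default Ollama stores models under `~/.ollama` (your install may be configured differently):

```shell
# How much space is free where models are stored (default: ~/.ollama)
df -h "$HOME"

# Pull from the terminal so progress and any error messages are visible
ollama pull llama3.2:3b

# List installed models to confirm the download completed
ollama list
```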

Still having issues? Check our comprehensive troubleshooting guide or contact support.