Follow the steps below to install Ollama, a local model runner, and download the Mistral model. This setup allows you to run a large language model locally on your computer without an API key.
macOS:
Go to https://ollama.com/download
Click Download for macOS
Open the downloaded .dmg file
Drag Ollama into your Applications folder
Open Ollama once to start the background service

Linux:
Open a terminal and run:
curl -fsSL https://ollama.com/install.sh | sh
Verify the installation by running:
ollama --version
Windows:
Go to https://ollama.com/download
Click Download for Windows
Run the installer (OllamaSetup.exe)
Follow the installation steps
Verify the installation: open Command Prompt, PowerShell, or Windows Terminal and run:
ollama --version
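On macOS and Linux, the same check can be scripted; a minimal sketch, assuming only a POSIX shell (the `check_ollama` helper name is my own, not part of Ollama):

```shell
#!/bin/sh
# check_ollama: fail with a clear message if the ollama CLI is not on PATH,
# otherwise print its version. Helper name is illustrative only.
check_ollama() {
    if ! command -v ollama >/dev/null 2>&1; then
        echo "ollama not found on PATH; restart your terminal or reinstall" >&2
        return 1
    fi
    ollama --version
}
```

This is handy in setup scripts that should stop early when Ollama is missing.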
Download and run Mistral:
ollama run mistral
The first time you run this command, Ollama will download the Mistral model (several GB). After the download finishes, an interactive chat session will start.
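Besides the interactive chat, the background service exposes an HTTP API, by default on port 11434. A sketch of querying it from another terminal with curl; the `ask_api` wrapper name is my own, while /api/generate is Ollama's documented endpoint:

```shell
#!/bin/sh
# ask_api: send a prompt to the local Ollama server and print the raw JSON.
# Setting "stream": false returns one JSON object instead of a token stream.
ask_api() {
    curl -s http://localhost:11434/api/generate -d "{
        \"model\": \"mistral\",
        \"prompt\": \"$1\",
        \"stream\": false
    }"
}
```

For example, `ask_api "Explain linear regression in three bullet points."` — the generated text is in the "response" field of the returned JSON.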
Test the model: after Mistral starts, type a prompt directly into the terminal, for example:
Explain linear regression in three bullet points.
You should receive a generated response.
To exit, press Ctrl + D or type /bye.
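For one-off questions you can skip the interactive session entirely: `ollama run` also accepts the prompt as a command-line argument and exits after answering. A small wrapper sketch (the `ask` helper name is mine):

```shell
#!/bin/sh
# ask: send one prompt to Mistral non-interactively and print the reply.
ask() {
    ollama run mistral "$1"
}
```

For example, `ask "Summarize the pros and cons of linear regression"` prints a single answer and returns you to the shell prompt.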
System requirements:
Minimum: 8 GB RAM
Recommended: 16 GB RAM
Notes:
No GPU required (a supported GPU speeds up generation)
Several GB of free disk space for the model download
Useful commands:
List installed models:
ollama list
Download a model without running it:
ollama pull mistral
Remove a model:
ollama rm mistral
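`ollama list` and `ollama pull` combine naturally into an idempotent download step for setup scripts; a sketch, assuming `ollama list` prints one installed model name per line (the `ensure_model` name is mine):

```shell
#!/bin/sh
# ensure_model: pull MODEL only if "ollama list" does not already show it.
ensure_model() {
    model="$1"
    if ollama list | grep -q "^${model}"; then
        echo "${model} is already installed"
    else
        ollama pull "${model}"
    fi
}
```

Running it twice downloads nothing the second time, which makes it safe to call from provisioning scripts.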
Troubleshooting:
"ollama not found": restart your terminal (or your computer) so the updated PATH is picked up
Slow performance: close other applications and make sure enough RAM is free
Download stuck: press Ctrl + C, then run ollama run mistral again; the download resumes where it left off
Model will not start: restart Ollama or reboot your system
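When the model will not start, it helps to confirm the background service is actually listening; a sketch that probes the default port 11434 with curl (the `server_up` helper name is mine):

```shell
#!/bin/sh
# server_up: succeed if the Ollama background service answers on its
# default port, otherwise print a hint and fail.
server_up() {
    if curl -s --max-time 2 http://localhost:11434/ >/dev/null; then
        echo "Ollama server is running"
    else
        echo "Ollama server not responding; start the app or run: ollama serve"
        return 1
    fi
}
```

If the probe fails on macOS or Windows, launching the Ollama app restarts the service; on Linux, `ollama serve` starts it in the foreground.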