Installing Ollama

This guide walks you through downloading, installing, and verifying Ollama on your local machine so you can run large language models (LLMs) like Llama 3.2 straight from your terminal.

1. Downloading Ollama

Visit the Ollama website and choose your operating system. Below is a quick reference for each platform:

Operating System | File Format | Action
---------------- | ----------- | ------------------------------------------
macOS            | .zip        | Download, unzip, and move to Applications
Linux            | .tar.gz     | Download and extract
Windows          | .exe        | Download and run installer
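
On Linux, you can also install with the one-line script published on ollama.com instead of extracting the tarball by hand. As with any piped script, review it before running:

# Download and run the official Ollama install script (Linux)
curl -fsSL https://ollama.com/install.sh | sh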

[Image: The ollama.com homepage, offering downloads for running large language models like Llama 3.3 and Phi 4 on macOS, Linux, and Windows.]

Select the download button for your OS, then you’ll land on a page like this:

[Image: The Ollama download page with macOS, Linux, and Windows options, a download button for macOS, and a sign-up field for updates.]

2. Installing on macOS

Once the .zip file has finished downloading:

  1. Open Finder and navigate to your Downloads folder.
  2. Unzip the archive and double-click Ollama.app.
  3. When prompted, click Open.
  4. macOS will ask to move the app to Applications—confirm to complete the install.

Note

If you see a security prompt about an unidentified developer, go to System Preferences > Security & Privacy and allow the app to run.

[Image: A Finder window showing the Downloads folder with the Ollama ZIP file and app, and a security prompt asking whether to open the downloaded app.]
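
If you prefer installing from the command line, Ollama is also packaged for Homebrew (assuming you already have Homebrew set up). Note that the formula installs the CLI and server rather than the menu-bar app:

# Install the Ollama CLI/server via Homebrew (macOS)
brew install ollama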

3. Verifying the CLI

Launch your terminal. On first run, you may be asked to install the Ollama CLI—type yes to proceed. Once installed, confirm everything is set up by running:

ollama

You should see output similar to:

Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve   Start ollama
  create  Create a model from a Modelfile
  show    Show information for a model
  run     Run a model
  stop    Stop a running model
  pull    Pull a model from a registry
  push    Push a model to a registry
  list    List models
  ps      List running models
  cp      Copy a model
  rm      Remove a model
  help    Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.

Note

If you don’t see the above output, ensure your PATH includes the Ollama binary or restart your terminal session.
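
As an extra sanity check, you can print the installed version (using the -v/--version flag listed above) and confirm which binary your shell is resolving:

# Print the installed Ollama version
ollama --version

# Show the full path of the binary your shell found (macOS/Linux)
which ollama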

Congratulations! You now have Ollama installed and can run LLMs locally.

4. Next Steps

To get started, try running Meta’s Llama 3.2 model:

ollama run llama3.2
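
The first run downloads the model weights before starting an interactive session, which can take a few minutes. If you'd rather fetch the weights ahead of time, pull the model explicitly and confirm it with the list command shown in the help output:

# Download the model weights without starting a chat session
ollama pull llama3.2

# Verify the model now appears in your local library
ollama list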

Explore the full set of commands with ollama help or check out the official documentation to learn how to create and manage your own models.
