Running Local LLMs With Ollama

Customizing Models With Ollama

Section Introduction

Welcome to the final section of this course, which covers customizing models with Ollama. In this module, you'll discover how to use the Modelfile, a declarative blueprint that makes it easy to tailor pre-trained models to your unique requirements.

What Is a Modelfile?

A Modelfile is a configuration file, conceptually similar to a Dockerfile, that defines the following (a minimal example appears after the list):

  • The base model image to start from
  • Custom layers or modifications
  • Dependencies and environment setup
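As a minimal sketch, here is what a Modelfile can look like. The base model llama3, the parameter values, and the system prompt are illustrative assumptions, not values prescribed by this course:

# Modelfile
FROM llama3
# Lower temperature for more deterministic answers
PARAMETER temperature 0.3
# Expand the context window (in tokens) available to the model
PARAMETER num_ctx 4096
# Bake in a default system prompt
SYSTEM You are a concise assistant that answers internal IT policy questions.

FROM names the base model to build from, PARAMETER tunes runtime behavior, and SYSTEM sets a default system prompt.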

Note

If you’re familiar with Docker, you’ll recognize the same concepts—base images, commands, and dependency management—when working with a Modelfile.

Why Customize Models?

Customizing models empowers you to:

  • Optimize performance for specialized domains
  • Incorporate proprietary datasets during fine-tuning
  • Add custom preprocessing or tokenization steps (see the TEMPLATE sketch after the table below)
Benefit           | Description
Domain Adaptation | Align models with industry-specific terminology
Efficiency Tuning | Prune or quantize for faster, leaner inference
Feature Extension | Integrate custom modules (e.g., sentiment analysis, QA)
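
One concrete lever for the preprocessing point above is the Modelfile's TEMPLATE instruction, which controls how the system prompt and user input are assembled before being sent to the model. In the sketch below, the {{ .System }} and {{ .Prompt }} variables are standard Modelfile template syntax, while the specific chat markers are illustrative assumptions that vary by model family:

TEMPLATE """{{ if .System }}<|system|>
{{ .System }}<|end|>
{{ end }}<|user|>
{{ .Prompt }}<|end|>
<|assistant|>
"""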

Hands-On Demo

In this demo, we'll take the following steps (a command-level sketch follows the list):

  1. Pull a pre-trained model from the Ollama Registry
  2. Write and configure a Modelfile to customize its behavior
  3. Build and run the customized model locally
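
Here is a hedged command-level sketch of those three steps; llama3 and my-custom-model are placeholder names rather than the demo's exact ones:

# 1. Pull a pre-trained base model from the Ollama registry
ollama pull llama3

# 2. Build a customized model from a Modelfile in the current directory
ollama create my-custom-model -f Modelfile

# 3. Run the customized model locally
ollama run my-custom-model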

Publishing to the Ollama Model Registry

Once your model is configured and tested, you can publish it to the Ollama Model Registry so that others can pull and run it. Models pushed to the registry are named under your account's namespace:

ollama push your-username/your-custom-model
ollama pull your-username/your-custom-model
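
If you created the model under a plain local name, you can copy it into your namespace before pushing; your-username is a placeholder for your ollama.com account name:

# Retag the local model under your registry namespace
ollama cp your-custom-model your-username/your-custom-model

After copying, the namespaced name can be used with ollama push as shown above.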

Warning

Before publishing, make sure you're authenticated with the registry. Ollama uses key-based authentication for pushes: add the public key it generates on first run (typically ~/.ollama/id_ed25519.pub on Linux and macOS) to your account settings on ollama.com.

Learning Outcomes

By the end of this lesson, you will be able to:

  • Define and configure a Modelfile
  • Customize a base model to suit real-world use cases
  • Publish your custom model to the Ollama Model Registry

Let’s get started!
