šŸ“š Ollama - Awesome Go Library for Artificial Intelligence

Run large language models locally.

šŸ·ļø Artificial Intelligence
šŸ“‚ Libraries for building programs that leverage AI.
ā­ 92,508 stars

Detailed Description of Ollama

Ā ollama

Ollama

Discord

Get up and running with large language models.

macOS

Download

Windows preview

Download

Linux

curl -fsSL https://ollama.com/install.sh | sh

Manual install instructions

Docker

The official Ollama Docker image ollama/ollama is available on Docker Hub.
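
To start the container and run a model inside it (a minimal sketch assuming the default API port of 11434; adjust the volume and port mappings as needed):

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama run llama3.2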

Libraries

Quickstart

To run and chat with Llama 3.2:

ollama run llama3.2

Model library

Ollama supports a list of models available on ollama.com/library.

Here are some example models that can be downloaded:

| Model              | Parameters | Size  | Download                     |
| ------------------ | ---------- | ----- | ---------------------------- |
| Llama 3.2          | 3B         | 2.0GB | ollama run llama3.2          |
| Llama 3.2          | 1B         | 1.3GB | ollama run llama3.2:1b       |
| Llama 3.1          | 8B         | 4.7GB | ollama run llama3.1          |
| Llama 3.1          | 70B        | 40GB  | ollama run llama3.1:70b      |
| Llama 3.1          | 405B       | 231GB | ollama run llama3.1:405b     |
| Phi 3 Mini         | 3.8B       | 2.3GB | ollama run phi3              |
| Phi 3 Medium       | 14B        | 7.9GB | ollama run phi3:medium       |
| Gemma 2            | 2B         | 1.6GB | ollama run gemma2:2b         |
| Gemma 2            | 9B         | 5.5GB | ollama run gemma2            |
| Gemma 2            | 27B        | 16GB  | ollama run gemma2:27b        |
| Mistral            | 7B         | 4.1GB | ollama run mistral           |
| Moondream 2        | 1.4B       | 829MB | ollama run moondream         |
| Neural Chat        | 7B         | 4.1GB | ollama run neural-chat       |
| Starling           | 7B         | 4.1GB | ollama run starling-lm       |
| Code Llama         | 7B         | 3.8GB | ollama run codellama         |
| Llama 2 Uncensored | 7B         | 3.8GB | ollama run llama2-uncensored |
| LLaVA              | 7B         | 4.5GB | ollama run llava             |
| Solar              | 10.7B      | 6.1GB | ollama run solar             |

[!NOTE] You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

Customize a model

Import from GGUF

Ollama supports importing GGUF models via a Modelfile:

  1. Create a file named Modelfile with a FROM instruction pointing to the local filepath of the model you want to import.

    FROM ./vicuna-33b.Q4_0.gguf
    
  2. Create the model in Ollama

    ollama create example -f Modelfile
    
  3. Run the model

    ollama run example
    

Import from PyTorch or Safetensors

See the guide on importing models for more information.
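
As a rough sketch of the same flow (the import guide above is authoritative), a Modelfile's FROM instruction can also point at a local directory of Safetensors weights, after which the model is created and run just as in the GGUF steps:

FROM /path/to/safetensors/directory

ollama create example -f Modelfile
ollama run example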

Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the llama3.2 model:

ollama pull llama3.2

Create a Modelfile:

FROM llama3.2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""

Next, create and run the model:

ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.

For more examples, see the examples directory. For more information on working with a Modelfile, see the Modelfile documentation.

CLI Reference

Create a model

ollama create is used to create a model from a Modelfile.

ollama create mymodel -f ./Modelfile

Pull a model

ollama pull llama3.2

This command can also be used to update a local model. Only the diff will be pulled.

Remove a model

ollama rm llama3.2

Copy a model

ollama cp llama3.2 my-model

Multiline input

For multiline input, you can wrap text with """:

>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.

Multimodal models

ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"
The image features a yellow smiley face, which is likely the central focus of the picture.

Pass the prompt as an argument

$ ollama run llama3.2 "Summarize this file: $(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.

Show model information

ollama show llama3.2

List models on your computer

ollama list

List which models are currently loaded

ollama ps

Stop a model which is currently running

ollama stop llama3.2

Start Ollama

ollama serve is used when you want to start ollama without running the desktop application.
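
A minimal invocation, assuming the default bind address of 127.0.0.1:11434:

ollama serve

To listen on all interfaces instead, set the OLLAMA_HOST environment variable (which Ollama reads for its bind address) before starting the server:

OLLAMA_HOST=0.0.0.0:11434 ollama serve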

Building

See the developer guide.

Running local builds

Once you have built ollama (see the developer guide above), start the server:

./ollama serve

Finally, in a separate shell, run a model:

./ollama run llama3.2

REST API

Ollama has a REST API for running and managing models.

Generate a response

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt":"Why is the sky blue?"
}'

Chat with a model

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
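
Both endpoints stream newline-delimited JSON objects by default. As a sketch of the API's stream parameter, a single complete response can be requested instead:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'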

See the API documentation for all endpoints.

Community Integrations

Web & Desktop

Terminal

Apple Vision Pro

Database

Package managers

Libraries

Mobile

  • Enchanted
  • Maid
  • ConfiChat (a lightweight, standalone, multi-platform, and privacy-focused LLM chat interface with optional encryption)

Extensions & Plugins

Supported backends

  • llama.cpp project founded by Georgi Gerganov.