Ollama Installation
Get up and running with large language models.
Ollama is available for macOS, Windows (preview), and Linux. On Linux, install with:

```
curl -fsSL https://ollama.com/install.sh | sh
```

Manual install instructions are also available.
Docker
The official Ollama Docker image `ollama/ollama` is available on Docker Hub.
Libraries
Quickstart
To run and chat with Llama 3:

```
ollama run llama3
```
Model library
Ollama supports a list of models available on ollama.com/library.

Here are some example models that can be downloaded:
Model | Parameters | Size | Download |
---|---|---|---|
Llama 3 | 8B | 4.7GB | ollama run llama3 |
Llama 3 | 70B | 40GB | ollama run llama3:70b |
Phi 3 Mini | 3.8B | 2.3GB | ollama run phi3 |
Phi 3 Medium | 14B | 7.9GB | ollama run phi3:medium |
Gemma 2 | 9B | 5.5GB | ollama run gemma2 |
Gemma 2 | 27B | 16GB | ollama run gemma2:27b |
Mistral | 7B | 4.1GB | ollama run mistral |
Moondream 2 | 1.4B | 829MB | ollama run moondream |
Neural Chat | 7B | 4.1GB | ollama run neural-chat |
Starling | 7B | 4.1GB | ollama run starling-lm |
Code Llama | 7B | 3.8GB | ollama run codellama |
Llama 2 Uncensored | 7B | 3.8GB | ollama run llama2-uncensored |
LLaVA | 7B | 4.5GB | ollama run llava |
Solar | 10.7B | 6.1GB | ollama run solar |
> [!NOTE]
> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
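The RAM guidance in the note above can be sketched as a small helper. This is illustrative only: the function name is made up, and the tier boundaries are an assumption taken directly from the note (8 GB for 7B, 16 GB for 13B, 32 GB for 33B models).

```python
# Illustrative helper based on the RAM note above.
# Tier boundaries (7B -> 8 GB, 13B -> 16 GB, 33B -> 32 GB) come from the note;
# how sizes in between map to tiers is an assumption of this sketch.

def min_ram_gb(params_billion: float) -> int:
    """Return the suggested minimum RAM (GB) for a model of the given size."""
    if params_billion <= 7:
        return 8
    if params_billion <= 13:
        return 16
    return 32

print(min_ram_gb(7))   # 8
print(min_ram_gb(13))  # 16
print(min_ram_gb(33))  # 32
```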
Customize a model
Import from GGUF
Ollama supports importing GGUF models in the Modelfile:

1. Create a file named `Modelfile`, with a `FROM` instruction pointing to the local file path of the model you want to import:

   ```
   FROM ./vicuna-33b.Q4_0.gguf
   ```

2. Create the model in Ollama:

   ```
   ollama create example -f Modelfile
   ```

3. Run the model:

   ```
   ollama run example
   ```
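Before importing, it can be worth checking that a file really is in GGUF format: GGUF files begin with the 4-byte ASCII magic `GGUF`. A minimal sketch (the helper name is illustrative; the demo file is a throwaway stand-in, not a real model):

```python
import os
import tempfile

def looks_like_gguf(path: str) -> bool:
    """Return True if the file starts with the 4-byte GGUF magic."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Demo on a throwaway file; in real use you would check your model file,
# e.g. looks_like_gguf("./vicuna-33b.Q4_0.gguf").
demo = os.path.join(tempfile.gettempdir(), "demo.gguf")
with open(demo, "wb") as f:
    f.write(b"GGUF" + b"\x00" * 16)  # magic + padding, not a real model
ok = looks_like_gguf(demo)
os.remove(demo)
print(ok)  # True
```

`ollama create` will reject a malformed file anyway; a check like this just fails faster with a clearer signal.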
Import from PyTorch or Safetensors
See the guide on importing models for more information.
Customize a prompt
Models from the Ollama library can be customized with a prompt. For example, to customize the `llama3` model:

```
ollama pull llama3
```

Create a `Modelfile`:

```
FROM llama3

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```
Next, create and run the model:

```
ollama create mario -f ./Modelfile
ollama run mario
```

```
>>> hi
Hello! It's your friend Mario.
```
For more examples, see the examples directory. For more information on working with a Modelfile, see the Modelfile documentation.
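A Modelfile like the Mario example can also be assembled programmatically, which is handy when generating many variants. A small sketch: the helper name is made up, and it emits only the three directives used in the example (`FROM`, `PARAMETER`, `SYSTEM`).

```python
# Assemble a Modelfile string like the Mario example above.
# The helper name is illustrative; only FROM, PARAMETER, and SYSTEM are emitted.

def make_modelfile(base: str, temperature: float, system: str) -> str:
    """Return Modelfile text for the given base model, temperature, and system message."""
    return (
        f"FROM {base}\n"
        f"PARAMETER temperature {temperature}\n"
        'SYSTEM """\n'
        f"{system}\n"
        '"""\n'
    )

modelfile = make_modelfile(
    "llama3",
    1,
    "You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.",
)
print(modelfile)
```

Write the resulting string to a file named `Modelfile`, then create and run it with `ollama create mario -f ./Modelfile` as shown above.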
CLI Reference
Create a model
`ollama create` is used to create a model from a Modelfile.

```
ollama create mymodel -f ./Modelfile
```
Pull a model
```
ollama pull llama3
```
This command can also be used to update a local model. Only the diff will be pulled.
Remove a model
```
ollama rm llama3
```
Copy a model
```
ollama cp llama3 my-model
```
Multiline input
For multiline input, you can wrap text with `"""`:

```
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
```
Multimodal models
```
>>> What's in this image? /Users/jmorgan/Desktop/smile.png
The image features a yellow smiley face, which is likely the central focus of the picture.
```
Pass the prompt as an argument
```
$ ollama run llama3 "Summarize this file: $(cat README.md)"
```

Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
Show model information
```
ollama show llama3
```
List models on your computer
```
ollama list
```
Start Ollama
`ollama serve` is used when you want to start Ollama without running the desktop application.
Building
See the developer guide.
Running local builds
First, start the server:

```
./ollama serve
```
Finally, in a separate shell, run a model:
```
./ollama run llama3
```
REST API
Ollama has a REST API for running and managing models.
Generate a response
```
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?"
}'
```
Chat with a model
```
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
```
See the API documentation for all endpoints.
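By default, these endpoints stream the response as newline-delimited JSON objects, each carrying a fragment of the reply and a `done` flag. A minimal sketch of reassembling a streamed `/api/chat` response; the sample lines are fabricated for illustration, while the `message.content` / `done` field names follow the shape the API returns.

```python
import json

def collect_chat_stream(lines):
    """Concatenate message.content fragments from a streamed /api/chat response."""
    out = []
    for line in lines:
        chunk = json.loads(line)
        if chunk.get("done"):
            break
        out.append(chunk["message"]["content"])
    return "".join(out)

# Fabricated sample of what the server streams: one JSON object per line.
sample = [
    '{"message": {"role": "assistant", "content": "The sky "}, "done": false}',
    '{"message": {"role": "assistant", "content": "is blue."}, "done": false}',
    '{"done": true}',
]
print(collect_chat_stream(sample))  # The sky is blue.
```

In real use, the lines would come from iterating over the HTTP response body of a POST to `http://localhost:11434/api/chat`.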
Community Integrations
Web & Desktop
- Open WebUI
- Enchanted (macOS native)
- Hollama
- Lollms-Webui
- LibreChat
- Bionic GPT
- HTML UI
- Saddle
- Chatbot UI
- Chatbot UI v2
- Typescript UI
- Minimalistic React UI for Ollama Models
- Ollamac
- big-AGI
- Cheshire Cat assistant framework
- Amica
- chatd
- Ollama-SwiftUI
- Dify.AI
- MindMac
- NextJS Web Interface for Ollama
- Msty
- Chatbox
- WinForm Ollama Copilot
- NextChat with Get Started Doc
- Alpaca WebUI
- OllamaGUI
- OpenAOE
- Odin Runes
- LLM-X (Progressive Web App)
- AnythingLLM (Docker + macOS/Windows/Linux native app)
- Ollama Basic Chat: Uses HyperDiv Reactive UI
- Ollama-chats RPG
- QA-Pilot (Chat with Code Repository)
- ChatOllama (Open Source Chatbot based on Ollama with Knowledge Bases)
- CRAG Ollama Chat (Simple Web Search with Corrective RAG)
- RAGFlow (Open-source Retrieval-Augmented Generation engine based on deep document understanding)
- StreamDeploy (LLM Application Scaffold)
- chat (chat web app for teams)
- Lobe Chat with Integrating Doc
- Ollama RAG Chatbot (Local Chat with multiple PDFs using Ollama and RAG)
- BrainSoup (Flexible native client with RAG & multi-agent automation)
- macai (macOS client for Ollama, ChatGPT, and other compatible API back-ends)
- Olpaka (User-friendly Flutter Web App for Ollama)
- OllamaSpring (Ollama Client for macOS)
- LLocal.in (Easy to use Electron Desktop Client for Ollama)
- Ollama with Google Mesop (Mesop Chat Client implementation with Ollama)
- Kerlig AI (AI writing assistant for macOS)
- AI Studio
- Sidellama (browser-based LLM client)
- LLMStack (No-code multi-agent framework to build LLM agents and workflows)
Terminal
- oterm
- Ellama Emacs client
- Emacs client
- gen.nvim
- ollama.nvim
- ollero.nvim
- ollama-chat.nvim
- ogpt.nvim
- gptel Emacs client
- Oatmeal
- cmdh
- ooo
- shell-pilot
- tenere
- llm-ollama for Datasette's LLM CLI.
- typechat-cli
- ShellOracle
- tlm
- podman-ollama
- gollama
Database
- MindsDB (Connects Ollama models with nearly 200 data platforms and apps)
- chromem-go with example
Package managers
Libraries
- LangChain and LangChain.js with example
- LangChainGo with example
- LangChain4j with example
- LangChainRust with example
- LlamaIndex
- LiteLLM
- OllamaSharp for .NET
- Ollama for Ruby
- Ollama-rs for Rust
- Ollama-hpp for C++
- Ollama4j for Java
- ModelFusion Typescript Library
- OllamaKit for Swift
- Ollama for Dart
- Ollama for Laravel
- LangChainDart
- Semantic Kernel - Python
- Haystack
- Elixir LangChain
- Ollama for R - rollama
- Ollama for R - ollama-r
- Ollama-ex for Elixir
- Ollama Connector for SAP ABAP
- Testcontainers
- Portkey
- PromptingTools.jl with an example
- LlamaScript
Mobile
Extensions & Plugins
- Raycast extension
- Discollama (Discord bot inside the Ollama discord channel)
- Continue
- Obsidian Ollama plugin
- Logseq Ollama plugin
- NotesOllama (Apple Notes Ollama plugin)
- Dagger Chatbot
- Discord AI Bot
- Ollama Telegram Bot
- Hass Ollama Conversation
- Rivet plugin
- Obsidian BMO Chatbot plugin
- Cliobot (Telegram bot with Ollama support)
- Copilot for Obsidian plugin
- Obsidian Local GPT plugin
- Open Interpreter
- Llama Coder (Copilot alternative using Ollama)
- Ollama Copilot (Proxy that allows you to use Ollama as a copilot, like GitHub Copilot)
- twinny (Copilot and Copilot chat alternative using Ollama)
- Wingman-AI (Copilot code and chat alternative using Ollama and HuggingFace)
- Page Assist (Chrome Extension)
- AI Telegram Bot (Telegram bot using Ollama in backend)
- AI ST Completion (Sublime Text 4 AI assistant plugin with Ollama support)
- Discord-Ollama Chat Bot (Generalized TypeScript Discord Bot w/ Tuning Documentation)
- Discord AI chat/moderation bot (Chat/moderation bot written in Python; uses Ollama to create personalities)
- Headless Ollama (Scripts to automatically install the Ollama client & models on any OS, for apps that depend on the Ollama server)
Supported backends
- llama.cpp project founded by Georgi Gerganov.