To use chat_ollama(), first download and install Ollama. Then install some models, either from the command line (e.g. with ollama pull llama3.1) or from within R using ollamar (e.g. ollamar::pull("llama3.1")).

This function is a lightweight wrapper around chat_openai(), with the defaults tweaked for Ollama.

Known limitations

  • Tool calling is not supported with streaming (i.e. when echo is "output" or "all").

  • Models are limited to 2048 input tokens by default, and there's no way to increase this limit except by creating a custom model with a different default.

  • Tool calling generally seems quite weak, at least with the models I have tried.

Usage

chat_ollama(
  system_prompt = NULL,
  base_url = "http://localhost:11434",
  model,
  seed = NULL,
  api_args = list(),
  echo = NULL,
  api_key = NULL
)

models_ollama(base_url = "http://localhost:11434")
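
As a quick sketch (assuming Ollama is running locally and at least one model has been pulled), you can list the available models before choosing one for the chat:

```r
# List models installed on the local Ollama server
models <- models_ollama()
print(models)

# Pick one of the listed names when creating the chat
chat <- chat_ollama(model = "llama3.1")
```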

Arguments

system_prompt

A system prompt to set the behavior of the assistant.

base_url

The base URL to the endpoint; the default points to a local Ollama server (http://localhost:11434).

model

The model to use for the chat. Use models_ollama() to see all options.

seed

Optional integer seed that the model uses to try to make output more reproducible.

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().
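
For example, extra body fields such as a sampling temperature can be passed through api_args (a sketch; the exact set of accepted fields depends on Ollama's OpenAI-compatible endpoint, and the values here are illustrative):

```r
# Append extra fields to the body of every chat API call
chat <- chat_ollama(
  model = "llama3.1",
  api_args = list(temperature = 0.2)
)
```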

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

api_key

Ollama doesn't require an API key for local usage, so in most cases you do not need to provide one.

However, if you're accessing an Ollama instance hosted behind a reverse proxy or secured endpoint that enforces bearer‐token authentication, you can set api_key (or the OLLAMA_API_KEY environment variable).
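
A minimal sketch of both cases (the remote URL and token below are hypothetical):

```r
# Local usage: no API key needed
chat <- chat_ollama(model = "llama3.1")

# Hypothetical Ollama instance behind a bearer-token reverse proxy:
# either set the environment variable...
Sys.setenv(OLLAMA_API_KEY = "my-secret-token")
chat <- chat_ollama(
  base_url = "https://ollama.example.com",
  model = "llama3.1"
)

# ...or pass the key directly
chat <- chat_ollama(
  base_url = "https://ollama.example.com",
  model = "llama3.1",
  api_key = "my-secret-token"
)
```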

Value

A Chat object.

Examples

if (FALSE) { # \dontrun{
chat <- chat_ollama(model = "llama3.2")
chat$chat("Tell me three jokes about statisticians")
} # }