To use chat_ollama(), first download and install Ollama. Then install some models from the command line, e.g. with ollama pull llama3.1 or ollama pull gemma2.

This function is a lightweight wrapper around chat_openai() with the defaults tweaked for Ollama.
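
As a rough illustration (a minimal sketch, not ellmer's actual source), the wrapper essentially points chat_openai() at Ollama's OpenAI-compatible endpoint:

# Sketch only: chat_ollama() behaves roughly like this; the real
# implementation differs in detail.
my_chat_ollama <- function(model, base_url = "http://localhost:11434", ...) {
  chat_openai(
    base_url = paste0(base_url, "/v1"), # Ollama serves an OpenAI-style API here
    api_key = "ollama",                 # placeholder key; Ollama ignores it
    model = model,
    ...
  )
}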

Known limitations

  • Tool calling is not supported with streaming (i.e. when echo is "text" or "all")

  • Tool calling generally seems quite weak, at least with the models I have tried.
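
If you want to experiment with tool calling anyway, leave echo unset (or set it to "none") so the request avoids the streaming code path. A minimal sketch, assuming a tool-capable model such as llama3.1 and ellmer's tool() helper:

chat <- chat_ollama(model = "llama3.1")

# Register a simple tool; the function and description are illustrative.
chat$register_tool(tool(
  function() format(Sys.time(), tz = "UTC"),
  "Returns the current time in UTC."
))

# echo = "none" avoids streaming, where tool calling is unsupported.
chat$chat("What time is it?", echo = "none")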

Usage

chat_ollama(
  system_prompt = NULL,
  turns = NULL,
  base_url = "http://localhost:11434",
  model,
  seed = NULL,
  api_args = list(),
  echo = NULL
)

Arguments

system_prompt

A system prompt to set the behavior of the assistant.

turns

A list of Turns to start the chat with (i.e., continuing a previous conversation). If not provided, the conversation begins from scratch.
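
For example, you can resume an earlier conversation by passing its turns back in (a sketch, assuming the Chat object's get_turns() accessor):

# Continue a previous conversation in a fresh Chat object
old_chat <- chat_ollama(model = "llama3.1")
old_chat$chat("My favourite colour is teal.")

new_chat <- chat_ollama(
  model = "llama3.1",
  turns = old_chat$get_turns() # carry the earlier turns forward
)
new_chat$chat("What is my favourite colour?")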

base_url

The base URL to the endpoint; the default points to a locally running Ollama server.
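
You only need to change this if Ollama is running on another machine or port, e.g. (hypothetical address):

# Talk to an Ollama server elsewhere on the network (address is illustrative)
chat <- chat_ollama(
  model = "llama3.1",
  base_url = "http://192.168.0.10:11434"
)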

model

The model to use for the chat. Note that there is no default: you must explicitly name one of the models you have installed (see ollama list on the command line).

seed

Optional integer seed that the model uses to try and make output more reproducible.
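
For instance, fixing the seed makes repeated runs of the same prompt more likely (though not guaranteed) to produce identical output:

# Fix the seed for more reproducible output (assumes llama3.1 is pulled)
chat <- chat_ollama(model = "llama3.1", seed = 1014)
chat$chat("Pick a random integer between 1 and 100")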

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call.
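
This is an escape hatch for options the function doesn't expose directly. A sketch, assuming the endpoint accepts an OpenAI-style temperature field:

# Pass extra body fields straight through to the chat API
chat <- chat_ollama(
  model = "llama3.1",
  api_args = list(temperature = 0) # favour deterministic output
)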

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • text: echo text output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.
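
For example, to watch a response stream in at the console (any pulled model works):

chat <- chat_ollama(model = "llama3.1")
# Stream tokens as they arrive; prefer echo = "none" inside functions
chat$chat("Explain the central limit theorem in one paragraph", echo = "text")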

Value

A Chat object.

Examples

if (FALSE) { # \dontrun{
chat <- chat_ollama(model = "llama3.2")
chat$chat("Tell me three jokes about statisticians")
} # }