To use chat_ollama(), first download and install Ollama. Then pull some models, either from the command line (e.g. ollama pull llama3.1) or from within R using ollamar (e.g. ollamar::pull("llama3.1")).

This function is a lightweight wrapper around chat_openai() with the defaults tweaked for Ollama.
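As a minimal sketch (assuming an Ollama server is running locally and llama3.1 has already been pulled, and that the package providing chat_ollama() is attached):

```r
# Connect to the local Ollama server (default port 11434 is assumed)
# using the llama3.1 model pulled earlier.
chat <- chat_ollama(model = "llama3.1")

# Send a message; by default the response streams to the console.
chat$chat("Why is the sky blue?")
```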
Known limitations
- Tool calling is not supported with streaming (i.e. when echo is "text" or "all").
- Models can only use 2048 input tokens, and there's no way to get them to use more, except by creating a custom model with a different default.
- Tool calling generally seems quite weak, at least with the models I have tried it with.
Usage
chat_ollama(
system_prompt = NULL,
turns = NULL,
base_url = "http://localhost:11434",
model,
seed = NULL,
api_args = list(),
echo = NULL
)
Arguments
- system_prompt
A system prompt to set the behavior of the assistant.
- turns
A list of Turns to start the chat with (i.e., continuing a previous conversation). If not provided, the conversation begins from scratch.
- base_url
The base URL to the endpoint; the default points to a locally running Ollama server.
- model
The model to use for the chat. The default, NULL, will pick a reasonable default and tell you about it. We strongly recommend explicitly choosing a model for all but the most casual use.
- seed
Optional integer seed that the model uses to try to make output more reproducible.
- api_args
Named list of arbitrary extra arguments appended to the body of every chat API call.
- echo
One of the following options:
- "none": don't emit any output (default when running in a function).
- "text": echo text output as it streams in (default when running at the console).
- "all": echo all input and output.
Note this only affects the chat() method.
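As an illustration of the arguments above (again assuming a local Ollama server with llama3.1 installed; the prompt and values are hypothetical):

```r
# Fix a seed for more reproducible output, and pass an extra body
# parameter (here temperature) to the chat API via api_args.
chat <- chat_ollama(
  system_prompt = "You are a terse assistant.",
  model = "llama3.1",
  seed = 42,
  api_args = list(temperature = 0)
)
chat$chat("Name one planet.")
```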
Value
A Chat object.
See also
Other chatbots: chat_bedrock(), chat_claude(), chat_cortex_analyst(), chat_databricks(), chat_deepseek(), chat_gemini(), chat_github(), chat_groq(), chat_openai(), chat_openrouter(), chat_perplexity()