To use chat_lmstudio(), first download and install LM Studio. Then load a model using the LM Studio GUI and start the local server. To learn more about running LM Studio locally, see https://lmstudio.ai/docs/developer/core/server/.
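
Once the server is running, you can check the connection from R by listing the models LM Studio currently exposes (a minimal sketch, assuming the server is on the default port):

library(ellmer)

# List the models served by the local LM Studio instance
models_lmstudio()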

Built on top of chat_openai_compatible().

Usage

chat_lmstudio(
  system_prompt = NULL,
  base_url = Sys.getenv("LMSTUDIO_BASE_URL", "http://localhost:1234"),
  model,
  params = NULL,
  api_args = list(),
  echo = NULL,
  credentials = NULL,
  api_headers = character()
)

models_lmstudio(base_url = "http://localhost:1234", credentials = NULL)

Arguments

system_prompt

A system prompt to set the behavior of the assistant.

base_url

The base URL to the API endpoint.

model

The model to use for the chat. Use models_lmstudio() to see all options.

params

Common model parameters, usually created by params().

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList(). See the sketch after this argument list for an example.

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

credentials

LM Studio doesn't require credentials for local usage, so in most cases you don't need to provide any.

However, if you're accessing an LM Studio instance hosted behind a reverse proxy or secured endpoint that enforces bearer-token authentication, you can set the LMSTUDIO_API_KEY environment variable or provide a callback function to credentials; see the sketch at the end of the Examples section.

api_headers

Named character vector of arbitrary extra headers appended to every chat API call.
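
As an illustration of how api_args is merged into the request body, a minimal sketch (the model name is a placeholder, and temperature is an assumption about what your loaded model's OpenAI-compatible endpoint accepts):

chat <- chat_lmstudio(
  model = "my-local-model",            # placeholder: use a model loaded in LM Studio
  api_args = list(temperature = 0.2)   # merged into the generated body with modifyList()
)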

Value

A Chat object.

Examples

if (FALSE) { # \dontrun{
# https://lmstudio.ai/models/zai-org/glm-4.7-flash
chat <- chat_lmstudio(model = "zai-org/glm-4.7-flash")
chat$chat("Tell me three jokes about statisticians")
} # }
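
If your LM Studio instance sits behind a reverse proxy that enforces bearer-token authentication, a minimal sketch (the URL, token, and model name are placeholders):

Sys.setenv(
  LMSTUDIO_BASE_URL = "https://lmstudio.example.com",  # placeholder proxy URL
  LMSTUDIO_API_KEY = "my-proxy-token"                  # placeholder bearer token
)
chat <- chat_lmstudio(model = "my-local-model")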