Title: LLM-Powered Code Generation and Error Fixing for 'RStudio'
Version: 1.0.0
Description: An 'RStudio' addin that integrates large language model (LLM) assistance directly into the code-editing workflow. Users can generate R code from inline comments, obtain LLM-assisted fixes for console errors, and insert plain-English explanations of selected code blocks - all without leaving the editor. Supports 'OpenAI', 'Anthropic' (Claude), 'DeepSeek', 'Groq', 'Together AI', 'OpenRouter', 'Ollama' (fully local, no API key required), and any 'OpenAI'-compatible custom endpoint (e.g. 'LM Studio', 'vLLM', 'llama.cpp').
License: MIT + file LICENSE
Date: 2026-04-25
Depends: R (≥ 4.1.0)
Encoding: UTF-8
RoxygenNote: 7.3.2
URL: https://github.com/ShiyangZheng/llmcoder
BugReports: https://github.com/ShiyangZheng/llmcoder/issues
Imports: rstudioapi (≥ 0.13), httr2 (≥ 1.0.0), miniUI (≥ 0.1.1), shiny (≥ 1.7.0), stringr (≥ 1.5.0), rlang (≥ 1.0.0)
Suggests: jsonlite (≥ 1.8.0), testthat (≥ 3.0.0), withr (≥ 2.5.0)
NeedsCompilation: no
Packaged: 2026-04-29 07:10:37 UTC; admin
Author: Shiyang Zheng [aut, cre]
Maintainer: Shiyang Zheng <shiyang.zheng@nottingham.ac.uk>
Repository: CRAN
Date/Publication: 2026-04-29 19:00:02 UTC

Collect the most recent R error message using multiple strategies

Description

Tries, in order: the rlang last-error store (rlang::last_error()), base R's .Last.error binding, and the .Last.error.trace character vector. Returns NULL if nothing is found.

Usage

.collect_last_error()
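
A hedged usage sketch (the return value is assumed here to be the error message text, or NULL when no error has been stored):

## Not run: 
stop("something went wrong")  # produce a top-level error first
msg <- .collect_last_error()
if (!is.null(msg)) cat(msg)

## End(Not run)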

Explain selected R code as inline comments

Description

Select a block of R code in the editor, then trigger this addin (recommended shortcut: Ctrl+Shift+E / Cmd+Shift+E). An explanation is inserted as '#' comment lines immediately above the selected code block.

Usage

addin_explain_code()

Details

The LLM receives the selected code and is instructed to produce a concise, human-readable explanation, focusing on what the code does and why, not on basic R syntax. Every output line is prefixed with '# ' so the explanation is valid R that can be left in the source file.

Value

Invisible NULL (called for side-effects).

See Also

addin_generate_from_comment(), llmcoder_setup()


Fix the last console error automatically

Description

After running code that produces an error in the R console, trigger this addin (recommended shortcut: Ctrl+Shift+F / Cmd+Shift+F).

Usage

addin_fix_console_error()

Details

The addin attempts to recover the most recent error message using several strategies, in order of priority:

  1. The rlang last-error store (rlang::last_error()), which captures errors thrown by rlang-aware packages and the tidyverse.

  2. Base R's .Last.error binding (set whenever an unhandled condition reaches the top level).

  3. The .Last.error.trace character vector written by some versions of rlang.

The complete source file currently open in the editor is also sent to the LLM as context. The LLM returns the entire corrected file, with changed lines annotated as '# FIX: <reason>'. A diff-style preview dialog lets you review and edit the fix before applying it.

Workflow

  1. Run code; an error appears in the console.

  2. Trigger this addin.

  3. Review the fix in the preview dialog, then click Apply Fix.

If no recent error is detected, a dialog explains the possible reasons and suggests using addin_fix_selected_error() instead.

Value

Invisible NULL (called for side-effects).

See Also

addin_fix_selected_error(), llmcoder_setup()


Fix an error by selecting its text

Description

Select the error message text in the editor (or paste it into a temporary comment), then trigger this addin. The addin pairs the selected text with the complete source file currently open in the editor and asks the LLM for a fix, displaying the result in a review dialog.

Usage

addin_fix_selected_error()

Details

This addin is the recommended fallback when addin_fix_console_error() does not detect an error automatically (e.g., because the error occurred inside a tryCatch() block or in a separate R process).

Workflow

  1. Copy the error message from the console.

  2. Paste it anywhere in the source file, or simply select it in the console output if your terminal supports that.

  3. Select the error text in the editor.

  4. Trigger this addin.

  5. Review and apply the suggested fix.

Value

Invisible NULL (called for side-effects).

See Also

addin_fix_console_error(), llmcoder_setup()


Generate R code from a comment (silent insert)

Description

Place the cursor on a line beginning with '#', then trigger this addin (default shortcut: Ctrl+Shift+G on Windows/Linux, Cmd+Shift+G on macOS). The LLM reads the comment text and the surrounding code context, then inserts the generated R code on the line immediately below the comment.

Usage

addin_generate_from_comment()

Details

The addin extracts the text of the comment at the cursor position and up to getOption("llmcoder.context_lines", 40L) lines of preceding code as context. The provider, model, and API key are taken from options set by llmcoder_setup() or the LLMcoder Settings addin.

No dialog is shown; code is inserted immediately. Use addin_generate_with_preview() if you prefer to review the output first.
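
Because the context size is read from getOption("llmcoder.context_lines", 40L), it can be tuned per session; a brief sketch (the comment text is illustrative):

## Not run: 
# Send more surrounding code as context for this session
options(llmcoder.context_lines = 60L)

# Place the cursor on a comment line such as:
#   # compute group means of mpg by cyl in mtcars
# then trigger the addin
addin_generate_from_comment()

## End(Not run)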

Value

Invisible NULL (called for side-effects).

See Also

addin_generate_with_preview(), llmcoder_setup()


Generate R code with an editable preview dialog

Description

Same as addin_generate_from_comment() but opens a Shiny gadget so you can review and optionally edit the generated code before it is inserted into the editor. Recommended shortcut: Ctrl+Shift+P / Cmd+Shift+P.

Usage

addin_generate_with_preview()

Details

The preview dialog shows the generated code in an editable text area. Click Insert to place it in the editor, or close the dialog to discard the result.

Value

Invisible NULL (called for side-effects).

See Also

addin_generate_from_comment(), llmcoder_setup()


Open the LLMcoder settings dialog

Description

Launches an interactive Shiny gadget that lets you configure the LLM provider, model, API key, Ollama URL (for local models), custom base URL (for LM Studio / vLLM / llama.cpp), and context-window size. Settings can optionally be persisted to ~/.Rprofile so they survive R restarts.

Usage

addin_settings()

Value

Invisible NULL (called for side-effects).

See Also

llmcoder_setup(), llmcoder_config()


System prompt for code explanation

Description

Returns the system prompt instructing the LLM to write R comments explaining the user's selected code.

Usage

build_explain_prompt()

Value

Character string: the system prompt sent to the LLM for the explain workflow.

Examples

## Not run: 
build_explain_prompt()

## End(Not run)

System prompt for error fixing

Description

Returns the system prompt instructing the LLM to diagnose an R error and produce corrected code.

Usage

build_fix_prompt()

Value

Character string: the system prompt sent to the LLM for the fix workflow.

Examples

## Not run: 
build_fix_prompt()

## End(Not run)

System prompt for code generation

Description

Returns the system prompt instructing the LLM to generate R code from a user comment.

Usage

build_system_prompt()
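
As with build_explain_prompt(), the prompt string can be inspected directly:

## Not run: 
build_system_prompt()

## End(Not run)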

Anthropic Messages API call

Description

Low-level helper that sends the prompt to the Anthropic Messages API and returns the model's text response.

Usage

call_anthropic(prompt, system_prompt, api_key, model)
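
A hedged usage sketch (the argument values are illustrative; the model name is the package default for Anthropic):

## Not run: 
call_anthropic(
  prompt        = "Write R code that fits a linear model of mpg on wt",
  system_prompt = "You are an R programming assistant.",
  api_key       = Sys.getenv("ANTHROPIC_API_KEY"),
  model         = "claude-sonnet-4-20250514"
)

## End(Not run)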

Call the configured LLM

Description

Unified dispatch function that reads provider, model, and credentials from options (set via llmcoder_setup() or Addins > LLMcoder Settings) and forwards the request to the appropriate backend.

Usage

call_llm(prompt, system_prompt, context = NULL)

Arguments

prompt

Character. The user-facing instruction.

system_prompt

Character. The system-level instruction for the model.

context

Character or NULL. Surrounding R code sent as additional context (prepended to prompt).

Details

Supported providers:

"openai"

OpenAI Chat Completions API (⁠https://api.openai.com/v1⁠).

"anthropic"

Anthropic Messages API (⁠https://api.anthropic.com/v1/messages⁠).

"deepseek"

DeepSeek Chat API, OpenAI-compatible (⁠https://api.deepseek.com/v1⁠).

"ollama"

Local Ollama server (default ⁠http://localhost:11434⁠). No API key required.

"groq"

Groq Cloud API, OpenAI-compatible (⁠https://api.groq.com/openai/v1⁠). Extremely fast inference.

"together"

Together AI API, OpenAI-compatible (⁠https://api.together.xyz/v1⁠). Wide open-source model selection.

"openrouter"

OpenRouter API, OpenAI-compatible (⁠https://openrouter.ai/api/v1⁠). Unified gateway to 100+ models.

"custom"

Any OpenAI-compatible server. Set llmcoder.custom_url to the base URL (e.g.\ "http://localhost:1234/v1" for LM Studio).

Value

Character string containing the model's response text.

Examples

## Not run: 
llmcoder_setup("ollama", model = "llama3")
resp <- call_llm(
  prompt        = "Write R code to compute the mean of a numeric vector",
  system_prompt = "You are an R programming assistant.",
  context       = NULL
)
cat(resp)

## End(Not run)


Ollama local API (uses the OpenAI-compatible /v1 endpoint, Ollama >= 0.1.24)

Description

No API key is required. The Ollama server must be running locally; start it with 'ollama serve' in a terminal.

Usage

call_ollama(prompt, system_prompt, model, base_url)
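
A hedged usage sketch (the model must already be installed with 'ollama pull'; argument values are illustrative):

## Not run: 
# Requires a running local Ollama server ('ollama serve')
call_ollama(
  prompt        = "Write R code to read a CSV file",
  system_prompt = "You are an R programming assistant.",
  model         = "llama3",
  base_url      = "http://localhost:11434"
)

## End(Not run)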

Generic OpenAI-compatible chat completions call

Description

Low-level helper that performs a chat-completions request against any OpenAI-compatible endpoint. Used by most provider backends.

Usage

call_openai_compat(
  prompt,
  system_prompt,
  api_key,
  model,
  base_url,
  extra_hdrs = character()
)
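
A hedged usage sketch against the OpenAI endpoint (values are illustrative; other OpenAI-compatible providers differ only in base_url and key):

## Not run: 
call_openai_compat(
  prompt        = "Explain what lm() returns",
  system_prompt = "You are an R programming assistant.",
  api_key       = Sys.getenv("OPENAI_API_KEY"),
  model         = "gpt-4o-mini",
  base_url      = "https://api.openai.com/v1"
)

## End(Not run)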

Strip markdown code fences from LLM output

Description

Strips common markdown code fences from LLM output so the raw code can be inserted into the editor.

Usage

clean_code_output(code)

Arguments

code

Character string returned by an LLM, possibly wrapped in ```r code fences.

Value

Character string with fences removed. If no fences are found, the input is returned as-is.

Examples

## Not run: 
raw <- "\n```r\nx <- mean(1:10)\nprint(x)\n```\n"
clean_code_output(raw)
clean_code_output("no fences here")

## End(Not run)

Default model name per provider

Description

Returns a sensible default model name when the user has not specified one explicitly.

Usage

default_model(provider)

Arguments

provider

Character. Provider identifier (see call_llm()).

Value

Character string with the default model name.

Examples

## Not run: 
default_model("openai")
default_model("anthropic")
default_model("ollama")

## End(Not run)

Extract the comment text at the current cursor position

Description

Reads the line at the cursor position from the active editor context and returns its components. Throws an informative error if the cursor is not positioned on a comment line (i.e., a line starting with ⁠#⁠, possibly preceded by whitespace).

Usage

extract_comment_at_cursor(ctx)

Arguments

ctx

An rstudio_editor_context object from rstudioapi::getSourceEditorContext().

Value

A named list with four components:

comment

Character. The comment text with the leading ⁠#⁠ character(s) and optional space stripped.

row

Integer. 1-based row index of the comment line in the document.

full_line

Character. The raw full-line text as it appears in the editor.

indent

Character. The leading whitespace of the line (used to preserve indentation when inserting generated code).
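
A hedged usage sketch (requires RStudio; the comment text shown is illustrative):

## Not run: 
ctx  <- rstudioapi::getSourceEditorContext()
info <- extract_comment_at_cursor(ctx)
info$comment   # comment text with the leading '#' stripped
info$indent    # leading whitespace, reused when inserting code

## End(Not run)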


Shared CSS for all gadgets

Description

Returns the CSS rules shared by all Shiny gadgets in the package.

Usage

gadget_css()

Collect N lines of surrounding code above the cursor

Description

Returns a character string containing the n lines of source code immediately above the comment line, joined by newlines. This is sent to the LLM as context so that it can infer variable names, existing code style, and already-loaded packages.

Usage

gather_context(ctx, row, n = 30)

Arguments

ctx

An rstudio_editor_context object.

row

Integer. 1-based row of the comment line (context is taken from rows max(1, row - n) to row - 1).

n

Integer. Maximum number of context lines (default 30).

Value

A single character string (may be "" if row == 1).
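
A hedged usage sketch (requires RStudio; the row value is illustrative):

## Not run: 
ctx <- rstudioapi::getSourceEditorContext()
# Up to 10 lines of code above row 25, joined by newlines
gather_context(ctx, row = 25, n = 10)

## End(Not run)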


Get the active source editor context

Description

Wraps rstudioapi::getSourceEditorContext() with a check that RStudio is available. Called by all addin entry points before any other operation.

Usage

get_editor_ctx()

Value

An rstudio_editor_context object (a list returned by rstudioapi::getSourceEditorContext()).


Insert text immediately after a given row in the editor

Description

Inserts one or more lines of text at the beginning of the row that follows row in the currently active source editor. Each line is prepended with indent to match the indentation level of the originating comment.

Usage

insert_after_row(text, row, indent = "")

Arguments

text

Character. Code to insert; may contain newlines.

row

Integer. 1-based row after which the text is inserted.

indent

Character. Leading whitespace prepended to every inserted line (default "").

Value

Invisible NULL (called for side-effects).
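
A hedged usage sketch (requires an active RStudio editor; the row and indent values are illustrative):

## Not run: 
# Insert two lines of code below row 5, indented by two spaces
insert_after_row("x <- 1:10\nmean(x)", row = 5, indent = "  ")

## End(Not run)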


Show the current LLMcoder configuration

Description

Returns (and prints) the active provider, model, API key (masked), context-lines setting, and any provider-specific URLs.

Usage

llmcoder_config()

Value

An object of class "llmcoder_config": a named list with elements provider, model, api_key, context_lines, ollama_url, and custom_url. The API key is masked for security. When printed, it displays in a human-readable table.

See Also

llmcoder_setup()

Examples

# Show current configuration (reads from option values)
llmcoder_config()

# Capture the config as a list for programmatic use
cfg <- llmcoder_config()
cfg$provider
cfg$model


Configure LLMcoder for the current session

Description

Sets the LLM provider, API key, model, and related options for the current R session. For permanent configuration that survives restarts, use Addins > LLMcoder Settings, which writes to ~/.Rprofile.

Usage

llmcoder_setup(
  provider = c("openai", "anthropic", "deepseek", "ollama", "groq", "together",
    "openrouter", "custom"),
  api_key = NULL,
  model = NULL,
  context_lines = 40L,
  ollama_url = "http://localhost:11434",
  custom_url = ""
)

Arguments

provider

Character. One of "openai", "anthropic", "deepseek", "ollama", "groq", "together", "openrouter", or "custom".

api_key

Character. Your API key. Not required when provider = "ollama".

model

Character. Model identifier. If NULL or "", a sensible default is chosen for the provider (see Details).

context_lines

Integer. Number of lines of code above the cursor that are sent as context to the LLM (default 40). Higher values improve suggestion quality but increase latency and token cost.

ollama_url

Character. Base URL of the Ollama server (default "http://localhost:11434"). Only used when provider = "ollama".

custom_url

Character. Base URL of a custom OpenAI-compatible server (e.g. "http://localhost:1234/v1" for LM Studio). Only used when provider = "custom".

Details

Provider defaults:

Provider     Default model                     Notes
openai       gpt-4o-mini                       Fast, cost-effective
anthropic    claude-sonnet-4-20250514          Strongest reasoning
deepseek     deepseek-chat                     Very cheap, great code quality
ollama       llama3                            No API key, fully local
groq         llama-3.3-70b-versatile           Extremely fast inference
together     meta-llama/Llama-3-70b-chat-hf    Large open-source model choice
openrouter   openai/gpt-4o-mini                Unified gateway for 100+ models
custom       "" (must specify)                 Any OpenAI-compatible endpoint

Value

Invisible NULL.

See Also

llmcoder_config(), addin_settings()

Examples

## Not run: 
# OpenAI
llmcoder_setup("openai", api_key = Sys.getenv("OPENAI_API_KEY"))
llmcoder_setup("openai", api_key = Sys.getenv("OPENAI_API_KEY"), model = "gpt-4o")

# Anthropic Claude
llmcoder_setup("anthropic", api_key = Sys.getenv("ANTHROPIC_API_KEY"))

# DeepSeek (cheapest, excellent code quality)
llmcoder_setup("deepseek", api_key = Sys.getenv("DEEPSEEK_API_KEY"))

# Ollama — fully local, no API key needed
llmcoder_setup("ollama", model = "qwen2.5-coder:7b")
llmcoder_setup("ollama", model = "codellama:13b",
               ollama_url = "http://192.168.1.10:11434")  # remote server

# Groq — extremely fast inference on open models
llmcoder_setup("groq",
  api_key = Sys.getenv("GROQ_API_KEY"),
  model   = "llama-3.3-70b-versatile")

# Together AI — wide open-source model selection
llmcoder_setup("together",
  api_key = Sys.getenv("TOGETHER_API_KEY"),
  model   = "mistralai/Mixtral-8x7B-Instruct-v0.1")

# OpenRouter — unified gateway, supports 100+ models
llmcoder_setup("openrouter",
  api_key = Sys.getenv("OPENROUTER_API_KEY"),
  model   = "anthropic/claude-3.5-sonnet")

# LM Studio or any OpenAI-compatible local server
llmcoder_setup("custom",
  api_key    = "lm-studio",
  model      = "local-model",
  custom_url = "http://localhost:1234/v1")

# Reduce context window to save tokens
llmcoder_setup("openai",
  api_key       = Sys.getenv("OPENAI_API_KEY"),
  context_lines = 20L)

## End(Not run)


Emit a status message to the R console

Description

Prefixes the message with '[llmcoder] ' so users can distinguish addin output from their own code output.

Usage

notify(msg)

Arguments

msg

Character. The message text.

Value

Invisible NULL.
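
A brief usage sketch (the message text is illustrative; per the description, output is prefixed with '[llmcoder] '):

## Not run: 
notify("Contacting the LLM provider...")
# [llmcoder] Contacting the LLM provider...

## End(Not run)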


List models available on a running Ollama server

Description

Queries GET /api/tags on the local Ollama REST API and returns the names of all installed models. Useful for populating the model selector in the Settings gadget.

Usage

ollama_list_models(
  base_url = getOption("llmcoder.ollama_url", "http://localhost:11434")
)

Arguments

base_url

Character. Ollama base URL. Defaults to the value of getOption("llmcoder.ollama_url", "http://localhost:11434").

Details

Ollama must be running ('ollama serve') before calling this function. Models are installed with 'ollama pull <model>' from the terminal.

Value

Character vector of model tag names, or NULL if Ollama is not reachable.

Examples

## Not run: 
ollama_list_models()
# [1] "llama3:latest"  "qwen2.5-coder:7b"  "mistral:latest"

## End(Not run)


Safely call the LLM, catching API errors

Description

Wraps call_llm(), catching API errors so that addin entry points can fail gracefully instead of aborting.

Usage

safe_call_llm(prompt, system_prompt, context)

Safely obtain the active editor context

Description

Obtains the active source editor context, catching the error raised when no editor is available.

Usage

safe_get_ctx()

Write llmcoder options to ~/.Rprofile

Description

Writes (or replaces) a '# --- llmcoder ---' block in the user's ~/.Rprofile so that llmcoder settings persist across R sessions.

Usage

write_rprofile(provider, model, api_key, ctx_lines, ollama_url, custom_url)

Arguments

provider

Character. Provider identifier (see llmcoder_setup()).

model

Character. Model name.

api_key

Character. API key (may be "" for Ollama).

ctx_lines

Integer. Number of context lines.

ollama_url

Character. Ollama base URL.

custom_url

Character. Custom endpoint base URL.

Value

Invisible NULL. Called for its side-effect of writing to ~/.Rprofile.

Examples

## Not run: 
write_rprofile(
  provider   = "ollama",
  model      = "llama3",
  api_key    = "",
  ctx_lines  = 40L,
  ollama_url = "http://localhost:11434",
  custom_url = ""
)

## End(Not run)