Title: | Use Large Language Models Directly in your Development Environment |
---|---|
Description: | Large language models are readily accessible via API. This package lowers the barrier to use the API inside of your development environment. For more on the API, see <https://platform.openai.com/docs/introduction>. |
Authors: | Michel Nivard [aut, cph], James Wade [aut, cre, cph], Samuel Calderon [aut] |
Maintainer: | James Wade <[email protected]> |
License: | MIT + file LICENSE |
Version: | 0.4.0.9009 |
Built: | 2024-11-19 05:41:47 UTC |
Source: | https://github.com/MichelNivard/gptstudio |
This function provides a high-level interface for communicating with various services and models supported by gptstudio. It orchestrates the creation, configuration, and execution of a request based on user inputs and options set for gptstudio. The function supports a range of tasks from text generation to code synthesis and can be customized according to skill level and coding style preferences.
chat(
  prompt,
  service = getOption("gptstudio.service"),
  history = list(list(role = "system", content = "You are an R chat assistant")),
  stream = FALSE,
  model = getOption("gptstudio.model"),
  skill = getOption("gptstudio.skill"),
  style = getOption("gptstudio.code_style", "no preference"),
  task = getOption("gptstudio.task", "coding"),
  custom_prompt = NULL,
  process_response = FALSE,
  session = NULL,
  ...
)
prompt |
A string containing the initial prompt or question to be sent to the model. This is a required parameter. |
service |
The AI service to be used for the request. If not explicitly provided, this defaults to the value set in the "gptstudio.service" option. |
history |
An optional parameter that can be used to include previous interactions or context for the current session. Defaults to a system message indicating "You are an R chat assistant". |
stream |
A logical value indicating whether the response should be streamed as it is generated. Defaults to FALSE. |
model |
The specific model to use for the request. If not explicitly provided, this defaults to the value set in the "gptstudio.model" option. |
skill |
A character string indicating the skill or capability level of the user. This parameter allows for customizing the behavior of the model to the user. If not explicitly provided, this defaults to the value set in the "gptstudio.skill" option. |
style |
The coding style preferred by the user for code generation tasks. This parameter is particularly useful when the task involves generating code snippets or scripts. If not explicitly provided, this defaults to the value set in the "gptstudio.code_style" option ("no preference" if unset). |
task |
The specific type of task to be performed, ranging from text generation to code synthesis, depending on the capabilities of the model. If not explicitly provided, this defaults to the value set in the "gptstudio.task" option ("coding" if unset). |
custom_prompt |
An optional parameter that provides a way to extend or customize the initial prompt with additional instructions or context. |
process_response |
A logical indicating whether to process the model's response. If FALSE (the default), the raw API response is returned. |
session |
An optional parameter for a shiny session object. |
... |
Reserved for future use. |
Depending on the task and processing, the function returns the response from the model, which could be text, code, or any other structured output defined by the task and model capabilities. The precise format and content of the output depend on the specified options and the capabilities of the selected model.
## Not run:
# Basic usage with a text prompt:
result <- chat("What is the weather like today?")

# Advanced usage with custom settings, assuming appropriate global options are set:
result <- chat(
  prompt = "Write a simple function in R",
  skill = "advanced",
  style = "tidyverse",
  task = "coding"
)

# Usage with explicit service and model specification:
result <- chat(
  prompt = "Explain the concept of tidy data in R",
  service = "openai",
  model = "gpt-4-turbo-preview",
  skill = "intermediate",
  task = "general"
)
## End(Not run)
This function creates a customizable system prompt based on user-defined parameters such as coding style, skill level, and task. It supports customization for specific use cases through a custom prompt option.
chat_create_system_prompt(
  style = getOption("gptstudio.code_style"),
  skill = getOption("gptstudio.skill"),
  task = getOption("gptstudio.task"),
  custom_prompt = getOption("gptstudio.custom_prompt"),
  in_source = FALSE
)
style |
A character string indicating the preferred coding style. Valid values are "tidyverse", "base", "no preference". Defaults to the "gptstudio.code_style" option. |
skill |
The self-described skill level of the programmer. Valid values are "beginner", "intermediate", "advanced", "genius". Defaults to the "gptstudio.skill" option. |
task |
The specific task to be performed: "coding", "general", "advanced developer", or "custom". This influences the generated system prompt. Defaults to "coding". |
custom_prompt |
An optional custom prompt string to be utilized when task is set to "custom". Defaults to the "gptstudio.custom_prompt" option. |
in_source |
A logical indicating whether the instructions are intended for use in a source script. This parameter is required and must be explicitly set to TRUE or FALSE. Default is FALSE. |
Returns a character string that forms a system prompt tailored to the specified parameters. The string provides guidance or instructions based on the user's coding style, skill level, and task.
## Not run:
chat_create_system_prompt(in_source = TRUE)
chat_create_system_prompt(
  style = "tidyverse",
  skill = "advanced",
  task = "coding",
  in_source = FALSE
)
## End(Not run)
This appends a new response to the chat history
chat_history_append(history, role, content, name = NULL)
history |
List containing previous responses. |
role |
Author of the message. One of "system", "user", or "assistant". |
content |
Content of the message. If it is from the user, it most likely comes from an interactive input. |
name |
Name for the author of the message. Currently used to support rendering of help pages. |
list of chat messages
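A usage sketch (the exact return structure is assumed from the description above, not verified):
## Not run:
history <- list(list(role = "system", content = "You are an R chat assistant"))
history <- chat_history_append(
  history = history,
  role = "user",
  content = "How do I read a CSV file into R?"
)
## End(Not run)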
Default chat message
chat_message_default(translator = create_translator())
translator |
A Translator from shiny.i18n::Translator. |
A default chat message for welcoming users.
This generic function checks the API connection for a specified service by dispatching to related methods.
check_api_connection_openai(service, api_key)
service |
The name of the API service for which the connection is being checked. |
api_key |
The API key used for authentication. |
A logical value indicating whether the connection was successful.
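For example (a sketch; assumes a valid key is available in the OPENAI_API_KEY environment variable):
## Not run:
ok <- check_api_connection_openai(
  service = "openai",
  api_key = Sys.getenv("OPENAI_API_KEY")
)
if (isTRUE(ok)) message("Connection to OpenAI succeeded")
## End(Not run)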
Create a bslib theme that matches the user's RStudio IDE theme.
create_chat_app_theme(ide_colors = get_ide_theme_info())
ide_colors |
List containing the colors of the IDE theme. |
A bslib theme
This function submits a user message to the Cohere Chat API, potentially along with other parameters such as chat history or connectors, and returns the API's response.
create_chat_cohere(
  prompt,
  chat_history = NULL,
  connectors = NULL,
  model = "command",
  api_key = Sys.getenv("COHERE_API_KEY")
)
prompt |
A string containing the user message. |
chat_history |
A list of previous messages for context, if any. |
connectors |
A list of connector objects, if any. |
model |
A string representing the Cohere model to be used, defaulting to "command". Other options include "command-light", "command-nightly", and "command-light-nightly". |
api_key |
The API key for accessing the Cohere API, defaults to the COHERE_API_KEY environment variable. |
The response from the Cohere Chat API containing the model's reply.
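A usage sketch (assumes a valid key in the COHERE_API_KEY environment variable):
## Not run:
response <- create_chat_cohere(
  prompt = "What is the tidyverse?",
  model = "command"
)
## End(Not run)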
Generate text completions using Anthropic's API
create_completion_anthropic(
  prompt = list(list(role = "user", content = "Hello")),
  system = NULL,
  model = "claude-3-5-sonnet-20240620",
  max_tokens = 1028,
  key = Sys.getenv("ANTHROPIC_API_KEY")
)
prompt |
The prompt for generating completions |
system |
A system message to instruct the model. Defaults to NULL. |
model |
The model to use for generating text. Defaults to "claude-3-5-sonnet-20240620". |
max_tokens |
The maximum number of tokens to generate. Defaults to 1028. |
key |
The API key for accessing Anthropic's API. By default, the function will try to use the ANTHROPIC_API_KEY environment variable. |
A list with the generated completions and other information returned by the API.
## Not run:
create_completion_anthropic(
  prompt = "\n\nHuman: Hello, world!\n\nAssistant:",
  model = "claude-3-haiku-20240307",
  max_tokens = 1028
)
## End(Not run)
Use this function to generate text completions using the Azure OpenAI service.
create_completion_azure_openai(
  prompt,
  task = Sys.getenv("AZURE_OPENAI_TASK"),
  base_url = Sys.getenv("AZURE_OPENAI_ENDPOINT"),
  deployment_name = Sys.getenv("AZURE_OPENAI_DEPLOYMENT_NAME"),
  api_key = Sys.getenv("AZURE_OPENAI_API_KEY"),
  api_version = Sys.getenv("AZURE_OPENAI_API_VERSION")
)
prompt |
a list to use as the prompt for generating completions |
task |
a character string for the API task (e.g. "completions"). Defaults to the Azure OpenAI task from environment variables if not specified. |
base_url |
a character string for the base url. It defaults to the Azure OpenAI endpoint from environment variables if not specified. |
deployment_name |
a character string for the deployment name. It will default to the Azure OpenAI deployment name from environment variables if not specified. |
api_key |
a character string for the API key. It will default to the Azure OpenAI API key from your environment variables if not specified. |
api_version |
a character string for the API version. It will default to the Azure OpenAI API version from your environment variables if not specified. |
a list with the generated completions and other information returned by the API
Generate text completions using Google AI Studio's API
create_completion_google(
  prompt,
  model = "gemini-pro",
  key = Sys.getenv("GOOGLE_API_KEY")
)
prompt |
The prompt for generating completions |
model |
The model to use for generating text. Defaults to "gemini-pro". |
key |
The API key for accessing Google AI Studio's API. By default, the function will try to use the GOOGLE_API_KEY environment variable. |
A list with the generated completions and other information returned by the API.
## Not run:
create_completion_google(
  prompt = "Write a story about a magic backpack",
  temperature = 1.0,
  candidate_count = 3
)
## End(Not run)
Generate text completions using HuggingFace's API
create_completion_huggingface(
  prompt,
  history = NULL,
  model = "tiiuae/falcon-7b-instruct",
  token = Sys.getenv("HF_API_KEY"),
  max_new_tokens = 250
)
prompt |
The prompt for generating completions |
history |
A list of the previous chat responses |
model |
The model to use for generating text |
token |
The API key for accessing HuggingFace's API. By default, the function will try to use the HF_API_KEY environment variable. |
max_new_tokens |
Maximum number of tokens to generate, defaults to 250 |
A list with the generated completions and other information returned by the API.
## Not run:
create_completion_huggingface(
  model = "gpt2",
  prompt = "Hello world!"
)
## End(Not run)
This function sends a series of messages alongside a chosen model to the Perplexity API to generate a chat completion. It returns the API's generated responses.
create_completion_perplexity(
  prompt,
  model = "mistral-7b-instruct",
  api_key = Sys.getenv("PERPLEXITY_API_KEY")
)
prompt |
A list containing prompts to be sent in the chat. |
model |
A character string representing the Perplexity model to be used. Defaults to "mistral-7b-instruct". |
api_key |
The API key for accessing the Perplexity API. Defaults to the PERPLEXITY_API_KEY environment variable. |
The response from the Perplexity API containing the completion for the chat.
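A usage sketch (assumes a valid key in the PERPLEXITY_API_KEY environment variable):
## Not run:
response <- create_completion_perplexity(
  prompt = "Explain vectorization in R",
  model = "mistral-7b-instruct"
)
## End(Not run)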
This returns a list of color properties for a chat message
create_ide_matching_colors(
  role = c("user", "assistant"),
  ide_colors = get_ide_theme_info()
)
role |
The role of the message author |
ide_colors |
List containing the colors of the IDE theme. |
list
The language can be set via options("gptstudio.language" = "<language>") (defaults to "en").
create_translator(language = getOption("gptstudio.language"))
language |
The language to be found in the translation JSON file. |
A Translator from shiny.i18n::Translator
Encode an image file to base64
encode_image(image_path)
image_path |
String containing the path to the image file |
A base64 encoded string of the image
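A usage sketch (the file path is hypothetical):
## Not run:
encoded <- encode_image("path/to/image.png")
substr(encoded, 1, 30)
## End(Not run)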
Get a list of the endpoints supported by gptstudio.
get_available_endpoints()
A character vector
get_available_endpoints()
Get a list of the models supported by the specified API service.
get_available_models(service)
service |
The API service |
A character vector
## Not run: get_available_models() ## End(Not run)
Retrieves the current RStudio IDE theme information including whether it is a dark theme, and the background and foreground colors in hexadecimal format.
get_ide_theme_info()
A list with the following components:
is_dark |
A logical indicating whether the current IDE theme is dark. |
bg |
A character string representing the background color of the IDE theme in hex format. |
fg |
A character string representing the foreground color of the IDE theme in hex format. |
If RStudio is unavailable, returns the fallback theme details.
theme_info <- get_ide_theme_info()
print(theme_info)
This function initializes and runs the Chat GPT Shiny App as a background job in RStudio and opens it in the viewer pane or browser window.
gptstudio_chat(host = getOption("shiny.host", "127.0.0.1"))
host |
A character string specifying the host on which to run the app. Defaults to the value of getOption("shiny.host", "127.0.0.1"). |
The function performs the following steps:
Verifies that RStudio API is available.
Finds an available port for the Shiny app.
Creates a temporary directory for the app files.
Runs the app as a background job in RStudio.
Opens the app in the RStudio viewer pane or browser window.
This function does not return a value. It runs the Shiny app as a side effect.
This function is designed to work within the RStudio IDE and requires the rstudioapi package.
## Not run: gptstudio_chat() ## End(Not run)
Call this function as an RStudio addin to ask GPT to improve spelling and grammar of selected text.
gptstudio_chat_in_source_addin()
This function has no return value.
# Select some text in a source file
# Then call the function as an RStudio addin
## Not run: gptstudio_chat_in_source() ## End(Not run)
Call this function as an RStudio addin to ask GPT to add comments to your code.
gptstudio_comment_code()
This function has no return value.
# Open an R file in RStudio
# Then call the function as an RStudio addin
## Not run: gptstudio_comment_code() ## End(Not run)
This function dynamically creates a request skeleton for different AI text generation services.
gptstudio_create_skeleton(
  service = "openai",
  prompt = "Name the top 5 packages in R.",
  history = list(list(role = "system", content = "You are an R chat assistant")),
  stream = TRUE,
  model = "gpt-4o-mini",
  ...
)
service |
The text generation service to use. Currently supports "openai", "huggingface", "anthropic", "google", "azure_openai", "ollama", and "perplexity". |
prompt |
The initial prompt or question to pass to the text generation service. |
history |
A list indicating the conversation history, where each element is a list with elements "role" (who is speaking; e.g., "system", "user") and "content" (what was said). |
stream |
Logical; indicates if streaming responses should be used. Currently, this option is not supported across all services. |
model |
The specific model to use for generating responses. Defaults to "gpt-4o-mini". |
... |
Additional arguments passed to the service-specific skeleton creation function. |
Depending on the selected service, returns a list that represents the configured request ready to be passed to the corresponding API.
## Not run:
request_skeleton <- gptstudio_create_skeleton(
  service = "openai",
  prompt = "Name the top 5 packages in R.",
  history = list(list(role = "system", content = "You are an R assistant")),
  stream = TRUE,
  model = "gpt-3.5-turbo"
)
## End(Not run)
This function provides a generic interface for calling different APIs (e.g., OpenAI, HuggingFace, Google AI Studio). It dispatches the actual API calls to the relevant method based on the class of the skeleton argument.
gptstudio_request_perform(skeleton, ...)
skeleton |
A gptstudio_request_skeleton object. |
... |
Extra arguments passed to service-specific methods. |
A gptstudio_response_skeleton object
## Not run: gptstudio_request_perform(gptstudio_skeleton) ## End(Not run)
This function provides a generic interface for calling different APIs (e.g., OpenAI, HuggingFace, Google AI Studio). It dispatches the actual API calls to the relevant method based on the class of the skeleton argument.
gptstudio_response_process(skeleton, ...)
skeleton |
A gptstudio_response_skeleton object. |
... |
Extra arguments, not currently used |
A gptstudio_request_skeleton with updated history and the prompt removed
## Not run: gptstudio_response_process(gptstudio_skeleton) ## End(Not run)
This starts the chat app. It is exported so that it can be run from an R script.
gptstudio_run_chat_app(
  ide_colors = get_ide_theme_info(),
  code_theme_url = get_highlightjs_theme(),
  host = getOption("shiny.host", "127.0.0.1"),
  port = getOption("shiny.port")
)
ide_colors |
List containing the colors of the IDE theme. |
code_theme_url |
URL to the highlight.js theme |
host |
The IPv4 address that the application should listen on. Defaults to the shiny.host option ("127.0.0.1" if unset). |
port |
The TCP port that the application should listen on. Defaults to the shiny.port option. |
Nothing.
This function prints out the current configuration settings for gptstudio and checks API connections if verbose is TRUE.
gptstudio_sitrep(verbose = TRUE)
verbose |
Logical value indicating whether to output additional information, such as API connection checks. Defaults to TRUE. |
Invisibly returns NULL, as the primary purpose of this function is to print to the console.
## Not run:
gptstudio_sitrep(verbose = FALSE) # Print basic settings, no API checks
gptstudio_sitrep() # Print settings and check API connections
## End(Not run)
Construct a GPT Studio request skeleton.
gptstudio_skeleton_build(skeleton, skill, style, task, custom_prompt, ...)
skeleton |
A GPT Studio request skeleton object. |
skill |
The skill level of the user for the chat conversation. This can be set through the "gptstudio.skill" option. Default is the "gptstudio.skill" option. Options are "beginner", "intermediate", "advanced", and "genius". |
style |
The style of code to use. Applicable styles can be retrieved from the "gptstudio.code_style" option. Default is the "gptstudio.code_style" option. Options are "base", "tidyverse", or "no preference". |
task |
Specifies the task that the assistant will help with. Default is "coding". Others are "general", "advanced developer", and "custom". |
custom_prompt |
This is a custom prompt that may be used to guide the AI in its responses. Default is NULL. It will be the only content provided to the system prompt. |
... |
Additional arguments. |
An updated GPT Studio request skeleton.
Call this function as an RStudio addin to ask GPT to improve spelling and grammar of selected text.
gptstudio_spelling_grammar()
This function has no return value.
# Select some text in RStudio
# Then call the function as an RStudio addin
## Not run: gptstudio_spelling_grammar() ## End(Not run)
An audio clip input control that records short audio clips from the microphone
input_audio_clip(
  id,
  record_label = "Record",
  stop_label = "Stop",
  reset_on_record = TRUE,
  mime_type = NULL,
  audio_bits_per_second = NULL,
  show_mic_settings = TRUE,
  ...
)
id |
The input slot that will be used to access the value. |
record_label |
Display label for the "record" control, or NULL for no label. Default is 'Record'. |
stop_label |
Display label for the "stop" control, or NULL for no label. Default is 'Stop'. |
reset_on_record |
Whether to reset the audio clip input value when recording starts. If TRUE, the audio clip input value will become NULL at the moment the Record button is pressed; if FALSE, the value will not change until the user stops recording. Default is TRUE. |
mime_type |
The MIME type of the audio clip to record. By default, this is NULL, which means the browser will choose a suitable MIME type for audio recording. Common MIME types include 'audio/webm' and 'audio/mp4'. |
audio_bits_per_second |
The target audio bitrate in bits per second. By default, this is NULL, which means the browser will choose a suitable bitrate for audio recording. This is only a suggestion; the browser may choose a different bitrate. |
show_mic_settings |
Whether to show the microphone settings in the settings menu. Default is TRUE. |
... |
Additional parameters to pass to the underlying HTML tag. |
An audio clip input control that can be added to a UI definition.
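A UI usage sketch (the id and surrounding layout are illustrative):
## Not run:
ui <- shiny::fluidPage(
  input_audio_clip(
    id = "clip",
    record_label = "Record",
    stop_label = "Stop"
  )
)
## End(Not run)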
App Server
mod_app_server(id, ide_colors = get_ide_theme_info())
id |
id of the module |
ide_colors |
List containing the colors of the IDE theme. |
App UI
mod_app_ui(
  id,
  ide_colors = get_ide_theme_info(),
  code_theme_url = get_highlightjs_theme()
)
id |
id of the module |
ide_colors |
List containing the colors of the IDE theme. |
code_theme_url |
URL to the highlight.js theme |
Chat server
mod_chat_server(
  id,
  ide_colors = get_ide_theme_info(),
  translator = create_translator(),
  settings,
  history
)
id |
id of the module |
ide_colors |
List containing the colors of the IDE theme. |
translator |
Translator from shiny.i18n::Translator. |
settings, history |
Reactive values from the settings and history module. |
Chat UI
mod_chat_ui(
  id,
  translator = create_translator(),
  code_theme_url = get_highlightjs_theme()
)
id |
id of the module |
translator |
A Translator from shiny.i18n::Translator. |
code_theme_url |
URL to the highlight.js theme |
Create HTML dependency for multimodal component
multimodal_dep()
Generate text completions using OpenAI's API for Chat
openai_create_chat_completion(
  prompt = "<|endoftext|>",
  model = getOption("gptstudio.model"),
  openai_api_key = Sys.getenv("OPENAI_API_KEY"),
  task = "chat/completions"
)
prompt |
The prompt for generating completions |
model |
The model to use for generating text |
openai_api_key |
The API key for accessing OpenAI's API. By default, the function will try to use the OPENAI_API_KEY environment variable. |
task |
The task that specifies the API URL to use. Defaults to "chat/completions", which is required for chat models. |
A list with the generated completions and other information returned by the API.
## Not run:
openai_create_completion(
  model = "text-davinci-002",
  prompt = "Hello world!"
)
## End(Not run)
Stream handler for chat completions
R6 class that allows handling chat completions chunk by chunk. It also adds methods to retrieve relevant data. This class DOES NOT make the request. Because httr2::req_perform_stream blocks the R console until the stream finishes, this class can take a Shiny session object to handle communication with JS without resorting to a shiny::observe inside a module server.
SSEparser::SSEparser -> OpenaiStreamParser
shinySession
Holds the session provided at initialization.
user_prompt
The user_prompt provided at initialization, after being formatted with markdown.
value
The content of the stream. It updates constantly until the stream ends.
new()
Start a StreamHandler. It is recommended to assign it to the name stream_handler.
OpenaiStreamParser$new(session = NULL, user_prompt = NULL)
session
The shiny session it will send the message to (optional).
user_prompt
The prompt for the chat completion. Only to be displayed in an HTML tag containing the prompt. (Optional).
append_parsed_sse()
Overrides SSEparser$append_parsed_sse() so that it can send a custom message to a Shiny session, bypassing Shiny's reactivity.
OpenaiStreamParser$append_parsed_sse(parsed_event)
parsed_event
An already parsed server-sent event to append to the events field.
clone()
The objects of this class are cloneable with this method.
OpenaiStreamParser$clone(deep = FALSE)
deep
Whether to make a deep clone.
This function parses a data URI and returns the MIME type and decoded data.
parse_data_uri(data_uri)
data_uri |
A string. The data URI to parse. |
A list with two elements: 'mime_type' and 'data'.
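A usage sketch (the URI is illustrative; the exact type of the 'data' element is assumed from the description above):
## Not run:
parsed <- parse_data_uri("data:text/plain;base64,SGVsbG8=")
parsed$mime_type
parsed$data
## End(Not run)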
This function prepares the chat completion prompt to be sent to the OpenAI API. It also generates a system message according to the given parameters and inserts it at the beginning of the conversation.
prepare_chat_history(
  history = NULL,
  style = getOption("gptstudio.code_style"),
  skill = getOption("gptstudio.skill"),
  task = "coding",
  custom_prompt = NULL
)
history |
A list of previous messages in the conversation. This can include roles such as 'system', 'user', or 'assistant'. System messages are discarded. Default is NULL. |
style |
The style of code to use. Applicable styles can be retrieved from the "gptstudio.code_style" option. Default is the "gptstudio.code_style" option. Options are "base", "tidyverse", or "no preference". |
skill |
The skill level of the user for the chat conversation. This can be set through the "gptstudio.skill" option. Default is the "gptstudio.skill" option. Options are "beginner", "intermediate", "advanced", and "genius". |
task |
Specifies the task that the assistant will help with. Default is "coding". Others are "general", "advanced developer", and "custom". |
custom_prompt |
This is a custom prompt that may be used to guide the AI in its responses. Default is NULL. It will be the only content provided to the system prompt. |
A list where the first entry is an initial system message followed by any non-system entries from the chat history.
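A usage sketch (argument values are illustrative):
## Not run:
history <- list(
  list(role = "user", content = "Hello"),
  list(role = "assistant", content = "Hi! How can I help?")
)
prepare_chat_history(
  history = history,
  style = "tidyverse",
  skill = "beginner",
  task = "coding"
)
## End(Not run)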
A function that sends a request to the Anthropic API and returns the response.
query_api_anthropic(request_body, key = Sys.getenv("ANTHROPIC_API_KEY"))
request_body |
A list that contains the parameters for the task. |
key |
String containing an Anthropic API key. Defaults to the ANTHROPIC_API_KEY environmental variable if not specified. |
The response from the API.
This function sends a JSON post request to the Cohere Chat API, retries on failure up to three times, and returns the response. The function handles errors by providing a descriptive message and failing gracefully.
query_api_cohere(request_body, api_key = Sys.getenv("COHERE_API_KEY"))
request_body |
A list containing the body of the POST request. |
api_key |
String containing a Cohere API key. Defaults to the COHERE_API_KEY environmental variable if not specified. |
A parsed JSON object as the API response.
A function that sends a request to the Google AI Studio API and returns the response.
query_api_google(model, request_body, key = Sys.getenv("GOOGLE_API_KEY"))
model |
A character string that specifies the model to send to the API. |
request_body |
A list that contains the parameters for the task. |
key |
String containing a Google AI Studio API key. Defaults to the GOOGLE_API_KEY environmental variable if not specified. |
The response from the API.
A function that sends a request to the HuggingFace API and returns the response.
query_api_huggingface(task, request_body, token = Sys.getenv("HF_API_KEY"))
task |
A character string that specifies the task to send to the API. |
request_body |
A list that contains the parameters for the task. |
token |
String containing a HuggingFace API key. Defaults to the HF_API_KEY environmental variable if not specified. |
The response from the API.
A function that sends a request to the OpenAI API and returns the response.
query_api_openai(task, request_body, openai_api_key = Sys.getenv("OPENAI_API_KEY"))
task |
A character string that specifies the task to send to the API. |
request_body |
A list that contains the parameters for the task. |
openai_api_key |
String containing an OpenAI API key. Defaults to the OPENAI_API_KEY environmental variable if not specified. |
The response from the API.
This function sends a JSON post request to the Perplexity API, retries on failure up to three times, and returns the response. The function handles errors by providing a descriptive message and failing gracefully.
query_api_perplexity(request_body, api_key = Sys.getenv("PERPLEXITY_API_KEY"))
request_body |
A list containing the body of the POST request. |
api_key |
String containing a Perplexity API key. Defaults to the PERPLEXITY_API_KEY environmental variable if not specified. |
A parsed JSON object as the API response.
This function sends a request to a specific OpenAI API task endpoint at the base URL https://api.openai.com/v1, and authenticates with an API key using a Bearer token.
request_base(task, token = Sys.getenv("OPENAI_API_KEY"))
task |
character string specifying an OpenAI API endpoint task |
token |
String containing an OpenAI API key. Defaults to the OPENAI_API_KEY environmental variable if not specified. |
An httr2 request object
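Because the function returns an httr2 request object, it can be extended with the usual httr2 verbs. The body below is plain R that httr2 serializes to JSON; the chained call is shown as comments since it performs a network request, and the "chat/completions" task and "gpt-4o-mini" model are illustrative assumptions.

```r
# A JSON-serializable body for a chat completion request.
body <- list(
  model = "gpt-4o-mini",
  messages = list(list(role = "user", content = "Hello"))
)

# With OPENAI_API_KEY set, the request could be completed with httr2 verbs:
#   resp <- request_base("chat/completions") |>
#     httr2::req_body_json(body) |>
#     httr2::req_perform()
#   httr2::resp_body_json(resp)
```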
This function sends a request to the Anthropic API endpoint and authenticates with an API key.
request_base_anthropic(key = Sys.getenv("ANTHROPIC_API_KEY"))
key |
String containing an Anthropic API key. Defaults to the ANTHROPIC_API_KEY environmental variable if not specified. |
An httr2 request object
This function sets up a POST request to the Cohere Chat API's chat endpoint and includes necessary headers such as 'accept', 'content-type', and 'Authorization' with a bearer token.
request_base_cohere(api_key = Sys.getenv("COHERE_API_KEY"))
api_key |
String containing a Cohere API key. Defaults to the COHERE_API_KEY environment variable if not specified. |
An httr2 request object pre-configured with the API endpoint and required headers.
This function sends a request to a specific Google AI Studio API endpoint and authenticates with an API key.
request_base_google(model, key = Sys.getenv("GOOGLE_API_KEY"))
model |
character string specifying a Google AI Studio API model |
key |
String containing a Google AI Studio API key. Defaults to the GOOGLE_API_KEY environmental variable if not specified. |
An httr2 request object
This function sends a request to a specific HuggingFace API endpoint and authenticates with an API key using a Bearer token.
request_base_huggingface(task, token = Sys.getenv("HF_API_KEY"))
task |
character string specifying a HuggingFace API endpoint task |
token |
String containing a HuggingFace API key. Defaults to the HF_API_KEY environmental variable if not specified. |
An httr2 request object
This function sets up a POST request to the Perplexity API's chat/completions endpoint and includes necessary headers such as 'accept', 'content-type', and 'Authorization' with a bearer token.
request_base_perplexity(api_key = Sys.getenv("PERPLEXITY_API_KEY"))
api_key |
String containing a Perplexity API key. Defaults to the PERPLEXITY_API_KEY environment variable if not specified. |
An httr2 request object pre-configured with the API endpoint and required headers.
Convert an RGB string to a hex color
rgb_str_to_hex(rgb_string)
rgb_string |
The RGB string to convert, as returned by the IDE theme info. |
A hex color string.
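The conversion can be sketched in base R. This is an illustrative reimplementation of the documented behavior, not the package's code.

```r
# Sketch: pull the three channel values out of an "rgb(r, g, b)" string and
# format them as a "#RRGGBB" hex color.
rgb_str_to_hex_sketch <- function(rgb_string) {
  nums <- as.integer(regmatches(rgb_string, gregexpr("[0-9]+", rgb_string))[[1]])
  sprintf("#%02X%02X%02X", nums[1], nums[2], nums[3])
}

rgb_str_to_hex_sketch("rgb(98, 101, 106)")  # "#62656A"
```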
stream_chat_completion sends the prepared chat completion request to the OpenAI API and retrieves the streamed response.
stream_chat_completion(
  messages = list(list(role = "user", content = "Hi there!")),
  element_callback = openai_handler,
  model = "gpt-4o-mini",
  openai_api_key = Sys.getenv("OPENAI_API_KEY")
)
messages |
A list of messages in the conversation, including the current user prompt (optional). |
element_callback |
A callback function to handle each element of the streamed response (optional). |
model |
A character string specifying the model to use for chat completion. The default model is "gpt-4o-mini". |
openai_api_key |
A character string of the OpenAI API key. By default, it is fetched from the "OPENAI_API_KEY" environment variable. Please note that the OpenAI API key is sensitive information and should be treated accordingly. |
The same as httr2::req_perform_stream
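The element_callback receives each streamed chunk as it arrives, so a common pattern is to accumulate chunks into shared state. The accumulator below is plain R; the streaming call itself is shown as a comment because it requires an API key, and the prompt text is illustrative.

```r
# Accumulate streamed chunks into an environment (environments are mutable,
# so the callback can update state across invocations).
collected <- new.env()
collected$text <- ""
accumulate_chunk <- function(chunk) {
  collected$text <- paste0(collected$text, chunk)
  invisible(NULL)
}

# Simulate two streamed chunks arriving:
accumulate_chunk("Once upon ")
accumulate_chunk("a time.")

# With OPENAI_API_KEY set, the real streamed call would be:
#   stream_chat_completion(
#     messages = list(list(role = "user", content = "Tell a one-line story.")),
#     element_callback = accumulate_chunk
#   )
```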
Places an invisible empty chat message that will hold a streaming message. It can be reset dynamically inside a Shiny app.
streamingMessage(
  ide_colors = get_ide_theme_info(),
  width = NULL,
  height = NULL,
  element_id = NULL
)
ide_colors |
List containing the colors of the IDE theme. |
width, height |
Must be a valid CSS unit (like "100%", "400px", "auto") or a number, which will be coerced to a string and have "px" appended. |
element_id |
The element's id |
Output and render functions for using streamingMessage within Shiny applications and interactive Rmd documents.
streamingMessageOutput(outputId, width = "100%", height = NULL)

renderStreamingMessage(expr, env = parent.frame(), quoted = FALSE)
outputId |
output variable to read from |
width, height |
Must be a valid CSS unit (like "100%", "400px", "auto") or a number, which will be coerced to a string and have "px" appended. |
expr |
An expression that generates a streamingMessage |
env |
The environment in which to evaluate expr. |
quoted |
Is expr a quoted expression (with quote())? |
This function processes the chat history, filters out system messages, and formats the remaining messages with appropriate styling.
style_chat_history(history, ide_colors = get_ide_theme_info())
history |
A list of chat messages with elements containing 'role' and 'content'. |
ide_colors |
List containing the colors of the IDE theme. |
A list of formatted chat messages with styling applied, excluding system messages.
chat_history_example <- list(
  list(role = "user", content = "Hello, World!"),
  list(role = "system", content = "System message"),
  list(role = "assistant", content = "Hi, how can I help?")
)

## Not run: 
style_chat_history(chat_history_example)
## End(Not run)
Style a message based on the role of its author.
style_chat_message(message, ide_colors = get_ide_theme_info())
message |
A chat message. |
ide_colors |
List containing the colors of the IDE theme. |
An HTML element.
Modified version of textAreaInput() that removes the label container. It is used in mod_prompt_ui().
text_area_input_wrapper(
  inputId,
  label,
  value = "",
  width = NULL,
  height = NULL,
  cols = NULL,
  rows = NULL,
  placeholder = NULL,
  resize = NULL,
  textarea_class = NULL
)
inputId |
The input slot that will be used to access the value. |
label |
Display label for the control, or NULL for no label. |
value |
Initial value. |
width |
The width of the input, e.g. "400px" or "100%". |
height |
The height of the input, e.g. "400px" or "100%". |
cols |
Value of the visible character columns of the input, e.g. 80. |
rows |
The value of the visible character rows of the input, e.g. 6. |
placeholder |
A character string giving the user a hint as to what can be entered into the control. Internet Explorer 8 and 9 do not support this option. |
resize |
Which directions the textarea box can be resized. Can be one of "both", "horizontal", "vertical", or "none". The default (NULL) uses the browser's default. |
textarea_class |
Class to be applied to the textarea element |
A modified textAreaInput
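A sketch of how the wrapper might be used inside a chat UI; the input id, placeholder text, and CSS class are illustrative.

```r
## Not run: 
text_area_input_wrapper(
  inputId = "chat_input",
  label = NULL,
  placeholder = "Ask a question...",
  rows = 3,
  resize = "vertical",
  textarea_class = "chat-prompt-textarea"
)
## End(Not run)
```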
This function takes an audio file in data URI format, converts it to WAV, and sends it to OpenAI's transcription API to get the transcribed text.
transcribe_audio(audio_input, api_key = Sys.getenv("OPENAI_API_KEY"))
audio_input |
A string. The audio data in data URI format. |
api_key |
A string. Your OpenAI API key. Defaults to the OPENAI_API_KEY environment variable. |
A string containing the transcribed text.
## Not run: 
audio_uri <- "data:audio/webm;base64,SGVsbG8gV29ybGQ="  # Example data URI
transcription <- transcribe_audio(audio_uri)
print(transcription)
## End(Not run)
HTML widget for showing a welcome message in the chat app. It was created so the message can be bound to a Shiny event and re-rendered on demand.
welcomeMessage(
  ide_colors = get_ide_theme_info(),
  translator = create_translator(),
  width = NULL,
  height = NULL,
  element_id = NULL
)
ide_colors |
List containing the colors of the IDE theme. |
translator |
A Translator from shiny.i18n::Translator. |
width, height |
Must be a valid CSS unit (like "100%", "400px", "auto") or a number, which will be coerced to a string and have "px" appended. |
element_id |
The element's id |
Output and render functions for using welcomeMessage within Shiny applications and interactive Rmd documents.
welcomeMessageOutput(outputId, width = "100%", height = NULL)

renderWelcomeMessage(expr, env = parent.frame(), quoted = FALSE)
outputId |
output variable to read from |
width, height |
Must be a valid CSS unit (like "100%", "400px", "auto") or a number, which will be coerced to a string and have "px" appended. |
expr |
An expression that generates a welcomeMessage |
env |
The environment in which to evaluate expr. |
quoted |
Is expr a quoted expression (with quote())? |
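The output/render pair follows the standard htmlwidgets pattern, so a minimal Shiny app might look like the following sketch (the output id is illustrative).

```r
## Not run: 
library(shiny)

ui <- fluidPage(
  welcomeMessageOutput("welcome")
)

server <- function(input, output, session) {
  output$welcome <- renderWelcomeMessage(welcomeMessage())
}

shinyApp(ui, server)
## End(Not run)
```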