R/api_openai_completions.R
api_openai_chat_completions.Rd
Interacts with the OpenAI API to obtain chat completions from a GPT model. The function supports various customization parameters for the request. Additionally, it re-calls the API when the returned 'message.content' is not well-formed JSON, making the interaction more robust for downstream parsing.
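The retry-on-invalid-JSON behavior can be sketched as follows. This is an illustration, not the package's actual implementation; `call_api` is a hypothetical stand-in for one request to the API, and `jsonlite::validate()` is used to check well-formedness.

```r
# Sketch (assumption, not the package's code): re-issue the request
# until message.content parses as valid JSON, up to a few attempts.
retry_for_valid_json <- function(call_api, max_attempts = 3) {
  for (attempt in seq_len(max_attempts)) {
    content <- call_api()
    # jsonlite::validate() returns TRUE for well-formed JSON text
    if (jsonlite::validate(content)) {
      return(content)
    }
  }
  stop("No valid JSON returned after ", max_attempts, " attempts")
}
```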
api_openai_chat_completions(
model = "gpt-3.5-turbo",
system_message = "",
user_message = "",
temperature = 1,
top_p = 1,
n = 1,
stream = FALSE,
stop = NULL,
max_tokens = NULL,
presence_penalty = 0,
frequency_penalty = 0,
logit_bias = NULL,
user = NULL,
openai_api_key = Sys.getenv("OPENAI_API_KEY"),
openai_organization = NULL,
is_json_output = TRUE
)
model: The model to use; defaults to "gpt-3.5-turbo".
system_message: The system prompt.
user_message: The user prompt.
temperature: Controls randomness of generation; default 1.
top_p: Nucleus sampling parameter controlling diversity of generation; default 1.
n: Number of completions to generate; default 1.
stream: If TRUE, returns a stream of partial responses; default FALSE.
stop: Sequence of tokens at which generation will stop.
max_tokens: The maximum number of tokens to generate; NULL for no explicit limit.
presence_penalty: Penalizes tokens already present in the text, increasing the likelihood of new topics; default 0.
frequency_penalty: Penalizes tokens by how often they have already appeared, decreasing repetition; default 0.
logit_bias: A named list of biases to apply to token logits.
user: An identifier for the end user, if applicable.
openai_api_key: The API key for OpenAI; defaults to the environment variable OPENAI_API_KEY.
openai_organization: Optional organization identifier for the API.
is_json_output: If TRUE, re-calls the API until the returned 'message.content' is valid JSON; default TRUE.
Returns the API response as a string; when is_json_output is TRUE, the returned string is valid JSON.
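A minimal usage sketch, assuming a valid OPENAI_API_KEY is set in the environment (the response text is illustrative, not guaranteed):

```r
# Requires Sys.getenv("OPENAI_API_KEY") to be set.
response <- api_openai_chat_completions(
  model = "gpt-3.5-turbo",
  system_message = "You are a helpful assistant that replies in JSON.",
  user_message = "List three primary colors as a JSON array.",
  temperature = 0,
  is_json_output = TRUE
)
# With is_json_output = TRUE, `response` can be parsed directly:
parsed <- jsonlite::fromJSON(response)
```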