OpenAI / Chat

appKey: openai
actionKey: chat

Action Inputs

The following parameters are available when configuring this action:
modelId
Model Id
The ID of the model to use for chat completions
string
userMessage
User Message
The user message provides instructions to the assistant. It can be generated by the end users of an application, or set by a developer as an instruction.
string
frequencyPenalty
Frequency Penalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
string | undefined
images
Images
<p>Provide one or more images to <a href="https://platform.openai.com/docs/guides/vision">OpenAI's vision model</a>. Accepts URLs or base64 encoded strings. Compatible with the <code>gpt-4-vision-preview</code> model.</p>
string | undefined
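As a sketch of how image inputs reach the API, each image becomes an <code>image_url</code> content part alongside the text prompt in a single user message (the URL below is a placeholder, not a real resource):

```python
# Sketch: build a vision-style user message from a text prompt plus image URLs.
def build_vision_message(user_text, image_urls):
    """Combine a text prompt and image URLs into one chat message."""
    content = [{"type": "text", "text": user_text}]
    for url in image_urls:
        content.append({"type": "image_url", "image_url": {"url": url}})
    return {"role": "user", "content": content}

msg = build_vision_message("What is in this picture?",
                           ["https://example.com/cat.png"])
```

Base64-encoded images would be passed the same way, as `data:` URLs in the `url` field.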
maxTokens
Max Tokens
<p>The maximum number of <a href="https://beta.openai.com/tokenizer">tokens</a> to generate in the completion.</p>
string | undefined
messages
Messages
<p><strong>Advanced</strong>. Because <a href="https://platform.openai.com/docs/guides/chat/introduction">the models have no memory of past chat requests</a>, all relevant information must be supplied via the conversation. You can provide <a href="https://platform.openai.com/docs/guides/chat/introduction">an array of messages</a> from prior conversations here. If this param is set, the action ignores the values passed to <strong>System Instructions</strong> and <strong>Assistant Response</strong>, appends the new <strong>User Message</strong> to the end of this array, and sends it to the API.</p>
string | undefined
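For illustration, a prior-conversation array uses the standard role/content shape, and the action appends the new <strong>User Message</strong> to its end — a minimal sketch of that behavior:

```python
# Sketch: prior conversation supplied via the Messages param.
prior_messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "Paris."},
]

def append_user_message(messages, user_message):
    """Mimic the action: append the new User Message to the supplied array."""
    return messages + [{"role": "user", "content": user_message}]

conversation = append_user_message(prior_messages, "And of Germany?")
```

The resulting array is what gets sent to the API; **System Instructions** and **Assistant Response** are ignored when this param is set, as noted above.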
n
N
How many completions to generate for each prompt.
string | undefined
presencePenalty
Presence Penalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
string | undefined
responseFormat
Response Format
<p>Specify the format that the model must output. <a href="https://platform.openai.com/docs/api-reference/chat/create#chat-create-response_format">Setting to <code>json_object</code> guarantees the message the model generates is valid JSON</a>. Defaults to <code>text</code>.</p>
string | undefined
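In the underlying Chat Completions request, <code>response_format</code> is an object; a sketch of a request body enforcing JSON output (model name and prompts are illustrative — note the API requires the word "JSON" to appear in the prompt when using <code>json_object</code>):

```python
# Sketch: request body with response_format set to json_object.
request_body = {
    "model": "gpt-3.5-turbo-1106",
    "response_format": {"type": "json_object"},
    "messages": [
        {"role": "system", "content": "Reply in JSON."},
        {"role": "user", "content": "List three primary colors."},
    ],
}
```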
stop
Stop
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
string | undefined
systemInstructions
System Instructions
<p>The system message helps set the behavior of the assistant. For example: "You are a helpful assistant." <a href="https://platform.openai.com/docs/guides/chat/instructing-chat-models">See these docs</a> for tips on writing good instructions.</p>
string | undefined
temperature
Temperature
<p><strong>Optional</strong>. What <a href="https://towardsdatascience.com/how-to-sample-from-language-models-682bceb97277">sampling temperature</a> to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.</p>
string | undefined
topP
Top P
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or <code>temperature</code> but not both.
string | undefined
user
User
<p>A unique identifier representing your end-user, which can help OpenAI monitor and detect abuse. <a href="https://platform.openai.com/docs/guides/safety-best-practices/end-user-ids">Learn more here</a>.</p>
string | undefined
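Putting several of the inputs above together: the action's parameters map onto a single Chat Completions request body. This is a sketch of that mapping, not the action's actual implementation; since the numeric inputs are typed as strings above, they are cast before sending:

```python
# Sketch: map this action's inputs onto a Chat Completions request body.
# Numeric inputs arrive as strings (see the types above), so cast them first.
def build_request(model_id, user_message, system_instructions=None,
                  temperature=None, max_tokens=None, n=None):
    messages = []
    if system_instructions:
        messages.append({"role": "system", "content": system_instructions})
    messages.append({"role": "user", "content": user_message})
    body = {"model": model_id, "messages": messages}
    if temperature is not None:
        body["temperature"] = float(temperature)
    if max_tokens is not None:
        body["max_tokens"] = int(max_tokens)
    if n is not None:
        body["n"] = int(n)
    return body

req = build_request("gpt-4", "Hello!",
                    system_instructions="You are a helpful assistant.",
                    temperature="0.9", max_tokens="256")
```

The resulting dict is the JSON body a client would POST to the `/v1/chat/completions` endpoint.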