OpenAI - GPT
GPTClient
The GPTClient class provides functionality to interact with the OpenAI GPT API. It offers methods for building message histories, submitting both synchronous and asynchronous chat and vision requests (including streaming responses), determining model versions and their context window sizes, and retrieving available models. Additionally, it defines several public constants representing chat roles and supported model identifiers.
A quick example
Here we'll create a new client with our API key, then create and submit a request to o3-mini. We can set the reasoning effort and the message, and even force json_schema mode with a class. You can see how easy it is to operate and integrate with OpenAI!
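The walkthrough above might look like the following sketch. The constructor signature and the Request property names (Model, ReasoningEffort, Messages) are assumptions based on typical usage and may differ from the actual library; the json_schema helper shown is hypothetical.

```csharp
// Minimal sketch. The GPTClient constructor and the Request property
// names (Model, ReasoningEffort, Messages) are assumptions; check the
// library for the exact members.
var client = new GPTClient("your-api-key");

var request = new Request
{
    Model = "o3-mini",
    ReasoningEffort = "high",            // assumed property name
    Messages = new List<Message>
    {
        new Message
        {
            Role = GPTClient.ROLE_USER,
            Content = "List three uses for a paperclip."
        }
    }
};

// Hypothetical helper that derives a json_schema response format from a class:
// request.SetResponseFormat<PaperclipIdeas>();

var response = client.SubmitRequest(request);
Console.WriteLine(response.Data?.Choices?[0].Message.Content);  // assumed member names
```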
Properties
ApiKey
Gets the API key used for authenticating requests with the OpenAI API.
ROLE_USER
Represents the role for a user in a chat conversation.
ROLE_ASSISTANT
Represents the role for the assistant in a chat conversation.
ROLE_SYSTEM (deprecated in favor of ROLE_DEVELOPER)
Represents the system role in a chat conversation.
ROLE_DEVELOPER
Represents the developer role in a chat conversation.
Methods
BuildMessageWithHistory
Builds a list of messages by prioritizing the user message and then appending historical messages as long as they do not exceed the specified context window token size. This method uses a provided token count function to ensure that the combined messages fit within the allowed token budget. It returns a tuple containing the list of messages, the number of tokens used (calculated as the difference between the initial context window size and the remaining tokens), and the total tokens counted from all messages.
Parameters:
Func<string, int> Count: A function that returns the token count for a given string.
string userMessage: The current user message to include.
string systemMessage: An optional system message to include.
int ContextWindowTokenSize (default: 128000): The maximum allowed token count.
IEnumerable<Message> MessageHistory: An optional collection of historical messages (ordered from oldest to newest).
Example:
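Usage might look like the following sketch, given an existing GPTClient instance (client). The Message initializer and the tuple element names are assumptions; in practice the Count delegate would wrap a real tokenizer rather than the rough estimate shown.

```csharp
// Rough character-based token estimate, for illustration only;
// a real Count delegate should wrap an actual tokenizer.
Func<string, int> count = text => text.Length / 4;

var history = new List<Message>
{
    new Message { Role = GPTClient.ROLE_USER, Content = "What does this client do?" },
    new Message { Role = GPTClient.ROLE_ASSISTANT, Content = "It wraps the OpenAI GPT API." }
};

// Tuple element names (messages, tokensUsed, totalTokens) are assumptions.
var (messages, tokensUsed, totalTokens) = client.BuildMessageWithHistory(
    count,
    "Show me a quick example.",          // userMessage
    "You are a concise assistant.",      // systemMessage
    128000,                              // ContextWindowTokenSize
    history);                            // MessageHistory, oldest to newest
```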
GetGPTVersion
Determines the version of a GPT model based on its identifier string and sets the corresponding context window token size. The method compares the input gptModel against known model names and patterns and returns the model version as a decimal value while outputting the appropriate context window size.
Parameters:
string gptModel: The GPT model identifier (e.g., "gpt-3.5-turbo", "gpt-4-turbo").
out int ContextWindowSize: An output parameter that will be set to the context window token size associated with the model.
Example:
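A sketch of a call, given an existing GPTClient instance (client); the exact decimal value returned for each model is an assumption.

```csharp
// Sketch; the decimal version returned per model is an assumption.
decimal version = client.GetGPTVersion("gpt-4-turbo", out int contextWindowSize);
Console.WriteLine($"gpt-4-turbo -> version {version}, context window {contextWindowSize} tokens");
```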
SubmitRequestAsync
Submits a chat request asynchronously to the OpenAI API. This method sends the provided Request object as a POST request and awaits a response. On success, it sets the model on the returned ChatCompletion object and associates the original request with the response.
Parameters:
Request request: The request object containing the chat parameters.
CancellationToken? cancelToken (optional): A cancellation token to cancel the request if needed.
Example:
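A sketch of an async call, given an existing GPTClient instance (client) and a populated Request (request). The RestResponse&lt;ChatCompletion&gt; return type and the Choices/Message/Content member names are assumptions.

```csharp
// Sketch; assumes the call returns a RestResponse<ChatCompletion> and
// that ChatCompletion exposes Choices[0].Message.Content.
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));

var response = await client.SubmitRequestAsync(request, cts.Token);
if (response.IsSuccessful && response.Data is ChatCompletion completion)
{
    Console.WriteLine(completion.Choices[0].Message.Content);
}
```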
SubmitRequest
Submits a chat request synchronously to the OpenAI API. It sends the provided Request object as a POST request and returns a RestResponse containing the chat completion result. The response's model is set based on the request.
Parameters:
Request request: The request object containing the chat parameters.
CancellationToken? cancelToken (optional): A cancellation token to cancel the request if needed.
Example:
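A sketch of the synchronous variant, given an existing GPTClient instance (client) and a populated Request (request); the response member names are assumptions.

```csharp
// Sketch; blocks until the API responds.
var response = client.SubmitRequest(request);
if (response.IsSuccessful)
{
    Console.WriteLine(response.Data.Choices[0].Message.Content);  // assumed member names
}
```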
SubmitVisionRequestAsync
Submits a vision request asynchronously to the OpenAI API. This method accepts a GPTVisionPayload.Request object containing vision-specific parameters and sends it as a POST request. If successful, the returned ChatCompletion object will have its model property set from the request.
Parameters:
GPTVisionPayload.Request request: The vision request object.
CancellationToken? cancelToken (optional): A cancellation token to cancel the request if needed.
Example:
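A sketch of an async vision call, given an existing GPTClient instance (client). The GPTVisionPayload.Request members shown are assumptions modeled on the OpenAI vision message format.

```csharp
// Sketch; the GPTVisionPayload.Request members (Model, vision messages)
// are assumptions based on the OpenAI vision message format.
var visionRequest = new GPTVisionPayload.Request
{
    Model = "gpt-4o"
    // populate vision messages (text plus an image URL) here
};

var response = await client.SubmitVisionRequestAsync(visionRequest);
if (response.IsSuccessful)
{
    Console.WriteLine(response.Data.Choices[0].Message.Content);  // assumed member names
}
```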
SubmitVisionRequest
Submits a vision request synchronously to the OpenAI API. It accepts a GPTVisionPayload.Request object and returns a RestResponse containing the chat completion result, with the model property set based on the request.
Parameters:
GPTVisionPayload.Request request: The vision request object.
Example:
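A sketch of the synchronous vision call, given an existing GPTClient instance (client); the request and response member names are assumptions.

```csharp
// Sketch; the GPTVisionPayload.Request members are assumptions.
var visionRequest = new GPTVisionPayload.Request
{
    Model = "gpt-4o"
    // populate vision messages (text plus an image URL) here
};

var response = client.SubmitVisionRequest(visionRequest);
Console.WriteLine(response.Data?.Choices?[0].Message.Content);  // assumed member names
```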
SubmitStreamRequest
Submits a chat request that streams results back from the OpenAI API. This method enables streaming by setting req.Stream to true, and it processes the incoming stream by invoking a provided callback (res) with each new token received. The method aggregates the response into a complete ChatCompletion object that is returned once streaming is finished.
Parameters:
Request req: The chat request object with streaming enabled.
Action<string> res: A callback action invoked with each new token as it is received.
CancellationToken? cancelToken (optional): A cancellation token to cancel the streaming operation if needed.
Example:
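A sketch of a streaming call, given an existing GPTClient instance (client) and a populated Request (request). Per the description above, the method sets Stream itself and returns the aggregated completion; the Model member name is an assumption.

```csharp
// Sketch; prints each streamed token as it arrives, then uses the
// aggregated ChatCompletion returned when streaming finishes.
var completion = client.SubmitStreamRequest(
    request,
    token => Console.Write(token));   // res: called once per streamed token

Console.WriteLine();
Console.WriteLine($"Finished; model: {completion.Model}");  // assumed member name
```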
GetModels
Retrieves the list of available models from the OpenAI API. This method sends a GET request to the models endpoint and returns a RestResponse containing a ModelResponse object with the models information.
Example:
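A sketch of listing models, given an existing GPTClient instance (client). The ModelResponse shape (a Data collection of items with an Id) is an assumption mirroring the OpenAI /models response format.

```csharp
// Sketch; assumes ModelResponse exposes a Data collection whose items
// have an Id, mirroring the OpenAI /models response shape.
var response = client.GetModels();
if (response.IsSuccessful)
{
    foreach (var model in response.Data.Data)
        Console.WriteLine(model.Id);
}
```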