OpenAI - GPT

GPTClient

The GPTClient class provides functionality to interact with the OpenAI GPT API. It offers methods for building message histories, submitting both synchronous and asynchronous chat and vision requests (including streaming responses), determining model versions and their context window sizes, and retrieving available models. Additionally, it defines several public constants representing chat roles and supported model identifiers.

A quick example

Here we'll create a new client with our API key, then create and submit a request to o3-mini. We can set the reasoning effort, the message, and even force json_schema mode with a class.

You can see how easy it is to operate and integrate with OpenAI!

var gptClient = new GPTClient(ThreadRegistry.Instance.GetValue<string>("AppSettings:OAIKey"));
var gptResponse = gptClient.SubmitRequest(new GPTClient.Request()
{
    MaxTokens = 90_000,
    Model = "o3-mini",
    ReasoningEffort = "high",
    user = "zach@perigee.software",
    Messages = new List<GPTClient.Message>()
    {
        new GPTClient.Message()
        {
            Role = GPTClient.ROLE_USER,
            Content = $@"User Content here"
        }
    }
}.WithJsonSchema(typeof(DocumentFormat)));
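
The DocumentFormat type passed to WithJsonSchema is an ordinary class describing the JSON shape you want the model to return. A minimal sketch — the property names here are illustrative, not part of the library:

public class DocumentFormat
{
    // Illustrative properties; define whatever structure
    // you want the model's JSON response to follow.
    public string Title { get; set; }
    public string Summary { get; set; }
    public List<string> Keywords { get; set; }
}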

Properties

ApiKey

Gets the API key used for authenticating requests with the OpenAI API.

ROLE_USER

Represents the role for a user in a chat conversation.

ROLE_ASSISTANT

Represents the role for the assistant in a chat conversation.

ROLE_SYSTEM (deprecated in favor of ROLE_DEVELOPER)

Represents the system role in a chat conversation.

ROLE_DEVELOPER

Represents the developer role in a chat conversation.
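
These role constants are used when assembling a message list by hand. A brief sketch (the message content is illustrative):

var messages = new List<GPTClient.Message>()
{
    // Developer message sets instructions (replaces the deprecated system role)
    new GPTClient.Message() { Role = GPTClient.ROLE_DEVELOPER, Content = "You are a concise assistant." },

    // User message carries the actual prompt
    new GPTClient.Message() { Role = GPTClient.ROLE_USER, Content = "Summarize this document." }
};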

Methods

BuildMessageWithHistory

Builds a list of messages by prioritizing the user message, then appending historical messages as long as they do not exceed the specified context window token size. A caller-provided token-count function ensures the combined messages fit within the allowed token budget. The method returns a tuple containing: the list of messages; the number of tokens used (the initial context window size minus the remaining tokens); and the total tokens counted across all messages.

Parameters:

  • Func<string, int> Count: A function that returns the token count for a given string.

  • string userMessage: The current user message to include.

  • string systemMessage: An optional system message to include.

  • int ContextWindowTokenSize (default: 128000): The maximum allowed token count.

  • IEnumerable<Message> MessageHistory: An optional collection of historical messages (ordered from oldest to newest).

Example:

// Define a simple token counting function (for example, counting words)
int TokenCount(string s) => s.Split(' ').Length;

var historyMessages = new List<Message>
{
    new Message(GPTClient.ROLE_USER, "This is a previous conversation message.")
};

var result = GPTClient.BuildMessageWithHistory(
    TokenCount,
    "Hello, how are you?",
    "System instructions for context.",
    128000,
    historyMessages
);

// result.Item1: List<Message> containing the constructed messages
// result.Item2: Number of tokens used (ContextWindowTokenSize - remaining tokens)
// result.Item3: Total tokens counted from the messages

GetGPTVersion

Determines the version of a GPT model based on its identifier string and sets the corresponding context window token size. The method compares the input gptModel against known model names and patterns and returns the model version as a decimal value while outputting the appropriate context window size.

Parameters:

  • string gptModel: The GPT model identifier (e.g., "gpt-3.5-turbo", "gpt-4-turbo").

  • out int ContextWindowSize: An output parameter that will be set to the context window token size associated with the model.

Example:

int contextSize;
decimal version = GPTClient.GetGPTVersion("gpt-3.5-turbo", out contextSize);
Console.WriteLine($"Model Version: {version}, Context Window Size: {contextSize}");

SubmitRequestAsync

Submits a chat request asynchronously to the OpenAI API. This method sends the provided Request object as a POST request and awaits a response. On success, it sets the model on the returned ChatCompletion object and associates the original request with the response.

Parameters:

  • Request request: The request object containing the chat parameters.

  • CancellationToken? cancelToken (optional): A cancellation token to cancel the request if needed.

Example:

RestResponse<ChatCompletion> response = await gptClient.SubmitRequestAsync(request, cancellationToken);
if (response?.Data != null)
{
    Console.WriteLine(response.Data.Choices.FirstOrDefault()?.Message.Content);
}

SubmitRequest

Submits a chat request synchronously to the OpenAI API. It sends the provided Request object as a POST request and returns a RestResponse containing the chat completion result. The response’s model is set based on the request.

Parameters:

  • Request request: The request object containing the chat parameters.

  • CancellationToken? cancelToken (optional): A cancellation token to cancel the request if needed.

Example:

RestResponse<ChatCompletion> response = gptClient.SubmitRequest(request, cancellationToken);
if (response?.Data != null)
{
    Console.WriteLine(response.Data.Choices.FirstOrDefault()?.Message.Content);
}

SubmitVisionRequestAsync

Submits a vision request asynchronously to the OpenAI API. This method accepts a GPTVisionPayload.Request object containing vision-specific parameters and sends it as a POST request. If successful, the returned ChatCompletion object will have its model property set from the request.

Parameters:

  • GPTVisionPayload.Request request: The vision request object.

  • CancellationToken? cancelToken (optional): A cancellation token to cancel the request if needed.

Example:

RestResponse<ChatCompletion> visionResponse = await gptClient.SubmitVisionRequestAsync(visionRequest, cancellationToken);
if (visionResponse?.Data != null)
{
    Console.WriteLine(visionResponse.Data.Choices.FirstOrDefault()?.Message.Content);
}

SubmitVisionRequest

Submits a vision request synchronously to the OpenAI API. It accepts a GPTVisionPayload.Request object and returns a RestResponse containing the chat completion result, with the model property set based on the request.

Parameters:

  • GPTVisionPayload.Request request: The vision request object.

Example:

RestResponse<ChatCompletion> visionResponse = gptClient.SubmitVisionRequest(visionRequest);
if (visionResponse?.Data != null)
{
    Console.WriteLine(visionResponse.Data.Choices.FirstOrDefault()?.Message.Content);
}

SubmitStreamRequest

Submits a chat request that streams results back from the OpenAI API. This method enables streaming by setting req.Stream to true, and it processes the incoming stream by invoking a provided callback (res) with each new token received. The method aggregates the response into a complete ChatCompletion object that is returned once streaming is finished.

Parameters:

  • Request req: The chat request object with streaming enabled.

  • Action<string> res: A callback action invoked with each new token as it is received.

  • CancellationToken? cancelToken (optional): A cancellation token to cancel the streaming operation if needed.

Example:

ChatCompletion completion = await gptClient.SubmitStreamRequest(
    request,
    token => Console.Write(token),
    cancellationToken
);
Console.WriteLine("Streaming complete.");
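
If you need the partial text as it arrives (for example, to update a UI) as well as the final result, you can accumulate tokens inside the callback. A sketch using the same request object:

var builder = new System.Text.StringBuilder();
ChatCompletion completion = await gptClient.SubmitStreamRequest(
    request,
    token =>
    {
        builder.Append(token);   // accumulate the partial response
        Console.Write(token);    // and echo it as it streams in
    },
    cancellationToken
);
// builder.ToString() now holds the full streamed text,
// and completion is the aggregated ChatCompletion object.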

GetModels

Retrieves the list of available models from the OpenAI API. This method sends a GET request to the models endpoint and returns a RestResponse containing a ModelResponse object with the model information.

Example:

RestResponse<ModelResponse> modelsResponse = gptClient.GetModels();
if (modelsResponse?.Data != null)
{
    foreach (var model in modelsResponse.Data.Models)
    {
        Console.WriteLine(model.Id);
    }
}
