OpenAI - GPT

GPTClient

The GPTClient class provides functionality to interact with the OpenAI GPT API. It offers methods for building message histories, submitting both synchronous and asynchronous chat and vision requests (including streaming responses), determining model versions and their context window sizes, and retrieving available models. Additionally, it defines several public constants representing chat roles and supported model identifiers.

A quick example

Here we'll create a new client with our API key, then build and submit a request to o3-mini. We can set the reasoning effort and the message, and even force json_schema mode with a class.

You can see how easy it is to operate and integrate with OpenAI!

//Create a client using the API key from configuration
var gptClient = new GPTClient(ThreadRegistry.Instance.GetValue<string>("AppSettings:OAIKey"));

//Build and submit a request to o3-mini, forcing a json_schema response via a class
var GPTResponse = gptClient.SubmitRequest(new GPTClient.Request()
{
    MaxTokens = 90_000,
    Model = "o3-mini",
    ReasoningEffort = "high",
    user = "zach@perigee.software",
    Messages = new List<GPTClient.Message>()
    {
        new GPTClient.Message()
        {
            Role = GPTClient.ROLE_USER,
            Content = "User content here"
        }
    }
}.WithJsonSchema(typeof(DocumentFormat)));
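
Note that DocumentFormat above is your own type; WithJsonSchema accepts any class whose shape describes the JSON you want back. A hypothetical minimal example:

//Hypothetical response shape; the property names become the schema keys
public class DocumentFormat
{
    public string Title { get; set; }
    public string Summary { get; set; }
    public List<string> Keywords { get; set; }
}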

Converse, with ease!

//Get a token encoder, GPTClient conversation will manage context window size
var tokenEncoder = new Tiktoken.Encoder(new Tiktoken.Encodings.Cl100KBase());

//Declare a new client with the model and token counter so we can use the built in conversation functions, read our API key from the config
var client = new GPTClient(
    ThreadRegistry.Instance.GetAppSetting<string>("OAIKey"), 
    GPTClient.Request.O3Mini(), tokenEncoder.CountTokens);

//Start a conversation!
while (true)
{
    client.StreamConversation(Console.ReadLine(), Console.Write, null, Environment.NewLine).GetAwaiter().GetResult();
}

This sample uses the absolute minimum amount of code possible to achieve a full conversation with context window limits, model selection, token encoding, and a full chat history. We're using the TikToken package to feed the client a method that returns token counts; the GPT client does the rest when using the Conversation modes. It keeps track of message history, client messages, context token sizes, and more!

Check out the gif to see how this looks running in an app!

Properties

ApiKey

Gets the API key used for authenticating requests with the OpenAI API.

ROLE_USER

Represents the role for a user in a chat conversation.

ROLE_ASSISTANT

Represents the role for the assistant in a chat conversation.

ROLE_SYSTEM (deprecated in favor of ROLE_DEVELOPER)

Represents the system role in a chat conversation.

ROLE_DEVELOPER

Represents the developer role in a chat conversation.
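
A short sketch of assigning these role constants when building messages by hand (the message shape follows the quick example above):

var messages = new List<GPTClient.Message>()
{
    //ROLE_DEVELOPER supersedes the deprecated ROLE_SYSTEM for instructions
    new GPTClient.Message() { Role = GPTClient.ROLE_DEVELOPER, Content = "Answer briefly." },
    new GPTClient.Message() { Role = GPTClient.ROLE_USER, Content = "What is Perigee?" }
};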

Methods

StreamConversation

Conversation methods allow every message of the "conversation" to be managed and maintained by the client. This removes the need to worry about context token sizes, message state, history management, and so on. Simply feed a new message into the conversation at any time and the history will be maintained.

Parameters:

  • string newUserMessage: A new message to feed into the conversation.

  • Action<string> res: The response stream feed, use this to append to a logger, feed to the console, etc.

  • CancellationToken? cancelToken: An optional cancellation token.

  • string EndOfStream: If provided, this content is appended to the end of the current response. Useful if you need to statically append HTML, newlines, or other end-of-stream content.

//Get a token encoder, GPTClient conversation will manage context window size
var tokenEncoder = new Tiktoken.Encoder(new Tiktoken.Encodings.Cl100KBase());

//Declare a new client with the model and token counter so we can use the built in conversation functions, read our API key from the config
var client = new GPTClient(
    ThreadRegistry.Instance.GetAppSetting<string>("OAIKey"), 
    GPTClient.Request.O3Mini(), tokenEncoder.CountTokens);

//Start a conversation!
while (true)
{
    client.StreamConversation(Console.ReadLine(), Console.Write, null, Environment.NewLine).GetAwaiter().GetResult();
}

To use the Conversation methods, allocate the GPTClient with the API key, the request model, and a method that returns token counts, or (s) => 0 if you want to disable context window management. We're using the TikToken package to supply that token counting method; if you run this sample, install that package as well.

ClearConversation

If using the conversation model, you can easily clear all history and reset state by using this method.

client.ClearConversation();
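
For instance, a conversation loop could reset its history on a user command (a sketch reusing the StreamConversation example above):

while (true)
{
    var input = Console.ReadLine();
    if (input == "/reset")
    {
        //Wipe the stored history and state, then keep chatting
        client.ClearConversation();
        continue;
    }
    client.StreamConversation(input, Console.Write, null, Environment.NewLine).GetAwaiter().GetResult();
}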

BuildMessageWithHistory

Builds a list of messages by prioritizing the user message and then appending historical messages for as long as they do not exceed the specified context window token size. The method uses the provided token counting function to ensure the combined messages fit within the allowed token budget. It returns a tuple containing the list of messages, the number of tokens used (the initial context window size minus the remaining tokens), and the total tokens counted across all messages.

Parameters:

  • Func<string, int> Count: A function that returns the token count for a given string.

  • string userMessage: The current user message to include.

  • string systemMessage: An optional system message to include.

  • int ContextWindowTokenSize (default: 128000): The maximum allowed token count.

  • IEnumerable<Message> MessageHistory: An optional collection of historical messages (ordered from oldest to newest).

Example:

// Define a simple token counting function (for example, counting words)
int TokenCount(string s) => s.Split(' ').Length;

var historyMessages = new List<Message>
{
    new Message(GPTClient.ROLE_USER, "This is a previous conversation message.")
};

var result = GPTClient.BuildMessageWithHistory(
    TokenCount,
    "Hello, how are you?",
    "System instructions for context.",
    128000,
    historyMessages
);

// result.Item1: List<Message> containing the constructed messages
// result.Item2: Number of tokens used (ContextWindowTokenSize - remaining tokens)
// result.Item3: Total tokens counted from the messages

GetGPTVersion

Determines the version of a GPT model based on its identifier string and sets the corresponding context window token size. The method compares the input gptModel against known model names and patterns and returns the model version as a decimal value while outputting the appropriate context window size.

Parameters:

  • string gptModel: The GPT model identifier (e.g., "gpt-3.5-turbo", "gpt-4-turbo").

  • out int ContextWindowSize: An output parameter that will be set to the context window token size associated with the model.

Example:

int contextSize;
decimal version = GPTClient.GetGPTVersion("gpt-3.5-turbo", out contextSize);
Console.WriteLine($"Model Version: {version}, Context Window Size: {contextSize}");

SubmitRequestAsync

Submits a chat request asynchronously to the OpenAI API. This method sends the provided Request object as a POST request and awaits a response. On success, it sets the model on the returned ChatCompletion object and associates the original request with the response.

Parameters:

  • Request request: The request object containing the chat parameters.

  • CancellationToken? cancelToken (optional): A cancellation token to cancel the request if needed.

Example:

RestResponse<ChatCompletion> response = await gptClient.SubmitRequestAsync(request, cancellationToken);
if (response?.Data != null)
{
    Console.WriteLine(response.Data.Choices.FirstOrDefault()?.Message.Content);
}
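
As a sketch of how the optional token can be used, a standard CancellationTokenSource will abandon a slow request after a timeout:

//Cancel the request automatically if it takes longer than 30 seconds
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
RestResponse<ChatCompletion> response = await gptClient.SubmitRequestAsync(request, cts.Token);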

SubmitRequest

Submits a chat request synchronously to the OpenAI API. It sends the provided Request object as a POST request and returns a RestResponse containing the chat completion result. The response’s model is set based on the request.

Parameters:

  • Request request: The request object containing the chat parameters.

  • CancellationToken? cancelToken (optional): A cancellation token to cancel the request if needed.

Example:

RestResponse<ChatCompletion> response = gptClient.SubmitRequest(request, cancellationToken);
if (response?.Data != null)
{
    Console.WriteLine(response.Data.Choices.FirstOrDefault()?.Message.Content);
}

SubmitVisionRequestAsync

Submits a vision request asynchronously to the OpenAI API. This method accepts a GPTVisionPayload.Request object containing vision-specific parameters and sends it as a POST request. If successful, the returned ChatCompletion object will have its model property set from the request.

Parameters:

  • GPTVisionPayload.Request request: The vision request object.

  • CancellationToken? cancelToken (optional): A cancellation token to cancel the request if needed.

Example:

RestResponse<ChatCompletion> visionResponse = await gptClient.SubmitVisionRequestAsync(visionRequest, cancellationToken);
if (visionResponse?.Data != null)
{
    Console.WriteLine(visionResponse.Data.Choices.FirstOrDefault()?.Message.Content);
}

SubmitVisionRequest

Submits a vision request synchronously to the OpenAI API. It accepts a GPTVisionPayload.Request object and returns a RestResponse containing the chat completion result, with the model property set based on the request.

Parameters:

  • GPTVisionPayload.Request request: The vision request object.

Example:

RestResponse<ChatCompletion> visionResponse = gptClient.SubmitVisionRequest(visionRequest);
if (visionResponse?.Data != null)
{
    Console.WriteLine(visionResponse.Data.Choices.FirstOrDefault()?.Message.Content);
}

SubmitStreamRequest

Submits a chat request that streams results back from the OpenAI API. This method enables streaming by setting req.Stream to true, and it processes the incoming stream by invoking a provided callback (res) with each new token received. The method aggregates the response into a complete ChatCompletion object that is returned once streaming is finished.

Parameters:

  • Request req: The chat request object with streaming enabled.

  • Action<string> res: A callback action invoked with each new token as it is received.

  • CancellationToken? cancelToken (optional): A cancellation token to cancel the streaming operation if needed.

Example:

ChatCompletion completion = await gptClient.SubmitStreamRequest(
    request,
    token => Console.Write(token),
    cancellationToken
);
Console.WriteLine("Streaming complete.");

GetModels

Retrieves the list of available models from the OpenAI API. This method sends a GET request to the models endpoint and returns a RestResponse containing a ModelResponse object with information about the available models.

Example:

RestResponse<ModelResponse> modelsResponse = gptClient.GetModels();
if (modelsResponse?.Data != null)
{
    foreach (var model in modelsResponse.Data.Models)
    {
        Console.WriteLine(model.Id);
    }
}

Quick Model Methods

There are quite a few "quick" request methods available; they create the request object to be sent to GPT. Use them to rapidly create new messages from users, override context window sizes, pre-append developer (system) prompts, and create GPT clients ready for the Conversation methods. They come pre-configured with the maximum completion tokens, model names, and context window size information.

I won't list the models here as they change and get updated frequently; however, we keep these relatively up to date, and the latest and greatest models should be available to use.

GPTClient.Request.O3Mini("How many cups of sugar does it take to get to the moon?", "user@user.com");
GPTClient.Request.O1("Are you the droid we're looking for?");
GPTClient.Request.GPT4o().WithContextWindowSize(50_000);
GPTClient.Request.O1Mini().WithDeveloperMessage("You are smart, brave, and capable");
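
Because each quick method returns a fully configured request, it can be handed straight to the submit methods. A small sketch reusing the client from earlier:

//Build a quick o3-mini request and submit it synchronously
var quickResponse = gptClient.SubmitRequest(
    GPTClient.Request.O3Mini("Summarize this document in three bullets.", "user@user.com"));
Console.WriteLine(quickResponse?.Data?.Choices.FirstOrDefault()?.Message.Content);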

