OpenAI Client - (Responses)
OpenAIClient
The OpenAIClient class provides functionality to interact with OpenAI’s ChatGPT API using the latest Responses endpoints. These new endpoints unlock a wide range of powerful features, including vector store search, batching, tool calling, and more.
We built these classes to make development simple and intuitive. They include numerous convenient overloads, static helpers, and a fluent syntax that lets you quickly assemble complete requests with minimal effort.
Tool calling is fully integrated. The internal client manages all the complexity, including reflection-based tool mapping, argument parsing, and the back-and-forth communication between your code and the OpenAI servers, so you don't have to.
A quick example
Here we'll create a new client with our API key, then create and submit a request to gpt-5.1, all in only a few lines of code.
//Declare a client
var aic = new OpenAIClient(apiKey);
//Get a response
var response = await aic.GetResponseAsync(
    OpenAIClient.GPT51("It's this easy to message AI??"));
//Use the built-in MessageContent property to retrieve the latest message
if (response.IsSuccessful)
    Console.WriteLine(response.Data.MessageContent);
Converse, with ease!
The Responses API lets us optionally include a previous response ID, making conversation state incredibly easy to manage.
var aic = new OpenAIClient(apiKey);
OpenAIClient.Responses.ResponseBody? prev = null;
while (true)
{
    prev = (await aic.GetResponseStreamAsync(
        OpenAIClient.GPT5Mini(Console.ReadLine()).WithPreviousResponseID(prev?.Id ?? string.Empty),
        (ev) => Console.Write(ev.delta))).response;
    Console.WriteLine();
}
Check out the gif to see how this looks running in an app!

Tool calling
Tool calling (custom functions) is very easy to use. Simply decorate a method with the AITool attribute, and optionally add AIParameter attributes to its parameters to give GPT more context about a given parameter or input class. After you call RegisterTools, the client handles the internal fun of reflection mapping, argument parsing, invocation, and responding back to the server.
var aic = new OpenAIClient(apiKey)
    .RegisterTools<ToolsExample>();
var rs = await aic.GetResponseAsync(OpenAIClient.GPT5Mini("What is the weather like in Seattle Washington?"));
if (rs.IsSuccessful)
    Console.WriteLine(rs.Data!.MessageContent);
The sample ToolsExample class. A tool method may take either multiple parameters, or a single input class as a parameter, which will be deserialized for you.
public class ToolsExample
{
    [OpenAIClient.AITool("Get the current temperature at a location in fahrenheit")]
    public string GetWeather([OpenAIClient.AIParameter("state code, 2 characters")] string location)
    {
        return "currently 84 degrees";
    }
}
It's currently 84°F in Seattle, WA.
JSON Schema
To use your own classes as a JSON schema response format, simply register them, then retrieve the result with the MessageAs<> overload to deserialize the response.
In this example, we also provide model instructions, tell OpenAI not to store the message (storing is required if you intend to have OpenAI manage conversation state), disable tool use, set the reasoning effort to medium, and finally let the client serialize our ItemCategory class as the response format.
Using fluent syntax, it's really easy to build up a message exactly how we need. Simply deserialize the response into your model format using the MessageAs<> overload.
var aic = new OpenAIClient(apiKey).RegisterTools<ToolsExample>();
var resp = await aic.GetResponseAsync(
    OpenAIClient.GPT5Nano("Necklace, 14CT Gold")
        .WithInstructions("Categorize the item into the provided available buckets")
        .WithStore(false)
        .WithoutTools()
        .WithReasoning("medium")
        .WithJsonSchema<ItemCategory>());
ItemCategory itc = resp.Data.MessageAs<ItemCategory>();
Here's an example ItemCategory class. We may optionally use the Description attribute (from System.ComponentModel) to provide additional detail to the model.
public class ItemCategory
{
    [Description("Item type, try to match based on what the item is.")]
    public ItemType Type { get; set; } = ItemType.na;
}
public enum ItemType
{
    na,
    electronic,
    household,
    clothing,
    jewelry,
    furniture,
    other
}
Vector Store
The Responses API allows us to use and search our vector stores. Simply add the WithFileSearch call to the builder chain, and OpenAI will use the provided vector stores to search for content before replying.
var aic = new OpenAIClient(apiKey);
var fileSearch = await aic.GetResponseAsync(
    OpenAIClient.GPT5Mini("How do I enter my billing time into the system?")
        .WithInstructions(@"Ensure that response messages remain concise and suitable for a help desk message")
        .WithVerbosity("low")
        .WithFileSearch(["vs_12345678900987654321"], 3));
Batching
In this example, we show how you can easily turn a client request into a batched request. First build the base model request (GPT5Mini), then use AsBatchedList. The dictionary input provides the CustomID (key) and the new user message (value) for each batched request.
CreateBatchAsync does multiple things for you:
It creates the jsonl file.
Uploads it using the Files API and marks the file for batch completion.
Creates a new batch request based on this uploaded file.
The resulting file and batch classes are returned in a tuple.
GetBatchAsync retrieves the batch object. This tells you if it's currently processing, complete, errored out, etc.
There is a little helper, .IsDone, that tells us that the batch is no longer processing. This does not mean everything completed successfully. It does however signal we should no longer be polling for any future status changes.
GetBatchedResults retrieves the final output jsonl document(s): both the error file and the completed file.
Since batched requests are not all guaranteed to succeed, we automatically join the errored results to the completed results, deserialize each request and give you the results in one list. You may iterate over these results and check for errors, get the CustomID (a, b as shown below), and if the request was a success, get the full response.
It is NOT advised that you poll for batches every 2 minutes as shown below; however, for this simple example demonstrating a full round trip, it works. Please implement a better polling strategy for batch completions, or register a webhook for the best possible experience.
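For instance, one better polling strategy is a capped backoff loop. Here's a quick sketch reusing GetBatchAsync and the IsDone helper from this section; the batchId variable stands in for the ID returned by CreateBatchAsync:

```csharp
var aic = new OpenAIClient(apiKey);
//Start with a short wait and double it after each poll, up to a cap
var delay = TimeSpan.FromSeconds(30);
var batch = await aic.GetBatchAsync(batchId); //batchId from CreateBatchAsync
while (!(batch?.Data?.isDone ?? false))
{
    await Task.Delay(delay);
    if (delay < TimeSpan.FromMinutes(10))
        delay += delay; //Exponential backoff, capped at 10 minutes
    batch = await aic.GetBatchAsync(batchId);
}
```

This keeps early polls responsive for small batches while avoiding hammering the API when a large batch takes hours to complete.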
var aic = new OpenAIClient(apiKey);
var createBatch = await aic.CreateBatchAsync(
    OpenAIClient.GPT5Mini("").AsBatchedList(new() {
        {"a", "What is 5+2"},
        {"b", "What is 91+2"}
    }));
if (createBatch.batch?.IsSuccessful ?? false)
{
    var batch = await aic.GetBatchAsync(createBatch.batch.Data!.ID);
    while (!(batch?.Data?.isDone ?? false))
    {
        //Async block until each 2 minute mark using CRON
        await PerigeeUtil.BlockUntil("*/2 * * * *");
        batch = await aic.GetBatchAsync(createBatch.batch.Data.ID);
    }
    var rs = await aic.GetBatchedResults(batch.Data!);
    foreach (var batchedResponse in rs.results)
    {
        var error = batchedResponse.Error?.Message ?? string.Empty;
        var key = batchedResponse.CustomID;
        var response = batchedResponse.Response;
    }
}
Flex Service Tier
You may use the flex processing tier for requests at roughly half price (check the pricing page!). Simply add the WithFlexServiceTier option.
var aic = new OpenAIClient(apiKey);
var response = await aic.GetResponseAsync(
    OpenAIClient.GPT5("Flex pricing!").WithFlexServiceTier());
Vision Requests (Attach images)
Any message can have attached image content. There are multiple built-in methods right on the ResponseContent class to make this process easier.
var aic = new OpenAIClient(apiKey);
var rsp = await aic.GetResponseAsync(
    OpenAIClient.GPT5Mini("What can you tell me about how these two songs would mix?")
        .WithContent(OpenAIClient.Request.ResponseContent.FromImage(
            File.ReadAllBytes("/DJ.png"),
            OpenAIClient.Request.ImageFormat.PNG,
            OpenAIClient.Request.ImageDetail.High)));
Web Search
You can use the WebSearch tool by simply supplying it as an option. You can get back the annotations (a list of the sources and URLs searched) from this tool.
var aic = new OpenAIClient(apiKey);
var webResults = await aic.GetResponseAsync(
    OpenAIClient.GPT5Mini("What's the top news story today?")
        .WithWebSearch(OpenAIClient.Request.WebSearchActions.sources));
Pricing
You can get the price of a request in several ways. There's a built-in pricing model calculator, and all of the default pricing models are available.
We also internally keep track of token totals throughout a process. As an example, when a tool is called, those tokens are added to the final response after a tool completion call. This allows for much easier price and token tracking.
//ResponseBody.USDPrice
decimal price = response.Data.USDPrice;
//Under the hood, that property calls this:
decimal price = OpenAIClient.Pricing.PriceOf(response.Data);
//You can supply your own pricing models. It's a simple class with:
// request model, input/cache/output token price per million, service tier
decimal price = OpenAIClient.Pricing.PriceOf(response.Data, priceModels: OpenAIClient.Pricing.USD_PricingLatest());
You can easily get the latest pricing models available in the application with the following code. In this example, we grab the defaults, then add a custom model with pricing per million tokens. The pricing calculator then has access to this model. If you want to supply your own pricing models, feel free to build a custom list of Pricing models and supply it to the PriceOf call shown above.
List<OpenAIClient.Pricing?> priceModels = OpenAIClient.Pricing.USD_PricingLatest();
priceModels.Add(new OpenAIClient.Pricing()
{
    Billing = "default", //Batch, flex
    Cached = 1.0m,
    Input = 2.0m,
    Output = 4,
    Model = "gpt-4million",
    Type = "Text" //Audio, Video...
});
Fluent Syntax
There are various fluent syntax builders built into the application. Start typing With... and see what comes up! There's everything from reasoning effort, verbosity, tool use, caching, message history, and more. These fluent builders make writing OpenAI requests very intuitive and easy.
var req = OpenAIClient.GPT5("Hello world")
    // Add system/developer instructions
    .WithInstructions("You are a helpful assistant.")
    // Add or append user content
    .WithContent(OpenAIClient.Request.ResponseContent.From("Additional user message."))
    // Enable JSON schema output
    .WithJsonSchema<MySchemaType>()
    // Adjust verbosity (high, medium, low)
    .WithVerbosity("high")
    // Enable/disable storage for this request
    .WithStore(false)
    // Set reasoning effort + optional summary (minimal, low, medium, high)
    .WithReasoning("medium", summary: "concise")
    // Set prompt cache retention (key optional)
    .WithPromptCacheRetention("cache-key-xyz", "24h")
    // Or set prompt cache key directly
    .WithPromptCacheKey("my-short-key")
    // Add previous response ID for continued conversation context
    .WithPreviousResponseID("resp_123")
    // Rebuild message history from an existing response body
    .WithMessageHistory(previousMessage)
    // Enable web search tool with actions (sources), optional location, and domain filters
    .WithWebSearch(
        actions: OpenAIClient.Request.WebSearchActions.sources,
        location: new OpenAIClient.Request.WebSearchUserLocation { country = "US", city = "Dallas" },
        filteredDomains: new[] { "example.com" },
        externalWebAccess: true
    )
    // Enable flex service tier
    .WithFlexServiceTier()
    // Or auto service tier
    .WithAutoServiceTier()
    // Disable all tool usage
    .WithoutTools()
    // Add file search tool (vector stores + optional max results)
    .WithFileSearch(new[] { "vs_123", "vs_456" }, maxNumResults: 10)
    // Supply explicit tool choice options
    // FYI: You can't supply both of these options at once, check out the documentation to see how to use this.
    .WithToolChoice(
        ForceFunction: "MyFunctionName",
        choice: OpenAIClient.Request.AllowedToolChoice.required,
        allowedTools: new[] { "ToolA", "ToolB" },
        mode: OpenAIClient.Request.ToolChoiceMode.required
    );
Predefined GPT Models
Starting from GPT-5 onwards, the general-use models are available as static builders right off the main OpenAIClient. Here you can supply the user message, the user (safety ID), reasoning, and more. They form the base of the request and then let you use the fluent syntax to configure it.
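Putting a predefined model builder together with the fluent helpers shown earlier, a typical end-to-end request might look like this (a sketch using only the calls demonstrated above):

```csharp
var aic = new OpenAIClient(apiKey);
//Build from a predefined model, then configure with fluent syntax
var rs = await aic.GetResponseAsync(
    OpenAIClient.GPT5Mini("Summarize the Responses API in one sentence.")
        .WithInstructions("You are a concise technical writer.")
        .WithReasoning("low")
        .WithStore(false));
if (rs.IsSuccessful)
    Console.WriteLine(rs.Data!.MessageContent);
```

The static builders below all follow this same pattern: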
OpenAIClient.GPT5Mini();
OpenAIClient.GPT51();
OpenAIClient.GPT5Pro();
//... and more!