Talk to FabrCore Agents with Any OpenAI-Compatible Client

Eric Brasher · April 14, 2026 at 11:30 AM · 6 min read
Release

FabrCore agents have always been reachable through Orleans grain calls and the Agent REST API. But many external systems — automation platforms, CLI tools, Python scripts — already know how to talk to OpenAI. Today we are shipping a Chat Completions endpoint at /fabrcoreapi/ChatCompletion that accepts the familiar messages-and-options format, so any system that can call an OpenAI-style API can now interact with FabrCore's configured LLM models directly.

The Chat Completions Endpoint

The new POST /fabrcoreapi/ChatCompletion endpoint uses IFabrCoreChatClientService to resolve models by name from your fabrcore.json configuration. It accepts multi-message conversations with role-based messages — the same structure used by OpenAI, Azure OpenAI, and other compatible providers.

POST /fabrcoreapi/ChatCompletion — Request
{
  "Messages": [
    { "Role": "system", "Content": "You are a helpful assistant." },
    { "Role": "user", "Content": "Summarize this document..." }
  ],
  "Options": {
    "Model": "default",
    "MaxOutputTokens": 2048,
    "Temperature": 0.2
  }
}
Response
{
  "Text": "The document discusses three main themes...",
  "Model": "gpt-4o",
  "Usage": {
    "InputTokens": 150,
    "OutputTokens": 80
  }
}

The Options block, and every field within it, is optional. Supported options include Model (defaults to "default"), MaxOutputTokens, Temperature, TopP, TopK, StopSequences, FrequencyPenalty, and PresencePenalty. The model name resolves against fabrcore.json, so you reference models by their configured name rather than the raw provider model ID.
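For reference, a request exercising more of the option fields might look like the following (the values here are illustrative, not recommendations):

```json
{
  "Messages": [
    { "Role": "user", "Content": "List the key dates in this text..." }
  ],
  "Options": {
    "Model": "default",
    "TopP": 0.9,
    "StopSequences": ["END"],
    "FrequencyPenalty": 0.1,
    "PresencePenalty": 0.1
  }
}
```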

You can also send a simple single-prompt request without the full messages array — the endpoint handles both formats.
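Even without relying on the single-prompt form, a plain prompt can always be expressed in the documented messages format. A minimal Python helper, as a sketch (the function name is mine, not part of FabrCore):

```python
def as_messages(prompt, system=None):
    """Wrap a plain prompt string in the role-based message list
    accepted by POST /fabrcoreapi/ChatCompletion."""
    messages = []
    if system:
        # Optional system message goes first, mirroring the documented example.
        messages.append({"Role": "system", "Content": system})
    messages.append({"Role": "user", "Content": prompt})
    return messages
```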

Calling from External Systems

Because the endpoint follows the standard messages-and-model pattern, any HTTP client or OpenAI-compatible library can call it. Here is a basic example using curl:

Shell — curl request to the Chat Completions endpoint
curl -X POST https://your-host/fabrcoreapi/ChatCompletion \
  -H "Content-Type: application/json" \
  -d '{
    "Messages": [
      { "Role": "user", "Content": "Extract entities from this text..." }
    ],
    "Options": { "Model": "gpt-4o-mini", "MaxOutputTokens": 2048 }
  }'
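The same call is easy to make from a Python script with only the standard library. A sketch, assuming the host URL and helper names shown here (they are not part of FabrCore):

```python
import json
import urllib.request

def build_payload(messages, options=None):
    """Assemble the JSON body for POST /fabrcoreapi/ChatCompletion."""
    payload = {"Messages": messages}
    if options:
        # e.g. {"Model": "gpt-4o-mini", "MaxOutputTokens": 2048}
        payload["Options"] = options
    return payload

def chat_completion(base_url, messages, options=None):
    """POST a completion request and return the parsed JSON response."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/fabrcoreapi/ChatCompletion",
        data=json.dumps(build_payload(messages, options)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

A caller would then do something like `chat_completion("https://your-host", [{"Role": "user", "Content": "Extract entities..."}])` and read the Text field of the result.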

This is designed for single-turn completions — no streaming, no tool calling. It complements the existing POST /fabrcoreapi/agent/chat/{handle} endpoint, which sends a message to a specific agent and receives the agent's response. The Chat Completions endpoint bypasses the agent layer entirely and goes straight to the configured LLM, making it ideal for utility tasks like text extraction, classification, or summarization that do not need agent state or tools.
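On the caller's side, the single-turn response shape shown above is simple to unpack for utility pipelines. A small Python helper as a sketch (the function name and defaults are mine):

```python
def parse_completion(body):
    """Extract text, model name, and token usage from a
    ChatCompletion response body (a parsed JSON dict)."""
    usage = body.get("Usage") or {}
    return {
        "text": body["Text"],
        "model": body.get("Model"),
        "input_tokens": usage.get("InputTokens", 0),
        "output_tokens": usage.get("OutputTokens", 0),
    }
```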

Endpoint                         | Goes Through                        | Best For
/fabrcoreapi/agent/chat/{handle} | Agent grain (tools, history, state) | Conversational agent interactions
/fabrcoreapi/ChatCompletion      | LLM directly via IChatClient        | Utility completions, external integrations

Using FabrCoreHostApiClient from .NET

For .NET applications, FabrCore.Client provides IFabrCoreHostApiClient — a typed HTTP client that wraps every FabrCore server endpoint. The Chat Completions endpoint is accessible through GetChatCompletionAsync:

C# — Simple prompt with FabrCoreHostApiClient (shown in a Blazor component)
@inject IFabrCoreHostApiClient ApiClient

// Single prompt with options
ChatCompletionResponse completion = await ApiClient.GetChatCompletionAsync(
    "Extract entities from this text...",
    new ChatCompletionOptions { Model = "gpt-4o-mini", MaxOutputTokens = 2048 });

Console.WriteLine($"{completion.Model}: {completion.Text}");
Console.WriteLine($"Tokens: {completion.Usage.InputTokens} in / {completion.Usage.OutputTokens} out");
C# — Multi-message conversation with FabrCoreHostApiClient
// Full multi-message request
ChatCompletionResponse multi = await ApiClient.GetChatCompletionAsync(
    new ChatCompletionRequest
    {
        Messages = new List<ChatCompletionMessageRequest>
        {
            new() { Role = "system", Content = "You are a helpful assistant." },
            new() { Role = "user", Content = "Summarize this document..." }
        },
        Options = new ChatCompletionOptions { Model = "default", Temperature = 0.2f }
    });

The base URL is read from the FabrCoreHostUrl configuration key in your client's appsettings.json (defaults to http://localhost:5000). No x-user header is required for Chat Completions since it does not go through the agent layer — it is a host-scoped endpoint.
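Assuming the standard appsettings.json shape, the client configuration needs only that one key:

```json
{
  "FabrCoreHostUrl": "https://your-host"
}
```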

What This Means for Integration

The Chat Completions endpoint closes a gap that existed between FabrCore's agent-oriented APIs and the broader ecosystem of AI tooling. Previously, if a Python service or third-party automation tool needed to call an LLM through FabrCore, it had to go through the agent API — which meant creating an agent instance, managing handles, and dealing with the agent lifecycle for what was essentially a stateless completion request.

Now those systems can point their existing OpenAI client library at FabrCore's host URL and make standard completion calls. The model routing, API key management, timeout configuration, and provider abstraction all happen server-side via fabrcore.json. External callers do not need to know whether the underlying model is OpenAI, Azure, Grok, Gemini, or OpenRouter.

Combined with the existing agent, embeddings, file, and discovery endpoints, FabrCore's REST surface now covers the full range of AI operations:

Capability          | Endpoint
Agent conversations | /fabrcoreapi/agent/chat/{handle}
LLM completions     | /fabrcoreapi/ChatCompletion
Vector embeddings   | /fabrcoreapi/Embeddings
File storage        | /fabrcoreapi/File
Registry discovery  | /fabrcoreapi/Discovery
Cluster diagnostics | /fabrcoreapi/Diagnostics

Built with FabrCore on .NET 10.


Eric Brasher

Builder of FabrCore and OpenCaddis.