Understanding FabrCore's Actor Model

Eric Brasher · February 21, 2026 at 11:08 AM · 5 min read

FabrCore is built on Microsoft Orleans, which follows the actor model. Understanding this architecture is key to building reliable multi-agent systems — and to understanding why certain patterns that seem convenient would actually break the system.

Actor Model Fundamentals

The actor model is a concurrency paradigm where each actor (in Orleans, a grain) is an independent unit of computation with its own isolated state. FabrCore agents are Orleans grains, which means three guarantees apply to every agent:

Grain Isolation: Each agent's state is entirely isolated. Agent A cannot read or write Agent B's memory, configuration, or plugin state. There are no shared mutable variables between agents. This isolation is enforced by the runtime, not by convention — it's physically impossible for one grain to touch another grain's state directly.

Single-Threaded Execution: Grains process one message at a time. When Agent A is handling a user request, no other message can execute concurrently within that same grain. This eliminates an entire class of race conditions and data corruption bugs. You never need locks, mutexes, or concurrent collections inside an agent — the runtime guarantees sequential execution.

Location Transparency: Grains may be on different silos in a cluster. Agent A might be running on Server 1 while Agent B is on Server 3. The code that communicates between them doesn't know or care — Orleans handles routing, serialization, and network transport transparently. Your agent code looks the same whether you're running a single-node dev setup or a 50-node production cluster.
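All three guarantees are easiest to see in code. The sketch below is a minimal Orleans grain, not FabrCore's actual agent class — the interface and names are illustrative only:

```csharp
using System.Threading.Tasks;
using Orleans;

// Hypothetical grain for illustration; FabrCore's real agent grain is richer.
public interface ICounterGrain : IGrainWithStringKey
{
    Task<int> Increment();
}

public class CounterGrain : Grain, ICounterGrain
{
    // Private state: no other grain can read or write this field (grain isolation).
    private int _count;

    public Task<int> Increment()
    {
        // The runtime delivers one message at a time, so this read-modify-write
        // needs no locks even under many concurrent callers (single-threaded execution).
        _count++;
        return Task.FromResult(_count);
    }
}

// Caller code looks identical whether the grain is activated on this silo or
// another server in the cluster (location transparency):
//
//   var grain = grainFactory.GetGrain<ICounterGrain>("agent-a");
//   int value = await grain.Increment();
```

Note that the caller never constructs the grain or knows where it lives — it asks the grain factory for a reference by key, and Orleans handles activation, routing, and serialization.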

Why Tool Sharing Breaks the Model

A common question is: "Can Agent A call a tool that belongs to Agent B?" The answer is no, and understanding why requires understanding what the actor model guarantees.

Bypasses the message queue. Every interaction with a grain goes through its message queue. When you call a tool on Agent B from Agent A's execution context, you're bypassing that queue entirely. Agent B's grain doesn't know the call is happening — it never had a chance to schedule it.

Violates single-threaded guarantees. If Agent B is currently processing its own message and Agent A invokes one of Agent B's tools directly, that tool execution would run concurrently with Agent B's in-progress work. The single-threaded guarantee that the entire system depends on is now broken. Plugin state, chat history, session data — all of it is now subject to race conditions.

Creates hidden dependencies. When agents communicate through messages, the dependency is explicit and visible. When Agent A silently reaches into Agent B's tool set, you have a hidden coupling that doesn't show up in configuration, logging, or health diagnostics. Debugging becomes significantly harder.

Introduces concurrency hazards on shared plugin state. Plugins often maintain internal state — file handles, HTTP clients, caches, counters. If two agents share a plugin instance and both invoke it concurrently (which cross-agent tool sharing enables), that plugin state is now a shared mutable resource with no synchronization. This is exactly the class of bugs the actor model is designed to prevent.
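To make that last hazard concrete, here is a hypothetical plugin with mutable internal state — the class and its members are illustrative, not FabrCore's API. Owned by a single agent, it is safe; shared across agents, the unsynchronized fields below would race:

```csharp
using System.Collections.Generic;
using System.IO;

// Hypothetical file-reading plugin with internal mutable state.
public class FilePlugin
{
    // Safe only while exactly one single-threaded agent owns this instance.
    private readonly List<string> _recentPaths = new();
    private int _readCount;

    public string ReadFile(string path)
    {
        // If two agents invoked a SHARED instance concurrently, these two
        // statements could interleave: increments can be lost, and
        // List<string> can corrupt, because neither is thread-safe.
        _readCount++;
        _recentPaths.Add(path);
        return File.ReadAllText(path);
    }
}
```

Inside an agent, the actor model makes this code correct without any synchronization; cross-agent tool sharing silently removes that protection.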

The Correct Approach: Messaging

If Agent A needs something that Agent B can do, the correct approach is to send a message. FabrCore provides SendAndReceiveMessage for exactly this pattern — it sends a message to another agent and awaits the response, all through the proper grain messaging infrastructure:

Inter-Agent Communication
// Use SendAndReceiveMessage for inter-agent communication
var response = await fabrcoreAgentHost.SendAndReceiveMessage(new AgentMessage
{
    ToHandle = "user:file-agent",
    FromHandle = fabrcoreAgentHost.GetHandle(),
    Kind = MessageKind.Request,
    Message = "Read the file at /tmp/data.txt"
});

This approach respects every actor model guarantee. The message enters Agent B's queue. Agent B processes it when it's ready — after any in-flight work completes. Agent B's plugins execute within Agent B's single-threaded context. The dependency is explicit: it shows up in logs, in message traces, and in health diagnostics.

The pattern is: Agent A describes what it needs in a message, and Agent B decides how to fulfill it using its own tools. This is proper encapsulation — the same principle that makes object-oriented programming work, applied at the agent level.
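In sketch form, the receiving side might look like the following. The handler name, the `MessageKind.Response` value, and the naive message parsing are assumptions for illustration — only `AgentMessage` and its properties come from the example above:

```csharp
using System.IO;
using System.Threading.Tasks;

// Hypothetical receive-side handler for Agent B (the file agent).
public async Task<AgentMessage> HandleRequest(AgentMessage incoming)
{
    // Executes inside Agent B's single-threaded grain context, after any
    // in-flight work completes. Agent B decides which of its own tools to use.
    string path = incoming.Message.Split(' ')[^1];       // naive parse of the request text
    string contents = await File.ReadAllTextAsync(path); // Agent B's own file capability

    return new AgentMessage
    {
        ToHandle = incoming.FromHandle,                  // reply to the requester
        FromHandle = "user:file-agent",
        Kind = MessageKind.Response,                     // assumed counterpart of Request
        Message = contents
    };
}
```

The important property is that Agent A never touches Agent B's tools: the request crosses the boundary as data, and everything that executes does so in Agent B's own context.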

Plugin-Per-Agent is Lightweight

If two agents need the same capability — say, both need to read files — the correct answer is to configure the same plugin on both agents. Each agent gets its own plugin instance.

This might sound wasteful, but plugin instances are just a few object allocations. A plugin is typically a class with a handful of methods and maybe an HttpClient or a configuration object. We're talking kilobytes at most, not megabytes. The memory overhead of a duplicate plugin instance is negligible compared to the cost of the LLM calls that agent is making.

And the benefit is enormous: each agent's plugin instance is fully isolated. Agent A's file plugin and Agent B's file plugin can operate concurrently without any synchronization, because they don't share state. The actor model's guarantees are preserved.
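In code terms, per-agent plugins amount to one extra allocation per agent. The configuration shape below is hypothetical — FabrCore's actual configuration API may look different — but the principle holds:

```csharp
// Hypothetical wiring: each agent constructs its own plugin instance.
// AgentConfig, Handle, Plugins, and FileReaderPlugin are illustrative names.
var agentA = new AgentConfig
{
    Handle = "user:report-agent",
    Plugins = { new FileReaderPlugin() }   // Agent A's own instance
};

var agentB = new AgentConfig
{
    Handle = "user:file-agent",
    Plugins = { new FileReaderPlugin() }   // a separate instance for Agent B
};

// The two instances share code (the class is loaded once) but share no state,
// so both agents can use their plugin concurrently with no synchronization.
```

This is the same tradeoff the actor model makes everywhere: duplicate a little memory to eliminate an entire class of shared-state bugs.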

Sub-Millisecond Communication

A common concern is: "Won't messaging between agents add latency?" In practice, SendAndReceiveMessage within the same silo is typically sub-millisecond. The message doesn't go over the network — it's an in-memory grain call routed by the Orleans scheduler.

Even cross-silo calls (where Agent A is on a different server than Agent B) add only a few milliseconds of network overhead. Compare that to the hundreds of milliseconds or seconds that an LLM inference call takes, and the messaging overhead is effectively zero.

The "latency" concern is usually unfounded. If your agent system is slow, the bottleneck is LLM inference, not grain-to-grain messaging. Optimizing away the message by sharing tools directly would save microseconds while introducing correctness bugs — the worst possible tradeoff.

Learn More

The actor model isn't just an implementation detail — it's the foundation that makes FabrCore's multi-agent systems reliable. Grain isolation, single-threaded execution, and location transparency are the properties that let you build agent systems that scale without the concurrency nightmares that plague shared-state architectures.

Check out the full documentation for more details on agent design, communication patterns, and configuration.


Eric Brasher

Builder of FabrCore and OpenCaddis.