Agentic workflows: how AI agents chain business tasks

Last updated: April 24, 2026

A dark radial diagram with a central red orchestrator node surrounded by a ring of glowing specialist agent panels, with faint curved trails showing tasks routed between them.

The interactive below shows an agentic workflow in motion. Tasks arrive on the outer ring, curve through a central orchestrator, and visit each specialist agent in sequence. Switch between three preset workflows (quote to cash, support, onboarding), push the arrival rate up, and crank the failure rate to see where the pipeline starts to bleed. Click the fullscreen button for a closer look.


Agentic workflow: a radial orchestrator routing tasks

A radial simulation of an agentic workflow. Tasks arrive on an outer intake ring, curve through a central orchestrator, and visit specialized agents in order. Switch between three preset pipelines, dial up the failure rate, and watch the throughput strip.

Built with Canvas 2D. Render and interaction by Pro Trailblazer.

What you're seeing

The bright red node in the middle is the orchestrator. Its only job is to know which agent should get the task next. The ring of panels around it are the specialist agents: INTAKE validates what came in, RESEARCH pulls context, PRICE calls a calculator or database, DRAFT writes the customer-facing content, VERIFY checks the draft, SEND pushes it out, and FOLLOW-UP nudges later if needed.

A few things to try:

  • Switch the workflow preset. Quote to Cash touches every agent. Support skips PRICE and FOLLOW-UP. Onboarding moves VERIFY earlier in the pipeline. The agents don't change, the routing does.
  • Push arrival rate to 4 or 5 tasks per second. Watch the LOAD counter on DRAFT climb faster than the others. The slowest agent becomes the queue, every time.
  • Set failure rate to 20 percent. Some tasks flash white and loop back for a retry. After three attempts they get dropped with an X. The SUCCESS percentage and the dashed line in the strip tell you what your customers would actually experience.
  • Hit Burst +20. A spike of twenty tasks hits the intake ring at once. The router handles them fine. The downstream agents queue up, which is the real cost of a traffic spike.
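The retry behavior in the demo has simple math behind it. Assuming each attempt fails independently at the failure rate you set, a task is only dropped if all three attempts fail, which is a sketch you can check in a few lines:

```python
# Probability a task eventually succeeds, given a per-attempt failure
# rate and a cap of three attempts (as in the demo). Assumes attempts
# fail independently of each other.
def success_after_retries(failure_rate: float, max_attempts: int = 3) -> float:
    # A task is dropped only when every attempt fails.
    return 1.0 - failure_rate ** max_attempts

print(success_after_retries(0.20))  # 0.992: 20% per-attempt failure still delivers 99.2%
print(success_after_retries(0.50))  # 0.875: retries stop saving you as failures climb
```

This is why the SUCCESS percentage in the strip stays high at a 20 percent failure rate but degrades sharply as you push the slider further.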

How it actually works

An agent is not a smarter model. It is a regular LLM call wrapped in three things: a scoped job ("you only draft quotes, nothing else"), a set of tools it's allowed to call (a pricing API, a CRM lookup, an email sender), and a clear output contract ("return a JSON object with these fields").
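Those three things can be written down directly. Here is a minimal sketch of an agent definition; the class name, fields, and the toy pricing tool are all illustrative, not from any specific framework:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    name: str               # e.g. "DRAFT"
    system_prompt: str      # the scoped job: what this agent is allowed to do
    tools: dict[str, Callable] = field(default_factory=dict)  # calls it may make
    output_schema: dict = field(default_factory=dict)         # the output contract

quote_drafter = Agent(
    name="DRAFT",
    system_prompt="You only draft quotes. Return JSON matching the schema.",
    tools={"price_lookup": lambda sku: {"sku": sku, "unit_price": 42.0}},
    output_schema={"customer": "str", "line_items": "list", "total": "float"},
)
```

Nothing here is clever. The value is that the scope, the tools, and the contract are explicit, so the orchestrator can validate outputs and a human can read exactly what each agent is allowed to do.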

The orchestrator is the boring part that makes the system work. It holds the pipeline definition ("first INTAKE, then RESEARCH, then PRICE"), passes outputs from one agent to the next, handles retries when an agent returns an error or a malformed result, and decides when to drop a task that keeps failing. It is usually a few hundred lines of code, not a separate model.

Two details in the simulation that map directly to real systems:

  1. Tasks always route through the center. Agents don't talk directly to each other. This is a deliberate choice. If every agent can call every other agent, you end up with a network of hidden dependencies and no single place to change behavior. Central routing is worse at raw speed and better at literally everything else. The same pattern shows up at the model level in mixture-of-experts architectures, where a gating network routes each token to the right expert.
  2. Each agent has a processing time and a load bar. DRAFT is the slowest because long-form generation takes the most tokens. PRICE is fast because it is really just a database lookup or a quick calculation. The load bar fills up as tasks queue. That bar is the single most useful operational metric in a production agentic system.
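The "slowest agent becomes the queue" claim is basic queueing arithmetic. With illustrative service times (not measurements), steady-state throughput is the reciprocal of the slowest stage:

```python
# Per-task service time in seconds for each stage; values are
# illustrative, roughly matching the demo's relative speeds.
service_times = {
    "INTAKE": 0.1, "RESEARCH": 0.5, "PRICE": 0.05,
    "DRAFT": 1.2, "VERIFY": 0.3, "SEND": 0.1,
}

bottleneck = max(service_times, key=service_times.get)
max_throughput = 1.0 / service_times[bottleneck]   # tasks/s the pipeline can sustain
print(bottleneck, round(max_throughput, 2))        # DRAFT 0.83

# Anything arriving faster than that piles up at the bottleneck:
arrival_rate = 4.0                                 # tasks/s, like the demo's slider
queue_growth = arrival_rate - max_throughput       # tasks added to the queue per second
```

At 4 tasks per second, DRAFT's queue grows by roughly three tasks every second, which is exactly what its LOAD counter shows in the demo.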

Where this breaks for a small business

The simulation is clean. Real SMB workflows have three kinds of mess that don't show up on screen.

First, the inputs are never uniform. A "new quote request" can arrive as an email, a web form, a text message, or a phone call someone transcribed badly. INTAKE in the real world spends most of its effort deciding what the task even is, not validating it.

Second, the tools lie. A pricing API can return stale data, a CRM can be missing the contact, the email sender can silently drop a message. The orchestrator has to treat every tool call as potentially wrong and plan for what to do when it is. That's where most of the real engineering goes.
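One way to encode that distrust is to wrap every tool call in validation with an explicit fallback. This is a hypothetical wrapper, not a library API; the CRM dict stands in for a real lookup:

```python
def safe_tool_call(tool, arg, validate, fallback):
    try:
        result = tool(arg)
    except Exception:
        return fallback          # the tool crashed outright
    if not validate(result):
        return fallback          # the tool "succeeded" but the data is unusable
    return result

# Example: a CRM lookup that silently returns None for missing contacts.
crm = {"acme": {"email": "buyer@acme.example"}}
contact = safe_tool_call(
    tool=crm.get,
    arg="unknown-co",
    validate=lambda r: r is not None and "email" in r,
    fallback={"email": None, "needs_human": True},
)
```

The important design choice is that the fallback is data, not an exception: the pipeline keeps moving, carrying a `needs_human` flag instead of dying mid-task.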

Third, humans interrupt. A customer replies mid-pipeline with new information, a staff member overrides the draft, a rule changes on the last day of the quarter. A good agentic system needs a "pause, edit, resume" path, which means every task needs persistent state, not just an in-memory handoff.
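Persistent state can be as simple as a checkpoint written after every step, so a human can pause, edit the stored payload, and resume from the next step rather than restarting. A minimal sketch, with illustrative field names and JSON files standing in for a real store:

```python
import json
import os
import tempfile

def save_checkpoint(path, task_id, next_step, payload):
    # One file per task: which step runs next, and the payload so far.
    with open(path, "w") as f:
        json.dump({"task_id": task_id, "next_step": next_step, "payload": payload}, f)

def load_checkpoint(path):
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "task-42.json")
save_checkpoint(path, 42, 3, {"draft": "Dear customer..."})

# A human pauses the task, edits the draft, and the pipeline resumes at step 3.
state = load_checkpoint(path)
state["payload"]["draft"] = "Dear customer, updated terms..."
save_checkpoint(path, state["task_id"], state["next_step"], state["payload"])
```

With this in place, "resume" is just `run_pipeline` starting from `next_step` with the stored payload, and the human edit survives a process restart.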

Key takeaways

  • Agents are not smarter LLMs, they are LLMs plus a scoped job and a set of tools. The intelligence is in how they are wired together, not in any one call.
  • The orchestrator is where the business logic lives. If every agent can talk to every other agent, you do not have a workflow, you have a chat room.
  • Failure handling is the product. A demo that never fails hides the entire engineering problem.
  • Throughput is bounded by the slowest agent. Queue depth tells you where to spend money first.
  • Start with a preset pipeline, not a free-form agent. Most SMB workflows are four to seven fixed steps with light branching, and a DAG beats a debate every time.