The sync problem nobody warns you about
I didn't anticipate this. When you run a multi-agent system with 26 agents, each operating in its own workspace with its own memory files, you run into a communication problem that has nothing to do with the agents themselves.
The problem is policy drift.
Say I decide that all agents must use Slack channel IDs instead of channel names when sending messages. (This actually happened. Channel names silently fail in the OpenClaw gateway.) I tell Nyx, my orchestrator. Nyx updates her own memory. Great.
But what about the other 25 agents? Each one has a MEMORY.md file it reads at the start of every session. If I don't update all of them, Forge will keep using #ai-forge when it should be using channel:C0AGKBNMESG. The SEO agent will break when it tries to post results. And I won't know until something fails.
This is the sync problem. It's unglamorous. It took me three weeks and three different approaches to solve it.
What I tried first: just tell Nyx
My first instinct was simple. Nyx is the orchestrator. She talks to all the other agents. So I'd tell Nyx a policy change and expect her to propagate it.
This doesn't work for a basic reason: Nyx only talks to agents when she needs something from them. If I tell Nyx that dev servers must bind to 0.0.0.0 for Tailscale access, she'll remember it. But she won't spawn sessions with every agent just to deliver a memo. Why would she? She has better things to do.
The result was patchy adoption. Some agents would learn about a change through normal work conversations. Others wouldn't hear about it for days. I'd find agents still using old practices weeks after a policy change.
What I tried second: the shared decisions file
My next approach was a shared decisions.md file. One file in a shared folder that every agent reads at session start. Write the decision there, and it propagates automatically when each agent boots up.
This worked better. But it had a scaling problem. After a few weeks, the file got long. Some decisions were permanent standing orders. Others were one-time announcements about infrastructure changes. Mixing these together made the file bloated and hard to scan.
Worse, I couldn't tell who had actually read what. I'd add a decision about Anthropic rate limits and have no way to verify whether Forge, the agent most likely to blow past those limits, had actually ingested the rule.
The broadcast system
The current system is what I call the broadcast folder. It lives at shared/broadcasts/ in the OpenClaw workspace, and it works like this.
When something needs to go out to all agents (or a specific subset), someone writes a markdown file with a date prefix:
shared/broadcasts/2026-03-11-slack-channel-ids-fix.md
shared/broadcasts/2026-03-12-anthropic-rate-limit-policy.md
shared/broadcasts/2026-03-13-dev-server-links.md
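The date prefix isn't cosmetic: because the filenames start with an ISO date, a plain lexicographic sort is also a chronological sort, so an agent can find broadcasts it hasn't seen yet with nothing fancier than a sorted directory listing. A minimal sketch of that idea (the function name and the "compare against the last filename you read" convention are mine, not part of the actual system):

```python
from pathlib import Path

def new_broadcasts(folder: str, last_seen: str) -> list[str]:
    """Return broadcast filenames newer than the last one this agent read.

    Works because YYYY-MM-DD filename prefixes sort lexicographically
    in date order. `last_seen` is the filename of the most recently
    read broadcast, or "" if the agent has never read any.
    """
    names = sorted(p.name for p in Path(folder).glob("*.md"))
    return [n for n in names if n > last_seen]
```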
The file follows a simple format:
# STANDING ORDER -- Anthropic API Rate Limit Policy
Date: 2026-03-12
From: Nyx (on behalf of David Bakke)
Priority: CRITICAL
## The Rule
Anthropic MUST NEVER be overloaded. Under any circumstances.
## Rules for ALL agents
1. Never spawn more than 2 Anthropic sub-agents simultaneously
2. Prefer non-Anthropic models for routine work
3. If Anthropic returns 529: immediately fall back to Gemini or GPT-4o
That's the write side. The read side is equally straightforward. Every agent is expected to check the broadcasts folder at session start. When an agent reads a broadcast, it logs the event to shared/broadcasts/read-log.md:
[2026-03-12 19:36] nyx read: 2026-03-12-anthropic-rate-limit-policy.md
[2026-03-12 23:53] forge read: 2026-03-12-slack-formatting-directive.md
[2026-03-13 04:00] synapse-cron processed: 2026-03-12-anthropic-rate-limit-policy.md
And the agent writes the key takeaway into its own MEMORY.md so it's loaded at every future session start.
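The whole read-side routine fits in a few lines. This is a sketch under my own assumptions, not the agents' actual tooling: the directory layout, the "skip anything already in the read-log" check, and using the broadcast's title line as a stand-in for the key takeaway are all illustrative.

```python
from datetime import datetime
from pathlib import Path

def process_broadcasts(agent: str, workspace: Path) -> list[str]:
    """Read unread broadcasts, log each read, and note it in MEMORY.md."""
    broadcasts = workspace / "shared" / "broadcasts"
    read_log = broadcasts / "read-log.md"
    memory = workspace / agent / "MEMORY.md"

    logged = read_log.read_text() if read_log.exists() else ""
    newly_read = []
    for f in sorted(broadcasts.glob("*.md")):
        if f.name == "read-log.md" or f"{agent} read: {f.name}" in logged:
            continue
        body = f.read_text()
        stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
        # Append to the shared sign-in sheet...
        with read_log.open("a") as log:
            log.write(f"[{stamp}] {agent} read: {f.name}\n")
        # ...and carry the takeaway into the agent's own memory.
        # (Here we just copy the broadcast's title line as a stand-in.)
        first_line = body.splitlines()[0] if body.strip() else f.name
        with memory.open("a") as mem:
            mem.write(f"\n## Broadcast: {f.name}\n{first_line}\n")
        newly_read.append(f.name)
    return newly_read
```

Running it twice is a no-op the second time, which is the property that makes "check the folder at every session start" cheap.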
Why broadcasts must come from the root session
This rule matters, and it took a mistake to learn it. Early on, I had agents generating their own broadcasts when they hit problems. Forge discovered something about file size limits and wrote a broadcast about it.
The issue is authority. When Forge writes a broadcast, other agents don't know if it's a suggestion or a directive. Is this something David approved, or is it one agent's opinion? In a human org, this would be like a coworker sending an all-staff email about a new policy without clearing it with management first.
Now the rule is clear: broadcasts originate from Nyx, acting on my behalf. Nyx is the orchestrator. She's the one who turns my decisions into formalized broadcasts. Other agents can request broadcasts by escalating to Nyx, but they don't write directly to the broadcasts folder.
This gives the system a clear chain of authority. David decides. Nyx formalizes and distributes. Agents read, log, and internalize.
The Synapse sweep
There's a wrinkle. Not every agent runs every day. Some agents, like Atlas (market research) or Ads (advertising), might go days or weeks between sessions. They could miss broadcasts entirely.
This is where Synapse comes in. Synapse is the operations agent. It runs on a cron job and its responsibilities include a nightly broadcast sweep. At 04:00 Oslo time, Synapse checks the broadcasts folder, reads any new files, and logs them as processed. It also verifies that all agents' MEMORY.md files are up to date with recent broadcasts.
You can see this in the read log:
[2026-03-14 04:00] synapse-cron processed: 2026-03-14-supabase-github-access-map.md
[2026-03-14 04:00] synapse-cron processed: 2026-03-14-iphone-tailscale-clarification.md
Synapse is the safety net. If an agent missed a broadcast, the next time it starts a session, the information should already be in its memory from Synapse's overnight update. In practice, this means most agents are fully synced within 24 hours of any broadcast.
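The verification half of the sweep amounts to a cross-check of every agent against every broadcast. A hypothetical sketch, assuming the directory layout from earlier and a crude "does MEMORY.md mention the broadcast filename" heuristic; the real Synapse's checks are its own:

```python
from pathlib import Path

def sweep(workspace: Path, agents: list[str]) -> list[tuple[str, str]]:
    """Return (agent, broadcast) pairs where the agent's MEMORY.md
    shows no trace of a broadcast, so Synapse knows what to back-fill."""
    broadcasts = [
        p.name for p in (workspace / "shared" / "broadcasts").glob("*.md")
        if p.name != "read-log.md"
    ]
    missing = []
    for agent in agents:
        memory_file = workspace / agent / "MEMORY.md"
        memory = memory_file.read_text() if memory_file.exists() else ""
        for name in broadcasts:
            if name not in memory:
                missing.append((agent, name))
    return missing
```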
What gets broadcast vs. what stays direct
Not everything belongs in a broadcast. I've landed on a simple taxonomy after a month of use.
Broadcasts are for things that affect multiple agents or change how the system operates as a whole. The Anthropic rate limit policy, for example. That applies to every agent that might spawn sub-agents using Claude. The Tailscale links rule. The credential security directive.
I've sent broadcasts about infrastructure changes too. When I migrated the PortLink Supabase project to a new instance in Zurich, that needed a broadcast because multiple agents reference that project. When I added the Folio agent for receipt processing, along with its #ai-receipts channel, every agent needed to know the new routing.
Direct messages are for task-specific instructions. If I need Forge to refactor a particular component, that's a direct conversation in #ai-forge. If I need Hermes to reclassify some emails, same thing. These don't need system-wide distribution.
The gray area is standing orders that only affect a subset of agents. The "design system first" rule mostly matters to Forge and the product agents. The frontend skills directive is relevant to maybe six agents. For these, I still use broadcasts because the cost of over-communicating is low and the cost of an agent not knowing a relevant rule is high.
Real broadcasts I've sent
Here's a sampling from the last couple weeks to give you a sense of what these look like in practice:
Slack channel IDs fix (March 11): After discovering that channel names silently fail, I broadcast the rule that all agents must use channel:CXXXXXXXXXX format. Included the key IDs everyone needs.
Agents roster created (March 11): Announced that shared/AGENTS-ROSTER.md is now the single source of truth for all 25 agents, their models, Slack channels, and workspace paths.
Anthropic rate limit policy (March 12): After a gateway restart caused by too many parallel Claude calls, this broadcast established the two-concurrent-Anthropic-jobs rule. This one was marked CRITICAL and permanent.
Credential security directive (March 12): After catching an agent echoing a token value in a Slack message, I broadcast the rule that credentials can be used in tool calls but never repeated in chat or summaries.
Frontend skills and audit tools (March 14): Announced the core trio of skills (frontend-design, accessibility, vercel-react-best-practices) that any agent doing UI work must load.
Folio receipts pipeline (March 17): Introduced the new Folio agent and the #ai-receipts channel. Told all agents that receipt processing now routes through this new pipeline.
Each of these could have been a conversation with one or two agents. But making them broadcasts ensures that any agent, at any point in the future, can find out why things work the way they do.
The read-log as an audit trail
One unexpected benefit of the read-log: it's an accountability trail. When something goes wrong, I can check whether the relevant agent actually read the relevant broadcast.
I had a situation where an agent was still using markdown tables in Slack after I'd broadcast the formatting directive. Checking the read log showed that the agent had never logged a read. The broadcast wasn't the problem. The problem was that this particular agent's session startup wasn't reading the broadcasts folder.
Without the read log, I would have spent time re-broadcasting or wondering if the message wasn't clear. With it, I could diagnose the root cause in thirty seconds and fix the real issue.
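Because the log lines follow one fixed shape, that thirty-second diagnosis is a one-liner away from being scripted. A small sketch of the audit query, assuming the log format shown earlier (the function name is mine; note it deliberately skips `processed:` entries, so Synapse's sweep doesn't mask an agent that never read for itself):

```python
def readers_of(read_log: str, broadcast: str) -> set[str]:
    """Parse read-log.md lines like
    '[2026-03-12 19:36] nyx read: 2026-03-12-policy.md'
    and return the set of agents that logged a read of `broadcast`."""
    readers = set()
    for line in read_log.splitlines():
        if line.endswith(f"read: {broadcast}"):
            # The agent name sits between '] ' and ' read:'.
            readers.add(line.split("] ", 1)[1].split(" read:", 1)[0])
    return readers
```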
What I'd change
If I were designing this from scratch, I'd add two things.
First, broadcast categories with filtered delivery. Right now, every broadcast goes to the same folder and every agent reads all of them. But the Ads agent doesn't care about Supabase migration details, and Forge doesn't care about email marketing skills. A tagging system that lets agents filter for relevant broadcasts would reduce noise.
Second, acknowledgment enforcement. Right now, agents are expected to log reads, but nothing prevents them from skipping it. I'd like Synapse's nightly sweep to not just process broadcasts but also verify that each target agent has acknowledged receipt. If an agent hasn't acknowledged a critical broadcast within 48 hours, flag it.
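The enforcement check I have in mind is simple enough to sketch. Everything here is hypothetical, a design note rather than working infrastructure: the function, its signature, and the idea of passing an explicit target set per broadcast are all assumptions layered on top of the read-log format shown earlier.

```python
from datetime import datetime, timedelta

def overdue_acks(read_log: str, broadcast: str, sent: datetime,
                 targets: set[str], now: datetime,
                 window: timedelta = timedelta(hours=48)) -> set[str]:
    """Agents in `targets` that have not logged a read of `broadcast`
    within `window` of it being sent. Returns an empty set while the
    grace period is still running."""
    if now - sent < window:
        return set()
    acked = set()
    for line in read_log.splitlines():
        if line.endswith(f"read: {broadcast}"):
            acked.add(line.split("] ", 1)[1].split(" read:", 1)[0])
    return targets - acked
```

Synapse's nightly sweep could call this per critical broadcast and escalate any non-empty result to the #ai-ops channel, or wherever flags should land.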
The boring truth about multi-agent systems
The broadcast system isn't exciting. It's a folder with markdown files and a log. There's no vector database, no RAG pipeline, no semantic search. It's essentially a shared bulletin board with a sign-in sheet.
But it solves a real problem that gets worse as you add agents. Every new agent is another endpoint that needs to stay current with system policies. At 10 agents, you can maybe manage this with ad hoc conversations. At 26, you can't.
The pattern that works is simple: write it down, put it somewhere everyone can find it, make everyone log that they've read it, and have a sweep process to catch anything that falls through. It's not very different from how actual organizations distribute memos. The technology is irrelevant. The discipline is what matters.