The reports nobody read
I built two research agents in late February. Scout handles competitive intelligence and rapid web scanning. Atlas does deep market research and feasibility analysis. I gave them solid prompts, connected them to web search, set up their output directories, and put Scout on a cron schedule running four times a day.
They worked. Really well, actually.
Scout would scan AI news, maritime tech developments, Norwegian startup funding rounds, and SaaS trends. It wrote clean, source-linked summaries to an intel/raw/ folder. Atlas would take Scout's raw findings and produce analysis documents with market context, competitor comparisons, and strategic implications.
Within a week, I had a growing library of intelligence reports. Detailed, well-sourced, neatly formatted. Some of them were genuinely insightful.
I didn't read most of them.
The downstream consumer problem
Here's what I didn't think about when I built Scout and Atlas: who actually needs this information, and what process exists to act on it?
I had built the research supply chain without building the demand side. Reports were flowing into a folder. But no agent was configured to consume those reports and turn them into actions. No workflow existed to take a competitive intelligence finding and route it to the relevant product agent. No dashboard summarized what Scout found yesterday. No priority system flagged which findings actually mattered.
The research was landing in a void.
I'd occasionally open the intel folder, scan a few reports, and think "huh, interesting." Then I'd go back to whatever I was actually working on. The reports accumulated. Some of them were probably important. I'll never know which ones I missed.
Why I turned them off
After about two weeks, I disabled both agents. Scout's cron job got commented out. Atlas wasn't running on a schedule anyway, but I removed it from the list of agents Nyx would spontaneously delegate research tasks to.
The immediate trigger was a quiet realization: these agents were consuming Gemini Flash tokens four times a day, producing output that nobody processed, and creating an illusion of productivity. The research folder was growing. It felt like progress. It wasn't.
I calculated roughly what Scout was costing in API calls. Not a lot. Maybe a few dollars a week on Gemini Flash. That's not the point. The real cost was the false sense that I had competitive intelligence covered. I didn't. I had a pile of unread documents.
The capability trap
This is a pattern I've seen in my own work and in the companies I consult for. It's tempting to build capabilities first and figure out the consumption later. The logic goes: "If we're generating research, we'll naturally start using it. The supply will create its own demand."
It doesn't. Not automatically.
In a human organization, a research team is effective because there's a meeting where they present findings. There's a manager who reads the reports and decides what to act on. There's a planning process that incorporates research into strategy. The research team isn't effective because of the research itself. They're effective because of the organizational machinery that consumes the research.
My agents didn't have that machinery. Scout produced reports. Atlas analyzed them. Then nothing.
What needs to exist before I turn them back on
I have a list. Nothing on it is about improving the research quality.
A routing system. When Scout finds something relevant to PortLink (maritime AI, port digitalization), that finding should automatically land in the PortLink agent's workspace. When it finds a competitor to EventRipple, the EventRipple agent should know about it. Right now, everything dumps to one folder regardless of relevance.
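A minimal sketch of what that routing could look like, assuming each finding carries simple topic tags. The tag names and workspace paths below are illustrative, not my actual config:

```python
# Hypothetical mapping from topic tags to agent workspaces.
ROUTES = {
    "maritime": "agents/portlink/intel",
    "events": "agents/eventripple/intel",
}
DEFAULT_WORKSPACE = "intel/raw"  # the current catch-all folder

def route(tags):
    """Return the workspace path a finding should land in, based on its tags.

    First matching tag wins; anything unrecognized falls back to the
    shared folder, which is exactly the behavior I have today.
    """
    for tag in tags:
        if tag in ROUTES:
            return ROUTES[tag]
    return DEFAULT_WORKSPACE
```

The point isn't the lookup table; it's that a finding's destination becomes a property of the finding, not of the producer.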
A priority filter. Not all findings are equal. A new competitor launching in the Norwegian market is more important than a general trend article about AI. I need a layer that scores findings by urgency and relevance, and only surfaces the ones above a threshold.
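As a sketch, the filter could be a weighted tag score with a cutoff. The weights and threshold here are placeholder values I'd tune over time:

```python
# Illustrative weights: a new competitor in my home market matters far
# more than a generic trend piece. These numbers are assumptions.
WEIGHTS = {
    "new_competitor": 5,
    "norwegian_market": 3,
    "funding_round": 2,
    "general_trend": 1,
}
THRESHOLD = 4

def score(finding_tags):
    """Sum the weights of a finding's tags; unknown tags count zero."""
    return sum(WEIGHTS.get(t, 0) for t in finding_tags)

def surfaces(finding_tags):
    """Only findings at or above the threshold reach me."""
    return score(finding_tags) >= THRESHOLD
```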
A weekly digest. I don't want to read raw intelligence reports. I want a summary. Five bullet points of the most important things that happened this week. Which competitors did something. Which funding rounds closed. Which technology shifts affect my products. One document, once a week, written for me specifically.
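The digest step itself is mostly sorting and formatting. A sketch, assuming each finding already carries a relevance score from the priority filter (the field names are hypothetical):

```python
def weekly_digest(findings, top_n=5):
    """Render the top-N findings of the week as a short bullet list.

    `findings` is a list of dicts with 'score' and 'summary' keys --
    my assumed shape for what the priority filter would emit.
    """
    ranked = sorted(findings, key=lambda f: f["score"], reverse=True)
    lines = [f"- {f['summary']}" for f in ranked[:top_n]]
    return "Weekly intel digest\n" + "\n".join(lines)
```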
Action triggers. Some findings should trigger actual work, not just reports. If Scout discovers a maritime AI company just raised funding, that's relevant to PortLink's competitive positioning. The finding should create a task, not just a file.
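One way to express a trigger rule, with hypothetical finding fields: a qualifying finding becomes a task record rather than just another report.

```python
def to_task(finding):
    """Turn an actionable finding into a task dict; return None otherwise.

    The 'competitor_funding' type and the field names are illustrative --
    defining what counts as actionable is the judgment call mentioned above.
    """
    if finding.get("type") == "competitor_funding":
        return {
            "title": f"Review competitive positioning vs {finding['company']}",
            "source": finding.get("url", ""),
            "status": "open",
        }
    return None  # non-actionable findings stay as plain reports
```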
A feedback loop. When I read a finding and mark it as useful or not useful, that signal should influence future scanning. Scout's cron job cycles through categories (AI news, maritime tech, Norwegian startups, SaaS trends). If maritime tech findings are consistently more useful than generic AI news, the scanning should shift toward maritime tech.
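A sketch of how that weighting might work, assuming I log each finding I read as useful or not. The baseline-of-one scheme is my own assumption, there to keep every category in the rotation:

```python
from collections import Counter

# Categories mirror Scout's scanning cycle described above.
CATEGORIES = ["ai_news", "maritime_tech", "norwegian_startups", "saas_trends"]

def scan_weights(feedback):
    """Compute per-category scan weights from usefulness marks.

    `feedback` is a list of (category, useful) pairs. Categories with
    more 'useful' marks get scanned more often; every category keeps a
    baseline weight of 1 so nothing goes fully dark.
    """
    useful = Counter(cat for cat, ok in feedback if ok)
    return {cat: 1 + useful[cat] for cat in CATEGORIES}
```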
None of this requires new AI capabilities. It requires systems thinking. Inputs need outputs. Outputs need consumers. Consumers need ways to respond.
The uncomfortable parallel to real companies
I've seen this exact pattern play out in organizations that aren't running AI agents. A company hires a data scientist. The data scientist builds dashboards and generates reports. Nobody changes their behavior based on the reports. Six months later, leadership wonders why the data team isn't delivering value.
The data team is delivering exactly what they were asked to deliver: data. The problem is that nobody built the decision-making processes that should consume that data. The meetings aren't structured around data review. The planning cycle doesn't incorporate analytics. The data exists in a vacuum.
When I tell this story about Scout and Atlas, some people say "well, that's obvious." Maybe it is. But I still built two research agents before I built the systems to use their output. Knowing a pattern and avoiding it are different skills.
What I'm doing differently now
I've adopted a rule for new agent capabilities: don't build the producer until the consumer exists.
Before I reactivate Scout, I need the routing system and the weekly digest pipeline to be working. I can test both of these with manually created research files before any agent is involved. If the routing correctly delivers a maritime tech finding to the PortLink workspace, and the digest pipeline produces a readable weekly summary from a folder of test documents, then I can plug Scout back in and the output will actually go somewhere.
This is less exciting than building new agents. Building agents is fun. Building the plumbing that makes agents useful is boring. But the plumbing is the part that matters.
I'm also being more careful about the distinction between "can I build this?" and "should I build this now?" I can build a real-time news monitoring system with natural language alerts. Should I? Only if I have time to read the alerts and act on them. If I don't, I'm just building a more sophisticated way to generate guilt about unread notifications.
The agents are still there
Scout and Atlas aren't deleted. Their workspaces, prompts, and configurations are intact. Scout can still transcribe YouTube videos and summarize articles on demand. When I need specific research, I can ask Nyx to delegate it to Scout as a one-off task. That works well because there's an immediate consumer: me, right now, asking a specific question.
What I've disabled is the autonomous, scheduled research. The four-times-a-day cron job that produces reports optimistically, hoping someone will want them.
The irony is that Scout is probably my most technically capable agent for its role. The prompts are good. The source diversity is excellent. The output quality is high. None of that mattered because the output had nowhere to go.
Capability without consumption is just inventory. And inventory that nobody uses is waste.
When I'll turn them back on
My target is to have the consumption pipeline built within the next month. The routing system is straightforward. The weekly digest requires a summarization step that I'll probably run on a local model since it's a formatting job, not a reasoning job. The action triggers are the hardest part because they require defining what counts as actionable, and that's a judgment call that will evolve over time.
When those pieces are in place, Scout goes back on the cron schedule. This time, the reports will flow into a system instead of a folder. And I'll know within a week whether the system actually works, because the digest will either tell me things I didn't know, or it won't.
That's the test. Not "did the agent produce output?" but "did the output change what I did?"