The AI Product Landscape in 2026
Three years into the generative AI revolution, the dust is settling. The hype cycle has peaked, and we're now in the productive phase — where real products ship and real value gets created.
I've been building AI-powered products across three very different domains: maritime communication (PortLink), event management (EventRipple), and consulting for enterprise clients through Bakke & Co. Here's what I've learned about what actually works.
Lesson 1: Start with the Workflow, Not the Model
The biggest mistake I see teams make is starting with "we should use AI for this." The right question is: "What's the most painful part of this workflow?"
At PortLink, the pain point was obvious: port call coordination involves dozens of stakeholders communicating through fragmented channels — email, WhatsApp, Excel spreadsheets. The AI doesn't replace the communication; it orchestrates it.
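To make "orchestrates" concrete, here is a minimal sketch of the idea, not PortLink's actual code: messages from any channel are normalized into one shape, triaged, and fanned out to the stakeholders who need them. Every name below (`Message`, `classify`, the channel and role labels) is illustrative.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Message:
    """One normalized message, whatever channel it arrived on."""
    channel: str          # "email", "whatsapp", "spreadsheet"
    sender: str
    port_call_id: str
    body: str
    received_at: datetime

def classify(msg: Message) -> str:
    """Keyword triage for the sketch; an LLM call would sit here in practice."""
    body = msg.body.lower()
    if "eta" in body or "arrival" in body:
        return "schedule_update"
    if "invoice" in body or "fee" in body:
        return "billing"
    return "general"

def route(msg: Message) -> list[str]:
    """Fan a classified message out to the stakeholders who need it."""
    routing = {
        "schedule_update": ["agent", "terminal", "pilot"],
        "billing": ["agent", "finance"],
        "general": ["agent"],
    }
    return routing[classify(msg)]
```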
"The best AI products are invisible. Users shouldn't think about the AI — they should think about how easy their job just became."
Lesson 2: Hybrid Intelligence Beats Pure Automation
Full automation sounds great in demos. In production, it's terrifying. We learned this the hard way at EventRipple: our offer pricing engine now generates suggestions, but it always keeps a human in the loop.
The pattern that works:
- AI suggests → Human approves
- AI drafts → Human edits
- AI flags → Human decides
This isn't a compromise. It's the optimal architecture for trust-building products.
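Here is a minimal sketch of the approval gate behind all three patterns, assuming a generic suggestion payload rather than EventRipple's actual pricing types. The key property: AI output never takes effect until a human explicitly approves it.

```python
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    payload: dict
    status: str = "pending"   # pending -> approved | rejected

@dataclass
class ReviewQueue:
    """AI suggests -> human approves: nothing takes effect without sign-off."""
    items: list[Suggestion] = field(default_factory=list)

    def submit(self, payload: dict) -> Suggestion:
        """The AI side: enqueue a suggestion, never apply it directly."""
        suggestion = Suggestion(payload)
        self.items.append(suggestion)
        return suggestion

    def approve(self, suggestion: Suggestion) -> dict:
        """The human side: only an explicit approval releases the payload."""
        suggestion.status = "approved"
        return suggestion.payload

    def reject(self, suggestion: Suggestion) -> None:
        suggestion.status = "rejected"
```

The same queue shape covers drafting and flagging; only the payload and the approval action change.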
Lesson 3: The Model is 10% of the Product
Everyone obsesses over which model to use. GPT-4? Claude? Gemini? The truth is, for most products, the model matters far less than:
- Data pipeline quality — garbage in, garbage out
- Prompt engineering — the art of asking the right questions
- Fallback handling — what happens when the AI is wrong?
- UX design — how do users interact with AI outputs?
At Bakke & Co, we've built products on Claude, GPT-4, Gemini, and local models. The model choice matters less than the architecture around it.
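As an illustration of the fallback point above, here is a hedged sketch of a provider chain. The `providers` entries wrap whatever SDKs you actually use; the pattern is what matters: try the preferred model, degrade gracefully, and surface "no answer" explicitly instead of shipping garbage.

```python
import logging
from typing import Callable

# Hypothetical fallback chain; each entry is (provider_name, call_function),
# where call_function wraps a real SDK call.
def call_with_fallback(
    prompt: str,
    providers: list[tuple[str, Callable[[str], str]]],
) -> str | None:
    for name, call in providers:
        try:
            reply = call(prompt)
            if reply and reply.strip():   # guard against empty output
                return reply
        except Exception:
            logging.warning("provider %s failed, trying next", name)
    return None  # callers must handle "the AI had no answer" explicitly
```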
Lesson 4: Cost Management is a Feature
Running AI at scale is expensive. We've developed a tiered model strategy:
| Tier | Model | Use Case | Cost |
|------|-------|----------|------|
| Scout | Llama 3.1 8B (local) | Monitoring, classification | Free |
| Worker | Qwen 32B / Llama 70B (local) | Analysis, code review | Free |
| Premium | Claude Opus / GPT-4o | Complex reasoning, user-facing | $$$ |
Local models handle ~70% of our workload at zero marginal cost. Cloud models handle the 30% that requires maximum quality. This isn't just cost optimization — it's architectural resilience.
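A sketch of how routing across those tiers might look. The model names and thresholds below are made up for illustration, not our production values; the real point is that tier selection is an explicit, testable function rather than an ad-hoc choice per feature.

```python
# Illustrative router over the tiers in the table above.
TIER_MODELS = {
    "scout": "llama-3.1-8b-local",
    "worker": "qwen-32b-local",
    "premium": "claude-opus",
}

def pick_model(task: str, user_facing: bool, complexity: float) -> str:
    """Route routine work to free local tiers; pay only where quality shows."""
    if user_facing or complexity > 0.8:
        return TIER_MODELS["premium"]
    if task in ("analysis", "code_review") or complexity > 0.4:
        return TIER_MODELS["worker"]
    return TIER_MODELS["scout"]   # monitoring, classification
```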
Lesson 5: Ship Fast, Iterate Faster
The AI landscape changes monthly. The product you plan in January might be obsolete by March — not because AI got worse, but because new capabilities emerged.
Our approach:
- 2-week sprints focused on user-facing features
- Monthly model evaluations (new releases, fine-tuning, cost changes)
- Continuous prompt refinement based on production data
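For the monthly evaluations, something as simple as replaying a golden set of production cases goes a long way. This is a sketch under assumptions: `run_model` stands in for whatever per-model wrapper you use, and the substring pass criterion is deliberately crude.

```python
from typing import Callable

# Sketch of a monthly eval: replay a golden set of production cases.
def evaluate(run_model: Callable[[str], str],
             cases: list[tuple[str, str]]) -> float:
    """Return the fraction of (prompt, expected) pairs the model passes."""
    hits = sum(
        1 for prompt, expected in cases
        if expected.lower() in run_model(prompt).lower()
    )
    return hits / len(cases)
```

Run it against the incumbent and each candidate; swap models only when a candidate wins on both quality and cost.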
What's Next?
2026 is the year of AI agents — autonomous systems that can plan, execute, and learn. We're already building agentic workflows at PortLink where the system handles routine port call communications end-to-end.
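To show what "end-to-end" means without overclaiming, here is a hedged sketch of the loop shape, not PortLink's implementation. `plan_steps`, `execute_step`, and `log_outcome` are placeholder callables.

```python
# Plan -> execute -> learn, with an explicit escalation path to a human.
def handle_port_call(port_call_id: str, plan_steps, execute_step, log_outcome) -> str:
    for step in plan_steps(port_call_id):     # plan
        result = execute_step(step)           # execute
        log_outcome(step, result)             # learn: feed outcomes back
        if result.get("needs_human"):
            return "escalated"                # humans stay in the loop
    return "completed"
```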
But the fundamental lesson hasn't changed: start with the human problem, build the AI around it, and keep humans in the loop.
David Bakke is the founder of Bakke & Co and co-founder of PortLink AS. He's been building AI-powered products for enterprise clients since 2024.