Somewhere in the Mediterranean right now, a cruise ship is approaching port. The cruise line's port operations team has been emailing the local port agent for three weeks to coordinate the call. The port agent has been calling the terminal to confirm a berth. The terminal has been faxing the pilot station about the vessel's draft and beam. The pilot station has been emailing the tug company about availability. The coast guard wants manifests by email. The tourism operator wants to know the exact arrival time so they can schedule 2,000 people onto shore excursion buses.
These conversations happen in parallel, across email, phone calls, WhatsApp, fax machines (yes, still), and spreadsheets. No single party has the complete picture. The cruise line doesn't know what the terminal told the pilot station. The port agent doesn't have visibility into the tug company's schedule. Everyone is operating on partial information, and when something changes, that change propagates through a phone tree of manual updates.
This is what PortLink is trying to fix.
The problem is communication, not logistics
Maritime logistics has plenty of software. Port management systems, vessel tracking platforms, berth allocation tools. What nobody has built well is the communication layer.
The actual decisions in a port call (when to send the pilot, which berth to assign, when to start fueling, how to schedule shore excursions) are made through conversations between people. Those conversations happen in inboxes, on phone calls, in chat messages. The information exists, but it's scattered across dozens of systems belonging to dozens of different organizations that don't share databases.
A single cruise port call can involve 15 to 30 different parties. Cruise line operations. The ship itself. The local port agent (who coordinates on behalf of the cruise line). The port authority. The terminal operator. Pilots. Tug companies. Fuel suppliers. Waste removal. Shore excursion operators. Immigration authorities. Customs. Provisioning companies. Medical services. Ground transportation.
Each of these parties communicates bilaterally with some subset of the others. The port agent is usually the hub, but even the port agent only sees the conversations they're part of. When the terminal changes the berth assignment, the port agent finds out and relays that to the cruise line, the pilot station, the tug company, and the shore excursion operator. By phone. Or email. One by one.
This is the problem PortLink solves. Not by replacing these communications, but by aggregating and normalizing them into a shared view that all authorized parties can see.
What PortLink actually does
PortLink is an AI-powered platform that ingests communication from multiple channels (email, structured data, API integrations) and builds a unified timeline and status view for each port call.
When the terminal emails the port agent about a berth change, PortLink's AI reads that email, extracts the relevant data (new berth number, time, reason for change), and updates the port call record. Every party with access to that port call sees the update immediately, without waiting for the port agent to relay it manually.
The AI component handles the messy part: understanding unstructured communication. Maritime emails are not standardized. They contain nautical terminology, abbreviations, multiple languages, and embedded tables of varying formats. A berth confirmation from one terminal looks nothing like a berth confirmation from another. PortLink's AI normalizes this. It extracts the structured data from the unstructured message and slots it into the right place in the port call timeline.
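To make the normalization step concrete, here is a toy sketch of what "extract structured data from an unstructured message" produces. In PortLink the reading is done by a language model; a regex stands in here purely so the output shape is visible. All type and function names are illustrative assumptions, not PortLink's actual API.

```typescript
// Toy sketch: turn a free-form berth confirmation into the structured
// record a port call timeline needs. A regex stands in for the LLM.
interface BerthUpdate {
  berth: string | null;   // e.g. "Adossat 4"
  etaUtc: string | null;  // ISO 8601 timestamp, if stated
  vessel: string | null;
}

function normalizeBerthEmail(body: string): BerthUpdate {
  // Terminals phrase the same fact many ways; match a few common patterns.
  const berth = body.match(/berth\s+([A-Za-z0-9 -]+?)(?:[.,\n]|$)/i);
  const eta = body.match(/\b(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}(?::\d{2})?Z?)\b/);
  const vessel = body.match(/\bMV\s+([A-Za-z ]+?)(?:[.,\n]|$)/i);
  return {
    berth: berth ? berth[1].trim() : null,
    etaUtc: eta ? eta[1] : null,
    vessel: vessel ? vessel[1].trim() : null,
  };
}
```

Two differently formatted emails that both state "MV Horizon Star, berth Adossat 4, ETA 2026-04-01T06:00Z" end up as the same `BerthUpdate`, which is the whole point: downstream, the timeline only ever sees the normalized shape.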
The platform also handles document management. Pre-arrival documents, crew lists, passenger manifests, fuel orders, waste declarations. These currently get attached to emails and forwarded between parties. PortLink indexes them, links them to the right port call, and makes them searchable.
Why maritime, why now
People ask why I chose maritime as the vertical for an AI product. There are three honest reasons.
First, the problem is real and expensive. A miscommunication in a port call doesn't just cause inconvenience. It causes delays. And maritime delays are measured in tens of thousands of dollars per hour. A cruise ship that arrives at the wrong berth, or has to wait because the pilot wasn't informed of a schedule change, burns fuel, disrupts passenger itineraries, and cascades into delays at the next port. The industry has financial motivation to fix this.
Second, my co-founder Kris Willassen has spent 15 years in enterprise software at a global cruise line. He's lived this problem. He knows exactly which workflows are broken, which parties are involved, and what a solution needs to look like to be adopted. Domain expertise matters enormously in vertical SaaS. You can't build for an industry you don't understand. Kris understands it deeply.
Third, maritime is behind. Way behind. I don't say that to be dismissive. I say it because it creates opportunity. Industries that still use fax machines for operational communication have enormous room for improvement. The barrier to entry isn't technology. It's trust and integration. Maritime companies need to trust that an AI platform will handle their operational data correctly. And the platform needs to integrate with how they already work, not force them to change everything.
The market is meaningful. Maritime digitalization is projected to be a $30 billion opportunity by 2035. But I care less about the total addressable market and more about whether the specific problem we're solving resonates with the specific people who have it. And so far, when Kris describes the product to port operations people, they nod and say "yes, this is exactly what we need."
The technical stack
PortLink runs on React/Next.js with Supabase as the backend. The database schema was designed around the port call as the central entity, with related tables for vessels, ports, parties, communications, documents, and timeline events.
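A minimal sketch of that port-call-centric model, in TypeScript rather than SQL so it stays runnable here. The entity and field names are assumptions for illustration, not the real schema; the point is that every normalized communication becomes an event hanging off one central port call record.

```typescript
// Illustrative shape of the port-call-centric model: the port call is
// the central entity, and communications become ordered timeline events.
interface TimelineEvent {
  at: string;                       // ISO 8601 timestamp
  source: "email" | "api" | "manual";
  summary: string;                  // e.g. "Berth changed to Adossat 4"
}

interface PortCall {
  id: string;
  vesselId: string;
  portId: string;
  eta: string;                      // ISO 8601
  timeline: TimelineEvent[];        // the shared view all parties read
}

// Appending an event keeps the timeline chronologically sorted, so every
// authorized party reads the same ordered history of the call.
function addEvent(call: PortCall, event: TimelineEvent): PortCall {
  const timeline = [...call.timeline, event].sort(
    (a, b) => a.at.localeCompare(b.at) // ISO 8601 strings sort chronologically
  );
  return { ...call, timeline };
}
```

The design choice this illustrates: bilateral messages are many-to-many and unordered, but once everything is attached to the port call entity, "what is the current state of this call" is a single read.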
We recently migrated to a new Supabase project hosted in Zurich, which matters for data residency. Maritime companies operating in European waters want their data in Europe. The old project was inherited from an earlier prototype phase and had accumulated technical debt that was easier to reset than refactor.
The design system is a separate repository with token-based design. Navy (#0A2342) as the primary, accent blue (#00A8E8) for interactive elements, muted gray (#56708c) for secondary text. The design system uses --pl- prefixed tokens throughout. Every component is documented in standalone HTML files showing all states and variants.
The landing page has a persona-personalized UX: different hero content and messaging depending on whether the visitor is from a cruise line, a port agent, or a tour operator. This is because these three user types have very different pain points, and a generic "we fix maritime communication" message doesn't resonate with any of them specifically enough.
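The persona switch itself is a small piece of logic. A sketch of the idea, with placeholder persona keys and copy that are mine, not the live landing page text:

```typescript
// Sketch of persona-personalized hero selection: one page, three messages,
// plus a generic fallback when the visitor's persona is unknown.
type Persona = "cruise-line" | "port-agent" | "tour-operator";

const HERO_COPY: Record<Persona, string> = {
  "cruise-line": "One live view of every port call across your fleet.",
  "port-agent": "Stop relaying updates by phone. Share one timeline.",
  "tour-operator": "Know the real arrival time before the buses roll.",
};

function heroFor(persona: string | null): string {
  // Unknown or missing persona gets the generic message.
  if (persona && persona in HERO_COPY) return HERO_COPY[persona as Persona];
  return "Port call communication, unified.";
}
```

The fallback matters: most first-time visitors arrive without a known persona, and a wrong guess is worse than the generic line.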
How OpenClaw agents contribute
This is where my AI agent setup earns its keep. PortLink isn't being built by a team of human developers. It's being built primarily by AI agents, coordinated by my orchestrator Nyx.
Forge, my lead engineer agent running on Claude Opus 4.6, handles the actual coding. When we need a new feature, the PortLink PM agent (a dedicated agent for project management) defines the requirements in the project context file. Nyx reads the requirements and spawns Forge with the specific task. Forge reads the design system, the codebase, and the requirements, then builds the feature.
Sentinel, the code review agent, reviews Forge's output for security issues, code quality, and standards compliance. In a traditional setup, this would be a pull request review by another developer. In my setup, Sentinel reads the changes, runs the audit, and flags issues. Forge fixes them. The cycle repeats until the code passes.
Nyx coordinates the overall project. It maintains the project context file, tracks the Go/No-Go deadline, and makes sure information flows between agents. When the CFO agent reports on PortLink's burn rate (300,000 NOK of 1.5 million deployed as of mid-March), that information ends up in the project context file where Nyx and the PM agent can see it.
The content and CMO agents handle marketing. The landing page copy, the messaging framework, the persona-specific positioning. When I need a new version of the landing page, CMO defines the strategy and Content writes the copy, both referencing the same project context that Forge uses for implementation.
The April 1 deadline
We have a Go/No-Go decision on April 1, 2026. This is the first formal checkpoint where Kris and I, along with our Danish investors, decide whether PortLink is ready to move to the next phase.
I'll be honest about where things stand.
What's working: the design system is solid and consistently applied across the landing page and early app components. The database schema is sound. The Supabase infrastructure is set up in the right region with proper access controls. The AI classification layer for maritime communications has been validated against test data. The landing page with persona-personalized UX is built and functional.
What's not ready: the Netlify deployment is paused because we exceeded the free tier usage limit and haven't upgraded the plan yet. That needs to happen before Go/No-Go. The actual product (not just the landing page) needs more feature implementation to demonstrate the core value proposition in a live demo. We need real marketing copy on the landing page, not developer placeholder text. And the app needs to handle the full lifecycle of at least one realistic port call scenario end to end.
The Supabase migration added some delay. Moving to a new project meant restoring the database from a backup, which went smoothly but still took coordination time. We got 27 tables restored from an August 2025 backup, including seed data for 11 ports, 6 vessels, and enough sample data to build features against.
What I'd do differently
If I were starting PortLink over, I'd change a few things.
I'd start with real email data sooner. We spent weeks building infrastructure before feeding actual maritime emails through the system. When we finally did, we discovered edge cases that our schema didn't handle well. Emails that reference multiple port calls. Emails with attached spreadsheets containing berth schedules. Emails in mixed languages. The sooner you expose your system to real data, the faster you learn what your assumptions got wrong.
I'd separate the landing page from the app codebase from day one. We initially built them in the same repository, which muddled the deployment story. The landing page has different update cadence, different deployment concerns, and different audiences than the app. They should be independent.
I'd establish the ownership structure and investor agreements before writing code. Ownership is split three ways among my company Bakke &amp; Co, Kris's company Keelstone AS, and our Danish investors Leo and Dann. Getting alignment on roles, responsibilities, and decision-making authority up front saves painful conversations later. We did get this sorted, but it would have been cleaner as the first task rather than something that happened in parallel with development.
Why this vertical works for AI
Maritime communication is a particularly good fit for AI because the problem is fundamentally about understanding unstructured text and extracting structured data. That's what language models do well.
A berth confirmation email from a terminal in Barcelona looks different from one in Piraeus. Different format, different language, different level of detail. But they both contain the same core information: berth number, date, time, vessel name, any special requirements. An AI that can read both emails and extract the same structured data from each is solving the exact problem that makes human operators spend hours copying data between systems.
The AI doesn't need to be perfect. It needs to be faster and more consistent than a human reading 50 emails a day and manually updating a spreadsheet. If the AI correctly extracts 95% of the information and flags the other 5% for human review, that's a massive time savings. The human isn't eliminated. They're freed from the mechanical extraction work so they can focus on the judgment calls.
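That review threshold is mechanically simple. A sketch of the routing, where the 0.95 cutoff and field names are illustrative assumptions, not tuned values from the product:

```typescript
// Sketch of confidence-thresholded routing: extracted fields below the
// cutoff go to a human review queue instead of straight onto the record.
interface ExtractedField {
  name: string;        // e.g. "berth"
  value: string;
  confidence: number;  // 0..1, as reported by the extraction model
}

function routeFields(fields: ExtractedField[], cutoff = 0.95) {
  const autoApply: ExtractedField[] = [];
  const humanReview: ExtractedField[] = [];
  for (const f of fields) {
    (f.confidence >= cutoff ? autoApply : humanReview).push(f);
  }
  return { autoApply, humanReview };
}
```

The human stays in the loop exactly where the model is unsure, which is the "AI handles the data entry, you handle the decisions" framing in code.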
This is the pitch that resonates with maritime professionals: not "AI replaces you" but "AI handles the data entry so you can handle the decisions." Port agents don't want to be replaced. They want to stop spending three hours a day copying information between emails and spreadsheets.
Where we go from here
The Go/No-Go on April 1 will determine the next phase. If we proceed, the immediate next step is a pilot with an actual cruise line or port agent, using real operational data. Kris's network gives us access to potential pilot partners. The goal for Q3 2026 is a working pilot that demonstrates the full port call communication lifecycle.
The technical work between now and then is focused on three things: getting the core app feature-complete for a demo, fixing the Netlify deployment, and integrating the AI communication parsing with the live Supabase backend.
Everything I've built with OpenClaw (the agent architecture, the design system, the cron jobs, the memory system) exists in large part to make PortLink possible. A startup with one technical founder and a 26-agent AI organization, building a vertical SaaS product for one of the world's oldest industries. It's either the most efficient way to build a company or a very elaborate Rube Goldberg machine.
Ask me again on April 2.