Development · March 28, 2026 · 14 min read

Mastering Serverless Architecture: When to Use It and When to Avoid It

Serverless computing promises infinite scale and zero ops. The reality is more nuanced. Here's a practical guide to serverless architecture based on real-world experience building production systems that serve millions of requests.


David Bakke

Founder, Bakke & Co


"We're going serverless!" my co-founder announced excitedly. "No more servers to manage! Infinite scale! Pay only for what we use!"

Six months later, we were drowning in Lambda functions, debugging bizarre timeout issues, and paying more than we did with traditional servers.

Serverless isn't a silver bullet. But it's not a trap either. It's a tool—a powerful one—that works brilliantly for specific use cases and terribly for others.

This is what I wish someone had told me before I migrated our entire infrastructure to Lambda.

What Serverless Actually Means

Let's clear up the terminology first, because "serverless" is a terrible name that confuses everyone.

Serverless doesn't mean no servers. It means no servers you have to manage.

More precisely, serverless is:

A cloud execution model where you write code that runs in response to events, and the cloud provider handles all the infrastructure, scaling, and operations automatically.

The core characteristics:

  1. Event-driven - Code runs in response to triggers (HTTP requests, database changes, scheduled events)
  2. Automatic scaling - From zero to thousands of concurrent executions without configuration
  3. Pay-per-execution - You're billed for actual compute time, not server uptime
  4. Managed infrastructure - No servers, VMs, or containers to patch or maintain

The main platforms:

  • AWS Lambda (the pioneer)
  • Google Cloud Functions
  • Azure Functions
  • Cloudflare Workers (edge compute)

When Serverless Shines

After building multiple production systems with serverless, here are the use cases where it absolutely dominates:

1. Event-Driven Workloads

Serverless was built for this. Examples from our systems:

Image Processing Pipeline

User uploads image → S3 trigger → Lambda resizes/optimizes → Saves to CDN
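This pattern can be sketched as a Lambda handler triggered by S3 events. The routing logic below is illustrative, not our production code: the actual resize/optimize step is stubbed out, and the `cdn/` key convention is a hypothetical example.

```typescript
// Minimal shape of the S3 event a Lambda receives from an upload trigger.
interface S3Record {
  s3: { bucket: { name: string }; object: { key: string } };
}

// Map an uploaded object key to its CDN destination key:
// "uploads/cat.jpg" -> "cdn/cat.webp"
export function destinationKey(sourceKey: string): string {
  const fileName = sourceKey.split("/").pop() ?? sourceKey;
  const baseName = fileName.replace(/\.[^.]+$/, "");
  return `cdn/${baseName}.webp`;
}

// Lambda entry point: one invocation may carry several records.
export async function handler(event: { Records: S3Record[] }) {
  for (const record of event.Records) {
    const { name } = record.s3.bucket;
    const { key } = record.s3.object;
    // In a real function: fetch the object, resize/optimize it,
    // then upload it to the CDN bucket under destinationKey(key).
    console.log(`would process s3://${name}/${key} -> ${destinationKey(key)}`);
  }
}
```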

Webhook Handlers

Stripe sends webhook → API Gateway → Lambda processes event → Updates database

Scheduled Jobs

CloudWatch cron → Lambda runs nightly cleanup → Sends success notification
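The scheduled-job pattern looks like this in code: an EventBridge/CloudWatch cron rule invokes the handler nightly. The session-cleanup logic below is a hypothetical example, with the database calls stubbed out so the selection logic stays testable.

```typescript
interface SessionRecord {
  id: string;
  lastSeenMs: number; // epoch milliseconds
}

const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

// Pure helper: pick the records the nightly job should delete.
export function expiredSessions(
  records: SessionRecord[],
  nowMs: number,
  maxAgeMs: number = THIRTY_DAYS_MS,
): SessionRecord[] {
  return records.filter((r) => nowMs - r.lastSeenMs > maxAgeMs);
}

// Lambda entry point for the cron trigger.
export async function handler() {
  const now = Date.now();
  const all: SessionRecord[] = []; // in a real job: load from the database
  const stale = expiredSessions(all, now);
  // in a real job: delete `stale` rows, then send the success notification
  console.log(`cleanup: would delete ${stale.length} stale sessions`);
}
```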

These workloads share common traits:

  • Sporadic - Not running constantly
  • Short-lived - Complete in seconds or minutes
  • Isolated - Each execution is independent
  • Scalable - Need to handle variable load

For these patterns, serverless is perfect. No idle servers. Automatic scaling. Simple deployment.

2. Microservices with Variable Load

One of our services handles user notifications. The traffic pattern looks like this:

  • 6 AM - 9 AM: 10K requests/hour (morning check-ins)
  • 9 AM - 5 PM: 2K requests/hour (steady background)
  • 5 PM - 7 PM: 15K requests/hour (evening peak)
  • 7 PM - 6 AM: 500 requests/hour (quiet hours)

With traditional servers, we'd need to provision for peak load (15K requests/hour) and waste resources during quiet hours. With Lambda, we pay only for actual usage and get automatic scaling for peaks.

Cost comparison:

  • EC2 (t3.medium): $30/month running 24/7
  • Lambda: $8/month for actual usage

And Lambda handles traffic spikes without configuration.

3. Rapid Prototyping and MVPs

When testing new features or building MVPs, serverless removes operational overhead. You can focus purely on business logic.

Our typical MVP architecture:

Frontend (Vercel/Netlify)
  ↓
API Gateway
  ↓
Lambda Functions (Node.js/Python)
  ↓
DynamoDB / RDS / MongoDB Atlas

  • Time to production: 2-3 days
  • Operational burden: near zero
  • Cost: $5-20/month for typical MVP load

Compare this to setting up VMs, load balancers, auto-scaling groups, monitoring, and deployments. The velocity difference is massive.
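The Lambda layer of that stack can be as small as a single handler. Here's a minimal sketch using the API Gateway proxy event shape; the `/health` and `/signups` routes are hypothetical examples, and the database write is stubbed out.

```typescript
// Trimmed-down API Gateway proxy event/result shapes.
interface ApiGatewayEvent {
  httpMethod: string;
  path: string;
  body: string | null;
}

interface ApiGatewayResult {
  statusCode: number;
  headers: Record<string, string>;
  body: string;
}

export async function handler(event: ApiGatewayEvent): Promise<ApiGatewayResult> {
  const json = (statusCode: number, payload: unknown): ApiGatewayResult => ({
    statusCode,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });

  if (event.httpMethod === "GET" && event.path === "/health") {
    return json(200, { ok: true });
  }
  if (event.httpMethod === "POST" && event.path === "/signups") {
    const { email } = JSON.parse(event.body ?? "{}");
    if (!email) return json(400, { error: "email is required" });
    // in a real MVP: persist to DynamoDB / RDS here
    return json(201, { email, status: "created" });
  }
  return json(404, { error: "not found" });
}
```

Deploying this behind API Gateway is a few lines of infrastructure config, which is most of why the two-to-three-day timeline is realistic.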

4. Background Jobs and Async Processing

Queue-based architectures map perfectly to serverless:

API endpoint receives job → Publishes to SQS → Lambda processes in background

Real example from our system:

import { randomUUID } from 'node:crypto';

// API endpoint - synchronous
export async function POST(request: Request) {
  const { userId, reportType } = await request.json();
  const jobId = randomUUID();

  // Queue the job, tagging it with an ID the client can poll on
  await sqs.sendMessage({
    QueueUrl: REPORT_QUEUE_URL,
    MessageBody: JSON.stringify({ jobId, userId, reportType }),
  });

  return Response.json({ status: 'queued', jobId });
}

// Lambda worker - asynchronous
export async function handler(event: SQSEvent) {
  for (const record of event.Records) {
    const { userId, reportType } = JSON.parse(record.body);

    // Generate report (expensive operation)
    const report = await generateReport(userId, reportType);

    // Save to S3
    await s3.putObject({
      Bucket: REPORTS_BUCKET,
      Key: `${userId}/${reportType}.pdf`,
      Body: report,
    });

    // Notify user
    await sendEmail(userId, 'Your report is ready!');
  }
}
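One refinement worth knowing: as written, a single bad record makes Lambda retry the whole SQS batch. With ReportBatchItemFailures enabled on the event source mapping, the handler can report only the failed messages. A sketch of that shape is below; `processJob` stands in for the report-generation work, and passing it as a parameter is purely for testability (a real handler takes `(event, context)`).

```typescript
interface SqsRecord {
  messageId: string;
  body: string;
}

export async function handler(
  event: { Records: SqsRecord[] },
  processJob: (body: string) => Promise<void>,
) {
  const batchItemFailures: { itemIdentifier: string }[] = [];
  for (const record of event.Records) {
    try {
      await processJob(record.body);
    } catch {
      // Only this message is retried; the rest of the batch is deleted.
      batchItemFailures.push({ itemIdentifier: record.messageId });
    }
  }
  return { batchItemFailures };
}
```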

Benefits:

  • API responds instantly (no waiting for report generation)
  • Workers scale automatically with queue depth
  • Failures are retried automatically
  • Cost is proportional to actual jobs processed

When to Avoid Serverless

Now for the part that would have saved me months of pain: when NOT to use serverless.

1. Long-Running Processes

AWS Lambda has a hard timeout: 15 minutes per invocation, maximum.

If your process takes longer—video encoding, data migrations, ML training, large batch jobs—Lambda is the wrong tool.

Better alternatives:

  • ECS/Fargate - Containerized workloads with no time limits
  • Batch processing - AWS Batch, Google Cloud Batch
  • Long-running VMs - EC2 for truly long processes

2. Consistent High Traffic

If your workload is constant and high-volume, serverless gets expensive fast.

Cost example:

A service handling 1000 requests/second 24/7:

Lambda cost calculation (at on-demand pricing, ~$0.20 per million requests plus ~$0.0000166667 per GB-second):

  • Requests: ~2.6 billion/month → ~$520
  • Compute time: 100ms average @ 1GB memory → ~260 million GB-seconds → ~$4,300
  • Total: ~$4,800/month

EC2 cost (c5.2xlarge):

  • 8 vCPU, 16GB RAM
  • Cost: ~$250/month

At consistent high load, dedicated infrastructure wins on cost.

3. Stateful Applications

Lambda functions are ephemeral and stateless. Each invocation is isolated. You can't maintain connections or state between requests.

This breaks patterns like:

  • WebSocket servers (can work but requires API Gateway WebSocket support)
  • Database connection pooling (connections are expensive to establish)
  • In-memory caching (cache is lost between invocations)
  • Long-lived TCP connections
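One important caveat to the statelessness point: Lambda reuses execution environments between invocations ("warm starts"), so anything initialized outside the handler survives across calls. That softens the connection-cost problem, but it's a best-effort cache, never guaranteed state. `createDbClient` below is a hypothetical stand-in for an expensive client.

```typescript
let initCount = 0;

function createDbClient() {
  initCount++; // runs once per cold start, not once per request
  return { query: async (sql: string) => `result of ${sql}` };
}

// Module scope: evaluated when the execution environment starts,
// then reused by every warm invocation that lands on it.
const db = createDbClient();

export async function handler() {
  return { result: await db.query("SELECT 1"), coldStarts: initCount };
}
```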

4. Complex Dependencies or Large Binaries

Lambda has package size limits:

  • 50 MB (zipped) for direct upload
  • 250 MB (unzipped) including layers

If your application has heavy dependencies—ML models, large binaries, native libraries—you'll fight these limits constantly. (Container image deployments raise the ceiling to 10 GB, at the cost of slower cold starts.)

The Hybrid Approach That Works

After years of trial and error, here's the architecture pattern we settled on:

Core Services - Traditional Infrastructure

  • User-facing APIs - ECS (low latency, predictable cost)
  • WebSocket servers - ECS (stateful connections)
  • Background workers - ECS (long-running jobs)

Edge Layer - Serverless

  • API Gateway - Routing, auth, rate limiting
  • Cloudflare Workers - Edge compute, caching
  • Lambda@Edge - CDN logic

Event Processing - Serverless

  • Webhook handlers - Lambda
  • Image processing - Lambda
  • Scheduled jobs - Lambda
  • Queue workers - Lambda (for short jobs)

Data Layer

  • RDS (PostgreSQL) - Transactional data
  • DynamoDB - High-throughput key-value
  • S3 - Object storage
  • ElastiCache - Caching

This hybrid approach gives us:

  • Low latency where it matters (ECS for APIs)
  • Operational simplicity where possible (Lambda for events)
  • Cost optimization (right tool for each job)

The Bottom Line

Serverless is a tool, not a religion. Use it where it fits:

Use serverless for:

  • Event-driven workloads
  • Variable/unpredictable traffic
  • Background jobs (< 15 min)
  • Rapid prototyping
  • Low operational overhead

Avoid serverless for:

  • Long-running processes (> 15 min)
  • Consistent high traffic
  • Stateful applications
  • Latency-critical paths (< 100ms budgets—cold starts will blow them)
  • Complex dependencies

The best architecture is usually hybrid: serverless for events and async work, traditional infrastructure for core services.

Don't migrate to serverless because it's trendy. Migrate because it solves a real problem better than the alternatives.

And whatever you do, start small. One service. One use case. Learn the gotchas on something non-critical before betting your entire stack on it.

Your future self will thank you.

Serverless · AWS · Architecture · Cloud · Scalability