// shipped ≠ production-ready

Your app works.
Your infra doesn't.

Shipping code is the easy part. Keeping it alive at 3am when the database locks up, traffic triples, and your only alert is an angry customer tweet — that's the hard part. I've spent 25 years making sure that doesn't happen.

See plans
Let's talk
// the gap

Devs build features. Nobody builds the safety net.

Flying blind

Your app crashed at 2am. You found out at 9am — from an angry Slack message. Six hours of downtime. Zero alerts.

One server, one prayer

You got on the front page of HN. The single $20 droplet melted. By the time you SSH'd in, the moment was gone.

ChatGPT infra

Your Dockerfile runs as root, your secrets are in .env committed to git, and your "CI/CD" is you running npm run build over SSH. It works — until it doesn't.

// the work

I handle the infra. You ship the product.

Ongoing Infrastructure

Your cloud, properly set up and watched around the clock. I've been doing this since before "DevOps" was a word — when we just called it "keeping things running."

  • 24/7 monitoring & alerting
  • CI/CD pipeline setup
  • Auto-scaling configuration
  • Security hardening
  • Database optimization
  • Disaster recovery planning
$ elasticmind status --app your-saas
checking infrastructure...

Uptime: 99.97% (last 30 days)
Response time: 142ms avg
Auto-scaling: active (2-8 instances)
SSL: valid (243 days remaining)
Backups: daily, tested weekly
Monitoring: all systems green

your infrastructure is in good hands._

Firefighting

Something's on fire and you don't know why. I've been the person getting paged at 3am for most of my career. Send me the problem, I'll find the root cause and fix it — no retainer, no onboarding, just hands on keyboard.

  • Incident response & root cause analysis
  • Performance troubleshooting
  • Data recovery
  • Post-incident hardening
⚠ ALERT: your-saas.com is DOWN
03:14 AM — auto-notification sent

$ elasticmind respond --incident INC-847
03:16 AM — engineer connected
03:22 AM — root cause: OOM on primary db
03:31 AM — fix deployed, scaling adjusted
03:33 AM — service restored

total downtime: 19 minutes._
// early access

Sentinel

EARLY ACCESS

I can't watch every server myself. So I'm building something that can. Sentinel learns your infrastructure's normal behavior, catches anomalies, and fixes what it can before you even notice.

sentinel watching your-saas.com...

[02:14 AM] memory usage spike detected (87%)
[02:14 AM] analyzing root cause...
[02:15 AM] identified: connection pool leak in /api/payments
[02:15 AM] action: recycling stale connections
[02:15 AM] memory normalized (41%)
[02:16 AM] incident report generated

[02:16 AM] notifying team: "fixed a memory leak while you slept"

0 downtime. 0 human intervention._

Knows what "normal" looks like

Memory at 70% on Tuesdays is fine. Memory at 70% on Saturday morning isn't. Sentinel learns your patterns — not just thresholds — so it knows the difference.
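
The idea, in rough Python (an illustrative sketch with hypothetical names, not Sentinel's actual code): instead of one fixed threshold, compare the current reading to what has been normal for this hour and weekday.

# Illustrative sketch only: pattern-based anomaly check vs. a fixed threshold.
# `history` is assumed to be a list of (datetime, value) samples from past weeks.
from statistics import mean, stdev

def is_anomalous(current, history, now, min_samples=8, sigmas=3.0):
    # Keep only samples taken on the same weekday and hour as `now`.
    bucket = [v for ts, v in history
              if ts.weekday() == now.weekday() and ts.hour == now.hour]
    if len(bucket) < min_samples:
        return current > 0.90          # not enough history yet: plain threshold fallback
    mu, sigma = mean(bucket), stdev(bucket)
    return abs(current - mu) > sigmas * max(sigma, 0.01)

With enough history, a 70% reading on a quiet Saturday morning stands out, while the same number during a Tuesday batch run does not.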

Fixes before you wake up

Connection pool leaking? Sentinel drains stale connections. Disk filling up? It rotates logs and clears temp files. You get a Slack message in the morning, not a 3am phone call.
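
As a flavor of what "fixes what it can" means, here is a minimal remediation sketch in Python (a hypothetical playbook, not Sentinel's code): when disk usage crosses a line, force a log rotation, clear old temp files, and post a note instead of paging anyone. It assumes standard `logrotate` and `find` binaries on the host.

# Minimal disk-pressure remediation sketch (hypothetical playbook, not Sentinel's code).
import shutil
import subprocess
import time

def disk_usage_fraction(path="/"):
    total, used, _free = shutil.disk_usage(path)
    return used / total

def remediate_disk(notify, threshold=0.85):
    if disk_usage_fraction() <= threshold:
        return
    # Force an immediate log rotation, then delete temp files older than 2 days.
    subprocess.run(["logrotate", "-f", "/etc/logrotate.conf"], check=False)
    subprocess.run(["find", "/tmp", "-type", "f", "-mtime", "+2", "-delete"], check=False)
    notify("disk was filling up: rotated logs and cleared old /tmp files")

if __name__ == "__main__":
    while True:
        remediate_disk(print)   # in practice, notify would post to Slack
        time.sleep(60)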

Knows when to call me

Some problems need a human. When that happens, Sentinel pages me with a full timeline — what broke, what it already tried, and where it got stuck. I start with context, not questions.

Sentinel is currently in early access for Full DevOps plan subscribers.

Get Full DevOps + Sentinel
// pricing

Straightforward pricing

Monthly. Cancel anytime. No "let me check with my manager" calls.

shield
Monitoring & Alerts

I set up the alarms. When something breaks, you know in seconds — not hours.

$250 /mo
billed monthly
  • Uptime monitoring (1-min checks)
  • Alert setup (Slack, email, SMS)
  • Monthly health check report
  • Basic performance dashboard
  • Email support (24h response)
Get started
MOST POPULAR
production
Full DevOps

The whole stack, handled. CI/CD, scaling, security, backups — I treat it like my own.

$650 /mo
billed monthly
  • Everything in Shield
  • CI/CD pipeline setup & maintenance
  • Auto-scaling configuration
  • Security hardening & SSL
  • Database backup & optimization
  • Async support (4h response)
  • Sentinel AI agent (early access)
Get started
firefighter
Emergency Response

It's 3am, everything is down, and you don't know why. Call me. I've seen this before.

$200
per hour · billed per incident
  • Response within 30 minutes
  • Root cause analysis
  • Fix deployed & verified
  • Post-incident report
  • Hardening recommendations
Get help now
// the big stuff

Sometimes you need more than a plan

Migrating off a monolith? Redesigning for scale? Doing due diligence on an acquisition's tech stack? That's not a monthly plan — that's a project. Here's how I work:

1

Assessment

I dig into your codebase, infra, and deploy process. You get a brutally honest written report — what's working, what's a ticking time bomb, and what I'd do about it.

2

Proposal

If there's a project worth doing, I'll scope it with a fixed price and timeline. No hourly-billing surprises. You know what you're paying before we start.

3

Execution

I do the work. Not a junior dev reading my notes — me, hands on your codebase. Migrations, refactors, new infra. You get commits, not slide decks.

Assessment starts at $500
Applied as credit if you proceed with the project

$ book --assessment
// the human

Gregory Serrão

Head of IT Architecture

I started in infrastructure before AWS existed. Spent most of my career in banking — the kind of systems where "it went down for 5 minutes" makes the evening news. Built three digital banks from bare metal to production. Now I lead architecture at a US bank and help founders avoid the mistakes I've already made.

Most of what I do can't be prompted.

25+ years
3 banks built
99.9% uptime target
// ping

Tell me what's breaking

I reply to every message myself. No chatbot, no SDR, no "someone from our team will reach out." Just me. Usually within a few hours.