One of our very clever competitors, Fathom Analytics, uses AWS Lambda to process incoming requests and push them to their ingest queue. This is a very sensible design choice: it gives them almost infinite scale (as long as they have an infinite wallet), and it means they don’t need to worry about downtime in the hot path of their ingest pipeline. Even if their app goes offline, they’ll keep ingesting analytics.
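To make that pattern concrete, here’s a rough sketch of what such a handler might look like in TypeScript. The queue URL, environment variable, and payload shape are assumptions for illustration only; this is not Fathom’s actual code.

```ts
// Rough sketch of a Lambda ingest handler pushing events to a queue.
// The queue URL and event shape are illustrative, not Fathom's code.
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

const sqs = new SQSClient({});

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // Forward the raw analytics payload to the ingest queue; workers further
  // down the pipeline do the actual processing.
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: process.env.INGEST_QUEUE_URL, // hypothetical env var
      MessageBody: event.body ?? "{}",
    })
  );

  return { statusCode: 202, body: "" };
};
```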
We evaluated AWS Lambda ourselves but quickly decided against it. Fathom’s choice was likely born of the era in which their platform was built: serverless was new, and AWS Lambda was really the only game in town.

Since then, we’ve had Google Cloud Functions and a number of others. Most rely on a Docker-style execution model: your code runs in a container.
The problems with AWS Lambda are:
- Cost: it ramps up quickly with request volume. I don’t know what Fathom pays, but I bet it’s a lot.
- Start-up time: cold starts of Lambda functions are on the order of 500-1000ms. Not ideal if you want to make sure analytics data sent via a beacon is ingested reliably (a quick sketch of such a beacon call follows below), and especially not if your actual execution time is only 10ms.
I suspect AWS Lambda was designed more for batch or one-off tasks like image resizing than for high-throughput ingest.
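For context, a beacon ingest call from the browser looks roughly like this (the endpoint URL and payload are hypothetical). The browser sends it once, often during page unload, and doesn’t retry, so the ingest endpoint needs to respond quickly and reliably rather than spending 500-1000ms warming up for a 10ms job.

```ts
// Minimal sketch of client-side beacon ingest. The endpoint URL and payload
// shape are hypothetical, not Userbird's actual API.
const payload = JSON.stringify({
  event: "pageview",
  path: location.pathname,
  ts: Date.now(),
});

// sendBeacon is fire-and-forget: the browser queues the request (even during
// page unload) but won't retry on failure, so the ingest endpoint has to be
// consistently available and fast.
navigator.sendBeacon("https://ingest.example.com/event", payload);
```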
Enter Cloudflare Workers. While they look superficially similar to AWS Lambda, Workers are a different beast: lightweight V8 isolates that run your JavaScript at Cloudflare’s edge rather than in containers. This means start-up time is near-instant, running time is very fast, and the costs are very, very low.
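As a sketch of what that looks like in practice, here’s a minimal ingest Worker using the module-worker fetch handler. The `INGEST_QUEUE` binding and payload handling are assumptions for illustration, not our production setup.

```ts
// Minimal ingest Worker sketch. `INGEST_QUEUE` is a hypothetical Cloudflare
// Queues binding; the Queue type comes from @cloudflare/workers-types.
export interface Env {
  INGEST_QUEUE: Queue;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("Method not allowed", { status: 405 });
    }

    // Parse the beacon payload and hand it off to the queue so the response
    // returns in a few milliseconds, regardless of downstream processing.
    const event = await request.json();
    await env.INGEST_QUEUE.send(event);

    return new Response(null, { status: 202 });
  },
};
```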
Here’s a cost comparison done by ChatGPT for one billion ingest requests on both platforms:
Cost Comparison: 1 Billion Requests — AWS Lambda vs Cloudflare Workers
⚙️ Assumptions
| Parameter | Value |
|---|---|
| Requests | 1,000,000,000/month |
| Avg payload size | 1 KB |
| Avg execution time | 5 ms |
| Memory usage (Lambda) | 128 MB |
| Region | US-based pricing |
💰 Cloudflare Workers (Bundled Plan)
| Cost Component | Value |
|---|---|
| Free tier | 100K requests/day |
| Included in plan | 10M requests for $5 |
| Remaining requests | 990M |
| Request cost | 990M × $0.50 / 1M = $495 |
| Total | $500 |
No GB-s or CPU-time billing. Optional costs for R2, KV, or Queues if used.
💰 AWS Lambda (Direct Invocation)
Compute
- GB-s per invocation = 128 MB × 5 ms = 0.125 GB × 0.005 s = 0.000625 GB-s
- Total GB-s = 1B × 0.000625 = 625,000 GB-s
- Compute cost = 625,000 × $0.00001667 = $10.42
Invocation
- Free tier = 1M
- Paid = 999M × $0.20 / 1M = $199.80
| Component | Cost |
|---|---|
| Invocations | $199.80 |
| GB-seconds | $10.42 |
| Total | $210.22 |
Assumes Lambda is invoked directly, not via HTTP.
❌ AWS Lambda + API Gateway (Realistic HTTP Use Case)
- API Gateway cost: $3.50 per million × 1B = $3,500
| Platform | Cost |
|---|---|
| Lambda | $210.22 |
| API Gateway | $3,500.00 |
| Total | ~$3,710.22 |
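To sanity-check the arithmetic, here’s a small script that reproduces the three totals from the assumptions above, using the per-unit prices quoted in the tables (which may not reflect current pricing):

```ts
// Back-of-the-envelope reproduction of the cost figures above.
// Prices are as quoted in the tables; they may not match current pricing.
const requests = 1_000_000_000;

// Cloudflare Workers (Bundled plan): $5 includes 10M requests,
// then $0.50 per additional million.
const workersCost = 5 + ((requests - 10_000_000) / 1_000_000) * 0.5;

// AWS Lambda invoked directly: $0.20 per million requests after 1M free,
// plus $0.00001667 per GB-second at 128 MB for 5 ms.
const gbSeconds = requests * (128 / 1024) * 0.005;
const lambdaCost =
  ((requests - 1_000_000) / 1_000_000) * 0.2 + gbSeconds * 0.00001667;

// API Gateway in front of Lambda: $3.50 per million requests.
const apiGatewayCost = (requests / 1_000_000) * 3.5;

console.log(workersCost.toFixed(2));                   // ~500.00
console.log(lambdaCost.toFixed(2));                    // ~210.22
console.log((lambdaCost + apiGatewayCost).toFixed(2)); // ~3710.22
```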
🧾 Summary
| Platform | Cost for 1B Requests | Notes |
|---|---|---|
| Cloudflare Workers | ~$500 | Edge-native, cold-start free, simple billing |
| AWS Lambda | ~$210 | Only if invoked directly, not HTTP-based |
| Lambda + API Gateway | ~$3,710 | Realistic cost for HTTP ingestion |
✅ Conclusion
For high-volume, global HTTP-based event ingestion, Cloudflare Workers is ~7× cheaper than AWS Lambda + API Gateway, with lower latency and simpler operations.