I’ve decided to start a bit of a series of blog posts called “Clever Idea”. I’ll use the space to talk about something I’m working on, either in my hobby project (an app for picking football games with your friends) or at work, that I think is clever. The intent is partly to share a technique or technology that some readers might find interesting. I’m also hoping that occasionally someone points out that I’m not as clever as I thought and there’s actually a better way to accomplish what I’m trying to do.
This will be the first season that I’m planning to open up my little app to more than just my immediate friends. I’m planning to let my friends invite their friends to form leagues and hopefully end up with a couple dozen leagues. Being somebody who has floated between infrastructure and software development and back again throughout my career, I know that this doesn’t just mean developing new screens for adding leagues and registering new players… it means planning out my infrastructure so that the application can scale appropriately.
Like any good architect, I started with the usual questions… how many concurrent users do I think I will have? How many rows in the databases? etc… The problem here is, like most software projects, I don’t really know. It could be that no one will be interested in the app and I’ll only have a couple users. It could also be that sometime during the season it spreads like wildfire and I have a hundred leagues all of a sudden.
I have a micro-service based architecture. Last season it consisted primarily of containerized Spring Boot apps running on an EKS (Kubernetes) cluster and communicating with a relational database deployed on AWS RDS. This architecture is certainly scalable relative to the vast majority of monolithic applications in enterprise IT today. I had an auto-scaling group set up to support the EKS cluster; it would scale down to two nodes and up as far as I needed. Without re-architecting the database, it probably could have scaled to several hundred leagues. It’s pretty flexible, allowing my AWS bill to run from ~$200/mo (a small RDS server, a couple of small K8s application nodes, and the EKS control plane) to a cluster/DB large enough to support a few hundred leagues, with the only downtime being when I switched from smaller DB instances to larger ones.
It’s not nearly as flexible as Lambda / DynamoDB though. When I rebuilt the application this year, it was with that flexibility specifically in mind. The app now runs entirely on Lambda functions and stores data (including cached calculations) in DynamoDB. Both of these are considered serverless, which means AWS ensures that my Lambda functions always have a place to run and my DynamoDB data is always available, actually providing more reliability/availability than the EKS/RDS architecture I had built. More importantly for this post, Lambda and DynamoDB are both “pay by the drink”. With Amazon, those are very small drinks:
- The base unit for Lambda is 50ms. A typical call from the front-end of my app to the backend results in a log line like: “REPORT RequestId: 6419f5ca-f747-4b77-a311-09392fc6bcc3 Duration: 148.03 ms Billed Duration: 150 ms Memory Size: 256 MB Max Memory Used: 151 MB”. Past the free tier (which is a nice feature, but not the focus here), AWS charges $0.0000002 per request and $0.0000166667 per GB-second. For 10,000 calls similar to the one above, I’d be charged $0.002 for the invocations and roughly $0.006 for the memory consumed (375 GB-seconds). We do need to remember that in a micro-service architecture there are a lot of calls; some actions cause 5 or 6 Lambda functions to run. But based on the numbers above, if I end up with only a handful of users, my Lambda charges will be negligible.
- For DynamoDB, the lower end of the scale is similarly impressive: $1.25 per million writes, $0.25 per million reads, and $0.02 per 100,000 DynamoDB Stream reads (more on these in another post). I know from last season that if I only have a couple of leagues then, after refactoring for a big-data architecture, I will end up with ~5,000 bets to keep track of that are rarely read (there are cached totals) but often written and rewritten (let’s say 5 times per bet), ~300 games that are read every time a user loads their bet page (let’s say players read all 300 games 12,500 times), and ~25 player/league records that are used on nearly every call from the UI (let’s say users are queried 50,000 times). Using those conservative guesses, a small season would cost me $0.036 for the writes and the resulting DynamoDB Stream reads, and $0.95 to satisfy all the reads of the games and leagues. That means my relatively small league costs less than $1.00 for the whole season.
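To make that arithmetic easy to check, here’s the whole back-of-the-envelope estimate as a small script. The prices are the on-demand rates quoted above; the traffic numbers are the same conservative guesses, not measurements:

```python
LAMBDA_PER_REQUEST = 0.0000002        # $ per invocation
LAMBDA_PER_GB_SECOND = 0.0000166667   # $ per GB-second
DDB_PER_WRITE = 1.25 / 1_000_000      # $ per write request
DDB_PER_READ = 0.25 / 1_000_000       # $ per read request
DDB_PER_STREAM_READ = 0.02 / 100_000  # $ per DynamoDB Stream read

def lambda_cost(calls, billed_ms, memory_mb):
    """Cost of `calls` invocations, each billed at `billed_ms` / `memory_mb`."""
    gb_seconds = calls * (billed_ms / 1000) * (memory_mb / 1024)
    return calls * LAMBDA_PER_REQUEST + gb_seconds * LAMBDA_PER_GB_SECOND

# 10,000 calls at 150 ms billed and 256 MB, as in the log line above
print(f"Lambda:           ${lambda_cost(10_000, 150, 256):.4f}")

# 5,000 bets each rewritten ~5 times; every write also lands on the stream
writes = 5_000 * 5
write_cost = writes * (DDB_PER_WRITE + DDB_PER_STREAM_READ)
print(f"Writes + streams: ${write_cost:.4f}")

# 12,500 bet-page loads x 300 games, plus 50,000 player/league lookups
reads = 12_500 * 300 + 50_000
read_cost = reads * DDB_PER_READ
print(f"Reads:            ${read_cost:.2f}")
```

Running it reproduces the figures in the text: under a cent of Lambda, about $0.036 of writes and stream reads, and $0.95 of reads, i.e. under a dollar for a small season.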
The reason I titled this post “Serverless is Commitmentless” is that I can build an app and host it on redundant compute and storage without really paying anything at all. If I get lucky, though, and the application is a huge success, this architecture would scale to thousands of leagues before I need to rethink my DynamoDB indexes. As long as my revenue grows faster than the AWS hosting fees, I have essentially zero infrastructure-cost risk, on the upside or the downside, when starting a new project.
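To give a feel for how thin the code in this architecture is, here’s a minimal sketch of one backend endpoint: an API Gateway-triggered Lambda reading a record from DynamoDB. This is illustration only, not the app’s actual code; the `bets` table, `betId` key, and `get_bet` name are all made up, and the real app’s language and schema aren’t shown in this post.

```python
import json

def get_bet(event, context, table=None):
    """Return one bet record, keyed by the id in the request path."""
    if table is None:
        # Inside AWS the table comes from boto3; taking it as a
        # parameter keeps the handler testable without AWS.
        import boto3
        table = boto3.resource("dynamodb").Table("bets")
    bet_id = event["pathParameters"]["betId"]
    item = table.get_item(Key={"betId": bet_id}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": ""}
    return {"statusCode": 200, "body": json.dumps(item)}
```

Each such function is billed only for its own ~150 ms of runtime, which is where the per-call numbers above come from.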