
Containers and The Serverless Spectrum

What does serverless mean to different teams? How does Coherence help teams with serverless apps?
July 12, 2022

What does Serverless mean in the real world?

The idea that you don’t need to manage the long-running state of servers anymore, that we should treat them as “cattle” rather than as “pets,” has been modernized and packaged into the “serverless” movement. At Coherence, it’s important for us to figure out what people mean when they say “serverless,” because we need to know exactly how they plan to build or migrate their application on our platform. From what we’ve seen, the value of “serverless” is autoscaling, stateless systems that are billed based on usage and run (maybe on the “edge”) in response to events such as requests. The technology is embodied by products like Lambda, Firebase, Netlify, Vercel, Cloudflare, Fastly, and Azure/Google/Firebase/Supabase Cloud Functions. After organizing the variety of deployments we’ve heard described as “serverless,” we believe there is a serverless spectrum that looks something like this:

  • “Cloud Functions as a Service” - Functions pasted into a provider’s web UI and hooked up to a load balancer or message queue with additional configuration
  • “Serverless Frameworks” - Serverless Framework (Pro) and Serverless Stack (SST)
  • “Full-Stack JS Serverless” - Redwood/Netlify/Vercel/JAMStack
  • “Lambda Monoliths” - often these are legacy apps deployed on serverless function runtimes
  • “Cloud Container Runtimes” - App Runner, Cloud Run, Fargate, Azure Containers/Apps, 2nd-gen PaaS like fly.io
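
To make the first two categories above concrete, here is a minimal sketch of the stateless, event-driven unit they deploy: a single function invoked per request and billed only while it runs. It assumes AWS Lambda behind API Gateway with TypeScript types from the aws-lambda package; the handler name and response shape are illustrative, not tied to any specific product listed above.

  // A stateless handler: no server to manage, invoked once per event.
  // Assumes the "aws-lambda" type package; names here are illustrative.
  import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

  export const handler = async (
    event: APIGatewayProxyEvent
  ): Promise<APIGatewayProxyResult> => {
    const name = event.queryStringParameters?.name ?? "world";
    return {
      statusCode: 200,
      body: JSON.stringify({ message: `hello, ${name}` }),
    };
  };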

When do teams adopt Serverless? When should they?

For new products built on these systems from the start, we’re seeing that serverless can be awesome for developers. In practice, a good developer experience is possible if you choose AWS with CDK/Serverless.com, or Next.js/RedwoodJS with Vercel/Netlify. Quasi-serverless hosted functions on k8s are a bit more experimental but, for some use cases, can also make for a good developer experience. Many other stacks have too many holes and require unreasonable amounts of glue to deliver a working full-stack product that isn’t a nightmare to operate. Good developer experiences are mostly in categories 2 & 3 above - which is why they aren’t our focus today at Coherence. There are also many kinds of products where these stacks aren’t a good fit, e.g. if you want to run lots of long-running jobs or use libraries that aren’t available in JS, to name a few important ones. Also, real-world serverless generally means rewriting large parts of your app, so it is better suited to greenfield projects than to legacy ones.
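
As a rough illustration of why the CDK path can feel good, here is a hedged sketch of an AWS CDK v2 stack in TypeScript that wires a function to an HTTP endpoint in a few lines; the construct names and the ./lambda asset path are assumptions, not a prescribed layout.

  import { App, Stack, StackProps } from "aws-cdk-lib";
  import * as lambda from "aws-cdk-lib/aws-lambda";
  import * as apigateway from "aws-cdk-lib/aws-apigateway";
  import { Construct } from "constructs";

  class HelloStack extends Stack {
    constructor(scope: Construct, id: string, props?: StackProps) {
      super(scope, id, props);

      // The function scales with traffic and costs nothing while idle.
      const fn = new lambda.Function(this, "HelloFn", {
        runtime: lambda.Runtime.NODEJS_18_X,
        handler: "index.handler",
        code: lambda.Code.fromAsset("lambda"), // directory containing index.js
      });

      // Proxy every HTTP request to the function.
      new apigateway.LambdaRestApi(this, "HelloApi", { handler: fn });
    }
  }

  new HelloStack(new App(), "HelloStack");

The catch, as noted above, is that everything around this - long-running jobs, non-JS dependencies, existing services - still needs a home.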

Compared to serverless, containers are an older technology with many different deployment paradigms. There is less linkage to the infrastructure, since you can run a container anywhere from your laptop to a VM to k8s to Heroku to advanced hosted edge platforms. In practice, k8s dominates the container space. The biggest advantage of containers is that they’re compatible with the last 30 years of Linux and Windows based software, which makes them more migration and legacy friendly and a much better fit for maintaining or enhancing existing systems. The most cutting-edge platforms for running containers can achieve parity with serverless in some important dimensions. For example, Google Cloud Run (roughly equivalent to hosted Knative) offers a fully-managed runtime that scales from 0 to infinity, bills by the millisecond, and offloads most infrastructure-layer security, availability, and reliability concerns in a similar way to “Serverless” products. On other dimensions, such as “cold-start” time, they’re not as good. Of course, mitigations like not scaling all the way to 0 can help if cold starts matter for your application.
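
For a sense of how small that contract is, here is a minimal sketch of a containerized service that fits a Cloud Run style runtime: listen on the port the platform injects and stay stateless. The service itself is illustrative; the min-instances note in the comments reflects the “don’t scale all the way to 0” mitigation mentioned above.

  // Minimal stateless HTTP service using only Node's standard library.
  // Cloud Run (and similar runtimes) inject the port via the PORT env var.
  import { createServer } from "node:http";

  const port = Number(process.env.PORT ?? 8080);

  createServer((req, res) => {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ path: req.url, message: "hello from a container" }));
  }).listen(port, () => {
    // Deploying with a minimum instance count (e.g. gcloud run deploy
    // --min-instances=1) keeps a warm copy around to soften cold starts,
    // at the cost of no longer scaling all the way to zero.
    console.log(`listening on ${port}`);
  });

The same image runs unchanged on a laptop, a VM, or k8s, which is the portability point above.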

A containerized application “done right” can therefore take advantage of the almost-serverless “cattle” operability profile of managed runtimes, stay compatible with “cloud-native” runtimes for jobs where they make sense (e.g. scheduled execution, long-running persistent daemons, other recurring batch work), and even dip back into older technologies for use cases where a VM or a laptop makes the most sense. For these reasons, containers are preferable to serverless for many teams. We believe they fit the “simple and boring” mantra you might find many experienced technologists chanting.

How does Coherence help teams who are looking to adopt the best parts of Serverless?

Coherence’s first “flavor” is a GCP version of “Containers as Serverless”. Our second flavor will be an AWS version of the same thing, built on Fargate. After that, we plan to offer AWS, GCP, and Cloudflare versions of the “Cloud Functions” products in the first category above. We want to help teams that are setting up Lambdas, using API Gateways, and using Step Functions/Batch/EventBridge (or the equivalents on other providers). By adding a cloud IDE, managed CI/CD, and GitOps environment management, we can offer a ton of functionality on top of the underlying cloud primitives and get to a 10x better developer experience for most teams using these tools today.

Overall, we’re focused on where we can deliver the most value to developers by doing better than current tools without compromising the experience. Today, Coherence can deliver a container platform that gives your developers a best-in-class experience — without your team having to build a complete toolchain of its own out of raw building blocks. From Cloud IDEs to managed GitOps and managed infra-as-code, we provide a (pun intended) coherent developer experience from one simple configuration, and close the loop from dev to production.

Give Coherence a try and let us know what you think!