Serverless architecture refers to a new way of designing and managing a web application’s back end: server capacity that the cloud provider manages and scales for your app. “Serverless” is actually a misnomer. There are still servers, but the work of tuning and scaling server use based on load is now handled by the provider instead of the developer.
Traditionally, if you needed computing power and storage for your web app, you paid a lump sum every month in exchange for server capacity. When your app needed to scale up or down to accommodate different loads, it was up to the developer to design a scaling scheme that would dedicate new server resources or release unused resources.
Serverless automates all the work of tuning autoscaling schemes and load balancing. I believe serverless architecture is the future. However, it’s not going to replace our current model of microservices and containers anytime soon. There are still too many hurdles for developers adopting serverless, and it requires more up-front work for the development team. As serverless becomes easier to use, its popularity will inevitably rise since it makes server management so much easier.
How Serverless Works
Traditional web apps require the client (the browser) to connect to the app’s server, which handles every request. A serverless app decouples the various functions, databases, and authentication services that go into an app. In many cases, the client can connect directly to the database or a third-party service. The app’s functions and back-end services are exposed through the serverless architecture’s API, and each runs separately.
These serverless functions and back-end services run on a function-as-a-service (FaaS) and backend-as-a-service (BaaS) model, where you only pay for what you use. The benefit is that every component of the app can scale independently, and the serverless provider manages that scaling for you. You save a lot of time that would otherwise go into tuning load balancing and scaling.
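To make this concrete, here’s a minimal sketch of what one of those functions might look like, written as an AWS Lambda-style Python handler behind an HTTP endpoint. The handler signature and response shape follow AWS’s API Gateway proxy convention; the get_user lookup is a hypothetical placeholder.

```python
import json

def get_user(user_id):
    # Hypothetical placeholder: a real app would read from a BaaS
    # database (a managed table) rather than return canned data.
    return {"id": user_id, "name": "example"}

def lambda_handler(event, context):
    # The provider invokes this function once per request; there is no
    # server process for the developer to provision, tune, or scale.
    user_id = (event.get("pathParameters") or {}).get("id")
    if not user_id:
        return {"statusCode": 400, "body": json.dumps({"error": "missing id"})}
    return {"statusCode": 200, "body": json.dumps(get_user(user_id))}
```

The database, authentication, and any other back-end services would live in separate managed components, each scaled by its provider.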
Of course, decoupling an application’s functions creates a challenge with granularity. It takes more time up front for a developer to assess and test how small or large each function should be. Too small, and it gets cumbersome to manage all the various functions. Too large, and the functions end up becoming mini-monoliths themselves, degrading the value of serverless. This up-front challenge is the big hurdle to serverless, and it requires an initial time investment to get right.
A hybrid serverless and traditional solution is also possible. You could use a serverless API to extend an existing app or replace selected features. This hybrid approach is likely the easiest way for most development projects to migrate to serverless over time, since developing a serverless solution from scratch is likely to cost a lot of time and money.
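As a rough sketch of that hybrid approach, an existing app could hand off a single feature, say thumbnail generation, to a serverless function behind an HTTP API while the rest of the app stays where it is. The endpoint URL and payload below are hypothetical.

```python
import json
import urllib.request

# Hypothetical URL of a serverless API fronting a single function.
THUMBNAIL_API = "https://api.example.com/v1/thumbnails"

def request_thumbnail(image_url, width=200):
    """Called from the existing (traditional) app; the heavy lifting
    runs in a serverless function that scales on its own."""
    payload = json.dumps({"image_url": image_url, "width": width}).encode()
    req = urllib.request.Request(
        THUMBNAIL_API,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Features migrated this way can move one at a time, which keeps the up-front cost manageable.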
Advantages
The biggest advantage of serverless computing is its cost effectiveness. Renting or purchasing a fixed quantity of servers is inherently inefficient during periods of underutilization or idle time. Even compared to an autoscaling group, serverless can be more cost-efficient thanks to more efficient bin-packing.
The cost efficiency applies to developer time as well. While serverless apps take longer to set up in the beginning, the developer doesn’t need to spend time creating and tuning autoscaling policies. Over time, this reduced maintenance load pays for the initial time investment required.
Additionally, serverless architecture exposes its FaaS components through a RESTful API. The units of code exposed to the outside are just functions, so the developer doesn’t need to worry about handling HTTP requests or multithreading. This makes back-end development much simpler.
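For example, with AWS’s Python SDK (boto3), a deployed function can be invoked through the provider’s API in a couple of lines; the function name and payload here are hypothetical, and other vendors offer equivalent SDKs.

```python
import json
import boto3  # AWS SDK for Python; used here as one concrete example

lambda_client = boto3.client("lambda")

# "resize-image" is a hypothetical function name. Routing, auth, and
# concurrency are handled by the provider behind this single call.
response = lambda_client.invoke(
    FunctionName="resize-image",
    Payload=json.dumps({"image_url": "https://example.com/cat.png"}).encode(),
)
result = json.loads(response["Payload"].read())
```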
Disadvantages
Serverless does come with some limitations. There are limits to the resources you can use, so it’s not well suited to high-performance computing applications. In those cases, it would likely be cheaper to rent a dedicated server anyway.
Another limitation of serverless is response latency. Unlike on a continuously running server, virtual machine, or container, serverless providers typically spin down code completely when it’s not in use. If a runtime (like Java) takes a while to start back up, latency suffers.
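One common way to soften cold starts, sketched below under the assumption of a Python runtime on an AWS Lambda-style platform, is to do expensive initialization outside the handler so it runs once per cold start, and warm invocations reuse it.

```python
import os
import boto3  # assumption: AWS-style FaaS with the boto3 SDK available

# Runs once when the execution environment is created (the cold start).
# Warm invocations reuse this client instead of reconnecting every time.
table = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "users"))

def lambda_handler(event, context):
    # Only per-request work happens here, which keeps warm latency low.
    return table.get_item(Key={"id": event["id"]}).get("Item")
```

This doesn’t eliminate the first-request penalty, but it keeps every request after that fast.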
Monitoring and debugging performance gets a little trickier with serverless, too. You can time functions in serverless, but you can’t attach profilers, debuggers, or APM tools. And since serverless environments typically aren’t open source, you can’t even set up a local environment to replicate and test performance.
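Since you can’t attach a profiler, the practical fallback is to instrument the function yourself, for example with a small timing wrapper that writes durations to the provider’s log stream. The names below are illustrative.

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def timed(func):
    # Log how long each invocation takes, since external profilers can't attach.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            logger.info("%s took %.1f ms", func.__name__,
                        (time.perf_counter() - start) * 1000)
    return wrapper

@timed
def handler(event, context):
    # Hypothetical function body; the timing data ends up in the platform's logs.
    return {"statusCode": 200, "body": "ok"}
```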
Why Serverless
Serverless apps aren’t easier to develop (in fact, they’re usually harder), but they’re easier to administer after development. You can build a scalable and highly available application with minimal maintenance. This reduction in ongoing costs makes a compelling business case for serverless, and that cost advantage will drive serverless adoption.
This year, we hope to deploy a few serverless apps as we deepen our expertise in serverless architecture. All the major cloud vendors - Amazon, Google, Microsoft, IBM - are on board with serverless and building their own FaaS and BaaS offerings. Serverless is the future of scalable web apps, and it’s exciting to be at the leading edge of an emerging technology shift.