Building software has probably never been easier than it is today. There are hundreds of programming languages and thousands of tools and frameworks at everyone’s disposal. The market is full of opportunities and ideas that can turn into successful products and companies.
However, statistics show that the average lifespan of a start-up is 20 months, and 90% of them fail to establish themselves on the market. How does that compare with the length of the delivery and deployment process for a new product in your company? You’re probably hoping that a few weeks from now your programmers will have stopped configuring Jenkins and building “Hello World” Docker images, and will already be in their second “business sprint”.
What if your products could be built and deployed in days instead of months, your monthly costs could be as low as a couple of Starbucks Venti Lattes, and you could scale indefinitely? Some might say that this sounds too good to be true. Nevertheless, one of our recent projects was built by a single developer, with the support of a single architect, in about 4 weeks. It comprised several APIs built with AWS API Gateway and AWS Lambda, AWS SQS queues, and an integration with Google’s Dialogflow chatbot agent. Moreover, the solution provided the flexibility to rewrite the API, change the integration concept, or even introduce new architecture elements overnight. So, if you want to experience the speed of light in delivering software, sit back and read about what the Serverless Framework has to offer.
Short Time-to-Market & Freedom to Fail Cheap
To begin with, let’s imagine a start-up with a delivery time of several months. Firstly, it would probably go out of business within the first two months. Secondly, no organisation would put its money at risk for a company with such low velocity and poor time-to-market capabilities. It is therefore in an environment of constant instability, changing requirements, and vague visions that serverless computing shines the most.
The serverless approach offers fast prototyping and implementation without sacrificing operability, reliability, or scalability. First of all, deployment is as simple as running a single command. For the operations team, this means that deploying a critical bug fix takes minutes instead of hours.
Moreover, many non-functional requirements are taken care of for you: your architecture is prepared for any load, with no upfront investment necessary, thanks to the simple pay-as-you-go model. AWS services like AWS Lambda and API Gateway are designed to cope with any load, and they don’t require the operations or dev team to configure any autoscaling rules manually.
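To see what pay-as-you-go means in practice, here is a back-of-the-envelope cost estimate for a low-traffic Lambda-backed API. The prices are illustrative assumptions (check the current AWS pricing page before relying on them), and the free tier is ignored:

```python
# Back-of-the-envelope AWS Lambda cost estimate.
# Prices below are assumptions for illustration only:
PRICE_PER_MILLION_REQUESTS = 0.20   # USD per 1M requests (assumed)
PRICE_PER_GB_SECOND = 0.0000166667  # USD per GB-second (assumed)

def monthly_lambda_cost(requests: int, avg_duration_ms: float, memory_mb: int) -> float:
    """Estimate the monthly Lambda bill, ignoring the free tier."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# 1M requests a month, 120 ms average duration, 256 MB of memory:
print(round(monthly_lambda_cost(1_000_000, 120, 256), 2))  # → 0.7
```

At roughly 70 cents a month under these assumptions, the latte comparison is not an exaggeration.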
Finally, your system is built for maximum visibility, since logging, distributed tracing, alerting, and advanced monitoring are embedded in the platform. The only things left to do are to set up alerts on the metrics and KPIs the business team is interested in, and to react to them as they come. Whatever your goals are, serverless computing gives you one huge advantage over traditional delivery and deployment models: it lets you fail not only fast but also cheap.
Simple Continuous Delivery Pipeline & Low Total Cost of Ownership
Whenever you’re building software, you’ll need a delivery pipeline that assures the high quality of the code, fulfilment of business needs, and compliance with company standards. Organisations often underestimate the amount of time needed to set up such a pipeline. Software developers focus mostly on aspects related directly to the code and underestimate the problems they might encounter while preparing the code pipelines. The same goes for issues that may come up later, while maintaining and operating the working solution in production.
While prototyping a new product, barely anyone considers OS patching, certificate renewals, cost optimisations, instance reservations, data recovery, backups etc. Paying attention to all these things is at the heart of the DevOps methodology.
However, if your company doesn’t have any DevOps skills then hiring a specialist in this area can be a difficult and costly endeavour. This is caused by the shortage of skilled people on the market and, above all, their financial expectations.
Nevertheless, the Serverless Framework lives up to the promise of fast software delivery that we described at the beginning of this article. Deploying your software is as simple as executing the `serverless deploy` command, which in its default configuration immediately deploys the current version of the code. Regardless of the programming language, deployment always boils down to this single command. The framework and its CLI also natively support a `--stage` parameter, which uses an arbitrary string to keep your resources separate from each other. This built-in stage handling removes any confusion about where the test environment ends and production starts. It becomes apparent how easy it is to get any code to the cloud using these simple commands:
- `serverless deploy --stage=local-dev-martin` - the deployment your developers use to test their local changes
- `serverless deploy --stage=test` - your shared test environment
- `serverless deploy --stage=prod` - your production
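To make the stage mechanics concrete, here is a minimal sketch of a `serverless.yml`. The service name, handler, and path are made up for illustration; the `${opt:stage, 'dev'}` variable syntax is how the framework picks up the `--stage` flag:

```yaml
# serverless.yml — a minimal sketch; names are hypothetical.
service: orders-api

provider:
  name: aws
  runtime: python3.9
  # The --stage CLI flag lands here; "dev" is the fallback default.
  stage: ${opt:stage, 'dev'}

functions:
  createOrder:
    handler: handler.create_order
    events:
      - http:
          path: orders
          method: post
```

Because the stage becomes part of the deployed stack’s name (e.g. `orders-api-test` vs `orders-api-prod`), each of the commands above produces a fully isolated copy of the application.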
And just when you think the process could not get any easier, the Serverless Framework also offers built-in integrations with all the AWS services you may need in your applications, e.g. SQS, SNS, Cognito, Kinesis, etc. Moreover, it handles the creation of the underlying S3 buckets, CloudFormation stacks, and everything else your team would otherwise have to build by hand.
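As an example of how little glue code such an integration needs, here is a sketch of a Lambda handler wired to an SQS queue. The `Records`/`body` structure is the standard shape of the event Lambda passes for SQS triggers; the message payload (`orderId`) is an assumption for illustration:

```python
import json

def handle_orders(event, context):
    """Sketch of an SQS-triggered Lambda handler.

    Lambda delivers SQS messages as a list under event["Records"],
    each with the raw message text in "body". The "orderId" field
    is a made-up payload for this example.
    """
    processed = []
    for record in event.get("Records", []):
        message = json.loads(record["body"])
        # Real business logic would go here; we just collect the IDs.
        processed.append(message["orderId"])
    return {"processed": processed}

# A trimmed-down SQS event, shaped like what Lambda would deliver:
sample_event = {"Records": [{"body": json.dumps({"orderId": "A-17"})}]}
print(handle_orders(sample_event, None))  # → {'processed': ['A-17']}
```

With the queue declared as an `sqs` event in `serverless.yml`, the framework wires up the queue, the trigger, and the IAM permissions; the handler above is all the code you write.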
Highly Performant Dev Teams
Besides being freed from creating and managing unnecessary resources, your dev teams are also liberated from using their own machines for integration testing the product they’re building.
The Serverless Framework Offline Plugin is always at your disposal if you want to build and test your code locally, without AWS access. That said, serverless shines the most when tested in the actual cloud. Firstly, no computing power is needed for running multiple Docker containers as your tests take place elsewhere. Secondly, you’re mitigating the risk of using faulty mocks, dummy services, and broken integrations, as your code is integrated and tested in the wild.
Now, let’s talk about the other side of serverless computing. Processes that used to be free when run locally now turn into pay-per-use services. Each deployment, each API call, and each SQS message consumed results in actual money being spent.
Of course, if your team consists of two developers and the lifespan of the project is a month, these costs are negligible. But if you have separate streams and multiple teams with several developers in each of them, you may want to weigh this decision more carefully. Still, let’s not throw the baby out with the bathwater and pass on this opportunity too soon. There’s a rule of thumb with all cloud resources: monitor before you act. Use all the available information to make the right decision, and keep evaluating as you go.
Regardless of which cloud provider you choose, the building blocks for serverless applications can be scaled practically without limit. For instance, when there’s not much activity, AWS API Gateway backed by AWS Lambda simply waits for traffic to come. But once the traffic spikes, the service monitors the load, latency, internal resource usage, and several other metrics, and responds accordingly. It’s up to the service, not the developer or any other person, to decide how many servers, processes, or instances to start or stop so that the load is handled without errors. All this happens seamlessly.
You can probably imagine a world with no more unfulfilled requests and no customers calling support to complain about their orders not being recorded or processed. That said, your serverless architecture is only as strong as its weakest component.
So, even if the serverless part of your deployment is scalable and reliable, it may still connect to, say, a relational database. If the database is too small, the number of open connections too high, or the disk too slow, the result is a denial of service. Serverless databases, NoSQL engines, and other automatically scalable services, on the other hand, don’t share this flaw.
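One common mitigation is to open the database connection outside the handler and reuse it across warm invocations, instead of opening a fresh connection on every request. Below is a runnable sketch of the pattern; `sqlite3` stands in for a real RDS connection so the example works anywhere:

```python
import sqlite3

# Module-level state survives between warm invocations of the same
# Lambda container, so the connection is opened once and then reused.
_connection = None

def get_connection():
    """Lazily open the database connection once per container."""
    global _connection
    if _connection is None:
        # In a real function this would connect to your RDS instance;
        # sqlite3 is a stand-in to keep the sketch self-contained.
        _connection = sqlite3.connect(":memory:")
    return _connection

def handler(event, context):
    conn = get_connection()
    (value,) = conn.execute("SELECT 1").fetchone()
    return {"ok": value == 1}

# Two invocations of a warm container share the same connection:
print(handler({}, None))                      # → {'ok': True}
print(get_connection() is get_connection())   # → True
```

This keeps the number of open connections proportional to the number of warm containers rather than the number of requests; it softens the problem but does not remove it, which is why a dedicated connection pooler or a natively scalable database remains the safer choice under heavy load.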
Therefore, special care should be taken when incorporating additional layers into existing solutions. It’s important to avoid the situation where something that could be considered a clear advantage quickly turns into a complete disaster.
Overall, serverless computing shines the most when applied to the right class of problems and paired with the right set of services and tools.
The benefits of going serverless with the Serverless Framework are straightforward: ease of use, no complicated delivery pipelines, extremely fast time-to-market, agility, and robustness, to name just a few. If you’re considering building a frontend-and-API application combo with an unpredictable load, usage spikes, and low-activity periods with almost no traffic, serverless computing is the way to go. We did it with our 4-week project and are very happy with that decision. The costs have been relatively low, and whenever there was a spike in load, the platform never failed us.
However, if your application’s load is stable and you constantly need a lot of processing power, you might want to consider other options, especially if you need long-running background processing. With great serverless power comes great responsibility: the responsibility to choose it for the right class of problems.
All in all, building software with a serverless-first approach in mind will hardly ever cut off the escape route back to serverful solutions. Moreover, it promotes writing highly decoupled code, which can easily be grouped into higher-order modules when needed.