Bundle all of your application’s files into an archive. Transfer the file to the production server. Unzip it into a new folder. Switch the VirtualHost’s directory to the new one. Restart the webserver.
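As a concrete sketch, that manual pipeline might look like the script below. The `myapp` name and every path are made up for the example, and the transfer and restart steps are left as comments so the whole thing stays on the local machine (the symlink flip assumes your VirtualHost's DocumentRoot points at a `current` symlink):

```shell
#!/bin/sh
# Sketch of the manual deploy pipeline: bundle, unpack, switch, reload.
# All names and paths here are stand-ins for your real setup.
set -e

# Example application to deploy (a stand-in for your real source tree)
mkdir -p myapp
echo '<h1>hello</h1>' > myapp/index.html

STAMP=$(date +%Y%m%d%H%M%S)

# 1. Bundle all of the application's files into an archive
tar -czf "myapp-$STAMP.tar.gz" -C myapp .

# 2. Transfer the file to the production server, e.g.:
#    scp "myapp-$STAMP.tar.gz" deploy@prod.example.com:/var/www/
#    (skipped: this sketch stays on the local machine)

# 3. Unzip it into a new folder
mkdir -p "releases/$STAMP"
tar -xzf "myapp-$STAMP.tar.gz" -C "releases/$STAMP"

# 4. Switch the VirtualHost's directory to the new one
#    (an atomic symlink flip; DocumentRoot points at ./current)
ln -sfn "releases/$STAMP" current

# 5. Restart the webserver, e.g.:
#    sudo systemctl reload apache2
echo "deployed release $STAMP"
```

Every deploy means running all of this by hand, in order, without slipping up once. That's exactly the kind of work the rest of this article is about automating away.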

You may add, remove or change a step here and there, but the pipeline above probably looks familiar. Many of us have used these steps, or something like them, over the past few years to deploy a website or application to a server. Heck, you might have even used them today! If you did, I'm here to bring you good news: you might not have to do that anymore.

Continuous Delivery and Serverless Computing are relatively recent terms, but they describe practices that development and operations teams have wanted for decades. Continuous Delivery is a practice where every change in your code is automatically made ready to deploy, using a predefined set of actions that prepare, build, test and package it. You can still choose when (or whether) to deploy, but you no longer do the deployment work yourself: the software does it for you. Full disclosure, though: Continuous Delivery is the second step in a workflow of three, the other two being Continuous Integration and Continuous Deployment. So, before you can properly implement Continuous Delivery, you first need a Continuous Integration pipeline, where your code is fully integrated and every push generates a build artifact that's ready to be deployed.
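Concretely, a delivery pipeline is just an ordered list of stages where each stage only runs if everything before it succeeded. A minimal sketch of that gating (the stage names are placeholders; a real pipeline would call your package manager, test runner and deploy tooling, and would be triggered by your CI service on every push):

```shell
#!/bin/sh
# Minimal sketch of a delivery pipeline: an ordered list of stages where
# each stage only runs if everything before it succeeded. In a real
# pipeline each stage would invoke actual build/test/deploy tooling.
set -e                       # any failing command aborts the whole pipeline

LOG=pipeline.log
: > "$LOG"                   # start a fresh log for this run

stage() { echo "==> $1" | tee -a "$LOG"; }

stage prepare                # e.g. install dependencies here
stage build                  # e.g. compile assets, produce the artifact
stage test                   # e.g. run the test suite; a failure stops here
stage deploy                 # only reached when every stage above passed
```

The `set -e` line is what makes this a pipeline rather than a checklist: a failing build or a red test suite stops execution before the deploy stage is ever reached.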

Serverless Computing is the other concept that comes in handy when you don't want to manage servers. When you go serverless, the burden of booting up servers and loading your software is not on you, but on your cloud provider: you provide the code, and the provider automatically manages the computing resources whenever it needs to run. Of course, this isn't magic: your cloud provider still uses servers and VMs behind the scenes, but your code is always up and running, with as many instances as it takes to handle the load. And instead of paying for one server that's always on (and underutilized most of the time), with Serverless Computing your code only executes when it's needed and you only pay for what you use.

Though it can be extremely helpful in some cases, Serverless Computing also has its drawbacks. Because you don't have an always-ready instance of your application, your cloud provider might need to prepare a new instance before it can run your code (a "cold start"), which adds latency to some requests. That's acceptable for most applications, but for time-critical ones it might be a deal-breaker. Another thing to consider before going serverless is whether your application is stateful, that is, whether it depends on persistent connections or in-memory state. Instances of your application can be spawned and killed in a matter of seconds, taking with them anything they were holding in memory. Make sure your application doesn't depend on anything being left in memory between requests and, if you need session data across requests, persist it to reliable storage, such as S3 or whatever database you use.

These concepts may be a lot to wrap your mind around right now, but don't worry: put them into practice one step at a time. Start by creating a Continuous Delivery pipeline that deploys your application to the server you already have. Try a CI service, or even plain git hooks. Whatever you choose, begin automating your deployment flow. Once that's sorted out, it will be easier to take your application serverless and stop worrying about your servers. That's one less thing to worry about, and more time to focus on your code.
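If you want to try the git-hook route, here's a runnable sketch that stays entirely on one machine: a bare "server-side" repository gets a `post-receive` hook that checks every push out into a deploy directory, so `git push` becomes the whole deployment. All repository names and paths are invented for the example; on a real setup the bare repo and deploy directory would live on your production server.

```shell
#!/bin/sh
# Deploy-on-push with a git post-receive hook, simulated locally.
# server.git stands in for the repo on your server; deploy/ stands in
# for the directory your webserver serves.
set -e

mkdir -p deploy
git init -q --bare server.git

# The hook itself: git runs it inside the bare repo after every push
cat > server.git/hooks/post-receive <<'EOF'
#!/bin/sh
# $PWD here is the bare repository; check the pushed code out into
# the deploy directory next to it
git --work-tree="$PWD/../deploy" --git-dir="$PWD" checkout -f main
EOF
chmod +x server.git/hooks/post-receive

# A working copy standing in for your development machine
git init -q -b main app
cd app
git config user.email dev@example.com
git config user.name dev
echo '<h1>v1</h1>' > index.html
git add index.html
git commit -qm "first version"

# From now on, "git push" is the entire deployment
git push -q ../server.git main
cd ..
```

After the push, `deploy/index.html` contains the committed file: the hook did the deployment. It's a crude pipeline with no build or test stage, but it's automation you can set up in minutes, and a fine first step before moving to a full CI service.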