Continuous delivery, or CD, is the practice of automating the manual steps required to build and release software. The focus of continuous delivery is making sure that a project’s code is always in a deployable state, which is achieved by building a series of automated tests into the CD workflow. By taking care of routine tasks automatically, continuous delivery speeds up the software delivery process and frees software engineers to focus on more creative work.
The aim of continuous delivery is to make the software release process faster and more reliable, shortening the time it takes to get feedback and delivering value to users more quickly than is possible with a manual process.
Once releasing is robust and repeatable, it becomes easy to do it more often and you can start delivering small improvements on a weekly, daily or even hourly basis. Like continuous integration, continuous delivery requires the DevOps trinity to put it into practice: tools, process and culture.
At the heart of DevOps is a shift in mindset. Rather than viewing the software development process as a one-way conveyor belt, with requirements, code and reports handed off from one team to the next in a linear manner, DevOps champions collaboration and rapid feedback from short, iterative cycles.
Changing your definition of done can help in adopting this mentality: instead of considering your part complete when you hand your code over to the next team in the chain, your new feature or code change is only done once it’s released to live. If an issue is found at any stage in the pipeline, communicating that feedback promptly and collaborating on a fix makes for a quicker resolution than lengthy reports that have to go via a change board for approval. That’s what continuous delivery is all about.
Looking at the whole software development lifecycle, rather than just one part of it, gives you both a better understanding of what’s needed to deliver software to users and the opportunity to open more lines of communication with the other teams involved.
Continuous delivery is about identifying the pain points that slow down this delivery process and building an automated pipeline to make releasing faster and more reliable so that you’re always ready to release. With your pipeline in place, you should be able to deploy any good build to live with a single command.
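For example, that single command can be a thin wrapper around your scripted deployment. The sketch below is purely illustrative, written in Python, and assumes a hypothetical deploy.sh script that performs the actual rollout:

```python
#!/usr/bin/env python3
"""Illustrative single-command deploy wrapper (not a real tool).

Assumes a scripted deployment, deploy.sh <version> <environment>, already
exists; the names and arguments here are placeholders.
"""
import argparse
import subprocess
import sys


def main() -> int:
    parser = argparse.ArgumentParser(description="Deploy a known-good build.")
    parser.add_argument("version", help="Build identifier produced by CI, e.g. 1.4.2")
    parser.add_argument("--env", default="production", help="Target environment")
    args = parser.parse_args()

    # Delegate to the scripted deployment so every release follows the same path.
    return subprocess.run(["./deploy.sh", args.version, args.env]).returncode


if __name__ == "__main__":
    sys.exit(main())
```

With something like this in place, releasing a known-good build really is one command, such as python release.py 1.4.2 --env production (the script name and version are made up for illustration).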
Continuous integration provides the foundation for this, with code changes committed at least daily followed by an automated build and test process to provide rapid feedback to developers. If a build or test fails, addressing it is everyone’s priority.
By catching bugs early, you can fix them while the code is still fresh in your mind and avoid other functionality being built on top of bad code only to be unpicked later. With continuous delivery, the build containing the latest changes from the CI process is automatically promoted through a series of pre-production environments. Although the final push to production is triggered manually, it still follows a scripted process, making it easy to repeat so that you can release as often as needed.
Building your CI/CD pipeline is an opportunity to collaborate with the various stakeholders in your release process so that you can factor their needs into the pipeline design. Hopefully, you’ve already engaged with your colleagues in QA when designing automated tests.
Adding a stage for manual exploratory testing in a suitable test environment (as close as possible to production) will catch failures you hadn’t anticipated, which can then be covered by automated tests, making it an important step for continuous delivery.
The infosec or cybersecurity team is often seen as a barrier to frequent releases because of the time involved in running a security audit and the long reports that follow. Taking a DevSecOps approach will help you weave security requirements into your pipeline.
The exact build steps, environments and tests that you need depend on the architecture of your software and your organizational priorities. If you’re building a system based on microservices, you can take advantage of the architecture to run tests on individual services in parallel before combining them for more complex integration and end-to-end tests.
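As a rough illustration, the Python sketch below runs each service’s test suite concurrently and only moves on to the slower integration tests once every suite has passed. The service names and the run_tests.sh and run_integration_tests.sh scripts are assumptions, not part of any particular tool:

```python
"""Illustrative sketch: test microservices in parallel, then integrate."""
import subprocess
from concurrent.futures import ThreadPoolExecutor

SERVICES = ["orders", "payments", "inventory"]  # hypothetical microservices


def test_service(name: str) -> bool:
    # Each service's suite is independent, so the suites can run concurrently.
    return subprocess.run(["./run_tests.sh", name]).returncode == 0


with ThreadPoolExecutor(max_workers=len(SERVICES)) as pool:
    results = dict(zip(SERVICES, pool.map(test_service, SERVICES)))

failed = [name for name, ok in results.items() if not ok]
if failed:
    raise SystemExit(f"Service tests failed: {', '.join(failed)}")

# Only when every service passes do we run the more expensive combined stages.
subprocess.run(["./run_integration_tests.sh"], check=True)
```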
Manual exploratory testing might feel like overkill for every bug fix coming through the pipeline, in which case having optional steps or alternative pipelines based on the type of change can be more efficient for continuous delivery.
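One way to express this is to derive the pipeline’s stages from the type of change, so that a routine bug fix skips the manual stage while a new feature includes it. The sketch below is illustrative only; the stage names and change types are placeholders:

```python
"""Illustrative sketch: choose pipeline stages based on the type of change."""

BASE_STAGES = ["build", "unit_tests", "deploy_to_test", "automated_integration_tests"]


def stages_for(change_type: str) -> list[str]:
    stages = list(BASE_STAGES)
    # Reserve manual exploratory testing for riskier changes such as new features.
    if change_type == "feature":
        stages.append("manual_exploratory_testing")
    stages.append("deploy_to_staging")
    return stages


print(stages_for("bugfix"))   # skips the exploratory testing stage
print(stages_for("feature"))  # includes it
```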
Once you’ve decided the stages of your pipeline, including the tests to run in each, it’s time to script the process to ensure it’s reliable and repeatable. To avoid introducing inconsistencies, the same build artifact from the CI stage should be deployed to each pre-production environment and to production itself.
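A minimal sketch of that promotion step, reusing the hypothetical deploy.sh script and some placeholder environment names, might look like this; the key point is that one versioned artifact flows through every stage, with the push to production still triggered manually:

```python
"""Illustrative sketch: promote one CI artifact through every environment."""
import subprocess

ENVIRONMENTS = ["test", "staging", "production"]  # placeholder pipeline stages


def promote(artifact_version: str) -> None:
    for env in ENVIRONMENTS:
        if env == "production":
            # The final push is triggered manually, but uses the same script.
            answer = input(f"Deploy {artifact_version} to production? [y/N] ")
            if answer.strip().lower() != "y":
                print("Stopping before production.")
                return
        # Identical artifact, identical deployment script, different target.
        subprocess.run(["./deploy.sh", artifact_version, env], check=True)


promote("1.4.2")
```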
Ideally, test environments should be refreshed for each new build, and using containers with an infrastructure-as-code approach means you can script these steps, tearing down and spinning up new environments as needed.
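If, for instance, the environment is described by a Docker Compose file, the refresh can be wrapped in a short script along the lines of the sketch below (the compose file name and test command are assumptions):

```python
"""Illustrative sketch: run tests in a freshly created containerized environment."""
import subprocess

COMPOSE_FILE = "docker-compose.test.yml"  # hypothetical environment definition


def run_in_fresh_environment() -> None:
    compose = ["docker", "compose", "-f", COMPOSE_FILE]
    # Spin up a clean environment from the declarative definition...
    subprocess.run(compose + ["up", "-d", "--build"], check=True)
    try:
        subprocess.run(["./run_tests.sh"], check=True)
    finally:
        # ...and tear it down afterwards so the next build starts from scratch.
        subprocess.run(compose + ["down", "--volumes"], check=True)


run_in_fresh_environment()
```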
If your pipeline includes staging environments for support, sales or marketing teams to familiarize themselves with new features, you may prefer to manually control when they are updated with a new build to avoid disrupting work in progress. As with the final release to live, deployment can still be scripted to keep the process fast and consistent.
Continuous delivery promises faster releases without compromising on quality, but making that a reality requires cooperation from multiple parts of an organization.
Breaking down silos is both a challenge in the short term and a benefit in the long term, as that collaboration will help you work more effectively.
Implementing continuous delivery requires an investment of time and can be a daunting prospect. Taking an iterative approach and building up your process over time makes this more manageable and enables you to demonstrate the benefits to senior stakeholders. Collecting metrics on build and test times and comparing them with your manual procedures is a simple way to show the return on investment, as is tracking defect rates.
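A back-of-the-envelope comparison is often enough to make the case; the sketch below uses invented figures purely to show the shape of the calculation:

```python
"""Illustrative sketch: compare automated pipeline timings to a manual baseline.

All numbers here are invented placeholders; in practice you would pull run
durations from your CI server and estimate the manual baseline with the team.
"""
from statistics import mean

pipeline_runs_minutes = [14, 12, 15, 13, 16]  # recent automated runs (placeholder data)
manual_baseline_minutes = 180                 # rough estimate of the manual cycle

automated_avg = mean(pipeline_runs_minutes)
time_saved = manual_baseline_minutes - automated_avg

print(f"Average automated run: {automated_avg:.0f} min")
print(f"Estimated time saved per release: {time_saved:.0f} min "
      f"({time_saved / manual_baseline_minutes:.0%} of the manual cycle)")
```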
Measuring the value of continuous delivery can be useful when planning your infrastructure requirements. As you scale up your release process you’ll likely want to start running multiple builds and tests in parallel, and the machines available may become a limiting factor. Once you’ve optimized the performance of your pipeline you may want to consider moving to cloud-hosted infrastructure so that you can scale as needed.
Building a CI/CD pipeline might seem like a daunting task that requires a lot of initial input from the team. When done right, however, CI/CD can allow your team to deliver software in a tenth of the time a manual process would take. To get the process right, it’s important to follow some CD best practices.
Continuous delivery is the process of automating the various steps that are required to ship software to production. However, it does not entail fully automating the release of the software – that’s where continuous deployment comes in. We can think of continuous delivery and continuous deployment as different parts of the same CI/CD process.
Continuous delivery makes it easier and quicker to release software, so you can deploy to production much more frequently. Instead of a large quarterly or annual release, smaller updates are delivered frequently. Not only does this mean that users get new functionality and bug fixes sooner, but it also means you can see how your software is used in the wild and adjust plans accordingly.
While some organizations prefer to maintain control over the final step in the release process, for others the logical conclusion of a CI/CD pipeline is to automate the release to live, using a practice known as continuous deployment.