Last summer I wrote a piece discussing the hows and whys of using automation to manage cloud infrastructure. I took a high-level approach to the subject, and today I want to dive into how to practically apply this technique in a production environment. I’ll take a simple but flexible example case, using tools from the HashiCorp suite, including Packer and Terraform, to deploy and manage a single simple-case application stack from Jenkins.
This post assumes some basic knowledge of continuous integration, and an application stored in a git repository that has automated test and build jobs using Jenkins or something similar. To this existing basic workflow, we’ll add a simple infrastructure configuration directly into the repository.
Depending on whether your infrastructure is VM-based or container-based, the next step in the pipeline is a Packer job that takes the build process output and bakes it into an image. Packer supports many builders, so whether you’re targeting Amazon Machine Images (AMIs), Docker containers, or something else entirely, you can define this image generation in code. The Packer template contains configuration information and all the steps needed to install and configure your application. The method of defining these steps is flexible, ranging from simple file copies to shell scripts, Ansible playbooks, and Chef cookbooks.
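As a concrete illustration, here is a minimal Packer template sketch (HCL2) targeting an AMI. The region, base-image filter, artifact path, and service name are all illustrative assumptions, not part of the original workflow:

```hcl
# Hypothetical Packer template: bake the Jenkins build artifact into an AMI.
packer {
  required_plugins {
    amazon = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

source "amazon-ebs" "app" {
  region        = "us-east-1"
  instance_type = "t3.micro"
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"
      virtualization-type = "hvm"
    }
    owners      = ["099720109477"] # Canonical's AWS account
    most_recent = true
  }
  ssh_username = "ubuntu"
  ami_name     = "myapp-{{timestamp}}"
}

build {
  sources = ["source.amazon-ebs.app"]

  # Copy the artifact produced by the upstream build job
  provisioner "file" {
    source      = "build/myapp.tar.gz"
    destination = "/tmp/myapp.tar.gz"
  }

  # Install the application and enable it to start on boot
  provisioner "shell" {
    inline = [
      "sudo tar -xzf /tmp/myapp.tar.gz -C /opt",
      "sudo systemctl enable myapp",
    ]
  }
}
```

Swapping the `amazon-ebs` source for a `docker` source (or any other builder) changes the output format without changing the shape of the pipeline stage.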
The output of this stage is an image with your application installed and ready to run on startup, stored in whatever format and environment you intend to run it in. It’s wise to implement simple health checks to validate that the application runs as expected on startup. Sub-stages that stand the application up in a test environment for load testing, security scans, and other pre-flight checks are sound additions, but this leads us to the next phase: Infrastructure.
At the infrastructure deployment phase, you may want to implement a manual check – whether it’s for cost savings in lower environments or to prevent disruptions in production. However, operations with well-defined policies and backout plans, such as blue-green deployment or automated rollbacks triggered by health checks, may wish to keep this stage part of the fully automated workflow.
Using a tool such as Terraform, you can parameterize your infrastructure deployment across virtualization environments on premises, public cloud providers, and more. The parameters to this phase are the identifiers or paths to the outputs of the previous stage, and the variables specific to the deployment environment. When you run a terraform apply command with the freshly generated image, Terraform will dynamically create or allocate the infrastructure resources as defined. Terraform’s definitions are declarative, so if the stack you’re creating already exists, Terraform computes the difference and updates the existing instances in your fleet to the new image.
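A minimal Terraform sketch of this stage might look like the following. The provider region, instance sizing, and tag names are assumptions for illustration; the key idea is that the image ID and environment arrive as input variables from the pipeline:

```hcl
# Hypothetical Terraform configuration consuming the Packer stage's output.
variable "ami_id" {
  description = "AMI ID produced by the Packer stage"
  type        = string
}

variable "environment" {
  description = "Deployment environment, e.g. staging or production"
  type        = string
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = var.environment == "production" ? "t3.large" : "t3.micro"

  tags = {
    Name        = "myapp-${var.environment}"
    Environment = var.environment
  }
}
```

The Jenkins job would then invoke something like `terraform apply -var "ami_id=$PACKER_AMI" -var "environment=staging"`, passing the image identifier captured from the previous stage.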
Extending the Integration
This is a basic recipe that can be repeated for single applications, or even across the services that may comprise an application or API. But there is no shortage of ways to extend the automation further. Iterative changes to the Terraform configuration can create a blue-green deployment, allowing you to stage which instances get traffic and when as you roll out. Database backups and restores can be an automated part of the pipeline for testing resets. Any number of embellishments can suit this workflow to your use case. There are many ways to handle configuration, secrets, and other aspects I will cover in subsequent posts.
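To make the blue-green idea concrete, here is one hedged fragment of how it could look in Terraform: a single variable steers load-balancer traffic between two stacks, and the pipeline flips it only after health checks pass on the idle one. It assumes a load balancer (`aws_lb.app`) and the two target groups are defined elsewhere in the configuration:

```hcl
# Hypothetical blue-green traffic toggle; aws_lb.app and the blue/green
# target groups are assumed to exist elsewhere in this configuration.
variable "active_color" {
  description = "Which stack currently receives traffic"
  type        = string
  default     = "blue"
}

resource "aws_lb_listener" "app" {
  load_balancer_arn = aws_lb.app.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "forward"
    target_group_arn = (
      var.active_color == "blue"
      ? aws_lb_target_group.blue.arn
      : aws_lb_target_group.green.arn
    )
  }
}
```

Cutting over is then a one-variable change (`-var "active_color=green"`), and rolling back is the same change in reverse.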
While the initial setup of continuous integration for infrastructure can be a little challenging, the core of the system is relatively simple. Benefits such as reduced human error, repeatability, and straightforward disaster recovery make the effort well worth it. Implementing a starting workflow like this can be a good bridge into larger scale DevOps for your organization.
Are you exploring DevOps in your organization? Questions about the Cloud, Packer, or Terraform? Comment below and join the conversation!