Infrastructure as Code
The industry has moved at a fast pace over the last few years with the emergence of public cloud consumption and the technology advances it brings. Data centre resources can now be consumed without having to procure hardware or carry out integration work; everything arrives ready to use.
With that, anyone with a credit card can stand up a production-ready data centre anywhere in the world within moments, either through a self-service web page or by consuming APIs via scripts. The latter is a particularly interesting concept.
By configuring the required infrastructure via APIs, the infrastructure is treated as software. Scripts, or code, can be produced that describe the desired state. Suddenly a full infrastructure (compute, storage, networking, security, global load balancing) can all be created and controlled as code, a practice known as Infrastructure as Code.
Infrastructure as Code can provide the foundation for a DevOps transformation, introducing software development practices such as version control, code review, continuous integration and automated testing to infrastructure deployment. This eliminates repeated manual configuration, one-off custom scripts, and slow, error-prone deployments.
The concept is compelling but what tools are available?
Terraform is an open-source tool from HashiCorp, backed by a large community; a quick search online turns up plenty of workable Infrastructure as Code scripts. Terraform is an orchestration tool that deploys infrastructure as code ready for production in a declarative fashion; that is, it is the desired end state that is stipulated, not the steps to reach it.
With a declarative tool, if the desired end state is 15 instances, then 15 instances will be deployed. If the requirement later changes to 20 instances, Terraform adds the 5 missing instances rather than deploying another 20 each time the tool is run.
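The declarative behaviour described above can be sketched in Terraform's HCL configuration language. This is a minimal illustration; the AMI ID, instance type and tag names are placeholders, not taken from the original article.

```hcl
# Declarative desired state: the count is the end state, not an increment.
# Changing count from 15 to 20 and re-applying adds only 5 instances.
resource "aws_instance" "app" {
  count         = 15
  ami           = "ami-0abcdef1234567890"  # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "app-${count.index}"
  }
}
```

Running `terraform apply` against this configuration converges the real infrastructure towards the stated count, whatever already exists.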
This means there is no need to write a new piece of code every time the infrastructure needs to change. Through collaboration and version control, teams can work on the code together, creating blueprints for the data centre.
Terraform is very simple to install: it runs on a Windows, Mac or Linux machine and requires no supporting infrastructure. Once installed, the code is written in .tf files. Further files can be created in the same directory, such as a variables file holding information like customer API keys or server names, which the main code then references. Only the variables need to change, without repeatedly editing the main script.
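The split between variables and the main script might look like the following sketch; the variable names shown are illustrative, not prescribed by Terraform.

```hcl
# variables.tf — declare inputs once; only the values change per customer.
variable "api_key" {
  description = "Customer API key (illustrative name)"
  type        = string
  sensitive   = true
}

variable "server_name" {
  description = "Name tag applied to the server (illustrative)"
  type        = string
  default     = "web-01"
}
```

The main configuration then references these as `var.api_key` and `var.server_name`, so the same script can be reused across environments by supplying different variable values.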
Terraform has a large community, with users uploading their code to Git repositories such as GitHub for all to use. Because of this, Terraform is easily adopted; the scripts are easy to read and quickly absorbed. Once the scripts are ready, Terraform can run in planning mode (`terraform plan`) to show any changes that would occur and, more importantly, surface any errors.
Terraform talks to infrastructure vendors through providers, and a large list of providers is available for vendors such as VMware, AWS, Azure and GCP, among others. Simply add the provider to the code along with the required access keys or authentication, and Terraform will connect using those credentials, all from a single PC.
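A provider block is all that is needed to point Terraform at a platform. As a sketch, for AWS it might look like this; the region is a placeholder, and in practice credentials are usually supplied via environment variables or shared credential files rather than written into the code.

```hcl
# Tell Terraform which platform API to talk to and where.
# Credentials are deliberately omitted here; Terraform will pick them up
# from the standard AWS environment variables or credentials file.
provider "aws" {
  region = "eu-west-1"  # placeholder region
}
```

With the provider declared, every `aws_*` resource in the same configuration is created through that connection.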
Once connected, resources can be created. If it's AWS, for example, EC2, EBS, S3, VPCs, Elastic Load Balancers and many more can all be created with code. The example below creates an Amazon Elastic Container Service (ECS) resource with an Auto Scaling group for container instances.
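As a hedged sketch of that kind of configuration, an ECS cluster backed by an Auto Scaling group of container instances might look like the following. The AMI ID, names, sizes and the `subnet_ids` variable are all illustrative placeholders, not values from the original article.

```hcl
# ECS cluster that the container instances will join.
resource "aws_ecs_cluster" "main" {
  name = "example-cluster"  # placeholder name
}

# Launch configuration for the EC2 container instances.
resource "aws_launch_configuration" "ecs" {
  name_prefix          = "ecs-"
  image_id             = "ami-0abcdef1234567890"  # placeholder ECS-optimised AMI
  instance_type        = "t3.medium"
  iam_instance_profile = "ecsInstanceRole"        # assumed instance profile name
  # Register each instance with the cluster on boot.
  user_data = "#!/bin/bash\necho ECS_CLUSTER=example-cluster >> /etc/ecs/ecs.config"
}

# Auto Scaling group that keeps the desired number of container instances running.
resource "aws_autoscaling_group" "ecs" {
  name                 = "ecs-asg"
  launch_configuration = aws_launch_configuration.ecs.name
  min_size             = 1
  max_size             = 4
  desired_capacity     = 2
  vpc_zone_identifier  = var.subnet_ids  # assumed variable: list of subnet IDs
}
```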
Infrastructure as Code has been adopted rapidly, driven in large part by the disruptive emergence of AWS, and although there are many open-source tools such as Terraform, AWS provide their own tool: AWS CloudFormation.
There is no additional cost to use CloudFormation, but it can only be used with AWS. CloudFormation does provide the ability to create a one-click, fully deployable AWS infrastructure.
Templates can be written in YAML or JSON format, depending on the user's preference, and can be uploaded to and shared from an S3 bucket. For users who prefer a console, CloudFormation templates can be created within the AWS Management Console using the template designer canvas: simply drag and drop the AWS services that make up a 'Stack'. Once created, the design can be exported as a template and stored in S3 with the code exposed.
Below illustrates the AWS process further.
AWS provide very good, detailed documentation for each service that can be created with CloudFormation, including example scripts and even what each example looks like on the design canvas, making it very easy to adopt. Below is a basic example of creating an EC2 instance in a security group: using the design canvas, the components were added along with the code, ready to be exported to a template.
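A template along those lines might look like the following sketch in YAML; the AMI ID, port and resource names are illustrative placeholders rather than values from the original example.

```yaml
# Hedged sketch: an EC2 instance placed in a security group.
AWSTemplateFormatVersion: '2010-09-09'
Description: EC2 instance in a security group (illustrative example)
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow inbound HTTP
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
  WebInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0abcdef1234567890   # placeholder AMI ID
      InstanceType: t2.micro
      SecurityGroups:
        - !Ref WebSecurityGroup
```

Deploying the template as a stack creates both resources together, and deleting the stack removes them together.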
By using such tools to deploy infrastructure as code, traditional infrastructure deployment can be treated in the same way as application development, with the benefits that brings, such as version control, collaboration, continuous integration and automation.
Additional tools such as Chef and Puppet, which are classed as configuration management tools, can be further introduced to install and manage software on servers that are already deployed.
Combining tools such as Chef with AWS CloudFormation provides the ability to deploy a full production front- and back-end stack in AWS with the applications already configured.
These tools are not limited to public cloud consumption; they can very much be used in the on-premises data centre. For instance, HashiCorp continue to improve the Terraform VMware vSphere provider.
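Pointing Terraform at an on-premises vSphere environment follows the same provider pattern as the public cloud. As a sketch, with the vCenter address and data centre name as placeholders:

```hcl
# Same workflow, on-premises target: the vSphere provider connects to vCenter.
provider "vsphere" {
  user                 = var.vsphere_user       # assumed variable
  password             = var.vsphere_password   # assumed variable
  vsphere_server       = "vcenter.example.local" # placeholder address
  allow_unverified_ssl = true                    # for labs with self-signed certs
}

# Look up an existing data centre so resources can be placed within it.
data "vsphere_datacenter" "dc" {
  name = "dc-01"  # placeholder data centre name
}
```

From here, virtual machines, networks and datastores can be declared as resources just as EC2 instances were above.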
With an underlying Software Defined Data Centre (SDDC) on-premises, infrastructure can be automated, bringing the benefits of public cloud consumption to the data centre. By combining this with configuration management tools such as Chef, the real benefits of an SDDC solution can be achieved.
Why not read 'Data centre Automation and Operations'?