
August 23, 2019


By Taylor Owen, Kovarus Automation Solution Architect

In the first part of this blog, I laid out the key features tools need in order to manage infrastructure well, defined those features, and identified which ones Terraform and Ansible are strongest in. To help visualize the concepts of Part 1, I have created a chart:

From the diagram one can see that Terraform is stronger at infrastructure provisioning than Ansible, while Ansible is stronger in configuration management of devices and operating systems. Given these complementary strengths, there is a strong case for integrating the two tools into a powerful provisioning, orchestration, and configuration management platform.

There are a few ways of integrating these two tools. I’m going to rank these from worst to best in terms of scalability and ease of integration:

Ansible Calling Terraform

There is an Ansible module, aptly named terraform, that can be used to call Terraform from Ansible. The module accepts a state of planned, present, or absent, which lines up with the terraform plan, terraform apply, and terraform destroy commands.

- name: Execute Terraform
  hosts: localhost
  tasks:
    - name: Execute Terraform Plan
      terraform:
        project_path: "{{ project_dir }}"
        state: planned

    - name: Execute Terraform Apply
      terraform:
        project_path: "{{ project_dir }}"
        state: present

    - name: Execute Terraform Destroy
      terraform:
        project_path: "{{ project_dir }}"
        state: absent

The reason this is my least preferred method is that you have to parse the module’s output to manage and build your inventory. If you want to execute subsequent playbooks against different host groups in your inventory, that must be done during the playbook run using the add_host Ansible module. This does not scale well when you are provisioning many different VMs, EC2 instances, etc.
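To make that concrete, here is a minimal sketch of the pattern: register the terraform module’s result, then feed a Terraform output into add_host. The output name web_ips is a hypothetical example; adjust it to whatever outputs your Terraform configuration actually defines.

```yaml
- name: Provision with Terraform and build an in-memory inventory
  hosts: localhost
  tasks:
    - name: Execute Terraform Apply
      terraform:
        project_path: "{{ project_dir }}"
        state: present
      register: tf

    # "web_ips" is an assumed Terraform output holding the new instance IPs
    - name: Add provisioned hosts to the web group
      add_host:
        name: "{{ item }}"
        groups: web
      loop: "{{ tf.outputs.web_ips.value }}"

- name: Configure the newly provisioned hosts
  hosts: web
  tasks:
    - name: Verify connectivity
      ping:
```

Every new resource type means more of this output-parsing glue, which is why the approach becomes unwieldy at scale.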

Terraform Calling Ansible

This tends to be my preferred method for small- to medium-sized environments. Before we talk about why, let’s look at some code!

resource "null_resource" "ansible_playbook" {
  triggers = {
    instance_ids = "${join(",", aws_instance.web.*.id)}"
  }

  provisioner "local-exec" {
    command = "ansible-playbook -i inventory ./ansible/web.yml -e '{\"apache_test_message\":\"This was provisioned by ${var.user}\"}'"
  }
}

To help make Terraform idempotent, we leverage the null_resource with triggers. Null resources in Terraform are a way to run provisioners that do not belong to a specific resource. We could associate a playbook with each individual resource, but that becomes cumbersome quickly, and you lose out on the rich metadata that Terraform can provide to Ansible. A change in the trigger data is what tells Terraform when to run the provisioner. In the previous example, the local-exec provisioner runs whenever the set of web server IDs changes.

There are a few ways to have Ansible execute against the newly provisioned resources. One can place an inventory file on disk, pass the new resources to Ansible’s --limit flag, or leverage dynamic inventory. In terms of scalability and continued configuration management of the operating system, the last option is preferred. By leveraging Ansible dynamic inventory and using tags/attributes to group hosts, the user can target the correct hosts in the Ansible playbooks. This allows the user to reuse existing playbooks as part of deploying new resources and continue smoothly into day-two operations.
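As a sketch of the dynamic inventory approach on AWS, the aws_ec2 inventory plugin can build host groups from instance tags. This assumes Terraform tags each instance with a role tag (an assumption for illustration; use whatever tagging scheme your Terraform code applies):

```yaml
# aws_ec2.yml -- aws_ec2 inventory plugin configuration
plugin: aws_ec2
regions:
  - us-west-2
# Build groups from the value of each instance's "role" tag,
# e.g. instances tagged role=web land in the group role_web
keyed_groups:
  - key: tags.role
    prefix: role
```

A playbook targeting hosts: role_web then picks up every current and future instance carrying that tag, with no inventory files to maintain by hand.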

Pipelines for Integrating Terraform and Ansible

The last, most common option is leveraging a pipeline such as Jenkins/CloudBees. This option allows the greatest amount of scalability, flexibility, and features, but it does come at a cost of additional complexity. Chances are there is already a pipeline deployed in your company’s environment that could be leveraged for this as well.

Leveraging the APIs available in Terraform Enterprise and Ansible Tower via the pipeline allows the products to scale independently of each other and provides centralized points of management, auditing, etc. The pipeline can be kicked off via a simple webhook call to Jenkins/CloudBees, which allows all approval processes to function within a different toolset such as ServiceNow, Jira, etc. Manual approvals can also be placed in the pipeline itself. The pipeline provides the flexibility to add other technologies and integration points where appropriate, without having to worry whether Terraform or Ansible supports the integration point’s API. As a company starts to take automation with infrastructure as code seriously, pipelines become necessary, since CI/CD practices need to be brought into the infrastructure-as-code effort.
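To illustrate the shape of such a pipeline, here is a minimal sketch in GitLab CI YAML (chosen purely for brevity; the same two stages map directly onto a Jenkins/CloudBees pipeline, and the file names are assumptions, not a prescribed layout):

```yaml
# .gitlab-ci.yml -- illustrative provision-then-configure pipeline
stages:
  - provision
  - configure

provision:
  stage: provision
  script:
    - terraform init
    - terraform apply -auto-approve

configure:
  stage: configure
  script:
    # aws_ec2.yml refers to a dynamic inventory source; site.yml is
    # an assumed top-level playbook name
    - ansible-playbook -i aws_ec2.yml site.yml
```

Because each stage is just an API or CLI call, a manual approval gate, a ServiceNow change check, or any other integration can be inserted between the stages without either tool needing to know about it.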


In today’s technological landscape it is more important than ever to adopt the Unix philosophy: pick the tool that does one thing and does it very well. Through the use of pipelines, we can then integrate these technologies and pass needed data and metadata from step to step. Terraform is excellent at provisioning cloud-like infrastructure in a stateful manner. Ansible is excellent for orchestration and configuration management. Combined, they make an excellent solution for treating infrastructure as code.

Looking to learn more about modernizing and automating IT? We created the Kovarus Proven Solutions Center (KPSC) to let you see what’s possible and learn how we can help you succeed. To learn more about the KPSC go to the KPSC page.

Also, follow Kovarus on LinkedIn for technology updates from our experts along with updates on Kovarus news and events.