
Infrastructure as Code without Infrastructure


Infrastructure as Code (IaC) is a powerful practice – replacing manual, error-prone and expensive operations with automated, consistent and fast provisioning of resources. In many cases, IaC depends on existing infrastructure, typically including a configuration management system. Chef, Puppet and SaltStack are all commonly referenced players in this market, each requiring resources to be in place and each with its own setup and maintenance difficulties. As we move to microservices and container orchestration, our need for resource-intensive, complex tooling to provision infrastructure and application dependencies diminishes. So how do you solve the chicken-and-egg problem of standing up IaC without relying on other infrastructure?

Our solution in Amazon Web Services (AWS) was Terraform, cloud-init, Minimal Ubuntu and Ansible. Terraform was an easy choice given our existing use and expertise with the product for provisioning in AWS. We had been building Amazon Machine Images (AMIs) using Packer with a minimal set of software packages, bootstrapping systems for dynamic configuration based on their role by our configuration management system. However, every change, no matter how subtle, required building a new AMI. It also didn't save much boot time, since an agent would still configure the system dynamically on first boot. We were also spending a lot of time maintaining a configuration management system and scripts, as well as keeping up with Domain Specific Languages (DSLs).

Minimal Ubuntu

Enter Minimal Ubuntu – images designed for automating deployment at scale with an optimized kernel and boot process. Needing only to install a small set of packages, with most of our tooling at the orchestration layer, we are still able to provision a system that is ready for production traffic in under four minutes. The simplicity of these images also provides greater security and ease of administration.

Cloud-init is installed on Minimal Ubuntu, which allows further configuration of the system using user data. Given cloud-init's sparse documentation and the more sophisticated features of other configuration management systems, we were still looking for something else. Ansible became an attractive option for several reasons: a simple yet powerful approach to automation, readable configuration and templating using YAML and Jinja2 versus a DSL, and strong community contributions and industry adoption.


Most of the documentation for Ansible, though, focuses on the use of a master server that pushes configuration to clients. This doesn't solve the problem of IaC without relying on infrastructure. Maintaining dynamic inventories of clients and pushing configurations to systems in auto scaling groups – systems that need to be ready for production traffic as soon as possible – also did not make sense. Ansible has a concept of local playbooks, but little light is shed on their power and simplicity. This blog post will walk you through combining these tools to build a bastion host configured with Duo Multi-Factor Authentication (MFA) for SSH, along with a framework to easily add additional host roles. For brevity, other configuration of our bastion hosts is left out. You will want to perform further tuning and hardening depending on your environment.


Starting with Terraform (note all examples use version 0.12.x) at the account/IAM level, you will need an EC2 instance profile with access to an S3 bucket where the Ansible playbook tarball will be stored. Terraform for creating the S3 bucket is left to the reader – it is straightforward, and many examples exist for it. It is recommended to enable encryption at rest on the S3 bucket, as sensitive information may be required to bootstrap a host:

data "aws_iam_policy_document" "ansible" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.ansible.arn}/*"]
  }
}

resource "aws_iam_policy" "ansible" {
  name        = "ansible"
  description = "Access to the Ansible S3 bucket"
  policy      = data.aws_iam_policy_document.ansible.json
}

data "aws_iam_policy_document" "bastion" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "bastion" {
  name               = "bastion"
  assume_role_policy = data.aws_iam_policy_document.bastion.json
}

resource "aws_iam_role_policy_attachment" "bastion" {
  role       = aws_iam_role.bastion.name
  policy_arn = aws_iam_policy.ansible.arn
}

resource "aws_iam_instance_profile" "bastion" {
  name = aws_iam_role.bastion.name
  role = aws_iam_role.bastion.name
}
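For completeness, a minimal sketch of the encrypted bucket itself – the bucket name and KMS algorithm here are assumptions, in the AWS provider 2.x syntax that matches Terraform 0.12:

```hcl
# Sketch only: bucket name is an assumption, matching the bootstrap script's
# s3://s3-bucket-name path later in this post
resource "aws_s3_bucket" "ansible" {
  bucket = "s3-bucket-name"
  acl    = "private"

  # Encryption at rest for the playbook tarball and any secrets stored with it
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms"
      }
    }
  }
}
```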

With a policy to read the S3 bucket and an instance profile the bastion host can assume, define the bastion host EC2 instance:

resource "aws_instance" "main" {
  ami           = var.ami
  instance_type = var.instance_type

  user_data = data.template_cloudinit_config.main.rendered
  key_name  = var.ssh_key

  iam_instance_profile = "bastion"

  subnet_id                   = var.subnet_id
  vpc_security_group_ids      = [var.vpc_security_group_ids]
  associate_public_ip_address = true
}

Most variables are self-explanatory. For this exercise, we will focus on the ami and user_data values. The ami value can be found by selecting the version of Ubuntu and the AWS region for your instance here: https://wiki.ubuntu.com/Minimal.
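Alternatively, if you would rather not hard-code the AMI ID, an aws_ami data source can resolve it at plan time. The owner ID below is Canonical's AWS account; the name filter is an assumption based on Minimal Ubuntu 18.04 image naming:

```hcl
# Look up the latest bionic Minimal Ubuntu AMI (name pattern assumed)
data "aws_ami" "minimal_ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu-minimal/images/hvm-ssd/ubuntu-bionic-18.04-amd64-minimal-*"]
  }
}
```

var.ami could then default to data.aws_ami.minimal_ubuntu.id.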

The user_data value defines the cloud-init configuration:

data "aws_region" "current" {}

data "template_cloudinit_config" "main" {
  gzip          = true
  base64_encode = true

  part {
    filename     = "init.cfg"
    content_type = "text/cloud-config"
    content      = templatefile("${path.module}/cloud-init.cfg", {})
  }

  part {
    content_type = "text/x-shellscript"
    # shell template filename assumed
    content = templatefile("${path.module}/cloud-init.sh", {
      ROLE   = var.role
      ENV    = var.environment
      VPC    = var.vpc
      REGION = data.aws_region.current.name
    })
  }
}

The cloud-init.cfg specifies a minimal configuration – installing the AWS CLI tool and Ansible to handle the rest of the process:

#cloud-config

# Package configuration
apt:
  primary:
    - arches: [default]

package_update: true
package_upgrade: true

packages:
  - ansible
  - awscli

write_files:
  - path: /etc/apt/apt.conf.d/00InstallRecommends
    owner: root:root
    permissions: '0644'
    content: |
      APT::Install-Recommends "false";

The shell script following the cloud-init template downloads the Ansible playbook tarball and executes it. Variables for the environment (dev, stage, prod), VPC name and AWS region are passed to customize the configuration based on those settings. The role variable is passed as a tag to define what role the host will play, somewhat correlating to Ansible roles (explained later):

#!/bin/bash

# HOME is not defined for cloud-init
# Ansible, and likely others, don't like that
HOME=/root
export HOME

cd /opt
aws s3 cp s3://s3-bucket-name/ansible.tar.gz .
if [ $? != 0 ]; then
  echo "Error: Cannot download from S3, check instance profile."
  exit 1
fi

tar zxf ansible.tar.gz && rm -f ansible.tar.gz
ansible-playbook --connection local --inventory localhost, \
  --extra-vars env=${ENV} --extra-vars vpc=${VPC} --extra-vars region=${REGION} \
  --tags ${ROLE} ansible/site.yml

The Ansible tarball is created from a separate Git repository containing the Ansible playbook and uploaded to the secure S3 bucket. The directory layout is as follows:

    roles/                      # Ansible roles, see https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html
        common/
            tasks/
                main.yml        # Applied to all systems
        bastion/
            tasks/
                main.yml        # Bastion host "role"
        duo/
            files/
                common_auth     # /etc/pam.d/common_auth
                sshd            # /etc/pam.d/sshd
                sshd_config     # /etc/ssh/sshd_config
            tasks/
                main.yml        # Duo installation and configuration
    site.yml                    # Master playbook
    vars/                       # Variable configuration
        [environment]/          # i.e. dev, stage, prod
            main.yml            # Variables specific to an environment
            [vpc]/              # VPC name, i.e. dev-ops
                main.yml        # Variables specific to an environment and VPC
                [region]/       # i.e. us-west-2
                    main.yml    # Variables specific to the environment, VPC and region
        main.yml                # Global variables

Ansible roles provide convention over configuration to simplify units of work. We break out each package into its own role so it can be reused. We leverage Ansible tags to associate Ansible roles with our concept of a host “role”, i.e., bastion. This keeps site.yml simple and clear:

- hosts: localhost
  connection: local

  roles:
    - { role: common, tags: ["always"] }
    - { role: bastion, tags: ["bastion"] }

always is a special tag, specifying that a task should always run regardless of the tags specified at execution. It provides the mechanism to run common tasks regardless of the host “role”. For this example, we will only use roles/common/tasks/main.yml to load our variable hierarchy, but it could include tasks for creating admin users, installing default packages, etc.:

- name: Include site variables
  include_vars: vars/main.yml

- name: Include environment variables
  include_vars: vars/{{ env }}/main.yml

- name: Include VPC variables
  include_vars: vars/{{ env }}/{{ vpc }}/main.yml

- name: Include region variables
  include_vars: vars/{{ env }}/{{ vpc }}/{{ region }}/main.yml

This provides a powerful and flexible framework for defining variables at different levels. Site-level variables apply to all hosts. Variables that might differ between dev and prod (e.g., a logging host) can be defined at the environment level in vars/dev/main.yml and vars/prod/main.yml. main.yml must exist for each environment, VPC and AWS region, even if its content is just “---”. In this example, we will define one site-level variable in vars/main.yml:

aws:
  secrets: s3-bucket-name/secrets

This defines the variable aws.secrets, an S3 bucket and path for downloading files that need to be kept out of the Ansible playbook Git repository. This value can be customized per environment, VPC and/or region by moving it down the variable hierarchy. Moving on to the bastion role, roles/bastion/tasks/main.yml disables selective TCP ACKs and installs Ansible roles for software, which for this example is limited to duo:

- name: Disable selective acks (CVE-2019-11477)
  sysctl:
    name: net.ipv4.tcp_sack
    value: '0'
    state: present

- include_role:
    name: "{{ item }}"
  loop:
    - duo

Lastly, we have duo in roles/duo/tasks/main.yml:

- name: Add key
  apt_key:
    data: |
      -----BEGIN PGP PUBLIC KEY BLOCK-----
      [redacted]
      -----END PGP PUBLIC KEY BLOCK-----

- name: Add repository
  apt_repository:
    repo: deb [arch=amd64] https://pkg.duosecurity.com/Ubuntu bionic main
    state: present
    filename: duo

- name: Install
  apt:
    name: duo-unix
    state: present
    update_cache: yes

- name: Download configuration
  command: "aws s3 cp s3://{{ aws.secrets }}/{{ role_name }}/pam_duo.conf /etc/duo/pam_duo.conf"

- name: Secure configuration
  file:
    path: /etc/duo/pam_duo.conf
    owner: root
    group: root
    mode: 0600

- name: Configure PAM common
  copy:
    src: common_auth
    dest: /etc/pam.d/common_auth
    owner: root
    group: root
    mode: 0644

- name: Configure PAM sshd
  copy:
    src: sshd
    dest: /etc/pam.d/sshd
    owner: root
    group: root
    mode: 0644

- name: Configure sshd
  copy:
    src: sshd_config
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: 0644

- name: Restart sshd
  systemd:
    name: sshd
    state: restarted
    daemon_reload: yes

The duo configuration file contains secrets, so it is downloaded from the encrypted S3 bucket in the secrets/bastion path:

; This file is managed by Ansible, do not modify locally
[duo]
ikey = [redacted]
skey = [redacted]
host = [redacted]

failmode = safe

; Send command for Duo Push authentication
pushinfo = yes
autopush = yes

The remaining files are kept in version control for auditing:

# This file is managed by Ansible, do not modify locally

# /etc/pam.d/common-auth - authentication settings common to all services
# This file is included from other service-specific PAM config files,
# and should contain a list of the authentication modules that define
# the central authentication scheme for use on the system
# (e.g., /etc/shadow, LDAP, Kerberos, etc.).  The default is to use the
# traditional Unix authentication mechanisms.
# As of pam 1.0.1-6, this file is managed by pam-auth-update by default.
# To take advantage of this, it is recommended that you configure any
# local modules either before or after the default block, and use
# pam-auth-update to manage selection of other modules.  See
# pam-auth-update(8) for details.

# here are the per-package modules (the "Primary" block)
#auth	[success=1 default=ignore]	pam_unix.so nullok_secure
auth  requisite pam_unix.so nullok_secure
auth  [success=1 default=ignore] /lib64/security/pam_duo.so
# here's the fallback if no module succeeds
auth	requisite			pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
auth	required			pam_permit.so
# and here are more per-package modules (the "Additional" block)
auth	optional			pam_cap.so 
# end of pam-auth-update config

# This file is managed by Ansible, do not modify locally

# PAM configuration for the Secure Shell service

# Standard Un*x authentication.
#@include common-auth

# Disallow non-root logins when /etc/nologin exists.
account    required     pam_nologin.so

# Uncomment and edit /etc/security/access.conf if you need to set complex
# access limits that are hard to express in sshd_config.
# account  required     pam_access.so

# Standard Un*x authorization.
@include common-account

# SELinux needs to be the first session rule.  This ensures that any
# lingering context has been cleared.  Without this it is possible that a
# module could execute code in the wrong domain.
session [success=ok ignore=ignore module_unknown=ignore default=bad]        pam_selinux.so close

# Set the loginuid process attribute.
session    required     pam_loginuid.so

# Create a new session keyring.
session    optional     pam_keyinit.so force revoke

# Standard Un*x session setup and teardown.
@include common-session

# Set up user limits from /etc/security/limits.conf.
session    required     pam_limits.so

# Read environment variables from /etc/environment and
# /etc/security/pam_env.conf.
session    required     pam_env.so # [1]
# In Debian 4.0 (etch), locale-related environment variables were moved to
# /etc/default/locale, so read that as well.
session    required     pam_env.so user_readenv=1 envfile=/etc/default/locale

# SELinux needs to intervene at login time to ensure that the process starts
# in the proper default security context.  Only sessions which are intended
# to run in the user's context should be run after this.
session [success=ok ignore=ignore module_unknown=ignore default=bad]        pam_selinux.so open

# Standard Un*x password updating.
@include common-password

# Duo MFA authentication
auth  [success=1 default=ignore] /lib64/security/pam_duo.so
auth  requisite pam_deny.so
auth  required pam_permit.so

# This file is managed by Ansible, do not modify locally

# This is the sshd server system-wide configuration file.  See
# sshd_config(5) for more information.

# This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin

Protocol 2
StrictModes yes

AuthenticationMethods publickey,keyboard-interactive
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
PasswordAuthentication no

X11Forwarding yes

AcceptEnv LANG LC_*

Subsystem sftp /usr/lib/openssh/sftp-server

UsePAM yes
UseDNS no

Create the Ansible playbook tarball so that it extracts to ansible/ and upload it to the S3 bucket specified in Terraform. Apply the Terraform for IAM first, then continue to the EC2 instances. Minutes later, you will be able to log in to your bastion hosts with Duo MFA.

You now have a framework that is easy to extend – adding software packages to existing host roles, customizing configuration, and adding new host roles that consume software packages. A special thanks to @_p0pr0ck5_ for his work on the variable hierarchy loading in Ansible.
