[Infrastructure as Code (IaC)](https://en.wikipedia.org/wiki/Infrastructure_as_code) is a powerful practice, replacing manual, error-prone and expensive operations with automated, consistent and quick provisioning of resources. In many cases, IaC depends on existing infrastructure, typically including a configuration management system. Chef, Puppet and SaltStack are all commonly referenced players in this market, each requiring resources to be in place and each having its own difficulties in setup and maintenance. As we move to microservices and container orchestration, our need for resource-intensive and complex tooling to provision infrastructure and application dependencies diminishes. So how do you solve the chicken-and-egg problem of standing up IaC without relying on other infrastructure?
Enter Minimal Ubuntu: images designed for automated deployment at scale, with an optimized kernel and boot process. Needing only to install a small set of packages, with most of our tooling at the orchestration layer, we are still able to provision a system that is ready for production traffic in under four minutes. The simplicity of these images also provides greater security and ease of administration.
Cloud-init is installed on Minimal Ubuntu, which allows further configuration of the system using user data. However, given cloud-init's sparse documentation and its lack of the more sophisticated features found in other configuration management systems, we were still looking for something else. Ansible became an attractive option for several reasons: a simple yet powerful approach to automation, readable configuration and templating using YAML and Jinja2 rather than a DSL, and strong community contributions and industry adoption.
## Ansible
Most of the documentation for Ansible, though, focuses on the use of a master server that pushes configuration to clients. This doesn't solve the problem of IaC without relying on other infrastructure. Maintaining dynamic inventories of clients and pushing configuration to systems in auto scaling groups that need to be ready for production traffic as soon as possible also did not make sense. Ansible has the concept of local playbooks, but little light is shed on their power and simplicity. This blog post walks you through combining these tools to build a bastion host configured with [Duo Multi-Factor Authentication (MFA)](https://duo.com/product/multi-factor-authentication-mfa) for SSH, along with a framework that makes it easy to add additional host roles. For brevity, other configuration of our bastion hosts is left out. You will want to perform further tuning and hardening depending on your environment.
## Terraform
Starting with Terraform (note all examples use version 0.12.x) at the account/IAM level, you will need an EC2 instance profile with access to an S3 bucket where the Ansible playbook [tarball](https://en.wikipedia.org/wiki/Tar_(computing)) will be stored. The Terraform for creating the S3 bucket is left to the reader - it is straightforward, and many examples exist for it. It is recommended to enable encryption at rest on the S3 bucket, as sensitive information may be required to bootstrap a host.
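For completeness, a minimal sketch of such a bucket might look like the following (the bucket name is illustrative, and the inline encryption block assumes a 0.12-era AWS provider):

resource "aws_s3_bucket" "ansible" {
  # Illustrative name - bucket names must be globally unique
  bucket = "s3-bucket-name"
  acl    = "private"

  # Encrypt the playbook tarball and any bootstrap secrets at rest
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms"
      }
    }
  }
}

With the bucket in place, an IAM policy, role and instance profile grant the bastion host read access: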
data "aws_iam_policy_document""ansible"{ statement { actions = ["s3:ListBucket","s3:GetObject",] resources = ["${aws_s3_bucket.ansible.arn}/*"]}}resource "aws_iam_policy""ansible"{ name = "ansible" description = "Access to the Ansible S3 bucket" policy = data.aws_iam_policy_document.ansible.json
}data "aws_iam_policy_document""bastion"{ statement { actions = ["sts:AssumeRole"] principals { type = "Service" identifiers = ["ec2.amazonaws.com"]}}}resource "aws_iam_role""bastion"{ name = "bastion" assume_role_policy = data.aws_iam_policy_document.main.json
}resource "aws_iam_role_policy_attachment""bastion"{ role = aws_iam_role.bastion.name
policy_arn = aws_iam_policy.ansible.arn
}resource "aws_iam_instance_profile""bastion"{ name = aws_iam_role.bastion.name
role = aws_iam_role.bastion.name
}}
With a policy to read the S3 bucket and an instance profile the bastion host can assume, define the bastion host EC2 instance:
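The instance definition itself will vary by environment; a minimal sketch, assuming hypothetical variables for the AMI, subnet and key pair, and a user data template rendered with templatefile(), might look like:

resource "aws_instance" "bastion" {
  ami                  = var.ami_id   # a Minimal Ubuntu AMI
  instance_type        = "t3.micro"
  subnet_id            = var.subnet_id
  key_name             = var.key_name
  iam_instance_profile = aws_iam_instance_profile.bastion.name

  # Render the bootstrap script, passing environment, VPC, region and role
  user_data = templatefile("${path.module}/templates/bootstrap.sh.tpl", {
    ENV    = var.env
    VPC    = var.vpc
    REGION = var.region
    ROLE   = "bastion"
  })

  tags = {
    Name = "bastion"
    Role = "bastion"
  }
}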
The shell script rendered as the cloud-init user data downloads the Ansible playbook tarball and executes it. Variables for the *environment* (dev, stage, prod), *VPC name* and *AWS region* are passed in to customize the configuration for those settings. The *role* variable is passed as a tag to define what role the host will play, roughly corresponding to Ansible roles (explained later):
#!/bin/sh
# HOME is not defined for cloud-init
# Ansible, and likely others, don't like that
HOME=/root
export HOME
cd /opt
aws s3 cp s3://s3-bucket-name/ansible.tar.gz .
if [ $? != 0 ]; then
    echo "Error: Cannot download from S3, check instance profile."
    exit 1
fi
tar zxf ansible.tar.gz && rm -f ansible.tar.gz
ansible-playbook --connection local --inventory 127.0.0.1, \
--extra-vars env=${ENV} --extra-vars vpc=${VPC} --extra-vars region=${REGION} \
--tags ${ROLE} ansible/site.yml
The Ansible tarball is created from another Git repository with the Ansible playbook and uploaded to the secure S3 bucket. The directory layout is as follows:
ansible/
    roles/        # Ansible roles, see https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html
        common/
            tasks/
                main.yml          # Applied to all systems
        bastion/
            tasks/
                main.yml          # Bastion host "role"
        duo/
            files/
                common_auth       # /etc/pam.d/common-auth
                sshd              # /etc/pam.d/sshd
                sshd_config       # /etc/ssh/sshd_config
            tasks/
                main.yml
    site.yml      # Master playbook
    vars/         # Variable configuration
        [environment]/            # i.e. dev, stage, prod
            main.yml              # Variables specific to an environment
            [vpc]/                # VPC name, i.e. dev-ops
                main.yml          # Variables specific to an environment and VPC
                [region]/         # i.e. us-west-2
                    main.yml      # Variables specific to the environment, VPC and region
        main.yml                  # Global variables
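The master playbook *site.yml* is not reproduced in full here; a minimal sketch that wires the roles to tags (the hosts pattern and privilege escalation are assumptions) might be:

---
- hosts: all
  become: true
  roles:
    # The always tag runs common regardless of which --tags are passed
    - { role: common, tags: always }
    # Host roles, selected at run time via --tags
    - { role: bastion, tags: bastion }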
*always* is a special tag, specifying that a task always runs regardless of the tags specified at execution. It provides the mechanism to run common tasks regardless of the host *role*. For this example, we will only use *roles/common/tasks/main.yml* to load our variable hierarchy, but it could include tasks for creating admin users, installing default packages, etc.:
---
- name: Include site variables
include_vars: vars/main.yml
- name: Include environment variables
include_vars: vars/{{ env }}/main.yml
- name: Include VPC variables
include_vars: vars/{{ env }}/{{ vpc }}/main.yml
- name: Include region variables
include_vars: vars/{{ env }}/{{ vpc }}/{{ region }}/main.yml
This provides a powerful and flexible framework for defining variables at different levels. Site-level variables apply to all hosts. Variables that might differ between dev and prod (e.g., the logging host) can be defined at the environment level in *vars/dev/main.yml* and *vars/prod/main.yml*. A *main.yml* must exist for each environment, VPC and AWS region, even if its content is just "---". In this example, we will define one site-level variable in *vars/main.yml*:
---
aws:
  secrets: s3-bucket-name/secrets
This defines the variable *aws.secrets*, an S3 bucket and path for downloading files that need to be kept secure outside of the Ansible playbook Git repository. This value can be customized per environment, VPC and/or region by moving it down the variable hierarchy. Moving on to the bastion role, *roles/bastion/tasks/main.yml* disables selective TCP ACKs and installs Ansible roles for software, which for this example is limited to duo:
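That tasks file is not reproduced in full; a minimal sketch, assuming the sysctl module for the TCP tweak and include_role to pull in duo, might look like:

---
# Disable TCP selective acknowledgements as a hardening measure
- name: Disable selective TCP ACKs
  sysctl:
    name: net.ipv4.tcp_sack
    value: "0"
    state: present
    reload: yes

# Software roles consumed by the bastion host role - only duo in this example
- name: Install Duo MFA
  include_role:
    name: duo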
The duo configuration file contains secrets, so it is downloaded from the encrypted S3 bucket in the *secrets/bastion* path:
; This file is managed by Ansible, do not modify locally
[duo]
ikey = [redacted]
skey = [redacted]
host = [redacted]
failmode = safe
; Send command for Duo Push authentication
pushinfo = yes
autopush = yes
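The duo tasks in *roles/duo/tasks/main.yml* are likewise not shown; a minimal sketch, assuming the duo-unix package, the aws CLI for the secret download and /etc/security/pam_duo.conf as the destination path, might be:

---
- name: Install the Duo Unix PAM module
  apt:
    name: duo-unix
    state: present
    update_cache: yes

# pam_duo.conf contains secrets, so pull it from the encrypted S3 bucket
- name: Download Duo configuration
  command: aws s3 cp s3://{{ aws.secrets }}/bastion/pam_duo.conf /etc/security/pam_duo.conf

- name: Restrict permissions on the Duo configuration
  file:
    path: /etc/security/pam_duo.conf
    owner: root
    group: root
    mode: "0600"

# The PAM and SSH configuration files are kept in the role's files/ directory
- name: Install PAM and SSH configuration
  copy:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    owner: root
    group: root
    mode: "0644"
  loop:
    - { src: common_auth, dest: /etc/pam.d/common-auth }
    - { src: sshd, dest: /etc/pam.d/sshd }
    - { src: sshd_config, dest: /etc/ssh/sshd_config }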
The remaining files are kept in version control for auditing:
# This file is managed by Ansible, do not modify locally
# /etc/pam.d/common-auth - authentication settings common to all services
#
# This file is included from other service-specific PAM config files,
# and should contain a list of the authentication modules that define
# the central authentication scheme for use on the system
# (e.g., /etc/shadow, LDAP, Kerberos, etc.). The default is to use the
# traditional Unix authentication mechanisms.
#
# As of pam 1.0.1-6, this file is managed by pam-auth-update by default.
# To take advantage of this, it is recommended that you configure any
# local modules either before or after the default block, and use
# pam-auth-update to manage selection of other modules. See
# pam-auth-update(8) for details.
# here are the per-package modules (the "Primary" block)
#auth [success=1 default=ignore] pam_unix.so nullok_secure
auth requisite pam_unix.so nullok_secure
auth [success=1 default=ignore] /lib64/security/pam_duo.so
# here's the fallback if no module succeeds
auth requisite pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
auth required pam_permit.so
# and here are more per-package modules (the "Additional" block)
auth optional pam_cap.so
# end of pam-auth-update config
# This file is managed by Ansible, do not modify locally
# PAM configuration for the Secure Shell service
# Standard Un*x authentication.
#@include common-auth
# Disallow non-root logins when /etc/nologin exists.
account required pam_nologin.so
# Uncomment and edit /etc/security/access.conf if you need to set complex
# access limits that are hard to express in sshd_config.
# account required pam_access.so
# Standard Un*x authorization.
@include common-account
# SELinux needs to be the first session rule. This ensures that any
# lingering context has been cleared. Without this it is possible that a
# module could execute code in the wrong domain.
session [success=ok ignore=ignore module_unknown=ignore default=bad] pam_selinux.so close
# Set the loginuid process attribute.
session required pam_loginuid.so
# Create a new session keyring.
session optional pam_keyinit.so force revoke
# Standard Un*x session setup and teardown.
@include common-session
# Set up user limits from /etc/security/limits.conf.
session required pam_limits.so
# Read environment variables from /etc/environment and
# /etc/security/pam_env.conf.
session required pam_env.so # [1]
# In Debian 4.0 (etch), locale-related environment variables were moved to
# /etc/default/locale, so read that as well.
session required pam_env.so user_readenv=1 envfile=/etc/default/locale
# SELinux needs to intervene at login time to ensure that the process starts
# in the proper default security context. Only sessions which are intended
# to run in the user's context should be run after this.
session [success=ok ignore=ignore module_unknown=ignore default=bad] pam_selinux.so open
# Standard Un*x password updating.
@include common-password
# Duo MFA authentication
auth [success=1 default=ignore] /lib64/security/pam_duo.so
auth requisite pam_deny.so
auth required pam_permit.so
# This file is managed by Ansible, do not modify locally
# This is the sshd server system-wide configuration file. See
# sshd_config(5) for more information.
# This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin
Protocol 2
StrictModes yes
AuthenticationMethods publickey,keyboard-interactive
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
PasswordAuthentication no
X11Forwarding yes
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
UsePAM yes
UseDNS no
Create the Ansible playbook tarball so that it extracts to ansible/ and upload it to the S3 bucket specified in the Terraform. Apply the Terraform for IAM first, and then continue to the EC2 instances. Minutes later, you will be able to log in to your bastion hosts with Duo MFA.
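As a rough example (the bucket name and paths are illustrative), packaging and uploading might look like:

# From the repository containing the ansible/ directory
tar zcf ansible.tar.gz ansible/
aws s3 cp ansible.tar.gz s3://s3-bucket-name/ansible.tar.gz

# Apply the IAM pieces first, then the rest (assuming a single Terraform configuration)
terraform apply -target=aws_iam_instance_profile.bastion
terraform apply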
You now have a framework that is easy to extend: adding software packages to existing host roles, customizing configuration, and adding new host roles that consume those software packages. A special thanks to [@_p0pr0ck5_](https://twitter.com/_p0pr0ck5_) for his work on the variable hierarchy loading in Ansible.