In this example I'm going to provision a new EC2 instance with an EBS volume attached. While mounting the volume, I'm going to check whether a file system already exists on the EBS volume; if not, I will format the disk. To make things easier I will install LVM (Linux Volume Manager) and use it when mounting the volume.
A file to define our cloud-init scripts cloudinit.tf,
data "template_file" "init-script" {
template = file("scripts/init.cfg")
vars = {
REGION = var.AWS_REGION
}
}
data "template_file" "shell-script" {
template = file("scripts/volumes.sh")
vars = {
DEVICE = var.INSTANCE_DEVICE_NAME
}
}
data "template_cloudinit_config" "cloudinit-example" {
gzip = false
base64_encode = false
part {
filename = "init.cfg"
content_type = "text/cloud-config"
content = data.template_file.init-script.rendered
}
part {
content_type = "text/x-shellscript"
content = data.template_file.shell-script.rendered
}
}
Here I have two data blocks referring to the scripts that I want to get executed. The first one is the init script; I'm passing the AWS region to it as an argument. The next one is a shell script that accepts the device name as an argument. Then I have another data block with two parts that refer to the rendered output of the first two data blocks.
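For reference, these templates (and the instance definition further down) rely on a few input variables that aren't shown in this post. A minimal vars.tf sketch; the variable names match the references above, but the defaults and the AMI ID are placeholders of my own and should be adjusted to your setup:
variable "AWS_REGION" {
  default = "eu-west-1"
}
variable "INSTANCE_DEVICE_NAME" {
  # device name the EBS volume will be exposed as on the instance
  default = "/dev/xvdh"
}
variable "AMIS" {
  type = map(string)
  default = {
    # placeholder AMI ID; use one that exists in your region
    "eu-west-1" = "ami-xxxxxxxxxxxxxxxxx"
  }
}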
Then a cloud-init config file scripts/init.cfg,
#cloud-config
repo_update: true
repo_upgrade: all
packages:
  - docker.io
  - lvm2
output:
  all: '| tee -a /var/log/cloud-init-output.log'
This is a fairly standard cloud-config file. Here I first update the package repos and upgrade all packages to their latest versions. Then I install two packages: docker and lvm2. All output generated while executing the cloud-init scripts is directed to /var/log/cloud-init-output.log.
Let's make a script to set up and mount the volume, scripts/volumes.sh,
#!/bin/bash
set -ex

# activate any LVM volume groups that already exist on attached disks
vgchange -ay

DEVICE_FS=`blkid -o value -s TYPE ${DEVICE} || echo ""`
if [ "`echo -n $DEVICE_FS`" == "" ] ; then
  # wait for the device to be attached
  DEVICENAME=`echo "${DEVICE}" | awk -F '/' '{print $3}'`
  DEVICEEXISTS=''
  while [[ -z $DEVICEEXISTS ]]; do
    echo "checking $DEVICENAME"
    DEVICEEXISTS=`lsblk | grep "$DEVICENAME" | wc -l`
    if [[ $DEVICEEXISTS != "1" ]]; then
      sleep 15
      DEVICEEXISTS=''
    fi
  done
  # no file system found: create an LVM volume group and logical volume, then format it
  pvcreate ${DEVICE}
  vgcreate data ${DEVICE}
  lvcreate --name volume1 -l 100%FREE data
  mkfs.ext4 /dev/data/volume1
fi
mkdir -p /data
echo '/dev/data/volume1 /data ext4 defaults 0 0' >> /etc/fstab
mount /data

# install docker
curl https://get.docker.com | bash
First, we activate any existing LVM volume groups with the vgchange -ay command. Then we check whether the EBS volume already has a file system. If it doesn't, we wait for the device to be attached, create an LVM volume group and a logical volume on it, and format it with ext4; otherwise we use it as is. Then we create the /data directory if it doesn't exist, add an entry to /etc/fstab, and mount it. Finally, the script installs Docker using the get.docker.com convenience script.
Finally, let's define our EC2 instance in a file with the name instance.tf,
resource "aws_instance" "example" {
ami = var.AMIS[var.AWS_REGION]
instance_type = "t2.micro"
# the VPC subnet
subnet_id = aws_subnet.main-public-1.id
# the security group
vpc_security_group_ids = [aws_security_group.allow-ssh.id]
# the public SSH key
key_name = aws_key_pair.mykeypair.key_name
# user data
user_data = data.template_cloudinit_config.cloudinit-example.rendered
}
resource "aws_ebs_volume" "ebs-volume-1" {
availability_zone = "eu-west-1a"
size = 20
type = "gp2"
tags = {
Name = "extra volume data"
}
}
resource "aws_volume_attachment" "ebs-volume-1-attachment" {
device_name = var.INSTANCE_DEVICE_NAME
volume_id = aws_ebs_volume.ebs-volume-1.id
instance_id = aws_instance.example.id
skip_destroy = true # skip destroy to avoid issues with terraform destroy
}
In the above file, I use the user_data property to pass in the user data; this is done by taking the rendered output of the cloud-init config. You can see that the device name in the volume attachment comes from a variable. That's because I'm using it in the cloud-init scripts as well. Also notice that I've set the skip_destroy property to true on the volume attachment. The reason behind this is that my intention is to format the disk only if there is no data on it when mounting, i.e. the first time, and otherwise just attach the disk with its existing data.
Generate ssh keys,
$ ssh-keygen -f mykey
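The instance above also references aws_key_pair.mykeypair, which isn't shown in this post. A minimal sketch of that resource, assuming the public key generated by the command above sits next to the Terraform files (the key name here is my own choice):
resource "aws_key_pair" "mykeypair" {
  key_name   = "mykeypair"
  public_key = file("mykey.pub")
}
The subnet and security group referenced in instance.tf are assumed to come from the rest of the VPC configuration, which isn't covered here.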
Initialize the providers,
$ terraform init
Apply the changes,
$ terraform apply
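Once the apply finishes, you can SSH into the instance and confirm that the volume was formatted and mounted and that Docker is running. A rough check, assuming an Ubuntu AMI (default user ubuntu) and the instance's public IP taken from the AWS console:
$ ssh -i mykey ubuntu@<instance-public-ip>
$ lsblk                # the EBS device should show an LVM child named data-volume1
$ df -h /data          # /dev/mapper/data-volume1 should be mounted on /data
$ sudo docker info     # docker was installed by the user data script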
Don't forget to clean up once the experiments are done,
$ terraform destroy