Managing User Data in EC2

In this example I'm going to provision a new EC2 instance with an EBS volume attached. When mounting the volume, I'm going to check whether a file system already exists on the EBS volume; if not, I will format the disk first. To make things easier, I will install LVM (Logical Volume Manager) before the volume is mounted.

First let's create a file with the name vars.tf,

variable "AWS_REGION" {
  default = "eu-west-1"
}

variable "PATH_TO_PRIVATE_KEY" {
  default = "mykey"
}

variable "PATH_TO_PUBLIC_KEY" {
  default = "mykey.pub"
}

variable "AMIS" {
  type = map(string)
  default = {
    us-east-1 = "ami-13be557e"
    us-west-2 = "ami-06b94666"
    eu-west-1 = "ami-844e0bf7"
  }
}

variable "INSTANCE_DEVICE_NAME" {
  default = "/dev/xvdh"
}

Next we will define our AWS provider in a file named provider.tf,
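A minimal provider.tf, assuming the region is taken from the AWS_REGION variable defined above:

```hcl
provider "aws" {
  region = var.AWS_REGION
}
```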

Then let's create a file with the name vpc.tf to define our VPC,
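A sketch of what vpc.tf could contain: a VPC with one public subnet, an internet gateway, and a route table. The CIDR ranges and the eu-west-1a availability zone are illustrative assumptions.

```hcl
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

# One public subnet; instances launched here get a public IP
resource "aws_subnet" "main-public-1" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
  availability_zone       = "eu-west-1a"
}

resource "aws_internet_gateway" "main-gw" {
  vpc_id = aws_vpc.main.id
}

# Route all outbound traffic through the internet gateway
resource "aws_route_table" "main-public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main-gw.id
  }
}

resource "aws_route_table_association" "main-public-1-a" {
  subnet_id      = aws_subnet.main-public-1.id
  route_table_id = aws_route_table.main-public.id
}
```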

Let's define our security groups in a file with the name securitygroup.tf,
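A possible securitygroup.tf, assuming we only need SSH in and unrestricted traffic out:

```hcl
resource "aws_security_group" "allow-ssh" {
  vpc_id      = aws_vpc.main.id
  name        = "allow-ssh"
  description = "Allow SSH in, all traffic out"

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # tighten this to your own IP in real use
  }
}
```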

A file to load our public key key.tf,
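key.tf can simply read the public key from the path defined in vars.tf:

```hcl
resource "aws_key_pair" "mykeypair" {
  key_name   = "mykeypair"
  public_key = file(var.PATH_TO_PUBLIC_KEY)
}
```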

A file to define our cloud-init scripts cloudinit.tf,
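A sketch of cloudinit.tf, assuming the template provider's template_file and template_cloudinit_config data sources are used to render the two scripts and combine them into a single multipart user data payload:

```hcl
# Renders the cloud-init config, passing in the AWS region
data "template_file" "init-script" {
  template = file("scripts/init.cfg")
  vars = {
    REGION = var.AWS_REGION
  }
}

# Renders the shell script, passing in the EBS device name
data "template_file" "shell-script" {
  template = file("scripts/volumes.sh")
  vars = {
    DEVICE = var.INSTANCE_DEVICE_NAME
  }
}

# Combines the two rendered parts into one cloud-init payload
data "template_cloudinit_config" "cloudinit-example" {
  gzip          = false
  base64_encode = false

  part {
    filename     = "init.cfg"
    content_type = "text/cloud-config"
    content      = data.template_file.init-script.rendered
  }

  part {
    content_type = "text/x-shellscript"
    content      = data.template_file.shell-script.rendered
  }
}
```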

Here I have two data blocks referring to the scripts that I want executed. The first one is the init script; to it I pass the AWS region as an argument. The second one is a shell script that accepts the device name as an argument. Then I have another data block with two parts, each referring to the rendered output of one of the first two data blocks.

Then a cloud-init config file scripts/init.cfg,
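A minimal init.cfg matching the description below; the output redirection line is an assumption (on Amazon Linux, cloud-init logs to /var/log/cloud-init-output.log by default):

```yaml
#cloud-config
repo_update: true
repo_upgrade: all

packages:
  - lvm2
  - docker

output:
  all: '| tee -a /var/log/cloud-init-output.log'
```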

This is a typical cloud-init config file. First I update the package repositories and upgrade all packages to their latest versions. Then I install two packages: docker and lvm2. All output generated while executing the cloud-init scripts is directed to /var/log/cloud-init-output.log.

Let's make a script to format and mount the volume, scripts/volumes.sh,
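A sketch of what volumes.sh might look like. Note that ${DEVICE} is substituted by Terraform's template rendering, not by the shell; the volume group and logical volume names (data, volume1) and the /data mount point are illustrative assumptions. This script is destructive on a blank disk, so treat it as an outline rather than a drop-in implementation.

```bash
#!/bin/bash

# Refresh LVM state so any existing volume groups become active
vgchange -ay

# ${DEVICE} is filled in by Terraform's template_file data source
DEVICE_FS=$(blkid -o value -s TYPE ${DEVICE})
if [ "$DEVICE_FS" = "" ]; then
  # No file system found: wait until the device is attached, then format it
  DEVICENAME=$(echo "${DEVICE}" | awk -F '/' '{print $3}')
  DEVICEEXISTS=''
  while [[ -z $DEVICEEXISTS ]]; do
    echo "checking $DEVICENAME"
    DEVICEEXISTS=$(lsblk | grep "$DEVICENAME" | wc -l)
    if [[ $DEVICEEXISTS != "1" ]]; then
      sleep 15
    fi
  done
  pvcreate ${DEVICE}
  vgcreate data ${DEVICE}
  lvcreate --name volume1 -l 100%FREE data
  mkfs.ext4 /dev/data/volume1
fi

# Create the mount point if needed and mount the volume persistently
mkdir -p /data
echo '/dev/data/volume1 /data ext4 defaults 0 0' >> /etc/fstab
mount /data
```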

Next, we refresh the state of LVM with the vgchange -ay command. Then we check whether a file system exists on our EBS volume. If no file system is found, we format the disk first; otherwise we use it as is. Finally, we create the data directory if it doesn't exist, and mount the volume there.

Finally, let's define our EC2 instance in a file with the name instance.tf,
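A sketch of instance.tf, assuming a t2.micro instance and a 20 GB gp2 volume; the resource names are illustrative, and the volume must live in the same availability zone as the subnet:

```hcl
resource "aws_instance" "example" {
  ami           = var.AMIS[var.AWS_REGION]
  instance_type = "t2.micro"

  subnet_id              = aws_subnet.main-public-1.id
  vpc_security_group_ids = [aws_security_group.allow-ssh.id]
  key_name               = aws_key_pair.mykeypair.key_name

  # The rendered cloud-init payload (init.cfg + volumes.sh)
  user_data = data.template_cloudinit_config.cloudinit-example.rendered
}

resource "aws_ebs_volume" "ebs-volume-1" {
  availability_zone = "eu-west-1a"
  size              = 20
  type              = "gp2"
  tags = {
    Name = "extra volume data"
  }
}

resource "aws_volume_attachment" "ebs-volume-1-attachment" {
  device_name = var.INSTANCE_DEVICE_NAME
  volume_id   = aws_ebs_volume.ebs-volume-1.id
  instance_id = aws_instance.example.id

  # Detach on destroy without wiping; the data survives re-attachment
  skip_destroy = true
}
```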

In the above file, I use the user_data property to pass in the user data, taking the rendered output of the cloud-init config. Notice that I've taken the device name in the volume attachment from a variable; that's because I'm using the same value in the cloud-init scripts as well. Also notice that I've set the skip_destroy property to true in the volume attachment. The reason is that my intention is to format the disk only when there is no data on it at mount time, i.e. the first time; otherwise I just attach the disk with its existing data.

Generate ssh keys,
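This produces the mykey / mykey.pub pair that vars.tf points to (ssh-keygen will prompt for an optional passphrase):

```bash
ssh-keygen -f mykey
```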

Initialize the providers,
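From the directory containing the .tf files:

```bash
terraform init
```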

Apply the changes,
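Review the plan Terraform prints and confirm:

```bash
terraform apply
```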

Don't forget to clean up once experiments are done,
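This tears down everything created above (with skip_destroy set, the volume is detached rather than scrubbed, but terraform destroy still deletes the aws_ebs_volume resource itself):

```bash
terraform destroy
```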
