Terraform tips (1)

In the past two sprints (each sprint is two weeks, but the last one was a bit messier and longer) I have been working on some Terraform scripts. We exclusively use the AWS provider. The workflow is like this:

— A Git repository holds all the Terraform scripts, the shell wrapper scripts, the Docker configurations (Dockerfiles), and the Jenkins pipeline configurations.

— There are two types of Jenkins pipelines. The first builds one or more Docker images, copying the artifact from another Jenkins job. The tip here: distributing the Dockerfile together with the artifact can be helpful, since it can then refer to the exact artifact file name after Maven's processing. On the other hand, putting the Dockerfile in the same place as the Jenkins pipeline scripts makes it really easy to update (the two often need to change together), especially in an environment like ours -- it takes time to get a PR merged into a source code repository, and a lot more time to get the artifact rebuilt just to update a Dockerfile. (A build sketch follows this list.)

— The other Jenkins pipeline calls Terraform. We don't install Terraform on the server; instead the pipeline uses the Docker image "hashicorp/terraform". And we don't just run the default entrypoint, but go through multiple steps: 1. (before running Docker) fetch the latest code from the Git repository; 2. mount the source code read-only into the container; 3. copy the source code (it could have been symlinked, since the runtime is all Linux) into a temporary folder; 4. set the environment so it can access AWS resources; 5. run shell scripts to set more environment and select modules. Then, for each module, generate scripts and configurations and select the remote state file location. "tf plan" and "tf apply" are the last steps.
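Going back to the image-building pipeline: here is a minimal sketch of the "Dockerfile travels with the artifact" option. The artifact directory, image tag, and file names are hypothetical, not our actual setup:

    #!/bin/bash
    # Hypothetical build step: the upstream Jenkins job archived both the
    # Maven-built artifact and the Dockerfile that knows its exact name.
    set -euo pipefail

    ARTIFACT_DIR=artifact                 # populated from the upstream job
    IMAGE_TAG=myapp:${BUILD_NUMBER:-dev}  # BUILD_NUMBER comes from Jenkins

    # Because the Dockerfile lives next to the artifact, it can COPY the
    # exact file name Maven produced (e.g. myapp-1.2.3-SNAPSHOT.jar).
    docker build -t "$IMAGE_TAG" -f "$ARTIFACT_DIR/Dockerfile" "$ARTIFACT_DIR"

With the other option, the Dockerfile would instead sit next to the Jenkinsfile in this repository and take the artifact name as a build argument.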

That Terraform pipeline is quite something to set up. The sole purpose is to keep the Jenkins workspace "clean", because Terraform writes a lot of intermediate files into the same folder as the ".tf" files (let's call such a folder a module). Luckily I was only the implementer of this plan; I did not have to design it from scratch.
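A rough sketch of what this wrapper amounts to is below. The image tag, paths, module name and backend key are made up, and the real pipeline does more (module selection, generated configuration), but it shows the read-only mount, the copy to a temporary folder, and the remote state selection:

    #!/bin/bash
    # Sketch of running Terraform from the hashicorp/terraform image
    # without dirtying the Jenkins workspace. Names are hypothetical.
    set -euo pipefail

    SRC=$WORKSPACE/terraform   # checked out by Jenkins, mounted read-only
    MODULE=network             # chosen by the wrapper scripts

    docker run --rm \
      -v "$SRC":/src:ro \
      -e MODULE="$MODULE" \
      -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN \
      --entrypoint /bin/sh \
      hashicorp/terraform -c '
        set -e
        # Work in a throwaway copy so .terraform/, lock files and plan
        # files never land in the mounted workspace.
        cp -r "/src/modules/$MODULE" /tmp/work && cd /tmp/work
        terraform init -backend-config="key=$MODULE/terraform.tfstate"
        terraform plan -out=tfplan
        terraform apply tfplan
      '

The copy step is what keeps ".terraform/" and friends out of the workspace; the mount itself stays read-only so nothing can write back into it by accident.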

We have had three releases with these scripts. Each release took me about three hours: some of it instructing engineering to update the production Jenkins job, but more of it spent on trial and error -- if Terraform borked, manually forgetting or importing resources. And the time before the releases? Painful. As I complained on Twitter, I once had to run a Jenkins job more than 50 times in a workday to make some mix of Groovy, bash, sh, and tf run. I thought I was familiar with bash, but I had never used it as fluently as real work required. It is much better now: a new module or a big change can be added in a day or two. That is why I want to put down everything I have learned in the process.
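(For the record, "forget or import" means the usual Terraform state surgery; the resource address and bucket name below are made up.)

    # Make Terraform forget a resource it can no longer manage cleanly;
    # the real resource stays untouched in AWS.
    terraform state rm aws_s3_bucket.logs

    # Or the other direction: adopt an existing AWS resource into the state.
    terraform import aws_s3_bucket.logs my-existing-log-bucket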
