Friday, November 21, 2025

Learning Infrastructure as Code with Terraform

After automating my container builds with GitHub Actions, I faced a new challenge: how do I deploy these containers to AWS without clicking through endless console menus? The answer was Terraform, a tool that lets you define infrastructure in code files. Today I went from manual AWS console work to managing infrastructure like a developer manages code.

The Mental Shift

I've clicked through the AWS console plenty while studying for my Solutions Architect certification. Create a bucket here, configure some settings there, click save. But this approach doesn't scale and leaves no record of what you did. Infrastructure as code flips this model completely. You describe what you want in a file, and Terraform figures out how to make it happen.

The hardest part wasn't the syntax. It was changing how I think about infrastructure. Instead of "first do this, then do that," I had to think "here's the end state I want" and let Terraform work out the steps. This declarative approach felt strange at first but quickly became natural.

Getting Started

I installed Terraform with Homebrew and created my first project to make a simple S3 bucket. The configuration file was surprisingly readable: define an AWS provider, describe a bucket with some properties, and that's it. Running terraform init downloaded the AWS provider plugin. Running terraform plan showed me what would be created. Running terraform apply actually created it.
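That first configuration looked roughly like this (the bucket name, region, and tags here are placeholders, not the exact values I used):

```hcl
# main.tf — a minimal first Terraform project
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # placeholder region
}

resource "aws_s3_bucket" "demo" {
  # S3 bucket names must be globally unique; this one is illustrative
  bucket = "my-terraform-demo-bucket"

  tags = {
    Environment = "learning"
    ManagedBy   = "terraform"
  }
}
```

From there the workflow is exactly the three commands above: terraform init in the project directory, then terraform plan, then terraform apply.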

That first successful apply was satisfying. I wrote about ten lines of configuration, ran a few commands, and infrastructure appeared in AWS. No console clicking required.

The Power of Plan and Apply

The terraform plan command became my favorite feature immediately. It shows exactly what will change before anything actually happens. Resources marked with a plus will be created. A tilde means modified. A minus means destroyed. This preview eliminated the fear of making changes. I could see the impact before committing to it.
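An abbreviated sketch of what that preview looks like for a new bucket (real output lists many more attributes):

```
Terraform will perform the following actions:

  # aws_s3_bucket.demo will be created
  + resource "aws_s3_bucket" "demo" {
      + bucket = "my-terraform-demo-bucket"
      ...
    }

Plan: 1 to add, 0 to change, 0 to destroy.
```

The summary line at the bottom is the quick sanity check: if you expected one addition and see a destroy, stop and read the details before applying.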

I practiced by creating multiple resources, modifying their properties, and watching Terraform show me the precise differences. The tool understood dependencies automatically. When I created a file inside a bucket, Terraform knew to create the bucket first without me specifying the order.
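The dependency tracking comes from references between resources. In this sketch (names are illustrative), the object refers to the bucket's id, which is enough for Terraform to infer the creation order:

```hcl
resource "aws_s3_bucket" "demo" {
  bucket = "my-terraform-demo-bucket"
}

# Referencing aws_s3_bucket.demo.id creates an implicit dependency:
# Terraform creates the bucket before the object, with no explicit
# ordering on my part.
resource "aws_s3_object" "readme" {
  bucket  = aws_s3_bucket.demo.id
  key     = "README.txt"
  content = "Managed by Terraform"
}
```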

Variables and State

Hardcoding values isn't scalable, so I learned about variables. I created a variables.tf file defining configurable values like region, environment, and resource names. A separate terraform.tfvars file set the actual values. This separation means I can reuse the same infrastructure code across different projects just by changing the variable file.
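The split looks something like this (variable names and values here are examples, not my exact setup):

```hcl
# variables.tf — declare what is configurable
variable "region" {
  type    = string
  default = "us-east-1"
}

variable "environment" {
  type = string
}

variable "bucket_name" {
  type = string
}

# terraform.tfvars — set the actual values for this project
# region      = "us-west-2"
# environment = "dev"
# bucket_name = "my-project-dev-assets"
```

Resources then reference these as var.region, var.environment, and so on, so swapping the tfvars file retargets the whole configuration.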

The state file was the key to understanding how Terraform works. After creating resources, Terraform writes a JSON file tracking everything it manages. This state lets Terraform compare what exists to what you want and calculate the minimal set of changes needed. It's Terraform's memory, and losing it means Terraform forgets what it created.
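Peeking inside terraform.tfstate makes this concrete. Heavily trimmed, the structure looks roughly like this:

```json
{
  "version": 4,
  "resources": [
    {
      "type": "aws_s3_bucket",
      "name": "demo",
      "instances": [
        {
          "attributes": {
            "bucket": "my-terraform-demo-bucket",
            "arn": "arn:aws:s3:::my-terraform-demo-bucket"
          }
        }
      ]
    }
  ]
}
```

Every managed resource and its real-world identifiers live here, which is why the file is precious: you should never edit it by hand, and in team settings it usually lives in remote storage rather than on one laptop.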

How My Certification Helped

My AWS Solutions Architect certification provided crucial context. When Terraform creates an S3 bucket, it's making AWS API calls. I understood those APIs from studying for the exam. I knew about IAM permissions, regions, and resource naming constraints. I understood why some changes require resource replacement while others can be updated in place.

This foundation meant I wasn't learning AWS and Terraform simultaneously. I was applying existing knowledge through a new tool, which made the learning curve much gentler.

What Changed

I went from clicking through the AWS console to defining infrastructure in version-controlled files. My infrastructure is now documented, repeatable, and shareable. I can destroy everything with one command and recreate it identically minutes later. Changes are reviewable through Git diffs just like application code.

More importantly, I understand why professionals work this way. Manual infrastructure management doesn't scale. Infrastructure as code does.

Next: Bringing It All Together

I now have every piece needed for deployment. GitHub Actions builds my container images automatically. ECR stores them. Terraform can define AWS infrastructure. Tomorrow I combine these skills to deploy my FastAPI application to ECS using Terraform.

The configuration will be more complex than an S3 bucket, but the workflow is identical: define resources in code, plan the changes, review them, and apply. I'll describe an ECS cluster, task definition pointing to my ECR image, and a service to run the task. Terraform will create everything, and my application will be running in the cloud.
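A rough sketch of what that might look like; the IAM role, subnet, account ID, and names below are all placeholders, not a working deployment:

```hcl
resource "aws_ecs_cluster" "app" {
  name = "fastapi-cluster"
}

resource "aws_ecs_task_definition" "app" {
  family                   = "fastapi-task"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"
  execution_role_arn       = aws_iam_role.ecs_execution.arn # role defined elsewhere

  container_definitions = jsonencode([{
    name         = "fastapi"
    # placeholder ECR image URL
    image        = "123456789012.dkr.ecr.us-east-1.amazonaws.com/fastapi-app:latest"
    portMappings = [{ containerPort = 8000 }]
  }])
}

resource "aws_ecs_service" "app" {
  name            = "fastapi-service"
  cluster         = aws_ecs_cluster.app.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = ["subnet-placeholder"]
    assign_public_ip = true
  }
}
```

More resources than the S3 example, but the same references-create-dependencies pattern: the service points at the task definition, the task definition points at the image.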

This is the milestone I've been working toward. From local development to automated deployment in AWS, defined entirely in code. Every skill I've learned contributes to this moment. The pieces are ready. Tomorrow I put them together.
