Automating infrastructure management with AWS CodeBuild, CodePipeline, and Terraform

Setting up a Cloud Infrastructure with AWS Services: CodeBuild, CodePipeline, IAM, KMS, Terraform, and More

**Client Background:**

A client we continuously work with, a fintech company that provides loans online, operates its infrastructure on AWS. They had been managing infrastructure changes manually, leading to inconsistencies, human error, and long deployment times. They wanted to optimize their development and deployment process, reduce operational costs, and improve resource management. Our engineers at Global Mobility Services engaged with the client to implement an infrastructure pipeline using Terraform with AWS DevOps services to address these challenges.

**Infrastructure Pipeline Setup:**

AWS DevOps infrastructure deploying resources cross-account (architecture diagram)

1. **Development Account:**

  – Engineers at GMS created the initial infrastructure pipeline repository (infra-pipeline-repo) on AWS CodeCommit using Terraform code and hosted it in the client’s development AWS account.

  – A CodeCommit repository (infra-code-repo) was set up to store the Terraform infrastructure code that would create the base architecture for development: VPCs, security groups, RDS databases, route tables, CloudTrail configuration, etc.

  – The pipeline had three stages: build, dev, and prod (a sketch of the build stage follows this list).
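For context, the pipeline and its stages were themselves defined in Terraform. Below is a minimal sketch of what the build stage’s CodeBuild project could look like; the project name, service role, Terraform version, and buildspec contents are illustrative assumptions rather than the client’s actual configuration:

```hcl
# Hypothetical build-stage project: runs terraform init/validate/plan.
# aws_iam_role.codebuild_role is assumed to be defined elsewhere.
resource "aws_codebuild_project" "terraform_plan" {
  name         = "infra-pipeline-build"
  service_role = aws_iam_role.codebuild_role.arn

  artifacts {
    type = "CODEPIPELINE"
  }

  environment {
    compute_type = "BUILD_GENERAL1_SMALL"
    image        = "aws/codebuild/standard:7.0"
    type         = "LINUX_CONTAINER"
  }

  source {
    type      = "CODEPIPELINE"
    buildspec = <<-YAML
      version: 0.2
      phases:
        install:
          commands:
            - curl -sLo tf.zip https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip
            - unzip tf.zip -d /usr/local/bin
        build:
          commands:
            - terraform init -input=false
            - terraform validate
            - terraform plan -input=false -out=tfplan
      artifacts:
        files:
          - '**/*'
    YAML
  }
}
```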

Here is a Terraform sample, similar to what we deployed, that we used to test the pipeline:

```hcl
provider "aws" {
  region = "us-east-1" # Adjust the region as needed
}

# Create VPC
resource "aws_vpc" "my_vpc" {
  cidr_block = "10.0.0.0/16"
}

# Create subnets inside the VPC
# (an RDS subnet group needs subnets in at least two availability zones)
resource "aws_subnet" "my_subnet" {
  vpc_id            = aws_vpc.my_vpc.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "my_subnet_b" {
  vpc_id            = aws_vpc.my_vpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"
}

# Create security group
resource "aws_security_group" "my_security_group" {
  name_prefix = "my-security-group-"
  vpc_id      = aws_vpc.my_vpc.id

  # Define security group rules here
  # Example:
  # ingress {
  #   from_port   = 80
  #   to_port     = 80
  #   protocol    = "tcp"
  #   cidr_blocks = ["0.0.0.0/0"]
  # }
}

# Create RDS instance
resource "aws_db_instance" "my_rds_instance" {
  allocated_storage    = 20
  storage_type         = "gp2"
  engine               = "mysql"
  engine_version       = "5.7"
  instance_class       = "db.t2.micro"
  db_name              = "myrds"
  username             = "admin"
  password             = "mysecretpassword" # Test value only; use a secrets store in real deployments
  parameter_group_name = "default.mysql5.7"
  skip_final_snapshot  = true

  # Attach the RDS instance to the security group and subnet group
  vpc_security_group_ids = [aws_security_group.my_security_group.id]
  db_subnet_group_name   = aws_db_subnet_group.my_db_subnet_group.name
}

# Create DB subnet group
resource "aws_db_subnet_group" "my_db_subnet_group" {
  name       = "my-db-subnet-group"
  subnet_ids = [aws_subnet.my_subnet.id, aws_subnet.my_subnet_b.id]
}

# Create CloudTrail
# (the S3 bucket must already exist with a policy allowing CloudTrail writes)
resource "aws_cloudtrail" "my_cloudtrail" {
  name                          = "my-cloudtrail"
  s3_bucket_name                = "my-cloudtrail-bucket"
  include_global_service_events = true
  is_multi_region_trail         = true
}
```

2. **Cross-Account Role:**

  – In the production AWS account, GMS engineers set up a cross-account IAM role with the necessary permissions for the pipeline to deploy resources in the production environment, sketched below.
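A minimal sketch of that arrangement follows; the account IDs and role names are placeholders. The role in the production account trusts the pipeline’s execution role in the development account, and Terraform assumes it through a provider alias when applying to prod:

```hcl
# In the production account: a role the dev-account pipeline can assume.
# Account IDs and role names are placeholders; scope the attached policy
# down to the resources the pipeline actually manages.
resource "aws_iam_role" "pipeline_deploy" {
  name = "infra-pipeline-deploy-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::111111111111:role/infra-pipeline-codebuild-role" }
    }]
  })
}

# In the Terraform code: a provider alias that assumes the prod role,
# used by resources deployed during the prod stage.
provider "aws" {
  alias  = "prod"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/infra-pipeline-deploy-role"
  }
}
```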

**Positive Business Impact:**

1. **Accelerated Deployment Process:**

  – With the infrastructure pipeline in place, the client’s development team could deploy infrastructure changes much faster and with consistency.

  – Code changes automatically triggered pipeline executions, reducing manual intervention and speeding up the process (the trigger wiring is sketched below).
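A minimal sketch of that trigger wiring, assuming hypothetical `aws_codepipeline.infra_pipeline`, `aws_codecommit_repository.infra_code_repo`, and `aws_iam_role.events_start_pipeline` resources (the latter needs `codepipeline:StartPipelineExecution` permission):

```hcl
# Start the pipeline whenever the main branch of infra-code-repo changes.
resource "aws_cloudwatch_event_rule" "repo_change" {
  name = "infra-repo-change"

  event_pattern = jsonencode({
    source      = ["aws.codecommit"]
    detail-type = ["CodeCommit Repository State Change"]
    resources   = [aws_codecommit_repository.infra_code_repo.arn] # assumed resource
    detail = {
      event         = ["referenceCreated", "referenceUpdated"]
      referenceName = ["main"]
    }
  })
}

resource "aws_cloudwatch_event_target" "start_pipeline" {
  rule     = aws_cloudwatch_event_rule.repo_change.name
  arn      = aws_codepipeline.infra_pipeline.arn    # assumed pipeline resource
  role_arn = aws_iam_role.events_start_pipeline.arn # assumed events role
}
```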

2. **Reduced Operational Costs:**

  – By automating the infrastructure deployment, the client significantly reduced the time and effort required by their operations team to manage infrastructure changes.

  – Faster deployments also meant that developers could test and release new features to customers quicker, potentially increasing revenue.

3. **Improved Resource Management:**

  – The pipeline enforced best practices and consistency in infrastructure changes, reducing the chances of misconfigurations and security vulnerabilities.

  – Resources were automatically cleaned up and managed, preventing resource sprawl and optimizing AWS costs.

4. **Alerting Mechanisms:**

  – GMS engineers set up alerts at each stage of the pipeline to notify the development and operations teams about the pipeline’s status.

  – For example, the build stage sent alerts directly to the team’s shared distribution email when there were issues in the Terraform code, such as syntax errors or missing dependencies.

  – The dev and prod stages alerted the team about successful deployments or any errors encountered during deployment; a sketch of this notification wiring follows.
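A minimal sketch of this kind of alerting uses an SNS email subscription plus a CodeStar Notifications rule on the pipeline; the topic name, email address, and pipeline resource below are assumptions:

```hcl
# Alert topic with the team's shared distribution email subscribed.
# The topic also needs a policy allowing codestar-notifications to publish.
resource "aws_sns_topic" "pipeline_alerts" {
  name = "infra-pipeline-alerts"
}

resource "aws_sns_topic_subscription" "team_email" {
  topic_arn = aws_sns_topic.pipeline_alerts.arn
  protocol  = "email"
  endpoint  = "devops-team@example.com" # placeholder address
}

# Notify on stage- and pipeline-level successes and failures.
resource "aws_codestarnotifications_notification_rule" "pipeline_events" {
  name        = "infra-pipeline-notifications"
  resource    = aws_codepipeline.infra_pipeline.arn # assumed pipeline resource
  detail_type = "FULL"

  event_type_ids = [
    "codepipeline-pipeline-stage-execution-failed",
    "codepipeline-pipeline-stage-execution-succeeded",
    "codepipeline-pipeline-pipeline-execution-failed",
    "codepipeline-pipeline-pipeline-execution-succeeded",
  ]

  target {
    address = aws_sns_topic.pipeline_alerts.arn
  }
}
```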

**Cost Savings:**

Over time, our fintech friends experienced significant cost savings due to the infrastructure pipeline:

– The reduction in manual work for infrastructure deployment and management lowered the company’s operational costs.

– The improved resource management and automated cleanup reduced unnecessary resource consumption, leading to cost optimization.

– Faster time-to-market allowed them to capture business opportunities more rapidly, potentially generating more revenue.

**Conclusion:**

Setting up an infrastructure pipeline using Terraform had a positive business impact for the client. The streamlined development and deployment process, coupled with reduced operational costs and improved resource management, helped the company achieve greater efficiency and competitiveness in the market. The alerting mechanisms provided timely notifications, enabling the team to respond promptly to any issues during deployment and ensuring smooth, reliable infrastructure delivery. Ultimately, the infrastructure pipeline became a valuable asset, contributing to the client’s growth and success in the long term.
