The following is a general step-by-step guide to deploying your repository to AWS.
You’ll need:
- A Dockerized repository to deploy (or just use a prebuilt nginx image)
- An AWS account
What we’ll do:
- Set up the AWS CLI
- Understand AWS EC2
- Understand and use AWS ECR
- Use Terraform to set up our AWS infrastructure with SSH access and IAM profiles
AWS CLI
Log in to the AWS Management Console
Set the region (e.g. Sydney)
You can find your (probably empty) credentials file in `~/.aws`
Follow these instructions to get your keys and tokens: https://docs.aws.amazon.com/powershell/latest/userguide/pstools-appendix-sign-up.html and set them in the following format:

```
[default]
aws_access_key_id = <Value>
aws_secret_access_key = <Value>
aws_session_token = <Value>
```
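To confirm the CLI can actually see these credentials, a quick sanity check (assuming the `default` profile) is to ask AWS who you are:

```bash
# Prints the account and IAM identity the CLI is authenticating as;
# if this errors, the credentials file above isn't being picked up
aws sts get-caller-identity
```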
EC2
Create a Basic Instance in the UI:
- Open `EC2`, click `Instances`, then click `Launch instances`
- Follow the prompts you want
- Click `Instances` and access your new instance. Hit `Connect` to open it within your browser
How to SSH into an instance:
- Open the instance and click `Connect`, then `SSH client` for instructions
- `cd` locally into the directory where you're keeping your private `.pem` key
- Run the example `ssh` command; you may need to accept the host fingerprint
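As a concrete sketch, the command usually looks like this (assuming an Amazon Linux instance, whose default user is `ec2-user`, and a hypothetical key called `my-key.pem`; substitute the hostname AWS shows you):

```bash
# -i points ssh at the private key matching the key pair attached to the instance
ssh -i my-key.pem ec2-user@ec2-3-25-100-200.ap-southeast-2.compute.amazonaws.com
```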
Terraform
Common Terraform CLI Commands:
- `terraform init`: Initialize and configure the backend before use
- `terraform validate`: Checks for typos/syntax errors
- `terraform plan`: Previews changes without actually modifying resources or the state file
- `terraform apply`: Applies changes to the state/configs
- `terraform destroy`: Deletes the infrastructure stack entirely

Follow these docs to set up Terraform, use it to build Docker images & containers, deploy an EC2 instance to AWS and learn how to write Terraform files: https://learn.hashicorp.com/tutorials/terraform/aws-build?in=terraform/aws-get-started
- For the final step, use S3
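A typical edit-check-apply loop with these commands, run from the directory holding your `.tf` files, looks like:

```bash
terraform init       # once per new directory/backend
terraform validate   # catch typos before planning
terraform plan       # preview what would change
terraform apply      # apply it (prompts for confirmation)
```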
Deploy Docker Images to AWS using Terraform from Scratch
Create a new repo to store Terraform files. Don’t use the root of your main repo
Create a new file called `main.tf` and insert your AWS credentials, e.g.:

```hcl
provider "aws" {
  region     = "ap-southeast-2"
  profile    = "default"
  access_key = "<key>" # In AWS, click your name, Security Credentials, Access Keys, Create New
  secret_key = "<key>"
}
```

Note that with `profile = "default"` set, Terraform can read the credentials file on its own, so you can omit `access_key`/`secret_key`; hard-coding keys risks committing secrets.
Ensure these match the credentials file: run `aws configure` in your CLI and follow the prompts.
Create a basic EC2 instance, e.g.:

```hcl
resource "aws_instance" "<instance name>" {
  ami           = "<virtual machine AMI>"
  instance_type = "<free tier VM, e.g. t2.micro>"

  tags = {
    Name = "<instance name again>" # AWS displays the "Name" tag in the console
  }
}
```
Run `terraform init` and `terraform apply`
- You will run `terraform apply` again after every major change

Set up Security Groups to handle Ports and CIDR
Create a new file called `security.tf`
Create a new resource to handle SSH connections on port 22, e.g.:

```hcl
resource "aws_security_group" "<security group for ssh name>" {
  name = "<name again>"

  ingress { # Ingress means 'inbound', i.e. people connecting TO the server
    cidr_blocks = ["0.0.0.0/0"]
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
  }

  egress { # Egress means 'outbound', i.e. connections your server makes outward; these are sensible defaults
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```
Then back in `main.tf`, reference your security group within the `aws_instance` resource, e.g.:

```hcl
security_groups = [aws_security_group.<security group for ssh name>.name] # This may need to be 'id' not 'name'; try both as you go
```
Optional: Set up SSH access so you can connect to the running instance in your own terminal
Check your `~/.ssh` folder for any RSA public keys; if there are none, follow these docs: https://docs.joyent.com/public-cloud/getting-started/ssh-keys/generating-an-ssh-key-manually/manually-generating-your-ssh-key-in-mac-os-x

In your `security.tf` file, add your RSA key, e.g.:

```hcl
resource "aws_key_pair" "id_rsa" {
  key_name   = "id_rsa"
  public_key = "ssh-rsa <the rest of your key>"
}
```
Then back in `main.tf`, reference your key within the `aws_instance` resource, e.g.:

```hcl
key_name = aws_key_pair.id_rsa.key_name
```
Open the instance in the AWS Management Console, select `Connect`, then the `SSH client` option
`cd` into your `.ssh` folder on your machine and issue this command:

```bash
ssh -i <name of rsa key with no .pem> <address shown in the AWS SSH instructions>
```
Set up Docker with a script
The following just gets Docker running and pulls a base nginx image within the EC2 instance; in later steps we'll pull whatever custom Docker images you want from a storage service (ECR)
Create a new shell script file called `docker_installer.sh`
Write the script to use sudo to install Docker, pull nginx and run it on port 80, e.g.:

```bash
#!/bin/bash
echo "Starting Docker" > hello.tmp
set -ex
sudo amazon-linux-extras install -y docker
sudo service docker start
sudo docker pull nginx
sudo docker run -p 80:80 -d nginx
```
Call the script from `main.tf` within the `aws_instance` resource, e.g.:

```hcl
user_data = file("docker_installer.sh") # "${file(...)}" also works, but the wrapping quotes are redundant in modern Terraform
```
To expose port 80 for inbound connections you'll need another security group rule inside `security.tf`, similar to the one used for SSH, e.g.:

```hcl
resource "aws_security_group" "<security group name for main entry>" {
  name = "<main entry point name again>" # Main entry means when people visit this instance's IP without a port, they'll get this

  ingress {
    cidr_blocks = ["0.0.0.0/0"]
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```
Then back in `main.tf`, add this new security group as another entry in the array within the `aws_instance` resource, e.g.:

```hcl
security_groups = [aws_security_group.<security group for ssh name>.name, aws_security_group.<security group for main entry>.name]
```
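At this point you can sanity-check the whole chain. After `terraform apply` finishes, grab the instance's public IP from the EC2 console and hit it on port 80; assuming the nginx container from `docker_installer.sh` started, you should get its default welcome page:

```bash
# Replace with your instance's actual public IP from the EC2 console
curl http://<instance public ip>
```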
Set up a Virtual Private Cloud (VPC)
Create a new Terraform file called `network.tf`
Create a VPC resource and internet gateway; the name of the VPC is used often so keep it short, e.g.:

```hcl
resource "aws_vpc" "<VPC name>" {
  cidr_block = "10.0.0.0/16" # The private IP range the VPC will use

  tags = {
    Name = "<name again>"
  }
}

resource "aws_internet_gateway" "<gateway name>" {
  vpc_id = aws_vpc.<VPC name>.id

  tags = {
    Name = "<gateway name>"
  }
}
```
Then back in `security.tf`, reference your VPC within every `aws_security_group` resource you want inside the VPC, e.g.:

```hcl
vpc_id = aws_vpc.<VPC name>.id # Security groups attach to a VPC via its id attribute
```
Ensure the security groups are referenced via `id` rather than `name` if you are having issues (an instance in a non-default VPC takes `vpc_security_group_ids` with IDs instead of `security_groups` with names)

Set up Route 53
TBD - Freewheelers was too abstract, for now just use the public address in EC2
Deploy Docker Images to ECR
If you haven't already, create Docker images (from the previous section) and come back
ECR (Elastic Container Registry) is used to store Docker images and easily update them with new versions, like a Git repo for images
Make a new Terraform file called `ecr.tf`
Add a resource that sets up a repository for a Docker image. If you have multiple Dockerfiles, set up a new repo for each, e.g.:

```hcl
resource "aws_ecr_repository" "<repo name>" {
  name                 = "<repo name>"
  image_tag_mutability = "MUTABLE" # "IMMUTABLE" blocks re-pushing an existing tag such as :latest

  image_scanning_configuration {
    scan_on_push = false
  }
}
```
Now push the Docker image(s) from your personal machine to this repo
Open ECR in the AWS Management Console, click `Repositories`, open your new repo and hit 'View push commands'
Follow the steps shown
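For reference, the push commands shown there follow this pattern (a sketch assuming region `ap-southeast-2` and a hypothetical repo called `my-app`; substitute your own account ID and names):

```bash
# Authenticate Docker against your account's ECR registry
aws ecr get-login-password --region ap-southeast-2 | \
  docker login --username AWS --password-stdin <account id>.dkr.ecr.ap-southeast-2.amazonaws.com

# Build, tag and push the image
docker build -t my-app .
docker tag my-app:latest <account id>.dkr.ecr.ap-southeast-2.amazonaws.com/my-app:latest
docker push <account id>.dkr.ecr.ap-southeast-2.amazonaws.com/my-app:latest
```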
Set up IAM (Identity and Access Management) so EC2 can access ECR
Create a new Terraform file called `iam.tf`
Inside `iam.tf`, set up an IAM role resource (the role is what the EC2 instance will assume), e.g.:

```hcl
resource "aws_iam_role" "<role name>" {
  name = "<role name again>"

  # Trust policy: lets the EC2 service assume this role
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow"
    }
  ]
}
EOF

  tags = {
    name = "<role name again>"
  }
}
```
Next, add an instance profile resource that references the role, e.g.:

```hcl
resource "aws_iam_instance_profile" "<profile name>" {
  name = "<profile name>"
  role = aws_iam_role.<role name from previous step>.id
}
```
Next, add a role policy resource which references the role as well; this grants the read-only ECR permissions, e.g.:

```hcl
resource "aws_iam_role_policy" "<policy name>" {
  name = "<policy name>"
  role = aws_iam_role.<role name from first step>.id

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchGetImage",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:DescribeImages",
        "ecr:GetDownloadUrlForLayer"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF
}
```
Finally, return to `main.tf` and inside your `aws_instance`, reference the instance profile (not the policy) by id, e.g.:

```hcl
iam_instance_profile = aws_iam_instance_profile.<profile name from the earlier step>.id
```
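Pulling the pieces together, the `aws_instance` block in `main.tf` now looks roughly like this (a sketch using the placeholder names from the steps above):

```hcl
resource "aws_instance" "<instance name>" {
  ami                  = "<virtual machine AMI>"
  instance_type        = "t2.micro"
  key_name             = aws_key_pair.id_rsa.key_name               # SSH access
  user_data            = file("docker_installer.sh")                # Docker bootstrap script
  iam_instance_profile = aws_iam_instance_profile.<profile name>.id # ECR pull permissions

  security_groups = [
    aws_security_group.<security group for ssh name>.name,
    aws_security_group.<security group for main entry>.name,
  ]

  tags = {
    Name = "<instance name>"
  }
}
```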
Pull ECR Docker Images into your EC2 instance
You can do this in two ways: in the EC2 terminal over SSH (to test initially), or by adding each command to the same `docker_installer.sh` file used to install Docker.

Add commands to the `docker_installer.sh` script to log in, pull and run your image:

```bash
#!/bin/bash
set -ex
sudo amazon-linux-extras install -y docker
sudo service docker start
aws ecr get-login-password --region ap-southeast-2 | sudo docker login --username AWS --password-stdin <Your ECR URI>
sudo docker pull <Your ECR URI>/<name of repo you want to use>:latest
sudo docker run -p 80:80 -d <Your ECR URI>/<name of repo you want to use>:latest
```
If you're setting up multiple Docker containers that need to talk to each other, set up a Docker bridge network. You can add this to your script or do it manually in the instance CLI:

- `sudo docker network create <network name>` to create the bridge network
- `sudo docker run --network <network name> -p <port>:<port> -d --name <container name you'd like> <ECR repo URI>/<name of repo>` to run Docker images with the network flag

If you need to insert environment variables the Docker image will use, add the `--env` flag with a key and value, e.g.:

```bash
sudo docker run --network <network name> -p <port>:<port> -d --env <variable key>='<variable value>' --name <container name> <ECR repo URI>/<name of repo>
```
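A handy property of a user-defined bridge network is that containers on it can resolve each other by container name, so the `--name` you choose doubles as a hostname. A sketch with hypothetical `api` and `db` containers (the `DB_HOST` variable is an example; use whatever your app reads):

```bash
sudo docker network create appnet

# The api container can now reach the database at the hostname "db"
sudo docker run --network appnet -d --name db <ECR repo URI>/<db repo>
sudo docker run --network appnet -p 80:80 -d --env DB_HOST='db' --name api <ECR repo URI>/<api repo>
```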
Test that your containers are running with `sudo docker ps` or `sudo docker ps -a`
If any don't stay spun up (e.g. status EXITED), copy its name/hash and enter `sudo docker logs <name/hash of container>`
Done! With any luck, when you visit your EC2 public IP address you'll now see your app.