Docker is a tool for creating isolated deployment environments in the form of containers, which are generated from images. If you have a website with a frontend, backend and database, you'll typically run 3 Docker containers, one for each. Each container shares the host OS kernel but provides its own libraries and dependencies.
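For example, each tier can run as its own container (a rough sketch; my-backend and my-frontend are placeholder image names, postgres is a real Docker Hub image):
# my-backend and my-frontend are hypothetical image names
docker run -d --name db postgres:12
docker run -d --name api -p 8080:8080 my-backend
docker run -d --name web -p 80:80 my-frontend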
Install & Use Docker
- Install Docker from https://www.docker.com/products/docker-desktop
- Access the Docker command line interface with the 'docker' command, and test it with
docker version
- Go to https://hub.docker.com/ and pull an image (prebuilt environment) to try out
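For example, the official hello-world image is a quick way to confirm everything is working:
docker pull hello-world
docker run hello-world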
Docker Command Cheat Sheet
docker images
to see all available images
docker run --name aNameIWant
to name a container. Add -d (for detached) so it runs in the background.
docker ps -a
to see all containers (the -a includes stopped ones)
docker run -p 80:80
to specify an open port
docker logs <nameOrID>
to see container logs
docker exec -it <nameOrID> sh
to open an sh shell inside the running container
docker exec -it <nameOrID> /bin/bash
to open a bash shell inside the running container (if bash is installed)
docker stop <nameOrID>
to stop a running container
docker start <name>
to start a stopped container
docker rm <nameOrID>
to remove containers
docker rmi <imageName>
to remove images
docker push <org>/<name>
to push your image to Docker Hub
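A quick worked example stringing these together, using the real nginx image from Docker Hub:
# pull and run nginx, inspect it, then clean up
docker run -d --name my-nginx -p 80:80 nginx
docker ps -a
docker logs my-nginx
docker exec -it my-nginx sh
docker stop my-nginx
docker rm my-nginx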
How To Docker-ize Your Repo (From Scratch)
This is the setup for a development repo, which is a good starting point as it's simpler and surfaces issues sooner. For production, however, you should create a build of your app (for example, Spring Boot should become a .jar file and React should become an optimized production build), which requires more advanced steps and is easiest to do with a reference (shown in the 'Advanced' section below).
- Create a Dockerfile in the root of your repo. Just Dockerfile, no extension
- Decide on a base image, something minimal and summon it using
FROM
, for example:
FROM gradle:5.6.4-jdk11
- Create a working directory (a folder for files and commands to be executed in) in your container using
WORKDIR
which automatically creates and 'cd's into the folder, or skip this if you want everything in the container's root, for example:
WORKDIR /app
- Copy all the files from your repo directly into this directory using
COPY
, for example (the first dot copies everything from your repo, the second dot means 'to here', i.e. the working directory):
COPY . .
- Set up any scripts or commands that need to be run using
RUN
, for example:
RUN yarn install
- Expose a port you'd like the container to be accessed by, for example:
EXPOSE 8081
- Finally, enter the command that will start your app once all dependencies are set up, using
ENTRYPOINT
with each word in an array, for example:
ENTRYPOINT ["./gradlew", "bootRun"]
or ENTRYPOINT /<workdir>/scriptFile.sh
Note: ENTRYPOINT runs every time the container is spun up, from the container's root (or from the last WORKDIR you set). Whereas
CMD
provides a default command (or default arguments) that also runs at container start, but can be overridden from the 'docker run' command line. Tip: ENTRYPOINT is the best time to run a script containing environment variables, more in the 'Advanced' section... A complete development Dockerfile combining these steps is sketched after the build/run commands below.
- Save the Dockerfile and run the build from the directory containing the Dockerfile to create an image
docker build -t <name> .
<-- DON'T FORGET THE DOT (it's the build context, i.e. the current directory)
docker images
to check it exists
docker run --name <name> -p <host port>:<container port> <image name>:latest
to spin up your container. Add -d (detached) so it runs in the background
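Putting the steps above together, a minimal development Dockerfile might look like this (a sketch, assuming a Gradle/Spring Boot project that serves on port 8081):
# development Dockerfile sketch; adjust commands and port to your app
FROM gradle:5.6.4-jdk11
WORKDIR /app
COPY . .
RUN ./gradlew build
EXPOSE 8081
ENTRYPOINT ["./gradlew", "bootRun"]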
Advanced Docker Deployments
Java
- Builds the Java app in a temporary build stage
- Gets a new JDK image because Gradle is no longer needed
- Stores the .jar file name in an environment variable
- Copies the .jar file out of the temporary stage and puts it in a new directory
- Entrypoint is a java command to execute the jar file (same effect as CMD here)
# Creates build
FROM gradle:5.6.4-jdk11 AS TEMP_BUILDER
WORKDIR /api
COPY . .
RUN ./gradlew clean build

# Deploy Production
FROM adoptopenjdk/openjdk11:alpine-jre
ENV ARTIFACT_NAME=elective-1.0.jar
WORKDIR /app
COPY --from=TEMP_BUILDER /api/build/libs/$ARTIFACT_NAME .

# TOMCAT uses port 8080
EXPOSE 8080
ENTRYPOINT exec java -jar ${ARTIFACT_NAME}
React
- Gets a node image to create a build in/with
- Adds the node_modules/.bin folder to the container's PATH, letting those commands be run globally
- Copies in the files and runs the build command, which creates a new, optimized build folder
- Replaces the image with an nginx one and copies the build into the directory nginx serves from
- Exposes a desired port and starts nginx in the foreground ('daemon off') so the container keeps running
# Create build
FROM node:14.15.1-alpine3.12 as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
COPY . ./
RUN npm install
RUN npm run build

# Deploy production environment
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
ENTRYPOINT ["nginx", "-g", "daemon off;"]
Database
- This is an example using postgres, ensure you don't already have it running locally
- The environment variables are login credentials and the name of a database to create
- Build/run the container as normal and access the db from your host (i.e. not inside the container CLI), e.g.
psql -h localhost -U <user> <database>
- To let other Dockerized apps access this, set up the containers using
Docker Compose
, see https://docs.docker.com/compose/networking/ (a minimal Compose sketch follows the Dockerfile below)
FROM postgres:12
EXPOSE 5432
ENV POSTGRES_USER=freewheelers
ENV POSTGRES_PASSWORD=postgres
ENV POSTGRES_DB=freewheelers
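For reference, a minimal docker-compose.yml putting an API container on the same network as this database might look like the following (a sketch; my-api and my-postgres are placeholder image names). Services on the same Compose network can reach each other by service name, so the API would connect to host 'db':
# docker-compose.yml sketch; image names are hypothetical
version: "3.8"
services:
  api:
    image: my-api        # your backend image
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: my-postgres   # image built from the Dockerfile above
    ports:
      - "5432:5432"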
Environment Variables
When adding environment variables, it is best to do this NOT when building an image (images are immutable) but instead at runtime. To do this, write a shell script that gathers all your environment variables, and append the Dockerfile's original ENTRYPOINT command (from the end of your Dockerfile) to the end of that script:
#!/bin/sh
cd /app/migrations/
sh ./mybatis/bin/migrate --env=docker up
java -jar /app/api.jar
Then copy the script into your Docker image, make it executable, and set it as the entry point, e.g.:
COPY myscript.sh .
RUN chmod 775 myscript.sh
EXPOSE 8080
ENTRYPOINT /<workdir>/myscript.sh
# ENTRYPOINT executes when the container is spun up, grabbing whatever variables the newly deployed app needs
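Those variables can then be supplied when the container is started rather than baked into the image, for example (a sketch; variable and image names are placeholders):
# pass environment variables at runtime with -e
docker run -d -p 8080:8080 -e DB_HOST=localhost -e DB_PASSWORD=postgres my-api:latest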