K8s Architecture

Documentation: https://kubernetes.io/docs/home/
Official Cheat Sheet:
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Local vs. Managed
Setting up a local Kubernetes orchestration using Minikube involves installing and configuring Kubernetes components on your own computer, while managed orchestration using AWS EKS involves delegating the management of Kubernetes to Amazon Web Services (AWS). In other words, with Minikube, you have more control over the Kubernetes environment but also more responsibility for its maintenance, while with AWS EKS, AWS handles the day-to-day operations of Kubernetes, allowing you to focus on developing and deploying applications.
Setup
Local Setup
- Download and install local container service
- Docker Desktop: https://www.docker.com/products/docker-desktop/
- Colima: https://github.com/abiosoft/colima
- Download and install Minikube: https://minikube.sigs.k8s.io/docs/start/
- Install kubectl CLI: https://kubernetes.io/docs/tasks/tools/
- Start Docker Desktop/Colima
- Run this terminal command:
minikube start
to start the cluster
- Test all is working with:
kubectl get po -A
to see the Kubernetes architecture running
- See the dashboard with:
minikube dashboard
Connecting Cloud Cluster
For Linux/Mac
- Download the .yaml config file from your cloud cluster
- In your home ~/ directory, create the config folder:
mkdir ~/.kube
- Copy (cp) the config contents into ~/.kube/config
- Test by running
kubectl get nodes
And validate you see the nodes from the cloud cluster
Context
Install kubectx to easily switch between cloud and local (Minikube) clusters from here: https://github.com/ahmetb/kubectx#homebrew-macos-and-linux
Basic Commands
See Pods
kubectl get pods
With labels, add
--show-labels
With max info, add -o wide
Run Pod
kubectl run <mywebserver> --image=nginx
Pod Data
kubectl describe pod <mywebserver>
Connect to Pod
kubectl exec -it <mywebserver> -- bash
Replace bash with any other command you want to run in the pod container
Delete Pod
kubectl delete pod <mywebserver>
Delete all
kubectl delete pods --all
Run Pod via .YAML file
kubectl apply -f pod.yaml
Generate .YAML file
kubectl run nginx-port --image=nginx --port=80 --dry-run=client -o yaml
dry-run means the resources aren’t actually created
Add label
kubectl label pod <podname> env=dev
Search labels
kubectl get pods -l env=dev
Use env!=dev for all except the dev label
Deployments
Replica Sets
Dictates the desired state of the pods
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset
spec:
  replicas: 4
  selector:
    matchLabels:
      tier: distinctlabel
  # Replicate 4 of the following templates:
  template:
    metadata:
      labels:
        tier: distinctlabel
    spec:
      containers:
      - name: yeehaw
        image: nginx
Apply the above and it will spin up 4 pods from the given template; if any go down, the ReplicaSet spins them back up
See Replica Sets
kubectl get replicaset
Delete Replica Set
kubectl delete rs <replicaset>
Deployment Sets
Scales up a new Replica Set and scales down the old without any downtime
apiVersion: apps/v1
kind: Deployment
metadata:
  name: replicaset
spec:
  replicas: 4
  selector:
    matchLabels:
      tier: distinctlabel
  # Replicate 4 of the following templates:
  template:
    metadata:
      labels:
        tier: distinctlabel
    spec:
      containers:
      - name: yeehaw
        image: nginx:1.17.3
See deployments
kubectl get deployments
Deployment data
kubectl describe deployments <deployment>
New deployment
kubectl apply -f <deployment-file>.yaml
Re-applying with a changed pod template triggers a new rollout
Rollout History
kubectl rollout history deployment/<deployment>
More specific revision info, add
--revision 1
or --revision 2
etc.
Rollback
kubectl rollout undo deployment/<deployment> --to-revision=1
A rollback appears as a new revision
Delete deployment
kubectl delete deployment <deployment>
Daemon Sets
Creates a pod for every available worker node
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kplabs-daemonset
spec:
  selector:
    matchLabels:
      tier: all-pods
  template:
    metadata:
      labels:
        tier: all-pods
    spec:
      containers:
      - name: pods
        image: nginx
See DaemonSets
kubectl get daemonset
DaemonSet data
kubectl describe daemonset <daemonsetname>
Node Affinity
Use case: Only deploy pod(s) to nodes that have specific key-values (e.g. disk=ssd) depending on hard or soft preferences
Deploys pod only to nodes with required matching criteria:
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          # Label/key given to the node (disk) which will have a value of ssd
          - key: disk
            # Logical operator (In, NotIn, DoesNotExist etc)
            operator: In
            values:
            - ssd
  containers:
  - name: service-pods
    image: nginx
If the criteria can’t be met, the pod becomes ‘pending’ until the criteria is met
Deploys pod to any node but prefers matching the criteria:
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: memory
            operator: In
            values:
            - high
            - medium
  containers:
  - name: service-pods
    image: nginx
If any node has a memory label with a value of high or medium it is preferred for the pod; otherwise the pod is scheduled on any node
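For either example to match anything, a node first needs the corresponding label; a quick sketch, assuming a node called worker-1 (made-up name):
kubectl label nodes worker-1 disk=ssd
kubectl label nodes worker-1 memory=high
kubectl get nodes --show-labels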
Resource Requests/Limits
Specify the minimum and maximum node resources a pod can consume
apiVersion: v1
kind: Pod
metadata:
  name: resource-hogpod
spec:
  containers:
  - name: container-name
    image: nginx
    resources:
      # Min resources the node must be able to guarantee for the pod to be scheduled
      requests:
        memory: "64Mi"
        cpu: "0.5"
      # Max resources the container may consume; CPU is throttled at the limit and
      # exceeding the memory limit gets the container OOM-killed
      limits:
        memory: "128Mi"
        cpu: "1"
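To see what pods actually consume against those requests/limits, kubectl top is handy; this assumes metrics-server is installed (on Minikube: minikube addons enable metrics-server):
kubectl top pod resource-hogpod
kubectl top nodes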
HELM
Helm is a package manager that allows you to package Kubernetes resources, such as Deployments, Services, ConfigMaps, and Secrets, into reusable units called charts.
- Install in your dev environment from here: https://helm.sh/docs/intro/install/
- Find Helm packages from here: https://artifacthub.io/
- Uninstall with
helm uninstall <release-name>
Debugging tip: When installing, it’s common for the STATUS of various pods to not become READY. To troubleshoot, run
kubectl describe pods <pod-not-running>
and read the Events
See downloaded Helm repos
helm repo list
See available Helm releases
helm list
See all helms in all namespace
helm list --all-namespaces
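A typical install flow, sketched here with the Bitnami nginx chart as an arbitrary example (any chart from Artifact Hub follows the same pattern, and my-nginx is just a chosen release name):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-nginx bitnami/nginx
helm status my-nginx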
Services
A Service acts as the middleman between outside requests and pods: as pods come and go their IP addresses change, so a Service in the middle (also called a gateway) provides a stable IP and keeps track of the pods’ IP addresses as endpoints, using a Selector.

Service Commands
See services
kubectl get services
Service data
kubectl describe service <service-name>
Delete Service
kubectl delete service <service-name>
Selector
This is a basic example of a selector that will pick up any pods with the app label ‘nginx’ as endpoints
apiVersion: v1
kind: Service
metadata:
  name: kplabs-service-selector
spec:
  # Target port 80 on any pods with the app label "nginx"
  selector:
    app: nginx
  ports:
  - port: 8080
    targetPort: 80
This example deployment spins up 3 pods with the app label ‘nginx’ each on port 80
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
You can scale the deployment up/down and the Service will automatically add/remove them as endpoints, as long as the pods have the app label ‘nginx’, for example:
kubectl scale deployment/nginx-deployment --replicas=10
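To confirm the Service really picked the scaled pods up, list its endpoints; the IPs shown should match kubectl get pods -o wide:
kubectl get endpoints kplabs-service-selector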
NodePorts
A Service that lets clients outside the Kubernetes cluster connect to the containers running in a pod, typically when security and availability are not big concerns, e.g. a staging environment
apiVersion: v1
kind: Service
metadata:
  name: kplabs-nodeport
spec:
  selector:
    type: publicpod
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
- Get the node IP address using:
kubectl get nodes -o wide
- Copy the EXTERNAL IP e.g
159.65.158.146
- Get the nodeport service port using:
kubectl get service
- Copy the PORT e.g
31947
- Reach the service on the node by using the full IP, e.g
159.65.158.146:31947
Load Balancer
Works like a NodePort except you do not need to look up the node port, and it is ideal in a managed environment where the provider provisions the load balancer
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
  # Sets up an endpoint on any pods with the loadbalanced label
  selector:
    type: loadbalanced
Get the IP address from something like https://cloud.digitalocean.com/networking/load_balancers
Ingress
A gateway that routes traffic hitting the cluster to the correct service depending on the domain (host). Made up of two parts: the Ingress resource (rules for where traffic should go) and the Ingress Controller (chosen and configured for the workload; it automatically manages load balancers and firewalls).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-gateway
spec:
  rules:
  - host: website01.example.internal
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: website02.example.internal
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: service2
            port:
              number: 80
You can see the routing by describing the ingress, e.g here is the exposed port, path and Pod IP addresses
Host                        Path  Backends
website01.example.internal  /     service1:80 (10.244.1.15:80)
website02.example.internal  /     service2:80 (10.244.1.110:80)
Decide which controller suits the workload and install it using Helm; this is the nginx controller from https://kubernetes.github.io/ingress-nginx/deploy/#quick-start
helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace
This creates the controller service within the namespace of ingress-nginx, see the service by running:
kubectl get service -n ingress-nginx
See all Ingresses
kubectl get ingress
See Ingress data
kubectl describe ingress <ingress-name>
Security
Namespaces
Smaller ‘virtual’ clusters within the greater cluster that can’t really access one another, usually used to separate teams.
When you don’t designate a namespace, it goes into the ‘default’ namespace. If you ‘get pods’ for example, you’ll only see default-namespace pods.
See namespaces
kubectl get namespace
See XYZ in another Namespace
kubectl get pods --namespace <namespace>
See all XYZ in all namespaces
kubectl get pods --all-namespaces -o wide
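A minimal sketch of creating a namespace and running a pod in it (the namespace name dev is just an example):
kubectl create namespace dev
kubectl run nginx --image=nginx --namespace dev
kubectl get pods --namespace dev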
Service Accounts
Provide unique identities for applications running within the cluster.
- Namespace-based: Each service account belongs to a specific namespace, limiting its visibility and access to resources within that namespace.
- Pod-bound: Service accounts are typically attached to Pods, granting the applications running inside the Pod the associated identity.
- External access: Service accounts can be used to access external services, such as cloud resources, by delegating permissions and credentials.
Create Service Account
kubectl create serviceaccount custom-token
Mount custom Service Account to a pod (Default is applied if none provided)
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: custom-token
  containers:
  - name: my-container
    image: nginx
Access a pods internal created token
kubectl exec -it <pod> -- bash
cat /run/secrets/kubernetes.io/serviceaccount/token
Roles
Roles are scoped to a single namespace
Create role and apply it via kubectl, e.g:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default # User can only perform (list) in the default namespace
  name: pod-reader
rules:
- apiGroups: [""] # Default/core API group
  resources: ["pods"]
  verbs: ["list"]
And apply role binding to a user, e.g:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: john
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
Cluster Roles
Work the same but are scoped to the entire cluster across namespaces and usually allow for more powerful permissions
See Roles
kubectl get roles
See Rolebindings
kubectl get rolebinding
See ClusterRoles
kubectl get clusterrole
See role data
kubectl describe role <role-name>
See role binding data
kubectl describe rolebinding <rolebinding-name>
See cluster role data
kubectl describe clusterrole <clusterrole-name>
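To verify what a binding actually grants, kubectl auth can-i is useful; a sketch reusing the user john from the RoleBinding example above:
kubectl auth can-i list pods --namespace default --as john
kubectl auth can-i delete pods --namespace default --as john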
TLS Certificate
TLS certificates are digital credentials that authenticate the identity of a website or server
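In a cluster these usually live as a tls Secret referenced from an Ingress. A minimal sketch, assuming you already have tls.crt/tls.key files and reusing a host from the Ingress example above (the secret name website-tls is made up):
kubectl create secret tls website-tls --cert=tls.crt --key=tls.key
Then add a tls section to the Ingress spec:
spec:
  tls:
  - hosts:
    - website01.example.internal
    secretName: website-tls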
Kube Config
Make a config from scratch
1. Add cluster details.
kubectl config --kubeconfig=base-config set-cluster development --server=https://1.2.3.4
2. Add user details
kubectl config --kubeconfig=base-config set-credentials experimenter --username=dev --password=some-password
3. Setting Contexts
kubectl config --kubeconfig=base-config set-context dev-frontend --cluster=development --namespace=frontend --user=experimenter
4. Repeating above steps for second cluster
kubectl config --kubeconfig=base-config set-cluster production --server=https://4.5.6.7
kubectl config --kubeconfig=base-config set-context prod-frontend --cluster=production --namespace=frontend --user=experimenter
View Kubeconfig
kubectl config --kubeconfig=base-config view
Get current context information:
kubectl config --kubeconfig=base-config get-contexts
Switch Contexts:
kubectl config --kubeconfig=base-config use-context dev-frontend
Secrets
Secrets hold sensitive values for pods; they are created and stored in etcd
Create secret
a. From a literal value
kubectl create secret generic firstsecret --from-literal=dbpass=mypassword123
b. From a file
kubectl create secret generic secondsecret --from-file=./credentials.txt
c. From YAML file
kubectl apply -f secret-stringdata.yaml
apiVersion: v1
kind: Secret
metadata:
  name: thirdsecret
type: Opaque
stringData:
  # stringData takes plain strings; Kubernetes base64-encodes them into .data for you
  config.yaml: |-
    username: dbadmin
    password: mypassword123
Mount secret in Pod
a. Volume
apiVersion: v1
kind: Pod
metadata:
  name: secretmount
spec:
  containers:
  - name: secretmount
    image: nginx
    volumeMounts:
    - name: foo            # Mount the volume called foo
      mountPath: "/etc/foo" # Create a directory for it in the pod
      readOnly: true        # So the secret can't be changed
  volumes:
  - name: foo               # Expose this secret at the mount path
    secret:
      secretName: firstsecret
Find the secret in the pod by exec-ing into it and running
cat /etc/foo/dbpass
b. Environment Variables
apiVersion: v1
kind: Pod
metadata:
  name: secret-env
spec:
  containers:
  - name: secret-env
    image: nginx
    env:
    - name: SECRET_USERNAME   # Associated name
      valueFrom:
        secretKeyRef:
          name: firstsecret   # Uses created key from etcd
          key: dbpass
  restartPolicy: Never
Find the secret in the pod by exec-ing into it and running
echo $SECRET_USERNAME
NOTE: Secret values are delivered to the pod already decoded, so you can use them within applications without base64-decoding them yourself
See secrets
kubectl get secret
See secret data
kubectl describe secret <secretname>
Encode (Base 64)
echo -n '<key>' | base64
Decode key (Base 64)
echo <key> | base64 -d
Storage
Minikube Volume
Create a volume the same way you create a Pod
- Make a directory for the files to be stored in, e.g
mkdir /mydata
Note: If running in Minikube
Minikube itself is a VM, so the hostPath directory must exist inside that VM, not on your host machine; alternatively, mount a host directory into the VM with:
minikube mount <source directory>:<target directory>
Source = host machine, target = path inside the Minikube VM. Unfortunately, if you restart Minikube the mount will be lost
- Apply a container and volume, e.g
apiVersion: v1
kind: Pod
metadata:
  name: demopod-volume
spec:
  containers:
  - image: nginx
    name: test-container
    volumeMounts:
    - name: first-volume
      mountPath: /data   # Inside the container
  volumes:
  - name: first-volume
    hostPath:
      path: /mydata      # Host machine directory
      type: Directory
Persistent Volume
PVs are storage spaces that persist even after pods are restarted.
- Provision the volume (statically via hostPath here; cloud providers provision dynamically instead, see the note below)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: block-pv
spec:
  storageClassName: manual   # Must match the claim's storageClassName below
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/data   # Host machine directory set up prior
- Claim volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
- Reference the claimed PVC when spinning up the pods
apiVersion: v1
kind: Pod
metadata:
  name: pvb-pod
spec:
  containers:
  - image: nginx
    name: my-container
    volumeMounts:
    - name: my-volume
      mountPath: /data
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: pvc   # Match the name from step 2
In most cloud environments you won’t use hostPath, as those providers offer their own methods of dynamic volume provisioning
See Persistent Volumes
kubectl get pv
See Persistent Volume Claims
kubectl get pvc
Resizing PVC
- Edit the PVC, e.g
kubectl edit pvc <pod-pvc-name>
- Change the storage capacity and save
:wq!
- Delete and reapply the pod
kubectl delete pod <pvc-pod-name>
kubectl apply -f <pod-pvc-file>
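A one-liner alternative to editing, assuming the StorageClass has allowVolumeExpansion enabled (the claim name and new size match the earlier example):
kubectl patch pvc pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'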
ConfigMaps
Centrally store config data for running pods for things like dev/staging/prod environments
- Create a properties file, e.g
dev.properties
app.env=dev
app.mem=2048m
app.properties=dev.env.url
- Create a ConfigMap using the file
kubectl create configmap dev-properties --from-file=dev.properties
- Mount ConfigMap to a Pod via a Volume
apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
  - name: test-container
    image: nginx
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: dev-properties
  restartPolicy: Never
- You’ll be able to find the properties file if you exec into the pod and look inside
/etc/config
See configmaps
kubectl get configmap
See ConfigMap data
kubectl describe configmap <configmap-name>
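Besides a volume mount, ConfigMap values can also be injected as environment variables. A sketch assuming a ConfigMap created from literals (dev-env is a made-up name; file-based keys like dev.properties contain dots, which are not valid env variable names):
kubectl create configmap dev-env --from-literal=APP_ENV=dev --from-literal=APP_MEM=2048m
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-pod
spec:
  containers:
  - name: test-container
    image: nginx
    envFrom:
    - configMapRef:
        name: dev-env   # Every key becomes an environment variable
  restartPolicy: Never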
Network Policies
By default, all pods accept traffic from anywhere
Block all inbound traffic - Default Deny Ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
Block all outbound traffic - Default Deny Egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress
Block all traffic for Pods that have a specific label (e.g Suspicious)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-ingress-egress
spec:
  podSelector:
    matchLabels:
      role: suspicious
  policyTypes:
  - Ingress
  - Egress
Allow traffic from range of IPs except one (for pods with a label of secure)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-from-ips
spec:
  podSelector:
    matchLabels:
      role: secure
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.0.0/16     # Allow all from this range of IPs
        except:
        - 192.168.182.197/32     # Except this IP
  policyTypes:
  - Ingress
Allow traffic to a specific IP (for pods with a label of secure)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-to-ips
spec:
  podSelector:
    matchLabels:
      role: secure
  egress:
  - to:
    - ipBlock:
        cidr: 192.168.0.0/16   # Allowed IP range
  policyTypes:
  - Egress
Allow ingress connection based on namespace (for pods with a label of reconcile)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: namespace-selector
spec:
  podSelector:
    matchLabels:
      role: secure
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          role: app        # Namespace label
      podSelector:
        matchLabels:
          role: reconcile  # Only pods labelled reconcile within that namespace get matched
  policyTypes:
  - Ingress
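Once applied, policies can be listed and inspected like any other resource, which is handy for checking the pod selector matched what you expected:
kubectl get networkpolicy
kubectl describe networkpolicy default-deny-ingress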
Logging & Monitoring
Note: Kubernetes Events are generated for state change, errors and messages. By default, events are deleted after 1 hour.
See Events
kubectl get events
See Events within a namespace
kubectl get namespace
kubectl get events -n <namespace-name>
Filter events
kubectl get events --field-selector involvedObject.name=<pod-or-anything>
Troubleshooting
Pod Never Ready
Run
kubectl describe pods <pod-name>
and read the Events
Test Pod Connection
Inter-pod connection
- Run
kubectl get pods -o wide
and note the IP addresses, e.g
NAME          READY  STATUS   IP
frontend-pod  1/1    Running  10.244.1.1
backend-pod   1/1    Running  10.244.1.110
- Remote connect to the pod you want to test FROM e.g
kubectl exec -it frontend-pod -- bash
- Once inside the pod, install curl and/or nano with
apt-get update && apt-get install curl nano
- Test the connection of the pod you want to connect TO with curl, e.g
curl 10.244.1.110
Connection from a pod in another namespace (e.g external)
kubectl exec -n external -it <pod-name> -- ping 192.168.137.68
Resources Not Selecting One Another
- Check the labels match (Key and value)
- Check namespaces match
Drain Node
Gracefully evicts all pods from the node while preventing any new pods from being scheduled on it
kubectl drain <node-name>
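In practice drain is usually paired with cordon/uncordon, and DaemonSet pods have to be explicitly skipped; a sketch:
kubectl cordon <node-name>      # Stop new pods being scheduled without evicting anything
kubectl drain <node-name> --ignore-daemonsets
kubectl uncordon <node-name>    # Make the node schedulable again after maintenance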
Taint Based Evictions
Any nodes with certain taints are handled accordingly
Get taints on a node
kubectl describe node <node-name>
Add taint to a node:
kubectl taint nodes <node-name> <key>=<value>:<effect>
Remove taint from a node
kubectl taint nodes <node-name> <key>=<value>:<effect>-
(the trailing - removes the taint)
Get tolerations on a pod:
kubectl describe pod <pod-name>
Add toleration to a pod
kubectl edit pod <pod-name> (edit spec.tolerations)
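What you add under spec.tolerations looks roughly like this (key/value are made up; tolerationSeconds is only valid with the NoExecute effect):
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "gpu"
  effect: "NoExecute"
  tolerationSeconds: 300   # Pod is evicted 300s after a matching taint appears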
Check evictions:
kubectl get events -n <namespace>
Dump Logs
See small amount of cluster info status data
kubectl cluster-info
See all cluster info status data
kubectl cluster-info dump
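For an individual workload, kubectl logs is usually more useful than a full cluster dump:
kubectl logs <pod-name>
kubectl logs -f <pod-name>           # Stream/follow the logs
kubectl logs <pod-name> --previous   # Logs from the previously crashed container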
TLS Timeouts
One possible cause for these is a lack of resources.
Cloud
In the case of AWS or cloud providers using something like EC2, simply increase the instance size, e.g t2.micro > t2.medium
Local
Clear any running contexts
colima delete
Increase resources available to Docker and Minikube (Default: cpu=2, memory=4096)
colima start --cpu 4 --memory 8
minikube start --cpus 4 --memory 7200
Service Not Loading In Browser
Sometimes everything looks fine in the cluster and a container should be reachable via a web address, whether you’re using a LoadBalancer, a NodePort, tunnelling with Minikube (which needs a recent SSH version, see below) or even an Ingress
Open the browser at the correct address from the terminal:
minikube service <service-name>
One of the addresses printed should work if the command doesn’t open the browser for you
If it can’t find the service, that in itself is a clue
Point a new port at the service
kubectl port-forward service/<service-name> 7080:80
SSH out of date (Minikube)
Minikube tunnel will silently fail if SSH is older than v3