Cluster Architecture, Installation & Configuration
kubeadm, RBAC, etcd, Helm, Kustomize, High Availability, CRDs, CNI/CSI/CRI — the foundation of every CKA exam.
Objectives
- Create a Role scoped to a namespace with specific verb/resource permissions
- Bind the Role to a ServiceAccount using a RoleBinding
- Verify permissions with kubectl auth can-i
- Understand the difference between Role and ClusterRole
Key Commands
# Create role imperatively (exam speed)
kubectl create role pod-reader --verb=get,list,watch --resource=pods -n rx-dev

# Bind to service account
kubectl create rolebinding pod-reader-binding \
  --role=pod-reader --serviceaccount=rx-dev:rx-sa -n rx-dev

# Verify access
kubectl auth can-i get pods --as=system:serviceaccount:rx-dev:rx-sa -n rx-dev
Always use --dry-run=client -o yaml to generate YAML fast. Never hand-write RBAC YAML in the exam.
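For reference, the YAML that the imperative commands above generate with --dry-run=client -o yaml looks roughly like this (names and namespace taken from the commands; field order may differ slightly):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: rx-dev
rules:
- apiGroups: [""]          # "" = the core API group (pods live here)
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: rx-dev
subjects:
- kind: ServiceAccount
  name: rx-sa
  namespace: rx-dev
roleRef:                   # roleRef is immutable after creation
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
```

Knowing this shape helps when a task asks you to edit an existing Role rather than create one.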
Objectives
- Create ClusterRoles for cross-namespace access patterns
- Use aggregated ClusterRoles with label selectors
- Debug RBAC failures using kubectl auth can-i and audit logs
- Understand when to use ClusterRole vs Role for namespace-scoped resources
Key Commands
kubectl create clusterrole node-reader --verb=get,list --resource=nodes
kubectl auth can-i --list --as=jane
In our pharmaceutical AKS clusters we bind ClusterRoles with RoleBindings (not ClusterRoleBindings) to limit blast radius to specific namespaces.
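That blast-radius pattern can be sketched with the built-in view ClusterRole (binding name and subject here are illustrative): because the binding is a RoleBinding, the ClusterRole's rules apply only inside that one namespace.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view-rx-dev
  namespace: rx-dev            # rules granted only in this namespace
subjects:
- kind: ServiceAccount
  name: rx-sa
  namespace: rx-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole            # cluster-scoped rules, namespace-scoped grant
  name: view
```

Define the permissions once as a ClusterRole, then reuse it per namespace with RoleBindings.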
Objectives
- Install containerd, kubelet, kubeadm, kubectl on all nodes
- Bootstrap the control plane with kubeadm init
- Configure kubectl access via KUBECONFIG
- Join worker nodes using the kubeadm token
- Install a CNI plugin (Calico or Flannel)
Key Commands
kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubeadm token create --print-join-command
You won’t install from scratch in the exam, but you must know the sequence cold — especially how to recover from a failed kubeadm init using kubeadm reset.
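The flags above can also be captured in a kubeadm config file, which is easier to review and rerun after a reset. A minimal sketch, assuming Kubernetes v1.30 and the Calico-friendly pod CIDR shown earlier:

```yaml
# kubeadm-config.yaml — used as: kubeadm init --config kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.0
networking:
  podSubnet: 192.168.0.0/16   # must match the CNI plugin's expected CIDR
```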
Objectives
- Drain the control plane node before upgrading
- Upgrade kubeadm, then apply the upgrade plan
- Upgrade kubelet and kubectl on the control plane
- Repeat drain/upgrade/uncordon cycle on each worker node
Key Commands
kubectl drain controlplane --ignore-daemonsets --delete-emptydir-data
apt-get install -y kubeadm=1.30.0-1.1
kubeadm upgrade apply v1.30.0
apt-get install -y kubelet=1.30.0-1.1 kubectl=1.30.0-1.1
systemctl restart kubelet
kubectl uncordon controlplane
Cluster upgrade is near-certain on every exam. Practice until you can do the full sequence in under 8 minutes. Order matters: kubeadm → kubelet → kubectl. Never skip the drain.
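On each worker the sequence differs in one step: workers run kubeadm upgrade node instead of kubeadm upgrade apply, and take no version argument. A sketch with an illustrative node name (run drain/uncordon from a node with kubectl access):

```shell
kubectl drain node01 --ignore-daemonsets --delete-emptydir-data
apt-get install -y kubeadm=1.30.0-1.1
kubeadm upgrade node                 # no version argument on workers
apt-get install -y kubelet=1.30.0-1.1 kubectl=1.30.0-1.1
systemctl restart kubelet
kubectl uncordon node01
```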
Objectives
- Take an etcd snapshot using etcdctl snapshot save
- Verify snapshot integrity with etcdctl snapshot status
- Restore etcd from snapshot and update the static pod manifest
- Confirm cluster recovers after restore
Key Commands
# Take snapshot
ETCDCTL_API=3 etcdctl snapshot save /opt/etcd-backup.db \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Restore to new data dir
ETCDCTL_API=3 etcdctl snapshot restore /opt/etcd-backup.db \
  --data-dir=/var/lib/etcd-restore

# Update etcd static pod manifest
vim /etc/kubernetes/manifests/etcd.yaml
# Change: --data-dir=/var/lib/etcd-restore
Memorise the 5 flags: --endpoints, --cacert, --cert, --key, --data-dir. Certs are always under /etc/kubernetes/pki/etcd/.
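After a restore, two places in /etc/kubernetes/manifests/etcd.yaml typically need the new path: the --data-dir flag and the hostPath volume backing it. A fragment of the edited manifest (everything else stays as-is):

```yaml
spec:
  containers:
  - command:
    - etcd
    - --data-dir=/var/lib/etcd-restore   # was /var/lib/etcd
    # ...other flags unchanged...
  volumes:
  - hostPath:
      path: /var/lib/etcd-restore        # must match --data-dir
      type: DirectoryOrCreate
    name: etcd-data
```

Saving the file makes the kubelet recreate the etcd static pod automatically; give it a minute before declaring the restore failed.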
Objectives
- Add and update a Helm chart repository
- Install a chart with custom values using --set and -f
- Upgrade a release and inspect revision history
- Roll back to a previous revision
Key Commands
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-app bitnami/nginx -n rx-dev --set replicaCount=2
helm upgrade my-app bitnami/nginx -n rx-dev -f values.yaml
helm rollback my-app 1 -n rx-dev
helm history my-app -n rx-dev
helm get values my-app -n rx-dev
Helm docs are NOT allowed in the exam. Know these commands from memory. Always specify -n namespace — missing it is the #1 Helm mistake under pressure.
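A values.yaml passed with -f overrides chart defaults; the valid keys depend on the chart. These are common bitnami/nginx keys, shown for illustration only:

```yaml
# values.yaml — keys must match what the chart's templates consume
replicaCount: 3
service:
  type: ClusterIP
resources:
  requests:
    cpu: 100m
    memory: 128Mi
```

Use helm get values to see what a running release was actually installed with before upgrading it.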
Objectives
- Understand chart structure: Chart.yaml, templates/, values.yaml
- Create a chart with a Deployment and Service template
- Use template functions: {{ .Values.* }}, {{ .Release.Name }}
- Lint and dry-run before installing
Key Commands
helm create my-chart
helm lint ./my-chart
helm install test ./my-chart --dry-run --debug
helm template test ./my-chart
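Inside templates/, values and release metadata are referenced like this. A minimal Deployment fragment in the style helm create generates (value keys assume a matching values.yaml):

```yaml
# templates/deployment.yaml (fragment)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web      # release name prefixes resources
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
      - name: web
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

helm template renders these locally so you can diff the output before any install.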
Objectives
- Create a base kustomization and dev/prod overlays
- Apply patches to override specific fields per environment
- Deploy with kubectl apply -k
Key Commands
kubectl apply -k ./overlays/prod
kubectl kustomize ./overlays/prod # preview output
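A minimal base/overlay layout for the commands above (three separate files, filenames per Kustomize convention; patch contents illustrative):

```yaml
# base/kustomization.yaml
resources:
- deployment.yaml
- service.yaml
---
# overlays/prod/kustomization.yaml
resources:
- ../../base
patches:
- path: replica-patch.yaml
---
# overlays/prod/replica-patch.yaml — merged onto the base Deployment by name
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5
```

The patch only needs the fields being overridden plus enough metadata to identify the target.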
Objectives
- Understand stacked vs external etcd topology
- Add a second control plane node with kubeadm join --control-plane
- Verify HA — stop one control plane, confirm cluster stability
Know the quorum formula: a majority of etcd members, ⌊N/2⌋+1, must be healthy. 3 nodes = tolerate 1 failure; 5 nodes = tolerate 2.
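The tolerance numbers fall straight out of the formula; a quick sanity check in shell, using integer division:

```shell
# failures tolerated = N - quorum, where quorum = N/2 + 1 (integer division)
for n in 1 3 5 7; do
  quorum=$(( n / 2 + 1 ))
  echo "$n members: quorum=$quorum, tolerate=$(( n - quorum ))"
done
```

Note that going from 3 to 4 members raises quorum to 3 without improving tolerance, which is why etcd clusters use odd sizes.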
Objectives
- Locate and inspect static pod manifests in /etc/kubernetes/manifests/
- Understand which components are static pods vs systemd services
- Modify an API server flag and observe restart behaviour
Key Commands
ls /etc/kubernetes/manifests/ # static pods
systemctl status kubelet # systemd service
journalctl -u kubelet -n 50
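The kubelet watches whatever directory its config file names; on kubeadm clusters that file is typically /var/lib/kubelet/config.yaml. A fragment:

```yaml
# /var/lib/kubelet/config.yaml (fragment)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests   # manifests here become static pods
```

Moving a manifest out of this directory stops its static pod; moving it back recreates it, which is a standard trick for restarting a broken API server.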
Objectives
- Create a client certificate for a new user via the Kubernetes CSR API
- Build a kubeconfig file and switch contexts
- Understand certificate-based vs token-based auth
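The CSR API object for a new user looks like this; spec.request is the base64-encoded PEM CSR (placeholder kept below), and the signer for client certificates is fixed. The user name is illustrative:

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: jane
spec:
  request: <base64-encoded CSR>            # e.g. base64 -w0 jane.csr
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400                 # optional: 1-day cert
  usages:
  - client auth
```

After kubectl certificate approve jane, the signed certificate appears in .status.certificate (base64), ready to embed in a kubeconfig.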
Objectives
- Identify enabled admission controllers from the API server manifest
- Create a ResourceQuota and LimitRange for a namespace
- Enable NodeRestriction and observe its effect
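ResourceQuota caps namespace totals while LimitRange sets per-container defaults and bounds; NodeRestriction is enabled via the --enable-admission-plugins flag in the kube-apiserver static pod manifest. Values below are illustrative, namespace reused from earlier sections:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: rx-dev-quota
  namespace: rx-dev
spec:
  hard:
    pods: "10"
    requests.cpu: "2"
    requests.memory: 4Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: rx-dev-limits
  namespace: rx-dev
spec:
  limits:
  - type: Container
    default:              # applied when a container sets no limits
      cpu: 500m
      memory: 256Mi
    defaultRequest:       # applied when a container sets no requests
      cpu: 100m
      memory: 128Mi
```

Note that once a quota covers cpu/memory, pods without requests are rejected unless a LimitRange supplies defaults.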
Objectives
- Create a ServiceAccount and bind it to a Role
- Assign to a Pod and verify token mounting
- Disable automounting:
automountServiceAccountToken: false
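Automounting can be disabled on the ServiceAccount or on the Pod; the Pod-level field takes precedence. A sketch with illustrative names:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rx-sa
  namespace: rx-dev
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: rx-dev
spec:
  serviceAccountName: rx-sa
  automountServiceAccountToken: false   # pod-level setting overrides the SA
  containers:
  - name: app
    image: nginx
```

Verify with kubectl exec: the token directory /var/run/secrets/kubernetes.io/serviceaccount should be absent.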
Objectives
- Create a CRD and define a custom resource schema with validation
- Create instances of the custom resource and query with kubectl
- Understand the reconciliation loop pattern used by operators
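A minimal CRD with OpenAPI schema validation; the group and kind are illustrative:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:
                type: string
              retention:
                type: integer
                minimum: 1       # API server rejects retention < 1
```

Once applied, kubectl get backups works like any built-in resource, and invalid instances are rejected at admission time.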
Objectives
- Install a CNI plugin and verify pod networking works
- Locate CNI config files at /etc/cni/net.d/
- Debug CNI failures when nodes show NotReady
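Files in /etc/cni/net.d/ are JSON. A simple example in the conflist format using the reference bridge plugin (names and subnet illustrative; real CNIs like Calico write their own, more complex configs):

```json
{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

An empty /etc/cni/net.d/ is the classic cause of a NotReady node with a "cni plugin not initialized" message in the kubelet logs.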
Objectives
- Inspect installed CSI drivers with kubectl get csidrivers
- Trace a PVC through the CSI provisioner
- Debug a PVC stuck in Pending due to CSI driver issues
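A PVC stays Pending until a provisioner satisfies it; the chain runs PVC → StorageClass → CSI driver. A sketch with an illustrative provisioner name:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: disk.csi.azure.com        # must match an installed CSI driver
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast
  resources:
    requests:
      storage: 5Gi
```

With WaitForFirstConsumer a PVC legitimately sits Pending until a pod references it, so check for a consuming pod before suspecting the driver.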
Objectives
- Use crictl to inspect containers at the runtime level
- Understand how kubelet communicates with the container runtime via CRI
Key Commands
crictl ps # containers
crictl pods # pod sandboxes
crictl logs <id>
Objectives
- Drain a node and verify pod rescheduling
- Cordon without evicting — prevent new scheduling only
- Perform OS-level update and bring node back with uncordon
Key Commands
kubectl cordon node01
kubectl drain node01 --ignore-daemonsets --delete-emptydir-data
kubectl uncordon node01