Domain 1 — Cluster Architecture

🏗️ DOMAIN 1 · CLUSTER ARCHITECTURE

Cluster Architecture,
Installation & Configuration

kubeadm, RBAC, etcd, Helm, Kustomize, High Availability, CRDs, CNI/CSI/CRI — the foundation of every CKA exam.

⚖️ 25% Exam Weight
18 Labs · ~14 hours
🆕 New for 2025: Helm · Kustomize · CRDs
Domain progress: 6 / 18 labs complete
GitHub repository: opscart/production-cka · 📁 cluster-architecture/
01
RBAC Basics
Roles, ClusterRoles, RoleBindings, ClusterRoleBindings
✓ Complete 30 min

Objectives

  • Create a Role scoped to a namespace with specific verb/resource permissions
  • Bind the Role to a ServiceAccount using a RoleBinding
  • Verify permissions with kubectl auth can-i
  • Understand the difference between Role and ClusterRole

Key Commands

# Create role imperatively (exam speed)
kubectl create role pod-reader --verb=get,list,watch --resource=pods -n rx-dev

# Bind to service account
kubectl create rolebinding pod-reader-binding \
  --role=pod-reader --serviceaccount=rx-dev:rx-sa -n rx-dev

# Verify access
kubectl auth can-i get pods --as=system:serviceaccount:rx-dev:rx-sa -n rx-dev
⚡ Exam Tip

Always use --dry-run=client -o yaml to generate YAML fast. Never hand-write RBAC YAML in the exam.
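
For example, a minimal sketch (reusing the rx-dev namespace above) of generating the Role manifest instead of writing it by hand:

kubectl create role pod-reader --verb=get,list,watch --resource=pods \
  -n rx-dev --dry-run=client -o yaml > role.yaml
kubectl apply -f role.yaml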

View lab files on GitHub — 01-rbac-basics/
02
RBAC Advanced + Diagrams
ClusterRoles, aggregation rules, impersonation, audit
✓ Complete 40 min

Objectives

  • Create ClusterRoles for cross-namespace access patterns
  • Use aggregated ClusterRoles with label selectors
  • Debug RBAC failures using kubectl auth can-i and audit logs
  • Understand when to use ClusterRole vs Role for namespace-scoped resources

Key Commands

kubectl create clusterrole node-reader --verb=get,list --resource=nodes
kubectl auth can-i --list --as=jane
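
A minimal sketch of an aggregated ClusterRole (the name and aggregation label are illustrative, not from the lab files): any ClusterRole carrying the matching label has its rules folded in automatically.

cat <<'EOF' | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-aggregate          # illustrative name
aggregationRule:
  clusterRoleSelectors:
    - matchLabels:
        rbac.example.com/aggregate-to-monitoring: "true"
rules: []   # filled in automatically from matching ClusterRoles
EOF
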
🏭 Production Note

In our pharmaceutical AKS clusters we bind ClusterRoles with RoleBindings (not ClusterRoleBindings) to limit blast radius to specific namespaces.
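
As a sketch of that pattern (namespace and ServiceAccount names reused from lab 01), the built-in view ClusterRole can be granted in a single namespace via a RoleBinding:

kubectl create rolebinding rx-viewer --clusterrole=view \
  --serviceaccount=rx-dev:rx-sa -n rx-dev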

View lab files on GitHub — 02-rbac-advanced/
03
kubeadm Install
Bootstrap a cluster from scratch with kubeadm
✓ Complete 60 min

Objectives

  • Install containerd, kubelet, kubeadm, kubectl on all nodes
  • Bootstrap the control plane with kubeadm init
  • Configure kubectl access via KUBECONFIG
  • Join worker nodes using the kubeadm token
  • Install a CNI plugin (Calico or Flannel)

Key Commands

kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube && cp /etc/kubernetes/admin.conf $HOME/.kube/config
kubeadm token create --print-join-command
⚡ Exam Tip

You won’t install from scratch in the exam, but you must know the sequence cold — especially how to recover from a failed kubeadm init using kubeadm reset.
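
A rough recovery sketch after a failed init (the CNI/kubeconfig cleanup is a common extra step, not part of kubeadm itself):

kubeadm reset -f                                  # tear down the partial control plane
rm -rf /etc/cni/net.d $HOME/.kube/config          # clear stale CNI config and kubeconfig, if present
kubeadm init --pod-network-cidr=192.168.0.0/16    # re-run with corrected flags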

View lab files on GitHub — 03-kubeadm-install/
04
Cluster Upgrade
Upgrade control plane and worker nodes with kubeadm
✓ Complete 50 min

Objectives

  • Drain the control plane node before upgrading
  • Upgrade kubeadm, then apply the upgrade plan
  • Upgrade kubelet and kubectl on the control plane
  • Repeat drain/upgrade/uncordon cycle on each worker node

Key Commands

kubectl drain controlplane --ignore-daemonsets --delete-emptydir-data
apt-get install -y kubeadm=1.30.0-1.1
kubeadm upgrade apply v1.30.0
apt-get install -y kubelet=1.30.0-1.1 kubectl=1.30.0-1.1
systemctl restart kubelet
kubectl uncordon controlplane
⚡ Exam Tip

Cluster upgrade is near-certain on every exam. Practice until you can do the full sequence in under 8 minutes. Order matters: kubeadm → kubelet → kubectl. Never skip the drain.
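
The worker-node cycle looks like this (a sketch, pinned to the same 1.30.0 packages used above; run the apt/kubeadm steps on the worker itself):

kubectl drain node01 --ignore-daemonsets --delete-emptydir-data   # from the control plane
apt-get install -y kubeadm=1.30.0-1.1                             # on node01
kubeadm upgrade node                                              # upgrades the local kubelet config
apt-get install -y kubelet=1.30.0-1.1 kubectl=1.30.0-1.1
systemctl restart kubelet
kubectl uncordon node01                                           # from the control plane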

View lab files on GitHub — 04-cluster-upgrade/
05
etcd Backup & Restore
Snapshot, restore, disaster recovery for etcd
✓ Complete 45 min

Objectives

  • Take an etcd snapshot using etcdctl snapshot save
  • Verify snapshot integrity with etcdctl snapshot status
  • Restore etcd from snapshot and update the static pod manifest
  • Confirm cluster recovers after restore

Key Commands

# Take snapshot
ETCDCTL_API=3 etcdctl snapshot save /opt/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Restore to new data dir
ETCDCTL_API=3 etcdctl snapshot restore /opt/etcd-backup.db \
  --data-dir=/var/lib/etcd-restore

# Update etcd static pod manifest
vim /etc/kubernetes/manifests/etcd.yaml   # Change: --data-dir=/var/lib/etcd-restore
⚡ Exam Tip

Memorise the 5 flags: --endpoints, --cacert, --cert, --key, --data-dir. Certs are always under /etc/kubernetes/pki/etcd/.
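
A short sketch of the verification steps the objectives call for, using the same snapshot path as above:

ETCDCTL_API=3 etcdctl snapshot status /opt/etcd-backup.db --write-out=table
# If you change --data-dir, the etcd-data hostPath volume in the manifest usually
# needs the same update. Then wait for the static pod to come back:
kubectl -n kube-system get pods | grep etcd
kubectl get nodes    # confirms the API server can read from the restored etcd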

View lab files on GitHub — 05-etcd-backup-restore/
06
Helm Basics
Repos, install, upgrade, rollback, inspect releases
✓ Complete 40 min

Objectives

  • Add and update a Helm chart repository
  • Install a chart with custom values using --set and -f
  • Upgrade a release and inspect revision history
  • Roll back to a previous revision

Key Commands

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-app bitnami/nginx -n rx-dev --set replicaCount=2
helm upgrade my-app bitnami/nginx -n rx-dev -f values.yaml
helm rollback my-app 1 -n rx-dev
helm history my-app -n rx-dev
helm get values my-app -n rx-dev
⚡ Exam Tip

Helm docs are NOT allowed in the exam. Know these commands from memory. Always specify -n namespace — missing it is the #1 Helm mistake under pressure.

View lab files on GitHub — 06-helm-basics/
07
Helm Charts
Chart structure, templates, values hierarchy, packaging
▶ Next 50 min

Objectives

  • Understand chart structure: Chart.yaml, templates/, values.yaml
  • Create a chart with a Deployment and Service template
  • Use template functions: {{ .Values.* }}, {{ .Release.Name }}
  • Lint and dry-run before installing

Key Commands

helm create my-chart
helm lint ./my-chart
helm install test ./my-chart --dry-run --debug
helm template test ./my-chart
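
A minimal sketch of the template syntax: a hypothetical ConfigMap template added to the chart generated above, rendered without installing anything.

cat <<'EOF' > my-chart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config          # release name injected at install time
data:
  replicas: "{{ .Values.replicaCount }}"    # comes from values.yaml or --set
EOF
helm template test ./my-chart --set replicaCount=3
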
08
Kustomize Basics
Overlays, patches, namePrefix, commonLabels
⏸ Pending 40 min

Objectives

  • Create a base kustomization and dev/prod overlays
  • Apply patches to override specific fields per environment
  • Deploy with kubectl apply -k

Key Commands

kubectl apply -k ./overlays/prod
kubectl kustomize ./overlays/prod   # preview output
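
A minimal prod-overlay sketch (paths, prefix, and label are illustrative, assuming base/ and overlays/prod/ already exist with the plain Deployment/Service manifests in base/):

cat <<'EOF' > overlays/prod/kustomization.yaml
resources:
  - ../../base                   # reuse the base manifests
namePrefix: prod-                # every resource name gets this prefix
commonLabels:
  env: prod                      # label stamped onto all resources
patches:
  - path: replica-patch.yaml     # strategic-merge patch overriding replicas
EOF
kubectl kustomize ./overlays/prod
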
09
High Availability Clusters
Stacked etcd HA, load balancer, multi-control-plane
⏸ Pending 60 min

Objectives

  • Understand stacked vs external etcd topology
  • Add a second control plane node with kubeadm join --control-plane
  • Verify HA — stop one control plane, confirm cluster stability
⚡ Exam Tip

Know the quorum formula: a majority of ⌊N/2⌋+1 etcd members must be healthy. 3 nodes tolerate 1 failure; 5 nodes tolerate 2.
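
A sketch of the join flow (the load-balancer address is a placeholder, and the token, hash, and certificate key come from your own init output):

# On the first control plane: init against the LB endpoint and upload control plane certs
kubeadm init --control-plane-endpoint "lb.example.com:6443" --upload-certs
# On the new control plane node: join with the values printed by init
kubeadm join lb.example.com:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <key>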

10
Cluster Components Deep Dive
API server, scheduler, controller-manager, kubelet
⏸ Pending 45 min

Objectives

  • Locate and inspect static pod manifests in /etc/kubernetes/manifests/
  • Understand which components are static pods vs systemd services
  • Modify an API server flag and observe restart behaviour

Key Commands

ls /etc/kubernetes/manifests/   # static pods
systemctl status kubelet        # systemd service
journalctl -u kubelet -n 50
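
For the last objective, a quick sketch (node name assumed to be controlplane): edit the manifest and watch the kubelet recreate the static pod.

vim /etc/kubernetes/manifests/kube-apiserver.yaml            # change or add a flag, save
crictl ps | grep kube-apiserver                              # old container goes away, a new one appears
kubectl -n kube-system get pod kube-apiserver-controlplane   # static pod name = component + node name
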
11
API Server Authentication
Certificates, kubeconfig, user creation via CSR API
⏸ Pending 50 min

Objectives

  • Create a client certificate for a new user via the Kubernetes CSR API
  • Build a kubeconfig file and switch contexts
  • Understand certificate-based vs token-based auth
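
A condensed sketch of the CSR flow for a hypothetical user jane (base64 -w0 assumes GNU coreutils; the cluster name kubernetes is the kubeadm default):

openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -subj "/CN=jane" -out jane.csr
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: jane
spec:
  request: $(base64 -w0 < jane.csr)
  signerName: kubernetes.io/kube-apiserver-client
  usages: ["client auth"]
EOF
kubectl certificate approve jane
kubectl get csr jane -o jsonpath='{.status.certificate}' | base64 -d > jane.crt
kubectl config set-credentials jane --client-certificate=jane.crt --client-key=jane.key
kubectl config set-context jane@kubernetes --cluster=kubernetes --user=jane
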
12
Admission Controllers
Built-in controllers, ResourceQuota, LimitRange
⏸ Pending 45 min

Objectives

  • Identify enabled admission controllers from the API server manifest
  • Create a ResourceQuota and LimitRange for a namespace
  • Enable NodeRestriction and observe its effect
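
A short sketch for the quota and limit objectives (rx-dev reused as the namespace; the quota values are illustrative):

grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml
kubectl create quota rx-quota -n rx-dev --hard=pods=10,requests.cpu=2,limits.memory=4Gi
cat <<'EOF' | kubectl apply -n rx-dev -f -
apiVersion: v1
kind: LimitRange
metadata:
  name: rx-limits
spec:
  limits:
    - type: Container
      default:             # applied as limits when a container sets none
        cpu: 500m
        memory: 256Mi
      defaultRequest:      # applied as requests when a container sets none
        cpu: 100m
        memory: 128Mi
EOF
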
13
Service Accounts
Token mounting, projected volumes, automount control
⏸ Pending 35 min

Objectives

  • Create a ServiceAccount and bind it to a Role
  • Assign to a Pod and verify token mounting
  • Disable automounting: automountServiceAccountToken: false
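
A small sketch combining the three objectives (names reuse the rx-dev/rx-sa convention from lab 01):

kubectl create serviceaccount rx-sa -n rx-dev
cat <<'EOF' | kubectl apply -n rx-dev -f -
apiVersion: v1
kind: Pod
metadata:
  name: sa-demo
spec:
  serviceAccountName: rx-sa
  automountServiceAccountToken: false    # no token projected into the pod
  containers:
    - name: app
      image: nginx
EOF
# With automount disabled, this ls fails because the token directory is absent:
kubectl exec -n rx-dev sa-demo -- ls /var/run/secrets/kubernetes.io/serviceaccount
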
14
CRDs & Operators
Custom Resource Definitions, controller pattern
⏸ Pending 60 min

Objectives

  • Create a CRD and define a custom resource schema with validation
  • Create instances of the custom resource and query with kubectl
  • Understand the reconciliation loop pattern used by operators
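
A minimal CRD sketch (the group ops.example.com and kind Backup are invented for illustration):

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.ops.example.com          # must be <plural>.<group>
spec:
  group: ops.example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string           # simple validation: schedule must be a string
EOF
kubectl get crd backups.ops.example.com
kubectl explain backups.spec
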
15
CNI Plugins
Calico, Flannel, Cilium — install, inspect, troubleshoot
⏸ Pending 50 min

Objectives

  • Install a CNI plugin and verify pod networking works
  • Locate CNI config files at /etc/cni/net.d/
  • Debug CNI failures when nodes show NotReady
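
A debugging sketch for a NotReady node (node and plugin names are examples; the pod labels differ per CNI):

ls /etc/cni/net.d/                                    # is a CNI config present at all?
kubectl describe node node01 | grep -i -A3 ready      # look for "network plugin not ready"
kubectl -n kube-system get pods -o wide | grep -iE 'calico|flannel|cilium'
journalctl -u kubelet -n 50 | grep -i cni             # kubelet logs name the failing plugin
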
16
CSI Storage Interface
CSI drivers, volume lifecycle, dynamic provisioning
⏸ Pending 45 min

Objectives

  • Inspect installed CSI drivers with kubectl get csidrivers
  • Trace a PVC through the CSI provisioner
  • Debug a PVC stuck in Pending due to CSI driver issues
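
A tracing sketch for a stuck PVC (data-pvc and rx-dev are assumed names):

kubectl get csidrivers
kubectl get storageclass                      # check provisioner and volumeBindingMode
kubectl describe pvc data-pvc -n rx-dev       # Events show provisioner errors
kubectl get events -n rx-dev --field-selector involvedObject.name=data-pvc
kubectl -n kube-system get pods | grep -i csi # is the driver/provisioner pod healthy?
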
17
CRI Runtimes
containerd, crictl, runtime inspection
⏸ Pending 40 min

Objectives

  • Use crictl to inspect containers at the runtime level
  • Understand how kubelet communicates with the container runtime via CRI

Key Commands

crictl ps        # containers
crictl pods      # pod sandboxes
crictl logs <id>
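
If crictl complains it cannot connect, point it at the runtime socket (a sketch assuming containerd at its usual path):

cat <<'EOF' > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF
crictl info | head   # confirms the runtime is reachable over CRI
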
18
Cluster Maintenance
Node drain, cordon, OS upgrade, uncordon
⏸ Pending 50 min

Objectives

  • Drain a node and verify pod rescheduling
  • Cordon without evicting — prevent new scheduling only
  • Perform OS-level update and bring node back with uncordon

Key Commands

kubectl cordon node01
kubectl drain node01 --ignore-daemonsets --delete-emptydir-data
kubectl uncordon node01