K8s Architecture: Master Node, Worker Node & Components Guide

Updated on April 29, 2026

11 min read

Understanding the Kubernetes architecture is the foundation for everything you’ll do with the platform, from debugging a CrashLoopBackOff to architecting a multi-region setup. This guide is for backend developers, DevOps practitioners, SREs, and platform engineers who want a solid mental model of how Kubernetes actually works under the hood. We’ll walk through the control plane components, worker node components, the scheduling flow, networking, storage, and how all the pieces interact in real production clusters.

K8s architecture has two main planes: the control plane (master components) and the data plane (worker nodes). The control plane runs the API Server, etcd, Scheduler, Controller Manager, and Cloud Controller Manager – together they decide what should run where. Each worker node runs kubelet (manages Pods on the node), kube-proxy (network rules), and a container runtime (containerd is the current default). The Kubernetes architecture diagram shows that the API Server is the only component that talks to etcd; everything else flows through the API Server. With DevOps job listings up 75% YoY (Spacelift 2025) and Kubernetes appearing in 80%+ of senior cloud-native listings, being able to explain this architecture correctly is non-negotiable for engineering interviews.
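
If you run a kubeadm-style cluster, most of these components are visible as ordinary Pods (managed services like EKS/AKS/GKE hide the control plane). A quick look, assuming kubectl access to such a cluster:

# Control plane and node components as seen from kube-system
kubectl get pods -n kube-system -o wide
# Typically lists: etcd-*, kube-apiserver-*, kube-controller-manager-*,
# kube-scheduler-*, kube-proxy-*, coredns-* (names vary per cluster)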

Key Components at a Glance

Plane         | Component                | Purpose
------------- | ------------------------ | ------------------------------------------------------------
Control plane | kube-apiserver           | REST API gateway; the only component that writes to etcd
Control plane | etcd                     | Consistent key-value store for cluster state
Control plane | kube-scheduler           | Picks a node for unscheduled Pods
Control plane | kube-controller-manager  | Runs core control loops (Deployments, ReplicaSets, etc.)
Control plane | cloud-controller-manager | Cloud-specific glue (LoadBalancer, EBS, etc.)
Data plane    | kubelet                  | Manages Pods on a node; talks to the API Server + runtime
Data plane    | kube-proxy               | Implements Service networking rules (iptables/IPVS/eBPF)
Data plane    | Container runtime        | Runs containers (containerd, CRI-O)
Add-ons       | DNS (CoreDNS)            | Cluster-internal DNS (Service discovery)
Add-ons       | CNI plugin               | Pod networking (Cilium, Calico, AWS VPC CNI, etc.)
Add-ons       | Ingress controller       | L7 routing (NGINX, Traefik, AWS ALB Controller)

Source: official Kubernetes Components documentation.

1. Kubernetes Architecture Explained – Control Plane

Direct answer: at a high level, the control plane is the brain of the cluster. It runs on dedicated nodes (or is hosted by your cloud provider as a managed service like EKS/AKS/GKE) and continuously reconciles desired state (what you wrote in YAML) with actual state (what’s running on worker nodes). Everything in Kubernetes flows through the API Server.

Brush up on what is a Kubernetes cluster, what is a Pod in Kubernetes, what is Docker, and Docker vs Kubernetes if you want the foundational context.

1.1 kube-apiserver – The Gateway

The API Server is the front door to the cluster. Every kubectl command, every controller, every kubelet talks to it via REST API (typically over mTLS). It validates and authorises requests, persists changes to etcd, and serves watch streams to the rest of the cluster. It’s stateless and horizontally scalable – most production clusters run 3+ replicas behind a load balancer.
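
Every kubectl invocation is just HTTPS against this API. A quick sketch to see it for yourself; the exact URLs logged depend on your cluster:

# Raise verbosity to watch the underlying REST calls kubectl makes
kubectl get pods -v=8        # logs e.g. GET https://<apiserver>/api/v1/namespaces/default/pods

# Talk to the API Server directly, bypassing kubectl's printers
kubectl get --raw /healthz   # returns "ok" when the API Server is healthy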

1.2 etcd – The Source of Truth

etcd is a distributed, consistent key-value store. It holds all cluster state: every Pod spec, ConfigMap, Secret, and custom resource. Only the API Server writes to etcd – lose etcd and you lose your cluster. Production clusters run etcd as a 3- or 5-node Raft cluster across availability zones, with automated backups (every 30 minutes is a common cadence).
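
A minimal backup sketch for self-managed etcd; the endpoint and certificate paths below are kubeadm defaults and are assumptions for your environment:

# Snapshot etcd (run where etcdctl can reach the etcd member)
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-$(date +%F).db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify the snapshot before trusting it as a backup
ETCDCTL_API=3 etcdctl snapshot status /backup/etcd-$(date +%F).db --write-out=table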

1.3 kube-scheduler – Pod Placement

The Scheduler watches for Pods with no assigned node and picks one. It runs a two-stage process: (1) filter – eliminate nodes that can’t run the Pod (insufficient resources, untolerated taints, unsatisfied node affinity rules); (2) score – rank the remaining candidates and pick the highest. Custom schedulers and scheduling profiles let you implement domain-specific placement logic.
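
A sketch of the spec fields the filter stage consumes; the disktype label and dedicated taint are illustrative, not cluster defaults:

# Pod whose spec drives scheduling decisions
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo
spec:
  containers:
  - name: app
    image: nginx:1.27
    resources:
      requests:           # filter: nodes lacking this much free CPU/memory are eliminated
        cpu: 500m
        memory: 256Mi
  nodeSelector:
    disktype: ssd         # filter: only nodes labelled disktype=ssd survive
  tolerations:
  - key: dedicated        # filter: permits nodes tainted dedicated=gpu:NoSchedule
    operator: Equal
    value: gpu
    effect: NoSchedule
EOF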

1.4 kube-controller-manager – The Reconciliation Engine

Runs all the core controllers in a single binary: Deployment Controller, ReplicaSet Controller, Node Controller, ServiceAccount Controller, Endpoint Controller, etc. Each controller watches a resource type and runs a control loop: ‘compare desired state to actual state, take action to reconcile.’ This is what makes Kubernetes self-healing.
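
You can watch a control loop reconcile in real time; the deployment name here is illustrative:

# Create desired state: 3 replicas
kubectl create deployment reconcile-demo --image=nginx:1.27 --replicas=3
# Introduce drift: delete the Pods behind it
kubectl delete pod -l app=reconcile-demo --wait=false
# The ReplicaSet Controller recreates them within seconds
kubectl get pods -l app=reconcile-demo -w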

1.5 cloud-controller-manager – Cloud Glue

Cloud-specific controllers that integrate with your cloud provider – provisioning LoadBalancer Services as cloud LBs, attaching/detaching EBS volumes, registering nodes with the cloud’s compute API, etc. On AWS this is the AWS cloud controller; on Azure, the Azure cloud controller; on GCP, GKE’s equivalent. Clusters with no cloud provider (bare metal, on-prem) skip this component.
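
A sketch of the LoadBalancer flow on AWS; the annotation is AWS-specific (other clouds use their own), and the app=web selector is illustrative:

# Applying this on EKS makes the cloud controller provision a real NLB
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web-lb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
EOF

kubectl get service web-lb   # EXTERNAL-IP fills in once the cloud LB exists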

2. Kubernetes Cluster Architecture – Worker Nodes

Direct answer: the data plane of the Kubernetes cluster architecture is where your application Pods actually run. Worker nodes are EC2 instances (on EKS), VMs (on AKS), or any Linux box that joins the cluster. Each node runs three core components: kubelet, kube-proxy, and a container runtime.

2.1 kubelet – The Node Agent

kubelet runs on every worker node. It receives PodSpecs from the API Server (via the watch API), instructs the container runtime to start/stop containers, monitors their health, and reports node + pod status back to the API Server. It also implements liveness/readiness/startup probes, runs init containers in order, and handles volume mounting. If kubelet dies, the node goes NotReady and pods may be evicted.
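
A minimal probe sketch, assuming an nginx container serving on port 80; kubelet is the component that actually executes these checks:

# Probes are run by kubelet, not by the control plane
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: app
    image: nginx:1.27
    livenessProbe:         # failing this makes kubelet restart the container
      httpGet: {path: /, port: 80}
      periodSeconds: 10
    readinessProbe:        # failing this marks the Pod NotReady (pulled from Services)
      httpGet: {path: /, port: 80}
      initialDelaySeconds: 5
EOF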

2.2 kube-proxy – Service Networking

kube-proxy implements the Kubernetes Service abstraction on each node. When you create a Service of type ClusterIP, kube-proxy programs iptables/IPVS/eBPF rules so that traffic to the Service’s virtual IP gets load-balanced to one of the matching Pods. Cilium can replace kube-proxy entirely with eBPF-based service load balancing – faster and more observable.
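
A minimal ClusterIP sketch; the app=backend label and ports are illustrative:

# Traffic to this Service's virtual IP is load-balanced by kube-proxy rules
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend          # Pods with this label become the endpoints
  ports:
  - port: 80
    targetPort: 8080
EOF

# In iptables mode you can inspect the generated chains on any node
sudo iptables-save | grep backend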

2.3 Container Runtime

The container runtime starts and stops containers. Kubernetes uses the Container Runtime Interface (CRI) to talk to it. Current production runtimes: containerd (default in most managed clusters), CRI-O (RHEL/OpenShift). Docker Engine is no longer a supported runtime since K8s 1.24 – though the same Docker images work fine via containerd.
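
Two quick checks, one via the API and one on the node; the runtime version shown is only an example, and crictl availability depends on your node image:

# Which runtime is each node using?
kubectl get nodes -o wide    # CONTAINER-RUNTIME column, e.g. containerd://1.7.x

# On the node, crictl speaks CRI directly to the runtime
sudo crictl ps               # containers kubelet has started
sudo crictl images           # images pulled to this node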

3. K8s architecture diagram – Visual Walkthrough

Direct answer: A typical K8s architecture diagram shows the control plane on the left/top, worker nodes on the right/bottom, and arrows representing the API Server as the hub. Below is a text-rendering of the canonical layout plus the data flow when you apply a Deployment.

# Canonical Kubernetes architecture (text version)
┌─────────────────────────────────────────────────┐
│                  CONTROL PLANE                  │
│                                                 │
│  ┌────────────┐   ┌──────┐   ┌────────────────┐ │
│  │ API Server │◄─►│ etcd │   │   Scheduler    │ │
│  └─────▲──────┘   └──────┘   └────────────────┘ │
│        │                     ┌────────────────┐ │
│        │                     │   Controller   │ │
│        │                     │    Manager     │ │
│        │                     └────────────────┘ │
│        │                     ┌────────────────┐ │
│        │                     │   Cloud CCM    │ │
│        │                     └────────────────┘ │
└────────┼────────────────────────────────────────┘
         │  (mTLS REST + watch streams)
┌────────┼──────────────────┐   ┌───────────────────────────┐
│ WORKER NODE 1             │   │ WORKER NODE 2             │
│ ┌─────────┐  ┌─────────┐  │   │ ┌─────────┐  ┌─────────┐  │
│ │ kubelet │  │  kube-  │  │   │ │ kubelet │  │  kube-  │  │
│ └────┬────┘  │  proxy  │  │   │ └────┬────┘  │  proxy  │  │
│      │       └─────────┘  │   │      │       └─────────┘  │
│ ┌────▼───────┐            │   │ ┌────▼───────┐            │
│ │ containerd │            │   │ │ containerd │            │
│ └────────────┘            │   │ └────────────┘            │
│ Pods: A, B, C             │   │ Pods: D, E, F             │
└───────────────────────────┘   └───────────────────────────┘

3.1 What Happens When You Run kubectl apply -f deployment.yaml

  • kubectl sends the Deployment spec to the API Server via REST
  • API Server validates + authorises (RBAC), then writes to etcd
  • Deployment Controller watches → creates a ReplicaSet
  • ReplicaSet Controller watches → creates Pods (still unassigned)
  • Scheduler watches for unassigned Pods → picks a node, writes back to API Server
  • kubelet on the chosen node watches → pulls images, starts containers via containerd
  • kubelet reports Pod status back to API Server (Running, Ready, etc.)
  • If you created a Service: kube-proxy on every node programs iptables/IPVS rules (the full chain is traced live just below)
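
You can trace that whole chain on a live cluster; the namespace and image below are illustrative:

# Trigger the flow
kubectl create namespace apply-demo
kubectl -n apply-demo create deployment web --image=nginx:1.27 --replicas=2

# Each step leaves a visible trace
kubectl -n apply-demo get deploy,rs,pods                  # Deployment -> ReplicaSet -> Pods
kubectl -n apply-demo get events --sort-by=.lastTimestamp
# Typical (abridged) event reasons: Scheduled (scheduler), Pulling/Pulled
# and Created/Started (kubelet via containerd)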

4. Networking + Storage in the Architecture

Direct answer: a complete Kubernetes architecture diagram includes networking (CNI plugin, Services, Ingress) and storage (CSI driver, PersistentVolumes). These are pluggable – you pick implementations based on your cloud provider and workload needs.

4.1 Networking – CNI, Services, Ingress

  • CNI plugin (Cilium, Calico, AWS VPC CNI, GCP, Azure CNI) – assigns Pod IPs and routes Pod-to-Pod traffic
  • Services – virtual IPs that load-balance to matching Pods (ClusterIP, NodePort, LoadBalancer, ExternalName)
  • Ingress – L7 HTTP/HTTPS routing via an Ingress Controller (NGINX, Traefik, AWS ALB Controller, Kong)
  • NetworkPolicy – namespace + label-based traffic control between Pods (example after this list)
  • Service Mesh (Istio, Linkerd, Cilium Service Mesh) – observability + mTLS + advanced routing
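
A NetworkPolicy sketch, assuming a namespace called prod and app labels (all illustrative); note the CNI plugin must support NetworkPolicy for these to be enforced:

# Default-deny ingress for the namespace, plus one narrow allow rule
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: prod
spec:
  podSelector: {}           # selects every Pod in the namespace
  policyTypes: [Ingress]    # no ingress rules listed => all inbound denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: prod
spec:
  podSelector:
    matchLabels: {app: api}
  ingress:
  - from:
    - podSelector:
        matchLabels: {app: frontend}
    ports:
    - port: 8080
EOF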

4.2 Storage – Volumes, PVs, PVCs, CSI

  • Volume – backing storage for a container (emptyDir, hostPath, configMap, secret, etc.)
  • PersistentVolume (PV) – cluster-level storage resource (provisioned by admin or dynamically)
  • PersistentVolumeClaim (PVC) – namespace-level request for storage
  • StorageClass – defines how PVs are provisioned (EBS gp3, EFS, Azure Disk, etc.; example after this list)
  • CSI driver – Container Storage Interface plugin per storage backend (cloud-provided)
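
A dynamic-provisioning sketch using the AWS EBS CSI driver's provisioner; the class name, namespace, and size are illustrative:

# StorageClass (how to provision) + PVC (a request against it)
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer   # provision only once a Pod is scheduled
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: prod
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: gp3
  resources:
    requests:
      storage: 20Gi
EOF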

5. Production Architecture Variants

5.1 HA Control Plane

  • 3-5 control plane replicas across availability zones – survives single AZ failure
  • etcd as 3 or 5-node Raft cluster – odd number, quorum-based
  • Load balancer in front of API Server replicas
  • On EKS/AKS/GKE – managed by the cloud provider; you don’t see these details

5.2 Worker Node Pools

  • Multiple node pools (system, application, GPU, spot) for workload isolation
  • Taints + tolerations to keep node pools dedicated (sketch after this list)
  • Pod Disruption Budgets to survive node drains during upgrades
  • Cluster Autoscaler or Karpenter for node-level autoscaling
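
A sketch of pool isolation plus a drain-safety budget; the pool=gpu label, dedicated taint, and app=api selector are all illustrative:

# Repel ordinary Pods from the GPU pool (Pods need a matching toleration)
kubectl taint nodes -l pool=gpu dedicated=gpu:NoSchedule

# Keep at least 2 api replicas up through voluntary drains (upgrades, scale-down)
kubectl apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels: {app: api}
EOF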

5.3 Add-Ons

  • CoreDNS for cluster-internal DNS
  • CNI plugin (Cilium recommended for new production clusters)
  • Metrics Server for HPA (CPU/memory autoscaling; example after this list)
  • Cert-manager for automated TLS certificates
  • Prometheus + Grafana stack for observability
  • ArgoCD or Flux for GitOps deployment
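
A minimal HPA sketch; it assumes Metrics Server is installed and a Deployment named web exists, and the thresholds are illustrative:

# Scale web between 2 and 10 replicas around 70% average CPU
kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
EOF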

6. Best Practices and Gotchas

  • Never expose etcd directly – only the API Server should ever write to it
  • Use HA control plane in production – single replica = single point of failure
  • Run multiple worker node pools – don’t put system pods on the same nodes as your apps
  • Set resource requests + limits on every container – prevents noisy-neighbour issues
  • Use Pod Security Standards (Restricted) by default; opt in to Privileged only where needed (sketch after this list)
  • Keep K8s versions current – minor versions get ~14 months support; falling behind = security debt
  • Use NetworkPolicy by default; cluster without policies = flat network = lateral movement risk
  • Tag all workloads with app/env/team labels – cost visibility + RBAC scoping rely on labels
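
One low-effort win from this list, assuming a namespace called prod: enforcing the Restricted Pod Security Standard with the upstream admission labels:

# Reject Pods that violate the Restricted profile in this namespace
kubectl label namespace prod \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=latest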

7. Why This Knowledge Pays in India

Direct answer: Indian senior DevOps and platform engineering interviews increasingly probe Kubernetes internals deeply. Engineers who can draw the architecture from memory, explain the data flow on `kubectl apply`, and trace a production failure through the components are 2-3x more likely to land senior offers.

7.1 Market Signals

DevOps job listings are up 75% year over year (Spacelift 2025), and Kubernetes appears in more than 80% of senior cloud-native listings – the market signals cited in the introduction.

7.2 What Hero Vired DevOps Course Covers

  • 8-month duration with 70-90% live instructor-led sessions
  • 7+ industry projects – multi-tier microservices on EKS/AKS/GKE, GitOps deploys, progressive delivery
  • Skills covered: Advanced CI/CD Pipeline Architecture, Kubernetes at Scale, Infrastructure as Code (Terraform, Ansible), Container Orchestration, GitOps & Deployment Strategies, Advanced Monitoring & Logging, Security at Scale (DevSecOps)
  • Multi-cloud – AWS, Azure (with Microsoft Azure content access), GCP – managed Kubernetes on each (EKS, AKS, GKE)
  • Apply and master GenAI in DevOps – agentic AI for cluster cost optimisation, alert triage, manifest generation
  • Career services – CV and LinkedIn branding, mock interviews, 1:1 personalised career coaching

7.3 Salary Impact in India

Role / Experience                          | Salary Range (India, 2026)
------------------------------------------ | --------------------------
Junior DevOps / Cloud Engineer (0-2 yrs)   | INR 6-12 LPA
Mid-level Kubernetes engineer (3-5 yrs)    | INR 14-26 LPA
Senior SRE / Platform Engineer (6-9 yrs)   | INR 26-50 LPA
Principal Platform / SRE Lead (10+ yrs)    | INR 45-90 LPA+

Ranges are drawn from Naukri, AmbitionBox, and LinkedIn; see DevOps engineer salary, DevOps engineer skills, DevOps roadmap, and how to become a DevOps engineer.

8. Final Takeaway

Kubernetes architecture is the foundation for every cloud-native engineering decision. Master the control plane (API Server, etcd, Scheduler, Controllers, Cloud CCM) and the data plane (kubelet, kube-proxy, container runtime); understand the watch-and-reconcile flow; know how networking + storage + DNS plug in. For senior platform-engineering interviews, demonstrate fluency: ‘When I `kubectl apply`, the API Server validates and persists to etcd; the Deployment Controller creates a ReplicaSet; the Scheduler picks a node; kubelet pulls the image via containerd and starts the container; kube-proxy programs Service rules; status flows back through API Server.’ That’s the kind of practical answer that separates senior offers from rejections.

Related reads: Kubernetes interview questions, Kubernetes architecture deep-dive, Docker vs Kubernetes, Docker Swarm vs Kubernetes, and DevOps tools.

FAQs
Q1. What is K8s architecture in simple terms?
It has two main planes: the control plane (master components - API Server, etcd, Scheduler, Controller Manager, Cloud Controller Manager) decides what should run where; the data plane (worker nodes - kubelet, kube-proxy, container runtime) actually runs the workloads. The API Server is the only component that talks to etcd; everything flows through it. The whole system is a watch-and-reconcile loop: write desired state to etcd, controllers detect drift, take action to converge.
Q2. Can you describe the kubernetes architecture diagram?
The canonical kubernetes architecture diagram shows the control plane as a box containing API Server, etcd, Scheduler, Controller Manager, and Cloud Controller Manager - with the API Server at the centre as the only component that talks to etcd. Worker nodes are separate boxes, each running kubelet, kube-proxy, and a container runtime (containerd). Arrows show kubelet polling the API Server, the Scheduler watching for unscheduled Pods, and kube-proxy programming Service routing rules.
Q3. What are the main components of kubernetes cluster architecture?
Kubernetes cluster architecture has five control plane components (kube-apiserver, etcd, kube-scheduler, kube-controller-manager, cloud-controller-manager) and three worker node components (kubelet, kube-proxy, container runtime), plus pluggable add-ons: CoreDNS for DNS, a CNI plugin for Pod networking (Cilium/Calico/AWS VPC CNI), a CSI driver for storage, an Ingress controller for L7 routing, and Metrics Server for HPA.
Q4. Show me a K8s architecture diagram for production.
A production K8s architecture diagram includes: 3-5 control plane replicas across availability zones (or hosted by EKS/AKS/GKE); 3- or 5-node etcd cluster with automated backups; multiple worker node pools (system, app, GPU, spot) with taints + tolerations; load balancer in front of API Server; CNI plugin like Cilium for networking; Cluster Autoscaler or Karpenter for autoscaling; CoreDNS, cert-manager, Prometheus + Grafana, ArgoCD as standard add-ons.
Q5. What does the kubernetes architecture explained look like end-to-end when I run kubectl apply?
End to end, the apply flow is: (1) kubectl sends the YAML to the API Server; (2) the API Server validates, authorises, and writes to etcd; (3) the Deployment Controller creates a ReplicaSet; (4) the ReplicaSet Controller creates Pods; (5) the Scheduler picks a node and writes the assignment back to the API Server; (6) kubelet on the assigned node pulls the image via containerd and starts the containers; (7) kubelet reports status to the API Server; (8) if a Service was created, kube-proxy on every node programs iptables/IPVS rules to route traffic.
Q6. What are the most common Kubernetes architecture mistakes in production?
Top mistakes: (1) running a single control plane replica - single point of failure; (2) putting system pods on the same nodes as application pods - noisy-neighbour failures; (3) no NetworkPolicy - a flat network gives attackers lateral movement; (4) no resource requests/limits - pods OOM-kill each other; (5) ignoring K8s minor version upgrades - falling behind = security debt; (6) using kube-proxy iptables mode at very high node counts - IPVS or eBPF (Cilium) scale better; (7) untested etcd backups - the first time you need them you discover they don't work.

Master Kubernetes architecture for senior platform-engineering roles: join the Hero Vired Postgraduate Program in DevOps Course - 8 months, 70-90% live instructor-led, 7+ industry projects on AWS/Azure/GCP, CI/CD, Kubernetes at scale, Terraform, ArgoCD, and GenAI in DevOps. Explore the Hero Vired DevOps Course →
