
Kubernetes Architecture Explained: Control Plane vs Worker Nodes in Detail

Suyash Raizada

Kubernetes architecture is built on a clear separation of responsibilities: the control plane decides what should happen, and worker nodes make it happen by running your applications. Understanding this split is essential for anyone designing, operating, or troubleshooting clusters at scale, because it explains everything from scheduling behavior to high availability and failure modes.

This guide breaks down Kubernetes architecture in practical terms, covering the core control plane components, worker node components, and the scheduling and reconciliation loops that keep the cluster aligned with its desired state.


What Is Kubernetes Architecture?

At a high level, Kubernetes is a distributed system that continuously reconciles desired state (what you declare in manifests) with actual state (what is currently running). This is accomplished through two major parts:

  • Control plane: The orchestration layer that stores cluster state, makes placement decisions, and runs controllers that enforce the desired state.

  • Worker nodes: The execution layer where Pods run, networking rules are applied, and runtime status is reported back to the control plane.

This separation provides scalability and operational clarity: central decision-making paired with distributed execution.

The Kubernetes Control Plane: What It Does and Why It Matters

The Kubernetes control plane is the cluster brain. It accepts requests, validates and persists the desired state, schedules workloads, and runs controllers that continuously work to close the gap between desired and actual state.

A key operational detail: if the control plane becomes unavailable, existing workloads on worker nodes can continue running, but new workloads cannot be scheduled. Changes such as scaling, updates, and remediation actions stall because the orchestrator is no longer making new decisions.

kube-apiserver: The Front Door and Communication Hub

kube-apiserver is the central API endpoint and the primary control plane entry point. All cluster interactions pass through it, including:

  • kubectl commands

  • CI/CD deployments

  • Controllers watching and updating objects

  • Worker node agents reporting status

It is also the only control plane component that initiates direct connections to etcd. Other components communicate with the API server rather than talking to etcd directly.

The API server includes an aggregation layer that can extend the Kubernetes API, enabling custom resources and controllers. This capability is foundational for platform engineering patterns such as building internal Kubernetes APIs for your organization.

etcd: The Cluster Source of Truth

etcd is a consistent, distributed key-value store that holds cluster state and configuration. It stores Kubernetes objects such as:

  • Pods, Deployments, and Services

  • ConfigMaps and Secrets metadata

  • Node registrations and health signals

  • Namespaces, RBAC policies, and more

Because etcd is the system of record, its reliability and performance directly affect the entire cluster. Regular etcd backups and strict access controls are critical components of production readiness.
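
Conceptually, etcd behaves like a versioned key-value store keyed by object path. The sketch below is a toy in-memory model, not etcd itself: the `/registry/...` key layout mirrors how Kubernetes stores objects, but the single revision counter is a heavy simplification of etcd's MVCC and watch machinery.

```python
class TinyStore:
    """Toy model of etcd: every write bumps a global revision, and
    readers can ask what changed since a given revision (a 'watch')."""

    def __init__(self):
        self.revision = 0
        self.data = {}  # key -> (value, mod_revision)

    def put(self, key, value):
        self.revision += 1
        self.data[key] = (value, self.revision)
        return self.revision

    def get(self, key):
        value, _ = self.data[key]
        return value

    def changed_since(self, rev):
        """What a watcher would be told: keys modified after revision `rev`."""
        return [k for k, (_, r) in self.data.items() if r > rev]


store = TinyStore()
store.put("/registry/pods/default/web-1", {"phase": "Pending"})
checkpoint = store.revision
store.put("/registry/pods/default/web-1", {"phase": "Running"})
```

This revision-and-watch idea is what lets controllers react to changes without polling the full object set.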

kube-scheduler: Placing Pods on the Right Nodes

kube-scheduler watches for newly created Pods that have no assigned node and selects the best worker node for each one. Scheduling considers more than available CPU and memory. It can also account for:

  • Resource requests and limits across the cluster

  • Hardware, software, and policy constraints

  • Node affinity and Pod affinity or anti-affinity rules

  • Data locality and topology preferences

  • Inter-workload interference and deadlines

The scheduler operates in two main phases:

  1. Filtering: removes nodes that cannot satisfy the Pod requirements.

  2. Scoring: ranks remaining nodes using multiple scheduling plugins and selects the highest-ranked candidate.

If multiple nodes receive the same score, one is selected at random from the top candidates.
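
The two phases can be illustrated with a simplified model. This is not the real scheduler, which uses pluggable filter and score plugins; the node data, the node-selector check, and the least-allocated-style CPU scoring below are all illustrative assumptions.

```python
import random

# Hypothetical node inventory: free resources plus labels.
nodes = {
    "node-a": {"cpu_free": 2.0, "mem_free_gib": 4, "labels": {"disk": "ssd"}},
    "node-b": {"cpu_free": 0.2, "mem_free_gib": 8, "labels": {"disk": "ssd"}},
    "node-c": {"cpu_free": 3.0, "mem_free_gib": 16, "labels": {"disk": "hdd"}},
}

pod = {"cpu_request": 0.5, "mem_request_gib": 1, "node_selector": {"disk": "ssd"}}


def filter_nodes(nodes, pod):
    """Phase 1: drop nodes that cannot satisfy requests or selectors."""
    feasible = {}
    for name, n in nodes.items():
        if n["cpu_free"] < pod["cpu_request"]:
            continue  # not enough CPU
        if n["mem_free_gib"] < pod["mem_request_gib"]:
            continue  # not enough memory
        if any(n["labels"].get(k) != v for k, v in pod["node_selector"].items()):
            continue  # node selector mismatch
        feasible[name] = n
    return feasible


def score_nodes(feasible, pod):
    """Phase 2: rank feasible nodes; here, prefer the most CPU left over."""
    return {name: n["cpu_free"] - pod["cpu_request"] for name, n in feasible.items()}


feasible = filter_nodes(nodes, pod)   # node-b fails CPU, node-c fails the selector
scores = score_nodes(feasible, pod)
best = max(scores.values())
# Random tie-break among top-scoring candidates, as described above.
chosen = random.choice([n for n, s in scores.items() if s == best])
```

Real scoring combines many plugins (spreading, affinity, image locality, and more), but the filter-then-score shape is the same.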

kube-controller-manager: Continuous Reconciliation and Self-Healing

kube-controller-manager runs multiple controllers compiled into a single binary to reduce operational complexity. Controllers continuously observe cluster state through the API server and take action when reality diverges from the desired state.

Common controllers include:

  • Node controller: reacts when nodes become unhealthy or unreachable.

  • Job controller: manages one-off and batch workloads.

  • Replication and deployment controllers: ensure the desired number of Pod replicas are running.

  • EndpointSlice controller: connects Services to the correct backing Pods.

  • Service account controller: creates default service accounts for new namespaces.

A practical example of Kubernetes self-healing: if a Deployment declares 5 replicas and one Pod crashes, the relevant controller ensures a replacement Pod is created so the replica count returns to 5.
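
The self-healing example above can be sketched as a single reconciliation pass. This is a simplified model, not the actual controller code; the `reconcile` function and the in-memory Pod name list are illustrative.

```python
import uuid


def reconcile(desired_replicas, running_pods):
    """One pass of a replica-style controller: compare desired vs actual
    state and return the Pod names to create or delete to close the gap."""
    to_create = []
    # Create replacements until the actual count matches the desired count.
    while len(running_pods) + len(to_create) < desired_replicas:
        to_create.append(f"web-{uuid.uuid4().hex[:5]}")
    # Scale down if there are more replicas than desired.
    to_delete = running_pods[desired_replicas:]
    return to_create, to_delete


# A Deployment declares 5 replicas; one Pod has crashed, leaving 4 running.
running = ["web-a", "web-b", "web-c", "web-d"]
create, delete = reconcile(5, running)  # one replacement, nothing to delete
```

Real controllers run this comparison continuously against state watched through the API server, which is why the cluster converges back to 5 replicas without operator intervention.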

cloud-controller-manager: Integrating with Cloud Providers

cloud-controller-manager enables Kubernetes to interact with underlying cloud services through provider APIs. Typical responsibilities include:

  • Node-related updates: labeling, annotations, hostname data, capacity information, and node health attributes.

  • Route configuration: ensuring Pods across nodes can communicate as expected.

  • Service controller: provisioning and managing cloud load balancers for Kubernetes Services where applicable.

Control Plane High Availability: Why Production Clusters Replicate Masters

A cluster can run with a single control plane node, but production environments commonly use multiple control plane nodes for fault tolerance. Running three or more control plane nodes allows the cluster to tolerate individual failures while maintaining API availability and active control loops.

High availability is not only about uptime. It protects your ability to deploy, scale, and remediate issues during partial outages - often the exact moments when a functioning control plane matters most.
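
The "three or more" guidance follows from etcd's quorum rule: a cluster of n members needs floor(n/2) + 1 available members to commit writes. A quick sketch of the resulting fault tolerance:

```python
def fault_tolerance(members: int) -> int:
    """Failures an etcd cluster of `members` nodes can survive while
    still holding quorum (floor(n/2) + 1 members must remain)."""
    quorum = members // 2 + 1
    return members - quorum


# Even member counts add cost without adding tolerance,
# which is why odd sizes (3, 5) are the usual recommendation.
table = {n: fault_tolerance(n) for n in (1, 2, 3, 4, 5)}
```

Two control plane nodes tolerate no failures at all, so three is the smallest size that actually buys fault tolerance.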

Worker Nodes in Kubernetes: Where Applications Actually Run

Worker nodes form the execution layer of Kubernetes. They run containerized workloads as Pods, apply networking rules, and report health and status back to the control plane. Unhealthy worker nodes can cause degraded application performance, scheduling delays, or outages depending on replica distribution and workload criticality.

kubelet: The Node Supervisor and Pod Lifecycle Manager

kubelet is the primary node agent. It ensures containers are running according to the Pod specifications received from the control plane. Responsibilities include:

  • Starting and stopping containers based on Pod specs

  • Restarting containers when they fail, according to the defined restart policy

  • Reporting node and Pod status back to the API server

kubelet is central to operational troubleshooting because it reflects what the node is actually doing relative to what the control plane expects.
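
Restart decisions follow the Pod's `restartPolicy` (`Always`, `OnFailure`, or `Never`). The decision logic can be sketched as below; this is a simplification of kubelet, which also applies exponential back-off between restarts.

```python
def should_restart(restart_policy: str, exit_code: int) -> bool:
    """Decide whether a container should be restarted after exiting.
    Mirrors the three Pod restartPolicy values; back-off is omitted."""
    if restart_policy == "Always":
        return True  # restart regardless of how the container exited
    if restart_policy == "OnFailure":
        return exit_code != 0  # restart only on non-zero exit
    return False  # "Never": leave the container terminated
```

For example, a batch Job typically uses `OnFailure`, so a container that exits 0 is considered complete rather than restarted.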

kube-proxy: Service Networking and Traffic Steering

kube-proxy manages network rules on each node so traffic reaches the correct Pods. It implements Kubernetes Service behavior by directing requests appropriately, supporting both internal cluster communication and exposure paths to external clients depending on the Service type and networking configuration.
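
The idea can be sketched as a Service-to-endpoints lookup. This is a deliberate simplification: real kube-proxy programs iptables, IPVS, or nftables rules in the kernel rather than proxying in user space, and the Service name and Pod addresses below are made up.

```python
import random

# Service name -> ready backend Pod addresses (derived from EndpointSlices).
endpoints = {
    "default/web": ["10.244.1.5:8080", "10.244.2.7:8080", "10.244.3.2:8080"],
}


def route(service: str) -> str:
    """Pick a backend for a request to the Service's virtual IP.
    kube-proxy's iptables mode similarly picks backends pseudo-randomly."""
    backends = endpoints[service]
    return random.choice(backends)


# Over many requests, traffic spreads across all ready backends.
picked = {route("default/web") for _ in range(200)}
```

When a Pod becomes unready, its address drops out of the EndpointSlice and traffic stops reaching it, without the client ever knowing.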

Container Runtime: Pulling Images and Running Containers

The container runtime pulls container images and executes them. Kubernetes supports multiple runtimes that implement the Container Runtime Interface (CRI), with containerd being the most widely used today. The runtime is responsible for actually creating and managing containers on the node.

The Control Loop and Node Agent Feedback: How Kubernetes Stays Consistent

Kubernetes depends on continuous feedback between the control plane and worker nodes. The node agent (primarily kubelet) receives instructions, enforces runtime behavior, and reports status back to the control plane. This closes the control loop:

  • Desired state is declared (for example, a Deployment with a specified replica count).

  • Control plane records and acts (controllers and scheduler coordinate the necessary changes).

  • Worker nodes execute (kubelet and the runtime start Pods; kube-proxy applies network rules).

  • Status is reported back to the API server.

Without reliable bidirectional communication, the control plane loses visibility and cannot safely enforce desired state across the cluster.
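
The four steps above can be condensed into a toy model of the loop closing. The dictionaries standing in for API server state and the `node_report`/`converged` functions are illustrative, not real Kubernetes APIs.

```python
# Desired state as recorded by the API server.
desired = {"web": {"replicas": 3}}
# Actual state as last reported by kubelets.
actual = {"web": {"ready": 1}}


def node_report(name: str, ready: int) -> None:
    """kubelet-style status report: update actual state via the API server."""
    actual[name]["ready"] = ready


def converged(name: str) -> bool:
    """Controllers compare desired vs reported state to decide whether to act."""
    return actual[name]["ready"] == desired[name]["replicas"]


before = converged("web")  # only 1 of 3 replicas ready: not converged
node_report("web", 3)      # nodes start the Pods and report back
after = converged("web")   # reports match the declaration: loop closed
```

If the status reports stop arriving, `actual` goes stale, which is exactly the visibility loss the paragraph above describes.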

Scheduling Workflow: From Defined to Assigned

Scheduling is the bridge between declaring a workload and running it. A simplified placement flow looks like this:

  1. A Pod is created and stored as desired state via the API server.

  2. The scheduler detects the unscheduled Pod.

  3. Filtering removes nodes that cannot run the Pod.

  4. Scoring ranks the remaining nodes using scheduling plugins.

  5. The Pod is bound to the chosen node.

  6. The kubelet on that node starts the Pod using the container runtime.

This flow explains common real-world scenarios, including why a Pod remains in Pending status when constraints make every available node ineligible during the filtering phase.
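
The Pending case can be reproduced with the same filtering idea: if every node fails a filter, the scheduler binds nothing and the Pod keeps its `Pending` phase. A toy model (in a real cluster, `kubectl describe pod` would also show a `FailedScheduling` event explaining which filters failed):

```python
# Hypothetical cluster: no node has 4 CPUs free.
nodes = {
    "node-a": {"cpu_free": 0.5},
    "node-b": {"cpu_free": 1.0},
}


def try_schedule(pod, nodes):
    """Return the bound node, or None if filtering leaves no candidates
    (in which case the Pod simply stays Pending)."""
    feasible = [n for n, info in nodes.items()
                if info["cpu_free"] >= pod["cpu_request"]]
    return feasible[0] if feasible else None


bound = try_schedule({"cpu_request": 4.0}, nodes)  # no feasible node
phase = "Pending" if bound is None else "Running"
```

The scheduler retries as cluster state changes, so freeing capacity or relaxing the Pod's constraints lets the same Pod bind later without being recreated.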

Real-World Operational Impact: What Failures Look Like

The control plane versus worker node split has clear consequences during incidents:

  • Control plane outage: existing Pods may continue running, but deploying new versions, scheduling new Pods, scaling, and reliable self-healing all become unavailable.

  • Worker node failure: workloads on that node are disrupted; controllers attempt to recreate affected Pods on other nodes if capacity and constraints allow.

Identifying which side is failing helps teams triage faster. API errors and stuck deployments typically point to control plane issues, while localized application outages and node NotReady events indicate worker node problems.


Conclusion

Kubernetes architecture centers on a clear division of responsibilities: the control plane stores desired state, schedules workloads, and runs controllers that reconcile changes, while worker nodes run Pods, enforce networking, and continuously report health and status. This design enables scalability, self-healing, and consistent operations - but it also shapes failure modes. Worker nodes can keep serving traffic during control plane disruptions, yet the cluster cannot reliably evolve without an available control plane.

Understanding control plane components such as kube-apiserver, etcd, the scheduler, and controller managers - alongside worker node components like kubelet, kube-proxy, and the container runtime - enables faster troubleshooting, better availability design, and more confident Kubernetes operations overall.
