Kubernetes has transformed the application deployment and management process and has become an integral part of today’s software development. In this era of cloud-based applications, a good understanding of its efficient container orchestration makes it a valuable skill for IT practitioners.
Preparing for Kubernetes interviews can be daunting due to the wide range of topics and complexities involved. Understanding key concepts and being able to articulate them is essential for success.
In this particular blog, we present a collection of Kubernetes interview questions that are frequently asked in interviews at different levels.
What is Kubernetes?
Kubernetes, commonly known as K8s, is an open-source platform for managing containerized applications. It was originally developed at Google, drawing on the company's experience running production workloads at massive scale, and it provides a framework for managing cluster nodes and the entire application lifecycle.
A Kubernetes cluster fundamentally groups containers into units called Pods, each of which may contain one or more closely related containers. This organisation simplifies application management and scaling and makes applications more reliable. Kubernetes automatically carries out tasks such as load balancing, service discovery, and routine maintenance work like automated rollouts.
All in all, Kubernetes addresses the challenges faced by any developer or IT specialist responsible for building scalable and resilient systems by abstracting away the complexity of large-scale container orchestration.
Basic Kubernetes Interview Questions
What is K8s?
K8s is a shorthand for Kubernetes, where the “8” represents the eight letters between “K” and “s.” This abbreviation is commonly used in the tech community for convenience.
- Kubernetes is an open-source platform.
- It automates the deployment, scaling, and management of containerized applications.
- Widely adopted for its efficiency and scalability in handling complex applications.
Explain Kubernetes Architecture
Kubernetes architecture is divided into two main components: Control Plane and Nodes.
Control Plane:
- API Server: Acts as the front-end for the Kubernetes control plane.
- etcd: Stores all cluster data and configurations.
- Controller Manager: Manages controllers that handle routine tasks.
- Scheduler: Assigns workloads to nodes based on resource availability.
Nodes:
- Kubelet: Ensures containers are running in Pods.
- kube-proxy: Manages network rules for pod communication.
- Container Runtime: Runs the containers (e.g., Docker).
What are Clusters in Kubernetes?
A Kubernetes Cluster consists of a set of nodes that run containerized applications.
- Master Node: Manages the cluster and orchestrates tasks.
- Worker Nodes: Execute application workloads.
- Cluster Components: Include networking, storage, and other services that facilitate application deployment.
What is a Pod in Kubernetes?
A Pod is the smallest deployable unit in Kubernetes.
- Contains One or More Containers: Typically Docker containers.
- Shared Resources: Includes storage, network IP, and specifications for how to run the containers.
- Lifecycle Management: Pods are ephemeral and can be created, destroyed, or replicated as needed.
What is Kubelet?
Kubelet is an essential agent running on each node in a Kubernetes cluster.
- Responsibilities:
- Ensures containers are running in Pods.
- Communicates with the Kubernetes API server.
- Monitors resource usage and reports back to the control plane.
- Configuration Management: Applies the desired state defined in Pod specifications.
What is kube-proxy?
kube-proxy manages network communication within the Kubernetes cluster.
- Handles Traffic Routing: Directs traffic to the appropriate Pods based on service definitions.
- Implements Networking Rules: Uses iptables or IPVS to manage network rules.
- Ensures Service Discovery: Facilitates communication between different services and Pods.
What is the Master Node in Kubernetes?
The Master Node is the control center of a Kubernetes cluster.
- Components:
- API Server: Interfaces with users and other components.
- Scheduler: Allocates resources to Pods.
- Controller Manager: Oversees various controllers for cluster management.
- etcd: Stores all cluster configuration and state data.
What is a Kubernetes Service?
A Kubernetes Service is an abstraction that defines a logical set of Pods and a policy to access them.
- Types of Services:
- ClusterIP: Exposes the service on an internal IP within the cluster.
- NodePort: Exposes the service on each node’s IP at a static port.
- LoadBalancer: Integrates with cloud providers to expose the service externally.
- Purpose:
- Facilitates communication between different parts of an application.
- Ensures reliable access to Pods regardless of their lifecycle.
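As an illustrative sketch, a minimal ClusterIP Service manifest might look like the following (the `my-app` name and port numbers are hypothetical placeholders):

```yaml
# Hypothetical ClusterIP Service routing port 80 to Pods labelled app=my-app
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: ClusterIP          # the default; change to NodePort or LoadBalancer as needed
  selector:
    app: my-app            # matches Pods carrying this label
  ports:
    - port: 80             # port exposed by the Service
      targetPort: 8080     # port the container actually listens on
```

Changing only the `type` field switches the same Service between ClusterIP, NodePort, and LoadBalancer behaviour.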
What is a Kubernetes Deployment?
A Deployment manages the deployment and scaling of Pods in Kubernetes.
- Key Features:
- Declarative Updates: Define the desired state, and Kubernetes handles the changes.
- Scaling: Adjust the number of Pod replicas as needed.
- Rolling Updates: Update applications without downtime.
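A minimal Deployment manifest illustrating these features might look like this (the `web` name and `nginx` image are example choices, not prescriptions):

```yaml
# Hypothetical Deployment running three replicas of an nginx Pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3              # desired number of Pod replicas
  selector:
    matchLabels:
      app: web
  template:                # Pod template the Deployment stamps out
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Editing `replicas` or the container `image` and re-applying the file triggers scaling or a rolling update, respectively.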
What is a Kubernetes ReplicaSet?
A ReplicaSet ensures a specified number of Pod replicas are running at any given time.
- Primary Function:
- Maintains the desired number of identical Pods.
- Relationship with Deployments:
- Often managed by Deployments to handle updates and scaling.
- Use Cases:
- Ensures high availability of applications.
- Automatically replaces failed or terminated Pods.
What is a Kubernetes DaemonSet?
A DaemonSet ensures that a copy of a specific Pod runs on all (or selected) nodes in the cluster.
- Common Uses:
- Deploying system-level services like log collectors or monitoring agents.
- Key Characteristics:
- Automatically adds Pods to new nodes.
- Ensures uniform deployment across the cluster.
- Benefits:
- Simplifies the deployment of node-specific services.
- Maintains consistency and reliability across all nodes.
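For example, a DaemonSet for a log collector could be sketched as follows (the Fluentd image tag is a hypothetical example):

```yaml
# Hypothetical DaemonSet placing one log-collector Pod on every node
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: fluentd
          image: fluentd:v1.16   # one instance per node; no replica count needed
```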
How are Kubernetes and Docker Related?
Kubernetes and Docker are complementary technologies used for containerized application deployment.
- Docker:
- Containerization Platform: Builds and runs containers.
- Focus: Packaging applications with their dependencies.
- Kubernetes:
- Orchestration System: Manages multiple containers across clusters.
- Focus: Scaling, deployment, and management of containerized applications.
- Integration:
- Historically, Kubernetes used Docker as its default container runtime; modern clusters typically run containerd or CRI-O via the Container Runtime Interface (CRI), and images built with Docker still run unchanged.
- Together, they provide a complete solution for deploying and managing scalable applications.
What is a Kubernetes Job?
A Kubernetes Job manages the execution of one or more Pods to completion.
- Purpose:
- Runs batch processes or finite tasks.
- Key Features:
- Ensures a specified number of Pods successfully terminate.
- Handles retries for failed Pods.
- Usage:
- Data processing tasks.
- Running scripts or migrations.
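A minimal Job manifest might look like this (the image name and script are hypothetical):

```yaml
# Hypothetical Job that runs a one-off migration script to completion
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
spec:
  backoffLimit: 3            # retry failed Pods up to three times
  template:
    spec:
      restartPolicy: Never   # Jobs require Never or OnFailure
      containers:
        - name: migrate
          image: myregistry/migrate:latest
          command: ["./run-migrations.sh"]
```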
What is Kubernetes Scheduling Policy?
Kubernetes Scheduling Policy determines how Pods are assigned to Nodes.
- Components:
- Schedulers: Decide which Node a Pod should run on.
- Policies: Define rules and constraints for scheduling.
- Factors Considered:
- Resource availability.
- Node affinity/anti-affinity.
- Taints and tolerations.
- Customisation:
- Administrators can create custom scheduling policies to meet specific needs.
What is Kubernetes Affinity?
Kubernetes Affinity controls how Pods are placed relative to other Pods and Nodes.
- Types:
- Node Affinity: Preferences for certain Nodes based on labels.
- Pod Affinity: Encourages Pods to be placed near other specific Pods.
- Pod Anti-Affinity: Prevents Pods from being placed near certain Pods.
- Benefits:
- Enhances performance by co-locating related Pods.
- Improves availability by spreading Pods across Nodes.
- Configuration:
- Defined in Pod specifications using affinity rules.
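As a sketch, a Pod specification could express a soft node-affinity preference like this (the `disktype=ssd` node label is a hypothetical example):

```yaml
# Hypothetical Pod spec fragment: prefer Nodes labelled disktype=ssd
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1                  # soft preference, not a hard requirement
        preference:
          matchExpressions:
            - key: disktype
              operator: In
              values: ["ssd"]
```

Swapping `preferred...` for `requiredDuringSchedulingIgnoredDuringExecution` turns the preference into a hard scheduling constraint.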
What is Kubernetes Vertical Pod Autoscaling (VPA)?
Kubernetes Vertical Pod Autoscaling (VPA) automatically adjusts the resource requests and limits for Pods.
- Functionality:
- Monitors resource usage of Pods.
- Recommends or applies changes to CPU and memory allocations.
- Benefits:
- Ensures Pods have adequate resources.
- Optimises resource utilisation.
- Use Cases:
- Applications with variable resource needs.
- Enhancing performance without manual intervention.
What is Kubernetes Monitoring?
Kubernetes Monitoring involves tracking the performance and health of the cluster and its applications.
- Tools Used:
- Prometheus: Collects metrics.
- Grafana: Visualises data.
- Key Metrics:
- CPU and memory usage.
- Pod status and performance.
- Network traffic.
- Importance:
- Identifies issues proactively.
- Ensures the reliability and efficiency of applications.
What is Kubernetes Logging?
Kubernetes Logging refers to the collection, storage, and analysis of logs generated by applications and the cluster.
- Components:
- Log Collectors: Tools like Fluentd or Logstash gather logs from Pods.
- Storage Solutions: Centralised systems like Elasticsearch store logs.
- Visualisation: Tools like Kibana display log data for analysis.
- Benefits:
- Aids in troubleshooting and debugging.
- Provides insights into application behavior.
- Best Practices:
- Implement centralised logging.
- Ensure log retention policies are in place.
What are Kubernetes Secrets?
Kubernetes Secrets manage sensitive information within the cluster.
- Purpose:
- Store confidential data like passwords, tokens, and keys.
- Features:
- Encoded in base64.
- Can be mounted as files or environment variables in Pods.
- Security:
- Access is restricted to authorised Pods and users.
- Can be integrated with external secret management systems.
- Usage:
- Protecting application credentials.
- Managing API keys securely.
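A minimal Secret manifest might look like this (the name and value are hypothetical; note that base64 is an encoding, not encryption):

```yaml
# Hypothetical Secret holding a database password
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: cGFzc3dvcmQ=   # "password" base64-encoded, not encrypted
```

The same object can be created imperatively with `kubectl create secret generic db-credentials --from-literal=password=...`, which handles the encoding automatically.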
What is the Role of Kube-apiserver?
The Kube-apiserver is the central management entity in Kubernetes.
- Functions:
- Serves the Kubernetes API.
- Processes RESTful requests from users and components.
- Responsibilities:
- Validates and configures data for API objects.
- Acts as the communication hub for the cluster.
- Interaction:
- Interfaces with etcd to store cluster state.
- Communicates with other control plane components like the scheduler and controller manager.
What is the Job of the kube-scheduler?
The kube-scheduler assigns Pods to Nodes in the Kubernetes cluster.
- Key Responsibilities:
- Selects the most suitable Node for a Pod based on resource requirements and policies.
- Considers factors like CPU, memory, and affinity rules.
- Decision-Making:
- Evaluates Node availability and workload.
- Ensures balanced distribution of Pods across the cluster.
- Integration:
- Works closely with kube-apiserver to receive scheduling requests.
What is Heapster in Kubernetes?
Heapster was a monitoring and performance analysis tool for Kubernetes clusters.
- Functions:
- Collected metrics from the cluster.
- Provided data to visualisation tools like Grafana.
- Legacy Role:
- Served as the main source for Kubernetes cluster metrics.
- Current Alternatives:
- Heapster was deprecated and removed; the Metrics Server now supplies core resource metrics, while Prometheus is the standard choice for full cluster monitoring.
What do you know about Minikube?
Minikube is a tool that allows you to run Kubernetes locally.
- Purpose:
- Provides a local Kubernetes cluster for development and testing.
- Features:
- Simple setup on various operating systems.
- Supports most Kubernetes features.
- Benefits:
- Enables developers to experiment without a cloud environment.
- Facilitates learning and debugging Kubernetes applications.
- Usage:
- Ideal for single-node clusters during development.
What is NodePort?
NodePort is a type of Kubernetes Service that exposes Pods on a static port on each Node’s IP.
- Functionality:
- Allocates a port from a range (default 30000-32767).
- Forwards external traffic to the service’s backend Pods.
- Usage:
- Enables external access to services without a LoadBalancer.
- Limitations:
- Not suitable for high-traffic production environments.
- Requires manual handling of port assignments for scalability.
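A NodePort Service might be sketched like this (the `my-app` selector and port 30080 are hypothetical choices):

```yaml
# Hypothetical NodePort Service exposing the app on port 30080 of every node
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080    # must fall inside the configured range (default 30000-32767)
```

If `nodePort` is omitted, Kubernetes picks a free port from the range automatically.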
What is the Kubernetes Dashboard?
The Kubernetes Dashboard is a web-based user interface for managing Kubernetes clusters.
- Features:
- Visualises cluster resources and their status.
- Allows users to deploy and manage applications.
- Provides access to logs and resource metrics.
- Benefits:
- Simplifies cluster management through a graphical interface.
- Enhances accessibility for users unfamiliar with command-line tools.
- Security:
- Access is controlled via authentication and role-based access control (RBAC).
What is Kubernetes DNS?
Kubernetes DNS provides service discovery within the cluster by assigning DNS names to Services.
- Functionality:
- Automatically creates DNS records for Kubernetes Services.
- Allows Pods to communicate using service names instead of IP addresses.
- Benefits:
- Simplifies networking by abstracting Pod IPs.
- Facilitates load balancing and service discovery.
- Components:
- CoreDNS or kube-dns handles DNS queries within the cluster.
How Does Kubernetes Handle Container Scaling?
Kubernetes handles container scaling through both manual and automatic methods.
- Manual Scaling:
- kubectl Command: Users can manually adjust the number of Pod replicas.
kubectl scale deployment <deployment-name> --replicas=<number>
- Automatic Scaling:
- Horizontal Pod Autoscaler (HPA):
- Function: Automatically adjusts the number of Pod replicas based on CPU utilisation or other select metrics.
- Configuration: Defined in a YAML file specifying target metrics and scaling thresholds.
- Vertical Pod Autoscaler (VPA):
- Function: Adjusts resource requests and limits for individual Pods based on usage.
- Use Cases: Ideal for workloads with varying resource requirements.
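An HPA definition might be sketched as follows (the target Deployment name and thresholds are hypothetical):

```yaml
# Hypothetical HPA scaling a Deployment between 2 and 10 replicas at 70% average CPU
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deployment   # the workload being scaled
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```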
What is the Difference Between Deploying Applications on Hosts and Containers?
| Aspect | Deploying on Hosts | Deploying on Containers |
|---|---|---|
| Isolation | Limited isolation; applications share the OS. | High isolation; containers have their own environments. |
| Resource Efficiency | Higher overhead; each application requires a full OS. | Lightweight; containers share the host OS, reducing overhead. |
| Portability | Less portable; dependent on host configurations. | Highly portable; consistent environments across systems. |
| Scalability | Manual scaling; more complex to manage. | Easy scaling; orchestrated by systems like Kubernetes. |
| Deployment Speed | Slower; involves setting up the entire OS. | Faster; containers start quickly with minimal setup. |
| Maintenance | More maintenance; OS updates affect all applications. | Simplified; containers can be updated independently. |
Why Do We Need Container Orchestration?
Container orchestration is essential for managing complex containerized applications efficiently.
- Automated Deployment: Simplifies the process of deploying containers across multiple environments.
- Scaling: Automatically adjusts the number of running containers based on demand.
- Load Balancing: Distributes traffic evenly across containers to ensure optimal performance.
- Resource Management: Efficiently allocates resources like CPU and memory to containers based on requirements.
- Health Monitoring: Continuously monitors container health and replaces failed containers automatically.
- Networking: Manages inter-container communication and service discovery seamlessly.
Which Node in Kubernetes Keeps Track of Resource Utilisation?
The Node Controller within the Kubernetes Control Plane keeps track of resource utilisation.
- Function:
- Monitors the status of nodes and their resource usage.
- Ensures that nodes are healthy and have sufficient resources for scheduling Pods.
- Components Involved:
- kubelet: Runs on each worker node and reports resource usage to the control plane.
- Metrics Server: Collects and aggregates resource usage data for Pods and nodes.
Explain the Difference Between a Pod and a Container in Kubernetes
| Aspect | Container | Pod |
|---|---|---|
| Definition | Lightweight runtime environment encapsulating an application and its dependencies. | Smallest deployable unit in Kubernetes, consisting of one or more containers. |
| Isolation | Runs a single application process. | Groups related containers that share resources. |
| Lifecycle | Managed individually. | Managed as a single entity with a shared lifecycle. |
| Resources | Allocates CPU and memory per container. | Shares network and storage among its containers. |
Describe the Internals of the Kubernetes Control Plane.
The Kubernetes Control Plane is the brain of the cluster, managing all operations and ensuring the desired state is maintained.
- Core Components:
- kube-apiserver:
- Role: Serves as the main API endpoint for all cluster interactions.
- Function: Processes RESTful requests and updates cluster state in etcd.
- etcd:
- Role: A distributed key-value store.
- Function: Stores all cluster data, configurations, and state information.
- kube-scheduler:
- Role: Assigns Pods to Nodes.
- Function: Evaluates resource availability and policies to make scheduling decisions.
- Controller Manager:
- Role: Runs controller processes.
- Function: Manages various aspects like replication, endpoints, and node operations.
- Communication Flow:
- API Requests: Users or automated systems send requests to the kube-apiserver.
- State Management: kube-apiserver updates etcd with desired state configurations.
- Scheduling and Deployment: kube-scheduler and controllers work to achieve and maintain the desired state by managing Pods and Nodes.
What Are Taints and Tolerations in Kubernetes?
Taints and Tolerations work together to control how Pods are scheduled onto Nodes, ensuring that Pods are placed only on suitable Nodes.
- Taints:
- Definition: Applied to Nodes to repel certain Pods from being scheduled on them.
- Format: <key>=<value>:<effect>
- Effect Types:
- NoSchedule: Prevents Pods without matching tolerations from being scheduled.
- PreferNoSchedule: Tries to avoid scheduling Pods without tolerations.
- NoExecute: Evicts existing Pods that do not tolerate the taint.
- Tolerations:
- Definition: Applied to Pods to allow them to be scheduled on Nodes with matching taints.
- Specification: Defined in the Pod’s YAML configuration.
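As an illustration, a taint could be applied with `kubectl taint nodes node1 dedicated=gpu:NoSchedule` (the node name and `dedicated=gpu` label are hypothetical), and a Pod would opt in with a matching toleration:

```yaml
# Hypothetical Pod spec fragment tolerating the dedicated=gpu:NoSchedule taint
tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"   # must match the taint's effect
```

Only Pods carrying this toleration are eligible to schedule onto the tainted node; all others are repelled.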
Compare Kubernetes Operators with Traditional Deployment Methods.
| Aspect | Kubernetes Operators | Traditional Deployment Methods |
|---|---|---|
| Automation | Automate complex application lifecycle tasks. | Manual intervention for scaling and updates. |
| Customisation | Extend Kubernetes with custom logic via CRDs. | Limited to predefined deployment scripts. |
| Management | Handle operations like backups, scaling, and healing. | Require separate tools and scripts. |
| Integration | Seamlessly integrate with Kubernetes APIs. | Often external to the Kubernetes ecosystem. |
What is Kubernetes Ingress?
Kubernetes Ingress is an API object that manages external access to services within a Kubernetes cluster, typically via HTTP and HTTPS.
- Functionality:
- Routing: Directs incoming traffic to the appropriate services based on hostnames or URL paths.
- SSL/TLS Termination: Handles HTTPS connections by managing SSL certificates.
- Load Balancing: Distributes traffic evenly across multiple service instances.
- Components:
- Ingress Resource: Defines the routing rules and configurations.
- Ingress Controller: Implements the rules defined in the Ingress resource, managing the actual traffic flow.
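A simple Ingress resource might route two URL paths to different backend Services, as in this sketch (host and Service names are hypothetical, and an Ingress Controller must already be installed):

```yaml
# Hypothetical Ingress routing /api and / to separate Services
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service    # hypothetical backend for API traffic
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service    # hypothetical backend for everything else
                port:
                  number: 80
```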
What is a Kubernetes CronJob?
A Kubernetes CronJob schedules Jobs to run periodically at specified times or intervals, similar to cron in Unix systems.
Key Features:
- Scheduled Execution:
- Runs Jobs based on time schedules defined using cron syntax.
- Example: Run a backup every midnight.
- Configuration:
- Defined in a YAML file specifying the schedule and Job details.
Use Cases:
- Automated Backups: Regularly back up databases or file systems.
- Report Generation: Generate and send reports at scheduled times.
- Maintenance Tasks: Perform routine maintenance like log cleanup.
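The midnight-backup example above could be written as follows (the image and script names are hypothetical):

```yaml
# Hypothetical CronJob running a backup every day at midnight
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 0 * * *"      # cron syntax: minute hour day-of-month month day-of-week
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: myregistry/backup:latest
              command: ["./backup.sh"]
```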
What are Init Containers, and How Do They Differ from Regular Containers in a Pod?
| Aspect | Init Containers | Regular Containers |
|---|---|---|
| Purpose | Run initialisation tasks before the main containers start. | Execute the main application processes. |
| Execution Order | Run sequentially and must complete successfully before regular containers start. | Start after all Init Containers have finished. |
| Typical Tasks | Setup work such as configuration or data fetching. | Running the primary application. |
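A common pattern is an init container that blocks until a dependency is reachable, as in this sketch (the `db-service` name is hypothetical):

```yaml
# Hypothetical Pod whose init container waits for a database Service to resolve
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
    - name: wait-for-db
      image: busybox:1.36
      command: ["sh", "-c", "until nslookup db-service; do sleep 2; done"]
  containers:
    - name: app
      image: myregistry/app:latest   # starts only after wait-for-db succeeds
```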
How Does Kubernetes Handle Rolling Back Deployments?
Kubernetes provides mechanisms to roll back deployments to a previous stable state in case of failures or issues.
Rollback to Last Revision:
kubectl rollout undo deployment/<deployment-name>
Rollback to Specific Revision:
kubectl rollout undo deployment/<deployment-name> --to-revision=<revision-number>
Automatic Rollbacks:
- Health Checks: Kubernetes monitors the health of new Pods during deployment.
- Failure Detection: If new Pods fail health checks, Kubernetes automatically reverts to the previous stable version.
Configuration Strategies:
- Deployment Strategy: Typically uses RollingUpdate to update Pods incrementally.
- Max Surge & Max Unavailable: Control the number of Pods updated simultaneously to ensure availability.
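These strategy settings sit inside the Deployment spec; a conservative sketch might look like this (the specific values are illustrative, not recommendations):

```yaml
# Hypothetical Deployment strategy fragment controlling rollout pace
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1           # at most one extra Pod above the desired count during updates
    maxUnavailable: 0     # never drop below the desired replica count
```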
How Do Kubernetes Services of type ClusterIP, NodePort, and LoadBalancer Differ?
| Aspect | ClusterIP | NodePort | LoadBalancer |
|---|---|---|---|
| Accessibility | Internal to the cluster only. | Exposed on each Node's IP at a static port. | Externally accessible via a cloud provider's load balancer. |
| Use Cases | Internal communication between services. | Basic external access without cloud load balancers. | External access with managed load balancing. |
| Configuration | Default service type. | Uses a port from the NodePort range. | Integrates with cloud provider APIs. |
What is Kubernetes Grafana?
Grafana is an open-source platform integrated with Kubernetes for monitoring and visualisation of metrics and logs.
Key Features:
- Dashboards:
- Create customisable dashboards to visualise Kubernetes metrics.
- Pre-built templates available for common Kubernetes components.
- Data Sources:
- Integrates with Prometheus, InfluxDB, Elasticsearch, and more.
- Collects data from Kubernetes clusters for analysis.
- Alerting:
- Set up alerts based on specific metrics thresholds.
- Notifies teams via email, Slack, or other channels when issues arise.
What are the Best Practices for Kubernetes Cluster Security?
Ensuring Kubernetes cluster security involves implementing multiple strategies to protect against threats and vulnerabilities.
Security Best Practices:
1. Use RBAC Effectively:
- Grant the least privilege necessary.
- Regularly audit and update RoleBindings and ClusterRoleBindings.
2. Enable Network Policies:
- Define how Pods communicate with each other and with external services.
- Restrict traffic to only what’s necessary.
3. Secure etcd:
- Encrypt data at rest.
- Restrict access to authorised users only.
4. Apply Pod Security Standards (Pod Security Policies are deprecated and were removed in Kubernetes 1.25):
- Define security contexts for Pods.
- Enforce restrictions on container privileges and capabilities.
5. Use TLS for Communication:
- Encrypt all data in transit within the cluster.
- Ensure API server and kubelet communications are secured.
Explain the Differences Between a DaemonSet and a ReplicaSet
DaemonSet and ReplicaSet are Kubernetes controllers that manage Pod deployments, but they serve different purposes.
| Aspect | DaemonSet | ReplicaSet |
|---|---|---|
| Primary Function | Ensures a copy of a Pod runs on all (or selected) Nodes. | Maintains a specified number of identical Pod replicas. |
| Use Cases | Deploying system-level services like log collectors; running monitoring agents on every node. | Ensuring high availability of applications; scaling applications by increasing replicas. |
| Deployment Scope | Node-specific; targets individual nodes based on labels. | Namespace-specific; manages Pods within a namespace. |
| Scheduling | Automatically schedules Pods on new nodes added to the cluster. | Schedules Pods based on ReplicaSet specifications and resource availability. |
| Management | No inherent scaling; Pods are tied to node presence. | Supports scaling up or down by adjusting the replica count. |
| Relationship with Deployments | Not typically managed by Deployments. | Often managed by Deployments for handling updates and scaling. |
How Does Kubernetes Handle Node Failures and Resiliency?
Kubernetes ensures resiliency and handles node failures through automated detection and recovery mechanisms.
Detection of Node Failures:
- Heartbeat Mechanism:
- kubelet on each node sends regular heartbeats to the API Server.
- Missing heartbeats indicate potential node failure.
- Node Controller:
- Monitors node status and detects unresponsive nodes.
- Marks nodes as NotReady if heartbeats are missed.
Recovery Mechanisms:
1. Pod Eviction:
- When a node is marked NotReady, Kubernetes evicts the Pods running on it.
- Evicted Pods are scheduled to run on other healthy nodes.
2. ReplicaSet Adjustments:
- ReplicaSets ensure the desired number of Pod replicas are running.
- If Pods are lost due to node failure, ReplicaSets create new Pods on available nodes.
3. Self-Healing:
- Automatically replaces failed or terminated Pods.
- Maintains application availability without manual intervention.
Compare Kubernetes ReplicaSets and Deployments.
| Aspect | ReplicaSet | Deployment |
|---|---|---|
| Primary Function | Maintains a set number of Pod replicas. | Manages ReplicaSets and handles updates. |
| Use Cases | Ensuring Pod availability. | Rolling updates, rollbacks, scaling Pods. |
| Management | Typically managed by Deployments. | Provides higher-level management over ReplicaSets. |
| Versioning | No inherent version control. | Supports versioning and history of ReplicaSets. |
What is Kubernetes CRD Controller?
A Kubernetes Custom Resource Definition (CRD) Controller extends Kubernetes functionality by managing custom resources.
Custom Resource Definitions (CRDs):
- Allows users to define new resource types beyond the built-in Kubernetes objects.
- Example: Defining a Database resource.
- Purpose:
- Enables the creation of domain-specific APIs tailored to application needs.
- Facilitates the management of complex applications through Kubernetes-native interfaces.
CRD Controllers:
- Function:
- Watches for changes to custom resources.
- Implements the logic to reconcile the desired state with the actual state.
- Components:
- Custom Controller: A program that handles the business logic for managing custom resources.
- Informer: Listens for events related to custom resources and triggers controller actions.
- Workflow:
- Define CRD: Create a YAML file specifying the new resource type.
- Deploy CRD: Apply the CRD to the Kubernetes cluster.
- Implement Controller: Develop a controller that responds to CRUD operations on the custom resource.
- Operate: The controller manages the lifecycle of custom resources, ensuring they behave as intended.
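The "Define CRD" step above might look like this sketch, which registers a hypothetical `Database` resource under an example `example.com` API group:

```yaml
# Hypothetical CRD defining a namespaced "Database" resource type
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com   # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                engine:
                  type: string    # e.g. "postgres"
                replicas:
                  type: integer
```

Once applied, `kubectl get databases` works like any built-in resource, and a custom controller can reconcile these objects.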
What is a Job in Kubernetes, and How Does it Differ from a CronJob?
| Aspect | Job | CronJob |
|---|---|---|
| Function | Executes Pods to completion for finite tasks. | Schedules Jobs to run periodically based on cron expressions. |
| Use Cases | Data processing, batch tasks. | Scheduled backups, report generation. |
| Execution | Runs once or a set number of times. | Runs repeatedly at specified intervals. |
What are Kubernetes Network Plugins?
Kubernetes Network Plugins extend and enhance the networking capabilities of a Kubernetes cluster, enabling various network functionalities.
Functionality:
- Container Networking: Manages network communication between Pods across nodes.
- Network Policies: Implements rules for traffic control and security.
- Service Networking: Facilitates communication between Services and Pods.
Common Network Plugins:
1. Calico:
- Features: Network policies, encryption, and high performance.
- Use Cases: Security-focused environments requiring fine-grained traffic control.
2. Flannel:
- Features: Simple overlay networking using VXLAN or other backend protocols.
- Use Cases: Basic networking needs with minimal configuration.
3. Weave Net:
- Features: Automatic mesh networking, network policies, and encryption.
- Use Cases: Environments requiring ease of setup and robust networking features.
Compare StatefulSets and Deployments. When Would You Use One Over the Other?
| Aspect | StatefulSet | Deployment |
|---|---|---|
| Use Case | Stateful applications (databases, etc.). | Stateless applications (web servers). |
| Identity | Maintains unique Pod identities and storage. | Pods are interchangeable. |
| Ordering | Ordered deployment and scaling. | No guaranteed order. |
| Storage | Persistent storage with stable volumes. | Typically uses ephemeral storage. |
Advanced Kubernetes Interview Questions
What Does the Node Status Contain?
The Node Status in Kubernetes provides comprehensive details about a node’s health and resource availability.
- Conditions:
- Ready: Indicates if the node is operational.
- MemoryPressure: Shows memory usage issues.
- DiskPressure: Reflects disk usage problems.
- PIDPressure: Indicates process ID exhaustion.
- NetworkUnavailable: States network setup status.
- Capacity:
- CPU and Memory: Total resources available.
- Ephemeral Storage: Temporary storage capacity.
- Allocatable:
- Resources Available for Pods: CPU, memory, and storage after reserving system resources.
- Addresses:
- Internal and External IPs: Network addresses of the node.
- Hostname: Node’s name within the cluster.
Compare ConfigMaps and Secrets in Kubernetes.
| Aspect | ConfigMaps | Secrets |
|---|---|---|
| Data Type | Non-sensitive configuration data. | Sensitive information like passwords and keys. |
| Storage | Stored as plain text. | Stored base64-encoded and can be encrypted at rest. |
| Usage | Injected as environment variables or files. | Injected similarly but with enhanced security. |
| Access Control | Less restrictive. | Restricted access with tighter security measures. |
How Can You Achieve Communication Between Pods in Different Nodes?
Communication between Pods on different nodes is facilitated through Kubernetes networking.
1. Cluster Networking:
- Flat Network: Ensures all Pods can communicate without NAT.
- CNI Plugins: Implement network connectivity (e.g., Calico, Flannel).
2. Services:
- Kubernetes Services: Provide stable endpoints for Pods.
- DNS Resolution: Allows Pods to discover services by name.
3. Network Policies:
- Traffic Control: Define allowed communication paths.
- Security: Restrict or permit traffic as needed.
4. Overlay Networks:
- VXLAN or IP-in-IP: Encapsulate traffic between nodes.
- Scalability: Supports large clusters efficiently.
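The Network Policies point above can be sketched concretely; this hypothetical policy lets only `frontend` Pods reach `backend` Pods (both labels are placeholders, and a CNI plugin that enforces policies, such as Calico, is assumed):

```yaml
# Hypothetical NetworkPolicy allowing only frontend Pods to reach backend Pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend        # the Pods this policy protects
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # the only Pods permitted to connect
```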
Explain the Concept of a Custom Operator in Kubernetes
A Custom Operator extends Kubernetes by managing custom resources.
- Definition:
- Operator: A controller that automates the management of complex applications.
- Custom Resources: User-defined objects representing application-specific entities.
- Components:
- CRD (Custom Resource Definition): Defines new resource types.
- Controller Logic: Implements business logic for resource management.
What is a DaemonSet, and How is it Different from a Deployment?
| Aspect | DaemonSet | Deployment |
|---|---|---|
| Primary Function | Ensures a copy of a Pod runs on all (or selected) Nodes. | Manages the deployment and scaling of Pods. |
| Use Cases | Deploying node-specific services like log collectors or monitoring agents. | Running scalable, stateless applications like web servers. |
| Scaling | Automatically adds Pods to new nodes. | Scales based on the replica count. |
How Do You Upgrade a Kubernetes Cluster?
Upgrading a Kubernetes cluster requires careful planning to minimise downtime.
Steps:
1. Backup:
- etcd Backup: Ensure all cluster data is backed up.
- Configuration Files: Save all YAML configurations.
2. Plan the Upgrade:
- Review Release Notes: Understand new features and deprecations.
- Check Compatibility: Ensure add-ons and tools support the new version.
3. Upgrade Control Plane:
- Update components like kube-apiserver, kube-scheduler, and kube-controller-manager first.
4. Upgrade Worker Nodes:
- Update kubelet and kube-proxy: Install the new binaries.
5. Verify:
- Health Checks: Ensure all components are running correctly.
- Test Applications: Validate that workloads function as expected.
Compare Horizontal Pod Autoscaling and Cluster Autoscaling in Kubernetes.
| Aspect | Horizontal Pod Autoscaling (HPA) | Cluster Autoscaling |
| --- | --- | --- |
| Function | Adjusts the number of Pod replicas | Adds or removes Nodes based on demand |
| Scope | Per Deployment or ReplicaSet | Entire Kubernetes cluster |
| Triggers | CPU usage, custom metrics | Resource requests and Pod scheduling |
| Use Cases | Scaling web servers based on traffic | Managing cluster size during load spikes |
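A minimal HPA sketch (the Deployment name `web` is hypothetical) that scales between 2 and 10 replicas to keep average CPU utilisation near 70%:

```yaml
# Illustrative HPA targeting a Deployment named "web".
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

HPA relies on the metrics server (or a custom metrics adapter) being installed in the cluster, while Cluster Autoscaler is typically deployed and configured per cloud provider.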
What is Container Resource Monitoring?
Container Resource Monitoring tracks the performance and usage of containerized applications within Kubernetes.
- Key Metrics:
- CPU Usage: Measures processing power consumed by containers.
- Memory Usage: Tracks memory consumption and leaks.
- Network Traffic: Monitors data flow in and out of containers.
- Disk I/O: Observes read/write operations and storage usage.
Describe a Few Important Kubectl Commands
Kubectl is the command-line tool for interacting with Kubernetes clusters.
- `kubectl get pods`: lists Pods in the current namespace.
- `kubectl get services`: lists Services in the current namespace.
- `kubectl describe pod <pod-name>`: shows detailed state and events for a Pod.
- `kubectl apply -f <file.yaml>`: creates or updates resources from a manifest file.
- `kubectl delete pod <pod-name>`: deletes a Pod.
- `kubectl scale deployment <deployment-name> --replicas=<number>`: scales a Deployment to the given replica count.
- `kubectl rollout status deployment/<deployment-name>`: tracks the progress of a rollout.
- `kubectl logs <pod-name>`: prints a Pod's container logs.
Explain the Working of the kube-scheduler
The kube-scheduler is a critical Kubernetes component responsible for assigning Pods to appropriate Nodes.
- Primary Functions:
- Resource Evaluation: Assesses available CPU, memory, and other resources on Nodes.
- Policy Enforcement: Applies scheduling policies like affinity, anti-affinity, and taints.
- Prioritisation: Ranks Nodes based on criteria to select the best fit for Pod placement.
- Workflow:
- Pod Submission: A Pod is created without a Node assignment.
- Scheduling Decision: kube-scheduler evaluates Nodes and selects one that meets the Pod’s requirements.
- Binding: Assigns the Pod to the chosen Node by updating the Pod’s spec.
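The workflow above can be illustrated with a Pod spec that constrains where the kube-scheduler may place it (the `disktype: ssd` label and the image are hypothetical):

```yaml
# Illustrative Pod: the scheduler only considers Nodes labelled
# disktype=ssd, and only those with the requested CPU/memory free.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  nodeSelector:
    disktype: ssd
  containers:
    - name: app
      image: nginx:1.25   # illustrative image
      resources:
        requests:
          cpu: "500m"
          memory: "256Mi"
```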
What are ConfigMaps and Secrets in Kubernetes, and How Do They Differ?
| Aspect | ConfigMaps | Secrets |
| --- | --- | --- |
| Data Type | Non-sensitive configuration data. | Sensitive information like passwords and keys. |
| Storage | Stored as plain text. | Stored base64-encoded and can be encrypted at rest. |
| Usage | Injected as environment variables or files. | Injected similarly but with enhanced security. |
| Access Control | Less restrictive. | Restricted access with tighter security measures. |
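A minimal sketch of both objects side by side (names and values are illustrative); note that the Secret's value is only base64-encoded, not encrypted, unless encryption at rest is enabled:

```yaml
# Illustrative ConfigMap and Secret for the same application.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=   # "password", base64-encoded
```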
What is Kubernetes Controller?
A Kubernetes Controller is a control loop that continuously monitors the cluster state and makes changes to achieve the desired state.
- Core Controllers:
- Deployment Controller: Manages Deployments and ReplicaSets.
- ReplicaSet Controller: Ensures a specified number of Pod replicas are running.
- DaemonSet Controller: Ensures a Pod runs on all or selected Nodes.
- StatefulSet Controller: Manages stateful applications with stable identities.
What is the Kubernetes ReplicaSet?
A Kubernetes ReplicaSet ensures that a specified number of Pod replicas are running at any given time.
- Primary Functions:
- Maintain Replicas: Keeps the desired number of identical Pods active.
- Automatic Replacement: Recreates Pods that fail or are deleted.
- Use Cases:
- High Availability: Ensures application availability by maintaining multiple Pod replicas.
- Load Balancing: Distributes traffic across multiple Pods to optimise performance.
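A minimal ReplicaSet sketch (name and image are illustrative) that keeps three identical Pods running:

```yaml
# Illustrative ReplicaSet maintaining three nginx Pods.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

In practice, ReplicaSets are rarely created directly; a Deployment with the same template creates and manages them, as the next question discusses.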
What is a Deployment in Kubernetes, and How Does It Differ from a ReplicaSet?
| Aspect | Deployment | ReplicaSet |
| --- | --- | --- |
| Function | Manages desired state for Pods, handles updates and rollbacks. | Ensures a specified number of Pod replicas are running. |
| Features | Declarative updates, scaling, rolling updates. | Maintains Pod count, auto-replaces failed Pods. |
| Management | Controls ReplicaSets for advanced lifecycle management. | Typically managed by Deployments for scaling and updates. |
Explain the Working of the Master Node in Kubernetes.
The Master Node is the central control unit of a Kubernetes cluster, managing all cluster activities.
Functions:
- Orchestration: Schedules and manages the deployment of Pods across worker nodes.
- Configuration Management: Stores and retrieves cluster configurations from etcd.
- Monitoring: Continuously monitors the cluster’s state and initiates corrective actions when necessary.
- API Handling: Processes API requests from users, CLI tools, and other components.
Workflow:
- User Interaction: Users interact with the cluster via kubectl or other tools, sending requests to the kube-apiserver.
- Scheduling: The kube-scheduler determines the best node for new Pods.
- Controller Actions: Controllers ensure that the desired number of Pods are running and manage updates or failures.
What are Ingress Controllers, and How Do They Differ from Services of type LoadBalancer?
| Aspect | Ingress Controllers | LoadBalancer Services |
| --- | --- | --- |
| Function | Manage external HTTP/HTTPS access with advanced routing. | Expose Services externally using the cloud provider's load balancer. |
| Capabilities | Advanced routing based on hostnames and paths, SSL termination. | Basic load balancing and external access via a fixed IP. |
| Components | Ingress Resource and Ingress Controller. | Service of type LoadBalancer integrating with cloud APIs. |
| Use Cases | Complex traffic management, multiple domain routing. | Simple external access with cloud-managed load balancing. |
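A minimal Ingress sketch (the host and Service names are hypothetical) routing two paths of one domain to different backend Services; it assumes an Ingress Controller such as ingress-nginx is installed:

```yaml
# Illustrative Ingress: /api goes to api-svc, everything else to web-svc.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-svc
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc
                port:
                  number: 80
```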
Explain the Process of Rolling Updates in Kubernetes and How It Ensures Zero Downtime.
Steps:
- Initiate Update: Change the Pod template in the Deployment.
- Create New Pods: Gradually add new Pod replicas with the updated configuration.
- Monitor Health: Ensure new Pods are running correctly using readiness probes.
- Terminate Old Pods: Gradually remove outdated Pods once new Pods are healthy.
How It Ensures Zero Downtime:
- Incremental Updates: Updates Pods one at a time or in small batches.
- Readiness Probes: Ensure new Pods are ready before old Pods are removed.
- Max Surge & Max Unavailable: Control the number of Pods updated simultaneously to maintain availability.
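The mechanism above can be sketched in a Deployment's update strategy (names, image, and probe path are illustrative): at most one extra Pod is created during the rollout, no Pod is ever unavailable, and the readiness probe gates the removal of old Pods:

```yaml
# Illustrative Deployment with zero-downtime rolling-update settings.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one Pod above the desired count
      maxUnavailable: 0  # never drop below the desired count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          readinessProbe:
            httpGet:
              path: /
              port: 80
```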
What are the Differences Between Kubernetes Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA)?
| Aspect | Horizontal Pod Autoscaler (HPA) | Vertical Pod Autoscaler (VPA) |
| --- | --- | --- |
| Function | Adjusts the number of Pod replicas based on metrics. | Adjusts resource requests and limits for existing Pods. |
| Scope | Scales out/in Pods horizontally. | Scales up/down resources vertically within Pods. |
| Use Cases | Handling variable workloads by adding/removing Pods. | Optimising resource allocation for stable workloads. |
| Configuration | Defined using HPA objects targeting metrics like CPU. | Configured with VPA objects specifying resource policies. |
What is a Container Runtime in Kubernetes?
A Container Runtime is the software responsible for running containers on a Kubernetes node.
- Functionality:
- Pulls Container Images: Downloads container images from registries.
- Runs Containers: Manages the lifecycle of containers, including starting, stopping, and monitoring.
- Interfacing with OS: Communicates with the operating system to allocate resources.
How Do Kubernetes Ingress and Services Differ in Managing External Access to Applications?
| Aspect | Ingress | Services |
| --- | --- | --- |
| Function | Manages external HTTP/HTTPS routing and load balancing. | Exposes Pods internally or externally via IPs and ports. |
| Capabilities | Advanced routing based on hostnames and paths. | Basic load balancing and service discovery. |
| Components | Ingress Resource and Ingress Controller. | Service types: ClusterIP, NodePort, LoadBalancer. |
| Use Cases | Managing complex web traffic with SSL termination. | Exposing services for internal use or simple external access. |
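A minimal Service sketch for the internal-access case (names and ports are illustrative); `ClusterIP` is the default type and gives the Pods a stable in-cluster address:

```yaml
# Illustrative ClusterIP Service: in-cluster port 80 forwards to
# port 8080 on every Pod labelled app=web.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```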
What is the Google Container Engine?
Google Container Engine, now known as Google Kubernetes Engine (GKE), is Google Cloud's managed Kubernetes service.
Features:
- Managed Infrastructure: Handles automated deployment, upkeep, and the scaling of Kubernetes clusters.
- Integration with Google Cloud Services: Offers seamless integration with numerous other Google applications such as Cloud Storage, BigQuery, etc.
- Automatic Upgrades: Moves clusters to newer Kubernetes versions with minimal user intervention.
- High Availability: Can run the control plane and nodes across multiple zones, so the cluster stays available if a zone fails.
Compare Kubernetes Persistent Volumes (PV) and Persistent Volume Claims (PVC).
| Aspect | Persistent Volume (PV) | Persistent Volume Claim (PVC) |
| --- | --- | --- |
| Definition | Cluster-wide storage resource provisioned by admins. | Request for storage by users specifying size and access modes. |
| Provisioning | Manually or dynamically provisioned by Kubernetes. | Automatically bound to matching PVs based on requirements. |
| Lifecycle | Exists independently of Pods. | A namespaced object whose binding to a PV persists until the claim is deleted, independent of the Pods that use it. |
| Usage | Represents actual storage resources like NFS or cloud disks. | Acts as an interface for Pods to request storage without knowing the underlying details. |
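A minimal PV/PVC sketch (the NFS server address and path are hypothetical): the claim requests 5Gi with `ReadWriteOnce` access, and Kubernetes binds it to a volume that satisfies those requirements:

```yaml
# Illustrative NFS-backed PV and a PVC that binds to it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 10.0.0.10    # illustrative NFS server
    path: /exports/data  # illustrative export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

A Pod would then mount the claim by name via a `persistentVolumeClaim` volume source, never referring to the PV directly.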
Conclusion
Kubernetes interviews can be stressful because of the sheer number of concepts and topics involved, but a solid grasp of the fundamentals and practice with frequently asked questions go a long way. This blog has provided a comprehensive list of top Kubernetes interview questions categorized into Easy, Intermediate, and Advanced levels to cater to all skill sets.
By mastering these questions and their answers, you’ll be well-equipped to demonstrate your Kubernetes expertise and secure your desired role. Continue to explore and stay updated with the latest Kubernetes developments to enhance your proficiency and career prospects in this dynamic field. Kubernetes architecture is a component of DevOps, so to learn more about this architecture, you must try the Certificate Program in DevOps & Cloud Engineering with Microsoft by Hero Vired.
FAQs
Share hands-on projects, discuss deployments, mention certifications, and explain troubleshooting scenarios effectively.
Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), and Certified Kubernetes Security Specialist (CKS).
Managing scalability, ensuring security, troubleshooting deployments, handling complex networking, and integrating with existing systems.
Very important. Proficiency in Bash or Python helps automate tasks and manage deployments efficiently.
Helm, Prometheus, Grafana, Istio, Terraform, and CI/CD tools like Jenkins or GitLab.
Collaboration is crucial for managing deployments, coordinating with developers, and maintaining cluster health effectively.
Updated on December 5, 2024