Last updated: April 13, 2025
Table of Contents
- 1. Introduction: What is Kubernetes?
- 2. Why Use Kubernetes?
- 3. Basic Architecture Overview
- 4. Container Images: The Building Blocks
- 5. Core Kubernetes Objects
- 6. Declarative Configuration (YAML)
- 7. Deploying Your Application (Workflow)
- 8. Next Steps: Running Locally
- 9. Conclusion
- 10. Related Articles
1. Introduction: What is Kubernetes?
Kubernetes (k8s) is an open-source platform for automating the deployment, scaling, and management of containerized applications. It groups the containers that make up an application into logical units for easy management and discovery. Kubernetes builds on 15 years of experience running production workloads at Google, combined with best-of-breed ideas and practices from the community.
2. Why Use Kubernetes?
As applications scale and become distributed across multiple containers (microservices), Kubernetes provides essential capabilities:
- Automation: Automates scheduling, self-healing (restarting failed containers), scaling, and updates.
- Scalability: Easily scale applications horizontally (more instances) based on load.
- Resilience: Improves application availability by managing container lifecycle and distributing load.
- Portability: Provides a consistent platform across different cloud providers and on-premises infrastructure.
- Efficiency: Optimizes resource utilization by packing containers onto nodes effectively.
3. Basic Architecture Overview
A Kubernetes cluster has two main types of components:
3.1 Control Plane
Manages the cluster's state. Components include the API server (the cluster's frontend), etcd (a consistent key-value store holding cluster data), the scheduler, and controller managers.
3.2 Worker Nodes
Machines (VMs or physical) that run the actual application containers. Each node runs a kubelet (which manages containers on the node) and kube-proxy (which manages network connectivity).
3.3 kubectl
The command-line tool (CLI) used by developers and administrators to interact with the Kubernetes API server and manage cluster resources.
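A few common kubectl commands, as a sketch (this assumes kubectl is installed and configured to talk to a cluster; `<pod-name>` is a placeholder):

```shell
# Show cluster and node information
kubectl cluster-info
kubectl get nodes

# List resources in the current namespace
kubectl get pods
kubectl get deployments

# Inspect a specific resource in detail
kubectl describe pod <pod-name>
```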
4. Container Images: The Building Blocks
Kubernetes doesn't run your compiled code directly; it runs containers. Containers package your application code along with all its dependencies (libraries, runtime, system tools). The blueprint for creating a container is a container image, most commonly built using Docker.
4.1 Role of the Dockerfile
You create a container image using a text file called a Dockerfile. This file contains step-by-step instructions:
- Start from a base image (e.g., a specific version of Node.js, Python, Rust, or a minimal OS like Alpine).
- Copy your compiled application code and dependencies into the image.
- Define necessary environment variables.
- Specify which ports the application uses.
- Set the command to run when a container starts from this image.
Example Dockerfile (Simple Rust Web App):
# Stage 1: Build the application
FROM rust:1.70 as builder
WORKDIR /usr/src/app
COPY . .
# Build release binary (assuming Cargo.toml is setup)
RUN cargo build --release
# Stage 2: Create the final, smaller image
FROM debian:bookworm-slim
WORKDIR /app
# Copy only the compiled binary from the builder stage
COPY --from=builder /usr/src/app/target/release/my-rust-app .
# Expose the port the app listens on
EXPOSE 8080
# Command to run the application
CMD ["./my-rust-app"]
You then use the docker build command to create the image from this Dockerfile. See our Docker Commands Guide for more details.
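For example, building the image from the directory containing the Dockerfile might look like this (the image name and tag are placeholders; this assumes a running Docker daemon):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t my-rust-app:v1.0 .

# Verify the image exists locally
docker images my-rust-app
```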
4.2 Container Registries
Once built, you need to store your container image somewhere Kubernetes can access it. This place is called a container registry. Popular registries include:
- Docker Hub (public and private)
- Google Container Registry (GCR) / Artifact Registry (GCP)
- Amazon Elastic Container Registry (ECR) (AWS)
- Azure Container Registry (ACR) (Azure)
- GitLab Container Registry
You push your built image to a registry using docker push <registry-path/image-name:tag>. Kubernetes then pulls the specified image from the registry when creating containers.
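As a sketch, tagging and pushing an image to Docker Hub (the account name `your-username` and image name are placeholders):

```shell
# Tag the local image with the registry path
docker tag my-rust-app:v1.0 your-username/my-rust-app:v1.0

# Authenticate, then push the image to the registry
docker login
docker push your-username/my-rust-app:v1.0
```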
5. Core Kubernetes Objects
Kubernetes uses object abstractions to represent the state of your system.
5.1 Pods
The smallest deployable unit. A Pod encapsulates one or more containers, shared storage (Volumes), and network resources (a unique IP address). Containers within a Pod are always co-located and co-scheduled on the same node.
Crucially, a Pod's definition specifies which container image(s) to run.
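For illustration, a minimal Pod manifest might look like this (the image path is a placeholder; in practice you would usually let a Deployment create Pods rather than defining them directly):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  labels:
    app: my-app
spec:
  containers:
    - name: my-app-container
      image: your-registry/your-app-name:v1.0 # The container image to run
      ports:
        - containerPort: 8080
```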
5.2 Deployments
Manages the lifecycle of Pods. You declare the desired state (e.g., "3 replicas of my app running image v1.2"), and the Deployment controller ensures that state is maintained. It handles creating Pods based on a template (which specifies the container image), scaling, rolling updates, and rollbacks.
5.3 Services
Provides a stable endpoint (IP address and DNS name) to access a set of Pods. Services act as load balancers, distributing traffic to healthy Pods managed by a Deployment (or other controller) based on labels.
5.4 Namespaces
Virtual clusters within a physical cluster, used for organizing resources and isolating environments (e.g., development, staging, production).
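For instance, creating a namespace and listing resources inside it (the namespace name is illustrative; requires a configured cluster):

```shell
# Create a namespace for a staging environment
kubectl create namespace staging

# List Pods in that namespace rather than the default one
kubectl get pods -n staging
```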
5.5 ConfigMaps and Secrets
- ConfigMaps: Store non-sensitive configuration data externally.
- Secrets: Store sensitive data like passwords or API keys.
Both can be injected into Pods as environment variables or mounted as files.
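As a sketch, a ConfigMap and the corresponding injection into a container spec (all names and values are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  LOG_LEVEL: "info"
  APP_MODE: "production"
---
# Inside a Pod or Deployment container spec, reference it like this:
# containers:
#   - name: my-app-container
#     envFrom:
#       - configMapRef:
#           name: my-app-config
```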
6. Declarative Configuration (YAML)
Kubernetes operates on a declarative model. You define the desired state of your application in YAML manifest files and apply them to the cluster using kubectl apply -f <filename.yaml>.
6.1 Example Deployment YAML
This manifest tells Kubernetes to ensure one replica of a Pod running your application image is always running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment # Name of the Deployment object
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app # Selects Pods with this label
  template: # This defines the Pod(s) to be created
    metadata:
      labels:
        app: my-app # Label applied to the Pod(s)
    spec:
      containers:
        - name: my-app-container # Name of the container within the Pod
          # --- This is where your compiled project's image is specified ---
          image: your-registry/your-app-name:v1.0 # Replace with your image path and tag
          # ---
          ports:
            - containerPort: 8080 # The port your application listens on inside the container
          # Optional: Add environment variables from ConfigMaps/Secrets
          # envFrom:
          #   - configMapRef:
          #       name: my-app-config
          #   - secretRef:
          #       name: my-app-secrets
6.2 Example Service YAML
This manifest creates a stable endpoint to access the Pods managed by the Deployment above.
apiVersion: v1
kind: Service
metadata:
  name: my-app-service # Name of the Service object
spec:
  type: LoadBalancer # Or NodePort, ClusterIP depending on how you want to expose it
  selector:
    app: my-app # Connects this Service to Pods with the label "app: my-app"
  ports:
    - protocol: TCP
      port: 80 # Port the Service listens on within the cluster
      targetPort: 8080 # Port the container listens on (must match containerPort above)
7. Deploying Your Application (Workflow)
The typical workflow to get your compiled project running on Kubernetes is:
- Write Code: Develop and compile your application.
- Create Dockerfile: Define how to package your compiled application and its dependencies into a container image.
- Build Image: Use docker build to create the image.
- Push Image: Push the image to a container registry (e.g., Docker Hub, GCR, ECR).
- Write Kubernetes Manifests: Create YAML files (like deployment.yaml, service.yaml) defining your Deployment, Service, etc., ensuring the image: field in the Deployment points to the image you pushed in the previous step.
- Apply Manifests: Use kubectl apply -f <manifest-file.yaml> to create or update the resources in your Kubernetes cluster. Kubernetes will then pull the specified image from the registry and start the containers as defined.
- Verify: Use kubectl get pods, kubectl get services, and kubectl logs <pod-name> to check the status and logs of your application.
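Put together, the workflow above might look like this on the command line (image path, manifest file names, and `<pod-name>` are placeholders; this assumes Docker and a configured cluster):

```shell
# Build the image from your Dockerfile
docker build -t your-registry/your-app-name:v1.0 .

# Push it to your registry
docker push your-registry/your-app-name:v1.0

# Apply your manifests
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

# Verify the result
kubectl get pods
kubectl get services
kubectl logs <pod-name>
```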
8. Next Steps: Running Locally
The best way to solidify these concepts is through practice. Set up a local Kubernetes cluster using tools like:
- Minikube
- Kind (Kubernetes in Docker)
- k3s/k3d
- Docker Desktop Kubernetes
Try building an image for a simple application and deploying it using the YAML examples above.
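For example, a quick local loop with Minikube might look like this (assuming Minikube is installed and the manifest file names match your own):

```shell
# Start a local single-node cluster
minikube start

# Deploy using the manifests from the examples above
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

# Open the Service in a browser (works for LoadBalancer/NodePort Services)
minikube service my-app-service
```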
9. Conclusion
Kubernetes orchestrates containerized applications by managing core objects like Pods, Deployments, and Services. Getting your compiled application onto Kubernetes involves packaging it into a container image (using a Dockerfile), pushing that image to a registry, and then defining Kubernetes resources (primarily Deployments) in YAML manifests that reference your specific image. Kubernetes then handles the process of pulling the image and running your application containers according to your declarative configuration.
10. Related Articles
- Getting Started with Minikube
- Docker Commands Guide
- Docker Compose Guide
- Infrastructure as Code Explained
- Choosing a Cloud Provider (AWS vs GCP vs Azure)
- Getting started with Rust and Kubernetes
- CI/CD Pipelines Explained