Cloud Computing

Kubernetes for Beginners: Complete Tutorial

March 15, 2026 · 5 min read
Container orchestration technology representing Kubernetes cluster management

What Is Kubernetes?

Kubernetes (often abbreviated as K8s) is an open-source container orchestration platform originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF). It automates the deployment, scaling, and management of containerized applications across clusters of machines.

While Docker excels at running individual containers, Kubernetes manages hundreds or thousands of containers across multiple servers, ensuring they run reliably, scale automatically, and recover from failures without manual intervention.

Why You Need Container Orchestration

Running a few Docker containers on a single server is straightforward. But as your application grows, you face challenges that manual management cannot solve:

  • Scaling — How do you automatically add more container instances when traffic increases?
  • Load balancing — How do you distribute traffic evenly across multiple container instances?
  • Self-healing — How do you automatically restart failed containers or replace unhealthy nodes?
  • Rolling updates — How do you deploy new versions without downtime?
  • Service discovery — How do containers find and communicate with each other?
  • Resource management — How do you efficiently allocate CPU and memory across containers?

Kubernetes solves all of these problems through a declarative configuration model where you describe your desired state, and Kubernetes continuously works to maintain it.

Core Kubernetes Architecture

Control Plane

The control plane manages the overall cluster and makes global decisions about scheduling, scaling, and responding to events. Key components include:

  • API Server — The front end for the Kubernetes control plane. All commands and configurations pass through it.
  • etcd — A distributed key-value store that holds all cluster state and configuration data.
  • Scheduler — Assigns pods to nodes based on resource requirements, constraints, and availability.
  • Controller Manager — Runs controllers that maintain the desired state (replication, endpoints, namespaces, etc.).

Worker Nodes

Worker nodes run your containerized applications. Each node contains:

  • kubelet — An agent that ensures containers are running in a pod as expected.
  • kube-proxy — Manages network rules for pod-to-pod communication and service exposure.
  • Container runtime — The software that runs containers (containerd, CRI-O).

Essential Kubernetes Objects

Pods

A pod is the smallest deployable unit in Kubernetes. It represents one or more containers that share storage, network, and a specification for how to run. Most pods contain a single container, but sidecar patterns use multiple containers in one pod.
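The idea is easiest to see in a manifest. Below is a minimal sketch of a single-container pod (the names and image tag are illustrative); in practice you rarely create bare pods and instead let a Deployment manage them, as shown in the next section:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # illustrative name
  labels:
    app: hello
spec:
  containers:
  - name: web
    image: nginx:1.27    # any container image works here
    ports:
    - containerPort: 80  # port the container listens on
```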

Deployments

Deployments manage the desired state of your pods. They handle creating pods, scaling them up or down, and performing rolling updates. A deployment configuration specifies the container image, number of replicas, and update strategy.

Services

Services provide stable networking for pods. Since pods are ephemeral and can be replaced at any time, their IP addresses change. A service provides a consistent endpoint (IP and DNS name) that routes traffic to the appropriate pods.

  • ClusterIP — Internal-only access within the cluster
  • NodePort — Exposes the service on a static port on each node
  • LoadBalancer — Provisions an external load balancer (cloud environments)

ConfigMaps and Secrets

ConfigMaps store non-sensitive configuration data as key-value pairs. Secrets store sensitive data such as passwords and API keys. Note that Secret values are only base64-encoded, not encrypted by default, so protections like RBAC and encryption at rest still matter. Both objects can be mounted as files or injected as environment variables into pods.
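As a sketch (names and values are illustrative), here is a ConfigMap and a Secret injected into a container as environment variables:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config         # illustrative name
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret         # illustrative name
type: Opaque
stringData:                # stringData accepts plain text; Kubernetes base64-encodes it
  API_KEY: "replace-me"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx:1.27
    envFrom:               # expose every key in each object as an env variable
    - configMapRef:
        name: app-config
    - secretRef:
        name: app-secret
```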

Getting Started with Kubernetes

Setting Up a Local Cluster

For learning and development, you can run Kubernetes locally using:

  • Minikube — Creates a single-node Kubernetes cluster in a VM or container
  • kind — Runs Kubernetes clusters inside Docker containers
  • Docker Desktop — Includes a built-in Kubernetes cluster option
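With any of these installed, spinning up a throwaway cluster is a single command. A sketch, assuming minikube or kind is already on your PATH:

```shell
# Option 1: Minikube (single-node cluster in a VM or container)
minikube start

# Option 2: kind (cluster nodes run as Docker containers)
kind create cluster --name dev

# Verify the cluster is reachable
kubectl cluster-info
kubectl get nodes
```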

Your First Deployment

Create a file called deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:latest
        ports:
        - containerPort: 80

Apply it with kubectl apply -f deployment.yaml. Kubernetes will create three nginx pods distributed across your cluster nodes.
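You can watch the rollout and confirm the replicas directly with kubectl:

```shell
kubectl apply -f deployment.yaml

# Block until all replicas are available
kubectl rollout status deployment/my-app

# List the pods created by the deployment via its label selector
kubectl get pods -l app=my-app
```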

Exposing Your Application

Create a service to make your deployment accessible:

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
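Apply the manifest (saving it as service.yaml is an assumption here) and check the external address. On a local cluster there is no cloud load balancer, so the EXTERNAL-IP column stays pending unless you use a workaround such as `minikube tunnel`:

```shell
kubectl apply -f service.yaml

# On a cloud provider, EXTERNAL-IP is populated once the load balancer is ready
kubectl get service my-app-service

# Minikube only: open a route so LoadBalancer services receive an address
minikube tunnel
```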

Essential kubectl Commands

  • kubectl get pods — List all pods
  • kubectl get services — List all services
  • kubectl describe pod pod-name — Detailed pod information
  • kubectl logs pod-name — View pod logs
  • kubectl scale deployment my-app --replicas=5 — Scale a deployment
  • kubectl delete pod pod-name — Delete a pod (the deployment will recreate it)

Managed Kubernetes Services

Running Kubernetes in production requires managing the control plane, upgrading clusters, and maintaining infrastructure. Managed services handle this complexity for you:

  • Amazon EKS — AWS-managed Kubernetes with deep integration into AWS services
  • Azure AKS — Microsoft-managed Kubernetes with Azure Active Directory integration
  • Google GKE — Google-managed Kubernetes, widely regarded as the most mature managed offering

Kubernetes Best Practices

  1. Use namespaces to organize resources and enforce access controls
  2. Set resource requests and limits on every container to prevent resource starvation
  3. Use liveness and readiness probes so Kubernetes can detect and replace unhealthy pods
  4. Store configuration externally using ConfigMaps and Secrets, not baked into images
  5. Implement RBAC to control who can do what within the cluster
  6. Use Helm charts for templating and managing complex deployments
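Practices 2 and 3 live in the pod spec. A sketch of a container with resource requests/limits and liveness/readiness probes (the paths, ports, and thresholds are illustrative and should be tuned to your application):

```yaml
spec:
  containers:
  - name: my-app
    image: nginx:1.27
    resources:
      requests:            # the scheduler uses these to place the pod
        cpu: "100m"
        memory: "128Mi"
      limits:              # the container is throttled or killed beyond these
        cpu: "500m"
        memory: "256Mi"
    livenessProbe:         # restart the container if this check fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:        # remove the pod from service endpoints if this fails
      httpGet:
        path: /
        port: 80
      periodSeconds: 5
```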

Conclusion

Kubernetes is a powerful platform that solves the operational challenges of running containerized applications at scale. While the learning curve is steep, starting with local clusters and simple deployments builds a solid foundation. As your applications grow, Kubernetes provides the automation, scalability, and resilience needed for production-grade infrastructure.
