Introduction: Mastering Kubernetes Core Resources

To use Kubernetes effectively, you must thoroughly understand three core resources: Pod, Deployment, and Service. These are the most fundamental building blocks for deploying and operating applications in Kubernetes. In this Part 8, we'll explore each resource's concepts, YAML authoring, and practical usage patterns in detail.

1. Understanding Pods in Detail

A Pod is the smallest deployable unit in Kubernetes. It contains one or more containers, and containers within the same Pod share network and storage.

1.1 Pod Characteristics

  • Container Group: Multiple containers can be placed in a single Pod
  • Shared Network: Containers in the same Pod communicate via localhost
  • Shared Storage: Data can be shared through volumes
  • Ephemeral Nature: Pods can be deleted and recreated at any time
  • Unique IP: Each Pod has a unique IP address within the cluster
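The smallest useful Pod is just a name plus one container. A minimal sketch (the name is a placeholder):

```yaml
# minimal-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
  - name: hello
    image: nginx:1.25
    ports:
    - containerPort: 80
```

Apply it with `kubectl apply -f minimal-pod.yaml`, then `kubectl get pod hello-pod -o wide` shows the cluster-internal IP it was assigned.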

1.2 Sidecar Pattern

The sidecar pattern places auxiliary containers alongside the main container. It's used for log collection, proxies, configuration management, and more.

# sidecar-example.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  # Main application container
  - name: web-app
    image: nginx:1.25
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx

  # Sidecar: Log collector
  - name: log-collector
    image: busybox
    command: ['sh', '-c', 'tail -F /var/log/nginx/access.log']
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx

  volumes:
  - name: shared-logs
    emptyDir: {}
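Newer Kubernetes versions also support native sidecars: an init container with `restartPolicy: Always` starts before the main container and keeps running alongside it (the SidecarContainers feature, enabled by default since 1.29). A sketch of the same log collector written this way, assuming a cluster new enough to support it:

```yaml
# native-sidecar-example.yaml (requires Kubernetes 1.29+ defaults)
apiVersion: v1
kind: Pod
metadata:
  name: web-with-native-sidecar
spec:
  initContainers:
  # restartPolicy: Always marks this init container as a sidecar
  - name: log-collector
    image: busybox
    restartPolicy: Always
    command: ['sh', '-c', 'tail -F /var/log/nginx/access.log']
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  containers:
  - name: web-app
    image: nginx:1.25
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  volumes:
  - name: shared-logs
    emptyDir: {}
```

Unlike a plain second container, a native sidecar is guaranteed to start before and terminate after the main container.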

1.3 Init Container

Init Containers run to completion, one at a time and in order, before the main containers start. They're used to wait for databases, generate configuration files, set file permissions, and so on.

# init-container-example.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
  # First Init Container: Wait for DB
  - name: wait-for-db
    image: busybox
    command: ['sh', '-c', 'until nc -z mysql-service 3306; do echo waiting for db; sleep 2; done']

  # Second Init Container: Prepare config file
  - name: prepare-config
    image: busybox
    command: ['sh', '-c', 'cp /config-source/app.conf /config/app.conf']
    volumeMounts:
    - name: config-volume
      mountPath: /config
    - name: config-source
      mountPath: /config-source

  containers:
  - name: main-app
    image: my-app:latest
    volumeMounts:
    - name: config-volume
      mountPath: /app/config

  volumes:
  - name: config-volume
    emptyDir: {}
  - name: config-source
    configMap:
      name: app-config
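The example above assumes a `mysql-service` Service and an `app-config` ConfigMap already exist in the namespace. A minimal ConfigMap to make the config-copy step runnable (the `app.conf` contents are placeholders):

```yaml
# app-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.conf: |
    # placeholder application configuration
    log_level=info
```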

1.4 Writing Pod YAML

Let's look at a complete Pod YAML example.

# complete-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: web
    environment: production
  annotations:
    description: "Production web server"
spec:
  # Restart policy
  restartPolicy: Always  # Always, OnFailure, Never

  # Node selection
  nodeSelector:
    disktype: ssd

  # Container definition
  containers:
  - name: web
    image: nginx:1.25

    # Port settings
    ports:
    - name: http
      containerPort: 80
      protocol: TCP

    # Resource limits
    resources:
      requests:
        memory: "128Mi"
        cpu: "250m"
      limits:
        memory: "256Mi"
        cpu: "500m"

    # Environment variables
    env:
    - name: ENV_NAME
      value: "production"
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-secrets
          key: password

    # Health checks
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 30
      periodSeconds: 10

    readinessProbe:
      httpGet:
        path: /ready
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5

    # Volume mounts
    volumeMounts:
    - name: data-volume
      mountPath: /data

  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: my-pvc
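This Pod references a `db-secrets` Secret and a `my-pvc` PersistentVolumeClaim; both must exist before the Pod can start. Minimal sketches of each (the password and storage size are placeholders, and the cluster's default StorageClass is assumed):

```yaml
# pod-dependencies.yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-secrets
type: Opaque
stringData:
  password: change-me  # placeholder; stringData is base64-encoded on write
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi  # placeholder size
```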

2. Understanding Deployments

Deployments manage declarative updates for Pods. They maintain the desired number of Pods through ReplicaSets and provide rolling updates and rollback capabilities.

2.1 Relationship Between Deployment and ReplicaSet

Deployments create and manage ReplicaSets, and ReplicaSets in turn maintain the specified number of Pod replicas. Each rollout creates a new ReplicaSet and scales the old one down to zero; the retained old ReplicaSets are what make rollbacks possible.

  • Deployment: Higher-level abstraction, manages update strategies
  • ReplicaSet: Maintains Pod replica count
  • Pod: Runs the actual application

2.2 Writing Deployment YAML

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
  labels:
    app: web
spec:
  # Replica count
  replicas: 3

  # Selector (which Pods to manage)
  selector:
    matchLabels:
      app: web

  # Update strategy
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # Max extra Pods above the desired count during an update
      maxUnavailable: 0  # Max Pods that may be unavailable during an update

  # Pod template
  template:
    metadata:
      labels:
        app: web
        version: v1
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "200m"
            memory: "256Mi"
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 3

2.3 Rolling Updates

Rolling updates replace existing Pods with the new version a few at a time, so the application stays available throughout the update.
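RollingUpdate is the default strategy. When old and new versions must never run simultaneously (for example, around an exclusive database migration), the alternative is `Recreate`, which deletes all old Pods before creating new ones, at the cost of downtime:

```yaml
# In the Deployment spec
strategy:
  type: Recreate  # terminate all old Pods first, then create the new ones
```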

# Update image (triggers rolling update)
kubectl set image deployment/web-deployment web=nginx:1.26

# Check update status
kubectl rollout status deployment/web-deployment

# View update history
kubectl rollout history deployment/web-deployment

# View specific revision details
kubectl rollout history deployment/web-deployment --revision=2
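By default, the CHANGE-CAUSE column in `kubectl rollout history` is empty. It is filled from the `kubernetes.io/change-cause` annotation, which can be set in the Deployment's metadata before each apply (the message below is a placeholder):

```yaml
# In the Deployment metadata
metadata:
  name: web-deployment
  annotations:
    kubernetes.io/change-cause: "update nginx to 1.26"
```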

2.4 Rollback

If problems occur, you can easily roll back to a previous version.

# Rollback to previous version
kubectl rollout undo deployment/web-deployment

# Rollback to specific revision
kubectl rollout undo deployment/web-deployment --to-revision=2

# Check status after rollback
kubectl rollout status deployment/web-deployment

3. Scaling

Kubernetes supports both manual scaling and automatic scaling (HPA).

3.1 Manual Scaling

# Adjust replica count
kubectl scale deployment/web-deployment --replicas=5

# Check current state
kubectl get deployment web-deployment

# Scale through YAML modification
kubectl edit deployment web-deployment

3.2 HPA (Horizontal Pod Autoscaler)

HPA automatically adjusts the number of Pods based on CPU or memory usage, or on custom metrics. It requires the metrics-server (or another metrics API provider), and utilization-based targets also require resource requests on the containers.

# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deployment

  minReplicas: 2
  maxReplicas: 10

  metrics:
  # CPU-based scaling
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

  # Memory-based scaling
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80

  # Detailed scaling behavior settings
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # Wait time before scale down
      policies:
      - type: Percent
        value: 10
        periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
      - type: Percent
        value: 100
        periodSeconds: 15
      - type: Pods
        value: 4
        periodSeconds: 15
      selectPolicy: Max

# Create HPA (command)
kubectl autoscale deployment web-deployment --min=2 --max=10 --cpu-percent=70

# Check HPA status
kubectl get hpa

# HPA details
kubectl describe hpa web-hpa

4. Service Types

Services provide stable network endpoints for sets of Pods. Pod IPs change whenever Pods are recreated, but a Service's virtual IP and DNS name stay fixed.

4.1 ClusterIP (Default)

Provides a virtual IP accessible only within the cluster.

# clusterip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: ClusterIP  # Default, can be omitted
  selector:
    app: web
  ports:
  - name: http
    protocol: TCP
    port: 80        # Service port
    targetPort: 80  # Pod port

Use Cases: Internal microservice communication, database connections
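A related variant is the headless Service: setting `clusterIP: None` skips the virtual IP, and the Service's DNS name resolves directly to the individual Pod IPs. This suits stateful workloads and clients that do their own load balancing. A minimal sketch:

```yaml
# headless-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-headless
spec:
  clusterIP: None  # headless: DNS returns the Pod IPs directly
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```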

4.2 NodePort

Allows external access through specific ports on each node.

# nodeport-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - name: http
    protocol: TCP
    port: 80         # Service port
    targetPort: 80   # Pod port
    nodePort: 30080  # Node port (30000-32767)

Use Cases: Development/test environments, external access without a load balancer

4.3 LoadBalancer

Provisions a load balancer from the cloud provider and exposes the Service on its external IP.

# loadbalancer-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-loadbalancer
  annotations:
    # AWS example
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443

Use Cases: External service exposure in production environments
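In production you often also want to restrict which client networks may reach the load balancer. Most cloud providers honor the `loadBalancerSourceRanges` field for this (the CIDR below is a placeholder):

```yaml
# In the Service spec
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 203.0.113.0/24  # placeholder: allow only this network
```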

4.4 ExternalName

Maps the Service name to an external DNS name by returning a CNAME record; no traffic is proxied.

# externalname-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com

Use Cases: External database connections, external API service connections
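ExternalName only works when the external dependency has a DNS name. When all you have is an IP address, the alternative is a selector-less Service paired with a manually managed EndpointSlice (the IP below is a placeholder):

```yaml
# external-ip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db-ip
spec:
  ports:
  - port: 3306
    targetPort: 3306
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: external-db-ip-1
  labels:
    kubernetes.io/service-name: external-db-ip  # ties this slice to the Service
addressType: IPv4
ports:
- port: 3306
endpoints:
- addresses:
  - "192.0.2.10"  # placeholder external IP
```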

4.5 Complete Service YAML Example

# complete-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    app: web
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "80"
spec:
  type: ClusterIP

  selector:
    app: web
    environment: production

  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: http  # Reference Pod port name
  - name: https
    protocol: TCP
    port: 443
    targetPort: https

  # Session affinity (same client to same Pod)
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600

5. Labels and Selectors

Labels and selectors are the core mechanism for organizing and connecting Kubernetes resources.

5.1 Labels

Labels are key-value pairs attached to resources.

# Label examples
metadata:
  labels:
    app: web
    environment: production
    version: v1.0.0
    team: backend
    tier: frontend

# Query resources by label
kubectl get pods -l app=web
kubectl get pods -l 'environment in (production, staging)'
kubectl get pods -l app=web,environment=production

# Add/modify labels
kubectl label pods my-pod new-label=new-value

# Remove label
kubectl label pods my-pod new-label-

# Show all labels
kubectl get pods --show-labels

5.2 Selectors

Selectors select resources based on labels. Deployments, ReplicaSets, and Jobs accept both equality-based (matchLabels) and set-based (matchExpressions) selectors; Service selectors support only plain key-value equality.

# Equality-based selector
selector:
  matchLabels:
    app: web
    environment: production

# Set-based selector
selector:
  matchLabels:
    app: web
  matchExpressions:
  - key: environment
    operator: In
    values:
    - production
    - staging
  - key: version
    operator: NotIn
    values:
    - v1.0.0
  - key: team
    operator: Exists

5.3 Labels and Selectors Usage Example

# Complete deployment example
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
  labels:
    app: web
    environment: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
      environment: production
  template:
    metadata:
      labels:
        app: web
        environment: production
        version: v2.0.0
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
    environment: production
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

6. Practical Command Reference

6.1 Pod Commands

# List Pods
kubectl get pods
kubectl get pods -o wide
kubectl get pods -w  # Watch in real-time

# Pod details
kubectl describe pod my-pod

# View Pod logs
kubectl logs my-pod
kubectl logs my-pod -c container-name  # Specific container
kubectl logs my-pod -f  # Real-time logs
kubectl logs my-pod --previous  # Previous container logs

# Access Pod shell
kubectl exec -it my-pod -- /bin/bash
kubectl exec -it my-pod -c container-name -- /bin/sh

# Delete Pod
kubectl delete pod my-pod
kubectl delete pod my-pod --grace-period=0 --force  # Force delete

6.2 Deployment Commands

# Deployment management
kubectl create deployment nginx --image=nginx
kubectl get deployments
kubectl describe deployment my-deployment

# Updates and rollbacks
kubectl set image deployment/my-deployment container=image:tag
kubectl rollout status deployment/my-deployment
kubectl rollout history deployment/my-deployment
kubectl rollout undo deployment/my-deployment

# Scaling
kubectl scale deployment/my-deployment --replicas=5

# Pause/Resume
kubectl rollout pause deployment/my-deployment
kubectl rollout resume deployment/my-deployment

6.3 Service Commands

# Service management
kubectl expose deployment my-deployment --port=80 --type=ClusterIP
kubectl get services
kubectl describe service my-service

# Check service endpoints
kubectl get endpoints my-service

# Port forwarding (for local testing)
kubectl port-forward service/my-service 8080:80

Conclusion

In this Part 8, we explored the Kubernetes core resources Pod, Deployment, and Service in depth. Pods are the basic unit of deployment, and patterns such as the sidecar and Init Containers build on them. Deployments provide declarative Pod management with rolling updates and rollbacks, plus automatic scaling through HPA. Services give Pods a stable network identity, and you can choose among ClusterIP, NodePort, LoadBalancer, and ExternalName based on your needs.

By utilizing labels and selectors, you can flexibly connect and manage these resources. In the next part, we'll explore ConfigMap, Secret, Ingress, and other data and configuration management topics.