☸ Complete Reference

Kubernetes YAML Master Guide

Every Kubernetes resource type explained with real-world YAML examples, field-by-field breakdowns, and production patterns. From beginner to expert.

20 resource types · 100+ YAML fields · 5 categories
All Resource Types

WORKLOAD
  01. Pod: Smallest deployable unit. One or more containers sharing network and storage.
  02. Deployment: Manages stateless apps. Rolling updates, rollbacks, replica scaling.
  03. ReplicaSet: Maintains N identical Pod replicas. Usually managed by a Deployment.
  04. StatefulSet: Manages stateful apps (DBs). Stable identity and persistent storage.
  05. DaemonSet: Runs one Pod per node. For monitoring, logging, networking agents.
  06. Job: Run-to-completion tasks. Batch processing, migrations, backups.
  07. CronJob: Scheduled Jobs using cron syntax. Cleanup tasks, reports.

NETWORKING
  08. Service: Stable network endpoint for Pods. ClusterIP, NodePort, LoadBalancer.
  09. Ingress: HTTP/HTTPS routing. Domain-based and path-based routing plus TLS.
  10. NetworkPolicy: Firewall rules between Pods. Control ingress/egress traffic.

CONFIG
  11. ConfigMap: Non-sensitive config data. Env vars, config files, command args.
  12. Secret: Sensitive data: passwords, tokens, TLS certs. Base64 encoded.

STORAGE
  13. PersistentVolume: Cluster-wide storage resource. Admin-provisioned disk.
  14. PersistentVolumeClaim (PVC): Storage request by a user. Binds to a PersistentVolume.
  15. StorageClass: Dynamic storage provisioning. SSD, HDD, cloud disk types.

RBAC
  16. Namespace: Virtual cluster isolation. Separate teams, environments.
  17. ServiceAccount: Pod identity for API access. Authentication for workloads.
  18. Role & RoleBinding: Permissions to resources. Who can do what in a namespace.

SCALING
  19. HPA: Auto-scale Pods by CPU/memory. Horizontal Pod Autoscaler.
  20. ResourceQuota: Limit total resources per namespace. Prevent runaway usage.
Pod (v1 · Workload Core)

- Shared Network: All containers in a Pod share the same IP address and port space.
- Shared Storage: Containers can mount the same volumes and share files.
- Same Lifecycle: All containers start and stop together as one unit.
- Single Node: All containers in a Pod always run on the same node.
📄 pod.yaml - Full Example
apiVersion: v1                      # Core API group
kind: Pod                             # Resource type
metadata:
  name: my-app-pod                   # Pod's unique name
  namespace: production              # Which namespace
  labels:
    app: my-app                      # Used by Services/Deployments
    version: v1
  annotations:
    description: "Main web server"

spec:
  containers:                         # List of containers
    - name: web                       # Container name
      image: nginx:1.21              # Docker image
      ports:
        - containerPort: 80          # Port container listens on
      env:                            # Environment variables
        - name: APP_ENV
          value: "production"
      resources:                      # CPU/memory limits
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
      livenessProbe:                  # Is container alive?
        httpGet:
          path: /health
          port: 80
        initialDelaySeconds: 15
      readinessProbe:                 # Is container ready for traffic?
        httpGet:
          path: /ready
          port: 80
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config

  initContainers:                     # Run before main containers
    - name: init-db
      image: busybox
      command: ['sh', '-c', 'until nc -z db 5432; do sleep 1; done']

  volumes:                            # Storage volumes
    - name: config-volume
      configMap:                      # volume source is "configMap" (configMapRef is only for envFrom)
        name: app-config

  restartPolicy: Always              # Always|OnFailure|Never
  nodeSelector:                       # Run on specific nodes
    disktype: ssd
Field                       | Required | What it does
apiVersion: v1              | Required | Core API group for basic resources
kind: Pod                   | Required | Tells Kubernetes what resource this is
metadata.name               | Required | Unique name for this Pod in the namespace
spec.containers[].image     | Required | Container image to run (name:tag format)
spec.containers[].resources | Optional | CPU/memory requests and limits. Always set in production!
livenessProbe               | Optional | Kubernetes restarts the container if this fails
readinessProbe              | Optional | No traffic is sent until this passes
initContainers              | Optional | Run before app containers: wait for a DB, set up files, etc.
restartPolicy               | Optional | Always (default), OnFailure, Never
⚠️ Never use bare Pods in production. If a node dies, bare Pods are NOT recreated. Always use a Deployment or StatefulSet to manage Pods.
Deployment (apps/v1 · Stateless Apps · Rolling Updates · Most Used)

Deployment (you manage this) → ReplicaSet (auto-created) → Pod 1 + Pod 2 + Pod 3 (auto-created)
📄 deployment.yaml - Production Example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pathnex-deployment
  namespace: production
  labels:
    app: pathnex

spec:
  replicas: 3                         # Run 3 Pods at all times

  selector:                           # Which Pods this manages
    matchLabels:
      app: pathnex                   # MUST match template labels

  strategy:                           # How to update Pods
    type: RollingUpdate              # Replace Pods gradually
    rollingUpdate:
      maxUnavailable: 1             # Max Pods down during update
      maxSurge: 1                   # Max extra Pods during update

  template:                           # Pod blueprint (same as Pod spec)
    metadata:
      labels:
        app: pathnex                # MUST match selector
    spec:
      containers:
        - name: nginx
          image: nginx:1.25          # Pin a specific tag in production (avoid :latest)
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "256Mi"
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
Field              | Required | What it does
spec.replicas      | Optional | How many Pod copies to run. Default is 1.
spec.selector      | Required | Labels to find which Pods belong to this Deployment. Immutable after creation.
spec.strategy.type | Optional | RollingUpdate (default) or Recreate (kill all, then start new)
maxUnavailable     | Optional | Max Pods that can be down during an update. Use 0 for zero-downtime.
maxSurge           | Optional | Max extra Pods created during an update.
spec.template      | Required | The Pod definition, same as writing a Pod spec directly.
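The maxUnavailable/maxSurge pair is often tuned for zero-downtime rollouts; a sketch of that variant:

```yaml
# Zero-downtime variant: no Pod is removed until its replacement is Ready
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0   # never drop below the desired replica count
    maxSurge: 1         # allow 1 extra Pod while the update is in flight
```

This only works as intended when containers define a readinessProbe; without one, new Pods count as Ready the moment they start.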
💡 Key kubectl commands: kubectl rollout undo deployment/pathnex-deployment | kubectl rollout history deployment/pathnex-deployment | kubectl scale deployment pathnex-deployment --replicas=5
ReplicaSet (apps/v1 · Usually auto-managed)

📄 replicaset.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset
spec:
  replicas: 3                         # Keep 3 pods always running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-container
          image: nginx:latest
⚠️ Use a Deployment instead. ReplicaSets don't support rolling updates or rollbacks. Always use a Deployment, which manages ReplicaSets for you automatically.
StatefulSet (apps/v1 · Databases · Stable Identity)

- Stable Names: Pods get predictable names: pod-0, pod-1, pod-2 (not random hashes).
- Own Storage: Each Pod gets its OWN PVC that persists even if the Pod is deleted.
- Ordered Deploy: Pods start in order: 0, 1, 2. They stop in reverse: 2, 1, 0.
- Stable DNS: Each Pod gets a stable DNS hostname for peer discovery.
📄 statefulset.yaml - MySQL Example
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "mysql"               # Headless service name (required!)
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: password
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql

  volumeClaimTemplates:               # Creates unique PVC per Pod!
    - metadata:
        name: mysql-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi

# Pod names will be: mysql-0, mysql-1, mysql-2
# DNS names:  mysql-0.mysql.default.svc.cluster.local
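The serviceName field above must point at an existing headless Service; a minimal one matching this example could look like:

```yaml
# Headless Service that gives each StatefulSet Pod its stable DNS entry
apiVersion: v1
kind: Service
metadata:
  name: mysql          # must equal spec.serviceName in the StatefulSet
spec:
  clusterIP: None      # headless: no virtual IP, per-Pod DNS instead
  selector:
    app: mysql
  ports:
    - port: 3306
```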
πŸ‘οΈ
apps/v1 One per Node Infrastructure
πŸ“Š
Monitoring
Prometheus Node Exporter, Datadog agent on every node
πŸ“
Logging
Fluentd, Filebeat collect logs from every node
🌐
Networking
CNI plugins (Calico, Flannel) run as DaemonSets
πŸ”’
Security
Falco, Sysdig security agents on every node
📄 daemonset.yaml - Log Collector
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      tolerations:                     # Allow running on control-plane nodes
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.14
          volumeMounts:
            - name: varlog
              mountPath: /var/log     # Access node logs
      volumes:
        - name: varlog
          hostPath:
            path: /var/log            # Mount from the actual node
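DaemonSets also support a rolling update strategy; one common addition limits how many nodes lose their agent at once:

```yaml
# Optional: roll out new DaemonSet Pods one node at a time
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 node runs without the agent during the update
```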
Job (batch/v1 · One-time Task · Run to Completion)

📄 job.yaml - Database Migration
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
spec:
  completions: 1                      # How many Pods must succeed
  parallelism: 1                      # How many run simultaneously
  backoffLimit: 4                     # Retry 4 times if fails
  activeDeadlineSeconds: 300         # Kill if not done in 5 min

  template:
    spec:
      restartPolicy: OnFailure        # Never | OnFailure (required for Jobs)
      containers:
        - name: migration
          image: my-app:latest
          command: ["python", "manage.py", "migrate"]
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: url
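Finished Jobs linger until deleted; the ttlSecondsAfterFinished field lets the cluster clean them up automatically:

```yaml
# Auto-cleanup: delete the Job and its Pods 10 minutes after it completes or fails
spec:
  ttlSecondsAfterFinished: 600
```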
CronJob (batch/v1 · Scheduled)

📄 cronjob.yaml - Daily Backup at 2am
apiVersion: batch/v1
kind: CronJob
metadata:
  name: daily-backup
spec:
  schedule: "0 2 * * *"              # Every day at 2:00 AM
  #           ┬ ┬ ┬ ┬ ┬
  #           │ │ │ │ └─ Day of week (0-7; 0 and 7 are Sunday)
  #           │ │ │ └─── Month (1-12)
  #           │ │ └───── Day of month (1-31)
  #           │ └─────── Hour (0-23)
  #           └───────── Minute (0-59)

  successfulJobsHistoryLimit: 3      # Keep last 3 successful jobs
  failedJobsHistoryLimit: 1          # Keep last 1 failed job
  concurrencyPolicy: Forbid         # Don't run if previous still running

  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: my-backup-tool:latest
              command: ["sh", "-c", "backup.sh"]
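Cron schedules are interpreted in the kube-controller-manager's time zone by default; on newer clusters you can pin one explicitly:

```yaml
spec:
  schedule: "0 2 * * *"
  timeZone: "America/New_York"   # IANA zone name; requires Kubernetes 1.27+ (stable)
```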
Service (v1 · 4 Types · Load Balancer)

4 Service Types

📄 Type 1: ClusterIP - Internal Only (Default)
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP                     # Only accessible inside cluster
  selector:
    app: pathnex                     # Forwards to Pods with this label
  ports:
    - port: 80                        # Service port (what others call)
      targetPort: 8080               # Pod port (what app listens on)
📄 Type 2: NodePort - Access via Node IP
spec:
  type: NodePort                      # Accessible via NodeIP:NodePort
  selector:
    app: pathnex
  ports:
    - port: 80                        # ClusterIP port
      targetPort: 8080               # Pod port
      nodePort: 30080                # External port (30000-32767)
# Access: http://any-node-ip:30080
📄 Type 3: LoadBalancer - Cloud External IP
spec:
  type: LoadBalancer                  # Cloud creates external load balancer
  selector:
    app: pathnex
  ports:
    - port: 80
      targetPort: 8080
# AWS/GCP/Azure creates a real external IP automatically
📄 Type 4: Headless - No ClusterIP (for StatefulSets)
spec:
  clusterIP: None                     # No virtual IP β€” direct Pod DNS
  selector:
    app: mysql
  ports:
    - port: 3306
# Access: mysql-0.mysql.default.svc.cluster.local
# Each Pod gets its own DNS entry. Used with StatefulSets.
Type         | Accessible From     | Use Case
ClusterIP    | Inside cluster only | Internal microservice communication
NodePort     | Node IP + port      | Development, on-premise clusters
LoadBalancer | External internet   | Production apps on cloud (AWS/GCP/Azure)
Headless     | Direct Pod DNS      | StatefulSets, databases, peer discovery
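Ports can also be named, letting a Service track a container port by name instead of number, which is handy when different Pods behind one Service listen on different numbers:

```yaml
# In the Pod spec: give the port a name
ports:
  - name: http
    containerPort: 8080
---
# In the Service spec: reference the name instead of 8080
ports:
  - port: 80
    targetPort: http   # resolved per Pod via the named containerPort
```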
Ingress (networking.k8s.io/v1 · HTTP Routing · TLS/HTTPS)

Internet (user browser) → Ingress (routes by host/path) → Service A (api.myapp.com) + Service B (/admin path)
📄 ingress.yaml - Production with TLS
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx            # Which ingress controller to use

  tls:                                # HTTPS configuration
    - hosts:
        - myapp.com
        - api.myapp.com
      secretName: myapp-tls          # TLS cert stored here

  rules:
    - host: myapp.com                # Domain-based routing
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80

    - host: api.myapp.com            # Subdomain routing
      http:
        paths:
          - path: /v1               # Path-based routing
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080
NetworkPolicy (networking.k8s.io/v1 · Security)

📄 networkpolicy.yaml - Only allow frontend → backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:                        # Apply rules TO these Pods
    matchLabels:
      app: backend

  policyTypes:
    - Ingress                         # Control incoming traffic
    - Egress                          # Control outgoing traffic

  ingress:                            # Who can SEND traffic to backend
    - from:
        - podSelector:              # Only from frontend pods
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080

  egress:                             # Where backend can SEND traffic
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - port: 5432
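A common companion is a default-deny policy: an empty podSelector matches every Pod in the namespace, and listing Ingress with no rules blocks all incoming traffic:

```yaml
# Default-deny: block all ingress to every Pod in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}        # empty selector = all Pods in this namespace
  policyTypes:
    - Ingress            # no ingress rules listed, so nothing is allowed in
```

Note that NetworkPolicies only take effect if the cluster's CNI plugin (Calico, Cilium, etc.) enforces them.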
ConfigMap (v1 · Non-sensitive Config Data)

📄 configmap.yaml - All Usage Patterns
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  # Simple key-value pairs
  APP_ENV: "production"
  LOG_LEVEL: "info"
  MAX_CONNECTIONS: "100"

  # Entire config file as a value
  nginx.conf: |
    server {
      listen 80;
      location / {
        proxy_pass http://backend:8080;
      }
    }

  app.properties: |
    database.host=postgres
    database.port=5432
    cache.ttl=3600

---
# HOW TO USE IN A POD:
spec:
  containers:
    - name: app
      image: my-app:latest

      # Option 1: Single key as env var
      env:
        - name: APP_ENV
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: APP_ENV

      # Option 2: All keys as env vars
      envFrom:
        - configMapRef:
            name: app-config

      # Option 3: Mount as files
      volumeMounts:
        - name: config
          mountPath: /etc/nginx

  volumes:
    - name: config
      configMap:
        name: app-config             # nginx.conf becomes /etc/nginx/nginx.conf
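ConfigMaps can also be marked immutable (stable since Kubernetes 1.21), which prevents accidental edits and reduces API-server watch load:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v2   # version the name; immutable objects must be replaced, not edited
immutable: true
data:
  APP_ENV: "production"
```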
Secret (v1 · Sensitive Data · Base64)

📄 secret.yaml - All Secret Types
# Type 1: Generic secret (most common)
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque                         # Generic type
data:                                 # Values MUST be base64 encoded
  username: YWRtaW4=                 # echo -n 'admin' | base64
  password: cGFzc3dvcmQxMjM=        # echo -n 'password123' | base64

---
# Type 2: TLS secret (for HTTPS)
apiVersion: v1
kind: Secret
metadata:
  name: myapp-tls
type: kubernetes.io/tls
data:
  tls.crt: <base64-cert>
  tls.key: <base64-key>

---
# HOW TO USE IN POD:
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-secret
        key: password
🔴 Base64 is NOT encryption! Anyone with cluster access can decode it. In production: enable etcd encryption at rest, use external secret managers like HashiCorp Vault or AWS Secrets Manager, or use the External Secrets Operator.
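If hand-encoding base64 is error-prone, the stringData field accepts plain text and the API server encodes it on write. It still ends up base64-encoded under .data, so the warning above applies unchanged:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
stringData:                # plain text; stored base64-encoded under .data
  username: admin
  password: password123
```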
PersistentVolume (v1 · Cluster-wide · Admin Provisions)

PV (admin creates) ←binds→ PVC (user requests) ←mounts→ Pod (uses storage)
📄 persistentvolume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi                   # Total disk size

  accessModes:                       # How Pods can access it
    - ReadWriteOnce                   # One node reads+writes (typical)
    # - ReadOnlyMany     # Many nodes read-only
    # - ReadWriteMany    # Many nodes read+write (NFS)

  reclaimPolicy: Retain             # Retain|Delete|Recycle
  storageClassName: standard        # Must match PVC's storageClass

  hostPath:                          # Local disk (dev only)
    path: /mnt/data

  # For AWS EBS:
  # awsElasticBlockStore:
  #   volumeID: vol-xxxx
  #   fsType: ext4

  # For NFS:
  # nfs:
  #   path: /exports/data
  #   server: nfs-server.example.com
PersistentVolumeClaim (v1 · User Requests)

📄 pvc.yaml + usage in Pod
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard        # Must match PV or StorageClass
  resources:
    requests:
      storage: 5Gi                  # Request 5Gi of storage

---
# USE IN A POD:
spec:
  containers:
    - name: app
      volumeMounts:
        - mountPath: /data
          name: storage
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: my-pvc           # Reference the PVC
πŸ—οΈ
storage.k8s.io/v1 Dynamic Provisioning
πŸ“„ storageclass.yaml β€” AWS EBS SSD
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: kubernetes.io/aws-ebs  # Legacy in-tree plugin; newer clusters use the CSI driver ebs.csi.aws.com
parameters:
  type: gp3                          # AWS EBS type (SSD)
  fsType: ext4
reclaimPolicy: Delete              # Delete disk when PVC deleted
allowVolumeExpansion: true         # Allow resizing volumes

---
# GCP example:
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd

---
# Azure example:
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Premium_LRS
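One more commonly used field is volumeBindingMode, which delays provisioning until a Pod actually schedules, so the disk is created in the zone where the Pod lands:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd-topology
provisioner: ebs.csi.aws.com             # AWS EBS CSI driver
volumeBindingMode: WaitForFirstConsumer  # bind when a Pod is scheduled, not at PVC creation
reclaimPolicy: Delete
```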
Namespace (v1 · Isolation)

📄 namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    env: prod
    team: platform

---
apiVersion: v1
kind: Namespace
metadata:
  name: staging

# Default namespaces created by Kubernetes:
# default          - Where resources go if no namespace specified
# kube-system      - Kubernetes system components
# kube-public      - Publicly readable resources
# kube-node-lease  - Node heartbeat tracking
ServiceAccount (v1 · Pod Identity)

📄 serviceaccount.yaml + usage
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-sa
  namespace: production

---
# USE IN A POD:
spec:
  serviceAccountName: my-app-sa    # Pod uses this identity
  containers:
    - name: app
      image: my-app:latest
# Token automatically mounted at:
# /var/run/secrets/kubernetes.io/serviceaccount/token
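If a workload never talks to the Kubernetes API, you can skip the token mount entirely and shrink the attack surface:

```yaml
spec:
  serviceAccountName: my-app-sa
  automountServiceAccountToken: false   # no token file mounted into containers
```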
Role & RoleBinding (rbac.authorization.k8s.io/v1 · Permissions · Security)

📄 role.yaml + rolebinding.yaml
# ROLE: What actions are allowed
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: production             # Only in this namespace
rules:
  - apiGroups: [""]                 # "" = core API group
    resources: ["pods", "pods/log"]  # Which resources
    verbs: ["get", "list", "watch"]  # What actions allowed
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "update"]

---
# ROLEBINDING: Assign Role to someone
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods-binding
  namespace: production
subjects:                            # Who gets this role
  - kind: ServiceAccount
    name: my-app-sa
    namespace: production
  - kind: User
    name: jane@company.com
roleRef:                             # Which role to bind
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

# All verbs: get, list, watch, create, update, patch, delete
# ClusterRole = same but cluster-wide (no namespace)
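For cluster-wide permissions the shape is identical, just with ClusterRole/ClusterRoleBinding and no namespace; a sketch:

```yaml
# ClusterRole: like a Role, but not scoped to a namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
  - apiGroups: [""]
    resources: ["nodes"]            # nodes are cluster-scoped, so a Role can't grant this
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-nodes-binding
subjects:
  - kind: ServiceAccount
    name: my-app-sa
    namespace: production
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io
```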
HorizontalPodAutoscaler (autoscaling/v2 · Auto-scaling · CPU/Memory)

📄 hpa.yaml - CPU + Memory Scaling
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pathnex-hpa
spec:
  scaleTargetRef:                    # What to scale
    apiVersion: apps/v1
    kind: Deployment
    name: pathnex-deployment

  minReplicas: 2                    # Never go below 2
  maxReplicas: 10                   # Never go above 10

  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # Scale up if CPU > 70%

    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80   # Scale up if memory > 80%

# Flow: 2 pods → traffic spike → CPU > 70% → HPA adds pods → up to 10
# Requires: metrics-server installed in cluster
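autoscaling/v2 also exposes a behavior section to damp flapping; a common pattern is to scale up fast but scale down slowly:

```yaml
spec:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # wait 5 min of low usage before removing Pods
      policies:
        - type: Pods
          value: 1                     # then remove at most 1 Pod per minute
          periodSeconds: 60
```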
ResourceQuota (v1 · Namespace Limits)

📄 resourcequota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    # Compute
    requests.cpu: "4"               # Total CPU requests in namespace
    requests.memory: 8Gi
    limits.cpu: "8"                 # Total CPU limits in namespace
    limits.memory: 16Gi

    # Object counts
    pods: "20"                      # Max 20 Pods
    services: "10"                  # Max 10 Services
    secrets: "20"
    configmaps: "20"
    persistentvolumeclaims: "5"

---
# LimitRange: Default limits per Pod (companion to ResourceQuota)
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: production
spec:
  limits:
    - type: Container
      default:                      # Applied if not specified
        cpu: 200m
        memory: 256Mi
      defaultRequest:
        cpu: 100m
        memory: 128Mi
🎓 You've completed the Kubernetes YAML Master Guide! You now know all 20 resource types. Real mastery comes from practice: try deploying a complete app using Deployment + Service + Ingress + ConfigMap + Secret + HPA.