The 10 most critical security risks in Kubernetes environments and how to mitigate them.
The OWASP Kubernetes Top 10 identifies the most significant security risks in Kubernetes environments. Kubernetes has become the de facto standard for container orchestration, but its flexibility and complexity introduce numerous security challenges. This guide covers workload misconfigurations, RBAC issues, secrets management, network segmentation, and more.
Insecure workload configurations are the most common Kubernetes security issue. Containers running as root, with excessive privileges, writable filesystems, or without resource limits create significant attack surfaces. Default configurations are often insecure and must be explicitly hardened.
A compromised container with root privileges and host access can escape the container sandbox, access the host filesystem, and pivot to other workloads. Privilege escalation from a misconfigured pod can lead to full cluster compromise.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: insecure-app
spec:
  containers:
    - name: app
      image: myapp:latest        # Mutable tag!
      securityContext:
        privileged: true         # Full host access!
        runAsUser: 0             # Running as root!
      volumeMounts:
        - mountPath: /host
          name: host-fs
  volumes:
    - name: host-fs
      hostPath:
        path: /                  # Entire host filesystem!
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: myapp@sha256:abc123...   # Immutable digest
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        runAsUser: 1000
        capabilities:
          drop: ["ALL"]
      resources:
        limits:
          cpu: "500m"
          memory: "256Mi"
        requests:
          cpu: "100m"
          memory: "128Mi"
```
Kubernetes RBAC (Role-Based Access Control) is powerful but complex. Overly permissive roles — especially cluster-admin bindings, wildcard permissions, and excessive service account privileges — allow unauthorized access to cluster resources, secrets, and workloads.
An attacker who gains access to an over-privileged service account can list secrets, create privileged pods, modify deployments, and escalate to cluster-admin. Wildcard RBAC rules are the Kubernetes equivalent of granting root access.
```yaml
# ClusterRoleBinding granting cluster-admin to a service account
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: app-admin-binding
subjects:
  - kind: ServiceAccount
    name: my-app                 # App SA with cluster-admin!
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin            # Full cluster control!
  apiGroup: rbac.authorization.k8s.io
```
```yaml
# Scoped Role with minimum required permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-role
  namespace: my-app-ns
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list"]          # Read-only, specific resources
    resourceNames: ["app-config"]   # Named resource only
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-role-binding
  namespace: my-app-ns
subjects:
  - kind: ServiceAccount
    name: my-app
    namespace: my-app-ns
roleRef:
  kind: Role
  name: app-role
  apiGroup: rbac.authorization.k8s.io
```
Kubernetes Secrets are only base64-encoded by default, not encrypted. Storing sensitive data (API keys, database passwords, TLS certificates) in plain ConfigMaps, environment variables, or unencrypted Secrets exposes them to anyone with API access. Secrets are also visible in etcd if encryption at rest is not enabled.
Exposed secrets allow attackers to access databases, cloud accounts, and external services. Secrets stored in etcd without encryption can be read by anyone with etcd access. Environment variable secrets are visible in pod specs and process listings.
```yaml
# Secret in a plain ConfigMap — visible to anyone with read access
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_PASSWORD: "SuperSecret123!"   # Plaintext password!
  API_KEY: "sk-live-abc123xyz"     # API key in a ConfigMap!
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: myapp:latest
      envFrom:
        - configMapRef:
            name: app-config       # All values exposed as env vars!
```
```yaml
# Use External Secrets Operator with a vault backend
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secrets
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: app-secrets
  data:
    - secretKey: db-password
      remoteRef:
        key: secret/myapp/db
        property: password
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: myapp@sha256:abc123...
      volumeMounts:
        - name: secrets
          mountPath: /etc/secrets
          readOnly: true           # Mount as read-only files
  volumes:
    - name: secrets
      secret:
        secretName: app-secrets
```
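For secrets that do remain in native Kubernetes Secrets, encryption at rest in etcd should be enabled on the API server. A minimal sketch of an `EncryptionConfiguration` — the file path and key material shown are placeholders:

```yaml
# /etc/kubernetes/enc/encryption-config.yaml (path is illustrative)
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder, generate securely
      - identity: {}   # fallback so existing plaintext secrets remain readable
```

The API server picks this up via `--encryption-provider-config`; existing secrets are only re-encrypted once rewritten (e.g. `kubectl get secrets -A -o json | kubectl replace -f -`).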
Without cluster-level policies, there are no guardrails preventing the deployment of insecure workloads. Pod Security Standards (PSS), admission controllers, and policy engines like OPA Gatekeeper or Kyverno are essential for enforcing security baselines across all namespaces.
Without policy enforcement, any developer can deploy privileged containers, use hostPath volumes, or disable security controls. A single misconfigured workload can compromise the entire cluster. Manual review cannot scale to catch all violations.
```bash
#!/bin/bash
# No admission control — any pod spec is accepted
# No Pod Security Standards configured
# Anyone can deploy a privileged container
kubectl run hacker-pod --image=alpine \
  --overrides='{"spec":{"containers":[{
    "name":"hack",
    "image":"alpine",
    "securityContext":{"privileged":true},
    "command":["nsenter","--target","1","--mount","--uts","--ipc","--net","--pid","--","bash"]
  }]}}'
# No policy blocks this — the pod now has host root access!
```
```yaml
# Enforce a security baseline with Kyverno
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: deny-privileged
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            containers:
              - securityContext:
                  privileged: "false"
    - name: require-run-as-non-root
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Containers must run as non-root."
        pattern:
          spec:
            securityContext:
              runAsNonRoot: true
```
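Pod Security Standards can also be enforced without a third-party engine, using the built-in Pod Security admission controller (Kubernetes 1.25+). A sketch using namespace labels — the namespace name is illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted   # reject pods violating the restricted profile
    pod-security.kubernetes.io/audit: restricted     # record violations in the audit log
    pod-security.kubernetes.io/warn: restricted      # warn clients at apply time
```

Built-in PSS covers the baseline and restricted profiles; a policy engine like Kyverno or Gatekeeper is still needed for custom rules (e.g. requiring image digests).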
By default, all pods in a Kubernetes cluster can communicate with each other without any restrictions. This flat network architecture means that a compromised pod can reach any other pod, service, or even the Kubernetes API server. Network Policies are essential for implementing microsegmentation.
Without network segmentation, lateral movement is trivial. A compromised frontend pod can directly access backend databases, internal services, and the metadata API (169.254.169.254). This violates the principle of least privilege at the network layer.
```yaml
# No NetworkPolicy — all pods can communicate freely
apiVersion: v1
kind: Namespace
metadata:
  name: production
# No default-deny policy
# No network policies at all
# Any pod can reach:
#   - Other pods in any namespace
#   - The Kubernetes API server
#   - The cloud metadata API (169.254.169.254)
#   - The external internet
```
```yaml
# Default deny all ingress and egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes: ["Ingress", "Egress"]
---
# Allow only frontend-to-backend communication
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - port: 8080
          protocol: TCP
```
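With default-deny egress in place, outbound traffic can be re-opened selectively. One common pattern, sketched below with illustrative labels, permits DNS and general egress while carving out the cloud metadata endpoint with an `ipBlock` exception:

```yaml
# Allow egress but exclude the cloud metadata API
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-except-metadata
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: frontend            # illustrative label
  policyTypes: ["Egress"]
  egress:
    # Allow DNS resolution
    - ports:
        - port: 53
          protocol: UDP
    # Allow all other egress except the metadata endpoint
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except: ["169.254.169.254/32"]
```

Note that `ipBlock` enforcement depends on the CNI plugin in use; tighter scoping (specific CIDRs rather than `0.0.0.0/0`) is preferable where destinations are known.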
Kubernetes control plane components (API server, etcd, kubelet, dashboard) and application services can be inadvertently exposed to the internet or untrusted networks. Exposed dashboards, unauthenticated kubelets, and publicly accessible API servers are common attack vectors.
Exposed Kubernetes components provide direct access to cluster management. An unauthenticated kubelet API allows container execution. An exposed dashboard with default credentials grants cluster-admin access. Public etcd access exposes all cluster data including secrets.
```yaml
# Kubernetes Dashboard exposed via LoadBalancer
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: LoadBalancer             # Public internet access!
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
# Dashboard bound to a cluster-admin service account
# No authentication required — full cluster access!
```
```yaml
# Dashboard accessible only via kubectl proxy
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: ClusterIP                # Internal only
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
# Access via: kubectl proxy, then browse to:
#   http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
# Or use kubectl port-forward for secure access
```
Kubernetes clusters consist of many components: API server, etcd, kubelet, kube-proxy, CoreDNS, and third-party add-ons (Ingress controllers, service meshes, monitoring). Misconfigured or unpatched components introduce vulnerabilities. Default configurations are often not hardened.
Vulnerable cluster components can be exploited for remote code execution, privilege escalation, or denial of service. CVEs in kubelet, Ingress controllers, or CNI plugins can lead to container escapes and cluster takeover. Outdated components accumulate known vulnerabilities.
```yaml
# kubelet with an insecure configuration
# /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: true                # Anyone can access the kubelet API!
  webhook:
    enabled: false               # No webhook authentication!
authorization:
  mode: AlwaysAllow              # All requests permitted!
readOnlyPort: 10255              # Unauthenticated read-only port open!
```
```yaml
# Hardened kubelet configuration
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false               # Deny anonymous access
  webhook:
    enabled: true                # Authenticate via the API server
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook                  # API server authorizes requests
readOnlyPort: 0                  # Disable the read-only port
protectKernelDefaults: true
eventRecordQPS: 5
```
Kubernetes clusters running in cloud environments (AWS, GCP, Azure) can access cloud metadata APIs and IAM roles attached to nodes. Attackers who compromise a pod can use these to escalate privileges from the cluster to the cloud account, accessing S3 buckets, databases, and other cloud services.
Node-level IAM roles are inherited by all pods on that node. A compromised pod can query the metadata API (169.254.169.254) to obtain cloud credentials, then access any cloud resource the node role permits. This enables lateral movement from Kubernetes to the broader cloud environment.
```hcl
# Node IAM role with excessive permissions
resource "aws_iam_role_policy_attachment" "node_admin" {
  role       = aws_iam_role.node_role.name
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
  # All pods on this node inherit admin access!
}

# No IMDS restrictions — any pod can fetch node credentials:
# curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
```
```hcl
# Use IAM Roles for Service Accounts (IRSA)
resource "aws_iam_role" "app_role" {
  name = "my-app-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Principal = {
        Federated = aws_iam_openid_connect_provider.eks.arn
      }
      Action = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          "${aws_iam_openid_connect_provider.eks.url}:sub" = "system:serviceaccount:my-ns:my-app"
        }
      }
    }]
  })
}
# Pod-level scoped IAM via a service account annotation —
# only that specific pod gets these specific permissions.
```
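On the Kubernetes side, the IRSA role is bound to workloads through a service account annotation; pods using this service account receive credentials scoped to that role rather than the node's. A sketch — the account ID in the ARN is a placeholder:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: my-ns
  annotations:
    # The EKS pod identity webhook injects web-identity credentials for this role
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-app-role   # placeholder account ID
```

GCP Workload Identity and Azure Workload Identity follow the same pattern of binding cloud identities to service accounts instead of nodes.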
Kubernetes supports multiple authentication mechanisms: certificates, tokens, OIDC, and webhook authentication. Broken authentication includes using static tokens, sharing kubeconfig files, not rotating certificates, and failing to integrate with identity providers. Service account token auto-mounting creates unnecessary attack surface.
Stolen or leaked kubeconfig files and service account tokens provide direct cluster access. Static tokens that never expire give persistent access. Without centralized identity management, revoking access for departed team members is difficult and error-prone.
```yaml
# Static token file for API server authentication
# /etc/kubernetes/token-auth-file.csv:
#   token,user,uid,"groups"
#   abc123,admin,1,"system:masters"   # Static token, never expires!

# Pod with an auto-mounted service account token
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  # automountServiceAccountToken defaults to true!
  # Token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token
  containers:
    - name: app
      image: myapp:latest
      # App doesn't need K8s API access but gets a token anyway
```
```yaml
# Disable the auto-mounted token for pods that don't need API access
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: my-app-ns
automountServiceAccountToken: false   # No auto-mount
---
# For pods that need API access, use bound tokens with expiry
apiVersion: v1
kind: Pod
metadata:
  name: api-consumer
spec:
  serviceAccountName: api-consumer-sa
  automountServiceAccountToken: false
  containers:
    - name: app
      image: myapp@sha256:abc123...
      volumeMounts:
        - name: token
          mountPath: /var/run/secrets/tokens
          readOnly: true
  volumes:
    - name: token
      projected:
        sources:
          - serviceAccountToken:
              expirationSeconds: 3600   # 1-hour expiry
              audience: api-server
```
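For human users, centralized identity comes from OIDC integration on the API server, so access is granted and revoked in the identity provider rather than via shared kubeconfig files. A sketch of the relevant flags — the issuer URL and client ID are placeholders:

```bash
# kube-apiserver OIDC flags (issuer URL and client ID are illustrative)
#   --oidc-issuer-url=https://login.example.com
#   --oidc-client-id=kubernetes
#   --oidc-username-claim=email
#   --oidc-groups-claim=groups
```

With groups mapped from the identity provider, RBAC bindings can target groups instead of individuals, making offboarding a single IdP operation.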
Kubernetes generates audit events for API server requests, but audit logging is often disabled or misconfigured. Without proper logging and monitoring, malicious activities — such as unauthorized secret access, RBAC changes, and container escapes — go undetected. Runtime security monitoring at the container and node level is essential.
Without audit logging, attackers operate undetected. They can create backdoor service accounts, extract secrets, and modify workloads without any record. Incident response is severely impaired when there is no audit trail to determine what happened, when, and by whom.
```bash
# API server with no audit logging configured
# kube-apiserver flags:
#   --audit-log-path=""      # No audit log!
#   --audit-policy-file=""   # No audit policy!

# No runtime security monitoring
# No alerting on suspicious activities
# Container logs not collected centrally
# Default log retention — logs lost on pod restart
```
```yaml
# Comprehensive audit policy
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log all secret access at the Metadata level
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # Log RBAC changes at the RequestResponse level
  - level: RequestResponse
    resources:
      - group: rbac.authorization.k8s.io
        resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
  # Log pod exec/attach (potential attacks)
  - level: Request
    resources:
      - group: ""
        resources: ["pods/exec", "pods/attach", "pods/portforward"]
  # Default: log everything else at the Metadata level
  - level: Metadata
```
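The policy only takes effect once the API server is pointed at it and given a log destination with sane retention. A sketch of the typical flags — paths and retention values here are illustrative:

```bash
# kube-apiserver audit flags (paths and retention values are illustrative)
#   --audit-policy-file=/etc/kubernetes/audit-policy.yaml
#   --audit-log-path=/var/log/kubernetes/audit.log
#   --audit-log-maxage=30       # days to retain rotated logs
#   --audit-log-maxbackup=10    # rotated files to keep
#   --audit-log-maxsize=100     # megabytes per file before rotation
```

Audit logs should be shipped off-node to a SIEM so an attacker with node access cannot erase the trail.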
| ID | Vulnerability | Severity | Key Mitigation |
|---|---|---|---|
| K01 | Insecure Workload Configurations | Critical | Non-root, read-only FS, drop capabilities, resource limits |
| K02 | Overly Permissive Authorization | Critical | Namespace-scoped RBAC, no wildcards, audit bindings |
| K03 | Secrets Management Failures | Critical | External secrets, encryption at rest, file mounts |
| K04 | Lack of Policy Enforcement | High | Pod Security Standards, Kyverno/OPA Gatekeeper |
| K05 | Missing Network Segmentation | High | Default-deny NetworkPolicies, microsegmentation |
| K06 | Overly Exposed Components | High | ClusterIP services, kubectl proxy, firewall rules |
| K07 | Misconfigured Cluster Components | High | CIS Benchmark, disable anonymous auth, patch management |
| K08 | Cluster-to-Cloud Lateral Movement | Critical | IRSA/Workload Identity, block metadata API, IMDSv2 |
| K09 | Broken Authentication Mechanisms | Critical | OIDC integration, bound tokens, disable auto-mount |
| K10 | Inadequate Logging and Monitoring | Medium | Audit logging, Falco/Tetragon, SIEM integration |