📋 Pre-Development Checklist

Items to verify before developing a new API endpoint.

■ Design
  • API specification (OpenAPI) has been created
  • Input parameter types and constraints have been defined
  • Fields to include in responses have been clearly defined
  • Error response format is standardized
  • Rate limiting requirements have been established
  • Handling of personal and sensitive data has been confirmed

■ Authentication & Authorization
  • Authentication method has been determined (OAuth 2.0, JWT, API Key)
  • Required permissions (scopes/roles) have been defined for each endpoint
  • Resource-level authorization checks have been designed (BOLA countermeasure)
  • Admin functions are clearly separated
  • Token expiration and refresh methods have been determined

■ Data Protection
  • Communication uses TLS 1.2 or higher
  • Encryption method for stored data has been determined
  • Masking and anonymization policies for personal information have been determined
  • Sensitive information (tokens, passwords) is not included in logs
  • Debug information and stack traces are not included in API responses

👀 Code Review Perspectives

Security perspectives to check during API code reviews.

Check Item | Importance | Explanation
Authentication middleware is applied to all endpoints | Required | Ensure no authentication checks are missing except for public APIs
Object-level authorization checks exist | Required | Verify requested resources belong to the requesting user
Admin functions have role verification | Required | Ensure regular users cannot call admin APIs
Algorithm is specified during JWT verification | Required | Prevents alg: none attacks
Token expiration is properly configured | Recommended | Access token: within 15 minutes
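The JWT algorithm check above is easy to get wrong, so here is a minimal stdlib-only sketch of the idea (a production service would normally use a maintained JWT library instead); the secret and claims are illustrative:

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(segment: str) -> bytes:
    # JWTs use unpadded base64url; restore the padding before decoding
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def verify_hs256(token: str, secret: bytes) -> dict:
    """Verify an HS256 JWT, rejecting every other algorithm (including alg: none)."""
    header_b64, payload_b64, signature_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    # Pin the algorithm: never trust the value the token itself declares
    if header.get("alg") != "HS256":
        raise ValueError(f"unexpected alg: {header.get('alg')!r}")
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(signature_b64)):
        raise ValueError("signature mismatch")
    return json.loads(b64url_decode(payload_b64))
```

The key point is that the allowed algorithm is fixed in the verifier, not read from the token header.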
Check Item | Importance | Explanation
Request body has schema validation | Required | Validation of type, length, and format
SQL queries are parameterized | Required | Ensure queries are not built with string concatenation
Path parameter format validation exists | Required | Check for expected format such as UUID
Pagination parameters have upper limits | Recommended | Prevent excessive requests like limit=999999
File uploads have type and size restrictions | Recommended | Validation of Content-Type and file size
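The path-parameter and pagination rows can be sketched as two small guards; the maximum (100) and default (20) below reuse the pagination values from the API design policy later in this document:

```python
import uuid

MAX_PAGE_SIZE = 100      # policy maximum from this document
DEFAULT_PAGE_SIZE = 20   # policy default

def parse_resource_id(raw: str) -> uuid.UUID:
    """Reject path parameters that are not well-formed UUIDs."""
    try:
        return uuid.UUID(raw)
    except ValueError:
        raise ValueError(f"invalid resource id: {raw!r}") from None

def clamp_limit(raw, default: int = DEFAULT_PAGE_SIZE) -> int:
    """Clamp the pagination limit so limit=999999 cannot force a huge scan."""
    try:
        value = int(raw)
    except (TypeError, ValueError):
        return default
    return max(1, min(value, MAX_PAGE_SIZE))
```

Clamping (rather than rejecting) oversized limits is one reasonable choice; returning a 400 is equally valid as long as the cap is enforced somewhere.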
Check Item | Importance | Explanation
Responses do not contain unnecessary fields | Required | Prevent leakage of password_hash, internal IDs, etc.
Error responses do not contain stack traces | Required | Do not expose internal information in production
Security headers are configured | Recommended | X-Content-Type-Options, HSTS, etc.
Content-Type is correctly set | Recommended | Explicitly specify application/json
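A simple way to satisfy the "no unnecessary fields" row is allowlist serialization; the field set below is hypothetical:

```python
# Hypothetical public field set for a user resource
USER_PUBLIC_FIELDS = {"id", "name", "email", "created_at"}

def serialize_user(row: dict) -> dict:
    """Allowlist serialization: fields such as password_hash can never leak,
    even if new columns are later added to the underlying query."""
    return {key: value for key, value in row.items() if key in USER_PUBLIC_FIELDS}
```

An allowlist fails closed: a newly added column stays private until someone deliberately exposes it, which is the safer default.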
Check Item | Importance | Explanation
Rate limiting is applied | Required | Especially for authentication and payment endpoints
Appropriate logs are being recorded | Recommended | Traceable: who did what and when
Secrets are not hardcoded | Required | Use environment variables or secret managers
CORS settings are appropriate | Recommended | Avoid using wildcards
Dependencies have no known vulnerabilities | Recommended | Check results of npm audit / Snyk
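The rate-limiting row can be prototyped with an in-memory sliding window; this single-process sketch is illustrative only (multi-instance deployments typically use Redis or an API gateway):

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """In-memory sliding-window limiter; suitable for a single process only."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.hits = defaultdict(deque)  # key -> timestamps of recent requests

    def allow(self, key: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        while q and now - q[0] >= self.window:
            q.popleft()  # drop hits that have aged out of the window
        if len(q) >= self.limit:
            return False  # over the limit: caller should return 429
        q.append(now)
        return True
```

Keyed by user ID or source IP; the auth-endpoint default from the policy below (5 requests per 15 minutes) would be `SlidingWindowLimiter(5, 900)`.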

📜 Security Policy Templates

Templates for basic policies in internal API development. Customize to fit your project.

1. Authentication Policy

■ Authentication Methods
  - User-facing APIs: OAuth 2.0 (Authorization Code + PKCE)
  - Server-to-server APIs: OAuth 2.0 (Client Credentials) or mTLS
  - External integration APIs: API Key + HMAC signature

■ Token Management
  - Access token expiration: 15 minutes
  - Refresh token expiration: 7 days (rotation required)
  - Token storage: HttpOnly + Secure + SameSite Cookie

■ Password Policy
  - Minimum 12 characters, at least one uppercase, lowercase, number, and symbol
  - Hash with bcrypt (cost factor 12 or higher)
  - Password list attack protection (Have I Been Pwned API integration)
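The character-class rules in the policy above fit in a few lines of stdlib code; bcrypt hashing and the breach-list check are separate steps not shown here:

```python
import re

def meets_password_policy(password: str) -> bool:
    """Minimum 12 characters with at least one uppercase letter, lowercase
    letter, digit, and symbol, per the policy above."""
    return (
        len(password) >= 12
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )
```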

2. API Design Policy

■ Versioning
  - Version management via URL path: /api/v1/resources
  - Minimum 6-month support period for old versions
  - Notify deprecation via Deprecation header

■ Rate Limiting (Default Values)
  - General APIs: 100 requests/15 min
  - Auth APIs: 5 requests/15 min
  - Public APIs: 30 requests/min
  - Always return RateLimit-* headers

■ Responses
  - Content-Type: application/json (fixed)
  - Error responses in Problem Details format (RFC 9457, which obsoletes RFC 7807)
  - Never return stack traces in production
  - Pagination: default 20 items, max 100 items
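A small helper can produce the Problem Details shape required above; the `instance` value here is illustrative:

```python
import json

def problem_response(status: int, title: str, detail: str, instance: str):
    """Build an RFC 7807 / RFC 9457 Problem Details error response.
    `detail` must be safe for clients: never a stack trace."""
    body = {
        "type": "about:blank",  # or a URL that documents this error type
        "title": title,
        "status": status,
        "detail": detail,
        "instance": instance,
    }
    headers = {"Content-Type": "application/problem+json"}
    return status, headers, json.dumps(body)
```

Note the dedicated media type `application/problem+json`, which the spec requires instead of plain `application/json`.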

3. Logging & Audit Policy

■ Required Log Fields
  - Timestamp (ISO 8601, UTC)
  - Request ID (UUID for traceability)
  - User ID / API Client ID
  - HTTP Method + Endpoint
  - Status Code
  - Source IP
  - Response Time

■ Prohibited Log Fields (Sensitive Data)
  - Passwords / Tokens / API Keys
  - Credit card numbers
  - Full PII display (masking required)

■ Monitoring Alerts
  - Auth failures: Alert at 5+/min
  - 403 errors: Alert at 10+/min
  - 500 errors: Immediate alert on 1 occurrence
  - Rate limit exceeded: Pattern analysis
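The required and prohibited log fields above can be combined into one structured-log helper; the sensitive-key list below is a starting point, not exhaustive:

```python
import json
import uuid
from datetime import datetime, timezone

SENSITIVE_KEYS = {"password", "token", "api_key", "authorization"}  # extend as needed

def mask_sensitive(params: dict) -> dict:
    return {k: ("***" if k.lower() in SENSITIVE_KEYS else v) for k, v in params.items()}

def audit_log_entry(user_id, method, endpoint, status, source_ip, response_ms, params):
    """One JSON line containing the required fields; sensitive values are masked."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),  # ISO 8601, UTC
        "request_id": str(uuid.uuid4()),                      # for traceability
        "user_id": user_id,
        "method": method,
        "endpoint": endpoint,
        "status": status,
        "source_ip": source_ip,
        "response_time_ms": response_ms,
        "params": mask_sensitive(params),
    })
```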

🚨 Incident Response Flow

Phase 1: Detection (0-30 minutes)

Detect incidents through monitoring alerts, user reports, or external notifications. Determine severity through initial triage.

Phase 2: Containment (30 minutes - 2 hours)

Prevent damage escalation. Invalidate API keys, temporarily suspend affected endpoints, and identify the scope of impact.

Phase 3: Eradication & Recovery (2-24 hours)

Fix vulnerabilities, apply patches, and restore services. Notify affected users.

Phase 4: Post-Incident Analysis (1-5 business days)

Conduct a postmortem. Perform root cause analysis, develop recurrence prevention measures, and document findings.

🤖 AI / LLM Security Checklist

Additional checklist items for systems integrating AI models and LLM-powered features.

■ LLM Integration
  • Model provider has been evaluated for security and compliance (SOC 2, data processing agreement)
  • System prompts are stored securely and not exposed to end users
  • Token budget and cost limits are configured per request and per user
  • Data classification policy defines what data can be sent to external LLM APIs
  • LLM outputs are validated and sanitized before rendering or downstream processing
  • Fallback behavior is defined for model unavailability or degraded responses
  • Model version pinning strategy is documented to prevent unexpected behavior changes

■ AI Agents
  • Agent permissions follow least privilege principle — only necessary tools and APIs are accessible
  • Tool/function calling uses an explicit allowlist (not a denylist)
  • Human-in-the-loop approval is required for high-impact actions (payments, deletions, external communications)
  • Agent memory scope is bounded — conversation history does not leak across tenants or sessions
  • Inter-agent communication is authenticated and uses signed messages
  • Goal drift detection is implemented — agents are monitored for deviation from intended objectives
  • Emergency kill switch exists to halt agent execution immediately

■ Model & Training Data Supply Chain
  • Training data provenance is documented with lineage tracking
  • Data integrity checks exist for training and fine-tuning datasets (hash verification)
  • Model artifacts are versioned and stored in a tamper-proof registry
  • Model drift monitoring is in place to detect performance degradation
  • Third-party models and embeddings have been evaluated for known vulnerabilities
  • PII and sensitive data scrubbing is applied to training datasets
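The tool-allowlist item can be sketched as a per-role lookup that denies by default; the roles and tool names here are hypothetical:

```python
# Hypothetical roles and tool names; real systems would load this from config
ALLOWED_TOOLS = {
    "viewer": {"search_docs"},
    "analyst": {"search_docs", "run_query"},
    "admin": {"search_docs", "run_query", "send_email"},
}

def authorize_tool_call(role: str, tool_name: str) -> None:
    """Explicit allowlist: a tool not listed for the role is refused, so any
    newly added tool is denied by default rather than allowed by omission."""
    if tool_name not in ALLOWED_TOOLS.get(role, set()):
        raise PermissionError(f"role {role!r} may not call tool {tool_name!r}")
```

This is the allowlist-over-denylist principle from the checklist: unknown roles and unknown tools both fail closed.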

🧐 AI / LLM Code Review Perspectives

Security check items specific to code that integrates LLMs, AI agents, and ML models.

Check Item | Importance | Explanation
System prompts are not exposed in client-side code or API responses | Required | Prompt leakage enables targeted prompt injection attacks
LLM outputs are validated before rendering (HTML/JS/SQL) | Required | LLM-generated content may contain XSS payloads or injection vectors
User input and system instructions are clearly separated in prompts | Required | Prevents direct prompt injection by maintaining the instruction-data boundary
Tool/function call permissions are scoped per user role | Required | Prevents privilege escalation through agent tool access
Token count limits are enforced per request and per session | Recommended | Prevents cost explosion and denial-of-wallet attacks
RAG retrieval results are sanitized before insertion into prompts | Required | Retrieved documents may contain indirect prompt injection payloads
Agent actions and tool calls are logged with a full audit trail | Recommended | Essential for incident investigation and compliance in AI systems
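For the output-validation row, the minimum safe default when rendering model text into HTML is escaping; JS, SQL, and other sink contexts each need their own encoding on top of this:

```python
import html

def render_llm_output(text: str) -> str:
    """Treat model output like untrusted user input: HTML-escape it so an
    injected <script> payload is displayed as text, not executed."""
    return html.escape(text)
```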

📑 AI / LLM Security Policy Templates

Policy templates for organizations deploying AI models and LLM-powered applications.

4. AI Model Access Control Policy

■ Model Access Tiers
  - Tier 1 (Restricted): GPT-4 class / fine-tuned models — requires team lead approval
  - Tier 2 (Standard): GPT-3.5 class / embeddings — available to all developers
  - Tier 3 (Open): Open-source models (local inference) — no approval required

■ API Key Management for LLM Providers
  - One API key per service/environment (never share across projects)
  - Monthly spend alerts at 50%, 80%, 100% of budget
  - Hard spending caps enforced at the provider level
  - Key rotation: every 90 days or immediately upon team member departure

■ Data Sent to External Models
  - NEVER send: PII, credentials, internal IP, source code, customer data
  - ALLOWED with review: anonymized logs, public documentation, synthetic data
  - All prompts to external APIs must be logged (excluding PII)
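The "NEVER send" rule above benefits from an automated redaction pass before a prompt leaves the service; the regex patterns below are illustrative and no substitute for a real DLP/classification step:

```python
import re

# Illustrative patterns only; real deployments need proper data classification
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card": re.compile(r"\b\d{13,16}\b"),
}

def redact_for_llm(prompt: str) -> str:
    """Redact obvious PII before a prompt is sent to an external model API."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} redacted]", prompt)
    return prompt
```

The redacted form is also what should be written to the prompt log, satisfying the "logged (excluding PII)" requirement with a single code path.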

5. AI Data Handling Policy

■ Training Data Requirements
  - All training data must have documented provenance and licensing
  - PII must be removed or anonymized before use in training/fine-tuning
  - Data poisoning checks: validate data integrity with hash verification
  - Retain training data snapshots for reproducibility and audit

■ RAG (Retrieval-Augmented Generation) Data
  - Document ingestion pipeline must sanitize content (strip scripts, injections)
  - Access control on vector store must mirror source document permissions
  - Embedding models must be versioned and pinned

■ Model Output Data
  - LLM outputs must not be trusted as authoritative — always verify facts
  - Generated code must pass the same security review as human-written code
  - Outputs containing PII must be flagged and redacted before storage

6. AI Incident Classification Policy

■ Severity Levels for AI-Specific Incidents
  - P1 (Critical): Prompt injection leading to data exfiltration or unauthorized actions
  - P1 (Critical): Model serving compromised or returning manipulated outputs
  - P2 (High): Training data poisoning detected, jailbreak bypass discovered
  - P2 (High): Agent performing unintended actions outside approved scope
  - P3 (Medium): Model drift causing degraded accuracy below threshold
  - P4 (Low): Cost overrun due to excessive token usage

■ Response Procedures
  - P1: Immediately disable affected model endpoint, notify security team
  - P2: Quarantine affected model version, roll back to last known good
  - P3: Trigger retraining pipeline, increase monitoring frequency
  - P4: Adjust rate limits and budget caps, review usage patterns

■ Post-Incident Requirements
  - Root cause analysis within 48 hours for P1/P2
  - Update prompt injection test suite with new attack vectors
  - Review and update guardrails configuration

Quick Reference: HTTP Status Codes

Security-related HTTP status codes used in API design.

Code | Meaning | Use Case
400 | Bad Request | Input validation error
401 | Unauthorized | Authentication required or token is invalid
403 | Forbidden | Authenticated but insufficient permissions
404 | Not Found | Resource does not exist (may also be returned instead of 403 to avoid revealing that a resource exists)
405 | Method Not Allowed | HTTP method not permitted
413 | Payload Too Large | Request body size exceeded
422 | Unprocessable Entity | Syntactically valid but semantically invalid
429 | Too Many Requests | Rate limit exceeded
500 | Internal Server Error | Internal server error (hide details from clients)
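The table above can be centralized in one mapping so individual handlers never improvise status codes; the internal error-code names are hypothetical:

```python
# Internal error-code names are hypothetical
STATUS_BY_ERROR = {
    "validation_error": 400,
    "authentication_required": 401,
    "insufficient_permissions": 403,
    "not_found": 404,
    "method_not_allowed": 405,
    "payload_too_large": 413,
    "semantic_error": 422,
    "rate_limited": 429,
}

def status_for(error_code: str) -> int:
    """Anything unmapped becomes a detail-free 500."""
    return STATUS_BY_ERROR.get(error_code, 500)
```

Defaulting unknown errors to 500 keeps the failure mode safe: nothing internal leaks, and the gap shows up in monitoring.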