Security Best Practices for Production AI Agents

AI agents have unique security requirements — they execute code, call APIs, and handle sensitive data. Here are the practices that actually matter.

Maritime Team · March 4, 2026 · 6 min read

AI agents aren't just chatbots. They execute code, call external APIs, access databases, and make decisions autonomously. That makes their security surface fundamentally different from traditional web applications.

The Agent Threat Model

Traditional web apps have a well-understood threat model: user input is untrusted, sanitize everything, authenticate and authorize. AI agents add new dimensions:

  • Prompt injection — Malicious input that hijacks agent behavior
  • Tool misuse — Agents calling tools in unintended ways
  • Data exfiltration — Agents leaking sensitive information through tool calls
  • Lateral movement — Compromised agents accessing other systems via shared credentials
  • Cost attacks — Adversarial inputs designed to maximize token usage and compute costs

Practice 1: Isolate Agent Execution

Every agent should run in its own isolated environment. At Maritime, each agent gets a dedicated container with:

  • No shared filesystem with other agents
  • Network policies that restrict egress to explicitly allowed domains
  • Resource limits (CPU, memory, execution time) that prevent runaway processes
  • Read-only filesystem with writable tmpfs for temporary data

This isn't optional. If an agent is compromised, isolation prevents lateral movement to other agents or infrastructure.
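The container-level controls above are enforced by the runtime, but the same idea can be sketched in-process. Below is a minimal, POSIX-only illustration (not Maritime's actual implementation) that launches an agent subprocess with hard CPU-time and memory ceilings; the function name and limit values are assumptions for the example:

```python
import resource
import subprocess

def run_agent_sandboxed(
    cmd: list[str],
    cpu_seconds: int = 30,
    mem_bytes: int = 512 * 1024 * 1024,
) -> subprocess.CompletedProcess:
    """Run an agent process under hard CPU and memory ceilings.

    A single-host sketch of the resource-limit idea; a real deployment
    layers container isolation, network egress policies, and a
    read-only filesystem on top of this.
    """
    def apply_limits():
        # Hard CPU-time ceiling: the kernel terminates the process if exceeded.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        # Cap the address space to bound memory use.
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    # preexec_fn applies the limits in the child before exec (POSIX only).
    # The wall-clock timeout backstops the CPU limit for sleeping processes.
    return subprocess.run(
        cmd, preexec_fn=apply_limits, capture_output=True, timeout=cpu_seconds * 2
    )
```

A runaway loop inside the child hits the CPU limit and is killed by the kernel, so a single misbehaving agent can't starve its neighbors.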

Practice 2: Rotate and Scope Credentials

Never give an agent more access than it needs. This means:

  • Scoped API keys — If your agent only needs to read from a database, don't give it write access
  • Short-lived tokens — Prefer tokens that expire over long-lived API keys
  • Per-agent credentials — Don't share API keys across agents, even if they access the same service
  • Encrypted storage — Secrets should be encrypted at rest and only decrypted at runtime in memory

Maritime's secrets management handles encryption and injection automatically. Secrets are never written to disk or included in container images.
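The scoping and expiry rules above can be sketched as a small credential model. This is an illustrative design, not Maritime's API; the type and function names are assumptions:

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class ScopedToken:
    """A short-lived credential bound to one agent and an explicit scope set."""
    agent_id: str
    scopes: frozenset[str]
    expires_at: float

def mint_token(agent_id: str, scopes: set[str], ttl_seconds: int = 900) -> ScopedToken:
    # Short default TTL (15 minutes): an exfiltrated token has a narrow
    # window of usefulness, unlike a long-lived API key.
    return ScopedToken(agent_id, frozenset(scopes), time.time() + ttl_seconds)

def authorize(token: ScopedToken, required_scope: str) -> bool:
    # Deny if the token has expired or the scope was never granted.
    return time.time() < token.expires_at and required_scope in token.scopes
```

An agent minted with only `{"db:read"}` passes `authorize(token, "db:read")` but fails `authorize(token, "db:write")` — write access simply doesn't exist for it to abuse.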

Practice 3: Log Everything

Every tool call, every API request, every decision your agent makes should be logged. When (not if) something goes wrong, you need an audit trail.

Critical events to log:

  • Tool invocations and their parameters
  • External API calls and responses
  • Token usage per request
  • Error states and recovery actions
  • Input/output pairs for each invocation

Maritime captures structured logs for all agent activity, queryable through the dashboard or API.
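A minimal version of structured audit logging looks like the sketch below — one JSON record per tool invocation, emitted through the standard `logging` module. The field names and helper are assumptions for illustration, not Maritime's schema:

```python
import json
import logging
import time

logger = logging.getLogger("agent.audit")

def log_tool_call(agent_id: str, tool: str, params: dict, result, tokens_used: int) -> dict:
    """Emit one structured, machine-queryable record per tool invocation."""
    record = {
        "ts": time.time(),
        "agent_id": agent_id,
        "tool": tool,
        "params": params,
        # Truncate the result so a large payload can't bloat the log stream.
        "result_preview": str(result)[:200],
        "tokens_used": tokens_used,
    }
    logger.info(json.dumps(record))
    return record
```

Because each record is a flat JSON object, the audit trail can be filtered by agent, tool, or time range after the fact — exactly what you need when reconstructing an incident.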

Practice 4: Set Resource Boundaries

AI agents can be expensive when they loop. A poorly written agent (or a clever adversarial input) can cause recursive tool calls that burn through API credits in minutes.

Set hard limits on:

  • Maximum execution time per invocation
  • Maximum token budget per request
  • Maximum number of tool calls per execution
  • Maximum concurrent invocations
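A per-invocation budget guard is one straightforward way to enforce these ceilings. The sketch below (class and limit values are illustrative assumptions) charges every tool call and token spend against hard limits and aborts the invocation when either is exceeded:

```python
class BudgetExceeded(RuntimeError):
    """Raised when an invocation exhausts its hard resource budget."""

class ExecutionBudget:
    """Track and enforce per-invocation ceilings on tool calls and tokens."""

    def __init__(self, max_tool_calls: int = 20, max_tokens: int = 50_000):
        self.max_tool_calls = max_tool_calls
        self.max_tokens = max_tokens
        self.tool_calls = 0
        self.tokens = 0

    def charge_tool_call(self) -> None:
        self.tool_calls += 1
        if self.tool_calls > self.max_tool_calls:
            raise BudgetExceeded(f"tool-call limit {self.max_tool_calls} exceeded")

    def charge_tokens(self, n: int) -> None:
        self.tokens += n
        if self.tokens > self.max_tokens:
            raise BudgetExceeded(f"token budget {self.max_tokens} exceeded")
```

The agent loop calls `charge_tool_call()` before each tool invocation, so a recursive loop fails fast with `BudgetExceeded` instead of burning API credits until someone notices the bill.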

Practice 5: Validate Tool Outputs

Your agent's tools are part of its attack surface. If a tool returns data from an external source, that data could contain prompt injection attempts.

Treat tool outputs the same way you'd treat user input in a web app — validate, sanitize, and constrain what the agent can do with the results.
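One simple shape for that validation is a sanitizer that bounds output size and screens for injection markers before the text re-enters the agent's context. The pattern list below is deliberately illustrative, not exhaustive — real deployments combine heuristics like this with stronger structural controls:

```python
import re

# Phrases that commonly signal injection attempts embedded in fetched
# content. Illustrative only; a real filter needs broader coverage.
SUSPICIOUS = re.compile(
    r"ignore (all )?previous instructions|you are now|system prompt",
    re.IGNORECASE,
)

MAX_OUTPUT_CHARS = 8_000

def sanitize_tool_output(raw: str) -> str:
    """Constrain tool output before it re-enters the agent's context."""
    # Bound length first, so a huge payload can't blow out the context
    # window (or the token budget).
    text = raw[:MAX_OUTPUT_CHARS]
    if SUSPICIOUS.search(text):
        # Replace rather than pass through: the agent sees a marker,
        # never the injected instructions themselves.
        return "[tool output withheld: possible prompt-injection content]"
    return text
```

The key design choice is failing closed: when the filter trips, the agent receives a neutral placeholder instead of attacker-controlled text.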

The Bottom Line

Security for AI agents requires the same rigor as any production system, plus additional considerations unique to autonomous agents. The practices above aren't comprehensive, but they cover the highest-impact areas.

Build with the assumption that your agent will encounter adversarial input. Design your infrastructure so that when it does, the blast radius is contained.