
Deploying CrewAI Agents to Production in 5 Minutes

A step-by-step guide to taking your CrewAI multi-agent crew from local development to a live API endpoint with webhooks and monitoring.

Maritime Team · February 25, 2026 · 4 min read

CrewAI makes it easy to build multi-agent systems. Getting them into production? That's been the hard part — until now.

The Gap Between Development and Production

If you've built a CrewAI crew locally, you know the workflow: define your agents, assign tasks, run the crew. It works great on your laptop. But moving to production means dealing with:

  • Containerization and Dockerfiles
  • Infrastructure provisioning (VMs, Kubernetes, or serverless)
  • Secrets management for API keys
  • Monitoring and log aggregation
  • Endpoint exposure and authentication
  • Scaling and cost management

That's a lot of DevOps for what should be a simple deployment.

The Maritime Approach

Maritime auto-detects CrewAI projects and handles the entire deployment pipeline. Here's the actual workflow:

Step 1: Connect Your Repository

Link your GitHub repo containing your CrewAI project. Maritime looks for the standard CrewAI project structure — agents.yaml, tasks.yaml, and your crew definition.
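If you're starting from scratch, that structure is easy to sketch by hand. A minimal skeleton is shown below; the crew name `my_crew` is a placeholder, and the `config/` subdirectory follows the standard CrewAI convention for the two YAML files:

```shell
# Minimal skeleton of the standard CrewAI layout ("my_crew" is a placeholder name)
mkdir -p src/my_crew/config
touch src/my_crew/config/agents.yaml \
      src/my_crew/config/tasks.yaml \
      src/my_crew/crew.py
ls src/my_crew/config
```

In practice, `crewai create crew <name>` scaffolds this same layout for you.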

Step 2: Configure Environment Variables

Add your API keys (OpenAI, Anthropic, Serper, etc.) through the Maritime dashboard or CLI. These are encrypted at rest and injected at runtime — never stored in your container image.

```
maritime env set OPENAI_API_KEY=sk-...
maritime env set SERPER_API_KEY=...
```
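For local development runs, a common companion pattern (not Maritime-specific) is to keep the same keys in a `.env` file and export them into your shell before testing. The values below are placeholders:

```shell
# Keep local copies of the same keys in .env (placeholder values shown);
# in production, Maritime injects the real values at runtime.
cat > .env <<'EOF'
OPENAI_API_KEY=sk-placeholder
SERPER_API_KEY=serper-placeholder
EOF

# Export everything in .env into the current shell
set -a; . ./.env; set +a
echo "$OPENAI_API_KEY"
```

Keep `.env` in your `.gitignore` so the keys never land in the repo.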

Step 3: Deploy

```
maritime deploy
```

That's it. Maritime builds your container, sets up the API endpoint, and configures sleep/wake lifecycle management. Your crew is now accessible via:

```
POST https://api.maritime.sh/v1/agents/{agent-id}/invoke
```
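A minimal invocation sketch is below. The JSON body with an `inputs` object, the bearer-token auth header, and the agent id are all assumptions for illustration; check your Maritime dashboard for the real values and payload shape:

```shell
# Build the request body; the "inputs" keys here are assumed to map to
# your crew's kickoff inputs (verify the exact shape in the Maritime docs)
cat > payload.json <<'EOF'
{"inputs": {"topic": "AI agent frameworks"}}
EOF

# Uncomment to invoke the deployed crew ($AGENT_ID and $MARITIME_API_KEY
# are placeholders for your own values):
# curl -X POST "https://api.maritime.sh/v1/agents/$AGENT_ID/invoke" \
#   -H "Authorization: Bearer $MARITIME_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d @payload.json
cat payload.json
```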

Step 4: Add Triggers (Optional)

Set up webhooks, cron schedules, or messaging integrations to trigger your crew automatically:

```
# Run every morning at 9am UTC
maritime trigger add --type cron --schedule "0 9 * * *"

# Trigger via Telegram
maritime trigger add --type telegram --bot-token $BOT_TOKEN
```

Monitoring Your Crew

Once deployed, Maritime provides real-time logs, invocation history, and resource metrics. You can see exactly which agent in your crew is executing, what tools are being called, and how long each task takes.

Cost

A CrewAI crew on the Smart tier costs $1/month for up to 1,000 invocations. Each invocation can run as long as it needs — there's no execution time limit on the Smart tier.
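At full utilization, that works out to a tenth of a cent per invocation:

```shell
# $1/month / 1,000 invocations = effective per-invocation cost at full utilization
awk 'BEGIN { printf "$%.3f per invocation\n", 1 / 1000 }'
# → $0.001 per invocation
```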

For crews that need to handle higher throughput, the Extended ($5/month) and Always-On ($10/month) tiers scale accordingly.

What About Other Frameworks?

Maritime supports LangGraph, OpenClaw, and custom Docker containers with the same workflow. The platform auto-detects your framework and applies the right build configuration.