
LangGraph Guide

Deploy LangGraph agents on Maritime.

Overview

The LangGraph template deploys a Python container running a FastAPI server that wraps a minimal LangGraph workflow. It exposes a /run endpoint that builds a single-node StateGraph whose node calls an LLM, compiles and invokes the graph, and returns the model's output.

Docker Image

Image: maritimeai/template-langgraph:latest

Based on python:3.12-slim with the langgraph, langchain-openai, fastapi, and uvicorn packages installed. The server listens on port 8080.
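To try the image locally outside Maritime, something like the following should work, assuming Docker is installed and you have an OpenAI API key (the key value shown is a placeholder):

docker run --rm \
  -e OPENAI_API_KEY="sk-..." \
  -p 8080:8080 \
  maritimeai/template-langgraph:latest

This forwards the container's port 8080 to the host so the endpoints below are reachable at localhost:8080.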

API Endpoints

GET  /health  → {"status": "ok"}
POST /run     → {"task": "..."} → {"result": "..."}
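With the container running, the endpoints can be exercised with curl. The localhost:8080 base URL assumes a local run; substitute your deployed agent's URL on Maritime:

# Liveness check
curl http://localhost:8080/health
# → {"status":"ok"}

# Run a task through the graph
curl -X POST http://localhost:8080/run \
  -H "Content-Type: application/json" \
  -d '{"task": "Summarize the benefits of containerized agents."}'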

Source Code

main.py
from typing import TypedDict
from fastapi import FastAPI
from langchain_openai import ChatOpenAI
from langgraph.graph import END, StateGraph
from pydantic import BaseModel

app = FastAPI()

class GraphState(TypedDict):
    task: str
    result: str

class RunRequest(BaseModel):
    task: str

@app.get("/health")
def health():
    return {"status": "ok"}

@app.post("/run")
def run(req: RunRequest):
    # ChatOpenAI reads OPENAI_API_KEY from the environment.
    llm = ChatOpenAI(model="gpt-4o-mini")

    def process(state: GraphState) -> GraphState:
        # Single node: send the task text to the LLM and store its reply.
        response = llm.invoke(state["task"])
        return {"task": state["task"], "result": response.content}

    # One-node graph: entry point "process", then straight to END.
    workflow = StateGraph(GraphState)
    workflow.add_node("process", process)
    workflow.set_entry_point("process")
    workflow.add_edge("process", END)

    graph = workflow.compile()
    result = graph.invoke({"task": req.task, "result": ""})
    return {"result": result["result"]}
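Conceptually, each LangGraph node returns an update dictionary that is merged into the graph state before execution continues. A stdlib-only sketch of that merge step (it does not use langgraph itself; process here is a stand-in for the LLM node above):

from typing import TypedDict

class GraphState(TypedDict):
    task: str
    result: str

def process(state: GraphState) -> dict:
    # Stand-in for the LLM node: return only the keys being updated.
    return {"result": f"processed: {state['task']}"}

state: GraphState = {"task": "summarize the report", "result": ""}
# graph.invoke runs each node and merges its return value into the state.
state = {**state, **process(state)}
print(state["result"])  # processed: summarize the report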

Environment Variables

Set OPENAI_API_KEY in your agent's environment variables so that langchain-openai can authenticate with the OpenAI API when the graph's LLM node runs.
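langchain-openai reads the key from the process environment when the client is constructed, so a missing key only surfaces on the first /run request. A hypothetical fail-fast startup check (require_env is illustrative, not part of the template) could look like:

import os

# For illustration only: pretend the key is set so the check passes.
os.environ.setdefault("OPENAI_API_KEY", "sk-example-placeholder")

def require_env(name: str) -> str:
    # Raise at startup instead of failing on the first /run request.
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} must be set in the agent's environment")
    return value

api_key = require_env("OPENAI_API_KEY")
print(bool(api_key))  # True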

Deploy

Select Template → LangGraph Agent in the Create Agent modal. Maritime pulls the image and starts the container automatically.