Package and deploy an AI agent to production with logging, tracing, and health checks. Supports TrueFoundry, Modal, Railway, Fly.io, and containerized deploys. Use when the user says "deploy this agent", "put this in prod", "containerize my agent", or "add observability".
```shell
npx @senso-ai/shipables install KeyanVakil/deploy-agent
```

Get the agent running in production and keep it that way.
Before deploying, harden the agent:
Document every required environment variable in a `.env.example`.

```js
// Minimum viable logging for an agent
console.log(JSON.stringify({
  event: 'agent_run_start',
  run_id: runId,
  input_length: input.length,
  timestamp: new Date().toISOString()
}))

// Log each step
console.log(JSON.stringify({
  event: 'agent_step',
  run_id: runId,
  step: stepNumber,
  tool_called: toolName,
  tokens_used: usage.total_tokens
}))

// Log completion
console.log(JSON.stringify({
  event: 'agent_run_end',
  run_id: runId,
  success: true,
  total_steps: steps,
  total_tokens: totalTokens,
  duration_ms: Date.now() - startTime
}))
```
Use structured JSON logs — they're parseable by every logging platform.
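The three calls above share a shape. A small helper can keep them consistent; this is a sketch, and the names (`logEvent`, its parameters) are illustrative rather than part of any library:

```javascript
// Hypothetical helper wrapping the pattern above: every event gets a
// run_id and a timestamp, and any extra fields are merged in.
function logEvent(event, runId, fields = {}) {
  const entry = {
    event,
    run_id: runId,
    timestamp: new Date().toISOString(),
    ...fields
  };
  console.log(JSON.stringify(entry));
  return entry; // returned here for testing; real code would only log
}

// Usage:
logEvent('agent_step', 'run-123', { step: 1, tool_called: 'search' });
```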
Generate a Dockerfile:
```dockerfile
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "dist/index.js"]
```
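Pair the image with a `.dockerignore` so secrets and dev clutter stay out of the build context. A minimal example (adjust the entries to your layout):

```
node_modules
.git
.env
npm-debug.log
```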
TrueFoundry
- `truefoundry.yaml` with service config (replicas, CPU, memory, env vars)
- `tfy deploy` — handles image build, push, and rollout

Railway / Fly.io
- `railway up` or `fly deploy` — both auto-detect the Dockerfile

Add a `/health` route that returns `{ status: "ok", version: "..." }`. Most platforms need this for zero-downtime deploys.
Before going live:

- Disable `DEBUG=*` and verbose logging — it leaks data and kills performance
- Set `NODE_ENV=production` — many libraries behave differently in development mode