Writing Assembly Instructions from scratch means knowing which Robots exist, what parameters they accept, and how to wire Steps together. That is fine once you know the system, but it is a steep first ten minutes. The Template Editor now has a built-in AI chat that handles all of that for you.

Type what you want, say "resize uploaded images to 300px wide, convert to WebP, and store on S3", and the assistant, Botty, produces valid Assembly Instructions. It knows every Robot, validates its output against real schemas, and sees your linting errors, so it can iterate with you on fixing those too.
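To make that concrete, here is a sketch of the kind of Assembly Instructions such a prompt yields, written as a TypeScript object with a small wiring check. The Step and Robot names follow Transloadit conventions, but the exact parameters and the `my_s3_credentials` name are illustrative assumptions, not the assistant's literal output.

```typescript
// Step and Robot names follow Transloadit conventions; exact parameter values
// and the credentials name are illustrative, not the assistant's literal output.
interface StepMap {
  [name: string]: Record<string, unknown>;
}

const instructions: { steps: StepMap } = {
  steps: {
    ":original": { robot: "/upload/handle" },
    resized: {
      use: ":original",
      robot: "/image/resize",
      width: 300,     // resize to 300px wide
      format: "webp", // convert to WebP in the same Step
    },
    stored: {
      use: "resized",
      robot: "/s3/store",
      credentials: "my_s3_credentials", // hypothetical Template Credentials name
    },
  },
};

// Sanity check: every `use` must reference a Step defined above.
const defined = new Set(Object.keys(instructions.steps));
for (const [name, step] of Object.entries(instructions.steps)) {
  const uses = ([] as string[]).concat((step.use ?? []) as string | string[]);
  for (const u of uses) {
    if (!defined.has(u)) throw new Error(`Step "${name}" uses undefined step "${u}"`);
  }
}
console.log("steps:", Object.keys(instructions.steps).join(", "));
// → steps: :original, resized, stored
```

Wiring Steps by name like this is exactly the part the assistant automates, and the part the linter checks.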

What you can do with it

  • Generate from scratch. Describe a workflow in plain English and get a complete Template back. The assistant picks the right Robots, wires Steps together, and includes sensible defaults.
  • Modify what you have. Open an existing Template and ask for changes: "add a watermark step after resize" or "switch the storage from S3 to Google Cloud Storage." The assistant reads your current instructions and preserves steps you did not mention.
  • Iterate. Ask follow-up questions and refine the Instructions in conversation until you are satisfied with the result, and only then hit Use these Instructions.
  • Fix linting errors. The editor already shows lint diagnostics: undefined steps, missing parameters, unknown Robots. The AI chat sees those same diagnostics. You can ask "fix the linting errors" or "what does this error mean?" and get a corrected Template with an explanation.
  • Learn as you go. Not sure what /video/adaptive does or which parameters /image/resize accepts? Ask. The assistant looks up Robot documentation on the fly.

How it works under the hood

We did not bolt a generic chatbot onto the editor. The entire feature runs on Transloadit primitives that already existed, wired together in a way that keeps the model grounded.

The MCP Server provides the toolbox

The same Transloadit MCP Server that powers integrations with Claude Code, Codex, Cursor, and Gemini CLI is what gives the chat assistant its capabilities. When the model needs to discover Robots, read parameter docs, or lint a draft, it calls MCP tools:

  • transloadit_list_robots — browse the full catalog of 60+ Robots
  • transloadit_get_robot_help — fetch parameter documentation for a specific Robot
  • transloadit_lint_assembly_instructions — validate a set of Assembly Instructions against real schemas
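
When the model invokes one of these tools, the MCP client wraps the call in a standard JSON-RPC 2.0 `tools/call` request. A minimal sketch: the envelope shape comes from the MCP spec, but the `instructions` argument name is an assumption for illustration.

```typescript
// The JSON-RPC 2.0 `tools/call` envelope is defined by the MCP spec;
// the `instructions` argument name is an assumption for illustration.
const lintRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "transloadit_lint_assembly_instructions",
    arguments: {
      instructions: {
        steps: {
          resized: { use: ":original", robot: "/image/resize", width: 300 },
        },
      },
    },
  },
};
console.log(lintRequest.method, lintRequest.params.name);
```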

On top of that, every request includes the user's current Template and linting diagnostics when relevant, as well as bundled docs on Assembly Instructions, Assembly Variables, and Dynamic Evaluation. The model is asked to preserve existing steps and expressions, adapt only what the user requested, and return the full updated JSON.

The proposed instructions then run through linting and the type schemas we invested so much effort in while shipping Node SDK v4 (an investment that is now paying dividends), with retries when issues surface, before anything is handed back. This drastically reduces the hallucinations you would still get from a general-purpose LLM without this grounding and validation loop.
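That validation loop can be sketched as follows. Here `draft` stands in for the model call and `lint` for the MCP lint tool; neither is a real Transloadit API, and the retry budget is an assumption.

```typescript
// `draft` stands in for the model call, `lint` for the MCP lint tool;
// neither is a real Transloadit API.
type Diagnostic = { message: string };

function generateValidated(
  prompt: string,
  draft: (prompt: string, diags: Diagnostic[]) => object,
  lint: (instructions: object) => Diagnostic[],
  maxAttempts = 3,
): object {
  let diags: Diagnostic[] = [];
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const candidate = draft(prompt, diags); // diagnostics steer each retry
    diags = lint(candidate);
    if (diags.length === 0) return candidate; // only validated output escapes
  }
  throw new Error(
    `still invalid after ${maxAttempts} attempts: ${diags.map((d) => d.message).join("; ")}`,
  );
}
```

Feeding the diagnostics back into the next draft is what distinguishes this from simply re-asking the model and hoping for a better answer.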

The end result is that you, the user, only ever see final, validated outputs.

/ai/chat runs the conversation

The conversation itself runs through our /ai/chat Robot, which means token costs are tracked as regular Assembly usage, visible on your dashboard, and billed the same way as any other Robot. There is no separate API key or billing account to manage.

The flow, step by step

  1. You type a message in the editor chat
  2. Your current Assembly Instructions and any linting errors are included as context, as well as important documentation on Templates, Assembly Variables, etc.
  3. The message is sent to /ai/chat as an Assembly that runs inside your Workspace, which calls the model with MCP tools attached
  4. The model discovers Robots, reads parameter docs, drafts instructions, and lints them, all through MCP tool calls
  5. The validated result comes back to the editor
  6. You review and click "Use This" to apply it to your Template
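
The context gathered in steps 1 and 2 can be sketched as a single payload. The field names here are illustrative assumptions, not a documented wire format.

```typescript
// Field names are illustrative assumptions, not a documented wire format.
interface ChatTurn {
  message: string;             // step 1: what you typed
  currentInstructions: object; // step 2: the Template being edited
  lintDiagnostics: string[];   // step 2: current editor diagnostics, if any
}

const turn: ChatTurn = {
  message: "add a watermark step after resize",
  currentInstructions: {
    steps: { resized: { use: ":original", robot: "/image/resize", width: 300 } },
  },
  lintDiagnostics: [],
};
console.log(turn.message);
```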

Every round-trip is a real Assembly. You can inspect it on the Assembly detail page, see token usage, and trace exactly what happened.

You can also use your own agent

You can absolutely ask Claude Code, Codex, Cursor, Gemini CLI, or your own agent to generate Assembly Instructions too. The advantage of having this built into the Template Editor is that it sits exactly where you need it, automatically sees your current Template and linting errors, and keeps iterating until the result is valid before handing it back.

Design in the browser

The chat UI itself was built with an iterative, visual workflow. Our colleague Peter Assentorp created Design in the Browser, a harness for designing UI directly in the environment where it will actually run. Combined with Claude Code, this turned out to be a fast way to iterate: describe a change in natural language, see it render immediately, adjust, repeat. No mockups, no handoff — the design is the code.

Try it

Open any Template in your Transloadit Console and look for the AI chat at the top of the editor. Describe what you need and let it draft the instructions. The feature is available on all plans.

If you are building agent workflows outside the console, the same capabilities are available through the MCP Server, Agent Skills, or the /ai/chat Robot directly.