Creating a use case from scratch
The most common workflow: describe a business goal, get a production-ready use case. From your prompt:

- The `use-case-builder` agent reads the platform documentation
- It designs an architecture with the right triggers, workflows, and integrations
- It presents a plan: “I’ll create 2 workflows — a Gmail service-event trigger for new emails, and a sub-workflow for AI extraction and file handling”
- After your approval, it creates the folder structure and delegates each workflow to `n8n-workflow-builder`
- It validates everything and shows you the complete use case
Tips for this prompt:

- Describe the business goal, not the implementation (“monitor Gmail” not “use a Gmail trigger node”)
- Mention the integrations you need (Gmail, Sheets, Drive, Slack, etc.)
- Specify what should happen on success and on failure
- Include any user-configurable settings (“the user should be able to choose which email label to monitor”)
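Putting these tips together, a hypothetical prompt (the scenario is illustrative, not from the platform docs) might read:

```text
Build a use case that monitors Gmail for new invoices, uses AI to extract
the sender, amount, and due date, and logs each invoice to a Google Sheet.
On success, post a summary to Slack; if extraction fails, mark the email
as unprocessed and skip it. The user should be able to choose which email
label to monitor and which Slack channel to post to.
```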
Modifying an existing use case
Use this when you need to add features, change triggers, or refactor. From your prompt:

- The `use-case-modifier` agent reads the entire existing use case
- It identifies that a new workflow is needed (Slack notification) and that the main workflow needs to call it
- It presents a change plan: “I’ll add a new sub-workflow for Slack posting, update the main workflow’s success branch to call it, and add the Slack ORGCRED integration to config.ts”
- After approval, it makes targeted edits and delegates the new workflow to `n8n-workflow-builder`
- It validates to ensure nothing was broken
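The change plan above touches config.ts. The platform's actual schema isn't shown in this guide, so as a rough sketch only (every field name here is an assumption), declaring the new Slack ORGCRED integration might look something like:

```typescript
// Hypothetical config.ts fragment. Field names are illustrative only;
// consult the platform documentation for the real schema.
export const config = {
  integrations: [
    // existing integrations (Gmail, Sheets, ...) stay as-is
    {
      service: "slack",
      credentialType: "ORGCRED", // org-level credential, as in the change plan
    },
  ],
};
```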
Building a single workflow
For when you need one workflow, not a full use case. From your prompt:

- The `n8n-workflow-builder` agent reads the HTTP trigger guide and integration guides
- It creates a complete workflow JSON with the correct trigger, Codika Init, business logic, and the Submit Result/Report Error pattern
- It handles all placeholder usage, credential configuration, and node positioning
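For orientation, n8n stores a workflow as JSON with `nodes` and `connections` arrays. A skeleton of the pattern described above might look like the following; the Codika-specific node type names are invented for illustration, and real node types, parameters, and credentials are omitted:

```json
{
  "name": "example-http-workflow",
  "nodes": [
    { "name": "HTTP Trigger",   "type": "n8n-nodes-base.webhook", "position": [0, 0] },
    { "name": "Codika Init",    "type": "codika-init",            "position": [220, 0] },
    { "name": "Business Logic", "type": "n8n-nodes-base.code",    "position": [440, 0] },
    { "name": "Submit Result",  "type": "codika-submit-result",   "position": [660, -80] },
    { "name": "Report Error",   "type": "codika-report-error",    "position": [660, 80] }
  ],
  "connections": {
    "HTTP Trigger":   { "main": [[{ "node": "Codika Init",    "type": "main", "index": 0 }]] },
    "Codika Init":    { "main": [[{ "node": "Business Logic", "type": "main", "index": 0 }]] },
    "Business Logic": { "main": [[{ "node": "Submit Result",  "type": "main", "index": 0 }]] }
  }
}
```

This is why you don't need to specify node positioning or IDs in your prompt: the agent emits this structure for you.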
Testing and debugging
After building (or when something breaks), use the tester. From your prompt:

- The `use-case-tester` agent verifies and deploys the use case
- It triggers each HTTP workflow with test data constructed from `inputSchema`
- On success, it verifies outputs match `outputSchema`
- On failure, it fetches the execution trace, diagnoses the error, fixes the workflow, and redeploys
- It repeats up to 5 times until all workflows pass
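To make the schema-driven steps concrete, here is a minimal sketch of how a tester could derive test data from an `inputSchema` and check outputs against an `outputSchema`. The schema shape and helper names are assumptions for illustration, not the platform's actual format:

```typescript
// Illustrative only: a JSON-Schema-like shape with per-property examples.
type Schema = {
  properties: Record<string, { type: string; example?: unknown }>;
  required?: string[];
};

// Build a test payload from each property's example, falling back to a
// type default when no example is given.
function buildTestData(schema: Schema): Record<string, unknown> {
  const defaults: Record<string, unknown> = { string: "test", number: 0, boolean: false };
  const payload: Record<string, unknown> = {};
  for (const [key, prop] of Object.entries(schema.properties)) {
    payload[key] = prop.example ?? defaults[prop.type];
  }
  return payload;
}

// Check that every required output field is present.
function matchesSchema(output: Record<string, unknown>, schema: Schema): boolean {
  return (schema.required ?? Object.keys(schema.properties)).every((k) => k in output);
}

const inputSchema: Schema = {
  properties: {
    emailLabel: { type: "string", example: "INBOX" },
    maxResults: { type: "number" },
  },
};
const outputSchema: Schema = {
  properties: { status: { type: "string" }, sheetRow: { type: "number" } },
  required: ["status"],
};

console.log(buildTestData(inputSchema));                 // { emailLabel: "INBOX", maxResults: 0 }
console.log(matchesSchema({ status: "ok" }, outputSchema)); // true
```

A real tester would also validate types and nested objects; this sketch only shows why a well-specified `inputSchema` gives the agent everything it needs to construct trigger payloads.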
The full build-test cycle
For maximum confidence, chain the agents together.

Writing effective prompts
Do
- Be specific about integrations: “Use Gmail for email, Google Sheets for logging, and Slack for notifications”
- Describe the data flow: “Extract the sender, subject, and key dates from each email”
- Mention edge cases: “If the AI can’t parse the email, log it as unprocessed and skip”
- Specify user settings: “The user should configure their Slack channel and email filter criteria at install time”
Don’t
- Don’t specify implementation details: “Use a FLEXCRED_ANTHROPIC placeholder” — the agent knows this
- Don’t prescribe trigger types: “Make it an HTTP POST webhook” — let the agent decide the best trigger
- Don’t worry about Codika patterns: Init nodes, Submit Result, Report Error — the agent handles these automatically
- Don’t specify node positioning or IDs: The agent follows standard conventions