Guide
Use a Margarita orchestration file to build a custom Ralph Wiggum loop tailored to your own specific workflow.
Maybe you want it to post to JIRA after tasking out the implementation plan. Maybe you want to run a code quality analysis after each implementation step. With a single monolithic prompt, you have no way to inject custom logic or effects at specific points in the workflow.
As the agent accumulates conversation history, earlier instructions and code get pushed out of focus — leading to drift and repeated mistakes.
Need a JIRA step? A code quality check? A Slack update? You might not have the tight control that you want.
Mixing product refinement, task planning, and implementation into a single prompt produces a generalist agent that is mediocre at all three.
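As a sketch of the kind of hook an orchestration file makes possible, a custom step can be spliced in between phases using only the primitives this guide covers. (The JIRA step is an assumption: it only works if your agent has a JIRA tool or integration available.)

```
// Hypothetical: file the plan in JIRA right after the breakdown phase.
// Assumes the agent has access to a JIRA tool; adapt to your setup.
[[ use-case-task-breakdown.mg input=userInput ]]
@effect run

<<
For each task in `tasks`, create a JIRA ticket using the available
JIRA tool, with the acceptance criteria in the ticket description.
>>
@effect run
```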
The custom-ralph.mgx file orchestrates the entire workflow. It prompts the
user for input, delegates task breakdown to a specialist template, then loops over each
task — injecting domain guidance and checking acceptance criteria on every iteration.
@state maxIterations = 5
@state tasks = []

// ── Phase 1: Get user input ───────────────────────────────
@effect input "What do you want to build?:" => userInput

// ── Phase 2: Break it into tasks ─────────────────────────
[[ use-case-task-breakdown.mg input=userInput ]]
@effect run

// ── Phase 3: Implement each task ─────────────────────────
for task in tasks:
  @effect context clear
  @state isDone = false

  <<
  Check the current state of the project and attempt to implement:
  ${task}
  >>

  // Add domain rules that help the loop run better.
  [[ additional_guidance.mg ]]

  <<
  Mandatory last step: Once you're done, check the acceptance criteria ${task.acceptanceCriteria}.
  Set variable `isDone` = true ONLY if ALL criteria are met.
  Set variable `isDone` = false if any criteria are not met.
  >>

  // Run the agent up to 5 times to try to complete the task.
  for i in range(5):
    @effect run

    // If the agent says the task is done, break and move on to the next task.
    if isDone:
      break
Each .mg template is a focused prompt with a single job.
use-case-task-breakdown.mg decomposes the user's idea into an ordered
implementation plan with acceptance criteria. additional_guidance.mg injects
domain-specific rules into every iteration of the loop — no parameters needed.
Decomposes the user's idea into an ordered implementation plan. Included via
[[ use-case-task-breakdown.mg input=userInput ]]. Stores results in the
tasks state variable.
<<
Your task is to expand the following ask into a more detailed set of
requirements / task breakdown.

- The output should be a more detailed description of the task,
  without implementation details.
- Consult AGENTS.md for code standards and best practices for
  writing clear, actionable tasks.
- Ask any follow-up questions to reduce ambiguity and ensure the
  task is well defined.
- Keep the number of tasks small, ideally 1-3.
- We should follow Red, Green, Refactor where possible.

Create a list to store the task breakdown. Each task should have
the following structure:

{
  "task": "The refined feature description",
  "acceptanceCriteria": [
    "A specific, measurable outcome that indicates the task
     is successfully implemented.",
    "Another specific, measurable outcome..."
  ]
}

Store the list of tasks in the `tasks` state variable.

## This is the use case that should be broken down:

${input}
>>
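After this phase runs, the `tasks` state variable might hold something like the following. (These are illustrative values, not output from a real run.)

```
tasks = [
  {
    "task": "Add a /health endpoint that reports service status",
    "acceptanceCriteria": [
      "GET /health returns HTTP 200 when the service is up",
      "`make test` passes"
    ]
  }
]
```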
Domain-specific rules injected into every iteration of the implementation loop.
Included via [[ additional_guidance.mg ]] — no parameters needed.
<<
Here are some things to keep in mind:

- There is no SQL involved in the task list, so don't do any SQL
  queries or database interactions.
- We should not commit to git or do any version control operations
  as part of the task list.
- Don't add any temp files to docs. If you absolutely need to
  create a temp file, do it in the ./temp directory.
- If a criterion can't be met because this guidance forbids the
  required action, set the task to done, but leave a note for
  the user.
- Use `make test` to run tests.
>>
The per-task loop clears context, includes
additional_guidance.mg, runs the agent, and breaks when
isDone is true — capped at five iterations per task.
for task in tasks:
  // Fresh context for every task
  @effect context clear
  @state isDone = false

  << Check the current state of the project
     and attempt to implement: ${task} >>

  // Inject domain rules each iteration
  [[ additional_guidance.mg ]]

  << Mandatory last step: check ${task.acceptanceCriteria}.
     Set `isDone` = true ONLY if ALL criteria are met. >>

  // Retry up to 5 times per task
  for i in range(5):
    @effect run
    if isDone:
      break
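If all five iterations finish without `isDone` ever being set, the loop simply falls through to the next task. A hypothetical guard — not part of the original files, and assuming the DSL supports `if not` — could surface that to the user:

```
// Hypothetical addition: flag tasks that never passed their criteria.
if not isDone:
  <<
  Note for the user: this task did not meet all acceptance criteria
  after five attempts: ${task}
  >>
  @effect run
```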
<<
Here are some things to keep in mind:
- No SQL queries or database interactions.
- No git commits or version control operations.
- Temp files go in ./temp, never in docs.
- If guidance prevents meeting a criterion,
mark the task done and note it for the user.
- Use `make test` to run tests.
>>
@effect context clear resets accumulated history between phases. Each
specialist prompt sees only what it needs — preventing refinement noise from bleeding into
the implementation loop.
@state variables persist for the lifetime of the .mgx run.
The tasks list is written by the agent in the breakdown phase and read by
the loop in the next — no manual wiring needed.
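The same pattern works for any value. A minimal sketch (hypothetical prompts, same primitives) where one phase writes a state variable and a later phase reads it:

```
@state repoSummary = ""

<< Summarize this repository's architecture and store the result
   in the `repoSummary` state variable. >>
@effect run

// A later phase reads the value the agent wrote — no wiring needed.
<< Given this summary, suggest one refactoring task: ${repoSummary} >>
@effect run
```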
[[ additional_guidance.mg ]] can be dropped into any iteration of the loop.
Domain constraints — no SQL, no git commits, where to put temp files — are expressed once
and applied consistently, without repeating them in every prompt.
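Both inclusion styles from this guide, side by side:

```
// Parameterized include: the caller passes values that the
// template reads via ${...} placeholders.
[[ use-case-task-breakdown.mg input=userInput ]]

// Parameterless include: the template is self-contained.
[[ additional_guidance.mg ]]
```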
Grab the example files from GitHub and start composing your own specialized prompt templates.