Guide

Build a Custom
Ralph Wiggum Loop

Use a Margarita orchestration file to build a custom Ralph Wiggum loop tailored to your specific workflow.

The Problem

Other Ralph tools don't fit your workflow?

Maybe you want your loop to post to JIRA after tasking out the implementation plan. Maybe you want to run a code quality analysis after each implementation step. With a single monolithic prompt, you have no way to inject custom logic or effects at specific points in the workflow.

🔄

Context bloat

As the agent accumulates conversation history, earlier instructions and code get pushed out of focus — leading to drift and repeated mistakes.

Need more custom steps

Need a JIRA step? A code quality check? A Slack update? Off-the-shelf tools may not give you the tight control you want.

📈

One giant prompt

Mixing product refinement, task planning, and implementation into a single prompt produces a generalist agent that is mediocre at all three.

The Solution

Compose. Orchestrate. Iterate.

Break the workflow into focused Margarita prompt templates — one to break an idea into tasks, one to guide each iteration with domain rules. Wire them together in a Margarita .mgx orchestration file that manages state, clears context between tasks, and loops over each task until its acceptance criteria are met.

1
Create the .mgx
The custom-ralph.mgx file orchestrates the entire workflow. It prompts the user for input, delegates task breakdown to a specialist template, then loops over each task — injecting domain guidance and checking acceptance criteria on every iteration.
custom-ralph.mgx
---
description: Build a custom Ralph Wiggum loop
---

@state maxIterations = 5
@state tasks = []

// ── Phase 1: Get user input ───────────────────────────────
@effect input "What do you want to build?:" => userInput

// ── Phase 2: Break it into tasks ─────────────────────────
[[ use-case-task-breakdown.mg input=userInput ]]

@effect run

// ── Phase 3: Implement each task ─────────────────────────
for task in tasks:
    @effect context clear
    @state isDone = false

    <<
    Check the current state of the project and attempt to implement:
    ${task}
    >>
    // Add domain rules that help the loop run better.
    [[ additional_guidance.mg ]]
    <<
    Mandatory last step: Once you're done, check the acceptance criteria ${task.acceptanceCriteria}.
    Set variable `isDone` = true ONLY if ALL criteria are met.
    Set variable `isDone` = false if any criteria are not met.
    >>

    // Run the agent up to 5 times to complete the task.
    for i in range(5):
        @effect run

        // If the agent reports the task is done, break and move on to the next task.
        if isDone:
            break
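The control flow the .mgx file describes can be sketched in plain Python. This is only a model of the loop shape, not Margarita itself: `run_agent` is a hypothetical stand-in for `@effect run`, and the `state` dict mimics `@state isDone`.

```python
# Hypothetical sketch of the control flow in custom-ralph.mgx.
# `run_agent` stands in for Margarita's `@effect run`; the real engine
# manages prompts, context clearing, and state for you.

MAX_ITERATIONS = 5  # mirrors `@state maxIterations = 5`

def run_custom_ralph(tasks, run_agent):
    """Attempt each task up to MAX_ITERATIONS times, stopping early
    once the agent reports the acceptance criteria are met."""
    results = []
    for task in tasks:
        state = {"isDone": False}   # fresh per task, like `@state isDone = false`
        for _ in range(MAX_ITERATIONS):
            run_agent(task, state)  # the agent may set state["isDone"] = True
            if state["isDone"]:
                break
        results.append(state["isDone"])
    return results
```

The key design point is the same as in the .mgx file: the done flag is reset before each task, so one task's success can never leak into the next iteration's exit condition.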
2
Define specialized prompt templates
Each .mg template is a focused prompt with a single job. use-case-task-breakdown.mg decomposes the user's idea into an ordered implementation plan with acceptance criteria. additional_guidance.mg injects domain-specific rules into every iteration of the loop — no parameters needed.

use-case-task-breakdown.mg

Decomposes the user's idea into an ordered implementation plan. Included via [[ use-case-task-breakdown.mg input=userInput ]]. Stores results in the tasks state variable.

---
description: Task Breakdown - Converts use cases into actionable implementation tasks
parameters: input (string) - Use case to break down into tasks
---
<<
Your task is to expand the following ask into a more detailed set of
requirements / task breakdown.

- The output should be a more detailed description of the task,
  without implementation details.
- Consult AGENTS.md for code standards and best practices for
  writing clear, actionable tasks.
- Ask any follow-up questions to reduce ambiguity and ensure the
  task is well defined.
- Keep the number of tasks small, ideally 1-3.
- We should follow Red, Green, Refactor where possible.

Create a list to store the task breakdown. Each task should have
the following structure:
{
    "task": "The refined feature description",
    "acceptanceCriteria": [
        "A specific, measurable outcome that indicates the task
         is successfully implemented.",
        "Another specific, measurable outcome..."
    ]
}

Store the list of tasks in the `tasks` state variable.

## This is the use case that should be broken down:
${input}
>>
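Since the agent writes the `tasks` list itself, it can help to sanity-check the shape of each entry before the loop consumes it. The validator below is a hypothetical helper, not part of Margarita; the field names simply mirror the structure shown above.

```python
# Hypothetical validator for the task objects the breakdown template
# asks the agent to produce. Field names match the structure above:
# {"task": "...", "acceptanceCriteria": ["...", ...]}

def is_valid_task(obj):
    """Return True if obj has a non-empty task description and at
    least one string acceptance criterion."""
    return (
        isinstance(obj, dict)
        and isinstance(obj.get("task"), str)
        and obj["task"].strip() != ""
        and isinstance(obj.get("acceptanceCriteria"), list)
        and len(obj["acceptanceCriteria"]) > 0
        and all(isinstance(c, str) for c in obj["acceptanceCriteria"])
    )
```

Requiring at least one acceptance criterion matters because the loop's exit condition (`isDone`) is defined entirely in terms of those criteria; a task without any would never have a well-defined "done".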

additional_guidance.mg

Domain-specific rules injected into every iteration of the implementation loop. Included via [[ additional_guidance.mg ]] — no parameters needed.

---
description: "Additional guidance for custom Ralph, based on project feedback."
---
<<
Here are some things to keep in mind:
- There is no SQL involved in the task list, so don't do any SQL
  queries or database interactions.
- We should not commit to git or do any version control operations
  as part of the task list.
- Don't add any temp files to docs. If you absolutely need to
  create a temp file do it in the ./temp directory.
- If a criterion cannot be met because the guidance forbids it,
  set the task to done, but leave a note for the user.
- Use `make test` to run tests.
>>
3
Drive the implementation loop
For each task, Margarita clears context, then enters an inner loop that injects the task prompt, includes additional_guidance.mg, runs the agent, and breaks when isDone is true — capped at five iterations per task.
custom-ralph.mgx Loop logic
for task in tasks:
    // Fresh context for every task
    @effect context clear
    @state isDone = false

    << Check the current state of the project
and attempt to implement: ${task} >>
    // Inject domain rules each iteration
    [[ additional_guidance.mg ]]
    << Mandatory last step: check ${task.acceptanceCriteria}.
Set `isDone` = true ONLY if ALL criteria are met. >>

    // Retry up to 5 times per task
    for i in range(5):
        @effect run

        if isDone:
            break
additional_guidance.mg Domain rules
---
description: "Additional guidance for custom Ralph."
---
<<
Here are some things to keep in mind:
- No SQL queries or database interactions.
- No git commits or version control operations.
- Temp files go in ./temp, never in docs.
- If guidance prevents meeting a criterion,
  mark the task done and note it for the user.
- Use `make test` to run tests.
>>
✏️

Context stays clean

@effect context clear resets accumulated history between phases. Each specialist prompt sees only what it needs — preventing refinement noise from bleeding into the implementation loop.

🆕

State spans the whole run

@state variables persist for the lifetime of the .mgx run. The tasks list is written by the agent in the breakdown phase and read by the loop in the next — no manual wiring needed.

📝

Inject domain rules anywhere

[[ additional_guidance.mg ]] can be dropped into any iteration of the loop. Domain constraints — no SQL, no git commits, where to put temp files — are expressed once and applied consistently, without repeating them in every prompt.

Ready to build your own loop?

Grab the example files from GitHub and start composing your own specialized prompt templates.