Set up Journey Simulations with Maxim

Updated by Santiago Cardona

Testing conversational journeys — especially ones powered by AI agents — is hard. You can click through the preview, but that doesn't scale if you have a complex Journeys suite: a single journey can branch into hundreds of realistic paths depending on how a user phrases things, what language they speak, or what mood they're in. Manually walking each one is slow, inconsistent, and almost impossible to repeat as you iterate.

This guide walks you through connecting Turn's Journeys Simulation API to Maxim's Simulation tool so you can automatically run your journeys against realistic, LLM-powered user personas — and catch regressions before your real users do.

See the Test your journeys with the Simulation API and Journeys Simulation API docs for more details.

Prerequisites

Before you start, make sure you have:

  • A journey on your Turn number (production or staging revision)
  • The journey UUID — visible in the URL when editing the journey
  • An API token for the number the journey runs on (Settings → API & Webhooks)
  • A Maxim account with access to the Simulation feature
Step 1 — Grab your Turn credentials
  1. Head to Settings → API & Webhooks in Turn and create an API token. This is the Bearer token Maxim will use.
  2. Open the journey you want to test and copy the UUID from the URL (.../journeys/<uuid>/...).

Keep both values handy — you'll paste them into Maxim next.

Step 2 — Create a simulation in Maxim
  1. In Maxim, navigate to Agents → HTTP endpoint and click New Simulation.
  2. Set the endpoint URL to:
    https://whatsapp.turn.io/v1/journeys/<JOURNEY_UUID>/simulation
    Replace <JOURNEY_UUID> with the UUID from Step 1.
  3. Under Headers, add:
    Authorization: Bearer <YOUR_TURN_TOKEN>
    Content-Type: application/json
    Accept: application/json
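If you want to sanity-check the endpoint and token outside Maxim first, here's a minimal sketch using only Python's standard library. The two-field body is a placeholder for illustration (the full request template comes in Step 3), and the UUID and token values are stand-ins for your own:

```python
import json
import urllib.request

# Placeholder credentials -- substitute the UUID and token from Step 1.
JOURNEY_UUID = "your-journey-uuid"
TURN_TOKEN = "your-turn-api-token"

url = f"https://whatsapp.turn.io/v1/journeys/{JOURNEY_UUID}/simulation"
body = json.dumps({"input": "Hi", "revision": "production"}).encode()

# Building the Request does not send anything yet;
# urllib.request.urlopen(req) would perform the POST.
req = urllib.request.Request(
    url,
    data=body,
    headers={
        "Authorization": f"Bearer {TURN_TOKEN}",
        "Content-Type": "application/json",
        "Accept": "application/json",
    },
)
```

A 401 response here means the token is wrong; a 404 usually means the UUID doesn't match a journey on that number.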
Step 3 — Map the request and response

Maxim needs to know how to build each request body and where to find the agent's reply in the response.

Request body template — tell Maxim to send the user turn as input and to reuse a stable session key per run:

{
  "input": "{{input}}",
  "revision": "production",
  "contact": {
    "name": "{{persona.name}}",
    "language": "eng"
  },
  "simulation_id": "{{simulationId}}"
}
  • {{input}} is the LLM-generated user turn.
  • {{simulationId}} should be a unique-per-conversation token Maxim generates (6–32 chars). This becomes the simulation_id and keeps each simulated conversation in its own session.
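To make the template concrete, here is a small sketch of what Maxim effectively does with it each turn: substitute the persona's message and a per-conversation token into the body above. The helper name and the use of `secrets.token_hex` for the session token are illustrative choices, not part of either API:

```python
import json
import secrets

def build_request_body(user_input: str, persona_name: str,
                       simulation_id: str, revision: str = "production") -> str:
    """Fill the Step 3 template: one body per simulated user turn."""
    body = {
        "input": user_input,
        "revision": revision,
        "contact": {"name": persona_name, "language": "eng"},
        "simulation_id": simulation_id,
    }
    return json.dumps(body)

# One simulation_id per conversation, 6-32 characters.
sim_id = secrets.token_hex(8)  # 16 hex chars, safely inside the 6-32 range
first_turn = build_request_body("Hi, I want to register my baby", "Ana", sim_id)
```

Every subsequent turn in the same conversation reuses `sim_id`; a new conversation gets a fresh one.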

Now send a first message to check that everything is configured correctly. After you manually send a message and get a response, the Switch to AI Simulation button appears:

Click the Switch to AI Simulation button to open the modal and configure the AI-simulated session:

Response mapping — point Maxim's agent response field to:

message

That's the field Turn returns on every turn (see Journeys Simulation API response fields).

Step 4 — Define a scenario and persona

This is where simulation earns its keep. A good scenario + persona combo catches problems no scripted test would find.

Scenario — describe the situation the user is in. Be specific:

A new caregiver wants to register their 6-month-old for the next vaccination round. They don't know which clinic is closest and are worried about side effects.

Persona — describe how the user talks:

Anxious first-time parent, types in short fragmented messages, occasionally switches from English to Portuguese, sometimes asks the same question twice when nervous.

Maxim recommends mixing emotional states and expertise levels across runs — calm vs. frustrated, first-time vs. returning, literate vs. low-literacy. This is especially important for AI Agent blocks, where intent recognition has to hold up against messy real-world phrasing.

Step 5 — Run it

Hit Start Simulation. Maxim will:

  1. Open a session against Turn by POSTing with a fresh simulation_id.
  2. Take Turn's reply, feed it to the persona LLM, and generate the next user turn.
  3. POST that user turn back to the same simulation_id.
  4. Repeat until the journey returns state: "end" or the turn limit is hit.
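The loop above can be sketched in a few lines. In this sketch, `post_turn` and `next_user_turn` are stand-ins for the real HTTP call to Turn and the persona LLM call; only the `message` and `state` response fields come from the Simulation API:

```python
def simulate(post_turn, next_user_turn, opening="Hi", max_turns=10):
    """Drive one simulated conversation.

    post_turn(user_text)  -> Turn's response dict, e.g. {"message": ..., "state": ...}
    next_user_turn(reply) -> the persona LLM's next user message
    """
    transcript = []
    user_text = opening
    for _ in range(max_turns):  # keep the turn limit tight (see Tips)
        response = post_turn(user_text)
        transcript.append((user_text, response["message"]))
        if response.get("state") == "end":  # journey finished
            break
        user_text = next_user_turn(response["message"])
    return transcript

# Toy stubs: the "journey" ends on its second reply.
replies = iter([{"message": "Which clinic?", "state": "open"},
                {"message": "Done!", "state": "end"}])
log = simulate(lambda text: next(replies), lambda reply: "Clinic A")
```

The `max_turns` cap is what keeps a looping journey from running away with your LLM credits.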

And you're done! 🎉 You'll see each generated input and output in the message history.

What to do next

  • Run a baseline: kick off 20–30 runs across 3–4 personas and save the results as your baseline.
  • Wire it into your release flow: every time you publish a new revision of a journey, re-run the same suite against revision: "production" (or "staging" before publishing) and compare scores.
  • Investigate regressions: when an evaluator score drops, open the transcript — Maxim shows you exactly the turn where the journey went off the rails, and Turn's context field in the response tells you which card was active.

Tips

  • Use one simulation_id per conversation, not per suite. Reusing it across runs will bleed state between tests.
  • Prefer staging while iterating. It lets you test unpublished changes without touching the production revision.
  • Keep turn limits tight. A runaway simulation against a looping journey is the fastest way to burn LLM credits.
  • Seed contact.language to match the journey's default language, otherwise localised cards may fall back to English.
