Build a WhatsApp agent
with OpenClaw + TimelinesAI

Turn WhatsApp into a customer channel your OpenClaw agent operates on: auto-reply to incoming messages, send transactional and event-triggered notifications, sync with your CRM, share the inbox with teammates. TimelinesAI runs the WhatsApp gateway. Your agent handles the reasoning.

Auto-reply · Transactional sends · CRM sync · Shared inbox

What you can build

What your agent can do with WhatsApp

With a TimelinesAI skill installed, your OpenClaw agent can operate WhatsApp as a real customer channel. The long version of this list is in the sections below — every capability has the specific API calls it needs.

Answer incoming customer messages automatically — 24/7 autoresponder, after-hours responder, or full AI chatbot that escalates to a human when stuck.

Send transactional and event-triggered messages — order confirmations, shipping updates, appointment reminders, payment receipts, and notifications triggered by events in other tools (HubSpot, Stripe, Calendly). Only to customers who expect the message.

Qualify leads through multi-turn conversations — ask a sequence of questions, store answers, tag the chat as qualified or not.

Sync WhatsApp activity with your CRM — look up incoming numbers in HubSpot/Pipedrive, update deal stages, push notes back.

Summarize and score conversations — "what did ACME ask about last week", "score this chat 1–10 on intent".

Share the inbox with teammates — your agent drafts replies as private notes, humans send them; or your agent sends and humans watch.

Handle media — photos of receipts, PDFs, voice notes — processed by OpenClaw, replied to automatically.


Start here

How to use this guide

This guide is dual-mode. You can read it yourself and follow Setup manually — the path the sections below lay out — or you can hand the whole thing to your OpenClaw agent (or any agent that can fetch a URL) and let it do the setup for you. Both paths land at the same place: four WhatsApp skills installed, a token and webhook wired up, your agent answering customer messages.

If you’re reading this yourself

Scroll through the capability groups below — Incoming, Outbound, CRM & analytics, Operations — and decide which ones matter for your use case. Then follow Setup manually; it’s four steps and about ten minutes. Every capability lists the exact API calls it makes, and every code block in this guide has been run against the live TimelinesAI API before publication. If you get stuck, Things-to-know collects every gotcha that will cost you an hour if you don’t know about it in advance.

If you want your agent to do the setup

Paste this prompt into OpenClaw (or Claude Code, Cursor, Claude desktop, any agent that can read a URL). It points at this guide by URL so the agent reads the live version, then walks you through install, token, webhook, smoke-test, and enabling your first skill — asking for input only where it genuinely needs something from you.

Read https://timelines.ai/guide/openclaw-whatsapp-skills end to end.

Then help me set up an OpenClaw + TimelinesAI WhatsApp agent:

1. Clone InitechSoftware/openclaw-whatsapp-skills into ~/.openclaw/workspace/
   and symlink the four skills into ~/.openclaw/skills/.

2. Ask me for my TimelinesAI API token (Integrations → Public API → Copy
   on app.timelines.ai) and write it to ~/.openclaw/workspace/.env.timelinesai
   as TIMELINES_AI_API_KEY.

3. Run the smoke-test curl from Setup. If it fails, walk me through the
   "If you see something else" failure block in the guide.

4. Help me deploy examples/vercel-webhook-receiver from the companion repo
   and register the webhook with TimelinesAI.

5. Ask which of the four skills I want to enable first — whatsapp-autoresponder,
   whatsapp-lead-qualifier, whatsapp-send, or whatsapp-delivery-check — and
   walk me through its capability section plus any relevant gotchas from
   Things-to-know.

6. Before enabling anything that sends outbound messages, show me Channel
   choice so I pick the right WhatsApp channel (personal number vs Business
   API) for my use case.

The prompt intentionally stops before turning any outbound sending on — it routes you through Channel choice first so you pick the right WhatsApp channel (personal vs Business API) for what you’re actually trying to do. Ban risk is real and upstream, not something the gateway can protect you from.


Getting started

Setup

Four things to wire together. After this, your skill just makes API calls.

  1. Connect your WhatsApp number

     Sign in at app.timelines.ai, scan the QR code with the phone that holds your business number. TimelinesAI runs the gateway from here on. Same flow as the standard personal-number setup.

  2. Get an API token

     Integrations → Public API → Copy. Save as TIMELINES_AI_API_KEY. One token covers the whole workspace.

  3. Install a skill from the companion repo

     Clone InitechSoftware/openclaw-whatsapp-skills into ~/.openclaw/workspace/, then symlink each skill directory into ~/.openclaw/skills/. Four-line quick start in the companion README.

  4. Register a webhook

     Point message:received:new at a public HTTPS URL. TimelinesAI will push incoming messages to it. Your receiver invokes OpenClaw.

Smoke-test the token

Before you build anything, confirm auth works:

curl -sS -w "\nHTTP: %{http_code}\n" \
  -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
  https://app.timelines.ai/integrations/api/whatsapp_accounts

You should see

{
  "status": "ok",
  "data": {
    "whatsapp_accounts": [
      {
        "id": "<phone>@s.whatsapp.net",
        "phone": "+<phone>",
        "status": "connected",
        "account_name": "<your label>"
      }
    ]
  }
}

If you see something else

  • HTTP 401 with the bare text Unauthorized — the token didn’t land. Re-check step 2, make sure you included the Bearer prefix, and confirm the token didn’t get line-wrapped on paste. Note the response is plain text, not JSON — piping to jq will error out.
  • A bootstrap-styled HTML “Page not found” page with HTTP 404 — you’ve got a trailing slash or a typo in the path. The API serves HTML (not a JSON error) for bad paths, so if your output starts with <!DOCTYPE html>, drop any trailing slash and re-check the path.

Register the webhook once

# Generate a random secret once. Add it as a query param so only
# TimelinesAI's registered URL can reach your receiver.
WEBHOOK_SECRET=$(openssl rand -hex 16)
WEBHOOK_URL="https://your-app.example.com/api/webhook?secret=${WEBHOOK_SECRET}"

curl -sS -X POST \
  -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"event_type\":\"message:received:new\",\"url\":\"${WEBHOOK_URL}\",\"enabled\":true}" \
  https://app.timelines.ai/integrations/api/webhooks

Minimal webhook receiver (Node / Vercel)

~40 lines. Validates the ?secret= query param, drops outbound echoes, acks 200 within the 5-second retry window, then forwards the message to your OpenClaw host.

// api/webhook.js
export default async function handler(req, res) {
  // Path-segment auth — TimelinesAI doesn't sign webhooks yet, so the
  // ?secret=<token> you registered is the only thing stopping strangers
  // from invoking this endpoint.
  if (req.query.secret !== process.env.WEBHOOK_SECRET) {
    return res.status(404).end();
  }
  if (req.method !== "POST") return res.status(405).end();

  // Default data to {} so a malformed payload can't throw on the reads below.
  const { event_type, data = {} } = req.body || {};
  if (event_type !== "message:received:new") {
    return res.status(200).json({ ignored: event_type });
  }

  // Drop your agent's own sends echoing back as "received".
  // TimelinesAI also fires message:received:new for outbound messages
  // synced from another WhatsApp client on the same number.
  if (data.from_me === true) {
    return res.status(200).json({ ignored: "from_me" });
  }

  // Ack FAST. TimelinesAI retries 3x with a 5-second timeout per attempt —
  // if you take longer than 5s the same event hits you up to 3 times.
  res.status(200).json({ ok: true });

  // TL webhook payloads sometimes arrive flat (data.chat_id) and sometimes
  // nested (data.chat.id) depending on event flavor and version. Destructure
  // with fallback so both shapes work — see Things to know below.
  const chatId = data.chat_id ?? data.chat?.id;
  const whatsappAccountId =
    data.whatsapp_account_id ?? data.chat?.whatsapp_account_id;

  // Fire-and-forget handoff to wherever your OpenClaw host accepts
  // inbound messages. For production durability replace this with a
  // push to a queue (QStash, Inngest, SQS) your agent drains separately.
  try {
    await fetch(process.env.OPENCLAW_HOOK_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        chat_id: chatId,
        whatsapp_account_id: whatsappAccountId,
        text: data.text,
        sender_phone: data.sender_phone,
        message_uid: data.message_uid,
      }),
    });
  } catch (err) {
    console.error("[timelinesai-webhook] handoff failed:", err);
  }
}

OPENCLAW_HOOK_URL is whatever endpoint your OpenClaw deployment exposes for inbound messages — it depends on how you host the agent. For a production-grade reference with durability notes, see examples/vercel-webhook-receiver in the companion repo.


Your install

What's inside your OpenClaw workspace

You connected a TimelinesAI token and symlinked skills from the companion repo. Here’s what you actually installed, and which files you edit when you want to change how your agent thinks, what it knows, and how it behaves. Nothing here is magic — it’s all plain markdown and env files in two directories.

Skills — ~/.openclaw/skills/<skill>/

Every skill you symlinked is a directory with one required file (SKILL.md) and any resources that file references. If you want to change how your FAQ handler sounds, or re-order the lead qualifier’s questions, the edit lives in that skill’s SKILL.md. No code to recompile, no build step.

  • SKILL.md: The whole skill. YAML frontmatter at the top (name, description, user-invocable, and any dispatch flags) plus the instructions/prompt body underneath. This is where you edit what the skill does, when OpenClaw picks it, and how it behaves. Every one of the four WhatsApp skills in the companion repo is a single SKILL.md — no code to recompile, just edit and restart.
  • README.md: Human documentation for the skill. OpenClaw does not read this file — it's for you, your teammates, and anyone browsing the skill on GitHub. Optional; the companion repo skills all ship with one.
  • Scripts, resources, schemas: Anything referenced from SKILL.md via {baseDir}/... — small helper scripts, prompt templates, JSON schemas, test fixtures. Ignored by OpenClaw unless your SKILL.md explicitly points at them. Put whatever the skill needs alongside SKILL.md and reference it by path.

For the four WhatsApp skills specifically — whatsapp-autoresponder, whatsapp-lead-qualifier, whatsapp-send, whatsapp-delivery-check — the top of each SKILL.md has the role and allowed tools; the body has the actual prompt your agent runs with. Read one of them end-to-end before you edit; the patterns repeat.

Workspace — ~/.openclaw/workspace/

The workspace directory is OpenClaw’s home for everything that isn’t a skill: who your agent is, what it knows about you, what tools it has, how it boots. OpenClaw’s own docs say “this folder is home. Treat it that way.” These files ship with the install — you fill them out over time:

  • IDENTITY.md: Who your agent is — name, creature (how it conceptualises itself), vibe, signature emoji, avatar. Fill this out early; most of the other files reference back to it.
  • SOUL.md: Core operating principles and personality. How the agent should behave when nobody is watching — when to be genuine vs performative, how to earn trust, where privacy boundaries sit. OpenClaw's docs call it the agent's conscience.
  • USER.md: Who the agent is helping (you). Name, pronouns, timezone, interests, project context. Agents are better at helping someone specific than a generic "user" — this file is how they keep track.
  • TOOLS.md: Environment-specific config you don't want inside any skill — device names, host addresses, local preferences. Lives in the workspace so you can edit it without touching anything you might share.
  • AGENTS.md: The workspace README for agents. Describes how OpenClaw expects this particular workspace to be run. Usually shipped with the install and rarely edited.
  • BOOT.md: Short, explicit instructions for what OpenClaw should do on startup. Empty by default; fill it if you want deterministic boot behavior across restarts.
  • BOOTSTRAP.md: First-boot onboarding conversation. Walks a fresh workspace through establishing its identity, then directs you to save the results into IDENTITY.md, SOUL.md, USER.md. Delete it once you've filled those out.
  • HEARTBEAT.md: Periodic task definitions. Empty file means no heartbeats; add tasks here when you want the agent to check something on an interval (e.g. poll a queue every five minutes).

Per-integration env files

Alongside the canonical files above, you add one env file per integration you wire up. These aren’t part of the OpenClaw install — they’re yours, and they hold secrets that should never be committed to a public repo:

  • .env.<integration>: Secrets and config for one integration, one file per integration. For the WhatsApp work you created .env.timelinesai with TIMELINES_AI_API_KEY and ALLOWED_SENDER_JID in Setup step 2. Pattern-match this file when you add HubSpot, Stripe, Pipedrive, or any other tool — one .env.<name>, source it in the skill that needs it.

The one-line rule: change how your agent BEHAVES by editing a skill's SKILL.md; change WHO your agent is by editing SOUL.md or IDENTITY.md in the workspace; change WHAT SECRETS it uses by editing the right .env.<integration> file. Everything else in the workspace is machinery you'll rarely touch.

Channel choice

Personal numbers vs WhatsApp Business API

Before you build outbound flows, understand which WhatsApp channel your use case belongs to. Picking wrong gets numbers banned.

This guide is about personal WhatsApp numbers — the ones you connect to TimelinesAI by scanning a QR code. They are for inbound conversations and transactional sends to customers who expect the message. For cold outreach, marketing broadcasts, or promotional campaigns, use WhatsApp Business API instead — TimelinesAI supports it today through the dashboard, and public API automation for Business API workflows is coming in Q2 2026.

Safe with the skills in this guide (personal number)

  • Inbound conversations. Customer messages you first, you reply. No risk.
  • Transactional sends. Order confirmations, shipping updates, delivery notifications, payment receipts, appointment reminders — any message a customer expects because they just did something with your business.
  • Event-triggered notifications from other tools. HubSpot deal → demo confirmation, Stripe failed payment → recovery note, Calendly booking → pre-meeting reminder. The customer opted in when they used the upstream tool.
  • Replying inside WhatsApp's 24-hour customer service window. Once a customer messages you, you have 24 hours to reply freely. The autoresponder, FAQ handler, and lead qualifier skills operate inside this window.

NOT safe on personal numbers

  • Cold outreach to lists you bought or scraped — WhatsApp bans numbers for this within hours.
  • Marketing broadcasts to customers who didn't specifically opt in.
  • Promotional campaigns, sales offers, seasonal pushes, product launches.
  • Anything resembling a marketing blast. If you're asking “what throughput can I get”, you're on the wrong channel.

How to tell which channel you need

  1. Did the customer message you first, or are you replying inside an active 24-hour session? → Personal number, this guide.
  2. Is the customer about to receive something they explicitly expect (order, appointment, payment, delivery)? → Personal number, transactional send.
  3. Triggered by a customer-opted-in event in your CRM or billing tool? → Personal number, event-triggered send.
  4. Broadcast, cold outreach, or promotional campaign? → Business API, use the TimelinesAI dashboard.
  5. Not sure? → Don't send. Treat it as promotional and route it to the Business API path.

Every outbound capability below assumes you've passed this test. If you're not sure, re-read this section before shipping. TimelinesAI cannot protect you from the WhatsApp-layer ban — the ban is enforced upstream, at WhatsApp's infrastructure, not at the gateway.


Reference

API reference

Every endpoint you need to build every capability below. Base URL: https://app.timelines.ai/integrations/api. Auth: Authorization: Bearer $TIMELINES_AI_API_KEY.

Reading

  • GET /whatsapp_accounts: Your connected WhatsApp numbers, each with JID, phone, status, account name.
  • GET /chats: Chat list. Supports ?phone=... and ?label=... filters. Paginate with ?page=N (50 per page, fixed). Each chat has whatsapp_account_id holding the JID of the number that owns it, plus chatgpt_autoresponse_enabled — see Things to know before launching your own agent.
  • GET /chats/{id}: One chat's full detail.
  • GET /chats/{id}/messages: Message history, 50 per page. Paginate with ?page=N and watch data.has_more_pages. Each message carries from_me, sender_phone, text, timestamp, message_type, status (Sent/Delivered/Read), and origin (Public API vs WhatsApp app).
  • GET /chats/{id}/labels: Labels on the chat.
  • GET /messages/{uid}/status_history: Sent / Delivered / Read timeline for an outbound message.
  • GET /messages/{uid}/reactions: Reactions on a message. Returns {data: {users: [{name, phone, reaction, current}], reactions: {<emoji>: count}, total: N}} — an object, not a flat array. users lists who reacted (each carries the emoji they picked plus a current boolean marking your own workspace); reactions is a histogram keyed by emoji. Empty state is {users: [], reactions: {}, total: 0}.
  • GET /files: Files you uploaded via the API.
  • GET /webhooks: Your registered webhook subscriptions.

Writing

  • POST /messages: Send to a phone number. Body: {"phone":"+...","text":"..."}. Returns {"message_uid":"..."}.
  • POST /chats/{id}/messages: Send into an existing chat. Body: {"text":"..."}. Sender is whichever WhatsApp number owns the chat.
  • POST /chats/{id}/notes: Attach a private note to a chat. Not sent to WhatsApp, visible only inside TimelinesAI. Used for agent state and draft-review workflows.
  • POST /chats/{id}/labels: Set the labels on a chat — a REPLACE of the full set, see Things to know. Use for stage tracking, routing, stop-reply flags.
  • PATCH /chats/{id}: Update chat metadata — assignee (responsible_email), read state, and chatgpt_autoresponse_enabled. Turn the last one off before your agent starts replying, or TL's built-in responder will race yours.
  • PATCH /messages/{uid}/reactions: Set a reaction emoji on a message. Body takes the literal emoji character, not a shortcode.
  • POST /files_upload: Upload a file via multipart/form-data (field name: file). Returns data.uid, which you pass to POST /chats/{id}/messages as file_uid. No upload-by-URL variant.
  • POST /webhooks: Register a webhook subscription.
  • PUT /webhooks/{id}: Update or enable/disable a subscription.
  • DELETE /webhooks/{id}: Remove a subscription.

Before you start

Things to know

A few details that are easy to miss from the reference and will each cost you an hour if you don't know them.

!
No trailing slashes. GET /chats works. GET /chats/ returns the TimelinesAI 404 page — which looks like a network problem and isn't. Every URL in this guide is written without a trailing slash on purpose.
!
JSON bodies must be valid UTF-8 — and the trap is broader than heredocs. The parser rejects anything else. Em-dashes and smart quotes trigger it most, but ALSO any shell that might be running in a non-UTF-8 locale (Git Bash on Windows, some Docker base images, older SSH sessions) will corrupt those characters in both heredocs AND inline curl -d "..." args — em-dash becomes byte 0x97, which fails UTF-8 decode on the server. Write payloads to a file saved explicitly as UTF-8 and use curl --data-binary @file.json; never rely on inline -d for anything with smart punctuation.
!
Base URL is https://app.timelines.ai/integrations/api. Some older blog posts reference a different subdomain with an X-API-KEY header — that's outdated. Use Bearer auth on app.timelines.ai/integrations/api.
!
Attachment URLs in webhook payloads expire quickly. When a customer sends a photo or PDF, download it inline in the webhook handler, not from an async worker.
!
Personal numbers get banned for cold outreach. The send endpoints in this guide will let you send to anyone, but WhatsApp will ban your number quickly for unsolicited outbound. Only send outbound to people who messaged you first or who explicitly expect a transactional message. For broadcasts, use WhatsApp Business API — see Channel choice.
!
Sending is asynchronous. POST /messages returns a message_uid — that's a receipt, not a delivery confirmation. Use GET /messages/{uid}/status_history to check actual delivery.
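A tiny helper over the status_history array makes the "receipt, not confirmation" distinction concrete. A sketch with a hypothetical name (deliveryState), and the {status, timestamp} entry shape is an assumption to verify against a live GET /messages/{uid}/status_history response:

```javascript
// Rank the documented states so we can report the furthest one reached.
const STATUS_RANK = { Sent: 1, Delivered: 2, Read: 3 };

// Returns the "highest" delivery state seen so far, or null if the send
// is still in flight. Assumes entries look like { status, timestamp } --
// check the field names against a real response before relying on this.
function deliveryState(history) {
  let best = null;
  for (const entry of history ?? []) {
    if (STATUS_RANK[entry.status] && (!best || STATUS_RANK[entry.status] > STATUS_RANK[best])) {
      best = entry.status;
    }
  }
  return best;
}
```

Poll this after POST /messages until it reports at least "Delivered", with a cap on attempts.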
!
Response shapes vary — list endpoints nest under a typed key. Most list endpoints return {"data":{"<typed-key>":[...]}}, not a flat {"data":[...]}. GET /whatsapp_accounts nests under data.whatsapp_accounts, GET /chats under data.chats (with a data.has_more_pages pagination flag), GET /chats/{id}/messages under data.messages, GET /chats/{id}/labels under data.labels. Exceptions: GET /messages/{uid}/status_history and GET /files return data as a flat array. Code that assumes a uniform shape breaks on the first mismatch — read the response as res?.data?.<typed> ?? res?.data ?? [].
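Wrapping every list read in one normalizer turns the shape mismatch from a per-endpoint bug into a solved problem. A sketch (unwrapList is our name, not an SDK function):

```javascript
// Normalize any TimelinesAI list response to a plain array.
// typedKey is the documented key for the endpoint ("chats", "messages",
// "labels", "whatsapp_accounts"); flat-array endpoints fall through.
function unwrapList(res, typedKey) {
  return res?.data?.[typedKey] ?? (Array.isArray(res?.data) ? res.data : []);
}
```

Call it as unwrapList(await resp.json(), "chats") and the rest of your skill only ever sees arrays.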
!
Use from_me, not sender_phone, for direction on history reads. GET /chats/{id}/messages returns outbound messages (your own sends via the Public API) with sender_phone set to the team number, not an empty string. Naive direction detection via sender_phone != "" will label every outbound as inbound and invert the conversation. Trust the from_me boolean — true is outbound (your agent speaking), false is inbound (customer).
!
History field is uid; webhook field is message_uid. Same value, different key. When TL delivers a message:received:new webhook, the payload uses message_uid. When you read the same chat's history via GET /chats/{id}/messages, the field is uid. They're the same identifier. Code that stores UIDs from webhooks and looks them up in history must normalize both key names. History messages also use timestamp, not created_at.
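One normalizer at the boundary makes the key mismatch disappear before any storage or lookup code runs. A sketch (normalizeMessage is a hypothetical helper) that also folds in the timestamp/created_at split:

```javascript
// Accepts either a webhook message (message_uid, created_at) or a
// history message (uid, timestamp) and emits one canonical shape.
function normalizeMessage(raw) {
  return {
    uid: raw.message_uid ?? raw.uid,
    at: raw.timestamp ?? raw.created_at,
    text: raw.text,
    fromMe: raw.from_me === true,
  };
}
```

Store and dedupe on the canonical uid only; never persist both key names.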
!
Labels are a REPLACE, not an ADD. POST /chats/{id}/labels takes {"labels":["needs-human","intent/sales"]} — an array, plural key. The call replaces the full label set on the chat; to add a label you first GET the existing set, append yours, and POST the combined list. To clear all labels: POST {"labels":[]}. There is no working DELETE /chats/{id}/labels/{name} endpoint for removing a single label individually; you have to POST the whole new set minus the one you want gone.
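The read-modify-write dance is easy to get wrong under "add" thinking, so keep the set arithmetic in pure helpers and let your fetch layer just GET the current labels and POST what these return. A sketch with hypothetical names:

```javascript
// Body for POST /chats/{id}/labels that keeps existing labels.
// `existing` is the array you just read from GET /chats/{id}/labels.
function labelBodyWith(existing, toAdd) {
  return { labels: [...new Set([...(existing ?? []), toAdd])] };
}

// Removing one label is the same replace in reverse.
function labelBodyWithout(existing, toRemove) {
  return { labels: (existing ?? []).filter((l) => l !== toRemove) };
}
```

Because the endpoint replaces the whole set, two skills racing on the same chat can still clobber each other's labels; serialize label writes per chat if more than one skill tags.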
!
Fresh webhook registrations may replay recent history. On the very first events after you POST /webhooks, TL may deliver a burst of past message:received:new events — or the registration handshake may retry against a container that's still cold-starting. Either way, your receiver must be idempotent from event #1. Dedupe on message_uid from the moment it comes up, don't assume "clean slate" at subscription time.
!
Webhook payloads may arrive flat OR nested. TL sometimes delivers data.chat_id / data.whatsapp_account_id at the top level and sometimes nests them as data.chat.id / data.chat.whatsapp_account_id — the variation depends on event flavor and version. Destructure defensively: const chatId = data.chat_id ?? data.chat?.id. Attachments are worse: data.attachment_url, data.attachment?.url, and data.file_url are all possible. A receiver that only reads the flat shape silently fails on nested deliveries.
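A defensive extractor at the top of your receiver covers both chat shapes and all three attachment spellings in one place. Sketch (normalizeWebhook is our name):

```javascript
// Read flat and nested webhook payload shapes interchangeably.
function normalizeWebhook(data) {
  return {
    chatId: data.chat_id ?? data.chat?.id,
    whatsappAccountId: data.whatsapp_account_id ?? data.chat?.whatsapp_account_id,
    attachmentUrl: data.attachment_url ?? data.attachment?.url ?? data.file_url ?? null,
  };
}
```

Run it first thing in the handler; everything downstream then reads one shape.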
!
List endpoints return 50 items per page, fixed. ?limit=N, ?per_page=N, and ?page_size=N are silently ignored — the API picks its own size. Paginate with ?page=N (1-indexed) and stop when data.has_more_pages is false. Verified on /chats and /chats/{id}/messages; the pattern applies to other list reads. If you need "all messages since X", expect to walk pages, not a single big limit=1000 call.
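Walking pages is a short loop once the authenticated fetch is injected. A sketch (allMessages and getPage are hypothetical names) that assumes the data.messages / data.has_more_pages nesting documented in the reference:

```javascript
// Collects every message on a chat by walking ?page=N until the API says
// there are no more pages. getPage(n) is whatever does the authenticated
// GET /chats/{id}/messages?page=n for you; injecting it keeps this testable.
async function allMessages(getPage) {
  const out = [];
  for (let page = 1; ; page++) {
    const res = await getPage(page);
    out.push(...(res?.data?.messages ?? []));
    if (!res?.data?.has_more_pages) break;
  }
  return out;
}
```

For "messages since X", walk from page 1 and stop early once timestamps pass X instead of collecting everything.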
!
Disable TimelinesAI’s built-in ChatGPT autoresponder before your agent takes over. Every chat defaults to chatgpt_autoresponse_enabled: true and TL’s built-in responder will reply to incoming messages alongside your skill, producing double-replies the customer sees. PATCH /chats/{id} with {"chatgpt_autoresponse_enabled": false} turns it off per chat. There’s no bulk-disable on the Public API, so make this the first action your receiver runs on any chat it’s about to take over (and run it once on every existing chat your agent is already managing).
!
Notes live in /chats/{id}/messages with message_type=note and from_me=false, always. POST /chats/{id}/notes writes into the same timeline as real WhatsApp messages — when you fetch history, notes come back interleaved with from_me: false regardless of who wrote them. Any code that reasons about conversation flow MUST filter by .message_type == "whatsapp": a naive response-time calculation pairing inbound→outbound on from_me alone will treat every note as an inbound customer message and corrupt the latency, and a summary or scoring skill that feeds the full history to an LLM will see its own prior notes as customer utterances. Keep notes in the stream only when you explicitly want structured state (the qualifier pattern); drop them everywhere else.
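The filter itself is one line; the discipline is remembering to apply it everywhere history feeds a model or a metric. Sketch with a hypothetical helper name:

```javascript
// Keep only real WhatsApp traffic: notes out, so the model never reads
// your agent's own bookkeeping as a customer turn.
function promptHistory(messages) {
  return (messages ?? []).filter((m) => m.message_type === "whatsapp");
}
```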

When API calls fail

Five failure modes will eventually hit your skill. Each looks different in the response, and each needs a different remedy.

!
400 — invalid request body. The most common cause is non-UTF-8 characters in your JSON (em-dashes, smart quotes from a shell heredoc). Write payloads to a file with explicit UTF-8 encoding and use curl --data-binary @file.json. Same root cause as the UTF-8 tip above.
!
401 — token expired or revoked. Rotate from app.timelines.ai → Integrations → Public API → Regenerate. The new token takes effect immediately; the old one stops working at the same instant. Read it from your environment so you can rotate without redeploying.
!
429 — rate limited. The response carries a Retry-After header in seconds. Respect it and retry once. If you hit 429 repeatedly, your skill is sending faster than the ~30-messages-per-minute personal-number throttle allows — throttle client-side or split outbound load across multiple connected numbers.
!
5xx — gateway upstream issue. Retry with exponential backoff: 2s, 4s, 8s. After the third failure, surface to a human and stop sending. Don't infinite-loop — persistent 5xx is usually a TimelinesAI status page event, not something your retries will fix.
!
Webhook idempotency. TimelinesAI retries each delivery up to 3 times with a 5-second timeout. If your receiver doesn't dedupe, the customer sees the same reply 3 times. Store the incoming message_uid as a chat note before processing; if the next webhook arrives with the same uid, skip it. Notes are cheap and visible to humans for debugging.
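For a single long-lived receiver process, an in-memory set is the minimal version of that dedupe; serverless deployments need the chat-note or shared-store variant instead. Sketch:

```javascript
// Process-local dedupe for webhook retries. firstTimeSeen is a
// hypothetical helper: call it before handling, skip when it's false.
const seen = new Set();

function firstTimeSeen(messageUid) {
  if (seen.has(messageUid)) return false;
  seen.add(messageUid);
  return true;
}
```

In production, also cap the set's size (or expire old uids) so a busy inbox doesn't grow memory forever.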

A small shell pattern that branches on each status code:

# Capture the HTTP status into a variable, then branch.
http_code=$(curl -sS -o /tmp/resp.json -w "%{http_code}" \
  -X POST \
  -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
  -H "Content-Type: application/json" \
  --data-binary @/tmp/send.json \
  https://app.timelines.ai/integrations/api/messages)

# retry_once, retry_with_backoff, and attempt are yours to define;
# for 429, prefer sleeping the actual Retry-After value over a fixed 30s.
case "$http_code" in
  200|201) ;;                                # ok, carry on
  400)     echo "bad body: $(cat /tmp/resp.json)"; exit 1 ;;
  401)     echo "token expired - rotate"; exit 2 ;;
  429)     sleep 30; retry_once ;;
  5*)      sleep $((2 ** attempt)); retry_with_backoff ;;
esac

Operating safely in production

Two more things that aren't failures, but will save you a bad day later.

!
Debugging a silently dropped webhook. When your skill stops replying to a real conversation, check in this order: (1) TimelinesAI's webhook delivery log — if the event isn't even reaching your URL, the issue is registration or DNS, not your skill; (2) your hosting platform's function logs (Vercel logs, etc.) for an unhandled exception; (3) drop a breadcrumb note in the chat with POST /chats/{id}/notes before processing the event so you can confirm the receiver actually fired; (4) GET /webhooks to verify the subscription is still enabled (rotated tokens occasionally drop subscriptions silently).
!
One token covers the whole workspace. Your API token reads and writes every chat, message, label, note, file, and webhook in the workspace. Treat it like a password: store it in .env, never commit it, and rotate from Integrations → Public API → Regenerate when a teammate leaves or a deploy environment changes. The new token takes effect immediately; the old one stops working at the same instant. There's no per-skill or read-only scope today.

Context & memory

Conversation context

The webhook hands your skill a single message — the one that just arrived. No prior turns, no chat history, no running transcript. For a stateless FAQ or an after-hours responder that's plenty. For a conversational agent that has to remember what the customer said three turns ago, your skill has to fetch the context itself. Three patterns cover the full range, from the cheapest possible loop to the one that actually remembers.

1. Stateless

Reply to the latest message alone. One POST per turn, no GET, no history. This is the simplest loop and the default in the ready-made skills. Use it for FAQ bots, after-hours responders, and classification or routing — anywhere the answer depends only on the current message. The agent forgets everything between turns: if the customer writes a follow-up that refers back to something said earlier, it won't understand it.

2. Full context window

Fetch the last 20 messages before every reply and pass them to the model as prior conversation. Two API calls per turn instead of one, plus the extra prompt tokens on every call. Use it for conversational agents, multi-turn qualification, and anywhere the agent has to remember the thread. Twenty recent turns is usually enough — going wider rarely helps and makes the prompt expensive; going narrower makes the agent forget things the customer said a minute ago.

3. Adaptive context

Fetch context only when the incoming message looks like a follow-up. A cheap heuristic on the text — starts with a pronoun, references "that" or "it", arrives within 30 seconds of your previous reply, short one-word acknowledgements — decides whether to pull history or reply stateless. Most turns stay cheap; continuations get memory. Start with the full window; only move to adaptive once you know your cost budget and have real production traffic to tune the heuristic against.
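The heuristic can be as small as a regex plus a clock check. A starting-point sketch: looksLikeFollowUp and its word list are ours to tune against real traffic, not anything the API provides:

```javascript
// Openers that usually mean "this continues the previous turn".
const FOLLOW_UP_OPENERS = /^(it|that|this|he|she|they|yes|no|ok|okay|sure|thanks)\b/i;

// Decide whether to fetch history before replying. Pass null for
// secondsSinceOurLastReply when the agent hasn't replied yet.
function looksLikeFollowUp(text, secondsSinceOurLastReply) {
  const t = (text ?? "").trim();
  if (secondsSinceOurLastReply !== null && secondsSinceOurLastReply < 30) return true;
  if (t.split(/\s+/).length <= 2) return true;   // short acks: "ok", "yes please"
  return FOLLOW_UP_OPENERS.test(t);
}
```

When it returns true, run the full-context pattern; otherwise reply stateless.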

Fetching the window is a single GET (add ?page=N to walk further back). Messages nest under data.messages, newest last, mixing inbound customer turns with your own outbound replies and any notes your skill has written. Remember the ?limit param is silently ignored (pages are 50 messages, fixed), so slice the last 20 client-side:

$ curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    "https://app.timelines.ai/integrations/api/chats/12345678/messages?page=1"
{"status":"ok","data":{"messages":[
  {"uid":"INBOUND-UID-PLACEHOLDER",
   "text":"hi, is the blue one still available?",
   "from_me":false,
   "sender_phone":"+15550200",
   "message_type":"whatsapp","origin":"WhatsApp",
   "timestamp":"2026-04-14T10:22:03Z"},
  {"uid":"OUTBOUND-UID-PLACEHOLDER",
   "text":"Yes, one left. Want me to hold it for you?",
   "from_me":true,
   "sender_phone":"+15550100",
   "message_type":"whatsapp","origin":"Public API",
   "timestamp":"2026-04-14T10:22:41Z"},
  {"uid":"INBOUND-UID-PLACEHOLDER-2",
   "text":"yes please, until tomorrow",
   "from_me":false,
   "sender_phone":"+15550200",
   "message_type":"whatsapp","origin":"WhatsApp",
   "timestamp":"2026-04-14T10:23:08Z"}
],"has_more_pages":false}}

Note the history field names (uid, not message_uid; timestamp, not created_at) and that outbound rows carry your team number in sender_phone: use from_me for direction.

Gotchas that bite in production

!
Filter notes and your own outbound out of the history. GET /chats/{id}/messages returns everything on the chat: inbound customer turns, your own outbound replies, and any notes your skill wrote for state or debugging. Notes carry message_type == "note" and origin == "Public API" — filter them before passing history to the model, otherwise your agent sees its own internal bookkeeping as if it were real conversation and starts replying to its own notes.
!
Starting mid-conversation. When your skill sees a chat for the first time, the chat may already have a long prior history from before you subscribed the webhook. Always fetch recent context on a first-seen chat — otherwise you'll reply to a brand-new "hi" as if it were a fresh opener, when the customer is actually thirty turns deep in an existing thread with your team.
!
Dedup webhook bursts. When a customer types three messages in four seconds, you get three webhooks — possibly in parallel. Without a per-chat lock or a short debounce, the customer gets three replies overlapping. Pattern: hold the first webhook for 1–2 seconds, then fetch history once and reply to the combined state. Webhook retries (up to 3 attempts, 5-second timeout each) also arrive the same way — dedup on message_uid before processing.
!
Order by created_at, not by message_uid. Message UIDs are workspace-scoped and not globally sortable. When you assemble history into a prompt, order turns by the created_at timestamp from the payload, not by UID. Cross-workspace UIDs aren't shared either — the same physical WhatsApp message delivered to two different TimelinesAI workspaces has a different UID in each one.
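The first, third, and fourth gotchas combine into one history-assembly step. A sketch, assuming the field names shown in the payload above (the function name is ours):

```javascript
// Turn a raw message list from GET /chats/{id}/messages into
// model-ready history: drop notes, dedup on message_uid (webhook
// retries), order by created_at (UIDs are not sortable).
function assembleHistory(messages) {
  const seen = new Set();
  return messages
    .filter((m) => m.message_type !== "note") // notes are internal bookkeeping, not conversation
    .filter((m) => {
      if (seen.has(m.message_uid)) return false;
      seen.add(m.message_uid);
      return true;
    })
    .sort((a, b) => new Date(a.created_at) - new Date(b.created_at));
}
```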

State persistence

OpenClaw skills don't keep state in memory across invocations. WhatsApp conversations are multi-turn. The fix is to store state on the chat itself:

  • Labels hold discrete stage — discovery/q1, qualified, escalate. Add with POST /chats/{id}/labels, read with GET /chats/{id}/labels.
  • Notes hold structured data — team_size=8, draft replies, lead scores. Add with POST /chats/{id}/notes. Read by iterating GET /chats/{id}/messages and filtering message_type == "note".

The upside: crash safety, visibility to human teammates, clean handoff — a human can clear a label to rewind the flow, or add escalate to take over. The trade-off: every state transition is an HTTP call. For customer-facing flows this is fine.
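As a sketch of labels-as-state, here is one way to map the stage label to the next action in the qualification flow. The stage names follow this guide; the questions, the FLOW table, and the function name are ours:

```javascript
// Read the chat's labels, find the current stage, and return what to
// ask next plus the label to advance to. Crash-safe because the stage
// lives on the chat, not in memory.
const FLOW = {
  "discovery/q1": { ask: "What's your use case?", next: "discovery/q2" },
  "discovery/q2": { ask: "How big is your team?", next: "discovery/q3" },
  "discovery/q3": { ask: "What's your timeline?", next: "qualified" },
};

function nextStep(labels) {
  const stage = labels.find((l) => l in FLOW);
  if (!stage) return null; // chat isn't in the flow — do nothing
  return { stage, ...FLOW[stage] };
}
```

A human clearing the stage label in the inbox makes nextStep return null, which is exactly the "rewind the flow" handoff described above.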


Sending from the right number

If your workspace has more than one WhatsApp number connected, your skill has to make sure it sends from the intended one. Every chat has a whatsapp_account_id field holding the full JID (like PHONE@s.whatsapp.net) of the number that owns it. When you POST /chats/{id}/messages, the sender is always that JID — you don't pick it, the chat does.

The pattern:

  1. Hard-code the allowed sender JID in each skill’s environment (e.g. ALLOWED_SENDER_JID).
  2. Before sending, GET /chats/{id} and compare whatsapp_account_id to your allowed JID.
  3. If they don’t match, skip the send — surface to a human or drop the event.

Two extra HTTP calls per turn, zero chance of sending from the wrong persona. For single-number workspaces this doesn't apply.
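The comparison itself is one line; what matters is returning a reason you can surface to a human instead of silently dropping. A sketch (function name ours; `chat` is the object returned by GET /chats/{id}):

```javascript
// Sender guard for multi-number workspaces: only send if the chat is
// owned by the JID this skill is allowed to speak from.
function guardSend(chat, allowedJid) {
  if (chat.whatsapp_account_id === allowedJid) return { send: true };
  return {
    send: false,
    reason: `chat belongs to ${chat.whatsapp_account_id}, not ${allowedJid}`,
  };
}
```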


Incoming

Incoming messages — what your agent can handle

Every time a customer messages your WhatsApp number, TimelinesAI fires a message:received:new event to your webhook with the chat id, text, sender phone, and attachments. Your skill reads the event, decides what to do, and replies with POST /chats/{chat_id}/messages. Everything below is a variation on that loop.

1

Auto-reply to every incoming message

Answer questions about shipping, returns, and opening hours. For anything else, draft a reply and tag the chat for review.
Between 22:00 and 08:00 reply automatically. During business hours just flag incoming chats for me.
Handle my WhatsApp replies while I'm in this meeting.
How it works

skill receives the webhook payload, composes a reply, calls POST /chats/{id}/messages with {"text":"..."}. Stateless by default — one API call per turn. For multi-turn conversations where the agent needs to remember prior messages, see Conversation context below.

2

FAQ handler with escalation to a human

Answer questions about shipping, returns, and opening hours. For anything else, tag the chat needs-human and stop replying until I clear the tag.
How it works

before replying, check GET /chats/{id}/labels. If the chat has needs-human, exit without sending. If the incoming text matches an FAQ topic, reply. Otherwise POST /chats/{id}/labels with the escalation tag and exit silently. Your team's inbox filters on the label.
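The decision logic is small enough to sketch as a pure function. The FAQ topics and answers below are illustrative, not part of any API:

```javascript
// One turn of the FAQ-with-escalation loop: honor the stop label,
// answer known topics, otherwise escalate and go silent.
const FAQ = [
  { match: /ship|deliver/i,  reply: "We ship within 2 business days." },
  { match: /return|refund/i, reply: "Returns are free within 30 days." },
  { match: /hours|open/i,    reply: "We're open Mon-Fri, 9:00-18:00." },
];

function decide(text, labels) {
  if (labels.includes("needs-human")) return { action: "skip" }; // a human owns the chat
  const hit = FAQ.find((f) => f.match.test(text));
  if (hit) return { action: "reply", text: hit.reply };
  return { action: "escalate", label: "needs-human" };           // tag and exit silently
}
```

The caller maps "reply" to POST /chats/{id}/messages and "escalate" to the label POST described above.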

3

Route conversations to the right person

For each incoming chat, figure out if it's sales, support, or billing and tag it. Assign sales chats to alex@ours and billing to jamie@ours.
How it works

classify intent from the text, POST /chats/{id}/labels with intent/sales or similar, then PATCH /chats/{id} with {"responsible_email":"..."} to hand the chat off in the TimelinesAI inbox.

4

Qualify leads through a question sequence

For any new chat from our Facebook ad campaign, ask about their use case, team size, and timeline. Store answers as notes on the chat. Tag it qualified if team size is 5 or more.
How it works

labels track which question you're currently on (discovery/q1, q2, q3), notes hold the answers. Each turn: read the current stage label, parse incoming text as the answer, write it via POST /chats/{id}/notes, advance the label, ask the next question. No external database — see State persistence below.

$ # 1. Read current stage from the chat's labels. Labels nest under
$ # data.labels as a string array — see Response shapes in Things to know.
$ curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    https://app.timelines.ai/integrations/api/chats/12345678/labels
{"status":"ok","data":{"labels":["discovery/q1"]}}

$ # 2. Save the customer's answer as a structured note. Notes are stored
$ # as messages with message_type=note and aren't pushed to WhatsApp.
$ cat > /tmp/note.json <<'JSON'
{"text":"team_size=12"}
JSON
$ curl -sS -X POST \
    -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    -H "Content-Type: application/json" \
    --data-binary @/tmp/note.json \
    https://app.timelines.ai/integrations/api/chats/12345678/notes
{"status":"ok","data":{"message_uid":"NOTE-UID-PLACEHOLDER"}}

$ # 3. Advance the stage label. POST /labels REPLACES the full set, so
$ # read existing labels, swap q1 for q2, POST the combined list. The
$ # body key is "labels" (plural, array) — not "label" singular.
$ curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    https://app.timelines.ai/integrations/api/chats/12345678/labels \
  | jq -c '.data.labels | map(if . == "discovery/q1" then "discovery/q2" else . end) | {labels: .}' \
  > /tmp/labels.json
$ cat /tmp/labels.json
{"labels":["discovery/q2"]}
$ curl -sS -X POST \
    -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    -H "Content-Type: application/json" \
    --data-binary @/tmp/labels.json \
    https://app.timelines.ai/integrations/api/chats/12345678/labels
{"status":"ok","data":{"labels":["discovery/q2"]}}
5

Understand photos, PDFs, and receipts

When a customer sends a photo of a receipt, extract the amount and vendor and add them as a note.
If someone sends a PDF, classify it as invoice / contract / ID and tag the chat accordingly.
How it works

webhook payloads include an attachment URL. Download it in the handler (it expires quickly), process with OpenClaw's vision or document tools, write extracted data back via POST /chats/{id}/notes.

$ # The attachment URL in the webhook payload expires in ~15 minutes.
$ # Download it inline in the receiver, before queuing async work.
$ ATTACH_URL="https://files.timelines.ai/abc123/receipt.jpg?expires=..."
$ curl -sS "$ATTACH_URL" -o /tmp/receipt.jpg

$ # Process /tmp/receipt.jpg with OpenClaw vision tools, then write
$ # the extracted fields back as a structured note on the chat:
$ curl -sS -X POST \
    -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    -H "Content-Type: application/json" \
    --data-binary '{"text":"receipt: amount=$42.50 vendor=Cafe Madrid"}' \
    https://app.timelines.ai/integrations/api/chats/12345678/notes
6

Transcribe voice notes and reply

Transcribe inbound voice notes. Reply in text — send a voice reply through the TimelinesAI dashboard if you really need one back.
How it works

webhook delivers a URL to the voice file. Download, transcribe, compose a text reply via POST /chats/{id}/messages. Voice replies are a TimelinesAI dashboard feature today and are not in the current public API reference — if your skill needs to send voice replies programmatically, contact TimelinesAI support to confirm whether the legacy voice_message endpoint is still available on your workspace.

7

Match the customer's language

If the customer writes in Spanish, reply in Spanish. Switch languages mid-conversation if they do.
How it works

pure OpenClaw-side reasoning on the incoming text. TimelinesAI just carries the reply.

8

React to messages without sending a full reply

React with 👀 to every incoming message so customers know I've seen it, then take my time composing the real reply.
How it works

PATCH /messages/{uid}/reactions with {"reaction":"👀"}. The reaction field must carry the literal emoji character — shortcodes like "eyes" or ":eyes:" are rejected with HTTP 400 "Reaction has invalid format". No message credit consumed — reactions are lightweight.

$ # Send the raw emoji character in the body. Save to a UTF-8 file and
$ # use --data-binary so shells can't downgrade it.
$ printf '{"reaction":"👀"}' > /tmp/react.json

$ curl -sS -X PATCH \
    -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    -H "Content-Type: application/json" \
    --data-binary @/tmp/react.json \
    https://app.timelines.ai/integrations/api/messages/INBOUND-UID-PLACEHOLDER/reactions
{"status":"ok"}

Outbound

Outbound — messages your agent starts

Messages your agent initiates, not replies. Triggered by an event in your other tools (a new order, a failed payment, a meeting booked) or a direct human instruction about a specific person.

Before every capability in this section: the customer either opened the thread recently (within WhatsApp's 24-hour session window) or explicitly expects this message. If neither is true, don't send it from a personal number — that's Business API territory.

9

Send a transactional message by name or to a new recipient

Message John that his invoice is ready.
Text the plumber our new office address so he can deliver the parts.
Send the signed contract to the client who just wired the deposit.
How it works

for an existing chat, look it up via GET /chats?name=John (or your CRM) and call POST /chats/{id}/messages. For a new recipient, POST /messages with {"phone":"+...","text":"..."}. The whatsapp-send skill in the companion repo handles both modes with UTF-8-safe serialization.

$ cat > /tmp/send.json <<'JSON'
{"phone":"+15550200",
 "text":"Hi - your order shipped. Tracking: ABC123."}
JSON

$ curl -sS -X POST \
    -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    -H "Content-Type: application/json" \
    --data-binary @/tmp/send.json \
    https://app.timelines.ai/integrations/api/messages
{"status":"ok","data":{"message_uid":"OUTBOUND-UID-PLACEHOLDER"}}
10

Scheduled follow-ups inside an active conversation

Every Monday at 9am, check chats tagged to-follow-up that had customer activity in the last 24 hours but no reply from us, and send a gentle check-in.
How it works

schedule it with an OpenClaw cron job or standing order (see OpenClaw's automation docs). The job pulls the audience via GET /chats?label=to-follow-up&read=false, reads each chat's last_message_timestamp, and only sends when the last customer turn was less than 24 hours ago. Outside that window a follow-up becomes a re-engagement touch and you're back to Business API territory — see Channel choice.
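The 24-hour check is the part worth getting exactly right. A sketch, assuming last_message_timestamp parses as an ISO timestamp (normalize it first if your runtime rejects the API's offset format):

```javascript
// Keep only chats whose last customer activity is still inside the
// 24-hour session window — outside it, a follow-up is re-engagement.
const DAY_MS = 24 * 60 * 60 * 1000;

function withinSessionWindow(chat, now = Date.now()) {
  const last = new Date(chat.last_message_timestamp).getTime();
  return now - last < DAY_MS;
}
```

The cron job filters its GET /chats?label=to-follow-up&read=false audience through this before sending anything.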

11

Trigger messages from events in your other tools

When a HubSpot deal hits 'demo scheduled', send a WhatsApp confirmation with the meeting link.
When Stripe reports a failed payment, send a polite recovery message with a link to update the card.
When a Calendly booking is created, send a pre-meeting reminder the morning of.
How it works

your existing tool (HubSpot, Stripe, Calendly, Pipedrive) fires its own webhook into the same receiver you set up in Setup — add a handler keyed on event source or request path, branch off from the TimelinesAI handler, and call POST /messages (for a new recipient) or POST /chats/{id}/messages (into an existing chat). WhatsApp becomes a delivery channel for any workflow you already have. Because the sends are downstream of a customer action in the upstream tool, they're transactional by nature — well within the personal-number rules.
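The branching step can be sketched as a path switch that builds the outbound payload. The webhook paths and upstream event field names below are hypothetical — each tool has its own payload shape:

```javascript
// Route an upstream webhook (keyed on request path) to a WhatsApp send
// payload for POST /messages. Returns null for unknown sources.
function buildSend(path, event) {
  switch (path) {
    case "/webhooks/stripe":   // hypothetical: failed-payment recovery
      return {
        phone: event.customer_phone,
        text: `Your payment didn't go through — update your card here: ${event.update_url}`,
      };
    case "/webhooks/calendly": // hypothetical: pre-meeting reminder
      return {
        phone: event.invitee_phone,
        text: `Reminder: your meeting is at ${event.start_time}.`,
      };
    default:
      return null; // unknown source — ignore, don't send
  }
}
```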

12

Send files and documents on request

Generate the quote PDF and send it to the customer who just asked for pricing.
Email the contract as a PDF and also drop it in the WhatsApp chat for the client.
How it works

two steps. Upload the file bytes via POST /files_upload as multipart/form-data (there is no upload-by-URL endpoint — your agent must download the file itself first, then POST it). The response returns a uid you pass back to POST /chats/{id}/messages in the file_uid field. The customer asked for the document — this is a reply to their request, not cold outreach.

$ # Step 1 - upload the file bytes as multipart/form-data.
$ # The form field name is "file". The response gives you a uid.
$ curl -sS -X POST \
    -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    -F "file=@/path/to/quote.pdf" \
    https://app.timelines.ai/integrations/api/files_upload
{"status":"ok","data":{
  "uid":"FILE-UID-PLACEHOLDER",
  "filename":"quote.pdf",
  "size":128453,
  "mimetype":"application/pdf",
  "uploaded_by_email":"you@yourcompany.com",
  "uploaded_at":"2026-04-14 10:22:03 +0000",
  "temporary_download_url":"https://tl-prod-data.s3.amazonaws.com/..."
}}

$ # Step 2 - attach the uploaded file to a chat message using file_uid.
$ cat > /tmp/send.json <<'JSON'
{"text":"Quote attached for your review.",
 "file_uid":"FILE-UID-PLACEHOLDER"}
JSON
$ curl -sS -X POST \
    -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    -H "Content-Type: application/json" \
    --data-binary @/tmp/send.json \
    https://app.timelines.ai/integrations/api/chats/12345678/messages
{"status":"ok","data":{"message_uid":"OUTBOUND-UID-PLACEHOLDER"}}
13

Check whether a message was actually delivered

Did John actually receive the invoice message I sent this morning?
How it works

every send returns a message_uid. Later, GET /messages/{uid}/status_history returns the Sent / Delivered / Read timeline. Delivery is usually within a second or two on an active number. The whatsapp-delivery-check skill in the companion repo wraps this.

$ curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    https://app.timelines.ai/integrations/api/messages/OUTBOUND-UID-PLACEHOLDER/status_history
{"status":"ok","data":[
  {"status":"Sent",     "timestamp":"2026-04-12 12:28:40 +0000"},
  {"status":"Delivered","timestamp":"2026-04-12 12:28:41 +0000"}
]}

CRM

CRM and analytics

Read endpoints give your agent enough data to answer analytical questions and sync state with your CRM in natural language.

14

Response time reporting

What was our average first-reply time on WhatsApp this week?
Who on my team is slowest to reply?
How it works

pull recent chats, then for each chat fetch the message timeline, find the first outbound message after each inbound, and aggregate the deltas client-side. Two API endpoints, no special analytics call.

$ # Pull the last page of a chat's messages (50 per page, fixed).
$ # Messages nest under data.messages — not a flat .data[] array.
$ # FILTER message_type=="whatsapp" to exclude notes (see Things to know
$ # below — notes live in the same timeline with from_me=false and would
$ # otherwise corrupt the response-time calculation).
$ curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    "https://app.timelines.ai/integrations/api/chats/12345678/messages?page=1" \
  | jq -r '.data.messages[] | select(.message_type=="whatsapp") | "\(.timestamp)\tfrom_me=\(.from_me)\t\(.text[0:40])"'

# 2026-04-12 12:28:40 +0300  from_me=false  Hi - is the order shipped?
# 2026-04-12 12:30:15 +0300  from_me=true   Yes - tracking ABC123, ETA 2-3
# 2026-04-12 14:02:11 +0300  from_me=false  Got it, thanks!
# ...

$ # Compute first-reply latency from each consecutive (false -> true)
$ # pair, then average across all chats. Pure client-side aggregation.
$ # For full history walk ?page=2, ?page=3, ... until data.has_more_pages is false.
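The client-side aggregation sketched in the comments above looks roughly like this (function name ours; assumes the list is already filtered to message_type=="whatsapp", time-ordered, with timestamps normalized to a format Date can parse):

```javascript
// First-reply latencies in seconds: for each run of inbound messages,
// measure from the FIRST unanswered inbound to the next outbound.
function firstReplyLatencies(messages) {
  const out = [];
  let pendingInbound = null; // epoch ms of the oldest unanswered inbound
  for (const m of messages) {
    if (!m.from_me) {
      if (pendingInbound === null) pendingInbound = new Date(m.timestamp).getTime();
    } else if (pendingInbound !== null) {
      out.push((new Date(m.timestamp).getTime() - pendingInbound) / 1000);
      pendingInbound = null;
    }
  }
  return out;
}
```

Average the concatenated arrays across all chats to get the weekly first-reply number.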
15

Unanswered message detection

How many messages did we receive yesterday? How many are still unanswered?
Show me every chat with an incoming message in the last 24 hours and no reply.
How it works

GET /chats?read=false for unread chats, then fetch each chat's messages and filter .data.messages[] to message_type=="whatsapp" AND from_me==false — notes also carry from_me=false and would inflate the unanswered count. See Things to know.
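Per chat, the "still unanswered" check reduces to looking at the last real message. A sketch (function name ours; assumes the list is time-ordered oldest-first):

```javascript
// True when the newest real WhatsApp message in the chat is inbound.
// Notes share from_me=false, so exclude them before looking at the tail.
function hasUnanswered(messages) {
  const real = messages.filter((m) => m.message_type === "whatsapp");
  const last = real[real.length - 1];
  return last !== undefined && last.from_me === false;
}
```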

16

Summarize conversations on demand

Summarize the entire conversation with ACME Corp. What are their pain points?
Give me a one-paragraph brief on every chat I haven't replied to yet.
How it works

fetch GET /chats/{id}/messages, filter to .data.messages[] where message_type="whatsapp" (notes would otherwise leak your agent's own prior scribbles into the summary as if they were customer utterances — see Things to know), then let OpenClaw summarize. Paginate with ?page=N for long threads.

17

Enrich your CRM with WhatsApp activity

For every new chat this week, look up the number in HubSpot. If it's a contact, tag the chat with their deal stage. If not, create the contact.
How it works

combine TimelinesAI reads with your CRM's API. Write results back to the chat via POST /chats/{id}/labels and POST /chats/{id}/notes.

$ # 1. Pull recent chats with their phone numbers. /chats returns 50 per
$ # page under data.chats — paginate with ?page=N if you need more.
$ curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    "https://app.timelines.ai/integrations/api/chats?page=1" \
  | jq -r '.data.chats[] | "\(.id)\t\(.phone)"'

# 12345678   +15550100
# 12345679   +15550200
# ...

$ # 2. For each phone, look up your CRM (HubSpot/Pipedrive/Close).
$ #    Pseudo-code:
$ #      contact = hubspot.search_by_phone(phone)
$ #      stage   = contact.deal_stage if contact else "unknown"

$ # 3. Add the deal stage label WITHOUT wiping existing labels.
$ # POST /labels is REPLACE semantics (full set), so GET current labels,
$ # append the new one, and POST the combined list.
$ curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    https://app.timelines.ai/integrations/api/chats/12345678/labels \
  | jq -c '.data.labels + ["hubspot/qualified-lead"] | unique | {labels: .}' \
  > /tmp/labels.json
$ curl -sS -X POST \
    -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    -H "Content-Type: application/json" \
    --data-binary @/tmp/labels.json \
    https://app.timelines.ai/integrations/api/chats/12345678/labels
18

Score leads from conversation content

Score every chat tagged inbound-lead from 1 to 10 on fit and urgency. Write the score as a note.
How it works

LLM reasoning over GET /chats/{id}/messages — filter to message_type="whatsapp" first, otherwise your own previous lead-score notes will be re-read as customer utterances and drift the score on every run. Write the result via POST /chats/{id}/notes with a predictable prefix (e.g. "lead_score: fit=8 urgency=6") so your next pass can find and supersede it.
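Finding and parsing the previous score so it can be superseded looks roughly like this (function name ours; the "lead_score:" prefix follows the suggestion above):

```javascript
// Locate the most recent lead_score note on the chat so the next run
// supersedes it instead of re-reading old scores as conversation.
const PREFIX = "lead_score:";

function latestLeadScore(messages) {
  const notes = messages.filter(
    (m) => m.message_type === "note" && (m.text || "").startsWith(PREFIX),
  );
  if (notes.length === 0) return null;
  const last = notes[notes.length - 1].text; // assumes oldest-first ordering
  const fit = /fit=(\d+)/.exec(last);
  const urgency = /urgency=(\d+)/.exec(last);
  return {
    fit: fit ? Number(fit[1]) : null,
    urgency: urgency ? Number(urgency[1]) : null,
  };
}
```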


Operations

Scale and handoff

Patterns for human-in-the-loop workflows, multi-agent routing, and conversation memory. All four capabilities below are designed to coexist with humans working the same chats from the TimelinesAI shared inbox.

19

Draft replies for human review instead of sending

For every new incoming message, draft a reply and save it as a note. Don't send — I'll review and send them myself.
How it works

POST /chats/{id}/notes with the draft text instead of /messages. The note appears in the same chat view your teammates already use.

20

Hand off to a human when the agent is stuck

If the conversation goes more than 5 turns without resolution, or the customer asks for a human, tag escalate and stop replying until I clear the tag.
How it works

count turns with GET /chats/{id}/messages, check stop-reply labels with GET /chats/{id}/labels before every send. If escalation triggers, POST /chats/{id}/labels with escalate and exit.

$ # 1. Count outbound 'whatsapp' messages (skip notes) in this chat.
$ # Messages nest under data.messages — not a flat .data[] array.
$ TURNS=$(curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    "https://app.timelines.ai/integrations/api/chats/12345678/messages?page=1" \
  | jq '[.data.messages[] | select(.from_me==true and .message_type=="whatsapp")] | length')
$ echo $TURNS
8

$ # 2. Past the threshold? Append 'escalate' to the chat's labels and stop.
$ # POST /labels REPLACES the full set, so read existing first, combine,
$ # then POST — otherwise you wipe every other label on the chat.
$ if [ "$TURNS" -ge 5 ]; then
    curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
      https://app.timelines.ai/integrations/api/chats/12345678/labels \
    | jq -c '.data.labels + ["escalate"] | unique | {labels: .}' \
    > /tmp/labels.json
    curl -sS -X POST \
      -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
      -H "Content-Type: application/json" \
      --data-binary @/tmp/labels.json \
      https://app.timelines.ai/integrations/api/chats/12345678/labels
  fi

$ # 3. Before every future send, GET /chats/{id}/labels and bail
$ #    if .data.labels contains 'escalate' or 'needs-human'.
21

Run multiple specialized agents on one inbox

Sales AI handles pricing questions, support AI handles product questions. Route by intent; if both are unsure, escalate to me.
How it works

two specialist skills in one OpenClaw workspace (one for sales, one for support) plus a third intent-classifier skill that runs first on every incoming message, tags the chat with the chosen specialist, and exits. Each specialist reads the chat's intent label before replying and bails if the label points at the other one — so exactly one skill fires per message.
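Each specialist's gate is a one-label check. A sketch, assuming the classifier writes labels in the intent/sales style mentioned earlier (the function name is ours):

```javascript
// Specialist gate: fire only when the classifier tagged this chat for
// us. No intent label yet means the classifier hasn't run — stay quiet.
function shouldHandle(labels, specialist) {
  const intent = labels.find((l) => l.startsWith("intent/"));
  if (!intent) return false;
  return intent === `intent/${specialist}`;
}
```

Because every specialist runs the same gate, exactly one skill replies per message even though all of them receive the webhook.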

22

Remember past conversations with the same customer

Last week you mentioned you were traveling — how did that go?
How it works

OpenClaw's own memory plus GET /chats/{id}/messages for the full WhatsApp history. Chat history survives across invocations because it lives on TimelinesAI.


Testing

Test your agent end-to-end with a second WhatsApp number

After setup you can read through the capabilities listed here, but you can’t see your agent actually work until a real customer messages it. That’s a slow loop for iteration. Here’s a better one: connect a second WhatsApp number to the same TimelinesAI workspace, use it as a scripted customer, and watch your real agent skill handle the conversation end-to-end. No mocks, no fake webhooks — it’s your real agent, your real API, just with a second number playing the role of the human on the other side.

Prerequisite — two connected WhatsApp numbers. This pattern needs two numbers in your TimelinesAI workspace: one acting as the customer persona, one running your real agent stack. If you only have one connected number right now, skip ahead to capability #19 (Draft replies for human review) instead — it gives you a slower but still useful iteration loop where your agent drafts replies as notes and you approve them in the dashboard.

The pattern

TimelinesAI lets multiple WhatsApp numbers live in one workspace and routes their inbound webhooks through the same receiver. The trick is the `whatsapp_account_id` field on every webhook payload — it carries the JID of the number that received the message. Read it, switch on it, dispatch to either a persona step (when your agent just replied to the customer) or an agent step (when the customer just messaged your agent). Each side sends via the normal POST /messages or POST /chats/{id}/messages from the other side’s perspective.

Architecture

  [+1 555 0100]                   [+1 555 0200]
     persona                    agent under test
        |                              |
        |    one TimelinesAI workspace |
        |                              |
        +------->  Public API  <-------+
                        |
                 message:received:new
                        |
                        v
           +--------------------------+
           |  Your test receiver      |
           |  switch (accountJid) {   |
           |    persona_jid -> next   |
           |    agent_jid   -> skill  |
           |  }                       |
           +--------------------------+

Persona sender

The persona side is a scripted customer: a list of lines to say next. When it’s time for the next turn, POST to /messages with the agent’s phone number as the recipient. TimelinesAI handles the routing because both numbers are in one workspace.

// persona-sender.js — send the next scripted customer turn from the
// persona WhatsApp number to the agent-under-test number. TimelinesAI
// routes the send because both numbers share one workspace.

const PERSONA_SCRIPT = [
  "hi I saw your ad",
  "we want to reduce customer churn",
  "we are 25 people",
  "we need to roll out in Q3",
];
let idx = 0;

export async function sendNextPersonaTurn() {
  if (idx >= PERSONA_SCRIPT.length) return;
  const text = PERSONA_SCRIPT[idx++];

  await fetch("https://app.timelines.ai/integrations/api/messages", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${process.env.TIMELINES_AI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: Buffer.from(JSON.stringify({
      phone: process.env.AGENT_UNDER_TEST_PHONE,
      text,
    }), "utf-8"),
  });
  console.log(`[persona] sent: ${text}`);
}

Receiver jid-switch

One receiver, one branch per side. Everything else you already know from Setup — the `?secret=` auth, the `from_me` echo filter, the flat-vs-nested payload fallback — applies here unchanged.

// api/webhook.js — route events by which number received them
export default async function handler(req, res) {
  if (req.query.secret !== process.env.WEBHOOK_SECRET) return res.status(404).end();
  const { event_type, data } = req.body || {};
  if (event_type !== "message:received:new") return res.status(200).end();
  if (data?.from_me === true) return res.status(200).end();
  res.status(200).json({ ok: true });

  const accountJid = data.whatsapp_account_id ?? data.chat?.whatsapp_account_id;
  if (accountJid === process.env.AGENT_UNDER_TEST_JID) {
    // Customer just messaged the agent — run your agent-under-test skill
    await handleAgentTurn(data);
  } else if (accountJid === process.env.PERSONA_JID) {
    // Agent just replied to the persona — advance the scripted scenario
    await advancePersona(data);
  }
}

Those two snippets are the core idea. The full harness — persona scripts, scenario loader, CLI kickoff, split-pane color logger, Vercel config — lives in examples/test-harness/ in the companion repo, with its own README that walks through deploy, env vars, webhook registration, and running your first scenario end-to-end.

Watching it happen

The harness runs autonomously — it doesn’t need you to approve each reply. Your oversight happens in the TimelinesAI dashboard: open the chat between your persona and agent numbers in another browser tab and every turn shows up live. If the agent says something dumb, send a manual reply from the agent’s inbox to steer the conversation. If you need to change behavior mid-run, add a label or write a note on the chat — both are visible to the agent’s skill on its next turn. Stop the run by deleting the registered webhook or shutting down the Vercel function; the scenario halts immediately.


Example

Example: one full round trip

A concrete five-step loop showing what the API looks like in practice — how your agent sends an outbound, confirms delivery, processes the customer's reply, and sends a follow-up. Phone numbers, chat IDs, and message UIDs below are placeholders — substitute your own.

Placeholders used throughout this example:

Your business number   → +1 555 0100 (JID: 15550100@s.whatsapp.net)
Your customer's number → +1 555 0200
API token              → $TIMELINES_AI_API_KEY
1

Confirm your token works and list your connected numbers

$ curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    https://app.timelines.ai/integrations/api/whatsapp_accounts

{"status":"ok","data":{"whatsapp_accounts":[
  {"id":"15550100@s.whatsapp.net","phone":"+15550100",
   "status":"active","account_name":"Your Business"}
]}}

You should see a list of your connected numbers. If the status is anything other than active, fix that in the TimelinesAI dashboard before continuing.

2

Send an outbound message

Write the payload to a file with UTF-8 encoding, then curl it. This is the pattern for every outbound.

$ cat > /tmp/send.json <<'JSON'
{"phone":"+15550200",
 "text":"Hi - your order shipped. Tracking: ABC123."}
JSON

$ curl -sS -X POST \
    -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    -H "Content-Type: application/json" \
    --data-binary @/tmp/send.json \
    https://app.timelines.ai/integrations/api/messages

{"status":"ok","data":{"message_uid":"OUTBOUND-UID-PLACEHOLDER"}}

The response is a receipt, not a delivery confirmation. Hold on to the message_uid for the next step.

3

Check delivery status

$ curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    .../messages/OUTBOUND-UID-PLACEHOLDER/status_history

{"status":"ok","data":[
  {"status":"Sent",     "timestamp":"2026-04-12 12:28:40 +0000"},
  {"status":"Delivered","timestamp":"2026-04-12 12:28:41 +0000"}
]}

Sent → Delivered is typically within a second on an active number. The Read status appears later, when the recipient actually opens the chat.

4

The customer replies, your webhook fires

When the customer responds, TimelinesAI posts to your registered webhook URL:

{
  "event_type": "message:received:new",
  "data": {
    "chat_id": 12345678,
    "message_uid": "INBOUND-UID-PLACEHOLDER",
    "sender_phone": "+15550200",
    "sender_name": "Customer",
    "text": "Thanks! When will it arrive?",
    "timestamp": "2026-04-12 12:29:20 +0000"
  }
}

Your receiver acks with a 200 immediately, then hands the payload to your OpenClaw skill.

5

Reply back from the same chat

$ cat > /tmp/reply.json <<'JSON'
{"text":"Estimated delivery is 2-3 business days. You'll get tracking updates to this chat."}
JSON

$ curl -sS -X POST \
    -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
    -H "Content-Type: application/json" \
    --data-binary @/tmp/reply.json \
    https://app.timelines.ai/integrations/api/chats/12345678/messages

{"status":"ok","data":{"message_uid":"REPLY-UID-PLACEHOLDER"}}

Because you're sending into an existing chat via /chats/{id}/messages, the sender is automatically the WhatsApp number that owns the chat — you don't pick it, the chat record does. This is the whole reason you don't accidentally send from the wrong number in multi-number workspaces.


Limits

Limits and caveats

What this guide does NOT cover, and why.

!
Personal numbers get banned for cold outreach. Unsolicited broadcasts from personal numbers are not a supported use case. The ban is enforced at WhatsApp's infrastructure layer, not at the TimelinesAI gateway, so we cannot protect against it.
!
WhatsApp's 24-hour customer service window. You can reply to a customer freely for 24 hours after their last message. Outside that window, messages require opt-in and templates — Business API territory.
!
Asynchronous delivery. POST /messages returns a message_uid (a receipt), not a delivery confirmation. Use /status_history to confirm real delivery.
!
Attachment URLs expire quickly. Download media inline in the webhook handler, not from a delayed worker.
!
Text, media, reactions, metadata only — no voice/video calls, no broadcast-status, no WhatsApp Channels or Stories.

Building blocks

Companion skill bundle on GitHub →

4 working skills, a Vercel webhook receiver, compliance docs, and a full mirror of this guide. MIT licensed.

TimelinesAI API Docs · OpenClaw Skills Docs · v0.1.0 release

Capability guide · 2026 · Canonical URL timelines.ai/guide/openclaw-whatsapp-skills