Build a WhatsApp agent
with OpenClaw + TimelinesAI
Turn WhatsApp into a customer channel your OpenClaw agent operates on: auto-reply to incoming messages, send transactional and event-triggered notifications, sync with your CRM, share the inbox with teammates. TimelinesAI runs the WhatsApp gateway. Your agent handles the reasoning.
Auto-reply · Transactional sends · CRM sync · Shared inbox
What your agent can do with WhatsApp
With a TimelinesAI skill installed, your OpenClaw agent can operate WhatsApp as a real customer channel. The long version of this list is in the sections below — every capability has the specific API calls it needs.
Answer incoming customer messages automatically — 24/7 autoresponder, after-hours responder, or full AI chatbot that escalates to a human when stuck.
Send transactional and event-triggered messages — order confirmations, shipping updates, appointment reminders, payment receipts, and notifications triggered by events in other tools (HubSpot, Stripe, Calendly). Only to customers who expect the message.
Qualify leads through multi-turn conversations — ask a sequence of questions, store answers, tag the chat as qualified or not.
Sync WhatsApp activity with your CRM — look up incoming numbers in HubSpot/Pipedrive, update deal stages, push notes back.
Summarize and score conversations — "what did ACME ask about last week", "score this chat 1–10 on intent".
Share the inbox with teammates — your agent drafts replies as private notes, humans send them; or your agent sends and humans watch.
Handle media — photos of receipts, PDFs, voice notes — processed by OpenClaw, replied to automatically.
How to use this guide
This guide is dual-mode. You can read it yourself and follow Setup manually — the path the sections below lay out — or you can hand the whole thing to your OpenClaw agent (or any agent that can fetch a URL) and let it do the setup for you. Both paths land at the same place: four WhatsApp skills installed, a token and webhook wired up, your agent answering customer messages.
If you’re reading this yourself
Scroll through the capability groups below — Incoming, Outbound, CRM & analytics, Operations — and decide which ones matter for your use case. Then follow Setup manually; it’s four steps and about ten minutes. Every capability lists the exact API calls it makes, and every code block in this guide has been run against the live TimelinesAI API before publication. If you get stuck, Things-to-know collects every gotcha that will cost you an hour if you don’t know about it in advance.
If you want your agent to do the setup
Paste this prompt into OpenClaw (or Claude Code, Cursor, Claude desktop, any agent that can read a URL). It points at this guide by URL so the agent reads the live version, then walks you through install, token, webhook, smoke-test, and enabling your first skill — asking for input only where it genuinely needs something from you.
Read https://timelines.ai/guide/openclaw-whatsapp-skills end to end.
Then help me set up an OpenClaw + TimelinesAI WhatsApp agent:
1. Clone InitechSoftware/openclaw-whatsapp-skills into ~/.openclaw/workspace/
and symlink the four skills into ~/.openclaw/skills/.
2. Ask me for my TimelinesAI API token (Integrations → Public API → Copy
on app.timelines.ai) and write it to ~/.openclaw/workspace/.env.timelinesai
as TIMELINES_AI_API_KEY.
3. Run the smoke-test curl from Setup. If it fails, walk me through the
"If you see something else" failure block in the guide.
4. Help me deploy examples/vercel-webhook-receiver from the companion repo
and register the webhook with TimelinesAI.
5. Ask which of the four skills I want to enable first — whatsapp-autoresponder,
whatsapp-lead-qualifier, whatsapp-send, or whatsapp-delivery-check — and
walk me through its capability section plus any relevant gotchas from
Things-to-know.
6. Before enabling anything that sends outbound messages, show me Channel
choice so I pick the right WhatsApp channel (personal number vs Business
API) for my use case.

The prompt intentionally stops before turning any outbound sending on — it routes you through Channel choice first so you pick the right WhatsApp channel (personal vs Business API) for what you’re actually trying to do. Ban risk is real and upstream, not something the gateway can protect you from.
Setup
Four things to wire together. After this, your skill just makes API calls.
1. Connect your WhatsApp number
Sign in at app.timelines.ai, scan the QR code with the phone that holds your business number. TimelinesAI runs the gateway from here on. Same flow as the standard personal-number setup.
2. Get an API token
Integrations → Public API → Copy. Save as TIMELINES_AI_API_KEY. One token covers the whole workspace.
3. Install a skill from the companion repo
Clone InitechSoftware/openclaw-whatsapp-skills into ~/.openclaw/workspace/, then symlink each skill directory into ~/.openclaw/skills/. Four-line quick start in the companion README.
4. Register a webhook
Point message:received:new at a public HTTPS URL. TimelinesAI will push incoming messages to it. Your receiver invokes OpenClaw.
Smoke-test the token
Before you build anything, confirm auth works:
curl -sS -w "\nHTTP: %{http_code}\n" \
-H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
https://app.timelines.ai/integrations/api/whatsapp_accounts

You should see:
{
"status": "ok",
"data": {
"whatsapp_accounts": [
{
"id": "<phone>@s.whatsapp.net",
"phone": "+<phone>",
"status": "connected",
"account_name": "<your label>"
}
]
}
}

If you see something else
- HTTP 401 with the bare text Unauthorized — the token didn’t land. Re-check step 2, make sure you included the Bearer prefix, and confirm the token didn’t get line-wrapped on paste. Note the response is plain text, not JSON — piping to jq will error out.
- A bootstrap-styled HTML “Page not found” page with HTTP 404 — you’ve got a trailing slash or a typo in the path. The API serves HTML (not a JSON error) for bad paths, so if your output starts with <!DOCTYPE html>, drop any trailing slash and re-check the path.
Register the webhook once
# Generate a random secret once. Add it as a query param so only
# TimelinesAI's registered URL can reach your receiver.
WEBHOOK_SECRET=$(openssl rand -hex 16)
WEBHOOK_URL="https://your-app.example.com/api/webhook?secret=${WEBHOOK_SECRET}"
curl -sS -X POST \
-H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
-H "Content-Type: application/json" \
-d "{\"event_type\":\"message:received:new\",\"url\":\"${WEBHOOK_URL}\",\"enabled\":true}" \
https://app.timelines.ai/integrations/api/webhooks

Minimal webhook receiver (Node / Vercel)
~40 lines. Validates the ?secret= query param, drops outbound echoes, acks 200 within the 5-second retry window, then forwards the message to your OpenClaw host.
// api/webhook.js
export default async function handler(req, res) {
// Path-segment auth — TimelinesAI doesn't sign webhooks yet, so the
// ?secret=<token> you registered is the only thing stopping strangers
// from invoking this endpoint.
if (req.query.secret !== process.env.WEBHOOK_SECRET) {
return res.status(404).end();
}
if (req.method !== "POST") return res.status(405).end();
const { event_type, data } = req.body || {};
if (event_type !== "message:received:new") {
return res.status(200).json({ ignored: event_type });
}
// Drop your agent's own sends echoing back as "received".
// TimelinesAI also fires message:received:new for outbound messages
// synced from another WhatsApp client on the same number.
if (data.from_me === true) {
return res.status(200).json({ ignored: "from_me" });
}
// Ack FAST. TimelinesAI retries 3x with a 5-second timeout per attempt —
// if you take longer than 5s the same event hits you up to 3 times.
res.status(200).json({ ok: true });
// TL webhook payloads sometimes arrive flat (data.chat_id) and sometimes
// nested (data.chat.id) depending on event flavor and version. Destructure
// with fallback so both shapes work — see Things to know below.
const chatId = data.chat_id ?? data.chat?.id;
const whatsappAccountId =
data.whatsapp_account_id ?? data.chat?.whatsapp_account_id;
// Fire-and-forget handoff to wherever your OpenClaw host accepts
// inbound messages. For production durability replace this with a
// push to a queue (QStash, Inngest, SQS) your agent drains separately.
try {
await fetch(process.env.OPENCLAW_HOOK_URL, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
chat_id: chatId,
whatsapp_account_id: whatsappAccountId,
text: data.text,
sender_phone: data.sender_phone,
message_uid: data.message_uid,
}),
});
} catch (err) {
console.error("[timelinesai-webhook] handoff failed:", err);
}
}

OPENCLAW_HOOK_URL is whatever endpoint your OpenClaw deployment exposes for inbound messages — it depends on how you host the agent. For a production-grade reference with durability notes, see examples/vercel-webhook-receiver in the companion repo.
What's inside your OpenClaw workspace
You connected a TimelinesAI token and symlinked skills from the companion repo. Here’s what you actually installed, and which files you edit when you want to change how your agent thinks, what it knows, and how it behaves. Nothing here is magic — it’s all plain markdown and env files in two directories.
Skills — ~/.openclaw/skills/<skill>/
Every skill you symlinked is a directory with one required file (SKILL.md) and any resources that file references. If you want to change how your FAQ handler sounds, or re-order the lead qualifier’s questions, the edit lives in that skill’s SKILL.md. No code to recompile, no build step.
| File | What it controls |
|---|---|
| SKILL.md | The whole skill. YAML frontmatter at the top (name, description, user-invocable, and any dispatch flags) plus the instructions/prompt body underneath. This is where you edit what the skill does, when OpenClaw picks it, and how it behaves. Every one of the four WhatsApp skills in the companion repo is a single SKILL.md — no code to recompile, just edit and restart. |
| README.md | Human documentation for the skill. OpenClaw does not read this file — it's for you, your teammates, and anyone browsing the skill on GitHub. Optional; the companion repo skills all ship with one. |
| Scripts, resources, schemas | Anything referenced from SKILL.md via {baseDir}/... — small helper scripts, prompt templates, JSON schemas, test fixtures. Ignored by OpenClaw unless your SKILL.md explicitly points at them. Put whatever the skill needs alongside SKILL.md and reference it by path. |
For the four WhatsApp skills specifically — whatsapp-autoresponder, whatsapp-lead-qualifier, whatsapp-send, whatsapp-delivery-check — the top of each SKILL.md has the role and allowed tools; the body has the actual prompt your agent runs with. Read one of them end-to-end before you edit; the patterns repeat.
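As a shape reference, a minimal SKILL.md might look like the sketch below. The name, description, and user-invocable keys are the ones described above; the body text and everything else is illustrative — copy the real frontmatter from one of the four shipped skills rather than from this sketch:

```markdown
---
name: whatsapp-autoresponder
description: Replies to incoming WhatsApp messages via the TimelinesAI API.
user-invocable: true
---

You are the after-hours responder for our WhatsApp number.
Answer questions about shipping, returns, and opening hours.
For anything else, add the needs-human label and stay silent.
```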
Workspace — ~/.openclaw/workspace/
The workspace directory is OpenClaw’s home for everything that isn’t a skill: who your agent is, what it knows about you, what tools it has, how it boots. OpenClaw’s own docs say “this folder is home. Treat it that way.” These files ship with the install — you fill them out over time:
| File | What it controls |
|---|---|
| IDENTITY.md | Who your agent is — name, creature (how it conceptualises itself), vibe, signature emoji, avatar. Fill this out early; most of the other files reference back to it. |
| SOUL.md | Core operating principles and personality. How the agent should behave when nobody is watching — when to be genuine vs performative, how to earn trust, where privacy boundaries sit. OpenClaw's docs call it the agent's conscience. |
| USER.md | Who the agent is helping (you). Name, pronouns, timezone, interests, project context. Agents are better at helping someone specific than a generic “user” — this file is how they keep track. |
| TOOLS.md | Environment-specific config you don't want inside any skill — device names, host addresses, local preferences. Lives in the workspace so you can edit it without touching anything you might share. |
| AGENTS.md | The workspace README for agents. Describes how OpenClaw expects this particular workspace to be run. Usually shipped with the install and rarely edited. |
| BOOT.md | Short, explicit instructions for what OpenClaw should do on startup. Empty by default; fill it if you want deterministic boot behavior across restarts. |
| BOOTSTRAP.md | First-boot onboarding conversation. Walks a fresh workspace through establishing its identity, then directs you to save the results into IDENTITY.md, SOUL.md, USER.md. Delete it once you've filled those out. |
| HEARTBEAT.md | Periodic task definitions. Empty file means no heartbeats; add tasks here when you want the agent to check something on an interval (e.g. poll a queue every five minutes). |
Per-integration env files
Alongside the canonical files above, you add one env file per integration you wire up. These aren’t part of the OpenClaw install — they’re yours, and they hold secrets that should never be committed to a public repo:
| File | What it controls |
|---|---|
| .env.<integration> | Secrets and config for one integration, one file per integration. For the WhatsApp work you created .env.timelinesai with TIMELINES_AI_API_KEY and ALLOWED_SENDER_JID in Setup step 2. Pattern-match this file when you add HubSpot, Stripe, Pipedrive, or any other tool — one .env.<name>, source it in the skill that needs it. |
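The WhatsApp one from Setup looks roughly like this — both values are placeholders (the JID format matches what the whatsapp_accounts endpoint returns):

```
# ~/.openclaw/workspace/.env.timelinesai — keep out of version control
TIMELINES_AI_API_KEY=<token from Integrations → Public API>
ALLOWED_SENDER_JID=15550100@s.whatsapp.net
```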
Personal numbers vs WhatsApp Business API
Before you build outbound flows, understand which WhatsApp channel your use case belongs to. Picking wrong gets numbers banned.
Safe with the skills in this guide (personal number)
- Inbound conversations. Customer messages you first, you reply. No risk.
- Transactional sends. Order confirmations, shipping updates, delivery notifications, payment receipts, appointment reminders — any message a customer expects because they just did something with your business.
- Event-triggered notifications from other tools. HubSpot deal → demo confirmation, Stripe failed payment → recovery note, Calendly booking → pre-meeting reminder. The customer opted in when they used the upstream tool.
- Replying inside WhatsApp's 24-hour customer service window. Once a customer messages you, you have 24 hours to reply freely. The autoresponder, FAQ handler, and lead qualifier skills operate inside this window.
NOT safe on personal numbers
- Cold outreach to lists you bought or scraped — WhatsApp bans numbers for this within hours.
- Marketing broadcasts to customers who didn't specifically opt in.
- Promotional campaigns, sales offers, seasonal pushes, product launches.
- Anything resembling a marketing blast. If you're asking “what throughput can I get”, you're on the wrong channel.
How to tell which channel you need
1. Did the customer message you first, or are you replying inside an active 24-hour session? → Personal number, this guide.
2. Is the customer about to receive something they explicitly expect (order, appointment, payment, delivery)? → Personal number, transactional send.
3. Triggered by a customer-opted-in event in your CRM or billing tool? → Personal number, event-triggered send.
4. Broadcast, cold outreach, or promotional campaign? → Business API, use the TimelinesAI dashboard.
5. Not sure? → Don't send. Treat it as promotional and route it to the Business API path.
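The five questions collapse into a guard your skill can run before any outbound send. A minimal sketch — the function, its flag names, and the reason strings are all local illustrations, not TimelinesAI API fields:

```javascript
// Illustrative pre-send guard mirroring the five questions above, in order.
function pickChannel({ replyInSession, transactional, optedInEvent, promotional }) {
  if (replyInSession) return { channel: "personal", reason: "24h session reply" };
  if (transactional) return { channel: "personal", reason: "transactional send" };
  if (optedInEvent) return { channel: "personal", reason: "event-triggered send" };
  if (promotional) return { channel: "business-api", reason: "broadcast/promotional" };
  // Rule 5: not sure → treat as promotional, keep it off the personal number.
  return { channel: "business-api", reason: "unclear - treat as promotional" };
}

console.log(pickChannel({ replyInSession: true }).channel); // personal
console.log(pickChannel({ promotional: true }).channel); // business-api
console.log(pickChannel({}).channel); // business-api
```

The fall-through default encodes rule 5: anything the guard can't classify stays off the personal number.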
Every outbound capability below assumes you've passed this test. If you're not sure, re-read this section before shipping. TimelinesAI cannot protect you from the WhatsApp-layer ban — the ban is enforced upstream, at WhatsApp's infrastructure, not at the gateway.
API reference
Every endpoint you need to build every capability below. Base URL https://app.timelines.ai/integrations/api. Auth Authorization: Bearer $TIMELINES_AI_API_KEY.
Reading
| Method | Path | What it returns |
|---|---|---|
| GET | /whatsapp_accounts | Your connected WhatsApp numbers, each with JID, phone, status, account name. |
| GET | /chats | Chat list. Supports ?phone=... and ?label=... filters. Paginate with ?page=N (50 per page, fixed). Each chat has whatsapp_account_id holding the JID of the number that owns it, plus chatgpt_autoresponse_enabled — see Things to know before launching your own agent. |
| GET | /chats/{id} | One chat's full detail. |
| GET | /chats/{id}/messages | Message history, 50 per page. Paginate with ?page=N and watch data.has_more_pages. Each message carries from_me, sender_phone, text, timestamp, message_type, status (Sent/Delivered/Read), and origin (Public API vs WhatsApp app). |
| GET | /chats/{id}/labels | Labels on the chat. |
| GET | /messages/{uid}/status_history | Sent / Delivered / Read timeline for an outbound message. |
| GET | /messages/{uid}/reactions | Reactions on a message. Returns {data: {users: [{name, phone, reaction, current}], reactions: {<emoji>: count}, total: N}} — an object, not a flat array. users lists who reacted (each carries the emoji they picked plus a current boolean marking your own workspace); reactions is a histogram keyed by emoji. Empty state is {users: [], reactions: {}, total: 0}. |
| GET | /files | Files you uploaded via the API. |
| GET | /webhooks | Your registered webhook subscriptions. |
Writing
| Method | Path | What it does |
|---|---|---|
| POST | /messages | Send to a phone number. Body: {"phone":"+...","text":"..."}. Returns {"message_uid":"..."}. |
| POST | /chats/{id}/messages | Send into an existing chat. Body: {"text":"..."}. Sender is whichever WhatsApp number owns the chat. |
| POST | /chats/{id}/notes | Attach a private note to a chat. Not sent to WhatsApp, visible only inside TimelinesAI. Used for agent state and draft-review workflows. |
| POST | /chats/{id}/labels | Add a label to a chat. Use for stage tracking, routing, stop-reply flags. |
| PATCH | /chats/{id} | Update chat metadata — assignee (responsible_email), read state, and chatgpt_autoresponse_enabled. Turn the last one off before your agent starts replying, or TimelinesAI’s built-in responder will race yours. |
| PATCH | /messages/{uid}/reactions | Set a reaction emoji on a message. Body takes the literal emoji character, not a shortcode. |
| POST | /files_upload | Upload a file via multipart/form-data (field name: file). Returns data.uid, which you pass to POST /chats/{id}/messages as file_uid. No upload-by-URL variant. |
| POST | /webhooks | Register a webhook subscription. |
| PUT | /webhooks/{id} | Update or enable/disable a subscription. |
| DELETE | /webhooks/{id} | Remove a subscription. |
Things to know
A few details that are easy to miss from the reference and will each cost you an hour if you don't know them.
When API calls fail
Five failure modes will eventually hit your skill. Each looks different in the response, and each calls for a different reaction.
A small shell pattern that branches on each status code:
$ # Capture the HTTP status into a variable, then branch
$ http_code=$(curl -sS -o /tmp/resp.json -w "%{http_code}" \
-X POST \
-H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
-H "Content-Type: application/json" \
--data-binary @/tmp/send.json \
https://app.timelines.ai/integrations/api/messages)
$ case "$http_code" in
200|201) ;; # ok, carry on
400) echo "bad body: $(cat /tmp/resp.json)"; exit 1 ;;
401) echo "token expired - rotate"; exit 2 ;;
  429) sleep 30; retry_once ;; # or honor Retry-After if the response sets it
5*) sleep $((2 ** attempt)); retry_with_backoff ;;
esac

Operating safely in production
Two more things that aren't failures, but will save you a bad day later.
Conversation context
The webhook hands your skill a single message — the one that just arrived. No prior turns, no chat history, no running transcript. For a stateless FAQ or an after-hours responder that's plenty. For a conversational agent that has to remember what the customer said three turns ago, your skill has to fetch the context itself. Three patterns cover the full range, from the cheapest possible loop to the one that actually remembers.
Stateless
Reply to the latest message alone. One POST per turn, no GET, no history. This is the simplest loop and the default in the ready-made skills. Use it for FAQ bots, after-hours responders, and classification or routing — anywhere the answer depends only on the current message. The agent forgets everything between turns: if the customer writes a follow-up that refers back to something said earlier, it won't understand it.
Full context window
Fetch the last 20 messages before every reply and pass them to the model as prior conversation. Two API calls per turn instead of one, plus the extra prompt tokens on every call. Use it for conversational agents, multi-turn qualification, and anywhere the agent has to remember the thread. Twenty recent turns is usually enough — going wider rarely helps and makes the prompt expensive; going narrower makes the agent forget things the customer said a minute ago.
Adaptive context
Fetch context only when the incoming message looks like a follow-up. A cheap heuristic on the text — starts with a pronoun, references "that" or "it", arrives within 30 seconds of your previous reply, short one-word acknowledgements — decides whether to pull history or reply stateless. Most turns stay cheap; continuations get memory. Start with the full window; only move to adaptive once you know your cost budget and have real production traffic to tune the heuristic against.
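A sketch of the kind of heuristic this describes, assuming you track the timestamp of your own last reply — the pronoun list and the 30-second threshold are starting points to tune against real traffic, not tested values:

```javascript
// Hypothetical follow-up detector for the adaptive-context pattern.
function looksLikeFollowUp(text, secondsSinceOurLastReply) {
  const t = text.trim().toLowerCase();
  const pronounStart = /^(it|that|this|these|those|they)\b/.test(t);
  const shortAck = t.split(/\s+/).length <= 2; // "yes", "ok thanks"
  const quickReply = secondsSinceOurLastReply != null && secondsSinceOurLastReply < 30;
  return pronounStart || shortAck || quickReply;
}

console.log(looksLikeFollowUp("that one please", 120)); // true (pronoun start)
console.log(looksLikeFollowUp("ok", 600)); // true (short acknowledgement)
console.log(looksLikeFollowUp("do you ship to canada, and how long does delivery take?", 900)); // false
```

When it returns true, fetch the history window before replying; otherwise stay stateless for that turn.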
Fetching the window is a single GET. Response is a flat array of messages on the chat, newest last, mixing inbound customer turns with your own outbound replies and any notes your skill has written:
$ curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
"https://app.timelines.ai/integrations/api/chats/12345678/messages?limit=20"
{"status":"ok","data":[
{"message_uid":"INBOUND-UID-PLACEHOLDER",
"text":"hi, is the blue one still available?",
"sender_phone":"+15550200",
"message_type":"text","origin":"WhatsApp",
"created_at":"2026-04-14T10:22:03Z"},
{"message_uid":"OUTBOUND-UID-PLACEHOLDER",
"text":"Yes, one left. Want me to hold it for you?",
"sender_phone":"",
"message_type":"text","origin":"Public API",
"created_at":"2026-04-14T10:22:41Z"},
{"message_uid":"INBOUND-UID-PLACEHOLDER-2",
"text":"yes please, until tomorrow",
"sender_phone":"+15550200",
"message_type":"text","origin":"WhatsApp",
"created_at":"2026-04-14T10:23:08Z"}
]}

Gotchas that bite in production
State persistence
OpenClaw skills don't keep state in memory across invocations. WhatsApp conversations are multi-turn. The fix is to store state on the chat itself:
- Labels hold discrete stage — discovery/q1, qualified, escalate. Add with POST /chats/{id}/labels, read with GET /chats/{id}/labels.
- Notes hold structured data — team_size=8, draft replies, lead scores. Add with POST /chats/{id}/notes. Read by iterating GET /chats/{id}/messages and filtering message_type == "note".
The upside: crash safety, visibility to human teammates, clean handoff — a human can clear a label to rewind the flow, or add escalate to take over. The trade-off: every state transition is an HTTP call. For customer-facing flows this is fine.
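Rebuilding state from notes is then one pass over the message history. A sketch, assuming the key=value note convention from the examples above — the API only guarantees message_type and text; the convention is yours:

```javascript
// Rebuild flow state from a chat's history (GET /chats/{id}/messages).
function stateFromMessages(messages) {
  const state = {};
  for (const m of messages) {
    if (m.message_type !== "note") continue; // skip customer turns and replies
    const match = /^(\w+)=(.*)$/.exec(m.text ?? "");
    if (match) state[match[1]] = match[2]; // later notes overwrite earlier ones
  }
  return state;
}

const history = [
  { message_type: "text", text: "we're a team of 12" },
  { message_type: "note", text: "team_size=12" },
  { message_type: "note", text: "timeline=Q3" },
];
console.log(stateFromMessages(history)); // { team_size: '12', timeline: 'Q3' }
```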
Sending from the right number
If your workspace has more than one WhatsApp number connected, your skill has to make sure it sends from the intended one. Every chat has a whatsapp_account_id field holding the full JID (like PHONE@s.whatsapp.net) of the number that owns it. When you POST /chats/{id}/messages, the sender is always that JID — you don't pick it, the chat does.
The pattern:
1. Hard-code the allowed sender JID in each skill’s environment (e.g. ALLOWED_SENDER_JID).
2. Before sending, GET /chats/{id} and compare whatsapp_account_id to your allowed JID.
3. If they don’t match, skip the send — surface to a human or drop the event.
Two extra HTTP calls per turn, zero chance of sending from the wrong persona. For single-number workspaces this doesn't apply.
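The check itself is small. A sketch with the chat lookup stubbed out — fetchChat is a hypothetical stand-in for a real GET /chats/{id} call, and only the whatsapp_account_id field is assumed:

```javascript
// Sender guard: compare the chat's owning JID to the one this skill
// is allowed to send from.
async function canSendFrom(chatId, allowedJid, fetchChat) {
  const chat = await fetchChat(chatId);
  return chat.whatsapp_account_id === allowedJid;
}

// Usage with a stubbed lookup — a real skill would hit the API here:
const stubLookup = async () => ({ whatsapp_account_id: "15550100@s.whatsapp.net" });
canSendFrom(12345678, "15550100@s.whatsapp.net", stubLookup).then((ok) => {
  if (!ok) console.error("JID mismatch - skipping send"); // surface, don't send
});
```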
Incoming
Incoming messages — what your agent can handle
Every time a customer messages your WhatsApp number, TimelinesAI fires a message:received:new event to your webhook with the chat id, text, sender phone, and attachments. Your skill reads the event, decides what to do, and replies with POST /chats/{chat_id}/messages. Everything below is a variation on that loop.
Auto-reply to every incoming message
“Answer questions about shipping, returns, and opening hours. For anything else, draft a reply and tag the chat for review.”
“Between 22:00 and 08:00 reply automatically. During business hours just flag incoming chats for me.”
“Handle my WhatsApp replies while I'm in this meeting.”
skill receives the webhook payload, composes a reply, calls POST /chats/{id}/messages with {"text":"..."}. Stateless by default — one API call per turn. For multi-turn conversations where the agent needs to remember prior messages, see Conversation context below.
FAQ handler with escalation to a human
“Answer questions about shipping, returns, and opening hours. For anything else, tag the chat needs-human and stop replying until I clear the tag.”
before replying, check GET /chats/{id}/labels. If the chat has needs-human, exit without sending. If the incoming text matches an FAQ topic, reply. Otherwise POST /chats/{id}/labels with the escalation tag and exit silently. Your team's inbox filters on the label.
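The branching reads as a three-way decision. A sketch with naive substring topic matching — in the real skill the model does the classification, so treat this as the control flow, not the classifier:

```javascript
// Control flow for the FAQ-with-escalation loop. Label name matches
// the prompt above; topic matching here is deliberately simplistic.
function decideFaqAction(labels, text, faqTopics) {
  if (labels.includes("needs-human")) return "skip"; // a human owns this chat
  const t = text.toLowerCase();
  if (faqTopics.some((topic) => t.includes(topic))) return "reply";
  return "escalate"; // tag needs-human, exit silently
}

const topics = ["shipping", "returns", "opening hours"];
console.log(decideFaqAction([], "what are your shipping rates?", topics)); // reply
console.log(decideFaqAction(["needs-human"], "hello?", topics)); // skip
console.log(decideFaqAction([], "can I get a refund in cash?", topics)); // escalate
```

Note the last case: substring matching misses "refund" even though it's a returns question — exactly why the model, not a keyword list, should do the real classification.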
Route conversations to the right person
“For each incoming chat, figure out if it's sales, support, or billing and tag it. Assign sales chats to alex@ours and billing to jamie@ours.”
classify intent from the text, POST /chats/{id}/labels with intent/sales or similar, then PATCH /chats/{id} with {"responsible_email":"..."} to hand the chat off in the TimelinesAI inbox.
Qualify leads through a question sequence
“For any new chat from our Facebook ad campaign, ask about their use case, team size, and timeline. Store answers as notes on the chat. Tag it qualified if team size is 5 or more.”
labels track which question you're currently on (discovery/q1, q2, q3), notes hold the answers. Each turn: read the current stage label, parse incoming text as the answer, write it via POST /chats/{id}/notes, advance the label, ask the next question. No external database — see State persistence below.
$ # 1. Read current stage from the chat's labels. Labels nest under
$ # data.labels as a string array — see Response shapes in Things to know.
$ curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
https://app.timelines.ai/integrations/api/chats/12345678/labels
{"status":"ok","data":{"labels":["discovery/q1"]}}
$ # 2. Save the customer's answer as a structured note. Notes are stored
$ # as messages with message_type=note and aren't pushed to WhatsApp.
$ cat > /tmp/note.json <<'JSON'
{"text":"team_size=12"}
JSON
$ curl -sS -X POST \
-H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
-H "Content-Type: application/json" \
--data-binary @/tmp/note.json \
https://app.timelines.ai/integrations/api/chats/12345678/notes
{"status":"ok","data":{"message_uid":"NOTE-UID-PLACEHOLDER"}}
$ # 3. Advance the stage label. POST /labels REPLACES the full set, so
$ # read existing labels, swap q1 for q2, POST the combined list. The
$ # body key is "labels" (plural, array) — not "label" singular.
$ curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
https://app.timelines.ai/integrations/api/chats/12345678/labels \
| jq -c '.data.labels | map(if . == "discovery/q1" then "discovery/q2" else . end) | {labels: .}' \
> /tmp/labels.json
$ cat /tmp/labels.json
{"labels":["discovery/q2"]}
$ curl -sS -X POST \
-H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
-H "Content-Type: application/json" \
--data-binary @/tmp/labels.json \
https://app.timelines.ai/integrations/api/chats/12345678/labels
{"status":"ok","data":{"labels":["discovery/q2"]}}

Understand photos, PDFs, and receipts
“When a customer sends a photo of a receipt, extract the amount and vendor and add them as a note.”
“If someone sends a PDF, classify it as invoice / contract / ID and tag the chat accordingly.”
webhook payloads include an attachment URL. Download it in the handler (it expires quickly), process with OpenClaw's vision or document tools, write extracted data back via POST /chats/{id}/notes.
$ # The attachment URL in the webhook payload expires in ~15 minutes.
$ # Download it inline in the receiver, before queuing async work.
$ ATTACH_URL="https://files.timelines.ai/abc123/receipt.jpg?expires=..."
$ curl -sS "$ATTACH_URL" -o /tmp/receipt.jpg
$ # Process /tmp/receipt.jpg with OpenClaw vision tools, then write
$ # the extracted fields back as a structured note on the chat:
$ curl -sS -X POST \
-H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
-H "Content-Type: application/json" \
--data-binary '{"text":"receipt: amount=$42.50 vendor=Cafe Madrid"}' \
https://app.timelines.ai/integrations/api/chats/12345678/notes

Transcribe voice notes and reply
“Transcribe inbound voice notes. Reply in text — send a voice reply through the TimelinesAI dashboard if you really need one back.”
webhook delivers a URL to the voice file. Download, transcribe, compose a text reply via POST /chats/{id}/messages. Voice replies are a TimelinesAI dashboard feature today and are not in the current public API reference — if your skill needs to send voice replies programmatically, contact TimelinesAI support to confirm whether the legacy voice_message endpoint is still available on your workspace.
Match the customer's language
“If the customer writes in Spanish, reply in Spanish. Switch languages mid-conversation if they do.”
pure OpenClaw-side reasoning on the incoming text. TimelinesAI just carries the reply.
React to messages without sending a full reply
“React with 👀 to every incoming message so customers know I've seen it, then take my time composing the real reply.”
PATCH /messages/{uid}/reactions with {"reaction":"👀"}. The reaction field must carry the literal emoji character — shortcodes like "eyes" or ":eyes:" are rejected with HTTP 400 "Reaction has invalid format". No message credit consumed — reactions are lightweight.
$ # Send the raw emoji character in the body. Save to a UTF-8 file and
$ # use --data-binary so shells can't downgrade it.
$ printf '{"reaction":"👀"}' > /tmp/react.json
$ curl -sS -X PATCH \
-H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
-H "Content-Type: application/json" \
--data-binary @/tmp/react.json \
https://app.timelines.ai/integrations/api/messages/INBOUND-UID-PLACEHOLDER/reactions
{"status":"ok"}

Outbound
Outbound — messages your agent starts
Messages your agent initiates, not replies. Triggered by an event in your other tools (a new order, a failed payment, a meeting booked) or a direct human instruction about a specific person.
Before every capability in this section: the customer either opened the thread recently (within WhatsApp's 24-hour session window) or explicitly expects this message. If neither is true, don't send it from a personal number — that's Business API territory.
Send a transactional message by name or to a new recipient
“Message John that his invoice is ready.”
“Text the plumber our new office address so he can deliver the parts.”
“Send the signed contract to the client who just wired the deposit.”
for an existing chat, look it up via GET /chats?name=John (or your CRM) and call POST /chats/{id}/messages. For a new recipient, POST /messages with {"phone":"+...","text":"..."}. The whatsapp-send skill in the companion repo handles both modes with UTF-8-safe serialization.
$ cat > /tmp/send.json <<'JSON'
{"phone":"+15550200",
"text":"Hi - your order shipped. Tracking: ABC123."}
JSON
$ curl -sS -X POST \
-H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
-H "Content-Type: application/json" \
--data-binary @/tmp/send.json \
https://app.timelines.ai/integrations/api/messages
{"status":"ok","data":{"message_uid":"OUTBOUND-UID-PLACEHOLDER"}}

Scheduled follow-ups inside an active conversation
“Every Monday at 9am, check chats tagged to-follow-up that had customer activity in the last 24 hours but no reply from us, and send a gentle check-in.”
schedule it with an OpenClaw cron job or standing order (see OpenClaw's automation docs). The job pulls the audience via GET /chats?label=to-follow-up&read=false, reads each chat's last_message_timestamp, and only sends when the last customer turn was less than 24 hours ago. Outside that window a follow-up becomes a re-engagement touch and you're back to Business API territory — see Channel choice.
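The 24-hour eligibility check is the part worth getting exactly right. A sketch, assuming last_message_timestamp arrives as an ISO string — verify the field's actual format in your own payloads before relying on it:

```javascript
// Eligibility filter for the follow-up job: keep only chats whose last
// customer activity falls inside the 24-hour session window.
function dueForFollowUp(chats, now = Date.now()) {
  const DAY_MS = 24 * 60 * 60 * 1000;
  return chats.filter((c) => {
    const last = Date.parse(c.last_message_timestamp);
    return Number.isFinite(last) && now - last < DAY_MS; // inside the window
  });
}

const monday = Date.parse("2026-04-20T09:00:00Z");
const candidates = [
  { id: 1, last_message_timestamp: "2026-04-19T18:00:00Z" }, // 15h ago
  { id: 2, last_message_timestamp: "2026-04-17T09:00:00Z" }, // 3 days ago
];
console.log(dueForFollowUp(candidates, monday).map((c) => c.id)); // [ 1 ]
```

Unparseable timestamps drop out of the send list rather than slipping through — the safe default for anything outbound.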
Trigger messages from events in your other tools
“When a HubSpot deal hits 'demo scheduled', send a WhatsApp confirmation with the meeting link.”
“When Stripe reports a failed payment, send a polite recovery message with a link to update the card.”
“When a Calendly booking is created, send a pre-meeting reminder the morning of.”
Your existing tool (HubSpot, Stripe, Calendly, Pipedrive) fires its own webhook into the same receiver you set up in Setup — add a handler keyed on event source or request path, branch off from the TimelinesAI handler, and call POST /messages (for a new recipient) or POST /chats/{id}/messages (into an existing chat). WhatsApp becomes a delivery channel for any workflow you already have. Because the sends are downstream of a customer action in the upstream tool, they're transactional by nature — well within the personal-number rules.
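The branch logic can stay a small pure function that the receiver calls before doing any sending. A sketch under assumed payload shapes — Stripe's invoice.payment_failed event type is real, but the HubSpot fields and the /webhook/* paths here are illustrative:

```javascript
// Sketch: map an upstream-tool webhook to an outbound WhatsApp send plan.
// Payload shapes are simplified — check each tool's webhook docs for the
// real fields before wiring this up.
function planSend(path, body) {
  if (path === "/webhook/hubspot" && body.dealstage === "demo_scheduled") {
    return { endpoint: "/messages", phone: body.phone,
             text: `Your demo is confirmed - join here: ${body.meeting_link}` };
  }
  if (path === "/webhook/stripe" && body.type === "invoice.payment_failed") {
    return { endpoint: "/messages", phone: body.customer_phone,
             text: "A payment didn't go through - you can update your card here: ..." };
  }
  return null; // unknown event: ack with 200 and do nothing
}
```

The returned plan feeds the same POST /messages call as any other outbound; returning null lets the receiver acknowledge and exit without sending.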
Send files and documents on request
“Generate the quote PDF and send it to the customer who just asked for pricing.”
“Email the contract as a PDF and also drop it in the WhatsApp chat for the client.”
Two steps. Upload the file bytes via POST /files_upload as multipart/form-data (there is no upload-by-URL endpoint — your agent must download the file itself first, then POST it). The response returns a uid you pass back to POST /chats/{id}/messages in the file_uid field. The customer asked for the document — this is a reply to their request, not cold outreach.
$ # Step 1 - upload the file bytes as multipart/form-data.
$ # The form field name is "file". The response gives you a uid.
$ curl -sS -X POST \
-H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
-F "file=@/path/to/quote.pdf" \
https://app.timelines.ai/integrations/api/files_upload
{"status":"ok","data":{
"uid":"FILE-UID-PLACEHOLDER",
"filename":"quote.pdf",
"size":128453,
"mimetype":"application/pdf",
"uploaded_by_email":"you@yourcompany.com",
"uploaded_at":"2026-04-14 10:22:03 +0000",
"temporary_download_url":"https://tl-prod-data.s3.amazonaws.com/..."
}}
$ # Step 2 - attach the uploaded file to a chat message using file_uid.
$ cat > /tmp/send.json <<'JSON'
{"text":"Quote attached for your review.",
"file_uid":"FILE-UID-PLACEHOLDER"}
JSON
$ curl -sS -X POST \
-H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
-H "Content-Type: application/json" \
--data-binary @/tmp/send.json \
https://app.timelines.ai/integrations/api/chats/12345678/messages
{"status":"ok","data":{"message_uid":"OUTBOUND-UID-PLACEHOLDER"}}
Check whether a message was actually delivered
“Did John actually receive the invoice message I sent this morning?”
Every send returns a message_uid. Later, GET /messages/{uid}/status_history returns the Sent / Delivered / Read timeline. Delivery is usually within a second or two on an active number. The whatsapp-delivery-check skill in the companion repo wraps this.
$ curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
https://app.timelines.ai/integrations/api/messages/OUTBOUND-UID-PLACEHOLDER/status_history
{"status":"ok","data":[
{"status":"Sent", "timestamp":"2026-04-12 12:28:40 +0000"},
{"status":"Delivered","timestamp":"2026-04-12 12:28:41 +0000"}
]}
CRM
CRM and analytics
Read endpoints give your agent enough data to answer analytical questions and sync state with your CRM in natural language.
Response time reporting
“What was our average first-reply time on WhatsApp this week?”
“Who on my team is slowest to reply?”
Pull recent chats, then for each chat fetch the message timeline, find the first outbound message after each inbound, and aggregate the deltas client-side. Two API endpoints, no special analytics call.
$ # Pull the last page of a chat's messages (50 per page, fixed).
$ # Messages nest under data.messages — not a flat .data[] array.
$ # FILTER message_type=="whatsapp" to exclude notes (see Things to know
$ # below — notes live in the same timeline with from_me=false and would
$ # otherwise corrupt the response-time calculation).
$ curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
"https://app.timelines.ai/integrations/api/chats/12345678/messages?page=1" \
| jq -r '.data.messages[] | select(.message_type=="whatsapp") | "\(.timestamp)\tfrom_me=\(.from_me)\t\(.text[0:40])"'
# 2026-04-12 12:28:40 +0300 from_me=false Hi - is the order shipped?
# 2026-04-12 12:30:15 +0300 from_me=true Yes - tracking ABC123, ETA 2-3
# 2026-04-12 14:02:11 +0300 from_me=false Got it, thanks!
# ...
$ # Compute first-reply latency from each consecutive (false -> true)
$ # pair, then average across all chats. Pure client-side aggregation.
$ # For full history walk ?page=2, ?page=3, ... until data.has_more_pages is false.
Unanswered message detection
“How many messages did we receive yesterday? How many are still unanswered?”
“Show me every chat with an incoming message in the last 24 hours and no reply.”
Start from GET /chats?read=false for unread chats, then for each chat filter its .data.messages[] to message_type=="whatsapp" AND from_me==false — notes also carry from_me=false and would inflate the unanswered count. See Things to know.
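The per-chat check is one jq filter. A sketch over canned data standing in for a chat's GET /chats/{id}/messages response:

```shell
# Sketch: a chat counts as unanswered when its last real WhatsApp message
# is inbound (from_me=false). Notes are filtered out first — they also
# carry from_me=false and would flag already-handled chats.
cat > /tmp/messages.json <<'JSON'
{"data":{"messages":[
  {"message_type":"whatsapp","from_me":true,"text":"Tracking is ABC123."},
  {"message_type":"whatsapp","from_me":false,"text":"When will it arrive?"},
  {"message_type":"note","from_me":false,"text":"agent draft: 2-3 days"}
]}}
JSON
UNANSWERED=$(jq -r '[.data.messages[] | select(.message_type=="whatsapp")]
  | last | .from_me == false' /tmp/messages.json)
echo "unanswered=$UNANSWERED"
```

Without the message_type filter, the trailing note would make this chat look answered when the customer is in fact still waiting.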
Summarize conversations on demand
“Summarize the entire conversation with ACME Corp. What are their pain points?”
“Give me a one-paragraph brief on every chat I haven't replied to yet.”
Fetch GET /chats/{id}/messages, filter to .data.messages[] where message_type="whatsapp" (notes would otherwise leak your agent's own prior scribbles into the summary as if they were customer utterances — see Things to know), then let OpenClaw summarize. Paginate with ?page=N for long threads.
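The pagination walk is mechanical. A sketch with canned pages standing in for the live ?page=N calls — note the note-filtering happens before anything reaches the summarizer:

```shell
# Sketch: walk every page of a thread, keeping only real WhatsApp
# messages. /tmp/pageN.json files stand in for live GET .../messages?page=N.
cat > /tmp/page1.json <<'JSON'
{"data":{"messages":[{"message_type":"whatsapp","text":"hi, pricing?"}],
 "has_more_pages":true}}
JSON
cat > /tmp/page2.json <<'JSON'
{"data":{"messages":[{"message_type":"note","text":"draft reply"},
 {"message_type":"whatsapp","text":"thanks!"}],"has_more_pages":false}}
JSON
TRANSCRIPT=""
PAGE=1
while :; do
  BODY=$(cat "/tmp/page$PAGE.json")   # live: curl ".../messages?page=$PAGE"
  TURNS=$(echo "$BODY" | jq -r \
    '.data.messages[] | select(.message_type=="whatsapp") | .text')
  TRANSCRIPT="$TRANSCRIPT$TURNS"$'\n'
  [ "$(echo "$BODY" | jq -r '.data.has_more_pages')" = "true" ] || break
  PAGE=$((PAGE+1))
done
printf '%s' "$TRANSCRIPT"
```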
Enrich your CRM with WhatsApp activity
“For every new chat this week, look up the number in HubSpot. If it's a contact, tag the chat with their deal stage. If not, create the contact.”
Combine TimelinesAI reads with your CRM's API. Write results back to the chat via POST /chats/{id}/labels and POST /chats/{id}/notes.
$ # 1. Pull recent chats with their phone numbers. /chats returns 50 per
$ # page under data.chats — paginate with ?page=N if you need more.
$ curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
"https://app.timelines.ai/integrations/api/chats?page=1" \
| jq -r '.data.chats[] | "\(.id)\t\(.phone)"'
# 12345678 +15550100
# 12345679 +15550200
# ...
$ # 2. For each phone, look up your CRM (HubSpot/Pipedrive/Close).
$ # Pseudo-code:
$ # contact = hubspot.search_by_phone(phone)
$ # stage = contact.deal_stage if contact else "unknown"
$ # 3. Add the deal stage label WITHOUT wiping existing labels.
$ # POST /labels is REPLACE semantics (full set), so GET current labels,
$ # append the new one, and POST the combined list.
$ curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
https://app.timelines.ai/integrations/api/chats/12345678/labels \
| jq -c '.data.labels + ["hubspot/qualified-lead"] | unique | {labels: .}' \
> /tmp/labels.json
$ curl -sS -X POST \
-H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
-H "Content-Type: application/json" \
--data-binary @/tmp/labels.json \
https://app.timelines.ai/integrations/api/chats/12345678/labels
Score leads from conversation content
“Score every chat tagged inbound-lead from 1 to 10 on fit and urgency. Write the score as a note.”
LLM reasoning over GET /chats/{id}/messages — filter to message_type="whatsapp" first, otherwise your own previous lead-score notes will be re-read as customer utterances and drift the score on every run. Write the result via POST /chats/{id}/notes with a predictable prefix (e.g. "lead_score: fit=8 urgency=6") so your next pass can find and supersede it.
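Finding the previous score so a re-run supersedes it rather than stacking duplicates is one jq filter. A sketch over canned data — the lead_score: prefix is the convention suggested above, not an API field:

```shell
# Sketch: locate the most recent prior score note by its predictable
# prefix, so the next run can supersede it instead of appending blindly.
cat > /tmp/messages.json <<'JSON'
{"data":{"messages":[
  {"message_type":"note","text":"lead_score: fit=5 urgency=3"},
  {"message_type":"whatsapp","from_me":false,"text":"we need rollout by Q3"},
  {"message_type":"note","text":"lead_score: fit=8 urgency=6"}
]}}
JSON
PREV=$(jq -r '[.data.messages[]
  | select(.message_type=="note" and (.text | startswith("lead_score:")))]
  | last | .text // "none"' /tmp/messages.json)
echo "$PREV"
```

If no prior score note exists, the filter yields "none" and the agent writes a fresh one.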
Operations
Scale and handoff
Patterns for human-in-the-loop workflows, multi-agent routing, and conversation memory. All four capabilities below are designed to coexist with humans working the same chats from the TimelinesAI shared inbox.
Draft replies for human review instead of sending
“For every new incoming message, draft a reply and save it as a note. Don't send — I'll review and send them myself.”
POST /chats/{id}/notes with the draft text instead of /messages. The note appears in the same chat view your teammates already use.
Hand off to a human when the agent is stuck
“If the conversation goes more than 5 turns without resolution, or the customer asks for a human, tag escalate and stop replying until I clear the tag.”
Count turns with GET /chats/{id}/messages, and check stop-reply labels with GET /chats/{id}/labels before every send. If escalation triggers, POST /chats/{id}/labels with escalate and exit.
$ # 1. Count outbound 'whatsapp' messages (skip notes) in this chat.
$ # Messages nest under data.messages — not a flat .data[] array.
$ TURNS=$(curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
"https://app.timelines.ai/integrations/api/chats/12345678/messages?page=1" \
| jq '[.data.messages[] | select(.from_me==true and .message_type=="whatsapp")] | length')
$ echo $TURNS
8
$ # 2. Past the threshold? Append 'escalate' to the chat's labels and stop.
$ # POST /labels REPLACES the full set, so read existing first, combine,
$ # then POST — otherwise you wipe every other label on the chat.
$ if [ "$TURNS" -ge 5 ]; then
curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
https://app.timelines.ai/integrations/api/chats/12345678/labels \
| jq -c '.data.labels + ["escalate"] | unique | {labels: .}' \
> /tmp/labels.json
curl -sS -X POST \
-H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
-H "Content-Type: application/json" \
--data-binary @/tmp/labels.json \
https://app.timelines.ai/integrations/api/chats/12345678/labels
fi
$ # 3. Before every future send, GET /chats/{id}/labels and bail
$ # if .data.labels contains 'escalate' or 'needs-human'.
Run multiple specialized agents on one inbox
“Sales AI handles pricing questions, support AI handles product questions. Route by intent; if both are unsure, escalate to me.”
Two specialist skills in one OpenClaw workspace (one for sales, one for support) plus a third intent-classifier skill that runs first on every incoming message, tags the chat with the chosen specialist, and exits. Each specialist reads the chat's intent label before replying and bails if the label points at the other one — so exactly one skill fires per message.
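The pre-reply check each specialist runs can be a few lines. A sketch with illustrative label names — intent/sales and intent/support are a convention for this pattern, not anything the TimelinesAI API requires:

```javascript
// Sketch: each specialist calls shouldReply before answering and bails
// unless it is the addressee, so exactly one skill fires per message.
// Label names are a suggested convention, not part of the API.
function pickSpecialist(labels) {
  if (labels.includes("intent/sales")) return "sales";
  if (labels.includes("intent/support")) return "support";
  return "human"; // classifier unsure on both -> escalate to a person
}
function shouldReply(me, labels) {
  return pickSpecialist(labels) === me;
}
```

The labels come from GET /chats/{id}/labels; the "human" fallback pairs naturally with the escalate pattern above.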
Remember past conversations with the same customer
“Last week you mentioned you were traveling — how did that go?”
OpenClaw's own memory plus GET /chats/{id}/messages for the full WhatsApp history. Chat history survives across invocations because it lives on TimelinesAI.
Test your agent end-to-end with a second WhatsApp number
After setup you can read about the capabilities listed here, but you can't watch your agent actually work until a real customer messages it. That's a slow loop for iteration. Here's a better one: connect a second WhatsApp number to the same TimelinesAI workspace, use it as a scripted customer, and watch your real agent skill handle the conversation end-to-end. No mocks, no fake webhooks — it's your real agent, your real API, just with a second number playing the role of the human on the other side.
The pattern
TimelinesAI lets multiple WhatsApp numbers live in one workspace and routes their inbound webhooks through the same receiver. The trick is the `whatsapp_account_id` field on every webhook payload — it carries the JID of the number that received the message. Read it, switch on it, dispatch to either a persona step (when your agent just replied to the customer) or an agent step (when the customer just messaged your agent). Each side sends via the normal POST /messages or POST /chats/{id}/messages from the other side’s perspective.
Architecture
[+1 555 0100]                [+1 555 0200]
   persona                  agent under test
      |                            |
      |  one TimelinesAI workspace |
      |                            |
      +-------> Public API <-------+
                    |
           message:received:new
                    |
                    v
      +--------------------------+
      | Your test receiver       |
      | switch (accountJid) {    |
      |   persona_jid -> next    |
      |   agent_jid   -> skill   |
      | }                        |
      +--------------------------+
Persona sender
The persona side is a scripted customer: a list of lines to say next. When it’s time for the next turn, POST to /messages with the agent’s phone number as the recipient. TimelinesAI handles the routing because both numbers are in one workspace.
// persona-sender.js — send the next scripted customer turn from the
// persona WhatsApp number to the agent-under-test number. TimelinesAI
// routes the send because both numbers share one workspace.
const PERSONA_SCRIPT = [
"hi I saw your ad",
"we want to reduce customer churn",
"we are 25 people",
"we need to roll out in Q3",
];
let idx = 0;
export async function sendNextPersonaTurn() {
if (idx >= PERSONA_SCRIPT.length) return;
const text = PERSONA_SCRIPT[idx++];
await fetch("https://app.timelines.ai/integrations/api/messages", {
method: "POST",
headers: {
"Authorization": `Bearer ${process.env.TIMELINES_AI_API_KEY}`,
"Content-Type": "application/json",
},
body: Buffer.from(JSON.stringify({
phone: process.env.AGENT_UNDER_TEST_PHONE,
text,
}), "utf-8"),
});
console.log(`[persona] sent: ${text}`);
}
Receiver jid-switch
One receiver, one branch per side. Everything else you already know from Setup — the `?secret=` auth, the `from_me` echo filter, the flat-vs-nested payload fallback — applies here unchanged.
// api/webhook.js — route events by which number received them
export default async function handler(req, res) {
if (req.query.secret !== process.env.WEBHOOK_SECRET) return res.status(404).end();
const { event_type, data } = req.body || {};
if (event_type !== "message:received:new") return res.status(200).end();
if (data?.from_me === true) return res.status(200).end();
res.status(200).json({ ok: true });
const accountJid = data.whatsapp_account_id ?? data.chat?.whatsapp_account_id;
if (accountJid === process.env.AGENT_UNDER_TEST_JID) {
// Customer just messaged the agent — run your agent-under-test skill
await handleAgentTurn(data);
} else if (accountJid === process.env.PERSONA_JID) {
// Agent just replied to the persona — advance the scripted scenario
await advancePersona(data);
}
}
Those two snippets are the core idea. The full harness — persona scripts, scenario loader, CLI kickoff, split-pane color logger, Vercel config — lives in examples/test-harness/ in the companion repo, with its own README that walks through deploy, env vars, webhook registration, and running your first scenario end-to-end.
Watching it happen
The harness runs autonomously — it doesn’t need you to approve each reply. Your oversight happens in the TimelinesAI dashboard: open the chat between your persona and agent numbers in another browser tab and every turn shows up live. If the agent says something dumb, send a manual reply from the agent’s inbox to steer the conversation. If you need to change behavior mid-run, add a label or write a note on the chat — both are visible to the agent’s skill on its next turn. Stop the run by deleting the registered webhook or shutting down the Vercel function; the scenario halts immediately.
Example: one full round trip
A concrete five-step loop showing what the API looks like in practice — how your agent sends an outbound, confirms delivery, processes the customer's reply, and sends a follow-up. Phone numbers, chat IDs, and message UIDs below are placeholders — substitute your own.
Placeholders used throughout this example:
Your business number → +1 555 0100 (JID: 15550100@s.whatsapp.net)
Your customer's number → +1 555 0200
API token → $TIMELINES_AI_API_KEY
Confirm your token works and list your connected numbers
$ curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
https://app.timelines.ai/integrations/api/whatsapp_accounts
{"status":"ok","data":{"whatsapp_accounts":[
{"id":"15550100@s.whatsapp.net","phone":"+15550100",
"status":"active","account_name":"Your Business"}
]}}
You should see a list of your connected numbers. If the status is anything other than active, fix that in the TimelinesAI dashboard before continuing.
Send an outbound message
Write the payload to a file with UTF-8 encoding, then curl it. This is the pattern for every outbound.
$ cat > /tmp/send.json <<'JSON'
{"phone":"+15550200",
"text":"Hi - your order shipped. Tracking: ABC123."}
JSON
$ curl -sS -X POST \
-H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
-H "Content-Type: application/json" \
--data-binary @/tmp/send.json \
https://app.timelines.ai/integrations/api/messages
{"status":"ok","data":{"message_uid":"OUTBOUND-UID-PLACEHOLDER"}}
The response is a receipt, not a delivery confirmation. Hold on to the message_uid for the next step.
Check delivery status
$ curl -sS -H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
.../messages/OUTBOUND-UID-PLACEHOLDER/status_history
{"status":"ok","data":[
{"status":"Sent", "timestamp":"2026-04-12 12:28:40 +0000"},
{"status":"Delivered","timestamp":"2026-04-12 12:28:41 +0000"}
]}
Sent → Delivered is typically within a second on an active number. The Read status appears later, when the recipient actually opens the chat.
The customer replies, your webhook fires
When the customer responds, TimelinesAI posts to your registered webhook URL:
{
"event_type": "message:received:new",
"data": {
"chat_id": 12345678,
"message_uid": "INBOUND-UID-PLACEHOLDER",
"sender_phone": "+15550200",
"sender_name": "Customer",
"text": "Thanks! When will it arrive?",
"timestamp": "2026-04-12 12:29:20 +0000"
}
}
Your receiver acks with a 200 immediately, then hands the payload to your OpenClaw skill.
Reply back from the same chat
$ cat > /tmp/reply.json <<'JSON'
{"text":"Estimated delivery is 2-3 business days. You'll get tracking updates to this chat."}
JSON
$ curl -sS -X POST \
-H "Authorization: Bearer $TIMELINES_AI_API_KEY" \
-H "Content-Type: application/json" \
--data-binary @/tmp/reply.json \
https://app.timelines.ai/integrations/api/chats/12345678/messages
{"status":"ok","data":{"message_uid":"REPLY-UID-PLACEHOLDER"}}
Because you're sending into an existing chat via /chats/{id}/messages, the sender is automatically the WhatsApp number that owns the chat — you don't pick it, the chat record does. This is the whole reason you don't accidentally send from the wrong number in multi-number workspaces.
Limits and caveats
What this guide does NOT cover, and why.
Related guides
If you're building this for a real customer channel, these three pages cover adjacent territory you'll touch sooner or later.
Connect multiple WhatsApp numbers →
Run several personal numbers from one workspace, with shared inbox routing and per-number JID pinning.
HubSpot + WhatsApp integration →
Sync conversations into HubSpot deals — the closest off-the-shelf companion to capability #11 (event-triggered sends).
ChatGPT agents for WhatsApp →
The non-OpenClaw take on the same idea — a managed autoresponder you configure in the dashboard, no skill required.
Building blocks
Companion skill bundle on GitHub →
4 working skills, a Vercel webhook receiver, compliance docs, and a full mirror of this guide. MIT licensed.
TimelinesAI API Docs · OpenClaw Skills Docs · v0.1.0 release
Capability guide · 2026 · Canonical URL timelines.ai/guide/openclaw-whatsapp-skills