Adds `agent.humanDelay` config option to create natural rhythm between
streamed message bubbles. When enabled, introduces a random delay
(default 800-2500ms) between block replies, making multi-message
responses feel more like natural human texting.
Config example:
```json
{
  "agent": {
    "blockStreamingDefault": "on",
    "humanDelay": {
      "enabled": true,
      "minMs": 800,
      "maxMs": 2500
    }
  }
}
```
- First message sends immediately
- Subsequent messages wait a random delay before sending
- Works with iMessage, Signal, and Discord providers
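The behavior above can be sketched as a small delay sampler. This is an illustrative sketch, not the actual implementation; the `HumanDelay` shape mirrors the config keys, while the function name and `messageIndex` parameter are assumptions:

```typescript
// Mirrors the `humanDelay` config keys documented above.
interface HumanDelay {
  enabled: boolean;
  minMs: number;
  maxMs: number;
}

// Hypothetical sketch: the first bubble sends immediately; each
// subsequent bubble waits a uniform random delay in [minMs, maxMs).
function nextDelayMs(cfg: HumanDelay, messageIndex: number): number {
  if (!cfg.enabled || messageIndex === 0) return 0; // first message: no delay
  return cfg.minMs + Math.random() * (cfg.maxMs - cfg.minMs);
}

const cfg: HumanDelay = { enabled: true, minMs: 800, maxMs: 2500 };
nextDelayMs(cfg, 0); // first bubble: always 0
nextDelayMs(cfg, 1); // later bubbles: somewhere in [800, 2500)
```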
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
# Messages
This page ties together how Clawdbot handles inbound messages, sessions, queueing, streaming, and reasoning visibility.
## Message flow (high level)
```
Inbound message
  -> routing/bindings -> session key
  -> queue (if a run is active)
  -> agent run (streaming + tools)
  -> outbound replies (provider limits + chunking)
```
Key knobs live in configuration:
- `messages.*` for prefixes, queueing, and group behavior.
- `agents.defaults.*` for block streaming and chunking defaults.
- Provider overrides (`whatsapp.*`, `telegram.*`, etc.) for caps and streaming toggles.
See Configuration for full schema.
## Sessions and devices
Sessions are owned by the gateway, not by clients.
- Direct chats collapse into the agent main session key.
- Groups/channels get their own session keys.
- The session store and transcripts live on the gateway host.
Multiple devices/providers can map to the same session, but history is not fully synced back to every client. Recommendation: use one primary device for long conversations to avoid divergent context. The Control UI and TUI always show the gateway-backed session transcript, so they are the source of truth.
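The session-key rules above can be illustrated with a small sketch. The function name, key format, and chat shape are all assumptions for illustration; only the collapse/split behavior comes from the docs:

```typescript
// Hypothetical illustration of the documented session-key rules:
// direct chats collapse into the agent's main session key, while
// groups/channels each get their own key.
function sessionKey(
  agentId: string,
  chat: { kind: "direct" | "group"; id: string }
): string {
  return chat.kind === "direct"
    ? `${agentId}:main`               // all direct chats share one session
    : `${agentId}:group:${chat.id}`;  // each group gets its own session
}
```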
Details: Session management.
## Queueing and followups
If a run is already active, inbound messages can be queued, steered into the current run, or collected for a followup turn.
- Configure via `messages.queue` (and `messages.queue.byProvider`).
- Modes: `interrupt`, `steer`, `followup`, `collect`, plus backlog variants.
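As a sketch, queue behavior might be configured like this. The `mode` field name and per-provider nesting are assumptions about the schema; only the key paths `messages.queue` and `messages.queue.byProvider` and the mode names come from the docs:

```json
{
  "messages": {
    "queue": {
      "mode": "followup",
      "byProvider": {
        "discord": { "mode": "steer" }
      }
    }
  }
}
```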
Details: Queueing.
## Streaming, chunking, and batching
Block streaming sends partial replies as the model produces text blocks. Chunking respects provider text limits and avoids splitting fenced code.
Key settings:
- `agents.defaults.blockStreamingDefault` (`on`|`off`, default off)
- `agents.defaults.blockStreamingBreak` (`text_end`|`message_end`)
- `agents.defaults.blockStreamingChunk` (`minChars`|`maxChars`|`breakPreference`)
- `agents.defaults.blockStreamingCoalesce` (idle-based batching)
- `agents.defaults.humanDelay` (human-like pause between block replies)
- Provider overrides: `*.blockStreaming` and `*.blockStreamingCoalesce` (non-Telegram providers require explicit `*.blockStreaming: true`)
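Putting the defaults together, a hypothetical config fragment. Key names come from the list above; the nested shape of `blockStreamingChunk` and all values are illustrative assumptions:

```json
{
  "agents": {
    "defaults": {
      "blockStreamingDefault": "on",
      "blockStreamingBreak": "text_end",
      "blockStreamingChunk": { "minChars": 200, "maxChars": 1200 },
      "humanDelay": { "enabled": true, "minMs": 800, "maxMs": 2500 }
    }
  },
  "signal": { "blockStreaming": true }
}
```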
Details: Streaming + chunking.
## Reasoning visibility and tokens
Clawdbot can expose or hide model reasoning:
- `/reasoning on|off|stream` controls visibility.
- Reasoning content still counts toward token usage when produced by the model.
- Telegram supports reasoning stream into the draft bubble.
Details: Thinking + reasoning directives and Token use.
## Prefixes, threading, and replies
Outbound message formatting is centralized in `messages`:
- `messages.responsePrefix` (outbound prefix) and `whatsapp.messagePrefix` (WhatsApp inbound prefix)
- Reply threading via `replyToMode` and per-provider defaults
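For illustration, a hypothetical fragment combining these knobs. The placement of `replyToMode` under `messages` and every value shown are assumptions; consult the configuration schema for the real shape:

```json
{
  "messages": {
    "responsePrefix": "[clawd] ",
    "replyToMode": "off"
  },
  "whatsapp": {
    "messagePrefix": "@bot"
  }
}
```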
Details: Configuration and provider docs.