- Auto-detect and load images referenced in user prompts
- Inject history images at their original message positions
- Fix EXIF orientation: rotate before resizing in `resizeToJpeg` (see the sketch after this list)
- Sandbox security: validate paths, block remote URLs when sandbox enabled
- Prevent duplicate history image injection across turns
- Handle string-based user message content (convert to array)
- Add bounds check for message index in history processing
- Fix regex to properly match relative paths (`./`, `../`)
- Add multi-image support for iMessage attachments
- Pass MAX_IMAGE_BYTES limit to image loading
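For illustration, a minimal sketch of the orientation fix, assuming `sharp` as the underlying image library (the real `resizeToJpeg` pipeline may differ):

```ts
import sharp from "sharp";

// Hedged sketch: assumes sharp; the actual resizeToJpeg may use another
// pipeline. Calling .rotate() with no arguments applies the EXIF
// Orientation tag *before* resizing, so portrait photos keep their
// intended orientation instead of coming out sideways.
export async function resizeToJpeg(input: Buffer, maxDim: number): Promise<Buffer> {
  return sharp(input)
    .rotate() // auto-orient from EXIF metadata first
    .resize({ width: maxDim, height: maxDim, fit: "inside" }) // then resize
    .jpeg({ quality: 80 })
    .toBuffer();
}
```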
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* feat(whatsapp): add debounceMs for batching rapid messages
Add a `debounceMs` configuration option to WhatsApp channel settings
that batches rapid consecutive messages from the same sender into a
single response. This prevents triggering separate agent runs for
each message when a user sends multiple short messages in quick
succession (e.g., "Hey!", "how are you?", "I was wondering...").
Changes:
- Add `debounceMs` config to WhatsAppConfig and WhatsAppAccountConfig
- Implement message buffering in `monitorWebInbox` (see the sketch after this list) with:
- Map-based buffer keyed by sender (DM) or chat ID (groups)
- Debounce timer that resets on each new message
- Message combination with newline separator
- Single message optimization (no modification if only one message)
- Wire `debounceMs` through account resolution and monitor tuning
- Add UI hints and schema documentation
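A minimal sketch of that buffering approach (names like `BufferEntry`, `enqueue`, and `deliver` are illustrative, not the actual `monitorWebInbox` internals):

```ts
// Illustrative sketch of the per-sender debounce buffer; hypothetical names.
type BufferEntry = { messages: string[]; timer?: ReturnType<typeof setTimeout> };

const buffers = new Map<string, BufferEntry>();

function enqueue(
  key: string, // sender for DMs, chat ID for groups
  text: string,
  debounceMs: number,
  deliver: (combined: string) => void,
) {
  if (debounceMs <= 0) {
    deliver(text); // debounce disabled: pass messages straight through
    return;
  }
  const entry = buffers.get(key) ?? { messages: [] };
  entry.messages.push(text);
  if (entry.timer) clearTimeout(entry.timer); // timer resets on each new message
  entry.timer = setTimeout(() => {
    buffers.delete(key);
    // Single-message optimization: skip the join when only one message arrived.
    deliver(
      entry.messages.length === 1 ? entry.messages[0] : entry.messages.join("\n"),
    );
  }, debounceMs);
  buffers.set(key, entry);
}
```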
Usage example:
```json
{
  "channels": {
    "whatsapp": {
      "debounceMs": 5000
    }
  }
}
```
A value of `5000` gives a five-second batching window.
Default: `debounceMs: 0` (disabled)
Verified: All existing tests pass (3204 tests), TypeScript compilation
succeeds with no errors.
Implemented with assistance from AI coding tools.
Closes #967
* chore: wip inbound debounce
* fix: debounce inbound messages across channels (#971) (thanks @juanpablodlc)
Co-authored-by: Peter Steinberger <steipete@gmail.com>
Adds support for template variables in `messages.responsePrefix` that
resolve at runtime to the actual model used (including after
fallback).
Supported variables (case-insensitive):
- {model} - short model name (e.g., "claude-opus-4-5", "gpt-4o")
- {modelFull} - full model identifier (e.g., "anthropic/claude-opus-4-5")
- {provider} - provider name (e.g., "anthropic", "openai")
- {thinkingLevel} or {think} - thinking level ("high", "low", "off")
- {identity.name} or {identityName} - agent identity name
Example: "[{model} | think:{thinkingLevel}]" → "[claude-opus-4-5 | think:high]"
Variables show the actual model used after fallback, not the intended
model. Unresolved variables remain as literal text.
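A minimal sketch of the interpolation, assuming a simple context shape (hypothetical; the actual `response-prefix-template.ts` module may be structured differently):

```ts
// Hypothetical sketch; names beyond the documented template variables
// are illustrative.
interface PrefixContext {
  model: string;         // e.g. "claude-opus-4-5"
  modelFull: string;     // e.g. "anthropic/claude-opus-4-5"
  provider: string;      // e.g. "anthropic"
  thinkingLevel: string; // "high" | "low" | "off"
  identityName?: string;
}

export function interpolatePrefix(template: string, ctx: PrefixContext): string {
  const vars: Record<string, string | undefined> = {
    model: ctx.model,
    modelfull: ctx.modelFull,
    provider: ctx.provider,
    thinkinglevel: ctx.thinkingLevel,
    think: ctx.thinkingLevel,
    "identity.name": ctx.identityName,
    identityname: ctx.identityName,
  };
  // Case-insensitive lookup; unresolved variables stay as literal text.
  return template.replace(/\{([\w.]+)\}/g, (match, name: string) =>
    vars[name.toLowerCase()] ?? match,
  );
}
```

With this sketch, `interpolatePrefix("[{model} | think:{thinkingLevel}]", ctx)` produces the example output shown above.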
Implementation:
- New module: src/auto-reply/reply/response-prefix-template.ts
- Template interpolation in normalize-reply.ts via context provider
- onModelSelected callback in agent-runner-execution.ts
- Updated all 6 provider message handlers (web, signal, discord,
telegram, slack, imessage)
- 27 unit tests covering all variables and edge cases
- Documentation in docs/gateway/configuration.md and JSDoc
Fixes #923
Adds `agent.humanDelay` config option to create natural rhythm between
streamed message bubbles. When enabled, introduces a random delay
(default 800-2500ms) between block replies, making multi-message
responses feel more like natural human texting.
Config example:
```json
{
"agent": {
"blockStreamingDefault": "on",
"humanDelay": {
"enabled": true,
"minMs": 800,
"maxMs": 2500
}
}
}
```
- First message sends immediately
- Subsequent messages wait a random delay before sending
- Works with iMessage, Signal, and Discord providers
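A minimal sketch of that pacing (only `agent.humanDelay` and its fields come from the config above; the helper names are illustrative):

```ts
// Illustrative sketch; sendBlocks and sleep are hypothetical helpers.
interface HumanDelay { enabled: boolean; minMs: number; maxMs: number }

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function sendBlocks(
  blocks: string[],
  send: (text: string) => Promise<void>,
  delay: HumanDelay = { enabled: true, minMs: 800, maxMs: 2500 },
): Promise<void> {
  for (let i = 0; i < blocks.length; i++) {
    // First message sends immediately; subsequent ones wait a random delay.
    if (i > 0 && delay.enabled) {
      await sleep(delay.minMs + Math.random() * (delay.maxMs - delay.minMs));
    }
    await send(blocks[i]);
  }
}
```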
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>