Support ${VAR_NAME} syntax in any config string value, substituted at
config load time. Useful for referencing API keys and secrets from
environment variables without hardcoding them in the config file.
- Only uppercase env var names are matched: `[A-Z_][A-Z0-9_]*`
- Missing/empty env vars throw MissingEnvVarError with path context
- Escape with $${VAR} to output literal ${VAR}
- Works with $include (included files also get substitution)
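For illustration, a minimal sketch of what the substitution pass could look like. The helper name `substituteEnvVars`, the config-path argument, and the `MissingEnvVarError` constructor shape are assumptions; only the `${VAR_NAME}` / `$${VAR}` syntax, the uppercase-name rule, and the error name come from this change:

```typescript
// Illustrative sketch only; helper name and error shape are assumptions, not the actual code.
class MissingEnvVarError extends Error {
  constructor(varName: string, configPath: string) {
    super(`Missing or empty environment variable ${varName} (at config path "${configPath}")`);
    this.name = "MissingEnvVarError";
  }
}

function substituteEnvVars(value: string, configPath: string): string {
  // ${VAR} is replaced with process.env.VAR; $${VAR} escapes to a literal ${VAR}.
  return value.replace(/\$?\$\{([A-Z_][A-Z0-9_]*)\}/g, (match, name: string) => {
    if (match.startsWith("$$")) return match.slice(1); // escaped: emit literal ${VAR}
    const resolved = process.env[name];
    if (!resolved) throw new MissingEnvVarError(name, configPath);
    return resolved;
  });
}

// e.g. substituteEnvVars("Bearer ${OPENAI_API_KEY}", "channels.whatsapp.token")
```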
Closes #1009
* feat(whatsapp): add debounceMs for batching rapid messages
Add a `debounceMs` configuration option to WhatsApp channel settings
that batches rapid consecutive messages from the same sender into a
single response. This prevents triggering separate agent runs for
each message when a user sends multiple short messages in quick
succession (e.g., "Hey!", "how are you?", "I was wondering...").
Changes:
- Add `debounceMs` config to WhatsAppConfig and WhatsAppAccountConfig
- Implement message buffering in `monitorWebInbox` (see the sketch after this list) with:
  - Map-based buffer keyed by sender (DM) or chat ID (groups)
  - Debounce timer that resets on each new message
  - Message combination with newline separator
  - Single-message optimization (no modification if only one message)
- Wire `debounceMs` through account resolution and monitor tuning
- Add UI hints and schema documentation
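The buffering item above refers to the following sketch. It is illustrative only: the `bufferInbound` helper and `flush` callback are hypothetical stand-ins, not the actual `monitorWebInbox` internals.

```typescript
// Illustrative sketch of the debounce approach; the real logic lives inside monitorWebInbox.
interface PendingBatch {
  messages: string[];
  timer: ReturnType<typeof setTimeout>;
}

// Keyed by sender (DMs) or chat ID (groups).
const buffers = new Map<string, PendingBatch>();

function bufferInbound(
  key: string,
  text: string,
  debounceMs: number,
  flush: (combined: string) => void,
): void {
  const existing = buffers.get(key);
  if (existing) clearTimeout(existing.timer); // each new message resets the window

  const messages = existing ? [...existing.messages, text] : [text];
  const timer = setTimeout(() => {
    buffers.delete(key);
    // A single message passes through unchanged; multiple messages join with newlines.
    flush(messages.length === 1 ? messages[0] : messages.join("\n"));
  }, debounceMs);

  buffers.set(key, { messages, timer });
}
```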
Usage example:
  {
    "channels": {
      "whatsapp": {
        "debounceMs": 5000 // 5 second window
      }
    }
  }
Default: `debounceMs: 0` (batching disabled)
Verified: all existing tests pass (3204 tests) and TypeScript compilation
succeeds with no errors.
Implemented with assistance from AI coding tools.
Closes #967
* chore: wip inbound debounce
* fix: debounce inbound messages across channels (#971) (thanks @juanpablodlc)
---------
Co-authored-by: Peter Steinberger <steipete@gmail.com>
Adds support for template variables in `messages.responsePrefix` that
resolve at runtime to the model actually used (including after a
fallback).
Supported variables (case-insensitive):
- {model} - short model name (e.g., "claude-opus-4-5", "gpt-4o")
- {modelFull} - full model identifier (e.g., "anthropic/claude-opus-4-5")
- {provider} - provider name (e.g., "anthropic", "openai")
- {thinkingLevel} or {think} - thinking level ("high", "low", "off")
- {identity.name} or {identityName} - agent identity name
Example: "[{model} | think:{thinkingLevel}]" → "[claude-opus-4-5 | think:high]"
Variables show the actual model used after fallback, not the intended
model. Unresolved variables remain as literal text.
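As a rough sketch of the interpolation behavior: the `PrefixContext` shape and the `renderResponsePrefix` name below are assumptions; only the supported variable names, the case-insensitive matching, and the leave-unresolved-as-literal rule come from this change.

```typescript
// Illustrative sketch; the actual module is src/auto-reply/reply/response-prefix-template.ts
// and this PrefixContext shape is an assumption.
interface PrefixContext {
  model: string;          // e.g. "claude-opus-4-5"
  modelFull: string;      // e.g. "anthropic/claude-opus-4-5"
  provider: string;       // e.g. "anthropic"
  thinkingLevel: string;  // "high" | "low" | "off"
  identityName?: string;
}

function renderResponsePrefix(template: string, ctx: PrefixContext): string {
  const values: Record<string, string | undefined> = {
    model: ctx.model,
    modelfull: ctx.modelFull,
    provider: ctx.provider,
    thinkinglevel: ctx.thinkingLevel,
    think: ctx.thinkingLevel,
    "identity.name": ctx.identityName,
    identityname: ctx.identityName,
  };
  // Case-insensitive lookup; unresolved variables stay as literal text.
  return template.replace(/\{([A-Za-z.]+)\}/g, (match, name: string) =>
    values[name.toLowerCase()] ?? match,
  );
}

// renderResponsePrefix("[{model} | think:{thinkingLevel}]", ctx)
//   → "[claude-opus-4-5 | think:high]"
```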
Implementation:
- New module: src/auto-reply/reply/response-prefix-template.ts
- Template interpolation in normalize-reply.ts via context provider
- onModelSelected callback in agent-runner-execution.ts
- Updated all 6 provider message handlers (web, signal, discord,
telegram, slack, imessage)
- 27 unit tests covering all variables and edge cases
- Documentation in docs/gateway/configuration.md and JSDoc
Fixes #923