Adds an `agent.humanDelay` config option to create a natural rhythm between
streamed message bubbles. When enabled, it introduces a random delay
(default 800-2500 ms) between block replies, making multi-message
responses feel more like natural human texting.
Config example:
```json
{
  "agent": {
    "blockStreamingDefault": "on",
    "humanDelay": {
      "enabled": true,
      "minMs": 800,
      "maxMs": 2500
    }
  }
}
```
- First message sends immediately
- Subsequent messages wait for a random delay before sending (see the sketch after this list)
- Works with the iMessage, Signal, and Discord providers
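The intended flow, sketched in TypeScript under the assumption of a simple per-block send loop (the helper names and the uniform random distribution below are illustrative, not the actual implementation):
```ts
// Illustrative sketch only: names and signatures are assumptions, not the real API.
type HumanDelayConfig = { enabled: boolean; minMs: number; maxMs: number };

function randomDelayMs({ minMs, maxMs }: HumanDelayConfig): number {
  // Uniform random delay within the configured window.
  return minMs + Math.random() * (maxMs - minMs);
}

async function sendBlocksWithHumanDelay(
  blocks: string[],
  send: (text: string) => Promise<void>,
  config: HumanDelayConfig,
): Promise<void> {
  for (let i = 0; i < blocks.length; i++) {
    // First message goes out immediately; later ones wait a random delay.
    if (i > 0 && config.enabled) {
      await new Promise((resolve) => setTimeout(resolve, randomDelayMs(config)));
    }
    await send(blocks[i]);
  }
}
```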
- Add [[audio_as_voice]] detection to splitMediaFromOutput()
- Pass audioAsVoice through the onBlockReply callback chain
- Buffer audio blocks during streaming and flush them at the end with the correct flag (see the sketch after this list)
- Non-audio media still streams immediately
- Fix: emit payloads with the audioAsVoice flag even when the text is empty
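A rough sketch of the buffering idea; `splitMediaFromOutput()`, `onBlockReply`, `audioAsVoice`, and the [[audio_as_voice]] marker come from this change, while the block/payload shapes and helper below are assumptions:
```ts
// Sketch only: the block and payload shapes are assumptions for illustration.
interface MediaBlock { kind: "audio" | "image" | "video"; url: string; audioAsVoice?: boolean }
interface BlockReplyPayload { text: string; media: MediaBlock[]; audioAsVoice: boolean }

function createStreamingBuffer(emit: (payload: BlockReplyPayload) => void) {
  const bufferedAudio: MediaBlock[] = [];

  return {
    onBlock(text: string, media: MediaBlock[]) {
      // Audio marked [[audio_as_voice]] is held back; other media streams immediately.
      const audio = media.filter((m) => m.kind === "audio" && m.audioAsVoice);
      const rest = media.filter((m) => !(m.kind === "audio" && m.audioAsVoice));
      bufferedAudio.push(...audio);
      if (text || rest.length > 0) {
        emit({ text, media: rest, audioAsVoice: false });
      }
    },
    flush() {
      // Emit buffered audio with the flag set, even when there is no text.
      if (bufferedAudio.length > 0) {
        emit({ text: "", media: bufferedAudio.splice(0), audioAsVoice: true });
      }
    },
  };
}
```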
Co-authored-by: Manuel Hettich <17690367+ManuelHettich@users.noreply.github.com>
- Fix the disableBlockStreaming logic in telegram/bot.ts to properly enable block streaming when telegram.streamMode is 'block', regardless of the blockStreamingDefault setting
- Set minChars default to 1 for Telegram block mode so chunks send
immediately on newlines/sentences instead of waiting for 800 chars
- Skip coalescing for Telegram block mode when not explicitly configured
to reduce chunk batching delays
- Fix the newline preference to wait for actual newlines instead of breaking on any whitespace when the buffer is under maxChars (see the sketch below)
Fixes an issue where all Telegram messages were batched into one message at the end instead of streaming as separate messages during generation.
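A sketch of the chunk-boundary rule under these settings; minChars/maxChars match the options above, but the function name, return semantics, and surrounding shapes are assumptions:
```ts
// Illustrative sketch of the chunking rule; not the actual implementation.
interface ChunkOptions { minChars: number; maxChars: number }

// Returns the index to flush up to, or -1 if nothing should be flushed yet.
function findFlushPoint(buffer: string, { minChars, maxChars }: ChunkOptions): number {
  if (buffer.length < minChars) return -1;
  // Under maxChars, only flush on a real newline rather than any whitespace.
  if (buffer.length < maxChars) {
    return buffer.lastIndexOf("\n");
  }
  // At or past maxChars, fall back to the last whitespace so the chunk can go out.
  for (let i = buffer.length - 1; i >= 0; i--) {
    if (/\s/.test(buffer[i])) return i;
  }
  return buffer.length;
}
```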
- Add a shared hasRepliedRef state between the auto-reply and tool paths
- Extract a buildSlackThreadingContext helper in agent-runner.ts
- Extract a resolveThreadTsFromContext helper in slack-actions.ts
- Update docs with a clear replyToMode table (off/first/all)
- Add tests for first-mode behavior across multiple messages (see the sketch after this list)
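A minimal sketch of one plausible reading of the first mode, gating threading on the shared hasRepliedRef; the function shape and parameter names here are assumptions and this is not the actual resolveThreadTsFromContext helper:
```ts
// Sketch only: hasRepliedRef and the mode values come from this change,
// the function shape and exact semantics are assumptions.
type ReplyToMode = "off" | "first" | "all";

function pickThreadTs(
  mode: ReplyToMode,
  incomingThreadTs: string | undefined,
  hasRepliedRef: { current: boolean },
): string | undefined {
  if (mode === "off") return undefined;        // never thread the reply
  if (mode === "all") return incomingThreadTs; // thread every reply
  // "first": only the first reply in the run is threaded; later ones are not.
  if (hasRepliedRef.current) return undefined;
  hasRepliedRef.current = true;
  return incomingThreadTs;
}
```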
## Problem
When messages arrived while the agent was busy processing a previous message,
the same message could be enqueued multiple times into the followup queue.
This happened because Discord's event system can emit the same message multiple
times (e.g., during reconnects or due to slow listener processing), and the
followup queue had no deduplication logic.
This caused the bot to respond to the same user message 2-4+ times.
## Solution
Add simple exact-match deduplication in `enqueueFollowupRun()`: if a prompt
is already in the queue, skip adding it again. The check is extracted into a
small `isPromptAlreadyQueued()` helper for clarity (see the sketch below).
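A minimal sketch of the check; the queue item shape and exact signatures are assumptions:
```ts
// Sketch only: queue item shape and function signatures are assumptions.
interface FollowupRun { prompt: string }

function isPromptAlreadyQueued(queue: FollowupRun[], prompt: string): boolean {
  // Exact-match deduplication: identical prompt text counts as a duplicate.
  return queue.some((item) => item.prompt === prompt);
}

function enqueueFollowupRun(queue: FollowupRun[], prompt: string): void {
  if (isPromptAlreadyQueued(queue, prompt)) {
    return; // drop duplicates instead of enqueueing the same message again
  }
  queue.push({ prompt });
}
```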
## Testing
- Added test cases for deduplication (same prompt rejected, different accepted)
- Manually verified on Discord: single response per message even when multiple
events fire during slow agent processing