Commit Graph

548 Commits

Author SHA1 Message Date
Peter Steinberger
82f71d25e5 refactor: centralize history context wrapping 2026-01-10 19:16:26 +01:00
Peter Steinberger
b977ae19af chore: fix lint and typing 2026-01-10 19:16:25 +01:00
Peter Steinberger
d41372b9d9 feat: unify provider history context 2026-01-10 19:16:25 +01:00
Peter Steinberger
6480ef369f fix: telegram draft chunking defaults (#667) (thanks @rubyrunsstuff) 2026-01-10 18:30:06 +01:00
Peter Steinberger
a54706a063 fix: throttle cli credential sync 2026-01-10 17:44:03 +01:00
Peter Steinberger
e3cd431551 fix(auto-reply): RawBody commands + locked session updates (#643) 2026-01-10 17:32:31 +01:00
Peter Steinberger
fb03149df4 fix: finalize human delay config typing (#446) (thanks @tony-freedomology) 2026-01-10 17:15:27 +01:00
Lloyd
ab994d2c63 feat(agent): add human-like delay between block replies
Adds an `agent.humanDelay` config option to create a natural rhythm between
streamed message bubbles. When enabled, it introduces a random delay
(default 800-2500ms) between block replies, making multi-message
responses feel more like natural human texting.

Config example:
```json
{
  "agent": {
    "blockStreamingDefault": "on",
    "humanDelay": {
      "enabled": true,
      "minMs": 800,
      "maxMs": 2500
    }
  }
}
```

- First message sends immediately
- Subsequent messages wait a random delay before sending
- Works with iMessage, Signal, and Discord providers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 17:12:50 +01:00
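
The entry above describes the delay behavior in prose; a minimal TypeScript sketch of that rhythm might look like the following. Names such as `sendBlockReplies` and `HumanDelayConfig` are illustrative assumptions, not the project's actual API.

```ts
// Hypothetical sketch of the humanDelay behavior: the first block reply goes
// out immediately; each later reply waits a random interval in [minMs, maxMs].

interface HumanDelayConfig {
  enabled: boolean;
  minMs: number; // assumed default: 800
  maxMs: number; // assumed default: 2500
}

const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

async function sendBlockReplies(
  blocks: string[],
  send: (text: string) => Promise<void>,
  delay: HumanDelayConfig,
): Promise<void> {
  for (let i = 0; i < blocks.length; i++) {
    // First message sends immediately; subsequent ones pause for a random delay.
    if (i > 0 && delay.enabled) {
      const ms = delay.minMs + Math.random() * (delay.maxMs - delay.minMs);
      await sleep(ms);
    }
    await send(blocks[i]);
  }
}
```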
Peter Steinberger
b7fdc266ad test(auto-reply): cover native /model session routing 2026-01-10 16:50:32 +01:00
Peter Steinberger
b99eb4c9f3 fix(auto-reply): apply native commands to target session 2026-01-10 16:48:53 +01:00
Peter Steinberger
70c1732dd1 refactor: centralize messaging dedupe helpers 2026-01-10 16:02:56 +01:00
Peter Steinberger
920b3880c1 test: add elevated mode regressions 2026-01-10 05:31:48 +01:00
Peter Steinberger
66db6c749d fix: persist elevated off override 2026-01-10 05:23:46 +01:00
Peter Steinberger
cdb915d527 chore: normalize Clawdbot naming 2026-01-10 05:14:09 +01:00
Peter Steinberger
34664601e0 fix(auto-reply): default audioAsVoice to false 2026-01-10 02:25:19 +00:00
Peter Steinberger
003cda73e8 style: fix biome formatting 2026-01-10 02:11:43 +00:00
Peter Steinberger
afe6f182ca feat: show effective config in /debug 2026-01-10 03:10:14 +01:00
Peter Steinberger
5a6ae2624e docs: add /config get alias 2026-01-10 03:10:14 +01:00
Peter Steinberger
8b579c91a5 feat: add /config chat config updates 2026-01-10 03:01:27 +01:00
Peter Steinberger
f28a4a34ad refactor: unify inline directives and media fetch 2026-01-10 03:01:04 +01:00
Peter Steinberger
4075895c4c refactor: consolidate reply/media helpers 2026-01-10 02:41:16 +01:00
Peter Steinberger
a29f5dda2e test(live): gateway smoke across profile-key models 2026-01-10 01:09:41 +00:00
Peter Steinberger
623d1e11f1 refactor: centralize session agent resolution 2026-01-10 01:57:54 +01:00
Peter Steinberger
f4b3869f45 Merge pull request #490 from jarvis-medmatic/feat/audio-as-voice-tag
feat(telegram): `[[audio_as_voice]]` tag support
2026-01-10 00:52:02 +00:00
Peter Steinberger
c56b2f4bc1 fix: honor audio_as_voice streaming + parse tests (#490) (thanks @jarvis-medmatic) 2026-01-10 01:50:33 +01:00
Peter Steinberger
cb10682d3e fix(openai): avoid invalid reasoning replay 2026-01-10 00:45:10 +00:00
Jarvis
5fedfd8d15 chore: format audioAsVoice updates
Co-authored-by: Manuel Hettich <17690367+ManuelHettich@users.noreply.github.com>
2026-01-10 01:44:57 +01:00
Jarvis
05a99aa49b feat(telegram): buffer audio blocks for [[audio_as_voice]] tag support
- Add [[audio_as_voice]] detection to splitMediaFromOutput()
- Pass audioAsVoice through onBlockReply callback chain
- Buffer audio blocks during streaming, flush at end with correct flag
- Non-audio media still streams immediately
- Fix: emit payloads with audioAsVoice flag even if text is empty

Co-authored-by: Manuel Hettich <17690367+ManuelHettich@users.noreply.github.com>
2026-01-10 01:41:18 +01:00
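
The commit above describes buffering audio blocks during streaming and flushing them at the end with the correct flag. A rough TypeScript sketch of that flow is shown below; `splitMediaFromOutput` and `onBlockReply` are named in the commit message, but the shapes and signatures here are assumptions.

```ts
// Illustrative only: detect the [[audio_as_voice]] tag, stream non-audio
// media immediately, buffer audio blocks, and flush them at the end with
// the audioAsVoice flag (even if there is no accompanying text).

interface MediaBlock {
  kind: "audio" | "image" | "video";
  url: string;
}

interface BlockReplyPayload {
  text: string;
  media?: MediaBlock[];
  audioAsVoice?: boolean;
}

const AUDIO_AS_VOICE_TAG = /\[\[audio_as_voice\]\]/g;

function createAudioBuffer(onBlockReply: (p: BlockReplyPayload) => void) {
  const bufferedAudio: MediaBlock[] = [];
  let audioAsVoice = false;

  return {
    handleBlock(text: string, media: MediaBlock[]) {
      if (AUDIO_AS_VOICE_TAG.test(text)) audioAsVoice = true;
      const cleaned = text.replace(AUDIO_AS_VOICE_TAG, "").trim();
      // Non-audio media still streams immediately.
      const immediate = media.filter((m) => m.kind !== "audio");
      bufferedAudio.push(...media.filter((m) => m.kind === "audio"));
      if (cleaned || immediate.length > 0) {
        onBlockReply({ text: cleaned, media: immediate });
      }
    },
    flush() {
      // Emit buffered audio at the end with the correct flag,
      // even when the text payload is empty.
      if (bufferedAudio.length > 0) {
        onBlockReply({ text: "", media: bufferedAudio, audioAsVoice });
      }
    },
  };
}
```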
Peter Steinberger
5898304fa0 fix: abort runs between tool calls 2026-01-10 01:26:25 +01:00
Peter Steinberger
1fd7a6e310 fix: keep telegram streamMode draft-only (#619) (thanks @rubyrunsstuff) 2026-01-10 01:14:40 +01:00
Ruby
b4fbf2fe0d fix: enable block streaming for Telegram when streamMode is 'block'
- Fix disableBlockStreaming logic in telegram/bot.ts to properly enable
  block streaming when telegram.streamMode is 'block' regardless of
  blockStreamingDefault setting
- Set minChars default to 1 for Telegram block mode so chunks send
  immediately on newlines/sentences instead of waiting for 800 chars
- Skip coalescing for Telegram block mode when not explicitly configured
  to reduce chunk batching delays
- Fix newline preference to wait for actual newlines instead of breaking
  on any whitespace when buffer is under maxChars

Fixes an issue where all Telegram messages were batched into one message
at the end instead of streaming as separate messages during generation.
2026-01-10 01:11:41 +01:00
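
The chunking rules described in this entry (minChars of 1 for Telegram block mode, newline preference under maxChars) can be sketched as a small helper. This is not the actual chunker; the function name and signature are assumptions for illustration.

```ts
// Rough sketch of the newline preference: while the buffer is under maxChars,
// only break at a real newline once minChars is reached; past maxChars, fall
// back to the last whitespace so the chunk can still go out.

function findChunkBreak(buffer: string, minChars: number, maxChars: number): number {
  if (buffer.length < minChars) return -1; // not enough content yet
  if (buffer.length <= maxChars) {
    // Prefer actual newlines; don't split on arbitrary whitespace early.
    return buffer.lastIndexOf("\n");
  }
  // Over the hard limit: take the last whitespace, or force a cut at maxChars.
  const ws = buffer.search(/\s\S*$/);
  return ws >= 0 ? ws : maxChars;
}

// With Telegram streamMode "block", minChars defaults to 1 in this sketch,
// so a chunk is emitted as soon as a newline arrives.
const breakAt = findChunkBreak("First paragraph.\nSecond still streaming", 1, 800);
// breakAt points at the newline, so "First paragraph." can be sent immediately.
```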
Peter Steinberger
097550c299 fix: centralize verbose overrides and tool stream gating 2026-01-10 00:52:24 +01:00
Peter Steinberger
9a8d3aed26 test: update status expectations for verbose/elevated labels 2026-01-09 23:43:24 +00:00
Peter Steinberger
e18080163f fix: simplify verbose/elevated status labels 2026-01-09 23:41:57 +00:00
Peter Steinberger
96e17d407a fix: filter NO_REPLY prefixes 2026-01-09 23:29:05 +00:00
Peter Steinberger
a9a70ea278 fix: persist verbose off and gate tool stream 2026-01-10 00:22:28 +01:00
Peter Steinberger
98d0318d4e refactor: cron payload migration cleanup (#621)
* refactor: centralize cron payload migration

* test: stabilize block streaming mocks

* test: adjust chunker fence-close case
2026-01-09 22:56:55 +00:00
Peter Steinberger
3b91148a0a fix: handle fence-close paragraph breaks 2026-01-09 22:20:22 +00:00
Peter Steinberger
3adec35632 fix: make forced block chunking fence-safe 2026-01-09 21:52:47 +00:00
Peter Steinberger
79af03ba5e fix(auto-reply): tighten block streaming defaults 2026-01-09 22:41:10 +01:00
Peter Steinberger
53f51786f2 fix: default block streaming coalesce idle to 1s 2026-01-09 22:31:19 +01:00
Peter Steinberger
6c7a27c010 refactor: normalize main session key handling 2026-01-09 22:30:15 +01:00
Peter Steinberger
c37b77855b Merge pull request #464 from austinm911/fix/slack-thread-replies
feat(slack): implement configurable reply threading
2026-01-09 21:10:39 +00:00
Peter Steinberger
84046cbad8 fix(slack): mrkdwn + thread edge cases (#464) (thanks @austinm911) 2026-01-09 22:09:02 +01:00
Austin Mudd
b4663ed11c Slack: implement replyToMode threading for tool path
- Add shared hasRepliedRef state between auto-reply and tool paths
- Extract buildSlackThreadingContext helper in agent-runner.ts
- Extract resolveThreadTsFromContext helper in slack-actions.ts
- Update docs with clear replyToMode table (off/first/all)
- Add tests for first mode behavior across multiple messages
2026-01-09 21:59:51 +01:00
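
The commit above mentions a replyToMode table (off/first/all) and a shared hasRepliedRef between the auto-reply and tool paths. The TypeScript sketch below shows one plausible reading of those modes; `resolveThreadTs` and the context shape are hypothetical, not the code in slack-actions.ts.

```ts
// Illustrative only: map replyToMode onto a Slack thread_ts, assuming "first"
// means only the first reply in a run is threaded and later replies go to the
// channel, while "all" threads every reply.

type ReplyToMode = "off" | "first" | "all";

interface ThreadingContext {
  incomingTs: string; // ts of the message that triggered the reply
  hasRepliedRef: { value: boolean }; // shared between auto-reply and tool paths
}

function resolveThreadTs(mode: ReplyToMode, ctx: ThreadingContext): string | undefined {
  switch (mode) {
    case "off":
      return undefined; // never thread: reply in channel
    case "first":
      if (!ctx.hasRepliedRef.value) {
        ctx.hasRepliedRef.value = true;
        return ctx.incomingTs; // thread only the first reply
      }
      return undefined;
    case "all":
      return ctx.incomingTs; // every reply lands in the thread
  }
}
```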
Peter Steinberger
42a0089b3b fix: require explicit system event session keys 2026-01-09 21:59:01 +01:00
Peter Steinberger
5fa26bfec7 feat: add per-agent elevated controls 2026-01-09 20:42:19 +00:00
Peter Steinberger
d3a0114b6b fix: dedupe followup queue by message id (#600) (thanks @samratjha96) 2026-01-09 20:44:11 +01:00
Samrat Jha
9185fdc896 fix(queue): deduplicate followup queue entries to prevent duplicate responses
## Problem

When messages arrived while the agent was busy processing a previous message,
the same message could be enqueued multiple times into the followup queue.
This happened because Discord's event system can emit the same message multiple
times (e.g., during reconnects or due to slow listener processing), and the
followup queue had no deduplication logic.

This caused the bot to respond to the same user message 2-4+ times.

## Solution

Add simple exact-match deduplication in `enqueueFollowupRun()`: if a prompt
is already in the queue, skip adding it again. The check is extracted into a
small `isPromptAlreadyQueued()` helper for clarity.

## Testing

- Added test cases for deduplication (same prompt rejected, different accepted)
- Manually verified on Discord: single response per message even when multiple
  events fire during slow agent processing
2026-01-09 20:40:18 +01:00
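
The dedup logic described above is small enough to sketch directly. The helper name `isPromptAlreadyQueued` comes from the commit message; the queue shape and `enqueueFollowupRun` signature here are assumptions.

```ts
// Sketch of the exact-match deduplication for the followup queue.

interface FollowupRun {
  prompt: string;
}

function isPromptAlreadyQueued(queue: FollowupRun[], prompt: string): boolean {
  return queue.some((entry) => entry.prompt === prompt);
}

function enqueueFollowupRun(queue: FollowupRun[], prompt: string): void {
  // Skip exact duplicates so repeated provider events (e.g. Discord
  // re-emitting a message during reconnects) don't produce multiple responses.
  if (isPromptAlreadyQueued(queue, prompt)) return;
  queue.push({ prompt });
}
```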
Peter Steinberger
2977b296e6 feat(messages): add whatsapp messagePrefix and responsePrefix auto 2026-01-09 19:29:04 +00:00