From d97c211e8292c0ea4750bf109887b4c371435b71 Mon Sep 17 00:00:00 2001
From: Peter Steinberger
Date: Mon, 12 Jan 2026 02:11:33 +0000
Subject: [PATCH] docs: make remote host examples generic

---
 docs/cli/gateway.md                   |  2 +-
 docs/experiments/research/memory.md   |  2 +-
 docs/gateway/configuration.md         |  2 +-
 docs/gateway/remote-gateway-readme.md | 11 +++++++++--
 docs/gateway/remote.md                |  2 +-
 docs/platforms/mac/remote.md          |  2 +-
 docs/plugins/voice-call.md            |  6 ++++++
 docs/providers/minimax.md             |  2 +-
 8 files changed, 21 insertions(+), 8 deletions(-)

diff --git a/docs/cli/gateway.md b/docs/cli/gateway.md
index 16e40ecf7..b312f63a8 100644
--- a/docs/cli/gateway.md
+++ b/docs/cli/gateway.md
@@ -92,7 +92,7 @@ The macOS app “Remote over SSH” mode uses a local port-forward so the remote
 CLI equivalent:
 
 ```bash
-clawdbot gateway status --ssh steipete@peters-mac-studio-1
+clawdbot gateway status --ssh user@gateway-host
 ```
 
 Options:
diff --git a/docs/experiments/research/memory.md b/docs/experiments/research/memory.md
index 59d35d22b..6ea3633e5 100644
--- a/docs/experiments/research/memory.md
+++ b/docs/experiments/research/memory.md
@@ -192,7 +192,7 @@ Offline-friendly alternatives (in increasing complexity):
 - SuCo (research-grade; attractive if there’s a solid implementation you can embed)
 
 Open question:
-- what’s the **best** offline embedding model for “personal assistant memory” on your machines (MacBook + Castle)?
+- what’s the **best** offline embedding model for “personal assistant memory” on your machines (laptop + desktop)?
 - if you already have Ollama: embed with a local model; otherwise ship a small embedding model in the toolchain.
 
 ## Smallest useful pilot
diff --git a/docs/gateway/configuration.md b/docs/gateway/configuration.md
index 06ceec2f7..9991fc8a1 100644
--- a/docs/gateway/configuration.md
+++ b/docs/gateway/configuration.md
@@ -1726,7 +1726,7 @@ Notes:
 
 ### Local models (LM Studio) — recommended setup
 
-Best current local setup (what we’re running): **MiniMax M2.1** on a beefy Mac Studio
+Best current local setup (what we’re running): **MiniMax M2.1** on a powerful local machine
 via **LM Studio** using the **Responses API**.
 
 ```json5
diff --git a/docs/gateway/remote-gateway-readme.md b/docs/gateway/remote-gateway-readme.md
index 8aee92842..cccc8cf31 100644
--- a/docs/gateway/remote-gateway-readme.md
+++ b/docs/gateway/remote-gateway-readme.md
@@ -11,7 +11,7 @@ Clawdbot.app uses SSH tunneling to connect to a remote gateway. This guide shows
 
 ```
 ┌─────────────────────────────────────────────────────────────┐
-│                           MacBook                           │
+│                        Client Machine                       │
 │                                                             │
 │  Clawdbot.app ──► ws://127.0.0.1:18789 (local port)         │
 │                                                             │
@@ -150,4 +150,11 @@ launchctl bootout gui/$UID/com.clawdbot.ssh-tunnel
 | `KeepAlive` | Automatically restarts tunnel if it crashes |
 | `RunAtLoad` | Starts tunnel when the agent loads |
 
-Clawdbot.app connects to `ws://127.0.0.1:18789` on your MacBook. The SSH tunnel forwards that connection to port 18789 on the remote machine where the Gateway is running.
+Clawdbot.app connects to `ws://127.0.0.1:18789` on your client machine. The SSH tunnel forwards that connection to port 18789 on the remote machine where the Gateway is running.
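+
+Under the hood this is a standard OpenSSH local port-forward. A minimal sketch (the `user@gateway-host` login is a placeholder; substitute your own):
+
+```bash
+# Forward local port 18789 to 127.0.0.1:18789 on the gateway host; -N opens no remote shell.
+ssh -N -L 18789:127.0.0.1:18789 user@gateway-host
+```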
diff --git a/docs/gateway/remote.md b/docs/gateway/remote.md
index b170a8629..8766fa01a 100644
--- a/docs/gateway/remote.md
+++ b/docs/gateway/remote.md
@@ -5,7 +5,7 @@ read_when:
 ---
 
 # Remote access (SSH, tunnels, and tailnets)
-This repo supports “remote over SSH” by keeping a single Gateway (the master) running on a host (e.g., your Mac Studio) and connecting clients to it.
+This repo supports “remote over SSH” by keeping a single Gateway (the master) running on a dedicated host (desktop/server) and connecting clients to it.
 
 - For **operators (you / the macOS app)**: SSH tunneling is the universal fallback.
 - For **nodes (iOS/Android and future devices)**: prefer the Gateway **Bridge** when on the same LAN/tailnet (see [Discovery](/gateway/discovery)).
diff --git a/docs/platforms/mac/remote.md b/docs/platforms/mac/remote.md
index 26b9b72bc..245138a2d 100644
--- a/docs/platforms/mac/remote.md
+++ b/docs/platforms/mac/remote.md
@@ -6,7 +6,7 @@
 
 # Remote Clawdbot (macOS ⇄ remote host)
 
-This flow lets the macOS app act as a full remote control for a Clawdbot gateway running on another host (e.g. a Mac Studio). All features—health checks, Voice Wake forwarding, and Web Chat—reuse the same remote SSH configuration from *Settings → General*.
+This flow lets the macOS app act as a full remote control for a Clawdbot gateway running on another host (desktop/server). All features—health checks, Voice Wake forwarding, and Web Chat—reuse the same remote SSH configuration from *Settings → General*.
 
 ## Modes
 - **Local (this Mac)**: Everything runs on the laptop. No SSH involved.
diff --git a/docs/plugins/voice-call.md b/docs/plugins/voice-call.md
index 12a17443c..c4a2cd264 100644
--- a/docs/plugins/voice-call.md
+++ b/docs/plugins/voice-call.md
@@ -22,6 +22,12 @@ Quick mental model:
 - Configure under `plugins.entries.voice-call.config`
 - Use `clawdbot voicecall …` or the `voice_call` tool
 
+## Where it runs (local vs remote)
+
+The Voice Call plugin runs **inside the Gateway process**.
+
+If you use a remote Gateway, install/configure the plugin on the **machine running the Gateway**, then restart the Gateway to load it.
+
 ## Install
 
 ### Option A: install from npm (recommended)
diff --git a/docs/providers/minimax.md b/docs/providers/minimax.md
index f7fdadbcb..c5bd1f118 100644
--- a/docs/providers/minimax.md
+++ b/docs/providers/minimax.md
@@ -123,7 +123,7 @@ Configure via CLI:
 **Best for:** local inference with LM Studio.
 
 We have seen strong results with MiniMax M2.1 on powerful hardware (e.g. a
-beefy Mac Studio) using LM Studio's local server.
+desktop/server) using LM Studio's local server.
 
 Configure via CLI:
 - Run `clawdbot configure`