docs: make remote host examples generic

commit d97c211e82
parent 4ced7b886e
Author: Peter Steinberger
Date: 2026-01-12 02:11:33 +00:00

8 changed files with 14 additions and 8 deletions


@@ -92,7 +92,7 @@ The macOS app “Remote over SSH” mode uses a local port-forward so the remote
CLI equivalent:
```bash
-clawdbot gateway status --ssh steipete@peters-mac-studio-1
+clawdbot gateway status --ssh user@gateway-host
```
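Since the examples now use a generic `user@gateway-host`, an SSH alias keeps them copy-pastable; a minimal sketch (host name, address, and user are placeholders, not part of the docs):
```bash
# Append a reusable alias to ~/.ssh/config (placeholder values)
cat >> ~/.ssh/config <<'EOF'
# Gateway host referenced in the generic examples
Host gateway-host
    HostName 203.0.113.10
    User user
EOF
```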
Options:


@@ -192,7 +192,7 @@ Offline-friendly alternatives (in increasing complexity):
- SuCo (research-grade; attractive if there's a solid implementation you can embed)
Open question:
-- what's the **best** offline embedding model for “personal assistant memory” on your machines (MacBook + Castle)?
+- what's the **best** offline embedding model for “personal assistant memory” on your machines (laptop + desktop)?
- if you already have Ollama: embed with a local model; otherwise ship a small embedding model in the toolchain.
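For the Ollama route mentioned above, a minimal sketch of local embedding (the model name is one example; the endpoint is Ollama's standard local API):
```bash
# Pull a small embedding model, then request a vector locally
ollama pull nomic-embed-text
curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "note to remember"}'
```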
## Smallest useful pilot


@@ -1726,7 +1726,7 @@ Notes:
### Local models (LM Studio) — recommended setup
-Best current local setup (what we're running): **MiniMax M2.1** on a beefy Mac Studio
+Best current local setup (what we're running): **MiniMax M2.1** on a powerful local machine
via **LM Studio** using the **Responses API**.
```json5


@@ -11,7 +11,7 @@ Clawdbot.app uses SSH tunneling to connect to a remote gateway. This guide shows
```
┌─────────────────────────────────────────────────────────────┐
-MacBook
+Client Machine
│ │
│ Clawdbot.app ──► ws://127.0.0.1:18789 (local port) │
│ │ │
@@ -150,4 +150,4 @@ launchctl bootout gui/$UID/com.clawdbot.ssh-tunnel
| `KeepAlive` | Automatically restarts tunnel if it crashes |
| `RunAtLoad` | Starts tunnel when the agent loads |
-Clawdbot.app connects to `ws://127.0.0.1:18789` on your MacBook. The SSH tunnel forwards that connection to port 18789 on the remote machine where the Gateway is running.
+Clawdbot.app connects to `ws://127.0.0.1:18789` on your client machine. The SSH tunnel forwards that connection to port 18789 on the remote machine where the Gateway is running.
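The LaunchAgent described above wraps a plain SSH local forward; a minimal manual equivalent, assuming the Gateway host is reachable as `user@gateway-host`:
```bash
# Forward local port 18789 to port 18789 on the Gateway host;
# -N keeps the tunnel open without running a remote command
ssh -N -L 18789:127.0.0.1:18789 user@gateway-host
```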


@@ -5,7 +5,7 @@ read_when:
---
# Remote access (SSH, tunnels, and tailnets)
-This repo supports “remote over SSH” by keeping a single Gateway (the master) running on a host (e.g., your Mac Studio) and connecting clients to it.
+This repo supports “remote over SSH” by keeping a single Gateway (the master) running on a dedicated host (desktop/server) and connecting clients to it.
- For **operators (you / the macOS app)**: SSH tunneling is the universal fallback.
- For **nodes (iOS/Android and future devices)**: prefer the Gateway **Bridge** when on the same LAN/tailnet (see [Discovery](/gateway/discovery)).
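As a concrete operator fallback, the CLI form shown earlier applies to such a host:
```bash
# Check the remote Gateway over SSH (same command as in the CLI docs)
clawdbot gateway status --ssh user@gateway-host
```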


@@ -6,7 +6,7 @@ read_when:
# Remote Clawdbot (macOS ⇄ remote host)
-This flow lets the macOS app act as a full remote control for a Clawdbot gateway running on another host (e.g. a Mac Studio). All features—health checks, Voice Wake forwarding, and Web Chat—reuse the same remote SSH configuration from *Settings → General*.
+This flow lets the macOS app act as a full remote control for a Clawdbot gateway running on another host (desktop/server). All features—health checks, Voice Wake forwarding, and Web Chat—reuse the same remote SSH configuration from *Settings → General*.
## Modes
- **Local (this Mac)**: Everything runs on the laptop. No SSH involved.


@@ -22,6 +22,12 @@ Quick mental model:
- Configure under `plugins.entries.voice-call.config`
- Use `clawdbot voicecall …` or the `voice_call` tool
+## Where it runs (local vs remote)
+The Voice Call plugin runs **inside the Gateway process**.
+If you use a remote Gateway, install/configure the plugin on the **machine running the Gateway**, then restart the Gateway to load it.
## Install
### Option A: install from npm (recommended)
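For a remote Gateway, the note above implies installing over SSH on the machine running the Gateway; a hedged sketch (the package name and restart step are assumptions, not documented here):
```bash
# Placeholder package name; install where the Gateway runs
ssh user@gateway-host 'npm install -g clawdbot-voice-call'
# Then restart the Gateway on that machine so it loads the plugin
# (use whatever supervises your Gateway process)
```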


@@ -123,7 +123,7 @@ Configure via CLI:
**Best for:** local inference with LM Studio.
We have seen strong results with MiniMax M2.1 on powerful hardware (e.g. a
-beefy Mac Studio) using LM Studio's local server.
+desktop/server) using LM Studio's local server.
Configure via CLI:
- Run `clawdbot configure`
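Before running `clawdbot configure`, it can help to confirm LM Studio's local server is reachable; a sketch assuming LM Studio's default OpenAI-compatible port:
```bash
# LM Studio's local server defaults to 127.0.0.1:1234; adjust if changed
curl -s http://127.0.0.1:1234/v1/models
```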