Warning instead of introduction
(my apologies to everyone who considers this obvious, but)
If you give your AI assistant direct access to your computer's file system, stop right now.
AI can do amazing things, but it can also make mistakes, as every AI provider tells you on their home page (essentially disclaiming any responsibility, so when - not "if" - bad things happen to you, you'll be on your own). There is no shortage of proven cases where AI tools caused catastrophic damage.
Run isolated
A known solution is to run powerful but dangerous, unstable, or unreliable programs in isolated Docker containers, minimizing the risk of harm to the main computer (as with condoms, you don't get a 100% guarantee, but you are much, much safer).
The catch
The flip side of isolation is that, by default, containers do not know what software is installed in a neighboring container or on the host computer, nor can they run that software.
Last weekend, I was playing with one of these powerful but dangerous toys, called OpenClaw. It can access the Internet through some internal mechanisms, but those are not a real browser. OpenClaw told me so itself at some point, and suggested that the task it was building would be done better with Chrome than with its internal mechanisms.
After I described the issue, OpenClaw offered two possible solutions: install Chrome in the same container with it, or install Chrome in another container and build a "bridge".
Since I believe in
Do one thing, and do it well
The UNIX philosophy
I asked it to build a Docker configuration for OpenClaw and Chrome to "co-exist", which was straightforward (I prefer the syntax of Docker Compose):
services:
  ...
  # Headless Chrome service
  chrome:
    image: zenika/alpine-chrome:latest
    restart: unless-stopped
    stdin_open: true
    tty: true
    command: [
      "--no-sandbox",
      "--remote-debugging-address=0.0.0.0",
      "--remote-debugging-port=9222",
      "--remote-allow-origins=*"
    ]
    networks:
      - default
As a sidenote, having a helper browser in a container is useful not only for AI assistants, but also for more "traditional" tasks like automated testing, where you can keep the test logic and runner in one container and the browsers in another.
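For completeness, the part of the compose file elided above as `...` could look roughly like the sketch below. The image name is hypothetical, and the port mapping is an assumption based on the web UI port mentioned later in this post; check your actual OpenClaw setup:

```yaml
services:
  openclaw:
    image: openclaw/openclaw:latest     # hypothetical image name, for illustration only
    restart: unless-stopped
    ports:
      - "127.0.0.1:18789:18789"         # assumed web UI port, bound to localhost only
    networks:
      - default                         # same network as the chrome service
```

Both services sit on the same compose network, so OpenClaw can reach Chrome by the service name `chrome`.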
That's when things became interesting
As a security measure, newer versions of Chrome block remote debugging connections from hosts other than localhost. When OpenClaw tried to use Chrome running in another container, it got the error
Host header is specified and is not an IP address or localhost.
A quick search showed that the same issue had already been raised in ddev, Puppeteer, and Playwright. Time to give up and return to option 1 (install Chrome in the same container)?
Back to OpenClaw
OpenClaw has a web interface, and it has security limitations similar to Chrome's: it can be accessed either from localhost or via HTTPS.
It is also possible to use a terminal connection, but I am lazy and use the browser anyway. So when I roam, I connect to the computer that runs OpenClaw and establish an SSH tunnel to it:
ssh -L 18789:localhost:18789 host.running.openclaw.com
And then open http://localhost:18789 in my browser.
Same method for container?
If I can establish an SSH tunnel to a remote computer, then one container can do the same to connect to another container, right? I suggested this to OpenClaw, and the response was (I quote):
This would work but seems overly complex. You'd need SSH server running in the Chrome container, key management, etc. Not elegant
Alternative: If you can't change Chrome's flags, use socat in the OpenClaw container to proxy the connection:
socat TCP-LISTEN:9222,fork,reuseaddr TCP:chrome:9222
Then set browser.cdpUrl to localhost:9222
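For readers who, like me, hadn't met socat before, the suggested command breaks down as follows (this is standard socat address syntax):

```shell
# TCP-LISTEN:9222 - listen on TCP port 9222 inside the OpenClaw container
# fork            - handle each incoming connection in a separate child process
# reuseaddr       - set SO_REUSEADDR so the listener can be restarted quickly
# TCP:chrome:9222 - forward every accepted connection to port 9222 of the "chrome" service
socat TCP-LISTEN:9222,fork,reuseaddr TCP:chrome:9222
```

This also explains why it sidesteps Chrome's restriction: OpenClaw now connects to localhost:9222, so the Host header it sends is "localhost"; socat relays the bytes unchanged, and Chrome is happy.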
The "Not elegant" part made my day, but what's more important, I learned about socat, a very lightweight relaying tool that worked like a charm (and that I had no idea existed).
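To demystify what socat is doing here, the core of such a TCP relay can be sketched in a few lines of Python. This is only an illustration of the idea, not a replacement for socat, which supports many more address types and options:

```python
import asyncio

async def pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    """Copy bytes one way until EOF, then half-close the destination."""
    try:
        while data := await reader.read(4096):
            writer.write(data)
            await writer.drain()
        writer.write_eof()
    except (ConnectionResetError, RuntimeError):
        pass  # peer went away; nothing left to relay

async def handle(client_reader, client_writer, target_host: str, target_port: int) -> None:
    """For each incoming connection, dial the target and shuttle bytes both ways."""
    remote_reader, remote_writer = await asyncio.open_connection(target_host, target_port)
    try:
        await asyncio.gather(
            pipe(client_reader, remote_writer),
            pipe(remote_reader, client_writer),
        )
    finally:
        remote_writer.close()
        client_writer.close()

async def relay(listen_port: int, target_host: str, target_port: int) -> None:
    """Roughly: socat TCP-LISTEN:<listen_port>,fork TCP:<target_host>:<target_port>"""
    server = await asyncio.start_server(
        lambda r, w: handle(r, w, target_host, target_port),
        "127.0.0.1", listen_port,
    )
    async with server:
        await server.serve_forever()

# Example (inside the OpenClaw container): asyncio.run(relay(9222, "chrome", 9222))
```

The relay never inspects the traffic; it just copies bytes in both directions, which is exactly why the Host header arrives at Chrome as "localhost".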
Apparently, AI doesn't always make us dumber :)