Why self-hosted matters for agent monitoring

AI coding agents touch your source code. They read files, write code, run tests, and interact with your dev environment. When those agents generate status events, you need to decide where that information goes.

Cloud monitoring services require sending event data to a third-party server. For some teams, that's fine. For others, the idea of agent activity logs leaving the network is a non-starter. The events themselves reveal project names, commit messages, error traces, and workflow patterns. It's operational metadata that can be sensitive.

Radio Agent avoids the question entirely. Everything runs on one machine on your local network. The webhook server, the TTS engine, the audio mixer, and the streaming server all operate locally. The only network traffic is the Icecast audio stream going to devices on your LAN.

What runs where

The entire stack runs as three processes (or three Docker containers) on a single machine; the Kokoro TTS model loads inside the brain process:

Brain
Python FastAPI server. Accepts webhook POSTs, renders TTS with Kokoro, pushes audio to Liquidsoap. Serves the web dashboard.
Liquidsoap
Audio mixer. Plays music, ducks under voice announcements, layers tones. Crossfades between tracks. Encodes to MP3.
Icecast
HTTP streaming server. Serves the MP3 stream to any client on the network. Standard protocol, works with any audio player.
Kokoro TTS
82M-parameter text-to-speech model. Runs locally on CPU or GPU, with no API calls and no cloud inference. Renders speech in ~50 ms on GPU.

No component contacts an external server. There are no API keys to configure, no accounts to create, no subscriptions to manage. The install script pulls open-source packages and Docker images, then everything runs locally.
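Because the webhook interface is plain HTTP on your LAN, any agent hook or script can emit events. Here is a minimal sketch in Python; the endpoint path, port, and payload field names are assumptions for illustration, not the documented schema — check the repo for the real one.

```python
# Sketch only: endpoint path and payload fields are assumptions,
# not Radio Agent's documented schema.
import json
import urllib.request

event = {
    "agent": "claude-code",              # hypothetical field names
    "status": "tests_passed",
    "message": "All tests green on the feature branch",
}

req = urllib.request.Request(
    "http://192.168.1.50:8001/webhook",  # LAN IP of the machine running the brain
    data=json.dumps(event).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment to fire the announcement
```

Since there are no API keys, the request needs no auth header — anything on the LAN can post.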

Zero-config install

One command gets you running:

curl -sSL https://radioagent.live/install.sh | bash

The installer detects your platform, pulls Docker images, generates config, and starts all three services. It takes about two minutes. When it finishes, you get a dashboard URL and a stream URL on your LAN IP.

If you prefer to inspect before running:

git clone https://github.com/nmelo/radioagent.git
cd radioagent
docker compose up -d

The result is the same: dashboard at port 8001, audio stream at port 8000.
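A quick way to confirm both services came up is to probe those two ports. This is just a convenience sketch using the ports quoted above; adjust the host if you check from another machine.

```python
# Sketch: local health check after `docker compose up -d`, using the
# default ports from this article (8001 dashboard, 8000 stream).
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, port in [("dashboard", 8001), ("stream", 8000)]:
    print(name, "up" if port_open("localhost", port) else "down")
```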

What you get without a cloud service

Comparison with cloud monitoring

                     Cloud services                       Radio Agent
Data location        Vendor servers                       Your machine only
Auth setup           API keys, OAuth, tokens              None; webhook is open on LAN
Cost                 Free tier, then paid per event/seat  Free; open source; no tiers
Internet required    Yes, always                          Only for initial install
Monitoring modality  Visual dashboard                     Audio stream + visual dashboard
Runs offline         No                                   Yes, fully
Retention            Days/weeks, depends on plan          Dashboard shows recent events; audio is live

Cloud monitoring tools are the right choice when you need long-term retention, cross-team visibility, or integration with incident management workflows. Radio Agent handles a narrower job: real-time, ambient awareness of what your agents are doing right now, from anywhere in the house.

The stack is auditable

Every component is open source: the brain, Liquidsoap, Icecast, and the Kokoro TTS model.

No proprietary binaries, no phone-home telemetry, no vendor lock-in. You can fork any component, modify the audio pipeline, change the TTS model, or replace Icecast with a different streaming server. The architecture is modular by design: the brain communicates with Liquidsoap via a Unix socket, and with Icecast via standard HTTP. Each piece can be swapped independently.
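For illustration, the Unix-socket hop between the brain and Liquidsoap could look roughly like this from the Python side. The socket path and command name here are assumptions, not Radio Agent's actual protocol; Liquidsoap's server interface is line-based, with commands terminated by a newline.

```python
# Sketch: handing a rendered announcement to Liquidsoap over its server
# socket. The socket path and the "announce.push" command are assumptions
# for illustration -- consult the repo for the real command set.
import socket

def push_announcement(wav_path: str,
                      sock_path: str = "/run/radioagent/liquidsoap.sock") -> str:
    """Send one line-based command to the mixer socket and return its reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(f"announce.push {wav_path}\n".encode())
        return s.recv(4096).decode()
```

Swapping out a component means re-pointing one of these narrow interfaces, not rewriting the stack.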

Install locally
curl -sSL https://radioagent.live/install.sh | bash

Ships with CC0 ambient music so it plays out of the box. Add your own .mp3/.ogg/.flac files and they get picked up automatically. The entire system runs in about 200MB of RAM.
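The automatic pickup amounts to a directory scan for those three extensions. A sketch of the idea — the directory name is an assumption, and the real mixer may watch for changes rather than rescan:

```python
# Sketch: scanning a music directory for supported formats.
# "music/" is a hypothetical path, not necessarily Radio Agent's layout.
from pathlib import Path

AUDIO_EXTS = {".mp3", ".ogg", ".flac"}

def scan_music(directory: Path) -> list[Path]:
    """Return all supported audio files under the directory, sorted."""
    return sorted(p for p in directory.rglob("*")
                  if p.suffix.lower() in AUDIO_EXTS)
```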