# silentwatch-mcp
Catch the cron failures your monitoring is silent about. An MCP server that surfaces scheduled-job state — runs, overdue jobs, and silent failures that exit 0 but produced nothing useful — to any Claude or MCP-aware agent. Works with OpenClaw schedulers, system cron, and systemd timers out of the box.
## What it does
Every team running scheduled jobs has hit at least one of these:

- **Silent failure** — the job ran, returned exit code 0, but produced no useful output (a web-search cron returning empty, a backup that wrote a 0-byte file, a digest email that sent with `<no rows>` in the body). Traditional monitoring sees a green checkmark; the data is broken anyway.
- **Overdue without alert** — a job stopped running for 3 days; nobody noticed because nobody was watching.
- **Last-success drift** — the job runs every hour but only succeeded once in the last 12 attempts; everyone assumes it's healthy because the most recent run was green.
- **Audit-trail gap** — you need to know when a specific job last completed for a compliance check, and the only "log" is `journalctl` output that rotated last week.
silentwatch-mcp exposes that visibility as MCP tools your AI agent can query directly. No metrics pipeline, no separate dashboard, no SaaS subscription.
> claude: which of my cron jobs have silent failures in the last 24 hours?

[MCP tool: find_silent_failures]

3 jobs flagged:
• web-search-refresh — ran 12× successfully but output empty in 8 (66% silent-fail rate)
• daily-summary — ran 1× successfully (24× expected); output normal
• audit-snapshot — last success 5 days ago, all subsequent runs returned exit 0 with empty body

## Why silentwatch-mcp
Three things existing tools (Cronitor, Healthchecks.io, Datadog, Prometheus) don't do:

- **Detect silent failures, not just exit codes.** Traditional cron monitoring assumes `exit 0` = success. We check the output against configurable rules: empty output, length anomaly vs the historical median, error keywords in stdout despite exit 0, duration anomaly. The job that "ran successfully" but returned nothing useful — that's the failure mode that hides for weeks. We catch it.
- **MCP-native, no integration layer.** Claude Desktop, Cline, Continue, OpenClaw agents — any MCP-aware client queries directly. No Grafana plugin, no API wrapper, no JSON to parse manually.
- **Multi-source out of the box.** OpenClaw native JSONL logs, system crontab (`/etc/crontab` + `/etc/cron.d/*` + per-user `crontab -l`), and systemd timers (`systemctl list-timers` + `journalctl`) — all four backends (including mock) ship in v0.3, so you can run silentwatch-mcp against whatever scheduler you have. No vendor lock-in.

Built for the SMB self-hoster running a $40 VPS where Datadog is overkill and a "$0/mo open-source MCP" is the right price point — but the silent-failure detection is just as valuable on enterprise infra.
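The four output rules listed above can be sketched in a few lines. This is a hypothetical illustration of the rule set, not the project's actual code; the `Run` record, thresholds, and keyword list are all assumptions:

```python
# Hypothetical sketch of the four silent-failure rules described above.
from dataclasses import dataclass
from statistics import median

# Assumed keyword list; the real rules are described as configurable.
ERROR_KEYWORDS = ("error", "traceback", "<no rows>")


@dataclass
class Run:
    exit_code: int
    stdout: str
    duration_s: float


def is_silent_failure(run: Run, history: list) -> bool:
    """Flag runs that exited 0 but whose output looks suspicious."""
    if run.exit_code != 0:
        return False  # a loud failure, not a silent one
    if not run.stdout.strip():
        return True   # rule 1: empty output
    if any(k in run.stdout.lower() for k in ERROR_KEYWORDS):
        return True   # rule 3: error keywords despite exit 0
    if history:
        med_len = median(len(r.stdout) for r in history)
        if med_len > 0 and len(run.stdout) < 0.1 * med_len:
            return True  # rule 2: length anomaly vs historical median
        med_dur = median(r.duration_s for r in history)
        if med_dur > 0 and run.duration_s < 0.1 * med_dur:
            return True  # rule 4: duration anomaly (finished suspiciously fast)
    return False
```

The key design point is that every rule runs only after `exit_code == 0`: the whole category is jobs that conventional monitoring already marked green.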
## Tool surface
The server registers these MCP tools (full spec in SPEC.md):
| Tool | What it does |
| --- | --- |
|  | Enumerate all known cron jobs with last-run summary |
|  | Detailed status for one job: last run, last success, success rate over window |
|  | Recent run history with timing + status + output snippet |
|  | Jobs whose schedule says they should have run but haven't |
| `find_silent_failures` | Jobs that ran "successfully" but output looks suspicious |
|  | Recent log output for one job |
Resources:

- `cron://jobs` — list of all jobs (manifest)
- `cron://job/{id}` — individual job manifest + recent runs
- `cron://run/{id}` — individual run instance with full output

Prompts:

- `diagnose-overdue` — diagnostic prompt template for an overdue job
- `summarize-cron-health` — daily digest of cron activity + anomalies
## Quickstart
v0.3 beta — all 4 backends shipped + real overdue detection via cron-schedule parsing (croniter). Mock, OpenClaw JSONL, crontab, and systemd backends are all production-ready. 74 tests passing. v1.0 is now polish: PyPI release + GitHub Actions CI + MCP registry submissions.
### Install

```bash
pip install silentwatch-mcp   # not yet on PyPI; install from source for now:
pip install -e .
```

### Configure for Claude Desktop
Add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):
```json
{
  "mcpServers": {
    "silentwatch": {
      "command": "python",
      "args": ["-m", "silentwatch_mcp"],
      "env": {
        "SILENTWATCH_BACKEND": "mock"
      }
    }
  }
}
```

Backends (all four shipped as of v0.3):

- `SILENTWATCH_BACKEND=mock` — returns sample data (default for development)
- `SILENTWATCH_BACKEND=openclaw-jsonl` — parses OpenClaw's native cron-run JSONL files (set `SILENTWATCH_OPENCLAW_LOGS` to the log directory; default `~/.openclaw/cron-runs/`); richest data: full run history + silent-fail detection
- `SILENTWATCH_BACKEND=crontab` — parses `/etc/crontab` + `/etc/cron.d/*` + user crontabs (`crontab -l`); last run inferred from `/var/log/syslog` or `/var/log/cron` (set `SILENTWATCH_SYSLOG` to override)
- `SILENTWATCH_BACKEND=systemd` — parses `systemctl list-timers --all --output=json` + `journalctl -u <unit>` for run history; lifts `OnCalendar=` into the schedule field
All non-mock backends gracefully return empty results on platforms / hosts where the underlying tooling isn't present, so configuration is safe to leave in place across environments.
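The malformed-line handling the JSONL backend advertises can be sketched simply: parse each line independently and skip the ones that fail, so one corrupt entry can't poison a whole log file. The function and record fields below are illustrative assumptions, not the actual backend code:

```python
# Illustrative JSONL run-log parsing with malformed-line handling.
import json
from pathlib import Path


def load_runs(log_dir) -> list:
    """Read every *.jsonl file under log_dir, one run record per line.

    Malformed or blank lines are skipped instead of aborting the file.
    """
    runs = []
    for path in sorted(Path(log_dir).glob("*.jsonl")):
        for line in path.read_text().splitlines():
            line = line.strip()
            if not line:
                continue  # blank line: ignore
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # malformed line: skip, keep the rest of the file
            if isinstance(record, dict):
                runs.append(record)
    return runs
```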
### Restart Claude Desktop

The server registers as `silentwatch`. Test it with:

> Show me all my cron jobs and their last-run status.
## Roadmap
| Version | Scope | Status |
| --- | --- | --- |
| v0.1 | Protocol wiring, mock backend, all 6 tools registered with stub data, tests pass | ✅ Complete |
| v0.2 | OpenClaw JSONL backend implemented (real cron-run parsing, malformed-line handling, silent-fail enrichment) | ✅ Complete (2026-05-02) |
| v0.3 | Crontab + systemd backends; cron-schedule parsing for real overdue detection (croniter); 35 new tests | ✅ Complete (2026-05-02) |
| v1.0 | Polish: PyPI release, GitHub Actions CI, MCP registry submissions (Glama + PulseMCP), refined silent-fail rule configuration | ⏳ Phase 1 ship target (W3, May 18) |
| v1.x | Additional backends (Cowork scheduler, Claude Code background tasks, generic JSON config), webhook emitter for alerts | ⏳ Phase 2+ |
## Need this adapted to your stack?
silentwatch-mcp ships with 4 backends (mock, OpenClaw JSONL, crontab, systemd). If your scheduler is something else — AWS EventBridge, GCP Cloud Scheduler, Hangfire, Sidekiq, Temporal, Apache Airflow, Prefect, Dagster, or a custom job runner — and you want the same silent-failure-detection MCP visibility surface for it, that's a Custom MCP Build engagement.
| Tier | Scope | Investment | Timeline |
| --- | --- | --- | --- |
| Simple | Single backend adapter for an existing scheduler with documented API (e.g., GCP Cloud Scheduler) | $8,000–$10,000 | 1–2 weeks |
| Standard | Custom backend + custom silent-fail rules + integration with your existing alerting (PagerDuty, Slack, etc.) | $15,000–$20,000 | 2–4 weeks |
| Complex | Multi-backend (federated cron across regions / clusters / tenants) + RBAC + audit-log integration + on-call workflow | $25,000–$35,000 | 4–8 weeks |
To engage:

1. Email admin@pixelette.tech with subject `Custom MCP Build inquiry`.
2. Include a one-paragraph description of your scheduler stack and which tier you're considering.
3. You'll get a reply within 2 business days with a 30-minute discovery-call slot.
This server is also part of the AI Production Discipline Framework — the methodology underlying production AI audits I run.
## Production AI audits

If you're running production AI and want an outside practitioner to score readiness, find the failure patterns already present, and write the corrective-action plan, that's the service this MCP server feeds into. The standalone audit tiers:
| Tier | Scope | Investment | Timeline |
| --- | --- | --- | --- |
| Audit Lite | One system, top-5 findings, written report | $1,500 | 1 week |
| Audit Standard | Full audit, all 14 patterns, 5 Cs findings, 90-day follow-up | $3,000 | 2–3 weeks |
| Audit + Workshop | Standard audit + 2-day team workshop + first monthly audit included | $7,500 | 3–4 weeks |
Same email channel: admin@pixelette.tech with subject `AI audit inquiry`.
## Contributing

PRs welcome. The structure is intentionally flat to make custom backends easy to add — see `src/silentwatch_mcp/backends/` for existing examples.
To add a new backend:

1. Subclass `CronBackend` in `backends/<your_backend>.py`
2. Implement `list_jobs`, `get_job_runs`, `tail_logs`
3. Register in `backends/__init__.py`
4. Add tests in `tests/test_backend_<your_backend>.py`
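The steps above might look roughly like the following. The `CronBackend` interface shown here is an assumption reconstructed from the method names in this README (the real base class lives in `src/silentwatch_mcp/backends/`), and `StaticBackend` is a toy example, not a shipped backend:

```python
# Hypothetical custom backend following the contribution steps above.
from abc import ABC, abstractmethod


class CronBackend(ABC):
    """Assumed shape of the backend interface; see the real base class in-repo."""

    @abstractmethod
    def list_jobs(self) -> list: ...

    @abstractmethod
    def get_job_runs(self, job_id: str, limit: int = 20) -> list: ...

    @abstractmethod
    def tail_logs(self, job_id: str, lines: int = 50) -> str: ...


class StaticBackend(CronBackend):
    """Toy backend serving a fixed in-memory job table; a starting template."""

    def __init__(self, jobs: dict):
        # jobs maps job id -> list of run records (dicts)
        self._jobs = jobs

    def list_jobs(self) -> list:
        return [{"id": jid, "runs": len(runs)} for jid, runs in self._jobs.items()]

    def get_job_runs(self, job_id: str, limit: int = 20) -> list:
        return self._jobs.get(job_id, [])[:limit]

    def tail_logs(self, job_id: str, lines: int = 50) -> str:
        runs = self._jobs.get(job_id, [])
        return "\n".join(r.get("output", "") for r in runs[-lines:])
```

A real adapter would replace the in-memory dict with calls into your scheduler's API, keeping the same three methods.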
Bug reports + feature requests: open a GitHub issue.
## License
MIT — see LICENSE.
## Related
- AI Production Discipline Framework — Notion template, $29
- SPEC.md — full server design
- Model Context Protocol — protocol overview
Built by Temur Khan — independent practitioner on production AI systems. Contact: admin@pixelette.tech