agent-friend
This server provides a comprehensive general-purpose utility toolkit of 51 built-in tools for AI agents, covering the following areas:
Alert Management: Define named alert rules with conditions (gt/gte/lt/lte/eq/ne/between/outside/contains/etc.) and severity levels; evaluate values, list/get/delete rules, and view alert history and stats.
Audit Logging: Record structured audit events with actor, resource, severity, and outcome; search, filter, export, and analyze logs with timeline bucketing and stats.
Batch Processing: Map, filter, reduce, partition, chunk, and zip lists using named or built-in functions; register custom Python functions for reuse.
Browser / Fetch: Open URLs and return page text content (HTML auto-converted to plain text).
Caching: Store, retrieve, and delete cached values with optional TTL expiration; view hit/miss statistics.
Text Chunking: Split text by chars, tokens, sentences, paragraphs, separator, or sliding window; chunk lists; get text statistics.
Code Execution: Execute Python or Bash code and return output.
Configuration Management: Set, get, list, delete, and export named config stores with dot-notation; load environment variables, set defaults, and validate required keys.
Cryptography: Generate secure tokens, UUIDs, and random bytes; hash data (MD5/SHA family); HMAC sign/verify; Base64 encode/decode.
Database (SQLite): Run SELECT queries and data-modifying statements; list tables and retrieve schemas.
Date & Time: Get current time, parse/format/diff datetimes, add durations, convert timezones, and convert to/from Unix timestamps.
Text Diffing: Compare texts or files with unified diffs, word-level diffs, similarity stats, and fuzzy matching.
Environment Variables: Get, set, list, and check environment variables; load from .env files.
Event Bus: Subscribe/unsubscribe to topics, publish events, view history, list subscribers, and get stats.
File Operations: Read, write, append, and list files and directories.
Formatting Utilities: Format bytes, durations, numbers, percentages, currencies, ordinals, plurals, truncated/padded text, and plain-text tables.
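The server is pure stdlib with zero external dependencies; as an illustration, the Cryptography category maps onto primitives like these (the server's actual tool names and parameters may differ):

```python
import base64
import hashlib
import hmac
import secrets

# Illustrative stdlib primitives behind the Cryptography tools;
# the real tool names and parameters in the server may differ.
token = secrets.token_hex(16)                    # secure random token
digest = hashlib.sha256(b"payload").hexdigest()  # SHA-256 hash
sig = hmac.new(b"key", b"payload", hashlib.sha256).hexdigest()
ok = hmac.compare_digest(sig, sig)               # constant-time verify
encoded = base64.b64encode(b"payload").decode()  # Base64 encode

print(token, digest[:12], ok, encoded)
```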
Integrates into CI pipelines to audit tool token budgets and provide optimization suggestions directly within GitHub Actions.
Supports exporting tool definitions for use with Google Gemini models.
Provides an agent runtime for multi-turn conversations with tool-use capabilities using local LLMs through Ollama.
Allows exporting Python functions for use with OpenAI function calling and supports agent runtimes using OpenAI models.
Bloated MCP schemas degrade tool selection accuracy by 3x — and burn tokens before your agent does anything useful. Scalekit's benchmark: accuracy drops from 43% to 14% with verbose schemas. The average MCP server wastes 2,500+ tokens on descriptions alone.
pip install agent-friend
agent-friend fix server.json > server_fixed.json
GitHub's official MCP: 20,444 tokens → ~14,000. Same tools. More accurate. No config.
Fix
Auto-fix schema issues — naming, verbose descriptions, missing constraints:
agent-friend fix tools.json > tools_fixed.json
# agent-friend fix v0.59.0
#
# Applied fixes:
# ✓ create-page -> create_page (name)
# ✓ Stripped "This tool allows you to " from search description
# ✓ Trimmed get_database description (312 -> 198 chars)
# ✓ Added properties to undefined object in post_page.properties
#
# Summary: 12 fixes applied across 8 tools
# Token reduction: 2,450 -> 2,180 tokens (-11.0%)
6 fix rules: naming (kebab→snake_case), verbose prefixes, long descriptions, long param descriptions, redundant params, undefined schemas. Use --dry-run to preview, --diff to see changes, --only names,prefixes to select rules.
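As an illustration of what the naming rule does, a kebab→snake_case normalizer can be sketched like this (a sketch, not agent-friend's actual implementation):

```python
import re

def to_snake_case(name: str) -> str:
    """Normalize a tool name to MCP-style snake_case.

    Handles kebab-case and camelCase; collapses spaces and hyphens.
    (Illustrative sketch; agent-friend's rule may handle more cases.)
    """
    # Insert an underscore at camelCase / PascalCase boundaries
    name = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name)
    # Convert hyphens and whitespace runs to a single underscore
    name = re.sub(r"[-\s]+", "_", name)
    return name.lower()

print(to_snake_case("create-page"))  # create_page
print(to_snake_case("getDatabase"))  # get_database
```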
Grade
See how your server scores against 201 others (A+ through F):
agent-friend grade --example notion
# Overall Grade: F
# Score: 19.8/100
# Tools: 22 | Tokens: 4483
Notion's official MCP server. 22 tools. Grade F. Every tool name violates MCP naming conventions. 5 undefined schemas.
5 real servers bundled — grade spectrum from F to A+:
| Server | Tools | Grade     | Tokens |
| ------ | ----- | --------- | ------ |
|        | 22    | F (19.8)  | 4,483  |
|        | 11    | D+ (64.9) | 1,392  |
|        | 12    | C+ (79.6) | 1,824  |
|        | 7     | A- (91.2) | 382    |
|        | 8     | A+ (97.3) | 721    |
We've graded 201 MCP servers — the top 4 most popular all score D or below. 3,991 tools, 512K tokens analyzed.
Try it live: See Notion's F grade — paste your own schema, get A–F instantly.
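For intuition, a score-to-letter mapping consistent with the published examples (97.3 → A+, 91.2 → A-, 79.6 → C+, 19.8 → F) can be sketched with conventional academic bands; note the 64.9 → D+ row suggests agent-friend's own cutoffs differ slightly, so treat these boundaries as an assumption:

```python
def letter_grade(score: float) -> str:
    """Map a 0-100 score to a letter grade.

    Cutoffs are conventional academic bands (an assumption);
    agent-friend's exact boundaries may differ.
    """
    bands = [
        (97, "A+"), (93, "A"), (90, "A-"),
        (87, "B+"), (83, "B"), (80, "B-"),
        (77, "C+"), (73, "C"), (70, "C-"),
        (67, "D+"), (63, "D"), (60, "D-"),
    ]
    for cutoff, grade in bands:
        if score >= cutoff:
            return grade
    return "F"

print(letter_grade(97.3), letter_grade(19.8))  # A+ F
```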
Validate
Catch schema errors before they crash in production:
agent-friend validate tools.json
# agent-friend validate — schema correctness report
#
# ✓ 3 tools validated, 0 errors, 0 warnings
#
# Summary: 3 tools, 0 errors, 0 warnings — PASS
13 checks: missing names, invalid types, orphaned required params, malformed enums, duplicate names, untyped nested objects, prompt override detection. Use --strict to treat warnings as errors, --json for CI.
Or use the free web validator — no install needed.
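One of the checks, orphaned required params, is easy to picture: a name listed in `required` with no matching entry in `properties`. A standalone sketch (check names and message formats are illustrative, not agent-friend's actual output):

```python
def check_orphaned_required(tool: dict) -> list[str]:
    """Flag `required` entries with no matching property definition.

    Sketch of one of the 13 validate checks; the real check's
    wording and rule name may differ.
    """
    params = tool.get("parameters", {})
    props = set(params.get("properties", {}))
    errors = []
    for name in params.get("required", []):
        if name not in props:
            errors.append(
                f"{tool.get('name', '<unnamed>')}: required param "
                f"'{name}' is not defined in properties"
            )
    return errors

broken = {
    "name": "search",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query", "limit"],  # 'limit' is orphaned
    },
}
print(check_orphaned_required(broken))
```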
Audit
See exactly where your tokens are going:
agent-friend audit tools.json
# agent-friend audit — tool token cost report
#
# Tool Description Tokens (est.)
# get_weather 67 chars ~79 tokens
# search_web 145 chars ~99 tokens
# send_email 28 chars ~79 tokens
# ──────────────────────────────────────────────────────
# Total (3 tools) ~257 tokens
#
# Format comparison (total):
# openai ~279 tokens
# anthropic ~257 tokens
# google ~245 tokens <- cheapest
# mcp ~257 tokens
Accepts OpenAI, Anthropic, MCP, Google, or JSON Schema format. Auto-detects.
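A common way to estimate token counts without shipping a tokenizer is the rough heuristic of ~4 characters per token for English text, applied to the serialized tool definition; a sketch along those lines (agent-friend's actual estimator may differ):

```python
import json

def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token.

    A common heuristic for English text; agent-friend's estimator
    and its per-format overhead accounting may differ.
    """
    return max(1, round(len(text) / 4))

tool = {
    "name": "get_weather",
    "description": "Get current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}
# Serialize the whole definition as a model would receive it
payload = json.dumps(tool, separators=(",", ":"))
print(estimate_tokens(payload))
```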
The quality pipeline: validate (correct?) → audit (expensive?) → optimize (suggestions) → fix (auto-repair) → grade (report card).
Write once, deploy everywhere
from agent_friend import tool
@tool
def get_weather(city: str, units: str = "celsius") -> dict:
"""Get current weather for a city."""
return {"city": city, "temp": 22, "units": units}
get_weather.to_openai() # OpenAI function calling
get_weather.to_anthropic() # Claude tool_use
get_weather.to_google() # Gemini
get_weather.to_mcp() # Model Context Protocol
get_weather.to_json_schema() # Raw JSON Schema
One function definition. Five framework formats. No vendor lock-in.
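Under the hood, a decorator like `@tool` can derive a parameter schema from the function signature alone. The sketch below is an independent illustration of that idea using `inspect`, not agent-friend's internal code:

```python
import inspect

# Illustrative Python-type -> JSON-Schema-type mapping (assumption)
PY_TO_JSON = {str: "string", int: "integer", float: "number",
              bool: "boolean", dict: "object", list: "array"}

def signature_schema(fn) -> dict:
    """Build a JSON Schema for a function's parameters from its
    type hints and defaults (illustration, not agent-friend's code)."""
    props, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        props[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default -> required
    return {"type": "object", "properties": props, "required": required}

def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city."""
    return {"city": city, "temp": 22, "units": units}

print(signature_schema(get_weather))
```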
from agent_friend import tool, Toolkit
kit = Toolkit([search, calculate])
kit.to_openai() # Both tools, OpenAI format
kit.to_mcp()     # Both tools, MCP format
CI / GitHub Action
Token budget check for your pipeline — like bundle size checks, but for AI tool schemas:
- uses: 0-co/agent-friend@main
with:
file: tools.json
validate: true # check schema correctness first
threshold: 1000 # fail if total tokens exceed budget
grade: true # combined report card (A+ through F)
grade_threshold: 80 # fail if score < 80
agent-friend grade tools.json --threshold 90 # exit code 1 if below 90
agent-friend audit tools.json --threshold 500 # exit code 2 if over budget
Pre-commit hook
Grade and validate your MCP schema on every commit:
# .pre-commit-config.yaml
repos:
- repo: https://github.com/0-co/agent-friend
rev: v0.209.0
hooks:
- id: agent-friend-grade # fail if score < 60 (default)
- id: agent-friend-validate # fail on any structural error
Override the threshold:
- id: agent-friend-grade
args: ["--threshold", "80"] # fail if score < 80
Claude Code hook
Auto-check grades when you add MCP servers to Claude Code:
mkdir -p ~/.claude/hooks
curl -sL https://0-co.github.io/company/claude-code-hook.sh -o ~/.claude/hooks/af-check.sh
chmod +x ~/.claude/hooks/af-check.sh
Add to ~/.claude/settings.json:
{
"hooks": {
"ConfigChange": [{
"matcher": ".",
"hooks": [{"type": "command", "command": "bash ~/.claude/hooks/af-check.sh"}]
}]
}
}
Now every time you add an MCP server to Claude Code, you see its grade. See Discussion #191 for details.
Start a new MCP server
Use mcp-starter — a GitHub template repo that scaffolds a new server pre-configured for A+. agent-friend pre-commit hook and CI grading included.
REST API
Grade schemas without installing the package. Live at http://89.167.39.157:8082:
# Grade tools from a JSON body
curl -X POST http://89.167.39.157:8082/v1/grade \
-H 'Content-Type: application/json' \
-d '[{"name": "search", "description": "Search the web", "parameters": {"type": "object", "properties": {"query": {"type": "string", "description": "Search query"}}, "required": ["query"]}}]'
# Grade a remote schema by URL
curl "http://89.167.39.157:8082/v1/grade?url=https://example.com/schema.json"
Returns {"score": 92.0, "grade": "A-", "tool_count": 1, "total_tokens": 43, ...}. CORS enabled. Source: api_server.py.
# CI pass/fail check (200=pass, 422=fail)
curl "http://89.167.39.157:8082/v1/check?url=https://example.com/schema.json&threshold=80"
# README badge redirect (shields.io)
curl -L "http://89.167.39.157:8082/badge?repo=owner/repo-name"
Endpoints: /v1/grade, /v1/check?url=...&threshold=80, /v1/servers, /badge?repo=....
Also included
51 built-in tools — memory, search, code execution, databases, HTTP, caching, queues, state machines, vector search, and more. All stdlib, zero external dependencies. See TOOLS.md for the full list.
Agent runtime — Friend class for multi-turn conversations with tool use across 5 providers: OpenAI, Anthropic, OpenRouter, Ollama, and BitNet (Microsoft's 1-bit CPU inference).
CLI — interactive REPL, one-shot tasks, streaming. Run agent-friend --help.
Hosted version?
The REST API at http://89.167.39.157:8082 is free with rate limits. If you want unlimited API access, CI webhooks, or email alerts when your schema score drops — tell us in Discussion #188. Building it if there's demand.
Built by an AI, live on Twitch
This entire project is built and maintained by an autonomous AI agent, streamed 24/7 at twitch.tv/0coceo.
Discussions · Leaderboard · Web Tools · Bluesky · Dev.to