onion-mcp-server

Official, by onion-ai

sys_json_valid

Validate JSON strings for correctness and return formatted results, with optional pretty-printing.

Instructions

Validates whether a JSON string is well-formed and returns the formatted result.

Input Schema

JSON Schema

Name    Required  Description                                                        Default
text    Yes       The JSON string to validate                                        -
pretty  No        Whether to return pretty-printed JSON on success (default: true)   true
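As a sketch (standard library only, with a hypothetical helper name), the inputSchema above amounts to the following argument check:

```python
def check_args(args: dict) -> bool:
    """Minimal check mirroring the inputSchema: 'text' is a required
    string, 'pretty' an optional boolean defaulting to True."""
    if not isinstance(args.get("text"), str):
        return False
    return isinstance(args.get("pretty", True), bool)

print(check_args({"text": "{}"}))                   # 'text' alone suffices
print(check_args({"text": "{}", "pretty": False}))  # explicit 'pretty' is fine
print(check_args({"pretty": True}))                 # missing required 'text'
```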

Implementation Reference

  • Tool schema registration for sys_json_valid within the SYSTEM_TOOLS list. Defines name, description, and inputSchema (required 'text' string, optional 'pretty' boolean).
    types.Tool(
        name="sys_json_valid",
        # Description (translated): "Validates whether a JSON string is
        # well-formed and returns the formatted result."
        description="验证 JSON 字符串是否合法,并返回格式化后的结果。",
        inputSchema={
            "type": "object",
            "properties": {
                "text": {
                    "type": "string",
                    "description": "要验证的 JSON 字符串",  # "The JSON string to validate"
                },
                "pretty": {
                    "type": "boolean",
                    # "Whether to return pretty-printed JSON on success (default true)"
                    "description": "验证通过后是否返回格式化 JSON(默认 true)",
                    "default": True,
                },
            },
            "required": ["text"],
        },
    ),
  • Handler function that validates JSON strings. Uses json.loads() and returns detailed error position (line/col/context) on failure, or formatted JSON output on success.
    def _sys_json_valid(args: dict) -> list[types.TextContent]:
        text   = args["text"]
        pretty = bool(args.get("pretty", True))

        try:
            parsed = json.loads(text)
        except json.JSONDecodeError as e:
            # Failure message (translated): "JSON format error /
            # position: line {lineno}, col {colno} / message / context".
            return [types.TextContent(type="text", text=(
                f"❌ JSON 格式错误\n\n"
                f"错误位置: 第 {e.lineno} 行,第 {e.colno} 列\n"
                f"错误信息: {e.msg}\n"
                f"上下文:   ...{text[max(0, e.pos - 20):e.pos + 20]}..."
            ))]

        if pretty:
            # Success message (translated): "JSON is valid" + fenced pretty output.
            formatted = json.dumps(parsed, ensure_ascii=False, indent=2)
            return [types.TextContent(type="text", text=(
                f"✅ JSON 合法\n\n```json\n{formatted}\n```"
            ))]

        # Success without formatting: report only the top-level type name.
        type_name = type(parsed).__name__
        return [types.TextContent(type="text", text=(
            f"✅ JSON 合法  类型: {type_name}"
        ))]
  • Routing map inside handle_system that dispatches 'sys_json_valid' to the _sys_json_valid handler function.
    handlers = {
        "sys_time":       _sys_time,
        "sys_uuid":       _sys_uuid,
        "sys_hash":       _sys_hash,
        "sys_base64":     _sys_base64,
        "sys_url_encode": _sys_url_encode,
        "sys_json_valid": _sys_json_valid,
    }
    fn = handlers.get(name)
    if fn is None:
        raise ValueError(f"未知 system 工具: {name}")  # "Unknown system tool: {name}"
    return fn(arguments)
  • Top-level routing in server.py: maps all SYSTEM_TOOLS (including sys_json_valid) to the handle_system dispatcher.
    for _t in SYSTEM_TOOLS: 
        _HANDLERS[_t.name] = handle_system
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavioral traits. It states validation and formatting but does not disclose what happens when input is invalid (e.g., error message, return type). This is a significant gap for a validation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
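The undisclosed failure behavior is easy to pin down. A minimal reproduction of the handler's error path (standard library only; the function name `describe_error` is hypothetical):

```python
import json

def describe_error(text: str) -> str:
    # Same failure path as _sys_json_valid: report line/column,
    # the parser's message, and up to 20 characters of context
    # on either side of the error position.
    try:
        json.loads(text)
        return "valid"
    except json.JSONDecodeError as e:
        return (f"line {e.lineno}, col {e.colno}: {e.msg}; "
                f"context: ...{text[max(0, e.pos - 20):e.pos + 20]}...")

print(describe_error('{"a": 1,}'))  # trailing comma is invalid JSON
```

An agent calling the tool therefore gets a text report, not a structured error, which is worth stating in the description.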

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that conveys the core purpose without any extraneous words. It is concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (2 parameters with full schema descriptions, no output schema), the description covers the basic purpose. However, it lacks details on error handling and the default formatting behavior, which would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
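The default formatting behavior the review flags can be stated concretely. Assuming the handler code shown earlier, the two success outputs differ as follows (a sketch, not the server's exact wrapper):

```python
import json

parsed = json.loads('{"name": "onion", "ok": true}')

# pretty=True (the default): two-space indented block, non-ASCII preserved.
print(json.dumps(parsed, ensure_ascii=False, indent=2))

# pretty=False: the handler reports only the top-level type name instead.
print(type(parsed).__name__)
```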

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for both parameters. The description adds no new parameter details beyond what is already in the schema, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool validates JSON strings and returns formatted results. The verb 'validate' and resource 'JSON string' are specific, and there are no sibling tools with similar functionality, making it distinctive.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide explicit guidance on when to use this tool versus alternatives. However, since no sibling tool directly validates JSON, usage is implied. A more explicit statement about when to choose this tool (e.g., when needing JSON validation before processing) would be beneficial.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/onion-ai/mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.