
OpenSIPS MCP Server

Official
by OpenSIPS

perf_hotspots

Quickly identify performance bottlenecks in your OpenSIPS server by summarizing shmem usage, dialog count, active transactions, and error rates into a single actionable report.

Instructions

Quick "where is the pain?" summary.

Combines mem, get_statistics (for core rates and error counters), and a snapshot of process state. Returns a flat dict of the most useful single numbers — shmem usage %, dialog count, tm active transactions, error rate — so an operator (or LLM) can decide whether to dig deeper.
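The "decide whether to dig deeper" step can be sketched as a simple threshold check over the returned dict. This is a hypothetical sketch: the field names and thresholds below are assumptions for illustration, not the tool's documented keys.

```python
# Hypothetical sketch: key names and thresholds are assumptions,
# not the actual perf_hotspots output schema.
def triage(report):
    """Flag metrics from a perf_hotspots-style report that warrant deeper digging."""
    flags = []
    if report.get("shmem_used_pct", 0) > 80:
        flags.append("shmem pressure")
    if report.get("error_rate", 0) > 0.01:
        flags.append("elevated error rate")
    if report.get("tm_active_transactions", 0) > 10000:
        flags.append("transaction backlog")
    return flags

sample = {
    "shmem_used_pct": 91.5,
    "dialog_count": 1240,
    "tm_active_transactions": 3300,
    "error_rate": 0.002,
}
print(triage(sample))  # only the shmem flag fires for these sample numbers
```

An empty list would mean none of the headline numbers look alarming, so the operator can skip the more expensive per-module statistics tools.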

Input Schema


No arguments

Output Schema


No arguments

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must carry the behavioral disclosure itself. It indicates a read-only summary but does not mention side effects, permissions, rate limits, or cost. It adds basic transparency about what data is combined but is insufficient for full behavioral understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is four sentences, front-loaded with the main purpose, and every sentence adds value. No redundant or vague language.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple summary tool, the description is largely complete: it explains which sources are combined and which fields are returned. However, it could say more about when to prefer this tool over other stats tools, or note its limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has no parameters, so the description cannot add parameter semantics beyond the schema. However, it compensates by detailing the output composition and values, which is valuable context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides a quick summary of system pain points by combining multiple data sources. It specifies the exact values returned and distinguishes itself from siblings by being a high-level diagnostic entry point.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool should be used when an operator or LLM needs to decide whether to dig deeper, but it does not explicitly state when to use it versus alternatives like get_statistics or perf_memory_report. The context is clear but explicit exclusions are missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/OpenSIPS/opensips-mcp-server'
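The same lookup can be scripted with Python's standard library. This is only a sketch: the endpoint URL comes from the curl example above, but the response shape is not documented here, so the function simply parses whatever JSON the API returns.

```python
import json
import urllib.request

# URL taken from the curl example; the response schema is not shown on this page.
SERVER_INFO_URL = "https://glama.ai/api/mcp/v1/servers/OpenSIPS/opensips-mcp-server"

def fetch_server_info(url=SERVER_INFO_URL):
    """Fetch and parse the directory entry for the OpenSIPS MCP server."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```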

If you have feedback or need assistance with the MCP directory API, please join our Discord server