Obenan MCP Server

by Azhar-obenan

Server Quality Checklist

Profile completion: 42%

A complete profile improves this server's visibility in search results.
  • This repository includes a README.md file.

  • Add a LICENSE file by following GitHub's guide.

    MCP servers without a LICENSE cannot be installed.

  • Latest release: v0.1.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 4 tools. View schema
  • No known security issues or vulnerabilities reported.

    Report a security issue

  • Are you the author? Claim the server by authenticating with GitHub (see "How to claim the server?" below).

  • Add related servers to improve discoverability.

Tool Scores

  • Obenan Review Analyzer

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure but fails to indicate whether the operation is read-only, destructive, or idempotent. It does not disclose rate limits, authentication requirements, or the format of the analysis results.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single sentence and appropriately concise, but its brevity reflects under-specification rather than efficient information density. It is front-loaded with the verb and resource, yet misses the opportunity to pack behavioral details into the same length.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a single-parameter tool, the description is insufficient given the lack of an output schema and annotations. It fails to explain what the Obenan Review Analyzer actually returns or the nature of the analysis performed, leaving critical gaps in the agent's understanding.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage for the single 'prompt' parameter. The description adds no additional semantics about parameter format, expected content, or examples, meeting the baseline expectation when the schema is self-documenting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description identifies the resource (reviews) and action (analyze), but 'analyze' is vague regarding the specific analysis type (sentiment, summarization, extraction). It mentions the Obenan API, providing domain context that distinguishes it from location-focused siblings, though the core purpose remains under-specified.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this tool versus the location-based siblings (fetch_my_locations, etc.), nor any prerequisites, input constraints, or conditions for optimal use. The agent must infer applicability solely from the tool name.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • fetch_my_locations

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full disclosure burden but provides minimal behavioral context. It mentions authentication (access token) but omits what data structure is returned, pagination behavior, rate limits, or error conditions. The agent cannot determine if this is a safe read operation or what fields the locations contain.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely concise at nine words in a single sentence, front-loaded with the verb 'Fetch'. While no words are wasted, the brevity crosses into under-specification given the lack of annotations and an output schema: appropriate density, but insufficient length for the tool's complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the presence of multiple location-related siblings and no output schema, the description inadequately prepares the agent for tool selection. It fails to explain return values, distinguish this list operation from single-record retrieval, or document authentication requirements beyond the parameter itself.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage, establishing a baseline of 3. The description adds no additional parameter semantics beyond mentioning the access token (already documented in the schema). It does not explain the relationship between group_id and the returned locations or provide usage examples.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states the basic action (Fetch locations) and target API (Obenan), matching the tool name's intent. However, it fails to clarify the 'my' aspect (user-specific locations vs. public/global) or distinguish from siblings like 'get_location_details' and 'search_locations_by_name', leaving ambiguity about scope and return cardinality.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this tool versus the three sibling location tools. The description does not indicate whether this returns a list vs. single record, or when to prefer searching by name versus fetching 'my' locations.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_location_details

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure but fails to address idempotency, error handling (e.g., 404 for invalid IDs), what constitutes 'detailed information,' or specific auth requirements beyond the parameter name.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, front-loaded sentence with no redundancy. However, it is arguably underspecified given the lack of annotations and output schema, though structurally it is efficient.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Without an output schema, the description should clarify what 'detailed information' includes (e.g., address, hours, coordinates) or behavioral traits. It also lacks differentiation from siblings and auth context, leaving operational gaps.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, with both location_id and access_token fully documented in the schema. The description adds no additional parameter semantics, but given the high schema coverage, the baseline score of 3 is appropriate.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the verb ('Get'), resource ('detailed information about a specific location'), and scope ('by ID'). The 'by ID' phrasing implicitly distinguishes it from sibling search_locations_by_name and fetch_my_locations, though it does not explicitly name these alternatives.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus siblings like search_locations_by_name (when the ID is unknown) or fetch_my_locations (for bulk retrieval). No prerequisites or exclusions are mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • search_locations_by_name

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure but fails to specify what the tool returns (list of locations? IDs only? how many results?), pagination behavior, search matching logic (partial, case-sensitive), or rate limits. The 'allow selection' phrase suggests interactive behavior but doesn't clarify the actual API mechanics.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is brief (11 words) and front-loads the primary action, but the second clause 'and allow selection to get details' is structurally awkward and semantically vague. It attempts to pack workflow guidance into a single sentence, resulting in confusing phrasing that could be interpreted as the tool having interactive UI capabilities.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a 2-parameter search tool without output schema, the description covers the minimum viable use case but leaves significant gaps. It doesn't describe the return format (critical since no output schema exists), result limits, or the relationship to fetch_my_locations. Adequate but clearly incomplete given the lack of structured metadata.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema adequately documents both search_term and access_token without requiring additional description text. The description mentions no parameters explicitly, but the schema compensates fully, meeting the baseline expectation for this dimension.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the core action ('Search for locations') and filtering criteria ('containing a specific name'). However, the phrase 'allow selection to get details' is ambiguous—it's unclear if the tool performs selection or merely returns candidates for selection via the sibling get_location_details tool.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies a workflow (search → get details) but doesn't explicitly state when to use this tool versus fetch_my_locations. It hints at using results with get_location_details via 'allow selection to get details,' but lacks explicit 'when to use/when not to use' guidance or clear differentiation from the sibling that fetches 'my locations'.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
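
To make these recommendations concrete, here is a hedged sketch of what a stronger definition for search_locations_by_name could look like, written as a Python dict mirroring the JSON of an MCP tools/list entry. The description wording, input schema details, and annotation values are illustrative assumptions, not the server's actual published definition; readOnlyHint, idempotentHint, and openWorldHint are the standard MCP tool annotations the Behavior critiques refer to.

# Hypothetical improved definition for search_locations_by_name.
# All wording and values below are illustrative assumptions, not
# the server's actual published definition.
improved_tool = {
    "name": "search_locations_by_name",
    "description": (
        "Search the authenticated user's Obenan locations for names containing "
        "search_term (read-only, no side effects) and return the matching "
        "locations with their IDs. Use this when the location ID is unknown; "
        "call get_location_details once you have an ID, and use "
        "fetch_my_locations to list all locations without filtering."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "search_term": {
                "type": "string",
                "description": "Substring to match against location names.",
            },
            "access_token": {
                "type": "string",
                "description": "Obenan API access token.",
            },
        },
        "required": ["search_term", "access_token"],
    },
    # MCP annotations disclose behavior without relying on prose alone.
    "annotations": {
        "readOnlyHint": True,    # performs no writes
        "idempotentHint": True,  # repeated calls are safe
        "openWorldHint": True,   # calls the external Obenan API
    },
}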

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

[Badge preview: review-analyzer-mcp-server MCP server]

Copy the badge snippet from the server page to your README.md.

Score Badge

[Badge preview: review-analyzer-mcp-server MCP server]

Copy the badge snippet from the server page to your README.md.

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six weighted dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%); the weighted average is the tool's Tool Definition Quality Score (TDQS). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
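
The arithmetic can be made concrete with a short sketch. The dimension weights, the 60/40 mean-min blend, the 70/30 overall split, and the tier thresholds come from the description above, and the per-tool dimension scores are the ones reported in this page's Tool Scores section; the Server Coherence value is a placeholder, since this page does not publish that number.

# Sketch of the scoring pipeline described above. The coherence value
# is a placeholder assumption; everything else is taken from the text.
DIMENSION_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

# Per-tool 1-5 scores as reported in the Tool Scores section.
# Keys are informal labels for this server's four tools.
tools = {
    "review_analyzer": {"behavior": 2, "conciseness": 3, "completeness": 2,
                        "parameters": 3, "purpose": 3, "usage": 2},
    "fetch_my_locations": {"behavior": 2, "conciseness": 4, "completeness": 2,
                           "parameters": 3, "purpose": 3, "usage": 2},
    "get_location_details": {"behavior": 2, "conciseness": 4, "completeness": 2,
                             "parameters": 3, "purpose": 4, "usage": 2},
    "search_locations_by_name": {"behavior": 2, "conciseness": 3, "completeness": 3,
                                 "parameters": 3, "purpose": 4, "usage": 3},
}

def tdqs(scores: dict[str, int]) -> float:
    """Weighted 1-5 Tool Definition Quality Score for one tool."""
    return sum(DIMENSION_WEIGHTS[d] * s for d, s in scores.items())

per_tool = [tdqs(s) for s in tools.values()]
# The min term is how a single poorly described tool drags the score down.
definition_quality = 0.6 * (sum(per_tool) / len(per_tool)) + 0.4 * min(per_tool)

coherence = 3.0  # placeholder; not published on this page

overall = 0.7 * definition_quality + 0.3 * coherence
tier = ("A" if overall >= 3.5 else "B" if overall >= 3.0
        else "C" if overall >= 2.0 else "D" if overall >= 1.0 else "F")
print(f"TDQ={definition_quality:.2f} overall={overall:.2f} tier={tier}")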

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Azhar-obenan/review-analyzer-mcp-server'
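
The same endpoint can be consumed programmatically. Below is a minimal sketch in Python that treats the response as opaque JSON, since this page does not document the response's exact shape:

import json
import urllib.request

# Endpoint taken from the curl example above.
URL = "https://glama.ai/api/mcp/v1/servers/Azhar-obenan/review-analyzer-mcp-server"

# Fetch the directory entry and pretty-print whatever JSON comes back.
with urllib.request.urlopen(URL) as resp:
    server = json.load(resp)

print(json.dumps(server, indent=2))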

If you have feedback or need assistance with the MCP directory API, please join our Discord server.