ThinkneoAI / thinkneo-control-plane (Official)

Server Quality Checklist

  • Disambiguation: 4/5

    Tools are generally distinct with clear domains: policy (permissions), guardrail (content safety), spend (analytics), budget (limits), compliance (audit), alerts (incidents), and provider health. Minor semantic overlap exists between check_spend and get_budget_status (both financial), and between check_policy and evaluate_guardrail (both validation), but descriptions clarify their distinct scopes.

    Naming Consistency: 4/5

    Consistent snake_case prefixing (thinkneo_) and mostly verb_noun structure (check, evaluate, get, list, schedule). Minor deviation with provider_status which lacks a verb prefix, breaking the pattern established by get_budget_status and get_compliance_status. Verbs vary (check/evaluate/get) but remain readable.

    Tool Count: 4/5

    Eight tools is a reasonable count for an AI governance control plane, covering policy, cost, compliance, and operational monitoring. The inclusion of schedule_demo (a sales/marketing function) is slightly incongruous with the operational focus of the other seven tools, but doesn't bloat the set.

    Completeness: 3/5

    Strong observability coverage (read-only status checks, spend analysis, alert listing) but lacks expected control plane capabilities: no tools to create/update policies, set budgets, configure guardrails, or manage/acknowledge alerts. The surface is adequate for monitoring but incomplete for 'control' operations implied by the server name.

  • Average 3.7/5 across 8 of 8 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.1.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • This repository includes a glama.json configuration file.

  • This server provides 8 tools.
  • No known security issues or vulnerabilities reported.


  • This server has been verified by its author.

Tool Scores

  • thinkneo_check_spend

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already establish readOnlyHint=true and idempotentHint=true, so the description doesn't need to repeat safety traits. It adds value by disclosing the authentication requirement and summarizing the return structure (cost breakdown by provider/model/period). However, it omits rate limits, caching behavior, or error conditions (e.g., invalid workspace).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is three sentences with zero waste. It front-loads the core action ('Check AI spend summary'), follows with return value description, and ends with the auth requirement. Every sentence earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the presence of an output schema (not shown but indicated) and 100% input schema coverage, the description appropriately provides a high-level summary rather than detailed field documentation. However, it lacks differentiation from financial siblings (particularly thinkneo_get_budget_status) and doesn't clarify the operational scope limitations (workspace-only filtering).

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema itself documents all five parameters comprehensively (including the conditional logic for start_date/end_date). The description mentions 'workspace, team, or project' which loosely maps to the workspace parameter and group_by options, but doesn't add syntax details, format examples, or clarify the relationship between period and date parameters beyond what's in the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool 'Check[s] AI spend summary' with specific breakdown dimensions (provider, model, time period). However, it slightly conflates filtering scope by mentioning 'team, or project' as targets when the schema only supports workspace-level filtering with team/project as grouping dimensions (group_by parameter), and doesn't differentiate from the similar sibling thinkneo_get_budget_status.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description notes 'Requires authentication' as a prerequisite but provides no guidance on when to use this tool versus alternatives like thinkneo_get_budget_status (which also deals with financial data) or when to use custom date ranges versus preset periods. No exclusions or prerequisites beyond auth are mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • thinkneo_check_policy

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already declare readOnlyHint=true and idempotentHint=true. The description adds the authentication requirement, which is valuable behavioral context not present in the annotations. No contradictions with annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste. The first sentence front-loads the core purpose and scope; the second states the auth requirement. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the low complexity (4 simple parameters), 100% schema coverage, existing annotations, and presence of an output schema, the description is complete. It does not need to detail return values since the output schema exists, and the auth requirement is noted.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the baseline is 3. The description references the parameter concepts (model, provider, action) but does not add semantic details beyond the schema's examples (e.g., gpt-4o, openai) or explain parameter interaction logic.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the verb (Check) and the specific resource (governance policy allowance for models/providers/actions). It implicitly distinguishes from siblings like check_spend or get_budget_status through specificity, though it doesn't explicitly name alternatives.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description mentions 'Requires authentication' as a prerequisite, but provides no guidance on when to use this tool versus siblings like thinkneo_get_compliance_status or thinkneo_evaluate_guardrail, nor when to check specific parameter combinations.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • thinkneo_get_budget_status

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare readOnlyHint=true. Description adds 'Requires authentication' which is critical behavioral context not in annotations, and previews returned data concepts (spend vs limit, thresholds). Does not contradict annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences with zero waste. Front-loaded with main action ('Get current budget...'), followed by specific capabilities ('Shows spend vs limit...') and requirements ('Requires authentication'). Every sentence earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Appropriate for a read-only status tool with existing output schema and annotations. Description covers authentication requirement and nature of returned data without needing to replicate output schema details. Sufficient for tool complexity.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with the 'workspace' parameter fully documented in schema ('Workspace name or ID to retrieve current budget status for'). Description does not add parameter-specific semantics beyond schema, which meets baseline for high coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear verb 'Get' and specific resource 'budget utilization and enforcement status'. Lists specific data points (spend vs limit, thresholds, overage) that implicitly distinguish from simple spend checking, though does not explicitly differentiate from sibling thinkneo_check_spend.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit guidance on when to use versus alternatives like thinkneo_check_spend. While the specific capabilities listed (enforcement status, projections) imply appropriate use cases, there is no explicit when-to-use or when-not-to-use guidance.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • thinkneo_evaluate_guardrail

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already establish read-only and idempotent safety. The description adds valuable behavioral context not in annotations: the authentication requirement and a summary of return values (risk assessment, violations, recommendations). It appropriately doesn't contradict the annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three tightly constructed sentences with zero waste. Front-loaded with the core action, followed by return value description, and ending with the authentication requirement. Every sentence earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the presence of a complete input schema (100% coverage), output schema, and annotations, the description provides sufficient additional context (authentication needs, return summary, workflow timing) without needing to duplicate schema details. Adequately complete for the tool's complexity.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the baseline is appropriately met. The description implicitly clarifies the 'text' parameter by calling it a 'prompt or text' and provides workflow context ('before sending to AI provider') that adds semantic meaning to the evaluation process without redundantly listing parameters.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action (evaluate), resource (prompt/text against ThinkNEO guardrail policies), and scope. It distinguishes from siblings by specifying 'guardrail policies' versus general 'policy' checks or other operations, though it doesn't explicitly contrast with thinkneo_check_policy.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides important temporal context ('before sending it to an AI provider') implying when to invoke it in a workflow. However, it lacks explicit guidance on when to use this versus thinkneo_check_policy or other policy-related siblings, and doesn't mention prerequisites beyond authentication.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • thinkneo_get_compliance_status

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations indicate read-only/idempotent behavior, and the description is consistent with 'Get' and 'Shows'. It adds valuable context beyond annotations by noting the authentication requirement and previewing specific output content (governance score, pending actions, compliance gaps), helping the agent understand what data will be returned even though an output schema exists.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences efficiently structured: purpose statement first, followed by output preview, then authentication requirement. Every sentence provides distinct value without redundancy or unnecessary elaboration. Appropriate length for the tool's complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the presence of complete schema annotations (100% coverage), output schema, and safety hints (readOnly/idempotent), the description provides sufficient high-level context. It appropriately focuses on purpose and prerequisites rather than duplicating structured data, though it could slightly improve by hinting at relationship to sibling governance tools.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the baseline is 3. The description does not explicitly discuss the parameters (workspace, framework) or their semantics, relying entirely on the schema's detailed descriptions. No additional parameter guidance (e.g., format hints, examples) is provided in the description text.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves 'compliance and audit readiness status' and specifies the target resource (workspace). It distinguishes itself from siblings like check_policy or check_spend by specifying outputs unique to compliance (governance score, compliance gaps), though it doesn't explicitly name sibling alternatives.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description notes 'Requires authentication' as a prerequisite but provides no guidance on when to use this tool versus siblings like thinkneo_check_policy or thinkneo_evaluate_guardrail. There are no explicit when-to-use or when-not-to-use conditions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • thinkneo_list_alerts

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    The annotation declares readOnlyHint: true, and the description aligns with this ('List'). The description adds valuable behavioral context beyond the annotation: it specifies 'active' status (scope/temporal filtering), lists content categories to expect, and notes authentication requirements. It does not contradict the read-only nature.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of three sentences with zero waste: sentence one establishes core purpose, sentence two specifies content types (high value add), and sentence three states authentication requirements. Information is front-loaded appropriately with the primary action stated first.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the existence of an output schema (covering return values), 100% parameter schema coverage, and readOnly annotations, the description provides sufficient context. It adequately explains what the tool returns (the four alert types) without needing to detail return structure. It meets completeness requirements for a standard list operation with three parameters.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, providing detailed descriptions for workspace, severity, and limit parameters. The description mentions 'workspace' in the first sentence, reinforcing the required parameter's role, but does not elaborate further on parameter semantics. With full schema coverage, the baseline score of 3 is appropriate as the description does not need to compensate for missing schema documentation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('List') with clear resource ('active alerts and incidents') and scope ('for a workspace'). It effectively distinguishes from siblings (check_policy, check_spend, evaluate_guardrail, etc.) by positioning this as an aggregation endpoint that covers 'budget alerts, policy violations, guardrail triggers, and provider issues'—implicitly contrasting with the specific deep-check operations of sibling tools.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides implied usage guidance by enumerating the specific alert types included (budget, policy, guardrail, provider), which helps identify when to use this tool versus specific check/evaluate siblings. However, it lacks explicit guidance such as 'use this for overview, use check_policy for specific policy validation' or explicit when-not conditions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • thinkneo_provider_status

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    While annotations declare readOnlyHint=true, the description adds valuable behavioral context: it specifies the authentication requirements ('No authentication required') and details what metrics are returned (latency, error rates, availability). This goes beyond the binary read-only flag to explain what data the agent can expect.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three well-structured sentences with zero waste: first establishes purpose, second details output metrics, third states authentication requirements. Information is front-loaded and every sentence earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the presence of an output schema and readOnly annotations, the description provides sufficient context without redundancy. It adequately covers the tool's purpose, return value categories, and invocation prerequisites for a simple monitoring tool with two optional parameters.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema fully documents both parameters including specific provider enum values (openai, anthropic, etc.) and the optional workspace context. The description relies on the schema for parameter semantics, which is appropriate given the comprehensive schema documentation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verbs ('Get', 'Shows') and clearly identifies the resource (health and performance status of AI providers) and scope (routed through ThinkNEO gateway). It effectively distinguishes from siblings like check_spend, check_policy, and get_compliance_status by focusing on provider infrastructure rather than financial or policy domains.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides one usage constraint ('No authentication required'), which is helpful for invocation prerequisites. However, it lacks explicit guidance on when to use this tool versus sibling status-checking tools like thinkneo_get_budget_status or thinkneo_check_spend, leaving the selection criteria implied rather than stated.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • thinkneo_schedule_demo

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations indicate this is a non-read-only, non-idempotent operation (write). The description adds valuable behavioral context that no authentication is required, which is critical for a write operation. It also clarifies the data collection nature of the tool.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three efficient sentences: the first states the core action, the second describes the data collection behavior, and the third provides the auth requirement. Every sentence earns its place with no redundancy or waste.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the 100% schema coverage, existing output schema, and annotations covering the safety profile, the description provides complete contextual value by adding the authentication requirement (not in annotations) without needing to repeat parameter or return value details.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema fully documents all seven parameters. The description broadly mentions 'contact information and preferences' but does not add semantic details, formats, or relationships beyond what the structured schema already provides, warranting the baseline score.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Schedule') with a clear resource ('demo or discovery call with the ThinkNEO team'), and effectively distinguishes from operational siblings (check_policy, check_spend, etc.) by indicating this is for booking meetings rather than querying system state.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear context that this is for scheduling demos and includes the important usage constraint 'No authentication required.' However, it lacks explicit when-to-use guidance contrasting with the seven sibling monitoring tools, though the functional distinction is clear.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge


Copy to your README.md:

Score Badge


Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you must first add a glama.json file to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.
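A common reason the claim step fails is a malformed glama.json. As a quick sanity check before committing, you can parse the payload yourself; the sketch below uses Python's standard json module and inlines the example configuration above ("your-github-username" is a placeholder):

```python
import json

# Inline copy of the example glama.json; "your-github-username" is a placeholder.
payload = """
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
"""

# json.loads raises json.JSONDecodeError if the payload is malformed.
cfg = json.loads(payload)

# The claim flow matches maintainers against GitHub usernames, so the list
# must exist and be non-empty.
assert isinstance(cfg.get("maintainers"), list) and cfg["maintainers"]
print("glama.json is well-formed")
```

Running the same check against the real file (json.load with the file open) catches trailing commas and similar mistakes before you push.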

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS), a 1–5 rating weighted across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level description quality score is then 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the whole score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
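To make the weighting concrete, here is a small Python sketch of the formula described above. The dimension weights, the 60/40 mean-vs-minimum blend, the 70/30 overall split, and the tier cutoffs come from this page; the per-tool and coherence scores are made-up illustrative values, not real data:

```python
# Sketch of the quality-score formula described above. Weights and tier
# cutoffs come from this page; the per-tool scores are illustrative only.

DIM_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 Tool Definition Quality Score for one tool."""
    return sum(DIM_WEIGHTS[d] * s for d, s in scores.items())

# Hypothetical dimension scores for a three-tool server.
tools = [
    {"purpose": 4, "usage": 2, "behavior": 3, "parameters": 3, "conciseness": 5, "completeness": 3},
    {"purpose": 5, "usage": 3, "behavior": 4, "parameters": 3, "conciseness": 5, "completeness": 4},
    {"purpose": 4, "usage": 2, "behavior": 3, "parameters": 3, "conciseness": 5, "completeness": 4},
]

per_tool = [tool_tdqs(t) for t in tools]
# 60% mean + 40% minimum: one badly described tool drags the server down.
server_tdqs = 0.6 * (sum(per_tool) / len(per_tool)) + 0.4 * min(per_tool)

# Server Coherence: four dimensions weighted equally (illustrative scores).
coherence = (4 + 4 + 4 + 3) / 4

overall = 0.7 * server_tdqs + 0.3 * coherence
tier = ("A" if overall >= 3.5 else "B" if overall >= 3.0
        else "C" if overall >= 2.0 else "D" if overall >= 1.0 else "F")
print(round(overall, 2), tier)
```

Note how the 40% minimum term works: even though the mean TDQS here is about 3.53, the weakest tool (3.25) pulls the server-level score down to 3.42 before coherence is blended in.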


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ThinkneoAI/mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.