obscuraai-mcp
Server Quality Checklist
- Disambiguation 5/5
With only one tool, there is no possibility of ambiguity or overlap between tools, as there are no other tools to compare it to. The tool's purpose is clearly defined and distinct by default.
- Naming Consistency 5/5
Since there is only one tool, naming consistency is inherently perfect. The tool name 'generate_obscura_workflow' follows a clear verb_noun pattern, but there are no other tools to assess consistency across a set.
- Tool Count 2/5
A single tool is too few for the server's apparent purpose of generating and managing visual AI automation workflows, as it lacks operations for viewing, editing, or exporting workflows beyond the initial generation. This minimal scope will likely cause agent failures due to incomplete functionality.
- Completeness 2/5
The tool surface is severely incomplete for the domain of workflow automation; it only provides generation without any CRUD operations (e.g., no tools to list, update, delete, or retrieve existing workflows), leaving significant gaps that will hinder agent effectiveness.
Average 3.2/5 across 1 of 1 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v1.0.4
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 1 tool.
No known security issues or vulnerabilities reported.
Are you the author? Add related servers to improve discoverability.
Tool Scores
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the output ('shareable link') and that the workflow can be 'viewed, edited, and exported,' but lacks details on permissions, rate limits, error handling, or whether the generation is idempotent. For a tool with no annotations, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, stating the purpose and output in the first sentence. The second sentence adds usage context without redundancy. Both sentences earn their place, but minor improvements in clarity could push it to a perfect score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description partially compensates by explaining the output format and usage. However, it lacks details on behavioral traits like error conditions or performance, and does not fully address the complexity of a workflow generation tool. It is minimally adequate but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not explicitly discuss parameters, but the input schema has 100% description coverage, clearly documenting both parameters. The description implies parameter use through 'Describe the business process to automate' but adds no additional meaning beyond the schema. With high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Generate a visual AI automation workflow for a business process.' It specifies the verb ('Generate') and resource ('visual AI automation workflow'), and mentions the output ('Returns a shareable link to an interactive canvas on obscuraai.xyz'). However, since there are no sibling tools, it cannot distinguish from alternatives, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage guidance: 'Use when a user wants to map out an automation, workflow, or AI system for their business.' This implies the context but does not explicitly state when not to use it or compare to alternatives. Since there are no sibling tools, the lack of alternatives is understandable, but the guidance remains basic.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository:
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
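The arithmetic above can be sketched in a few lines of Python, using this server's own published scores as the worked example. The weights come from the description above; exact rounding and aggregation details on Glama's side may differ.

```python
def tdqs(scores: dict) -> float:
    """Tool Definition Quality Score: weighted mean of six dimensions (1-5)."""
    weights = {
        "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
        "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
    }
    return sum(weights[k] * scores[k] for k in weights)

def overall(tool_scores: list, coherence_scores: list) -> float:
    """Overall quality: 70% definition quality (60% mean + 40% min TDQS) + 30% coherence."""
    per_tool = [tdqs(s) for s in tool_scores]
    defq = 0.6 * (sum(per_tool) / len(per_tool)) + 0.4 * min(per_tool)
    coherence = sum(coherence_scores) / len(coherence_scores)
    return 0.7 * defq + 0.3 * coherence

# This server's single tool, per the breakdown above:
tool = {"purpose": 4, "usage": 3, "behavior": 2,
        "parameters": 3, "conciseness": 4, "completeness": 3}
# Coherence dimensions: disambiguation, naming, tool count, completeness.
score = overall([tool], [5, 5, 2, 2])
print(f"{score:.3f}")  # 3.255 -> tier B (>= 3.0)
```

With only one tool, the mean and minimum TDQS coincide, so the 60/40 split has no effect; it starts to matter once a server exposes several tools of uneven quality.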
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/madeinphantom/obscuraai-mcp-server'
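The same endpoint can be queried with the Python standard library. The URL pattern (`/servers/{owner}/{repo}`) is taken from the curl example above; the shape of the returned JSON is not documented here, so the fetch helper returns the raw document rather than assuming any fields.

```python
import json
import urllib.request

API_BASE = "https://glama.ai/api/mcp/v1"

def server_url(owner: str, repo: str) -> str:
    # Mirrors the curl example: GET /servers/{owner}/{repo}
    return f"{API_BASE}/servers/{owner}/{repo}"

def fetch_server(owner: str, repo: str) -> dict:
    # Requires network access; returns the API response as parsed JSON.
    with urllib.request.urlopen(server_url(owner, repo)) as resp:
        return json.load(resp)

print(server_url("madeinphantom", "obscuraai-mcp-server"))
```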
If you have feedback or need assistance with the MCP directory API, please join our Discord server.