SOMA
Server Quality Checklist
- Disambiguation: 5/5
The three tools have completely distinct purposes: listing capabilities, submitting new requests, and checking existing request status. No overlap or ambiguity exists between them.
- Naming Consistency: 5/5
All tools follow a consistent verb_noun pattern in snake_case (check_status, list_services, submit_request). The naming convention is predictable and uniform throughout the set.
- Tool Count: 4/5
Three tools is at the lower bound of the ideal range but appropriate for this concierge-style service. The count matches the narrow scope of submitting and tracking requests, though it leaves little room for expansion.
- Completeness: 3/5
While the basic submit-and-check workflow is covered, notable gaps exist for a request management system: no ability to cancel or modify requests, retrieve detailed request information beyond status, or list historical requests. The quote/acceptance workflow mentioned in the tool descriptions also lacks tool support.
Average 3.6/5 across 3 of 3 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v1.0.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
This repository includes a glama.json configuration file.
This server provides 3 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
check_status
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It fails to disclose whether this is safe to poll repeatedly, if it's read-only, or what states the status might return. These are critical gaps for a status-checking tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. The purpose is front-loaded ('Check the status...'), followed immediately by the parameter semantics. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter tool with an output schema (so return values needn't be described), but clear gaps remain regarding behavioral traits (idempotency, polling safety) that are important for status-checking operations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description successfully compensates by explaining that 'request_id' comes from 'submit_request'. This provides crucial semantic context linking the parameter to the sibling tool's output, though it lacks format constraints or examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific action ('Check') and resource ('status of a Soma request'). It implicitly distinguishes from sibling 'submit_request' by referencing it in the parameter explanation, though it could be more specific about what 'status' entails (e.g., completion state vs health check).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The parameter description implies a workflow ('the ID returned by submit_request'), suggesting when to use this tool. However, it lacks explicit guidance on polling behavior, rate limits, or when NOT to use this versus alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_services
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It compensates partially by specifying the return value ('available service categories'), but fails to state whether the operation is read-only, idempotent, or has side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with no redundancy. The first states the action, the second the return value. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (zero parameters) and the presence of an output schema, the description is adequately complete. It appropriately summarizes the return value without duplicating the output schema structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, which establishes a baseline score of 4 for this dimension. The description correctly implies that no configuration is needed to retrieve the full service catalog.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
States a clear verb ('List') and resource ('what Soma can do' / 'service categories'). Implicitly distinguishes from siblings 'check_status' (request status tracking) and 'submit_request' (action submission) by focusing on capability discovery.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to invoke this tool versus alternatives. Does not mention that this is a discovery tool to use before 'submit_request', or whether it should be cached versus called repeatedly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
submit_request
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. It adds valuable behavioral context about the human-in-the-loop review and quoting process, plus the delivery mechanism via the contact field. However, it omits critical details such as expected turnaround time, idempotency guarantees, and error handling for invalid requests.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with clear purpose statement. Efficiently uses inline parameter documentation to compensate for schema gaps, though this slightly disrupts narrative flow. No redundant or filler content; every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for tool complexity: 2 simple parameters with output schema present (per context signals), so return values need not be described. Covers submission flow, human review process, and parameter semantics sufficiently for an agent to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage (properties lack descriptions). Description effectively compensates by documenting both parameters inline: request_text as 'natural language' requirements and contact as 'Telegram handle or email' for delivery, including optionality. Could improve with format examples or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Submit') with resource ('service request') and scope ('to Soma — the agent marketplace'). Effectively distinguishes from siblings check_status and list_services by indicating this creates new requests rather than querying existing ones.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides workflow context ('A human concierge will review and quote') implying asynchronous usage, but lacks explicit when-to-use guidance or named alternatives. Does not state prerequisites or when to prefer check_status or list_services instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
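Every Behavior review above notes that no annotations are provided. As a hedged sketch, not the server's actual definition, here is how check_status could declare the missing hints. The annotation field names (readOnlyHint, idempotentHint, openWorldHint) come from the MCP specification's ToolAnnotations; the description wording and the behavior they assert are assumptions, shown as a Python dict mirroring the tool's JSON definition:

# Hypothetical revision of check_status addressing the gaps flagged above.
# Annotation field names follow MCP's ToolAnnotations; the description
# wording and behavioral claims are assumptions, not the server's own.
check_status_tool = {
    "name": "check_status",
    "description": (
        "Check the status of a Soma request. Read-only and safe to poll. "
        "Pass the request_id returned by submit_request."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "request_id": {
                "type": "string",
                "description": "ID returned by submit_request.",
            }
        },
        "required": ["request_id"],
    },
    "annotations": {
        "title": "Check Request Status",
        "readOnlyHint": True,    # discloses that polling has no side effects
        "idempotentHint": True,  # repeated calls with the same ID are safe
        "openWorldHint": True,   # the tool reaches an external service
    },
}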
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions, yielding a Tool Definition Quality Score (TDQS): Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
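A minimal Python sketch of this arithmetic, using the weights stated above and the dimension scores reported in the Tool Scores section (the exact rounding Glama applies is an assumption):

# Reproduces the published formula with this server's reported scores.
TDQS_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tdqs(scores):
    # Weighted per-tool Tool Definition Quality Score.
    return sum(TDQS_WEIGHTS[dim] * value for dim, value in scores.items())

tools = {
    "check_status":   {"purpose": 4, "usage": 3, "behavior": 2,
                       "parameters": 4, "conciseness": 5, "completeness": 3},
    "list_services":  {"purpose": 4, "usage": 2, "behavior": 3,
                       "parameters": 4, "conciseness": 5, "completeness": 4},
    "submit_request": {"purpose": 5, "usage": 3, "behavior": 3,
                       "parameters": 4, "conciseness": 4, "completeness": 4},
}

per_tool = [tdqs(s) for s in tools.values()]   # 3.40, 3.50, 3.85
mean_tdqs = sum(per_tool) / len(per_tool)      # about 3.58, shown as 3.6/5
definition_quality = 0.6 * mean_tdqs + 0.4 * min(per_tool)

# Server Coherence: Disambiguation 5, Naming 5, Tool Count 4, Completeness 3.
coherence = (5 + 5 + 4 + 3) / 4                # 4.25

overall = 0.7 * definition_quality + 0.3 * coherence
print(round(overall, 2))                       # about 3.73, tier A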
Latest Blog Posts
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/giskard09/soma'
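For example, the same endpoint can be fetched with the Python standard library. The response schema is not documented here, so the sketch prints the payload as-is rather than assuming specific fields:

# Minimal sketch using only the standard library; the endpoint is taken
# from the curl example above.
import json
from urllib.request import urlopen

URL = "https://glama.ai/api/mcp/v1/servers/giskard09/soma"

with urlopen(URL) as resp:
    server = json.load(resp)

print(json.dumps(server, indent=2))  # inspect whatever fields are returned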
If you have feedback or need assistance with the MCP directory API, please join our Discord server.