
Synapse — GEO Growth Layer

Server Details

Lints and auto-fixes how AI coding agents discover any new product. 24 rules, 6 tools, score 0-100.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: calvinling2021-star/synapse
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: B)

Average 3.3/5 across 6 of 6 tools scored.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose—linting, corpus querying, auto-fixing, prompt generation, status checking, and project registration—with no overlapping functionality.

Naming Consistency: 4/5

All tools use the 'geo_' prefix and snake_case, but the verb/noun pattern varies (some are verbs like 'geo_check', others are nouns like 'geo_status'). This is mostly consistent but not perfectly uniform.

Tool Count: 5/5

With 6 tools, the server covers the essential operations for the GEO growth layer domain without being bloated or sparse.

Completeness: 5/5

The tool set provides a complete workflow: linting, fixing, querying, prompt generation, status monitoring, and initial tracking—no obvious gaps for the stated purpose.

Available Tools

6 tools
geo_check (Grade: A)

Run the 24-rule Synapse GEO linter against a URL or a local project path and return a scored report. Use this BEFORE shipping a new product.

Parameters (JSON Schema)
- target (required): URL or local project path to lint.
- fail_on (optional): Severity threshold that should be treated as a failure.
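As a concrete illustration, an MCP client invokes geo_check through a standard JSON-RPC tools/call request. The sketch below builds such a payload in Python; the target URL and the "error" severity name are illustrative assumptions, since the listing does not document the allowed fail_on values:

```python
import json

# Hypothetical MCP "tools/call" request for geo_check.
# "target" takes a URL or a local project path; "fail_on" is optional.
# The severity name "error" is an assumed value, not documented above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "geo_check",
        "arguments": {
            "target": "https://example.com",
            "fail_on": "error",
        },
    },
}

payload = json.dumps(request)
print(payload)
```

The same envelope (method "tools/call", a tool "name", and an "arguments" object matching the schema) applies to every tool on this server.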
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided. The description states it runs a linter and returns a report, implying non-destructive behavior. However, it does not explicitly confirm read-only status or discuss side effects, which is a minor gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two short, value-added sentences. No redundant or unnecessary information; each sentence contributes to purpose and usage clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with two parameters and no output schema, the description provides adequate context: purpose, input type, and usage. It does not explain the report format, but that may be acceptable without an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with both parameters described. The description adds no additional detail beyond the schema, so it meets the baseline but does not improve understanding of parameter usage or constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool runs a 24-rule Synapse GEO linter against a URL or local path and returns a scored report. It distinguishes itself from siblings like geo_fix (for fixing) and geo_status (for status) by indicating it is a pre-shipment check.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly advises using the tool before shipping a new product. It does not provide explicit alternatives or when-not scenarios, but the context is clear for its intended use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

geo_corpus_query (Grade: B)

Query the public Synapse corpus for products matching a given intent.

Parameters (JSON Schema)
- limit (optional): (no description in schema)
- intent (required): User intent or capability the agent is looking up.
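Because 'limit' is undocumented in the schema, a cautious client might only send it when the caller supplies it explicitly. A minimal sketch of that pattern (the intent string is a made-up example):

```python
import json

def build_corpus_query(intent, limit=None):
    """Build a hypothetical tools/call request for geo_corpus_query.

    'limit' is undocumented in the schema, so it is included only
    when the caller passes it explicitly.
    """
    arguments = {"intent": intent}
    if limit is not None:
        arguments["limit"] = limit
    return {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {"name": "geo_corpus_query", "arguments": arguments},
    }

print(json.dumps(build_corpus_query("invoice OCR for receipts", limit=5)))
```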
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must fully disclose behavior. It only says 'Query', implying read-only, but does not confirm safety, rate limits, or whether modifications occur. This is insufficient for a non-annotated tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with the action and target, no extraneous words. Every part is essential.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple query tool with two parameters, the description covers the core purpose but lacks details on the optional 'limit' parameter, return format, and usage constraints. Adequate but with clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50%: only 'intent' is described. The description adds context for 'intent' (matching given intent) but omits any mention of 'limit', which remains undocumented. Low coverage requires more compensation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it queries the public Synapse corpus for products by intent, specifying the verb and resource. It distinguishes from sibling tools like geo_check or geo_fix which imply different actions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No when-to-use or when-not-to-use guidance is provided. The description does not mention alternatives or exclusions, leaving the agent to infer usage from context alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

geo_fix (Grade: B)

Apply Synapse's auto-fixers to a local project — writes static files (llms.txt, robots.txt, agent-answer.json, etc.). Requires write access to the path.

Parameters (JSON Schema)
- project_path (required): Local project root to apply auto-fixes to.
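Since the description says geo_fix writes files and requires write access, a client can verify that precondition locally before ever invoking the tool. A sketch under that assumption (the request shape is the standard tools/call envelope, not a documented API of this server):

```python
import os

def build_fix_request(project_path):
    """Build a hypothetical tools/call request for geo_fix.

    geo_fix writes static files (llms.txt, robots.txt,
    agent-answer.json, ...), so check write access up front
    rather than discovering the failure mid-call.
    """
    if not os.access(project_path, os.W_OK):
        raise PermissionError(f"no write access to {project_path}")
    return {
        "jsonrpc": "2.0",
        "id": 3,
        "method": "tools/call",
        "params": {
            "name": "geo_fix",
            "arguments": {"project_path": project_path},
        },
    }
```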
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must fully disclose behavioral traits. It states the tool writes static files and requires write access, but fails to mention whether files are overwritten or appended, error handling, idempotency, or any side effects on the project. This is insufficient for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the main action, lists example outputs, and includes the critical prerequisite. Every part is necessary and there is no redundancy. It is highly concise while remaining informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description covers the essential purpose, outputs, and a prerequisite. It lacks any description of return values or success/error indication, but for a write-only tool without nested objects, it is nearly complete. A minor gap in specifying potential side effects beyond file writing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides 100% coverage with a description for project_path. The description adds 'Requires write access to the path,' which gives extra context about the parameter's permissions. Per the guideline, since schema coverage is high, baseline is 3, and the added requirement justifies staying at 3 (not raising further as it's not a semantic detail about the parameter itself).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool applies Synapse's auto-fixers to a local project and writes specific static files (llms.txt, robots.txt, agent-answer.json, etc.). It uses a specific verb ('apply') and resource ('local project'), clearly distinguishing it from sibling tools like geo_check or geo_status which perform different actions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions a prerequisite ('Requires write access to the path') but provides no guidance on when to use this tool versus alternatives, when not to use it, or any contextual usage scenarios. It lacks explicit instructions for selection among sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

geo_prompts (Grade: C)

Generate the prompt pack used to test whether AI engines surface this product.

Parameters (JSON Schema)
- audience (optional): Who is searching for this product.
- category (optional): Product category.
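Both parameters are optional, so an empty arguments object should be a valid call. A sketch of a builder that omits unset fields (the defaults-on-omission behavior is an assumption; the listing does not document it):

```python
def build_prompts_request(audience=None, category=None):
    """Build a hypothetical tools/call request for geo_prompts.

    Both parameters are optional per the schema; unset ones are
    simply left out of "arguments" rather than sent as null.
    """
    arguments = {}
    if audience is not None:
        arguments["audience"] = audience
    if category is not None:
        arguments["category"] = category
    return {
        "jsonrpc": "2.0",
        "id": 4,
        "method": "tools/call",
        "params": {"name": "geo_prompts", "arguments": arguments},
    }
```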
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose behavioral traits such as side effects, permissions, or whether the operation is read-only or destructive. 'Generate' implies creation but lacks details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence with no unnecessary words. It is front-loaded but could benefit from more detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of output schema, annotations, and only two optional parameters, the description is insufficient. It does not explain the return value, usage context, or how the prompt pack is used.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with basic descriptions for both optional parameters. The description adds no extra meaning beyond the schema, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it generates a 'prompt pack' for testing AI engine product surfacing. While the term 'prompt pack' is somewhat vague, it distinguishes from sibling tools like geo_check or geo_corpus_query.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, nor any usage context provided. The description does not indicate prerequisites or scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

geo_status (Grade: A)

Fetch the live Growth Score and agent-mention stats for an installed Synapse site.

Parameters (JSON Schema)
- slug (required): Site slug returned by geo_track_init / deploy.
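Per the parameter description, the slug comes from an earlier geo_track_init (or deploy) call. A minimal sketch of the request; the slug value "my-site" is a placeholder, not a real site:

```python
# Hypothetical tools/call request for geo_status. A real "slug"
# would be the value returned by a prior geo_track_init call.
request = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
        "name": "geo_status",
        "arguments": {"slug": "my-site"},
    },
}
```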
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It states the tool fetches 'live' data, implying a read operation with no side effects. However, it lacks details on authorization, rate limits, or error conditions (e.g., what happens if the site is not installed).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence with no unnecessary words. It is front-loaded with the verb and directly communicates the tool's purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description covers the essential: what data is fetched and for what. It could mention prerequisites, but the reference to 'installed Synapse site' implies context. Overall, quite complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds no extra meaning beyond the schema's parameter description. The tool description does not elaborate on the slug parameter further.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the action ('Fetch'), the data ('live Growth Score and agent-mention stats'), and the target ('installed Synapse site'). It distinctively describes the tool's purpose, differentiating it from siblings like geo_track_init or geo_check.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. It does not mention prerequisites like needing to run geo_track_init first, nor does it explain when to prefer geo_status over geo_check or geo_corpus_query.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

geo_track_init (Grade: C)

Register the project with the Synapse dashboard to begin tracking AI-agent referrals. Returns a slug and tracking ID.

Parameters (JSON Schema)
- url (optional): Public URL of the site, if known.
- name (optional): Site name (defaults to directory name).
- project_path (optional): (no description in schema)
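A sketch of a geo_track_init call and of consuming its result. The request envelope follows the standard tools/call shape; the response field names ("slug", "tracking_id") are guesses based only on the sentence "Returns a slug and tracking ID" and may not match the real payload:

```python
import json

# Hypothetical tools/call request; all three parameters are
# optional, so only known values are sent.
request = {
    "jsonrpc": "2.0",
    "id": 6,
    "method": "tools/call",
    "params": {
        "name": "geo_track_init",
        "arguments": {"url": "https://example.com", "name": "example"},
    },
}

# Assumed response shape -- the field names are illustrative guesses.
response_text = '{"slug": "example", "tracking_id": "trk_0001"}'
result = json.loads(response_text)
slug = result["slug"]  # a later geo_status call would take this slug
```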
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose all behavioral traits. It states that the tool registers and returns a slug and tracking ID, but it does not mention side effects (e.g., whether it creates a persistent record), idempotency, or authentication requirements. This leaves significant gaps for a mutation-like tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that front-loads the action ('Register') and includes the return value. While it is efficient, it could benefit from a slightly more structured format (e.g., listing parameters or separating purpose from behavior).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a registration tool with three optional parameters and no output schema, the description provides the core purpose and return values but lacks usage context (e.g., when to call, idempotency, prerequisites). This leaves the agent with incomplete information for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has three optional parameters with 67% description coverage (url and name described, project_path missing). The description adds no additional meaning beyond the schema; it does not clarify the purpose of project_path or the optionality of parameters. Since coverage is moderate but description contributes nothing, a score of 2 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Register' and identifies the resource as 'project with Synapse dashboard' to begin tracking AI-agent referrals. It also mentions the return values (slug and tracking ID). This clearly differentiates it from sibling tools like geo_check, geo_corpus_query, geo_fix, geo_prompts, and geo_status, which focus on other operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide explicit guidance on when to use this tool versus alternatives. It implies that this is the first step to start tracking, but it does not mention prerequisites, conditions for re-use, or situations where it should not be used (e.g., if already registered).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

