Glama

Server Details

Live browser debugging for AI assistants — DOM, console, network via MCP.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: scottconfusedgorilla/sncro-relay
GitHub Stars: 0
Server Listing: sncro

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.4/5 across 9 of 9 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap. The session management tools (check_session, create_session, end_session) handle connection lifecycle, the diagnostic tools (get_console_logs, get_network_log, get_page_snapshot) provide different types of browser data, the DOM tools (query_all, query_element) offer element querying at different granularities, and report_issue stands alone for feedback. An agent can easily distinguish when to use each tool.

Naming Consistency: 5/5

All tools follow a consistent verb_noun naming pattern with snake_case throughout. Session tools use verbs like check, create, end; diagnostic tools use get_*; query tools use query_*; and report_issue follows the same convention. There are no deviations in style or structure across the nine tools.

Tool Count: 5/5

Nine tools is well-scoped for a browser debugging assistant. The set covers session lifecycle (3 tools), browser diagnostics (3 tools), DOM inspection (2 tools), and feedback (1 tool). Each tool earns its place with clear utility, and there are no redundant or missing core operations for the domain.

Completeness: 5/5

The tool surface provides complete coverage for browser debugging and session management. It includes session creation, status checking, and termination; multiple diagnostic data sources (console, network, page snapshot); DOM querying at both summary and detailed levels; and a feedback mechanism. There are no obvious gaps—agents can perform full debugging workflows without dead ends.

Available Tools

9 tools
check_session: A

Check the connection status of a sncro session.

Call this after create_session to confirm the browser has connected before using other tools. If status is "waiting", the user hasn't enabled sncro yet — remind them to click/paste the enable URL, wait a few seconds, and call check_session again.

Returns:

  • status: "not_found" | "waiting" | "connected"

  • session_age_seconds: how long since the session was created

  • next_step: what to do based on current status

Parameters (JSON Schema)

  • key (required)
  • secret (required)
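The create-then-poll workflow the description lays out can be wrapped in a small helper. The sketch below is hypothetical glue, not part of the server: `call_tool(name, args)` stands in for whatever MCP client invocation your agent framework provides, and only the status values ("not_found", "waiting", "connected") come from the tool description.

```python
import time

def wait_for_connection(call_tool, key, secret, timeout=60, interval=3):
    """Poll check_session until the browser reports "connected".

    call_tool is a placeholder for an MCP client call; it is assumed
    to return the tool result as a dict with a "status" field.
    """
    deadline = time.time() + timeout
    while True:
        result = call_tool("check_session", {"key": key, "secret": secret})
        status = result["status"]
        if status == "connected":
            return result
        if status == "not_found":
            raise RuntimeError("No such session; call create_session first.")
        # status == "waiting": the user has not opened the enable URL yet.
        if time.time() + interval > deadline:
            raise TimeoutError("Browser did not connect before the timeout.")
        time.sleep(interval)
```

The 3-second default interval mirrors the description's advice to wait a few seconds between checks.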
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it returns status information with specific possible values, session age, and next-step guidance. However, it doesn't mention error handling, rate limits, or authentication requirements beyond the key/secret parameters, leaving some behavioral aspects uncovered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured and front-loaded: the first sentence states the core purpose, followed by usage instructions, conditional logic, and return value details. Every sentence adds value without redundancy, and the information is logically organized for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 2 parameters, no annotations, and no output schema, the description does a good job explaining the workflow context, return values, and next steps. However, it doesn't fully compensate for the lack of parameter documentation or provide details about error cases or authentication requirements, leaving some gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, but the tool description provides no additional information about what the 'key' and 'secret' parameters represent or how they should be obtained. While the description compensates somewhat by explaining the overall purpose and workflow, it fails to add meaningful semantic context for the parameters themselves, resulting in a baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verb ('Check') and resource ('connection status of a sncro session'), distinguishing it from siblings like create_session (creates) and end_session (terminates). It explicitly explains this is for verifying browser connectivity after session creation, making the purpose unambiguous and distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Call this after create_session to confirm the browser has connected before using other tools') and what to do in specific scenarios (if status is 'waiting', remind user to enable sncro and call again). It clearly positions this as a verification step in a workflow, offering practical alternatives and exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_session: A

Create a new sncro session. Returns a session key and secret.

Args:

  • project_key: The project key from CLAUDE.md (registered at sncro.net)

  • git_user: The current git username (for guest access control). If omitted or empty, the call is treated as a guest session — allowed only when the project owner has "Allow guest access" enabled.

  • brief: If True, skip the first-run briefing (tool list, tips, mobile notes) and return a compact response. Pass this on the second and subsequent create_session calls in the same conversation, once you already know how to use the tools.

After calling this, tell the user to paste the enable_url in their browser. Then use the returned session_key and session_secret with all other sncro tools.

If no project key is available: tell the user to go to https://www.sncro.net/projects to register their project and get a key. It takes 30 seconds — sign in with GitHub, click "+ Add project", enter the domain, and copy the project key into CLAUDE.md.

Parameters (JSON Schema)

  • brief (optional)
  • git_user (optional)
  • project_key (required)
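The create-then-enable handoff described above can be sketched as a thin wrapper. This is a hypothetical helper: `call_tool(name, args)` stands in for an MCP client call, and the response field names (enable_url, session_key, session_secret) follow the tool description — the server publishes no output schema, so treat them as assumptions.

```python
def start_session(call_tool, project_key, git_user=None, brief=False):
    """Create a sncro session and return its credentials.

    call_tool is a placeholder for an MCP client call; the response
    field names below are assumed from the tool description.
    """
    args = {"project_key": project_key}
    if git_user:
        args["git_user"] = git_user  # omit for a guest session
    if brief:
        args["brief"] = True  # skip the first-run briefing on repeat calls
    result = call_tool("create_session", args)
    # The user must open this URL before any other sncro tool will work.
    print("Enable sncro by opening:", result["enable_url"])
    return result["session_key"], result["session_secret"]
```

The returned key/secret pair is then threaded through every other sncro tool call.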
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it creates a session, returns credentials, requires project registration, includes guest access control logic, and provides user instructions for browser activation. It doesn't mention rate limits or error handling, but covers core operational behavior well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, args explanation, usage instructions), but could be more concise. Some sentences could be tightened (e.g., the project registration instructions are quite detailed). However, most content earns its place by providing essential context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a session creation tool with no annotations and no output schema, the description provides comprehensive context: purpose, all parameter semantics, behavioral details (guest access logic, briefing behavior), prerequisites (project registration), and post-call instructions. It adequately compensates for the lack of structured metadata.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by explaining all three parameters in detail: project_key (required, from CLAUDE.md), git_user (optional, for guest access control), and brief (boolean, skips first-run briefing). It provides practical usage context beyond basic schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Create a new sncro session') and resource ('session'), distinguishing it from sibling tools like check_session or end_session. It explicitly mentions what it returns ('session key and secret'), making the purpose unambiguous and distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool (first step for session creation, with project key), when not to use it (if no project key available), and alternatives (directing users to register at sncro.net). It also specifies usage context for the 'brief' parameter (second and subsequent calls).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

end_session: A

Explicitly close a sncro session — "Finished With Engines".

Call this when you are done debugging and will not need the sncro tools again in this conversation. After this returns, all sncro tool calls on this key will refuse with a SESSION_CLOSED message — that is your signal to stop trying to use them and not apologise about it.

Use it when:

  • The original problem is solved and the conversation has moved on

  • The user explicitly says "we're done with sncro for now"

  • You're entering a long stretch of work that won't need browser visibility

The session can't be reopened. If you need browser visibility later, ask the user whether to start a new one with create_session.

Parameters (JSON Schema)

  • key (required)
  • secret (required)
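The SESSION_CLOSED contract above can be enforced client-side so an agent stops retrying once the session is over. A minimal sketch, assuming a placeholder `call_tool(name, args)` MCP client call; the guard logic mirrors the description's contract, but the wrapper itself is hypothetical.

```python
class SncroSession:
    """Minimal state wrapper that refuses calls after end_session.

    call_tool is a placeholder for an MCP client invocation.
    """
    def __init__(self, call_tool, key, secret):
        self._call = call_tool
        self.key, self.secret = key, secret
        self.closed = False

    def call(self, tool, **extra):
        if self.closed:
            # Mirrors the server's SESSION_CLOSED refusal, locally.
            raise RuntimeError("Session closed; start a new one with create_session.")
        return self._call(tool, {"key": self.key, "secret": self.secret, **extra})

    def end(self):
        result = self._call("end_session", {"key": self.key, "secret": self.secret})
        self.closed = True  # irreversible: the session cannot be reopened
        return result
```

After `end()` returns, any further `call()` fails fast locally instead of round-tripping to a server that will refuse anyway.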
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and excels by disclosing critical behavioral traits: it explains the irreversible effect (session can't be reopened), the post-call consequence (all sncro tool calls will refuse with SESSION_CLOSED), and the expected agent response (stop trying and not apologise). This goes beyond basic functionality to cover outcomes and agent behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core action, followed by usage guidelines in a structured bulleted list, and ends with important caveats. Every sentence adds value—none are redundant—making it efficient and well-organized for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (irreversible session closure) and lack of annotations or output schema, the description is complete: it covers purpose, usage scenarios, behavioral effects, and limitations. It provides all necessary context for an agent to invoke it correctly without needing additional structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, and the description does not mention or explain the parameters (key and secret). However, with only 2 parameters and no output schema, the score holds at 4 because the description compensates by fully explaining the tool's purpose and usage, though it misses an opportunity to clarify parameter roles.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('explicitly close a sncro session') with the distinctive phrase 'Finished With Engines', and it differentiates from sibling tools by explaining this is the termination operation versus create_session for starting. The purpose is unambiguous and distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use scenarios in a bulleted list (e.g., problem solved, user says done, long stretch without browser visibility) and when-not-to-use by stating the session can't be reopened and to use create_session if needed later. This offers clear guidance on alternatives and exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_console_logs: A

Get recent console logs and errors from the browser.

Returns the latest console output and any JavaScript errors, including unhandled exceptions and promise rejections.

This reads from baseline data that the browser pushes every 5 seconds, so it works even if the browser tab is in the background. If you get a "no data" error, the browser hasn't connected yet — call check_session to diagnose, then retry.

Parameters (JSON Schema)

  • key (required)
  • secret (required)
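The "no data → check_session → retry" flow described above can be sketched as a retry helper. Everything here is an assumption except the workflow itself: `call_tool(name, args)` is a placeholder MCP client call, and the `"error": "no data"` shape is a guess, since the server publishes no error schema.

```python
import time

def console_logs_with_retry(call_tool, key, secret, retries=3, delay=5):
    """Fetch console logs, retrying while the browser is still connecting.

    The "no data" detection is an assumed error shape; the 5-second
    default delay matches the baseline push interval in the description.
    """
    for _ in range(retries):
        result = call_tool("get_console_logs", {"key": key, "secret": secret})
        if result.get("error") != "no data":
            return result
        # Diagnose why there is no data yet, then give the push a cycle.
        call_tool("check_session", {"key": key, "secret": secret})
        time.sleep(delay)
    raise RuntimeError("Browser never pushed console data.")
```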
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and adds valuable behavioral context: it explains the data source ('baseline data that the browser pushes every 5 seconds'), works in background tabs, and includes error handling advice. However, it doesn't mention rate limits or authentication details beyond the parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by details about returns, data behavior, and error handling. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, 0% schema coverage, and no output schema, the description does well on purpose and usage but leaves significant gaps: parameters are undocumented, and while it mentions returns, it doesn't detail the output structure. It's adequate for basic use but incomplete for full tool understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% with 2 parameters (key, secret), and the description provides no information about what these parameters represent, their format, or their purpose. The description fails to compensate for the complete lack of schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get recent console logs and errors') and resources ('from the browser'), distinguishing it from siblings like get_network_log or get_page_snapshot by focusing on console output and JavaScript errors.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance is provided on when to use alternatives: if encountering a 'no data' error, the description directs users to 'call check_session to diagnose, then retry,' naming a specific sibling tool and providing a clear workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_network_log: A

Get network performance data from the browser.

Returns resource timing entries (URLs, durations, sizes) sorted by duration (slowest first), plus page navigation timing.

Use this to find slow API calls, large assets, or overall page load performance.

Requires a connected browser session. If you get BROWSER_NOT_CONNECTED, call check_session first and wait for "connected" status.

Args:

  • key: The sncro session key

  • secret: The session secret from create_session

  • limit: Max resources to return (default 50)

  • type: Filter by initiator type (e.g. "fetch", "xmlhttprequest", "img", "script", "css")

Parameters (JSON Schema)

  • key (required)
  • type (optional)
  • limit (optional)
  • secret (required)
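The limit and type filters lend themselves to a "find slow API calls" helper. Only the request parameters come from the tool description; the response field names used here ("resources", "url", "duration") are guesses, since the tool has no published output schema, and `call_tool` is a placeholder MCP client call.

```python
def slow_fetches(call_tool, key, secret, threshold_ms=500):
    """Return URLs of fetch() requests slower than threshold_ms.

    Response shape ("resources" list with "url"/"duration") is an
    assumption; entries arrive sorted slowest-first per the description.
    """
    result = call_tool("get_network_log", {
        "key": key, "secret": secret,
        "limit": 50,       # default per the description
        "type": "fetch",   # only fetch()-initiated requests
    })
    return [r["url"] for r in result["resources"]
            if r["duration"] > threshold_ms]
```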
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the sorting order ('sorted by duration (slowest first)'), authentication requirements ('Requires a connected browser session'), error handling ('If you get BROWSER_NOT_CONNECTED...'), and default values ('limit: Max resources to return (default 50)'). It doesn't mention rate limits or pagination behavior, but covers most essential operational aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with purpose first, then usage context, prerequisites, and parameter details. Every sentence earns its place by providing essential information without redundancy. The parameter explanations are clear and directly relevant to tool invocation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 4 parameters, 0% schema coverage, no annotations, and no output schema, the description does an excellent job covering most essential aspects: purpose, usage, authentication, error handling, and parameter semantics. The main gap is the lack of output format details beyond mentioning what data is returned, but given the complexity level, this is reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing clear semantic explanations for all 4 parameters: 'key' and 'secret' as authentication credentials, 'limit' with its default value and purpose, and 'type' with examples of valid values. The description adds substantial value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get network performance data'), resource ('from the browser'), and output format ('resource timing entries... plus page navigation timing'). It distinguishes from siblings like get_console_logs and get_page_snapshot by focusing on network performance data rather than console output or visual snapshots.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('to find slow API calls, large assets, or overall page load performance') and includes a prerequisite about requiring a connected browser session. It doesn't explicitly mention when NOT to use it or name specific alternatives among siblings, though the use case naturally differentiates from other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_page_snapshot: A

Get a high-level snapshot of the current page.

Returns URL, title, viewport dimensions, scroll position, top-level DOM structure, recent console logs, and recent errors.

Requires a connected browser session. If you get BROWSER_NOT_CONNECTED, call check_session first and wait for "connected" status.

Parameters (JSON Schema)

  • key (required)
  • secret (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing prerequisites (connected browser session), error handling (BROWSER_NOT_CONNECTED), and the comprehensive nature of returned data. It doesn't mention rate limits or performance characteristics, but provides substantial behavioral context beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured and concise: first sentence states purpose, second enumerates returned data, third specifies prerequisites, fourth provides error handling guidance. Every sentence earns its place with zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 2 undocumented parameters, no annotations, and no output schema, the description does well on purpose, usage, and behavioral aspects but completely fails to address parameter semantics. The comprehensive return data description partially compensates for lack of output schema, but the parameter gap is significant.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% with 2 required parameters (key, secret), and the description provides absolutely no information about what these parameters mean, their purpose, or how they should be used. The description fails to compensate for the complete lack of schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get a high-level snapshot') and resources ('current page'), distinguishing it from siblings like get_console_logs or get_network_log by specifying it returns a comprehensive set of page data including URL, title, DOM structure, and logs/errors.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance is provided on when to use this tool: 'Requires a connected browser session' and what to do if encountering BROWSER_NOT_CONNECTED ('call check_session first and wait for "connected" status'). This clearly distinguishes it from session management tools like create_session or end_session.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_all: A

Query all matching DOM elements by CSS selector.

Returns a summary of each matching element (tag, id, class, bounding rect, inner text). Useful for checking lists, grids, or multiple instances of a component.

Requires a connected browser session. If you get BROWSER_NOT_CONNECTED, call check_session first and wait for "connected" status.

Args:

  • key: The sncro session key

  • secret: The session secret from create_session

  • selector: CSS selector

  • limit: Max elements to return (default 20)

Parameters (JSON Schema)

  • key (required)
  • limit (optional)
  • secret (required)
  • selector (required)
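The "checking lists, grids, or multiple instances" use case can be sketched as a visibility count. The request parameters follow the Args list above; the response shape ("elements" entries with a "rect" of width/height) is an assumption, as is the placeholder `call_tool` MCP client call.

```python
def count_visible(call_tool, key, secret, selector, limit=20):
    """Count matched elements with a nonzero bounding rect.

    Response shape ("elements", "rect") is assumed; only the request
    parameters come from the tool description.
    """
    result = call_tool("query_all", {
        "key": key, "secret": secret,
        "selector": selector, "limit": limit,
    })
    return sum(1 for el in result["elements"]
               if el["rect"]["width"] > 0 and el["rect"]["height"] > 0)
```

Comparing this count against the expected number of list items is a quick way to spot collapsed or hidden components.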
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses important behavioral traits: requires a connected browser session, mentions error handling (BROWSER_NOT_CONNECTED), and describes the return format. However, it doesn't cover potential side effects, performance characteristics, or authentication details beyond the key/secret parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with purpose first, usage context, prerequisites, and parameter details. Each sentence earns its place, though the parameter section could be slightly more concise. Overall efficient with good front-loading of key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters with 0% schema coverage and no output schema, the description does a reasonable job but has gaps. It explains the return format but not in detail, doesn't cover error cases beyond BROWSER_NOT_CONNECTED, and lacks information about pagination or what happens when limit is exceeded. For a tool with no annotations and no output schema, more completeness would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate. It provides meaningful context for all 4 parameters: explains key/secret are for session authentication, selector is CSS selector, and limit has a default of 20. This adds substantial value beyond the bare schema, though it could elaborate on selector syntax or key/secret format.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Query') and resource ('DOM elements by CSS selector'), and distinguishes it from siblings by mentioning it returns a summary of each matching element. It explicitly differentiates from query_element by handling multiple elements.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('checking lists, grids, or multiple instances of a component') and mentions a prerequisite (connected browser session). However, it doesn't explicitly state when NOT to use it or name specific alternatives like query_element for single elements.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_element: A

Query a DOM element by CSS selector.

Returns bounding rect, attributes, computed styles, inner text, and child count. Use this to debug layout, positioning, and visibility issues.

Requires a connected browser session. If you get BROWSER_NOT_CONNECTED, call check_session first and wait for "connected" status. If you get BROWSER_TIMEOUT, the page may be navigating — wait a moment and retry.

Args:

  • key: The sncro session key

  • secret: The session secret from create_session

  • selector: CSS selector (e.g. "#photo-wrap", ".toolbar > button:first-child")

  • styles: Optional list of CSS properties to read (e.g. ["transform", "width", "display"])

Parameters (JSON Schema):

  • key: required

  • secret: required

  • styles: optional

  • selector: required
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes prerequisites (connected browser session), error handling scenarios (BROWSER_NOT_CONNECTED, BROWSER_TIMEOUT), and what information is returned (bounding rect, attributes, computed styles, inner text, child count). It doesn't mention rate limits, authentication details beyond parameters, or whether the operation is read-only, but provides substantial behavioral context for a query tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with zero wasted sentences. It begins with the core purpose, immediately states what's returned, provides usage context, gives error handling guidance, and concludes with parameter explanations. Each sentence serves a distinct purpose and contributes to understanding the tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 4 parameters, no annotations, and no output schema, the description provides substantial context: purpose, return values, usage scenarios, error handling, and parameter semantics. It doesn't explicitly describe the output format or structure, which would be helpful given the lack of output schema, but covers most essential aspects for a query operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate for the schema's lack of parameter documentation. It successfully explains all 4 parameters: key and secret are identified as session credentials, selector is explained with CSS selector examples, and styles is described as an optional list of CSS properties with examples. The description adds meaningful context beyond the bare parameter names in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Query') and resource ('DOM element by CSS selector'), distinguishing it from siblings like query_all (which likely queries multiple elements) or get_page_snapshot (which captures visual state). It explicitly lists what information is returned, making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('to debug layout, positioning, and visibility issues') and when not to use it (by specifying prerequisites like requiring a connected browser session). It also names alternative actions for error conditions (call check_session for BROWSER_NOT_CONNECTED, wait and retry for BROWSER_TIMEOUT), though it doesn't explicitly compare to sibling tools beyond error handling.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

report_issue (A)

Report an issue, feature request, or success story for sncro.

IMPORTANT: ALWAYS ask the user before submitting ANY feedback. Show them exactly what you plan to send and get explicit approval. Never submit feedback without the user's knowledge and consent.

For ALL categories:

  • Draft the text and show it to the user BEFORE submitting

  • Wait for explicit approval — do NOT submit until they confirm

  • Keep descriptions GENERAL — no proprietary code, no internal project names, no sensitive data

For SUCCESS STORIES (category: success_story):

  • These WILL be displayed publicly on sncro.net

  • Ask: "Mind if I share that as a sncro success story?"

  • Focus on what sncro did, not what the project is

Args:

  • project_key: The project key from CLAUDE.md

  • category: One of: bug, feature_request, usability, documentation, success_story

  • description: Clear description of the issue, suggestion, or success story

  • git_user: Your git username

Parameters (JSON Schema):

  • category: required

  • git_user: optional

  • description: required

  • project_key: required
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure and does so effectively. It clearly describes the tool's interactive nature (requires user approval before submission), privacy constraints ('Keep descriptions GENERAL — no proprietary code'), and public disclosure implications for success stories ('These WILL be displayed publicly on sncro.net'). The only minor gap is it doesn't mention rate limits or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and well-structured. It begins with the core purpose, immediately follows with critical usage warnings in ALL CAPS for emphasis, then provides category-specific guidance, and finally lists parameters. While slightly longer than minimal, every sentence adds necessary value for proper tool usage. The structure helps users quickly grasp the most important constraints first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (interactive user approval requirement, privacy constraints, public disclosure implications) and the absence of both annotations and output schema, the description provides substantial contextual information. It covers behavioral expectations, usage constraints, and parameter guidance reasonably well. The main gap is the lack of information about what happens after submission (confirmation, error handling, or response format).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for undocumented parameters. It provides semantic context for 'category' (listing valid values and special handling for 'success_story') and 'description' ('Clear description of the issue, suggestion, or success story'), but offers only terse, one-line guidance for 'project_key' and 'git_user'. With 4 parameters total and partial coverage in the description, this meets the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Report an issue, feature request, or success story for sncro.' This specifies the verb ('report') and resource ('issue, feature request, or success story'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'query_all' or 'get_console_logs', which are diagnostic tools rather than feedback submission tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit and comprehensive usage guidelines. It specifies when to use the tool (for reporting feedback to sncro), includes critical exclusions ('Never submit feedback without the user's knowledge and consent'), and offers detailed procedural steps ('Draft the text and show it to the user BEFORE submitting', 'Wait for explicit approval'). It also provides category-specific guidance for 'success_story' submissions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
