Silicon Friendly

Server Details

A directory that rates websites on AI-agent-friendliness. Search, look up, and submit.

Status: Healthy
Transport: Streamable HTTP
Repository: unlikefraction/silicon-friendly
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.8/5 across 8 of 8 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no ambiguity. For example, check_agent_friendliness provides a quick status check, get_website offers detailed information, submit_website handles new submissions, and verify_website manages verification processes. The tools cover different aspects of directory interaction without overlap.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case. Examples include check_agent_friendliness, get_levels_info, submit_website, and verify_website. This uniformity makes the toolset predictable and easy to understand.

Tool Count: 5/5

With 8 tools, the set is well-scoped for managing a website directory focused on agent-friendliness. It covers core operations like checking, searching, submitting, verifying, and listing websites, plus informational tools, without being overly sparse or bloated.

Completeness: 4/5

The toolset provides comprehensive coverage for the directory's purpose, including CRUD-like operations (submit, get, list, search) and verification workflows. A minor gap is the lack of an update or delete tool for website entries, which might limit maintenance, but agents can work around this.

Available Tools

8 tools
check_agent_friendliness (Grade: A)

Quick check if a website is in the Silicon Friendly directory and its agent-friendliness level.

Args:
    domain: The website domain to check (e.g. "stripe.com", "github.com")

Returns:
    Simple response with domain, in_directory, level (L0-L5), and whether it's verified.
Parameters (JSON Schema):
    domain (required)
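
Below is a minimal sketch of calling this tool with the official MCP Python SDK (the mcp package) over streamable HTTP. The endpoint URL is a placeholder, since the server URL is not shown on this page, and the result handling assumes a text content block, which the listing does not guarantee:

    import asyncio

    from mcp import ClientSession
    from mcp.client.streamable_http import streamablehttp_client

    SERVER_URL = "https://example.com/mcp"  # placeholder; substitute the real endpoint

    async def main() -> None:
        # streamablehttp_client yields read/write streams plus a session-id getter.
        async with streamablehttp_client(SERVER_URL) as (read, write, _):
            async with ClientSession(read, write) as session:
                await session.initialize()
                result = await session.call_tool(
                    "check_agent_friendliness",
                    arguments={"domain": "stripe.com"},
                )
                # Text tools return TextContent blocks; print the first one.
                print(result.content[0].text)

    asyncio.run(main())

The later sketches on this page assume an open session created the same way.
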
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses return structure (domain, in_directory, level, verified) and L0-L5 range, which is valuable given no output schema. However, it omits error behavior for invalid domains and does not clarify if this is a cached or real-time check.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Structured with clear Purpose-Args-Returns sections. Three compact sentences with no redundancy. The docstring-style formatting is slightly unconventional for MCP but efficiently communicates necessary information given the lack of output schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter lookup tool. The description compensates for missing output schema by detailing the exact return fields (in_directory, level L0-L5, verified status), providing sufficient contract information for agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (no description field). The Args section compensates by providing critical format examples ('stripe.com', 'github.com') clarifying that protocols should be omitted, adding semantic value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'check' with specific resource 'Silicon Friendly directory' and distinct output 'agent-friendliness level (L0-L5)'. 'Quick check' implies lightweight lookup distinguishing it from more comprehensive siblings like get_website or search_websites, though explicit differentiation is not stated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'Quick check' implies usage for rapid validation of single domains versus bulk operations, but lacks explicit when/when-not guidance regarding siblings like search_websites or get_website.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_levels_info (Grade: A)

Get info about the 5-level rating system and all 30 criteria.

Returns:
    The level system explanation and all criteria with their descriptions.
Parameters (JSON Schema):
    No parameters
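
Since the tool takes no arguments, the underlying request is minimal. A sketch of the JSON-RPC tools/call payload an MCP client sends for this tool, written as a Python dict and based on the MCP specification:

    # JSON-RPC 2.0 payload for a parameter-less MCP tool call.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "get_levels_info",
            "arguments": {},
        },
    }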

Behavior: 3/5

No annotations provided, so description carries full burden. It discloses return values ('level system explanation and all criteria'), but lacks other behavioral context like caching, rate limits, or how the rating system relates to actual verification operations.

Conciseness: 5/5

Two well-structured sentences with zero waste. Front-loaded with purpose, followed by return value disclosure. Every sentence earns its place.

Completeness: 3/5

Adequate for a parameter-less tool: it substitutes for the missing output schema by describing returns. However, it omits how this rating system integrates with the verification domain implied by sibling tools, leaving contextual gaps.

Parameters: 4/5

The input schema has zero parameters, which warrants the baseline score of 4. The description appropriately does not invent parameter semantics where none exist.

Purpose: 5/5

Description specifies the exact resource (5-level rating system and 30 criteria) with a clear verb 'Get info'. The specific mention of '30 criteria' distinguishes this from website-focused siblings.

Usage Guidelines: 2/5

No guidance provided on when to invoke this tool versus alternatives, or how it relates to the verification workflow suggested by sibling tools like verify_website or submit_website.

get_verify_queue (Grade: A)

Get websites that need verification. Verify them to earn 10 search queries each.

Args:
    auth_token: Your Silicon bearer token for authentication

Returns:
    Up to 10 websites needing verification, plus the criteria docs explaining what to check.
Parameters (JSON Schema):
    auth_token (optional)
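
A short sketch of calling this tool, reusing the session from the earlier example. The token value is a placeholder; note the schema marks auth_token optional even though the docstring asks for one:

    AUTH_TOKEN = "your-silicon-bearer-token"  # placeholder value

    # Assumes an open session from the first sketch.
    result = await session.call_tool(
        "get_verify_queue",
        arguments={"auth_token": AUTH_TOKEN},
    )
    print(result.content[0].text)  # up to 10 websites plus the criteria docs
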
Behavior: 3/5

With no annotations provided, the description must carry the full burden. It successfully discloses the return payload ('Up to 10 websites... plus criteria docs') and authentication requirements ('Silicon bearer token'), compensating for the missing output schema. However, it fails to mention that auth_token has a default empty string (making it technically optional), what happens if the queue is empty, or any rate limits.

Conciseness: 4/5

The description uses a clear two-part structure: action statement followed by Args/Returns documentation. Each sentence earns its place: the first establishes purpose, the second explains value, Args documents the parameter, and Returns explains the output. The Args/Returns labels in a description field are slightly redundant with structured schema fields, but are necessary compensation given the 0% schema coverage.

Completeness: 4/5

Given the lack of output schema and annotations, the description adequately compensates by describing what gets returned (up to 10 items plus documentation) and the authentication mechanism. For a simple single-parameter tool, this covers the essential gaps. It could be improved by mentioning edge cases (empty queue) or pagination, but the core information is present.

Parameters: 4/5

With 0% schema description coverage, the description compensates by documenting the auth_token parameter in the Args section as 'Your Silicon bearer token for authentication.' This adds necessary semantic meaning beyond the schema's type/title. However, it does not clarify why the parameter has a default empty value or when it can be omitted, preventing a perfect score.

Purpose: 4/5

The description clearly states the tool retrieves websites requiring verification using the specific verb 'Get' and resource 'websites that need verification.' It also explains the incentive ('earn 10 search queries each'), which helps distinguish it from list_verified_websites. However, it could more explicitly contrast with verify_website (which submits verifications) to fully earn a 5.

Usage Guidelines: 3/5

The description implies a workflow ('Verify them to earn...') suggesting this tool should be used before verify_website, but lacks explicit guidance on when to use this versus list_verified_websites or search_websites. The phrase 'need verification' provides implicit filtering context, but no explicit 'when not to use' or alternative recommendations are provided.

get_website (Grade: B)

Get details about a specific website in the Silicon Friendly directory.

Args:
    domain: The website domain (e.g. "stripe.com", "github.com")

Returns:
    Website details including name, domain, level, description, verification info, and all 30 criteria.
Parameters (JSON Schema):
    domain (required)
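
A sketch of fetching one entry and reading fields out of it, assuming the session from the first example and assuming the tool returns its details as JSON text; the listing does not specify the serialization, and the field names follow the docstring's return list:

    import json

    result = await session.call_tool(
        "get_website",
        arguments={"domain": "stripe.com"},
    )
    details = json.loads(result.content[0].text)  # assumes JSON-formatted text
    print(details.get("name"), details.get("level"))  # field names per the docstring
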
Behavior: 3/5

Provides detailed return value shape ('30 criteria', 'verification info') compensating for missing output schema. Lacks safety disclosure (read-only vs. mutation) and auth requirements. No annotations provided to carry this burden, so gaps remain.

Conciseness: 4/5

Structured Args/Returns format efficiently conveys missing schema metadata without verbosity. Front-loaded purpose statement. No redundant or wasteful sentences given the complete lack of parameter documentation in the schema.

Completeness: 4/5

Adequate for a single-parameter lookup tool. Describes return payload contents to compensate for absent output schema. Mentions specific 'Silicon Friendly' directory context. Could benefit from noting if unverified sites are included or only verified ones.

Parameters: 4/5

Excellent compensation for 0% schema description coverage. Provides clear semantics ('The website domain') and concrete format examples ('stripe.com', 'github.com') that clarify the expected input format beyond the bare string type in schema.

Purpose: 4/5

Clear verb ('Get') + specific resource ('website') + scoped context ('Silicon Friendly directory'). Implies single-record retrieval, distinguishing it from sibling list_verified_websites and search_websites, though explicit contrast would strengthen it.

Usage Guidelines: 2/5

No guidance on when to use this vs. alternatives like search_websites (when domain is unknown) or list_verified_websites (for browsing). No prerequisites or conditions mentioned despite specific 'Silicon Friendly' domain context.

list_verified_websites (Grade: B)

List all verified websites in the directory, sorted by most recently updated.

Args:
    page: Page number (20 results per page)

Returns:
    List of verified websites with their level and basic info.
Parameters (JSON Schema):
    page (optional)
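
A pagination sketch, assuming the session from the first example; the docstring gives the page size (20) but not a termination signal, so this just fetches the first two pages:

    # Fetch the first two pages of verified sites (20 results per page).
    for page in (1, 2):
        result = await session.call_tool(
            "list_verified_websites",
            arguments={"page": page},
        )
        print(result.content[0].text)
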
Behavior: 3/5

With no annotations provided, the description carries full disclosure burden. It successfully conveys pagination behavior (20 results per page) and sorting order (recently updated), plus hints at the data model ('level' references sibling 'get_levels_info'). However, it lacks information on rate limits, what 'verified' status entails, or error conditions.

Conciseness: 4/5

Uses a clear docstring structure with distinct Args and Returns sections. The main sentence establishes scope immediately. No redundant words, though the Returns section partially duplicates information already implied by the function name.

Completeness: 3/5

For a 1-parameter list tool, it adequately covers pagination mechanics and references the domain model ('level'). However, given the rich sibling ecosystem (submit, verify, search), it should clarify the relationship to the verification workflow and when to use fetching vs. listing.

Parameters: 4/5

Schema has 0% description coverage, requiring the description to compensate. The Args section successfully documents the 'page' parameter as 'Page number (20 results per page)', explaining both its purpose and its relationship to result set sizing. It could note that it defaults to 1, but otherwise fully compensates for the schema gap.

Purpose: 4/5

Clear verb ('List') and resource ('verified websites'), with specific sorting behavior ('most recently updated'). However, it does not explicitly differentiate from sibling 'search_websites' or 'get_website' regarding when to browse vs. search vs. fetch a single item.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives like 'search_websites' or 'get_website'. No mention of prerequisites or conditions that would necessitate paging through results versus filtering.

search_websites (Grade: A)

Search the Silicon Friendly directory for AI-agent-friendly websites.

Args:
    query: Search terms to find websites (e.g. "payment processing", "email API")
    search_type: Type of search - 'semantic' (AI-powered, better results) or 'keyword' (exact token match). Default: 'semantic'

Returns:
    List of matching websites with name, domain, level, and similarity/relevance score.
Parameters (JSON Schema):
    query (required)
    search_type (optional, default: semantic)
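
A sketch comparing the two search modes, assuming the session from the first example:

    # 'semantic' is the default; 'keyword' does exact token matching.
    for mode in ("semantic", "keyword"):
        result = await session.call_tool(
            "search_websites",
            arguments={"query": "payment processing", "search_type": mode},
        )
        print(mode, "->", result.content[0].text)
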
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It discloses the behavioral difference between search types ('AI-powered' vs 'exact token match') and return structure, but omits information about permissions, rate limits, or error handling.

Conciseness: 4/5

Uses a clear docstring structure (Args/Returns) appropriate for zero-schema-coverage scenarios. Information is front-loaded and dense, though the Returns section could be more concise.

Completeness: 4/5

For a simple search tool with 2 parameters and no output schema, the description is sufficiently complete: it covers the resource, all parameters with examples, and expected return format including specific fields (name, domain, level, score).

Parameters: 5/5

With 0% schema description coverage, the description fully compensates by documenting both parameters in the Args section: query includes usage examples ('payment processing', 'email API'), and search_type includes enum values, behavioral distinctions, and default value.

Purpose: 5/5

The description provides a specific verb (Search), specific resource (Silicon Friendly directory), and specific scope (AI-agent-friendly websites), clearly distinguishing it from sibling tools like get_website (retrieve specific) and list_verified_websites (browse all).

Usage Guidelines: 3/5

The description implies usage through the specific resource scope and explains internal parameter choices (semantic vs keyword search), but lacks explicit guidance on when to choose this over list_verified_websites or check_agent_friendliness.

submit_website (Grade: A)

Submit a new website to the Silicon Friendly directory.

Requires authentication. Pass your silicon auth_token.

Args:
    url: The website URL (e.g. "https://stripe.com")
    name: Display name for the website (e.g. "Stripe")
    description: What the site does and why it's useful for agents
    auth_token: Your Silicon bearer token for authentication

Returns:
    The created website entry, or an error if it already exists.
Parameters (JSON Schema):
    url (required)
    name (required)
    auth_token (optional)
    description (optional)
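
A submission sketch, assuming the session from the first example; the site, name, and token values are placeholders, and the duplicate check relies on the isError flag that the MCP spec defines on tool results:

    result = await session.call_tool(
        "submit_website",
        arguments={
            "url": "https://example.dev",  # placeholder site
            "name": "Example Dev",  # placeholder name
            "description": "Docs site with clean semantic HTML for agents.",
            "auth_token": AUTH_TOKEN,  # placeholder token from the earlier sketch
        },
    )
    if result.isError:  # per the MCP spec, failed calls set isError
        print("Submission failed; the site may already be in the directory.")
    else:
        print(result.content[0].text)
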
Behavior: 4/5

No annotations provided, so description carries full burden. Discloses auth requirement, bearer token format, return value structure ('created website entry'), and idempotency constraint (errors on duplicates). Does not mention rate limits or side effects.

Conciseness: 4/5

Well-structured with clear Purpose/Args/Returns sections. Front-loaded with the core action. Minor redundancy between 'Requires authentication' statement and auth_token argument documentation, but remains concise.

Completeness: 5/5

Excellent completeness given 0% schema coverage and lack of annotations/output schema. Covers purpose, auth, all parameters with examples, return values, and error conditions.

Parameters: 5/5

With 0% schema description coverage, the Args section fully compensates by documenting all 4 parameters with concrete examples (e.g., 'https://stripe.com') and semantic meaning (e.g., 'Display name', 'bearer token').

Purpose: 5/5

States specific verb 'Submit', resource 'website', and destination 'Silicon Friendly directory'. Clearly distinguishes from sibling tools like search_websites, verify_website, and get_website by emphasizing 'new' submission.

Usage Guidelines: 4/5

Explicitly notes authentication requirement and identifies the error condition 'if it already exists'. Lacks explicit workflow guidance comparing to verify_website, but contextually implies this is for initial submission versus verification.

verify_website (Grade: A)

Submit a verification for a website - evaluate it against all 30 criteria.

Earns you 10 search queries for each new verification.

Args:
    domain: The website domain to verify (e.g. "stripe.com")
    criteria: Dict of all 30 boolean criteria fields. See get_verify_queue for field names.
        Example: {"l1_semantic_html": true, "l1_meta_tags": true, ...}
    auth_token: Your Silicon bearer token for authentication

Returns:
    Verification result including whether it was new and queries awarded.
Parameters (JSON Schema):
    domain (required)
    criteria (required)
    auth_token (optional)
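
A sketch of the verification workflow the docstring implies, assuming the session and token from the earlier examples: pull the queue for the criteria field names, evaluate a site, then submit the boolean criteria dict. Only the two criteria keys shown in the docstring are used here; the remaining 28 come from the queue response:

    # Step 1: the queue response includes the criteria docs and field names.
    queue = await session.call_tool(
        "get_verify_queue",
        arguments={"auth_token": AUTH_TOKEN},
    )
    # (inspect queue.content[0].text for the full field list)

    # Step 2: build the 30-field boolean dict (truncated to the documented keys).
    criteria = {"l1_semantic_html": True, "l1_meta_tags": True}  # plus the other 28

    # Step 3: submit the verification.
    result = await session.call_tool(
        "verify_website",
        arguments={
            "domain": "stripe.com",
            "criteria": criteria,
            "auth_token": AUTH_TOKEN,
        },
    )
    print(result.content[0].text)  # reports whether it was new and queries awarded
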
Behavior: 4/5

With no annotations provided, description carries full burden. Discloses side effects (earns 10 queries per new verification) and output behavior ('whether it was new and queries awarded'). Lacks idempotency or error handling details but covers core behavioral expectations.

Conciseness: 4/5

Structured Args/Returns format is appropriate given the schema deficiency. The first sentence front-loads purpose; the second, covering the reward, earns its place. The brief return description suffices given the lack of a formal output schema.

Completeness: 4/5

Comprehensive for complexity: covers 3 parameters (including nested criteria object), references external schema location, explains return payload structure despite no output schema, and documents side effects.

Parameters: 4/5

Compensates effectively for 0% schema coverage. Domain includes example ('stripe.com'), criteria specifies type (Dict of 30 boolean fields) with partial example and cross-reference to sibling tool for full schema, auth_token identifies auth mechanism.

Purpose: 5/5

Specific verb-resource pair ('Submit a verification for a website') and scope ('evaluate it against all 30 criteria'). Clearly distinguishes from sibling get_website (retrieval) and suggests distinct purpose from submit_website through verification-specific language.

Usage Guidelines: 3/5

References sibling get_verify_queue for criteria field names, providing explicit navigation. Explains reward side effect ('Earns you 10 search queries'). However, lacks explicit guidance on when to use verify_website vs submit_website or other siblings.
