
Server Details

MCP server for browsing and searching items on 1stDibs marketplace.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

4 tools
browse_items: Browse 1stDibs Items (Grade: B)

Browse 1stDibs with the given category and optional page number and filters. Filters are in the format of a query string like category=furniture&location=usa-north-america. Read the taxonomy to discover facets and filters.

Parameters (JSON Schema)

Name      Required  Description                     Default
page      No        Page number to browse
filters   No        Filters to apply to the browse
category  Yes       Category to browse
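The filter format the description names is a URL query string such as category=furniture&location=usa-north-america. As a minimal sketch (not part of the server's own tooling), such a string can be built with standard URL encoding; the facet names used here are taken from the description's example, and real facet names should come from read_taxonomy:

```python
from urllib.parse import urlencode

# Build the filters string browse_items expects. The facet names here
# (category, location) come from the description's own example.
filters = urlencode({
    "category": "furniture",
    "location": "usa-north-america",
})
# filters == "category=furniture&location=usa-north-america"
```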
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions reading the taxonomy to discover facets and filters, which adds some context, but fails to describe critical behaviors such as pagination limits, rate limits, authentication needs, or what the output looks like (e.g., list of items with metadata). For a browse tool with no annotation coverage, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, with the core purpose stated first. However, the second sentence about filter format and the third about taxonomy reading could be better integrated, and there is minor redundancy ('category' is mentioned twice). Overall, it is efficient but not perfectly structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (browsing with filters and pagination), lack of annotations, and no output schema, the description is incomplete. It doesn't explain return values, error handling, or behavioral constraints like rate limits. The mention of taxonomy reading helps but doesn't compensate for the overall gaps in context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters (category, page, filters). The description adds minimal value beyond the schema by noting filters are 'in the format of a query string' and referencing the taxonomy for facets, but it doesn't provide examples or clarify semantics like valid category values or filter syntax details. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Browse 1stDibs with the given category and optional page number and filters.' It specifies the verb ('browse'), resource ('1stDibs items'), and scope (category-based browsing with filters). However, it doesn't explicitly differentiate from sibling tools like 'search_items' or 'item_details' beyond mentioning the taxonomy reading for filters.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by mentioning 'optional page number and filters' and referencing the taxonomy for facets, but it lacks explicit guidance on when to use this tool versus alternatives like 'search_items' or 'item_details'. No exclusions or prerequisites are stated, leaving usage context somewhat vague.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

item_details: Get 1stDibs Item Details (Grade: B)

Get details on a specific item on 1stDibs. You'll need to input the item ID. You can search for an item or browse for an item to find an item ID.

Parameters (JSON Schema)

Name    Required  Description                 Default
itemId  Yes       Item ID to get details for
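The two-step flow the description implies (find an item ID via search or browse, then fetch details) can be sketched as follows. This is a hedged illustration: `call_tool` stands in for however your MCP client dispatches tool calls, and the `items`/`itemId` response keys are assumptions, since the server publishes no output schema:

```python
# Hedged sketch: search for an item, then fetch its details by ID.
# `call_tool(name, arguments)` is a placeholder for an MCP client call;
# the response shape ("items", "itemId") is assumed, not documented.
def first_item_details(call_tool, query: str) -> dict:
    results = call_tool("search_items", {"query": query})
    items = results.get("items", [])
    if not items:
        return {}
    return call_tool("item_details", {"itemId": items[0]["itemId"]})
```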
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions needing an item ID but doesn't disclose behavioral traits such as error handling (e.g., what happens if the ID is invalid), rate limits, authentication requirements, or the format of returned details. The description is minimal and lacks critical operational context for a tool that retrieves data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, with two sentences: one states the purpose and required input, the other provides usage context. It is front-loaded with the core functionality. The second sentence could be more tightly integrated, and 'item' is repeated several times, but overall it is efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (single parameter, no output schema, no annotations), the description is incomplete. It doesn't explain what details are returned (e.g., price, description, availability), potential errors, or dependencies. Without annotations or output schema, the description should compensate by providing more context, but it leaves significant gaps for effective tool invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'itemId' documented as 'Item ID to get details for'. The description adds that 'You'll need to input the item ID,' which reinforces the requirement but doesn't provide additional meaning beyond the schema (e.g., examples of IDs, where to find them). With high schema coverage, the baseline is 3, and the description doesn't significantly enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get details on a specific item on 1stDibs' with the required input 'item ID'. It specifies the verb 'Get details' and resource 'specific item', distinguishing it from siblings like 'browse_items' or 'search_items' which are for finding items rather than retrieving details. However, it doesn't explicitly contrast with 'read_taxonomy', which might also retrieve details but for a different resource type.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by stating 'You can search for an item or browse for an item to find an item ID,' suggesting this tool is used after obtaining an ID from other tools. However, it doesn't explicitly state when to use this tool versus alternatives like 'read_taxonomy' or provide clear exclusions (e.g., not for bulk details). The guidance is helpful but not comprehensive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

read_taxonomy: Read 1stDibs Taxonomy (Grade: C)

Discover how items on 1stDibs are categorized. We have a taxonomy with furniture, art, jewelry, and fashion categories at the top level. This will return a list of next level categories as well as facets with their appropriate filters. You need to specify a category.

Parameters (JSON Schema)

Name      Required  Description                    Default
category  Yes       Category to read taxonomy for
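Since the description says the tool returns next-level categories plus facets with their filters, one plausible use is validating filter selections against those facets before calling browse_items or search_items. This is a hedged sketch: the `facets`/`name` response shape is an assumption, because the tool publishes no output schema:

```python
from urllib.parse import urlencode

# Hedged sketch: turn a read_taxonomy response into a filters string for
# browse_items/search_items. The "facets" response shape is assumed.
def filters_from_taxonomy(taxonomy: dict, wanted: dict) -> str:
    facet_names = {facet["name"] for facet in taxonomy.get("facets", [])}
    # Drop any selection that isn't a facet the taxonomy actually lists.
    return urlencode({k: v for k, v in wanted.items() if k in facet_names})
```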
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that the tool returns a list of next-level categories and facets with filters, but it doesn't cover critical aspects like whether this is a read-only operation, potential rate limits, authentication needs, error handling, or pagination. For a tool with no annotations, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured, using three sentences to explain the purpose, output, and requirement. It avoids unnecessary details and is front-loaded with the main function. However, it could be slightly more efficient by combining ideas, but overall, it earns its place with clear information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a single parameter, no output schema, no annotations), the description is moderately complete. It covers the basic purpose and output but lacks details on behavioral traits, usage context, and parameter examples. Without annotations or an output schema, it should provide more guidance on what the return values entail, leaving room for improvement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the parameter 'category' documented as 'Category to read taxonomy for.' The description adds minimal value beyond this, only reiterating that a category must be specified. It doesn't provide examples of valid categories (e.g., 'furniture') or explain the semantics further, so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to discover how items are categorized on 1stDibs by returning a list of next-level categories and facets with filters. It specifies the verb 'discover' and resource 'taxonomy' with examples of top-level categories. However, it doesn't explicitly differentiate from sibling tools like browse_items or search_items, which might also involve categorization.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance: it states 'You need to specify a category,' which is a basic requirement. It doesn't explain when to use this tool versus alternatives like browse_items or search_items, nor does it provide context on prerequisites or exclusions. This lack of comparative guidance limits its utility for an AI agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_items: Search 1stDibs Items (Grade: B)

Search 1stDibs with the given query and optional page number and filters. Filters are in the format of a query string like category=furniture&location=usa-north-america. Read the taxonomy to discover categories, facets and filters.

Parameters (JSON Schema)

Name     Required  Description                     Default
page     No        Jump to a specific page number
query    Yes       Search Query
filters  No        Filters to apply to the search
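Because the tool exposes a page parameter but documents no pagination limits, a client has to probe for the end of results itself. A minimal pagination sketch, with `call_tool` as a placeholder MCP dispatcher and the `items` key assumed (no output schema is published):

```python
# Hedged sketch of paging through search_items results. Stops when a
# page comes back empty or the page cap is reached, since the server
# documents no pagination limits.
def iter_search(call_tool, query: str, filters: str = "", max_pages: int = 3):
    for page in range(1, max_pages + 1):
        result = call_tool("search_items",
                           {"query": query, "page": page, "filters": filters})
        items = result.get("items", [])
        if not items:
            break
        yield from items
```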
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that filters use a query string format and references the taxonomy, but doesn't disclose important behavioral traits such as whether this is a read-only operation, rate limits, authentication requirements, pagination behavior beyond the page parameter, or what the response format looks like.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with three sentences. It's front-loaded with the core purpose, followed by parameter format details and a reference to taxonomy. There's minimal waste, though the second sentence could be slightly more streamlined.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 3 parameters, no annotations, and no output schema, the description is incomplete. It doesn't explain what the tool returns (search results format), doesn't cover behavioral aspects like rate limits or authentication, and while it mentions the taxonomy, it doesn't fully compensate for the lack of structured documentation about the filter system.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all three parameters. The description adds some value by explaining that filters use 'a query string like category=furniture&location=usa-north-america' and references the taxonomy, but doesn't provide additional syntax or format details beyond what the schema descriptions offer for 'query' and 'page'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search 1stDibs with the given query and optional page number and filters.' It specifies the verb ('Search') and resource ('1stDibs Items'), but doesn't explicitly differentiate it from sibling tools like 'browse_items' or 'item_details' beyond mentioning the search functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some implied usage context by mentioning optional parameters and referencing the taxonomy for filters. However, it doesn't explicitly state when to use this tool versus alternatives like 'browse_items' or 'item_details', nor does it provide clear exclusions or prerequisites for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
