1stDibs
Server Details
MCP server for browsing and searching items on 1stDibs marketplace.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
4 tools

browse_items: Browse 1stDibs Items (quality grade B)
Browse 1stDibs with the given category and optional page number and filters. Filters are in the format of a query string like category=furniture&location=usa-north-america. Read the taxonomy to discover facets and filters.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number to browse | |
| filters | No | Filters to apply to the browse | |
| category | Yes | Category to browse | |
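Per the description above, `filters` is passed as a URL-style query string. A minimal sketch of composing the `browse_items` arguments in Python, assuming standard `key=value` pairs joined by `&` (the facet names and values below are taken from the example in the tool description, not from a live taxonomy):

```python
from urllib.parse import urlencode

# Facet names and values would normally come from read_taxonomy;
# these pairs mirror the example in the tool description.
facets = {"category": "furniture", "location": "usa-north-america"}

arguments = {
    "category": "furniture",       # required
    "page": 2,                     # optional page number
    "filters": urlencode(facets),  # -> "category=furniture&location=usa-north-america"
}

print(arguments["filters"])
```

`urlencode` also percent-escapes values, so facet values containing spaces or `&` stay well-formed in the query string.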
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions reading the taxonomy to discover facets and filters, which adds some context, but fails to describe critical behaviors such as pagination limits, rate limits, authentication needs, or what the output looks like (e.g., list of items with metadata). For a browse tool with no annotation coverage, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, with the core purpose stated first. However, the second sentence about filter format and third about taxonomy reading could be more integrated, and there's minor redundancy (e.g., 'category' mentioned twice). Overall, it's efficient but not perfectly structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (browsing with filters and pagination), lack of annotations, and no output schema, the description is incomplete. It doesn't explain return values, error handling, or behavioral constraints like rate limits. The mention of taxonomy reading helps but doesn't compensate for the overall gaps in context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters (category, page, filters). The description adds minimal value beyond the schema by noting filters are 'in the format of a query string' and referencing the taxonomy for facets, but it doesn't provide examples or clarify semantics like valid category values or filter syntax details. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Browse 1stDibs with the given category and optional page number and filters.' It specifies the verb ('browse'), resource ('1stDibs items'), and scope (category-based browsing with filters). However, it doesn't explicitly differentiate from sibling tools like 'search_items' or 'item_details' beyond mentioning the taxonomy reading for filters.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by mentioning 'optional page number and filters' and referencing the taxonomy for facets, but it lacks explicit guidance on when to use this tool versus alternatives like 'search_items' or 'item_details'. No exclusions or prerequisites are stated, leaving usage context somewhat vague.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
item_details: Get 1stDibs Item Details (quality grade B)
Get details on a specific item on 1stDibs. You'll need to input the item ID. You can search for an item or browse for an item to find an item ID.
| Name | Required | Description | Default |
|---|---|---|---|
| itemId | Yes | Item ID to get details for | |
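MCP clients invoke tools with a JSON-RPC `tools/call` request. A sketch of the payload an agent would send for this tool; the item ID here is a made-up placeholder, since real IDs come from `browse_items` or `search_items` results:

```python
import json

# Hypothetical item ID for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "item_details",
        "arguments": {"itemId": "id-12345"},
    },
}

print(json.dumps(request, indent=2))
```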
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions needing an item ID but doesn't disclose behavioral traits such as error handling (e.g., what happens if the ID is invalid), rate limits, authentication requirements, or the format of returned details. The description is minimal and lacks critical operational context for a tool that retrieves data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences: one states the purpose and required input, and the other provides usage context. It's front-loaded with the core functionality. However, the second sentence could be more tightly integrated, and there's minor redundancy in mentioning 'item' multiple times, but overall it's efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (single parameter, no output schema, no annotations), the description is incomplete. It doesn't explain what details are returned (e.g., price, description, availability), potential errors, or dependencies. Without annotations or output schema, the description should compensate by providing more context, but it leaves significant gaps for effective tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'itemId' documented as 'Item ID to get details for'. The description adds that 'You'll need to input the item ID,' which reinforces the requirement but doesn't provide additional meaning beyond the schema (e.g., examples of IDs, where to find them). With high schema coverage, the baseline is 3, and the description doesn't significantly enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get details on a specific item on 1stDibs' with the required input 'item ID'. It specifies the verb 'Get details' and resource 'specific item', distinguishing it from siblings like 'browse_items' or 'search_items' which are for finding items rather than retrieving details. However, it doesn't explicitly contrast with 'read_taxonomy', which might also retrieve details but for a different resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by stating 'You can search for an item or browse for an item to find an item ID,' suggesting this tool is used after obtaining an ID from other tools. However, it doesn't explicitly state when to use this tool versus alternatives like 'read_taxonomy' or provide clear exclusions (e.g., not for bulk details). The guidance is helpful but not comprehensive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
read_taxonomy: Read 1stDibs Taxonomy (quality grade C)
Discover how items on 1stDibs are categorized. We have a taxonomy with furniture, art, jewelry, and fashion categories at the top level. This will return a list of next level categories as well as facets with their appropriate filters. You need to specify a category.
| Name | Required | Description | Default |
|---|---|---|---|
| category | Yes | Category to read taxonomy for | |
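The intended workflow is taxonomy-first: read the taxonomy for a top-level category, then use the returned facets to build filter strings for `browse_items` or `search_items`. A sketch of that second step, assuming a hypothetical response shape (the tool publishes no output schema, so the structure below is illustrative only):

```python
from urllib.parse import urlencode

# Hypothetical read_taxonomy result; the real response shape is undocumented.
taxonomy = {
    "category": "furniture",
    "subcategories": ["seating", "tables", "storage"],
    "facets": {"location": ["usa-north-america", "europe"]},
}

# Pick one value per facet and fold them into a filter query string.
chosen = {name: values[0] for name, values in taxonomy["facets"].items()}
filters = urlencode({"category": taxonomy["category"], **chosen})
print(filters)  # category=furniture&location=usa-north-america
```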
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that the tool returns a list of next-level categories and facets with filters, but it doesn't cover critical aspects like whether this is a read-only operation, potential rate limits, authentication needs, error handling, or pagination. For a tool with no annotations, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured, using three sentences to explain the purpose, output, and requirement. It avoids unnecessary details and is front-loaded with the main function. However, it could be slightly more efficient by combining ideas, but overall, it earns its place with clear information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a single parameter, no output schema, no annotations), the description is moderately complete. It covers the basic purpose and output but lacks details on behavioral traits, usage context, and parameter examples. Without annotations or an output schema, it should provide more guidance on what the return values entail, leaving room for improvement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the parameter 'category' documented as 'Category to read taxonomy for.' The description adds minimal value beyond this, only reiterating that a category must be specified. It doesn't provide examples of valid categories (e.g., 'furniture') or explain the semantics further, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to discover how items are categorized on 1stDibs by returning a list of next-level categories and facets with filters. It specifies the verb 'discover' and resource 'taxonomy' with examples of top-level categories. However, it doesn't explicitly differentiate from sibling tools like browse_items or search_items, which might also involve categorization.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance: it states 'You need to specify a category,' which is a basic requirement. It doesn't explain when to use this tool versus alternatives like browse_items or search_items, nor does it provide context on prerequisites or exclusions. This lack of comparative guidance limits its utility for an AI agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_items: Search 1stDibs Items (quality grade B)
Search 1stDibs with the given query and optional page number and filters. Filters are in the format of a query string like category=furniture&location=usa-north-america. Read the taxonomy to discover categories, facets and filters.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Jump to a specific page number | |
| query | Yes | Search Query | |
| filters | No | Filters to apply to the search | |
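Because `page` lets an agent jump to a specific page, collecting all results is a simple loop that advances until a page comes back empty. A sketch with a stand-in for the actual tool call (`call_search` below is a local stub, not part of the server, and the stop-on-empty-page heuristic is an assumption since pagination behavior is undocumented):

```python
def call_search(query, page, filters=None):
    # Stub standing in for a real MCP tools/call round trip; returns a
    # fake page of results and runs out after page 2 for illustration.
    items = [f"{query}-item-{page}-{i}" for i in range(2)] if page <= 2 else []
    return {"items": items}

def collect_all(query, filters=None, max_pages=10):
    results = []
    for page in range(1, max_pages + 1):
        batch = call_search(query, page, filters).get("items", [])
        if not batch:  # empty page: assume no more results
            break
        results.extend(batch)
    return results

print(collect_all("mid-century chair"))
```

The `max_pages` cap is a defensive choice: without documented pagination limits, an unbounded loop could hammer the server.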
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that filters use a query string format and references the taxonomy, but doesn't disclose important behavioral traits such as whether this is a read-only operation, rate limits, authentication requirements, pagination behavior beyond the page parameter, or what the response format looks like.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with three sentences. It's front-loaded with the core purpose, followed by parameter format details and a reference to taxonomy. There's minimal waste, though the second sentence could be slightly more streamlined.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 parameters, no annotations, and no output schema, the description is incomplete. It doesn't explain what the tool returns (search results format), doesn't cover behavioral aspects like rate limits or authentication, and while it mentions the taxonomy, it doesn't fully compensate for the lack of structured documentation about the filter system.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents all three parameters. The description adds some value by explaining that filters use 'a query string like category=furniture&location=usa-north-america' and references the taxonomy, but doesn't provide additional syntax or format details beyond what the schema descriptions offer for 'query' and 'page'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search 1stDibs with the given query and optional page number and filters.' It specifies the verb ('Search') and resource ('1stDibs Items'), but doesn't explicitly differentiate it from sibling tools like 'browse_items' or 'item_details' beyond mentioning the search functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some implied usage context by mentioning optional parameters and referencing the taxonomy for filters. However, it doesn't explicitly state when to use this tool versus alternatives like 'browse_items' or 'item_details', nor does it provide clear exclusions or prerequisites for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!