Partle Marketplace
Server Details
Search products and stores in local physical shops. Find availability, prices, and store locations. Currently focused on hardware stores in Spain.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
12 tools

create_product (Grade: A)
Create a new product listing. Requires an API key (generate at /account).
Args:
api_key: Your Partle API key (starts with pk_).
name: Product name.
description: Product description (optional).
price: Price in whole currency units (e.g. 15.99 means €15.99). NOT cents. Max 100000.
currency: Currency symbol, defaults to €.
url: Link to the product page (optional).
store_id: Store ID to list under (optional — omit for personal listing).

| Name | Required | Description | Default |
|---|---|---|---|
| url | No | | |
| name | Yes | | |
| price | No | | |
| api_key | Yes | | |
| currency | No | | € |
| store_id | No | | |
| description | No | | |
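The price convention above (whole currency units, not cents) is the most likely source of agent error. A minimal client-side sketch that pre-validates the arguments before the call — the helper name, the `pk_demo` key, and the extra checks are illustrative assumptions, not part of the server:

```python
def build_create_product_args(api_key, name, price=None, currency="€", **optional):
    """Assemble arguments for a create_product call (helper name is hypothetical)."""
    if not api_key.startswith("pk_"):
        raise ValueError("api_key must start with pk_")
    # price is in whole currency units (15.99 means €15.99), never cents; max 100000
    if price is not None and not 0 <= price <= 100000:
        raise ValueError("price must be between 0 and 100000 in whole currency units")
    args = {"api_key": api_key, "name": name, "currency": currency}
    if price is not None:
        args["price"] = price
    # optional fields: description, url, store_id (omit store_id for a personal listing)
    args.update({k: v for k, v in optional.items() if v is not None})
    return args

args = build_create_product_args("pk_demo", "Cerrojo FAC 946", price=15.99, url=None)
```

Dropping `None`-valued optionals keeps the payload aligned with the "only what you provide" spirit of the schema.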
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond the annotations (which indicate non-idempotent write operations), the description adds critical behavioral context: authentication requirements (API key starting with pk_), data format constraints (price is NOT cents, max 100000), and default values (currency defaults to €). It does not contradict annotations (readOnlyHint: false aligns with 'Create').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently front-loaded with purpose and prerequisites in the first sentence, followed by a structured Args section. Every sentence provides value; there is no redundant or filler content despite the need to document seven undocumented parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the input side is comprehensively documented given the 0% schema coverage and 7 parameters, the description omits what the tool returns (e.g., the product ID or the created object); with no output schema to fall back on, nothing else supplies this information. For a write operation, that return-value context is a notable gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description carries the full burden and excels by documenting all 7 parameters with rich semantic meaning: api_key format hint ('starts with pk_'), price unit warnings with examples, optional flags, and default values. This prevents significant agent error (e.g., submitting price in cents).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb-resource pair ('Create a new product listing') that precisely defines the tool's function. It clearly distinguishes from sibling tools like update_product or get_product through its explicit focus on creation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear prerequisite guidance ('Requires an API key') with a specific location to obtain it ('/account'), and explains the store_id parameter's contextual use ('omit for personal listing'). However, it lacks explicit differentiation from update_product for scenarios where the user might be unsure whether to create new or modify existing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_product (Grade: A; Destructive, Idempotent)
Permanently delete a product listing and all its images.
Args:
api_key: Your Partle API key (starts with pk_).
product_id: ID of the product to delete.

| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
| product_id | Yes | | |
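Since this deletion is permanent and cascades to all product images, a client might want a deliberate confirmation step before building the call. A sketch under that assumption — the helper and the `confirmed` flag are hypothetical, not part of the tool:

```python
def build_delete_product_args(api_key, product_id, confirmed=False):
    """Guarded argument builder for delete_product (helper and flag are hypothetical)."""
    # deletion is permanent and also removes all of the product's images,
    # so require an explicit confirmation before constructing the call
    if not confirmed:
        raise RuntimeError("refusing to build a permanent delete call without confirmation")
    return {"api_key": api_key, "product_id": product_id}

args = build_delete_product_args("pk_demo", 42, confirmed=True)
```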
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable cascading deletion context (images destroyed) and API key format hint (pk_ prefix) beyond annotations; does not mention idempotency but annotation covers this.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with clear action statement; Args block is slightly informal but concise and scannable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a destructive no-return operation; could explicitly mention idempotent nature for safety clarity but annotations cover this.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Fully compensates for 0% schema description coverage by detailing both parameters, including helpful format hint for api_key.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific action (permanently delete product listing) and explicitly mentions 'all its images' to distinguish from sibling delete_product_image.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when/when-not guidance or comparison to alternatives like delete_product_image or update_product.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_product_image (Grade: A; Destructive, Idempotent)
Delete a specific image from a product.
Args:
api_key: Your Partle API key (starts with pk_).
product_id: ID of the product.
image_id: ID of the image to delete (visible in product details).

| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
| image_id | Yes | | |
| product_id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Aligns with destructive annotation by stating 'Delete', and adds minor context that image_id is 'visible in product details', but lacks details on permanence or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded action statement with concise Args section; every sentence provides necessary information given the lack of schema descriptions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for simple 3-parameter tool, but lacks description of return value or success/failure behavior given no output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Excellent compensation for 0% schema description coverage by providing format hints (api_key starts with pk_) and sourcing guidance (image_id visible in product details).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb (Delete) + resource (image from a product) clearly distinguishes from sibling tools like delete_product or upload_product_image.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use vs alternatives (e.g., when to use delete_product instead) or when-not-to-use scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_my_products (Grade: A; Read-only)
List all products created by the API key owner.
Args:
api_key: Your Partle API key (starts with pk_).
limit: Max results (1-200, default 50).

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| api_key | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
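The documented 1-200 range for `limit` can be enforced client-side by clamping rather than erroring. A minimal sketch; the helper name is an assumption:

```python
def build_get_my_products_args(api_key, limit=50):
    """Clamp limit to the documented 1-200 range (helper name is hypothetical)."""
    return {"api_key": api_key, "limit": max(1, min(int(limit), 200))}

args = build_get_my_products_args("pk_demo", limit=500)  # clamped to 200
```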
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds limit constraints (1-200) and API key format hint (pk_) beyond annotations, but omits rate limits or pagination details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded purpose statement with structured Args section; no redundant text given the schema lacks descriptions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple read operation with 2 parameters and existing output schema; could mention cursor/pagination behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Fully compensates for 0% schema description coverage by documenting both parameters with types, constraints, and format hints in the Args block.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb (List) and resource scope (all products by API key owner), distinguishing it from sibling get_product (singular) and search_products.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use versus get_product, search_products, or store-specific queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_product (Grade: A; Read-only)
Get detailed information about a single product by its ID.
Args:
product_id: The product ID.

| Name | Required | Description | Default |
|---|---|---|---|
| product_id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The readOnlyHint: true annotation already indicates this is a safe read operation. The description adds that it returns 'detailed information' but does not specify error behavior for invalid IDs, rate limits, or what specific product attributes are returned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The main description is efficiently front-loaded in a single sentence. The Args section is structured but redundant given the schema, though partially justified by the lack of schema descriptions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter read operation, the description covers the core mechanism but lacks details about the return structure or error conditions, which would be valuable given no output schema is provided.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description provides only tautological information ('The product ID') that adds no semantic value beyond the parameter name and schema title, failing to explain valid formats, ranges, or ID sources.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('product') and explicitly scopes it to 'by its ID', which clearly differentiates it from siblings like search_products (query-based) and get_my_products (listing multiple).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While 'by its ID' implicitly indicates you need a known identifier, there is no explicit guidance on when to use this versus search_products for discovery, or prerequisites for obtaining valid IDs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_stats (Grade: A; Read-only)
Get platform statistics (total products, total stores).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnlyHint=true, the description adds crucial behavioral context by specifying exactly what statistics are returned (total products and total stores). This compensates for the missing output schema, though it doesn't mention caching, rate limits, or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with zero waste. The parenthetical efficiently specifies the returned data structure without verbosity. Information is front-loaded with the action verb immediately followed by the resource type.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with no parameters, the description is complete. It compensates for the lack of output schema by enumerating the returned statistics. Could improve by mentioning if statistics are real-time or cached, but adequate for the complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters, the baseline score applies. The description requires no parameter clarification, and the empty schema is self-explanatory for a simple statistics retrieval endpoint.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('platform statistics'), and explicitly lists the returned metrics ('total products, total stores'). This clearly distinguishes it from sibling tools like get_product or get_store which retrieve individual records rather than aggregate counts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through 'platform statistics' (aggregate scope), distinguishing it from individual CRUD operations on products/stores. However, it lacks explicit guidance on when to choose this over search_products or get_my_products for counting purposes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_store (Grade: B; Read-only)
Get detailed information about a single store by its ID.
Args:
store_id: The store ID.

| Name | Required | Description | Default |
|---|---|---|---|
| store_id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, confirming the safe read operation. The description adds context that it returns 'detailed information', but lacks specifics about the return structure, rate limits, or error conditions given the absence of an output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately brief and front-loaded with the core action. The Args section is structured but redundant given the schema, though it doesn't significantly bloat the content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter read operation, the description is minimally adequate. However, given the lack of output schema, it should provide more specifics about what 'detailed information' includes or the return format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. However, 'The store ID' is essentially a tautology that adds no semantic value beyond the parameter name. It fails to explain ID format, constraints, or lookup behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb (Get), resource (store), and access method (by its ID). It implicitly distinguishes from the sibling 'search_stores' by specifying 'single store', though it doesn't explicitly name the alternative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like 'search_stores'. There are no explicit prerequisites, exclusions, or conditional usage notes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_products (Grade: A; Read-only)
Search marketplace products by name or description.
Args:
query: Search term (e.g. "wireless headphones", "cerrojo").
min_price: Minimum price filter in EUR.
max_price: Maximum price filter in EUR.
tags: Comma-separated tag filter (e.g. "electronics,bluetooth").
store_id: Filter to a single store by ID.
sort_by: Sort order — one of "price_desc", "name_asc", "newest", "oldest".
semantic: Use semantic (vector) search for cross-language matching.
When true, ranks by meaning similarity — e.g. searching "drill"
also finds "taladro" (Spanish) and "Bohrmaschine" (German).
limit: Max results (1-100, default 20).
offset: Number of results to skip for pagination (default 0).

| Name | Required | Description | Default |
|---|---|---|---|
| tags | No | | |
| limit | No | | |
| query | Yes | | |
| offset | No | | |
| sort_by | No | | |
| semantic | No | | |
| store_id | No | | |
| max_price | No | | |
| min_price | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
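Because this tool has the most parameters and an enum-valued `sort_by`, a client-side builder that validates the documented constraints can catch mistakes before the call. A sketch — the helper name and the validation strategy are assumptions:

```python
def build_search_products_args(query, semantic=False, limit=20, offset=0, **filters):
    """Assemble a search_products call (helper name is hypothetical)."""
    # documented range: 1-100, default 20
    if not 1 <= limit <= 100:
        raise ValueError("limit must be between 1 and 100")
    allowed_sorts = {"price_desc", "name_asc", "newest", "oldest"}
    if filters.get("sort_by") not in (None, *allowed_sorts):
        raise ValueError(f"sort_by must be one of {sorted(allowed_sorts)}")
    args = {"query": query, "semantic": semantic, "limit": limit, "offset": offset}
    # optional filters: min_price / max_price (EUR), tags (comma-separated), store_id
    args.update({k: v for k, v in filters.items() if v is not None})
    return args

# semantic search ranks by meaning, so "drill" can also match "taladro"
args = build_search_products_args("drill", semantic=True, max_price=50, sort_by="newest")
```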
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnly and openWorld behavior, so the burden is lighter. The description adds valuable behavioral constraints: valid range for limit (1-100), currency unit for prices (EUR), specific enum values for sort_by, and comma-separated format for tags.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The structure efficiently front-loads the purpose in one sentence, followed by a structured Args block. Given the complete lack of schema descriptions, the Args section is necessary and contains no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 9 parameters, 0% schema coverage, and an existing output schema, the description is complete. It documents every parameter's semantics and behavior, and correctly relies on the output schema to describe return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing detailed Args documentation for all 9 parameters, including examples (e.g., 'wireless headphones', 'cerrojo'), format specifications (comma-separated), and constraint documentation (default 20, max 100).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (Search) and resource (marketplace products), and the scope qualifier 'by name or description' effectively distinguishes it from sibling tools like get_product (ID-based lookup) and search_stores (different resource).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it doesn't explicitly name alternatives, the phrase 'by name or description' provides clear context for when to use this tool (text-based searching) versus ID-based retrieval or store searches. This implies usage without explicit exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_stores (Grade: B; Read-only)
Search or list stores in the marketplace.
Args:
query: Optional search term to filter stores by name or address.
limit: Max results (1-50, default 20).

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and open-world hints. The description adds value by specifying that query filters by 'name or address' and that limit accepts values 1-50, but omits pagination behavior, rate limits, or result ordering details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with a clear one-line purpose statement followed by a structured Args block. No sentences are wasted, though the Args format is slightly informal compared to natural language integration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (documenting return values elsewhere) and simple parameters, the description is adequate but minimal. It lacks contextual guidance regarding the sibling get_store tool and does not mention pagination cursor handling despite the limit parameter implying paginated results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates via the Args block. It documents both parameters: query (optional search term filtering name/address) and limit (range 1-50, default 20), providing sufficient detail for invocation despite the bare JSON schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Search[es] or list[s] stores in the marketplace', a specific verb and resource. It implicitly distinguishes from product-related siblings (create_product, search_products) by focusing on stores, though it could explicitly contrast with get_store.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this versus alternatives. It does not mention sibling tool get_store (for single-store lookups) or clarify when to search versus list, leaving the agent to infer usage from parameter names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
submit_feedback (Grade: A)
Submit feedback about your experience using Partle. Tell us what's confusing, broken, or could be better.
Args:
feedback: Freeform text describing the issue or suggestion (max 5000 chars).

| Name | Required | Description | Default |
|---|---|---|---|
| feedback | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds the 5000 character constraint and clarifies this sends feedback to Partle (the platform), supplementing basic annotations without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Appropriately brief and front-loaded; Args section is slightly informal but clearly delineates the single parameter's requirements.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter tool with no output schema, the description provides sufficient context to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by explaining the parameter's purpose (freeform issue/suggestion text) and constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb (Submit) + resource (feedback about Partle experience), clearly distinguishes from product/store management siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through examples of feedback content (confusing, broken, improvements) but lacks explicit when/when-not guidance relative to alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_product (Grade: A; Idempotent)
Update an existing product listing. Only provided fields are changed.
Args:
api_key: Your Partle API key (starts with pk_).
product_id: ID of the product to update.
name: New product name (optional).
description: New description (optional).
price: New price in whole currency units (e.g. 15.99 means €15.99). NOT cents. Max 100000.
currency: New currency symbol (optional).
url: New product page link (optional).

| Name | Required | Description | Default |
|---|---|---|---|
| url | No | | |
| name | No | | |
| price | No | | |
| api_key | Yes | | |
| currency | No | | |
| product_id | Yes | | |
| description | No | | |
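To make the partial-update semantics concrete, here is a minimal sketch of building a `tools/call` request for `update_product`, assuming standard MCP JSON-RPC framing. The `api_key` and `product_id` values are placeholders; the price constraints (whole currency units, max 100000) and the `pk_` prefix come from the description above.

```python
# Sketch: assemble a JSON-RPC "tools/call" request for update_product.
# Only the fields the caller provides are sent, matching the tool's
# "Only provided fields are changed" semantics.
import json

MAX_PRICE = 100_000  # per the tool description


def update_product_request(api_key: str, product_id: int, **fields) -> str:
    if not api_key.startswith("pk_"):
        raise ValueError("Partle API keys start with pk_")
    price = fields.get("price")
    if price is not None and not (0 <= price <= MAX_PRICE):
        raise ValueError("price is in whole currency units, max 100000")
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "update_product",
            "arguments": {"api_key": api_key,
                          "product_id": product_id, **fields},
        },
    })


# Updating only the price leaves name, description, etc. untouched.
payload = update_product_request("pk_example", 42, price=15.99)
```

Note that because omitted fields are simply not sent, an agent never needs to re-fetch and echo back the full product record before updating one attribute.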
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds crucial behavioral details beyond annotations: partial/PATCH-like updates, price format ('NOT cents'), max price limit, and API key prefix pattern.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficient docstring format with front-loaded purpose statement; Args section is necessary given lack of schema descriptions. No wasted sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers essential gaps given complexity (7 params) and lack of output schema; explains auth requirement and partial update semantics sufficiently for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Excellent compensation for 0% schema coverage with detailed Args documentation, especially for price semantics (decimal units, max 100000) and api_key format (pk_ prefix).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Update') + resource ('existing product listing') that distinguishes from siblings create_product, delete_product, and get_product.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explains partial update behavior ('Only provided fields are changed') but lacks explicit guidance on when to use vs create_product or error handling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
upload_product_image (Grade B, Destructive, Idempotent)
Upload an image for a product. Provide either image_base64 or image_url (not both).
Args:
api_key: Your Partle API key (starts with pk_).
product_id: ID of the product to attach the image to.
image_base64: Base64-encoded image data.
content_type: MIME type when using image_base64 (e.g. "image/jpeg"). Required with image_base64.
image_url: URL to download the image from (alternative to image_base64).

| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
| image_url | No | | |
| product_id | Yes | | |
| content_type | No | | |
| image_base64 | No | | |
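The mutual-exclusivity and conditional-requirement rules above can be sketched as a small argument builder. This is an illustration only; the helper name and placeholder values are not part of the Partle API.

```python
# Sketch: build arguments for upload_product_image, enforcing
# "either image_base64 or image_url (not both)" and the rule that
# content_type is required whenever image_base64 is used.
import base64


def image_upload_args(api_key, product_id, *, image_bytes=None,
                      content_type=None, image_url=None):
    if (image_bytes is None) == (image_url is None):
        raise ValueError("provide exactly one of image_bytes or image_url")
    args = {"api_key": api_key, "product_id": product_id}
    if image_bytes is not None:
        if content_type is None:
            raise ValueError("content_type is required with image_base64")
        args["image_base64"] = base64.b64encode(image_bytes).decode("ascii")
        args["content_type"] = content_type
    else:
        args["image_url"] = image_url
    return args


# Inline upload path: raw bytes are base64-encoded before sending.
args = image_upload_args("pk_example", 42,
                         image_bytes=b"\x89PNG", content_type="image/png")
```

Validating these constraints client-side saves a round trip, since the server would otherwise reject a call that supplies both image sources or omits the MIME type.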
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate the operation is destructive and idempotent, but the description does not explain what gets destroyed (e.g., overwriting existing images) or the benefits of idempotency. It adds value by documenting the mutual exclusivity constraint and conditional parameter requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core action, followed by a structured Args block that efficiently presents parameter details. No sentences are wasted, though the docstring format is slightly informal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 0% schema coverage and lack of output schema, the description adequately covers input parameters but leaves gaps regarding the destructive behavior (what happens to existing images), return values, and success/failure modes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates effectively by documenting all 5 parameters, including format hints (API key prefix), encoding details (Base64), and conditional requirements (content_type with image_base64).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool uploads an image for a product with a specific verb and resource. However, it does not explicitly differentiate from siblings like `update_product` or clarify when to use this versus creating a product with an image initially.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides parameter-level constraints (mutual exclusivity of image_base64 vs image_url), but offers no guidance on when to select this tool over alternatives like `update_product` or prerequisites such as product existence.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.