Tonzar B2B Russian Export Marketplace

Server Details

Search 160k+ Russian B2B products from 8,900+ verified manufacturers (EN/RU).

Status: Healthy
Transport: Streamable HTTP
Repository: introman2023/tonzar-mcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4/5 across 5 of 5 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose: getProduct retrieves product details, getSupplier retrieves supplier details, listCategories lists categories, listProducts lists products in a category, and searchProducts performs full-text search. No overlap or ambiguity.

Naming Consistency: 5/5

All tools follow a consistent lowerCamelCase verbNoun pattern: getProduct, getSupplier, listCategories, listProducts, searchProducts. No mixing of styles or conventions.

Tool Count: 5/5

With 5 tools, the server is well-scoped for a B2B marketplace catalog. The tools cover browsing categories, listing products, searching, and retrieving detailed information without being overly numerous or sparse.

Completeness: 4/5

The tool set covers core informational needs: category browsing, product listing, search, and detail retrieval. It lacks transactional functions (e.g., ordering, quoting, contacting suppliers) that might be expected in a B2B marketplace, but these appear to be beyond the server's informational scope.

Available Tools (5 tools)

getProduct: A

Get full details of a specific product by ID, including description, specifications, pricing, and manufacturer info.

Parameters (JSON Schema):
- productId (required): Tonzar product ID
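As a sketch of how a client might invoke this tool: MCP servers accept JSON-RPC 2.0 `tools/call` requests, so a getProduct call could look like the payload below. The product ID is a made-up placeholder; real IDs come from listProducts or searchProducts results.

```python
import json

# Sketch of a JSON-RPC 2.0 "tools/call" request for getProduct.
# "12345" is a hypothetical product ID for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "getProduct",
        "arguments": {"productId": "12345"},
    },
}

print(json.dumps(request, indent=2))
```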
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden for behavioral disclosure. It states the tool returns 'full details' and lists what is included, but it does not explicitly state it is read-only, mention error handling, or describe the response format beyond the listed fields. Adequate but lacks explicit safety guarantees.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is front-loaded with the core action and resource, then lists included details. Every word contributes meaning; no unnecessary text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple get-by-ID tool with one parameter and no output schema, the description covers the purpose and what data is returned (description, specifications, pricing, manufacturer info). It lacks explicit mention of the return format or error conditions, but given the tool's simplicity, it is mostly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema provides one parameter (productId) with description 'Tonzar product ID', and coverage is 100%. The description adds context that this ID is for a specific product, but does not add additional syntax or format details beyond what the schema already provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Get full details of a specific product by ID', specifying the verb, resource, and scope. It lists the types of details included (description, specifications, pricing, manufacturer info), and the tool name and context signals show it is distinct from siblings like listProducts or getSupplier.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Usage is implied: 'by ID' indicates you need a product ID. However, the description does not explicitly state when to use this tool versus alternatives like searchProducts or listProducts, nor does it mention any prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getSupplier: A

Get details about a Russian manufacturer/supplier including product count, categories, and company info.

Parameters (JSON Schema):
- supplierId (required): Tonzar supplier ID
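Since the page documents only the parameter's name, requiredness, and description, a minimal client-side check of the one required field can be sketched as follows; the schema shape (string type) is an assumption.

```python
# Assumed shape of getSupplier's input schema; only the parameter name,
# requiredness, and description are shown on the page.
schema = {
    "type": "object",
    "properties": {
        "supplierId": {"type": "string", "description": "Tonzar supplier ID"}
    },
    "required": ["supplierId"],
}

def missing_required(arguments: dict) -> list:
    """Names of required parameters absent from the call arguments."""
    return [name for name in schema["required"] if name not in arguments]

print(missing_required({}))                     # ['supplierId']
print(missing_required({"supplierId": "s-1"}))  # []
```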
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description should disclose behavioral traits. It only states 'Get details' without confirming read-only behavior, authorization needs, or side effects, which is insufficient for a tool lacking annotations.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the tool's purpose and key return fields. No unnecessary words or repetition.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input schema (one parameter) and no output schema, the description adequately covers what the tool returns. It could also mention the data format or any constraints, but overall it is fairly complete.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a single parameter described as 'Tonzar supplier ID'. The description adds no further meaning beyond the schema, so it meets the baseline of 3.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get details'), the resource ('Russian manufacturer/supplier'), and the specific details included (product count, categories, company info). This distinctly differentiates it from siblings like getProduct and listProducts.


Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving supplier details but provides no explicit guidance on when to use this tool over alternatives. Sibling tools are not mentioned, and no when-not-to-use conditions are given.


listCategories: A

List all product categories in the Tonzar catalog with product counts. 15 root categories covering industrial equipment, medical devices, agricultural machinery, transport, electronics, and more.

Parameters (JSON Schema):
- parentId (optional): Parent category ID to list subcategories. Omit for root categories.
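The parentId convention above (omit it for root categories) can be sketched as a small argument builder; the helper name and the "cat-7" ID are invented for illustration.

```python
def list_categories_args(parent_id=None):
    """Build arguments for listCategories (hypothetical helper).

    Omit parentId entirely to get the 15 root categories; pass a
    parent category ID to list its subcategories.
    """
    return {} if parent_id is None else {"parentId": parent_id}

print(list_categories_args())         # {} -> root categories
print(list_categories_args("cat-7"))  # {'parentId': 'cat-7'}
```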
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Mentions product counts but lacks details on accuracy, pagination, or rate limits. Basic behavior is clear but safety/reliability not disclosed.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: first states purpose and feature, second provides context. No wasted words, front-loaded with key information.


Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks details on return format (e.g., fields within categories, ordering, limits). With no output schema, description should explain what output includes beyond 'product counts'.


Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with parentId described. Description adds value by clarifying root vs. subcategories and listing example domains, exceeding schema detail.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it lists product categories with product counts, mentions 15 root categories covering specific domains, and distinguishes from sibling tools focused on products/suppliers.


Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear guidance on when to use the parentId parameter (to list subcategories) and when to omit it (for root categories). No explicit alternatives or exclusions, but context is sufficient.


listProducts: A

Browse products in a specific category by ID. Returns a paginated list with images, prices, and suppliers. Use after listCategories to explore a category.

Parameters (JSON Schema):
- categoryId (required): Category ID (get from listCategories)
- page (optional): Page number (1-based, default 1)
- limit (optional): Products per page (1-50, default 20)
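The pagination rules in the table (1-based page, limit in the 1-50 range, defaults 1 and 20) can be sketched as an argument builder; the helper and the "c-42" ID are hypothetical.

```python
def list_products_args(category_id, page=1, limit=20):
    """Build listProducts arguments with the documented defaults.

    limit is clamped to the documented 1-50 range; page is 1-based.
    """
    return {
        "categoryId": category_id,
        "page": max(1, page),
        "limit": max(1, min(50, limit)),
    }

print(list_products_args("c-42"))          # defaults: page 1, limit 20
print(list_products_args("c-42", 3, 200))  # limit clamped to 50
```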
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations, so description carries full burden. It discloses paginated output with specific fields, but lacks details on error handling, rate limits, or sorting behavior. Adequate for a simple read tool.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, then usage guideline. No wasted words.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Tool is simple, schema describes parameters thoroughly. Description mentions output fields. Missing output schema but implicit for a paginated list. Sibling differentiation via usage guidance completes context.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage with clear defaults. Description adds no new parameter semantics beyond what schema already provides.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it browses products by category ID and returns paginated list with images, prices, and suppliers. It distinguishes from siblings like 'listCategories' and 'searchProducts' by specifying 'after listCategories'.


Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use after listCategories to explore a category,' providing clear context. However, it does not mention when not to use or explicitly name alternatives like 'searchProducts'.


searchProducts: A

Search the Tonzar B2B catalog of 160,000+ Russian industrial, medical, and agricultural products. Returns matching products with prices, suppliers, and specs. Use for finding Russian equipment for export.

Parameters (JSON Schema):
- query (required): Search query (product name, type, or keyword). English or Russian.
- exclude (optional): Exclude products containing these terms in name, description, or specs. Comma-separated for multiple (e.g. "chipboard,ЛДСП"). Minus syntax in the query itself is also supported (e.g. query "desk MDF -chipboard").
- category (optional): Category filter (e.g. "Medical", "Industrial", "Transport")
- maxResults (optional): Max results to return (1-50, default 10)
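The exclude and maxResults conventions above can be sketched as an argument builder that joins exclusion terms comma-separated (matching the documented "chipboard,ЛДСП" form) and clamps maxResults to 1-50; the helper is hypothetical.

```python
def search_products_args(query, exclude_terms=(), category=None, max_results=10):
    """Build searchProducts arguments (hypothetical helper).

    Exclusion terms are joined comma-separated; minus syntax inside
    the query string (e.g. "desk MDF -chipboard") is the documented
    alternative.
    """
    args = {"query": query, "maxResults": max(1, min(50, max_results))}
    if exclude_terms:
        args["exclude"] = ",".join(exclude_terms)
    if category:
        args["category"] = category
    return args

print(search_products_args("desk MDF", exclude_terms=["chipboard", "ЛДСП"]))
```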
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It accurately states this is a read/search operation with no destructive effects, and returns structured results. It does not mention auth, rate limits, or pagination, but for a simple search this is acceptable.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first states the catalog scope and return fields, second provides a use case. No wasted words, front-loaded with the core purpose.


Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately describes return content (prices, suppliers, specs). It specifies catalog size for credibility. A simple search tool is well-covered.


Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds value by clarifying the query accepts English or Russian, explains the 'exclude' parameter syntax (comma-separated, minus syntax), and notes 'category' is optional. This extends beyond schema descriptions.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool as searching a specific catalog of 160K+ Russian products, explicitly states what it returns (matching products with prices, suppliers, specs), and provides a use case ('Russian equipment for export'). This distinguishes it from siblings like getProduct (single product) or listProducts.


Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('Use for finding Russian equipment for export') but does not explicitly state when not to use or name alternatives. The purpose is clear enough to guide selection without detailed exclusions.

