Tonzar B2B Russian Export Marketplace
Server Details
Search 160k+ Russian B2B products from 8,900+ verified manufacturers (EN/RU).
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: introman2023/tonzar-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across all 5 tools scored.
Each tool has a clearly distinct purpose: getProduct retrieves product details, getSupplier retrieves supplier details, listCategories lists categories, listProducts lists products in a category, and searchProducts performs full-text search. No overlap or ambiguity.
All tools follow a consistent lowerCamelCase verbNoun pattern: getProduct, getSupplier, listCategories, listProducts, searchProducts. No mixing of styles or conventions.
With 5 tools, the server is well-scoped for a B2B marketplace catalog. The tools cover browsing categories, listing products, searching, and retrieving detailed information without being overly numerous or sparse.
The tool set covers core informational needs: category browsing, product listing, search, and detail retrieval. It lacks transactional functions (e.g., order, quote, contact) that might be expected in a B2B marketplace, though these fall outside the server's apparent informational scope.
Available Tools
5 tools

getProduct
Get full details of a specific product by ID, including description, specifications, pricing, manufacturer info.
| Name | Required | Description | Default |
|---|---|---|---|
| productId | Yes | Tonzar product ID | |
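As a sketch of what a getProduct call looks like on the wire, assuming the standard MCP JSON-RPC `tools/call` envelope (the product ID here is hypothetical; the listing does not document the ID format):

```python
import json

# Minimal JSON-RPC 2.0 "tools/call" request for getProduct, assuming the
# standard MCP envelope. "12345" is a placeholder product ID.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "getProduct",
        "arguments": {"productId": "12345"},
    },
}
print(json.dumps(request))
```

The same envelope applies to the sibling tools; only `name` and `arguments` change.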
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden for behavioral disclosure. It states the tool returns 'full details' and lists what is included, but it does not explicitly state it is read-only, mention error handling, or describe the response format beyond the listed fields. Adequate but lacks explicit safety guarantees.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is front-loaded with the core action and resource, then lists included details. Every word contributes meaning; no unnecessary text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple get-by-ID tool with one parameter and no output schema, the description covers the purpose and what data is returned (description, specifications, pricing, manufacturer info). It lacks explicit mention of the return format or error conditions, but given the tool's simplicity, it is mostly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema provides one parameter (productId) with description 'Tonzar product ID', and coverage is 100%. The description adds context that this ID is for a specific product, but does not add additional syntax or format details beyond what the schema already provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Get full details of a specific product by ID', specifying the verb, resource, and scope. It lists the types of details included (description, specifications, pricing, manufacturer info), and the tool name and context signals show it is distinct from siblings like listProducts or getSupplier.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage is implied: 'by ID' indicates you need a product ID. However, the description does not explicitly state when to use this tool versus alternatives like searchProducts or listProducts, nor does it mention any prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
getSupplier
Get details about a Russian manufacturer/supplier including product count, categories, and company info.
| Name | Required | Description | Default |
|---|---|---|---|
| supplierId | Yes | Tonzar supplier ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description should disclose behavioral traits. It only states 'Get details' without confirming read-only behavior, authorization needs, or side effects, which is insufficient for a tool lacking annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently conveys the tool's purpose and key return fields. No unnecessary words or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema (one parameter) and no output schema, the description adequately covers what the tool returns. It could also mention the data format or any constraints, but overall it is fairly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single parameter described as 'Tonzar supplier ID'. The description adds no further meaning beyond the schema, so it meets the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get details'), the resource ('Russian manufacturer/supplier'), and the specific details included (product count, categories, company info). This distinctly differentiates it from siblings like getProduct and listProducts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving supplier details but provides no explicit guidance on when to use this tool over alternatives. Sibling tools are not mentioned, and no when-not-to-use conditions are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
listCategories
List all product categories in the Tonzar catalog with product counts. 15 root categories covering industrial equipment, medical devices, agricultural machinery, transport, electronics, and more.
| Name | Required | Description | Default |
|---|---|---|---|
| parentId | No | Optional parent category ID to list subcategories. Omit for root categories. | |
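The parentId description implies two call shapes, sketched here under the assumed MCP `tools/call` params layout ("42" is a hypothetical category ID):

```python
# Omit parentId for the 15 root categories; pass a parent's ID for its
# subcategories. Both are wrapped in the (assumed) tools/call params shape.
root_args = {}
sub_args = {"parentId": "42"}

def categories_call(arguments):
    """Wrap listCategories arguments in the MCP tools/call params shape."""
    return {"name": "listCategories", "arguments": arguments}
```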
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Mentions product counts but lacks details on accuracy, pagination, or rate limits. Basic behavior is clear but safety/reliability not disclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: first states purpose and feature, second provides context. No wasted words, front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks details on return format (e.g., fields within categories, ordering, limits). With no output schema, description should explain what output includes beyond 'product counts'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with parentId described. Description adds value by clarifying root vs. subcategories and listing example domains, exceeding schema detail.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it lists product categories with product counts, mentions 15 root categories covering specific domains, and distinguishes from sibling tools focused on products/suppliers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear guidance on when to use the parentId parameter (to list subcategories) and when to omit it (for root categories). No explicit alternatives or exclusions, but context is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
listProducts
Browse products in a specific category by ID. Returns paginated list with images, prices, and suppliers. Use after listCategories to explore a category.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (1-based) | 1 |
| limit | No | Products per page (1-50) | 20 |
| categoryId | Yes | Category ID (get from listCategories) | |
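A small helper sketches how a client might apply the documented defaults and bounds when building listProducts arguments (the helper name and clamping behavior are illustrative, not part of the server):

```python
def list_products_args(category_id, page=1, limit=20):
    """Build listProducts arguments with the documented defaults,
    clamping limit to the 1-50 range stated in the schema."""
    if page < 1:
        raise ValueError("page is 1-based")
    return {
        "categoryId": category_id,
        "page": page,
        "limit": max(1, min(50, limit)),
    }
```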
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations, so description carries full burden. It discloses paginated output with specific fields, but lacks details on error handling, rate limits, or sorting behavior. Adequate for a simple read tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, then usage guideline. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is simple, schema describes parameters thoroughly. Description mentions output fields. Missing output schema but implicit for a paginated list. Sibling differentiation via usage guidance completes context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage with clear defaults. Description adds no new parameter semantics beyond what schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it browses products by category ID and returns paginated list with images, prices, and suppliers. It distinguishes from siblings like 'listCategories' and 'searchProducts' by specifying 'after listCategories'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use after listCategories to explore a category,' providing clear context. However, it does not mention when not to use or explicitly name alternatives like 'searchProducts'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
searchProducts
Search the Tonzar B2B catalog of 160,000+ Russian industrial, medical, and agricultural products. Returns matching products with prices, suppliers, and specs. Use for finding Russian equipment for export.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query (product name, type, or keyword). English or Russian. | |
| exclude | No | Exclude products containing these terms in name, description or specs. Comma-separated for multiple (e.g. "chipboard,ЛДСП"). Also supports minus syntax in query itself (e.g. query "desk MDF -chipboard"). | |
| category | No | Optional category filter (e.g. "Medical", "Industrial", "Transport") | |
| maxResults | No | Max results to return (1-50) | 10 |
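The exclude parameter documents two interchangeable exclusion styles; this sketch shows both, with a hypothetical helper that extracts the excluded terms from either form:

```python
# Comma-separated "exclude" parameter vs. minus syntax inside the query,
# per the parameter documentation. The helper below is illustrative only.
comma_style = {
    "query": "desk MDF",
    "exclude": "chipboard,ЛДСП",
    "maxResults": 10,
}
minus_style = {"query": "desk MDF -chipboard -ЛДСП"}

def excluded_terms(args):
    """Collect exclusion terms from either style."""
    terms = [t for t in args.get("exclude", "").split(",") if t]
    terms += [w[1:] for w in args["query"].split() if w.startswith("-")]
    return terms
```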
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It accurately states this is a read/search operation with no destructive effects, and returns structured results. It does not mention auth, rate limits, or pagination, but for a simple search this is acceptable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first states the catalog scope and return fields, second provides a use case. No wasted words, front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately describes return content (prices, suppliers, specs). It specifies catalog size for credibility. A simple search tool is well-covered.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by clarifying the query accepts English or Russian, explains the 'exclude' parameter syntax (comma-separated, minus syntax), and notes 'category' is optional. This extends beyond schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as searching a specific catalog of 160K+ Russian products, explicitly states what it returns (matching products with prices, suppliers, specs), and provides a use case ('Russian equipment for export'). This distinguishes it from siblings like getProduct (single product) or listProducts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context ('Use for finding Russian equipment for export') but does not explicitly state when not to use or name alternatives. The purpose is clear enough to guide selection without detailed exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
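Before publishing, a quick local check of the file's shape can catch typos; this sketch only validates the structure shown above, not Glama's actual verification logic:

```python
import json

# Sanity-check a /.well-known/glama.json document before publishing.
# The email is the placeholder from the structure above.
doc = json.loads('''{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}''')

assert doc["$schema"] == "https://glama.ai/mcp/schemas/connector.json"
assert all("email" in m and "@" in m["email"] for m in doc["maintainers"])
```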
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.