Kapruka MCP
Server Details
Free public MCP server for Kapruka.com — Sri Lanka's largest e-commerce platform.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: kapruka/mcp
- GitHub Stars: 0
- Server Listing: Kapruka MCP Server
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.5/5 across 5 of 5 tools scored.
Each tool has a distinct, non-overlapping purpose: delivery checking, product detail retrieval, category listing, city listing, and product search. No two tools could be confused for one another.
All tool names follow the consistent pattern 'kapruka_verb_noun' in snake_case, making it predictable and easy to understand their function at a glance.
With 5 tools, the set is concise and focused on core e-commerce information retrieval (products, categories, delivery). While a few additional operations like reviews or recommendations could be added, the current count is well-scoped and not excessive.
The tools cover the essential lifecycle for a shopping assistant: search for products, get details, list categories, and check delivery feasibility. Missing capabilities like user cart or order placement are out of scope, so the set is nearly complete for its stated purpose.
Available Tools
5 tools

kapruka_check_delivery (A, Read-only)
Check whether Kapruka can deliver to a given city on a given date, and at what rate.
Returns the flat delivery rate (LKR), whether the requested date is available,
and — if not — the next available date plus reason. Kapruka delivers as a
single shipment per order at one flat rate regardless of item count.
If a `product_id` is supplied and the code matches a perishable family
(CAKE*, FLOWER*, COMBO*), an extra warning is added when the chosen
delivery date is more than 1 day out.
Args:
params (CheckDeliveryInput):
- city (str): Canonical city name (e.g. 'Colombo 03', 'Galle')
- delivery_date (Optional[str]): YYYY-MM-DD; defaults to today (LK time)
- product_id (Optional[str]): Optional, enables perishable warning
- response_format (str): 'markdown' (default) or 'json'
Returns:
str: Delivery feasibility + rate in the requested format.
JSON schema:
{
"city": str,
"now": str, # ISO timestamp, Sri Lanka time
"checked_date": str, # YYYY-MM-DD
"available": bool,
"rate": number, # flat LKR rate per order
"currency": "LKR",
"reason": str | null, # populated when available=false
"next_available_date": str|null, # populated when available=false
"perishable_warning": str | null # populated when product_id is perishable
}

| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
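The fallback logic described above (reason and next_available_date populated only when available is false, plus the perishable warning for CAKE*/FLOWER*/COMBO* IDs) can be sketched with invented data shaped like the documented schema. Nothing below contacts the real server; all values are illustrative.

```python
# Hypothetical input matching the documented CheckDeliveryInput fields.
params = {
    "city": "Colombo 03",
    "delivery_date": "2025-02-14",   # YYYY-MM-DD; omit to default to today (LK time)
    "product_id": "cake00ka002034",  # CAKE* family may trigger a perishable warning
    "response_format": "json",
}

# Invented response shaped like the documented JSON schema.
response = {
    "city": "Colombo 03",
    "now": "2025-02-12T10:30:00+05:30",
    "checked_date": "2025-02-14",
    "available": False,
    "rate": 450.0,
    "currency": "LKR",
    "reason": "No delivery slots on the requested date",
    "next_available_date": "2025-02-15",
    "perishable_warning": "Perishable item: delivery date is more than 1 day out",
}

# Per the schema, fall back to next_available_date only when available is false.
chosen_date = (
    response["checked_date"] if response["available"]
    else response["next_available_date"]
)
```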
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds significant behavioral details beyond annotations: flat rate per order, perishable warnings, default to today's date, and next available date logic. It does not contradict the readOnlyHint or openWorldHint annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with sections and front-loaded purpose. It includes a detailed output schema which, while useful, could be considered slightly verbose. Overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With annotations and output schema present, the description covers all key aspects: purpose, parameters, behavior (perishable logic, flat rate), defaults, and response format. No gaps identified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Although schema descriptions exist for parameters, the main description adds context for product_id (perishable warning) and delivery_date (default behavior), which enhances understanding. Given the 0% schema coverage signal, the description compensates adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool checks delivery feasibility and rate for a given city and date. It uses a specific verb ('Check') and resource ('delivery'), and distinguishes itself from sibling tools like kapruka_list_delivery_cities by focusing on checking rather than listing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear prerequisite ('use kapruka_list_delivery_cities to find a canonical city name'), but does not explicitly state when to use this tool versus alternatives. However, the tool's purpose is self-evident given its name and sibling context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
kapruka_get_product (A, Read-only, Idempotent)
Fetch full details for a single Kapruka product by its product ID.
Returns name, description, price (with optional currency conversion), stock status,
images, variants, shipping info, and a direct product URL.
Note: Some IDs starting with 'CATSYM' are category landing pages, not purchasable
products — this tool will flag those clearly.
Args:
params (GetProductInput):
- product_id (str): Kapruka product ID (e.g. 'cake00ka002034')
- currency (str): Price currency — LKR (default), USD, GBP, AUD, CAD, EUR
- type (Optional[str]): Optional type hint (e.g. 'specialgifts')
- response_format (str): 'markdown' (default) or 'json'
Returns:
str: Product details in the requested format.
JSON schema:
{
"id": str,
"name": str,
"description": str,
"summary": str,
"price": {"amount": float, "currency": str},
"compare_at_price": {"amount": float, "currency": str} | null,
"in_stock": bool,
"stock_level": str, # "low" | "medium" | "high"
"category": {"id": str, "name": str, "slug": str, "path": str},
"variants": [{"id": str, "name": str, "sku": str, "price": {...},
"in_stock": bool, "stock_level": str, "attributes": {...}}],
"images": [str], # list of full-resolution image URLs
"attributes": {"type": str, "subtype": str, "weight": str, "vendor": str},
"shipping": {"ships_from": str, "ships_internationally": bool, "restricted_countries": [str]},
"rating": null,
"url": str
}
Error: "Error: <message>" on failure.

| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
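The CATSYM caveat noted above can be handled with a small client-side guard before calling the tool. The helper name and the fallback behaviour are assumptions for illustration, not part of the server.

```python
# Hypothetical guard: per the tool docs, IDs starting with 'CATSYM' are
# category landing pages, not purchasable products.
def is_category_stub(product_id: str) -> bool:
    return product_id.upper().startswith("CATSYM")

params = {
    "product_id": "cake00ka002034",  # example ID from the tool docs
    "currency": "USD",               # LKR is the default
    "response_format": "json",
}

if is_category_stub(params["product_id"]):
    # A real agent might fall back to kapruka_search_products here.
    raise ValueError("CATSYM IDs are landing pages, not products")
```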
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description confirms the read-only nature (no mutations), warns about CATSYM IDs, and describes error messages. Annotations already indicate readOnlyHint, idempotentHint, and non-destructive behavior, so the description adds further context without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and well-structured with headings for purpose, notes, args, and returns. While it includes a comprehensive JSON schema, it remains focused and front-loaded with essential information. Slightly longer than necessary but justified by the depth of detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers everything needed: purpose, parameters, edge cases (CATSYM), return format (including full JSON schema), and error handling. No gaps remain, making it fully actionable for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite the schema description coverage being 0%, the tool description includes a full 'Args' section that explains each parameter with details beyond the schema, including examples and defaults. This compensates fully, making the parameter semantics very clear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Fetch full details for a single Kapruka product by its product ID', specifying the verb, resource, and scope. This distinguishes it from sibling tools that list categories or search products.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a note about CATSYM IDs not being purchasable, which helps avoid misuse. However, it does not explicitly state when to use this tool versus alternatives like kapruka_search_products or kapruka_list_categories.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
kapruka_list_categories (A, Read-only, Idempotent)
List top-level Kapruka product categories by name with browse URLs.
Returns category names (usable as the `category` filter on kapruka_search_products)
plus the public Kapruka.com URL for each category landing page — useful for shopping
agents that want to send users directly to a category to browse. Internal IDs and
product counts are not exposed. Results are cached for 30 minutes server-side.
Args:
params (ListCategoriesInput):
- depth (int): Sub-category levels to include, 1 or 2 (default 1)
- response_format (str): 'markdown' (default) or 'json'
Returns:
str: Category tree in the requested format.
JSON schema:
{
"categories": [
{
"name": str,
"url": str, # kapruka.com category landing page
"children": [{"name": str, "url": str, "children": [...]}]
}
]
}
Error: "Error: <message>" on failure.

| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
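A shopping agent consuming the documented category tree might flatten it into browseable paths. The walker below follows the schema shape above; the sample categories and URLs are invented, not real Kapruka data.

```python
# Illustrative walk over the documented {"name", "url", "children"} tree.
def flatten(categories, prefix=""):
    """Yield (path, url) pairs like ('Cakes > Birthday', url)."""
    for cat in categories:
        path = prefix + cat["name"]
        yield path, cat["url"]
        yield from flatten(cat.get("children", []), prefix=path + " > ")

sample = {
    "categories": [
        {
            "name": "Cakes",
            "url": "https://www.kapruka.com/cakes",  # invented URL
            "children": [
                {"name": "Birthday",
                 "url": "https://www.kapruka.com/cakes/birthday",  # invented URL
                 "children": []},
            ],
        },
    ]
}
paths = dict(flatten(sample["categories"]))
```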
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds that results are cached for 30 minutes server-side and that internal IDs and product counts are not exposed, providing useful behavioral detail beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is well-structured with a clear opening, then bullet-point-like details, followed by args and return format. While not extremely concise, every sentence adds value and the structure is easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema is provided (JSON schema for categories), the description additionally explains the error format and notes caching behavior. It covers the essential aspects for an AI agent to understand inputs, outputs, and side effects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description's Args section explains the two parameters: depth (1 or 2) and response_format (markdown or json), adding meaning beyond the schema which only has titles and defaults. It clearly links parameters to their behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool lists top-level Kapruka product categories by name with browse URLs. It specifies that category names are usable as a filter on kapruka_search_products, which distinguishes it from sibling tools like kapruka_search_products itself.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description explains the tool is useful for shopping agents wanting to send users directly to a category, but does not explicitly state when not to use it or compare to alternatives like kapruka_get_product. The context of being a listing tool is implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
kapruka_list_delivery_cities (A, Read-only, Idempotent)
List or search Sri Lankan cities Kapruka delivers to.
Use the `query` param to filter (e.g. "colombo" → all Colombo zones,
"anur" → Anuradhapura). Without a query you get the first 25 cities
alphabetically, which is rarely what an agent needs — pass a query.
Returns canonical city names (use these as the `city` argument to
kapruka_check_delivery) plus any common aliases / vernacular spellings.
Args:
params (ListDeliveryCitiesInput):
- query (Optional[str]): Partial match filter
- limit (int): Max results, 1–50 (default 25)
- response_format (str): 'markdown' (default) or 'json'
Returns:
str: Cities list in the requested format.
JSON schema:
{
"cities": [{"name": str, "aliases": [str]}],
"total_matched": int,
"showing": int
}

| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
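The canonical-name handshake with kapruka_check_delivery can be sketched as a small resolver over the documented response shape. The resolver and sample city data are illustrative assumptions, not server behavior.

```python
# Hypothetical resolver: map a user-typed city (possibly an alias or
# vernacular spelling) to the canonical name kapruka_check_delivery expects.
def resolve_city(user_input, cities):
    needle = user_input.strip().lower()
    for entry in cities:
        if needle == entry["name"].lower():
            return entry["name"]
        if needle in (alias.lower() for alias in entry["aliases"]):
            return entry["name"]
    return None

# Invented sample shaped like the documented JSON schema.
sample = {
    "cities": [{"name": "Colombo 03", "aliases": ["Kollupitiya"]}],
    "total_matched": 1,
    "showing": 1,
}
```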
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnly, idempotent, and openWorld. The description adds the default behavior when no query is given, the response formats, the limit constraints, and the fact that results include aliases. No contradiction; it supplements the annotations well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured: one-line summary, then usage insight, then explicit args and returns. Every sentence adds value, no fluff. Front-loaded with purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (3 params, simple output schema, strong annotations), the description is fully complete: covers usage, parameter semantics, return types, and cross-references sibling tool. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 0% schema description coverage, the description fully explains each parameter: query (partial match, case-insensitive), limit (1-50, default 25), response_format (markdown/json). This adds complete meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it lists or searches Sri Lankan cities for Kapruka delivery. It uses specific verb ('list or search') and resource ('Sri Lankan cities'), and distinguishes from sibling tools by noting the returned canonical names are inputs to kapruka_check_delivery.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use the query parameter and warns that omitting it yields rarely-needed first 25 cities. Also notes that canonical names should be passed to kapruka_check_delivery, providing clear when-to-use and when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
kapruka_search_products (A, Read-only, Idempotent)
Search for products on Kapruka.com by keyword, with optional category filter and pagination.
Returns a ranked list of matching products with prices, stock status, images, and URLs.
Supports cursor-based pagination — pass next_cursor from one response into the next call.
Pagination is capped at 3 pages per query to discourage catalog enumeration; for broader
discovery, refine the query or filter by category instead.
Queries must be at least 3 characters and contain specific terms — pure stopword queries
(e.g. "the", "a an") are rejected.
By default, category landing pages (CATSYM entries with price=0) are filtered out so results
contain only purchasable products. Set include_stubs=true to include them.
Args:
params (SearchProductsInput):
- q (str): Search query (e.g. 'birthday cake', 'roses', 'tea gift'). Min 3 chars.
- category (Optional[str]): Category filter (e.g. 'Birthday', 'Flowers')
- limit (int): Results per page, 1–50 (default 10)
- cursor (Optional[str]): Pagination cursor from previous response
- currency (str): LKR (default), USD, GBP, AUD, CAD, EUR
- min_price (Optional[float]): Min price (inclusive) in the requested currency
- max_price (Optional[float]): Max price (inclusive) in the requested currency
- in_stock_only (bool): Restrict to in-stock items (default false)
- sort (str): 'relevance' | 'price_asc' | 'price_desc' | 'newest' | 'bestseller'
- include_stubs (bool): Include category landing pages (default false)
- response_format (str): 'markdown' (default) or 'json'
Returns:
str: Search results in the requested format.
JSON schema:
{
"results": [
{
"id": str,
"name": str,
"summary": str,
"price": {"amount": float | null, "currency": str},
"compare_at_price": {"amount": float, "currency": str} | null,
"in_stock": bool,
"stock_level": str,
"image_url": str | null,
"category": {"id": str, "name": str, "slug": str},
"rating": null,
"ships_internationally": bool,
"url": str
}
],
"next_cursor": str | null, # null after page 3 even if upstream has more
"applied_filters": {"q": str, "limit": int, "in_stock_only": bool}
}
Error: "Error: <message>" or "No products found for '<query>'" on failure.

| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
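The documented 3-page cursor cap suggests a bounded pagination loop on the client side. In the sketch below, `fake_search` stands in for a real tools/call round trip and returns invented two-page data; only the loop shape reflects the documented behavior (next_cursor goes null after page 3 even if upstream has more).

```python
# Bounded cursor-based pagination mirroring the server's 3-page cap.
def collect_results(search, q, limit=10):
    results, cursor = [], None
    for _ in range(3):  # defensive cap matching the documented limit
        page = search({"q": q, "limit": limit, "cursor": cursor,
                       "response_format": "json"})
        results.extend(page["results"])
        cursor = page["next_cursor"]
        if cursor is None:
            break
    return results

# Stub returning two invented pages, for illustration only.
def fake_search(params):
    if params["cursor"] is None:
        return {"results": [{"id": "p1"}], "next_cursor": "page2"}
    return {"results": [{"id": "p2"}], "next_cursor": None}
```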
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, idempotentHint=true. Description adds behavioral details: cursor-based pagination with 3-page cap, stopword rejection, default filtering of stubs, error formats. Adds value without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured: one-line summary, then pagination and query constraints, then Args list. Slightly lengthy but each sentence serves a purpose; no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers all aspects: input parameters, pagination mechanics, query validation, output format with JSON schema, error handling. Output schema exists, so return values are clear. Complete for a complex search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description provides thorough explanation for each parameter in the Args section, adding context about pagination, currency options, price filters, and response format. Schema also has descriptions, so combined coverage is comprehensive.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches products on Kapruka.com by keyword, returning a ranked list with prices, stock status, images, and URLs. It is well-differentiated from siblings like kapruka_get_product (single product) and kapruka_list_categories (category list).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage constraints: pagination capped at 3 pages, queries must be at least 3 characters and not stopwords. Suggests refining the query or using category filter for broader discovery. Could explicitly mention alternative tools (e.g., kapruka_get_product for details), but still clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.