Glama

Makuri Showcase (CogniLedger)

Server Details

Public MCP server for Makuri, an EU-compliant AI tutoring platform for immigrant children.

Status: Healthy
Transport: Streamable HTTP
Repository: Cogniledger/cogniledger-mcp-makuri
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.2/5 across 8 of 8 tools scored.

Server Coherence (Grade: A)
Disambiguation 5/5

All 8 tools have clearly distinct purposes covering separate aspects of the platform: compliance, contact, info, pricing, safety, subjects, languages, and tech stack. No two tools overlap in intent.

Naming Consistency 5/5

All tools follow a uniform 'get_' prefix followed by a descriptive noun, making the pattern predictable and easy to navigate.

Tool Count 5/5

With 8 tools, the set is appropriately scoped for a showcase server that provides key information about the platform without being overwhelming or too sparse.

Completeness 4/5

The tool set covers major information categories (compliance, contact, platform info, pricing, safety, subjects, languages, tech stack). Minor gaps like FAQ or use cases exist but do not severely hinder the server's purpose.

Available Tools

8 tools
get_compliance_matrix (Grade: A)

Returns Makuri's current compliance posture across EU AI Act, GDPR, GDPR-K (children data), COPPA, and ISO 42001. Each entry shows current status (compliant, in_progress, not_applicable), evidence, and notes. Use when the user asks about regulatory compliance, AI Act classification, or data protection for children.

Parameters (JSON Schema)

regulation (optional): Optional filter to return a single regulation. When omitted, returns all five regulations in the matrix.
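The described behavior (return all five regulations, or one when the filter is set) can be sketched as a plain lookup function. Everything below is illustrative: the regulation keys, statuses, and placeholder evidence strings are assumptions, not Makuri's actual data.

```python
from typing import Optional

# Hypothetical data; statuses use the enum the description names:
# compliant, in_progress, not_applicable.
COMPLIANCE_MATRIX = {
    "eu_ai_act": {"status": "in_progress", "evidence": "...", "notes": "..."},
    "gdpr": {"status": "compliant", "evidence": "...", "notes": "..."},
    "gdpr_k": {"status": "compliant", "evidence": "...", "notes": "..."},
    "coppa": {"status": "not_applicable", "evidence": "...", "notes": "..."},
    "iso_42001": {"status": "in_progress", "evidence": "...", "notes": "..."},
}

def get_compliance_matrix(regulation: Optional[str] = None) -> dict:
    """Return one regulation's entry when `regulation` is given, else all five."""
    if regulation is None:
        return COMPLIANCE_MATRIX
    if regulation not in COMPLIANCE_MATRIX:
        raise ValueError(f"unknown regulation: {regulation}")
    return {regulation: COMPLIANCE_MATRIX[regulation]}
```

An enum on the schema side (as this server declares) lets the client reject bad filter values before the call ever reaches the server.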
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses that the tool returns a matrix with status, evidence, and notes, which sufficiently describes behavior for a read-only tool. No side effects are implied.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-crafted sentences: the first defines the function, the second provides usage guidance. No wasted words; information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description fully explains return structure: each entry includes status (with possible values), evidence, and notes. Covers all necessary context for a simple matrix retrieval tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with an enum and a description. The tool description adds value by explaining that omitting the parameter returns all five regulations, and that each entry contains status, evidence, and notes.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it returns Makuri's compliance posture across five specific regulations, with status, evidence, and notes. It distinguishes from sibling tools like get_safety_features or get_pricing_tiers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use when the user asks about regulatory compliance, AI Act classification, or data protection for children,' providing clear context. It gives no explicit when-not-to-use guidance or alternatives, but the sibling tools are clearly different.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_contact_info (Grade: A)

Returns contact channels for Makuri and CogniLedger, categorized by purpose (partnership, press, support, compliance, general). Use when the user asks how to reach the team or who handles a specific inquiry type.

Parameters (JSON Schema)

purpose (optional): Optional filter for inquiry purpose. When omitted, returns all contact channels.
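On the wire, an MCP client invokes a tool like this with a JSON-RPC 2.0 `tools/call` request. A minimal sketch of building that payload; the request id and the 'press' filter value are arbitrary examples, not taken from this server:

```python
import json

def build_tool_call(request_id: int, name: str, arguments: dict) -> str:
    """Serialize an MCP `tools/call` request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# e.g. ask only for press contacts:
request = build_tool_call(1, "get_contact_info", {"purpose": "press"})
```

Over Streamable HTTP, this message body is POSTed to the server's MCP endpoint; in practice an MCP SDK handles the framing for you.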
Behavior 2/5

No annotations are provided, so the description carries the full burden. It fails to mention any behavioral traits such as read-only nature, auth requirements, rate limits, or output size. Given it is a lookup tool, it likely has minimal side effects, but this is not stated.

Conciseness 5/5

Two sentences with no wasted words. The action and scope are front-loaded, followed by usage guidance. Ideal length for this simple tool.

Completeness 4/5

Given one optional parameter, no output schema, and no annotations, the description adequately explains what the tool returns and when to use it. It could specify the output structure, but for a simple channel list, this is sufficient.

Parameters 3/5

Schema coverage is 100% (the parameter's enum and optionality are described). The description adds the entities 'Makuri and CogniLedger' not present in the schema, which clarifies the scope. However, this is a minor addition.

Purpose 5/5

The description clearly states it returns contact channels for Makuri and CogniLedger, categorized by purpose (partnership, press, support, compliance, general). This specific verb+resource combination distinguishes it from sibling tools like get_compliance_matrix or get_pricing_tiers.

Usage Guidelines 4/5

Explicitly says 'Use when the user asks how to reach the team or who handles a specific inquiry type', providing clear context. While no alternatives are named, the sibling list suggests differentiation.

get_platform_info (Grade: A)

Returns general information about the Makuri platform, including mission, target users, founding details, and company information. Use this tool when the user asks 'what is Makuri', 'who made it', or wants a general overview.

Parameters (JSON Schema)

No parameters

Behavior 4/5

No annotations are provided, so the description carries the full burden. It clarifies that the tool returns information and lists the content, implying a read-only operation. While it omits details like authentication or caching, the tool is simple and low-risk, making this sufficient.

Conciseness 5/5

Two concise sentences: the first describes what the tool returns, the second provides explicit usage triggers. No fluff, front-loaded with purpose.

Completeness 5/5

For a no-parameter info tool with no output schema, the description fully covers what the agent needs: what data is returned and when to use it. Context is complete given the tool's simplicity.

Parameters 4/5

The tool has zero parameters, so the baseline is 4. The description adds no parameter info (unnecessary) but clarifies the returned data scope, which adds value beyond the empty schema.

Purpose 5/5

The description clearly states it returns general information about the Makuri platform, including mission, target users, founding details, and company info. It distinguishes itself from siblings like get_compliance_matrix and get_contact_info by being a general overview.

Usage Guidelines 4/5

The description explicitly says when to use: when the user asks 'what is Makuri', 'who made it', or wants a general overview. It does not explicitly mention alternatives or when not to use, but the sibling tools provide specific functions, so inference is straightforward.

get_pricing_tiers (Grade: A)

Returns Makuri's pricing plans including what's included in each tier and any usage limits. Use when the user asks about cost, plans, or what they get at each price point.

Parameters (JSON Schema)

No parameters

Behavior 3/5

The description implies a read-only operation but does not explicitly state read-only behavior, authorization needs, or any side effects. With no annotations, the description carries the full burden and is adequate but lacks depth.

Conciseness 5/5

The description is two sentences, front-loaded with purpose followed by usage context. Every sentence is meaningful, with no redundancy.

Completeness 4/5

Given the tool's simplicity (no parameters, no output schema), the description provides sufficient completeness by stating what the tool returns and when to use it. It could optionally describe the return format, but that is not necessary for a straightforward list.

Parameters 4/5

The input schema has no properties, so the baseline is 4. The description does not need to add parameter information since none exists, and it correctly omits unnecessary details.

Purpose 5/5

The description clearly states the verb 'Returns' and the resource 'Makuri's pricing plans', with specific details about inclusions and usage limits. It distinguishes itself from sibling tools like get_platform_info and get_tech_stack.

Usage Guidelines 4/5

The description explicitly says 'Use when the user asks about cost, plans, or what they get at each price point,' providing clear context for invocation. However, it does not mention when not to use this tool or suggest alternatives.

get_safety_features (Grade: A)

Returns information about safety features on Makuri, including age verification, content filtering, parental controls, and AI safety guardrails. Use when the user asks about child safety, content moderation, or how Makuri protects minors.

Parameters (JSON Schema)

No parameters

Behavior 4/5

The description accurately conveys the read-only nature of the tool. No annotations are provided, but the description itself is transparent. It does not disclose any potential side effects or authorization needs, which is acceptable for a simple informational tool.

Conciseness 5/5

Two sentences: the first states the purpose with examples, the second provides usage guidance. Front-loaded and efficient, with no wasted words.

Completeness 5/5

For a zero-parameter tool with no output schema, the description fully covers what the tool does and when to use it. No additional information is necessary.

Parameters 5/5

No parameters exist, so schema description coverage is effectively 100%. The description adds value by listing the categories of information returned, which is not present in the empty input schema.

Purpose 5/5

The description clearly states the tool returns information about safety features on Makuri, listing specific examples like age verification and content filtering. It distinguishes itself from sibling tools that cover different aspects (compliance, contact info, etc.).

Usage Guidelines 5/5

Explicitly states when to use: 'Use when the user asks about child safety, content moderation, or how Makuri protects minors.' This provides clear guidance and rules out the alternative tools.

get_subjects (Grade: A)

Returns the list of academic subjects Makuri teaches, grouped by grade level, with information about exam preparation coverage. Use when the user asks what Makuri teaches or about specific subjects.

Parameters (JSON Schema)

grade_level (optional): Optional grade-level filter (e.g. 'gimnaziu', 'liceu'). Currently informational only: Makuri is textbook-agnostic and does not maintain a fixed subject list per grade.
Behavior 2/5

Without annotations, the description carries the burden of behavioral transparency. It only describes the basic output and lacks disclosure of important nuances, such as the grade_level parameter being informational only or Makuri's textbook-agnostic nature (details present only in the schema). No side effects, permissions, or limitations are mentioned.

Conciseness 5/5

The description is extremely concise: two sentences that efficiently convey the core functionality and usage context. No redundant information, and it is front-loaded with the main action.

Completeness 4/5

The description adequately covers the tool's purpose, output nature, and usage context. However, it omits the nuance that the grade_level filter is currently informational only, which is present in the schema. For a simple retrieval tool with no output schema, it is reasonably complete.

Parameters 3/5

Schema description coverage is 100% for the single parameter, so the baseline score is 3. The tool description does not add any parameter semantics beyond what the schema already provides.

Purpose 5/5

The description clearly states the verb 'Returns' and the resource 'list of academic subjects', along with additional details like grouping by grade level and exam prep coverage. It provides a specific use case, distinguishing it from sibling tools that focus on other domains like compliance and pricing.

Usage Guidelines 4/5

The description explicitly states 'Use when the user asks what Makuri teaches or about specific subjects,' giving clear context for when to invoke the tool. However, it does not mention when not to use it or provide alternatives, though the sibling tools are distinct enough in purpose.

get_supported_languages (Grade: A)

Returns the list of languages supported by Makuri, with separate coverage details for user interface versus AI tutor interactions. Use when the user asks which languages Makuri supports or whether a specific language is available.

Parameters (JSON Schema)

locale (optional): Optional ISO 639-1 locale code (e.g. 'ro', 'uk', 'ar'). When provided, returns only that locale; otherwise returns all 14.
Behavior 3/5

With no annotations provided, the description must carry the full burden. It describes the output but does not explicitly state that the tool is read-only, has no side effects, or any other behavioral traits. The description is adequate but lacks explicit safety or cost disclosure.

Conciseness 5/5

The description is two sentences, front-loaded with the action and return value, and every word earns its place. No fluff or repetition.

Completeness 4/5

Given the tool has one optional parameter and no output schema, the description provides sufficient context about what the tool does and when to use it. It does not detail the output structure, but that is not critical for a simple list.

Parameters 3/5

The schema already provides 100% coverage of the optional 'locale' parameter with a clear description. The tool description adds no additional meaning beyond what the schema says, so the baseline score of 3 is appropriate.

Purpose 5/5

The description clearly states the tool returns the list of supported languages with separate UI and AI tutor coverage details, and explicitly tells when to use it. It is distinct from sibling tools, which cover different topics.

Usage Guidelines 4/5

The description gives explicit usage context ('Use when the user asks which languages Makuri supports or whether a specific language is available'), but does not discuss when not to use it or alternatives. For a straightforward getter, this is clear enough.

get_tech_stack (Grade: A)

Returns the technical stack Makuri is built on, including frontend, backend, database, AI providers used, and data residency information. Use when the user asks how Makuri is built or which AI models it uses.

Parameters (JSON Schema)

No parameters

Behavior 3/5

No annotations are provided, so the description carries the full burden. It indicates the tool returns information but does not explicitly state it is read-only, idempotent, or free of side effects. The content implies a safe retrieval operation but lacks the explicit behavioral disclosure that would fully inform an AI agent.

Conciseness 5/5

The description is extremely concise: two sentences conveying all essential information. It is front-loaded with the action and resource, followed by usage guidance. No unnecessary words.

Completeness 4/5

For a tool with no parameters and no output schema, the description adequately lists the categories of information returned (frontend, backend, database, AI providers, data residency). It is sufficient for the tool's purpose, though it could optionally mention whether the returned data is static or dynamic.

Parameters 4/5

The input schema has zero parameters, so the baseline score is 4. The description does not need to add parameter information since none exists.

Purpose 5/5

The description clearly states the tool returns the technical stack of Makuri, listing specific components (frontend, backend, database, AI providers, data residency). It uses the specific verb 'Returns' and identifies the resource precisely. Given siblings like get_platform_info, this tool is distinctly about the underlying tech stack.

Usage Guidelines 4/5

The description explicitly states when to use the tool: 'Use when the user asks how Makuri is built or which AI models it uses.' This provides clear context. However, it does not mention any exclusions or alternative tools, which would help differentiate it from get_platform_info.
