Server Details

Carnatic flute KSGMS syllabus — search lessons, fetch content, ask a guru-voice RAG bot.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 3.8/5 across 7 of 7 tools scored.

Server Coherence (Grade: A)

Disambiguation: 5/5

Each tool targets a distinct action: asking a chatbot, fetching specific classes/lessons, listing them, searching, and logging practice. No two tools overlap in purpose.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun snake_case pattern (ask_venuvidya, get_class, get_lesson, list_classes, list_lessons, log_practice_session, search_syllabus), making them predictable and easy to navigate.

Tool Count: 5/5

With 7 tools, the server is well-scoped for a learning platform: each tool serves a clear function, and the set is neither too sparse nor excessive.

Completeness: 4/5

The server covers core viewing, listing, searching, and practice logging workflows. Minor gaps exist (e.g., no update/delete for practice sessions), but the main use cases are supported.

Available Tools (7 tools)
ask_venuvidya (Grade: A)

Ask VenuVidya — the in-app RAG chatbot. Returns a grounded answer + citations.

Args:
    question: Student question (2-1500 chars).
    level: Optional level filter for retrieval ('junior' | 'senior' | 'vidwat').
    language: 'en', 'kn' or 'auto'.
Parameters (JSON Schema):
    question (required)
    level (optional)
    language (optional, default: auto)
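
As an illustration, a minimal sketch of calling this tool over Streamable HTTP with the official MCP Python SDK (the 'mcp' package). The server URL and all argument values are placeholders, not values taken from this page:

    import asyncio

    from mcp import ClientSession
    from mcp.client.streamable_http import streamablehttp_client

    SERVER_URL = "https://example.com/mcp"  # placeholder; use this server's real URL

    async def main():
        # Open the Streamable HTTP transport, then an MCP session over it.
        async with streamablehttp_client(SERVER_URL) as (read, write, _):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # The question and filters below are illustrative values only.
                result = await session.call_tool(
                    "ask_venuvidya",
                    arguments={
                        "question": "What is the arohana of Mayamalavagowla?",
                        "level": "junior",  # optional: 'junior' | 'senior' | 'vidwat'
                        "language": "en",   # optional: 'en', 'kn' or 'auto'
                    },
                )
                # Print any text content blocks from the grounded answer.
                for item in result.content:
                    text = getattr(item, "text", None)
                    if text:
                        print(text)

    asyncio.run(main())

The shorter snippets under the remaining tools assume this same 'session' object.
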
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full responsibility. It mentions returning a grounded answer with citations, but does not disclose rate limits, authentication needs, or error behavior beyond parameter constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences plus a clear parameter list. The purpose is front-loaded, and every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the core function and parameters adequately for a simple RAG tool. However, since no output schema is provided, more detail on the citation format or return types would enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by explaining each parameter: question length constraint (2-1500 chars), level filter options, and language choices ('en','kn','auto'), which are not documented in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool is an in-app RAG chatbot that returns grounded answers with citations. It identifies the specific resource (VenuVidya) and distinguishes it from sibling tools like get_class or list_lessons.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the context of sibling tools implies this is for student questions, no explicit guidance is given on when to use this tool versus alternatives like search_syllabus. There are no when-not-to-use or prerequisite instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_class (Grade: A)

Fetch a single class journal entry with its items and practice log.

Parameters (JSON Schema):
    class_id (required)
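
For illustration, a sketch reusing the 'session' from the ask_venuvidya example above; the class_id value is invented, since the page does not document the id format:

    # Hypothetical helper: fetch one class journal entry. Real ids would
    # come from a prior list_classes call; "class-0001" is a placeholder.
    async def fetch_class(session, class_id="class-0001"):
        result = await session.call_tool("get_class", arguments={"class_id": class_id})
        return result.content
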
Behavior: 2/5

No annotations are provided, so the description bears full responsibility. It states 'Fetch', suggesting read-only, but does not disclose any behavioral traits such as side effects, permissions, or response details.

Conciseness: 5/5

The description is a single sentence, concise and to the point, with no superfluous information.

Completeness: 4/5

For a simple fetch operation with one parameter and no output schema, the description provides key details (items and practice log). It is complete enough for the tool's complexity, though return type info is missing.

Parameters: 2/5

Schema description coverage is 0%, and the description does not add meaning beyond the parameter name 'class_id'. It does not explain format, constraints, or how it relates to the returned data.

Purpose: 5/5

The description clearly specifies the action ('Fetch'), the resource ('single class journal entry'), and includes components ('items and practice log'), which distinguishes it from sibling tools like 'list_classes' and 'get_lesson'.

Usage Guidelines: 4/5

The description implies usage for retrieving a specific class entry, contrasting with 'list_classes' for listing. However, it does not explicitly state when not to use it or suggest alternatives, leaving it implicit.

get_lesson (Grade: A)

Fetch the full study content of a single lesson.

Args:
    lesson_id: Lesson identifier, e.g. 'junior-theory-1-4'.
Parameters (JSON Schema):
    lesson_id (required)
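
A sketch under the same assumptions as the first example, using the id format the tool description itself documents:

    # Fetch the full study content of a single lesson; 'junior-theory-1-4'
    # is the example id given in the tool description.
    async def fetch_lesson(session, lesson_id="junior-theory-1-4"):
        result = await session.call_tool("get_lesson", arguments={"lesson_id": lesson_id})
        return result.content
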
Behavior: 3/5

The description indicates a read operation ('Fetch') and promises 'full study content', which is useful. However, with no annotations and no mention of side effects, auth requirements, or rate limits, the behavioral disclosure is minimal.

Conciseness: 5/5

The description is extremely concise with no redundant information. Its two sentences efficiently convey the tool's purpose and parameter usage.

Completeness: 3/5

Without an output schema, the description states it returns 'full study content' but omits details on structure, error handling, or pagination. Given the tool's simplicity (one param), this is minimally adequate but could be improved by specifying what constitutes 'full study content'.

Parameters: 4/5

The schema has 0% coverage (no parameter descriptions), but the tool description adds a meaningful example ('junior-theory-1-4') and clarifies the identifier format, which enhances understanding beyond the schema's basic type declaration.

Purpose: 5/5

The description clearly states the verb 'fetch' and resource 'full study content of a single lesson', distinguishing it from sibling tools like list_lessons (listing) and get_class (class overview). The purpose is specific and unambiguous.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives (e.g., list_lessons for browsing or ask_venuvidya for Q&A). The description implies usage for fetching specific lesson content but lacks explicit context or exclusions.

list_classes (Grade: A)

List recent class journal entries (date descending).

Args:
    level: optional level filter.
    limit: max number of classes (1-100, default 20).
Parameters (JSON Schema):
    level (optional)
    limit (optional)
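
A sketch reusing the session from the first example; both filters are optional and the values shown are illustrative:

    # List recent class journal entries, newest first. 'limit' must be
    # 1-100 (default 20) per the description; 'level' is an optional filter.
    async def recent_classes(session, level=None, limit=20):
        args = {"limit": limit}
        if level is not None:
            args["level"] = level  # e.g. 'junior'
        result = await session.call_tool("list_classes", arguments=args)
        return result.content
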
Behavior: 2/5

No annotations are provided, so the description carries the full burden. It only mentions listing entries in descending order, and does not disclose behavioral traits like read-only nature, authentication needs, or potential side effects.

Conciseness: 5/5

The description is extremely concise, with two lines of brief parameter explanations. No wasted words, and it is front-loaded with purpose.

Completeness: 3/5

For a simple list tool with no output schema and few parameters, the description covers basic use. However, it omits details like return format, pagination, or any side effects, which would be helpful for completeness.

Parameters: 4/5

Schema coverage is 0%, but the description adds meaning for both parameters: 'level' as an optional filter and 'limit' with its range and default. This compensates for missing schema descriptions, though it is not highly detailed.

Purpose: 5/5

The description clearly states the verb 'list' and the resource 'class journal entries' with sorting order. It distinguishes the tool from siblings like 'get_class' (singular) and 'list_lessons' (different resource).

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives, and no prerequisites or context provided. The description only states what the tool does, not when to use it.

list_lessons (Grade: A)

List lesson metadata. Filter by level/category.

Args:
    level: 'junior' | 'senior' | 'vidwat'.
    category: e.g. 'theory-1', 'practical-1', etc.
Parameters (JSON Schema):
    level (optional)
    category (optional)
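
A sketch under the same assumptions; the filter values follow the examples in the description:

    # List lesson metadata, optionally filtered by level and category.
    async def lessons_for(session, level="junior", category="theory-1"):
        result = await session.call_tool(
            "list_lessons",
            arguments={"level": level, "category": category},
        )
        return result.content
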
Behavior: 2/5

No annotations are provided, so the description should detail behavior. It does not mention read-only nature, pagination, or what happens with null filters. The description is terse.

Conciseness: 5/5

Two concise sentences plus docstring-style parameter explanations. No redundant information.

Completeness: 4/5

For a simple list tool with optional filters, it covers the main purpose. An explicit output format description is missing, but that is acceptable given the tool's simplicity.

Parameters: 4/5

Schema coverage is 0%, but the description adds meaning with example values for level and category. However, it doesn't clarify that the parameters are optional or what the default behavior is.

Purpose: 5/5

The description clearly states 'List lesson metadata', which specifies the action and resource. It distinguishes the tool from siblings like 'get_lesson' (detail) and 'list_classes' (different resource).

Usage Guidelines: 4/5

Usage for listing with optional filters is implied. There is no explicit when-not-to-use guidance or list of alternatives, but sibling names provide context.

log_practice_session (Grade: A)

Log a practice session against a class item. Requires admin_pin.

Args:
    class_id: Parent class id.
    item_id: Item id inside the class (kriti/varna/etc.).
    practiced_on: Date in YYYY-MM-DD.
    duration_min: Duration in minutes.
    notes: Free-text notes.
    rating: Self-rating 1-5.
    admin_pin: Admin PIN required for writes.
Parameters (JSON Schema):
    class_id (required)
    item_id (required)
    practiced_on (required)
    duration_min (optional)
    rating (optional)
    notes (optional)
    admin_pin (optional)
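
Since this is the server's only write tool, an example is worth sketching; every argument value below is a made-up placeholder, and the admin_pin must come from whoever operates the server:

    # Log a practice session against a class item. This is a write and
    # requires admin_pin; all ids and values here are hypothetical.
    async def log_practice(session, admin_pin):
        result = await session.call_tool(
            "log_practice_session",
            arguments={
                "class_id": "class-0001",      # placeholder parent class id
                "item_id": "item-01",          # placeholder item id (kriti/varna/etc.)
                "practiced_on": "2025-01-15",  # date in YYYY-MM-DD
                "duration_min": 30,            # duration in minutes
                "rating": 4,                   # self-rating 1-5
                "notes": "Worked on gamakas in the pallavi.",
                "admin_pin": admin_pin,
            },
        )
        return result.content
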
Behavior: 3/5

With no annotations, the description must cover behavioral traits. It notes the admin_pin requirement for writes, but does not describe side effects, success/error behavior, or return values. The behavioral disclosure is minimal.

Conciseness: 4/5

The description is concise with a clear two-part structure: a one-line purpose followed by a parameter list. No unnecessary text, though the Args list could be integrated more naturally.

Completeness: 3/5

The tool has no output schema and no annotations. The description covers what the tool does and the required admin_pin, but lacks details on return values, error conditions, or idempotency. It is adequate but not comprehensive.

Parameters: 4/5

Schema description coverage is 0%, so the description's Args list is essential. It explains each parameter's purpose (e.g., 'Date in YYYY-MM-DD', 'Self-rating 1-5'), adding meaning beyond the schema titles and types.

Purpose: 5/5

The description explicitly states the action ('Log a practice session against a class item') with a clear verb and resource. It is distinct from sibling tools, which are mostly read-only (get_, list_) or different actions (ask_venuvidya, search_syllabus).

Usage Guidelines: 2/5

The description mentions the admin_pin requirement but provides no guidance on when to use this tool versus alternatives. No exclusion criteria or comparisons to sibling tools are given.

search_syllabus (Grade: A)

Keyword (BM25) search across all Carnatic flute lessons.

Args:
    query: Natural-language question or keywords (English or Kannada).
    k: Number of top results to return (1-10, default 4).
    level: Optional filter — 'junior', 'senior' or 'vidwat'.
Parameters (JSON Schema):
    query (required)
    k (optional)
    level (optional)
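
A final sketch under the same assumptions; the query text is illustrative:

    # BM25 keyword search across all lessons; 'k' must be 1-10 (default 4)
    # and 'level' is an optional filter per the description.
    async def search(session, query="varnam in Mohanam", k=4, level=None):
        args = {"query": query, "k": k}
        if level is not None:
            args["level"] = level  # 'junior', 'senior' or 'vidwat'
        result = await session.call_tool("search_syllabus", arguments=args)
        return result.content
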
Behavior: 4/5

No annotations are provided, so the description carries the full burden. It discloses a read-only operation (BM25 search) with no side effects. It could mention idempotency or permission requirements, but the implicit safety is acceptable.

Conciseness: 5/5

Extremely concise: a one-line purpose, then bulleted args. No wasted words, front-loaded with an algorithm hint. Ideal for quick parsing.

Completeness: 4/5

Covers search scope, parameter constraints, and optional filters. It lacks a description of the return format, but no output schema is provided. For a simple search tool, this is largely sufficient.

Parameters: 5/5

Schema coverage is 0%, so the description compensates fully. It defines query as natural-language English or Kannada, k as a 1-10 range with a default, and level values 'junior', 'senior', 'vidwat'. It adds meaning beyond the schema types.

Purpose: 5/5

The description clearly states 'Keyword (BM25) search across all Carnatic flute lessons', specifying the verb (search) and resource (flute lesson syllabus). It distinguishes the tool from siblings like get_lesson (retrieval) and ask_venuvidya (Q&A).

Usage Guidelines: 3/5

Usage for searching lessons is implied, but the description does not explicitly contrast with siblings or state when not to use the tool. For example, ask_venuvidya might handle conversational queries better, but no guidance is given.
