Glama

intake_datasource_schema

Inspect the datasource declared in a run manifest and persist a schema summary, using an optional preferred sheet name.

Instructions

Inspect the datasource declared in the run manifest and persist schema_summary.

Input Schema

Name            | Required | Description | Default
run_id          | Yes      |             |
preferred_sheet | No       |             |
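The table above lists only field names and whether they are required; a plausible reconstruction of the input schema, assuming both parameters are plain strings (the types are not shown on this page), might look like:

```json
{
  "type": "object",
  "properties": {
    "run_id": { "type": "string" },
    "preferred_sheet": { "type": "string" }
  },
  "required": ["run_id"]
}
```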

Output Schema

Name   | Required | Description | Default
result | Yes      |             |
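The output table discloses only that a required `result` field is returned; a minimal sketch of what the output schema could declare (the inner type is an assumption, since the page does not show it):

```json
{
  "type": "object",
  "properties": {
    "result": { "type": "object" }
  },
  "required": ["result"]
}
```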
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It states that it persists schema_summary but does not clarify what that entails (e.g., whether it modifies state, requires specific permissions, or has side effects). The verb 'persist' suggests mutation, but no confirmation or details are given.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
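Since the verb "persist" suggests mutation, one way the server could disclose this is through MCP tool annotations; a sketch using the hint fields from the MCP tool-annotations spec (the values here are assumptions about this tool's behavior, not confirmed by the page):

```json
{
  "annotations": {
    "readOnlyHint": false,
    "destructiveHint": false,
    "idempotentHint": true,
    "openWorldHint": false
  }
}
```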

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently communicates the core action. It is front-loaded with the key verb and resource, but could be slightly more structured without adding length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema and is part of a complex authoring workflow, the description is incomplete. It does not explain what 'schema_summary' contains, how it relates to the run, or what happens if the datasource is missing. Sibling tools like 'inspect_target_schema' and 'describe_capability' provide similar functionality, but no distinction is made.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description should compensate. The description mentions 'datasource declared in the run manifest', which implies 'run_id' is the run identifier, but 'preferred_sheet' is not explained. With 0% coverage and only 2 parameters, the description adds minimal semantics beyond the schema field titles.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
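One way to raise schema description coverage above 0% is to document each field inline in the schema itself; an illustrative sketch (the wording and the default-sheet behavior are assumptions, not taken from the actual server):

```json
{
  "type": "object",
  "properties": {
    "run_id": {
      "type": "string",
      "description": "Identifier of the run whose manifest declares the datasource to inspect."
    },
    "preferred_sheet": {
      "type": "string",
      "description": "Optional sheet name to prefer when the datasource is a multi-sheet workbook."
    }
  },
  "required": ["run_id"]
}
```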

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Inspect', 'persist') and clearly identifies the resource ('datasource declared in the run manifest') and outcome ('persist schema_summary'). It distinguishes itself from sibling tools like 'inspect_target_schema' by focusing on the datasource from the run manifest rather than a target schema.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this tool versus alternatives such as 'inspect_target_schema' or 'describe_capability'. There is no mention of prerequisites (e.g., must have an active run) or scenarios where this tool should be avoided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
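A description that addressed this dimension would embed the guidance the review finds missing; an illustrative rewrite (hypothetical text, not the server's actual description):

```json
{
  "description": "Inspect the datasource declared in the run manifest and persist schema_summary. Requires an active run; use inspect_target_schema when you need the mapping target's schema instead. Pass preferred_sheet to select a specific workbook sheet."
}
```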

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/imgwho/cwtwb'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.