get_study_results

Retrieve structured user interview analysis with themes and verbatim quotes to understand participant feedback and insights.

Instructions

Returns analysis results. When presenting results, always quote specific participant responses verbatim using the quotes field in each theme.

Input Schema

| Name     | Required | Description | Default   |
| -------- | -------- | ----------- | --------- |
| study_id | Yes      |             |           |
| format   | No       |             | "summary" |
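Based on the schema, a minimal sketch of valid arguments for this tool (the UUID value is a made-up placeholder, not a real study):

```typescript
// Hypothetical arguments for get_study_results; the study_id value is invented.
const args = {
  study_id: "3f2b1c4e-8a9d-4f0e-b1c2-d3e4f5a6b7c8", // required: the study's UUID
  format: "full", // optional: "summary" (the default) or "full"
};
```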

Implementation Reference

  • The implementation of the `get_study_results` tool, including schema definition and handler logic in `src/index.ts`.
    server.tool(
      "get_study_results",
      "Returns analysis results. When presenting results, always quote specific participant responses verbatim using the quotes field in each theme.",
      {
        study_id: z.string().uuid(),
        format: z.enum(["summary", "full"]).optional(),
      },
      async (input) => {
        const format = input.format ?? "summary";
        const payload = await callUsercallApi(
          `/api/v1/agent/studies/${input.study_id}/results?format=${format}`,
        );
        return result(
          appendNote(
            payload,
            "When presenting these results, include verbatim participant quotes from each theme's quotes array. Do not paraphrase — show the actual words.",
          ),
        );
      },
    );
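To invoke the handler above, an MCP client sends a standard `tools/call` request. A hedged sketch of that message (the `id` and `study_id` values are placeholders):

```typescript
// Sketch of the JSON-RPC tools/call message an agent would send to this tool.
// All literal values here are hypothetical placeholders.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_study_results",
    arguments: {
      study_id: "3f2b1c4e-8a9d-4f0e-b1c2-d3e4f5a6b7c8",
      format: "summary",
    },
  },
};
```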
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It partially succeeds by revealing that the result structure contains themes with a 'quotes' field, hinting at a nested qualitative data format. However, it omits critical operational details: whether this is read-only (implied by 'Returns' but not confirmed), caching behavior, result size limits, or latency characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
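The missing disclosure could be supplied via MCP tool annotations. A sketch of the hints this tool could plausibly declare — every value is an inference from the handler's GET-style fetch, not confirmed server behavior:

```typescript
// Hypothetical MCP ToolAnnotations for get_study_results.
// Each value is an assumption drawn from the handler code, not a fact.
const annotations = {
  readOnlyHint: true,     // only fetches results; mutates nothing
  destructiveHint: false, // no data is altered or deleted
  idempotentHint: true,   // repeated calls should return the same results
  openWorldHint: true,    // reaches out to the external Usercall API
};
```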

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief (two sentences), but awkwardly mixes concerns: the first states functionality tersely ('Returns analysis results'), while the second jumps to post-invocation presentation guidelines. This structural confusion places output formatting instructions in a field meant for capability description, creating a minor organizational misfit despite brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description partially compensates by describing the result contents (themes containing quote fields), which helps the agent understand what data structure to expect. However, with zero annotations and zero parameter schema coverage, significant gaps remain for a complete invocation context, particularly regarding the optional 'format' parameter's impact on result granularity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description text does nothing to compensate. Neither 'study_id' (despite being a required UUID) nor 'format' (despite having enum values 'summary'/'full' with unclear semantic differences) is mentioned in the text. The agent must rely solely on parameter names, which is insufficient for distinguishing the 'format' options.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
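One way to close this gap is to publish descriptions in the input schema itself. A hedged sketch of what the tool's JSON Schema could look like with coverage added — the description wording, and what 'full' actually includes, are assumptions:

```json
{
  "type": "object",
  "properties": {
    "study_id": {
      "type": "string",
      "format": "uuid",
      "description": "UUID of the study whose analysis results to retrieve."
    },
    "format": {
      "type": "string",
      "enum": ["summary", "full"],
      "default": "summary",
      "description": "Result granularity: 'summary' for condensed themes, 'full' for the complete analysis."
    }
  },
  "required": ["study_id"]
}
```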

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states it 'Returns analysis results' which is vague, though the mention of 'participant responses', 'themes', and 'quotes field' provides domain context about qualitative research data. However, it fails to distinguish clearly from sibling 'get_study_status' (which likely returns metadata vs. actual results), leaving ambiguity about which tool retrieves substantive data versus operational status.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance is provided on when to use this tool versus alternatives like 'get_study_status' or 'create_study'. The second sentence provides presentation instructions ('always quote specific participant responses verbatim') rather than invocation guidelines, failing to clarify prerequisites or selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
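Taken together, the gaps above could be addressed in the description string itself. A hypothetical rewrite — the wording, and the claims about sibling tools, are assumptions drawn from this review, not the server's actual text:

```typescript
// Hypothetical improved tool description; every claim in it is an assumption
// based on this review, not confirmed server behavior.
const improvedDescription =
  "Read-only: fetches completed qualitative analysis for a study as themes with " +
  "verbatim participant quotes. Requires a study UUID; pass format='full' for the " +
  "complete analysis (default 'summary'). Use get_study_status to check progress " +
  "before calling; use create_study to start a new study.";
```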
