ResearchTwin (martinfrasch)

get_datasets

Retrieve a researcher's datasets with QIC scores, DOIs, download counts, and FAIR-based quality assessments to evaluate research impact.

Instructions

Get a researcher's datasets with QIC (Quality x Impact x Collaboration) scores.

Args:
    slug: Researcher identifier.

Returns Figshare datasets with DOIs, download/view counts, and QIC scores computed using FAIR-based quality assessment.

Input Schema

Name     Required   Description   Default
slug     Yes        —             —

Output Schema

Name     Required   Description   Default
result   Yes        —             —
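
For illustration, here is a hypothetical tools/call exchange for this tool, written as Python literals. The slug value is invented, and the response shape assumes FastMCP's convention of wrapping a plain-string return value under a "result" key, which is what the output schema above suggests.

    # Hypothetical tools/call request an MCP agent would send to invoke this
    # tool. The slug value "jane-doe" is invented for illustration.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "get_datasets",
            "arguments": {"slug": "jane-doe"},
        },
    }

    # Assumed shape of the structured reply: FastMCP wraps a plain-string
    # return value under a "result" key, matching the output schema above, e.g.:
    # {"result": "**2 datasets for jane-doe:**\n- **Example dataset** (QIC: 0.8, downloads: 12)"}

In FastMCP, the reply's ordinary text content typically carries the same formatted string as the structured "result" value.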

Implementation Reference

  • The get_datasets tool handler, which fetches a researcher's datasets with QIC scores from the ResearchTwin API endpoint /api/researcher/{slug}/datasets, formats each item with its title, QIC score, and download count, and returns a single formatted string.
    @mcp.tool(annotations=ToolAnnotations(title="Get Datasets", read_only_hint=True))
    async def get_datasets(slug: str) -> str:
        """Get a researcher's datasets with QIC (Quality x Impact x Collaboration) scores.
    
        Args:
            slug: Researcher identifier.
    
        Returns Figshare datasets with DOIs, download/view counts, and QIC scores
        computed using FAIR-based quality assessment.
        """
        data = await _get(f"/api/researcher/{slug}/datasets")
        items = data.get("items", [])
        if not items:
            return f"No datasets found for {slug}."
    
        lines = []
        for ds in items:
            qic = ds.get("qic_score", 0)
            lines.append(f"- **{ds['title']}** (QIC: {qic}, downloads: {ds.get('downloads', 0)})")
    
        return f"**{data.get('total', len(items))} datasets for {slug}:**\n" + "\n".join(lines)
  • Tool registration via the @mcp.tool() decorator, with ToolAnnotations specifying the title 'Get Datasets' and read_only_hint=True. This registers the get_datasets function as an MCP tool with the FastMCP framework.
    @mcp.tool(annotations=ToolAnnotations(title="Get Datasets", read_only_hint=True))
  • The _get helper function that makes async HTTP GET requests to the ResearchTwin API. It is used by get_datasets and the other tools to communicate with the backend API endpoints; the module-level names it relies on (httpx, BASE_URL, TIMEOUT) are sketched after this list.
    async def _get(path: str, params: dict | None = None) -> dict:
        """Make a GET request to the ResearchTwin API."""
        async with httpx.AsyncClient(timeout=TIMEOUT) as client:
            resp = await client.get(f"{BASE_URL}{path}", params=params)
            resp.raise_for_status()
            return resp.json()
  • Function signature and docstring defining the input schema for the get_datasets tool. It accepts a 'slug' parameter (researcher identifier) and documents the expected return format, including Figshare datasets with DOIs, download/view counts, and QIC scores. The docstring is identical to the one in the first reference above, so only the signature is repeated here:
    async def get_datasets(slug: str) -> str:
        """Get a researcher's datasets with QIC (Quality x Impact x Collaboration) scores.
    
        Args:
            slug: Researcher identifier.
    
        Returns Figshare datasets with DOIs, download/view counts, and QIC scores
        computed using FAIR-based quality assessment.
        """
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations only provide a title, so the description carries the full burden. It describes the return data (Figshare datasets with DOIs, counts, and QIC scores) and mentions the QIC computation method, which adds useful behavioral context. However, it doesn't disclose potential limitations such as rate limits, authentication needs, or pagination behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear purpose statement followed by Args and Returns sections. It's appropriately sized with no redundant information. The only minor improvement would be integrating the sections more seamlessly, but overall it's efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (a single parameter, and an output schema is provided), the description is reasonably complete. It explains the purpose, the parameter's meaning, and the return data, and the output schema covers the remaining return-value details. It could benefit from more behavioral context, but it covers the essentials adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage for the single parameter 'slug', the description compensates by explaining it as a 'Researcher identifier.' This adds meaningful semantics beyond the schema's generic 'Slug' title. However, it doesn't provide format examples or constraints, leaving some ambiguity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get a researcher's datasets') and resource ('datasets with QIC scores'), distinguishing it from siblings like get_papers or get_repos. It explicitly mentions the QIC scoring methodology, which provides additional specificity beyond a generic dataset retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying it retrieves datasets for a researcher, but it doesn't explicitly state when to use this tool versus alternatives like get_papers or list_researchers. No exclusions or prerequisites are mentioned, leaving the agent to infer appropriate scenarios based on the tool's purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
