martinfrasch

ResearchTwin

get_context

Retrieve comprehensive research context for a researcher by identifier, including S-Index metrics, paper impact, data source connections, and quality scores from multiple academic platforms.

Instructions

Get comprehensive research context for a researcher including all data source metrics.

Args: slug: Researcher identifier (e.g. 'martin-frasch').

Returns S-Index, paper impact, source connection status (Semantic Scholar, Google Scholar, GitHub, Figshare), dataset QIC scores, and repo QIC scores. More detailed than get_profile — use this when you need the full picture.
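The tool returns JSON. An illustrative response shape, assembled from the fields named above; every value below is a placeholder and the display name is hypothetical:

```python
# Illustrative get_context response shape; all values are placeholders.
context = {
    "researcher_slug": "martin-frasch",
    "display_name": "Martin Frasch",  # hypothetical display name
    "s_index": 0.0,
    "paper_impact": 0.0,
    "summary": {},
    "sources": {
        "semantic_scholar": {"status": "connected", "paper_count": 0,
                             "citation_count": 0, "h_index": 0},
        "google_scholar": {"status": "connected", "i10_index": 0},
        "github": {"status": "connected", "total_repos": 0, "total_stars": 0},
        "figshare": {"status": "error", "error": "Figshare unavailable"},
    },
    "dataset_scores": [],
    "repo_scores": [],
}

# Each of the four sources reports its own connection status independently.
assert set(context["sources"]) == {"semantic_scholar", "google_scholar",
                                   "github", "figshare"}
```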

Input Schema

Name    Required    Description    Default
slug    Yes

Output Schema

Name    Required    Description    Default
result  Yes

Implementation Reference

  • MCP tool handler for 'get_context'. Decorated with @mcp.tool(), this async function takes a researcher slug, calls the backend API endpoint /api/context/{slug}, and returns comprehensive research context as JSON including S-Index, paper impact, source connection statuses, and QIC scores.
    @mcp.tool(annotations=ToolAnnotations(title="Get Researcher Context", read_only_hint=True))
    async def get_context(slug: str) -> str:
        """Get comprehensive research context for a researcher including all data source metrics.
    
        Args:
            slug: Researcher identifier (e.g. 'martin-frasch').
    
        Returns S-Index, paper impact, source connection status (Semantic Scholar,
        Google Scholar, GitHub, Figshare), dataset QIC scores, and repo QIC scores.
        More detailed than get_profile — use this when you need the full picture.
        """
        data = await _get(f"/api/context/{slug}")
        return json.dumps(data, indent=2)
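Because the handler is just "fetch, then pretty-print", it can be exercised end to end by stubbing the HTTP helper. A minimal sketch (the stub payload is hypothetical, and the MCP decorator is omitted):

```python
import asyncio
import json

async def _get(path: str, params=None) -> dict:
    # Stub standing in for the real HTTP helper; payload values are made up.
    assert path == "/api/context/martin-frasch"
    return {"s_index": 42.0, "paper_impact": 10.5}

async def get_context(slug: str) -> str:
    # Same body as the tool handler above, minus the @mcp.tool() decorator.
    data = await _get(f"/api/context/{slug}")
    return json.dumps(data, indent=2)

result = asyncio.run(get_context("martin-frasch"))
assert json.loads(result) == {"s_index": 42.0, "paper_impact": 10.5}
```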
  • Backend API endpoint implementation for /api/context/{slug}. This FastAPI route fetches researcher data from all sources (Semantic Scholar, Google Scholar, GitHub, Figshare), computes QIC scores, and returns a comprehensive context object with S-Index, paper impact, source statuses, dataset scores, and repository scores.
    @app.get("/api/context/{slug}")
    async def get_context(slug: str):
        researcher = _get_researcher_or_404(slug)
    
        merged_data, gh_data, fs_data = await _fetch_all(researcher)
        qic = compute_researcher_qic(fs_data, gh_data, merged_data)
    
        def _source_status(data, name):
            if "_error" in data:
                return {"status": "error", "error": data["_error"][:100]}
            return {"status": "connected"}
    
        # Academic sources — merged data has _sources field
        academic_sources = merged_data.get("_sources", [])
        s2_info = {"status": "connected"} if "semantic_scholar" in academic_sources else {"status": "error", "error": "Semantic Scholar unavailable"}
        s2_info.update({"paper_count": merged_data.get("paper_count", 0), "citation_count": merged_data.get("citation_count", 0), "h_index": merged_data.get("h_index", 0)})
    
        gs_info = {"status": "connected"} if "google_scholar" in academic_sources else {"status": "error", "error": "Google Scholar unavailable"}
        gs_info.update({"i10_index": merged_data.get("i10_index", 0)})
    
        gh_info = _source_status(gh_data, "github")
        gh_info.update({"total_repos": gh_data.get("total_repos", 0), "total_stars": gh_data.get("total_stars", 0)})
    
        fs_info = _source_status(fs_data, "figshare")
        fs_info.update({"total_datasets": fs_data.get("total_datasets", 0), "total_downloads": fs_data.get("total_downloads", 0)})
    
        return {
            "researcher_slug": slug,
            "display_name": researcher["display_name"],
            "s_index": qic["s_index"],
            "paper_impact": qic["paper_impact"],
            "summary": qic["summary"],
            "sources": {
                "semantic_scholar": s2_info,
                "google_scholar": gs_info,
                "github": gh_info,
                "figshare": fs_info,
            },
            "dataset_scores": qic.get("dataset_scores", []),
            "repo_scores": qic.get("repo_scores", [])[:5],
        }
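The `_source_status` helper in the endpoint above is what maps a raw fetch result to a connection status: any payload carrying an `_error` key is reported as a failed source, with the message capped at 100 characters. A standalone sketch of that behavior (the sample inputs are made up):

```python
def _source_status(data, name):
    # Mirrors the helper in the endpoint: an "_error" key marks the source
    # as failed; otherwise it is reported as connected.
    # (The `name` argument is accepted but unused, as in the original.)
    if "_error" in data:
        return {"status": "error", "error": data["_error"][:100]}
    return {"status": "connected"}

ok = _source_status({"total_repos": 3, "total_stars": 12}, "github")
bad = _source_status({"_error": "rate limit exceeded " * 20}, "figshare")

assert ok == {"status": "connected"}
assert bad["status"] == "error" and len(bad["error"]) == 100
```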
  • Tool registration via @mcp.tool() decorator with ToolAnnotations specifying title='Get Researcher Context' and read_only_hint=True. This decorator registers the get_context function as an MCP tool.
    @mcp.tool(annotations=ToolAnnotations(title="Get Researcher Context", read_only_hint=True))
  • Documentation listing 'get_context' as one of the available tools in the about() resource function, which provides information about the ResearchTwin platform.
    "- get_context: Get full research context with all metrics\n"
  • Helper function _get() that makes HTTP GET requests to the ResearchTwin API. Used by the get_context tool (and other tools) to communicate with the backend API endpoints.
    async def _get(path: str, params: dict | None = None) -> dict:
        """Make a GET request to the ResearchTwin API."""
        async with httpx.AsyncClient(timeout=TIMEOUT) as client:
            resp = await client.get(f"{BASE_URL}{path}", params=params)
            resp.raise_for_status()
            return resp.json()
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations only provide a title, so the description carries the burden of behavioral disclosure. It describes what data is returned (S-Index, paper impact, source connection status, QIC scores), which adds useful context beyond the schema. However, it doesn't mention potential limitations like rate limits, authentication needs, or error conditions. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: a purpose statement, parameter explanation with example, return value details, and usage guideline. Every sentence adds value without redundancy. It's front-loaded with the core purpose and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (which handles return values), the description focuses on purpose, parameters, and differentiation from siblings. It covers the essential context well, though it could benefit from mentioning behavioral aspects like performance or data freshness. The presence of an output schema reduces the need for return value details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It explains the 'slug' parameter as 'Researcher identifier (e.g. 'martin-frasch')', providing a clear example and clarifying its purpose. This adds meaningful semantics beyond the bare schema, though it doesn't detail format constraints or validation rules.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('comprehensive research context for a researcher'), specifying it includes 'all data source metrics'. It explicitly distinguishes from sibling 'get_profile' by stating it's 'more detailed' and provides 'the full picture', making the purpose specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'use this when you need the full picture' and contrasts it with 'get_profile' by noting it's 'more detailed'. This provides clear guidance on when to choose this tool over the alternative sibling, though it doesn't mention other siblings like 'get_papers' or 'list_researchers'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/martinfrasch/researchtwin'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.