ResearchTwin

get_profile

Retrieve researcher profiles with S-Index scores, h-index, citation counts, and links to publications, datasets, and repositories for impact analysis.

Instructions

Get a researcher's profile with S-Index score and summary metrics.

Args: slug: Researcher identifier (e.g. 'martin-frasch'). Use list_researchers to find valid slugs.

Returns structured profile with S-Index, h-index, paper count, citation count, and links to papers/datasets/repos endpoints.

Input Schema

Name | Required | Description | Default
slug | Yes      |             |

Output Schema

Name   | Required | Description | Default
result | Yes      |             |

Implementation Reference

  • The get_profile tool handler function that fetches a researcher's profile with S-Index score and summary metrics. Takes a slug parameter and returns JSON-formatted data from the ResearchTwin API endpoint /api/researcher/{slug}/profile.
    import json

    from mcp.types import ToolAnnotations

    @mcp.tool(annotations=ToolAnnotations(title="Get Researcher Profile", read_only_hint=True))
    async def get_profile(slug: str) -> str:
        """Get a researcher's profile with S-Index score and summary metrics.
    
        Args:
            slug: Researcher identifier (e.g. 'martin-frasch'). Use list_researchers to find valid slugs.
    
        Returns structured profile with S-Index, h-index, paper count, citation count,
        and links to papers/datasets/repos endpoints.
        """
        data = await _get(f"/api/researcher/{slug}/profile")
        return json.dumps(data, indent=2)
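Note that the handler returns a JSON string rather than a dict, so callers must parse it. A minimal sketch, assuming a hypothetical response shape built from the fields the description names (the real API's key names are not specified here and may differ):

```python
import json

# Hypothetical payload mirroring the documented fields (S-Index, h-index,
# paper count, citation count, links); actual key names may differ.
raw = json.dumps(
    {
        "slug": "martin-frasch",
        "s_index": 12.3,
        "h_index": 30,
        "paper_count": 120,
        "citation_count": 4500,
        "links": {"papers": "/api/researcher/martin-frasch/papers"},
    },
    indent=2,
)

# What a client does with get_profile's return value:
profile = json.loads(raw)
summary = f"{profile['slug']}: h-index {profile['h_index']}, {profile['citation_count']} citations"
```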
  • Helper function that makes HTTP GET requests to the ResearchTwin API. Used by get_profile and other tools to communicate with the backend API.
    import httpx

    # BASE_URL and TIMEOUT are module-level constants configured elsewhere in the server.
    async def _get(path: str, params: dict | None = None) -> dict:
        """Make a GET request to the ResearchTwin API."""
        async with httpx.AsyncClient(timeout=TIMEOUT) as client:
            resp = await client.get(f"{BASE_URL}{path}", params=params)
            resp.raise_for_status()
            return resp.json()
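Because `raise_for_status()` propagates HTTP errors as `httpx.HTTPStatusError`, an unknown slug surfaces as a 404 exception rather than an empty profile. A hedged sketch of how a caller might translate common statuses into readable messages (the helper name `describe_http_error` is hypothetical, not part of the server):

```python
def describe_http_error(status_code: int, slug: str) -> str:
    """Map a ResearchTwin API error status to a readable message (illustrative only)."""
    if status_code == 404:
        return f"No researcher found for slug '{slug}'; use list_researchers to find valid slugs."
    if status_code == 429:
        return "Rate limited by the ResearchTwin API; back off and retry."
    return f"ResearchTwin API request failed with HTTP {status_code}."
```

Wrapping the `await _get(...)` call in `try/except httpx.HTTPStatusError` and routing `exc.response.status_code` through such a helper keeps error text actionable for agents.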
  • FastMCP server instance initialization that serves as the registration point for all MCP tools including get_profile. Tools are registered using the @mcp.tool() decorator.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP(
        name="researchtwin",
        instructions=(
            "ResearchTwin is a federated platform for research discovery. "
            "Use these tools to find researchers, explore their publications, "
            "datasets, and code repositories, and compute S-Index impact metrics. "
            "Start with list_researchers or discover to find relevant research."
        ),
    )
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations only provide a title ('Get Researcher Profile'), so the description carries the burden of behavioral disclosure. It describes what the tool returns (structured profile with metrics and links) and mentions a prerequisite (using 'list_researchers' to find slugs), which adds useful context. However, it doesn't cover aspects like error handling, rate limits, or authentication needs. With minimal annotations, the description adds some value but could be more comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded: the first sentence states the purpose, followed by an 'Args:' section for parameters and a 'Returns:' section for output. Every sentence earns its place, with no wasted words. It's appropriately sized for a single-parameter tool with detailed parameter explanation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no nested objects) and the presence of an output schema (which handles return values), the description is largely complete. It covers purpose, parameter semantics, and output overview. However, it could improve by mentioning behavioral aspects like error cases (e.g., invalid slug) or linking to sibling tools more explicitly. The output schema reduces the need for return value details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It fully explains the single parameter 'slug': defines it as 'Researcher identifier,' provides an example ('martin-frasch'), and specifies how to obtain valid values ('Use list_researchers to find valid slugs'). This adds significant meaning beyond the schema, which only indicates it's a required string titled 'Slug.'

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get a researcher's profile with S-Index score and summary metrics.' It specifies the verb ('Get') and resource ('researcher's profile'), but doesn't explicitly differentiate it from sibling tools like 'list_researchers' (which lists researchers) or 'get_papers' (which gets papers). The purpose is clear but lacks sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool: to retrieve a specific researcher's profile. It mentions 'Use list_researchers to find valid slugs,' which implies an alternative tool for finding slugs, but doesn't explicitly state when not to use this tool or compare it to other profile-related siblings. The guidance is helpful but not exhaustive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/martinfrasch/researchtwin'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.