
research

Search scientific literature and receive a cited Markdown report synthesizing findings from multiple research papers.

Instructions

Submit a scientific question to the SPARKIT research agent.

SPARKIT searches the literature, reads relevant papers, and returns a cited Markdown report. Best for questions where a correct answer requires synthesizing across multiple primary sources.

Args:

- question: Free-text scientific question. Be specific — "Which kinases are upregulated in pancreatic cancer with evidence from human tissue?" works better than "tell me about pancreatic cancer."
- response_format: "full" (default) for a multi-paragraph Markdown report, or "brief" for a tighter summary.
- include_citations: Keep True (default) so the report is usable for downstream work; only set False if you specifically want unsourced prose.
- max_wait_seconds: How long to block waiting for the job before returning the job_id with instructions to poll via get_job_status. Default 240s (4 min). Range 30-540.

Returns the cited Markdown report on success. If the job is still running at the wait limit, returns the job_id and status so the caller can resume with get_job_status.
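As a usage sketch, here is how an agent-side caller might invoke this tool with the official MCP Python SDK. The launch command and module name are assumptions; only the tool name and arguments come from this page.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        # Hypothetical launch command; substitute however you run sparkit-mcp.
        params = StdioServerParameters(command="python", args=["-m", "sparkit_mcp"])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                result = await session.call_tool(
                    "research",
                    {
                        "question": (
                            "Which kinases are upregulated in pancreatic cancer "
                            "with evidence from human tissue?"
                        ),
                        "response_format": "brief",
                        "max_wait_seconds": 120,
                    },
                )
                # On timeout, the returned text contains a job_id to resume
                # via the sibling get_job_status tool.
                print(result.content[0].text)

    asyncio.run(main())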

Input Schema

Name               Required  Description  Default
question           Yes
response_format    No                     full
include_citations  No                     True
max_wait_seconds   No                     240
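The contents of the JSON Schema tab were not captured here. Reconstructed from the handler signature shown below, the input schema would look roughly like this (a sketch rendered as a Python literal, not copied from the page):

    INPUT_SCHEMA = {
        "type": "object",
        "properties": {
            "question": {"type": "string"},
            "response_format": {"type": "string", "default": "full"},
            "include_citations": {"type": "boolean", "default": True},
            "max_wait_seconds": {"type": "integer", "default": 240},
        },
        "required": ["question"],
    }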

Output Schema

Name    Required  Description  Default
result  Yes
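Correspondingly, the output schema would be roughly as follows (same caveat: reconstructed, not copied from the page):

    OUTPUT_SCHEMA = {
        "type": "object",
        "properties": {"result": {"type": "string"}},
        "required": ["result"],
    }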

Implementation Reference

  • The main handler for the 'research' MCP tool, defined as an async function decorated with @mcp.tool(). It accepts question, response_format, include_citations, and max_wait_seconds; validates inputs, submits the research job via SparkitClient.submit_research(), and polls until completion or timeout via _await_completion(). (The wait-bound constants and _format_* helpers it references are sketched after this reference list.)
    @mcp.tool()
    async def research(
        question: str,
        response_format: str = "full",
        include_citations: bool = True,
        max_wait_seconds: int = _DEFAULT_MAX_WAIT_SECONDS,
    ) -> str:
        """Submit a scientific question to the SPARKIT research agent.
    
        SPARKIT searches the literature, reads relevant papers, and returns
        a cited Markdown report. Best for questions where a correct answer
        requires synthesizing across multiple primary sources.
    
        Args:
            question: Free-text scientific question. Be specific —
                "Which kinases are upregulated in pancreatic cancer with
                evidence from human tissue?" works better than "tell me
                about pancreatic cancer."
            response_format: ``"full"`` (default) for a multi-paragraph
                Markdown report, or ``"brief"`` for a tighter summary.
            include_citations: Keep ``True`` (default) so the report is
                usable for downstream work; only set ``False`` if you
                specifically want unsourced prose.
            max_wait_seconds: How long to block waiting for the job before
                returning the job_id with instructions to poll via
                ``get_job_status``. Default 240s (4 min). Range 30-540.
    
        Returns the cited Markdown report on success. If the job is still
        running at the wait limit, returns the job_id and status so the
        caller can resume with ``get_job_status``.
        """
        if not question or not question.strip():
            return "Error: `question` is required and cannot be empty."
    
        wait = max(_MIN_MAX_WAIT_SECONDS, min(_MAX_MAX_WAIT_SECONDS, max_wait_seconds))
        if response_format not in ("full", "brief"):
            return "Error: `response_format` must be 'full' or 'brief'."
    
        try:
            async with SparkitClient() as client:
                job = await client.submit_research(
                    question.strip(),
                    response_format=response_format,
                    include_citations=include_citations,
                )
                logger.info("Submitted SPARKIT job %s", job.job_id)
                return await _await_completion(client, job, wait)
        except SparkitAPIError as e:
            return _format_api_error(e)
  • The @mcp.tool() decorator on the 'research' function registers it as an MCP tool with the FastMCP server instance.
    @mcp.tool()
  • The type signature and docstring (including Args) define the input schema for the research tool: question (str, required), response_format (str, default 'full'), include_citations (bool, default True), max_wait_seconds (int, default 240).
    async def research(
        question: str,
        response_format: str = "full",
        include_citations: bool = True,
        max_wait_seconds: int = _DEFAULT_MAX_WAIT_SECONDS,
    ) -> str:
  • Helper async function _await_completion that polls the job status until completion, failure, or timeout. Used by the research handler to synchronously wait for results.
    async def _await_completion(
        client: SparkitClient, job: Job, max_wait_seconds: int
    ) -> str:
        """Poll until the job is terminal or until ``max_wait_seconds`` elapses."""
        elapsed = 0.0
        current = job
        while current.status in {"queued", "running"} and elapsed < max_wait_seconds:
            await asyncio.sleep(_POLL_INTERVAL_SECONDS)
            elapsed += _POLL_INTERVAL_SECONDS
            try:
                current = await client.get_job(job.job_id)
            except SparkitAPIError as e:
                # Transient lookup error mid-poll; bail out with the job_id so
                # the caller can resume rather than swallowing progress.
                return (
                    f"Submitted as `{job.job_id}` but couldn't fetch status: "
                    f"{e.message}. Use `get_job_status` to check on it."
                )
    
        if current.status == "completed":
            return _format_completed(current)
        if current.status in {"failed", "cancelled"}:
            return _format_terminal_failure(current)
        # Still queued or running and we ran out the clock.
        return _format_in_flight(current, elapsed)
  • The SparkitClient.submit_research() method that sends the POST /v1/research API request to submit a research question. Returns a Job object.
        async def submit_research(
            self,
            question: str,
            *,
            response_format: str = "full",
            include_citations: bool = True,
            max_answer_tokens: int | None = None,
        ) -> Job:
            body: dict[str, Any] = {
                "question": question,
                "response_format": response_format,
                "include_citations": include_citations,
            }
            if max_answer_tokens is not None:
                body["max_answer_tokens"] = max_answer_tokens
            resp = await self._client.post("/v1/research", json=body)
            return _job_from_dict(_unwrap(resp))
    
        async def get_job(self, job_id: str) -> Job:
            resp = await self._client.get(f"/v1/research/{job_id}")
            return _job_from_dict(_unwrap(resp))
    
    
    def _unwrap(resp: httpx.Response) -> dict[str, Any]:
        """Raise ``SparkitAPIError`` on non-2xx; otherwise return parsed JSON."""
        if 200 <= resp.status_code < 300:
            try:
                return resp.json()
            except ValueError as e:
                raise SparkitAPIError(
                    resp.status_code, f"Malformed JSON response: {e}"
                ) from e
    
        # Error path — try to extract the API's structured error.
        code: str | None = None
        message = resp.reason_phrase or "Request failed."
        try:
            body = resp.json()
            err = body.get("error") if isinstance(body, dict) else None
            if isinstance(err, dict):
                code = err.get("code")
                message = err.get("message") or message
        except ValueError:
            pass
        raise SparkitAPIError(resp.status_code, message, code=code)
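The excerpts reference several names that are not shown: the wait-bound constants, the Job model with _job_from_dict, and the _format_* helpers. A minimal sketch of how they might look, inferred from the excerpts and the documented defaults rather than taken from the sparkit-mcp source:

    from dataclasses import dataclass
    from typing import Any

    _DEFAULT_MAX_WAIT_SECONDS = 240  # 4 min, per the tool description
    _MIN_MAX_WAIT_SECONDS = 30       # documented range is 30-540
    _MAX_MAX_WAIT_SECONDS = 540
    _POLL_INTERVAL_SECONDS = 5.0     # assumed; the excerpts do not show this value

    @dataclass
    class Job:
        job_id: str
        status: str  # "queued" | "running" | "completed" | "failed" | "cancelled"
        result: str | None = None

    def _job_from_dict(data: dict[str, Any]) -> Job:
        # Field names assumed to mirror the API's job payload.
        return Job(
            job_id=data["job_id"],
            status=data["status"],
            result=data.get("result"),
        )

    def _format_completed(job: Job) -> str:
        return job.result or "Job completed but returned no report."

    def _format_terminal_failure(job: Job) -> str:
        return f"Job `{job.job_id}` ended with status '{job.status}'."

    def _format_in_flight(job: Job, elapsed: float) -> str:
        return (
            f"Job `{job.job_id}` is still {job.status} after ~{elapsed:.0f}s. "
            "Resume with `get_job_status` and this job_id."
        )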
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Despite having no structured annotations, the description discloses the key behaviors: async execution with a timeout (max_wait_seconds), the two possible return shapes (inline report vs. job_id), and parameter defaults. It could also document rate limits and error handling, but it is thorough overall.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured: concise opening, contextual paragraph, bullet-like Args section, and return value explanation. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers input, usage, return types, and the sibling-tool relationship. Explicit error scenarios are missing, though the output schema likely covers them. Overall, very complete for a complex async tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by explaining each parameter in detail: question specificity, response_format options, include_citations rationale, and max_wait_seconds range and purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool submits a scientific question to the SPARKIT research agent, which searches literature and returns a cited Markdown report. It distinguishes from sibling 'get_job_status' by describing async behavior and polling instructions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says the tool is 'Best for questions where a correct answer requires synthesizing across multiple primary sources,' gives context on when to use it, and mentions the polling alternative via get_job_status.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/SPARKIT-science/sparkit-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.