Glama

get_submission

Retrieve complete submission details including title, abstract, authors, and keywords from OpenReview conferences using venue and submission identifiers.

Instructions

Get full details of a submission (title, abstract, authors, keywords, etc.).

Args:

  • venue_id: The venue identifier (e.g., 'ICLR.cc/2025/Conference').
  • submission_id: The submission's note ID. Provide this OR submission_number.
  • submission_number: The submission's paper number. Provide this OR submission_id.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| venue_id | Yes | | |
| submission_id | No | | |
| submission_number | No | | |
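The "Provide this OR" rule in the Args section is a mutual-exclusion constraint the schema itself does not encode. A minimal pre-flight check a client could run before calling the tool is sketched below; the `validate_args` helper is hypothetical, not part of the server:

```python
# Hypothetical pre-flight check mirroring the argument names in the
# input schema above. Returns an error string on invalid input, else None.
def validate_args(venue_id, submission_id=None, submission_number=None):
    if not venue_id:
        return "venue_id is required."
    if submission_id is None and submission_number is None:
        return "Please provide either submission_id or submission_number."
    return None
```

Note that if both identifiers are supplied, the handler shown under Implementation Reference checks `submission_id` first and never consults `submission_number`.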

Output Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| result | Yes | | |

Implementation Reference

  • The handler for `get_submission` fetches submission data from OpenReview using the provided ID or number and formats it into a markdown string. It is registered as an MCP tool using `@mcp.tool()`.
    from datetime import datetime, timezone  # needed for the cdate formatting below

    @mcp.tool()
    async def get_submission(
        venue_id: str,
        submission_id: str | None = None,
        submission_number: int | None = None,
    ) -> str:
        """Get full details of a submission (title, abstract, authors, keywords, etc.).
    
        Args:
            venue_id: The venue identifier (e.g., 'ICLR.cc/2025/Conference').
            submission_id: The submission's note ID. Provide this OR submission_number.
            submission_number: The submission's paper number. Provide this OR submission_id.
        """
        client = get_client()
    
        if submission_id:
            note = client.get_note(submission_id)
        elif submission_number is not None:
            notes = client.get_all_notes(
                invitation=f"{venue_id}/-/Submission",
                number=submission_number,
            )
            if not notes:
                return f"No submission found with number {submission_number} in {venue_id}."
            note = notes[0]
        else:
            return "Please provide either submission_id or submission_number."
    
        content = note.content
        title = content.get("title", {}).get("value", "Untitled")
        abstract = content.get("abstract", {}).get("value", "N/A")
        authors = content.get("authors", {}).get("value", [])
        keywords = content.get("keywords", {}).get("value", [])
    
        date_str = ""
        if note.cdate:
            dt = datetime.fromtimestamp(note.cdate / 1000, tz=timezone.utc)
            date_str = dt.strftime("%Y-%m-%d")
    
        lines = [
            f"# #{note.number}: {title}",
            "",
            f"**ID:** {note.id}",
            f"**Forum:** https://openreview.net/forum?id={note.forum}",
        ]
        if date_str:
            lines.append(f"**Submitted:** {date_str}")
        if authors:
            lines.append(f"**Authors:** {', '.join(authors)}")
        if keywords:
            lines.append(f"**Keywords:** {', '.join(keywords)}")
    
        lines.append(f"\n## Abstract\n\n{abstract}")
    
        skip_keys = {"title", "abstract", "authors", "authorids", "keywords", "pdf", "venueid",
                     "venue", "TLDR", "_bibtex"}
        for key, val in content.items():
            if key not in skip_keys and isinstance(val, dict) and "value" in val:
                lines.append(f"\n**{key}:** {val['value']}")
    
        return "\n".join(lines)
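The formatting logic above can be exercised without a live OpenReview client by stubbing the note object. The sketch below condenses the same extraction steps; the `format_submission` name and the stub data are illustrative, not part of the server, and it assumes the OpenReview API v2 content shape where each field is a `{"value": ...}` dict:

```python
from datetime import datetime, timezone
from types import SimpleNamespace

def format_submission(note):
    """Condensed version of the handler's markdown formatting."""
    content = note.content
    title = content.get("title", {}).get("value", "Untitled")
    abstract = content.get("abstract", {}).get("value", "N/A")
    authors = content.get("authors", {}).get("value", [])
    lines = [f"# #{note.number}: {title}", "", f"**ID:** {note.id}"]
    if note.cdate:
        # cdate is a Unix timestamp in milliseconds
        dt = datetime.fromtimestamp(note.cdate / 1000, tz=timezone.utc)
        lines.append(f"**Submitted:** {dt.strftime('%Y-%m-%d')}")
    if authors:
        lines.append(f"**Authors:** {', '.join(authors)}")
    lines.append(f"\n## Abstract\n\n{abstract}")
    return "\n".join(lines)

# Stub standing in for an openreview.api.Note
note = SimpleNamespace(
    id="abc123",
    number=42,
    cdate=1727740800000,  # 2024-10-01 00:00:00 UTC, in milliseconds
    content={
        "title": {"value": "A Paper"},
        "abstract": {"value": "We study things."},
        "authors": {"value": ["Ada Lovelace", "Alan Turing"]},
    },
)
```

Running `format_submission(note)` previews the markdown string the tool returns to the agent.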

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses what data is returned (title, abstract, authors, keywords), adding valuable context beyond the schema. However, it omits behavioral details like error handling when submissions aren't found, visibility permissions, or whether the operation is read-only.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a single front-loaded sentence stating purpose, followed by a clean Args section documenting parameters. No redundant or wasted text; every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 simple parameters, output schema present), the description is complete. It adequately explains the lookup mechanism (venue + identifier) and return content. Minor gap: lacks mention of error conditions or permission requirements typical of academic conference systems.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage, the description comprehensively documents all three parameters: venue_id includes a concrete example ('ICLR.cc/2025/Conference'), and both submission_id and submission_number include semantic definitions plus their mutual exclusivity constraint. Fully compensates for the schema deficiency.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves 'full details of a submission' and explicitly lists specific fields returned (title, abstract, authors, keywords), which distinguishes it from siblings like get_reviews or get_pdf. However, it does not explicitly contrast with these alternatives in the text.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides internal usage guidance by specifying the XOR relationship between submission_id and submission_number ('Provide this OR...'), but lacks explicit guidance on when to choose this tool over siblings like get_reviews or get_discussion for different data needs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/michaelqshieh/openreview-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.