
edit_review

Update an existing review for a conference paper submission. Modify text, rating, confidence, and other review elements, then preview changes before final submission.

Instructions

Edit your existing review. Returns a preview — call confirm_submission to actually post the edit.

Args:
  venue_id: The venue identifier.
  submission_number: The paper number.
  review_text: Updated review text.
  rating: Updated rating.
  confidence: Updated confidence.
  title: Updated title.
  strengths: Updated strengths.
  weaknesses: Updated weaknesses.
  questions: Updated questions.

Input Schema

Name               Required
venue_id           Yes
submission_number  Yes
review_text        No
rating             No
confidence         No
title              No
strengths          No
weaknesses         No
questions          No

(The schema provides no per-parameter descriptions or defaults.)
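A hypothetical call with these parameters might look like the sketch below. The venue id format and field values are illustrative assumptions, not values documented by the tool; only `venue_id` and `submission_number` are required.

```python
# Hypothetical arguments for an edit_review call; only venue_id and
# submission_number are required, every other field is an optional update.
edit_args = {
    "venue_id": "ICLR.cc/2025/Conference",  # assumed venue id format
    "submission_number": 42,
    "rating": 6,
    "review_text": "Revised assessment after the author response.",
}

# A well-formed call supplies both required parameters.
required = {"venue_id", "submission_number"}
missing = required - edit_args.keys()
print(sorted(missing))  # -> []
```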

Output Schema

Name    Required
result  Yes

Implementation Reference

  • The `edit_review` function handles the logic of updating a review in OpenReview. It fetches the existing review, prepares the updated content, and stores a pending edit operation.
    async def edit_review(
        venue_id: str,
        submission_number: int,
        review_text: str | None = None,
        rating: int | None = None,
        confidence: int | None = None,
        title: str | None = None,
        strengths: str | None = None,
        weaknesses: str | None = None,
        questions: str | None = None,
    ) -> str:
        """Edit your existing review. Returns a preview — call confirm_submission to actually post the edit.
    
        Args:
            venue_id: The venue identifier.
            submission_number: The paper number.
            review_text: Updated review text.
            rating: Updated rating.
            confidence: Updated confidence.
            title: Updated title.
            strengths: Updated strengths.
            weaknesses: Updated weaknesses.
            questions: Updated questions.
        """
        client = get_client()
        profile_id = client.profile.id
        anon_groups = client.get_groups(
            prefix=f"{venue_id}/Submission{submission_number}/Reviewer_",
            signatory=profile_id,
        )
        if not anon_groups:
            return f"Could not find your anonymous reviewer group for Submission #{submission_number}."
    
        anon_id = anon_groups[0].id
        existing_reviews = client.get_all_notes(
            invitation=f"{venue_id}/Submission{submission_number}/-/Official_Review",
            signature=anon_id,
        )
        if not existing_reviews:
            return f"No existing review found for Submission #{submission_number} by you."
    
        existing = existing_reviews[0]
        content = dict(existing.content)
        if title is not None:
            content["title"] = {"value": title}
        if review_text is not None:
            content["review"] = {"value": review_text}
        if rating is not None:
            content["rating"] = {"value": rating}
        if confidence is not None:
            content["confidence"] = {"value": confidence}
        if strengths is not None:
            content["strengths"] = {"value": strengths}
        if weaknesses is not None:
            content["weaknesses"] = {"value": weaknesses}
        if questions is not None:
            content["questions"] = {"value": questions}
    
        payload = {
            "invitation": f"{venue_id}/Submission{submission_number}/-/Official_Review",
            "signatures": [anon_id],
            "note_id": existing.id,
            "content": content,
        }
        preview_lines = [
            f"## Edited Review Preview for Submission #{submission_number}",
            f"**Editing note:** {existing.id}",
            f"**Rating:** {content.get('rating', {}).get('value', 'N/A')}",
            f"**Confidence:** {content.get('confidence', {}).get('value', 'N/A')}",
        ]
        review_val = content.get("review", {}).get("value", "")
        if review_val:
            preview_lines.append(f"\n{review_val}")
    
        preview = "\n".join(preview_lines)
        confirmation_id = pending_store.add(action="edit_review", payload=payload, preview=preview)
        return f"{preview}\n\n---\n**Confirmation ID:** `{confirmation_id}`\n\nCall `confirm_submission` with this ID to update the review."
  • The `edit_review` function is registered as an MCP tool by the `@mcp.tool()` decorator applied to its definition; no separate registration call is needed.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It successfully discloses the preview-and-confirm workflow and implies this is a mutation operation. However, it lacks critical safety details: whether this is destructive (overwrites previous content), required permissions/authorization, partial vs. full update semantics, or rate limiting constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with high information density: first sentence establishes purpose, second explains the critical workflow constraint, followed by structured parameter documentation. The 'Args:' header is slightly redundant but necessary given the schema coverage gap. No wasted sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a 9-parameter mutation tool with an output schema. The description correctly avoids duplicating return value details (since output schema exists) while explaining the preview nature of the response. Missing only validation constraints and error condition guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Given 0% schema description coverage (titles only, no descriptions), the Args section compensates effectively by documenting all 9 parameters. It adds semantic meaning beyond the schema (e.g., 'paper number' clarifies 'Submission Number', 'Updated' prefix implies modification context). Deduction for lacking value constraints (e.g., rating ranges, venue_id format).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
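The missing value constraints noted above could be checked client-side before calling the tool. A hypothetical validation sketch follows; the 1-10 rating range and 1-5 confidence range are assumptions for illustration, since valid ranges are venue-specific and not documented by `edit_review`.

```python
def validate_review_args(rating=None, confidence=None):
    """Hypothetical client-side checks; actual valid ranges are
    venue-specific and not documented by the tool."""
    errors = []
    if rating is not None and not 1 <= rating <= 10:
        errors.append(f"rating {rating} outside assumed 1-10 range")
    if confidence is not None and not 1 <= confidence <= 5:
        errors.append(f"confidence {confidence} outside assumed 1-5 range")
    return errors


print(validate_review_args(rating=6, confidence=4))  # -> []
print(validate_review_args(rating=12))  # one error message
```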

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the core action ('Edit your existing review') with specific verb and resource. Implicitly distinguishes from 'submit_review' sibling by specifying 'existing' and explicitly differentiates from 'confirm_submission' by describing the two-phase workflow, though it doesn't explicitly contrast with initial submission tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance: 'Returns a preview — call confirm_submission to actually post the edit.' This clearly establishes when to use this tool (to generate a preview) versus when to use the sibling confirm_submission tool (to finalize), managing the agent's expectations about the state change.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/michaelqshieh/openreview-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.