
Interactive Feedback MCP

by ISimon3

interactive_feedback

Request interactive feedback from users with text or images. Enable users to provide direct input or select predefined options during AI-assisted workflows without additional premium requests.

Instructions

Request interactive feedback from the user, supporting text and images.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| message | Yes | The specific question to ask the user | — |
| predefined_options | No | Predefined options for the user to choose from (optional) | None |
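Given this schema, a client-side `tools/call` request might look like the following sketch (the message text and option strings are illustrative, not taken from the implementation):

```python
import json

# Hypothetical MCP "tools/call" JSON-RPC request invoking this tool.
# Only "message" is required; "predefined_options" may be omitted.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "interactive_feedback",
        "arguments": {
            "message": "Is the new layout acceptable?",
            "predefined_options": ["Yes", "No", "Needs tweaks"],
        },
    },
}
print(json.dumps(request, ensure_ascii=False))
```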

Implementation Reference

  • The primary handler for the 'interactive_feedback' MCP tool. Decorated with @mcp.tool() for automatic registration and schema generation via Pydantic Fields. Collects user feedback by launching a subprocess UI and returns a structured response with text and base64-encoded images.
    @mcp.tool()
    def interactive_feedback(
        message: str = Field(description="向用户提出的具体问题"),
        predefined_options: list | None = Field(default=None, description="提供给用户选择的预定义选项(可选)"),
    ) -> Dict[str, str | List[Dict[str, str]]]:
        """Request interactive feedback from the user, supporting text and images."""
        # Normalize the input: anything other than a list becomes None
        predefined_options_list = predefined_options if isinstance(predefined_options, list) else None
        
        # Make sure the list of predefined options is never empty
        if not predefined_options_list:
            predefined_options_list = [
                "已解决当前问题",          # The current issue is resolved
                "进一步优化程序",          # Further optimize the program
                "进一步优化界面",          # Further optimize the UI
                "还有一些问题需要修复",    # Some issues still need fixing
                "没有修复任何错误",        # No errors were fixed
            ]
        
        result = launch_feedback_ui(message, predefined_options_list)
        
        # Build the return value
        response = {
            'interactive_feedback': result.get('interactive_feedback', '')
        }
        
        # Attach images to the response, if any
        if 'images' in result and result['images']:
            response['images'] = result['images']
        
        return response
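The handler returns a plain dict; when images are attached, each entry carries a base64-encoded `content` field alongside `filename` and `path`. A caller could recover the raw bytes like this (the response values below are made up for illustration):

```python
import base64

# Illustrative response shape; "content" holds base64-encoded file bytes.
response = {
    "interactive_feedback": "Looks good overall",
    "images": [
        {
            "filename": "screenshot.png",
            "content": base64.b64encode(b"fake-png-bytes").decode("utf-8"),
            "path": "/tmp/screenshot.png",
        }
    ],
}

for img in response.get("images", []):
    raw = base64.b64decode(img["content"])
    print(img["filename"], len(raw), "bytes")
```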
  • Helper utility that launches the feedback_ui.py script as a subprocess, using a temporary JSON file for communication, converts any attached images to base64, and returns the feedback data.
    def launch_feedback_ui(summary: str, predefinedOptions: list[str] | None = None) -> dict[str, str | list[str]]:
        # Create a temporary file for the feedback result
        with tempfile.NamedTemporaryFile(suffix=".json", delete=False) as tmp:
            output_file = tmp.name
    
        try:
            # Resolve the path to feedback_ui.py relative to this script
            script_dir = os.path.dirname(os.path.abspath(__file__))
            feedback_ui_path = os.path.join(script_dir, "feedback_ui.py")
    
            # Run feedback_ui.py as a separate process.
            # Note: uv appears to have a bug, so we need to pass a number
            # of special flags to make this work.
            args = [
                sys.executable,
                "-u",
                feedback_ui_path,
                "--prompt", summary,
                "--output-file", output_file,
                "--predefined-options", "|||".join(predefinedOptions) if predefinedOptions else ""
            ]
            result = subprocess.run(
                args,
                check=False,
                shell=False,
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
                stdin=subprocess.DEVNULL,
                close_fds=True
            )
            if result.returncode != 0:
                raise Exception(f"Failed to launch the feedback UI: {result.returncode}")
    
            # Read the result from the temporary file
            with open(output_file, 'r', encoding='utf-8') as f:
                result_data = json.load(f)
            os.unlink(output_file)
            
            # Convert the images referenced by path to base64
            if 'image_paths' in result_data and result_data['image_paths']:
                image_data = []
                for img_path in result_data['image_paths']:
                    if os.path.exists(img_path):
                        try:
                            with open(img_path, 'rb') as img_file:
                                img_content = img_file.read()
                                img_base64 = base64.b64encode(img_content).decode('utf-8')
                                img_filename = os.path.basename(img_path)
                                image_data.append({
                                    'filename': img_filename,
                                    'content': img_base64,
                                    'path': img_path
                                })
                        except Exception as e:
                            print(f"Error while processing image: {e}")
                
                # Add the image data to the result
                result_data['images'] = image_data
            
            return result_data
        except Exception:
            # Clean up the temporary file before re-raising
            if os.path.exists(output_file):
                os.unlink(output_file)
            raise
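The contract between the two processes is simple: feedback_ui.py writes a JSON document with the FeedbackResult fields to the path given via `--output-file`, and the parent reads it back. A minimal sketch of that exchange (the real UI is not shown; `write_feedback` is a hypothetical stand-in for its output step):

```python
import json
import os
import tempfile

# Hypothetical stand-in for the step where feedback_ui.py persists its
# result: a JSON document with "interactive_feedback" and "image_paths".
def write_feedback(output_file: str, text: str, image_paths: list[str]) -> None:
    payload = {"interactive_feedback": text, "image_paths": image_paths}
    with open(output_file, "w", encoding="utf-8") as f:
        json.dump(payload, f, ensure_ascii=False)

# Simulate one exchange: the parent creates the temp file, the "UI"
# writes its result, and the parent reads it back and cleans up.
with tempfile.NamedTemporaryFile(suffix=".json", delete=False) as tmp:
    output_file = tmp.name

write_feedback(output_file, "All good", [])

with open(output_file, encoding="utf-8") as f:
    result_data = json.load(f)
os.unlink(output_file)

print(result_data["interactive_feedback"])  # → All good
```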
  • Type definition for the feedback result structure used by the UI, specifying the 'interactive_feedback' field and list of image paths.
    class FeedbackResult(TypedDict):
        interactive_feedback: str
        image_paths: List[str]
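One detail worth noting: the options list crosses the process boundary as a single "|||"-joined string (see the `--predefined-options` argument in launch_feedback_ui), so the UI side is expected to split on the same separator. A minimal round-trip sketch:

```python
SEPARATOR = "|||"

options = ["已解决当前问题", "进一步优化程序", "还有一些问题需要修复"]

# Serialize as launch_feedback_ui does, then split as the UI would.
# This round-trips cleanly as long as no option contains the separator.
encoded = SEPARATOR.join(options)
decoded = encoded.split(SEPARATOR) if encoded else []

print(len(decoded))  # → 3
```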
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that the tool '支持文本和图片' (supports text and images), which adds some context about input types. However, it does not describe critical behavioral traits such as whether this is a blocking operation, how feedback is collected or returned, error handling, or any permissions required. For a tool with no annotations, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded: '向用户请求交互式反馈,支持文本和图片' (Request interactive feedback from users, supporting text and images). It is a single sentence with no wasted words, clearly stating the core functionality. Every part of the description earns its place by specifying the action and supported formats.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a feedback collection tool with no annotations and no output schema, the description is incomplete. It lacks information about how the feedback is returned (e.g., format, structure), any side effects, or error conditions. While it mentions support for text and images, it does not cover the full behavioral context needed for an agent to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, meaning both parameters ('message' and 'predefined_options') are fully documented in the schema. The description does not add any additional meaning or clarification beyond what the schema provides (e.g., it does not explain how 'predefined_options' should be formatted or used). With high schema coverage, the baseline score of 3 is appropriate as the description does not compensate but also does not detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: '向用户请求交互式反馈,支持文本和图片' (Request interactive feedback from users, supporting text and images). It specifies the verb ('请求' - request) and resource ('交互式反馈' - interactive feedback) with additional detail about supported input types. However, since there are no sibling tools, it cannot demonstrate differentiation from alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention any prerequisites, exclusions, or contextual factors that would help an agent decide when this tool is appropriate. The absence of sibling tools means no explicit alternatives are named, but general usage context is still missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ISimon3/interactive-feedback-mcp'
