# review_analyzer
Analyze customer reviews to extract insights and answer specific questions about feedback, using the Obenan MCP Server's review-analysis API.
## Instructions
Analyze reviews using the Obenan Review Analyzer API
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | Question or prompt for the review analyzer | (none) |
## Implementation Reference
- `src/obenan_mcp_server/server.py:430-504` (handler): The implementation of the `review_analyzer` tool, which sends a POST request to an external Obenan API to analyze reviews based on a user prompt.
```python
async def handle_review_analyzer(
    arguments: dict[str, Any] | None
) -> list[types.TextContent]:
    prompt = arguments.get("prompt")
    if not prompt:
        return [types.TextContent(
            type="text",
            text="❌ Error: Prompt is required for review analysis"
        )]
    try:
        # Use access token from environment for authorization if needed
        access_token = os.environ.get("OBENAN_LOGIN_ACCESS_TOKEN")

        # Prepare the payload with user prompt and hardcoded values
        payload = {
            "prompt": prompt,
            "location_id": [471, 472, 475],
            "thirdPartyReviewSourcesId": [69],
            "companyId": [175]
        }

        # Set up headers if token is available
        headers = {}
        if access_token:
            headers["Authorization"] = f"Bearer {access_token}"
        headers["Content-Type"] = "application/json"

        # Make the POST request
        url = "https://reviewanalyser.obenan.com/chat"
        response = requests.post(url, json=payload, headers=headers)

        if response.status_code == 200:
            data = response.json()

            # Format the response for better readability
            formatted_response = "✅ Review Analysis Result\n\n"

            # Add text response if available
            if "response" in data:
                formatted_response += f"Analysis: {data['response']}\n\n"

            # Format graph data if available
            if "graph_response" in data and isinstance(data["graph_response"], dict):
                graph = data["graph_response"]
                formatted_response += f"Chart Type: {graph.get('chart_type', 'Unknown')}\n"

                # Handle columns
                if "columns" in graph and isinstance(graph["columns"], list):
                    formatted_response += f"Columns: {', '.join(graph['columns'])}\n\n"

                # Handle data
                if "data" in graph and isinstance(graph["data"], list):
                    formatted_response += "Data:\n"
                    for item in graph["data"]:
                        for key, value in item.items():
                            formatted_response += f"  {key}: {value}\n"
                        formatted_response += "\n"

            # Include full JSON response for reference
            formatted_response += f"\n===FULL RESPONSE===\n{json.dumps(data, indent=2)}"

            return [types.TextContent(type="text", text=formatted_response)]
        else:
            error_msg = f"❌ Failed to analyze reviews: HTTP {response.status_code}\n{response.text[:500]}"
            return [types.TextContent(type="text", text=error_msg)]
    except Exception as e:
        error_trace = traceback.format_exc()
        return [types.TextContent(
            type="text",
            text=f"🚨 Error analyzing reviews: {str(e)}\n\n{error_trace[:500]}"
        )]
```

- `src/obenan_mcp_server/server.py:63-73` (registration): Registration of the `review_analyzer` tool, including its description and input schema.
```python
types.Tool(
    name="review_analyzer",
    description="Analyze reviews using the Obenan Review Analyzer API",
    inputSchema={
        "type": "object",
        "properties": {
            "prompt": {
                "type": "string",
                "description": "Question or prompt for the review analyzer"
            }
        },
        "required": ["prompt"]
    },
)
```
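To preview the handler's output without hitting the live endpoint, its formatting branch can be replayed against a canned response. The sketch below mirrors the success-path logic; the sample payload is invented for illustration and does not come from the real API:

```python
import json

def format_analysis(data: dict) -> str:
    """Reproduce the success-path formatting of handle_review_analyzer."""
    out = "✅ Review Analysis Result\n\n"
    if "response" in data:
        out += f"Analysis: {data['response']}\n\n"
    graph = data.get("graph_response")
    if isinstance(graph, dict):
        out += f"Chart Type: {graph.get('chart_type', 'Unknown')}\n"
        if isinstance(graph.get("columns"), list):
            out += f"Columns: {', '.join(graph['columns'])}\n\n"
        if isinstance(graph.get("data"), list):
            out += "Data:\n"
            for item in graph["data"]:
                for key, value in item.items():
                    out += f"  {key}: {value}\n"
                out += "\n"
    out += f"\n===FULL RESPONSE===\n{json.dumps(data, indent=2)}"
    return out

# Hypothetical response shape, based on the fields the handler reads.
sample = {
    "response": "Most reviews praise fast service.",
    "graph_response": {
        "chart_type": "bar",
        "columns": ["month", "avg_rating"],
        "data": [{"month": "Jan", "avg_rating": 4.2}],
    },
}
preview = format_analysis(sample)
```

Because every field access is guarded, a response missing `graph_response` (or with it set to a non-dict) still formats cleanly, falling through to the full-JSON dump.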