summarize_text
Generate concise summaries of lengthy text using an LLM model. Specify the content, the model to use, and custom instructions to tailor the output to your needs.
Instructions
Summarize text using an LLM model.
⚠️ COST WARNING: This tool makes an API call to Whissle which may incur costs. Only use when explicitly requested by the user.
Args:
- content (str): The text to summarize
- model_name (str, optional): The LLM model to use. Defaults to "openai"
- instruction (str, optional): Specific instructions for summarization
Returns:
TextContent with the summary.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | The text to summarize | |
| instruction | No | Specific instructions for summarization | |
| model_name | No | The LLM model to use | openai |
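
As a quick illustration, here is a minimal sketch of invoking this tool from a Python MCP client. It assumes a `ClientSession` from the official `mcp` SDK that is already connected to the Whissle MCP server; the variable names and the example instruction are illustrative only.

```python
# Minimal sketch: calling summarize_text through an MCP ClientSession.
# Assumes `session` is an initialized mcp.ClientSession connected to the
# Whissle MCP server. Only "content" is required; the other arguments
# are optional and fall back to the defaults in the schema above.
long_article = "..."  # the text to summarize

result = await session.call_tool(
    "summarize_text",
    arguments={
        "content": long_article,
        "model_name": "openai",                        # optional, defaults to "openai"
        "instruction": "Summarize in two sentences.",  # optional
    },
)

# The handler returns TextContent, so the summary arrives as text blocks.
for block in result.content:
    print(block.text)
```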
Implementation Reference
- whissle_mcp/server.py:384-449 (handler): the complete implementation of the `summarize_text` tool, including the `@mcp.tool` decorator that registers the tool and defines its schema, and the handler function that performs text summarization via the Whissle API's `llm_text_summarizer` method, with error handling and retries.

```python
@mcp.tool(
    description="""Summarize text using an LLM model.

    ⚠️ COST WARNING: This tool makes an API call to Whissle which may incur costs. Only use when explicitly requested by the user.

    Args:
        content (str): The text to summarize
        model_name (str, optional): The LLM model to use. Defaults to "openai"
        instruction (str, optional): Specific instructions for summarization

    Returns:
        TextContent with the summary.
    """
)
def summarize_text(
    content: str,
    model_name: str = "openai",
    instruction: Optional[str] = None,
) -> TextContent:
    try:
        if not content:
            logger.error("Empty content provided for summarization")
            return make_error("Content is required")

        # Log the request details
        logger.info(f"Summarizing text using model: {model_name}")
        logger.info(f"Text length: {len(content)} characters")

        retry_count = 0
        max_retries = 2  # Increased from 1 to 2

        while retry_count <= max_retries:
            try:
                logger.info(f"Attempting summarization (Attempt {retry_count+1}/{max_retries+1})")
                response = client.llm_text_summarizer(
                    content=content,
                    model_name=model_name,
                    instruction=instruction,
                )

                if response and response.response:
                    logger.info("Summarization successful")
                    return TextContent(
                        type="text",
                        text=f"Summary:\n{response.response}",
                    )
                else:
                    logger.error("No summary was returned from the API")
                    return make_error("No summary was returned from the API")

            except Exception as api_error:
                error_msg = str(api_error)
                logger.error(f"Summarization error: {error_msg}")

                # Handle API errors with retries
                error_result = handle_api_error(error_msg, "summarization", retry_count, max_retries)
                if error_result is not None:  # If we should not retry
                    return error_result  # Return the error message

                retry_count += 1

        # If we get here, all retries failed
        logger.error(f"All summarization attempts failed after {max_retries+1} attempts")
        return make_error(f"Failed to summarize text after {max_retries+1} attempts")

    except Exception as e:
        logger.error(f"Unexpected error during summarization: {str(e)}")
        return make_error(f"Failed to summarize text: {str(e)}")
```
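
The handler above delegates retry decisions to a `handle_api_error` helper defined elsewhere in `whissle_mcp/server.py`; its actual body is not shown here. The call site only implies its contract (return `None` to keep retrying, return an error result to stop), and a hedged sketch of that contract might look like the following. The specific "transient error" markers are purely illustrative assumptions, not the library's real logic.

```python
# Illustrative sketch only: the real helper lives elsewhere in
# whissle_mcp/server.py and may differ. This shows the contract the
# summarize_text retry loop relies on.
from typing import Optional

from mcp.types import TextContent


def handle_api_error(
    error_msg: str,
    operation: str,
    retry_count: int,
    max_retries: int,
) -> Optional[TextContent]:
    # Assumed (not confirmed) markers for transient failures worth retrying.
    transient = any(marker in error_msg.lower() for marker in ("timeout", "502", "503"))
    if transient and retry_count < max_retries:
        return None  # signal the caller to retry
    # Otherwise stop retrying and surface the error to the client.
    return TextContent(
        type="text",
        text=f"Error during {operation}: {error_msg}",
    )
```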