# summarize_text
Generate concise summaries from text content using LLM models. Specify custom instructions to tailor summaries for specific needs.
## Instructions

Summarize text using an LLM model.

⚠️ COST WARNING: This tool makes an API call to Whissle which may incur costs. Only use when explicitly requested by the user.

Args:

- `content` (str): The text to summarize
- `model_name` (str, optional): The LLM model to use. Defaults to `"openai"`
- `instruction` (str, optional): Specific instructions for summarization

Returns:

- `TextContent` with the summary.
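A minimal sketch of a client-side call, using the stdio client from the MCP Python SDK. The server launch command (`python -m whissle_mcp` here) is an assumption and may differ from the actual entry point of this server:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Assumed launch command for the Whissle MCP server; adjust to the real entry point.
    server = StdioServerParameters(command="python", args=["-m", "whissle_mcp"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Only "content" is required; the other arguments mirror the input schema below.
            result = await session.call_tool(
                "summarize_text",
                arguments={
                    "content": "Long article text goes here...",
                    "model_name": "openai",
                    "instruction": "Summarize in three bullet points.",
                },
            )
            # This tool returns text content, so each item carries a .text field.
            for item in result.content:
                print(item.text)


asyncio.run(main())
```

Because of the cost warning above, a client should only issue this call when the user explicitly asks for a summary.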
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | The text to summarize | |
| model_name | No | The LLM model to use | openai |
| instruction | No | Specific instructions for summarization | |
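For reference, the JSON Schema an MCP client receives for this tool should look roughly like the sketch below. It is reconstructed from the table and the function signature, so the exact output of the schema generator may name or order fields differently:

```python
# Illustrative reconstruction of the tool's input schema, not the generator's exact output.
SUMMARIZE_TEXT_INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "content": {"type": "string", "description": "The text to summarize"},
        "model_name": {"type": "string", "default": "openai"},
        "instruction": {"type": "string"},
    },
    "required": ["content"],
}
```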
## Implementation Reference
- **whissle_mcp/server.py:384-450 (handler)** — The handler function for the `summarize_text` tool, decorated with `@mcp.tool` for registration. It uses the Whissle client's `llm_text_summarizer` method to generate summaries, includes input validation, logging, and retry logic for API errors, and returns a `TextContent` object with the summary. The helpers `make_error` and `handle_api_error` are referenced here but defined elsewhere in the module; a hedged sketch of them follows this list.

  ```python
  @mcp.tool(
      description="""Summarize text using an LLM model.

      ⚠️ COST WARNING: This tool makes an API call to Whissle which may incur costs.
      Only use when explicitly requested by the user.

      Args:
          content (str): The text to summarize
          model_name (str, optional): The LLM model to use. Defaults to "openai"
          instruction (str, optional): Specific instructions for summarization

      Returns:
          TextContent with the summary.
      """
  )
  def summarize_text(
      content: str,
      model_name: str = "openai",
      instruction: Optional[str] = None,
  ) -> TextContent:
      try:
          if not content:
              logger.error("Empty content provided for summarization")
              return make_error("Content is required")

          # Log the request details
          logger.info(f"Summarizing text using model: {model_name}")
          logger.info(f"Text length: {len(content)} characters")

          retry_count = 0
          max_retries = 2  # Increased from 1 to 2

          while retry_count <= max_retries:
              try:
                  logger.info(f"Attempting summarization (Attempt {retry_count+1}/{max_retries+1})")
                  response = client.llm_text_summarizer(
                      content=content,
                      model_name=model_name,
                      instruction=instruction,
                  )

                  if response and response.response:
                      logger.info("Summarization successful")
                      return TextContent(
                          type="text",
                          text=f"Summary:\n{response.response}",
                      )
                  else:
                      logger.error("No summary was returned from the API")
                      return make_error("No summary was returned from the API")

              except Exception as api_error:
                  error_msg = str(api_error)
                  logger.error(f"Summarization error: {error_msg}")

                  # Handle API errors with retries
                  error_result = handle_api_error(error_msg, "summarization", retry_count, max_retries)
                  if error_result is not None:  # If we should not retry
                      return error_result  # Return the error message
                  retry_count += 1

          # If we get here, all retries failed
          logger.error(f"All summarization attempts failed after {max_retries+1} attempts")
          return make_error(f"Failed to summarize text after {max_retries+1} attempts")

      except Exception as e:
          logger.error(f"Unexpected error during summarization: {str(e)}")
          return make_error(f"Failed to summarize text: {str(e)}")
  ```
- **whissle_mcp/server.py:384-397 (registration)** — The `@mcp.tool` decorator registers the `summarize_text` tool with MCP, including a schema description for the parameters `content` (str), `model_name` (str, optional), and `instruction` (Optional[str]), plus the return type `TextContent`. The decorator call appears at the top of the handler excerpt above.
- **whissle_mcp/server.py:385-397 (schema)** — The tool description provides the input schema: `content` (required str), `model_name` (optional str, default `"openai"`), `instruction` (optional str), and the output as `TextContent` with the summary. This is the docstring passed to `description=` in the decorator above.
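The handler calls two helpers that are not part of this excerpt. The sketch below is a hypothetical reconstruction of their shape, inferred only from how the handler uses them (`make_error` wraps an error message in a `TextContent`, and `handle_api_error` returns `None` to request a retry or a `TextContent` error to stop); the real implementations in whissle_mcp/server.py may differ.

```python
from typing import Optional

from mcp.types import TextContent  # standard MCP content type, matching what the handler returns


def make_error(message: str) -> TextContent:
    """Hypothetical sketch: wrap an error message in the TextContent shape the handler returns."""
    return TextContent(type="text", text=f"Error: {message}")


def handle_api_error(
    error_msg: str,
    operation: str,
    retry_count: int,
    max_retries: int,
) -> Optional[TextContent]:
    """Hypothetical sketch: decide whether the caller should retry.

    Returning None tells the handler to retry; returning a TextContent error
    stops the loop, matching the handler's `if error_result is not None` check.
    """
    retryable = any(hint in error_msg.lower() for hint in ("timeout", "429", "connection"))
    if retryable and retry_count < max_retries:
        return None
    return make_error(f"{operation} failed after {retry_count + 1} attempt(s): {error_msg}")
```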