list_saved_searches

Retrieve all saved searches from Splunk to view their names, descriptions, and search queries for monitoring and analysis.

Instructions

List all saved searches in Splunk

Returns:
    List of saved searches with their names, descriptions, and search queries

Input Schema

Name | Required | Description | Default

No arguments
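
For illustration, a successful call returns a list of dictionaries shaped like the example below. The values are hypothetical; actual names, descriptions, and queries depend entirely on the saved searches configured in the target Splunk instance.

    [
        {
            "name": "Errors last 24h",
            "description": "Counts error events per host",
            "search": "search index=_internal log_level=ERROR earliest=-24h | stats count by host"
        }
    ]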

Implementation Reference

  • The main handler function for the 'list_saved_searches' tool. It is registered via the @mcp.tool() decorator and implements the core logic: it connects to the Splunk service, iterates through all saved searches, extracts the name, description, and search query for each, handles errors per item, and returns a list of dictionaries.
    @mcp.tool()
    async def list_saved_searches() -> List[Dict[str, Any]]:
        """
        List all saved searches in Splunk
        
        Returns:
            List of saved searches with their names, descriptions, and search queries
        """
        try:
            service = get_splunk_connection()
            saved_searches = []
            
            for saved_search in service.saved_searches:
                try:
                    saved_searches.append({
                        "name": saved_search.name,
                        "description": saved_search.description or "",
                        "search": saved_search.search
                    })
                except Exception as e:
                    logger.warning(f"⚠️ Error processing saved search: {str(e)}")
                    continue
                
            return saved_searches
            
        except Exception as e:
            logger.error(f"❌ Failed to list saved searches: {str(e)}")
            raise
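
The handler depends on a module-level get_splunk_connection() helper and a logger that are not shown in this excerpt. As a minimal sketch, assuming the official splunk-sdk (splunklib) client and environment-variable configuration, such a helper might look like the following; the variable names and defaults are illustrative, not taken from the source:

    import logging
    import os

    from splunklib import client  # splunk-sdk

    logger = logging.getLogger(__name__)

    def get_splunk_connection() -> client.Service:
        # Connect to splunkd's management port using credentials taken from
        # the environment (illustrative variable names and defaults).
        return client.connect(
            host=os.environ.get("SPLUNK_HOST", "localhost"),
            port=int(os.environ.get("SPLUNK_PORT", "8089")),
            username=os.environ.get("SPLUNK_USERNAME", "admin"),
            password=os.environ.get("SPLUNK_PASSWORD", ""),
        )

With a connection in hand, service.saved_searches is the splunklib collection the handler iterates over; each entry exposes its name plus content fields such as description and search as attributes, which is what the tool reads.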

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states the return format ('List of saved searches with their names, descriptions, and search queries'), which is helpful, but lacks details on permissions, rate limits, pagination, or error handling for a read operation in Splunk.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description front-loads the core purpose in the first sentence and adds return details in the second. It is efficient, with no wasted wording, though slightly brief for a tool with no annotations, which keeps it from a perfect score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 0 parameters, 100% schema coverage, and no output schema, the description is minimally adequate. It explains what the tool does and the return format, but for a read operation in Splunk with no annotations, it could benefit from more behavioral context like access requirements or data scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and schema description coverage is 100%, so no parameter documentation is needed. The description appropriately omits parameter details, earning a high baseline score for not adding unnecessary information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('all saved searches in Splunk'), making the purpose unambiguous. However, it does not differentiate the tool from siblings like 'list_indexes' or 'list_users' beyond specifying the resource type, which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, limitations, or comparisons to sibling tools like 'search_splunk' or 'get_indexes_and_sourcetypes', leaving the agent without context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
