Glama
itshare4u

Agent Knowledge MCP

create_index

Create a new Elasticsearch index with custom mapping and optional settings configuration to organize searchable data.

Instructions

Create a new Elasticsearch index with optional mapping and settings configuration

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| index | Yes | Name of the new Elasticsearch index to create | |
| mapping | Yes | Index mapping configuration defining field types and properties | |
| settings | No | Optional index settings for shards, replicas, analysis, etc. | |
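To make the parameter shapes concrete, here is a hypothetical argument payload for the tool. The index name and field definitions are illustrative only, not taken from the project:

```python
# Hypothetical create_index arguments; names and fields are illustrative.
create_index_args = {
    "index": "team_documents",  # name of the new index
    "mapping": {                # becomes the index's "mappings" body
        "properties": {
            "title":   {"type": "text"},
            "tags":    {"type": "keyword"},
            "created": {"type": "date"},
        }
    },
    "settings": {               # optional shard/replica configuration
        "number_of_shards": 1,
        "number_of_replicas": 1,
    },
}
```

The `mapping` value is wrapped under `mappings` and `settings` under `settings` when the handler builds the request body, as shown in the implementation reference below.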

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |

Implementation Reference

  • Core handler implementation for 'create_index' tool. Performs Elasticsearch index creation with mandatory metadata governance checks, special handling for system indices, detailed success/error responses, and comprehensive exception handling for various failure modes.
    # Imports implied by this excerpt:
    from typing import Annotated, Any, Dict, Optional
    import json
    from pydantic import Field

    @app.tool(
        description="Create a new Elasticsearch index with optional mapping and settings configuration",
        tags={"elasticsearch", "create", "index", "mapping"}
    )
    async def create_index(
            index: Annotated[str, Field(description="Name of the new Elasticsearch index to create")],
            mapping: Annotated[
                Dict[str, Any], Field(description="Index mapping configuration defining field types and properties")],
            settings: Annotated[Optional[Dict[str, Any]], Field(
                description="Optional index settings for shards, replicas, analysis, etc.")] = None
    ) -> str:
        """Create a new Elasticsearch index with mapping and optional settings."""
        try:
            es = get_es_client()
    
            # Special case: Allow creating index_metadata without validation
            if index == "index_metadata":
                body = {"mappings": mapping}
                if settings:
                    body["settings"] = settings
    
                result = es.indices.create(index=index, body=body)
    
                return (f"✅ Index metadata system initialized successfully!\n\n" +
                        f"📋 **Metadata Index Created**: {index}\n" +
                        f"🔧 **System Status**: Index metadata management now active\n" +
                        f"✅ **Next Steps**:\n" +
                        f"   1. Use 'create_index_metadata' to document your indices\n" +
                        f"   2. Then use 'create_index' to create actual indices\n" +
                        f"   3. Use 'list_indices' to see metadata integration\n\n" +
                        f"🎯 **Benefits Unlocked**:\n" +
                        f"   • Index governance and documentation enforcement\n" +
                        f"   • Enhanced index listing with descriptions\n" +
                        f"   • Proper cleanup workflows for index deletion\n" +
                        f"   • Team collaboration through shared index understanding\n\n" +
                        f"📋 **Technical Details**:\n{json.dumps(result, indent=2, ensure_ascii=False)}")
    
            # Check if metadata document exists for this index
            metadata_index = "index_metadata"
            try:
                # Search for existing metadata document
                search_body = {
                    "query": {
                        "term": {
                            "index_name": index
                        }
                    },
                    "size": 1
                }
    
                metadata_result = es.search(index=metadata_index, body=search_body)
    
                if metadata_result['hits']['total']['value'] == 0:
                    return (f"❌ Index creation blocked - Missing metadata documentation!\n\n" +
                            f"🚨 **MANDATORY: Create Index Metadata First**:\n" +
                            f"   📋 **Required Action**: Before creating index '{index}', you must document it\n" +
                            f"   🔧 **Use This Tool**: Call 'create_index_metadata' tool first\n" +
                            f"   📝 **Required Information**:\n" +
                            f"      • Index purpose and description\n" +
                            f"      • Data types and content it will store\n" +
                            f"      • Usage patterns and access frequency\n" +
                            f"      • Retention policies and lifecycle\n" +
                            f"      • Related indices and dependencies\n\n" +
                            f"💡 **Workflow**:\n" +
                            f"   1. Call 'create_index_metadata' with index name and description\n" +
                            f"   2. Then call 'create_index' again to create the actual index\n" +
                            f"   3. This ensures proper documentation and governance\n\n" +
                            f"🎯 **Why This Matters**:\n" +
                            f"   • Prevents orphaned indices without documentation\n" +
                            f"   • Ensures team understands index purpose\n" +
                            f"   • Facilitates better index management and cleanup\n" +
                            f"   • Provides context for future maintenance")
    
            except Exception as metadata_error:
                # If metadata index doesn't exist, that's also a problem
                if "index_not_found" in str(metadata_error).lower():
                    return (f"❌ Index creation blocked - Metadata system not initialized!\n\n" +
                            f"🚨 **SETUP REQUIRED**: Index metadata system needs initialization\n" +
                            f"   📋 **Step 1**: Create metadata index first using 'create_index' with name 'index_metadata'\n" +
                            f"   📝 **Step 2**: Use this mapping for metadata index:\n" +
                            f"```json\n" +
                            f"{{\n" +
                            f"  \"properties\": {{\n" +
                            f"    \"index_name\": {{\"type\": \"keyword\"}},\n" +
                            f"    \"description\": {{\"type\": \"text\"}},\n" +
                            f"    \"purpose\": {{\"type\": \"text\"}},\n" +
                            f"    \"data_types\": {{\"type\": \"keyword\"}},\n" +
                            f"    \"created_by\": {{\"type\": \"keyword\"}},\n" +
                            f"    \"created_date\": {{\"type\": \"date\"}},\n" +
                            f"    \"usage_pattern\": {{\"type\": \"keyword\"}},\n" +
                            f"    \"retention_policy\": {{\"type\": \"text\"}},\n" +
                            f"    \"related_indices\": {{\"type\": \"keyword\"}},\n" +
                            f"    \"tags\": {{\"type\": \"keyword\"}}\n" +
                            f"  }}\n" +
                            f"}}\n" +
                            f"```\n" +
                            f"   🔧 **Step 3**: Then use 'create_index_metadata' to document your index\n" +
                            f"   ✅ **Step 4**: Finally create your actual index\n\n" +
                            f"💡 **This is a one-time setup** - once metadata index exists, normal workflow applies")
    
            # If we get here, metadata exists - proceed with index creation
            body = {"mappings": mapping}
            if settings:
                body["settings"] = settings
    
            result = es.indices.create(index=index, body=body)
    
            return f"✅ Index '{index}' created successfully:\n\n{json.dumps(result, indent=2, ensure_ascii=False)}"
    
        except Exception as e:
            # Provide detailed error messages for different types of Elasticsearch errors
            error_message = "❌ Failed to create index:\n\n"
    
            error_str = str(e).lower()
            if "connection" in error_str or "refused" in error_str:
                error_message += "🔌 **Connection Error**: Cannot connect to Elasticsearch server\n"
                error_message += f"📍 Check if Elasticsearch is running at the configured address\n"
                error_message += f"💡 Try: Use 'setup_elasticsearch' tool to start Elasticsearch\n\n"
            elif "already exists" in error_str or "resource_already_exists" in error_str:
                error_message += f"📁 **Index Exists**: Index '{index}' already exists\n"
                error_message += f"📍 Cannot create an index that already exists\n"
                error_message += f"💡 Try: Use 'delete_index' first, or choose a different name\n\n"
            elif "mapping" in error_str or "invalid" in error_str:
                error_message += f"📝 **Mapping Error**: Invalid index mapping or settings\n"
                error_message += f"📍 The provided mapping/settings are not valid\n"
                error_message += f"💡 Try: Check mapping syntax and field types\n\n"
            elif "permission" in error_str or "forbidden" in error_str:
                error_message += "🔒 **Permission Error**: Not allowed to create index\n"
                error_message += f"📍 Insufficient permissions for index creation\n"
                error_message += f"💡 Try: Check Elasticsearch security settings\n\n"
            else:
                error_message += f"⚠️ **Unknown Error**: {str(e)}\n\n"
    
            error_message += f"🔍 **Technical Details**: {str(e)}"
    
            return error_message
  • Pydantic-based input schema for the create_index tool, defining parameters index (str), mapping (Dict), settings (Optional Dict) with descriptions via Field.
    async def create_index(
            index: Annotated[str, Field(description="Name of the new Elasticsearch index to create")],
            mapping: Annotated[
                Dict[str, Any], Field(description="Index mapping configuration defining field types and properties")],
            settings: Annotated[Optional[Dict[str, Any]], Field(
                description="Optional index settings for shards, replicas, analysis, etc.")] = None
    ) -> str:
  • Mounts the elasticsearch_index sub-server app (containing create_index) into the unified Elasticsearch server app.
    from .sub_servers.elasticsearch_index import app as index_app
    from .sub_servers.elasticsearch_search import app as search_app
    from .sub_servers.elasticsearch_batch import app as batch_app
    
    # Create unified FastMCP application
    app = FastMCP(
        name="AgentKnowledgeMCP-Elasticsearch",
        version="2.0.0",
        instructions="Unified Elasticsearch tools for comprehensive knowledge management via modular server mounting"
    )
    
    # ================================
    # SERVER MOUNTING - MODULAR ARCHITECTURE
    # ================================
    
    print("🏗️ Mounting Elasticsearch sub-servers...")
    
    # Mount all sub-servers into unified interface
    app.mount(snapshots_app)           # 3 tools: snapshot management
    app.mount(index_metadata_app)      # 3 tools: metadata governance  
    app.mount(document_app)            # 3 tools: document operations
    app.mount(index_app)               # 3 tools: index management
  • Mounts the unified elasticsearch_server_app (including create_index via sub-mount) into the main AgentKnowledgeMCP server.
    from src.elasticsearch.elasticsearch_server import app as elasticsearch_server_app  
    from src.prompts.prompt_server import app as prompt_server_app
    
    # Import middleware
    from src.middleware.confirmation_middleware import ConfirmationMiddleware
    
    # Load configuration and initialize components
    CONFIG = load_config()
    init_security(CONFIG["security"]["allowed_base_directory"])
    
    # Initialize confirmation manager
    confirmation_manager = initialize_confirmation_manager(CONFIG)
    print(f"✅ Confirmation system initialized (enabled: {CONFIG.get('confirmation', {}).get('enabled', True)})")
    
    # Auto-setup Elasticsearch if needed
    print("🔍 Checking Elasticsearch configuration...")
    config_path = Path(__file__).parent / "config.json"
    setup_result = auto_setup_elasticsearch(config_path, CONFIG)
    
    if setup_result["status"] == "setup_completed":
        # Reload config after setup
        CONFIG = load_config()
        print("✅ Elasticsearch auto-setup completed")
    elif setup_result["status"] == "already_configured":
        print("✅ Elasticsearch already configured")
    elif setup_result["status"] == "setup_failed":
        print(f"⚠️  Elasticsearch auto-setup failed: {setup_result.get('error', 'Unknown error')}")
        print("📝 You can manually setup using the 'setup_elasticsearch' tool")
    
    init_elasticsearch(CONFIG)
    
    # Create main FastMCP server
    app = FastMCP(
        name=CONFIG["server"]["name"],
        version=CONFIG["server"]["version"],
        instructions="🏗️ AgentKnowledgeMCP - Modern FastMCP server with modular composition architecture for knowledge management, Elasticsearch operations, file management, and system administration"
    )
    
    # ================================
    # MIDDLEWARE CONFIGURATION
    # ================================
    
    print("🔒 Adding confirmation middleware...")
    
    # Add confirmation middleware to main server
    app.add_middleware(ConfirmationMiddleware())
    
    print("✅ Confirmation middleware added successfully!")
    
    # ================================
    # SERVER COMPOSITION - MOUNTING
    # ================================
    
    print("🏗️ Mounting individual servers into main server...")
    
    # Mount Elasticsearch server with 'es' prefix
    # This provides: es_search, es_index_document, es_create_index, etc.
    app.mount(elasticsearch_server_app, prefix="es")
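The final `except` block in the core handler above routes failures by substring matching on the exception text. Isolated as a standalone sketch (same heuristics as the handler, but `classify_es_error` is a name introduced here for illustration, not a project helper):

```python
def classify_es_error(error: Exception) -> str:
    """Map an Elasticsearch exception to a coarse category,
    mirroring the substring checks in the create_index handler."""
    error_str = str(error).lower()
    if "connection" in error_str or "refused" in error_str:
        return "connection"   # server unreachable
    if "already exists" in error_str or "resource_already_exists" in error_str:
        return "exists"       # index name collision; creation is not idempotent
    if "mapping" in error_str or "invalid" in error_str:
        return "mapping"      # bad mapping/settings payload
    if "permission" in error_str or "forbidden" in error_str:
        return "permission"   # insufficient privileges
    return "unknown"
```

Note that the checks run in order, so an error message mentioning both "mapping" and "permission" is classified as a mapping error — a quirk the handler inherits as well.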
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool creates an index with optional configuration but fails to address critical behavioral aspects like required permissions, whether the operation is idempotent, potential side effects on existing indices, or error conditions. This leaves significant gaps for an AI agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action ('Create a new Elasticsearch index') and includes essential details about optional configurations. There is no wasted language, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there is an output schema (which handles return values) and high schema coverage, the description is somewhat complete for basic understanding. However, as a mutation tool with no annotations, it lacks critical behavioral context like permissions or error handling, making it only minimally adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds minimal value by mentioning 'optional mapping and settings configuration', which aligns with the schema but doesn't provide additional semantic context beyond what's in the structured fields.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Create a new Elasticsearch index') and distinguishes it from sibling tools like 'delete_index' or 'list_indices'. It specifies the resource type (Elasticsearch index) and mentions optional configuration aspects, making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'setup_elasticsearch' or 'create_index_metadata', nor does it mention prerequisites or exclusions. It merely states what the tool does without contextual usage instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
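Pulling the review's points together, a revised tool description could disclose the behaviors the implementation actually has (the governance prerequisite, the already-exists failure, the setup alternative). This is one possible wording, not the project's:

```python
# One possible rewrite of the tool description, incorporating the review
# feedback above. Hypothetical text, not taken from the project.
IMPROVED_DESCRIPTION = (
    "Create a new Elasticsearch index with the given mapping and optional "
    "settings. Fails if the index already exists (not idempotent) and "
    "requires index-creation privileges on the cluster. A metadata document "
    "must be registered via 'create_index_metadata' first, except for the "
    "bootstrap index 'index_metadata'. Use 'setup_elasticsearch' if no "
    "cluster is configured yet."
)
```

Each claim in this rewrite is backed by the handler: the metadata-first check, the special-cased `index_metadata` bootstrap, the already-exists error branch, and the hint to run `setup_elasticsearch` on connection failure.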
