
Tribal Knowledge Service

by agentience

get_api_status

Check the operational status of the Tribal Knowledge Service API to verify connectivity and functionality for error tracking and learning.

Instructions

Check the API status.

Returns:
    API status information

Input Schema


No arguments

Implementation Reference

  • Handler function for the 'get_api_status' MCP tool. It checks the external API health by making a GET request to '/health' using the make_api_request helper.
    @mcp.tool()
    async def get_api_status() -> Dict:
        """
        Check the API status.
    
        Returns:
            API status information
        """
        return await make_api_request("GET", "/health")
  • Alternative handler for 'get_api_status' MCP tool. Returns static status information about the Tribal MCP server including version.
    @mcp.tool()
    async def get_api_status() -> Dict:
        """
        Check the API status.
    
        Returns:
            API status information
        """
        from mcp_server_tribal import __version__
    
        return {
            "status": "ok",
            "name": "Tribal",
            "version": __version__,
        }
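An agent consuming this payload only needs the three keys shown above. The following is a minimal sketch of checking the static handler's result; the handler is stubbed with a hard-coded version string here, since mcp_server_tribal may not be importable outside the server:

```python
import asyncio
from typing import Dict


async def get_api_status() -> Dict:
    # Stub of the static handler above; version is hard-coded because
    # mcp_server_tribal may not be installed in this environment.
    return {"status": "ok", "name": "Tribal", "version": "0.1.0"}


def is_healthy(status: Dict) -> bool:
    """Treat the service as healthy only when status is 'ok'."""
    return status.get("status") == "ok"


result = asyncio.run(get_api_status())
```

This keeps the health check on the caller's side trivial: any payload without `"status": "ok"` is treated as unhealthy.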
  • Helper utility function used by the get_api_status handler (and other tools) to perform HTTP requests to the backend Tribal API.
    async def make_api_request(
        method: str,
        endpoint: str,
        data: Optional[Dict] = None,
        params: Optional[Dict] = None,
    ) -> dict:
        """
        Make an API request to the Tribal API.
    
        Args:
            method: HTTP method (GET, POST, PUT, DELETE)
            endpoint: API endpoint
            data: Request data
            params: Query parameters
    
        Returns:
            API response
        """
        url = f"{API_URL}{endpoint}"
        headers = {"X-API-Key": API_KEY}
    
        async with httpx.AsyncClient() as client:
            if method == "GET":
                response = await client.get(url, headers=headers, params=params)
            elif method == "POST":
                response = await client.post(url, headers=headers, json=data)
            elif method == "PUT":
                response = await client.put(url, headers=headers, json=data)
            elif method == "DELETE":
                response = await client.delete(url, headers=headers)
            else:
                raise ValueError(f"Unsupported HTTP method: {method}")
    
            if response.status_code >= 400:
                logger.error(f"API request failed: {response.status_code} {response.text}")
                response.raise_for_status()
    
            if response.status_code == 204:  # No content
                return {}
    
            return response.json()
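The status-code branching at the end of make_api_request can be isolated for unit testing without a live backend. A sketch with a minimal stand-in response object (FakeResponse and handle_response are illustrative names, not part of the server code):

```python
from typing import Any, Dict, Optional


class FakeResponse:
    """Minimal stand-in for httpx.Response (illustrative only)."""

    def __init__(self, status_code: int, payload: Optional[Dict[str, Any]] = None):
        self.status_code = status_code
        self._payload = payload or {}

    def json(self) -> Dict[str, Any]:
        return self._payload


def handle_response(response: FakeResponse) -> Dict[str, Any]:
    # Mirrors the branch logic in make_api_request: 4xx/5xx raises,
    # 204 yields an empty dict, everything else is decoded as JSON.
    if response.status_code >= 400:
        raise RuntimeError(f"API request failed: {response.status_code}")
    if response.status_code == 204:  # No content
        return {}
    return response.json()
```

Factoring this branch out of the client code is one way to cover the 204 and error paths in tests without mocking httpx itself.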
  • Explicit tool execution handler that registers and dispatches the 'get_api_status' tool (among others) by calling the appropriate handler function.
    @mcp.handle_execution
    async def handle_execution(tool_name: str, params: Dict) -> Dict:
        """
        Handle tool execution.
    
        Args:
            tool_name: Name of the tool to execute
            params: Tool parameters
    
        Returns:
            Tool execution result
        """
        logger.info(f"Executing tool: {tool_name} with params: {json.dumps(params)}")
    
        if tool_name == "track_error":
            return await track_error(**params)
        elif tool_name == "find_similar_errors":
            return await find_similar_errors(**params)
        elif tool_name == "search_errors":
            return await search_errors(**params)
        elif tool_name == "get_error_by_id":
            return await get_error_by_id(**params)
        elif tool_name == "get_api_status":
            return await get_api_status()
        else:
            logger.error(f"Unknown tool: {tool_name}")
            raise ValueError(f"Unknown tool: {tool_name}")
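The if/elif chain above grows with every new tool. One common alternative is a dispatch table; a sketch under the assumption that each tool maps to one async handler, with the handlers stubbed here so the example is self-contained:

```python
import asyncio
from typing import Any, Awaitable, Callable, Dict


async def get_api_status() -> Dict:
    # Stub standing in for the real handler shown earlier.
    return {"status": "ok"}


async def track_error(**params: Any) -> Dict:
    # Stub; the real handler forwards the error to the Tribal API.
    return {"tracked": params}


# Registry mapping tool names to their async handlers.
TOOL_HANDLERS: Dict[str, Callable[..., Awaitable[Dict]]] = {
    "get_api_status": get_api_status,
    "track_error": track_error,
}


async def handle_execution(tool_name: str, params: Dict) -> Dict:
    handler = TOOL_HANDLERS.get(tool_name)
    if handler is None:
        raise ValueError(f"Unknown tool: {tool_name}")
    return await handler(**params)


result = asyncio.run(handle_execution("get_api_status", {}))
```

Adding a tool then only requires registering it in TOOL_HANDLERS rather than editing the dispatch function.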
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool 'Check[s] the API status' and 'Returns API status information', but doesn't disclose behavioral traits like whether it's read-only, requires authentication, has rate limits, or what specific information is included in the return. For a tool with zero annotation coverage, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences, but it's not front-loaded effectively. The first sentence states the purpose, but the second ('Returns: API status information') is redundant and doesn't add value beyond what's implied. It could be more structured to emphasize key details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what 'API status information' includes, how it's formatted, or any behavioral context. For a tool that might be critical for monitoring, this leaves too many gaps for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and schema description coverage is 100%, so no parameter documentation is needed. The description doesn't add parameter semantics, but this is appropriate given the lack of parameters, warranting a baseline score of 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool 'Check[s] the API status', which is a clear verb+resource combination. However, it doesn't differentiate this from sibling tools like 'track_error' or 'get_error_by_id' that might also provide status-related information, nor does it specify what aspects of API status are checked (health, uptime, version, etc.).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, timing, or how it differs from sibling tools like 'track_error' or 'search_errors' that might overlap in monitoring contexts. This leaves the agent with no usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
