
get_ai_status

Check the real-time operational status of major AI services including Claude, OpenAI, Gemini, Mistral, Cohere, Replicate, and Hugging Face. Identify outages or performance issues instantly.

Instructions

Get real-time operational status of major AI services (Claude, OpenAI, Gemini, Mistral, Cohere, Replicate, Hugging Face).

Input Schema


No arguments

Implementation Reference

  • The handler function for the 'get_ai_status' tool. It calls fetchJSON('/status') to get real-time operational status of major AI services (Claude, OpenAI, Gemini, Mistral, Cohere, Replicate, Hugging Face), formats the response, and returns the status text.
    server.tool(
      'get_ai_status',
      'Get real-time operational status of major AI services (Claude, OpenAI, Gemini, Mistral, Cohere, Replicate, Hugging Face).',
      {},
      async () => {
        const data = await fetchJSON('/status') as {
          services: { name: string; provider: string; status: string; components: { name: string; status: string }[] }[];
        };
    
        const text = data.services
          .map(s => {
            const components = s.components.length > 0
              ? '\n' + s.components.map(c => `     ${c.name}: ${c.status}`).join('\n')
              : '';
            return `  ${s.status === 'operational' ? 'OK' : s.status.toUpperCase()} ${s.name} (${s.provider})${components}`;
          })
          .join('\n');
    
        return { content: [{ type: 'text' as const, text: `AI Service Status:\n${text}` }] };
      }
    );
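  • As a standalone illustration of the formatting step above, the following sketch runs the same mapping logic over a hypothetical sample payload (the service names, providers, and statuses here are invented for the example; only the shape mirrors the /status response):

```typescript
type Component = { name: string; status: string };
type Service = { name: string; provider: string; status: string; components: Component[] };

// Hypothetical sample data mirroring the /status response shape.
const data: { services: Service[] } = {
  services: [
    { name: 'Claude', provider: 'Anthropic', status: 'operational', components: [] },
    {
      name: 'OpenAI',
      provider: 'OpenAI',
      status: 'degraded',
      components: [{ name: 'API', status: 'degraded' }],
    },
  ],
};

// Same formatting logic as the handler: 'OK' for operational services,
// upper-cased status otherwise, with indented per-component lines.
const text = data.services
  .map(s => {
    const components = s.components.length > 0
      ? '\n' + s.components.map(c => `     ${c.name}: ${c.status}`).join('\n')
      : '';
    return `  ${s.status === 'operational' ? 'OK' : s.status.toUpperCase()} ${s.name} (${s.provider})${components}`;
  })
  .join('\n');

console.log(text);
// → "  OK Claude (Anthropic)\n  DEGRADED OpenAI (OpenAI)\n     API: degraded"
```

A service with no components renders as a single line, while each component of a degraded service gets its own indented status line.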
  • The tool is registered via server.tool() with the name 'get_ai_status', no input schema (empty object), and a description string, as shown in the snippet above.
  • Input schema for get_ai_status is an empty object (no parameters required). The response type is inferred from the async handler returning { content: [{ type: 'text', text: string }] }.
    {},
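  • Because the input schema is empty, a client invokes the tool with an empty arguments object. A sketch of the JSON-RPC tools/call message (field names follow the MCP specification; the id value is arbitrary):

```typescript
// Sketch of the JSON-RPC request an MCP client sends to invoke this tool.
// The empty `arguments` object corresponds to the empty input schema `{}`.
const callRequest = {
  jsonrpc: '2.0' as const,
  id: 1,
  method: 'tools/call',
  params: {
    name: 'get_ai_status',
    arguments: {}, // no parameters are accepted
  },
};

console.log(JSON.stringify(callRequest));
```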
  • The fetchJSON helper function used by get_ai_status to make API calls to the TensorFeed API (API_BASE + '/status'). It handles headers, auth, and error responses.
    async function fetchJSON(path: string, opts: FetchOptions = {}): Promise<unknown> {
      const headers: Record<string, string> = {
        'User-Agent': `TensorFeed-MCP/${SDK_VERSION}`,
      };
      if (opts.body !== undefined) headers['Content-Type'] = 'application/json';
      if (opts.auth) {
        const token = process.env.TENSORFEED_TOKEN;
        if (!token) {
          throw new Error(
            'TENSORFEED_TOKEN env var is not set. Premium MCP tools require a bearer token. ' +
              'Buy credits at https://tensorfeed.ai/developers/agent-payments and pass the returned tf_live_... token via the TENSORFEED_TOKEN env var in your MCP client config.',
          );
        }
        headers['Authorization'] = `Bearer ${token}`;
      }
      const res = await fetch(`${API_BASE}${path}`, {
        method: opts.method ?? 'GET',
        headers,
        ...(opts.body !== undefined ? { body: JSON.stringify(opts.body) } : {}),
      });
      if (!res.ok) {
        let errPayload: unknown;
        try {
          errPayload = await res.json();
        } catch {
          errPayload = await res.text().catch(() => '');
        }
        if (res.status === 402) {
          throw new Error(
            `Payment required (402). Your token may be out of credits. Top up at https://tensorfeed.ai/developers/agent-payments. Detail: ${JSON.stringify(errPayload)}`,
          );
        }
        if (res.status === 401) {
          throw new Error(
            `Token rejected (401). Check that TENSORFEED_TOKEN is set to a valid tf_live_... token. Detail: ${JSON.stringify(errPayload)}`,
          );
        }
        throw new Error(`API error ${res.status}: ${JSON.stringify(errPayload)}`);
      }
      return res.json();
    }
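  • The FetchOptions type is referenced above but not defined in the excerpt. The following shape is inferred from how fetchJSON uses it (an assumption, not the actual source), with the header-building logic extracted into a standalone helper for illustration:

```typescript
// Inferred shape of FetchOptions; hypothetical, based only on usage in fetchJSON.
interface FetchOptions {
  method?: string; // HTTP method; fetchJSON defaults to 'GET'
  body?: unknown;  // JSON-serialized into the request body when present
  auth?: boolean;  // when true, a TENSORFEED_TOKEN bearer header is required
}

// Mirror of fetchJSON's header-building logic, with the token passed in
// explicitly instead of read from process.env.
function buildHeaders(opts: FetchOptions, token?: string): Record<string, string> {
  const headers: Record<string, string> = {
    'User-Agent': 'TensorFeed-MCP/0.0.0',
  };
  if (opts.body !== undefined) headers['Content-Type'] = 'application/json';
  if (opts.auth) {
    if (!token) throw new Error('TENSORFEED_TOKEN env var is not set.');
    headers['Authorization'] = `Bearer ${token}`;
  }
  return headers;
}

console.log(buildHeaders({ body: { q: 1 }, auth: true }, 'tf_live_example'));
```

Note that Content-Type is only sent when a body is present, and an authenticated call without a token fails fast with a descriptive error rather than sending an unauthenticated request.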
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden. It says 'real-time', implying fresh data, but does not disclose update frequency, caching behavior, or whether the operation is read-only. Adequate, but it leaves the behavioral details implicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence with no redundancy that directly conveys the purpose and scope. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema is provided. The description lists the covered services but does not specify what 'operational status' entails (e.g., online/offline state, latency). Given the sibling tools, more detail would help, but the description is minimally sufficient for a status check.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema declares no parameters (0 params, 100% schema coverage), so the description does not need to add parameter information. The baseline score of 4 is appropriate, since there are no absent parameters to explain.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and the resource ('real-time operational status'), and lists specific major AI services. It is distinguished from siblings like 'is_service_down' and 'status_uptime' by its broad check across multiple named providers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus alternatives such as 'is_service_down' (a specific outage check) or 'status_uptime' (uptime details). The description only states what the tool does, without any usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

    curl -X GET 'https://glama.ai/api/mcp/v1/servers/RipperMercs/tensorfeed'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.