ninja_query_installed_os_patches

Retrieve OS patch installation history across all managed devices. Filter by status and dates to track patch compliance.

Instructions

Query OS patch install history across all managed devices.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| df | No | Device filter expression | |
| pageSize | No | Max results to return | |
| cursor | No | Pagination cursor from previous response | |
| status | No | Filter by install status | |
| installedBefore | No | Filter patches installed before this date | |
| installedAfter | No | Filter patches installed after this date | |

Implementation Reference

  • The queryTool factory function generates the handler for ninja_query_installed_os_patches. The handler (line 26) calls client.get(path, clean(args)), which makes a GET request to '/queries/os-patch-installs' with cleaned arguments.
    function queryTool(
      name: string,
      description: string,
      path: string,
      extraProps: Record<string, unknown> = {},
    ): ToolDef {
      return {
        tool: {
          name,
          description,
          inputSchema: {
            type: 'object',
            properties: { ...basePaginationProps, ...extraProps },
          },
        },
        handler: async (args, client: NinjaOneClient) => client.get(path, clean(args)),
      };
    }
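The factory above relies on two helpers that are not shown on this page, `clean` and `basePaginationProps`. The sketch below is an assumption about what they likely contain, inferred from how they are used: `clean` presumably strips unset arguments so they never reach the query string, and `basePaginationProps` supplies the three pagination fields shared by every query tool.

```typescript
// Hypothetical sketch — neither helper's body appears on this page.
// `clean` likely drops undefined/null values so optional arguments
// are not serialized into the GET query string.
function clean(args: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(args).filter(([, v]) => v !== undefined && v !== null),
  );
}

// Pagination props shared by every query tool (df, pageSize, cursor),
// matching the input schema table above.
const basePaginationProps = {
  df: { type: 'string', description: 'Device filter expression' },
  pageSize: { type: 'number', description: 'Max results to return' },
  cursor: { type: 'string', description: 'Pagination cursor from previous response' },
};
```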
  • The tool definition/schema for ninja_query_installed_os_patches. It specifies the name, description, API path '/queries/os-patch-installs', and input schema with base pagination props (df, pageSize, cursor) plus extra filter props (status, installedBefore, installedAfter).
    queryTool(
      'ninja_query_installed_os_patches',
      'Query OS patch install history across all managed devices.',
      '/queries/os-patch-installs',
      {
        status: { type: 'string', description: 'Filter by install status' },
        installedBefore: { type: 'string', description: 'Filter patches installed before this date' },
        installedAfter: { type: 'string', description: 'Filter patches installed after this date' },
      },
    ),
  • The tool (via queryTools array) is spread into ALL_TOOLS which is the master list of all tool definitions registered with the MCP server.
    export const ALL_TOOLS = [
      ...deviceTools,
      ...organizationTools,
      ...alertTools,
      ...activityTools,
      ...ticketingTools,
      ...queryTools,
      ...policyTools,
      ...userTools,
      ...backupTools,
      ...systemTools,
    ];
  • src/index.ts:31-33 (registration)
    The MCP server registers the tool for listing via ListToolsRequestSchema handler, making it available to clients.
    server.setRequestHandler(ListToolsRequestSchema, async () => ({
      tools: ALL_TOOLS.map((def) => def.tool),
    }));
  • src/index.ts:35-60 (registration)
    The MCP server dispatches tool calls to the handler via CallToolRequestSchema, looking up the handler by tool name in the toolMap.
    server.setRequestHandler(CallToolRequestSchema, async (request) => {
      const { name, arguments: args } = request.params;
      const handler = toolMap.get(name);
    
      if (!handler) {
        return {
          content: [{ type: 'text', text: `Unknown tool: ${name}` }],
          isError: true,
        };
      }
    
      try {
        const result = await handler(
          (args ?? {}) as Record<string, unknown>,
          ninjaClient,
        );
        return {
          content: [{ type: 'text', text: JSON.stringify(result, null, 2) }],
        };
      } catch (err) {
        return {
          content: [{ type: 'text', text: err instanceof Error ? err.message : String(err) }],
          isError: true,
        };
      }
    });
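The dispatch handler above looks handlers up in `toolMap`, which is not shown on this page. It is presumably a name-to-handler index built from `ALL_TOOLS`; a minimal self-contained sketch of that assumption:

```typescript
// Hypothetical sketch: `toolMap` is referenced by the dispatch handler
// but not shown on this page. Presumably it indexes ALL_TOOLS by name.
type Handler = (
  args: Record<string, unknown>,
  client: unknown,
) => Promise<unknown>;

interface ToolDef {
  tool: { name: string; description: string };
  handler: Handler;
}

// Stand-in list with one entry; the real ALL_TOOLS aggregates every
// tool category shown earlier.
const ALL_TOOLS: ToolDef[] = [
  {
    tool: {
      name: 'ninja_query_installed_os_patches',
      description: 'Query OS patch install history across all managed devices.',
    },
    handler: async (args) => args, // stand-in handler for illustration
  },
];

const toolMap = new Map<string, Handler>(
  ALL_TOOLS.map((def) => [def.tool.name, def.handler]),
);
console.log(toolMap.has('ninja_query_installed_os_patches')); // → true
```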
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The tool carries no annotations, and the description only implies a read operation; it does not disclose required permissions, potential rate limits, or the scope of data retrieval (querying all managed devices at once could be expensive). More detail would improve transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no wasted words. While it could include more detail, it remains appropriately concise for the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

In the absence of an output schema, the description does not explain what the query returns or how to use the parameters effectively (e.g., date formats for the filters, how to continue with the pagination cursor), leaving an agent without enough detail to invoke the tool confidently on a first attempt.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds no meaning beyond what the schema already conveys. The baseline score of 3 is appropriate, as the schema alone documents the parameters sufficiently.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Query' and the concrete resource 'OS patch install history', and adds the scope 'across all managed devices', clearly distinguishing this tool from per-device patch queries. An agent can understand its purpose unambiguously.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this tool versus similar sibling tools like ninja_get_device_installed_os_patches or ninja_query_os_patches. The agent must infer from the description alone, which lacks explicit when-to-use or when-not-to-use indications.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
