
OSINT MCP Server

by glitch-cc

osint_company_research

Research companies using AI-powered intelligence gathering. Input a company name and optional domain to access comprehensive business insights.

Instructions

AI-powered company research using Perplexity.

Args:
    company: Company name
    domain: Company domain

Input Schema

Name     Required   Description   Default
company  Yes        –             –
domain   No         –             –
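From the table above, the tool's input schema likely resembles the following. This is a hypothetical reconstruction: only the field names and required flags come from the listing; the string types are assumed.

```python
# Hypothetical reconstruction of the osint_company_research input schema.
# Field names and required flags come from the table above; types are assumed.
input_schema = {
    "type": "object",
    "properties": {
        "company": {"type": "string"},  # company name (required)
        "domain": {"type": "string"},   # company domain (optional)
    },
    "required": ["company"],
}

# A minimal call payload that satisfies the schema:
example_call = {"company": "Acme Corp"}
missing_keys = [k for k in input_schema["required"] if k not in example_call]
```

Note that `domain` being absent from `required` is the only machine-readable hint that it is optional; nothing in the schema says what the domain is used for.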

Output Schema

No arguments

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'AI-powered' and 'using Perplexity,' hinting at external API usage and potential rate limits or costs, but doesn't specify authentication needs, response format, error handling, or whether it's read-only. For a tool with no annotation coverage, this leaves significant gaps in understanding its operational traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
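For context, the MCP specification defines optional tool annotations that would carry exactly this kind of behavioral disclosure. A sketch of what the server could declare; the hint values below are assumptions about this tool based on its description, not anything the server actually publishes:

```python
# MCP tool annotations as defined by the spec. The values are assumptions
# about osint_company_research inferred from its description, not declared
# by the server (which provides no annotations at all).
annotations = {
    "readOnlyHint": True,     # gathers intelligence; presumably writes nothing
    "destructiveHint": False, # no destructive behavior is implied
    "idempotentHint": True,   # repeated calls with the same args appear safe
    "openWorldHint": True,    # reaches an external API (Perplexity)
}
```

With annotations like these present, the description would only need to cover what the structured hints cannot: auth requirements, rate limits, and cost.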

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in one sentence, followed by a clear 'Args:' section listing parameters. It avoids redundancy and wastes no words, though the parameter explanations are overly brief. The structure is efficient but could carry slightly more detail in the args to improve clarity without sacrificing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no annotations, 0% schema description coverage, and an output schema present to document return values, the description is moderately complete. It covers the basic purpose and parameters but lacks behavioral details like rate limits or error cases. For a research tool with external dependencies, more context on usage constraints would improve completeness, though the output schema mitigates some gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It lists 'company' and 'domain' parameters with brief labels but no details on format, constraints, or interaction (e.g., if domain is optional for disambiguation). This adds minimal meaning beyond the schema's type definitions, partially addressing the coverage gap but not fully explaining usage nuances.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
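The 0% coverage could be closed by attaching per-parameter descriptions directly in the input schema. A hedged sketch of what that might look like; the description strings are illustrative, not taken from the server:

```python
# Illustrative only: a version of the input schema with inline parameter
# descriptions. The wording is invented to show what full coverage looks
# like; the actual server ships no parameter descriptions.
documented_schema = {
    "type": "object",
    "properties": {
        "company": {
            "type": "string",
            "description": "Legal or trading name of the company, "
                           "e.g. 'Acme Corp'.",
        },
        "domain": {
            "type": "string",
            "description": "Primary web domain (e.g. 'acme.com'); used to "
                           "disambiguate companies that share a name.",
        },
    },
    "required": ["company"],
}

# Schema description coverage: fraction of properties with a description.
props = documented_schema["properties"].values()
coverage = sum(1 for p in props if "description" in p) / len(props)
```

Even two short description strings like these would explain the one non-obvious interaction: that `domain` exists to disambiguate, not to restrict, the search.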

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'AI-powered company research using Perplexity,' which specifies the verb (research), resource (company), and method (Perplexity). It distinguishes from siblings like 'osint_company_enrich' or 'osint_company_people' by focusing on general research rather than enrichment or people-specific data. However, it doesn't explicitly contrast with 'osint_query' or 'osint_person_research,' leaving some ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention scenarios like initial company discovery, competitive analysis, or when to prefer it over siblings such as 'osint_company_enrich' for detailed data or 'osint_query' for broader searches. The lack of context makes it unclear how this tool fits into the OSINT workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

