Glama

Server Details

Financial intelligence: insider trades, SEC filings, 13F holdings, and market signals.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL
Repository
profitelligence/profitelligence-mcp-server
GitHub Stars
0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

7 tools
assess (Grade: B)

Position health check for a stock.

Returns material events, insider sentiment, institutional sentiment, technical signals, risk factors.

Args:
  symbol: Stock symbol to assess
  days: Lookback period (default 30)

Examples:
  assess("NVDA")
  assess("AAPL", days=90)
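The call syntax above is shorthand; over the wire, an MCP client issues a JSON-RPC `tools/call` request. A minimal sketch of that payload (the request shape follows the MCP specification; transport and server URL details are omitted):

```python
import json

def build_tool_call(name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 `tools/call` request, the wire format MCP
    clients use to invoke a server tool."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Equivalent of assess("AAPL", days=90)
req = build_tool_call("assess", {"symbol": "AAPL", "days": 90})
print(json.dumps(req, indent=2))
```

The same builder works for any of the seven tools; only `name` and `arguments` change.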

Parameters (JSON Schema):
  days: optional
  symbol: required

Output Schema

  result: required

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states the tool returns multiple data categories but doesn't describe rate limits, authentication needs, error conditions, data freshness, or whether it's a read-only operation. The description is functional but lacks important operational context that would help an agent use it effectively.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a clear purpose statement, bulleted return categories, parameter explanations, and examples. Each section serves a distinct purpose. While slightly longer than minimal, every sentence adds value. The front-loaded purpose statement helps agents quickly understand the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (which handles return value documentation) and relatively simple parameters, the description covers the basics adequately. However, for a tool with no annotations and multiple sibling alternatives, it should provide more behavioral context and usage guidance. The parameter explanations compensate for the schema's lack of descriptions, but operational transparency remains limited.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate for the schema's lack of parameter documentation. It successfully explains both parameters: 'symbol' as 'Stock symbol to assess' and 'days' as 'Lookback period (default 30)'. The examples further clarify usage. However, it doesn't specify format requirements (e.g., symbol casing) or constraints (e.g., valid day ranges).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs a 'health check for a stock' and lists the specific types of information returned (material events, insider sentiment, institutional sentiment, technical signals, risk factors). This distinguishes it from siblings like 'institutional' or 'screen' by specifying its comprehensive assessment nature. However, it doesn't explicitly contrast with all siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'investigate', 'pulse', or 'screen'. It mentions what the tool returns but gives no context about appropriate scenarios, prerequisites, or exclusions. The examples show basic usage but don't explain decision criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

institutional (Grade: A)

Institutional investor intelligence from 13F filings.

Query types:

  • "manager": Profile an institutional investor (by name or CIK)

  • "security": Institutional ownership landscape for a stock

  • "signal": Find stocks with institutional flow patterns

Args:
  query_type: Type of query ("manager", "security", "signal")
  identifier: Symbol or manager name/CIK (required for manager/security)
  signal_type: For signal queries - "accumulation", "distribution", "conviction", "new"
  limit: Max results (default 25)

Examples:
  institutional("manager", identifier="Citadel")
  institutional("security", identifier="NVDA")
  institutional("signal", signal_type="accumulation")
  institutional("signal")  # Overview of all signals
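The identifier requirement for manager/security queries can be enforced client-side before the call goes out. A sketch of that validation, assuming only the rules stated above (the server may apply additional checks of its own):

```python
VALID_QUERY_TYPES = {"manager", "security", "signal"}

def institutional_args(query_type, identifier=None, signal_type=None, limit=25):
    """Validate and assemble arguments per the documented rules:
    identifier is required for "manager" and "security" queries."""
    if query_type not in VALID_QUERY_TYPES:
        raise ValueError(f"query_type must be one of {sorted(VALID_QUERY_TYPES)}")
    if query_type in ("manager", "security") and identifier is None:
        raise ValueError(f"identifier is required for {query_type!r} queries")
    args = {"query_type": query_type, "limit": limit}
    if identifier is not None:
        args["identifier"] = identifier
    if signal_type is not None:
        args["signal_type"] = signal_type
    return args

# institutional("signal") -> overview of all signals, default limit
print(institutional_args("signal"))
```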

Parameters (JSON Schema):
  limit: optional
  identifier: optional
  query_type: required
  signal_type: optional

Output Schema

  result: required

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the tool's functionality and query types but lacks details on behavioral traits like rate limits, authentication needs, data freshness, or error handling. The description doesn't contradict annotations (none exist), but it provides only basic operational context without deeper behavioral disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by query types, arguments, and examples. Each section is concise and informative, with no wasted sentences. The bullet points and examples enhance readability without unnecessary verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (4 parameters, 0% schema coverage, no annotations) and the presence of an output schema, the description is fairly complete. It covers the tool's purpose, query types, parameter semantics, and usage examples. However, it could be more complete by addressing behavioral aspects like rate limits or error cases, which are important for a tool with multiple query modes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It effectively adds meaning by explaining each parameter's purpose: query_type defines the query category, identifier specifies the target for manager/security queries, signal_type indicates the flow pattern for signal queries, and limit controls result count. This clarifies semantics beyond the bare schema, though it could provide more detail on identifier formats or signal_type options.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides 'institutional investor intelligence from 13F filings' and specifies three distinct query types with their purposes: profiling investors, analyzing ownership landscapes, and identifying flow patterns. This is specific, uses clear verbs, and distinguishes this tool's domain from its siblings (assess, investigate, etc.).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use each query type (e.g., 'Profile an institutional investor', 'Institutional ownership landscape for a stock'), but it does not explicitly state when to use this tool versus its sibling tools (assess, investigate, etc.) or mention any exclusions. The examples help illustrate usage but don't provide comparative guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

investigate (Grade: A)

Research any entity - company, insider, or sector.

Auto-detects type from subject:

  • Stock symbols (AAPL) → company

  • CIK numbers (0001067983) → insider

  • Sector names (Technology) → sector

Args:
  subject: Symbol, CIK, or sector name
  entity_type: Optional override - "company", "insider", or "sector"
  days: Lookback period (default 30)

Examples:
  investigate("AAPL")
  investigate("0001067983")
  investigate("Technology", entity_type="sector")
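The auto-detection rules can be sketched as a simple heuristic. The server's actual detection logic is not published; this only mirrors the three documented cases:

```python
def detect_entity_type(subject):
    """Guess the entity type from the subject string, mirroring the
    documented auto-detection rules:
      all digits       -> insider (CIK, e.g. "0001067983")
      short uppercase  -> company (ticker, e.g. "AAPL")
      anything else    -> sector  (e.g. "Technology")
    """
    s = subject.strip()
    if s.isdigit():
        return "insider"
    if s.isupper() and 1 <= len(s) <= 5:
        return "company"
    return "sector"
```

When the heuristic guesses wrong (e.g. a sector name typed in caps), the `entity_type` override exists precisely to force the interpretation.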

Parameters (JSON Schema):
  days: optional
  subject: required
  entity_type: optional

Output Schema

  result: required

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It explains the auto-detection feature and default lookback period, which adds useful context beyond basic functionality. However, it lacks details on permissions, rate limits, error handling, or what specific data is returned, leaving gaps for a research tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by auto-detection rules, parameter explanations, and examples. Every sentence adds value without redundancy, making it efficient and easy to scan.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, 0% schema coverage, no annotations, but with an output schema present, the description does a good job covering input semantics and usage. It explains parameters and provides examples, though it could benefit from more behavioral details like data sources or limitations, which the output schema may address.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It effectively explains all three parameters: 'subject' with auto-detection examples, 'entity_type' as an optional override with valid values, and 'days' as a lookback period with default. The examples clarify usage, though it could add more on parameter constraints or formats.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Research') and identifies the target resources ('any entity - company, insider, or sector'). It distinguishes itself from siblings by focusing on entity research rather than assessment, institutional data, market pulse, screening, searching, or service info.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool by explaining auto-detection logic based on subject input patterns (stock symbols → company, CIK numbers → insider, sector names → sector). However, it does not explicitly state when not to use it or name specific alternatives among the sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pulse (Grade: B)

Market snapshot - what's happening right now.

Returns market movers, recent filings, insider trades, economic indicators. No parameters needed.

Example: pulse()

Parameters (JSON Schema): none

Output Schema

  result: required

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the return data types but doesn't cover critical aspects like rate limits, authentication needs, data freshness, or potential side effects. For a tool with zero annotation coverage, this leaves significant gaps in understanding its operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose followed by return details and usage notes. Every sentence adds value without waste, though the example could be slightly more informative. It's efficient but not perfectly structured for maximum clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simple snapshot with no parameters) and the presence of an output schema (which handles return values), the description is minimally adequate. However, with no annotations and sibling tools present, it lacks context on when to use this versus alternatives and misses behavioral details, making it incomplete for optimal agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the schema already fully documents the lack of inputs. The description adds value by explicitly stating 'No parameters needed' and providing an example call, which clarifies usage beyond the schema. This earns a baseline 4 for effectively compensating with clear guidance.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as returning a 'market snapshot' with specific data types (market movers, recent filings, insider trades, economic indicators), which is a specific verb+resource combination. However, it doesn't distinguish this from sibling tools like 'assess' or 'screen' that might also provide market-related information, preventing a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance by stating 'No parameters needed' and including an example call, but it doesn't explain when to use this tool versus alternatives like 'assess' or 'screen' for market analysis. There's no explicit when/when-not context or mention of prerequisites, leaving usage unclear relative to siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

screen (Grade: C)

Scan market for opportunities.

Args:
  focus: "all", "multi_signal", "insider", or "events"
  sector: Filter by sector (e.g., "Technology")
  min_score: Minimum score 0-100
  days: Lookback period (default 7)
  limit: Max results (default 25)

Examples:
  screen()
  screen(focus="insider", sector="Technology")
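Since every parameter is optional, a caller can capture the documented defaults and the 0-100 bound on min_score in a small helper. A sketch under those assumptions (the helper is illustrative, not part of the server):

```python
SCREEN_DEFAULTS = {"focus": "all", "days": 7, "limit": 25}
ALLOWED_PARAMS = {"focus", "sector", "min_score", "days", "limit"}

def screen_args(**overrides):
    """Merge caller overrides onto the documented defaults and
    range-check min_score (0-100, per the tool description)."""
    unknown = set(overrides) - ALLOWED_PARAMS
    if unknown:
        raise ValueError(f"unknown parameters: {sorted(unknown)}")
    if "min_score" in overrides and not 0 <= overrides["min_score"] <= 100:
        raise ValueError("min_score must be between 0 and 100")
    return {**SCREEN_DEFAULTS, **overrides}

# screen(focus="insider", sector="Technology")
print(screen_args(focus="insider", sector="Technology"))
```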

Parameters (JSON Schema):
  days: optional
  focus: optional (default: all)
  limit: optional
  sector: optional
  min_score: optional

Output Schema

  result: required

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions scanning for opportunities but doesn't describe what kind of data is returned, whether this is a read-only operation, if there are rate limits, authentication requirements, or what constitutes an 'opportunity' in the output. The examples show usage but don't explain behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, Args, Examples). The opening statement is front-loaded, and each sentence serves a purpose. The Args section is particularly efficient in explaining multiple parameters concisely.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there's an output schema (which handles return values) and the description covers all parameters well, the main gaps are in behavioral transparency and usage guidelines. For a tool with 5 parameters and no annotations, the description does an adequate job but could better explain what 'opportunities' means and how this differs from sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates well by providing clear explanations for all 5 parameters in the Args section. Each parameter gets a brief but meaningful explanation that adds value beyond the bare schema, including enum values for 'focus', examples for 'sector', and ranges for 'min_score'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Scan market for opportunities' which gives a general purpose, but it's vague about what constitutes an 'opportunity' and doesn't distinguish this tool from sibling tools like 'assess', 'investigate', or 'search'. The verb 'scan' is somewhat specific but lacks clear resource identification beyond 'market'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided about when to use this tool versus alternatives like 'assess', 'investigate', or 'search'. The description only explains what the tool does, not when it's appropriate or what problems it solves compared to other tools on the server.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

service_info (Grade: A)

Info about Profitelligence service and your account.

Args:
  info_type: What info to retrieve
    - "overview": Service description and capabilities
    - "profile": Your subscription tier, features, and account status
    - "pricing": Subscription tiers and pricing
    - "capabilities": Available tools and data sources
    - "status": Server configuration and health

Examples:
  service_info()           # Overview
  service_info("profile")  # Your account
  service_info("pricing")  # Pricing info

Parameters (JSON Schema):
  info_type: optional (default: overview)

Output Schema

  result: required

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates this is a read-only info retrieval tool (implied by 'Info about'), but doesn't specify authentication needs, rate limits, or error handling. It adds some context with examples, but lacks details on response format or potential side effects, making it adequate but with gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with a clear purpose statement, followed by organized sections for args and examples. Every sentence earns its place by providing necessary information without redundancy, making it efficient and easy to scan for key details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 optional parameter) and the presence of an output schema (which handles return values), the description is mostly complete. It covers purpose, parameter semantics, and usage examples adequately. However, it could improve by addressing behavioral aspects like authentication or linking to sibling tools, leaving minor gaps in contextual guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema, which has 0% description coverage. It clearly explains the info_type parameter with specific options (overview, profile, pricing, capabilities, status) and their purposes, along with examples for usage. This fully compensates for the schema's lack of documentation, providing essential semantic context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves information about the Profitelligence service and user account, using the verb 'retrieve' with specific resource types. However, it doesn't differentiate from sibling tools like 'assess' or 'investigate' which might also provide information, so it doesn't fully distinguish its specific scope from alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through the examples (e.g., service_info() for overview, service_info('profile') for account details), suggesting when to use different info_type values. However, it lacks explicit guidance on when to choose this tool over siblings like 'search' or 'investigate' for information retrieval, leaving context somewhat implied rather than stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
