FundzWatch MCP Server

Server Quality Checklist

83%
Profile completion. A complete profile improves this server's visibility in search results.
  • Disambiguation 4/5

    Most tools have distinct purposes, but get_events and get_watchlist_events overlap significantly in functionality, which could cause confusion for an agent. The other tools like get_market_brief, get_market_pulse, and get_scored_leads are clearly differentiated by their specific focuses on analysis, activity overview, and sales leads respectively.

    Naming Consistency 5/5

    All tool names follow a consistent verb_noun pattern using snake_case, starting with 'get_' or 'manage_', which makes them predictable and easy to understand. This uniformity aids in agent navigation and reduces cognitive load when selecting tools.

    Tool Count 5/5

    With 7 tools, the server is well-scoped for its purpose of providing business intelligence and watchlist management. Each tool serves a specific function without redundancy, making the count appropriate for covering core operations like data retrieval, analysis, and user management.

    Completeness 4/5

    The tool set covers key areas such as real-time events, market analysis, sales leads, usage tracking, and watchlist management, offering a comprehensive surface for business intelligence. A minor gap exists in lacking direct tools for deeper historical analysis or customizable alert settings, but agents can work around this with the provided tools.

  • Average 3.5/5 across 7 of 7 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v1.0.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • This repository includes a glama.json configuration file.

  • This server provides 7 tools.
  • No known security issues or vulnerabilities reported.


  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Adds 'real-time' context not in schema, which is valuable. However, with no annotations provided, the description should disclose more: authentication requirements, rate limits, pagination behavior, or failure modes when no events match filters.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two well-structured sentences with the specific examples front-loaded. Suffers only slightly from not mentioning the temporal window (days parameter), which is a key behavioral constraint.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for 5 parameters with complete schema coverage and no output schema, but lacks temporal scope clarification (the lookback window is only in schema) and differentiation from similar event-retrieval siblings.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing baseline 3. Description mentions filtering by 'type, industry, and location' which maps to parameters, but adds no syntax clarification beyond the schema's comma-separated format notes.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clearly states the tool retrieves business events (funding, acquisitions, etc.) with specific resource types listed. However, it fails to distinguish from sibling tool get_watchlist_events, which also presumably retrieves events.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use this general event feed versus alternatives like get_watchlist_events (user-curated) or get_market_brief. No prerequisites or exclusion criteria mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It successfully discloses the ongoing behavioral side effect (tracked companies generate alerts for new events), but lacks other behavioral details such as whether removal is permanent, if operations are idempotent, or any rate limiting.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two efficient sentences with zero waste. The first front-loads the core CRUD functionality while the second explains the consequence (alerts), earning its place by justifying why the tool exists.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the simple schema (2 params, 1 enum) and lack of output schema, the description is minimally adequate. It explains the watchlist concept and alert side-effect but omits details about return values or specific behavioral differences between the three actions.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the baseline is 3. The description mentions 'companies' which contextually maps to the 'domains' parameter, and references the action types, but does not add significant semantic detail (format constraints, required/optional logic per action) beyond what the schema already provides.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description clearly states the three supported operations (add, remove, list) and the target resource (companies on a watchlist). It also explains the value proposition (alert generation). It does not explicitly differentiate from the sibling `get_watchlist_events`, which is the main gap preventing a 5.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description explains what actions are possible (list, add, remove) but provides no guidance on when to choose this tool over siblings like `get_watchlist_events` or when to use each specific action vs. the others. No explicit when-to-use or when-not-to-use guidance is present.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. 'Recent' alludes to temporal filtering (aligns with 'days' parameter) and enumerated event types hint at valid values for 'types' parameter. However, lacks disclosure on pagination, empty result handling, rate limits, or data freshness.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence, front-loaded with action verb. Colon-separated list efficiently documents event categories without verbosity. No redundant or wasted text.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a 2-parameter tool with complete schema coverage. Description explains what is retrieved (watchlist events) and their nature. Lacks output format details, but acceptable given no output schema exists and the tool's straightforward purpose.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% (baseline 3). Description adds significant value by enumerating specific event type examples (funding, acquisitions, executive hires, contracts) that help populate the 'types' parameter, translating the schema's generic 'Comma-separated event types' into concrete business events.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb 'Get' + resource 'events' + clear scope 'companies on your watchlist'. Lists concrete event categories (funding, acquisitions, etc.) that clarify return values. Mentions 'watchlist' which distinguishes from sibling 'get_events', though doesn't explicitly contrast them.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit guidance on when to use versus alternatives (e.g., get_events). No mention of prerequisites like requiring a populated watchlist first. Usage must be inferred from the name and 'watchlist' reference alone.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Adds valuable context that content is 'AI-generated' and scoped to 'today', which helps set expectations about source and temporality. However, with no annotations provided, the description carries full disclosure burden and omits critical behavioral details: caching behavior, idempotency (will the same brief be returned on repeated calls?), side effects, or rate limits.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single well-structured sentence with zero waste. Front-loaded with action verb and immediately describes the distinct nature (AI-generated, narrative) and content scope (movements, patterns, opportunities). Every clause earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a parameterless tool, but lacks description of return structure given no output schema exists. Mentions 'brief' and 'narrative analysis' but doesn't describe format, length, or specific data fields returned, which would help the agent consume the result correctly.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Zero input parameters present. Per scoring rules, 0 params = baseline 4. No parameter documentation required or provided.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear verb 'Get' and specific resource 'AI-generated strategic intelligence brief'. Describes content type well (narrative analysis of movements, patterns, opportunities). However, does not explicitly differentiate from sibling 'get_market_pulse' which likely also retrieves market data.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implies usage context through 'strategic intelligence' and 'narrative analysis' (vs raw data), suggesting when to use this over alternatives. However, lacks explicit when/when-not guidance or named alternatives for different use cases.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full disclosure burden. It successfully explains what the AI scoring evaluates (intent, stage, strategy) and that results include recent business events, but omits operational details like read-only safety, rate limits, or caching behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two well-constructed sentences with zero waste. First sentence establishes purpose and ICP context immediately; second sentence details return values. Perfectly front-loaded and appropriately sized.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Without an output schema, the description adequately discloses return values (companies with scored dimensions). The 100% parameter coverage means the description doesn't need to elaborate on inputs. Minor deduction for not confirming read-only nature given lack of annotations.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with clear descriptions for all four parameters. The description adds minimal semantic meaning beyond the schema other than the ICP framing, which contextualizes why one would filter by industries or buying stages. Baseline 3 is appropriate given the schema's completeness.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clearly identifies the resource (AI-scored sales leads) and action (Get), and distinguishes from siblings by specifying ICP-based matching and AI-generated dimensions (buyer intent, buying stage, outreach strategy). Falls short of 5 only because it does not explicitly contrast with sibling tools like get_events.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implies targeted use case via 'based on your ICP (Ideal Customer Profile),' hinting that an ICP must be configured first. However, lacks explicit when-to-use guidance relative to get_events or get_watchlist_events, and does not state prerequisites clearly.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations, description carries full disclosure burden. Adds valuable behavioral constraints: 'real-time' indicates data freshness, and 'past 7 and 30 days' defines fixed, non-configurable lookback windows. However, omits permissions, rate limits, and error conditions typically expected for unannotated tools.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence, front-loaded with action and resource. Efficiently enumerates data categories and timeframes without redundancy. Every element earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Comprehensive for a parameterless tool. Compensates for missing output schema by enumerating specific return data categories and temporal scope. Adequate for tool complexity, though could optionally note data aggregation level.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema has zero parameters. Per scoring rules, zero-parameter tools receive baseline 4. Description appropriately requires no parameter clarification.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Provides specific verb ('Get'), resource ('market activity overview'), and detailed scope including specific metrics (funding totals, acquisition counts, executive moves, contracts, product launches) and fixed timeframes (past 7 and 30 days). Clear about what the tool retrieves, though lacks explicit differentiation from similar sibling 'get_market_brief'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Lists specific content categories providing implied usage context for when this aggregated data is needed, but lacks explicit when/when-not guidance or named alternative tools (e.g., when to use 'get_market_brief' or 'get_events' instead).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It discloses return values (calls made, limits, current tier) compensating for the missing output schema, but omits caching behavior, rate limit implications, or whether this request counts against quota.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely concise at 11 words. Front-loaded with the action verb, uses a colon to efficiently separate the operation from return value details, with zero redundant content.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a zero-parameter read tool, the description adequately compensates for missing output schema by enumerating the three data fields returned. Could be improved by noting if authentication is required or caching behavior.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Baseline 4 applies per rules (0 parameters). The description appropriately makes no mention of parameters since none exist, and the empty schema requires no additional semantic clarification.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Check') and resource ('FundzWatch API usage') and explicitly distinguishes from siblings by targeting account metadata (calls, limits, tier) rather than business data like events or leads.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit when-to-use guidance or alternatives are mentioned. While the scope is clear from context, there is no guidance on when to prefer this over other tools or prerequisites for calling it.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions, yielding a per-tool definition quality score (TDQS): Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
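The weighting scheme above can be sketched in code. This is a minimal illustration of the formula as described in this section; the dictionary keys, function names, and aggregation order are assumptions, not Glama's actual implementation.

```python
# Per-tool dimension weights, taken from the description above.
TDQ_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 definition quality score for a single tool."""
    return sum(TDQ_WEIGHTS[dim] * scores[dim] for dim in TDQ_WEIGHTS)

def server_quality(tool_scores: list, coherence_scores: list) -> float:
    """Overall score: 70% definition quality, 30% coherence.

    Definition quality blends the mean and the minimum TDQS (60/40),
    so one poorly described tool drags the whole server down.
    """
    tdqs = [tool_tdqs(s) for s in tool_scores]
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    coherence = sum(coherence_scores) / len(coherence_scores)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score onto the A-F tiers."""
    for cutoff, label in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return label
    return "F"
```

Plugging in the first tool's scores from this report (Purpose 4, Usage Guidelines 2, Behavior 3, Parameters 3, Conciseness 4, Completeness 3) gives a TDQS of 3.15, which on its own would land in tier B.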

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Fund-z/fundzwatch-mcp'
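The same request can be made from Python. This is a sketch assuming the endpoint returns a JSON document; `fetch_server_info` is an illustrative name, not part of the API, and the response shape is not documented here.

```python
import json
import urllib.request

# Python equivalent of the curl command above.
URL = "https://glama.ai/api/mcp/v1/servers/Fund-z/fundzwatch-mcp"

def fetch_server_info(url: str = URL) -> dict:
    """Fetch the directory entry for this server, parsed from JSON."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```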

If you have feedback or need assistance with the MCP directory API, please join our Discord server.