
Dominant Constraint Identification

identify_constraint
Read-only · Idempotent

Pinpoint your revenue bottleneck by analyzing pipeline coverage, conversion rates, velocity, and deal traits to identify which of four constraints—lead generation, conversion, delivery, or profitability—limits growth.

Instructions

Identify the dominant scaling constraint bottlenecking revenue.

Analyzes pipeline coverage, conversion rates, velocity, and deal characteristics to determine which of 4 constraints is dominant: Lead Generation, Conversion, Delivery, or Profitability.

Returns the Revenue Formula breakdown (Traffic × CR1 × CR2 × ... × ACV × 1/Churn) with gap-to-benchmark for each lever and the weakest link.
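The gap-to-benchmark and weakest-link logic described above can be sketched as follows. This is a hypothetical illustration, not the server's implementation; the lever names, values, and benchmarks are invented for the example.

```python
# Hypothetical sketch of the "weakest link" computation: for each revenue
# lever, measure its shortfall against a benchmark, then pick the lever
# with the largest relative gap. Values below are illustrative only.

def weakest_link(levers: dict[str, float],
                 benchmarks: dict[str, float]) -> tuple[str, dict[str, float]]:
    """Return the dominant constraint and each lever's gap-to-benchmark.

    The gap is expressed as a relative shortfall:
    (benchmark - actual) / benchmark, so 0.20 means 20% below benchmark.
    """
    gaps = {
        name: (benchmarks[name] - value) / benchmarks[name]
        for name, value in levers.items()
    }
    dominant = max(gaps, key=gaps.get)
    return dominant, gaps

# Illustrative quarterly figures (assumed, not real defaults).
levers = {"traffic": 800, "cr1": 0.25, "cr2": 0.30,
          "acv": 13_000, "retention": 0.85}
benchmarks = {"traffic": 1_000, "cr1": 0.30, "cr2": 0.35,
              "acv": 15_000, "retention": 0.90}

dominant, gaps = weakest_link(levers, benchmarks)
print(dominant)  # the lever furthest below benchmark
```

In this toy data, traffic sits 20% below benchmark, the largest shortfall, so lead generation would surface as the dominant constraint.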

Args:

- source: "auto" (uses HubSpot if an API key is set, otherwise sample data), "hubspot" for live data, "sample" for built-in demo data.
- pipeline_id: Optional HubSpot pipeline ID to filter by.
- quota: Optional quarterly revenue quota for the pipeline coverage calculation.

Returns: JSON with dominant constraint, severity scores, revenue formula, and recommended focus.
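For concreteness, a call to this tool over MCP would use the standard JSON-RPC `tools/call` framing. The argument values below are illustrative; only the parameter names come from the tool's schema.

```python
import json

# A minimal MCP tools/call request for identify_constraint
# (JSON-RPC 2.0 framing per the MCP spec). Argument values are
# made up for the example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "identify_constraint",
        "arguments": {
            "source": "sample",   # "auto" | "hubspot" | "sample"
            "quota": 500_000,     # quarterly revenue quota
            # "pipeline_id" omitted -> no pipeline filter
        },
    },
}
print(json.dumps(request, indent=2))
```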

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| source | No | | auto |
| pipeline_id | No | | |
| quota | No | | |

Output Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| result | Yes | | |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint, destructiveHint, idempotentHint, and openWorldHint. The description adds value by explaining the analysis logic and return structure. It could be more explicit about being a non-modifying analysis, but annotations sufficiently cover the safety profile.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is around 130 words, well-structured with an opening purpose statement, analysis method, parameter details, and return description. Every sentence adds value, and the front-loaded style ensures key information is immediate.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 3 parameters, multiple data sources, and analytical output, the description covers purpose, methodology, parameters, and return value. An output schema exists, but the description still summarizes the JSON structure appropriately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description must explain all parameters. It does so comprehensively: source options (auto, hubspot, sample) with behavior, and optional pipeline_id and quota with their purposes. This fully compensates for the lack of schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool identifies the dominant scaling constraint bottlenecking revenue, using the specific verb 'identify' and the resource 'constraint'. It details the analysis of pipeline metrics and the four constraint types, distinguishing it from sibling tools like analyze_engine or score_pipeline_health.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains parameter options (source values, optional pipeline_id, quota) and when to use each data source. However, it does not explicitly state when not to use this tool or provide direct comparisons to sibling tools, though the purpose is distinct enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
