
Server Details

Venturu's MCP server turns AI agents into deal scouts: it lets them search the Venturu marketplace for businesses up for sale, discover brokers who can guide the process, and reach out directly to sellers or brokers to start the conversation.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.3/5 across 9 of 9 tools scored. Lowest: 3.5/5.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: contact_broker and contact_seller target different entities, get_broker and get_business retrieve specific details, list_business_categories and list_languages provide reference data, search_brokers and search_businesses perform distinct searches, and who_am_i handles authentication verification. The descriptions explicitly differentiate their functions and use cases.

Naming Consistency: 5/5

All tool names follow a consistent snake_case pattern with clear verb_noun structures: contact_broker, get_business, list_languages, search_brokers, etc. The naming is predictable and readable throughout, with no deviations in style or convention.

Tool Count: 5/5

With 9 tools, the server is well-scoped for its business brokerage domain. Each tool earns its place by covering essential operations like searching, retrieving details, contacting parties, and managing reference data, without being overly sparse or bloated.

Completeness: 4/5

The tool set provides strong coverage for core workflows in a business brokerage platform, including discovery (search, list), detail retrieval (get), and contact (contact). Minor gaps exist, such as no update or delete operations for listings or broker profiles, but these are likely intentional given the platform's nature, and agents can work effectively with the provided tools.

Available Tools

9 tools
contact_broker (Contact Broker): Grade A

Send a contact message to a broker on Venturu by their profile slug. Requires an authenticated Venturu account. Set inquiryType to "buying" (default) for buyer representation or "selling" for seller representation. Provide the broker slug and the message to send. Use search_brokers to find broker slugs.

Parameters:
- slug (required)
- message (required)
- inquiryType (optional; default: buying)
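Putting the description and parameters together, a raw call might look like the sketch below. The JSON-RPC `tools/call` envelope is the generic MCP shape rather than anything Venturu-specific, and the broker slug and message are made up for illustration.

```python
import json

def contact_broker_request(slug: str, message: str,
                           inquiry_type: str = "buying") -> dict:
    """Build a hypothetical tools/call request for contact_broker."""
    # The description enumerates exactly two valid inquiryType values.
    if inquiry_type not in ("buying", "selling"):
        raise ValueError("inquiryType must be 'buying' or 'selling'")
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "contact_broker",
            "arguments": {
                "slug": slug,  # obtained from search_brokers results
                "message": message,
                "inquiryType": inquiry_type,
            },
        },
    }

req = contact_broker_request(
    "example-broker", "Hello, I am exploring buyer representation.")
print(json.dumps(req, indent=2))
```

The same envelope shape applies to contact_seller, with `listingId` in place of `slug`.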
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish this is a non-destructive write operation (readOnlyHint: false, destructiveHint: false, openWorldHint: true). The description adds critical context beyond annotations: authentication requirements and the semantic meaning of inquiryType values (buyer vs seller representation). Minor gap regarding idempotency or rate limiting prevents a 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five sentences, zero waste. Front-loaded with purpose, followed by prerequisites, parameter semantics, and cross-references. Every sentence adds unique value not present in structured fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a messaging tool with no output schema. Covers purpose, auth, platform context (Venturu), parameter semantics, and sibling relationships. Minor deduction for not describing success behavior or error states (e.g., invalid slug handling), though this is partially mitigated by openWorldHint annotation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by documenting all three parameters: 'slug' is defined as 'profile slug' with discovery method via search_brokers, 'message' is defined as the content to send, and 'inquiryType' is fully enumerated with behavioral semantics ('buying' for buyer representation, 'selling' for seller representation).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Send') and clearly identifies the resource ('contact message to a broker on Venturu') and access method ('by their profile slug'). It distinguishes itself from sibling tool search_brokers by clarifying that this tool requires an existing slug rather than performing discovery.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states prerequisites ('Requires an authenticated Venturu account'), provides clear parameter guidance for inquiryType ('buying' vs 'selling'), and explicitly names the sibling tool to use for prerequisite data gathering ('Use search_brokers to find broker slugs').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

contact_seller (Contact Seller): Grade A

Send a contact message to a seller on Venturu by their listing ID. Requires an authenticated Venturu account. Provide the listing ID and the message to send. Use search_businesses to find listing IDs.

Parameters:
- message (required)
- listingId (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (destructive=false, readOnly=false). Description adds authentication requirement and external platform context ('Venturu') which is valuable. However, it does not disclose return value semantics, error conditions (e.g., invalid listing ID), rate limits, or idempotency given the openWorldHint=true.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient sentences with no waste: action/purpose first, then prerequisites, then parameter guidance with sibling dependency. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 2-parameter messaging tool with annotations present, the description covers essential context: action, auth, parameters, and sibling dependency. Absence of output schema is partially mitigated by clear action description, though return value documentation would strengthen it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage. Description compensates by identifying both parameters ('listing ID' and 'message to send') and their semantic roles. However, it fails to elaborate on constraints (e.g., message minLength 1, listingId integer range) or provide format examples that the schema omits.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Send' with resource 'contact message' and target 'seller on Venturu'. Explicitly scopes to 'listing ID', which distinguishes it from sibling contact_broker (which uses a profile slug) and search_businesses (which finds listings).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states prerequisite 'Requires an authenticated Venturu account' and dependency 'Use search_businesses to find listing IDs'. Lacks explicit contrast with contact_broker regarding when to use seller vs broker contact, though names imply the distinction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_broker (Get Broker Details): Grade A, Read-only

Get full details for a single broker (agent) by their profile slug. Call this when the user asks for more information about a specific broker. Use the slug from search_brokers results.

Parameters:
- slug (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and destructiveHint=false, covering safety profile. Description adds workflow context (slug source) but does not disclose error behavior (invalid slug), rate limits, or specify what 'full details' includes. Adds marginal value beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first defines the operation, second defines the trigger condition and prerequisite. Front-loaded with actionable verb. No redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a simple read-only getter with one required parameter. Covers purpose, invocation trigger, and data dependency chain. Lacks output format details but no output schema exists to guide that disclosure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage. Description compensates by explaining 'slug' is a 'profile slug' and specifying it must come from 'search_brokers results', providing semantic meaning and data sourcing guidance. Does not specify format constraints or examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb ('Get') + resource ('full details for a single broker') + identification method ('by their profile slug'). Distinguishes from sibling search_brokers by specifying this retrieves a single entity by slug versus searching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to call ('when the user asks for more information about a specific broker') and provides prerequisite workflow guidance ('Use the slug from search_brokers results'). Implicitly distinguishes from contact_broker by focusing on information retrieval rather than communication.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_business (Get Business Details): Grade A, Read-only

Get full details for a single business (listing) by its slug. Call this when the user asks for more information about a specific business. Use the slug from search_businesses results.

Parameters:
- slug (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations confirm this is a safe read operation (readOnlyHint=true), while the description adds valuable behavioral context: it implies a comprehensive data return ('full details') and establishes a strict dependency chain requiring results from search_businesses first. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tightly constructed sentences with zero redundancy. It front-loads the core purpose, immediately follows with usage conditions, and ends with the critical workflow instruction. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (single string parameter) and presence of safety annotations, the description is appropriately complete. It hints at the return value scope ('full details'), though it could optionally elaborate on specific fields returned given the lack of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates by semantically defining the single parameter through phrases like 'by its slug' and 'Use the slug from search_businesses results,' explaining both its nature (identifier) and provenance (derived from search results).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the specific action ('Get full details'), the resource ('a single business (listing)'), and the identifier ('by its slug'). It clearly distinguishes this tool from its sibling search_businesses by specifying this retrieves details for a single business rather than a list.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance on when to use ('when the user asks for more information about a specific business') and critical workflow context ('Use the slug from search_businesses results'), effectively establishing the prerequisite tool and distinguishing this from search-based alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_business_categories (List Business Categories): Grade A, Read-only

Returns all industry categories and their business types with IDs. Use the business type IDs in search_businesses (businessTypeIds) to filter listings by category. Call this first when you need to discover which IDs to use for a given industry or business type.

Parameters: none
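The discovery-then-search sequence the description prescribes can be sketched as below. The response shape is hypothetical (no output schema is published on this page); only the businessTypeIds hand-off to search_businesses comes from the description.

```python
def type_ids_for_industry(categories: list[dict], industry: str) -> list[int]:
    """Collect business type IDs for a named industry category.

    `categories` is assumed to be the parsed list_business_categories
    result; the field names below are illustrative, not documented.
    """
    return [
        t["id"]
        for cat in categories
        if cat["name"].lower() == industry.lower()
        for t in cat.get("businessTypes", [])
    ]

# Hypothetical response fragment:
sample = [
    {"name": "Food & Beverage",
     "businessTypes": [{"id": 12, "name": "Restaurant"},
                       {"id": 13, "name": "Cafe"}]},
    {"name": "Automotive",
     "businessTypes": [{"id": 40, "name": "Auto Repair"}]},
]

ids = type_ids_for_industry(sample, "food & beverage")
# ids would then be passed as businessTypeIds to search_businesses
```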

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety (readOnlyHint=true, destructiveHint=false). The description adds valuable workflow context beyond annotations: it explains this is a discovery/lookup tool intended to be called before searching, and describes what data set it returns (categories with IDs).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: sentence 1 defines the return value, sentence 2 explains integration with a specific sibling tool, and sentence 3 provides clear sequencing guidance. Information is front-loaded and every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description adequately explains what gets returned (categories with their IDs). Combined with annotations and clear workflow integration guidance, the description provides sufficient context for an agent to invoke this tool correctly in the discovery-then-search workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters (empty properties object). Per calibration guidelines, 0 parameters warrants a baseline score of 4, as there are no parameter semantics to describe beyond the schema state.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Returns all industry categories and their business types with IDs') and distinguishes this from sibling tools by explicitly referencing search_businesses and the businessTypeIds parameter, clarifying the relationship between the tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent explicit guidance: it states exactly when to use this tool ('Call this first when you need to discover which IDs to use'), names the specific sibling tool to use next (search_businesses), and explains the workflow sequence (discovery before filtering).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_languages (List Languages): Grade A, Read-only

Returns all languages with their IDs. Use these IDs in search_brokers (languageIds) to find brokers who speak specific languages. Call this when you need to discover which language IDs to use.

Parameters: none

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds valuable behavioral context: it specifies the return structure ('languages with their IDs'), indicates scope ('all'), and explains the workflow dependency on search_brokers. It does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste. Front-loaded with the core purpose ('Returns all languages...'), followed by integration guidance, and ending with usage trigger. Every sentence earns its place with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple lookup tool with no parameters and good annotations, the description is complete. It compensates for the missing output schema by stating what gets returned ('languages with their IDs'), which is sufficient for a discovery utility of this scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters. Per scoring rules, this establishes a baseline score of 4. No additional parameter semantics are needed or expected.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a specific verb ('Returns') and resource ('all languages with their IDs'). It clearly distinguishes from sibling tools by explicitly mentioning the intended consumer (search_brokers) and the specific parameter (languageIds), clarifying its unique role in the workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent guidance provided: explicitly states 'Use these IDs in search_brokers' and 'Call this when you need to discover which language IDs to use.' This provides clear when-to-use context and identifies the exact sibling tool relationship, leaving no ambiguity about its purpose in the tool chain.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_brokers (Search Brokers): Grade A, Read-only

Search for business brokers (agents) on Venturu by location, name, languages, and more. Returns verified brokers with email and phone redacted.

Parameters (all optional): city, name, page, sort, limit, state, county, zipCode, countryCode, languageIds, neighborhood, opportunityScore
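Since all twelve filters are optional, a caller only needs to send the ones it actually set. A minimal sketch of assembling the arguments, assuming the parameter names from the table above and nothing about their value types:

```python
def search_brokers_args(**filters) -> dict:
    """Keep only the filters the caller set, rejecting unknown names."""
    allowed = {
        "city", "name", "page", "sort", "limit", "state", "county",
        "zipCode", "countryCode", "languageIds", "neighborhood",
        "opportunityScore",
    }
    unknown = set(filters) - allowed
    if unknown:
        raise ValueError(f"unknown filters: {sorted(unknown)}")
    # Drop unset (None) filters so the request carries only real criteria.
    return {k: v for k, v in filters.items() if v is not None}

args = search_brokers_args(city="Miami", languageIds=[3], page=None)
# args == {"city": "Miami", "languageIds": [3]}
```

The language IDs themselves would come from a prior list_languages call, per that tool's description.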
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations establish this is read-only and non-destructive, the description adds valuable behavioral context not found elsewhere: it discloses that returned brokers are 'verified' and that email and phone are redacted (privacy-sensitive data handling). This security/privacy disclosure is crucial for agent decision-making.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of exactly two high-value sentences with zero redundancy. The first sentence establishes purpose and filter categories; the second discloses return behavior. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 12 parameters with no schema descriptions and no output schema, the description provides adequate high-level context about the search capabilities and return data characteristics, but the lack of detail on sorting options, pagination behavior, and the opportunityScore filter leaves significant gaps for proper tool invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description partially compensates by mapping natural language categories (location, name, languages) to parameters, but leaves pagination (page, limit), sorting (sort), and the nested opportunityScore object completely unexplained. It mentions 'and more' but doesn't clarify the remaining 9 parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (search) and resource (business brokers/agents on Venturu), including the platform name. It implicitly distinguishes from sibling 'get_broker' (search vs. retrieve by ID) and 'search_businesses' (brokers vs. businesses), though it doesn't explicitly clarify when to use each.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives like 'get_broker' (for specific ID lookup) or 'search_businesses' (for business listings instead of agents). There are no prerequisites, exclusion criteria, or workflow context provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_businesses (Search Businesses): Grade A, Read-only

Search for businesses (listings) for sale on Venturu. Supports natural-language location (e.g. 'Palm Beach, FL', 'Miami', '33101') via the location parameter, or an exact bbox. Use flat min/max fields for ranges (e.g. minPrice/maxPrice, minRevenue/maxRevenue, minProfit/maxProfit, minSde/maxSde, minOpportunityScore/maxOpportunityScore). Returns censored listing data with titles and addresses handled according to listing visibility settings.

Parameters (all optional): bbox, limit, state, cursor, maxSde, minSde, listedBy, location, maxPrice, minPrice, statuses, maxProfit, minProfit, saleTypes, maxRevenue, minRevenue, visaQualified, maxDownPayment, maxSdeMultiple, minDownPayment, minSdeMultiple, businessTypeIds, orderByProperty (default: recommended), sbaPrequalified, maxEmployeeCount, minEmployeeCount, orderByDirection (default: desc), propertyIncluded, maxRevenueMultiple, minRevenueMultiple, maxEstablishmentAge, maxOpportunityScore, maxOwnerWorkedHours, minEstablishmentAge, minOpportunityScore, minOwnerWorkedHours, buyerFinancingAvailable, includeMissingMultiples
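The flat min/max convention the description documents can be sketched as below. The parameter names (minPrice/maxPrice, minSde/maxSde, and so on) come from the table above; the helper and its values are illustrative.

```python
def range_filters(**ranges) -> dict:
    """Expand {'price': (lo, hi)} pairs into flat minX/maxX arguments,
    following the min/max naming pattern search_businesses uses."""
    args = {}
    for field, (lo, hi) in ranges.items():
        key = field[0].upper() + field[1:]  # price -> Price, sde -> Sde
        if lo is not None:
            args[f"min{key}"] = lo
        if hi is not None:
            args[f"max{key}"] = hi
    return args

args = {"location": "Palm Beach, FL",
        **range_filters(price=(250_000, 900_000), sde=(100_000, None))}
# args == {"location": "Palm Beach, FL", "minPrice": 250000,
#          "maxPrice": 900000, "minSde": 100000}
```

Leaving one bound as None sends an open-ended range, mirroring how each min/max field can be supplied independently.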
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds crucial behavioral context beyond annotations: discloses that returned data is 'censored' with visibility-dependent handling of titles/addresses, and explains the location parsing behavior (natural language vs exact coordinates). No contradictions with readOnly/destructive hints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three dense sentences with zero waste. Front-loaded with core purpose, followed by specific parameter patterns with concrete examples, ending with return value characteristics. Every clause delivers actionable information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for basic search operations given the complexity (38 parameters), but significant gaps remain for specialized filters. No output schema is present, though the description explains the censored nature of returns.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description partially compensates by documenting the range filtering pattern (min/max pairs) and location/bbox semantics. However, it leaves 25+ specialized parameters (visaQualified, sbaPrequalified, saleTypes, etc.) completely unexplained.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the action ('Search for businesses') and resource ('listings for sale on Venturu'), clearly distinguishing it from sibling tools like 'get_business' (singular retrieval) and 'search_brokers' (different resource).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear guidance on parameter usage patterns: natural-language location examples ('Palm Beach, FL') vs exact bbox, and the flat min/max field pattern for ranges. Could improve by explicitly contrasting with 'get_business' for single-record lookups.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

who_am_i (Who am I): Grade A, Read-only

Returns the identity of the currently authenticated user. Requires authentication. Use this to verify that the connection is correctly authenticated (e.g. in the voice agent).

Parameters: none

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly/destructive), but description adds critical behavioral context: 'Requires authentication' constraint not visible in annotations. Adds usage context (verification use case). Does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: (1) core function, (2) prerequisite, (3) usage guidance. Front-loaded with the essential action 'Returns the identity'. Efficient structure for a simple utility tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters and simple read-only semantics, the description provides sufficient context for tool selection. Notes authentication requirement and use case. Without output schema, could optionally specify what identity fields are returned (name, email, ID), but not required for selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present, which per guidelines warrants a baseline of 4. Description appropriately focuses on behavior rather than inventing parameter documentation where none exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Returns' with clear resource 'identity of the currently authenticated user'. It clearly distinguishes this authentication utility from business-focused siblings like 'search_businesses' or 'contact_broker'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('verify that the connection is correctly authenticated') and provides a concrete example ('e.g. in the voice agent'). Also notes the prerequisite 'Requires authentication'. Lacks explicit 'when not to use' but context clearly differentiates it from operational tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
