Caplia

Server Details

MCP server for VC pitch-deck scoring, thesis-fit matching, and deal-flow management.

Status: Healthy
Transport: Streamable HTTP

Tool Descriptions: A

Average 3.9/5 across 12 of 12 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose, from fetching company profiles, metrics, and scores to listing companies, documents, theses, and views. Even the similar list_companies and get_view_companies are differentiated by their parameters and focus.

Naming Consistency: 5/5

All tools follow a consistent 'caplia_verb_noun' pattern using verbs like 'get', 'list', 'search', and 'submit'. The naming is uniform and predictable.

Tool Count: 5/5

With 12 tools, the set is well-scoped for a pipeline management server. Each tool serves a necessary function without redundancy or excessive granularity.

Completeness: 5/5

The tool surface covers the core workflow: searching and listing companies, retrieving detailed profiles and documents, viewing theses and pipeline views, and submitting pitch decks. There are no obvious gaps for the intended domain.

Available Tools

12 tools
caplia_get_company: A

Fetch the full profile of one company by UUID: name, problem statement, industry, stage, website, founders, fundraising round, etc.

Parameters (JSON Schema)
id (required): Company UUID
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description should disclose behavioral traits beyond read-only fetch. It lists returned fields but omits details like error responses (e.g., 404 if not found), permissions, or side effects, leaving significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence concisely states action and enumerates key returned fields with no unnecessary words, achieving high efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter tool with no output schema, the description covers the core functionality and return fields. Missing details like error handling or rate limits, but overall adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds no extra meaning beyond the schema's 'Company UUID' with format uuid. A baseline score of 3 is appropriate, since the description contributes no additional semantic value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Fetch' and resource 'full profile of one company by UUID', and lists specific fields (name, problem statement, industry, etc.), distinguishing it from siblings like caplia_get_company_metrics which returns only metrics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for fetching a full company profile by UUID but does not explicitly state when to use or not use this tool over siblings (e.g., for metrics use caplia_get_company_metrics), leaving usage guidance implicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

caplia_get_company_metrics: A

Get traction and key metrics for a company: revenue, growth, headcount, fundraising round milestones, etc.

Parameters (JSON Schema)
id (required): Company UUID
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must disclose behavioral traits. It lists output content but does not mention side effects, authorization needs, rate limits, or whether the call is read-only. For a simple retrieval tool, this is adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with bullet-like examples after a colon. No extraneous text; every word contributes to understanding. Front-loaded with the action and core resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple retrieval tool with one parameter and no output schema, the description reasonably covers what the tool returns. Could be improved by indicating if metrics are current or historical, but overall sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the single parameter 'id' is fully described in schema. The description adds no additional semantic detail about the parameter beyond referring to the company.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses verb 'Get' and specifies resource 'traction and key metrics for a company' with concrete examples (revenue, growth, headcount, fundraising milestones). Clearly distinguishes from sibling tools like caplia_get_company (basic info) and caplia_get_company_scores (scores).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance. The purpose is implied but lacks comparison to alternatives or exclusion criteria. For example, it does not clarify when to use this vs. caplia_get_company for metrics.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

caplia_get_company_scores: A

Get the CRI score and per-thesis match scores for a company. Returns { cri: { score, scored_at, domains }, thesis_matches: [...] }. Useful when an agent needs to reason about how a company fits the team's investment theses.

Parameters (JSON Schema)
id (required): Company UUID
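
The thesis_matches array described above is what an agent would reason over. As a minimal sketch, a helper that picks the strongest match could look like the following; the shape of each array element is not documented, so a dict with a numeric `score` field is an assumption here.

```python
def best_thesis_match(scores):
    """Pick the strongest entry from the thesis_matches array of a
    caplia_get_company_scores response. Each entry is assumed to be a
    dict with a numeric `score` field; the real shape of the array
    elements is not documented."""
    matches = scores.get("thesis_matches") or []
    if not matches:
        return None
    return max(matches, key=lambda m: m["score"])
```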
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description only lists return structure without disclosing side effects, rate limits, or authentication needs, leaving behavioral traits unclear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences: first states purpose with return shape, second adds usage context. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description adequately covers return format and usage context, though it could briefly explain what the CRI score represents.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with 'id' described as 'Company UUID'; description adds return format but no extra semantics for parameters, meeting baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Get the CRI score and per-thesis match scores for a company' and shows return format, distinguishing it from siblings like caplia_get_company and caplia_get_company_metrics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Useful when an agent needs to reason about how a company fits the team's investment theses', providing clear context but not specifying when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

caplia_get_document_url: A

Get a 1-hour signed download URL for a specific data-room document. The agent can fetch the bytes directly from the returned URL — file content does not pass through the API.

Parameters (JSON Schema)
id (required): Document UUID
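
Since the file content does not pass through the API, the agent's side of this flow is an ordinary HTTP GET against the signed URL. A minimal sketch, assuming only Python's standard library:

```python
from urllib.request import urlopen

def download_document(signed_url, dest_path):
    """Fetch a data-room document straight from the signed URL returned
    by caplia_get_document_url. The bytes never pass through the Caplia
    API, so a plain GET within the 1-hour window is enough."""
    with urlopen(signed_url) as resp:
        data = resp.read()
    with open(dest_path, "wb") as out:
        out.write(data)
    return len(data)
```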
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses key behaviors: URL expires in 1 hour and file content is not proxied through the API. Without annotations, this provides useful context, though it does not cover potential rate limits or authentication specifics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no redundancy. First sentence states the core action, second adds a critical behavioral detail. Every word adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description sufficiently covers purpose, behavior, and output type (a URL). Could optionally mention response format, but not essential.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema describes 'id' as 'Document UUID' at 100% coverage. Description adds 'specific data-room document' context but no extra parameter semantics beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Describes a specific verb-resource pair: 'Get a 1-hour signed download URL for a specific data-room document.' Clearly distinguishes from siblings like caplia_get_company or caplia_list_company_documents by focusing solely on URL retrieval for a single document.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use vs alternatives. While it's clear the tool is for downloading, there is no mention of prerequisites or cases where other tools (e.g., list_company_documents) should be used first.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

caplia_get_job: A

Poll the status of an async job (typically the result of a deck submission). Returns { status, company_id, results, errors }. Recommended polling cadence: every 3 seconds.

Parameters (JSON Schema)
id (required): Job UUID from POST /v1/decks
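
The poll-every-3-seconds guidance can be sketched as a small loop. The `get_job` callable stands in for whatever your MCP client exposes for this tool, and the 'completed'/'failed' terminal states are assumptions, since the possible status values are not documented:

```python
import time

def poll_job(get_job, job_id, interval=3.0, timeout=120.0):
    """Poll an async Caplia job until it reaches a terminal state.

    `get_job` stands in for your MCP client's call to caplia_get_job
    and must return a dict with a `status` key. The terminal states
    'completed' and 'failed' are assumptions, not documented values.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = get_job(job_id)
        if job["status"] in ("completed", "failed"):
            return job
        time.sleep(interval)  # recommended cadence: every 3 seconds
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```

The 120-second default timeout follows from the 30s-2min processing window quoted by caplia_submit_deck.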
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided. The description discloses the return format and suggests non-destructive polling behavior, but lacks explicit statements about idempotency, safety, or side effects. Minimal but acceptable.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no fluff. Each sentence serves a clear purpose: stating the action and detailing the return value with a usage hint.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity and no output schema, the description covers the essential aspects but could be more detailed (e.g., possible status values, terminal states). Adequate but not thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the schema describes the parameter as 'Job UUID from POST /v1/decks'. The description adds only indirect context about the source of the job ID. Baseline 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly specifies the tool polls an async job status, linking it to deck submission results. It distinguishes itself from sibling tools like caplia_get_company or caplia_submit_deck.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides a recommended polling cadence (every 3 seconds) but does not explicitly state when to use or avoid this tool, nor does it compare alternatives. Context is implied but not fully elaborated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

caplia_get_view_companies: A

Fetch the companies belonging to a named pipeline view (e.g. deal-flow, my-pipeline).

Parameters (JSON Schema)
key (required): View key from caplia_list_views
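
Because the key comes from caplia_list_views, a typical agent flow chains the two tools. A sketch, where both callables stand in for MCP tool calls and the {name, key} shape of the views response is a guess (the output schema is not documented):

```python
def companies_in_view(list_views, get_view_companies, view_name):
    """Resolve a display name like 'Deal Flow' to its view key via
    caplia_list_views, then fetch that view's companies.

    Both callables stand in for MCP tool calls; the views response is
    assumed to be a list of {name, key} dicts, which is a guess since
    the output shape is not documented.
    """
    for view in list_views():
        if view["name"].lower() == view_name.lower():
            return get_view_companies(key=view["key"])
    raise KeyError(f"no pipeline view named {view_name!r}")
```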
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral transparency. It only states 'Fetch' but does not disclose whether the operation is read-only (likely but not confirmed), pagination behavior, error handling for invalid keys, or rate limits. This is insufficient for a reliable agent decision.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, complete sentence with no superfluous words. It puts the action and resource first, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity (1 param, no output schema), the description is adequate but does not explain the return format (e.g., array of company objects) or any limitations (e.g., max results). A more complete description would mention the output structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% and the single parameter 'key' is already well-described as 'View key from caplia_list_views'. The description adds no new syntactic or semantic detail beyond the examples, so it meets the baseline but does not exceed it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Fetch' and the resource 'companies belonging to a named pipeline view' with concrete examples ('deal-flow', 'my-pipeline'). It differentiates from siblings like caplia_list_views and caplia_list_companies by specifying the view filter.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly connects to caplia_list_views via the parameter description and implies that the agent should first obtain a view key from that tool. It does not, however, explicitly state when NOT to use it (e.g., for unfiltered company lists), but the purpose is clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

caplia_list_companies: A

List companies in the caller's pipeline. Supports cursor pagination (cursor from the previous response's next_cursor) and filtering by stage or by named view. Use limit (default 25, max 100) to bound the page size.

Parameters (JSON Schema)
view (optional): Filter to a named pipeline view (e.g. my-pipeline)
limit (optional): Page size (default 25, max 100)
stage (optional): Filter to one pipeline stage
cursor (optional): Pagination cursor from a previous response
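
The cursor-pagination contract described above can be drained with a small loop. A sketch, assuming `list_companies` stands in for your MCP client's tool call; `next_cursor` is named in the tool description, while `companies` is a guess at the list field:

```python
def list_all_companies(list_companies, **filters):
    """Drain every page from caplia_list_companies.

    `list_companies` stands in for your MCP client's tool call. The
    response is assumed to carry `companies` and `next_cursor` keys;
    `next_cursor` is named in the tool description, `companies` is a
    guess at the list field.
    """
    companies, cursor = [], None
    while True:
        page = list_companies(cursor=cursor, limit=100, **filters)
        companies.extend(page["companies"])
        cursor = page.get("next_cursor")
        if not cursor:  # no cursor means this was the last page
            return companies
```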
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must cover behavioral traits. It explains pagination behavior and filtering, but lacks details such as error handling (e.g., invalid view/stage), rate limits, or whether the result set is always the caller's pipeline. The basics are covered, but depth is missing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no fluff. The first sentence states the core purpose, and the second provides essential usage details (pagination, filtering, limit bounds). Every sentence is necessary.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description does not explain the structure of returned data (e.g., which fields each company object contains). It covers input parameters well but is incomplete regarding the output shape for a list tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (all parameters described in schema). The description adds value by explaining how cursor pagination works ('cursor from previous response's next_cursor') and the default/max for limit, which goes beyond the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'List companies in the caller's pipeline', which is a specific verb and resource. It distinguishes itself from sibling tools like caplia_get_company (single company) and caplia_get_view_companies (specific view).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on pagination (cursor-based, with 'cursor' from previous 'next_cursor') and filtering by stage or view. It also notes the default and max limit. However, it does not explicitly mention when to use this tool over alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

caplia_list_company_documents: A

List all data-room documents attached to a company: pitch decks, financial models, founder updates, term sheets. Each entry has id, name, folder, type, size, and created_at.

Parameters (JSON Schema)
id (required): Company UUID
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose behavioral traits like read-only nature, authentication needs, rate limits, or pagination. It only implies a read operation through 'list', lacking deeper transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, focused sentence that efficiently states the purpose, examples, and output fields. Every part contributes value, with no redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the required parameter and lists output fields (id, name, folder, type, size, created_at), compensating for the lack of an output schema. However, it omits details on pagination, sorting, or error handling, which are minor gaps for a simple list tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema already describes 'id' as a Company UUID with format uuid (coverage 100%). The description reinforces that the documents belong to a company but adds no additional semantic meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists all data-room documents attached to a company, with examples like pitch decks and financial models. It specifies the fields in each entry, distinguishing it from sibling tools that operate on single documents or other entities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use the tool (to list documents for a company) but does not mention when not to use it or compare it to siblings like caplia_get_document_url for fetching a single document's URL. No explicit guidelines on alternatives or prerequisites beyond the required company UUID.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

caplia_list_theses: A

List the team's active investment theses with their descriptions. Useful for an agent that's reasoning about whether a deal fits a thesis.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It states the tool returns active theses with descriptions, implying read-only and filtered scope. Lacks details on data freshness or auth, but the simple nature of the tool makes it adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. Efficiently conveys purpose and usage context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters, no output schema, and simple list semantics, the description fully covers what the agent needs: what it returns and why to use it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so description need not add param info. Baseline score of 4 due to zero parameters and 100% schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists active investment theses with descriptions, using a specific verb and resource. It distinguishes from sibling tools like caplia_list_companies and caplia_get_company.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides context for when to use: 'for an agent reasoning about whether a deal fits a thesis.' Does not explicitly mention alternatives or when not to use, but the context is clear and helpful.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

caplia_list_views: A

List the team's configured pipeline views (e.g. Deal Flow, My Pipeline, Screening). Each view has a key that can be used as the view argument on caplia_list_companies.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, description bears burden of behavioral disclosure. It indicates this is a read operation listing views, but does not detail return format, authentication needs, or any side effects. Adequate given tool simplicity but could be more explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no unnecessary words, front-loaded with action and examples. Each sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple parameterless list tool with no output schema, the description sufficiently covers purpose and output utility. Could mention scope (team) but not critical. Complete enough for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (zero parameters), so baseline is 3. Description adds value by explaining that each view has a key usable in another tool, providing meaning beyond the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb (list) and resource (team's configured pipeline views), provides examples, and explains how output keys relate to another tool, distinguishing it from siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies usage context by noting that view keys are used for caplia_list_companies, giving a clear reason to invoke this tool. However, it does not explicitly state when not to use or name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

caplia_submit_deckAInspect

Submit a pitch deck (PDF) to the caller's Caplia pipeline. The deck flows through the same intake pipeline as web uploads and email forwards: text extraction → company shell creation → CRI scoring → thesis matching. Returns { job_id, status: "queued", poll_url } immediately; use caplia_get_job with the returned id to watch the company and scores land (typically 30s-2min). Requires a key with the write scope.

Parameters (JSON Schema)

- notes (optional): Free-form context from your CRM — "Met at YC demo day", "warm intro from X". Surfaces in the intake event payload.
- file_b64 (required): Base64-encoded contents of the PDF file. Most MCP clients (Claude Desktop, Cursor) can read a local PDF and base64-encode its bytes. Max 50 MB decoded.
- filename (optional): Original filename for human-readable display in the intake log (e.g. "tesla-seed-deck.pdf"). Recommended.
- company_url (optional): Pre-tag with the company's website (e.g. "https://tesla.com"). Helps the extraction step.
- company_name (optional): Pre-tag the deck with a company name your CRM already knows. The worker uses it when creating the company shell.
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses the return format ({ job_id, status: 'queued', poll_url }), processing time (30s-2min), pipeline steps (extraction, shell creation, scoring, matching), and authentication requirement (write scope). No destructive behavior is mentioned, which is appropriate for a submission tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is moderately lengthy but every sentence provides useful information: purpose, pipeline, return format, polling advice, and scope requirement. It could be slightly more concise but remains focused.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but the description explains the return format. All 5 parameters are documented with added context. The pipeline steps are detailed. Given moderate complexity and no annotations, the description is sufficiently complete for correct tool selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (all params have descriptions). The description adds meaning: notes as CRM context, filename for display, company_url to aid extraction, company_name for company shell creation, and file_b64 encoding hints with max size. This goes beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool submits a pitch deck PDF to the Caplia pipeline, specifying the verb 'submit' and the resource. It details the pipeline stages and distinguishes itself from sibling read tools like caplia_get_company and caplia_get_job.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use the tool (submitting a deck) and provides follow-up instructions (use caplia_get_job to watch results). It mentions the required write scope but does not explicitly exclude alternative tools; however, the sibling tools are all read-oriented, so differentiation is clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
