soil-dev

capsulemcp

list_entity_tracks

List all track instances applied to a specific entity record, including auto-applied tracks from board rules. Identify source by comparing track definition IDs.

Instructions

List track INSTANCES on a specific record — i.e., which tracks have been applied to this opportunity / project / party. Distinct from list_track_definitions, which lists the templates. NOTE: some boards have stage-triggered automation that auto-applies tracks when an entity enters specific stages — tracks returned here may include BOTH manually-applied tracks (via apply_track) and auto-applied tracks from Capsule board rules. To distinguish, compare each track's trackDefinition.id against your application's apply_track call history.
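The disambiguation step described above can be sketched as follows. This is a minimal sketch: the `TrackInstance` shape and the `appliedTrackIds` record are illustrative assumptions, not part of the Capsule API.

```typescript
// Illustrative shape of one track instance returned by list_entity_tracks.
type TrackInstance = { id: number; trackDefinition: { id: number } };

// Hypothetical record of track-definition IDs that our application
// applied itself via apply_track calls.
const appliedTrackIds = new Set<number>([101, 205]);

// Partition tracks into manually-applied vs. auto-applied (e.g. by
// Capsule board rules) by comparing trackDefinition.id against the
// application's own apply_track history.
function partitionTracks(tracks: TrackInstance[]) {
  const manual = tracks.filter((t) => appliedTrackIds.has(t.trackDefinition.id));
  const auto = tracks.filter((t) => !appliedTrackIds.has(t.trackDefinition.id));
  return { manual, auto };
}
```

Any track whose definition ID never appeared in the application's own `apply_track` calls can then be presumed to have come from a board rule.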

Input Schema

| Name     | Required | Description                | Default |
| -------- | -------- | -------------------------- | ------- |
| entity   | Yes      | Use 'kases' for projects.  |         |
| entityId | Yes      |                            |         |
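A minimal example input for this tool (the ID is illustrative; note that Capsule uses `kases` as the entity name for projects):

```typescript
// Example list_entity_tracks input — entityId is illustrative.
const input = {
  entity: "kases", // use "kases" when targeting a project
  entityId: 42,
};
```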

Implementation Reference

  • Handler function that calls capsuleGet to fetch track instances for a specific entity record via GET /{entity}/{entityId}/tracks.
    export async function listEntityTracks(input: z.infer<typeof listEntityTracksSchema>) {
      const { data } = await capsuleGet<{ tracks: unknown[] }>(
        `/${input.entity}/${input.entityId}/tracks`,
      );
      return data;
    }
  • Zod schema for list_entity_tracks input validation: entity (enum: parties, opportunities, kases) and entityId (positive integer).
    export const listEntityTracksSchema = z.object({
      entity: TrackEntity,
      entityId: z.number().int().positive(),
    });
  • src/server.ts:919-924 (registration)
    Registration of the tool with the MCP server via registerTool helper, binding the name, description, schema, and handler together.
    registerTool(
      server,
      "list_entity_tracks",
      "List track INSTANCES on a specific record — i.e., which tracks have been applied to this opportunity / project / party. Distinct from list_track_definitions, which lists the templates. NOTE: some boards have stage-triggered automation that auto-applies tracks when an entity enters specific stages — tracks returned here may include BOTH manually-applied tracks (via apply_track) and auto-applied tracks from Capsule board rules. To distinguish, compare each track's `trackDefinition.id` against your application's apply_track call history.",
      listEntityTracksSchema,
      listEntityTracks,
    );
  • Helper function capsuleGet that performs a GET request to the Capsule API, handling auth, URL building, response parsing, and pagination.
    export async function capsuleGet<T>(path: string, params?: QueryParams): Promise<PagedResult<T>> {
      const token = getToken();
      const url = buildUrl(path, params);
      const { res, cleanup } = await doFetch(url, { headers: baseHeaders(token) });
      try {
        const data = await handleResponse<T>(res);
        const nextPage = parseNextPage(res.headers.get("Link"));
        return { data, nextPage };
      } finally {
        cleanup();
      }
    }
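The `parseNextPage` helper used by `capsuleGet` above is not shown. Capsule's API signals pagination through an RFC 8288 `Link` response header, so a hypothetical implementation might look like this (the function body is an assumption, not the server's actual code):

```typescript
// Hypothetical sketch of parseNextPage: extract the page number from a
// Link header such as
//   <https://api.capsulecrm.com/api/v2/parties?page=2>; rel="next"
// Returns undefined when there is no rel="next" link.
function parseNextPage(link: string | null): number | undefined {
  if (!link) return undefined;
  for (const part of link.split(",")) {
    const match = part.match(/<([^>]+)>;\s*rel="next"/);
    if (match) {
      const page = new URL(match[1]).searchParams.get("page");
      return page ? Number(page) : undefined;
    }
  }
  return undefined;
}
```

A caller could then loop, re-issuing `capsuleGet` with the returned `nextPage` as a query parameter until it comes back `undefined`.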
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of behavioral disclosure. It explains that the tool returns both manually and auto-applied tracks, and suggests how to distinguish them using trackDefinition.id. This goes beyond basic read behavior, though it does not cover response structure or error cases.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured paragraph that front-loads the main purpose, then distinguishes the tool from its sibling, and closes with a useful note. It is concise without being terse, though the note could be shortened.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool lists tracks on an entity with nuance (manual vs auto-applied), the description covers this well, but it lacks details about the output structure (since no output schema exists) and does not mention error conditions or pagination. It assumes knowledge of trackDefinition.id without explaining the returned fields.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50% (entity has description, entityId does not). The description adds the clarification that 'kases' is used for projects, which overlaps with the schema's enum description. It does not explain entityId or add any other parameter-level meaning, failing to compensate for the missing schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists track instances on a specific record, using the verb 'List' and specifying the resource 'track INSTANCES'. It explicitly distinguishes from the sibling tool 'list_track_definitions' by noting the difference between instances and templates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description clearly indicates when to use this tool (to see which tracks are applied to an entity) and distinguishes it from list_track_definitions. The note about auto-applied tracks provides additional context for interpreting results, but it does not explicitly exclude other scenarios or mention alternatives like show_track.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
