Server Details
Build and publish websites through AI conversation.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: megberts/mcp-websitepublisher-ai
- GitHub Stars: 0
Available Tools
56 tools

add_task_history
Append a history record to a task. This is the primary way to report progress, add notes, store handover snippets, or attach architecture documents. Never overwrites existing state — fully append-only.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Task slug to add history to | |
| type | Yes | Record type: progress (with completion_pct), note, snippet (MD content), blocker, fork, architecture (with version) | |
| author | No | Author identifier e.g. "claude", "mikey", "tester" (optional) | |
| status | No | Current status: open, in_progress, blocked, done (optional) | |
| content | No | Full MD content for snippets or architecture docs — stored in S3 (optional) | |
| summary | Yes | Short summary always readable without fetching content | |
| version | No | Version string e.g. "v1.0", only meaningful for type=architecture (optional) | |
| session_label | No | Label for this session e.g. "sessie-a", "claude-desktop", "mikey" (optional) | |
| snoozed_until | No | Snooze this task until date, e.g. "2026-04-04" or "2026-04-07". Task will be hidden from list_tasks until that date (optional). | |
| completion_pct | No | Completion 0-100, only used when type=progress (optional) |
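As a hedged illustration of the parameter interactions documented above (task slug, author, and field values are hypothetical), a progress update might carry arguments like the following, with a small pre-flight check for the type-dependent fields:

```python
# Hypothetical argument payload for add_task_history (illustrative only).
progress_record = {
    "slug": "sapi-fase4",            # task to append to
    "type": "progress",              # record type
    "summary": "API layer wired up", # always-readable short summary
    "completion_pct": 60,            # only meaningful when type == "progress"
    "author": "claude",
    "status": "in_progress",
}

def check_record(record: dict) -> list[str]:
    """Flag documented parameter interactions before calling the tool."""
    problems = []
    for field in ("slug", "type", "summary"):
        if field not in record:
            problems.append(f"missing required field: {field}")
    if "completion_pct" in record and record.get("type") != "progress":
        problems.append("completion_pct is only used when type='progress'")
    if "version" in record and record.get("type") != "architecture":
        problems.append("version is only meaningful for type='architecture'")
    return problems

assert check_record(progress_record) == []
```

The check mirrors the table: completion_pct pairs with type=progress, and version pairs with type=architecture.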
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate non-destructive write (destructiveHint=false, readOnlyHint=false), but the description adds crucial behavioral detail: 'Never overwrites existing state — fully append-only.' This safety guarantee is valuable context beyond the structured hints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: first establishes core action, second lists value propositions/use cases, third provides behavioral guarantee. Information is front-loaded and every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 10-parameter mutation tool, the description adequately covers intent, safety semantics, and typical workflows. However, with no output schema provided, it could briefly mention what the tool returns (e.g., confirmation of appended record) to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema carries the parameter documentation burden. The description mentions specific record types (progress, snippet, architecture) which adds context for the 'type' parameter, but does not elaborate on parameter syntax, validation rules, or relationships between fields (e.g., completion_pct requiring type=progress).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Append' with clear resource 'history record to a task', and enumerates distinct use cases (progress, notes, snippets, architecture documents) that clearly differentiate it from sibling tools like create_task or get_task_history.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States 'This is the primary way to report progress...' establishing it as the standard mechanism for task updates. However, it lacks explicit 'when not to use' guidance (e.g., when to use create_task vs add_task_history) though the 'append' semantics imply existing task requirement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
configure_form (Idempotent)
Define or update a form for a published website. Sets required fields and what actions execute on submit. Use action for a single action, or actions (array, max 3) for multiple in sequence. Built-in action (no setup needed): {"type": "leads"} — stores submissions in platform-native lead capture, retrievable via get_integration_schema on the "leads" service. External actions: {"type": "iapi", "service": "resend", ...} for email, or {"type": "webhook", "url": "..."}. The visitor only sends field values — actions are locked server-side. Use list_integrations to check available external services. IMPORTANT: When generating the JavaScript snippet for the page, always use the Safari ITP-proof pattern: store session_id in sessionStorage and send it as X-Session-Id header on both the session GET and form submit fetch calls. This is required for custom domains (e.g. www.site.com) where Safari blocks third-party cookies.
| Name | Required | Description | Default |
|---|---|---|---|
| action | No | Single action on submit. Types: iapi (integration), webhook (POST to URL), none. For iapi: include service, endpoint, input_template with {{fields.xxx}} placeholders. Example: {"type": "iapi", "service": "resend", "endpoint": "send-email", "input_template": {"from": "noreply@site.com", "to": "info@site.com", "subject": "{{fields.name}}", "text": "{{fields.message}}"}} | |
| actions | No | Multiple actions (max 3), executed in order. Use instead of action when you need e.g. webhook + confirmation email. Each item follows the same format as action. Example: [{"type": "webhook", "url": "https://hooks.example.com/lead"}, {"type": "iapi", "service": "resend", "endpoint": "send-email", "input_template": {"from": "noreply@site.com", "to": "{{fields.email}}", "subject": "Confirmation", "text": "Thanks {{fields.naam}}"}}] | |
| form_name | Yes | Unique form identifier (lowercase, a-z0-9 and underscore, max 64 chars). Example: "contact", "order_form", "newsletter" | |
| project_id | Yes | The project ID | |
| required_fields | Yes | Array of field names that the visitor must fill in. Example: ["name", "email", "message"] | |
| max_submits_per_session | No | Max times a visitor can submit this form per session (default: 3, max: 100) |
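The {{fields.xxx}} placeholders in input_template are filled from the visitor's submitted field values. As a minimal sketch of that substitution (simple string replacement is an assumption; the platform's real rules may differ), server-side rendering could look like:

```python
import re

def render_template(template: dict, fields: dict) -> dict:
    """Replace {{fields.xxx}} placeholders with submitted field values.
    Minimal sketch -- the platform's actual substitution may differ."""
    def render_value(value):
        if isinstance(value, str):
            return re.sub(
                r"\{\{fields\.([a-z0-9_]+)\}\}",
                lambda m: str(fields.get(m.group(1), "")),
                value,
            )
        if isinstance(value, dict):
            return {k: render_value(v) for k, v in value.items()}
        return value
    return {k: render_value(v) for k, v in template.items()}

input_template = {
    "from": "noreply@site.com",
    "to": "info@site.com",
    "subject": "{{fields.name}}",
    "text": "{{fields.message}}",
}
submitted = {"name": "Ada", "message": "Hello there"}
rendered = render_template(input_template, submitted)
assert rendered["subject"] == "Ada"
assert rendered["text"] == "Hello there"
```

Because the visitor only sends field values and actions are locked server-side, placeholders are the only point where visitor input enters an action.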
configure_visitor_auth (Idempotent)
Configure visitor authentication for a published website. Enables email-based login (magic link or 6-digit code) for website visitors. Required before visitors can authenticate. Use method "link" for one-click email login, "code" for a 6-digit verification code.
| Name | Required | Description | Default |
|---|---|---|---|
| enabled | No | Enable or disable visitor auth (default: true) | |
| methods | No | Allowed auth methods. "link" = magic link email, "code" = 6-digit code. Default: ["link", "code"] | |
| project_id | Yes | The project ID | |
| success_url | No | Relative URL to redirect to after successful verification (e.g. "/welkom"). Default: "/" | |
| require_name | No | Ask visitor for their name during auth (default: false) | |
| allowed_domains | No | Optional email domain whitelist (e.g. ["bedrijf.nl"]). Null or omit = all domains allowed. | |
| session_ttl_verified | No | How long a verified session lasts in seconds (min 3600, max 2592000). Default: 604800 (7 days) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare idempotentHint=true and destructiveHint=false. The description adds crucial state-machine context that this configuration is 'Required before visitors can authenticate,' clarifying sequencing requirements. No contradictions with annotations; 'Configure' correctly implies the non-read-only nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero redundancy: (1) core purpose, (2) capability enabled, (3) prerequisite status, (4) method selection guide. Information is front-loaded and every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a 7-parameter configuration tool with no output schema. Covers the 'why' (prerequisite) and key option explanations. Could briefly acknowledge idempotent behavior explicitly, though covered by annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, establishing a baseline of 3. The description adds minor semantic color ('one-click email login') for the methods parameter but does not significantly elaborate on other parameters like allowed_domains or session_ttl_verified beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Configure visitor authentication'), target resource ('published website'), and mechanism ('email-based login'). The prerequisite statement ('Required before visitors can authenticate') implicitly distinguishes this from sibling get_visitor_auth_config by establishing this as the mandatory setup action versus a read operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear prerequisite guidance ('Required before visitors can authenticate') and explains when to use specific method options ('Use method "link" for one-click...'). Lacks explicit reference to get_visitor_auth_config as the alternative for reading current configuration.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_edit_session
Creates a visual edit session so the user can upload and manage images on their published page using a browser-based editor. Returns an edit URL to share with the user. When creating pages with images, use data-wpe-slot placeholder images instead of base64 — then create an edit session so the user can upload real images.
| Name | Required | Description | Default |
|---|---|---|---|
| page_slug | Yes | The page slug to edit (e.g., "index.htm", "about.htm") | |
| project_id | Yes | The project ID | |
| ttl_minutes | No | How long the edit session stays active (5-120 minutes, default: 30) | |
| capabilities | No | What the user can do in the editor. Default: image_upload, image_resize, image_padding |
create_entity
Create a new entity (data model). Example: create a "blogpost" entity with title, content, author fields.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Entity name in lowercase (e.g., "blogpost", "product", "testimonial") | |
| plural | No | Plural form of entity name (optional, defaults to name + "s") | |
| project_id | Yes | The project ID | |
| properties | No | Array of property definitions. Each property needs: name (string), type (varchar|text|int|datetime|tinyint), length (optional), required (optional boolean) | |
| description | No | Description of the entity (optional) | |
| public_read | No | Enable public read-only access without authentication (optional, default: false). When true, records are accessible via /mapi/public/{projectId}/{entity}. |
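A hedged sketch of a create_entity argument payload for the "blogpost" example (the project ID and property choices are hypothetical), checking each property type against the documented set:

```python
# Types allowed per the properties parameter documentation.
ALLOWED_TYPES = {"varchar", "text", "int", "datetime", "tinyint"}

# Hypothetical create_entity arguments (illustrative only).
blogpost_entity = {
    "project_id": "proj-123",  # assumed project ID
    "name": "blogpost",
    "plural": "blogposts",
    "description": "Blog articles",
    "properties": [
        {"name": "title",        "type": "varchar", "length": 200, "required": True},
        {"name": "content",      "type": "text",    "required": True},
        {"name": "author",       "type": "varchar", "length": 100},
        {"name": "published_at", "type": "datetime"},
    ],
}

assert all(p["type"] in ALLOWED_TYPES for p in blogpost_entity["properties"])
```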
create_fragment
Create a reusable HTML fragment. Returns an include_tag to embed in pages. Use for shared elements like nav, footer, banners.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Fragment name (lowercase, hyphens allowed, e.g. "menu", "footer") | |
| content | Yes | HTML content of the fragment | |
| project_id | Yes | The project ID |
create_page
Create a new page with HTML content. Tip: embed reusable fragments (nav, footer) via their include_tag; create fragments first with create_fragment.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | The page slug (e.g., "index.html", "blog/post1.html") | |
| title | No | Page title (optional) | |
| content | Yes | Full HTML content with DOCTYPE | |
| language | No | Page language ISO code e.g. "en", "nl", "de" (optional) | |
| seo_title | No | SEO title shown in browser tab and search results (optional) | |
| project_id | Yes | The project ID | |
| description | No | SEO description (optional) | |
| landingpage | No | Set as homepage of the project (optional) | |
| seo_keywords | No | SEO keywords, comma-separated, max 9 (optional) | |
| seo_robots_index | No | Include page in sitemap and search engines (optional, default: false) | |
| seo_robots_follow | No | Allow search engines to follow links on this page (optional, default: false) |
create_record
Create a new record in an entity. Fields depend on the entity schema.
| Name | Required | Description | Default |
|---|---|---|---|
| data | Yes | Record data as key-value pairs matching the entity schema | |
| project_id | Yes | The project ID | |
| entity_name | Yes | The entity name (e.g., "blogpost") |
create_scheduled_task
Create a scheduled task that runs automatically at specified times. Supports cron expressions for flexible scheduling. Use run_once=true for one-time scheduled actions (e.g., publish a page at a specific date/time). Common cron patterns: "0 9 * * *" (daily 9am), "0 9 * * 1" (Monday 9am), "0 */6 * * *" (every 6h), "0 0 1 1 *" (Jan 1st midnight).
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Human-readable task name (e.g., "Publish blog post Monday 9am") | |
| run_once | No | If true, the task runs once at the next matching time and then deactivates automatically (default: false) | |
| schedule | Yes | Cron expression (e.g., "0 9 * * 1" for every Monday at 9am) | |
| timezone | No | Timezone for the schedule (default: Europe/Amsterdam). Examples: Europe/London, America/New_York, Asia/Tokyo | |
| project_id | Yes | The project ID | |
| action_type | Yes | What the task does. publish_page/unpublish_page require page_id in payload. webhook requires url in payload. | |
| action_payload | Yes | Parameters for the action. For publish_page/unpublish_page: {"page_id": 123}. For webhook: {"url": "https://...", "body": {...}, "method": "POST"} |
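The cron patterns listed in the description can be sanity-checked with a minimal matcher. This is a sketch only: it handles plain numbers, `*`, and `*/n` steps, not ranges or lists, and assumes the standard cron convention that weekday 0 is Sunday.

```python
from datetime import datetime

def cron_field_matches(field: str, value: int) -> bool:
    """Match one cron field. Supports '*', '*/n' steps, and plain numbers."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return value == int(field)

def cron_matches(expr: str, when: datetime) -> bool:
    minute, hour, day, month, weekday = expr.split()
    # Cron weekday: 0 = Sunday; Python's weekday(): 0 = Monday.
    cron_dow = (when.weekday() + 1) % 7
    return (cron_field_matches(minute, when.minute)
            and cron_field_matches(hour, when.hour)
            and cron_field_matches(day, when.day)
            and cron_field_matches(month, when.month)
            and cron_field_matches(weekday, cron_dow))

# "0 9 * * 1" = Mondays at 09:00; 2025-01-06 was a Monday.
assert cron_matches("0 9 * * 1", datetime(2025, 1, 6, 9, 0))
assert not cron_matches("0 9 * * 1", datetime(2025, 1, 7, 9, 0))
```

With run_once=true, the first datetime that matches the expression is also the last: the task deactivates after that single run.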
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a mutating, non-idempotent operation (readOnlyHint: false, idempotentHint: false), but the description doesn't warn about duplicate task creation risks. It adds valuable behavioral context that run_once tasks 'deactivate automatically' after execution, but misses other side effects like validation failures or timezone handling implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficiently structured sentences: purpose declaration, feature/guidance combo, and actionable reference examples. Zero redundancy. Information density is high with every clause serving either functional definition or practical implementation guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 7-parameter complexity with nested objects and no output schema, the description adequately covers the scheduling domain mechanics, timezone awareness, and action types. Minor gap: it doesn't address error handling, validation constraints, or the implications of idempotentHint=false (duplicate prevention).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description elevates this by providing concrete cron pattern examples ('0 9 * * 1' for Monday 9am) that clarify the 'schedule' parameter syntax, and contextualizes the 'run_once' parameter with specific use cases (publishing pages). This practical guidance exceeds the raw schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a clear specific verb ('Create') and resource ('scheduled task'), immediately establishing the tool's function. It distinguishes itself from sibling tools like 'create_task' by emphasizing automatic execution 'at specified times' and cron expression support, making the scheduling domain explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on when to use specific parameters: 'Use run_once=true for one-time scheduled actions (e.g., publish a page at a specific date/time).' This clearly delineates recurring vs. one-time usage patterns. However, it lacks explicit contrast with sibling 'create_task' (immediate execution vs. scheduled) or reference to 'delete_scheduled_task' for cleanup.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_task
Create a new task item in the shared task tracker. Use descriptive slugs like "sapi-fase4" or "capi-intake-redesign".
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Unique URL-safe identifier, e.g. "sapi-fase4" | |
| tags | No | Array of tags e.g. ["sapi", "p2"] (optional) | |
| scope | No | Scope: "global" or a project ID string (optional, default: global) | |
| title | Yes | Short descriptive title | |
| parent_id | No | ID of parent task if this is a sub-item or fork (optional) | |
| description | No | Longer description of the task (optional) |
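The slug examples above follow a lowercase, hyphen-separated convention. A sketch of a client-side check against that convention (the server's actual validation rules are not documented here and may be looser or stricter):

```python
import re

# Assumed URL-safe slug convention, inferred from the examples
# "sapi-fase4" and "capi-intake-redesign".
SLUG_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")

def is_valid_slug(slug: str) -> bool:
    """Check a slug against the lowercase, hyphen-separated convention."""
    return bool(SLUG_RE.match(slug))

assert is_valid_slug("sapi-fase4")
assert is_valid_slug("capi-intake-redesign")
assert not is_valid_slug("Sapi Fase 4")
```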
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-idempotent write operation (readOnlyHint: false, idempotentHint: false). The description adds practical guidance on slug naming conventions but omits error handling (e.g., duplicate slug behavior), return values, or side effects that would help an agent predict outcomes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise with two efficient sentences. The purpose is front-loaded in the first sentence, and the second sentence provides actionable guidance without verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description should ideally specify what the tool returns (e.g., task ID, object) to enable subsequent tool calls. The parameter schema is well-covered, but the operational contract remains incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents all 6 parameters including examples. The description reinforces the slug parameter with additional examples but does not add semantic meaning beyond what the structured schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (Create) and resource (task item in the shared task tracker), effectively distinguishing it from siblings like create_record, create_entity, and create_scheduled_task through the 'task tracker' context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it establishes the context (shared task tracker), it lacks explicit guidance on when to use this versus create_record or create_entity, and does not mention prerequisites or when not to use the tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_asset (Destructive, Idempotent)
Delete an asset.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | The asset path to delete | |
| project_id | Yes | The project ID |
delete_entity (Destructive, Idempotent)
Delete an entity and ALL its data. This action cannot be undone!
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | The project ID | |
| entity_name | Yes | The entity name to delete |
delete_fragment (Destructive, Idempotent)
Delete a reusable HTML fragment. Pages using its include_tag will render an empty string in its place.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Fragment name to delete | |
| project_id | Yes | The project ID |
delete_page (Destructive, Idempotent)
Delete a page.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | The page slug to delete | |
| project_id | Yes | The project ID |
delete_record (Destructive, Idempotent)
Delete a record by ID.
| Name | Required | Description | Default |
|---|---|---|---|
| record_id | Yes | The record ID to delete | |
| project_id | Yes | The project ID | |
| entity_name | Yes | The entity name (e.g., "blogpost") |
delete_scheduled_task (Destructive, Idempotent)
Delete a scheduled task and all its run history.
| Name | Required | Description | Default |
|---|---|---|---|
| task_id | Yes | The task ID to delete (get from list_scheduled_tasks) | |
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare the operation as destructive and idempotent, the description adds crucial behavioral context not present in structured data: the deletion cascades to 'all its run history'. This warns the agent that this operation destroys associated historical data, a significant side effect beyond just removing the task configuration.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is perfectly front-loaded with the action ('Delete') and contains zero waste. Every phrase earns its place: 'scheduled' differentiates from regular tasks, and 'run history' clarifies the destructive scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of clear annotations (destructive, idempotent) and complete input schema coverage, the description successfully covers the primary behavioral nuance (cascade deletion of history). It appropriately omits return value details since no output schema exists to document.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema adequately documents both 'project_id' and 'task_id'. The main description adds no parameter-specific semantics, but given the complete schema coverage, it doesn't need to. Baseline score applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Delete') with a specific resource ('scheduled task') and distinguishes it from sibling tools like 'delete_task' and 'delete_record' by specifying 'scheduled'. It further clarifies scope by mentioning 'run history', which ties to related history-management siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through the resource type ('scheduled task') and cascade behavior ('run history'), but lacks explicit guidance on when to use this versus 'delete_task' or prerequisites like requiring the task to be inactive. The schema references 'list_scheduled_tasks' for the ID, but this guidance is in the schema rather than the main description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
execute_integration
Execute an integration action — e.g., send an email via Resend, create a payment via Mollie. The system resolves vault credentials server-side so you never handle API keys directly. The integration must be configured first via setup_integration (not needed for built-in integrations). Call get_integration_schema first to get the exact endpoint name and required input fields.
| Name | Required | Description | Default |
|---|---|---|---|
| input | Yes | Input data for the endpoint. Fields depend on the specific endpoint schema. Example for Resend send-email: {"from": "noreply@site.com", "to": "user@example.com", "subject": "Hello", "html": "<p>Content</p>"} | |
| service | Yes | Integration id (e.g., "resend", "mollie") | |
| endpoint | Yes | Endpoint name (e.g., "send-email", "create-payment", "get-payment") | |
| project_id | Yes | The project ID |
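The description prescribes a call order: get_integration_schema first, then execute_integration with the endpoint name and input fields it returns. A hedged sketch of the resulting argument payload (the project ID is hypothetical; the Resend field names come from the input example above):

```python
# Hypothetical execute_integration arguments for a Resend email (illustrative;
# fetch the real endpoint name and required input fields via
# get_integration_schema before constructing this).
send_email_call = {
    "project_id": "proj-123",  # assumed project ID
    "service": "resend",
    "endpoint": "send-email",
    "input": {
        "from": "noreply@site.com",
        "to": "user@example.com",
        "subject": "Hello",
        "html": "<p>Content</p>",
    },
}

# All four top-level parameters are required.
assert {"project_id", "service", "endpoint", "input"} <= send_email_call.keys()
```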
export_tasks (Read-only, Idempotent)
Generate a Markdown overview of all tasks grouped by status (in_progress, blocked, open, done) with completion percentages. Includes recent activity from today and yesterday. Use this at the start of a session for a quick backlog overview, or to share current status.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnlyHint=true and idempotentHint=true, the description adds valuable operational context: the output format (Markdown), specific status groupings, inclusion of completion percentages, and temporal scope (recent activity from today and yesterday). It also clarifies the scope is 'all tasks' (unfiltered). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences total with zero waste. Front-loaded with the core output specification (Markdown overview, status groupings, percentages) before usage guidance. Every sentence earns its place—first defines behavior, second adds temporal context, third provides usage scenarios.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description adequately compensates by detailing the report structure (grouped by status with percentages) and content (recent activity). Given the tool's simplicity (no parameters, read-only) and the clarity of the described output format, the description provides sufficient context for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. Per evaluation rules, zero-parameter tools receive a baseline score of 4. The description appropriately requires no parameter explanation since none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Generate[s] a Markdown overview of all tasks grouped by status' with specific groupings (in_progress, blocked, open, done) and completion percentages. This precisely defines the verb (Generate), resource (tasks), format (Markdown), and aggregation logic, effectively distinguishing it from sibling tools like list_tasks or get_task which likely return structured data rather than formatted reports.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly provides usage context: 'Use this at the start of a session for a quick backlog overview, or to share current status.' This guides the agent toward appropriate scenarios (session initialization, status sharing) versus alternatives like list_tasks (likely for programmatic filtering) or get_task (single-task retrieval). Lacks explicit 'when not to use' guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_analytics (Read-only, Idempotent)
Get visitor analytics for a project. Returns pageview counts, unique visitors, top pages, referrers, device breakdown, UTM data, or daily trend. Use period "today", "7d", "30d", or "90d".
| Name | Required | Description | Default |
|---|---|---|---|
| metric | Yes | Which metric to retrieve. "summary" returns totals, others return breakdowns. | |
| period | No | Time period. Default: 30d | |
| project_id | Yes | The project ID |
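The metric/period contract above can be checked client-side before invoking the tool. A minimal sketch, assuming illustrative metric names beyond the documented "summary" (the server's exact metric enum is not shown on this page):

```python
# Documented period values; default is "30d".
VALID_PERIODS = {"today", "7d", "30d", "90d"}
# "summary" is documented; the remaining names are assumptions for illustration.
VALID_METRICS = {"summary", "pages", "referrers", "devices", "utm", "trend"}

def build_analytics_args(project_id: str, metric: str, period: str = "30d") -> dict:
    """Build a get_analytics argument dict, rejecting undocumented values."""
    if metric not in VALID_METRICS:
        raise ValueError(f"unknown metric: {metric}")
    if period not in VALID_PERIODS:
        raise ValueError(f"unknown period: {period}")
    return {"project_id": project_id, "metric": metric, "period": period}
```

This mirrors the description's guidance that "summary" returns totals while other metrics return breakdowns; the validation itself happens server-side.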
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish read-only, idempotent, non-destructive behavior. The description adds value by disclosing return data types (pageview counts, breakdowns, UTM data) which compensate for the missing output schema. Does not mention rate limits, caching behavior, or data retention windows, but meets baseline expectations given strong annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero waste. Front-loaded with the core purpose ('Get visitor analytics'), followed immediately by return value specification, then parameter guidance. Every clause earns its place; no redundancy with schema annotations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description adequately compensates by enumerating expected return values (summary totals vs. breakdowns). With 100% schema coverage and straightforward 3-parameter structure, the description provides sufficient context for correct invocation, though it could explicitly note that project_id refers to an existing monitored project.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage with clear enum documentation. The description reinforces the period options and implicitly maps metric enum values to return types (e.g., 'daily trend' for trend metric), but does not add significant semantic depth beyond what the schema and enum names already provide. Baseline 3 is appropriate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb ('Get') + resource ('visitor analytics') + scope ('for a project'). The return value enumeration (pageview counts, unique visitors, etc.) clearly distinguishes this from sibling 'get_' tools like get_tracking_scripts or get_project_status by specifying it retrieves statistical data rather than configuration or content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context on valid period values ('today', '7d', '30d', '90d') and maps metric types to expected return data (referrers, devices, etc.). However, it lacks explicit guidance on when NOT to use this versus siblings like get_tracking_scripts or prerequisites like requiring the tracking scripts to be installed first.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_edit_session_changes (Read-only, Idempotent)
Reads back what the user changed during a visual edit session. Returns a structured changelog with uploaded images, dimensions, style changes, etc. Use this after sharing the edit URL with the user to see what they did.
| Name | Required | Description | Default |
|---|---|---|---|
| session_id | Yes | The edit session ID (wpe_...) |
get_entity_schema (Read-only, Idempotent)
Get the schema definition of an entity, including all its properties and their types
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | The project ID | |
| entity_name | Yes | The entity name (e.g., "blogpost") |
get_integration_schema (Read-only, Idempotent)
Get the full schema of a specific integration: all available endpoints, required fields, and input parameters. Call this before execute_integration to know exactly how to call an endpoint. Use list_integrations first to see which integrations are available.
| Name | Required | Description | Default |
|---|---|---|---|
| service | Yes | Integration id (e.g. "admin_auth", "leads", "data_grid", "resend", "mollie") | |
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent/destructive hints, so description focuses on workflow context and return value content (schema includes endpoints/fields). Adds valuable behavioral context about the two-phase workflow (list → get schema → execute) that annotations don't cover. Minor gap: doesn't specify return format structure (JSON Schema, OpenAPI, etc.).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: sentence 1 states purpose, sentence 2 defines prerequisite relationship with execute_integration, sentence 3 defines prerequisite relationship with list_integrations. Perfectly front-loaded and information-dense.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description adequately explains what gets returned (full schema details). References the correct sibling tools to complete the workflow picture. Minor deduction for not clarifying the schema format type, but sufficient for tool selection and invocation given strong annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters (service, project_id) fully documented including examples for service. Description maps 'specific integration' to the service parameter but doesn't add syntax, validation rules, or format details beyond what the schema already provides. Baseline 3 appropriate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Get' with clear resource 'full schema of a specific integration' and details scope (endpoints, required fields, input parameters). Distinctly positions the tool as integration-specific via references to execute_integration and list_integrations, differentiating it from siblings like get_entity_schema.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit workflow guidance: 'Call this before execute_integration' and prerequisite 'Use list_integrations first'. This clearly defines when to use the tool and the necessary sequencing with sibling tools, including implicit 'when-not-to-use' (don't call execute_integration without consulting this first).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_page (Read-only, Idempotent)
Get a specific page with its content. Returns version and version_hash for use with patch_page.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | The page slug (e.g., "index.html", "about.html") | |
| project_id | Yes | The project ID |
get_page_versions (Read-only, Idempotent)
Get the version history of a page. Returns metadata (version numbers, hashes, timestamps, change summaries) — no content. Useful to check what changed or find a version to rollback to.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | The page slug | |
| project_id | Yes | The project ID |
get_project_status (Read-only, Idempotent)
Get detailed status of a specific project including page, asset, and entity counts
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | The project ID |
get_record (Read-only, Idempotent)
Get a single record by ID
| Name | Required | Description | Default |
|---|---|---|---|
| record_id | Yes | The record ID | |
| project_id | Yes | The project ID | |
| entity_name | Yes | The entity name (e.g., "blogpost") |
get_skill (Read-only, Idempotent)
ALWAYS call this tool at the start of every conversation where you will build or modify a WebsitePublisher website. Returns the full agent skill document with critical patterns, code snippets (including the required SAPI session/csrf form pattern), platform knowledge, and go-live checklist. Calling this ensures you have the latest instructions regardless of platform.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
get_task (Read-only, Idempotent)
Get a single task with computed completion percentage, current status, and available architecture document versions.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Task slug (e.g. "sapi-fase4", "capi-intake") |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnly/idempotent status, the description adds valuable context about response contents: specifically mentioning computed fields (completion percentage) and architecture document versions. This helps the agent understand what data richness to expect beyond a generic task object.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, dense sentence that front-loads the action and efficiently packs three distinct value-adds: the operation, the computed fields, and the architecture document versions. Zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one well-documented parameter and comprehensive annotations, the description adequately covers the return value semantics (status, completion percentage, versions) despite no output schema. Could mention idempotency or cache behavior, but sufficient for this complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single 'slug' parameter (including helpful examples like 'sapi-fase4'), the schema carries the semantic load. The description doesn't add parameter-specific guidance beyond implying single-item lookup.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb ('Get') and resource ('single task'), and distinguishes from sibling list_tasks by specifying unique returned fields (computed completion percentage, architecture document versions). However, it doesn't explicitly mention the lookup is 'by slug' though this is implied.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The term 'single task' implies use for individual retrieval versus listing, but lacks explicit guidance on when to use this versus list_tasks or get_task_history. No mention of error cases (e.g., slug not found).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_task_history (Read-only, Idempotent)
Get the full history of a task — all progress updates, notes, snippets, blockers, and architecture documents. Use with_content=true to fetch MD content from S3.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Task slug | |
| type | No | Filter by type: progress, note, snippet, blocker, fork, architecture (optional) | |
| with_content | No | Fetch MD content from S3 for records that have content (optional, default false) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, idempotent). Description adds valuable external dependency context (S3 storage for content) and documents the kinds of records returned. Does not disclose ordering (chronological?), pagination behavior, or size limits that would be useful for a history retrieval tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence front-loads the core purpose and scope; second sentence provides actionable parameter guidance. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Input parameters are fully documented between schema and description. However, lacking an output schema, the description omits what the history object structure looks like (e.g., chronological ordering, timestamp fields, author metadata), which would be necessary for the agent to use the results effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. Description adds value by mapping the 'type' filter values to human-readable content types (e.g., 'progress updates' for 'progress') and emphasizing the S3 retrieval behavior for with_content, helping the agent understand the cost/implication of that flag.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Get') + resource ('full history of a task') and explicitly lists content types (progress updates, notes, snippets, blockers, architecture documents) to distinguish from sibling get_task (current state) and add_task_history (write operation).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage guidance for the with_content parameter ('Use with_content=true to fetch MD content from S3'), indicating when to enable that flag. Lacks explicit contrast with get_task for when to query history versus current state, preventing a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_tracking_scripts (Read-only, Idempotent)
Get the current tracking scripts configured for a project. Returns head_scripts and body_scripts, or null if none configured.
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | The project ID |
get_visitor_auth_config (Read-only, Idempotent)
Get the current visitor auth configuration for a project. Shows whether auth is enabled, which methods are allowed, and the success redirect URL.
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish read-only, idempotent, non-destructive safety profile. Description adds valuable behavioral context by detailing what data is returned ('whether auth is enabled, which methods are allowed, and the success redirect URL'), compensating for the lack of an output schema. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first establishes purpose, second previews return value content. Appropriately front-loaded and sized for a single-parameter read tool. Every sentence earns its place by conveying distinct information (action vs. return payload).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a simple read tool with one required parameter. Good annotations cover safety properties. Absence of output schema is adequately compensated by the description's preview of returned fields (enabled status, methods, redirect URL). No gaps remain for an agent to invoke this tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for the single 'project_id' parameter. Description mentions 'for a project' which aligns with the parameter but does not add additional semantic details, syntax examples, or validation rules beyond what the schema already provides. Baseline score appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Get' with clear resource 'visitor auth configuration' and scope 'for a project'. The term 'Get' effectively distinguishes this from sibling 'configure_visitor_auth', indicating this is the read counterpart to the configuration setter.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context that this is a read operation ('Get', 'Shows'), implying it retrieves current state rather than modifying it. However, it does not explicitly name 'configure_visitor_auth' as the alternative for modifications or provide explicit when-to-use guidance beyond the implicit getter/setter relationship.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_assets (Read-only, Idempotent)
List all assets (images, CSS, JS, etc.) in a project
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | The project ID |
list_entities (Read-only, Idempotent)
List all entities (data models) in a project. Entities are like database tables that store structured data.
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | The project ID |
list_forms (Read-only, Idempotent)
List all configured forms for a project. Shows form names, required fields, configured actions, and submit limits.
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | The project ID |
list_fragments (Read-only, Idempotent)
List all reusable HTML fragments in a project. Returns fragment_name, include_tag, content and version info.
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | The project ID |
list_integrations (Read-only, Idempotent)
List all available integrations and their configuration status for a project. Shows which integrations are fully configured (vault secrets present and ready to use) and which are available but need setup. Use get_integration_schema to see the full endpoint details and input parameters for a specific integration.
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | The project ID |
list_pages (Read-only, Idempotent)
List all pages in a project
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | The project ID |
list_projects (Read-only, Idempotent)
List all projects the authenticated user has access to. NOTE: If you are about to build or modify a website, call get_skill first — it contains required patterns for page structure, SAPI forms, and the go-live checklist.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
list_records (Read-only, Idempotent)
List all records of an entity type with optional pagination
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number for pagination (optional, default: 1) | |
| sort_by | No | Field to sort by (optional, default: "id") | |
| per_page | No | Records per page (optional, default: 50, max: 100) | |
| project_id | Yes | The project ID | |
| sort_order | No | Sort order: "ASC" or "DESC" (optional, default: "DESC") | |
| entity_name | Yes | The entity name (e.g., "blogpost") |
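The pagination defaults and the documented per_page cap can be sketched as a small argument builder. Whether the server clamps or rejects an oversized per_page is not documented; this illustrative sketch clamps:

```python
def list_records_args(project_id: str, entity_name: str, page: int = 1,
                      per_page: int = 50, sort_by: str = "id",
                      sort_order: str = "DESC") -> dict:
    """Apply the documented defaults (page=1, per_page=50, sort_by='id',
    sort_order='DESC') and the documented per_page maximum of 100."""
    if sort_order not in ("ASC", "DESC"):
        raise ValueError("sort_order must be 'ASC' or 'DESC'")
    per_page = max(1, min(per_page, 100))  # schema documents max: 100
    return {
        "project_id": project_id,
        "entity_name": entity_name,
        "page": page,
        "per_page": per_page,
        "sort_by": sort_by,
        "sort_order": sort_order,
    }
```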
list_scheduled_tasks (Read-only, Idempotent)
List all scheduled tasks for a project, showing their status, next run time, and last execution.
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | The project ID | |
| active_only | No | Only show active tasks (default: true) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety (readOnlyHint, destructiveHint, idempotentHint). The description adds valuable behavioral context by disclosing what data is returned ('status, next run time, and last execution'), compensating for the missing output schema. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that is front-loaded with the action ('List all scheduled tasks') and efficiently packs in both the filtering scope and return value preview. No redundant words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (2 primitive parameters, no nested objects), good annotations, and 100% schema coverage, the description is sufficient. It appropriately compensates for the lack of output schema by listing the key return fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description provides minimal additional parameter semantics beyond 'for a project' (mapping to project_id). It does not elaborate on the 'active_only' filtering behavior beyond the schema's own description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('List') with clear resource ('scheduled tasks') and scope ('for a project'). It distinguishes from the sibling 'list_tasks' by specifying 'scheduled tasks' in both name and description, though it lacks explicit contrast with siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying the resource type ('scheduled tasks'), allowing inference that this is for scheduled automation rather than general tasks (addressed by 'list_tasks'). However, it lacks explicit when-to-use guidance, prerequisites, or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_tasks (Read-only, Idempotent)
List all tracked tasks with current completion percentage and status. Snoozed tasks are hidden by default — use include_snoozed=true to show them.
| Name | Required | Description | Default |
|---|---|---|---|
| tag | No | Filter by tag, e.g. "sapi", "capi", "p2" (optional) | |
| scope | No | Filter by scope: "global" or a project ID (optional, defaults to global) | |
| status | No | Filter by status: open, in_progress, blocked, done (optional) | |
| include_snoozed | No | Include snoozed tasks in results (optional, default false) |
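The snooze-hiding behavior described above (tasks with a future snoozed_until date are omitted unless include_snoozed=true) can be modeled locally. A sketch under the assumption that snoozed_until is an ISO date string, as the add_task_history examples suggest:

```python
from datetime import date

def filter_tasks(tasks: list, status: str = None, tag: str = None,
                 include_snoozed: bool = False, today: str = None) -> list:
    """Client-side model of list_tasks filtering. Tasks whose snoozed_until
    lies in the future are hidden unless include_snoozed is True."""
    today = today or date.today().isoformat()
    result = []
    for task in tasks:
        if status and task.get("status") != status:
            continue
        if tag and tag not in task.get("tags", []):
            continue
        snoozed_until = task.get("snoozed_until")
        if snoozed_until and snoozed_until > today and not include_snoozed:
            continue  # ISO dates compare correctly as strings
        result.append(task)
    return result
```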
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish read-only, idempotent, non-destructive safety. The description adds valuable behavioral context: the default filtering of snoozed tasks and hints at return values (completion percentage, status). It does not contradict annotations, though it could disclose pagination behavior or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence establishes purpose and return data; second sentence delivers critical behavioral context about snoozed tasks. Information is front-loaded and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple schema (4 flat optional params) and absence of output schema, the description adequately covers the snoozed-filtering quirk and mentions key return fields. It could improve by addressing pagination limits or explicitly contrasting with 'export_tasks', but it is sufficiently complete for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all four parameters. The description references 'include_snoozed=true' as an example, reinforcing its usage, but does not add semantic meaning (format constraints, valid ranges) beyond what the schema already provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool lists 'tracked tasks' with specific attributes (completion percentage, status). The verb 'List' and resource 'tasks' are specific. However, it does not explicitly differentiate from sibling 'export_tasks' or 'list_scheduled_tasks', though 'tracked' provides implicit distinction from 'scheduled'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides specific behavioral guidance about snoozed tasks being hidden by default and how to reveal them. However, it lacks guidance on when to choose this tool over 'get_task' (singular), 'export_tasks', or 'list_scheduled_tasks', or prerequisites for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
patch_page
Apply targeted changes to an existing page without sending the full content. Requires base_version_hash from get_page to prevent conflicts. Supports replace and delete operations. Much more efficient than update_page for small edits.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | The page slug to patch | |
| patches | Yes | Array of patch operations. Each patch needs: operation ("replace" or "delete"), find (exact string to locate). For replace: also include replace (new string). The find string must match exactly once in the page content. | |
| project_id | Yes | The project ID | |
| patch_summary | No | Brief description of what was changed (optional, stored in version history) | |
| base_version_hash | Yes | The version_hash from get_page response. Ensures no conflicting changes. |
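The patch semantics described above (each find string must match exactly once; replace substitutes, delete removes) can be sketched as follows. This models only the string-matching rules; the real tool additionally enforces the base_version_hash conflict check server-side:

```python
def apply_patches(content: str, patches: list) -> str:
    """Illustrative model of patch_page: each patch's 'find' string must
    occur exactly once in the content, or the whole operation fails."""
    for patch in patches:
        find = patch["find"]
        occurrences = content.count(find)
        if occurrences != 1:
            raise ValueError(
                f"find string matched {occurrences} times, expected exactly 1")
        if patch["operation"] == "replace":
            replacement = patch["replace"]
        elif patch["operation"] == "delete":
            replacement = ""
        else:
            raise ValueError(f"unknown operation: {patch['operation']}")
        content = content.replace(find, replacement, 1)
    return content
```

The exactly-once rule is what makes small patches safe: an ambiguous match aborts instead of silently editing the wrong spot.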
remove_form (Destructive, Idempotent)
Remove a form configuration from a project. Visitors will no longer be able to submit this form.
| Name | Required | Description | Default |
|---|---|---|---|
| form_name | Yes | The form name to remove | |
| project_id | Yes | The project ID |
remove_integration (Destructive, Idempotent)
Remove an integration by permanently deleting all its vault secrets. After removal, the integration endpoints will no longer work until reconfigured via setup_integration.
| Name | Required | Description | Default |
|---|---|---|---|
| service | Yes | Integration id to remove (e.g., "resend", "mollie") | |
| project_id | Yes | The project ID |
remove_tracking_scripts (Destructive, Idempotent)
Remove all tracking scripts from a project. Pages will no longer have tracking code injected.
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | The project ID |
rollback_page
Rollback a page to a previous version. Creates a new version with the old content (audit trail preserved). Specify either target_version (number) or target_version_hash. Use get_page_versions to find available versions.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | The page slug to rollback | |
| project_id | Yes | The project ID | |
| target_version | No | Version number to rollback to (use this OR target_version_hash) | |
| target_version_hash | No | Version hash to rollback to (use this OR target_version) |
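The either/or constraint on rollback targets can be checked client-side before calling. The validator below is illustrative, not part of the server, and the argument values are placeholders:

```python
def valid_rollback_target(arguments: dict) -> bool:
    """True when exactly one of the two rollback targets is given."""
    return ("target_version" in arguments) != ("target_version_hash" in arguments)

# Placeholder arguments: roll the "pricing" page back to version 4.
args = {"slug": "pricing", "project_id": "proj_123", "target_version": 4}
ok = valid_rollback_target(args)
```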
set_tracking_scripts (Idempotent)
Set Google Analytics, Google Tag Manager, Meta Pixel, or other tracking/conversion scripts for a project. Scripts are automatically injected into every page: head_scripts before </head> (for analytics/GTM), body_scripts before </body> (for conversion pixels). Set a field to null or omit it to clear it.
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | The project ID | |
| body_scripts | No | Scripts to inject before </body>. Typically: GTM noscript fallback, conversion pixels, chat widgets. Include full tags. | |
| head_scripts | No | Scripts to inject before </head>. Typically: Google Tag Manager, Google Analytics, Meta Pixel base code. Include full <script> tags. |
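The injection points can be pictured as simple string insertion just before the closing tags. This is a rough sketch of the behavior described above, not the server's actual implementation, and the scripts are placeholders:

```python
def inject_tracking(html: str, head_scripts: str, body_scripts: str) -> str:
    """Insert head_scripts before </head> and body_scripts before </body>."""
    html = html.replace("</head>", head_scripts + "</head>")
    return html.replace("</body>", body_scripts + "</body>")

page = "<html><head><title>Home</title></head><body><p>Hi</p></body></html>"
result = inject_tracking(
    page,
    "<script>/* GA placeholder */</script>",
    "<noscript><!-- pixel placeholder --></noscript>",
)
```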
setup_integration (Idempotent)
Configure an integration by storing its required API keys in the vault. Validates key format against the integration manifest. After setup, the integration endpoints become available for execute_integration calls. Use list_integrations first to see what secrets are required.
| Name | Required | Description | Default |
|---|---|---|---|
| secrets | Yes | Key-value pairs of secret names and their values. Example for Resend: {"resend_api_key": "re_abc123..."} | |
| service | Yes | Integration id (e.g., "resend", "mollie") | |
| project_id | Yes | The project ID |
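Following the Resend example in the table, the arguments might look like this. The API key is fake, and the prefix check is only an assumption about what the manifest's format validation covers:

```python
# Placeholder setup_integration arguments; the API key is fake.
args = {
    "service": "resend",
    "project_id": "proj_123",
    "secrets": {"resend_api_key": "re_abc123_fake"},
}

# Resend keys start with "re_"; presumably the kind of format check
# the integration manifest performs (assumption).
looks_valid = args["secrets"]["resend_api_key"].startswith("re_")
```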
update_entity (Idempotent)
Update entity metadata such as plural name, description, or public_read access. Set public_read to true to make entity data accessible without authentication via /mapi/public/{projectId}/{entity}.
| Name | Required | Description | Default |
|---|---|---|---|
| plural | No | New plural form (optional) | |
| project_id | Yes | The project ID | |
| description | No | New description (optional) | |
| entity_name | Yes | The entity name to update | |
| public_read | No | Enable/disable public read-only access without authentication (optional, default: false) |
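Enabling public_read exposes the entity at the URL pattern given in the description. A sketch with a placeholder project ID:

```python
# Placeholder update_entity arguments enabling unauthenticated reads.
args = {"entity_name": "blogpost", "project_id": "proj_123", "public_read": True}

# The public endpoint follows /mapi/public/{projectId}/{entity}.
public_url = f"/mapi/public/{args['project_id']}/{args['entity_name']}"
```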
update_fragment (Idempotent)
Update a reusable HTML fragment. All pages using its include_tag will serve the new content on next request (cache invalidated automatically).
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Fragment name to update | |
| content | Yes | New HTML content | |
| project_id | Yes | The project ID |
update_page (Idempotent)
Replace an existing page with full new content. For small changes, use patch_page instead — it saves tokens and preserves version history.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | The page slug to update | |
| title | No | New page title (optional) | |
| content | No | New HTML content (optional) | |
| language | No | Page language ISO code e.g. "en", "nl", "de" (optional) | |
| seo_title | No | SEO title shown in browser tab and search results (optional) | |
| project_id | Yes | The project ID | |
| description | No | New SEO description (optional) | |
| landingpage | No | Set as homepage of the project (optional) | |
| seo_keywords | No | SEO keywords, comma-separated, max 9 (optional) | |
| seo_robots_index | No | Include page in sitemap and search engines (optional) | |
| seo_robots_follow | No | Allow search engines to follow links on this page (optional) |
update_record (Idempotent)
Update an existing record. Only provided fields will be updated.
| Name | Required | Description | Default |
|---|---|---|---|
| data | Yes | Fields to update as key-value pairs | |
| record_id | Yes | The record ID to update | |
| project_id | Yes | The project ID | |
| entity_name | Yes | The entity name (e.g., "blogpost") |
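"Only provided fields will be updated" behaves like a shallow merge of the new fields over the existing record. A sketch of that semantics with placeholder data (not the server's code):

```python
# Placeholder update_record arguments: only title and published change.
args = {
    "entity_name": "blogpost",
    "record_id": "rec_42",
    "project_id": "proj_123",
    "data": {"title": "New title", "published": True},
}

# Illustration of the partial-update semantics as a shallow merge.
existing = {"title": "Old title", "body": "Hello", "published": False}
updated = {**existing, **args["data"]}
```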
upload_asset
Upload an asset (image, font, PDF, etc). Provide either content (base64) OR source_url (public HTTPS URL) — not both. Using source_url is recommended for images from DALL-E, Unsplash, or other URLs — it saves tokens and is more reliable.
| Name | Required | Description | Default |
|---|---|---|---|
| alt | No | Alt text for images (optional) | |
| slug | Yes | The asset path (e.g., "images/logo.png", "css/style.css") | |
| content | No | Base64 encoded content (no data: prefix). Use this OR source_url. | |
| project_id | Yes | The project ID | |
| source_url | No | Public HTTPS URL to fetch the file from (e.g. DALL-E image URL, Unsplash URL). Use this OR content. Only for images, PDF, fonts, and ico — CSS/SVG/JSON must use content. |
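The two mutually exclusive upload modes, with placeholder values. Note the table's rule that CSS (along with SVG and JSON) must be sent as base64 content rather than fetched by URL:

```python
import base64

# Mode 1: fetch from a public HTTPS URL (placeholder URL).
by_url = {
    "slug": "images/hero.png",
    "project_id": "proj_123",
    "source_url": "https://images.example.com/hero.png",
    "alt": "Hero banner",
}

# Mode 2: inline base64 content, required for CSS/SVG/JSON.
css = b"body { margin: 0; }"
by_content = {
    "slug": "css/style.css",
    "project_id": "proj_123",
    "content": base64.b64encode(css).decode("ascii"),  # no "data:" prefix
}

# Exactly one of content / source_url per call.
exclusive = ("content" in by_url) != ("source_url" in by_url)
```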
vault_delete_secret (Destructive, Idempotent)
Permanently delete a secret from the project vault. This cannot be undone. The encrypted value is destroyed.
| Name | Required | Description | Default |
|---|---|---|---|
| key_name | Yes | The secret key name to delete (e.g., "resend_api_key") | |
| project_id | Yes | The project ID |
vault_list_secrets (Read-only, Idempotent)
List all secrets stored in a project's vault. Returns metadata only (key names, prefixes, service types, status) — never the actual secret values. Use this to check what credentials are stored before setting up integrations.
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | The project ID | |
| service_type | No | Optional filter by service type (e.g., "resend", "mollie") |
vault_store_secret (Idempotent)
Store or update a secret in the project vault. The value is encrypted with AES-256-GCM and can never be read back. Use this to save API keys for integrations. If the key_name already exists, the value is replaced. For integration setup, prefer setup_integration which handles validation.
| Name | Required | Description | Default |
|---|---|---|---|
| value | Yes | The secret value to encrypt and store (e.g., the actual API key) | |
| key_name | Yes | Unique identifier for the secret (lowercase, underscores, e.g., "resend_api_key", "mollie_api_key") | |
| project_id | Yes | The project ID | |
| description | No | Human-readable description (e.g., "Resend API key for transactional email") — optional | |
| service_type | No | Service type label (e.g., "resend", "mollie") — optional |
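A sketch of a vault_store_secret call with a fake key value; the name check below only mirrors the lowercase-with-underscores convention stated in the parameter table:

```python
import re

# Placeholder arguments; the secret value is fake.
args = {
    "project_id": "proj_123",
    "key_name": "resend_api_key",
    "value": "re_abc123_fake",
    "service_type": "resend",
    "description": "Resend API key for transactional email",
}

# Key names are lowercase with underscores per the parameter docs.
name_ok = re.fullmatch(r"[a-z][a-z0-9_]*", args["key_name"]) is not None
```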
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Receive usage reports showing how your server is being used
- Get monitoring and health status updates for your server
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.