
Facets Module MCP Server

by Facets-cloud

test_already_previewed_module

Test a previously previewed module by deploying it to a target project using terraform apply, after validating project existence and preview module support.

Instructions

Test a module that has been previewed by asking the user for the project_name where it needs to be tested.

This tool checks if the project exists, verifies if it supports preview modules, and then does terraform apply. You can check logs for the apply using get_deployment_logs, and check the status of the deployment using check_deployment_status.

Args:
    project_name (str): The name of the test project (stack) to deploy to
    intent (str): The intent of the module to deploy
    flavor (str): The flavor of the module to deploy
    version (str): The version of the module to deploy
    environment_name (str, optional): The specific environment name to deploy to. Provide this only if the user has asked you to.

Returns: str: Result of the deployment operation as a JSON string
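To make the Args and Returns contract concrete, here is a minimal sketch of an invocation payload. All argument values are illustrative placeholders, not taken from the tool documentation:

```python
import json

# Hypothetical arguments for test_already_previewed_module.
# Every value below is an illustrative placeholder.
arguments = {
    "project_name": "demo-test-stack",  # test project (stack) to deploy to
    "intent": "database",               # intent of the previewed module
    "flavor": "postgres",               # flavor of the previewed module
    "version": "1.0",                   # version of the previewed module
    # "environment_name" is optional; include it only if the user
    # has asked for a specific environment.
}

# The tool returns its result as a JSON string, so a caller would
# typically json.loads() the response before inspecting fields.
print(json.dumps(arguments, indent=2))
```

Since the return value is an opaque JSON string, an agent should parse it rather than pattern-match on raw text.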

Input Schema

Name              Required  Description  Default
project_name      Yes
intent            Yes
flavor            Yes
version           Yes
environment_name  No

Output Schema

Name    Required  Description  Default
result  Yes

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses the steps (check project existence, verify preview support, terraform apply) and the return type (JSON string). However, it omits side effects (e.g., resource changes), required permissions, and failure modes. It adds some value but is not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is moderately concise with clear sections: purpose, mechanism, args, returns. There is some redundancy (first sentence repeats project_name from Args) and extra detail about monitoring tools, but overall it is front-loaded and structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (deployment with checks) and lack of output schema details (only JSON string), the description adequately covers the workflow but misses prerequisites, error handling, and detailed return structure. It is moderately complete but has notable gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so description must compensate. The Args section provides brief explanations for each parameter (e.g., environment_name is optional and should only be provided if user asks). This adds meaning beyond the schema but is minimal. No enums, examples, or constraints are given.
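As an illustration of the schema-level descriptions the review finds missing, a sketch of an annotated input schema follows. All description strings are hypothetical suggestions, not taken from the server:

```python
import json

# A hypothetical JSON Schema for this tool's input with per-parameter
# descriptions added; the wording is suggested, not from the server.
input_schema = {
    "type": "object",
    "properties": {
        "project_name": {
            "type": "string",
            "description": "Existing test project (stack) that supports preview modules.",
        },
        "intent": {
            "type": "string",
            "description": "Intent of the previewed module to deploy.",
        },
        "flavor": {
            "type": "string",
            "description": "Flavor of the previewed module to deploy.",
        },
        "version": {
            "type": "string",
            "description": "Version of the previewed module to deploy.",
        },
        "environment_name": {
            "type": "string",
            "description": "Optional target environment; omit unless the user names one.",
        },
    },
    "required": ["project_name", "intent", "flavor", "version"],
}

print(json.dumps(input_schema, indent=2))
```

With descriptions like these in the schema itself, the prose description would no longer need to compensate for 0% coverage.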

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool tests a module that has been previewed by deploying to a project. The verb 'test' and resource 'module' are specific. However, the phrase 'by asking the user for the project_name' is ambiguous since the parameter is provided as input, not by asking.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus sibling tools like validate_module or push_preview_module. The description only mentions post-use monitoring tools (get_deployment_logs, check_deployment_status) without clarifying preconditions or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

