
Generate test artifacts

wopee_generate_artifact

Instructions

Generate AI-powered test artifacts for a suite using the Wopee.io AI engine. Each call creates one artifact type — call multiple times for different types. Generation order matters: APP_CONTEXT must be generated before user stories, and user stories before test cases. If called out of order, the AI may produce lower quality results. On success, returns confirmation that generation started. Use wopee_fetch_artifact to retrieve the generated content once ready. Do NOT use this to update existing artifacts — use wopee_update_artifact instead. Generating the same type again overwrites the previous version.
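The ordering constraint above (APP_CONTEXT before user stories, user stories before test cases) can be sketched as a small dependency check. The artifact type names come from the tool's schema; the prerequisite mapping and helper function below are hypothetical illustrations inferred from the description, not part of the Wopee API.

```python
# Hypothetical sketch of the generation-order rule described above.
# The prerequisite graph is an assumption based on the description's
# "APP_CONTEXT before user stories, user stories before test cases".
PREREQUISITES = {
    "APP_CONTEXT": [],
    "GENERAL_USER_STORIES": ["APP_CONTEXT"],
    "USER_STORIES_WITH_TEST_CASES": ["APP_CONTEXT"],
    "TEST_CASES": ["GENERAL_USER_STORIES"],
}

def ready_to_generate(artifact_type: str, already_generated: set) -> bool:
    """Return True if every prerequisite for this type was generated first."""
    return all(dep in already_generated
               for dep in PREREQUISITES.get(artifact_type, []))
```

For example, `ready_to_generate("TEST_CASES", {"APP_CONTEXT"})` is false until user stories have been generated, mirroring the quality warning about out-of-order calls.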

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `type` | Yes | Type of test artifact to generate. One of: `APP_CONTEXT`, `GENERAL_USER_STORIES`, `USER_STORIES_WITH_TEST_CASES`, `TEST_CASES`, `TEST_CASE_STEPS`, `REUSABLE_TEST_CASES`, `REUSABLE_TEST_CASE_STEPS`. Start with `APP_CONTEXT`, then generate stories and test cases from it. | |
| `suiteUuid` | Yes | UUID of the analysis suite to generate artifacts for. Get this from `wopee_create_blank_suite` or `wopee_fetch_analysis_suites`. | |
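Assuming the standard MCP `tools/call` JSON-RPC envelope, a request carrying these two parameters might look like the following sketch. The request `id` and the `suiteUuid` value are placeholders; the envelope shape comes from the MCP specification, not from anything Wopee-specific.

```python
import json

# Sketch of an MCP tools/call request for wopee_generate_artifact.
# The id and suiteUuid are placeholder values for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "wopee_generate_artifact",
        "arguments": {
            "type": "APP_CONTEXT",
            "suiteUuid": "00000000-0000-0000-0000-000000000000",
        },
    },
}
print(json.dumps(request, indent=2))
```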
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. Effectively reveals: async behavior ('returns confirmation that generation started' implying background processing), destructive potential ('overwrites the previous version'), and quality constraints (order dependency affects output quality). Missing only minor details like rate limits or exact error states.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly structured and front-loaded. Every sentence earns its place: purpose first, usage pattern (multiple calls), ordering constraints, success behavior, sibling references, and overwrite warning. No redundancy or filler despite covering complex workflow dependencies.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Excellent coverage for a generation tool without output schema. Explains return value ('confirmation that generation started'), side effects (overwrite behavior), and full workflow integration (fetch_artifact for retrieval). Only minor gap is lack of error state descriptions or timing estimates for generation completion.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline score applies. The description loosely references the type parameter ('Each call creates one artifact type') but adds no syntax, format, or constraint details beyond what the schema already documents for 'type' and 'suiteUuid'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: 'Generate AI-powered test artifacts' provides clear verb and resource. Explicitly distinguishes from siblings by stating 'Do NOT use this to update existing artifacts — use wopee_update_artifact instead' and directing users to 'wopee_fetch_artifact' for retrieval, clearly delineating the create/update/read separation in the tool suite.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Outstanding guidance: explicitly states when-not to use ('Do NOT use this to update'), names the correct alternative ('use wopee_update_artifact instead'), and details critical workflow dependencies ('APP_CONTEXT must be generated before user stories, and user stories before test cases'). Also specifies the async retrieval pattern requiring wopee_fetch_artifact.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/Wopee-io/wopee-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.