tests-run

Execute Unity tests with detailed results, supporting filtering by test mode, assembly, namespace, class, and method for targeted testing during development.

Instructions

Execute Unity tests and return detailed results. Supports filtering by test mode, assembly, namespace, class, and method. Recommended to use 'EditMode' for faster iteration during development.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| testMode | No | Test mode to run. Options: 'EditMode', 'PlayMode'. Values: [EditMode, PlayMode] | EditMode |
| testAssembly | No | Specific test assembly name to run (optional). Example: 'Assembly-CSharp-Editor-testable' | |
| testNamespace | No | Specific test namespace to run (optional). Example: 'MyTestNamespace' | |
| testClass | No | Specific test class name to run (optional). Example: 'MyTestClass' | |
| testMethod | No | Specific fully qualified test method to run (optional). Example: 'MyTestNamespace.FixtureName.TestName' | |
| includePassingTests | No | Include details for all tests, both passing and failing. If you only need details for failing tests, set to false. | false |
| includeMessages | No | Include test result messages in the test results. If you only need pass/fail status, set to false. | true |
| includeStacktrace | No | Include stack traces in the test results. | false |
| includeLogs | No | Include console logs in the test results. | false |
| logType | No | Log type filter for console logs. Options: 'Log', 'Warning', 'Assert', 'Error', 'Exception'. Values: [Error, Warning, Log] | Warning |
| includeLogsStacktrace | No | Include stack traces for console logs in the test results. This is a huge amount of data; use only if really needed. | false |
| requestId | No | | |
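The parameters above map onto a standard MCP `tools/call` request. A minimal sketch in Python of what such a payload could look like; the argument values are illustrative, and the transport details are not covered by this page:

```python
import json

# Hypothetical JSON-RPC 2.0 payload invoking the tests-run tool.
# Parameter names come from the input schema above; values are examples only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "tests-run",
        "arguments": {
            "testMode": "EditMode",        # recommended for faster iteration
            "testClass": "MyTestClass",    # narrow the run to a single fixture
            "includePassingTests": False,  # report details for failing tests only
            "includeMessages": True,       # keep result messages in the output
        },
    },
}

print(json.dumps(request, indent=2))
```

Omitted optional parameters fall back to the defaults listed in the table, so a bare `{"testMode": "EditMode"}` would run the full EditMode suite.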
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that the tool 'Supports filtering by test mode, assembly, namespace, class, and method' and returns 'detailed results,' but it lacks critical behavioral details: whether it's read-only or destructive (e.g., does it modify test state?), performance implications (e.g., execution time), error handling, or output format. For a tool with 12 parameters and no annotations, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded: the first sentence states the core purpose, the second adds filtering support, and the third provides a usage tip. Each sentence earns its place, with no wasted words. It could be slightly more structured (e.g., bullet points for filtering), but it's efficient overall.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (12 parameters, no annotations, no output schema), the description is incomplete. It doesn't explain the output format ('detailed results' is vague), error conditions, or behavioral traits like whether it's safe for repeated use. For a tool that executes tests and returns results, more context is needed to guide an agent effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is high (92%), so the schema already documents most parameters well. The description adds minimal value beyond the schema: it lists filtering options (test mode, assembly, namespace, class, method) and mentions 'detailed results,' but doesn't explain parameter interactions or provide additional context. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Execute Unity tests and return detailed results.' It specifies the verb ('Execute') and resource ('Unity tests') with the outcome ('detailed results'). However, it doesn't explicitly differentiate from sibling tools, as none appear to be test-related (siblings are mostly asset, gameobject, scene, and profiling tools).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some usage guidance: "Recommended to use 'EditMode' for faster iteration during development." This implies a context (development iteration) and a preference, but it doesn't specify when to use this tool versus alternatives (e.g., other test runners or manual testing) or any prerequisites. The guidance is helpful but not comprehensive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
