get_console_output

Retrieve console output from a running Godot project to monitor logs, debug issues, and filter by text or error level for focused analysis.

Instructions

Get console output from the running Godot project. Optionally filter by text or error level.

Input Schema

Name         Required  Description                      Default
filter       No        Text filter for console output   -
errorsOnly   No        Only show errors and warnings    -
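
For illustration, the handler could be invoked directly with these parameters (an MCP client would send the equivalent JSON arguments); note that the reference implementation below does not yet apply them:

    # Hypothetical direct call; "player" is an example substring.
    var result := _get_console_output({
    	"filter": "player",
    	"errorsOnly": true,
    })
    print(result.get("output", []))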

Implementation Reference

  • The _get_console_output function reads the latest .log file from the project's user-data logs directory and returns its last 100 non-empty lines as the console output.
    func _get_console_output(_params: Dictionary) -> Dictionary:
    	# Logs live under the project's user data directory (user://logs).
    	var log_dir := OS.get_user_data_dir() + "/logs"
    	var dir := DirAccess.open(log_dir)
    	if not dir:
    		return {"output": [], "note": "No log directory found"}
    	# Collect every .log file in the directory.
    	var files: Array[String] = []
    	dir.list_dir_begin()
    	var file_name := dir.get_next()
    	while file_name != "":
    		if file_name.ends_with(".log"):
    			files.append(file_name)
    		file_name = dir.get_next()
    	dir.list_dir_end()
    	if files.is_empty():
    		return {"output": [], "note": "No log files found"}
    	# Sort lexically and treat the last entry as the latest log.
    	files.sort()
    	var latest := files[-1]
    	var f := FileAccess.open(log_dir + "/" + latest, FileAccess.READ)
    	if not f:
    		return {"output": [], "note": "Could not open log file"}
    	var content := f.get_as_text()
    	f.close()
    	var lines := content.split("\n")
    	# Return only the last 100 lines, skipping blank ones.
    	var tail_count := 100
    	var start := max(0, lines.size() - tail_count)
    	var tail: Array[String] = []
    	for i in range(start, lines.size()):
    		if lines[i].strip_edges() != "":
    			tail.append(lines[i])
    	# Note: _params (filter, errorsOnly) is not applied here.
    	return {"output": tail, "file": latest}
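
The function accepts _params but, as written, does not apply the schema's filter or errorsOnly options. Below is a minimal sketch of how those parameters could be applied to the tail before returning, assuming Godot's conventional "ERROR"/"WARNING" markers in log lines (the helper name _apply_filters is hypothetical):

    func _apply_filters(lines: Array[String], params: Dictionary) -> Array[String]:
    	var text_filter: String = params.get("filter", "")
    	var errors_only: bool = params.get("errorsOnly", false)
    	var result: Array[String] = []
    	for line in lines:
    		# Assumption: error and warning lines carry these markers.
    		if errors_only and not (line.contains("ERROR") or line.contains("WARNING")):
    			continue
    		# Keep only lines containing the requested substring, if one was given.
    		if text_filter != "" and not line.contains(text_filter):
    			continue
    		result.append(line)
    	return result

The handler's final line would then become return {"output": _apply_filters(tail, _params), "file": latest}.
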
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions filtering options but fails to describe critical behaviors: whether the output is real-time or historical, whether it is paginated or capped in volume, what format it takes (e.g., plain text or structured logs), and whether there are side effects (e.g., clearing the console). For a read operation with zero annotation coverage, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('Get console output') and immediately adds optional filtering details. There is zero wasted verbiage or redundancy, making it appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (runtime diagnostics with filtering) and lack of annotations or output schema, the description is incomplete. It doesn't explain the return format, volume limitations, real-time vs. historical behavior, or error handling. For a tool that likely returns structured log data, this leaves the agent guessing about how to interpret results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters (filter and errorsOnly). The description adds marginal value by mentioning 'filter by text or error level', which aligns with but doesn't expand beyond the schema. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't provide additional parameter insights.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('console output from the running Godot project'), making the purpose specific. It distinguishes this tool from sibling tools that deal with nodes, files, scenes, or resources rather than runtime console output. No other console-related siblings exist in the tool list, so little further differentiation is possible.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., project must be running), nor does it compare with other diagnostic tools like get_signal_log or get_runtime_state. The optional filtering hints at usage but lacks explicit context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
