Glama
Noosbai
by Noosbai

Rate a print

submit_feedback

Submit 3D printing feedback to improve future recommendations and share anonymized data with the community. Record print quality, adhesion, strength, and issues encountered.

Instructions

Records feedback after a print. Rate the quality, adhesion, strength, and any issues encountered. This data improves future recommendations and may be shared anonymously with the community.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| model_name | Yes | Name of the printed model | |
| material | Yes | Material used (PLA, PETG, ABS...) | |
| printer | Yes | Printer name | |
| nozzle | Yes | Nozzle diameter in mm | |
| goal | Yes | Goal used (draft, standard, quality, strong) | |
| layer_height | Yes | Layer height used, in mm | |
| infill_percent | Yes | Infill percentage | |
| perimeters | Yes | Number of perimeters | |
| print_speed | Yes | Print speed in mm/s | |
| nozzle_temp | Yes | Nozzle temperature in °C | |
| bed_temp | Yes | Bed temperature in °C | |
| support_used | Yes | Supports used? | |
| brim_used | Yes | Brim used? | |
| quality_score | Yes | Surface quality (1=terrible, 5=perfect) | |
| adhesion_score | Yes | Bed adhesion (1=detached, 5=perfect) | |
| strength_score | Yes | Perceived strength (1=fragile, 5=unbreakable) | |
| overall_score | Yes | Overall satisfaction (1=failed, 5=perfect) | |
| issues | No | Issues encountered: warping, stringing, under-extrusion, over-extrusion, layer-shift, ringing, blobs, elephants-foot, cracking, poor-bridging, clogging | |
| notes | No | Free-form notes about the print | |
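To make the schema concrete, here is a hedged sketch of an arguments payload for this tool. The field names come from the schema above; the sample values, the `validate_feedback` helper, and the client-side range check are illustrative assumptions, not part of the tool itself (the actual MCP call is transport-specific and omitted).

```python
# Required field names, taken from the schema above.
REQUIRED = {
    "model_name", "material", "printer", "nozzle", "goal",
    "layer_height", "infill_percent", "perimeters", "print_speed",
    "nozzle_temp", "bed_temp", "support_used", "brim_used",
    "quality_score", "adhesion_score", "strength_score", "overall_score",
}

def validate_feedback(args: dict) -> list[str]:
    """Return names of missing required fields (hypothetical pre-flight check)."""
    problems = sorted(REQUIRED - args.keys())
    # The four scores are documented as 1-5; flag out-of-range values too.
    for key in ("quality_score", "adhesion_score",
                "strength_score", "overall_score"):
        if key in args and not 1 <= args[key] <= 5:
            problems.append(f"{key} (out of 1-5 range)")
    return problems

# Example payload; all values below are made up for illustration.
example_args = {
    "model_name": "benchy",
    "material": "PLA",
    "printer": "Prusa MK4",
    "nozzle": 0.4,
    "goal": "standard",
    "layer_height": 0.2,
    "infill_percent": 15,
    "perimeters": 2,
    "print_speed": 120,
    "nozzle_temp": 215,
    "bed_temp": 60,
    "support_used": False,
    "brim_used": False,
    "quality_score": 4,
    "adhesion_score": 5,
    "strength_score": 4,
    "overall_score": 4,
    "issues": ["stringing"],  # optional
    "notes": "Slight stringing between hull and cabin",  # optional
}

print(validate_feedback(example_args))  # [] means ready to submit
```

An empty list means the payload satisfies the schema's required fields; any client would still rely on the server for authoritative validation.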
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that the tool 'enregistre' (records/stores) data and that the data 'peuvent être partagées anonymement' (may be shared anonymously), which hints at data persistence and privacy considerations. However, it doesn't address critical behavioral aspects such as authentication requirements, rate limits, whether the operation is idempotent, what happens on failure, or what confirmation/response to expect. For a data submission tool with 19 parameters, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each earn their place. The first sentence states the core action and key parameters. The second sentence explains the value and data usage policy. No wasted words, front-loaded with the primary purpose, and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high parameter count (19), no annotations, and no output schema, the description provides adequate basic context about what the tool does and why. However, it lacks sufficient behavioral transparency for a data submission tool, doesn't explain what happens after submission (success/failure responses), and offers minimal guidance on usage versus alternatives. The 100% schema coverage helps, but the description alone leaves gaps in operational understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 19 parameters thoroughly with descriptions, constraints, and defaults. The description adds minimal value beyond the schema by mentioning the four main scoring dimensions (quality, adhesion, strength, issues) and the purpose of data collection. However, it doesn't provide additional context about parameter relationships, formatting expectations, or usage patterns that aren't already in the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Enregistre un retour après impression') and resources ('Note la qualité, l'adhésion, la solidité, et les problèmes rencontrés'). It distinguishes itself from siblings like 'export_feedback' or 'feedback_stats' by focusing on submission/recording rather than analysis or export. The description explicitly mentions what data is captured and the downstream benefits.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('après impression') but doesn't provide explicit guidance on when to use this tool versus alternatives. There's no mention of prerequisites, timing considerations, or comparison with sibling tools like 'diagnose_print' or 'upload_print'. The agent must infer usage from the purpose statement alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Noosbai/PrusaMCP'
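The same lookup can be sketched in Python using only the standard library. The URL pattern is taken from the curl command above; the `server_url` helper and the assumption that the endpoint returns JSON are illustrative, and the response shape is not documented here.

```python
import json
import urllib.request

BASE = "https://glama.ai/api/mcp/v1/servers"

def server_url(owner: str, name: str) -> str:
    """Build the directory URL for a server, mirroring the curl example."""
    return f"{BASE}/{owner}/{name}"

def fetch_server_info(owner: str, name: str) -> dict:
    """GET the server record; assumes a JSON response body."""
    with urllib.request.urlopen(server_url(owner, name)) as resp:
        return json.load(resp)

print(server_url("Noosbai", "PrusaMCP"))
```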

If you have feedback or need assistance with the MCP directory API, please join our Discord server.