strale_methodology

Explains Strale's dual-profile scoring model for AI agent trust and quality, covering code quality, operational dependability, test infrastructure, audit trails, and transparent scoring methodology.

Instructions

Get Strale's quality and trust methodology. Explains the dual-profile scoring model: Quality Profile (code quality, 4 factors) and Reliability Profile (operational dependability, 4 factors weighted by capability type), combined via a published 5×5 matrix into the SQS confidence score. Covers execution guidance, test infrastructure (~1340 test suites with tiered scheduling), provenance tracking, audit trails, badge system, and honest disclosure of current limitations.

Input Schema

JSON Schema

No arguments

Implementation Reference

  • The tool 'strale_methodology' is registered directly in 'packages/mcp-server/src/tools.ts'. Its handler is defined inline as an asynchronous function returning a hardcoded string describing the methodology.
        async () => {
          const methodologyText = `STRALE QUALITY & TRUST METHODOLOGY
    ===================================
    
    WHAT STRALE IS
    Strale is trust and quality infrastructure for AI agents. Agents call capabilities (atomic data operations) and solutions (multi-step workflows) via a unified API. Every execution is independently tested, scored, and auditable.
    
    SQS — STRALE QUALITY SCORE
    The SQS is a combined confidence score (0-100) derived from two independent profiles:
    - Quality Profile (QP): How well-built is Strale's code? (code correctness, schema compliance, error handling, edge cases)
    - Reliability Profile (RP): How dependable is the service right now? (availability, success rate, upstream health, latency)
    The two profiles combine via a published matrix into the headline SQS score.
    
    QUALITY PROFILE (QP)
    Measures code and methodology quality. Stable over time — only changes when code changes.
    Four factors:
      Correctness (50%) — Does it return accurate data for known inputs?
      Schema Compliance (31%) — Does the response match the declared format?
      Error Handling (13%) — Are errors caught and reported cleanly?
      Edge Cases (6%) — Does it handle unusual inputs gracefully?
    Upstream service failures are EXCLUDED from the Quality Profile.
    Grade scale: A (>=90), B (>=75), C (>=50), D (>=25), F (<25)
    Label format: "Code quality: [Grade]" (DEC-20260315-J)
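The grade scale and 5×5 combination described above can be sketched in TypeScript. This is illustrative only: the actual matrix values are not published in this excerpt, so the `SQS_MATRIX` numbers below are made-up placeholders; only the grade thresholds come from the text.

```typescript
// Grade scale from the methodology text: A (>=90), B (>=75), C (>=50), D (>=25), F (<25).
type Grade = "A" | "B" | "C" | "D" | "F";

function toGrade(score: number): Grade {
  if (score >= 90) return "A";
  if (score >= 75) return "B";
  if (score >= 50) return "C";
  if (score >= 25) return "D";
  return "F";
}

const GRADES: Grade[] = ["A", "B", "C", "D", "F"];

// Hypothetical 5x5 matrix: rows = Quality Profile grade, columns = Reliability
// Profile grade. The real published matrix values are not shown in this excerpt.
const SQS_MATRIX: number[][] = [
  [95, 88, 75, 55, 30],
  [85, 80, 68, 50, 28],
  [70, 65, 55, 40, 22],
  [50, 45, 38, 28, 15],
  [25, 22, 18, 12, 5],
];

// Combine the two profile scores into a headline SQS via a matrix lookup.
function combineSqs(qp: number, rp: number): number {
  const row = GRADES.indexOf(toGrade(qp));
  const col = GRADES.indexOf(toGrade(rp));
  return SQS_MATRIX[row][col];
}

// e.g. QP 92 (grade A) and RP 78 (grade B) map to the (A, B) matrix cell.
const headline = combineSqs(92, 78); // 88 in this illustrative matrix
```

The lookup-table design matches the document's claim that the matrix is "published": agents can verify any headline score by reading the two profile grades and the fixed cell they select.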
  • The 'strale_methodology' tool is registered using server.registerTool within the MCP server tool initialization logic in 'packages/mcp-server/src/tools.ts'.
    // Meta-tool: strale_methodology (no API key required)
    server.registerTool(
      "strale_methodology",
      {
        description:
          `Get Strale's quality and trust methodology. Explains the dual-profile scoring model: Quality Profile (code quality, 4 factors) and Reliability Profile (operational dependability, 4 factors weighted by capability type), combined via a published 5×5 matrix into the SQS confidence score. Covers execution guidance, test infrastructure (~${capabilities.length * 5} test suites with tiered scheduling), provenance tracking, audit trails, badge system, and honest disclosure of current limitations.`,
        inputSchema: z.object({}),
      },
      async () => {
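The registration pattern above (a tool name, a descriptor with an empty input schema, and an inline async handler returning hardcoded text) can be sketched self-containedly. This stand-in registry is hypothetical and not the real MCP SDK `server.registerTool` API; it only mirrors the shape shown in the excerpt.

```typescript
// Minimal stand-in for an MCP-style tool registry (not the real SDK).
type ToolResult = { content: { type: "text"; text: string }[] };
type ToolHandler = () => Promise<ToolResult>;

const tools = new Map<string, { description: string; handler: ToolHandler }>();

function registerTool(name: string, description: string, handler: ToolHandler): void {
  tools.set(name, { description, handler });
}

// Register a no-argument meta-tool that returns a hardcoded methodology string,
// mirroring how strale_methodology is wired up in tools.ts.
registerTool(
  "strale_methodology",
  "Get Strale's quality and trust methodology.",
  async () => ({
    content: [{ type: "text", text: "STRALE QUALITY & TRUST METHODOLOGY\n..." }],
  }),
);

async function callTool(name: string): Promise<ToolResult> {
  const tool = tools.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.handler();
}
```

Because the handler closes over no request state and takes no arguments, the tool is safe to call at any time, which is consistent with the "no API key required" comment in the source.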
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively compensates by detailing the comprehensive nature of the returned methodology (5×5 matrix, 1340 test suites, provenance tracking, current limitations), giving the agent clear expectations about the informational scope and depth of the response.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is information-dense but well-structured, moving logically from high-level concept (dual-profile model) to specific components (4 factors each, 5×5 matrix) to operational details (test infrastructure, audit trails) to limitations. Every clause adds specific content details without redundancy, though the single-sentence density approaches the limit of optimal readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description comprehensively enumerates the methodology components an agent can expect (scoring models, execution guidance, test infrastructure statistics, badge system, limitations). This effectively substitutes for formal output documentation by setting clear expectations about the knowledge base being retrieved.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema defines zero parameters (empty object). According to calibration rules, 0 parameters establishes a baseline score of 4. The description correctly requires no additional parameter explanation since there are no inputs to document.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Get') and resource ('Strale's quality and trust methodology'), then elaborates extensively on scope: dual-profile scoring model, Quality/Reliability Profiles, SQS confidence score, test infrastructure, and audit systems. It clearly distinguishes from sibling strale_trust_profile by focusing on explanatory methodology rather than specific profile data retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description thoroughly documents what content is returned (scoring matrices, test suites, badge systems), allowing agents to infer this is for understanding Strale's evaluation framework. However, it lacks explicit when-to-use guidance or comparison to alternatives like strale_trust_profile (e.g., 'use this to understand scoring methodology, use strale_trust_profile to retrieve specific component ratings').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/strale-io/strale'
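The same request can be made from TypeScript. A minimal sketch, assuming a runtime with a global `fetch` (Node 18+); the response shape is not documented here, so the result is typed as `unknown`:

```typescript
// Base URL from the curl example above.
const GLAMA_API = "https://glama.ai/api/mcp/v1";

// Build the URL for a server record identified by owner and repo.
function serverUrl(owner: string, repo: string): string {
  return `${GLAMA_API}/servers/${owner}/${repo}`;
}

// Fetch a server record; throws on a non-2xx response.
async function getServerInfo(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(serverUrl(owner, repo));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```

For example, `getServerInfo("strale-io", "strale")` targets the same endpoint as the curl command.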

If you have feedback or need assistance with the MCP directory API, please join our Discord server.