ctrltest.analyze_pid

Analyze PID controller performance for flapping-wing systems by evaluating gains against plant dynamics, computing control metrics and stability measures.

Instructions

Score PID gains for a flapping-wing plant. Provide plant dynamics and optional gradients/metadata. Returns key control metrics plus provenance. Example input: {"plant":{"natural_frequency_hz":4.2,"damping_ratio":0.45},"gains":{"kp":1.1,"ki":0.2,"kd":0.08},"diffsph_metrics":{"force_gradient_norm":1.9}}

Input Schema

| Name    | Required | Description | Default |
|---------|----------|-------------|---------|
| request | Yes      |             |         |

Output Schema

| Name                        | Required | Description | Default |
|-----------------------------|----------|-------------|---------|
| ise                         | Yes      |             |         |
| metadata                    | No       |             |         |
| overshoot                   | Yes      |             |         |
| moe_energy_j                | Yes      |             |         |
| extra_metrics               | No       |             |         |
| settling_time               | Yes      |             |         |
| moe_latency_ms              | Yes      |             |         |
| lyapunov_margin             | Yes      |             |         |
| multi_modal_score           | No       |             |         |
| gust_rejection_pct          | Yes      |             |         |
| moe_switch_penalty          | Yes      |             |         |
| cpg_energy_baseline_j       | Yes      |             |         |
| cpg_energy_consumed_j       | Yes      |             |         |
| cpg_energy_reduction_pct    | Yes      |             |         |
| gust_detection_latency_ms   | Yes      |             |         |
| gust_detection_bandwidth_hz | Yes      |             |         |

Implementation Reference

  • Registers the 'ctrltest.analyze_pid' tool on the FastMCP server instance using the @app.tool decorator. The handler function 'analyze' receives typed input and calls the core evaluation logic.
    @app.tool(
        name="ctrltest.analyze_pid",
        description=(
            "Score PID gains for a flapping-wing plant. "
            "Provide plant dynamics and optional gradients/metadata. "
            "Returns key control metrics plus provenance. "
            "Example input: "
            '{"plant":{"natural_frequency_hz":4.2,"damping_ratio":0.45},'
            '"gains":{"kp":1.1,"ki":0.2,"kd":0.08},'
            '"diffsph_metrics":{"force_gradient_norm":1.9}}'
        ),
        meta={"version": "0.1.0", "categories": ["control", "analysis"]},
    )
    def analyze(request: ControlAnalysisInput) -> ControlAnalysisOutput:
        return evaluate_control(request)
  • Core implementation of the tool logic: an analytic surrogate computes the closed-loop response with the python-control library, derives performance metrics (overshoot, ISE, settling time), applies heuristic scores for the gust/CPG/MoE components, and optionally computes a gradient-based multi-modal score.
    # Imports this excerpt relies on (from the surrounding module):
    import logging
    import math
    from typing import Any

    import numpy as np
    from control import tf, feedback, forced_response

    LOGGER = logging.getLogger(__name__)

    def evaluate_control(inputs: ControlAnalysisInput) -> ControlAnalysisOutput:
        if inputs.prefer_high_fidelity and is_available():
            try:
                high_fidelity = run_high_fidelity(inputs)
                if high_fidelity is not None:
                    return high_fidelity
            except Exception as exc:  # pragma: no cover - fallback safety
                LOGGER.warning("PteraControls evaluation failed, using surrogate results: %s", exc)
    
        plant = inputs.plant
        gains = inputs.gains
        simulation = inputs.simulation
    
        omega_n = 2.0 * math.pi * plant.natural_frequency_hz
        zeta = plant.damping_ratio
        plant_tf = tf([omega_n**2], [1, 2 * zeta * omega_n, omega_n**2])
        pid_tf = tf([gains.kd, gains.kp, gains.ki], [1, 0])
        closed_loop = feedback(pid_tf * plant_tf, 1)
    
        t = np.linspace(0, simulation.duration_s, simulation.sample_points)
        setpoint = inputs.setpoint
        u = np.full_like(t, setpoint)
        _, y = forced_response(closed_loop, T=t, U=u)
        error = setpoint - y
    
        overshoot = float(np.max(y) - setpoint)
        tolerance = plant.settling_tolerance_rad
        try:
            settling_idx = np.argmax(np.abs(error) < tolerance)
            settling_time = (
                float(t[settling_idx])
                if np.abs(error[settling_idx]) < tolerance
                else simulation.duration_s
            )
        except ValueError:  # pragma: no cover
            settling_time = simulation.duration_s
        ise = float(np.trapezoid(error**2, t))
    
        gust_detector = inputs.gust_detector
        adaptive_cpg = inputs.adaptive_cpg
        moe_router = inputs.moe_router
    
        detection_latency = float(max(gust_detector.latency_ms, 0.1))
        detector_gain = min(gust_detector.sensitivity * (gust_detector.bandwidth_hz / 1200.0), 1.1)
        gust_rejection_pct = min(adaptive_cpg.target_rejection_pct * detector_gain, 0.95)
        energy_baseline = float(adaptive_cpg.energy_baseline_j)
        energy_reduction = min(max(adaptive_cpg.energy_reduction_pct, 0.0), 0.95)
        energy_consumed = energy_baseline * (1.0 - energy_reduction)
        lyapunov_margin = float(adaptive_cpg.lyapunov_margin)
        moe_switch_penalty = moe_router.switch_cost_weight * moe_router.switch_events
        moe_latency = min(
            moe_router.latency_budget_ms * (1.0 + 0.02 * moe_router.switch_events),
            moe_router.latency_budget_ms * 1.5,
        )
        moe_energy = moe_router.energy_budget_j * (1.0 - energy_reduction)
    
        diffsph_metrics = _coerce_metrics(inputs.diffsph_metrics)
        foam_metrics = _coerce_metrics(inputs.foam_metrics)
    
        extra_metrics: dict[str, Any] | None = None
        multi_modal_score: float | None = None
    
        if diffsph_metrics:
            extra_metrics = (extra_metrics or {}) | diffsph_metrics
        if foam_metrics:
            extra_metrics = (extra_metrics or {}) | foam_metrics
        if diffsph_metrics and foam_metrics:
            ratio = float(foam_metrics.get("lift_drag_ratio", 0.0))
            denom = max(abs(ratio), 1e-6)
            force_norm = float(diffsph_metrics.get("force_gradient_norm", 0.0))
            multi_modal_score = round(force_norm / denom, 6)
    
        return ControlAnalysisOutput(
            overshoot=float(overshoot),
            ise=float(ise),
            settling_time=float(settling_time),
            gust_detection_latency_ms=round(detection_latency, 6),
            gust_detection_bandwidth_hz=round(gust_detector.bandwidth_hz, 6),
            gust_rejection_pct=round(gust_rejection_pct, 6),
            cpg_energy_baseline_j=round(energy_baseline, 6),
            cpg_energy_consumed_j=round(energy_consumed, 6),
            cpg_energy_reduction_pct=round(energy_reduction, 6),
            lyapunov_margin=round(lyapunov_margin, 6),
            moe_switch_penalty=round(moe_switch_penalty, 6),
            moe_latency_ms=round(moe_latency, 6),
            moe_energy_j=round(moe_energy, 6),
            multi_modal_score=multi_modal_score,
            extra_metrics=extra_metrics,
            metadata={"solver": "analytic"},
        )
  • Pydantic schema for tool input, including plant dynamics, PID gains, simulation settings, and optional metrics/configs for gust detection, adaptive CPG, MoE router, gradients.
    class ControlAnalysisInput(BaseModel):
        plant: ControlPlant
        gains: PIDGains
        simulation: ControlSimulation = Field(default_factory=ControlSimulation)
        setpoint: float = Field(0.0)
        gust_detector: GustDetectorConfig = Field(default_factory=GustDetectorConfig)
        adaptive_cpg: AdaptiveCPGConfig = Field(default_factory=AdaptiveCPGConfig)
        moe_router: MoERouterConfig = Field(default_factory=MoERouterConfig)
        diffsph_metrics: dict[str, Any] | None = None
        foam_metrics: dict[str, Any] | None = None
        prefer_high_fidelity: bool = Field(
            default=True,
            description="Attempt to use PteraControls when available before falling back to the analytic surrogate.",
        )
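The `Field(default_factory=...)` pattern above gives each request a fresh default object rather than one shared mutable value. The same semantics can be shown with stdlib dataclasses; `Simulation` here is a hypothetical stand-in for `ControlSimulation`.

```python
# default_factory semantics, illustrated with stdlib dataclasses:
# every instance gets its own freshly constructed default object.
from dataclasses import dataclass, field

@dataclass
class Simulation:  # stand-in for ControlSimulation
    duration_s: float = 2.0
    sample_points: int = 500

@dataclass
class AnalysisInput:  # stand-in for ControlAnalysisInput
    setpoint: float = 0.0
    simulation: Simulation = field(default_factory=Simulation)

a, b = AnalysisInput(), AnalysisInput()
a.simulation.duration_s = 5.0
print(b.simulation.duration_s)  # 2.0 -- defaults are not shared
```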
  • Pydantic schema for tool output, defining all computed control metrics and metadata.
    class ControlAnalysisOutput(BaseModel):
        overshoot: float
        ise: float
        settling_time: float
        gust_detection_latency_ms: float
        gust_detection_bandwidth_hz: float
        gust_rejection_pct: float
        cpg_energy_baseline_j: float
        cpg_energy_consumed_j: float
        cpg_energy_reduction_pct: float
        lyapunov_margin: float
        moe_switch_penalty: float
        moe_latency_ms: float
        moe_energy_j: float
        multi_modal_score: float | None = None
        extra_metrics: dict[str, Any] | None = None
        metadata: dict[str, Any] | None = None
  • Optional high-fidelity helper that uses the external PteraControls toolbox when it is preferred and available; it returns None otherwise, triggering the surrogate fallback.
    def run_high_fidelity(inputs: ControlAnalysisInput) -> Optional[ControlAnalysisOutput]:
        if pc is None:  # pragma: no cover - guarded import
            return None
    
        try:
            payload = {
                "plant": inputs.plant.model_dump(),
                "gains": inputs.gains.model_dump(),
                "simulation": inputs.simulation.model_dump(),
                "setpoint": inputs.setpoint,
                "gust_detector": inputs.gust_detector.model_dump(),
                "adaptive_cpg": inputs.adaptive_cpg.model_dump(),
                "moe_router": inputs.moe_router.model_dump(),
                "diffsph_metrics": inputs.diffsph_metrics or {},
                "foam_metrics": inputs.foam_metrics or {},
            }
            result: Any = None
            if hasattr(pc, "evaluate_controller"):
                result = pc.evaluate_controller(payload)  # type: ignore[attr-defined]
            elif hasattr(pc, "api") and hasattr(pc.api, "evaluate_controller"):
                result = pc.api.evaluate_controller(payload)  # type: ignore[attr-defined]
            if not isinstance(result, dict):
                return None
    
            metadata = {
                "solver": "pteracontrols",
                "source": result.get("source", "unknown"),
            }
    
            return ControlAnalysisOutput(
                overshoot=float(result.get("overshoot", 0.0)),
                ise=float(result.get("ise", 0.0)),
                settling_time=float(result.get("settling_time", inputs.simulation.duration_s)),
                gust_detection_latency_ms=float(result.get("gust_detection_latency_ms", inputs.gust_detector.latency_ms)),
                gust_detection_bandwidth_hz=float(result.get("gust_detection_bandwidth_hz", inputs.gust_detector.bandwidth_hz)),
                gust_rejection_pct=float(result.get("gust_rejection_pct", inputs.adaptive_cpg.target_rejection_pct)),
                cpg_energy_baseline_j=float(result.get("cpg_energy_baseline_j", inputs.adaptive_cpg.energy_baseline_j)),
                cpg_energy_consumed_j=float(result.get("cpg_energy_consumed_j", inputs.adaptive_cpg.energy_baseline_j)),
                cpg_energy_reduction_pct=float(result.get("cpg_energy_reduction_pct", inputs.adaptive_cpg.energy_reduction_pct)),
                lyapunov_margin=float(result.get("lyapunov_margin", inputs.adaptive_cpg.lyapunov_margin)),
                moe_switch_penalty=float(result.get("moe_switch_penalty", 0.0)),
                moe_latency_ms=float(result.get("moe_latency_ms", inputs.moe_router.latency_budget_ms)),
                moe_energy_j=float(result.get("moe_energy_j", inputs.moe_router.energy_budget_j)),
                multi_modal_score=result.get("multi_modal_score"),
                extra_metrics=result.get("extra_metrics"),
                metadata=metadata,
            )
        except Exception as exc:  # pragma: no cover - safety
            raise RuntimeError(f"PteraControls evaluation failed: {exc}") from exc
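The helper above probes the optional backend for `evaluate_controller` first at the top level, then under an `api` attribute. That duck-typed dispatch can be sketched in isolation; the stub modules below are hypothetical stand-ins for PteraControls.

```python
# Sketch of the capability-probing dispatch used above: try
# pc.evaluate_controller, then pc.api.evaluate_controller, and
# return None when neither exists (or the backend is absent).
from types import SimpleNamespace
from typing import Any, Optional


def dispatch(pc: Any, payload: dict) -> Optional[dict]:
    if pc is None:  # guarded import failed
        return None
    if hasattr(pc, "evaluate_controller"):
        return pc.evaluate_controller(payload)
    if hasattr(pc, "api") and hasattr(pc.api, "evaluate_controller"):
        return pc.api.evaluate_controller(payload)
    return None


# Hypothetical backend stubs exposing the two supported shapes.
flat = SimpleNamespace(evaluate_controller=lambda p: {"source": "flat", **p})
nested = SimpleNamespace(api=SimpleNamespace(
    evaluate_controller=lambda p: {"source": "api", **p}))

print(dispatch(flat, {"ise": 0.1})["source"])  # flat
print(dispatch(nested, {})["source"])          # api
print(dispatch(None, {}))                      # None
```

Returning None rather than raising lets the caller fall through to the analytic surrogate without special-casing which backend shape was found.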
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. While it mentions what the tool returns ('key control metrics plus provenance'), it doesn't describe important behavioral aspects like computational requirements, accuracy limitations, whether it's a simulation or real-time analysis, error conditions, or performance characteristics. The description is insufficient for a complex control analysis tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is reasonably concise with two sentences plus an example. The first sentence states the purpose and required inputs, the second describes the output, and the example provides concrete illustration. However, the example could be more focused on illustrating the structure rather than specific values.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complex parameter structure (1 top-level parameter with 10 nested properties across multiple objects), 0% schema description coverage, and no annotations, the description is incomplete. While an output schema exists (which helps), the description doesn't adequately explain the sophisticated control engineering concepts involved or the tool's operational context. The example helps but doesn't compensate for the missing conceptual explanation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description mentions 'plant dynamics and optional gradients/metadata' which aligns with the input schema's structure. However, with 0% schema description coverage, the description doesn't adequately explain the complex nested parameter structure (plant, gains, simulation, setpoint, gust_detector, adaptive_cpg, moe_router, diffsph_metrics, foam_metrics, prefer_high_fidelity). The example input shows some parameters but doesn't cover the full complexity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Score PID gains for a flapping-wing plant' with specific verb ('Score') and resource ('PID gains'). It mentions providing 'plant dynamics and optional gradients/metadata' and returning 'key control metrics plus provenance'. However, without sibling tools, we cannot assess differentiation from alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions the tool's function but offers no context about prerequisites, typical use cases, or limitations. The example input shows what data to provide, but doesn't explain when this analysis would be appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
