
MCP Airlock

by lara-muhanna

Server Quality Checklist

Profile completion: 58%

A complete profile improves this server's visibility in search results.
  • Disambiguation 4/5

    Each tool targets a distinct function (provenance auditing, capability issuance, generic HTTP fetching, and weather retrieval) with minimal overlap. An agent can easily distinguish when to use each based on the task requirements.

    Naming Consistency 3/5

    Mixed naming patterns: two tools use an 'airlock_' prefix with verb-led names (issue, audit_tail), while the others use an 'http_' prefix or no prefix (weather_hourly). The weather tool breaks the verb-first convention used by the others, adopting a noun-adjective pattern instead.

    Tool Count 3/5

    Four tools is acceptably compact but the set suffers from scope confusion, mixing core Airlock security functions with unrelated utility tools (weather, HTTP). This feels like two different servers merged together rather than a cohesive toolset.

    Completeness 2/5

    The Airlock domain (capability management) is severely underdeveloped, offering only issuance and audit-tail reading without revocation, validation, or capability listing. The utility tools (weather, HTTP) are also minimal, offering only hourly forecasts and GET requests without parameter customization. (A sketch of possible capability-lifecycle additions follows this checklist.)

  • Average 3.1/5 across 4 of 4 tools scored. Lowest: 2.4/5.

    See the tool scores section below for per-tool breakdowns; a sketch of fuller tool descriptions follows that section.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.1.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 4 tools.
  • No known security issues or vulnerabilities reported.

    Report a security issue

  • Are you the author? Claim this server (see "How to claim the server?" below).

  • Add related servers to improve discoverability.
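The Completeness note above flags missing capability-lifecycle operations. A minimal sketch of what those could look like, written as MCP-style tool declarations in TypeScript; none of these tools exist in the current server, and every name, parameter, and behavior shown is hypothetical.

// Hypothetical companions to airlock_issue_capability and airlock_audit_tail.
// Names, parameters, and semantics are illustrative only.
const proposedAirlockTools = [
  {
    name: "airlock_revoke_capability",
    description: "Revoke a previously issued capability lease before its TTL expires.",
    inputSchema: {
      type: "object",
      properties: {
        lease_id: { type: "string", description: "Lease identifier returned by airlock_issue_capability." },
      },
      required: ["lease_id"],
    },
  },
  {
    name: "airlock_validate_capability",
    description: "Check whether a lease is still valid for a given session and intent, without modifying it.",
    inputSchema: {
      type: "object",
      properties: {
        lease_id: { type: "string", description: "Lease identifier to validate." },
        session_id: { type: "string", description: "Session the lease should be bound to." },
      },
      required: ["lease_id"],
    },
  },
  {
    name: "airlock_list_capabilities",
    description: "List active leases for a subject so agents can reuse existing grants instead of issuing new ones.",
    inputSchema: {
      type: "object",
      properties: {
        subject: { type: "string", description: "Subject whose active leases should be listed." },
      },
      required: ["subject"],
    },
  },
];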

Tool Scores

  • airlock_issue_capability

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Lacking annotations, the description only discloses the 'short-lived' nature (relating to ttl_seconds) but omits critical behavioral details: what authorization the lease grants, how the constraints object limits usage, side effects of issuance, or security implications.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single dense sentence with no filler words; information is front-loaded. However, extreme brevity becomes a liability given the complete absence of schema documentation and annotations.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Inadequate for a security-sensitive tool with 6 parameters (including a nested constraints object). Fails to explain the capability model, what the lease authorizes, or the purpose of required fields like 'subject', creating operational risk.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description partially compensates by implying session_id, intent, and ttl_seconds via 'bound to session+intent' and 'short-lived', but leaves 'subject', 'tools', and the nested 'constraints' object completely unexplained.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States the core action (Issue) and resource (capability lease) with binding context (session+intent), but uses domain jargon without explanation and fails to differentiate from sibling 'airlock_audit_tail' or explain what the lease enables.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to issue capabilities versus using other tools, no security prerequisites, and no warnings about the sensitivity of granting tool access via leases.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • airlock_audit_tail

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. While it notes events are 'recent' and 'signed', it fails to specify the time window for 'recent', result ordering, return format, pagination behavior, or the implications of 'signed' (verification requirements?). This leaves critical behavioral gaps for an audit tool.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is extremely concise at five words, immediately front-loading the verb and object. While efficient, this brevity is arguably inappropriate given the complete absence of annotations and schema descriptions, leaving the description too terse to stand alone as documentation.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a security/audit tool handling cryptographically signed provenance data, the description is inadequate. With no output schema, no parameter descriptions, and no annotations, the description should explain what data structure is returned and what 'airlock' provenance tracks, but it provides none of this context.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 1/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 0% description coverage for the 'limit' parameter. The description completely fails to compensate by explaining the parameter's purpose, valid ranges, or default behavior (20). The agent has no textual guidance on how to use the only available parameter.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Read') and the specific resource ('signed provenance events'), distinguishing it from the sibling 'airlock_issue_capability' (which issues/writes) and the unrelated 'weather_hourly' and 'http_get_json' tools. However, it could better clarify what constitutes a 'provenance event' in this specific 'airlock' domain.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites (e.g., specific permissions needed to read signed audit trails) or when not to use it. The agent receives no signals about appropriate usage contexts.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • weather_hourly

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full disclosure burden. It successfully indicates the geocoding behavior ('Resolve') and data source ('Open-Meteo'), but omits critical behavioral details like error handling for invalid locations, output format, units (imperial/metric), or rate limits.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, dense sentence with zero redundancy. Every word contributes essential information (action, scope, data source), making it appropriately sized and front-loaded.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the simple 3-parameter schema with primitive types and no output schema, the description adequately covers the core function. However, gaps remain: the 'hours' parameter is undocumented, and the absence of annotations or output schema leaves the return structure and units unspecified.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%, requiring the description to compensate. It implicitly documents the 'city' and 'state' parameters by specifying they are 'US city/state', adding geographic context not present in the raw parameter names. However, it fails to mention the 'hours' parameter or its constraints (1-168, default 24).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verbs ('Resolve', 'fetch') and identifies the exact resource ('hourly weather'). It clearly distinguishes the tool from siblings (airlock_audit_tail, http_get_json) by specifying the weather domain and Open-Meteo data source.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies geographic constraints by specifying 'US city/state', hinting at usage boundaries. However, it lacks explicit guidance on when to use it versus alternatives (e.g., for non-US locations) or on prerequisites.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • http_get_json

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It adds valuable behavioral context by specifying 'public' (implying no auth headers required) and 'JSON' (setting expectation for response parsing), but omits operational details like timeout behavior, redirect handling, or error responses for non-JSON content.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, dense sentence with zero waste. It front-loads the action and precisely qualifies the target resource type and protocol without filler text.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's simplicity (2 parameters, no output schema), the description adequately covers the essential contract. The 'public' qualifier is crucial for setting correct expectations, though mentioning error handling for non-JSON responses would strengthen completeness.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, with the schema fully documenting both 'url' and 'query' parameters. The description adds no additional parameter semantics beyond what the schema provides, meeting the baseline expectation for well-documented schemas.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description provides a specific verb ('Fetch'), resource type ('JSON'), and scope ('public HTTPS API endpoint'), clearly positioning this as a generic external HTTP client distinct from domain-specific siblings like weather_hourly and airlock_audit_tail.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While it lacks explicit 'when-to-use' statements, the description implies usage through the 'public HTTPS API' scope, suggesting external/unauthenticated endpoints versus the internal/domain-specific siblings. However, it does not explicitly direct users to alternatives like weather_hourly for weather data.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
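The per-tool notes above repeatedly flag the same gaps: undocumented parameters, unstated defaults, and no usage guidance. A sketch of what fuller descriptions could look like, in TypeScript; the parameter names, the limit default (20), and the hours range (1-168, default 24) come from the report above, while the remaining semantics are assumptions made only to illustrate the level of detail agents need.

// Illustrative rewrites only. Parameter names and the cited defaults come from
// the scores above; any behavior not stated in the report is an assumption.
const improvedDescriptions = {
  airlock_issue_capability:
    "Issue a short-lived capability lease for a subject, authorizing the listed tools and " +
    "bound to a session_id and intent. constraints narrows what the lease permits; " +
    "ttl_seconds sets its lifetime. Review issued leases with airlock_audit_tail.",
  airlock_audit_tail:
    "Read recent signed provenance events recorded by Airlock. limit caps how many events " +
    "are returned (default 20). Read-only; use it to verify what airlock_issue_capability has granted.",
  weather_hourly:
    "Resolve a US city/state to coordinates and fetch an hourly forecast from Open-Meteo. " +
    "hours selects how many hours to return (1-168, default 24). Not intended for non-US locations.",
  http_get_json:
    "Fetch JSON from a public HTTPS API endpoint via GET (no authentication is sent). " +
    "Prefer weather_hourly for weather lookups; use this tool for other public APIs.",
};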

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

mcp-airlock MCP server (card badge preview)

Copy to your README.md:

Score Badge

mcp-airlock MCP server (score badge preview)

Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS (Tool Definition Quality Score) + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
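Applying those weights to this server's own per-dimension scores gives a concrete picture of where it lands. A minimal TypeScript sketch; the dimension scores are taken from the Tool Scores section above, and the exact rounding or any additional factors Glama applies may differ.

// Tool Definition Quality Score (TDQS) per tool: weighted sum of the six
// dimension scores listed in the Tool Scores section above.
const toolScores = [
  { purpose: 3, usage: 2, behavior: 2, parameters: 2, conciseness: 4, completeness: 2 }, // airlock_issue_capability
  { purpose: 4, usage: 2, behavior: 2, parameters: 1, conciseness: 4, completeness: 2 }, // airlock_audit_tail
  { purpose: 5, usage: 3, behavior: 3, parameters: 3, conciseness: 5, completeness: 3 }, // weather_hourly
  { purpose: 5, usage: 3, behavior: 3, parameters: 3, conciseness: 5, completeness: 4 }, // http_get_json
];

const tdqs = toolScores.map((t) =>
  0.25 * t.purpose + 0.20 * t.usage + 0.20 * t.behavior +
  0.15 * t.parameters + 0.10 * t.conciseness + 0.10 * t.completeness,
);
// tdqs ≈ [2.45, 2.55, 3.70, 3.80] — mean ≈ 3.1 and min ≈ 2.4, matching the checklist above.

const mean = tdqs.reduce((a, b) => a + b, 0) / tdqs.length;
const min = Math.min(...tdqs);
const definitionQuality = 0.6 * mean + 0.4 * min; // ≈ 2.86

// Server Coherence: Disambiguation 4, Naming 3, Tool Count 3, Completeness 2, weighted equally.
const coherence = (4 + 3 + 3 + 2) / 4; // 3.0

const overall = 0.7 * definitionQuality + 0.3 * coherence; // ≈ 2.9, i.e. tier C (≥2.0 but below the 3.0 needed for B)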


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/lara-muhanna/mcp-airlock'
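The same endpoint can be consumed programmatically. A minimal TypeScript sketch; the response shape is not documented here, so it is simply printed as JSON.

// Minimal sketch (Node 18+, run as an ES module): fetch this server's
// directory entry and print whatever the API returns.
const response = await fetch(
  "https://glama.ai/api/mcp/v1/servers/lara-muhanna/mcp-airlock",
);
if (!response.ok) throw new Error(`Request failed with status ${response.status}`);
console.log(JSON.stringify(await response.json(), null, 2));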

If you have feedback or need assistance with the MCP directory API, please join our Discord server.