Glama

Server Details

MCP server for static security analysis of Android source code

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: ako2345/android-security-analyzer
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials on your clients never expire unexpectedly.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

4 tools
analyze_android_project

Analyzes an Android project's source code for security vulnerabilities. Accepts project files (AndroidManifest.xml, build.gradle, Java/Kotlin sources, XML configs) and returns a structured security report with findings, severity scores, and recommendations.

Parameters

files (required): Array of project files to analyze
options (optional): Analysis options
projectName (required): Name of the Android project being analyzed
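As a concrete sketch, an MCP client would send a `tools/call` request shaped like the one below. The file contents, the per-file `path`/`content` keys, and the `scanSecrets` option name are illustrative assumptions, not values documented by this server; only the argument names (`files`, `options`, `projectName`) come from the published schema.

```python
# Hypothetical tools/call payload for analyze_android_project.
# The option key "scanSecrets", the file entry shape, and the file
# contents are assumptions for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_android_project",
        "arguments": {
            "projectName": "MyApp",
            "files": [
                {"path": "AndroidManifest.xml", "content": "<manifest/>"},
                {"path": "app/build.gradle", "content": "android {}"},
            ],
            "options": {"scanSecrets": True},  # assumed option name
        },
    },
}
```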
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full disclosure burden. It effectively describes the return format (structured report with severity scores) since no output schema exists, and mentions analysis options (secret scanning). However, it omits safety characteristics (read-only vs. destructive), data retention policies for uploaded files, or execution time expectations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences with zero redundancy. First sentence establishes the core function; second covers inputs and outputs. Information is front-loaded and every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of both annotations and output schema, the description appropriately compensates by detailing the return structure (findings, severity scores, recommendations). It could be improved by noting the read-only/safe nature of the analysis, but adequately covers the essential gaps given the tool's moderate complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds value by enumerating expected file types (AndroidManifest.xml, build.gradle, Java/Kotlin sources, XML configs), providing concrete examples of what to include in the files array beyond the generic schema description of 'project files'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the specific action (analyzes), resource (Android project source code), and goal (security vulnerabilities). It distinguishes itself from siblings explain_finding (singular finding explanation), list_android_security_checks (listing available checks), and health (system status) by being the only one that performs comprehensive project file analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the purpose is clear, the description lacks explicit when-to-use guidance or prerequisite context (e.g., 'use this first to scan the project, then use explain_finding for details on specific findings'). Usage relative to siblings must be inferred from the tool names rather than stated guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

explain_finding

Explains a specific security finding by its rule ID. Returns why it is a risk, how to fix it, and false positive considerations.

Parameters

findingId (required): The rule/finding ID (e.g., MAN-001, SRC-003, SEC-002)
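A minimal sketch of calling this tool with one of the schema's example IDs. The three-letter-prefix pattern below is inferred from the examples (MAN, SRC, SEC) and may not cover every rule family, so treat it as a client-side sanity check only:

```python
import re

# Rule-ID shape inferred from the schema examples (MAN-001, SRC-003,
# SEC-002); other prefixes may exist on the server.
RULE_ID = re.compile(r"^[A-Z]{3}-\d{3}$")

finding_id = "MAN-001"
assert RULE_ID.match(finding_id)

request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "explain_finding",
        "arguments": {"findingId": finding_id},
    },
}
```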
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively discloses return value content ('why it is a risk, how to fix it, and false positive considerations'), helping the agent understand this provides actionable remediation advice. It misses explicit non-destructive or error-state declarations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences with zero waste. The first front-loads the core action and required parameter; the second details the behavioral output. Every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with 100% schema coverage and no output schema, the description adequately covers the tool's purpose and return content (risk explanation, fixes, false positives). It lacks explicit error-handling documentation but is otherwise complete for the complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with clear examples (e.g., MAN-001). The description mentions 'by its rule ID' which aligns with the parameter name and schema, but adds no additional semantic context, validation rules, or format constraints beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Explains'), the resource ('security finding'), and the required identifier ('by its rule ID'). It distinguishes itself from siblings like 'list_android_security_checks' (which lists the available checks) by emphasizing that it operates on a 'specific' finding.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The word 'specific' implies usage when the user has an identified finding ID rather than browsing, suggesting contrast with 'list_android_security_checks'. However, there is no explicit when-to-use guidance, when-not-to-use, or naming of sibling alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

health

Returns the server health status, version, and rule engine statistics.

Parameters

No parameters
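Since the tool takes no input, the call reduces to a name and an empty arguments object; the payload below is a sketch of the standard MCP `tools/call` shape, not output captured from this server:

```python
# Parameter-less tools/call for health; an empty arguments object is
# the conventional way to invoke a zero-parameter MCP tool.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "health", "arguments": {}},
}
```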

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It effectively discloses what the tool returns (health status, version, and rule engine statistics), giving the agent insight into the response structure. It does not mention caching, side effects, or performance characteristics, but the return value disclosure is the critical behavior for a health endpoint.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, tightly constructed sentence with no wasted words. It front-loads the verb 'Returns' and lists the three distinct data categories returned, with every element earning its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (zero parameters) and lack of output schema, the description adequately compensates by enumerating the three components of the return value (health, version, statistics). It is complete enough for an agent to understand the tool's utility, though it could benefit from timing guidance (e.g., when to check health).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters. Per scoring guidelines, 0 params equals a baseline of 4. The description correctly omits parameter discussion since none exist, and the schema coverage is 100% (of empty set).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb 'Returns' and clearly identifies the resources (server health status, version, rule engine statistics). It effectively distinguishes itself from the Android security-focused siblings (analyze_android_project, explain_finding, list_android_security_checks) by focusing on server diagnostics rather than code analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implies this is a diagnostic/utility tool distinct from the Android analysis siblings, it provides no explicit guidance on when to invoke it (e.g., 'use at startup to verify connectivity' or 'call before operations to check server status').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_android_security_checks

Returns the list of all implemented security checks/rules with their IDs, categories, severity levels, and descriptions.

Parameters

category (optional): Filter rules by category (manifest, gradle, source, xml-config, secret). Leave empty for all rules.
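The optional filter can be sketched as a small request builder that rejects categories outside the documented set; the helper name `list_checks_request` is ours for illustration, not part of the server:

```python
# Categories taken from the parameter description above.
CATEGORIES = {"manifest", "gradle", "source", "xml-config", "secret"}

def list_checks_request(category=None, request_id=4):
    """Build a tools/call payload; omit 'category' to list all rules."""
    if category is not None and category not in CATEGORIES:
        raise ValueError(f"unknown category: {category!r}")
    arguments = {} if category is None else {"category": category}
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "list_android_security_checks",
            "arguments": arguments,
        },
    }
```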
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It effectively discloses return structure (fields included for each rule), but omits safety/side-effect disclosure (e.g., whether this is read-only, cached, or expensive) and pagination behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with zero waste. Every clause serves a purpose: action (Returns), scope (all implemented), resource (security checks/rules), and payload structure (IDs, categories, severity, descriptions).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but description compensates by detailing what the response contains. Given the tool's simplicity (1 optional parameter, flat structure), this is sufficient, though explicit mention of read-only safety would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% ('Filter rules by category...'), establishing baseline 3. Description mentions 'all implemented' which implies the optional filtering capability, but does not add syntax details, examples, or semantic meaning beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Returns' + specific resource 'security checks/rules' + details on returned fields (IDs, categories, severity, descriptions). Clear scope ('all implemented') distinguishes this catalog/metadata tool from sibling 'analyze_android_project' which likely executes checks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus siblings like 'analyze_android_project' or 'explain_finding'. Missing context such as 'use this to review available rules before configuring an analysis' or 'use this to understand rule IDs referenced in findings'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
