Skip to main content
Glama

nexus-mcp

Server Details

Japanese LLM security — prompt injection detection (jpi-guard) + PII masking (PII Guard). Free.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL
Repository
nexus-api-lab/nexus-mcp
GitHub Stars
0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client
Glama
MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool DescriptionsA

Average 4.2/5 across 6 of 6 tools scored.

Server CoherenceA
Disambiguation4/5

Most tools have distinct purposes, but there is some overlap between check_injection, sanitize_content, and validate_rag_input, as all three involve prompt injection detection for Japanese content. However, their specific focuses (detection vs. sanitization vs. validation) and descriptions help differentiate them, preventing major confusion.

Naming Consistency4/5

Tool names follow a consistent verb_noun pattern (e.g., check_injection, get_pii_guard_key, pii_scan), with only sanitize_content slightly deviating by using a verb_adjective format. Overall, the naming is predictable and readable, with minor inconsistency.

Tool Count5/5

With 6 tools, the server is well-scoped for its purpose of Japanese-language content security and PII handling. Each tool serves a clear role, such as injection detection, PII scanning, and API key management, making the count appropriate without being excessive or insufficient.

Completeness5/5

The tool set provides comprehensive coverage for the domain of Japanese content security, including injection detection, sanitization, validation, PII scanning, and API key acquisition for related services. There are no obvious gaps, as it supports the full workflow from input validation to content processing.

Available Tools

6 tools
check_injectionAInspect

Detect prompt injection attacks in user-supplied text before passing to an LLM. Specialized for Japanese RAG applications. Detects: role impersonation, full-width character bypass (全角バイパス), polite-language disguise (丁寧語擬装), indirect injection, Base64 obfuscation. Returns is_injection, risk_level, and detection_reason.

ParametersJSON Schema
NameRequiredDescriptionDefault
inputYesThe user-supplied text to check for prompt injection (max 10,000 chars).
api_keyYesYour jpi-guard API key. Get a free trial at https://jpi-guard.nexus-api-lab.workers.dev/
languageNoPrimary language. Use 'ja' for Japanese-language LLM apps. Default: auto.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool does (detects specific injection types) and the return values (is_injection, risk_level, detection_reason), but lacks details on permissions, rate limits, or error handling. It does not contradict annotations, as there are none.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by specialization details and return values, all in three concise sentences with zero waste. Every sentence adds value, making it efficient and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (security-focused with multiple detection types) and lack of output schema, the description is mostly complete by specifying the return values. However, it could benefit from more behavioral context, such as performance or limitations, to fully compensate for the missing annotations and output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description does not add any parameter-specific information beyond what's in the schema, such as explaining the 'language' enum choices in more detail. Baseline score of 3 is appropriate as the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Detect prompt injection attacks') and resources ('user-supplied text'), and distinguishes it from siblings by specifying specialization for Japanese RAG applications and listing specific detection types like role impersonation and Base64 obfuscation. It goes beyond a tautology by detailing the detection scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('before passing to an LLM' and 'Specialized for Japanese RAG applications'), which helps differentiate it from siblings like pii_scan or sanitize_content. However, it does not explicitly state when not to use it or name specific alternatives among the siblings, such as validate_rag_input.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pii_guard_keyAInspect

Register an email address and get a free PII Guard API key. Returns a key with 10,000 requests/month (permanent free tier, no credit card required). The key can be used immediately with pii_scan. If the email already has a key, the existing key is returned (new_key: false).

ParametersJSON Schema
NameRequiredDescriptionDefault
emailYesEmail address to register. Used only for key delivery and quota management.
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the free tier details (10,000 requests/month, permanent, no credit card), idempotency ('If the email already has a key, the existing key is returned'), and the immediate usability with 'pii_scan'. It doesn't mention rate limits beyond the monthly quota or error conditions, but covers the essential operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the core purpose in the first sentence and additional details in subsequent sentences. Every sentence adds value: the free tier specifics, immediate usability, and idempotency behavior, with zero redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, no output schema, no annotations), the description is largely complete. It covers the purpose, key behavioral traits, and usage context. However, it lacks details on error handling or the exact return format (though it hints at a 'new_key' field), leaving minor gaps for a mutation-like tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the single parameter 'email' fully documented in the schema. The description adds minimal semantic context beyond the schema ('Email address to register. Used only for key delivery and quota management.'), so it meets the baseline of 3 without significantly enhancing parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Register an email address and get a free PII Guard API key') and the resource involved ('PII Guard API key'). It distinguishes this tool from siblings like 'get_trial_key' by specifying the permanent free tier nature and from 'pii_scan' by focusing on key acquisition rather than scanning.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Register an email address and get a free PII Guard API key') and mentions its relationship to 'pii_scan' ('The key can be used immediately with pii_scan'). However, it doesn't explicitly state when not to use it or compare it to alternatives like 'get_trial_key' beyond the free tier mention.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_trial_keyAInspect

Get a free trial API key for the nexus-api-lab cleanse API. Returns a key with 2,000 requests, valid for 30 days. No credit card or signup required. The key can be used immediately with sanitize_content.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it returns a key with 2,000 requests, valid for 30 days, no credit card/signup required, and immediate usability. It lacks details on rate limits, error handling, or authentication needs, but covers the core functionality adequately for a zero-parameter tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by essential details (requests, validity, requirements, usage) in concise sentences. Every sentence adds value without waste, making it efficient and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (zero parameters, no annotations, no output schema), the description is nearly complete: it explains what the tool does, key features of the returned key, and how to use it. It could improve by specifying the exact return format or error cases, but for a straightforward trial key generator, it provides sufficient context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters with 100% schema description coverage, so the baseline is 4. The description does not need to explain parameters, and it appropriately focuses on the tool's purpose and output without redundant parameter information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get a free trial API key') and resource ('for the nexus-api-lab cleanse API'), distinguishing it from sibling tools like 'sanitize_content' or 'pii_scan' that perform different operations. It provides concrete details about what is obtained rather than just restating the tool name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: to obtain a trial key for the cleanse API, with details about no credit card/signup required and immediate usability with 'sanitize_content'. However, it does not specify when NOT to use it or mention alternatives like 'get_pii_guard_key' for different API keys, which would be helpful for sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pii_scanAInspect

Scan Japanese text for personally identifiable information (PII) and return findings with masked output. Runs on regex + checksum validation + keyword proximity scoring only — no LLM involved, fully deterministic. Detects 10 categories: My Number / マイナンバー (mod-11 checksum), credit card (Luhn-validated), bank account, passport, phone, email, postal address, date of birth, driver's license, and person name. Full-width character normalization included. Returns findings[] with type/score/position, has_high_risk flag for high-severity categories, and masked_text with [NAME][PHONE][CARD] placeholders ready for downstream LLM pipelines. Free — 10,000 requests/month.

ParametersJSON Schema
NameRequiredDescriptionDefault
maskNoIf true, return masked_text with PII replaced by [TYPE] placeholders. Default: true.
textYesThe Japanese text to scan for PII (max 100 KB).
api_keyYesYour PII Guard API key (format: pii_<32hex>). Get a free key with get_pii_guard_key or at https://www.nexus-api-lab.com/pii-guard.html
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It thoroughly describes behavioral traits: deterministic operation (regex + checksum + keyword scoring, no LLM), categories detected (10 specific types), normalization (full-width character), output structure (findings[], has_high_risk, masked_text), and operational details (free with 10,000 requests/month). This provides comprehensive context beyond what a schema would cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose and key features. Every sentence adds value: it explains the scanning method, lists categories, describes normalization, details the return structure, and notes usage limits. There is no redundant or unnecessary information, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a PII scanning tool with no annotations and no output schema, the description is complete enough. It covers the tool's purpose, behavioral traits, detected categories, output format, and usage constraints. This provides sufficient context for an agent to understand and invoke the tool correctly, compensating for the lack of structured output information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters (mask, text, api_key) adequately. The description does not add any specific meaning or usage details about these parameters beyond what the schema provides, such as explaining how 'mask' interacts with the masking process or elaborating on 'text' constraints. Baseline 3 is appropriate as the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Scan Japanese text for personally identifiable information (PII) and return findings with masked output.' It specifies the target language (Japanese), the action (scan for PII), and the output (findings with masked output). It also distinguishes from siblings by focusing on PII scanning rather than injection checking, key retrieval, sanitization, or validation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: scanning Japanese text for PII detection and masking. It implies usage for downstream LLM pipelines with masked placeholders. However, it does not explicitly state when not to use it or name alternatives among siblings, such as using 'sanitize_content' for broader content cleaning or 'validate_rag_input' for input validation, which would require more explicit guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sanitize_contentAInspect

Sanitize external content before passing it to an LLM. Detects and removes prompt injection payloads: hidden HTML instructions, zero-width character attacks, fullwidth Unicode bypasses, semantic overrides ("Ignore all previous instructions"), and encoding evasion. Specialized for Japanese-language content. Returns cleaned_content safe to pass to the model.

ParametersJSON Schema
NameRequiredDescriptionDefault
api_keyYesYour nexus-api-lab API key (Bearer token). Get a free trial key at https://www.nexus-api-lab.com
contentYesThe external text content to sanitize (max 512 KB).
languageNoPrimary language of the content. Use 'ja' for Japanese-language LLM apps. Default: auto.
on_timeoutNoBehavior on timeout. fail_open: return original content unmodified (availability-first). fail_close: return error (security-first). Default: fail_open.
source_urlNoURL where content was fetched from (optional, used for audit trail).
strictnessNoDetection sensitivity. low=regex only (fastest), medium=regex+semantic, high=all stages (most thorough). Default: medium.
content_typeNoFormat of the input content. Default: plaintext.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's function (sanitization against specific attack types) and output (cleaned_content), but it lacks details on behavioral traits such as rate limits, authentication requirements (implied by api_key but not explained), error handling beyond timeout, or performance characteristics. It adds value by specifying the security focus and language specialization but misses operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with two sentences that efficiently convey the tool's purpose, scope, and outcome without waste. Every sentence earns its place: the first explains what the tool does and its targets, and the second specifies the specialization and return value. No redundant or vague phrasing is present.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (security tool with 7 parameters, no annotations, no output schema), the description is reasonably complete. It covers the tool's core function, attack types, language focus, and output, but it could improve by addressing missing behavioral aspects like auth needs or error scenarios. The lack of output schema means the description should ideally explain return values more, though it does state 'Returns cleaned_content'. Overall, it's adequate but has minor gaps for a security-critical tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all 7 parameters thoroughly. The description does not add any parameter-specific semantics beyond what the schema provides (e.g., it mentions 'Japanese-language content' which aligns with the 'language' parameter but doesn't elaborate further). Baseline 3 is appropriate as the schema does the heavy lifting, and the description adds no extra param details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('sanitize', 'detects and removes') and resources ('external content'), explicitly distinguishing it from siblings like 'check_injection' or 'pii_scan' by focusing on cleaning content for LLM safety rather than just detection or PII scanning. It specifies the specialized domain ('Japanese-language content') and the outcome ('Returns cleaned_content safe to pass to the model').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('before passing it to an LLM', 'Specialized for Japanese-language content'), but it does not explicitly mention when not to use it or name alternatives among the sibling tools (e.g., 'check_injection' for detection-only or 'validate_rag_input' for broader validation). The guidance is sufficient for typical use cases but lacks explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

validate_rag_inputAInspect

Validate user input is safe before sending to your RAG pipeline. Combines prompt injection detection and content safety check. Returns safe: true if input can proceed to LLM, or safe: false with block_reason if injection detected. Use this as a gate in your RAG handler.

ParametersJSON Schema
NameRequiredDescriptionDefault
inputYesThe user query to validate before passing to the RAG/LLM pipeline.
api_keyYesYour jpi-guard API key. Get a free trial at https://jpi-guard.nexus-api-lab.workers.dev/
fail_openNoIf true, return safe=true on API timeout (availability-first). Default: false (security-first).
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: the tool performs validation, returns a boolean 'safe' status with a 'block_reason' if unsafe, and is intended as a gatekeeper. However, it lacks details on rate limits, authentication needs beyond the api_key parameter, or what specific safety checks are performed. This is adequate but has gaps for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by additional context in two more sentences. Every sentence earns its place by explaining functionality, return values, and usage guidance without redundancy. It's appropriately sized and efficiently structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (validation with multiple checks), no annotations, and no output schema, the description is fairly complete. It covers purpose, usage, and return values, but lacks details on output structure beyond 'safe: true/false with block_reason', which could be more explicit. For a tool with no output schema, this is good but not fully comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds no additional meaning beyond what the schema provides, such as explaining how 'input' relates to RAG or the implications of 'fail_open'. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but doesn't detract either.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('validate user input is safe') and resources ('RAG pipeline'), distinguishing it from siblings like 'check_injection' or 'sanitize_content' by combining multiple safety checks. It explicitly mentions prompt injection detection and content safety check, making the purpose unambiguous and distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Use this as a gate in your RAG handler.' It also implies alternatives by mentioning it combines multiple checks, suggesting it might be preferred over separate tools like 'check_injection' or 'pii_scan' for a comprehensive safety check. This gives clear context for usage versus siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Discussions

No comments yet. Be the first to start the discussion!

Try in Browser

Your Connectors

Sign in to create a connector for this server.