
Enkrypt AI MCP Server

Official, by enkryptai

add_guardrails_policy

Add a guardrails policy to enforce AI safety measures by configuring detectors for injection attacks, PII, NSFW content, toxicity, bias, and more. Define custom settings for each detector to ensure compliance with specified security and content guidelines.

Instructions

Add a new guardrails policy.

Args:

policy_name: The name of the policy to add.

policy_description: A short description of the policy.

detectors: Dictionary of detector configurations. Each key should be the name of a detector, and the value should be a dictionary of settings for that detector. Available detectors and their configurations are as follows:

- injection_attack: Configured using the InjectionAttackDetector model. Example: {"enabled": True}
- pii: Configured using the PiiDetector model. Example: {"enabled": False, "entities": ["email", "phone"]}
- nsfw: Configured using the NsfwDetector model. Example: {"enabled": True}
- toxicity: Configured using the ToxicityDetector model. Example: {"enabled": True}
- topic: Configured using the TopicDetector model. Example: {"enabled": True, "topic": ["politics", "religion"]}
- keyword: Configured using the KeywordDetector model. Example: {"enabled": True, "banned_keywords": ["banned_word1", "banned_word2"]}
- policy_violation: Configured using the PolicyViolationDetector model. Example: {"enabled": True, "need_explanation": True, "policy_text": "Your policy text here"}
- bias: Configured using the BiasDetector model. Example: {"enabled": True}
- copyright_ip: Configured using the CopyrightIpDetector model. Example: {"enabled": True}
- system_prompt: Configured using the SystemPromptDetector model. Example: {"enabled": True, "index": "system_prompt_index"}

Example usage: { "injection_attack": {"enabled": True}, "nsfw": {"enabled": True} }

Returns: A dictionary containing the response message and policy details.
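For reference, a complete set of arguments for this tool could look like the sketch below. The policy name and description are illustrative placeholders; the detector settings reuse the examples documented above.

    # Illustrative arguments for add_guardrails_policy. The policy name and
    # description are placeholders; detector settings follow the examples above.
    arguments = {
        "policy_name": "baseline-safety-policy",
        "policy_description": "Blocks prompt injection, NSFW content, and leaks of email/phone PII.",
        "detectors": {
            "injection_attack": {"enabled": True},
            "nsfw": {"enabled": True},
            "pii": {"enabled": True, "entities": ["email", "phone"]},
        },
    }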

Input Schema

Name                  Required    Description    Default
detectors             Yes
policy_description    Yes
policy_name           Yes

Input Schema (JSON Schema)

{ "properties": { "detectors": { "additionalProperties": true, "title": "Detectors", "type": "object" }, "policy_description": { "title": "Policy Description", "type": "string" }, "policy_name": { "title": "Policy Name", "type": "string" } }, "required": [ "policy_name", "policy_description", "detectors" ], "title": "add_guardrails_policyArguments", "type": "object" }

Implementation Reference

  • The core handler function implementing the 'add_guardrails_policy' tool. It takes policy_name, policy_description, and a detectors config, calls guardrails_client.add_policy(), and returns the result as a dictionary. It is registered as an MCP tool via the @mcp.tool() decorator, and its docstring documents the input schema, including the available detectors.
    @mcp.tool()
    def add_guardrails_policy(policy_name: str, policy_description: str, detectors: Dict[str, Any]) -> Dict[str, Any]:
        """
        Add a new guardrails policy.

        Args:
            policy_name: The name of the policy to add.
            detectors: detectors_config: Dictionary of detector configurations. Each key should be
                the name of a detector, and the value should be a dictionary of settings for that
                detector. Available detectors and their configurations are as follows:
                - injection_attack: Configured using InjectionAttackDetector model. Example: {"enabled": True}
                - pii: Configured using PiiDetector model. Example: {"enabled": False, "entities": ["email", "phone"]}
                - nsfw: Configured using NsfwDetector model. Example: {"enabled": True}
                - toxicity: Configured using ToxicityDetector model. Example: {"enabled": True}
                - topic: Configured using TopicDetector model. Example: {"enabled": True, "topic": ["politics", "religion"]}
                - keyword: Configured using KeywordDetector model. Example: {"enabled": True, "banned_keywords": ["banned_word1", "banned_word2"]}
                - policy_violation: Configured using PolicyViolationDetector model. Example: {"enabled": True, "need_explanation": True, "policy_text": "Your policy text here"}
                - bias: Configured using BiasDetector model. Example: {"enabled": True}
                - copyright_ip: Configured using CopyrightIpDetector model. Example: {"enabled": True}
                - system_prompt: Configured using SystemPromptDetector model. Example: {"enabled": True, "index": "system_prompt_index"}

                Example usage:
                {
                    "injection_attack": {"enabled": True},
                    "nsfw": {"enabled": True}
                }

        Returns:
            A dictionary containing the response message and policy details.
        """
        # Create a policy with a dictionary
        add_policy_response = guardrails_client.add_policy(
            policy_name=policy_name,
            config=detectors,
            description=policy_description
        )

        # Return policy details as a dictionary
        return add_policy_response.to_dict()
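To see the handler's shape without a live Enkrypt AI backend, the call can be exercised against a stub client. This is a minimal sketch, assuming only what the handler itself shows: add_policy() takes policy_name, config, and description, and the response exposes to_dict(); the stub class and its echoed fields are hypothetical.

    # Sketch: exercising the handler logic with a stub in place of the real
    # guardrails_client. StubGuardrailsClient and its response fields are
    # hypothetical; only the add_policy(...)/to_dict() shape comes from the code above.
    from types import SimpleNamespace

    class StubGuardrailsClient:
        def add_policy(self, policy_name, config, description):
            # Echo the inputs back wrapped in an object exposing to_dict(),
            # mirroring how the handler consumes the real response.
            return SimpleNamespace(to_dict=lambda: {
                "message": "Policy created",
                "policy_name": policy_name,
                "description": description,
                "detectors": config,
            })

    guardrails_client = StubGuardrailsClient()
    response = guardrails_client.add_policy(
        policy_name="baseline-safety-policy",
        config={"injection_attack": {"enabled": True}},
        description="Illustrative policy",
    )
    print(response.to_dict())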
  • The @mcp.tool() decorator registers the add_guardrails_policy function as an MCP tool.
    @mcp.tool()
  • Reference to the tool in the docstring of the 'mitigation_guardrails_policy' tool, instructing callers to use add_guardrails_policy after obtaining the mitigation configuration.
    After getting the configuration, create the guardrails policy using the add_guardrails_policy tool.
    """
    config = {
        "redteam_summary": redteam_results_summary
    }

    # Create the guardrails policy using the provided configuration
    mitigation_guardrails_policy_response = redteam_client.risk_mitigation_guardrails_policy(config=config)

    # Return the response as a dictionary
    return mitigation_guardrails_policy_response.to_dict()
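Putting the two tools together, the intended flow might look roughly like the sketch below. Here call_tool() is a hypothetical stand-in for however your MCP client invokes tools by name, and the shape of the mitigation response (a "detectors" key) is an assumption; the tool names and the redteam_results_summary parameter come from the references above.

    # Sketch of the two-step workflow: derive a guardrails configuration from
    # red-team results, then persist it as a policy. call_tool() is hypothetical.
    def call_tool(name, arguments):
        """Hypothetical stand-in for an MCP client's tool invocation."""
        return {}

    redteam_summary = {}  # placeholder for a completed red-team results summary

    # Step 1: ask the mitigation tool for a recommended guardrails configuration.
    mitigation = call_tool(
        "mitigation_guardrails_policy",
        {"redteam_results_summary": redteam_summary},
    )

    # Step 2: persist the recommended configuration as an enforceable policy.
    result = call_tool(
        "add_guardrails_policy",
        {
            "policy_name": "post-redteam-policy",
            "policy_description": "Policy derived from red-team findings.",
            "detectors": mitigation.get("detectors", {}),  # assumed response shape
        },
    )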


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/enkryptai/enkryptai-mcp-server'
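The same endpoint can also be queried from Python, for example with the requests library:

    # Equivalent of the curl command above, using the requests library.
    import requests

    response = requests.get(
        "https://glama.ai/api/mcp/v1/servers/enkryptai/enkryptai-mcp-server"
    )
    response.raise_for_status()
    print(response.json())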

If you have feedback or need assistance with the MCP directory API, please join our Discord server.