Server Configuration
Describes the environment variables required to run the server.
Name | Required | Description | Default |
---|---|---|---|
ENKRYPTAI_API_KEY | Yes | Your Enkrypt AI API key from https://app.enkryptai.com/settings/api | |
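A minimal pre-flight sketch for failing fast when the key is missing; this check is illustrative and not part of the server itself.

```python
import os

# The server requires ENKRYPTAI_API_KEY; create one at
# https://app.enkryptai.com/settings/api if this raises.
if not os.environ.get("ENKRYPTAI_API_KEY"):
    raise RuntimeError("ENKRYPTAI_API_KEY is not set")
```
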
Schema
Prompts
Interactive templates invoked by user choice
No prompts.
Resources
Contextual data attached and managed by the client
No resources.
Tools
Functions exposed to the LLM to take actions. Each tool below is listed with its arguments, expected configuration, and return value.

**guardrails_detect**

Detect sensitive content in text using Guardrails. Args: `ctx` (the request context object), `text` (the text to scan for sensitive content), and `detectors_config` (a dictionary of detector configurations, where each key is a detector name and each value is that detector's settings). Returns a dictionary containing the detection results with safety assessments. An illustrative `detectors_config` sketch follows.

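The detector names and setting keys below are assumptions for illustration only; consult the Enkrypt AI documentation for the detectors your account actually exposes.

```python
# Assumed detector names ("toxicity", "pii") and settings; verify against
# the Enkrypt AI detector list before use.
detectors_config = {
    "toxicity": {"enabled": True},
    "pii": {"enabled": True, "entities": ["email", "phone"]},
}
```
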
**get_guardrails_policy**

Get all guardrails policies.

**retrieve_policy_configuration**

Retrieve and print the policy configuration for a given policy name. Args: `policy_name` (the name of the policy to retrieve). Returns a dictionary containing the policy configuration.

**add_guardrails_policy**

Add a new guardrails policy. Args: `policy_name` (the name of the policy to add) and `detectors` (a dictionary of detector configurations, where each key is a detector name and each value is that detector's settings). Returns a dictionary containing the response message and policy details.

**update_guardrails_policy**

Update an existing guardrails policy with a new configuration. Args: `policy_name` (the name of the policy to update) and `detectors` (a dictionary of detector configurations, where each key is a detector name and each value is that detector's settings). Returns a dictionary containing the response message and updated policy details.

**remove_guardrails_policy**

Remove an existing guardrails policy. Args: `policy_name` (the name of the policy to remove). Returns a dictionary containing the response message and details of the deleted policy.

**use_policy_to_detect**

Use a policy to detect violations in the provided text. Args: `policy_name` (the policy to use for detection) and `text` (the text to check for policy violations). Returns a dictionary containing the response message and details of the detection. A sketch of the full add-then-detect flow follows.

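This sketch drives the two tools above through the MCP Python SDK, assuming the server runs over stdio. The launch command, policy name, and detector settings are placeholders, not values this server prescribes.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical launch command; substitute however you start this server.
params = StdioServerParameters(command="python", args=["server.py"])


async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Create a policy, then check a piece of text against it.
            await session.call_tool(
                "add_guardrails_policy",
                {"policy_name": "pii-guard", "detectors": {"pii": {"enabled": True}}},
            )
            result = await session.call_tool(
                "use_policy_to_detect",
                {"policy_name": "pii-guard", "text": "Reach me at jane@example.com"},
            )
            print(result.content)


asyncio.run(main())
```
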
**add_model**

Add a new model using the provided configuration. Ask the user for all the details before passing the config to the tool. Args: `config`, a dictionary containing the model configuration details, structured as follows:

```python
{
    "model_saved_name": "example_model_name",  # The name under which the model is saved.
    "model_version": "v1",  # The version of the model.
    "testing_for": "foundationModels",  # The purpose for which the model is being tested (always foundationModels).
    "model_name": "example_model",  # The name of the model (e.g., gpt-4o, claude-3-5-sonnet).
    "model_config": {
        "model_provider": "example_provider",  # The provider of the model (e.g., openai, anthropic).
        "endpoint_url": "https://api.example.com/model",  # Required only if the provider type is custom; otherwise omit this key.
        "apikey": "example_api_key",  # The API key used to access the model.
        "system_prompt": "Some system prompt",  # Include only if the user specifies one; otherwise leave blank.
        "input_modalities": ["text"],  # The type of data the model accepts (possible values: text, image, audio); keep as text unless otherwise specified.
        "output_modalities": ["text"],  # Keep as text only. Other output modalities are on the roadmap; contact hello@enkryptai.com for early access.
    },
}
```

Returns a dictionary containing the response message and details of the added model. A filled-in example follows.

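As a concrete instance of the template above, a config for an OpenAI-hosted model might look like this; the saved name and key are placeholders.

```python
config = {
    "model_saved_name": "gpt-4o-prod",  # placeholder saved name
    "model_version": "v1",
    "testing_for": "foundationModels",
    "model_name": "gpt-4o",
    "model_config": {
        "model_provider": "openai",
        # endpoint_url is omitted because the provider is not custom.
        "apikey": "sk-...",  # placeholder; never hard-code real keys
        "system_prompt": "",
        "input_modalities": ["text"],
        "output_modalities": ["text"],
    },
}
```
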
**add_agent**

Add a new agent using the provided configuration. NOTE: do not assume any fields; ask the user for all the mandatory details before passing the config to the tool. Args: `config`, a dictionary containing the agent configuration details, structured as follows:

```python
{
    "model_saved_name": "example_agent_name",  # The name under which the agent is saved.
    "model_version": "v1",  # The version of the agent.
    "testing_for": "agents",  # The purpose for which the agent is being tested (always agents).
    "model_name": "",  # Always blank.
    "model_config": {
        "model_provider": "custom",  # Always custom.
        "endpoint_url": "",  # The endpoint URL of the agent (mandatory).
        "input_modalities": ["text"],  # Always text.
        "output_modalities": ["text"],  # Always text.
        "custom_headers": [  # A list of custom headers to be sent to the agent (mandatory).
            {"key": "Content-Type", "value": "application/json"},
            # ... more headers as needed
        ],
        "custom_response_format": "",  # Ask the user for the agent's response format, in jq format (mandatory).
        "custom_response_content_type": "json",  # The content type of the agent's response (always json) (mandatory).
        "custom_payload": "",  # Ask the user for the payload to be sent to the agent; always keep '{prompt}' as the placeholder for the prompt (mandatory).
        "tools": [  # Ask the user for the list of tools used by the agent (mandatory).
            {"name": "name of the tool", "description": "description of the tool"},
            # ... more tools as needed
        ],
    },
}
```

Returns a dictionary containing the response message and details of the added agent.

**add_model_from_url**

Add a new model from a chatbot URL. Ask the user for the URL before passing the config to the tool. Args: `config`, a dictionary containing the URL model configuration details, structured as follows:

```python
{
    "model_saved_name": "example_model_name",  # The name under which the model is saved.
    "model_version": "v1",  # The version of the model.
    "testing_for": "URL",  # The purpose for which the model is being tested (always URL).
    "model_name": "example_url",  # The URL of the chatbot site provided by the user.
    "model_config": {
        "model_provider": "url",  # Always fixed to 'url'.
        "endpoint_url": "example_url",  # Same as model_name.
        "apikey": "none",  # The API key to access the model.
        "input_modalities": ["text"],  # Always fixed to ['text'].
        "output_modalities": ["text"],  # Always fixed to ['text'].
    },
}
```

Returns a dictionary containing the response message and details of the added model.

**list_models**

List all models and print details of the first model. Returns a dictionary containing the list of models.

**get_model_details**

Retrieve details of a specific model using its saved name. Args: `model_saved_name` (the name under which the model is saved) and `model_version` (the version of the model). Returns a dictionary containing the details of the model.

**modify_model_config**

Modify the model configuration and update the model. Args: `new_model_config` (the model configuration to be modified) and `test_model_saved_name` (the saved name of the model to be tested). The configuration is structured as follows:

```python
{
    "model_saved_name": "example_model_name",  # The name under which the model is saved.
    "testing_for": "LLM",  # The purpose for which the model is being tested (always LLM).
    "model_name": "example_model",  # The name of the model (e.g., gpt-4o, claude-3-5-sonnet).
    "modality": "text",  # The type of data the model works with (e.g., text, image).
    "model_config": {
        "model_version": "1.0",  # The version of the model.
        "model_provider": "example_provider",  # The provider of the model (e.g., openai, anthropic).
        "endpoint_url": "https://api.example.com/model",  # The endpoint URL for the model.
        "apikey": "example_api_key",  # The API key to access the model.
    },
}
```

Returns a dictionary containing the response message and details of the modified model.

**remove_model**

Remove a model. Args: `test_model_saved_name` (the saved name of the model to be removed). Returns a dictionary containing the response message and details of the deleted model.

**add_redteam_task**

Add a redteam task using a saved model. Args: `model_saved_name` (the saved name of the model to use), `model_version` (the version of the model to use), and `redteam_model_config` (the configuration for the redteam task). IMPORTANT: before setting the test configuration, ask the user which of the five available tests to run and the sample percentage for each. Example configuration:

```python
sample_redteam_model_config = {
    "test_name": redteam_test_name,
    "dataset_name": "standard",
    "redteam_test_configurations": {
        # These are the only five tests available; include only the ones the user picks.
        "bias_test": {
            "sample_percentage": 2,
            "attack_methods": {"basic": ["basic"]},
        },
        "cbrn_test": {
            "sample_percentage": 2,
            "attack_methods": {"basic": ["basic"]},
        },
        "insecure_code_test": {
            "sample_percentage": 2,
            "attack_methods": {"basic": ["basic"]},
        },
        "toxicity_test": {
            "sample_percentage": 2,
            "attack_methods": {"basic": ["basic"]},
        },
        "harmful_test": {
            "sample_percentage": 2,
            "attack_methods": {"basic": ["basic"]},
        },
    },
}
```

Returns a dictionary containing the response message and details of the added redteam task. A minimal filled-in call follows.

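For example, if the user asks for only the harmful and toxicity tests at 10% sampling, the config reduces to the following; the test name is a placeholder.

```python
redteam_model_config = {
    "test_name": "gpt-4o-harm-check",  # placeholder test name
    "dataset_name": "standard",
    "redteam_test_configurations": {
        "harmful_test": {"sample_percentage": 10, "attack_methods": {"basic": ["basic"]}},
        "toxicity_test": {"sample_percentage": 10, "attack_methods": {"basic": ["basic"]}},
    },
}
```
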
**add_custom_redteam_task**

Add a custom use-case based redteam task using a saved model. NOTE: not compatible with audio and image modalities. Args: `model_saved_name` (the saved name of the model to use), `model_version` (the version of the model to use), and `custom_redteam_model_config` (the configuration for the custom redteam task). Ask the user for all the dataset details (system_description, policy_description, and tools); do not fill them in on your own. Also ask which tests to run and the sample percentage for each; the custom test is mandatory, the other five are optional. Example configuration:

```python
sample_redteam_model_config = {
    "test_name": redteam_test_name,
    "dataset_configuration": {
        "system_description": "",  # The system description of the model for the custom use-case (mandatory).
        "policy_description": "",  # The policy the model should follow for the custom use-case (optional).
        "tools": [
            {
                "name": "web_search",  # The name of a tool used for the custom use-case (optional).
                "description": "The tool web search is used to search the web for information related to finance.",  # The tool's description (optional).
            }
        ],
        # Defaults for the custom use-case; change them only if the user asks for a different test size.
        "max_prompts": 500,  # The maximum number of prompts.
        "scenarios": 2,  # The number of scenarios.
        "categories": 2,  # The number of categories.
        "depth": 1,  # The depth of the custom use-case.
    },
    "redteam_test_configurations": {
        "bias_test": {
            "sample_percentage": 2,
            "attack_methods": {"basic": ["basic"]},
        },
        "cbrn_test": {
            "sample_percentage": 2,
            "attack_methods": {"basic": ["basic"]},
        },
        "insecure_code_test": {
            "sample_percentage": 2,
            "attack_methods": {"basic": ["basic"]},
        },
        "toxicity_test": {
            "sample_percentage": 2,
            "attack_methods": {"basic": ["basic"]},
        },
        "harmful_test": {
            "sample_percentage": 2,
            "attack_methods": {"basic": ["basic"]},
        },
        "custom_test": {
            "sample_percentage": 100,  # Keep at 100 unless the user asks for a different sample percentage.
            "attack_methods": {"basic": ["basic"]},
        },
    },
}
```

Returns a dictionary containing the response message and details of the added redteam task. See the filled-in sketch after this entry.

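A filled-in sketch for a hypothetical banking assistant, running only the mandatory custom test; every name and description here is an invented placeholder of the kind a user would supply.

```python
custom_redteam_model_config = {
    "test_name": "bank-bot-custom-redteam",  # placeholder test name
    "dataset_configuration": {
        "system_description": "A retail banking assistant that answers account questions.",
        "policy_description": "Never reveal account numbers or give investment advice.",
        "tools": [
            {
                "name": "balance_lookup",
                "description": "Fetches the balance of the authenticated user's account.",
            }
        ],
        "max_prompts": 500,
        "scenarios": 2,
        "categories": 2,
        "depth": 1,
    },
    "redteam_test_configurations": {
        "custom_test": {"sample_percentage": 100, "attack_methods": {"basic": ["basic"]}},
    },
}
```
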
**add_agent_redteam_task**

Add a redteam task using a saved agent. Args: `agent_saved_name` (the saved name of the agent to use), `agent_version` (the version of the agent to use), and `agent_redteam_model_config` (the configuration for the redteam task). ASK THE USER FOR ALL THESE DETAILS: the system description (mandatory, exactly as the user input it) and the policy description (optional); tools can be taken from the agent config, otherwise ask the user. Also ask which tests to run and the sample percentage for each. Example configuration:

```python
sample_redteam_model_config = {
    "test_name": redteam_test_name,
    "dataset_configuration": {
        "system_description": "Ask user for this",  # The system description of the agent for the custom use-case (mandatory).
        "policy_description": "Ask user for this",  # The policy the agent should follow for the custom use-case (optional).
        "tools": [
            {
                "name": "ask user for this",  # The name of a tool used for the custom use-case (mandatory).
                "description": "ask user for this",  # The tool's description (mandatory).
            }
        ],
        # Defaults for the custom use-case; change them only if the user asks for a different test size.
        "max_prompts": 500,  # The maximum number of prompts.
        "scenarios": 2,  # The number of scenarios.
        "categories": 2,  # The number of categories.
        "depth": 1,  # The depth of the custom use-case.
    },
    "redteam_test_configurations": {
        # Each of the eight agent tests takes the same shape; include only the ones the user picks.
        "alignment_and_governance_test": {
            "sample_percentage": 2,
            "attack_methods": {"basic": ["basic"], "advanced": {"static": ["encoding"]}},
        },
        "input_and_content_integrity_test": {
            "sample_percentage": 2,
            "attack_methods": {"basic": ["basic"], "advanced": {"static": ["encoding"]}},
        },
        "infrastructure_and_integration_test": {
            "sample_percentage": 2,
            "attack_methods": {"basic": ["basic"], "advanced": {"static": ["encoding"]}},
        },
        "security_and_privacy_test": {
            "sample_percentage": 2,
            "attack_methods": {"basic": ["basic"], "advanced": {"static": ["encoding"]}},
        },
        "human_factors_and_societal_impact_test": {
            "sample_percentage": 2,
            "attack_methods": {"basic": ["basic"], "advanced": {"static": ["encoding"]}},
        },
        "access_control_test": {
            "sample_percentage": 2,
            "attack_methods": {"basic": ["basic"], "advanced": {"static": ["encoding"]}},
        },
        "physical_and_actuation_safety_test": {
            "sample_percentage": 2,
            "attack_methods": {"basic": ["basic"], "advanced": {"static": ["encoding"]}},
        },
        "reliability_and_monitoring_test": {
            "sample_percentage": 2,
            "attack_methods": {"basic": ["basic"], "advanced": {"static": ["encoding"]}},
        },
    },
}
```

Returns a dictionary containing the response message and details of the added redteam task.

**get_redteam_task_status**

Get the status of a redteam task. Args: `test_name` (the name of the redteam test). Returns a dictionary containing the status of the redteam task. A polling sketch follows.

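A hedged polling sketch that reuses the MCP client session from the earlier example. The "Finished" status string is the one shown for list_redteam_tasks below; the result parsing and any other status values are assumptions.

```python
import asyncio

from mcp import ClientSession


async def wait_until_finished(session: ClientSession, test_name: str, poll_seconds: float = 60.0):
    """Poll get_redteam_task_status until the task reports "Finished"."""
    while True:
        result = await session.call_tool(
            "get_redteam_task_status", {"test_name": test_name}
        )
        # Assumption: the status string appears in a text content block.
        if any("Finished" in getattr(block, "text", "") for block in result.content):
            return result
        await asyncio.sleep(poll_seconds)
```
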
**get_redteam_task_details**

Retrieve details of a redteam task. Args: `test_name` (the name of the redteam test). Returns a dictionary containing the details of the redteam task, including the system prompt of the target model used.

**list_redteam_tasks**

List all redteam tasks, optionally filtered by status. Args: `status` (the status to filter tasks by, e.g., "Finished"; if None, list all tasks). Returns a dictionary containing the list of redteam tasks.

**get_redteam_task_results_summary**

Get the results summary of a redteam task. Args: `test_name` (the name of the redteam test). Returns a dictionary containing the results summary of the redteam task.

**harden_system_prompt**

Harden a system prompt using the redteam results summary. Args: `redteam_results_summary` (a dictionary containing only the top 20 categories of the redteam results summary by success percent, retrieved with the get_redteam_task_results_summary tool; if the category array has more than 20 items, pass only the 20 with the highest success percent) and `system_prompt` (the system prompt to be hardened, retrieved with the get_redteam_task_details tool). The summary format is:

```python
{
    "category": [
        {
            "Bias": {
                "total": 6,
                "test_type": "adv_info_test",
                "success(%)": 66.67,
            }
        },
        # ...
    ]
}
```

Returns a dictionary containing the response message and details of the hardened system prompt. A helper for trimming the summary to the top 20 categories follows.

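A minimal sketch of trimming a full summary to the required top-20 shape, assuming each list entry wraps exactly one category as in the format above. The same trimmed summary also feeds mitigation_guardrails_policy below.

```python
def top_categories(summary: dict, limit: int = 20) -> dict:
    """Keep only the `limit` categories with the highest "success(%)"."""

    def success_pct(entry: dict) -> float:
        # Each entry wraps one category, e.g. {"Bias": {"success(%)": 66.67, ...}}.
        (details,) = entry.values()
        return details.get("success(%)", 0.0)

    ranked = sorted(summary.get("category", []), key=success_pct, reverse=True)
    return {"category": ranked[:limit]}
```
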
**mitigation_guardrails_policy**

Create a guardrails policy from the redteam results summary. Args: `redteam_results_summary` (a dictionary containing only the top 20 categories of the redteam results summary by success percent, in the same format as for harden_system_prompt above; if the category array has more than 20 items, pass only the 20 with the highest success percent; retrieve it with the get_redteam_task_results_summary tool). Returns a dictionary containing the response message and details of the created guardrails policy. After getting the configuration, create the guardrails policy using the add_guardrails_policy tool.

**add_deployment**

Add a new deployment using the provided configuration. Args: `deployment_config` (a dictionary containing the deployment configuration details). Always ask the user whether they want to block any of the policy's detectors for both input and output; if you don't know which detectors the policy contains, use the get_guardrails_policy tool. Returns a dictionary containing the response message and details of the added deployment.

**get_deployment_details**

Retrieve details of a specific deployment using its name. Args: `deployment_name` (the name of the deployment to retrieve details for). Returns a dictionary containing the details of the deployment.

**list_deployments**

List all deployments and print details of the first deployment. Returns a dictionary containing the list of deployments.

**modify_deployment_config**

Modify the deployment configuration and update the deployment. Args: `deployment_name` (the name of the deployment to modify) and `new_deployment_config` (the new deployment configuration). Returns a dictionary containing the response message and details of the modified deployment.

**remove_deployment**

Remove an existing deployment. Args: `deployment_name` (the name of the deployment to remove). Returns a dictionary containing the response message and details of the deleted deployment.