HashBuilds Secure Prompts

by jphyqr

Server Configuration

Describes the environment variables required to run the server.

Name: HASHBUILDS_API_URL
Required: No
Description: Override the API base URL. For local development, use http://localhost:3001/api/secure-prompts
Default: https://hashbuilds.com/api/secure-prompts
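A minimal sketch of how such an override is typically read, assuming the server is written in TypeScript and falls back to the documented default:

// Read the base URL from the environment, falling back to the documented default.
// For local development you would set HASHBUILDS_API_URL=http://localhost:3001/api/secure-prompts
const API_BASE_URL =
  process.env.HASHBUILDS_API_URL ?? "https://hashbuilds.com/api/secure-prompts";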

Tools

Functions exposed to the LLM to take actions

register_secure_prompt

Register a prompt with HashBuilds Secure Prompts for security verification and get embed options. This uses AI to scan the prompt for injection attacks, hidden instructions, data exfiltration, jailbreak attempts, and other security issues. Returns multiple display options (full badge, compact link, icon button) with implementation guidance. After registering, ASK THE USER which display option they prefer before implementing. The response includes an implementationGuide field with detailed instructions for styling and placement.
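A sketch of calling this tool from a TypeScript MCP client. The launch command and the argument names (promptText, promptName) are assumptions for illustration; check the tool's actual input schema via tools/list before relying on them.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Hypothetical launch command for the server; replace with however you actually run it.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "secure-prompts-mcp"],
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Argument names are assumptions; the prompt text is a placeholder.
const registration = await client.callTool({
  name: "register_secure_prompt",
  arguments: {
    promptText: "You are a helpful support assistant for Example Inc...",
    promptName: "Support assistant",
  },
});
console.log(registration);

As the description above requires, present the returned display options to the user before embedding anything.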

verify_secure_prompt

Verify an existing secure prompt by its ID. Returns the security scan results, risk level, and verification status.
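Continuing from the client set up in the previous sketch, a verification call might look like the following; the argument name promptId is an assumption, and the ID placeholder stands in for the value returned by register_secure_prompt.

// `client` is the connected MCP client from the register_secure_prompt sketch.
const verification = await client.callTool({
  name: "verify_secure_prompt",
  arguments: { promptId: "<id-returned-by-register_secure_prompt>" },
});
console.log(verification); // scan results, risk level, verification status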

get_embed_code

Generate HTML and React embed code for displaying a secure prompt badge. Use this after registering a prompt to get the code to add to your website.
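Again reusing the connected client, a call might look like this; the argument names (promptId, format) are assumptions about the input schema.

// `client` is the connected MCP client from the earlier sketch.
const embed = await client.callTool({
  name: "get_embed_code",
  arguments: {
    promptId: "<id-returned-by-register_secure_prompt>",
    format: "react", // assumed option; the tool advertises HTML and React output
  },
});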

audit_prompts

Analyze a list of prompts found in a codebase and categorize them as user-facing (needs a badge) or internal (audit only). This helps users who already have prompts in their codebase understand which ones should be registered with secure badges and which are internal-only.

HOW TO USE:

  1. First, search the codebase for prompts using patterns like:

    • Files matching: public/PROMPT_*.txt, */prompt.ts

    • Code patterns: 'You are a', 'systemPrompt', 'SYSTEM_PROMPT', role: 'system'

  2. Extract the prompt text and file location for each found prompt

  3. Call this tool with the prompts array (see the sketch after this list)

  4. Present the audit results to the user, showing:

    • User-facing prompts that should get security badges

    • Internal prompts that are safe but should be audited

    • Prompts needing manual review

  5. Ask the user which prompts they want to register for badges

  6. Use register_secure_prompt for each selected prompt
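
A sketch of step 3, reusing the connected client from the earlier example. The shape of each entry (text, filePath) is an assumption, and the file names are placeholders that match the search patterns in step 1.

// Prompts gathered in steps 1-2; field names are assumptions for illustration.
const prompts = [
  { text: "You are a helpful support assistant...", filePath: "public/PROMPT_support.txt" },
  { text: "Summarize the user's ticket in two sentences...", filePath: "src/lib/prompt.ts" },
];

const audit = await client.callTool({
  name: "audit_prompts",
  arguments: { prompts },
});
// Present the audit results to the user, then register the selected prompts
// with register_secure_prompt (steps 4-6 above).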

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/jphyqr/secure-prompts-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.