# PQS - Prompt Quality Score
## Server Configuration
Describes the environment variables required to run the server.
| Name | Required | Description | Default |
|---|---|---|---|
| No arguments | | | |
## Capabilities
Features and capabilities supported by this server
| Capability | Details |
|---|---|
| tools | `{}` |
## Tools
Functions exposed to the LLM to take actions
| Name | Description |
|---|---|
| score_prompt | Score any LLM prompt for quality using PQS (Prompt Quality Score). Returns a grade (A-F), a score out of 40, and a percentile. Free tier, no payment required. Use this before sending a prompt to an LLM to check whether it is worth running. |
| optimize_prompt | Score and optimize any LLM prompt using PQS. Returns the original score, an optimized version of the prompt, and a dimension-by-dimension breakdown across 8 quality dimensions based on the PEEM, RAGAS, G-Eval, and MT-Bench frameworks. Costs $0.025 USDC via x402. Use this when you want to improve a prompt before running it. |
| compare_models | Compare how Claude vs GPT-4o handles the same prompt using PQS. Both models are scored head-to-head by a third model acting as judge. Returns the winner, both scores, and a recommendation on which model to use for this prompt type. Costs $0.50 USDC via x402. |
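Since `score_prompt` is free while `optimize_prompt` costs $0.025 USDC per call, a client can use the free score to gate the paid call. A minimal sketch of that gating logic, assuming a hypothetical result dict carrying the `grade`, `score`, and `percentile` fields the tool description mentions (the actual MCP tool result shape may differ):

```python
def should_optimize(result: dict, threshold: int = 30) -> bool:
    """Decide whether a prompt is worth the paid optimize_prompt call.

    `result` mimics the fields score_prompt is documented to return:
    a grade (A-F), a score out of 40, and a percentile. Prompts that
    already score at or above `threshold` are left as-is.
    """
    return result["score"] < threshold

# Hypothetical score_prompt output for a mediocre prompt.
scored = {"grade": "C", "score": 22, "percentile": 48}
if should_optimize(scored):
    print(f"Grade {scored['grade']} ({scored['score']}/40): worth optimizing")
```

The threshold of 30/40 is an arbitrary illustration; tune it to how aggressively you want to spend on optimization.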
## Prompts
Interactive templates invoked by user choice
| Name | Description |
|---|---|
| No prompts | |
## Resources
Contextual data attached and managed by the client
| Name | Description |
|---|---|
| No resources | |
## MCP directory API
We provide all the information about MCP servers via our MCP API.
```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/OnChainAIIntel/pqs-mcp-server'
```
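The same endpoint can be queried from Python with only the standard library. A sketch, using the URL from the curl example above; the response schema is not documented here, so the fetch helper simply decodes whatever JSON the API returns:

```python
import json
import urllib.request

BASE = "https://glama.ai/api/mcp/v1/servers"

def server_url(owner: str, slug: str) -> str:
    """Build the directory-API URL for a single MCP server."""
    return f"{BASE}/{owner}/{slug}"

def fetch_server(owner: str, slug: str) -> dict:
    """GET the server record and decode it as JSON (performs a network call)."""
    with urllib.request.urlopen(server_url(owner, slug)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Matches the curl example above.
    print(server_url("OnChainAIIntel", "pqs-mcp-server"))
```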
If you have feedback or need assistance with the MCP directory API, please join our Discord server.