Anti-Bullshit MCP Server
Server Quality Checklist
Latest release: v1.0.0
- Disambiguation 5/5
Each tool has a clearly distinct purpose: analyze_claim focuses on epistemological frameworks and validation steps, check_manipulation targets manipulation tactics across cultures, and validate_sources deals with source and evidence validation. There is no overlap in functionality, making tool selection straightforward.
- Naming Consistency 5/5
All tool names follow a consistent verb_noun pattern (analyze_claim, check_manipulation, validate_sources) with clear, descriptive verbs and nouns. The naming is uniform and predictable throughout the set.
- Tool Count 3/5
With only 3 tools, the server feels thin for its broad domain of anti-bullshit analysis, which could include more operations like fact-checking, bias detection, or reporting. While each tool is distinct, the count is borderline low for comprehensive coverage.
- Completeness 3/5
The tools cover analysis, manipulation checking, and source validation, but there are notable gaps such as lack of fact-checking, bias assessment, or result summarization tools. Agents might encounter dead ends when needing broader anti-bullshit workflows.
Tool definition quality averages 2.7/5 across 3 of 3 tools scored.
See the Tool Scores section below for per-tool breakdowns.
- No issues in the last 6 months
- No commit activity data available
- No stable releases found
- No critical vulnerability alerts
- No high-severity vulnerability alerts
- No code scanning findings
- CI status not available
This repository is licensed under MIT License.
This repository includes a README.md file.
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
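A minimal sketch of what that file might contain, based on Glama's published example (verify the schema URL and field names against the current docs before copying):

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": ["bmorphism"]
}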
This server has been verified by its author.
Add related servers to improve discoverability.
How do I sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%), yielding a per-tool Tool Definition Quality Score (TDQS). The server-level definition quality score is 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the whole score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
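To make the arithmetic concrete, here is a minimal sketch of that weighting in TypeScript. The weights are the ones stated above; the type and function names are illustrative, and any rounding Glama applies is not modeled.

type DimensionScores = {
  purpose: number;
  usage: number;
  behavior: number;
  parameters: number;
  conciseness: number;
  completeness: number;
};

// Per-tool Tool Definition Quality Score: weighted sum of the six dimensions.
function tdqs(d: DimensionScores): number {
  return 0.25 * d.purpose + 0.20 * d.usage + 0.20 * d.behavior
    + 0.15 * d.parameters + 0.10 * d.conciseness + 0.10 * d.completeness;
}

function overallScore(toolTdqs: number[], coherenceScores: number[]): number {
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  // Server-level definition quality: 60% mean TDQS + 40% minimum TDQS.
  const definitionQuality = 0.6 * mean(toolTdqs) + 0.4 * Math.min(...toolTdqs);
  // Coherence: four dimensions, equally weighted.
  const coherence = mean(coherenceScores);
  // Overall: 70% definition quality, 30% coherence.
  return 0.7 * definitionQuality + 0.3 * coherence;
}

Applied to the dimension scores listed below, per-tool TDQS works out to roughly 2.6, 2.6, and 2.95 (mean 2.7, matching the average above); with coherence scores of 5, 5, 3, 3, the overall score lands near 3.1, a B under the tiers above.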
Tool Scores
check_manipulation
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions checking for manipulation tactics across cultural contexts, but doesn't describe what the tool actually does behaviorally—such as whether it returns a score, categories of manipulation, examples, or any limitations like accuracy, rate limits, or authentication needs. This leaves significant gaps in understanding the tool's operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's function. It's appropriately sized and front-loaded with the core purpose, though it could be slightly more structured by explicitly mentioning the input parameter or output expectations to improve clarity without adding unnecessary length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (analyzing manipulation across cultures) and lack of annotations or output schema, the description is incomplete. It doesn't explain what the tool returns, how results are formatted, any limitations in cultural coverage, or behavioral traits. This makes it inadequate for an agent to fully understand and use the tool effectively, especially without structured output information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'text' parameter clearly documented as 'Text to analyze for manipulation'. The description adds no additional meaning beyond this, as it doesn't elaborate on parameter usage, format expectations, or cultural context implications. With high schema coverage, the baseline score of 3 is appropriate, as the schema adequately handles parameter semantics without description enhancement.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 3/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool's purpose as checking for manipulation tactics, which is clear but vague. It specifies 'across different cultural contexts' which adds some nuance, but doesn't clearly distinguish this from sibling tools like 'analyze_claim' or 'validate_sources' in terms of what specific manipulation tactics it detects or how it differs from general claim analysis or source validation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like 'analyze_claim' or 'validate_sources' is provided. The description implies usage for analyzing text for manipulation in cultural contexts, but lacks any context on prerequisites, exclusions, or comparative advantages over sibling tools, leaving the agent to infer when this specific tool is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
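To illustrate what the critiques above ask for, here is a hypothetical rework of check_manipulation as it might appear in a tools/list response. The annotation hints are standard MCP tool annotations; the description text and behavioral claims are invented for this example, not taken from the actual server.

// Hypothetical rework of check_manipulation; illustrative wording only.
const checkManipulation = {
  name: "check_manipulation",
  description:
    "Check text for manipulation tactics (e.g., emotional appeals, false urgency, " +
    "appeals to authority) across cultural contexts. Read-only: returns a list of " +
    "detected tactics with brief explanations; it does not evaluate claims or " +
    "verify sources. Use analyze_claim for epistemological analysis and " +
    "validate_sources to assess evidence quality.",
  inputSchema: {
    type: "object",
    properties: {
      text: { type: "string", description: "Text to analyze for manipulation" },
    },
    required: ["text"],
  },
  annotations: {
    readOnlyHint: true,   // no side effects
    idempotentHint: true, // same input yields the same analysis
    openWorldHint: false, // operates only on the provided text
  },
};

A definition like this would directly answer the Behavior, Usage Guidelines, and Completeness notes above without meaningfully lengthening the description.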
validate_sources
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden of behavioral disclosure but provides minimal insight. It doesn't disclose whether this is a read-only operation, whether it modifies data, authentication needs, rate limits, or output format. The phrase 'configured framework' hints at external configuration but lacks details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. It's front-loaded with the core action, though it could be more structured by elaborating on key aspects like output or context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It doesn't explain what validation results look like, error conditions, or behavioral traits. For a tool with two parameters and undefined output, more context is needed to guide effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents parameters. The description adds no additional meaning beyond implying validation uses a framework, which is already covered by the 'framework' parameter's enum. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 3/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the action ('validate') and target ('sources and evidence'), but lacks specificity about what validation entails or how it differs from sibling tools like 'analyze_claim' or 'check_manipulation'. It's vague about the scope and mechanism of validation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description mentions 'configured framework' but doesn't explain what contexts or scenarios warrant its use, nor does it reference sibling tools for comparison or exclusion.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
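One way to address the Parameters note above: rather than a bare enum, give each allowed framework value its own description in the input schema. Sketched below as a JSON Schema fragment in TypeScript; the framework names are placeholders, not the server's actual enum values.

// Hypothetical 'framework' parameter using anyOf-of-const so each value
// carries its own meaning. Value names are placeholders.
const frameworkParam = {
  description: "Validation framework to apply to the sources",
  anyOf: [
    { const: "empirical", description: "Weigh observable, testable evidence" },
    { const: "logical", description: "Check internal consistency and inference" },
    { const: "pragmatic", description: "Judge by practical consequences" },
  ],
};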
analyze_claim
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'analyze' and 'suggest validation steps' but doesn't specify what the analysis entails (e.g., output format, depth), whether it's read-only or has side effects, or any constraints like rate limits or authentication needs. This leaves significant gaps for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('analyze a claim') and adds key details ('using multiple epistemological frameworks and suggest validation steps'). Every word contributes meaning without redundancy or waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of analyzing claims with frameworks and no annotations or output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., analysis results, validation steps list), how frameworks differ, or behavioral aspects like safety or performance. For a tool with 2 parameters and no structured output, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters ('framework' with enum values and 'text' as the claim). The description adds no additional semantic context beyond what's in the schema, such as explaining the frameworks or how they affect analysis. Baseline 3 is appropriate when the schema handles parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('analyze') and resource ('a claim'), specifying it uses 'multiple epistemological frameworks' and 'suggest validation steps'. This provides a specific verb+resource combination, though it doesn't explicitly differentiate from sibling tools like 'check_manipulation' or 'validate_sources' which might have overlapping domains.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'check_manipulation' or 'validate_sources'. It mentions 'validation steps' but doesn't clarify if this is for preliminary analysis, in-depth validation, or specific contexts. No exclusions or prerequisites are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
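The Completeness notes above repeatedly flag the missing output schema. Recent MCP revisions let a tool declare an outputSchema for structured results; a hypothetical shape for analyze_claim might look like the sketch below, where every field name is an assumption for illustration.

// Hypothetical structured output declaration for analyze_claim.
const analyzeClaimOutputSchema = {
  type: "object",
  properties: {
    analyses: {
      type: "array",
      description: "One entry per epistemological framework applied",
      items: {
        type: "object",
        properties: {
          framework: { type: "string" },
          assessment: { type: "string" },
        },
      },
    },
    validationSteps: {
      type: "array",
      description: "Concrete steps for verifying the claim",
      items: { type: "string" },
    },
  },
  required: ["analyses", "validationSteps"],
};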
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/bmorphism/anti-bullshit-mcp-server'
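The same call from TypeScript, as a minimal sketch; the response is treated as opaque JSON since its schema isn't documented on this page.

// Fetch the server profile and print the raw JSON response.
const res = await fetch(
  "https://glama.ai/api/mcp/v1/servers/bmorphism/anti-bullshit-mcp-server",
);
if (!res.ok) throw new Error(`Request failed: ${res.status}`);
console.log(await res.json());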
If you have feedback or need assistance with the MCP directory API, please join our Discord server.