
competlab-mcp-server

get_tech_trust_dashboard

Retrieve competitor security grades, technology stacks, and 24 trust signals including compliance and certifications. Analyze AI bot blocking policies, DNS infrastructure, and access AI-generated security insights.

Instructions

Get the latest Tech & Trust Profile for all competitors. Returns security headers (grade A-F, HSTS, CSP, X-Frame-Options), trust signals (compliance, reviews, social proof, certifications — 24 signals in 4 categories), technology stack (47 tech, 43 growth, 27 engagement tools), robots.txt AI bot blocking status, DNS infrastructure, and AI analysis with insights and actions. Use this for the current snapshot; use get_tech_trust_history for past runs. Read-only. Returns JSON object.
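The description enumerates the returned data categories but no output schema is published. As a purely illustrative sketch, a response with that structure might look like the following; every field name here is an assumption inferred from the description, not the server's actual schema:

```python
import json

# Illustrative sketch only: field names are assumptions inferred from the
# tool description (security headers, 24 trust signals in 4 categories,
# tech stack, robots.txt AI bot status, DNS, AI analysis).
sample_response = {
    "securityHeaders": {"grade": "A", "hsts": True, "csp": True,
                        "xFrameOptions": "DENY"},
    "trustSignals": {  # 24 signals across 4 categories per the description
        "compliance": [], "reviews": [], "socialProof": [],
        "certifications": [],
    },
    "techStack": {"technology": [], "growth": [], "engagement": []},
    "robotsTxt": {"aiBotsBlocked": []},
    "dns": {},
    "aiAnalysis": {"insights": [], "actions": []},
}

print(json.dumps(sample_response, indent=2))
```
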

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| projectId | Yes | Project ID (from list_projects) | (none) |
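Given that schema, a minimal MCP tools/call invocation for this tool could be sketched as follows. The projectId value is a placeholder; a real ID comes from list_projects:

```python
import json

# Minimal JSON-RPC 2.0 payload for an MCP tools/call invocation.
# "proj_123" is a placeholder projectId, not a real value.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_tech_trust_dashboard",
        "arguments": {"projectId": "proj_123"},
    },
}

payload = json.dumps(request)
print(payload)
```
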
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full disclosure burden. It explicitly states 'Read-only', covering the safety profile, and comprehensively details the return structure (security headers, 24 trust signals, tech stack counts, etc.). Minor gap: it doesn't clarify whether calling the tool triggers a new analysis or retrieves the latest cached run, nor does it mention auth requirements or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Information-dense without waste: it opens with purpose, details specific data categories via efficient parenthetical lists (24 signals, 47 tech tools), and provides a usage guideline, a safety declaration, and the return format. Every sentence delivers distinct value (scope, content taxonomy, sibling routing, safety, format).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without an output schema, the description compensates by exhaustively listing the returned data categories and structure ('Returns JSON object' plus a detailed content breakdown). It covers the sibling relationship and read-only safety. Minor omissions: error conditions (e.g., missing project, no data available) and whether the operation triggers new scans or retrieves cached results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% coverage, with projectId well-documented ('Project ID (from list_projects)'). The description adds implicit scope context ('for all competitors'), indicating that the parameter determines the competitor set, and doesn't need to duplicate schema details. A baseline score of 3 is appropriate when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Get' with the resource 'Tech & Trust Profile' and clearly scopes it to 'all competitors'. It explicitly distinguishes itself from the sibling 'get_tech_trust_history' ('Use this for the current snapshot; use get_tech_trust_history for past runs'), making the temporal scope unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance ('Use this for the current snapshot') and names the exact alternative tool for different temporal needs ('use get_tech_trust_history for past runs'). This clear temporal differentiation prevents agent confusion between current-state and historical analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/competlab/competlab-mcp-server'
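The same request can be built in Python with the standard library. This sketch constructs the GET request without sending it (calling urlopen on it would perform the fetch):

```python
from urllib.request import Request

# Python equivalent of the curl example above; builds the GET request
# for the MCP directory API without sending it.
url = "https://glama.ai/api/mcp/v1/servers/competlab/competlab-mcp-server"
req = Request(url, method="GET")
print(req.get_method(), req.full_url)
```
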

If you have feedback or need assistance with the MCP directory API, please join our Discord server.