website-search
Server Details
Write better IR reports, improve security writing, and plan cybersecurity product strategy.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
14 tools
get_article (A) – Read-only, Idempotent
Get the full content of a specific article from Lenny Zeltser's Website by URL path. Security articles on malware analysis, incident response, and security leadership. Returns title, date, topics, summary, and full body text.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Article URL path (e.g., '/about', '/article-slug') | |
Output Schema
| Name | Required | Description |
|---|---|---|
| url | Yes | |
| body | No | |
| date | No | |
| site | Yes | |
| title | Yes | |
| topics | No | |
| abstract | No | |
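As a sketch of how an agent might invoke this tool over MCP, the following `tools/call` request uses the '/about' path from the parameter example above; the request `id` is arbitrary and the exact framing depends on your client.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_article",
    "arguments": { "url": "/about" }
  }
}
```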
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover read-only, non-destructive, and idempotent behavior, but the description adds value by specifying the return content (title, date, topics, summary, body text) and the source website, which are not captured in annotations. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by context and return details in two efficient sentences. Every sentence adds value without redundancy, making it appropriately sized and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, no output schema), the description provides sufficient context with purpose, source, content types, and return structure. However, it lacks explicit error handling or rate limit info, though annotations cover safety aspects adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the 'url' parameter. The description does not add extra parameter details beyond what the schema provides, such as format examples or constraints, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get'), resource ('full content of a specific article'), and source ('Lenny Zeltser's Website by URL path'), with specific content domains ('Security articles on malware analysis, incident response, and security leadership'). It distinguishes from sibling tools like 'search_zeltser' by focusing on retrieval of a single article rather than searching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you need the full content of a specific article by URL, with context about the article types. However, it does not explicitly state when not to use it (e.g., vs. 'search_zeltser' for broader queries) or list alternatives, though the distinction is clear from the purpose.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_capabilities (A) – Read-only, Idempotent
List all capabilities and tools available from the Lenny Zeltser's Website MCP server, including search tools and any specialized features like IR report writing assistance.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds value by specifying the scope ('including search tools and any specialized features like IR report writing assistance'), which provides context beyond annotations. It doesn't contradict annotations, as listing is consistent with read-only behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose and scope. It is front-loaded with the core action ('List all capabilities and tools') and adds necessary context without waste. Every part of the sentence contributes to understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema) and rich annotations, the description is largely complete. It explains what the tool does and its scope. However, it could be more complete by clarifying the output format or how results are structured, as there's no output schema provided. The annotations cover behavioral aspects well.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, with 100% schema description coverage (empty schema). The description doesn't need to explain parameters, as there are none. It appropriately focuses on the tool's purpose without redundant parameter details, meeting the baseline for zero parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List all capabilities and tools available from the Lenny Zeltser's Website MCP server.' It specifies the verb ('List') and resource ('capabilities and tools'), and mentions the scope ('including search tools and any specialized features like IR report writing assistance'). However, it doesn't explicitly differentiate from siblings like 'get_index_info' or 'search_zeltser', which might also provide related metadata or listings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating it lists 'all capabilities and tools,' suggesting it's for discovery or overview purposes. However, it lacks explicit guidance on when to use this tool versus alternatives, such as whether 'get_index_info' might provide similar information or if this is the primary entry point for tool enumeration. No exclusions or clear alternatives are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_index_info (A) – Read-only, Idempotent
Get statistics about the Lenny Zeltser's Website search index including total pages indexed, last update time, and available tools.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| site | Yes | |
| tools | Yes | |
| version | Yes | |
| generated | Yes | |
| pageCount | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and idempotent behavior, which the description does not contradict. The description adds valuable context by specifying the types of statistics returned (pages indexed, update time, tools), which helps the agent understand what to expect beyond the safety profile provided by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose and key details without any wasted words. It is front-loaded with the main action and resource, followed by specific examples of statistics, making it easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema) and rich annotations, the description is largely complete. It explains what the tool does and what information it returns, which compensates for the missing output schema. However, it could slightly improve by mentioning the format of the statistics (e.g., numeric, timestamp) for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline is 4. The description appropriately does not discuss parameters, as none exist, and instead focuses on the tool's output semantics, which is useful given the lack of an output schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get statistics') and resource ('Lenny Zeltser's Website search index'), with explicit details about what statistics are included (total pages indexed, last update time, available tools). It distinguishes itself from siblings like 'get_article' or 'search_zeltser' by focusing on index metadata rather than content retrieval or search operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for obtaining index statistics, which provides clear context for when to use this tool. However, it does not explicitly state when not to use it or name alternatives among siblings (e.g., 'get_capabilities' might overlap in providing system information), so it lacks explicit exclusions or comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_security_writing_guidelines (A) – Read-only, Idempotent
Get Lenny Zeltser's expert writing guidelines for security reports and assessments. Provides guidance on tone, structure, clarity, executive summaries, and avoiding common writing mistakes. Works for any security document. Your documents are never sent to this server—guidelines flow to your AI for local analysis. Note: For incident response reports specifically, use the ir_* tools which provide deeper section-by-section review criteria.
| Name | Required | Description | Default |
|---|---|---|---|
| focus | No | Which aspects of writing to focus on. 'tone': voice, do/avoid examples. 'structure': paragraphs, report qualities, formatting. 'clarity': sentences, jargon alternatives. 'executive_summary': exec summary best practices. 'critique': writing as critique not criticism. 'analytical': evidence attribution, confidence language, comparative language, gap acknowledgment. 'all' or omit for everything. | |
| include_examples | No | Include before/after examples. Default: true. Set to false for smaller response. | true |
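A hypothetical call that requests only executive-summary guidance and suppresses examples; both values come from the documented enum and default above.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_security_writing_guidelines",
    "arguments": { "focus": "executive_summary", "include_examples": false }
  }
}
```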
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond what annotations provide. While annotations indicate read-only, non-destructive, and idempotent operations, the description clarifies that 'Your documents are never sent to this server—guidelines flow to your AI for local analysis,' addressing privacy/security concerns. It doesn't contradict annotations, but provides important implementation details about data flow.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with three sentences that each serve distinct purposes: stating the tool's function, clarifying data privacy, and providing sibling tool differentiation. There's no wasted language, and key information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the comprehensive annotations (read-only, non-destructive, idempotent) and full schema coverage, the description provides sufficient context for this informational tool. It could potentially mention output format or response structure since there's no output schema, but the privacy clarification and sibling differentiation compensate well for this.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation. The description focuses on the tool's purpose and usage rather than parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get Lenny Zeltser's expert writing guidelines for security reports and assessments.' It specifies the resource (writing guidelines) and scope (security reports/assessments), and distinguishes it from siblings by noting that for incident response reports, ir_* tools should be used instead.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides usage guidance: 'Works for any security document' and 'For incident response reports specifically, use the ir_* tools which provide deeper section-by-section review criteria.' This clearly defines when to use this tool versus alternatives, including specific sibling tool categories.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ir_get_guidelines (A) – Read-only, Idempotent
Get Lenny Zeltser's expert writing guidelines for incident response reports. Topics: tone, words, structure, executive_summary, voice, articles, or summary for quick reference. Your incident data is never sent to this server—guidelines flow to your AI for local analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| topic | No | Specific topic: tone (collaborative framing), words (clarity, jargon), structure (paragraphs, headings), executive_summary (exec summary rules), voice (style guidelines), articles (related reading), or summary for a quick reference card. Omit for full guidelines. | |
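For example, an agent that only needs the quick reference card might send the following request, with the topic value taken from the documented list.

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "ir_get_guidelines",
    "arguments": { "topic": "summary" }
  }
}
```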
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover read-only, non-destructive, and idempotent hints, but the description adds valuable context: it clarifies that incident data is never sent to the server, ensuring privacy, and notes that guidelines flow to the AI for local analysis. This enhances transparency beyond the annotations without contradicting them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by topic details and privacy assurance. Every sentence earns its place by adding essential information without redundancy, making it efficiently structured and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 optional parameter, no output schema) and rich annotations, the description is mostly complete. It covers purpose, topics, and privacy, but could slightly enhance completeness by mentioning the return format or how the guidelines are presented, though this is mitigated by the straightforward nature of the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds meaning by explaining the purpose of omitting the topic (for full guidelines) and briefly describing each topic (e.g., 'tone (collaborative framing)'), which complements the schema's enum list and provides additional semantic context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the specific resource 'Lenny Zeltser's expert writing guidelines for incident response reports,' distinguishing it from siblings like 'get_security_writing_guidelines' or 'get_article' by focusing on incident response. It specifies the topics covered, making the purpose highly specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by listing topics and noting that omitting the topic yields full guidelines, but it does not explicitly state when to use this tool versus alternatives like 'get_security_writing_guidelines' or 'ir_get_template.' It provides clear context for incident response report writing but lacks explicit exclusions or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ir_get_template (A) – Read-only, Idempotent
Get Lenny Zeltser's structured incident response report template. Covers all critical IR sections with field-by-field guidance. Your incident data is never sent to this server—guidelines flow to your AI for local analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds valuable context beyond annotations: it clarifies that 'your incident data is never sent to this server' (privacy/security behavior) and 'guidelines flow to your AI for local analysis' (how the output is used).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the core purpose, followed by important behavioral details. Every sentence adds value: the first explains what the tool does, and the second addresses privacy and usage flow. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool with rich annotations (read-only, idempotent, non-destructive) and no output schema, the description is mostly complete. It explains the purpose, privacy behavior, and output usage. However, it doesn't detail the exact structure or format of the returned template, which could be helpful given no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are 0 parameters, and schema description coverage is 100% (empty schema). The description doesn't need to explain parameters, but it implicitly clarifies that no inputs are required by stating what the tool provides without mentioning any user inputs. Baseline for 0 params is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get'), the resource ('Lenny Zeltser's structured incident response report template'), and distinguishes it from siblings by specifying it's for incident response (vs. product or general articles). It explicitly mentions what the template provides ('covers all critical IR sections with field-by-field guidance').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: when you need an incident response report template with guidance. It doesn't explicitly state when not to use it or name alternatives, but the sibling tools (e.g., 'ir_get_guidelines', 'product_get_template') imply differentiation by domain (IR vs. product).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ir_load_context (A) – Read-only, Idempotent
Load Lenny Zeltser's IR report writing context for local analysis. Returns expert guidelines for field completeness, incident identification, notification triggers, and writing quality. Your AI uses this context to analyze your incident notes locally—your notes are never sent to this server. Use detail_level to control response size: "minimal" (~2k tokens), "standard" (~5k tokens), or "comprehensive" (~11k tokens).
| Name | Required | Description | Default |
|---|---|---|---|
| topics | No | Specific topics to load. Overrides detail_level for fine-grained control. Options: completeness (field guidance), incidents (type identification), notifications (regulatory triggers), writing (style analysis), actions (urgency categorization), stakeholders (party identification), sections (review criteria). | |
| detail_level | No | Level of detail to return. 'minimal': core field guidance only (~2k tokens). 'standard': field guidance + writing analysis + notifications (~5k tokens, default). 'comprehensive': everything including examples and all incident types (~11k tokens). | |
| incident_type | No | Load guidance for a specific incident type only (saves tokens). Omit to load all types when 'incidents' topic is included. | |
| include_examples | No | Include good/poor examples in field guidance. Default: false. Set to true for learning/training. | false |
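A minimal, token-conscious sketch that uses the documented 'minimal' detail level and leaves the remaining parameters at their defaults.

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "ir_load_context",
    "arguments": { "detail_level": "minimal", "include_examples": false }
  }
}
```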
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds useful context: it specifies that notes are never sent to the server (privacy assurance) and mentions token sizes for detail levels (performance guidance). However, it does not disclose rate limits, authentication needs, or error behaviors beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by specific usage notes and parameter guidance. Every sentence earns its place by clarifying functionality, privacy, and control options without redundancy, making it efficiently structured and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations, and 100% schema coverage, the description is mostly complete. It covers purpose, usage context, and key behavioral aspects like privacy and token control. However, without an output schema, it could benefit from more detail on return values (e.g., structure of guidelines), though the mention of 'expert guidelines' provides some indication.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 4 parameters. The description adds minimal value beyond the schema: it explains the purpose of detail_level with token estimates and mentions local analysis context, but does not provide additional syntax, format, or usage details for parameters like 'topics' or 'incident_type' that aren't already in the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Load Lenny Zeltser's IR report writing context for local analysis.' It specifies the resource (IR report writing context) and verb (load), and distinguishes it from siblings by emphasizing local analysis and that notes are never sent to the server, unlike tools like 'ir_review_report' or 'search_zeltser' that might involve server-side processing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool: 'Your AI uses this context to analyze your incident notes locally.' It implies usage for IR report analysis but does not explicitly state when not to use it or name alternatives among siblings, such as 'ir_get_guidelines' or 'product_load_context', leaving some room for ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ir_review_report (A) – Read-only, Idempotent
Get Lenny Zeltser's expert criteria for reviewing an existing IR report. Returns focused guidance for constructive critique — what to check in each section, writing quality issues to identify, and how to frame feedback collaboratively. Your AI uses this to analyze your report locally—your report is never sent to this server.
| Name | Required | Description | Default |
|---|---|---|---|
| focus | No | What aspects to focus on. 'completeness': is everything covered? 'clarity': jargon, passive voice, vague terms. 'tone': collaborative framing. 'structure': sentence/paragraph organization. | |
| sections | No | Specific sections to get review criteria for. Omit or use 'all' for complete review criteria. | |
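An illustrative request that narrows the review criteria to clarity issues; sections is omitted so the complete criteria are returned, per the parameter description above.

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "ir_review_report",
    "arguments": { "focus": "clarity" }
  }
}
```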
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety (readOnlyHint, destructiveHint) and idempotency, but the description adds valuable context: 'Your AI uses this to analyze your report locally—your report is never sent to this server.' This clarifies data privacy and local processing behavior, which annotations do not address, enhancing transparency beyond the structured hints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by additional context in a second sentence. Both sentences are efficient and directly relevant, with no wasted words, making it easy for an AI agent to quickly grasp the tool's function and constraints.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema), the description is mostly complete. It covers purpose, usage context, and behavioral details like data privacy. However, it does not explain the return format or how the criteria are applied, which could be helpful since there is no output schema. Annotations provide safety and idempotency, but the description compensates well overall.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing full details for both parameters (focus and sections). The description does not add any parameter-specific information beyond the schema, such as default behaviors or usage examples. With high schema coverage, a baseline score of 3 is appropriate as the description relies on the schema for parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get Lenny Zeltser's expert criteria for reviewing an existing IR report.' It specifies the verb ('Get'), resource ('expert criteria'), and distinguishes it from siblings by mentioning the specific domain (IR reports) and the expert source, unlike generic tools like get_article or search_zeltser.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: 'to analyze your report locally' and 'for constructive critique.' It implicitly suggests using it for IR report reviews but does not explicitly state when not to use it or name alternatives among siblings, such as ir_get_guidelines or ir_get_template, which might serve related purposes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
product_compare_context (A) – Read-only, Idempotent
Load Lenny Zeltser's comparative analysis framework for evaluating multiple security companies side by side. Returns structured scoring rubric, evaluation dimensions, evidence tiering guidance, and comparison-type-specific instructions. Requires comparative analysis content. Your product plans are never sent to this server—guidelines flow to your AI for local analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| company_count | Yes | Number of companies being compared (2-10). | |
| comparison_type | Yes | Type of comparison. 'competition': direct/adjacent competitors. 'market_segment': companies in same segment. 'portfolio': cohort evaluation. | |
| include_scoring_rubric | No | Include the structured 1-5 scoring rubric. Default: true. | true |
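An illustrative request for a three-way competitive comparison; consistent with the privacy note above, only these scalar arguments are sent to the server, not the plans themselves.

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "product_compare_context",
    "arguments": {
      "company_count": 3,
      "comparison_type": "competition",
      "include_scoring_rubric": true
    }
  }
}
```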
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already declare readOnlyHint=true and idempotentHint=true, establishing the safe read-only nature. The description adds crucial privacy context beyond these annotations by stating that 'Your product plans are never sent to this server' and that 'guidelines flow to your AI for local analysis.' It also details the four specific return components (scoring rubric, evaluation dimensions, evidence tiering guidance, and comparison-type-specific instructions), compensating for the lack of an output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of four information-dense sentences that progress logically from purpose to outputs to requirements to privacy guarantees. Every sentence serves a distinct function without redundancy, and the critical action and scope are front-loaded in the opening clause.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's straightforward purpose as a context loader with simple scalar parameters and complete schema documentation, the description adequately explains the return values and operational constraints. The privacy disclosure and explicit listing of framework components provide sufficient context despite the absence of a formal output schema, though explicit differentiation from single-product siblings would strengthen it further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input parameters are fully documented in the schema itself, including enum descriptions for comparison_type and range constraints for company_count. The description references the comparative nature ('comparison-type-specific instructions') but does not add syntactic details, validation rules, or semantic nuances beyond what the schema already provides, warranting the baseline score for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific verb 'Load' and identifies the exact resource as 'Lenny Zeltser's comparative analysis framework.' It clearly distinguishes this tool from single-product siblings like product_load_context by emphasizing 'evaluating multiple security companies side by side,' establishing a clear comparative scope that differentiates it from other product analysis tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states the prerequisite 'Requires comparative analysis content' and explains the data flow model where 'Your product plans are never sent to this server.' While it provides clear context that this is for comparative evaluation, it does not explicitly name sibling alternatives or specify when to avoid this tool in favor of single-company analysis tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
product_get_guidelines (A) – Read-only, Idempotent
Get Lenny Zeltser's expert strategic guidelines for a specific product strategy topic. Topics: market (segmentation), capabilities (AI, agents, MVP, positioning), sales (GTM, channels, distribution, POCs), pricing (models, retention), delivery (deployment, APIs), trust (compliance, security program), platform (ecosystem positioning), team (expertise, gaps), competitive (differentiation, moats), smb (SMB market dynamics), endpoint (endpoint viability), ai_security (AI security vertical), role (product manager responsibilities), category_creation (new category strategy), comparative (multi-company analysis), evidence_tiering (evidence classification framework). Your product plans are never sent to this server—guidelines flow to your AI for local analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| topic | No | Specific topic to get guidelines for. Omit or use 'all' for a complete overview. | |
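A sample call requesting guidance on a single topic from the list above; omitting the argument would return the complete overview instead.

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "product_get_guidelines",
    "arguments": { "topic": "pricing" }
  }
}
```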
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnly/idempotent hints, the description adds crucial behavioral context not captured in structured fields: 'Your product plans are never sent to this server—guidelines flow to your AI for local analysis.' This privacy/data handling disclosure is significant for a retrieval tool handling sensitive strategy content. It does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is structured efficiently with three distinct components: purpose statement, topic enumeration, and privacy note. Every sentence earns its place. The topic list is lengthy but necessary given the 17 enum options; the parenthetical explanations prevent parameter misuse without requiring separate documentation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter retrieval tool with robust annotations and no output schema, the description adequately covers the tool's scope, available topics, and data privacy model. It could be improved by briefly describing the format or structure of the returned guidelines to compensate for the missing output schema, but the core functionality is well-documented.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema description coverage for the 'topic' parameter, the description adds substantial semantic value by mapping each enum value to its meaning: 'market (segmentation), capabilities (AI, agents, MVP, positioning), sales (GTM, channels...)' etc. This parenthetical clarification of domain terminology significantly aids correct parameter selection beyond the raw enum list in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves 'Lenny Zeltser's expert strategic guidelines for a specific product strategy topic' with a specific verb and resource. However, it does not explicitly distinguish from sibling tools like get_security_writing_guidelines or ir_get_guidelines, relying instead on the 'product strategy' qualifier for implicit differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description enumerates 16 specific topics covered (market, capabilities, sales, etc.), which provides implied usage context for when to invoke the tool. However, it lacks explicit guidance on when NOT to use this tool versus alternatives like product_get_template or get_security_writing_guidelines, and states no prerequisites or dependencies.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
product_get_template (A) – Read-only, Idempotent
Get Lenny Zeltser's fill-in-the-blank template for planning a security product strategy. Includes strategic questions organized by section with evidence columns. Your product plans are never sent to this server—guidelines flow to your AI for local analysis. The template is Copyright (c) 2026 Lenny Zeltser; any content you create using it is entirely yours.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnly/idempotent/destructive status, the description adds crucial privacy context: 'Your product plans are never sent to this server—guidelines flow to your AI for local analysis.' It also discloses copyright ownership (Lenny Zeltser, 2026), providing legal context not present in structured fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: (1) core purpose, (2) content structure, (3) critical privacy/behavioral note, (4) copyright. Information is front-loaded with the essential action and resource type in the opening clause.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with no output schema, the description adequately describes the return value's content (questions by section, evidence columns) and format (fill-in-the-blank template). Minor gap: does not explicitly state the data format (e.g., markdown/text) though 'template' implies structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema contains zero parameters. Per calibration rules, 0 params warrants a baseline score of 4. The description correctly omits parameter discussion as there are none to document, with no penalty applicable.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb 'Get' and precise resource 'fill-in-the-blank template for planning a security product strategy'. It clearly distinguishes from sibling 'ir_get_template' via domain (product vs incident response) and from 'product_get_guidelines' via format (template with questions/evidence columns vs guidelines).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context by describing the template's structure (strategic questions organized by section with evidence columns), implicitly guiding selection when a structured framework is needed. However, it does not explicitly contrast with 'product_get_guidelines' or state when to prefer one over the other, stopping short of a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
product_load_context (A) – Read-only, Idempotent
Load Lenny Zeltser's product strategy context for local analysis. Returns expert strategic frameworks, principles, and guidance for evaluating or creating security product plans. Your AI uses this context to analyze your product plans locally—your plans are never sent to this server. Use detail_level to control response size: "minimal" (~2k tokens), "standard" (~5k tokens), "compact" (~3-4k tokens, all sections but stripped), or "comprehensive" (~12k tokens). Use market_segment: "smb" for SMB-specific guidance. Use product_focus: "endpoint" for endpoint security viability assessment. Set include_template: true to include the fill-in-the-blank template in the response.
| Name | Required | Description | Default |
|---|---|---|---|
| topics | No | Specific topics to include. Overrides detail_level for fine-grained control. | |
| detail_level | No | Level of detail to return. "minimal": market + capabilities only (~2k tokens). "standard": core strategy sections (~5k tokens, default). "compact": all sections with stripped subsections (~3-4k tokens, good for batch analysis). "comprehensive": everything + examples (~12k tokens). | |
| analysis_mode | No | 'internal': planning your own product (default). 'external': evaluating another company from outside. External mode reframes questions and adjusts evidence standards. | |
| product_focus | No | Include vertical-specific guidance. 'endpoint': platform entrapment, defensibility. 'ai_security': AI threat landscape, buyer personas, regulatory alignment. | |
| market_segment | No | Include SMB-specific guidance (distribution, buying triggers, readiness). | |
| company_context | No | Filter guidance to startup or large company perspective. Stage values (pre_seed, seed, series_a, series_b, growth, late_stage) imply startup context with stage-specific emphasis. | |
| include_examples | No | Include examples in framework sections. Default: false. | |
| include_template | No | Include the fill-in-the-blank strategy template at the end of the context response. Default: false. Saves a separate product_get_template call. | false |
| evaluation_perspective | No | Emphasize framework sections relevant to a specific perspective. Composes with analysis_mode. | |
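As a sketch of how the parameters compose, this hypothetical call loads the compact context with endpoint and SMB guidance and appends the template, following the usage hints in the description.

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "product_load_context",
    "arguments": {
      "detail_level": "compact",
      "product_focus": "endpoint",
      "market_segment": "smb",
      "include_template": true
    }
  }
}
```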
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent hints, so description adds value by disclosing privacy model ('plans are never sent to this server') and precise response sizing (~2k, ~5k, ~12k tokens). Notes compositional behavior ('Composes with analysis_mode'). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense single paragraph with zero waste. Front-loaded with purpose ('Load...context'), followed by return value, privacy guarantee, and sequential parameter guidance. Every sentence conveys unique operational guidance; no tautology or redundancy despite 9-parameter complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, so description appropriately explains return values ('expert strategic frameworks, principles, and guidance'). Covers key parameter interactions (detail_level token sizes, composition of evaluation_perspective). Given 9 parameters with 100% schema coverage, description strategically elaborates on high-impact parameters without redundancy.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. Description adds substantial value: specific token counts for detail_level options, usage context for include_template ('saves a separate call'), and assessment framing for product_focus ('viability assessment'). These semantic details aid agent reasoning beyond schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with specific verb 'Load' and resource 'Lenny Zeltser's product strategy context', immediately clarifying scope. It distinguishes from siblings like product_review_plan by emphasizing 'local analysis' and product_get_template by noting the include_template parameter 'saves a separate product_get_template call'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit parameter-level guidance ('Use detail_level to control response size', 'Use market_segment...', 'Use product_focus...'). Explicitly names alternative tool product_get_template. Privacy note 'your plans are never sent to this server' implies when to use vs server-side alternatives, though could more explicitly contrast with product_review_plan.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
product_review_plan (A) – Read-only, Idempotent
Get Lenny Zeltser's expert criteria for reviewing an existing product strategy plan. Returns focused guidance for constructive critique—what to check in each section, strategic coherence issues, and how to frame feedback collaboratively. Your AI uses this to analyze your plan locally—your plan is never sent to this server. Use market_segment: "smb" to include SMB-specific review criteria. Use product_focus: "endpoint" to include endpoint viability assessment.
| Name | Required | Description | Default |
|---|---|---|---|
| focus | No | What aspects to focus on. 'completeness': is everything covered? 'strategy': are decisions coherent? 'feasibility': can this team execute? | |
| sections | No | Specific sections to get review criteria for. Omit or use 'all' for complete review criteria. | |
| review_type | No | 'internal': reviewing your own plan (default). 'external-analysis': reviewing an analysis of another company. Adjusts criteria to focus on evidence tiering, source attribution, and marketing language. | |
| product_focus | No | Include vertical-specific review criteria. 'endpoint': endpoint viability. 'ai_security': AI security market assessment. | |
| market_segment | No | Include SMB-specific review criteria. | |
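An illustrative review request that checks feasibility and pulls in the SMB-specific criteria; the argument values mirror those documented above.

```json
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "product_review_plan",
    "arguments": { "focus": "feasibility", "market_segment": "smb" }
  }
}
```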
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnlyHint and idempotentHint, the description adds essential behavioral context not captured in structured fields: the data privacy model ('Your AI uses this to analyze your plan locally—your plan is never sent to this server') and the return value semantics ('focused guidance for constructive critique—what to check in each section'). It does not cover authentication or rate limits, preventing a 5.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description comprises four efficient sentences: purpose (sentence 1), return value (sentence 2), privacy guarantee (sentence 3), and parameter examples (sentence 4). Every sentence delivers unique value with no redundancy or filler, and the information is front-loaded by priority (purpose first, details last).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's read-only nature and lack of output schema, the description adequately covers the conceptual output (review criteria, section checklists) and critical privacy constraints. It sufficiently compensates for the missing output schema by explaining what 'focused guidance' entails. Minor gap: could clarify behavior when called with zero parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description mentions specific parameter values ('smb', 'endpoint') but essentially repeats the schema's explanations without adding syntactic details, validation rules, or semantic nuance beyond what the structured schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'Lenny Zeltser's expert criteria for reviewing an existing product strategy plan,' specifying the exact resource (expert criteria), action (reviewing), and target (existing product strategy plans). This distinguishes it from sibling tools like product_get_guidelines (general guidance) and product_get_template (templates) by focusing specifically on critique and evaluation frameworks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear usage context for specific parameter values ('Use market_segment: "smb"...' and 'Use product_focus: "endpoint"...') and critical privacy guidance ('your plan is never sent to this server'). However, it lacks explicit 'when-not-to-use' guidance or direct comparison to sibling alternatives like product_get_guidelines.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_zeltser (A) – Read-only, Idempotent
Search Lenny Zeltser's Website by keywords. Security articles on malware analysis, incident response, and security leadership. Searches across titles, abstracts, full content, and topics.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return (default: 10, max: 25) | |
| query | Yes | Search terms to find relevant content | |
Output Schema
| Name | Required | Description |
|---|---|---|
| site | Yes | |
| count | Yes | |
| query | Yes | |
| results | Yes | |
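A sample search request; the query string is illustrative, and the limit stays within the documented maximum of 25.

```json
{
  "jsonrpc": "2.0",
  "id": 10,
  "method": "tools/call",
  "params": {
    "name": "search_zeltser",
    "arguments": { "query": "incident response report writing", "limit": 5 }
  }
}
```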
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and idempotent behavior, but the description adds useful context by specifying the search scope (titles, abstracts, content, topics) and the content domain (security articles on malware analysis, incident response, security leadership). This enhances understanding beyond the basic safety profile provided by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by supporting details in a second sentence. Every sentence adds value without redundancy, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, 100% schema coverage, annotations covering safety), the description is complete enough for a search tool. However, the lack of an output schema means the description could benefit from mentioning the return format (e.g., list of articles with titles/links), though this is a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the 'query' and 'limit' parameters. The description does not add any additional parameter details beyond what the schema provides, such as syntax examples or format specifics, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search Lenny Zeltser's Website'), the resource ('Security articles'), and the scope ('by keywords... Searches across titles, abstracts, full content, and topics'). It distinguishes itself from siblings like 'get_article' by emphasizing search functionality rather than direct retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for finding security articles via keywords, but it does not explicitly state when to use this tool versus alternatives like 'get_article' or other sibling tools. No exclusions or prerequisites are mentioned, leaving the context somewhat vague.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.