roslyn:analyze_change_impact

Identify breaking changes before modifying C# code by analyzing symbol impact across your project. Detects affected locations and safety issues for refactoring operations like renaming or parameter changes.

Instructions

Analyze what would break if you change a symbol. Identifies breaking changes before you make them.

USAGE: analyze_change_impact(filePath, line, column, changeType="rename|changeType|addParameter|removeParameter")
OUTPUT: List of impacted locations, whether change is safe, and specific issues at each location.
IMPORTANT: Uses ZERO-BASED coordinates (editor line - 1).

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| filePath | Yes | Absolute path to source file | |
| line | Yes | Zero-based line number of the symbol | |
| column | Yes | Zero-based column number | |
| changeType | Yes | Type of change: rename, changeType, addParameter, removeParameter, changeAccessibility, delete | |
| newValue | No | Optional: new value for rename/changeType | |
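As a sketch of how the schema above could be enforced client-side before a call is made, the check below mirrors the table's required fields and the allowed changeType values. This is hypothetical caller code under those assumptions, not something the tool provides.

```python
# Mirrors the input schema table above (assumed, not provided by the tool).
ALLOWED_CHANGE_TYPES = {
    "rename", "changeType", "addParameter",
    "removeParameter", "changeAccessibility", "delete",
}
REQUIRED_FIELDS = {"filePath", "line", "column", "changeType"}

def validate_arguments(args: dict) -> list[str]:
    """Return a list of schema violations (empty when the arguments are valid)."""
    errors = [f"missing required field: {name}"
              for name in sorted(REQUIRED_FIELDS - args.keys())]
    if args.get("changeType") not in ALLOWED_CHANGE_TYPES:
        errors.append(f"invalid changeType: {args.get('changeType')!r}")
    for coord in ("line", "column"):
        value = args.get(coord)
        if isinstance(value, int) and value < 0:
            errors.append(f"{coord} must be zero-based and non-negative")
    return errors

print(validate_arguments({"filePath": "/src/A.cs", "line": 0, "column": 4,
                          "changeType": "rename", "newValue": "NewName"}))  # → []
```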
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it's an analysis tool (not a mutating operation), it uses zero-based coordinates (important implementation detail), and it outputs impact analysis results. It doesn't mention rate limits, authentication needs, or error handling, but covers the essential behavior adequately for this context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, usage, output, important note) and appropriately sized. Every sentence earns its place, though the usage line could be slightly more polished. It's front-loaded with the core purpose and maintains good information density without unnecessary repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 5-parameter analysis tool with no annotations and no output schema, the description does a good job covering the essential context: purpose, usage syntax, output format, and critical implementation detail (zero-based coordinates). It could benefit from more detail about the output structure or error cases, but given the complexity and lack of structured output schema, it provides sufficient guidance for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema: it lists only a subset of the possible changeType values (omitting 'changeAccessibility' and 'delete', which appear in the schema) and hints at the coordinate system, but it doesn't explain parameter interactions or provide additional semantic context. A baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('analyze what would break', 'identifies breaking changes') and distinguishes it from siblings like 'rename_symbol' or 'change_signature' by focusing on impact analysis rather than performing changes. It explicitly mentions analyzing changes to symbols, which is distinct from other analysis tools in the sibling list.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('if you change a symbol', 'before you make them'), implying it's for pre-change analysis. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the many sibling tools, though the purpose naturally differentiates it from tools that perform actual modifications.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/pzalutski-pixel/sharplens-mcp'