Compare Technologies

Tool: compare_techs (read-only)

Compare 2-4 technologies side by side, with per-dimension winners and compatibility matrices, to identify the optimal choice for your project context.

Instructions

Side-by-side comparison of 2-4 technologies with per-dimension winners and compatibility matrix.

Input Schema

Name          Required  Description              Default
technologies  Yes       Technologies to compare  -
context       No        Context for scoring      'default'
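
For orientation, an arguments object that satisfies this schema might look like the following (the technology IDs are illustrative, borrowed from the tool definition's own example):

    const args = {
        technologies: ['nextjs', 'sveltekit'], // 2-4 unique technology IDs
        context: 'mvp' // optional; when omitted the handler falls back to 'default'
    };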

Implementation Reference

  • The main handler for the 'compare_techs' tool: it validates inputs, fetches technology data and scores, determines per-dimension winners, builds the compatibility matrix, and generates Markdown tables with an overall verdict and recommendation.
    export function executeCompareTechs(input: CompareTechsInput): { text: string; isError?: boolean } {
    	const { technologies, context = 'default' } = input;
    
    	// Validate all technologies exist
    	const allTechIds = getAllTechIds();
    	const invalidTechs: string[] = [];
    
    	for (const techId of technologies) {
    		if (!techExists(techId)) {
    			invalidTechs.push(techId);
    		}
    	}
    
    	if (invalidTechs.length > 0) {
    		const error = techNotFoundError(invalidTechs[0], allTechIds);
    		return { text: error.toResponseText(), isError: true };
    	}
    
    	// Check for duplicates
    	const uniqueTechs = [...new Set(technologies)];
    	if (uniqueTechs.length !== technologies.length) {
    		const error = new McpError(ErrorCode.INVALID_INPUT, 'Duplicate technologies in comparison list');
    		return { text: error.toResponseText(), isError: true };
    	}
    
    	// Build comparison data
    	const comparisons: TechComparison[] = technologies.map((techId) => {
    		const tech = getTechnology(techId)!;
    		const scores = getScores(techId, context)!;
    		const overall = calculateOverallScore(scores);
    		return {
    			id: techId,
    			name: tech.name,
    			scores,
    			overall,
    			grade: scoreToGrade(overall)
    		};
    	});
    
    	// Sort by overall score
    	const sorted = [...comparisons].sort((a, b) => b.overall - a.overall);
    
    	// Determine dimension winners
    	const dimensionWinners = determineDimensionWinners(comparisons);
    
    	// Build response
    	const techNames = comparisons.map((t) => t.name).join(' vs ');
    	let text = `## Comparison: ${techNames} (context: ${context})
    
    ### Overall Scores
    | Technology | Score | Grade |
    |------------|-------|-------|
    `;
    
    	for (const tech of sorted) {
    		text += `| ${tech.name} | ${tech.overall} | ${tech.grade} |\n`;
    	}
    
    	// Per-dimension breakdown
    	text += `
    ### Per-Dimension Winners
    | Dimension | Winner | Margin | Notes |
    |-----------|--------|--------|-------|
    `;
    
    	for (const w of dimensionWinners) {
    		const winnerDisplay = w.winner ?? 'Tie';
    		const marginDisplay = w.winner ? `+${w.margin}` : '-';
    		text += `| ${w.dimension} | ${winnerDisplay} | ${marginDisplay} | ${w.notes} |\n`;
    	}
    
    	// Compatibility matrix (for all pairs)
    	text += '\n### Compatibility Matrix\n| Pair | Score | Verdict |\n|------|-------|---------|\n';
    
    	for (let i = 0; i < comparisons.length; i++) {
    		for (let j = i + 1; j < comparisons.length; j++) {
    			const a = comparisons[i];
    			const b = comparisons[j];
    			const score = getCompatibility(a.id, b.id);
    			const verdict = getCompatibilityVerdict(score);
    			text += `| ${a.id} ↔ ${b.id} | ${score} | ${verdict} |\n`;
    		}
    	}
    
    	// Verdict
    	const leader = sorted[0];
    	const runnerUp = sorted[1];
    	const overallMargin = leader.overall - runnerUp.overall;
    
    	text += '\n';
    	if (overallMargin < 3) {
    		text += `**Verdict**: Close call between ${leader.name} and ${runnerUp.name}\n`;
    		text += `**Recommendation**: Both are strong choices; consider your specific priorities.`;
    	} else {
    		text += `**Verdict**: ${leader.name} leads with ${leader.overall}/100\n`;
    
    		// Find what leader is best at
    		const leaderStrengths = dimensionWinners.filter((w) => w.winner === leader.name).map((w) => w.dimension);
    
    		if (leaderStrengths.length > 0) {
    			const strengthsText = leaderStrengths.slice(0, 2).join(' and ');
    			text += `**Recommendation**: Consider ${leader.name} for ${strengthsText} priorities.`;
    		}
    	}
    
    	text += `\n\nData version: ${DATA_VERSION}`;
    
    	return { text };
    }
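  • For orientation, a truncated sketch of the Markdown this handler emits for a hypothetical two-way comparison (technology names, scores, grades, and dimension labels are invented for illustration; the per-dimension and compatibility tables follow the same pattern as the code above):
    ## Comparison: Next.js vs Nuxt (context: default)
    
    ### Overall Scores
    | Technology | Score | Grade |
    |------------|-------|-------|
    | Next.js | 88 | A |
    | Nuxt | 84 | B+ |
    
    **Verdict**: Next.js leads with 88/100
    **Recommendation**: Consider Next.js for DX and Performance priorities.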
  • Zod input validation schema for the compare_techs tool: requires 2-4 technology IDs, optional context.
    export const CompareTechsInputSchema = z.object({
    	technologies: z
    		.array(z.string().min(1))
    		.min(2)
    		.max(4)
    		.describe('Technology IDs to compare (2-4 technologies)'),
    	context: z.enum(CONTEXTS).optional().default('default').describe('Context for score lookup')
    });
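  • A minimal usage sketch of this schema under standard Zod semantics (the IDs are illustrative):
    // Fewer than 2 technologies is rejected by .min(2)
    const bad = CompareTechsInputSchema.safeParse({ technologies: ['nextjs'] });
    // bad.success === false
    
    // Valid input; the omitted context is filled in by .default('default')
    const good = CompareTechsInputSchema.parse({ technologies: ['nextjs', 'nuxt'] });
    // good.context === 'default'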
  • MCP tool definition including the name, description, and a structural (JSON Schema) inputSchema used for registration.
    export const compareTechsToolDefinition = {
    	name: 'compare_techs',
    	description: 'Side-by-side comparison of 2-4 technologies with per-dimension winners and compatibility matrix.',
    	inputSchema: {
    		type: 'object' as const,
    		properties: {
    			technologies: {
    				type: 'array',
    				items: { type: 'string' },
    				minItems: 2,
    				maxItems: 4,
    				description: 'Technology IDs to compare (e.g., ["nextjs", "sveltekit", "nuxt"])'
    			},
    			context: {
    				type: 'string',
    				enum: CONTEXTS,
    				description: 'Context for scoring (default, mvp, enterprise)'
    			}
    		},
    		required: ['technologies']
    	}
    };
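  • Because this inputSchema is plain JSON Schema, it can be exercised with any standard validator. A minimal sketch using Ajv (Ajv is an assumption here, not a dependency shown in the source):
    import Ajv from 'ajv';
    
    const ajv = new Ajv();
    const validate = ajv.compile(compareTechsToolDefinition.inputSchema);
    
    validate({ technologies: ['nextjs', 'sveltekit'] }); // true
    validate({ technologies: ['nextjs'] }); // false: violates minItems 2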
  • src/server.ts:98-122: MCP server registration of the 'compare_techs' tool, wiring together the tool definition, a registration-time Zod inputSchema, and the handler callback that invokes executeCompareTechs.
    server.registerTool(
    	compareTechsToolDefinition.name,
    	{
    		title: 'Compare Technologies',
    		description: compareTechsToolDefinition.description,
    		inputSchema: {
    			technologies: z.array(z.string().min(1)).min(2).max(4).describe('Technologies to compare'),
    			context: z.enum(CONTEXTS).optional().describe('Context for scoring')
    		},
    		annotations: {
    			readOnlyHint: true,
    			destructiveHint: false,
    			openWorldHint: false
    		}
    	},
    	async (args) => {
    		debug('compare_techs called', args);
    		const input = CompareTechsInputSchema.parse(args);
    		const { text, isError } = executeCompareTechs(input);
    		return {
    			content: [{ type: 'text', text }],
    			isError
    		};
    	}
    );
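  • A hypothetical client-side round trip using the MCP TypeScript SDK (assumes a connected Client instance named client; setup omitted):
    const result = await client.callTool({
        name: 'compare_techs',
        arguments: { technologies: ['nextjs', 'nuxt'], context: 'enterprise' }
    });
    // result.content[0].text carries the Markdown comparison;
    // result.isError is set when technology IDs are duplicated or unknown.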
  • Helper function used by the handler to compute per-dimension winners, margins, and notes (tie/close/clear) across compared technologies.
    function determineDimensionWinners(techs: TechComparison[]): DimensionWinner[] {
    	const winners: DimensionWinner[] = [];
    
    	for (const dim of SCORE_DIMENSIONS) {
    		const sorted = [...techs].sort((a, b) => b.scores[dim] - a.scores[dim]);
    		const first = sorted[0];
    		const second = sorted[1];
    		const margin = first.scores[dim] - second.scores[dim];
    
    		let winner: string | null;
    		let notes: string;
    
    		if (margin < 3) {
    			// Tie if margin is less than 3
    			winner = null;
    			notes = 'Tie';
    		} else if (margin < 10) {
    			winner = first.name;
    			notes = 'Close competition';
    		} else {
    			winner = first.name;
    			notes = 'Clear winner';
    		}
    
    		winners.push({
    			dimension: DIMENSION_LABELS[dim],
    			winner,
    			margin,
    			notes
    		});
    	}
    
    	return winners;
    }
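  • The margin thresholds in isolation, as a minimal sketch (classifyMargin is not a function in the source; it simply restates the branches above):
    function classifyMargin(margin: number): string {
        if (margin < 3) return 'Tie'; // winner is reported as null
        if (margin < 10) return 'Close competition';
        return 'Clear winner';
    }
    
    classifyMargin(2); // 'Tie'
    classifyMargin(7); // 'Close competition'
    classifyMargin(15); // 'Clear winner'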
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=false, and destructiveHint=false, so the agent knows this is a safe, closed-world read operation. The description adds useful context about the output format ('per-dimension winners and compatibility matrix'), but it doesn't disclose behavioral traits such as rate limits or authentication needs, or explain what 'closed-world' means in practice beyond what the annotations already convey.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core functionality and includes key constraints (2-4 technologies) and output details. Every word earns its place with zero waste or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only comparison tool with good annotations and full schema coverage, the description is adequate but has gaps. It explains the output format but doesn't clarify how 'winners' are determined or what dimensions are compared. Without an output schema, more detail about return values would be helpful, though annotations cover the safety profile sufficiently.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters (the technologies array with its constraints, and the context enum). The description adds no parameter semantics beyond the schema: it doesn't explain what 'per-dimension winners' means in relation to the parameters or give examples of technology names. A baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs a 'side-by-side comparison of 2-4 technologies' with specific outputs ('per-dimension winners and compatibility matrix'), which is a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'analyze_tech' or 'recommend_stack', which might have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'analyze_tech' or 'recommend_stack'. It mentions the 2-4 technology constraint and the context parameter, but doesn't explain when a side-by-side comparison is appropriate versus the other analysis or recommendation tools in the sibling list.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
