concept_relatedness
Calculate semantic relatedness scores between concepts to quantify similarity, analyze relationships, and compare ideas across languages using ConceptNet embeddings.
Instructions
Calculate a precise semantic relatedness score between two concepts.
This tool uses ConceptNet's semantic embeddings to calculate how
related two concepts are to each other. The score ranges from 0.0
(completely unrelated) to 1.0 (very strongly related).
Features:
- Precise quantitative similarity measurement
- Cross-language comparison support
- Detailed relationship analysis and interpretation
- Confidence levels and percentile estimates
- Format control: minimal (~96% smaller) vs verbose (full metadata)
Format Options:
- verbose=false (default): Returns minimal format optimized for LLM consumption
- verbose=true: Returns comprehensive format with full ConceptNet metadata
- Backward compatibility maintained with existing tools
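For illustration, the two formats differ roughly as follows. The minimal example is taken from the handler's docstring below; the verbose shape is a hedged sketch, since its exact fields are built by `_create_enhanced_response` (only the nested `relatedness.description` field is confirmed by the handler code):

```python
# Minimal format (verbose=False), example from the handler's docstring:
minimal = {
    "concept1": "dog",
    "concept2": "cat",
    "relatedness": 0.78,
    "strength": "strong",
}

# Verbose format (verbose=True), illustrative sketch only; the real payload
# is assembled by _create_enhanced_response with full ConceptNet metadata:
verbose = {
    "relatedness": {"value": 0.78, "description": "strongly related"},
    "concepts": {"concept1": "dog", "concept2": "cat"},
    # ...plus confidence level, percentile estimate, and query metadata
}
```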
Analysis Components:
- Numeric relatedness score (0.0-1.0)
- Descriptive interpretation and confidence level
- Likely connection explanations
- Semantic distance and relationship strength
- Cross-language analysis when applicable
Use this when you need to:
- Quantify how similar two concepts are
- Compare concepts across different languages
- Measure semantic distance between ideas
- Validate conceptual relationships
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| concept1 | Yes | First concept term for comparison | |
| concept2 | Yes | Second concept term for comparison | |
| language1 | No | Language code for first concept | en |
| language2 | No | Language code for second concept | en |
| verbose | No | If true, returns detailed format with full metadata | false |
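For reference, an arguments object matching this schema might look like the following. The cross-language perro/dog pair is borrowed from the handler's docstring examples; no other values are assumed:

```python
# Example tool arguments matching the input schema above.
arguments = {
    "concept1": "perro",
    "concept2": "dog",
    "language1": "es",   # defaults to "en" when omitted
    "language2": "en",
    "verbose": False,    # defaults to False (minimal format)
}
```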
Implementation Reference
- Core handler function that validates and normalizes parameters, calls the ConceptNet relatedness API, processes the response into minimal or verbose format, and handles all errors.

```python
async def concept_relatedness(
    concept1: str,
    concept2: str,
    ctx: Context,
    language1: str = "en",
    language2: str = "en",
    verbose: bool = False
) -> Dict[str, Any]:
    """
    Calculate semantic relatedness score between two concepts.

    This tool uses ConceptNet's semantic embeddings to calculate how
    related two concepts are to each other. The score ranges from 0.0
    (completely unrelated) to 1.0 (very strongly related).

    By default, returns a minimal format optimized for LLM consumption.

    Args:
        concept1: First concept term for comparison (e.g., "dog", "happiness")
        concept2: Second concept term for comparison (e.g., "cat", "joy")
        language1: Language code for first concept (default: "en")
        language2: Language code for second concept (default: "en")
        verbose: If True, returns detailed format with full metadata (default: False)

    Returns:
        Relatedness score with strength category (minimal format) or
        comprehensive analysis with detailed metadata (verbose format).

    Examples:
        - concept_relatedness("dog", "cat") -> Minimal format:
          {"concept1": "dog", "concept2": "cat", "relatedness": 0.78, "strength": "strong"}
        - concept_relatedness("dog", "cat", verbose=True) -> Full detailed format with analysis
        - concept_relatedness("perro", "dog", "es", "en") -> Cross-language comparison
        - concept_relatedness("happy", "sad") -> Compare emotional concepts
    """
    start_time = datetime.now(timezone.utc)
    execution_start = time.time()

    try:
        # Log the incoming request
        await ctx.info(f"Calculating relatedness between '{concept1}' ({language1}) and '{concept2}' ({language2})")

        # 1. Parameter validation
        validation_result = await _validate_parameters(concept1, concept2, language1, language2, ctx)
        if validation_result:
            return validation_result  # Return error response if validation failed

        # 2. Check for identical concepts
        identical_result = await _check_identical_concepts(concept1, concept2, language1, language2, start_time)
        if identical_result:
            return identical_result

        # 3. Normalize concept terms
        normalized_concept1 = normalize_concept_text(concept1, language1)
        normalized_concept2 = normalize_concept_text(concept2, language2)

        if normalized_concept1 != concept1:
            await ctx.debug(f"Normalized concept1: '{concept1}' -> '{normalized_concept1}'")
        if normalized_concept2 != concept2:
            await ctx.debug(f"Normalized concept2: '{concept2}' -> '{normalized_concept2}'")

        # 4. Query ConceptNet relatedness API
        await ctx.info("Querying ConceptNet relatedness API...")
        async with ConceptNetClient() as client:
            try:
                response = await client.get_relatedness(
                    concept1=normalized_concept1,
                    concept2=normalized_concept2,
                    language1=language1,
                    language2=language2
                )
            except ConceptNotFoundError as e:
                return _create_concept_not_found_response(concept1, concept2, language1, language2, str(e), start_time)
            except ConceptNetAPIError as e:
                return _create_api_error_response(concept1, concept2, language1, language2, str(e), start_time)

        # 5. Return appropriate format based on verbose parameter
        execution_time_ms = int((time.time() - execution_start) * 1000)
        score = response.get('value', 0.0)

        if verbose:
            # Return detailed format with full metadata (existing behavior)
            enhanced_response = await _create_enhanced_response(
                response, concept1, concept2, normalized_concept1, normalized_concept2,
                language1, language2, start_time, execution_time_ms, ctx
            )
            description = enhanced_response.get("relatedness", {}).get("description", "unknown")
            await ctx.info(f"Relatedness calculated: {score:.3f} ({description}) (verbose format)")
            return enhanced_response
        else:
            # Return minimal format optimized for LLMs
            processor = ResponseProcessor()
            minimal_response = processor.create_minimal_relatedness_response(
                score, concept1, concept2
            )
            strength = minimal_response.get("strength", "unknown")
            await ctx.info(f"Relatedness calculated: {score:.3f} ({strength}) (minimal format)")
            return minimal_response

    except MCPValidationError as e:
        # Handle validation errors specifically
        return {
            "error": "validation_error",
            "message": f"Validation error for field '{e.field}': {e.value} (expected: {e.expected})",
            "field": e.field,
            "value": e.value,
            "expected": e.expected,
            "concepts": {
                "concept1": concept1,
                "concept2": concept2,
                "language1": language1,
                "language2": language2
            },
            "query_time": start_time.isoformat() + "Z"
        }
    except ConceptNotFoundError as e:
        return _create_concept_not_found_response(concept1, concept2, language1, language2, str(e), start_time)
    except ConceptNetAPIError as e:
        return _create_api_error_response(concept1, concept2, language1, language2, str(e), start_time)
    except Exception as e:
        logger.error(f"Unexpected error in concept_relatedness: {e}")
        return {
            "error": "unexpected_error",
            "message": f"An unexpected error occurred: {str(e)}",
            "concepts": {
                "concept1": concept1,
                "concept2": concept2,
                "language1": language1,
                "language2": language2
            },
            "query_time": start_time.isoformat() + "Z"
        }
```
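The `ResponseProcessor` that produces the minimal format is not shown in this reference. A minimal sketch of how `create_minimal_relatedness_response` could bucket a raw score into the `strength` labels seen above might look like this; the threshold values are assumptions chosen purely for illustration:

```python
from typing import Any, Dict


def create_minimal_relatedness_response(
    score: float, concept1: str, concept2: str
) -> Dict[str, Any]:
    """Illustrative sketch only: map a raw relatedness score to a strength label.

    The real ResponseProcessor is not shown in this reference; these
    cut-off values are assumptions for demonstration.
    """
    if score >= 0.7:
        strength = "strong"
    elif score >= 0.4:
        strength = "moderate"
    elif score >= 0.1:
        strength = "weak"
    else:
        strength = "unrelated"
    return {
        "concept1": concept1,
        "concept2": concept2,
        "relatedness": round(score, 2),
        "strength": strength,
    }
```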
- src/conceptnet_mcp/server.py:346-414 (registration): FastMCP tool registration decorator and wrapper function that delegates to the core concept_relatedness implementation with error handling.

```python
@mcp.tool(
    name="concept_relatedness",
    description="""
    Calculate precise semantic relatedness score between two concepts.

    This tool uses ConceptNet's semantic embeddings to calculate how
    related two concepts are to each other. The score ranges from 0.0
    (completely unrelated) to 1.0 (very strongly related).

    Features:
    - Precise quantitative similarity measurement
    - Cross-language comparison support
    - Detailed relationship analysis and interpretation
    - Confidence levels and percentile estimates
    - Format control: minimal (~96% smaller) vs verbose (full metadata)

    Format Options:
    - verbose=false (default): Returns minimal format optimized for LLM consumption
    - verbose=true: Returns comprehensive format with full ConceptNet metadata
    - Backward compatibility maintained with existing tools

    Analysis Components:
    - Numeric relatedness score (0.0-1.0)
    - Descriptive interpretation and confidence level
    - Likely connection explanations
    - Semantic distance and relationship strength
    - Cross-language analysis when applicable

    Use this when you need to:
    - Quantify how similar two concepts are
    - Compare concepts across different languages
    - Measure semantic distance between ideas
    - Validate conceptual relationships
    """,
    tags={"conceptnet", "relatedness", "similarity", "comparison", "quantitative"}
)
async def concept_relatedness_tool(
    concept1: str,
    concept2: str,
    ctx: Context,
    language1: str = "en",
    language2: str = "en",
    verbose: bool = False
) -> Dict[str, Any]:
    """
    MCP tool wrapper for concept relatedness calculation functionality.

    Args:
        concept1: First concept term for comparison (e.g., "dog", "happiness")
        concept2: Second concept term for comparison (e.g., "cat", "joy")
        language1: Language code for first concept (default: "en")
        language2: Language code for second concept (default: "en")
        verbose: If True, returns detailed format with full metadata (default: False)

    Returns:
        Relatedness score with strength category (minimal format) or
        comprehensive analysis with detailed metadata (verbose format)
    """
    try:
        return await concept_relatedness(
            concept1=concept1,
            concept2=concept2,
            ctx=ctx,
            language1=language1,
            language2=language2,
            verbose=verbose
        )
    except Exception as e:
        return await handle_server_error(e, "concept_relatedness")
```
- Explicit JSON input schema definition for the concept_relatedness tool used in the Cloudflare Workers deployment.

```python
"concept_relatedness": {
    "name": "concept_relatedness",
    "description": "Calculate semantic relatedness score between two concepts. Returns minimal format (~96% smaller) by default or verbose format with full metadata when verbose=true.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "concept1": {"type": "string", "description": "First concept for comparison"},
            "concept2": {"type": "string", "description": "Second concept for comparison"},
            "language1": {"type": "string", "default": "en", "description": "Language for first concept"},
            "language2": {"type": "string", "default": "en", "description": "Language for second concept"},
            "verbose": {
                "type": "boolean",
                "default": False,
                "description": "Return detailed format with full metadata (default: false for minimal format)"
            }
        },
        "required": ["concept1", "concept2"]
    }
}
```
- cloudflare-workers/src/main.py:342-362 (handler): Lightweight proxy handler for the Cloudflare Workers deployment that queries the ConceptNet /relatedness endpoint directly and adds deployment metadata.

```python
async def _concept_relatedness(
    self,
    concept1: str,
    concept2: str,
    language1: str = "en",
    language2: str = "en",
    verbose: bool = False  # accepted for schema parity; not used by this proxy
) -> Dict[str, Any]:
    """Implement concept relatedness tool."""
    # Build ConceptNet URIs for both concepts
    uri1 = f"/c/{language1}/{normalize_concept_text(concept1, language1)}"
    uri2 = f"/c/{language2}/{normalize_concept_text(concept2, language2)}"

    params = {
        "node1": uri1,
        "node2": uri2
    }

    response_data = await self.http_client.request("GET", "/relatedness", params=params)

    # Add Workers-specific metadata
    response_data['deployment'] = {
        'platform': 'cloudflare-workers-python',
        'comparison': f"{concept1} <-> {concept2}"
    }

    return response_data
```
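For context, the upstream call this proxy makes can be reproduced directly against the public ConceptNet API. The standalone sketch below assumes the api.conceptnet.io host and hand-built concept URIs rather than the project's `normalize_concept_text` helper and `http_client` wrapper:

```python
import asyncio

import httpx


async def relatedness(uri1: str, uri2: str) -> float:
    """Query ConceptNet's /relatedness endpoint directly.

    Returns the 'value' field, the same key the handlers above read.
    """
    async with httpx.AsyncClient(base_url="https://api.conceptnet.io") as client:
        resp = await client.get("/relatedness", params={"node1": uri1, "node2": uri2})
        resp.raise_for_status()
        return resp.json().get("value", 0.0)


# Example: compare /c/en/dog and /c/en/cat
print(asyncio.run(relatedness("/c/en/dog", "/c/en/cat")))
```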
- Local FastMCP tool registration within the tools module (supplementary to the server.py registration).

```python
mcp.tool(
    name="concept_relatedness",
    description="Calculate semantic relatedness score between two concepts using ConceptNet's embeddings",
    tags={"semantic", "relatedness", "comparison", "similarity"}
)(concept_relatedness)
```