cloudthinker-ai

Postgres MCP Pro Plus

analyze_schema_relationships

Analyze PostgreSQL schema relationships and dependencies with visual representation to understand database structure and connections.

Instructions

Analyze schema relationships and dependencies with visual representation

Input Schema


No arguments

Implementation Reference

  • MCP tool registration for 'analyze_schema_relationships'. Instantiates SchemaMappingTool with SQL driver, fetches user schemas, calls the tool's analyze_schema_relationships method, and formats the text response.
    @mcp.tool(description="Analyze schema relationships and dependencies with visual representation")
    async def analyze_schema_relationships() -> ResponseType:
        """Analyze inter-schema dependencies and relationships with visual representation data."""
        try:
            sql_driver = await get_sql_driver()
            mapping_tool = SchemaMappingTool(sql_driver)
    
            # Get user schemas
            user_schemas_query = """
                SELECT schema_name
                FROM information_schema.schemata
                WHERE schema_name NOT IN ('information_schema', 'pg_catalog', 'pg_toast')
                AND schema_name NOT LIKE 'pg_temp_%'
                AND schema_name NOT LIKE 'pg_toast_temp_%'
                ORDER BY schema_name
            """
    
            rows = await sql_driver.execute_query(user_schemas_query)
            user_schemas = [row.cells["schema_name"] for row in rows] if rows else []
    
            # Analyze schema relationships
            result = await mapping_tool.analyze_schema_relationships(user_schemas)
    
            return format_text_response(result)
    
        except Exception as e:
            logger.error(f"Error analyzing schema relationships: {e}")
            return format_error_response(str(e))
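The handler's schema-name extraction can be exercised in isolation. A minimal sketch, assuming a stand-in `Row` type exposing the same `cells` mapping that the SQL driver's rows appear to expose in the snippet above:

```python
from dataclasses import dataclass, field

# Stand-in for the driver's row type; the real `cells` mapping comes
# from the SQL driver used in the handler above.
@dataclass
class Row:
    cells: dict = field(default_factory=dict)

rows = [Row({"schema_name": "public"}), Row({"schema_name": "sales"})]

# Same extraction pattern as the handler, including the empty-result guard.
user_schemas = [row.cells["schema_name"] for row in rows] if rows else []
print(user_schemas)  # ['public', 'sales']
```

The guard matters because `execute_query` may return `None` rather than an empty list when nothing matches.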
  • Core handler in SchemaMappingTool class that performs the schema relationship analysis: resets state, initializes SchemaNode instances, analyzes schemas and tables via helpers, builds mappings, generates results, and formats as text.
    async def analyze_schema_relationships(self, schemas: list[str]) -> str:
        """Analyze relationships between schemas and generate mapping data."""
        try:
            # Reset state
            self.schema_nodes = {}
            self.table_nodes = {}
            self.cross_schema_relationships = []
            self.intra_schema_relationships = []
    
            # Initialize schema nodes
            for schema in schemas:
                self.schema_nodes[schema] = SchemaNode(name=schema)
    
            # Analyze each schema
            for schema in schemas:
                await self._analyze_schema(schema)
    
            # Build relationship mappings
            await self._build_relationship_mappings()
    
            # Generate analysis results
            result = await self._generate_analysis_results()
            return self._format_as_text(result)
    
        except Exception as e:
            logger.error(f"Error analyzing schema relationships: {e}")
            return f"Error analyzing schema relationships: {e}"
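The `_analyze_schema` and `_build_relationship_mappings` helpers are not shown in this listing. A hedged sketch of how foreign-key edges might be discovered and split into the cross-schema and intra-schema lists the handler resets: the catalog query uses the standard `pg_constraint` table (`contype = 'f'` marks foreign keys), and `classify` is a hypothetical stand-in for the mapping step.

```python
# Standard PostgreSQL catalog query for foreign-key edges: join the
# constraint to its source and target tables and their schemas.
FK_QUERY = """
    SELECT
        n1.nspname AS source_schema,
        c1.relname AS source_table,
        n2.nspname AS target_schema,
        c2.relname AS target_table
    FROM pg_constraint con
    JOIN pg_class c1 ON c1.oid = con.conrelid
    JOIN pg_namespace n1 ON n1.oid = c1.relnamespace
    JOIN pg_class c2 ON c2.oid = con.confrelid
    JOIN pg_namespace n2 ON n2.oid = c2.relnamespace
    WHERE con.contype = 'f'
"""

def classify(fk_rows):
    """Split FK edges into cross-schema and intra-schema lists."""
    cross, intra = [], []
    for src_schema, src_table, dst_schema, dst_table in fk_rows:
        edge = (f"{src_schema}.{src_table}", f"{dst_schema}.{dst_table}")
        (intra if src_schema == dst_schema else cross).append(edge)
    return cross, intra

cross, intra = classify([
    ("public", "orders", "public", "customers"),
    ("sales", "invoices", "public", "customers"),
])
```

An edge whose source and target schemas differ lands in `cross`, which is what drives the cross-schema dependency scoring below.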
  • Helper method that aggregates analysis from schema dependencies, table dependencies, relationship patterns, visual data, and recommendations into a comprehensive result dictionary.
    async def _generate_analysis_results(self) -> dict[str, Any]:
        """Generate comprehensive analysis results."""
        try:
            # Schema analysis
            schema_analysis = self._analyze_schema_dependencies()
    
            # Table analysis
            table_analysis = self._analyze_table_dependencies()
    
            # Relationship patterns
            relationship_patterns = self._analyze_relationship_patterns()
    
            # Visual representation data
            visual_data = self._generate_visual_representation()
    
            # Recommendations
            recommendations = self._generate_recommendations()
    
            return {
                "schema_analysis": schema_analysis,
                "table_analysis": table_analysis,
                "relationship_patterns": relationship_patterns,
                "visual_representation": visual_data,
                "recommendations": recommendations,
                "summary": {
                    "total_schemas": len(self.schema_nodes),
                    "total_tables": len(self.table_nodes),
                    "cross_schema_relationships": len(self.cross_schema_relationships),
                    "intra_schema_relationships": len(self.intra_schema_relationships),
                },
            }
    
        except Exception as e:
            logger.error(f"Error generating analysis results: {e}")
            raise
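The `summary` section of the result dictionary is plain counting over the tool's state. A minimal sketch with stub containers standing in for the populated node and relationship collections:

```python
# Stub state in place of the tool's populated collections.
schema_nodes = {"public": None, "sales": None}
table_nodes = {"public.orders": None, "public.customers": None,
               "sales.invoices": None}
cross_schema_relationships = [("sales.invoices", "public.customers")]
intra_schema_relationships = [("public.orders", "public.customers")]

# Same shape as the "summary" key assembled above.
summary = {
    "total_schemas": len(schema_nodes),
    "total_tables": len(table_nodes),
    "cross_schema_relationships": len(cross_schema_relationships),
    "intra_schema_relationships": len(intra_schema_relationships),
}
```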
  • Dataclass defining SchemaNode with fields and properties for dependency and isolation scoring, used throughout the analysis.
    @dataclass
    class SchemaNode:
        """Represents a schema node in the dependency graph."""
    
        name: str
        table_count: int = 0
        total_size_bytes: int = 0
        total_rows: int = 0
        outgoing_references: set[str] = field(default_factory=set)
        incoming_references: set[str] = field(default_factory=set)
        self_references: int = 0
    
        @property
        def dependency_score(self) -> float:
            """Calculate dependency score based on incoming and outgoing references."""
            return len(self.incoming_references) * 2 + len(self.outgoing_references)
    
        @property
        def isolation_score(self) -> float:
            """Calculate isolation score (lower is more isolated)."""
            return len(self.incoming_references) + len(self.outgoing_references)
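A worked example of the two scoring properties, trimmed to just the fields they read (the full dataclass above also tracks table counts, sizes, and row counts):

```python
from dataclasses import dataclass, field

@dataclass
class SchemaNode:
    name: str
    outgoing_references: set = field(default_factory=set)
    incoming_references: set = field(default_factory=set)

    @property
    def dependency_score(self) -> float:
        # Incoming references weigh double: schemas that many others
        # depend on score higher.
        return len(self.incoming_references) * 2 + len(self.outgoing_references)

    @property
    def isolation_score(self) -> float:
        return len(self.incoming_references) + len(self.outgoing_references)

node = SchemaNode(name="public",
                  incoming_references={"sales", "billing"},
                  outgoing_references={"audit"})
print(node.dependency_score)  # 5  (2 referencing schemas * 2 + 1 outgoing)
print(node.isolation_score)   # 3
```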
  • Dataclass defining TableNode with fields and properties for connection count, hub status, and isolation, used in table-level analysis.
    @dataclass
    class TableNode:
        """Represents a table node in the dependency graph."""
    
        schema: str
        name: str
        qualified_name: str
        size_bytes: int = 0
        row_count: int = 0
        outgoing_fks: list[str] = field(default_factory=list)
        incoming_fks: list[str] = field(default_factory=list)
    
        @property
        def connection_count(self) -> int:
            """Total number of connections (incoming + outgoing)."""
            return len(self.outgoing_fks) + len(self.incoming_fks)
    
        @property
        def is_hub(self) -> bool:
            """Check if table is a hub (has many incoming references)."""
            return len(self.incoming_fks) >= 3
    
        @property
        def is_isolated(self) -> bool:
            """Check if table has no foreign key relationships."""
            return len(self.outgoing_fks) == 0 and len(self.incoming_fks) == 0
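Similarly, the table-level flags can be exercised directly; this sketch keeps only the FK lists that the three properties read:

```python
from dataclasses import dataclass, field

@dataclass
class TableNode:
    schema: str
    name: str
    qualified_name: str
    outgoing_fks: list = field(default_factory=list)
    incoming_fks: list = field(default_factory=list)

    @property
    def connection_count(self) -> int:
        return len(self.outgoing_fks) + len(self.incoming_fks)

    @property
    def is_hub(self) -> bool:
        # Three or more incoming references marks a hub table.
        return len(self.incoming_fks) >= 3

    @property
    def is_isolated(self) -> bool:
        return len(self.outgoing_fks) == 0 and len(self.incoming_fks) == 0

customers = TableNode("public", "customers", "public.customers",
                      incoming_fks=["orders", "invoices", "tickets"])
staging = TableNode("public", "staging_raw", "public.staging_raw")

print(customers.is_hub)     # True  (3 incoming FKs meets the threshold)
print(staging.is_isolated)  # True  (no FK relationships at all)
```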
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'visual representation', hinting at output format, but doesn't specify what that entails (e.g., graph, diagram, text), whether the tool is read-only or has side effects, or any performance or permission requirements. For an analysis tool of this complexity, that leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function. It's front-loaded with the core purpose and avoids unnecessary words, though briefly explaining what 'visual representation' means would improve clarity without sacrificing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 0 parameters and no output schema, the description is minimally complete but lacks depth. It hints at output ('visual representation') but doesn't detail what that means, and with no annotations, it fails to cover behavioral aspects like safety or performance. For an analysis tool, this leaves the agent with insufficient context to fully understand its use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has no parameters, so there are no undocumented inputs for the description to compensate for. The description adds nothing beyond the (empty) schema, but with no parameters a baseline score of 4 is appropriate: it introduces no confusion and matches the empty input structure.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'Analyze schema relationships and dependencies with visual representation', specifying the verb 'analyze' and the resource 'schema relationships and dependencies'. It distinguishes from siblings like 'list_schemas' or 'get_object_details' by focusing on analysis rather than listing or retrieval, though it doesn't explicitly differentiate from other analysis tools like 'analyze_db_health'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context, or exclusions, and with siblings like 'list_schemas' and 'analyze_db_health', there's no indication of when this specific analysis tool is preferred over others for understanding schema structures.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
