Semantic D1 MCP

Published by semanticintent (Official)

analyze_database_schema

Analyze Cloudflare D1 database schema structure, tables, columns, indexes, and relationships to understand database design and optionally include sample data for context.

Instructions

Analyze D1 database schema structure, tables, columns, indexes, and relationships with optional sample data

Input Schema

  • environment (string, required): Database environment to analyze. One of: development, staging, production.
  • includeSamples (boolean, optional, default: true): Include sample data from tables (max 5 rows per table).
  • maxSampleRows (number, optional, default: 5): Maximum number of sample rows per table.
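Put together, a typical set of call arguments looks like the following (a sketch; the field names and allowed values come from the input schema on this page, while the concrete values are invented for illustration):

```typescript
// Shape of the arguments accepted by analyze_database_schema, per the schema above.
interface AnalyzeSchemaArgs {
  environment: 'development' | 'staging' | 'production'; // required
  includeSamples?: boolean; // optional, defaults to true
  maxSampleRows?: number;   // optional, defaults to 5
}

const args: AnalyzeSchemaArgs = {
  environment: 'staging',
  includeSamples: true,
  maxSampleRows: 3,
};

console.log(JSON.stringify(args));
```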

Implementation Reference

  • Core handler logic for analyzing database schema: manages caching based on environment, fetches schema from repository if not cached, and prepares the response.
    async execute(request: AnalyzeSchemaRequest): Promise<SchemaAnalysisResponse> {
    	const environment = request.environment;
    	const includeSamples = request.includeSamples ?? true;
    	const maxSampleRows = request.maxSampleRows ?? 5;
    
    	// Observable: Cache key based on environment
    	const cacheKey = `schema:${environment}`;
    
    	// Check cache first (avoid repeated API calls)
    	const cachedSchema = await this.cache.get<DatabaseSchema>(cacheKey);
    	if (cachedSchema) {
    		return this.formatResponse(cachedSchema, includeSamples, maxSampleRows);
    	}
    
    	// Fetch schema from repository
    	const databaseId = this.databaseConfig.getDatabaseId(environment);
    	const schema = await this.repository.fetchDatabaseSchema(databaseId);
    
    	// Cache for future requests (10-minute TTL)
    	await this.cache.set(cacheKey, schema, AnalyzeSchemaUseCase.CACHE_TTL_SECONDS);
    
    	// Format and return response
    	return this.formatResponse(schema, includeSamples, maxSampleRows);
    }
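The handler above follows a cache-aside pattern: it assumes a cache with async `get`/`set` methods and TTL-based expiry. A minimal in-memory sketch of such a cache is shown below; the class name `TtlCache` and its internals are illustrative, not taken from the project:

```typescript
// Minimal in-memory TTL cache matching the get/set calls used in execute().
class TtlCache {
  private store = new Map<string, { value: unknown; expiresAt: number }>();

  async get<T>(key: string): Promise<T | undefined> {
    const entry = this.store.get(key);
    if (!entry || Date.now() >= entry.expiresAt) {
      this.store.delete(key); // drop stale entries lazily on read
      return undefined;
    }
    return entry.value as T;
  }

  async set(key: string, value: unknown, ttlSeconds: number): Promise<void> {
    this.store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  }
}
```

Because the cache key is derived only from the environment (`schema:${environment}`), all callers hitting the same environment within the 10-minute TTL share one cached schema.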
  • MCP protocol handler for 'analyze_database_schema' tool call: parses arguments, invokes AnalyzeSchemaUseCase, and formats MCP response.
    private async handleAnalyzeSchema(args: unknown) {
    	const { environment, includeSamples, maxSampleRows } = args as {
    		environment: string;
    		includeSamples?: boolean;
    		maxSampleRows?: number;
    	};
    
    	const result = await this.analyzeSchemaUseCase.execute({
    		environment: parseEnvironment(environment),
    		includeSamples,
    		maxSampleRows,
    	});
    
    	return {
    		content: [
    			{
    				type: 'text',
    				text: JSON.stringify(result, null, 2),
    			},
    		],
    	};
    }
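Over the wire, a client reaches this handler through a standard MCP `tools/call` JSON-RPC request, and the analysis comes back as a JSON string inside a text content block. A sketch of the request envelope and of unpacking the response (the `unpackResult` helper is hypothetical):

```typescript
// Sketch of the JSON-RPC 2.0 envelope an MCP client sends for this tool.
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'analyze_database_schema',
    arguments: { environment: 'development', includeSamples: false },
  },
};

// The handler serializes the result with JSON.stringify into content[0].text,
// so a client parses that text block to recover the structured analysis.
function unpackResult(response: { content: { type: string; text: string }[] }) {
  const block = response.content.find((c) => c.type === 'text');
  return block ? JSON.parse(block.text) : undefined;
}
```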
  • Input schema definition for the 'analyze_database_schema' tool, specifying parameters like environment, includeSamples, and maxSampleRows.
    inputSchema: {
    	type: 'object',
    	properties: {
    		environment: {
    			type: 'string',
    			enum: ['development', 'staging', 'production'],
    			description: 'Database environment to analyze',
    		},
    		includeSamples: {
    			type: 'boolean',
    			default: true,
    			description: 'Include sample data from tables (max 5 rows per table)',
    		},
    		maxSampleRows: {
    			type: 'number',
    			default: 5,
    			description: 'Maximum number of sample rows per table',
    		},
    	},
    	required: ['environment'],
    },
  • Registration of the 'analyze_database_schema' tool in the MCP ListTools response.
    {
    	name: 'analyze_database_schema',
    	description:
    		'Analyze D1 database schema structure, tables, columns, indexes, and relationships with optional sample data',
    	inputSchema: {
    		type: 'object',
    		properties: {
    			environment: {
    				type: 'string',
    				enum: ['development', 'staging', 'production'],
    				description: 'Database environment to analyze',
    			},
    			includeSamples: {
    				type: 'boolean',
    				default: true,
    				description: 'Include sample data from tables (max 5 rows per table)',
    			},
    			maxSampleRows: {
    				type: 'number',
    				default: 5,
    				description: 'Maximum number of sample rows per table',
    			},
    		},
    		required: ['environment'],
    	},
    },
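The `parseEnvironment` function called in `handleAnalyzeSchema` is not shown on this page. One plausible sketch validates the raw string against the same enum the schema declares, alongside a helper that applies the schema's documented defaults (both helpers are hypothetical reconstructions, not the project's actual code):

```typescript
type Environment = 'development' | 'staging' | 'production';

const ENVIRONMENTS: readonly Environment[] = ['development', 'staging', 'production'];

// Hypothetical counterpart to the parseEnvironment call in handleAnalyzeSchema.
function parseEnvironment(value: string): Environment {
  if ((ENVIRONMENTS as readonly string[]).includes(value)) {
    return value as Environment;
  }
  throw new Error(`Unknown environment: ${value}. Expected one of: ${ENVIRONMENTS.join(', ')}`);
}

// Apply the schema's documented defaults (includeSamples: true, maxSampleRows: 5).
function withDefaults(args: { includeSamples?: boolean; maxSampleRows?: number }) {
  return {
    includeSamples: args.includeSamples ?? true,
    maxSampleRows: args.maxSampleRows ?? 5,
  };
}
```

Rejecting unknown environments early keeps an invalid string from ever being mapped to a database ID downstream.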
  • Helper function to format the raw DatabaseSchema into the detailed analysis response, including optional sample data fetching.
    private async formatResponse(
    	schema: DatabaseSchema,
    	includeSamples: boolean,
    	maxSampleRows: number,
    ): Promise<SchemaAnalysisResponse> {
    	const tables: TableAnalysis[] = [];
    
    	for (const table of schema.tables) {
    		const tableAnalysis: TableAnalysis = {
    			name: table.name,
    			type: table.type,
    			columnCount: table.columns.length,
    			columns: table.columns.map((col) => ({
    				name: col.name,
    				type: col.type,
    				nullable: col.isNullable,
    				isPrimaryKey: col.isPrimaryKey,
    				defaultValue: col.defaultValue,
    			})),
    			indexes: table.indexes.map((idx) => ({
    				name: idx.name,
    				columns: [...idx.columns],
    				isUnique: idx.isUnique,
    				isPrimaryKey: idx.isPrimaryKey,
    			})),
    			foreignKeys: table.foreignKeys.map((fk) => ({
    				column: fk.column,
    				referencedTable: fk.referencesTable,
    				referencedColumn: fk.referencesColumn,
    				onDelete: fk.onDelete,
    				onUpdate: fk.onUpdate,
    			})),
    		};
    
    		// Fetch sample data if requested
    		if (includeSamples) {
    			const databaseId = this.databaseConfig.getDatabaseId(schema.environment);
    			const samples = await this.fetchSampleData(databaseId, table.name, maxSampleRows);
    			tableAnalysis.samples = samples;
    		}
    
    		tables.push(tableAnalysis);
    	}
    
    	return {
    		databaseName: schema.name,
    		environment: schema.environment,
    		tableCount: schema.tables.length,
    		tables,
    		fetchedAt: schema.fetchedAt,
    	};
    }
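To make the output shape concrete, here is a sketch of what `formatResponse` might produce for a one-table schema. The field names follow the mapping above; the table, column, and sample values are invented for illustration:

```typescript
// Illustrative SchemaAnalysisResponse for a single-table database (values invented).
const exampleResponse = {
  databaseName: 'app-db',
  environment: 'development',
  tableCount: 1,
  tables: [
    {
      name: 'users',
      type: 'table',
      columnCount: 2,
      columns: [
        { name: 'id', type: 'INTEGER', nullable: false, isPrimaryKey: true, defaultValue: null },
        { name: 'email', type: 'TEXT', nullable: false, isPrimaryKey: false, defaultValue: null },
      ],
      indexes: [
        { name: 'idx_users_email', columns: ['email'], isUnique: true, isPrimaryKey: false },
      ],
      foreignKeys: [],
      samples: [{ id: 1, email: 'a@example.com' }], // present only when includeSamples is true
    },
  ],
  fetchedAt: '2024-01-01T00:00:00.000Z',
};
```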
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'optional sample data' and 'max 5 rows per table', which provides some behavioral context, but it doesn't cover important aspects such as whether the operation is read-only, performance implications, authentication requirements, rate limits, or what the analysis output format looks like.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and includes the optional sample data feature. Every word serves a purpose with no wasted text or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a database analysis tool with no annotations and no output schema, the description is insufficient. It doesn't explain what the analysis output contains, how relationships are determined, what 'analyze' actually means operationally, or how this differs from the sibling tools. The context signals show this is a 3-parameter tool with 100% schema coverage, but the description doesn't compensate for the lack of behavioral and output information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds minimal value beyond the schema: it mentions 'optional sample data', which relates to the 'includeSamples' parameter, but it provides no additional semantic context beyond what is already in the parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: analyzing database schema structure including tables, columns, indexes, and relationships, with optional sample data. It uses specific verbs ('analyze') and resources ('D1 database schema structure'), but doesn't explicitly differentiate from sibling tools like 'get_table_relationships' or 'validate_database_schema'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the sibling tools. It mentions 'optional sample data' but doesn't explain when to include samples versus when to use alternatives like 'suggest_schema_optimizations' or 'validate_database_schema' for different analysis needs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/semanticintent/semantic-d1-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.