
consult_architecture

Get expert software architecture guidance for system design patterns, scalability strategies, and technical decision-making. Submit architectural questions requiring deep technical expertise.

Instructions

Consult GLM-4.6 for expert software architecture guidance, system design patterns, scalability strategies, and technical decision-making. Use this for high-level architectural questions requiring deep technical expertise.

Input Schema

| Name    | Required | Description                                                          | Default |
|---------|----------|----------------------------------------------------------------------|---------|
| query   | Yes      | The architectural question or problem requiring expert consultation  |         |
| context | No       | Optional additional context about the system, requirements, or constraints |   |
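Per the schema above, a call to this tool carries a required `query` string and an optional `context` string. The arguments might look like the following (the values are illustrative, not from the source):

```json
{
  "name": "consult_architecture",
  "arguments": {
    "query": "Should we move session storage from Postgres to Redis?",
    "context": "Node.js monolith, ~50k daily active users, single-region deployment"
  }
}
```

Omitting `context` is valid; only `query` is listed in the schema's `required` array.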

Implementation Reference

  • MCP server request handler for the 'consult_architecture' tool. Extracts input parameters and delegates execution to GLMClient.consultArchitecture, returning the response as MCP content.
    case 'consult_architecture': {
      const { query, context } = args as { query: string; context?: string };
      const response = await glmClient.consultArchitecture(query, context);
      return {
        content: [
          {
            type: 'text',
            text: response,
          },
        ],
      };
    }
  • src/index.ts:25-42 (registration)
    Registration of the 'consult_architecture' tool in the MCP tools list, including name, description, and input schema definition.
    {
      name: 'consult_architecture',
      description:
        'Consult GLM-4.6 for expert software architecture guidance, system design patterns, scalability strategies, and technical decision-making. Use this for high-level architectural questions requiring deep technical expertise.',
      inputSchema: {
        type: 'object',
        properties: {
          query: {
            type: 'string',
            description: 'The architectural question or problem requiring expert consultation',
          },
          context: {
            type: 'string',
            description: 'Optional additional context about the system, requirements, or constraints',
          },
        },
        required: ['query'],
      },
    },
  • Input schema definition for the 'consult_architecture' tool, specifying query (required) and optional context.
    inputSchema: {
      type: 'object',
      properties: {
        query: {
          type: 'string',
          description: 'The architectural question or problem requiring expert consultation',
        },
        context: {
          type: 'string',
          description: 'Optional additional context about the system, requirements, or constraints',
        },
      },
      required: ['query'],
    },
  • Core handler implementation in GLMClient class. Constructs a specialized system prompt for architecture consultation, builds GLM chat request with user query/context, calls the GLM API, and returns the generated response.
    async consultArchitecture(query: string, context?: string): Promise<string> {
      const systemPrompt = `You are an elite software architecture consultant specializing in enterprise-grade system design, scalability patterns, security architecture, and technical decision-making.

    Your expertise includes:
    - Distributed systems architecture and microservices design
    - Cloud-native patterns and containerization strategies
    - Database architecture and data modeling
    - API design (REST, GraphQL, gRPC)
    - Security architecture and threat modeling
    - Performance optimization and scalability
    - DevOps and CI/CD pipeline architecture
    - Modern frontend and backend frameworks
    - System integration patterns

    Provide concise, actionable architectural guidance with enterprise-grade best practices. Focus on technical accuracy, scalability, maintainability, and security.`;

      const messages: GLMMessage[] = [{ role: 'system', content: systemPrompt }];

      if (context) {
        messages.push({
          role: 'user',
          content: `Context:\n${context}\n\nArchitectural Query:\n${query}`,
        });
      } else {
        messages.push({ role: 'user', content: query });
      }

      const request: GLMRequest = {
        model: this.model,
        messages,
        temperature: 0.7,
        top_p: 0.9,
        max_tokens: 4096,
        stream: false,
      };

      try {
        const response = await this.client.post<GLMResponse>('/chat/completions', request);
        if (!response.data.choices || response.data.choices.length === 0) {
          throw new Error('GLM-4.6 returned empty response');
        }
        return response.data.choices[0].message.content;
      } catch (error) {
        if (axios.isAxiosError(error)) {
          const status = error.response?.status;
          const message = error.response?.data?.error?.message || error.message;
          throw new Error(`GLM-4.6 API Error (${status}): ${message}`);
        }
        throw error;
      }
    }
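The handler folds the optional context into a single user message: when context is supplied, it is prepended to the query under labelled headings; otherwise the query is sent verbatim. A minimal sketch of that step, extracted as a standalone function (the name `buildUserMessage` is illustrative, not from the source):

```typescript
// Mirrors the branching in consultArchitecture: context, when present,
// is prefixed to the query with "Context:" / "Architectural Query:" labels.
function buildUserMessage(query: string, context?: string): string {
  if (context) {
    return `Context:\n${context}\n\nArchitectural Query:\n${query}`;
  }
  return query;
}

// Usage: the result becomes the content of the single 'user' message
// appended after the system prompt.
console.log(buildUserMessage("How should I shard a Postgres cluster?"));
```

Keeping context and query in one user message (rather than two) means the model always sees them together in a single turn, at the cost of relying on the textual labels to separate them.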


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/bobvasic/glm-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.