Glama / taylorleese

mcp-toolz

ask_deepseek

Get AI-powered answers or second opinions on specific context entries using DeepSeek's analysis capabilities.

Instructions

Ask DeepSeek a question about a context entry, or get a general second opinion

Input Schema

Name        Required  Description                                                                                           Default
context_id  Yes       Context ID to ask about                                                                               —
question    No        Optional specific question to ask about the context. If not provided, gets a general second opinion.  —
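The schema permits two call shapes: a general second opinion (only `context_id`) or a targeted question. A minimal sketch of both payloads, with an illustrative validity check (the context ID and question text below are hypothetical):

```python
# Illustrative request payloads for ask_deepseek; "ctx-123" is a
# hypothetical context ID returned by an earlier save.
general_opinion_args = {"context_id": "ctx-123"}
specific_question_args = {
    "context_id": "ctx-123",
    "question": "Are there edge cases this retry logic misses?",
}

def satisfies_schema(args: dict) -> bool:
    """Minimal check mirroring the schema: context_id is required;
    question is optional; both must be strings when present."""
    if not isinstance(args.get("context_id"), str):
        return False
    if "question" in args and not isinstance(args["question"], str):
        return False
    return True

assert satisfies_schema(general_opinion_args)
assert satisfies_schema(specific_question_args)
assert not satisfies_schema({})  # missing the required context_id
```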

Implementation Reference

  • MCP tool handler for 'ask_deepseek': fetches context, calls DeepSeekClient.get_second_opinion, saves response if generic opinion, returns formatted result or error.
    if name == "ask_deepseek":
        context_id = arguments["context_id"]
        question = arguments.get("question")
        context = self.storage.get_context(context_id)
        if not context:
            return [TextContent(type="text", text=f"Context {context_id} not found")]
    
        try:
            deepseek_client = DeepSeekClient()
            response = deepseek_client.get_second_opinion(context, question)
    
            # Only save to database if it's a generic second opinion (no custom question)
            if not question:
                self.storage.update_deepseek_response(context_id, response)
    
            header = "DeepSeek's Answer:" if question else "DeepSeek's Opinion:"
            return [TextContent(type="text", text=f"{header}\n\n{response}")]
        except ValueError as e:
            return [TextContent(type="text", text=f"Error: {e}")]
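The `except ValueError` branch implies that constructing `DeepSeekClient()` can fail with a configuration error. A minimal sketch of what that constructor might look like, assuming (not confirmed by the source) that the key comes from a `DEEPSEEK_API_KEY` environment variable and that the client wraps the OpenAI SDK pointed at DeepSeek's OpenAI-compatible endpoint:

```python
import os

class DeepSeekClient:
    """Hypothetical sketch of the client constructor.

    Assumptions: the API key is read from DEEPSEEK_API_KEY, the model
    defaults to "deepseek-chat", and the underlying client is the
    OpenAI SDK with DeepSeek's OpenAI-compatible base URL.
    """

    def __init__(self, model: str = "deepseek-chat"):
        api_key = os.environ.get("DEEPSEEK_API_KEY")
        if not api_key:
            # This ValueError is what the tool handler's `except ValueError`
            # branch turns into an error message for the user.
            raise ValueError("DEEPSEEK_API_KEY environment variable is not set")
        from openai import OpenAI  # imported lazily; OpenAI-compatible SDK
        self.client = OpenAI(api_key=api_key, base_url="https://api.deepseek.com")
        self.model = model
```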
  • Input schema for the 'ask_deepseek' tool: requires 'context_id', optional 'question' for specific query or general second opinion.
    Tool(
        name="ask_deepseek",
        description="Ask DeepSeek a question about a context entry, or get a general second opinion",
        inputSchema={
            "type": "object",
            "properties": {
                "context_id": {"type": "string", "description": "Context ID to ask about"},
                "question": {
                    "type": "string",
                    "description": (
                        "Optional specific question to ask about the context. If not provided, gets a general second opinion."
                    ),
                },
            },
            "required": ["context_id"],
        },
    ),
  • DeepSeekClient.get_second_opinion and _format_context_for_deepseek: formats context appropriately, constructs system/user prompts, calls DeepSeek API via OpenAI-compatible client to get response.
        def get_second_opinion(self, context: ContextEntry, question: str | None = None) -> str:
            """Get DeepSeek's second opinion on a context, or answer a specific question.
    
            Args:
                context: The context entry to analyze
                question: Optional specific question to ask. If None, provides general second opinion.
            """
            if question:
                # Custom question mode
                system_prompt = """You are a senior software engineering consultant answering questions about code, \
    architecture decisions, and implementation plans.
    
    Provide clear, actionable answers based on the context provided."""
                user_content = self._format_context_for_deepseek(context, question)
            else:
                # Generic second opinion mode
                system_prompt = """You are a senior software engineering consultant providing second opinions on code, \
    architecture decisions, and implementation plans.
    
    Your role is to:
    - Provide constructive, balanced feedback
    - Highlight both strengths and potential issues
    - Suggest alternatives when appropriate
    - Point out edge cases or security concerns
    - Be concise but thorough
    
    Format your response clearly with sections as needed."""
                user_content = self._format_context_for_deepseek(context)
    
            response = self.client.chat.completions.create(
                model=self.model,
                messages=[
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": user_content},
                ],
            )
    
            return response.choices[0].message.content or ""
    
        def _format_context_for_deepseek(self, context: ContextEntry, question: str | None = None) -> str:
            """Format a context entry for DeepSeek consumption.
    
            Args:
                context: The context entry to format
                question: Optional specific question to append
            """
            parts = [
                f"# Context: {context.title}",
                f"\n**Type:** {context.type}",
                f"**Timestamp:** {context.timestamp.isoformat()}",
            ]
    
            if context.tags:
                parts.append(f"**Tags:** {', '.join(context.tags)}")
    
            parts.append("\n## Content\n")
    
            # Add specific content based on type
            if context.content.messages:
                parts.append("### Conversation\n")
                for msg in context.content.messages:
                    parts.append(msg)
    
            if context.content.code:
                parts.append("### Code\n")
                for file_path, code in context.content.code.items():
                    parts.append(f"**File:** `{file_path}`\n```\n{code}\n```\n")
    
            if context.content.suggestions:
                parts.append(f"### Suggestion\n{context.content.suggestions}\n")
    
            if context.content.errors:
                parts.append(f"### Error/Debug Info\n```\n{context.content.errors}\n```\n")
    
            # Add question or default request
            if question:
                parts.append(f"\n---\n**Question:** {question}")
            else:
                parts.append("\n---\nPlease provide a second opinion on the above context.")
    
            return "\n".join(parts)
  • Registration of the tool-call handler on the MCP Server instance; the 'ask_deepseek' branch shown above runs inside this handler. Note that `server.call_tool()` is a decorator factory applied manually here rather than with `@` syntax.
    self.server.call_tool()(self.call_tool)
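Calling the decorator factory and then immediately applying its result, as above, is equivalent to decorating a function with `@server.call_tool()`. A toy registry illustrates the pattern (the `ToyServer` class is illustrative, not the mcp SDK's internals):

```python
# Toy decorator-factory registry mimicking the registration pattern.
class ToyServer:
    def __init__(self):
        self.handlers = {}

    def call_tool(self):
        # Factory: returns the decorator that records the handler.
        def decorator(fn):
            self.handlers["call_tool"] = fn
            return fn
        return decorator

server = ToyServer()

def handle(name, arguments):
    return f"handled {name}"

# Manual application, as in the source line above:
server.call_tool()(handle)

assert server.handlers["call_tool"] is handle
```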

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/taylorleese/mcp-toolz'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.