ask_stylus

Get answers to Stylus development questions, debug code issues, or understand concepts, with version-specific guidance for smart contract creation.

Instructions

Ask questions about Stylus development, get concept explanations, or debug code issues. Supports version-specific guidance.

Input Schema

Name           | Required | Description                                              | Default
question       | Yes      | The question to answer                                   |
code_context   | No       | Optional code snippet for context (e.g., for debugging)  |
question_type  | No       | Type of question for optimized response                  | general
target_version | No       | Target stylus-sdk version for version-specific guidance  | 0.10.0
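
For illustration, a request that exercises the optional fields might look like this (values are hypothetical examples, not taken from the source):

    {
      "question": "How do I define contract storage with sol_storage! in Stylus?",
      "code_context": "sol_storage! { #[entrypoint] pub struct Counter { uint256 count; } }",
      "question_type": "howto",
      "target_version": "0.10.0"
    }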

Implementation Reference

  • The `execute` method of the `AskStylusTool` class handles the logic for answering questions about Stylus development, including input validation, context retrieval, prompt generation, LLM interaction, and response parsing.
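    # Excerpt from the AskStylusTool class; assumes `from typing import Optional`
    # and that the helpers get_main_version() and get_system_prompt() used below
    # are defined elsewhere in the project.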
    def execute(
        self,
        question: str,
        code_context: Optional[str] = None,
        question_type: str = "general",
        target_version: Optional[str] = None,
        **kwargs,
    ) -> dict:
        """
        Answer a question about Stylus development.
    
        Args:
            question: The question to answer.
            code_context: Optional code snippet for context (e.g., for debugging).
            question_type: Type of question (concept, debugging, comparison, howto, general).
            target_version: Target stylus-sdk version for version-specific guidance.
    
        Returns:
            Dict with answer, code_examples, references, follow_up_questions.
        """
        # Validate input
        if not question or not question.strip():
            return {"error": "Question is required and cannot be empty"}
    
        question = question.strip()
    
        # Check if question is Stylus-related
        stylus_keywords = [
            "stylus",
            "rust",
            "contract",
            "arbitrum",
            "storage",
            "entrypoint",
            "sol_storage",
            "erc",
            "token",
            "deploy",
            "wasm",
            "sdk",
        ]
        is_stylus_related = any(kw in question.lower() for kw in stylus_keywords)
    
        if not is_stylus_related and not code_context:
            return {
                "answer": (
                    "This question doesn't appear to be"
                    " related to Stylus or Arbitrum"
                    " development. I'm specialized in"
                    " helping with Stylus smart contract"
                    " development. Please ask about"
                    " Stylus concepts, code,"
                    " or debugging."
                ),
                "code_examples": [],
                "references": [],
                "follow_up_questions": [
                    "What is Stylus and how does it work?",
                    "How do I create my first Stylus contract?",
                    "What are the benefits of Stylus over Solidity?",
                ],
            }
    
        # Default to main version if not specified
        if not target_version:
            target_version = get_main_version()
    
        try:
            # Retrieve relevant context with version-aware scoring
            context_result = self.context_tool.execute(
                query=question,
                n_results=5,
                content_type="all",
                rerank=True,
                category_boosts=None,  # Use default Stylus-focused boosts
                target_version=target_version,
            )
    
            references = []
            context_text = ""
    
            if "contexts" in context_result:
                for ctx in context_result["contexts"]:
                    references.append(
                        {
                            "title": ctx["metadata"].get("title", "Reference"),
                            "source": ctx["source"],
                            "relevance": f"Relevance score: {ctx['relevance_score']:.2f}",
                        }
                    )
                    context_text += (
                        f"\n--- Reference: {ctx['source']} ---\n{ctx['content'][:1200]}\n"
                    )
    
            # Build prompt
            user_prompt = self._build_prompt(
                question=question,
                code_context=code_context,
                question_type=question_type,
                context_text=context_text,
            )
    
            # Generate answer with version-aware system prompt
            messages = [
                {"role": "system", "content": get_system_prompt(target_version)},
                {"role": "user", "content": user_prompt},
            ]
    
            response = self._call_llm(
                messages=messages,
                temperature=0.3,
                max_tokens=2048,
            )
    
            # Parse response
            answer, code_examples = self._parse_response(response, target_version=target_version)
    
            # Generate follow-up questions
            follow_up_questions = self._generate_follow_ups(question, answer)
    
            return {
                "answer": answer,
                "code_examples": code_examples,
                "references": references[:5],  # Limit to 5 references
                "follow_up_questions": follow_up_questions,
            }
    
        except Exception as e:
            return {"error": f"Failed to answer question: {str(e)}"}
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'get concept explanations' and 'debug code issues,' implying it returns informative responses, but doesn't detail response format, potential limitations (e.g., accuracy, depth), rate limits, or authentication needs. For a Q&A tool with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves beyond basic purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, stating the core purpose in the first sentence and adding a supplementary note in the second. Both sentences earn their place by clarifying scope and capabilities. It avoids redundancy and is appropriately sized for a tool with four parameters, though it could be slightly more detailed given the lack of annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (Q&A with four parameters), the absence of annotations, and the lack of an output schema, the description is moderately complete. It covers the purpose and hints at usage but lacks details on behavioral traits, response format, and error handling. For a tool without structured output or safety annotations, more context on what to expect from the tool's operation would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds little beyond the schema, only implying that parameters like target_version enable 'version-specific guidance,' and offers no additional context on parameter usage or interactions. With such high schema coverage, the baseline score of 3 is appropriate: the description neither compensates for the schema nor detracts from it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Ask questions about Stylus development, get concept explanations, or debug code issues.' It specifies the verb ('ask questions') and resource ('Stylus development'), and distinguishes from most siblings (e.g., generate_* tools, orchestrate_* tools) by focusing on Q&A rather than code generation or orchestration. However, it doesn't explicitly differentiate from other ask_* tools like ask_bridging or ask_orbit, which may have overlapping domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through 'Supports version-specific guidance' and the input schema's parameters (e.g., question_type, target_version), suggesting it's for Stylus-related inquiries with optional specificity. However, it lacks explicit guidance on when to use this tool versus alternatives like ask_bridging or ask_orbit, or when to prefer code-generation siblings for similar tasks. The guidance is present but not fully articulated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
