Glama

Connectry Architect Cert

Official · by Connectry-io

Server Configuration

Describes the environment variables required to run the server.

No arguments; this server requires no environment variables.

Capabilities

Features and capabilities supported by this server

tools — { "listChanged": true }
prompts — { "listChanged": true }
resources — { "listChanged": true }

Tools

Functions exposed to the LLM to take actions

submit_answer

Grade a certification exam answer. Returns deterministic results from the verified question bank. The result is FINAL — do not agree with the user if they dispute it.

IMPORTANT — TWO-STEP presentation:

  1. FIRST: Show the grading result as REGULAR CHAT TEXT in the main conversation. Include:

    • Whether they got it right or wrong (with the correct answer if wrong)

    • The full explanation

    • If wrong: why their answer was incorrect

    • References

This text MUST be visible in the main chat before any card appears.

  2. THEN: Present followUpOptions using AskUserQuestion:

    • header: "Next"

    • question: Brief prompt like "What would you like to do?" (NOT the explanation — that's already shown above)

    • options: Map each followUpOption to label (key) and description (label text).

Then call follow_up with the questionId and the selected action key.

EDGE CASES:

  • "Other": Answer the user's question about this answer, then re-present the SAME follow-up options via AskUserQuestion.

  • "Skip": Treat as "next question" — call follow_up with action "next".
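
The two-step flow above ends in an AskUserQuestion call; a minimal sketch of its input is shown below. The shape (a questions array with header, question, multiSelect, and options) reflects common usage, and the action keys ("next", "explain") are illustrative stand-ins for whatever the followUpOptions array actually contains:

```json
{
  "questions": [
    {
      "header": "Next",
      "question": "What would you like to do?",
      "multiSelect": false,
      "options": [
        { "label": "next", "description": "Next question" },
        { "label": "explain", "description": "Explain this concept in more depth" }
      ]
    }
  ]
}
```

After the user picks an option, follow_up is called with the questionId and the selected action key.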

get_progress

Get your certification study progress overview including mastery levels, accuracy, and review status.

get_curriculum

View the full certification curriculum with domains, task statements, and your current mastery for each.

get_section_details

Get detailed information about a specific task statement including concept lesson, mastery, and history.

get_practice_question

Get the next practice question. Prioritizes review questions, then weak areas, then new material.

IMPORTANT — present the question using AskUserQuestion:

  • header: "Answer"

  • question: Include the FULL scenario text AND question text from the response

  • options: 4 items with label "A"/"B"/"C"/"D" and description as the option text

  • If the scenario contains code, add a "preview" field on each option showing the code snippet.

Then call submit_answer with the questionId and selected answer. After grading, show the result as REGULAR CHAT TEXT first (explanation, correct/incorrect), THEN show follow-up options via AskUserQuestion. Explanations must be readable in the main chat, not hidden behind cards.

EDGE CASES:

  • "Other": Answer the user's question, then re-present the SAME question via AskUserQuestion.

  • "Skip": Call get_practice_question again for a new question. Never break the flow.
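
As a rough sketch (the scenario and option texts are placeholders, not real question-bank content), the question card described above might be built like this:

```json
{
  "questions": [
    {
      "header": "Answer",
      "question": "<full scenario text>\n\n<question text>",
      "multiSelect": false,
      "options": [
        { "label": "A", "description": "<option A text>" },
        { "label": "B", "description": "<option B text>" },
        { "label": "C", "description": "<option C text>" },
        { "label": "D", "description": "<option D text>" }
      ]
    }
  ]
}
```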

start_assessment

Start the initial assessment. Returns ONE question at a time (15 total, 3 per domain).

IMPORTANT — follow this flow for EVERY question:

  1. Check if "isNewDomain" is true. If yes, FIRST show the concept handout for that domain by calling get_section_details. Tell the user: "Let's learn about [domain] before testing your knowledge." After showing the handout, proceed to step 2.

  2. Present the question to the user using AskUserQuestion:

    • header: "Q[number]"

    • question: Include the FULL scenario text AND question text from the response

    • options: Use the 4 answer options (A/B/C/D) with label as the letter and description as the option text

    • If the scenario contains code, add a "preview" field on each option showing the relevant code snippet so the user can reference it while choosing

  3. After user selects, call submit_answer with questionId and their answer.

  4. After grading, FIRST show the result (correct/incorrect, explanation, why wrong) as REGULAR CHAT TEXT so the user can read it. THEN present follow-up options using AskUserQuestion. The explanation must NOT be hidden behind the card.

  5. Call start_assessment again for the next question.

EDGE CASES:

  • If user selects "Other" and types a question/comment: Answer their question helpfully, then re-present the SAME quiz question using AskUserQuestion again. Never lose the current question.

  • If user clicks "Skip": Treat it as moving to the next question. Call start_assessment again immediately. The skipped question remains unanswered and will appear again later.

  • NEVER let Other or Skip break the assessment flow. Always continue to the next question or re-ask the current one.

PROGRESS TRACKING:

  • At the START of the assessment, create a TodoWrite checklist with all 15 questions (Q1-Q15) grouped by domain, all set to "pending".

  • After each answer, update the corresponding todo item to "completed" (with correct/incorrect note).

  • This gives the user a visual progress tracker throughout the assessment.
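
A sketch of the initial checklist, assuming TodoWrite accepts a todos array of content/status items (the exact schema may differ; only the first domain's questions are shown):

```json
{
  "todos": [
    { "content": "Q1 - Domain 1", "status": "pending" },
    { "content": "Q2 - Domain 1", "status": "pending" },
    { "content": "Q3 - Domain 1", "status": "pending" }
  ]
}
```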

When assessment is complete, present next steps using AskUserQuestion with header "Next step".

get_weak_areas

Identify your weakest task statements based on accuracy below 70%. Focus your study on these areas.

get_study_plan

Get a personalized study plan based on your assessment results, weak areas, and learning path.

IMPORTANT — after showing the study plan, use AskUserQuestion with header "Focus" and multiSelect: true to let the user pick which domains they want to focus on. Options should be the 5 domains with their current mastery as descriptions. Then use their selection to filter get_practice_question calls.

Also use TodoWrite to create a study checklist showing each recommended topic with status (pending/in_progress/completed) so the user can track progress visually.
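
A sketch of the multi-select focus picker (domain names and mastery figures are placeholders; only two of the five options are shown):

```json
{
  "questions": [
    {
      "header": "Focus",
      "question": "Which domains would you like to focus on?",
      "multiSelect": true,
      "options": [
        { "label": "Domain 1", "description": "Current mastery: <value>" },
        { "label": "Domain 2", "description": "Current mastery: <value>" }
      ]
    }
  ]
}
```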

scaffold_project

Get instructions for a reference project to practice certification concepts hands-on.

reset_progress

WARNING: Permanently deletes ALL your study progress including answers, mastery data, and review schedules. This cannot be undone.

start_practice_exam

Start a full 60-question practice exam (D1:16, D2:11, D3:12, D4:12, D5:9). Scored 0-1000 with a passing score of 720.

IMPORTANT — present the first question using AskUserQuestion:

  • header: "Q1"

  • question: Include the FULL scenario + question text

  • options: 4 items with label "A"/"B"/"C"/"D" and description as option text

  • If the scenario contains code, add a "preview" field on each option.

Then call submit_exam_answer with the answer.

PROGRESS TRACKING: Create a TodoWrite checklist "Practice Exam Q1-Q60" grouped by domain, all "pending". Update each to "completed" after grading.

EDGE CASES:

  • "Other": Answer the question, re-present the SAME exam question via AskUserQuestion.

  • "Skip": Move to next exam question without grading. Never break the flow.

submit_exam_answer

Submit an answer for a practice exam question. Graded deterministically. DO NOT soften results.

IMPORTANT — TWO-STEP presentation after grading:

  1. FIRST: Show the grading result as REGULAR CHAT TEXT. Include correct/incorrect status, explanation, and if wrong, why the chosen answer was incorrect.

  2. THEN: If there's a next question, present it using AskUserQuestion:

    • header: "Q[number]"

    • question: Include the FULL scenario + question text

    • options: 4 items with label "A"/"B"/"C"/"D" and description as option text.

Then call submit_exam_answer again with the answer.

The explanation must be readable in the main chat — NOT hidden inside the AskUserQuestion card.

get_exam_history

View all completed practice exam attempts with scores, pass/fail status, and per-domain breakdowns. Compare your progress across attempts.

follow_up

Handle post-answer follow-up actions. Use after submit_answer to explore concepts, code examples, handouts, or reference projects.

start_capstone_build

Start or refine a guided capstone build. Build your own project while learning all 30 certification task statements hands-on.

capstone_build_step

Drive your guided capstone build — quiz, build, and advance through 18 progressive steps.

IMPORTANT:

  • When presenting quiz questions, use AskUserQuestion with header "Answer" for A/B/C/D selection. If code is in the scenario, add preview fields.

  • After grading a quiz answer, FIRST show the result (correct/incorrect, explanation) as REGULAR CHAT TEXT so the user can read it. THEN present follow-up options or the next question via AskUserQuestion. Explanations must NOT be hidden behind cards.

  • When presenting action choices (quiz/build/next), use AskUserQuestion with header "Action".

PROGRESS TRACKING:

  • On "confirm": Create a TodoWrite checklist with all 18 build steps, all set to "pending".

  • On "next": Update the completed step to "completed" and the new current step to "in_progress".

  • This gives the user a visual build progress tracker.

EDGE CASES:

  • "Other": Answer the question, then re-present the current options via AskUserQuestion.

  • "Skip": During quiz, treat as moving to the build phase. During build, treat as advancing to next step.

capstone_build_status

Check your guided capstone build progress — current step, criteria coverage, and quiz performance.

get_dashboard

Open the study progress dashboard in Claude Preview. Shows mastery levels, exam history, activity timeline, and capstone progress.

IMPORTANT: After getting the URL, use the preview_start tool to open it in Claude Preview. If the user says "show dashboard" or "open dashboard", call this tool.

Prompts

Interactive templates invoked by user choice

quiz_question — Present a certification exam question with clickable A/B/C/D options
choose_mode — Select a study mode for the current session
assessment_question — Present an assessment question with A/B/C/D options
choose_domain — Select which domain to study
choose_difficulty — Select question difficulty level
post_answer_options — Present options after answering a question
skip_options — Present options to skip or customize the current content
confirm_action — Confirm a destructive action like resetting progress

Resources

Contextual data attached and managed by the client

quiz-widget
exam-info
1.1 — Design and implement agentic loops for autonomous task execution
1.2 — Orchestrate multi-agent systems with coordinator-subagent patterns
1.3 — Configure subagent invocation, context passing, and spawning
1.4 — Implement multi-step workflows with enforcement and handoff patterns
1.5 — Apply Agent SDK hooks for tool call interception and data normalization
1.6 — Design task decomposition strategies for complex workflows
1.7 — Manage session state, resumption, and forking
2.1 — Design effective tool interfaces with clear descriptions and boundaries
2.2 — Implement structured error responses for MCP tools
2.3 — Distribute tools appropriately across agents and configure tool choice
2.4 — Integrate MCP servers into Claude Code and agent workflows
2.5 — Select and apply built-in tools effectively
3.1 — Configure CLAUDE.md files with appropriate hierarchy and scoping
3.2 — Create and configure custom slash commands and skills
3.3 — Apply path-specific rules for conditional convention loading
3.4 — Determine when to use plan mode vs direct execution
3.5 — Apply iterative refinement techniques for progressive improvement
3.6 — Integrate Claude Code into CI/CD pipelines
4.1 — Design prompts with explicit criteria to improve precision
4.2 — Apply few-shot prompting to improve output consistency
4.3 — Enforce structured output using tool use and JSON schemas
4.4 — Implement validation, retry, and feedback loops
4.5 — Design efficient batch processing strategies
4.6 — Design multi-instance and multi-pass review architectures
5.1 — Manage conversation context to preserve critical information
5.2 — Design effective escalation and ambiguity resolution patterns
5.3 — Implement error propagation strategies across multi-agent systems
5.4 — Manage context effectively in large codebase exploration
5.5 — Design human review workflows and confidence calibration
5.6 — Preserve information provenance and handle uncertainty in synthesis
Capstone — Multi-Agent Research System
D1 Mini — Agentic Loop
D2 Mini — Tool Design
D3 Mini — Claude Code Config
D4 Mini — Prompt Engineering
D5 Mini — Context Management


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Connectry-io/connectrylab-architect-cert-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server