Glama
Connectry-io

Connectry Architect Cert

Official

submit_exam_answer

Submit practice exam answers for deterministic grading with immediate feedback and explanations, then proceed to the next question in the certification preparation workflow.

Instructions

Submit an answer for a practice exam question. Graded deterministically. DO NOT soften results.

IMPORTANT — TWO-STEP presentation after grading:

  1. FIRST: Show the grading result as REGULAR CHAT TEXT. Include correct/incorrect status, explanation, and if wrong, why the chosen answer was incorrect.

  2. THEN: If there's a next question, present it using AskUserQuestion:

    • header: "Q[number]"

    • question: Include the FULL scenario + question text

    • options: 4 items with label "A"/"B"/"C"/"D" and description as the option text. Then call submit_exam_answer again with the answer.

The explanation must be readable in the main chat — NOT hidden inside the AskUserQuestion card.
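The two-step flow above can be sketched as a payload shape for step 2. The field names (header, question, options, label, description) come from the instructions; the interface names and the exact client-side schema are assumptions for illustration only:

```typescript
// Hypothetical TypeScript shape for the AskUserQuestion call in step 2.
// Field names follow the instructions above; interface names are invented.
interface AskUserQuestionOption {
  label: string;       // "A" | "B" | "C" | "D"
  description: string; // the full option text
}

interface AskUserQuestionInput {
  header: string;   // e.g. "Q3"
  question: string; // the FULL scenario plus question text
  options: AskUserQuestionOption[];
}

const nextQuestionPrompt: AskUserQuestionInput = {
  header: 'Q3',
  question: 'Scenario: A team needs X...\n\nWhich option best meets the requirement?',
  options: [
    { label: 'A', description: 'Option A text' },
    { label: 'B', description: 'Option B text' },
    { label: 'C', description: 'Option C text' },
    { label: 'D', description: 'Option D text' },
  ],
};
```

Keeping the explanation in chat text and only the four choices inside the card matches the requirement that grading feedback stays readable outside the widget.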

Input Schema

Name        Required  Description                      Default
examId      Yes       The practice exam ID             none
questionId  Yes       The question ID being answered   none
answer      Yes       Your answer: A, B, C, or D       none
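Note that the schema types answer as a plain string; valid values are the four letters. The narrowing guard below is a hypothetical sketch of that constraint, not part of the server's actual code, and the example IDs are made up:

```typescript
// Hypothetical guard for the `answer` parameter, which the published schema
// leaves as a free-form string even though only four values are valid.
type AnswerLetter = 'A' | 'B' | 'C' | 'D';

function isAnswerLetter(value: string): value is AnswerLetter {
  return value === 'A' || value === 'B' || value === 'C' || value === 'D';
}

// Illustrative arguments for a submit_exam_answer call (IDs invented):
const exampleArgs = {
  examId: 42,           // number: the practice exam ID
  questionId: 'q-017',  // string: the question being answered
  answer: 'B',          // one of 'A' | 'B' | 'C' | 'D'
};
```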

Implementation Reference

  • Registration function for 'submit_exam_answer' that binds the tool to the MCP server, including the zod input schema and the handler that grades the answer, records the result in the database, and returns feedback or the next question.
    export function registerSubmitExamAnswer(server: McpServer, db: Database.Database, userConfig: UserConfig): void {
      server.tool(
        'submit_exam_answer',
        `Submit an answer for a practice exam question. Graded deterministically. DO NOT soften results.
    
    IMPORTANT — TWO-STEP presentation after grading:
    1. FIRST: Show the grading result as REGULAR CHAT TEXT. Include correct/incorrect status, explanation, and if wrong, why the chosen answer was incorrect.
    2. THEN: If there's a next question, present it using AskUserQuestion:
       - header: "Q[number]"
       - question: Include the FULL scenario + question text
       - options: 4 items with label "A"/"B"/"C"/"D" and description as option text
       Then call submit_exam_answer again with the answer.
    
    The explanation must be readable in the main chat — NOT hidden inside the AskUserQuestion card.`,
        {
          examId: z.number().describe('The practice exam ID'),
          questionId: z.string().describe('The question ID being answered'),
          answer: z.string().describe('Your answer: A, B, C, or D'),
        },
        async ({ examId, questionId, answer }) => {
          const userId = userConfig.userId;
          ensureUser(db, userId);
    
          const exam = getExamById(db, examId);
          if (!exam) {
            return {
              content: [{ type: 'text' as const, text: JSON.stringify({ error: 'Exam not found', examId }) }],
              isError: true,
            };
          }
          if (exam.completedAt) {
            return {
              content: [{ type: 'text' as const, text: 'This exam is already completed. Start a new practice exam to try again.' }],
              isError: true,
            };
          }
          if (exam.answeredQuestionIds.includes(questionId)) {
            return {
              content: [{ type: 'text' as const, text: `Question ${questionId} has already been answered in this exam.` }],
              isError: true,
            };
          }
    
          const allQuestions = loadQuestions();
          const question = allQuestions.find(q => q.id === questionId);
          if (!question) {
            return {
              content: [{ type: 'text' as const, text: JSON.stringify({ error: 'Question not found', questionId }) }],
              isError: true,
            };
          }
    
          // Grade the answer
          const result = gradeAnswer(question, answer);
    
          // Record in exam
          recordExamAnswer(db, examId, questionId, result.isCorrect, question.domainId);
    
          const answeredCount = exam.answeredQuestionIds.length + 1;
          const remaining = exam.totalQuestions - answeredCount;
    
          const lines: string[] = [];
    
          // Grade feedback
          if (result.isCorrect) {
            lines.push(`✅ Correct! (${answeredCount}/${exam.totalQuestions})`);
          } else {
            lines.push(`❌ Incorrect. The correct answer is ${result.correctAnswer}. (${answeredCount}/${exam.totalQuestions})`);
            if (result.whyUserWasWrong) {
              lines.push('', `Why ${result.userAnswer} is wrong: ${result.whyUserWasWrong}`);
            }
          }
          lines.push('', result.explanation);
    
          // Check if exam is complete
          if (remaining === 0) {
            const completed = completeExam(db, examId);
            if (completed) {
              lines.push('', '═══ PRACTICE EXAM COMPLETE ═══', '');
              lines.push(`Score: ${completed.score}/1000`);
              lines.push(`Result: ${completed.passed ? '✅ PASSED' : '❌ FAILED'} (passing: 720/1000)`);
              lines.push(`Correct: ${completed.correctAnswers}/${completed.totalQuestions}`);
              lines.push('');
              lines.push('Domain Breakdown:');
    
              const scores = completed.domainScores;
              for (const key of Object.keys(scores).sort()) {
                const ds = scores[key];
                lines.push(`  D${ds.domainId}: ${ds.domainTitle} — ${ds.correctAnswers}/${ds.totalQuestions} (${ds.accuracyPercent}%) [weight: ${ds.weight}%]`);
              }
    
              // Compare with previous attempts
              const history = getExamHistory(db, userId);
              if (history.length > 1) {
                const previous = history[1]; // history[0] is current
                const scoreDiff = completed.score - previous.score;
                const arrow = scoreDiff > 0 ? '↑' : scoreDiff < 0 ? '↓' : '→';
                lines.push('');
                lines.push('─── Compared to Previous Attempt ───');
                lines.push(`  Previous score: ${previous.score}/1000 ${previous.passed ? '(passed)' : '(failed)'}`);
                lines.push(`  Change: ${arrow} ${scoreDiff > 0 ? '+' : ''}${scoreDiff} points`);
    
                // Per-domain comparison
                for (const key of Object.keys(scores).sort()) {
                  const current = scores[key];
                  const prev = previous.domainScores[key];
                  if (prev) {
                    const diff = current.accuracyPercent - prev.accuracyPercent;
                    const dArrow = diff > 0 ? '↑' : diff < 0 ? '↓' : '→';
                    lines.push(`  D${current.domainId}: ${prev.accuracyPercent}% → ${current.accuracyPercent}% ${dArrow}`);
                  }
                }
              }
            }
          } else {
            // Serve next question
            const nextQuestionId = exam.questionIds.find(
              id => !exam.answeredQuestionIds.includes(id) && id !== questionId
            );
            if (nextQuestionId) {
              const nextQuestion = allQuestions.find(q => q.id === nextQuestionId);
              if (nextQuestion) {
                lines.push('');
                lines.push(`─── Question ${answeredCount + 1} of ${exam.totalQuestions} ───`);
                lines.push('');
                lines.push(`Domain: D${nextQuestion.domainId}`);
                lines.push(`Task: ${nextQuestion.taskStatement}`);
                lines.push(`Difficulty: ${nextQuestion.difficulty}`);
                lines.push('');
                lines.push(`Scenario: ${nextQuestion.scenario}`);
                lines.push('');
                lines.push(nextQuestion.text);
                lines.push('');
                lines.push(`A) ${nextQuestion.options.A}`);
                lines.push(`B) ${nextQuestion.options.B}`);
                lines.push(`C) ${nextQuestion.options.C}`);
                lines.push(`D) ${nextQuestion.options.D}`);
    
                const selected = await elicitSingleSelect(server, 'Select your answer:', 'answer', [
                  { value: 'A', title: `A) ${nextQuestion.options.A}` },
                  { value: 'B', title: `B) ${nextQuestion.options.B}` },
                  { value: 'C', title: `C) ${nextQuestion.options.C}` },
                  { value: 'D', title: `D) ${nextQuestion.options.D}` },
                ]);
    
                if (selected !== null) {
                  lines.push('', `User selected: ${selected}`);
                }
              }
            }
          }
    
          return {
            content: [{ type: 'text' as const, text: lines.join('\n') }],
            _meta: buildQuizMeta(),
          };
        }
      );
    }
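The handler above depends on a grading contract whose definition is not shown on this page. The sketch below infers the result shape from how the handler reads it (result.isCorrect, result.userAnswer, result.correctAnswer, result.explanation, result.whyUserWasWrong); the interface names, the QuestionLike shape, and the wrongAnswerNotes field are assumptions for illustration, not the server's real gradeAnswer:

```typescript
// Minimal sketch of the grading contract the handler relies on. Field names
// are inferred from the handler's usage; everything else here is assumed.
interface GradeResult {
  isCorrect: boolean;
  userAnswer: string;
  correctAnswer: string;
  explanation: string;
  whyUserWasWrong?: string; // only populated for incorrect answers
}

interface QuestionLike {
  correctAnswer: string;
  explanation: string;
  wrongAnswerNotes?: Record<string, string>; // per-letter notes (hypothetical)
}

function gradeAnswerSketch(question: QuestionLike, answer: string): GradeResult {
  // Normalize so 'b' and ' B ' grade the same as 'B'.
  const normalized = answer.trim().toUpperCase();
  const isCorrect = normalized === question.correctAnswer;
  return {
    isCorrect,
    userAnswer: normalized,
    correctAnswer: question.correctAnswer,
    explanation: question.explanation,
    whyUserWasWrong: isCorrect ? undefined : question.wrongAnswerNotes?.[normalized],
  };
}
```

Deterministic grading of this kind is what lets the description promise immediate, unsoftened feedback: the result depends only on the stored correct answer, never on model judgment.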
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the deterministic grading approach, the two-step presentation requirement, formatting specifications for results display, and the conditional flow for subsequent questions. However, it doesn't address potential error conditions, rate limits, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized but not optimally structured. It front-loads the core purpose but then dedicates most content to presentation guidelines rather than tool behavior. Some sentences could be more concise, and the formatting instructions might be better separated from the tool's functional description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (involving grading and conditional follow-up actions) and the absence of both annotations and output schema, the description does a reasonably complete job. It covers the grading approach, presentation requirements, and workflow continuation. However, it doesn't describe what happens on the final question or error scenarios, leaving some gaps in contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters (examId, questionId, answer) with their types and basic descriptions. The description adds some context about the answer parameter ('Your answer: A, B, C, or D'), which slightly enhances understanding beyond the schema's 'string' type, but doesn't provide significant additional semantic value for the other parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Submit an answer for a practice exam question. Graded deterministically.' It specifies the verb ('submit'), resource ('answer'), and context ('practice exam question'), but doesn't explicitly differentiate it from the sibling 'submit_answer' tool, which appears to be a similar function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines, detailing a two-step presentation process after grading: first showing grading results as regular chat text, then presenting the next question using AskUserQuestion if available. It specifies when to use the tool again ('call submit_exam_answer again with the answer') and includes formatting requirements for the presentation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Connectry-io/connectrylab-architect-cert-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server