jensenify-mcp

by kenm47

jensen_status

Monitor progress toward $250k/year compute spend goals with real-time tracking and personalized recommendations for optimizing AI-assisted development workflows.

Instructions

Track your progress toward Jensen Huang's recommended $250k/year compute spend. Displays a real-time progress bar and personalized recommendations for increasing your humanistic compute utilization. Every great engineer knows their number.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| calls_per_session | No | Average number of AI-assisted tool calls per coding session | 20 |
| sessions_per_week | No | Number of coding sessions per week | 5 |
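Both parameters are optional; when omitted, the schema defaults apply. An illustrative `tools/call` request body (the argument values here are examples, not the defaults) might look like:

```json
{
  "name": "jensen_status",
  "arguments": {
    "calls_per_session": 40,
    "sessions_per_week": 6
  }
}
```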

Implementation Reference

  • The handler function for the 'jensen_status' tool, which calculates the annual spend estimate and returns a formatted status update.
    async ({ calls_per_session, sessions_per_week }) => {
      sessionCalls++;

      const estimate = estimateAnnualSpend({
        tier,
        callsPerSession: calls_per_session,
        sessionsPerWeek: sessions_per_week,
      });

      let response = formatEstimate(estimate);
      response += "\n\n";
      response += formatStatusUpdate(sessionCalls, sessionTokens, tier);
      response += "\n\n";

      if (estimate.percentOfJensen < 100) {
        response += `\n## Recommendations to Reach Jensen Target\n\n`;

        if (tier !== "jensen") {
          response += `1. **Upgrade your tier to 'jensen'** — You're currently on '${tier}'. The jensen tier includes ${Object.keys(TIER_TEXTS.jensen).length} canonical works for maximum context injection.\n`;
        }

        response += `2. **Increase session frequency** — Consider coding on weekends. The classics don't take days off.\n`;
        response += `3. **Consult the canon more often** — Run consult_the_canon before every function, not just major decisions.\n`;
        response += `4. **Enable full text inclusion** — Always set include_full_texts=true for maximum wisdom absorption.\n`;
        response += `5. **Share with your team** — Jensen's vision is collective. Every engineer should contribute to the $250k target.\n`;
      }

      // Return the assembled text as a standard MCP tool result
      return { content: [{ type: "text", text: response }] };
    }
  • src/index.ts:136-150 (registration)
    Registration and schema definition for the 'jensen_status' tool.
    server.tool(
      "jensen_status",
      `Track your progress toward Jensen Huang's recommended $250k/year compute spend.
    Displays a real-time progress bar and personalized recommendations for increasing your
    humanistic compute utilization. Every great engineer knows their number.`,
      {
        calls_per_session: z
          .number()
          .default(20)
          .describe("Average number of AI-assisted tool calls per coding session"),
        sessions_per_week: z
          .number()
          .default(5)
          .describe("Number of coding sessions per week"),
      },
      // the async handler (excerpted above) is passed as the final argument
    );
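The `estimateAnnualSpend` helper is not shown on this page. A minimal sketch of what such a calculation might look like, assuming a flat per-call cost (`COST_PER_CALL`) and a 52-week year — the constants, formula, and `SpendEstimate` shape below are illustrative assumptions, not the server's actual implementation:

```typescript
// Hypothetical sketch: the real estimateAnnualSpend in src/index.ts is not shown here.
const JENSEN_TARGET = 250_000; // $250k/year goal referenced by the tool
const COST_PER_CALL = 0.5; // assumed flat dollar cost per AI-assisted call

interface SpendEstimate {
  annualSpend: number;
  percentOfJensen: number;
}

function estimateAnnualSpend(opts: {
  tier?: string; // corpus tier; accepted to match the handler, ignored in this sketch
  callsPerSession: number;
  sessionsPerWeek: number;
}): SpendEstimate {
  const callsPerYear = opts.callsPerSession * opts.sessionsPerWeek * 52;
  const annualSpend = callsPerYear * COST_PER_CALL;
  return {
    annualSpend,
    percentOfJensen: (annualSpend / JENSEN_TARGET) * 100,
  };
}

// With the schema defaults (20 calls/session, 5 sessions/week):
const e = estimateAnnualSpend({ callsPerSession: 20, sessionsPerWeek: 5 });
// 20 * 5 * 52 = 5,200 calls/year -> $2,600/year at $0.50/call, ~1.04% of the target
```

Under these assumed numbers, `percentOfJensen` stays far below 100, which is what drives the handler's recommendation branch.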
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden for behavioral disclosure. It mentions a 'real-time progress bar' and 'personalized recommendations', which give some behavioral context, but it doesn't address critical aspects such as whether this tool makes changes to any system, requires authentication, has rate limits, or what happens when it is invoked. The description is too vague about actual behavior beyond surface-level output.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with three sentences that each serve a purpose: stating the tool's function, describing its outputs, and providing motivational context. It's front-loaded with the core functionality. The motivational quote at the end could be considered slightly extraneous but doesn't significantly detract from the overall efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description is insufficiently complete. It doesn't explain what the output looks like (beyond mentioning 'progress bar' and 'recommendations'), doesn't address error conditions, and provides minimal behavioral context. The motivational statement doesn't add functional completeness. Given the lack of structured data, the description should do more heavy lifting.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add any meaningful parameter semantics beyond what's in the schema; it doesn't explain how these parameters affect the progress calculation or recommendations. The baseline of 3 is appropriate when the schema does all the parameter documentation work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: tracking progress toward a specific compute spend goal ($250k/year) with a progress bar and recommendations. It uses specific verbs ('track', 'displays') and identifies the resource (compute utilization). However, it doesn't explicitly differentiate from the sibling tool 'consult_the_canon', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions 'personalized recommendations' but doesn't specify what triggers those recommendations or when this tool should be used instead of the sibling tool. There's no mention of prerequisites, frequency of use, or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
