Glama

value.detect

Identify up to 3 value betting opportunities by comparing market odds to fair value odds with a minimum 5% edge.

Instructions

Compare market odds vs fair odds to find up to 3 value picks (edge >= 5%). (Original Italian: "Confronta quote mercato vs fair per trovare fino a 3 value pick (edge >= 5%).")

Input Schema

| Name     | Required | Description | Default |
|----------|----------|-------------|---------|
| match_id | Yes      | Fixture id  |         |

Implementation Reference

  • Core handler function that detects value picks by comparing the best market odds to computed fair odds, filtering for edge >= 5% and odds >= 1.5, then sorting by edge and keeping the top 3.
    export const detectValue = async (matchId: number): Promise<ValueDetectionResult> => {
      const snapshot = await buildMatchSnapshot(matchId);
      const fair = computeFairOddsFromSnapshot(snapshot);
      const odds = await oddsApi.getMarketOddsForFixture(snapshot.match);
    
      const picks: ValuePick[] = odds
        .map((market) => {
          if (!market.bookmakers.length) return undefined;
          const bestBook = market.bookmakers.reduce((best, current) =>
            current.oddsDecimal > best.oddsDecimal ? current : best,
          );
          const fairOdds = fair.fairOdds[market.selection];
          const edge = bestBook.oddsDecimal / fairOdds - 1;
          if (edge < EDGE_THRESHOLD || bestBook.oddsDecimal < MIN_ODDS) return undefined;
          return {
            market: market.market,
            selection: market.selection,
            bookmaker: bestBook.book,
            offeredOdds: Number(bestBook.oddsDecimal.toFixed(3)),
            fairOdds,
            edge: Number(edge.toFixed(3)),
            rationale: buildRationale(snapshot.home.form, snapshot.away.form, fair.lambdaHome, fair.lambdaAway, market.selection),
          } satisfies ValuePick;
        })
        .filter((pick): pick is ValuePick => pick !== undefined)
        .sort((a, b) => b.edge - a.edge)
        .slice(0, 3);
    
      return {
        matchId,
        picks,
        fair,
        odds,
      };
    };
  • Registers the 'value.detect' tool with the FastMCP server, including its name, description, input schema, and an execute wrapper that calls detectValue.
    export const registerValueTool = (server: FastMCP) => {
      server.addTool({
        name: "value.detect",
        description: "Confronta quote mercato vs fair per trovare fino a 3 value pick (edge >= 5%).",
        parameters: z.object({
          match_id: z.number().describe("Fixture id"),
        }),
        execute: async (args) => {
          const payload = await detectValue(args.match_id);
          return JSON.stringify(payload, null, 2);
        },
      });
    };
  • Zod schema defining the input parameter 'match_id' as a number.
    parameters: z.object({
      match_id: z.number().describe("Fixture id"),
    }),
  • Helper function to generate rationale for value picks based on form and lambda values.
    const buildRationale = (
      homeForm: string | undefined,
      awayForm: string | undefined,
      lambdaHome: number,
      lambdaAway: number,
      selection: ValuePick["selection"],
    ): string => {
      const lambdaNote = `λ_home=${lambdaHome.toFixed(2)} λ_away=${lambdaAway.toFixed(2)}`;
      const homeNote = homeForm ? `home form ${homeForm}` : undefined;
      const awayNote = awayForm ? `away form ${awayForm}` : undefined;
      const formSnippet = [homeNote, awayNote].filter(Boolean).join(", ");
      return `${selection} boosted by ${lambdaNote}${formSnippet ? ` (${formSnippet})` : ""}`;
    };
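The edge computation in `detectValue` (`bestBook.oddsDecimal / fairOdds - 1`) can be checked with a small worked example; the numbers below are illustrative, not taken from the tool:

```typescript
// Illustrative numbers only: best offered odds 2.10 against fair odds 1.90.
const offered = 2.1;
const fair = 1.9;
// Same formula and rounding as the handler above.
const edge = Number((offered / fair - 1).toFixed(3));
console.log(edge); // 0.105 — clears the 5% (0.05) threshold
```

An offered price above fair value yields a positive edge; the handler keeps only picks at or above the 0.05 threshold.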
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool finds 'value picks' with an 'edge >= 5%', which implies some calculation or comparison, but doesn't describe what happens during execution (e.g., whether it fetches data, performs computations, or returns specific formats). For a tool with no annotations, this leaves significant gaps in understanding its behavior, such as error handling or output structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
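The missing behavioral disclosure could be expressed through MCP tool annotations. A minimal sketch, assuming the server's FastMCP version exposes the spec's annotation hints — the values below are inferred from the handler's behavior, not declared by the author:

```typescript
// Hypothetical annotations for value.detect, inferred from the handler:
// it only reads external odds/fixture data and computes a result.
const valueDetectAnnotations = {
  title: "Detect value bets",
  readOnlyHint: true, // no state is created, modified, or deleted
  destructiveHint: false, // nothing irreversible happens
  idempotentHint: true, // re-running with the same match_id is safe
  openWorldHint: true, // calls out to external odds/fixture APIs
};
console.log(valueDetectAnnotations.readOnlyHint); // true
```

Even without annotations, stating in the description that the tool fetches external odds data and performs a read-only computation would close most of the gap noted here.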

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence in Italian that conveys the core purpose without unnecessary words. It's front-loaded with the main action and result, making it easy to understand quickly. There's no wasted verbiage, and every part of the sentence contributes to the tool's definition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a tool that compares market vs fair quotes to find value picks, the description is incomplete. There's no output schema, and the description doesn't explain what the output looks like (e.g., a list of picks with details). With no annotations and minimal parameter explanation, it fails to provide enough context for an agent to fully understand how to use and interpret results, especially for a potentially data-intensive operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
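For reference, the output shape can be reconstructed from the handler above; the interface below mirrors the `ValuePick` literal in `detectValue` (field names come from the code, comments and sample values are interpretation):

```typescript
// Reconstructed from the ValuePick literal in detectValue.
interface ValuePickShape {
  market: string; // market identifier
  selection: string; // selection within the market
  bookmaker: string; // book offering the best price
  offeredOdds: number; // best decimal odds, rounded to 3 decimals
  fairOdds: number; // model fair odds for the selection
  edge: number; // offeredOdds / fairOdds - 1, rounded to 3 decimals
  rationale: string; // form + lambda summary from buildRationale
}

// Illustrative instance (all values are made up):
const sample: ValuePickShape = {
  market: "1X2",
  selection: "home",
  bookmaker: "ExampleBook",
  offeredOdds: 2.1,
  fairOdds: 1.9,
  edge: 0.105,
  rationale: "home boosted by λ_home=1.60 λ_away=1.10",
};
console.log(sample.edge >= 0.05); // true
```

Documenting this shape (or a summary of it) in the description would let an agent interpret results on the first call.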

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the single parameter 'match_id' documented as 'Fixture id'. The description doesn't add any meaning beyond this, as it doesn't explain how the match_id is used (e.g., to fetch market and fair quotes for that specific match). Since the schema already fully describes the parameter, the baseline score of 3 is appropriate, indicating adequate but no extra value from the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
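A describe string that adds intent beyond the bare "Fixture id" might look like the sketch below; the wording is hypothetical, and fixtures.list is a sibling tool named elsewhere in this review:

```typescript
import { z } from "zod";

// Hypothetical richer parameter description (suggested wording only).
const matchIdParam = z
  .number()
  .int()
  .positive()
  .describe(
    "Fixture id (e.g. from fixtures.list); market and fair odds are fetched for this match",
  );
```

This is a schema fragment, not a drop-in change; the point is that the description can carry usage intent the type alone cannot.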

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Confronta quote mercato vs fair per trovare fino a 3 value pick (edge >= 5%)' which translates to 'Compare market quotes vs fair to find up to 3 value picks (edge >= 5%)'. It specifies the action (compare market vs fair quotes), the resource (value picks), and the scope (up to 3 picks with edge >= 5%). However, it doesn't explicitly distinguish this from sibling tools like fair.compute or odds.prematch, which prevents a score of 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, such as needing a match_id from fixtures.list, or compare it to siblings like fair.compute (which might compute fair values) or odds.prematch (which might provide market quotes). There's only an implied context of finding value picks, but no explicit usage instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
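As an illustration, a description carrying that guidance could read as follows; the wording is hypothetical, and fair.compute, odds.prematch, and fixtures.list are the sibling tools named in this review:

```typescript
// Hypothetical reworded description embedding usage guidance.
const improvedDescription =
  "Compare market odds vs fair odds and return up to 3 value picks " +
  "(edge >= 5%, offered odds >= 1.5). Requires a match_id from fixtures.list. " +
  "Use fair.compute if you only need fair odds, or odds.prematch for raw " +
  "market prices without the edge comparison.";
console.log(improvedDescription.includes("fixtures.list")); // true
```

Two extra sentences of prerequisites and sibling-tool routing would resolve the gap this dimension flags, at a modest token cost.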
