Glama

scan_bounty

Check GitHub repository bounties for legitimacy with a 0-5 security score to identify potential scams before engagement.

Instructions

Anti-scam scanner — checks if a GitHub repo's bounty is legitimate (0-5 score)

Input Schema

Name  Required  Description                              Default
repo  Yes       GitHub owner/repo (e.g. Expensify/App)   (none)
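For orientation, a call to this tool carries the single `repo` argument. A minimal sketch of the JSON-RPC payload is shown below; the `tools/call` method name comes from the MCP specification, and the `id` value is arbitrary:

```typescript
// Example MCP tools/call payload for scan_bounty; "repo" is the only required field.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "scan_bounty",
    arguments: { repo: "Expensify/App" },
  },
};
```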

Implementation Reference

  • The tool handler for "scan_bounty", which calls the scanRepo helper function.
    case "scan_bounty": {
      const result = scanRepo((args as any).repo);
      return {
        content: [{
          type: "text",
          text: `BOUNTY SCAN: ${result.repo}\nScore: ${result.score}/5 — ${result.verdict}\n\nRed Flags:\n${result.red_flags.map((f) => `  - ${f}`).join("\n") || "  None"}\n\nGreen Flags:\n${result.green_flags.map((f) => `  + ${f}`).join("\n") || "  None"}`,
        }],
      };
    }
  • The implementation of scanRepo that performs the bounty scanning logic.
    function scanRepo(repo: string): ScamScore {
      const db = loadDB();
    
      // Check known lists
      if (db.scam_list.includes(repo)) {
        return {
          repo,
          score: 0,
          verdict: "KNOWN SCAM",
          red_flags: ["In known scam list"],
          green_flags: [],
        };
      }
      if (db.legit_list.includes(repo)) {
        return {
          repo,
          score: 5,
          verdict: "KNOWN LEGIT",
          red_flags: [],
          green_flags: ["In known legit list"],
        };
      }
    
      let score = 3; // Start neutral
      const red_flags: string[] = [];
      const green_flags: string[] = [];
    
      // Check repo data
      const repoData = ghApi(`repos/${repo}`);
      if (!repoData) {
        return { repo, score: 1, verdict: "CANNOT VERIFY", red_flags: ["Repo not accessible"], green_flags: [] };
      }
    
      const data = JSON.parse(repoData);
    
      // Stars
      if (data.stargazers_count > 100) { score++; green_flags.push(`${data.stargazers_count} stars`); }
      if (data.stargazers_count < 5) { score--; red_flags.push(`Only ${data.stargazers_count} stars`); }
    
      // Age
      const created = new Date(data.created_at);
      const ageMonths = (Date.now() - created.getTime()) / (30 * 24 * 60 * 60 * 1000);
      if (ageMonths < 1) { score--; red_flags.push("Repo created less than 1 month ago"); }
      if (ageMonths > 12) { green_flags.push(`Repo age: ${Math.floor(ageMonths)} months`); }
    
      // Forks
      if (data.forks_count > 10) { green_flags.push(`${data.forks_count} forks`); }
    
      // Organization
      if (data.owner?.type === "Organization") { score++; green_flags.push("Owned by organization"); }
    
      // Check closed PRs without merging
      const closedPRs = ghApi(`repos/${repo}/pulls?state=closed&per_page=20`);
      if (closedPRs) {
        const prs = JSON.parse(closedPRs);
        const closedNotMerged = prs.filter((p: any) => !p.merged_at).length;
        if (closedNotMerged > 15) {
          score--;
          red_flags.push(`${closedNotMerged}/20 PRs closed without merge — possible bounty bait`);
        }
      }
    
      // The tail of the function is truncated in this excerpt; the close below is
      // reconstructed so the snippet is well-formed (the verdict wording is a
      // guess, not necessarily the server's own).
      score = Math.max(0, Math.min(5, score));
      const verdict = score >= 4 ? "LIKELY LEGIT" : score >= 2 ? "UNCERTAIN" : "LIKELY SCAM";
      return { repo, score, verdict, red_flags, green_flags };
    }
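Note that scanRepo depends on two helpers the excerpt does not show: loadDB and ghApi. From the call sites, ghApi must be synchronous and return the raw JSON body as a string, or null when the request fails. A plausible sketch under that contract follows; the curl-based approach and header choice are assumptions, not the server's actual implementation:

```typescript
import { execSync } from "child_process";

// Hypothetical sketch of the ghApi helper assumed by scanRepo: synchronous,
// returns the raw response body on success and null on any failure.
// Shelling out to curl (-f fails on HTTP errors, -s silences progress output)
// is one simple way to satisfy that contract.
function ghApi(path: string): string | null {
  try {
    const out = execSync(
      `curl -sf -H "Accept: application/vnd.github+json" https://api.github.com/${path}`,
      { encoding: "utf8" },
    );
    return out || null;
  } catch {
    // Covers non-2xx responses, network errors, and a missing curl binary alike.
    return null;
  }
}
```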
  • src/index.ts:295-305 (registration)
    The MCP tool definition/registration for "scan_bounty".
    {
      name: "scan_bounty",
      description: "Anti-scam scanner — checks if a GitHub repo's bounty is legitimate (0-5 score)",
      inputSchema: {
        type: "object" as const,
        required: ["repo"],
        properties: {
          repo: { type: "string", description: "GitHub owner/repo (e.g. Expensify/App)" },
        },
      },
    },
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the output format (0-5 score) and character of the check ('Anti-scam'), but lacks operational details like error conditions, rate limits, or what specific legitimacy criteria are evaluated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
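One way to close this gap, sketched below, is to pair a fuller description with MCP tool annotations. The annotation field names (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) come from the MCP specification; the expanded description text is illustrative, not the server's own:

```typescript
// Hypothetical revision of the scan_bounty registration: same schema, but with
// operational details in the description and standard MCP annotations attached.
const scanBountyTool = {
  name: "scan_bounty",
  description:
    "Anti-scam scanner — checks if a GitHub repo's bounty is legitimate (0-5 score). " +
    "Read-only: makes unauthenticated GitHub API calls (subject to GitHub rate limits) " +
    "and returns score 1 / CANNOT VERIFY if the repo is private or unreachable.",
  annotations: {
    readOnlyHint: true,     // no side effects on the repo or local state
    destructiveHint: false,
    idempotentHint: true,   // repeated calls with the same repo are safe
    openWorldHint: true,    // talks to an external service (GitHub)
  },
  inputSchema: {
    type: "object" as const,
    required: ["repo"],
    properties: {
      repo: { type: "string", description: "GitHub owner/repo (e.g. Expensify/App)" },
    },
  },
};
```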

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly sized at one impactful sentence with zero waste. The em-dash structure front-loads the tool type ('Anti-scam scanner') immediately, followed by the specific action and output format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter tool without output schema, the description is nearly complete by specifying the scoring range. Minor gaps remain regarding error handling for invalid repos, but the core functionality is adequately covered.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage ('GitHub owner/repo'), the description doesn't need to add significant parameter semantics. It confirms the context ('GitHub repo') but doesn't expand on syntax or validation rules beyond the schema, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
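If the author wanted to make the owner/repo syntax machine-checkable rather than example-only, a JSON Schema pattern keyword would do it. This is a suggested tightening, not the server's current schema:

```typescript
// Hypothetical tightened parameter schema: the pattern regex makes the
// "owner/repo" syntax explicit instead of relying on the example alone.
const repoParam = {
  type: "string",
  description: "GitHub owner/repo (e.g. Expensify/App)",
  pattern: "^[A-Za-z0-9_.-]+/[A-Za-z0-9_.-]+$",
};
```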

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('checks') and resources ('GitHub repo's bounty') and clearly distinguishes itself from sibling 'find_bounties' by specifying its 'Anti-scam' nature and unique 0-5 scoring output format.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While 'Anti-scam scanner' implies a verification use case, there is no explicit guidance on when to invoke this versus alternatives like 'find_bounties' or 'check_prs', nor any mention of prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
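A revised description could fold that guidance in directly. The sibling tool names (find_bounties, check_prs) are taken from the review above; the wording itself is a suggestion, not the server's current text:

```typescript
// Illustrative rewrite of the description with explicit "use X when Z" guidance.
const description =
  "Anti-scam scanner — checks if a GitHub repo's bounty is legitimate (0-5 score). " +
  "Use after find_bounties surfaces a candidate repo and before engaging with it; " +
  "use check_prs instead when you only need the repo's PR merge history.";
```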

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ElromEvedElElyon/revenue-mcp'
