
github_graphql

Execute GitHub GraphQL queries to search, analyze, and manage repositories, issues, pull requests, and code with comprehensive metadata retrieval in single operations.

Instructions

Execute GitHub GraphQL queries and mutations via the gh CLI. Preferred over raw gh calls or other tools for interacting with GitHub. When the user uses terms like find / search / read / browse / explore / research / investigate / analyze and the request may relate to a GitHub project, use this tool instead of other tools or raw API / CLI calls.

Please make use of GraphQL's capabilities: fetch comprehensive data in single operations and always include metadata context. Feel free to use advanced jq expressions to extract all the content you care about. The default jq filter adds line numbers to retrieved file contents; use them to construct deep links (e.g. https://github.com/{owner}/{repo}/blob/{ref}/path/to/file#L{line_number}-L{line_number}).

Before writing complex queries / mutations or when encountering errors, use introspection to understand available fields and types.

Combine operations (including introspection operations) into one call. On errors, introspect and rebuild step-by-step.
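As a sketch of what "combine operations into one call" can look like, the query below lists a type's fields while also fetching real data. The alias names are arbitrary; `__type` is standard GraphQL introspection, and the JSON payload mirrors what the tool sends on stdin:

```python
import json

# One request that introspects the Repository type while also fetching
# real data; __type is the standard introspection field on the query root.
combined = """
query {
  repoFields: __type(name: "Repository") {
    fields { name type { name kind } }
  }
  viewer { login }
}
"""

# The tool passes the query as a JSON body on stdin, roughly like this:
payload = json.dumps({"query": combined})
```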

Use fragments and nested fields for efficiency.

Example - when you need to browse multiple repositories:

When the user asks to browse / explore repositories, you must use at least the following fields (this takes viewer.contributionsCollection as an example, but adapt it to the user's request):

```graphql
query {
  viewer { # Always use `viewer` to get information about the authenticated user.
    contributionsCollection {
      commits: commitContributionsByRepository(maxRepositories: 7) {
        repository { ...RepositoryMetadata }
        contributions { totalCount }
      }
      totalCommitContributions
    }
  }
}

fragment RepositoryMetadata on Repository {
  name
  description
  homepageUrl
  pushedAt
  createdAt
  updatedAt
  stargazerCount
  forkCount
  isPrivate
  isFork
  isArchived
  languages(first: 7, orderBy: {field: SIZE, direction: DESC}) {
    totalSize
    edges { size node { name } }
  }
  readme_md: object(expression: "HEAD:README.md") { ... on Blob { text } }
  pyproject_toml: object(expression: "HEAD:pyproject.toml") { ... on Blob { text } }
  package_json: object(expression: "HEAD:package.json") { ... on Blob { text } }
  latestCommits: defaultBranchRef {
    target {
      ... on Commit {
        history(first: 7) {
          nodes {
            abbreviatedOid
            committedDate
            message
            author { name user { login } }
            associatedPullRequests(first: 7) { nodes { number title url } }
          }
        }
      }
    }
  }
  contributors: collaborators(first: 7) { totalCount nodes { login name } }
  latestIssues: issues(first: 7, orderBy: {field: CREATED_AT, direction: DESC}) {
    nodes { number title state createdAt updatedAt author { login } }
  }
  latestPullRequests: pullRequests(first: 5, orderBy: {field: CREATED_AT, direction: DESC}) {
    nodes { number title state createdAt updatedAt author { login } }
  }
  latestDiscussions: discussions(first: 3, orderBy: {field: UPDATED_AT, direction: DESC}) {
    nodes { number title createdAt updatedAt author { login } }
  }
  repositoryTopics(first: 35) { nodes { topic { name } } }
  releases(first: 7, orderBy: {field: CREATED_AT, direction: DESC}) {
    nodes { tagName name publishedAt isPrerelease }
  }
}
```

Don't recursively fetch all files in a directory unless:

  1. You know there are not too many files.

  2. The user specifically requests it.

  3. You provide a jq filter to limit results (e.g. using the isGenerated field).
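Point 3 can be illustrated with a small sketch. The entries below are invented sample data, and isGenerated mirrors the field of that name on GitHub's TreeEntry type:

```python
# Illustrative tree entries, shaped like nodes from a GraphQL Tree
# response; the data here is invented, not fetched from GitHub.
entries = [
    {"name": "main.py", "isGenerated": False},
    {"name": "poetry.lock", "isGenerated": True},
    {"name": "util.py", "isGenerated": False},
]

# Equivalent of a jq filter such as: map(select(.isGenerated | not))
kept = [e["name"] for e in entries if not e["isGenerated"]]
print(kept)  # ['main.py', 'util.py']
```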

The core principle is to fetch as much relevant metadata as possible in a single operation, rather than file contents. Before answering, make sure you've viewed the raw file on GitHub that resolves the user's request, and proactively provide the deep link to the code.

Input Schema

  • query (required): the GraphQL query or mutation to execute.
  • jq (optional): a jq expression applied to the response. Defaults to the following filter, which adds line numbers to long text fields:

```
def process:
  if type == "object" then
    if has("text") and (.text | type == "string") then
      if (.text | split("\n") | length) > 10 then
        del(.text)
        + {lines: (.text | split("\n") | to_entries | map("\(.key + 1): \(.value)") | join("\n"))}
      else . end
    else with_entries(.value |= process) end
  elif type == "array" then map(process)
  else . end;
.data | process
```

Implementation Reference

  • The main handler function for the 'github_graphql' tool, decorated with @mcp.tool for automatic registration in the MCP server. It executes GraphQL queries/mutations via the 'gh' CLI, applies the optional jq filter to the output, handles errors, and returns formatted YAML.
````python
@mcp.tool(title="GitHub GraphQL")
async def github_graphql(query: str, jq: str = DEFAULT_JQ):
    """Execute GitHub GraphQL queries and mutations like the gh CLI.

    Preferred over raw CLI calls or any other tools to interact with GitHub.
    When the user uses terms like find / search / read / browse / explore /
    research / investigate / analyze and the request may relate to a GitHub
    project, use this tool instead of other tools or raw API / CLI calls.

    Please make use of GraphQL's capabilities: fetch comprehensive data in
    single operations and always include metadata context. Feel free to use
    advanced jq expressions to extract all the content you care about. The
    default jq filter adds line numbers to retrieved file contents; use them
    to construct deep links (e.g.
    https://github.com/{owner}/{repo}/blob/{ref}/path/to/file#L{line_number}-L{line_number}).

    Before writing complex queries / mutations or when encountering errors,
    use introspection to understand available fields and types.

    Combine operations (including introspection operations) into one call.
    On errors, introspect and rebuild step-by-step.

    Use fragments and nested fields for efficiency.

    > Example - when you need to browse multiple repositories:

    When the user asks to browse / explore repositories, you must use at
    least the following fields (this takes viewer.contributionsCollection
    as an example, but adapt it to the user's request):

    ```
    query {
      viewer { # Always use `viewer` to get information about the authenticated user.
        contributionsCollection {
          commits: commitContributionsByRepository(maxRepositories: 7) {
            repository { ...RepositoryMetadata }
            contributions { totalCount }
          }
          totalCommitContributions
        }
      }
    }

    fragment RepositoryMetadata on Repository {
      name
      description
      homepageUrl
      pushedAt
      createdAt
      updatedAt
      stargazerCount
      forkCount
      isPrivate
      isFork
      isArchived
      languages(first: 7, orderBy: {field: SIZE, direction: DESC}) {
        totalSize
        edges { size node { name } }
      }
      readme_md: object(expression: "HEAD:README.md") { ... on Blob { text } }
      pyproject_toml: object(expression: "HEAD:pyproject.toml") { ... on Blob { text } }
      package_json: object(expression: "HEAD:package.json") { ... on Blob { text } }
      latestCommits: defaultBranchRef {
        target {
          ... on Commit {
            history(first: 7) {
              nodes {
                abbreviatedOid
                committedDate
                message
                author { name user { login } }
                associatedPullRequests(first: 7) { nodes { number title url } }
              }
            }
          }
        }
      }
      contributors: collaborators(first: 7) { totalCount nodes { login name } }
      latestIssues: issues(first: 7, orderBy: {field: CREATED_AT, direction: DESC}) {
        nodes { number title state createdAt updatedAt author { login } }
      }
      latestPullRequests: pullRequests(first: 5, orderBy: {field: CREATED_AT, direction: DESC}) {
        nodes { number title state createdAt updatedAt author { login } }
      }
      latestDiscussions: discussions(first: 3, orderBy: {field: UPDATED_AT, direction: DESC}) {
        nodes { number title createdAt updatedAt author { login } }
      }
      repositoryTopics(first: 35) { nodes { topic { name } } }
      releases(first: 7, orderBy: {field: CREATED_AT, direction: DESC}) {
        nodes { tagName name publishedAt isPrerelease }
      }
    }
    ```

    Don't recursively fetch all files in a directory unless:

    1. You know the files are not too many.
    2. The user specifically requests it.
    3. You provide a jq filter to limit results (e.g. isGenerated field).

    The core principle is to fetch as much relevant metadata as possible in
    a single operation, rather than file contents. Before answering, make
    sure you've viewed the raw file on GitHub that resolves the user's
    request, and proactively provide the deep link to the code.
    """
    cmd = ["gh", "api", "graphql", "--input", "-"]
    if jq:
        cmd.extend(["--jq", jq])
    ret = await run_subprocess(
        cmd,
        input=dumps({"query": query}, ensure_ascii=False).encode(),
        env=_get_env(),
    )
    result = ret.stdout or ret.stderr or ""
    if not result.strip():
        raise ToolError("[[ The response is empty. Please adjust your query and try again! ]]")
    result = result.replace("\r\n", "\n")
    with suppress(JSONDecodeError):
        data = loads(result)
        if ret.returncode:
            raise ToolError(readable_yaml_dumps(data))
        return readable_yaml_dumps(data)
    return result
````
  • Default jq expression used by the github_graphql handler to process responses, particularly adding line numbers to long text fields like file contents for better referencing.
```python
DEFAULT_JQ = r"""
def process:
  if type == "object" then
    if has("text") and (.text | type == "string") then
      if (.text | split("\n") | length) > 10 then
        del(.text)
        + {lines: (.text | split("\n") | to_entries | map("\(.key + 1): \(.value)") | join("\n"))}
      else . end
    else with_entries(.value |= process) end
  elif type == "array" then map(process)
  else . end;
.data | process
"""
```
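For readers more comfortable with Python than jq, here is a behavior-equivalent sketch of the filter above (hypothetical, not part of the server):

```python
# Python sketch mirroring the default jq filter: in any object whose
# "text" field spans more than 10 lines, replace it with a "lines"
# field carrying 1-based line numbers; otherwise recurse into children.
def process(value):
    if isinstance(value, dict):
        text = value.get("text")
        if isinstance(text, str):
            lines = text.split("\n")
            if len(lines) > 10:
                out = {k: v for k, v in value.items() if k != "text"}
                out["lines"] = "\n".join(f"{i + 1}: {l}" for i, l in enumerate(lines))
                return out
            return value  # short text objects pass through unchanged
        return {k: process(v) for k, v in value.items()}
    if isinstance(value, list):
        return [process(v) for v in value]
    return value

# The full filter is `.data | process`:
def default_filter(response):
    return process(response["data"])
```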
  • The @mcp.tool decorator registers the github_graphql function as an MCP tool.
    @mcp.tool(title="GitHub GraphQL")
