mcp-server-youtube-transcript

by kimtaeyoon83

get_transcript

Read-only

Extract transcripts from YouTube videos with optional language selection, timestamp inclusion, and ad filtering for content analysis and accessibility.

Instructions

Extract transcript from a YouTube video URL or ID. Automatically falls back to available languages if requested language is not available.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| url | Yes | YouTube video URL or ID | |
| lang | No | Language code for transcript (e.g., 'ko', 'en'). Will fall back to an available language if not found. | en |
| include_timestamps | No | Include timestamps in output (e.g., '[0:05] text'). Useful for referencing specific moments. | false |
| strip_ads | No | Filter out sponsored segments from the transcript based on chapter markers (e.g., chapters marked as 'Werbung', 'Ad', 'Sponsor'). | true |
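Per the schema above, a tool call might pass an arguments object like the following. The video ID and language choice are illustrative placeholders, not values from the project:

```typescript
// Hypothetical arguments for a get_transcript call.
// Only `url` is required; the other fields show the documented defaults overridden.
const args = {
  url: "https://www.youtube.com/watch?v=dQw4w9WgXcQ", // required: URL or bare ID
  lang: "ko",               // optional, defaults to "en"
  include_timestamps: true, // optional, defaults to false
  strip_ads: true,          // optional, defaults to true
};
```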

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| meta | No | Title \| Author \| Subs \| Views \| Date |
| content | Yes | |
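A result conforming to this output schema might look like the object below; every field value here is a made-up placeholder, shown only to illustrate the pipe-delimited `meta` format:

```typescript
// Illustrative structuredContent shape; values are placeholders.
const structuredContent = {
  meta: "Example Title | Example Channel | 1.2M subs | 345K views | 2024-01-01",
  content: "Transcript text flattened to a single line of whitespace-normalized prose.",
};
```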

Implementation Reference

  • src/index.ts:16-60 (registration)
    Defines the MCP Tool object for 'get_transcript' including name, description, inputSchema, outputSchema, and annotations. Used for both listing tools and validation.
    const TOOLS: Tool[] = [
      {
        name: "get_transcript",
        description: "Extract transcript from a YouTube video URL or ID. Automatically falls back to available languages if requested language is not available.",
        inputSchema: {
          type: "object",
          properties: {
            url: {
              type: "string",
              description: "YouTube video URL or ID"
            },
            lang: {
              type: "string",
              description: "Language code for transcript (e.g., 'ko', 'en'). Will fall back to available language if not found.",
              default: "en"
            },
            include_timestamps: {
              type: "boolean",
              description: "Include timestamps in output (e.g., '[0:05] text'). Useful for referencing specific moments. Default: false",
              default: false
            },
            strip_ads: {
              type: "boolean",
              description: "Filter out sponsored segments from transcript based on chapter markers (e.g., chapters marked as 'Werbung', 'Ad', 'Sponsor'). Default: true",
              default: true
            }
          },
          required: ["url"]
        },
        // OutputSchema describes structuredContent format for Claude Code
        outputSchema: {
          type: "object",
          properties: {
            meta: { type: "string", description: "Title | Author | Subs | Views | Date" },
            content: { type: "string" }
          },
          required: ["content"]
        },
        annotations: {
          title: "Get Transcript",
          readOnlyHint: true,
          openWorldHint: true,
        },
      },
    ];
  • Primary handler logic for executing the 'get_transcript' tool in response to a CallToolRequest. Validates input, extracts the video ID, fetches and processes the transcript, adds informational notes about language fallback and ad stripping, and returns a structured MCP response.
    case "get_transcript": {
      const { url: input, lang = "en", include_timestamps = false, strip_ads = true } = args;
    
      if (!input || typeof input !== 'string') {
        throw new McpError(
          ErrorCode.InvalidParams,
          'URL parameter is required and must be a string'
        );
      }
    
      if (lang && typeof lang !== 'string') {
        throw new McpError(
          ErrorCode.InvalidParams,
          'Language code must be a string'
        );
      }
    
      try {
        const videoId = this.extractor.extractYoutubeId(input);
        console.log(`Processing transcript for video: ${videoId}, lang: ${lang}, timestamps: ${include_timestamps}, strip_ads: ${strip_ads}`);
    
        const result = await this.extractor.getTranscript(videoId, lang, include_timestamps, strip_ads);
        console.log(`Successfully extracted transcript (${result.text.length} chars, lang: ${result.actualLang}, ads stripped: ${result.adsStripped})`);
    
        // Build transcript with notes
        let transcript = result.text;
    
        // Add language fallback notice if different from requested
        if (result.actualLang !== lang) {
          transcript = `[Note: Requested language '${lang}' not available. Using '${result.actualLang}'. Available: ${result.availableLanguages.join(', ')}]\n\n${transcript}`;
        }
    
        // Add ad filtering notice based on what happened
        if (result.adsStripped > 0) {
          // Ads were filtered by chapter markers
          transcript = `[Note: ${result.adsStripped} sponsored segment lines filtered out based on chapter markers]\n\n${transcript}`;
        } else if (strip_ads && result.adChaptersFound === 0) {
          // No chapter markers found - add prompt hint as fallback
          transcript += '\n\n[Note: No chapter markers found. If summarizing, please exclude any sponsored segments or ads from the summary.]';
        }
    
        // Claude Code v2.0.21+ needs structuredContent for proper display
        return {
          content: [{
            type: "text" as const,
            text: transcript
          }],
          structuredContent: {
            meta: `${result.metadata.title} | ${result.metadata.author} | ${result.metadata.subscriberCount} subs | ${result.metadata.viewCount} views | ${result.metadata.publishDate}`,
            content: transcript.replace(/[\r\n]+/g, ' ').replace(/\s+/g, ' ')
          }
        };
      } catch (error) {
        console.error('Transcript extraction failed:', error);
    
        if (error instanceof McpError) {
          throw error;
        }
    
        throw new McpError(
          ErrorCode.InternalError,
          `Failed to process transcript: ${(error as Error).message}`
        );
      }
    }
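The handler relies on `this.extractor.extractYoutubeId`, which is not shown in this excerpt. A minimal sketch of what such a helper might look like follows; the regexes and error message are assumptions for illustration, not the project's actual implementation:

```typescript
// Hypothetical sketch: accept a watch URL, youtu.be short link,
// /shorts/ or /embed/ link, or a bare 11-character video ID.
function extractYoutubeId(input: string): string {
  const bareId = /^[A-Za-z0-9_-]{11}$/;
  if (bareId.test(input)) return input;

  const patterns = [
    /[?&]v=([A-Za-z0-9_-]{11})/,      // watch?v=ID
    /youtu\.be\/([A-Za-z0-9_-]{11})/, // youtu.be/ID
    /\/shorts\/([A-Za-z0-9_-]{11})/,  // /shorts/ID
    /\/embed\/([A-Za-z0-9_-]{11})/,   // /embed/ID
  ];
  for (const re of patterns) {
    const m = input.match(re);
    if (m) return m[1];
  }
  throw new Error(`Could not extract a video ID from: ${input}`);
}
```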
  • Helper method on the YouTubeTranscriptExtractor class that fetches subtitles via getSubtitles, strips ad segments based on chapters if requested, formats the transcript text, and returns the processed result with metadata.
    async getTranscript(videoId: string, lang: string, includeTimestamps: boolean, stripAds: boolean): Promise<{
      text: string;
      actualLang: string;
      availableLanguages: string[];
      adsStripped: number;
      adChaptersFound: number;
      metadata: {
        title: string;
        author: string;
        subscriberCount: string;
        viewCount: string;
        publishDate: string;
      };
    }> {
      try {
        const result = await getSubtitles({
          videoID: videoId,
          lang: lang,
          enableFallback: true,
        });
    
        let lines = result.lines;
        let adsStripped = 0;
    
        // Filter out lines that fall within ad chapters
        if (stripAds && result.adChapters.length > 0) {
          const originalCount = lines.length;
          lines = lines.filter(line => {
            const lineStartMs = line.start * 1000;
            // Check if this line falls within any ad chapter
            return !result.adChapters.some((ad: AdChapter) =>
              lineStartMs >= ad.startMs && lineStartMs < ad.endMs
            );
          });
          adsStripped = originalCount - lines.length;
          if (adsStripped > 0) {
            console.log(`[youtube-transcript] Filtered ${adsStripped} lines from ${result.adChapters.length} ad chapter(s): ${result.adChapters.map((a: AdChapter) => a.title).join(', ')}`);
          }
        }
    
        return {
          text: this.formatTranscript(lines, includeTimestamps),
          actualLang: result.actualLang,
          availableLanguages: result.availableLanguages.map((t: CaptionTrack) => t.languageCode),
          adsStripped,
          adChaptersFound: result.adChapters.length,
          metadata: result.metadata
        };
      } catch (error) {
        console.error('Failed to fetch transcript:', error);
        throw new McpError(
          ErrorCode.InternalError,
          `Failed to retrieve transcript: ${(error as Error).message}`
        );
      }
    }
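The chapter-based filter above can be exercised in isolation. The sample lines and the "Sponsor" chapter below are made-up data; the predicate is the same one getTranscript applies:

```typescript
interface Line { text: string; start: number; dur: number }
interface AdChapter { title: string; startMs: number; endMs: number }

// Made-up data: one ad chapter covering 10s-20s of the video.
const lines: Line[] = [
  { text: "intro",          start: 5,  dur: 3 },
  { text: "buy our thing",  start: 12, dur: 4 }, // starts inside the ad chapter
  { text: "actual content", start: 25, dur: 5 },
];
const adChapters: AdChapter[] = [{ title: "Sponsor", startMs: 10_000, endMs: 20_000 }];

// Drop any line whose start time falls within an ad chapter.
const kept = lines.filter(line => {
  const lineStartMs = line.start * 1000;
  return !adChapters.some(ad => lineStartMs >= ad.startMs && lineStartMs < ad.endMs);
});
const adsStripped = lines.length - kept.length;
```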
  • Formats transcript lines into a readable string, optionally with timestamps in [m:ss] or [h:mm:ss] format.
    private formatTranscript(transcript: TranscriptLine[], includeTimestamps: boolean): string {
      if (includeTimestamps) {
        return transcript
          .map(line => {
            const totalSeconds = Math.floor(line.start);
            const hours = Math.floor(totalSeconds / 3600);
            const mins = Math.floor((totalSeconds % 3600) / 60);
            const secs = totalSeconds % 60;
            // Use h:mm:ss for videos > 1 hour, mm:ss otherwise
            const timestamp = hours > 0
              ? `[${hours}:${mins.toString().padStart(2, '0')}:${secs.toString().padStart(2, '0')}]`
              : `[${mins}:${secs.toString().padStart(2, '0')}]`;
            return `${timestamp} ${line.text.trim()}`;
          })
          .filter(text => text.length > 0)
          .join('\n');
      }
      return transcript
        .map(line => line.text.trim())
        .filter(text => text.length > 0)
        .join(' ');
    }
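The timestamp rule can be isolated as a small pure function, extracted here for illustration with the same arithmetic as formatTranscript:

```typescript
// Mirrors formatTranscript's timestamp logic:
// [m:ss] under one hour, [h:mm:ss] at or above one hour.
function formatTimestamp(startSeconds: number): string {
  const totalSeconds = Math.floor(startSeconds);
  const hours = Math.floor(totalSeconds / 3600);
  const mins = Math.floor((totalSeconds % 3600) / 60);
  const secs = totalSeconds % 60;
  return hours > 0
    ? `[${hours}:${mins.toString().padStart(2, '0')}:${secs.toString().padStart(2, '0')}]`
    : `[${mins}:${secs.toString().padStart(2, '0')}]`;
}
```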
  • Core helper function that fetches YouTube transcripts via the internal /youtubei/v1/get_transcript API endpoint, using protobuf-encoded parameters, visitorData authentication, language fallback, and page-data extraction for captions, chapters, and metadata.
    export async function getSubtitles(options: {
      videoID: string;
      lang?: string;
      enableFallback?: boolean;
    }): Promise<SubtitleResult> {
      const { videoID, lang = 'en', enableFallback = true } = options;
    
      // Validate video ID format
      if (!videoID || typeof videoID !== 'string') {
        throw new Error('Invalid video ID: must be a non-empty string');
      }
    
      // Get page data (visitor data needed for API authentication)
      const { visitorData, availableLanguages, adChapters, metadata } = await getPageData(videoID);
    
      // Determine which language to use
      let targetLang = lang;
    
      if (availableLanguages.length > 0) {
        const hasRequestedLang = availableLanguages.some(t => t.languageCode === lang);
    
        if (!hasRequestedLang && enableFallback) {
          // Try English first
          const hasEnglish = availableLanguages.some(t => t.languageCode === 'en');
          if (hasEnglish) {
            targetLang = 'en';
            console.error(`[youtube-fetcher] Language '${lang}' not available, falling back to 'en'`);
          } else {
            // Use first available
            targetLang = availableLanguages[0].languageCode;
            console.error(`[youtube-fetcher] Language '${lang}' not available, falling back to '${targetLang}'`);
          }
        } else if (!hasRequestedLang) {
          throw new Error(`Language '${lang}' not available. Available: ${availableLanguages.map(t => t.languageCode).join(', ')}`);
        }
      }
    
      // Build request payload using ANDROID client to avoid FAILED_PRECONDITION errors
      // The ANDROID client bypasses YouTube's A/B test for poToken enforcement
      const params = buildParams(videoID, targetLang);
      const payload = JSON.stringify({
        context: {
          client: {
            hl: targetLang,
            gl: 'US',
            clientName: 'ANDROID',
            clientVersion: ANDROID_CLIENT_VERSION,
            androidSdkVersion: 30,
            visitorData: visitorData
          }
        },
        params: params
      });
    
      // Make API request
      let response: string;
      try {
        response = await httpsRequest({
          hostname: 'www.youtube.com',
          path: '/youtubei/v1/get_transcript?prettyPrint=false',
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
            'Content-Length': Buffer.byteLength(payload),
            'User-Agent': ANDROID_USER_AGENT,
            'Origin': 'https://www.youtube.com'
          }
        }, payload);
      } catch (err) {
        throw new Error(`Failed to fetch transcript API: ${(err as Error).message}`);
      }
    
      // Parse response with error handling
      let json: any;
      try {
        json = JSON.parse(response);
      } catch (err) {
        throw new Error(`Failed to parse YouTube API response: ${(err as Error).message}. Response preview: ${response.substring(0, 200)}`);
      }
    
      // Check for API-level errors
      if (json.error) {
        const errorMsg = json.error.message || json.error.code || 'Unknown API error';
        throw new Error(`YouTube API error: ${errorMsg}`);
      }
    
      // Extract transcript segments - handle both WEB and ANDROID response formats
      const webSegments = json?.actions?.[0]?.updateEngagementPanelAction?.content
        ?.transcriptRenderer?.content?.transcriptSearchPanelRenderer?.body
        ?.transcriptSegmentListRenderer?.initialSegments;
    
      const androidSegments = json?.actions?.[0]?.elementsCommand?.transformEntityCommand
        ?.arguments?.transformTranscriptSegmentListArguments?.overwrite?.initialSegments;
    
      const segments = webSegments || androidSegments || [];
    
      if (segments.length === 0) {
        throw new Error('No transcript available for this video. The video may not have captions enabled.');
      }
    
      // Convert to TranscriptLine format
      const lines = segments
        .filter((seg: any) => seg?.transcriptSegmentRenderer) // Skip section headers
        .map((seg: any) => {
          const renderer = seg.transcriptSegmentRenderer;
    
          // Handle both WEB format (snippet.runs) and ANDROID format (snippet.elementsAttributedString)
          const webText = renderer?.snippet?.runs?.map((r: any) => r.text || '').join('');
          const androidText = renderer?.snippet?.elementsAttributedString?.content;
          const text = webText || androidText || '';
    
          const startMs = parseInt(renderer?.startMs || '0', 10);
          const endMs = parseInt(renderer?.endMs || '0', 10);
    
          return {
            text: text,
            start: startMs / 1000,
            dur: (endMs - startMs) / 1000
          };
        })
        .filter((line: TranscriptLine) => line.text.length > 0);
    
      return {
        lines,
        requestedLang: lang,
        actualLang: targetLang,
        availableLanguages,
        adChapters,
        metadata
      };
    }
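The fallback selection in getSubtitles can be summarized as a pure function. This is an illustrative extraction, not code from the project; CaptionTrack is narrowed here to its languageCode field:

```typescript
interface Track { languageCode: string }

// Same selection order as getSubtitles: the requested language if present,
// else 'en', else the first available track; throws if fallback is disabled.
function pickLanguage(requested: string, available: Track[], enableFallback: boolean): string {
  if (available.length === 0) return requested;
  if (available.some(t => t.languageCode === requested)) return requested;
  if (!enableFallback) {
    throw new Error(`Language '${requested}' not available. Available: ${available.map(t => t.languageCode).join(', ')}`);
  }
  if (available.some(t => t.languageCode === 'en')) return 'en';
  return available[0].languageCode;
}
```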
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and open-world hints, but the description adds valuable behavioral context: the automatic language fallback mechanism and the ad-stripping functionality based on chapter markers. This goes beyond annotations by explaining conditional behaviors and processing logic, though it doesn't cover rate limits or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the core functionality and key behavioral traits (language fallback). Every word serves a purpose, with no redundancy or unnecessary elaboration, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, read-only operation) and the presence of both rich annotations and an output schema, the description is largely complete. It covers the main action and notable behaviors, though it could benefit from mentioning output format or error cases. The output schema likely handles return values, reducing the burden on the description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents all parameters. The description mentions language fallback and ad-stripping, which are already covered in the schema descriptions for 'lang' and 'strip_ads'. It adds no significant semantic information beyond what the schema provides, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Extract transcript'), resource ('from a YouTube video'), and input type ('URL or ID'). It also mentions the fallback behavior for language selection, which adds specificity. With no sibling tools to distinguish from, this is maximally clear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for extracting transcripts from YouTube videos, but provides no explicit guidance on when to use this tool versus alternatives (e.g., other transcript tools or manual methods). Since there are no sibling tools, it doesn't need to differentiate, but it lacks broader context about prerequisites or typical use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/kimtaeyoon83/mcp-server-youtube-transcript'

If you have feedback or need assistance with the MCP directory API, please join our Discord server