DLHellMe
by DLHellMe

api_scrape_channel

Extract posts from Telegram channels using API methods. Specify URL, date ranges, and post limits to collect channel content programmatically.

Instructions

Scrape a Telegram channel using the API (fast and efficient)

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| url | Yes | The Telegram channel URL (e.g., https://t.me/channelname) | |
| max_posts | No | Maximum number of posts to scrape (0 for unlimited) | 0 |
| date_from | No | Start date in ISO format (optional) | |
| date_to | No | End date in ISO format (optional) | |
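For example, a call that scrapes up to 100 posts from a single month could pass arguments like the following (the channel URL is a placeholder, and the date bounds are illustrative ISO-8601 strings):

```typescript
// Illustrative arguments object for an api_scrape_channel call.
// Only `url` is required; max_posts of 0 would mean unlimited.
const args = {
  url: "https://t.me/examplechannel", // required: channel URL (placeholder)
  max_posts: 100,                     // optional post limit
  date_from: "2024-01-01T00:00:00Z",  // optional lower date bound
  date_to: "2024-01-31T23:59:59Z"     // optional upper date bound
};
```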

Implementation Reference

  • Core handler for the 'api_scrape_channel' tool: validates the API connection, normalizes the scrape options, and calls TelegramApiScraper.scrape(). Large results (more than 50 posts) are saved to a JSON file and returned as a summary with a sample; small results are formatted with MarkdownFormatter.
    async handleApiScrapeChannel(this: any, args: any): Promise<any> {
      if (!this._apiScraper || !this._apiScraper.isConnected()) {
        return {
          content: [{
            type: 'text',
            text: '❌ Not connected to Telegram API. Please use telegram_api_login first.'
          }]
        };
      }
      
      try {
        const options = {
          url: args.url || args.channel,
          maxPosts: args.max_posts !== undefined ? args.max_posts : (args.limit !== undefined ? args.limit : 0),
          dateFrom: args.date_from ? new Date(args.date_from) : undefined,
          dateTo: args.date_to ? new Date(args.date_to) : undefined
        };
        
        const result = await this._apiScraper.scrape(options);
        
        if (result.error) {
          return {
            content: [{
              type: 'text',
              text: `❌ Scraping failed: ${result.error}`
            }]
          };
        }
        
        // Check if we have a large amount of data
        const postCount = result.posts?.length || 0;
        const isLargeData = postCount > 50;
        
        if (isLargeData) {
          // Save to file to avoid Claude's limits
          const fs = await import('fs/promises');
          const path = await import('path');
          
          const timestampParts = new Date().toISOString().split('.');
          const timestamp = (timestampParts[0] || '').replace(/:/g, '-');
          const urlParts = (args.url || args.channel || '').split('/');
          const channelName = urlParts.pop() || 'unknown';
          const filename = `${channelName}_${timestamp}_api.json`;
          const filepath = path.join('scraped_data', filename);
          
          await fs.mkdir('scraped_data', { recursive: true });
          await fs.writeFile(filepath, JSON.stringify(result, null, 2));
          
          // Return summary instead of full data
          const summary = {
            channel: result.channel_name,
            total_posts: postCount,
            date_range: postCount > 0 ? {
              first: result.posts[postCount - 1].date,
              last: result.posts[0].date
            } : null,
            saved_to_file: filename,
            sample_posts: result.posts?.slice(0, 5) || []
          };
          
          return {
            content: [{
              type: 'text',
              text: `📊 Channel Summary (${postCount} posts - Full data saved to file)\n\n` +
                `Channel: ${summary.channel}\n` +
                `Total posts: ${summary.total_posts}\n` +
                `Date range: ${summary.date_range ? `${new Date(summary.date_range.first).toLocaleDateString()} - ${new Date(summary.date_range.last).toLocaleDateString()}` : 'N/A'}\n` +
                `\n💾 Full data saved to: scraped_data/${filename}\n\n` +
                `Sample of first 5 posts:\n` +
                summary.sample_posts.map((p: any) => `\n📅 ${new Date(p.date).toLocaleDateString()}\n${p.content?.substring(0, 100)}...\n👁 ${p.views || 0} views`).join('\n---\n') +
                `\n\n✅ To analyze all posts, use the saved JSON file.`
            }]
          };
        } else {
          // Small data - return as usual
          const formatter = new (await import('./formatters/markdown-formatter.js')).MarkdownFormatter();
          const markdown = formatter.format(result);
          
          return {
            content: [{
              type: 'text',
              text: `${markdown}\n\n✅ Scraped using Telegram API - Fast and reliable!`
            }]
          };
        }
      } catch (error) {
        return {
          content: [{
            type: 'text',
            text: `❌ API scraping failed: ${error instanceof Error ? error.message : 'Unknown error'}`
          }]
        };
      }
    },
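    The alias handling at the top of the handler (`url` vs. `channel`, `max_posts` vs. `limit`) and the timestamped-filename construction can be read as two pure functions. A minimal sketch mirroring the handler's behavior (function names are my own, not from the source):

    ```typescript
    interface ScrapeArgs {
      url?: string;
      channel?: string;
      max_posts?: number;
      limit?: number;
      date_from?: string;
      date_to?: string;
    }

    // Mirrors the handler's alias resolution: `url` wins over `channel`,
    // `max_posts` over `limit`, and the post limit defaults to 0 (unlimited).
    function buildScrapeOptions(args: ScrapeArgs) {
      return {
        url: args.url || args.channel,
        maxPosts: args.max_posts !== undefined
          ? args.max_posts
          : (args.limit !== undefined ? args.limit : 0),
        dateFrom: args.date_from ? new Date(args.date_from) : undefined,
        dateTo: args.date_to ? new Date(args.date_to) : undefined
      };
    }

    // Mirrors the filename logic: channel slug from the URL, plus an
    // ISO timestamp (seconds precision) with ':' replaced by '-'.
    function buildFilename(url: string, now: Date = new Date()): string {
      const channelName = url.split('/').pop() || 'unknown';
      const timestamp = (now.toISOString().split('.')[0] || '').replace(/:/g, '-');
      return `${channelName}_${timestamp}_api.json`;
    }
    ```

    Note that an explicit `max_posts: 0` is preserved (it means unlimited), which is why the code checks `!== undefined` rather than using `||`.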
  • Input schema definition for the 'api_scrape_channel' tool, defining parameters like url (required), max_posts, date_from, date_to.
    {
      name: 'api_scrape_channel',
      description: 'Scrape a Telegram channel using the API (fast and efficient)',
      inputSchema: {
        type: 'object',
        properties: {
          url: {
            type: 'string',
            description: 'The Telegram channel URL (e.g., https://t.me/channelname)'
          },
          max_posts: {
            type: 'number',
            description: 'Maximum number of posts to scrape (0 for unlimited)',
            default: 0
          },
          date_from: {
            type: 'string',
            description: 'Start date in ISO format (optional)'
          },
          date_to: {
            type: 'string',
            description: 'End date in ISO format (optional)'
          }
        },
        required: ['url']
      }
    },
  • src/server.ts:98-99 (registration)
    Registration in the tool dispatch switch statement: routes 'api_scrape_channel' calls to handleApiScrapeChannel.
    case 'api_scrape_channel':
      return await this.handleApiScrapeChannel(args);
  • src/server.ts:763-764 (registration)
    Binding the handler from apiHandlers to the server class instance for use in tool dispatch.
    private handleApiLogin = apiHandlers.handleApiLogin.bind(this);
    private handleApiScrapeChannel = apiHandlers.handleApiScrapeChannel.bind(this);
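    The registration pattern above (handlers defined in a separate module, bound to the server instance as class fields, then routed through a switch) can be sketched as follows. Everything except the tool name and handler name is a simplified stand-in for the real server class:

    ```typescript
    // Hypothetical handler module: functions reference `this`, so they can be
    // bound to whichever server instance registers them.
    const apiHandlers = {
      async handleApiScrapeChannel(this: { name: string }, args: { url: string }) {
        return { content: [{ type: 'text', text: `${this.name} scraping ${args.url}` }] };
      }
    };

    class Server {
      name = 'telegram-mcp-server';

      // Class-field binding fixes `this` once at construction time, so the
      // dispatch switch can call the handler without knowing where it lives.
      private handleApiScrapeChannel = apiHandlers.handleApiScrapeChannel.bind(this);

      async callTool(name: string, args: any) {
        switch (name) {
          case 'api_scrape_channel':
            return await this.handleApiScrapeChannel(args);
          default:
            throw new Error(`Unknown tool: ${name}`);
        }
      }
    }
    ```

    Keeping the handlers in their own module keeps the server class thin, while `bind(this)` lets each handler still access instance state such as the `_apiScraper` connection.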
