mcp-google-sheets

de.json (24.1 kB)
{ "Transcribe and extract data from audio using AssemblyAI's Speech AI.": "Transcribe und extrahiere Daten aus Audio mit AssemblyAI's Speech AI.", "You can retrieve your AssemblyAI API key within your AssemblyAI [Account Settings](https://www.assemblyai.com/app/account?utm_source=activepieces).": "Sie können Ihren AssemblyAI API-Schlüssel in Ihren AssemblyAI [Kontoeinstellungen](https://www.assemblyai.com/app/account?utm_source=activepieces).", "Upload File": "Datei hochladen", "Transcribe": "Transcribe", "Get Transcript": "Get Transcript", "Get Transcript Sentences": "Transkripte Sätze abrufen", "Get Transcript Paragraphs": "Transkript-Absätze erhalten", "Get Transcript Subtitles": "Transkript-Untertitel erhalten", "Get Transcript Redacted Audio": "Transkript-Redacted Audio erhalten", "Search words in transcript": "Suchbegriffe im Protokoll", "List transcripts": "Protokolle auflisten", "Delete transcript": "Transkript löschen", "Run a Task using LeMUR": "Eine Aufgabe mit LeMUR ausführen", "Retrieve LeMUR response": "LeMUR Antwort abrufen", "Purge LeMUR request data": "LeMUR Daten löschen", "Custom API Call": "Eigener API-Aufruf", "Upload a media file to AssemblyAI's servers.": "Eine Mediendatei auf AssemblyAIs Server hochladen.", "Transcribe an audio or video file using AssemblyAI.": "Eine Audio- oder Videodatei mit AssemblyAI übertragen.", "Retrieves a transcript by its ID.": "Ruft ein Transkript durch seine ID ab.", "Retrieve the sentences of the transcript by its ID.": "Abrufen der Sätze des Transkripts durch seine ID.", "Retrieve the paragraphs of the transcript by its ID.": "Die Absätze des Transkripts durch seine ID abrufen.", "Export the transcript as SRT or VTT subtitles.": "Exportieren Sie das Transkript als SRT oder VTT Untertitel.", "Get the result of the redacted audio model.": "Holen Sie sich das Ergebnis des geretteten Audiomodells.", "Search through the transcript for keywords. You can search for individual words, numbers, or phrases containing up to five words or numbers.": "Durchsuchen Sie das Transkript nach Schlüsselwörtern. Sie können nach einzelnen Wörtern, Zahlen oder Phrasen suchen, die bis zu fünf Wörter oder Zahlen enthalten.", "Retrieve a list of transcripts you created.\nTranscripts are sorted from newest to oldest. The previous URL always points to a page with older transcripts.": "Eine Liste der von Ihnen erstellten Transkripte abrufen.\nTranskripte werden von neusten zu ältesten. 
Die vorherige URL verweist immer auf eine Seite mit älteren Abschriften.", "Remove the data from the transcript and mark it as deleted.": "Die Daten aus dem Transkript entfernen und als gelöscht markieren.", "Use the LeMUR task endpoint to input your own LLM prompt.": "Benutzen Sie den Task-Endpunkt von LeMUR, um Ihre eigene LLM-Eingabeaufforderung einzugeben.", "Retrieve a LeMUR response that was previously generated.": "Rufen Sie eine zuvor generierte LeMUR-Antwort ab.", "Delete the data for a previously submitted LeMUR request.\nThe LLM response data, as well as any context provided in the original request will be removed.": "Löscht die Daten für eine zuvor eingereichte LeMUR-Anfrage.\nDie LLM-Antwortdaten sowie alle Kontexte, die in der ursprünglichen Anfrage enthalten sind, werden entfernt.", "Make a custom API call to a specific endpoint": "Einen benutzerdefinierten API-Aufruf an einen bestimmten Endpunkt machen", "Audio File": "Audiodatei", "Audio URL": "Audio-URL", "Language Code": "Sprachcode", "Language Detection": "Spracherkennung", "Language Confidence Threshold": "Grenzwert für Sprachvertrauen", "Speech Model": "Sprachmodell", "Punctuate": "Punctuate", "Format Text": "Text formatieren", "Disfluencies": "Disfluenzen", "Dual Channel": "Doppelkanal", "Webhook URL": "Webhook-URL", "Webhook Auth Header Name": "Webhook Auth Headername", "Webhook Auth Header Value": "Webhook Auth Header Wert", "Key Phrases": "Schlüsselwörter", "Audio Start From": "Audio Start von", "Audio End At": "Audio Ende um", "Word Boost": "Wort-Boost", "Word Boost Level": "Wort-Boost Level", "Filter Profanity": "Profanität filtern", "Redact PII": "Redact PII", "Redact PII Audio": "Redact PII Audio", "Redact PII Audio Quality": "PII Audio Qualität Redact", "Redact PII Policies": "Redact PII Policies", "Redact PII Substitution": "PII-Substitution Redact", "Speaker Labels": "Lautsprecher-Labels", "Speakers Expected": "Lautsprecher erwartet", "Content Moderation": "Moderation der Inhalte", "Content Moderation Confidence": "Moderation des Inhalts", "Topic Detection": "Themenerkennung", "Custom Spellings": "Eigene Rechtschreibung", "Sentiment Analysis": "Stimmungsanalyse", "Auto Chapters": "Auto-Kapitel", "Entity Detection": "Entitäts-Erkennung", "Speech Threshold": "Sprach-Grenzwert", "Enable Summarization": "Zusammenfassung aktivieren", "Summary Model": "Zusammenfassungsmodell", "Summary Type": "Übersichts-Typ", "Enable Custom Topics": "Eigene Themen aktivieren", "Custom Topics": "Eigene Themen", "Wait until transcript is ready": "Warten, bis das Transkript fertig ist", "Throw if transcript status is error": "Werfen, wenn Transkript-Status ein Fehler ist", "Transcript ID": "Transkript-ID", "Subtitles Format": "Untertitelformat", "Number of Characters per Caption": "Anzahl der Zeichen pro Untertitel", "Download file?": "Datei herunterladen?", "Download File Name": "Dateiname herunterladen", "Words": "Wörter", "Limit": "Limit", "Status": "Status", "Created On": "Erstellt am", "Before ID": "Vor ID", "After ID": "Nach ID", "Throttled Only": "Nur gedrosselt", "Prompt": "Prompt", "Transcript IDs": "Transkript-ID", "Input Text": "Input Text", "Context": "Kontext", "Final Model": "Letztes Modell", "Maximum Output Size": "Maximale Ausgabegröße", "Temperature": "Temperatur", "LeMUR request ID": "LeMUR Anfrage-ID", "Method": "Methode", "Headers": "Kopfzeilen", "Query Parameters": "Abfrageparameter", "Body": "Körper", "Response is Binary ?": "Antwort ist binär?", "No Error on Failure": "Kein Fehler bei Fehler", "Timeout (in 
seconds)": "Timeout (in Sekunden)", "The File or URL of the audio or video file.": "Die Datei oder URL der Audio- oder Videodatei.", "The URL of the audio or video file to transcribe.": "Die URL der zu übertragenden Audio- oder Videodatei.", "The language of your audio file. Possible values are found in [Supported Languages](https://www.assemblyai.com/docs/concepts/supported-languages).\nThe default value is 'en_us'.\n": "The language of your audio file. Possible values are found in [Supported Languages](https://www.assemblyai.com/docs/concepts/supported-languages).\nThe default value is 'en_us'.\n", "Enable [Automatic language detection](https://www.assemblyai.com/docs/models/speech-recognition#automatic-language-detection), either true or false.": "Enable [Automatic language detection](https://www.assemblyai.com/docs/models/speech-recognition#automatic-language-detection), either true or false.", "The confidence threshold for the automatically detected language.\nAn error will be returned if the language confidence is below this threshold.\nDefaults to 0.\n": "The confidence threshold for the automatically detected language.\nAn error will be returned if the language confidence is below this threshold.\nDefaults to 0.\n", "The speech model to use for the transcription. When `null`, the \"best\" model is used.": "Das Sprachmodell, das für die Transkription verwendet wird. Wenn null`, wird das \"beste\" Modell verwendet.", "Enable Automatic Punctuation, can be true or false": "Automatische Satzzeichen, kann wahr oder falsch sein", "Enable Text Formatting, can be true or false": "Aktiviere Textformatierung, kann wahr oder falsch sein", "Transcribe Filler Words, like \"umm\", in your media file; can be true or false": "Füller-Wörter wie \"umm\" in Ihrer Mediendatei umwandeln; kann wahr oder falsch sein", "Enable [Dual Channel](https://www.assemblyai.com/docs/models/speech-recognition#dual-channel-transcription) transcription, can be true or false.": "Enable [Dual Channel](https://www.assemblyai.com/docs/models/speech-recognition#dual-channel-transcription) transcription, can be true or false.", "The URL to which we send webhook requests.\nWe sends two different types of webhook requests.\nOne request when a transcript is completed or failed, and one request when the redacted audio is ready if redact_pii_audio is enabled.\n": "The URL to which we send webhook requests.\nWe sends two different types of webhook requests.\nOne request when a transcript is completed or failed, and one request when the redacted audio is ready if redact_pii_audio is enabled.\n", "The header name to be sent with the transcript completed or failed webhook requests": "Der Header-Name, der mit dem Transkript versendet werden soll oder fehlgeschlagene Webhook-Anfragen", "The header value to send back with the transcript completed or failed webhook requests for added security": "Der Header-Wert, der mit dem Transkript vervollständigt oder fehlgeschlagene Webhook-Anfragen für zusätzliche Sicherheit zurückgesendet werden soll", "Enable Key Phrases, either true or false": "Aktiviere Schlüsselwörter, ob wahr oder falsch", "The point in time, in milliseconds, to begin transcribing in your media file": "Der Zeitpunkt in Millisekunden, um in Ihrer Mediendatei zu schreiben", "The point in time, in milliseconds, to stop transcribing in your media file": "Der Zeitpunkt in Millisekunden, um die Umwandlung in Ihre Mediendatei zu beenden", "The list of custom vocabulary to boost transcription probability for": "Die Liste des 
benutzerdefinierten Vokabulars zur Erhöhung der Transkriptionswahrscheinlichkeit für", "How much to boost specified words": "Wie viel bestimmte Wörter erhöhen sollen", "Filter profanity from the transcribed text, can be true or false": "Profanität aus dem überschriebenen Text filtern, kann wahr oder falsch sein", "Redact PII from the transcribed text using the Redact PII model, can be true or false": "PII aus dem transkribierten Text mit dem Modell Redact PII Redact kann wahr oder falsch sein", "Generate a copy of the original media file with spoken PII \"beeped\" out, can be true or false. See [PII redaction](https://www.assemblyai.com/docs/models/pii-redaction) for more details.": "Generate a copy of the original media file with spoken PII \"beeped\" out, can be true or false. See [PII redaction](https://www.assemblyai.com/docs/models/pii-redaction) for more details.", "Controls the filetype of the audio created by redact_pii_audio. Currently supports mp3 (default) and wav. See [PII redaction](https://www.assemblyai.com/docs/models/pii-redaction) for more details.": "Controls the filetype of the audio created by redact_pii_audio. Currently supports mp3 (default) and wav. See [PII redaction](https://www.assemblyai.com/docs/models/pii-redaction) for more details.", "The list of PII Redaction policies to enable. See [PII redaction](https://www.assemblyai.com/docs/models/pii-redaction) for more details.": "The list of PII Redaction policies to enable. See [PII redaction](https://www.assemblyai.com/docs/models/pii-redaction) for more details.", "The replacement logic for detected PII, can be \"entity_type\" or \"hash\". See [PII redaction](https://www.assemblyai.com/docs/models/pii-redaction) for more details.": "The replacement logic for detected PII, can be \"entity_type\" or \"hash\". See [PII redaction](https://www.assemblyai.com/docs/models/pii-redaction) for more details.", "Enable [Speaker diarization](https://www.assemblyai.com/docs/models/speaker-diarization), can be true or false": "Enable [Speaker diarization](https://www.assemblyai.com/docs/models/speaker-diarization), can be true or false", "Tells the speaker label model how many speakers it should attempt to identify, up to 10. See [Speaker diarization](https://www.assemblyai.com/docs/models/speaker-diarization) for more details.": "Tells the speaker label model how many speakers it should attempt to identify, up to 10. See [Speaker diarization](https://www.assemblyai.com/docs/models/speaker-diarization) for more details.", "Enable [Content Moderation](https://www.assemblyai.com/docs/models/content-moderation), can be true or false": "Enable [Content Moderation](https://www.assemblyai.com/docs/models/content-moderation), can be true or false", "The confidence threshold for the Content Moderation model. Values must be between 25 and 100.": "Der Konfidenzschwellenwert für das Modell der Moderation. 
Werte müssen zwischen 25 und 100 liegen.", "Enable [Topic Detection](https://www.assemblyai.com/docs/models/topic-detection), can be true or false": "Enable [Topic Detection](https://www.assemblyai.com/docs/models/topic-detection), can be true or false", "Customize how words are spelled and formatted using to and from values.\nUse a JSON array of objects of the following format:\n```\n[\n {\n \"from\": [\"original\", \"spelling\"],\n \"to\": \"corrected\"\n }\n]\n```\n": "Customize how words are spelled and formatted using to and from values.\nUse a JSON array of objects of the following format:\n```\n[\n {\n \"from\": [\"original\", \"spelling\"],\n \"to\": \"corrected\"\n }\n]\n```\n", "Enable [Sentiment Analysis](https://www.assemblyai.com/docs/models/sentiment-analysis), can be true or false": "Enable [Sentiment Analysis](https://www.assemblyai.com/docs/models/sentiment-analysis), can be true or false", "Enable [Auto Chapters](https://www.assemblyai.com/docs/models/auto-chapters), can be true or false": "Enable [Auto Chapters](https://www.assemblyai.com/docs/models/auto-chapters), can be true or false", "Enable [Entity Detection](https://www.assemblyai.com/docs/models/entity-detection), can be true or false": "Enable [Entity Detection](https://www.assemblyai.com/docs/models/entity-detection), can be true or false", "Reject audio files that contain less than this fraction of speech.\nValid values are in the range [0, 1] inclusive.\n": "Reject audio files that contain less than this fraction of speech.\nValid values are in the range [0, 1] inclusive.\n", "Enable [Summarization](https://www.assemblyai.com/docs/models/summarization), can be true or false": "Enable [Summarization](https://www.assemblyai.com/docs/models/summarization), can be true or false", "The model to summarize the transcript": "Das Modell, das das Transkript zusammenfasst", "The type of summary": "Der Typ der Zusammenfassung", "Enable custom topics, either true or false": "Eigene Themen aktivieren, ob wahr oder falsch", "The list of custom topics": "Die Liste der benutzerdefinierten Themen", "Wait until the transcript status is \"completed\" or \"error\" before moving on to the next step.": "Warten Sie, bis der Transkript-Status \"vollständig\" oder \"Fehler\" ist, bevor Sie zum nächsten Schritt weitergehen.", "If the transcript status is \"error\", throw an error.": "Wenn der Transkript-Status \"Fehler\" ist, werfen Sie einen Fehler.", "The maximum number of characters per caption": "Die maximale Anzahl an Zeichen pro Beschriftung", "The desired file name for storing in ActivePieces. Make sure the file extension is correct.": "Der gewünschte Dateiname für die Speicherung in ActivePieces. 
Stellen Sie sicher, dass die Dateiendung korrekt ist.", "Keywords to search for": "Suchbegriffe", "Maximum amount of transcripts to retrieve": "Maximale Anzahl der abzurufenden Transkripte", "Filter by transcript status": "Nach Transkript-Status filtern", "Only get transcripts created on this date": "Nur Transkripte, die zu diesem Datum erstellt werden", "Get transcripts that were created before this transcript ID": "Erhalte Transkripte, die vor dieser Transkript-ID erstellt wurden", "Get transcripts that were created after this transcript ID": "Erhalte Transkripte, die nach dieser Transkript-ID erstellt wurden", "Only get throttled transcripts, overrides the status filter": "Nur gedrosselte Transkripte erhalten, überschreibt den Statusfilter", "Your text to prompt the model to produce a desired output, including any context you want to pass into the model.": "Ihr Text, der das Modell dazu auffordert, eine gewünschte Ausgabe zu erzeugen, einschließlich aller Kontexte, die Sie an das Modell übergeben möchten.", "A list of completed transcripts with text. Up to a maximum of 100 files or 100 hours, whichever is lower.\nUse either transcript_ids or input_text as input into LeMUR.\n": "A list of completed transcripts with text. Up to a maximum of 100 files or 100 hours, whichever is lower.\nUse either transcript_ids or input_text as input into LeMUR.\n", "Custom formatted transcript data. Maximum size is the context limit of the selected model, which defaults to 100000.\nUse either transcript_ids or input_text as input into LeMUR.\n": "Custom formatted transcript data. Maximum size is the context limit of the selected model, which defaults to 100000.\nUse either transcript_ids or input_text as input into LeMUR.\n", "Context to provide the model. This can be a string or a free-form JSON value.": "Kontext, um das Modell zur Verfügung zu stellen. Dies kann ein String oder ein frei formbarer JSON-Wert sein.", "The model that is used for the final prompt after compression is performed.\n": "The model that is used for the final prompt after compression is performed.\n", "Max output size in tokens, up to 4000": "Maximale Ausgabegröße in Token, bis 4000", "The temperature to use for the model.\nHigher values result in answers that are more creative, lower values are more conservative.\nCan be any value between 0.0 and 1.0 inclusive.\n": "The temperature to use for the model.\nHigher values result in answers that are more creative, lower values are more conservative.\nCan be any value between 0.0 and 1.0 inclusive.\n", "The ID of the LeMUR request whose data you want to delete. 
This would be found in the response of the original request.": "Die ID der LeMUR-Anfrage, deren Daten Sie löschen möchten, finden Sie in der Antwort der ursprünglichen Anfrage.", "Authorization headers are injected automatically from your connection.": "Autorisierungs-Header werden automatisch von Ihrer Verbindung injiziert.", "Enable for files like PDFs, images, etc..": "Aktivieren für Dateien wie PDFs, Bilder, etc..", "English (Global)": "Englisch (Global)", "English (Australian)": "Englisch (australisch)", "English (British)": "Englisch (britisch)", "English (US)": "Englisch (USA)", "Spanish": "Spanisch", "French": "Französisch", "German": "Deutsch", "Italian": "Italienisch", "Portuguese": "Portugiesisch", "Dutch": "Niederländisch", "Afrikaans": "Afrikaner", "Albanian": "Albanisch", "Amharic": "Amharic", "Arabic": "Arabisch", "Armenian": "Armenisch", "Assamese": "Assamisch", "Azerbaijani": "Aserbaidschan", "Bashkir": "Bashkir", "Basque": "Baskisch", "Belarusian": "Belarussisch", "Bengali": "Bengalisch", "Bosnian": "Bosnisch", "Breton": "Breton", "Bulgarian": "Bulgarisch", "Burmese": "Burmese", "Catalan": "Katalanisch", "Chinese": "Chinesisch", "Croatian": "Kroatisch", "Czech": "Tschechisch", "Danish": "Dänisch", "Estonian": "Estnisch", "Faroese": "Faroese", "Finnish": "Finnisch", "Galician": "Galizisch", "Georgian": "Georgisch", "Greek": "Griechisch", "Gujarati": "Gujarati", "Haitian": "Haitian", "Hausa": "Hausa", "Hawaiian": "Hawaiisch", "Hebrew": "Hebräisch", "Hindi": "Hannah", "Hungarian": "Ungarisch", "Icelandic": "Icelandic", "Indonesian": "Indonesisch", "Japanese": "Japanisch", "Javanese": "Javanese", "Kannada": "Kannada", "Kazakh": "Kazakh", "Khmer": "Khmer", "Korean": "Koreanisch", "Lao": "Lao", "Latin": "Latein", "Latvian": "Lettisch", "Lingala": "Lingala", "Lithuanian": "Litauisch", "Luxembourgish": "Luxemburgisch", "Macedonian": "Makedonisch", "Malagasy": "Malagasy", "Malay": "Malaiisch", "Malayalam": "Malayalam", "Maltese": "Maltese", "Maori": "Maori", "Marathi": "Marathi", "Mongolian": "Mongolisch", "Nepali": "Nepali", "Norwegian": "Norwegisch", "Norwegian Nynorsk": "Norwegian Nynorsk", "Occitan": "Occitan", "Panjabi": "Panjabi", "Pashto": "Pashto", "Persian": "Persisch", "Polish": "Polnisch", "Romanian": "Rumänisch", "Russian": "Russisch", "Sanskrit": "Sanskrit", "Serbian": "Serbisch", "Shona": "Senna", "Sindhi": "Sindhi", "Sinhala": "Sinhala", "Slovak": "Slowakisch", "Slovenian": "Slovenian", "Somali": "Somali", "Sundanese": "Sundanese", "Swahili": "Swahili", "Swedish": "Schwedisch", "Tagalog": "Tagalog", "Tajik": "Tadschikistan", "Tamil": "Tamil", "Tatar": "Tatar", "Telugu": "Telugu", "Thai": "Thailändisch", "Tibetan": "Tibetisch", "Turkish": "Türkisch", "Turkmen": "Turkmen", "Ukrainian": "Ukrainische", "Urdu": "Urdu", "Uzbek": "Uzbek", "Vietnamese": "Vietnamese", "Welsh": "Walisisch", "Yiddish": "Jiddisch", "Yoruba": "Yoruba", "Best": "Beste", "Nano": "Nano", "Low": "Niedrig", "Default": "Standard", "High": "Hoch", "MP3": "MP3", "WAV": "WAV", "Account Number": "Kundennummer", "Banking Information": "Bankinformationen", "Blood Type": "Bluttyp", "Credit Card CVV": "Kreditkarten CVV", "Credit Card Expiration": "Kreditkartenablauf", "Credit Card Number": "Kreditkartennummer", "Date": "Datum", "Date Interval": "Datum-Intervall", "Date of Birth": "Geburtsdatum", "Driver's License": "Führerschein", "Drug": "Drogen", "Duration": "Dauer", "Email Address": "E-Mail-Adresse", "Event": "Ereignis", "Filename": "Dateiname", "Gender Sexuality": "Geschlechtssexualität", "Healthcare 
Number": "Nummer der Gesundheitsversorgung", "Injury": "Verletzte", "IP Address": "IP-Adresse", "Language": "Sprache", "Location": "Standort", "Marital Status": "Ehe Status", "Medical Condition": "Medizinische Bedingung", "Medical Process": "Medizinischer Prozess", "Money Amount": "Geldbetrag", "Nationality": "Nationalität", "Number Sequence": "Nummernfolge", "Occupation": "Besetzung", "Organization": "Organisation", "Passport Number": "Reisepassnummer", "Password": "Kennwort", "Person Age": "Personenalter", "Person Name": "Personenname", "Phone Number": "Telefonnummer", "Physical Attribute": "Physisches Attribut", "Political Affiliation": "Politische Zugehörigkeit", "Religion": "Religion", "Statistics": "Statistiken", "Time": "Zeit", "URL": "URL", "US Social Security Number": "US-Sozialversicherungsnummer", "Username": "Benutzername", "Vehicle ID": "Fahrzeug-ID", "Zodiac Sign": "Sternzeichen", "Entity Name": "Entitätsname", "Hash": "Hash", "Informative": "Informativ", "Conversational": "Konversation", "Catchy": "Einbrüchig", "Bullets": "Kugeln", "Bullets Verbose": "Geschlossen", "Gist": "Gist", "Headline": "Überschrift", "Paragraph": "Absatz", "SRT": "SRT", "VTT": "VTT", "Queued": "Warteschlange", "Processing": "Verarbeitung", "Completed": "Abgeschlossen", "Error": "Fehler", "Claude 3.5 Sonnet (on Anthropic)": "Claude 3.5 Sonnet (auf Anthropic)", "Claude 3 Opus (on Anthropic)": "Claude 3 Opus (auf Anthropic)", "Claude 3 Haiku (on Anthropic)": "Claude 3 Haiku (auf Anthropic)", "Claude 3 Sonnet (on Anthropic)": "Claude 3 Sonnet (auf Anthropic)", "Claude 2.1 (on Anthropic)": "Claude 2.1 (auf Anthropic)", "Claude 2 (on Anthropic)": "Claude 2 (auf Anthropic)", "Claude Instant 1.2 (on Anthropic)": "Claude Instant 1.2 (auf Anthropic)", "Basic": "Einfache", "Mistral 7B (Hosted by AssemblyAI)": "Mistral 7B (gehostet von AssemblyAI)", "GET": "ERHALTEN", "POST": "POST", "PATCH": "PATCH", "PUT": "PUT", "DELETE": "LÖSCHEN", "HEAD": "HEAD" }

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/activepieces/activepieces'
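The same endpoint can be queried from any HTTP client. The TypeScript sketch below is illustrative only: it assumes Node 18+ (global fetch) and treats the response as untyped JSON, since no response schema is documented on this page.

// Minimal sketch: fetch this server's MCP directory entry.
// Assumes Node 18+ for the global fetch; the response schema is not
// documented here, so it is logged as generic JSON rather than typed.
const url = 'https://glama.ai/api/mcp/v1/servers/activepieces/activepieces';

async function main(): Promise<void> {
  const res = await fetch(url);
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  const server: unknown = await res.json();
  console.log(JSON.stringify(server, null, 2));
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});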

If you have feedback or need assistance with the MCP directory API, please join our Discord server.