This workflow automates the monitoring of messages in a Telegram chat for toxic language, providing real-time moderation. The process begins with a Telegram trigger that activates whenever a message, edited message, or channel post is received. The message text is then sent to the Google Perspective API, which scores it for attributes such as profanity, identity attack, and threats. If the profanity score exceeds 0.7, the workflow sends a warning message back to the chat, notifying users that toxic language is not tolerated. If the message does not exceed this threshold, no action is taken.
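For readers who want to see the core logic outside the workflow editor, here is a minimal sketch in Python of the same analyze-then-warn flow. It assumes a Perspective API key and a Telegram bot token are available as environment variables (the names PERSPECTIVE_API_KEY and TELEGRAM_BOT_TOKEN, and the helper function names, are illustrative, not part of the workflow itself); the 0.7 cutoff mirrors the workflow's profanity threshold.

```python
import os
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
PROFANITY_THRESHOLD = 0.7  # same cutoff as the workflow


def analyze_message(text: str) -> dict:
    """Score the text with the Perspective API for the attributes the workflow checks."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"PROFANITY": {}, "IDENTITY_ATTACK": {}, "THREAT": {}},
    }
    resp = requests.post(
        PERSPECTIVE_URL,
        params={"key": os.environ["PERSPECTIVE_API_KEY"]},  # illustrative env var name
        json=body,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["attributeScores"]


def moderate(chat_id: int, text: str) -> None:
    """Warn the chat if the profanity score exceeds the threshold; otherwise do nothing."""
    scores = analyze_message(text)
    profanity = scores["PROFANITY"]["summaryScore"]["value"]
    if profanity > PROFANITY_THRESHOLD:
        token = os.environ["TELEGRAM_BOT_TOKEN"]  # illustrative env var name
        requests.post(
            f"https://api.telegram.org/bot{token}/sendMessage",
            json={
                "chat_id": chat_id,
                "text": "Please keep the conversation respectful; toxic language is not tolerated here.",
            },
            timeout=10,
        )
    # Below the threshold: take no action, matching the workflow's behavior.
```

In the actual workflow, the Telegram trigger supplies the chat ID and message text for each incoming update, so the equivalent of `moderate` runs automatically on every message, edited message, or channel post.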
This workflow is ideal for managing online communities and chat groups, ensuring a respectful environment by automatically flagging and addressing inappropriate language. It reduces manual moderation effort and promotes healthy interactions in Telegram groups or channels.