Discover the advanced content moderation features provided by ChatBotKit to ensure the safety and integrity of bot-user interactions. Enable content scanning, language detection, and automatic refusal to protect against harmful and inappropriate content.

ChatBotKit comes with advanced content moderation features that are essential for maintaining the integrity and safety of bot-user interactions. By utilizing these features, developers can ensure that the content generated by and for their bots remains respectful, safe, and free from harmful language.

Features

  1. Content Scanning: Once content moderation is enabled, all incoming and outgoing content will be meticulously scanned.
  2. Language Detection: The system can recognize harmful, hateful, and other types of inappropriate language.
  3. Automatic Refusal: If flagged content is detected, the bot will automatically refuse to respond, ensuring that harmful content doesn't get propagated.
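The scanning and categorization steps above can be pictured as a simple classification pass over each message. The following is only an illustrative sketch: ChatBotKit's real scanner is model-based, and the `scanContent` function, the `BLOCKLIST` keyword table, and the category labels here are hypothetical placeholders standing in for it.

```typescript
// Hypothetical sketch of a content scan. A keyword table stands in
// for ChatBotKit's actual model-based classifier.
type ModerationResult = {
  flagged: boolean;
  categories: string[]; // illustrative labels, e.g. ["hate", "violence"]
};

// Placeholder terms mapped to illustrative category labels.
const BLOCKLIST: Record<string, string> = {
  hateterm: "hate",
  threatterm: "violence",
};

function scanContent(text: string): ModerationResult {
  const categories = new Set<string>();
  for (const word of text.toLowerCase().split(/\W+/)) {
    if (word in BLOCKLIST) categories.add(BLOCKLIST[word]);
  }
  return { flagged: categories.size > 0, categories: [...categories] };
}
```

A real scanner would return probability scores per category rather than a boolean, but the shape of the result (flagged plus category labels) matches the behavior the features above describe.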

Enabling Content Moderation

To enable content moderation for your bots and integrations:

  1. Go to the Bot Advanced Settings.
  2. Toggle the Moderation switch to ON.

Remember: Once enabled, all content – both incoming and outgoing – will be subject to content moderation. This provides a comprehensive shield against potential harm.

How it Works

  • When a user sends a message to the bot, ChatBotKit will scan the content before processing.
  • If inappropriate language or content is detected, the message will be flagged.
  • The bot will not process flagged content and will instead send a default refusal message. This ensures that inappropriate prompts don't result in any undesired responses.
  • You can view flagged content in your conversations.
  • You will receive an email notification if a message is flagged for moderation.
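Taken together, these steps amount to a moderation guard placed in front of the bot's normal processing. The sketch below shows that control flow; `scanContent`, `generateReply`, `recordFlagged`, and `notifyByEmail` are hypothetical stand-ins for ChatBotKit's internal behavior, not its actual API.

```typescript
// Hypothetical sketch of the moderation flow described above.
// All function names are illustrative stand-ins.
const REFUSAL_MESSAGE = "I'm sorry, but I can't respond to that message.";

const flaggedLog: string[] = [];

function scanContent(text: string): { flagged: boolean } {
  // Stand-in scanner: flags a placeholder term.
  return { flagged: text.toLowerCase().includes("hateterm") };
}

function recordFlagged(text: string): void {
  // Flagged content becomes visible in the conversation log.
  flaggedLog.push(text);
}

function notifyByEmail(_text: string): void {
  // Stand-in for the email notification, e.g. queueing a mail job.
}

function generateReply(text: string): string {
  // Stand-in for the bot's normal response generation.
  return `Echo: ${text}`;
}

function handleIncomingMessage(text: string): string {
  const result = scanContent(text); // 1. scan before processing
  if (result.flagged) {
    recordFlagged(text);            // 2. flagged content is logged
    notifyByEmail(text);            // 3. owner is notified by email
    return REFUSAL_MESSAGE;         // 4. bot refuses instead of replying
  }
  return generateReply(text);       // clean content is processed normally
}
```

The key design point is that the scan runs before the message ever reaches response generation, so a flagged prompt can never influence the model's output.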