Azure AI Content Safety Container for Text and Image Moderation
⚠️ DISCLAIMER: This software is provided "as is" and the author disclaims all warranties with regard to this software including all implied warranties of merchantability and fitness. In no event shall the author be liable for any special, direct, indirect, or consequential damages or any damages whatsoever resulting from loss of use, data or profits, whether in an action of contract, negligence or other tortious action, arising out of or in connection with the use or performance of this software.
🔒 SECURITY: This plugin processes text and image content through external API calls to Azure AI Content Safety services. Please ensure you comply with your organization's data privacy policies and Azure's terms of service when using this plugin. The plugin author is not responsible for any data privacy or security issues arising from the use of this software.
Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. The service includes text and image APIs that allow you to detect harmful material.
Every harm category the service applies also comes with a severity level rating. The severity level is meant to indicate the severity of the consequences of showing the flagged content.
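As a rough illustration of how a consumer might act on these ratings: with the service's default four-level output, each category is reported with a severity of 0, 2, 4, or 6 (higher means more severe). The threshold and decision logic below are example policy choices, not part of the API.

```python
# Illustrative sketch of acting on per-category severity ratings.
# The threshold is an example policy choice, not part of the service.
from typing import Dict

def decide(category_severities: Dict[str, int], threshold: int = 4) -> str:
    """Return 'block' if any category meets the threshold, else 'allow'."""
    if any(sev >= threshold for sev in category_severities.values()):
        return "block"
    return "allow"

result = decide({"Hate": 2, "Sexual": 0, "Violence": 6, "SelfHarm": 0})
print(result)  # -> block
```

Tightening or loosening the threshold per category (e.g. blocking Self-Harm at severity 2 but Violence only at 4) is a common refinement of this pattern.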
1) Harm Categories
| Category | Description | API term |
|---|---|---|
| Hate and Fairness | Hate and fairness harms refer to any content that attacks or uses discriminatory language with reference to a person or identity group based on certain differentiating attributes of these groups. | Hate |
| Sexual | Sexual describes language related to anatomical organs and genitals, romantic relationships and sexual acts, acts portrayed in erotic or affectionate terms, including those portrayed as an assault or a forced sexual violent act against one's will. | Sexual |
| Violence | Violence describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, guns, and related entities. | Violence |
| Self-Harm | Self-harm describes language related to physical actions intended to purposely hurt, injure, or damage one's body, or to kill oneself. | SelfHarm |
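The API terms in the table above appear directly in analysis requests. As a hedged sketch, a text-analysis request body following the public Content Safety REST schema (`text`, `categories`, `outputType`) might be built like this; verify the field names against the API version your container exposes:

```python
import json

# Sketch of a text:analyze request body using the four API category terms.
# Field names follow the public Content Safety REST API; confirm them
# against the API version of your container deployment.
def build_text_request(text: str) -> dict:
    return {
        "text": text,
        "categories": ["Hate", "Sexual", "Violence", "SelfHarm"],
        "outputType": "FourSeverityLevels",
    }

payload = build_text_request("example input")
print(json.dumps(payload))
```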
2) Severity Levels
Containers let you use a subset of the Azure AI Content Safety features in your own environment. With content safety containers, you can build a content safety application architecture optimized for both robust cloud capabilities and edge locality. Containers help you meet specific security and data governance requirements.
Available Containers:
The content safety container is available in public preview. Containers in preview are still under development and don't meet Microsoft's stability and support requirements.
This is a Dify plugin that integrates with the Azure AI Content Safety Container to analyze both text and image content for harmful material. The plugin can detect various types of harmful content including hate speech, violence, sexual content, and self-harm.
1) Deploy Azure AI Content Safety Container
Before using this plugin, make sure you have an Azure AI Content Safety Container properly set up and running. See Install and run content safety containers with Docker for setup instructions. Please verify that your container is accessible and responding to API requests before configuring this plugin.
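As a deployment sketch only: Azure AI services containers are generally started with a `docker run` that accepts `Eula`, `Billing`, and `ApiKey` arguments. The image path and resource sizes below are placeholders — take the exact values from the linked Docker setup guide.

```shell
# Illustrative only — substitute the image path, resource sizes, and
# credentials from the official Docker setup guide for this container.
docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
  <container-image-from-the-setup-guide> \
  Eula=accept \
  Billing=<your-azure-endpoint-uri> \
  ApiKey=<your-azure-api-key>
```

Once running, the container serves its API on the mapped port (here, `http://localhost:5000`), which is the endpoint you will configure in the plugin.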
2) Update Dify ENV
When users send images to the chatbox, a URL that can be used to access each image is generated (each image corresponds to one URL). The image moderation tool obtains each image by accessing these URLs, converts it to base64, and then sends it to the Image Analyze API for review. Therefore, the correct file-access environment variables must be set in order to generate a corresponding accessible URL. Generally, this should be consistent with the main domain name used to access the Dify Portal.
The structure is as follows:
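As an assumption-laden sketch (the exact variable name depends on your Dify version — check your deployment's `.env.example`), the relevant entry in the Dify `.env` file typically looks like:

```
# Hypothetical example — confirm the variable name against your Dify version.
# The file-access base URL should match the domain used to reach the Dify Portal.
FILES_URL=https://your-dify-domain.example.com
```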
1) Get Azure AI Content Safety Container Tools
Azure AI Content Safety Container can be installed via the Plugin Marketplace, GitHub, or a Local Package File. Please choose the installation method that best suits your needs. If you are installing via Local Package File, please apply the required setting for the component.
2) Authentication
On the Dify navigation page, go to [Tools] > [Azure AI Content Safety Container] > [To Authorize] to fill in the API Endpoint, API Version and optional headers.


For example:
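The values below are illustrative only — a locally hosted container and a commonly used API version; substitute the address and version of your own deployment:

```
API Endpoint: http://localhost:5000
API Version:  2023-10-01
Headers:      (optional; e.g. an authentication header if your deployment is gated)
```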
3) Using the tool
You can use this tool in Chatflow or Workflow. The tool accepts both text and image inputs.
Parameters:
Image Requirements:
All parameters are optional. The tool automatically detects when Text or Image inputs are provided (non-empty) and calls the corresponding API for content moderation.


The tool provides several output variables for use in your workflow:
Example structure:
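As a hedged illustration only — the exact output variable names come from the plugin's output schema, which is not reproduced here — a text-moderation result following the public Content Safety response shape (`categoriesAnalysis` with per-category `severity`) might look like:

```json
{
  "text_analysis": {
    "categoriesAnalysis": [
      { "category": "Hate", "severity": 0 },
      { "category": "Violence", "severity": 2 }
    ]
  }
}
```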
1) Example 1: Text Moderation – Harmful Category

2) Example 2: Text Moderation – Using Block List
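As a sketch of how a blocklist enters the moderation request: the `blocklistNames` and `haltOnBlocklistHit` fields follow the public `text:analyze` request schema (verify against your container's API version), and the blocklist name below is made up for illustration.

```python
import json

# Hedged sketch of a text:analyze body that also checks a blocklist.
# "blocklistNames" and "haltOnBlocklistHit" follow the public request
# schema; the blocklist name is a made-up example.
def build_text_request_with_blocklist(text: str, blocklist_names: list) -> dict:
    return {
        "text": text,
        "blocklistNames": blocklist_names,
        "haltOnBlocklistHit": True,
        "categories": ["Hate", "Sexual", "Violence", "SelfHarm"],
    }

payload = build_text_request_with_blocklist("example input", ["my-blocklist"])
print(json.dumps(payload["blocklistNames"]))
```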

3) Example 3: Image Moderation – Harmful Category (Single Image, Multiple Images)


4) Example 4: Text and Image Moderation with Block List

See *Secure your AI Apps with Azure AI Content Safety Container*.