Microsoft is launching a new AI-powered moderation service that it says is designed to foster safer online environments and communities. Called Azure AI Content Safety, the new offering, available through the Azure AI product platform, provides a range of AI models trained to detect "inappropriate" content across images and text.
The models — which can understand text in English, Spanish, German, French, Japanese, Portuguese, Italian, and Chinese — assign a severity score to flagged content, indicating to moderators which content requires action.

"Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren't effectively taking into account context or able to work in multiple languages," a Microsoft spokesperson said via email. "New [AI] models are able to understand content and cultural context so much better. They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed."
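To illustrate the workflow the article describes — models assign a severity score, and moderators act on it — here is a minimal, hypothetical sketch of how a moderation pipeline might route flagged items by score. The `route` function and its thresholds are illustrative assumptions, not part of Azure AI Content Safety's actual API; the category names simply echo common moderation categories.

```python
# Hypothetical sketch of severity-based routing for flagged content.
# route() and its thresholds are illustrative assumptions, not the
# Azure AI Content Safety API itself.

def route(severity: int) -> str:
    """Map a model-assigned severity score to a moderator action."""
    if severity == 0:
        return "allow"
    if severity <= 2:
        return "allow_with_log"
    if severity <= 4:
        return "queue_for_review"
    return "block"

# Example items a detection model might return (scores are made up).
flagged = [
    {"category": "Hate", "severity": 0},
    {"category": "Violence", "severity": 3},
    {"category": "SelfHarm", "severity": 6},
]

for item in flagged:
    item["action"] = route(item["severity"])
```

In a real deployment the severity values would come from the service's response rather than hard-coded data, and the thresholds would be tuned to each community's policies.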