AI-Powered Content Moderation System
DOI: https://doi.org/10.64751/
Abstract
With the rapid growth of user-generated content on social media and other online platforms, intelligent and scalable content moderation systems have become increasingly crucial. Automated, AI-based moderation systems should not only locate toxic, dangerous, and inappropriate multimodal content but also keep the moderation process transparent and credible. We present an AI-based content moderation model that analyzes images, text, and audio. Image moderation uses deep transfer learning architectures such as VGG16, Xception, and ResNet50, and the system is made more resilient by a hybrid ensemble that combines Xception and ResNet50. Image preparation includes resizing, normalization, conversion to NumPy arrays, and feature extraction with pretrained convolutional neural networks. Text moderation uses the transformer-based language models BERT and RoBERTa, together with appropriate preprocessing, label encoding, and embedding techniques. Audio moderation transcribes speech with OpenAI Whisper and then classifies the transcript with the trained transformer models. Explainable AI methods such as LIME and SHAP ensure that the models' decisions can be interpreted. For real-time moderation services, the system is deployed with the Flask framework and SQLite-based authentication. Experimental evaluation shows that BERT achieves 99.20% accuracy on text classification and the hybrid ensemble achieves 93.10% accuracy on image classification, indicating that both models are dependable and applicable in practical scenarios.
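The image-preparation steps named in the abstract (resizing, normalization, conversion to NumPy arrays, then feeding a pretrained CNN) could be sketched as below. This is a minimal illustration, not the paper's implementation: the nearest-neighbour resize, the [0, 1] scaling, and the helper name `preprocess_image` are all assumptions, since the abstract does not state the exact preprocessing parameters.

```python
import numpy as np

def preprocess_image(img, target_size=(224, 224)):
    """Resize, normalize, and batch an image array.

    Hypothetical helper mirroring the resize / normalize / NumPy
    conversion steps described in the abstract. A nearest-neighbour
    resize is used here for self-containment; a real pipeline would
    typically use a library resizer before the pretrained CNN.
    """
    h, w = img.shape[:2]
    rows = np.arange(target_size[0]) * h // target_size[0]
    cols = np.arange(target_size[1]) * w // target_size[1]
    resized = img[rows][:, cols]                    # nearest-neighbour resize
    normalized = resized.astype(np.float32) / 255.0  # scale pixels to [0, 1]
    return np.expand_dims(normalized, axis=0)        # add batch dimension

# Example: a dummy 480x640 RGB image becomes a (1, 224, 224, 3) batch
# that a pretrained backbone such as ResNet50 or Xception could consume.
x = preprocess_image(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8))
print(x.shape)  # (1, 224, 224, 3)
```

The batched tensor would then be passed to the pretrained convolutional backbone for feature extraction, as the abstract describes.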
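The audio pipeline the abstract describes (Whisper transcription followed by classification of the transcript) can be outlined as follows. This is only a structural sketch: `moderate_audio` and the keyword-based `classify_text` stub are hypothetical stand-ins for the paper's fine-tuned transformer models, and the actual OpenAI Whisper call is shown only in a comment.

```python
def transcribe(audio_path):
    """Placeholder for OpenAI Whisper transcription, e.g.:

        import whisper
        model = whisper.load_model("base")
        return model.transcribe(audio_path)["text"]
    """
    raise NotImplementedError("plug in a real speech-to-text backend")

def classify_text(text, toxic_terms=("hate", "attack")):
    # Keyword stub standing in for the fine-tuned BERT/RoBERTa classifier.
    return "toxic" if any(t in text.lower() for t in toxic_terms) else "safe"

def moderate_audio(audio_path, transcriber=transcribe, classifier=classify_text):
    # Pipeline from the abstract: speech -> transcript -> moderation label.
    return classifier(transcriber(audio_path))

# Usage with an injected transcriber, so the sketch runs without audio files:
print(moderate_audio("clip.wav", transcriber=lambda p: "I hate this"))  # toxic
```

Injecting the transcriber and classifier as parameters keeps the two stages independently replaceable, which matches the abstract's design of reusing the text-moderation models for audio.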
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
