Explainable Deep Learning for AI-Generated Image Detection
DOI: https://doi.org/10.64751/

Keywords: Artificial Intelligence (AI), Explainable Artificial Intelligence (XAI), Deep Learning, AI-Generated Image Detection, Convolutional Neural Networks (CNN), Generative Adversarial Networks (GAN), Image Forensics, Synthetic Image Detection, Model Interpretability, Feature Visualization, Digital Image Analysis, Machine Learning, Fake Image Detection, Visual Explainability, Computer Vision.

Abstract
The rapid advancement of generative models has led to a significant increase in highly
realistic AI-generated images, raising serious concerns regarding misinformation, digital
forensics, and media authenticity. Traditional detection methods struggle to generalize across
diverse generative architectures and evolving synthesis techniques. This study proposes an
explainable deep learning framework for detecting AI-generated images using Convolutional
Neural Networks (CNNs) combined with Explainable Artificial Intelligence (XAI)
techniques. The proposed model leverages deep feature extraction capabilities of CNNs to
distinguish between authentic and synthetic images by learning subtle artifacts, texture
inconsistencies, and frequency-domain anomalies introduced during the generation process.
To address the “black-box” nature of deep learning models, interpretability methods such as
Grad-CAM and SHAP are integrated to provide visual and feature-level explanations of the
model’s predictions. These explanations highlight discriminative regions and patterns that
contribute most to classification decisions, enhancing transparency and trustworthiness.
Experimental results demonstrate that the proposed framework achieves high detection
accuracy across multiple datasets while maintaining robustness against variations in
generative techniques. Furthermore, the incorporation of XAI methods improves model
interpretability, making it suitable for real-world applications in digital forensics, content
moderation, and media verification. This work contributes toward building reliable and
transparent systems for combating the growing challenges posed by AI-generated visual
content.
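The Grad-CAM technique mentioned in the abstract can be illustrated with a minimal sketch. Assuming the last convolutional layer's feature maps and the gradients of the class score with respect to them are already available (random NumPy arrays stand in for real CNN outputs here; this is not the paper's implementation), Grad-CAM pools each channel's gradient into a weight and keeps only positive evidence:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Compute a Grad-CAM heatmap.

    feature_maps: (K, H, W) activations of the last conv layer
    gradients:    (K, H, W) gradients of the class score w.r.t. those maps
    """
    weights = gradients.mean(axis=(1, 2))              # alpha_k: global average pooling
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum over K channels
    cam = np.maximum(cam, 0)                           # ReLU: keep positive evidence only
    if cam.max() > 0:
        cam /= cam.max()                               # normalise to [0, 1] for overlay
    return cam

# Toy example with random stand-in tensors (8 channels, 7x7 spatial grid)
rng = np.random.default_rng(0)
fmaps = rng.standard_normal((8, 7, 7))
grads = rng.standard_normal((8, 7, 7))
heatmap = grad_cam(fmaps, grads)
print(heatmap.shape)  # (7, 7)
```

In a real detector, the resulting heatmap would be upsampled to the input resolution and overlaid on the image to show which regions drove the authentic-versus-synthetic decision.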
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
