An Intelligent Loan Default Prediction System Using Machine Learning with Explainable AI Integration
DOI: https://doi.org/10.64751/

Keywords: Loan Default Prediction, Machine Learning, Logistic Regression, Random Forest, Support Vector Machine, Explainable AI, SHAP, Credit Risk Analysis, Financial Analytics, Data Preprocessing

Abstract
Loan default prediction has become a critical component in modern financial systems, enabling
banks and financial institutions to minimize risk and make informed lending decisions. With the
rapid growth of digital banking and large-scale financial datasets, traditional rule-based systems
are no longer sufficient to accurately assess borrower risk. This project presents an intelligent
Loan Default Prediction System that leverages machine learning techniques combined with
explainable artificial intelligence (XAI) to enhance prediction accuracy and transparency.
The proposed system integrates multiple supervised learning algorithms, including Logistic
Regression, Random Forest, and Support Vector Machine (SVM), to classify whether a borrower
is likely to default on a loan. The system is implemented using Python and provides an interactive
graphical user interface (GUI) built with Tkinter, allowing users to load datasets, preprocess data,
train models, visualize results, and perform predictions seamlessly.
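The multi-model setup described above can be sketched with scikit-learn as follows. The abstract does not state hyperparameters, so library defaults are assumed; `probability=True` on the SVM is an assumption made here so that probability scores are available for the ROC curves mentioned later.

```python
# Sketch of the three supervised classifiers named in the abstract
# (hyperparameters are assumptions; the paper does not specify them).
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def build_models():
    """Return the three supervised classifiers used for default prediction."""
    return {
        "Logistic Regression": LogisticRegression(max_iter=1000),
        "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
        # probability=True enables predict_proba, needed later for ROC analysis
        "SVM": SVC(probability=True, random_state=42),
    }
```

Each model exposes the same `fit`/`predict` interface, which is what lets a GUI train and compare them interchangeably.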
A robust preprocessing pipeline is incorporated to handle real-world data challenges such as
missing values, categorical variables, and feature scaling. Missing values are handled using mean
imputation, while categorical variables are transformed using one-hot encoding. Standardization
is applied to ensure optimal model performance. The dataset is then split into training and testing
sets to evaluate model performance effectively.
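The preprocessing steps listed above (mean imputation, one-hot encoding, standardization) map directly onto a scikit-learn `ColumnTransformer`. A minimal sketch, assuming the caller supplies the numeric and categorical column names (the dataset's actual columns are not given in the abstract):

```python
# Preprocessing pipeline: mean imputation + standardization for numeric
# columns, one-hot encoding for categorical columns.
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

def make_preprocessor(numeric_cols, categorical_cols):
    """Build the transformer implementing the steps described in the text."""
    numeric = Pipeline([
        ("impute", SimpleImputer(strategy="mean")),  # fill gaps with the column mean
        ("scale", StandardScaler()),                 # standardize to zero mean, unit variance
    ])
    categorical = OneHotEncoder(handle_unknown="ignore")
    return ColumnTransformer([
        ("num", numeric, numeric_cols),
        ("cat", categorical, categorical_cols),
    ])
```

The transformer is fit on the training split only and then applied to the test split, so the held-out data never influences the imputation means or scaling statistics.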
The system evaluates model performance using accuracy metrics and visual tools such as ROC
curves and model comparison graphs. These visualizations help users understand model
effectiveness and select the best-performing algorithm. Additionally, the integration of SHAP
(SHapley Additive exPlanations) provides interpretability by explaining feature contributions to
predictions, making the model more transparent and trustworthy for financial decision-making.
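The evaluation step can be sketched as below: each model is fit, then scored on the held-out split with accuracy and ROC AUC (the quantity summarized by the ROC curves mentioned above), and the best performer can be picked from the resulting table. Feature attributions for the tree model could then be computed with `shap.TreeExplainer`; that part is omitted here since it requires the external `shap` package.

```python
# Fit each model and compare held-out accuracy and ROC AUC, as described
# in the evaluation step above (metric choice follows the abstract).
from sklearn.metrics import accuracy_score, roc_auc_score

def evaluate_models(models, X_train, X_test, y_train, y_test):
    """Return {name: {"accuracy": ..., "roc_auc": ...}} for each model."""
    scores = {}
    for name, model in models.items():
        model.fit(X_train, y_train)
        proba = model.predict_proba(X_test)[:, 1]  # P(default), used for the ROC curve
        scores[name] = {
            "accuracy": accuracy_score(y_test, model.predict(X_test)),
            "roc_auc": roc_auc_score(y_test, proba),
        }
    return scores
```

Selecting the best-performing algorithm is then a one-liner, e.g. `max(scores, key=lambda n: scores[n]["roc_auc"])`.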

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.