
Interpretability of Machine Learning Models for Fraud Detection

Aug 26, 2020 | by Adam Lauz, Lior Finkelshtein

The ability to explain automated decision models is critical for gaining trust, maintaining high usage and ensuring quality. Users need to understand why the model makes certain decisions, especially when a decision is wrong. For example, consider your own interactions with your bank: if a transaction is declined, you want customer support to be able to explain why.

Interpretable models, such as “linear regression” and “decision trees,” offer a clear view of the features that most strongly influence a prediction. One downside of a “simple” model is that in some cases it cannot capture complex structure in the data, such as interactions between multiple features. However, careful feature engineering and feature selection can deliver high performance while keeping a model “lean” and interpretable.
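
To make this concrete, here is a minimal sketch of how an interpretable model exposes its own reasoning. It fits a logistic regression on synthetic data; the feature names (amount_zscore, new_device, foreign_ip, night_hour) and the data are invented for illustration and are not real fraud features.

```python
# Minimal sketch: reading an interpretable model directly.
# Feature names and data are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["amount_zscore", "new_device", "foreign_ip", "night_hour"]

# Synthetic transactions: 4 features, binary fraud label.
X = rng.normal(size=(1000, 4))
y = (0.9 * X[:, 0] + 1.5 * X[:, 1] + 0.4 * X[:, 2]
     + rng.normal(size=1000) > 1.5).astype(int)

model = LogisticRegression().fit(X, y)

# A linear model is interpretable "for free": each coefficient is the
# feature's contribution (in log-odds) per unit change of that feature.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>15}: {coef:+.2f}")
```

Because the model is linear in log-odds, reading the coefficients is the explanation; no extra tooling is needed.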

Today, advances in hardware offer more computational power, and machine learning (ML) algorithms are built to run efficiently in parallel across multiple machines. This has increased the use of more complex ML algorithms such as “deep learning” and “XGBoost.” When the data has a complex structure, these models can deliver higher accuracy.

Explaining the path that leads to a decision is much harder for these models. However, new developments such as “model-agnostic” methods let us “peek under the hood,” making it easier to understand how the different features contribute to a prediction.
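
As an illustration, the sketch below applies one such model-agnostic method, permutation importance, to a gradient-boosted classifier (standing in here for a more complex model such as XGBoost). The data and feature names are synthetic assumptions, not real fraud data.

```python
# Minimal sketch of a model-agnostic explanation: permutation importance
# applied to a gradient-boosted model. Data and names are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["amount_zscore", "new_device", "foreign_ip", "night_hour"]

X = rng.normal(size=(2000, 4))
# Fraud here depends on an interaction between amount and device novelty.
y = ((X[:, 0] * X[:, 1] > 0.5) | (X[:, 2] > 1.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# The model itself is hard to read, but shuffling one feature at a time
# and measuring the drop in accuracy reveals how much each feature matters.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name:>15}: {score:.3f}")
```

Permutation importance treats the model as a black box, so the same approach works for deep learning, gradient boosting or any other classifier.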

Understanding the RSA Risk Engine

The RSA Risk Engine helps our customers achieve fraud detection rates of up to 97 percent with transaction intervention of 5 percent or less, allowing for more detection with less customer friction. It is based on the Naïve Bayes algorithm, which is highly scalable, efficient and reliable, combined with RSA Fraud & Risk Intelligence’s domain expertise and a variety of information sources. The RSA Risk Engine assigns a unique risk score to every digital transaction. The risk score, together with the risk policy set by the organization, determines whether a user is challenged with step-up authentication. The RSA Risk Engine provides interpretability by pinpointing the most significant risk contributors of each transaction in a clear and simple way, allowing for deeper understanding and easier analysis.
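
For intuition only, here is a toy sketch of how a Naïve Bayes style score decomposes into per-feature contributions. It is not RSA's implementation; the feature names, likelihoods and prior are invented. It shows why, when the score is a sum of independent terms, the largest terms can be reported as a transaction's most significant risk contributors.

```python
# Toy sketch (not RSA's implementation) of a Naive Bayes log-odds risk
# score and its per-feature contributions. All numbers are invented.
import math

# Assumed per-feature likelihoods: P(value | fraud) and P(value | genuine).
likelihoods = {
    "new_device":   {"fraud": 0.60, "genuine": 0.05},
    "foreign_ip":   {"fraud": 0.40, "genuine": 0.10},
    "usual_amount": {"fraud": 0.30, "genuine": 0.80},
}
prior_fraud, prior_genuine = 0.001, 0.999

def risk_score(observed_features):
    """Log-odds of fraud; each observed feature adds an independent term."""
    score = math.log(prior_fraud / prior_genuine)
    contributions = {}
    for feature in observed_features:
        p = likelihoods[feature]
        term = math.log(p["fraud"] / p["genuine"])
        contributions[feature] = term
        score += term
    return score, contributions

score, contributions = risk_score(["new_device", "foreign_ip"])
print(f"total log-odds risk score: {score:.2f}")
# Because the score is a sum, the largest terms are the transaction's
# most significant risk contributors, which is what makes it explainable.
for feature, term in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{feature:>12}: {term:+.2f}")
```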

[Figure: a visual illustration of what the RSA Risk Engine takes into account]

Accuracy and interpretability are critical for decision-based algorithms

In just the past year, there have been countless examples of algorithms failing to deliver accurate information, demonstrating dangerous bias, being manipulated by end-users and more. These examples underscore that machine learning algorithms are imperfect, and that their flaws need to be understood in order to improve them.

It’s critical to achieve high accuracy while being able to interpret the results of machine learning decision-based models. When it comes to fraud prevention, the RSA Risk Engine is a leader in achieving high accuracy while enabling the ability to easily interpret the results. 
