At Pondera, we are often asked whether fraud detection algorithms will ever completely replace human investigators. And while I can’t address the “ever” part of the question, I can confidently state that it will not happen in the foreseeable future. One of the major reasons for this? Prediction models, like many people, struggle to distinguish between cause and effect.
A Stanford University professor recently shared her research on this topic, which supports many of our own findings. She noted that while prediction algorithms are excellent at finding patterns in large data sets, their effectiveness is limited because they struggle to determine causation. One example she cited: algorithms have been shown to help identify patients who should not receive hip surgery because they would likely die of other causes. Those same algorithms, however, cannot prioritize which patients should receive the surgery.
In several cases, the professor notes, correlation can be as weak as 50%. And she rightly observes that while this may be acceptable in certain situations, governments simply cannot run such high-risk experiments with social welfare, economic policy, and other important matters. Unlike controlled environments, such as clinical trials that use placebos to test medications, the real world is simply too messy and unpredictable to control every factor.
This causation problem highlights an important intersection between human reasoning and prediction algorithms. We believe that in complex, rapidly changing environments like fraud detection, effective detection systems combine the power of modern detection algorithms with experienced human reasoning.
By leveraging the individual strengths of both machines and humans, we can analyze massive data sets and make sense of the findings. We regularly use the system to surface a problem, then ask human experts to help explain it. This makes the results actionable, which ultimately is what our government partners require.
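As a rough illustration of this division of labor (not Pondera's actual system — the data and function names here are hypothetical), a minimal sketch might have the machine score claims statistically and route only the outliers to a human review queue, leaving the "why" to investigators:

```python
from statistics import mean, stdev

def flag_for_review(claims, threshold=3.0):
    """Flag claims whose amount deviates strongly from the norm.

    The score only says *that* a claim is unusual, not *why* --
    explaining the cause remains the human investigator's job.
    """
    amounts = [c["amount"] for c in claims]
    mu, sigma = mean(amounts), stdev(amounts)
    review_queue = []
    for claim in claims:
        # Simple z-score: distance from the mean in standard deviations.
        score = abs(claim["amount"] - mu) / sigma if sigma else 0.0
        if score > threshold:
            review_queue.append({**claim, "anomaly_score": round(score, 2)})
    return review_queue

# Hypothetical data: 50 routine claims and one extreme outlier.
claims = [{"id": i, "amount": 100.0} for i in range(50)]
claims.append({"id": 99, "amount": 5000.0})
print(flag_for_review(claims))  # only claim 99 is queued for a human
```

A real deployment would use far richer models than a z-score, but the shape is the same: the algorithm narrows millions of records down to a short, reviewable list, and humans supply the causal explanation that makes each flag actionable.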