Welcome to the Pondera FraudCast, a weekly blog where we post information on fraud trends, lessons learned from client engagements, and observations from our investigators in the field. We hope you’ll check back often to stay current with our efforts to combat fraud, waste, and abuse in large government programs.
As a company that works with government clients, we spend a tremendous amount of time and money responding to Requests for Proposals (RFPs). We understand that governments use RFPs to ensure competitive bidding and to articulate their requirements. However, the process still causes enough angst for prospective bidders that, ironically, it often limits competition.
We wrote in a previous blog post about lengthy RFP procurement cycles and their impact on the final project. Today I’d like to discuss the format of the RFPs themselves, which often causes confusion, leading to large numbers of vendor questions and, in turn, to delayed timelines and incorrectly submitted bids. I confess that I have never been on the “other side of the table” writing an RFP, and I can only imagine how difficult it must be. But I still have one simple suggestion that I wish government agencies would adopt before releasing an RFP.
Before releasing an RFP to the vendor community, I suggest that governments run an internal “mock” procurement: “release” the bid to a few agency employees and ask them to respond to it. They don’t have to provide actual answers, just an outline, so they can confirm they understand what the RFP requires, where responses should go, how the format works, and other structural issues. It’s important that these reviewers have had nothing to do with writing the RFP itself; anyone who helped draft it will naturally understand what was intended and miss the ambiguities a fresh reader would catch.
Commonly confusing issues we see in RFPs include where to place a Statement of Work (in tables or in text), repeated questions, seemingly mutually exclusive statements or requirements, and “thrown in” requirements that belong in other sections and break up the flow of the response.
I think government officials would be amazed at how much confusion and time they could take out of their procurements by performing this simple quality assurance exercise. It would also reduce the number of questions the agency has to respond to, keeping the focus on issues of substance rather than administrative or formatting details. Finally, it would lead to more uniform responses, allowing governments to evaluate bids on their merits rather than having to hunt for answers to their requirements.
One of my colleagues recently returned from a conference on government program integrity with an interesting anecdote. He recounted a vendor presentation in which the speaker touted a 52% accuracy rate for the vendor’s fraud lead generation system. So… nearly half of the system’s leads were false positives. Not so sure I’d brag about that.
High false positive rates waste investigative time and money and cause unwarranted intrusions into the lives of legitimate program beneficiaries and service providers. Ultimately, they erode confidence in the system itself, and investigators revert to more manual detection methods. Given all the important services governments deliver and the immense political pressure they endure, this is clearly unacceptable.
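To make the arithmetic concrete, here is a minimal sketch of how a system’s accuracy rate translates into wasted investigative effort. The lead volume and hours-per-case figures below are hypothetical assumptions for illustration, not any vendor’s real numbers:

```python
# Illustrative only: hypothetical lead volumes and case-hour figures,
# not any vendor's actual data.

def wasted_effort(leads, accuracy, hours_per_case):
    """Return (false_positives, wasted_hours) for a batch of leads.

    'accuracy' here is the share of generated leads that turn out
    to be genuine fraud; the remainder are false positives that
    still consume investigator time.
    """
    false_positives = round(leads * (1 - accuracy))
    return false_positives, false_positives * hours_per_case

# A system touting 52% accuracy, applied to a hypothetical batch of
# 1,000 leads at an assumed 8 investigator-hours per case:
fp, hours = wasted_effort(1000, 0.52, 8)
print(fp, hours)  # 480 false positives, 3,840 wasted investigator-hours
```

Even a modest improvement in accuracy compounds quickly at program scale, which is exactly why benchmarks matter.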
Shortly after hearing this story, we were asked to respond to a question about false positive rates and any existing industry standards or even benchmarks. While every vendor, including Pondera, makes claims about its system’s efficacy, very few standards actually exist. Meanwhile, our clients (the government program administrators) are generally subject to improper payment standards imposed on them by the federal government.
I think there is a great opportunity, even a responsibility, for governments to create these standards. Fraud detection standards would challenge the vendor community to “put up or shut up,” spurring more innovation. They could also be raised as they are met and surpassed, driving constant improvement. And they would give governments a uniform method for measuring vendor performance.
It is true that fraud detection systems still rely on quality program data and can suffer from the old adage “garbage in, garbage out.” So governments would still share in the responsibility for meeting any new standards. But clearly, there is more we can do. And it would benefit all parties involved… except, of course, the fraudsters.
I believe that simple things can make a big difference. This week, for example, I went through self-checkout at a local grocery store and the keypad gave me the options of “debit card” or “all other tenders” to complete my transaction. “All other tenders”—who talks like that? No doubt there was a group of people who decided that, technically, “tenders” was the best word to cover all the other options. Never mind that it makes the system more confusing. That’s my problem.
Software systems suffer from this problem perhaps more than any other consumer product. I remember the old joke about having to go to the “start” button to stop the computer. It still happens, even though experience has shown us that the single most important factor in the success of software is usability.
Put simply, even the most powerful system is completely worthless if people can’t figure out how to use it. My own brother discovered this when he recently switched from an iPhone to an Android phone for the additional capabilities. Not a very technical person, he quickly switched back, complaining that he was utterly confused by the “full-fledged computer” he was carrying around in his pocket.
At Pondera, we make the claim that our system is “built by investigators, for investigators.” And it’s true. Our most important design principle is to “mask” the underlying complexity of the system and provide analysts and investigators with an intuitive system that works the way they do. Technical people can’t do this. Data scientists can’t do this. Only investigators can do this. That’s why we hire them and task them with our most important work.