Mary Breslin, CFE
Can a system or software be responsible for fraud? Like a voyeur, I sat and watched as the system, automatically and without human intervention, concealed fraud. Let me explain.
I recently completed a fraud investigation in which the system — hardware and software — provided both the opportunity for the fraud and an automated concealment. The fraud was a financially significant skimming scheme that had been ongoing for years and included several individuals colluding to commit occupational fraud. As is not unusual in large fraud cases, the “blame game” started after the conclusion of the investigation.
Who was responsible? The operational management team? They oversaw the daily activities and were overly trusting of the employees.
The executives? They set the tone and the company culture, and they established policies and procedures that sorely lacked appropriate guidance and expectations for employees.
The software company? They had created the system that provided the opportunity and, more concerning, the concealment. Everything about the system design, reporting and audit trail made the fraud easy to commit and provided nothing that would have helped management identify it.
Frauds. Understanding them and how they are discovered is a topic everyone gets excited about. They captivate our interest and woo us with their exploits. But the aftermath of a fraud, and how we clean up the mess, is sometimes even more complex, and more interesting. In this case, should the software company bear any responsibility? Some of it, or all of it? Should it have been able to predict this abuse of its system? These became serious questions in the aftermath for the companies involved, as well as for the insurance companies and lawyers.
This particular fraud is still playing out, so I don’t have the final outcome. But the case raises a question that is asked more and more frequently in the aftermath of frauds involving software: Does the software company bear any legal responsibility for “fraud proofing” its systems?
Of course, predicting every possible abuse is not realistic. But should software companies, at a minimum, brainstorm possible fraud scenarios and build controls and reporting around those? I think they should. It’s not unreasonable to expect a software company to anticipate potential abuses for known industry fraud risks and provide some protection and reporting for them beyond the normal business controls that all companies build into their software.
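To make this concrete, here is a minimal sketch, in Python, of the kind of fraud-scenario reporting a development team might build in. The scenario, transaction types and names are hypothetical (the actual system in this case is not described in detail): it simply flags employees whose volume of a legitimate but abusable transaction type, such as voids, is a statistical outlier relative to their peers.

```python
# Hypothetical sketch of a fraud-scenario control: flag employees whose
# count of an abusable-but-legitimate transaction type (e.g., voids)
# stands out from their peer group. All names and numbers are invented.
from statistics import mean, stdev

def flag_outliers(counts, threshold=1.5):
    """Return names whose count exceeds the peer mean by more than
    `threshold` sample standard deviations."""
    values = list(counts.values())
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # everyone is identical; nothing stands out
    return [name for name, n in counts.items()
            if (n - mu) / sigma > threshold]

# Illustrative data: cashier D voids far more transactions than peers.
voids_per_cashier = {"A": 12, "B": 9, "C": 11, "D": 48, "E": 10}
print(flag_outliers(voids_per_cashier))  # prints ['D']
```

A real control would of course be richer, comparing rates rather than raw counts, trending over time and routing exceptions to management, but even a simple report like this gives managers something the system in my case never provided: visibility.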
Are software companies equipped to do this? Maybe some, but probably not most. In this particular case, the opening that allowed the fraud was tied directly to a significant, legitimate business requirement: a type of transaction that occurred many times a day in the normal course of business, and one the system was designed to accommodate. Unfortunately, while the system accommodated that need, little or no consideration was given to preventing or detecting abuse and fraud. The result was a multimillion-dollar fraud.
This year at the 31st Annual ACFE Global Fraud Conference, I spoke on critical thinking and fighting fraud. As part of that talk I discussed biases, one of which is automation bias: an overreliance on systems, and an assumption, without evidence, that those systems are doing or preventing certain things. Unless people have a technological background, they rarely question what a system is doing (or what it should be doing); instead, they rely blindly on the system and the reports it produces.
Once the fraud was fully understood, everyone, even the software engineers who built the system, was stunned by what had really been happening. They had never imagined that such a routine activity could be abused in this way, so nothing had been done to track, monitor or report on that type of activity.
How, then, does this happen? This is not to condemn the software engineers. They focus on turning business needs and requirements into systems and software that facilitate, track and report on operational transactions, and they are very skilled at that. But when the focus is purely on development, it can be very difficult to see unintended consequences and potential risks.
It takes many perspectives and skills to see all the potential needs and risks — especially potential fraud risks. Few software engineers are fraud experts. Until that changes, more software engineering firms should consider hiring CFEs to participate in the design and development phases of new software, review updates and changes, and continually weigh in on emerging risks.
Software firms need to incorporate CFEs into the process to protect both their customers from fraud and themselves from potential liability. It could also help answer the question, “How do we ensure our system isn’t facilitating or contributing to a fraud?”