How 'cyborgs' can win at the fraud fight


The "singularity" is the hypothesized moment when machine learning becomes so advanced that computers start self-educating, triggering runaway growth in artificial intelligence that some believe will result in robots that are far more intelligent than humans.

Some of the smartest people in the world, such as Stephen Hawking and Elon Musk, consider this a very serious concern. But in business, AI is not something to be feared. Any look back at analytics in 2017 proves that the business community sees machine learning as an opportunity to solve just about any problem, from targeted marketing to fighting fraud.

Today, there are many AI fraud management tools on the market. These tools work well for large businesses that collect a lot of consumer data but are significantly less effective for small or medium-sized enterprises. To perform at its peak, AI needs a huge amount of data, and most SMEs simply don't have enough.
Even for large enterprises, these services have their drawbacks. Implementing AI tools involves a steep learning curve, and they rarely operate at full efficiency. We've spoken to merchants whose tools returned false decline rates of more than 15%. Seeing so much lost business, these merchants would typically abandon the tools and return to manual screening.

We take a different approach, called the cyborg approach. While AI can be effective, it is not yet ready to beat fraud efficiently and reliably. The secret to fighting fraud effectively is to use AI in conjunction with human beings to create an unbeatable anti-fraud team.

In 1997, Deep Blue beat Kasparov at a game of chess. Kasparov was the world-leading competitive chess player at the time, and Deep Blue was a chess-playing computer designed by IBM. When a computer beat the world's best chess player, it sparked a global conversation about artificial intelligence that continues to this day.

It became commonplace to assume that computers will eventually, inevitably, become better than us at everything, from playing chess to driving cars to writing hit songs. However, later (and less well-known) developments in chess-playing computers would suggest something else entirely.

In 2005, two New Hampshire chess players competed in a "freestyle" tournament with the help of AI, developing a methodology for when to use the computer and when to opt instead for human judgment. This human-AI collaboration won the tournament, and for the next decade "cyborg" chess players would consistently beat both human and AI competitors.

This calls the whole concept of the singularity into question; AI may be better than humans at some things, but when humans and AI work together, they're unstoppable.

Fighting fraud on all fronts requires the ability to both process big data, which AI excels at, and to understand how fraudsters think, which humans excel at.

A key limitation of many of today's AI fraud management tools is that true criminal fraud accounts for only about 15% of chargebacks. The bigger challenge for businesses is friendly fraud, which is invisible to most rule-based fraud monitoring systems and rarely filtered out by AI tools. Relying exclusively on these tools can allow most fraud to go unnoticed.

The other side of the coin is that if fraud screening tools are too unforgiving, merchants will lose business due to a high rate of false positives.

The space between these two extremes is where human knowledge and "common sense" are required. Don't make the mistake of giving up on machine intelligence just because it's not blocking 100% of fraud or it's returning too many false positives – the key is to supplement AI toolsets with human fraud analysts.

In the past, automated fraud management tools used man-made rule sets to screen for fraud (and many still do), but today's AI technologies can "learn". With enough client data, they can identify patterns beyond what a human is capable of and use those patterns to spot unusual behaviors. However, to function with a high degree of accuracy and efficiency, these systems require human checkpoints to "teach" the AI what to look for, to separate the fraudulent from the merely uncommon, and to guide the machine learning toward the right kinds of behaviors.
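One simple way to picture such a human checkpoint is a score-based routing rule: transactions the model is confident about are decided automatically, while the ambiguous middle band is queued for an analyst. The sketch below is purely illustrative – the thresholds, function names, and score scale are assumptions, not any particular vendor's API.

```python
# Illustrative human-in-the-loop routing for model risk scores.
# Thresholds and names are hypothetical, not from any real fraud tool.

APPROVE_BELOW = 0.20   # low risk: auto-approve
DECLINE_ABOVE = 0.90   # high risk: auto-decline

def route(risk_score: float) -> str:
    """Return the action for a transaction scored between 0 and 1."""
    if risk_score < APPROVE_BELOW:
        return "approve"
    if risk_score > DECLINE_ABOVE:
        return "decline"
    # The ambiguous middle band is where human judgment adds the most value;
    # the analyst's decision can also be fed back to retrain the model.
    return "manual_review"

decisions = {tx: route(score)
             for tx, score in [("tx1", 0.05), ("tx2", 0.95), ("tx3", 0.50)]}
```

The point of the middle band is that each analyst decision doubles as a labeled training example, which is exactly the "teaching" role described above.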

This is especially true for SMEs, which may have relatively small data sets. With less data, it becomes harder for AI to filter out coincidences, leading to more false positives, so the active participation of a human analyst is much more important.

Manual fraud screening becomes less and less practical as merchants grow in size. When using AI, the amount of human intervention required will vary from company to company, so it's critical to do A/B testing to discover the most efficient approach. In our experience, this collaborative, "cyborg" approach is the most effective way for a merchant to fight fraud.
Just like playing chess, fraud prevention works best when AI and humans work together.
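To make the A/B testing idea concrete, here is a minimal sketch of comparing two screening policies by their false decline rate – the share of legitimate orders that get declined. The outcome records and group labels are invented for illustration; a real test would use each merchant's own transaction outcomes.

```python
# Hypothetical A/B comparison of two fraud-screening policies.
# Each outcome is a (declined, legitimate) pair; data is made up.

def false_decline_rate(outcomes):
    """Fraction of legitimate orders that were declined."""
    legit = [o for o in outcomes if o[1]]
    if not legit:
        return 0.0
    return sum(1 for declined, _ in legit if declined) / len(legit)

# Group A: AI-only screening; Group B: AI plus manual review of borderline cases.
group_a = [(True, True), (True, True), (False, True), (False, True), (True, False)]
group_b = [(False, True), (False, True), (False, True), (True, True), (True, False)]

rate_a = false_decline_rate(group_a)  # 0.5  -> half of legitimate orders lost
rate_b = false_decline_rate(group_b)  # 0.25 -> fewer good orders declined
```

Comparing rates like these (alongside fraud caught and review costs) is one way a merchant could decide how much human intervention its "cyborg" setup needs.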
