We usually think of card fraud as hurting only the merchant who accepts the stolen card, but when one merchant accepts payment from a stolen card, it hurts other merchants as well.
Criminals who sell stolen credit cards perform quality control using bots, and stopping those bots can strike at the root of the problem and ultimately prevent much more fraud.
Breaches are a problem that isn't going away anytime soon. After a breach, stolen cards trickle out to dozens of retail-level card shops, where fraudsters can purchase card details in a convenient, modern e-commerce experience. Stolen card vendors test their cards prior to selling them: a tested and known-good card from a vendor with a reputation for quality can sell for as much as $20 or $40. Untested cards from vendors without a reputation for quality go for as little as $0.30. The criminals have a big incentive to test their cards and sell only high-quality goods.
How do criminals convert nearly worthless untested cards into highly sought-after tested cards? They test their stolen cards by making small-dollar purchases on high-volume websites, which avoids both using up the available credit and tripping fraud detection.
With millions of cards stolen in a single data breach, criminals must use bots to test their cards. Testing a million cards by hand would take prohibitively long. But with botnets available to rent for as little as $2 an hour for 10,000 nodes, a hundred thousand cards can be tested in less time than it takes the hacker to grab another cup of coffee.
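The coffee-break claim holds up to a back-of-the-envelope calculation. The botnet size and rental price come from the paragraph above; the per-test duration is an illustrative assumption, not a measured figure:

```python
# Back-of-the-envelope: how fast a rented botnet can test stolen cards.
# Botnet size is from the text; SECONDS_PER_TEST is an assumed value.

BOTNET_NODES = 10_000      # nodes rented for ~$2/hour, as cited above
SECONDS_PER_TEST = 5       # assume one small test purchase takes ~5 seconds
CARDS_TO_TEST = 100_000

tests_per_node = CARDS_TO_TEST / BOTNET_NODES       # 10 cards per node
total_seconds = tests_per_node * SECONDS_PER_TEST   # 50 seconds end to end

print(f"{CARDS_TO_TEST:,} cards tested in about {total_seconds:.0f} seconds")
# → 100,000 cards tested in about 50 seconds
```

Even if each test took a full minute, the whole run would finish in ten minutes: parallelism, not speed per node, is what makes bulk testing practical.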
Since the card networks and card processing firms are reluctant to work directly with criminals peddling stolen data, the fraudsters have turned to abusing merchants' websites to test their cards. This is possible because e-commerce sites are not designed to stop the botnets criminals use.
Of course, small transactions on large-volume websites aren't free: the merchant abused to test cards pays transaction fees as well as chargebacks, or must resort to more expensive payment processors who charge higher fees overall.
Automated interaction with websites used to be no more annoying than spam: everybody has, at some point while browsing the web, encountered a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart), those annoying squiggles meant to be deciphered by humans but not by robots, used to stop spammers. Today, automation enables a number of threats, among them card testing and the stolen-card economy.
Once the criminals have tested their stolen cards as good-to-go, the cards can be sold with the fraudster's seal of approval. Bad guys trade on their reputation like the rest of us, and they can't lie about which cards work.
In economic terms, a website that allows criminals to test cards is creating an externality. Its insecure posture lets criminals sell cards for more, creating greater incentives to steal and sell payment cards.
Without testing, the vendors of stolen cards can't guarantee quality. Without quality guarantees, buyers won't know what they're getting and will be less willing to participate in the markets.
Online merchants can protect themselves against this kind of enabling fraud (which hurts their own business as well as their neighbors') by implementing some degree of anti-automation on their websites. Many options exist, from CAPTCHAs, to IP reputation, to behavior analysis, to more modern approaches like real-time polymorphism. Each has its own tradeoffs, so no single solution is best.
While CAPTCHAs are a well-understood technology, so are the means to defeat them: CAPTCHA-solving services offer to break them for pennies. IP reputation and rate limiting can be evaded by using commercial services as legitimate as Amazon Web Services (AWS) or by renting a botnet. Real-time polymorphism can be difficult to implement, and as a newer technology it may harbor unknown unknowns.
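To make the rate-limiting tradeoff concrete, here is a minimal sketch of a per-IP sliding-window limiter. The threshold values and function names are hypothetical, chosen for illustration; the point is that a single hammering machine gets cut off quickly, while a botnet spreading the same requests across thousands of IPs stays under every per-IP limit:

```python
import time
from collections import defaultdict, deque
from typing import Optional

# Hypothetical thresholds for illustration only.
WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 5   # e.g. at most 5 checkout attempts per minute

_history = defaultdict(deque)  # ip -> timestamps of recent requests

def allow_request(ip: str, now: Optional[float] = None) -> bool:
    """Return True if this IP is under its rate limit, else False."""
    now = time.time() if now is None else now
    window = _history[ip]
    # Drop timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        return False           # over the limit: block or challenge
    window.append(now)
    return True

# One bot node making 50 attempts in 50 seconds from a single IP
# is cut off after 5 requests ...
single_ip = sum(allow_request("203.0.113.7", now=float(t)) for t in range(50))
# ... but the same 50 attempts spread across 50 botnet IPs all succeed.
botnet = sum(allow_request(f"10.0.0.{t}", now=float(t)) for t in range(50))
print(single_ip, botnet)  # → 5 50
```

This is exactly the evasion described above: per-IP defenses see each rented node as a polite, low-volume visitor, which is why they must be layered with other signals rather than relied on alone.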
Once EMV rolls out in the US, fraud will shift even further to online, card-not-present transactions. This will drive fraud costs higher for Internet merchants and make it even more important that websites protect themselves against bots. Besides: bots don't leave good reviews on Yelp.
Timothy Peacock is a threat researcher at Shape Security.