In the constant game of cat and mouse between security professionals and the cybercriminals attempting to breach businesses’ defences online, few would dispute that the ingenuity of attackers (aided by the convenient advantage of operating outside legal and moral boundaries) keeps them one step ahead.
But the security sphere appears to be catching up, at least around one issue: automated bots.
The use of bots to launch automated attacks is nothing new. Bots (both good and bad) have been used to perform simple, repetitive tasks for as long as we’ve been online. Automated responses have transformed how online businesses communicate with customers, particularly in the financial sector. In the identification industry, automation has the potential to bring improved efficiency and cost savings across the board, from identity proofing and compliance due diligence to automated fraud detection and more.
As with all things on the Internet, however, bots can be put to malicious use. There are examples of bots programmed to steal content, overwhelm websites, or even attempt to access user accounts without permission. One of the highest-profile examples is the use of automated bots by scalpers to purchase vast quantities of tickets for high-demand events, then re-sell them at greatly inflated prices. In January 2016, attackers targeted the IRS by combining stolen social security details with an automated bot to set up fraudulent accounts.
Automation systems such as GUI scripts not only mimic human users; they can also drive web browsers to capture and replay what appears, at face value, to be human input. Yet this is where the criminals may have slipped up. In building their strength on mimicking human users, they may have exposed a weakness when confronted with one particularly fascinating arm of the security sphere: passive biometrics.
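The record-and-replay idea behind such GUI scripting can be sketched in a few lines of plain Python. This is an illustrative toy, not any real tool’s API: recorded input events (each a hypothetical delay-plus-action pair) are fed back into a sink at their original cadence, which is exactly what makes the traffic look human at face value.

```python
import time

def replay(events, sink, speed=1.0):
    """Replay recorded (delay_seconds, action) input events into a sink,
    reproducing the original timing -- the core trick of GUI scripting.
    'speed' > 1 accelerates the replay (useful for testing)."""
    for delay, action in events:
        time.sleep(delay / speed)  # wait out the recorded gap
        sink(action)               # re-issue the captured action

# Hypothetical recording of a login interaction
recorded = [(0.0, "click:login"), (0.12, "type:a"), (0.09, "type:b")]

out = []
replay(recorded, out.append, speed=1000)  # replay near-instantly for demo
```

Because the script reproduces the recorded gaps mechanically, every replay carries the same timing signature, which is precisely the regularity that passive biometrics can latch onto.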
The rise of biometric authentication in security could invigorate the fight against automation-led cyberattacks. Companies that work in this field represent the cutting edge of what security technology can achieve by analysing the intricacies of our relationships with the technology we use so often.
While machine learning has made bots increasingly sophisticated, they remain unable to replicate the subtle, unique variables that humans exhibit in every instance of data input. Biometrics companies have developed solutions that identify users by factors that could be lifted wholesale from a sci-fi film: the angle at which a handheld device is held, the pressure applied to keys or screen, and the length of the gaps between keystrokes and swipes can all be used to separate good users from bad. These factors are virtually impossible for a non-human interface to replicate, and anomalous behaviour can be identified, even in large data sets, by comparing the patterns of known human users against unusual ones.
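To make the timing factor concrete, here is a minimal sketch of one plausible check, using only the standard library. It is not any vendor’s algorithm: it simply measures how uniform the gaps between keystrokes are, on the assumption (labelled here, with a hypothetical threshold) that human typing rhythm varies far more than a scripted replay does.

```python
import statistics

def looks_automated(key_times, min_cv=0.15):
    """Flag input whose inter-keystroke gaps are suspiciously uniform.

    key_times: timestamps in seconds of successive key presses.
    min_cv:    hypothetical threshold on the coefficient of variation
               (stdev / mean) of the gaps; humans typically vary much more.
    """
    gaps = [b - a for a, b in zip(key_times, key_times[1:])]
    if len(gaps) < 2:
        return False  # too little data to judge either way
    mean_gap = statistics.mean(gaps)
    if mean_gap <= 0:
        return True  # instantaneous or out-of-order input: not human
    cv = statistics.stdev(gaps) / mean_gap
    return cv < min_cv

# A scripted bot fires keys at a near-constant 100 ms interval;
# a human's rhythm drifts with hesitation and bursts of speed.
bot_input = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
human_input = [0.0, 0.18, 0.41, 0.52, 0.90, 1.07]
```

A production system would of course combine many such signals (device angle, pressure, swipe geometry) and learn per-user baselines rather than rely on a single fixed threshold, but the principle is the same: the statistics of the behaviour, not the content of the input, give the bot away.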
Biometrics represent a watershed moment in the fight against automated cyber-attacks. Until biometrics became a realistic alternative, multi-layer authentication was the best option available to businesses hoping to distinguish a genuine user from an automated attack.