Digital ID’s ‘trusted network’ makes liability more complex
The move to provide consumers with one-click enrollment experiences, such as single sign-on across apps through initiatives like BankID, Project Verify and the GSMA RCS Universal Profile, will ramp up in the coming year.
In parts of Europe and Canada where a handful of very large institutions control most of the banking structure, moves are underway for a coordinated, shared identity verification and management solution that will be made available to merchants and other providers.
While this type of utility is certainly attractive, it is not without tremendous risk. If and when bad actors make their way into the system with fictitious identities and fraud is propagated through the trusted network, liability issues will certainly take center stage.
Given the ongoing cadence of breaches involving consumer identity information and the history of criminals successfully bypassing fraud detection systems, it’s highly likely that this type of fraud event will make the headlines sometime soon.
Customer experience and security in the web and mobile channels have been quasi-coordinated, but managed quite differently. Even within the mobile device, web and application capabilities have required different strategies and technologies. While organizations try to make the customer experience similar across channels, the back-end effort is extraordinary and resource intensive.
Fortunately, the W3C has standardized an interoperability specification proposed by the FIDO Alliance that allows common capabilities to be shared between the web and mobile channels using the same code base. This will save developers time and money, provide new data streams for analytics and behavioral analysis, and deliver a much more consistent user experience across channels.
While artificial intelligence (i.e., neural network technology) will not become more prevalent in production environments for identity verification in the short term, the technology will become much more involved in the development of production models. We know of at least one very large financial services provider that was told by regulators to pursue AI-based solutions to help with its AML program.
In the short term, explainable linear models will be used as a benchmark for nonlinear machine learning models.
When validation data sets are run in parallel through linear and nonlinear models and the two converge on the same answer, the linear models can be used to approximate the decisions of the neural network. Fraud, risk management, AML, and CIP/KYC processes stand to benefit from this approach as compliance officers see the advantages of explainable, transparent machine learning models over their legacy, opaque and unwieldy rules-based systems.
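The surrogate-model idea described above can be sketched in a few lines. This is a minimal illustration on synthetic data: the `blackbox_score` function is a hypothetical stand-in for an opaque neural-network risk scorer, and the feature values are randomly generated, not drawn from any real fraud system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical opaque risk model (stand-in for a neural network):
# scores transactions between 0 and 1 via a nonlinear function.
def blackbox_score(X):
    return 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] + 0.8 * X[:, 1] - 0.2)))

# Synthetic validation set of transaction features.
X_val = rng.normal(size=(500, 2))
y_black = blackbox_score(X_val)

# Fit an explainable linear surrogate to the black box's outputs
# on the same validation data, via ordinary least squares.
A = np.hstack([X_val, np.ones((len(X_val), 1))])  # features + intercept
coef, *_ = np.linalg.lstsq(A, y_black, rcond=None)
y_lin = A @ coef

# Compare decisions (e.g., flag a transaction when score > 0.5).
# High agreement suggests the linear model's coefficients can serve
# as a transparent approximation of the network's decision logic.
agreement = np.mean((y_lin > 0.5) == (y_black > 0.5))
print(f"surrogate coefficients: {coef}")
print(f"decision agreement: {agreement:.1%}")
```

In practice the comparison would be run on a held-out validation set scored by the production model, and the linear coefficients reviewed by compliance staff as the explainable benchmark the text describes.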