“I trust you.”
Those are simple yet powerful words. Trust is one of the key constructs of our society. According to the Oxford Dictionary, trust is defined as “the firm belief in the reliability, truth, or ability of someone or something.”
In the old days, a physical handshake was considered one of the most important symbols of trust. You would shake hands at the conclusion of a business transaction or when you came to an agreement of sorts. You knew the person you were conducting business with, and most likely his or her family and hometown. But what about in the new digital world? How do I know you are really who you say you are? When I provide my credit card information for an online transaction, how do I gauge whether I can trust the merchant with my financial information, and whether they will send me the goods purchased as promised? Can I really trust a robo-adviser with all my assets and lifetime savings? How has technology affected the way we trust each other? Look no further than how Twitter has attempted to handle identity verification with its blue checkmark.
Maybe that is one of the reasons some incumbent banks have such a hard time adapting identity verification to the digital age. They still require you to physically visit a bank branch so that they can “verify” who you are in person before you can open an account, because that is how the system is set up and how the operational processes have always worked. Compare that experience with what you get from the newer digital banks, where you can complete the entire onboarding online in just a few minutes.
Apparently, trust is a matter of perspective.
For years, customers have placed their assets in banks, trusting that the banks will act in their best interest. Brad Leimer commented at the recent Finovate Fall conference that all banks and bank products should be fiduciaries and act in their clients’ best interest. Indeed, one would think it’s common sense that financial institutions should take care of their customers. However, a quick look at the earnings of financial institutions in recent years would make you question whether that is really the case. According to CNBC, “noninterest income” accounts for close to 40% of some large regional banks’ income. These fees cover a seemingly endless list of items: ATM withdrawal fees, deposit fees, transaction fees, insufficient funds fees, annual fees, inactivity fees, checking account fees, mortgage fees, credit card fees and more. Overdraft fees, in particular, bring big profits to banks. JPMorgan Chase reportedly made $1.9 billion from overdraft fees in 2016, Wells Fargo made $1.8 billion and Bank of America made $1.7 billion. If financial institutions were indeed acting in the best interest of their customers, they would look after their customers’ financial health first, instead of reaping profits from events that might have been avoided in the first place. It’s no wonder consumers are becoming less trusting of financial institutions that put their own profitability ahead of their customers’ well-being.
On the other hand, consider the Big Four tech firms: Google, Apple, Facebook and Amazon. These four companies know more about me than I know about myself. With all of its iOS devices and services, Apple knows almost every aspect of my life: where I shop, what I listen to, what I watch, what I do, where I go and what I search for on my phone. It knows what I want, and it can predict what I might want. In fact, Apple was ranked as one of the world’s 10 largest companies in 2017. Along with Alphabet (Google’s parent company), Microsoft, Amazon and Facebook, it belongs to an elite club of global tech companies with over $3 trillion in combined market capitalization.
We entrust these tech companies with vast amounts of information about our daily lives, with the expectation that they will safeguard it. In exchange for access to such data, we expect them to deliver a physical good or a piece of information or advice. Take Amazon: in addition to acting as an e-commerce marketplace where goods are bought and sold, it leverages information on our purchase behavior to predict what we might like. Amazon presents recommendations based on our habits and entices us to buy more, delighting us every step of the way, so much so that for many of us it is now second nature to order directly from Amazon instead of going to a physical store. We benchmark our experiences against our interactions with these big tech companies. In this day and age, banks are no longer competing only with other banks; consumers expect banking to be as seamless and intuitive as what they experience with Apple and Amazon.
But have we gotten too casual in the trust we place in the big tech firms in exchange for a more personalized experience and greater convenience? A few months ago, Google’s AI division, DeepMind, was found to have violated UK data protection law through its data-sharing partnership with a London National Health Service Trust. As TechCrunch reported, “DeepMind was handed access to the nonanonymized medical records of 1.6M patients without their knowledge or consent, and under loosely defined contract terms that failed to firmly lock down what the company might be able to do with highly sensitive medical data.” DeepMind subsequently backed away from its original intention of feeding this data set to its AI models.
So where do we draw the line between convenience and privacy? The movie “The Circle” brings home a similar point. What is the price that we are willing to pay in exchange for convenience? How do we trust that the information we provide will not be exploited for commercial gains or some other agenda? And with whom do we place our trust?
Consider the lawsuit between LinkedIn and the startup hiQ Labs, in which the latter was accused of scraping users’ publicly available data. How much control should companies such as Microsoft and LinkedIn have over our data that is hosted on their services? Who really owns our public data, and who “should” be its guardian?
The Brookings Institution recently published an article by Tom Wheeler, former chairman of the FCC, on the issue of news and social media. He posed a simple question: “Did technology kill the truth?”
Wheeler wrote: “To maximize reach, traditional outlets curated information for veracity and balance. In stark contrast, the curation of social media platforms is not for veracity, but for advertising velocity.” The software algorithms, he argued, are programmed to prioritize user attention over truth in order to optimize for engagement (and hence maximize advertising dollars). News feeds are curated based on user preferences, and the information we see is accumulated and filtered by the platforms. Consider the echo chamber effect that was on full display during the recent U.S. presidential election. How exactly can we determine whether we can trust what we see on social media? Even within our own social circles, where we are all trying to put on our best facade, how much of what we read and experience is real?
So, can tech create trust?
Trust is an essential part of the social fabric. With advanced technology and machines becoming an integral part of our daily interactions, it is easy to lose sight of what makes us human in the first place. Are we more trusting of technology, or have we simply become more complacent? While robots can perform many of the tasks done by humans today, what role do they play in the erosion, or evolution, of trust in our society? When we entrust our assets to algorithms and robo-advisers, are we placing our trust in the human adviser behind the scenes, or in the machine itself?
We used to say that trust is earned, not freely given. Will we evolve to a world where trust becomes a commodity that can be bought? What does it mean when we tell a machine that “I trust you”?
Suddenly, those three words don’t sound as simple anymore.