In today’s competitive digital marketplace, consumers are more empowered than ever. They are free to choose which companies they do business with and have plenty of options to change their minds at any moment. A misstep that degrades the customer experience during sign-up or onboarding can lead them to swap one brand for another with the click of a button.
Consumers are also increasingly concerned about how companies protect their data, adding another layer of complexity for businesses as they aim to build trust in the digital world. In a KPMG study, 86% of respondents expressed growing concerns about data privacy, while 78% expressed concern about the amount of data being collected.
At the same time, the surge in consumer adoption of digital technology has led to an alarming increase in fraud. Businesses must build trust and help consumers feel their data is protected, but they must also provide a fast, seamless onboarding experience while still preventing fraud on the back end.
Against this backdrop, artificial intelligence (AI) has been touted in recent years as a panacea for fraud prevention, promising to automate the identity verification process. However, despite the many discussions surrounding its use in digital identity verification, significant misconceptions about AI remain.
Today, no true AI system exists that can successfully verify identities without human interaction. When companies talk about leveraging AI for identity verification, they are really talking about machine learning (ML), an application of AI. With machine learning, the system is trained by feeding it large amounts of data and allowing it to adjust and improve, or “learn,” over time.
When applied to the identity verification process, machine learning can play a game-changing role in building trust, removing friction, and fighting fraud. With it, businesses can analyze vast amounts of digital transaction data, increase efficiency, and identify patterns that improve decision-making. However, getting caught up in the hype without truly understanding machine learning and how to use it properly can diminish its value and, in many cases, cause serious problems. Enterprises should consider the following points when using ML for identity verification.
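As an illustrative sketch of how such a system “learns,” the example below trains a tiny pure-Python logistic-regression fraud scorer on labeled sign-up data. The feature names and data are hypothetical, and a production system would use a mature ML library rather than hand-rolled gradient descent:

```python
import math

def train_logistic(rows, labels, lr=0.1, epochs=500):
    """Fit a tiny logistic-regression fraud scorer by gradient descent."""
    n = len(rows[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = p - y  # gradient of the log-loss for this sample
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def fraud_score(w, b, x):
    """Probability-like score in [0, 1]; higher means more fraud-like."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Hypothetical features: [document mismatch count, attempt velocity]
history = [[0, 1], [0, 2], [3, 9], [2, 8]]  # past sign-ups
labels  = [0, 0, 1, 1]                      # 1 = confirmed fraud
w, b = train_logistic(history, labels)

# A new sign-up resembling past fraud receives a high score.
print(round(fraud_score(w, b, [3, 8]), 2))
```

The point of the sketch is the shape of the loop: the model has no rules written by hand; it only adjusts weights to fit the patterns present in historical data, which is both its strength and, as the sections below discuss, the source of its limitations.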
Bias in Machine Learning
Bias in machine learning models can lead to exclusion, discrimination, and, ultimately, a negative customer experience. Training an ML system on historical data carries the biases in that data into the model, which can be a serious risk. If the people building the ML system hold biases, or the training data itself is skewed, decisions can end up based on flawed assumptions.
When a machine learning algorithm makes wrong assumptions, it creates a domino effect in which the system keeps learning the wrong things. Without human expertise from data and fraud scientists, and without oversight to identify and correct biases, those errors will replicate and compound.
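One common, simplified audit for this kind of bias is to compare a model’s approval rates across groups before deployment. The group labels and decisions below are hypothetical, and real fairness audits use richer metrics, but the core check can be sketched in a few lines:

```python
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs from a model audit."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates, group_a, group_b):
    """Ratio of approval rates; values well below 1.0 are a red flag."""
    return rates[group_a] / rates[group_b]

# Hypothetical audit sample: group A approved 9/10, group B only 6/10.
audit = ([("A", True)] * 9 + [("A", False)] +
         [("B", True)] * 6 + [("B", False)] * 4)
rates = approval_rates(audit)
print(rates, round(disparate_impact(rates, "B", "A"), 2))
```

A ratio this far below 1.0 would prompt the human review the article calls for: inspecting which training examples or features drive the gap before the model reaches customers.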
New forms of fraud
Machines are very good at detecting trends that have already been identified as suspicious, but their key blind spot is novelty. ML models learn from patterns in historical data and assume that future activity will follow those same patterns, or at least change at a consistent pace. This leaves open the possibility that attacks succeed simply because the system has never seen them during training.
Layering fraud review teams onto machine learning ensures that new types of fraud are identified and flagged, and that updated data is fed back into the system. Human fraud experts can flag transactions that may have initially cleared identity verification but are suspected of being fraudulent, and supply that data for closer scrutiny. The machine learning system then encodes this knowledge and adjusts its algorithms accordingly.
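The human-in-the-loop cycle just described can be sketched as follows. The transaction structure and pattern names are hypothetical, and a real system would retrain a statistical model rather than update a rule set, but the feedback step is the same in spirit:

```python
def review_and_update(transactions, known_patterns, analyst_flags):
    """Machine pass flags known patterns; analyst findings on novel fraud
    are fed back so future passes catch the new pattern automatically."""
    machine_flagged = [t for t in transactions
                       if t["pattern"] in known_patterns]
    # Analysts catch transactions that cleared automated verification.
    novel = [t for t in analyst_flags
             if t["pattern"] not in known_patterns]
    updated = known_patterns | {t["pattern"] for t in novel}  # feedback step
    return machine_flagged, updated

txns = [{"id": 1, "pattern": "stolen_id"},
        {"id": 2, "pattern": "synthetic_id"}]
known = {"stolen_id"}

# First pass misses the novel "synthetic_id" scheme; an analyst flags it.
caught, known = review_and_update(txns, known, analyst_flags=[txns[1]])
print([t["id"] for t in caught], "synthetic_id" in known)
```

After the feedback step, the once-novel pattern is part of what the automated pass checks, which is exactly the role the review team plays for an ML model.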
Lack of transparency

One of the biggest knocks against machine learning is its lack of transparency, even though transparency is a fundamental principle of ML-based identity verification. Businesses need to be able to explain how and why certain decisions were made, and to share information with regulators about each stage of the process and the customer journey. A lack of transparency also fuels mistrust among users.
Most machine learning systems provide a simple pass or fail score. If the process behind a decision is not transparent, it is difficult to justify that decision when regulators come calling. Continuous data feedback from machine learning systems can help businesses understand and explain why decisions were made, and make informed adjustments to the identity verification process.
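As an illustration of the difference (with hypothetical weights and feature names), a linear scoring model can report per-feature contributions alongside its pass/fail result, giving reviewers and regulators something concrete to inspect instead of a bare score:

```python
def explain_decision(weights, features, threshold=1.0):
    """Return pass/fail plus each feature's contribution to the risk score."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return {"pass": score < threshold,
            "score": score,
            "contributions": contributions}

# Hypothetical risk weights learned from past data
weights = {"document_mismatch": 0.8, "ip_risk": 0.5, "velocity": 0.3}
result = explain_decision(weights, {"document_mismatch": 1,
                                    "ip_risk": 1,
                                    "velocity": 2})
top_factor = max(result["contributions"], key=result["contributions"].get)
print(result["pass"], top_factor)
```

Here a failed verification comes with the factor that drove it, which is the kind of stage-by-stage record the article argues businesses need to share with regulators.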
There is no doubt that machine learning plays an important role in identity verification and will continue to do so. However, it is equally clear that machines alone cannot verify identities at scale without increasing risk. The power of machine learning is best realized when it is combined with human expertise and data transparency, enabling decisions that help businesses build customer loyalty and drive growth.
Christina Luttrell is the CEO of GBG Americas, comprised of Acuant and IDology.
Welcome to the VentureBeat community!
DataDecisionMakers is a place for experts, including technologists who work with data, to share data-related insights and innovations.
If you want to read about cutting-edge ideas, up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.
You may even consider publishing your own article!
Read more from DataDecisionMakers