The Human Touch in Identity Verification

Businesses and consumers alike are growing more dependent on AI every day. The technology serves as the foundation of almost every digital assistant, search engine, online shopping recommendation or automated tool we currently use. Its reach only continues to expand as organizations rush to digitally transform their businesses in the wake of the pandemic. In fact, early data shows that 43% of companies accelerated their rollout of AI-powered tools due to the pandemic, and those numbers will only continue to grow. However, this rapid adoption and increased dependence on AI also comes with challenges.

Why Identity Verification Isn’t Ready for Autonomous AI

New research shows that with this immense growth in AI adoption comes executive concern that the technology is moving too fast and becoming the “wild west” with few rules or regulations. Challenges around inequity and bias are becoming increasingly pertinent as a result, especially when it comes to identity verification (IDV)—a technology that has also become critical with the recent boom in digital solutions and online activity coming out of global lockdowns. With more people relying on digital identity verification for approval of financial transactions, loans, transportation and more, biased outcomes can have a significant impact. Let’s take a look at some AI flaws that are negatively impacting certain demographics and populations:

Biased AI algorithms: Biased AI algorithms remain a significant problem when it comes to IDV. AI algorithms continuously learn to make decisions based on data that can include and reflect biased information such as gender, race or sexual orientation. Even after these variables are removed, data that includes biased historical decisions or social inequities can remain. Organizations rely on these algorithms to confirm if a person is who they claim to be, but biased algorithms can result in certain customers being unfairly rejected or blocked, denying them access to technologies and resources.

Access to modern technology: Certain areas of the world are disproportionately impacted by their lack of access to modern cameras and devices. Modern facial recognition and authentication software often leverages biometrics to trace facial features from a video or photograph, but blurry and low-resolution images can easily impact the software’s ability to identify a face, continuing to fuel bias in the AI models.

Document inequality: Not all documents are created equal. Certain countries have invested more in the security of their documents, issuing government IDs with machine-readable fields (NFC, MRZ) that are far easier to verify online than older paper documents. This puts people in less technologically advanced countries at a further disadvantage.

With these challenges remaining, it is clear that there are certain decisions that AI technology simply isn’t ready to make, especially if doing so could have negative ramifications on certain populations. So, where can humans fill in these shortcomings?

Integrating Human Decision-Making Into the IDV Process

Organizations should begin by implementing flexible decision-making thresholds in the IDV process for cases where a verification is uncertain or inconclusive. At that point in the process, a diverse team of specialists should step in to make the verification decision the automated system could not. This allows for more accurate, unbiased decision-making and authorization.

With autonomous driving, for example, there needs to be a human behind the wheel to make decisions if the conditions are suboptimal for AI to decide. The same must happen in the field of identity verification. Humans are more than capable of handling identity verification tasks—the downside is that it’s costly and time-consuming to do so. Consumers on the other side of the screen waiting for confirmation of their loan application, wire transfer or car registration aren’t willing to wait. The key is to integrate human intuition when it is absolutely necessary and only in the scope of where it is needed—whether that’s low confidence on facial biometrics due to training data bias in a specific region, a low-quality picture of the ID or wherever there is other risk of bias. This helps to keep a consistent level of quality, speed and equity—while training the AI models to further improve with every verification.
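In practice, this kind of escalation can be expressed as a simple routing rule: automate the confident cases and reserve human review for the inconclusive band and for known bias risks such as low-quality captures. The sketch below illustrates the idea; the function names, scores and threshold values are hypothetical, not taken from any particular IDV product.

```python
# Illustrative sketch of threshold-based routing in an IDV pipeline.
# All names and threshold values are hypothetical.
from dataclasses import dataclass

AUTO_APPROVE = 0.95   # above this confidence, the automated decision stands
AUTO_REJECT = 0.20    # below this confidence, reject automatically

@dataclass
class VerificationResult:
    face_match_confidence: float  # 0.0-1.0 score from the biometric model
    image_quality: float          # 0.0-1.0 capture-quality score

def route_decision(result: VerificationResult) -> str:
    """Return 'approve', 'reject' or 'human_review'."""
    # Low-quality captures (older cameras, poor lighting) are a known
    # bias risk, so never auto-decide on them.
    if result.image_quality < 0.5:
        return "human_review"
    if result.face_match_confidence >= AUTO_APPROVE:
        return "approve"
    if result.face_match_confidence <= AUTO_REJECT:
        return "reject"
    # Inconclusive middle band: escalate to a team of specialists.
    return "human_review"
```

The point of the design is that human time is spent only on the narrow band where the model is unsure, while the specialists' decisions can be fed back as labeled examples to improve the model over time.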

The Ongoing AI Equity Race

The race to equity in AI will be a long one, especially as AI adoption continues to rapidly increase. One day we will likely reach autonomous AI, but not until these biases are eliminated and the playing field is leveled. Looking ahead, it will be important for organizations to remember that AI bias is invariably a result of biases in the real world, but the process to change this within an organization starts with its people.
