ServiceNow: Can AI make banking fair?

Financial institutions are rapidly adopting AI to expand access to less affluent, non-traditional customers. But the technology raises questions about whether AI inadvertently introduces bias into financial decision-making.

In response, AI researchers have designed guidelines, recommendations, checklists, and other frameworks to ensure that AI is used fairly in financial services. That effort has revealed how difficult fairness is to define. As a result, organizations are focusing more on reducing the potential harms caused by AI and less on eliminating bias entirely.

The business potential is vast. In the United States, about 1 in 4 Americans lack a bank account and cannot apply for traditional loans. In Mexico, two-thirds of adults do not have a bank account. Across the African continent, most people have neither a bank account nor a credit score.

1 in 4: Americans who lack a bank account

These communities are often referred to as “underbanked” because they cannot access services such as mortgages and credit cards that wealthier consumers take for granted. Such people “have no traditional identity, collateral, or credit history — or all three — needed to access financial services,” said Margarete Biallas of the International Finance Corporation (IFC), a member organization of the World Bank.

Expanding financial access using AI

In the early 2000s, Biallas, an economist and leader in digital finance training, helped clothing manufacturers in Cambodia set up mobile payment options. Before IFC’s involvement, garment workers, who were mostly women, received their wages as cash in factory envelopes, creating the potential for theft and violence against them, Biallas said. To resolve this issue, IFC partnered with the manufacturers and Melbourne-based ANZ Bank to set up digital payment options that pay workers on their mobile devices.


The organization then launched an AI-enabled credit-scoring system that lets workers apply for loans despite their lack of traditional credit scores. The AI system analyzes data not typically included in a credit score, such as total income, employment history, and how often a borrower spends money on non-essential items such as jewelry or electronics. By collecting data from mobile phones and satellites, banks can verify the identity and creditworthiness of individuals and businesses. AI can use satellite data to establish an employment history by confirming that a person works at a particular farm or factory, cross-checked against location data from the cell phone they were carrying at the time, Biallas explained in a 2020 IFC report.
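
As a rough illustration of how such non-traditional signals could feed a scoring model (a minimal sketch, not IFC’s actual system; the features, data, and approval threshold are all hypothetical):

```python
# Toy sketch only: a simple credit-scoring model built on hypothetical
# non-traditional features. Not IFC's system; data and weights are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is an applicant: [monthly_income_usd, months_employed,
#   share_of_spend_on_nonessentials, mobile_topups_per_month]
X = np.array([
    [180, 24, 0.10, 8],
    [90,   3, 0.45, 2],
    [250, 36, 0.05, 12],
    [120,  6, 0.30, 4],
])
# 1 = repaid a prior small loan, 0 = defaulted (toy labels)
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Score a new applicant and convert the probability into a lending decision.
applicant = np.array([[150, 18, 0.20, 6]])
prob_repay = model.predict_proba(applicant)[0, 1]
print(f"Estimated repayment probability: {prob_repay:.2f}")
print("Decision:", "approve" if prob_repay >= 0.5 else "refer for review")
```
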

Experts are concerned about potential biases and fairness issues that arise when AI-driven technology makes financial decisions.

IFC is not alone in these efforts. In Egypt, where two-thirds of adults do not have a bank account, Cairo-based Commercial International Bank has developed predictive analytics software that uses non-traditional data — home address, employment status, and run-ins with the law — to gauge a borrower’s ability to repay loans. In 2017, the State Bank of India developed its own AI-powered platform that allows underbanked people to be approved for a loan almost immediately.

A question of AI fairness

For all the benefits of AI, experts are concerned about the potential biases and fairness issues that arise when AI-driven technology makes financial decisions. During the design and training of AI and ML models, people choose which data to train them on, so bias can enter the algorithms through conscious or unconscious bias in the training data itself. For example, past hiring for IT jobs has skewed toward male applicants, and an AI trained on that data may disproportionately select men over women for future openings.
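
A minimal sketch of how such skew can be surfaced before a model is ever trained, assuming a hypothetical hiring dataset with a recorded gender column and past hiring decisions:

```python
# Minimal sketch: check whether historical decisions in a training set
# skew toward one group. Columns and data are hypothetical.
import pandas as pd

history = pd.DataFrame({
    "gender": ["M", "M", "M", "F", "F", "M", "F", "M"],
    "hired":  [1,    1,   0,   0,   0,   1,   1,   1],
})

# Selection rate per group, and the ratio of the lowest to the highest rate.
rates = history.groupby("gender")["hired"].mean()
disparate_impact = rates.min() / rates.max()
print(rates)
print(f"Disparate impact ratio: {disparate_impact:.2f}")  # below ~0.8 is a common red flag
```
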

Problems can arise even if the algorithms are deliberately made blind to race or gender. In 2019, Apple partnered with Goldman Sachs to launch the Apple Card. But Apple was forced to investigate its algorithms after it was revealed that Goldman Sachs’ credit-approval system discriminated against female applicants, even though applicants were not identified by gender. Similarly, Amazon made headlines in 2016 when outside researchers showed that its algorithms systematically excluded Black neighborhoods from same-day delivery service — even though the system was deliberately designed not to take race into account.

Researchers have also shown that ML models meant to help previously disadvantaged applicants by broadening credit-scoring standards often end up perpetuating discrimination. For example, AI-driven credit scoring contributes to a $17 billion credit gap between men and women, according to a study from Women’s World Banking.

Addressing concerns

One of the earliest attempts to address the issue of fairness in financial AI came in 2018 from Singapore, where the Monetary Authority of Singapore convened international industry partners to address these concerns. In collaboration with a range of global analysts and banks, they developed the FEAT Fairness Assessment Methodology to help financial services providers create fairer, more ethical AI use cases, according to Grace Abuhamad, head of research for ServiceNow’s AI Trust and Governance Lab. “Singapore helped start a global conversation about trustworthy AI in finance,” she said.

FEAT treats fairness as a contested concept for which no generally accepted definition exists. According to the FEAT framework, financial institutions cannot reduce bias simply by making their models blind to race and gender. Instead, they should articulate their own definition of fairness, identify which groups could be affected by a particular financial decision, and state how those groups could be harmed. Institutions should also use independent auditors to periodically assess the fairness of AI-powered business models.
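
A hedged sketch of what one slice of such an assessment might look like in practice, assuming the institution has chosen equal opportunity (qualified applicants approved at similar rates across groups) as its stated fairness definition and has group labels and repayment outcomes available for auditing; the names, data, and tolerance below are hypothetical:

```python
# Hypothetical sketch of a FEAT-style periodic fairness check: the institution
# states its chosen definition (equal opportunity) and measures the model against it.
import pandas as pd

audit = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   0,   1,   1,   0,   0,   1],
    "repaid":   [1,    1,   1,   0,   1,   1,   1,   0],  # observed outcome
})

# True-positive rate per group: of applicants who would have repaid,
# what fraction did the model actually approve?
qualified = audit[audit["repaid"] == 1]
tpr = qualified.groupby("group")["approved"].mean()
gap = tpr.max() - tpr.min()

TOLERANCE = 0.10  # a policy choice the institution must state and justify
print(tpr)
print("Within tolerance" if gap <= TOLERANCE else "Flag for independent review")
```
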

Singapore helped start a global conversation about trustworthy AI in finance

Other organizations have created governance frameworks for AI. In 2019, the European Union published its Ethics Guidelines for Trustworthy AI. The guidelines recognize that AI can have a negative impact on children, people with disabilities, and other groups that have historically been harmed, and they emphasize ongoing auditing and oversight.

In 2018, Microsoft launched a research and advocacy program focused on AI fairness. Like Singapore’s FEAT, Microsoft’s “AI Fairness Checklist” insists that no single definition of fairness exists and that the goal of any AI-powered system should be to minimize harm. Last year, the U.S. Federal Trade Commission released its own statement calling for “truth, fairness, and equity” in the use of AI in financial decision-making. The statement urged financial institutions to tell the truth about their data, aim for transparency, and “do more good than harm,” or risk having the FTC challenge their models as unfair.

Concerns over the bias and fairness of AI systems must be addressed, says Abuhamad of ServiceNow, but not by focusing on general ideas of fairness. “Instead,” she says, “be completely clear about what your perspective on fairness is and how you’ve evaluated your algorithms based on that perspective.”
