Can we trust AI to grant loans?

Experts say goal is to build trust and demonstrate that upsides of using AI far outweigh any perceived risks

The potential use cases for GenAI in the fintech space are a growing area of opportunity, according to Chris Jessup, partner in financial regulation advisory

Generative artificial intelligence (GenAI) is finding its way into just about every business process, and it has begun to fundamentally transform the fintech world. But how much autonomy can it be granted given the inherent lack of transparency when it comes to how it makes its decisions, such as granting loans or assessing insurance risk?

Experts say the goal is now to build consumer trust and demonstrate that the upsides of using this technology far outweigh any perceived risks, while ensuring end-to-end safety in the use of GenAI.

According to Chris Bollard, partner in commercial and technology with A&L Goodbody, firms are “enthusiastically” embracing AI technology. “Even the more traditional financial institutions which also operate in the fintech space are seeing AI as an area of opportunity; rolling out AI solutions to assist their corporate customers, investing in AI solutions or using these internally to assist in documenting their own regulatory compliance.” He adds that the European Central Bank has even developed a portfolio of AI solutions itself to assist in the supervision of banks and manage the vast amount of supervisory information received by EU regulators.

David Lee, chief technology officer with PwC Ireland, says the use of GenAI in fintech involves tools designed to provide a more complete customer experience, “going well beyond the traditional chatbot”. For example, these tools can help service agents identify gaps in the information they are providing to customers and become better at their jobs through feedback on their performance in customer interactions, and the technology is also being deployed across the traditional back-office functions of HR, finance and compliance. “The next generation of GenAI tools will be virtual assistants who will perform the mundane tasks for each of these back-office functions and we expect these to become generally available in the next 12-18 months,” notes Lee.

The potential use-cases for GenAI in the fintech space are growing all the time, adds Bollard’s colleague, Chris Jessup, a partner in financial regulation advisory. “AI systems can potentially be used to evaluate credit scores or consumer creditworthiness and ultimately determine access to financial products, including credit and insurance,” Jessup explains. But with these advances comes a need for the appropriate safeguards. “However, such uses require careful consideration of regulatory obligations and the management of risk.”

KPMG consulting partner Jean Rea sees particular relevance for insurtech, which she describes as “the use of technology to innovate in the insurance sector”. Technologies such as data analytics, AI and the internet of things can improve customer experience, increase efficiency and cut costs, she explains.

“Generative AI can significantly transform business by automating and performing certain tasks with unmatched speed and efficiency,” she adds.

There are risks of course. “Pricing and underwriting are at the heart of insurance business,” says Rea. “The Central Bank’s report on Data Ethics within Insurance, published last August, highlighted that Irish insurance firms identified personalised pricing and enhanced risk assessments as potential benefits in using big data and related technology in pricing and underwriting. The report also highlighted that pricing was one area which could result in heightened consumer risks, in particular as firms are expected to increase their use of big data and related technology. In the report firms were reminded to ensure that they implement practices that are consumer focused and consistently result in fair outcomes for consumers.”

Regulation typically lags behind technological advances, but the EU Artificial Intelligence Act is due to come into force in the coming weeks. The regulation recognises that certain forms of AI should attract additional compliance obligations and, unsurprisingly, AI systems relating to the assessment of creditworthiness and health and life insurance risk have been designated as “high risk” under the Act. This means such AI systems must be accompanied by a wide range of mandatory obligations to mitigate the risks involved, Jessup explains. “Essentially, AI systems must be designed in such a way as to enable deployers [for example, banks or insurers] to implement human oversight. This will mean inbuilt operational constraints that cannot be overridden by the system itself. The advantage of this added regulatory constraint will hopefully be the building of trust in consumers in engaging with these technological innovations.”

The Central Bank of Ireland (CBI) is likely to have its own expectations around safeguards where AI systems are being used, he adds.

Lee points out that more traditional AI methods, such as statistical analysis and machine learning, have long been used in retail credit scoring and insurance underwriting. “They have been an essential part of the ability to assess large volumes of applications and existing customers in an automated manner, but while these techniques are very highly regulated and well understood, they are still regarded by the EU AI Act as being a form of high-risk AI.”

Aside from the regulatory issues, building customer trust is one of the key challenges that providers and deployers of AI systems face, adds Bollard. “Deployed correctly, AI solutions have the potential to enhance trust through the provision of better financial services; however, this message will need to be accompanied by clear evidence of the overall safety and trustworthiness of these tools.”

Rea advises organisations beginning to use generative AI to pay attention to important issues such as ethical and responsible use, keeping abreast of and adapting quickly to the changing regulatory environment, and keeping the trust of key stakeholders. “It is clear that firms are and will continue to innovate,” she adds. “When they are doing so it is important to ensure they adopt a consumer-focused approach, which includes careful consideration of the ethical questions and broader implications of their use of AI that is in line with existing consumer requirements and expectations.”
