AI Ethics: Do Challenger Banks Need a Moral Code?

Afzal Ibrahim
19 min read · Mar 23, 2021
AI Ethics — licensed image, edited by author

In recent years, financial inclusion has been the go-to term in every discussion about inclusive and ethical economic development. New forms of financial institutions are sprouting and quickly gaining the public’s trust by adopting the latest technology for their banking services, such as Artificial Intelligence (AI), decentralized technology, quantum computing, etc.

The question that remains is whether these banks need to establish a moral code for their use of technology, especially in the practice of AI. Before attempting to answer the core question, let’s take a quick look at a few underlying narratives.

Financial Inclusion: Definition and Goals

As the name itself suggests, financial inclusion, or inclusive finance, refers to the effort to make financial services widely accessible and affordable to every individual and business. Simply put, regardless of the company’s size or the individual’s net income, anyone should be entitled to enjoy banking products. The policies are established to remove any barrier that may prevent one from joining the banked community.

“Poverty denies people any semblance of control over their destiny, it is the ultimate denial of human rights. The accepted human rights are food, shelter, health and education, and the basic responsibility of a society is to make sure that an environment exists so that people can have these things. The big financial institutions currently ignore almost two-thirds of the world’s population. So I say the right to credit should have the topmost priority on the list of human rights.” — Nobel Peace Prize Laureate Muhammad Yunus

On a micro-level, financial inclusion attempts to improve the individual’s life by making financial processes easier and more effective. According to the World Bank, financial inclusion “facilitates day-to-day living, and helps families and businesses plan for everything from long-term goals to unexpected emergencies.”

By so doing, inclusive finance aims to fulfill the overall mission of reducing poverty and developing the economy through creating a smooth and strong financial flow within a country. This means that financial inclusion is closely tied to financial development, which is generally measured by the number of people owning and using financial products.

Financial Inclusion: Current Global State

There has been an upward trend in financial development in the past decade. The 2017 Global Findex database points out that the percentage of adults owning an account with either a financial institution or a mobile money provider rose steadily from 51% in 2011 to 62% in 2014 and 69% in 2017.

In countries like China, Kenya, India, and Thailand, about 80% of the population already own accounts. The strategy behind this impressive rate involves introducing reforms, leveraging private companies’ innovations, and encouraging individuals to open low-cost accounts for mobile and digital transactions. The next step for these countries is to raise the ratio of actual financial service users among account owners.

Image courtesy of @MorningBrew on Unsplash

The number of people without a bank account has gone down from 2 billion in 2014 to 1.7 billion in 2017. However, these unbanked individuals still account for nearly one-third of adults worldwide; most of them are unemployed women or women living in rural areas. The gender gap is 9 percentage points in developing countries, meaning that many women remain unable to control their financial livelihoods. Statistics also show that a higher degree of gender equality is seen in countries with wider access to electronic money via mobile phones.

Financial Inclusion: Promoting Strategies

Several factors affect both a country’s level of financial inclusion and development, including per capita income, quality of governance, quality of institutions, availability of information, and the regulatory environment. Unsurprisingly, there is a remarkable difference between developed and developing countries, and even among different populations in the same region, when it comes to managing these factors.

The World Bank has been actively promoting financial inclusion by setting up a policy group to facilitate dialogues and coordination among stakeholders across the public and private sectors. The group stresses strong political commitment as the key player in establishing a user-friendly environment for responsible financial access and innovative tech-product usage. This cannot be achieved without:

1. Long-term strategies that can be broken down into detailed action plans for the government to achieve financial inclusion objectives.

2. Payment systems that enable a smooth move from cash and paper-based instruments to electronic money.

3. Diverse financial service options for individuals including savings, credit, and insurance.

4. Regulatory adjustment and supervision to guarantee a level playing field for banks and non-banks such as telecom companies, post offices, co-operatives, agent networks, and “fintech” firms.

The financial industry has also been doing its part. On the one hand, institutions are constantly devising new ways to deliver products and services to a larger market and increase their profits in the process. On the other hand, the advent of financial technology, also known as fintech, has provided cost-effective solutions to the lack of accessibility to financial services.

The Rise of Digital-First Challenger Banks

Challenger banks are usually established, mid-size, or specialist firms whose direct competitors are the large incumbent banks, while neobanks tend to be newer firms that focus wholly on digital and mobile banking.

A report by KPMG in 2016 shows that challenger banks have a significant cost advantage compared to traditional ones, specifically the Big Five (HSBC, Barclays, Lloyds Bank, Royal Bank of Scotland, and Santander). Two types of challenger banks are discussed in the report: smaller challengers whose duration of operation ranges from five to ten years, and larger challengers that have been established for a longer period of time.

In 2014 and 2015, the cost-to-income ratios of both smaller and larger challenger banks remained around 50–60%, while those of the Big Five stayed high in the 70–80% range. Specifically, smaller challengers’ ratio went down from 52.1% to 48.5%, whereas larger challengers experienced a slight increase from 58.0% to 59.2% over the two years. The Big Five’s cost-to-income ratio, far from decreasing, shot up from 73.0% in 2014 to 80.6% in 2015.
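For readers unfamiliar with the metric, the cost-to-income ratio is simply operating costs divided by operating income, expressed as a percentage. A minimal sketch, with purely hypothetical figures:

```python
# Cost-to-income ratio: operating costs as a share of operating income.
# The figures below are hypothetical, for illustration only.
def cost_to_income_ratio(operating_costs: float, operating_income: float) -> float:
    """Return the cost-to-income ratio as a percentage."""
    return operating_costs / operating_income * 100

print(f"{cost_to_income_ratio(485, 1000):.1f}%")  # 48.5% - a lean challenger
print(f"{cost_to_income_ratio(806, 1000):.1f}%")  # 80.6% - an incumbent
```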

A leader in the field, One Savings Bank (OSB), scored an impressive 26% cost-to-income ratio in 2015. The reason for this outperformance is challenger banks’ simple business models and reduced product sets, especially among those focused on niche areas. Challenger banks and neobanks (as mentioned above) share the competitive advantage of flexibility; that is, they can easily leapfrog traditional, cumbersome infrastructure and adopt new technology. This lifts the burden of processing transactions off banks and enables them to focus on giving customers advice and consultation.

Each bank has a distinct approach with its own products and services, but for the most part, they share these four common themes.

  1. Personalization
  2. Open Ecosystems
  3. Information Transparency and Customer Privacy
  4. Predictive Intelligence

With newly introduced and constantly updated technology, the banking industry is transforming at an unprecedented rate. In each sub-field, specific technologies are applied to advance user experience, bank security, or even financial inclusion. Blockchain technology is one example: in essence, a blockchain is a database made up of “blocks” of cryptographically secured data that are “chained” together. This is the technology behind cryptocurrencies like Bitcoin.
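To make the “chain” concrete, here is a minimal sketch in Python (illustrative only, not a production blockchain): each block’s hash covers the previous block’s hash, so tampering with any block breaks every link after it.

```python
import hashlib
import json

def make_block(data: dict, prev_hash: str) -> dict:
    """Create a block whose hash covers its data and the previous block's hash."""
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return {"data": data, "prev_hash": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

# Chain two blocks; altering the genesis block would invalidate block1's link.
genesis = make_block({"note": "genesis"}, prev_hash="0" * 64)
block1 = make_block({"from": "alice", "to": "bob", "amount": 10}, genesis["hash"])
print(block1["prev_hash"] == genesis["hash"])  # True: the blocks are "chained"
```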

As soon as the concept was introduced, banking and financial institutions jumped to explore its potential. A recent paper published by the Georgetown University McDonough School of Business puts forward that through “lower costs, reduce(d) risk, and enhance(d) financial innovation,” blockchain technology can effectively advance financial inclusion.

Another one is the Internet of Things (IoT). MIT researchers estimate that 20.6 billion devices are currently connected to the Internet, and predict that cars, smart homes, wearable devices, and smart cities are the next step of connectivity after laptops and smartphones.

Everything that synergistically exchanges data across systems falls under the “Internet of Things” umbrella. IoT helps financial technologists gain granular data on consumer activities, process payment, and prevent fraud.
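As a hedged illustration of how such granular data might feed fraud prevention, here is a small anomaly-detection sketch using scikit-learn’s IsolationForest on synthetic transactions (the features and values are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic transaction features: [amount, hour of day]; values are illustrative.
normal = np.column_stack([rng.normal(40, 10, 500), rng.normal(14, 3, 500)])
model = IsolationForest(random_state=0).fit(normal)

# A 3 a.m. transaction far above the customer's usual spend is flagged (-1),
# while a typical afternoon purchase passes (1).
print(model.predict([[900, 3], [35, 15]]))
```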

Yet the technology with the most widespread use is Artificial Intelligence (AI). AI refers to a machine’s demonstration of human-like intelligence. “Machine learning” and “deep learning” are commonly associated with AI, but the terms are distinct. AI is the most general concept, denoting a computer’s capability to learn and process data during operation, constantly refining its process and learning faster over time. Machine learning (ML) refers to the algorithms used by AI applications. An algorithm consists of a list of rules; for example, Google’s algorithm uses its own set of rules, including keywords and backlinks, to rank search results. Finally, deep learning refers to the process in which a machine learns new features to execute tasks through artificial neural networks.
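To make the distinction concrete, here is a tiny machine-learning sketch: the decision “rules” (the model’s weights) are learned from example data rather than written by hand. The data is synthetic and purely illustrative:

```python
from sklearn.linear_model import LogisticRegression

# Toy data: [monthly income (k$), existing debts]; labels: 1 = repaid, 0 = defaulted.
X = [[5, 0], [6, 1], [1, 3], [2, 4], [7, 0], [1, 5]]
y = [1, 1, 0, 0, 1, 0]

# The "rules" (model weights) are learned from examples, not hand-coded.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[4, 1]]))  # e.g. [1]: predicted likely to repay
```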

While banks are actively using many algorithms in their renewed digital journey, they underestimate the importance of systematically analyzing ethical and moral principles such as transparency and fairness, and of ensuring that the algorithms produce the right results.

AI Ethics: Key Questions

As much as technological innovations provide cost-effective solutions to financial service and product access, they also pose numerous challenges to the financial market. With huge consumer data in hand, multinational corporations are rising to unprecedented and unrivaled powerful positions and thus exerting their dominance over the whole market. This has sparked debates on a global scale about the manipulative rendering and negative impact of technology, especially with AI applications.

The top concern about AI is how far it is acceptable for machines to go in making decisions. Four key questions summarize the ethical issues of AI:

1. Can AI be applied and utilized without causing harm?

2. Should we be held fully accountable for ensuring that the apps we build generate unbiased results?

3. How can we build AI-powered apps that are non-discriminative and that promote human rights?

4. What can we do to better understand how AI works and how to control AI?

AI Ethics: Challenges Facing Challenger Banks

At present, AI applications in financial institutions are still relatively simple, ranging from robotic process automation to low-level decision trees and basic linear regressions. As companies look toward more complex uses in risk forecasting and management, the stakes of misunderstanding and misusing AI increase.

Conventional banks are accused of discriminatory treatment of low-income and minority groups with low credit scores and weak cash flows. Their processes can take three to four weeks due to large and cumbersome human systems. Challenger and digital-first banks can bypass this heavy procedure by relying entirely on AI, yet this very dependence renders them vulnerable to ethical challenges.

1. Account Ownership Qualification.

The first step toward having access to a wide range of financial services and products is to open a transaction account. When a person requests an account registration, a qualification process is kick-started; algorithms draw on various touchpoints to determine whether the person is eligible for an account based on the bank’s target persona. If the bank belongs to a larger ecosystem, it can take advantage of its network to sort out potential prospects.

A problem with this process is that the eligibility check is grounded on limited information, or proxies, available within the network database. Limited information leads to biases and potentially wrong, discriminatory decisions. The issue is, therefore, no longer about onboarding a so-called “wrong persona,” but rather about withholding customers’ right to open an account. This obviously goes against the principle of financial inclusion.
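A hypothetical sketch of the problem (every name and rule below is invented, not any bank’s real logic): when eligibility leans on a proxy such as postcode, two applicants with identical finances can receive different outcomes.

```python
# Hypothetical eligibility check built on proxies - NOT a real bank's logic.
# Using postcode as a proxy for risk can silently exclude whole communities.
HIGH_RISK_POSTCODES = {"ZONE_A", "ZONE_B"}  # invented labels

def eligible(applicant: dict) -> bool:
    if applicant["postcode"] in HIGH_RISK_POSTCODES:  # proxy, not actual risk
        return False
    return applicant["monthly_income"] >= 1000

# Identical finances, different outcomes, based purely on address.
print(eligible({"postcode": "ZONE_A", "monthly_income": 2500}))  # False
print(eligible({"postcode": "ZONE_C", "monthly_income": 2500}))  # True
```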

2. Identity Verification.

Another aspect of AI application in challenger banks is identity verification. Various techniques are currently in use, among which the most popular is facial biometrics. The first step requires customers to scan their passports, IDs, or driver’s licenses, and the second step asks them to take a video selfie. Together, the two actions capture and transform analog information about customers’ facial features into digital datasets.
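Under the hood, such systems typically compare mathematical “embeddings” of the two images. Here is a hedged sketch of that comparison step; a real face-embedding model is assumed rather than shown, so the vectors are simulated with random numbers:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(id_photo_vec: np.ndarray, selfie_vec: np.ndarray,
           threshold: float = 0.8) -> bool:
    """Accept the identity claim if the two face embeddings are similar enough."""
    return cosine_similarity(id_photo_vec, selfie_vec) >= threshold

# In production, both vectors would come from a trained face-embedding model;
# here we simulate an ID photo and a slightly different selfie of the same face.
rng = np.random.default_rng(1)
face = rng.normal(size=128)
print(verify(face, face + rng.normal(scale=0.1, size=128)))  # likely True
```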

The inherent issue with facial recognition is consent. In order to operate well, the algorithms must first learn from a large database of images, ideally taken in different lighting and from different angles. The large photo collections that many applications use as a source today were, unfortunately, compiled in the 1990s and 2000s without consent.

Another challenge with facial biometric systems is potential racial discrimination. Again, since AI can only function on what it has learned, an application trained predominantly on data from Caucasian faces may fail to respond correctly to an Asian user. Algorithms’ insensitivity to racial biases is a nerve-wracking issue that challenger banks must deal with.

3. Data Privacy.

As with most industries, financial firms collect data from their customers — with or without consent — and analyze those data to gain insights about each customer so as to personalize services and products. This can include personalized advice on future preparation and effective money management.

However, data collection and analysis raise concerns about privacy and security for users because the more information about a person is acquired by a third party, the more vulnerable the person becomes to fraudulent and harmful conduct.

Some machine learning systems are based on a “black-box” model, meaning that their operation is invisible and untraceable. This lack of transparency can be highly problematic if the system is to make decisions that affect individuals. Judgments about who is eligible for a loan, whose application gets rejected, or who gets paroled are critical, and individuals have the right to know how those decisions are made.
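One partial remedy is post-hoc explanation: reporting which inputs actually drove a model’s decisions. A minimal sketch using scikit-learn’s permutation importance on synthetic loan data (illustrative only; real credit models require far more rigorous auditing):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Synthetic loan data: [income, debt, noise]; approval depends on income - debt.
X = rng.normal(size=(400, 3))
y = (X[:, 0] - X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, random_state=0)

# Surfacing which inputs drove the decisions is one step toward transparency:
# income and debt should score high, the noise column near zero.
for name, score in zip(["income", "debt", "noise"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```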

4. Design and Ergonomics.

“Designing AI to be trustworthy requires creating solutions that reflect ethical principles that are deeply rooted in important and timeless values.”

- Microsoft AI Principles

Several things must be carefully considered when designing a user interface for an AI model to enhance user experience. Racial appropriation is one. A scandal with a major web search engine a few years ago attests to this point: users found that if they typed the word “criminal” in the search bar, a specific race and ethnicity appeared remarkably more frequently than others, creating the false belief that that community tends to be more violent and aggressive. To make matters worse, at that time the search engine had a colour filter, leading (again) to one race generating significantly more results than the rest. The lesson here is that algorithms must be carefully executed at the UI/UX level to guarantee that the results are appropriate.

Information bias is another. For example, say an AI system is devised to detect and categorize customers based on their metrics and personal information. A high-income customer walks into a bank and the system immediately tells bankers that this person is a prospect for VIP services based on his or her clothes, accessories, walking posture, and so on. Chances are that these judging elements capture only part of the whole picture, which might lead to inappropriate behavior from bankers or even, in the worst scenario, a withholding of the customer’s right to open an account. These are hypothetical yet probable situations that designers must think about.

The bottom line is to design responsible, smart UI/UX that complements these AI models. Whether an AI application can reach a wide and diverse audience depends largely on responsible design.

5. Customer Service Bots

“Autonomous systems must not impair the freedom of human beings to set their own intellectual standards and norms.” — AI4People: An Ethical Framework for a Good AI Society

With AI come chatbots: automated communication systems that greatly cut call-center costs and improve customer service. Yet a chatbot needs to be trained, and incomplete or faulty training can result in problems spanning privacy and data ownership to abuse and data transparency.

The needs and experiences of customers must be prioritized when designing and training a chatbot. Questions to consider include: Is it clear to customers that they are chatting with a chatbot and not a human assistant? Does the chatbot correctly represent the image of the company? Are chatbots reliable in collecting sensitive information like bank account details or health insurance? Do customers have the option to be immediately connected to a human if they have concerns the bot cannot address?
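Two of those questions, disclosure and human handoff, can be encoded directly in a bot’s routing logic. A hedged sketch, with invented topic lists and thresholds:

```python
# Hypothetical guardrails - topics and threshold are invented for illustration.
SENSITIVE_TOPICS = {"account number", "health insurance", "complaint"}

def reply(user_message: str, intent: str, confidence: float) -> str:
    # 1. Always disclose that the customer is talking to a bot.
    prefix = "[Automated assistant] "
    # 2. Route sensitive or low-confidence conversations to a human agent.
    if confidence < 0.6 or any(t in user_message.lower() for t in SENSITIVE_TOPICS):
        return prefix + "Connecting you to a human colleague now."
    return prefix + f"Here is what I found about {intent}."

print(reply("What are your opening hours?", "opening hours", 0.95))
print(reply("I need to change my account number", "account update", 0.9))
```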

6. Continuous Credit Decisioning

In the world of banking, credit is king. In a recent study on people’s spending habits and payment preferences, only 12% reported that they prefer using cash, while 77% said that debit or credit cards are their go-to payment methods. Of course, faster and more convenient payment is not the sole reason for the importance of credit.

After a customer registers for a bank account, banks collect data on the customer’s cash flow to evaluate their credit score. A higher credit score makes it easier for customers to access a wide variety of financing options, such as buying insurance, taking out loans, or trading securities. As credit histories become more and more widely used, companies even take them as one of the evaluating criteria for job applicants.

AI solutions are assisting banks and financial institutions in making smarter and more thorough decisions when it comes to loan and credit approval. A wider variety of factors is put to use to give a more accurate and precise assessment of borrowers, including those who are conventionally deemed to be underserved, like millennials.

Intuitively, banks would remove discriminating variables such as gender or ethnicity from the dataset in order to remove biases. However, things are not as simple as they sound; much more adjustment needs to be made. For instance, since financial institutions have historically approved fewer and smaller loans for women than for men with the same credit scores and incomes, the samples of women’s loan data are smaller. Without intervention in the data and evaluating criteria, female loan applicants will continue to be discriminated against. Yet manual interventions that attempt to correct the bias can themselves produce confirmation biases or self-fulfilling prophecies, making matters worse by repeating or amplifying already-made mistakes and assumptions.

At this point, AI can come to the rescue. With the help of AI, banks can examine raw data to detect patterns of historic discrimination against women, and then consciously and actively correct the dataset going forward to give a more equitable (and admittedly artificial) probability of approval. For example, one lender used AI to find that, to receive an equivalent loan, a woman would need a 30% higher income than a man. Spotting this bias is the first step; the next is to redistribute female credit profiles, bringing them closer to those of men with equivalent risk measurements while maintaining relative ranking. The result is a fairer AI-based decision-making model that can be sustained as banks aim to extend credit more equitably in the future.
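One way to bring one group’s profiles closer to another’s “while maintaining relative ranking” is quantile matching. The sketch below, on synthetic scores, illustrates that general idea; it is not the cited lender’s actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic scores exhibiting a historical gap: women scored lower on average.
men = rng.normal(650, 50, 1000)
women = rng.normal(620, 50, 1000)

# Map each woman's score to the men's score at the same quantile.
# Relative ranking among women is preserved; the distribution gap is removed.
ranks = women.argsort().argsort() / (len(women) - 1)  # each score's quantile
adjusted = np.quantile(men, ranks)

print(round(women.mean()), round(adjusted.mean()))  # ~620 -> ~650
```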

7. Cultural Diversity

An important yet often neglected way to fight discriminatory flaws is to gather a multicultural and diverse group of developers and designers, yet “diversity” is not limited to gender, nationality, and ethnicity. Other identifiers that contribute to a diverse team include heritage, religion, and culture — elements that are important, if not vital, to the discussion of appropriation versus appreciation.

Cultures can be distilled down to philosophies; that is, catering to different cultures means embracing different philosophical perspectives. Without this understanding, AI models may well turn into a weapon of intellectual dictatorship and modern imperialism, which goes against the fundamental principle of financial inclusion. Assembling a diverse group of developers thus creates a ready and dynamic forum for discussing AI ethics. The point of having these dialogues is to establish a standard whereupon AI models can decide what is acceptable and what is not without having to rely on prior prejudices about Good and Evil.

The Moral Code: Solutions for Financial Inclusion

Having discussed the above points, let’s turn to some key principles for financial institutions to take into account when applying AI solutions to achieve better financial inclusion. In any case, ethical issues need to be addressed first and foremost in order to protect users and sustain equality.

1. Promote personal well-being and respect dignity.

As financial inclusion’s goal is to improve every citizen’s quality of life, financial products and services must make customers’ well-being the top priority.

“AI inevitably becomes entangled in the ethical and political dimensions of vocations and practices in which it is embedded. AI Ethics is effectively a microcosm of the political and ethical challenges faced in society.” - Brent Mittelstadt

By and large, AI ethics has revolved around the principle of non-maleficence, which means “do no harm.” Discussions that engage fintechs, developers, regulators, and other stakeholders focus on minimizing ethical risks and intentional misuse, but another way of thinking about this is through the principle of beneficence, or “do good.” Rather than trying to prevent the worst, one can start working towards the better.

Well-being clearly goes further than monetary value; it encompasses social life, health care, education, and others. AI application designers should be aware of these surrounding factors to build systems that are both respectful and empowering.

2. Guarantee information transparency and fairness.

This principle requires differentiation between an automated system and an autonomous system.

An automated system runs within the limits of its parameters and is thus highly constrained in what it can do. Pre-defined rules determine what decision is made and what action is taken. An autonomous system, on the other hand, learns, adapts, and evolves with the environment around it. It can develop far beyond what it was first deployed to do.
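A toy contrast makes the distinction concrete. The first function below applies a fixed, pre-defined rule forever; the second updates its behavior with every new outcome, so its future decisions drift away from the original specification (all numbers are invented):

```python
# Automated: fixed rules, behavior never changes after deployment.
def automated_limit(income: float) -> float:
    return income * 0.3  # pre-defined rule

# "Autonomous" in miniature: the decision rule shifts as new outcomes arrive.
class AdaptiveLimit:
    def __init__(self) -> None:
        self.multiplier = 0.3

    def update(self, repaid: bool) -> None:
        # Learns from each outcome; future decisions drift from the original spec.
        self.multiplier *= 1.05 if repaid else 0.95

    def limit(self, income: float) -> float:
        return income * self.multiplier

model = AdaptiveLimit()
for outcome in [True, True, False, True]:
    model.update(outcome)
print(automated_limit(3000), round(model.limit(3000), 2))  # 900.0 vs a drifted value
```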

Customers are entitled to know, with the highest level of clarity, which kind of system they are using and how much information it can obtain from them. In the same manner, challenger banks need to provide justifications for decisions made with AI, and customers have a moral obligation to understand and take responsibility for the consequences of their actions.

However, too much data transparency can make users fall prey to malicious acts such as the misuse and abuse of their personal data. This means that the development of more transparent practices for AI has to go hand in hand with the development of abuse-avoiding methods.
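One simple abuse-avoiding method is to pseudonymize identifiers before data leaves the decision system. A minimal sketch using a keyed hash; real deployments would add tokenization, key rotation, and access controls, and the key below is a placeholder:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me-regularly"  # placeholder; keep real keys in a secrets manager

def pseudonymize(customer_id: str) -> str:
    """Replace a raw identifier with a keyed hash before logging or sharing."""
    return hmac.new(SECRET_KEY, customer_id.encode(), hashlib.sha256).hexdigest()[:16]

# Analysts can still join records on the pseudonym without seeing the raw ID.
print(pseudonymize("GB-ACC-00123456"))
```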

3. Serve and advocate human rights.

It goes without saying that gaining access to a financial ecosystem is one aspect of basic human rights. Human rights are social and legal norms that protect every single citizen from abuses and mistreatments, and can be divided into two types:

a. Civil and political rights, including the right to life, liberty, and property, the right to freedom of expression, the pursuit of happiness, and equality before the law.

b. Social, cultural, and economic rights, such as the right to work and receive an education, and the right to participate in science and culture.

A worthwhile and fulfilling life, rather than the mere ability to live “in liberty, happiness, and well-being,” is the ultimate goal of human rights movements. The banking industry can learn quite a lot from local authorities’ applications of AI, as in the case of London, where the authorities collaborated with scientists to develop Project Odysseus, with a view to capturing and understanding the city’s level of activity. Through a combination of machine learning algorithms, statistical time-series analysis, and image processing, the authorities collected sufficient data, which were then used for the safe reopening of streets and for public health planning. Human-centered, public-oriented projects like these could equally be deployed in the financial industry to promote human rights.

4. Define the scope and ownership of responsibility.

Accountability may sound familiar and simple, yet the debate around who should be held responsible for the use of AI systems remains heated throughout the globe. By definition, accountability refers to an individual’s or an organization’s legal and ethical obligation to take responsibility for the use of AI and to disclose the results in a transparent manner. What this presupposes is a power relation where one party is in control and one is to be blamed, and settling this relation is so difficult that governments and international entities such as the European Union and the G7 have addressed it as an open challenge.

Why is it that difficult to settle?

Two reasons: the differing quality of responsibilities, and technology’s influence and control over humans. First, one assumes responsibility by taking an action, yet the quality of that responsibility depends on the actor’s properties. Intelligent technology complicates this further by integrating human and machine into a kind of hybrid being that simultaneously executes cognitive tasks, including decision-making. When algorithms enter higher-level decision-making, it becomes increasingly complex for a human individual to intervene in the artificial system.

Secondly, technology can coerce humans into taking action. A classic example is the seatbelt’s beeping sound: the sound stops when, and only when, the seatbelt has been fastened. Contemporary algorithms take much more complex and sophisticated forms; they propose, suggest, and limit options. The question that remains, therefore, is how voluntary and how free humans can be from the machine’s control. If an action is voluntary, meaning that the person taking it sufficiently understands the technology in use, what does it mean to “understand,” and how much is “sufficient”? What is the precise reading of “understandability,” “transparency,” “explicability,” or “auditability”? These issues are nowhere near easy to answer.

5. Consider ethics as doing rather than knowing.

Understanding ethical theory is one thing, implementing an ethical approach is another.

As the old saying goes, actions matter.

The common practice of doing ethics is publishing ethical guidelines for AI projects. In this sense, a guideline is not simply an informative block of text, but rather what Judith Butler terms a “performative text”; that is, a text that performs an action of some sort. For example, when a priest pronounces a couple “husband and wife,” he is not simply uttering a sentence; the utterance itself establishes the couple’s marital status from that moment onwards.

Ethical guidelines normally perform three functions: calls for deregulation, assurances, and claims to expertise. Companies can reason that an ethical guideline makes further regulation unnecessary because it reflects their moral adherence; in other words, the implication is that “our company should not be restricted because we have strictly followed ethical codes.” In the same manner, ethical guidelines assure investors and the public that the firm is committed to morally guided actions and campaigns, thereby deflecting public critique.

Expertise is a bit trickier. In an age of cut-throat competition, every company is racing to have AI experts, and what follows is technological, then economic, and even political control. Yet expertise only has its effect when it receives public recognition, and publishing ethical guidelines fits right into the picture. Publishing and engaging in discussions surrounding AI ethics is a great way to boost the public image of having expertise.

Final Words

It is undeniably critical to avoid the misuse of AI technology in banking, but it is also important not to underuse it simply to avoid the ethical risks that come with implementing digital solutions. The key is to find a balance between harnessing the benefits and mitigating the potential harms.

Complying with regulations is not enough — it is like playing not to get disqualified rather than playing to win the game. Companies and organizations should plan to obtain what is called the “dual advantage” in their ethical approach. On the one hand, they can gain benefits from the socially preferable opportunities that AI brings. On the other hand, they can avoid the costs of actions that are legally permissible yet socially rejected. The answer to the headline is yes, and this is exactly why.


Afzal Ibrahim

Tech, Design, and Art — love’em all. Just out here exploring new ideas and sharing what I learn with y'all. Curator at pyaarnation.com