A May 2024 Snapshot of the GAEIA Ethical Dilemma Library

GAEIA Ethical Dilemma Library and Guidebook

The GAEIA Ethical Dilemma Library represents research developed by global scholars in collaboration with industry and government experts. The information and data included in the library were gathered using the GAEIA ethical dilemma template, a cross-disciplinary tool for collecting, sharing, and seeking feedback on ethical dilemmas associated with the use of artificial intelligence and advanced technologies across sectors and regions of the world (see Appendix A).

The 67 ethical dilemmas shared here represent a snapshot of the global deliberation involved in examining the impacts of advanced technologies across international and disciplinary contexts, as well as many time zones. They offer a glimpse into the learning process, as well as the types of themes explored. The financial and education sectors were the primary focus of this research, although issues related to climate change, sustainability, health, and social media data, among other topics, are also included.

The library code scheme (e.g., LC-A1, LC-A2…) reflects our internal numbering system for peer review. With a few exceptions, each ethical dilemma underwent within-team and cross-team peer review, which helped to expand and refine our thinking about each topic and, importantly, helped us learn from each other’s vast and diverse expertise. Every ethical dilemma template was also reviewed by a global team of scholars across Cohorts I and II representing diverse regions of the world and disciplinary backgrounds. Since the curation of this library, scholars have continued to revise the dilemmas and add comments, links to research, and other resources, some of which are not reflected here. As such, it is important to note that the ethical dilemmas included in this document represent a May 2024 snapshot of work that is in constant revision. It is also important to acknowledge that insights on regional variations reflect the experiences and perceptions of the scholar authors and industry advisors. Likewise, given the rapid pace of invention and application of artificial intelligence and advanced technologies, as well as shifts in societal, government, and industry responses to these technologies, perceptions of opportunity and risk are ever evolving.

As we develop our capacity to share the working ethical dilemmas, we look forward to inviting members of civil society, government, and industry to revise, add content, and co-create new ethical dilemmas in real time on the GAEIA platform. In the meantime, we hope the ethical dilemma library is of benefit to others beyond the valuable learning process it has afforded the GAEIA program. A primary purpose of the material below is the learning that takes place when people from around the world come together to contemplate the impacts of technology. These practices help to expand narratives, perspectives, and understanding of how artificial intelligence and advanced technologies affect the lives of people around the world.

The Responsible Tech Ethical Dilemma Network © 2024 is licensed under CC BY-NC-SA 4.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/4.0/. This license allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, for noncommercial purposes only. If others modify or adapt the material, they must license the modified material under identical terms. This license requires that reusers give credit to the creator, GAEIA.

Table of Contents

LC-A1:  Integration of generative Artificial Intelligence (AI) in education systems
LC-A2:  Legal personhood for synthetic persons or AI entities
LC-A3:  The application of AI and big data on the workforce
LC-A6:  Artificial intelligence (AI) algorithms in credit screening
LC-A7:  Large Language Models’ (LLMs) role in educational access
LC-A8:  Automated decision-making and distribution of resources in emergencies
LC-A9:  Synthetic data generation
LC-CC1:  Role of Machine Learning in allocating funding for scientific research
LC-CC2:  Regulation of FinTech companies
LC-CC3:  Greenwashing: Risks and opportunities
LC-CC5:  Role of sustainability among FinTech companies
LC-CC6:  Sustainability data transparency
LC-CC7:  Sustainability incentives, opportunities and challenges
LC-CC8:  Double counting of carbon credits
LC-CC9:  Role of Investment Portfolios in Incentivizing Net-Zero Energy
LC-CE2:  Regulation of Cryptocurrency
LC-CE3:  FinTech’s disruption of the financial services sector
LC-CE4:  AI Ownership
LC-CE5:  Balancing profit maximization and customer exploitation
LC-CE6:  Behavioral marketing’s role in influencing addictive online behaviors
LC-CE7:  Data privacy rights
LC-CE8:  Power imbalances between industry and consumers
LC-CE9:  Risks of proliferating customer data and AI decision-making
LC-CE10:  Regulatory gaps
LC-CE11:  Ethical responsibility within organizations
LC-CE12:  Challenges associated with vague ethical legal frameworks
LC-CE13:  Are there risks to AI algorithm transparency?
LC-CE14:  AI for the assessment of creditworthiness and lending
LC-CE15:  Car insurance pricing based on social media posts
LC-CE16:  Chatbots, robotic bankers, inanimate intelligence virtual assistants
LC-CE17:  Crypto- and blockchain-based assets
LC-CE18:  Chatbots, robotic bankers, inanimate intelligence virtual assistants II
LC-CE19:  Robo-advisors in context of investment scenarios
LC-CE20:  Customer Data Privacy Paradox
LC-CE21:  Cloud computing, data storage, and cybersecurity risks
LC-CE23:  Customer profiling
LC-CE26:  Psychological manipulation of user behavior
LC-CE27:  Opacity of AI Systems
LC-CE28:  Privacy and Surveillance: The right to be let alone
LC-CE29:  Role of metaverse goods and virtual reality tools and services
LC-CL1:  Data bias and exacerbations in inequity and discrimination
LC-CL2:  Data bias towards historically marginalized groups in bank lending
LC-CL4:  Discrimination in banking and insurance practices
LC-CL5:  Mobile financial services
LC-CL6:  Interoperability of personal data use between companies
LC-CL7:  Immutability of blockchain and “right to be forgotten”
LC-CL8:  Smart contracts
LC-CL9:  Credit and lending data exploitation
LC-CL10:  Big data and lack of transparency in lending decisions
LC-CL11:  Ethics in using alternative/novel data in lending decisions
LC-CL12:  Data bias in lending decisions
LC-CL13:  Cryptocurrencies, blockchain and Scam Coins
LC-CL14:  Black box AI in lending decisions
LC-CL15:  Third-party data sharing
LC-IN1:  The use of AI in insurance claim decisions
LC-IN4:  Biased data for new product development
LC-IN5:  Insurance companies’ use of technology enabled incentive programs
LC-IN6:  AI bias vs. personal bias in decision-making
LC-IN7:  Explainability vs accuracy in AI models
LC-IN8:  Insurance companies’ use of public online data for profiling
LC-IN9:  Microsegmentation and risk pooling in insurance programs
LC-PA1:  Risks associated with rapid transition to digital transaction payments
LC-PA2:  User data and personalized advertisements
LC-PA3:  AI and data use in mobile banking services
LC-PA4:  Unauthorized payments against bank customers’ accounts
LC-PA5:  Microcredit to micro, small, and medium enterprises (MSMEs)
LC-PA7:  Buy Now, Pay Later (BNPL) opportunities and risks
Appendix A. Ethical Dilemma Template

LC-A1:  Integration of generative Artificial Intelligence (AI) in education systems

EXPLORATION

What is the ethical dilemma? The integration of generative Artificial Intelligence (AI) in education systems. While AI opens new horizons for education, it also presents challenges that need to be addressed to ensure its ethical use. The dilemma lies in balancing the potential benefits of AI in education with the ethical considerations that arise from its use.

What is the content? The main concern is that the checks and balances applied to teaching materials are not being used for the implementation of generative AI. This includes concerns about the accuracy of content, age-appropriateness, relevance of teaching, and cultural and social suitability. There is also the potential for bias in AI applications.

What are the technologies or types of data usage involved? Generative AI can create new content, such as texts, images, or music, based on the data it has been trained on. The data usage involved includes the data used to train the AI and the data generated by it.

What is the application? What drives this use in this case? The application is in the education sector. The goal is to enhance teaching and learning processes, making them more efficient and personalized, for example in schools or research institutions.

What ethical issues are at play here? The ethical issues include privacy, accuracy of content, cultural and social suitability, and the potential for bias. There is also the risk of undermining the authority and status of teachers and the necessity of schools. Ethical guidelines on AI and data usage in teaching and learning, such as those issued by the European Commission (see More Resources), are designed to help educators understand the potential of AI applications and raise awareness of possible risks.

Regional variations: Another ethical issue that could arise is that the representation of the world via AI becomes the way students learn about the world. Any entrenched biases present in the data (and the AI) will then become things that students accept rather than identify, interrogate, and challenge.

What group of people are at risk? What group of people may gain?

Teachers and students are at risk, as the integration of AI in education can potentially undermine the role of teachers and the necessity of schools. On a larger scale, society is at risk if AI in education leads to increased automation and decreased human interaction in education.

Regional variations:

Teachers and students across Africa may face risks associated with the integration of AI in education, including the potential undermining of the role of teachers and schools. This risk extends to wider society if increased automation reduces human interaction in education.

Properly regulated and implemented AI in education could benefit students by enhancing learning experiences and providing personalized education. Additionally, educators could gain from AI tools that streamline administrative tasks and provide valuable insights into student performance.

What is the broader impact of this dilemma?

The impact includes potential changes in the education system, the role of teachers, and the learning experience of students. It also has implications for the development and regulation of AI technologies. This implies risks to society as a whole due to the issue of automation and decreased human interaction in education, which is a deeply human-centered field.

What are the cultural aspects important for this dilemma?

Human interaction in education is fundamental for any educational development. AI may either enhance this or jeopardize it.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

Possible controls include developing institutional policies and formal guidance concerning the use of generative AI applications. Ministries of education need to build their capacities in coordination with other regulatory branches of government, especially those regulating technologies. UNESCO recommends that education systems set their own rules and not rely on the corporate creators of AI to regulate its use.

In what conditions could this be acceptable?

This might be acceptable if there are proper regulations in place, the potential risks are mitigated, and the benefits of using AI in education are maximized.

What are other observations and conclusions for a solution?

It’s crucial to ensure that AI is used ethically in education. This involves continuous dialogue among policymakers, EdTech partners, academia, and civil society. It’s also important to invest in schools and teachers to solve persistent educational challenges. The world continues to underfund schools and teachers, and this needs to change. UNESCO is steering the global dialogue on this issue and is developing policy guidelines on the use of generative AI in education and research.

More Resources

  • European Commission (2022), “Proposal for a Directive of the European Parliament and of the Council on liability for defective products (PLD). COM(2022) 495 final”.
  • European Parliament and Council (2016), “General Data Protection Regulation (EU) 2016/679”.
  • European Commission (2022), “Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive) (AILD). COM(2022) 496 final”.
  • European Commission, Directorate-General for Education, Youth, Sport and Culture (2022), “Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators”, Publications Office of the European Union, https://data.europa.eu/doi/10.2766/153756.
  • European Commission (2019), “Ethics guidelines for trustworthy AI”.
  • National Institute of Standards and Technology (2023), “Artificial Intelligence Risk Management Framework (AI RMF 1.0)”.
  • UNESCO (2021), “AI and education: guidance for policy-makers”.
  • UNESCO (2023), “ChatGPT and artificial intelligence in higher education: Quick start guide”.
  • Vincent-Lancrin, S. and R. van der Vlies (2020), “Trustworthy artificial intelligence (AI) in education: Promises and challenges”, OECD Education Working Papers, No. 218, OECD Publishing, Paris, https://doi.org/10.1787/a6c90fa9-en.

LC-A2:  Legal personhood for synthetic persons or AI entities

EXPLORATION

What is the ethical dilemma?

The ethical dilemma is the question of legal personhood for synthetic persons or AI entities. These entities, powered by advanced AI technologies, can make autonomous decisions, interact with humans, and even exhibit behaviors that mimic emotions. The dilemma lies in determining whether these synthetic entities should be granted legal personhood, similar to corporations, and if so, what rights and responsibilities they should have.

What is the content?

As AI technologies advance, synthetic persons are becoming increasingly sophisticated, raising questions about their legal status. Should these entities be treated as legal persons with certain rights and responsibilities, or should they remain as tools or property with no legal rights or responsibilities?

What are the technologies or types of data usage involved?

AI systems capable of decision-making and human-like interaction. These systems use machine learning algorithms, which require large amounts of data to learn and improve their capabilities. The data used and generated by these systems is a crucial aspect of this dilemma, as it raises issues of privacy, consent, and data ownership.

What is the application? What drives this use in this case?

Synthetic persons are being applied in various fields, including finance, healthcare, entertainment, and customer service. The drive behind their use is their ability to perform tasks efficiently, make autonomous decisions, and interact with humans in a more natural and engaging way.

What ethical issues are at play here?

The ethical issues at play include justice, responsibility, and accountability. Who should be held responsible for the actions of synthetic persons, especially when those actions have negative consequences? How can accountability be ensured when the decision-making processes of AI systems are often opaque?

What group of people are at risk? What group of people may gain?

Groups at risk include individuals and communities who could be negatively affected by the decisions made by synthetic persons, such as consumers, employees, and potentially society at large. On the other hand, businesses and organizations that use synthetic persons to improve efficiency and productivity stand to gain.

What is the wider impact of this dilemma?

The wider impact of this dilemma could be profound, leading to changes in societal norms and human relations. However, the most immediate and significant impact is within the legal system, which will need to adapt its often conservative and normative views.

What are the cultural aspects important for this dilemma?

There could be consequences for privacy, security, and employment. An interesting aspect is our understanding of what it means to be a person. In some cultures, synthetic persons might be more readily accepted, while in others, they might be viewed with suspicion or fear. The debate about the rights and responsibilities of synthetic persons could also be influenced by cultural views on technology, law, and ethics.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

Possible controls could include creating a comprehensive legal framework that defines the rights and responsibilities of synthetic persons, implementing robust mechanisms to ensure accountability, and establishing clear and enforceable guidelines for the ethical use of synthetic persons. It’s also crucial to involve a wide range of stakeholders in these discussions, including AI developers, legal experts, ethicists, and representatives of the public.

In what conditions could this be acceptable?

Granting legal personhood to synthetic persons might be acceptable if there are clear and comprehensive laws and regulations in place, if accountability can be ensured, and if the benefits of using synthetic persons (such as increased efficiency and innovation) outweigh the potential risks and harms.

What are other observations and conclusions for a solution?

This is a complex and rapidly evolving issue that requires ongoing discussion and careful consideration. As AI technologies continue to advance, it’s crucial to regularly revisit and revise our legal and ethical frameworks to ensure they remain relevant and effective. It’s also important to foster public understanding and debate about these issues, to ensure that the decisions made reflect a broad consensus and are in the best interests of society as a whole.

More Resources

  • Asaro, P. (2007). “Robots and responsibility from a legal perspective.” In: IEEE Conference on Robotics and Automation, Workshop on Roboethics, Rome. Unpublished proceedings.
  • Dewey, J. (1926). “The historic background of corporate legal personality.” Yale Law Journal 35(6):655–673.
  • Bryson, J. J., Diamantis, M. E., & Grant, T. D. (2017). “Of, for, and by the people: the legal lacuna of synthetic persons.” Artificial Intelligence and Law 25(3):273–291.
  • Garrett, B. L. (2014). “The constitutional standing of corporations.” University of Pennsylvania Law Review 163.


LC-A3:  The application of AI and big data on the workforce

EXPLORATION

What is the ethical dilemma?

The application of AI and big data on the workforce.

What is the content?

Who: Employers

What: Scanning resumes

Why: Saving time and manpower for hiring people

What are the technologies or types of data usage involved?

AI, big data, and machine learning.

What is the application? What drives this use in this case?

Auto-scoring of candidates based on keywords in their CVs to save time and manpower in the hiring process.
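
To make the mechanism concrete, here is a minimal sketch of keyword-based CV auto-scoring of the kind described above. The keywords, weights, and sample CVs are hypothetical, invented for illustration; real screening systems are typically trained on historical hiring data rather than a hand-written list, which is exactly where the biases listed below can enter.

```python
# Minimal, hypothetical sketch of keyword-based CV auto-scoring.
# Keywords, weights, and sample CVs are invented for illustration only.

KEYWORD_WEIGHTS = {
    "python": 3,
    "machine learning": 3,
    "leadership": 2,
    "stakeholder": 2,
    "synergy": 1,
}

def score_cv(cv_text: str) -> int:
    """Sum the weights of every keyword that appears in the CV text."""
    text = cv_text.lower()
    return sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in text)

cvs = {
    # Strong technical background, plainer (perhaps non-native) phrasing:
    "candidate_a": "I build python machine learning system for bank.",
    # Weaker technical background, fluent keyword-rich phrasing:
    "candidate_b": "Leadership of stakeholder synergy initiatives; familiar with python.",
}

for name, text in cvs.items():
    print(name, score_cv(text))  # candidate_a: 6, candidate_b: 8
# A keyword count cannot judge the correctness or depth of a CV, so
# fluent keyword-rich phrasing can outrank genuine technical experience.
```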

What ethical issues are at play here?

  1. Unconscious bias: Biases inherited from the machine learning training data.
  2. Privacy: Analytics of existing workers may be used without consent.
  3. Flexibility: Insufficient flexibility to cater to different cases.
  4. Fairness: Candidates with less language proficiency may be disadvantaged in the AI selection system, despite having strong technical backgrounds.
  5. Accuracy: AI cannot judge the correctness and accuracy of data.

What group of people are at risk? What group of people may gain?

All workers are at risk, including potential candidates and current employees. Employers may gain through the time and manpower saved in hiring.

What is the wider impact of this dilemma?

The potential loss of jobs, discomfort and mistrust among employees, and the control of human decisions by AI, leading to new responsibilities assigned to AI systems.

What are the cultural aspects important for this dilemma?

AI may not account for cultural differentiation across geographical locations or job types, such as attitudes towards work styles, the need for creativity versus technical skills, and perspectives on diversity and inclusion.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

Establish clear policies for the use of AI, including mechanisms for candidate selection and performance evaluation, as well as privacy concerns.

In what conditions could this be acceptable?

When the AI system is human-centered, and stakeholders accept its use.

What are other observations and conclusions for a solution?

AI in the workforce offers advantages such as efficiency and cost savings by automating repetitive tasks, especially in roles like customer service and reporting through language model chatbots.

LC-A6:  Artificial intelligence (AI) algorithms in credit screening

EXPLORATION

What is the ethical dilemma?

The deployment of artificial intelligence (AI) algorithms in credit screening. AI introduces concerns related to algorithmic bias, privacy violation, and lack of transparency. The dilemma arises from the need to balance the advantages of efficient credit screening with the potential negative consequences for individuals and society.

What is the content?

What are the technologies or types of data usage involved?

Technologies: Machine learning algorithms utilizing vast amounts of data, including financial histories, social media activity, and personal information. Applications: These algorithms aim to predict creditworthiness via risk scoring and to automate loan underwriting and fraud detection.

What is the application? What drives this use in this case?

What ethical issues are at play here?

Algorithmic bias, lack of transparency, and privacy violation.

What group of people are at risk? What group of people may gain?

At Greatest Risk: Individuals from marginalized groups may be at the greatest risk of experiencing algorithmic bias and unfair treatment. Beneficiaries: Financial institutions and lenders benefit from streamlined and efficient credit evaluation processes enabled by AI.

What is the wider impact of this dilemma?

What are the cultural aspects important for this dilemma?

Algorithmic bias and ethical concerns can vary across cultural and geopolitical areas due to different historical, societal, and economic contexts.

  • Cultural biases ingrained in data could lead to varying degrees of discrimination in different regions.
  • Geopolitical differences in data protection regulations could affect the level of privacy violation and transparency.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

Algorithmic audits, diversity in data, transparency assessments (Explainable AI), and global harmonization of regulations and standards.
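
As one illustration of what an algorithmic audit could check, the sketch below computes per-group approval rates and their ratio on hypothetical screening outcomes. The records and the 0.8 threshold (the “four-fifths” heuristic borrowed from US employment-discrimination practice) are assumptions for illustration, not a complete audit methodology.

```python
# Hypothetical sketch of one algorithmic-audit check: comparing
# approval rates across demographic groups (disparate impact ratio).
from collections import defaultdict

# Invented example records of (group, approved) screening outcomes.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1

rates = {g: approvals[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print(rates)                                   # {'group_a': 0.75, 'group_b': 0.25}
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33, well below 0.8
# A ratio below ~0.8 would flag the model for closer review; passing
# this single check alone would not prove the system is fair.
```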

In what conditions could this be acceptable?

What are other observations and conclusions for a solution?

Educational initiatives and industry-wide collaborative efforts to share best practices and address common challenges.

LC-A7:  Large Language Models’ (LLMs) role in educational access

EXPLORATION

What is the ethical dilemma?

LLMs may provide access to education, but that education may be flawed, biased, or inaccurate.

What is the content?

Education is essential. In many areas of the world, students greatly outnumber teachers, and some regions have no access to education at all. Can ChatGPT and similar technologies help? Are they as effective as current digital solutions (e.g., online courses and learning platforms such as Coursera, Khan Academy, Udemy)? The issue is that ChatGPT is known to exhibit bias, having been trained heavily on American data. The output from such systems would therefore reflect American perspectives on ideas, knowledge, and identities.

What are the technologies or types of data usage involved?

Technology: ChatGPT, other chatbot technologies. Service: Massive Open Online Courses (MOOCs).

What is the application? What drives this use in this case?

Students around the world using ChatGPT when they do not have adequate access to education, or when teachers are overwhelmed by the number of students they need to prepare classes for.

What ethical issues are at play here?

Education is a necessity in many societies, but not all societies can provide adequate education. ChatGPT can provide education, but it may be flawed, biased, or completely incorrect.

What group of people are at risk? What group of people may gain?

Students are the group at risk. The people whose views are mimicked by ChatGPT could potentially gain.

What is the wider impact of this dilemma?

AI technologies are increasingly becoming points of information and knowledge seeking, especially with chatbot technologies. If all chatbots have these flaws, then interactions with humans could lead to misinformation. Furthermore, as AI research advances in a few countries, these countries can embed their attitudes, values, and beliefs within their AI systems and technologies, further contributing to the issue of not having enough diversity in AI, leading to bias. If students are not educated, they may be limited in their ability to contribute to society.

What are the cultural aspects important for this dilemma?

ChatGPT and chatbot technologies exhibit bias present in their training data. This bias may influence the operation of the chatbots, meaning that when people ask chatbots about attitudes, values, and beliefs regarding a culture, the chatbot will only mimic the attitudes, values, and beliefs present in the training data.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

Educational bodies should regulate and continually observe the outputs of chatbot technologies.

In what conditions could this be acceptable?

If teachers use ChatGPT-like technologies to aid their teaching capabilities (not replace them), the workload on teachers could be reduced. Furthermore, online courses could experiment with ChatGPT-like technologies to see if these systems can provide a more personalized learning experience for students.

What are other observations and conclusions for a solution?

ChatGPT, Google, online courses, and traditional education systems aim to help students learn more about certain topics. The issue is that only two of those systems (online courses and traditional education systems) are regulated and certified to provide education.

LC-A8:  Automated decision-making and distribution of resources in emergencies

EXPLORATION

What is the ethical dilemma?

Automated decision-making for the distribution of limited resources in emergencies.

What is the content?

The UN has limited funds for emergencies in other countries (war refugees, natural disasters, disease), but how can people decide which countries should get the funding? Political stances and opposition to certain countries make decision-making protracted, all while people are suffering. AI can make decisions faster than humans, but it can be biased. When dealing with the most vulnerable people in need, is it ethical to hand decision-making to an AI? If a country donates resources to the global emergency fund but an AI system refuses to aid its own people, is that acceptable? Will it be acceptable to the country? AI can find the most optimal outcome, but what counts as optimal is debatable, which means algorithmic systems may exhibit the biases of their original creators. For example, if a creator deemed disease outbreaks a greater priority than war refugees, the country with the disease outbreak would receive the emergency funds first, while the war-torn country and its refugees missed out. Is this an ethical way to decide who gets resources?
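
The weighting problem in that example can be made concrete with a small sketch: the same allocation routine produces opposite outcomes depending only on the priority weights its creators chose. All names and numbers below are hypothetical.

```python
# Hypothetical sketch: one allocation algorithm, two creators' weights.
# The 'optimal' recipient is whatever the chosen weights say it is.

crises = [
    {"country": "A", "type": "disease_outbreak", "affected": 100_000},
    {"country": "B", "type": "war_refugees", "affected": 120_000},
]

def allocate(crises, weights):
    """Pick the crisis with the highest weight-times-affected score."""
    return max(crises, key=lambda c: weights[c["type"]] * c["affected"])

creator_1 = {"disease_outbreak": 2.0, "war_refugees": 1.0}
creator_2 = {"disease_outbreak": 1.0, "war_refugees": 2.0}

print(allocate(crises, creator_1)["country"])  # A: outbreaks prioritized
print(allocate(crises, creator_2)["country"])  # B: refugees prioritized
```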

What are the technologies or types of data usage involved?

Artificial Intelligence.

What is the application? What drives this use in this case?

Global resource distribution during emergencies around the world.

What ethical issues are at play here?

Biases of humans/countries versus biases of AI.

What group of people are at risk? What group of people may gain?

The people most at risk are those suffering from disasters (war refugees, natural disasters, sick people during a disease outbreak). The only people that could gain are those to whom the AI distributes resources.

What is the wider impact of this dilemma?

With finite resources, we need a way to find the most optimal use for them. Can AI help us in this manner? Is it ethical to let AI decide where resources are distributed?

What are the cultural aspects important for this dilemma?

Some cultures will prioritize helping specific cultures over others, or specific people in need (sick, elderly) over others (homeless). This nuance would carry over to which countries they would be willing to aid (e.g., if country A was at war with country B, and country B had a disease outbreak, would country A be willing to aid B?). Certain countries will refuse to aid others and will most likely refuse to use an AI that doesn’t reflect their cultural values when it comes to such a use case. Other countries would refuse to use AI in this case because they believe it would be unethical.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

If AI was used to aid in decision-making, it may be acceptable, but it cannot be the sole decision-maker.

In what conditions could this be acceptable?

This could be acceptable if humans monitored the AI and the AI could provide an explanation of its decision-making.

What are other observations and conclusions for a solution?

AI aids in decision-making for many aspects of our lives, but we may not be aware of it. In such a high-visibility environment like the UN emergency council, how would making the presence of AI visible affect perceptions of its use?

LC-A9:  Synthetic data generation

EXPLORATION

What is the ethical dilemma?

Using synthetic data versus relying upon data that underrepresents certain groups in society. Are we just trading one compromise for another?

What is the content?

AI systems can be biased against certain groups based on many factors, such as age, gender, income, marital status, and where they live. This is usually because there isn’t enough variety or representation of these groups in the training data when training the AI system. When it comes to financial and capital access, this can be a critical issue. Synthetic data can alleviate these problems, but also presents its own challenges and limitations. It can provide examples that were not present in the original dataset (increasing representation of the groups that didn’t have enough of it), but it can also provide its own bias in these generated examples (a synthetic data generator is another AI system). This just propagates bias, but in a more complex manner.
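
A minimal sketch of the mechanism, assuming a deliberately naive generator (random perturbation of existing minority-group records) as a stand-in for the learned generators used in practice: representation improves, but every synthetic record inherits the statistics of the few real records the generator saw.

```python
# Minimal sketch: augmenting an underrepresented group with synthetic
# records by perturbing real ones. All numbers are invented.
import random

random.seed(0)

# Feature vectors (e.g., income, years of credit history) for a group
# underrepresented in the training data:
minority_records = [[30_000, 2.0], [28_000, 1.5], [35_000, 3.0]]

def synthesize(records, n, noise=0.05):
    """Generate n synthetic records by adding small relative noise."""
    out = []
    for _ in range(n):
        base = random.choice(records)
        out.append([x * (1 + random.gauss(0, noise)) for x in base])
    return out

augmented = minority_records + synthesize(minority_records, n=50)
print(len(augmented), "records after augmentation")
# Representation improves, but every synthetic record is interpolated
# from only three real ones: any skew in those three (e.g., all low
# incomes) is silently amplified rather than corrected.
```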

What are the technologies or types of data usage involved?

AI, synthetic data generation.

What is the application? What drives this use in this case?

Synthetic data generators are used to create training data (that can be appended to existing real-world datasets) which is then used to train AI systems. These AI systems can then be deployed in sociotechnical environments, such as the financial sector.

What ethical issues are at play here?

Bias of previous societies (original human training data) versus introduced bias (synthetic training data).

What group of people are at risk? What group of people may gain?

Everyone may be at risk because it is not known what biases the AI systems display.

What is the wider impact of this dilemma?

All AI decision-making systems are dependent on the quality of their training data. If there are biases in the training data (such as underrepresentation), this may be reflected in the operation of the AI. Synthetic data generation (leading to synthetic datasets) can correct this imbalance in representation, but as these generators are AI systems themselves, they can also introduce their own biases into the dataset, replacing one problem with another. This dilemma can apply to all AI systems.

What are the cultural aspects important for this dilemma?

Underrepresentation of certain cultures may lead to bias in datasets, and while this could potentially be rectified, it can also be exacerbated by synthetic datasets.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

If humans review and analyze the synthetic datasets and check for bias and equal representation for all variations of data, it may be possible to mitigate or negate the effect of bias. This is also true if done on the original dataset.
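
One concrete form such a human review could take is a simple representation audit of the dataset before and after augmentation, sketched below with hypothetical group labels and counts. A reviewer would compare these proportions against the population the model will serve, and would still need to inspect the synthetic records themselves.

```python
# Hypothetical sketch of a representation check a human reviewer might
# run before and after synthetic augmentation.
from collections import Counter

def representation(dataset):
    """Return each group's share of the dataset."""
    counts = Counter(group for group, _features in dataset)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

# Placeholder records of (group, features); counts are invented.
original = [("group_a", None)] * 90 + [("group_b", None)] * 10
synthetic = [("group_b", None)] * 80  # generated records

print(representation(original))              # group_b at 10%
print(representation(original + synthetic))  # group_b at 50%
# Balanced counts are necessary but not sufficient: reviewers still
# need to check that the synthetic records are plausible and do not
# encode the generator's own biases.
```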

In what conditions could this be acceptable?

A human-in-the-loop system when using synthetic data generation would ensure that bias is kept to a minimum.

What are other observations and conclusions for a solution?

As data collection increases, there will most likely be a reduced need for synthetic data generators except in the most complex cases. In such cases, however, there will most likely not be enough data to train such a generator.

LC-CC1:  Role of Machine Learning in allocating funding for scientific research

EXPLORATION

What is the ethical dilemma? Should machine learning be trusted to allocate funding to scientific research?

What is the content? MIT researchers have created a system called DELPHI (Dynamic Early-warning by Learning to Predict High Impact) to predict which biotechnology research papers will be the most ‘impactful’. Its use cases include diversifying and providing security to funding portfolios.

What technologies or types of data usage are involved? Machine learning.

What is the application? What drives this use in this case? This framework is intended to find ‘diamond in the rough’ research that will have the greatest impact. With an issue as pressing as climate change, this tool could be incredibly valuable in selecting which research to fund.

What ethical issues are at play here? This algorithm can be biased and make it even harder for people who already face challenges in getting their research funded. It could also worsen the ‘Matthew effect,’ where research institutes that are already well-funded receive even more funding and remain dominant. On the other hand, this tool could be essential in identifying impactful research earlier, which will help solve time-sensitive issues like climate change.

What group of people are at risk? What group of people might gain? Those who already face challenges in getting their research funded will likely continue to be left out as existing biases in the scientific field become embedded in the framework. Funders wanting to take on less risk stand to gain by having a tool to select the most promising research to fund.

What is the wider impact of this dilemma? Climate change is a global issue, but tools like this might steer research toward primarily benefiting the areas that hold the most capital to fund it. Controlling research streams, particularly for climate change, is a sensitive subject. Climate change research spans all disciplines (natural science, social science, etc.), and numerous projects have been launched and funded publicly or privately. One positive aspect of tools like DELPHI is that they could minimize the cost of funding by identifying which projects to prioritize, thereby promoting a diversity of projects. With climate change, research should be as diverse and extensive as possible to enable breakthroughs.

What are the cultural aspects important for this dilemma? Impact for whom? If certain climate solutions work better in areas where less funding is available, will this research have a harder time getting funding from other sources?

This dilemma is unique in its focus on the use of AI in Science, Technology, and Innovation. The issue of funding can be further complicated by introducing the existing biases of AI models into already biased research funding processes. However, the dilemma also presents an opportunity to use AI to identify research that is not published in top journals or easily accessible to decision-makers in the field of funding. There is an example of this in Kenya, where the African Population and Health Research Center (APHRC) uses an AI model developed in-house. Combining the capabilities of MIT and APHRC could help uplift Africa.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution? We can add details on how machine learning is biased. For example, no matter how complicated the algorithm is, machine learning is based on collected data, which may already be biased. “Entrenched discrimination — from the criminal legal system to housing, the workplace, and our financial systems — is often baked into the outcomes AI is asked to predict.” Personally, I feel it is not the fault of machine learning: when we trace back to the starting point, machine learning is like an oven, and what matters is the quality of the raw materials (the data).

In what conditions could this be acceptable?

  • Human oversight and/or expert interpretation before the data gets used.
  • More transparency in the development of NLP or other algorithms for analyzing ESG strategies.
  • The data provided by the algorithms should be viewed critically and not accepted as the total authority on ESG strategy analysis.

What are other observations and conclusions for a solution?

LC-CC2:  Regulation of FinTech companies

EXPLORATION

What is the ethical dilemma? Should we regulate FinTech companies the same way we do traditional banking and financial institutions, knowing that this could potentially slow down and/or hamper their development?

What is the content? Some countries require (or will soon require) financial institutions to disclose to regulators how the projects and investments they are working on or financing impact the environment. Assuming FinTech companies are regulated in those countries, will they also be required to provide this information?

A related discussion is just beginning in Chile (in Spanish), and there is a relevant comparable example in European Union legislation.

A similar dilemma could be seen in any industry where extending the taxing regulations placed on incumbent corporations to newcomers, for the sake of fair industry regulation, could hinder new technologies that have a positive environmental impact.

Subjecting FinTech to the same regulation as traditional banks, especially regarding industry requirements and assessments, may impede their ability to compete and provide innovative solutions.

What technologies or types of data usage are involved? FinTech in general.

Technology that uses personal data as collateral in finance allows people who would not normally have access to financial services to get credit cards, loans, investments, etc.

Financial laws and regulations, Regulatory Technology (RegTech).

What is the application? What drives this use in this case? FinTech companies in Chile generally set low requirements for whom they provide their services to. In some cases, people need only a valid ID or a minimum investment of around 10 USD. With this amount, in a couple of minutes, they can invest in speculative stocks and cryptocurrencies, make and receive online and international payments, and crowdfund projects. Why are FinTech companies growing fast in Chile and LATAM? Many people and businesses are left behind by traditional financial institutions, and FinTechs are filling the gap to a certain extent:

  • Small businesses can opt for card (“plastic”) and online transactions.
  • People who would normally not have access to financial services (the poor, uneducated, women, immigrants) suddenly find themselves with multiple options for online-only credit cards and prepaid cards that work like credit cards, enabling online and international purchases with monthly installments not otherwise available.
  • People who want to invest in stocks or other financial instruments (traditional or not) have new options with low commissions.

Application of industry regulation and regulatory technology to financial services that deploy digital solutions to provide financial inclusion. This can be driven by the need to meet industry best practices, e.g., licensing that may restrict operations to certain locations and other implicit cost requirements.

What ethical issues are at play here? Should we hold FinTech companies, which help finance and develop small and medium businesses and people otherwise left out of the banking system, to the same environmental impact standards we expect from traditional banks and financial institutions?

Regional variations: Inequitable application of regulation in financial services.

What group of people are at risk? What group of people might gain? Small and medium companies and their customers, including those left out of the banking system or looking to invest where traditional institutions do not want to invest.

What is the wider impact of this dilemma?

  • Unfair treatment of competing companies in the same market.
  • Hampering the development of an industry that might end up helping small companies and people left out of the financial sector.
  • Lack of regulations and financial institutions is never a good combination (in my opinion and political view).
  • Reduction of competitive power of FinTech. Limiting the potential of FinTech solutions that may directly impact financial inclusion gains.

What are the cultural aspects important for this dilemma? I think the debate over whether to regulate or not will be more political than cultural, but cultural differences and subjective experiences will influence how we tackle this dilemma. Let’s begin the conversation.

  • How might different cultures view the role of the finance industry or large businesses in general (are they obligated to care for smaller entities or developing economies, and how much regulation should they be under in the first place)?
  • The role of the private vs. public sector in different cultures (some countries may rely on the private sector to address social issues more than the government).
  • The state of the environment in a particular jurisdiction, and what takes precedence politically between innovation and development or environmental protection.

Regional variations: Some Nigerian FinTech categories are required to obtain licenses, which may group them into categories that may limit their operations. An example is the need to obtain a Microfinance banking license for certain operations, which also has financial implications.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

  • Give some kind of regulatory exemption to companies that focus exclusively or extensively on what we consider a service that helps improve society (this creates a new problem of defining and determining what is desirable), particularly regarding the environment and climate change problems we face. Once a company no longer meets the requirements, it is no longer exempt.
  • Implement for a few years, and if it’s not working as intended, it could be reformed, backtracked, or modified as necessary.
  • Could internal regulation or FinTechs/banks following a strict code of ethics replace government regulation?
  • Consumer-led demand for innovative solutions that also prioritize environmental care.
  • A regulatory body or unit within existing regulatory authorities dedicated to promulgating regulations specific to FinTech might be acceptable.

What are other observations and conclusions for a solution?

To be filled after further discussion.

Which regulations are in question, and how can they encourage FinTechs to help small/medium businesses or developing economies?

Regional variations: Regulation is good, especially in finance, but it does not have to be homogeneous. Traditional banking regulations need not apply to FinTech; rather, specific regulations should be developed to fit FinTech operations.

General Comments/Reactions: FinTechs provide financial services, and in some cases, the volumes become significant. Economic activities in any state require regulations, especially in an era where digital finance fraud and other security concerns are prevalent. Disregarding or antagonizing regulatory authorities would not be good for a budding industry (e.g., Binance vs. Nigerian Government), nor would a heavy-handed approach. Specific regulatory considerations tailored to the sector seem like a win-win.

LC-CC3:  Greenwashing: Risks and opportunities

EXPLORATION

What is the ethical dilemma?

Do banks need to be controlled for “greenwashing” and be forced to report data?

What is the content?

As environmental awareness increases, people are paying more attention to the companies they invest in. Some banks have started reporting on sustainability. However, without laws to define which areas and how a company can be considered sustainable, implementing truly “green policies” is arbitrary and favors “greenwashing.” Greenwashing exploits confirmation bias and cognitive dissonance to maintain the status quo. For example, if a person considers themselves attentive to climate change and their bank claims to be sustainable because it uses renewable energy in its offices, that person will probably believe it without delving into whether the bank invests in fossil fuels or unsustainable companies. People tend to deny conflicting information if it contrasts with their values or previous information. Thus, small evidence of sustainability could be enough to ignore unsustainable actions. These aspects can affect both clients and financiers, leading to poor decision-making in green investments, even without bad intentions.

What technologies or types of data usage are involved?

AI and blockchain technologies; ESG (Environmental, Social, and Governance) reporting could be supported by AI.

What is the application? What drives this use in this case? AI could be used to determine a company’s real environmental impact, requiring honest reporting. It could also create a uniform reporting standard that would be easy for banks to use and for people to interpret. Blockchain technology could help track investments to improve transparency.

What ethical issues are at play here? The lack of standard reporting regulations for green investments and green policies favors greenwashing. Greenwashing could harm financiers, clients, and companies by preventing real actions against climate change. Creating a standard ESG reporting system helped by blockchain and AI technologies could address this issue. However, AI programmers need to be aware of cognitive biases, and international criteria to distinguish between sustainable and unsustainable practices must be defined and implemented.

What group of people are at risk? What group of people might gain? Smaller companies may be burdened if a single approach is used to determine the environmental impact of every company. More ecological businesses might gain in funding.

What is the wider impact of this dilemma? It would lead to a transition to more sustainable business development.

What are the cultural aspects important for this dilemma? Traditionalism and the tendency to favor known companies or institutions that use greenwashing could also be cultural problems preventing the tracking of real sustainable actions. Automated standard report systems could help in this regard.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution? Could AI help in reporting real and complete data about banks’ sustainability? Yes, if correctly programmed.
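
As a toy illustration of the idea (not a real ESG tool), the sketch below flags sustainability statements that use green buzzwords without any quantified commitment. The word lists, regular expression, and sample statements are invented; a production system would need far more sophisticated NLP, plus the international sustainability criteria discussed above.

```python
# Toy greenwashing screen: flag statements that use green buzzwords
# without a quantified commitment. Word lists are invented examples.
import re

BUZZWORDS = {"sustainable", "green", "eco-friendly", "responsible"}
# A quantified claim: a percentage figure or a target year.
QUANTIFIED = re.compile(r"\d+(\.\d+)?\s*%|by\s+20\d\d", re.IGNORECASE)

def looks_like_greenwashing(statement: str) -> bool:
    words = set(re.findall(r"[a-z-]+", statement.lower()))
    return bool(words & BUZZWORDS) and not QUANTIFIED.search(statement)

statements = [
    "Our offices are green and eco-friendly.",      # flagged
    "We will cut financed emissions 40% by 2030.",  # not flagged
]
for s in statements:
    print(looks_like_greenwashing(s), "-", s)
```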

In what conditions could this be acceptable? It is acceptable if AI programmers are aware of cognitive bias risks and if international rules to establish sustainability are developed.

What are other observations and conclusions for a solution? This solution could help mitigate climate change by forcing institutions and companies to be more transparent and favoring real green investments. This, in turn, could necessitate lifestyle changes. For example, investing in sustainable companies might mean individuals should give up buying fast fashion, eating meat, or using a car every day. It would be necessary to prepare individuals for these changes and involve them more in climate change topics. Construal Level Theory from psychology might help; it explains how people engage with future events and what elements are involved in psychological distance: geographic (spatial distance between the event/object and the perceiver), temporal (time between the object/event and the perceiver), social (perceived similarities between others and self), and uncertainty (perceived likelihood of an event). Psychological distance elicits more abstract and decontextualized construals, which in turn elicit less behavioral engagement across a range of decision-making contexts. Being aware of these mechanisms and trying to diminish psychological distance might help transition towards a greener economy, strengthening individuals’ interest in transparent data reporting and improving their motivation in changing lifestyles.

Regional variations: This might not be related only to financial institutions, but holding institutions accountable for their responsibilities regarding climate change has produced interesting cases.

General Comments/Reactions

In Africa, the application of this issue is driven by banks’ desire to enhance their public image, attract socially responsible investors, and comply with environmental regulations. However, the driving factors behind greenwashing may include weak regulatory oversight, lack of enforcement mechanisms, and the pursuit of financial gains.

Ethical Issues at Play in Africa: Key ethical issues include:

  • Transparency and Accountability: Banks’ misleading claims about their environmental practices undermine trust and accountability.
  • Environmental Impact: False representations of environmental responsibility may contribute to environmental degradation and harm local communities.
  • Social Justice: Greenwashing diverts attention and resources from genuine sustainability efforts, disproportionately impacting vulnerable communities.
  • Consumer Protection: Misleading consumers about environmental practices violates their right to accurate information and informed decision-making.

Wider Impact of the Dilemma in Africa: The wider impact includes the credibility of sustainability efforts, the effectiveness of environmental regulations, and the achievement of global environmental goals such as the Paris Agreement and the Sustainable Development Goals.

More Resources

  • More about Construal Level Theory and climate change: Jones, C., Hine, D. W., & Marks, A. D. (2017). The future is now: Reducing psychological distance to increase public engagement with climate change. Risk Analysis, 37(2), 331-341.
  • More about ESG: Avramov, D., Cheng, S., Lioui, A., & Tarelli, A. (2021). Sustainable investing with ESG rating uncertainty. Journal of Financial Economics.
  • World Bank Climate Change Knowledge Portal
  • Intergovernmental Panel on Climate Change (IPCC)
  • Greene, L. A. (2000). EHPnet: United Nations Framework Convention on Climate Change. Environmental Health Perspectives.

LC-CC5:  Role of sustainability among FinTech companies

EXPLORATION

What is the ethical dilemma? What is the impact of FinTech on the greening of the economy? Are innovative financial startups interested in climate change?

What is the content? FinTech companies can improve and foster capital flows into sustainable investments. They can also mobilize more capital to serve this purpose, especially from retail consumers. However, access to such innovative financial solutions in Poland is still limited. The ethical dilemma: is climate change an unimportant issue for these innovative players?

What technologies or types of data usage are involved? All types of technologies used in areas where FinTech grows in Poland, including advisory, financing, and investing.

What is the application? What drives this use in this case?

What ethical issues are at play here? How and if the sustainability agenda is incorporated into the business models of FinTechs.

What group of people are at risk? What group of people might gain? FinTech has an impact along the entire value chain of financial services. It can mobilize private capital to achieve sustainable goals, offer business partners dedicated tools to act in a sustainable manner, and support and popularize green financing.

What is the wider impact of this dilemma? Without the engagement of innovative financial actors, achieving the necessary level of investments in the green economy might not be possible. This is especially important in mobilizing capital currently in the hands of retail consumers. Climate FinTech can help achieve the net-zero carbon goal by providing incumbent financial institutions with solutions that allow them to effectively support green transformation. This dilemma questions how pervasive green FinTech can be in startup companies and business models. Personally, I believe it is about balance. When starting a business or designing models, we need to decide the level we want to achieve. Quantifying this level helps in building a model. For example, we might want 70% profit and 30% good things for the future ecosystem.

What are the cultural aspects important for this dilemma? The lack of green FinTech in Poland might result from both a lack of sustainability-oriented organizational culture and a relatively weak culture of sustainability in Polish society.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

  • Establish a dedicated forum for the promotion of Green FinTech in Poland.
  • Establish principles for Green FinTech.
  • Develop regulatory solutions to stimulate the development of green tech in Poland.

In what conditions could this be acceptable?

What are other observations and conclusions for a solution? Research on green FinTech in Poland is very limited, possibly due to the slow development of such companies.

General Comments/Reactions

In the African context, where climate change poses significant challenges, the role of FinTech in addressing environmental sustainability is crucial. Many African countries are vulnerable to the impacts of climate change, such as extreme weather events, water scarcity, and agricultural disruptions. Therefore, FinTech solutions that promote environmental sustainability can have significant positive impacts, such as facilitating access to green financing for renewable energy projects, promoting sustainable agricultural practices, or enabling efficient water management.

However, there are also challenges and considerations specific to Africa. These include issues related to infrastructure, digital literacy, regulatory frameworks, and cultural norms. FinTech companies operating in Africa must navigate these complexities while ensuring that their solutions are accessible, inclusive, and aligned with local environmental priorities.

Overall, integrating climate change considerations into FinTech business models is essential for promoting environmental sustainability in Africa and globally. It requires a holistic approach that considers both the ethical implications and practical challenges of leveraging technology for environmental impact mitigation.

In Africa, similar challenges and opportunities exist regarding the adoption of green FinTech solutions. The lack of sustainability-oriented organizational culture, limited awareness of climate change issues, and regulatory constraints may hinder the growth of green FinTech. However, Africa also presents opportunities for innovative FinTech startups to address climate change, promote sustainable development, and support the transition to a green economy through targeted initiatives and partnerships.

LC-CC6:  Sustainability data transparency

EXPLORATION

What is the ethical dilemma? Is the climate impact of organizations being self-reported consistently across industries and geographies? What are the ethical, cultural, and technological implications of the observed data? Should institutions declare the impact of their activities on the environment? To what extent do they do this?

What is the content? Modern business infrastructure has tremendous externalities that are seldom discussed, including the costs of data warehousing and of running technical infrastructure. Research on governance declarations has shown not-so-optimistic results, with the Alliance for Corporate Transparency finding that just over a third of analyzed businesses declare specific policies to tackle their climate impact.

What technologies or types of data usage are involved? AI, including deep convolutional neural networks and machine learning, Big Data, Blockchain, data integration, and rapid model deployment.

What is the application? What drives this use in this case? Applications of this research include the analysis of environmental, social, and governance (ESG) performance and its reporting in different geographies, which can produce a comprehensive impact report.

What ethical issues are at play here? Ethical issues include considerations about activity declarations and energy sourcing.

What group of people are at risk? What group of people might gain? Organizations that are not transparent about their impact risk having unflattering realities uncovered. Green-oriented companies might gain recognition.

What is the wider impact of this dilemma? The impact concerns rarely discussed aspects of sustainability in the industry and the overarching green-economy transition, which needs to encompass more aspects of the digital economy.

What are the cultural aspects important for this dilemma? The scope of transparency and willingness to publish specific initiatives is greatly affected by cultural and legislative contexts.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution? In the EU, the CSRD (Corporate Sustainability Reporting Directive) is coming into force. While the concepts behind these policies are clearly essential, are the directives safe from greenwashing, can they be gamed, do they have loopholes, and can they even be enforced in all member states?

In what conditions could this be acceptable? Some lack of transparency can be expected in areas involving industry secrets, but initiatives have to be verified via independent auditing.

What are other observations and conclusions for a solution? There are few initiatives monitoring the development of industry greenwashing and climate impact reporting. This dilemma and follow-up research aim to uncover part of the current state of the world in this area and highlight best practices that can bring us closer to a sustainable future.

General Comments/Reactions

This dilemma is unique in not focusing on Information and Communication Technologies but rather on standards associated with climate. The issue of greenwashing points to the dilemma of establishing standards that well-financed industry actors use as a public relations tool rather than an instrument to help reduce their negative externalities. This dilemma can be enhanced by exploring how the prevalence of big data on one hand and the increased capabilities of analyzing it on the other hand can empower industry actors collectively, government regulators specifically, civil society collaboratively, and academia empirically to tackle the challenge of greenwashing.

LC-CC7:  Sustainability incentives, opportunities and challenges

EXPLORATION

What is the ethical dilemma? Should companies be incentivized by the carbon offsetting market? Does carbon offsetting incentivize companies to invest in tech innovation?

What is the content? Asymmetric information creates several incentive problems, including adverse selection and moral hazard, in offset markets. The moral hazard, or perverse incentive, problem stems from the fact that emissions baselines are not only the private information of firms but can also, in some cases, be influenced by those firms. In the offset context, this can take two forms. Firms (or countries) could actively pursue investments in high-carbon sources, intending to earn offset payments for subsequently dropping those investments. Alternatively, firms or countries could delay investments that would lower emissions from existing sources with the same intention.
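A minimal arithmetic sketch of this perverse incentive, with hypothetical numbers: because credits are typically awarded against a self-reported baseline, padding the baseline pads the payout without any additional real abatement.

```python
# Illustrative arithmetic (hypothetical numbers): offset credits are
# awarded as the claimed baseline minus measured emissions, so inflating
# the baseline inflates the credited "reduction".

def credited_reduction(baseline_emissions: float, actual_emissions: float) -> float:
    """Credits earned = claimed baseline minus measured emissions."""
    return max(baseline_emissions - actual_emissions, 0.0)

actual = 80.0  # tonnes CO2e actually emitted after the project

honest_baseline = 100.0    # what emissions would truly have been
inflated_baseline = 130.0  # baseline padded by delaying planned abatement

print(credited_reduction(honest_baseline, actual))    # 20.0 tonnes credited
print(credited_reduction(inflated_baseline, actual))  # 50.0 tonnes credited
# Same real-world emissions, 2.5x the offset payments.
```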

What are the technologies or types of data usage involved? Location intelligence, machine learning with satellite imaging, and remote sensing.

What is the application? What drives this use in this case? With carbon offset markets, companies are pushed to invest in green technologies or environmental projects to compensate for their carbon emissions activities. The market is supposed to put “a price” on carbon emission, allowing companies to “pay off” the negative externalities of their activities.

This dilemma points out two ideas:

  1. The way carbon offset markets work creates perverse incentives.
  2. It can promote opportunistic behavior by companies.

It would be interesting to differentiate two types of companies: those highly dependent on fossil fuels and high-tech companies. A company in the transport industry might have to compensate more than a high-tech company already developing green technologies. It would be interesting to mention greenwashing as well.

What ethical issues are at play here?

What group of people are at risk? What group of people might gain? People at risk are those who can suffer from the consequences of climate change. There are also highly polluting companies located in regions where people are directly exposed to pollution, while the company invests in green projects in other regions instead of curbing its activities where it is already polluting. Think of companies producing clothes that use large amounts of fossil fuels and chemicals that pollute rivers and soil in one region, while investing in tree-planting projects in another.

What is the wider impact of this dilemma? Some environmental projects that companies invest in have led to social ethical dilemmas of their own. For example, the carbon offset company Green Resources ran forestry plantations in Uganda that led to violence and human rights violations directed at the local community. Similar abuses of power and oppression are occurring in Latin America and Asia, affecting developing countries.

What are the cultural aspects important for this dilemma?

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution? “Paying for your sins” might be acceptable only if the money goes to good causes. The solution could be:

  1. Reporting should be done by an international body that does not favor any company or country.
  2. If a company is caught cheating, the punishment should not be limited to a fine; its trading activities should be suspended until the violations are remedied. (As the previous example shows, simply paying money is easy for companies or countries with large economies.)

Regional variations: There are no real actions in Turkey regarding this.

General Comments/Reactions

This is a well-structured dilemma that explores climate change from the perspectives of both the market economy and the digital economy. It touches on aspects raised in an earlier case (CC6), information asymmetry (PA5), and digital literacy (PA1). There is an opportunity to explore this area using the UN Guiding Principles on Business and Human Rights (UNGPs) as explored through domestic instruments like the National Action Plan in Kenya. There is research on using such instruments in the context of regulating AI, which offers useful precedents in a case like this.

LC-CC8:  Double counting of carbon credits

EXPLORATION

What is the ethical dilemma? Carbon credits double counting – a situation in which the buyer of the credit claims the emission reduction, while the country where the reduction occurs also claims the emissions reduction.

What is the content? Carbon credits double counting distorts investment decisions of insurance companies and institutional investors aiming to support market players setting the same targets for a transition towards carbon neutrality.

What are the technologies or types of data usage involved? Digital monitoring, reporting, and verification (MRV), global GHG registry systems, and blockchain technology. Given the lack of transparency along the supply chain, blockchain is the most promising technology for addressing the challenge of double counting.

What is the application? What drives this use in this case? Insurance companies and institutional investors invest in long-dated government bonds. Counting emissions at the country level is challenging due to a lack of transparency and reporting standards in the offsetting markets, resulting in the double-counting problem. Since investors support market players that share the target of reaching net zero, they need reliable data on which to base investment decisions. Double counting distorts the data so that investors cannot see real improvements in the emissions of their portfolios. If more emissions are reported due to double counting, divestment decisions can follow. Both governments and investors are therefore worse off: the former receive less capital for projects like infrastructure spending, and the latter miss opportunities for returns on a truly green portfolio.

What ethical issues are at play here? Whether and how investment decisions should account for the double-counting problem when deciding to invest in or divest from countries and other market players that underestimate or overestimate their carbon footprints.

What group of people are at risk? What group of people might gain?

  • Large institutional investors and funds aiming to invest in market players that set the same targets for reducing their carbon footprint.
  • Governments that get less financing due to double counting of carbon units.
  • Companies that suffer from double counting due to a lack of transparency along the supply chain.

What is the wider impact of this dilemma?

  • Distorted investment decisions leading to sub-optimal allocation of capital among market players setting carbon emission targets.
  • Slower energy transition.
  • Financing of high carbon dioxide emitting countries, industries, and other market players reporting lower emissions.

What are the cultural aspects important for this dilemma?

Regional variations: Nigeria: The government currently pursues a carbon tax as an alternative to an emissions trading scheme (ETS) in which carbon credits exist. An emissions trading scheme requires a functional MRV system, which is currently being pursued. 11% of global carbon credits were issued from Africa; the Verified Carbon Standard (VCS) and Gold Standard are tools used to support African carbon trading.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

  • Establish a robust and transparent reporting system that ensures accurate national GHG emission accounting.
  • Clarify which entities are entitled to report achieved emission reductions.
  • Establish a system where the right for emission reductions is transferred from one entity to another.
  • Control the reporting of the same achieved emissions on multiple levels (national, sub-national).

Blockchain is a promising technology for verifying traded emissions to mitigate double counting. Adoption of offset credits generated from technologies such as direct air capture (DAC), carbon capture and storage (CCS), and bioenergy with carbon capture and storage (BECCS) would provide prospects for applying emerging digital technology in the carbon markets. Ethical practice requires that host entities selling carbon credits do not also count those traded emissions toward their own or third-party mitigation pledges (see the registry sketch below).
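To make the registry controls above concrete, here is a minimal sketch of the single-claim logic they describe. This is not a real MRV, registry, or blockchain implementation; all class and entity names are hypothetical.

```python
# Minimal sketch of a carbon-credit registry: every credit has a unique
# ID, exactly one current owner, and can be retired (claimed) only once.

class CarbonCreditRegistry:
    def __init__(self):
        self._owner: dict[str, str] = {}       # credit_id -> current owner
        self._retired_by: dict[str, str] = {}  # credit_id -> claimant

    def issue(self, credit_id: str, owner: str) -> None:
        if credit_id in self._owner:
            raise ValueError(f"{credit_id} already issued")
        self._owner[credit_id] = owner

    def transfer(self, credit_id: str, seller: str, buyer: str) -> None:
        if self._owner.get(credit_id) != seller:
            raise ValueError("seller does not hold this credit")
        if credit_id in self._retired_by:
            raise ValueError("credit already retired")
        self._owner[credit_id] = buyer  # seller loses the claim entirely

    def retire(self, credit_id: str, claimant: str) -> None:
        """Claim the reduction toward a target; allowed exactly once."""
        if self._owner.get(credit_id) != claimant:
            raise ValueError("only the current owner may claim")
        if credit_id in self._retired_by:
            raise ValueError("double counting: reduction already claimed")
        self._retired_by[credit_id] = claimant

reg = CarbonCreditRegistry()
reg.issue("CR-001", "HostCountry")
reg.transfer("CR-001", "HostCountry", "InvestorFund")
reg.retire("CR-001", "InvestorFund")
# reg.retire("CR-001", "InvestorFund")  # would raise: reduction already claimed
# reg.retire("CR-001", "HostCountry")   # would raise: no longer the owner
```

The design point is that retirement is a one-time, ownership-gated event, so the same reduction cannot be counted by both the buyer and the host.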

In what conditions could this be acceptable? Currently, the application of corresponding adjustments is required by Article 6 of the Paris Agreement when cross-country trade in emissions is counted toward Nationally Determined Contributions (NDCs).

What are other observations and conclusions for a solution?

MRV and GHG registry systems are initiatives to automate the carbon credit chain, ensuring transparency, increasing efficiency, and improving the accuracy of emission reduction data. Blockchain is a solution for improving GHG system integrity and efficiency.

General Comments/Reactions

A solution for insurance companies has been proposed in developing third-party risk management tools aimed at de-risking uncertainties regarding verification, double counting, pricing, etc.

More Resources

  • Blockchain: Carbon credits have a double-spend problem, this Microsoft-backed project is trying to fix that.
  • UNFCCC: The good, the bad, and the blockchain.
  • TechCrunch: Nori is pitching carbon trading on the blockchain.
  • Climate Focus: Double counting and the Paris Agreement.
  • Springer: Double counting in the context of international climate policy.

Aghion, P., Hepburn, C., Teytelboym, A., & Zenghelis, D. (2019). Path dependence, innovation, and the economics of climate change. Handbook on green growth, 67-83.

Broekhoff, D., Gillenwater, M., Colbert-Sangree, T., & Cage, P. (2019). Securing climate benefit: a guide to using carbon offsets. Stockholm Environmental Institute & Greenhouse Gas Management Institute, 60.

https://www.antiersolutions.com/how-is-digital-identity-management-boosting-the-carbon-credit-trading-market-in-2024/

LC-CC9:  Role of Investment Portfolios in Incentivizing Net-Zero Energy

EXPLORATION

What is the ethical dilemma? Should investment decisions be made in favor of carbon-heavy industries?

What is the content? Should investors rule out all carbon-heavy companies from their portfolios because they do not align with the requirements of green portfolios? Investment decisions can be biased against carbon-heavy companies when the decision-making process is based on green portfolio requirements, hence excluding carbon-intensive companies. Net-zero energy transition processes and decision-making policies drive this application.

What technologies or types of data usage are involved? All technologies aiming to accelerate the transition to net-zero. One of the tools used in the stock market for carbon-related investments is the stock market indices, such as the Morgan Stanley Capital International (MSCI) Index.

What is the application? What drives this use in this case? Ruling carbon-heavy industries out of portfolios means less financing for industries that sustain current ways of life. Excluding carbon-heavy industries from portfolios also implies less diversification of risk for investment funds.

What ethical issues are at play here? On one hand, supporting carbon-heavy industries financially ensures a fully functioning current economic system dependent on burning fossil fuels. On the other hand, focusing on supporting the status quo implies an insufficient amount of investment in renewable energy and technologies accelerating the transition to carbon neutrality.

Regional variations: Some Nigerian oil and gas (O&G) stakeholders believe that although Africa currently contributes only 4% of global carbon emissions, it previously provided the carbon raw materials used in developed countries. Refusing to fund carbon-intensive industries at a time when the region seeks to use these resources to aid local development is seen as biased.

What group of people are at risk? What group of people might gain? Investment funds with portfolios consisting of market players operating in carbon-heavy industries. Carbon-heavy industries that are relevant for sustaining current life.

What is the wider impact of this dilemma? Supporting the status quo, and with it the climate crisis, or accelerating the transition towards carbon neutrality at the expense of current standards of living. A lack of funding would stop the region from achieving the energy transition, as the negative impacts would overshadow economic development. Such decisions may feed transition risk, negatively impacting existing investments in carbon-intensive securities (Cosemans & Schoenmaker, 2022). The West African region is largely a natural-resource-dependent economy; hence increased job losses, increased risk of demand and supply challenges, and the feeding of monopolistic tendencies among regional companies that have achieved green portfolio requirements are some of the wider implications.

What are the cultural aspects important for this dilemma? Energy transition is an approach that is favorable to Africa. Investment bias against carbon-intensive companies may lead to a reduction of funds to a region that is already underfunded in terms of infrastructure, thus amplifying the implications and impacts mentioned above.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution? Investor portfolios can set decision-making policies that favor carbon-intensive companies with technology investment programs targeted at reducing carbon emissions. Although such companies are rated as carbon-heavy, they are also actively investing in and implementing innovations that tackle environmental challenges related to natural resource consumption. Such programs have been shown to reduce yearly emission growth rates (Murshed, 2024). A screening sketch follows below.
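A minimal sketch of such a screening policy, with hypothetical companies, thresholds, and units: rather than excluding every carbon-heavy firm, it admits those committing a meaningful share of capital expenditure to decarbonization.

```python
# Illustrative transition-aware screen (all data and thresholds assumed):
# admit low-carbon firms, and carbon-heavy firms with a funded transition
# program, instead of a blanket exclusion of carbon-intensive companies.

companies = [
    {"name": "GridCo",   "carbon_intensity": 820, "transition_capex_pct": 0.25},
    {"name": "FastOil",  "carbon_intensity": 950, "transition_capex_pct": 0.02},
    {"name": "SolarTec", "carbon_intensity": 40,  "transition_capex_pct": 0.60},
]

CARBON_HEAVY_THRESHOLD = 500   # tCO2e per $M revenue (assumed unit)
MIN_TRANSITION_CAPEX = 0.15    # share of capex going to decarbonization

def eligible(company: dict) -> bool:
    """Admit low-carbon firms, and carbon-heavy firms investing in transition."""
    if company["carbon_intensity"] < CARBON_HEAVY_THRESHOLD:
        return True
    return company["transition_capex_pct"] >= MIN_TRANSITION_CAPEX

portfolio = [c["name"] for c in companies if eligible(c)]
print(portfolio)  # ['GridCo', 'SolarTec'] - FastOil is excluded
```

This is the phased-intervention idea in miniature: the screen rewards credible transition effort rather than punishing current intensity alone.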

In what conditions could this be acceptable? This could be acceptable where investor organizations base their portfolio decision-making policies on indices of a transition program rather than a strictly green one, thus providing a phased intervention approach.

What are other observations and conclusions for a solution?

Regional variations: Regionally, there is a proposal to secure a just transition plan. The Nigerian Just Energy Transition Plan (JETP) is a financing proposal aimed at assisting developing economies dependent on fossil fuels to transition to green alternatives; JETA also exists for Africa. Where such programs are pursued and implemented, they may help knowledgeable investors soften the bias described above, mitigating its implications and impacts.

LC-CE2:  Regulation of Cryptocurrency

EXPLORATION

What is the ethical dilemma? Regulators globally are taking a greater interest in cryptocurrencies and products linked to them amidst fears that they contribute to fraud and money laundering. There are also concerns that investors are at risk of big losses.

Content: Binance, the world’s biggest cryptocurrency exchange, has been banned by the UK’s financial regulator, the Financial Conduct Authority (FCA), in the latest of a series of global crackdowns on the cryptocurrency market.

Technologies or types of data usage involved: Blockchain technologies.

Application and driving factors: Binance has already had issues in other countries. For instance, in April, German financial regulators warned Binance that it would incur fines for offering digital tokens that track publicly traded companies like Microsoft and Apple. Binance was selling those tokens without publishing an investor prospectus, as required by law – a violation that could invite a penalty of 5 million euros ($6 million). Similarly, in the week of June 21, 2021, Japan’s Financial Services Agency (FSA) warned Binance for the second time in three years that it is operating in the country without permission.

Ethical issues at play:

  1. Many blockchain-based cryptocurrencies enable money laundering and other illegal transactions via anonymous black markets.
  2. In addition, blockchain technology enables further types of illegal and/or immoral transactions by facilitating transactions without intermediaries who can personally be held accountable for them, e.g., assassination markets. Who is to be held accountable?
  3. Another criminal phenomenon is “cryptojacking,” which refers to programs secretly mining cryptocurrencies.

Group of people at risk or who may gain: End users and investors are at risk. Outside hackers and financial institutions might gain.

Wider impact of this dilemma: It might be helpful to offer more examples of instances where ethical issues arise in the way blockchain-based technologies, particularly cryptocurrencies, can directly have critical implications for users and investors. It might also be helpful to understand the logic underpinning these investments and who gains from them, to outline the gravity of the issue. This can help underline the wider impact the dilemma can have on users and on technology-backed financial solutions.

Cultural aspects important for this dilemma: Defining the scope in terms of specific geographies can help here. I recently read a market research excerpt on why people prefer cryptocurrencies over traditional financial products for investments. One of the reasons is the ease these platforms offer compared with the complexity surrounding traditional products. It might be a good idea to highlight how cryptocurrencies are presented and received in different geographies.

SOLUTION-ORIENTATION

Possible controls and comments for a solution: What are your views on the regulation of cryptocurrencies? My understanding is that regulation is fundamentally at odds with the founding idea of blockchain, but then how can security be ensured without regulation or broad regulatory guidelines?

Other observations and conclusions for a solution:

LC-CE3:  FinTech’s disruption of the financial services sector

EXPLORATION

What is the ethical dilemma? A global race to implement AI tools in the financial sector creates a risk that we increasingly deploy such tools without considering the ethical risk. Banks and start-ups are developing and deploying AI-powered technologies without recourse to a standard set of rules or ethics.

Content: Ambitious financial institutions all over the world are embracing FinTech’s disruption of the financial services sector, e.g., HSBC Holdings, Goldman Sachs, and JP Morgan are raising the standards by investing heavily and creating in-residence programmes for FinTech start-ups. In Africa, the AI frenzy is equally catching on. In Nigeria, for example, it is now possible to open accounts, transfer funds, and lodge complaints regarding banking issues using AI-powered bots.

Technologies or types of data usage involved: AI-driven technologies, e.g., AI-powered bots used by several banks in Nigeria.

Application and driving factors: For example, Stanbic IBTC Bank PLC’s Bluebots can perform anti-money laundering checks, credit risk assessments, and cheque confirmations. The Bluebots can also populate Microsoft Excel templates as instructed, launch web browsers, log into secure web pages with their own usernames and passwords, and scrape the web to extract data.

Ethical issues at play:

  1. Some of the banks deploying these technologies neither have elaborate terms and conditions limiting their liabilities in case the technologies go rogue nor do they have insurance policies – comprehensive or otherwise.
  2. Questions on the legal status of robots are also of concern. What happens when the AI bot goes haywire? Who bears liability when sensitive data is stolen, corrupted, or transferred to third parties illegally?
  3. How do you also strike a balance to ensure that regulations also do not stifle innovation and growth on the part of the FinTechs using AI to provide a better service experience for customers?

Group of people at risk or who may gain: General customers.

Wider impact of this dilemma: In Australia, there was a government-implemented debt-collection system called Robodebt. The system was automated, and it often (incorrectly) tried to collect debts from people who never owed any, or where the amount was incorrect. Some people, believing they owed a certain amount of debt because the system said so, ended up harming themselves or even dying by suicide. A Royal Commission into the incident found that the ministers involved were responsible for it.

Cultural aspects important for this dilemma: In Africa, the adoption of AI in the financial sector presents both opportunities and challenges. While AI technologies hold promise for improving financial inclusion and efficiency, ethical considerations must be prioritized to ensure responsible and equitable deployment. Moreover, cultural values, regulatory frameworks, and capacity-building initiatives play crucial roles in shaping the ethical landscape of AI adoption in African financial services.

SOLUTION-ORIENTATION

Possible controls and comments for a solution: Implementing robust ethical frameworks and oversight mechanisms for the use of AI in financial services. Promoting transparency and accountability in data collection and usage practices, including providing clear explanations of data usage in consumer-facing communications. Enhancing consumer education and awareness regarding data privacy rights and implications through targeted initiatives and campaigns. Incorporating diversity and inclusivity considerations into algorithmic decision-making processes to mitigate biases and discrimination.

Conditions for acceptability: Acceptable practices prioritize consumer autonomy, transparency, fairness, and inclusivity in data usage while leveraging data to enhance services and mitigate risks.

Other observations and conclusions for a solution: Efforts to address the ethical implications of data usage in financial services require collaboration between industry stakeholders, regulators, consumer advocates, and educators. Moreover, ongoing research and dialogue are crucial to adapt ethical frameworks to evolving technological and societal dynamics in Africa and globally.

LC-CE4:  AI Ownership

EXPLORATION

What is the ethical dilemma? AI ownership. The development of AI algorithms, like that of other technologies used by the financial sector, is dominated by private Big Tech companies (which have access to massive volumes of data to train algorithms, etc.). Obviously, Big Tech corporations invest in AI to maximise profit. In other words, they capitalise on this know-how, which could eventually lead to the monopolization of AI by big tech (= immense power for big tech). The financial sector uses those algorithms, and the question is: who owns the AI? The major risk is that big tech might take advantage of owning AI and compromise consumers’ data (e.g., discriminate against certain groups, or hoard data and use it in their own interests to maximise profit). The solution might be the transfer of AI to non-profit organisations at a country level, but then we would face the potential issue of big tech being unwilling to develop AI due to the lack of incentives. Hence, the downside of the approach is a potential chilling effect on innovation.

Content: AI-enabled economy, digitalization, monopolistic behaviour. Only 23% of executives have confidence that their systems run in an ethical manner, and nine out of 10 executives say they have witnessed misuse of AI. A few Big Tech companies dominate the AI infrastructure and value chain through investment and technological capabilities. Such control can result in the risk of monopolistic behavior and discriminatory economic tendencies (https://www.capgemini.com/2019/07/retail-in-ai-an-ethical-dilemma/).

Technologies or types of data usage involved: Machine Learning, Artificial Neural Networks (ANNs), Deep Learning, AI foundation models, datasets, cloud infrastructure, semiconductors, and manufacturing capabilities.

Application and driving factors: “Know Your Customer” exercises – screening consumers for credit and loan risks, etc. There are circumstances where an algorithm is used to deny people employment, housing, credit, insurance, or other benefits; a biased algorithm can result in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.

A solid foundation is needed. If a data set is missing information from particular populations, using that data to build an AI model may yield results that are unfair or inequitable to legally protected groups. From the start, one needs to think about ways to improve the data set, design the model to account for data gaps, and – in light of any shortcomings – limit where or how the model is used.

To put it in the simplest terms, under the FTC Act, a practice is unfair if it causes more harm than good. Let’s say your algorithm will allow a company to target the consumers most interested in buying their product. Seems like a straightforward benefit, right? But let’s say the model pinpoints those consumers by considering race, color, religion, and sex – and the result is digital redlining (similar to the Department of Housing and Urban Development’s case against Facebook in 2019). If your model causes more harm than good – that is, in Section 5 parlance, if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition – the FTC can challenge the use of that model as unfair.

The transformative power of AI is increasing its application and usage in different sectors. The foundation models used to train AI require computing capabilities supported by cloud infrastructure, which in turn relies on semiconductors.

Ethical issues at play: The “2 M” issue: monopolisation and monetisation of AI (the biggest tech companies have periodically forced smaller competitors to integrate with their platforms or pressured them to sell out) – a disadvantage for consumers. In a scenario where AI is managed by an independent NGO at the country level (for instance, an AI National Committee), there is a risk that big tech lacks incentives to develop AI (and invest money) or to give its best technology to society and/or the financial sector.

Group of people at risk or who may gain: “The biggest question around AI is inequality, which isn’t normally included in the debate about AI ethics. It is an ethical issue, but it’s mostly an issue of politics – who benefits from AI?” (Jack Stilgoe). Source to read: On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (https://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf). The accumulation of technological, economic, and political power in the hands of the top five players – Google, Facebook, Microsoft, Apple, and Amazon – affords them undue influence in areas of society relevant to opinion-building in democracies: governments, legislators, civil society, political parties, schools and education, journalism and journalism education, and – most importantly – science and research. All intending participants in the AI market – FinTech, cloud computing, AI application, computing machinery, and infrastructure development startups, businesses, and individuals – are at risk. Corporate tech giants and shareholders currently heavily invested in the AI value chain stand to gain.

Wider impact of this dilemma: Robust ethical principles are essential in the future of this rapidly developing technology, but not all countries understand ethics in the same way. This would require changes to cultural norms and new strategies to help navigate a transition to an AI-driven economy. Setting minimum standards for corporate social responsibility reporting would encourage larger, transnational corporations to clearly show how they are sharing the benefits of AI.

Cultural aspects important for this dilemma:

  • Decision control: when profits and shareholder returns drive the paths that determine the use of such a socially applicable technology, rights may be eroded, to the detriment of society.
  • Monopoly: a deterrent to fair competition and to ethical business, industry, and economic development.
  • Innovation development: technology and innovation are at high risk where progress and application are hampered by suspicions of unethical application.
  • Security: such monopolistic concentration may encourage application in the unethical pursuit of the interests of specific groups.

SOLUTION-ORIENTATION

Possible controls and comments for a solution: The data the tech industry collects should be categorised as “an essential and strategic resource”, like air and water. AI, by that logic, should be considered essential and strategic infrastructure, like transportation (a kind of digital public infrastructure). There needs to be an established NGO structure at the country level that would handle some AI from big tech as philanthropy (think CSR): providing free advice to people (like AI lawyers) to help them decide on credit, advise on better interest rates, etc. Of course, risk assessment of people needs to be done as well. The idea, therefore, is to use AI to make financial solutions more accessible and to protect people. It could be done at the country level; I don’t think it is possible globally, but at the country level it is.

Conditions for acceptability: Governments need to develop new, up-to-date forms of technology assessment – allowing them to understand such technologies deeply while they can still be shaped – such as the Government Accountability Office’s technology assessment unit in the USA or the European Foresight Platform (http://www.foresight-platform.eu/). There is a clear need for the development of viable and applicable legislation and policies to face the multifaceted challenges associated with AI, including potential breaches of fundamental ethical principles. Independent research matters: the influence of Google and the so-called GAFA group (acronym for Google, Amazon, Facebook, and Apple) extends far beyond the confines of the companies’ offices. What such tech giants are doing and planning also dictates the global academic research agenda. “It’s hard to find research in large academic and research institutions that isn’t linked to or even funded by big tech companies,” says Alexandros Kalousis of the University of Applied Sciences of Western Switzerland. Given this influence, he finds it crucial to have independent and out-of-the-box voices highlighting the dangers of uncontrolled data exploitation by tech giants. Other conditions: confidentiality, broader privacy rules and data protection, transparency, explainability, inclusiveness, and alignment.

Other observations and conclusions for a solution: Gebru warned that ML/AI models analyse huge amounts of text from the internet, mostly coming from the Western world. This geographical bias, among others, carries the risk that racist, sexist, and offensive language from the Web could end up in Google’s data and be reproduced by the system. Google reacted to the paper by asking Gebru to withdraw it; when she refused, they fired her. Other researchers have also discovered and highlighted the risks of uncontrolled evolution of AI systems. Alexandros Kalousis, Professor of Data Mining and Machine Learning at the University of Applied Sciences of Western Switzerland, says there’s an “elephant in the room”: “AI is everywhere and is advancing at a fast pace; nevertheless, very often developers of AI tools and models are not really aware of how these will behave when they are deployed in complex real world settings,” he warns. “The realisation of harmful consequences comes after the fact, if ever.” The FTC complained against Facebook, alleging that the social media giant misled consumers by telling them they could opt in to the company’s facial recognition algorithm, when in fact Facebook was using their photos by default. The FTC’s recent action against app developer Everalbum reinforces that point. According to the complaint, Everalbum used photos uploaded by app users to train its facial recognition algorithm. The FTC alleged that the company deceived users about their ability to control the app’s facial recognition feature and misrepresented users’ ability to delete their photos and videos upon account deactivation. To deter future violations, the proposed order requires the company to delete not only the ill-gotten data but also the facial recognition models or algorithms developed with users’ photos or videos.

GENERAL COMMENTS/REACTIONS: Regulatory support is usually required to dismantle most monopolistic business practices; however, where big money is involved, lobbying can delay regulation. Applying regulatory support and collaborative policies against competitive and commercial exploitation on multiple levels (global, state, and industry) would go a long way.

MORE RESOURCES:

  • https://www.politico.com/news/magazine/2024/03/13/states-big-tech-ai-00146338
  • https://www.cnbc.com/2023/11/22/ai-is-giving-big-tech-inordinate-power-tech-execs-say.html

LC-CE5:  Balancing profit maximization and customer exploitation

EXPLORATION

What is the ethical dilemma? How can financial institutions balance profit maximization against the prevention of customer exploitation when using algorithmic pricing mechanisms?

Content: While most banks have already been using logic- and rules-based methods to incorporate up-to-date pricing into their decision making, they are now using more sophisticated algorithmic pricing mechanisms.

Technologies or types of data usage involved: Dynamic pricing (sometimes also known as surge, yield, or real-time pricing) generally refers to the practice of dynamically adjusting prices in order to achieve revenue gains while responding to a given market situation with uncertain demand.

Personalized pricing, also referred to as first-degree price discrimination, customized, or targeted pricing, represents a pricing strategy “whereby firms charge different prices to different consumers based on their willingness to pay.” A toy contrast of the two strategies follows below.
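The following sketch contrasts the two strategies just defined; all prices, indices, and willingness-to-pay estimates are hypothetical.

```python
# Toy contrast (hypothetical numbers): dynamic pricing reacts to the
# market; personalized pricing reacts to an estimate of the individual's
# willingness to pay.

BASE_PRICE = 100.0

def dynamic_price(demand_index: float) -> float:
    """Same price for everyone at a given moment; varies with demand.
    demand_index: 1.0 = normal demand, >1.0 = surge."""
    return BASE_PRICE * demand_index

def personalized_price(willingness_to_pay: float) -> float:
    """First-degree price discrimination: price tracks the individual's
    estimated reservation price, set just below it to close the sale."""
    return min(willingness_to_pay * 0.95, BASE_PRICE * 2)

print(dynamic_price(1.3))         # 130.0 - everyone pays this during a surge
print(personalized_price(180.0))  # 171.0 - only this customer sees this price
```

The ethical asymmetry is visible in the second function: the price is derived from inferences about the individual, which the individual never sees.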

Application and driving factors: Banks in Canada, for example, are using new technology to build an agile IT infrastructure that enables customer-centric pricing across channels. Customer activity is captured across channels so that resulting data can be applied in an analytical framework to better understand customers’ needs, expectations, preferences, options and price sensitivity, as well as to predict customer behavior and relationship value.

A large direct bank in Canada recognized the market and business challenges of continuously running mass-market rate promotions, especially in a prolonged period of low interest rates, high household debt, and aggressive competitive pricing. The bank chose to use innovative data analytics to provide unique, relevant offers to specific customers. With greater insight into customer behavior, the bank can now target incentives, rewards, and special offers to customer segments with like attributes, encouraging customers to save and stay loyal.

Ethical issues at play: Hidden discrimination: Algorithms are inevitably “value-laden” rather than neutral decision tools. They often come with certain value judgments baked in that reflect the designers’ and users’ preferences for some values over others (Kraemer et al. 2011). Often, the underlying values of an algorithm remain hidden until a controversy reveals the values embedded in the code. Algorithmic discrimination based on gender, ethnicity, level of education, wealth, or disability might not be readily apparent or purposefully coded, but rather the result of (biased) machine learning, arising even without bad intention on the part of programmers or firms (Bock 2016).

Information asymmetry: The average consumer remains unaware that personal behavioral characteristics are logged and analyzed, allowing for prediction of, for example, income and health status, and giving detailed insights into habits, preferences, and tastes. As a consequence, pricing algorithms can exploit the firm’s informational advantage and silently sort consumers into segments so as to offer individual prices based on factors that remain opaque to the individual. In addition, customers may come to rely on a service, only to be faced with almost prohibitively expensive prices, leaving them with the equally unattractive options of forgoing an essential service or paying an exorbitant price.

Group of people at risk or who may gain: Can you think of a group of people? What about generations (e.g., baby boomers, X, Y, Z); financial stability (e.g., wealthy, loan-dependent, medium income, …); or life status (e.g., students, employees, self-employed, retirees)?

Wider impact of this dilemma: Data privacy infringements may cause distrust, discredit financial institutions, and ultimately lead to financial losses for the business and public outrage over technology adoption.

Cultural aspects important for this dilemma: The application of AI in the provision of financial services tools is in its growth stage in Nigeria; negative experiences would create suspicion and dampen the progress of technology adoption.

SOLUTION-ORIENTATION

Possible controls and comments for a solution: The inclusion of adherence to both moral and ethical standards in data privacy policies of organizations. The monitoring and regulation of organizations involved in data generation and application to ensure privacy protection in data management.

Conditions for acceptability:

Other observations and conclusions for a solution:

Regional variations: See “Privacy outrage causes bank to ditch plans for targeted ads based on customers’ spending habits” (https://www.zdnet.com/article/privacy-outrage-causes-bank-to-ditch-plans-for-targeted-ads-based-on-customers-spending-habits/).

LC-CE6:  Behavioral marketing’s role in influencing addictive online behaviors

EXPLORATION

What is the ethical dilemma? Financial institutions use customers’ past behaviors to develop targeted marketing and advertising efforts that can encourage them to over-shop or form addictive online shopping behaviors. (Tentative framing: addiction generation using digital technologies.)

Content: Behavioral marketing refers to the use of customers’ real-time or past data to create more targeted and personalized marketing tactics.

The data collected may be used for targeted marketing, cross-selling, and even up-selling.

Financial institutions’ use of user data (real-time and online purchase information) to market (targeted marketing, cross-selling, upselling, etc.), encourage, and direct user online behavior.

Technologies or types of data usage involved: These behavioral advertising tactics are often driven by big data, algorithms, and cloud computing. With artificial intelligence, wants and needs can be understood in real time as customers communicate them digitally, and richer profiles can be built more quickly. AI also allows advertisers to anticipate the wants and needs of individuals. AI (cloud computing and algorithm generation) is applied to develop user profiles that serve marketing ends, as in the sketch below.
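A minimal sketch of this profiling loop, with hypothetical events and segment rules: behavioral data feeds a running profile, which then selects the offer a customer sees, including up-sells aimed at the heaviest shoppers.

```python
# Illustrative profiling loop (all fields and rules assumed): real-time
# events update a customer profile, which drives the targeted offer.

from collections import defaultdict

profiles: dict = defaultdict(lambda: {"views": 0, "purchases": 0, "spend": 0.0})

def record_event(customer_id: str, event: str, amount: float = 0.0) -> None:
    """Fold a real-time behavioral event into the running profile."""
    p = profiles[customer_id]
    if event == "view":
        p["views"] += 1
    elif event == "purchase":
        p["purchases"] += 1
        p["spend"] += amount

def pick_offer(customer_id: str) -> str:
    """Target the segment the profile implies."""
    p = profiles[customer_id]
    if p["purchases"] >= 3:
        return "premium up-sell"   # heaviest shoppers pushed to spend more
    if p["views"] > 10 and p["purchases"] == 0:
        return "discount nudge"    # browsers nudged toward a first purchase
    return "generic banner"

record_event("c42", "view")
record_event("c42", "purchase", 59.0)
print(pick_offer("c42"))  # 'generic banner'
```

The ethical concern described in this dilemma sits in the first rule: the customers who already shop the most are precisely the ones shown offers designed to make them spend more.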

Application and driving factors: For example, a leading universal bank in India has hired a third-party advisor to help them target existing salary account holders to increase car loans and generate leads by cross-selling home loans. The firm carried out detailed analysis to create micro-segments of customers with the highest affinity to buy. These buckets made designing target-specific offers easier. As a result of enhanced targeting, lead generation and conversion ratios for car loans were well above the industry standard during the campaign period.

AI profiling is deliberately applied to user data to enhance targeting, generate leads, and improve conversion ratios on products that were not the customer’s original target.

Ethical issues at play: The ability to display personalized commercial practices like advertisements and recommendations such as “you might be interested in this product” might sometimes be beneficial for consumers. However, consumers can be manipulated, deceived, and encouraged to make suboptimal purchases. Consumers themselves are realizing that such targeted advertising is leading them to over-shop or urging addictive online shopping behaviors.

Regional variations: Deliberate manipulation of behavior, using data acquired for other purposes, to achieve profit motives. Consumers can be induced to purchase suboptimal and sometimes unnecessary products.

Group of people at risk or who may gain: People with lower levels of financial literacy. Less technologically savvy and financially literate consumers.

Wider impact of this dilemma: Data privacy infringements may cause distrust, discredit financial institutions, and ultimately lead to financial losses for the business and public outrage over technology adoption.

Cultural aspects important for this dilemma: The application of AI in the provision of financial services tools is in its growth stage in Nigeria; negative experiences would create suspicion and dampen the progress of technology adoption.

SOLUTION-ORIENTATION

Possible controls and comments for a solution: The inclusion of adherence to both moral and ethical standards in data privacy policies of organizations. The monitoring and regulation of organizations involved in data generation and application to ensure privacy protection in data management.

Conditions for acceptability:

Other observations and conclusions for a solution:

Regional variations: See “Privacy outrage causes bank to ditch plans for targeted ads based on customers’ spending habits” (https://www.zdnet.com/article/privacy-outrage-causes-bank-to-ditch-plans-for-targeted-ads-based-on-customers-spending-habits/).

LC-CE7:  Data privacy rights

EXPLORATION

What is the ethical dilemma? Balancing the customer’s right to privacy over their data with the need of financial institutions to collect enough data to optimize their systems. (tentative)

Content: Artificial intelligence has increasingly been used for automated customer support and virtual financial assistance through chatbots and robo-advisors in financial institutions. The use of AI in these instances has changed the traditional mode of communication between customers and banks or FIs from face-to-face conversation to digital communication that leaves recordable digital traces.

Technologies or types of data usage involved:

  • AI-powered smart cameras capable of capturing facial expressions of customers as a form of instant feedback on their banking experience.
  • Automated Customer Support and Virtual Financial Assistance through Chatbots and Robo Advisors.

Application and driving factors: For example, the Royal Bank of Scotland installed AI assistant Luvo in 2013. The technology responds to customers’ queries in general and hands them over to human staff when necessary. Four leading commercial banks in India are using AI in the form of Chatbots in collaboration with FinTech startups to improve the customer experience, improve efficiency, and reduce cost. One step further, in some cases, banks are using AI-powered smart cameras capable of capturing facial expressions of customers to provide instant feedback on their experience.

Ethical issues at play: This can potentially infringe on the customer’s right to privacy over their data if they are unaware of the type of data collected and how it will be used or transmitted. It is also legally unclear who owns that data and whether customers can request that it be deleted.

An ethical dilemma may arise when financial institutions need to balance their customers’ right to privacy with the need to collect enough data to optimize their systems. Given that the banking and financial services industry largely operates on trust and confidence, financial institutions might also not be inclined to discuss these potential data privacy issues with their customers at length.

Even if financial institutions include such data use clauses in their privacy policies, future issues might arise when customers simply give their consent without reading the lengthy privacy statements. For example, a customer may feel aggrieved when the consequences of such data sharing materialize.

Group of people at risk or who may gain: People with lower levels of digital literacy may be more affected by this ethical dilemma, as they may not be aware that their digital interactions with the bank are collected and that the data can be used for future purposes.

Wider impact of this dilemma:

Cultural aspects important for this dilemma: Data regulatory policies like GDPR.

SOLUTION-ORIENTATION

Possible controls and comments for a solution: Ensure customers are well aware that their data is collected, the extent of the collection, and how it is used, and that they can request that their data be anonymized or deleted.

Conditions for acceptability:

Other observations and conclusions for a solution:

Regional variations: GDPR and other regulatory policies should be written in simple language (easy for regular customers to understand). Customers must have a choice about what they share with the organization. Digital literacy should be taught in schools so that people are not suspicious of AI.

LC-CE8:  Power imbalances between industry and consumers

EXPLORATION

What is the ethical dilemma? Large amounts of consumer data may give the financial service providers an unfair advantage over the consumer.

The service provider may know substantially more about the consumer than the consumer does about the service provider. Also, the provider may be able to make accurate predictions of personal aspects such as habits and interests. This could cause major consumer vulnerability concerns or issues.

Furthermore, the previous aspects also relate to the additional issue of consent. For instance, even if banks explained all of their potential actions and use cases in, say, the firm’s terms and conditions, the question remains whether consumers would be able to truly understand the implications of their data being gathered, shared, aggregated, and analyzed.

Content:

Technologies or types of data usage involved:

Application and driving factors:

Ethical issues at play: Even if the financial service providers wanted to inform their customers about how their data is being used, it may be difficult if not impossible to actually do so. That is because the data is run through multiple layers of machine learning involving a vast number of variables, and the deeper the machine learning goes to detect patterns, the harder those patterns are to explain. One partial remedy is sketched below.
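One partial remedy, sketched here under strong simplifying assumptions, is permutation importance: measure how much a model's accuracy drops when each input feature is shuffled. The "model" below is a stand-in and the features are hypothetical, but the technique itself applies to opaque models.

```python
# Illustrative explainability check (hypothetical model and data):
# shuffle one feature at a time and observe the accuracy drop.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # features: [income, age, noise]
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)  # outcome driven mostly by income

def model_predict(X: np.ndarray) -> np.ndarray:
    """Stand-in for an opaque trained model (here: the true rule itself)."""
    return (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)

baseline_acc = (model_predict(X) == y).mean()
for j, name in enumerate(["income", "age", "noise"]):
    X_shuffled = X.copy()
    rng.shuffle(X_shuffled[:, j])  # break the link between feature and outcome
    drop = baseline_acc - (model_predict(X_shuffled) == y).mean()
    print(f"{name}: accuracy drop {drop:.3f}")
# Large drop for 'income', small for 'age', ~0 for 'noise': a coarse but
# communicable account of what actually drives the model's decisions.
```

Such feature-level summaries do not fully open the black box, but they give institutions something honest to tell customers about which inputs matter.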

Group of people at risk or who may gain: Financial services consumers in general. Financial service providers themselves.

Wider impact of this dilemma: Despite the opportunities of AI, there are constraints around how much you can do with such systems if you cannot understand how they work. For instance, consumer lending raises prominent concerns about the biases a system could perpetuate (data collection may have been intentionally or unintentionally biased).

Cultural aspects important for this dilemma:

SOLUTION-ORIENTATION

Possible controls and comments for a solution: Algorithms could be used as an assisting hand but, ultimately, a human may be necessary to make the final decision.

Furthermore, part of the solution could be to omit certain variables and create an inclusive data set, which ensures that everyone is represented.

Another way to approach these issues may be the implementation of an ethical framework for the use of AI and/or ethics committees. “Ethics by design” may also be a useful approach, as it is difficult to retrofit ethical or regulatory considerations.

Possible solutions to address the ethical issues include: Implementing robust ethical frameworks and oversight mechanisms for the use of AI in financial services. Promoting transparency and accountability in data collection and usage practices, including providing clear explanations of data usage in consumer-facing communications. Enhancing consumer education and awareness regarding data privacy rights and implications through targeted initiatives and campaigns. Incorporating diversity and inclusivity considerations into algorithmic decision-making processes to mitigate biases and discrimination.


GENERAL COMMENTS/REACTIONS: In Africa, where digital literacy levels vary, individuals with lower digital literacy may be at a higher risk of exploitation. Cultural aspects, including attitudes towards privacy and technology adoption, may influence responses to data privacy concerns.

Wider Impact: The wider impact of this dilemma includes erosion of consumer trust, perpetuation of inequalities, and potential legal ramifications. Compliance with data regulatory policies, although essential, may be challenging to achieve and enforce across diverse consumer bases.

Conditions for acceptability: Acceptable practices prioritize consumer autonomy, transparency, fairness, and inclusivity in data usage while leveraging data to enhance services and mitigate risks.

Other observations and conclusions for a solution: Efforts to address the ethical implications of data usage in financial services require collaboration between industry stakeholders, regulators, consumer advocates, and educators. Moreover, ongoing research and dialogue are crucial to adapt ethical frameworks to evolving technological and societal dynamics in Africa and globally.

MORE RESOURCES: “Ethics Guidelines for Trustworthy AI” by the European Commission

LC-CE9:  Risks of proliferating customers data and AI decision-making

EXPLORATION

What is the ethical dilemma? Balancing the need for data collection for effective AI deployment in financial services with the potential for misuse of customer data.

Content: The financial sector is increasingly relying on AI to drive decision-making processes, enhance customer experiences, and improve operational efficiencies. However, the collection and use of vast amounts of customer data by AI systems raise significant ethical concerns.

Technologies or types of data usage involved:

  • AI-driven analytics
  • Machine learning models
  • Customer data collection and analysis

Application and driving factors: AI systems are used to analyze customer behavior, predict financial trends, personalize services, and detect fraud. The driving factors include the need for competitive advantage, customer satisfaction, and regulatory compliance.

Ethical issues at play: The ethical issues include potential biases in AI algorithms, the risk of data breaches, lack of transparency in data usage, and the potential for misuse of customer data for purposes beyond their consent.

Group of people at risk or who may gain: Customers whose data is collected and analyzed by financial institutions are at risk. Financial institutions may gain from improved decision-making and customer satisfaction.

Wider impact of this dilemma: The wider impact includes erosion of customer trust in financial institutions, potential legal and regulatory consequences, and the perpetuation of biases and inequalities in financial services.

Cultural aspects important for this dilemma: Different regions have varying levels of data protection regulations and customer awareness about data privacy, which can influence the implementation and perception of AI in financial services.

SOLUTION-ORIENTATION

Possible controls and comments for a solution: Implementing ethical frameworks and guidelines for AI use in financial services. Ensuring transparency and accountability in data collection and usage practices. Incorporating customer consent mechanisms and data protection measures. Regularly auditing AI systems for biases and discriminatory practices.

Possible solutions to address the ethical issues include: Implementing robust ethical frameworks and oversight mechanisms for the use of AI in financial services. Promoting transparency and accountability in data collection and usage practices, including providing clear explanations of data usage in consumer-facing communications. Enhancing consumer education and awareness regarding data privacy rights and implications through targeted initiatives and campaigns. Incorporating diversity and inclusivity considerations into algorithmic decision-making processes to mitigate biases and discrimination.

Conditions for acceptability: Acceptable practices prioritize consumer autonomy, transparency, fairness, and inclusivity in data usage while leveraging data to enhance services and mitigate risks.

GENERAL COMMENTS/REACTIONS: In Africa, where digital literacy levels vary, individuals with lower digital literacy may be at a higher risk of exploitation. Cultural aspects, including attitudes towards privacy and technology adoption, may influence responses to data privacy concerns.

Wider Impact: The wider impact of this dilemma includes erosion of consumer trust, perpetuation of inequalities, and potential legal ramifications. Compliance with data regulatory policies, although essential, may be challenging to achieve and enforce across diverse consumer bases.

Other observations and conclusions for a solution: Efforts to address the ethical implications of data usage in financial services require collaboration between industry stakeholders, regulators, consumer advocates, and educators. Moreover, ongoing research and dialogue are crucial to adapt ethical frameworks to evolving technological and societal dynamics in Africa and globally.

MORE RESOURCES: “Ethics Guidelines for Trustworthy AI” by the European Commission

LC-CE10:  Regulatory gaps

EXPLORATION

What is the ethical dilemma? Regulatory Issue – So far, regulators have offered only limited guidance. Until now, most contributions have taken the form of information and recommendations rather than rules or standards.

Two Reasons:

  1. Governments don’t want to stifle innovation.
  2. The uncertainty of how AI will evolve (no dominant design yet).

Issue: Financial service providers have to establish and rely on their own ethical standards for their use of AI. However, this regulatory gap leaves a lot of room for pitfalls and negative implications. For instance, the lack of guidance may lead to a lack of fairness, which may affect decisions in, for example, lending or other areas in which an institution can discriminate against individuals, specific groups, or institutions.

Content: “New technologies will continue to drive global banking for the next five years while regulatory concerns around these technologies remain top of mind for banking executives.”

Technologies or types of data usage involved:

Application and driving factors:

Ethical issues at play:

Group of people at risk or who may gain:

Wider impact of this dilemma:

Cultural aspects important for this dilemma:

SOLUTION-ORIENTATION

Possible controls and comments for a solution: A suggestion could be to implement an “ethics by design” approach – that is, a mandatory set of principles that must be considered in any project.

Furthermore, establishing an ethics committee, which can validate AI use cases and monitor their adherence to ethical standards may be a useful approach.

Related to the former point, but also independently, firms could also utilize domain experts who review AI model decisions – a “human in the loop” – to help guard against unintentional bias.

Also, model data should be tested and evaluated regularly, including with the use of bias-detection software.
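To illustrate what such bias-detection tooling does at its simplest, below is a minimal sketch (in Python; the column names, data, and tolerance are illustrative assumptions rather than features of any particular tool) that checks whether positive outcomes in a model’s training data are skewed across a protected group:

    # Illustrative only: check whether the positive-label rate in training
    # data differs markedly across a protected attribute. Column names
    # ("group", "label"), data, and the tolerance are assumptions.
    import pandas as pd

    def label_rate_by_group(df, group_col="group", label_col="label"):
        """Share of positive labels per group in the training data."""
        return df.groupby(group_col)[label_col].mean()

    def flag_skew(rates, tolerance=0.2):
        """Flag if any group's rate deviates more than `tolerance` from the mean."""
        return bool(((rates - rates.mean()).abs() > tolerance).any())

    data = pd.DataFrame({
        "group": ["A", "A", "B", "B", "B"],
        "label": [1, 0, 0, 0, 0],          # e.g. 1 = loan repaid
    })
    rates = label_rate_by_group(data)
    print(rates.to_dict(), "skewed:", flag_skew(rates))

Real bias-detection software runs many such tests, over both training data and model outputs.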

Conditions for acceptability: What are other observations and conclusions for a solution?

Regional variations: In an African context, this is a very relevant dilemma. A country like Kenya is a policy sandbox that pits industry actors on one hand against government regulators and civil society on the other, with academia trying to make sense of all the interactions. This dilemma can be developed to explore, across multiple jurisdictions, how this quadruple helix of actors has navigated the tradeoffs that arise in the constant dance between policy and innovation.

GENERAL COMMENTS/REACTIONS: This idea has potential for development, however, the scope of the topic here is not clear. Should we consider financial service providers that provide services for business-to-business or business-to-customer? If we are considering business-to-customer type service providers, the most critical point could be which type of data will be used for AI technologies. How will this data affect the decision-making process to provide service for customers? I think that this is a global problem and it is very important for European countries in terms of GDPR.

LC-CE11: Ethical responsibility within organizations

EXPLORATION

What is the ethical dilemma? Micro Issue – Data ethics is a topic that should reside in the C-suite, but it is unclear with whom, and how the values and standards developed there should trickle down through financial institutions.

Content: It is not clear yet who should be responsible for ethical data and AI in financial institutions.

Generally speaking, financial institutions already have functions such as advisory boards that govern critical issues, and some institutions have implemented roles such as Chief Data Officer; however, their impact often remains at the C-suite level at best.

Ultimately, it will be necessary for every individual in these institutions to feel responsible for data ethics.

Technologies or types of data usage involved:

Application and driving factors:

Ethical issues at play: It may often remain unclear to employees and managers working on data/AI-based financial solutions whether their work crosses a line that could negatively affect customers or other stakeholders.

Group of people at risk or who may gain:

Wider impact of this dilemma:

Cultural aspects important for this dilemma:

SOLUTION-ORIENTATION

Possible controls and comments for a solution: The European Commission suggests implementing a governance framework, including the appointment of a person in charge of AI ethics or, alternatively, the establishment of an internal or external ethics panel or board that could provide oversight.

According to the Guidelines, trustworthy AI should be:

  1. Lawful – respecting all applicable laws and regulations.
  2. Ethical – respecting ethical principles and values.
  3. Robust – both from a technical perspective while taking into account its social environment.

In addition, the Monetary Authority of Singapore has defined various principles that may be useful. Their aim is to:

  1. Provide firms providing financial products and services with a set of foundational principles to consider when using artificial intelligence and data analytics (“AIDA”) in decision-making.
  2. Assist firms in contextualizing and operationalizing governance of the use of AIDA in their own business models and structures.
  3. Promote public confidence and trust in the use of AIDA.

Furthermore, something like a person or council responsible for data ethics would allow employees and managers involved in the usage of data/AI to get an evaluation of their issues/developments before launch and before they cause adverse ethical effects. Such entities should build on diversity so that they are able to analyze issues from various perspectives.

Conditions for acceptability: What are other observations and conclusions for a solution?

LC-CE12:  Challenges associated with vague ethical and legal frameworks

EXPLORATION

What is the ethical dilemma? Macro Issue – Drawing the line between what’s legal from, for instance, a GDPR standpoint and what’s ethical is difficult – “Creepy Line.” That is because customers and stakeholders are gradually becoming more comfortable with how their data is used over time.

In other words, if steps towards something potentially negative are simply incremental enough, no one will care because they get slowly used to it as the small changes don’t appear significant enough.

Content: The matter is that even within a legal framework such as the GDPR, financial institutions can utilize customer data in various ways, for example for marketing or service offerings, that may be legal but not necessarily ethical.

Businesses that comply with strictly legal requirements alone leave gaps that could be exploited for unethical ends.

Technologies or types of data usage involved: AI analytics and data mining for enabling service offerings; corporate data privacy policies; the General Data Protection Regulation (GDPR).

Application and driving factors: The pursuit of privacy policy only to the extent of what is legal, as opposed to ensuring both legal and moral considerations.

Ethical issues at play: Customers may be uneasy or feel intruded upon. The argument goes that a customer may realize that, under the law, a financial institution may be allowed to make a certain offering, but regardless of that, the customer may not feel comfortable with it being offered.

Some businesses ensure their privacy policy is in compliance with existing standards (e.g. legitimate interest); however, such standards may be vague in certain areas, allowing businesses to claim compliance while, within that space, providing service offerings that may be unethical.

Regional variations: In Nigeria, one of the challenges of rule-based presentation of financial statements was that accountants could manipulate financial statements while stopping short of breaking any laws, yet still not fairly present the economic position of the corporate entity. A solution was the adoption of principle-based presentation, which implies moral and professional ethics.

Group of people at risk or who may gain: Potentially everyone that uses any kind of financial services.

At the same time financial institutions may be at risk themselves.

Regional variations: Users and firms that deploy such unethical tactics, especially financial services, are at risk. Businesses that are intentionally not transparent in data and financial policy may gain in the short term but with long-term implications of negative corporate publicity and loss of integrity.

Wider impact of this dilemma: Customers may lose trust in financial institutions.

This is pure scenario planning – not backed by the source. Context: traditional financial service firms. Suppose financial institutions cross a line and offer services that may be legal but do not align with an individual’s/institution’s idea of its financial service provider. Let us further suppose that the financial service provider in question is a traditional financial service provider. Assuming this type of service offering happens across all major traditional financial service providers, one can easily imagine that an individual/institution would search for an alternative – likely even a fintech alternative.

Issues:

  • Fintechs have so far mostly specialized in particular services or even only functions, and they often don’t offer classical financial services such as commercial lending (B2B).
  • Therefore, customers may need to search for several alternatives to fulfill their needs as the given alternatives can only serve particular needs and may not offer one-stop services. This increases transaction costs, which can have negative effects on the economy.
  • Another issue may be that this potential mass transition from traditional financial service providers to fintechs may cause major economic disruption/instability. That may be because fintechs may not have the capabilities or interest to service certain needs that were previously serviced by traditional financial service firms. Although this market gap may eventually be filled given the business opportunities, it may take time and cause damage along the way.

Lesson: Traditional financial services firms play a critical role in society and the economy. Although innovation is necessary to stay competitive, not all innovation must be implemented; some may even lead to adverse effects. There is a fine balance between “right” and “wrong”. Ultimately, traditional financial service firms must be aware of their role and importance for the well-being of society. They cannot and should not try to operate like startups in the sense of trial and error at any cost.

It exposes customers to data insecurity and invasion of privacy where third parties not covered within the corporate data policy have access to customer data. Customer data may end up with malicious agents. User distrust and dissatisfaction may restrict industry growth and create lethargy towards certain applications. Firms may be exposed to legal action resulting in financial and credibility losses (Cary et al., 2003; Okomayin et al., 2023).

Cultural aspects important for this dilemma: Different societies have different perceptions of where to draw the “Creepy Line”. It is very much based on the relative value perspective.

Regional variations: Nigerian data protection regulation is developing; however, data privacy policy is regulatory focused (geared more towards AML than data ethics). Users in the region are highly concerned about the use of personal financial data as against other forms of data (Adelola et al., 2015; Babalola, 2022).

SOLUTION-ORIENTATION

Possible controls and comments for a solution: Educate the people responsible for data analytics and service development in financial institutions to consider the “right thing for people and society”. The case reports an ING manager asking employees whether they could explain to their family, friends, and even elders what they are doing with this data. Apply one-way encryption to data identity and protect the original database from direct interfacing by third parties (Ige & Adewale, 2022). Conscript third-party organizations into the adoption of the privacy policy. Foster collaboration among transnational regulatory authorities, which would allow for a global data privacy regulatory monitoring system.
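As a concrete illustration of the one-way encryption idea mentioned above, the following minimal sketch (Python; the key handling and identifier format are assumptions for illustration) pseudonymizes a customer identifier with a keyed one-way hash, so that third parties interfacing with shared data never see the original identity:

    # Illustrative sketch: pseudonymize a customer ID with a keyed one-way
    # hash (HMAC-SHA256). The secret key stays with the data controller;
    # third parties receive only the digest and cannot reverse it.
    import hashlib
    import hmac

    SECRET_KEY = b"replace-with-a-securely-stored-key"   # assumes key management exists

    def pseudonymize(customer_id: str) -> str:
        return hmac.new(SECRET_KEY, customer_id.encode("utf-8"),
                        hashlib.sha256).hexdigest()

    print(pseudonymize("customer-12345"))   # stable token, not reversible

Because the hash is keyed and one-way, the same customer always maps to the same token, but the token cannot be reversed without the controller’s secret key.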

Conditions for acceptability: This could be acceptable where the solutions discussed above are adopted by corporates and third parties.

Other observations and conclusions for a solution:

The region is gradually foraying into emerging digital technologies and their benefits. Ensuring data privacy protection and security would, to a great extent, build user confidence and patronage.

GENERAL COMMENTS/REACTIONS: Providing ‘complete’ information to users is an example of good business ethics.

MORE RESOURCES:

Adelola, T., Dawson, R., & Batmaz, F. (2015). Nigerians’ perceptions of personal data protection and privacy.

Babalola, O. (2022). Nigeria’s data protection legal and institutional model: an overview. International Data Privacy Law, 12(1), 44-52.

Cary, C., Wen, H. J., & Mahatanankoon, P. (2003). Data mining: Consumer privacy, ethical policy, and systems development practices. Human Systems Management, 22(4), 157-168.

Ige, T., & Adewale, S. (2022). AI powered anti-cyber bullying system using machine learning algorithm of multinomial naïve Bayes and optimized linear support vector machine. arXiv preprint arXiv:2207.11897.

Okomayin, A., Ige, T., & Kolade, A. (2023). Data Mining in the Context of Legality, Privacy, and Ethics.

LC-CE13:  Are there risks to AI algorithm transparency?

EXPLORATION

What is the ethical dilemma? Macro Issue – Do we want to be able to explain what is happening within the AI?

Content: Financial institutions use neural network-based AI analytics to prevent and detect financial fraud.

Technologies or types of data usage involved: Neural networks.

Application and driving factors: Fraud detection.

Ethical issues at play: There are various calls to be able to explain the workings of, for instance, neural networks. For example, people would like to know all of the different factors that are considered when making an analysis.

However, suppose we knew exactly what is happening in the analysis. Criminal individuals or institutions could then leverage this knowledge to, for example, engage in money laundering while evading detection. Thus, we may actually be better off knowing less about the inner workings of the AI, because preventing or catching financial crime is then easier.

Group of people at risk or who may gain: Society as a whole.

Wider impact of this dilemma: Making too much information about the inner workings of AI public may decrease our chances of preventing and catching illegal activities. These activities may have negative implications for society as a whole.

For instance, a country’s prosperity is highly dependent on the stability of its institutions. Applying this aspect to financial institutions, one can easily see how important it is that financial institutions are able to protect a nation’s and its stakeholders’ assets. Ultimately, trust is the glue that makes the system work, once it is damaged it can become very difficult for a nation to rebuild and establish itself as a trustworthy partner.

Cultural aspects important for this dilemma: Different societies have different perceptions of where to draw the “Creepy Line”. It is very much based on the relative value perspective.

SOLUTION ORIENTATION

Possible controls and comments for a solution: Before we have this discussion, we should not forget that it is currently not possible to fully explain the inner workings of even basic neural networks.

However, assuming we could, it may be necessary to find a balance between transparency and “secrecy”. Potentially, we as societies have to come up with some kind of protection mechanism, perhaps inspired by something like IP rights (e.g. patents). These rights could be granted by an institution (potentially national) that represents a trusted entity, which verifies the ethical “correctness” of the neural network’s inner workings but withholds this information from the public.
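For context on what partial transparency could look like in practice, the following minimal sketch (Python with scikit-learn; the data are synthetic, and the choice of technique is our illustration rather than anything proposed in the source) uses permutation feature importance, a post-hoc method that ranks which inputs drive a model’s decisions without publishing the model’s full inner workings:

    # Illustrative sketch: post-hoc explanation via permutation feature
    # importance. It ranks which inputs matter to a fitted model without
    # exposing the model's internal weights. Data here are synthetic.
    from sklearn.datasets import make_classification
    from sklearn.inspection import permutation_importance
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=0).fit(X, y)

    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, score in enumerate(result.importances_mean):
        print(f"feature_{i}: importance {score:.3f}")

Note that such rankings disclose which factors matter without revealing how they are combined – one possible point of balance between transparency and secrecy.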

Conditions for acceptability: What are other observations and conclusions for a solution?

GENERAL COMMENTS/REACTIONS: Ethical Dilemma: The ethical dilemma in Africa revolves around customer engagement practices employed by businesses and organizations. This includes how companies interact with customers, collect and use their data, and influence their behaviors.

Content in Africa: In Africa, businesses and organizations engage with customers through various channels such as mobile apps, social media, SMS, email, and in-person interactions. These engagements aim to promote products, services, and brand awareness while fostering relationships with customers.

Technologies or Types of Data Usage Involved in Africa: The technologies used for customer engagement in Africa include mobile apps, CRM (Customer Relationship Management) systems, data analytics tools, SMS platforms, email marketing software, and social media platforms. Data usage involves collecting customer information such as demographics, preferences, purchase history, and behavioral data.

Application and Driving Factors: The application of customer engagement practices in Africa is driven by the need for businesses to attract and retain customers, increase sales, build brand loyalty, and gain a competitive edge in the market. Factors driving this use include the rapid adoption of mobile technology, increasing internet penetration, and the growing middle class with disposable income.

Ethical Issues at Play in Africa: Ethical issues in customer engagement in Africa include privacy concerns, data protection, transparency in data collection and usage, consent management, fair treatment of customers, and the potential for manipulation or exploitation of vulnerable populations.

Groups at Risk and Those Who Might Gain in Africa: Groups at risk include consumers whose data privacy and security may be compromised, individuals who may be subject to manipulative marketing tactics, and marginalized communities who may be disproportionately impacted by unethical business practices. Businesses and organizations might gain by leveraging customer data to personalize offerings, improve customer experiences, and increase sales.

Wider Impact of the Dilemma in Africa: The wider impact of unethical customer engagement practices in Africa can lead to erosion of trust between businesses and consumers, reputational damage to companies, loss of customer loyalty, regulatory scrutiny, and legal consequences. It can also perpetuate inequality and exacerbate social and economic disparities.

Cultural Aspects Important for this Dilemma in Africa: Cultural aspects such as communal values, respect for privacy, trust in institutions, and attitudes towards consumerism influence how customer engagement practices are perceived and accepted in Africa. Additionally, cultural diversity across African countries necessitates a nuanced approach to customer engagement that respects local customs and traditions.

Possible Controls and Comments for a Solution: Some possible controls for addressing ethical issues in customer engagement in Africa include implementing robust data protection regulations, ensuring transparency and accountability in data practices, obtaining explicit consent from customers for data usage, providing options for customers to opt-out of marketing communications, and promoting digital literacy among consumers.

Acceptable Conditions for Solutions: Solutions to ethical dilemmas in customer engagement would be acceptable if they prioritize the rights and interests of consumers, uphold ethical principles such as transparency and fairness, comply with relevant regulations, and contribute to building trust and positive relationships between businesses and customers.

Other Observations and Conclusions for a Solution: Effective solutions should involve collaboration between businesses, government regulators, consumer advocacy groups, and civil society organizations to establish clear standards and best practices for ethical customer engagement. Moreover, fostering a culture of corporate social responsibility and ethical leadership within organizations is essential for promoting ethical conduct in customer engagement practices in Africa.

How does this play out in Africa? In Africa, the balance between transparency and security in AI algorithms may be influenced by factors such as cultural attitudes towards technology, regulatory environments, and levels of technological adoption. Countries in Africa may need to adapt solutions to suit their specific cultural and regulatory contexts, ensuring that AI-driven fraud prevention measures effectively combat financial crime while maintaining trust and accountability in financial institutions. Additionally, efforts to enhance digital literacy and promote ethical AI practices can contribute to addressing this dilemma in the African context.

MORE RESOURCES: Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review, 1(1).

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2).

Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751-752.

LC-CE14:  AI for the assessment of creditworthiness and lending

EXPLORATION

What is the ethical dilemma? Having data alone is not sufficient – Financial institutions must interrogate the outcomes of their AI. In other words, they must understand how AI decisions might affect socioeconomic or cultural situations.

Content: AI for the assessment of creditworthiness and lending.

Technologies or types of data usage involved: US Context – Often financial institutions feed their AI with so-called FICO credit scores, which include data on matters such as educational levels, buying patterns, etc.

Application and driving factors: Financial institutions use this type of data for their algorithms to understand the creditworthiness of customers or to see where they have opportunities to sell services.

Ethical issues at play: The issue with this approach is that the outcomes of the analytics often suggest that certain geographical areas, such as city districts, are better or worse for selling services than others. This implies that algorithms engage in “redlining”, a process in which decisions are based on socioeconomic and/or racial divides.

Group of people at risk or who may gain: Typically, individuals or institutions that are associated with or represent the underprivileged/discriminated, or that live or are located in areas associated with those characteristics.

Wider impact of this dilemma: Research shows (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2727763) that historically black colleges and universities pay higher underwriting fees to issue tax-exempt bonds compared to “white” institutions. Furthermore, the research shows that credit quality, such as AAA ratings, did not play a role.

This causes INEQUITY, which may happen completely unwittingly when financial institutions utilize historical data without interrogating either the ingoing or the outgoing data.

Apart from the social impact, this kind of discrimination also creates blind spots for the banks as they potentially miss out on untapped and underserved markets.

Cultural aspects important for this dilemma: It is likely fair to say that every society has its underprivileged and discriminated groups. Thus, generally speaking, it can be argued that the dilemma applies to those in a society who fall under this umbrella. However, this umbrella may be defined differently from nation to nation; therefore, it is difficult to generalize about which particular group that may be.

SOLUTION-ORIENTATION

Possible controls and comments for a solution: Financial institutions must challenge both the data going into and the data coming out of their AI-based evaluations/predictions.
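One concrete way to “challenge the data coming out” is a simple outcome audit. The sketch below (Python; the figures are synthetic, and the 0.8 threshold follows the common “four-fifths” rule of thumb, offered as an assumption rather than a legal standard) compares approval rates across geographic areas:

    # Illustrative outcome audit: compare loan-approval rates across areas
    # and apply the common "four-fifths" disparate-impact rule of thumb.
    # All figures are synthetic.
    approvals = {
        "district_A": (820, 1000),   # (approved, applications)
        "district_B": (430, 1000),
    }

    rates = {area: approved / total for area, (approved, total) in approvals.items()}
    impact_ratio = min(rates.values()) / max(rates.values())
    print(rates, f"impact ratio = {impact_ratio:.2f}")
    if impact_ratio < 0.8:
        print("Potential disparate impact: interrogate both inputs and outputs.")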

Conditions for acceptability: What are other observations and conclusions for a solution?

GENERAL COMMENTS/REACTIONS: This dilemma is a unique opportunity to explore AI innovations within the context of innovation systems. Innovation Systems are combinations of actors, the linkages between them, the rules that govern them and the products, services, and processes that arise. The dilemma currently restricts itself to the area of financial services and the administrative segmentation of geographical regions. It can be expanded to explore other sectors where AI innovations exist and to additional segmentations of regions and populations. This will generate a wide variety of new dilemmas as well as clusters of common dilemmas. Such an exercise would be useful in an African and Kenyan context.

LC-CE15:  Car insurance pricing based on social media posts

EXPLORATION

What is the ethical dilemma? Car insurance pricing based on social media posts.

Content: This describes one of the first cases gone wrong. Admiral Insurance, one of Britain’s biggest insurance companies, utilized algorithms to analyze Facebook posts of first-time car owners looking for personality traits that are linked to safe driving e.g. “short concrete sentences, using lists, and arranging to meet friends at a set time and place, rather than just tonight.” Based on the analysis the insurance would determine price levels.

Technologies or types of data usage involved: The insurer utilized algorithms that looked for correlations between social media data and actual claims data.
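To make the described approach concrete, here is a hypothetical sketch (Python) of the kind of writing-style feature such an algorithm might extract before correlating it with claims data; the feature and the data are invented, and Admiral’s actual model is not public:

    # Hypothetical illustration only: derive a simple writing-style feature
    # from posts and pair it with claims outcomes. Neither the feature nor
    # the data reflect Admiral's actual (undisclosed) model.
    import statistics

    posts = [
        "Meet at 7pm at the cafe. Short list: 1) cake 2) coffee.",
        "gonna do something tonight maybe idk lol who knows",
    ]
    claims = [0, 1]  # synthetic: 0 = no claim filed, 1 = claim filed

    def avg_sentence_length(text: str) -> float:
        """Average words per sentence, a crude 'concise writing' proxy."""
        sentences = [s for s in text.split(".") if s.strip()]
        return statistics.mean(len(s.split()) for s in sentences)

    features = [avg_sentence_length(p) for p in posts]
    # A real system would fit a model on many such (feature, outcome) pairs.
    print(list(zip(features, claims)))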

Application and driving factors: The technology called “firstcarquote” provides an opportunity for young drivers to identify themselves as safe drivers instead of having to wait years while they build up a track record and a no claims bonus.

Ethical issues at play: The insurer admits that the algorithm is still at an early stage. Furthermore, they highlight that their analysis is not based on any specific model or commonly accepted theory. Instead, it is based on an explorative, constantly developing model that changes as more and more data is collected. These aspects highlight that there is no fixed assumption of what characterizes a safe driver, meaning it is not yet clear which traits make for good measures to determine the price. Another issue is that this explorative process might not be perfectly transparent, e.g., how does the insurer come to conclusions, and who is involved? It may also raise privacy concerns about the invasion of personal data. Customers should know that the algorithm is still in an exploratory phase.

Group of people at risk or who may gain: Young people, in particular, first-time drivers in the UK.

Wider impact of this dilemma: I can imagine that young individuals will adjust their behavior on social media platforms so as to achieve good insurance quotes. Beyond that, I can imagine that people may move to other platforms for their private use, which would leave the platforms analyzed for pricing as fake/artificial hosts for idealized profiles. A consequence would be that either the insurance quote system would no longer yield sufficient results, or the insurance industry and young users would get into a rat race, jumping from one platform to another to catch up on data/privacy.

Cultural aspects important for this dilemma: For Admiral Insurance, it is so far the case that they require the permission of the young individuals to analyze their Facebook data. Assuming this will continue to be the case, it is likely that this will limit the scope of potential markets. That is because different cultures have different data privacy concerns.

SOLUTION-ORIENTATION

Possible controls and comments for a solution: Opt-in/opt-out options – making it transparent what the firm wants to do with the data, for how long they want to store it, etc., and simply giving the young individuals the option to agree or not agree with it. Also, one could ask if it’s a must to share your data to get an insurance quote as a young adult. Are there other ways to get it as well without the need to share your data? As long as customers can decide what to share (or what not to share) it’s okay.
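As a minimal sketch of what such an opt-in/opt-out mechanism could record (Python; the field names and retention handling are assumptions for illustration, not a regulatory schema):

    # Illustrative consent record: what is collected, why, and until when,
    # with an explicit opt-in flag the customer can revoke at any time.
    # Field names are assumptions, not a regulatory schema.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ConsentRecord:
        customer_id: str
        purpose: str                      # e.g. "social-media-based quote"
        data_categories: list             # e.g. ["facebook_posts"]
        retention_until: date
        opted_in: bool = False

        def revoke(self) -> None:
            self.opted_in = False         # should trigger deletion downstream

    consent = ConsentRecord(
        customer_id="user-1",
        purpose="social-media-based quote",
        data_categories=["facebook_posts"],
        retention_until=date(2026, 1, 1),
        opted_in=True,
    )
    consent.revoke()
    print(consent)

Recording the purpose and retention period alongside the opt-in flag is what makes “transparency about what the firm wants to do with the data” auditable after the fact.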

Conditions for acceptability: If there is “perfect” transparency about all data-related issues and an individual agrees with the conditions, yes.

Other observations and conclusions for a solution:

GENERAL COMMENTS/REACTIONS: This dilemma is similar to an earlier one (IN8) that focuses on the uses of publicly available social media information to inform private decisions in the financial sector. The case highlights well the issue of bias existing in social media data where individuals present an ‘idealized’ perspective of themselves. The dilemma also explores the potential of developing business models at the level of an industry actor to harness low-cost data (irrespective of ownership) to pursue high-profit opportunities. The dilemma has implications for use cases in the political sector but raises fundamental questions about data as used in the scientific sector.

LC-CE16:  Chatbots, robotic bankers, inanimate intelligence virtual assistants

EXPLORATION

What is the ethical dilemma? The increased use of AI removes the human touch and emotional connectivity that customers have developed with their representatives.

Content: Chatbots or robotic bankers have higher affinities towards human-likeness but will cost significantly more than inanimate intelligent virtual assistants (Siri, Cortana).

Chatbot innovation could push for indistinguishable human-like characteristics because clients desire to “trust” the services provided. Clients would be more compliant with human-like representations (Mori, 2007).

Technologies or types of data usage involved: Service robots can be designed as humanoid (i.e. anthropomorphic) simulating a human appearance or as a non-humanoid. Humanoid robots can mimic the expression of emotional responses (e.g. using facial expressions and body language), and therefore they are perceived as more pleasant.

Application and driving factors: During the service encounter, customers often place a premium on pleasant relations with service employees – sometimes described as rapport, engagement, and trust, and so providing emotional and social value. Customers’ acceptance of robots will not only depend on their perceived functionality but also on social-emotional elements such as perceived humanness, social interactivity, and social presence.

Ethical issues at play: While virtual robots have nearly negligible costs, more sophisticated physical robots are still very expensive. Only big companies will be able to afford such technologies, and thus they will gain a huge advantage (better services and customer loyalty).

Group of people at risk or who may gain: CUSTOMER POINT OF VIEW: Elderly people could be more suspicious of AI, while the young may gain. BUSINESS POINT OF VIEW: Companies with more sophisticated robots may gain an advantage.

Wider impact of this dilemma:

Cultural aspects important for this dilemma:

SOLUTION-ORIENTATION

Possible controls and comments for a solution:

Conditions for acceptability:

GENERAL COMMENTS/REACTIONS: In Africa, the ethical considerations surrounding customer engagement are influenced by various factors, including socio-economic conditions, cultural norms, and regulatory frameworks. Here’s how these aspects play out in the context of customer engagement practices in Africa:

  • Cultural Sensitivities: African societies often place a high value on interpersonal relationships and trust. Therefore, intrusive or manipulative marketing tactics may be perceived negatively and could damage trust between businesses and customers. Companies operating in Africa must navigate cultural sensitivities and adopt customer engagement strategies that respect local customs and values.
  • Data Privacy Concerns: With the increasing digitization of economies across Africa, there are growing concerns about data privacy and protection. Customers are becoming more aware of their rights regarding the collection and use of their personal data. Therefore, businesses must ensure compliance with data protection laws and regulations, such as the General Data Protection Regulation (GDPR) and local data protection legislation where applicable.
  • Access to Technology: While technology adoption is on the rise in Africa, there are still significant disparities in access to digital platforms and tools across the continent. Companies must consider these disparities when designing customer engagement strategies to ensure inclusivity and accessibility for all segments of the population, including those in rural and underserved areas.
  • Regulatory Environment: African countries are increasingly enacting data protection laws and regulations to safeguard consumer rights and privacy. Businesses operating in Africa must stay abreast of these regulatory developments and ensure compliance with relevant legislation to avoid legal and reputational risks associated with non-compliance.
  • Ethical Marketing Practices: Ethical marketing practices, such as transparency, honesty, and respect for customer preferences, are essential for building trust and fostering long-term customer relationships in Africa. Companies that prioritize ethical considerations in their customer engagement efforts are likely to gain a competitive advantage and enhance their brand reputation in the region.

Overall, navigating the ethical dimensions of customer engagement in Africa requires businesses to adopt a culturally sensitive approach, prioritize data privacy and protection, comply with relevant regulations, and uphold ethical standards in their marketing practices. By doing so, companies can build stronger connections with customers and contribute to sustainable business growth in the African market.

Implications for Africa: In Africa, where access to advanced AI technologies may be limited, there could be challenges in adopting AI-driven customer service solutions uniformly across different regions. Additionally, cultural factors and varying levels of technological literacy may influence the acceptance and implementation of AI-driven solutions. Efforts to promote transparency, accessibility, and education about AI technologies are crucial to address ethical concerns and ensure equitable access to AI-driven services in Africa.

MORE RESOURCES: Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2).

Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review, 1(1).

Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751-752.

LC-CE17:  Crypto- and blockchain-based assets

EXPLORATION

What is the ethical dilemma? Cryptocurrency cards could back assisted-lending programs (affordable, basic living resources such as food, low-income housing, accessible healthcare, or even eco-sustainable transportation) but might be too volatile of a market for this group of already “at-risk” individuals to depend on.

Content: Some online banking platforms such as crypto.com and coinbase have partnered with Visa and Mastercard to offer their clients physical debit cards for extra blockchain security purposes. These programs come with various benefits such as lower conversion fees and discounted web 3.0 service bundling.

Technologies or types of data usage involved: Arculus, Ledger Nano X, Trezor: cold-wallet storage. NFC (Near Field Communication): tap card on pad. Chip-and-PIN technology: EMV/smart processors found in Visa, Mastercard, Europay. Blockchain and distributed ledger technologies (DLT): Bitcoin, ERC-20.

Application and driving factors: Owning blockchain-based assets could give individuals who barely meet minimum requirements for basic living resources a new form of digital finance and a better chance of building capital. Individuals would be better protected against theft with a cold-wallet storage system. If mass-adopted, this would most likely reap immense corporate profitability and move commerce further away from fiat/native currency exchanges. (https://cointelegraph.com/news/altcoin-roundup-crypto-credit-cards-could-be-the-missing-link-to-mass-adoption)

Ethical issues at play: Blockchain can entail a decentralized finance exchange that is not regulated by a designated authority. Events that occur across the globe can make many crypto markets extremely volatile. To this extent, if governments issued such cards to their most economically vulnerable groups, individuals would be staking their minimum requirements of living on a higher gamble.

Group of people at risk or who may gain: The demographic group most at risk of theft overall is people in lower-income areas across the United States. Impoverished conditions could leave these individuals unable to feed their families if these deregulated markets drop.

Group at risk (Africa): Lower-income areas across various regions in Africa may be at risk of theft and financial loss if individuals depend on cryptocurrency cards for basic living resources.

Groups at risk (Australia): People living in rural parts of Australia do not have reliable access to the internet, meaning that they cannot easily or reliably access certain resources (online education, news, financial resources). This means they would miss out on a potential way to access and safeguard capital. Even if they did have cryptocurrency-based assets, they would be unable to access them. Furthermore, given the volatility of cryptocurrency, these already vulnerable groups would be made further vulnerable, because their financial access depends both on internet access and on the volatility of cryptocurrency.

Groups at risk (Ukraine): There is a lot of discussion of cryptocurrency in Ukraine, but not many people really understand how it works or its risks. This means people will use such cards for the benefits without truly understanding that all their money is under constant risk. For some people, cryptocurrency becomes an addiction: they put all their money into it hoping to earn more. Many crypto businesses in Ukraine put money into marketing that encourages people to think this way. At the end of the day, many people might lose all of their money.

Wider impact of this dilemma: If the market caps of these assets fall below certain levels, it will not matter how much of ”x” blockchain anyone owns. Therefore, placing individuals below the poverty line on purely crypto-backed assets as opposed to fiat also implies all members of society are at risk. If banks decide not to reimburse clients, there will be no demand for these card services.

Cultural aspects important for this dilemma: Financial service providers (FSP) should only offer crypto cards if and only if they can guarantee high standards of network securities and regulations through a central bank digital currency (CBDC).

SOLUTION-ORIENTATION

Possible controls and comments for a solution: Machine learning algorithms could be leveraged by the firm in order to see progressive growth over time. However, these clients would have to waive their rights or give consent to the bank in order to maximize their funds (and these models could be unintentionally biased). A client portfolio should be based on diversification, i.e. maintaining various interoperable assets, as opposed to an “eggs in one basket” approach to financial backing (solely Bitcoin, Solana, or Cardano). Also, each institution should consider offering stablecoins, blockchain assets whose value is derived from an underlying native currency (Gemini, Tether, PAX Gold).
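As a rough numerical illustration of the diversification point (Python; the volatilities, correlation, and weights are invented for the example and are not investment advice), combining imperfectly correlated assets lowers portfolio volatility relative to holding a single coin:

    # Toy illustration: volatility of a concentrated vs. a diversified
    # two-asset crypto portfolio. All numbers are invented for the example.
    import math

    vol_a, vol_b, corr = 0.80, 0.60, 0.3   # hypothetical annualized volatilities
    w = 0.5                                # equal weights across the two assets

    concentrated = vol_a                   # 100% in asset A
    diversified = math.sqrt(
        (w * vol_a) ** 2 + (w * vol_b) ** 2
        + 2 * w * w * vol_a * vol_b * corr
    )
    print(f"concentrated: {concentrated:.2f}, diversified: {diversified:.2f}")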

Conditions for acceptability: The CBDC or federal crypto regulators – operating under federal oversight in liaison with international regulators and stakeholders – should decide to minimally regulate blockchain ecosystems, preventing the funding of cybercrimes like Silk Road or terrorist projects that involve LAWS technology. Centralized blockchain technology (stablecoins) might be necessary for these individuals to own if cold wallets are implemented by a CBDC.

GENERAL COMMENTS/REACTIONS:

MORE RESOURCES: Siklos, P. (2021). Central Bank Digital Currency and Governance: Fit for Purpose? (pp. 11-19, Rep.). Centre for International Governance Innovation. Retrieved September 1, 2021, from http://www.jstor.org/stable/resrep31644.8

Narayanan, A., Bonneau, J., Felten, E., Miller, A., & Goldfeder, S. (2016). Bitcoin and Cryptocurrency Technologies: A Comprehensive Introduction. Princeton University Press.

LC-CE18:  Chatbots, robotic bankers, inanimate intelligence virtual assistants II

EXPLORATION

What is the ethical dilemma? How should banks implement chatbots that fulfill the roles and responsibilities of human-level employees? Drastically human-like designs could appear deceptive through the uncanny valley, while inanimate interaction causes lower customer satisfaction ratings.

Content: Chatbots or robotic bankers have higher affinities towards human-likeness but will cost significantly more than inanimate intelligent virtual assistants (Siri, Cortana).

Chatbot innovation could push for indistinguishable human-like characteristics because clients desire to “trust” the services provided. Clients would be more compliant with human-like representations (Mori, 2007).

Technologies or types of data usage involved: Robotic Process Automation (RPA), Intelligent Virtual Assistants (IVA), mobile devices (phone, Bluetooth, tablets), voice/speech recognition.

Application and driving factors: Anthropomorphic framing, or the design approach by which human-like characteristics are applied to autonomous machines, is a philosophy currently embraced in robotics that yields higher customer satisfaction ratings (Pepper, HSBC branch of the future). It is most likely the case that humans have a psychological bias towards objects that can display social cognitive skills (Social Intelligence Hypothesis (SIH), 2007; Stanford Broken Window Theory (BWT), 2018). With COVID-19, humans may also be able to work from home by using VR to remotely pilot machines.

Ethical issues at play: Respect for clients appears to be at a dead end on both paths. If a humanoid machine is most preferable for the “SMART” branch of the future, how can a firm be expected to act in good faith and not abuse the “interpersonal” trust that might develop?

Anthropomorphic machines cannot substitute for real human connection for some groups of clients (immigrants, assisted-lending candidates) and could be seen as trying to manipulate them towards their advice. Too-human appearances may be misleading and give rise to the uncanny valley phenomenon.

Group of people at risk or who may gain: Those without experience in robotics would prefer human-level services in almost every scenario.

Wider impact of this dilemma: If no standard is implemented for socially situated robots, then customers may form negative beliefs about the use of AI services in banks’ official policies. Clients might not want to trust a human representative over a “robotic” voice if the firm displays poor transparency.

Cultural aspects important for this dilemma: Not all countries have the same level of technological experience; therefore, some groups may feel uncomfortable if they cannot talk to another real person at a firm. These banking advisors should not try to adopt human-like traits such as skin or eye color but should differentiate themselves as their own unique entity (Pepper).

SOLUTION-ORIENTATION

Possible controls and comments for a solution: Bots should not aim to follow the evolutionary timeline towards full anthropomorphic traits but should have a threshold (known as sub-anthropomorphic framing). Firms should reformat their future branch designs (e.g., as cafés) to make the ecosystem more user-interactive (Boobier, 2020).

Conditions for acceptability: Accept a sub-anthropomorphic framework that does not leverage a client into a state of compliance through deceptive communication skills.

Other observations and conclusions for a solution: Given the state-of-the-art technology that exists today, it should be recommended that these advisors avoid particular contingent attributes. To this extent, whether robots can utilize the human physical features necessary for social interaction, such as hands and faces, while avoiding features such as eye color, gender, or skin tone remains an open question.

Frase, P. (2016). Class Struggle in Robot Utopia. New Labor Forum, 25(2), 12-17. Retrieved September 1, 2021, from https://www.jstor.org/stable/26419978.

Regional variations: For Brazil, this one is very interesting, and I would need to do more comprehensive research on the subject to better contribute to the discussion. However, I can say two things:

  1. Latin American countries, including Brazil, have a high rate of social media and mobile device usage, which could influence the acceptance and integration of advanced chatbots in banking. The region’s openness to technology might mean a more rapid adoption of such innovations compared to other regions that are more conservative about technological changes in banking.

GENERAL COMMENTS/REACTIONS: Since the development of the original template, AI and machine learning algorithms have evolved, offering more sophisticated and nuanced interactions that can better mimic human conversation and understanding. Combine that with the unique regulatory environments that could impact the deployment of AI in banking (such as Brazil’s General Data Protection Law – LGPD), and you will have a very interesting case study.

This dilemma touches on the fine line dividing human and machine capabilities. It highlights the philosophical question of what it means to be human and raises the dilemma of when a machine is human enough. As captured in an earlier dilemma (IN11) having a requirement that industry actors should ensure a human alternative is always available to be engaged instead of their machine equivalent helps to blur the divide. Whereas chatbots and robot bankers as anthropomorphic innovations are almost indistinguishable from their human counterparts in certain aspects, they still remain distinguishable based on a narrow set of factors. Government regulators can require industry actors to specify when a consumer is dealing with an anthropomorphic innovation and offer a human equivalent at any point. Additionally, anthropomorphic equivalents can be required by government regulators to be always subject to a human equivalent at all points. There is a cost and efficiency trade-off for the industry actor.

MORE RESOURCES: Aubel, M., Pikturniene, I., & Joye, Y. (2022). Risk perception and risk behavior in response to service robot anthropomorphism in banking. Central European Management Journal, 30(2), 26-42.

Aubel, M. (2022). Illuminating the Dark Side of Anthropomorphism: Mechanisms of the Uncanny Valley Phenomenon. EXTENDING BOUNDARIES, 13.

LC-CE19:  Robo-advisors in context of investment scenarios

EXPLORATION

Ethical dilemma: Artificial intelligence is rapidly increasing the use of robo-advisory in financial services. Robo-advisors are digital platforms that deliver automated, algorithm-based financial planning services with little to no human guidance. Ethical dilemmas in data privacy and price discrimination may arise from the use of these robo-advisors to provide online advisory services for small and medium enterprise (SME) clients in Africa.

Content: Robo-advisors offer a competitive advantage to their users and financial institutions in the areas of big data research and management of their clients’ and products’ information. However, it is important to note that the entry of Robo-advisors as innovative and fresh players in the financial market gives rise to new challenges for regulators in the short term.

Privacy, discriminatory practices, and their regulatory implications arise in the application of AI in financial advisory services.

Technologies or types of data usage involved:

  • Digital platform technology
  • Algorithm/artificial intelligence-based financial advisory services

Application and driving factors: Robo-advisors have made a significant impact on the investment management landscape since their entry into the market years ago. Although they originated as automated portfolio managers, they have grown to deliver an entire suite of services such as access to human financial advisors, tax-loss harvesting, and cash management. At their most rudimentary level, robo-advisors are online platforms used to provide a bank’s SME clients with automated financial advice.
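At its core, the automated advice described here is a mapping from a client’s stated risk profile to a portfolio allocation. Below is a minimal, hypothetical sketch (Python; the risk bands and weights are invented for illustration and are not investment advice):

    # Hypothetical rule-based robo-advisor core: map a client risk score
    # (1-10) to an asset allocation. Bands and weights are invented for
    # illustration and are not investment advice.
    def allocate(risk_score: int) -> dict:
        if not 1 <= risk_score <= 10:
            raise ValueError("risk score must be between 1 and 10")
        equity = 0.10 + 0.08 * (risk_score - 1)   # 10% .. 82% equities
        return {
            "equities": round(equity, 2),
            "bonds": round(0.90 - equity, 2),
            "cash": 0.10,
        }

    print(allocate(3))   # conservative client
    print(allocate(9))   # aggressive client

Production robo-advisors layer rebalancing, tax-loss harvesting, and suitability checks on top of such a core, but the basic shape – profile in, allocation out – is the same.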

Ethical issues at play: While robo-advisors solve many prevailing problems and enhance market efficiency, they simultaneously introduce fresh risks and regulatory challenges that have not yet been adequately addressed. Hence, it is pertinent to devise an effective framework for robo-advisors based on effective control of data and risk management. Failure to do this could result in an upsurge in systemic risk due to the algorithmic trading performed by this software and these digital technologies.

Group of people at risk or who may gain:

At risk

  • SME clients

To gain

  • Financial advisers
  • Banks
  • Financial planners

Advantages include:

  1. Automated and self-sufficient, i.e. a ‘set it and forget it’/‘hands-off’ approach – allows investors to automatically reap the benefits of successful investments with little or no further interaction on their part.
  2. Cheap way to invest – supposedly better investment returns for a lower cost than traditional, human advisers – takes away the cost of meeting financial advisers or bypasses a mutual fund’s minimum requirement (the robo-adviser is ‘paid’ by taking a percentage of the assets under its care), usually a low fee.
  3. Usually accompanied by some form of customer service/human supervision – although varies from one platform to another. Important for Art. 22 GDPR Automated Decision-Making: ‘it seems likely that providing investment advice and subsequently making investment decisions would classify as automated decisions that produce legal effects’.
  4. Easy to open an account – more inclusive than traditional methods to generate capital.
  5. ‘Better’ models able to make ‘better’ investment decisions – high tech that is constantly evolving and not subject to human error, etc.

Disadvantages:

  1. Lack of human/customer interaction – yes, robo-advisers act on the financial goals set by you, but some clients could benefit from more tailored financial plans hashed out by talking to real human advisers in real time.
  2. Lack of personalization – Unable to consider other aspects of your life to balance all areas and find the best investment strategies for you.
  3. Subject to error, bias, etc., as they are trained on historical data.

Wider impact of this dilemma: The emergence of robo-advisors in Nigeria as innovative players in the financial market is creating fresh challenges for regulators in the short term. As a result, it is pertinent that regulators address critical data privacy and information management factors in order to design and implement effective regulation of robo-advisory services in Nigeria.

Cultural aspects important for this dilemma: In Africa, there is a preference for human interaction with one’s advisers. The robo advisor fails to provide this emotional connection with its clients, which limits its effectiveness. Culturally, consulting services aren’t highly valued by small businesses in Africa. They are seen as nice-to-have and not essential. These factors might limit the effectiveness and efficiency of robo advisors in Africa.

Regional variations: Cost is a driving factor in the adoption of technology in developing countries like Nigeria. Robo-advisory services are not yet fully automated, and as the market develops, regulation will probably develop with it. The challenge would be increased price discrimination, which may inadvertently result in exclusion; yet personalization involves cost.

Experience or acceptance of Robo-advisers in:

  1. UK: ‘Robo-advice is growing in popularity in the US and UK, although this is likely to be a business model that sits alongside a range of advisory options, rather than totally displacing existing advisory services’.
  2. Italy: ‘Retail investors in Italy have historically turned to their local bank and trusted financial consultant, as they are known locally, to help them put their money to work. It is not uncommon for the same bank or financial institution or even financial consultant to look after the same family for generations. In a country where independent financial advisers have only recently come onto the scene, combined with a general reluctance to trust anything techy or digital, it is not hard to see why robo-advisers have struggled to gain ground. But the covid-19 pandemic has flipped these traditional notions and habits upside down. The lockdown months saw a 6% increase in the number of clients signing up to robo-advisory services, as well as turning to other fintech and insurtech solutions’.
  3. US: ‘Viewed positively with potential to help with ‘democratizing finance’ and increasing financial inclusion, an outcome already observed in the US where robo-advisors have garnered over USD$400 billion assets under management’.

SOLUTION-ORIENTATION

Possible controls and comments for a solution: It is important not to go too far by setting a standard for automated advisors which exceeds that for human advisors. For now, the bar against which automated advisors should be measured is that of humans, whom we know are imperfect. Research in different fields shows that even basic algorithms consistently outperform humans in the kinds of tasks that robo advisors perform. However, even though it may be proper to hold automated advisors to an exceptional standard someday, their current market share is too negligible to do so today.

Conditions for acceptability: As regulators acquire greater competence in their ability to evaluate and track robo advisors, and as robo advisors become a significant force in the market, there may be a decrease in the requirement for direct regulation of the forms and features of consumer financial products.

Other observations and conclusions for a solution: The evolution of investment robo advisors, online credit comparison sites, web-based insurance exchanges, and automated personal financial management services gives rise to important opportunities and risks that regulators across the financial services industry are yet to systematically evaluate, let alone address. Due to the scale that automation enables, these services can provide higher quality and more transparent financial advice to more people at a lower cost than human financial advisors. However, this potential hardly guarantees that it will be realized.

GENERAL COMMENTS/REACTIONS: Definitions of robo-advisers to add, fleshing out the one already given:

  1. A robo-advisor is a software that is operated by a financial intermediary. It is based on an algorithm and provided to customers online. The financial intermediary is subject to financial markets regulation.
  2. It opens up the market: individuals can get involved in investing without time constraints or expertise considerations. It is a cheap way to start investing (lower capital is required), based on AI technology that invests on your behalf according to financial goals you can modify at any time; the algorithm balances risk and reward for you (a toy illustration of such a rule follows this list).
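
As a minimal, purely illustrative sketch of the kind of rule hinted at in item 2, the snippet below maps a self-reported risk tolerance onto a hypothetical two-asset mix and computes the trades needed to restore it; real robo-advisors use questionnaires, portfolio optimization, and tax-aware logic far beyond this:

  # Toy robo-advisor rule (hypothetical two-asset portfolio).
  def target_allocation(risk_tolerance: float) -> dict:
      """Map a 0-1 risk tolerance to equity/bond weights."""
      equity = max(0.0, min(1.0, risk_tolerance))
      return {"equity": equity, "bonds": 1.0 - equity}

  def rebalance(holdings: dict, prices: dict, risk_tolerance: float) -> dict:
      """Return trades (in currency units) that restore the target mix."""
      value = {k: holdings[k] * prices[k] for k in holdings}
      total = sum(value.values())
      target = target_allocation(risk_tolerance)
      return {k: target[k] * total - value[k] for k in holdings}

  # Example: a moderate investor whose equity holdings have drifted upward.
  print(rebalance({"equity": 12, "bonds": 40}, {"equity": 10, "bonds": 2},
                  risk_tolerance=0.5))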

Maybe we could add the definition or status of robo-advisers under US, EU, UK law, etc.

US: Robo-advisers hold the same legal status as human financial advisers. They must be registered with the SEC (US Securities and Exchange Commission, the independent agency that protects investors) and be members of FINRA (Financial Industry Regulatory Authority, which regulates brokers and exchange markets).

EU: MiFID II applies, as it is technology-neutral regulation. It is important to understand whether the robo-adviser is a passive agent or independent, i.e. whether it is more than just software used by a financial intermediary. KYC (know-your-customer) and other anti-money-laundering requirements still apply. ‘The EU’s General Data Protection Regulation (GDPR) contains what has been termed the “right to an explanation”. That is, where a purely automated decision is made that significantly affects a person’s rights, that person is entitled to “meaningful information about the logic involved”. This could be interpreted as requiring an explanation for how a deep NN agent arrived at a particular decision. This “right to an explanation” has been criticized as being based on a misunderstanding of deep NN technology, and as likely to have a chilling effect on AI innovation in the EU’.
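
As a minimal sketch of what ‘meaningful information about the logic involved’ could look like in practice, the snippet below trains a hypothetical credit model on synthetic data and reports which input features most influence its output via permutation importance. The feature names are invented for illustration, and this is one common post-hoc technique, not a legally settled standard:

  # Illustrative post-hoc explanation of an otherwise opaque model.
  from sklearn.datasets import make_classification
  from sklearn.ensemble import GradientBoostingClassifier
  from sklearn.inspection import permutation_importance

  X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
  features = ["income", "debt_ratio", "tenure", "age", "txn_volume"]  # hypothetical

  model = GradientBoostingClassifier(random_state=0).fit(X, y)

  # Permutation importance: how much does shuffling each feature hurt accuracy?
  result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
  for name, score in sorted(zip(features, result.importances_mean),
                            key=lambda pair: -pair[1]):
      print(f"{name}: {score:.3f}")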

UK: Good document to consider on this topic: https://www.wipro.com/content/dam/nexus/en/industries/securities

LC-CE20:  Customer Data Privacy Paradox

EXPLORATION

Ethical dilemma: Customer Data Privacy Paradox

Content: The customer data privacy paradox refers to the phenomenon whereby customers claim to care about their data privacy yet choose to share their data anyway. Currently, many data collection practices do not make clear to the customer how and where their data will be used. There is also no proper education for, or reminders to, customers about potential data privacy concerns.

Customers who favour customised service offerings that require the use of personal data may be wary of how such data are collected or used, yet find it burdensome to act. Financial services firms may interpret this lack of action as consent to collect more data than is necessary.

Technologies or types of data usage involved:

  • (Big) Data Analytics
  • Customer Profiling
  • User activity tracking (e.g. Cookies)

Application and driving factors: Most customer data collection touchpoints (online and offline, one-off and ongoing). Firms may exploit the consumer data privacy paradox to amass data that serves purposes other than the customization of service offerings.

Ethical issues at play: First, is the reward customers receive for sharing their data well balanced and fair? Second, have financial institutions been explicit about the usage of shared data and the reward for sharing it? Finally, have financial institutions taken responsibility for educating their customers, or have they leveraged the data privacy paradox instead? Omission: the absence of full disclosure of customer data collection, use, and benefits derived. Omission can be ethically unfair where one’s otherwise legitimate action is misleading (Elegido, 2020), e.g. fintech apps acquiring data under the pretext of providing a customized service while actually profiting from sharing these data with third parties that use them for marketing.

Group of people at risk or who may gain: All customers.

Regional variations: Users of financial services tools and applications are at risk where such data falls into the wrong hands. Providers of financial services technologies stand to gain.

Wider impact of this dilemma: Customers share data without full awareness, and once data has been shared, it is very difficult to revoke or recall.

Cultural aspects important for this dilemma: It depends. Some countries/regions are less willing to share data, e.g. Japan and Hong Kong. Over 80% of customers in the US would avoid buying from companies that raise security worries (McKinsey, 2019). Data privacy is universally valued; the retention period, level of transparency, and other data practices are of relative importance to customers and shape decisions about trust and brand preference (Magna Global, 2022). Users can lose trust and become disillusioned by excessive targeted marketing.

Regional variations: Financial activities are of great interest to developing economies like Nigeria; where this is threatened through the use of negatively perceived technologies, or applications of such technologies, a backlash is likely. Users can lose trust and become disillusioned by excessive targeted marketing.

SOLUTION-ORIENTATION

Possible controls and comments for a solution:

  1. Education to raise customer awareness.
  2. Clear guidelines on why the data is needed, how the data will be handled, and what kind of rewards the customer may get after sharing the data (a sketch of one machine-readable form follows this list).
  3. Collect only what you need and be transparent about the reward system.
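
As a minimal sketch of how guideline 2 could be made machine-readable, a structured disclosure might accompany every collection touchpoint; the field names and values below are hypothetical:

  # Hypothetical machine-readable disclosure attached to a data collection
  # touchpoint: purpose, handling, and reward are stated up front.
  import json
  from dataclasses import dataclass, asdict

  @dataclass
  class CollectionDisclosure:
      data_items: list        # what is collected
      purpose: str            # why it is needed
      retention_days: int     # how long it is kept
      shared_with: list       # third parties, if any
      customer_reward: str    # what the customer gets in return

  disclosure = CollectionDisclosure(
      data_items=["transaction_history"],
      purpose="personalized budgeting tips",
      retention_days=365,
      shared_with=[],
      customer_reward="monthly spending report",
  )
  print(json.dumps(asdict(disclosure), indent=2))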

Conditions for acceptability: This is currently “accepted” by many people due to unawareness of the data privacy paradox.

Other observations and conclusions for a solution: Knowing vs. doing. What if financial institutions are already aware of this paradox but, for their own benefit, continue to “leverage” it? I believe regulators may need to step in to ensure proper governance.

Providing user access controls such as privacy settings, obtaining explicit consent for collection, and anonymizing data can all be helpful.

Regional variations: Adopting regulatory standards with legal as well as moral intent to protect customers.

GENERAL COMMENTS/REACTIONS:

MORE RESOURCES
The privacy paradox (2023), Insight freakout global marketing journal.
Elegido, J. M. (2020). Introduction to Ethics. Pan-Atlantic University Press.

LC-CE21:  Cloud computing, data storage, and cybersecurity risks

EXPLORATION

Ethical dilemma: Data storage and cybersecurity implications of financial data risk resulting from the use of cloud computing

Content: With digitalization, increasing data use has become vital for companies’ survival. The data has to be stored somewhere, and many financial service providers do not have the capacity to store or secure it in-house. Data is therefore often stored by third parties providing storage, for instance cloud services. While claiming to be secure, cloud services have historically been breached on several occasions, which calls into question the long-term security of personal and financial information.

Technologies or types of data usage involved:

  • Cloud storage / blockchain / data warehouses
  • Cloud technologies (storage area networks (SAN), network-attached storage (NAS), object storage such as AWS S3), blockchain, Internet of Things (IoT). Data storage and sharing.

Application and driving factors: Banks are no longer just “safe boxes” for storing money and valuables; they manage a large array of services essential to governments, businesses, and individual agents across society. While valuables have become less tangible, the need to keep them safe has not changed. Societies and the institutions providing these services need to adjust to contemporary technological development and provide digital solutions for a digital civilization.

Ethical issues at play: How much responsibility can be placed on banks? How much data is “enough”? Can we trust banks to essentially hold our lives in their hands? How do you “insure” data (with respect to identity theft, information theft, etc.)?

Data security and privacy issues may arise from the characteristics of the storage system; the concerns include confidentiality, integrity, deletion, and privacy protection.

Group of people at risk or who may gain: All involved parties (B2C, B2B) are exposed to people or groups with ill intent (hackers, governments/organizations, etc.), as are cloud storage providers (CSPs). Banks and their customers, fintechs and users of their applications, and investors in digital financial assets (cryptocurrencies, digital wallets, CBDCs, NFTs, etc.) are potentially at risk. Malicious parties (hackers and cybercriminals) stand to gain.

Wider impact of this dilemma: Personal, sensitive, and financial information being shared, or taken, against one’s will.

Cultural aspects important for this dilemma: Cybersecurity is essentially an arms race – every time a solution/protection is developed, someone develops a way to circumvent this protection, and so on. This might mean that smaller financial institutions, with less money to invest in cybersecurity, will be more prone to attacks.

Data storage is dominated by a small number of CSPs with significant patronage from financial firms, which may result in operational and reputational risk in the event of a significant cyber attack on, or outage at, any of the CSPs. Financial firms and digital asset investors may also be directly or indirectly subject to laws outside their operational and legal jurisdictions. Financial services firms are exposed to systemic risk when CSPs suffer cyber or physical attacks, a major concern of regulators.

Regional variations: Traditional Nigerian banks mostly use in-house storage systems and invest in offsite storage backups, though cloud services are still utilized (online banking). The more tech-savvy fintech startups, however, have outsourced these services (mostly to AWS), though they remain subject to the Nigerian data protection regulations, which cover data privacy in the financial sector.

SOLUTION-ORIENTATION

Possible controls and comments for a solution: This is a highly complex issue, with temporary solutions applied persistently and no permanent solution in sight. Blockchain does seem to hold promise as a technology with potential for the secure storage of information.
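
A minimal sketch of the tamper-evidence property that makes blockchain attractive here, assuming a simplified record format: each record carries a hash of its predecessor, so any later modification is detectable. A production system would add signatures, distribution, and consensus:

  # Tamper-evident hash chain over stored records (simplified sketch).
  import hashlib, json

  GENESIS = "0" * 64

  def add_record(chain, payload):
      prev = chain[-1]["hash"] if chain else GENESIS
      body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
      digest = hashlib.sha256(body.encode()).hexdigest()
      chain.append({"prev": prev, "payload": payload, "hash": digest})

  def verify(chain):
      for i, rec in enumerate(chain):
          prev = chain[i - 1]["hash"] if i else GENESIS
          body = json.dumps({"prev": prev, "payload": rec["payload"]},
                            sort_keys=True)
          if rec["prev"] != prev or \
             rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
              return False
      return True

  chain = []
  add_record(chain, {"account": "A1", "balance": 100})
  add_record(chain, {"account": "A1", "balance": 80})
  print(verify(chain))                    # True
  chain[0]["payload"]["balance"] = 999    # tamper with an old record
  print(verify(chain))                    # False: tampering detected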

Conditions for acceptability: Blockchain tech could be an acceptable way of securing information (transparency & security), but the impact of implementing blockchain in financial services is still relatively unknown.

Other observations and conclusions for a solution: Indirect oversight of CSPs is the usual approach in financial services, but it is insufficient to manage the risk associated with operational disruption of a CSP; where possible, applicable legal mandates addressing financial sector-specific concerns should be applied in regulating CSPs. An approach encouraged by audit firms for digital asset (blockchain) security is to understand the regulations, transactions, and technology behind the digital asset provider’s system, so as to grasp the risk and responsibility of the parties to the contract.

Regional variations: Financial assurance processes may provide some safeguard of the resilience of third-party infrastructure but may be too expensive for small firms (fintechs). A global standard for CSP infrastructure uniformity may help enable portability for firms and global regulatory application.

GENERAL COMMENTS/REACTIONS: Cloud services enable access to infrastructure that would otherwise be costly to build and maintain. CSPs are in business for profit and, beyond needing to succeed, also deploy safety precautions to protect their investments. However, since cyber attacks (apart from outages and other physical threats to the CSPs) largely originate on the customer side, financial firms should also increase internal security to safeguard cloud use rather than simply pass the buck.

MORE RESOURCES
https://www.finextra.com/blogposting/20387/the-state-of-cybersecurity-in-financial-services
https://www.cm-alliance.com/cybersecurity-blog/the-future-use-cases-of-blockchain-for-cybersecurity
Yang, P., Xiong, N., & Ren, J. (2020). Data security and privacy protection for cloud storage: A survey. IEEE Access, 8, 131723-131740.
https://www.researchgate.net/profile/Mohd-Naved/publication/356761386_Application_of_cloud_computing_in_banking_and_e-commerce_and_related_security_threats/links/625cf5ae4173a21a0d1aaa9c/Application-of-cloud-computing-in-banking-and-e-commerce-and-related-security-threats.pdf
https://www.bis.org/fsi/publ/insights53.pdf
Hariharan, Naveen Kunnathuvalappil (2021). Financial Data Security in Cloud Computing. OSF Preprints 6jens, Center for Open Science.
https://www.pwc.com/us/en/tech-effect/emerging-tech/understanding-cryptocurrency-digital-assets.html
https://assets.kpmg.com/content/dam/kpmg/ca/pdf/2018/03/cloud-computing-risks-canada.pdf

LC-CE23:  Customer profiling

EXPLORATION

Ethical dilemma: Customer profiling

Content: Collecting large quantities of data about customers to provide personalized banking experiences, or to validate/exclude from financing/insurance/etc.

Technologies or types of data usage involved: Artificial Intelligence (pattern algorithm)

Application and driving factors: Social credit scores

Ethical issues at play: GDPR rights (the individual’s right to privacy, right to erasure, right to be forgotten, etc.); bias in data collection (racial profiling, exclusion based on geographical/sociodemographic position); social credit scores that tend to enable or heighten inequality; and faulty automatic exclusion (excluding some people from, for instance, insurance), which can potentially lead to fines and court cases, since removing the human from the loop can cause major damage.

Group of people at risk or who may gain: Users are at risk of being unfairly treated through data bias or automation. Banks and companies risk losing credibility and transparency, and face potentially large fines or restrictions for failing their customers. There is large potential economic gain if automation is successful, saving labor, space, time, etc. For banks it might be worth the risk, but it is likely to harm customers (especially in the early stages).

Wider impact of this dilemma: This could have a large impact on social inequality: some people would, by proxy, be unable to receive loans, financing, etc., based solely on their sociodemographic background. It could potentially create a Black Mirror-esque form of social credit.

Cultural aspects important for this dilemma: Considering data bias, there might be huge differences in what is considered social standing across countries, cultures, or even geographical areas. While the intent of AI is to provide a personalized experience for all customers, there is a risk that these forms of social credit scores will result in an enforced homogenization of people, measuring everyone on the same scale.

SOLUTION-ORIENTATION

Possible controls and comments for a solution: The EU is in the process of outlawing social credit scores, and it will be interesting to see whether this has any impact on banks’ strategies for personalized finance. They may need to create different measurements in European contexts. Further, banks might alleviate some of the bias through transparency: what kind of data are they using, and what kinds of insights about customers does it provide? From the banks’ perspective, before they invest in AI solutions, they should ensure that they will also be able to utilize these technologies in the future.

Conditions for acceptability: If it were possible to opt out of customer profiling and still be eligible for loans and other banking services. Banks could offer profiling as a service rather than enforcing it automatically.

Other observations and conclusions for a solution:

Regional variations:

GENERAL COMMENTS/REACTIONS:

In Africa, the exploration of the ethical dilemma surrounding customer profiling in banking and financial services presents unique challenges and considerations.

Ethical Dilemma: The ethical dilemma revolves around the use of large-scale customer data to provide personalized banking experiences and make decisions regarding financing, insurance, and other services. This includes concerns about privacy rights, bias in data collection, and the potential for social credit scoring to exacerbate inequality.

Content: Financial institutions in Africa are increasingly leveraging customer data and artificial intelligence (AI) algorithms for customer profiling and decision-making. This includes assessing creditworthiness, determining insurance premiums, and tailoring services to individual customers based on their data profiles.

Technologies and Data Usage: AI algorithms are used to analyze vast amounts of customer data, including transaction history, online behavior, and demographic information. These technologies enable financial institutions to create sophisticated customer profiles and make data-driven decisions.

Application and Driving Factors: The application of customer profiling in African banking is driven by the desire to enhance customer experiences, improve risk management, and increase operational efficiency. Financial institutions aim to offer personalized services that meet the diverse needs of their customers while maximizing profitability and minimizing risk.

Ethical Issues at Play: Ethical considerations in customer profiling include privacy rights, data transparency, and the potential for algorithmic bias. In Africa, where regulatory frameworks for data protection may be less stringent than in other regions, there is a heightened risk of misuse or unauthorized access to customer data.

Regional Variations: Regional variations in Africa impact the implementation and regulation of customer profiling practices. Countries with robust data protection laws may place greater emphasis on privacy rights and transparency, while others may have more lenient regulations or face challenges in enforcement.

Groups at Risk and Groups that Might Gain: Customers in Africa are at risk of being unfairly treated or excluded based on algorithmic bias or data inaccuracies. Vulnerable populations, such as those with limited access to financial services or marginalized communities, may be disproportionately affected. Financial institutions stand to gain from increased efficiency and targeted marketing strategies enabled by customer profiling.

Wider Impact: The wider impact of customer profiling in Africa includes implications for financial inclusion, social equity, and data governance. While personalized banking experiences can improve access to services for some customers, there is a risk of perpetuating inequalities and reinforcing existing biases.

Cultural Aspects: Cultural factors such as trust in financial institutions, attitudes towards data privacy, and perceptions of social status may influence the acceptance and adoption of customer profiling practices in Africa. Sensitivity to cultural norms and values is essential for designing ethical and inclusive banking solutions.

Solution-Orientation: Possible solutions to address the ethical dilemmas associated with customer profiling in Africa include strengthening data protection regulations, enhancing transparency and accountability in algorithmic decision-making, and promoting consumer education and empowerment. Financial institutions should prioritize ethical considerations and ensure that customer profiling practices align with principles of fairness, transparency, and respect for privacy.

Observations and Conclusions: In navigating the ethical complexities of customer profiling, African banks and policymakers must balance the potential benefits of personalized financial services with the need to protect individual rights and promote social inclusion. By adopting ethical frameworks and leveraging technology responsibly, financial institutions can contribute to a more inclusive and equitable banking ecosystem in Africa.

MORE RESOURCES
https://www.finextra.com/blogposting/20387/the-state-of-cybersecurity-in-financial-services
https://www.cm-alliance.com/cybersecurity-blog/the-future-use-cases-of-blockchain-for-cybersecurity
Virginia Eubanks (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.
Frank Pasquale (2016). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
Bruce Schneier. Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World.
Cathy O’Neil. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.
Shoshana Zuboff. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.

LC-CE26:  Psychological manipulation of user behavior

EXPLORATION

Ethical dilemma: Manipulation of behavior through the use of digital technology. Taken from: https://plato.stanford.edu/entries/ethics-ai/

Content: The ethical issues of AI in surveillance go beyond the mere accumulation of data and direction of attention: They include the use of information to manipulate behavior, online and offline, in a way that undermines autonomous rational choice.

Digital technology companies manipulate user behavior to gain increased consensual or non-consensual access to users’ private and virtual behavior, enabling them to predict, and also influence, users’ future behavior.

Technologies or types of data usage involved: In interface design on web pages or in games, this manipulation uses what are called “dark patterns” (Mathur et al. 2019). At this moment, gambling and the sale of addictive substances are highly regulated, but online manipulation and addiction are not, even though manipulation of online behavior is becoming a core business model of the Internet. Artificial intelligence (AI), robotics.

Application and driving factors: Humanity’s most basic natural tendencies (e.g. empathy) can be exploited through certain AI practices, such as anthropomorphic framing (the design practice of giving robots “human-like” qualities or characteristics). This can in turn produce higher customer receptivity to business demands or financial advice. Deceptive patterns are applied to encourage behavior, drawing on identification and habit information gathered using AI (ML, fingerprinting, etc.) and robotic (telephone, TV) devices, to sway user habits.

Ethical issues at play: Given users’ intense interaction with data systems and the deep knowledge about individuals this provides, they are vulnerable to “nudges”, manipulation, and deception. With sufficient prior data, algorithms can target individuals or small groups with just the kind of input that is likely to influence those particular individuals. Marketing: the exploitation of behavioral bias and the generation of addiction. Propaganda: the use of digital propaganda material for political gain. Misinformation: the provision of verification material (evidence) through digitally altered means rather than authentic documentation.

Group of people at risk or who may gain: These systems will often reveal facts about us that we ourselves wish to suppress or are not aware of: they know more about us than we know ourselves. Even just observing online behavior allows insights into our mental states (Burr and Cristianini 2019) and enables manipulation.

The general public is at risk of propaganda and misinformation. Gambling and online entertainment seekers are exposed to manipulative behavior.

Wider impact of this dilemma: Machine learning techniques in AI rely on training with vast amounts of data. This means there will often be a trade-off between privacy and rights to data vs. technical quality of the product. This influences the consequentialist evaluation of privacy-violating practices. There is a slippery slope from here to paternalism and manipulation.

Political propaganda can harm the autonomy of individuals (the right to be self-governed). Misinformation can lead to distrust of digital information and of the integrity of digitally verifiable evidence, which may lead to increased injustice and insecurity. Group behavior can be swayed towards previously inconceivable action. Marketing bias may lead to byzantine regulation, psychological damage, and financial ruin.

Cultural aspects important for this dilemma: Companies with a “digital” background are used to testing their products on consumers without fear of liability, while heavily defending their intellectual property rights. This “Internet Libertarianism” is sometimes taken to assume that technical solutions will take care of societal problems by themselves (Morozov 2013).

Regional variations: A region already unstable in terms of security, with political instability and insecurity challenges, could become volatile and descend into hostilities. Addiction is a known cause of financial losses.

SOLUTION-ORIENTATION

Possible controls and comments for a solution: While the EU General Data Protection Regulation (Regulation (EU) 2016/679) has strengthened privacy protection, the US and China prefer growth with less regulation (Thompson and Bremmer 2018), likely in the hope that this provides a competitive advantage. It is clear that state and business actors have increased their ability to invade privacy and manipulate people with the help of AI technology and will continue to do so to further their particular interests—unless reined in by policy in the interest of general society.

The strengthening of privacy policy; the implementation of privacy-preserving techniques in data processing; and the application of machine ethics to allow the consideration of societal values, morals, and ethics in the development and application of robotics and AI.

Conditions for acceptability: If positive behavior, e.g. improved spending habits, is encouraged. Aiko: Nudges are acceptable given that the customer has consented to having their data analyzed for the purpose of receiving nudges/offers from a bank. Further, it will become more and more important that the receiver also understands why he or she is receiving a given nudge and what its possible implications are.

GENERAL COMMENTS/REACTIONS: There is an implicit attempt to protect humans and their rights in the development of digital technologies and formulation of policies governing their use.

LC-CE27:  Opacity of AI Systems

EXPLORATION

Ethical dilemma: Opacity of AI Systems Taken from: https://plato.stanford.edu/entries/ethics-ai/

Content: AI systems for automated decision support and “predictive analytics” raise “significant concerns about lack of due process, accountability, community engagement, and auditing” (Whittaker et al. 2018: 18ff). They are part of a power structure in which “we are creating decision-making processes that constrain and limit opportunities for human participation” (Danaher 2016b: 245). At the same time, it will often be impossible for the affected person to know how the system came to this output, i.e., the system is “opaque” to that person. If the system involves machine learning, it will typically be opaque even to the expert, who will not know how a particular pattern was identified, or even what the pattern is.

Technologies or types of data usage involved: Big data

Application and driving factors: Discrimination against minorities due to “faulty” data input, e.g. algorithms denying a bank loan because black people or people with a migration background have, on average, less money (depending on the data source).

Ethical issues at play: Many AI systems rely on machine learning techniques in (simulated) neural networks that will extract patterns from a given dataset, with or without “correct” solutions provided; i.e., supervised, semi-supervised or unsupervised. With these techniques, the “learning” captures patterns in the data and these are labeled in a way that appears useful to the decision the system makes, while the programmer does not really know which patterns in the data the system has used. In fact, the programs are evolving, so when new data comes in, or new feedback is given (“this was correct”, “this was incorrect”), the patterns used by the learning system change. What this means is that the outcome is not transparent to the user or programmers: it is opaque.

Group of people at risk or who might gain: The quality of the program depends heavily on the quality of the data provided, following the old slogan “garbage in, garbage out”. So, if the data already involved a bias (e.g., police data about the skin color of suspects), then the program will reproduce that bias.

Bias typically surfaces when unfair judgments are made because the individual making the judgment is influenced by a characteristic that is actually irrelevant to the matter at hand, typically a discriminatory preconception about members of a group. So, one form of bias is a learned cognitive feature of a person, often not made explicit. The person concerned may not be aware of having that bias—they may even be honestly and explicitly opposed to a bias they are found to have (e.g., through priming, cf. Graham and Lowery 2004).

Wider impact of this dilemma: Henry Kissinger pointed out that there is a fundamental problem for democratic decision-making if we rely on a system that is supposedly superior to humans, but cannot explain its decisions. He says we may have “generated a potentially dominating technology in search of a guiding philosophy” (Kissinger 2018). Danaher (2016b) calls this problem “the threat of algocracy” (adopting the previous use of ‘algocracy’ from Aneesh 2002 [OIR], 2006).

Cultural aspects important for this dilemma: Companies with a “digital” background are used to testing their products on consumers without fear of liability, while heavily defending their intellectual property rights. This “Internet Libertarianism” is sometimes taken to assume that technical solutions will take care of societal problems by themselves (Morozov 2013).

SOLUTION-ORIENTATION

Possible controls and comments for a solution: Bias in decision systems and data sets is exacerbated by this opacity. So, at least in cases where there is a desire to remove bias, the analysis of opacity and bias go hand in hand, and political response has to tackle both issues together.

There are proposals for a standard description of datasets in a “datasheet” that would make the identification of such bias more feasible (Gebru et al. 2018 [OIR]). There is also significant recent literature about the limitations of machine learning systems that are essentially sophisticated data filters (Marcus 2018 [OIR]).
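A minimal sketch of what such a datasheet might contain, with fields that are illustrative rather than drawn from Gebru et al.’s actual template:

  # Hypothetical "datasheet" metadata accompanying a training dataset,
  # loosely inspired by Gebru et al. (2018); all fields are illustrative.
  datasheet = {
      "motivation": "credit risk scoring for retail loans",
      "composition": {"rows": 50_000, "period": "2015-2023",
                      "population": "existing customers only"},
      "collection": "operational records; no consent flow for research reuse",
      "known_gaps": ["thin-file applicants underrepresented",
                     "labels reflect past approval decisions, not repayment"],
      "recommended_uses": ["model prototyping with bias audit"],
      "prohibited_uses": ["deployment without fairness testing"],
  }
  for key, value in datasheet.items():
      print(f"{key}: {value}")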

There are several technical activities that aim at “explainable AI”, starting with (Van Lent, Fisher, and Mancuso 1999; Lomas et al. 2012) and, more recently, a DARPA program (Gunning 2017 [OIR]). More broadly, the demand for “a mechanism for elucidating and articulating the power structures, biases, and influences that computational artifacts exercise in society” (Diakopoulos 2015: 398) is sometimes called “algorithmic accountability reporting”.

Conditions for acceptability: Some companies have also seen better privacy as a competitive advantage that can be leveraged and sold at a price.

Other observations and conclusions for a solution:

GENERAL COMMENTS/REACTIONS: This dilemma speaks to the issue of information and power asymmetry previously discussed (PA5) as well as the transparency issue of data used to train algorithms (CL12). The framing emphasizes the challenges as experienced in a developed economy context. However, they also apply in a developing country context where compute capacity and developer capabilities exist, with a focus on both large language models and small language models. With respect to the tradeoff between equity and competition, there is an opportunity for government regulators to hold industry to a minimum standard of disclosure on data used to make financial decisions as well as data used to train AI models.

MORE RESOURCES https://plato.stanford.edu/entries/ethics-ai

The issue of opacity is probably one of the biggest challenges for a bank in ensuring that its systems are fair and that models do not rely on proxy indicators representing intrinsic characteristics of a minority group. Bias testing should be implemented for key services that can determine important consequences for customers, such as loans, insurance, etc.

LC-CE28:  Privacy and Surveillance; The right to be let alone

EXPLORATION

Ethical dilemma: Privacy and Surveillance; the right to be let alone; Taken from: https://plato.stanford.edu/entries/ethics-ai/

Content: Regulatory technology: implications of data surveillance and financial regulation (picking up from @Aiko and giving the discussion a financial edge). The result is that “In this vast ocean of data, there is a frighteningly complete picture of us” (Smolan 2016: 1:01). The data trail we leave behind is how our “free” services are paid for, but we are not told about that data collection and the value of this new raw material, and we are manipulated into leaving ever more such data. ‘Data is the new oil’ and ‘all data is credit data’ are common new sayings that fit what is actually going on.

Technologies or types of data usage involved: Robotic devices have not yet played a major role in this area, except for security patrolling, but this will change once they are more common outside of industry environments. Together with the “Internet of things”, the so-called “smart” systems (phone, TV, oven, lamp, virtual assistant, home,…), “smart city” (Sennett 2018), and “smart governance”, they are set to become part of the data-gathering machinery that offers more detailed data, of different types, in real time, with ever more information.

Application and driving factors: For the “big 5” companies (Amazon, Google/Alphabet, Microsoft, Apple, Facebook), the main data-collection part of their business appears to be based on deception, exploiting human weaknesses, furthering procrastination, generating addiction, and manipulation (Harris 2016 [OIR]). The primary focus of social media, gaming, and most of the Internet in this “surveillance economy” is to gain, maintain, and direct attention, and thus data supply. I make a similar point in my template regarding robo-advisors: humans have psychological tendencies towards human appearance that can be manipulated.

Ethical issues at play: It appears that present-day citizens have lost the degree of autonomy needed to escape while fully continuing with their life and work. We have lost ownership of our data, if “ownership” is the right relation here. Arguably, we have lost control of our data.

I think the issue of big-tech-style surveillance is not easily translated to the financial sector; banks still hold extensive financial information about their customers and increasingly more digital crumbs from them, but not at the same scale as the Big 5.

Regulations for anti-money laundering and countering the financing of terrorism (AML/CFT) have led to increased data processing for financial policing. The huge volume of data requires an automated monitoring system, which leads to the direct or indirect monitoring of all financial activities of bank customers.
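
A minimal sketch of what such automated monitoring can look like, assuming a hypothetical “structuring” rule (several just-below-threshold cash deposits within a short window); note that the rule can only work by scanning every customer’s activity:

  # Simplified AML screen: flag customers with 3+ cash deposits just under
  # a reporting threshold within 7 days (hypothetical rule and threshold).
  from collections import defaultdict
  from datetime import date, timedelta

  THRESHOLD = 10_000  # hypothetical cash reporting threshold

  def flag_structuring(transactions, window_days=7, min_hits=3):
      by_customer = defaultdict(list)
      for t in transactions:
          near_limit = 0.9 * THRESHOLD <= t["amount"] < THRESHOLD
          if t["type"] == "cash_deposit" and near_limit:
              by_customer[t["customer"]].append(t["date"])
      flagged = set()
      for customer, dates in by_customer.items():
          dates.sort()
          for i in range(len(dates) - min_hits + 1):
              if dates[i + min_hits - 1] - dates[i] <= timedelta(days=window_days):
                  flagged.add(customer)
      return flagged

  txns = [{"customer": "C1", "type": "cash_deposit",
           "amount": 9_500, "date": date(2024, 5, d)} for d in (1, 3, 5)]
  print(flag_structuring(txns))  # {'C1'}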

The biggest dilemma here is that of balancing law-enforcement activities such as AML and fraud detection against the privacy of individuals; this balancing needs to be transparent, have a lawful basis, and follow the principle of proportionality. Banks in effect fund a suspicion machine to satisfy regulatory requirements. The ethical values adopted are defined by the objectivity and moral neutrality of the regulatory systems, in contrast to corporate governance adoptions, which are usually defined by cultural morals and business ethics (stratified surveillance).

Group of people at risk or who might gain: These systems will often reveal facts about us that we ourselves wish to suppress or are not aware of: they know more about us than we know ourselves. Even just observing online behaviour allows insights into our mental states (Burr and Cristianini 2019) and enables manipulation.

Regional variations: Financial institutions and clients in regions with heavily enforced AML laws and regulations.

Wider impact of this dilemma: One of the major practical difficulties is actually enforcing regulation, both at the level of the state and at the level of the individual who has a claim. They must identify the responsible legal entity, prove the action, perhaps prove intent, find a court that declares itself competent, and eventually get the court to actually enforce its decision. Well-established legal protection of rights such as consumer rights, product liability, and other civil liability or protection of intellectual property rights is often missing in digital products, or hard to enforce. Banks have to optimise time and resources to achieve budgetary and operational efficiency; the provision of non-core financial services can increase their cost of operations. Regulatory and coercive pressure to comply can have an impact in the long term.

Cultural aspects important for this dilemma: Companies with a “digital” background are used to testing their products on consumers without fear of liability, while heavily defending their intellectual property rights. This “Internet Libertarianism” is sometimes taken to assume that technical solutions will take care of societal problems by themselves (Morozov 2013).

Regional variations: The concepts expressed are drawn from an article on financial policing in Canada. In Nigeria, however, organisations with corporate objectives that fall under the AML act are required to obtain a compliance certificate from the Economic and Financial Crimes Commission (EFCC). Firms operating in specific industries under AML scrutiny are required to have an AML policy (e.g. oil and gas, financial services, and digital asset finance) and to submit monthly reports to the relevant industry regulators (gambling, lottery).

SOLUTION-ORIENTATION

Possible controls and comments for a solution: Privacy-preserving techniques that can largely conceal the identity of persons or groups are now a standard staple in data science; they include (relative) anonymisation, access control (plus encryption), and other models where computation is carried out with fully or partially encrypted input data (Stahl and Wright 2018); in the case of “differential privacy”, this is done by adding calibrated noise to encrypt the output of queries (Dwork et al. 2006; Abowd 2017). While requiring more effort and cost, such techniques can avoid many of the privacy issues.
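
A minimal sketch of the “calibrated noise” idea behind differential privacy, assuming a simple count query with sensitivity 1, in the spirit of the Laplace mechanism of Dwork et al. (2006):

  # Laplace mechanism: answer a count query with noise scaled to
  # sensitivity / epsilon, so no single individual's presence is revealed.
  import numpy as np

  rng = np.random.default_rng(0)

  def dp_count(records, predicate, epsilon=0.5, sensitivity=1.0):
      true_count = sum(1 for r in records if predicate(r))
      noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
      return true_count + noise

  accounts = [{"balance": b} for b in (120, 3500, 80, 940, 15000)]
  # "How many accounts hold more than 1000?" answered privately.
  print(dp_count(accounts, lambda r: r["balance"] > 1000))

Smaller epsilon means stronger privacy but noisier answers; the trade-off mirrors the effort-and-cost point made above.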

Conditions for acceptability: Some companies have also seen better privacy as a competitive advantage that can be leveraged and sold at a price. There needs to be a clear process in place for balancing, or making trade-offs between, facilitating law enforcement (AML/fraud detection) and breaching privacy. Tax incentives may be offered to financial institutions to reduce the costs incidental to building a suspicion machine.

Other observations and conclusions for a solution:

GENERAL COMMENTS/REACTIONS: The processes of banking operations and transactions are evolving as more innovative and dynamic applications of digital technology are applied to the sector. It would be demanding for regulation, whether from banking or regulatory authorities, to keep pace with this evolution without help from within the industry. It is nevertheless pertinent that evaluation of the system’s monitoring activities does not go so far as to deter the use of new technology or result in its disestablishment.

MORE RESOURCES
https://plato.stanford.edu/entries/ethics-ai
Amicelle, A. (2022). Big data surveillance across fields: Algorithmic governance for policing & regulation. Big Data & Society, 9(2), 20539517221112431.

LC-CE29:  Role of metaverse goods and virtual reality tools and services

EXPLORATION

What is the ethical dilemma? Should banks incorporate the use of metaverse goods or virtual reality advertisement tactics to perform or mediate the exchange of client services?

What is the content? If a client’s virtual land (Sandbox, Decentraland, Meta, etc.) cannot outweigh actual, tangible investments, how will the firm maintain customer satisfaction for those who cannot travel to a real, physical bank location?

If this is the case, then banks will have to own virtual land to maintain equal client opportunity.

What are the technologies or types of data usage involved?

  • Blockchain and NFTs (primarily ERC-721)
  • Chatbot Services/Automated Banking
  • Virtual Reality Gaming (Oculus Rift)

What is the application? What drives this use in this case? As in-person events have become more inconvenient since the beginning of the Covid-19 pandemic, VR meetings could become more prevalent. This could also mean more meetings with clients (it eliminates travel restrictions). Decentralized web applications eliminate any middleman (e.g. Zoom) from owning the data.

Facial recognition is just one example of a data privacy concern, and it might drive avatar representations forward in the market.

Virtual billboards, magazines, social media, and gaming advertisements will play a role in location analytics. These are also projected to be interactive rather than static.

Virtual chatbots could potentially replace avatars. They could work 24/7 online and provide consistent service.

What ethical issues are at play here? Internet avatars or collectibles could become more important than physical goods. Banks might give loans to those with non-real assets over those who need physical goods (real cars to drive kids to school).

What group of people are at risk? What group of people might gain? At risk are those who do not have pre-existing wealth online. Low-income clients will gain no benefit from goods that are not material.

Also, those with pre-existing online data such as card spending.

How will these people afford a virtual character instead of just showing up to an in-person bank? Transportation issues.

What is the wider impact of this dilemma? This would cause banks to use blockchain more extensively. What does this mean for fiat currency?

What are the cultural aspects important for this dilemma? Not all countries have the same level of technological experience; therefore, some groups may feel uncomfortable without talking to a real person at a firm. These banking advisors should not try to encompass human-like traits such as skin or eye color, but should differentiate themselves as their own group of workers from the outset.

In addition, much facial recognition software was initially not trained on racial minorities. This may have some relevance because of potential biases.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution? Virtual goods should not be accepted as any form of “currency” but should instead be viewed as their own independent investment class, in the same way one views real estate or automobiles as separate stores of value.

Banks should launch their own virtual worlds.

In what conditions could this be acceptable? Basic resources are evaluated first and foremost. Emphasis on food, housing, and transportation should be most essential to a client, for example.

As Tony Boobier notes of “human-centered design”: “Human-centered design should prioritize essential needs and ensure inclusivity.”

What are other observations and conclusions for a solution?  

GENERAL COMMENTS/REACTIONS: In the African context, the incorporation of metaverse goods and virtual reality advertisement tactics by banks presents unique challenges and considerations:

Access and Infrastructure: Many regions in Africa still lack adequate internet infrastructure and access to technology, which could limit the adoption and effectiveness of virtual banking solutions. Ensuring equitable access to these technologies is crucial to prevent further marginalization of underserved communities.

Digital Literacy: There may be disparities in digital literacy levels across different demographic groups within African countries. Banks must invest in educational initiatives to ensure that all customers, regardless of their background, can navigate and utilize virtual banking platforms effectively.

Cultural Sensitivity: African cultures often prioritize face-to-face interactions and personal relationships in business dealings. Virtual banking solutions must respect and accommodate these cultural preferences, perhaps by integrating elements of human interaction into virtual environments or offering hybrid models that combine digital and physical banking experiences.

Financial Inclusion: Virtual banking has the potential to enhance financial inclusion by reaching underserved populations in remote or rural areas. However, banks must ensure that these populations have access to the necessary technology and support to benefit from virtual banking services.

Data Privacy and Security: Data privacy and security concerns are paramount, particularly in regions with less stringent regulatory frameworks. Banks must prioritize the protection of customer data and comply with relevant data protection regulations to build trust and confidence in virtual banking solutions.

Socioeconomic Impact: Virtual banking solutions should not exacerbate existing socioeconomic inequalities. Banks should consider the needs and realities of low-income and marginalized communities when designing and implementing virtual banking services, ensuring that these solutions are accessible and inclusive for all.

Collaboration and Partnerships: Collaboration with local communities, governments, and technology partners is essential for the successful implementation of virtual banking initiatives in Africa. By working together, stakeholders can address challenges and leverage opportunities to maximize the benefits of virtual banking for all stakeholders.

Conclusion: While virtual banking holds promise for enhancing financial services and expanding access to underserved populations in Africa, careful consideration of the unique cultural, socioeconomic, and technological context is essential to ensure that these solutions are inclusive, equitable, and beneficial for all. Collaboration, innovation, and a commitment to addressing the needs of diverse communities will be critical in navigating the complexities of virtual banking in the African context.

LC-CL1:  Data bias and exacerbations in inequity and discrimination

EXPLORATION

What is the ethical dilemma? Biases in the test data provided to, and the training of, AI algorithms (machine learning) could lead to decision biases against minorities, women, and other protected classes. Bias, in this context, refers to the under-representation of certain protected groups in data sets, but also to correlations between certain groups and their assessed creditworthiness. These correlations are highly context-specific and have no inherent causality, often being the result of external factors (e.g., the lower average income of women).

What is the content? Machine learning (ML) is data-driven. Data is, by construction, a reductive description of the real world. Since the data used for decision making represents a collection of past decisions, it will inherit the prejudices of prior decision makers. Therefore, data can be, and often is, affected by past societal biases and prejudices. Knowing data can be biased, we know there is a possibility that decisions made by ML are based on an unfair dataset and will lead to unethical decisions. As a result of ML bias, the following problems can occur: selective labels, sample bias, and measurement error.
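
A minimal sketch of this inheritance, using synthetic data under stated assumptions: the protected attribute never enters the model directly, but a correlated proxy feature carries the past bias into the model’s predictions:

  # Bias inheritance demo: labels come from a prejudiced past process;
  # a model trained WITHOUT the protected attribute still reproduces the gap,
  # because a proxy feature (income) is correlated with group membership.
  import numpy as np
  from sklearn.linear_model import LogisticRegression

  rng = np.random.default_rng(0)
  n = 10_000
  group = rng.integers(0, 2, n)                    # protected attribute (0/1)
  income = rng.normal(50 + 10 * group, 10, n)      # proxy correlated with group
  past_approval = (income + rng.normal(0, 5, n) > 55).astype(int)  # biased labels

  model = LogisticRegression().fit(income.reshape(-1, 1), past_approval)
  pred = model.predict(income.reshape(-1, 1))

  # Demographic parity gap: difference in predicted approval rates by group.
  print(pred[group == 1].mean() - pred[group == 0].mean())

On this synthetic data the printed approval-rate gap between the two groups is large even though the model never saw the group attribute.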

What are the technologies or types of data usage involved? Big data, artificial intelligence (AI), and machine learning (ML).

Specific data includes:

  • Income and credit data from users’ bank accounts
  • Credit scores
  • Alternative data (e.g., consumer profile trends, behavioral scores, telecommunications information, transactional data, and open banking).

What is the application? What drives this use in this case? The huge amount of data points, in combination with increasing computational power, has made computer-aided decisions and machine learning an integral part of traditional banking (e.g., Deutsche Bank) and new data-driven banks (e.g., N26). ML itself is not inherently unethical: training it with and using prejudiced data is. Often, prejudiced or biased data is what is used to train ML and informs computer-aided decisions.

What ethical issues are at play here? The major ethical issue is discrimination against certain groups based on past data (e.g., by giving individuals a lower credit score). This prevents the social advancement of individuals in such groups and thus leads to social imbalance at a national level. Financial ML systems that make decisions from prejudiced data contribute to the wealth inequality of minority groups and continue the discrimination prevalent worldwide. Credit scoring companies are legally prohibited from discriminating against individuals of minority groups: women, people of color, LGBTQIA people, Jewish people, low-income people, etc. However, financial institutions that now use ML systems are either unaware of or undeterred by these systems’ record of making decisions based on factors that are illegal. As ML systems have less data about minority groups, the decisions made for these groups tend to be inaccurate and unfair. As financial institutions prefer to have more data rather than less, minority clients are disadvantaged by the ML systems used to evaluate their financial history.

Regional variations: This bias is applicable to all unvetted data, as every environment has its minorities and all data contain correlations intrinsic to the sample population.

What group of people are at risk? What group of people may gain? At risk:

  • Individuals of minority groups i.e. women, people of color, members of LGBTQIA, Jewish people, low-income people, etc. are most at risk as the data pertaining to their credit history is less than the average white man’s. These groups are already at an economic and social disadvantage, and ML systems making uneducated evaluations will expand this economic gap.
  • Individuals of majority groups (men, white people) may gain, as their economic history is vast, allowing ML systems to make informed decisions regarding their financial care. Long-term informed ML usage may entrench white advantage, as this group is benefited while minority groups are disadvantaged.

Regional variations: Minorities, the underrepresented, and those covered by unethical correlations.

What is the wider impact of this dilemma?

  • Long-term uninformed ML usage will lead to a general distrust in the financial ecosystem by minority and majority parties alike. When certain groups are negatively affected by ML systems, it leads to uncertainty surrounding the merit of the financial industries that utilize these systems. In turn, citizens will be cautious of receiving financial help, possibly choosing to receive none, also negatively impacting their finances.
  • Financial industries that choose to utilize ML systems that possess biases will be generally distrusted by the public. Although some firms may be unaware of their systems’ biases, usage still affects their clients. Public knowledge of which financial firms utilize biased ML systems may give those firms a negative reputation in the public eye. Whether a firm is aware or unaware of its ML systems’ biases, neither is a trait consumers would want their financial provider to possess: the firm is either complicit or uneducated about the systems it chooses to use.
  • Making uneducated financial decisions for groups already at a disadvantage leads to an increase in social inequality. As these groups receive continual incorrect information, or incorrect decisions based on their information, their likelihood of progressing financially and socially is diminished. Groups at an advantage, receiving correct informed evaluations, will continue improving their financial state; easily accruing loans, building generational wealth, ability to obtain property, etc. while minority groups will have limited access to the same resources. It will become much harder for these disadvantaged groups to increase their wealth and social standing among historically benefited majority groups.

What are the cultural aspects important for this dilemma? Cultural aspects play an indirect role, since cultural biases will be present in the data sets used by financial institutions. As an example, the average earnings for women are lower than their male counterparts in Western societies. This could potentially lead to a lower credit score since the income level is one important data point in this decision.

As most ML systems work with Western data, their goal for fairness is also Westernized. However, in countries that employ AI but have been negatively impacted by colonialism, these standards of fairness and equality are not always applicable and may actually be harmful. Because certain groups have a digitally rich history, underrepresented groups are put at a disadvantage by possible data distortions. By implementing ML systems that work with model data but are unaware of local structures, the majority disadvantaged groups in India (i.e., caste-based groups) are again put at a disadvantage. Equality research is predominantly constructed within a Western lens and excludes non-Western data. This gives ML systems little to no understanding of the difference between Western and non-Western data, producing uninformed decisions that negatively impact disadvantaged groups and losing the sense of equity that widespread technology requires.

In Africa, there is a large gender gap, especially in the digital world. As fewer African women are involved in fintech or have access to a bank account at all, AI and ML systems skew further towards men and fail to meet the needs of women. The inclusion of AI in the financial world is likely to amplify this gender gap as the financial needs of women go unmet while those of men are prioritized.

Regional variations: Regional bias may arise within Nigeria as economic implications of location differ, thus generating ethnic, gender, education, and income bias.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

  • ML systems must be continuously checked via in-depth tests in order to detect and flag biases. By enacting manual judgments on flagged data, the likelihood of continued biases is lessened and hopefully eradicated. There must be a close relationship between AI and people in order to make proper assessments of human data. These ML systems are only benefited by receiving correct, in-depth, and unbiased data in order to make beneficial judgments in place of negative ones.
  • People must be given the option of opting-out of ML evaluations and instead opting for human judgments regarding their data in order to receive more personalized assistance. Similarly, ML systems could make judgments that are then reviewed by a human financial consultant in order to be more efficient yet still effective.

A possible control is the vetting and curation of training datasets, enlightening algorithm developers about the plausible dangers of unvetted datasets, and the monitoring and testing of algorithms’ results for bias (a sketch of such a monitoring check follows).
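
One hedged sketch of how the monitoring-and-testing control above might be operationalized, with a hypothetical tolerance value: compute the approval-rate gap across groups for each batch of decisions and flag the model for human review when the gap exceeds the chosen bound:

  # Hypothetical bias gate for a deployed scoring model: flag for human
  # review when group approval rates diverge beyond a chosen tolerance.
  def bias_gate(decisions, groups, tolerance=0.1):
      totals = {}
      for d, g in zip(decisions, groups):
          ok, n = totals.get(g, (0, 0))
          totals[g] = (ok + d, n + 1)
      rates = {g: ok / n for g, (ok, n) in totals.items()}
      gap = max(rates.values()) - min(rates.values())
      return {"approval_rates": rates, "gap": gap,
              "needs_review": gap > tolerance}

  print(bias_gate(decisions=[1, 1, 0, 1, 0, 0, 0, 1],
                  groups=["a", "a", "a", "a", "b", "b", "b", "b"]))

The tolerance and the choice of metric (here, demographic parity) are policy decisions, not technical givens, and would need to be set with regulators and affected communities.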

In what conditions could this be acceptable? Humans and AI must work together closely if the AI is used to make any kind of judgment. This new technology requires a continuous system of oversight in order to provide proper, helpful care to citizens worldwide. Implementing human oversight within these systems locally may benefit countries, like India, that are put at a disadvantage due to the Western lens AI centers. By regulating the financial industry’s use of AI in assisting their clients, governments can minimize the negative impacts of biased AI. More data relevant to underrepresented groups must be compiled in order to correct these ML systems. Firms that wish to utilize AI and ML systems should be required to conduct unbiased research pertaining to the groups they represent and present that data to the AI systems they use. This again, minimizes the negative impact of biased results and discrimination against underrepresented groups.

Regional variations: Where AI is used to train and test AI. Meaning datasets, algorithms, and outputs of the system are tested using AI to determine possible biases within the system (data sample, algorithm, decisions, etc.) before application in practice.

What are other observations and conclusions for a solution?

GENERAL COMMENTS/REACTIONS: The rush to utilize AI may make commercialization the driving factor; developers and investors should think about sustainability, as bias and distrust could breed cannibalization and sub-optimization of commercial benefits.

MORE RESOURCES

  • Sambasivan, Nithya et al. “Re-imagining Algorithmic Fairness in India and Beyond.” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (2021): n. pag.
  • Ahmed, Shamira. “Research ICT Africa.” A gender perspective on the use of artificial intelligence in Africa’s fintech industry: Case studies from South Africa, Kenya, Nigeria, and Ghana, June 2021, researchictafrica.net/publication/a-gender-perspective-on-the-use-of-artificial-intelligence-in-africas-fintech-industry-case-studies-from-south-africa-kenya-nigeria-and-ghana/.

LC-CL2:  Data bias towards historically marginalized groups in bank lending

EXPLORATION

What is the ethical dilemma?

Many people have experienced historical disadvantages, which often impact their financial status in the present. What responsibility do banks and financial institutions hold to historically marginalized peoples? How do we ensure people’s data is used ethically and beneficially?

What is the content?

Microloans and practices utilized by black banks demonstrate how lending can lift individuals and families out of poverty by providing loans to those typically rejected by traditional banks and financial institutions. In the financial world, many historically marginalized people face discrimination today due to biases in data fed to machine learning (ML) programs. These biases affect credit scores and decisions to lend to certain customers, disproportionately impacting historically marginalized people. Such biases are perpetuated when data is collected without considering historic disadvantages and marginalization.

The use of micro-financing data would be more suitable for applications assessing the loan eligibility of minorities, enabling poverty alleviation as opposed to rejections resulting from biased traditional banking data.

What are the technologies or types of data usage involved?

  • Data bias
  • Data collection
  • Customer trust
  • Machine learning (ML)
  • Artificial intelligence (AI)

What is the application? What drives this use in this case?

Black banks and social banks can lead to positive outcomes for historically marginalized customers; however, they cannot fully substitute or replace equitable actions by all banks, financial institutions, and lenders. A history of explicit and implicit discrimination has led to a lack of trust among marginalized customers towards mainstream financial institutions. This mistrust is perpetuated today because biased data causes inequitable and potentially discriminatory actions. Technologies use large datasets to develop decision-making algorithms, which are excellent for time and cost efficiencies.

What ethical issues are at play here?

Data sets from traditional banks result in biased data due to structural racism and sexism. Removing gender and ethnicity as variables will still result in biased data, because these explicit attributes can be reconstructed from combinations of implicit variables (e.g., country club memberships, shopping habits, commercials viewed, music preferences). The ML systems that financial companies depend on to judge financial profiles use these implicit variables, creating skewed evaluations that negatively affect minorities. These skewed results lead to biased evaluations of clients' credit scores, unfairly determining which groups of people are approved for loans and at what amount. Historical patterns found in datasets may result in bias risk, discriminatory consumer outcomes, and concerns regarding data management and usage (OECD, 2021).
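
One way to make the proxy-variable problem concrete is to screen each remaining feature for how well it alone predicts the protected attribute after the explicit variable has been dropped. The sketch below is a minimal version of such a screen, assuming numeric features and a binary protected attribute; the names are hypothetical, and a real screen would also test combinations of features and nonlinear relationships.

    # Sketch: score each feature by how well it alone predicts a protected
    # attribute; strong single-feature predictors are likely proxies.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def proxy_scores(X: pd.DataFrame, protected: pd.Series) -> pd.Series:
        """Cross-validated AUC of each feature against the protected attribute."""
        scores = {}
        for col in X.columns:
            clf = LogisticRegression(max_iter=1000)
            # AUC well above 0.5 means this feature leaks information
            # about the protected attribute.
            scores[col] = cross_val_score(clf, X[[col]], protected,
                                          scoring="roc_auc", cv=3).mean()
        return pd.Series(scores).sort_values(ascending=False)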

What group of people are at risk? What group of people might gain?

  • At risk: Historically disadvantaged people are most at risk, as they do not benefit from traditional banking and are further discriminated against by Fintech systems’ lack of relevant/correct data. This makes it harder for them to improve their social status and grow financially.
  • May gain: Individuals from majority groups (men, white people) may gain, as their economic history is vast, allowing ML systems to make informed decisions regarding their financial care. Long-term ML usage may lead to white superiority as this group benefits while minority groups are disadvantaged.

What is the wider impact of this dilemma?

Historically disadvantaged people will find it difficult to receive the same standard of treatment from financial institutions as white people do, furthering the wealth gap and making it harder for them to improve their economic and social standing. Minorities may begin to refuse needed financial consultation out of fear of misrepresentation. This could simultaneously raise the social standing of white people, as their economic situation remains unchanged, perpetuating subliminal white superiority messaging.

Financial exclusion may occur where a bank’s decision-making process is biased, reducing confidence in the financial sector if such reports are frequent or scaled. Confidence in a technology achieving performance expectancy is crucial for its acceptability. If the opposite occurs, it makes commercializing such technology difficult and may result in investment losses.

What are the cultural aspects important for this dilemma?

In Africa, there is a large gender gap, especially in the digital world. With fewer African women involved in Fintech or having access to bank accounts, AI and ML systems further their demographic towards men, failing to meet the needs of women. The inclusion of AI in the financial world is likely to amplify this gender gap. Banks need to specifically address the financial needs of women to uplift the country as a whole. Ignoring a large portion of Africans leads to financial institutions losing valuable customers, and African women losing the chance to economically flourish.

India faces a similar issue. Although women's non-performing assets (NPAs) are 40% lower than men's, the credit gender gap is still $20 billion USD. Biased algorithms used by Fintech companies in India result in biased data for low-income groups and women, leading to incorrect credit scores, unfair loans, or a lack of financial opportunities.

Regional variations:

In Nigeria, banking is divided into segments. Microfinance banks exist separately from commercial or traditional banks, allowing focus on specific segment needs. Loan decisions are mainly made from the applicant’s banking and business history by a credit compliance analyst, thus the use of Generative AI is not prevalent. However, traditional AI tools applied by analysts help reduce human bias.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

  • Use data from minority banks or micro-loan organizations focused on helping disadvantaged people.
  • Develop separate algorithms/systems for different groups to eliminate questions of discrimination between groups. By acknowledging and correcting the biases AI/ML systems perpetuate, we can create fairer evaluations.
  • Train AI on datasets tuned for higher equity, even at the cost of strict historical accuracy (Harvard Business Review [HBR]).

In what conditions could this be acceptable?

Extensive testing and reviewing of AI/ML systems is crucial to ensuring biases are not continued. Slowing the transition from human to AI/ML systems through various human checkpoints can minimize the actualization of biases. Training staff on identifying and preventing biases before transitioning to AI systems will positively impact currently disadvantaged minority groups.

These applications would be more accurate where bias is removed from datasets before a model is built, where algorithms are regularized to score highly on fairness measures, and where AI self-evaluation checks are used (HBR).
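
One concrete form of removing bias before a model is built is reweighing, in the spirit of Kamiran and Calders: each training example is weighted so that group membership and loan outcome look statistically independent to the learner. The sketch below is a minimal version under that assumption; the column names are hypothetical.

    # Sketch of reweighing: weight each row by expected/observed frequency
    # of its (group, label) cell so group and label appear independent.
    import pandas as pd

    def reweigh(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
        """Per-row weights making group and label independent in training."""
        n = len(df)
        p_group = df[group_col].map(df[group_col].value_counts(normalize=True))
        p_label = df[label_col].map(df[label_col].value_counts(normalize=True))
        p_joint = df.groupby([group_col, label_col])[label_col].transform("size") / n
        return (p_group * p_label) / p_joint

    # The returned weights can be passed as sample_weight to the fit()
    # method of most scikit-learn estimators.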

What are other observations and conclusions for a solution?

GENERAL COMMENTS/REACTIONS:

The rush to utilize AI may project commercialization as the driving factor, but developers and investors should think sustainably, as bias and distrust could breed cannibalization and sub-optimization of commercial benefits.

MORE RESOURCES

  • Weber, Mark, et al. “Black Loans Matter: Distributionally Robust Fairness for Fighting Subgroup Discrimination.” arXiv preprint arXiv:2012.01193, 2020.
  • Ahmed, Shamira. “Research ICT Africa.” A gender perspective on the use of artificial intelligence in Africa’s fintech industry: Case studies from South Africa, Kenya, Nigeria, and Ghana, June 2021, researchictafrica.net/publication/a-gender-perspective-on-the-use-of-artificial-intelligence-in-africas-fintech-industry-case-studies-from-south-africa-kenya-nigeria-and-ghana/.
  • Chowdhary, Swati, and Puneet Gupta. “Machine Learning, Artificial Intelligence can revolutionize women’s credit underwriting.” The Times of India, 16 Jan. 2023, timesofindia.indiatimes.com/blogs/voices/machine-learning-artificial-intelligence-can-revolutionize-womens-credit-underwriting/.
  • https://hbr.org/2020/11/ai-can-make-bank-loans-more-fair

LC-CL4:  Discrimination in banking and insurance practices

EXPLORATION

What is the ethical dilemma?

Despite legal regulations in place for almost 20 years, discriminatory practices by banks and insurance companies towards their customers persist. This involves considering criteria for granting loans based on the physical appearance or other personal information of customers, which raises ethical questions about decision bias, internal governance structures, and consent.

What is the content?

Banks and insurance companies use technology to consider criteria not legally sanctioned for banking actions such as loans. This involves implementing algorithms that take into account both legal and discriminatory (thus illegal) information about customers. For example, algorithms might consider a customer’s physical appearance recorded by security cameras, influencing loan decisions outside any legal context. This raises transparency issues.

What are the technologies or types of data usage involved?

Technologies include security cameras repurposed to record customer appearance and algorithmic tools whose biases may be encoded from the start. The debate shifts from the technologies themselves to their initial design and the values of their creators. As Kate Crawford, Co-director of the AI Now Institute at New York University, said, “Like all technologies, artificial intelligence will reflect the values of its creators.”

What is the application? What drives this use in this case?

A possible scenario is that an algorithm flags a customer’s physical appearance as unfavorable for obtaining a loan. This information is recorded in a database, influencing the decision outside any legal context.

What ethical issues are at play here?

From a sociological perspective, one ethical issue is that sexist practices, for example, can reverse progress made on gender equality: algorithms that discriminate against women do not reflect current organizational practice and are more regressive than it.

What group of people are at risk? What group of people may gain?

At-risk individuals are those who fall outside the encoding norms of the algorithms. Studies show that algorithms often reference male gender and white skin color as the norm. Anyone who does not fit this typology may face biased banking and financial decisions.

What is the wider impact of this dilemma?

The major impact is the continuation of discriminatory practices despite societal, moral, and legal advancements. These practices do not evolve in tandem with legal developments, perpetuating discrimination.

What are the cultural aspects important for this dilemma?

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

If these practices were in accordance with legal procedures, they would raise only moral questions, which depend on how each state legislates its own morality. It is therefore essential to reflect on how states legislate their morality in light of these practices.

In what conditions could this be acceptable?

State regulation, regardless of structure and operation, must align with legal procedures to be acceptable in the legal sense. Moral questions remain depending on state legislation.

What are other observations and conclusions for a solution?

It is also worth studying how customers might influence banks/insurance companies’ decisions by presenting information about themselves. This reverse thinking approach asks how this information is processed.

GENERAL COMMENTS/REACTIONS:

Regional variations:

In Australia, an independent body commissioned by the government investigates the banking sector for any malicious or fraudulent operations. This keeps the banking sector independent of government influence in daily operations but allows for investigation if there is suspicion of malpractice.

In Africa, the ethical dilemma of discriminatory practices by banks and insurance companies is significant due to socio-economic and cultural factors. Here’s how the dilemma plays out:

  • Socio-Economic Disparities: Africa has diverse socio-economic conditions, with many facing poverty and limited access to financial services. Discriminatory practices can exacerbate these disparities, further marginalizing vulnerable communities.
  • Cultural Diversity: Africa’s cultural diversity includes various ethnicities, languages, and traditions. Cultural norms and biases may inadvertently perpetuate discrimination within financial institutions.
  • Limited Regulatory Oversight: While some African countries have regulatory frameworks to address discrimination in financial services, enforcement mechanisms may be inadequate. This can embolden financial institutions to discriminate with impunity.
  • Digital Divide: Despite technological advancements, Africa still faces a digital divide. Discriminatory algorithms may disproportionately impact individuals with limited digital literacy or access to technology.
  • Empowerment through Financial Inclusion: Addressing discriminatory practices is crucial for promoting financial inclusion. Equitable access to financial services is essential for inclusive economic growth and social development.

Addressing the Dilemma:

Policymakers, regulatory bodies, civil society organizations, and financial institutions must collaborate to prevent discrimination and promote transparency in banking and insurance practices. This includes strengthening regulatory frameworks, enhancing consumer protection, promoting diversity, and leveraging technology responsibly.

MORE RESOURCES

  • “The Ethics of Artificial Intelligence”
  • “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy” by Cathy O’Neil

LC-CL5:  Mobile financial services

EXPLORATION

What is the ethical dilemma?

The advent of mobile financial services (MFS) has offered immense opportunities to both digital lenders and borrowers. Using a variety of modern technology tools, digital lenders can rapidly assess the creditworthiness of borrowers, who are happy to obtain loans with minimal obstacles. Empirical studies show that those who default on the loans are mostly from the low-income strata in society, driving them further into poverty through blacklisting. Ethical issues have arisen concerning the technological means used by digital lenders to access consumer credit history and the methods they use to enforce loan repayments.

What is the content?

FinTech operators have the advantage of tracking consumers’ credit history using technology tools such as artificial intelligence and machine learning. The target group is consumers in general, but those who default are mostly from low-income groups. The terms are not always clear to consumers, and their loans often increase tremendously due to high-interest rates, making repayment impossible. Their names are forwarded to credit bureaus, locking them out of the credit ecosystem and driving them further into the poverty trap.

What are the technologies or types of data usage involved?

Mobile money operators and FinTech companies use artificial intelligence and other technologies to track consumer shopping habits, creating a credit limit for each consumer and offering attractive loans. The same technology is used to deny consumers future credit if they have outstanding debts.

What is the application? What drives this use in this case?

Mobile technology applications or platforms are available to consumers, driven by the widespread use of mobile phones, which are easy to use and affordable. Poverty among many users also drives them to apply for soft credit, which is typically approved and posted to their phones instantly. The time span between application and approval is usually less than 24 hours (Johnen, 2021).

What ethical issues are at play here?

Issues include using data on consumer purchasing habits without their knowledge, sharing personal information with third parties, and the lack of full disclosure of contract terms.

What group of people are at risk? What group of people may gain?

People in the low-income strata are the most affected, while individuals and organizations with the ability to pay stand to benefit.

Regional variations:

In Brazil, these issues are compounded by the rapid adoption of MFS and the growing number of fintechs, often subject to little regulation. Additionally, digitally illiterate consumers, such as older people (not necessarily those without financial means), are also affected.

In Brazil, the effects of MFS are still debated. Some research indicates that MFS has bridged the gap in financial inclusion. Brazil is a pioneer in bank-based branchless banking, with recent regulations by the Banco Central do Brasil (BCB) facilitating the large-scale deployment of mobile money in Latin America. In many Latin American cultures, there is a strong sense of community and solidarity. Debt stigma may impact how consumers interact with digital lenders and their willingness to report unethical practices.

In Kenya, this problem has led to financial exclusion among the poor, increased aversion to technology adoption, and negative growth in the FinTech sector.

Borrowing has become an addiction for Kenyan youth due to its ease of access, and the credit facility has been associated with unproductive expenditure. Disclosure of loan defaults to relatives to enforce repayment often leads to family conflicts.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

A national legal and institutional framework should anticipate the use of artificial intelligence, machine learning, distributed ledger technology, and others in the FinTech sector for assessing consumer creditworthiness. Consumer protection rights with respect to digital lending should be enhanced, and cultural dimensions should be considered.

In what conditions could this be acceptable?

This could be acceptable if full information on lending is available to all key parties involved. Regulations should clearly define the boundaries of data sharing. Lenders should have legally and properly stipulated ways of protecting their interests without detriment to the borrowers.

What are other observations and conclusions for a solution?

There is a need for a proactive legal and institutional framework to anticipate the changing direction of technology as applied to digital lending to protect consumers.

Regional variations:

Kenya and other developing countries have experienced significant growth in the mobile financial services sector but lack adequate legal frameworks to govern its use.

For Brazil, in addition to better regulation, the development of RegTechs can be beneficial. Brazil has seen substantial investment in its fintech market, making it a reference ecosystem for financial service technologies.

GENERAL COMMENTS/REACTIONS:

This is a fascinating dilemma and highly relevant for Latin America, especially Brazil. The Brazilian fintech market has seen substantial investment, with $1.3 billion gathered in 2020, marking a 73% increase from 2019. This highlights a systemic failure where technology can provide valuable services in the right hands but be harmful in the wrong ones. A systemic solution combining innovation, regulation, and education is ideal.

LC-CL6:  Interoperability of personal data use between companies

EXPLORATION

What is the ethical dilemma?

Business groups in various sectors collect personal data on their customers. These same groups may provide financial services, leading to ethical issues regarding the interoperability of personal data.

What is the content?

Larger business groups sometimes consist of dozens of companies offering financial services, marketplaces, supermarkets, etc. Data interoperability differs between companies in the same business group, and it is sometimes arranged for different “broad” purposes, so customers are asked to consent in a “broad” manner covering each of the companies that make up the business group.

For example, Company A (a non-financial service) and Company B (a lending service) are part of the same larger business group. If the customer must provide their personal information to Company A, they may have to sign a broad clause consenting to the entire business group using their personal information, including Company B.

What are the technologies or types of data usage involved?

What is the application? What drives this use in this case?

With the rise of larger business conglomerates, subsidiary companies separately collect data through their operations; some of it is anonymized, while other personal data is used for specific purposes. These business groups can then share and interoperate the data they collect, often in ethically inadequate ways.

What ethical issues are at play here?

Is the interoperability of personal data in business groups that also provide financial services ethical? Is the consent of people who are customers of the business group or a specific company in the business group sufficient for the personal data collected by this company to be used by another company in the same business group that offers financial services? Using personal data outside the scope of the owner’s wishes is unethical, even if they legally consent. Financially and legally illiterate clients assume they are consenting to their data being used in one specific way but may actually be consenting to the entire business conglomerate having access to their personal data. They may not understand the full scope of what they are consenting to when they agree to use these certain services.

What group of people are at risk? What group of people may gain?

Financially and legally illiterate clients are at risk as they may not fully understand what they are consenting to. They may be at risk for data leaks or identity theft as their data is spread across multiple platforms with different encryptions. Customers without a strong understanding of data interoperability may feel uncomfortable with their data being shared but consent anyway out of trust in their financial consultant.

Companies that utilize data interoperability gain, as they are able to make their services accessible across platforms, widening their demographic and increasing profit.

What is the wider impact of this dilemma?

Companies will foster distrust within the financial ecosystem as clients will not know where their data is being sent or who has access. If customers lose trust in their banking systems, they may choose to opt out of the services provided, putting them at risk for financial loss as they may not understand how to manage their own finances. Businesses will also be affected by this, losing clientele due to a lack of trust.

It may be considered a human rights violation if clients are unaware of what exactly they are consenting to. The rapid and direct sharing of personal data may conflict with users’ wishes regarding how their data is being used.

What are the cultural aspects important for this dilemma?

An extremely complex problem repeated across several jurisdictions, especially Chile, is the so-called “Unique Digital Identity”. In Chile, each citizen is assigned a Single National Role and a Single Tax Role, numbers that identify the person (personal data). The Role Number is used when making purchases of any kind, completing public formalities with the state, applying for credit and loans, creating accounts at Fintechs or other companies, etc. These numbers allow the activities and habits of Chilean citizens to be traced and are heavily used by companies when processing other personal data. For example, the Single National Role is requested during in-person purchases at the supermarket, with benefits such as product discounts offered in return. In Portugal, by contrast, a single all-purpose national identification number is constitutionally prohibited.

In 2021, Nigeria launched eNaira, a digital version of the Naira (Nigeria’s current currency). eNaira was designed to modernize Nigeria’s banking system and offer accessible financial services to its unbanked populations. Although expected to be very popular, just under 700,000 Nigerians downloaded the app compared to Nigeria’s 206 million population. The interoperability of personal data regarding users’ banking services, mobile phones, and their government may be seen as reasons why the app lost traction. Users didn’t want the government to have access to such personal data or to be too involved and aware of their finances. In this case, data interoperability left users concerned for their privacy.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

Data autonomy and transparency are crucial to the ethical interoperability of data. Clients need to be aware of what they are consenting to, and companies must be transparent about their usage of personal data. Companies must practice accountability: accumulate data ethically, use it ethically, protect their clients’ wishes, and ensure the systems they utilize are fair.

In what conditions could this be acceptable?

Companies must explain how they use their clients’ personal data in a manner that is easily understood. Clients must be given the option to opt out of data sharing if they feel it would violate their human rights. Companies may request separate consent from customers for the personal data processed for financial purposes, excluding the data that the customer does not want processed. Companies must also practice responsible data sharing, prioritizing clients’ wishes about how their data is used.

A universal standard of ethical data usage must be set across platforms wishing to further their interoperability. By verifying that all parties are held to the same ethical standard, the sharing of data among these parties becomes safer. Similarly, ensuring the systems used to conduct interoperability services are ethically universalized minimizes risks. Systems will have the same set of ethical standards, leaving little room for error as data travels across platforms.
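
As an illustration of separate, per-purpose consent rather than one broad clause, a conglomerate could gate every cross-company data transfer on an explicit consent record. The sketch below shows the idea; all names are hypothetical, and a production system would add audit logging and expiry.

    # Sketch: a consent registry keyed by (customer, recipient company,
    # purpose). Sharing is refused unless a specific grant exists.
    from dataclasses import dataclass, field

    @dataclass
    class ConsentRegistry:
        grants: set = field(default_factory=set)

        def grant(self, customer_id: str, company: str, purpose: str) -> None:
            self.grants.add((customer_id, company, purpose))

        def revoke(self, customer_id: str, company: str, purpose: str) -> None:
            self.grants.discard((customer_id, company, purpose))

        def may_share(self, customer_id: str, company: str, purpose: str) -> bool:
            return (customer_id, company, purpose) in self.grants

    registry = ConsentRegistry()
    registry.grant("cust-42", "CompanyB", "credit_scoring")
    assert registry.may_share("cust-42", "CompanyB", "credit_scoring")
    assert not registry.may_share("cust-42", "CompanyB", "marketing")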

What are other observations and conclusions for a solution?

GENERAL COMMENTS/REACTIONS:

This dilemma constitutes two dilemmas in one. First is the issue of using interoperability (the ability of unique information systems to exchange data among themselves) in the context of separate legal entities that are part of a business group to sidestep legal constraints on data use as defined by a regulator. More specifically, is it fair for a company to harness the lagging nature of policy relative to innovation to maximize profits? Secondly, there is the issue of choice where a consumer is legally denied access to a product or service based on their willingness to accept conditions whose implications they lack the information to assess fully. More specifically, should a consumer agree to terms and conditions which they lack the capacity to analyze fully if it fulfills a short-term need but exposes them to long-term harm? To what extent does responsibility for addressing these two interlinked dilemmas rest with the government regulator, who is uniquely positioned to assess the long-term implications, and the industry actor, who can predict different scenarios that will arise from the consumer’s choice, each with unique profit implications?

LC-CL7:  Immutability of blockchain and “right to be forgotten”

EXPLORATION

What is the ethical dilemma?

Blockchains are by design immutable, which is in direct opposition to a user’s right to be forgotten.

What is the content?

One of the most important features and strengths of the Blockchain is its immutability. What is written and stored on the blockchain cannot be altered. How does this feature relate to the ‘right to be forgotten?’

The right to be forgotten is a legal concept pertaining to a user’s right to have their private information erased, or ‘forgotten,’ from an organization’s collection of data (including online). It received major press in 2014 following the Court of Justice of the European Union’s Google Spain ruling and was later codified in the European Union’s General Data Protection Regulation, a major data protection law.
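
The tension follows directly from how a blockchain is constructed: each block commits to the hash of its predecessor, so erasing or editing any record invalidates every later block. The toy sketch below illustrates this; it is a conceptual model, not real blockchain code.

    # Sketch: a toy hash-linked chain. Altering any stored record breaks
    # verification of all later blocks, which is why on-chain data cannot
    # simply be erased to honor a deletion request.
    import hashlib

    def block_hash(prev_hash: str, data: str) -> str:
        return hashlib.sha256((prev_hash + data).encode()).hexdigest()

    records = ["alice pays bob", "private photo reference", "bob pays carol"]
    chain, prev = [], "0" * 64  # genesis
    for data in records:
        h = block_hash(prev, data)
        chain.append({"prev": prev, "data": data, "hash": h})
        prev = h

    def verify(chain) -> bool:
        prev = "0" * 64
        for block in chain:
            if block["prev"] != prev or block_hash(prev, block["data"]) != block["hash"]:
                return False
            prev = block["hash"]
        return True

    assert verify(chain)
    chain[1]["data"] = ""     # attempt to "forget" the record
    assert not verify(chain)  # the chain no longer validates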

What are the technologies or types of data usage involved?

Blockchain and all related technologies (Bitcoin, Non-fungible Tokens [NFTs], and associated multimedia data).

What is the application? What drives this use in this case?

Blockchain is used for a wide variety of purposes, including financial transactions (Bitcoin), securely storing data, tracking where and how data has been used, and storing multimedia content (NFTs). Private images and videos are often stored in NFT form and are distributed on the blockchain. Though this ensures that content remains unaltered, it creates difficulty for any involved person to remove private images distributed without their consent.

What ethical issues are at play here?

Blockchain and NFTs, by nature, are immutable and unalterable. This conflicts with an individual’s right to privacy, as it is extremely difficult for them to remove private images distributed without their consent.

What group of people are at risk? What group of people may gain?

Potentially anyone is at risk.

What is the wider impact of this dilemma?

In the most extreme sense, someone might not be the owner of their own private life.

What are the cultural aspects important for this dilemma?

This issue is likely to be culturally agnostic; no one in the world is likely to be comfortable with images and videos of their private life (including intimate moments) being diffused, distributed, and “legally” owned by someone who acquired them on the blockchain.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

Educating the public about the features and characteristics of blockchain technology is necessary so that users understand the underlying risks. Yet it is unclear who should make efforts to educate people as the technology is decentralized.

In what conditions could this be acceptable?

As per my answer on the cultural aspects, I think there is no degree of acceptability in a context like this.

GENERAL COMMENTS/REACTIONS:

In Africa, the ethical dilemma surrounding the immutability of blockchains and the right to be forgotten presents unique challenges and considerations:

  • Cultural Context: Cultural attitudes towards privacy and personal data vary across different regions in Africa. While some communities may prioritize individual privacy and data protection, others may have different perspectives on the ownership and control of personal information. Understanding and respecting these cultural nuances is crucial when addressing issues related to privacy and blockchain technology.
  • Legal Framework: Many African countries have enacted data protection laws or are in the process of developing them. However, the legal landscape concerning the right to be forgotten and its application to blockchain technology may vary from country to country. Policymakers and regulators need to consider the implications of blockchain immutability on individuals’ privacy rights within the context of existing legal frameworks.
  • Access to Technology: Access to blockchain technology and related platforms may vary across different regions in Africa. While some urban areas may have better access to blockchain networks and resources, rural communities may have limited connectivity and awareness of blockchain-related issues. Bridging the digital divide and ensuring equitable access to information and resources are essential considerations in addressing ethical dilemmas related to blockchain technology.
  • Community Engagement: Engaging local communities and stakeholders in discussions about blockchain technology and its implications for privacy rights is essential. This includes raising awareness, facilitating dialogue, and soliciting feedback from diverse perspectives to ensure that any proposed solutions are culturally sensitive and responsive to the needs of local populations.
  • Capacity Building: Building technical capacity and expertise in blockchain technology within African countries is crucial for addressing ethical dilemmas effectively. This includes training professionals in data protection, cybersecurity, and blockchain development to navigate the complex ethical and legal challenges associated with blockchain-based systems.

Overall, addressing the ethical dilemma of blockchain immutability and the right to be forgotten in Africa requires a multidisciplinary approach that considers cultural, legal, technological, and socioeconomic factors. Collaborative efforts involving governments, civil society organizations, industry stakeholders, and local communities are essential for developing ethically sound solutions that respect individuals’ privacy rights while harnessing the potential benefits of blockchain technology for socioeconomic development.

MORE RESOURCES

Article: “Blockchain and the Right to be Forgotten: Squaring the Circle?”

LC-CL8:  Smart contracts

EXPLORATION

What is the ethical dilemma?

Smart Contracts have become increasingly common, yet remain largely ungoverned by existing law. How legal are Smart Contracts, and how do they fit with current legal trends?

What is the content?

Smart Contracts are digital contracts stored on a Blockchain. They consist of if-then conditions and instructions: they have predetermined conditions and automatically go into effect when those conditions are met. While convenient, not all contracts can be legally entered into by all people, and there are no universally agreed-upon laws for contracts, meaning a Smart Contract may not be legally binding on everyone it purports to bind.
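
Conceptually, the predetermined conditions are machine-checkable predicates: once a condition evaluates true, the transfer executes with no human gatekeeper and no check of legal capacity. The sketch below models that execution pattern off-chain; all names are hypothetical, and this is not contract code for any real platform.

    # Sketch of the if-then execution model behind smart contracts: the
    # payout runs automatically once the condition holds, with no check
    # of whether the parties could legally contract and no undo path.
    from typing import Callable

    class ToyEscrow:
        def __init__(self, condition: Callable[[], bool],
                     payout: Callable[[], None]) -> None:
            self.condition = condition
            self.payout = payout
            self.executed = False

        def tick(self) -> None:
            """Called on each new block; executes irreversibly when triggered."""
            if not self.executed and self.condition():
                self.payout()
                self.executed = True

    balance = {"buyer": 100, "seller": 0}
    delivered = {"flag": False}
    escrow = ToyEscrow(
        condition=lambda: delivered["flag"],
        payout=lambda: balance.update(buyer=balance["buyer"] - 100,
                                      seller=balance["seller"] + 100),
    )
    escrow.tick()            # nothing happens: condition not yet met
    delivered["flag"] = True
    escrow.tick()            # funds move automatically
    assert balance == {"buyer": 0, "seller": 100}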

What are the technologies or types of data usage involved?

Blockchain.

What is the application? What drives this use in this case?

Smart Contracts are used in a variety of scenarios, including tracking pharmaceuticals, vendor-consumer relationships, and international trade. For vendors and consumers, Smart Contracts can track shipments, payments, and more clearly define obligations. However, the legality of Smart Contracts for consumers is still uncertain.

What ethical issues are at play here?

Smart Contracts eliminate, for example, counterparty risk. But alongside this positive effect, there are concerns about the inevitability of their execution once the conditions are met. Smart Contracts are independent and lack centralized control, allowing anyone to use them without an intermediary. This raises questions about the legal capacity to enter into Smart Contracts for minors or people legally restricted from certain contracts due to mental health conditions. Due to Blockchain technology’s immutable nature, errors in Smart Contracts cannot be fixed, potentially resulting in unexpected and harmful outcomes. When unregulated, these errors can negatively impact users who may not fully understand what they are consenting to.

What group of people are at risk? What group of people may gain?

People who are legally or otherwise restricted from signing certain kinds of contracts are at risk. Anonymous users who understand Smart Contracts may gain precisely because the contracts are unregulated; such users value the absence of third parties, which allows them to sell or trade products immediately and without fear.

What is the wider impact of this dilemma?

There is a conflict between legal rules and Smart Contract rules, which can endanger people who are restricted from signing contracts for their own protection. Due to Smart Contracts’ immutable and permanent nature, the information is cemented in the Blockchain, raising further concerns.

What are the cultural aspects important for this dilemma?

The legal rules for contract eligibility are strongly related to the cultural aspects of each country.

Regional variations: In the US, this may include minors, people with disabilities, and users uneducated about Smart Contracts and Blockchain technology.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

  • Educate new professionals who merge skills and competencies from both legal and computer science areas.
  • Establish global regulations for international trade, especially involving smart contracts related to the ownership of art, documents, etc.

In what conditions could this be acceptable?

Regulation of Smart Contracts and cohesive education about the technology before users sign the contract.

GENERAL COMMENTS/REACTIONS:

The dilemma is important, but the probability of it arising may be low, particularly in the mainstream economy where the ability to enforce a smart contract through traditional legal processes is low. Smart contracts generated by a machine could be restricted to categories of contracts defined as unenforceable by policy regimes within a given jurisdiction. If the dilemma is more sharply defined within a specific context, such as smart contracts for mobile loans, it could help highlight the probability of its emergence and ways to address it.

MORE RESOURCES

International Business Machines Corporation. (n.d.). What are smart contracts on blockchain? IBM. Retrieved October 16, 2023, from IBM Smart Contracts

LC-CL9:  Credit and lending data exploitation

EXPLORATION

What is the ethical dilemma?

Do customers know when and how their data has been used? Some fintech companies keep using customer data for unclear periods and sell the information to third parties without the customers’ knowledge. How do customers trust that such actions are not harmful to them? If the data is sold to and taken advantage of by dangerous third parties, crimes such as telephone scams might become more customized and deceitful, leading to severe economic losses.

Is the concern with data use crime, or are there other, more subtle concerns with data onselling and big data to consider?

What is the content?

Some fintechs keep accessing and analyzing customer data for an undefined long period and even sell personal information to third parties. For example, Branch & Tala continue collecting data even after customers uninstall the app; M-Kopa continues collecting data even after a loan is repaid; Branch advertises the data as transferable assets when acquired.

What are the technologies or types of data usage involved?

Mobile App; Data for Alternative Credit Scoring (e.g., contacts, location via GPS, SMS message content, call log, etc.).

What is the application? What drives this use in this case?

Companies assert that continuing data collection and analysis even after the loan is paid or the app is uninstalled speeds up the loan process when customers return or reinstall the app. Selling data to third parties provides fintechs with profitable exit strategies by using personal data as transferable assets.

Are there other reasons? (not just loans but other fintech products/use cases that are relevant to mention)

What ethical issues are at play here?

Consent for privacy. Are customers aware of when and how long their data has been collected? Is it mainly for the customer’s benefit, not the company’s?

Data privacy breach: The concern for regional data is mostly on financial and identifiable data.

Invasion of privacy: Local fintechs also have access to user contacts through phone records. These contacts are notified during default and may have no direct or related interest in the defaulting party and transaction.

Is it privacy or broader consumer protection as well?

For example, data can be kept “private,” but when combined and triangulated across multiple sources, individuals can be easy to re-identify.
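
A concrete illustration of such triangulation is a linkage attack: two datasets that each look anonymous on their own can be joined on shared quasi-identifiers, such as postcode and birth year, re-attaching names to sensitive records. The data and column names below are entirely hypothetical.

    # Sketch of a linkage attack: neither table pairs a name with the
    # sensitive field, yet a join on quasi-identifiers re-identifies people.
    import pandas as pd

    loans = pd.DataFrame({          # "anonymized" lending records
        "postcode": ["10115", "20095"],
        "birth_year": [1985, 1990],
        "defaulted": [True, False],
    })
    voters = pd.DataFrame({         # public roll with names
        "name": ["A. Mwangi", "B. Okafor"],
        "postcode": ["10115", "20095"],
        "birth_year": [1985, 1990],
    })

    linked = voters.merge(loans, on=["postcode", "birth_year"])
    print(linked[["name", "defaulted"]])  # names now attached to loan outcomes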

What do you think of ideas like surveillance capitalism, where it’s about re-targeted advertising and converting people into consumers?

What group of people are at risk? What group of people may gain?

Customers are at risk of personal data exploitation, especially if the company suffers an information leak or uses the data illegally.

For example, I am aware these practices occur, but if I want a credit card or a bank loan, I don’t feel like I have much choice but to participate in these systems.

Users (corporate and individual) inclined toward low-cost services, along with fintechs and related startups applying emerging technological solutions, stand to gain.

What is the wider impact of this dilemma?

If the personal data used for credit is sold to and taken advantage of by some dangerous third parties, crimes such as telephone scams might become more accurate and customized, leading to severe economic losses in the future. Customers might be afraid of using apps because they will keep collecting and using their data “forever.”

What are the cultural aspects important for this dilemma?

Regional variations:

People pay differing degrees of attention to their data, which is why such fintech is especially popular in Kenya, where people obtain loans at the price of handing over personal data.

Awareness of data collection and its applications is still low in Nigeria, especially among the older generation and the less educated. However, the benefits of financial inclusion currently overshadow concerns about consenting to data access. Developments such as privacy breaches affecting users and their contacts may lead to distrust of technology. Third parties outside the user’s location access data without consent or knowledge. As awareness grows with increasing use and education, users may become wary of such technology in finance.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

Define clearly and limit companies’ usage of customer data, as well as the period over which they can collect and use it; it shouldn’t be forever.

In what way? I’m assuming you mean legal rules and enforcement? Is technical an option, whereby data is just not collected and/or stored in a company database?

How would companies deal with this in their business models?

In what conditions could this be acceptable?

Some people might perceive the companies as user-friendly if they think the resulting service quality/convenience outweighs data consent and privacy. For instance, the Google algorithm provides better search results for users by scanning their data, and some people might think Google is actively offering good services.

What are other observations and conclusions for a solution?

Regulation of fintech and adherence to data protection practices would be a good starting point. Corporate governance policies should also apply moral considerations to matters of public perception. Fintechs should invest in educating users/clients on how data is collected and applied, and on its other implications (sharing, storage, and uses), to ensure transparency and accountability. Regulatory authorities should also improve their monitoring capabilities.

GENERAL COMMENTS/REACTIONS:

There should be a balance between innovation and respect for user rights. Exploiting consent on the back of benefits does not seem sustainable.

LC-CL10:  Big data and lack of transparency in lending decisions

EXPLORATION

What is the ethical dilemma?

Some lending companies use a large set of data points per borrower to determine credit scores. It can therefore be hard to detect and prove which factors dominate the final lending outcome, and rejected borrowers will not know why they were turned down. Naturally, this lack of explanation creates space for unethical decisions that benefit or discriminate against specific borrowers. It also impedes supervision of the system.

What is the content?

Lending companies use a large set of data points per borrower to determine the credit score.

What are the technologies or types of data usage involved?

Machine Learning.

What is the application? What drives this use in this case?

According to Claessens et al. (2018), “the website of one Indian P2P platform claims that its credit assessment involves a review of more than 1,000 data points per borrower.”

What ethical issues are at play here?

As ML-based credit rating is based on a large set of risk drivers, it may be hard to detect and prove the dominance of one factor in the final credit rating outcome, which creates barriers to explaining the decision.

What group of people are at risk? What group of people may gain?

Borrowers who are rejected without knowing the reason.

What is the wider impact of this dilemma?

The lack of explanation creates space for unethical decisions that benefit or discriminate against certain borrowers, and it impedes supervision.

What are the cultural aspects important for this dilemma?

People in different cultures have varying degrees of tolerance for opacity in banking. In some regions, opaque decisions are the norm, and people thus have few problems with a lack of explainability.

Regional variations:

For Brazil: An insight not mentioned is the societal skepticism towards AI in Brazil. A significant portion of the Brazilian population, particularly younger people, express concerns about the negative impacts of AI and automation, fearing job displacement and increased inequality. This skepticism is rooted in broader apprehensions about technology and highlights the importance of regulatory frameworks that not only address technical and ethical concerns but also build public trust in AI technologies.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

Models should surface the top variables that contribute most to the credit score, using each algorithm’s “feature importance.” Lenders should actively monitor the most significant drivers of borrowers’ credit ratings and assess possible discrimination based on their business insight.
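
One model-agnostic way to obtain the feature importances mentioned above is permutation importance: shuffle one feature at a time and measure how much the model’s validation score drops. A minimal sketch, assuming a fitted scikit-learn style model and a pandas DataFrame of validation features (all names hypothetical):

    # Sketch: surface the top drivers of a credit model with permutation
    # importance so that decisions can be monitored for discrimination.
    from sklearn.inspection import permutation_importance

    def top_drivers(model, X_valid, y_valid, k=5):
        """Return the k features whose permutation most degrades the score."""
        result = permutation_importance(model, X_valid, y_valid,
                                        n_repeats=10, random_state=0)
        ranked = sorted(zip(X_valid.columns, result.importances_mean),
                        key=lambda pair: pair[1], reverse=True)
        return ranked[:k]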

In what conditions could this be acceptable?

In some cultures, people may commonly accept a lack of explainability and non-transparent decision-making.

What are other observations and conclusions for a solution?

For Brazil: A study assessing machine learning techniques for classifying individuals into defaulters and non-defaulters using a database from a major Brazilian bank demonstrates the region’s application of these technologies in addressing high-risk credit portfolios. This highlights the relevance of machine learning in analyzing credit risk within emerging markets, where data availability might differ from developed countries.

GENERAL COMMENTS/REACTIONS:

This dilemma highlights the issue of bias inherent in AI models that are trained using limited data. This discussion is particularly important within the context of developing countries and within segments of society that have traditionally been marginalized based on income. There is an opportunity for government regulators to work in collaboration with industry actors and academia to establish minimum thresholds for training AI models. For example, there could be a requirement for industry actors to specify the profile of the dataset used to train an AI model.

LC-CL11:  Ethics in using alternative/novel data in lending decisions

EXPLORATION

What is the ethical dilemma?

Lending companies use AI and alternative data points, such as the type of computer used and the time of day an application is submitted, to predict repayment probabilities and assign credit. Such alternative data are often more available and less costly to lenders than traditional credit scores. However, although a data point may not look like a discriminating factor such as gender or race, it may be highly correlated with one or more protected classes, resulting in “proxy discrimination.”

What is the content?

AI and big data can use alternative data points, including ethically questionable ones, to determine consumer credit risk, replacing traditional factors (e.g., financial information).

What are the technologies or types of data usage involved?

AI for credit and lending (digital footprint – the type of computer, type of device, time of the day applied for credit, email domain, whether the name is part of the email [Manju Puri et al.]).

What is the application? What drives this use in this case?

Alternative data sometimes are simpler and more immediately available at no cost to the lender, as opposed to pulling a credit score.

What ethical issues are at play here?

Decision bias. Although the new data point is not obviously seen as a discriminating factor like age and race, it may be inherently highly correlated with one or more protected classes, leading to proxy discrimination.

This is also an issue of security vs. transparency. Financial institutions will need to protect certain algorithms to avoid malicious actors from exploiting the system/institution. On the other hand, we need to be able to observe the system (algorithmic) to ensure that everyone is treated fairly. A good book for this is Frank Pasquale’s “The Black Box Society.” Another useful book is Virginia Eubanks’ “Automating Inequality.”

What group of people are at risk? What group of people may gain?

Borrowers, especially those from protected and underrepresented classes, are at risk of being discriminated against by the alternative factors.

In Africa, marginalized communities and individuals lacking traditional financial histories may be at risk of being further excluded from credit opportunities.

In Eastern Europe, the digital footprint discussed may not be that useful for predicting repayment probabilities. Many people who have enough money to pay off credit do not spend much on devices; many people from small cities use public computers in libraries; older people often were never taught to use computers; and many people fully capable of repaying work U.S. hours, for instance.

In Australia, migrants and First Nations people who do not have access to digital devices (the digital divide) may fall further behind if proxy variables are used. Australia has a large migrant population as well as a sizable First Nations population who may not have had access to such financial systems and so may be unfamiliar with them. If they were rejected on proxy factors, there would be very little understanding of the reasons, and they would be unable to work out how to improve their chances of accessing capital. This would add to the challenges these groups already face in society.

Potential Gainers:

Those with access to and understanding of alternative data sources may gain from this approach.

In Eastern Europe, some people could benefit from the use of alternative data in bank decisions, especially those with limited credit history or without traditional financial documentation. For example, young people and immigrants who may not have extensive financial records could benefit from alternative data points like digital footprints. Additionally, in Eastern Europe, and Ukraine especially, people do not use credit that much, so most would not have any credit history.

In Australia, the lack of banking/financial history may have left some migrants and First Nations peoples behind others when it comes to accessing capital. Interestingly, if proxies were used (assuming alternative data points exist), this might enable some (not all) to participate in accessing capital. It really comes down to which proxy variables are used because they may be inclusive (allowing more people to join) or exclusive (discriminating against groups, further increasing the divide).

What is the wider impact of this dilemma?

As a result, AI and ML may refuse to lend to applicants from particular protected groups, which causes wider wealth inequality. People may, therefore, intentionally change some behaviors when they know the determining factors.

Wider Impact: The dilemma exacerbates existing wealth inequalities and could perpetuate cycles of poverty within certain communities.

Wider Impact: There is a constant debate between the need to deliberately obscure certain algorithms (such as security protocols and fraud detection) to prevent malicious activity, but also the need to make certain algorithms observable and investigate their operation for any discriminatory behavior.

What are the cultural aspects important for this dilemma?

Regions have different discrimination concerns, which need to be addressed with disparate efforts. For example, race discrimination is scarce in Asian countries compared with Europe and America.

Cultural Aspects: Discrimination concerns in Africa may differ from those in other regions, necessitating tailored solutions sensitive to cultural nuances and socioeconomic disparities.

In Australia, with many migrant groups and First Nations peoples, each culture has different perspectives on wealth and financial resources. Furthermore, some groups already face the challenge of accessing digital devices and internet services (digital divide), which impacts their ability to access capital, education, and information (news). This means that proxy discrimination (if it were to occur) could either further entrench this digital divide due to lack of internet/financial history or reduce the divide by using alternative measures. The specific proxy factors used would determine the efficacy of the implementation, but it would need to work for all groups without unfairly benefitting or discriminating against anyone.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

Design the AI so that it can provide explanations of its credit decisions, and test substantially to see whether the decisions are biased in any way. Look into something called an “Audit Study”; Christian Sandvig is a good scholar for understanding this.
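
One simple pattern for per-decision explanations is reason codes from a linear scoring model: an applicant’s score decomposes into coefficient-times-feature contributions, and the most negative contributions become the stated reasons for rejection. The sketch below assumes standardized features and a logistic scoring model; all names and numbers are hypothetical.

    # Sketch: per-applicant reason codes from a linear credit model. The
    # most negative signed contributions explain a rejection.
    import numpy as np

    def reason_codes(coefs, feature_names, x_row, k=3):
        contributions = coefs * x_row        # signed contribution per feature
        order = np.argsort(contributions)    # most negative first
        return [(feature_names[i], float(contributions[i])) for i in order[:k]]

    coefs = np.array([0.8, -1.2, 0.3])
    names = ["income", "recent_defaults", "account_age"]
    applicant = np.array([0.2, 1.5, -0.4])   # standardized feature values
    print(reason_codes(coefs, names, applicant))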

In what conditions could this be acceptable?

The new algorithm using alternative data is free of proxy discrimination, although such results may be expensive to achieve and verify.

What are other observations and conclusions for a solution?

Other Observations: Collaborate with local stakeholders and regulatory bodies in Africa to develop context-specific solutions that address regional challenges and prioritize fairness in lending practices. Additionally, invest in financial education and inclusion initiatives to empower underserved communities.

GENERAL COMMENTS/REACTIONS:

This dilemma highlights the issue of bias that is inherent in AI models that are trained using limited data. This discussion is particularly important within the context of developing countries and within segments of society that have traditionally been marginalized based on income. There is an opportunity for government regulators to work in collaboration with industry actors and academia to establish minimum thresholds for training AI models. For example, there could be a requirement for industry actors to specify the profile of the dataset used to train an AI model.

LC-CL12:  Data bias in lending decisions

EXPLORATION

What is the ethical dilemma?

Companies use machine learning to grant credit, yet the underlying data fed into the algorithm may already incorporate biases. For example, information on minority classes may be largely missing from past data. Does the use of machine learning mitigate or worsen decision biases?

What is the content?

Lending companies can only use previously available data (e.g., applicants who received loans in the past) to predict future borrower behavior and assign credit.

What are the technologies or types of data usage involved?

Machine learning.

What is the application? What drives this use in this case?

Machine learning models are trained using available data for the company to study borrowers and predict their repayment possibilities, thus granting funding to high-potential applicants.

What ethical issues are at play here?

Available data may not necessarily be representative of all classes of borrowers that the creditor considers lending to, leading to potentially inaccurate decisions. For example, Fuster et al. (2018) propose a cross-category measure of disparity and find that, by this measure, ML models could worsen disparity relative to a logit model in a US mortgage sample. Additionally, AI “profiling” can embed existing human prejudices into automated algorithmic decision-making and into AI-led recommendations to human decision-makers following statistical analysis. This can happen either through the analysis of accurate statistical factors that lead to detrimental outcomes for profiled candidates, or because the data itself has been collated incorrectly through human prejudices (such as higher arrest rates in ethnic-minority communities resulting from police discrimination).

Regional variations:

Brazil’s financial sector has embraced ML to enhance the accuracy and efficiency of credit risk evaluations, moving beyond traditional static models to more dynamic, real-time analyses. This shift allows for a more nuanced understanding of borrower risk, incorporating a broader range of variables, including non-traditional data such as online behavior and transaction histories.

However, the Brazilian market also faces unique challenges, such as a higher degree of economic instability and a significant unbanked population, which can affect the performance and fairness of ML models.

What group of people are at risk? What group of people may gain?

Borrowers rejected due to a lack of existing data on their group are at risk. In other words, the situation for those who already face difficulties obtaining credit will only worsen in the future, creating a vicious cycle that harms minorities.

Similarly, individuals and small businesses lacking traditional credit histories or those from economically unstable backgrounds in Brazil are at risk of being unfairly assessed or excluded.

What is the wider impact of this dilemma?

The lack of sufficient relevant data for some classes restricts the conclusions ML analysis can draw and could lead to the effective redlining of applicants belonging to a particular group. AI may filter out potential applicants based on information that is ostensibly correct from a statistical perspective but that results in outcomes that are otherwise unjust or unfair.

What are the cultural aspects important for this dilemma?

Different cultures may face different gaps in data collection. For example, Western countries may lack complete data for people of color, while China may lack data for people with village hukou and low education levels.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

Monitor decisions regularly, check rejected applications to identify possible discrimination, and replenish data accordingly.

In what conditions could this be acceptable?

Once banks notice incompleteness in their datasets and bias in their decisions, they should initiate corrective actions, such as adjusting weights for specific variables to favor previously ignored groups and designing special lending programs.
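
One concrete reading of adjusting weights to favor previously ignored groups is to reweight training samples so each group carries equal total weight. The sketch below uses scikit-learn's `sample_weight` on synthetic data; the features, group definitions, and labels are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch: reweight underrepresented groups during training.
# Synthetic data; in practice the features, groups, and labels come
# from the lender's historical records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.choice([0, 1], size=n, p=[0.9, 0.1])   # group 1 underrepresented
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Give each group equal total weight regardless of its sample count.
counts = np.bincount(group)
weights = 0.5 * n / counts[group]

model = LogisticRegression()
model.fit(X, y, sample_weight=weights)
print(model.coef_)
```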

What are other observations and conclusions for a solution?

Brazil is actively working on creating a regulatory framework for AI, emphasizing risk classification and the need for impact assessments for high-risk AI systems. This approach aims to balance innovation with ethical considerations and consumer protection.

GENERAL COMMENTS/REACTIONS:

The main consideration for Brazil is the need for ML models to be highly adaptable to its volatile economic environment and rapidly changing conditions, such as fluctuations in interest rates, inflation, and employment levels. At the same time, models must cater to the growing demand for credit across various segments by providing more accurate risk assessments and enabling more favorable credit conditions.

Transparency is crucial for companies. Traditional financial companies typically use a decision tree model to evaluate individual credit applications, aiming to provide clear explanations for their decisions. If an organization decides to apply an unsupervised ML model, it should be transparent about the type of data used to train the model and frequently review the decisions made. While this suggestion seems fair and acceptable, it may require regulation or control mechanisms from national central banks regarding the integration of AI into decision-making.

This dilemma highlights the issue of bias inherent in AI models trained using limited data. This discussion is particularly important within the context of developing countries and segments of society that have traditionally been marginalized based on income. There is an opportunity for government regulators to work in collaboration with industry actors and academia to establish minimum thresholds for training AI models. For example, there could be a requirement for industry actors to specify the profile of the dataset used to train an AI model.

On the issue of models/algorithms, there is an additional dilemma of auditing their design to ensure that bias is minimized. For example, an algorithm designed by a young female coder from an urban setting and one designed by a middle-aged male developer from a rural setting might approach the same problem in different ways. Similar to the requirement for data, a possible solution would be to require industry actors to specify the profile of the developers who build an AI model so that the resulting bias can be taken into account.

LC-CL13:  Cryptocurrencies, blockchain and Scam Coins

EXPLORATION

What is the ethical dilemma?

The relationship between celebrities and influencers and so-called “scam coins”: the consequences for the post-pandemic economy and for the future of blockchains and cryptocurrencies.

What is the content?

Some famous people have in the recent past advertised and more or less explicitly sponsored specific cryptocurrencies whose nature was often unclear. In an unregulated market, it becomes difficult to establish the transparency of their behavior. There is a risk that such people could abuse their position of influence by exploiting the psychological condition of millions of people after the economic disasters of the pandemic, who may be too attracted by the possibility of earning a lot of money quickly.

Influencers promote cryptocurrencies and blockchain products but may have no expertise or due diligence data on the products and may inadvertently expose investors from their audience to undue risk.

What are the technologies or types of data usage involved?

The involved technologies are Cryptocurrencies, Blockchains, and Social Networks.

  • Blockchain cryptography technologies: advanced database mechanisms that can be used to create decentralized Apps (dApp) and mine cryptocurrencies.

What is the application? What drives this use in this case?

Social media networks have influencers with large followings, and they are paid to endorse or market certain products.

What ethical issues are at play here?

An influencer can promote the technology but will not face repercussions if people lose their money due to the technology being fake or part of a scam.

Does the influencer have legal liability if he or she promotes something unethical? Or are they “free” of any charges since they only state their “opinion” and do not “force” people to engage in the transaction?

The provision of misleading advice through negligence, lying, or manipulation for personal gain. Social media influencers and tech companies involved in crowdfunding – initial coin offerings (ICOs) or security token offerings (STOs) – might promote a product that fails or was intended to defraud investors.

What group of people are at risk? What group of people may gain?

The people most at risk are those with low financial literacy, less familiarity with recent technologies, low risk aversion, and who have seen their financial resources jeopardized by the pandemic.

  • The online following of social media influencers is at risk.
  • Social media influencers and fraudulent or inexperienced blockchain companies stand to gain.

What is the wider impact of this dilemma?

This situation could amplify social justice issues and erode trust in the Crypto world and its potential benefits.

What are the cultural aspects important for this dilemma?

This problem is related to limited financial and computer literacy and low risk aversion.

Regional variations:

Nigerian tech-savvy youth and middle-income earners are looking for new ways to grow and preserve their wealth. The digital space has many offerings to satisfy this appetite, but growing mistrust stemming from fraudulent experiences threatens to dampen such interest.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

  • Influencers, like other professional business advisory services, should ensure they understand the products they market. Application of multiple sources of verification and critical thinking would go a long way.
  • Collaboration with regulatory authorities and industry stakeholders to improve the services and share accurate information would reduce the risk to users and improve the integrity of influencers.
  • The only solution seems to be introducing some form of regulation.

In what conditions could this be acceptable?

  • Influencers’ application of multiple sources of verification, regulatory applications, and collaboration with industry stakeholders for accurate information would help authenticate the industry.

What are other observations and conclusions for a solution?

It could be interesting to research how the pandemic has changed financial and investing behavior, and how these changes relate to cryptocurrency speculation by retail investors.

MORE RESOURCES

Celebrity endorsement by Katie Price promotes scam

LC-CL14:  Black box AI in lending decisions

EXPLORATION

What is the ethical dilemma?

Black Box artificial intelligence systems are becoming increasingly common. Though their use isn’t inherently unethical, it is questionable whether Black Box AI systems should be used to determine lending decisions that must meet fair-lending standards.

Application of black box AI in fair lending.

What is the content?

Black Box AI describes artificial intelligence systems that do not explain their inner workings, inputs, and outputs. In short, Black Box AI systems produce results without an explanation to the user. As AI and machine learning (ML) become more popular among banking and lending companies, Black Box AI is increasingly used. It now takes part in decisions about credit scores and whom to lend to, but does not explain why it makes those decisions.

Banks and lending companies use AI and ML to make decisions on credit scoring and lending. Black box algorithms are inexplicable; hence, when unfair lending decisions result from data bias, they may go unnoticed.

What are the technologies or types of data usage involved?

Artificial intelligence, machine learning, and Black Box algorithms/systems.

What is the application? What drives this use in this case?

Banks and lenders apply machine learning, deep learning, and black box algorithms to credit scoring and lending decisions. Black box bias may arise from the prejudices of designers and faulty training data, leading to unfair and sometimes incorrect decisions.

What ethical issues are at play here?

The main problem with a black box model is its inability to identify possible biases in the machine learning algorithms. Biases may come through prejudices of designers and faulty training data, leading to unfair and incorrect evaluations. Bias may also occur when model developers do not implement the proper business context to come up with legitimate outputs. Due to the nature of black box technology, users are unable to analyze the system’s code or its reasoning for delivering a particular output.

Case: A 2018 study conducted at the University of California, Berkeley found that both traditional face-to-face lending decisions and those made by machine-learning systems charged Latinx and African American borrowers interest rates that were six to nine basis points higher. A black box model will exhibit such bias, and minorities will be at the receiving end.

Unfairness in lending appraisal outcomes due to the inscrutability of black box AI algorithms. Financial services discrimination.

What group of people are at risk? What group of people may gain?

Training data is part of the problem. Huge amounts of training data are required, but if this data comes from existing biased processes and datasets, it will teach the model to be biased too. Testing may also be problematic: common practice is to hold out a portion of the same biased training data for testing, which would fail to reveal any bias issues.

Regional variations:

  • In the US, groups at risk include historically marginalized communities; people with disabilities, women, Latinx, African Americans, Indigenous people, impoverished people, etc.
  • In Nigeria, low-income earners and other societal minorities can be affected by negative disparate credit ranking.

What is the wider impact of this dilemma?

With the continued use of biased data, minoritized communities are at a further disadvantage and face continued discrimination. Continued reliance on inaccurate information may widen the wealth gap between historically disadvantaged and historically privileged people.

Distrust of technology-applied solutions in finance.

What are the cultural aspects important for this dilemma?

The cultural aspect relevant to Nigeria is not really one of discrimination, as most regional fintech solutions are targeted at the unbanked and underserved. Default in payment is the main ethical challenge, especially as generative AI solutions are not readily deployed in the lending process. Instead, banking history and income analysis take precedence, carried out by business and finance analysts or traditional applications (Excel, database solutions, etc.).

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

  • Federated learning can be used to build advanced machine learning models while keeping data in the hands of data owners (a minimal sketch of the federated idea follows this list).
  • To obtain less biased data, banks can offer specific lending programs to traditionally disadvantaged groups (e.g., women, African Americans) to control for these variables. They can then combine the datasets to eliminate the impact of biased parameters.
  • Investment in explainable solutions may be more appropriate for such sensitive AI applications, as the use of specific programs for different minority groups may introduce other forms of bias, whether preferential or discriminatory. Data can also be anonymized before training for more reliable outputs.
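
To make the first suggestion more concrete, the sketch below shows the core of federated averaging (FedAvg) on synthetic data: each bank trains locally and shares only model weights with a coordinator, never raw customer records. The simple logistic model, client data, and round counts are illustrative assumptions, not a production design.

```python
# Minimal federated-averaging (FedAvg) sketch with NumPy.
# Each "bank" trains locally on its own data; only model weights are
# shared with the coordinator, never the raw customer records.
import numpy as np

rng = np.random.default_rng(42)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """A few epochs of plain gradient descent on one client's data."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid predictions
        w = w - lr * X.T @ (p - y) / len(y)     # logistic-loss gradient step
    return w

# Three banks, each holding its own private dataset.
clients = []
for _ in range(3):
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] - X[:, 1] > 0).astype(float)
    clients.append((X, y))

w_global = np.zeros(4)
for _ in range(10):
    # Each client refines the global model locally, then the server
    # averages the resulting weights (weighted equally here).
    local_weights = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)

print("federated model weights:", w_global)
```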

In what conditions could this be acceptable?

With a thorough system of checks and balances placed on black box systems, humans may be able to catch biases before they negatively impact minorities. Similarly, combining black box technology with human reasoning when interacting with clients may minimize discrimination. Companies whose data affects people may want to avoid using Black Box AI systems, as their outputs are unexplained and may cause harm to historically disadvantaged people.

Explainable AI (XAI) solutions can also be applied in developing algorithms, though this may curtail the full revenue potential due to ownership issues (intellectual property, imitation, etc.).

What are other observations and conclusions for a solution?

Trust erodes as decision biases are repeated. This may hurt the development and acceptance of technology with positively disruptive potential. However, a benefit analysis should also be conducted: the financial inclusion enabled by fintech solutions should not readily be sacrificed on account of a faulty dataset that can be corrected with better training data.

GENERAL COMMENTS/REACTIONS:

MORE RESOURCES

  • Public Affairs. “Mortgage algorithms perpetuate racial bias in lending, study finds.” Berkeley News, 13 Nov. 2018, link.
  • Rai, A. (2020). Explainable AI: From black box to glass box. Journal of the Academy of Marketing Science, 48, 137-141.
  • Chou, A. (2019). What’s in the black box: Balancing financial inclusion and privacy in digital consumer lending. Duke LJ, 69, 1183.

LC-CL15:  Third-party data sharing

EXPLORATION

What is the ethical dilemma?

Outsourcing the processing of personal data services, such as determining credit scores, can lead to ethical concerns about privacy and the right to data protection.

What is the content?

Outsourcing the processing of personal data, such as credit score determination, can result in negative externalities and deviate from the purposes agreed upon with the client through their consent.

What are the technologies or types of data usage involved?

The involved technologies include Cryptocurrencies, Blockchains, and Social Networks.

What is the application? What drives this use in this case?

  1. The processing of personal data may be outsourced to external third parties who determine credit scores or provide other services (e.g., advertising financial products) on behalf of financial institutions. This requires:
    • Contractual relationships between the controller and the processor.
    • Consent from the personal data subject.
    • Compliance with current legal regulations.

What ethical issues are at play here?

These dilemmas are closely linked to privacy and the right to data protection. Key questions include:

  1. When should the processing of personal data, such as credit scores, be ethically outsourced?
  2. What ethical obligations should be considered when outsourcing this activity?
  3. What ethical responsibilities do the controller and the processor have?

What group of people are at risk? What group of people may gain?

  • SMEs and people in general may be at risk.
  • Financial institutions and third-party processors may gain.

What is the wider impact of this dilemma?

  • The establishment of credit scores for ethically incorrect reasons can lead to unfair credit and loan terms for clients.
  • Other impacts may be discussed further.

What are the cultural aspects important for this dilemma?

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

  • Use models that provide the top variables contributing most to the credit score, leveraging the “feature importance” of each algorithm (a minimal reason-code sketch follows this list).
  • Lenders should actively monitor the most significant drivers of credit ratings and assess them against their business insights.
  • Obtain the client’s consent before outsourcing the processing of their data to a third party.
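
As a minimal sketch of the first control, the snippet below derives per-decision “reason codes” from a linear scoring model by ranking each variable's contribution relative to an average applicant. The coefficients, feature names, and applicant record are illustrative assumptions; tree-based models would use their own feature-importance mechanisms instead.

```python
# Minimal sketch of per-decision "reason codes" from a linear scoring
# model. Coefficients, names, and the applicant record are illustrative.
import numpy as np

feature_names = ["income", "debt_ratio", "years_employed", "late_payments"]
coef = np.array([0.8, -1.2, 0.5, -1.5])            # fitted model coefficients
population_mean = np.array([52.0, 0.35, 6.0, 1.2])  # average applicant

applicant = np.array([38.0, 0.55, 2.0, 3.0])

# Contribution of each variable relative to an average applicant.
contrib = coef * (applicant - population_mean)
for name, c in sorted(zip(feature_names, contrib), key=lambda t: t[1]):
    print(f"{name:15s} {c:+.2f}")
# The most negative contributions become the adverse-action reasons
# communicated to the applicant.
```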

In what conditions could this be acceptable?

  • Ensuring transparency and obtaining explicit consent from clients before outsourcing their data processing.

What are other observations and conclusions for a solution?

GENERAL COMMENTS/REACTIONS:

In the context of Africa, the outsourcing of personal data processing services, particularly in determining credit scores, presents several ethical considerations and implications:

  1. Access to Financial Services: In many African countries, access to traditional banking and credit services is limited, especially for individuals in rural areas or those with lower incomes. Fintech companies and alternative lenders play a significant role in providing financial services to underserved populations. However, outsourcing credit scoring processes raises concerns about fairness and accuracy, potentially exacerbating existing inequalities.
  2. Data Privacy and Protection: Africa has seen a growing focus on data protection and privacy, with several countries enacting legislation to safeguard personal information. The outsourcing of personal data processing must comply with these regulations to ensure individuals’ privacy rights are respected and their data is handled securely.
  3. Ethical Considerations: Outsourcing credit scoring processes raises questions about transparency, accountability, and ethical oversight. Financial institutions and third-party processors must adhere to ethical standards and best practices to ensure that credit scoring algorithms are fair, unbiased, and free from discriminatory practices. Mechanisms should be in place to address any potential harms resulting from outsourced data processing activities.
  4. Cultural Sensitivity: Cultural factors and societal norms may influence attitudes towards privacy and data sharing in African communities. It is essential to consider cultural perspectives and sensitivities when designing and implementing credit scoring systems and data processing practices. This includes engaging with local communities, respecting cultural values, and ensuring individuals understand how their data is being used and shared.
  5. Capacity Building: Building local capacity in data protection, privacy law, and ethical data management practices is critical for ensuring that outsourcing arrangements in the financial sector are conducted ethically and responsibly. This includes providing training and resources to regulatory authorities, financial institutions, and third-party processors to strengthen their ability to oversee and regulate outsourced data processing activities effectively.

Addressing the ethical implications of outsourcing personal data processing services in Africa requires collaboration among government agencies, regulatory bodies, financial institutions, civil society organizations, and the private sector. By promoting ethical data practices, ensuring regulatory compliance, and fostering transparency and accountability, Africa can harness the benefits of financial technology while safeguarding individuals’ privacy rights and promoting financial inclusion.

LC-IN1:  The use of AI in insurance claim decisions

EXPLORATION

What is the ethical dilemma?

Automating the identification of roles in insurance claims involves using AI tools that may carry biases. This poses ethical issues regarding the fairness and accuracy of these tools.

What is the content?

The dilemma involves using AI to “judge” customers or the data of their claims. This may lead to depriving operators of responsibility and create power asymmetries, as insurers benefit from the tool and can govern it.

What are the technologies or types of data usage involved?

  • NLP (Natural Language Processing): Used to automate the initial phases of document management linked to insurance claims.
  • Machine Learning: Used to stratify claims management requests and related attributions.

What is the application? What drives this use in this case?

  • Claims Indexation Tools: Collect and organize unstructured data, followed by the automatic extraction of information and insertion into the company’s systems. This allows operators to save time and focus on higher-value activities.

What ethical issues are at play here?

  • Conflicts of Interest: Insurers may prioritize their own financial interests over fair solutions for clients.
  • Lack of Explainability: Decisions made by AI may not be transparent, leading to a lack of understanding and recourse for customers.
  • Potential for Bias: AI tools may have inherent biases from their training data, leading to unfair evaluations.
  • Privacy Concerns: Handling health data can lead to profiling customers for upselling insurance packages, potentially revealing health problems that make them uninsurable.
  • Negligence: Prioritizing speed over diligence may lead to ethical issues.

What group of people are at risk? What group of people may gain?

  • At Risk: Customers, especially those who may be unfairly evaluated or have their health data profiled.
  • Gaining: Insurance companies and third-party processors who benefit from efficiency and cost reductions.

What is the wider impact of this dilemma?

  • Customer Perception: While automation can improve claims management speed and accuracy, it risks delegating decision-making to unsupervised tools.
  • Low-Tech Literacy: In regions with low socio-economic backgrounds, customers may not understand or trust automated processes.

What are the cultural aspects important for this dilemma?

  • Customer Interaction: Historically, customers discuss their claims with insurers, providing evidence to support their claims. This interaction may be lost with automation.

Regional variations:

  • Nigeria: Insurtech adoption is new and focused on efficient results, but the impact on claims management is yet to be fully realized.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

  • Algorithm Certification: Implement a certification system with external quality controls to evaluate the accuracy and fairness of AI systems.
  • Periodic Assessments: Establish a working group, including legal experts, insurers, customers, healthcare providers, and AI experts, to regularly assess AI tool reliability.
  • Explainability: Insurers should be able to explain AI decision-making criteria to customers.
  • Stepwise Adoption: Gradually implement automation to identify challenges and solutions.

In what conditions could this be acceptable?

  • Transparency and Explainability: Ensuring AI decisions are transparent and explainable to maintain customer trust.
  • Regulatory Oversight: Implementing regulations to oversee the ethical use of AI in insurance.

What are other observations and conclusions for a solution?

  • Incremental Adoption: Phasing in technology allows for adjustment and correction of any arising issues.
  • Customer Education: Educate customers on how AI processes their claims to improve trust and understanding.

GENERAL COMMENTS/REACTIONS:

  • Transforming Insurance with AI: AI is revolutionizing the insurance sector by offering personalized products, enabling self-service, and enhancing communication. However, it is crucial to balance these advancements with ethical considerations to ensure fair and transparent processes.

MORE RESOURCES

LC-IN4:  Biased data for new product development

EXPLORATION

What is the ethical dilemma?

Biased data for new product development.

What is the content?

Using only data from current products and customers to develop new products may exclude non-customers, resulting in products that are disproportionately unfair to some groups.

What are the technologies or types of data usage involved?

Any machine learning technique.

What is the application? What drives this use in this case?

Data gathered from existing customers is used to improve risk score calculations and to create new products targeted at specific customer segments.

What ethical issues are at play here?

Using only internal data can lead to data-driven discrimination, as the models will be tailored to current customers. This may result in unfair treatment for underrepresented groups.

What group of people are at risk? What group of people may gain?

  • At Risk: People whose data is not represented in the available dataset, leading to unfair treatment.
  • Potential Gainers: Those whose data aligns with the distribution of available data may benefit from tailored products.

Regional variations:

  • Group at Risk (Africa): Marginalized communities and individuals without access to financial services or traditional customer profiles may be excluded from product development processes and face unfair treatment.
  • Potential Gainers: Those whose data aligns with available data distribution may benefit from tailored product offerings.

What is the wider impact of this dilemma?

  • Existing Inequalities: Failure to include diverse population segments in product development may perpetuate existing inequalities.
  • Emerging Social Groups: As society changes, new groups may need services that are not adequately developed due to biased data, leading to lower-quality services and loss of customers.

What are the cultural aspects important for this dilemma?

  • Diverse Populations: In diverse countries, it is crucial to analyze different population segments to ensure inclusivity in product development.
  • Preferred Marketing Channels: Cultural differences in marketing channels can affect who is considered in product development.

Regional variations:

  • Cultural Aspects (Africa): The diversity of African populations necessitates careful consideration of cultural nuances and socioeconomic disparities to ensure inclusivity and fairness.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

  • Regulations: Require companies to prove they have data from a representative sample of the population before marketing a product (a minimal representativeness check is sketched after this list).
  • Transparency: Provide transparent information to customers about the data sources and processes used in product development.
  • Data Pooling: Insurances in the same region could pool their data to create a comprehensive dataset, fostering innovation and inclusivity.
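
A minimal sketch of the representativeness check mentioned in the first bullet: compare the demographic composition of the training data against a population benchmark with a chi-square test. The census shares and sample counts are illustrative assumptions.

```python
# Minimal representativeness check before product development.
# Reference (census) shares and sample counts are illustrative.
import numpy as np
from scipy.stats import chisquare

census_share = np.array([0.55, 0.30, 0.15])      # population group shares
sample_counts = np.array([720, 230, 50])         # groups in the training data

expected = census_share * sample_counts.sum()
stat, p = chisquare(sample_counts, f_exp=expected)
print(f"chi-square={stat:.1f}, p={p:.4f}")
if p < 0.05:
    print("Dataset composition differs significantly from the population.")
```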

In what conditions could this be acceptable?

  • Regulations: Adhere to technical standards to ensure inclusive data usage for commoditized products.
  • Involvement: Involve affected groups in product development and maintain transparent data usage policies.
  • Collaboration: Companies should work together to raise overall social fairness and inclusivity in the industry.

What are other observations and conclusions for a solution?

Insurance is a data-heavy business, providing essential services to all individuals. Ensuring data quality, quantity, and distribution that matches the target population is crucial for meeting societal and planetary challenges.

GENERAL COMMENTS/REACTIONS:

Addressing the ethical implications of biased data usage in product development requires regulatory oversight, transparency, and collaboration. In Africa, cultural diversity and socioeconomic disparities must be considered to ensure inclusivity and fairness in insurance services.

MORE RESOURCES

  • Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104, 671-732.

LC-IN5:  Insurance companies’ use of technology enabled incentive programs

EXPLORATION

What is the ethical dilemma?

The ethical dilemma involves balancing the use of incentives to promote low-risk behaviors with maintaining individual autonomy.

What is the content?

Many insurance companies have bonus programs that reward customers for exhibiting certain desirable behaviors (e.g., going to the gym for health insurance, avoiding speeding tickets for vehicle insurance) that are assumed to reduce insurance risk.

What are the technologies or types of data usage involved?

  • Gamification
  • Wearables
  • IoT (Internet of Things)
  • Behavioral Nudging

What is the application? What drives use in this case?

Incentive programs nudge people towards behaviors that reduce the risk of an insured event occurring, optimizing the return per customer for the insurance company.

What ethical issues are at play here?

  1. Loss of Autonomy: Incentive programs nudge lifestyle choices, raising questions about whether insurance firms should define and promote desirable behaviors.
  2. Conflict of Interest: Programs may encourage the purchase of certain services or products from the insurer’s partners, giving them a competitive advantage.
  3. Loss of Privacy: Customers must submit proof of behaviors, potentially compromising their privacy.

What group of people are at risk? What group of people may gain?

  • At Risk: People who do not want to share their information, those who follow suggested behaviors but not through approved metrics, and smaller businesses that are not insurance partners.
  • Potential Gainers: Insurance partners and customers who comply with the nudges.

What is the wider impact of this dilemma?

As more data is used to personalize prices, nudges could influence increasingly detailed aspects of life, potentially compromising free will. For higher-risk individuals, following these behaviors may be the only way to afford insurance.

What are the cultural aspects important for this dilemma?

Socioeconomic distribution and digital literacy affect accessibility to suggested actions. Not all participants may have the means or knowledge to interact with these programs, leading to exclusion.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

  1. Regulations: Implement ethical limits for behavioral nudging through public regulations.
  2. Transparency: Ensure customers understand the criteria and implications of incentive programs.
  3. Inclusivity: Design inclusive programs that consider accessibility and affordability for all participants.

In what conditions could this be acceptable?

Acceptability depends on cultural idiosyncrasies and the nature of politics in a region. Programs must be transparent, inclusive, and respectful of individual autonomy.

What are other observations and conclusions for a solution?

Incentive schemes can strongly influence behavior. It is important to consider the ethical implications of imposing external motivations on lifestyle decisions. Designing fair and inclusive incentive programs is challenging but necessary to avoid exclusion and maintain autonomy.

GENERAL COMMENTS/REACTIONS:

Balancing incentives for low-risk behaviors with individual autonomy is a complex ethical issue. While incentives can drive positive behaviors, they must not infringe on personal freedom or privacy. In regions with diverse populations, careful consideration of cultural and socioeconomic factors is crucial.

MORE RESOURCES

LC-IN6:  AI bias vs. personal bias in decision-making

EXPLORATION

What is the ethical dilemma?

The dilemma involves comparing AI bias versus personal bias in decision-making tasks.

What is the content?

Before calculating the premium for a customer, insurance companies decide whether to accept the application. Technologies like image analysis, Optical Character Recognition (OCR), and Natural Language Processing (NLP) are used. Currently, the final decision is made by a human. In the future, however, as these technologies evolve, the decision may be fully delegated to AI. If AI models are biased, the automation may classify some applications unfairly. On the other hand, human decision-makers may still apply more subtle biases.

What are the technologies or types of data usage involved?

  • Natural Language Processing (NLP)
  • Chatbots
  • Optical Character Recognition (OCR)
  • Machine Learning Techniques

What is the application? What drives use in this case?

Many tasks can be reduced to processing input data to yield a decision, such as processing customer applications, predicting insured events, spotting fraud, and settling claims. Automation and AI can speed up these processes, and future improvements may remove the need for human involvement to increase efficiency.

What ethical issues are at play here?

Humans can make mistakes, but AI can also make different kinds of mistakes. The ethical issue lies in whether to let AI take full control of decisions, knowing it will make some errors that a human would not make, even if AI makes fewer errors overall.

What group of people are at risk? What group of people may gain?

When the algorithm takes over, errors and biases embedded in the algorithm can lead to unfair discrimination. If the dataset used to train the model is biased, certain ethnicities or geographic areas may be unfairly treated.

What is the wider impact of this dilemma?

If machines are allowed to make autonomous decisions, we must consider the cost (ethical, economic, social) of potential mistakes. Some tasks may never be suitable for full automation due to the severity of errors (e.g., military decisions, complex court rulings).

What are the cultural aspects important for this dilemma?

At the beginning of the 21st century, there is skepticism about letting AI take autonomous decisions. This fear is partly fed by pop culture and partly rational, as we navigate a fast-changing technological era. Robotic Process Automation is accepted for repetitive tasks, and autonomous driving is testing intelligent machines in life-threatening decisions.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

  1. For processes fully taken over by AI, customers should always have the option to request a human review.
  2. Implement quality control checks on a percentage of AI decisions to ensure alignment with human decisions (a minimal sampling sketch follows this list).
  3. Push forward research in explainable AI to make automated decisions more transparent.
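
A minimal sketch of the second control, under illustrative assumptions (synthetic decisions, a 10% audit rate, a placeholder human reviewer): route a random share of AI decisions to human review and track the agreement rate.

```python
# Minimal quality-control loop: sample AI decisions for human review
# and track the agreement rate. Decisions, the 10% sampling rate, and
# the 95% threshold are illustrative assumptions.
import random

random.seed(1)
ai_decisions = [random.choice(["accept", "reject"]) for _ in range(1000)]

SAMPLE_RATE = 0.10
audited = random.sample(range(len(ai_decisions)),
                        int(SAMPLE_RATE * len(ai_decisions)))

def human_review(index):
    # Placeholder: in practice a claims handler re-examines the case.
    return ai_decisions[index] if random.random() > 0.05 else "reject"

agree = sum(human_review(i) == ai_decisions[i] for i in audited)
rate = agree / len(audited)
print(f"human-AI agreement on audited sample: {rate:.1%}")
if rate < 0.95:
    print("Agreement below threshold -- escalate for model review.")
```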

In what conditions could this be acceptable?

  1. Automation should first be tested on less critical tasks as a proof of concept.
  2. Review decisions taken by AI to ensure compliance with ethical standards and human oversight.

What are other observations and conclusions for a solution?

Humans should oversee complex decisions and those impacting lives. As technology and attitudes toward AI evolve, more aggressive projects may test full automation. In the short and medium term, humans should always be involved to avoid significant errors.

GENERAL COMMENTS/REACTIONS:

Both AI and human decision-makers can exhibit bias. It is crucial to define and address bias within AI systems to prevent unfair outcomes, especially in critical areas like insurance. Regulatory frameworks and continuous monitoring are essential to ensure ethical AI implementation.

MORE RESOURCES

  1. Avoiding Unfair Bias in AI
  2. Artificial Intelligence and Insurance: Preventing Bias
  3. Insurance AI Discrimination

LC-IN7:  Explainability vs accuracy in AI models

EXPLORATION

What is the ethical dilemma?

The dilemma revolves around the trade-off between explainability and accuracy in AI models, particularly concerning the inclusion of arbitrary variables that may introduce bias.

What is the content?

This dilemma highlights the tension between improving model accuracy through the inclusion of additional variables and the ethical implications of using variables that may not directly relate to the target outcome. It is important to note that “the more data, the better the model” is not always true: feeding a model junk data yields a junk model, as the sketch below illustrates. The quality of data matters as much as its quantity.
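
The junk-data point can be demonstrated directly: the sketch below trains the same classifier with and without 200 pure-noise variables and compares cross-validated accuracy. The data are entirely synthetic, chosen only to illustrate the effect.

```python
# Minimal sketch: adding junk (pure-noise) features does not improve a
# model and can hurt it. Entirely synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
signal = rng.normal(size=(n, 2))                 # two informative variables
y = (signal[:, 0] + signal[:, 1] > 0).astype(int)

junk = rng.normal(size=(n, 200))                 # 200 irrelevant variables
X_clean = signal
X_noisy = np.hstack([signal, junk])

for name, X in [("clean", X_clean), ("with junk", X_noisy)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{name:10s} accuracy: {acc:.3f}")
```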

What are the technologies or types of data usage involved?

Machine learning techniques are employed to analyze data and train models, where the selection of variables impacts model performance and fairness.

What is the application? What drives this use in this case?

The application involves enhancing model predictive power by incorporating variables that improve accuracy, driven by the desire to optimize decision-making processes.

What ethical issues are at play here?

The primary ethical issue is the potential for unintended bias and discrimination against individuals whose characteristics are correlated with included variables but do not directly impact the target outcome.

What group of people are at risk? What group of people may gain?

Individuals from minority or underrepresented groups may be at risk of unfair treatment if variables included in AI models introduce bias against them. Society as a whole may benefit from improved model accuracy, but individuals within marginalized groups risk facing discrimination based on correlated variables.

What is the wider impact of this dilemma?

There is an increasing need to acknowledge the dilemma and define a consensual approach to solve it. With the rise of big data and the increasing amount of user-generated content, we are starting to be able to draw weak (but statistically correlated) links between very seemingly far-away behaviors. If that data were used for this kind of service company, we would be closer to a “Big Brother” scenario.

What are the cultural aspects important for this dilemma?

Cultural factors influence the selection and interpretation of variables, as well as regulatory approaches to addressing bias in AI models. Strong data protection policies in regions like Europe contrast with varying regulatory landscapes in Africa, where cultural and political factors shape ethical considerations.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

  1. Regulatory frameworks, akin to GDPR, may restrict the use of controversial variables in AI models to mitigate bias and discrimination.
  2. Implement extra workflows to ensure fair assessment of individuals affected by biased variables, such as manual review or additional information gathering.
  3. Allocate resources to fund studies that investigate the relationship between proxy variables and behaviors to inform ethical decision-making in variable selection (a minimal proxy-screening sketch follows this list).
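
As a minimal sketch of the proxy-screening idea in item 3, the snippet below flags candidate variables whose correlation with a protected attribute exceeds a threshold. The synthetic features, their names, and the 0.3 cutoff are illustrative assumptions; a real screening would use richer association measures and domain review.

```python
# Minimal proxy-variable screening: flag features strongly associated
# with a protected attribute. Data and threshold are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n = 2000
protected = rng.choice([0, 1], size=n)

features = {
    "zip_density":  protected * 1.0 + rng.normal(size=n),   # strong proxy
    "phone_model":  protected * 0.3 + rng.normal(size=n),   # weak proxy
    "random_noise": rng.normal(size=n),                     # no relation
}

for name, values in features.items():
    r = np.corrcoef(values, protected)[0, 1]
    flag = "POTENTIAL PROXY" if abs(r) > 0.3 else "ok"
    print(f"{name:12s} corr with protected attribute: {r:+.2f}  {flag}")
```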

In what conditions could this be acceptable?

Regulatory changes must reflect societal values and promote fairness and transparency in AI applications. Companies must prioritize ethical considerations over profit and commit to mitigating bias and discrimination in model development and deployment. Ethical commitment and strategic investment are necessary to improve model explainability and mitigate the unintended consequences of biased variable selection.

What are other observations and conclusions for a solution?

The explainability of AI does not only depend on the algorithm used but also on the input data. One should be able to explain why a given variable is relevant for a particular application and understand any indirect links it may have to a particular behavior.

GENERAL COMMENTS/REACTIONS:

In conclusion, addressing the ethical implications of variable selection in AI models requires a multifaceted approach that considers regulatory, cultural, and societal factors. Regional variations in Africa highlight the importance of context-specific solutions to promote fairness, transparency, and inclusivity in AI applications.

MORE RESOURCES

  1. Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019, January). Fairness and abstraction in sociotechnical systems. In Proceedings of the conference on fairness, accountability, and transparency (pp. 59-68).
  2. Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86-92.
  3. Transformer models and attention
  4. Using LLMs to explain other LLMs

LC-IN8:  Insurance companies’ use of public online data for profiling

EXPLORATION

What is the ethical dilemma?

Use of a person’s public online presence for profiling vs personal data control

What is the content?

To accurately calculate a person’s risks and premium, insurers try to leverage all available data. Some of these data are provided as the client establishes a service relationship with the insurer (address, contact info, age…). However, from this information it is now possible to find out a lot more about a person through their digital trail (Amazon reviews, social media interactions, membership in certain online communities…). When a person generates that online content, despite knowing that it will remain public to some extent due to platform policies, they do not intend or expect such data to be used for purposes unrelated to the platform where they posted it.

What are the technologies or types of data usage involved?

Web crawling tools, Natural Language Processing (NLP), social media.

What is the application? What drives this use in this case?

A trend for calculating ever more personalized premiums in insurance requires the usage of as many data points as possible to tailor the offers to specific clients. Gathering publicly available data is a straightforward way of collecting more of said data points.

What ethical issues are at play here?

Data control. People interact online in many different ways with a variety of purposes. Most of them do not have the purpose of contributing to an insurance premium calculator for getting a personalized offer. Getting hold of this data is legal in most cases since it is publicly available on the internet. However, is it ethical to use (public) personal data for a purpose that was not the original purpose that motivated the user to generate those data points?

Regional variations: For Brazil: The main contextual difference to be taken into account for Brazil is that the use of public online data for profiling purposes (e.g. calculating insurance premiums) ends up directly intersecting with the region’s evolving data protection law, the Lei Geral de Proteção de Dados (LGPD), effective as of September 2020, with administrative sanctions applicable from August 2021; since its implementation, Brazil has seen an increased awareness among citizens and businesses regarding data protection, spurred by significant data breaches and leaks. The National Data Protection Authority (ANPD), established to oversee compliance with the LGPD, has been active despite resource constraints, focusing on enforcement, awareness, and the development of a data privacy culture.

What group of people are at risk? What group of people may gain?

Generally speaking, most people are not aware of how their digital trail is being used for downstream tasks. In that sense, everyone leaving a public online presence is at risk (not only talking about the premium calculation example, but actually at risk of having their data used in unknown ways by unknown parties worldwide). The amount of online content generated by an individual would also proportionally relate to the risk at which they are exposed. Nowadays, usually the younger generation generates more online content while also being less aware of what generating all that data entails. Whether that results in a gain or a loss would depend on whether their online behavior represents, statistically speaking, more or less insurance risk.

Regional variations: For Brazil: Similar to the global context but with the additional challenge of potential data breaches involving the personal data of 243 million Brazilians and the ongoing issue of unauthorized data sharing and selling within the country.

What is the wider impact of this dilemma?

The amount of user-generated data on the internet is growing. If it is possible to freely collect such public information for a given individual, in the future there may not even be a need to request consent to access private data of a potential customer. Leveraging their online presence may be more than enough to infer their behavior and create a very detailed risk profile.

What are the cultural aspects important for this dilemma?

In the 21st century (and accentuated further by the COVID pandemic), our online identity is a significant part of who we are. Using online platforms for socializing, requesting or sharing information, creating a professional image, and so forth is mainstream. Not sharing this information on online platforms is not really an option if one wants to have a virtual identity. Not having proper control over how these data can be used by third parties can be as dangerous as not controlling other private data that has not been uploaded online. Country regulations also play a role with respect to how online platforms should protect users and their data. Digital education must be imparted to all who take part in the online world, especially the young, given their credulity and their status as digital natives.

Regional variations: For Brazil: The Brazilian population is very much an online one – creating a culture of privacy is very hard. I would recommend taking a look at Danilo Doneda’s extensive work on the subject.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

Even though the dilemma on whether to use that data falls on third parties, it could be possible for the platforms displaying such data to better protect the privacy of their users with more restrictive policies for third-party use. For insurance companies, applying this online information to further personalize the customer offer (or for any other internal purpose) may be given as an option to the customer instead of doing it by default.

In what conditions could this be acceptable?

Those solutions should already be implementable; the main barrier would probably be the willingness of the companies involved to move in that direction even when it does not serve their immediate business interests.

What are other observations and conclusions for a solution?

Many could argue that using public data should not be considered an ethical issue. However, in many cases, the data is public because it is set as a requirement by the platform, but few users would make it open otherwise. Therefore, external entities should not think that using public, open data is automatically ethical, since sometimes providing one’s data to an open platform is a price one is forced to pay to play the game.

Regional variations: For Brazil: As the LGPD matures, new uses, impacts, and solutions related to data privacy are emerging. For instance, the discussion around the adequacy of the ANPD’s resources and the global comparison of data protection enforcement capabilities highlight the ongoing evolution of data privacy governance. Moreover, Brazil’s focus on cybersecurity as an integral part of data protection underscores the intertwined nature of these issues.

GENERAL COMMENTS/REACTIONS:

Overall Comment: The issue of using publicly available data to inform private decisions on an individual without consent is an interesting dilemma based on the availability of data within a given jurisdiction. There are at least three dilemmas nested in one:

  1. The issue of whether the publicly available information is dependable given inherent bias.
  2. Whether available data of poor quality (due to bias) is better than no data when an industry actor is seeking to provide a solution that maximizes profitability and enhances the common good.
  3. Whether it is alright for the same industry actor to use a low-cost approach to gather publicly available data in a jurisdiction with lax rules while spending more to gather data in a jurisdiction with stringent regulations in order to provide an identical solution in each market.

LC-IN9:  Microsegmentation and risk pooling in insurance programs

EXPLORATION

What is the ethical dilemma?

The ethical dilemma revolves around the choice between pooling risk in broad risk classes and implementing “fairer” microsegmentation.

What is the content?

Risk pooling refers to aggregating risks across a broad spectrum of individuals or entities at a common premium. Microsegmentation, by contrast, leverages personalized data to segment customers into finer risk groups, allowing for more tailored pricing. This finer segmentation conflicts with the solidarity principle of insurance, which emphasizes cross-subsidization between risk classes.
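
A minimal numeric sketch of the trade-off, under illustrative assumptions (two risk groups and a fixed claim size): pooling charges everyone the average expected loss, while microsegmentation charges each group its own expected loss.

```python
# Minimal numeric sketch of pooling vs. microsegmentation.
# Group sizes, claim size, and probabilities are illustrative.
CLAIM = 10_000
groups = {"low_risk":  {"n": 900, "p": 0.01},
          "high_risk": {"n": 100, "p": 0.08}}

total_expected_loss = sum(g["n"] * g["p"] * CLAIM for g in groups.values())
total_customers = sum(g["n"] for g in groups.values())

pooled_premium = total_expected_loss / total_customers
print(f"pooled premium (everyone): {pooled_premium:,.0f}")

for name, g in groups.items():
    print(f"{name}: segmented premium {g['p'] * CLAIM:,.0f}")
# Under pooling, low-risk customers pay above their expected loss and
# cross-subsidize high-risk customers; microsegmentation removes the
# subsidy and may price high-risk customers out of coverage.
```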

What are the technologies or types of data usage involved?

Personalized data may include information obtained through Big Data analytics, Internet of Things (IoT) devices, and machine learning algorithms.

What is the application? What drives this use in this case?

The application involves offering more customized insurance packages, which may appear fairer and more beneficial to certain customers than a standardized pricing model. Insurance companies are driven by the desire to meet the demands of this audience for personalized pricing.

What ethical issues are at play here?

Insurance companies stand to benefit from individualized pricing by tailoring premiums to the specific risk profiles of customers. However, ethical issues arise concerning fairness and inclusivity, as individualized pricing may exclude higher-risk groups from affordable coverage, potentially leading to social and economic inequality.

What are the ethical considerations of the data access and use, beyond price and service for the customer?

Beyond pricing and service, ethical considerations involve issues such as privacy, consent, and the potential for discrimination based on sensitive personal data.

Regional variations: The impact of individualized pricing may vary across regions, affecting different socioeconomic groups differently. Vulnerable populations, particularly those in lower socioeconomic classes, may face exclusion from the insurance system, while individuals with lower risk profiles may benefit from more affordable premiums.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

  1. Public bodies and social security systems could intervene to make insurance affordable for high-risk groups unable to afford extreme premiums.

Where will the money come from to pay for this? Funding for such interventions could come from public resources allocated to social welfare or through subsidies from insurance firms.

  2. Insurance companies could strike a balance by implementing multiple risk classes while retaining some level of cross-subsidization. This approach ensures affordability for lower-risk groups while requiring higher-risk individuals to pay higher premiums.

Is this in their interest, if they need to pay out customers that need to claim? Balancing risk classes aligns with the long-term interests of insurance companies by maintaining a diverse customer base and mitigating financial risks.

  3. Insurance firms could allow higher-risk groups to limit their premiums by providing collateral assets that would be seized in the event of excessive risk realization.

What are the risks of this? This approach introduces complexities akin to managing credit card debt and requires careful consideration to prevent exploitation or adverse consequences for both the insurer and the insured.

In what conditions could this be acceptable?

  1. Government policies may support subsidies for certain types of insurance, particularly those related to essential services like infrastructure and housing protection from natural disasters.
  2. Insurance companies should evaluate the feasibility of such approaches within their business models to ensure competitiveness and sustainability.
  3. Implementing this approach necessitates robust operational processes to manage underwriting complexities effectively.

What are other observations and conclusions for a solution?

The evolution from a mutualist approach, where risks were pooled due to limited data, to individualized pricing enabled by big data underscores the need for a balance between profitability and social responsibility. A mutualist approach to insurance pools risks across a broad group of individuals to provide coverage at a common premium rate, promoting solidarity and ensuring access to insurance regardless of individual risk profiles. While private insurers aim to maximize profits and competitiveness, public policies may need to intervene to safeguard the solidarity principle of insurance and ensure equitable access to coverage.

What other measures could support the efficiency of private services while ensuring accountability to the public interest?

Efforts to enhance the efficiency of private insurance services should prioritize transparency, regulatory oversight, and mechanisms to address social inequalities and ensure universal access to essential coverage.

GENERAL COMMENTS/REACTIONS:

The integration of technology in financial services, particularly in addressing challenges like the solidarity principle in insurance, necessitates careful consideration of ethical implications. Achieving a balance between risk assessment and discriminatory practices is crucial, as highlighted in literature such as McFall’s exploration of self-tracking in health insurance pricing.

MORE RESOURCES

McFall, L. (2019). Personalizing solidarity? The role of self-tracking in health insurance pricing. Economy and Society, 48(1), 52-76.

LC-PA1:  Risks associated with rapid transition to digital transaction payments  

EXPLORATION

What is the ethical dilemma?

The ethical dilemma concerns the impact of technological advancements in the payment market and the potential exclusion of individuals due to technological illiteracy.

What is the content?

The shift from traditional paper payments to digital transactions, including online banking, card payments, and instant payment technologies, presents new challenges for financial consumers. Factors such as digital illiteracy and lack of trust in new systems can lead to the neglect of certain consumer segments in adopting these technologies.

What are the technologies or types of data usage involved?

Technologies involved include instant payments, mobile technology, credit and debit cards, as well as encryption and password protection for communication and transactions.

What is the application? What drives this use in this case?

The application of these technologies varies depending on the market and specific circumstances. For example, in Brazil, various payment methods such as TED/DOC transfers, debit card payments, credit card payments, and the PIX instant payment system are prevalent. The adoption of these technologies is driven by factors such as convenience, security, and government initiatives to modernize payment systems.

What ethical issues are at play here?

The ethical conflict revolves around the rapid advancement of technology and the potential exclusion of certain segments of society due to digital illiteracy, lack of access to technology, or distrust in new payment systems. This exclusion can lead to financial vulnerability and exploitation of vulnerable groups by corporations, as well as perpetuate financial illiteracy and deepen social inequalities.

Regional variations: In different regions, specific groups such as older individuals or economically marginalized populations are at risk of exclusion from the digital economy. Meanwhile, corporations stand to gain from the widespread adoption of online banking and digital payment methods, potentially exploiting the financial vulnerabilities of technologically illiterate individuals.

What is the wider impact of this dilemma?

The exclusion of certain groups from digital financial services can lead to predatory commercial practices and hinder access to credit and other financial products. Additionally, the transition to digital payments may marginalize individuals who are unable to adapt to new technologies, limiting their access to products and services that improve quality of life.

What are the cultural aspects important for this dilemma?

Cultural differences, such as age, education level, and familiarity with digital services, influence how individuals are affected by the digital transformation of payment systems. While global trends like Open Banking and the digitization of payments are evident, regional disparities exist in how these changes impact various demographic groups.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution? Potential solutions include implementing consumer protection standards to address fraud and increase trust in digital payment systems, as well as digital inclusion policies and specific protections for elderly consumers.

In what conditions could this be acceptable? Acceptable conditions for addressing this issue include providing effective education on digital banking through classes offered by banks or community centers, and creating awareness among corporations about the challenges faced by technologically illiterate individuals during the transition to online banking.

What are other observations and conclusions for a solution? Addressing the digital literacy gap requires collaborative efforts from academic, business, and public policy sectors to ensure inclusivity in the digital economy. Initiatives focusing on digital and financial inclusion are essential for reducing poverty and improving quality of life globally.

GENERAL COMMENTS/REACTIONS: This dilemma highlights the importance of digital literacy and inclusion in the digital economy. It prompts questions about the responsibility of industry and government in ensuring access to digital financial services for all segments of society. Whether through mandatory standards or deliberate education efforts, addressing digital literacy is crucial, especially in the context of developing countries striving to integrate into the digital economy.

LC-PA2:  User data and personalized advertisements

EXPLORATION

What is the ethical dilemma? The ethical dilemma revolves around Big Tech companies utilizing user data from their payment platforms for advertising revenue, potentially compromising user privacy through data sharing with other divisions of the company or third-party vendors.

What is the content? Companies like Google are leveraging transaction data and GPS information to create personalized advertisements for users. Other major players in the industry, such as Facebook, Amazon, Apple, Tencent, and Alibaba, also have their own payment applications.

What are the technologies or types of data usage involved? Technologies involved include data collection through GPS, screen time tracking, IP addresses, email contents, and phone numbers. Machine learning algorithms are then utilized to personalize advertising suggestions based on this data.
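
As a rough illustration of the mechanism described above (a simplified sketch, not any company’s actual system; the merchants and categories are invented), a recommender might rank ad categories by how often they appear in a user’s transaction history:

```python
from collections import Counter

# Hypothetical example: rank ad categories by frequency in a user's
# payment transactions. Real systems combine far richer signals (GPS,
# screen time, ML models), but the incentive is the same: more data
# about the user yields more precisely targeted advertisements.

transactions = [
    {"merchant": "CoffeeHouse", "category": "dining"},
    {"merchant": "GymPro", "category": "fitness"},
    {"merchant": "CoffeeHouse", "category": "dining"},
    {"merchant": "BookBarn", "category": "retail"},
    {"merchant": "CoffeeHouse", "category": "dining"},
]

category_counts = Counter(t["category"] for t in transactions)

# Advertisers bidding on the user's top categories win the ad slots.
for category, count in category_counts.most_common(3):
    print(f"Target ads for '{category}' (seen {count} times)")
```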

What is the application? What drives this use in this case? Mobile payment apps like Google Pay offer consumers a convenient way to make quick payments after syncing their bank details. However, these same companies are known for monetizing user data. By expanding into the finance industry, they gain additional opportunities to profit from the data they collect.

What ethical issues are at play here? The primary ethical concerns include the use of consumer data without explicit consent and the lack of transparency regarding which data is being used and how it is being utilized.

Regional variations: In Latin America, companies like Mercado Pago are significant players in the payment market, while Mexican supermarkets are also collecting user data through loyalty programs. However, the extent to which this data is shared and utilized remains unclear, especially given the ambiguity of data protection laws in some regions.

What group of people are at risk? What group of people may gain? All users of payment apps are at risk, particularly those who are technologically challenged or economically disadvantaged. Big Tech corporations and third-party vendors who advertise through these platforms stand to gain from access to user data.

Regional variations: In Nigeria, users are attracted to payment apps for their low cost and social influence. However, concerns about fraudulent losses and the misuse of data are gradually becoming more significant issues in the region, potentially hindering future adoption of such technologies.

What is the wider impact of this dilemma? Big Tech companies, with their dominance in the tech industry, often face minimal consequences for privacy breaches. As they expand into finance, they gain even greater power over both governments and consumers, potentially further eroding privacy rights.

What are the cultural aspects important for this dilemma? The older generation may struggle to understand and navigate issues related to data privacy, while younger individuals may find it challenging to explain these concepts to older family members. Technology has provided many benefits to economically disadvantaged individuals, but they are also more vulnerable to the negative impacts of data misuse.

Regional variations: Cultural attitudes toward technology and privacy vary between regions, with concerns about data privacy becoming increasingly significant in regions like Nigeria due to emerging technologies and the potential for fraudulent activities.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

  1. Clearly communicate with consumers about the data being collected.
  2. Obtain explicit consent from users regarding data usage and provide transparent, easily understandable data policies.
  3. Consider allowing users to benefit from their own data, such as through incentives or rewards for sharing data.
  4. Offer options for users to control their data sharing preferences and withdraw consent when desired (see the sketch after this list).
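
To make item 4 concrete, the following is a minimal sketch (in Python, with a hypothetical consent record and invented purpose names, not any platform’s real API) of per-purpose consent that can be audited and withdrawn at any time:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical per-user consent record: each data-use purpose is granted
# or revoked independently, and every change is timestamped so consent
# can be audited and withdrawn at any time. Field names are illustrative.

@dataclass
class ConsentRecord:
    user_id: str
    purposes: dict = field(default_factory=dict)  # purpose -> granted?
    history: list = field(default_factory=list)   # audit trail of changes

    def set_consent(self, purpose: str, granted: bool) -> None:
        self.purposes[purpose] = granted
        self.history.append((purpose, granted, datetime.now(timezone.utc)))

    def allows(self, purpose: str) -> bool:
        # Default-deny: data may only be used for explicitly granted purposes.
        return self.purposes.get(purpose, False)

record = ConsentRecord(user_id="user-123")
record.set_consent("personalized_ads", True)
record.set_consent("third_party_sharing", False)
print(record.allows("personalized_ads"))       # True
record.set_consent("personalized_ads", False)  # user withdraws consent
print(record.allows("personalized_ads"))       # False
```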

In what conditions could this be acceptable? It is not acceptable to use consumer data without transparent consent and collaboration with users and relevant authorities. Acceptability could be achieved through clear communication, user consent, and adherence to regional data protection regulations.

What are other observations and conclusions for a solution? Regional variations in cultural attitudes and regulatory frameworks must be considered when implementing solutions. Respect for cultural differences and a commitment to transparency and accountability are essential for long-term business success and user trust.

GENERAL COMMENTS/REACTIONS: Addressing data privacy concerns requires a balance between technological innovation and respect for user rights. By prioritizing transparency and user consent, businesses can foster trust and mitigate ethical dilemmas related to data usage in payment platforms.

LC-PA3:  AI and data use in mobile banking services

EXPLORATION

What is the ethical dilemma? The ethical dilemma revolves around the lack of transparency and clarity in the collection, usage, and rights of data in mobile banking services, particularly concerning ongoing consent processes.

What is the content? When customers utilize mobile devices for banking, they unwittingly provide banks with extensive data, which is then used for decision-making through AI and ML tools. However, there are concerns regarding transparency in data collection, the types of data collected, and user rights over their data.

What are the technologies or types of data usage involved? Artificial Intelligence and Machine Learning play crucial roles in analyzing data collected from mobile banking services.

What is the application? What drives this use in this case? Mobile phones serve as both the source of data and the channel through which data is transmitted and shared in mobile banking. Banks employ technologies like AI and ML to analyze this data and draw insights about their customers.

What ethical issues are at play here? The ethical dilemmas include:

  1. Lack of transparency in data collection and usage.
  2. Imposed consent through complex mobile device permissions and user agreements.
  3. Legal language in user agreements that can be difficult to understand.
  4. Users being unaware of how their data is utilized for cross-targeting products and influencing financial decisions.

Regional variations: In Africa, factors like limited technology access, cultural attitudes toward privacy, regulatory environments, and socioeconomic disparities influence the ethical dilemma surrounding mobile banking data usage.

What group of people are at risk? What group of people may gain? Most users are at risk of having their data used without full transparency or understanding. Banks and third-party vendors may benefit from the data collected through mobile banking services.

Regional variations: In Africa, vulnerable populations with limited access to financial services may face heightened risks regarding data privacy and consent.

What is the wider impact of this dilemma? The lack of transparency in data collection and usage erodes trust in banks and the wider financial system. Data collected through mobile banking can be used to discriminate against certain populations and perpetuate socioeconomic inequalities.

What are the cultural aspects important for this dilemma? Cultural attitudes toward privacy and data sharing vary across regions, influencing how individuals perceive and engage with mobile banking services.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

  1. Develop transparent and concise data policies that outline how data is collected, used, stored, and analyzed.
  2. Simplify terms of service agreements and clearly state user rights and data policies in language understandable to all users.
  3. Offer mechanisms for users to seek clarification, ask questions, and determine which data they are willing to share.
  4. Ensure data policies are available in languages understood by the majority of customers.

In what conditions could this be acceptable? Acceptability relies on the implementation of a robust data framework that prioritizes ongoing communication, transparency, and user consent.

What are other observations and conclusions for a solution? Banks must balance profitability with respect for human dignity and privacy. Data rights should be recognized as integral to human dignity and privacy, requiring clear communication and user empowerment in data usage.

GENERAL COMMENTS/REACTIONS: Contextual factors in Africa, such as limited technology access and cultural perceptions of privacy, must be considered when developing solutions to the ethical dilemmas surrounding mobile banking data usage. Tailored solutions should promote inclusive and responsible banking practices while addressing regional disparities in data privacy and consent.

LC-PA4:  Unauthorized payments against bank customers’ accounts

EXPLORATION

What is the ethical dilemma? The ethical dilemma revolves around unauthorized payments from customers’ bank accounts, often due to scams or fraudulent authorizations, and the subsequent lack of effective remedies and support from banks.

What is the content? Unauthorized payments against bank customers’ accounts are a common occurrence, often resulting from scams or fraudulent activities. Many affected clients are unaware of these unauthorized payments, and detecting them can be challenging due to procedural barriers. For instance, in South Africa, banks cannot cancel debit orders without the vendor’s consent, and customers alone cannot cancel debit orders against their accounts.

What are the technologies or types of data usage involved? Unauthorized payments may involve manual or electronic processes that mimic a bank client’s payment authorization, causing funds to be released without the client’s genuine approval.

What is the application? What drives this use in this case? The use of banking debit order systems is driven by the convenience they provide in automatic payment processing. However, this convenience also exposes customers to risks of fraudulent payments being made from their accounts without their approval.

What ethical issues are at play here? The primary ethical issue is the breach of the bank’s fiduciary duty to safeguard customers’ funds. Unauthorized access to customers’ funds entrusted to the bank should be promptly remedied by the bank. However, many customers may not detect these unauthorized payments, and banks often have procedures that are not supportive of customers in remedying the breach.

What group of people are at risk? What group of people may gain? Customers who are illiterate or have weak communication channels with the bank are particularly vulnerable to unauthorized payments. Low-income earners and small and medium-sized enterprises (SMEs) are also at risk. Fraudulent third parties perpetrating these unauthorized payments stand to gain.

What is the wider impact of this dilemma? Unauthorized payments often target financially disadvantaged individuals who heavily rely on the funds in their bank accounts. This exacerbates financial inequality and hinders upward mobility. Additionally, the integrity of the banking system is compromised, and there is a risk of financial exclusion if customers lose trust in the debit order system.

What are the cultural aspects important for this dilemma? Culturally, targeted customers may have limited knowledge of their rights in their contractual relationship with the bank and may be hesitant to demand redress. They may also lack the resources to enforce ethical behavior from the bank.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

  1. Review bank policies and mitigation procedures to provide better support and redress for customers affected by unauthorized payments.
  2. Reduce the onus on customers by revising procedures that require customers to cancel unauthorized deductions.
  3. Amend contracts to allow only approved debits to take place, potentially utilizing innovations like the DebiCheck system (a rough sketch follows this list).
  4. Provide tools to customers, particularly those at risk, to aid in quicker responses and lower costs, such as electronic mandates or two-factor authentication.
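
As a rough sketch of the approved-debit idea in item 3 (loosely modeled on DebiCheck-style electronic mandates, with invented account, vendor, and limit details), a bank could refuse any debit order that lacks a customer-confirmed mandate:

```python
# Hypothetical mandate check, loosely inspired by DebiCheck-style
# electronic mandates: a debit order is honored only if the customer
# has explicitly confirmed a mandate for that vendor, and the amount
# stays within the confirmed limit. All details are illustrative.

# account_id -> {vendor_id: approved debit limit}
confirmed_mandates = {
    "acc-001": {"vendor-insurance": 500.00},
}

def authorize_debit(account_id: str, vendor_id: str, amount: float) -> bool:
    """Release a debit order only against a customer-confirmed mandate."""
    limit = confirmed_mandates.get(account_id, {}).get(vendor_id)
    if limit is None:
        return False  # no confirmed mandate: reject rather than debit
    return amount <= limit

print(authorize_debit("acc-001", "vendor-insurance", 450.00))  # True
print(authorize_debit("acc-001", "vendor-unknown", 100.00))    # False: no mandate
print(authorize_debit("acc-001", "vendor-insurance", 900.00))  # False: over limit
```

This shifts the onus from the customer, who must today detect and dispute rogue debits, to the bank, which simply declines anything the customer never mandated.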

In what conditions could this be acceptable? Banks must reconsider their fiduciary duty towards customers and take greater responsibility in safeguarding and remedying losses of customers’ funds. Customers should have avenues to hold banks accountable for their contractual duties.

What are other observations and conclusions for a solution? Banks should increase awareness of unauthorized payments and criminal activities against customers’ accounts and empower customers to play a greater role in safeguarding their funds.

Digital development has exposed certain groups to fraudulent threats, but the benefits of digital tools cannot be overlooked. While unauthorized payments may have a material impact, the cost implications for banks in constantly reviewing transaction errors should be considered.

LC-PA5:  Microcredit to micro, small, and medium enterprises (MSMEs)  

EXPLORATION

What is the ethical dilemma? The ethical dilemma involves individuals in need of financial services, particularly those with low digital and financial literacy, being vulnerable to exploitation by firms seeking to leverage their sensitive financial data for profit. This dilemma also encompasses the challenge of finding the right balance of data sharing between clients and firms to avoid exploitation while still providing optimal services.

What is the content? Financial service providers, particularly those offering microcredit facilities to micro, small, and medium enterprises (MSMEs), often collect sensitive financial data from financially vulnerable customers. While this data can be used to offer tailored services through artificial intelligence (AI) applications, there is an ethical obligation for firms to ensure they do not exploit customers’ vulnerabilities for profit.

What are the technologies or types of data usage involved? The technologies involved include digital channels for collecting and storing financial data, as well as AI applications for analyzing data and making personalized service recommendations.

What ethical issues are at play here? The primary ethical issues include data privacy and informed consent. Financial service providers have an ethical obligation to use personal data according to the owner’s wishes and ensure that users are fully informed about the risks and benefits of sharing their data. However, in many cases, users, especially those with low digital literacy, may not fully understand the implications of sharing their sensitive financial information.

Regional variations: Regional variations may include differences in data privacy laws, cultural attitudes towards privacy, and levels of financial literacy among the population. For example, in Nigeria, recent data breaches and privacy violations by a microcredit bank highlight the importance of stringent regulations and enforcement mechanisms to protect consumers.

What group of people are at risk? What group of people may gain? Financially vulnerable individuals with low digital and financial literacy are at risk of exploitation by financial service providers who may profit from their lack of understanding of data privacy and financial risk. Privacy-conscious individuals may also be at risk of receiving suboptimal services if they refrain from sharing necessary information with financial firms.

What is the wider impact of this dilemma? The irresponsible sharing and hoarding of financial data can lead to financial exclusion, particularly for MSMEs and individuals who rely on financial services for economic empowerment. Data breaches and privacy violations by financial institutions undermine trust in the financial system and may further disenfranchise vulnerable populations.

What are the cultural aspects important for this dilemma? Cultural beliefs and attitudes towards privacy vary across regions and may influence individuals’ willingness to share their sensitive financial data. Recent developments in Nigeria highlight the importance of cultural factors in shaping regulatory responses to data breaches and privacy violations.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution?

  • Financial service providers should provide clear and transparent disclosures about the use of customers’ data and obtain informed consent from users before collecting sensitive financial information.
  • Governments should enact and enforce stringent data privacy laws and regulations to protect consumers from exploitation and ensure accountability among financial institutions.
  • Financial literacy education programs should be implemented to empower individuals to make informed decisions about sharing their financial data and understand their rights in the digital age.

In what conditions could this be acceptable? Acceptable solutions would involve a combination of regulatory oversight, industry best practices, and consumer education to create a fair and transparent financial services market that prioritizes the privacy and rights of consumers.

What are other observations and conclusions for a solution? By addressing the ethical concerns surrounding data privacy and financial literacy, stakeholders can work towards creating a more inclusive and ethical financial services sector that benefits all individuals, regardless of their digital literacy or socioeconomic status. Solutions should be tailored to address specific regional variations in data privacy laws, cultural attitudes, and levels of financial literacy to ensure that all individuals have equal access to ethical and responsible financial services.

GENERAL COMMENTS/REACTIONS: This dilemma highlights the complex interplay between data privacy, financial literacy, and ethical considerations in the financial services sector. While the issues may be more pronounced in developing economies, they are relevant globally and require comprehensive solutions that address the needs and concerns of all stakeholders involved.

It’s interesting to see how this dilemma underscores the importance of balancing the need for personalized financial services with the imperative to protect individuals’ privacy and rights. The solutions proposed emphasize transparency, regulatory oversight, and education as key pillars in addressing these ethical challenges. This approach acknowledges the nuanced nature of the issue and the need for multifaceted solutions that account for regional variations and cultural considerations. Overall, it’s crucial to foster a financial services ecosystem that prioritizes ethical conduct and empowers individuals to make informed decisions about their data and financial well-being.

LC-PA7:  Buy Now, Pay Later (BNPL) opportunities and risks

EXPLORATION

What is the ethical dilemma? The ethical dilemma revolves around the use of short-term payment plans and services like Buy Now, Pay Later (BNPL), which increase financial access but may exploit consumers, particularly those who are financially disadvantaged. These services, while offering convenience and accessibility, often come with exorbitant interest rates, late fees, and potential damage to credit scores, disproportionately impacting vulnerable individuals.

What is the content? Short-term payment plans and BNPL services offer consumers the ability to split payments into installments, providing access to goods and services at the point of purchase. These services leverage technologies like AI and machine learning to automate risk decisions and offer micro-credit to buyers, often with minimal involvement from merchants. However, the lack of regulation and transparency can lead to negative outcomes such as overspending, high interest rates, and credit score damage for users who struggle to repay.

What are the technologies or types of data usage involved? Technologies such as AI, machine learning, blockchain, and data integration are utilized to automate risk assessments and inform lending decisions. Data sources include income and credit data from users’ bank accounts, prior purchase and payment history with BNPL services, credit scores, and alternative data like consumer profiles and transactional information.
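
To illustrate the kind of automated decisioning described above (a deliberately simplified, rule-based sketch with invented thresholds, not any BNPL provider’s actual model, which would typically be a trained ML system), consider:

```python
# Simplified, invented illustration of an automated BNPL risk decision.
# Real providers use ML models over far richer data; the point is that
# the decision is instant, driven by income, repayment history, and
# credit data, and its thresholds are invisible to the consumer.

def bnpl_decision(monthly_income: float,
                  missed_bnpl_payments: int,
                  credit_score: int,
                  purchase_amount: float) -> str:
    # Decline obviously risky profiles outright.
    if missed_bnpl_payments >= 2 or credit_score < 500:
        return "declined: credit history"
    # Cap each of four installments at a fraction of monthly income.
    installment = purchase_amount / 4
    if installment > 0.15 * monthly_income:
        return "declined: installment exceeds affordability cap"
    return f"approved: 4 installments of {installment:.2f}"

print(bnpl_decision(2_000.00, 0, 650, 400.00))  # approved: 4 installments of 100.00
print(bnpl_decision(1_200.00, 0, 650, 900.00))  # declined: affordability
print(bnpl_decision(2_000.00, 3, 700, 200.00))  # declined: credit history
```

The transparency and informed-consent concerns raised in the next answer arise precisely because such criteria operate instantly at the point of sale, out of the consumer’s view.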

What ethical issues are at play here? The primary ethical issues include transparency, informed consent, and fairness. BNPL services may lure consumers into debt through the promise of easy, manageable payments without fully disclosing the risks associated with late fees and high interest rates. This lack of transparency can exploit financially vulnerable individuals who may not fully understand the terms and conditions of these services, leading to long-term financial harm.

Regional variations: Regional variations highlight differences in consumer behavior, regulatory environments, and market dynamics. For example, in Brazil, BNPL is experiencing rapid growth as a solution for the underbanked population, but there are concerns about the affordability and accessibility of these services, especially for consumers lacking financial literacy.

What group of people are at risk? What group of people may gain? Financially disadvantaged individuals, particularly youth and the unbanked, are most at risk of exploitation by BNPL services. These users may be financially uneducated and unaware of the long-term consequences of accumulating debt through overspending and high fees. Meanwhile, BNPL platforms and merchants stand to gain from increased sales and profits, but they may also face credit losses and reputational risks if users default on payments.

Regional variations: In Brazil, BNPL services offer accessibility to the underbanked population but may exacerbate debt issues if not regulated effectively. While BNPL companies see opportunities for growth, there are concerns about the impact on consumer debt levels and financial stability in the long term.

What is the wider impact of this dilemma? The wider impact includes increased financial inclusion for some users but also the risk of exacerbating debt and financial instability for others. BNPL services may provide short-term benefits but could lead to long-term consequences such as damaged credit scores, limited access to future credit, and heightened financial vulnerability, especially for low-income individuals.

What are the cultural aspects important for this dilemma? Cultural aspects such as attitudes towards debt, financial responsibility, and trust in financial institutions play a significant role in shaping consumer behavior and regulatory responses. In countries like India, where BNPL services are gaining popularity among the unbanked, cultural perceptions of debt and financial literacy may influence users’ willingness to engage with these services.

Regional variations: In Brazil, BNPL services are viewed as a solution for the underbanked population, but concerns about debt and financial responsibility remain prominent. Cultural values around financial responsibility and the importance of regulation may impact consumer perceptions and government responses to BNPL services.

SOLUTION-ORIENTATION

What are some possible controls and comments for a solution? Government regulation is crucial to ensure transparency, fairness, and consumer protection in the BNPL industry. Regulations should mandate clear disclosure of terms and conditions, including interest rates, fees, and potential credit score impacts. Additionally, consumer education programs can empower users to make informed decisions about their financial health and avoid falling into debt traps.

In what conditions could this be acceptable? Acceptable solutions involve aligning BNPL regulations with existing credit card industry standards to ensure consistency and transparency in lending practices. By promoting financial literacy and regulating BNPL services, governments can mitigate the risks of exploitation and empower consumers to make responsible financial decisions.

What are other observations and conclusions for a solution? Effective solutions require collaboration between government regulators, financial institutions, and consumer advocacy groups to establish clear guidelines and standards for BNPL services. By addressing the ethical concerns surrounding transparency, informed consent, and fairness, stakeholders can create a more equitable and sustainable financial services ecosystem.

Regional variations: In Brazil, regulations should be tailored to address the unique needs and challenges of the local market, including the underbanked population. This may involve implementing consumer protection measures, promoting financial education, and fostering innovation in the BNPL industry while safeguarding against predatory practices.

GENERAL COMMENTS/REACTIONS: This dilemma highlights the complex interplay between financial innovation, consumer protection, and regulatory oversight in the BNPL industry. While these services offer potential benefits for financial inclusion, they also pose risks of exploitation and long-term financial harm, particularly for vulnerable individuals. Effective solutions must strike a balance between promoting access to credit and safeguarding consumers from predatory practices, taking into account regional variations and cultural factors.

It’s fascinating to see how the BNPL industry presents both opportunities and challenges for consumers, businesses, and regulators alike. The ethical considerations surrounding transparency, informed consent, and fairness are paramount in addressing the risks associated with these services, especially for vulnerable individuals who may be financially disadvantaged or lack financial literacy. The regional variations provide valuable insights into how different cultural contexts and market dynamics shape the impact and regulatory response to BNPL services. Overall, achieving a balance between promoting financial inclusion and protecting consumers from exploitation requires comprehensive regulation, consumer education, and collaboration among stakeholders.

Appendix A. Ethical Dilemma Template
