
RESPONSIBLE DIGITAL LEADERSHIP IN THE FINANCIAL SECTOR | April 2021 Center for Human Rights and International Justice, Stanford University

Case Dilemma Template – CC3 | Felix Rösner

Explorative part

What is the ethical dilemma?

BIAS

Risk of biased or false results relating to ESG performance as we increasingly use artificial intelligence (AI) and natural language processing (NLP) to analyze and create currently missing data points on the ESG strategies of publicly traded companies.

What is the content?

Artificial intelligence (AI) allows investors to collect and analyze more information than ever before when accounting for environmental, social, and governance risks and opportunities. 

AI can help sustainable investors process mountains of data that hold essential information for ESG investing.

Computer algorithms that have been trained to find and analyze tone and content can digest all of the information available about a company, a task that would be massive for human employees to complete at a reasonable speed. Popular techniques that measure the tone of text, such as sentiment analysis, automate tasks that would have been impossibly labor-intensive even a few years ago.
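As a purely illustrative sketch, such automated tone analysis could look like the snippet below. It assumes the Hugging Face transformers library and its default pretrained sentiment model, which is a generic English model rather than an ESG-specific one, and the sample excerpts are invented.

    # Minimal sketch of sentiment analysis on ESG-related text.
    # Assumption: generic pretrained model from the Hugging Face
    # `transformers` library, not an ESG-tuned model.
    from transformers import pipeline

    sentiment = pipeline("sentiment-analysis")

    excerpts = [
        "We reduced Scope 1 and Scope 2 emissions by 18% year over year.",
        "The company faces ongoing litigation over wastewater discharge violations.",
    ]

    for text in excerpts:
        result = sentiment(text)[0]   # e.g. {'label': 'POSITIVE', 'score': 0.99}
        print(f"{result['label']:>8}  {result['score']:.2f}  {text}")

In practice, a domain-specific model and far larger volumes of text (filings, transcripts, news) would be used, but the workflow is the same: feed in text, get back a tone label and a confidence score.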

What are the technologies or types of data usage involved?

Deep learning and NLP

What is the application, and what drives the use in this scenario/case/example?

AI and NLP are increasingly used by investors to analyze and review the ESG impact of investments, for example to:

  • Analyze earnings calls for their ESG focus
  • Analyze reporting on ESG performance
  • Analyze industry news, etc.

Such applications are, for example, now incorporated into major indices such as the Dow Jones Sustainability Index; a sketch of this kind of topic screening is shown below.
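A minimal, illustrative sketch of how such ESG topic screening could be automated follows. It assumes the Hugging Face transformers zero-shot classification pipeline; the candidate topic labels and the earnings-call snippet are made up for illustration.

    # Sketch: tagging a text snippet with ESG-related topics via
    # zero-shot classification (assumed: default transformers checkpoint).
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification")

    esg_topics = ["climate change", "labor practices", "board governance", "not ESG-related"]

    snippet = ("On the earnings call, management highlighted new targets for "
               "renewable energy sourcing across all manufacturing sites.")

    result = classifier(snippet, candidate_labels=esg_topics)
    for label, score in zip(result["labels"], result["scores"]):
        print(f"{label:20s} {score:.2f}")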

Which ethical issues [a] are at play here?

Can those algorithms be biased or tricked (see, e.g., Coded Bias)? To what extent can they be applied to unethical uses? What about the general tradeoff between return and sustainability? Will investors who do not have access to these technologies be treated unequally?

What group of people are at risk? What group of people might gain?

Investors who invest in pseudo-sustainable stocks, public companies, and basically all stakeholders.

Sustainable Indices [b]

What is the wider impact of this dilemma?

Using AI and NLP allows a better overview and analysis of issues such as the ESG impact of investments and may drive and support green transitions. The use of such algorithmic tools may, however, also entail risks such as:

  • Unsustainable investments
  • Slow energy transition
  • Financing of high carbon dioxide emitting industries and operations

Cultural [c] aspects important for this dilemma

How exact is NLP processing? Can the true ESG performance really be determined? Can this technology also be applied to bad causes, or be cheated?

Solution oriented part

Possible controls [d] and comments on solution

Transparency:

XAI: the use of explainable AI toolkits may help detect fraudulent applications or bias.
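As one hedged illustration of what such an XAI check could look like, the sketch below uses the LIME toolkit to show which words drive a toy text classifier's ESG-related prediction. The tiny training corpus, the labels, and the sample sentence are invented assumptions, not a real ESG dataset or model.

    # Sketch: explaining a text classifier's ESG prediction with LIME.
    # The toy corpus and labels below are illustrative assumptions.
    from lime.lime_text import LimeTextExplainer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "cut emissions and expanded renewable energy use",
        "improved worker safety and board diversity",
        "published audited climate targets",
        "fined for toxic waste dumping",
        "accused of greenwashing its sustainability report",
        "repeated labor violations at supplier factories",
    ]
    labels = [1, 1, 1, 0, 0, 0]   # 1 = positive ESG signal, 0 = negative

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)

    explainer = LimeTextExplainer(class_names=["negative ESG", "positive ESG"])
    sample = "the sustainability report highlights renewable energy but omits emissions data"
    explanation = explainer.explain_instance(sample, clf.predict_proba, num_features=5)

    # Words that most influenced the prediction, with their weights.
    for word, weight in explanation.as_list():
        print(f"{word:15s} {weight:+.3f}")

Inspecting which words carry the weight makes it easier to spot when a model is reacting to boilerplate sustainability language rather than to substantive disclosures.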

Could this be acceptable if…?

The use of AI/NLP is declared and the risks are stated, so that users and the public are aware of how decisions were reached.

Any other observation and conclusion

Bibliography:

https://towardsdatascience.com/nlp-meets-sustainable-investing-d0542b3c264b

Rather than manually reading the report and analyzing it, an NLP model could perform downstream NLP tasks such as text classification and sentiment analysis on the report, reducing the complexity of analyzing a report and making the whole process more time and resource-efficient. In this case, the NLP model would classify the excerpt as relating to “Climate Change” with a sentiment value of “positive”.

https://www.dbresearch.com/PROD/RPS_EN-PROD/PROD0000000000478852/Big_data_shakes_up_ESG_investing.pdf?undefined&realload=TWZFbzsQvvC6Za6vrE6UTBIVesdISTC3p060C2al6B9yioXKiEaftWbxSU0KVh~4

https://databricks.com/de/blog/2020/09/09/its-an-esg-world-and-were-just-living-in-it.html

[a] To add to what was already written.

A problem of automation bias can clearly occur in this scenario. As Andrew put forward, the AI systems (NLP, ML, DL) will focus on what they were programmed for and will work in an automatic way. The information they are able to collect or grasp may not be sufficient or complete enough to keep up. In this particular case it can also create a big problem of responsibility: since AI would be used to review or find data for ESG strategies, it could create an unethical or societal problem, and who would be to blame then?

That is, if the problem is even noticed at all; should the data be checked before its use and application?

[b] Are they at risk, do they gain, or both? I think it is important to be precise here.

[c] ESG strategies and problems can affect different cultures in many different ways. AI and NLP could apply a solution that is good for one community while reinforcing discrimination against another. Societal and environmental problems can have more subtle ins and outs than expected or predicted.

Also, NLP has to account for all the subtleties of dialects; NLP will require different training sets depending on language and context.

[d] Also, regular control over the data provided by the algorithms. As said in the "could this be acceptable if" part, we should change the way we view and accept AI outputs as "truths". Therefore there should still be a human-in-the-loop (HITL) model to control the data furnished and the decisions based on that data.
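As a small, illustrative sketch of such a human-in-the-loop control (built on the same assumed zero-shot classifier as above, with an invented confidence threshold), model outputs below a given confidence could be routed to a human reviewer instead of being accepted as "truth":

    # Sketch: human-in-the-loop (HITL) gate for ESG topic classification.
    # Assumptions: transformers zero-shot pipeline, illustrative labels,
    # and an arbitrary confidence threshold that would need calibration.
    from typing import Optional
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification")
    esg_topics = ["climate change", "labor practices", "board governance", "not ESG-related"]

    CONFIDENCE_THRESHOLD = 0.75   # assumed cutoff
    review_queue = []             # items a human analyst must check before use

    def classify_with_hitl(text: str) -> Optional[dict]:
        """Return the model's label only when it is confident; otherwise defer to a human."""
        result = classifier(text, candidate_labels=esg_topics)
        top_label, top_score = result["labels"][0], result["scores"][0]
        if top_score < CONFIDENCE_THRESHOLD:
            review_queue.append({"text": text, "model_guess": top_label, "score": top_score})
            return None   # no automatic decision
        return {"label": top_label, "score": top_score}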
