Case Dilemma Template – CC1 Andrew
Explorative part

What is the ethical dilemma? | Should machine learning be trusted to allocate funding to scientific research?
What is the content? | DELPHI (Dynamic Early-warning by Learning to Predict High Impact), a system published in Nature Biotechnology, predicts which research[a] will be the most ‘impactful’. Use cases include diversifying funding portfolios and reducing their risk.
What are the technologies or types of data usage involved? | Machine learning
What is the application, and what drives the use in this scenario/case/example? | This framework is intended to find ‘diamond in the rough’ research that will have the greatest impact. With an issue as pressing as climate change, such a tool could be incredibly valuable in selecting which research to fund.
Which ethical issues are at play here? | The algorithm can be biased, making it even harder for researchers who already face challenges in getting their work funded.[b]
What group of people are at risk? What group of people might gain? | Those who already struggle to get their research funded will likely continue to be left out, as existing biases in the scientific field will be baked into the framework. Funders who want to take on less risk stand to gain a tool for selecting the most promising research to fund.
What is the wider impact of this dilemma? | Climate change is a global issue, but tools like this might steer research toward primarily benefiting the regions that hold the most capital to fund it.
Cultural aspects important for this dilemma | Impact for whom? If certain climate solutions work better in areas where less funding is available, will that research have a harder time getting funding from other sources?
Solution-oriented part

Possible controls and comments on solution | Perhaps we can add some detail on how machine learning becomes biased. For example, no matter how sophisticated the algorithm is, machine learning learns from collected data, and that data may already be biased: “Entrenched discrimination — from the criminal legal system, to housing, to the workplace, to our financial systems. Bias is often baked into the outcomes the AI is asked to predict.” Personally, I feel this is not the fault of machine learning itself. Tracing back to the starting point, machine learning is like an oven: the problem lies in the quality of the ingredients, that is, the raw data.
Could this be acceptable if…? | Human oversight and/or expert interpretation before the data gets used; more transparency in the development of NLP or other algorithms for analyzing ESG strategies; the data provided by the algorithms is viewed critically and not accepted as the total authority on ESG strategy analysis.
Any other observation and conclusion |
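The point above about bias living in the training data can be shown with a toy sketch. This is not the DELPHI system; it is a hypothetical example with synthetic data, where a naive learner trained on historically biased funding decisions reproduces that bias for two proposals of identical quality.

```python
# Illustrative sketch (not DELPHI): a model trained on biased historical
# funding decisions reproduces that bias. All data here is synthetic.

from collections import defaultdict

# Synthetic history: (institution_prestige, quality, was_funded).
# Funders historically favoured high-prestige institutions, so the label
# correlates with prestige even when research quality is equal.
history = [
    ("high", 0.9, True), ("high", 0.5, True), ("high", 0.4, True),
    ("low", 0.9, False), ("low", 0.8, False), ("low", 0.5, False),
]

# A naive "learner": predict the majority historical outcome per group.
outcomes = defaultdict(list)
for prestige, _quality, funded in history:
    outcomes[prestige].append(funded)

def predict(prestige):
    votes = outcomes[prestige]
    return sum(votes) > len(votes) / 2  # majority vote

# Two proposals of identical quality get opposite predictions:
print(predict("high"))  # True  -> funded
print(predict("low"))   # False -> not funded
```

Note that quality never enters the prediction at all: the "oven" faithfully bakes whatever bias the ingredients contain.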
[a] In this paper, they talk specifically about biotechnology research papers: “DELPHI early-warning signal identifies seminal biotechnology papers and prospectively flags interesting research”.
[b] We could also add the other side: it may make it even easier for research institutes that already have funding. In research, this is called the “Matthew effect”: institutes that tend to be successful in their field (and so are able to get funding) are likely to stay dominant. Put simply: the rich get richer and the poor get poorer.
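The Matthew-effect dynamic in footnote [b] can be sketched with a minimal, hypothetical simulation: if each round's grant budget is split in proportion to funding already received, an early lead compounds, so the absolute gap between institutes keeps growing.

```python
# Hypothetical sketch of the Matthew effect (cumulative advantage):
# each round, a fixed budget is allocated in proportion to the funding
# each institute has already accumulated.

def allocate(totals, budget=100.0, rounds=10):
    totals = list(totals)
    for _ in range(rounds):
        pool = sum(totals)
        # Each institute's share of new money mirrors its current share.
        totals = [t + budget * t / pool for t in totals]
    return totals

# Two institutes start with a modest 60/40 split.
end = allocate([60.0, 40.0])
print(end[0] / end[1])  # the ratio stays (approximately) 1.5
print(end[0] - end[1])  # the absolute gap has grown well beyond 20
```

Under purely proportional allocation the *ratio* is frozen while the absolute gap widens every round; any additional advantage for incumbents (e.g. a prestige bonus) would make the ratio itself diverge as well.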