Chloe Yip on “The Application of AI and Big Data on the Workforce”

Author:

Currently living in Hong Kong, Chloe earned her Master’s in Intelligent Building Technology and Management from the Department of Mechanical Engineering at The Hong Kong University of Science and Technology, and now works as a power plant maintenance engineer. She is the recipient of the Young Woman Engineer of the Year Award 2022 from IET Hong Kong. As part of her collaboration with GAEIA, Chloe focused on analyzing the use of AI and big data in the hiring process.

Dilemma

While AI and machine learning help many companies manage the sheer scale of their application pipelines, their adoption in recruitment raises concerns about the ethics and equality of AI-driven CV scanning and hiring decisions.

Through this collaboration, Chloe is working to understand how hiring teams can use AI without losing the individuality and strengths of candidates that an automated scan may miss. She also hopes to investigate how government policy and financial incentives could support an international standard and code of practice for the use of AI and big data in the workforce.

Ethical Issues

  1. Unconscious Bias: Algorithms are only as unbiased as their training data. A training set that lacks diversity will produce an algorithm with prejudiced decision making (a short sketch of this follows the quote below).
  2. Privacy: Algorithms may be trained on analytics drawn from existing workers without their consent.
  3. Flexibility: An exclusively AI-driven hiring process lacks the flexibility to cater to different cases.
  4. Fairness: A candidate with weaker language skills may be at a disadvantage in an AI selection system, despite potentially having a very strong technical background.
  5. Accuracy: Current AI models cannot judge whether the data they are given is correct and accurate.

For example, you cannot use an American database in Hong Kong; we have many background, geographical, and cultural differences. Companies need to use data and parameters that suit their own needs.

Chloe on choosing training data
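To make the bias point concrete, here is a minimal Python sketch. The records and the "school tier" feature are invented for illustration, not drawn from any real system: a screener fitted to skewed historical hiring data simply reproduces the skew rather than measuring ability.

```python
# Hypothetical historical hiring records: (school_tier, was_hired).
# Suppose past recruiters favoured tier_1; the data encodes that bias.
history = (
    [("tier_1", True)] * 80 + [("tier_1", False)] * 20
    + [("tier_2", True)] * 20 + [("tier_2", False)] * 80
)

def historical_hire_rate(records, tier):
    """Fraction of past applicants from `tier` who were hired."""
    outcomes = [hired for t, hired in records if t == tier]
    return sum(outcomes) / len(outcomes)

# A naive screener that ranks candidates by their group's historical
# hire rate "learns" the recruiters' prejudice, not candidate ability.
for tier in ("tier_1", "tier_2"):
    print(tier, historical_hire_rate(history, tier))
# tier_1 0.8
# tier_2 0.2  -> tier_2 candidates are systematically down-ranked
```

A more diverse, audited training set would not show this gap, which is exactly why the choice of data Chloe describes matters.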

AI will look at how much time employees spend on their computers: what kinds of applications they use, how many emails they have been through, what the quality of their work is, and so on. Everything is done by AI. And once their KPI (Key Performance Indicator) drops slightly, they have a higher chance of being laid off unsympathetically.

Chloe on privacy

If someone lies, the AI system may not detect it, or may not judge it very accurately. An experienced hiring manager, however, would probably notice that something is wrong.

Chloe on fairness and accuracy

For example, if you are not a native speaker, you may encounter a language barrier: occasionally unfamiliar words or grammar problems. But an AI screen is based entirely on the language. That’s why, even if you have a very strong technical background, the AI may eliminate you just because of the language. And that’s why these judgments may need a human.

Chloe on flexibility and impartiality
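A minimal sketch of how this failure mode can arise; the keyword lists, scoring rule, and sample sentences below are hypothetical, not any real vendor’s method. When a fluency term multiplies the score, weak grammar can outweigh a perfect technical match:

```python
# Hypothetical CV scorer: everything here is invented for illustration.
TECH_KEYWORDS = {"turbine", "maintenance", "scada", "commissioning"}
COMMON_WORDS = {"i", "have", "years", "of", "experience", "in", "and",
                "the", "with", "worked", "on"}

def score_cv(text: str) -> float:
    words = text.lower().replace(".", "").split()
    # Technical match: share of required keywords found in the CV.
    tech = len(TECH_KEYWORDS & set(words)) / len(TECH_KEYWORDS)
    # Crude fluency proxy (a stand-in for a grammar model): share of
    # words the system recognises as well-formed usage.
    fluency = sum(w in COMMON_WORDS or w in TECH_KEYWORDS
                  for w in words) / len(words)
    return tech * fluency  # weak fluency erases technical strength

native = "I have years of experience in turbine maintenance and scada."
non_native = "Turbine maintenance scada commissioning strong knowledges many year."
print(score_cv(native))      # 0.75: 3 of 4 keywords, perfect fluency
print(score_cv(non_native))  # 0.50: all 4 keywords, but fluency only 0.5
```

The second candidate matches every technical keyword yet scores lower, mirroring the situation Chloe describes; a human reviewer would weigh the technical evidence differently.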

Cultural Impacts

AI may not take account of cultural differences across geographical locations or job types: for example, attitudes toward work styles, the weighting of creativity versus a solid technical background, attitudes toward Diversity and Inclusion (D&I), expected work hours, and so on.

Different countries have different cultural backgrounds. Electronics companies in north-east Asia are famous for employees working around the clock and contributing to the success of the companies, with barely any work-life balance. One such company came into the U.S. hoping to implement the same strategy and failed, largely because people there have different working styles. When it comes to this kind of cultural aspect, I don’t think AI can thoroughly address the hiring principles. Humans can be diverse, but AI treats everyone the same.

Chloe on the impact of cultural diversity when hiring

Solutions

There are two parts to the responsible use of AI in the workforce. First, lawmakers and governments need to establish clear policies for the use of AI, covering mechanisms for selecting candidates, performance evaluation, and privacy. Second, companies and AI creators need clear financial incentives to use human-centered AI algorithms responsibly.

Right now, no government regulates how AI is used, and any code of conduct is voluntary. For example, ChatGPT limits its use of harmful data and refuses to answer inappropriate questions.

If AI can be developed legally but unethically, it may cause problems that harm society once it reaches the public, especially where decision making and privacy are involved. I believe standards or codes of practice composed by authoritative organizations are essential to establish guidelines, or even legal requirements, and to encourage the responsible development and use of AI and big data, including their use in the workforce.

Chloe on government regulation and international standards

Regarding financial incentives, take green finance as an example. If companies want to be listed, they need to fulfill and disclose ESG requirements, and in return they may gain economic benefits. Another example is ISO standards, which are not regulatory but are widely recognized by international industries; businesses and customers trust and prefer providers that have met certain criteria, so a provider may win business opportunities by being recognized under the international standard.

In my opinion, a similar approach could work for AI as well. Financial regulators or standards developers should compose a comprehensive benchmarking framework, which would in turn bring economic incentives to developers and encourage the responsible and ethical development of AI, as well as its application to the workforce.

Chloe on financial incentives

Chloe’s Ethical Dilemma Template