Information Systems Integration – Messina

Rooting Out Bias and Discrimination from AI and ML

Artificial intelligence (AI) and machine learning (ML) hold great promise for our future, but how can we be sure their decisions will be unbiased and non-discriminatory? The World Economic Forum recently published a white paper by Erica Kochi, co-founder of UNICEF Innovation at the United Nations Children’s Fund. In How to Prevent Discriminatory Outcomes in Machine Learning, Kochi discusses where she sees the greatest risk of AI/ML discrimination and proposes four principles and three steps that firms should adopt to prevent discrimination and minimize bias:

Four Central Principles


  1. Active Inclusion
    • During a machine learning program’s development, actively recruit diverse input, particularly from those most affected by the system
  2. Fairness
    • During development of a machine learning program, define what fairness is and ensure it is made a priority during development
  3. Right to Understanding
    • When AI is used in a life-altering or human-rights-related situation, state its involvement clearly and in plain language for end users, and provide detailed project documentation and source code to the relevant authorities. If this cannot be done, question whether machine learning is appropriate at all.
  4. Access to Remedy
    • Project developers and implementers should actively seek out discriminatory results and rectify them, including by establishing a process for remedying discriminatory outcomes after the system is deployed.
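The "Fairness" principle asks teams to define fairness concretely before development. One common way to operationalize such a definition is a demographic parity check: compare the rate of positive outcomes across groups. The white paper does not prescribe any particular metric, so the function below is an illustrative sketch; the function name, toy data, and group labels are my own assumptions.

```python
def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-outcome rates
    between any two groups. 0.0 means perfectly equal rates."""
    counts = {}  # group -> (positive decisions, total decisions)
    for pred, group in zip(predictions, groups):
        pos, total = counts.get(group, (0, 0))
        counts[group] = (pos + (1 if pred == 1 else 0), total + 1)
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

# Toy example: loan approvals (1 = approved) for two groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.75 vs 0.25 -> 0.5
```

A team could agree on an acceptable gap up front and treat anything above it as a blocking defect, which makes "fairness as a priority" auditable rather than aspirational.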

Three Steps

  1. Identify human rights risks related to business operations.
  2. Act to increase governance and ensure ethical standards are robust or updated to prevent and mitigate risk.
  3. Be transparent and open about work related to human rights risks.

Personally, I think this white paper should be read by anyone interested in machine learning. It provides compelling examples of how machine learning is already making life-altering decisions. For instance, in developing nations, lenders like Tala use AI to decide whether a person is eligible for a small loan, drawing on data from the internet such as social media posts. In those countries, urban men tend to be the most proficient internet users, so a model trained on such data may disadvantage populations like rural women, creating a bias that jeopardizes their already limited access to capital. AI/ML is also used in human-rights-related scenarios, such as deciding whether someone should be released from prison. To me, transparency is the linchpin that will determine how an organization is judged if a discriminatory outcome arises. The white paper makes clear that common standards for measuring bias should be established, and that verified third-party auditing of machine learning systems (à la rare earth mining) should be required when human rights are at risk.

White Paper: https://www.weforum.org/whitepapers/how-to-prevent-discriminatory-outcomes-in-machine-learning

One Response to Rooting Out Bias and Discrimination from AI and ML

  • This is an interesting proposition, and one that I’d never previously considered when exploring the innumerable possibilities AI, ML, and DL can yield. Thank you for shedding light on this topic, Henry. I believe this is something developers should be actively cognizant of when creating applications that utilize these technologies. However, it’s important that, in the pursuit of counteracting bias and discrimination against one group, we don’t consequently discriminate against the previously favored group. That would ultimately be counterproductive, and the overarching issue would remain.
