Rooting Out Bias and Discrimination from AI and ML
Artificial intelligence (AI) and machine learning (ML) hold great promise for our future, but how can we be sure they will be unbiased and non-discriminatory? The World Economic Forum recently published a white paper by Erica Kochi, co-founder of UNICEF Innovation at the United Nations Children’s Fund. In How to Prevent Discriminatory Outcomes in Machine Learning, Kochi discusses where she sees potential for AI/ML discrimination. She proposes four principles and three steps that firms must adopt to prevent discrimination and minimize bias:
Four Central Principles
- Active Inclusion
- During a machine learning program’s development, actively recruit diverse input, particularly from those most affected by the system
- Fairness
- During development of a machine learning program, define what fairness means for the system and ensure it remains a priority
- Right to Understanding
- When AI is used in a life-altering or human-rights-related situation, state this clearly and simply to end users, and provide detailed project documentation and source code to authorities. If this cannot be done, question whether machine learning is appropriate.
- Access to Remedy
- Project developers and implementers should actively seek out any discriminatory results and rectify them, including by creating a process to remedy discriminatory outcomes after implementation.
Three Steps
- Identify human rights risks related to business operations.
- Act to increase governance and ensure ethical standards are robust or updated to prevent and mitigate risk.
- Be transparent and open about work related to human rights risks.
Personally, I think anyone interested in machine learning should read this white paper. It provides striking examples of how machine learning is already making life-altering decisions. In developing countries, for instance, lenders like Tala use AI to decide whether a person is eligible for a small loan, drawing on internet data such as social media posts. Because urban men tend to be the most proficient internet users in those countries, such systems may be biased against populations like rural women, jeopardizing their already limited access to capital. AI/ML is also used in human-rights-related scenarios, such as deciding whether someone should be released from prison.

To me, transparency is the linchpin that will determine how an organization is viewed if a discriminatory circumstance arises. It is clear from this white paper that common standards for measuring bias should be established. There should also be verified third-party auditing of machine learning systems (à la rare earth mining) when human rights are at risk.
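To make the idea of "measuring bias" concrete, here is a minimal sketch of one common fairness metric, the demographic parity gap: the difference in approval rates between two groups. The group names and decision data below are invented for illustration and have nothing to do with Tala's actual system; real audits would use far richer metrics and real outcome data.

```python
# Hypothetical illustration of a demographic parity check for a lending model.
# Decisions: 1 = loan approved, 0 = loan denied. All data here is made up.

def approval_rate(decisions):
    """Fraction of applicants in a group who were approved."""
    return sum(decisions) / len(decisions)

urban_men = [1, 1, 0, 1, 1, 0, 1, 1]     # 75% approved
rural_women = [0, 1, 0, 0, 1, 0, 0, 0]   # 25% approved

# A gap near 0 suggests parity; a large gap flags a potential disparity
# worth investigating (it is evidence, not proof, of discrimination).
gap = approval_rate(urban_men) - approval_rate(rural_women)
print(f"Demographic parity gap: {gap:.2f}")  # prints 0.50
```

Demographic parity is only one possible definition of fairness, and different definitions can conflict; that is exactly why the white paper asks teams to decide explicitly, and early, which definition their system should satisfy.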