Research and advisory firm Gartner Inc. predicts that artificial intelligence (AI) will be present in 80% of emerging technologies within two years. While AI is an emerging technology in its own right, many other emerging technologies incorporate elements of it in some capacity. AI itself seems to occupy multiple points on the hype cycle at once, situated somewhere between the Innovation Trigger, the Peak of Inflated Expectations, and the Trough of Disillusionment. Brian Burke, Vice President of Research at Gartner Inc., said that the proliferation of AI in emerging technologies is one of several ‘megatrends’ affecting the hype cycle, alongside do-it-yourself biohacking, next-generation hardware like 5G networking and quantum computing, and immersive experiences like augmented reality and the connected home. While artificial general intelligence, that is, a computer able to perform any task a human can, is a long way off, AI shows up in other emerging technologies like virtual assistants, autonomous driving, deep neural nets, and cloud-computing platforms. This can include voice recognition and transcription; facial recognition; analysis of media like text, photos, or videos; and content filtering. Few other emerging technologies can be applied to as many different fields and applications as AI. Can you think of any other once-emerging technology that was as common within other emerging technologies as AI is now?
Artificial intelligence (AI) and machine learning (ML) hold a lot of promise for our future, but how can we be sure they will be unbiased and non-discriminatory? The World Economic Forum recently published a white paper by Erica Kochi, co-founder of UNICEF Innovation at the United Nations Children’s Fund. In How to Prevent Discriminatory Outcomes in Machine Learning, Kochi discusses where she sees the potential for AI/ML discrimination. She proposes four principles and three steps that firms should adopt to prevent discrimination and minimize bias:
Four Central Principles
- Active Inclusion
  - During a machine learning program’s development, actively recruit diverse input, particularly from those most affected by the system.
- Fairness
  - During development, define what fairness means for the system and make it a priority.
- Right to Understanding
  - When AI is used in a life-altering or human-rights-related situation, state this clearly and simply for end users, and provide detailed project documents and source code to authorities. If this cannot be done, question whether machine learning is appropriate.
- Access to Remedy
  - Project developers and implementers should actively seek out discriminatory results and rectify them, including creating a process to remedy discriminatory outcomes after implementation.

Three Steps
- Identify human rights risks related to business operations.
- Act to strengthen governance and ensure ethical standards are robust and kept up to date, in order to prevent and mitigate risk.
- Be transparent and open about work related to human rights risks.
Personally, I think this white paper should be read by anyone interested in machine learning. It provides interesting examples of how machine learning is already making life-altering decisions. For instance, in developing countries, lenders like Tala use AI to decide if a person is eligible for a small loan. Tala draws on internet data, such as social media posts. In such countries, urban men tend to be the most proficient users of the internet, so a system trained on this data may be biased against groups like rural women, jeopardizing their already limited access to capital. AI/ML is also used in human-rights-related scenarios, like deciding whether someone should be released from prison. To me, transparency seems to be the linchpin that will determine how an organization is viewed if a discriminatory circumstance arises. It is clear from this white paper that common standards for measuring bias should be established. There should also be verified third-party auditing of machine learning systems (à la rare earth mining) when human rights are at risk.
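To make the idea of "measuring bias" concrete, here is a minimal sketch of one common fairness metric, demographic parity, which compares a model's approval rates across groups. This is not Tala's actual system or the white paper's method; the group names and data are invented for illustration.

```python
# Illustrative demographic-parity check: compare approval rates across
# groups and report the largest gap. Group labels and decisions below
# are hypothetical.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    decisions = [
        ("urban_men", True), ("urban_men", True), ("urban_men", False),
        ("rural_women", False), ("rural_women", False), ("rural_women", True),
    ]
    print(approval_rates(decisions))  # per-group approval rates
    print(parity_gap(decisions))      # a large gap flags possible bias
```

A shared standard might specify metrics like this (there are several competing ones), an acceptable threshold, and who audits the results.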
When considering systems thinking, one must consider how a small component of a process plays into the larger, holistic picture of a whole system. Many firms tout the importance of systems thinking, going so far as to label themselves experts in “information systems.” However, I believe that a systems approach should be complemented by a capability approach, one that emphasizes what a feature provides rather than the systems that run the backend processes.
When considering the rising trend of unstaffed convenience stores like Amazon Go or Zippin, a great deal is taking place from a systems perspective. There are biometric or app-based systems to identify a customer, geofencing sensors to track the customer's movement, a supply-chain system to reorder goods, an inventory management system to notify staff when a product has run out, a camera system to monitor which goods the customer chooses, and payment processing when the customer leaves the store. However, the customer probably won’t think of the revolutionary marriage of complex systems needed to make the store work. The customer will think from a capability approach: unstaffed convenience stores allow me to purchase goods from a store without having to check out.
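The interplay of those systems can be sketched in a few lines of code. This is a hypothetical toy model, not any vendor's real architecture (Amazon Go's and Zippin's internals are proprietary); the class and method names are invented for illustration.

```python
# Toy model of an unstaffed store: identification at entry, sensor-based
# pickup tracking, and automatic payment at exit. All names are hypothetical.

class CheckoutFreeStore:
    def __init__(self, prices):
        self.prices = prices   # inventory system: SKU -> price
        self.carts = {}        # camera/sensor system's view: customer -> items

    def enter(self, customer_id):
        # app-based or biometric identification at the gate
        self.carts[customer_id] = []

    def observe_pickup(self, customer_id, sku):
        # camera and shelf-sensor fusion attributes a pickup to a customer
        self.carts[customer_id].append(sku)

    def exit(self, customer_id):
        # geofencing detects the exit; the total is charged automatically
        items = self.carts.pop(customer_id)
        return sum(self.prices[sku] for sku in items)

store = CheckoutFreeStore({"soda": 2.50, "chips": 1.75})
store.enter("alice")
store.observe_pickup("alice", "soda")
store.observe_pickup("alice", "chips")
print(store.exit("alice"))  # 4.25
```

From the capability view, the customer only sees `enter`, pick up, `exit`; everything else is backend plumbing.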
While systems thinking is essential when adding a new capability or feature, capability thinking ensures that IS professionals are aligned both with the big picture of what the rest of the company is working to provide and with the customer’s view of the firm as an enabler. What other disruptive technologies are thought of in terms of capability thinking yet have complex systems behind them?