Rima Arnaout, an assistant professor and practicing cardiologist at UC San Francisco, created a neural network that outperformed human cardiologists at a task involving heart scans. She does not believe the AI she created is ready to replace human cardiologists yet, but it easily completed the first step a cardiologist takes when evaluating an echocardiogram. So far the system performs only that first step in analyzing a heart image and reaching a diagnosis; Arnaout is now working to extend the technology so it can take the next steps of identifying specific diseases and heart problems. Experimental artificial intelligence systems are making rapid progress in medicine, so the field could see major changes in the near future. Despite these advances, most people still prefer to see a human doctor when they have a health condition. Do you think we will ever reach a point where the majority of the population is comfortable being diagnosed by a machine rather than a doctor? Will AI reduce job opportunities in the medical field in the near future?
Consumer motivation in the beauty industry is shifting, and personalization is now, more than ever, the key to customer loyalty. Despite the millions of products on the market, the beauty industry as a whole offers remarkably little personalization, but artificial intelligence just might be able to change that. Companies are now starting to embrace the uniqueness of each customer and to create personalized products designed specifically for the individual consumer. New technologies, such as machine learning and artificial intelligence, raise the level of personalization companies can achieve.
Earlier this month, the French cosmetics group L’Oréal announced its acquisition of the Canadian beauty tech company ModiFace. ModiFace is one of the biggest names in BeautyTech today, having developed over 200 beauty apps for more than 80 brands. These include cosmetics try-on apps and chatbots for Estée Lauder Cos., Smashbox, Allergan, and Coty; Clairol’s 3D hair color simulator; and Sephora’s ColorIQ and upcoming LipIQ technology. The acquisition will allow L’Oréal to produce more digital services, such as tools that let customers test various beauty products virtually through augmented reality and artificial intelligence. More specifically, the company is looking to build an app that accesses the user’s camera so they can try on makeup virtually and in real time. Throughout its existence, L’Oréal has acquired many cosmetics companies, but this is the first tech company it has taken over. An acquisition like this by such a large player has the potential to spark the digital acceleration of the beauty industry as a whole.
A question I have is, in what other ways could technology be useful for the cosmetics industry? Is this the extent of it, or do machine learning and artificial intelligence technologies have the ability to truly transform the beauty industry?
While it may seem strange, the fashion industry is predicted to be among the next industries disrupted by artificial intelligence. The application Pureple offers many fashion-related services, including suggesting outfits based on pictures that users submit of the clothes in their closet. As on Tinder, users can swipe left or right on outfits according to their preferences. One shortcoming of the application is the “tedious upkeep” of submitting pictures. Many other platforms like Pureple exist, from Kim Kardashian West’s Screenshop to Amazon’s Echo Look. Amazon has “developed an algorithm that learns about a particular style of fashion from images, and can then generate new items in similar styles from scratch,” and plans to incorporate this into the Echo Look application. What other future innovations could make this type of application more useful to users? Do you think such an application could become successful enough to turn users away from human designers and personal shoppers?
When discussing AI, many focus on the incredible suite of capabilities the technology can bring to the table to make our lives easier, such as personal assistants and self-driving cars. However, for these functions to provide the most utility to human users, the AI behind them needs to learn and act in a way that matches the end users’ own success metrics and standards. This is where the concept of AI alignment comes into play. AI alignment is the study and practice of building AI utility functions that are in line with our own. This practice requires the designer to establish a detailed point system that assigns points based on the positive or negative utility that human end users realize from the outcome of specific actions. If the point system is specified in too little detail, unintended negative outcomes can result.
A simple example of AI misalignment can be seen in Disney’s Fantasia, where Mickey Mouse brings a broom to life and orders it to fill a cauldron. The broom is not aligned to the task in sufficient detail, and it ends up flooding the room. The utility function in this case can be summarized as “if the cauldron is full = 1 point; if the cauldron is empty = 0 points.” If we applied AI alignment principles to this situation, the function would include more detail to align the intelligent agent’s values with those of the end user, such as “if the room floods = -10 points; if someone dies in the pursuit or as a result of this task = -1,000 points; if the task is completed within 10 minutes = +0.2 points,” and so on. By adding this nuance, the AI can complete the task as the end user intended, without leaving room for unintended consequences.
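The broom example can be made concrete with a small sketch. Below, two hypothetical utility functions score the same outcomes: a naive one that rewards only a full cauldron, and an "aligned" one that adds the penalty and bonus terms described above. All point values and field names are illustrative, taken directly from the example rather than from any real AI system.

```python
# Toy illustration of the Fantasia broom: the same outcomes scored
# by a naive utility function and by one with alignment terms added.

def naive_utility(outcome):
    """Rewards only the stated goal: a full cauldron."""
    return 1 if outcome["cauldron_full"] else 0

def aligned_utility(outcome):
    """Adds penalties and bonuses encoding the user's real preferences."""
    score = 1 if outcome["cauldron_full"] else 0
    if outcome["room_flooded"]:
        score -= 10          # flooding the room is bad
    if outcome["someone_died"]:
        score -= 1000        # catastrophic side effects are far worse
    if outcome["minutes_taken"] <= 10:
        score += 0.2         # small bonus for finishing quickly
    return score

# What the broom actually did: cauldron full, but the room flooded.
fantasia = {"cauldron_full": True, "room_flooded": True,
            "someone_died": False, "minutes_taken": 5}
# What Mickey intended: cauldron full, no side effects.
intended = {"cauldron_full": True, "room_flooded": False,
            "someone_died": False, "minutes_taken": 5}

# The naive function cannot tell the two behaviors apart...
print(naive_utility(fantasia), naive_utility(intended))    # 1 1
# ...while the aligned function correctly prefers the intended one.
print(aligned_utility(fantasia), aligned_utility(intended))
```

The point of the sketch is that misalignment is not the agent "misbehaving": under the naive scoring, flooding the room is a perfectly optimal policy. Only by adding terms for the outcomes we actually care about does the utility function distinguish the intended behavior from the disastrous one.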
What are some other examples of proper or improper AI alignment in technology today? How can integrative thinking be applied to AI alignment? How do differing cultures impact deriving end user utility?