Don’t forget to complete the online course evaluation for this class!
You can access the ESFF site here. The deadline is May 2 (well, technically May 3 at 8:00 AM, but are you really going to get up early on May 3 to do it?).
It will only take you five minutes, but it is important.
Some things about the course evaluations:
- Your feedback is anonymous.
- I don’t get the results until after the semester is completely over.
- I really look at the feedback and use it to make changes for future semesters.
As artificial intelligence grows more advanced, many questions need to be considered regarding risk management and insurance. One huge question: who is at fault when an AI-powered technology that makes its own decisions harms others? The straightforward example provided by Business Insurance was what happens when an AI-powered car hits someone? But the question stretches even further than that, because “AI will have an impact on such diverse areas such as the economy, the environment, politics and the legal system”. Obviously there are many ethical questions that come into play here too, but it will be very interesting to see how these innovations are handled from an insurance perspective.
Who do you think should be held responsible for the decisions made by AI?
In September 2016, Salesforce bet on AI to grow its customer relationship management business, and it looks like that bet has slowly but surely paid off. In the two years since it was introduced, Salesforce’s Einstein has delivered millions upon millions of predictions to Salesforce customers, and the company continues to deepen Einstein’s abilities by acquiring more data-access capabilities.
The bet has also delivered financial results: Salesforce saw 25% growth in the fiscal year ending January 31, with its stock skyrocketing as a result. Now Salesforce must change the face of its AI, making sure that consumers see the deep difference in skill between Salesforce’s Einstein and the simple language-processing abilities of assistants like Alexa and Siri.
As the elderly population increases, technology has been steadily progressing to assist those who need it. The basic categories affecting these people are healthcare and lifestyle. For healthcare, medical technologies have been introduced to assist. These tools will most likely require the help of an elderly person’s children or a caregiver, so there are some barriers to entry. Additionally, there will not be as many private care options, as explained by the “caregiving cliff,” but AI could offer a cheaper alternative for some of the technological issues that occur. For example, Memory Lane, the 3506 project, is a product geared toward those with memory loss, and specifically the elderly. Although these technologies may be difficult for the elderly to understand, they can bring families together and create stronger bonds and connections. Another drawback of these applications is the cost of the products, which can create a socioeconomic divide between those who can afford them and those who cannot. Do you think that the elderly will be able to overcome the AI learning curve? Will caregivers be replaced by AI as the technology advances? How do you think AI will advance in the future to aid the younger population?
In the face of a technological dawning, consumer and business markets face new issues every day. As artificial intelligence becomes both available and usable for the everyday consumer, new issues and controversies arise daily. Two weekends ago, a self-driving Uber vehicle struck a 49-year-old pedestrian, causing Uber to temporarily shut down its self-driving testing programs.
While there is no denying the tragedy and hurt surrounding the event, other business endeavors have done far more harm without completely halting operations. This could perhaps be sensationalist media using emotion to evoke stronger reactions from viewers than the story itself warrants. Sensationalism has been used heavily as of late, and this is simply an example of how it can be deployed, especially in the case of a new and relatively unknown technology.
From a technology advocate’s perspective, events like this are deeply disheartening, both for the hurt the technology caused and for the blow to its development. Those who were excited about the advancement of this consumer AI technology must now accept that perhaps it isn’t ready for market. The events that have occurred show that the technology still needs work and therefore cannot be fully implemented yet. The growth of consumer-utilized AI is an exciting concept, but neither society nor the market is ready. Until incidents like this dwindle, these types of technology will not stand and thrive.
TV ratings have historically been the number one measure of how successful a show is, and therefore of the price to advertise during that show. In recent years, however, marketers have been turning to new measures that promise information about viewer attention and even emotion. Companies like TVision are using cognitive computing to train facial-recognition software to recognize the attention and emotion of viewers. These companies pay sample panels of people to simply keep a camera, often the Microsoft Kinect sensor, on while they watch TV. The software behind the sensor is constantly learning to better identify emotions through facial expressions. This yields data on which shows viewers are more engaged with, and even during which shows viewers are more engaged with the commercials. After all, if you were an advertiser, wouldn’t you rather pay to show your commercial to people you know are watching? Many people turn shows on and don’t pay attention, or leave them on while they do other things around the house. These shows may have high viewership, but the viewers may not actually be paying attention. For example, one TVision study showed that Shark Tank ranked among the highest in attention rating. This makes sense, because viewers are highly engaged with new business ideas and the suspense of a potential offer. Also, as you can imagine, commercial attention scores during the Super Bowl were very high. The more the AI software learns, the more accurate and reliable this data will become. This system could potentially replace ratings as the primary source for determining the value of an advertising slot.
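To make the idea concrete, here is a minimal sketch of how per-second "face toward the screen" flags from a sensor might roll up into a show-level attention score. The data format, function name, and simple percentage method are all hypothetical illustrations, not TVision's actual methodology.

```python
# Hypothetical per-second attention flags from a facial-tracking sensor:
# True = a viewer's face was detected and oriented toward the screen.
def attention_score(frames):
    """Percent of sampled seconds in which the viewer was attentive."""
    if not frames:
        return 0.0
    return 100.0 * sum(frames) / len(frames)

# One panelist's (made-up) samples during a show vs. its commercial break.
show_frames = [True, True, True, False, True, True, True, True]
ad_frames = [True, False, False, False, True, False]

print(f"show attention: {attention_score(show_frames):.1f}%")  # 87.5%
print(f"ad attention:   {attention_score(ad_frames):.1f}%")    # 33.3%
```

Averaging scores like these across a panel is the kind of aggregate that could distinguish a show people watch closely from one left on in the background, even when raw viewership is identical.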
Artificial intelligence is becoming something we can almost no longer avoid or live without. As humans, something else we cannot live without is food, because it fuels our bodies. Today, there is concern about the negative effects “bad” food can have on us, most notably obesity. More than one in three adults today are considered obese according to their BMI. With this an ongoing epidemic in the United States, several startup companies hope to improve diets by using AI. Edamam is a startup focused on helping you get nutritional information on the ingredients in a recipe more easily and quickly. Its database of over 50,000 products allows it to use AI to compute all the necessary nutritional information in the background. By creating easy-to-use software where users can easily track nutritional information, more people will become conscious of their eating habits. Passio pursues a similar goal but uses AI to help users track food through image recognition. Thanks to recent advancements in AI and image recognition, the error rate is around 3%, whereas the human brain’s error rate is 5%. The last company to take on AI in the food industry is Habit. It uses not only AI but also genetics, from a mailed sample kit, to help users find diets personalized to their needs. Habit goes one step further and offers prepared meals based on the genetic data found from the sample kit. With so many unique opportunities for AI in the food industry, do you think AI can combat obesity?
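For reference, the obesity figure above is defined by BMI, which is simply weight in kilograms divided by height in meters squared, with 30.0 or above classified as obese under the standard CDC/WHO cutoff. A quick sketch of the arithmetic (the example weights and heights are made up):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def is_obese(weight_kg, height_m):
    """Standard CDC/WHO convention: BMI of 30.0 or higher is obese."""
    return bmi(weight_kg, height_m) >= 30.0

print(round(bmi(95.0, 1.75), 1))  # 31.0
print(is_obese(95.0, 1.75))       # True
print(is_obese(70.0, 1.75))       # False
```

Apps like the ones described could compute this from user-entered height and weight, though BMI is a coarse screen, not a diagnosis.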
Professional teams almost always study and examine past film of their performance. These films are viewed from a different perspective, from a camera far off the field. The vantage point is much different from what a player sees during a game. VR technology now allows coaches and players to experience plays from a first-hand perspective. STRIVR Labs, a VR startup, creates VR training videos shot from the perspective of the player during practices. This enables players to receive realistic, repetitive training through a VR headset. For example, QBs can review missed opportunities on a play multiple times without experiencing the physical wear and tear of a contact sport. This allows players to prepare for games without actually being on the field, where they risk injury. VR has yet to reach its full potential in professional sports, but I suspect more teams, professional and recreational, will invest in VR technology in the next five years. Do you think high-budget programs will be the only ones able to invest in this technology? Will this create an unfair advantage over those who do not have the budget to adopt it?
Fintech has been a rapidly growing market over the past decade, and some would say it is currently peaking with the multitude of options that consumers have for retail banking and payments. PayPal is the most notable fintech firm, given its widespread acceptance as a secure payment processor across almost all platforms. Since PayPal, there have been thousands of new entrants into the fintech space, but can any of them be classified as disruptive? Most fintech solutions revolve around an added benefit to consumers during their banking or payment process, which is why I think the majority of fintech is really just sustaining the capital markets industry, not disrupting it. Mobile and online banking applications make financing easier for consumers, but consumers are still engaging in the same activities as they did with an in-person teller; therefore, no new market was created. Furthermore, the cost of banking has not materially changed with the advent of online or mobile banking, which indicates no sign of low-end disruption. While fintech remains a hot industry, I think the market as a whole will begin to decline in the near future unless new technologies truly disrupt the space instead of merely sustaining it, as has been the case over the past few years.
Despite artificial intelligence capabilities dating back to the 1940s, we still know very little about these technologies. Of course, the AI of that era is far different from what we refer to as AI now; think IBM Watson, self-driving cars, etc. AI has certainly come a long way, but our understanding of it has not kept pace. In fact, we know so little that we’re not quite sure how to fix it when things go wrong. When an AI technology needs debugging, technologists aren’t quite sure what to do. After all, “debugging is based on understanding,” so debugging something one doesn’t truly understand is nearly impossible.
To complicate the issue even further, bugs in AI technologies may not make themselves apparent until they have already done damage. With AI infiltrating our lives at a rapid pace and in drastic ways, it’s important that we identify bugs and fix them quickly to avoid harm. Imagine a self-driving car with an unidentified or misunderstood bug that caused it to crash. It is imperative that we work on this debugging issue before AI is allowed to make fatal mistakes. How cautious should we be with AI in the meantime? Do you agree with those who say we should halt the use of autonomous cars after Uber’s crash in Arizona?