
Information Systems Integration

Department of Management Information Systems, Temple University

MIS 4596.002 ■ SPRING 2019 ■ MARIE-CHRISTINE MARTIN

Do You Fear Artificial Intelligence?

March 20, 2019 · 4 Comments

Michael Sorokach IV

A new study from the University of Oxford finds that Americans favor artificial intelligence more than they oppose it, though there is no strong consensus. According to the survey, 41% of respondents said they somewhat or strongly support the development of AI, 22% somewhat or strongly oppose it, and 28% have no strong feelings either way. The survey defined AI as “computer systems that perform tasks or make decisions that usually require human intelligence.”

Artificial intelligence is a divisive topic in the developed world, even among some of the most famous people in the technology sphere. Bill Gates is a strong supporter of AI development, arguing that it can be used to further improve everyone’s quality of life. In a Q&A, he said, “AI is just the latest in technologies that allow us to produce a lot more goods and services with less labor. And overwhelmingly, over the last several hundred years, that has been great for society.” Others are less trusting of its benefits. Elon Musk is an outspoken critic of continued AI development, calling it humanity’s “biggest existential threat” and comparing it to “summoning the demon.”

Interestingly, the Oxford study also shows a strong correlation between demographics and responses: 57% of college graduates favored developing AI, compared with only 29% of respondents with a high school education or less. One area of strong consensus was regulation, with 82% of respondents somewhat or strongly agreeing with the statement “robots and artificial intelligence are technologies that require careful management.”

What is your opinion of artificial intelligence? Should development be “limited” to more mundane tasks, such as those performed by Apple’s Siri or Google’s search engine? Should development of “smart” AI (with the common-sense intelligence of a human) continue unrestrained? Or something in between?

Sources:

https://www.theverge.com/2019/1/10/18176645/ai-robot-survey-america-public-opinion-future-of-humanity-institute

https://www.vox.com/future-perfect/2018/11/2/18053418/elon-musk-artificial-intelligence-google-deepmind-openai

Comments

  1. Long Duc Nguyen says

    March 30, 2019 at 4:01 pm

    During a class in MIS 2101, we discussed the development of AI and how quickly it could eclipse human-level intelligence. I would have to agree with Elon Musk that continued, unregulated development of AI would be humanity’s biggest existential threat. I believe AI development should be limited to more mundane tasks, such as those performed by Siri, or perhaps something a little more advanced than that, but definitely not “smart” AI. Even with just mundane tasks, AI has already created massive benefits for the quality of human life. Obviously, smarter AI would allow us to do even more amazing things, but would you risk the existence of humanity to do that?

  2. Nik Fuchs says

    April 3, 2019 at 11:40 am

    Michael Sorokach IV, I like this post. Keep up the good work!
    Reading this post, I was unsure whose side I was on: on one hand, I agree with Bill Gates that AI will be a tremendous asset for improving society; on the other hand, I agree with Elon Musk that AI has the potential to destroy humankind as we know it.
    After doing some research, I found an article from just last month in which Bill Gates compares AI to nuclear energy – “both promising and dangerous” (https://www.cnbc.com/2019/03/26/bill-gates-artificial-intelligence-both-promising-and-dangerous.html). I think this is a great analogy for AI. As long as smart AI is developed in a controlled environment and its uses are limited (e.g., kept out of weapons), I think it will do more good than harm for society. How would one contain the uses of smart AI, you ask? I have no idea, but someone in the future might!

  3. Sam Painter says

    April 3, 2019 at 1:49 pm

    Michael, great post about artificial intelligence.

    I personally am a big fan of artificial intelligence. AI can be more productive than humans and assist in various tasks. AI also makes fewer errors while completing routine tasks. I don’t think AI should be limited either.

    However, I would agree with the statement “robots and artificial intelligence are technologies that require careful management”. Although I don’t believe we will have robots trying to take over like in the movie I, Robot, AI is a powerful tool that we as humans need to be very careful with, simply because we don’t know for sure how powerful it could become.

    https://vittana.org/16-artificial-intelligence-pros-and-cons

  4. Lee Chan says

    April 3, 2019 at 3:24 pm

    The development of AI has been growing rapidly. I agree with Bill Gates that AI improves our quality of life with less labor, through tools such as Siri, Alexa, and self-driving cars. These systems have also proven to make fewer errors on routine tasks. Thus, I do not believe development should be “limited” to more mundane tasks only, and I wonder how far the development of AI can go.

    However, even though I don’t believe the development of “smart” AI (with the common-sense intelligence of a human) is the biggest existential threat to humanity, I do agree with the statement “robots and artificial intelligence are technologies that require careful management”. Unlike humans, AI learns on its own from algorithms; it does not have feelings the way humans do. Therefore, in an uncontrolled environment without careful management, AI could do harm in order to complete a given task.
