
MIS Distinguished Speaker Series

Temple University


AI

Oct 28 – Lynn Wu – “Innovation Strategy after IPO: How AI Analytics Spurs Innovation after IPO”

October 19, 2022 By Aleksi Aaltonen

Time: Friday, 28 October 2022, 10:30–12:00
Room: LW420

Lynn Wu
Associate Professor of Operations, Information and Decisions
The Wharton School, The University of Pennsylvania
https://oid.wharton.upenn.edu/profile/wulynn/

Abstract

We examine the role of AI analytics in facilitating innovation in firms that have gone through an IPO. Using patent data on over 1,000 publicly traded firms, we find that firms acquiring AI analytics capability post-IPO experience less of a decline in innovation quality than similar firms that have not acquired that capability. This effect is greater when only machine learning capabilities are considered. Moreover, we find that this sustained rate of innovation is driven principally by the continued development of innovations that combine existing technologies into new ones, a form of innovation that is especially well supported by analytics. By examining three main mechanisms that hamper post-IPO innovation, we find that AI analytics can ameliorate the pressure to meet short-term financial goals and disclosure requirements. However, it has limited effect in addressing managerial incentives. For firms with long product cycles, the disclosure effect is reduced to a greater extent than it is for those with short cycles. Overall, our results show the importance of examining technology as a critical input factor in innovation. We show that the increased deployment of analytics may reduce some of the innovation penalties suffered by IPOs, and that investors and managers can potentially mitigate post-IPO reductions in innovative output by directing capital acquired in the IPO process to the acquisition of AI analytics capabilities.

Bio

Lynn Wu's research examines how emerging information technologies, such as artificial intelligence and analytics, affect innovation, business strategy, and productivity. Her work follows three streams. In the first stream, she examines how data analytics and artificial intelligence affect firm innovation, business strategy, labor demand, and productivity for both large firms and startups. In her second stream, she studies how enterprise social media and online platforms affect work performance, career trajectories, entrepreneurship success, and the formation of new types of biases that arise from using these technologies. In her third stream, Lynn leverages fine-grained nanodata available through online digital traces to predict economic indicators such as real estate trends, labor trends, and product adoption. Lynn has published articles in economics, management, and computer science. Her work has been widely covered by media outlets including NPR, the Wall Street Journal, Businessweek, the New York Times, Forbes, and The Economist. She has won numerous awards, including Early Career awards from INFORMS and AIS and best paper awards from Information Systems Research, AIS, ICIS, HICSS, CHITA, and Kauffman. She has also won the Dean's teaching award.

Tagged With: AI, analytics, Artificial Intelligence, Innovation, IPO

Oct 30 – Gordon Gao to present “How Artificial Intelligence Affects Human Performance in Medical Chart Coding”

November 9, 2020 By Sezgin Ayabakan

How Artificial Intelligence Affects Human Performance in Medical Chart Coding

by

Guodong (Gordon) Gao

Professor
Director, Inovalon Artificial Intelligence Lab for Advanced Insights
Co-Director, Center for Health Information and Decision Systems
Robert H. Smith School of Business
University of Maryland

Friday, Oct 30

9:00 – 10:00 am | Zoom

Abstract:

While the impact of artificial intelligence (AI) on jobs has generated considerable discussion and debate, little is known about how AI affects knowledge worker productivity. We developed an AI solution for medical chart coding in a publicly traded company and then evaluated its impact on productivity with respect to coders' job experience. We find evidence that AI improves worker productivity overall. However, in contrast to existing studies on skill-biased technological change, we find that seniority works the opposite way: senior workers gain a much smaller productivity boost from the use of AI than junior workers do. To uncover the mechanism behind this surprising finding, we examine task-specific experience. Our results confirm the existence of complementarity between human experience and AI. Further analysis reveals that the performance discrepancy across job experience is attributable to resistance among senior users. This paper provides new empirical insights into how AI affects knowledge worker productivity, with important implications for the wider adoption and use of AI among knowledge workers.

Tagged With: AI, Artificial Intelligence, Human Experience and AI, Medical Chart Coding, productivity, Worker Productivity

April 3 – Detmar Straub to present “A Dark Future for AI: The Looming Spectre of SkyNet?”

September 11, 2020 By Sezgin Ayabakan

A Dark Future for AI: The Looming Spectre of SkyNet?

by

Detmar W. Straub

Professor and IBIT Research Fellow

Temple University Fox School of Business

Regents Professor Emeritus

University System of Georgia and Georgia State University

Friday, April 3

10:30 – 12:00 pm | Zoom

Abstract:

Capabilities of AI and thinking/learning machines are clearly overtaking human abilities (a.k.a. "technological singularity" or, more plainly speaking, "singularity"), with several forecasters, such as Winograd (2006), predicting that machines will outthink us within the first half of the 21st century. Is it possible that humans will not be able to control the burgeoning intelligence of machines and that we will, frighteningly, be subordinated to them, especially as they become self-aware? This talk starts by sketching out some past and present forecasts of when technological singularity will be real and present, what social, economic, and political issues will emerge, what security issues will loom, and finally how futurists (including science fiction writers and the movies) have envisioned the role of human beings in the coming era of the thinking machine. While the future of humanity might be hanging in the balance, one key academic question arises: what should researchers, in particular information systems researchers, study with respect to AI? This overall issue has been framed as IA versus AI, or intelligence (human) augmentation (IA) versus artificial (computer) intelligence (AI). Enduring research questions might include: (1) technical issues with achieving singularity and requirements such as designing a tamper-proof "kill" switch for intelligent machines; (2) behavioral questions such as the pace of change and problems with duplicating human creativity; (3) socio-economic conundrums such as what people will do in an era of omnipresent thinking/working machines and worldwide societal disruption; and (4) organizational matters such as whether there will be an IS/IT Dept. and, if so, what it will do.

Tagged With: AI, Artificial Intelligence, Human vs AI, IA, Intelligence Augmentation, machines, robots, social disruption

April 24 – Xueming Luo to present “Quantifying the Impact of Human-AI Supervisor Assemblages on Employee Performance: A Field Experiment”

September 11, 2020 By Sezgin Ayabakan

Quantifying the Impact of Human-AI Supervisor Assemblages on Employee Performance: A Field Experiment

by

Xueming Luo

Founder/Director of Global Center on Big Data in Mobile Analytics
Charles Gilliland Distinguished Chair Professor of Marketing, Strategy, and MIS
Fox School of Business
Temple University

Friday, April 24

10:30 – 12:00 pm | Zoom

Abstract:

Despite the promises of artificial intelligence (AI), there are concerns from both employees and managers about adopting AI in the workplace. Examining how firms can integrate AI into performance management systems (PMS), this research focuses on the impact of various human-AI supervisor assemblages on employees' task performance and their relations with human managers. We utilize data from a field experiment on customer service employees in a fintech company who are randomly assigned to receive job performance feedback from human managers only, an AI bot only, or human-AI supervisory assemblages. A unique feature of our experiment is that the assemblages encompass a dual human-and-AI configuration (where employees receive feedback from both human managers and an AI bot in parallel) and a shadow-AI-human-face configuration (where employees receive feedback that is generated by an AI bot but delivered by human managers). The results suggest that, relative to conventional human supervision, a dual human-and-AI design negatively impacts employee task performance, whereas a shadow-AI-human-face design positively impacts it. Explorations of the mechanisms indicate that the dual condition, with AI and human supervision in parallel, leads employees to perceive more confused leadership and feedback, learn less from the feedback, and experience lower employee-manager relationship quality in a vicious cycle. In contrast, the shadow-AI design significantly improves employees' perceptions of feedback accuracy and consistency, willingness to proactively seek feedback, and organizational commitment in a virtuous cycle. These findings suggest that firms should design human-AI supervisory assemblages prudently. As a double-edged sword, AI-based PMS should be deployed in the shadows to empower human managers, rather than to displace or compete with them, in order to achieve higher worker productivity and healthier employee-manager relationships.

Tagged With: AI, Artificial Intelligence, bots, Field Experiment, Human vs AI, machines, performance management systems

April 30 – Gordon Burtch to Present “Estimating the Economic Impact of ‘Humanizing’ Customer Service Chatbots”

April 24, 2019 By Jing Gong

Estimating the Economic Impact of ‘Humanizing’ Customer Service Chatbots

by

Gordon Burtch

Associate Professor, Information & Decision Sciences
Carlson School of Management, University of Minnesota

Tuesday, April 30, 2019

12:30 PM – 2:00 PM

Speakman Hall Suite 200

 

Abstract

We consider the economic impacts of 'humanizing' AI-enabled autonomous customer service agents (chatbots). Implementing a field experiment in collaboration with a dual-channel clothing retailer based in the United States, we automate a used-clothing buy-back process such that individuals engage with the retailer's autonomous chatbot to describe the used clothes they wish to sell, obtain a price offer, and (if they accept the offer) print a shipping label to finalize the transaction. We causally estimate the impact on transaction conversion and price sensitivity of randomly exposing consumers to (1) exogenous variation in price offers, in tandem with (2) exogenously varied levels of chatbot anthropomorphism, operationalized by incorporating a random draw from a set of three anthropomorphic features: humor, communication delays, and social presence. We provide evidence of a non-linear relationship, consistent with the 'uncanny valley' effect documented in the HCI literature. That is, we show that while introducing either a small (1 treatment) or large (3 treatments) degree of anthropomorphism increases conversion rates substantially (on the order of 10% in the latter case), introducing only a moderate level (2 treatments) is counterproductive. Moreover, we show that a large degree of anthropomorphism (3 treatments) causally increases consumers' price sensitivity. We argue that this latter effect occurs because, as a chatbot becomes more human-like, consumers shift from a price-taking mindset into a fairness-evaluation or negotiating mindset. We discuss the implications for the implementation of AI-enabled autonomous agents in human-facing job roles, and in customer service settings in particular.

Tagged With: AI, Chatbot, Gordon Burtch, Humanizing, Minnesota
