The Surprising Role that AI Plays in Management

2020

Dr. Lior Zalmanson

Senior Lecturer of Technology and
Information Management,
Coller School of Management,
Tel Aviv University

Professor Gal Oestreicher-Singer

Professor of Technology and
Information Management,
Coller School of Management,
Tel Aviv University

If we were to form our AI business strategy based on how artificial intelligence is portrayed in popular media, we would probably be limited to one of two common notions. The first is AI as a servant, embodied in the human environment through robotics, helping humans with their daily needs. The second is AI as a superintelligence: a replacement for any human, an all-knowing being controlling and overseeing anything and everything. However, the roles that AI will probably play in our society, as discussed by Aya and Kartik in the Roundtable, are very different and, in a way, much more interesting.

Research conducted at the Coller School of Management at Tel Aviv University might shed further light on the matter, particularly as it relates to a set of practices known as “algorithmic management,” in which algorithms take over the traditional roles of middle management. The term doesn’t describe a futuristic scenario; for Uber drivers, for example, it is very much a current reality. Such drivers work under the tight supervision of a machine learning algorithm that guides their actions and sanctions them if they do not follow the firm’s policy. They have no other direct bosses and officially are not even considered employees, but rather freelancers. In reality, however, they are managed by artificial intelligence algorithms.

When AI algorithms become “your boss,” new tensions emerge. Drivers experience tensions related to the way they conduct their work: on the one hand, they are autonomous agents who choose to work at will; on the other hand, they are surveilled and micromanaged by pervasive technology. Drivers enjoy the reliability of the AI algorithms that constantly match them with riders, but at the same time feel frustrated by the opacity of the complex algorithmic calculations that determine their wages. Working under algorithms also means personalized treatment and a lack of solidarity, as each worker is treated differently based on their unique case history. In the end, many drivers reported feeling isolated and “robot-like.” They resorted to ad hoc online communities to socialize and to try to make sense of these algorithms and their behavior. In some cases, drivers went even further and chose to reject and revolt against the algorithms by blocking or gaming them.

Thus, a firm that chooses to manage by AI algorithms shouldn’t rush to take the human element out of the equation. Over the 20th century, we learned the importance of investing in human resources. The support, guidance, mentoring, and rapport between humans are not likely to be replaced by machines anytime soon. In ride-hailing, drivers seemed desperate for voice support precisely when they ran into tension-inducing situations that the algorithm could not solve. In those cases, drivers appreciated the fact that the firm had built a 24/7 human-led support line for them.

It is important to note that algorithmic management isn’t restricted to these new gig workers. As COVID-19 catalyzes remote work, many firms will have to decide how they control work from afar, and we are likely to see various implementations of AI algorithms taking over middle management’s traditional roles. The tensions observed in the Uber driver research are therefore likely to resurface in these future scenarios.

However, even if many firms don’t adopt AI as a boss, they might install it in the role of a non-human workmate. In their research, Erik Brynjolfsson and his colleagues at MIT note that most current occupations won’t be replaced by AI (or, specifically, as Kartik and Aya mentioned, machine learning) but will instead be augmented and re-engineered by the introduction of such capabilities. Humans and AI will not work as substitutes but will rather complement each other’s weaknesses. Thus, the burning question is how to design, engineer, and manage these new human-AI work hybrids.

In an ongoing research project, which Lior presented at the International Conference on Information Systems (together with a Ph.D. student, Yotam Liel), we study the risk of humans blindly conforming to algorithms’ decisions without properly weighing them against their own better judgment. The project follows Solomon Asch’s seminal conformity research, in which he brought participants into a classroom and gave them simple perceptual tasks. When participants were alone, they gave correct answers quickly. However, when he added “fellow participants” who cited wrong answers aloud, many participants conformed to the majority and gave the same erroneous responses.

Our research finds that the same phenomenon is at play in encounters between a human worker and an AI agent. In our experiments, presenting an AI’s incorrect advice changed the worker’s answer in a statistically significant share of cases (15-25%, compared to consistently correct answers in the control group). When we presented workers with multiple AI agents, all citing the same wrong advice, the percentage grew even higher. These findings provide a warning sign regarding the design of human-AI hybrid decision-making and call for better work processes. There are great potential benefits to human-AI collaboration for optimal decision making; however, if humans conform to AI decisions without exercising their own judgment, the results could be anywhere from sub-optimal to plain dangerous.

In this respect, we agree with Kartik’s notion that “AI can be used for good, but it can also be used irresponsibly.” Behind the word “use” in this case lies more than AI’s purpose and work context. Putting AI to good use means designing responsible and transparent AI processes with humans in mind.
