When Should You Use AI to Solve Problems? - Harvard Business Review

Business leaders often pride themselves on their intuitive decision-making. They didn’t get to be division heads and CEOs by robotically following some leadership checklist. Of course, intuition and instinct can be important leadership tools, but not if they’re indiscriminately applied.

The rise of artificial intelligence has exposed flaws in traits we have long valued in executive decision makers. Algorithms have revealed actions once considered prescient to be lucky, decision principles previously considered sacred to be unproven, and unwavering conviction to be myopic. Look no further than the performance of actively managed investment funds to see the shortcomings of time-honored human decision-making approaches. With rare exceptions, these funds, many managed by celebrated investors, underperform index funds over the long term, and AI’s algorithmic trades commonly outperform human ones.

AI won’t supplant intuitive decision-making any time soon. But executives will need to disrupt their own decision-making styles to fully exploit AI’s capabilities. They will have to temper their convictions with data, test their beliefs with experiments, and direct AI to attack the right problems. Just as portfolio managers are discovering that they must learn to pick the best algorithm rather than the best stock, executives across fields will face a self-disrupting choice: Learn to operate the machine, or be replaced by it.

The Ladder of Causation

Let’s look at what makes AI superior to humans at solving certain types of problems and how that can inform executives’ approach to the technology. In recent years, AI has trounced the world champions in poker, chess, Jeopardy, and Go. If people are surprised by these victories, they are underestimating how much rote memorization and mathematical logic are needed to win those games. And in the case of poker and chess, they are overestimating the role insight into human behavior plays.

Tuomas Sandholm, a computer scientist at Carnegie Mellon, created the Libratus AI, which beat the world’s top poker players. He described his algorithms as mostly probabilistic prediction machines and recognized that studying the behaviors of the AI’s opponents — their feints and “tells” and so on — was not needed to win. By applying game theory and machine learning, Libratus crushed opponents simply by playing the odds. Even in championship poker, understanding the laws of probability is far more important than reading an opponent’s behaviors.

The key for decision makers in optimizing their work with AI, then, is to recognize which sorts of problems to hand off to the AI and which sorts the managerial mind, properly disrupted, will be better at solving. The work of the acclaimed computer scientist Judea Pearl provides a guide. Pearl famously conceived the Ladder of Causation, which describes three levels of inferential thinking that, for our purposes, can provide a roadmap for self-disruption. As Pearl notes in The Book of Why: The New Science of Cause and Effect, “No machine can derive explanations from raw data. It needs a push.” The first rung of the ladder is inference by association (if A, then B); next, inference by intervention (if you change input X, what happens to outcome Y?); and finally inference by applying counterfactuals: nonintuitive constructs that seem at odds with the facts and that lead to novel insights.

Association

Association involves examining the correlation between two variables: When we raise prices, what happens to profits? AI is exceedingly good at sifting through vast quantities of data to uncover associations; for instance, social networks use associative algorithms to predict which posts will attract the most views, on the basis of users' previous behavior.
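
To make the mechanics concrete, here is a minimal sketch of association on synthetic data. The variables and numbers below are invented, not drawn from any real feed-ranking system; a production system would fit a model over millions of user and post features, but the underlying question is the same: how strongly does A predict B?

```python
import numpy as np

# Hypothetical illustration: correlate past engagement with views on new posts.
# All data here is synthetic and for demonstration only.
rng = np.random.default_rng(seed=0)

past_engagement = rng.normal(loc=50, scale=15, size=1_000)        # prior interactions per user
views = 2.0 * past_engagement + rng.normal(scale=20, size=1_000)  # simulated view counts

r = np.corrcoef(past_engagement, views)[0, 1]
print(f"correlation between past engagement and views: {r:.2f}")
# A strong correlation supports prediction ("if A, then B"), but it says
# nothing about whether boosting engagement would *cause* more views.
```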

Humans aren't very good at this kind of large-scale pattern-finding; we are slower and more prone to bias. As a result, executives who make decisions on the basis of intuitive associations alone can reach flawed conclusions about cause and effect, for example, wrongly assuming that a certain action led to a desired outcome. A case in point: When I ran the internal corporate strategy group at Accenture, we devoted much time and capital to developing differentiated services, because clients seemed willing to pay more for them, generating greater profits. But when we later compared the profitability of clients receiving differentiated versus undifferentiated services, we found that it was the personal relationships with clients, not the differentiation of services, that drove profits. The apparent relationship between differentiated services and profits was an unproven principle we had followed for years.

Intervention

Intervention is the process of taking an action and observing its direct impact on an outcome — in essence, manipulating an experimental variable. Business decision makers do this all the time; for example, they might adjust a product’s price and then measure the effect on sales or profits. But they run into trouble when they’re overly confident about a predicted outcome. Effective intervention requires being willing to test a variety of inputs, even counterintuitive ones, to see how they might change the outcome. Here humans can have an edge on AI.
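
As a sketch of what a disciplined intervention looks like in practice, the hypothetical A/B test below changes one input (a product's price) for a random half of visitors and measures the effect on conversion. Every name and number is invented; a real test would also involve sample-size planning and guardrails against peeking.

```python
import numpy as np
from scipy import stats

# Hypothetical A/B intervention: show a test price to half of visitors
# and compare conversion rates. All figures below are made up.
rng = np.random.default_rng(seed=1)

control = rng.binomial(1, 0.050, size=5_000)    # conversions at the current price
treatment = rng.binomial(1, 0.056, size=5_000)  # conversions at the test price

# For large samples, a t-test on the 0/1 outcomes is a common, simple
# approximation to a two-proportion z-test.
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"control: {control.mean():.3%}, treatment: {treatment.mean():.3%}, p = {p_value:.3f}")
# Manipulating the input and measuring the outcome directly is what separates
# intervention from merely observing an association in historical data.
```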

Several years ago, my startup had produced two promising products: a sales AI and a fintech AI solution for operations. We used AI to run forecasting scenarios based on assessments of the likely market for each product. The modeling determined that the sales product would perform best, even though it would be launching into a crowded market. But on a hunch we decided to run campaigns for both products side by side to test whether the fintech product might have an unexpected advantage over the sales solution, because the market for it was less competitive. As it turned out, within 90 days the fintech product was far outselling the sales solution, quickly gaining a larger share of the smaller market.

Counterfactuals

The concept of counterfactuals is beautifully captured in the classic film It's a Wonderful Life, in which the angel Clarence reveals to Jimmy Stewart's George Bailey a dark alternative reality: the world as it would have been had he never been born. Counterfactual inference involves the creative act of imagining what might have happened had a certain variable in an experiment (or, in our case, a business activity) been different, given everything else we know.

When I was a young consultant, the COO of McDonald’s asked me to help justify the firm’s corporate R&D funding. “What is your method for doing this?” she asked. I was silent for a long time, then responded, “Let’s imagine what would happen if corporate didn’t do R&D and left it to the franchisees.” The counterfactual there was an imagined reality where corporate R&D didn’t exist. The COO may have expected that in that world, revenues would plunge.

Although without a time machine it’s impossible to test a true counterfactual to a previously executed business decision, you can seek out evidence of what the counterfactual reality might look like. In the case of McDonald’s, I suggested that we examine each recent product launch and see where it had originated. The exercise was revealing and surprising: A lot of the hits, such as the Big Mac and Filet-O-Fish, had come from the field, while some of the biggest flops, such as the Deluxe, were corporate’s idea. Our counterfactual thought experiment led to a clearer picture of corporate R&D’s relative role in the company’s product innovation.
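
The same logic can be made computational, as a crude stand-in for counterfactual reasoning: fit a simple model of the world as observed, then re-predict with one input changed. The sketch below uses invented data echoing the Accenture example from the Association section, asking whether differentiated services or client relationships drove profit.

```python
import numpy as np

# Model-based counterfactual sketch. The data and coefficients are invented
# for illustration; no real client figures are used.
rng = np.random.default_rng(seed=2)

relationship = rng.uniform(0, 10, size=200)      # strength of client relationship
differentiated = rng.binomial(1, 0.5, size=200)  # 1 if the client got differentiated services
profit = 3.0 * relationship + 0.2 * differentiated + rng.normal(scale=2.0, size=200)

# Least-squares fit of profit ~ 1 + relationship + differentiated.
X = np.column_stack([np.ones(200), relationship, differentiated])
coef, *_ = np.linalg.lstsq(X, profit, rcond=None)

# Counterfactual query: the same clients, but nobody received differentiated services.
X_cf = X.copy()
X_cf[:, 2] = 0.0

print(f"predicted mean profit, world as observed:   {(X @ coef).mean():.2f}")
print(f"predicted mean profit, no differentiation:  {(X_cf @ coef).mean():.2f}")
# A negligible gap says the differentiation "principle" was doing little work;
# the relationship term carries the profit.
```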

Human Plus Machine

Consider how humans and AI together found a downed aircraft by combining association, intervention, and counterfactual approaches. In June 2009, Air France 447 disappeared in a storm off the coast of Brazil with 228 passengers and crew aboard. The French government spent two years searching for the wreckage, to no avail.

After four failed attempts, it brought in a team of mathematicians. Using AI, the team applied a Bayesian statistical technique that updates a probability prediction as new information becomes available: in this case, the likelihood that the plane was in a particular location on the ocean floor.

Remarkably, the team located the plane in just a week. How? Its initial AI-supported prediction identified the most promising search area: a region the government had already covered. But it took human insight to treat the previous failures themselves as new information and to change an important search variable to see its impact on the outcome. The key was asking whether the plane's beacon might have failed, which could have led the government to miss the crash site. In fact, the assumption about a failed beacon was correct, and the wreckage was found near where earlier predictions had placed it.
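
For readers who want to see the mechanics, here is a heavily simplified sketch of that Bayesian logic on a one-dimensional grid. The prior, the detection probabilities, and the beacon-survival number are all invented; the actual analysis used detailed ocean-drift and sensor models. The point is how modeling a possibly dead beacon changes what the failed searches imply.

```python
import numpy as np

# Toy Bayesian search over 100 seafloor "cells". All parameters are invented.
cells = np.arange(100)
prior = np.exp(-0.5 * ((cells - 41) / 5.0) ** 2)  # belief peaked near the last known position
prior /= prior.sum()

searched = np.arange(35, 48)  # cells covered by the earlier, unsuccessful searches
n_searches = 4                # the four failed attempts mentioned above
p_detect = 0.9                # chance one search pass hears a *working* beacon

def posterior_after_misses(p_beacon_works: float) -> np.ndarray:
    """Bayes update given that all search passes over `searched` found nothing."""
    # P(all passes miss | plane in cell): either the beacon died (always silent),
    # or it worked and every pass independently failed to hear it.
    p_all_miss = (1 - p_beacon_works) + p_beacon_works * (1 - p_detect) ** n_searches
    post = prior.copy()
    post[searched] *= p_all_miss
    return post / post.sum()

for p_works in (1.0, 0.2):
    post = posterior_after_misses(p_works)
    print(f"P(beacon worked) = {p_works:.1f} -> "
          f"probability still in the searched area: {post[searched].sum():.2f}")
# Assuming a reliable beacon, the failed searches all but rule the area out;
# allowing for a dead beacon, most of the probability stays right where the
# earlier predictions had put the wreck.
```

In this toy version, the same four failed searches either nearly eliminate the original search area or leave most of the probability sitting in it, depending entirely on one human-supplied assumption about the beacon.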

AI is a powerful decision-making tool, but if performance is the endgame, leaders and other executive decision makers need to rethink how it is best leveraged. That doesn’t mean handing decision-making over to the machines. Rather, it requires decision makers to focus on the creative interventional and counterfactual thinking that humans are uniquely good at while relying on AI to do the data-intensive prediction and association tasks at which it truly excels. As humans and machines increasingly collaborate, I’m hopeful we’ll see an equivalent of Moore’s law at work: a year-by-year doubling of their combined decision-making capabilities. But that can happen only if we disrupt our own decision-making first.
