Friday, November 22, 2024

CEO Advisory

The Limits Of AI: What Machines Can’t Do

Artificial intelligence (AI) is top of mind for every corporate executive. It dominates shareholder calls and fans the flames of financial expectations. AI’s power and potential give stock prices a bump and bolster investor confidence. But too many companies are reluctant to address its very real limits. It has become taboo to discuss AI’s shortcomings and the limitations of machine learning, neural nets, and deep learning. Yet if we want to deploy these technologies strategically in the enterprise, we need to understand AI’s six distinct weaknesses.

AI lacks common sense. AI may be able to recognize that within a photo, there’s a man on a horse. But it probably won’t appreciate that the figures are actually a bronze sculpture of a man on a horse, not an actual man on an actual horse.

Consider the lesson offered by Margaret Mitchell, a research scientist at Google. Mitchell helps develop computers that can communicate about what they see and understand. As she feeds images and data to AIs, she asks them questions about what they “see.” In one case, Mitchell fed an AI lots of input about fun things and activities. When Mitchell showed the AI an image of a koala bear, it said, “Cute creature!” But when she showed the AI a picture of a house violently burning down, the AI exclaimed, “That’s awesome!”

The AI selected this response because of the orange and red colors it scanned in the photo; in the AI’s training data, those fiery tones were frequently associated with positive responses. Stories like these demonstrate AI’s inevitable gaps, blind spots, and complete lack of common sense.
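This failure mode is easy to reproduce in miniature. The sketch below is hypothetical (the color table and scores are invented, not Mitchell’s actual system), but it shows how a model that has only learned color-to-sentiment correlations scores a house fire and a campfire identically:

```python
# Hypothetical color-sentiment table, standing in for the correlations a
# model might absorb from a training set full of "fun" images.
positive_color_scores = {"orange": 0.9, "red": 0.8, "green": 0.4, "grey": 0.1}

def sentiment(dominant_colors):
    """Average the learned color scores. There is no understanding of
    content here, only a pattern memorized from the training data."""
    scores = [positive_color_scores.get(c, 0.5) for c in dominant_colors]
    return sum(scores) / len(scores)

campfire = ["orange", "red"]    # a genuinely fun scene
house_fire = ["orange", "red"]  # a tragedy -- but the same colors

# Both scenes receive the same enthusiastic score, because color is all
# the model ever learned to look at.
print(round(sentiment(campfire), 2), round(sentiment(house_fire), 2))
```

The point of the sketch is not that real vision models are this crude, but that any model grounded only in surface correlations inherits exactly this blindness.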

AI bakes in bias. There’s an increasing awareness that machine learning algorithms encode biases and discrimination into outcomes. After all, algorithms simply look for patterns in the data. Whatever is embedded in the data is what the algorithms will repeat.

A well-known example is Google Flu Trends, which overestimated the incidence of the flu. The theory was that when people get the flu, they turn to Google to search for “flu” and related terms. But this turned out to be a misleading way to gather data: searches for “flu” tracked how often the flu appeared in the news, not how many people were in bed, sick and miserable. The lesson? What happens in the digital world does not always reflect reality. Without human interpretation and context, these kinds of outcomes can completely mislead an organization.
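The mechanism behind “whatever is embedded in the data is what the algorithms will repeat” can be shown with a deliberately tiny model. The data below is invented for illustration: a pure pattern-matcher trained on skewed historical decisions reproduces the skew exactly.

```python
from collections import Counter

# Invented "historical" decisions that encode a bias against group B.
history = ([("A", "hire")] * 80 + [("A", "reject")] * 20 +
           [("B", "hire")] * 20 + [("B", "reject")] * 80)

def train_majority_model(data):
    """Learn the most common outcome per group -- pure pattern-matching,
    with no notion of whether the pattern is fair or accurate."""
    model = {}
    for group in sorted({g for g, _ in data}):
        counts = Counter(label for g, label in data if g == group)
        model[group] = counts.most_common(1)[0][0]
    return model

model = train_majority_model(history)
print(model)  # the historical skew comes straight back out of the model
```

Real machine-learning systems are far more sophisticated, but the dynamic is the same: the model optimizes for fidelity to the data it was given, not fidelity to the world.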

AI is data-hungry and brittle. Neural nets require vastly more data than a human needs to learn the same lesson: in most cases, thousands or millions of examples. Worse still, each time the system needs to recognize a new type of item, training often has to start from scratch.

Algorithmic problem-solving is also severely hampered by the quality of data it’s fed. If an AI hasn’t been explicitly told how to answer a question, it can’t reason it out. It cannot respond to an unexpected change if it hasn’t been programmed to anticipate it.

Today’s business world is filled with disruptions and events—from physical to economic to political—and these disruptions require interpretation and flexibility. Algorithms can’t do that.

AI lacks intuition. Humans use intuition to navigate the physical world. When you pivot and swing to hit a tennis ball or step off a sidewalk to cross the street, you do so without a thought. Replicating those movements would demand so much processing power from a robot that it is almost inconceivable we could engineer them.

Algorithms get trapped in local optima. When assigned a task, a computer program may settle on a solution that is merely the best in its immediate neighborhood of the search space, known as a local optimum, and fail to find the best of all possible solutions, the global optimum. Reaching the global solution would require understanding context and changing context, or thinking creatively about the problem and potential solutions. Humans can do that. They can connect seemingly disparate concepts and produce out-of-the-box thinking that solves problems in novel ways. AI cannot.
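A minimal sketch of getting stuck, using an invented one-dimensional landscape: a greedy hill-climber stops at the first peak it reaches, even though a higher peak exists further along.

```python
# Heights along an invented one-dimensional landscape. The tallest peak
# (height 9) lies beyond a dip, so a greedy climber never reaches it.
landscape = [1, 3, 5, 4, 2, 6, 9, 7]

def hill_climb(heights, start):
    """Greedily step to a taller neighbor until no neighbor is taller."""
    i = start
    while True:
        taller = [j for j in (i - 1, i + 1)
                  if 0 <= j < len(heights) and heights[j] > heights[i]]
        if not taller:
            return i  # stuck: every neighbor is lower, a local optimum
        i = max(taller, key=lambda j: heights[j])

peak = hill_climb(landscape, start=0)
print(landscape[peak], max(landscape))  # climber settles for 5; global best is 9
```

Escaping such traps requires either restarting from many points or reframing the search itself, which is exactly the contextual, creative move the surrounding text describes.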

AI can’t explain itself. AI may come up with the right answers, but even the researchers who train AI systems often do not understand how an algorithm reached a specific conclusion. That is deeply problematic when AI is used for medical diagnoses, for example, or in any environment where decisions have non-trivial consequences. What the algorithm has “learned” remains a mystery to everyone. Even when the AI is right, people will not trust analytical output they cannot interrogate.

AI offers tremendous opportunities and capabilities. But it can’t see the world as humans do. Instead, it provides the potential for humans to focus on more meaningful aspects of work that involve creativity and innovation. As automation replaces more routine or repetitive tasks, it will allow workers to focus more on inventions and breakthroughs, which ultimately fuels an enterprise’s success.


Written by Dr. Nada R. Sanders.



Copyright 2024 The CEOWORLD magazine. All rights reserved. This material (and any extract from it) must not be copied, redistributed or placed on any website without CEOWORLD magazine’s prior written consent. For media queries, please contact: info@ceoworld.biz
Nada R. Sanders, Ph.D.
Nada R. Sanders, Ph.D., is an internationally recognized AI thought leader and expert in forecasting and global supply chain intelligence. Ranked in the world’s top 2% of scientists, she’s the author of 100-plus scholarly publications and seven books, including The Humachine: AI, Human Virtues, and the Superintelligent Enterprise.


Nada R. Sanders, Ph.D. is an Executive Council member at the CEOWORLD magazine. You can follow her on LinkedIn.