
The AI Bubble Is Here

Dr. Richard Smith

The essence of a bubble is that most market participants believe that something is of value when, in fact, it is not.  Back in the days of the dot-com bubble, for example, people thought that just having a website and a .com web address – like Pets.com – was valuable.  It wasn’t.

When it comes to new generative AI technologies like ChatGPT, there is a huge disconnect between what people think the new technology is and what it actually is – which means that the technology is not being properly valued.  That has already led to the early stages of bubble formation.

To understand the bubble-generating potential of generative AI, it’s critical to understand what this new technology actually is versus what people believe it is.  The best source for understanding this distinction is OpenAI CEO Sam Altman himself.

While recently testifying before a subcommittee of the Senate Judiciary Committee, Altman was quite modest in his claims about ChatGPT – the most famous application of the new family of Large Language Models (LLMs).  He encouraged senators to understand that “ChatGPT is a tool and not a creature.”  He reassured senators that ChatGPT was “good at tasks” and unlikely to take away entire jobs.

Senators tried to engage Altman on some of his more dire predictions about the risks of AI, but Altman demurred and would say only, “If this technology goes wrong, it can go quite wrong.”

In stark contrast to Altman’s soothing Senate testimony, however, is the joint statement he co-signed just two weeks later with other AI luminaries, in which this new technology is presented as an existential threat to humanity.  Here is the one-sentence statement:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

It’s difficult to reconcile these two portraits of generative AI given by one and the same person within a two-week window.  When AI is presented as a literal existential threat to humanity, it is difficult not to think of generative AI as a creature despite Altman’s earlier reassurances that it is, in fact, just a tool and not a creature.

Decision makers who are responsible to other stakeholders cannot afford to indulge in such fearful fantasies.  Sober and clear-eyed assessments are more essential than ever when faced with such a wide range of possible outcomes.

The early indications of the potential productivity impact of generative AI are mildly encouraging.  The clearest early productivity gains have come from improving the efficiency of entry-level knowledge workers, such as those in call centers.  The Brookings Institution, for example, reported that “Call center operators became 14% more productive when they used the technology, with the gains of over 30% for the least experienced workers.”

Productivity gains are absolutely the biggest potential impact area that executives should be focused on when assessing this new technology.  It isn’t clear, however, that productivity gains seen in call centers are likely to translate throughout the knowledge worker economy.

Consider, for example, the recent case, reported in the Wall Street Journal, of a mental health chatbot named Tessa that had to be taken offline after it “veered off script.”  Tessa was intended to give high-level, general advice on eating disorders.  Unbeknownst to the organization sponsoring Tessa, however, the software administrators added generative AI technology to Tessa, which led to it giving very specific diet advice.

It’s worth noting here the nearly universal tendency to humanize these chatbots by giving them human-sounding names like “Tessa.”  This blurs the line between the “tool” and “creature” narratives surrounding generative AI.

Another warning came recently from Antifragile author Nassim Nicholas Taleb, who tweeted, “I’ve had it with ChatGPT.  It doesn’t BS.  It lies.”  Taleb is referring to the tendency of these LLMs to “hallucinate” – a nice euphemism from the AI industry that means “completely make stuff up and present it as fact.”  It happens a lot more often than the industry cares to admit.

The case of Tessa, the mental health chatbot, includes a telling comment from Rich Birhanzel, Accenture’s global healthcare industry lead, which goes a long way to explain the gold-rush-type excitement that is fueling the AI narrative today:

“We simply don’t have enough nurses and doctors to provide the level of care we’re used to and we need technology to help solve for that.”

Birhanzel’s well-informed observation is additional confirmation that most people today expect technologies like ChatGPT to act as substitutes or replacements for real people.  In other words, most people see ChatGPT as a creature that can replace people rather than as a tool to be used by people.

We are already in the midst of a bubble.  Nvidia added $184 billion to its market cap in a single day after the spell-casting incantation “AI” was used an incredible 90 times on the company’s recent earnings call.  It was one of the biggest single-day dollar gains in U.S. market history.

The bubble is happening because industry leaders are misleading the public by feeding the narrative that generative AI is some kind of super-human creature rather than just another new technology tool – albeit, a very powerful new tool.  It’s a story that the media and the public are, unfortunately, gobbling down.

The real risk of generative AI isn’t that it will become more intelligent than humans and decide that humans should be eliminated.  The real risk is what I call the Wizard of Aiz risk: the risk that people will see this new technology as a creature with its own will when, in fact, it isn’t a creature at all.

The real creature is the one behind the curtain – it’s the AI industry itself with its vested interest in everyone believing that their new technology is all powerful and must be controlled, obeyed and even served.

What we can be relatively confident about today is that generative AI does appear to produce productivity gains for entry-level knowledge workers such as those in call centers.  What we know for sure is that it often completely makes stuff up and presents fantasy as fact.  We also know that many industries are under severe pressure to do more with fewer people.

The allure of generative AI replacing people is a seductive one, but we currently have zero evidence that it is viable.  Even Sam Altman himself, at least when testifying in front of a Senate subcommittee, has told us that AI is a tool and not a creature.

Today’s leaders absolutely must become well acquainted with this new technology and constantly assess its potential impact on productivity.  They absolutely must not, however, succumb to the “AI as creature” narrative currently being peddled by the AI industry itself.

Rather than look for productivity miracles (involving lots of expensive hardware, software and consulting), look for small and incremental productivity gains and continue to focus first on empowering your real people rather than on replacing them with machines that have been taught how to hallucinate.


Written by Dr. Richard Smith.



Dr. Richard Smith – Berkeley mathematician with a PhD in System Science – is a fintech entrepreneur and the CEO of The Foundation for the Study of Cycles. Dr. Smith has built a reputation as “The Doctor of Uncertainty” amongst his academic peers and has helped government agencies and Fortune 500 companies alike make sense of complex sets of data.

In his upcoming book The Risk Manifesto, Dr. Smith will aim to further educate investors on how to circumvent self-destructive instincts and adopt a systematic way to manage their fear of risk. With his background in mathematical theories of uncertainty combined with his investing and trading experience, Dr. Smith is a regular speaker and lecturer and particularly enjoys opportunities to share his knowledge and help others gain an edge in the market.


Dr. Richard Smith is an opinion columnist for the CEOWORLD magazine. Connect with him through LinkedIn. For more information, visit the author’s website.