Over the next decades, Artificial Intelligence (AI) innovation will continue at breakneck speed. Advances in computer performance, robotics, and software development efficiency will lead to staggering new systems. Today's narrow AI applications already outstrip human performance in specific areas, and those areas will grow until AI equals or exceeds human performance in most areas, at which point it may have morphed into AGI. What does that mean for our future?
Defining AGI: For the purposes of this article, let's agree that AGI means a computer system that can equal or exceed the mental processes of an average person. I am often asked, "Do you think an AGI will write a symphony, expand on Einstein's relativity, or solve marital problems?" Invariably the proposed task is an activity that most humans cannot do either. Let's stick with actions that most people could learn to do.
- AGIs will impact employment: Will AGI eliminate jobs? Yes, but that trend has been underway for decades and is not unique to AGI. My first Silicon Valley company developed CAD software, and today the pencil-and-paper drafting profession is virtually gone. With the pending advent of self-driving cars, I would not recommend selecting "taxi driver" as a career choice right now. But we have generally thought in terms of technology eliminating low-skilled jobs. Now, and in the future, higher-level professions will be changed or eliminated as well. For example, surgeons today cannot continue without retraining in the latest robotic techniques, and radiologists will do less routine review of individual images as that work is taken over by automated vision systems. Retraining and upgrading skills will become a continuing necessity at all levels. This conclusion is already accepted in certain professions but will spread to virtually all employment.
- AGIs will change our entire economy: If we consider a future scenario, perhaps only a few more decades out, where most human occupations can be performed faster and cheaper by AGIs and robotics, what will happen to our overall economic system? I will leave it to the economists and social scientists to define the transition to a world where wealth (or even a minimum income) is primarily available only to those who own or control AGIs and robots.
- Computers will get faster: Intel co-founder Gordon Moore observed that computer performance, for a given price, doubles every 18 months, and this has held true for the past 50 years (dubbed "Moore's Law"). It is important to note that this is a law of economics, not physics: the R&D cost needed to make a CPU twice as capable can be justified by new sales after only 18 months because customers are willing to pay for a twice-as-fast CPU. There is no end in sight as we continue to create compelling applications that require faster CPUs. Because of the compounding every 18 months, CPUs generally get a thousand times faster every 15 years, or a million times faster every 30.
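The compounding arithmetic above can be sketched in a few lines of Python. This is purely an illustration of the doubling math, not anything from the article itself; the function name is my own:

```python
def performance_multiplier(years: float, doubling_months: float = 18.0) -> float:
    """Relative performance gain after `years`, doubling every `doubling_months`."""
    doublings = years * 12.0 / doubling_months
    return 2.0 ** doublings

# 15 years = 10 doublings, 30 years = 20 doublings
for years in (15, 30):
    print(f"{years} years -> ~{performance_multiplier(years):,.0f}x")
# 15 years -> ~1,024x       (roughly "a thousand times faster")
# 30 years -> ~1,048,576x   (roughly "a million times faster")
```

Ten doublings yield 2^10 = 1,024, which is where the "thousand times faster every 15 years" figure comes from; twenty doublings compound to about a million.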
- AGI is inevitable: With faster CPUs, and today’s huge investments in AI, the emergence of AGI is the logical continuation of current trends. To those concerned that AGI might be a bad idea because it’s a dangerous genie we won’t be able to put back into the bottle, I can only say that the stopper is already out of the bottle and we’re just waiting for the genie to emerge. Our current dependence on technology and our capitalist approach presses our technology forward as fast as possible, again with no end in sight.
- True AGI will emerge through robotics: Many of today's AI limitations in understanding hinge on a lack of real-world context. We understand coffee because we've brewed it, poured it, spilled it, and been scalded by it, whereas AI has only read about coffee. Babies playing with blocks learn about balance, gravity, stacking, and a myriad of other real-world concepts before they learn their first words. Robots that learn by exploration the way infants do are the key to developing this real-world context.
- AGI will emerge sooner than most people think: Combining some of the previous points, AGI could emerge in as little as ten years. Computers will be more powerful, AI developments will have continued to expand, and AI technologies will come together to form AGI.
- AGI will emerge gradually: There won’t be a specific “singularity” when an AGI’s abilities suddenly surpass a human’s. Today’s AI systems fail to perform outside their area of expertise, but these areas will continue to expand while the gaps between will contract until the level of overall ability might arguably be AGI. But with continuing AI development and computer speed, the argument over whether or not true AGI has been achieved will become moot just a few years later.
- AGI capabilities will continue to increase: There is no reason that today’s exponential growth toward AGI will stop at the point when we reach human-level intelligence. In fact, once AGIs begin working as programmers or CPU designers, the rate of growth will increase! Based on today’s rates, once AGI emerges, we will have systems a million times more capable only 30 years thereafter. Such levels of ability are unimaginable! The same is true for many of today’s technologies which were inconceivable 30 years ago.
- AGI will have a profound impact on mankind: If we were visited by aliens from an advanced technological world, just the knowledge of their existence would change our outlook on the universe. Whether the impact is a boon to mankind or a disaster depends on the intent of the aliens and our reaction to them. The emergence of AGI will be similar except that we (mankind) can control both the intent of the AGIs and our reaction to them.
- AGI need not be dangerous: AGIs will include goal-based learning engines, and the goals we set for AGIs will control their overall behavior, so we will need to be exceedingly careful in defining those goals. Many doomsday scenarios depend on AGI goals that make the systems territorial, possessive, greedy, and violent. These would be foolish goals; far better to make AGIs knowledge-seeking, understanding, nurturing, and caring. Further, many of the cited dangers exist with or without AGI. Today's technology can already be used to create autonomous weapons or to eliminate mankind altogether, and these capabilities can already be deployed by humans. Adding AGI (with appropriate goals) to the mix doesn't make these scenarios more likely. In fact, it might make them less so.
Written by: Charles J Simon.