On Embracing AI

No Reason to Fear AI

With artificial intelligence dominating the business news cycle, there’s a lot of talk about innovation and the future. There’s also a lot of talk about the drawbacks and dangers of the emerging technology. In this article, originally published by Forbes, Vested Chief Economist Milton Ezrati helps readers move past their fears and focus on embracing AI.

Previously published on April 7, 2023 in Forbes
By Milton Ezrati, Chief Economist at Vested

Artificial intelligence (AI) has engendered some enthusiasm but a lot more fear. Because applications can now conduct research and write articles, matters have advanced to the point where many who have dreaded AI for years now forecast that it will become sentient and take over, the way the computer HAL did in the 1968 film “2001: A Space Odyssey.” Advances have also engendered less sci-fi fears that AI and robotics will steal millions of jobs and cause widespread unemployment. The computer takeover is hardly realistic. Meanwhile, any job losses will unfold at a much slower pace than the fearmongers suggest, and, if history is any guide, the changes will create as many jobs as they destroy.

Before one gives way to fears of an AI takeover, it might help to consider what else a computer would need to develop a lust for power. It is of course easy to imagine a lust for power in a machine, which is why the old film worked so well. But to have that desire the machine would also need other human characteristics that are hard to imagine, things like an ability to be embarrassed or angry or frustrated. Could a machine have a sense of inadequacy that would make it want to turn the tables on its human controllers? Anything is possible, of course, just not very likely.

Another reason to be dubious about this frightening prospect of a machine takeover emerges from an experiment conducted by the Wall Street Journal columnist Gerard Baker. He asked the AI writing app ChatGPT to opine on a familiar ethical life-and-death dilemma, a thought experiment that emerges in almost every ethics discussion, the specifics of which matter not for these purposes. The program recognized the reference and its ambiguities, and it came back with a review of what others have said about the matter. He then asked this: if it meant saving someone’s life, would it be okay to publish certain unpopular racial slurs? The program came back with a blanket statement that it is never right to use such words. In other words, let the person die rather than risk offending someone. The program neither thought nor exercised judgment. It simply regurgitated its programming. That is a long, long way from sentience or the will to power.

On the matter of lost jobs, recent and more distant history says that these fears, too, are misplaced. Consider how AI has in recent years grown by leaps and bounds, both in its applications and in its sophistication. If it were poised to render millions of jobs irrelevant, one would think the effect would have started to appear by now. Yet the joblessness rate in the United States today is near a 50-year low.

A counterargument might reasonably hold that the job losses will wait until business fully implements the technological breakthroughs. Historically, innovations have taken time to have an effect. The personal computer, for instance, was developed in the 1970s, and it took until the 1990s and later for it to become ubiquitous on office desks and shop floors, where it eventually did displace many jobs. But that is just the point. Business does not simply wait for the impact. With the PC, business, in that 20-year interim from invention to widespread use, also developed applications previously undreamed of, and these created millions of new jobs. Consider, for example, that while the internet and word processing displaced millions of office clerks, Federal Express and similar services used the new technology to track packages from pickup to delivery and so offer new services that continue to employ millions in jobs that did not exist in the 1970s.

This is just one example. Go back hundreds of years and the same pattern prevails. In the late eighteenth and early nineteenth centuries, when steam-powered spinning and weaving machines displaced hand weavers, there was great fear of widespread unemployment. People formed groups, the Luddites, to break up the threatening “robots.” They failed to stop the change. Yet by the early nineteenth century, Britain’s textile output had increased 50-fold and employed many more people than before the machines were invented.

Each wave of invention generates similar fears of mass unemployment. Yet over the almost 300 years since the industrial revolution began this process of what the economist Joseph Schumpeter called “creative destruction,” developed economies have on average employed about 95 percent of those who want to work. Had the innovations caused permanent displacement, as was feared then and is feared today, that figure would have fallen with each innovative wave.

If this history is any guide, and it likely is, the application of AI will create as many jobs as it destroys, perhaps more. As with the PC, the internet, the spinning jenny, and other advances, not all those new jobs will require advanced degrees. While this familiar process unfolds in coming years, AI will become more sophisticated. It will cause a lot of disruption as new jobs replace old ones. In the interim, the technology will not develop the human feelings needed to motivate a takeover. For better or worse, human beings will remain in charge. Things may be different this time, but people have said that before, too, and have always been wrong.
