AI's murky future to bring many years of instability

Although AI has great potential to bring exciting changes to education, art, medicine, robotics, and other fields, it also poses major risks, most of which are not being addressed. Judging by the response so far from political and other institutions, we can safely expect many years of instability.

A Baidu Apollo Go robotaxi travels on a road in Wuhan, China, where the company operates an autonomous ride-hailing service.

We are now two years into a transformation comparable in importance to the first Industrial Revolution. But with expert forecasts of the impact of artificial intelligence ranging from Panglossian to apocalyptic, can we really say anything yet about what it portends? I think we can.

First, neither nirvana nor human extinction will come anytime soon. Instead, we can look forward to many years of instability. AI technology will continue to make rapid progress, with ever more remarkable capabilities. We haven’t even exhausted the current transformer-based models (which rely heavily on brute force computation), and enormous efforts are underway to develop better models, semiconductor technologies, processor architectures, algorithms, and training methods. Eventually, we will get to artificial general intelligence systems that equal or surpass human intellect.

For now, though, AI remains remarkably limited. It cannot even cook a meal or walk your dog, much less fight a war or manage an organisation. A malevolent superintelligence will not be taking over the planet any time soon. But how the AI revolution plays out — and the ratio of progress to pain — will depend on a series of races between the technology and human institutions. So far, the technology is leaving the human institutions in the dust.

I am very much an optimist about AI’s potential benefits, and I see exciting and encouraging developments in education, art, medicine, robotics, and other fields. But I also see risks, most of which are not being addressed. What follows is a brief, necessarily simplistic, tour.

It’s complicated 

As was true during the First Industrial Revolution, the employment and income effects of AI will be capriciously distributed, often appearing with little warning. The overall trajectory of gross national product might look wonderfully positive and smooth, but underneath that clean curve will be a great deal of pain and anxiety for considerable numbers of people, at every level of society, along with new opportunities for many, and enormous fortunes for some.

Currently, AI is most suited to automating highly complex, but also highly structured, activities: navigating streets, classifying images, playing chess, using languages (both human and computer). But the actual effect of AI on a given human activity depends on three variables: the rate and degree of automation; the human skill levels associated with the activities that can (and cannot) be automated; and — crucially — how much additional demand will be created by the availability of inexpensive AI automation.

What this means in practice can be quite surprising. Consider some examples, starting with language translation. I recently spoke with two eminent AI experts, one after the other. The first argued that AI will soon eliminate human translators completely because AI translation will be essentially perfect within five years. But the second expert argued that we will need more translators than ever. 

As AI enables the rapid, inexpensive translation of absolutely anything, there will be an explosion in translated material, with human oversight required to train and improve AI systems, and also to review and correct the most important materials.

Upon further investigation, I concluded that this second view is more accurate. There will be a huge explosion in what gets translated (in fact, there already is); and for some things, we will still want human oversight. 

Translation is not just for weather reports and menus; it is also for the FBI, the CIA, chemical companies, medical-device manufacturers, emergency-room doctors, world leaders, surgeons, airplane pilots, commandos, and suicide-prevention hotlines.

While human translators’ roles will shift toward training, monitoring, and correcting AI systems, we probably will need translators for a long time to come.

Similar questions arise in other fields. Many believe that software engineers’ days are numbered because AI is getting really good at doing what they do, using only nontechnical human instructions. But others argue that this trend will drive a huge increase in the quantity and complexity of software produced, requiring many human specialists to conceptualise, organise, verify, and monitor this massive body of code. Here, there is not yet a consensus about AI’s net labour effects.

For lawyers, the future looks tougher. It is still early days, but I have already had numerous conversations that go like this: "We needed an employment/investment/partnership/acquisition agreement, but our lawyer was taking forever, so we asked Perplexity (an AI service) to do it instead, and it worked. We had a lawyer check it, and it was fine, so we don't need lawyers anymore, except to review stuff."

And, unlike language translation, it seems unlikely that AI will lead to a thousandfold explosion in legal work. 

So, I anticipate that lawyering will indeed come under pressure, with humans handling only complex cases that require highly trained experts. Conversely, in some other professions — accounting and auditing are often mentioned — AI will alleviate severe shortages of trained professionals.

Now consider driving. The current (and fully warranted) focus on autonomous vehicles has obscured something else: AI has already de-skilled driving as a profession. Twenty years ago, an urban taxi driver had to be smart, alert, and have a superb memory. But now, London cabbies’ legendary mastery of “the Knowledge” is no longer needed. The availability of AI-driven turn-by-turn directions on every phone has turned professional driving into mindless gig work for ride-hailing platforms. And when autonomous driving gets good enough (and it’s almost there), these jobs will disappear completely.

Next, consider robotics (of which autonomous vehicles are, in fact, just one example). 

With generative AI, we are witnessing a revolution that will eventually affect all physical activity, from manual labour to housework to warfare. Venture capital investment in robotics has sharply increased to billions of dollars this year, suggesting that the VC industry is making huge bets that robots will start to replace humans on a massive scale within the next five years.

The first activities to be fully automated will be in highly structured, controlled environments — warehouses, fulfilment centres, supermarkets, production lines. Automation will take longer for unstructured activities near humans (like in your home, or on the road), but there, too, progress is being made.

The new sleepwalkers 

Another domain where AI has made terrifyingly rapid progress is weaponry. Here, the relevant analogy is not the Industrial Revolution, but World War I. In 1914, many on both sides thought that the war would be relatively painless; instead, new technologies — machine guns, explosives, artillery, and chemical weapons — brought horrific mass carnage.

And I fear that, at present, few political or military leaders understand just how deadly AI-driven warfare could be. AI will remove humans from many combat roles, but it will also mean that any humans who are in combat will be killed with extreme efficiency. Will this result in sanitised wars with no human combatants, or in unprecedented slaughter? The early evidence from Ukraine is not encouraging.

Inexpensive, AI-driven systems are also destabilising the sources of national military power by rendering expensive human-controlled systems such as armoured vehicles, ships, and aircraft extremely vulnerable to inexpensive AI-controlled weapons.

Worse, this is occurring at the onset of a new cold war, and during a period of heightened domestic political instability across the West. And what will AI mean for gun control? Will the Second Amendment of the US Constitution be interpreted to protect AI-controlled weapons that can be placed in a hotel room window and programmed to target everyone below, or a specific person, one week later?

A final concern is disinformation. While AI is already capable of producing somewhat realistic fakery in text, images, short videos, and audio, many observers have taken comfort from the apparently minor role that AI fakes have played, up to now, in elections and the news media. But declaring victory would be dangerously premature. For now, it is easy enough for reputable news organisations, major internet platforms, and national intelligence services to determine what is real and what is fake. But AI technology is still in its infancy. What will happen a decade from now (or possibly sooner) when nobody will be able to say with certainty what is real?

These issues will play out in many domains. One obvious implication is that countries need to reinvent and strengthen their social safety nets and educational systems to navigate a world in which skills and entire professions will be appearing and disappearing fast and often. The anger we see among people left behind by the last 30 years of globalisation is likely to seem mild compared to what AI could yield unless we prepare for it. Similarly, we need extremely stringent regulation of deepfakes, including labelling requirements and stiff criminal penalties for producing or distributing unlabelled ones.

Welcome to the future. I hope we can get our arms around it because it’s coming whether we like it or not.

Charles Ferguson, a technology investor and policy analyst, is Director of the Oscar-winning documentary Inside Job


© Examiner Echo Group Limited