
Nine Things You Should Know About AI

AI – the use of computers or machines to produce intelligent behaviour – is expanding rapidly. In the last decade it has conquered a number of tasks, including agile legged robot locomotion, recognition of objects in images, speech recognition and machine translation, with self-driving cars among the developments likely to be perfected in the next wave.

But it won’t stop there. General-purpose AI – capable of doing everything that a human can do, or doing it better – is the ultimate game-changer. According to this year’s ´óÏó´«Ã½ Reith lecturer, Prof. Stuart Russell, the development of this technology could be “the biggest event in human history”. In an essay co-written with Stephen Hawking in 2014, he concluded: “Unfortunately, it might also be the last.” It’s this stark warning that runs through Prof. Russell’s fascinating – and sometimes shocking – series of four Reith Lectures.

General-purpose AI is not here yet, but Russell argues that we must plan for it in order to avoid losing control over our own future.

1. AI is already a big part of your life

“Every time you use a credit card or debit card, there's an AI system deciding ‘is this a real transaction or a fraudulent one?’,” explains Prof. Russell. “Every time you ask Siri a question on your iPhone there's an AI system there that has to understand your speech and then understand the question and then figure out how to answer it.”

2. AI could give us so much more

General-purpose AI could – theoretically – have access to all the knowledge and skills of the human race, meaning that a multitude of tasks could be carried out more effectively, at far lower cost and on a far greater scale. This potentially means we could raise the living standard of everyone on Earth. Professor Russell estimates that this would equate to a world GDP of around ten times the current level – equivalent to a cash value of 14 quadrillion dollars.

3. AI can harm us

There are already a number of negative consequences from the misuse of AI, including racial and gender bias, disinformation, deepfakes and cybercrime. However, Professor Russell says that even the normal way AI is programmed is potentially harmful.

The “standard model” for AI involves specifying a fixed objective, for which the AI system is supposed to find and execute the best solution. The problem when AI moves into the real world is that we can’t specify objectives completely and correctly.

Having fixed but imperfect objectives could lead to an uncontrollable AI that stops at nothing to achieve its aim. Professor Russell gives a number of examples of this including a domestic robot programmed to look after children. In this scenario, the robot tries to feed the children but sees nothing in the fridge.

“And then… the robot sees the cat… Unfortunately, the robot lacks the understanding that the cat’s sentimental value is far more important than its nutritional value. So, you can imagine what happens next!”

The clickbait-amplifying, manipulative algorithms of social media illustrate the pitfalls of pursuing fixed but misspecified objectives. While these algorithms are very simple, things could get far worse in future as AI systems become more powerful and we place greater reliance on them. In a chilling example, Russell imagines a future COP36 asking for help with ocean acidification. Even though the pitfalls of specifying narrow objectives have been factored in, the solution found by the AI system is a chemical reaction that uses up a quarter of all the oxygen in the atmosphere.

One could say, “Just be more careful in specifying the objective!” But even for a relatively narrow task such as driving, it turns out to be very difficult to balance the variables of speed, safety, legality, passenger comfort, and politeness to other road users.

4. AI needs humility

If specifying complete and correct objectives in the real world is practically impossible, then, Russell suggests, we need a different way to think about building AI systems. Instead of requiring a fixed objective, the AI system needs to know that it doesn’t know what the real human objective is, even though it is required to pursue it. This humility leads the AI system to defer to human control, to ask permission before doing something that might violate human preferences, and to allow itself to be switched off if that’s what humans want.
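The contrast between the “standard model” and Russell’s humble alternative can be caricatured in a few lines of code. This is only a toy sketch, not Russell’s formal proposal – the actions, scores and “confidence” values below are invented for illustration – but it shows the behavioural difference: a fixed-objective agent blindly maximises, while a humble agent defers to a human when it is unsure that its estimate of the objective reflects real human preferences.

```python
# Toy contrast: a "standard model" agent pursuing a fixed objective
# versus a humble agent that is uncertain about the true objective.
# All names and numbers here are illustrative, not from the lectures.

def standard_agent(actions, objective):
    """Pick the action that maximises a fixed objective, no questions asked."""
    return max(actions, key=objective)

def humble_agent(actions, estimated_objective, confidence, threshold=0.9):
    """Defer to a human when confidence in the estimated objective is low."""
    best = max(actions, key=estimated_objective)
    if confidence[best] < threshold:
        return "ask-human-first"  # request permission instead of acting
    return best

actions = ["serve leftovers", "cook the cat"]
score = {"serve leftovers": 0.4, "cook the cat": 0.9}.get      # nutrition only!
confidence = {"serve leftovers": 0.95, "cook the cat": 0.2}    # invented values

print(standard_agent(actions, score))            # prints "cook the cat"
print(humble_agent(actions, score, confidence))  # prints "ask-human-first"
```

The nutrition-only objective is exactly the domestic-robot failure Russell describes: the standard agent maximises it literally, while the humble agent’s low confidence triggers a request for permission – the deference, and switch-off-ability, that section 4 argues for.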

5. AI warfare is not science fiction – it’s already here

The idea of a malevolent and warlike AI was popularised in the 1984 film The Terminator, with the rogue Skynet and its quest for world domination. “It makes people think that autonomous weapons are science fiction. They’re not. You can buy them today,” warns Professor Russell.

The Kargu 2 drone is one example. Advertised as capable of “anti-personnel autonomous hits” with “targets selected on images and face recognition”, this dinner-plate-sized drone, made by STM (“an arm of the Turkish government”), was used in 2020 to hunt down and attack human targets in Libya – according to a recent UN report.

6. Your job is at risk

Despite the prospect of killer robots, it’s AI’s effect on the world of work that scares people most. Concern ranges from parents watching their children’s academic or job prospects being decided by AI systems sifting through applications, to a succession of Nobel prize-winning economists who describe it as “the biggest problem we face in the world economy”.

As technology advances, economies experience an inverted “U-curve” in employment. Professor Russell explains: “The direct effects of technology work both ways: at first, technology can increase employment by reducing costs and increasing demand; subsequently, further increases in technology mean that fewer and fewer humans are required once demand saturates.”

“In the early 20th century, technology put tens of millions of horses out of work,” says Professor Russell. “Their ‘new job’ was to be pet food and glue. Human workers might not be so compliant.”

7. Wall-E could have been a documentary

The reaction of economists to the threats presented by AI has seen two camps emerge – one favouring UBI (Universal Basic Income), the other warning that UBI is an admission of failure. The latter group argues that guaranteeing a level of income will result in a lack of striving, taking us closer to a life of the kind depicted in the animated film Wall-E, where robots do all the work and humans have become lazy and enfeebled.

8. Our humanity is our greatest resource

The economic challenge of AI might teach us a new appreciation of the interpersonal professions – such as psychotherapists, executive coaches, tutors, counsellors, social workers, companions and those who care for children and the elderly. Prof. Russell suggests that these roles should be seen as emphasising personal growth rather than, as is often the case, creating dependence. “If we can no longer supply routine physical labour and routine mental labour,” he says, “we can still supply our humanity.”

9. AI has no endgame

Prof. Russell urges people not to think of AI as an arms race. “We have Putin, we have US presidents, Chinese general secretaries talking about this as if we're going to use AI to enable us to rule the world. And I think that's a huge mistake.”

Wrangling over the control of general-purpose AI would be, Professor Russell says, as futile as arguing over who has more digital copies of a newspaper. “If I have a digital copy, it doesn't prevent other people from having digital copies, and it doesn't matter how many more copies I have than they do, it doesn't do me a lot of good.”

The Reith Lectures 2021: Living With Artificial Intelligence
