A common misconception about AI is that machines will have sentience, desires, even consciousness. They have none of these. So what should we be worried about? (Photo: Shutterstock)

Are all your worries about Artificial Intelligence wrong?

AI will make us all healthier, wealthier, and happier. But we should not let machines make the decisions that only a human should.

When I was a young boy, I read far too much science fiction. As a result, I started to dream of robots and intelligent machines. Now, there is a reason science fiction is full of robots and intelligent machines. It is because they are part of our future. And that future is arriving very quickly. 

Computers can already beat us at many tasks. IBM’s Watson outperforms the best experts at general knowledge questions. There is a program that can outfly us in air-to-air combat, and an artificial intelligence (AI) that can read mammograms faster and more accurately than a doctor. There is even a robot that will beat you at rock, paper, scissors.

Science fiction is becoming science fact, and not surprisingly, many people are starting to worry about where this is all going to end.

Fears of AI are based on common misconceptions

The famous physicist Stephen Hawking told the BBC that “...the development of full artificial intelligence could spell the end of the human race.” Elon Musk backed him up, calling AI our biggest existential threat. But such fears are based on some common misconceptions many people have about AI, and about the threats it poses.

Professor Toby Walsh from UNSW Sydney is in Copenhagen on Saturday 18 March to talk about AI: the common misconceptions and what we really should be worried about. (Photo: Toby Walsh)

First of all, it's not AI that you need to fear but autonomy: the fact that we are giving machines the ability to act in the real world.

We are discovering this today with autonomous vehicles. They are not very smart, but we are already letting them make life or death decisions on our roads.

The problem, then, is stupid AI, not smart AI. Incompetence, not malevolence. The smarter we make autonomous cars, the safer they will be.

And we really want to have autonomous cars. A million people will die in road traffic accidents around the world next year, and 95 per cent of these accidents are caused by driver error. The quicker we can get humans out of the loop, the safer our roads are going to be.

Science and Cocktails

(Photo: Marie-Elisabeth Colin)

A popular event combining science and cocktails, held in Copenhagen, Denmark.

The program features top scientists from Denmark and around the world.

ScienceNordic and our Danish partner, ForskerZonen at Videnskab.dk, will bring you articles from some of the scientists involved throughout 2017.

You can also watch the lectures online. Videos will be uploaded to each of these articles after the event.

Next Event:

20:00, Saturday 25 November 2017
Byens Lys, Christiania, Copenhagen
Karen Douglas, University of Kent, UK
Secrets and lies: The psychology of conspiracy theories

Read More at Science and Cocktails

There will also be many other economic and social benefits. Autonomous cars will open up our cities. They'll give mobility to the young, the elderly, the handicapped. They'll transform the economics of transportation completely.

Read More: Creative machines: The next frontier in artificial intelligence

AI has no desires or consciousness

Another misconception about AI is that machines will have sentience, desires, even consciousness. They have none of these.

In March 2016, Google's AlphaGo program beat Lee Sedol, the world champion, at the ancient Chinese game of Go. This was a landmark moment. Go masters had said computers would never beat humans at this Mount Everest of board games. Yet Lee Sedol lost 4-1 to the machine.

However, there's no chance that AlphaGo is going to wake up tomorrow, decide that humans are no good at Go, and go off to make some money at online poker instead. Neither is it going to take over the planet, as some people would have you believe. It is not in its code.

It is not going to do anything other than play Go. Even getting AlphaGo to play chess would take years of effort.

Read More: Robots that look like us

Humans still learn faster

Another misconception about this match between AlphaGo and Lee Sedol is that it was a loss for mankind. The machine won 4-1. But AlphaGo needed to play billions and billions of games of Go to learn to play this well.

If you started playing Go the moment you were born, you could never play that many games. And it took Lee Sedol just three games to learn enough about AlphaGo, which was playing a new style of Go, to win the fourth game.

Humans are much quicker learners than machines. We have to be. You don't get a second chance when a tiger comes running after you.

Read More: Robots – our new underwater astronauts

Future technological unemployment is a real worry

So, what should we worry about today?

Many people are concerned about technological unemployment. Computers will eliminate many jobs, but they will also create new jobs. We do not know whether as many new jobs will be created as are destroyed, or how to retrain people for these new jobs.

So we should be worried about how we adjust our welfare systems, our education systems, and our financial systems to deal with technological unemployment.

Read More: Countries should put a universal basic income in place before robots take our jobs

Watch the entire event in the video above (Video: Science & Cocktails)

First real problem: computer programs discriminate

There are two other problems that society needs to start worrying about today. The first is a problem called algorithmic discrimination. We are letting algorithms make decisions that, knowingly or unknowingly, discriminate.

Here’s an example. COMPAS is a computer program in use in the United States that is trained to predict the probability of criminals re-offending. You could use this to decide how to allocate limited probation resources, helping these people to stay out of jail and making society safer. That would seem to be a good use of technology.

Unfortunately, that's not how COMPAS is being used.

It is being used by judges to decide on bail conditions and when to release criminals. And COMPAS has been shown to discriminate against black people.

It incorrectly predicts that black people are more likely to re-offend than they actually are. And it incorrectly predicts that white people are less likely to re-offend than they actually are. Black people are being unfairly locked up thanks to a bug in a computer program.
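To see how such a bias can be measured, here is a minimal audit sketch in Python, with entirely made-up data: the groups, records, and numbers are hypothetical illustrations, not COMPAS's real inputs or outputs. It compares the false positive rate, the share of people wrongly flagged as high risk, between two groups:

    # Audit sketch with hypothetical data: each record pairs the
    # program's risk flag with whether the person actually re-offended.

    def false_positive_rate(records):
        # Fraction of people who did NOT re-offend but were flagged high risk.
        did_not_reoffend = [r for r in records if not r["reoffended"]]
        wrongly_flagged = [r for r in did_not_reoffend if r["flagged_high_risk"]]
        return len(wrongly_flagged) / len(did_not_reoffend)

    group_a = [
        {"flagged_high_risk": True,  "reoffended": False},
        {"flagged_high_risk": True,  "reoffended": False},
        {"flagged_high_risk": False, "reoffended": False},
        {"flagged_high_risk": True,  "reoffended": True},
    ]
    group_b = [
        {"flagged_high_risk": False, "reoffended": False},
        {"flagged_high_risk": False, "reoffended": False},
        {"flagged_high_risk": True,  "reoffended": False},
        {"flagged_high_risk": True,  "reoffended": True},
    ]

    print("Group A false positive rate:", false_positive_rate(group_a))  # 2/3
    print("Group B false positive rate:", false_positive_rate(group_b))  # 1/3

A program can look accurate overall while its errors fall much more heavily on one group than another, and it is exactly this kind of gap between error rates that investigators reported for COMPAS.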

Even if we can improve the program to correctly predict whether someone is likely to re-offend, there is the deep philosophical question of whether machines should decide on who is locked up.

Depriving someone of their liberty is one of the most serious decisions we make within our society. In the interest of preserving our humanity, I believe this is something that we should leave to humans.

Read More: Surgeons are training robots to become their new assistants

Second real problem: “killer robots”

The second problem that should be worrying us today is the development of lethal autonomous weapons. Or, as the media like to call them, killer robots.

When you hear the words “killer robot”, you might think of the Terminator. That's a misconception. The problem is much simpler technologies that are at most only a few years away.

You will have seen pictures of drones flying in the skies above Afghanistan and Iraq. These are not autonomous. They are flown by remote control by a soldier back in Nevada. And a soldier still makes the final life-or-death decision to fire the aptly named Hellfire missile.

But it is a very small technical challenge to replace that person with a computer. The UK's Ministry of Defence believes this is technically feasible today, and it has a prototype to demonstrate it.

Read More: This drone will obey a winking eye

There is no robot battlefield

Autonomous weapons have been called the third revolution in warfare, after the invention of gunpowder and nuclear bombs. They will completely change how we fight wars, increasing the efficiency and speed with which we kill each other.

There are many misconceptions that people have about killer robots. We won't simply have robots fighting robots. There isn't a special part of the world called “the battlefield”. Do people imagine a signpost reading “battles over here”?

We don't yet know how to build robots that can fight ethically. We don’t know how to build robots that can distinguish between civilian and combatant as required by international humanitarian law. Nor do we know how to build robots that cannot be hacked by our enemies to behave unethically.

I was sufficiently worried about this issue in July 2015 that I got 1,000 of my colleagues working in AI to sign an open letter to the UN calling for a ban.

That letter now has over 20,000 signatures, including those of Stephen Hawking and Elon Musk.

In August this year, the UN will begin formal governmental discussions that might lead to a ban. You can help by lobbying political representatives in your country to move more quickly on this issue. We don’t have long to close this Pandora’s box.

Our future will undoubtedly be full of intelligent machines.

They will help make us all healthier, wealthier, and happier. So, as someone working in the field of AI, my concern is this: let us make sure that we do not let machines make decisions that only humans should.
