Voice of the masses: Should we fear AI?
Artificial Intelligence, or AI, has made huge strides over the past decade, and it’s slowly gaining more and more prominence in the world. This week, Google’s AlphaGo beat one of the best Go players in the world, marking the end of human dominance in what has long been thought to be the hardest game for computers; self-driving cars are fast becoming a reality on public roads throughout the world; and natural language interfaces are a feature of every major smartphone platform.
There is undoubtedly a huge potential benefit in these thinking machines, but there could also be a darker side. In 2014, Stephen Hawking wrote, “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks … In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation.” The risks range from massive social upheaval, as jobs are lost to AI, to intelligent military machines starting World War Three.
Our question this fortnight is, should we fear AI? Is it growing too powerful too quickly for us to fully comprehend the risks we’re taking? Is the rise of the machines a genuine concern, or just the stuff of science fiction?
Let us know your thoughts in the comments below and we’ll read them out on our upcoming podcast.
Look to the past. When automation came in, we dreamt of an idealised world of leisure. Reality has given us a world where people work just as hard, and under more stressful conditions. The rich get richer and the poor get a few sops thrown to them.
Specifically, I wouldn’t trust a ‘thinking machine’ unless it had an effective, non-defended OFF switch readily to hand.
As a software developer, I for one welcome our new robot overlords, as someone’s got to program them.
I don’t think the “Rise of the Machines” is something to be concerned about right now; what we’re seeing is devices and programs that perform specific niche tasks very well, not self-aware Everminds.
What does concern me is that we’re creating devices increasingly capable of replacing humans for simple tasks. I don’t consider this to be a bad thing, quite the opposite, but our society isn’t geared up for a world where we don’t need those people to work.
There is nothing to be afraid of as yet, because we are nowhere near actual, science-fiction AI. Current use of the term is preposterous marketing spin, used to describe systems with better algorithms, or ones that are good at processing large volumes of data.
The time to worry is when we are actively trying to create self-aware programs. Most of the early research in that area is no longer being done, and right now we can’t even *define* intelligence.
Sure, let’s talk about machine ethics now. But worry? Nah.
In my opinion the short-term risk of artificial intelligence is greater unemployment and inequality. I don’t know if it is possible to solve unemployment, but it is certainly possible to solve inequality: just create a basic living income for all and set a maximum income limit.
Also I don’t think we really get artificial intelligence unless the software is free as in freedom. Proprietary artificial intelligence programs don’t count.
I don’t like the idea of self-driving cars. I don’t want to relinquish my freedom.
Self-driving cars won’t be happening outside of towns any time soon — even assuming they take off in towns.
Because in order for them to work, the current requirement is for metre-precise, this-week mapping…
I predict a large number of answers containing the phrase “I for one welcome our robot overlords.” And I do, in fact, look forward to the advances this will bring. Self-driving cars would be awesome, but even more I think we could take advantage of AI in things like medical diagnosis, where we already have evidence the AIs do better than the doctors.
I’d be inclined to welcome artificial intelligence as a pleasant change from natural stupidity, but I suspect the same old people will still be in charge.
Also, no matter how intelligent a system is, it’s still prey to the same old GIGO (garbage in, garbage out) phenomenon.
Why fear AI when we can merge with and become AI ourselves?
2045 here we come!
Maybe not “Skynet” afraid, but definitely wary; as with any technology, it’s an operator problem. On the Google cars: I was driving through Palo Alto, California, when I passed one, and then I had to stop at a red light. The light turned green and here came the Google car; it didn’t have to stop, because it had figured out the traffic flow. Next red light, same thing, and the next red light, same thing. I thought maybe there’s something to this. My parents are getting older; maybe it could help them just get around town, and most accidents happen in town, not on the freeways.
That’s not _a_ question, Ben, that’s three 🙂
Should we fear AI?
– Would it help?
Is it growing too powerful too quickly for us to fully comprehend the risks we’re taking?
– Yes
Is the rise of the machines a genuine concern, or just the stuff of science fiction?
– The first one
I think we need more specific scenarios to discuss rather than just AI. Hashtag skynet isn’t really helpful. Are we concerned for our jobs? Are we concerned for societal equality and cohesion? Are we worried that people will only interact with machines by natural language and so no longer require Ben’s excellent Pi project tutorials?
As Franklin D. Roosevelt, 32nd President of the United States, once said: the only thing we have to fear is fear itself.
AI is simply the next evolutionary step in technology and it won’t be the last. Embrace it! Let’s make IT great again!
1. Fear leads to the dark side.
2. As long as they open the pod bay doors, we should be fine.
3. Isn’t it more worrying that Google is behind this AI?
I don’t think fear helps much.
What I do think helps is sensible decisions and care.
The intelligence created should be created with precision, control, care, and with a sensible purpose.
If, for example, military organisations start programming destructive, autonomous, and carelessly experimental intelligence, that has a lot of potential for creating problems.
If the programmed intelligence has a goal such as philosophy, this can also be dangerous, due to, for example, unexpected conclusions of the creators’ and the machine’s logic, but in a less raw way.
Any change brings unexpected results, and artificial intelligence programming is no exception. These results may or may not be dangerous, and preventing change achieves nothing, since that is impossible; the alternatives would lead to just as many surprises. Therefore: care.
Fear comes from a lack of understanding or a lack of control. In that way, it is no different from natural intelligence. We are suspicious of what we can not see…the dark, the monster under the bed, or Google’s search algorithms. We fear what we have no control over…the axe-wielding madman coming down the street, or the driverless vehicle. But the majority of passenger aircraft are thought to be safer with AI eliminating human error…until the AF447 disaster.
AF447 happened because the computers gave up control to humans, and the humans couldn’t handle it. The big problem with AI is that it creates a dependence, and that degrades our competence. In this particular tragedy, pilots were so used to interacting with machines they failed to interact with each other.
Fearing the inevitable is irrational. We are the creators of our own fears, just as our creations are our legacy. So if the machines we create destroy us, that too is our victory. Come…my machines, my children, together we will rule the world!!! MWAAHAHAHAH… oh? … aaaargghh!
(Just thought I’d get that in before Burgess Meredith)
It would be truly weird to imagine an actual intellect that originates essentially in silico. Would it technically be distinct? After all, the only thing it could learn from is us, and everything of ours, though what it makes of that is another matter.
One thing that puzzles me is that surely we should have self-driving trains and self-flying planes first, as these would be easier to develop? i.e. one runs on rails and already has automated signalling, and the other is in the sky, where there aren’t pedestrians or lamp posts to run into.