• Question: do you think that artificial intelligence will benefit humanity or will it be the cause of our extinction? Lots of films have touched on the subject (Terminator, Chappie, Ex Machina, etc.) but is it actually possible to create a 'thinking' computer programme?

    Asked by Rocketmanspencer to Jackie, Michele, Oliver, Yelong on 18 Mar 2015.
    • Photo: Oliver Brown

      Oliver Brown answered on 18 Mar 2015:


      Short answer:
      I hope it will benefit humanity, and not right now, no.

      Long, rambling, badly structured answer:
      A lot of science fiction (films in particular) explores the many ways in which such technology could go wrong. I think a major reason for this is just that it’s easier to write — you have a robot/AI baddie and humanity must somehow overcome them. This is problematic because it’s led to AI research getting a particularly bad reputation, which in my opinion is undeserved. Believe it or not, scientists don’t want to be destroyed by Terminators any more than the next person! That said, it’s also a very *good* thing that this is what makes up the majority of fiction about robots, because it means we’re thinking hard about what to avoid. AI and robotics researchers are well aware that there are concerns about the ways their work could develop, so they work hard to ensure that they do what they do safely — and that’s why I think AI will ultimately benefit us, because we want it to, and we’ve thought a lot about what to avoid.

      To come to my next point, when I was in sixth form I did an OU course on robotics, and the big thing then on the AI side was neural networks — what’s now called machine learning.

      The computer is given a set of inputs, the desired outputs, and some parameters it’s allowed to vary between the two. It then tries to optimise the parameters until it gets the right output. That way, when it’s given a new set of inputs, the computer has ‘learnt’ the right method and can produce the right outputs. One of the more advanced AI projects I know about right now is a guy trying to get a program called ANGELINA to design games by trawling the internet for stuff to decorate platform or maze games made with simple algorithms. The point is, AI research is not as advanced as a lot of people think!
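
      To make that learning loop a little more concrete, here is a tiny toy sketch in Python. It’s my own illustration (nothing to do with ANGELINA, and far, far simpler than a real neural network), but it shows the same idea: the program is given example inputs and the outputs we want, it nudges its two adjustable parameters until its guesses match, and it can then handle an input it has never seen.

```python
# Toy illustration of the "inputs + desired outputs + adjustable parameters" loop.
# The hidden rule in the training data is simply: output = 2 * input + 1.

inputs = [1.0, 2.0, 3.0, 4.0]   # example inputs the program is shown
targets = [3.0, 5.0, 7.0, 9.0]  # the outputs we want it to produce

w, b = 0.0, 0.0       # the parameters the program is allowed to vary
learning_rate = 0.01

# Repeatedly adjust w and b to shrink the error between guess and target.
for step in range(5000):
    for x, y in zip(inputs, targets):
        guess = w * x + b
        error = guess - y
        # Nudge each parameter slightly in the direction that reduces the error.
        w -= learning_rate * error * x
        b -= learning_rate * error

# The program has now 'learnt' the rule, so it can handle a new input.
print(f"learnt parameters: w = {w:.2f}, b = {b:.2f}")  # close to 2 and 1
print(f"prediction for input 10: {w * 10 + b:.2f}")    # close to 21
```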

      AI covers a very broad range of problems in computer science, but the kind of AI that people think of — a synthetic human — is called AI-Complete, and it’s a very, very, very long way off. It might not actually be possible at all with a classical computer. So one day we might be able to create a ‘thinking’ (AI-Complete) programme, but it won’t be any time soon.

      Final note:
      For a more nuanced sci-fi take on AI, I strongly recommend Isaac Asimov’s “I, Robot” collection of short stories (the film is not related). Also the likes of Iain M. Banks’ “Culture” novels, and the “Long Earth” novels by Stephen Baxter and the wonderful, very sadly late Terry Pratchett. (Off the top of my head :P)

      *edit*
      Sorry, this was WAY too long :O

    • Photo: Michele Faucci Giannelli

      Michele Faucci Giannelli answered on 18 Mar 2015:


      Olli gave you a long and extensive answer. Let me add a couple of points I think you should be aware of.

      What makes us human is the fact that we follow a lot of rules we pick up from living together that are not encoded in our DNA, which is the equivalent of an AI’s base code. This is what makes AI unpredictable: we do not know how to build these social rules, like “do not kill another human being”, into the machine code. Remember that these rules are largely followed for fear of repercussions: you steal, you go to jail. What would be the deterrent for an AI? Especially one built into a metallic exoskeleton, very well equipped to resist the average police officer? These are all open questions which do not have a good enough answer yet. At least not one that we know of.

      And here I want to make my second point. A lot of this research is carried out by defence companies, and the results are not shared with the general public for strategic reasons. So I would not be surprised if we were already ruled by a benign AI working in secret, like the one described in Person of Interest.

      I want to make a final point. Even if a benign AI could be developed to help mankind, it would still have a hard balance to strike. What is the best thing for mankind? Probably, as a species, we would need to keep evolving, which implies a cycle of life and death for its members (us). This is clearly good for the species but not so much for the individuals, who could probably be made to live much longer lives but are “sacrificed” for the greater goal of the species. Where would you set the balance? A static race of almost immortal beings (the Asgard in SG-1), or an evolving race of short-lived beings?

      I have already mentioned a couple of references. I want to add the book “Robopocalypse”, which is very interesting for this topic.

    • Photo: Jaclyn Bell

      Jaclyn Bell answered on 18 Mar 2015:


      I think even if I were to add to both of these lengthy answers, it would be too long for you to bother to read, haha, so I will be brief:

      Yes, there are people working on this technology as we speak.

      No, it is not at the level it is in the movies – this will take a very, very long time, as AI is extremely difficult to master, and

      No, I don’t think it will cause our extinction. AI will benefit humanity until it overtakes our intelligence – then it will be a threat. However, extinction would mean every single one of us would have to die. I don’t think it’s likely that humans will ever become extinct… billions of us could die, yes, from a pandemic, nuclear fallout, global warming, etc., but extinction… I really doubt it.
