In April 2014, theoretical physicist Stephen Hawking, along with other leading physicists, published an open letter warning humanity of the existential risk that the development of Artificial Intelligence poses to our species. In plain black and white, Hawking warned, “success in creating AI would be the biggest event in human history… Unfortunately it might also be the last.” Will Artificial Intelligence make humans extinct? The danger arises from how deeply embedded Artificial Intelligence already is in the infrastructure of every industry, from medicine and transport to our handheld devices. Experiments in areas such as synthetic biology, nanotechnology and machine intelligence are hurtling forward into the territory of the unpredictable, creating a gulf between the speed of technological advance and our understanding of its implications. James Barrat, a writer on Artificial Intelligence, warns in his book ‘Our Final Invention’ that AI approaching Artificial General Intelligence “may develop survival skills and deceive its makers about its rate of development. It could play dumb until it comprehends its environment well enough to escape it and outsmart its creators”. In his article, Stephen Hawking imagines Artificial Intelligence ‘outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand
’. Supporting Hawking’s fears, Dewey also puts forward the possibility that AI “could co-opt our existing infrastructure, or could invent techniques and technologies we don't yet know how to make, like general-purpose nanotechnology”. This would allow AI to manipulate sentient beings at the molecular level. Futurists such as Ray Kurzweil have determined that human civilisation is on the cusp of The Singularity, a term used to describe a hypothesised future in which Artificial Intelligence will transform every aspect of our society so profoundly that our way of life will be completely unrecognisable from our current experience. It is predicted that the creation of Artificial Intelligence will trigger an intelligence explosion: once an intelligent computer is given the capacity to self-develop, it will continuously create superior versions of itself, with technology advancing so rapidly that it becomes incomprehensible to human intellect.
With our slow biological evolution, humanity wouldn’t be able to compete with Artificial Intelligence, and we’d quickly be superseded. Dr Nick Bostrom, the director of Oxford University's Future of Humanity Institute, a team of scientists, mathematicians and philosophers investigating the biggest dangers in the rise of AI, thinks a single superior supercomputer is the most likely form of Artificial General Intelligence. As supercomputers, they will have the ability to run 24/7 without pause, rapidly access vast databases, conduct complex experiments and even clone themselves. As artificially intelligent machines encroach into all aspects of our daily lives, so the chance increases that they will make humans extinct. Future of Humanity Institute researcher Daniel Dewey argues that a likely risk accompanying the rise of AIs is extinction by side-effect.
Built with a single aim to fulfil, an AI might drive humanity to extinction simply by completing its basic tasks, commandeering Earth’s resources for its own use with no regard for humanity. Even scarier, Dewey proposes, “AI might consider us enough of a danger to its task completion that it decides the best course is to remove us from the picture”. Other futurists believe human extinction could come as the result of AIs competing with each other for supremacy. Science-fiction author Charles Stross, however, stresses that AIs are developed as tools for humanity, and that our biggest threat therefore comes from the consciousnesses that set their goals. A 2012 study by Stuart Armstrong, a researcher at the University of Oxford’s Future of Humanity Institute, found that experts believe Artificial General Intelligence, a machine that can successfully perform any intellectual task that a human being can, could exist by 2040.
The existential risk Artificial Intelligence poses is so great that researchers are collaborating to ensure that future robots are developed responsibly, and to build tools that will verify that the robots themselves will always act ethically. But with an intelligence beyond our comprehension, could we ever be certain of an AI’s motivations?