The potential benefits and downsides of artificial intelligence (AI) and artificial general intelligence (AGI) have been discussed a lot lately, largely due to advances in large language models such as OpenAI's ChatGPT.

Some in the industry have even called for AI research to be paused or even shut down immediately, citing the possible existential risk to humanity if we sleepwalk into creating a super-intelligence before we have found a way to limit its influence and control its goals.

While you might picture an AI hell-bent on destroying humanity after discovering videos of us shoving around and generally bullying Boston Dynamics robots, one philosopher and leader of the Future of Humanity Institute at Oxford University believes our demise could come from a much simpler AI; one designed to manufacture paperclips.

Nick Bostrom, famous for the simulation hypothesis as well as his work in AI and AI ethics, proposed a scenario in which an advanced AI is given the simple goal of making as many paperclips as it possibly can. While this may seem an innocuous goal (Bostrom chose this example because of how harmless the aim seems), he explains how this non-specific goal could lead to a good old-fashioned skull-crushing AI apocalypse.

" The AI will realise quickly that it would be much better if there were no humankind because humans might resolve to switch it off , " he explained toHuffPostin 2014 . " Because if man do so , there would be fewer newspaper clips . Also , human body stop a lot of atoms that could be made into newspaper clips . The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans . "

The example is meant to show how a trivial goal could lead to unintended consequences, but Bostrom says it extends to any AI given goals without proper controls on its actions, adding "the point is its actions would pay no heed to human welfare".

This is on the dramatic end of the spectrum, but another possibility proposed by Bostrom is that we go out the way of the horse.

" horse were initially complemented by carriage and plow , which greatly increased the knight ’s productivity . Later , horses were substituted for by car and tractors , " hewrotein his bookSuperintelligence : Paths , Dangers , Strategies . " When buck became obsolete as a source of labor , many were deal off to meatpackers to be processed into dog food , bone repast , leather , and glue . In the United States , there were about 26 million Equus caballus in 1915 . By the early fifties , 2 million remained . "

One prescient thought from Bostrom way back in 2003 was around how AI could go wrong by trying to serve a specific group, say a paperclip manufacturer or any "owner" of the AI, rather than humanity in general.

" The risks in develop superintelligence include the risk of infection of failure to give it the supergoal of philanthropic gift . One way in which this could bechance is that the Jehovah of the superintelligence decide to build up it so that it serves only this blue-ribbon grouping of humans , rather than humanity in oecumenical , " he wrote on hiswebsite . " Another style for it to happen is that a well - imply team of programmers make a big error in design its goal system . "

" This could result , to devolve to the early example , in a superintelligence whose top goal is the manufacture of paperclips , with the consequence that it commence transmute first all of Earth and then increasing portions of blank into paper clip fabrication adroitness . More subtly , it could result in a superintelligence realizing a state of personal matters that we might now label as desirable but which in fact turns out to be a false utopia , in which thing essential to human flourishing have been irreversibly lost . We ask to be deliberate about what we wish for from a superintelligence , because we might get it . "