The year was 1965 and two senior scientists Claude Shannon and Nathan Rochester of IBM were about to propose an idea that could eventually lead to the downfall of society as we know it. They proposed that “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.” This assertion fundamentally changed the technological and innovative landscape as they knew it. Since 1956, AI has made fundamentally large bounds to the extent that its advancement has become quite worrying to some professionals in the field. These concerns have since raised the ultimate question of whether the development of artificial intelligence can be controlled, and if not, how can society prevent it from developing into something far more malicious than was ever intended.
When observing the development of artificial intelligence there are essentially two plausible outcomes. The first outcome is the development of AI coming to a halt. There are a few potential causes of such an outcome: mainly nuclear war, a global pandemic, or various other catastrophes. Something would have to destroy civilization as we know it. The second outcome, however, is that we keep improving our technology, in turn improving our artificial intelligence to a level at which society has never seen nor expected. The world is bound to eventually create machines that are smarter than humans, and once machines have reached this apex they will gain the ability to improve themselves. This is where the threat often portrayed in science fiction stems from. This scenario is what British mathematician IJ Good called an “intelligence explosion.”
The intelligence explosion centers around the possibility that humanity will inevitably build a general artificial intelligence. The artificial intelligence would be able to recursively self-improve, resulting in the second stage of super intelligence, and ultimately leading to a superintelligence that would cause a rapid and uncontrollable growth in technology that would change the face of civilization as we know it. Furthermore, specialist Vernor Vinge stated in his 1993 essay “The Coming Technological Singularity” that “this would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.” Yet, this is not just the opinion of fringe researchers; a general poll conducted in 2013 suggested that the median date, expected among experts, for the creation of a general artificial intelligence is somewhere near the range of 2040 to 2050.
There are however still opponents to this theory, as another prominent artificial intelligence Robin Hanson stated that “once one has exhausted the ‘low-hanging fruit’ of easy methods for increasing intelligence, further improvements will become increasingly difficult to find.” Some believe that artificial intelligence has already reached or will reach a plateau making further development exponentially harder as time progresses. However, development time may not provide as much of a buffer as Hanson suggests, as the speed of silicon chips is already fast approaching the speed of biological neurotransmitters, meaning that something .01% the size of the brain would be able to process information on the same scope and scale.
Now, the mere existence of what could one day be a super artificial intelligence is not innately evil, but the larger potential implications could be devastating. Best said by Eliezer Yudkowsky, “the AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” Yudkowsky exemplified this very point in his popular thought experiment widely known as the paperclip maximizer.
The thought experiment follows the premise that if you task a super artificial intelligence with a job, regardless of the said job, the results will be the same. The artificial intelligence will attempt to realize the most efficient way to achieve its goal. In the case of Yudkowsky’s thought experiment of manufacturing paper clips, maximization is achieved through the utilization of all of the world resources. The AI has one simple goal — maximize the number of paperclips it can produce; human life, learning, morality, and so on are not part of the equation. In this instance, artificial intelligence is simply an optimization process — a goal-seeker, a utility-function-maximizer. Its values can be completely alien to ours. If its utility function is to maximize paperclips, then it will do exactly that.
The issue of machines being corrupted by pre-programmed goals is already beginning to affect us, as was the case with Microsoft artificial inelegant twitter bot Tay. Tay was initially conditioned to read and observe Twitter feeds and respond to messages accordingly, just as any other human would. Tay was supposed to become increasingly smarter as it interacted with humans; however, it experienced a fatal flaw. Tay gathered its intelligence from the interactions with people — interactions that were more often than not very negative and offensive in nature. Within 24 hours the bot had become an “evil, Hitler-loving, Holocaust-denying, ‘Bush did 9/11’-proclaiming chatterbox” all because there was an unforeseen flaw in its programmed objective.
Still, the notion of super-intelligent artificial intelligence may still seem a bit far-fetched. However, the fact remains that the only thing preventing us from reaching a singularity at this point in AI development is any of the following three assumptions proving to be false.
The first assumption is that the intelligence can be equated with physical information processing. Narrow intelligence, simple pre-programmed tasks, have already been established in our artificial intelligence today, and these tasks already perform at a level far beyond what is currently capable of humans. Considering this, it can be assumed that it is only a mere matter of time before a broader level of intelligence has the flexibility to think across multiple domains.
The second assumption is that technology will continue to advance. This assumption is somewhat more obvious than the others, seeing as how society has countless diseases it would like to see cured, global economies it would like to see balanced, as well as improvements to our climate science. Society has no reason to halt its technological advancement; it will simply continue until the conditions for the singularity are met, and technology experiences an unprecedented amount of runaway growth.
The third and final assumption is that we are not currently at the peak of possible intelligence. Instead, the spectrum of intelligence most likely extends far beyond our current expectations, as it historically always has. AI will utilize this broad spectrum of intelligence for even further technological growth until it maximizes processing power. A machine with the aforementioned amount of processing power would surely be the optimal labor-saving device. It would be able to fundamentally design any machine for any functional task it needed to carry out, essentially ending the need for human labor as we know it, and forever changing the roles people play in society.
It may not seem that there is a pressing need for a solution now, but regardless it is better that we start considering this issue while AI is still in its infancy to prevent it from developing into something far more malicious than was ever intended. Recently, a group of robotics and AI researchers, joined by public intellectuals and activists, signed an open letter presented at the 2015 International Conference on Artificial Intelligence, calling for the United Nations to ban the further development of weaponized AI that could operate “beyond meaningful human control” as well as implement further restrictions on the tasks allocable to AI. Though the letter is a good start, further action regarding the definition of artificial intelligence must be agreed upon internationally before any real policy measures can be effectively implemented. Until then, however, the development of AI needs to be human-friendly. Humanity should be an end, not a means. In the words of Immanuel Kant “We should make AI’s stupidity work for human intelligence. Millions of jobs will be disrupted, eliminated and created. The benefits of this transformation should be shared by all, and the costs borne by society because never before have so many people undergone such a radical and fast transformation.”
Categories: Tech