
Technological singularity

The technological singularity is the hypothetical period when artificial intelligence has progressed to the point of surpassing human intelligence, resulting in radical changes to civilization and human nature [6]. The ongoing acceleration of technology is the implication and inevitable result of what futurist and scientist Ray Kurzweil calls the Law of Accelerating Returns, which describes the acceleration and exponential growth of the products of an evolutionary process. The singularity is the inexorable result of the law of accelerating returns [1].

History

The concept of a singularity was first raised by the mathematician John von Neumann in the 1950s, who was quoted as saying that the "ever …"
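To make the exponential growth described by Kurzweil's Law of Accelerating Returns concrete, here is a minimal sketch; the symbols C(t), C_0, and k are illustrative assumptions introduced for this example and are not part of Kurzweil's own, more elaborate formulation. If a technology's capability C(t) improves at a rate proportional to its current level,

$$\frac{dC}{dt} = k\,C(t), \qquad k > 0,$$

then

$$C(t) = C_0\, e^{kt},$$

so capability doubles over every fixed interval of length $\ln 2 / k$ rather than increasing by a fixed amount per unit of time. Kurzweil himself argues that the growth rate k effectively increases over time, making progress even faster than this simple exponential model suggests.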

With self-improving machine superintelligence acting on our behalf, our biological limits could be transcended [3]. Moreover, with machine superintelligence on our side, we could be vastly more successful at realizing utopia than ever before, potentially eradicating poverty, war, and disease, and helping civilization achieve a more efficient way of managing the Earth's resources [3].

Criticism

Some critics assert that a dependence on machines would leave humans with no practical choice but to accept all of the artificial intelligences' decisions. With society facing ever more technical and complex problems, people will rely on machines to make decisions for them, simply because machine-made decisions bring better results than human-made ones. Reaching that point would mean that human civilization is under the control of these machines. Another issue raised regarding the singularity concerns control over large systems of machines, where a segment of the elite would have the upper hand in access to and use of these technologies. Artificial intelligence could offer "incalculable benefits and risks," such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand" [9]. These highly intelligent machines would likely be allowed to make their own decisions without human oversight, though this implies a risk.
