Superintelligence: Paths, Dangers, Strategies


Superintelligence: Paths, Dangers, Strategies is a 2014 book by the Swedish philosopher Nick Bostrom of the University of Oxford. It argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant life form on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists,[1] and the outcome could be an existential catastrophe for humans.[2]

Bostrom's book has been translated into many languages and is available as an audiobook.[5][6]

Synopsis

It is unknown whether human-level artificial intelligence will arrive in a matter of years, later this century, or not until future centuries. Regardless of the initial timescale, once human-level machine intelligence is developed, a "superintelligent" system that "greatly exceeds the cognitive performance of humans in virtually all domains of interest" would follow surprisingly quickly, possibly even instantaneously. Such a superintelligence would be difficult to control or restrain.

While the ultimate goals of superintelligences can vary greatly, a functional superintelligence will spontaneously generate, as natural subgoals, "instrumental goals" such as self-preservation and goal-content integrity, cognitive enhancement, and resource acquisition. For example, an agent whose sole final goal is to solve the Riemann hypothesis (a famous unsolved mathematical conjecture) could create, and act upon, a subgoal of transforming the entire Earth into some form of computronium (hypothetical "programmable matter") to assist in the calculation. The superintelligence would proactively resist any outside attempts to turn it off or otherwise prevent its subgoal completion. To prevent such an existential catastrophe, it might be necessary to solve the "AI control problem" for the first superintelligence. The solution might involve instilling the superintelligence with goals that are compatible with human survival and well-being. Solving the control problem is surprisingly difficult because most goals, when translated into machine-implementable code, lead to unforeseen and undesirable consequences.

Reception

The book ranked #17 on the New York Times list of best-selling science books for August 2014.[7] In the same month, business magnate Elon Musk made headlines by agreeing with the book that artificial intelligence is potentially more dangerous than nuclear weapons.[8][9][10] Bostrom's work on superintelligence has also influenced Bill Gates's concern for the existential risks facing humanity over the coming century.[11][12] In a March 2015 interview with Baidu's CEO, Robin Li, Gates said he would "highly recommend" Superintelligence.[13]

The science editor of the Financial Times found that Bostrom’s writing "sometimes veers into opaque language that betrays his background as a philosophy professor" but convincingly demonstrates that the risk from superintelligence is large enough that society should start thinking now about ways to endow future machine intelligence with positive values.[1] A review in The Guardian pointed out that "even the most sophisticated machines created so far are intelligent in only a limited sense" and that "expectations that AI would soon overtake human intelligence were first dashed in the 1960s", but finds common ground with Bostrom in advising that "one would be ill-advised to dismiss the possibility altogether".[2]

Some of Bostrom's colleagues suggest that nuclear war presents a greater threat to humanity than superintelligence, as does the future prospect of the weaponisation of nanotechnology and biotechnology.[14] The Economist stated that "Bostrom is forced to spend much of the book discussing speculations built upon plausible conjecture... but the book is nonetheless valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect of actually doing so seems remote."[15] Ronald Bailey wrote in the libertarian Reason that Bostrom makes a strong case that solving the AI control problem is the "essential task of our age".[16] According to Tom Chivers of The Daily Telegraph, the book is difficult to read, but nonetheless rewarding.[17]

References




  1. Financial Times review, July 2014.
  2. Caspar Henderson: Superintelligence by Nick Bostrom and A Rough Ride to the Future by James Lovelock – review. In: The Guardian, 17 July 2014. Retrieved 11 October 2015.
  5. Superintelligent Swede snapped up by OUP.
  6. Superintelligence Audiobook - Nick Bostrom - Audible.com. In: Audible.com.
  7. Best Selling Science Books. In: The New York Times, 8 September 2014. Retrieved 9 November 2014.
  8. Artificial intelligence 'may wipe out the human race'. In: The Times.
  9. Elon Musk tweets Artificial Intelligence may be "more dangerous than nukes". 4 August 2014.
  10. The New York Times Blog. The New York Times. Retrieved 4 March 2015.
  11. Forbes. Retrieved 19 February 2015.
  12. The Fiscal Times. Retrieved 19 February 2015.
  13. Baidu CEO Robin Li interviews Bill Gates and Elon Musk at the Boao Forum, 29 March 2015. YouTube. Retrieved 8 April 2015.
  14. Guardian review, July 2014.
  15. Clever cogs. In: The Economist, 9 August 2014. "It may seem an esoteric, even slightly crazy, subject. And much of the book's language is technical and abstract (readers must contend with ideas such as 'goal-content integrity' and 'indirect normativity'). Because nobody knows how such an AI might be built, Mr Bostrom is forced to spend much of the book discussing speculations built upon plausible conjecture. He is honest enough to confront the problem head-on, admitting at the start that 'many of the points made in this book are probably wrong.' But the book is nonetheless valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect of actually doing so seems remote. Trying to do some of that thinking in advance can only be a good thing." Syndicated at Business Insider.
  16. Ronald Bailey: Will Superintelligent Machines Destroy Humanity? In: Reason, 12 September 2014. Retrieved 16 September 2014.
  17. Tom Chivers: Superintelligence by Nick Bostrom, review: 'a hard read'. In: The Daily Telegraph, 10 August 2014. Retrieved 16 August 2014. "If you're looking for a friendly, accessible introduction to AI issues, then this is probably not the place to start. But if you're interested in this topic, and you're prepared to do a bit of mental spadework, it is rewarding."