"Iyad Rahwan" – Difference between revisions

From Wikipedia, the free encyclopedia
[unreviewed revision][unreviewed revision]
Content deleted Content added
m fixed formatting
Citations and clarifications, mainly
Line 11: Line 11:
| field = [[Computational Social Science]], [[Artificial Intelligence]], [[Ethics]], [[Cognitive Science]], [[Game Theory]], [[Crowdsourcing]],
| work_institution = [[MIT]]
}}'''Iyad Rahwan''' is a Professor of Media Arts & Sciences at the [[MIT Media Lab]], where he heads the [https://www.media.mit.edu/groups/scalable-cooperation/overview/ Scalable Cooperation] group. Rahwan's work focuses on questions at the interface between [[Artificial Intelligence]] and society, and has particularly published in the areas of [[computational social science]], [[collective intelligence]], large-scale cooperation, and the social aspects of Artificial Intelligence. His work has appeared in top venues like Science and PNAS, and has been reported widely in the media. <ref>{{Cite web|url=http://www.mit.edu/~irahwan/|title=Rahwan's Official MIT Webpage|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>
}}'''Iyad Rahwan''' is a [[Syrian Australians|Syrian-Australian]] [[scientist]]. He is an associate professor of Media Arts & Sciences at the [[MIT Media Lab]], and the director and principal investigator of its Scalable Cooperation group.<ref>{{Cite web|url=https://www.media.mit.edu/groups/scalable-cooperation/overview/|title=Group Overview ‹ Scalable Cooperation – MIT Media Lab|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref> Rahwan's work lies at the intersection of the [[computer science|computer]] and [[social science]]s, where he has investigated topics in [[computational social science]], [[collective intelligence]], large-scale cooperation, and the social aspects of [[artificial intelligence]].<ref>{{Cite web|url=http://www.tedxcambridge.com/speaker/iyad-rahwan/|title=Iyad Rahwan - TEDxCambridge|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>


== Biography ==
Rahwan was born in [[Aleppo]], [[Syria]]. He earned a PhD in Information Systems from the [[University of Melbourne]] in 2005. As an assistant and later associate professor of Computing and Information Science at the [[MIT]]-partnered [[Masdar Institute of Science and Technology]], Rahwan investigated the possibilities, limits, and challenges of scalable social mobilization in various contexts by analyzing data from the 2009 [[DARPA Network Challenge]]<ref>{{Cite web|url=http://www.livescience.com/28341-social-media-helps-mobilize-society.html|title=How Social Media Mobilizes Society - LiveScience|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref><ref>{{Cite web|url=http://www.pnas.org/content/110/16/6281.abstract|title=A. Rutherford, M. Cebrian, S. Dsouza, E. Moro, A. Pentland, and I. Rahwan (2013). Limits of Social Mobilization. Proceedings of the National Academy of Sciences, vol. 110 no. 16 pp. 6281-6286|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>, the [[DARPA Shredder Challenge 2011]]<ref>{{Cite web|url=http://nautil.us/issue/18/genius/how-crowdsourcing-turned-on-me|title=How Crowdsourcing Turned On Me - Nautilus|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref><ref>{{Cite web|url=https://link.springer.com/article/10.1140/epjds/s13688-014-0013-1|title=N. Stefanovitch, A. Alshamsi, M. Cebrian, I. Rahwan (2014). Error and attack tolerance of collective problem solving: The DARPA Shredder Challenge. EPJ Data Science. vol 3, no 13, pages 1-27|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>, and the 2012 [[US State Department]] [[Tag Challenge]]<ref>{{Cite web|url=http://www.nature.com/news/crowdsourcing-in-manhunts-can-work-1.12867|title=Crowdsourcing in manhunts can work : Nature News & Comment|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref><ref>{{Cite web|url=http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0074628|title=A. Rutherford et al (2013). Targeted social mobilization in a global manhunt. PLOS ONE 8 (9): e74628|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>. In 2015, Rahwan started the Scalable Cooperation group at the [[MIT Media Lab]], where he is the AT&T Career Development Professor and an Associate Professor of Media Arts & Sciences<ref>{{Cite web|url=https://www.media.mit.edu/people/irahwan/overview/|title=Person Overview ‹ Iyad Rahwan – MIT Media Lab|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>, as well as affiliate faculty at the MIT Institute for Data, Systems, and Society<ref>{{Cite web|url=https://idss.mit.edu/staff/iyad-rahwan/|title=Iyad Rahwan – IDSS|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>.
Rahwan earned a B.Sc. from [[United Arab Emirates University|UAE University]] in Computer Science, a Masters in Information Technology from [[Swinburne University of Technology|Swinburne University]] and his Ph.D., in Information Systems in 2005 at [[University of Melbourne]]. After 4 years at the [[Masdar Institute of Science and Technology]], Rahwan began an associate professorship at the [[MIT Media Lab]], where he is the AT&T Career Development Professor and an affiliate faculty at the MIT Institute of Data, Systems and Society. <ref>{{Cite web|url=http://www.mit.edu/~irahwan/|title=Rahwan's Official MIT Webpage|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>


==Society-in-the-Loop==
Rahwan coined the term [https://medium.com/mit-media-lab/society-in-the-loop-54ffd71cd802 Society-in-the-loop] as a version of [[Human-in-the-loop]] systems. <ref>{{Cite web|url=https://joi.ito.com/weblog/2016/06/23/society-in-the-.html|title=Society in the Loop Artificial Intelligence »|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref> Whereas HITL systems embed an individual's judgement into a narrowly defined control system, SITL is more about embedding the judgement of society as a whole in to system. He cites an AI that controls billions of self driving cars (and decides who is worth saving in certain cases), or a news filtering algorithm with the potential to influence the ideology of millions of citizens (that decides what content the users shall see). Rahwan highlights the importance of articulating ethics and social contracts in ways that machines can understand, towards building new governance algorithms. <ref>{{Cite web|url=https://medium.com/mit-media-lab/society-in-the-loop-54ffd71cd802|title=Society-in-the-loop|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>
Rahwan coined the term [https://medium.com/mit-media-lab/society-in-the-loop-54ffd71cd802 Society-in-the-loop] as a conceptual extension of [[Human-in-the-Loop]] systems.<ref>{{Cite web|url=https://joi.ito.com/weblog/2016/06/23/society-in-the-.html|title=Society in the Loop Artificial Intelligence »|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref> Whereas HITL systems embed an individual's judgement into a narrowly defined control system, SITL embeds the judgement of society as a whole into the system. As examples, he cites an AI controlling billions of self-driving cars, which must decide whom to protect in unavoidable accidents, and a news-filtering algorithm that decides what content users see, with the potential to influence the ideology of millions of citizens. Rahwan highlights the importance of articulating ethics and social contracts in ways that machines can understand, as a step towards building new governance algorithms.<ref>{{Cite web|url=https://medium.com/mit-media-lab/society-in-the-loop-54ffd71cd802|title=Society-in-the-loop|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>


== Morality and Machines ==
=== The social dilemma of autonomous vehicles ===
=== Ethics of Autonomous Vehicles ===
Rahwan is one of the first to consider the problem of self autonomous vehicles as a ethical dilemma. His paper [http://science.sciencemag.org/content/352/6293/1573 The social dilemma of autonomous vehicles] found that people approved of [[utilitarian]] autonomous vehicles, and wanted others to purchase these vehicles, they themselves would prefer to ride in an autonomous vehicle that protected its passenger at all costs. Thus the paper concludes the regulation of utilitarian algorithms could paradoxically increase casualties by driving by inadvertently postponing the adoption of a safer technology.<ref>{{Cite web|url=http://science.sciencemag.org/content/352/6293/1573|title=The social dilemma of autonomous vehicles|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>. The paper spurred lots of coverage about the role of ethics in the creation of artificially intelligent driving systems. <ref>{{Cite web|url=https://www.weforum.org/agenda/2016/08/the-ethics-of-self-driving-cars-what-would-you-do/|title=World Forum discuses how self-driving cars will make life or death decisions|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref><ref>{{Cite web|url=https://www.nytimes.com/2016/06/24/technology/should-your-driverless-car-hit-a-pedestrian-to-save-your-life.html|title=The New York Times discuses Should Your Driverless Car Hit a Pedestrian to Save Your Life|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref><ref>{{Cite web|url=https://www.nytimes.com/2016/11/06/opinion/sunday/whose-life-should-your-car-save.html|title=Rahwan's op-ed in the New York Times about whose life your car should save|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref><ref>{{Cite web|url=http://www.tedxcambridge.com/speaker/iyad-rahwan/|title=TedxCambridge: The social dilemma of driverless 
cars|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>
Rahwan was among the first to consider the programming of autonomous vehicles as an ethical dilemma. His 2016 paper, ''The Social Dilemma of Autonomous Vehicles'', showed that although people approve of [[utilitarian]] autonomous vehicles (which would sacrifice their passengers to minimize overall casualties) and want others to purchase such vehicles, they themselves would prefer to ride in vehicles that protect their passengers at all costs, and would not use self-driving vehicles if utilitarian behaviour were imposed on them by law. The paper concludes that regulating for utilitarian algorithms could paradoxically increase casualties by inadvertently postponing the adoption of a safer technology.<ref>{{Cite web|url=http://science.sciencemag.org/content/352/6293/1573|title=J. F. Bonnefon, A. Shariff, I. Rahwan (2016). The Social Dilemma of Autonomous Vehicles. Science. 352(6293):1573-1576.|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref> The paper spurred extensive coverage of the role of ethics in the creation of artificially intelligent driving systems.<ref>{{Cite web|url=https://www.weforum.org/agenda/2016/08/the-ethics-of-self-driving-cars-what-would-you-do/|title=World Economic Forum discusses how self-driving cars will make life or death decisions|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref><ref>{{Cite web|url=https://www.nytimes.com/2016/06/24/technology/should-your-driverless-car-hit-a-pedestrian-to-save-your-life.html|title=Should Your Driverless Car Hit a Pedestrian to Save Your Life - The New York Times|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref><ref>{{Cite web|url=https://www.nytimes.com/2016/11/06/opinion/sunday/whose-life-should-your-car-save.html|title=Whose Life Should Your Car Save? - The New York Times|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref><ref>{{Cite web|url=http://www.tedxcambridge.com/speaker/iyad-rahwan/|title=TedxCambridge: The social dilemma of driverless cars|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref><ref>{{Cite web|url=https://www.washingtonpost.com/news/energy-environment/wp/2016/06/23/save-the-driver-or-save-the-crowd-scientists-wonder-how-driverless-cars-will-choose/?utm_term=.57c8b6488f20|title=Save the driver or save the crowd? Scientists wonder how driverless cars will ‘choose’ - The Washington Post|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref><ref>{{Cite web|url=http://time.com/4378108/driverless-car-study/|title=Driverless Cars Pose Difficult Ethical Question - Time.com|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref><ref>{{Cite web|url=http://www.independent.co.uk/news/science/driverless-cars-autonomous-vehicles-safety-accidents-a7097276.html|title=Driverless car safety revolution could be scuppered by moral dilemma - The Independent|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>


=== Moral Machine ===
[[Moral Machine]] is  an online platform that generates [[Ethical dilemma|moral dilemmas]] and collects information on the decisions that people make between two destructive outcomes. To date, the system has collected 28 million decisions about how autonomous vehicles should prioritize the lives of those around it. <ref>{{Cite web|url=https://cyber.harvard.edu/events/luncheons/2017/04/Ito|title=AI & Society at the Berkman Center|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref> The presented scenarios are often variations of the [[trolley problem]], and the information collected would be used for further research regarding the decisions that [[Artificial intelligence|machine intelligence]] must make in the future.<ref>{{Cite web|url=http://moralmachine.mit.edu|title=Moral Machine|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>
[[Moral Machine]] is an online platform that generates [[ethical dilemma]] scenarios faced by hypothetical autonomous machines, allowing visitors to assess the scenarios and vote on which of two unavoidable harmful outcomes is more morally acceptable. As of April 2017, the system had collected 28 million decisions from over 3 million visitors.<ref>{{Cite web|url=https://cyber.harvard.edu/events/luncheons/2017/04/Ito|title=AI & Society at the Berkman Center|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref> The presented scenarios are often variations of the [[trolley problem]], and the information collected is used for further research on the decisions that [[Artificial intelligence|machine intelligence]] might have to make in the future.<ref>{{Cite web|url=http://moralmachine.mit.edu|title=Moral Machine|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>


===Cooperating with machines ===
===Cooperating with Machines ===
Rahwan helped develop an algorithm, [https://arxiv.org/abs/1703.06207 S#], that successfully learned to cooperate with its partner faster and more effectively than a human in games of chicken, [[Prisoner's Dilemma]], and alternator.<ref>{{Cite web|url=http://www.sciencemag.org/news/2017/03/computers-learn-cooperate-better-humans|title=Computers learn to cooperate better than humans|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>
Rahwan's study of human–machine cooperation showed that providing a medium of communication can enable an algorithm to learn to cooperate with its human partner faster and more effectively than human pairs cooperate with each other in repeated strategic games.<ref>{{Cite web|url=https://arxiv.org/abs/1703.06207|title=[arXiv:1703.06207] Cooperating with Machines|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref><ref>{{Cite web|url=https://www.technologyreview.com/s/603995/ai-can-beat-us-at-pokernow-lets-see-if-it-can-work-with-us/|title=AI Can Beat Us at Poker—Now Let’s See If It Can Work with Us - MIT Technology Review|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>
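The repeated strategic games studied in this line of work, such as the [[Prisoner's Dilemma]], can be illustrated with a minimal sketch. The payoff values below are the textbook convention, not those from the paper, and the naive tit-for-tat strategy merely stands in for the far more sophisticated learning algorithm used in the study:

```python
# Minimal iterated Prisoner's Dilemma. Illustrative only: payoffs are the
# textbook convention, and tit-for-tat is NOT the algorithm from the paper.

PAYOFFS = {  # (my_move, their_move) -> (my_score, their_score)
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, partner defects
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(history):
    """Cooperate first, then mirror the partner's previous move."""
    return "C" if not history else history[-1][1]

def play(rounds, strategy_a, strategy_b):
    """Play two strategies against each other and return their total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append((move_a, move_b))  # each side sees (own, partner)
        history_b.append((move_b, move_a))
    return score_a, score_b

# Two tit-for-tat players lock into mutual cooperation.
print(play(10, tit_for_tat, tit_for_tat))  # (30, 30)
```

In this framing, "learning to cooperate" means converging on the mutual-cooperation outcome rather than the individually tempting defection.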


== Other projects ==
=== The Tag Challenge ===
Rahwan led the winning team in the US State Department's [[Tag Challenge]], using social media to locate individuals in remote cities within 12 hours using only their photographic portrait. <ref>{{Cite web|url=http://www.mit.edu/~irahwan/|title=Rahwan's Official MIT Webpage|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref> The winning strategy, based on the [[DARPA Network Challenge]] winning strategy, was as follows:
Rahwan led the winning team in the 2012 [[US State Department]] [[Tag Challenge]], using crowdsourcing and a referral-incentivizing reward mechanism (similar to the one used in the 2009 [[DARPA Network Challenge]]) to locate individuals in European and American cities within 12 hours each, given only their photographic portraits.<ref>{{Cite web|url=https://www.scientificamerican.com/article/crowdsourcing-in-manhunts-can-work/|title=Crowdsourcing in Manhunts Can Work - Scientific American|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>

''You receive $500 if you upload an image of a suspect that is accepted by the challenge organizers. If a friend you invited using your individualized referral link uploads an acceptable image of a suspect, YOU also get $100. Furthermore, recruiters of the first 2000 recruits who signed up by referral get $1 for each recruit they refer to sign up with us (using the individualized referral link).''<ref>{{Cite web|url=http://www.crowdscanner.net/|title=CrowdScanner|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>
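The quoted incentive scheme can be sketched as a simple payout computation. All function and variable names here are hypothetical, not from the CrowdScanner implementation; only the dollar amounts come from the quoted rules:

```python
# Illustrative sketch of the quoted Tag Challenge reward rules.
# Names are hypothetical; only the dollar amounts follow the quoted scheme.

UPLOAD_REWARD = 500   # uploading an accepted image of a suspect
REFERRER_BONUS = 100  # having invited the successful uploader
RECRUIT_REWARD = 1    # per recruit, among the first 2000 referral sign-ups

def payouts(accepted_uploads, referrer_of, signup_order):
    """Compute each participant's total reward.

    accepted_uploads: ids of participants whose suspect images were accepted
    referrer_of: dict mapping participant id -> referrer id (or missing/None)
    signup_order: ids of referral sign-ups, in sign-up order
    """
    totals = {}
    for person in accepted_uploads:
        totals[person] = totals.get(person, 0) + UPLOAD_REWARD
        ref = referrer_of.get(person)
        if ref is not None:
            totals[ref] = totals.get(ref, 0) + REFERRER_BONUS
    # $1 per recruit for recruiters of the first 2000 referral sign-ups
    for recruit in signup_order[:2000]:
        ref = referrer_of.get(recruit)
        if ref is not None:
            totals[ref] = totals.get(ref, 0) + RECRUIT_REWARD
    return totals

# Alice recruits Bob; Bob uploads an accepted image.
print(payouts(["bob"], {"bob": "alice"}, ["bob"]))
# {'bob': 500, 'alice': 101}
```

The recursive element is that rewarding referrers, not just finders, makes it rational for participants to spread the search rather than hoard it.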

=== The Nightmare Machine ===
The Nightmare Machine<ref>{{Cite web|url=http://nightmare.mit.edu|title=The Nightmare Machine|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref> questions whether machines can learn to scare humans. The platform presents computer generated scary imagery powered by deep learning algorithms. <ref>{{Cite web|url=http://www.npr.org/sections/thetwo-way/2016/10/25/499334210/researchers-build-nightmare-machine|title=Researchers Build 'Nightmare Machine'|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>
The Nightmare Machine<ref>{{Cite web|url=http://nightmare.mit.edu|title=The Nightmare Machine|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>, developed under Rahwan's guidance, uses deep learning algorithms to create computer-generated imagery and, by learning from human feedback, to approximate what humans find "scary".<ref>{{Cite web|url=http://www.npr.org/sections/thetwo-way/2016/10/25/499334210/researchers-build-nightmare-machine|title=Researchers Build 'Nightmare Machine' : The Two-Way : NPR|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}</ref>


== References ==
Line 45: Line 42:
[[Category:Living people]]
[[Category:People from Aleppo]]
[[Category:Syrian engineers]]
[[Category:Australian people of Syrian descent|*]]
[[Category:Syrian diaspora|Australia]]

Revision as of 17 April 2017, 10:39
