
The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (February 2018)



Transcription of The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (February 2018)

Future of Humanity Institute, University of Oxford | Centre for the Study of Existential Risk, University of Cambridge | Center for a New American Security | Electronic Frontier Foundation | OpenAI

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

February 2018

Miles Brundage (1), Shahar Avin (2), Jack Clark (3), Helen Toner (4), Peter Eckersley (5), Ben Garfinkel (6), Allan Dafoe (7), Paul Scharre (8), Thomas Zeitzoff (9), Bobby Filar (10), Hyrum Anderson (11), Heather Roff (12), Gregory C. Allen (13), Jacob Steinhardt (14), Carrick Flynn (15), Seán Ó hÉigeartaigh (16), Simon Beard (17), Haydn Belfield (18), Sebastian Farquhar (19), Clare Lyle (20), Rebecca Crootof (21), Owain Evans (22), Michael Page (23), Joanna Bryson (24), Roman Yampolskiy (25), Dario Amodei (26)

Authors are listed in order of contribution.

1 Corresponding author, miles.brundage@philosophy.ox.ac.uk; Future of Humanity Institute, University of Oxford; Arizona State University
2 Corresponding author, sa478@cam.ac.uk; Centre for the Study of Existential Risk, University of Cambridge
3 OpenAI
4 Open Philanthropy Project
5 Electronic Frontier Foundation
6 Future of Humanity Institute, University of Oxford
7 Future of Humanity Institute, University of Oxford; Yale University
8 Center for a New American Security
9 American University
10 Endgame
11 Endgame
12 University of Oxford / Arizona State University / New America Foundation
13 Center for a New American Security
14 Stanford University
15 Future of Humanity Institute, University of Oxford
16 Centre for the Study of Existential Risk and Centre for the Future of Intelligence, University of Cambridge
17 Centre for the Study of Existential Risk, University of Cambridge
18 Centre for the Study of Existential Risk, University of Cambridge
19 Future of Humanity Institute, University of Oxford
20 Future of Humanity Institute, University of Oxford
21 Information Society Project, Yale University
22 Future of Humanity Institute, University of Oxford
23 OpenAI
24 University of Bath
25 University of Louisville
26 OpenAI

Design Direction by Sankalp Bhatnagar and Talia Cotton.

Executive Summary

Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis.

Countless more such applications are being developed and can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed.

In response to the changing threat landscape we make four high-level recommendations:

1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.
3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.
4. Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.

As AI capabilities become more powerful and widespread, we expect the growing use of AI systems to lead to the following changes in the landscape of threats:

Expansion of existing threats. The costs of attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labor, intelligence, and expertise. A natural effect would be to expand the set of actors who can carry out particular attacks, the rate at which they can carry out these attacks, and the set of potential targets.

Introduction of new threats. New attacks may arise through the use of AI systems to complete tasks that would be otherwise impractical for humans. In addition, malicious actors may exploit the vulnerabilities of AI systems deployed by defenders.

Change to the typical character of threats. We believe there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems.

We structure our analysis by separately considering three security domains, and illustrate possible changes to threats within these domains through representative examples:

Digital security. The use of AI to automate tasks involved in carrying out cyberattacks will alleviate the existing tradeoff between the scale and efficacy of attacks. This may expand the threat associated with labor-intensive cyberattacks (such as spear phishing). We also expect novel attacks that exploit human vulnerabilities (e.g. through the use of speech synthesis for impersonation), existing software vulnerabilities (e.g. through automated hacking), or the vulnerabilities of AI systems (e.g. through adversarial examples and data poisoning).
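To make the adversarial-example vulnerability mentioned above concrete, the Python sketch below implements the standard fast gradient sign method (FGSM): it nudges each input pixel within a small budget in whichever direction raises the classifier's loss. This is a minimal illustration rather than a technique prescribed by the report; the stand-in linear model and the eps budget are assumptions chosen only to keep the example self-contained.

    # Sketch: crafting an adversarial example with the fast gradient sign
    # method (FGSM). Illustrative only; the report names adversarial
    # examples as an attack class but does not prescribe this recipe.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Stand-in classifier; in a real attack this would be the victim model.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval()

    def fgsm(image: torch.Tensor, label: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
        """Perturb `image` within an L-infinity budget `eps` to raise the loss."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step each pixel eps in the direction that increases the loss,
        # then clamp back to the valid pixel range.
        adv = image + eps * image.grad.sign()
        return adv.clamp(0.0, 1.0).detach()

    # Usage: a tiny perturbation can flip the model's prediction.
    x = torch.rand(1, 1, 28, 28)
    y = model(x).argmax(dim=1)
    x_adv = fgsm(x, y)
    print(model(x).argmax(dim=1).item(), model(x_adv).argmax(dim=1).item())

The key point for the report's argument is that the perturbation is bounded and often imperceptible, yet is computed automatically from the model's own gradients, which is what makes such attacks scalable.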

Physical security. The use of AI to automate tasks involved in carrying out attacks with drones and other physical systems (e.g. through the deployment of autonomous weapons systems) may expand the threats associated with these attacks. We also expect novel attacks that subvert cyber-physical systems (e.g. causing autonomous vehicles to crash) or involve physical systems that it would be infeasible to direct remotely (e.g. a swarm of thousands of micro-drones).

Political security. The use of AI to automate tasks involved in surveillance (e.g. analysing mass-collected data), persuasion (e.g. creating targeted propaganda), and deception (e.g. manipulating videos) may expand threats associated with privacy invasion and social manipulation. We also expect novel attacks that take advantage of an improved capacity to analyse human behaviors, moods, and beliefs on the basis of available data. These concerns are most significant in the context of authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates.

In addition to the high-level recommendations listed above, we also propose the exploration of several open questions and potential interventions within four priority research areas:

Learning from and with the cybersecurity community. At the intersection of cybersecurity and AI attacks, we highlight the need to explore and potentially implement red teaming, formal verification, responsible disclosure of AI vulnerabilities, security tools, and secure hardware.

Exploring different openness models. As the dual-use nature of AI and ML becomes apparent, we highlight the need to reimagine norms and institutions around the openness of research, starting with pre-publication risk assessment in technical areas of special concern, central access licensing models, sharing regimes that favor safety and security, and other lessons from other dual-use technologies.

Promoting a culture of responsibility. AI researchers and the organisations that employ them are in a unique position to shape the security landscape of the AI-enabled world. We highlight the importance of education, ethical statements and standards, framings, norms, and expectations.

Developing technological and policy solutions. In addition to the above, we survey a range of promising technologies, as well as policy interventions, that could help build a safer future with AI. High-level areas for further research include privacy protection, coordinated use of AI for public-good security, monitoring of AI-relevant resources, and other legislative and regulatory responses.

The proposed interventions require attention and action not just from AI researchers and companies but also from legislators, civil servants, regulators, security researchers and educators. The challenge is daunting and the stakes are high.

Contents

Executive Summary
01 Introduction
   Scope
   Related Literature
02 General Framework for AI and Security Threats
   AI Capabilities
   Security-Relevant Properties of AI
   General Implications

