Dagstuhl Perspectives Workshop 12371
Machine Learning Methods for Computer Security
(Sep 09 – Sep 14, 2012)
Organizers
- Anthony D. Joseph (University of California - Berkeley, US)
- Pavel Laskov (Universität Tübingen, DE)
- Fabio Roli (University of Cagliari, IT)
- Doug Tygar (University of California - Berkeley, US)
Coordinator
- Blaine Nelson (Universität Tübingen, DE)
Contact
- Susanne Bach-Bernhard (for administrative matters)
Publications
- Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371). Anthony D. Joseph, Pavel Laskov, Fabio Roli, J. Doug Tygar, and Blaine Nelson. In Dagstuhl Reports, Volume 2, Issue 9, pp. 109-130, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2013)
- Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371). Anthony D. Joseph, Pavel Laskov, Fabio Roli, J. Doug Tygar, and Blaine Nelson. In Dagstuhl Manifestos, Volume 3, Issue 1, pp. 1-30, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2013)
Press/News
- Intelligenter Virenschutz für Computer, by Peter Welchering, SWR2, 01.12.2012 (in German)
- Mit jeder Cyberattacke wird der Computer schlauer, by Peter Welchering, FAZ.net, 23.11.2012 (in German)
- Der Computer schlägt zurück, by Peter Welchering, Deutschlandfunk "Forschung aktuell", 15.09.2012 (in German)
- "Selbstlernende Software", Simone Mir Haschemi in conversation with Pavel Laskov, SR2-Kultur, 12.09.2012 (in German)
- Informatik-Experten diskutieren über wehrhafte Computersysteme, Elektronik Praxis, 05.09.2012 (in German)
- Wie Computer lernen, sich selbstständig gegen Hackerangriffe zu verteidigen; press release (in German)
Arising organically from a variety of independent research projects in both computer security and machine learning, machine learning methods for computer security are emerging as a major research direction that offers new challenges to both communities. Learning approaches are particularly attractive for security applications that must counter sophisticated, evolving adversaries, because they can cope with large-scale data tasks that are too complex for hand-crafted solutions or that must adapt dynamically. In adversarial settings, however, these strengths can be subverted by malicious manipulation of the learner's environment, exposing applications that use learning techniques to a new type of security vulnerability in which an adversary adapts to counter learning-based methods. Unlike most application domains, computer security thus presents a unique data domain whose adversarial nature must be considered carefully to provide adequate learning-based solutions: a challenge requiring novel learning methods as well as domain-specific application design and analysis. The Perspectives Workshop "Machine Learning Methods for Computer Security" brought together prominent researchers from the computer security and machine learning communities to further the state of the art in this fusion research, discuss open problems, foster new research directions, and promote collaboration between the two communities.
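As a hedged illustration of this vulnerability (a minimal sketch, not an example taken from the workshop report), consider how an adversary who controls part of the training stream can subvert a simple centroid anomaly detector with an online update. Every injected point lies inside the current detection radius, so the detector accepts it as normal, yet each retraining step drags the learned profile toward the attack. The detector, its parameters, and all data below are hypothetical choices.

```python
import numpy as np

RADIUS = 3.0  # detection radius around the learned centroid (assumption)
ETA = 0.05    # learning rate of the detector's online mean update (assumption)

def is_anomalous(x, centroid):
    """Flag x as an attack if it lies outside the learned radius."""
    return np.linalg.norm(x - centroid) > RADIUS

centroid = np.zeros(2)           # profile learned from clean "normal" traffic
attack = np.array([8.0, 8.0])    # point the adversary wants accepted as normal
print(is_anomalous(attack, centroid))  # True: the attack is initially detected

# Poisoning loop: every injected point sits just inside the current radius,
# so it passes as normal; each online update creeps toward the attack point.
for _ in range(200):
    direction = attack - centroid
    dist = np.linalg.norm(direction)
    poison = centroid + min(RADIUS - 0.1, dist) * direction / dist
    assert not is_anomalous(poison, centroid)       # accepted by the detector
    centroid = (1 - ETA) * centroid + ETA * poison  # online retraining step

print(is_anomalous(attack, centroid))  # False: the attack now evades detection
```

The attack succeeds precisely because each poisoned point is individually innocuous; only the cumulative drift of the learned profile is malicious, which is why such manipulation is hard to detect point by point.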
This workshop focused on three main topics: the role of learning in computer security applications, the paradigm of secure learning, and future applications of secure learning. The first group discussed the current use of learning approaches by security practitioners. The second group focused on current approaches and challenges for learning in security-sensitive adversarial domains. Finally, the third group sought to identify future application domains that would benefit from secure learning technologies.
Within this emerging field, several recurrent themes arose throughout the workshop. A major concern was a general unease with machine learning and a reluctance to deploy it within security applications; to address this, participants identified the need for learning methods to provide better transparency, interpretability, and trust. Many attendees also raised the question of how human operators could be incorporated into the learning process to guide it, interpret its results, and prevent unintended consequences, further reinforcing the need for transparency and interpretability of these methods. On the learning side, researchers discussed how an adversary should be properly incorporated into a learning framework and how algorithms can be designed in a game-theoretic manner to provide security guarantees, as sketched below. Finally, participants identified the need for a proper characterization of security objectives for learning and for benchmarks for assessing an algorithm's security.
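To make the game-theoretic idea concrete, the sketch below trains a linear classifier against the best response of an evasion adversary who may shift each feature by at most EPS (an assumed L-infinity threat model, not a method prescribed by the workshop). For this model the adversary's inner maximization has a closed form, so the worst-case hinge loss is max(0, 1 - y(w·x + b) + EPS·||w||₁), and minimizing it by stochastic subgradient descent yields an adversary-aware classifier. All data and parameters are made up for illustration.

```python
import numpy as np

EPS = 0.3        # adversary's per-feature (L-infinity) budget -- an assumption
LR, STEPS = 0.05, 2000

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (100, 2)),    # benign class    (y = -1)
               rng.normal(+2, 1, (100, 2))])   # malicious class (y = +1)
y = np.array([-1] * 100 + [+1] * 100)

w, b = np.zeros(2), 0.0
for _ in range(STEPS):
    i = rng.integers(len(y))
    # Worst-case margin after the adversary's best response within the budget:
    margin = y[i] * (w @ X[i] + b) - EPS * np.abs(w).sum()
    if margin < 1:  # subgradient step on the robust hinge loss
        w += LR * (y[i] * X[i] - EPS * np.sign(w))
        b += LR * y[i]

# Evaluate against worst-case evasion: correct iff the robust margin is positive.
worst_margins = y * (X @ w + b) - EPS * np.abs(w).sum()
print(f"accuracy under worst-case evasion: {np.mean(worst_margins > 0):.2f}")
```

The closed-form inner maximization is a special property of linear models under an L-infinity budget; richer models or threat models require solving the adversary's problem explicitly, which is one of the open challenges the participants discussed.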
Participants
- Battista Biggio (University of Cagliari, IT) [dblp]
- Christian Bockermann (TU Dortmund, DE)
- Michael Brückner (SoundCloud Ltd., DE)
- Alvaro Cárdenas Mora (University of Texas at Dallas, US) [dblp]
- Christos Dimitrakakis (EPFL - Lausanne, CH) [dblp]
- Felix Freiling (Universität Erlangen-Nürnberg, DE) [dblp]
- Giorgio Fumera (University of Cagliari, IT) [dblp]
- Giorgio Giacinto (University of Cagliari, IT)
- Rachel Greenstadt (Drexel University - Philadelphia, US) [dblp]
- Anthony D. Joseph (University of California - Berkeley, US) [dblp]
- Robert Krawczyk (BSI - Bonn, DE)
- Pavel Laskov (Universität Tübingen, DE) [dblp]
- Richard P. Lippmann (MIT Lincoln Laboratory - Lexington, US)
- Daniel Lowd (University of Oregon - Eugene, US) [dblp]
- Aikaterini Mitrokotsa (EPFL - Lausanne, CH) [dblp]
- Sasa Mrdovic (University of Sarajevo, BA)
- Blaine Nelson (Universität Tübingen, DE)
- Patrick Pak Kei Chan (South China University of Technology, CN)
- Massimiliano Raciti (Linköping University, SE)
- Nathan Ratliff (Google - Pittsburgh, US)
- Konrad Rieck (Universität Göttingen, DE) [dblp]
- Fabio Roli (University of Cagliari, IT) [dblp]
- Benjamin I. P. Rubinstein (Microsoft Corp. - Mountain View, US) [dblp]
- Tobias Scheffer (Universität Potsdam, DE) [dblp]
- Galina Schwartz (University of California - Berkeley, US) [dblp]
- Nedim Srndic (Universität Tübingen, DE) [dblp]
- Radu State (University of Luxembourg, LU) [dblp]
- Doug Tygar (University of California - Berkeley, US) [dblp]
- Viviane Zwanger (Universität Erlangen-Nürnberg, DE)
Classification
- Artificial Intelligence / Robotics
- Security / Cryptography
Keywords
- Adversarial Learning
- Computer Security
- Robust Statistical Learning
- Online Learning with Experts
- Game Theory
- Learning Theory