Dagstuhl-Seminar 22412
Intelligent Security: Is "AI for Cybersecurity" a Blessing or a Curse?
(October 9 – October 14, 2022)
Organizers
- Lejla Batina (Radboud University Nijmegen, NL)
- Annelie Heuser (CNRS - IRISA - Rennes, FR)
- Nele Mentens (Leiden University, NL)
- Stjepan Picek (TU Delft, NL)
- Ahmad-Reza Sadeghi (TU Darmstadt, DE)
Contact
- Andreas Dolzmann (for scientific matters)
- Christina Schwarz (for administrative matters)
In recent years, artificial intelligence (AI) has become an emerging technology for assessing security and privacy. Moreover, AI does not represent "only" one of the options for tackling security problems but is instead a state-of-the-art approach. Besides providing better performance, AI also brings automated solutions that can be faster and easier to deploy and are more resilient to human error. We can only expect that future AI developments will pose even more unique security challenges that must be addressed across algorithms, architectures, and hardware implementations. While there are many success stories of using AI for security, there are also multiple challenges. AI is commonly used in a black-box setting, making the interpretability or explainability of the results difficult. Furthermore, research on AI and cybersecurity commonly looks at the various sub-problems in isolation, mostly relying on best practices in the domain. As a result, we often see techniques being "reinvented", and strong approaches from one application domain being introduced to another only after a long time.
Dagstuhl Seminar 22412, Intelligent Security: Is "AI for Cybersecurity" a Blessing or a Curse?, brought together experts from diverse domains of cybersecurity and artificial intelligence, with the goal of facilitating discussion at different abstraction levels to uncover the links between scaling and the resulting security, with a special emphasis on the hardware perspective. The seminar started with two days of contributed talks by participants. At the end of the second day, every participant suggested topics to be discussed in more detail. From the initial pool of nine topics, we decided to concentrate on four on the third and fourth days of the seminar: 1) the explainability of AI for cybersecurity, 2) AI and implementation attacks, 3) AI and fuzzing, and 4) the security of machine learning.
The first group approached the problem of the explainability of AI for cybersecurity. The discussion mainly revolved around scenarios where deep learning is used as the attack method, but explainability is necessary to understand why the attack worked and, more importantly, how to propose new defense mechanisms that will be resilient against such AI-based attacks. During the discussion, we considered two perspectives: a) understanding the features and b) understanding deep neural networks.
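One way to approach perspective (a), understanding the features, is gradient-based saliency: rank the input features (e.g., samples of a power trace) by how strongly the attack model's output responds to them. The following is a minimal sketch on a synthetic linear model; the weights and the trace are placeholders for illustration, not artifacts from the seminar, and a real analysis would differentiate a trained deep network instead.

```python
import numpy as np

# Hedged sketch: input-gradient saliency for a toy "attack" model.
# The linear weights and the trace are synthetic placeholders.

rng = np.random.default_rng(0)
n_features = 8                 # e.g., samples (points of interest) in a power trace

w = np.zeros(n_features)
w[3] = 2.0                     # pretend training found that sample 3 leaks

def model(x):
    """Toy attack model: sigmoid over a linear score of the trace x."""
    return 1.0 / (1.0 + np.exp(-x @ w))

def saliency(x):
    """Gradient of the output probability w.r.t. the input.
    For p = sigmoid(w . x), dp/dx = p * (1 - p) * w."""
    p = model(x)
    return p * (1.0 - p) * w

x = rng.normal(size=n_features)             # one synthetic trace
ranking = np.argsort(-np.abs(saliency(x)))  # most influential sample first
print("most influential sample:", int(ranking[0]))  # index 3 by construction
```

For a deep network, the same ranking would be obtained via automatic differentiation; the saliency map then points to the trace samples (and, indirectly, the operations) that the attack exploits, which is exactly the information needed to design targeted countermeasures.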
The second group focused on how AI can improve the performance of implementation attacks, more precisely, side-channel analysis and fault injection. Most of the discussion concerned the use of deep learning for side-channel analysis and of evolutionary algorithms for fault injection. However, we also discussed how lessons learned in one domain could be transferred to the other.

The third group worked on the topic of security fuzzing. We discussed how techniques like evolutionary algorithms are used to evolve diverse mutations and mutation schedules. At the same time, machine learning is (for now) somewhat less used, but there are many potential scenarios to explore. For instance, instead of evolutionary algorithms, it should be possible to use reinforcement learning to learn a mutation schedule.

The fourth group discussed the security of machine learning, focusing on backdoor attacks and federated learning settings. While both the attack and defense perspectives were discussed, the group emphasized the need for stronger defenses.
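The reinforcement-learning idea for mutation scheduling can be framed as a multi-armed bandit: each mutation operator is an arm, and the reward is the coverage gained per execution. The sketch below uses an epsilon-greedy policy with hypothetical operator names and a synthetic, deterministic coverage signal; a real fuzzer would feed back actual new-edge counts.

```python
import random

MUTATIONS = ["bitflip", "byte_insert", "havoc", "splice"]

# Synthetic coverage-per-execution signal (hypothetical values): in a real
# fuzzer this would be the observed new-edge count after applying the operator.
MEAN_REWARD = {"bitflip": 0.05, "byte_insert": 0.04, "havoc": 0.30, "splice": 0.06}

class EpsilonGreedyScheduler:
    """Multi-armed bandit: exploit the best mutation so far, explore with prob. eps."""
    def __init__(self, arms, eps=0.1):
        self.eps = eps
        self.counts = {a: 0 for a in arms}    # times each mutation was scheduled
        self.values = {a: 0.0 for a in arms}  # running mean reward per mutation

    def select(self):
        if random.random() < self.eps:
            return random.choice(list(self.counts))   # explore
        return max(self.values, key=self.values.get)  # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        # incremental update of the mean reward
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

random.seed(42)
sched = EpsilonGreedyScheduler(MUTATIONS)
for _ in range(1000):
    m = sched.select()
    sched.update(m, MEAN_REWARD[m])

print("best mutation:", max(sched.values, key=sched.values.get))
```

After enough iterations, the scheduler concentrates on the operator with the highest observed coverage gain, which is the behavior an evolutionary mutation scheduler approximates more indirectly.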
Each group followed a cross-disciplinary setting where the participants exchanged groups based on their interests. We had one group switch per day to allow sufficient time to discuss a topic. At the end of each day, all participants joined a meeting to discuss the findings and tweak the topics for the discussion groups. On the last day of the seminar, all participants worked together on fine-tuning the findings and discussing possible collaborations. The reports of the working groups, gathered in the following sections, constitute the main results from the seminar. We consider them the necessary next step toward understanding the interplay between artificial intelligence and cybersecurity, as well as the interplay among diverse cybersecurity domains using AI. Moreover, we expect that the seminar (and this report) will help better understand the main open problems and how to use techniques from different domains to tackle cybersecurity problems. This will encourage innovative research and help to start joint research projects addressing the issues.
AI has become an emerging technology for assessing security and privacy in recent years. Unfortunately, while there are many success stories of using AI for security, many challenges remain. AI is commonly used in a black-box setting, making the interpretability or explainability of the results difficult. So far, research on AI and security has looked at the various sub-problems in isolation, primarily relying on best practices in the domain.
This Dagstuhl Seminar will cover several topics where AI has proved to be a reliable choice for designing or attacking systems and for detecting or preventing attacks. We are especially interested in the connection between the security and AI domains. Indeed, while security researchers commonly use state-of-the-art results from the AI domain, they also need to adapt those solutions, which in turn yields interesting applications for the AI domain. Conversely, the security domain deals with specific challenges (e.g., noise as a countermeasure) that can provide the AI domain with new insights on how to deal with noise.
The plan is to bring together researchers working in artificial intelligence (machine learning, fuzzy logic, heuristic, and metaheuristic techniques) and security (cryptography, network security, systems security). The seminar will cover the following AI-assisted security mechanisms:
- Implementation attacks and countermeasures
- Machine learning-based attacks on secure systems
- Trustworthy manufacturing and testing of secure devices
- Validation and evaluation methodologies for physical security
- Design and evaluation of security primitives
- Intrusion detection
- IoT Security & Privacy
We hope the seminar will produce several ideas on improving the state of the art in AI for security. Ideally, joint publications and project proposals will result from the seminar. Additionally, we plan to prepare and publish a white paper (a few months after the seminar) on the state of the art in security and AI. The participants will also discuss the topics with industry members to close the gap between academic research and industry needs.
We consider this Dagstuhl Seminar a success if the following challenges are addressed:
- Participants from the different communities collaborate and continue their research with directions resulting from the seminar’s work.
- Future research directions are proposed for each topic, enabling other forms of collaboration.
- Thanks to a careful selection of topics, common knowledge and transferable practices are recognized during the seminar to narrow the gap between these topics.
Participants
- Ileana Buhan (Radboud University Nijmegen, NL)
- Lukasz Chmielewski (Radboud University Nijmegen, NL & Masaryk University - Brno, CZ)
- Alexandra Dmitrienko (Universität Würzburg, DE)
- Elena Dubrova (KTH Royal Institute of Technology - Kista, SE)
- Oguzhan Ersoy (TU Delft, NL)
- Hossein Fereidooni (TU Darmstadt, DE)
- Fatemeh Ganji (Worcester Polytechnic Institute, US)
- Houman Homayoun (University of California, Davis, US)
- Domagoj Jakobovic (University of Zagreb, HR)
- Dirmanto Jap (Nanyang TU - Singapore, SG)
- Florian Kerschbaum (University of Waterloo, CA)
- Marina Krcek (TU Delft, NL)
- Jesus Luna Garcia (Robert Bosch GmbH - Stuttgart, DE)
- Damien Marion (IRISA - Rennes, FR)
- Luca Mariot (Radboud University Nijmegen, NL)
- Nele Mentens (Leiden University, NL)
- Irina Nicolae (Bosch Center for AI - Renningen, DE)
- Stjepan Picek (TU Delft, NL)
- Jeyavijayan Rajendran (Texas A&M University - College Station, US)
- Ahmad-Reza Sadeghi (TU Darmstadt, DE)
- Patrick Schaumont (Worcester Polytechnic Institute, US)
- Matthias Schunter (INTEL ICRI-SC - Darmstadt, DE)
- Mirjana Stojilovic (EPFL - Lausanne, CH)
- Shahin Tajik (Worcester Polytechnic Institute, US)
- Trevor Yap (Nanyang TU - Singapore, SG)
Classification
- Artificial Intelligence
- Cryptography and Security
- Machine Learning
Keywords
- AI
- machine learning
- security
- cryptography
- physical attacks