

Dagstuhl Seminar 14491

Socio-Technical Security Metrics

(November 30 – December 5, 2014)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/14491

Organizers
  • Dieter Gollmann (TU Hamburg-Harburg, DE)
  • Cormac Herley (Microsoft Corporation - Redmond, US)
  • Wolter Pieters (TU Delft, NL & University of Twente, NL)
  • Martina Angela Sasse (University College London, GB)

Coordinator
  • Vincent Koenig (University of Luxembourg, LU)



Motivation

Safety metrics inform many decisions, from the height of new dikes to the design of nuclear plants. We can state, for example, that dikes should be high enough to guarantee that a particular area will flood at most once every 1000 years. Even considering the limitations of such numbers, they are useful in guiding policy.

Metrics for the security of information systems have not reached the same maturity level. This is partly due to the nature of security risk, in which an adaptive attacker rather than nature causes the threat events. Moreover, whereas the human factor may complicate safety and security procedures alike, in security this "weakest link" may be actively exploited by an attacker, such as in phishing or social engineering. In order to measure security, one therefore needs to compare online hacking against such social manipulations, since the attacker may simply take the easiest path. In addition, countermeasures may impact usability and productivity, and lead to workarounds rather than more secure systems. Therefore, defining information security metrics requires close cooperation between different fields of science and practice.

The Dagstuhl Seminar on socio-technical security metrics brings together computer scientists, behavioural scientists, economists, risk managers and consultants, in search of suitable metrics that allow us to estimate information security risk in a socio-technical setting, as well as the costs and effectiveness of countermeasures. In particular, we study risk metrics in the context of recent developments, where information systems move to the cloud and access moves to personal devices such as smartphones.

Activities in this seminar include:

  • Plenary sessions on defining the terminology / conceptual framework, and discussing suitable metrics;
  • Parallel (break-out) sessions on detailing the suggested metrics, including:
    • Vulnerability to multi-step attacks;
    • Attacker model parameters;
    • Leveraging existing data;
    • Effectiveness of countermeasures;
    • Total cost of ownership of countermeasures.
  • Case study sessions, in which the results are applied to the cloud/BYOD scenario, providing feedback to the metrics design sessions;
  • Plenary sessions on the application possibilities of the metrics in security investment, policy, and service selection, as well as limitations of the metrics;
  • Future work sessions on identifying promising directions and follow-up activities.

The intended outcomes of this seminar are:

  • A common conceptual framework for expressing the properties that are necessary to (a) obtain the right information about existing attacks, (b) use this information to predict possible future attacks, and assess their risks in monetary terms, and (c) provide decision support for implementing countermeasures based on such analysis;
  • Suitable metrics for:
    • Impact of socio-technical attacks;
    • Vulnerability to socio-technical attacks;
    • Attacker models for socio-technical attacks;
    • Costs of countermeasures, including impact on productivity.
  • A high-level procedure for implementing steps (a) – (c) above based on the proposed metrics;
  • Other application possibilities and limitations of the metrics.

Assessing socio-technical security is an emerging research area that is likely to expand over the coming years, which provides a unique opportunity for this seminar to set future trends. The seminar is expected to initiate new project proposals in this area, as well as joint publications. Follow-up activities will be identified during the seminar, and for each activity a leader will be assigned. The seminar organizers will monitor progress in the follow-up activities.


Summary

Introduction

Socio-technical vulnerabilities

Information security, or cyber security, is not only a digital problem. Humans have been termed "the weakest link", but physical access also plays a role. Recent cyber attacks cleverly exploit multiple vulnerabilities of very different nature in the socio-technical systems that they target. For example, the Stuxnet attack relied both on Industrial Control System (ICS) vulnerabilities and on the physical distribution of infected USB sticks, allowed by the business processes in the target facilities [8]. With new developments such as cloud computing, the attack surface of the systems only increases, and so do the options for potential attackers. At any company in the service supply chain, there may be malicious insiders or benevolent employees who fall victim to social engineering, and they significantly influence the security of the system as a whole. In order to compare and prioritize attacks and countermeasures, for example in terms of risk, the different types of vulnerabilities and threats need to be expressed in the same language. The seminar on "Socio-Technical Security Metrics" aims at developing cross-domain metrics for this purpose.

Defining metrics

The idea of defining information security in terms of risk already appeared quite a while ago [2, 10]. Since then, many metrics have been proposed that aim to define attacks and attack opportunities in information systems in quantitative terms (see e.g. [7, 12]). Often, likelihood and impact of loss are mentioned as the key variables, from which risk can then be calculated. Furthermore, notions of vulnerability, difficulty, effort, cost, risk for the attacker, and many more, show up in the literature.

Even in a purely technical setting it is not always clear how all these different concepts are related. Still, including the human element forms a particular challenge, which deserves a separate event and a better integrated community. Too often it is thought that models of humans in the social sciences and models of technology are fundamentally incompatible. This inhibits progress on some very relevant questions: How does sending a phishing message compare to an SQL injection in terms of the above-mentioned variables? And do we need additional notions in the technical models to express the human elements, or in the social science models to express the technical ones?

We thus need unified, or at least comparable, metrics that apply to all types of vulnerabilities. In order to represent socio-technical attacks, the key concepts need to apply to very different types of actions in an attack, including technical exploits and social engineering alike. This requires knowledge of technical infrastructures, social science, and actual incidents. The key features to be addressed in the seminar in order to enable meaningful socio-technical security metrics are outlined below.

Multi-step attacks

Cyber attacks, like Stuxnet, tend to consist of multiple steps, combining technical and social or organizational vulnerabilities. Attack trees [17] are often used to represent possible multi-step attacks on systems, and they can be annotated with quantitative metrics. It has also been proposed to develop formal analysis techniques and simulations ("attack navigators") that generate such trees based on a model of the socio-technical system at hand [5, 16]. By defining methods to calculate metrics for attacks from metrics for steps, one can compare attacks in terms of those metrics, e.g. difficulty. However, next to methods for prediction, one would also want to be able to estimate the relevant parameters of the model based on observed events. For example, if one observes a set of successful and unsuccessful attacks, what does that say about the difficulty of the steps involved, and how does that influence the prediction of possible future events? Statistical methods from social science may assist here [15].
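
To make this concrete, here is a minimal sketch (not from the seminar report) of how per-step metrics might be aggregated bottom-up over an attack tree in the style of [17]: attacker cost sums over AND nodes and is minimized over OR nodes, while success likelihoods multiply over AND nodes, assuming independent steps. All node names and numbers are invented:

    # Minimal attack-tree sketch: aggregate per-step metrics bottom-up.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        name: str
        kind: str = "LEAF"          # "LEAF", "AND", or "OR"
        cost: float = 0.0           # attacker cost of a leaf step
        p_success: float = 1.0      # success likelihood of a leaf step
        children: List["Node"] = field(default_factory=list)

    def min_cost(n: Node) -> float:
        """Cheapest attack: sum costs over AND nodes, min over OR nodes."""
        if n.kind == "LEAF":
            return n.cost
        costs = [min_cost(c) for c in n.children]
        return sum(costs) if n.kind == "AND" else min(costs)

    def success_prob(n: Node) -> float:
        """Success likelihood, assuming independent steps:
        product over AND nodes, best branch over OR nodes."""
        if n.kind == "LEAF":
            return n.p_success
        ps = [success_prob(c) for c in n.children]
        if n.kind == "AND":
            result = 1.0
            for p in ps:
                result *= p
            return result
        return max(ps)

    # Toy socio-technical attack: steal data via hacking OR social engineering.
    tree = Node("steal data", "OR", children=[
        Node("SQL injection", cost=500.0, p_success=0.3),
        Node("phish, then escalate", "AND", children=[
            Node("phishing mail", cost=50.0, p_success=0.2),
            Node("privilege escalation", cost=200.0, p_success=0.5),
        ]),
    ])
    print(min_cost(tree), success_prob(tree))   # -> 250.0 0.3

On this toy tree, the social-engineering path is cheaper (250 vs. 500), but the direct technical exploit is more likely to succeed; exposing exactly this kind of trade-off is what cross-domain metrics are for.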

Estimating metrics from data

Data is thus key to developing good metrics, but obtaining it requires care. Given the data typically already available in organizations, including enterprise architectures, network logs, and potentially even organizational culture, how can the right metrics be obtained from that data? What could be the role of "Big Data" in improving security metrics? And how can additional data be acquired in tailor-made experiments? From the modeling point of view, a distinction can be made here between bottom-up approaches, leveraging existing data, and top-down approaches, defining targeted data collection methods and experiments. A good example on the social side is given by the phishing studies by Jakobsson & Ratkiewicz [6]. On the technical side, intrusion detection systems may constitute an important source of data.
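
As a sketch of such a top-down estimation step, assuming a Bayesian treatment that the report does not prescribe, each observed attempt can be treated as a Bernoulli trial and a Beta posterior maintained over the step's success probability; the prior and the counts below are illustrative only:

    # Sketch: estimate a step's success probability from observed attempts
    # via a Beta-Bernoulli update. Priors and data are illustrative.
    from statistics import NormalDist

    def beta_posterior(successes: int, failures: int,
                       prior_a: float = 1.0, prior_b: float = 1.0):
        """Posterior mean and approximate 95% credible interval
        for the success probability of an attack step."""
        a = prior_a + successes
        b = prior_b + failures
        mean = a / (a + b)
        # Normal approximation to the Beta posterior, fine for a + b >> 1.
        var = (a * b) / ((a + b) ** 2 * (a + b + 1))
        half = NormalDist().inv_cdf(0.975) * var ** 0.5
        return mean, (max(0.0, mean - half), min(1.0, mean + half))

    # E.g. 12 employees clicked in a simulated phishing run, 88 did not:
    mean, ci = beta_posterior(successes=12, failures=88)
    print(f"p(click) ~= {mean:.3f}, 95% CI {ci[0]:.3f}-{ci[1]:.3f}")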

Attacker models

As security threats originate from attackers and not from nature, attacker models are key for security metrics [9]. Attackers will adapt their strategies to the security situation, and also to newly deployed countermeasures. We therefore need meaningful and measurable features of attackers that can be used as a basis for the metrics. For example, the motivation of an attacker may determine the goal of the attack, the resources available to an attacker may determine the number of attacks that he can attempt, and attacker skill may determine the likelihood of success. Costs of an attack as well as risk of detection influence attacker behavior [3]. Again, the theoretical and empirical basis of such models needs to be carefully studied, and (security) economics may provide important insights here.
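
A minimal sketch of a parametric attacker model, loosely in the spirit of the multi-parameter attack trees of [3], with all parameter values invented: a rational attacker picks the attack that maximizes expected utility given success likelihood, gain, cost, and detection risk:

    # Sketch of a parametric attacker model (illustrative parameters only).
    attacks = [
        # (name, p_success, gain, cost, p_detect, penalty)
        ("phishing campaign",  0.20, 10_000.0,   500.0, 0.05, 20_000.0),
        ("SQL injection",      0.30, 10_000.0, 2_000.0, 0.20, 20_000.0),
        ("physical intrusion", 0.50, 10_000.0, 5_000.0, 0.40, 20_000.0),
    ]

    def expected_utility(p_success, gain, cost, p_detect, penalty):
        # Expected gain, minus attack cost, minus expected detection penalty.
        return p_success * gain - cost - p_detect * penalty

    for name, *params in attacks:
        print(f"{name:20s} EU = {expected_utility(*params):9.1f}")
    best = max(attacks, key=lambda a: expected_utility(*a[1:]))
    print("rational attacker chooses:", best[0])   # -> phishing campaign

Even this crude model shows why attacker parameters matter: changing the detection probability or penalty can flip which attack a rational adversary prefers, and hence which countermeasure pays off.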

Countermeasures

All these aspects come together in one final goal: supporting investments. In order to estimate the cost-effectiveness of security measures (also called ROSI, for return on security investment), one would need metrics both for the risk prevented by the countermeasures and for their cost. The former could be calculated based on the properties discussed above. The latter, however, is far from trivial by itself, as costs involve not only investment but also operational costs. Operational costs, in turn, may include maintenance and the like, but an important factor in the total cost of ownership is the impact on productivity. Security features may increase the time required to execute certain tasks, and people have a limited capacity for complying with security policies. If security is too cumbersome or misdirected, people will find workarounds, and this may reduce the effect of the measures on risk [1]. Thus, metrics for countermeasure cost form an important topic in themselves, requiring input from the human factors and usable security domains.
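
The report does not fix a ROSI formula; a common textbook formulation, sketched below with invented figures, divides the risk reduction net of total annual cost by that cost, where the cost deliberately includes the productivity impact discussed above:

    # Sketch: return on security investment (ROSI), a common textbook
    # formulation; all figures are made up for illustration.
    def rosi(ale_before, ale_after, investment, operations, productivity_loss):
        """ROSI = (risk reduction - total annual cost) / total annual cost."""
        total_cost = investment + operations + productivity_loss
        return (ale_before - ale_after - total_cost) / total_cost

    # Annualized loss expectancy drops from 120k to 40k; the control costs
    # 20k/year amortized investment, 30k/year operations, and an estimated
    # 10k/year in lost productivity.
    print(f"ROSI = {rosi(120_000, 40_000, 20_000, 30_000, 10_000):.0%}")  # 33%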

Another application area for the metrics would be selection among alternative system designs. For example, if two vendors offer the same equipment or service, but one is much cheaper, how should security risk be taken into account when making this decision? Both vendors and customers may be interested in security metrics from this point of view. However, metrics would need to be designed carefully in order to avoid creating perverse incentives, where systems are tweaked to score high on the metrics without actually being "better".

Communities

In order to develop meaningful metrics for socio-technical security, participants from the following communities were invited:

  • Security metrics and data-driven security, for obvious reasons;
  • Security risk management, to provide input on suitable risk variables to be included;
  • Security economics, to build upon economic theories of behavior of both attackers and defenders;
  • Security architectures, to get relevant data on information system architecture and incidents;
  • Formal methods, to analyze attack opportunities in complex systems;
  • Social / crime science, to understand attacker behavior and the influence of controls;
  • Human factors, to understand the impact of security controls on users.

Main findings

Paraphrasing some ancient philosophical questions (what is there, what can we know, what should we do), we can structure the main outcomes of this seminar as follows:

  1. What properties are we interested in?
  2. What can we measure?
  3. What should we do with the measurements?

What properties

One of the main outcomes of the seminar is a much better view of which types of security metrics exist and for which purposes they can be used.

This leads to a distinction between metrics that exclude the real-life threat environment (type I) and metrics that include it (type II). Metrics describing difficulty or resistance are typically of type I. They give a security metric that is independent of the actual activity of adversaries, or of the targets that they might be after; for example, the percentage of people who fall for a simulated phishing mail. This is similar to what Böhme calls "security level" [4]. The threat environment is often specified explicitly in such metrics, and the metrics may thus enumerate threat types. However, they do not estimate occurrence rates; in fact, the occurrence rate is often controlled. In the phishing case, the researchers control the properties and occurrence of the phishing e-mails, and describe the e-mail (the controlled threat) in their results.

Metrics describing loss (risk) or incidents are typically of type II. They describe undesired events that happen based on interaction of the system with a threat environment (activity of adversaries), and their consequences. For example, the number of infected computers of a particular Internet Service Provider [18].

An illustration of this difference is the following. Consider two systems, system A and system B [13]. In system A, a locked door protects € 1,000. In system B, an identical locked door protects € 1,000,000. Which system is more secure? Or, alternatively, which door is more secure? One might say that system A is more secure, as it is less likely to be attacked (assuming the attacker knows the system). On the other hand, one might say that the doors are equally secure, as it is equally difficult to break the lock. The former argument is based on including an evaluation of the threat environment, the latter on excluding it.

Obviously, when trying to derive type II metrics from type I metrics, one needs metrics on the threat environment as well. For example, when one wants to calculate risk related to phishing attempts, and one knows how likely one's employees are to fall for phishing mails based on their sophistication, then one also needs information on the expected frequency of phishing mails of certain levels of sophistication in order to calculate the risk. Such models of the threat environment may be probabilistic or strategic (game-theoretic), representing non-adaptive and adaptive attackers, respectively. Probabilistic models, in turn, may be either frequentist (based on known average frequencies) or Bayesian (based on subjective probabilities). The various points of view have not been fully reconciled up to this point, although integration attempts have been made [14].
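
A minimal sketch of this derivation under a frequentist threat model, with all numbers invented: a type I metric (success likelihood per sophistication level) is combined with estimated occurrence rates and an impact figure to yield a type II metric (expected annual loss):

    # Sketch: derive a type II metric from a type I metric plus a
    # frequentist threat-environment model. Numbers invented.
    p_fall = {"low": 0.02, "medium": 0.10, "high": 0.35}     # type I metric
    mails_per_year = {"low": 400, "medium": 60, "high": 5}   # threat model
    loss_per_incident = 8_000.0                              # impact

    expected_annual_loss = sum(
        mails_per_year[s] * p_fall[s] * loss_per_incident for s in p_fall
    )
    # 400*0.02 + 60*0.10 + 5*0.35 = 15.75 expected incidents/year
    print(f"type II metric: expected annual loss = {expected_annual_loss:,.0f}")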

Another consideration is the integration of security metrics from different domains: digital, physical and social. Often, there are different styles of type I metrics, which one would like to integrate in a single type II metric representing the level of risk in a socio-technical system (e.g. an organization). Digital metrics may represent difficulty as required skill (e.g. CVSS), physical metrics may use required time (e.g. burglar resistance), and social metrics may use likelihood of success (e.g. likelihood of success of phishing attempts). Integration of these metrics is still an open challenge.

What measurements

The seminar discussed methods applied in different scientific communities for measurement purposes. Some of these methods rely on quantitative indicators, some rely on qualitative indicators, and some combine both. A further distinction can be made between subjective and empirical metrics, e.g. expert judgements versus monitoring data. Below, for the purpose of illustration, we give a non-comprehensive list of such methods. They can be applied individually or in a complementary way, covering a single measure or combined measures. One usage we consider underrepresented so far is the combination of methods in an effort to improve measurement quality, or to provide information about the validity of a new measure. During the seminar, this approach was often referred to as triangulation of measures.

The following social methods were discussed in the seminar:

  • semi-structured interviews; in-depth interviews; surveys;
  • observations of behavior;
  • critical incident analysis;
  • laboratory experiments; field experiments;
  • expert / heuristic analysis / cognitive walkthrough;
  • root cause analysis.

The following technical methods were discussed in the seminar:

  • security spending;
  • implemented controls;
  • maturity models;
  • incident counts;
  • national security level reports;
  • service level agreements.

It is important to assess which type of metric (type I or type II) is produced by each of the techniques. For example, penetration testing experiments produce type I metrics, whereas incident counts produce type II. Maturity models and national security level reports may be based on a combination of type I and type II metrics. In such cases, it is important to understand what the influence of the threat environment on the metrics is, in order to decide how the metrics can be used.

What usage

Security metrics can contribute to answering questions about a concrete system or about a design (a hypothetical system), and to questions about knowledge as well as questions about preferences. Here, we focus on the simpler distinction between knowledge and design questions. In the case of knowledge questions, metrics are used to gather information about the world. In the case of design questions, metrics are used to investigate a design problem or to evaluate the performance of a design, such as a security control. In terms of knowledge questions, a typical usage discussed is a better understanding of the human factor in security. In terms of design, possible questions are how much security feedback a system should give to users or operators, or how to provide decision support for security investment.

Security metrics may have several limitations. In particular, many metrics suffer from various forms of uncertainty. It may be unclear whether the metrics measure the right thing (validity). Even if this is the case, random variations may induce uncertainty in the values produced (reliability). It is therefore important to understand the implications of such uncertainties for decisions that are made based on the metrics. Triangulation may contribute to the reduction of uncertainty. In some cases, quantitative metrics may not be possible at all, and qualitative methods are more appropriate.
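
One standard way in which triangulation can reduce uncertainty, not prescribed by the report, is inverse-variance weighting of two independent, unbiased estimates of the same quantity, for example a survey-based and a log-based estimate of policy compliance; the values below are illustrative:

    # Sketch: triangulation as inverse-variance weighting of two
    # independent, unbiased estimates of the same quantity.
    def combine(est1, var1, est2, var2):
        """Precision-weighted average; the combined variance is always
        smaller than either input variance."""
        w1, w2 = 1.0 / var1, 1.0 / var2
        combined = (w1 * est1 + w2 * est2) / (w1 + w2)
        return combined, 1.0 / (w1 + w2)

    est, var = combine(0.60, 0.04, 0.70, 0.01)   # survey vs. log data
    print(f"combined estimate {est:.2f}, variance {var:.3f}")
    # weights 25 and 100 -> 0.68, variance 0.008 < min(0.04, 0.01)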

Another limitation is that stakeholders may behave strategically based on what they know about the metrics (gaming the metrics). If stakeholders are rewarded when their security metrics improve, they may put effort into increasing the metrics, but not "actual security". Even if the metrics are valid under normal circumstances, this need not be the case under strategic behavior.

Conclusions

Security is difficult to measure, which should not be a surprise to those involved. However, to understand security in today's complex socio-technical systems, and to provide decision support to those who can influence security, rigorous conceptualisation, well-defined data sources and clear instructions for use of the metrics are key assets. This seminar laid the foundations for understanding and applying socio-technical security metrics.

In particular, we strove for clarity on (a) the different types of security metrics and their (in)compatibility, (b) the different sources and methods for data extraction, and (c) the different purposes of using the metrics, and the link with types, methods and sources. Several papers are planned as follow-up activities, as described in the reports of the working groups (Section 4). On many topics there are different views, which may not always be compatible, as was clear from the panel discussion (Section 5). Future follow-up seminars would be very valuable to address the open problems (Section 6).

References

  1. A. Beautement, M. A. Sasse, and M. Wonham. The compliance budget: Managing security behaviour in organisations. In Proc. of the 2008 Workshop on New Security Paradigms, NSPW’08, pp. 47–58, New York, NY, USA, 2008. ACM.
  2. B. Blakley, E. McDermott, and D. Geer. Information security is information risk management. In Proc. of the 2001 New Security Paradigms Workshop, pp. 97–104, New York, NY, USA, 2001. ACM.
  3. A. Buldas, P. Laud, J. Priisalu, M. Saarepera, and J. Willemson. Rational choice of security measures via multi-parameter attack trees. In Critical Information Infrastructures Security, volume 4347 of LNCS, pp. 235–248. Springer, 2006.
  4. R. Böhme. Security metrics and security investment models. In Isao Echizen, Noboru Kunihiro, and Ryoichi Sasaki, editors, Advances in Information and Computer Security, volume 6434 of LNCS, pp. 10–24. Springer, 2010.
  5. T. Dimkov, W. Pieters, and P. H. Hartel. Portunes: representing attack scenarios spanning through the physical, digital and social domain. In Proc. of the Joint Workshop on Automated Reasoning for Security Protocol Analysis and Issues in the Theory of Security (ARSPA/WITS’10), volume 6186 of LNCS, pp. 112–129. Springer, 2010.
  6. P. Finn and M. Jakobsson. Designing ethical phishing experiments. IEEE Technology and Society Magazine, 26(1):46–58, 2007.
  7. M. E. Johnson, E. Goetz, and S. L. Pfleeger. Security through information risk management. IEEE Security & Privacy, 7(3):45–52, May 2009.
  8. R. Langner. Stuxnet: Dissecting a cyberwarfare weapon. IEEE Security & Privacy, 9(3):49–51, 2011.
  9. E. LeMay, M. D. Ford, K. Keefe, W. H. Sanders, and C. Muehrcke. Model-based security metrics using adversary view security evaluation (ADVISE). In Proc. of the 8th Int’l Conf. on Quantitative Evaluation of Systems (QEST’11), pp. 191–200, 2011.
  10. B. Littlewood, S. Brocklehurst, N. Fenton, P. Mellor, S. Page, D. Wright, J. Dobson, J. McDermid, and D. Gollmann. Towards operational measures of computer security. Journal of Computer Security, 2(2–3):211–229, 1993.
  11. H. Molotch. Against security: How we go wrong at airports, subways, and other sites of ambiguous danger. Princeton University Press, 2014.
  12. S. L. Pfleeger. Security measurement steps, missteps, and next steps. IEEE Security & Privacy, 10(4):5–9, 2012.
  13. W. Pieters. Defining “the weakest link”: Comparative security in complex systems of systems. In Proc. of the 5th IEEE Int’l Conf. on Cloud Computing Technology and Science (CloudCom’13), volume 2, pp. 39–44, Dec 2013.
  14. W. Pieters and M. Davarynejad. Calculating adversarial risk from attack trees: Control strength and probabilistic attackers. In Proc. of the 3rd Int’l Workshop on Quantitative Aspects in Security Assurance (QASA), LNCS, Springer, 2014.
  15. W. Pieters, S. H. G. Van der Ven, and C. W. Probst. A move in the security measurement stalemate: Elo-style ratings to quantify vulnerability. In Proc. of the 2012 New Security Paradigms Workshop, NSPW’12, pp. 1–14. ACM, 2012.
  16. C. W. Probst and R. R. Hansen. An extensible analysable system model. Information Security Technical Report, 13(4):235–246, 2008.
  17. B. Schneier. Attack trees: Modeling security threats. Dr. Dobb’s Journal, 24(12):21–29, 1999.
  18. M. J. G. Van Eeten, J. Bauer, H. Asghari, and S. Tabatabaie. The role of internet service providers in botnet mitigation: An empirical analysis based on spam data. OECD STI Working Paper 2010/5, Paris: OECD, 2010.
Copyright: Dieter Gollmann, Cormac Herley, Vincent Koenig, Wolter Pieters, and Martina Angela Sasse

Participants
  • Zinaida Benenson (Universität Erlangen-Nürnberg, DE) [dblp]
  • Sören Bleikertz (IBM Research GmbH - Zürich, CH) [dblp]
  • Rainer Böhme (Universität Münster, DE) [dblp]
  • Tristan Caulfield (University College London, GB) [dblp]
  • Kas P. Clark (Ministry of Security and Justice - The Hague, NL)
  • Trajce Dimkov (Deloitte - Eindhoven, NL) [dblp]
  • Simon N. Foley (University College Cork, IE) [dblp]
  • Carrie Gates (Dell Research, CA) [dblp]
  • Dieter Gollmann (TU Hamburg-Harburg, DE) [dblp]
  • Dina Hadziosmanovic (TU Delft, NL) [dblp]
  • Carlo Harpes (itrust - Berbourg, LU) [dblp]
  • Cormac Herley (Microsoft Corporation - Redmond, US) [dblp]
  • Roeland Kegel (University of Twente, NL)
  • Vincent Koenig (University of Luxembourg, LU) [dblp]
  • Stewart Kowalski (Gjøvik University College, NO) [dblp]
  • Aleksandr Lenin (Technical University - Tallinn, EE) [dblp]
  • Gabriele Lenzini (University of Luxembourg, LU) [dblp]
  • Mass Soldal Lund (Norwegian Defence Cyber Academy - Lillehammer, NO) [dblp]
  • Sjouke Mauw (University of Luxembourg, LU) [dblp]
  • Daniela Oliveira (University of Florida - Gainesville, US) [dblp]
  • Frank Pallas (KIT - Karlsruher Institut für Technologie, DE) [dblp]
  • Sebastian Pape (TU Dortmund, DE) [dblp]
  • Simon Parkin (University College London, GB) [dblp]
  • Shari Lawrence Pfleeger (Dartmouth College Hanover, US) [dblp]
  • Wolter Pieters (TU Delft, NL & University of Twente, NL) [dblp]
  • Kai Rannenberg (Goethe-Universität Frankfurt am Main, DE) [dblp]
  • Roland Rieke (Fraunhofer SIT - Darmstadt, DE) [dblp]
  • Martina Angela Sasse (University College London, GB) [dblp]
  • Paul Smith (AIT Austrian Institute of Technology - Wien, AT) [dblp]
  • Ketil Stølen (SINTEF - Oslo, NO) [dblp]
  • Axel Tanner (IBM Research GmbH - Zürich, CH) [dblp]
  • Sven Übelacker (TU Hamburg-Harburg, DE) [dblp]
  • Michel van Eeten (TU Delft, NL) [dblp]
  • Jan Willemson (Cybernetica AS - Tartu, EE) [dblp]
  • Jeff Yan (University of Newcastle, GB) [dblp]

Related Seminars
  • Dagstuhl Seminar 16461: Assessing ICT Security Risks in Socio-Technical Systems (November 13–18, 2016)

Classification
  • modelling / simulation
  • security / cryptology
  • society / human-computer interaction

Keywords
  • Security risk management
  • security metrics
  • socio-technical security
  • social engineering
  • multi-step attacks
  • return on security investment