Dagstuhl Seminar 06351
Methods for Modelling Software Systems (MMOSS)
(Aug 27 – Sep 01, 2006)
Organizers
- Ed Brinksma (University of Twente, NL)
- David Harel (Weizmann Institute - Rehovot, IL)
- Perdita Stevens (University of Edinburgh, GB)
- Roel J. Wieringa (University of Twente, NL)
A proper engineering approach to the construction of complex software systems requires models for their design and verification. Such models must represent both the software system and its (intended) environment. Modelling the environment is needed to define both the events in the environment to which the system must respond, and the effects that the system must have on its environment. A typical environment may consist of other software, hardware, and physical and social systems. The environment of a manufacturing control system, for example, consists of manufacturing information systems, the physical plant, and work procedures to be followed by human operators, all of which affect and complicate the task of modelling at some point of the design and verification processes.
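The role of the environment model can be made concrete with a small, hypothetical sketch (the plant dynamics, the controller, and the temperature bounds below are invented for illustration, not taken from the seminar description): a required effect such as "the room temperature stays between 18 and 24 degrees" can only be stated and checked against a model of the environment, not against the controller in isolation.

# Environment model (assumed dynamics): the room warms while heated, cools otherwise.
def plant_step(temp, heater_on):
    return temp + 0.5 if heater_on else temp - 0.3

# System model: the controller observes the temperature and switches the heater.
def controller(temp):
    return temp < 21.0  # heater on below the setpoint

# The requirement is a property of the closed loop of system and environment.
def satisfies_requirement(initial_temp, steps=1000):
    temp = initial_temp
    for _ in range(steps):
        temp = plant_step(temp, controller(temp))
        if not (18.0 <= temp <= 24.0):
            return False
    return True

print(satisfies_requirement(20.0))  # True for this sketch

Only the combination of controller and environment model allows the requirement to be expressed and checked; the same holds, at much larger scale, for the manufacturing control system mentioned above.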
It is clear that the quality of a design or verification process is directly affected by the quality of the models that are being used. They should meet certain quality criteria, such as correctness, understandability and maintainability. An important question that we want to address is what the relevant quality criteria for design and verification modelling are. Are there guidelines on how to achieve them? And how can we validate them?
Models invariably introduce abstractions, and the modeller has, in principle, an obligation to show that he or she has introduced the "right" abstractions. This justification is essentially informal for a number of reasons:
- Complexity: many systems are too complex to be represented without substantial recourse to abstraction. In practical cases, formal proofs of the adequacy of such abstractions are infeasible, because they would rely on a model of the "full" system, which does not exist.
- Physical reality: this is a non-formal domain by definition, and the quality of a model with respect to its physical environment must be validated by experimental or informal means.
- Social aspects: the user environment of a software system is also informal in nature, but must be taken into account to ensure that the system responds to it properly.
It is clear that most abstractions introduced by design and verification models of real systems cannot be justified formally, and must instead rely on informal argumentation. The quality of any analysis based on such models therefore depends crucially on the quality of these informal arguments.
To build models of a certain quality, we need guidelines on how to construct them well. Too often, however, such guidelines are nonexistent or simply ignored in practice; in verification modelling, for example, "model hacking" often precedes model checking. And the less we understand about the way a model is built, the harder it is to validate its quality. In design modelling, designers like to proceed as if they were starting from scratch, ignoring any models that have already been developed by others. How can we improve this practice, so that we know how to build models that can be validated?
Although the problems of modelling in design and in verification have many similarities, there are differences too. On the one hand, design models represent (an aspect of) the intended structure of the software-to-be. They are prescriptions to be used by implementors and must include the details the implementor needs. Usability of the model by the implementor is an important quality criterion of these models; completeness is another. For verification models, on the other hand, abstraction is a crucial technique: it preserves just as much information about the system as is needed to prove correctness, and yields models that can be verified efficiently. Each property to be verified may require a different model. Still, all models, design or verification, must bear some isomorphy (structural correspondence) to the modelled system, so that from the fact that the model has a property we can conclude that the system has this property too. This requires a good understanding of the relationship between the model and the modelled system.
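This property-preservation requirement can be illustrated with a minimal, hypothetical Python sketch (the toy counter, its transitions, and the abstraction map alpha are invented for illustration): if the abstraction maps every concrete transition onto an abstract one (a simulation), then a safety property verified on the abstract model, here that the "bad" state is unreachable, also holds for the concrete system.

# Concrete system: a toy counter over 0..3, where 3 would be an overflow failure.
concrete_trans = {(0, 1), (1, 2), (2, 0)}  # the counter never actually reaches 3

# Abstract model: all safe counter values are collapsed into a single state.
abstract_trans = {("safe", "safe")}
abstract_bad = {"bad"}

# Abstraction map: concrete state -> abstract state (failures map to "bad").
alpha = {0: "safe", 1: "safe", 2: "safe", 3: "bad"}

def is_simulation(conc_trans, abs_trans, alpha):
    # Every concrete step must be matched by an abstract step.
    return all((alpha[s], alpha[t]) in abs_trans for (s, t) in conc_trans)

def reachable(trans, init):
    # States reachable from 'init' via the transition relation.
    seen, frontier = {init}, [init]
    while frontier:
        s = frontier.pop()
        for (a, b) in trans:
            if a == s and b not in seen:
                seen.add(b)
                frontier.append(b)
    return seen

# If the abstraction is a simulation and no bad abstract state is reachable,
# the concrete system cannot reach a bad state either.
assert is_simulation(concrete_trans, abstract_trans, alpha)
assert not (reachable(abstract_trans, alpha[0]) & abstract_bad)

The sketch also makes the earlier point about informal justification explicit: choosing alpha is precisely the abstraction step that, for real systems, can usually only be defended by informal argument.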
The questions that we invited participants in the seminar to address were:
Model quality:
- What are quality criteria for models? How can they be quantified and checked?
- What is the relationship between models and systems in design and in verification?
- What makes an abstraction reasonable?
Modelling method:
- What are the sources and principles for the construction of good models? What is the relation between design and verification models?
- How can the structure of a model be coupled to the structure of the system? What criteria should be used for the structuring of models?
- How can we bridge between informal knowledge and formal representations?
- How can we use domain knowledge, and especially engineering documentation, to build correct models?
Effectiveness:
- Can we build libraries of problem frames in the domain of embedded software, or in subdomains?
Maintainability:
- Can we build models in such a way that changes in the system (versions) can be easily mapped to versions of models?
- How do changes in the property to be verified translate into changes in the verification model?
Participants
- Stuart Anderson (University of Edinburgh, GB)
- Joanne M. Atlee (University of Waterloo, CA) [dblp]
- Jorge Baier Aranda (University of Toronto, CA) [dblp]
- Andreas Bauer (TU München, DE) [dblp]
- Egon Börger (University of Pisa, IT) [dblp]
- Julian Bradfield (University of Edinburgh, GB) [dblp]
- Ed Brinksma (University of Twente, NL)
- María Victoria Cengarle (TU München, DE)
- Marsha Chechik (University of Toronto, CA) [dblp]
- Christine Choppy (University of Paris North, FR)
- Jorge Fox (TU München, DE)
- Martin Gogolla (Universität Bremen, DE) [dblp]
- Ursula Goltz (TU Braunschweig, DE)
- Jan Friso Groote (Eindhoven Univ. of Technology, NL) [dblp]
- Anthony Hall (Oxford, GB)
- Maritta Heisel (Universität Duisburg-Essen, DE) [dblp]
- Jozef Hooman (Radboud University Nijmegen, NL) [dblp]
- Michael Jackson (London, GB) [dblp]
- David N. Jansen (University of Twente, NL) [dblp]
- Jean-Marc Jézéquel (IRISA / CNRS, FR) [dblp]
- Cliff B. Jones (University of Newcastle, GB) [dblp]
- Wouter Kuijper (University of Twente, NL)
- Juliana Küster Filipe (University of Birmingham, GB)
- Angelika Mader (University of Twente, NL) [dblp]
- Jelena Marincic (University of Twente, NL)
- Peter D. Mosses (Swansea University, GB) [dblp]
- Pierre-Alain Muller (IRISA / CNRS, FR) [dblp]
- Jianwei Niu (The University of Texas - San Antonio, US)
- Oscar Pastor Lopez (Universidad Politécnica - Valencia, ES) [dblp]
- Gianna Reggio (University of Genova, IT)
- Theo Ruys (University of Twente, NL)
- Bernhard Schätz (TU München, DE) [dblp]
- Pierre-Yves Schobbens (University of Namur, BE)
- Wendelin Serwe (INRIA - Grenoble, FR)
- Perdita Stevens (University of Edinburgh, GB) [dblp]
- Frits Vaandrager (Radboud University Nijmegen, NL) [dblp]
- Margus Veanes (Microsoft Corporation - Redmond, US) [dblp]
- Hanno Wupper (Radboud University Nijmegen, NL)
Classification
- modelling / simulation
- sw-engineering
- semantics / formal methods
- verification / logic
Keywords
- Modelling Methods
- Design Models
- Verification Models
- Problem-solution co-refinement