ISoLA 2016
10-14 October 2016
Tracks
-
Correctness-by-Construction and Post-hoc Verification: friends or foes?
Organizers: Maurice ter Beek, Reiner Haehnle, Ina Schaefer
-
Static and Runtime Verification: Competitors or Friends?
Organizers: Dilian Gurov, Marieke Huisman, Rosemary Monahan, Klaus Havelund
-
Testing the Internet of Things
Organizers: Michael Felderer, Ina Schieferdecker
-
Rigorous Engineering of Collective Adaptive Systems
Organizers: Stefan Jähnichen, Martin Wirsing
-
RVE: Runtime Verification and Enforcement, the (industrial) application perspective
Organizers: Ezio Bartocci, Ylies Falcone
-
ModSyn-PP: Modular Synthesis of Programs and Processes
Organizers: Boris Düdder, George Heineman, Jakob Rehof
-
Variability modelling for scalable software evolution
Organizers: Ferruccio Damiani, Christoph Seidl, Ingrid Chieh Yu
-
Statistical Model Checking
Organizers: Kim Larsen, Axel Legay
-
Detecting and Understanding Software Doping
Organizers: Christel Baier, Holger Hermanns
-
Formal Methods and Safety Certification: Challenges in the Railways Domain
Organizers: Alessandro Fantechi, Stefania Gnesi
-
Semantic heterogeneity in the formal development of complex systems
Organizers: Idir Ait Sadoune, Paul Gibson, Marc Pantel
-
Privacy and Security Issues in Information Systems
Organizers: Axel Legay, Fabrizio Biondi
-
Evaluation and Reproducibility of Program Analysis and Verification
Organizers: Markus Schordan, Dirk Beyer, Jonas Lundberg
-
Towards a Unified View of Modeling and Programming
Organizers: Manfred Broy, Klaus Havelund, Rahul Kumar, Bernhard Steffen
-
Learning Systems: Machine-Learning in Software Products and Learning-based Analysis of Software Systems
Organizers: Falk Howar, Andreas Rausch, Karl Meinke
Correctness-by-Construction and Post-hoc Verification: friends or foes?
Correctness-by-Construction (CbC) sees the development of software (systems) as a true form of Engineering, with a capital "E". CbC advocates a step-wise refinement process from specification to code, ideally supported by CbC design tools that automatically generate error-free software implementations from rigorous and unambiguous specifications of requirements. Afterwards, testing only serves to validate the CbC process rather than to find bugs. In Post-hoc Verification (PhV), formal methods and tools are applied only after the (software) system has been constructed, not during the development process. Typically, a formal specification of (an abstraction of) the implemented system describes how it should behave, after which V&V techniques like testing, bug finding, model checking, and deductive verification are used to check whether the implementation indeed satisfies the specifications and meets the user's needs. Further related techniques include Design for Verification and Lightweight Verification.
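To make the contrast concrete, here is a minimal sketch of the PhV style in Python (our own illustrative example, not tied to any particular tool): the implementation is written first, and a separate pre/postcondition specification is then checked against it post hoc, here by runtime assertion checking over test inputs; a deductive verifier would instead discharge the same contract by proof.

    # Minimal post-hoc verification sketch (hypothetical example):
    # the implementation exists first; its specification is checked afterwards.

    def binary_search(xs, key):
        """Return an index i with xs[i] == key, or -1 if key is absent."""
        lo, hi = 0, len(xs) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if xs[mid] == key:
                return mid
            elif xs[mid] < key:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    def check_spec(xs, key):
        """Post-hoc specification: precondition and postconditions, checked at runtime."""
        assert all(a <= b for a, b in zip(xs, xs[1:]))  # pre: xs is sorted
        i = binary_search(xs, key)
        if i >= 0:
            assert xs[i] == key        # post: a hit returns a correct index
        else:
            assert key not in xs       # post: a miss means key is truly absent

    for case in ([], [1], [1, 3, 5, 7]):
        for k in range(-1, 9):
            check_spec(case, k)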
This track aims to bring together researchers and practitioners interested in the inherent "tension" that is usually felt when one tries to balance the pros and cons of CbC, PhV, and other verification approaches; see also http://fmt.isti.cnr.it/~mtbeek/ISoLA16.html
Track Organizers
Maurice ter Beek, ISTI-CNR, Pisa, Italy
Reiner Haehnle, TU Darmstadt, Germany
Ina Schaefer, TU Braunschweig, Germany
Static and Runtime Verification: Competitors or Friends?
In recent years, significant progress has been made on both dynamic and static techniques to increase the quality of software. Within this track, we would like to investigate whether and how we can leverage these techniques by combining them. Questions to be addressed include: what can runtime verification bring to make static verification more cost-effective, and what can static verification bring to make runtime verification more precise? The session will consist of several presentations on experiences of combining the two techniques, as well as panel discussions.
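One simple form such a combination can take is sketched below in Python (an illustrative sketch under assumptions of ours; the set of statically proven properties and the monitor API are hypothetical, not any specific tool's): properties already discharged by a static verifier need no instrumentation, while the residual, unproven properties are checked by a runtime monitor, reducing monitoring overhead.

    # Illustrative sketch: runtime checks remain only for properties that
    # static verification could not discharge. The proven set is hypothetical.

    PROVEN_STATICALLY = {"index_in_bounds"}  # imagine a static verifier proved this

    def monitor(prop_name, condition):
        """Check a residual property at runtime unless it was proven statically."""
        if prop_name in PROVEN_STATICALLY:
            return                                  # no overhead for proven properties
        if not condition():
            raise AssertionError(f"runtime violation of {prop_name}")

    def average(xs):
        monitor("nonempty_input", lambda: len(xs) > 0)             # unproven: checked
        total = 0
        for i in range(len(xs)):
            monitor("index_in_bounds", lambda: 0 <= i < len(xs))   # proven: skipped
            total += xs[i]
        return total / len(xs)

    print(average([1, 2, 3]))  # 2.0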
Track Organizers
Dilian Gurov, KTH Royal Institute of Technology, Stockholm
Marieke Huisman, University of Twente
Rosemary Monahan, Maynooth University
Klaus Havelund, Jet Propulsion Laboratory
Testing the Internet of Things
The objective of the Testing the Internet of Things (T-IoT) track is to establish a fruitful and meaningful dialog among systems practitioners and systems engineering researchers in embedded systems (ES), cyber-physical systems (CPS), and the Internet of Things (IoT) on the challenges, obstacles, results (both good and bad), and lessons learned associated with the massive deployment of Internet of Things solutions in various safety- and security-critical environments. The T-IoT presentations will provide accounts of the application of testing and test engineering practices (principles, techniques, tools, methods, processes, etc.) to a specific domain or to the development of a significant IoT system. In particular, we are interested in new methods, experiences, and best practices for assuring the quality, safety, security, reliability, resilience, and trustworthiness of IoT systems during development, for certification and deployment, and in operation and maintenance.
Track Organizers
Michael Felderer, University of Innsbruck, Austria
Ina Schieferdecker, Fraunhofer FOKUS, Berlin, Germany
Rigorous Engineering of Collective Adaptive Systems
Today's software systems are becoming increasingly distributed and decentralized and have to adapt autonomously to dynamically changing, open-ended environments. Often the nodes have their own individual properties and objectives; interactions with other nodes or with humans may lead to the emergence of unexpected phenomena. We call such systems collective adaptive systems. Examples of collective adaptive systems include robot swarms, smart cities, voluntary peer-to-peer clouds, as well as socio-technical systems and the Internet of Things.
This track is a follow-up to the successful track on "Rigorous Engineering of Autonomic Ensembles" at ISoLA 2014 and aims at bringing together researchers and practitioners to present and discuss rigorous methods to engineer collective adaptive systems. Topics of interest include (but are not limited to):
- Methods and theories for designing and analysing collective adaptive systems
- Techniques for programming and operating collective adaptive systems
- Methods and mechanisms for adaptation and dynamic self-expression
- Validation and verification of collective adaptive systems
- Security, trust and performance of collective adaptive systems
Track Organizers
Stefan Jähnichen, TU Berlin, Germany
Martin Wirsing, LMU München, Germany
RVE: Runtime Verification and Enforcement, the (industrial) application perspective
In the past decade, Runtime Verification (RV) and Runtime Enforcement (RE) have gained much attention from both the research community and practitioners. Roughly speaking, RV and RE combine a set of theories, techniques, and tools aiming at the efficient analysis of systems' executions and at guaranteeing their correctness using monitoring techniques. One of the major challenges in RV and RE is characterizing and formally expressing requirements that can be efficiently verified and enforced. Despite the major strides made in recent years, much effort is still needed to make RV and RE attractive and viable methodologies for industrial use. In addition to industry, numerous other domains, such as security and bio-health monitoring, can gain from RV and RE. The purpose of the "Runtime Verification and Enforcement: the (industrial) application perspective" track at ISoLA'16 is to bring together experts on runtime verification and runtime enforcement and potential application domains, to advance the state of the art on how to make RV and RE more usable and attractive to industry and other disciplines.
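As a deliberately tiny illustration of the difference between verification and enforcement, the following Python sketch monitors the property "no write after close" over an event stream (the property and event names are our own illustrative choices): the verification monitor merely reports a violation, while the enforcement monitor additionally suppresses the offending event.

    # Tiny runtime verification/enforcement sketch for the property
    # "no write after close" over a stream of events (illustrative only).

    class NoWriteAfterClose:
        def __init__(self, enforce=False):
            self.closed = False
            self.enforce = enforce

        def step(self, event):
            """Process one event; return it, or None if it was suppressed."""
            if event == "close":
                self.closed = True
            elif event == "write" and self.closed:
                if self.enforce:
                    print("enforcement: suppressing illegal write")
                    return None                            # RE: block the violating event
                print("verification: property violated")   # RV: report only
            return event

    trace = ["write", "close", "write"]
    verifier = NoWriteAfterClose(enforce=False)
    enforcer = NoWriteAfterClose(enforce=True)
    for e in trace:
        verifier.step(e)
        enforcer.step(e)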
Track Organizers
Ezio Bartocci, Vienna University of Technology, Austria
Ylies Falcone, Univ. Grenoble Alpes, Inria, LIG, Grenoble, France
ModSyn-PP: Modular Synthesis of Programs and Processes
It is an old and beautiful dream of computer science to synthesize software applications from specifications. We know today, after 60 years of research, that realizing this dream faces fundamental challenges. First, synthesis problems typically have high computational complexity. Second, and perhaps a greater hurdle in practice, it is challenging to correctly write the complex specifications required by the standard program logics traditionally used in synthesis. Third, it is still not clear how software engineers could apply these research results to solve the practical problems they face every day. Whereas synthesis has traditionally been conceived as the construction of a system "from scratch", work in component-oriented design has recently inspired the idea of component-based synthesis, where software is synthesized relative to a given collection of components. From the standpoint of component-based synthesis, components are prefabricated building blocks providing higher-valued raw material for synthesis. Taking this concept one step further, we envision a wide range of modularity mechanisms being applied to reduce the essential complexity found in software product lines. Since systems development in modern computational environments is increasingly based on the extended use of repositories of reusable components (e.g., software framework libraries, web services, embedded software and hardware components), modularity would appear to be an important pragmatic strategy for tailoring synthesis to practical circumstances.
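A minimal sketch of this component-based view, in Python over a hypothetical repository: each component is described only by its interface (input and output types), and synthesis is a backward search for a composition that inhabits a requested goal type. Approaches based, e.g., on combinatory logic are far richer; this merely illustrates the idea.

    # Hypothetical component repository: name -> (input types, output type).
    COMPONENTS = {
        "read_file": ((), "Text"),
        "parse_csv": (("Text",), "Table"),
        "plot":      (("Table",), "Chart"),
    }

    def synthesize(goal, depth=5):
        """Backward search for a composition of components producing `goal`."""
        if depth == 0:
            return None
        for name, (ins, out) in COMPONENTS.items():
            if out != goal:
                continue
            args = [synthesize(t, depth - 1) for t in ins]
            if all(a is not None for a in args):
                return (name, args)   # a composition tree realizing the goal
        return None

    print(synthesize("Chart"))
    # ('plot', [('parse_csv', [('read_file', [])])])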
The organizers hope to build on the success of Dagstuhl Seminar 14232, “Design and Synthesis from Components” (June 1-6, 2014), which brought together researchers from the component-oriented design community, researchers working on interface theories, and researchers working in synthesis, as well as on the success of the workshop ModSyn-PL (Modular Synthesis of Product Lines), held in conjunction with the 19th International Software Product Line Conference (SPLC) on July 20, 2015, in Nashville, TN.
Track Organizers
Boris Düdder, Technische Universität Dortmund, Germany
George Heineman, WPI, USA
Jakob Rehof, Technische Universität Dortmund, Germany
Variability modelling for scalable software evolution
ICT is becoming increasingly integrated into our everyday environment, distributed across cars, appliances, and all kinds of devices, in many different variants that evolve continuously according to customer needs. The aim of this special track is to bring together researchers and practitioners to present and discuss cutting-edge technology from software product lines, software upgrades, and scalable cloud computing to support highly individualized and reconfigurable distributed applications. Topics of special interest within the track are a) frameworks, languages, and methodologies, b) cloud and mobile, as well as c) context-awareness. The individual contributions are put into the context of variability modelling for scalable software evolution.
Track Organizers
Ferruccio Damiani, Univ. of Torino
Christoph Seidl, TU Braunschweig
Ingrid Chieh Yu, Univ. of Oslo
Statistical Model Checking
Statistical Model Checking (SMC) has recently been proposed as an alternative that avoids an exhaustive exploration of the state space of a system under verification. The core idea of the approach is to run a number of simulations of the system and then use statistical methods to decide whether the system satisfies the property with respect to a given probability. The answer is correct up to some confidence. SMC is generally much faster than formal verification techniques, and it is closer to current industrial practice. Moreover, the approach can be used to verify properties that cannot be expressed in the classical temporal logics used in formal verification, as well as to approximate undecidable model checking problems. Impressive results have been obtained in various areas such as energy-centric systems, automotive, and systems biology.
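The core loop of SMC is easy to sketch. The following Python fragment (a minimal sketch under assumptions of ours; the simulated system is a hypothetical stand-in) estimates the probability that a run satisfies a property, choosing the number of simulations via the additive Chernoff-Hoeffding bound so that the estimate is within eps of the true probability with confidence at least 1 - delta; real SMC tools add sequential tests and richer property languages.

    import math, random

    def smc_estimate(simulate_and_check, eps=0.01, delta=0.05):
        """Monte Carlo estimate of P(property holds), accurate to +/- eps
        with confidence at least 1 - delta (additive Chernoff-Hoeffding bound)."""
        n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
        successes = sum(simulate_and_check() for _ in range(n))
        return successes / n, n

    # Hypothetical system: a run satisfies the property unless a rare fault occurs.
    def run_satisfies_property():
        return random.random() > 0.03   # true satisfaction probability: 0.97

    p_hat, n = smc_estimate(run_satisfies_property)
    print(f"estimated probability {p_hat:.3f} from {n} simulations")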
At ISoLA 2014, we organized the first session on SMC. The objective of this new session is to build upon the discussions held during that session. In particular, the session will explore the links between statistical model checking, formal verification, and machine learning.
Applications to be considered include systems of systems, cyber-physical systems, and energy-centric systems.
Track Organizers
Kim Larsen, Aalborg University
Axel Legay, Inria Rennes and Aalborg University
Detecting and Understanding Software Doping
The VW emission scandal has made it quite clear: embedded software might provide features and functionalities that are not in the interest of the device's user or even of an entire society. This 'software doping' is deeply intertwined with the market strategies of embedded device manufacturers and comes with a strong trend towards proprietary problem solutions. Embedded software doping is what enables inkjet printers to reject third-party cartridges, and it enables cars to secretly pollute our environment. In general terms, embedded software doping locks users out of the products they own. Efforts to counteract this trend by empirical evaluation, and thus by comparison at singular points (e.g., NOx emission tests under lab conditions), are clearly insufficient.
This ISoLA track seeks foundational ways out of this crisis.
First of all, richer models are needed to characterize the intended behaviour of an embedded device. These models must enable us to tell apart the required and the disallowed behaviour of a device over a broad range of operational conditions, not just for singular points experimentally evaluated under lab conditions. They must represent quantifiable behaviour, so as to support reasoning about exhaust emissions, power consumption, temperature, and so on.
In order to detect and understand software doping in all its facets, these behaviour models need to be equipped with novel and thorough verification and analysis techniques, in order to get provable assurance of a device's functionality, if possible at design time. This goes far beyond lab testing and bridges to strong approaches to model-based analysis. In order to arrive there, we need to bring together researchers working on interface technology, on quantitative models, on algorithmic verification, and in other related fields.
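To illustrate why singular-point evaluation is insufficient, consider the following deliberately simple Python sketch (the emission model, numbers, and threshold are entirely hypothetical): a doped controller passes the check at the single lab point, while checking the same quantitative requirement over a sampled range of operational conditions exposes the doping.

    # Hypothetical doped controller: clean at the lab point, dirty elsewhere.
    LAB_SPEED, LIMIT = 50.0, 80.0        # km/h test point, emission limit

    def nox_emissions(speed):
        if abs(speed - LAB_SPEED) < 1.0:             # detects the lab test cycle
            return 60.0                              # complies under test conditions
        return 60.0 + 2.0 * abs(speed - LAB_SPEED)   # pollutes in normal operation

    def lab_check():
        return nox_emissions(LAB_SPEED) <= LIMIT

    def range_check(speeds):
        """Check the requirement over a broad range of operating conditions."""
        return all(nox_emissions(v) <= LIMIT for v in speeds)

    print(lab_check())                      # True: the singular-point test passes
    print(range_check(range(10, 130, 10)))  # False: the doping is exposed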
Track Organizers
Christel Baier, TU Dresden
Holger Hermanns, Saarland University
Formal Methods and Safety Certification: Challenges in the Railways Domain
Europe is investing in the establishment and development of trans-European transport infrastructures (like those in telecommunications and energy) to promote services of general interest and economic, social, and territorial cohesion, supporting the interconnection and interoperability of national networks as part of a system of open and competitive markets. Growth forecasts for 2030 indicate that railway development will be one of the central elements of such networks, and that rail will assume ever greater importance in responding to mobility needs.
In this context, the railway signalling domain has in the last few years been changing at a rapid pace, a trend destined to continue. The main driver of this revolution is the quest for capacity increases; the most significant advances in this domain include ERTMS/ETCS (European Rail Traffic Management System/European Train Control System), which improves capacity and enhances cross-border operation within Europe; the massive replacement of relay-based interlockings by computer-controlled ones; and CBTC (Communication Based Train Control) systems, deployed in metro and suburban railways to improve capacity and add automated driving capabilities.
These advances raise concerns about guaranteeing safe operation while satisfying availability, capacity, and interoperability requirements. These concerns pose to the research community the challenge of finding effective and conveniently implementable ways to achieve:
- Suitable domain-specific modelling and analysis techniques, able to scale up to the ever increasing complexity of railway systems.
- Alignment of (the usability of) verification techniques with industrial development processes and with currently applicable certification guidelines.
- Formal and automated verification of large station interlockings, to decrease certification costs.
- Cost reduction of verification of specific railway applications w.r.t. generic verification techniques.
- Standardization of protocols and procedures, based on unambiguous definitions, in order to support Interoperability of different systems.
Track Organizers
Alessandro Fantechi, University of Florence, Italy
Stefania Gnesi, ISTI-CNR, Italy
Semantic heterogeneity in the formal development of complex systems
Nowadays, the formal development of hardware and/or software systems implies the synthesis and analysis of several models on which properties are expressed and then formally verified. Parts of these systems are defined within contexts, imported, and/or instantiated. Such contexts usually represent the implicit semantics of the systems. Several relevant properties are defined on these implicit semantics according to the formal technique being used. When considering these properties in their context, with the associated explicit semantics, they may not be provable. Therefore, the development activities need to be revisited in order to facilitate the handling of the explicit semantics.
This development process leads to semantic heterogeneity, which may appear in two different forms.
- The first is related to the large variety of formal development techniques and to the semantics and proof systems underlying these techniques. As a consequence, several formal descriptions with different semantics may be associated with a given system.
- The second type of heterogeneity results from the modelling domain chosen for formalising the described system. Usually, this domain is neither explicitly described nor formalised, and most of the knowledge related to it is not adequately encoded by the formal system description. Over the last decade, ontologies have been used as explicit formalisations of such modelling domains. Expressing domain-specific ontological concepts in formal models will contribute to reducing such heterogeneity. It also allows developers to address specific properties related to interoperable, adaptive, reconfigurable, and plastic systems.
This thematic track is devoted to compiling the state of the art in semantic heterogeneity in the formal development of complex systems and will highlight recent advances in this research field. Particularly welcome are reports, research papers, and position papers, from either academia or industry.
Track Organizers
Idir Ait Sadoune, CentraleSupelec, Gif-sur-Yvette, France
Paul Gibson, IT Sud Paris, Evry, France
Marc Pantel, IRIT/INPT-ENSEEIHT, Toulouse, France
Privacy and Security Issues in Information Systems
The protection of private data is a primary concern of computer science. Privacy and security are necessary properties of systems to ensure that individuals, companies, and nations can rely on and trust their technological infrastructure. However, critical vulnerabilities in these systems are discovered too late, and sometimes by malicious agents who sell or exploit them. Malware is becoming increasingly complex, adaptive, and hard to locate.
Modern systems depend on a complex interplay of hardware and software components, producing security vulnerabilities that are very hard to locate. To build, develop, and maintain secure systems, a new generation of security techniques has to be developed. New security models, evaluation and checking techniques, and system testing and certification procedures are needed, and new tools supporting them have to be made available to security analysts. New malware analysis techniques are required to locate and neutralize malicious programs.
This track aims at bringing together researchers, analysts and engineers working on information and system security, to understand how computer science theory and practice can address the increasingly complex security scenarios of modern interconnected systems.
Track Organizers
Axel Legay, Inria Rennes, France and Aalborg University, Denmark
Fabrizio Biondi, Inria Rennes, France
Evaluation and Reproducibility of Program Analysis and Verification
Today's popular languages have a large number of different language constructs and standardized library interfaces, and the number is further increasing with every new language standard. Most published analyses therefore focus on a subset of such languages or define a language with a few essential constructs of interest. More recently, program analysis and verification competitions have aimed to comparatively evaluate implemented analyses on a given set of benchmarks for popular languages.
The comparison of analyses focuses either on the analysis results themselves (verification of specified properties) or on their impact on a client analysis. This track is concerned with methods of evaluation for comparing analysis and verification techniques, and with how verified program properties can be represented such that they remain reproducible and reusable as intermediate results for other analyses, verification phases, and/or tools.
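As one illustration of such a reusable intermediate result, the following Python sketch packages an analysis verdict together with the metadata needed to reproduce it; the schema and field names are our own invention, not an existing exchange format.

    import hashlib, json

    def record_result(program_path, property_name, verdict, tool, version):
        """Package an analysis result with reproducibility metadata
        (hypothetical schema; field names are illustrative only)."""
        with open(program_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return json.dumps({
            "program_sha256": digest,   # pins the exact analyzed input
            "property": property_name,
            "verdict": verdict,         # e.g. "true", "false", "unknown"
            "tool": tool,
            "tool_version": version,    # needed to rerun the analysis
        }, indent=2)

    # A later verification phase can reuse or re-check this intermediate result.
    print(record_result(__file__, "no-overflow", "true", "toolX", "1.0"))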
The topics of interest are:
- Specification languages for program properties and program analysis results
- Representation and specification of program analysis results in existing program analysis infrastructures, compilers, and tools along with meta-models and evolution of these representations
- Reuse of verification results and combination of multiple verifiers using conditional model checking
- Analysis benchmarking and experimental evaluation of analysis accuracy
- Parallel verification by using parallel algorithms on multi-core systems with shared-memory, GPUs, and/or distributed systems
- Identification of undefined behavior in programs (e.g. C and C++)
- Generation and checking of verification witnesses
Track Organizers
Markus Schordan, Lawrence Livermore National Laboratory, CA, USA
Dirk Beyer, University of Passau, Germany
Jonas Lundberg, Linnaeus University, Sweden
Towards a Unified View of Modeling and Programming
Since the 1960s we have seen a tremendous amount of scientific and methodological work in the fields of design and specification languages, programming languages, and modeling formalisms and concepts. In spite of the very high value of this work, however, the effort has been limited by the fact that we lack a sufficient integration of programming, specification, and modeling languages, as well as tools that support the development engineer in applying methods and techniques from all of these fields.
A closer look shows that a tight integration between specification and verification logics, programming languages, and graphical modeling notations and concepts is needed. An additional difficulty is that, even within each of the fields of programming and programming languages, specification and verification logics, and graphical modeling notations and concepts, a rich variety of different approaches exists. Scientifically, this variety is highly necessary. For finding its way into engineering practice, however, the variety and the missing integration of the methods are a serious obstacle. It is the goal of the meeting to discuss the relationship between modeling and programming, with the possible objective of reaching an agreement on what a unification of these concepts would mean at an abstract level, and what benefits it would bring at the practical level. What are the trends in the three domains (programming, formal specification and verification, and model-based engineering) that can be considered to be unifying them? We want to discuss whether the time is ripe for another attempt to bring things closer together.
The track will be divided into four sessions: meta-modeling, model-based engineering (graphical tools), formal specification languages, and finally programming languages.
Track Organizers
Manfred Broy, Technische Universität München, Germany
Klaus Havelund, Jet Propulsion Laboratory, NASA, USA
Rahul Kumar, Jet Propulsion Laboratory, NASA, USA
Bernhard Steffen, TU Dortmund, Germany
Learning Systems: Machine-Learning in Software Products and Learning-based Analysis of Software Systems
We are entering the age of learning systems!
On the one hand, we are surrounded by devices that learn from our behavior: household appliances, smart phones, wearables, cars, etc., the most recent prominent example being Tesla's autopilot, which learns from human drivers.
On the other hand, man-made systems are becoming ever more complex, requiring us to learn the behavior of these systems: learning-based testing, for example, has been proposed as a method for systematically testing the behavior of systems without models and at a high level of abstraction.
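The following toy Python sketch illustrates the learning-based testing loop (both the system under test and the learner are deliberately simplistic stand-ins): observe the system on sample inputs to form a hypothesis model, then search fresh inputs for divergences between hypothesis and system, each divergence revealing either a bug or a needed refinement of the model.

    def sut(x):
        """Hypothetical system under test, with a deliberate corner-case bug."""
        return x * 2 if x != 7 else 15

    # Learning phase: observe the SUT on a few inputs and form a hypothesis.
    samples = {x: sut(x) for x in (0, 1, 2, 10, 50)}
    slope = samples[1] - samples[0]            # naive learner: assume linearity
    hypothesis = lambda x: samples[0] + slope * x

    # Testing phase: hunt for inputs where hypothesis and system diverge.
    divergences = [x for x in range(100) if sut(x) != hypothesis(x)]
    print(divergences)  # [7]: either a bug in the SUT or a gap in the model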
Advances in both areas raise the same questions concerning properties of inferred models: How accurate a description of a system's behavior can we obtain?
How can we reason about and assure the safety of such systems?
This track aims at bringing together practitioners and researchers to explore the practical impact and challenges associated with using learning-based products as well as learning-based approaches in the analysis and verification of software.
Topics of interest include but are not limited to:
- Approaches to using machine learning in software
- Industrial examples of applications based on machine learning
- Safety assurance for systems that rely on machine learning
- Learning-based approaches and tools for software analysis and verification
- Learning-based testing
- Inference of component models
- Inference of environment models for components
- Integration of learning and verification
Track Organizers
Falk Howar, TU Clausthal, Germany
Andreas Rausch, TU Clausthal, Germany
Karl Meinke, KTH Royal Institute of Technology, Sweden