Past Seminars


Friday, November 15, 2019 — by Christophe Prévot from INRIA Grenoble – Rhône-Alpes

Analyses for the Schedulability
and Flexibility of Real-Time Systems

PhD thesis
At 10:00 in Grand Amphi at INRIA Montbonnot
Abstract: Real-time systems are developed for numerous applications in domains such as avionics and the automotive industry, and for these systems timing constraints are as important as functional constraints. Thus, for each functionality, one must compute the maximum time, called the worst-case latency, between the acquisition of a piece of data and the delivery of the corresponding result. Given the needs encountered within the Thales company, we consider single-processor systems on which chains of tasks are allocated. For each chain, its worst-case latency must be computed and the satisfaction of its timing constraint guaranteed. In this thesis, we propose methods to compute upper and lower bounds on the worst-case latency. Furthermore, many systems must evolve over time, so we also present a method for adding a task chain to an existing system while guaranteeing that the timing constraints remain satisfied.

Tuesday, November 12, 2019 — by Jean-Yves Le Boudec from EPFL

Network Calculus

CASERM Seminar organized by SPADES (Sophie Quinton)
At 14:00 in F107 at INRIA Montbonnot
Abstract: Network Calculus is a set of theories and tools that was developed for the deterministic analysis of queuing systems arising in communication networks. It is based on min-plus algebra, and sometimes max-plus algebra. It is used to compute latency and backlog bounds, with proofs that can be formally verified. It can also provide fundamental insights into physical properties of delay systems such as “Pay Bursts Only Once” or “Re-Shaping is For Free”. In this level-setting talk, we review the basic concepts of arrival curve, service curve, their expression with min-plus algebra, and their composition. We also describe shapers (also called minimal regulators) and their properties.
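
For orientation, the two classic bounds the talk builds on can be stated as follows (standard notation from Le Boudec and Thiran's textbook, not taken from the abstract): if a flow constrained by arrival curve \alpha traverses a node offering service curve \beta, then backlog and delay are bounded by

    B \le \sup_{t \ge 0} [\alpha(t) - \beta(t)], \qquad
    D \le \sup_{t \ge 0} \inf \{ \tau \ge 0 : \alpha(t) \le \beta(t + \tau) \}.

Moreover, two nodes in sequence with service curves \beta_1 and \beta_2 offer the min-plus convolution (\beta_1 \otimes \beta_2)(t) = \inf_{0 \le s \le t} [\beta_1(s) + \beta_2(t - s)] as an end-to-end service curve, which is what yields the “Pay Bursts Only Once” insight.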

Monday, November 4, 2019

Design and Analysis of Reconfigurable Multi-view Embedded Systems

CASERM Workshop
IMAG building



Wednesday, October 23, 2019 — by Karoliina Lehtinen from University of Liverpool

Quasi-Polynomial Techniques for Parity Games and Other Problems

Seminar organized by CONVECS (Radu Mateescu)
At 10:00 in F107 at INRIA Montbonnot
 
Abstract: Parity games are central to the verification and synthesis of reactive systems: various model-checking, realisability and synthesis problems reduce to solving these games. Solving parity games — that is, deciding which player has a winning strategy — is one of the few problems known to be in both UP and co-UP yet not known to be in P. So far, the quest for a polynomial algorithm has lasted over 25 years. In 2017 a major breakthrough occurred: parity games are solvable in quasi-polynomial time. Since then, several seemingly very distinct quasi-polynomial algorithms have been published, and some of these ideas have been used to solve other automata-theoretic problems. In this talk, I will give an overview of these developments and the state of the art, with a slight automata-theoretic bias.
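
For reference (my gloss, not part of the abstract): quasi-polynomial time means time

    2^{O((\log n)^{c})} \quad \text{for some constant } c,

and the 2017 breakthrough and its successors solve parity games with n vertices and d priorities in time roughly n^{O(\log d)}, in contrast to the exponential and sub-exponential bounds known before.
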
Bibliography: Roughly, this talk is based on developments presented in the following papers, by myself and others:
  • Universal trees grow inside separating automata: Quasi-polynomial lower bounds for parity games. W. Czerwiński, L. Daviaud, N. Fijalkow, M. Jurdziński, R. Lazić, P. Parys. SODA 2019.
  • Improving the complexity of Parys’ recursive algorithm. K. Lehtinen, S. Schewe and D. Wojtczak. Unpublished.
  • Parity Games: Zielonka’s Algorithm in Quasi-polynomial Time. P. Parys. MFCS 2019.
  • Alternating weak automata from universal trees. L. Daviaud, M. Jurdziński and K. Lehtinen. CONCUR 2019.
  • On the way to alternating weak automata. U. Boker and K. Lehtinen. FSTTCS 2018.
  • A modal μ perspective on solving parity games in quasi-polynomial time. K. Lehtinen. LICS 2018.
  • Succinct progress measures for solving parity games. M. Jurdziński and R. Lazić. LICS 2017.
  • Deciding Parity Games in Quasipolynomial Time. C. Calude, S. Jain, B. Khoussainov, W. Li and F. Stephan. STOC 2017.

Monday, October 21, 2019 — by Hugues Evrard from Google (London)

GPU Schedulers: How Fair Is Fair Enough?

Seminar organized by CONVECS (Radu Mateescu)
At 15:00 in F107 at INRIA Montbonnot
 
Abstract: Blocking synchronisation idioms, e.g. mutexes and barriers, play an important role in concurrent programming. However, systems with semi-fair schedulers, e.g. graphics processing units (GPUs), are becoming increasingly common. Such schedulers provide varying degrees of fairness, guaranteeing enough to allow some, but not all, blocking idioms. While a number of applications that use blocking idioms do run on today’s GPUs, reasoning about liveness properties of such applications is difficult as documentation is scarce and scattered. In this work, we aim to clarify fairness properties of semi-fair schedulers. To do this, we define a general temporal logic formula, based on weak fairness, parameterised by a predicate that enables fairness per-thread at certain points of an execution. We then define fairness properties for three GPU schedulers: HSA, OpenCL, and occupancy-bound execution.
 
We examine existing GPU applications and show that none of the above schedulers are strong enough to provide the fairness properties required by these applications. It hence appears that existing GPU scheduler descriptions do not entirely capture the fairness properties that are provided on current GPUs. Thus, we present two new schedulers that aim to support existing GPU applications. We analyse the behaviour of common blocking idioms under each scheduler and show that one of our new schedulers allows a more natural implementation of a GPU protocol.
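
As a point of reference (a standard textbook formulation, not the talk's exact definition): classical weak fairness for a thread t is the LTL formula

    \mathit{WF}(t) \;=\; \Diamond \Box\, \mathit{enabled}(t) \;\rightarrow\; \Box \Diamond\, \mathit{executed}(t),

that is, a thread that is eventually enabled forever must be scheduled infinitely often. The parameterised formula of the talk guards this obligation with a per-thread predicate that switches fairness on only at certain points of an execution, which is how it captures schedulers that are only “semi-fair”.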

Tuesday, October 8, 2019 — by Lijun Zhang from the Chinese Academy of Sciences

Omega-Automata Learning Algorithms and Their Applications

Seminar organized by SPADES (Jean-Bernard Stefani)
At 11:00 in A103 at INRIA Montbonnot
 
Abstract: Learning-based automata inference techniques have received significant attention from the community of formal system analysis. In general, the applications of automata learning in this community fall into two categories: improving the efficiency and scalability of verification, and synthesizing abstract system models for further analysis. Most of the results in the literature focus on checking safety properties or synthesizing finite behavior models of systems/programs. On the other hand, Büchi automata are the standard model for describing liveness properties of distributed systems. Unlike the case of finite automata learning, learning algorithms for Büchi automata are very rarely used in our community. In this talk, we present algorithms to learn a Büchi automaton from a teacher who knows an omega-regular language. The algorithm is based on learning a formalism named family of DFAs (FDFAs), recently proposed by Angluin and Fisman. The main catch is that we use a classification tree structure instead of the standard observation table structure. The worst-case storage space required by our algorithm is quadratically better than that of the table-based algorithm. We implement the first publicly available library, ROLL (Regular Omega Language Learning), which comprises all the omega-regular learning algorithms available in the literature together with the new algorithms proposed in this work. Further, with our tool, we demonstrate how our algorithm can be exploited in classical automata operations, such as complementation checking, and in the model-checking context.

Tuesday, June 25, 2019 — by Borislav Nikolic from TU Braunschweig

Slot-Based Transmission Protocol for Real-Time NoCs – SBT-NoC

CASERM Seminar organized by SPADES (Sophie Quinton)
At 11:00 in C207 at INRIA Montbonnot

Abstract:
Network on Chip (NoC) interconnects are some of the most challenging-to-analyse components of multiprocessor platforms. This is primarily due to the following two reasons: (i) NoCs contain numerous shared resources (e.g. routers, links), and (ii) the network traffic often concurrently traverses multiple of those resources. Consequently, complex contention scenarios among traffic flows might occur, some of the important implications being significant performance limitations and difficulties in performing real-time analysis. In this work, we propose a slot-based transmission protocol for NoCs (called SBT-NoC), and an accompanying analysis method for deriving worst-case traffic latencies. The cornerstone of SBT-NoC is contention-less slot-based transmission, arbitrated via a protocol running on a dedicated network medium. The main advantage of SBT-NoC is that, while not requiring any sophisticated hardware support (e.g. virtual channels, flit-level arbitration), it makes NoCs amenable to real-time analysis and guarantees bounded low latencies for high-priority time-critical flows, which is a sine qua non for the inclusion of NoCs, and multiprocessors in general, in the real-time domain. The experimental evaluation, including both synthetic workloads and a use-case of an autonomous driving vehicle application, reveals that SBT-NoC offers a plethora of configuration opportunities, which makes it applicable to a wide range of diverse traffic workloads.
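
To make the arbitration idea concrete, here is a deliberately simplified toy model in Python (my own illustration, not the actual SBT-NoC protocol, whose arbitration runs on a dedicated medium and handles multi-slot packets): in each slot, the highest-priority flow with a pending packet transmits without contention, so the latency of high-priority flows stays small regardless of low-priority load.

    # Toy slot-based arbitration: one packet per slot, highest priority wins.
    from collections import deque

    def simulate(flows, num_slots):
        """flows: name -> (priority, deque of packet release times).
        Returns the worst observed latency (in slots) per flow."""
        worst = {name: 0 for name in flows}
        for slot in range(num_slots):
            # Flows with a packet released no later than the current slot.
            ready = [(prio, name) for name, (prio, q) in flows.items()
                     if q and q[0] <= slot]
            if not ready:
                continue
            _, winner = min(ready)   # numerically smallest = highest priority
            release = flows[winner][1].popleft()
            worst[winner] = max(worst[winner], slot + 1 - release)
        return worst

    flows = {
        "critical": (0, deque([0, 2, 4])),   # high priority
        "bulk":     (1, deque([0, 1, 2])),   # low priority
    }
    print(simulate(flows, num_slots=10))     # bulk absorbs all the waiting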

Friday, April 12, 2019 — by Dr. Nicolas Hili from IRT Saint-Exupéry, Toulouse

Worst-Case Reaction Time Optimization on Deterministic Multi-Core Architectures

CASERM Seminar organized by SPADES (Alain Girault)
At 15:00 in C207 at INRIA Montbonnot

Abstract:
The market of embedded processors for safety-critical systems is driven by the need to provide architectures that are both deterministic and high-performance, to cope with the increasing complexity of software. In this context, multi-core architectures tend to become the de facto standard as they provide better performance than single-core architectures. However, designing time-predictable software deployed over multiple cores is a very challenging task. We propose a new approach for the predictability and optimality of inter-core communication and of the execution of tasks allocated on different cores. Our approach is based on the execution of synchronous programs written in the ForeC programming language on deterministic architectures called PREcision Timed (PRET) machines. The originality of the work resides in the time-triggered model of computation and communication, which allows for very precise control over thread execution. Synchronization is done via configurable TDMA arbitration, where the optimal size and offset of the time slots are computed to reduce inter-core synchronization costs. We implemented a robotic application and simulated it using MORSE, a robotic simulation environment. Results show that the proposed model guarantees time-predictable inter-core communication and the absence of concurrent accesses (without relying on hardware mechanisms), and allows for optimized execution throughput.
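
As a rough illustration of the slot-offset optimization mentioned above (a hypothetical toy model, not the analysis from the talk): given the instants at which a core requests communication, one can pick the offset of its slot within the TDMA round that minimizes the worst-case wait.

    # Choose the TDMA slot offset minimizing the worst wait until the slot.
    def best_offset(request_times, round_length):
        def worst_wait(offset):
            # Wait from a request at time t until the next slot start at
            # `offset` (modulo the TDMA round).
            return max((offset - t) % round_length for t in request_times)
        return min(range(round_length), key=worst_wait)

    print(best_offset(request_times=[3, 7, 12], round_length=10))   # -> 7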

Tuesday, April 2, 2019 — by Prof. Edward A. Lee from UC Berkeley

A Personal View of Real-Time Computing

CASERM Seminar organized by SPADES (Alain Girault)
At 14:00 in F107 at INRIA Montbonnot

Abstract:
Today, real-time behavior of programs is a property that emerges from implementations rather than a property that is specified in models. Control over timing behavior of software is difficult to achieve, and timing behavior is neither predictable nor repeatable. I will argue that this problem can be solved by making a commitment to deterministic models that embrace temporal properties as an integral part of the modeling paradigm. I will explain how such models differ from the prevailing practice of modeling low-level computer architectures and estimating worst-case execution time. I will show moreover that deterministic timing is practical today without sacrificing performance for many useful workload models. Specifically, I will describe a class of computer architectures called Abstract PRET Machines (APMs) that deliver deterministic timing with no loss of performance for a family of real-time problems consisting of sporadic event streams with hard deadlines.

Tuesday, December 18, 2018 — by Susanne Graf from Verimag

Building Correct Cyber-Physical Systems: Why We Need a Multiview Contract Theory

CASERM Seminar organized by SPADES (Sophie Quinton)
At 11:00 in F107 at INRIA Montbonnot

Abstract:
The design and verification of critical cyber-physical systems is based on a number of models (and corresponding analysis techniques and tools) representing different viewpoints such as function, timing, security and many more. Overall correctness is guaranteed by mostly informal, and therefore basic, arguments about the relationship between these viewpoint-specific models. We believe that a more flexible contract-based approach could lead to easier integration, to relaxed assumptions, and consequently to more cost-efficient systems while preserving the current modelling approach and its tools.

Friday, December 14, 2018 — by Stefan Leue from University of Konstanz

Analysis and Repair for Timed Diagnostic Traces

Seminar organized by SPADES (Gregor Goessler)
At 10:00 in F108 at INRIA Montbonnot

Abstract:
I present algorithms and techniques for the repair of timed system models, given as networks of timed automata (NTA). The repair is based on an analysis of timed diagnostic traces (TDTs) that are computed by real-time model checking tools, such as UPPAAL, when they detect the violation of a timed reachability property. We present an encoding of TDTs in quantifier-free linear real arithmetic and use the minimal satisfiability core capabilities of the SMT solver Z3 to compute possible repairs. We then present an admissibility criterion, called functional equivalence, that assesses whether a proposed repair is admissible in the overall context of the NTA. I will then describe the architecture of a proof of concept tool called TarTar, which implements the analysis, repair and admissibility test that I describe. I will finally present preliminary experimental results. This is ongoing joint work with Martin Kölbl (University of Konstanz) and Thomas Wies (New York University).
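
A minimal sketch of the encoding idea, with hypothetical constants (this is not TarTar itself, which extracts its constraints from a real UPPAAL counterexample trace and uses Z3's core facilities to compute repairs):

    # Encode a two-step timed trace in linear real arithmetic with Z3 and
    # test whether changing one clock bound removes the property violation.
    from z3 import Real, Solver, sat

    d = Real("d")                      # candidate repair: a location invariant bound
    t1, t2 = Real("t1"), Real("t2")    # delays spent in the two locations

    s = Solver()
    # Hypothetical trace constraints: invariant t1 <= d in the first location,
    # t2 <= 3 in the second; the property is violated when the end-to-end
    # time exceeds 5.
    s.add(t1 >= 0, t2 >= 0, t1 <= d, t2 <= 3, t1 + t2 > 5)

    # The original model had d = 4; check whether tightening it to 2
    # makes every violating execution of this trace infeasible.
    for candidate in (4, 2):
        s.push()
        s.add(d == candidate)
        print("d =", candidate, "-> violation possible:", s.check() == sat)
        s.pop()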

Tuesday, November 13, 2018 — by Lionel Rieg from Verimag

Extending a Verified OS Kernel for Real-Time

CASERM Seminar organized by SPADES (Sophie Quinton)
At 11:00 in F107 at INRIA Montbonnot

Abstract:
The CertiKOS operating system is a verified OS kernel, written in C and proven in Coq. Leveraging CompCert, it provides end-to-end correctness proofs all the way down to the generated assembly code. It is structured into many abstraction layers that isolate the various components of the OS and allow reasoning on the actual code to be lifted to reasoning on its specification. My work aims at extending it into a real-time kernel. In this talk, I will present the modifications we are currently working on and the associated verification challenges: introducing preemptive interrupts to create a timer, switching from a cooperative round-robin scheduler to a preemptive priority-based one, linking the schedulability proof of the task set to the state of the scheduler, and updating the information-flow non-interference proof. Along the way, I will detail our notion of virtual time, a new take on time-demand analysis that is used in the schedulability proof and which we hope will be useful in a broader context.

Friday, September 28, 2018 — by Simon Bliudze from INRIA Lille – Nord Europe

Correctness by Construction: Design of Component-Based Systems Using BIP

CASERM Seminar organized by SPADES (Sophie Quinton)
At 15:00 in F107 at INRIA Montbonnot

Abstract:
Modern software systems are inherently concurrent. They consist of components running simultaneously and sharing access to resources provided by the execution platform. The intrinsic concurrent nature of such interactions is the root cause of the sheer complexity of the resulting software: The complexity of software systems is exponential in the number of their components, making a posteriori verification of their correctness extremely difficult. An alternative approach consists in ensuring correctness by construction. The Rigorous System Design approach advocated in the Behaviour-Interaction-Priority (BIP) framework enforces multiple levels of separation of concerns. It relies on a sequence of semantics-preserving transformations to obtain an implementation of the system from a high-level model, while preserving all the properties established along the way. Until recently, by-construction correctness provided by the BIP design flow was limited to the fact that automatically generated executable code was guaranteed to satisfy the properties established on the corresponding BIP models. Correctness of the high-level application model was limited to deadlock freedom or had to be established by model checking. In order to allow by-construction correctness to be guaranteed already at the level of BIP models, we have proposed the notion of “architectures”, which depict formally defined design patterns. By-construction correctness is achieved by establishing a direct association between global behavioural properties — characterising the coordination between components — and architectures that enforce such properties on BIP models. In this talk, I will briefly present BIP and the architecture-based design approach, relying on the CubETH case study for the purposes of illustration. CubETH is a 1-unit cubesat nanosatellite developed at the EPFL Space Engineering Center (eSpace).

Tuesday, March 20, 2018 — by Thao Dang from Verimag

Scheduling of Embedded Controllers Under Timing Contracts

CASERM Seminar organized by SPADES (Sophie Quinton)
At 11:00 in F107 at INRIA Montbonnot

Abstract:
Timing contracts for embedded controller implementation specify the constraints on the time instants at which certain operations are performed, such as sampling, actuation, and computation. Several previous works have focused on stability analysis of embedded control systems under such timing contracts. In this paper, we consider the scheduling of embedded controllers on a shared computational platform. Given a set of controllers, each of which is subject to a timing contract, we synthesize a dynamic scheduling policy which guarantees that each timing contract is satisfied and that the shared computational resource is allocated to at most one embedded controller at any time. The approach is based on a timed game formulation whose solution provides a suitable scheduling policy. In the second part of the paper, we consider the problem of synthesizing a set of timing contracts that guarantee both the schedulability and the stability of the embedded controllers. This is joint work with Mohammad Al Khatib and Antoine Girard.

Monday, March 5, 2018 — by Enrico Bini from Università di Torino

Adaptive Fair Scheduler (AFS): Fairness with Disturbances

CASERM Seminar organized by SPADES (Sophie Quinton)
At 11:00 in F107 at INRIA Montbonnot

Abstract:
The problem of allocating resources over time to different demands in a “fair” way is present in many application domains. If the resource can be allocated with an arbitrarily fine granularity at no cost, then any type of resource allocation can be achieved (this scheme is called fluid for its resemblance to water). Instead, if the resource has some coarse granularity, then the fluid resource allocation can only be approximated. The notion of lag measures the deviation between the fluid schedule and the real schedule that can actually be achieved. In this presentation, we illustrate the Adaptive Fair Scheduler (AFS), which allocates resources over time and guarantees a target service rate to a set of applications. AFS is capable of achieving a bounded lag in the presence of time overhead at scheduling decision instants and uncertainties in the resource allocation. Thanks to its generality, AFS can be applied to many different application domains. Reconfigurable computing, scheduling of heterogeneous units, and multiprocessor scheduling are some notable examples.
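
In the usual notation (a hedged reconstruction; the talk's precise model may differ), the lag of an application i entitled to a fraction w_i of the resource is

    \mathit{lag}_i(t) \;=\; w_i \cdot t \;-\; s_i(t),

where s_i(t) is the service actually received by time t. The fluid schedule keeps every lag at zero; AFS guarantees that |\mathit{lag}_i(t)| remains bounded despite scheduling overheads and allocation uncertainties.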

Tuesday, February 6, 2018 — by Miguel Romero Rodriguez from INSA Lyon

Discrete Event System Formal Approaches:
Contribution to Global Reliability Markov Chain Generation

CASERM Seminar organized by SPADES and POLARIS (Alain Girault and Bruno Gaujal)
At 15:30 in F107 at INRIA Montbonnot

Abstract:
The growing production of renewable energy is reshaping the very design of power transmission systems. In recent years, high-voltage direct current (HVDC) technologies based on modular multilevel converters (MMC) have been advocated by industry and academia as a solution for efficiently integrating renewable energy into electrical grids. Faster and more complex control strategies will be needed in this domain, which today depends on human decision-making. This work proposes a Discrete Event Systems (DES) approach to manage the control responses to be deployed in such systems. Based on DES modeling and Supervisory Control Theory (SCT), a method for developing supervisory control for HVDC systems, with emphasis on local/global communication constraints, is proposed in this work.

The main objective of the thesis is to exploit the combined potential of discrete and continuous formal approaches in managing the operating modes of MTDC grids. These MTDC grids are composed of MMC (Modular Multilevel Converter) converters connected at a node by power transmission lines. Each of these components has a local controller, and a supervisor will be placed at a higher hierarchical level. The twofold perspective of this research work is, on the one hand, to dimension the elements of this grid (dynamic behaviour, instrumentation) and, on the other hand, to drive it through communicating discrete supervisors. The functional and operational choices of the components and of their supervision structure, according to the prescribed operating modes (nominal and degraded), will characterize the control architectures of these future grids.

Keywords: Automatic control, Supervisory Control Theory, Discrete Event Systems, Safe control.


Tuesday, February 6, 2018 — by Mahya Rahimi from INSA Lyon

A Formal Contribution to Multi-Resource Scheduling Problems

CASERM Seminar organized by SPADES and POLARIS (Alain Girault and Bruno Gaujal)
At 14:30 in F107 at INRIA Montbonnot

Abstract:
The main objective of traditional scheduling is time minimization. Most researchers use mathematical modeling approaches to model and solve scheduling problems, but the intricacy of these methods makes them accessible mostly to mathematicians, whereas the same problems can be modeled more visually and expressively with automata theory. Furthermore, automata models are more robust to changes in problem specifications. Besides, few studies have used formal verification approaches to address scheduling problems, and none of them considered challenging practical issues such as multi-resource sharing, an uncontrollable environment, and reaching the optimal schedule in a reasonable time so that the model can be industrialized.

The main objective of this thesis is to:

  1. propose an efficient model and solving approach for multi-resource sharing scheduling problem through timed automata
  2. develop a synchronous composition for weighted automata to solve the problem directly by weighted automata models
  3. propose automata models to handle uncontrollable parameters.

For this purpose, in the second chapter, a two-step modeling approach is presented to integrate multi-resource sharing into scheduling problems using timed automata. In the first step, the problem is modeled by weighted automata. In the second step, the weighted model is translated to timed automata so that it can be implemented in UPPAAL, a formal verification tool. To solve the problem, an algorithm based on iterated reachability analysis is developed to obtain a sub-optimal makespan (see the sketch below). In the third chapter, a synchronous composition is proposed for weighted automata, to solve the problem by performing time-optimal reachability analysis directly on weighted automata models. In the fourth chapter, various uncontrollable parameters of a scheduling problem, such as task start times, task durations, and failure occurrences, are modeled by timed game automata. The problem is then solved by performing time-optimal strategy synthesis in the synthesis tool TIGA.
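
A hedged sketch of the iteration idea (the thesis queries a model checker such as UPPAAL for reachability; the oracle reachable_within below is a hypothetical stand-in, and the thesis's actual iteration strategy may differ):

    # Tighten a makespan bound by repeated reachability queries.
    def tighten_makespan(reachable_within, lo, hi):
        """reachable_within(m) -> bool, assumed monotone in m.
        Returns a feasible makespan within 1 time unit of the optimum."""
        assert reachable_within(hi)
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if reachable_within(mid):
                hi = mid      # goal reachable within mid: tighten upper bound
            else:
                lo = mid
        return hi

    # Toy oracle: pretend a schedule exists iff the makespan is at least 17.
    print(tighten_makespan(lambda m: m >= 17, lo=0, hi=64))   # -> 17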

Results show that the complexity of the proposed timed model and solving approach is polynomial for problems with controllable tasks, and that such problems can be solved in a reasonable time for industrial cases.

Keywords: Discrete event systems; Formal verification; Control synthesis; Timed automata; Weighted automata; Scheduling problem; Makespan; Multi-resource sharing; Uncontrollability


Wednesday, December 13, 2017 — by Paolo Pazzaglia from Scuola Superiore Sant’Anna – Pisa

Beyond the Weakly Hard Model:
Measuring the Performance Cost of Deadline Misses

CASERM Seminar organized by SPADES (Sophie Quinton)
At 10:30 in F108 at INRIA Montbonnot

Abstract:
Most works in schedulability analysis theory are based on the assumption that constraints on the performance of the application can be expressed by a very limited set of timing constraints (often simply hard deadlines) on a task model. This model is insufficient to represent a large number of systems in which deadlines can be missed, or in which late task responses affect the performance, but not the correctness, of the application. For systems with a possible temporary overload, models like the (m,k) deadline model and its generalization, the Weakly Hard model, have been proposed in the past. However, even the Weakly Hard model has several limitations, since it does not consider the state of the system and is largely unaware of the way in which performance is affected by deadline misses (except for critical failures). We present a state-based representation of the evolution of a controlled system with respect to each deadline hit or miss event. Our representation is much more general (while hopefully remaining concise enough) and can represent the evolution in time of the performance of time-sensitive systems with possible time overloads. We provide the theoretical foundations for our model and also show an application to a simple system to give examples of the state representations and their use.
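
For reference, the weakly hard constraint alluded to above is usually written as follows: a task satisfies an (m,k) constraint if, in every window of k consecutive jobs, at least m meet their deadline,

    \forall i:\; \sum_{j=i}^{i+k-1} \mathit{hit}(j) \;\ge\; m,

where hit(j) is 1 if job j meets its deadline and 0 otherwise. The talk's observation is that such window counts say nothing about which misses occur, nor about the state of the controlled plant when they do.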

Tuesday, October 24, 2017 — by Jaime Arias from INRIA Grenoble Rhône-Alpes

A Tree-Based Operational Semantics for Interactive Multimedia Scores

Seminar organized by SPADES (Sophie Quinton)
At 11:00 in F107 at INRIA Montbonnot

Abstract:
Interactive Scores (IS) is a formalism for composing and performing interactive multimedia scores, with several applications in video games, live performance installations, and virtual museums. In this model, the composer defines the temporal organization of the score by asserting temporal relations (TRs) between temporal objects (TOs). At execution time, the performer may modify the start/stop times of the TOs by triggering interaction points, while the system guarantees that all the TRs are satisfied. Implementations of IS and formal models of their behavior have already been proposed, but these do not provide usable means to reason about their properties. In this talk we present ReactiveIS, a programming language that fully captures the temporal structure of IS during both composition and execution. To this end, we propose a semantics based on tree-like structures representing the execution state of the score at each point in time. The semantics captures the hierarchical aspects of IS and provides an intuitive representation of their execution. ReactiveIS is also equipped with a logical semantics based on linear logic, thus widening the reasoning techniques available for IS. Finally, ReactiveIS is general enough to capture the full behavior of IS, and it also provides declarative ways to increase the expressivity of IS with, for instance, conditional statements and loops.

Tuesday, October 10, 2017

Forum Méthodes Formelles: Autonomous Vehicles and Formal Methods

Seminar organized locally by CONVECS (Hubert Garavel)
08:45-17:30 in Grand Amphi at INRIA Montbonnot

Tuesday, October 3, 2017 — by Rob Davis from the University of York

Experiences in Start-up Companies:
Transferring Real-Time Systems Research into Commercial Products

CASERM Seminar organized by SPADES (Sophie Quinton)
At 11:00 in F107 at INRIA Montbonnot

Abstract:
Rob Davis is a Senior Research Fellow at the University of York, UK and an International Chair at Inria, Paris, France. He started his PhD in Real-Time Systems 25 years ago in 1992. Since then he has been involved in three start-up companies which have succeeded in transferring real-time systems research into commercial products. This talk tells the story of those companies, the work that was done, and where they are today. It concludes with a summary of the key success factors and roadblocks in transferring research into industry via a start-up company.

Tuesday, June 20, 2017 — by Bruno Bodin from the University of Edinburgh

Performance Evaluation using Dataflow Modeling

CASERM Seminar organized by SPADES (Alain Girault)
At 11:00 in F108 at INRIA Montbonnot

Abstract:
We propose a novel approach to heterogeneous embedded systems programmability using a task-graph-based framework called Diplomat, which exploits the potential of static dataflow modeling and analysis to deliver performance estimation and CPU/GPU mapping. An application has to be specified only once; the framework can then automatically propose good mappings.

Tuesday, May 16, 2017 — by Florian Greff from Thales Research & Technology and LORIA

Software-Defined Real-Time Mesh Networking: Protocol and Experimentation Method

CASERM Seminar organized by SPADES (Sophie Quinton)
At 11:00 in F107 at INRIA Montbonnot

Abstract:
Our research deals with the problem of network resource allocation for real-time communications in mesh networks. We are studying the combination of online flow admission control and pathfinding algorithms in an SDN-like controller. We propose a Software-Defined Real-time Networking (SDRN) protocol that allows a dynamic and incremental allocation of network resources in a mixed-criticality real-time mesh network context. We also propose a method to ensure the dependability of the network when transmission errors and node failures can happen. Finally, our research includes the study of new ways of experimenting on embedded networks, making use of both an in-silicon platform and a simulator. We designed an original framework to ease the driving of such experiments. I will first present the context of this project and the benefits of dynamic mesh networks for critical systems. Then I will present the SDRN protocol through three items: (1) the low-level mechanisms required for dynamic allocation; (2) the analysis of resource requests for admission control and pathfinding; (3) the fault tolerance mechanisms. Finally, I will give some insights about our experimentation architecture, ERICA.

Tuesday, April 11, 2017 — by Alessandro Biondi from Scuola Superiore Sant’Anna in Pisa

Lightweight Real-Time Synchronization on Symmetric and
Asymmetric Multiprocessors

CASERM Seminar organized by SPADES (Sophie Quinton)
At 11:00 in F107 at INRIA Montbonnot

Abstract:
Real-world applications share resources, such as data structures or I/O devices, that must be protected by locking protocols to ensure mutually-exclusive access. This is especially true in multiprocessor systems, where software parallelism increases the need for synchronization. As a consequence, designing efficient and predictable locking protocols and bounding their worst-case blocking delays are fundamental problems in the context of multiprocessor real-time systems. The talk covers four lightweight synchronization mechanisms, namely preemptive and non-preemptive lock-free synchronization, as well as preemptive and non-preemptive FIFO spin locks. A modern analysis technique for bounding the synchronization delays introduced by such protocols is presented, which overcomes several limitations of classical real-time analysis approaches. The synchronization protocols have then been compared in terms of schedulability in a large-scale empirical study considering both symmetric and asymmetric multiprocessors. While non-preemptive FIFO spin locks were found to generally perform best, lock-free synchronization mechanisms were observed to offer significant advantages on asymmetric platforms, hence also proving attractive for several emerging heterogeneous platforms.

Monday, March 27, 2017 — by Reinhard van Hanxleden from Kiel University

A Sequentially Constructive Circuit Semantics for Esterel

CASERM Seminar organized by SPADES (Alain Girault)
At 11:00 in F107 at INRIA Montbonnot

Abstract:
The synchronous language Esterel comes with a constructive semantics grounded in physics: an Esterel program is considered valid iff its corresponding circuit has a well-defined, delay-insensitive behavior. This guarantees that the program behavior is determinate and does not depend for example on choices of a run-time scheduler. However, this solid foundation entails a rather restricted regime for handling sequential accesses to shared data. Thus many programs are rejected as being “non-constructive”, even though they have a very natural interpretation. The recently proposed “sequentially constructive” model of computation (SC MoC) relaxes this restriction, by rejecting programs only if they really are potentially non-determinate. Sequentially Constructive Esterel (SCEst), based on the SC MoC, conservatively extends Esterel with the aim of easy adaptation by programmers used to imperative programming. However, with the SC MoC as proposed so far, this comes at the price of potentially requiring “speculation” about the program behavior. Thus, a program may be considered SC even though its equivalent circuit might not stabilize. The SC circuit semantics presented here overcomes this deficiency, by again grounding program semantics in physics. Based on this new “strictly SC” semantics, we present a source-to-source transformation that translates SCEst programs into equivalent Esterel programs that can then be synthesized into hardware or software using existing Esterel compilers. This is joint work with Alexander Schulz-Rosengarten, Steven Smyth (both Kiel University) and Michael Mendler (Bamberg University).

Tuesday, March 21, 2017 — by Dorin Maxim from LORIA

Probabilistic Analysis of Real-Time Systems

CASERM Seminar organized by SPADES (Gregor Goessler)
At 11:00 in F107 at INRIA Montbonnot

Abstract:
Embedded systems are highly prevalent in our daily lives, integrated in all types of transportation vehicles (e.g. cars, trains, planes), in medical devices (e.g. pacemakers), and in communication devices and internet technologies. Besides constraints like power consumption, security, and functional correctness, some systems are also characterized by timing constraints, i.e., the results of computations should be obtained on time. These latter systems are called critical real-time embedded systems and integrate complex architectures that evolve constantly to accommodate new functionalities required by the end users. These new architectures have a direct impact on the variability of the timing behavior of the real-time system, leading to significant over-provisioning if the design of the system is based only on worst-case reasoning. Probabilistic approaches propose solutions based on the probability of occurrence of the worst-case values in order to avoid over-provisioning while satisfying real-time constraints. The presentation will revolve around modeling and analysis techniques for probabilistic real-time systems and ways of decreasing the complexity of these analyses. Various parameters of the system can be modeled by probability distributions. This talk focuses on the (probabilistic) worst-case execution times of tasks and the (probabilistic) minimal inter-arrival times. Probabilistic analyses are applied to the system in order to obtain bounds on the probabilities of failure of individual tasks and of the system as a whole. If the failure probabilities are within acceptable ranges (given by design standards), then the system can be declared safe.
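
As a toy illustration of the probabilistic style of analysis (my own minimal example, not the specific analyses of the talk): with independent discrete execution-time distributions, the distribution of the combined demand is their convolution, from which a deadline-miss probability can be read off.

    # Convolve two discrete execution-time distributions and bound P(miss).
    from collections import defaultdict

    def convolve(d1, d2):
        out = defaultdict(float)
        for c1, p1 in d1.items():
            for c2, p2 in d2.items():
                out[c1 + c2] += p1 * p2
        return dict(out)

    task_a = {2: 0.9, 5: 0.1}   # runs 2 time units w.p. 0.9, 5 w.p. 0.1
    task_b = {3: 0.8, 4: 0.2}
    total = convolve(task_a, task_b)
    deadline = 8
    print("P(miss) =", sum(p for c, p in total.items() if c > deadline))  # ~0.02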

Tuesday, February 28, 2017 — by Hubert Garavel from INRIA Grenoble – Rhône-Alpes

Ten Different Ways of Defining Signed Integers Formally
followed by
Benchmarking Implementations of Conditional Term Rewrite Systems

Seminar organized by CONVECS
At 10:30 in F107 at INRIA Montbonnot

Abstract:
The standard mathematical definition of signed integers based on set theory is not well-adapted to the needs of computer science. For this reason, many formal computer languages and theorem provers have designed alternative definitions, which extend in different ways the Peano axioms from unsigned to signed integers. We review the approaches found in CADP, CASL, Coq, Isabelle/HOL, KIV, Maude, mCRL2, SMT-LIB, etc. to evaluate their merits according to objective criteria and we put forward an “optimal” definition of signed integers, which we fully formalize together with their arithmetic and relational operators.
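
To give a flavour of the design space (a hedged sketch of one textbook option, close to the Agda-style pos/negsuc encoding; the “optimal” definition put forward in the talk is not reproduced here): extending Peano naturals with a constructor denoting -(n+1) keeps zero unique and makes the successor function total.

    # Signed integers built on Peano naturals: Pos(n) = +n, NegSucc(n) = -(n+1).
    from dataclasses import dataclass

    @dataclass
    class Zero:                  # Peano naturals: Zero | Succ(nat)
        pass

    @dataclass
    class Succ:
        pred: object

    @dataclass
    class Pos:                   # Pos(n) encodes +n (n >= 0)
        nat: object

    @dataclass
    class NegSucc:               # NegSucc(n) encodes -(n+1), so 0 is unique
        nat: object

    def succ(i):
        """Successor on signed integers, total by construction."""
        if isinstance(i, Pos):
            return Pos(Succ(i.nat))
        if isinstance(i.nat, Zero):          # -1 + 1 = 0
            return Pos(Zero())
        return NegSucc(i.nat.pred)           # -(n+2) + 1 = -(n+1)

    print(succ(NegSucc(Zero())))             # Pos(nat=Zero()), i.e. 0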

Abstract:
Conditional Term Rewrite Systems have been adopted in many algebraic and functional languages, such as Clean, Haskell, LNT, LOTOS, Maude, mCRL2, MLton, OCaml, Opal, Rascal, Scala, Stratego/XT, SML/NJ, etc. We aim at comparing the performance (memory and CPU time) of these various implementations. To do so, we took as a basis the benchmarks developed for the three Rewrite Engine Contests (REC 2006, 2008, and 2010), which we significantly revised and extended, leading to a highly-automated benchmark platform. We present the current results of this study.

Tuesday, February 14, 2017 — by Ioana Cristescu from Harvard Medical School

Rigid Families for the Reversible Pi-Calculus

CASERM Seminar organized by SPADES (Gregor Goessler)
At 14:00 in C207 at INRIA Montbonnot

Abstract:
In a causal-consistent reversible system, a cause cannot backtrack before its effect. Adding a reversible layer on top of an operational semantics therefore requires an underlying causal semantics. There have been many proposals for a causal model for the pi-calculus; the difficulty arises from the treatment of privacy in the pi-calculus. I will present rigid families, a novel causal model for the pi-calculus, similar to event structures. Instead of having causality as a primitive notion, in rigid families causality and concurrency are derived from precedence, a partial order local to each run of a process. I will talk about the causality relations in the pi-calculus and about the previous causal models proposed for it, and I will argue that only rigid families can interpret the reversible pi-calculus.

Tuesday, February 14, 2017 — by Martin Vassor from EPFL

Stable Causal Log and Reversible Computation

CASERM Seminar organized by SPADES (Jean-Bernard Stefani)
At 11:00 in F107 at INRIA Montbonnot

Abstract:
This talk studies the relation between causal logging and reversible computation as two approaches to fault tolerance. We introduce formal definitions for both approaches, one based on Manetho for causal logging, the other being an extension of roll-pi for reversible computation, and we show that Manetho can be encoded in roll-pi. We also show that the reverse is possible only if some constraints are added to roll-pi. Finally, we conclude by comparing the complexities of both approaches.

Tuesday, January 31, 2017

Forum Méthodes Formelles: Formal Methods and Cybersecurity

Seminar organized locally by CONVECS (Hubert Garavel)
08:45-17:30 in Grand Amphi at INRIA Montbonnot

Tuesday, January 24, 2017 — by Bruno Gaujal from INRIA Grenoble – Rhône-Alpes

Neuro-Dynamic Programming

CASERM Seminar organized by SPADES (Sophie Quinton)
At 16:00 in F107 at INRIA Montbonnot

Abstract:
In this talk, I shall present how learning techniques (neural networks, simulation-based learning) can be used in a dynamic context. In particular, I will show how this approach can be used to solve Markov Decision Process problems approximately when the system suffers from the “curse of dimensionality” (the state and/or decision space becomes too large even for small systems) or when the statistics of the dynamic evolution are not known a priori. The talk is based on my understanding of the book “Neuro-Dynamic Programming” by D. Bertsekas and J. Tsitsiklis. Our hope is to use these tools in the CASERM project.
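
For orientation, here is the exact method that neuro-dynamic programming approximates: value iteration on a tiny Markov Decision Process (a minimal sketch; the point of the talk is what to do when the tables V, P and R below no longer fit in memory or are unknown).

    # Classic value iteration on a two-state MDP.
    def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
        V = {s: 0.0 for s in states}
        while True:
            delta = 0.0
            for s in states:
                best = max(sum(p * (R[s][a] + gamma * V[s2])
                               for s2, p in P[s][a].items())
                           for a in actions)
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < eps:
                return V

    states, actions = ["s0", "s1"], ["a", "b"]
    P = {"s0": {"a": {"s0": 1.0}, "b": {"s1": 1.0}},    # transition probabilities
         "s1": {"a": {"s0": 1.0}, "b": {"s1": 1.0}}}
    R = {"s0": {"a": 0.0, "b": 1.0},                    # immediate rewards
         "s1": {"a": 0.0, "b": 2.0}}
    print(value_iteration(states, actions, P, R))       # V(s1) ~ 20, V(s0) ~ 19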

Tuesday, January 24, 2017 — by Mahsa Shirmohammadi from University of Oxford

Minimal Probabilistic Automata Have to Make Irrational Choices

Seminar organized by CONVECS (Radu Mateescu)
09:30-11:00 in room 404, IMAG building (campus)

Abstract:
In this talk, we answer the question of whether, given a probabilistic automaton (PA) with rational transition probabilities, there always exists a minimal equivalent PA that also has rational transition probabilities. The PA and its minimal equivalent PA accept all finite words with equal probabilities. We approach this problem by exhibiting a connection with a longstanding open question concerning nonnegative matrix factorization (NMF). NMF is the problem of decomposing a given nonnegative n×m matrix M into a product of a nonnegative n×d matrix W and a nonnegative d×m matrix H. In 1993, Cohen and Rothblum posed the problem of whether a rational matrix M always has an NMF of minimal inner dimension d whose factors W and H are also rational. We answer this question negatively, by exhibiting a matrix for which W and H require irrational entries. As a corollary we obtain a negative answer to the question on rationality of minimal probabilistic automata. This talk is based on two papers in ICALP 2016 and SODA 2017.

Tuesday, January 10, 2017 — by Dmitry Burlyaev from Atrenta

Hardware Security, Formal and Statistical Approaches

CASERM Seminar organized by SPADES (Pascal Fradet)
At 10:00 in F107 at INRIA Montbonnot

Abstract:
First, the talk gives an overview of existing challenges in hardware security. To see the complete picture, we go through popular industrial solutions and their weak points, e.g. ARM TrustZone and data encryption. Next, we share our vision of how security threats can be addressed through a combination of formal and statistical methods: formal verification of non-interference properties represents one side of this approach, and side-channel analyses the other. We introduce a way to take into account fault and power side-channel attacks that are used to undermine cybersecurity. To reduce the complexity of the solution, a structural analysis of the circuit should be used to figure out the most probable locations where information leakage and integrity properties are violated. Hardware security has multiple facets and, as a result, its solution is a mix of disciplines from information theory to formal methods and machine learning. A broad audience is welcome to share their ideas in the subsequent discussion session.

Tuesday, December 13, 2016 — by Jean-Bernard Stefani from INRIA Grenoble – Rhône-Alpes

Location Graphs: Encapsulation and Sharing in Dynamic Software Structures

CASERM Seminar organized by SPADES
At 11:00 in F107 at INRIA Montbonnot

Abstract:
This talk introduces location graphs, a new computational model for dynamic software architectures that supports sharing and encapsulation. The location graph model constitutes a process calculus framework that draws on ideas developed in the last two decades around process calculi for distributed systems and dynamic services, as well as more informal ideas developed over the same period of time in software architecture and component-based software engineering. We present some results on location graph refinement and equivalence and discuss the relation of the location graph model with other formalisms.

Wednesday, December 07, 2016 — by Oleg Sokolsky from University of Pennsylvania

Platform-Specific Code Generation from Platform-Independent Timed Models

CASERM Seminar organized by SPADES (Gregor Goessler)
At 15:30 in F107 at INRIA Montbonnot

Abstract:
Model-based implementation has emerged as an effective approach to systematically develop embedded software for real-time systems. Functional and timing behavior of the software is modeled using modeling languages with formal semantics. Formal verification techniques can be used to demonstrate conformance of the model to the timing requirements for the system. Code generation then automatically produces source code from the verified model. The goal of this process is to guarantee that the final implemented system, running on an embedded platform, also conforms to the timing requirements. Experience shows, however, that implementations may still deviate from their models, which may undermine formal guarantees obtained in the course of model-based development. In this talk, we turn to the well-known four-variable model of system execution introduced by Parnas. The four-variable model makes a clear distinction between the external boundary of the system and the internal boundary of the software. Timing requirements are typically verified on the external boundary, while generated code operates at the internal boundary of the system. This distinction can lead to a semantic gap between the verified model and the generated code. We explore two complementary approaches to account for the distinction between the two boundaries. One approach composes the platform-independent model with a platform execution model for verification, but applies code generation to the platform-independent model only. The other approach uses integer linear programming to calculate a transformation of the timing constants in the platform-independent model that keeps the effects of platform delays on the occurrence of observable events in the generated code as small as possible.

Wednesday, December 07, 2016 — by Yoann Geoffroy from INRIA Grenoble – Rhône-Alpes

A General Trace-based Causality Framework for Component-based Systems

PhD thesis
At 10:30 in Grand Amphi at INRIA Montbonnot

Abstract:
A usual solution for creating complex systems is to resort to a component-based architecture. Though it makes the design easier, it also increases the complexity of pinpointing which components are responsible for a system failure. Two fields of computer science have goals close to that of responsibility assignment. Diagnosis aims at determining the components of the system which, when assumed to be functioning abnormally, explain the system failure. In a network context, fault localisation can be defined as the process of deducing the exact source of a failure from a set of observed errors. In a program context, it can be defined as the task, in software debugging, of identifying the set of statements that cause the program to fail. Though close to assigning responsibilities, diagnosis and fault localisation do not answer the question “who is responsible for the system failure?”. A previous work provides a trace-based approach that answers this question with causality analysis. The idea is to use a counterfactual approach (“what would have happened, had this component not been faulty?”) to assess whether or not a set of components is responsible for a system failure. Given a failing system trace, the counterfactuals are built by first removing the faults of the suspected components from the faulty trace, and then extending the resulting pruned trace using the expected behaviour of the components. This approach differs from the main trend for causality in computer science, which generally relies on the framework introduced by Halpern and Pearl. My PhD thesis expands this framework in two ways. The first one is to extend the causality analysis framework from a black-box component setting (components for which we know the expected behaviour, usually the ones considered in diagnosis) to a mixed framework with both black-box and white-box components (components for which we know the actual behaviour, usually the ones considered in fault localisation). Using a game, one is able to assess responsibility for systems composed of both black-box and white-box components. This approach is a generalisation of the causality analysis approach for black-box components, and works similarly to fault localisation techniques for white-box components. The approach relies on techniques that are close to the ones used in controller synthesis, and can thus generate fixes for the observed bug.

The second main axis is to study the impact of the amount of accessible information on the construction of the counterfactuals. Having more information, like the memory state of the components or a fault model for the components, can be used to build more refined counterfactuals. Those counterfactuals will be closer to what would have happened if the components had been fixed, thus yielding a more accurate responsibility assignment. The other end of the spectrum is what happens with less information (missing variables in the trace, or missing portions of the trace). Some general results are given, as well as a way of performing causality analysis on a partial trace that yields, under certain assumptions, the same results as the analysis performed on the full trace.


Tuesday, November 29, 2016 — by Gilles Muller from LIP6 in Paris

Safe Multicore Scheduling in a Linux Cluster Environment

CASERM Seminar organized by SPADES (Jean-Bernard Stefani)
At 11:00 in F107 at INRIA Montbonnot

Abstract:
Modern clusters rely on servers containing dozens of cores that have high infrastructure cost and energy consumption. Therefore, it is economically essential to exploit their full potential at both the application and the system level. While today’s clusters typically rely on Linux, recent research shows that the Linux scheduler suffers from performance bugs that lead to core under-usage. The consequences are wasted energy, bad infrastructure usage, and lower service response time. The fundamental source of such performance bugs is that the Linux scheduler, being monolithic, has become too complex. In this talk, we present ongoing work on the Ipanema project, which proposes to switch to a set of simple schedulers, each tailored to a specific application. Our vision raises scientific challenges in terms of easily developing schedulers and proving them safe. The key to our approach is the design of a domain-specific language for multicore kernel scheduling policy development, together with associated verification tools and a Linux run-time system.

Monday, September 26, 2016 — by Eugene Yip from University of Bamberg

The ForeC Synchronous Deterministic Parallel Programming Language for Multicores

CTRC Seminar organized by SPADES (Alain Girault)
At 11:00 in F107 at INRIA Montbonnot

Abstract:
Cyber-physical systems (CPSs) are embedded systems that are tightly integrated with their physical environment. The correctness of a CPS depends on the output of its computations and on the timeliness of completing the computations. This talk will describe the ForeC language that we have developed for the deterministic parallel programming of CPS applications on multi-core execution platforms. ForeC’s synchronous semantics is designed to greatly simplify the understanding and debugging of parallel programs. ForeC allows programmers to express many forms of parallel patterns while ensuring that programs are amenable to static timing analysis. One of ForeC’s main innovations is its shared variable semantics, which provides thread isolation and deterministic thread communication. Through benchmarking, we demonstrate that ForeC can achieve better parallel performance than Esterel, a widely used synchronous language for concurrent safety-critical systems, and OpenMP, a popular desktop solution for parallel programming. We demonstrate that the worst-case execution time of ForeC programs can be estimated precisely.

Tuesday, September 6, 2016 — by Étienne André from Laboratoire d’Informatique de Paris Nord

Computing the Best and Worst Case Execution Time
of a Real-Time System under Uncertainty

CTRC Seminar organized by SPADES (Sophie Quinton)
At 14:00 in F107 at INRIA Montbonnot

Abstract:
Critical real-time systems must be formally verified to prevent any undesired behavior. When some timing constants are known only with limited precision, traditional timed model-checking techniques no longer apply. We use here the formalism of parametric timed automata. Despite notoriously hard problems (in fact, most interesting decision problems are undecidable), parametric timed automata allow us to model distributed real-time systems, and to derive best- and worst-case execution times for a real-time system designed by Thales and subject to uncertainty in some task periods. We also briefly report on recent techniques to speed up the verification time using heuristics based on integer hulls, and on distributed parametric model checking.

Tuesday, June 12, 2016 — by Adrien Guatto from ENS Paris

A Synchronous Functional Language with Integer Clocks

CTRC Seminar organized by SPADES (Alain Girault)
At 14:00 in F107 at INRIA Montbonnot

Abstract:
Synchronous functional languages in the vein of Lustre and Lucid Synchrone are domain-specific languages tailored to critical embedded systems. Compilers for such languages turn high-level stream functions into finite state machines implemented either in hardware or software. This compilation process relies essentially on a type-like system called the clock calculus, which characterizes the growth of streams along execution. In this talk I will present my work on a new, extended clock calculus for synchronous functional languages. This system has several benefits: it accepts more programs; it leads to better code generation; and it enjoys a simpler metatheory than previous clock calculi. These improvements hinge on two main ideas: first, streams may grow by several elements at once; second, one may allow a block of code to run faster than its surrounding context.
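
A very loose intuition for the first idea, far removed from the actual clock calculus (my own illustration): model each synchronous stream as a Python generator that may emit several elements per tick.

    # Streams that "grow by several elements at once": one list per tick.
    def nat():
        n = 0
        while True:
            yield [n, n + 1]        # two elements per tick
            n += 2

    def sums(xs):
        acc = 0
        for chunk in xs:
            out = []
            for x in chunk:
                acc += x
                out.append(acc)     # running sum, same production rate as input
            yield out

    s = sums(nat())
    print([next(s) for _ in range(3)])   # [[0, 1], [3, 6], [10, 15]]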

Thursday, June 9, 2016 — by Louis-Noël Pouchet from Ohio State University

A Language and Compiler
for Task-based Programming on Distributed-Memory Clusters

Seminar organized by CORSE
At 14:30 at Antenne Inria Giant, Minatec — 1st floor, large meeting room

Abstract:
Compute-intensive applications running on clusters of shared-memory computers are typically implemented using OpenMP+MPI. Such applications are difficult to program, debug, and maintain, and performance portability is often limited. Programming models such as Concurrent Collections (CnC) have a huge potential to improve productivity, but adoption will be driven by the ability to demonstrate that high-performance implementations can be attained with less effort than by using OpenMP+MPI. In the quest for a system that would offer both productivity and performance, we claim there is a need for a complete compilation framework, targeting specific task-based programming models like CnC, that would
  1. significantly ease the writing of runtime-specific mapping strategies, using a high-level runtime-independent specification of a parallel algorithm execution strategy;
  2. automate static analysis and transformations of the dataflow graph to adapt the granularity of tasks to the target machine;
  3. enable the systematic generation of numerous alternate and semantically correct distribution, communication and scheduling strategies, allowing auto-tuning on the target platform.

Our objective is to preserve the ease of programming and high-level description of task-based programs while dramatically facilitating the expression, analysis, implementation, and generation of distribution strategies.

In this talk we present a new macro-dataflow programming environment for distributed-memory clusters, based on the Intel Concurrent Collections runtime. Our language extensions let the user define virtual topologies, task mappings, task-centric data placement, selection of pull/push communication mode, task and communication scheduling, etc. We introduce a compiler that automatically generates Intel CnC C++ programs, including tuner specifications and key automatic optimizations such as task coarsening and coalescing, based on the polyhedral model. We validate our approach on a selection of linear algebra computations, demonstrating both productivity and performance.

Biography: Dr. Louis-Noel Pouchet is an Assistant Professor (Research track) at the Ohio State University. He works on pattern-specific languages and compilers for scientific computing, and has designed numerous approaches using optimizing compilation to effectively map applications to CPUs, FPGAs and SoCs. His work spans a variety of domains, including compiler optimization, hardware synthesis, machine learning, programming languages, and distributed computing. His research is currently funded by the National Science Foundation, the Department of Energy, and Intel. Previously he was a Visiting Assistant Professor (2012-2014) at the University of California Los Angeles, where he was a member of the NSF Center for Domain-Specific Computing, working on both software and hardware customization. He is also the author of the PolyOpt and PoCC compilers, and of the PolyBench/C benchmarking suite.


Tuesday, June 7, 2016 — by Fabrice Rastello from Inria Grenoble – Rhône-Alpes

Towards Automated User Feedback
on Data Movement Bottlenecks in Programs

CTRC Seminar organized by SPADES (Sophie Quinton)
At 14:00 in F107 at INRIA Montbonnot

Abstract:
Data movement is a significant factor affecting both execution time and energy expenditure in current and emerging computers. Hence, characterizing the inherent data movement requirements of algorithms is important. Existing performance tools primarily provide feedback on the data movement costs of a particular implementation schedule for an algorithm; however, it is of interest to determine how much improvement may be feasible with alternate valid schedules for the algorithm. During the last few years we have put effort into developing tools that automatically estimate both lower and upper bounds on the I/O complexity of arbitrary programs, through different complementary approaches (static analysis, trace analysis, …). I will describe these approaches and present several use cases to illustrate the benefits of the methodology.
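
For a sense of what such I/O bounds look like, recall the classical Hong–Kung result (a textbook example, not necessarily one produced by the tools above): multiplying two n×n matrices with a fast memory of size S requires data movement

    Q(n) = \Omega\!\left( \frac{n^3}{\sqrt{S}} \right),

and blocked (tiled) schedules attain this bound up to a constant factor. The gap between the cost of a given schedule and such a lower bound is precisely what tools of this kind aim to expose.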

Tuesday, May 3, 2016 — by Francesco Zappa Nardelli from École normale supérieure in Paris

Shared Memory and Concurrency: a Thorny Relationship

CTRC Seminar organized by SPADES (Sophie Quinton)
At 11:00 in F108 at INRIA Montbonnot

Abstract:
Compiler optimisations introduce unexpected behaviours in shared memory concurrent programs; programming languages thus define memory models to specify precisely which behaviours can be observed and, indirectly, which optimisations a compiler can apply. Taking the C standard as a running example, this talk will describe our failure to specify a reasonable memory model for general purpose programming languages, and how this failure paved the way for exciting research on the validation and implementation of compilers.

Tuesday, March 8, 2016 — by Lionel Rieg from Collège de France in Paris

Toward a Coq-verified compiler from Esterel to circuits

CTRC Seminar organized by SPADES (Alain Girault)
At 14:00 in C208 at INRIA Montbonnot

Abstract:
Esterel is a control-oriented synchronous programming language, with an imperative flavor. In this talk, I will discuss some steps towards the full formalization of the compilation process from Esterel to digital circuits. The current research aims at a formal correspondence between the semantics of the source program and the semantics of the compiled circuit. Like CompCert (a formally proven C compiler), the idea is to prove that each transformation preserves or refines the semantics of the source program. Unlike CompCert, the target is not an assembly language but circuits, which leads to very different problems: no register allocation, but schizophrenia of signals. Time permitting, I will sketch the solution we plan to use to tackle schizophrenia.

Tuesday, March 1, 2016 — by Sudipta Chattopadhyay from Saarland University

On Testing Embedded Software for its Performance and Security

CTRC Seminar organized by SPADES (Alain Girault)
At 11:00 in C208 at INRIA Montbonnot

Abstract:
In this talk, I shall outline the analysis and testing of two related, but seemingly orthogonal properties of embedded software: performance and security. In particular, I shall first discuss our work on testing the memory performance of embedded software. We show that, through a series of static analyses and assertion generations, memory performance testing can be reduced (without loss) to an equivalent functionality testing problem. In the second part of the talk, I shall discuss our ongoing work on detecting side-channel vulnerabilities of embedded software. Specifically, I discuss how a relation between the secret information (e.g. the secret key of an encrypted message) and memory performance can be established via symbolic analysis. Once such a relation is established, it can be used to discover potential information leaks and, moreover, the set of possible secrets leading to an observation (e.g. an execution time or a sequence of cache misses). Our framework is generic in the sense that it can be tuned to validate software against a wide range of possible side-channel attacks. Besides, the framework can be used to detect vulnerabilities in software even when its source code is unavailable. I shall conclude the talk by showing our preliminary results and discussing the use of the framework to assess other crucial properties of embedded software, such as robustness.

Tuesday, February 2, 2016 — by Youcheng Sun from Scuola Superiore Sant’Anna in Pisa

Multiprocessor Global Schedulability Analysis in Continuous Time (slides)

CTRC Seminar organized by SPADES (Sophie Quinton)
At 10:00 in F107 at INRIA Montbonnot

Abstract:
The schedulability analysis of real-time tasks upon a multiprocessor platform and subject to a global scheduling policy is still an open problem. This talk will present an exact solution that models the multiprocessor global scheduling of fixed-priority tasks in Linear Hybrid Automata. The schedulability analysis then becomes a reachability analysis for states representing the deadline miss of a task. To mitigate the state-space explosion problem, a pre-order relation is utilised so that unnecessary states can be safely eliminated without affecting the final schedulability result. For multiprocessor global fixed-priority scheduling, the schedulability analysis is shown to be decidable. The presentation will also discuss the choice between continuous and discrete time domains for multiprocessor global schedulability analysis, and its influence on both model-checking-based and analytical solutions.

Biography: Youcheng Sun recently received his PhD degree from Scuola Superiore Sant’Anna, Pisa. His work focuses on integrating analytical schedulability analysis with model-based formalisms for real-time systems.

Thursday, January 14, 2016 — by Björn Brandenburg from Max Planck Institute for Software Systems

Bounds on Locking Delays in Multicore Real-Time Systems

CTRC Seminar organized by SPADES (Sophie Quinton)
At 10:30 in C207 at INRIA Montbonnot

Abstract:
Locking delays arise naturally when tasks require mutually exclusive access to shared resources such as I/O devices, communication buffers, or OS services. In hard real-time systems, such delays must be statically bounded and accounted for during schedulability analysis (i.e., when establishing a system’s temporal correctness). While the use of locks in uniprocessor real-time systems is well understood, the field of multiprocessor real-time synchronization is still active and has received much attention in recent years. In this talk, I will first introduce key concepts and the main analysis problem, then highlight some foundational results concerning the question of asymptotically optimal blocking, and finally discuss the analysis of spin locks in AUTOSAR and similar systems.

Tuesday, January 12, 2016 — by Hubert Garavel from Inria Grenoble – Rhône-Alpes

Process Calculi: Towards the Great Unification

Seminar organized by CONVECS
At 14:00 in F107 at INRIA Montbonnot

Abstract:
Asynchronous concurrency is omnipresent, from multi-core processors to high-performance computers and grids. Process calculi, as proposed by Tony Hoare and Robin Milner, are probably the most concise and expressive formalisms for specifying concurrent systems. However, they have a steep learning curve and their landscape is fragmented into multiple, incompatible calculi. We show that, to a large extent, these issues are due to suboptimal definitions of sequential composition in process calculi. We review the various ways sequential composition can be defined and propose an alternative approach based on a symmetric binary operator, write-many variables, and static semantics constraints. This approach, which generalizes traditional process calculi, has been used to define the new LNT language implemented in the CADP toolbox. Feedback gained from university lectures and real-life case studies shows a high acceptance by computer-science students and industry engineers.

Tuesday, December 15, 2015 — by Benoît Caillaud from Inria Rennes – Bretagne Atlantique

Structural Analysis of Multi-Mode DAE Systems

CTRC Seminar organized by SPADES (Alain Girault)
At 14:00 in F107 at INRIA Montbonnot

Abstract:
Multi-mode DAE systems are the underlying mathematical model supporting physical modeling languages such as Modelica. They consist of equations of the form “when g do e”, where “g” is a predicate and “e” a differential algebraic equation. Reusing ideas borrowed from the constructive semantics of synchronous languages, we propose an abstract semantics of multi-mode DAE systems. This semantics is used to determine the structural differentiation index of a system and then to compute a static scheduling of its equations, for all possible modes of the system.

Friday, November 27, 2015 — by Giovanni Bernardi from the Madrid Institute of Advanced Studies

Using Higher-Order Contracts to Model Session Types

CTRC Seminar organized by SPADES (Jean-Bernard Stefani)
At 10:00 in C208 at INRIA Montbonnot

Abstract:
Session types are used to describe and structure interactions between independent processes in distributed systems. Higher-order types are needed to properly structure delegation of responsibility between processes.
In this talk we show that a sublanguage of higher-order web-service contracts provides a fully-abstract model of recursive higher-order session types. The model is set-theoretic, in that the meaning of a contract is the set of contracts with which it complies. The proof of full-abstraction depends on the novel notion of dual of a contract. We show that the new notion lets us type more well-formed programs than the commonly used type duality does, thereby improving existing type systems.
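
For readers unfamiliar with the standard notion being refined here, first-order session type duality simply swaps sends (!) and receives (?):

    \overline{!T.S} = \,?T.\overline{S}, \qquad
    \overline{?T.S} = \,!T.\overline{S}, \qquad
    \overline{\mathtt{end}} = \mathtt{end}.

The talk's contribution is a new notion of dual, derived from contracts, that types more well-formed programs than this commonly used definition in the recursive higher-order case.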

Thursday, November 26, 2015 — by Dmitry Burlyaev from Inria Grenoble – Rhône-Alpes

Design, Optimization, and Formal Verification
of Circuit Fault-Tolerance Techniques

PhD thesis
At 11:00 in F107 at INRIA Montbonnot

Abstract:
Technology shrinking and voltage scaling increase the risk of fault occurrences in digital circuits. To address this challenge, engineers use fault-tolerance techniques to mask or, at least, to detect faults. These techniques are especially needed in safety-critical domains (e.g., aerospace, medical, nuclear), where ensuring circuit functionality and fault-tolerance is crucial. However, the verification of functional and fault-tolerance properties is a complex problem that cannot be solved with simulation-based methodologies, due to the need to check a huge number of executions and fault occurrence scenarios. The optimization of the overheads imposed by fault-tolerance techniques also requires a proof that the circuit keeps its fault-tolerance properties after the optimization.
In this work, we propose a verification-based optimization of existing fault-tolerance techniques as well as the design of new techniques and their formal verification using theorem proving. We first investigate how some majority voters can be removed from Triple-Modular Redundant (TMR) circuits without violating their fault-tolerance properties. The developed methodology clarifies how to take into account circuit native error-masking capabilities that may exist due to the structure of the combinational part or due to the way the circuit is used and communicates with the surrounding device.
Second, we propose a family of time-redundant fault-tolerance techniques as automatic circuit transformations. They require less hardware resources than TMR alternatives and could be easily integrated in EDA tools. The transformations are based on the novel idea of dynamic time redundancy that allows the redundancy level to be changed “on-the-fly” without interrupting the computation. Therefore, time-redundancy can be used only in critical situations (e.g., above Earth poles where the radiation level is increased), during the processing of crucial data (e.g., the encryption of selected data), or during critical processes (e.g., a satellite computer reboot).
Third, merging dynamic time redundancy with a micro-checkpointing mechanism, we have created a double-time redundancy transformation capable of masking transient faults. Our technique makes the recovery procedure transparent and the circuit input/output behavior remains unchanged even under faults. Due to the complexity of that method and the need to provide full assurance of its fault-tolerance capabilities, we have formally certified the technique using the Coq proof assistant. The developed proof methodology can be applied to certify other fault-tolerance techniques implemented through circuit transformations at the netlist level.

Tuesday, November 10, 2015 — by Bertrand Jeannet from Argosim

Debugging Embedded Systems Requirements with Stimulus

CTRC Seminar organized by SPADES (Gregor Gössler)
At 10:30 in F107 at INRIA Montbonnot

Abstract:
In a typical software project, 40% to 60% of design bugs are caused by faulty requirements that generate costly iterations of the development process. The major reason for this situation is that no practical tool exists for debugging requirements while drafting specifications.
STIMULUS provides an innovative solution for the early debugging and validation of functional real-time systems requirements. It relies on a high-level synchronous, constraint-based language to express requirements, and a solver-driven simulation engine to generate and analyze execution traces that satisfy requirements. Visualizing what systems will do enables system architects to discover ambiguous, incorrect, missing or conflicting requirements before the design begins.
We first give an overview of the history and of the technical and methodological background of STIMULUS, built on research results from VERIMAG and ENS Paris. We then demonstrate its use to debug the specification of an automatic headlights controller coming from the automotive industry. Finally, we discuss a few original language concepts that were motivated by practical use.

Tuesday, October 27, 2015 — by Victor Magron from Verimag

NLVerify: Verification of Polynomial Inequalities
using Formal Floating-point Arithmetic

CTRC Seminar organized by SPADES (Sophie Quinton)
At 14:00 in F107 at INRIA Montbonnot

Abstract:
I present some improvements in the use of sums of squares for bounding multivariate real polynomial expressions in Coq.
A way to show that a polynomial is nonnegative over a given compact domain is to show that it can be rewritten as a sum of squares of polynomials (SOS). It is known that, in order to find such an SOS representation, one can use semi-definite programming (SDP) solvers.
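
A one-line instance of the idea (illustrative, not taken from the talk): to certify that p(x, y) = x^2 - 2xy + 2y^2 is nonnegative on the whole plane, it suffices to check the polynomial identity

    x^2 - 2xy + 2y^2 \;=\; (x - y)^2 + y^2,

and replaying such an identity check is exactly the computation that must be carried out inside Coq.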

The proof that the original polynomial P is nonnegative then essentially boils down to checking that P is equal to the SOS representation produced by the solver. From the formal point of view, this SOS representation can be built by any unsafe tools, but the equality has to be checked in Coq.

The present framework addresses two issues:
– the first is that checking polynomial equalities by computation can take a long time in Coq;
– the second is that, because SDP solvers use finite-precision floating-point numbers, the SOS certificate is only correct up to a certain numerical error.

We do this by using, again, finite-precision floating-point numbers, this time inside Coq. More precisely, we consider polynomials in Coq whose coefficients are intervals of floating-point numbers. This makes it possible to handle both issues in a reasonably simple way.

The verification tactic can also be extended to semi-algebraic and transcendental inequalities, using the techniques described in my PhD.

This work is done in collaboration with Benjamin Werner (LIX) and Tillmann Weisser (LAAS). It relies on the Coq library of floating point intervals constructed by Guillaume Melquiond as well as the libraries involving Horner forms for normalized polynomials in the ring tactic by Assia Mahboubi.


Tuesday, October 13, 2015 — by Mahmood Hikmet from the University of Auckland

Quantitative Fairness and its Limits (slides)

CTRC Seminar organized by SPADES (Alain Girault)
At 10:30 in F107 at INRIA Montbonnot

Abstract:
Fairness is traditionally a qualitative measure, reserved only to ensure liveness or as a gauge of personal experience. However, using the Gini Coefficient (a quantitative fairness measure from the field of economics), it is possible to give a number to fairness. This number can be naturally used for evaluation (as is done with economics), but with some mathematics, we can begin to unlock guarantees, correlations, and predictions. We discover a limit called the Worst-Case Fairness for any bounded population, and explore the concept of Effective Fairness (using meta-models to predict fairness). These tools will allow fairness to be used in an iterative design process. This talk does not require or assume a prior understanding of fairness.
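
As a concrete illustration, here is the standard Gini coefficient (the textbook economics definition, not necessarily Hikmet's exact formulation): the mean absolute difference over all pairs of allocations, normalized by twice the mean.

    # Sketch of the Gini coefficient: 0 = perfectly fair, toward 1 = maximally unfair.
    def gini(xs):
        n = len(xs)
        mean = sum(xs) / n
        diff_sum = sum(abs(a - b) for a in xs for b in xs)
        return diff_sum / (2 * n * n * mean)

    print(gini([1, 1, 1, 1]))  # 0.0: every party receives the same share
    print(gini([0, 0, 0, 4]))  # 0.75: one party receives everything (1 - 1/n for n = 4)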

Thursday, July 16 and Friday, July 17, 2015

7th Conference on Reversible Computation

Conference organized at Inria Grenoble – Rhône-Alpes

Thursday, June 25, 2015

Presentations by the first-year PhD students of TYREX

At 10:00 in F107 at INRIA Montbonnot
  1. Louis Jachiet: Reasoning for the logical optimization of NoSQL queries

    Abstract: My thesis focuses on the use of logical reasoning for the study and optimization of NoSQL query languages. In this talk, I will present the work carried out during this first year, namely:

    • a program translating a logic into tree automata, together with an article (currently under submission) on these results;
    • an ongoing study on compiling SPARQL queries, optimized for execution on a large number of machines.
  2. Abdullah Abbas: Web query rewriting for heterogeneous data sources
    Abstract: Data on the web are managed by different authorities and people, which raises a problem when data integration is needed across multiple sources. Query planning is the procedure that allows sound, complete, and efficient execution of a query over multiple data sources. Query planning usually makes use of data summaries collected about each data source, which are utilized for query rewriting with the goal of executing the query in a federated system (multiple data sources). We try to formalize this procedure. We propose a method for storing data summaries and using them to do efficient query planning by taking advantage of hash functions. We describe the potential advantages of these summaries when used in a federated system, and we position the proposal with respect to the state of the art in the domain. Our work is currently based on semantic web standards such as RDF and SPARQL 1.1.

Tuesday, June 23, 2015 — by Martina Maggio from Lund University

Discrete-Time Control Theory for Software Engineering (slides)

CTRC Seminar organized by SPADES (Sophie Quinton)
At 10:00 in F107 at INRIA Montbonnot

Abstract:
Many software applications may benefit from the introduction of control theory at different stages of the development process. The identification of requirements often translates directly into the definition of control goals: when these requirements are quantifiable and measurable, control theory offers a variety of techniques to complement the software design process with a feedback-loop design process, which empowers the original software with self-adaptive capabilities and allows it to fulfill the quantifiable requirements. The feedback-loop design process consists in defining the actuators (what can be changed during the software's life to affect its behavior) and the measurements of the goals. Control theory then makes it possible to define models that describe the relationship between the values of the actuators and the measured values of the software behavior. These models are used to design decision loops and to guarantee properties of the closed-loop system. In this talk I will briefly describe examples where model-based control allowed us to guarantee the satisfaction of specific properties, such as synchronization between different nodes in a wireless sensor network and upper bounds on response times in a cloud application.
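
As a toy version of such a feedback loop (the plant model and all constants are made-up assumptions, not from the talk): an integral controller adjusts the number of allocated cores, the actuator, to drive a measured response time to its goal.

    # Minimal sketch of discrete-time integral control of a software system.
    goal = 2.0    # target response time (s)
    cores = 1.0   # actuator, kept continuous for simplicity
    k_i = 0.5     # integral gain

    def measured_response_time(cores):
        return 8.0 / cores  # assumed plant model: rt inversely proportional to cores

    for step in range(5):
        error = measured_response_time(cores) - goal
        cores = max(1.0, cores + k_i * error)  # integrate the error
        print(f"step {step}: cores={cores:.2f}, rt={measured_response_time(cores):.2f}")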

Tuesday, June 9, 2015 — by Chris Myers from University of Utah

An Integrated Verification Architecture

Seminar organized by CONVECS
At 14:00 in F107 at INRIA Montbonnot

Abstract:
Research in formal verification is hampered by the lack of an integrated verification architecture. Each verification tool in development typically uses its own language for models and properties. This means that when designers want to verify their system using multiple verification tools, a time-consuming and error-prone recoding of the model is often required. Furthermore, a system includes digital and analog hardware, software, and a physical operating environment. Each part of the system may be better encoded with a different modeling formalism and analyzed using a different methodology. For example, digital hardware may require accurate timing, analog circuits require accurate continuous dynamics, software needs efficient data representation, and the operating environment may be stochastic in nature. An integrated verification architecture would enable each part of the system to be represented and analyzed using the appropriate framework. This talk will describe how we can leverage our experience in the development of standards for systems and synthetic biology to develop an integrated verification architecture that includes data exchange standards, integrated tool environments, suites of example models, and repositories to store them.

Tuesday, June 2 to Friday, June 5, 2015

DisCoTec 2015
International Federated Conference on Distributed Computing Techniques

Conference organized at Inria Grenoble – Rhône-Alpes

Tuesday, May 26, 2015 — by Alix Munier-Kordon from LIP6

Evaluating the Maximum Throughput of an Application Represented as an SDF

CTRC Seminar organized by SPADES (Alain Girault)
At 11:00 in C207 at INRIA Montbonnot

Abstract:
In this talk we consider an application represented as an SDF (Synchronous DataFlow) graph: a graph whose vertices represent processes, or tasks, executed a large number of times at different rates, and whose arcs represent the FIFO memories through which processes exchange data. This formalism was introduced by Lee and Messerschmitt in 1987 and is frequently used to model communicating systems, with or without timing constraints. We first present the model, some fundamental mathematical properties, and concrete applications in various domains (composition of electronic systems, modeling of regular applications for execution on many-core architectures).

Evaluating the maximum throughput of an SDF is a problem for which no polynomial-time algorithm is currently known. As a consequence, the optimization problems associated with an SDF cannot be solved, even approximately, at industrial scale (their solutions are thus hard to evaluate). The goal of this talk is to present a set of results and research directions for computing relevant lower bounds on the throughput, fast to compute and close to the optimum.

If the tasks are required to be periodic, determining the maximum throughput becomes a polynomial problem. This first lower bound has made it possible to solve several throughput-constrained optimization problems efficiently (minimization of communication resources, mapping of an application onto a many-core architecture taking communication resources into account). However, this value can be very far from the maximum throughput. One solution is to associate with each task i a periodicity factor K_i, if possible not too large, and to compute the maximum throughput of a K=(K_1, …, K_n)-periodic schedule. We then establish a dominance property and a partial order on the values of the vectors K. These two results allow us to compute a value K* of the vector K ensuring a good trade-off between computation time and the quality of the throughput evaluation. We will close the talk with some research directions.

This work will be presented by Alix Munier-Kordon; it gathers the work of several generations of PhD students (Olivier Marchetti, Mohamed Benazouz, Bruno Bodin and Youen Lesparre).
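
A small computational companion (a generic illustration, not the algorithms of the talk): the first step of any SDF throughput analysis is to solve the balance equations prod·r(i) = cons·r(j) on every arc, yielding the repetition vector that brings all FIFO buffers back to their initial state after one iteration.

    # Sketch: smallest repetition vector of a (connected, consistent) SDF graph.
    from fractions import Fraction
    from math import lcm

    def repetition_vector(num_actors, arcs):
        # arcs: (src, dst, prod, cons) with balance prod * r[src] == cons * r[dst]
        r = [None] * num_actors
        r[0] = Fraction(1)
        changed = True
        while changed:  # propagate rates along arcs until all actors are rated
            changed = False
            for src, dst, prod, cons in arcs:
                if r[src] is not None and r[dst] is None:
                    r[dst] = r[src] * prod / cons
                    changed = True
                elif r[dst] is not None and r[src] is None:
                    r[src] = r[dst] * cons / prod
                    changed = True
        scale = lcm(*(f.denominator for f in r))  # smallest integer solution
        return [int(f * scale) for f in r]

    # A produces 2 per firing, B consumes 3 and produces 1, C consumes 2:
    print(repetition_vector(3, [(0, 1, 2, 3), (1, 2, 1, 2)]))  # [3, 2, 1]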

Tuesday, May 12, 2015 — by Marc Boyer from Onera

Network Calculus, from Theory to Avionics Applications (slides)

CTRC Seminar organized by SPADES (Sophie Quinton)
At 11:00 in C208 at INRIA Montbonnot

Abstract:
This talk will present network calculus, a theory used to compute bounds on traversal times in networks; its most famous success is certainly the certification of the core network of the Airbus A380/A350. In the allotted time, I will try

  1. to present the basics of network calculus: how the real world is modeled and how deterministic bounds are computed using the (min,plus) algebra (a worked example follows this list);
  2. to relate it to other methods from the real-time world (Real-Time Calculus, Event Streams, scheduling);
  3. to touch on tooling questions,
  4. and on questions of precision on avionics case studies.
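
The flavor of these (min,plus) bounds can be conveyed by the textbook delay bound (a standard network-calculus result, not specific to this talk): for a flow with token-bucket arrival curve α(t) = b + r·t crossing a rate-latency server β(t) = R·(t − T)⁺ with r ≤ R, the delay is bounded by the maximal horizontal deviation between the two curves:

    d \;\le\; T + \frac{b}{R}.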

Tuesday, April 14, 2015 — by Xavier Nicollin from Inria Grenoble – Rhône-Alpes

Modular Distribution of Synchronous Dataflow Programs
Joint work with
Gwenaël Delaval (LIG/Sanosim), Alain Girault (Inria/SPADES) and Marc Pouzet (ENS/Parkas)

Seminar organized by SPADES (Xavier Nicollin)
At 11:00 in C208 at INRIA Montbonnot

Abstract:
We propose a type system to formalize, in a unified way, both the clock calculus and the location calculus of a distributed extension of the synchronous dataflow programming language Heptagon. On the one hand, the clock calculus infers the clock of each flow in the program and checks clock consistency: a time-homogeneous function, like +, should not be applied to flows with different clocks. On the other hand, the location calculus infers the spatial distribution of computations and checks spatial consistency: a centralized operator, like +, should not be applied to flows with different computing locations. Our main contribution is to show that these two inferences cannot be performed separately, so we provide a single calculus performing both. We also present a distribution algorithm based on the above-mentioned type system. It is a modular source-to-source algorithm, transforming the source program into a standard Heptagon program with a specific function for communicating values between locations. The resulting program is meant to run on as many locations as specified, on a distributed-memory architecture, communicating via FIFO buffers.

Tuesday, March 31, 2015 — by Radu Mateescu from Inria Grenoble – Rhône-Alpes

Overview of MCL, a Data-Based Model Checking Language

Seminar organized by SPADES (Jean-Bernard Stefani)
At 10:00 in F107 at INRIA Montbonnot

Abstract:
For analyzing the dynamic behaviour of concurrent value-passing systems, classical temporal logics must be extended with data-handling primitives. Since the early days of formal verification, many such extensions were proposed, both in the linear/branching time and the state/action based settings. We present here MCL (Model Checking Language), an extension of the alternation-free modal mu-calculus with data-handling features, generalized regular expressions over transition sequences, and fairness operators. MCL was designed to allow a versatile and succinct formulation of temporal properties interpreted on labeled transition systems (LTSs) containing data values. We illustrate the usage of MCL by means of classical examples of properties (safety, liveness, fairness). We also describe the implementation of MCL provided by the EVALUATOR 4.0 model checker of the CADP verification toolbox, which evaluates an MCL formula on-the-fly on an LTS and also produces full diagnostics (witnesses and counterexamples), i.e., subgraphs of the LTS illustrating the truth value of the formula. We discuss the expressiveness and evaluation complexity of MCL, and we outline several directions of future work.

Wednesday, March 11, 2015 — by Laurent Rioux from Thales Research and Technology

Integrating Timing Performance Analysis in the Industrial Software Design Process

Seminar organized by SPADES (Sophie Quinton)
At 10:30 in F107 at INRIA Montbonnot

Abstract:
Due to the growing complexity of real-time software, it becomes more and more difficult to predict how design decisions may impact the system’s timing behavior. Current industrial practice usually relies on the subjective judgment of experienced software architects and developers. This is risky, however, since potential timing errors are only detected after implementation and integration, when the software execution can be tested at system level, under realistic conditions. At this stage, timing errors may be very costly and time-consuming to correct. To overcome this problem we therefore need an efficient, reliable and automated timing estimation method, applicable already at early design stages, to guarantee that the designed system meets its timing requirements before time and resources are invested in the system implementation. Formal timing analysis appears at first sight to be the adequate candidate for this purpose. However, its use in industry is conditioned on a smooth and seamless integration into the system engineering process. This task is not easy: it is very often hindered, first, by the lack of engineering methods at early design stages for providing the analysis tools with the required input data (timing characteristics, scheduling characteristics, etc.) and, second, by the semantic mismatches between the design and analysis models. This talk addresses the issue of integrating timing analysis into industrial system engineering, based on our experience at Thales.

Thursday, February 26, 2015 — by Vagelis Bebelis from Inria Grenoble – Rhône-Alpes

Boolean Parametric Data Flow: Modeling – Analyses – Implementation

PhD defense organized by SPADES (Vagelis Bebelis)
At 11:00 in Grand Amphi at INRIA Montbonnot

Abstract:
Streaming applications are responsible for the majority of the computation load in many modern systems (video conferencing, computer vision, etc.). Their increased complexity and their strict requirements (robustness, throughput, power consumption) make their parallel implementation a great challenge. In this thesis, we present a new programming model and a scheduling framework that facilitate the development and implementation of such applications on modern many-core platforms. Our programming model provides an intuitive representation of streaming applications while exposing the available parallelism. Our framework facilitates the parallel implementation of applications developed with our model. Finally, we provide a throughput analysis to verify that the application meets its requirements.

Tuesday, February 17, 2015 — by Alan Schmitt from Inria Rennes – Bretagne Atlantique

JSCert, a Two-pronged Approach to JavaScript Formalization

Seminar organized by SPADES (Jean-Bernard Stefani)
At 10:00 in A104 at INRIA Montbonnot

Abstract:
JSCert is a formalization of JavaScript in the Coq proof assistant that aims at being as close as possible to the specification while having an executable component to run against test suites. We will describe this formalization, detail the design issues we had to address, and show how we plan to use the formalization to give guarantees about JavaScript programs.

Friday, January 23, 2015 — by Antoine Girard from Laboratoire Jean Kuntzmann

A Symbolic Control Approach to the Design of Cyber-Physical Systems

Seminar organized by SPADES (Sophie Quinton)
At 14:30 in C208 at INRIA Montbonnot

Abstract:
Cyber-physical systems (CPS) consist of computational elements monitoring and controlling physical entities. Their development requires a deep understanding of the dynamics of the physical process. Symbolic control approaches aim at simplifying CPS design by developing an approach based on: 1) a high-level programming language allowing us to specify functionalities of the CPS while abstracting the details of the physical dynamics; 2) an automated model-based synthesis tool generating from a high-level program, a low-level controller implementing the specified functionalities. The approach is based on the use of symbolic models (i.e. discrete abstractions) of the dynamics of the physical process and follows the correct by construction synthesis paradigm.

Tuesday, January 13, 2015 — by Benoît Caillaud from Inria Rennes

On the Index of Hybrid (aka Multi-Mode) DAE Systems

Seminar organized by SPADES (Jean-Bernard Stefani)
At 14:00 in F107 at INRIA Montbonnot

Abstract:
Hybrid systems modelers exhibit a number of difficulties related to the mix of continuous and discrete dynamics and to the sensitivity to the discretization scheme. Modular modeling, where subsystem models can simply be assembled with no rework, calls for using Differential Algebraic Equations (DAE). In turn, DAE are strictly more difficult than ODE. In most modeling and simulation tools, before simulation can occur, sophisticated pre-processing is applied to DAE systems based on the notion of differentiation index. Graph-based algorithms, such as the one originally proposed by Pantelides, are efficient at finding the index structurally (i.e., outside some exceptional values of the system parameters). The differentiation index for DAE explicitly relies on everything being differentiable; extensions to hybrid systems must therefore be done with caution, and to our knowledge no such extension exists. We propose to rely on non-standard analysis for this. Non-standard analysis formalizes differential equations as discrete-step transition systems with an infinitesimal time basis. We can thus bring hybrid DAE systems to their non-standard form, where the notion of difference index can be firmly used: the difference index of a difference algebraic equation (dAE) is an easy transposition of the differentiation index, in which the forward shift replaces differentiation. We prove that the differentiation index of a DAE is structurally equal to the difference index of its non-standard interpretation, which is a dAE. We can thus propose the difference index of the non-standard semantics of a hybrid DAE system as a consistent extension of both the differentiation index of DAE and the difference index of dAE. It turns out that the index theory for (discrete-time) dAE systems is interesting in itself and raises new issues. We discuss graph-based algorithms à la Pantelides for computing the dAE index and discuss examples.
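
Concretely, in the non-standard interpretation an ODE becomes a difference equation over an infinitesimal time step ∂:

    \dot{x} = f(x) \quad\rightsquigarrow\quad x(t + \partial) = x(t) + \partial \cdot f(x(t)),

so that each differentiation in the DAE corresponds to one forward shift x(t) ↦ x(t + ∂) in the resulting dAE, which is what makes the transposition from differentiation index to difference index possible.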

Tuesday, December 16, 2014 — by Romain Brenguier from Université Libre de Bruxelles

Controller Synthesis:
From Circuits to Quantitative, Real-Time and Distributed Systems

Seminar organized by SPADES (Jean-Bernard Stefani)
At 9:30 in J220 at INRIA Montbonnot

Abstract:
For the synthesis of reactive systems, implementations are obtained from winning strategies in games whose objective is given by the specification of the system. In this talk, I will report on our recent work in this domain: the specifications range from safety to omega-regular and quantitative; the systems studied can be simple, real-time, or distributed. In the first part, I will describe our entry in the synthesis competition organized within the SYNT’14 workshop, whose goal is to provide synthesis algorithms for safety specifications described as Boolean circuits. In the second part, I will talk about energy and mean-payoff timed games. Deciding whether there exists a winning strategy in these games is undecidable, so we provide semi-algorithms for solving them. In the last part, we analyze iterated admissibility, a well-known and important concept in classical game theory. This concept is better suited to multi-player games than the notion of winning strategy and could therefore be used for the design of distributed systems.

Tuesday, November 25, 2014 — by Partha Roop from Auckland University

A Novel Approach for Power Minimization of
Implantable Devices using Response Time Analysis

Seminar organized by SPADES (Sophie Quinton)
At 10:00 in Grand Amphi at INRIA Montbonnot

Abstract:
We study the response time analysis problem for implantable medical devices such as pacemakers. Existing techniques use timed automata (TA) based closed-loop models of the pacemaker and the heart. We observe that such models make idealized assumptions about the underlying execution platform, so there are problems with the fidelity of the verification results. In the proposed approach, we use low-level WCET analysis results to augment the TA models with accurate delay values. We then perform response time analysis using the UPPAAL model checker to determine the worst-case response time of the system. As implantable devices not only need guaranteed timing but also extended battery life, we propose an approach for extending battery life by computing safe execution frequency bounds for the execution platform. The proposed approach, for the first time, offers an avenue for high-fidelity response time analysis of implantable medical devices while also ensuring that the approach can be used to minimize power consumption.

Wednesday, November 19, 2014 — by Adnan Bouakaz from Inria Grenoble – Rhône-Alpes

An Abstraction-Refinement Framework
for Priority-Driven Scheduling of Static Dataflow Graphs

Seminar organized by SPADES (Sophie Quinton)
At 11:00 in C208 at INRIA Montbonnot


Abstract:
Static dataflow graphs are widely used to model concurrent real-time streaming applications. Though the state of the art usually advocates static-periodic scheduling of dataflow graphs over dynamic scheduling, the interest in dynamic (for instance, priority-driven) scheduling of dataflow graphs has been rekindled by the increasing use of virtualization technology and real-time operating systems to manage concurrent independent applications running on the same platform.

In this talk, I will present a sequence-based framework in which a large class of priority-driven schedules of dataflow graphs can be uniformly expressed and analyzed. Constructed schedules should be buffer-safe (i.e. no overflow/underflow exceptions over communication channels) even in the worst-case admission scenario of incoming applications and tasks. The presented approach is formulated in an abstraction-refinement framework to allow for the construction of feasible and buffer-safe schedules based on mathematically sound approximations of physical time in priority-driven schedules.


Tuesday, November 4, 2014 — by Pranav Tendulkar from Verimag

Mapping and Scheduling on Multi-core Processors using SMT Solvers

Seminar organized by SPADES (Alain Girault)
At 10:00 in G108 at INRIA Montbonnot


Abstract:
Mapping and scheduling of applications on multi-core processors is a hard combinatorial problem: the number of candidate solutions rises exponentially with the number of tasks and processors.
It is, in general, hard to find a latency-optimal solution that satisfies the constraints imposed by both hardware and software. In addition, there are multiple cost criteria to explore, such as the number of processors used, the amount of buffer memory to be allocated, the minimal inter-arrival time of the inputs, etc. The optimal solution is therefore not unique; instead there is a set of Pareto solutions.

In our work we target streaming applications that can be represented by synchronous dataflow (SDF) models. We encode the problem of scheduling dataflow applications using an SMT (satisfiability modulo theories) solver and perform a cost-space exploration to approximate the Pareto solutions. In this presentation I explain the grid-based exploration algorithm we used to select the cost points to query. I also illustrate how to deal with a solution space containing symmetrical solutions, i.e. ones that have the same costs. We use the properties of SDF to eliminate symmetrical solutions and thus accelerate the search for optimal costs. Further, we demonstrate how the SMT solver can be employed for accurate scheduling of executable SDF benchmarks on complex many-core platforms, such as the Kalray MPPA-256.
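
To give a feel for such encodings, here is a tiny hypothetical mapping instance using the Z3 SMT solver (toy data; the encodings discussed in the talk handle scheduling, buffers and symmetry on top of this):

    # Minimal sketch of mapping-as-SMT with Z3 (pip install z3-solver).
    from z3 import And, If, Int, Solver, Sum, sat

    durations = [4, 3, 2]  # made-up execution times of three tasks
    P = 2                  # number of processors
    place = [Int(f"p{i}") for i in range(len(durations))]

    s = Solver()
    for p in place:
        s.add(And(0 <= p, p < P))  # each task mapped to a valid processor
    for proc in range(P):
        load = Sum([If(place[i] == proc, durations[i], 0)
                    for i in range(len(durations))])
        s.add(load <= 5)           # per-processor load bound (a queried cost point)

    if s.check() == sat:
        m = s.model()
        print([m[p].as_long() for p in place])  # e.g. [0, 1, 1]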


Tuesday, October 21, 2014 — by Stephan Merz from Inria Nancy

Towards Certifying Network Calculus in a Proof Assistant
(joint work with Marc Boyer, Loïc Fejoz, and Etienne Mabille)

Seminar organized by SPADES (Sophie Quinton)
At 11:00 in G108 at INRIA Montbonnot


Abstract:
Network Calculus (NC) is an established theory for determining bounds on message delays and for dimensioning buffers in the design of networks for embedded systems. It is supported by academic and industrial tool sets and has been widely used, including for the design and certification of the Airbus A380 AFDX backbone. However, results produced by existing tools have to be trusted: some algorithms require subtle reasoning in order to ensure their applicability, and implementation errors could result in faulty network design, with unpredictable consequences.

In this presentation I will describe some preliminary work on encoding NC in the proof assistant Isabelle/HOL, with the objective of certifying the results of NC computations. We have built a prototype NC analyzer that outputs a trace of its computation, and the validity of this trace (both w.r.t. the necessary preconditions for applying computation steps and w.r.t. the numerical computations) can be checked offline by Isabelle/HOL. Given the safety-critical nature of the network designs to which NC is being applied, we believe that such an approach can usefully complement traditional qualification processes for NC analyzers.


Thursday, October 16, 2014

Forum Méthodes Formelles: “Model Checking in Action”

Seminar organized locally by CONVECS (Frédéric Lang)
8:45-17:30 in Grand Amphi at INRIA Montbonnot

Program:

  • Radu Mateescu: “Introduction to Model Checking”
  • Bernard Berthomieu: “Presentation of the Tina Tool”
  • Eric Jenn: “Applying Timed Model Checking to Avionics”
  • Hubert Garavel: “Presentation of the CADP Toolbox”
  • Abderahman Kriouile and Massimo Zendri: “Applying CADP to Hardware Verification”
  • Nicolas Breton: “Presentation of the SYSTEREL Smart Solver Tool”
  • Evguenia Dmitrieva: “The Use of Formal Proof at RATP”
  • Kim G. Larsen: “Presentation of the UPPAAL Tool and Its Industrial Applications”
  • Mickael Dierkes: “Presentation of the Tools Used at Rockwell Collins and Their Applications”

Admission is free, but registration is required so that Minalogic can plan meals and coffee breaks. To register, fill in the form, click “S’inscrire à un autre événement” (register for another event), then select FMF.

detailed program


Monday, September 29, 2014 — by Ismail Assayad from University Hassan II of Casablanca

Adaptive Scheduling of Multiple Applications
under Throughput and Energy Constraints on Multi-Core Architectures

Seminar organized by SPADES (Sophie Quinton)
At 11:00 in C207 at INRIA Montbonnot


Abstract:
In this presentation we propose a novel adaptive approach capable of handling the dynamism of a set of applications on multi-core architectures. The applications are subject to throughput and energy consumption constraints. For each application, a set of non-dominated (Pareto) schedules is computed at design time in the (energy, period, processors) space, for different core topologies. Then, upon the start or end of an application, a lightweight adaptive run-time scheduler re-configures the mapping of the live applications according to the available resources (i.e., the available cores of the multi-core chip). This run-time scheduler, based on a Tetris-like algorithm, selects the best topology for each application and maps it onto the multi-core architecture. This novel scheduling approach is adaptive, and thus able to change the mapping of applications during their execution, under throughput- and power-constrained modes.


Thursday, September 25, 2014 — by Gregor Gössler from INRIA Grenoble – Rhône-Alpes

Formal Techniques for Component-based Design of Embedded Systems

Habilitation (HDR) organized by SPADES (Gregor Gössler)
At 13:30 in F107 at INRIA Montbonnot


Abstract:
Embedded systems have become ubiquitous, from avionics and automotive through consumer electronics to medical devices. Failures may entail material damage or compromise the safety of human beings. At the same time, shorter product cycles, together with the fast-growing complexity of the systems to be designed, create a tremendous need for rigorous design techniques. The goal of component-based construction is to build complex systems from simpler components that are well understood and can be (re)used so as to accelerate the design process. An essential requirement is the ability to deal with component failures effectively; to this end we have to understand the causal relationship between component and system failures.

In the first part of the presentation I will outline my contributions to the BIP component framework, designed to support the construction of systems with heterogeneous models of interaction and execution. Then I will summarize my work on contracts and contract composition for component-based construction. Finally I will present recent work on blaming in component-based systems, which allows the user to determine which component(s) actually caused an observed system-level failure. Blaming is crucial to explain counterexample traces at design time, to decide on actions to take at runtime, and to help establish the liability of component providers.


Thursday, September 25, 2014 — by Claude Jard from University of Nantes

Towards Statistical Model-Checking of Concurrent Models

Seminar organized by SPADES (Gregor Gössler)
At 10:30 in F107 at INRIA Montbonnot


Abstract:
Statistical model checking is a simulation-based method providing good performance for verifying complex models. Unfortunately, models that include concurrency aspects are difficult to handle today, because causal independence between events must be equipped with probabilities. This presentation explores a possible method, illustrated by the use of the concurrent semantics of Petri nets.
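
The simulation-based core of the method fits in a few lines (a generic Chernoff–Hoeffding sketch, independent of the Petri-net semantics discussed in the talk): estimate the probability that a random run satisfies the property, choosing the number of runs from the desired precision ε and confidence 1 − δ.

    # Sketch of statistical model checking by Monte Carlo simulation.
    import math
    import random

    def num_runs(eps, delta):
        # Hoeffding bound: n >= ln(2/delta) / (2 eps^2) gives |p_hat - p| <= eps
        # with probability at least 1 - delta.
        return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

    def run_satisfies_property():
        # Stand-in for: simulate the model once, check the property on the trace.
        return random.random() < 0.3

    eps, delta = 0.01, 0.05
    n = num_runs(eps, delta)  # 18445 runs for this precision and confidence
    p_hat = sum(run_satisfies_property() for _ in range(n)) / n
    print(f"P(property) ~ {p_hat:.3f} +/- {eps} with confidence {1 - delta}")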


Thursday, September 25, 2014 — by Farhad Arbab from Leiden University / CWI

Coordinated Composition of Components

Seminar organized by SPADES (Gregor Gössler)
At 9:30 in F107 at INRIA Montbonnot


Abstract:
Modeling components as units of behavior offers a rich framework where composition operators can coordinate the behavior of such constituents into the behavior of arbitrarily more complex systems. Our work on Reo, its semantics, and tools serves as a concrete instance of such a framework. The emphasis in Reo is on the externally observable behavior of components and their coordinated composition into more complex concurrent systems. This emphasis highlights the eminent role of protocols in concurrent systems of components and services, and makes concurrency protocols the central focus in Reo.

At its core, Reo proffers an interaction-based model of concurrency where more complex protocols result from the composition of simpler, and eventually primitive, protocols. In Reo, combining a small set of user-defined synchronous and asynchronous primitives, in a manner that resembles the construction of electronic circuits from gates and elements, yields arbitrarily complex concurrency protocols. The semantics of Reo preserves synchrony and exclusion through composition. This form of compositionality makes the specification of protocols in Reo simpler than in conventional models and languages, which offer low-level synchronization constructs (e.g., locks, semaphores, monitors, synchronous methods). Moreover, the high-level constructs and abstractions in Reo also leave more room for compilers to perform novel optimizations when mapping protocol specifications to the lower-level instructions that implement them. In ongoing work we are developing code generators that produce executables whose performance and scalability on multi-core platforms compare favorably with hand-crafted, hand-optimized code.


Monday, September 22, 2014 — by Alexandre Donzé from UC Berkeley

Model-Based Design of Cyber-Physical Systems using Signal Temporal Logic

Seminar organized by SPADES (Sophie Quinton)
At 11:00 in C207 at INRIA Montbonnot


Abstract:
Model-based design is the process of designing a system by first creating a virtual prototype, based on some abstract mathematical modeling, and formally validating it before building the concrete system, in order to eliminate as many flaws as possible at an early stage. In the case of cyber-physical systems (CPS, aka embedded systems), formal validation is made difficult by the fact that models must take into account both computational components and their physical environment (real-time constraints, electrical and mechanical interactions, etc.), which cannot simply be decoupled. As a result, the dominant validation approach for models of CPS is to perform more or less extensive simulation and assert whether the simulation results behave correctly. As a first step towards making this process more formal, we have been advocating the use of Signal Temporal Logic (STL), which makes it possible to express and monitor rich temporal behaviors of real-valued, dense-time signals obtained from such simulation results. In addition to the Boolean monitoring of properties, STL comes with a quantitative, or robust, semantics which provides a numerical margin by which a simulation trace satisfies or violates a property. Such a quantity can be used to guide simulations efficiently towards satisfaction or violation, providing lightweight, semi-formal verification and synthesis techniques for complex CPS. These techniques have recently been applied successfully in two challenging contexts: to debug an industrial-scale Simulink model of an airpath controller for a diesel engine provided by Toyota, and to debug and grade solutions of a robotics virtual lab in the MOOC (Massive Open Online Course) EECS149.1x, Introduction to CPS, offered by Berkeley on the edX platform in Spring 2014.
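
The robust semantics mentioned above can be illustrated on sampled traces (a minimal sketch; a real monitor such as Donzé's Breach handles dense time and the full logic):

    # Robustness margins of two basic STL formulas on a sampled signal x:
    #   G (x > c), "always":     rho = min over t of x(t) - c
    #   F (x > c), "eventually": rho = max over t of x(t) - c
    # Positive rho: satisfied with that margin; negative: violated by that margin.

    def rho_always_gt(x, c):
        return min(v - c for v in x)

    def rho_eventually_gt(x, c):
        return max(v - c for v in x)

    trace = [3.0, 2.5, 2.8, 3.4]
    print(rho_always_gt(trace, 2.0))      # 0.5: always above 2, with margin 0.5
    print(rho_eventually_gt(trace, 4.0))  # -0.6: never reaches 4, short by 0.6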


Tuesday, July 15, 2014 — by Reinhard von Hanxleden from Kiel University

Taming Graphical Modeling: On Pragmatics-Aware MDE

Seminar organized by SPADES (Alain Girault)
At 10:00 in Grand Amphi at INRIA Montbonnot


Abstract:
Visual models, such as Statecharts or Simulink diagrams, help to understand complex systems. However, with the user interaction paradigms established today, activities such as creating, maintaining or browsing visual models can be very tedious. Valuable engineering time is wasted on archaic activities such as the manual placement and routing of nodes and edges. In this talk, I will present an approach to enhance productivity by focusing on the pragmatics of model-based design. A key enabler for pragmatics-aware modeling is the automatic drawing, or layout, of visual models. Automatic layout makes it possible to separate a model from its view, and to automatically synthesize customized views for a model. Building on automatic layout, concepts such as structure-based editing and focus-and-context visualizations during simulation help to further increase productivity. These concepts have been implemented in the open-source KIELER Eclipse Rich Client modeling environment.

Biography: Reinhard von Hanxleden studied Computer Science and Physics at Kiel University and the Pennsylvania State University (M.Sc., 1989), followed by dissertation work at Rice University (Ph.D., conferred 1995). He then joined DaimlerChrysler R&D, until 2000 in Berlin, subsequently (with Airbus) in Toulouse and Hamburg. Since April 2001 he has been the head of the Real-Time / Embedded Systems group at the Department of Computer Science of the Christian-Albrechts-Universität zu Kiel. He was a visiting researcher at the EECS Department at UC Berkeley in 2007 and at the University of Auckland in 2013/14. His research interests include systems modeling with a focus on modeling pragmatics and designer productivity, synchronous languages, deterministic concurrency, code synthesis, and time-predictable systems.


Tuesday, February 4, 2014

Forum Méthodes Formelles: Model Proofs, Program Proofs

Seminar organized locally by CONVECS (Hubert Garavel)
08:45-17:30 in Grand Amphi at INRIA Montbonnot
