Update on the adoption of synchronous languages at gh.st
Formal Modelling and Verification of Rate Adaptive Pacemakers for Heart Failure
Cardiovascular Implantable Electronic Devices (CIEDs) are routinely implanted to treat various types of arrhythmia. However, conventional pacing algorithms may not provide optimal treatment for patients with Heart Failure (HF), and evidence suggests negative outcomes. In this paper, we introduce a formal pacemaker model that can restore heart-lung synchronization, which may bring therapeutic benefits to patients with chronic HF. We use valued Synchronous Discrete Timed Automata (SDTA) to describe the timing requirements of the device, which are then translated into Promela for formal verification through a set of rules defined to maintain the synchronous semantics. The safety-critical properties are then verified using the model checker SPIN. We show that the SDTA model can be verified more efficiently than conventional approaches based on pure Timed Automata (TA). Animal test results show that the pacing rates are synchronized with the respiratory cycles; in particular, functional safety is ensured under various respiratory conditions. This work yields, for the first time, a formal model of a pacing device that reinstates heart rate variability for HF patients.
A study of predictable execution models implementation for industrial data-flow applications on a multi-core platform with shared banked memory
We study the implementation of data-flow applications on a multi-core processor with on-chip shared multi-banked memory. Specifically, we consider the Kalray MPPA2 processor and three applications coded using the industrial toolchain SCADE Suite. We focus on the runtime environment assuming global static scheduling, time-triggered and non-preemptive execution of tasks. Our contributions include (i) a technique to implement SCADE applications compliant with execution models inspired by PREMs (PRedictable Execution Models), (ii) an exhaustive comparison of three execution models with and without isolation, and finally (iii) guidelines for predictable implementation of a data-flow application on multi-core processors with shared on-chip memory.
Improving the Scalability of Multimode DAE Structural Analysis by using Reduced Block Triangular Forms
The generation of efficient simulation code from a Differential Algebraic Equations (DAE)-based model relies on the structural analysis of the model, with efficient algorithms such as Pryce's Σ-method.
However, when dealing with hybrid models with multiple modes and mode-dependent dynamics, these methods are not adapted to the resulting multimode DAE (mDAE) systems. In particular, the number of modes is, in general, exponential in the size of the model, making the structural analysis of every individual mode intractable.
Caillaud et al. proposed and implemented an extension of the Σ-method for mDAE systems, relying on an implicit representation of the varying structure of the mDAE thanks to Binary Decision Diagrams (BDD), in particular the Reduced Ordered variant (ROBDD) introduced by Bryant.
Although this algorithm is considerably more efficient than the enumeration of modes on some examples, its performance is still hindered by the sheer size of the functions that are represented and manipulated, and by the fact that all ROBDD manipulations have an exponential worst-case complexity in both time and space.
However, typical engineering systems are structured and sparse: they are built from interconnected components, where each component is only connected to a small number of other components. The focus of this talk is on how this sparsity can be exploited in order to improve the scalability of the above-mentioned structural analysis algorithm.
Designed by Joan Thibault and implemented in the generic symbolic dynamic programming framework Snowflake, Reduced Block Triangular Forms (RBTF) are an intermediate representation of Boolean functions that is particularly well suited to such sparse, structured systems. A two-step RBTF algorithm (also implemented in Snowflake) efficiently turns systems of equations into this representation while preserving the original structure.
To do so, the RBTF algorithm over-approximates the relations between components as a tree (called a decomposition tree in the graph literature), each node of this tree being a set of components of the initial system. Then, starting from the leaves, each sub-system is solved and its solutions are projected as new constraints on its parent node; this process is iterated until all sub-systems are solved. This step, which condenses all constraints and checks their satisfiability, is called the Forward Reduction Process (FRP).
Constraints can then be propagated back into the initial sub-systems by performing those same projections in the reverse direction. In other words, each sub-system updates its set of solutions given the information from its parent, then sends the information to its children sub-systems. This step is the Backward Propagation Process (BPP).
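The two passes above can be sketched in a few lines of Python. This is a toy illustration only: it represents each sub-system's solutions extensionally as sets of assignments, whereas the actual Snowflake implementation manipulates symbolic (ROBDD/RBTF) representations; all names and the tiny demo system are illustrative assumptions, not the real API.

```python
# A node of the decomposition tree: its variables, its local solutions
# (an extensional set of satisfying assignments, ordered like `vars`),
# and its children sub-systems.
class Node:
    def __init__(self, variables, solutions, children=()):
        self.vars = tuple(variables)
        self.sols = {tuple(s) for s in solutions}
        self.children = list(children)

def project(sols, src_vars, dst_vars):
    """Project assignments over src_vars onto the variables shared with dst_vars."""
    shared = [v for v in dst_vars if v in src_vars]
    idx = [src_vars.index(v) for v in shared]
    return shared, {tuple(s[i] for i in idx) for s in sols}

def restrict(node, shared, allowed):
    """Keep only the node's solutions consistent with `allowed` on `shared`."""
    idx = [node.vars.index(v) for v in shared]
    node.sols = {s for s in node.sols if tuple(s[i] for i in idx) in allowed}

def forward_reduction(node):
    """FRP: solve the leaves first, project their solutions upward as constraints."""
    for child in node.children:
        forward_reduction(child)
        shared, allowed = project(child.sols, child.vars, node.vars)
        if shared:
            restrict(node, shared, allowed)
    return bool(node.sols)          # satisfiability of the condensed system

def backward_propagation(node):
    """BPP: push the parent's refined solutions back down to its children."""
    for child in node.children:
        shared, allowed = project(node.sols, node.vars, child.vars)
        if shared:
            restrict(child, shared, allowed)
        backward_propagation(child)

# Demo: a leaf sub-system enforcing a = b, a root enforcing b = 1 and c = 0.
leaf = Node(("a", "b"), [(0, 0), (1, 1)])
root = Node(("b", "c"), [(1, 0)], children=[leaf])
satisfiable = forward_reduction(root)   # condense constraints, check satisfiability
backward_propagation(root)              # refine the leaf: only a = b = 1 survives
```

After the forward pass establishes satisfiability, the backward pass shrinks the leaf's solution set to the single assignment compatible with its parent, which is exactly the FRP/BPP interplay described above.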
The main difficulty of this approach is finding a good enough tree organisation of the components, especially when the component-based organisation of the original model is not available. To overcome this limitation, we rely on a naive, but efficient enough, greedy heuristic which computes a suitable decomposition tree for RBTF to use.
Implementing true separate compilation - the Blech module system
Normalizing Lustre in Coq
Reactive Probabilistic Programming Semantics with Mixed Nondeterministic/Probabilistic Automata
Graphical models in probability and statistics are a core concept in the growing area of probabilistic reasoning and probabilistic programming—graphical models include Bayesian networks and factor graphs. In cooperation with Jean-Baptiste Raclet (IRIT-CNRS, Toulouse), we developed a new model of Mixed (nondeterministic/probabilistic) Automata that subsumes both nondeterministic automata and graphical probabilistic models. Mixed Automata are equipped with parallel composition and a simulation relation, and support message passing algorithms inherited from graphical probabilistic models. We also show how Segala and Lynch's Probabilistic Automata can be mapped to Mixed Automata while preserving simulation relations. However, our parallel composition differs from that of Probabilistic Automata (PA); in contrast to PA's, our notion of parallel composition suits Probabilistic Programming.
Smoothly translating synchronous languages to imperative code with the Semsil intermediate language
Interfacing a synchronous language with Python and Jupyter Notebooks
The Sparse Synchronous Model: Yet Another Synchronous Language
The Lisinopril medical prescription in HipHop
Exploring Compositional Neural Networks for Real-time Applications
Neural networks have the ability to model complex input-output relationships without extensive information about system dynamics. Hence, data-driven approaches to Cyber-Physical Systems (CPS) using Neural Networks (NNs) have been gaining prominence. Such NNs are traditionally developed as large, monolithic black-box models, which are difficult to validate, especially for ensuring safety. Consequently, their implementation as a composition of smaller NNs, each performing an individual task, has received considerable interest. Such compositions are easier to validate, implement, and parallelise, and have the potential for incremental design. Despite this interest, the question of how implementations of Compositional NNs compare quantitatively to Monolithic NNs has received scant attention.
This work presents a comparative study on the implementation of both monolithic and compositional NN models on hardware using a new compiler that transforms Keras models. In the developed approach, we compile networks of Artificial Neural Networks to hardware implementations using new synchronous semantics for their execution. The developed semantics enables compilation of Compositional NNs to a parallel hardware implementation involving limited hardware resources whilst guaranteeing that the generated implementation is deterministic and time predictable. Overall, we show, through the complex example of the Discretionary Lane-Changing decision-making process of an Autonomous Vehicle, the trade-offs involved in the hardware synthesis of a Compositional Model vis-à-vis a Monolithic Model. We illustrate that a compositional implementation on hardware can be particularly beneficial for systems with limited input-output capabilities, at a marginal cost to functional performance.
Diagrammatic semantics for digital circuits
Reasoning about software is usually performed using operational semantics, which is syntactic and reduction-based. Conversely, to reason about hardware, it is usually translated into an automaton and its behaviour simulated. Motivated by the advantages of the former, Ghica and Jung recently defined an operational semantics for digital circuits, where the behaviour of circuits is modelled by axioms in a symmetric traced monoidal category (STMC). However, reasoning syntactically over terms is not efficient - identifying redexes can be difficult. We therefore develop this work further by defining a graphical language for STMCs, using a variant on hypergraphs that we call 'linear hypergraphs'. This language is sound and complete - any morphism in the STMC can be interpreted as a well-formed linear hypergraph up to isomorphism, and any linear hypergraph is the representation of a unique morphism, up to the equational theory of the category. We can then express the axioms of digital circuits as graph rewrite rules; we show how our graphical language lets us apply the framework of double pushout (DPO) rewriting, yielding a diagrammatic semantics based on graph rewriting.
Modeling and design of neural network architectures for neural hybridization based on synchronous approaches
A reactive semantics for a higher-order language with integer clocks
Many synchronous languages use reactive semantics - inputs arrive progressively, and outputs are produced progressively. Recently, other languages that are closer to lambda calculus have been inspired by guarded recursion - the doctrine of controlling causality to ensure well-definedness of recursive objects. These languages generally do not have reactive semantics, which jeopardizes their use in systems where efficiency is important. In this talk, we will give a reactive semantics for such a language - namely, Core Lambda*, introduced by Adrien Guatto. We will revisit well-known compilation techniques, like simple loop compilation (which is used in Lustre), and we will give a proof of correctness by using a logical relation.
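Simple loop compilation, mentioned above for Lustre, can be pictured as turning a stream definition into a transition function over an explicit state. The Python sketch below applies it to a hypothetical counter node (Lustre-style `n = 0 -> pre n + 1`); it is an illustration of the general technique, not of Core Lambda* or its compiler.

```python
# Simple loop compilation of a hypothetical stream node:
#     node counter() returns (n : int);  let  n = 0 -> pre n + 1;
# The infinite stream becomes an init function plus a step function
# that is called once per instant, producing outputs progressively.

def counter_init():
    # State of the node: the previous value of n (None before the first instant).
    return {"pre_n": None}

def counter_step(state):
    # n = 0 -> pre n + 1 : value 0 at the first instant, pre n + 1 afterwards.
    n = 0 if state["pre_n"] is None else state["pre_n"] + 1
    state["pre_n"] = n
    return n

# The reactive loop: one step per instant.
state = counter_init()
outputs = [counter_step(state) for _ in range(5)]
```

Here `outputs` is the prefix `[0, 1, 2, 3, 4]` of the counter stream; the point of a reactive semantics is precisely that each element is available after the corresponding step, without computing the whole stream.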
Usuba, high-throughput software circuits for cryptography
Cryptographic primitives are subject to diverging imperatives. Functional correctness and auditability push for the use of a high-level programming language. Performance and the threat of timing attacks push for using nothing more abstract than an assembler, so as to exploit (or avoid!) the micro-architectural features of a given machine. We believe that a suitable programming language can reconcile both views and actually improve on the state of the art of both.
Usuba is an opinionated dataflow programming language in which block ciphers become so simple as to be “obviously correct” and whose types document valid parallelization strategies at the granularity of individual bits. Its optimizing compiler produces high-throughput, constant-time implementations performing on par with hand-tuned reference implementations. The cornerstone of our approach is a systematization and generalization of bitslicing, an implementation trick frequently used by cryptographers.
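The bitslicing trick mentioned above is easy to demonstrate in plain Python: transpose n independent k-bit inputs into k machine words, one per bit position, so that a boolean circuit evaluated with word-level AND/XOR processes all n inputs at once, with no data-dependent branches or memory accesses. The 4-bit "round" below is a made-up toy circuit, not Usuba output or any real cipher.

```python
WIDTH = 4  # bits per input value

def transpose(values):
    """Pack the i-th bit of every value into word i (the bitsliced form)."""
    words = [0] * WIDTH
    for lane, v in enumerate(values):
        for i in range(WIDTH):
            words[i] |= ((v >> i) & 1) << lane
    return words

def untranspose(words, n):
    """Inverse of transpose: recover the n original values."""
    return [sum(((words[i] >> lane) & 1) << i for i in range(WIDTH))
            for lane in range(n)]

def toy_round(x):
    """A made-up 4-bit circuit on sliced words: each operator below acts
    on all lanes (all inputs) simultaneously, in constant time."""
    x0, x1, x2, x3 = x
    y0 = x0 ^ (x1 & x2)
    y1 = x1 ^ (x2 & x3)
    y2 = x2 ^ x0
    y3 = x3 ^ y0
    return [y0, y1, y2, y3]

def toy_round_ref(v):
    """Reference (unsliced) version of the same circuit, one value at a time."""
    b = [(v >> i) & 1 for i in range(WIDTH)]
    y0 = b[0] ^ (b[1] & b[2]); y1 = b[1] ^ (b[2] & b[3])
    y2 = b[2] ^ b[0]; y3 = b[3] ^ y0
    return y0 | (y1 << 1) | (y2 << 2) | (y3 << 3)

# All 16 possible 4-bit inputs processed in one pass over 4 words.
vals = list(range(16))
result = untranspose(toy_round(transpose(vals)), len(vals))
```

On a 64-bit machine the same word-level code processes 64 blocks per circuit evaluation; Usuba systematizes and generalizes this transposition and exposes it in the type system.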
An Overview of the Synchronous Language Céu
A Hard Real Time Demonstrator for Dynamic Ticks and Timed SCCharts
Synchronous programming languages, such as Esterel, Lustre, SCADE or SCCharts, have been developed for designing reactive systems. They abstract from computation times and assume that outputs are synchronous with their inputs. This leads to a deterministic semantics, without race conditions, which makes synchronous languages particularly suitable for safety-critical systems. However, even though synchronous languages have been designed with real-time applications in mind, the handling of physical time is traditionally left to the execution environment. This makes it difficult to express, for example, arbitrary timeouts, and may lead to excessive “busy waiting” computations. The recent proposal of dynamic ticks alleviates this by making physical time a first-class citizen within the synchronous programming model.
In this work, we explore and demonstrate the practical merits of dynamic ticks, including improved timing accuracy and reduced computational requirements, in the context of Timed SCCharts. As demonstration platform, we present a hardware/software platform that involves two stepper motors whose operation must be synchronized at microsecond accuracy to avoid physical damage.
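The contrast with busy waiting can be sketched in Python: instead of ticking at a fixed rate and polling timeouts, each reaction registers the instant of its next deadline, and the scheduler wakes up exactly then. This toy scheduler is an illustration of the dynamic-ticks idea only, not the Timed SCCharts runtime; all names are invented.

```python
import heapq
from itertools import count

class DynamicTickScheduler:
    """Toy scheduler: reactions fire only at requested deadlines, so no
    polling ticks occur in between (the dynamic-ticks idea in miniature)."""
    def __init__(self):
        self.now = 0.0
        self._seq = count()      # tie-breaker: never compare reaction functions
        self.queue = []          # heap of (deadline, seq, reaction)
        self.trace = []          # instants at which a tick actually ran

    def schedule(self, delay, reaction):
        heapq.heappush(self.queue, (self.now + delay, next(self._seq), reaction))

    def run(self, until):
        # Sleep-until-deadline loop: logical time jumps straight to the
        # next requested instant instead of advancing in fixed steps.
        while self.queue and self.queue[0][0] <= until:
            self.now, _, reaction = heapq.heappop(self.queue)
            self.trace.append(self.now)
            reaction(self)

def blink(sched, period=0.5):
    # A reaction that re-arms itself: toggle an output every `period` seconds.
    sched.schedule(period, lambda s: blink(s, period))

sched = DynamicTickScheduler()
blink(sched)            # first toggle 0.5 s from now
sched.run(until=2.0)    # exactly four wake-ups: t = 0.5, 1.0, 1.5, 2.0
```

With a fixed 1 ms tick, the same two-second run would execute 2000 ticks, nearly all of them idle polls; the dynamic variant runs only the four that do work, which is what enables both the timing accuracy and the reduced computational load demonstrated on the stepper-motor platform.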
Towards verification of synchronous hybrid programs
On the efficiency cost of common isolation properties in synchronous and real-time systems implementation
Making Mainstream Programming Languages Deterministic Again
Synchronous languages are unique in the programming world with their emphasis (with some exceptions) on deterministic concurrency. Determinism enhances testability, predictability, and verifiability, and yet, synchronous languages have largely not caught on in the general software engineering community. This talk is about a coordination language called Lingua Franca (LF) that is based in part on synchronous/reactive principles. It is a polyglot coordination language with support (so far) for C, C++, Python, and TypeScript (a statically typed version of JavaScript). Program logic is given in one or more of those target languages, enabling developers to use familiar languages and integrate extensive libraries and legacy code. Lingua Franca realizes a model of computation called Reactors (for actors, revisited) that extends synchronous/reactive principles with a logical timeline and timestamped events. It supports asynchronous injection of events from the environment, using callbacks or interrupts for example, without compromising its deterministic semantics. It also includes support for real-time computing and deadlines, making it a suitable implementation language for cyber-physical systems. Moreover, the generated code (in the C and C++ targets) transparently executes in parallel on multicore machines. LF programs can be distributed across networks without losing their determinism, and the language includes explicitly nondeterministic constructs for applications that require and/or tolerate nondeterminism. Lingua Franca is an open-source, BSD-style licensed collaboration between UC Berkeley, UT Dallas, TU Dresden, and Kiel University.
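The key mechanism behind deterministic asynchronous injection can be sketched in Python: events carry logical timestamps, and reactions are invoked strictly in timestamp order, so the observable behaviour does not depend on the physical order in which events arrived. This is a toy illustration of the idea, not the Lingua Franca runtime or its API; LF additionally uses microsteps within a timestamp, for which the injection counter below is a crude stand-in.

```python
import heapq

class EventQueue:
    """Toy timestamped event queue: process events in logical-time order,
    regardless of the (possibly asynchronous) order of injection."""
    def __init__(self):
        self._q = []
        self._seq = 0   # deterministic tie-breaker for equal timestamps

    def inject(self, timestamp, payload):
        # Injections may arrive out of logical order (e.g. from callbacks
        # or interrupts); the heap restores the logical order.
        heapq.heappush(self._q, (timestamp, self._seq, payload))
        self._seq += 1

    def run(self, reaction):
        # Invoke the reaction on every event, strictly by ascending timestamp.
        while self._q:
            t, _, payload = heapq.heappop(self._q)
            reaction(t, payload)

# Events injected out of logical order are still observed in logical order.
log = []
q = EventQueue()
q.inject(20, "b")
q.inject(10, "a")
q.inject(30, "c")
q.run(lambda t, payload: log.append(payload))
```

Here `log` ends up ordered by timestamp (`a`, `b`, `c`) even though `b` was injected first, which is the essence of how LF keeps asynchronous inputs from compromising determinism.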