Abstracts
Vincent Bonczak and Pascal Fradet. A
formally verified circuit transformation to tolerate SEMTs.
INRIA Research Report nº 9523, 2023.
Key-words: Fault-tolerance, Triple modular redundancy,
Single-event multiple-transients, Circuit transformation, Certification,
Coq.
Abstract: Digital circuits are subject to transient faults
caused by high-energy particles. A particle hit may produce a voltage
pulse that propagates through the circuit and upsets its logical state and
output. Most fault-tolerance techniques consider this kind of fault, known as single-event transients (SETs). However, as technology scales down, a single particle becomes likely to induce transient faults in several adjacent components.
These single-event multiple transients (SEMTs) are becoming a major issue
for modern digital circuits. In this paper, we follow a formal approach to
study fault-tolerance w.r.t. SEMTs. We first show how to formalize SEMTs
provided that the layout of the circuit is known. We show that the
standard triple modular redundancy (TMR) technique can be modified so
that, along with some placement constraints, it completely masks SEMTs. In
order to prove such a fault-tolerance property for all circuits with
unknown layouts, we define a fault-model representing all possible SEMTs
for all TMR circuits. Circuits are expressed in LDDL, our gate-level
hardware description language. The modified TMR technique is described as
a circuit transformation in LDDL, and the fault models for SEMTs are
specified as particular semantics of LDDL. We show that, for any circuit, its transformed version masks all faults of the considered SEMT fault model. All this development is formalized in the Coq proof assistant, where
fault-tolerance properties are expressed and formally proved.
Pascal Fradet, Alain Girault, and Alexandre Honorat. Sequential
scheduling of dataflow graphs for memory peak minimization. In Proc. of 24th ACM SIGPLAN/SIGBED Int.
Conf. on Languages, Compilers, and Tools for Embedded Systems, LCTES'23, pp. 76-86,
2023.
Key-words: Dataflow; Task graphs; Memory peak; Sequential
scheduling.
Abstract: Many computing systems are constrained by their fixed
amount of shared memory. Modeling applications with task or Synchronous
DataFlow (SDF) graphs makes it possible to analyze and optimize their memory
peak. The problem studied by this paper is the memory peak minimization of
such graphs when scheduled sequentially. Regarding task graphs, previous work has focused on the Series-Parallel Directed Acyclic Graph (SP-DAG) subclass and proposed techniques to find an optimal sequential schedule w.r.t. memory peak. In this paper, we propose task graph transformations and an
optimized branch and bound algorithm to solve the problem on a larger class
of task graphs. The approach also applies to SDF graphs after converting
them to task graphs. However, since that conversion may produce very large
graphs, we also propose a new suboptimal method, similar to Partial
Expansion Graphs, to reduce the problem size. We evaluate our approach on
classic benchmarks, on which we always outperform the state-of-the-art.
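To make the problem concrete, here is a minimal sketch (not the paper's branch-and-bound algorithm) of how the memory peak of one given sequential schedule can be computed; the task names, buffer sizes, and schedule are purely illustrative.

```python
# Sketch: memory peak of one sequential schedule of a task graph.
# Each edge (producer, consumer) carries a buffer of a given size;
# a buffer is allocated when its producer fires and freed once its
# consumer has fired. All names and sizes are hypothetical.

def memory_peak(edges, schedule):
    """edges: dict (producer, consumer) -> buffer size.
    schedule: sequential order in which tasks fire."""
    live = {}            # currently allocated buffers
    current = peak = 0
    for task in schedule:
        # Allocate output buffers when the task fires.
        for (p, c), size in edges.items():
            if p == task:
                live[(p, c)] = size
                current += size
        peak = max(peak, current)
        # Free input buffers once the task has consumed them.
        for (p, c) in list(live):
            if c == task:
                current -= live.pop((p, c))
    return peak

edges = {("A", "B"): 4, ("A", "C"): 2, ("B", "C"): 3}
print(memory_peak(edges, ["A", "B", "C"]))  # prints 9
```

A solver would explore the valid schedules (here, the two orderings of B and C) and keep the one with the smallest peak.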
Pascal Fradet, Xiaojie Guo, and Sophie Quinton. CertiCAN: Certifying CAN
analyses and their results. Real-Time
Systems, RTS,
59(2):160-198, 2023.
Key-words: Real-Time Systems Analysis, CAN bus, Certification, Coq
Proof Assistant
Abstract: We present
CertiCAN, a tool produced using the Coq proof assistant and based on the
Prosa library for the formal verification of CAN analysis results. Result
verification is a process that is lightweight and flexible compared to
tool verification. Indeed, the formal verification of an industrial analyzer needs access to its source code, requires proving many optimizations and implementation tricks, and demands new proof effort at each software update. In contrast, CertiCAN only relies on the result provided
by such a tool and remains independent of the tool itself or its updates.
Furthermore, it is usually faster to check a result than to produce it.
All these reasons make CertiCAN a practical choice for industrial
purposes. CertiCAN is based on the formal verification and combined use of
two well-known CAN analysis techniques completed with additional
optimizations. Experiments demonstrate that CertiCAN is computationally
efficient and faster than the underlying combined analysis. It is able to
certify the results of RTaW-Pegase, an industrial CAN analysis tool, even
for large systems. This result paves the way for a broader acceptance of
formal tools for the certification of real-time systems analysis results.
Pascal Fradet, Alain Girault, Ruby Krishnaswamy, Xavier Nicollin, and
Arash Shafiei. RDF: A Reconfigurable
dataflow model of computation. ACM
Transactions on Embedded Computer Systems,
TECS, Volume 22(1),
October 2022.
Key-words: Models of computation; Synchronous dataflow;
Reconfigurable systems; Graph rewriting; Static analyses; Boundedness;
Liveness
Abstract: Dataflow Models of Computation (MoCs) are widely used
in embedded systems, including multimedia processing, digital signal
processing, telecommunications, and automatic control. In a dataflow MoC, an
application is specified as a graph of actors connected by FIFO channels.
One of the first and most popular dataflow MoCs, Synchronous Dataflow (SDF),
provides static analyses to guarantee boundedness and liveness, which are
key properties for embedded systems. However, SDF and most of its variants lack the capability to express the dynamism needed by modern streaming
applications. In particular, the applications mentioned above have a strong
need for reconfigurability to accommodate changes in the input data, the
control objectives, or the environment. We address this need by proposing a
new MoC called Reconfigurable Dataflow (RDF). RDF extends SDF with
transformation rules that specify how and when the topology and actors of
the graph may be reconfigured. Starting from an initial RDF graph and a set
of transformation rules, an arbitrary number of new RDF graphs can be
generated at runtime. A key feature of RDF is that it can be statically
analyzed to guarantee that all possible graphs generated at runtime will be
consistent and live. We introduce the RDF MoC, describe its associated
static analyses, and present its implementation and some experimental
results.
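For background, the boundedness (consistency) analysis that SDF provides, and that RDF must preserve across reconfigurations, amounts to solving the balance equations of the graph. The sketch below is a generic illustration, not the paper's implementation; the graph, rates, and actor names are hypothetical, and a connected graph is assumed.

```python
from fractions import Fraction
from math import lcm

# Sketch: SDF consistency check via the balance equations.
# For each channel a -(p,q)-> b (a produces p tokens per firing,
# b consumes q), a consistent graph satisfies r[a]*p == r[b]*q,
# where r is the repetition vector of one iteration.

def repetition_vector(channels):
    """channels: list of (src, prod_rate, dst, cons_rate).
    Returns the smallest integer repetition vector, or None if
    the graph is inconsistent (channels grow unboundedly)."""
    rates = {}
    for src, p, dst, q in channels:
        rates.setdefault(src, None)
        rates.setdefault(dst, None)
    rates[channels[0][0]] = Fraction(1)  # seed an arbitrary actor
    changed = True
    while changed:
        changed = False
        for src, p, dst, q in channels:
            if rates[src] is not None and rates[dst] is None:
                rates[dst] = rates[src] * p / q
                changed = True
            elif rates[dst] is not None and rates[src] is None:
                rates[src] = rates[dst] * q / p
                changed = True
            elif rates[src] is not None and rates[dst] is not None:
                if rates[src] * p != rates[dst] * q:
                    return None  # balance equation violated
    # Scale the rational rates to the smallest integer vector.
    denom = lcm(*(r.denominator for r in rates.values()))
    return {a: int(r * denom) for a, r in rates.items()}

# A -(2,3)-> B -(1,2)-> C: per iteration A fires 3, B fires 2, C once.
print(repetition_vector([("A", 2, "B", 3), ("B", 1, "C", 2)]))
```

RDF's static analyses must establish such properties for every graph reachable by the transformation rules, not just the initial one.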
Pascal Fradet, Alain Girault, Ruby Krishnaswamy, Xavier Nicollin, and Arash
Shafiei. RDF: A Reconfigurable dataflow
model of computation.
INRIA Research Report nº 9439. 2021.
Key-words: Models of computation; Synchronous dataflow; Reconfigurable
systems; Graph rewriting; Static analyses; Boundedness; Liveness
Abstract: Dataflow Models of
Computation (MoCs) are widely used in embedded systems, including
multimedia processing, digital signal processing, telecommunications, and
automatic control. In a dataflow MoC, an application is specified as a
graph of actors connected by FIFO channels. One of the first and most
popular dataflow MoCs, Synchronous Dataflow (SDF), provides static
analyses to guarantee boundedness and liveness, which are key properties
for embedded systems. However, SDF and most of its variants lack the
capability to express the dynamism needed by modern streaming
applications. In particular, the applications mentioned above have a
strong need for reconfigurability to accommodate changes in the input
data, the control objectives, or the environment. We address this need by
proposing a new MoC called Reconfigurable Dataflow (RDF). RDF extends SDF
with transformation rules that specify how and when the topology and
actors of the graph may be reconfigured. Starting from an initial RDF
graph and a set of transformation rules, an arbitrary number of new RDF
graphs can be generated at runtime. A key feature of RDF is that it can be
statically analyzed to guarantee that all possible graphs generated at
runtime will be consistent and live. We introduce the RDF MoC, describe
its associated static analyses, and present its implementation and some
experimental results.
Pascal Fradet, Xiaojie Guo, and Sophie Quinton. CertiCAN: Certifying CAN
analyses and their results. INRIA Research Report
nº 9443, 2021.
Key-words: Real-Time Systems Analysis, CAN bus,
Certification, Coq Proof Assistant
Abstract: We present CertiCAN, a tool produced using the Coq proof
assistant for the formal certification of CAN analysis results. Result
certification is a process that is lightweight and flexible compared to tool
certification. Indeed, the certification of an industrial analyzer needs access to its source code, requires proving many optimizations and implementation tricks, and demands new proof effort at each software update. In
contrast, CertiCAN only relies on the result provided by such a tool and
remains independent of the tool itself or its updates. Furthermore, it is
usually more time efficient to check a result than to produce it. All these
reasons make CertiCAN a practical choice for industrial purposes. CertiCAN
is based on the certification and combined use of two well-known CAN
analysis techniques completed with additional optimizations. Experiments
demonstrate that CertiCAN is computationally efficient and faster than the
underlying combined analysis. It is able to certify the results of
RTaW-Pegase, an industrial CAN analysis tool, even for large systems. This
result paves the way for a broader acceptance of formal tools for the
certification of real-time systems analysis results.
Pascal Fradet, Alain Girault, Ruby Krishnaswamy, Xavier Nicollin, and Arash
Shafiei. RDF: Reconfigurable dataflow. In Proc. of Design,
Automation & Test in Europe Conference & Exhibition, DATE'19,
pages 1709-1714, March 2019.
Key-words: Dataflow models of computation; Synchronous dataflow;
Reconfigurable systems; Static analyses; Boundedness; Liveness.
Abstract: Dataflow Models of Computation (MoCs) are widely used in
embedded systems, including multimedia processing, digital signal
processing, telecommunications, and automatic control. In a dataflow MoC, an
application is specified as a graph of actors connected by FIFO channels.
One of the most popular dataflow MoCs, Synchronous Dataflow (SDF), provides
static analyses to guarantee boundedness and liveness, which are key
properties for embedded systems. However, SDF (and most of its variants)
lacks the capability to express the dynamism needed by modern streaming
applications. In particular, the applications mentioned above have a strong
need for reconfigurability to accommodate changes in the input data, the
control objectives, or the environment. We address this need by proposing a
new MoC called Reconfigurable Dataflow (RDF). RDF extends SDF with
transformation rules that specify how the topology and actors of the graph
may be reconfigured. Starting from an initial RDF graph and a set of
transformation rules, an arbitrary number of new RDF graphs can be generated
at runtime. A key feature of RDF is that it can be statically analyzed to
guarantee that all possible graphs generated at runtime will be consistent
and live. We introduce the RDF MoC, describe its associated static analyses,
and outline its implementation.
Pascal Fradet, Xiaojie Guo, Jean-François Monin, and Sophie Quinton. CertiCAN:
A tool for the Coq certification of CAN analysis results. In Proc.
of Real-Time Embedded Technology & Applications Symposium, RTAS'19,
2019.
Key-words: Real-Time Systems Analysis, CAN bus,
Certification, Coq Proof Assistant
Abstract: This paper introduces CertiCAN, a tool produced using the
Coq proof assistant for the formal certification of CAN analysis results.
Result certification is a process that is lightweight and flexible compared
to tool certification, which makes it a practical choice for industrial
purposes. The analysis underlying CertiCAN, which is based on a combined use
of two well-known CAN analysis techniques, is computationally efficient.
Experiments demonstrate that CertiCAN is faster than the corresponding
certified combined analysis. More importantly, it is able to certify the
results of RTaW-Pegase, an industrial CAN analysis tool, even for large
systems. This result paves the way for a broader acceptance of formal tools
for the certification of real-time systems analysis results.
Pascal Fradet, Alain Girault, Ruby Krishnaswamy, Xavier Nicollin, and Arash
Shafiei. RDF: Reconfigurable dataflow (extended version). INRIA
Research Report nº 9227,
2018.
Key-words: Dataflow models of computation; Synchronous dataflow;
Reconfigurable systems; Static analyses; Boundedness; Liveness.
Abstract: Dataflow Models of Computation (MoCs) are widely used in
embedded systems, including multimedia processing, digital signal
processing, telecommunications, and automatic control. In a dataflow MoC, an
application is specified as a graph of actors connected by FIFO channels.
One of the most popular dataflow MoCs, Synchronous Dataflow (SDF), provides
static analyses to guarantee boundedness and liveness, which are key
properties for embedded systems. However, SDF (and most of its variants)
lacks the capability to express the dynamism needed by modern streaming
applications. In particular, the applications mentioned above have a strong
need for reconfigurability to accommodate changes in the input data, the
control objectives, or the environment. We address this need by proposing a
new MoC called Reconfigurable Dataflow (RDF). RDF extends SDF with
transformation rules that specify how the topology and actors of the graph
may be reconfigured. Starting from an initial RDF graph and a set of
transformation rules, an arbitrary number of new RDF graphs can be generated
at runtime. A key feature of RDF is that it can be statically analyzed to
guarantee that all possible graphs generated at runtime will be consistent
and live. We introduce the RDF MoC, describe its associated static analyses,
and outline its implementation.
Pascal Fradet, Maxime Lesourd, Jean-François Monin, and Sophie Quinton. A
generic Coq proof of typical worst-case analysis. In 2018 IEEE
Real-Time Systems Symposium, RTSS'18,
Paris, France, December 2018.
Key-words: Formal proofs, Weakly-hard real-time systems, Coq, Deadline
miss models.
Abstract: This paper presents a generic proof of Typical Worst-Case
Analysis (TWCA), an analysis technique for weakly-hard real-time
uniprocessor systems. TWCA was originally introduced for systems with fixed
priority preemptive (FPP) schedulers and has since been extended to
fixed-priority nonpreemptive (FPNP) and earliest-deadline-first (EDF)
schedulers. Our generic analysis is based on an abstract model that
characterizes the exact properties needed to make TWCA applicable to any
system model. Our results are formalized and checked using the Coq proof
assistant along with the Prosa schedulability analysis library. Our
experience with formalizing real-time systems analyses shows that this is not only a way to increase confidence in our claimed results: the discipline required to obtain machine-checked proofs helps in understanding the exact assumptions required by a given analysis, its key intermediate steps, and how this analysis can be generalized.
Pascal Fradet, Xiaojie Guo, Jean-François Monin, and Sophie Quinton. A
generalized digraph model for expressing dependencies. In Proc.
of the 26th Int. Conference on Real-Time Networks and Systems, RTNS'18,
2018.
Key-words: Real-Time Systems, Schedulability analysis, Digraph model,
Correctness proofs.
Abstract: In the context of computer assisted verification of
schedulability analyses, very expressive task models are useful to factorize
the correctness proofs of as many analyses as possible. The digraph task
model seems a good candidate due to its powerful expressivity. Alas, its
ability to capture dependencies between arrival and execution times of jobs
of different tasks is very limited. We propose here a task model that
generalizes the digraph model and its corresponding analysis for
fixed-priority scheduling with limited preemption. A task may generate
several types of jobs, each with its own worst-case execution time,
priority, non-preemptable segments and maximum jitter. We present the
correctness proof of the analysis in a way amenable to its formalization in
the Coq proof assistant. Our objective (still in progress) is to formally
certify the analysis for that general model such that the correctness proof
of a more specific (standard or novel) analysis boils down to specifying and
proving its translation into our model. Furthermore, expressing many
different analyses in a common framework paves the way for formal
comparisons.
Dmitry Burlyaev, Pascal Fradet and Alain Girault. A
static analysis for the minimization of voters in fault-tolerant circuits. Leibniz Transactions on Embedded Systems, LITES,
Volume 5(1):
04:1-04:26, 2018.
Key-words: Fault-tolerance, digital circuits, triple modular
redundancy, voting, static analysis, optimization.
Abstract: We present a formal approach to minimize the number of
voters in triple-modular redundant (TMR) sequential circuits. Our technique
actually works on a single copy of the TMR circuit and considers a large
class of fault models of the form "at most one Single-Event Upset (SEU) or
Single-Event Transient (SET) every k clock cycles". Verification-based voter
minimization guarantees that the resulting TMR circuit (i) is fault tolerant
to the soft-errors defined by the fault model and (ii) is functionally
equivalent to the initial TMR circuit. Our approach operates at the logic
level and takes into account the input and output interface specifications
of the circuit. Its implementation makes use of graph traversal algorithms,
fixed-point iterations, and binary decision diagrams (BDD). Experimental
results on the ITC'99 benchmark suite indicate that our method significantly
decreases the number of inserted voters, yielding a hardware reduction of up
to 55% and a clock frequency increase of up to 35% compared to full TMR. As
our experiments show, if the SEU fault-model is replaced with the stricter
fault-model of SET, it has a minor impact on the number of removed voters.
On the other hand, BDD-based modelling of SET effects represents a more
complex task than the modelling of an SEU as a bit-flip. We propose
solutions for this task and explain the nature of encountered problems. We
address scalability issues arising from formal verification with
approximations and assess their efficiency and precision.
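For readers unfamiliar with TMR, a voter simply takes the bit-wise majority of the three redundant copies. The toy sketch below (illustrative only, unrelated to the paper's BDD-based analysis) shows a single SEU being masked.

```python
# Sketch: bit-wise majority voting in triple modular redundancy (TMR).
# Three copies of a state word are compared; a single corrupted copy
# (e.g., an SEU flipping one bit) is outvoted. Values are illustrative.

def vote(a, b, c):
    """Bit-wise majority of three integers: (a&b) | (a&c) | (b&c)."""
    return (a & b) | (a & c) | (b & c)

state = 0b1011_0010
copies = [state, state, state]
copies[1] ^= 0b0000_0100       # SEU: flip one bit in copy 1

assert vote(*copies) == state  # the fault is masked by the voter
print(bin(vote(*copies)))      # prints 0b10110010
```

The paper's contribution is to prove, by static analysis, at which points such voters can be removed without losing the masking guarantee.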
Xiaojie Guo, Sophie Quinton, Pascal Fradet, and Jean-François Monin. Work
In Progress: Toward a Coq-certified Tool for the Schedulability Analysis
of Tasks with Offsets. In 2017 IEEE Real-Time Systems Symposium, RTSS'17,
pages 1-3, Paris, France, December 2017.
Key-words: Real-Time Systems Analysis, CAN bus, Certification,
Coq Proof Assistant.
Abstract: This paper presents the first steps toward a formally
proven tool for schedulability analysis of tasks with offsets. We formalize
and verify the seminal response time analysis of Tindell by extending the
Prosa proof library, which is based on the Coq proof assistant. Thanks to
Coq's extraction capabilities, this will allow us to easily obtain a
certified analyzer. Additionally, we want to build a Coq certifier that can
verify the correctness of results obtained using related (but uncertified),
already existing analyzers. Our objective is to investigate the advantages
and drawbacks of both approaches, namely the certified analysis and the
certifier. The work described in this paper as well as its continuation is
intended to enrich the Prosa library.
Pascal Fradet, Alain Girault, Leila Jamshidian, Xavier Nicollin, and Arash
Shafiei. Lossy
channels in a dataflow model of computation. In Principles
of Modeling - Essays Dedicated to Edward A. Lee on the Occasion of His
60th Birthday, pages 254-266, 2018.
Key-words: Dataflow models of computation, SDF, BPDF,
reconfiguration, retransmission protocols.
Abstract: In this paper, we take into account lossy channels and
retransmission protocols in dataflow models of computation (MoCs).
Traditional dataflow MoCs cannot easily cope with lossy channels, due to the
strict notion of iteration that does not allow the re-emission of lost or
damaged tokens. A general dataflow graph with several lossy channels will
indeed require several phases, each of them corresponding to a portion of
the initial graph's schedule. Correctly identifying and sequencing these
phases is a challenge. We present a translation of a dataflow graph, written
in the well-known Synchronous DataFlow (SDF) MoC of Lee and Messerschmitt,
but where some channels may be lossy, into the Boolean Parametric DataFlow
(BPDF) MoC.
Adnan Bouakaz, Pascal Fradet and Alain Girault. Symbolic
Analyses of Dataflow Graphs. ACM
Transactions on Design Automation of Electronic Systems, TODAES,
Volume 22(2):
39:1-39:25, 2017.
Key-words: Synchronous dataflow graphs, as soon as possible
scheduling, buffer minimization, throughput, latency, symbolic analysis.
Abstract: The synchronous dataflow model of computation is widely
used to design embedded stream-processing applications under strict
quality-of-service requirements (e.g., buffering size, throughput,
input-output latency). The required analyses can either be performed at
compile time (for design space exploration) or at run-time (for resource
management and reconfigurable systems). However, these analyses have an
exponential time complexity, which may cause a huge run-time overhead or
make design space exploration unacceptably slow.
In this paper, we argue that symbolic analyses are more appropriate since
they express the system performance as a function of parameters (i.e., input
and output rates, execution times). Such functions can be quickly evaluated
for each different configuration or checked w.r.t. different
quality-of-service requirements. We provide symbolic analyses for computing
the maximal throughput of acyclic synchronous dataflow graphs, the minimum
required buffers for which as soon as possible scheduling achieves this
throughput, and finally the corresponding input-output latency of the graph.
The paper first investigates these problems for a single parametric edge.
The results are extended to general acyclic graphs using linear
approximation techniques. We assess the proposed analyses experimentally on
both synthetic and real benchmarks.
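As a toy illustration of why closed-form, parameter-dependent expressions are cheap to re-evaluate per configuration (this is a naive sequential bound, not the paper's symbolic ASAP throughput analysis; all rates and execution times are made up):

```python
from math import gcd

# Toy model: one edge a -(p,q)-> b. Per SDF iteration, a fires q/g
# times and b fires p/g times, with g = gcd(p, q). Under a purely
# sequential schedule (no overlap), the iteration latency is the sum
# of all firing times. Rates p, q and times ta, tb are parameters.

def sequential_iteration_time(p, q, ta, tb):
    g = gcd(p, q)
    return (q // g) * ta + (p // g) * tb

# Evaluating the expression for each configuration is immediate,
# whereas re-running an exact exponential analysis would not be.
for (p, q, ta, tb) in [(2, 3, 5, 4), (4, 6, 5, 4), (1, 1, 2, 2)]:
    print(p, q, sequential_iteration_time(p, q, ta, tb))
```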
Adnan Bouakaz, Pascal Fradet and Alain Girault. A Survey of Parametric Dataflow Models of
Computation. ACM Transactions on Design Automation of
Electronic Systems, TODAES,
Volume 22(2):
38:1-38:25, 2017.
Key-words: Dataflow graphs, reconfiguration, parameterization, static
analysis.
Abstract: Dataflow models of computation (MoCs) are widely used to
design embedded signal processing and streaming systems. Dozens of dataflow MoCs have been proposed in the last few decades. More recently, several
parametric dataflow MoCs have been presented as an interesting trade-off
between analyzability and expressiveness. They offer a controlled form of
dynamism under the form of parameters (e.g., parametric rates), along with
run-time parameter configuration. This survey provides a comprehensive
description of the existing parametric dataflow MoCs (constructs,
constraints, properties, static analyses) and compares them using a common
example. The main objectives are to help designers of streaming applications
to choose the most suitable model for their needs and to pave the way for
the design of new parametric MoCs.
Adnan Bouakaz, Pascal Fradet and Alain Girault. Symbolic
Buffer Sizing for Throughput-Optimal Scheduling of Dataflow Graphs.
In Proc. of Real-Time Embedded
Technology & Applications Symposium, RTAS'16,
April 2016.
Key-words: Dataflow programming, parametric model, symbolic
analyses, throughput, buffer size.
Abstract: The synchronous dataflow model is widely used to design
real-time streaming applications which must assure a minimum
quality-of-service. A benefit of that model is to allow static analyses to
predict and guarantee timing (e.g., throughput) and buffering requirements
of an application. Performance analyses can either be performed at compile
time (for design space exploration) or at run-time (for resource management
and reconfigurable systems). However, these algorithms, which often have an
exponential time complexity, may cause a huge runtime overhead or make
design space exploration unacceptably slow. In this paper, we argue that
symbolic analyses are more appropriate since they express the system
performance as a function of parameters (i.e., input and output rates,
execution times). Such functions can be quickly evaluated for each different
configuration or checked w.r.t. many different non-functional requirements.
We first provide a symbolic expression of the maximal throughput of acyclic
synchronous dataflow graphs. We then perform an analytic and exact study of
the minimum buffer sizes needed to achieve this maximal throughput for a
single parametric edge graph. Based on these investigations, we define
symbolic analyses that approximate the minimum buffer sizes needed to
achieve maximal throughput for acyclic graphs. We assess the proposed
analyses experimentally on both synthetic and real benchmarks.
Adnan Bouakaz, Pascal Fradet and Alain Girault. Symbolic
Analysis of Dataflow Graphs (extended version), INRIA
Research Report nº 8742, January 2016.
Key-words: Dataflow programming, parametric model, symbolic
analyses, throughput, buffer size, latency.
Abstract: Synchronous dataflow graphs are widely used to design
digital signal processing and real-time streaming applications. A benefit of
that model is to allow static analyses to predict and guarantee the
performances (e.g., throughput, memory requirements) of an application.
Performance analyses can either be performed at compile time (for design
space exploration) or at run-time (for resource management and
reconfigurable systems). However, these algorithms, which often have an
exponential time complexity, may cause a huge run-time overhead or make
design space exploration unacceptably slow. In this paper, we argue that
symbolic analyses are more appropriate since they express the system
performance as a function of parameters (i.e., input and output rates,
execution times). Such functions can be quickly evaluated for each different
configuration or checked w.r.t. many different non-functional requirements.
We first provide a symbolic expression of the maximal throughput of acyclic
synchronous dataflow graphs. We then perform an analytic and exact study of
the minimum buffer sizes needed to achieve this maximal throughput for a
single parametric edge graph. Based on this investigation, we define
symbolic analyses that approximate the minimum buffer sizes needed to
achieve maximal throughput for acyclic graphs. We assess the proposed
analyses experimentally on both synthetic and real benchmarks.
Dmitry Burlyaev and Pascal Fradet. Formal
Verification of Automatic Circuit Transformations for Fault-Tolerance.
In Proc. of Formal Methods in
Computer-Aided Design, FMCAD'15,
September 2015.
Key-words: Fault-tolerance, time redundancy, circuit
transformation, semantics, certification, Coq.
Abstract: We present a language-based approach to certify
fault-tolerance techniques for digital circuits. Circuits are expressed in a
gate-level Hardware Description Language (HDL), fault-tolerance techniques
are described as automatic circuit transformations in that language, and
fault-models are specified as particular semantics of the HDL. These
elements are formalized in the Coq proof assistant and the properties,
ensuring that for all circuits their transformed version masks all faults of
the considered fault-model, can be expressed and proved. In this article, we
consider Single-Event Transients (SETs) and fault-models of the form "at most 1 SET within k clock cycles". The primary motivation of this work was to certify the Double-Time Redundant Transformation (DTR), a new technique proposed recently. The DTR transformation combines double-time redundancy, micro-checkpointing, rollback, several execution modes and input/output buffers. That intricacy required a formal proof to make sure that no single point of failure existed. The correctness of DTR as well as two other
transformations for fault-tolerance (TMR & TTR) have been proved in Coq.
Dmitry Burlyaev, Pascal Fradet and Alain Girault. Time-redundancy
transformations for adaptive fault-tolerant circuits. In Proc.
of NASA/ESA Conference on Adaptive Hardware and Systems, AHS'15,
June 2015.
Key-words: Fault-tolerance, single-event transient, circuit
transformation, time redundancy, dynamic redundancy.
Abstract: We present a novel logic-level circuit transformation
technique for the automatic insertion of fault-tolerance properties. The
transformations, based on time-redundancy, allow dynamic changes of the
level of redundancy without interrupting the computation. The proposed
concept of dynamic time redundancy permits adaptive circuits whose
fault-tolerance properties can be "on-the-fly" traded-off for throughput.
The approach is technologically independent and does not require any
specific hardware support. Experimental results on the ITC'99 benchmark
suite indicate that the benefits of our method grow with the combinational
size of the circuit. Dynamic double and triple time redundant transformations generate circuits 1.7 to 2.9 times smaller than full Triple-Modular Redundancy (TMR). This transformation is a good alternative to TMR for logic-intensive safety-critical circuits where low hardware overhead or only temporary fault-tolerance guarantees are needed.
Dmitry Burlyaev, Pascal Fradet and Alain Girault. Automatic Time-Redundancy Transformation for
Fault-Tolerant Circuits. In Proc.
of International Symposium on Field-Programmable Gate Arrays, FPGA'15, February
2015.
Key-words: Fault-tolerance, single-event transient, circuit
transformation, time redundancy, check-pointing.
Abstract: We present a novel logic-level circuit transformation
technique for automatic insertion of fault-tolerance properties. Our
transformation uses double-time redundancy coupled with micro-checkpointing,
rollback and a speedup mode. To the best of our knowledge, our solution is
the only technologically independent scheme capable of correcting the multiple
bit-flips caused by a Single-Event Transient (SET) with double-time
redundancy. The approach allows soft-error masking (within the considered
fault model) and keeps the same input/output behavior regardless of error
occurrences. Our technique trades off circuit throughput for a small
hardware overhead. Experimental results on the ITC'99 benchmark suite
indicate that the benefits of our methods grow with the combinational size
of the circuit. The hardware overhead is 2.7 to 6.1 times smaller than full
Triple Modular Redundancy (TMR), at the cost of halving the throughput. We do not
consider configuration memory corruption and our approach is readily
applicable to Flash-based FPGAs. Our method does not require any specific
hardware support and is an interesting alternative to TMR for logic
intensive designs.
Vagelis Bebelis, Pascal Fradet and Alain Girault. A
framework to schedule parametric dataflow applications on many-core
platforms. In Proc. of the
Conference on Languages, Compilers, and Tools for Embedded Systems, LCTES'14, June 2014.
Key-words: Dataflow programming, parametric model,
quasi-static scheduling.
Abstract: Dataflow models, such as SDF, have been effectively used to
program streaming applications while ensuring their liveness and
boundedness. Yet, industry is struggling to design the next generation
of high-definition video applications using these models. Such applications
demand new features such as parameters to express dynamic input/output rates
and topology modifications. Their implementation on modern many-core
platforms is a major challenge. We tackle these problems by proposing a
generic and flexible framework to schedule streaming applications designed
in a parametric dataflow model of computation. We generate parallel as soon
as possible (ASAP) schedules targeted to the new STHORM many-core platform
of STMicroelectronics. Furthermore, these schedules can be customized using
user-defined ordering and resource constraints. The parametric dataflow
graph is associated with generic or user-defined specific constraints aimed
at minimizing timing, buffer sizes, power consumption, or other criteria.
The scheduling algorithm executes with minimal overhead and can be adapted
to different scheduling policies just by adding some constraints. The safety
of both the dataflow graph and constraints can be checked statically and all
schedules are guaranteed to be bounded and deadlock free. We illustrate the
scheduling capabilities of our approach using a real world application: the
VC-1 video decoder for high definition video streaming.
Dmitry Burlyaev, Pascal Fradet and Alain Girault. Verification-guided
Voter Minimization in Triple-Modular Redundant Circuits. In Proc. of Design, Automation and Test in Europe
Conference and Exhibition, DATE'14,
2014.
Key-words: Fault-tolerance, digital circuits, triple modular
redundancy, voting, static analysis, optimization.
Abstract: We present a formal approach to minimize the number of
voters in triple-modular redundant sequential circuits. Our technique
actually works on a single copy of the circuit and considers a user-defined
fault model (under the form "at most 1 bit-flip every k clock cycles").
Verification-based voter minimization guarantees that the resulting circuit
(i) is fault tolerant to the soft-errors defined by the fault model and (ii)
is functionally equivalent to the initial one. Our approach operates at the
logic level and takes into account the input and output interface
specifications of the circuit. Its implementation makes use of graph
traversal algorithms, fixed-point iterations, and BDDs. Experimental results
on the ITC'99 benchmark suite indicate that our method significantly
decreases the number of inserted voters which entails a hardware reduction
of up to 55% and a clock frequency increase of up to 35% compared to full
TMR. We address scalability issues arising from formal verification with
approximations and assess their efficiency and precision.
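As background for the entry above, the voter that TMR triplicates around is just a bit-level majority function; a minimal Python sketch (an illustration of the principle only, not the paper's netlist-level construction or its minimization analysis):

```python
def majority(a: bool, b: bool, c: bool) -> bool:
    """Majority vote over three redundant copies of a signal:
    the output is correct as long as at most one copy is faulty."""
    return (a and b) or (a and c) or (b and c)

# A single corrupted copy is masked by the two correct ones.
assert majority(True, True, False) is True   # copy c flipped, output still True
assert majority(False, True, False) is False # copy b flipped, output still False
```

Voter minimization then asks where this function can be omitted without letting a fault of the considered model propagate to the outputs.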
Vagelis Bebelis, Pascal Fradet, Alain Girault and Bruno Lavigueur. BPDF:
A Statically Analyzable Dataflow Model with Integer and Boolean Parameters.
In International Conference on Embedded
Software, EMSOFT'13,
September 2013.
Key-words: Dataflow programming, parametric model,
boundedness, liveness, dynamic reconfiguration.
Abstract: Dataflow programming models are well-suited to program
many-core streaming applications. However, many streaming applications have
a dynamic behavior. To capture this behavior, parametric dataflow models
have been introduced over the years. Still, such models do not allow the
topology of the dataflow graph to change at runtime, a feature that is also
required to program modern streaming applications. To overcome these
restrictions, we propose a new model of computation, the Boolean
Parametric Data Flow (BPDF) model which combines integer parameters
(to express dynamic rates) and boolean parameters (to express the activation
and deactivation of communication channels). High dynamism is provided by
integer parameters which can change at each basic iteration and boolean
parameters which can even change within the iteration. The major challenge
with such dynamic models is to guarantee liveness and boundedness. We
present static analyses which statically ensure the liveness and the
boundedness of BPDF graphs. We also introduce a scheduling methodology to
implement our model on highly parallel platforms and demonstrate our
approach using a video decoder case study.
Vagelis Bebelis, Pascal Fradet, Alain Girault and Bruno Lavigueur. BPDF:
Boolean Parametric Data Flow. INRIA Research Report nº 8333,
July 2013.
Key-words: Dataflow model of computation, dynamicity,
parameters, liveness, boundedness, scheduling.
Abstract: Dataflow programming models are well-suited to program
many-core streaming applications. However, many streaming applications have
a dynamic behavior. To capture this behavior, parametric dataflow models
have been introduced over the years. Still, such models do not allow the
topology of the dataflow graph to change at runtime, a feature that is also
required to program modern streaming applications. To overcome these
restrictions, we propose a new model of computation, the Boolean Parametric
Data Flow (BPDF) model which combines integer parameters (to express dynamic
rates) and boolean parameters (to express the activation and deactivation of
communication channels). High dynamicity is provided by integer parameters
which can change at each basic iteration and boolean parameters which can
even change within the iteration. The major challenge with such dynamic
models is to guarantee liveness and boundedness. We present static analyses
which statically ensure the liveness and the boundedness of BPDF graphs. We
also introduce a scheduling methodology to implement BPDF graphs on highly
parallel platforms. Finally, we demonstrate our approach using a video
decoder case study.
Pascal Fradet, Alain Girault and Peter Poplavko. SPDF:
A Schedulable Parametric DataFlow MoC. In Proc.
of Design, Automation and Test in Europe Conference and Exhibition,
DATE'12, March
2012.
Key-words: dataflow programming, parametric rates, boundedness,
liveness, quasi-static scheduling.
Abstract: Dataflow programming models are suitable to express
multi-core streaming applications. The design of high quality embedded
systems in that context requires static analysis to ensure the liveness
and bounded memory of the application. However, many streaming
applications have a dynamic behavior. Previously proposed dataflow
models for dynamic applications either provide no static guarantees or
provide them only at the cost of significant restrictions in expressive power
or automation. To overcome these restrictions, we propose the schedulable
parametric dataflow (SPDF) model. We present static analyses and
a quasi-static scheduling algorithm. We demonstrate our approach using a
video decoder case study.
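For context, the static boundedness analysis this line of work builds on is the classical SDF balance-equation check; a toy Python sketch of it for fixed integer rates and a connected graph (an assumed illustration only, not the SPDF algorithm itself, which handles symbolic parametric rates):

```python
from fractions import Fraction
from math import lcm

def repetition_vector(edges, actors):
    """Solve the SDF balance equations r[src]*prod = r[dst]*cons.
    edges: list of (src, dst, prod_rate, cons_rate); actors: actor names.
    Returns the smallest positive integer repetition vector, or None
    when the graph is inconsistent (i.e., buffers cannot stay bounded).
    Assumes the graph is connected."""
    rate = {actors[0]: Fraction(1)}
    changed = True
    while changed:                      # propagate relative firing rates
        changed = False
        for src, dst, prod, cons in edges:
            if src in rate and dst not in rate:
                rate[dst] = rate[src] * prod / cons
                changed = True
            elif dst in rate and src not in rate:
                rate[src] = rate[dst] * cons / prod
                changed = True
    for src, dst, prod, cons in edges:  # every edge must be balanced
        if rate[src] * prod != rate[dst] * cons:
            return None
    scale = lcm(*(f.denominator for f in rate.values()))
    return {a: int(f * scale) for a, f in rate.items()}

# A produces 2 tokens per firing, B consumes 3: A fires 3 times per 2 of B.
print(repetition_vector([("A", "B", 2, 3)], ["A", "B"]))  # {'A': 3, 'B': 2}
```

In SPDF the rates are symbolic expressions over parameters, so the analogous analysis must hold for all admissible parameter values rather than one numeric solution.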
Simplice Djoko Djoko, Rémi Douence, Pascal Fradet. Aspects
preserving properties. In Science of Computer Programming,
SCP, vol. 77, no 3, p. 393-422, July 2012.
Key-words: Aspect-oriented languages, weaving, semantics, temporal
properties, proofs.
Abstract: Aspect Oriented Programming can arbitrarily distort the
semantics of programs. In particular, weaving can invalidate crucial
safety and liveness properties of the base program. In this article, we
identify categories of aspects that preserve some classes of properties.
Specialized aspect languages are then designed to ensure that aspects
belong to a specific category and, therefore, that woven programs will
preserve the corresponding properties. Our categories of aspects, inspired
by Katz's, comprise observers, aborters, confiners and weak intruders.
Observers introduce new instructions and a new local state but they do not
modify the base program's state and control-flow. Aborters are observers
which may also abort executions. Confiners only ensure that executions
remain in the reachable states of the base program. Weak intruders are
confiners between two advice executions. These categories (along with two
others) are defined formally based on a language independent abstract
semantics framework. The classes of preserved properties are defined as
subsets of LTL for deterministic programs and CTL* for
non-deterministic ones. We can formally prove that, for any program, the
weaving of any aspect in a category preserves any property in the related
class. We present, for most aspect categories, a specialized aspect
language which ensures that any aspect written in that language belongs to
the corresponding category. It can be proved that these languages preserve
the corresponding classes of properties by construction. The aspect
languages share the same expressive pointcut language and are designed
w.r.t. a common imperative base language. Each category and language is
illustrated by simple examples. The appendix provides semantics and two
instances of proofs: the proof of preservation of properties by a category
and the proof that all aspects written in a language belong to the
corresponding category.
Pascal Fradet, Alain Girault and Peter Poplavko. SPDF: A Schedulable Parametric DataFlow MoC
(extended version). INRIA Research Report nº 7828,
December 2011.
Key-words: dataflow programming, parametric rates, boundedness,
liveness, quasi-static scheduling.
Abstract: Dataflow programming models are suitable to express
multi-core streaming applications. The design of high-quality embedded
systems in that context requires static analysis to ensure the liveness
and bounded memory of the application. However, many streaming
applications have a dynamic behavior. Previously proposed dataflow
models for dynamic applications either provide no static guarantees or
provide them only at the cost of significant restrictions in expressive power
or automation. To overcome these restrictions, we propose the schedulable
parametric dataflow (SPDF) model of computation. We present
static analyses and a quasi-static scheduling algorithm. We demonstrate
our approach using a video decoder case study.
Pascal Fradet, Jean-Louis Giavitto and Marnes Hoff. Refinement
of Chemical Programs using Strategies. In International
Workshop on Strategies in Rewriting, Proving, and Programming, IWS'10,
July 2010.
Key-words: Chemical programming, program refinement.
Abstract: The chemical reaction metaphor describes computation in
terms of a chemical solution where molecules interact freely according to
reaction rules. Chemical solutions are represented by multisets of
elements and reactions by rewrite rules which consume and produce new
elements according to conditions. A major drawback is that a reasonably
efficient implementation of the language is not straightforward. The
sources of inefficiency are the selection of elements and the ordering of
reactions, which are left completely unspecified. Further, the lack of
structured data makes these choices difficult to specify. The approach sketched
in this abstract aims at refining Gamma programs by:
- structuring the multiset using a data type defining neighborhood
relations between elements;
- describing the selection of elements according to their neighborhood
and the previous selection;
- specifying the evaluation strategy (i.e., the application of rules
and termination).
Pascal Fradet and Stéphane Hong Tuan Ha. Aspects of Availability:
Enforcing Timed Properties to Prevent Denial of Service. In Science
of Computer Programming, SCP, In Press,
2010.
Key-words: Aspect-oriented programming, resource management,
availability, verification, denial of service.
Abstract: We propose a domain-specific aspect
language to prevent denial of service caused by resource management. Our
aspects specify availability policies by enforcing time limits in the
allocation of resources. In our language, aspects can be seen as formal
timed properties on execution traces. Programs and aspects are specified
as timed automata and the weaving process as an automata product. The
benefit of this formal approach is two-fold: the user keeps the semantic
impact of weaving under control and (s)he can use a model-checker to
optimize the woven program and verify availability properties. This
article presents the main approach (programs, aspects, weaving) formally
using timed safety automata. The specification of resources, optimizations
and verification are sketched in a more intuitive fashion. Even if a
concrete implementation remains as future work, we address some high-level
implementation issues and illustrate the approach by small examples and a
case study.
Jean-Pierre Banâtre, Pascal Fradet and Yann Radenac. Classical
Coordination Mechanisms in the Chemical Model. In From
Semantics to Computer Science: Essays in Honor of Gilles Kahn,
Cambridge University Press, pages 29-50, 2009.
Key-words: Chemical programming, higher-order conditional multiset
rewriting, coordination mechanisms, parallel programming models, Petri nets,
Kahn process networks.
Abstract: Originally, the chemical model of computation has been
proposed as a simple and elegant parallel programming paradigm. Data is
seen as ``molecules'' and computation as ``chemical reactions'': if some
molecules satisfy a predefined reaction condition, they are replaced by
the ``product'' of the reaction. When no reaction is possible, a normal
form is reached and the program terminates. In this paper, we describe
classical coordination mechanisms and parallel programming models in the
chemical setting. All these examples put forward the simplicity and
expressivity of the chemical paradigm. We pay particular attention to
the chemical description of the simple and successful parallel computation
model known as Kahn Process Networks.
Simplice Djoko Djoko, Rémi Douence, Pascal Fradet. Aspects
Preserving Properties. INRIA Research Report nº 7155,
December 2009.
Key-words: Aspect-oriented languages, weaving, domain-specific
languages, semantics, temporal properties, proofs.
Abstract: Aspect Oriented Programming can arbitrarily distort the
semantics of programs. In particular, weaving can invalidate crucial safety
and liveness properties of the base program. In this article, we identify
categories of aspects that preserve some classes of properties. Specialized
aspect languages can then be designed to ensure that aspects belong to a
specific category and therefore that woven programs will preserve the
corresponding properties. Our categories of aspects, inspired by Katz's,
comprise observers, aborters and confiners. Observers introduce new
instructions and a new local state but they do not modify the base program's
state and control-flow. Aborters are observers which may also abort
executions. Confiners only ensure that executions remain in the reachable
states of the base program. These categories (along with three others) are
defined precisely based on a language independent abstract semantics
framework. The classes of preserved properties are defined as subsets of LTL
for deterministic programs and CTL* for non-deterministic ones. We can
formally prove that, for any program, the weaving of any aspect in a
category preserves any property in the related class.We present, for most
aspect categories, a specialized aspect language which ensures that any
aspect written in that language belongs to the corresponding category. It
can be proved that these languages preserve the corresponding classes of
properties by construction. The aspect languages share the same expressive
pointcut language and are designed w.r.t. a common imperative base
language. Each category and language is illustrated by simple examples. The
appendix provides semantics and examples of proofs: the proof of
preservation of properties by a category and the proof that all aspects
written in a language belong to the corresponding category.
Simplice Djoko Djoko, Rémi Douence and Pascal Fradet. Specialized
Aspect Languages Preserving Classes of Properties. In Proc.
of the Sixth IEEE International Conference on Software Engineering and
Formal Methods, SEFM'08, November 2008.
Key-words: Aspect-oriented languages, weaving, domain-specific
languages, semantics, temporal properties, proofs.
Abstract: Aspect oriented programming can arbitrarily distort the
semantics of programs. In particular, weaving can invalidate crucial safety
and liveness properties of the base program. In previous work, we have
identified categories of aspects that preserve classes of temporal
properties. We have formally proved that, for any program, the weaving of
any aspect in a category preserves all properties in the related class. In
this article, after a summary of our previous work, we present, for each
aspect category, a specialized aspect language which ensures that any aspect
written in that language belongs to the corresponding category. It can be
proved that these languages preserve the corresponding classes of properties
by construction. The aspect languages share the same expressive pointcut
language and are designed w.r.t. a common imperative base language. Each
language is illustrated by simple examples. We also prove that all aspects
written in one of the languages belong to the corresponding category.
Simplice Djoko Djoko, Rémi Douence and Pascal Fradet. Aspects
Preserving Properties. In ACM SIGPLAN Symposium on Partial
Evaluation and Semantics-Based Program Manipulation, PEPM'08,
pages 135-145, January 2008.
Key-words: Aspect-oriented languages, weaving, semantics, temporal
properties, proofs.
Abstract: Aspect Oriented Programming can arbitrarily distort the
semantics of programs. In particular, weaving can invalidate crucial safety
and liveness properties of the base program. In this article, we identify
categories of aspects that preserve some classes of properties. It is then
sufficient to check that an aspect belongs to a specific category to know
which properties will remain satisfied by woven programs. Our categories of
aspects, inspired by Katz's, comprise observers, aborters and confiners.
Observers introduce new instructions and a new local state but they do not
modify the base program's state and control-flow. Aborters are observers
which may also abort executions. Confiners only ensure that executions
remain in the reachable states of the base program. These categories (along
with three others) are defined precisely based on a language independent
abstract semantics framework. The classes of properties are defined as
subsets of LTL for deterministic programs and CTL* for non-deterministic
ones. We can formally prove that, for any program, the weaving of any aspect
in a category preserves any property in the related class. We give examples
to illustrate each category and prove the preservation of one class of
properties by one category of aspects in the appendix.
Tolga Ayav, Pascal Fradet and Alain Girault. Implementing
Fault-Tolerance by Automatic Program Transformations. In ACM
Transactions on Embedded Computing Systems, TECS,
(to appear).
Key-words: Fault-tolerance, real-time systems, heartbeating,
checkpointing, program transformations, correctness proofs.
Abstract: We present a formal approach to
implement fault-tolerance in real-time embedded systems. The initial
fault-intolerant system consists of a set of independent periodic tasks
scheduled onto a set of fail-silent processors connected by a reliable
communication network. We transform the tasks such that, assuming the
availability of an additional spare processor, the system tolerates one
failure at a time (transient or permanent). Failure detection is
implemented using heartbeating, and failure masking using checkpointing
and rollback. These techniques are described and implemented by automatic
program transformations on the tasks' programs. The proposed formal
approach to fault-tolerance by program transformations highlights the
benefits of separation of concerns. It allows us to establish correctness
properties and to compute optimal values of parameters to minimize
fault-tolerance overhead. We also present an implementation of our method,
to demonstrate its feasibility and its efficiency.
Pascal Fradet and Stéphane Hong Tuan Ha. Aspects of Availability.
In Proc. of the Sixth International Conference on Generative
Programming and Component Engineering, GPCE'07,
pages 165-174, October 2007.
Key-words: Aspect-oriented programming, resource management,
availability, verification, denial of service.
Abstract: In this paper, we propose a
domain-specific aspect language to prevent denial of service caused
by resource management. Our aspects specify availability policies by
enforcing time limits in the allocation of resources. In our language,
aspects can be seen as formal timed properties on execution traces.
Programs and aspects are specified as timed automata and the weaving
process as an automata product. The benefit of this formal approach is
two-fold: the user keeps the semantic impact of weaving under control and
(s)he can use a model-checker to optimize the woven program and verify
availability properties.
Massimo Tivoli, Pascal Fradet, Alain Girault and Gregor Goessler. Adaptor
Synthesis for Real-Time Components. In Tools and Algorithms
for the Construction and Analysis of Systems, 13th International
Conference, TACAS'07, Lecture Notes in
Computer Science, Volume 4424, Pages 185-200, 2007.
Key-words: Components, real-time, assembly, adaptation, Petri nets.
Abstract: Building a real-time system from
reusable or COTS components introduces several problems, mainly related to
compatibility, communication, and QoS issues. We propose an approach to
automatically derive adaptors in order to solve black-box integration
anomalies, when possible. We consider black-box components equipped with
an expressive interface that specifies the interaction behavior with the
expected environment, the component clock, as well as latency, duration,
and controllability of the component's actions. The principle of adaptor
synthesis is to coordinate the interaction behavior of the components in
order to avoid possible mismatches, such as deadlocks. Each adaptor models
the correct assembly code for a set of components. Our approach is based
on labeled transition systems and Petri nets, and is implemented in a tool
called SynthesisRT. We illustrate it through a case study concerning a
remote medical care system.
Rémi Douence and Pascal Fradet.
The next 700 Krivine Machines. In
Higher-Order and Symbolic Computation, HOSC,
20(3), 2007.
Key-words: Functional programming, Krivine machine, abstract
machines, program transformation, compilation, functional language
implementations.
Abstract: The Krivine machine is a simple and natural
implementation of the normal weak-head reduction strategy for pure
lambda-terms. While its original description has remained unpublished,
this machine has served as a basis for many variants, extensions and
theoretical studies. In this paper, we present the Krivine machine and
some well-known variants in a common framework. Our framework consists of
a hierarchy of intermediate languages that are subsets of the
lambda-calculus. The whole implementation process (compiler + abstract
machine) is described via a sequence of transformations all of which
express an implementation choice. We characterize the essence of the
Krivine machine and locate it in the design space of functional language
implementations. We show that, even within the particular class of Krivine
machines, hundreds of variants can be designed.
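A minimal Krivine machine is small enough to sketch directly; the following Python version over de Bruijn-indexed terms (an illustration only, not any of the paper's specific variants) implements the normal weak-head reduction strategy the abstract refers to, with the usual state of a term, an environment of closures, and a stack of pending arguments:

```python
# Terms in de Bruijn notation: ("var", n) | ("lam", body) | ("app", f, a)
def krivine(term):
    """Minimal Krivine machine: weak-head normalization of closed
    lambda-terms under the normal-order (call-by-name) strategy.
    State = (term, environment, stack); closures are (term, env) pairs."""
    env, stack = [], []
    while True:
        tag = term[0]
        if tag == "app":                 # push the argument as a closure
            stack.append((term[2], env))
            term = term[1]
        elif tag == "lam" and stack:     # bind one argument from the stack
            env = [stack.pop()] + env
            term = term[1]
        elif tag == "var":               # enter the closure bound to the index
            term, env = env[term[1]]
        else:                            # lambda, empty stack: whnf reached
            return term, env

# (\x. x) (\y. y) reduces to \y. y
ID = ("lam", ("var", 0))
print(krivine(("app", ID, ID))[0])  # ('lam', ('var', 0))
```

The variants surveyed in the paper arise from different choices at exactly these points: how environments are represented, when closures are built, and how far compilation pre-processes the term.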
Jean-Pierre Banâtre, Pascal Fradet and Yann Radenac.
Programming
Self-Organizing Systems with the Higher-Order Chemical Language. In
International Journal of Unconventional Computing, IJUC,
3(3):161-177, 2007.
Key-words: Chemical programming, higher-order conditional multiset
rewriting, autonomic systems, self-organization, self-protection,
self-healing, self-optimization.
Abstract: In a chemical language, computation is viewed as
abstract molecules reacting in an abstract chemical solution. Data can be
seen as molecules and operations as chemical reactions: if some molecules
satisfy a reaction condition, they are replaced by the result of the
reaction. When no reaction is possible within the solution, a normal form
is reached and the program terminates. In this article, we introduce HOCL,
the Higher-Order Chemical Language, where reaction rules are also
considered as molecules. We illustrate the application of HOCL to the
specification of self-organizing systems. We describe two case studies: an
autonomic mail system and the coordination of an image-processing pipeline
on a grid.
Pascal Fradet and Ralf Lämmel (editors). Special Issue on
Foundations of Aspect-Oriented Programming, Science of Computer
Programming, December 2006, Elsevier. (guest editors' introduction)
Key-words: Aspect-oriented programming, foundations, semantics,
typing, core aspect languages
Abstract: This SCP special issue collects articles that
make contributions to the foundations of aspect-oriented programming
(AOP). Aspects have been developed over the last 10 years to facilitate
the modularization of crosscutting concerns, i.e., concerns that crosscut
with the primary modularization of a program. This special issue further
continues the efforts of the annual FOAL workshop (Foundations of
Aspect-Oriented Languages) insofar as it supports and integrates
research on firm foundations of AOP. There are 5 contributions addressing
the following issues: (i) a fundamental core language for aspects; (ii)
subtleties of so-called around advice; (iii) aspects in higher-order
languages; (iv) the interaction between aspects and generics; (v) a notion
of aspects for reactive systems based on synchronous languages.
Simplice Djoko Djoko, Rémi Douence, Pascal Fradet and Didier Le Botlan.
Towards
a Common Aspect Semantic Base (CASB).
Deliverable 54, EU
Network of Excellence in AOSD, 2006.
Key-words: Aspect-oriented languages, AspectJ, Featherweight Java,
operational semantics.
Abstract: We gradually introduce formal semantic descriptions of
aspect mechanisms. We do our best to describe aspects as independently as
possible from the base language. For each aspect feature, we introduce the
minimal constructions of the base language necessary to plug aspects in.
We consider the weaving of a single aspect, in particular before,
after, and around aspects. We extend the model with
multiple aspects, cflow pointcuts, aspects on exceptions, aspect
deployment, aspect instantiation and stateful aspects. Our descriptions
could be applied to many different types of programming languages
(object-oriented, imperative, functional, logic, assembly, etc.). As an
illustration of our technique, we describe the semantics of an
AspectJ-like core aspect language (around aspects + cflow + aspect
association/instantiation) for a core Java language (Featherweight Java
with assignments).
Tolga Ayav, Pascal Fradet and Alain Girault. Implementing
Fault-Tolerance in Real-Time Systems by Program Transformations.
In Proc. of the Sixth ACM & IEEE International Conference on
Embedded Software, EMSOFT'06, pages
205-214, 2006.
Key-words: Fault-tolerance, real-time systems, heartbeating,
checkpointing, program transformations, correctness proofs.
Abstract: We present a formal approach to
implement fault-tolerance in real-time embedded systems. The initial
fault-intolerant system consists of a set of independent periodic tasks
scheduled onto a set of fail-silent processors connected by a reliable
communication network. We transform the tasks such that, assuming the
availability of an additional spare processor, the system tolerates one
failure at a time (transient or permanent). Failure detection is
implemented using heartbeating, and failure masking using checkpointing
and roll-back. These techniques are described and implemented by automatic
program transformations on the tasks' programs. The proposed formal
approach to fault-tolerance by program transformation highlights the
benefits of separation of concerns and allows us to establish correctness
properties.
Tolga Ayav, Pascal Fradet and Alain Girault.
Implementing
Fault-Tolerance in Real-Time Systems by Program Transformations.
INRIA Research Report nº 5919, May 2006.
Key-words: Fault-tolerance, real-time systems, heartbeating,
checkpointing, program transformations, correctness proofs.
Abstract: We present a formal approach to implement fault-tolerance
in real-time embedded systems. The initial fault-intolerant system
consists of a set of independent periodic tasks scheduled onto a set of
fail-silent processors connected by a reliable communication network. We
transform the tasks such that, assuming the availability of an additional
spare processor, the system tolerates one failure at a time (transient or
permanent). Failure detection is implemented using heartbeating, and
failure masking using checkpointing and roll-back. These techniques are
described and implemented by automatic program transformations on the
tasks' programs. The proposed formal approach to fault-tolerance by
program transformation highlights the benefits of separation of concerns
and allows us to establish correctness properties (including the meeting of
real-time constraints). We also present an implementation of our method,
to demonstrate its feasibility and its efficiency.
Jean-Pierre Banâtre, Pascal Fradet and Yann Radenac.
Generalized
Multisets for Chemical Programming. In
Mathematical Structures
in Computer Science, MSCS,
vol. 16(4), pages 557-580, 2006.
Key-words: Chemical programming, higher-order conditional multiset
rewriting, infinite multisets, negative multiplicities.
Abstract: Gamma is a programming model where computation can be
seen as chemical reactions between data represented as molecules floating
in a chemical solution. This model can be formalized as associative,
commutative, conditional rewritings of multisets where rewrite rules and
multisets represent chemical reactions and solutions, respectively. In
this article, we generalize the notion of multiset used by Gamma and
present applications through various programming examples. First,
multisets are generalized to include rewrite rules which become
first-class citizens. This extension is formalized by the gamma-calculus, a
chemical model that summarizes in a few rules the essence of higher-order
chemical programming. By extending the gamma-calculus with constants,
operators, types and expressive patterns, we build a higher-order chemical
programming language called HOCL. Finally, multisets are further
generalized by allowing elements to have infinite and negative
multiplicities. Semantics, implementation and applications of this
extension are considered.
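The basic execution model described above can be sketched as a naive interpreter: repeatedly pick molecules at random and react until no reaction applies (illustrative Python over ordinary finite multisets only, not the generalized multisets with infinite or negative multiplicities studied in the paper; the helper names are hypothetical):

```python
import random

def gamma_run(multiset, condition, action):
    """Naive Gamma interpreter for binary reactions: pick two molecules
    that satisfy the reaction condition, replace them by the products,
    and stop when no pair can react (the normal form is reached)."""
    sol = list(multiset)
    while True:
        pairs = [(i, j) for i in range(len(sol)) for j in range(len(sol))
                 if i != j and condition(sol[i], sol[j])]
        if not pairs:
            return sol                    # normal form: no reaction possible
        i, j = random.choice(pairs)       # chemical nondeterminism
        products = action(sol[i], sol[j])
        sol = [x for k, x in enumerate(sol) if k not in (i, j)] + products

# Maximum: the reaction x, y -> x when x >= y leaves only the largest molecule.
print(gamma_run([4, 1, 7, 3], lambda x, y: x >= y, lambda x, y: [x]))  # [7]
```

The higher-order step taken by HOCL is to let reaction rules themselves be molecules in the solution, so that programs can consume and produce other programs.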
Olivier Michel, Jean-Pierre Banâtre, Pascal Fradet and Jean-Louis
Giavitto.
Challenging Questions for the Rationale of Non-Classical
Programming Languages. In
International Journal of
Unconventional Computing, IJUC,
2(4):337-347, 2006.
Key-words: Unconventional programming languages, bio-inspired, DNA,
chemical or quantum models.
Abstract: In this position paper, we question the rationale behind
the design of unconventional programming languages. Our questions are
classified in four categories: the sources of inspiration for new
computational models, the process of developing a program, the forms and
the theories needed to write and understand non-classical programs and
finally the new computing media and the new applications that drive the
development of new programming languages.
Jean-Pierre Banâtre, Pascal Fradet and Yann Radenac.
Towards Grid
Chemical Coordination (short paper). In
Proc. of the 2006 ACM
Symposium on Applied Computing, SAC'06,
pages 445-446.
Key-words: Chemical programming, coordination, grid
programming.
Abstract: In chemical programming, data is represented by
(symbolic) molecules floating in a chemical solution, and computation is
performed by reactions between them. In this paper, that model is applied
to grid programming. Chemical programs are executed in a grid which is
itself specified by the chemical metaphor. Chemical programs and chemical
specifications self-organize and self-adapt to their environment.
Jean-Pierre Banâtre, Pascal Fradet, and Yann Radenac.
Chemical
Programming of Self-Organizing Systems. In
ERCIM News,
Vol. 64, Special Theme: Emergent Computing, January 2006.
Key-words: Chemical programming, self-organization,
self-protection, self-healing, self-optimization.
Abstract: Chemical programming relies on the
chemical metaphor:
data are seen as molecules and computations as chemical reactions. This
programming paradigm exhibits self-organizing properties and allows the
description of autonomic systems in an elegant way.
Jean-Pierre Banâtre, Pascal Fradet and Yann Radenac.
Generalized
Multisets for Chemical Programming. Research Report nº 5743, INRIA,
November 2005.
Key-words: Chemical programming, higher-order
conditional multiset rewriting, infinite multisets, negative
multiplicities.
Abstract: Gamma is a programming model where computation can be
seen as chemical reactions between data represented as molecules floating
in a chemical solution. This model can be formalized as associative,
commutative, conditional rewritings of multisets where rewrite rules and
multisets represent chemical reactions and solutions, respectively. In
this article, we generalize the notion of multiset used by Gamma and
present applications through various programming examples. First,
multisets are generalized to include rewrite rules which become
first-class citizens. This extension is formalized by the
$\gamma$-calculus, a chemical model that summarizes in a few rules the
essence of higher-order chemical programming. By extending the
$\gamma$-calculus with constants, operators, types and expressive
patterns, we build a higher-order chemical programming language called
HOCL. Finally, multisets are further generalized by allowing elements to
have infinite and negative multiplicities. Semantics, implementation and
applications of this extension are considered.
Jean-Pierre Banâtre, Pascal Fradet and Yann Radenac.
A Generalized
Higher-Order Chemical Computation Model with Infinite and Hybrid
Multisets. In
Pre-Proceedings of 1st International Workshop on
New Developments in Computational Models, DCM'05,
Vol. 135(3) of
ENTCS, pages 3-13, March 2006, Elsevier.
Key-words: Chemical programming, higher-order
conditional multiset rewriting, infinite multisets.
Abstract: Gamma is a programming model where computation is seen
as chemical reactions between data represented as molecules floating in a
chemical solution. Formally, this model is represented by the rewriting of
a multiset where rewrite rules model the chemical reactions. Recently, we
have proposed the $\gamma$-calculus, a higher-order extension, where the
rewrite rules are first-class citizens. The work presented in this paper
increases further the expressivity of the chemical model with generalized
multisets: multiplicities of elements may be infinite and/or negative.
Applications of these new notions are illustrated by some programming
examples.
Jean-Pierre Banâtre, Pascal Fradet and Yann Radenac.
Principles of
Chemical Programming. In
Proceedings of the 5th International
Workshop on Rule-Based Programming, RULE'04,
volume 124(1) of
ENTCS, pages 133-147, June 2005. Elsevier.
Key-words: Chemical programming, higher-order
conditional multiset rewriting, chemical metaphor, formal calculi.
Abstract: The chemical reaction metaphor describes computation in
terms of a chemical solution in which molecules interact freely according
to reaction rules. Chemical models use the multiset as their basic data
structure. Computation proceeds by rewritings of the multiset which
consume elements according to reaction conditions and produce new elements
according to specific transformation rules. Since the introduction of
Gamma in the mid-eighties, many other chemical formalisms have been
proposed such as the Cham, the P-systems and various higher-order
extensions. The main objective of this paper is to identify a basic
calculus containing the very essence of the chemical paradigm and from
which extensions can be derived and compared to existing chemical models.
Jean-Pierre Banâtre, Pascal Fradet and Yann Radenac. Higher-order
Chemical Programming Style. In Proceedings of Unconventional
Programming Paradigms, pages 84-98, 2005. Springer-Verlag, LNCS,
Vol. 3566.
Key-words: Chemical programming, higher-order
conditional multiset rewriting, autonomic systems.
Abstract: The chemical reaction metaphor describes computation in
terms of a chemical solution in which molecules interact freely according
to reaction rules. Chemical solutions are represented by multisets of
elements and computation proceeds by consuming and producing new elements
according to reaction conditions and transformation rules. The chemical
programming style allows many programs to be written in a very elegant way. We
go one step further by extending the model so that rewrite rules are
themselves molecules. This higher-order extension leads to a programming
style where the implementation of new features amounts to adding new
active molecules in the solution representing the system.
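The higher-order extension, in which rules are themselves molecules, can be sketched in the same toy multiset setting. This is our own illustration, assuming rule molecules are encoded as (condition, action) pairs mixed in with the data; the `run` helper is hypothetical:

```python
def run(solution):
    """Apply any rule molecule to any pair of data molecules until the
    solution is inert. Rules are ordinary elements of the multiset, so
    adding a feature amounts to adding an active molecule."""
    changed = True
    while changed:
        changed = False
        rules = [m for m in solution if isinstance(m, tuple)]
        for cond, action in rules:
            data = [m for m in solution if not isinstance(m, tuple)]
            for i, x in enumerate(data):
                for y in data[i + 1:]:
                    if cond(x, y):
                        solution.remove(x)       # reactants are consumed...
                        solution.remove(y)
                        solution.extend(action(x, y))   # ...products released
                        changed = True
                        break
                if changed:
                    break
            if changed:
                break
    return [m for m in solution if not isinstance(m, tuple)]

# A "max" rule floating in the solution alongside the data it reacts with:
sol = [5, 1, 9, 3, (lambda x, y: True, lambda x, y: [max(x, y)])]
print(run(sol))  # [9]
```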
Pascal Fradet and Stéphane Hong Tuan Ha.
Systèmes de gestion de
ressources et aspects de disponibilité. In
L'Objet, logiciel,
bases de données, réseaux, Hermès/Lavoisier. volume 12(2-3), pages
183-210, September 2006.
Key-words: Aspect-oriented programming, resource management,
availability, verification, denial of service.
Abstract: In this paper, we focus on availability properties and
the prevention of denial of services. We propose a domain-specific aspect
language to prevent the denials of service caused by resource management.
Our aspects specify availability policies by enforcing time limits in the
allocation of resources. In our language, an aspect can be seen as a
formal temporal property on execution traces. Programs and aspects are
specified as timed automata and the weaving process as an automata
product. The benefit of this formal approach is two-fold: the user keeps
the semantic impact of weaving under control and (s)he can use a
model-checker to optimize the woven program and verify availability
properties.
Pascal Fradet and Stéphane Hong Tuan Ha.
Systèmes de gestion de
ressources et aspects de disponibilité. In
2ème Journée
Francophone sur le Développement de Logiciels Par Aspects, JFDLPA'05,
Lille, France, September 2005.
Key-words: Aspect-oriented programming, resource
management, availability, verification, denial of service.
Abstract: In this paper, we consider resource management in
isolation (separation of concerns) and the prevention of denial of service
(i.e. availability) as aspects. We concentrate on denials of service
caused by resource management (starvations, deadlocks). Our aspects
specify time limits or orderings in the allocation of resources. They can
be seen as the specification of an availability policy. The approach
relies on timed automata to specify services and aspects. It allows us to
implement weaving as an automata product and to use model-checking tools
to verify that aspects enforce the required availability properties.
Pascal Fradet and Stéphane Hong Tuan Ha.
Network Fusion. In
Proceedings
of Asian Symposium on Programming Languages and Systems, APLAS'04,
pages 21-40, November 2004. Springer-Verlag, LNCS, Vol. 3302.
Key-words: Aspect-oriented programming, components,
Kahn process networks, fusion, invasive composition, scheduling,
synchronisation.
Abstract: Modular programming enjoys many well-known advantages
but the composition of modular units may also lead to inefficient
programs. In this paper, we propose an invasive composition method which
strives to reconcile modularity and efficiency. Our technique, network
fusion, automatically merges networks of interacting components into
equivalent sequential programs. We provide the user with an expressive
language to specify scheduling constraints which can be taken into account
during network fusion. Fusion makes it possible to replace internal
communications by assignments and removes most of the time overhead. We
present our approach in
a generic and unified framework based on labeled transition systems,
static analysis and transformation techniques.
Rémi Douence, Pascal Fradet and Mario Südholt.
Trace-Based Aspects.
In
Aspect-Oriented Software Development, Mehmet Aksit, Siobhan
Clarke, Tzilla Elrad, and Robert Filman, editors,
pages 201-217.
Addison-Wesley, 2004.
Key-words: Aspect-oriented programming, execution
traces, interactions, security.
Abstract: This chapter presents trace-based aspects which take
into account the history of program executions in deciding what aspect
behavior to invoke. Such aspects are defined in terms of execution traces
and may express relations between different events. Weaving is
accomplished through an execution monitor which modifies the base program
execution as defined by the aspects. We motivate trace-based aspects and
explore the trade-off between expressiveness and property
enforcement/analysis. More concretely, we first present an expressive
model of trace-based aspects enabling proofs of aspect properties by
equational reasoning. Using a restriction of the aspect language to
regular expressions, we show that it becomes possible to address the
difficult problem of interactions between conflicting aspects. Finally, by
restricting the actions performed by aspects, we illustrate how to keep
the semantic impact of aspects under control and to implement weaving
statically.
Rémi Douence, Pascal Fradet and Mario Südholt.
Composition, reuse and
interaction analysis of stateful aspects. In
Proceedings of the
3rd Int. Conf. on Aspect-Oriented Software Development, AOSD'04,
pages 141-150, March 2004. ACM Press.
Key-words: Aspect-oriented programming, formal model,
static analysis, aspect interactions, aspect composition, reuse of
aspects.
Abstract: Aspect-Oriented Programming promises separation of
concerns at the implementation level. However, aspects are not always
orthogonal and aspect interaction is a fundamental problem. In this
paper, we extend previous work on a generic framework for the formal
definition and interaction analysis of stateful aspects. We propose three
important extensions which enhance expressivity while preserving static
analyzability of interactions. First, we provide support for variables in
aspects in order to share information between different execution points.
This allows the definition of more precise aspects and avoids the
detection of spurious conflicts. Second, we introduce generic composition operators
for aspects. This enables us to provide expressive support for the
resolution of conflicts among interacting aspects. Finally, we offer a
means to define applicability conditions for aspects. This makes
interaction analysis more precise and paves the way for reuse of aspects
by making explicit requirements on contexts in which aspects must be used.
Jean-Pierre Banâtre, Pascal Fradet and Yann Radenac.
Chemical
Specification of Autonomic Systems. In
Proceedings of the 13th
International Conference on Intelligent and Adaptive Systems and
Software Engineering, IASSE'04,
July 2004.
Key-words: Chemical programming, higher-order
conditional multiset rewriting, autonomic computing, self-properties
(protection, healing, optimization).
Abstract: Autonomic computing provides a vision of information
systems allowing self-management of many predefined properties. Such
systems take care of their own behavior and of their interactions with
other components without any external intervention. One of the major
challenges concerns the expression of properties and constraints of
autonomic systems. We believe that the {\em chemical programming paradigm}
(represented here by the Gamma formalism) is well-suited to the
specification of autonomic systems. In Gamma, computation is described in
terms of chemical reactions (rewrite rules) in solutions (multisets of
elements). It captures the intuition of a collection of cooperative
components which evolve freely according to some predefined constraints.
In this paper, after a short presentation of a higher-order version of the
Gamma formalism, it is shown through the example of a mailing system, how
the major properties expected from an autonomic system can be easily
expressed as a collection of chemical reactions.
Jean-Pierre Banâtre, Pascal Fradet and Yann Radenac.
Higher-order
chemistry. In
Preproceedings of the Workshop on Membrane
Computing, WMC'03, July 2003.
Key-words: Chemical programming, higher-order
conditional multiset rewriting.
Abstract: Gamma is a formalism in which programs are expressed in
terms of multiset rewriting, often referred to as the Chemical Reaction Model.
In this paper we are concerned with higher-order Gamma programming. First
we review three proposals which introduce the notion of membrane and
higher order facilities. Finally, we propose a higher-order Gamma which
allows the definition of gamma-abstractions (in the same sense as
$\lambda$-abstractions in the lambda-calculus) which are considered as
first class citizens.
Rémi Douence, Pascal Fradet and Mario Südholt.
A framework for the
detection and resolution of aspect interactions. In
Proceedings
of Conference on Generative Programming and Component Engineering, GPCE'02, 2002. Springer-Verlag,
LNCS, Vol. 2487.
Key-words: Aspect-oriented programming, formal model,
static analysis, aspect interactions, aspect composition.
Abstract: Aspect-Oriented Programming (AOP) promises separation of
concerns at the implementation level. However, aspects are not always
orthogonal and aspect interaction is an important problem. Currently there
is almost no support for the detection and resolution of such
interactions. The programmer is responsible for identifying interactions
between conflicting aspects and implementing conflict resolution code. In
this paper, we propose a solution to this problem based on a generic
framework for AOP. The contributions are threefold: we present a formal
and expressive crosscut language, two static conflict analyses and some
linguistic support for conflict resolution.
Jean-Pierre Banâtre, Pascal Fradet and Daniel Le Métayer.
Gamma and
the chemical reaction model: fifteen years after. In
Multiset
Processing. Springer-Verlag, LNCS, Vol. 2235, 2001.
Key-words: Chemical programming, survey, applications.
Abstract: Gamma was originally proposed in 1986 as a formalism for
the definition of programs without artificial sequentiality. The basic
idea underlying the formalism is to describe computation as a form of {\em
chemical reaction} on a collection of individual pieces of data. Due to
the very minimal nature of the language, and its absence of sequential
bias, it has been possible to exploit this initial paradigm in various
directions. This paper reviews most of the work around Gamma considered as
a programming or as a specification language. A special emphasis is placed
on unexpected applications of the chemical reaction model, showing that
this paradigm has been a source of inspiration in various research areas.
Pascal Fradet. Approches langages pour la conception et la mise en oeuvre
de programmes. Habilitation thesis (Habilitation à diriger des recherches).
Université de Rennes I, 129 pages, November 2000.
Key-words: Languages (functional, imperative, domain-specific,
aspect), compilation, transformation, typing, analysis.
Abstract: By a "language approach" we mean an approach that is
expressed either within a programming language or by means of a
programming language.
Approaches expressed within the language rely on no external
formalism (e.g. semantics). The programming language is the sole framework
in which the problem is stated, solved, and the solution applied. We show:
- how the compilation of functional languages can be expressed in the
language itself by program transformation. This unified framework makes it
possible to describe, prove, compare and classify most implementations of
functional languages;
- two optimizations of the implementation of functional languages (a
globalization analysis and an extended GC) which rely on the syntax and
the types of expressions.
For approaches expressed by means of a programming language, the goal is
to prevent a problem or to ensure a property through the use of a language
(or of a programming discipline). We illustrate this style of approach
with three examples:
- graph types, which make it possible to define and check the sharing
of pointer data structures;
- a domain-specific language for parallelism that guarantees an
accurate cost analysis and an automatic choice of the best distribution;
- an aspect-oriented programming style that makes it possible to
enforce properties on programs automatically.
Thomas Colcombet and Pascal Fradet.
Enforcing Trace Properties by
Program Transformation. In
Proc. of Principles of
Programming Languages, POPL'00, ACM Press, pp. 54-66,
Boston, January 2000.
Key-words: Aspect-oriented programming, security policies, program
transformation, static analysis, dynamic enforcement.
Abstract: We propose an automatic method to
enforce trace properties on programs. The programmer specifies the
property separately from the program; a program transformer takes the
program and the property and automatically produces another ``equivalent''
program satisfying the property. This separation of concerns makes the
program easier to develop and maintain. Our approach is both static and
dynamic. It integrates static analyses in order to avoid useless
transformations. On the other hand, it never rejects programs but adds
dynamic checks when necessary. An important challenge is to make this
dynamic enforcement as inexpensive as possible. The most obvious
application domain is the enforcement of security policies. In particular,
a potential use of the method is securing mobile code upon receipt.
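The dynamic side of the enforcement, runtime checks driven by a property automaton, can be sketched as follows. This is a toy monitor for the trace property "no write after close"; the states, events, and `monitor` helper are all hypothetical, and the paper additionally uses static analysis to avoid inserting such checks where they are useless:

```python
BAD = object()   # sink state: the property has been violated

def monitor():
    """Return a callback that tracks the trace property
    "no `write` event after a `close` event"."""
    transitions = {
        ("open",   "write"): "open",
        ("open",   "close"): "closed",
        ("closed", "write"): BAD,
    }
    state = "open"
    def step(event):
        nonlocal state
        state = transitions.get((state, event), state)  # unlisted events: stay put
        if state is BAD:
            raise RuntimeError("trace property violated")
    return step

check = monitor()
for event in ["write", "write", "close"]:
    check(event)            # this trace satisfies the property
print("ok")                 # prints: ok
```

Weaving, in this reading, amounts to instrumenting the program so that every relevant operation reports its event to the monitor before executing.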
Pascal Fradet, Valérie Issarny, and Siegfried Rouvrais. Analyzing
non-functional properties of mobile agents. In Proc. of
Fundamental Approaches to Software Engineering, FASE'00,
LNCS, March 2000.
Key-words: Mobile agents, RPC, performance, security.
Abstract: The mobile agent technology is emerging
as a useful new way of building large distributed systems. The advantages
of mobile agents over the traditional client-server model are mainly
non-functional. We believe that one of the reasons preventing the
wide-spread use of mobile agents is that non-functional properties are not
easily grasped by system designers. Selecting the right interactions to
implement complex services is therefore a tricky task. In this paper, we
tackle this problem by considering efficiency and security criteria. We
propose a language-based framework for the specification and
implementation of complex services built from interactions with primitive
services. Mobile agents, RPC, remote evaluation, or any combination of
these forms of interaction can be expressed in this framework. We show how
to analyze (i.e. assess and compare) complex service implementations with
respect to efficiency and security properties. This analysis provides
guidelines to service designers, enabling them to systematically select
and combine different types of protocols for the effective realization of
interactions with primitive services.
Pascal Fradet and Julien Mallet. Compilation of a Specialized Functional
Language for Massively Parallel Computers. In Journal of
Functional Programming, JFP, 10(6), pp. 561-605,
2000.
Key-words: Skeletons, polytopes, data parallelism, cost analysis,
program transformation.
Abstract: We propose a parallel specialized
language that ensures portable and cost-predictable implementations on
parallel computers. The language is basically a first-order,
recursion-less, strict functional language equipped with a collection of
higher-order functions or skeletons. These skeletons apply on (nested)
vectors and can be grouped in four classes: computation, reorganization,
communication, and mask skeletons. The compilation process is described as
a series of transformations and analyses leading to SPMD-like functional
programs which can be directly translated into real parallel code. The
language restrictions enforce a programming discipline whose benefit is to
allow a static, symbolic, and accurate cost analysis. The parallel cost
takes into account both load balancing and communications, and can be
statically evaluated even when the actual size of vectors or the number of
processors are unknown. It is used to automatically select the best data
distribution among a set of standard distributions. Interestingly, this
work can be seen as a cross fertilization between techniques developed
within the FORTRAN parallelization, skeleton, and functional programming
communities.
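The size-parametric cost information the language is designed to support can be illustrated on two skeletons. A toy sketch, assuming unit-cost elementary operations on flat vectors; the skeleton and cost-function names are our own, and the paper's analysis handles nested vectors, load balancing and communication costs:

```python
# Two classic computation skeletons and their symbolic costs in the
# vector size n: the cost is known statically, before any data exists.
def map_skel(f, v):
    """Apply f pointwise: one elementary operation per element."""
    return [f(x) for x in v]

def reduce_skel(op, v):
    """Combine all elements with op: n - 1 combinations."""
    acc = v[0]
    for x in v[1:]:
        acc = op(acc, x)
    return acc

def cost_map(n):    return n        # symbolic cost of map_skel
def cost_reduce(n): return n - 1    # symbolic cost of reduce_skel

n = 8
v = list(range(n))
print(reduce_skel(lambda a, b: a + b, map_skel(lambda x: x * x, v)))  # 140
print(cost_map(n) + cost_reduce(n))                                   # 15
```

Because the cost expressions are functions of `n` alone, they can be compared across candidate data distributions without running the program, which is the basis for the automatic distribution selection described above.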
Pascal Fradet, Daniel Le Métayer and Michaël Périn. Consistency checking
for multiple view software architectures. In Proc. of the Joint
European Software Engineering Conference and Symp. on Foundations of
Software Engineering, ESEC/FSE'99, Software
Engineering Notes 24 (6) or LNCS Vol 1687 pp. 410-428, 1999.
Key-words: Software architecture, multiple views, consistency, UML
diagrams.
Abstract: Consistency is a major issue that must
be properly addressed when considering multiple view architectures. In
this paper, we provide a formal definition of views expressed graphically
using diagrams with multiplicities and propose a simple algorithm to check
the consistency of diagrams. We also put forward a simple language of
constraints to express more precise (intra-view and inter-view)
consistency requirements. We sketch a complete decision procedure to
decide whether diagrams satisfy a given constraint expressed in this
language. Our framework is illustrated with excerpts of a case study: the
specification of the architecture of a train control system.
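The consistency check on multiplicities can be illustrated on a single association shared by two views. A minimal sketch, assuming multiplicities are plain integer intervals; the `consistent` helper is our own, and the paper's diagrams and constraint language are considerably richer:

```python
# Two views constrain the same association with multiplicity intervals
# (lo, hi). The views agree on that association iff the intervals intersect.
def consistent(m1, m2):
    lo1, hi1 = m1
    lo2, hi2 = m2
    return max(lo1, lo2) <= min(hi1, hi2)

print(consistent((1, 1), (0, 4)))   # True: "exactly 1" fits inside "0..4"
print(consistent((2, 3), (0, 1)))   # False: "2..3" contradicts "0..1"
```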
Pascal Fradet and Mario Südholt. An aspect language for robust
programming, In Int. Workshop on Aspect-Oriented
Programming, ECOOP, June 1999.
Key-words: Aspect-oriented programming, robustness, exceptions,
program transformation, program analysis.
Abstract: In this position paper, we advocate the
use of an aspect language for robust programming. AOP is particularly
appealing for this task because robustness crosscuts traditional
structuring means. Consider, for instance, the problem of ensuring that a
global index remains in a given range. The code needed to check such an
invariant is typically scattered all over the program. The paper presents
an example-driven introduction of the proposed aspect language for program
robustness; it then discusses its semantics and implementation and
suggests extensions.
Rémi Douence and Pascal Fradet. A systematic study of functional
language implementations. In ACM Transactions on
Programming Languages and Systems, TOPLAS, 20(2), pp. 344-387,
1998.
Key-words: Compilation, optimizations, program transformation,
abstract machines, lambda-calculus, combinators.
Abstract: We introduce a unified framework to
describe, relate, compare and classify functional language
implementations. The compilation process is expressed as a succession of
program transformations in the common framework. At each step, different
transformations model fundamental choices. A benefit of this approach is
to structure and decompose the implementation process. The correctness
proofs can be tackled independently for each step and amount to proving
program transformations in the functional world. This approach also paves
the way to formal comparisons by making it possible to estimate the
complexity of individual transformations or compositions of them. Our
study aims at covering the whole known design space of sequential
functional language implementations. In particular, we consider
call-by-value, call-by-name and call-by-need reduction strategies as well
as environment and graph-based implementations. We describe for each
compilation step the diverse alternatives as program transformations. In
some cases, we illustrate how to compare or relate compilation techniques,
express global optimizations or hybrid implementations. We also provide a
classification of well-known abstract machines.
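The reduction strategies mentioned in the abstract can themselves be modelled as small transformations on thunks. A minimal sketch in Python rather than the paper's functional framework; `cbn` and `cbneed` are hypothetical names for the thunk-passing transformations:

```python
# A counter tracks how many times the argument is actually evaluated.
count = {"n": 0}
def expensive():
    count["n"] += 1
    return 21

def cbn(f, arg):
    """Call-by-name: the thunk is passed as-is and re-evaluated at each use."""
    return f(arg)

def cbneed(f, arg):
    """Call-by-need: the thunk is memoized, so it is evaluated at most once."""
    cache = []
    def thunk():
        if not cache:
            cache.append(arg())
        return cache[0]
    return f(thunk)

double = lambda t: t() + t()          # uses its argument twice
print(cbn(double, expensive), count["n"])     # 42 2
count["n"] = 0
print(cbneed(double, expensive), count["n"])  # 42 1
```

Both strategies compute the same value; they differ only in how many times the argument is reduced, which is exactly the kind of choice the unified framework expresses as alternative program transformations.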
Pascal Fradet and Daniel Le Métayer. Structured Gamma. In Science
of Computer Programming, SCP, 31(2-3), pp. 263-289,
1998.
Key-words: Multiset rewriting, graph types, type checking,
refinement.
Abstract: The Gamma language is based on the
chemical reaction metaphor which has a number of benefits with respect to
parallelism and program derivation. But the original definition of Gamma
does not provide any facility for data structuring or for specifying
particular control strategies. We address this issue by introducing a
notion of structured multiset which is a set of addresses
satisfying specific relations. The relations can be seen as a form of
neighborhood between the molecules of the solution; they can be used in
the reaction condition of a program or transformed by the action. A type
is defined by a context-free graph grammar and a structured multiset
belongs to a type T if its underlying set of addresses satisfies
the invariant expressed by the grammar defining T. We define a
type checking algorithm that allows us to prove mechanically that a
program maintains its data structure invariant. We illustrate the
significance of the approach for program refinement and we describe its
application to coordination.
Pascal Fradet and Mario Südholt, Towards a Generic Framework for
Aspect-Oriented Programming. In Third AOP Workshop, ECOOP'98
Workshop Reader, LNCS, Vol. 1543, pp. 394-397, July 1998.
Key-words: Aspect-oriented programming, program transformation,
program analysis.
Abstract: What exactly are aspects? How to weave?
What are the join points used to anchor aspects into the component
program? Is there a general purpose aspect language? In this position
paper, we address these questions for a particular class of aspects:
aspects expressible as static, source-to-source program transformations.
An aspect is defined as a collection of program transformations acting on
the abstract syntax tree of the component program. We discuss the design
of a generic framework to express these transformations as well as a
generic weaver. The coupling of component and aspect definitions can be
defined formally using operators matching subtrees of the component
program. The aspect weaver is simply a fixpoint operator taking as
parameters the component program and a set of program transformations. In
many cases, program transformations based solely on syntactic criteria are
not satisfactory and one would like to be able to use semantic criteria in
aspect definitions. We show how this can be done using properties
expressed on the semantics of the component program and implemented using
static analysis techniques. One of our main concerns is to keep weaving
predictable. This raises several questions about the semantics
(termination, convergence) of weaving.
Pascal Fradet and Daniel Le Métayer, Shape Types. In Proc. of
Principles of Programming Languages, POPL'97, ACM Press, pp.
27-39, Paris, January 1997.
Key-words: Pointer data structures, pointer manipulation, graph
grammars, type checking, program robustness, C.
Abstract: Type systems currently available for
imperative languages are too weak to detect a significant class of
programming errors. For example, they cannot express the property that a
list is doubly-linked or circular. We propose a solution to this problem
based on a notion of shape types defined as context-free graph
grammars. We define graphs in set-theoretic terms, and graph modifications
as multiset rewrite rules. These rules can be checked statically to ensure
that they preserve the structure of the graph specified by the grammar. We
provide a syntax for a smooth integration of shape types in C. The
programmer can still express pointer manipulations with the expected
constant time execution and benefits from the additional guarantee that
the property specified by the shape type is an invariant of the program.
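The doubly-linked list property mentioned above can be stated as an invariant on a graph viewed as a set of edges. The toy check below tests the invariant dynamically on a concrete graph, whereas the paper verifies statically that every rewrite rule preserves the grammar; the edge encoding and `doubly_linked` are our own:

```python
# A shape is a set of (source, field, target) edges over abstract addresses.
def doubly_linked(edges):
    """Invariant of the doubly-linked list shape: x.next = y iff y.prev = x."""
    nxt = {(s, t) for s, f, t in edges if f == "next"}
    prv = {(s, t) for s, f, t in edges if f == "prev"}
    return nxt == {(t, s) for s, t in prv}

good = {(1, "next", 2), (2, "prev", 1), (2, "next", 3), (3, "prev", 2)}
bad  = {(1, "next", 2), (2, "next", 3), (3, "prev", 2)}   # 2.prev is missing
print(doubly_linked(good))  # True
print(doubly_linked(bad))   # False
```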
Ronan Gaugne, Pascal Fradet and Daniel Le Métayer, Static Detection of
Pointer Errors: an Axiomatisation and a Checking Algorithm. In Proc.
European Symposium on Programming, ESOP'96, LNCS, 1996.
Key-words: Debugging tool, alias analysis, Hoare's logic.
Abstract: The incorrect use of pointers is one of
the most common sources of bugs. As a consequence, any kind of static code
checking capable of detecting potential bugs at compile time is welcome.
This paper presents a static analysis for the detection of incorrect
accesses to memory (dereferences of invalid pointers). A pointer may be
invalid because it has not been initialized or because it refers to a
memory location which has been deallocated. The analyzer is derived from
an axiomatisation of alias and connectivity properties which is shown to
be sound with respect to the natural semantics of the language. It deals
with dynamically allocated data structures and it is accurate enough to
handle circular structures.
Pascal Fradet, Collecting More Garbage. In Proc. of ACM Conf. on
Lisp and Functional Programming, LISP'94, ACM Press, pp.
24-33, Orlando, FL, USA, June 1994.
Key-words: Garbage collection, space leaks, typing, parametricity.
Abstract: We present a method, adapted to
polymorphically typed functional languages, to detect and collect more
garbage than existing GCs. It can be applied to strict or lazy higher
order languages and to several garbage collection schemes. Our GC exploits
the information on utility of arguments provided by polymorphic types of
functions. It is able to detect garbage that is still referenced from the
stack and may collect useless parts of otherwise useful data structures.
We show how to partially collect shared data structures and to extend the
type system to infer more precise information. We also show how this
technique can plug several common forms of space leaks.
Pascal Fradet, Compilation of Head and Strong Reduction. In Proc
of the 5th European Symposium on Programming , ESOP'94,
LNCS Vol 788, pp. 211-224, Edinburgh, UK, April 1994.
Key-words: Lambda-calculus, continuations, strong reduction, head
reduction, compilation.
Abstract: Functional language compilers implement
only weak-head reduction. However, there are cases where head normal forms
or full normal forms are needed. Here, we study how to use CPS conversion
for the compilation of head and strong reductions. We apply CPS
expressions to a special continuation so that their head or strong normal
form can be obtained by the usual weak-head reduction. We remain within
the functional framework and no special abstract machine is needed. Used
as a preliminary step our method allows a standard compiler to evaluate
under lambda's.
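The distinction between the weak-head reduction that compilers implement and the strong reduction this paper compiles can be made concrete with a naive substitution-based reducer (a sketch of ours in Python, not the paper's CPS-based method; the paper obtains strong normal forms through ordinary weak-head reduction of CPS terms, with no explicit `nf` function). Note how `whnf` stops at any lambda, while `nf` keeps going underneath it.

```python
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: object

@dataclass
class App:
    fn: object
    arg: object

def subst(x, s, t):
    # Naive substitution; assumes all bound variables have distinct
    # names, so no capture can occur in these examples.
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, Lam):
        return t if t.param == x else Lam(t.param, subst(x, s, t.body))
    return App(subst(x, s, t.fn), subst(x, s, t.arg))

def whnf(t):
    # Weak-head reduction: what compilers normally implement.
    # It never reduces inside a lambda.
    while isinstance(t, App):
        f = whnf(t.fn)
        if isinstance(f, Lam):
            t = subst(f.param, t.arg, f.body)
        else:
            return App(f, t.arg)
    return t

def nf(t):
    # Strong (full) normalization: also reduces under lambdas.
    t = whnf(t)
    if isinstance(t, Lam):
        return Lam(t.param, nf(t.body))
    if isinstance(t, App):
        return App(nf(t.fn), nf(t.arg))
    return t

# \x. (\y. y) x  is already a weak-head normal form ...
term = Lam("x", App(Lam("y", Var("y")), Var("x")))
assert whnf(term) == term
# ... but its strong normal form reduces under the lambda to \x. x
assert nf(term) == Lam("x", Var("x"))
```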
Pascal Fradet, Syntactic Detection of Single-Threading Using
Continuations. In Proc. of the 5th ACM Conf. on Functional Prog.
Lang. and Comp. Arch., FPCA'91, LNCS Vol. 523,
pp. 241-258, Cambridge, MA, USA, August 1991.
Key-words: Globalization, single-threading, in-place update, CPS
conversion.
Abstract: We tackle the problem of detecting
global variables in functional programs. We present syntactic criteria for
single-threading which improve upon previous solutions (both syntactic
and semantics-based) in that they apply to higher-order languages and to
most sequential evaluation strategies. The main idea of our approach lies
in the use of continuations. One advantage of continuation expressions is
that evaluation ordering is made explicit in the syntax of expressions.
So, syntactic detection of single-threading is simpler and more powerful
on continuation expressions. We present the application of the analysis to
the compilation of functional languages, semantics-directed compiler
generation and globalization-directed transformations (i.e. transforming
non-single-threaded expressions into single-threaded ones). Our results
can also be exploited to derive single-threading criteria on regular
lambda-expressions for different sequential evaluation orders.
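The key observation of this abstract, that continuation expressions make evaluation ordering explicit in the syntax, can be shown in a few lines (a Python sketch under our own naming; the functions are hypothetical, not from the paper). In direct style, the order in which `g(x)` and `h(y)` are evaluated inside `f(g(x), h(y))` depends on the language; in CPS, the nesting of continuations spells it out.

```python
# CPS versions of three hypothetical functions: each takes an explicit
# continuation k and passes its result to it instead of returning.
def g_cps(x, k):
    return k(x + 1)

def h_cps(y, k):
    return k(y * 2)

def f_cps(a, b, k):
    return k(a + b)

def program(x, y, k):
    # Direct style would be f(g(x), h(y)), with the evaluation order of
    # g and h left implicit. Here the nesting makes it syntactically
    # explicit: g first, then h, then f.
    return g_cps(x, lambda a: h_cps(y, lambda b: f_cps(a, b, k)))

# g(1) = 2, h(2) = 4, f(2, 4) = 6
assert program(1, 2, lambda r: r) == 6
```

Because the order is visible in the syntax, a purely syntactic analysis can check whether a data structure is used single-threadedly, which is the point the abstract makes.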
Pascal Fradet and Daniel Le Métayer. Compilation of functional languages by
program transformation. In ACM Transactions on Programming
Languages and Systems, TOPLAS, 13(1), pp. 21-51, 1991.
Key-words: Functional programming, compilation, continuations,
program transformation, lambda-calculus, combinators.
Abstract: One of the most important issues
concerning functional languages is the efficiency and the correctness of
their implementation. We focus on sequential implementations for
conventional von Neumann computers. The compilation process is described
in terms of program transformations in the functional framework. The
original functional expression is transformed into a functional term that
can be seen as a traditional machine code. The two main steps are the
compilation of the computation rule by the introduction of continuation
functions and the compilation of the environment management using
combinators. The advantage of this approach is that we do not have to
introduce an abstract machine, which makes correctness proofs much
simpler. As far as efficiency is concerned, this approach is promising
since many optimizations can be described and formally justified in the
functional framework.
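The second step the abstract mentions, compiling environment management into combinators, can be illustrated with classic bracket abstraction into S, K, and I (a sketch of ours; the paper uses its own combinators and transformation, not necessarily this textbook scheme). The translation removes every variable from the term, so no run-time environment is needed, which is the point of the approach.

```python
# Lambda terms: ("var", x) | ("lam", x, body) | ("app", f, a)
# Combinator code: the strings "S", "K", "I" plus ("app", ...) nodes.

def compile_term(e):
    """Translate a lambda term into variable-free combinator code."""
    if isinstance(e, str) or e[0] == "var":
        return e
    if e[0] == "app":
        return ("app", compile_term(e[1]), compile_term(e[2]))
    return abstract(e[1], compile_term(e[2]))   # ("lam", x, body)

def abstract(x, e):
    """Bracket abstraction [x]e: eliminate variable x from e."""
    if e == ("var", x):
        return "I"
    if isinstance(e, tuple) and e[0] == "app":
        return ("app", ("app", "S", abstract(x, e[1])), abstract(x, e[2]))
    return ("app", "K", e)   # x does not occur free in e

# Execute combinator code by mapping S, K, I to their definitions.
COMBINATORS = {
    "S": lambda f: lambda g: lambda x: f(x)(g(x)),
    "K": lambda x: lambda _: x,
    "I": lambda x: x,
}

def run(e):
    if isinstance(e, str):
        return COMBINATORS[e]
    if e[0] == "app":
        return run(e[1])(run(e[2]))
    raise ValueError("open term: free variable " + e[1])

# \x. \y. x  compiles to code containing no variables at all ...
const = compile_term(("lam", "x", ("lam", "y", ("var", "x"))))
# ... yet still computes the same function:
assert run(const)(1)(2) == 1
assert run(compile_term(("lam", "x", ("var", "x"))))(42) == 42
```

Correctness here is just an equivalence between the source term and its compiled form, which is what makes proofs simpler than with an abstract machine.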
Disclaimer:
These documents are provided by the
contributing authors as a means to ensure timely dissemination of
scholarly and technical work on a non-commercial basis. Copyright and
all rights therein are maintained by the authors and by other
copyright holders, notwithstanding that they have offered their works
here electronically. It is understood that all persons copying this
information will adhere to the terms and constraints invoked by each
author's copyright. These works may not be reposted without the
explicit permission of the copyright holder.