## Toward a Theory of Maximally Concurrent Programs

### RAJEEV JOSHI and JAYADEV MISRA

**To appear at the Nineteenth Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing (PODC 2000), Portland, Oregon, 16-19 July 2000**

## Abstract

Typically, program design involves constructing a program, P, that implements a given specification, S; that is, the set of executions of P is a subset of the set of executions satisfying S. In many cases, we seek a P that not only implements S but for which the sets of executions of P and of S are identical. Then every execution satisfying the specification is a possible execution of the program; we call such a P maximal for the specification S. We argue in this paper that traditional specifications of concurrent programs are incomplete without some maximality requirement, because they can often be implemented in a sequential fashion. Additionally, a maximal solution can be refined to a variety of programs, each appropriate for execution on a different computing platform. We suggest a method for proving the maximality of a program with respect to a given specification. Even though we prove facts about possible executions of programs, there is no need to appeal to branching-time logics; we employ a fragment of linear temporal logic for our proofs. The method yields concise proofs of maximality for many non-trivial examples. The method may also serve as a guide in constructing maximal programs.