## Distributed Cooperation in the Absence of Communication

### Greg Malewicz, Alexander Russell, and Alex Shvartsman

**To appear at the Nineteenth Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing (PODC 2000), Portland, Oregon, 16-19 July 2000**

## Abstract

This paper presents a study of a distributed cooperation problem under the most extreme assumption: that no two processors may be able to communicate during the computation. The distributed cooperation problem is defined for n processors in terms of t tasks, known to all processors, that need to be performed efficiently. Because the tasks are initially known, the problem can be solved in the absence of communication. The efficiency requirement, together with the possibility that communication eventually becomes available, makes it necessary to structure the work of the processors so that when some processors are able to communicate, the amount of wasted (redundant) work they have collectively performed prior to that point is controlled. Dolev et al. [DSS99] developed a solution for this problem in which each processor can perform up to Θ(n^(1/3))
tasks such that at most one redundant task is performed for any pair of
processors. In this work we show that for schedules longer than sqrt(n)
processors. In this work we show that for schedules longer than sqrt(n)
the number of redundant tasks for two (or more) processors must be at least
two. We also show that this bound is tight by exhibiting a construction
of schedules of length sqrt(n-3/4)+1/2 such that exactly one redundant
task is performed for any pair of processors. This design-theoretic construction
is efficient and practical: we show that it can be done in time linear
in the length of the schedule. We use this construction to produce, in
linear time, deterministic schedules of length 4n/9 such that pairwise
wasted work increases gracefully as processors progress through their schedules.
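The abstract does not name the design, but schedules of length sqrt(n - 3/4) + 1/2 in which any two schedules share exactly one task match the lines of a finite projective plane of order q: with n = q^2 + q + 1 tasks, each line contains q + 1 = sqrt(n - 3/4) + 1/2 points, and any two distinct lines meet in exactly one point. A minimal sketch assuming this correspondence (function name ours), taking q a prime p:

```python
import itertools

def projective_plane_schedules(p):
    """Schedules from the projective plane PG(2, p), p prime:
    n = p^2 + p + 1 tasks, each schedule contains p + 1 tasks, and
    any two distinct schedules share exactly one task."""
    # Tasks <-> points of PG(2, p): normalized nonzero vectors over GF(p).
    points = ([(1, x, y) for x in range(p) for y in range(p)]
              + [(0, 1, y) for y in range(p)]
              + [(0, 0, 1)])
    # Schedules <-> lines: point (a, b, c) lies on line (u, v, w)
    # iff a*u + b*v + c*w = 0 (mod p). By duality, lines are indexed
    # by the same set of normalized vectors.
    schedules = []
    for (u, v, w) in points:
        line = frozenset(i for i, (a, b, c) in enumerate(points)
                         if (a * u + b * v + c * w) % p == 0)
        schedules.append(line)
    return len(points), schedules

# Sanity check for p = 3: n = 13 tasks, 13 schedules of length 4,
# and every pair of schedules overlaps in exactly one task.
n, scheds = projective_plane_schedules(3)
assert n == 13 and all(len(s) == 4 for s in scheds)
assert all(len(a & b) == 1 for a, b in itertools.combinations(scheds, 2))
```

Note that this sketch only covers prime orders; the paper's construction may handle other values of n, and making the construction run in time linear in the schedule length requires enumerating a single line directly rather than filtering all points as done here.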
Finally we present a randomized solution, which demonstrates that if each
processor schedules the t tasks at random, then with high probability the pairwise
intersections are well behaved at all times: specifically, two processors
having completed t_1 and t_2 tasks, respectively, are guaranteed to have
no more than (t_1 t_2)/t + Δ redundant tasks, where Δ = O(log n + sqrt((t_1 t_2)/t) sqrt(log n)). This result contrasts favorably with the lower bound
we show, which asserts that pairwise schedule redundancy must grow quadratically
in the length of the schedules. The redundancy incurred by collections of k
processors is also studied in our randomized framework.
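The (t_1 t_2)/t leading term of the randomized bound can be checked empirically. In the sketch below (function name ours), each processor's schedule is an independent uniformly random permutation of the t tasks, and the average overlap of two prefixes of lengths t_1 and t_2 should concentrate near t_1 t_2 / t:

```python
import random

def avg_prefix_overlap(t, t1, t2, trials=2000, seed=1):
    """Average overlap when two processors each execute the first t1
    (resp. t2) tasks of an independent random permutation of t tasks."""
    rng = random.Random(seed)
    tasks = list(range(t))
    total = 0
    for _ in range(trials):
        # A uniformly random t1-subset has the same distribution as
        # the length-t1 prefix of a random permutation.
        s1 = set(rng.sample(tasks, t1))
        s2 = set(rng.sample(tasks, t2))
        total += len(s1 & s2)
    return total / trials

# The expected overlap is exactly t1*t2/t = 15 here (hypergeometric
# mean); the empirical average should land close to it.
print(avg_prefix_overlap(t=100, t1=30, t2=50))
```

The simulation only exhibits the mean; the Δ term in the paper's bound captures the high-probability deviation around it, which this sketch does not verify.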