Workshops and tutorials

Monday, June 17th

Workshop: Advanced tools, programming languages, and PLatforms for Implementing and Evaluating algorithms for Distributed systems (ApPLIED)

General Co-chairs: Miguel Matos and Elad Michael Schiller
TPC Co-chairs: Ioannis Chatzigiannakis and Vincent Gramoli
Keynote Speaker: Christian Cachin
Website: https://www.cse.chalmers.se/~elad/ApPLIED2024/

Workshop: Biological Distributed Algorithms (BDA)
Time: 08:45 – 18:00
Organizers: Arjun Chandrasekhar, Frederik Mallmann-Trenn, Yannic Maus, and Ted Pavlic
Abstract: BDA focuses on the relationships between distributed computing and distributed biological systems and, in particular, on analyses and case studies that combine the two. Such research can lead to a better understanding of the behavior of biological systems while at the same time yielding novel algorithms that can be used to solve basic distributed computing problems.
Website: https://sites.google.com/view/bda24/home

Friday, June 21st

Workshop: Principles of Distributed Learning (PODL)
Organizers: Rachid Guerraoui, Nirupam Gupta, Rafael Pinot
Website: https://dcl.epfl.ch/site/podc2024

Tutorial: Weak memory models in programming language semantics
Time: 09:00 – 12:20
Organizer: Ori Lahav (https://www.cs.tau.ac.il/~orilahav/)
Abstract: A memory model defines the semantics of memory accesses in multithreaded programs. For programmers, sequential consistency is considered the simplest model. However, this model is too costly to implement, and, in fact, designing a satisfactory memory model is highly challenging, as one has to carefully balance the conflicting desiderata of programmers, compilers, and hardware.
In this tutorial I will use the C/C++11 shared-memory model as a prototypical example of a weak memory model employed in a central programming language. I will introduce the formal underpinning of this model, the key ideas behind it, some flaws of the model, ways of correcting it, and some remaining open problems.
Some aspects of the C/C++11 memory model are strongly related to consistency in distributed data stores, and the goal of this tutorial is to develop a common language and basis between classical distributed computing theory and some of the recent research in programming language semantics and verification. In particular, I plan to discuss linearizability-like correctness notions under weak memory models and their connection to observational refinement, and to introduce some questions that the distributed computing community may have the right tools to address.
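To make the contrast with sequential consistency concrete, the following minimal C++11 sketch (illustrative only, not part of the tutorial materials) shows the classic store-buffering litmus test: with relaxed atomics, the C/C++11 model permits the outcome r1 == 0 and r2 == 0, which is impossible under sequential consistency.

    // Store-buffering litmus test: under the C/C++11 model with relaxed
    // atomics, both loads may observe 0, an outcome ruled out by
    // sequential consistency.
    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<int> x{0}, y{0};
    int r1 = 0, r2 = 0;

    void thread1() {
        x.store(1, std::memory_order_relaxed);
        r1 = y.load(std::memory_order_relaxed);
    }

    void thread2() {
        y.store(1, std::memory_order_relaxed);
        r2 = x.load(std::memory_order_relaxed);
    }

    int main() {
        std::thread a(thread1), b(thread2);
        a.join();
        b.join();
        // Under sequential consistency at least one load would return 1;
        // the C/C++11 model also allows r1 == 0 && r2 == 0.
        std::printf("r1=%d r2=%d\n", r1, r2);
    }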

Workshop: Distributed Computing with Emerging Hardware Technology (EMERALD)
Organizers: Iris Bahar, Joao Barreto, Panagiota Fatourou, Maurice Herlihy and Erez Petrank.
Abstract: EMERALD aims to investigate how the use of future and emerging hardware technologies can influence (or add to) the foundations of concurrent and distributed computing. The workshop will host a series of invited talks that will contribute to a better understanding of how such technologies could change what we know about concurrent and distributed algorithms and models. The emphasis will be on persistent memory computing, NUMA-aware computing, distributed computing in systems with Remote Direct Memory Access (RDMA), distributed and concurrent computing on top of heterogeneous hardware, processing in memory (PIM), near-memory processing (NMP), disaggregated/composable systems, and others.
Website: https://emerald-workshop.github.io/

Tutorial: Permissionless Consensus
Time: 14:00 – 17:30
Organizers: Andrew Lewis-Pye, Tim Roughgarden
Abstract: Recent years have seen substantial interest and investment in consensus protocols, such as Bitcoin and Ethereum, that operate in what is often referred to as the “permissionless” setting (as opposed to the “permissioned” setting, as classically studied in distributed computing). The shift from the permissioned to the permissionless setting entails three distinct challenges: 
* The unknown players challenge. The set of distinct entities that might run the protocol is unknown at the time of protocol deployment and is of unknown size. 
* The player inactivity challenge. Such entities can start or stop running the protocol at any time. 
* The sybil challenge. Each such entity can operate the protocol using an unknown and unbounded set of identifiers (a.k.a. “sybils”). 
These three challenges are logically independent, and every subset of them leads to a distinct and well-defined setting that has its own set of possibility and impossibility results. In this tutorial, we’ll describe a cohesive approach to analysing the three challenges outlined above. In addition to being of theoretical interest, our framework naturally allows one to make formal statements about modern blockchain protocols (e.g., articulating the goals of the Ethereum protocol, rigorously assessing the extent to which the protocol achieves those goals, and investigating whether there could be protocols that “better” achieve them while operating under the same set of constraints). A key aspect of this approach is to establish a “degree of permissionlessness” hierarchy that parameterizes what a protocol may assume about the activity of honest players (in the tradition of the standard parameterizations of fault severity and network reliability). This hierarchy is defined by four settings: the fully permissionless, dynamically available, quasi-permissionless, and permissioned settings.
We then focus on possibility and impossibility results separating the various levels of this hierarchy. As an example, we will describe the “Dynamic Availability Dilemma”: an SMR protocol operating in the dynamically available setting cannot satisfy asynchronous safety, and cannot satisfy either accountability or optimistic responsiveness, whereas protocols operating in the quasi-permissionless setting can satisfy all three of these properties simultaneously. The tutorial will include the introductory material covered in https://www.youtube.com/watch?v=dFYVckUzrIg and expand on it with additional results, proof sketches for 2–3 of the results, a discussion of the implications of these results for current and future blockchain protocol designs, and suggestions for further research in the area.
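As a purely illustrative aside (not drawn from the tutorial materials), the following toy C++ sketch shows one face of the sybil challenge: if a protocol weighs votes per identifier, an entity controlling a small fraction of some scarce resource can win a vote simply by spawning more identifiers, whereas a resource-weighted tally is unaffected. All names in the sketch (Entity, tally_by_identifier, tally_by_resource) are hypothetical.

    // Toy illustration of the sybil challenge: identifier-counting is
    // meaningless when identifiers are free to create.
    #include <cstdio>
    #include <vector>

    struct Entity {
        int resource;     // share of a scarce resource (e.g., stake or hash power)
        int identifiers;  // number of protocol identifiers ("sybils") it spawns
    };

    // One vote per identifier: trivially gamed by spawning sybils.
    bool adversary_wins_by_identifier(const std::vector<Entity>& es) {
        int adversary = es[0].identifiers, honest = 0;
        for (std::size_t i = 1; i < es.size(); ++i) honest += es[i].identifiers;
        return adversary > honest;
    }

    // One vote per unit of resource: unaffected by how many identifiers exist.
    bool adversary_wins_by_resource(const std::vector<Entity>& es) {
        int adversary = es[0].resource, honest = 0;
        for (std::size_t i = 1; i < es.size(); ++i) honest += es[i].resource;
        return adversary > honest;
    }

    int main() {
        // The adversary (entity 0) holds 10% of the resource but spawns 1000
        // identifiers; nine honest entities hold 90% with one identifier each.
        std::vector<Entity> entities = {{10, 1000}};
        for (int i = 0; i < 9; ++i) entities.push_back({10, 1});

        std::printf("identifier-weighted vote won by adversary: %s\n",
                    adversary_wins_by_identifier(entities) ? "yes" : "no");
        std::printf("resource-weighted vote won by adversary:   %s\n",
                    adversary_wins_by_resource(entities) ? "yes" : "no");
    }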