# Tradeoffs between synchronization, communication, and work in parallel linear algebra computations

### Edgar Solomonik, Erin Carson, Nicholas Knight and James Demmel

EECS Department

University of California, Berkeley

Technical Report No. UCB/EECS-2014-8

January 25, 2014

### http://www.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-8.pdf

This paper derives tradeoffs between three basic costs of a parallel algorithm: synchronization, data movement, and computational cost. Our theoretical model counts the amount of work and data movement as the maximum over any execution path during the parallel computation. By considering this metric, rather than the total communication volume over the whole machine, we obtain new insight into the characteristics of parallel schedules for algorithms with non-trivial dependency structures. The tradeoffs we derive are lower bounds on the execution time of the algorithm that are independent of the number of processors but dependent on the problem size. Therefore, these tradeoffs provide lower bounds on the parallel execution time of any algorithm computed by a system composed of any number of homogeneous components, each with associated computational, communication, and synchronization payloads. We first state our results for general graphs, based on expansion parameters, and then apply the theorem to a number of specific algorithms in numerical linear algebra, namely triangular substitution, Gaussian elimination, and Krylov subspace methods. Our lower bound for LU factorization demonstrates the optimality of Tiskin's LU algorithm [24], answering an open question posed in his paper, as well as of the 2.5D LU algorithm, which has analogous costs. We treat the computations in a general manner by noting that they share a similar dependency hypergraph structure, and we analyze the communication requirements of lattice hypergraph structures.
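To make the three costs concrete, one can sketch the kind of runtime model the abstract alludes to. The symbols below (critical-path flop count F, words moved W, synchronizations S, and per-unit payloads γ, β, α) are illustrative notation chosen here for exposition, not quoted from the report:

```latex
% Illustrative per-processor runtime model (notation assumed for this sketch):
%   F = computation along the critical path, W = words communicated,
%   S = number of synchronizations, with unit payloads \gamma, \beta, \alpha.
T \;\geq\; F \cdot \gamma \;+\; W \cdot \beta \;+\; S \cdot \alpha
```

Under such a model, a tradeoff lower bound of the (schematic) form \(F \cdot S^{k-1} = \Omega(n^k)\) implies that no schedule can simultaneously minimize work along the critical path and the number of synchronizations, so \(T\) must grow with the problem size \(n\) regardless of the processor count.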

BibTeX citation:

```bibtex
@techreport{Solomonik:EECS-2014-8,
    Author = {Solomonik, Edgar and Carson, Erin and Knight, Nicholas and Demmel, James},
    Title = {Tradeoffs between synchronization, communication, and work in parallel linear algebra computations},
    Institution = {EECS Department, University of California, Berkeley},
    Year = {2014},
    Month = {Jan},
    URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-8.html},
    Number = {UCB/EECS-2014-8},
    Abstract = {This paper derives tradeoffs between three basic costs of a parallel algorithm: synchronization, data movement, and computational cost. Our theoretical model counts the amount of work and data movement as a maximum of any execution path during the parallel computation. By considering this metric, rather than the total communication volume over the whole machine, we obtain new insight into the characteristics of parallel schedules for algorithms with non-trivial dependency structures. The tradeoffs we derive are lower bounds on the execution time of the algorithm which are independent of the number of processors, but dependent on the problem size. Therefore, these tradeoffs provide lower bounds on the parallel execution time of any algorithm computed by a system composed of any number of homogeneous components each with associated computational, communication, and synchronization payloads. We first state our results for general graphs, based on expansion parameters, then we apply the theorem to a number of specific algorithms in numerical linear algebra, namely triangular substitution, Gaussian elimination, and Krylov subspace methods. Our lower bound for LU factorization demonstrates the optimality of Tiskin's LU algorithm [24] answering an open question posed in his paper, as well as of the 2.5D LU algorithm which has analogous costs. We treat the computations in a general manner by noting that the computations share a similar dependency hypergraph structure and analyzing the communication requirements of lattice hypergraph structures.}
}
```

EndNote citation:

```
%0 Report
%A Solomonik, Edgar
%A Carson, Erin
%A Knight, Nicholas
%A Demmel, James
%T Tradeoffs between synchronization, communication, and work in parallel linear algebra computations
%I EECS Department, University of California, Berkeley
%D 2014
%8 January 25
%@ UCB/EECS-2014-8
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-8.html
%F Solomonik:EECS-2014-8
```