This paper presents an analysis of the following load balancing algorithm. At each step, each node in a network examines the number of tokens at each of its neighbors and sends a token to each neighbor with at least 2d+1 fewer tokens, where d is the maximum degree of any node in the network. We show that within O(Δ/α) steps, the algorithm reduces the maximum difference in tokens between any two nodes to at most O((d^2 log n)/α), where Δ is the global imbalance in tokens (i.e., the maximum difference between the number of tokens at any node initially and the average number of tokens), n is the number of nodes in the network, and α is the edge expansion of the network. The time bound is tight in the sense that for any graph with edge expansion α, and for any value Δ, there exists an initial distribution of tokens with imbalance Δ for which the time to reduce the imbalance to even Δ/2 is at least Ω(Δ/α). The bound on the final imbalance is tight in the sense that there exists a class of networks that can be locally balanced everywhere (i.e., the maximum difference in tokens between any two neighbors is at most 2d), while the global imbalance remains Ω((d^2 log n)/α). Furthermore, we show that upon reaching a state with a global imbalance of O((d^2 log n)/α), the time for this algorithm to locally balance the network can be as large as Ω(n^{1/2}). We extend our analysis to a variant of this algorithm for dynamic and asynchronous networks. We also present tight bounds for a randomized algorithm in which each node sends at most one token in each step.
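One synchronous step of the rule described above can be sketched as follows. This is a minimal illustration, not the paper's formal model: the graph representation (an adjacency dict) and all names (`step`, `adj`, `tokens`) are assumptions made for the example.

```python
# Sketch of one synchronous round of the load-balancing rule from the
# abstract: every node sends one token to each neighbor holding at
# least 2d+1 fewer tokens, where d is the maximum degree in the graph.
# The graph encoding and function names are illustrative assumptions.

def step(adj, tokens):
    """Apply one synchronous round; returns the new token counts."""
    d = max(len(nbrs) for nbrs in adj.values())  # maximum degree d
    delta = {v: 0 for v in adj}                  # net change this round
    for v, nbrs in adj.items():
        for u in nbrs:
            # v sends one token to each neighbor at least 2d+1 behind it
            if tokens[v] - tokens[u] >= 2 * d + 1:
                delta[v] -= 1
                delta[u] += 1
    return {v: tokens[v] + delta[v] for v in adj}

# Path graph a-b-c: d = 2, so the sending threshold is 2d+1 = 5.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
tokens = {"a": 10, "b": 0, "c": 0}
tokens = step(adj, tokens)  # only a exceeds a neighbor by >= 5
# -> {"a": 9, "b": 1, "c": 0}
```

Note that all sends in a round are decided from the counts at the start of the round and applied simultaneously, so the total number of tokens is conserved.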