Given a signal A of N dimensions, the problem is to obtain a representation R for it that is a linear combination of vectors in the dictionary H of Haar wavelets. The quality of the representation R is determined by B, the number of vectors from H used, and δ, the error between R and A. Traditionally, δ has been the sum squared error ε_R = ∑_i (R[i] − A[i])², in which case Parseval's theorem from 1799 helps solve the problem of finding the R with smallest ε_R in O(N) time. Recently, motivated by database applications, researchers have sought other notions of error, such as
- workload-aware error, or ε_R^π = ∑_i π[i](R[i] − A[i])², where π[i] is the workload, or weight, for i, and
- maximum pointwise absolute error, e.g., ε_R^∞ = max_i |R[i] − A[i]|.
Recent results give Ω(N²) time algorithms for finding the R that minimizes these errors. We present subquadratic algorithms for versions of these problems: a near-linear time algorithm to minimize ε_R^π when π is compressible, and an O(N^(2−ε)) time algorithm to minimize ε_R^∞. These algorithms follow a natural dynamic programming approach developed recently, but the improvements come from exploiting local structural properties of Haar wavelet representations of signals that we identify. Sparse approximation theory is a mature area of mathematics that has traditionally studied signal representations with Haar wavelets. It is interesting that the past few years have seen new problems in this area motivated by Computer Science concerns; we pose a few additional new problems and present some partial results.
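The classical O(N) result mentioned above can be sketched as follows: the orthonormal Haar transform preserves energy (Parseval's theorem), so the squared error of any B-term representation equals the sum of squares of the coefficients it drops, and keeping the B largest-magnitude coefficients is therefore optimal for ε_R. A minimal NumPy sketch (the function names haar, ihaar, and best_B_term are illustrative, not from the paper):

```python
import numpy as np

def haar(a):
    """Orthonormal fast Haar wavelet transform; len(a) must be a power of two."""
    c = np.asarray(a, dtype=float).copy()
    n = len(c)
    while n > 1:
        half = n // 2
        tmp = c[:n].copy()
        # pairwise averages and differences, scaled by 1/sqrt(2)
        # so that the transform is orthonormal (energy-preserving)
        c[:half] = (tmp[0::2] + tmp[1::2]) / np.sqrt(2)
        c[half:n] = (tmp[0::2] - tmp[1::2]) / np.sqrt(2)
        n = half
    return c

def ihaar(c):
    """Inverse of haar()."""
    a = np.asarray(c, dtype=float).copy()
    N, n = len(a), 1
    while n < N:
        tmp = a[:2 * n].copy()
        a[0:2 * n:2] = (tmp[:n] + tmp[n:2 * n]) / np.sqrt(2)
        a[1:2 * n:2] = (tmp[:n] - tmp[n:2 * n]) / np.sqrt(2)
        n *= 2
    return a

def best_B_term(a, B):
    """Best B-term Haar representation under sum squared error.

    By Parseval, the error equals the sum of squares of the dropped
    coefficients, so keeping the B largest in magnitude is optimal.
    """
    c = haar(a)
    r = np.zeros_like(c)
    keep = np.argsort(np.abs(c))[-B:]  # indices of the B largest |coefficients|
    r[keep] = c[keep]
    return ihaar(r)
```

Note that this greedy argument is specific to ε_R: for the workload-aware and maximum-error objectives discussed in the abstract, the largest-coefficient choice is no longer optimal, which is what makes those problems harder.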