We consider a classification method based on sparse grids which scales only linearly in the number of data points and is thus well suited for huge amounts of data. To obtain competitive results, such sparse grid classifiers are usually enhanced by locally refining the underlying regular sparse grid. However, parallelizing the corresponding adaptive algorithms requires thorough knowledge of the target hardware. Instead of improving performance by refining the sparse grid, we construct a team of classifiers that rely only on regular sparse grids and employ them as base learners within AdaBoost. Our experiments with synthetic and real-world datasets show that we achieve results similar to or better than those of locally refined sparse grids or libSVM, with respect to both runtime and accuracy.
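To illustrate the boosting scheme referred to above, the following is a minimal sketch of discrete AdaBoost with a team of weak base learners. Since a sparse grid classifier implementation is not part of this text, weighted decision stumps are used here purely as a stand-in base learner; the reweighting and weighted-vote machinery is the same regardless of the base learner plugged in.

```python
import numpy as np

def fit_stump(X, y, w):
    """Fit the best axis-aligned decision stump under sample weights w.

    Labels y are assumed to be in {-1, +1}. Returns (error, feature,
    threshold, sign) for the stump minimizing the weighted error.
    """
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] <= t, 1, -1)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, t, sign)
    return best

def stump_predict(X, j, t, sign):
    return sign * np.where(X[:, j] <= t, 1, -1)

def adaboost(X, y, rounds=20):
    """Discrete AdaBoost: train `rounds` weighted base learners."""
    n = len(y)
    w = np.full(n, 1.0 / n)          # uniform initial sample weights
    ensemble = []
    for _ in range(rounds):
        err, j, t, sign = fit_stump(X, y, w)
        err = max(err, 1e-10)        # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(X, j, t, sign)
        w *= np.exp(-alpha * y * pred)  # upweight misclassified points
        w /= w.sum()
        ensemble.append((alpha, j, t, sign))
    return ensemble

def predict(ensemble, X):
    """Weighted majority vote of the base learners."""
    score = sum(a * stump_predict(X, j, t, s) for a, j, t, s in ensemble)
    return np.sign(score)
```

In the method described here, each stump would be replaced by a classifier trained on a regular sparse grid with the current sample weights; the boosting loop itself is unchanged.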