Demixing, or source separation, involves disentangling a complicated signal into simpler, more informative components. Algorithms for signal demixing impact several applications, ranging from interference cancellation in communication signals to foreground-background separation in images and video to outlier suppression in machine learning. Central to many modern demixing algorithms is an assumption of some form of low-dimensional structure in the signal components. However, the majority of these algorithms are based on convex optimization, and their computational complexity can be high (polynomial) in the signal size. In this paper, we propose a new algorithmic approach for signal demixing based on structured sparsity models. Our approach leverages recent advances in constructing sparsity models via appropriately chosen graphs; this graphical approach can be shown to model a diverse variety of low-dimensional structures in signals. Despite being highly nonconvex, our algorithm exhibits a nearly-linear running time and is therefore scalable to very high-dimensional signals. We supplement our proposed algorithm with a theoretical analysis, providing sufficient conditions for provable reconstruction of the underlying components. Finally, we demonstrate the validity of the method via numerical experiments on real 2D image data.