This paper studies human sensorimotor learning and control using the theory of stochastic robust adaptive dynamic programming (RADP). The resulting framework unifies several recently discovered phenomena, including the active regulation of motor variability, the presence of suboptimal inference, and model-free learning, and explains how these factors may promote sensorimotor learning. We apply the framework to a model of the sensorimotor system and find remarkable consistency with a range of experimental observations. Moreover, a novel feature of the RADP algorithm in our framework is that knowledge of a stabilizing initial control policy is not required. These findings support our hypothesis that RADP is a sound computational principle for sensorimotor control.
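To illustrate why removing the stabilizing-initial-policy requirement matters, the sketch below shows a classical value-iteration recursion for a discrete-time linear-quadratic problem, which (unlike standard policy iteration) converges starting from the zero value function with no stabilizing gain assumed. This is a textbook illustration under hypothetical system matrices, not the paper's stochastic RADP algorithm.

```python
import numpy as np

# Hypothetical double-integrator-like system (not from the paper).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # marginally stable open loop
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)                        # state cost
R = np.array([[1.0]])                # control cost

# Value iteration: start from P = 0, i.e., no stabilizing policy is known.
P = np.zeros((2, 2))
for _ in range(2000):
    # Greedy gain for the current value matrix P.
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    # Riccati recursion (Bellman backup for the LQ cost).
    P = Q + A.T @ P @ A - A.T @ P @ B @ K

# The learned gain stabilizes the closed loop: spectral radius < 1.
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print("closed-loop spectral radius:", rho)
```

Policy iteration would instead require an initial gain that already stabilizes `A - B @ K`; the abstract's point is that the proposed RADP scheme dispenses with that assumption in a richer stochastic, robust setting.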