This chapter presents a reinforcement-learning-based shared control design for semi-autonomous vehicles with a human in the loop. The co-pilot controller and the human driver operate together and control the vehicle simultaneously. To capture the effect of the driver's reaction time, the interconnected human–vehicle system is described by differential-difference (time-delay) equations. Exploiting real-time measured data, the adaptive optimal shared controller is learned via adaptive dynamic programming, without accurate knowledge of the driver or vehicle models. When the data-driven shared steering controller is applied to the human-in-the-loop vehicle system, adaptivity, near optimality, and stability are ensured simultaneously, which allows the closed-loop system to handle potential parametric variations and uncertainties in the human–vehicle system. The efficacy of the proposed control strategy is established by rigorous proofs and demonstrated by numerical simulations.
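To make the delay structure concrete, the following is a minimal sketch, not the chapter's actual model: a hypothetical scalar differential-difference equation in which the driver's steering action depends on the state a reaction time tau in the past, while the co-pilot acts on the current state. All parameter names and values (a, b, tau, k_driver, k_copilot) are illustrative assumptions; the simulation uses forward Euler with a finite history buffer to handle the delay.

```python
def simulate(a=0.5, b=1.0, tau=0.3, k_driver=1.0, k_copilot=1.5,
             x0=1.0, dt=0.01, t_end=10.0):
    """Simulate x'(t) = a*x(t) + b*(u_driver(t - tau) + u_copilot(t)).

    The driver reaction time tau makes this a differential-difference
    (delay) equation; a buffer stores past states so the driver's
    feedback can be evaluated at the delayed state. Hypothetical
    illustrative model, not the chapter's actual vehicle dynamics.
    """
    n_delay = int(round(tau / dt))       # delay expressed in Euler steps
    buf = [x0] * (n_delay + 1)           # history of past states
    x = x0
    for _ in range(int(round(t_end / dt))):
        x_delayed = buf[0]               # state the driver reacts to
        u_driver = -k_driver * x_delayed # delayed human feedback
        u_copilot = -k_copilot * x       # co-pilot uses the current state
        x = x + dt * (a * x + b * (u_driver + u_copilot))
        buf.pop(0)
        buf.append(x)
    return x

final = simulate()
print(abs(final))  # small: the shared feedback stabilizes the delayed loop
```

With these illustrative gains the combined driver/co-pilot feedback outweighs the unstable open-loop term, so the state decays despite the reaction delay; in the chapter, such stabilizing gains are not fixed a priori but learned from measured data via adaptive dynamic programming.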