This paper presents several regularization schemes for data-driven optimization problems. Data-driven optimization is challenging because the distribution of the underlying stochastic disturbance is unknown; only a set of samples drawn from it is available. Most existing literature tackles the data-driven optimization problem via distributionally robust optimization, which, however, leads to an infinite-dimensional optimization problem that must be reformulated as a tractable finite-dimensional one. As a different line of attack, the regularization schemes presented in this paper solve the data-driven minimization problem by approximating the unknown expectation with data-dependent surrogates that hold with high confidence and by minimizing these surrogates in a tractable way. The tradeoff between the confidence level and the optimization error is analyzed.
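To make the idea concrete, the following is a minimal sketch of one common data-dependent surrogate of this kind: a sample-average approximation augmented with a variance penalty, so that the surrogate upper-bounds the unknown expectation with high confidence. The loss `f`, the penalty weight `lam`, and the sampling model are illustrative assumptions, not the specific schemes proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Samples from the (in practice unknown) disturbance distribution.
samples = rng.normal(loc=2.0, scale=1.0, size=200)

def f(x, xi):
    # Illustrative per-sample loss; the true target is E[f(x, xi)].
    return (x - xi) ** 2

def surrogate(x, lam=0.1):
    vals = f(x[0], samples)
    # Empirical mean plus a standard-deviation penalty: the penalty acts
    # as a regularizer that turns the sample average into a
    # high-confidence upper bound on the unknown expectation.
    return vals.mean() + lam * vals.std(ddof=1)

# Minimizing the surrogate is a tractable finite-dimensional problem.
res = minimize(surrogate, x0=np.array([0.0]))
print(res.x)
```

Here the confidence level is controlled through `lam`: a larger penalty yields a more conservative (higher-confidence) bound at the cost of a larger optimization error, which is exactly the tradeoff the paper analyzes.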