The traditional way of storing facts in triplets (head entity, relation, tail entity), abbreviated as (h, r, t), allows knowledge to be displayed intuitively and acquired easily by humans, but it is hard for AI systems to compute with or reason over. Inspired by the success of Distributed Representations in AI-related fields, recent studies aim to represent each entity and relation with a unique low-dimensional embedding, in contrast to the symbolic and atomic framework of displaying knowledge in triplets. In this way, knowledge computation and reasoning can be greatly facilitated by simple vector arithmetic, i.e., h + r ≈ t. We thus contribute an effective model that learns better embeddings satisfying this formula by pulling the positive tail entities t+ together and close to h + r (Nearest Neighbor), while simultaneously pushing the negatives t- away from the positives t+ by maintaining a Large Margin. We also design a corresponding learning algorithm that efficiently finds the optimal solution via iterative Stochastic Gradient Descent. Quantitative experiments illustrate that our approach achieves state-of-the-art performance compared with several recent methods on benchmark datasets for two classical applications, i.e., link prediction and triplet classification. Moreover, we analyze the parameter complexity of all the evaluated models; the analysis indicates that our model requires fewer computational resources while outperforming the other methods.
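To make the margin-based objective concrete, the following is a minimal sketch in Python/NumPy of one possible formulation. The specific loss form max(0, margin + d(h+r, t+) - d(h+r, t-)), the L2 distance, the uniform initialization, and all hyperparameter values here are illustrative assumptions for exposition, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

n_entities, n_relations, dim = 100, 10, 50
margin, lr = 1.0, 0.01  # assumed hyperparameters, for illustration only

# Initialize entity and relation embeddings (uniform init, entities L2-normalized).
E = rng.uniform(-6 / np.sqrt(dim), 6 / np.sqrt(dim), (n_entities, dim))
R = rng.uniform(-6 / np.sqrt(dim), 6 / np.sqrt(dim), (n_relations, dim))
E /= np.linalg.norm(E, axis=1, keepdims=True)

def distance(h, r, t):
    """Translation-based score: small when h + r ≈ t."""
    return np.linalg.norm(E[h] + R[r] - E[t])

def sgd_step(h, r, t_pos, t_neg):
    """One stochastic update on a (positive, corrupted) triplet pair."""
    loss = margin + distance(h, r, t_pos) - distance(h, r, t_neg)
    if loss <= 0:  # margin already satisfied; no update needed
        return 0.0
    # Gradients of the two L2 distances w.r.t. the translated vectors.
    g_pos = (E[h] + R[r] - E[t_pos]) / (distance(h, r, t_pos) + 1e-12)
    g_neg = (E[h] + R[r] - E[t_neg]) / (distance(h, r, t_neg) + 1e-12)
    # Pull t+ toward h + r; push t- away from it.
    E[h]     -= lr * (g_pos - g_neg)
    R[r]     -= lr * (g_pos - g_neg)
    E[t_pos] += lr * g_pos
    E[t_neg] -= lr * g_neg
    return loss

# Toy training loop over randomly corrupted tails (negative sampling).
for step in range(1000):
    h = rng.integers(n_entities)
    r = rng.integers(n_relations)
    t_pos = rng.integers(n_entities)
    t_neg = rng.integers(n_entities)
    while t_neg == t_pos:  # ensure the corrupted tail differs from the true one
        t_neg = rng.integers(n_entities)
    sgd_step(h, r, t_pos, t_neg)
```

The key design choice this sketch reflects is that the hinge loss only produces an update when a negative tail violates the margin around h + r, so well-separated triplets contribute no gradient and training concentrates on the hard cases.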