Support vector machines (SVMs) are traditionally considered the best classifiers in terms of minimizing the empirical probability of misclassification, although they can be slow when the training datasets are large. Here SVMs are compared with the classic k-Nearest Neighbour (k-NN) decision rule on seven large real-world datasets obtained from the University of California at Irvine (UCI) Machine Learning Repository. To counterbalance the slowness of SVMs on large datasets, we incorporate three simple and fast methods for reducing the size of the training data, and thus speeding up the SVMs. One is blind random sampling. The other two are new linear-time methods for guided random sampling, which we call Gaussian Condensing and Gaussian Smoothing. In spite of the speedups obtained by incorporating Gaussian Smoothing and Gaussian Condensing, the results show that k-NN methods are superior to SVMs on most of the seven datasets, casting doubt on the general superiority of SVMs. Furthermore, blind random sampling works surprisingly well and is robust, suggesting that it is a worthwhile preprocessing step for either SVMs or k-NN.
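To make the simplest of the three reduction methods concrete, the sketch below shows blind random sampling as a preprocessing step before training an SVM and a k-NN classifier. It is a minimal sketch assuming scikit-learn; the synthetic dataset, the 10% sample fraction, and the hyperparameters are illustrative stand-ins, not the datasets or settings used in this study.

```python
# Minimal sketch: blind random sampling before SVM / k-NN training.
# Assumes scikit-learn; dataset and parameters are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# A synthetic stand-in for a large real-world dataset.
X, y = make_classification(n_samples=50_000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Blind random sampling: keep a uniform random 10% of the training set,
# so the SVM trains on far fewer points.
n_keep = len(X_train) // 10
idx = rng.choice(len(X_train), size=n_keep, replace=False)
X_small, y_small = X_train[idx], y_train[idx]

# Train both classifiers on the reduced training set and compare accuracy.
svm = SVC(kernel="rbf").fit(X_small, y_small)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_small, y_small)

print("SVM accuracy on reduced data:", svm.score(X_test, y_test))
print("k-NN accuracy on reduced data:", knn.score(X_test, y_test))
```

Because both classifiers receive the same reduced training set, a comparison of this kind isolates the effect of the sampling step itself, which is the sense in which random sampling serves as a preprocessing step for either method.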