This paper describes a computationally more efficient alternative to the bad data identification procedure known as the largest normalized residual (LNR) test. The LNR test is a sequential procedure in which measurements suspected of carrying gross errors are identified and removed from the measurement set one at a time. The computational burden of the test therefore grows in proportion to the number of bad measurements, making it prohibitively expensive for systems that commonly contain many measurements with gross errors. This paper proposes an improved version of the approach in which the number of identification and correction cycles needed to process a large number of bad data points is significantly reduced, facilitating efficient application of the LNR test to very large practical power systems.
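For context, the classical sequential procedure the abstract refers to can be sketched as follows. This is a minimal illustration assuming a linear weighted least squares (WLS) estimator, not the paper's improved method; the function name and the detection threshold of 3.0 are illustrative choices, and the residual covariance formula Ω = R − H G⁻¹ Hᵀ is the standard one for linear WLS.

```python
import numpy as np

def lnr_bad_data_loop(H, z, R, threshold=3.0):
    """Classical sequential LNR test for a linear WLS estimator (sketch).

    Repeatedly estimates the state, computes normalized residuals, and
    removes the single measurement with the largest normalized residual,
    until all normalized residuals fall at or below `threshold`.
    Returns the final state estimate and the original indices of the
    measurements flagged as bad.
    """
    idx = np.arange(len(z))      # original indices of surviving measurements
    removed = []
    while True:
        W = np.linalg.inv(R)                       # weight matrix W = R^-1
        G = H.T @ W @ H                            # gain matrix
        x_hat = np.linalg.solve(G, H.T @ W @ z)    # WLS state estimate
        r = z - H @ x_hat                          # measurement residuals
        # Residual covariance: Omega = R - H G^-1 H^T
        Omega = R - H @ np.linalg.solve(G, H.T)
        r_norm = np.abs(r) / np.sqrt(np.diag(Omega))
        k = int(np.argmax(r_norm))
        if r_norm[k] <= threshold:
            return x_hat, removed                  # no remaining bad data
        removed.append(int(idx[k]))                # flag measurement as bad
        keep = np.ones(len(z), dtype=bool)
        keep[k] = False                            # drop it and re-estimate
        H, z, idx = H[keep], z[keep], idx[keep]
        R = R[np.ix_(keep, keep)]
```

Note that each removed measurement triggers a complete re-estimation cycle; this per-measurement loop is exactly the cost that grows with the number of bad data points and that the proposed method reduces.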