Due to constraints on cost, power, and communication, losses often arise in large sensor networks. The sensor data can be modeled as the output of a linear stochastic system whose output samples are randomly lost. This paper considers the general problem of state estimation for jump linear systems, in which the discrete transitions are modeled as a Markov chain. Among other applications, this rich model can be used to analyze sensor networks, with sensor loss events modeled as Markov processes. Under the jump linear system model, many types of underlying losses can easily be considered, and the optimal estimator, run at the receiver in the presence of missing sensor data samples, is a standard time-varying Kalman filter. We show that the asymptotic average estimation error variance converges and can be computed by solving a Linear Matrix Inequality (LMI). Under this framework, an arbitrary Markov loss process can be modeled and its asymptotic average error variance computed directly. We include several illustrative examples: fixed-length burst errors, a two-state model, and partial losses due to multiple SNR states. Our analysis encompasses discrete changes not only in the received data, as described above, but also in the underlying system. In the context of the lossy sensor model, the former allows for variation in sensor positioning, power control, and loss of data communications; the latter allows for discrete changes in the dynamics of the variable monitored by the sensor. This modeling freedom yields a tool that is potentially valuable in scenarios where entities sharing information are subject to challenging, time-varying network conditions.
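To make the setup concrete, the sketch below simulates the two-state Markov loss model (a Gilbert-Elliott-style good/bad channel) and runs the time-varying Kalman filter that predicts at every step but updates only when a sample is received. All system matrices, loss probabilities, and the `simulate` helper are illustrative assumptions, not taken from the paper; the sketch estimates the average error variance by Monte Carlo averaging of the filter covariance rather than by solving the LMI the paper derives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) linear stochastic system: x_{k+1} = A x_k + w_k, y_k = C x_k + v_k
A = np.array([[1.0, 0.1],
              [0.0, 0.95]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)    # process noise covariance
R = np.array([[0.1]])   # measurement noise covariance

# Two-state Markov loss process: state 0 = "good" (sample received),
# state 1 = "bad" (sample lost). Rows are transition distributions.
P_trans = np.array([[0.95, 0.05],
                    [0.30, 0.70]])

def simulate(T=20000):
    """Run the time-varying Kalman filter over a Markov loss sequence
    and return the time-averaged trace of the error covariance."""
    x = np.zeros(2)        # true state
    x_hat = np.zeros(2)    # estimate
    P = np.eye(2)          # estimator error covariance
    s = 0                  # Markov channel state
    acc = 0.0
    for _ in range(T):
        # true system evolution and noisy measurement
        x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
        y = C @ x + rng.multivariate_normal(np.zeros(1), R)
        # Markov channel transition
        s = rng.choice(2, p=P_trans[s])
        # predict step (always performed)
        x_hat = A @ x_hat
        P = A @ P @ A.T + Q
        # update step only when the sample arrives (good state)
        if s == 0:
            S = C @ P @ C.T + R
            K = P @ C.T @ np.linalg.inv(S)
            x_hat = x_hat + K @ (y - C @ x_hat)
            P = (np.eye(2) - K @ C) @ P
        acc += np.trace(P)
    return acc / T

avg_var = simulate()
print(f"empirical average trace of error covariance: {avg_var:.4f}")
```

Because the update is skipped during bad-channel runs, the covariance grows during bursts and shrinks on reception; averaging its trace over a long run approximates the asymptotic average error variance that the LMI characterizes exactly.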