Tracking the emotional valence state of an individual can serve as an important marker of personal health and well-being. Through automatic detection of emotional valence, timely intervention can be provided in the event of prolonged periods of negative valence, such as anxiety, particularly for individuals prone to cardiovascular disease. Our goal is to use facial electromyogram (EMG) signals to estimate a person's hidden, self-labelled emotional valence (EV) state during the presentation of emotion-eliciting music videos via a state-space approach. We present a novel technique for extracting binary and continuous features from EMG signals. We then present a state-space model of valence in which the observation process includes both the extracted continuous and binary features. We use both feature types simultaneously to estimate the model parameters and the unobserved EV state via an expectation-maximization (EM) algorithm. Using experimental data, we illustrate that the estimated EV state of the subject matches the music video stimuli across different trials. Using three classifiers (support vector machine, linear discriminant analysis, and k-nearest neighbors), we achieve a maximum classification accuracy of 89% for valence prediction based on the estimated emotional valence state. The results demonstrate our system's ability to track valence for personal well-being monitoring.
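
The abstract does not give the exact model equations; a minimal sketch of one common formulation of such a mixed binary-continuous state-space model, assuming a Gaussian random-walk latent EV state $x_k$, a binary EMG feature $n_k$ with a Bernoulli (logistic) observation model, and a continuous EMG feature $r_k$ with a linear-Gaussian observation model, is:

\begin{align}
x_k &= x_{k-1} + \varepsilon_k, & \varepsilon_k &\sim \mathcal{N}(0, \sigma_\varepsilon^2), \\
P(n_k = 1 \mid x_k) &= \frac{1}{1 + e^{-(\gamma_0 + \gamma_1 x_k)}}, \\
r_k &= r_0 + r_1 x_k + v_k, & v_k &\sim \mathcal{N}(0, \sigma_v^2).
\end{align}

Here the coefficients $\gamma_0$, $\gamma_1$, $r_0$, $r_1$ and the noise variances $\sigma_\varepsilon^2$, $\sigma_v^2$ are placeholders for parameters that would be estimated jointly with the latent state sequence via the EM algorithm mentioned above; the specific parameterization used in the paper may differ.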