In our cross-layer design, we use different models to capture the properties of the different layers. As stated in Chap. 8, an MDP model can capture the dynamics of the cyber layer. However, that model assumes the defender can observe the cyber state at each cyber time instant. In real applications, it is challenging to obtain full information about the cyber state directly, so an MDP cannot capture the defender's incomplete knowledge of the cyber states. In this chapter, we introduce a Partially Observable Markov Decision Process (POMDP) to capture the uncertainty of the cyber state. In a POMDP, instead of observing the states directly, we receive an observation whose distribution depends on the underlying state. We use this information to build a Hidden Markov Model (HMM) filter, which constructs a belief over the states. Based on this belief, we aim to find an optimal policy that minimizes an expected cost.
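The HMM filter mentioned above can be sketched as a simple Bayesian belief update: predict the next-state distribution using the transition matrix, then correct it with the likelihood of the received observation. The following is a minimal illustration only; the two-state transition matrix `P`, observation likelihoods `B`, and the observation sequence are hypothetical placeholders, not values from this chapter.

```python
import numpy as np

# Hypothetical two-state cyber model (placeholder numbers).
P = np.array([[0.9, 0.1],      # P[i, j] = Pr(next state j | current state i)
              [0.3, 0.7]])
B = np.array([[0.8, 0.2],      # B[i, y] = Pr(observation y | state i)
              [0.25, 0.75]])

def hmm_filter_update(belief, observation):
    """One HMM filter step: predict with P, then correct with B (Bayes' rule)."""
    predicted = belief @ P                        # prior over the next state
    unnormalized = predicted * B[:, observation]  # weight by observation likelihood
    return unnormalized / unnormalized.sum()      # normalize to a posterior belief

belief = np.array([0.5, 0.5])    # uniform initial belief over the two states
for y in [0, 0, 1]:              # a hypothetical observation sequence
    belief = hmm_filter_update(belief, y)
```

Each update keeps `belief` a valid probability vector, so a policy can be computed as a function of the belief rather than of the (unobserved) state.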