<p>We consider the identification of a Markov process {W<sub>t</sub>, X<sub>t</sub>*}, t = 1, 2, ..., T, when only {W<sub>t</sub>}, t = 1, 2, ..., T, is observed. In structural dynamic models, W<sub>t</sub> denotes the sequence of choice variables and observed state variables of an optimizing agent, while X<sub>t</sub>* denotes the sequence of serially correlated unobserved state variables. The Markov setting allows the distribution of the unobserved state variable X<sub>t</sub>* to depend on W<sub>t-1</sub> and X<sub>t-1</sub>*. We show that, under reasonable assumptions, the joint distribution of (W<sub>t</sub>, X<sub>t</sub>*, W<sub>t-1</sub>, X<sub>t-1</sub>*) is identified from the observed distribution of (W<sub>t+1</sub>, W<sub>t</sub>, W<sub>t-1</sub>, W<sub>t-2</sub>, W<sub>t-3</sub>). Identification of this joint distribution is a crucial input into methodologies for estimating dynamic models based on the "conditional-choice-probability (CCP)" approach pioneered by Hotz and Miller.</p>
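<p>As a sketch of the setup described above (notation taken from the abstract; the precise conditions are in the paper), the first-order Markov assumption on the joint process and the identification claim can be written as:</p>

```latex
% First-order Markov assumption on the joint process {W_t, X_t^*}:
% the current period depends on the past only through the previous period,
% so the unobserved state X_t^* may depend on both W_{t-1} and X_{t-1}^*.
f\left(W_t, X_t^* \mid W_{t-1}, X_{t-1}^*, \ldots, W_1, X_1^*\right)
  = f\left(W_t, X_t^* \mid W_{t-1}, X_{t-1}^*\right)

% Identification claim: the joint law of one transition of the full process
% is recovered from five consecutive observations of W alone,
%   f(W_t, X_t^*, W_{t-1}, X_{t-1}^*)
% is identified from
%   f(W_{t+1}, W_t, W_{t-1}, W_{t-2}, W_{t-3}).
```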