With the mainstream adoption of clinical genetic testing, laboratories routinely have access to millions of genomic sequences produced as a result of ordered tests. These sequences are primarily used for reporting on specific genes and disease conditions, yet they contain valuable population-scale genetic information that remains underutilized. The distribution of haplotypes in the population is one such piece of information that can be gleaned from sequence datasets and used to power downstream applications such as association studies. While large datasets of genomic sequences can be informative about haplotype distributions, the nature of the sequencing performed can confound the reconstruction of haplotypes. Specifically, phased whole genomes are better suited to this purpose than unphased targeted sequences; however, the latter are more abundant. In our work, we address the specific challenges that arise when large datasets of targeted sequencing are used to recover haplotypes. Leveraging targeted genomic sequences for genome-wide association studies requires that variants in the non-targeted regions be imputed. This is commonly done using a variant of the Li-Stephens recombination model, which approximates the generating mechanism of chromosomes by a hidden Markov model (HMM) that produces the chromosome of an individual as a mosaic of founder chromosomes. Although widely used, one limiting factor is the need to bound the state space by hypothesizing a number of founders for the population. This is particularly challenging for large datasets with samples from diverse ancestries. The second challenge in using targeted genomic sequences is that the off-target regions have sparse coverage, so the imputation model must account for uncertainty in the base pair calls in the off-target regions. Here, we propose a new method based on the Hidden Markov Dirichlet Process (HMDP) non-parametric model [1].
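To make the mosaic picture concrete, the following is a minimal sketch of a Li-Stephens-style generative step, not the model described above: the hidden state is the founder chromosome being copied, recombination occasionally switches it, and mutation occasionally flips the copied allele. The founder panel, switch rate, and mutation rate here are all hypothetical illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical founder panel: K founder haplotypes over M biallelic sites (0/1).
K, M = 4, 20
founders = rng.integers(0, 2, size=(K, M))

def sample_mosaic(founders, switch_prob=0.1, mut_prob=0.01, rng=rng):
    """Sample one haplotype as a mosaic of founder chromosomes.

    At each site the hidden state (the founder being copied) either
    persists or switches uniformly to a founder (recombination); the
    emitted allele is the founder's allele, flipped with a small
    mutation probability.
    """
    K, M = founders.shape
    state = rng.integers(K)
    hap = np.empty(M, dtype=int)
    path = np.empty(M, dtype=int)
    for m in range(M):
        if m > 0 and rng.random() < switch_prob:
            state = rng.integers(K)      # recombination event: pick a founder
        path[m] = state
        allele = founders[state, m]
        if rng.random() < mut_prob:      # mutation: flip the copied allele
            allele = 1 - allele
        hap[m] = allele
    return hap, path

hap, path = sample_mosaic(founders)
```

The HMDP replaces the fixed `K` with an unbounded state space; this sketch only illustrates the finite mosaic mechanism being generalized.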
While the HMDP model helps us circumvent the problem of fixing the state space, our modification to its observation model helps us handle the sparseness and varying coverage of the targeted sequences. The learning algorithm for this model is a modification of the HMDP Gibbs sampler that reflects the changes to the observation mechanism. Inference is based on the standard HMM forward-backward algorithm to compute smoothed posterior distributions. We train the model with a dataset of approximately 1,500 targeted BAM files and validate it using downsampled whole genome files, where the task is to impute approximately 75,000 SNPs in a 2.5 Mb region of Chr22. Even though a typical downsampled file has coverage for only a small fraction of SNPs, we obtain good performance when comparing imputed genotypes with the true genotypes (weighted F1, precision, recall: 0.95, 0.95, 0.96). We compare our method against a recent reference panel-free imputation algorithm, STITCH, and show comparable performance without having to make any fixed state space assumptions (STITCH weighted F1, precision, recall: 0.96, 0.96, 0.97). Overall, our new method enables imputing genomes with targeted sequencing using an unbounded state space.
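The smoothing step can be sketched as a standard scaled forward-backward pass; this is a generic illustration, not the paper's implementation. Sparse coverage is handled here in the simplest possible way, a hypothetical choice for this sketch: a site with no reads gets an uninformative (all-ones) emission likelihood row, so the posterior there is driven entirely by the transition structure and neighboring covered sites.

```python
import numpy as np

def smoothed_posteriors(T, E, pi):
    """Forward-backward smoothing for an HMM with K states and M sites.

    T:  (K, K) transition matrix.
    E:  (M, K) per-site emission likelihoods; a row of ones encodes a
        site with no coverage (an uninformative observation).
    pi: (K,) initial state distribution.
    Returns (M, K) smoothed posteriors P(state_m | all observations).
    """
    M, K = E.shape
    alpha = np.zeros((M, K))
    beta = np.zeros((M, K))
    alpha[0] = pi * E[0]
    alpha[0] /= alpha[0].sum()
    for m in range(1, M):                    # forward pass, scaled
        alpha[m] = (alpha[m - 1] @ T) * E[m]
        alpha[m] /= alpha[m].sum()
    beta[-1] = 1.0
    for m in range(M - 2, -1, -1):           # backward pass, scaled
        beta[m] = T @ (E[m + 1] * beta[m + 1])
        beta[m] /= beta[m].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# Toy example: 3 states, 5 sites, coverage only at the first site.
K = 3
T = np.full((K, K), 1.0 / K)
E = np.ones((5, K))
E[0] = [0.9, 0.05, 0.05]                     # one covered, informative site
pi = np.full(K, 1.0 / K)
post = smoothed_posteriors(T, E, pi)
```

In the actual model the emission likelihoods would instead come from the read-level observation model described above, which weights base calls by their coverage and quality.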