Convergence-to-the-truth results and merging-of-opinions results are part of the basic toolkit of Bayesian epistemologists. In a nutshell, the former establish that Bayesian agents expect their beliefs to almost surely converge to the truth as the evidence accumulates. The latter establish that two Bayesian agents with different subjective priors are guaranteed to almost surely reach inter-subjective agreement as they make more and more observations, provided that their priors are sufficiently compatible. While significant in and of themselves, convergence to the truth with probability one and merging of opinions with probability one remain somewhat elusive notions. In their classical form, these results do not specify which data streams belong to the probability-one set of sequences on which convergence to the truth or merging of opinions occurs. In particular, they do not reveal whether the data streams that ensure eventual convergence or merging share any property that might explain their conduciveness to successful learning. A natural question raised by these classical results is thus whether the kinds of data streams that are conducive to convergence and merging for Bayesian agents admit a uniform and informative characterization.
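As a toy illustration of the two phenomena (a sketch only, not the theorems themselves, and with all modeling choices assumed for the example): two Bayesian agents with different Beta priors over the bias of a coin observe the same Bernoulli data stream and perform conjugate updates. Their posterior means both approach the true bias (convergence to the truth) and approach each other (merging of opinions), since Beta priors with full support are mutually absolutely continuous in the sense the merging results require.

```python
import random

random.seed(0)

# Hypothetical setup: the true bias of the coin generating the
# shared data stream (unknown to both agents).
TRUE_P = 0.7

# Two agents with different Beta priors over the coin's bias.
# Agent A starts from a uniform Beta(1, 1) prior; Agent B from a
# skewed Beta(8, 2) prior. Both priors assign positive density
# everywhere on (0, 1), so they are "sufficiently compatible".
a_alpha, a_beta = 1.0, 1.0
b_alpha, b_beta = 8.0, 2.0

N = 10_000
for _ in range(N):
    flip = 1 if random.random() < TRUE_P else 0
    # Conjugate Beta-Bernoulli update: heads increments alpha,
    # tails increments beta, identically for both agents.
    a_alpha += flip
    a_beta += 1 - flip
    b_alpha += flip
    b_beta += 1 - flip

# Posterior means after N shared observations.
mean_a = a_alpha / (a_alpha + a_beta)
mean_b = b_alpha / (b_alpha + b_beta)

print(f"Agent A posterior mean: {mean_a:.4f}")
print(f"Agent B posterior mean: {mean_b:.4f}")
print(f"Disagreement:           {abs(mean_a - mean_b):.6f}")
```

On a typical run, both posterior means sit close to the true bias and the residual disagreement is tiny, since the differing priors are swamped by the shared evidence. Of course, the classical theorems say nothing about which particular sequences, like the one sampled here, lie in the probability-one set; that is precisely the question at issue.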