Orthodox Bayesianism endorses revising by conditionalization. This paper investigates the zero-raising problem, or equivalently the certainty-dropping problem, of orthodox Bayesianism: previously neglected possibilities remain neglected, even when the new evidence suggests otherwise. Yet one may want to model open-minded agents, that is, agents capable of raising previously neglected possibilities. Different reasons can be given for open-mindedness, one of which is fallibilism. The paper proposes a family of open-minded propositional revisions depending on a parameter ε. The basic idea is this: first extend the prior to the newly suggested possibilities by mixing the prior with the uniform probability on these possibilities, then conditionalize. This may put the agent back on the right track when her beliefs or evidence happen to be false. The paper justifies this family of equivocal epsilon-conditionalizations as minimal non-biased open-minded modifications of conditionalization. Several variations are discussed, such as mixing with an ad hoc or silent prior instead of the uniform prior, and a generalization to probabilistic information is given. The approach is compared to other accounts, such as Jeffrey's Bayesianism, Gärdenfors's probabilistic revision, maximizing entropy, and minimal revision.
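To make the mixing-then-conditionalizing step concrete, here is a minimal sketch under assumed notation; the symbols P, E, N, U_N, and ε are illustrative and need not match the paper's own definitions. Writing P for the prior, E for the learned proposition, N for the set of newly suggested possibilities, and U_N for the uniform probability on N, an equivocal epsilon-conditionalization of P on E would take the form

\[
P_\varepsilon(A \mid E) \;=\; \frac{(1-\varepsilon)\,P(A \cap E) \;+\; \varepsilon\,U_N(A \cap E)}{(1-\varepsilon)\,P(E) \;+\; \varepsilon\,U_N(E)},
\]

so that possibilities in N receive positive probability from the uniform component before conditionalization, even if P assigned them probability zero.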