We propose a new family of adaptive first-order methods for a class of convex minimization problems that may fail to be Lipschitz continuous or smooth. Specifically, motivated by a recent flurry of activity on non-Lipschitz (NoLips) optimization, we consider problems that are continuous or smooth relative to a reference Bregman function, as opposed to a global, ambient norm (Euclidean or otherwise). These conditions encompass a wide range of problems with singular objectives, such as Fisher markets, Poisson tomography, and D-optimal design, that cannot be treated with standard first-order methods for Lipschitz continuous or smooth problems. In this setting, the application of existing order-optimal adaptive methods, like UNIXGRAD or ACCELEGRAD, is not possible, especially in the presence of randomness and uncertainty. The proposed method, which we call adaptive mirror descent (ADAMIR), aims to close this gap by concurrently achieving min-max optimal rates in problems that are relatively continuous or smooth, including stochastic ones.
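For context, the relative smoothness condition invoked above can be written as follows; this is the standard formulation from the NoLips literature, given here as background rather than as a verbatim definition from the present paper. With $h$ a convex, differentiable reference (Bregman) function and
\[
D_h(y, x) = h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle
\]
its induced Bregman divergence, a function $f$ is said to be $L$-smooth relative to $h$ if
\[
f(y) \le f(x) + \langle \nabla f(x),\, y - x \rangle + L\, D_h(y, x)
\quad \text{for all } x, y \in \operatorname{dom} h,
\]
which recovers ordinary $L$-smoothness in the Euclidean case $h(x) = \tfrac{1}{2}\lVert x \rVert_2^2$.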
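As an illustration of the adaptive mirror-descent template in this setting, the following is a minimal, self-contained sketch: entropic mirror descent on the simplex with a generic AdaGrad-style step-size rule. The step-size policy, the function names, and the test objective are illustrative assumptions; this is not the ADAMIR update rule itself.
\begin{verbatim}
import numpy as np

def entropic_prox(x, g, eta):
    # Bregman proximal step for the negative-entropy reference function
    # h(x) = sum_i x_i log x_i on the simplex:
    #   argmin_y { eta*<g, y> + D_h(y, x) }  =>  multiplicative update.
    z = -eta * g
    z -= z.max()               # shift for numerical stability (normalization-invariant)
    y = x * np.exp(z)
    return y / y.sum()

def adaptive_mirror_descent(grad, x0, T=2000):
    # Mirror descent with a generic AdaGrad-style step size (an illustrative
    # adaptive rule, not the ADAMIR policy from the paper).
    x, x_avg, sq_sum = x0.copy(), np.zeros_like(x0), 0.0
    for t in range(1, T + 1):
        g = grad(x)
        sq_sum += float(g @ g)             # accumulate observed squared gradient norms
        eta = 1.0 / np.sqrt(1.0 + sq_sum)  # step size adapts to past gradients
        x = entropic_prox(x, g, eta)
        x_avg += (x - x_avg) / t           # ergodic (running) average of iterates
    return x_avg

# Singular test objective f(x) = -sum_i c_i log x_i over the simplex:
# its gradient -c/x blows up at the boundary, so f is not Lipschitz,
# yet its minimizer over the simplex is simply x* = c.
c = np.array([0.5, 0.3, 0.2])
x = adaptive_mirror_descent(lambda x: -c / x, np.ones(3) / 3)
print(x)   # should be close to [0.5, 0.3, 0.2]
\end{verbatim}
The entropic setup is the textbook example of the mirror-descent machinery: the objective here has unbounded gradients near the boundary, so no global Lipschitz constant exists, yet the Bregman proximal step keeps the iterates in the interior and the adaptive step size requires no prior knowledge of problem constants.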