A sequence y = (y(1), ..., y(n)) is said to be a coarsening of a given finite-alphabet source sequence x = (x(1), ..., x(n)) if, for some function φ, y(i) = φ(x(i)) for i = 1, ..., n. In lossless refinement source coding, it is assumed that the decoder already possesses a coarsening y of a given source sequence x. It is the job of the lossless refinement source encoder to furnish the decoder with a binary codeword B(x|y) which the decoder can employ in combination with y to reconstruct x. We present a natural grammar-based approach for finding the binary codeword B(x|y) in two steps. In the first step of the grammar-based approach, the encoder furnishes the decoder with O(√n log₂ n) code bits at the beginning of B(x|y) which tell the decoder how to build a context-free grammar G(y) which represents y. The encoder possesses a context-free grammar G(x) which represents x; in the second step of the grammar-based approach, the encoder furnishes the decoder with the remaining code bits of B(x|y), which tell the decoder how to build G(x) from G(y). We prove that our grammar-based lossless refinement source coding scheme is universal in the sense that its maximal redundancy per sample is O(1/log₂ n) for n source samples, with respect to any finite-state lossless refinement source coding scheme. As a by-product, we provide a useful notion of the conditional entropy H(G(x)|G(y)) of the grammar G(x) given the grammar G(y), which is approximately equal to the length of the codeword B(x|y).
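
To make the coarsening relation and the refinement-coding setup concrete, the following minimal Python sketch applies a coarsening map φ symbol by symbol and then encodes x given y with a naive per-symbol code: for each position, the encoder sends the index of x(i) within the preimage φ⁻¹(y(i)). This baseline is only an illustration of the problem statement, not the grammar-based two-step scheme described above; the alphabet, the particular map φ, and all function names here are hypothetical.

    from math import ceil, log2

    def coarsen(x, phi):
        """Coarsening relation: y(i) = phi(x(i)) for each position i."""
        return [phi(s) for s in x]

    def refinement_encode(x, y, phi, alphabet):
        """Naive per-symbol refinement encoder (illustrative baseline only):
        for each i, send the index of x(i) within the preimage phi^{-1}(y(i))."""
        bits = []
        for xi, yi in zip(x, y):
            preimage = sorted(a for a in alphabet if phi(a) == yi)
            if len(preimage) <= 1:
                continue  # y(i) already determines x(i); no bits needed
            k = ceil(log2(len(preimage)))
            idx = preimage.index(xi)
            bits.extend((idx >> j) & 1 for j in reversed(range(k)))
        return bits

    def refinement_decode(bits, y, phi, alphabet):
        """Decoder: recovers x from the coarsening y plus the refinement bits."""
        x, pos = [], 0
        for yi in y:
            preimage = sorted(a for a in alphabet if phi(a) == yi)
            if len(preimage) <= 1:
                x.append(preimage[0])
                continue
            k = ceil(log2(len(preimage)))
            idx = 0
            for _ in range(k):
                idx = (idx << 1) | bits[pos]
                pos += 1
            x.append(preimage[idx])
        return x

    if __name__ == "__main__":
        alphabet = [0, 1, 2, 3]
        phi = lambda s: s // 2          # hypothetical coarsening: {0,1} -> 0, {2,3} -> 1
        x = [0, 3, 2, 1, 1, 2, 0, 3]
        y = coarsen(x, phi)
        bits = refinement_encode(x, y, phi, alphabet)
        assert refinement_decode(bits, y, phi, alphabet) == x

The grammar-based scheme of the paper replaces this per-symbol code with the two-step construction sketched in the abstract: first describe a grammar G(y) for y, then describe how to transform G(y) into a grammar G(x) for x.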