Let $f_{n\lambda}$ be the regularised solution of a general linear operator equation, $Kf_0 = g$, from discrete, noisy data $y_i = g(x_i) + \epsilon_i$, $i = 1, \dots, n$, where the $\epsilon_i$ are uncorrelated random errors with variance $\sigma^2$. In this paper we consider two well-known methods for choosing the crucial regularisation parameter $\lambda$: the discrepancy principle and generalised maximum likelihood (GML). We investigate the asymptotic properties, as $n \to \infty$, of the ``expected'' estimates $\lambda_D$ and $\lambda_M$ corresponding to these two methods, respectively. It is shown that if $f_0$ is sufficiently smooth, then $\lambda_D$ is weakly asymptotically optimal (ao) with respect to the risk and an $L_2$ norm on the output error. However, $\lambda_D$ oversmooths for all sufficiently large $n$ and also for all sufficiently small $\sigma^2$. If $f_0$ is not too smooth relative to the regularisation space $W$, then $\lambda_D$ can also be weakly ao with respect to a whole class of loss functions involving stronger norms on the input error. For the GML method, we show that if $f_0$ is smooth relative to $W$ (for example, $f_0 \in W^{\theta,2}$ with $\theta > m$ when $W = W^{m,2}$), then $\lambda_M$ is asymptotically sub-optimal and undersmoothing with respect to all of the loss functions above.
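For concreteness, the two rules can be written in the following standard discrete forms, taken from the regularisation and smoothing-spline literature; the influence-matrix notation $A(\lambda)$ and the normalisations are common conventions assumed here, not necessarily the paper's exact definitions. The discrepancy principle (Morozov's criterion) selects $\lambda$ so that the mean-squared residual matches the noise level, while GML minimises a likelihood-type criterion:
\[
\frac{1}{n}\sum_{i=1}^{n}\bigl(Kf_{n\lambda}(x_i) - y_i\bigr)^2 = \sigma^2,
\qquad
\lambda_M = \operatorname*{arg\,min}_{\lambda}\,
\frac{y^{\mathrm{T}}\bigl(I - A(\lambda)\bigr)y}
{\bigl[\det\nolimits^{+}\bigl(I - A(\lambda)\bigr)\bigr]^{1/(n-m)}},
\]
where $A(\lambda)$ is the influence matrix mapping the data $y$ to the fitted values $Kf_{n\lambda}(x_i)$, $\det^{+}$ denotes the product of the nonzero eigenvalues, and $m$ is the dimension of the null space of the smoothness penalty. Since the paper studies the ``expected'' estimates, $\lambda_D$ and $\lambda_M$ should be read as the versions of these rules with the data-dependent quantities replaced by their expectations.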