Parameter estimation involves fitting experimental data to a model of the system, represented by F(p, q), where p are the parameters to be estimated and q are other parameters of the model that are usually assumed to be known precisely. In many cases, however, q are known only approximately, and it is desired to understand (a) how this uncertainty affects the estimates of p and (b) how to eliminate the dependence of p on q. Both questions can be addressed by using Bayesian probability and treating the q parameters as extraneous variables. Because the errors in the model fitting, i.e., the differences between the data and the model, are functions of q, the errors are correlated, although they may still be of zero mean. In addition, correlation may exist in the data themselves. These correlations have a substantial effect on the precision of the estimated parameters and require a more sophisticated analysis than is usually employed in the least-squares approach to determining p. This paper presents the fundamentals of treating uncertain parameters and applies them to a hypothetical experiment involving an uncertain but constant parameter and to a real experiment in which the data are cross- and autocorrelated. (C) 2002 Éditions scientifiques et médicales Elsevier SAS. All rights reserved.
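The Bayesian treatment of an approximately known q can be sketched in a minimal numerical example. The model, priors, and all names below are illustrative assumptions, not taken from the paper: a linear model F(p, q) = p·x + q with slope p to be estimated and an offset q known only as q ~ N(q0, σ_q²). The extraneous variable q is eliminated by marginalizing it out of the likelihood with a Monte Carlo average, and the resulting posterior for p is compared with the conventional fit that fixes q at its nominal value.

```python
import numpy as np

# Hypothetical example (model and parameter values are assumptions):
# estimate the slope p in F(p, q) = p * x + q, where the offset q is an
# extraneous parameter known only approximately, q ~ N(q0, sigma_q^2).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
p_true, q_true, sigma = 2.0, 0.5, 0.1
y = p_true * x + q_true + rng.normal(0.0, sigma, x.size)

q0, sigma_q = 0.5, 0.2            # approximate prior knowledge of q
p_grid = np.linspace(1.0, 3.0, 401)

def log_like(p, q):
    # Gaussian log-likelihood (up to a constant) for given (p, q)
    r = y - (p * x + q)
    return -0.5 * np.sum(r**2) / sigma**2

# (a) conventional least-squares view: q fixed at its nominal value q0
ll_fixed = np.array([log_like(p, q0) for p in p_grid])

# (b) Bayesian treatment: eliminate q by marginalizing over its prior,
#     approximated here by a Monte Carlo average over prior samples of q
q_samples = rng.normal(q0, sigma_q, 500)
ll_marg = np.array([
    np.logaddexp.reduce([log_like(p, q) for q in q_samples])
    - np.log(q_samples.size)
    for p in p_grid
])

def posterior_sd(ll):
    # standard deviation of the (grid-normalized) posterior over p
    w = np.exp(ll - ll.max())
    w /= w.sum()
    mean = np.sum(w * p_grid)
    return np.sqrt(np.sum(w * (p_grid - mean)**2))

# Marginalizing over the uncertain q widens the posterior for p,
# reflecting the correlated errors the uncertainty in q induces.
print(posterior_sd(ll_fixed) < posterior_sd(ll_marg))
```

Because the offset error enters every residual, the uncertainty in q induces correlated errors across all data points, which is why the marginal posterior for p is wider than the fixed-q result even though both are centered near the true slope.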