Three problems associated with uncertainty in feedforward neural network predictions are discussed. First, the uncertainty present in the input vector propagates through a trained network into the output vector; this output uncertainty is determined using the matrix of partial derivatives that defines the change in each output with respect to each input. Second, because the partial derivative information conveys the relative sensitivity of a given output to each of the inputs, it can be used to assess the relevance of each input to the output prediction. Finally, the influence of random choices of training and testing data sets is investigated; the variability among the resulting networks provides a measure of the fossilized bias error in the network with respect to development decisions. The approaches are illustrated using examples of four-quadrant propeller predictions.
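To make the first two ideas concrete, the sketch below shows first-order (delta-method) propagation of an input covariance through a network via its Jacobian. It assumes a generic two-layer feedforward model; the function names, the placeholder weights, and the input covariance are illustrative assumptions, not the paper's actual network or data.

```python
import jax
import jax.numpy as jnp

# Hypothetical two-layer feedforward network; in practice the parameters
# would come from a trained model (these names are illustrative only).
def mlp(params, x):
    W1, b1, W2, b2 = params
    h = jnp.tanh(W1 @ x + b1)   # hidden layer
    return W2 @ h + b2          # linear output layer

# First-order propagation of the input covariance Sigma_x through the
# network: Sigma_y ~= J Sigma_x J^T, where J = dy/dx is the matrix of
# partial derivatives of the outputs with respect to the inputs.
def propagate_uncertainty(params, x, Sigma_x):
    J = jax.jacobian(mlp, argnums=1)(params, x)
    # |J[i, j]| also conveys the relative sensitivity of output i to
    # input j, which is the basis of the input-relevance analysis.
    return J @ Sigma_x @ J.T

# Toy example with random weights standing in for a trained network.
key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = (jax.random.normal(k1, (5, 3)), jnp.zeros(5),
          jax.random.normal(k2, (2, 5)), jnp.zeros(2))
x = jnp.array([0.5, -0.2, 1.0])
Sigma_x = 0.01 * jnp.eye(3)     # assumed input covariance
print(propagate_uncertainty(params, x, Sigma_x))
```

For the third problem, the analogous numerical experiment would retrain the same architecture on several random train/test partitions and examine the spread of the resulting predictions, which is one way to expose the fossilized bias introduced by development decisions.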