One of the key factors limiting the use of neural networks in many industrial applications has been the difficulty of demonstrating that a trained network will continue to generate reliable outputs once it is in routine use. An important potential source of error arises from input data which differ significantly from the data used to train the network. In this paper we investigate the relationship between the degree of novelty of the input data and the corresponding reliability of the network's outputs. We provide a quantitative procedure for measuring novelty, and we demonstrate its performance on an application involving the monitoring of oil flow in multi-phase pipelines.
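The abstract does not specify how novelty is quantified, but one common approach to this problem is to estimate the density of the training inputs and flag test inputs that fall in low-density regions. The sketch below illustrates that idea with a simple Parzen-window (Gaussian kernel) density estimator; the function names, the bandwidth value, and the threshold rule (a low quantile of densities evaluated on training points) are illustrative assumptions, not the procedure proposed in the paper.

```python
import numpy as np

def kde_density(x, train, bandwidth=0.5):
    """Parzen-window (Gaussian kernel) estimate of the input density at x.

    Illustrative novelty measure: low density under the training-input
    distribution suggests the network is extrapolating.
    """
    d = train.shape[1]
    sq_dists = np.sum((train - x) ** 2, axis=1)          # squared distances to training points
    norm = (2.0 * np.pi * bandwidth ** 2) ** (-d / 2.0)  # Gaussian normalisation constant
    return norm * np.mean(np.exp(-sq_dists / (2.0 * bandwidth ** 2)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 2))  # stand-in for the training inputs

p_in = kde_density(np.zeros(2), train)       # a point near the training data
p_out = kde_density(np.full(2, 6.0), train)  # a point far from the training data

# Flag an input as novel if its density falls below a low quantile of the
# densities seen on (a subset of) the training inputs themselves.
threshold = np.quantile([kde_density(x, train) for x in train[:100]], 0.05)
novel = p_out < threshold
```

Under this convention, the density itself serves as the quantitative novelty measure: the further a test input lies from the training data, the lower its estimated density, and outputs produced for such inputs would be treated as unreliable.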