Learning and generalization in a two-layer radial basis function network, with fixed centres of the basis functions, is examined within a stochastic training paradigm. Employing a Bayesian approach, expressions for generalization error are derived under the assumption that the generating mechanism (teacher) for the training data is also a radial basis function network, but one for which the basis function centres and widths need not correspond to those of the student network. The effects of regularization, via a weight decay term, are examined. The cases in which the student has greater representational power than the teacher (over-realizable), and in which the teacher has greater power than the student (unrealizable), are studied. Dependence on knowing the centres of the teacher is eliminated by introducing a single degree-of-confidence parameter. Finally, simulations are performed which validate the analytic results.
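As a companion to the abstract, the following is a minimal numerical sketch of the student-teacher setup it describes: a teacher RBF network with fixed centres generates noisy training data, and a student RBF network with its own (mismatched, and here more numerous, i.e. over-realizable) fixed centres learns only its output weights under weight-decay regularization. All concrete values (dimensions, widths, noise level, regularization strength) are illustrative assumptions, and the ridge-regression solve stands in for the paper's stochastic training; the analytic generalization-error expressions themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices, not taken from the paper.
d = 2            # input dimension
M_teacher = 5    # number of teacher basis functions
M_student = 8    # student has more basis functions (over-realizable case)
sigma_t, sigma_s = 1.0, 0.8   # teacher / student basis-function widths
noise = 0.1                   # output noise level
gamma = 1e-2                  # weight-decay (regularization) strength

def rbf_design(X, centres, width):
    """Matrix of Gaussian basis-function activations, one column per centre."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Teacher: fixed random centres and weights generate the training data.
C_t = rng.normal(size=(M_teacher, d))
w_t = rng.normal(size=M_teacher)

X_train = rng.normal(size=(200, d))
y_train = rbf_design(X_train, C_t, sigma_t) @ w_t + noise * rng.normal(size=200)

# Student: its own fixed centres; only the output weights are learned.
C_s = rng.normal(size=(M_student, d))
Phi = rbf_design(X_train, C_s, sigma_s)

# With a weight-decay penalty the weight estimate is ridge regression:
#   w = (Phi^T Phi + gamma I)^{-1} Phi^T y
w_s = np.linalg.solve(Phi.T @ Phi + gamma * np.eye(M_student), Phi.T @ y_train)

# Empirical generalization error: mean squared deviation from the
# teacher's clean outputs on fresh inputs.
X_test = rng.normal(size=(5000, d))
y_clean = rbf_design(X_test, C_t, sigma_t) @ w_t
y_pred = rbf_design(X_test, C_s, sigma_s) @ w_s
print("estimated generalization error:", np.mean((y_pred - y_clean) ** 2))
```

Swapping the values of M_teacher and M_student turns this into the unrealizable case, where the student cannot match the teacher and the measured error plateaus at a nonzero floor.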