A single-layer perceptron divides the input signal space into two regions separated by a hyperplane. In many applications, however, the training signal of the adaptive algorithm represents more complicated decision regions that may not be linearly separable. For these cases, it is usually not obvious how the adaptive algorithm of a single-layer perceptron will perform, in terms of its convergence properties and the optimum location of the hyperplane boundary. In this paper, we determine the stationary points of Rosenblatt's learning algorithm for a single-layer perceptron and two nonseparable models of the training data. Our analysis is based on a system identification formulation of the training signal, and the perceptron input signals are modeled as independent Gaussian sequences. Expressions for the corresponding performance function are also derived, and computer simulations are presented that verify the analytical results.
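As a rough illustration of this setup (not the paper's exact models), the sketch below implements Rosenblatt's error-correction update with independent Gaussian input sequences and a training signal generated, in system identification fashion, by a reference hyperplane plus additive noise; the noise makes the data nonseparable. The reference weight vector `w_ref`, the step size `mu`, and the noise level `sigma` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 2          # input dimension (illustrative)
mu = 0.01      # step size (illustrative)
sigma = 0.5    # noise level that makes the training data nonseparable (illustrative)
steps = 20000

# System identification formulation (assumed form): the desired response is
# produced by a fixed "reference" hyperplane observed through additive noise.
w_ref = np.array([1.0, -0.5])
w = np.zeros(n)  # adaptive weight vector of the single-layer perceptron

for k in range(steps):
    x = rng.standard_normal(n)                     # independent Gaussian inputs
    d = np.sign(w_ref @ x + sigma * rng.standard_normal())  # noisy training signal
    y = np.sign(w @ x)                             # perceptron output
    w += mu * (d - y) * x                          # Rosenblatt's learning rule

# The weight direction tends toward the reference hyperplane's direction,
# while the noise keeps the update from settling exactly on it.
print(w, w / np.linalg.norm(w))
```

Running this and comparing `w / np.linalg.norm(w)` against `w_ref / np.linalg.norm(w_ref)` gives a quick empirical feel for the stationary behavior the paper characterizes analytically; the specific nonseparable data models and performance-function expressions are developed in the body of the paper.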