This paper discusses a novel architecture and synapse design for area-efficient hardware implementations of neural networks. The proposed architecture offers an intermediate solution to the on-chip interconnect problem by introducing the concept of virtual neurons. Using a previously developed model, a significant reduction in silicon area is suggested. The proposed synapse designs are alternatives to conventional multiplier-based synaptic cells. These synapses, designed for minimal implementation size, include two intended for analog implementation and one for digital implementation. Training of neural networks based on the proposed synapses is also considered, and a modified version of the backpropagation algorithm is discussed. Results of neural network operation based on one of the new synapses are presented. © 1998 Elsevier Science Ltd. All rights reserved.
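
As a rough illustration of the virtual-neuron idea, the sketch below shows several logical neurons time-multiplexed onto one shared processing element, so fewer physical units and interconnects are needed at the cost of additional cycles. The class name, the multiplexing scheme, and the use of a tanh activation are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of "virtual neurons": several logical neurons share one
# physical multiply-accumulate unit by time-multiplexing. The specific scheme
# here is an assumption for illustration, not the paper's design.
import numpy as np

class PhysicalNeuronUnit:
    """One hardware processing element serving several virtual neurons."""

    def __init__(self, weight_rows):
        # weight_rows[v] is the weight vector of virtual neuron v,
        # held in the unit's local memory.
        self.weight_rows = weight_rows

    def evaluate_all(self, inputs):
        # Sequentially (one "cycle" per virtual neuron) compute each
        # virtual neuron's output on the shared hardware.
        outputs = []
        for w in self.weight_rows:
            activation = np.dot(w, inputs)        # shared MAC hardware
            outputs.append(np.tanh(activation))   # shared activation circuit
        return np.array(outputs)

# Example: a 6-neuron layer mapped onto 2 physical units (3 virtual neurons
# each), instead of 6 fully parallel neurons with 6x the interconnect.
rng = np.random.default_rng(0)
weights = rng.standard_normal((6, 4))
units = [PhysicalNeuronUnit(weights[0:3]), PhysicalNeuronUnit(weights[3:6])]

x = rng.standard_normal(4)
layer_output = np.concatenate([u.evaluate_all(x) for u in units])
print(layer_output)
```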