The research introduces MARTI (Man-machine Animation Real-Time Interface) for the realisation of natural human-machine interfacing. The system uses simple vocal soundtracks of human speakers to provide lip synchronisation of computer graphical facial models. We present novel research spanning several engineering disciplines, including speech recognition, facial modelling, and computer animation. This interdisciplinary research applies state-of-the-art hybrid connectionist/hidden Markov model speech recognition to provide highly accurate phone recognition and timing for speaker-independent continuous speech, and builds on knowledge from the animation industry in the development of accurate facial models and automated animation. The research has many real-world applications. These include a highly accurate and 'natural' man-machine interface to assist user interaction with computer systems and communication with one another using human idiosyncrasies; a complete special-effects and animation toolbox providing automatic lip synchronisation without the usual constraints of headsets, joysticks, and skilled animators; compression of video data to well below standard telecommunication channel bandwidth for video communications and multimedia systems; speech training and aids for the handicapped; and richer player interaction in 'video gaming' and 'virtual worlds'. MARTI introduces a level of realism in man-machine interfacing and special-effects animation that has not previously been achieved.
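To make the pipeline concrete, the sketch below shows one way time-stamped phone labels from a recogniser could be mapped onto visemes (target mouth shapes) to drive keyframe animation. This is an illustrative sketch only, not the MARTI implementation: the PHONE_TO_VISEME table, the Keyframe structure, and the phones_to_keyframes function are assumptions introduced here for clarity.

```python
from dataclasses import dataclass

# Hypothetical phone-to-viseme table (an assumption, not MARTI's actual mapping):
# each recognised phone is collapsed onto one of a small set of mouth shapes.
PHONE_TO_VISEME = {
    "p": "closed", "b": "closed", "m": "closed",
    "f": "lower-lip", "v": "lower-lip",
    "aa": "open-wide", "ae": "open-wide",
    "iy": "spread", "ih": "spread",
    "uw": "rounded", "ow": "rounded",
    "sil": "neutral",
}

@dataclass
class Keyframe:
    time_s: float   # start time of the mouth shape, in seconds
    viseme: str     # target mouth shape for the facial model

def phones_to_keyframes(phone_segments):
    """Convert recogniser output [(phone, start_s, end_s), ...] into
    animation keyframes, merging consecutive identical visemes."""
    keyframes = []
    for phone, start_s, _end_s in phone_segments:
        viseme = PHONE_TO_VISEME.get(phone, "neutral")
        if not keyframes or keyframes[-1].viseme != viseme:
            keyframes.append(Keyframe(start_s, viseme))
    return keyframes

if __name__ == "__main__":
    # Example timing output for the word "beam": /b iy m/
    segments = [("sil", 0.00, 0.10), ("b", 0.10, 0.18),
                ("iy", 0.18, 0.40), ("m", 0.40, 0.55), ("sil", 0.55, 0.70)]
    for kf in phones_to_keyframes(segments):
        print(f"{kf.time_s:.2f}s -> {kf.viseme}")
```

Under these assumptions, the recogniser's phone timings alone determine when each mouth shape is reached, which is what removes the need for headsets, joysticks, or hand animation in the applications listed above.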