This study examined the temporal dynamics of face perception using event-related potentials (ERPs) to investigate how familiarity and repetition influence early and late stages of face processing. A generalized linear mixed-effects (GLME) model was used to assess the amplitude and latency of the P100, N170, and N250 ERP components across three stimulus types (famous, non-famous, and scrambled faces), three repetition conditions (first presentation, immediate repeat, and delayed repeat), and the two hemispheres. The P100 component, associated with early visual processing, showed no significant modulation by stimulus familiarity or repetition, suggesting stable perceptual encoding across conditions. In contrast, N170 and N250 amplitudes were significantly affected by repetition, indicating enhanced neural responses to repeated exposure, particularly in the right hemisphere. Latency analyses revealed that the N250 component was also sensitive to repetition timing, with delayed repetitions eliciting shorter latencies, implying shifts in processing efficiency and memory engagement. Multivariate time-series decoding further demonstrated greater discriminability between scrambled and famous faces than between scrambled and non-famous faces, particularly in the first-presentation and delayed-repeat conditions. Notably, decoding performance declined for immediate repeats, suggesting reduced neural differentiation during short-interval repetition. These findings provide new insights into how repetition and familiarity modulate the neural underpinnings of face perception, emphasizing the role of temporal dynamics and hemispheric specialization in face processing.