Speaker identification utilizing noncontemporary speech

Cited by: 0
Authors
Hollien, H [1 ]
Schwartz, R [1 ]
Affiliations
[1] Univ Florida, IASCP, Gainesville, FL 32611 USA
Keywords
forensic science; speaker identification; voice identification; speech
DOI
Not available
Chinese Library Classification (CLC)
DF [Law]; D9 [Law]; R [Medicine and Health]
Subject Classification
0301; 10
Abstract
The noncontemporariness of speech is important to both general approaches to speaker identification. Earwitness identification is one of them; in that instance, it is the time at which the identification is made that is noncontemporary. A substantial amount of research has been carried out on this relationship, and it is now well established that an auditor's memory for a voice decays sharply over time. It is the second approach to speaker identification that is of present interest. In this case, samples of a speaker's utterances are obtained at different points in time. For example, a threat call will be recorded and then, sometime later (often very much later), a suspect's exemplar recording will be obtained. In this instance, it is the speech samples that are noncontemporary, and they are the materials that are subjected to some form of speaker identification. The prevailing opinion is that noncontemporary speech itself poses just as difficult a challenge to the identification process as does the listener's memory decay in earwitness identification. Accordingly, a series of aural-perceptual speaker identification projects was carried out on noncontemporary speech: first, two with latencies of 4 and 8 weeks, followed by 4 and 32 weeks, plus two more with the pairs separated by 6 and 20 years. Mean correct noncontemporary identification initially dropped to 75-80% at week 4, and this general level was sustained for up to six years. It was only after 20 years had elapsed that a significant drop (to 33%) was noted. It can be concluded that a listener's competency in identifying noncontemporary speech samples will show only modest decay over rather substantial periods of time and, hence, this factor should have only a minimal negative effect on the speaker identification process.
Pages: 63-67
Number of pages: 5
Related papers
50 items in total
  • [31] Text-independent speaker identification utilizing likelihood normalization technique
    Markov, KP
    Nakagawa, S
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 1997, E80D (05) : 585 - 593
  • [32] Speaker identification from emotional and noisy speech using learned voice segregation and speech VGG
    Hamsa, Shibani
    Shahin, Ismail
    Iraqi, Youssef
    Damiani, Ernesto
    Nassif, Ali Bou
    Werghi, Naoufel
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 224
  • [33] Speaker Identification with Whispered Speech Using Unvoiced-Consonant Phonemes
    Xu, Juan
    Zhao, Heming
    PROCEEDINGS OF 2012 INTERNATIONAL CONFERENCE ON IMAGE ANALYSIS AND SIGNAL PROCESSING, 2012, : 136 - 139
  • [34] Speaker Identification for Whispered Speech based on Frequency Warping and Score Competition
    Fan, Xing
    Hansen, John H. L.
    INTERSPEECH 2008: 9TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2008, VOLS 1-5, 2008, : 1313 - 1316
  • [35] Joint Speech Enhancement and Speaker Identification Using Monte Carlo Methods
    Maina, Ciira Wa
    Walsh, John MacLaren
    INTERSPEECH 2009: 10TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2009, VOLS 1-5, 2009, : 1359 - 1362
  • [36] Speaker Identification in Noisy Conditions Using Short Sequences of Speech Frames
    Biagetti, Giorgio
    Crippa, Paolo
    Falaschetti, Laura
    Orcioni, Simone
    Turchetti, Claudio
    INTELLIGENT DECISION TECHNOLOGIES 2017, KES-IDT 2017, PT II, 2018, 73 : 43 - 52
  • [37] Efficient speaker identification from speech transmitted over Bluetooth networks
    Khalil, Ali
    Elnaby, Mustafa
    Saad, Elsayed
    Al-Nahari, Azzam
    Al-Zubi, Nayel
    El-Bendary, Mohsen
    El-Samie, Fathi
    INTERNATIONAL JOURNAL OF SPEECH TECHNOLOGY, 2014, 17 (04) : 409 - 416
  • [38] Joint Speech Enhancement and Speaker Identification Using Approximate Bayesian Inference
    Maina, Ciira Wa
    Walsh, John MacLaren
    IEEE TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2011, 19 (06): : 1517 - 1529
  • [40] A Joint Approach for Single-Channel Speaker Identification and Speech Separation
    Mowlaee, Pejman
    Saeidi, Rahim
    Christensen, Mads Græsbøll
    Tan, Zheng-Hua
    Kinnunen, Tomi
    Fränti, Pasi
    Jensen, Søren Holdt
    IEEE TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2012, 20 (09): : 2586 - 2601