Indoor and outdoor environmental data: A dataset with acoustic data acquired by the microphone embedded on mobile devices

Cited by: 1
Authors
Pires, Ivan Miguel [1 ,2 ,3 ]
Garcia, Nuno M. [1 ]
Zdravevski, Eftim [4 ]
Lameski, Petre [4 ]
Affiliations
[1] Univ Beira Interior, Inst Telecomunicacoes, P-6200 Covilha, Portugal
[2] Polytech Inst Viseu, Dept Comp Sci, P-3504-510 Viseu, Portugal
[3] UICISA E Res Ctr, Sch Hlth, Polytech Inst Viseu, P-3504-510 Viseu, Portugal
[4] Univ Ss Cyril & Methodius, Fac Comp Sci & Engn, Skopje 1000, North Macedonia
Source
DATA IN BRIEF | 2021 / Vol. 36
Keywords
Environment; Microphone; Acoustic data; Mobile devices;
DOI
10.1016/j.dib.2021.107051
Chinese Library Classification (CLC)
O [Mathematical and Physical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Discipline classification codes
07; 0710; 09
Abstract
All mobile devices include a microphone that can be used for acoustic data acquisition. This article presents a dataset of acoustic signals related to nine environments, captured with the microphone embedded in off-the-shelf mobile devices. The mobile phone can be placed in a pants pocket, in a wristband, on a bedside table, on a table, or on other furniture. The data collection environments are bar, classroom, gym, kitchen, library, street, hall, living room, and bedroom. The data was collected by 25 individuals (15 men and 10 women) in different environments around the Covilha and Fundao municipalities (Portugal). The microphone data was sampled at 44,100 Hz into an array of 16-bit unsigned integer values in the range [0, 255], with an offset of 128 representing zero. The dataset presented in this paper contains at least 2000 samples of 5 s of data for each environment, corresponding to around 2.8 h per environment, stored in text files. In total, it includes at least 25.2 h of acoustic data for the implementation of data processing techniques, e.g., the Fast Fourier Transform (FFT), and other machine learning methods for different analyses. (C) 2021 The Author(s). Published by Elsevier Inc.
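Given the storage format described in the abstract (5 s recordings of offset unsigned integer values in plain-text files, sampled at 44,100 Hz), the following minimal sketch shows how a single sample might be loaded, re-centered around zero, and passed through an FFT with NumPy. The file name, the comma-separated one-array-per-file layout, and the helper function names are assumptions made for illustration, not the dataset's documented loading procedure.

import numpy as np

FS = 44_100        # sampling rate reported in the abstract (Hz)
ZERO_OFFSET = 128  # stored values lie in [0, 255] with 128 as the zero level

def load_sample(path):
    """Read one text file of raw integer values and re-center them around zero."""
    raw = np.loadtxt(path, delimiter=",").ravel()   # assumes comma-separated values
    return (raw - ZERO_OFFSET) / ZERO_OFFSET        # roughly in [-1, 1]

def magnitude_spectrum(signal, fs=FS):
    """Return (frequencies, magnitudes) of the one-sided FFT of a 1-D signal."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    mags = np.abs(np.fft.rfft(signal))
    return freqs, mags

if __name__ == "__main__":
    # "kitchen_0001.txt" is a hypothetical file name used only for this example.
    signal = load_sample("kitchen_0001.txt")
    freqs, mags = magnitude_spectrum(signal)
    peak = np.argmax(mags[1:]) + 1                  # skip the DC bin
    print(f"{signal.size} values ({signal.size / FS:.1f} s), "
          f"dominant frequency ~ {freqs[peak]:.1f} Hz")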
Pages: 7