Speech Emotion Recognition (SER) is a rapidly evolving field of research that aims to identify and categorize emotional states through speech signal analysis. As SER holds considerable socio-cultural and business significance, researchers are increasingly exploring machine learning and deep learning techniques to advance this technology. A well-suited dataset is a crucial resource for SER studies in a specific language. However, despite Urdu being the 10th most spoken language globally, it lacks SER datasets, creating a significant research gap. The available Urdu SER datasets are insufficient due to their limited scope, including a narrow range of emotions, small sample sizes, and a limited number of dialogues, which restricts their usability in real-world scenarios. To fill this gap, the Urdu Speech Emotion Recognition Dataset (UrduSER) was developed. This comprehensive dataset consists of 3500 speech signals from 10 professional actors, with a balanced mix of male and female speakers across diverse age ranges. The speech signals were sourced from a vast collection of Pakistani Urdu drama serials and telefilms available on YouTube. Seven emotional states are covered in the dataset: Angry, Fear, Boredom, Disgust, Happy, Neutral, and Sad. A notable strength of this dataset is the diversity of its dialogues: each utterance contains largely unique content, in contrast to existing datasets that often feature repetitive samples of predefined dialogues spoken by research volunteers in a laboratory environment. To ensure balance and symmetry, the dataset contains 500 samples for each emotional class, with 50 samples per actor per emotion. An accompanying Excel file provides a detailed metadata index for each audio sample, including the file name, duration, and the Urdu dialogue script. This metadata index enables researchers and developers to efficiently access, organize, and utilize the UrduSER dataset. The UrduSER dataset underwent a rigorous validation process, integrating expert validation to confirm its validity, reliability, and overall quality.
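
As a minimal sketch of how the metadata index could be used in practice, the snippet below loads the Excel file and checks the class balance described above (500 clips per emotion, 50 clips per actor per emotion). The file name and column names used here are hypothetical placeholders, not the dataset's documented schema, and would need to be adjusted to match the actual UrduSER index.

```python
# Minimal sketch, assuming a metadata schema: loads the UrduSER Excel index
# and verifies the reported balance (500 samples per emotion, 50 per actor
# per emotion). Path and column names are assumptions for illustration only.
import pandas as pd

META_PATH = "UrduSER_metadata.xlsx"  # hypothetical path to the Excel index

# Assumed columns: "file_name", "duration", "emotion", "actor", "dialogue"
meta = pd.read_excel(META_PATH)

# Per-emotion totals: expected to be 500 for each of the seven classes
per_emotion = meta.groupby("emotion")["file_name"].count()
print(per_emotion)

# Per-actor, per-emotion totals: expected to be 50 in every cell
per_actor_emotion = meta.groupby(["actor", "emotion"])["file_name"].count()
print(per_actor_emotion.unstack(fill_value=0))
```

A check of this kind is a convenient first step before feature extraction, since any deviation from the expected counts would indicate missing or mislabeled audio files.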