---
license: cc-by-sa-4.0
dataset_info:
  config_name: chm150_asr
  features:
    - name: audio_id
      dtype: string
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: speaker_id
      dtype: string
    - name: gender
      dtype: string
    - name: duration
      dtype: float32
    - name: normalized_text
      dtype: string
  splits:
    - name: train
      num_bytes: 106136396.519
      num_examples: 2663
  download_size: 110058240
  dataset_size: 106136396.519
configs:
  - config_name: chm150_asr
    data_files:
      - split: train
        path: chm150_asr/train-*
    default: true
---

# Dataset Card for chm150_asr


## Dataset Description

### Dataset Summary

The CHM150 Corpus consists of microphone speech in Mexican Spanish from 75 male speakers and 75 female speakers, recorded in the noise environment of a "quiet office", with a total duration of 1.63 hours.

Speakers were encouraged to answer a set of preselected open questions, or alternatively to describe a particular painting shown to them on a computer monitor. As a result, the speech is completely spontaneous, and the transcription file captures disfluencies and mispronunciations orthographically.

The CHM150 Corpus was created at the "Laboratorio de Tecnologías del Habla" of the "Facultad de Ingeniería (FI)" of the "Universidad Nacional Autónoma de México (UNAM)" in 2012 by Carlos Daniel Hernández Mena, supervised by José Abel Herrera Camacho, head of the laboratory.

### Example Usage

The CHM150 CORPUS contains only the train split:

```python
from datasets import load_dataset

chm150_asr = load_dataset("carlosdanielhernandezmena/chm150_asr")
```

It is also valid to do:

```python
from datasets import load_dataset

chm150_asr = load_dataset("carlosdanielhernandezmena/chm150_asr", split="train")
```

### Supported Tasks

- `automatic-speech-recognition`: The dataset can be used to test a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe it to written text. The most common evaluation metric is the word error rate (WER).
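Since WER is the headline metric, it can be sketched in pure Python as word-level edit distance divided by the reference length (a minimal illustration only; real evaluations typically use a library such as `jiwer` or Hugging Face `evaluate`):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("uno" -> "un") and one deletion ("dos") over a
# 6-word reference give a WER of 2/6.
score = wer("suma memoria uno más memoria dos",
            "suma memoria un más memoria")
```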

### Languages

The language of the corpus is Spanish with the accent of Central Mexico.

## Dataset Structure

### Data Instances

```python
{
  'audio_id': 'CHMC_F_43_20ABR1232_0002', 
  'audio': {
    'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/eadb709611fa8f6fa88f7fa085738cf1e438d9a98d9a4c95314944f0730a8893/train/female/F20ABR1232/CHMC_F_43_20ABR1232_0002.flac', 
    'array': array([ 0.00067139,  0.00387573, -0.00784302, ..., -0.00485229,
        0.00497437, -0.00338745], dtype=float32), 
    'sampling_rate': 16000
  }, 
  'speaker_id': 'F_43', 
  'gender': 'female', 
  'duration': 3.6689999103546143, 
  'normalized_text': 'suma memoria uno más memoria dos'
}
```

### Data Fields

- `audio_id` (string) - id of the audio segment
- `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of the audio inside its archive (as files are not downloaded and extracted locally).
- `speaker_id` (string) - id of the speaker
- `gender` (string) - gender of the speaker (male or female)
- `duration` (float32) - duration of the audio file in seconds
- `normalized_text` (string) - normalized transcription of the audio segment
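The `duration` field is redundant with the decoded audio: it should equal the number of samples divided by the sampling rate. A small consistency check, using a mocked example shaped like the instance above (loading the real dataset requires a download):

```python
# Mocked example mimicking one dataset row: 58704 zero samples stand in
# for the real decoded audio of the 3.669 s clip shown above.
example = {
    "audio_id": "CHMC_F_43_20ABR1232_0002",
    "audio": {"array": [0.0] * 58704, "sampling_rate": 16000},
    "duration": 3.669,
}

computed = len(example["audio"]["array"]) / example["audio"]["sampling_rate"]
assert abs(computed - example["duration"]) < 0.01  # agree to within 10 ms
```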

### Data Splits

The corpus contains only the train split, which has a total of 2,663 speech files from 75 male speakers and 75 female speakers, with a total duration of 1 hour and 38 minutes.
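These split statistics imply an average clip length of roughly 2.2 seconds:

```python
num_examples = 2663                 # from the split statistics above
total_seconds = 1 * 3600 + 38 * 60  # 1 hour and 38 minutes
avg_clip_seconds = total_seconds / num_examples
print(f"average clip length: {avg_clip_seconds:.2f} s")  # about 2.2 s per clip
```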

## Dataset Creation

### Curation Rationale

The CHM150 Corpus consists of microphone speech in Mexican Spanish from 75 male speakers and 75 female speakers, recorded in the noise environment of a "quiet office", with a total duration of 1.63 hours.

Only the "cleanest" utterances were selected for the corpus. Here, "clean" means that there is no background music, no loud noises, and no more than one person speaking at the same time.

The audio equipment used to create the corpus was modest.

The software used for recording was Audacity; the audio was then downsampled and normalized with SoX.

The main characteristics of the audio files are:

- Encoding: signed PCM
- Sample rate: 16000 Hz
- Precision: 16-bit
- Channels: 1 (mono)
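At these settings the raw PCM bitrate is 16000 × 2 × 1 = 32 kB/s. A back-of-envelope check (assuming the 1.63-hour total from the summary) shows how the FLAC files in this repository compare to uncompressed PCM:

```python
sample_rate = 16000          # Hz
bytes_per_sample = 2         # 16-bit signed PCM
channels = 1                 # mono
total_seconds = 1.63 * 3600  # corpus duration from the dataset summary

raw_pcm_bytes = round(total_seconds * sample_rate * bytes_per_sample * channels)
flac_bytes = 106_136_396     # dataset_size from the metadata above
ratio = flac_bytes / raw_pcm_bytes
print(raw_pcm_bytes, round(ratio, 2))  # FLAC stores roughly half the raw size
```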

### Source Data

#### Initial Data Collection and Normalization

Speakers were encouraged to answer a set of preselected open questions, or alternatively to describe a particular painting shown to them on a computer screen. As a result, the speech is completely spontaneous, and the transcription file captures disfluencies and mispronunciations orthographically.

### Annotations

#### Annotation process

The annotation process is as follows:

1. A whole session is manually segmented, keeping only the portions containing good-quality speech.
2. The resulting speech files, between 2 and 10 seconds long, are transcribed by the author.
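The 2–10 second rule in step 2 can be expressed as a simple filter (a hypothetical helper for illustration, not part of the corpus tooling):

```python
def keep_segment(duration_s: float) -> bool:
    """Keep only segments in the 2-10 second range used for the corpus."""
    return 2.0 <= duration_s <= 10.0

durations = [1.2, 3.669, 9.8, 12.5]  # hypothetical segment lengths
kept = [d for d in durations if keep_segment(d)]
print(kept)  # only the 3.669 s and 9.8 s segments survive
```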

#### Who are the annotators?

The CHM150 Corpus was collected and transcribed by Carlos Daniel Hernández Mena in 2012 as part of the objectives of his PhD studies.

### Personal and Sensitive Information

The dataset could contain names (though not full names) that reveal the identity of some speakers. By using this dataset, you agree not to attempt to determine the identity of the speakers.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is valuable because it contains spontaneous speech and presents particular challenges, making it highly recommended for testing purposes.

### Discussion of Biases

The dataset is gender balanced: it comprises 75 male speakers and 75 female speakers, and the vocabulary is limited to the description of 5 different images.

### Other Known Limitations

"CHM150 CORPUS" by Carlos Daniel Hernández Mena and Abel Herrera is licensed under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) License with the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

### Dataset Curators

The dataset was collected and curated by Carlos Daniel Hernández Mena in 2012 as part of the objectives of his PhD studies. The corpus was published in 2016 at the Linguistic Data Consortium (LDC).

### Licensing Information

CC-BY-SA-4.0

### Citation Information

```bibtex
@misc{carlosmena2016chm150,
      title={CHM150 CORPUS: Audio and Transcripts of Mexican Spanish Spontaneous Speech.},
      ldc_catalog_no={LDC2016S04},
      DOI={https://doi.org/10.35111/ygn0-wm25},
      author={Hernandez Mena, Carlos Daniel and Herrera, Abel},
      journal={Linguistic Data Consortium, Philadelphia},
      year={2016},
      url={https://catalog.ldc.upenn.edu/LDC2016S04},
}
```

### Contributions

This dataset card was created as part of the objectives of the 16th edition of the Severo Ochoa Mobility Program (PN039300 - Severo Ochoa 2021 - E&T).