---
license: cc-by-sa-4.0
dataset_info:
  config_name: chm150_asr
  features:
  - name: audio_id
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: speaker_id
    dtype: string
  - name: gender
    dtype: string
  - name: duration
    dtype: float32
  - name: normalized_text
    dtype: string
  splits:
  - name: train
    num_bytes: 106136396.519
    num_examples: 2663
  download_size: 110058240
  dataset_size: 106136396.519
configs:
- config_name: chm150_asr
  data_files:
  - split: train
    path: chm150_asr/train-*
  default: true
---

# Dataset Card for chm150_asr
## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
  
## Dataset Description
- **Homepage:** [CIEMPIESS-UNAM Project](https://ciempiess.org/)
- **Repository:** [CIEMPIESS LIGHT at LDC](https://catalog.ldc.upenn.edu/LDC2017S23)
- **Paper:** [CIEMPIESS: A New Open-Sourced Mexican Spanish Radio Corpus](http://www.lrec-conf.org/proceedings/lrec2014/pdf/182_Paper.pdf)
- **Point of Contact:** [Carlos Mena](mailto:carlos.mena@ciempiess.org)

### Dataset Summary

The CHM150 is a corpus of Mexican Spanish microphone speech recorded from 75 male speakers and 75 female speakers in the noise environment of a "quiet office", with a total duration of 1.63 hours.

Speakers were encouraged to answer a set of preselected open questions, or they could describe a particular painting shown to them on a computer monitor. As a result, the speech is completely spontaneous, and this is reflected in the transcriptions, which capture disfluencies and mispronunciations orthographically.

The CHM150 Corpus was created in 2012 at the "Laboratorio de Tecnologías del Habla" of the "Facultad de Ingeniería (FI)" at the "Universidad Nacional Autónoma de México (UNAM)" by Carlos Daniel Hernández Mena, supervised by José Abel Herrera Camacho, head of the laboratory.

### Example Usage
The CHM150 CORPUS contains only a train split:
```python
from datasets import load_dataset
chm150_asr = load_dataset("carlosdanielhernandezmena/chm150_asr")
```
It is also valid to request the split directly:
```python
from datasets import load_dataset
chm150_asr = load_dataset("carlosdanielhernandezmena/chm150_asr", split="train")
```

### Supported Tasks
`automatic-speech-recognition`: The dataset can be used to test a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe it to written text. The most common evaluation metric is the word error rate (WER).
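
WER can be sketched in a few lines of pure Python, as below; for real evaluations a dedicated library such as `jiwer` is more common. The example sentences are invented variations on the transcription shown in the data instance later in this card.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution ("uno" -> "una") out of six reference words:
print(wer("suma memoria uno más memoria dos",
          "suma memoria una más memoria dos"))
```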

### Languages
The language of the corpus is Spanish with the accent of Central Mexico.

## Dataset Structure

### Data Instances
```python
{
  'audio_id': 'CHMC_F_43_20ABR1232_0002', 
  'audio': {
    'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/eadb709611fa8f6fa88f7fa085738cf1e438d9a98d9a4c95314944f0730a8893/train/female/F20ABR1232/CHMC_F_43_20ABR1232_0002.flac', 
    'array': array([ 0.00067139,  0.00387573, -0.00784302, ..., -0.00485229,
        0.00497437, -0.00338745], dtype=float32), 
    'sampling_rate': 16000
  }, 
  'speaker_id': 'F_43', 
  'gender': 'female', 
  'duration': 3.6689999103546143, 
  'normalized_text': 'suma memoria uno más memoria dos'
}
```
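
The `duration` field should agree with the length of the decoded `audio` array divided by the sampling rate, which makes a simple sanity check possible. The sample count below is derived from the example instance (3.669 s at 16 kHz), not read from the corpus itself.

```python
def duration_seconds(num_samples: int, sampling_rate: int) -> float:
    """Length of a mono audio array in seconds."""
    return num_samples / sampling_rate

# The instance above: a 16 kHz clip with duration ~3.669 s
# should contain about 3.669 * 16000 = 58,704 samples.
print(duration_seconds(58704, 16000))  # 3.669
```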

### Data Fields
* `audio_id` (string) - id of audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `speaker_id` (string) - id of speaker
* `gender` (string) - gender of speaker (male or female)
* `duration` (float32) - duration of the audio file in seconds.
* `normalized_text` (string) - normalized audio segment transcription

### Data Splits

The corpus contains only a train split, with a total of 2,663 speech files from 75 male speakers and 75 female speakers and a total duration of 1 hour and 38 minutes.
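
The two duration figures quoted in this card (1.63 hours and 1 hour 38 minutes) are the same value in different units; a small conversion helper makes that explicit:

```python
def seconds_to_hm(total_seconds: float) -> str:
    """Format a duration in seconds as 'H h M min', rounding to whole minutes."""
    minutes = round(total_seconds / 60)
    hours, minutes = divmod(minutes, 60)
    return f"{hours} h {minutes} min"

print(seconds_to_hm(1.63 * 3600))  # 1 h 38 min
```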

## Dataset Creation

### Curation Rationale

The CHM150 is a corpus of Mexican Spanish microphone speech recorded from 75 male speakers and 75 female speakers in the noise environment of a "quiet office", with a total duration of 1.63 hours.

Only the "cleanest" utterances were selected for the corpus. By "clean" one should understand that there is no background music, no loud noise, and no more than one person speaking at a time.

The audio equipment used to create the corpus was modest; it consisted of:

* [USB Interface](http://www.produktinfo.conrad.com/datenblaetter/300000-324999/303350-an-01-en-U_Control_UCA200.pdf)
* [Analogic Audio Mixer](http://www.music-group.com/Categories/Behringer/Mixers/Small-Format-Mixers/502/p/P0576)
* [Dynamic Cardioid Vocal Microphone](http://www.music-group.com/Categories/Behringer/Microphones/Dynamic-Microphones/XM8500/p/P0120)

The software used for recording was [Audacity](http://audacityteam.org/); the audio was then downsampled and normalized with [SoX](http://sox.sourceforge.net/).

The main characteristics of the audio files are:

* Encoding: Signed PCM
* Sample Rate: 16000 Hz
* Precision: 16 bit
* Channels: 1 (mono)
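
For PCM WAV files, these properties can be verified with Python's standard `wave` module; note that the published corpus stores FLAC, for which a third-party library such as `soundfile` would be needed instead. A sketch of the check:

```python
import wave

def matches_corpus_format(path: str) -> bool:
    """Check that a PCM WAV file is 16 kHz, 16-bit, mono."""
    with wave.open(path, "rb") as w:
        return (w.getframerate() == 16000
                and w.getsampwidth() == 2   # 16 bits = 2 bytes
                and w.getnchannels() == 1)
```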

### Source Data

#### Initial Data Collection and Normalization

Speakers were encouraged to answer a set of preselected open questions, or they could describe a particular painting shown to them on a computer screen. As a result, the speech is completely spontaneous, and this is reflected in the transcriptions, which capture disfluencies and mispronunciations orthographically.

### Annotations
#### Annotation process

The annotation process is as follows:

1. A whole session is manually segmented, keeping only the portions containing good-quality speech.
2. The resulting speech files, between 2 and 10 seconds long, are transcribed by the author.
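
The 2–10 second bound from step 2 amounts to a simple duration filter over the candidate segments. The segment ids and durations below are invented for illustration:

```python
# Hypothetical segments as (segment_id, duration_in_seconds) pairs.
segments = [
    ("utt_0001", 1.2),   # too short, dropped
    ("utt_0002", 3.7),
    ("utt_0003", 9.8),
    ("utt_0004", 12.4),  # too long, dropped
]

# Keep only segments between 2 and 10 seconds, as in the annotation process.
kept = [(sid, dur) for sid, dur in segments if 2.0 <= dur <= 10.0]
print(kept)  # [('utt_0002', 3.7), ('utt_0003', 9.8)]
```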

#### Who are the annotators?

The CHM150 Corpus was collected and transcribed by [Carlos Daniel Hernández Mena](https://huggingface.co/carlosdanielhernandezmena) in 2012 as part of the objectives of his PhD studies.

### Personal and Sensitive Information

The dataset could contain names (not full names) revealing the identity of some speakers. By using this dataset, you agree not to attempt to determine the identity of the speakers.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is valuable because it contains spontaneous speech and presents particular challenges, making it highly recommended for testing purposes.

### Discussion of Biases

The dataset is gender balanced: it comprises 75 male speakers and 75 female speakers. The vocabulary is limited to the description of 5 different images.

### Other Known Limitations

"CHM150 CORPUS" by Carlos Daniel Hernández Mena and Abel Herrera is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/) License with the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

### Dataset Curators

The dataset was collected and curated by [Carlos Daniel Hernández Mena](https://huggingface.co/carlosdanielhernandezmena) in 2012 as part of the objectives of his PhD studies. The corpus was published in 2016 at the Linguistic Data Consortium (LDC).

### Licensing Information
[CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/)

### Citation Information
```
@misc{carlosmena2016chm150,
      title={CHM150 CORPUS: Audio and Transcripts of Mexican Spanish Spontaneous Speech.}, 
      ldc_catalog_no={LDC2016S04},
      DOI={https://doi.org/10.35111/ygn0-wm25},
      author={Hernandez Mena, Carlos Daniel and Herrera, Abel},
      journal={Linguistic Data Consortium, Philadelphia},
      year={2016},
      url={https://catalog.ldc.upenn.edu/LDC2016S04},
}
```
### Contributions

This dataset card was created as part of the objectives of the 16th edition of the Severo Ochoa Mobility Program (PN039300 - Severo Ochoa 2021 - E&T).