annotations_creators:
  - crowdsourced
language:
  - es
language_creators:
  - crowdsourced
license:
  - cc-by-4.0
multilinguality:
  - monolingual
pretty_name: Spanish Common Voice V17.0 Split Other Automatically Verified
size_categories:
  - 100K<n<1M
source_datasets:
  - mozilla-foundation/common_voice_17_0
tags:
  - common voice
  - spanish speech
  - project aina
  - barcelona supercomputing center
  - automatically verified
task_categories:
  - automatic-speech-recognition
task_ids: []

Dataset Card for cv17_es_other_automatically_verified

Table of Contents

  • Dataset Description
  • Dataset Structure
  • Dataset Creation
  • Considerations for Using the Data
  • Additional Information

Dataset Description

Dataset Summary

At the time of this work, the Spanish version of Mozilla Common Voice 17.0 reported having approximately 2,220 hours of recorded audio on its official page, of which only 562 hours were validated, representing just 25%. Of the validated hours, 53 hours were allocated to the validation and test portions. Therefore, there are effectively only about 509 hours available for training acoustic models for ASR, which is the primary goal of Common Voice.

Our corpus, "Spanish Common Voice V17.0 Split Other Automatically Verified," is, as the name suggests, the result of automatically validating the "other" portion of Common Voice 17.0. The validation was carried out with OpenAI's Whisper large model: if Whisper produces exactly the same text as the Common Voice prompt, the transcription is considered valid regardless of its votes.

Using this method, we validated 581,680 audio files, amounting to 784 hours and 50 minutes, more than the 562 hours validated by the Common Voice community.
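As a rough illustration, the acceptance criterion can be sketched as follows. This is a minimal sketch assuming the openai-whisper package; the exact text normalization applied in this work is not documented here, so the normalize helper is an assumption.

import string
import whisper

# Load the large Whisper model used for verification.
model = whisper.load_model("large")

def normalize(text):
    # Assumed normalization: lowercase and strip punctuation before comparing.
    return text.lower().translate(str.maketrans("", "", string.punctuation)).strip()

def is_automatically_verified(audio_path, prompt):
    # Transcribe the clip in Spanish and accept it only on an exact match.
    hypothesis = model.transcribe(audio_path, language="es")["text"]
    return normalize(hypothesis) == normalize(prompt)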

Example Usage

This corpus has only one split, called "other".

from datasets import load_dataset
cv17_other = load_dataset("projecte-aina/cv17_es_other_automatically_verified")

An alternative way to load the dataset is:

from datasets import load_dataset
cv17_other = load_dataset("projecte-aina/cv17_es_other_automatically_verified", split="other")

Supported Tasks

automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
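For instance, WER can be computed with the evaluate library. This is a minimal sketch; the prediction and reference strings are invented for illustration.

from evaluate import load

# Load the word error rate metric.
wer_metric = load("wer")

predictions = ["el indio ya se recogia"]    # hypothetical ASR output
references = ["el indio ya se recogía"]     # reference transcription

# WER = (substitutions + deletions + insertions) / reference word count.
print(wer_metric.compute(predictions=predictions, references=references))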

Languages

The audio is in Spanish.

Dataset Structure

Data Instances

{
  'audio': {
    'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/8277a95b9f41a9f3b5310135293b4733cfb9d4243c6e809ee22f70d5364e3803/es_other_0/common_voice_es_18871390.mp3', 
    'array': array([-2.9313687e-16, -3.9175602e-14,  1.4283733e-14, ...,
        9.4713278e-07, -1.7019669e-05, -1.5838880e-06], dtype=float32), 
    'sampling_rate': 16000
  }, 
  'client_id': 'f9c44725569f8eeae8e1173abe5271cdc375a4011110563767150df95c67fb3b65f40ca0a310197e84ba630607a53ee805b4f592cbe1ec62bd5850254968a828', 
  'path': 'common_voice_es_18871390.mp3', 
  'sentence_id': '5a3162572570b259a3a31fb29dd7dbe126fb11d3c62887bc6c1ccb4b6f4bbb8d', 
  'sentence': 'el indio ya se recogía, como un gato montés , dispuesto a saltar sobre mí.', 
  'sentence_domain': '', 
  'up_votes': 0, 
  'down_votes': 1, 
  'age': 'twenties', 
  'gender': 'male_masculine', 
  'accents': 'Andino-Pacífico: Colombia, Perú, Ecuador, oeste de Bolivia y Venezuela andina', 
  'variant': '', 
  'locale': 'es', 
  'segment': ''
}

Data Fields

  • audio (datasets.Audio) - a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally). Note that when accessing the audio column, dataset[0]["audio"], the audio file is automatically decoded and resampled to dataset.features["audio"].sampling_rate. Decoding and resampling a large number of audio files might take a significant amount of time, so it is important to query the sample index before the "audio" column, i.e. dataset[0]["audio"] should always be preferred over dataset["audio"][0]; see the sketch after this list.
  • client_id (string) - an id for which client (voice) made the recording.
  • path (string) - the path to the audio file.
  • sentence_id (string) - an id for the sentence.
  • sentence (string) - the sentence the user was prompted to speak.
  • sentence_domain (string) - the context or domain to which the sentence belongs.
  • up_votes (int64) - how many upvotes the audio file has received from reviewers.
  • down_votes (int64) - how many downvotes the audio file has received from reviewers.
  • age (string) - the age of the speaker (e.g. teens, twenties, fifties).
  • gender (string) - the gender of the speaker.
  • accents (string) - accent(s) of the speaker.
  • variant (string) - specific type of accent or pronunciation pattern associated with the speaker.
  • locale (string) - the locale of the speaker.
  • segment (string) - usually an empty field.
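The indexing advice for the audio column can be illustrated as follows. This is a minimal sketch using the standard datasets API; the resampling step is optional and shown only as an example.

from datasets import Audio, load_dataset

dataset = load_dataset("projecte-aina/cv17_es_other_automatically_verified", split="other")

# Query the sample index first so that only this one file is decoded.
sample = dataset[0]
print(sample["sentence"])
print(sample["audio"]["sampling_rate"])

# Optionally resample the whole column, e.g. to 16 kHz for most ASR models.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))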

Data Splits

This corpus has only one split, called "other".

Dataset Creation

Curation Rationale

  • The aim was to validate the audio from Common Voice 17.0 that had an insufficient number of votes.

  • To respect the original organization of Common Voice, this corpus contains only a split called "other", whose audio files belong to that same portion of the original Common Voice 17.0.

  • This repository does not contain audio files. It contains a dataloader that takes the data from the original repository.

  • The TSV file in this repo contains metadata for the 581,680 audio files that we automatically verified; the metadata itself is unchanged from the original release. A sketch of how this file might be inspected follows this list.
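As an illustration, the metadata could be inspected with pandas. This is a minimal sketch; the file name other.tsv is an assumption, not a documented path in this repository.

import pandas as pd

# Hypothetical file name; check the repository for the actual TSV path.
metadata = pd.read_csv("other.tsv", sep="\t")

print(len(metadata))                 # expected: 581680 automatically verified clips
print(metadata.columns.tolist())     # e.g. client_id, path, sentence, ...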

Source Data

Initial Data Collection and Normalization

Common Voice is a crowdsourcing project started by Mozilla to create a free database for speech recognition software. The project is supported by volunteers who record sample sentences with a microphone and review recordings of other users. The transcribed sentences are collected in a voice database released under the CC0 public domain license. This license ensures that developers can use the database for voice-to-text applications without restrictions or costs.

Annotations

Annotation process

The text prompts are sourced from Wikipedia articles and other texts. In the final step, the sentences are reviewed and approved by moderators of the respective language.

Who are the annotators?

Since this is a corpus of read prompts, there are no annotators; however, the prompts to be read are approved by the moderators of the respective language.

Personal and Sensitive Information

The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.

Considerations for Using the Data

Social Impact of Dataset

In Common Voice 17.0, only about 25% of the recorded Spanish hours have been validated by the community. With this corpus, we provide additional verified data, helping to increase the availability of reliable Spanish language resources.

Discussion of Biases

The transcriptions were verified using a single ASR system, OpenAI's Whisper large model. As a consequence, the selection may be biased toward utterances that this particular model recognizes well.

Other Known Limitations

The automatic verification methodology for this corpus only detects perfect matches between the reference transcriptions and the ASR output; it does not verify other relevant metadata such as gender, age range, or speaker ID.

Additional Information

Dataset Curators

The automatic verification of the Common Voice 17.0 transcripts belonging to the split "other" was performed by Carlos Daniel Hernández Mena during 2024 at the Barcelona Supercomputing Center.

Licensing Information

CC-BY-4.0

Citation Information

@misc{mena2024cv17othautveri,
      title={Spanish Common Voice V17.0 Split Other Automatically Verified}, 
      author={Hernández Mena, Carlos Daniel},
      publisher={Barcelona Supercomputing Center},
      year={2024},
      url={https://huggingface.co/datasets/projecte-aina/cv17_es_other_automatically_verified},
}

Contributions

This work has been promoted and financed by the Generalitat de Catalunya through the Aina project.