Dataset Preview
The full dataset viewer is not available for this repository; only a preview of the rows is shown below.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowNotImplementedError
Message:      Cannot write struct type 'task_hashes' with no child field to Parquet. Consider adding a dummy child field.
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 583, in write_table
                  self._build_writer(inferred_schema=pa_table.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 404, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'task_hashes' with no child field to Parquet. Consider adding a dummy child field.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2029, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 602, in finalize
                  self._build_writer(self.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 404, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'task_hashes' with no child field to Parquet. Consider adding a dummy child field.
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1396, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1045, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1029, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1124, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1884, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2040, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
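The root cause is visible in the preview below: every row's task_hashes value is an empty object {}, so the inferred Arrow type is a struct with no child fields, and Parquet has no encoding for a childless struct. A minimal sketch that reproduces the failure with pyarrow (only the column name is taken from the traceback; the data is illustrative):

```python
import pyarrow as pa
import pyarrow.parquet as pq

# A column whose every value is {} infers as struct<> -- a struct with no children.
empty_structs = pa.array([{}, {}, {}], type=pa.struct([]))
table = pa.table({"task_hashes": empty_structs})

# Raises ArrowNotImplementedError: "Cannot write struct type 'task_hashes'
# with no child field to Parquet. Consider adding a dummy child field."
pq.write_table(table, "out.parquet")
```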

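On the uploader's side, one possible workaround is to serialize the empty column to a string (or drop it) before pushing, so the inferred schema contains no childless struct. A sketch assuming the results are stored as JSON files; the file name and push target are hypothetical:

```python
import json
from datasets import load_dataset

ds = load_dataset("json", data_files="results.json", split="train")  # hypothetical file name

# task_hashes is {} in every row; storing it as a JSON string keeps the
# value while sidestepping the childless-struct Parquet limitation.
ds = ds.map(lambda ex: {"task_hashes": json.dumps(ex["task_hashes"])})
# ds.push_to_hub("user/repo")  # hypothetical target
```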

Schema of the preview (column name and inferred type):

results                         dict
group_subtasks                  dict
configs                         dict
versions                        dict
n-shot                          dict
higher_is_better                dict
n-samples                       dict
config                          dict
git_hash                        string
date                            float64
pretty_env_info                 string
transformers_version            string
upper_git_hash                  null
tokenizer_pad_token             sequence
tokenizer_eos_token             sequence
tokenizer_bos_token             sequence
eot_token_id                    int64
max_length                      int64
task_hashes                     dict
model_source                    string
model_name                      string
model_name_sanitized            string
system_instruction              null
system_instruction_sha          null
fewshot_as_multiturn            bool
chat_template                   null
chat_template_sha               null
start_time                      float64
end_time                        float64
total_evaluation_time_seconds   string
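These columns match the per-run results JSON emitted by EleutherAI's lm-evaluation-harness. Because the Parquet conversion fails, the viewer cannot render the rows, but the underlying files can still be read directly. A sketch, assuming each run is stored as a single JSON file (the file name is hypothetical):

```python
import json

with open("results_2024-08-30.json") as f:  # hypothetical file name
    record = json.load(f)

print(record["model_name"])                                    # "EleutherAI/pythia-14m"
print(record["results"]["gsm8k"]["exact_match,strict-match"])  # 0
print(record["total_evaluation_time_seconds"])
```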
{ "gsm8k": { "alias": "gsm8k", "exact_match,strict-match": 0, "exact_match_stderr,strict-match": 0, "exact_match,flexible-extract": 0, "exact_match_stderr,flexible-extract": 0 } }
{ "gsm8k": [] }
{ "gsm8k": { "task": "gsm8k", "tag": [ "math_word_problems" ], "dataset_path": "gsm8k", "dataset_name": "main", "training_split": "train", "test_split": "test", "fewshot_split": "train", "doc_to_text": "Question: {{question}}\nAnswer:", "doc_to_target": "{{answer}}", "description": "", "target_delimiter": " ", "fewshot_delimiter": "\n\n", "num_fewshot": 5, "metric_list": [ { "metric": "exact_match", "aggregation": "mean", "higher_is_better": true, "ignore_case": true, "ignore_punctuation": false, "regexes_to_ignore": [ ",", "\\$", "(?s).*#### ", "\\.$" ] } ], "output_type": "generate_until", "generation_kwargs": { "until": [ "Question:", "</s>", "<|im_end|>" ], "do_sample": false, "temperature": 0 }, "repeats": 1, "filter_list": [ { "name": "strict-match", "filter": [ { "function": "regex", "regex_pattern": "#### (\\-?[0-9\\.\\,]+)", "group_select": null }, { "function": "take_first", "regex_pattern": null, "group_select": null } ] }, { "name": "flexible-extract", "filter": [ { "function": "regex", "regex_pattern": "(-?[$0-9.,]{2,})|(-?[0-9]+)", "group_select": -1 }, { "function": "take_first", "regex_pattern": null, "group_select": null } ] } ], "should_decontaminate": false, "metadata": { "version": 3 } } }
{ "gsm8k": 3 }
{ "gsm8k": 5 }
{ "gsm8k": { "exact_match": true } }
{ "gsm8k": { "original": 1319, "effective": 10 } }
{ "model": "hf", "model_args": "pretrained=EleutherAI/pythia-14m", "model_num_parameters": 14067712, "model_dtype": "torch.float16", "model_revision": "main", "model_sha": "f33025648652797a390d8c54835273845b437161", "batch_size": 1, "batch_sizes": [], "device": "mps", "use_cache": null, "limit": 10, "bootstrap_iters": 100000, "gen_kwargs": null, "random_seed": 0, "numpy_seed": 1234, "torch_seed": 1234, "fewshot_seed": 1234 }
928e8bb6
1,724,994,608.480619
'NoneType' object has no attribute 'splitlines'
4.44.2
null
[ "<|endoftext|>", "0" ]
[ "<|endoftext|>", "0" ]
[ "<|endoftext|>", "0" ]
0
2,048
{}
hf
EleutherAI/pythia-14m
EleutherAI__pythia-14m
null
null
false
null
null
47,688.603724
47,704.988023
16.384299125005782
{ "gsm8k": { "alias": "gsm8k", "exact_match,strict-match": 0, "exact_match_stderr,strict-match": 0, "exact_match,flexible-extract": 0, "exact_match_stderr,flexible-extract": 0 } }
{ "gsm8k": [] }
{ "gsm8k": { "task": "gsm8k", "tag": [ "math_word_problems" ], "dataset_path": "gsm8k", "dataset_name": "main", "training_split": "train", "test_split": "test", "fewshot_split": "train", "doc_to_text": "Question: {{question}}\nAnswer:", "doc_to_target": "{{answer}}", "description": "", "target_delimiter": " ", "fewshot_delimiter": "\n\n", "num_fewshot": 5, "metric_list": [ { "metric": "exact_match", "aggregation": "mean", "higher_is_better": true, "ignore_case": true, "ignore_punctuation": false, "regexes_to_ignore": [ ",", "\\$", "(?s).*#### ", "\\.$" ] } ], "output_type": "generate_until", "generation_kwargs": { "until": [ "Question:", "</s>", "<|im_end|>" ], "do_sample": false, "temperature": 0 }, "repeats": 1, "filter_list": [ { "name": "strict-match", "filter": [ { "function": "regex", "regex_pattern": "#### (\\-?[0-9\\.\\,]+)", "group_select": null }, { "function": "take_first", "regex_pattern": null, "group_select": null } ] }, { "name": "flexible-extract", "filter": [ { "function": "regex", "regex_pattern": "(-?[$0-9.,]{2,})|(-?[0-9]+)", "group_select": -1 }, { "function": "take_first", "regex_pattern": null, "group_select": null } ] } ], "should_decontaminate": false, "metadata": { "version": 3 } } }
{ "gsm8k": 3 }
{ "gsm8k": 5 }
{ "gsm8k": { "exact_match": true } }
{ "gsm8k": { "original": 1319, "effective": 10 } }
{ "model": "hf", "model_args": "pretrained=EleutherAI/pythia-14m", "model_num_parameters": 14067712, "model_dtype": "torch.float16", "model_revision": "main", "model_sha": "f33025648652797a390d8c54835273845b437161", "batch_size": 1, "batch_sizes": [], "device": "mps", "use_cache": null, "limit": 10, "bootstrap_iters": 100000, "gen_kwargs": null, "random_seed": 0, "numpy_seed": 1234, "torch_seed": 1234, "fewshot_seed": 1234 }
928e8bb6
1,724,994,647.916991
'NoneType' object has no attribute 'splitlines'
4.44.2
null
[ "<|endoftext|>", "0" ]
[ "<|endoftext|>", "0" ]
[ "<|endoftext|>", "0" ]
0
2,048
{}
hf
EleutherAI/pythia-14m
EleutherAI__pythia-14m
null
null
false
null
null
47,728.003633
47,745.43858
17.434946957997454
{ "gsm8k": { "alias": "gsm8k", "exact_match,strict-match": 0, "exact_match_stderr,strict-match": 0, "exact_match,flexible-extract": 0, "exact_match_stderr,flexible-extract": 0 } }
{ "gsm8k": [] }
{ "gsm8k": { "task": "gsm8k", "tag": [ "math_word_problems" ], "dataset_path": "gsm8k", "dataset_name": "main", "training_split": "train", "test_split": "test", "fewshot_split": "train", "doc_to_text": "Question: {{question}}\nAnswer:", "doc_to_target": "{{answer}}", "description": "", "target_delimiter": " ", "fewshot_delimiter": "\n\n", "num_fewshot": 5, "metric_list": [ { "metric": "exact_match", "aggregation": "mean", "higher_is_better": true, "ignore_case": true, "ignore_punctuation": false, "regexes_to_ignore": [ ",", "\\$", "(?s).*#### ", "\\.$" ] } ], "output_type": "generate_until", "generation_kwargs": { "until": [ "Question:", "</s>", "<|im_end|>" ], "do_sample": false, "temperature": 0 }, "repeats": 1, "filter_list": [ { "name": "strict-match", "filter": [ { "function": "regex", "regex_pattern": "#### (\\-?[0-9\\.\\,]+)", "group_select": null }, { "function": "take_first", "regex_pattern": null, "group_select": null } ] }, { "name": "flexible-extract", "filter": [ { "function": "regex", "regex_pattern": "(-?[$0-9.,]{2,})|(-?[0-9]+)", "group_select": -1 }, { "function": "take_first", "regex_pattern": null, "group_select": null } ] } ], "should_decontaminate": false, "metadata": { "version": 3 } } }
{ "gsm8k": 3 }
{ "gsm8k": 5 }
{ "gsm8k": { "exact_match": true } }
{ "gsm8k": { "original": 1319, "effective": 10 } }
{ "model": "hf", "model_args": "pretrained=EleutherAI/pythia-14m", "model_num_parameters": 14067712, "model_dtype": "torch.float16", "model_revision": "main", "model_sha": "f33025648652797a390d8c54835273845b437161", "batch_size": 1, "batch_sizes": [], "device": "mps", "use_cache": null, "limit": 10, "bootstrap_iters": 100000, "gen_kwargs": null, "random_seed": 0, "numpy_seed": 1234, "torch_seed": 1234, "fewshot_seed": 1234 }
928e8bb6
1,724,994,712.620037
'NoneType' object has no attribute 'splitlines'
4.44.2
null
[ "<|endoftext|>", "0" ]
[ "<|endoftext|>", "0" ]
[ "<|endoftext|>", "0" ]
0
2,048
{}
hf
EleutherAI/pythia-14m
EleutherAI__pythia-14m
null
null
false
null
null
47,792.624844
47,819.535151
26.910307540994836
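Two things in these records are worth unpacking. First, the config field pins down how each run was produced: the lm-evaluation-harness hf backend with pretrained=EleutherAI/pythia-14m, 5-shot gsm8k, limited to 10 samples, batch size 1, on an mps device. A sketch of an equivalent run through the harness's Python API (argument names mirror the config keys above; this is an illustration, not a script from this repository):

```python
import lm_eval

# Mirrors the `config` field of the preview rows.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-14m",
    tasks=["gsm8k"],
    num_fewshot=5,
    limit=10,       # only the first 10 of the 1319 test examples
    batch_size=1,
    device="mps",
)
print(results["results"]["gsm8k"])
```

The all-zero exact_match scores are unsurprising for a 14M-parameter model on GSM8K, and with limit=10 the effective sample count in n-samples is 10 rather than the full 1319.

Second, the two scores per task come from the filter pipelines defined in configs: strict-match extracts the number after "#### ", while flexible-extract takes the last number-like span anywhere in the output (group_select: -1). A quick illustration of the two regexes using only the standard library:

```python
import re

output = "Janet has 3 ducks and buys 2 more, so she has 5.\n#### 5"

# strict-match: first capture of the '#### <number>' pattern
strict = re.search(r"#### (\-?[0-9\.\,]+)", output).group(1)

# flexible-extract: every number-like span, keeping the last one
spans = [m.group(0) for m in re.finditer(r"(-?[$0-9.,]{2,})|(-?[0-9]+)", output)]
flexible = spans[-1]

print(strict, flexible)  # 5 5
```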

No dataset card yet

Downloads last month: 0