What zh/en benchmarks are used for evaluation?

Opened by JefferyChen453

Beijing Academy of Artificial Intelligence org

We follow the evaluation settings from FineWeb and use the lighteval library.
You can check out our script here: https://huggingface.co/datasets/BAAI/CCI3-Data/blob/main/lighteval_tasks_v2.py.
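
For readers unfamiliar with lighteval custom task files: broadly, such a script defines prompt-formatting functions plus a table of `LightevalTaskConfig` entries. The snippet below is a hypothetical, abridged sketch in that style, not the actual contents of our script; the `arc_prompt` function and all field values are illustrative, field names can differ across lighteval versions, and the linked lighteval_tasks_v2.py is the authoritative definition.

```python
# Hypothetical, abridged sketch of a custom lighteval task file
# (see the linked lighteval_tasks_v2.py for the real definitions;
# exact config fields vary between lighteval versions).
from lighteval.tasks.lighteval_task import LightevalTaskConfig
from lighteval.tasks.requests import Doc


def arc_prompt(line, task_name: str = None):
    # Map one ai2_arc row to a lighteval Doc: the question becomes the
    # query, the answer options become choices, and gold_index points
    # at the correct answer's position.
    return Doc(
        task_name=task_name,
        query=f"Question: {line['question']}\nAnswer:",
        choices=[f" {c}" for c in line["choices"]["text"]],
        gold_index=line["choices"]["label"].index(line["answerKey"]),
    )


TASKS_TABLE = [
    LightevalTaskConfig(
        name="arc:challenge",
        prompt_function="arc_prompt",  # resolved by name in older lighteval versions
        suite=["custom"],
        hf_repo="ai2_arc",
        hf_subset="ARC-Challenge",
        evaluation_splits=["test"],
        metric=["loglikelihood_acc_norm_nospace"],
    ),
]
```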

All zh/en benchmarks are listed as follows (a sketch of the corresponding lighteval task specs follows the list):
- ARC-c (0-shot, en)
- ARC-e (0-shot, en)
- HellaSwag (0-shot, en)
- Winogrande (0-shot, en)
- MMLU (0-shot, en)
- OpenBookQA (0-shot, en)
- PIQA (0-shot, en)
- SIQA (0-shot, en)
- C-Eval (0-shot, zh)
- CMMLU (0-shot, zh)
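
In lighteval, tasks are usually selected with pipe-delimited specs of the form `suite|task|num_fewshot|truncate_few_shots`. The list below is a hypothetical illustration of what a 0-shot selection for these benchmarks could look like; the actual task names (and the per-subject subsets of MMLU, C-Eval, and CMMLU) are defined in lighteval_tasks_v2.py.

```python
# Hypothetical 0-shot task selection in lighteval's
# "suite|task|num_fewshot|truncate_few_shots" format; the real task
# names come from the registrations in lighteval_tasks_v2.py.
ZH_EN_TASKS = [
    "custom|arc:challenge|0|1",
    "custom|arc:easy|0|1",
    "custom|hellaswag|0|1",
    "custom|winogrande|0|1",
    "custom|openbookqa|0|1",
    "custom|piqa|0|1",
    "custom|siqa|0|1",
    # MMLU, C-Eval, and CMMLU are typically registered as one subtask
    # per subject and their scores averaged; one example subject each:
    "custom|mmlu:abstract_algebra|0|1",
    "custom|ceval:accountant|0|1",
    "custom|cmmlu:agronomy|0|1",
]

# One spec per line is a common on-disk format for a tasks file
# passed to the lighteval CLI:
with open("tasks_zh_en.txt", "w") as f:
    f.write("\n".join(ZH_EN_TASKS) + "\n")
```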
