arxiv:2410.07137

Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates

Published on Oct 9 · Submitted by SivilTaram on Oct 11
Abstract

Automatic LLM benchmarks, such as AlpacaEval 2.0, Arena-Hard-Auto, and MT-Bench, have become popular for evaluating language models due to their cost-effectiveness and scalability compared to human evaluation. Achieving high win rates on these benchmarks can significantly boost the promotional impact of newly released language models. This promotional benefit may motivate tricks, such as manipulating model output length or style to game win rates, even though several mechanisms have been developed to control length and disentangle style to reduce gameability. Nonetheless, we show that even a "null model" that always outputs a constant response (irrelevant to input instructions) can cheat automatic benchmarks and achieve top-ranked win rates: an 86.5% LC win rate on AlpacaEval 2.0; an 83.0 score on Arena-Hard-Auto; and a 9.55 score on MT-Bench. Moreover, the crafted cheating outputs are transferable because we assume that the instructions of these benchmarks (e.g., 805 samples of AlpacaEval 2.0) are private and cannot be accessed. While our experiments are primarily proof-of-concept, an adversary could use LLMs to generate more imperceptible cheating responses, unethically benefiting from high win rates and promotional impact. Our findings call for the development of anti-cheating mechanisms for reliable automatic benchmarks. The code is available at https://github.com/sail-sg/Cheating-LLM-Benchmarks.
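For intuition, here is a minimal sketch of what a "null model" means in this setting: a model whose output is independent of the input instruction. This is an illustrative assumption, not code from the paper; the `NullModel` class and the placeholder string are hypothetical, whereas the paper's actual cheating output is a carefully crafted adversarial response (see the linked repository).

```python
class NullModel:
    """A 'model' whose output ignores the instruction entirely.

    Illustrative sketch only: the paper's real cheating response is a
    crafted adversarial string, not the placeholder used here.
    """

    def __init__(self, constant_response: str):
        self.constant_response = constant_response

    def generate(self, instruction: str) -> str:
        # The instruction is deliberately unused: every query gets
        # the same constant response.
        return self.constant_response


if __name__ == "__main__":
    model = NullModel("<placeholder for a crafted cheating response>")
    for instruction in [
        "What is the capital of France?",
        "Write a haiku about autumn.",
    ]:
        # Both prints emit the identical constant string.
        print(model.generate(instruction))
```

The paper's point is that even this degenerate setup, paired with the right constant string, is enough to top LLM-judged leaderboards.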

Community

Paper author / Paper submitter:

🔥 Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates 🔥

86.5% LC win rate on AlpacaEval 2.0
83.0% win rate on Arena-Hard-Auto
9.55 score on MT-Bench

No need to access test instructions (so you can also cheat private leaderboards)
