Lin-K76 committed
Commit 7cb7d41
1 Parent(s): 85b508f

Update README.md

Files changed (1): README.md (+3, -3)
README.md CHANGED
@@ -21,7 +21,7 @@ tags:
 
 Quantized version of [DeepSeek-Coder-V2-Lite-Instruct](deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct).
 <!-- It achieves an average score of 73.19 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 73.48. -->
-It achieves an average score of 79.60 on the [HumanEval](https://github.com/openai/human-eval?tab=readme-ov-file) benchmark, whereas the unquantized model achieves 79.33.
+It achieves an average score of 79.60 on the [HumanEval+](https://github.com/openai/human-eval?tab=readme-ov-file) benchmark, whereas the unquantized model achieves 79.33.
 
 ### Model Optimizations
 
@@ -100,7 +100,7 @@ model.save_quantized(quantized_model_dir)
 
 ## Evaluation
 
-The model was evaluated on the [HumanEval](https://github.com/openai/human-eval?tab=readme-ov-file) benchmark with the [Neural Magic fork](https://github.com/neuralmagic/evalplus) of the [EvalPlus implementation of HumanEval](https://github.com/evalplus/evalplus) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
+The model was evaluated on the [HumanEval+](https://github.com/openai/human-eval?tab=readme-ov-file) benchmark with the [Neural Magic fork](https://github.com/neuralmagic/evalplus) of the [EvalPlus implementation of HumanEval+](https://github.com/evalplus/evalplus) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
 ```
 python codegen/generate.py --model neuralmagic/DeepSeek-Coder-V2-Lite-Instruct-FP8 --temperature 0.2 --n_samples 50 --resume --root ~ --dataset humaneval
 python evalplus/sanitize.py ~/humaneval/neuralmagic--DeepSeek-Coder-V2-Lite-Instruct-FP8_vllm_temp_0.2
@@ -109,7 +109,7 @@ evalplus.evaluate --dataset humaneval --samples ~/humaneval/neuralmagic--DeepSee
 
 ### Accuracy
 
-#### Open LLM Leaderboard evaluation scores
+#### HumanEval+ Leaderboard evaluation scores
 <table>
 <tr>
 <td><strong>Benchmark</strong>