---

license: apache-2.0
pipeline_tag: image-to-text
---


# MMAlaya2

MMAlaya2 fine-tunes 20 LoRA modules on top of the InternVL-Chat-V1-5 model and then merges them back into InternVL-Chat-V1-5 with TIES, the model-merging method implemented in PEFT.

You can find the inference code [here](https://github.com/open-compass/VLMEvalKit/blob/main/vlmeval/vlm/mmalaya.py).
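
For a quick local check outside the full evaluation pipeline, the model can also be called through VLMEvalKit's model registry. The sketch below is only illustrative: the registry key `'MMAlaya'` and the image path `demo.jpg` are assumptions, so check `vlmeval/vlm/mmalaya.py` for the exact registered name in your VLMEvalKit version.

```python
# Minimal sketch using VLMEvalKit's model registry; the registry key 'MMAlaya'
# and the image path 'demo.jpg' are assumptions, adjust them to your setup.
from vlmeval.config import supported_VLM

model = supported_VLM['MMAlaya']()   # instantiate the wrapper; weights are downloaded on first use
response = model.generate(['demo.jpg', 'Describe this image in one sentence.'])
print(response)
```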

The [MMBench](https://mmbench.opencompass.org.cn/) benchmark contains 20 categories in the `mmbench_dev_cn_20231003.tsv` dataset. For each category, we first build the training set with CoT (Chain-of-Thought) consistency filtering applied to the InternVL-Chat-V1-5 model's outputs. For specific categories such as nature_relation, image_emotion, image_scene, action_recognition, and image_style, we also analyze the failure cases of InternVL-Chat-V1-5 and collect additional images and QA text from online sources to address them.
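
The data-preparation pipeline itself is not released with this card, but the sketch below illustrates the general idea of CoT consistency filtering: sample several chain-of-thought answers per question and keep only the samples whose majority answer agrees with the label. The `generate_cot_answer` helper and the agreement threshold are hypothetical placeholders.

```python
from collections import Counter

def cot_consistency_filter(samples, generate_cot_answer, n_samples=5, threshold=0.8):
    """Keep samples on which the model's sampled CoT answers are self-consistent.

    `samples` is an iterable of dicts with 'image', 'question', and 'answer' keys.
    `generate_cot_answer(image, question)` is a hypothetical wrapper around
    InternVL-Chat-V1-5 that returns the final answer parsed from one sampled
    chain-of-thought response.
    """
    kept = []
    for sample in samples:
        answers = [generate_cot_answer(sample['image'], sample['question'])
                   for _ in range(n_samples)]
        majority_answer, count = Counter(answers).most_common(1)[0]
        # Keep the sample only if the model is self-consistent and matches the label.
        if count / n_samples >= threshold and majority_answer == sample['answer']:
            kept.append(sample)
    return kept
```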



After fine-tuning, the 20 LoRAs are merged into the InternVL-Chat-V1-5 model with the TIES method; a sketch of such a merge is shown below.
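
The actual merging script is not part of this card; the snippet below is only a minimal sketch of how a 20-adapter TIES merge can be written with PEFT's `add_weighted_adapter`. The adapter paths and names, the uniform weights, and the `density` value are illustrative placeholders.

```python
# Minimal sketch of a TIES merge with PEFT, not the actual MMAlaya2 merging script.
# Adapter paths/names, weights, and density below are illustrative placeholders.
from transformers import AutoModel
from peft import PeftModel

base = AutoModel.from_pretrained("OpenGVLab/InternVL-Chat-V1-5", trust_remote_code=True)

# Attach the first LoRA, then load the remaining 19 as additional adapters.
adapter_paths = [f"loras/category_{i:02d}" for i in range(20)]   # hypothetical local paths
model = PeftModel.from_pretrained(base, adapter_paths[0], adapter_name="cat_00")
for i, path in enumerate(adapter_paths[1:], start=1):
    model.load_adapter(path, adapter_name=f"cat_{i:02d}")

adapter_names = [f"cat_{i:02d}" for i in range(20)]
model.add_weighted_adapter(
    adapters=adapter_names,
    weights=[1.0] * len(adapter_names),
    adapter_name="ties_merge",
    combination_type="ties",
    density=0.2,                  # fraction of parameters kept per adapter (placeholder)
)
model.set_adapter("ties_merge")
merged = model.merge_and_unload()  # fold the merged adapter back into the base weights
```

TIES trims each adapter to its largest-magnitude updates (controlled by `density`), resolves sign conflicts by majority vote, and averages the surviving updates, which helps the 20 category-specific LoRAs coexist without interfering with one another.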



Many thanks to the OpenCompass MMBench team for updating the [leaderboard](https://mmbench.opencompass.org.cn/leaderboard) on August 27, 2024. The ranks and scores below are taken from that leaderboard for reference; an entry such as "7/82.1" means 7th place with a score of 82.1 in that category. We use GPT-4o (0513, detail-high) as the reference model because it is the best-performing GPT-4o variant on MMBench Test (CN).



| Model | MMBench Test (CN) | MMBench v1.1 Test (CN) | CCBench dev | MMBench Test | MMBench v1.1 Test |
| ----- | ----------------- | ---------------------- | ----------- | ------------ | ----------------- |
| GPT-4o (0513, detail-high) | 4/82.1 | 5/81.5 | 7/71.2 | 4/83.4 | 5/83 |
| MMAlaya2 | 7/82.1 | 8/79.7 | 8/70 | 9/82.5 | 9/80.6 |
| InternVL-Chat-V1.5 | 14/80.7 | 15/79.1 | 9/69.8 | 11/82.3 | 10/80.3 |



MMAlaya2's average score on MMBench Test (CN) reaches 82.1, surpassing InternVL-Chat-V1.5's 80.7 by 1.4 points. Although it ranks 7th, its score matches that of GPT-4o (0513, detail-high), which ranks 4th. Scores on the other four benchmarks, MMBench v1.1 Test (CN), CCBench dev, MMBench Test, and MMBench v1.1 Test, also improve over InternVL-Chat-V1.5 by 0.2 to 0.6 points, further narrowing the gap to GPT-4o.



We find these results noteworthy and are therefore sharing the model publicly.





# License



This project is released under the MIT license, consistent with the license of the InternVL-Chat-V1-5 model. Note, however, that InternLM2 is licensed under Apache-2.0.



# Citation



If you find this project useful in your research, please consider citing:



```bibtex
@misc{datacanvas2024mmalaya2,
    author = {DataCanvas Ltd.},
    title = {MMAlaya2},
    year = {2024},
    howpublished = {\url{https://huggingface.co/DataCanvas/MMAlaya2}},
}
```