bingwork committed
Commit e82f7db
1 Parent(s): 53b876b

update readme

Files changed (1)
  1. README.md +28 -0
README.md CHANGED
@@ -1,3 +1,31 @@
  ---
  license: apache-2.0
+ pipeline_tag: image-to-text
  ---
+
+ # MMAlaya2
+
+ MMAlaya2 fine-tunes 20 LoRA modules on top of the InternVL-Chat-V1-5 model and then merges them back into InternVL-Chat-V1-5 with TIES, the model-merging method provided by PEFT.
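+
+ As a rough sketch of the merging step: PEFT exposes TIES through `add_weighted_adapter`. The adapter paths, names, equal weights, and `density` value below are illustrative assumptions, not the exact settings used for MMAlaya2.
+
+ ```python
+ # Sketch of TIES merging with PEFT. Adapter paths, weights, and
+ # density are assumptions, not MMAlaya2's actual settings.
+ from transformers import AutoModel
+ from peft import PeftModel
+
+ base = AutoModel.from_pretrained(
+     "OpenGVLab/InternVL-Chat-V1-5", trust_remote_code=True
+ )
+
+ # Attach the first LoRA, then load the remaining 19 by name.
+ model = PeftModel.from_pretrained(base, "loras/category_00", adapter_name="cat00")
+ names = ["cat00"]
+ for i in range(1, 20):
+     name = f"cat{i:02d}"
+     model.load_adapter(f"loras/category_{i:02d}", adapter_name=name)
+     names.append(name)
+
+ # TIES trims each task vector to its highest-magnitude fraction,
+ # resolves sign conflicts across adapters, then averages the rest.
+ model.add_weighted_adapter(
+     adapters=names,
+     weights=[1.0] * len(names),
+     adapter_name="ties_merge",
+     combination_type="ties",
+     density=0.2,
+ )
+ model.set_adapter("ties_merge")
+
+ # Fold the merged adapter into the base weights for release.
+ merged = model.merge_and_unload()
+ ```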
+
+ You can find the inference code [here](https://github.com/open-compass/VLMEvalKit/blob/main/vlmeval/vlm/mmalaya.py#L8).
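+
+ For a quick test through VLMEvalKit, a call along these lines should work; the registry key `"MMAlaya"` and the image path are assumptions, so check `vlmeval/config.py` for the exact entry.
+
+ ```python
+ # Minimal VLMEvalKit usage sketch. The model key and image path are
+ # assumptions; see vlmeval/config.py for the exact registry entry.
+ from vlmeval.config import supported_VLM
+
+ model = supported_VLM["MMAlaya"]()          # assumed registry key
+ answer = model.generate(["demo.jpg", "Describe this image."])
+ print(answer)
+ ```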
+
+ The [MMBench](https://mmbench.opencompass.org.cn/) benchmark's `mmbench_dev_cn_20231003.tsv` dataset covers 20 categories. For each category, we first prepare a training dataset using Chain-of-Thought (CoT) consistency with the InternVL-Chat-V1-5 model. For categories such as `nature_relation`, `image_emotion`, `image_scene`, `action_recognition`, and `image_style`, we additionally analyze InternVL-Chat-V1-5's bad cases and collect images and QA text from online sources to address them.
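+
+ The CoT-consistency step can be pictured as majority voting over several sampled reasoning paths. In the sketch below, `ask_model` is a hypothetical helper standing in for an InternVL-Chat-V1-5 call, and the sample count is illustrative.
+
+ ```python
+ # Hypothetical sketch of CoT-consistency filtering: sample several
+ # chain-of-thought answers per question and keep the example only if
+ # the majority answer matches the ground truth. `ask_model` is a
+ # stand-in for an InternVL-Chat-V1-5 call, not a real API.
+ from collections import Counter
+
+ def cot_consistent(ask_model, image, question, answer, n_samples=5):
+     """True if the majority of sampled CoT answers equals `answer`."""
+     votes = Counter(
+         ask_model(image, question + " Let's think step by step.")
+         for _ in range(n_samples)
+     )
+     majority, count = votes.most_common(1)[0]
+     return majority == answer and count > n_samples // 2
+
+ # Keep only examples the model answers consistently and correctly:
+ # train = [ex for ex in data if cot_consistent(ask, ex.img, ex.q, ex.a)]
+ ```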
+
+ After fine-tuning, the 20 LoRAs are merged into the InternVL-Chat-V1-5 model with TIES. The merged model reaches a noteworthy average score of 82.2 on the `mmbench_test_cn_20231003.tsv` benchmark, which is why we are releasing it publicly.
+
+ # License
+
+ This project is released under the MIT license, in alignment with the InternVL-Chat-V1-5 model's license. InternLM2, however, is licensed under the Apache-2.0 license.
+
+ # Citation
+
+ If you find this project useful in your research, please consider citing:
+
+ ```bibtex
+ @misc{datacanvas2024mmalaya2,
+   author       = {DataCanvas Ltd.},
+   title        = {MMAlaya2},
+   year         = {2024},
+   howpublished = {\url{https://huggingface.co/DataCanvas/MMAlaya2}},
+ }
+ ```