praeclarumjj3 committed on
Commit 736078c
Parent: 0274a1c

Update README.md

Files changed (1): README.md +2 -2
README.md CHANGED
@@ -6,7 +6,7 @@ license: apache-2.0
 
 VCoder LLaVA-1.5-13b was trained on the COST training dataset in December 2023. It uses the pretrained [LLaVA-1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) model weights. It was introduced by Jain et al. in [this repository](https://github.com/SHI-Labs/VCoder).
 
-VCoder is an adapter for improving existing Vision LLMs at object-level perception tasks with the use of perception modalities as control inputs while retaining performance on other tasks.
+VCoder is an adapter for improving existing Multimodal LLMs at object-level perception tasks with the use of perception modalities as control inputs while retaining performance on other tasks.
 
 ![img](https://praeclarumjj3.github.io/vcoder/vcoder.svg)
 
@@ -14,7 +14,7 @@ VCoder is an adapter for improving existing Vision LLMs at object-level perception
 
 ```bibtex
 @article{jain2023vcoder,
-  title={{VCoder: Versatile Visual Encoder for Accurate Object-Level Perception with Large Language Models}},
+  title={{VCoder: Versatile Vision Encoders for Multimodal Large Language Models}},
   author={Jitesh Jain and Jianwei Yang and Humphrey Shi},
   journal={arXiv},
   year={2023}
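
The updated card describes VCoder as an adapter on top of the pretrained LLaVA-1.5-13b weights. As a minimal sketch of fetching this card's checkpoint files with `huggingface_hub`: the repo id below is an assumption based on the card's naming, so substitute the actual id from the model page; inference itself goes through the loading code in the linked [VCoder repository](https://github.com/SHI-Labs/VCoder).

```python
# Minimal sketch: download the checkpoint files this README describes.
# The repo id is a hypothetical guess based on the card's naming; replace
# it with the actual Hugging Face repo id if it differs.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="shi-labs/vcoder_llava-v1.5-13b")
print("Checkpoint files downloaded to:", local_dir)

# For inference, use the loading utilities in the VCoder repository
# (https://github.com/SHI-Labs/VCoder), which build on the LLaVA-1.5 codebase.
```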