MM-Ego: Towards Building Egocentric Multimodal LLMs
Abstract
This research aims to comprehensively explore building a multimodal foundation model for egocentric video understanding. To achieve this goal, we work on three fronts. First, as there is a lack of QA data for egocentric video understanding, we develop a data engine that efficiently generates 7M high-quality QA samples for egocentric videos ranging from 30 seconds to one hour long, based on human-annotated data. This is currently the largest egocentric QA dataset. Second, we contribute a challenging egocentric QA benchmark with 629 videos and 7,026 questions to evaluate models' ability to recognize and memorize visual details across videos of varying lengths. We introduce a new de-biasing evaluation method to help mitigate the unavoidable language bias present in the models being evaluated. Third, we propose a specialized multimodal architecture featuring a novel "Memory Pointer Prompting" mechanism. This design includes a global glimpse step to gain an overarching understanding of the entire video and identify key visual information, followed by a fallback step that utilizes the key visual information to generate responses. This enables the model to comprehend extended video content more effectively. With the data, benchmark, and model, we successfully build MM-Ego, an egocentric multimodal LLM that shows powerful performance on egocentric video understanding.
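To illustrate the de-biasing idea, below is a minimal sketch of one way a language-bias-corrected accuracy could be computed, assuming each question is answered twice (once with the video and once blind, text only) and credit is given only for questions the model cannot already solve from its language prior. The scoring rule and the function name `debiased_accuracy` are illustrative assumptions, not necessarily the exact protocol used in the paper.

```python
# Illustrative sketch of a language-bias-corrected score (assumed rule, not
# necessarily the paper's exact de-biasing protocol).

from typing import List


def debiased_accuracy(correct_with_video: List[bool],
                      correct_blind: List[bool]) -> float:
    """Credit only questions the model answers correctly *because of* the video:
    correct with visual input, but not answerable from the language prior alone."""
    assert len(correct_with_video) == len(correct_blind)
    # Questions that are not solved blind are the ones that actually test vision.
    eligible = [not b for b in correct_blind]
    kept = [v and not b for v, b in zip(correct_with_video, correct_blind)]
    return sum(kept) / max(sum(eligible), 1)


if __name__ == "__main__":
    with_video = [True, True, False, True]
    blind = [True, False, False, False]  # first question is answerable without the video
    print(f"de-biased accuracy: {debiased_accuracy(with_video, blind):.2f}")  # 0.67
```

Under a rule like this, a model that answers many questions correctly without looking at the video gains nothing from them, which isolates genuinely visual understanding from language priors.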
Community
TL;DR:
- 🚂 A data engine that produces the largest egocentric QA dataset to date: 7M high-quality QA samples generated from videos ranging from 30 seconds to one hour long, based on human-annotated data.
- 📐 An EgoMemoria benchmark with 629 videos and 7,026 questions to evaluate models’ ability to recognize and memorize visual details across various video lengths.
- 👆 An innovative “Memory Pointer Prompting” mechanism for handling long egocentric videos, which first captures a 👀 “global glimpse” of the entire video, then 🔍 “falls back” to key visual details to generate more precise responses (a minimal code sketch follows this list).
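To make the “global glimpse → fallback” idea concrete, the sketch below shows a minimal two-step readout, assuming the long video has already been encoded into per-frame features and the question into a single embedding. The function names (`glimpse_scores`, `select_memory_pointers`, `answer_with_frames`) and the cosine-similarity relevance head are hypothetical placeholders, not the paper's actual Memory Pointer Prompting implementation.

```python
# Minimal sketch of a two-step "global glimpse -> fallback" readout.
# All names and the cosine-similarity scoring are illustrative assumptions.

import torch


def glimpse_scores(frame_feats: torch.Tensor, question_emb: torch.Tensor) -> torch.Tensor:
    """Global glimpse: score every frame's relevance to the question.

    frame_feats:  (num_frames, dim) pooled per-frame features.
    question_emb: (dim,) pooled question embedding.
    """
    # Cosine similarity stands in for whatever relevance head the real model uses.
    frame_norm = torch.nn.functional.normalize(frame_feats, dim=-1)
    q_norm = torch.nn.functional.normalize(question_emb, dim=-1)
    return frame_norm @ q_norm  # (num_frames,)


def select_memory_pointers(scores: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Pick the indices of the k most relevant frames (the "memory pointers")."""
    k = min(k, scores.numel())
    return torch.topk(scores, k).indices.sort().values  # keep temporal order


def answer_with_frames(frame_feats: torch.Tensor,
                       pointer_idx: torch.Tensor,
                       question_emb: torch.Tensor):
    """Fallback step: hand only the selected frames (plus the question) to the
    answer generator. Here we just gather the features; in a real system they
    would be interleaved with text tokens and fed to the LLM."""
    return frame_feats[pointer_idx], question_emb


if __name__ == "__main__":
    torch.manual_seed(0)
    feats = torch.randn(1200, 512)   # e.g. one feature per sampled frame of a long video
    question = torch.randn(512)
    scores = glimpse_scores(feats, question)
    pointers = select_memory_pointers(scores, k=8)
    key_frames, _ = answer_with_frames(feats, pointers, question)
    print(pointers.tolist(), key_frames.shape)
```

The point of this design is that the answer-generation step only ever sees the handful of frames the glimpse step pointed at, so the cost of answering stays roughly constant even as the video grows toward an hour.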