V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs

Penghao Wu, Saining Xie

Research output: Contribution to journal › Conference article › peer-review

Abstract

When we look around and perform complex tasks, how we see and selectively process what we see is crucial. However, the lack of this visual search mechanism in current multimodal LLMs (MLLMs) hinders their ability to focus on important visual details, especially when handling high-resolution and visually crowded images. To address this, we introduce V*, an LLM-guided visual search mechanism that employs the world knowledge in LLMs for efficient visual querying. When combined with an MLLM, this mechanism enhances collaborative reasoning, contextual understanding, and precise visual grounding. This integration results in a new MLLM meta-architecture, named Show, sEArch, and TelL (SEAL). We further create V* Bench, a benchmark specifically designed to evaluate MLLMs in their ability to process high-resolution images and focus on visual details. Our study highlights the necessity of incorporating visual search capabilities into multimodal systems. The code is available here.
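To make the Show, sEArch, and TelL (SEAL) idea in the abstract more concrete, the Python sketch below outlines one possible answer loop: try to answer from the global view, fall back to an LLM-guided search for question-relevant objects the model cannot ground, then answer again with the localized crops in context. This is an illustrative assumption, not the authors' implementation; the callables (list_missing_targets, guided_search, answer) and the Box/Crop types are hypothetical stand-ins for the MLLM, the V*-style guided search, and the final answering step.

    """Minimal sketch of a SEAL-style answer loop (hypothetical helpers)."""

    from typing import Callable, List, Optional, Tuple

    Box = Tuple[int, int, int, int]   # (x0, y0, x1, y1) region coordinates
    Crop = Tuple[object, Box]         # (cropped image, its bounding box)


    def answer_with_visual_search(
        image,
        question: str,
        list_missing_targets: Callable[[object, str], List[str]],
        guided_search: Callable[[object, str, str], Optional[Crop]],
        answer: Callable[[object, str, List[Crop]], str],
    ) -> str:
        # "Show": ask the MLLM which question-relevant objects it cannot
        # confidently locate in the (possibly downsampled) global view.
        missing = list_missing_targets(image, question)
        if not missing:
            return answer(image, question, [])

        # "sEArch": for each missing target, an LLM-guided search uses world
        # knowledge (likely locations, co-occurring objects) to decide which
        # sub-regions of the high-resolution image to inspect.
        crops: List[Crop] = []
        for target in missing:
            found = guided_search(image, target, question)
            if found is not None:
                crops.append(found)

        # "TelL": answer again with the localized crops added to the context.
        return answer(image, question, crops)

The design choice illustrated here is the central one argued for in the paper: visual search is an explicit, LLM-guided step in the loop rather than a property expected to emerge from a single forward pass over a downsampled image.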

Original language: English (US)
Pages (from-to): 13084-13094
Number of pages: 11
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
State: Published - 2024
Event: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024 - Seattle, United States
Duration: Jun 16, 2024 to Jun 22, 2024

Keywords

  • multimodal large language model
  • vision and language
  • visual search

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
