Bootstrap Masked Visual Modeling via Hard Patch Mining (TPAMI 2025)

Introduction

Typical masked visual modeling approaches are limited to having the model predict the specific contents of masked tokens, which can be intuitively understood as teaching a student (the model) to solve given problems (predicting the masked contents). Under this setting, performance depends heavily on the mask strategy (the difficulty of the provided problems).

This work argues that it is equally important for the model to stand in a teacher's shoes and generate challenging problems by itself. To endow the model with such teaching ability, this work proposes Hard Patches Mining (HPM), which first predicts patch-wise losses and then decides where to mask. Specifically, as shown in Figure 1, an auxiliary loss predictor is introduced, and masks of varying difficulty are generated based on its predicted losses. In addition, to gradually guide the training procedure, an easy-to-hard mask strategy is proposed.
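The mask-generation step described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's code: the function name `select_mask` and the parameter `alpha` (the fraction of masked patches taken from the hardest ones, ramped up from 0 during training to realize the easy-to-hard curriculum) are illustrative assumptions.

```python
import numpy as np

def select_mask(predicted_losses, mask_ratio, alpha):
    """Choose which patches to mask (illustrative sketch, not the official HPM code).

    A fraction ``alpha`` of the masked set is taken from the patches with the
    highest predicted reconstruction loss ("hard" patches); the remainder is
    sampled at random. Increasing ``alpha`` over training yields an
    easy-to-hard masking schedule.
    """
    n = len(predicted_losses)
    num_mask = int(n * mask_ratio)          # total patches to mask
    num_hard = int(num_mask * alpha)        # of which: hardest-first
    order = np.argsort(predicted_losses)[::-1]  # indices, hardest first
    hard = order[:num_hard]
    # fill the rest of the mask budget with randomly chosen remaining patches
    rand = np.random.choice(order[num_hard:], num_mask - num_hard, replace=False)
    mask = np.zeros(n, dtype=bool)
    mask[hard] = True
    mask[rand] = True
    return mask
```

With `alpha = 0` this reduces to the standard random masking of MAE-style methods; with `alpha = 1` every masked patch is a predicted-hard one.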

Empirically, HPM brings significant improvements across a variety of benchmarks (Figure 2).

Abstract

Masked visual modeling has attracted much attention due to its promising potential in learning generalizable representations. Typical approaches urge models to predict specific contents of masked tokens, which can be intuitively considered as teaching a student (the model) to solve given problems (predicting masked contents). Under such settings, the performance is highly correlated with mask strategies (the difficulty of provided problems). We argue that it is equally important for the model to stand in the shoes of a teacher to produce challenging problems by itself. Intuitively, patches with high values of reconstruction loss can be regarded as hard samples, and masking those hard patches naturally becomes a demanding reconstruction task. To empower the model as a teacher, we propose Hard Patches Mining (HPM), predicting patch-wise losses and subsequently determining where to mask. Technically, we introduce an auxiliary loss predictor, which is trained with a relative objective to prevent overfitting to exact loss values. Also, to gradually guide the training procedure, we propose an easy-to-hard mask strategy. Empirically, HPM brings significant improvements under both image and video benchmarks. Interestingly, solely incorporating the extra loss prediction objective leads to better representations, verifying the efficacy of determining where it is hard to reconstruct.
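The "relative objective" mentioned in the abstract trains the loss predictor to rank patches by difficulty rather than regress exact loss values. A hedged NumPy sketch of one such pairwise ranking loss is given below; the function name `relative_loss` and the exact pairwise binary-cross-entropy form are illustrative assumptions, not necessarily the paper's precise formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relative_loss(pred, target):
    """Pairwise ranking objective for a loss predictor (illustrative sketch).

    pred:   1-D array of predicted per-patch losses.
    target: 1-D array of actual per-patch reconstruction losses.
    For each pair (i, j), the predictor is rewarded for ordering
    pred[i] vs. pred[j] the same way as target[i] vs. target[j],
    without being tied to the absolute loss values.
    """
    diff_pred = pred[:, None] - pred[None, :]            # predicted margin i vs. j
    sign = np.sign(target[:, None] - target[None, :])    # +1 if patch i is harder
    logits = sign * diff_pred                            # positive when order agrees
    valid = sign != 0                                    # skip ties and i == j
    return -np.mean(np.log(sigmoid(logits[valid])))
```

A correctly ordered prediction yields a smaller loss than a reversed one, which is all the mask generator needs: it consumes only the relative ordering of the predicted losses.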

Figure 1. Overview of the HPM pipeline
Figure 2. Qualitative comparison between HPM and existing methods

代码:https://github.com/Haochen-Wang409/HPM

论文:https://arxiv.org/pdf/2312.13714

Updated: 2025-08-25 — 7:16 pm


Zhaoxiang Zhang © 2020