How is the image resolution expanded in a vision encoder?

#57 opened by efei

Thanks for your excellent work.

I want to know what happens going from pretraining stage 1 to stage 2, when the image resolution is increased from 384 to 980.

Is this how it works?

Stage 1: The longest side of the image is set to 384, so the number of patches is at most 27 * 27. The position ids are computed as in Idefics2VisionEmbeddings.

Stage 2: To expand the resolution to 980 * 980, the maximum number of patches and position ids are extended to 70 * 70. The position embedding is reinitialized, and during this reinitialization the original embeddings for the old position ids, as computed by Idefics2VisionEmbeddings, are reused for the corresponding new positions. The longest side of the image is then set to 980, and training continues with the new position embedding. For example, new position ids [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, ...] would reuse the old embeddings for ids [0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 3, 4, ...].
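
For concreteness, here is a toy sketch of the id mapping I describe above (this only illustrates my understanding, not the actual Idefics2 code):

```python
# Toy sketch of the mapping described above (my understanding, not the
# actual Idefics2 code): each new position id along one axis reuses the
# old embedding that covers the same relative part of the image.
old_side, new_side = 27, 70  # 384/14 -> 27 patches per side, 980/14 -> 70

old_id_for_new_id = [int(i * old_side / new_side) for i in range(new_side)]
print(old_id_for_new_id[:12])  # [0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 3, 4]
```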

hi @efei
that is a great question!

to go from 384 to 980, we interpolate the 2d positional embeddings. this is rather standard for vision transformers; if you are curious, one implementation is here: https://github.com/huggingface/transformers/blob/573565e35a5cc68f6cfb6337f5a93753ab16c65b/src/transformers/models/vit/modeling_vit.py#L76

that allows us to convert the 27 * 27 positions to 70 * 70 positions.
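
for concreteness, here is a minimal sketch of such a resize (illustrative only, not the exact Idefics2 code; the hidden size of 1152 is just an example):

```python
# Minimal sketch: resize a learned 2D position embedding table from a
# 27x27 grid to a 70x70 grid with bicubic interpolation.
import torch
import torch.nn.functional as F

old_grid, new_grid, dim = 27, 70, 1152  # dim is illustrative

# learned embeddings for the 27*27 = 729 positions, shape (729, dim)
pos_embed = torch.randn(old_grid * old_grid, dim)

# reshape to a 2D grid of embeddings: (1, dim, 27, 27)
grid = pos_embed.reshape(1, old_grid, old_grid, dim).permute(0, 3, 1, 2)

# interpolate to the new grid size: (1, dim, 70, 70)
grid = F.interpolate(grid, size=(new_grid, new_grid), mode="bicubic", align_corners=False)

# flatten back to (70*70, dim) = (4900, dim)
new_pos_embed = grid.permute(0, 2, 3, 1).reshape(new_grid * new_grid, dim)
print(new_pos_embed.shape)  # torch.Size([4900, 1152])
```

the linked ViT implementation follows the same idea, with some extra handling (e.g. for the class token).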

we treat the positions as fractional coordinates: in the original 27 * 27 grid, we encode a position as the fractions (n/27, m/27), where (n, m) are the coordinates in [0, 26]. that means we can directly translate this coordinate system to the new 70 * 70 grid, where a position (n', m') corresponds to the fractions (n'/70, m'/70). Does that make sense?
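
a tiny toy check of that fractional view (illustrative only):

```python
# A patch at grid index (n, m) in the 27x27 layout and its nearest
# counterpart in the 70x70 layout share (roughly) the same fraction.
old_grid, new_grid = 27, 70

for n, m in [(0, 0), (13, 13), (26, 26)]:
    frac_n, frac_m = n / old_grid, m / old_grid
    n_new, m_new = round(frac_n * new_grid), round(frac_m * new_grid)
    print(f"old ({n:2d},{m:2d}) -> frac ({frac_n:.2f},{frac_m:.2f}) -> new ({n_new},{m_new})")
```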

