Search results with tag "Pretraining"
A Simple Framework for Contrastive Learning of Visual ...
arxiv.org
pretraining (learning encoder network f without labels) is done using the ImageNet ILSVRC-2012 dataset (Russakovsky et al., 2015). Some additional pretraining experiments on CIFAR-10 (Krizhevsky & Hinton, 2009) can be found in Appendix B.9. We also test the pretrained results on a wide range of datasets for transfer learning. To evalu…
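The setup this snippet describes, learning an encoder f without labels, is contrastive pretraining; SimCLR trains it with the NT-Xent objective. Below is a minimal PyTorch sketch of that loss under stated assumptions: the toy encoder, batch size, and temperature of 0.5 are illustrative placeholders, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over embeddings of two augmented views of the same batch."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2n, d), unit norm
    sim = z @ z.t() / temperature                           # scaled cosine similarity
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))              # exclude self-pairs
    targets = torch.arange(2 * n, device=z.device).roll(n)  # positive = other view
    return F.cross_entropy(sim, targets)

# Illustrative usage with a toy stand-in for encoder f and random "augmentations".
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
x1 = torch.randn(8, 3, 32, 32)
x2 = torch.randn(8, 3, 32, 32)
loss = nt_xent_loss(encoder(x1), encoder(x2))
loss.backward()
```

Each image's positive is its other augmented view; all other 2n − 2 embeddings in the batch act as negatives, which is why the targets are the indices shifted by n.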
Taming Transformers for High-Resolution Image Synthesis
openaccess.thecvf.com
suitability of generative pretraining to learn image representations for downstream tasks. Since input resolutions of 32×32 pixels are still quite computationally expensive [8], a VQVAE is used to encode images up to a resolution of 192×192. In an effort to keep the learned discrete representation as spatially invariant as possible with ...
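To ground the VQVAE step the snippet mentions, here is a sketch of the nearest-neighbor quantization that turns continuous encoder features into a grid of discrete codes. The codebook size (1024), feature dimension (256), and 12×12 token grid are assumptions for illustration, not values from the paper.

```python
import torch

# Hypothetical sizes: a 192x192 image downsampled 16x gives a 12x12 code grid.
codebook = torch.randn(1024, 256)            # 1024 discrete codes, 256-dim each

def quantize(z_e):
    """Map continuous encoder features (B, H, W, D) to nearest codebook entries."""
    flat = z_e.reshape(-1, z_e.size(-1))                 # (B*H*W, D)
    dists = torch.cdist(flat, codebook)                  # pairwise L2 distances
    idx = dists.argmin(dim=1)                            # nearest code per position
    z_q = codebook[idx].reshape(z_e.shape)               # quantized features
    return z_q, idx.reshape(z_e.shape[:-1])              # codes as a token grid

z_e = torch.randn(2, 12, 12, 256)            # stand-in for encoder output
z_q, tokens = quantize(z_e)
print(tokens.shape)                          # torch.Size([2, 12, 12])
```

The resulting integer grid is what a downstream autoregressive model consumes in place of raw pixels, which is what makes higher resolutions tractable.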
Generative Pretraining from Pixels - OpenAI
cdn.openai.com
One way to measure representation quality is to fine-tune for image classification. Fine-tuning adds a small classification head to the model, which is used to optimize a classification objective and adapts all weights. Pre-training can be viewed as a favorable initialization or as a regularizer when used in combination with early stopping (Erhan et ...
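A minimal sketch of the fine-tuning protocol the snippet describes: attach a small classification head to a pretrained model and optimize a classification objective while adapting all weights. The toy encoder, optimizer choice, and learning rate here are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Stand-in encoder; in practice these weights would be loaded from pretraining.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())
head = nn.Linear(512, 10)                    # small classification head

model = nn.Sequential(encoder, head)
# Fine-tuning adapts *all* weights, so every parameter goes to the optimizer.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 32, 32)                # dummy image batch
y = torch.randint(0, 10, (8,))               # dummy class labels
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```

Freezing the encoder and training only the head would instead be linear probing; fine-tuning differs precisely in that the pretrained weights keep updating.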
Three Ways To Improve Semantic Segmentation With Self ...
openaccess.thecvf.com
to replace ImageNet pretraining for semantic segmentation. In contrast, we additionally study multi-task learning of SDE and semantic segmentation and show that combining SDE with ImageNet features can even further boost performance. Novosel et al. [42] and Klingner et al. [29] improve the semantic segmentation performance by jointly learning SDE.
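As a sketch of the multi-task setup the snippet mentions, the toy network below shares an encoder between a segmentation head and a depth head and sums their losses. The one-layer backbone, the L1 depth loss against dense targets, and the 0.5 weighting are illustrative assumptions; the cited papers train SDE self-supervised, which is more involved than this supervised stand-in.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared encoder with two heads: semantic segmentation and depth."""
    def __init__(self, num_classes=19):
        super().__init__()
        self.encoder = nn.Conv2d(3, 64, 3, padding=1)     # stand-in backbone
        self.seg_head = nn.Conv2d(64, num_classes, 1)     # per-pixel class logits
        self.depth_head = nn.Conv2d(64, 1, 1)             # per-pixel depth

    def forward(self, x):
        feats = torch.relu(self.encoder(x))
        return self.seg_head(feats), self.depth_head(feats)

net = MultiTaskNet()
x = torch.randn(2, 3, 64, 64)                             # dummy images
seg_gt = torch.randint(0, 19, (2, 64, 64))                # dummy class map
depth_gt = torch.rand(2, 1, 64, 64)                       # dummy depth map

seg_logits, depth_pred = net(x)
# Weighted sum of the two task losses; the 0.5 weight is an assumed hyperparameter.
loss = nn.functional.cross_entropy(seg_logits, seg_gt) \
     + 0.5 * nn.functional.l1_loss(depth_pred, depth_gt)
loss.backward()
```

Because both heads backpropagate through the shared encoder, gradients from the depth task shape the features the segmentation head sees, which is the mechanism behind the boost the snippet reports.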