PDF4PRO ⚡AMP

A modern search engine that looks for books and documents around the web


Dense Contrastive Learning for Self-Supervised Visual Pre-Training

Dense Contrastive Learning for Self-Supervised Visual Pre-Training
Xinlong Wang1, Rufeng Zhang2, Chunhua Shen1*, Tao Kong3, Lei Li3
1 The University of Adelaide, Australia; 2 Tongji University, China; 3 ByteDance AI Lab

Abstract: To date, most existing self-supervised learning methods are designed and optimized for image classification. These pre-trained models can be sub-optimal for dense prediction tasks due to the discrepancy between image-level prediction and pixel-level prediction. To fill this gap, we aim to design an effective, dense self-supervised learning method that directly works at the level of pixels (or local features) by taking into account the correspondence between local features. We present dense contrastive learning (DenseCL), which implements self-supervised learning by optimizing a pairwise contrastive (dis)similarity loss at the pixel level between two views of input images. Compared to the baseline method MoCo-v2, our method introduces negligible computation overhead (only <1% slower), but demonstrates consistently superior performance when transferring to downstream dense prediction tasks including object detection, semantic segmentation and instance segmentation; and outperforms the state-of-the-art methods by a large margin.
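The pixel-level loss described in the abstract can be made concrete with a short sketch. The code below is a minimal illustration, not the authors' implementation: it assumes dense projections of shape (B, C, H, W) from the two views, derives the cross-view correspondence by taking, for each pixel of one view, its most similar pixel in the other view under cosine similarity, and scores that positive pair against a queue of negative pixel features with an InfoNCE loss. The function name dense_contrastive_loss, the tensor shapes, and the temperature value are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def dense_contrastive_loss(q, k, queue, tau=0.2):
        """Illustrative DenseCL-style loss (a sketch, not the paper's code).

        q, k:   dense projections of two views, shape (B, C, H, W)
        queue:  negative pixel features from past batches, shape (C, K)
        """
        B, C, H, W = q.shape
        q = F.normalize(q.flatten(2), dim=1)           # (B, C, H*W)
        k = F.normalize(k.flatten(2), dim=1)           # (B, C, H*W)
        queue = F.normalize(queue, dim=0)              # (C, K)

        # Correspondence across views: for each pixel of view 1, take the
        # most similar pixel of view 2 (cosine similarity) as its positive.
        sim = torch.einsum('bci,bcj->bij', q, k)       # (B, H*W, H*W)
        match = sim.argmax(dim=2)                      # (B, H*W)
        k_pos = torch.gather(k, 2, match.unsqueeze(1).expand(-1, C, -1))

        # Per-pixel InfoNCE: one positive (index 0) vs. K queued negatives.
        l_pos = (q * k_pos).sum(dim=1, keepdim=True)   # (B, 1, H*W)
        l_neg = torch.einsum('bci,ck->bki', q, queue)  # (B, K, H*W)
        logits = torch.cat([l_pos, l_neg], dim=1) / tau
        labels = torch.zeros(B, H * W, dtype=torch.long, device=q.device)
        return F.cross_entropy(logits, labels)

In the paper, the correspondence is extracted from the backbone feature maps rather than the projections, and the dense loss is combined with the usual global (image-level) contrastive loss; both details are omitted here for brevity.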

For self-supervised representation learning, the breakthrough approaches are MoCo-v1/v2 [17, 3] and SimCLR [2], which both employ contrastive unsupervised learning to learn good representations from unlabeled data. We briefly introduce the state-of-the-art self-supervised learning framework by abstracting a common paradigm.
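As a concrete reference point for that common paradigm, the sketch below shows the SimCLR-style variant: two augmented views of each image are encoded and projected to embeddings, and an InfoNCE loss pulls the two views of the same image together while pushing apart every other sample in the batch. The batch size, embedding dimension, and temperature are illustrative assumptions; MoCo additionally maintains a momentum encoder and a queue of negatives, which this sketch omits.

    import torch
    import torch.nn.functional as F

    def info_nce(z1, z2, tau=0.1):
        """Batch-wise InfoNCE: each embedding's positive is the other view
        of the same image; every other embedding in the batch is a negative."""
        B = z1.shape[0]
        z = F.normalize(torch.cat([z1, z2]), dim=1)     # (2B, D)
        sim = z @ z.t() / tau                           # (2B, 2B)
        mask = torch.eye(2 * B, dtype=torch.bool, device=z.device)
        sim = sim.masked_fill(mask, float('-inf'))      # drop self-similarity
        # Row i's positive sits at i+B (first half) or i-B (second half).
        targets = torch.cat([torch.arange(B, 2 * B), torch.arange(B)]).to(z.device)
        return F.cross_entropy(sim, targets)

    # Example usage, with random embeddings standing in for projected views:
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    loss = info_nce(z1, z2)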

Tags:

  Learning, Unsupervised, Unsupervised learning


