
A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks

Kimin Lee (1), Kibok Lee (2), Honglak Lee (3, 2), Jinwoo Shin (1, 4)

1. Korea Advanced Institute of Science and Technology (KAIST)  2. University of Michigan  3. Google Brain  4. AItrics

Abstract: Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications. However, deep neural networks with the softmax classifier are known to produce highly overconfident posterior distributions even for such abnormal samples. In this paper, we propose a simple yet effective method for detecting any abnormal samples, which is applicable to any pre-trained softmax neural classifier.

[Figure 1: Experimental results under the ResNet with 34 layers. (a) Visualization of final features from a ResNet trained on CIFAR-10 by t-SNE, where the colors of points indicate the classes of the corresponding objects. (c) ROC curve, with FPR measured on out-of-distribution data (TinyImageNet).]
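The abstract above does not spell out the detector, but the full paper's method is known to score a test input by its Mahalanobis distance to class-conditional Gaussians fitted on the classifier's features (class means with a tied covariance). A minimal NumPy sketch of that idea, applied directly to feature vectors rather than a real network's activations (function names and the covariance regularizer `1e-6` are illustrative assumptions, not the paper's code):

```python
import numpy as np

def fit_class_gaussians(features, labels):
    """Fit per-class means and a shared (tied) covariance over feature vectors,
    as in Gaussian discriminant analysis. Returns (means, precision matrix)."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    # Center each sample by its own class mean, then pool for one covariance.
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(features)
    # Small ridge term keeps the matrix invertible (illustrative choice).
    precision = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return means, precision

def mahalanobis_confidence(x, means, precision):
    """Confidence score: negative squared Mahalanobis distance to the
    closest class mean. Low scores flag out-of-distribution inputs."""
    dists = [(x - m) @ precision @ (x - m) for m in means.values()]
    return -min(dists)
```

In practice one would threshold this confidence score: in-distribution inputs land near some class's feature cluster and score high, while abnormal inputs far from every cluster score very low.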
