Learning Transferable Architectures for Scalable Image Recognition

Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le (Google Brain). 11 Apr 2018.

Abstract

Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the NASNet search space) which enables transferability. In our experiments, we search for the best convolutional layer (or "cell") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters, to design a convolutional architecture, which we name a NASNet architecture. We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves a state-of-the-art error rate. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS, a reduction of 28% in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet achieves 74% top-1 accuracy, which is better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the features learned by NASNet, used with the Faster-RCNN framework, surpass state-of-the-art, achieving an mAP on the COCO dataset that is 4% better than the previous best.

1. Introduction

Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of [32] on using convolutional architectures [17, 34] for ImageNet [11] classification, successive advancements through architecture engineering have achieved impressive results [53, 59, 20, 60, 58, 68].

In this paper, we study a new paradigm of designing convolutional architectures and describe a scalable method to optimize convolutional architectures on a dataset of interest, for instance the ImageNet classification dataset. Our approach is inspired by the recently proposed Neural Architecture Search (NAS) framework [71], which uses a reinforcement learning search method to optimize architecture configurations. Applying NAS, or any other search method, directly to a large dataset such as ImageNet is, however, computationally expensive. We therefore propose to search for a good architecture on a proxy dataset, for example the smaller CIFAR-10 dataset, and then transfer the learned architecture to ImageNet. We achieve this transferability by designing a search space (which we call the NASNet search space) so that the complexity of the architecture is independent of the depth of the network and the size of the input images. More concretely, all convolutional networks in our search space are composed of convolutional layers (or "cells") with identical structure but different weights. Searching for the best convolutional architecture is therefore reduced to searching for the best cell structure, which has two main benefits: it is much faster than searching for an entire network architecture, and the cell itself is more likely to generalize to other problems. In our experiments, this approach significantly accelerates the search for the best architectures using CIFAR-10 by a factor of 7 and learns architectures that successfully transfer to ImageNet.

Our main result is that the best architecture found on CIFAR-10, called NASNet, achieves state-of-the-art accuracy when transferred to ImageNet classification without much modification. On ImageNet, NASNet achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5. This result amounts to a 1.2% improvement in top-1 accuracy over the best human-invented architectures while having 9 billion fewer FLOPS. On CIFAR-10 itself, NASNet achieves a state-of-the-art error rate.

Additionally, by simply varying the number of convolutional cells and the number of filters in the convolutional cells, we can create different versions of NASNets with different computational demands. Thanks to this property of the cells, we can generate a family of models that achieve accuracies superior to all human-invented models at equivalent or smaller computational budgets [60, 29]. Notably, the smallest version of NASNet achieves 74% top-1 accuracy on ImageNet, which is better than previously engineered architectures targeted towards mobile and embedded vision tasks [24, 70].
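Because each network is just a stack of one searched cell, trading accuracy for computation reduces to choosing two scalars: how many times the cell is repeated and how many filters it uses. The sketch below illustrates only this bookkeeping; `build_nasnet` and its toy FLOPS estimate are placeholders we invented for illustration, not the paper's cell or its cost accounting.

```python
# Minimal sketch: a "family" of NASNet-like models obtained by varying the
# number of stacked cells and the number of filters. The cost model is a
# made-up stand-in for computational demand, not the paper's FLOPS count.

def build_nasnet(num_cell_repeats: int, num_filters: int) -> dict:
    """Describe a network as repeats of one shared cell structure.
    Each repeat would receive its own weights when instantiated."""
    return {
        "cells": [f"cell_{i}(filters={num_filters})" for i in range(num_cell_repeats)],
        # Toy proxy for compute: grows with depth and quadratically with width.
        "approx_flops": num_cell_repeats * num_filters ** 2,
    }

# One searched cell yields a whole family of models at different budgets.
family = [build_nasnet(n, f) for n in (4, 6) for f in (32, 64)]
for model in family:
    print(len(model["cells"]), "cells,", model["approx_flops"], "toy FLOPS")
```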

Finally, we show that the image features learned by NASNets are generically useful and transfer to other computer vision problems. In our experiments, the features learned by NASNets from ImageNet classification can be combined with the Faster-RCNN framework [47] to achieve state-of-the-art results on the COCO object detection task for both the largest as well as mobile-optimized models. Our largest NASNet model achieves an mAP that is 4% better than the previous state-of-the-art.

2. Related Work

The proposed method is related to previous work in hyperparameter optimization [44, 4, 5, 54, 55, 6, 40], especially recent approaches in designing architectures such as Neural Fabrics [48], DiffRNN [41], MetaQNN [3] and DeepArchitect [43]. A more flexible class of methods for designing architectures is evolutionary algorithms [65, 16, 57, 30, 46, 42, 67], yet they have not had as much success at large scale. Xie and Yuille [67] also transferred learned architectures from CIFAR-10 to ImageNet, but the top-1 accuracy of these models is notably below previous state-of-the-art (Table 2).

The concept of having one neural network interact with a second neural network to aid the learning process, or learning to learn or meta-learning [23, 49], has attracted much attention in recent years [1, 62, 14, 19, 35, 45, 15]. Most of these approaches have not been scaled to large problems like ImageNet. An exception is the recent work focused on learning an optimizer for ImageNet classification that achieved notable improvements [64].

The design of our search space took much inspiration from LSTMs [22] and the Neural Architecture Search Cell [71]. The modular structure of the convolutional cell is also related to previous methods on ImageNet such as VGG ...

3. Method

Our work makes use of search methods to find good convolutional architectures on a dataset of interest. The main search method we use in this work is the Neural Architecture Search (NAS) framework proposed by [71]. In NAS, a controller recurrent neural network (RNN) samples child networks with different architectures. The child networks are trained to convergence to obtain some accuracy on a held-out validation set. The resulting accuracies are used to update the controller so that the controller will generate better architectures over time. The controller weights are updated with policy gradient (see Figure 1).

[Figure 1. Overview of Neural Architecture Search [71]. The controller RNN predicts architecture A from a search space with probability p. A child network with architecture A is trained to convergence, achieving validation accuracy R. The gradients of p are scaled by R to update the RNN controller.]
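To make the loop in Figure 1 concrete, here is a minimal sketch of the policy-gradient update NAS performs. Everything is simplified for illustration: the real controller is an RNN over a far larger search space, `train_child_network` stands in for training a sampled architecture to convergence, and the toy reward and op list are our own inventions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy search space: one op per slot (the real NASNet space is far larger).
OPS = ["3x3_conv", "5x5_conv", "3x3_maxpool", "identity"]
NUM_SLOTS = 4

# Stand-in controller policy: independent softmax logits per slot.
logits = np.zeros((NUM_SLOTS, len(OPS)))

def softmax(x):
    p = np.exp(x - x.max())
    return p / p.sum()

def sample_architecture(logits):
    """Sample an architecture A; the log-prob is recomputed at update time."""
    return [rng.choice(len(OPS), p=softmax(row)) for row in logits]

def train_child_network(arch):
    """Stand-in for training the child to convergence and measuring
    held-out validation accuracy R (here: an arbitrary toy score)."""
    return 0.5 + 0.1 * arch.count(0) + 0.05 * rng.random()

lr, baseline = 0.1, 0.0
for step in range(200):
    arch = sample_architecture(logits)
    R = train_child_network(arch)                # accuracy R of the child
    baseline = 0.9 * baseline + 0.1 * R          # moving-average baseline
    advantage = R - baseline
    for slot, choice in enumerate(arch):
        p = softmax(logits[slot])
        grad = -p                                # d log p(choice) / d logits
        grad[choice] += 1.0                      # (one-hot minus probabilities)
        logits[slot] += lr * advantage * grad    # scale gradient of p by R

print("preferred ops:", [OPS[i] for i in np.argmax(logits, axis=1)])
```

With the toy reward favoring op 0, the controller's logits drift toward "3x3_conv" in every slot, which is exactly the reinforcement signal the figure describes.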

The main contribution of this work is the design of a novel search space, such that the best architecture found on the CIFAR-10 dataset would scale to larger, higher-resolution image datasets across a range of computational settings. We name this search space the NASNet search space, as it gives rise to NASNet, the best architecture found in our experiments. One inspiration for the NASNet search space is the realization that architecture engineering with CNNs often identifies repeated motifs consisting of combinations of convolutional filter banks, nonlinearities and a prudent selection of connections to achieve state-of-the-art results (such as the repeated modules present in the Inception and ResNet models [59, 20, 60, 58]). These observations suggest that it may be possible for the controller RNN to predict a generic convolutional cell expressed in terms of these motifs. This cell can then be stacked in series to handle inputs of arbitrary spatial dimensions and filter depth.

In our approach, the overall architectures of the convolutional nets are manually predetermined. They are composed of convolutional cells repeated many times, where each convolutional cell has the same architecture but different weights.
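This "identical structure, independent weights" convention can be made concrete: define the cell's wiring once, then allocate a fresh parameter set per repetition. The sketch below uses our own simplified conventions (a cell as a list of named ops with per-instance weight arrays), not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# The cell *structure* is defined once: an ordered list of (op, kernel shape).
CELL_STRUCTURE = [("sep_conv_3x3", (3, 3)), ("sep_conv_5x5", (5, 5)), ("identity", None)]

def instantiate_cell(num_filters: int) -> dict:
    """Allocate a fresh weight set for one repetition of the shared structure."""
    return {
        op: (rng.standard_normal((num_filters, *shape)) if shape else None)
        for op, shape in CELL_STRUCTURE
    }

# A network = the same cell repeated, each copy with independent weights.
network = [instantiate_cell(num_filters=32) for _ in range(6)]

print("same structure:", network[0].keys() == network[1].keys())          # True
print("shared weights:", network[0]["sep_conv_3x3"] is network[1]["sep_conv_3x3"])  # False
```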

