Learning Structured Representation for Text Classification via Reinforcement Learning

Transcription of Learning Structured Representation for Text Classification ...

Learning Structured Representation for Text Classification via Reinforcement Learning
Tianyang Zhang, Minlie Huang, Li Zhao
Tsinghua National Laboratory for Information Science and Technology, Dept. of Computer Science and Technology, Tsinghua University, Beijing 100084, PR China; Microsoft Research Asia. Corresponding author: Minlie Huang.

Abstract
Representation learning is a fundamental problem in natural language processing. This paper studies how to learn a structured representation for text classification. Unlike most existing representation models that either use no structure or rely on pre-specified structures, we propose a reinforcement learning (RL) method to learn sentence representations by discovering optimized structures automatically. We demonstrate two attempts to build structured representations: Information Distilled LSTM (ID-LSTM) and Hierarchically Structured LSTM (HS-LSTM). ID-LSTM selects only important, task-relevant words, and HS-LSTM discovers phrase structures in a sentence. Structure discovery in the two representation models is formulated as a sequential decision problem: the current decision of structure discovery affects following decisions, which can be addressed by policy gradient RL. Results show that our method can learn task-friendly representations by identifying important words or task-relevant structures without explicit structure annotations, and thus yields competitive performance.

Introduction
Representation learning is a fundamental problem in AI, and is particularly important for natural language processing (NLP) (Bengio, Courville, and Vincent 2013; Le and Mikolov 2014). As one of the most common tasks of NLP, text classification depends heavily on the learned representation, and is widely applied in sentiment analysis (Socher et al. 2013), question classification (Kim 2014), and language inference (Bowman et al. 2015).

Mainstream representation models for text classification can be roughly classified into four types. Bag-of-words representation models ignore the order of words; these include the deep average network (Iyyer et al. 2015; Joulin et al. 2017) and autoencoders (Liu et al. 2015). Sequence representation models such as convolutional neural networks (Kim 2014; Kalchbrenner, Grefenstette, and Blunsom 2014; Lei, Barzilay, and Jaakkola 2015) and recurrent neural networks (Hochreiter and Schmidhuber 1997; Chung et al. 2014) consider word order but do not use any structure. Structured representation models such as tree-structured LSTMs (Zhu, Sobihani, and Guo 2015; Tai, Socher, and Manning 2015) and recursive autoencoders (Socher et al. 2013; 2011; Qian et al. 2015) use pre-specified parsing trees to build structured representations. Attention-based methods (Yang et al. 2016; Zhou, Wan, and Xiao 2016; Lin et al. 2017) use attention mechanisms to build representations by scoring input words or sentences differentially.

However, in existing structured representation models, the structures are either provided as input or predicted using supervision from explicit treebank annotations. There have been few studies on learning representations with automatically optimized structures. Yogatama et al. (2017) proposed to compose binary tree structures for sentence representation with only supervision from downstream tasks, but such structures are very complex and overly deep, leading to unsatisfactory classification performance. In (Chung, Ahn, and Bengio 2017), a hierarchical representation model was proposed to capture latent structure in sequences with latent variables; structure is discovered in a latent, implicit manner.

In this paper, we propose a reinforcement learning (RL) method to build structured sentence representations by identifying task-relevant structures without explicit structure annotations. Structure discovery is formulated as a sequential decision problem: the current decision (or action) of structure discovery affects following decisions, which can be naturally addressed by the policy gradient method (Sutton et al. 2000). A delayed reward is used to guide the learning of the policy for structure discovery. The reward is computed from the text classifier's prediction based on the structured representation, and the representation is available only when all sequential decisions are completed.

In our RL method, we design two structured representation models: Information Distilled LSTM (ID-LSTM), which selects important, task-relevant words to build the sentence representation, and Hierarchically Structured LSTM (HS-LSTM), which discovers phrase structures and builds the sentence representation with a two-level LSTM. The representation models are integrated seamlessly with a policy network and a classification network. The policy network defines a policy for structure discovery, and the classification network makes predictions on top of the structured sentence representation and facilitates reward computation for the policy network.

To summarize, our contributions are as follows:
- We propose a reinforcement learning method which discovers task-relevant structures to build structured sentence representations for text classification problems. We propose two structured representation models: Information Distilled LSTM (ID-LSTM) and Hierarchically Structured LSTM (HS-LSTM).
- Even without explicit structure annotations, our method can identify task-relevant structures effectively. Moreover, the performance is better than or comparable to strong baselines that use pre-specified parsing structures.

Figure 1: Illustration of the overall process. The policy network (PNet) samples an action at each state. The structured representation model offers the state representation to PNet and outputs the final sentence representation to the classification network (CNet) when all actions are sampled. CNet performs text classification and provides a reward to PNet.

Methodology

Overview
The goal of this paper is to learn structured representations for text classification by discovering important, task-relevant structures. We argue that text classification can be improved with an optimized, structured representation.

The overall process is shown in Figure 1. The model consists of three components: the Policy Network (PNet), the structured representation models, and the Classification Network (CNet). PNet adopts a stochastic policy and samples an action at each state. It keeps sampling until the end of a sentence and thus produces an action sequence for the sentence. The structured representation models then translate the actions into a structured representation. We design two representation models: Information Distilled LSTM (ID-LSTM) and Hierarchically Structured LSTM (HS-LSTM). CNet makes the classification based on the structured representation and offers reward computation to PNet. Since the reward can be computed once the final representation is available (it is completely determined by the action sequence), the process can be naturally addressed by the policy gradient method (Sutton et al. 2000).

Obviously, the three components are interleaved: the state representation of PNet is derived from the representation models, CNet relies on the final structured representation obtained from the representation models to make its prediction, and PNet obtains rewards from CNet's prediction to guide the learning of the policy.
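To make this interaction concrete, the following is a minimal sketch of one such episode; the helpers `step`, `sample_action`, `build_representation`, and `classify` are illustrative stand-ins for the representation model, PNet, and CNet, not the paper's actual implementation.

```python
# A minimal sketch (not the authors' code) of one episode of the
# PNet / representation-model / CNet loop described above.
def run_episode(words, init_state, step, sample_action, build_representation, classify):
    state, actions = init_state, []
    for word in words:
        state = step(state, word)              # state offered by the representation model
        actions.append(sample_action(state))   # PNet samples one action per state
    # The action sequence completely determines the structured representation.
    sentence_repr = build_representation(words, actions)
    p_y_given_x = classify(sentence_repr)      # CNet prediction P(y|X)
    return actions, sentence_repr, p_y_given_x # the delayed reward is computed from P(y|X)
```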

Policy Network (PNet)
The policy network adopts a stochastic policy π(a_t | s_t; Θ) and uses a delayed reward to guide the policy learning. It samples an action with this probability at each state, whose representation is obtained from the representation models. In order to obtain the delayed reward, which is based on CNet's prediction, we perform action sampling for the entire sentence. Once all the actions are decided, the representation models obtain a structured representation of the sentence, which is then used by CNet to compute P(y|X). The reward computed with P(y|X) is used for policy learning.

We briefly introduce the state, action and policy, reward, and objective function as follows:

State: The state encodes the current input and previous contexts, and has different definitions in the two representation models. The detailed definition of the state s_t is given in the following sections.

Action and Policy: We adopt binary actions in the two settings, but with different meanings. In ID-LSTM, the action space is {Retain, Delete}, where a word can be deleted from or retained in the final sentence representation. In HS-LSTM, the action space is {Inside, End}, indicating that a word is inside or at the end of a phrase (to be precise, "phrase" here means a substructure or segment). Clearly, each action is a direct indicator of structure selection in both representation models.

We adopt a stochastic policy. Let a_t denote the action at state t; the policy is defined as follows:

    π(a_t | s_t; Θ) = σ(W s_t + b),   (1)

where π(a_t | s_t; Θ) denotes the probability of choosing a_t, σ denotes the sigmoid function, and Θ = {W, b} denotes the parameters of PNet.
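As a small illustration of Eq. 1 (not the authors' code), the sketch below computes the probability of the positive binary action for a given state vector; the dimension d and the zero-initialized parameters are placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d = 8                 # assumed dimension of the state vector s_t
W = np.zeros(d)       # PNet weight vector (learned in practice)
b = 0.0               # PNet bias

def policy(s_t):
    """Eq. (1): probability of the 'positive' binary action given state s_t.

    In ID-LSTM the two actions are {Retain, Delete}; in HS-LSTM they are {Inside, End}.
    """
    return sigmoid(W @ s_t + b)
```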

During training, the action is sampled according to the probability in Eq. 1. During testing, the action with the maximal probability (i.e., a_t* = argmax_a π(a | s_t; Θ)) is chosen in order to obtain a superior prediction.
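A minimal sketch of this train/test distinction, reusing the hypothetical `policy` probability from the sketch above:

```python
import numpy as np

rng = np.random.default_rng(0)

def choose_action(p_positive, training):
    """Sample the binary action during training; take the argmax of Eq. (1) at test time."""
    if training:
        return int(rng.random() < p_positive)   # stochastic sampling from pi
    return int(p_positive >= 0.5)               # a_t* = argmax_a pi(a | s_t; Theta)
```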

Reward: Once all the actions have been sampled by the policy network, the structured representation of a sentence is determined by our representation models, and the representation is passed to CNet to obtain P(y|X), where y is the class label. The reward is calculated from the predicted distribution P(y|X), and it also has a factor considering the tendency of structure selection, which will be detailed later. This is a typical delayed reward, since we cannot obtain it until the final representation is built.

Objective Function: We optimize the parameters of PNet using the REINFORCE algorithm (Williams 1992) and policy gradient methods (Sutton et al. 2000), aiming to maximize the expected reward as shown below:

    J(Θ) = E_{(s_t, a_t) ~ P_Θ(s_t, a_t)} [r(s_1 a_1 ... s_L a_L)]
         = Σ P_Θ(s_1 a_1 ... s_L a_L) R_L

To make the classification, the last hidden state of ID-LSTM is taken as input to the classification network (CNet):

    P(y|X) = softmax(W_s h_L + b_s),   (5)

where W_s ∈ R^{d×K} and b_s ∈ R^K are parameters of CNet, d is the dimension of the hidden state, y ∈ {c_1, c_2, ..., c_K} is the class label, and K is the number of categories.
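To make the delayed-reward update and Eq. 5 concrete, here is a small sketch (not the paper's implementation) of the CNet prediction and a single REINFORCE step for the logistic policy of Eq. 1; the learning rate, the shapes, and the already-computed return R_L are assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cnet_predict(h_L, W_s, b_s):
    """Eq. (5): P(y|X) = softmax(W_s h_L + b_s); shapes chosen here so that
    W_s maps the d-dimensional hidden state h_L to K class scores."""
    return softmax(W_s @ h_L + b_s)

def reinforce_update(W, b, states, actions, probs, R_L, lr=0.01):
    """One REINFORCE step for the logistic policy of Eq. (1).

    For a Bernoulli policy with p_t = sigmoid(W s_t + b), the gradient of
    log pi(a_t | s_t) w.r.t. W is (a_t - p_t) * s_t, so a sampled estimate of
    the gradient of J(Theta) is R_L * sum_t (a_t - p_t) * s_t (ascent direction).
    """
    grad_W = np.zeros_like(W)
    grad_b = 0.0
    for s_t, a_t, p_t in zip(states, actions, probs):
        grad_W += R_L * (a_t - p_t) * s_t
        grad_b += R_L * (a_t - p_t)
    return W + lr * grad_W, b + lr * grad_b
```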

