
Learning Implicit Sentiment in Aspect-based Sentiment Analysis with Supervised Contrastive Pre-Training

Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 246-256, November 7-11, 2021. Association for Computational Linguistics.

Zhengyan Li (1), Yicheng Zou (1), Chong Zhang (1), Qi Zhang (1) and Zhongyu Wei (2)
(1) Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, China
(2) School of Data Science, Fudan University

Abstract

Aspect-based sentiment analysis aims to identify the sentiment polarity of a specific aspect in product reviews. We notice that about 30% of reviews do not contain obvious opinion words, but still convey clear human-aware sentiment orientation, which is known as implicit sentiment.




However, recent neural network-based approaches paid little attention to implicit sentiment entailed in the reviews. To overcome this issue, we adopt Supervised Contrastive Pre-training on large-scale sentiment-annotated corpora retrieved from in-domain language resources. By aligning the representation of implicit sentiment expressions to those with the same sentiment label, the pre-training process leads to a better capture of both implicit and explicit sentiment orientation towards aspects in reviews. Experimental results show that our method achieves state-of-the-art performance on SemEval2014 benchmarks, and comprehensive analysis validates its effectiveness on learning implicit sentiment.

Introduction

Aspect-based sentiment analysis (ABSA) is a fine-grained variant of sentiment analysis aiming to identify the sentiment polarity of one or more mentioned aspects in product reviews.

Recent studies tackle the task by either employing attention mechanisms (Wang et al., 2016b; Ma et al., 2017) or incorporating syntax-aware graph structures (He et al., 2018; Tang et al., 2020; Zhang et al., 2019; Sun et al., 2019; Wang et al., 2020). Both methodologies aim to capture the corresponding sentiment expression towards a particular aspect, which is usually an opinion word that explicitly expresses sentiment polarity. For instance, given the review on a restaurant "Great food but the service is dreadful", current models attempt to find "great" for aspect "food" to determine the positive sentiment polarity towards it.

Table 1: Examples of reviews containing implicit sentiment, where aspects are marked in brackets.
  - The [waiter] poured water on my hand and walked away
  - The [bartender] continued to pour champagne from his reserve
  - 10 hours of [battery life] is probably an hour

In the above examples, "pour" expresses opposite emotions in different contexts. In the below example, people determine the sentiment orientation towards "battery life" by referring to common sense. Such implicit sentiment expressions widely exist and hinder the recognition of aspect-based sentiment. Implicit sentiment expressions are sentiment expressions that contain no polarity markers but still convey clear human-aware sentiment polarity in context (Russo et al., 2015). As illustrated in Table 1, the comment "The waiter poured water on my hand and walked away" towards aspect "waiter" contains no opinion words, but can be clearly interpreted to be negative. According to Table 2 (as seen in Section 4), … and … of reviews contain implicit sentiment among the Restaurant and Laptop datasets.

However, most of the previous methods generally pay little attention to modeling implicit sentiment expressions. This motivates us to better solve the task of ABSA by capturing implicit sentiment in an advanced manner. To equip current models with the ability to capture implicit sentiment, inadequate ABSA datasets are the main challenge. With only a few thousand labeled examples, models can hardly recognize comprehensive patterns of sentiment expressions, and are unable to capture enough commonsense knowledge, which is required in sentiment identification. This reveals that external sentiment knowledge should be introduced to solve the task. Thus, we adopt Supervised ContrAstive Pre-Training (SCAPT) on external large-scale sentiment-annotated corpora to learn sentiment knowledge.

Supervised contrastive learning gives an aligned representation of sentiment expressions with the same sentiment label. In embedding space, explicit and implicit sentiment expressions with the same sentiment orientation are pulled together, and those with different sentiment labels are pushed apart. Considering that the sentiment annotations of the retrieved corpora are noisy, supervised contrastive learning enhances the noise immunity of the pre-training process. Also, SCAPT contains review reconstruction and masked aspect prediction objectives. The former requires the representation to encode review context besides sentiment polarity, and the latter strengthens the model's ability to capture the sentiment target.
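The pull-together/push-apart behavior described above can be sketched with a generic supervised contrastive loss over sentence embeddings. This is a minimal NumPy illustration of the idea, not the paper's exact formulation; the function name and the temperature value are assumptions.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Sketch of a supervised contrastive loss: for each anchor, samples
    sharing its sentiment label act as positives, all others as negatives.
    embeddings: (n, d) array; labels: length-n sequence of class ids."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature          # pairwise cosine similarities, scaled
    n = len(labels)
    total, anchors = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue                     # anchors without positives are skipped
        others = [j for j in range(n) if j != i]
        log_denom = np.log(np.exp(sim[i, others]).sum())
        # average negative log-likelihood of the positives for this anchor
        total += -sum(sim[i, j] - log_denom for j in positives) / len(positives)
        anchors += 1
    return total / max(anchors, 1)

# Embeddings clustered by label yield a lower loss than mismatched labels:
emb = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
aligned = supervised_contrastive_loss(emb, [0, 0, 1, 1])
mixed = supervised_contrastive_loss(emb, [0, 1, 0, 1])
```

The loss is minimized when same-label representations coincide in embedding space, which is exactly the alignment the pre-training objective encourages.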

Overall, the pre-training process captures both implicit and explicit sentiment orientation towards aspects in reviews. Experiments on the SemEval-2014 (Pontiki et al., 2014) and MAMS (Jiang et al., 2019) datasets show that the proposed SCAPT outperforms baseline models by a large margin. Detailed results on partitioned datasets demonstrate its effectiveness on both implicit sentiment expressions and explicit sentiment expressions. Moreover, the ablation study verifies that SCAPT efficiently learns implicit sentiment expression from the external noisy corpora. Codes and datasets are publicly available. The contributions of this work include:
  - We reveal that ABSA was only partially tackled by previous studies since they paid little attention to implicit sentiment.

  - We propose Supervised Contrastive Pre-training to learn sentiment knowledge from large-scale sentiment-annotated corpora.
  - Experimental results show that our proposed model achieves state-of-the-art performance and is effective in learning implicit sentiment.

Related Work: Implicit Sentiment

As sentiment that can only be inferred within the context of reviews, implicit sentiment has been addressed by many studies. … et al. (2010) and Russo et al. (2015) proposed similar terminologies (implicit polarity or polar facts), and provided corpora containing implicit sentiment. Deng and Wiebe (2014) detected implicit sentiment via inference over explicit sentiment expressions and so-called events.

Choi and Wiebe (2014) used the +/-EffectWordNet lexicon to identify implicit sentiment, assuming sentiment expressions are often related to states and events which have positive/negative/null effects on …. To investigate the ubiquity of implicit sentiment in ABSA, we split the SemEval-2014 Restaurant and Laptop benchmarks into an Explicit Sentiment Expression (ESE) slice and an Implicit Sentiment Expression (ISE) slice, based on the presence of opinion words. Fan et al. (2019) have annotated opinion words for target aspects on the SemEval benchmarks. We notice that the provided datasets do not keep the original order and have some differences in texts. Thus, we first match the annotations to the original datasets, and then manually pick the reviews including opinion words towards the aspect from the remaining part.
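The partition rule described above can be sketched in a few lines: a review goes to the explicit (ESE) slice if its target aspect has an annotated opinion word, otherwise to the implicit (ISE) slice. The dict field "opinion_words" is an illustrative name, not the datasets' actual schema.

```python
def split_ese_ise(reviews):
    """Partition reviews by whether the target aspect has annotated opinion words."""
    ese = [r for r in reviews if r.get("opinion_words")]      # explicit slice
    ise = [r for r in reviews if not r.get("opinion_words")]  # implicit slice
    return ese, ise

# Toy examples mirroring the reviews discussed in this section:
samples = [
    {"text": "Great food but the service is dreadful", "aspect": "food",
     "opinion_words": ["great"]},
    {"text": "The waiter poured water on my hand and walked away",
     "aspect": "waiter", "opinion_words": []},
]
ese, ise = split_ese_ise(samples)
```

A review with "great" annotated for "food" lands in the ESE slice, while the opinion-word-free "waiter" review lands in the ISE slice.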

As results shown in Table 2 (as seen in Section 4), … and … of reviews are divided into the ISE part among Restaurant and Laptop, revealing that implicit sentiment exists widely in ABSA and is worthy of being studied.

Methodology

In this section, we introduce the pre-training and fine-tuning scheme of our models. In pre-training, we introduce Supervised ContrAstive Pre-Training (SCAPT) for ABSA, which learns the polarity of sentiment expressions by leveraging a retrieved review corpus. In fine-tuning, aspect-aware fine-tuning is adopted to enhance the ability of models on aspect-based sentiment analysis.

Supervised Contrastive Pre-training

Three objectives are included in SCAPT: supervised contrastive learning, masked aspect prediction, and review reconstruction.
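How the three objectives combine in one pre-training step can be sketched as below. This is a hedged illustration only: `encoder`, the head names, and the equal weighting via `alpha`/`beta` are assumptions, not the paper's exact architecture or loss weights.

```python
def scapt_pretraining_step(batch, encoder, heads, alpha=1.0, beta=1.0):
    """One SCAPT-style pre-training step combining the three objectives
    named above: supervised contrastive learning, masked aspect
    prediction, and review reconstruction."""
    h = encoder(batch["reviews"])                            # review representations
    l_scl = heads["contrastive"](h, batch["sentiment_labels"])
    l_map = heads["masked_aspect"](h, batch["masked_aspects"])
    l_rr = heads["reconstruction"](h, batch["reviews"])
    return l_scl + alpha * l_map + beta * l_rr               # combined scalar loss

# Stub components to show the data flow (each head returns a scalar loss):
encoder = lambda reviews: reviews
heads = {
    "contrastive": lambda h, y: 1.0,
    "masked_aspect": lambda h, y: 2.0,
    "reconstruction": lambda h, x: 3.0,
}
batch = {"reviews": [], "sentiment_labels": [], "masked_aspects": []}
total = scapt_pretraining_step(batch, encoder, heads)  # 6.0 with these stubs
```

In a real setup the heads would be trainable modules and the returned loss would be backpropagated through the shared encoder.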

