Transcription of Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
Nils Reimers and Iryna Gurevych
Ubiquitous Knowledge Processing Lab (UKP-TUDA), Department of Computer Science, Technische Universität Darmstadt

Abstract: BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) have set a new state-of-the-art performance on sentence-pair regression tasks like semantic textual similarity (STS). However, this requires that both sentences are fed into the network, which causes a massive computational overhead: finding the most similar pair in a collection of 10,000 sentences requires about 50 million inference computations (~65 hours) with BERT.
With n = 10,000 sentences, pairwise comparison requires n(n−1)/2 = 49,995,000 inference computations; on a modern V100 GPU, this takes about 65 hours. Similarly, finding which of the over 40 million existing Quora questions is the most similar to a new question could be modeled as a pairwise comparison with BERT; however, answering a single query would then require over 50 hours.
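The arithmetic behind these runtime figures can be sketched as follows; the throughput value below is an illustrative assumption back-derived from the paper's "~65 hours" figure, not a number reported in the paper.

```python
def num_pairs(n: int) -> int:
    # Number of unordered sentence pairs among n sentences: n(n-1)/2.
    return n * (n - 1) // 2

pairs = num_pairs(10_000)
print(pairs)  # 49995000, matching the paper's 49,995,000 comparisons

# Rough wall-clock estimate, assuming a cross-encoder throughput of
# ~214 BERT inferences per second on a V100 GPU (an assumed rate,
# implied by pairing 49,995,000 inferences with ~65 hours).
throughput = 214  # inferences per second (illustrative assumption)
hours = pairs / throughput / 3600
print(round(hours, 1))  # ≈ 65 hours
```

The same arithmetic explains the Quora estimate: one new question compared against over 40 million existing questions is over 40 million inferences for a single query.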