Language Models are Unsupervised Multitask Learners

Alec Radford*1, Jeffrey Wu*1, Rewon Child1, David Luan1, Dario Amodei**1, Ilya Sutskever**1

Abstract

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset - matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks.
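The conditioning setup described in the abstract can be illustrated with a short script. This is a minimal sketch, assuming the publicly released "gpt2" checkpoint and the Hugging Face transformers library rather than the paper's own WebText-trained models or evaluation code; the document, question, and prompt format are illustrative placeholders.

# Minimal sketch of zero-shot reading comprehension: condition a language
# model on a document plus a question and read its continuation as the answer.
# Assumes the public "gpt2" checkpoint via Hugging Face transformers, not the
# paper's original evaluation pipeline; the prompt format is an assumption.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

document = (
    "The Amazon rainforest covers most of the Amazon basin of South America. "
    "The majority of the forest is contained within Brazil."
)
question = "Which country contains most of the Amazon rainforest?"

# Condition on the document and the question; "A:" cues the model to answer.
prompt = f"{document}\nQ: {question}\nA:"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding of a short continuation is taken as the zero-shot answer.
output_ids = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
answer = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer.strip())

No fine-tuning step appears anywhere in the sketch: the model is used exactly as trained, which is the sense in which the paper's task transfer is "zero-shot".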

sentiment analysis (Radford et al., 2017). In this paper, we connect these two lines of work and continue the trend of more general methods of transfer. We demonstrate language models can perform downstream tasks in a zero-shot setting – without any parameter or architecture modification. We demonstrate this approach shows …
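To make the "without any parameter or architecture modification" point concrete: the same frozen weights can be steered toward different downstream tasks by changing only the conditioning text. The sketch below is an assumption-laden illustration using the public "gpt2" checkpoint via Hugging Face transformers; the "TL;DR:" summarization cue and the example-pair translation pattern are in the spirit of the paper's prompts but are chosen here for brevity.

# Sketch: inducing different tasks from one frozen language model by changing
# only the prompt. Uses the public "gpt2" checkpoint via Hugging Face
# transformers; the prompt patterns and inputs are illustrative assumptions.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

article = "A long news article about a storm hitting the coast ..."
prompts = {
    # Summarization induced by appending a "TL;DR:" cue to the article.
    "summarize": f"{article}\nTL;DR:",
    # Translation induced by example pairs followed by a new source word.
    "translate": "good morning = bonjour\nthank you = merci\ngoodbye =",
}

for task, prompt in prompts.items():
    inputs = tokenizer(prompt, return_tensors="pt")
    # No parameters or architecture change between tasks; only the
    # conditioning text differs.
    output_ids = model.generate(
        **inputs,
        max_new_tokens=30,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )
    continuation = tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    print(task, "->", continuation.strip())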
