Improving Language Understanding by Generative Pre-Training
cdn.openai.com — Improving Language Understanding by Generative Pre-Training. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever (OpenAI). Abstract: Natural language understanding comprises a wide range of diverse tasks such …
arxiv.org (arXiv:1810.04805v2 [cs.CL], 24 May 2019) — BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding: … pre-training for language representations. Unlike Radford et al. (2018), which uses unidirectional language models for pre-training, BERT uses masked language models to enable pre-trained deep bidirectional representations. This is also in contrast to Peters et al. (2018a), which uses a shallow concatenation of independently …
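The contrast drawn in that snippet is between left-to-right (unidirectional) pre-training and BERT's masked language modeling, where randomly chosen tokens are hidden and predicted from context on both sides. As a rough sketch of the masking step described in the BERT paper (select ~15% of positions as prediction targets; replace 80% of those with [MASK], 10% with a random token, and leave 10% unchanged), assuming a toy whitespace tokenizer and a made-up vocabulary:

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15, mask_token="[MASK]"):
    """BERT-style corruption: pick ~15% of positions as prediction
    targets; of those, 80% become [MASK], 10% become a random token,
    and 10% stay unchanged. Returns (corrupted tokens, targets)."""
    corrupted = list(tokens)
    targets = {}
    for i, tok in enumerate(tokens):
        if random.random() < mask_prob:
            targets[i] = tok  # the model must recover the original token here
            r = random.random()
            if r < 0.8:
                corrupted[i] = mask_token
            elif r < 0.9:
                corrupted[i] = random.choice(vocab)
            # else: keep the original token (10% of targets)
    return corrupted, targets

tokens = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(tokens))
corrupted, targets = mask_tokens(tokens, vocab)
print(corrupted)  # e.g. ['the', '[MASK]', 'brown', ...]
print(targets)    # e.g. {1: 'quick'}
```

The 80/10/10 split is the paper's way of reducing the mismatch between pre-training, where [MASK] appears, and fine-tuning, where it never does.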
Language Models are Unsupervised Multitask Learners
cdn.openai.com — Language Models are Unsupervised Multitask Learners: … to infer and perform many different tasks on examples with this type of format. Language modeling is also able to, in principle, learn the tasks of McCann et al. (2018) without the need for explicit supervision of which symbols are the outputs to be predicted.
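The "format" in that snippet is the central idea of GPT-2: a task can be specified entirely in the conditioning text, so a single language model can, in principle, perform many tasks without explicit supervision of which symbols are outputs. A minimal sketch of that framing follows; complete() is a hypothetical placeholder standing in for any trained causal language model, and the prompt layouts are loosely in the spirit of the paper's examples (e.g. its "TL;DR:" summarization trigger):

```python
def complete(prompt: str) -> str:
    """Hypothetical placeholder: a real system would sample a
    continuation from a trained causal language model here."""
    return "<model continuation>"

# The same model is pointed at different tasks purely by how the
# conditioning text is laid out, with no task-specific output head.
prompts = {
    "translation":   "english: how old are you? french:",
    "qa":            "question: who wrote the play Hamlet? answer:",
    "summarization": "article: ...long document text... TL;DR:",
}

for task, prompt in prompts.items():
    # The continuation *is* the task output; which symbols count as
    # outputs is implied by the format, not supervised explicitly.
    print(f"{task}: {prompt} {complete(prompt)}")
```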