2017 NIPS
Attention Is All You Need - NIPS
papers.nips.cc
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a …
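The snippet contrasts recurrent, position-by-position computation with the paper's attention mechanism, which processes all positions at once. As a minimal sketch (single head, no masking or projections, all of which are simplifying assumptions), scaled dot-product attention can be written in a few lines of NumPy:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, computed for all
    query positions at once -- no recurrence over symbol positions."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (n_q, n_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)    # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over key positions
    return weights @ V                              # weighted average of the values

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one output vector per query position
```

Because every output depends on all positions in one matrix product, the whole sequence is processed in parallel rather than step by step.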
Lecture 13: Generative Models
cs231n.stanford.edu
Lecture 13, May 18, 2017. PixelRNN and PixelCNN. Improving PixelCNN performance: gated convolutional layers, short-cut connections, discretized logistic loss, multi-scale, training tricks, etc. See van den Oord et al., NIPS 2016; Salimans et al., 2017 (PixelCNN++). Pros: can explicitly compute likelihood p(x); explicit likelihood ...
A Unified Approach to Interpreting Model Predictions - NIPS
papers.nips.cc
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 2. We then show that game theory results guaranteeing a unique solution apply to the entire class of additive feature attribution methods (Section 3) and propose SHAP values as …
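To illustrate the Shapley-value idea behind SHAP (this is a toy enumeration, not the SHAP library's API; `value_fn` and the `effects` model are hypothetical names): for an additive coalition value, exact Shapley attribution recovers each feature's individual effect, which is the uniqueness property the snippet alludes to.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, features):
    """Exact Shapley values by enumerating all coalitions of the other features.
    value_fn(subset) returns the model's value for that feature coalition."""
    n = len(features)
    phi = {}
    for i in features:
        rest = [f for f in features if f != i]
        total = 0.0
        for k in range(n):
            for S in combinations(rest, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # weighted marginal contribution of feature i to coalition S
                total += weight * (value_fn(set(S) | {i}) - value_fn(set(S)))
        phi[i] = total
    return phi

# Toy additive model: the value of a coalition is the sum of per-feature effects,
# so each Shapley value recovers that feature's effect exactly.
effects = {"a": 2.0, "b": -1.0, "c": 0.5}
v = lambda S: sum(effects[f] for f in S)
print(shapley_values(v, list(effects)))
```

Exact enumeration costs O(2^n); the paper's contribution is principled approximation of these values for real models.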
Lecture 14: Reinforcement Learning
cs231n.stanford.edu
Lecture 14, May 23, 2017. Case Study: Playing Atari Games. Objective: complete the game with the highest score. State: raw pixel inputs of the game state. Action: game controls, e.g. Left, Right, Up, Down. Reward: score increase/decrease at each time step. [Mnih et al., NIPS Workshop 2013; Nature 2015]
Neural Discrete Representation Learning
arxiv.org
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. arXiv:1711.00937v2 [cs.LG] 30 May 2018. … the “posterior collapse” issue which has been problematic with many VAE models that have a powerful decoder, often caused by latents being ignored. Additionally, it is the first discrete latent VAE model …
2017 NIPS Poster for web
media.nips.cc
LONG BEACH CA | DEC 4 - 9 | NIPS.CC. NIPS 2017 TUTORIALS - DEC 4TH: Statistical Relational Artificial Intelligence: Logic, Probability and Computation (Luc De Raedt, David Poole, Kristian Kersting, Sriraam Natarajan); Reinforcement Learning with People (Emma Brunskill); A Primer on Optimal Transport (Marco Cuturi, Justin Solomon)
Sentence-BERT: Sentence Embeddings using Siamese BERT …
arxiv.org
… 2017). RoBERTa (Liu et al., 2019) showed that the performance of BERT can be further improved by small adaptations to the pre-training process. We also tested XLNet (Yang et al., 2019), but it led in general to worse results than BERT. A large disadvantage of the BERT network structure is that no independent sentence embed- …
CatBoost: gradient boosting with categorical features support
learningsys.org
CatBoost: gradient boosting with categorical features support. Anna Veronika Dorogush, Vasily Ershov, Andrey Gulin (Yandex). Abstract: In this paper we present CatBoost, a new open-sourced gradient boosting library …