The main contributions of [Distributed representations of words and phrases and their compositionality](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) by Mikolov et al. are several extensions to their previously introduced skip-gram (and CBOW) model for word vector representations learned on large text corpora.
For a given word in a training text corpus, the skip-gram model tries to predict the surrounding words. Training the skip-gram model on a text corpus therefore adjusts the vector representations of words so that the probability of the correct surrounding words is maximized. This leads to distributed vector representations that capture a large number of precise syntactic and semantic word relationships.
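Concretely, for a training sequence of words w_1, ..., w_T and a context window of size c, the skip-gram objective defined in the paper is to maximize the average log probability of the context words under a softmax over the vocabulary:

```latex
\frac{1}{T} \sum_{t=1}^{T} \sum_{\substack{-c \le j \le c \\ j \neq 0}} \log p(w_{t+j} \mid w_t),
\qquad
p(w_O \mid w_I) = \frac{\exp\left({v'_{w_O}}^{\top} v_{w_I}\right)}{\sum_{w=1}^{W} \exp\left({v'_w}^{\top} v_{w_I}\right)}
```

Here v_w and v'_w are the input and output vector representations of word w, and W is the vocabulary size. Because the full softmax is impractical for large vocabularies, training uses cheaper approximations such as the hierarchical softmax or negative sampling.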
They propose a method to identify idiomatic phrases whose meaning is not a composition of their individual words (like “Boston Globe”) and to treat them as single tokens, which improves the vocabulary base. Also, negative sampling is introduced as an alternative training method to the hierarchical softmax and analyzed in detail. Further, they show that meaningful linear arithmetic operations can be performed on the trained vector representations (for example, vec("Madrid") - vec("Spain") + vec("France") is closer to vec("Paris") than to any other word vector), which makes precise analogical reasoning possible.
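For reference, negative sampling replaces every log p(w_O | w_I) term of the skip-gram objective with a binary classification loss over k sampled noise words, and phrases are identified by scoring bigrams with a discounted count ratio (both formulas are taken from the paper):

```latex
\log \sigma\left({v'_{w_O}}^{\top} v_{w_I}\right)
+ \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)}\left[ \log \sigma\left(-{v'_{w_i}}^{\top} v_{w_I}\right) \right],
\qquad
\mathrm{score}(w_i, w_j) = \frac{\mathrm{count}(w_i w_j) - \delta}{\mathrm{count}(w_i) \times \mathrm{count}(w_j)}
```

Here sigma is the logistic function, P_n(w) is a noise distribution (the paper found the unigram distribution raised to the power 3/4 to work best), and the discounting coefficient delta prevents too many phrases being formed from very infrequent words. Bigrams scoring above a threshold are merged into single tokens before training.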
The main contribution of [Asynchronous Methods for Deep Reinforcement Learning](https://arxiv.org/pdf/1602.01783v1.pdf) by Mnih et al. is a lightweight framework for deep reinforcement learning agents.
They propose a training procedure which utilizes asynchronous gradient descent updates from multiple agents at once. Instead of training a single agent that interacts with its environment, multiple agents interact with their own instances of the environment simultaneously.
After a certain number of time steps, the accumulated gradient updates from each agent are applied to a global model, e.g. a Deep Q-Network. These updates are asynchronous and lock-free.
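The following is a minimal, runnable sketch of this idea, not the authors' implementation: asynchronous one-step Q-learning with a tabular Q-function on a toy chain environment instead of a deep network, where each update is applied immediately rather than accumulated over several steps. The environment and all names are illustrative, and CPython threads interleave rather than run truly in parallel, so this only demonstrates the lock-free update structure.

```python
import threading

import numpy as np

# Shared global model: a tabular Q-function for a 5-state chain MDP.
N_STATES, N_ACTIONS = 5, 2
Q = np.zeros((N_STATES, N_ACTIONS))

def env_step(state, action):
    """Toy chain environment: action 1 moves right, action 0 moves left.
    Reaching the rightmost state yields reward 1 and ends the episode."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

def worker(seed, n_steps=20000, gamma=0.9, lr=0.1, eps=0.1):
    """One asynchronous actor-learner thread with its own environment state."""
    rng = np.random.default_rng(seed)
    state = 0
    for _ in range(n_steps):
        # Epsilon-greedy action selection using the shared parameters.
        if rng.random() < eps:
            action = int(rng.integers(N_ACTIONS))
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = env_step(state, action)
        target = reward if done else reward + gamma * np.max(Q[next_state])
        # Lock-free ("Hogwild!"-style) update written directly to the shared
        # table; overlapping writes from other threads are simply tolerated.
        Q[state, action] += lr * (target - Q[state, action])
        state = 0 if done else next_state

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(Q)  # Greedy policy should prefer "move right" (action 1) in states 0-3.
```

In the paper, each thread instead accumulates gradients for a deep network over several steps before applying them to the shared parameters, which reduces the chance of threads overwriting each other's updates.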
The effects on training speed and quality are analyzed for various reinforcement learning methods. No replay memory is needed to decorrelate successive game states, since the parallel agents are already exploring different game states at any given time. Also, on-policy algorithms like actor-critic can be applied.
They show that asynchronous updates have a stabilizing effect on policy and value updates. Also, their best method, an asynchronous variant of actor-critic (A3C), surpasses the current state-of-the-art on the Atari domain while training in half the time on a single multi-core CPU instead of a GPU.