In this paper, we combine two apparently disparate techniques for word representation: (a) word embeddings, which learn vectors from co-occurrence statistics on large corpora, and (b) word associations, which capture the mental representation of language. The existing literature has mainly considered these representations separately, showing that they respectively perform better at (a) similarity and (b) ...