We propose a new method to analyze Transformer language models. In self-attention modules, attention weights are calculated from the query vectors and key vectors, and the output is obtained by taking a weighted sum of the value vectors. While existing analyses have focused on the attention weights, this work focuses on the query and key matrices: we obtain joint matrices by multiplying the two, and show that their traces are correlated with word co-occurrences.
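To make the quantities above concrete, the following is a minimal NumPy sketch of a single attention head; the dimensions, the random stand-in weights, and the names W_Q, W_K, W_V are illustrative assumptions rather than the actual model parameters studied here.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_head, seq_len = 8, 4, 5  # toy sizes chosen for illustration

# Learned projection matrices of one attention head (random stand-ins here)
W_Q = rng.standard_normal((d_model, d_head))
W_K = rng.standard_normal((d_model, d_head))
W_V = rng.standard_normal((d_model, d_head))

# Token representations entering the self-attention module
X = rng.standard_normal((seq_len, d_model))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Attention weights are computed from the query and key vectors ...
Q, K, V = X @ W_Q, X @ W_K, X @ W_V
attn_weights = softmax(Q @ K.T / np.sqrt(d_head))  # (seq_len, seq_len)

# ... and the output is a weighted sum of the value vectors.
output = attn_weights @ V                          # (seq_len, d_head)

# Joint query-key matrix obtained by multiplying the two projection
# matrices; its trace is the scalar statistic discussed in the text.
W_QK = W_Q @ W_K.T                                 # (d_model, d_model)
trace_qk = np.trace(W_QK)
print(trace_qk)
```

In a trained model the same computation would be applied per head and per layer, yielding one trace value for each joint matrix.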