Local, spatial state-action features can be used to effectively train linear policies from self-play in a wide variety of board games. Such policies can play games directly, or bias tree search agents. However, the resulting feature sets grow large, with a significant amount of overlap and redundancy between features. This is a problem for two reasons. Firstly, large feature sets are computationally expensive, which reduces playing s...