Motivated by the training of Generative Adversarial Networks (GANs), we study methods for solving minimax problems with additional nonsmooth regularizers. We do so by employing \emph{monotone operator} theory, in particular the \emph{Forward-Backward-Forward (FBF)} method, which avoids the known issue of limit cycling by correcting each update with a second gradient evaluation. Furthermore, we propose a seemingly new sch...
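To illustrate the correction mechanism described above, here is a minimal sketch of the classical FBF update (due to Tseng) on a toy bilinear minimax problem. The toy problem, step size, and function names are illustrative assumptions, not taken from the paper, and the backward (proximal) step handling the nonsmooth regularizer is omitted for simplicity.

```python
import numpy as np

# Toy bilinear minimax problem min_x max_y x*y, whose unique saddle
# point is (0, 0). Its monotone operator is F(x, y) = (y, -x), with
# Lipschitz constant L = 1. Plain gradient descent-ascent cycles on
# this problem; the FBF correction step restores convergence.

def F(z):
    x, y = z
    return np.array([y, -x])

def fbf_step(z, gamma=0.5):
    """One FBF iteration: a forward step followed by a correcting
    second gradient evaluation at the intermediate point."""
    z_half = z - gamma * F(z)                    # forward step
    return z_half - gamma * (F(z_half) - F(z))   # correction step

z = np.array([1.0, 1.0])
for _ in range(200):
    z = fbf_step(z)
print(np.linalg.norm(z))  # distance to the saddle point (0, 0)
```

On this linear example each FBF step contracts the distance to the saddle point by a constant factor for any step size below the inverse Lipschitz constant, whereas the forward step alone would merely rotate the iterates around the origin.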