This work investigates the evolution of the latent space when deep learning models are trained incrementally in non-stationary environments that exhibit concept drift. We propose a methodology for visualizing the change incurred in the learned representations. We further show that classes not targeted by the drift can be negatively affected, suggesting that observing all classes during training may regularize the latent space.
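The abstract does not specify the visualization methodology itself, so the following is only a minimal sketch of one common way latent-space change across training checkpoints could be visualized. It assumes (hypothetically) that penultimate-layer embeddings are available before and after drift, simulates them here for self-containment, and fixes a single PCA basis on the pre-drift embeddings so that displacement between the two panels reflects actual movement of the representations rather than a change of projection.

```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Simulated latent embeddings at two checkpoints (before/after drift).
# In a real setting these would be extracted from the model's penultimate layer.
n_per_class, dim, n_classes = 200, 64, 3
centers = rng.normal(scale=4.0, size=(n_classes, dim))
before = np.concatenate([c + rng.normal(size=(n_per_class, dim)) for c in centers])

# Drift shifts class 0 only; the other classes are nominally untouched.
shift = np.zeros((n_classes, dim))
shift[0] = rng.normal(scale=3.0, size=dim)
after = np.concatenate([c + s + rng.normal(size=(n_per_class, dim))
                        for c, s in zip(centers, shift)])
labels = np.repeat(np.arange(n_classes), n_per_class)

# Fit the 2-D projection once, on the pre-drift embeddings, so both
# checkpoints share the same basis and are directly comparable.
proj = PCA(n_components=2).fit(before)
b2, a2 = proj.transform(before), proj.transform(after)

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharex=True, sharey=True)
for ax, pts, title in zip(axes, (b2, a2), ("before drift", "after drift")):
    for c in range(n_classes):
        ax.scatter(*pts[labels == c].T, s=5, label=f"class {c}")
    ax.set_title(title)
axes[0].legend()
plt.tight_layout()
plt.show()
```

Plotting non-drifted classes in the same fixed basis is what would make the abstract's second claim observable: if their clusters also move or deform between panels, classes not targeted by the drift have been affected.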