We propose USTNet, a novel deep learning approach designed for shape-to-shape translation between unpaired domains in an unsupervised manner. The core of our approach lies in disentangled representation learning, which factors the discriminative features of 3D shapes into content and style codes. Given input shapes from multiple domains, USTNet disentangles their representations into style codes, which capture the distinctive traits of each domain, and content codes, which capture domain-invariant traits. By ...
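As a rough illustration of this content/style factorization (a minimal sketch, not the authors' actual USTNet architecture), the separation into a content encoder, a style encoder, and a decoder that fuses the two codes might look as follows; the module names, layer sizes, and the use of a precomputed global shape feature vector are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Maps a shape feature vector to a domain-invariant content code (hypothetical dimensions)."""
    def __init__(self, in_dim=1024, content_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, content_dim),
        )

    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    """Maps a shape feature vector to a domain-specific style code."""
    def __init__(self, in_dim=1024, style_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, style_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs or translates a shape representation from a fused (content, style) pair."""
    def __init__(self, content_dim=256, style_dim=64, out_dim=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(content_dim + style_dim, 512), nn.ReLU(),
            nn.Linear(512, out_dim),
        )

    def forward(self, content, style):
        return self.net(torch.cat([content, style], dim=-1))

# Translation sketch: combine the content code of a source-domain shape
# with the style code of a target-domain shape.
enc_c, enc_s, dec = ContentEncoder(), StyleEncoder(), Decoder()
src_feat = torch.randn(8, 1024)   # placeholder features of source-domain shapes
tgt_feat = torch.randn(8, 1024)   # placeholder features of target-domain shapes
translated = dec(enc_c(src_feat), enc_s(tgt_feat))
```

In this sketch, translation amounts to swapping style codes while keeping content codes fixed, which is the general idea behind content/style disentanglement; the actual encoders, decoder, and training losses used by USTNet are described in the remainder of the paper.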