This paper proposes the minimization of α-divergences for approximate inference in the context of deep Gaussian processes (DGPs). The proposed method can be considered as a generalization of variational inference (VI) and expectation propagation (EP), two methods previously used for approximate inference in DGPs. Both VI and EP are based on the Kullback-Leibler divergence. The proposed method is a scalable version of power expectation propagation, which introduces an extra parameter α sp...
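For reference, one common parameterization of the α-divergence family (Amari's; the paper's exact convention may differ) makes the generalization of VI and EP explicit:

\[
  D_\alpha\!\left[p \,\|\, q\right]
  = \frac{1}{\alpha(1-\alpha)}
    \left( 1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx \right),
\]

which recovers KL(q ∥ p), the divergence minimized by VI, in the limit α → 0, and KL(p ∥ q), the divergence locally minimized by EP, in the limit α → 1. Intermediate values of α interpolate between these two objectives.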