In this paper, we propose and analyze zeroth-order stochastic approximation algorithms for nonconvex and convex optimization, with a focus on addressing constrained optimization, high-dimensional settings, and saddle-point avoidance. To handle constraints, we first propose generalizations of the conditional gradient algorithm that achieve rates similar to the standard setting while using only zeroth-order information. To facilitate zeroth-order optimization in high dimensions, we explore the advan...
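To make the zeroth-order conditional gradient idea concrete, the following is a minimal sketch, not the paper's exact algorithm: it combines the classical Gaussian-smoothing gradient estimate, averaging $(f(x+\mu u)-f(x))\,u/\mu$ over $u \sim \mathcal{N}(0, I)$, with a standard Frank-Wolfe update. The names `zo_gradient_estimate`, `zo_conditional_gradient`, `lmo`, and the parameters `mu`, `num_samples`, and the step size $\gamma_t = 2/(t+2)$ are illustrative assumptions, not the batch sizes or rates analyzed in the paper.

```python
import numpy as np

def zo_gradient_estimate(f, x, mu=1e-4, num_samples=10, rng=None):
    """Gaussian-smoothing zeroth-order gradient estimate:
    averages (f(x + mu*u) - f(x)) / mu * u over u ~ N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    fx = f(x)
    grad = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        grad += (f(x + mu * u) - fx) / mu * u
    return grad / num_samples

def zo_conditional_gradient(f, lmo, x0, num_iters=200, mu=1e-4, num_samples=10):
    """Conditional-gradient (Frank-Wolfe) loop driven by zeroth-order
    gradient estimates; `lmo` is a linear minimization oracle over the
    feasible set, returning argmin_{v in C} <g, v>."""
    x = x0.copy()
    for t in range(num_iters):
        g = zo_gradient_estimate(f, x, mu=mu, num_samples=num_samples)
        v = lmo(g)
        gamma = 2.0 / (t + 2)          # classical Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * v
    return x

# Illustrative usage: minimize a quadratic over the unit l1-ball,
# whose linear minimization oracle picks a signed vertex.
if __name__ == "__main__":
    d = 20
    A = np.diag(np.linspace(1.0, 5.0, d))
    b = np.ones(d)
    f = lambda x: 0.5 * x @ A @ x - b @ x

    def l1_ball_lmo(g):
        v = np.zeros_like(g)
        i = np.argmax(np.abs(g))
        v[i] = -np.sign(g[i])          # vertex of the l1-ball
        return v

    x_out = zo_conditional_gradient(f, l1_ball_lmo, np.zeros(d))
    print("objective after zeroth-order Frank-Wolfe:", f(x_out))
```

The sketch only queries function values of `f`, never its gradient, which is the defining feature of the zeroth-order setting; the projection-free Frank-Wolfe update keeps iterates feasible using only the linear minimization oracle.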