This paper studies a federated edge learning system, in which an edge server coordinates a set of devices to train a shared machine learning (ML) model based on their locally distributed data samples. During the training, we exploit a joint communication and computation design to improve the system energy efficiency, in which both the resource allocation and the global ML-parameter aggregation are jointly optimized. In partic...
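To make the coordination pattern concrete, the sketch below illustrates one round of a generic federated-averaging update, in which each device takes a few local gradient steps on its own data and the server aggregates the resulting model parameters. This is a minimal illustration under assumed names (`local_update`, `server_aggregate`) and a plain least-squares local objective; it is not the joint communication-computation optimization developed in this paper.

```python
# Minimal sketch of one federated-learning round (illustrative only; the
# names and the plain gradient step are assumptions, not the paper's design).
import numpy as np

def local_update(w_global, X, y, lr=0.1, steps=5):
    """Run a few gradient steps of linear least-squares on one device's data."""
    w = w_global.copy()
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # local gradient on this device
        w -= lr * grad
    return w

def server_aggregate(local_models, sample_counts):
    """Weighted average of device models, weighted by local sample counts."""
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()
    return sum(wk * wm for wk, wm in zip(weights, local_models))

# Toy example: 3 devices with locally distributed data, d-dimensional model.
rng = np.random.default_rng(0)
d = 4
w_global = np.zeros(d)
devices = [(rng.normal(size=(20, d)), rng.normal(size=20)) for _ in range(3)]

for round_idx in range(10):  # global aggregation rounds at the server
    local_models = [local_update(w_global, X, y) for X, y in devices]
    w_global = server_aggregate(local_models, [len(y) for _, y in devices])
```

In this toy loop, the server never sees the devices' raw data; only model parameters are exchanged, which is the property that the resource-allocation and aggregation design in the paper builds on.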