Pre-trained Language Models (PLMs) have proven to be beneficial for various downstream NLP tasks. Recently, GPT-3, with 175 billion parameters and 570 GB of training data, drew a lot of attention due to its capacity for few-shot (even zero-shot) learning. However, applying GPT-3 to address Chinese NLP tasks is still challenging, as its training corpus is primarily English and its parameters are not publicly available. In this technical report, ...