Most current deep learning models ignore the linguistic characteristics of Chinese and global document information when processing such tasks. To address this problem, we construct a BERT-BiLSTM-Attention-CRF model. The model embeds a BERT pre-trained language model that adopts the Whole Word Masking strategy and adds a document-level attention mechanism. Experimental results show that our method performs well on the MSRA corpus, reaching an F1 score of 95.00%.
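
For illustration, the following is a minimal PyTorch sketch of such a BERT-BiLSTM-Attention-CRF architecture. The checkpoint name `hfl/chinese-bert-wwm`, the `pytorch-crf` package, the multi-head attention formulation, and all layer sizes are assumptions made for this sketch, not the authors' released implementation.

```python
# Minimal sketch of a BERT-BiLSTM-Attention-CRF tagger. The attention layer
# lets each token attend over the whole sequence, approximating the
# document-level attention described above; sizes are illustrative.
import torch
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF  # pip install pytorch-crf


class BertBiLSTMAttnCRF(nn.Module):
    def __init__(self, num_tags, hidden_size=256):
        super().__init__()
        # BERT pre-trained with the Whole Word Masking strategy (Chinese);
        # the exact checkpoint is an assumption.
        self.bert = BertModel.from_pretrained("hfl/chinese-bert-wwm")
        self.bilstm = nn.LSTM(
            input_size=self.bert.config.hidden_size,
            hidden_size=hidden_size,
            batch_first=True,
            bidirectional=True,
        )
        # Attention over all positions supplies global context to each token.
        self.attn = nn.MultiheadAttention(
            embed_dim=2 * hidden_size, num_heads=4, batch_first=True
        )
        self.emission = nn.Linear(2 * hidden_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        # Contextual subword embeddings from BERT.
        x = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        x, _ = self.bilstm(x)  # bidirectional sequence features
        pad = attention_mask == 0
        x, _ = self.attn(x, x, x, key_padding_mask=pad)  # global context
        emissions = self.emission(x)  # per-token tag scores
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood under the CRF.
            return -self.crf(emissions, tags, mask=mask)
        # Inference: Viterbi-decoded tag sequences.
        return self.crf.decode(emissions, mask=mask)
```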