A sequence-to-sequence attention reading comprehension model was implemented to address the question answering task defined by the Stanford Question Answering Dataset (SQuAD). The basic structure was a bidirectional LSTM (BiLSTM) encoder with an attention mechanism, followed by a BiLSTM decoder. Several training adjustments, such as dropout, learning rate decay, and gradient clipping, were applied. Finally, the model ach...
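As a concrete illustration of this pipeline, the sketch below assembles a BiLSTM encoder, a simple context-to-question attention layer, a BiLSTM decoder with start/end span heads, and a training loop that uses dropout, learning rate decay, and gradient clipping. This is a minimal sketch in PyTorch, not the paper's implementation: the class name `AttentiveReader`, all hyperparameters, and the specific attention and scheduling choices (dot-product attention, exponential decay, max-norm clipping) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveReader(nn.Module):
    """Hypothetical BiLSTM encoder + attention + BiLSTM decoder for SQuAD spans."""

    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, dropout=0.2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.dropout = nn.Dropout(dropout)  # dropout regularization
        self.ctx_enc = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.q_enc = nn.LSTM(embed_dim, hidden_dim,
                             batch_first=True, bidirectional=True)
        # Decoder reads [context encoding; attention-blended question] features.
        self.decoder = nn.LSTM(4 * hidden_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.start_head = nn.Linear(2 * hidden_dim, 1)  # scores answer start
        self.end_head = nn.Linear(2 * hidden_dim, 1)    # scores answer end

    def forward(self, context_ids, question_ids):
        c = self.dropout(self.embed(context_ids))         # (B, Tc, E)
        q = self.dropout(self.embed(question_ids))        # (B, Tq, E)
        c_enc, _ = self.ctx_enc(c)                        # (B, Tc, 2H)
        q_enc, _ = self.q_enc(q)                          # (B, Tq, 2H)
        # Dot-product attention: each context token attends over question tokens.
        scores = torch.bmm(c_enc, q_enc.transpose(1, 2))  # (B, Tc, Tq)
        blended = torch.bmm(F.softmax(scores, dim=-1), q_enc)
        dec_out, _ = self.decoder(
            self.dropout(torch.cat([c_enc, blended], dim=-1)))
        return (self.start_head(dec_out).squeeze(-1),     # start logits (B, Tc)
                self.end_head(dec_out).squeeze(-1))       # end logits   (B, Tc)

def train(model, train_loader, num_epochs=10):
    """One possible training loop with LR decay and gradient clipping."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Learning rate decay: shrink the LR by a constant factor each epoch.
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.9)
    for _ in range(num_epochs):
        for context_ids, question_ids, starts, ends in train_loader:
            start_logits, end_logits = model(context_ids, question_ids)
            # Span prediction as two classifications over context positions.
            loss = (F.cross_entropy(start_logits, starts) +
                    F.cross_entropy(end_logits, ends))
            opt.zero_grad()
            loss.backward()
            # Gradient clipping guards against exploding LSTM gradients.
            nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
            opt.step()
        sched.step()
```

Clipping by global norm rather than per element preserves the gradient direction, which tends to matter for recurrent models; the 5.0 threshold here is a common but arbitrary choice.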