Peng Qi
齐鹏 (pinyin: Qí Péng; IPA: /tɕʰǐ pʰə̌ŋ/)
Research, Learn, Code, and Play.

Brief Bio

I am currently a Ph.D. student at the Computer Science Department of Stanford University. I work in the NLP group, and I am advised by Prof. Chris Manning.

I am interested in building machine learning models that understand natural languages much as humans do. Outside of NLP research, I am broadly interested in presenting data in more understandable ways, making technology appear less boring (to students, for example), and processing data with more efficient computation. I have also previously worked on speech recognition and computer vision.

[CV (slightly outdated)]    [Publications]


Education & Professional Experience

2015.9 - Present    Ph.D. Student & Research Assistant, Computer Science Department, Stanford University
2016.4 - 2017.3    Master of Science, Department of Statistics, Stanford University
2013.9 - 2015.6    Master of Science & Research Assistant, Computer Science Department, Stanford University
2012.7 - 2013.6    Research Assistant, State Key Laboratory of Intelligent Technology & Systems, Department of Computer Science and Technology, Tsinghua University
2008.8 - 2012.7    Bachelor of Engineering, School of Software, Tsinghua University (Excellent Graduate)

Selected Publications

(* = equal contribution; see all my publications)

  • conf
    Zhilin Yang*, Peng Qi*, Saizheng Zhang*, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018. [PDF]   [BibTeX]   [Code]   [Homepage]  
  • conf
    Yuhao Zhang*, Peng Qi*, and Christopher D. Manning. Graph Convolution over Pruned Dependency Trees Improves Relation Extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018. [PDF]   [BibTeX]   [Code]  
  • conf
    Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), 2018. [PDF]   [BibTeX]   [Code]  
  • conf
    Peng Qi and Christopher D. Manning. Arc-swift: A Novel Transition System for Dependency Parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), 2017. [PDF]   [BibTeX]   [Code]  
  • sysdesc
    Timothy Dozat, Peng Qi, and Christopher D. Manning. Stanford’s Graph-based Neural Dependency Parser at the CoNLL 2017 Shared Task. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, 2017. 1st place. [PDF]   [BibTeX]   [Code]  
  • journal
    Andrew L. Maas, Peng Qi, Ziang Xie, Awni Y. Hannun, Christopher T. Lengerich, Dan Jurafsky, and Andrew Y. Ng. Building DNN Acoustic Models for Large Vocabulary Speech Recognition. Computer Speech & Language, 2016. [PDF]   [BibTeX]  
  • journal
    Peng Qi and Xiaolin Hu. Learning Nonlinear Regularities in Natural Images by Modeling the Outer Product of Image Intensities. Neural Computation, 2013. [PDF]   [BibTeX]   [Code]  
  • conf
    Xiaolin Hu, Peng Qi, and Bo Zhang. Hierarchical K-Means Algorithm for Modeling Visual Area V2 Neurons. In Neural Information Processing, pp. 373–381, 2012. Best Paper Award. [PDF]   [BibTeX]  

Honors and Awards

  • 2012 Outstanding Graduate of Tsinghua University
  • 2011 National Scholarship (top 3% of students)
  • 2010 Citibank Scholarship (for overall excellence)
  • 2009 Ge-Ru Zheng’s Scholarship (for study excellence)
  • 2008-2011 Freshman Scholarship (ranked 3rd in Guizhou Province in the National College Entrance Exam)


Last Updated: Jul. 20, 2018