Peng Qi  齐鹏
(pinyin: qí péng; IPA: /tɕʰǐ pʰə̌ŋ/)
Research, Learn, Code, and Play.

Brief Bio

Photo credit: Xue Chen

I am currently a Ph.D. student in the Computer Science Department at Stanford University, where I am a member of the Natural Language Processing Group, advised by Prof. Chris Manning.

My research goal is to build explainable machine learning systems to help us solve problems efficiently using textual knowledge. I believe that AI systems should be able to explain their computational decisions in a human-understandable manner, so as to build trust in their application to real-world problems. To this end, I have been working on natural language processing (NLP) techniques that help us answer complex questions from textual knowledge through explainable multi-step reasoning, as well as models that reason pragmatically about the knowledge of their interlocutors for efficient communication in dialogues.

Outside of NLP research, I am broadly interested in presenting data in a more understandable manner, making technology appear less boring (to students, for example), and processing data with more efficient computation. I have also previously worked on speech recognition and computer vision.

When I procrastinate in my research life, I write code for Stanza, a Python natural language processing toolkit that supports a few dozen (human) languages.
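For the curious, here is a minimal sketch of what a typical Stanza pipeline looks like (the example sentence and the printed fields are illustrative choices, not prescribed):

    # Requires `pip install stanza`; the model download needs network access.
    import stanza

    stanza.download("en")        # one-time download of English models
    nlp = stanza.Pipeline("en")  # tokenization, POS, lemma, dependency parsing
    doc = nlp("Stanza supports a few dozen human languages.")

    # Every word carries its part of speech, lemma, and dependency relation.
    for sentence in doc.sentences:
        for word in sentence.words:
            print(word.text, word.upos, word.lemma, word.deprel)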

[CV]    [Publications]

Contact

Education & Professional Experience

2015.9 - Present: Ph.D. Student & Research Assistant, Computer Science Department, Stanford University
2016.4 - 2017.3: Master of Science, Department of Statistics, Stanford University
2013.9 - 2015.6: Master of Science & Research Assistant, Computer Science Department, Stanford University
2012.7 - 2013.6: Research Assistant, State Key Laboratory of Intelligent Technology & Systems, Department of Computer Science and Technology, Tsinghua University
2008.8 - 2012.7: Bachelor of Engineering, School of Software, Tsinghua University (Excellent Graduate)

Selected Publications

(* = equal contribution; see all my publications)

  • preprint
    Peng Qi, Yuhao Zhang, and Christopher D. Manning. Stay Hungry, Stay Focused: Generating Informative and Specific Questions in Information-Seeking Conversations. arXiv preprint, 2020. [PDF]     
  • preprint
    Devendra Singh Sachan, Yuhao Zhang, Peng Qi, and William Hamilton. Do Syntax Trees Help Pre-trained Transformers Extract Information?. arXiv preprint, 2020. [PDF]     
  • preprint
    Ashwin Paranjape*, Abigail See*, Kathleen Kenealy, Haojun Li, Amelia Hardy, Peng Qi, Kaushik Ram Sadagopan, Nguyet Minh Phu, Dilara Soylu, and Christopher D. Manning. Neural Generation Meets Real People: Towards Emotionally Engaging Mixed-Initiative Conversations. arXiv preprint, 2020. [PDF]     
  • sysdesc
    Peng Qi*, Yuhao Zhang*, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. Stanza: A Python Natural Language Processing Toolkit for Many Human Languages. In Association for Computational Linguistics (ACL), System Demonstrations, 2020. [PDF] [Homepage]
  • conf
    Peng Qi, Xiaowen Lin*, Leo Mehr*, Zijian Wang*, and Christopher D. Manning. Answering Complex Open-Domain Questions Through Iterative Query Generation. In 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019. [PDF]      [Code]   [Poster]  
  • conf
    Zhilin Yang*, Peng Qi*, Saizheng Zhang*, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018. [PDF] [Code] [Homepage] [Slides]
  • conf
    Yuhao Zhang*, Peng Qi*, and Christopher D. Manning. Graph Convolution over Pruned Dependency Trees Improves Relation Extraction. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018. [PDF]      [Code]   [Poster]  
  • sysdesc
    Peng Qi*, Timothy Dozat*, Yuhao Zhang*, and Christopher D. Manning. Universal Dependency Parsing from Scratch. In CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, 2018. [PDF]      [Code]   [Homepage]   [Poster]  
  • conf
    Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context. In 56th Annual Meeting of the Association for Computational Linguistics (ACL), 2018. [PDF]      [Code]  
  • conf
    Peng Qi and Christopher D. Manning. Arc-swift: A Novel Transition System for Dependency Parsing. In 55th Annual Meeting of the Association for Computational Linguistics (ACL), 2017. [PDF]      [Code]   [Slides]  
  • sysdesc
    Timothy Dozat, Peng Qi, and Christopher D. Manning. Stanford’s Graph-based Neural Dependency Parser at the CoNLL 2017 Shared Task. In CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, 2017. 1st place. [PDF]      [Code]  
  • journal
    Andrew L. Maas, Peng Qi, Ziang Xie, Awni Y. Hannun, Christopher T. Lengerich, Dan Jurafsky, and Andrew Y. Ng. Building DNN Acoustic Models for Large Vocabulary Speech Recognition. Computer Speech & Language, 2016. [PDF]     

Teaching

Last Updated: Mar. 26, 2020