Peng Qi

齐鹏
(pinyin: qí péng; IPA: /tɕʰǐ pʰə̌ŋ/)
I recently graduated from Stanford! I have joined JD AI as a research scientist to continue working on NLP/ML-related research. If you are interested in exploring research opportunities with us (internship or full-time), don’t hesitate to reach out!
I obtained my Ph.D. in Computer Science at Stanford University, where I was advised by Prof. Chris Manning and was a member of the Natural Language Processing Group.
My research goal is to build explainable machine learning systems that help us solve problems efficiently using textual knowledge. I believe that AI systems should be able to explain their decisions in a human-understandable manner, so as to build trust when they are applied to real-world problems. To this end, I have been working on natural language processing (NLP) techniques that help us answer complex questions from textual knowledge through explainable multi-step reasoning, as well as models that reason pragmatically about what their interlocutors know, for more efficient communication in dialogue.
Outside of NLP research, I am broadly interested in presenting data in a more understandable manner, making technology appear less boring (to students, for example), and processing data with more efficient computation. I have previously worked on speech recognition and computer vision.
When I procrastinate in my research life, I write code for Stanza, a Python natural language processing toolkit that supports a few dozen (human) languages.
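For a taste of what Stanza does, here is a minimal usage sketch (the processor list and example sentence are illustrative, and the English models need to be downloaded first):

```python
# Minimal Stanza usage sketch: annotate a sentence and print each word's
# part of speech and dependency relation. The processor list and example
# text here are illustrative choices, not the only options.
import stanza

stanza.download("en")  # one-time download of the English models
nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")

doc = nlp("Stanza supports a few dozen human languages.")
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos, word.deprel)
```

The same pipeline interface works across the supported languages; you just change the language code when downloading models and constructing the pipeline.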
education & professional experience
- Research Scientist, JD AI
- Ph.D. in Computer Science, Stanford University
selected publications
(*=equal contribution)
- [NAACL] Graph Ensemble Learning over Multiple Dependency Trees for Aspect-level Sentiment Classification. In 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
- [ACL (Demo)] Stanza: A Python Natural Language Processing Toolkit for Many Human Languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, 2020.
- [Findings] Stay Hungry, Stay Focused: Generating Informative and Specific Questions in Information-Seeking Conversations. In Findings of ACL: EMNLP 2020, 2020.