About me

Email: qwang16 [at] wm [dot] edu

Hello!

I am Qingyun Wang, an Assistant Professor in the Department of Data Science at William & Mary.

I received my Ph.D. degree from the Siebel School of Computing and Data Science at the University of Illinois at Urbana-Champaign in 2025. Previously, I graduated summa cum laude from Rensselaer Polytechnic Institute with a dual B.S. degree in Computer Science and Mathematics in 2019. From 2017 to 2025, I was a member of the BLENDER Lab and was fortunate to have Prof. Heng Ji as my advisor. I am also grateful for the mentorship from Prof. Jiawei Han, Prof. Lifu Huang, and Prof. Tom Hope.

I am among the first researchers to develop a virtual scientific research assistant (i.e., PaperRobot [ACL 2019]) for literature-based discovery by extracting and synthesizing insights from papers. My research interest lies in Automated Literature Understanding and Scientific Discovery. My long-term vision is to develop AI for Scientists (AI4Scientist) tools that effectively accelerate and democratize the entire research lifecycle for scientists: from knowledge acquisition [(NAACL ‘21 Best Demo🏆)I,II,III] and hypothesis generation [IV], through multimedia procedure planning for experiment design [V] and experiment execution [VI], to paper writing [VII, VIII] and evaluating the paper draft [IX].

Research Interests

  1. Scientific Multimodal Foundation Models with Critical Thinking: Build a new multimodal scientific LLM that understands formulas, tables, figures, and charts; design models that can dynamically extract and integrate new multimodal knowledge elements without additional training
  2. Few-Shot Scientific Knowledge Acquisition: Investigate methods for extracting knowledge from scientific corpora with limited annotation
  3. Planning and Reasoning in the Scientific Domain: Utilize both structured and unstructured knowledge, as well as logic rules among knowledge elements, to produce trustworthy and explainable results
  4. Scientific Research Agents with Physical World Interactions: Train a new human-in-the-loop reinforcement learning framework with human, experimental, and literature feedback that can leverage small datasets in closed-loop discovery platforms; develop a human-in-the-loop self-driving laboratory that can complete the scientific research lifecycle through interactions with the physical world, such as a robotic laboratory

Prospective students

I am constantly looking for highly motivated PhD students (as fully funded RAs/TAs) and interns to join my lab! If you are interested in working with me, please fill out [this form] and [apply for admission] to William & Mary. After completing the form, you are also welcome to reach out via email (qwang16 [at] wm [dot] edu). I will read all submitted forms and emails, but I apologize for not being able to respond to each of them. Prospective Students English, Prospective Students Chinese

I’m happy to collaborate and answer questions about my research.

Recent News

  • Apr 6, 2026: Two papers accepted to ACL 2026! Congrats to all the students and collaborators.

  • Feb 5, 2026: We received a Full Grant from the William & Mary Applied Research & Innovation Initiative. Thanks for their support!

  • Nov 7, 2025: We received Google Cloud credits for our research! Thanks to the Google Cloud Research Credits Program.

  • Aug 20, 2025: Three papers accepted to EMNLP 2025! Congrats to all the students and collaborators.

  • Jul 4, 2025: We will organize Scaling Environments for Agents (SEA) at NeurIPS 2025 in San Diego in Dec 2025!

  • Jun 6, 2025: We will present a tutorial, "Towards a Human-AI Collaborative Medical Research Lifecycle: A Pilot Study," at AMIA 2025 in Atlanta, GA, in Nov 2025!

  • May 15, 2025: One paper accepted to ACL 2025! Congrats to all the students and collaborators.

  • Apr 11, 2025: We will organize VISTA: Visionary Innovation in Standards and Technology of GenAI at ICDM 2025 in Washington, DC, in Nov 2025!

  • Jan 12, 2025: One paper accepted to NAACL 2025! Congrats to all the students and collaborators.
