The LAUNCH lab aims to build trustworthy language models that produce factual, accurate, and safe content.
Our lab actively works on evaluating and analyzing the capabilities and limitations of large language models, and on improving them, particularly for reasoning-intensive tasks and real-world setups. Using AI techniques, we also build novel applications for understanding narratives and supporting education.
We can be found on Twitter as @launchnlp.
📋 If you're interested in joining the LAUNCH lab for research projects, please check out this page.
[November 2023] Received grants from LG AI and Cisco to work on "Effective and Fine-grained Feedback for Enhanced Language Model Reasoning and Alignment" and "Multi-Document Reasoning with Large Language Models".
[July 2023] Received a grant from NSF to work on "Argument Graph Supported Multi-Level Approach for Argumentative Writing Assistance".
[June 2023] Check out our new preprint on efficient long document summarization and reasoning with large language models.
[June 2023] Check out our recent work on fast inference-time controlled text generation and word category arcs in narratives.
[June 2023] Shuyang Cao won the Bloomberg Data Science Ph.D. Fellowship for 2023-2024 (CSE news).
[April 2023] ReadingQuizMaker, our collaborative work with Xu Wang's group, won a Best Paper Honorable Mention Award at CHI 2023.
[Feb 2023] Check out our new preprint on multi-hop question answering using large language models (LLMs).