Thursday, 10th October, 2pm - Denis Peskoff: Seminar

Title: Prompting and Its Implications

Abstract:

Prompting has revolutionized the accessibility of language models, which in turn has societal implications. First, I will talk about Credible Without Credit, which evaluates the accuracy of large language models' responses to domain-specific questions. Then, I will talk about GPT Deciphering Fedspeak, which uses prompting to establish that the Federal Open Market Committee, which sets monetary policy for the United States, expresses more disagreement in its meeting transcripts than in its released statements. Next, I will discuss current work on identifying generated content in Wikipedia. For this evaluation, we use GPTZero, a proprietary AI detector, and Binoculars, an open-source alternative, to establish lower bounds on the presence of AI-generated content in recent Wikipedia articles. Both detectors reveal a marked increase in AI-generated content in 2024 articles compared to those from before the widespread adoption of highly performant LLMs, with implications for the future of training data. Last, I will highlight the Prompt Report, a large-scale review of prompting techniques.

Bio:

Denis Peskoff is a postdoc at Princeton University working with Professor Brandon Stewart. He completed his PhD in computer science at the University of Maryland with Professor Jordan Boyd-Graber and earned a bachelor's degree from the Georgetown School of Foreign Service. His research has incorporated domain experts (leading board game players, Federal Reserve Board members, doctors, and scientists) to solve natural language processing challenges.