Using Generative Pre-Trained Transformers (GPTs) for Textual Analysis

Speaker

Michael Prinzing

Natural language is a vital source of evidence for the social sciences. Yet quantifying large volumes of text rigorously and precisely is extremely difficult. Manual scoring by human raters has long been considered the "gold standard," but it is slow and laborious compared to computer-automated approaches. In the past, those automated approaches typically required large amounts of training data before they could be applied to new tasks. However, generative pre-trained transformers (GPTs) promise to change that. Preliminary evidence suggests that GPTs can perform nuanced tasks (e.g., inferring complex personality traits) with excellent internal reliability, strong agreement with trained human raters, and strong correlations with self-report and behavioral measures. In this workshop, Michael will present some recent work, discuss available tools and best practices, and give a demonstration.

Michael Prinzing is a Postdoctoral Research Associate in the Department of Psychology and Neuroscience at Baylor University. He is also a Consulting Research Scientist at the Parr Center for Ethics at the University of North Carolina at Chapel Hill, where he received his Ph.D. in philosophy in 2022.
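
To give a flavor of the kind of workflow the workshop covers, the sketch below prompts a GPT model (via the OpenAI Python SDK) to rate a passage of text on a psychological construct. The model name, prompt wording, construct, and 1-7 rating scale are illustrative assumptions, not the specific tools or procedures Michael will present.

    # Minimal sketch: asking a GPT model to score text on a psychological
    # construct. The model, prompt, construct, and 1-7 scale are assumptions
    # for illustration only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def rate_text(text: str, construct: str = "optimism") -> int:
        """Ask the model for a single 1-7 rating of `construct` in `text`."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",   # assumed model; any chat-capable model works
            temperature=0,         # deterministic output aids internal reliability
            messages=[
                {"role": "system",
                 "content": (f"You are a trained coder. Rate the level of "
                             f"{construct} expressed in the text on a scale "
                             f"from 1 (none) to 7 (very high). Reply with the "
                             f"number only.")},
                {"role": "user", "content": text},
            ],
        )
        return int(response.choices[0].message.content.strip())

    if __name__ == "__main__":
        print(rate_text("Things keep getting better; I can't wait for next year."))

In practice, ratings like these would be validated against human coders and self-report or behavioral measures, as in the preliminary evidence described above.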

Categories

Panel/Seminar/Colloquium, Research, Workshop/Short Course