A better approach: Bringing supercomputing to psychology
Illinois' Michel Regenwetter uses XSEDE resources to bring mathematical rigor to the psychology of decision making
There's a large bag of candy on the table in Michel Regenwetter's lab, but Regenwetter -- a professor of psychology and political science at Illinois -- insists he's not secretly studying the number of fun-size Snickers bars visitors consume. Instead, he's taken a step back from human behavior to look at the studies that attempt to explain it. With support from XSEDE, Regenwetter is bringing a new level of mathematical and statistical rigor to studies of decision making and behavior -- and showing other scientists why they should do the same.
In typical studies of decision making, subjects make choices and psychologists compare their responses to the predictions of a particular theory. The problem, Regenwetter says, is that changing the parameters of a theory even slightly can spawn thousands of variations. Consequently, testing just one of those theories, or a handful, is unlikely to yield the best possible model -- and choosing the "best" of a few theories often means massaging the data. "Psychology is a very young discipline," Regenwetter notes. "If someone told you gravity causes all fruits to fall to the ground except lemons, you'd know it was bad science… but in psychology, theories are too flexible; whatever doesn't fit is often viewed as an exception." Moreover, Regenwetter adds, a good deal of research jumps too haphazardly from theory to prediction to validation, often skewing results along the way. "When you aggregate, you lose information," he says, suggesting that psychologists are too quick to accept "mostly right most of the time" as sufficient validation of a model's predictive ability.
Supercomputing -- and innovations in math and statistics -- allow Regenwetter and his team, including psychology Ph.D. candidate Ying Guo, to take a far more comprehensive approach to testing behavioral theories. "The original idea was, okay, there are a thousand theories -- why not run all of them?" Regenwetter says. Luckily, he was able to hire Shiau Hong Lim, then a Ph.D. candidate in computer science at Illinois, to write code that acts as a one-stop shop for analysis. Enter XSEDE: testing numerous theories generates a "giant collection of predictions," and the use of Bayes factors, a statistical method for mathematically comparing multiple theories, makes significant computing power essential. Regenwetter's work would be "computationally hopeless" if not for the parallelization and processing power of the Bridges supercomputer at the Pittsburgh Supercomputing Center.
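To give a flavor of the idea, here is a minimal sketch of Bayes-factor model comparison -- not Regenwetter's actual code, and using invented toy "theories" of binary choice. Model A predicts a fixed choice probability; Model B is agnostic, placing a uniform prior over that probability. The Bayes factor is the ratio of the two models' marginal likelihoods for the observed data.

```python
# Toy Bayes-factor comparison of two hypothetical theories of binary choice.
# Model A: subject picks option X with a fixed probability p (e.g., 0.7).
# Model B: choice probability unknown, uniform prior over [0, 1].
from math import comb

def marginal_likelihood_fixed(k, n, p):
    """P(data | Model A): Binomial likelihood of k choices of X in n
    trials at a fixed choice probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def marginal_likelihood_uniform(k, n):
    """P(data | Model B): Binomial likelihood integrated over a uniform
    prior on p; the Beta-Binomial integral reduces to 1/(n+1)."""
    return 1.0 / (n + 1)

# Hypothetical data: a subject chose option X on 70 of 100 trials.
k, n = 70, 100
bf = marginal_likelihood_fixed(k, n, 0.7) / marginal_likelihood_uniform(k, n)
print(f"Bayes factor, Model A vs. Model B: {bf:.2f}")
```

A Bayes factor above 1 favors the fixed-probability theory; below 1, the agnostic one. The Bayes factor automatically penalizes Model B for spreading its predictions thinly over all possible data -- the built-in guard against the overly flexible theories Regenwetter criticizes. Scaling this ratio from two toy models to thousands of real theoretical variants is what makes a machine like Bridges necessary.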
The team also relies on the support of Roberto Gomez, an XSEDE Extended Collaborative Support Services (ECSS) consultant at PSC who has worked with them for several years. Gomez helped the team "get their bearings" in a supercomputing environment (a support service he recommends for all new XSEDE scientists), assisted Guo in her work revising scripts to optimize job distribution, and stepped in "whenever anything got stuck." When problems arise, Regenwetter says, Gomez can be counted on to offer a fix or guide the team to a solution.
XSEDE allocations and NSF funding enable Regenwetter and his team to test masses of theoretical variations, but Regenwetter also advocates a methodical approach to analysis that he says is rare in psychological studies: "Rather than jump from theory to prediction, researchers need to model and prove what they have in mind mathematically, measure [what can be measured], and then test" their theory. Regenwetter's team often reanalyzes data from published papers using their own, more rigorous methodology. The reanalysis can lend additional support to a paper's conclusions -- or expose weaknesses in the original methodology. Not every researcher reacts well to having their claims contradicted, but Regenwetter says many are "grateful to learn" a better approach to analysis.
Regenwetter is also beginning the process of computing and classifying the variation and covariation his work has unearthed. A new partnership with Illinois neuroscientist Aron Barbey will launch a search for commonalities between their respective explorations of the qualitatively different ways elderly people and younger people make decisions. Their conclusions may affect policy: if different demographics rely on distinct methods of decision making, policies may need to be altered to reflect that.
While Regenwetter focuses on theories of decision making and behavior, he notes that the rigor he advocates for -- and the fallacies he points out -- can apply to studies in any field, as long as the general setup pertains. Replicating Regenwetter's methods at the scale he employs does require access to the processing power supercomputers provide, which may strike some researchers as a roadblock. But Regenwetter notes that "getting a lot of information out of your data is worth the effort to extract it." And the work his team has already done will help accelerate the "scientific throughput" of psychology by eliminating unsuccessful theories from consideration, giving scientists "a better handle on the big picture."
"The balancing act between invariance [signal] and heterogeneity [variation] can be a challenge in psychology," Regenwetter says. He offers an analogy to demonstrate the difficulty of explaining behavior based on observations: a hundred pianists could play the same Bach piece at once, but make so many errors that the result is unrecognizable. Another hundred pianists could all play Bach perfectly -- but if they each play a different piece, again, the result is cacophonous. When it comes to data on decision making, it's up to psychologists to try to extract the commonalities from the clamor without erasing the differences. Regenwetter believes his work will contribute to psychologists' ability to do just that.