Focusing on Methods


Author's Note: This post was written for a science writing class, where I was tasked with the project of converting insights gained from my senior thesis into an article written for the popular press. As a result, I gloss over some scientific concepts. Additionally, I am not an astronomer. Please take my comments on the usefulness of astronomy with a grain of salt.


This past September, I submitted an abstract to the International Meeting of the Psychonomic Society. The Psychonomic Society is the largest international organization dedicated to cognitive psychology, the study of memory, learning, and reasoning. Fortunately, my abstract was accepted, and this past May I was able to travel to Amsterdam, where I shared the results of my thesis with fellow cognitive psychologists. While traveling home from Amsterdam, I was contemplating another lab’s recent attempt to reproduce the results of the famous marshmallow study. In this study, children are presented with a marshmallow and are told that, if they can wait a few minutes without eating it, they will be given a second one. Researchers in the original study found that a child’s ability to forgo the instant gratification of the marshmallow was highly predictive of future success. In other words, children who waited for the second marshmallow were more likely to graduate from high school. Obviously, this is a correlational finding; eating a marshmallow doesn’t actually hurt your chances at success. Unfortunately for the millions of undergrads who were taught the results of the fabled marshmallow test, a recent attempt at replicating it found that the correlation between self-denial and future success is not nearly as strong as we previously thought. Other factors, such as social class, are more predictive of future achievement than the marshmallow test.
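To make the confound point concrete, here is a toy simulation – entirely made-up numbers, not the study’s data – in which a third variable (family resources, standing in for social class) drives both delay ability and later achievement. The raw correlation looks impressive, but it largely vanishes once the confound is controlled for:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000

# Hypothetical model: family resources drive BOTH delay ability and achievement
resources = rng.normal(0, 1, n)
delay = 0.5 * resources + rng.normal(0, 1, n)        # "waited for the marshmallow"
achievement = 0.8 * resources + rng.normal(0, 1, n)  # "graduated from high school"

print(np.corrcoef(delay, achievement)[0, 1])         # ~0.28: looks meaningful

# Regress resources out of both variables; the delay-achievement link fades
delay_r = delay - np.polyval(np.polyfit(resources, delay, 1), resources)
achieve_r = achievement - np.polyval(np.polyfit(resources, achievement, 1), resources)
print(np.corrcoef(delay_r, achieve_r)[0, 1])         # ~0: the confound did the work
```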

The marshmallow study has been a favorite of introductory psychology classes because it tells a story. The story might go something like this: self-control is easily measurable, constant throughout life, and highly predictive of future outcomes, such as graduating from high school. As far as psychological findings of the twentieth century go, this isn’t just a good story. It’s a great story. As I sat on the plane, I thought through all of the stories I had heard from researchers at the Amsterdam conference. As with the marshmallow test, I am sure that some of the research presented at Psychonomics was sensationalized. Although I think narrative structure is a powerful tool for engaging outsiders in the research process, we must be wary of the unwarranted hype and inflated expectations that these stories can foment.

Throughout the course of my thesis – from the writing of my research proposal to the end of the Psychonomic Society conference – I have attempted to introduce best practices for reproducibility and open science into my research. As a result, I have thought a lot about how we, as psychological researchers, can make our science more rigorous. I believe many of psychology’s recent problems concerning the replication crisis stem from the way we worship the results of our research and the stories those results tell. This emphasis on results is present everywhere from the academic journals we read to the press releases our universities put out.

In psychology, we don’t just have one experiment or primary method that we use for years on end. We tweak or redesign our experiments whenever we need to ask a slightly different question. We spend weeks designing experiments and surveys, but the methods sections of our papers are often shunted to the endnotes (see any Science or Nature article, for example). Although journal abstracts convey our findings, they rarely discuss how those findings were obtained. Many abstracts read as if an omniscient observer had hand-delivered the results to the journal editors. Instead, we should place greater emphasis on the methods of our research. Methods sections of scientific papers often say more about the researchers’ thought process than any other section.

Once scientific research leaves the academic forums of scholarly journals and enters the popular press, we often talk about the implications of our research without even discussing how or why we came to those conclusions. The press-release hype cycle is predictable. First, the university writes a press release. The press release is an art form in itself – an art form that values braggadocio. But at least this university-written braggadocio has been fact-checked by the scientists who discovered the findings in the first place.

The unnecessary hype of university press releases is a problem in and of itself. But the problem intensifies when journalists find these press releases and rewrite them. “Correlated” becomes “caused.” A one percent increase in cancer prevalence becomes “never eat meat again or you will die.” Then Katie Couric is advocating for colonoscopies – even though the evidence was shoddy in the first place – and suddenly one small press release has caused thousands of TV viewers to go to their physicians and ask for a colonoscopy.

Recently, I was mulling over how to recover from this epidemic of sensationalized research as I listened to my friend Raphael tell me about his thesis research in astronomy. Raphael’s thesis consisted of pointing the Southern African Large Telescope at a galaxy and observing it using specialized imaging techniques. Ultimately, Raph’s research aims to uncover more about black holes and the formation of the universe. While listening to Raphael give this research spiel, I had an epiphany about the difficulty of the so-called “soft sciences.” Where the behavioral sciences focus on results, astronomers prioritize methods. As far as I can tell, astronomy is the science of data reduction. Telescopes generate highly complex data, and astronomers use a series of computational techniques to reduce those data to interpretable images (Raphael has since told me that this notion is incorrect [oh well]).
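To give a flavor of what I mean by data reduction, here is a toy sketch – not Raphael’s actual pipeline, and simulated numbers throughout – of one classic step: subtracting a sensor’s dark frame from each exposure and then stacking many noisy exposures so that a faint source emerges from the noise. The function name and all values are illustrative.

```python
import numpy as np

def reduce_frames(frames, dark_frame):
    """Toy reduction: subtract the dark frame from each exposure,
    then average the calibrated frames to suppress random noise."""
    calibrated = [frame - dark_frame for frame in frames]
    # Averaging N frames shrinks random noise by roughly sqrt(N)
    return np.mean(calibrated, axis=0)

# Simulate 50 noisy exposures of the same faint source
rng = np.random.default_rng(0)
true_image = np.zeros((64, 64))
true_image[32, 32] = 5.0                       # a faint point source
dark = np.full((64, 64), 2.0)                  # constant dark current
frames = [true_image + dark + rng.normal(0, 3, (64, 64)) for _ in range(50)]

stacked = reduce_frames(frames, dark)
print(stacked[32, 32], stacked[0, 0])  # the source now stands out from background
```

In a single exposure the source is buried under noise three times its brightness; after stacking fifty frames, the noise drops by a factor of about seven and the source is unmistakable.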

This notion of data reduction should be applied to research in the behavioral sciences. Our brains consist of billions of neurons. Each neuron is an individual unit, and behavior emerges from the trillions of connections between these neurons. If that scale of activity doesn’t scream “data reduction,” I don’t know what does. Right now, however, we don’t truly have the capability to reduce this data in a scientifically rigorous way. fMRI, the most prevalent means of assessing human brain activity, is not as powerful as we would like. fMRI machines can’t assess individual neurons, nor can they see the connections between them. Even worse, they can only scan the brain every second or so. And it certainly doesn’t help that a suite of popular fMRI data reduction tools suffered from undetected analysis errors for over a decade. These errors inflated false positive rates, calling into question the validity of thousands of fMRI studies conducted since the year 2000.
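The bug itself was in cluster-based inference, but the baseline hazard it amplified – testing tens of thousands of voxels without proper correction – is easy to demonstrate. A minimal sketch with made-up data (not the actual fMRI tools):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_voxels, n_subjects = 10_000, 20

# Pure noise: no voxel carries any real signal
data = rng.normal(0, 1, (n_voxels, n_subjects))

# One-sample t-test per voxel against zero, uncorrected
t, p = stats.ttest_1samp(data, popmean=0, axis=1)
print(f"Uncorrected 'activations': {(p < 0.05).sum()}")  # ~500 false positives

# Bonferroni correction holds the family-wise error rate at 5%
print(f"Bonferroni survivors:      {(p < 0.05 / n_voxels).sum()}")  # ~0
```

With no signal anywhere, roughly five percent of voxels still “light up” at an uncorrected threshold. Any analysis tool that under-corrects for this gets brain activations for free.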

In addition to the limitations of our experiments, we need to be aware of the questions we cannot ask with our current neuroimaging methods. We have access to a large variety of neuroimaging techniques, such as PET, fMRI, and ECoG, but none of them is perfect. In astronomy, some questions could not be answered by observations from reflecting telescopes; some discoveries could only be made after the invention of gamma-ray and X-ray telescopes. Right now, psychology is in its infancy. We are in the age of the reflecting telescope, but we want to ask gamma-ray questions. This isn’t a reason to give up on psychology. Rather, it is the reason to believe in psychology.

We have many years of fruitful work ahead of us. I believe that we will solve the engineering, statistical, and social challenges preventing us from answering the questions we need to answer. Now that we understand how earlier methodological errors gave rise to unreproducible results, we must be more cognizant of the methods we use and the assumptions we make. For those of us presenting thesis research, it’s ok to say “I don’t know,” or “we need to do more research,” or even “maybe we’ll need to develop a whole new type of neuroimaging to even get a grip on that question.” For now, I’m going to work towards writing better proposals – with better methods.

