Has Science a Problem?

Stuart Vyse


As skeptics, we value ideas that are grounded in science over those that come from somewhere else. We believe that the methods of science, when
conscientiously applied, produce the best available information. In pursuit of our goals, we often focus on the latest wave of pseudoscience or
superstition. Some new bit of claptrap pops up, and we sharpen our scientific swords and mount the attack.

Going on the offensive is what we do best, but it is also important to have a strong defense. If we are going to be successful, we must do everything we can
to make sure there are no chinks in our own armor. For science to stand as a shining alternative to the unending waves of irrationality, its reputation must
be strong.

Unfortunately, science’s reputation has taken a bewildering number of blows in recent years. The challenges have come in the form of unscrupulous
researchers, unreliable results, and sloppy publication practices.

The Nature of the Problem(s)

Fraud


Some of the most recent blows to the image of science are cases of apparent—and in one case quite openly perpetrated—fraud.

LaCour and Green (2014)
In December of 2014, political scientists Michael LaCour and Donald Green published an article in the prestigious journal Science reporting that,
when canvassed about the topic of gay marriage, voters showed a strong and lasting shift in the favorable direction if the canvassers disclosed that they
were gay. But the article was recently retracted because LaCour made false statements
about the funding of the study and the payment of survey respondents. In addition, LaCour appears to have lied on his résumé, fabricated the data in at
least one of the studies in the Science paper, and possibly fabricated data for another published study.1 Donald Green, the senior co-author on the Science article, requested that
the paper be retracted, but LaCour has posted a twenty-two-page defense of the paper on his personal website and has not joined Green in supporting the retraction. As of this
moment, LaCour appears to stand by his research and has defended himself on Twitter. However, speaking through an attorney, he admitted to making false
representations about the study’s funding.

Nate Silver links to evidence of another LaCour paper being faked; LaCour calls it an unsubstantiated claim

The Infamous Chocolate Study
In what was designed to be a deliberate and instructive hoax, science journalist John Bohannon teamed up with two documentary filmmakers and other
collaborators to conduct a real but flawed study to determine whether eating chocolate in combination with a low-carbohydrate diet would promote weight
loss. By designing the study in such a way that statistically significant (but practically irrelevant) results would almost certainly be obtained, they
discovered that—lo and behold—chocolate in combination with a low-carbohydrate diet produced greater weight loss and lower cholesterol. Despite its many
flaws, the study was accepted by several journals and eventually appeared in the International Archives of Medicine. Given the sensational topic
and appealing results, the study was quickly picked up by news outlets all over the world.2

Daily Star (UK) coverage of the bogus chocolate study.

The history of fraudulent scientific research is long and storied. The United States Office of Research Integrity
investigates many cases each year and lists the names of researchers found guilty of misconduct. Furthermore, if you follow the Retraction Watch blog, you will learn that journals retract articles far more often than most people would
suspect. As I write this,
Retraction Watch is reporting
that the Journal of Immunology is retracting a 2006 study because multiple illustrations in the article were falsified.

It is difficult to say whether there is more or less research fraud than there was in the past, but it is likely that in today’s world more of the research
fraud that exists will be reported in the news media.

#repligate

Scientific research is supposed to be repeatable. When researchers demonstrate some new effect or phenomenon, it should not be a one-off event. As a
result, when they publish their findings, investigators include detailed descriptions of the methodology they used so that other researchers in different
laboratories can try to get the same results. Things that cannot be reproduced are immediately suspect.

Unfortunately, journals much prefer to publish shiny new findings over attempts to replicate older ones, and journal publication is the main
vehicle for success as an academic scientist. So, in practice, replications of previous studies are quite rare. As a result, many sensational
studies are assumed to be fact and are widely cited, but they may or may not be reproducible. In the field of social psychology, the #repligate problem has
recently come to a head. Some very widely cited studies have been called into question by failures to replicate. In particular, research on priming
effects—subtle cues that go unnoticed but still influence behavior—has come under criticism.

Priming effects are often quite stunning (e.g., hand-washing improves your moral judgment of other people), but this area of psychology has been
plagued with examples of research fraud and findings that
have not been reproduced. In 2012, a controversy erupted surrounding a famous article co-authored by Yale University professor John Bargh. In the original
study, one group of college students unscrambled sentences that included words that were stereotypical of the elderly (e.g., wrinkle, Florida) and another
group unscrambled sentences without these words. Later, the participants in the elderly prime group walked significantly slower down a hallway as they
exited the laboratory than those in the other group. Pretty cool!

In January of 2012, a group of researchers from Belgium published an
attempt to replicate these findings in the online journal PLOS One. Using more stringent procedures and a larger sample of participants, the
Belgian group found no priming effect except when the experimenters conducting the study knew about the expected finding. This and other priming
controversies inspired Princeton University psychologist Daniel Kahneman to write an open letter expressing his belief that the
field of priming “is now the poster child for doubts about the integrity of psychological research.” He urged that the problem be tackled head on with a
collaborative effort to replicate findings.

Screenshot of the TED website: “Your body language shapes who you are.”

Just this spring, a published failure to replicate the “power pose” effect—which suggests that striking a dominant posture affects hormone levels and risk-taking behavior—raised questions about research by Amy Cuddy that is the basis of her TED talk, currently the second most popular TED talk of all time, with over 26 million views.

Lest you think the problem is restricted to “softer” social science research, a recent study in the journal PLOS Biology found that
half of all preclinical studies in the biological sciences are irreproducible and that $28 billion in research funds are wasted each year. Furthermore, few of
the faulty papers listed on the Retraction Watch blog are from the social sciences.

How (or Why) Does this Happen?

The kind of sloppy and fraudulent research that we see demonstrated in these cases probably has many sources. Here are just a few:

1. Personal and Professional Motives. Scientists have always been under pressure to “publish or perish.” Getting tenure and achieving academic
success depend upon producing high-quality peer-reviewed articles. But the media explosion of recent decades has changed the nature of science, adding new
layers of motivation for the ambitious scientist.

In the past, no matter where they worked, scientists toiled in relative obscurity, publishing their findings in journals that rarely came to the attention
of the general public. Today the landscape is quite different. In my own field of psychology, it is now quite common for researchers from the most highly
regarded universities to write bestselling books, appear as talking heads on television, have thousands of Twitter followers, give TED talks that are
watched by millions of people, and command substantial speaker’s fees.

The sciences, once relatively immune from the pull of celebrity, now have an expanding pantheon of rock stars who sometimes appear to be
writing more for popular audiences than for professional ones. Almost all research universities now have active public relations departments that put
out press releases when potentially newsworthy studies are published. Here is an excerpt from a recent university press release I came across:

From a university press release announcing the publication of a faculty research study.

Before the recent debacle, LaCour and Green’s article in Science was covered by media outlets throughout the world, was
used as a template for the “Yes” campaign
in the recent Irish gay marriage referendum, and helped to land LaCour a job at Princeton University. Now that the article has been retracted, the
Princeton job is in doubt—but it is
easy to see how researchers might cut corners in an effort to achieve prestigious jobs and personal fame.

2. Social or Political Motives Affecting Peer Review. Reputable scientific journals only publish articles that have been reviewed by other
qualified scientists. Despite Science’s prestigious reputation, the LaCour case is one where peer review may not have been as rigorous as it
should have been. If the reviewers were supporters of gay marriage, they may have been inclined to look favorably on LaCour and Green’s findings. Unfortunately, reviewer
biases can easily produce sloppy evaluations of studies that support a pet theory or are particularly sensational.

3. Peer Review is Volunteer Labor. Reviewing manuscripts that have been submitted for publication in journals is a very time-consuming task, and
almost all of this work is done on a volunteer basis. For the working scientist or academic, there is very little benefit to taking on this kind of work.
It is a professional service done for free, and the quality of reviews varies widely. Under this kind of a publication system, bad research will sometimes
sneak through.

4. P-hacking. In many fields—including most of the social sciences—you can only get your research published if you find statistically significant
results. P-hacking is a time-honored but dubious process of tweaking your data
until you find something that is significant at the hallowed p < .05 (5 chances out of 100) probability level. P-hacking is what created the
Infamous Chocolate Study results, and in today’s world it is much easier to do than it used to be. Computer analysis of data makes it possible to reanalyze
data in many different ways and then choose the version you like best. Also, in today’s world of online surveying methods social science researchers can
obtain large numbers of participants quickly and inexpensively. Larger samples mean that even very small effects—effects that have little or no practical
value—can achieve statistical significance and find their way into print. Thankfully, p-hacking continues to be identified as a serious problem and remains an important concern of the
research community.
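
To see how easily this trick works, consider a small, purely hypothetical simulation. This is not the chocolate study’s actual analysis; the group sizes and the number of outcomes are invented for illustration, and it assumes Python with the numpy and scipy libraries available. It compares two groups drawn from the very same distribution on many different outcomes and counts how many comparisons cross the p < .05 line by chance alone.

    # A toy sketch of p-hacking, not the chocolate study's real analysis:
    # measure many outcomes on two small groups drawn from the SAME distribution,
    # then count how many comparisons cross p < .05 by luck alone.
    # Assumes Python with numpy and scipy installed; all numbers are illustrative.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2015)
    n_per_group = 8     # tiny groups (illustrative)
    n_outcomes = 18     # weight, cholesterol, sleep quality, and so on

    false_positives = 0
    for _ in range(n_outcomes):
        # No real effect exists: both groups come from the same distribution.
        chocolate_group = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
        control_group = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
        _, p_value = stats.ttest_ind(chocolate_group, control_group)
        if p_value < 0.05:
            false_positives += 1

    print(f"Outcomes 'significant' by chance alone: {false_positives} of {n_outcomes}")

Because roughly one independent comparison in twenty will clear the .05 threshold by accident, a researcher who quietly measures eighteen outcomes has about a 60 percent chance (1 - .95^18) of finding at least one “significant” result where no real effect exists. Report only that winner, and you have a publishable finding made entirely of noise.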

hand-written, misspelled fake journal cover

5. An Epidemic of Fake Journals. All of the above are problems faced by reputable scientific journals, but a growing number of fake
journals now exist solely to make money. A new trend in science is the advent of “open access” journals that are freely available to readers
online. Instead of charging for subscriptions, these journals often support themselves through fees—sometimes substantial fees—paid by authors who submit
their articles for publication.

Some open access journals are of good quality—PLOS One and PLOS Biology are both open access journals—but others are little more than
money-making scams. In an earlier sting operation for Science magazine,
John Bohannon, the journalist/author of The Infamous Chocolate Study, submitted copies of a bogus and deeply flawed research report supposedly describing a
new anti-cancer drug to 304 open-access journals. The paper should have been summarily rejected by any reputable journal—and it was rejected by many. But
over half (!) of the 304 submissions were accepted—most often without any evidence that they had been peer reviewed at all. Unfortunately, these journals
have authoritative sounding names and—as the chocolate study episode suggests—the news media and the general public are ill-equipped to determine which
journals are trustworthy and which are bogus.

The Good News

So the answer to the question is yes. Science has a problem, and it is one that has new and troubling dimensions. But there is also some good news.

David Can Still Slay Goliath

Thankfully, recent history shows that one very important principle of science still holds: evidence is more important than the researcher’s professional or
social status. A lowly graduate student can still take down a tenured professor. In 2013, Thomas Herndon, a 28-year-old University of Massachusetts graduate student, tried
to replicate the findings of two Harvard economists for a class project and discovered a number of errors in their analysis. Herndon and colleagues went on
to publish a scathing take-down of the study. The
original paper had been used by Paul Ryan and other conservative politicians to justify painful austerity programs, but a smart graduate student was able
to discredit it.

Michael LaCour was a graduate student at UCLA when he published the gay canvassing study, but his co-author, Donald Green, was a tenured full professor at
Columbia University. Again in this case, it was a graduate student, David Broockman of the University of California, Berkeley, who posed the questions
that ultimately led to the Science retraction. Broockman was initially very admiring of LaCour’s work but ran into
difficulties when he attempted to replicate it.

The gods sometimes have feet of clay, and mere mortals can topple them.

Post-publication Peer Review

As we have seen, peer review has its weaknesses. But today, publication of an article is not the end of the line. In the past, researchers were asked to
retain the raw data of their studies for a few years in case someone asked to see it, but not all researchers followed this guideline. Today, some journals
are requiring that researchers make the raw data from their studies available electronically at the time of publication, so that anyone can download the
data and re-analyze it. Now that research is widely available on the Internet, it frequently receives intense scrutiny after publication, and if
errors are found, they can be corrected or—when warranted—the paper can be retracted. So, in the future, flawed studies that get through the publication
process will often be caught after they have been published. This is what the Retraction Watch project
is all about.

Fraud is Still a Solitary Enterprise

There are systematic problems that plague science. P-hacking, for example, is a widespread problem that affects many researchers, consciously or
unconsciously. In contrast, deliberate fraud appears to be specific to particular individuals. Random unscrupulous scientists are likely to emerge from
time to time, and many of their names are listed on the Office of Research Integrity website. It is important to identify and pursue these Bernie Madoffs
of the science community, but once they and their crimes are identified, they tend to be seen as individual culprits who are not representative of the
field as a whole.

Furthermore, it is heartening to see that scientists who engage in fraud are often reported by those closest to them. Years ago, the faked data of
University of Pittsburgh psychologist Stephen Breuning was first brought to light by his collaborator and colleague Robert Sprague.3 In the Michael LaCour case, it was his co-author, Donald Green, who—after learning
that LaCour had deleted the raw data—requested that Science retract their paper.4 Green may be faulted for not having kept a closer eye on
his young collaborator before the misconduct was revealed, but once the problem came to his attention, he acted swiftly and honorably.

So, yes. Science has a problem. It has some of the same old problems of maintaining research integrity, as well as some new problems created by the
contemporary media explosion and the use of computers and the Internet in research. But there also seems to be a heightened level of scrutiny of scientific
research that will—in the long run—keep us strong. Science is going through a period of adjustment to a new research environment, but I see no evidence
that things are going off the rails. Just as we always have, skeptics will need to help the credulous separate reliable sources of information from unreliable ones, but we can still count on the scientific alternative to be strong and to keep getting stronger.

Stuart Vyse

Stuart Vyse is a psychologist and author of Believing in Magic: The Psychology of Superstition, which won the William James Book Award of the American Psychological Association. He is also author of Going Broke: Why Americans Can’t Hold on to Their Money. As an expert on irrational behavior, he is frequently quoted in the press and has made appearances on CNN International, the PBS NewsHour, and NPR’s Science Friday. He can be found on Twitter at @stuartvyse.