How Can Skepticism Do Better?

Scott O. Lilienfeld

I am delighted to contribute an essay to celebrate the fortieth anniversary of the Committee for Skeptical Inquiry (CSI) and its wonderful magazine, Skeptical Inquirer (SI), both of which I have been honored to be affiliated with for the past fifteen years. CSI and SI have ample reason to be proud of their myriad accomplishments. They have helped to make skepticism a household word in many quarters and brought tens of thousands of individuals—laypersons, students, and academicians, among many others—into the fold of the skeptical movement. Moreover, they have served as invaluable resources for scholars, teachers, and laypersons. In my field of psychology, CSI and SI have inspired thousands of college and high-school instructors to incorporate scientific thinking into their curricula. I am one of them, and I very much doubt that I would have developed my successful “Science and Pseudoscience in Psychology” undergraduate seminar at Emory University (see Lilienfeld et al. 2001) were it not for the tireless efforts and encouragement of CSI and SI.

To be true to its mission, though, skepticism must be skeptical of its own endeavors (Novella 2015; see also Horgan 2016 for a well-intentioned but less than successful effort in this regard). Hence, in this essay I look to the future and pose the question of what the skeptical movement could be doing better to advance its laudable goals.

Before doing so, I should be up front about my biases. I am a psychologist by training, and I tend to think about pseudoscientific and otherwise questionable beliefs through a distinctly psychological lens. By that I mean that I strive not merely to debunk erroneous beliefs but to understand why otherwise reasonable people often fall prey to them (see also Shermer 2002). As fascinated as I have been in evaluating the evidence—or lack thereof—underpinning pseudoscientific and otherwise questionable claims, my deeper interest has long been in why seemingly rational individuals, including prominent scientists, are so often seduced by these claims.

Ultimately, one of our principal goals as skeptics is to impart scientific thinking skills to the general public and to stem the rising tide of pseudoscience in the media, on the Internet, and in everyday life, including in the worlds of politics, business, and the law. How well are we doing?

Skepticism: A Report Card

It is not entirely clear, but there is ample reason for soul-searching. Survey data suggest that the levels of uncritical acceptance of paranormal claims, such as beliefs in extrasensory perception (ESP), psychic healing, and astrology, remain distressingly high among the general public and college students, and may even have been increasing over the past few decades (Ridolfo et al. 2010). Despite our concerted efforts, we still find ourselves in an age in which pseudoscience crowds out science on the airwaves and newsstands, and in which leading presidential candidates are convinced that vaccines cause autism and that our unassuming little planet has very likely been visited by extraterrestrials. To be fair, it is conceivable that without our valiant efforts to combat pseudoscience the levels of such beliefs might be even higher, especially in light of the misinformation explosion fueled by the Internet, cable television, and social media. At the same time, it may be high time to ask ourselves whether, and if so what, we could be doing better.

In this essay, I briefly touch on three major topics that I believe warrant substantially more consideration in the next four decades of the skeptical movement: (1) the extent to which our dual aims may conflict with or even undermine each other; (2) the effectiveness, or lack thereof, of our debunking efforts; and (3) the need to consider research on cognitive development in disseminating scientific thinking skills to the general public.

Conflicting Aims?

One issue that strikes me as meriting considerably more discussion is the principal aim of CSI and SI. Is our primary goal to serve as a “resource group” or “support group” of sorts for skeptics—to provide real-world and virtual forums in which we skeptics can share our knowledge and consort with like-minded thinkers? Or is our primary goal instead to boost the levels of scientific thinking among the populace and thereby diminish the levels of poorly supported and potentially harmful beliefs?

In my view, both are extremely worthy aims, and I do not intend to fall prey to the false dilemma fallacy by implying that we must necessarily select one goal or the other. Logically speaking, they are not mutually exclusive. At the same time, I have to wonder whether these two aims are to some extent operating at cross-purposes. To the degree that we talk mostly to each other at our own conferences and meeting groups, in our magazines, on our Internet blogs, and in other outlets, we may largely be forsaking our opportunities to change the minds of the general public. I also worry that we may fall prey to a false consensus effect (Ross et al. 1977), whereby we overestimate the extent to which others share our views as well as overestimate the effectiveness of our efforts.

Furthermore, by communicating mostly with each other, we may inadvertently be cultivating something of an “us versus them” mentality in which we regard individuals outside the skeptical movement as largely unreachable (or, worse, not worthy of outreach). Like one of my intellectual heroes, astronomer and science writer Carl Sagan (1996), I have on occasion detected more than a whiff of arrogance among some of us in the skeptical movement, and I have no doubt fallen prey to this tendency myself from time to time. Resisting the temptation to perceive our critics as inherently foolish, stupid, or malicious can at times be challenging, especially when we are addressing claims that strike us as outlandish. But as soon as we commit the fundamental attribution error (Ross 1977) of assuming that erroneous beliefs in others are the products of their intrinsic dispositions—such as low intelligence or the inability to think critically—rather than of inadequate scientific training or of an understandable yearning for wonder, we may find it difficult to avoid conveying condescension and disrespect when communicating with them. Moreover, we may dismiss them prematurely as “true believers” whose views are unmalleable.

Perhaps my concerns are unwarranted or overstated. I hope so. Even so, I believe that it will be crucial for the skeptical movement to step back and take further stock of its long-term goals and how best to achieve them.

The Effectiveness of Our Debunking Efforts

As skeptics, we spend much of our time attempting to dispel false and poorly supported beliefs. By doing so, we hope to bring the best available scientific evidence to bear on confronting pseudoscientific and otherwise dubious assertions. Ironically, though, we rarely pause to ask ourselves a critical question: Are our methods of challenging others’ beliefs themselves consistent with the best available scientific evidence?

The psychological literature increasingly suggests that the answer to this question is a resounding “No.” Many skeptics appear to be unaware that a rapidly growing body of research on the effectiveness of dispelling false beliefs suggests that debunking is far more difficult than most of us have long assumed. The “continued influence effect,” also known as “belief perseverance,” whereby false beliefs persist despite repeated efforts to correct them (Johnson and Seifert 1994), is a major and largely unappreciated challenge to the efforts of the skeptical movement. The continued influence effect may help to explain, for instance, why recent surveys of the general public reveal that nearly one in three parents believes that vaccines may cause autism despite persuasive evidence to the contrary (Heasley 2014).

In fact, much of this recent research points to the possibility of “backfire effects,” whereby well-intentioned attempts to disabuse individuals of misconceptions paradoxically strengthen them. For example, when researchers have attempted to reduce unwarranted doubts regarding the dangers of vaccines by informing participants that the side effects of the flu vaccine tend to be mild and are rarely worse than the effects of the virus itself, they have found that participants tested immediately after reading this message are more receptive to flu vaccines than are other participants. Yet if researchers wait a mere half an hour, these effects are reversed—individuals become more dubious of flu vaccines than they would have been had they received no information at all (Schwarz et al. 2007; see also Nyhan and Reifler 2015)! Presumably, participants’ negation tags—the “yellow sticky notes” in their minds that remind them “This belief is false”—have peeled off. Similar backfire effects have emerged in a number of other realms, including politics. For example, liberals informed that George W. Bush did not in fact ban all U.S. stem cell research, or conservatives informed that Iraqi dictator Saddam Hussein did not in fact possess weapons of mass destruction, subsequently become more likely to endorse these inaccurate beliefs (Nyhan and Reifler 2010). Backfire effects appear to be most likely when corrective messages threaten recipients’ self-identity (“worldview backfire effects”) and when the myths themselves are repeated frequently (“familiarity backfire effects”), in the latter case probably because people often confuse a message’s familiarity with its accuracy. Both types of backfire effects should be of concern to skeptics, because we frequently challenge individuals’ core self-concepts and do so by reiterating their misconceptions.

Fortunately, psychologists and other social scientists are gradually converging on a set of evidence-based principles for effective debunking (Lewandowsky et al. 2012). In general, research suggests that effective debunking requires communicators to displace the false belief with a competing, and ideally more compelling, narrative. In addition, communicators should clearly—and respectfully—explain how and why the belief, although often understandable, can be misleading. Furthermore, debunkers should generally avoid repeating the myth too many times and instead focus on well-supported scientific evidence that counters the myth. If possible, pairing the debunking explanation with a vivid and easily grasped graphical depiction, such as a figure contrasting the tiny number of peer-reviewed studies suggesting a link between vaccines and autism versus the enormous number that do not, is also advisable. I strongly encourage interested readers to consult the brief and user-friendly Debunking Handbook by Cook and Lewandowsky (2011) for further recommendations.

More broadly, as the skeptical movement looks ahead to the next four decades, it will need to revisit how it frames and packages its messages. It is plausible, if not probable, that we are not doing so in the maximally effective fashion. More worrisome, it is entirely possible that we are sometimes inadvertently hurting our cause. This possibility should give us pause.

Psychological Research on Child Development

The great Swiss developmental psychologist Jean Piaget got a number of details wrong; for example, he almost surely underestimated children’s cognitive capacities in many domains. Nevertheless, Piaget imparted a crucial insight that has stood the test of time: Psychologically, children are not miniature adults (Lourenço and Machado 1996). They conceptualize the world in markedly, perhaps qualitatively, different ways than we do. As a consequence, we cannot assume that the same debunking approaches that work with grown-ups will be successful with children.

To be effective in our efforts to disseminate scientific thinking to the general public, we may need to begin early in individuals’ lives, well before erroneous conceptions of the world have become ossified. Public health research teaches us that primary prevention is almost always more effective and efficient than secondary prevention, and the same principle is likely to apply to the teaching of critical thinking skills.

We know scandalously little about when in psychological development we can effectively begin to teach scientific thinking skills, or how to differentially craft scientific thinking approaches for children of different ages. Recent research offers some grounds for optimism. In two studies, a simple picture storybook intervention yielded encouraging success in teaching five- to eight-year-olds about the key principles of natural selection; these effects endured for at least several months (Kelemen et al. 2014).

At the same time, there will inevitably be unanticipated challenges in teaching scientific thinking principles to children. In a brief article that should be required reading for all skeptics, Bloom and Weisberg (2007) described a number of deep-seated sources of psychological resistance to science that emerge in childhood but that probably persist—albeit in attenuated form—in adulthood. Among other things, these authors noted that young children tend to perceive purpose in many non-purposeful phenomena. For example, they commonly assume that clouds exist to make rain for people and that lions exist so that people can view them in zoos. Such teleological reasoning, dubbed “agenticity” by Michael Shermer (2009), may render children especially vulnerable to notions such as intelligent design theory, which posits that species arose from the intentional plan of a grand designer. In addition, Bloom and Weisberg observed that children are “natural-born dualists,” meaning that they regard the mind as a nonmaterial essence that is fundamentally distinct from the brain. Such dualism may render children especially likely to embrace beliefs in ghosts, spirits, and other disembodied entities, which presumably reflect the persistence of the mind following the death of the brain. In many of us, these dualist beliefs probably endure in some form into adulthood, suggesting that this line of research bears fruitful implications for dispelling adult misconceptions as well.

In my view, one of the foremost challenges to the skeptical movement in future years will be to forge stronger linkages with such disciplines as cognitive-developmental psychology and educational psychology to better understand how to immunize children against erroneous beliefs—and more broadly, erroneous ways of thinking—before these propensities become deeply entrenched in adolescence and adulthood. The conceptual and methodological difficulties here are formidable, but the payoffs are likely to be substantial.

Concluding Thoughts

The skeptical movement has every right to celebrate its past forty years of remarkable achievements, but it should not rest on its laurels. To bring skepticism to the next level—Skepticism 2.0—we will need to take a much more critical look at the success, and lack thereof, of our communication and persuasion efforts. Gathering evidence against unsupported assertions is an invaluable and necessary first step, and skeptics have made admirable progress in this regard. But now we must begin to develop more effective means of disseminating the fruits of our labors to individuals who are skeptical of our skepticism.


  • Bloom, P., and D.S. Weisberg. 2007. Childhood origins of adult resistance to science. Science 316: 996–997.
  • Cook, J., and S. Lewandowsky. 2011. The Debunking Handbook. Sevloid Art.
  • Heasley, S. 2014. Autism-vaccine concerns remain widespread. Disability Scoop (April 9). Available online at https://www.disability
  • Horgan, J. 2016. Dear “skeptics”: Bash homeopathy and Bigfoot less, mammograms and war more. Scientific American (May 16). Available online at
  • Johnson, H.M., and C.M. Seifert. 1994. Sources of the continued influence effect: When misinformation in memory affects later inferences. Journal of Experimental Psychology: Learning, Memory, and Cognition 20: 1420–1436.
  • Kelemen, D., N.A. Emmons, R.S. Schillaci, et al. 2014. Young children can be taught basic natural selection using a picture-storybook intervention. Psychological Science 25: 893–902.
  • Lewandowsky, S., U.K. Ecker, C.M. Seifert, et al. 2012. Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest 13: 106–131.
  • Lilienfeld, S.O., J.M. Lohr, and D. Morier. 2001. The teaching of courses in the science and pseudoscience of psychology: Useful resources. Teaching of Psychology 28: 182–191.
  • Lourenço, O., and A. Machado. 1996. In defense of Piaget’s theory: A reply to 10 common criticisms. Psychological Review 103: 143–164.
  • Novella, S. 2015. Rethinking the skeptical movement. NeuroLogica Blog. Available online at
  • Nyhan, B., and J. Reifler. 2010. When corrections fail: The persistence of political misperceptions. Political Behavior 32: 303–330.
  • ———. 2015. Does correcting myths about the flu vaccine work? An experimental evaluation of the effects of corrective information. Vaccine 33: 459–464.
  • Ridolfo, H., A. Baxter, and J.W. Lucas. 2010. Social influences on paranormal belief: Popular versus scientific support. Current Research in Social Psychology 15: 33–41.
  • Ross, L. 1977. The intuitive psychologist and his shortcomings: Distortions in the attribution process. In L. Berkowitz (Ed.), Advances in Experimental Social Psychology (Vol. 10, pp. 173–200). New York: Academic Press.
  • Ross, L., D. Greene, and P. House. 1977. The “false consensus effect”: An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology 13: 279–301.
  • Sagan, C. 1996. The Demon-Haunted World: Science as a Candle in the Dark. New York: Random House.
  • Schwarz, N., L.J. Sanna, I. Skurnik, et al. 2007. Metacognitive experiences and the intricacies of setting people straight: Implications for debiasing and public information campaigns. Advances in Experimental Social Psychology 39: 127–161.
  • Shermer, M. 2002. Why People Believe Weird Things: Pseudoscience, Superstition, and Other Confusions of Our Time (2nd edition). New York: Macmillan.
  • ———. 2009. Agenticity. Scientific American 300(6): 36.


Scott O. Lilienfeld, PhD, is a professor of psychology at Emory University. He is coeditor of the book Science and Pseudoscience in Clinical Psychology, Second Edition (2014) and author of several other books about science and pseudoscience in psychology.