Foreword

As the world grapples with the impact of misinformation on its political and social systems, effective responses will need to recognize a host of complex dynamics—including rapidly evolving scientific knowledge, massive disruptions to news and the news business, declining civics education, and sophisticated efforts to manipulate information. This complexity drives the need for greater collaboration, with funders, nonprofit and community leaders, researchers, and technologists working together to address this evolving and critical issue.

To build collaboration across disciplines and expertise and create a more effective community of learning and practice, the Rita Allen Foundation partnered with RTI International and the Aspen Institute, along with Craig Newmark Philanthropies, Democracy Fund, and the Burroughs Wellcome Fund, to foster cooperative responses aimed at curbing the spread of misinformation, with a focus on how human behaviors contribute to the spread of false information.

In early October 2018, we facilitated the Misinformation Solutions Forum at the Aspen Institute, where a diverse group of experts shared ideas for curbing the role that people—rather than platforms and technologies—play in the spread of misinformation. The discussion was designed as a lab or workshop to provide insights for participants, especially the six finalists who first shared their proposals, selected through an open call for ideas launched in the spring. The forum was designed to help improve the finalists’ ideas and offer collegial but constructive advice.

We gained several insights from conducting an open call for ideas, selecting final proposals, and designing a forum where collaboration and dialog were key to success. First, much more remains to be done to understand the human behaviors that drive misinformation. Second, iterative, research-based strategies are essential to address this rapidly evolving problem. Pursuing them will require the concerted efforts of many, working toward a shared goal.

These proceedings present the forum’s ideas as a cohesive overview. We invite you to join the conversation and the collective effort to foster next-generation ideas for addressing the diffusion of misinformation among citizens and via media outlets.

Elizabeth Christopherson

President and Chief Executive Officer, Rita Allen Foundation

Preface from the Aspen Institute

Curbing the spread of misinformation (including “misleading health news”)—like so many other 21st-century challenges—is too multidimensional to become the responsibility of a single discipline. When the Aspen Institute’s Health, Medicine and Society (HMS) Program partnered with the Rita Allen Foundation and RTI to host the Misinformation Solutions Forum, we knew that communications professionals and journalists would be at the table, but we also recognized that educators, computer scientists, consumer advocates, and others would have important insights to share.

HMS has a tradition of convening thought leaders and decisionmakers across sectors in a “safe space” to grapple with the pressing issues of the day. We have found that to be a great way to share expertise, unwrap complexities, and push ideas into action.

Recognizing that people from different fields approach challenges in different ways, we create settings that celebrate their unique mix of idioms, methodologies, and perspectives. We look for fresh voices and put special value on professional, demographic, geographic, and political diversity—because we think that’s how problems get solved.

Our typical convenings offer opportunities for participants to ask probing questions, listen closely, challenge assumptions, and consider the best pathways forward. Time and again, we have found that such a strategy leads to creative and cross-disciplinary strategies for change.

Ruth J. Katz

Executive Director, Health, Medicine and Society Program

Vice President, Aspen Institute

Overview

Although many people now have access to more accumulated information than at any other point in human existence, we face a moment when the proliferation of misinformation (false or inaccurate information) poses major challenges. This proliferation and diffusion is not simply a product of misinformation production: human psychology interacts with information environments to jointly facilitate the diffusion of falsehoods.

In light of these ideas, translational work is a central task: work that brings together academic institutions and general populations to apply lessons from research, and work that builds consensus among groups otherwise sated by their own vast repositories of information (false or not). We present here an initial yield from our convening effort: a set of essays describing ideas submitted by forum participants, along with accompanying essays by selected graduate student fellows that attempt to put those ideas into academic context. Across various disciplines and vantage points, many people are inspired to do something to curb the spread of misinformation, and the contributors to these essays have demonstrated not only such inspiration but also a useful set of practical ideas. Proposal designs included an array of methodologies, used multiple tools to address misinformation (e.g., technology, education, psychology, community-based participatory research, peer support), targeted various audiences (e.g., high school and college students, patient groups, pregnant women), and highlighted the implications of misinformation across fields (e.g., public health, business, politics). Importantly, these ideas are not yet interventions and tools, per se, but rather starting points for future work.

In the same spirit of open-source collaboration that we drew upon to develop the forum, we offer these proceedings as citable inspiration for future projects. The proceedings include essays from the six finalist teams, along with response essays from pairs of graduate students who attended the forum. The six finalist essays articulate the types of problems we face and what might be done about them. Following each finalist essay, a pair of graduate fellows offers a prescription for what’s next, helping to put the finalist’s ideas in context and suggesting additional research directions.

Attendees of the forum at the Aspen Institute included the six finalist teams, funding organization representatives, invited moderators, graduate students, and a host of other invested parties. The diverse project designs and the mix of participants at the forum—academics, practitioners, journalists, and funders—fostered multiple perspectives and some useful disagreements. The goal was to bring together a disparate group of people and have civil conversation, with the belief that having ideas repeatedly critiqued ultimately leads to better work. To achieve this goal, attendees were instructed to treat the forum as a lab or workshop. In that spirit, we invite you to read these proceedings as an extension of the workshop space and to contact any of the contributors with new ideas.

We hope these proceedings can serve as a sourcebook of ideas and commentary on what we need to make progress in this arena.

Finalist Essays and Response Essays

Mind Over Chatter: Bias Mitigation for College Students

Paul Cook, Polly Boruff-Jones, Christina Downey, and Mark Canada

Indiana University Kokomo

Today’s information consumers face unprecedented challenges. The digital age has dramatically broadened access to communication, enabling more people to disseminate information—and misinformation. Whereas college instructors in the past could expect students to arrive as largely blank slates for research, entering college students today are already accustomed to seeing and sharing information from a dizzying array of sources. It is impossible for them to recognize all of the bad actors in this new information universe; even if they could, the information sharing that occurs via social media frequently obscures original sources and flattens distinctions among objective news, hoax sites, and other sources.

The neuropsychological apparatus that simplified the external world of early humans and promoted species survival now makes us susceptible to misinformation (Southwell, Thorson, & Sheble, 2017). We typically assume the truth of new information (truth bias) and give extra weight to data that align with our existing beliefs (confirmation bias) (Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012; Prasad et al., 2009). We are often resistant to corrections (Lewandowsky et al., 2012) and tend not to remain mindful when faced with a barrage of facts, figures, and options (Jha, Krompinger, & Baime, 2007), especially when fatigued, emotionally defensive, or under time pressure (Croskerry, Singhal, & Mamede, 2013).

The challenges are especially great for traditional-age college students, who typically are still maturing neurobiologically (becoming capable of complex decisionmaking), intellectually (becoming citizens), and epistemologically (becoming sophisticated users of information). Before these processes are complete, young people tend to view knowledge as stable and certain rather than as the product of “context-dependent judgment[s] based on relevant evidence” (Magolda, 2006).

The Need

Researchers in every area of inquiry are responsible for the generation and transmission of knowledge. Humans depend on researchers’ abilities to share information—that is, both to interpret others’ ideas critically and to disseminate their own responsibly—not only for our knowledge of the world, but also for sound policymaking. In short, irresponsible and uninformed generation and transmission of knowledge threaten democracy, progress, and knowledge itself.

The Solution

Drawing on emerging research in cognitive bias mitigation (Morewedge et al., 2015), we propose an innovative small-group educational intervention in students’ crucial first year of college. Specifically, we propose piloting an approach in which students in three sections of our introductory composition course would be grouped into nine teams of eight students each and charged with meeting three times throughout the semester to discuss, write, share, and reflect on truth, bias, and their own daily engagements with misinformation. The leaders of these small groups would be advanced students who, working from a manual that we will develop, would lead discussions and exercises with their groups. These peer mentors would receive intensive training on delivering all three sessions with fidelity and would be able to consult with program leaders throughout the program. The proposed intervention would consist of the following three components:

Initiation (Session 1): Researchers from all backgrounds carefully consider facts and theories on their way to developing sound explanations of phenomena. In Session 1, students will consider the advantages and disadvantages of letting their biases guide their decisions. They will learn the basics of inductive and deductive reasoning, the analysis of competing claims, and other basic epistemological concepts that underlie both academic research and sound public discourse.

Protection Against Biases (Session 2): Drawing on research, faculty and peer mentors will educate students on the unconscious influence of confirmation bias, fundamental attribution error, and other factors that make humans prone to accepting and disseminating misinformation. This module will feature MISSING™, a computer simulation game experimentally shown to mitigate cognitive biases (Morewedge et al., 2015).

Long-Term Strategies (Session 3): In this final module, faculty and peer mentors will use journaling and other reflective exercises to teach proven strategies, such as self-affirmation and mindfulness, that students can employ to develop long-term awareness of bias and transfer of these new skills across contexts.

Conclusion

Our idea offers key advantages and innovations. It empowers students to overcome the often-ignored psychological factors that make humans susceptible to misinformation and that threaten responsible information finding and sharing. Because the resulting manual will be self-contained, digital, and freely available, any instructor or student can use it. Peer mentors will gain valuable experience and expertise in teaching information literacy. Students can build a sense of belonging and purpose that supports their success. For these reasons, an innovation like ours has the potential to reach an enormous audience, ultimately making a difference in Americans’ ability to generate and disseminate information, participate in democracy, and assure our collective progress. We believe this misinformation solution can improve public and academic discourse, along with graduation rates.

References

Croskerry, P., Singhal, G., & Mamede, S. (2013). Cognitive debiasing 1: Origins of bias and theory of debiasing. BMJ Quality & Safety, 22(Suppl 2), ii58–ii64. https://doi.org/10.1136/bmjqs-2012-001712

Jha, A. P., Krompinger, J., & Baime, M. J. (2007). Mindfulness training modifies subsystems of attention. Cognitive, Affective & Behavioral Neuroscience, 7(2), 109–119. https://doi.org/10.3758/CABN.7.2.109

Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131. https://doi.org/10.1177/1529100612451018

Magolda, M. B. B. (2006). Intellectual development in the college years. Change: The Magazine of Higher Learning, 38(3), 50–54. https://doi.org/10.3200/CHNG.38.3.50-54

Morewedge, C. K., Yoon, H., Scopelliti, I., Symborski, C. W., Korris, J. H., & Kassam, K. S. (2015). Debiasing decisions: Improved decision making with a single training intervention. Policy Insights from the Behavioral and Brain Sciences, 2(1), 129–140. https://doi.org/10.1177/2372732215600886

Prasad, M., Perrin, A. J., Bezila, K., Hoffman, S. G., Kindleberger, K., Manturuk, K., & Powers, A. S. (2009). There must be a reason: Osama, Saddam, and inferred justification. Sociological Inquiry, 79(2), 142–162. https://doi.org/10.1111/j.1475-682X.2009.00280.x

Southwell, B. G., Thorson, E. A., & Sheble, L. (2017). The persistence and peril of misinformation. American Scientist, 105(6), 372–375. https://doi.org/10.1511/2017.105.6.372

Response to “Mind Over Chatter: Bias Mitigation for College Students”

Reyhaneh Maktoufi1 and Kiran Samuel2

1 Northwestern University

2 Columbia University

The Mind Over Chatter essay compellingly highlights the role cognition plays in forming and sustaining unconscious bias and proposes a way forward to address the challenges of mitigating its effects and reversing its hold. As scholars from the digital humanities and social science, we were drawn to the wealth of social and cultural opportunities that interpersonal relationships, local knowledge, and community integration—key, understated elements of their proposal—unlock in the process of combating misinformation effectively from the ground up. We would like to expound on these elements to call attention to their respective value.

As the science communication community advocates for citizen-centered models of science communication (National Academies of Sciences, Engineering, and Medicine, 2017), the same principle should shape how educators conduct science communication. This project offers students the opportunity to receive mentorship from their peers and to develop skills for identifying biases through discussion and conversation rather than lectures from professors. This model stands in contrast to the top-down model, reflects collaborative production of knowledge, and can prepare students for future efforts to engage their own audiences in scientific conversations.

This program can also help students become more empathetic toward individuals who show biases in their decisionmaking. Empathy, understood as placing oneself in the situation of others (Goldie, 1999), can be a key trait of students trained to recognize their own biases. This aligns with studies in the learning sciences on the importance of teacher empathy for student learning outcomes, including understanding and acceptance of the material (Decety & Ickes, 2009). Similarly, Gribble and Oliver (1973) describe the “moral educator,” who helps students develop impartiality by becoming more empathetic to others.

Relatedly, the use of peer mentors may mitigate student anxieties about speaking up, because students are engaging with peers. Given the similar status of peer mentors and students, mentors can enhance the curriculum with important nuance in the form of shared experiences, interests, mannerisms, and cultural codes that make the curriculum more relatable and retainable outside the classroom (Coles, 1991). In essence, they speak the same language—drawing on a constructivist approach that helps students see the promise in adopting strategies advocated by peer mentors who were once in their shoes (Fosnot, 1996).

However, as interlocutors tasked with understanding the idea’s broader impact, we find it imperative to make it scalable and adaptable to different cultural contexts. Although we see the value in a collaborative group activity as proposed via the MISSING™ game, we think there is room to supplement or supplant this game with modules involving community-specific issues for students to work through. MISSING™ promises a fun and interactive opportunity to address cognitive bias through modules in which players search for a missing neighbor or play detective. But we think many of the situations most pervasive in upholding deep-seated cognitive bias are seemingly innocuous, everyday, and context-specific. The situational stimuli that evoke and stoke cognitive bias may diverge across geographies—and those divergences present opportunities for developing place-specific scenarios that deserve attention, too. Students from the Bronx, NY, may design a particular scenario that is not relevant to students from Kokomo, IN, for example, but is especially salient for them. In adapting this idea for other campuses, we think an emphasis on incorporating local knowledge can aid each community’s learning experience.

In sum, we see the promise in the Mind Over Chatter program and hope our considerations reflect its potential to combat misinformation through a collaborative, low-pressure learning environment that emphasizes peer-to-peer mentorship, empathy, and a community-based curriculum. We hope that our reflections provide some practical insight for further development.

References

Coles, C. (1991). Is problem-based learning the only way? In D. Boud & G. Feletti (Eds.), The challenge of problem based learning (pp. 313–324). London, UK: Kogan Page.

Decety, J. E., & Ickes, W. E. (Eds.). (2009). The social neuroscience of empathy. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/9780262012973.001.0001

Fosnot, C. T. (Ed.). (1996). Constructivism: Theory, perspectives, and practice. New York, NY: Teachers College Press.

Goldie, P. (1999). How we think of others’ emotions. Mind & Language, 14(4), 394–423. https://doi.org/10.1111/1468-0017.00118

Gribble, J., & Oliver, G. (1973). Empathy and education. Studies in Philosophy and Education, 8(1), 3–29. https://doi.org/10.1007/BF00375766

National Academies of Sciences, Engineering, and Medicine. (2017). Communicating science effectively: A research agenda. Washington, DC: National Academies Press.

Heuristics for the Online Curator

Mike Caulfield1 and Scott Waterman2

1 Washington State University

2 Search AI and Natural Language Processing Consultant

Many policymakers believe the solution to our current misinformation dilemma is to encourage online curators to think more about the media that reaches them. But what if the solution were to get them to think less? I say this partly to be provocative: as I’ll make clear momentarily, thinking “less” is meant here in a very specific way. The serious point is that most media literacy approaches now in vogue are a poor fit for the decentralized, high-volume environment of the web.

Years ago, I taught students to look deeply at documents and then perform a complex mental calculus at the end: Does the story seem plausible? Does the language seem emotional? What is the nature of the arguments? Any logical fallacies? Is it well sourced? Does it look professional? These methods all shared a common flaw when applied to the web: they presented detection of dubious material as a complex process of recognition and analysis. Crucially, it was a process without any clear end: one could spend five minutes or two hours on an investigation without knowing whether one had done enough.

In such an environment, is it any wonder that people don’t develop habits of verification? Checking information is seen as a fraught, time-consuming process, and even investing time in verification provides no protection against charges of failure. On the web, verification is portrayed as an art rather than a series of standard safety procedures: process is never a defense against result, and hits to credibility happen with little predictability.

Because there is no clear standard around socially prescribed verification requirements, calling out the errors people make can seem petty. After all, what can those corrected do differently? The exhortation to “try harder” and take more time is out of step with a web that favors speed, where status is often achieved by being the first in one’s social circle to post breaking news, new research, or novel insights. In such an environment, accusations of posting misinformation are easily read as attacks on the activity of posting itself.

Heuristics Provide an Alternate Solution

In cases where individuals must make quick decisions under complexity and uncertainty, rules of thumb and quick standard procedures have been shown to outperform deeper analysis. Competent decisionmakers reduce the factors they look at when making decisions and seek information on those limited factors consistently and automatically. This pattern can apply to responsible online information curation as well.

As one example, imagine a curator sees breaking news that new research has shown chemotherapy to be unnecessary for most breast cancers. The curator could look at the source URL of the story and the about page, examine the language, check whether the spelling exhibits non-native markers, and judge whether the other stories on the site are plausible. Alternatively, she could apply a simple heuristic: big, breaking stories will be reported by multiple outlets. She selects some relevant text from the page and right-clicks to search Google News. If she is met with several outlets reporting the news, she’ll take it seriously. If she doesn’t see that, she can dismiss it for the time being.

This process takes five seconds and can be practiced on any story that fits this description. It makes use of the web as a network of sites that provides tools (like Google News search) to make quick assessments of professional consensus and significant minority opinion.
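For readers who want to see how mechanical this check really is, here is a minimal sketch in Python. It assumes Google News’s public RSS search endpoint and that each feed item names its outlet in a `<source>` element; the three-outlet threshold is an illustrative choice, not a rule from the digipo curriculum, and a human curator would still scan the actual results.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# Sketch of the "check for other coverage" heuristic.
# Assumption: Google News exposes an RSS search feed at this URL and
# tags each <item> with a <source> element naming the publishing outlet.
FEED_URL = "https://news.google.com/rss/search?q={query}"

def outlets_covering(claim_text: str) -> set:
    """Return the distinct outlets whose stories match the claim text."""
    url = FEED_URL.format(query=urllib.parse.quote(claim_text))
    with urllib.request.urlopen(url, timeout=10) as response:
        feed = ET.parse(response)
    return {src.text for src in feed.iterfind(".//item/source") if src.text}

def has_other_coverage(claim_text: str, min_outlets: int = 3) -> bool:
    """Apply the heuristic: big, breaking stories appear in multiple outlets."""
    return len(outlets_covering(claim_text)) >= min_outlets

if __name__ == "__main__":
    claim = "chemotherapy not necessary for most breast cancers"
    print("Take it seriously?", has_other_coverage(claim))
```

The point of the sketch is the shape of the check, not the code itself: one query, one bounded question (how many distinct outlets?), and a clear stopping rule.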

In this example, should the story turn out to be well reported, the results present her with an array of resources that might be better than the one that originally reached her. The story that she encountered on a nutraceutical site can be shared from the New York Times or medical association news sites, lending more credibility to her claims, a process we have called “trading up” in our work with students. If she finds no coverage or coverage that frames study results differently, she understands she proceeds at her own risk. If techniques such as checking for other coverage become well-known standards for curation, she is aware that others may point out that a simple Google News search would have shown her nutraceutical source was likely misinterpreting the study.

“Check for other coverage” is just one of several heuristics that have been developed and taught by the Digital Polarization Initiative (digipo) over the past year. Other techniques include “share from the original” and basic organizational vetting. All are similarly quick, with clear rules on when sources should be treated with suspicion. Our solution draws from this experience and uses simple, teachable verification techniques pioneered by digipo. Our website will allow users to paste in a URL that requires fact-checking, answer a question or two, and produce a custom linkable page that shows, in instructional screenshots, the step-by-step process of checking that particular link. Screenshots will accurately represent the steps and resulting search results for checking that particular claim and website. Users will be allowed to annotate the steps when the choices made require explanation.
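The generator behind such a page could be quite simple. The sketch below is a rough illustration rather than a description of our actual site: the step templates are assumptions drawn from the “check for other coverage” heuristic, the inputs follow the essay (a pasted URL plus a claim), and the real tool would render screenshots rather than plain text.

```python
from urllib.parse import urlparse, quote

# Illustrative sketch of a tutorial-page generator. The step templates are
# assumptions; the essay specifies only the inputs (a URL and a question
# or two) and the output (a linkable, step-by-step instructional page).
STEPS = [
    "Select the headline or key claim on the page.",
    "Search Google News for the selected text: "
    "https://news.google.com/search?q={claim}",
    "Scan the results: are several known outlets reporting the same story?",
    "If yes, 'trade up' and share the most authoritative version; "
    "if no, treat the claim as unverified for now.",
]

def build_tutorial(url: str, claim: str) -> str:
    """Render a step-by-step fact-checking tutorial for one pasted link."""
    domain = urlparse(url).netloc
    lines = [f"How to check a claim from {domain}", ""]
    for number, template in enumerate(STEPS, start=1):
        lines.append(f"Step {number}. " + template.format(claim=quote(claim)))
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_tutorial(
        "https://example-nutraceutical.com/chemo-story",  # hypothetical URL
        "chemotherapy not necessary for most breast cancers",
    ))
```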

Such expected standards and techniques can allow curators to pursue their aims efficiently and ethically while decreasing reputational hits to credibility. In fact, such techniques may allow them to expand their influence by increasing their credibility with those at the margins of their tribe as well as with higher-level gatekeepers whom they hope will re-share their finds. Most important, by focusing on required best practice instead of results, these techniques allow online culture to develop and enforce minimum standards of verification and contextualization, something that all the links to Snopes in thread comments have failed to achieve to date.

Response to “Heuristics for the Online Curator”: Combating Misinformation by Improving Online Conversation

Sukrit Venkatagiri1 and Amy X. Zhang2

1 Virginia Tech

2 Massachusetts Institute of Technology

Misinformation has existed for as long as societies have, propagated through word-of-mouth, through pamphlets and newspapers—and now, amplified in both speed and spread through digital technology (Southwell, Thorson, & Sheble, 2018). Current solutions to tackle misinformation, such as content moderation on social media sites and manual/automated fact-checking, face an uphill battle due to the enormous volume of information produced today, and they can still easily be circumvented by motivated parties.

Scholars in the field of Human-Computer Interaction (HCI) have argued that understanding how people communicate and process information online is key to tackling this complex sociotechnical problem (Fogg & Tseng, 1999; Starbird, Maddock, Orand, Achterman, & Mason, 2014). Work by Fogg and Tseng (1999) and Yang and colleagues (2013) has shown that people employ fast heuristics to determine the credibility of a news item. More recently, the Credibility Coalition (Zhang et al., 2018) has developed a set of credibility indicators in consultation with journalists, researchers, and platform representatives.

Along similar lines, the Digital Polarization Initiative (American Association of State Colleges and Universities, 2018) has worked on building students’ web literacy skills and teaching them ways to employ these heuristics and indicators. As an extension of this work outside the classroom, Caulfield has proposed a web-based tool called “Let Me Fact Check That For You.” It allows people to share tutorials on social media that teach others how to apply heuristics to investigate a particular claim in a systematic manner. Another such tool, ConsiderIt (2016), designed by Kriplean et al. (2012), aims to improve public deliberation online. It facilitates civil deliberation by helping people find common ground while avoiding polarization; its interface “subtly encourages people to consider issues and… gain insights into the considerations of people with different perspectives, rather than making assumptions based on caricatures” (Kriplean et al., 2012). We believe tools that focus on the process of information assessment—and not merely its outcome—can teach people how to determine information credibility on their own terms, in a civil, confrontation-free manner.

These process- and human-centric tools are effective but demand considerable effort: posters must invest time to use them, and readers must engage with the results. Furthermore, the nature and structure of online conversation has remained largely unchanged since its inception, which means the same fact-checks and heated conversations occur over and over again, at different times and places. Future work should explore automated techniques that make these tools easier to use, let users quickly make sense of existing conversations, and allow those conversations to move forward more easily. For example, work by Zhang and Cranshaw (2018) facilitates summarization and enrichment of conversations into a condensed, easy-to-read format. However, to realize the full potential of these tools and techniques, large social networking sites, such as Facebook, Twitter, and Reddit, must work with researchers as well as their users to carefully incorporate them into their platforms.

Ultimately, if we truly wish to combat misinformation, not only must we promote interdisciplinary work that reimagines the way we communicate with each other online, but we must also ensure that such work is effective and deployed at a large scale.

References

American Association of State Colleges and Universities. (2018). Digital Polarization Initiative. Retrieved from http://www.aascu.org/AcademicAffairs/ADP/DigiPo/

ConsiderIt. (2016). Consider.it. Retrieved from https://www.consider.it/

Fogg, B. J., & Tseng, H. (1999). The elements of computer credibility. Presented at ACM SIGCHI Conference on Human Factors in Computing Systems (CHI '99). New York, NY, 80–87. https://doi.org/10.1145/302979.303001

Kriplean, T., Morgan, J., Freelon, D., Borning, A., & Bennett, L. (2012). Supporting reflective public thought with ConsiderIt. ACM 2012 conference on Computer Supported Cooperative Work (CSCW '12), New York, NY, 265–274. https://doi.org/10.1145/2145204.2145249

Southwell, B. G., Thorson, E. A., & Sheble, L. (Eds.). (2018). Misinformation and mass audiences. Austin, TX: University of Texas Press.

Starbird, K., Maddock, J., Orand, M., Achterman, P., & Mason, R. M. (2014). Rumors, false flags, and digital vigilantes: Misinformation on Twitter after the 2013 Boston Marathon bombing. Proceedings of the iConference 2014, 654–662.

Yang, J., Counts, S., Morris, M. R., & Hoff, A. (2013). Microblog credibility perceptions: Comparing the USA and China. Presented at ACM Conference on Computer Supported Cooperative Work (CSCW '13), New York, NY, 575–586.

Zhang, A. X., & Cranshaw, J. (2018, November). Making sense of group chat through collaborative tagging and summarization. Presented at ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW '18), Jersey City, NJ.

Zhang, A. X., Ranganathan, A., Metz, S. E., Appling, S., Sehat, C. M., Gilmore, N., . . . Mina, A. X. (2018, April). A structured response to misinformation: Defining and annotating credibility indicators in news articles. Presented at The Web Conference: Journalism, Misinformation and Fact Checking, Lyon, France. Retrieved from https://credibilitycoalition.org/pdfs/CredCoWebConf2018.pdf

Preparing Youths to “Go Above the Noise”

Randall Depew, Robin Mencher, and Michelle Parker

KQED

The Problem

Young people are inundated with information and opinions that often pose as facts. Within middle- and high-school classrooms, misinformation can slow learning and lead to misconceptions that are potentially dangerous and difficult to correct. Students need to learn how to evaluate the information in front of them, ask good questions, cut through hype and distorted media, and reduce both the spread of misinformation and its influence on their decisionmaking. This is media literacy at its core. Additionally, youths are less likely to unintentionally share and spread misinformation when they understand what goes into creating media from a producer’s point of view and can better evaluate it before taking any action. This is especially important for youths who are at a pre-voting age and need to practice these critical thinking skills successfully before heading to the polls.

Media That Models Inquiry

KQED’s new YouTube series, Above the Noise, geared toward youths aged 13–18, is designed to improve their media literacy and prepare them for greater civic engagement. The series focuses on topics that are often distorted or hyped or are ripe for partisan manipulation. Each episode raises a question that may not have an easy answer and draws on research to address it. Past topics include nuclear proliferation, free speech, teen vaping, facial recognition technology, and immigration, among others. Some episodes also dive deep into why people look at issues the way they do, such as “Can You Win an Argument With a Conspiracy Theorist?” and “Why Do Our Brains Love Fake News?” There are now four dozen episodes, with new episodes released every other Wednesday.

Above the Noise is hosted by two young journalists of color, Myles and Shirin, who model how to ask smart questions, weed through the science, data, and research available, interpret that information, and then come to a deeper understanding of the issue and an ability to make an argument based on evidence.

Above the Noise is integrated into KQED Learn, a free online platform for middle- and high-school classrooms that supports students and teachers by modeling how to ask good questions, investigate good answers, and share both using a wide set of media-making tools. Both in producing the series and in developing other content, KQED integrates youths into the editorial process of our media creation, an approach that is somewhat rare among media organizations in general and (perhaps surprisingly) rare among curriculum developers as well. Our Youth Advisory Board ensures that the topics we cover are relevant to young people’s lives and strike the tone that will draw them in.

Opportunities for Student Interaction and Hands-on Learning

On KQED Learn, teachers guide their students through inquiry-based learning. Students interact and collaborate with students outside their own communities who may have different perspectives and experiences that influence how they view important issues. They do this in a safe “walled garden” and take advantage of opportunities to publish their own perspectives and conclusions to a peer audience, developing both their media literacy and their civic engagement skills. This student-centered learning experience models a framework for navigating the information chaos that young people experience daily, helping youths arrive at and communicate their own conclusions based on solid evidence. KQED Learn provides students the catalyst and the safe space to develop and practice the tools of critical inquiry and analysis, which they can then use to deal with their specific teenage context of misinformation. KQED Learn also advances KQED’s goal of improving access and equity for students by developing skills and spaces for underrepresented voices to be heard and valued.

KQED Learn offers a discussion activity called Go Above the Noise, which harnesses the cultural currency of YouTube (and Above the Noise’s place there) to engage young people and model for them inquiry techniques and other deeper learning skills. For each Go Above the Noise discussion activity, students watch an episode of Above the Noise, explore the supporting resources, and respond to the question posed, using evidence to back their claim. Transcripts of Above the Noise are provided in English and Spanish.

By participating in these discussions in the mediated online space of KQED Learn, students can build the skills they need to actively participate in their communities in a way that does not spread misinformation. The media literacy skills they learn are key to an informed, empathetic citizenry and a healthy democracy. The discussions are designed to:

  • Build skills that encourage critical analysis and decisionmaking.

  • Develop informed, reflective, and engaged participants essential for democratic society.

  • Teach students not what to think, but how they can arrive at informed choices that are most consistent with their own values.

  • Help students become aware of and reflect on the meaning that they make of media messages, including how the meaning they make relates to their own values.

  • Help students analyze media messages to understand and appreciate different perspectives and points of view.

Through active classroom use of KQED Learn, the Above the Noise series, and the corresponding Go Above the Noise discussions, students can evaluate information, deepen understanding, improve critical thinking, and engage with peers about complex topics.

Response to “Preparing Youths to ‘Go Above the Noise’”: Arming Youths with News Literacy Tools to Combat the Spread of Misinformation

Fernando Severino1 and Carin Tunney2

1 University of Minnesota

2 Michigan State University

Today’s media environment poses an obstacle course of hyperbole, partisanship, and lies. Adults fail this challenge daily by sharing misinformation on social media or at the water cooler. This information trickles from one person to the next, creating a flood of misinformation. The cognitive processes and biases of adults can be difficult to alter; therefore, attention has focused on interventions among young people (e.g., Butler, 2010; Gainer, 2010). For example, KQED, the San Francisco Bay Area PBS affiliate, proposes a solution to teach youth audiences to go “above the noise” of misinformation and think more critically about topics in the news. This project falls into the broader category of critical media literacy efforts in middle and high school settings. As a general definition, media literacy can be understood as “the ability of a citizen to access, analyze, and produce information for specific outcomes” (Aufderheide & Firestone, 1993, p. 6).

Above the Noise is an in-class and online project that arms youths with tools to fight misinformation through news literacy, built around free YouTube videos and a classroom activation curriculum. Producers claim that the topics presented within the series encourage students aged 13–18 to think deeply, find evidence, and draw informed conclusions about news. Additionally, KQED follows the tradition of school interventions that seek to elevate youth voices in the national discussion by giving students the chance to produce media themselves (Barron, Gomez, Pinkard, & Martin, 2014). To that end, a limited number of schools work alongside the station’s reporters in a one-day “youth takeover” that teaches the skills of news gathering, reporting, and production.

The greatest strength of the project is sustainability. The KQED project is fully operational and demonstrating success as measured through online metrics, which show that by 2018 more than 1,000 educators had enrolled, mostly through word of mouth. Yet even with KQED’s status as one of the largest public broadcasting organizations in the United States, affiliated with National Public Radio, the producers recognize they do not have the support or personnel to implement a more uniform strategy to reach decisionmakers within each district and centralize dissemination throughout the San Francisco Bay Area.

Although KQED measures success with online metrics, a significant opportunity lies in evaluating the learning outcomes of this media literacy project. Although interest in media literacy is on the rise, gathering empirical evidence of its impact remains a challenge (e.g., Arke & Primack, 2009; Greene et al., 2015). For instance, Arke and Primack (2009) suggest media literacy and critical thinking can be measured and evaluated using theory-based scales. Others have successfully explored mixed-methods approaches to evaluating media literacy in health-related topics (e.g., Wilksch, Tiggemann, & Wade, 2006). Taken together, this academic evidence suggests that a partnership between Above the Noise and a university could allow for field experiments within schools that measure learning outcomes. This could strengthen the ability to secure future funding.

References

Arke, E. T., & Primack, B. A. (2009). Quantifying media literacy: Development, reliability, and validity of a new measure. Educational Media International, 46(1), 53–65. https://doi.org/10.1080/09523980902780958

Aufderheide, P., & Firestone, C. (1993). Media literacy: A report of the national leadership conference on media literacy. Queenstown, MD: Aspen Institute.

Barron, B., Gomez, K., Pinkard, N., & Martin, C. K. (2014). The Digital Youth Network: Cultivating digital media citizenship in urban communities (John D. and Catherine T. MacArthur Foundation series on digital media and learning). Cambridge, MA: The MIT Press.

Butler, A. (2010). Media education goes to school: Young people make meaning of media & urban education. New York, NY: Peter Lang.

Gainer, J. S. (2010). Critical media literacy in middle school: Exploring the politics of representation. Journal of Adolescent & Adult Literacy, 53(5), 364–373. https://doi.org/10.1598/JAAL.53.5.2

Greene, K., Yanovitzky, I., Carpenter, A., Banerjee, S., Magsamen-Conrad, K., Hecht, M., & Elek, E. (2015). A theory-grounded measure of adolescents’ response to a media literacy intervention. The Journal of Media Literacy Education, 7(2), 35–49.

Wilksch, S., Tiggemann, M., & Wade, T. (2006). Impact of interactive school-based media literacy lessons for reducing internalization of media ideals in young adolescent girls and boys. International Journal of Eating Disorders, 39(5), 385–393. https://doi.org/10.1002/eat.20237

It Takes A Village: Countering Digital Misinformation in Maternal Health Decisionmaking for Infant Vaccination

Amanda S. Bradshaw, Debbie Treise, and Carolyn Carter

University of Florida

Vaccinations have been linked to the prevention of 42,000 deaths and 20 million cases of disease in each birth cohort (Andre et al., 2008). However, vaccines have become a victim of their own success in the Western world: some parents are more focused on potential side effects of vaccines than on the diseases they prevent. While overall vaccination rates remain high in the United States, in some geographic areas vaccination coverage is dangerously low due to parental refusal, and outbreaks of once-eliminated diseases, such as measles, are occurring.

Although childhood vaccination decisionmaking begins during pregnancy, half of expectant mothers reported receiving inadequate information about childhood vaccinations from providers prenatally, with first-time mothers identifying as more vaccine hesitant (Danchin et al., 2017). While the majority of OB-GYNs agreed that this issue is important, less than half stated they could influence parental decisionmaking (Link-Gelles et al., 2012). Likewise, an American Academy of Pediatrics survey showed that only 5 to 39 percent of first-time expectant mothers attend prenatal visits with a pediatrician (Yogman, Lavin, & Cohen, 2018). A “pregnancy–childhood vaccination education gap” exists, leaving expectant mothers in a position to make a less-than-informed infant vaccination decision.

A strong need has been identified to develop evidence-based risk-communication strategies to “counteract any influence that could cause ungrounded fears of vaccines to spread to the general population” (Kahan, 2013, p. 54). Anti-vaccination social media content shared by trusted peers, in conjunction with a lack of meaningful conversation with healthcare providers during pregnancy, directly and indirectly decreases infant vaccination uptake. Notably, only 30 percent of YouTube videos about vaccination were produced by health professionals, and videos depicting highly distressed infants and negative messaging about pain and adverse events, such as autism, receive more likes and shares (Covolo et al., 2017; Harrison et al., 2016).

One proposed intervention incorporates technology and education through the development of an evidence-based, engaging pro-vaccination YouTube video to address misinformation surrounding Hepatitis B, the first vaccination recommended within 24 hours of birth. Presumably, a parental vaccination decisionmaking pattern could emerge based on acceptance or refusal of this first vaccine. Effective communication may be devised through the lens of Social Judgment Theory (SJT) (Sherif et al., 1965), which describes a person’s attitude as a set of categories that can be measured based on latitudes (ranges) of positions considered acceptable, objectionable, or neither. The probability of changing one’s attitude is greater if a message is similar to, or only marginally different from, one’s internalized attitude; thus, a pro-vaccination Hepatitis B YouTube video that falls within the “latitude of noncommitment” could lead to improved perceptions and increased expressed intentions to vaccinate.

In a community-based participatory research approach, this YouTube video will be developed, screened at a pilot Expectant Mommy Expo designed to inform maternal health decisionmaking, disseminated on popular channels such as Baby Center, and tailored for seamless incorporation into a prenatal care setting during the third trimester of pregnancy. For expectant mothers who have not yet begun to consider vaccination or for providers who do not emphasize this in their practice, this innovation will serve as a critical foundation.

Our innovation could achieve the following aims:

  1. Produce an effective YouTube video to counter the widely shared anti-vaccination videos, modeling a video production format for future health issues and contributing to the online dialog about vaccination in a meaningful way.

  2. Equip expectant mothers to discern factual information from myth during pregnancy, specifically helping childhood vaccination fence-sitters and first-time mothers make informed vaccination decisions.

  3. Model a community-based Expectant Mommy Expo to link disparate yet influential entities to minimize the effects of misinformation, which can have serious public health ramifications.

  4. Engage physicians in the dialog about childhood vaccination and equip them to effectively utilize social media platforms to combat misinformation and lessen the “pregnancy–childhood vaccination education gap.”

  5. Disseminate the video on social platforms, and propose strategies to incorporate video messaging about vaccination into standard prenatal care during the third trimester of pregnancy.

A popular adage suggests it takes a village to raise a child; the goal is not only to counter misinformation with facts but also to connect stakeholders such as pregnant women, obstetricians, newborn care providers, bloggers, journalists, citizen scientists, and community leaders to combat digital misinformation in the context of childhood vaccination decisionmaking.

References

Andre, F. E., Booy, R., Bock, H. L., Clemens, J., Datta, S. K., John, T. J., . . . Schmitt, H. J. (2008). Vaccination greatly reduces disease, disability, death and inequity worldwide. Bulletin of the World Health Organization, 86(2). Retrieved from http://www.who.int/bulletin/volumes/86/2/07-040089/en/

Covolo, L., Ceretti, E., Passeri, C., Boletti, M., & Gelatti, U. (2017). What arguments on vaccinations run through YouTube videos in Italy? A content analysis. Human Vaccines & Immunotherapeutics, 13(7), 1693–1699. https://doi.org/10.1080/21645515.2017.1306159

Danchin, M. H., Costa-Pinto, J., Attwell, K., Willaby, H., Wiley, K., Hoq, M., . . . Marshall, H. (2017). Vaccine decision-making begins in pregnancy: Correlation between vaccine concerns, intentions and maternal vaccination with subsequent childhood vaccine uptake. Vaccine, 36(44), 6473–6479. https://doi.org/10.1016/j.vaccine.2017.08.003

Harrison, D., Wilding, J., Bowman, A., Fuller, A., Nicholls, S. G., Pound, C. M., . . . Sampson, M. (2016). Using YouTube to disseminate effective vaccination pain treatment for babies. PLoS One, 11(10), 1–10. https://doi.org/10.1371/journal.pone.0164123

Kahan, D. M. (2013). Social science. A risky science communication environment for vaccines. Science, 342(6154), 53–54. https://doi.org/10.1126/science.1245724

Link-Gelles, R., Chamberlain, A. T., Schulkin, J., Ault, K., Whitney, E., Seib, K., & Omer, S. B. (2012). Missed opportunities: A national survey of obstetricians about attitudes on maternal and infant immunization. Maternal and Child Health Journal, 16(9), 1743–1747. https://doi.org/10.1007/s10995-011-0936-0

Sherif, C. W., Sherif, M., & Nebergall, R. E. (1965). Attitudes and attitude change: The social judgment-involvement approach. Philadelphia, PA: W. B. Saunders.

Yogman, M., Lavin, A., & Cohen, G. (2018). The prenatal visit. Pediatrics, 142(1), 1–10. https://doi.org/10.1542/peds.2018-1218

Response to “It Takes A Village”

Fighting Vaccine Misinformation on Social Media and YouTube

Paul Mena1 and Marcus Mann2

1 University of Florida

2 Duke University

Public health interventions aimed at fighting vaccine misinformation, like It Takes a Village, are more important than ever as rates of vaccine-preventable diseases rise and misinformation campaigns flourish unimpeded. Moreover, focused campaigns that target a specific population at a specific time (expectant mothers about to make their first decision on vaccines) and that incorporate local interventions in addition to a broader YouTube campaign are admirable in how they pair offline and online strategies.

However, correcting misperceptions that individuals have already adopted is notoriously difficult. The effectiveness of online fact-checking interventions, like those provided by sites such as Snopes or PolitiFact, has been found to be underwhelming (Nyhan & Reifler, 2010; Thorson, 2016). On the issue of vaccines specifically, actively correcting misperceptions about the link between the MMR vaccine and autism has actually been found to decrease the likelihood that already indecisive parents will vaccinate their children (Nyhan, Reifler, Richey, & Freed, 2014). A much broader literature, concentrated primarily on political attitudes, focuses on “backfire effects” and finds that people exposed to counter-attitudinal information are likely to double down on their beliefs and become even more entrenched in their views (Bail et al., 2018; Nyhan & Reifler, 2010). But under certain circumstances, strategies can be adopted to minimize these kinds of effects (Bode & Vraga, 2015; Wood & Porter, 2016). This is all to say that vaccine interventions designed simply to inform are likely to fall short of the mark in many cases. Below, we reflect on two areas of research that might help future intervention efforts think beyond “correcting” or “fact-checking” frameworks.

First, visuals can be effective in correcting misinformation (Dixon, McKeever, Holton, Clarke, & Eosco, 2015). Particularly regarding misperceptions about vaccines and autism, pictures may significantly influence beliefs (Dixon et al., 2015). Videos may also be successful in correcting viewers’ misperceptions generated by prescription drug ads (Aikin et al., 2015). These findings suggest that a video intervention might be effective in addressing misinformation, particularly on health-related issues.

Second, research on declines in trust in journalists (e.g., Ladd, 2012) and scientists (e.g., Gauchat, 2012) offers insights into how anti-vaxx views may be only a symptom of a larger trend away from once universally revered knowledge institutions. This broader view prompts scholars and practitioners interested in combating anti-vaxx misinformation to address not only the substantive myths about vaccines but also the broader worldviews that are fostering distrust in doctors and scientists themselves. Future interventions might consider the humanization of the people occupying these professions as one of their primary goals.

These are only two ideas among countless avenues to improve and diversify interventions meant to address vaccine misinformation. It is our hope that we see these and many more tested and implemented with the ultimate goal of saving lives and improving public health.

References

Aikin, K. J., Betts, K. R., O’Donoghue, A. C., Rupert, D. J., Lee, P. K., Amoozegar, J. B., & Southwell, B. G. (2015). Correction of overstatement and omission in direct-to-consumer prescription drug advertising. Journal of Communication, 65(4), 596–618. https://doi.org/10.1111/jcom.12167

Bail, C. A., Argyle, L. P., Brown, T. W., Bumpus, J. P., Chen, H., Hunzaker, M. B. F., . . . Volfovsky, A. (2018). Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences of the United States of America, 115(37), 9216–9221. https://doi.org/10.1073/pnas.1804840115

Bode, L., & Vraga, E. K. (2015). In related news, that was wrong: The correction of misinformation through related stories functionality in social media. Journal of Communication, 65(4), 619–638. https://doi.org/10.1111/jcom.12166

Dixon, G. N., McKeever, B. W., Holton, A. E., Clarke, C., & Eosco, G. (2015). The power of a picture: Overcoming scientific misinformation by communicating weight-of-evidence information with visual exemplars. Journal of Communication, 65(4), 639–659. https://doi.org/10.1111/jcom.12159

Gauchat, G. (2012). Politicization of science in the public sphere: A study of public trust in the United States, 1974 to 2010. American Sociological Review, 77(2), 167–187. https://doi.org/10.1177/0003122412438225

Ladd, J. M. (2012). Why Americans hate the media and how it matters. Princeton, NJ: Princeton University Press.

Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330. https://doi.org/10.1007/s11109-010-9112-2

Nyhan, B., Reifler, J., Richey, S., & Freed, G. L. (2014). Effective messages in vaccine promotion: A randomized trial. Pediatrics, 133(4), e835–e842. https://doi.org/10.1542/peds.2013-2365

Thorson, E. (2016). Belief echoes: The persistent effects of corrected misinformation. Political Communication, 33(3), 460–480. https://doi.org/10.1080/10584609.2015.1102187

Wood, T., & Porter, E. (2016). The elusive backfire effect: Mass attitudes’ steadfast factual adherence (SSRN Scholarly Paper No. ID 2819073). Rochester, NY: Social Science Research Network.

CHIME: The Campaign for Health Information Empowerment

Susan Jacobson and Weirui Wang

Florida International University

The concept of political misinformation is familiar to Americans following the 2016 presidential election. But misleading health information is often more pervasive and more pernicious than fake political news, due to financial pressures from the medical industry, the scarcity of good science and medical journalism, the pursuit of clicks by Internet publishers, and a relatively low level of health literacy among the general public (Aspen Ideas Festival, 2017; Schattner, 2017). Nearly 80 percent of American internet users have sought health information online (Fox & Duggan, 2013), and patient-generated social media discussion forums are increasingly used as online venues for the exchange of health-related information and advice. However, while the web gives citizens access to a vast array of medical information sources, it also challenges consumers to distinguish valuable information from misinformation that may be presented to them by internet page-ranking algorithms, including misleading information disguised as legitimate news (Brossard & Scheufele, 2013; Tennant et al., 2015). We propose to launch a research-driven grassroots movement that will fortify citizens against the appeals of health misinformation. We call our project “CHIME: The Campaign for Health Information Empowerment.”

This project has three phases. In Phase I, we conduct experiments designed to reveal the appeals of misleading health information. Specifically, we are conducting online experiments with a 2 (news message: fake vs. real news) × 2 (health issue: cancer treatment vs. mosquito control) between-subjects design. We include open-ended questions and rating scales to check whether participants are able to identify characteristics of false information discussed in the prior literature, such as an emotionally driven style, reliance on false information and conspiracy theories, lack of transparency, and spoofing. A main effect of news message (fake vs. real news) on attitudes and behavioral intentions will demonstrate the impact of fake health news. Perceived issue controversy, knowledge, and prior attitudes (measured independent variables) may moderate the effects of news messages, providing a more nuanced understanding of misleading health news and its effects.
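As a concrete illustration of the design, the sketch below randomly assigns participants to the four cells of the 2 × 2 layout. The factor labels follow the description above; the balanced assignment scheme and participant IDs are illustrative assumptions, not the authors’ actual protocol.

```python
import random

# Illustrative random assignment for the 2 (news message) x 2 (health issue)
# between-subjects design; the balancing scheme and ID format are assumptions.
NEWS_MESSAGE = ("fake", "real")
HEALTH_ISSUE = ("cancer_treatment", "mosquito_control")
CONDITIONS = [(m, h) for m in NEWS_MESSAGE for h in HEALTH_ISSUE]

def assign(participant_ids, seed=2018):
    """Shuffle a balanced list of the four cells across participants."""
    rng = random.Random(seed)
    repeats = -(-len(participant_ids) // len(CONDITIONS))  # ceiling division
    cells = (CONDITIONS * repeats)[: len(participant_ids)]
    rng.shuffle(cells)
    return dict(zip(participant_ids, cells))

if __name__ == "__main__":
    ids = [f"P{i:03d}" for i in range(1, 9)]
    for pid, (message, issue) in sorted(assign(ids).items()):
        print(pid, message, issue)
```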

In Phase I, we ask participants to respond to both real and fake stories, and we will compare the responses of journalists and journalism educators, whose training should make them more familiar with the practices of good journalism, with those of the general public. In doing so, we build upon scholarly work in media literacy and critical thinking, which may be part of the solution to the misinformation problem.

In addition to media literacy, we postulate that certain predispositions make people more likely to be influenced by misinformation and conspiracy theories, such as levels of public trust, risk perception, tolerance of uncertainty, ideological bias, and motivated reasoning. In Phase II, we will develop a variety of inoculation messages that address these psychological processes and remind people to engage in critical thinking about media sources and content, with the goal of determining the best strategies for inoculating citizens against misleading health information.

In Phase III, we will work with networks of patient-centered nongovernmental organizations and patient communities on social media to validate and disseminate our findings in real-world settings. We will work with our partner organization, Living Beyond Breast Cancer (LBBC), to crystallize best practices in the battle against fake health news. LBBC has a four-star rating from Charity Navigator and provides programs that reach more than 500,000 people every year; its mission is to connect people to trusted information and a community of support. We will share our results in semi-structured interviews with administrators of patient-centered social media groups and brainstorm ways to implement our findings with these groups. Students at Florida International University will research and develop a formal campaign to disseminate our findings. Our goal is to launch a grassroots movement that empowers citizens to become more aware of the misinformation problem and more resistant to the effects of such information.

References

Aspen Ideas Festival. (2017, June). Fake health news metastasizes [Audio file]. Aspen Ideas Festival Spotlight Health. Retrieved from https://soundcloud.com/aspenideas/fake-health-news-metastasizes

Brossard, D., & Scheufele, D. A. (2013). Science, new media, and the public. Science, 339(6115), 40–41. https://doi.org/10.1126/science.1232329

Fox, S., & Duggan, M. (2013). The diagnosis difference: Part two: Sources of health information. Pew Research Center. Retrieved from http://www.pewinternet.org/2013/11/26/part-two-sources-of-health-information/

Schattner, E. (2017). Can cancer truths be told? Challenges for medical journalism. In D. S. Dizon & N. Pennell (Eds.), 2017 ASCO Educational Book. Alexandria, VA: ASCO.

Tennant, B., Stellefson, M., Dodd, V., Chaney, B., Chaney, D., Paige, S., & Alber, J. (2015). eHealth literacy and Web 2.0 health information seeking behaviors among baby boomers and older adults. Journal of Medical Internet Research, 17(3), e70. https://doi.org/10.2196/jmir.3992

Response to “CHIME: The Campaign for Health Information Empowerment”

Kilhoe Na and Shannon Poulsen

The Ohio State University

The campaign proposed by Drs. Susan Jacobson and Weirui Wang offers a way to address health misinformation in the area of breast cancer and beyond. Although the technique employed by this project, inoculation, is not novel in misinformation research (e.g., Cook, Lewandowsky, & Ecker, 2017), this campaign has great potential to demonstrate that (a) inoculation is an effective solution to misinformation epidemics in health news, (b) social media can be a useful tool for conveying inoculation messages, and (c) collaboration with nonprofit organizations can contribute to a health campaign’s success, especially if refined with the suggestions made during the Misinformation Solutions Forum.

Given that it is difficult to correct misinformation once it has been processed (Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012), it may be more effective to neutralize potential misinformation in advance through a technique called inoculation (McGuire & Papageorgis, 1961), or “prebunking.” The extant literature explores inoculation in a few health contexts, such as anti-smoking campaigns. The current project will use the technique in the novel context of breast cancer. If inoculation proves effective in this context, where individuals experience a high level of fear and anxiety, it can be applied to other severe health-related contexts.

Another novel approach implemented in the proposed project is the use of social media to deliver the inoculation messages. Social media are a tool for providing breast cancer patients with education and support (Attai et al., 2015). However, little research has been done to examine the effect of inoculation messages disseminated on social media. The present project plans to deliver the inoculation messages via social media to members of various breast cancer support groups on Facebook. This process will be a good test of whether social media can be a viable platform for conveying inoculation messages, in contrast to traditional education settings, such as classrooms (e.g., Kowalski & Taylor, 2009).

The campaign could be launched with a partner organization, Living Beyond Breast Cancer. Partnering with an established organization gives the project numerous advantages, including direct access to, and standing within, the community it aims to serve. Should the project demonstrate success, the researchers plan to extend the campaign strategies to other illness-related organizations.

At present, the team may find the following suggestions useful for maximizing their intended outcomes. First, rather than building a taxonomy of false-information characteristics from scratch, the team can turn to the extant literature for the information needed to create various inoculation strategies; critically, these strategies must be sufficiently pretested before the campaign launches. Second, the project must also elaborate its plan for evaluating the campaign’s success, as careful evaluation is vital to expanding the campaign within and beyond breast cancer. Third, the researchers should consider how the messages may elicit different responses across individuals; for example, the campaign’s effectiveness may depend on a participant’s stage of cancer or treatment. Finally, although social media are a source of information for many cancer patients, traditional media still play an important role in providing health information, and the researchers should collaborate with journalists to further expand the scope of this campaign.

References

Attai, D. J., Cowher, M. S., Al-Hamadani, M., Schoger, J. M., Staley, A. C., & Landercasper, J. (2015). Twitter social media is an effective tool for breast cancer patient education and support: Patient-reported outcomes by survey. Journal of Medical Internet Research, 17(7), e188. https://doi.org/10.2196/jmir.4721

Cook, J., Lewandowsky, S., & Ecker, U. K. (2017). Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PLoS One, 12(5), e0175799. https://doi.org/10.1371/journal.pone.0175799

Kowalski, P., & Taylor, A. K. (2009). The effect of refuting misconceptions in the introductory psychology class. Teaching of Psychology, 36(3), 153–159. https://doi.org/10.1080/00986280902959986

Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131. https://doi.org/10.1177/1529100612451018

McGuire, W. J., & Papageorgis, D. (1961). The relative efficacy of various types of prior belief-defense in producing immunity against persuasion. Journal of Abnormal and Social Psychology, 62(2), 327–337. https://doi.org/10.1037/h0042026

Getting Science in the Picture: Digital Game Development to Curb the Spread of Misinformation

Kathy Pezdek,1 Erica Abed,1 Abigail M. Dean,1 Elise Fenn,2 and Ariella Lehrer3

1 Claremont Graduate University

2 California State University Northridge

3 HitPoint Studios

According to Pew Research Center (Shearer & Gottfried, 2017), two-thirds of Americans get their news through social media, and 45 percent from Facebook alone. In a digital format, news is typically presented as a single headline along with an often-irrelevant photo. In this format, people typically scroll through news feeds quickly, especially on mobile devices.

When people process information quickly, they are especially likely to rely on fast “heuristic” processing to decide if a news item is true or false and whether to share it. One common heuristic is confirmation bias—the tendency to accept information consistent with one’s beliefs and reject information inconsistent with one’s beliefs (Nickerson, 1998). Fast judgments can also be influenced by related but irrelevant information such as accompanying photos (Fenn, Newman, Pezdek, & Garry, 2013). This heuristic cognitive processing style is responsible for people (a) rapidly judging news items to be true and then (b) sharing them (e.g., Li & Sakamoto, 2014), a major reason why false information spreads faster and farther than true news (Vosoughi, Roy, & Aral, 2018). Heuristic processing of news is so significant in determining how misinformation spreads that RAND named it one of the four critical topics to address in interventions that combat misinformation (Kavanagh & Rich, 2018).

In addition, at least 85 percent of people in the United States get their news on mobile devices (Shearer & Gottfried, 2017). On mobile devices, news headlines are typically scrolled through quickly and presented with accompanying photos, two cognitive processing factors that obscure the difference between true and false news. The “truthiness effect” is well established in the research literature: when statements are accompanied by photos, people are biased toward judging them “true,” and this truth bias persists for at least several days (Fenn et al., 2013). The persistence of this bias bolsters the potential for success of our proposed intervention to reduce the sharing of misinformation. Because news items judged to be true are more likely to be shared (Li & Sakamoto, 2014), reducing the bias to judge false news as true should reduce the prevalence of sharing false news.

A great deal of attention has focused on structural changes to prevent misinformation from spreading, notably warning labels placed on potentially false headlines, for example in Facebook’s newsfeed (Kavanagh & Rich, 2018). However, interventions that simply label the credibility of information are likely to backfire because the credibility cue fades from memory even when the information itself is remembered (the sleeper effect; Underwood & Pezdek, 1998). One tactic for reducing misinformation that has received less attention is addressing how people interpret information in the first place. Targeting individual-level factors to reduce the spread of misinformation is likely to be more effective than targeting structural-level factors (i.e., warning labels) because it is the individual who ultimately decides whether to believe and share information. Research suggests that cognitive biases can be reduced by (a) teaching people to identify situations in which they typically rely on heuristics and then (b) training them to engage in slower, more deliberate, analytical reasoning (Dunbar et al., 2017); short-term interventions that train critical thinking can work. People will be more accurate at differentiating between true and false news if they are taught to (a) identify and then ignore information that is irrelevant to evaluating truth and (b) use slower, more analytic processing when making truth judgments.

Our proposed innovation is an online game designed to train people to discriminate between true and false news. We selected this style of intervention because its interactivity will engage users and because it can be made available cost-effectively to a diverse population of all ages. A few video games aimed at teaching users about misinformation are currently available. Fewer than half of these games aim to teach users to identify false information and reduce the spread of misinformation, and none of them presents specific strategies to help users do so.

Our solution is grounded in strategies that scientific research has shown to abate the spread of misinformation. Based on the relevant cognitive science research, we have identified three cognitive processing factors that contribute significantly to the spread of misinformation, and we address them head-on with our proposed intervention. Using a digital game format, our proposed innovation will train people to (a) slow down their reading and processing of information, (b) ignore irrelevant visual information, and (c) reduce personal biases. These cognitive strategies will reduce the spread of misinformation and foster more “mindful” approaches to reading the news.
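
To illustrate how these strategies might translate into game mechanics, here is a toy sketch of a single round; it is our illustration rather than the authors’ design, and the minimum-reading delay, item fields, and feedback messages are all assumptions.

```python
# Toy sketch of one training round: a minimum reading delay nudges players
# toward slower processing, and feedback flags nonprobative photos.
# Field names and the 8-second threshold are illustrative assumptions.
import time
from dataclasses import dataclass

@dataclass
class NewsItem:
    headline: str
    has_photo: bool  # nonprobative photo attached to the headline?
    is_true: bool    # ground-truth label used for feedback

MIN_READ_SECONDS = 8.0  # hypothetical threshold to discourage skimming

def play_round(item: NewsItem) -> bool:
    print(item.headline + ("  [photo attached]" if item.has_photo else ""))
    start = time.monotonic()
    answer = input("True or false? (t/f): ").strip().lower() == "t"
    elapsed = time.monotonic() - start

    if elapsed < MIN_READ_SECONDS:
        print("Slow down: fast judgments lean on heuristic processing.")
    if item.has_photo and answer and not item.is_true:
        print("The photo was irrelevant to the claim's truth (truthiness).")
    correct = answer == item.is_true
    print("Correct!" if correct else "Incorrect.")
    return correct
```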

References

Dunbar, N. E., Jensen, M. L., Miller, C. H., Bessarabova, E., Lee, Y. H., Wilson, S. N., . . . Burgoon, J. K. (2017). Mitigation of cognitive bias with a serious game: Two experiments testing feedback timing and source. International Journal of Game-Based Learning, 7(4), 86–100. https://doi.org/10.4018/IJGBL.2017100105

Fenn, E., Newman, E. J., Pezdek, K., & Garry, M. (2013). The effect of nonprobative photographs on truthiness persists over time. Acta Psychologica, 144(1), 207–211. https://doi.org/10.1016/j.actpsy.2013.06.004

Kavanagh, J., & Rich, M. D. (2018). Truth decay: An initial exploration of the diminishing role of facts and analysis in American public life. RAND Corporation. Retrieved from https://www.rand.org/pubs/research_reports/RR2314.html

Li, H., & Sakamoto, Y. (2014). Social impacts in social media: An examination of perceived truthfulness and sharing of information. Computers in Human Behavior, 41, 278–287. https://doi.org/10.1016/j.chb.2014.08.009

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. https://doi.org/10.1037/1089-2680.2.2.175

Shearer, E., & Gottfried, J. (2017, September 7). News use across social media platforms. Pew Research Center. Retrieved from http://www.journalism.org/2017/09/07/news-use-across-social-media-platforms-2017/

Underwood, J., & Pezdek, K. (1998). Memory suggestibility as an example of the sleeper effect. Psychonomic Bulletin & Review, 5(3), 449–453. https://doi.org/10.3758/BF03208820

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559

Response to “Getting Science in the Picture”

Games With a Purpose to Curb Misinformation

Maria D. Molina1 and Mihai Avram2

1 Penn State University

2 University of Illinois

Games with a purpose present a relatively new approach that aims to solve large-scale societal problems collectively by engaging the community in the process (von Ahn, 2006). The growing number of users consuming news from democratized, opaque, and potentially biased social media feeds (Matsa & Shearer, 2018) has given rise to a new theme of games in new-media literacy. These games serve various purposes, such as educating the public about news literacy (Factitious by American University and Play Fake News by ISL), gleaning useful statistics on popular political misconceptions (PolitiTruth by PolitiFact), teaching people about the fake-news ecosystem (Bad News Game by DROG and Fake It To Make It by Amanda Warner), and teaching social media newsfeed literacy (Fakey by Indiana University). The game proposed by Pezdek and colleagues distinguishes itself from these efforts by positioning the game as a teaching tool for schools and by offering strategies, grounded in cognitive science research, for filtering accurate claims from low-quality information.

Pezdek and colleagues propose a learning platform that encourages users to slow down the processing of information, ignore unimportant visual cues, and reduce personal biases before they decide to share content online. Using such cognitive strategies is a promising idea because assessing credibility using content-level factors alone can be problematic. For example, linguistic markers such as clickbait are not unique to misinformation (Rony, Hassan, & Yousuf, 2017). Thus, when developing media literacy interventions, it is important to take a more holistic approach by looking at the many different strategies people use to evaluate credibility online. Among these are cognitive heuristics, or rules of thumb, like the one targeted by Pezdek and colleagues, in which users rely on the realism of an image because “seeing is believing.” The realism heuristic alone, however, might not provide sufficient evidence to mark a sharing decision as correct or incorrect: accurate information also relies on images to illustrate a point, and although such images may cue the realism heuristic and push parts of the information to the forefront, that does not make the information false. To increase the robustness of the intervention, several other heuristics can serve as indicators: bandwagon cues, such as likes and shares, which signal people’s favorability toward content; identity cues, which increase trust in articles shared by closer ties; and interactivity cues, which reveal activity and dialog (Sundar, 2008). Including such heuristic cues can also be a viable way to teach media literacy.
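
As one illustration of this multi-cue approach, the sketch below shows how a game or intervention might surface several such cues at once rather than scoring realism alone; the post fields and thresholds are hypothetical teaching devices, not validated credibility indicators.

```python
# Toy sketch: surface the heuristic cues a reader is likely responding to,
# in the spirit of the MAIN model's cue categories. Fields and thresholds
# are hypothetical; this is a teaching aid, not a credibility classifier.
from dataclasses import dataclass

@dataclass
class Post:
    has_realistic_image: bool  # realism cue ("seeing is believing")
    share_count: int           # bandwagon cue
    shared_by_close_tie: bool  # identity cue
    comment_count: int         # interactivity cue

def cue_report(post: Post) -> list[str]:
    cues = []
    if post.has_realistic_image:
        cues.append("realism: a vivid photo does not make a claim true")
    if post.share_count > 1000:
        cues.append("bandwagon: popularity is not accuracy")
    if post.shared_by_close_tie:
        cues.append("identity: trust in the sharer transfers to the content")
    if post.comment_count > 100:
        cues.append("interactivity: active discussion signals attention, not truth")
    return cues
```

In a game, such a report could be shown after each sharing decision to prompt reflection on which cue actually drove the judgment.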

Given the desiderata of creating media literacy games in effective, sustainable, and ethical ways, creators of such tools should keep the following in mind. First, rapid prototyping and feedback systems should be put in place to glean insights from stakeholders quickly and iteratively. Second, creators should vet the content used in media literacy games by reviewing fair use and copyright guidelines and by consulting ethics experts. Third, given the growing popularity of mobile devices for news consumption (Fedeli & Matsa, 2018), transitioning the games to native mobile apps can increase accessibility and usage. Fourth, we encourage the creators to study other media literacy games, take note of the successful elements of those projects, and innovate on those elements where applicable. Finally, developers should understand and apply the different decisionmaking strategies, both systematic and heuristic, that play a role in people’s evaluation of content online.

References

Fedeli, S., & Matsa, K. E. (2018, July 17). Use of mobile devices for news continues to grow, outpacing desktops and laptops. Pew Research Center. Retrieved from http://www.pewresearch.org/fact-tank/2018/07/17/use-of-mobile-devices-for-news-continues-to-grow-outpacing-desktops-and-laptops/

Matsa, K. E., & Shearer, E. (2018, September 10). News use across social media platforms 2018. Pew Research Center. Retrieved from http://www.journalism.org/2018/09/10/news-use-across-social-media-platforms-2018/

Rony, M. M. U., Hassan, N., & Yousuf, M. (2017, July). Diving deep into clickbaits: Who use them to what extents in which topics with what effects? In Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (pp. 232–239). New York, NY: ACM. https://doi.org/10.1145/3110025.3110054

Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. In M. J. Metzger & A. J. Flanagin (Eds.), Digital media, youth, and credibility (pp. 72–100). Cambridge, MA: The MIT Press.

von Ahn, L. (2006). Games with a purpose. Computer, 39(6), 92–94. https://doi.org/10.1109/MC.2006.196

Acknowledgments

We greatly appreciate the collaboration among the Rita Allen Foundation, RTI International, Aspen Institute, Democracy Fund, Craig Newmark Philanthropies, and Burroughs Wellcome Fund that made the forum possible. We also had help from a set of exceptional moderators and proposal judges including Sylvia Chou of the National Cancer Institute, Elizabeth Christopherson of the Rita Allen Foundation, Lisa Fazio of Vanderbilt University, Melanie Green of the University at Buffalo, Jeff Hemsley of Syracuse University, Elizabeth Marsh of Duke University, Josh Stearns of Democracy Fund, Claire Wardle of First Draft News, Melissa Welch-Ross of the National Academy of Sciences, Denise Rolark Barnes of the Washington Informer, Giovanni Luca Ciampaglia of the University of South Florida, Brianna Hamblin of CBS 19 (VA), Dannielle Kelley of the National Cancer Institute, Tracy LaMondue of the American Geophysical Union, Kelly McBride of The Poynter Institute, and John Schlageter of the Bethlehem University Foundation. We also thank Karyn Feiden, Rapporteur, and Katya Wanzer of the Aspen Institute.