
Can You Verifi This? Studying Uncertainty and Decision-Making About Misinformation Using Visual Analytics

Alireza Karduni1*, Ryan Wesslen1, Sashank Santhanam1, Isaac Cho1, Svitlana Volkova2, Dustin Arendt2, Samira Shaikh1, Wenwen Dou1
1 Department of Computer Science, UNC-Charlotte
2 Pacific Northwest National Laboratory

Abstract
We describe a novel study of decision-making processes around misinformation on social media. Using a custom-built visual analytic system, we presented users with news content from the social media accounts of a variety of news outlets, including outlets engaged in distributing misinformation. We conducted controlled experiments to study decision-making regarding the veracity of these news outlets and tested the role of confirmation bias (the tendency to ignore contradicting information) and uncertainty of information in human decision-making processes. Our findings reveal that the presence of conflicting information, presented to users in the form of cues, impacts the ability to judge the veracity of news in systematic ways. We also find that even instructing participants to explicitly disconfirm given hypotheses does not significantly impact their decision-making regarding misinformation when compared to a control condition. Our findings have the potential to inform the design of visual analytics systems so that they may be used to mitigate the effects of cognitive biases and stymie the spread of misinformation on social media.

Introduction
The spread of misinformation on social media is a phenomenon with global consequences, one that, according to the World Economic Forum, poses significant risks to democratic societies (Howell and others 2013). The online media ecosystem is now a place where false or misleading content resides on an equal footing with verified and trustworthy information (Kott, Alberts, and Wang 2015). In response, social media platforms are becoming content referees, faced with the difficult task of identifying misinformation internally or even seeking users' evaluations of the news. On the one hand, the news we consume is either wittingly or unwittingly self-curated, even self-reinforced (Tsang and Larson 2016).

On the other hand, due to the explosive abundance of media sources and the resulting information overload, we often need to rely on heuristics and social cues to make decisions about the credibility of information (Mele et al. 2017; Shao et al. 2017). One such decision-making heuristic is confirmation bias, which has been implicated in the selective exposure to and spread of misinformation (Allan 2017). This cognitive bias can manifest itself on social media as individuals tend to select claims and consume news that reflect their preconceived beliefs about the world, while ignoring dissenting information (Mele et al. 2017). While propaganda and misinformation campaigns are not a new phenomenon (Soll 2017), the ubiquity and virality of the internet has lent urgency to the need for understanding how individuals make decisions about the news they consume and how technology can aid in combating this problem (Shu et al. 2017).

Visual analytic systems that present coordinated multiple views and rich heterogeneous data have been demonstrably useful in supporting human decision-making in a variety of tasks such as textual event detection, geographic decision support, malware analysis, and financial analytics (Wagner et al. 2015; Wanner et al. 2014). Our goal is to understand how visual analytics systems can be used to support decision-making around misinformation and how uncertainty and confirmation bias affect decision-making within a visual analytics environment.

In this work, we seek to answer the following overarching research questions: What are the important factors that contribute to the investigation of misinformation? How can we facilitate decision-making around misinformation by presenting these factors in a visual analytics system? What is the role of confirmation bias and uncertainty in such decision-making processes? To this aim, we first leveraged prior work on categorizing misinformation on social media (specifically Twitter) (Volkova et al. 2017) and identified the dimensions that can distinguish misinformation from legitimate news.

We then developed a visual analytic system, Verifi, to incorporate these dimensions into interactive visual representations. Finally, we conducted a controlled experiment in which participants were asked to investigate news media accounts using Verifi. Through quantitative and qualitative analysis of the experiment results, we studied the factors in decision-making around misinformation. More specifically, we investigated how uncertainty, in the form of conflicting signals manifested in the presented data dimensions, affects users' ability to identify misinformation under different experiment conditions. Our work is thus uniquely situated at the intersection of the psychology of decision-making, cognitive biases, and the impact of socio-technical systems, namely visual analytic systems, that aid in such decision-making.

Our work makes the following important contributions:

- A new visual analytic system: We designed and developed Verifi, a new visual analytic system that incorporates dimensions critical to characterizing and distinguishing misinformation from legitimate news. Verifi enables individuals to make informed decisions about the veracity of news accounts.
- Experiment design to study decision-making on misinformation: We conducted an experiment using Verifi to study how people assess the veracity of news media accounts on Twitter and what role confirmation bias plays in this process. To our knowledge, our work is the first experimental study on the determinants of decision-making in the presence of misinformation in visual analytics.

As part of our controlled experiment, we provided cues to the participants so that they would interact with data for the various news accounts along various dimensions (e.g., tweet content, social network). Our results revealed that conflicting information along such cues (e.g., connectivity in the social network) significantly impacts users' performance in identifying misinformation.

Related Work
We discuss two distinct lines of past work that are relevant to our research. First, we explore cognitive biases, and specifically the study of confirmation bias in the context of visual analytics. Second, we introduce prior work on characterizing and visualizing misinformation online.

Confirmation bias: Humans exhibit a tendency to treat evidence in a biased manner during their decision-making process in order to protect their beliefs or preconceived hypotheses (Jonas et al. 2001), even in situations where they have no personal interest or material stake (Nickerson 1998). Research has shown that this tendency, known as confirmation bias, can cause inferential errors in human reasoning (Evans 1989). Confirmation bias is the tendency to privilege information that confirms one's hypotheses over information that disconfirms them. Classic laboratory experiments to study confirmation bias typically present participants with a hypothesis and evidence that either confirms or disconfirms that hypothesis, and may include cues that cause uncertainty in the interpretation of the given evidence. Our research is firmly grounded in these experimental studies of confirmation bias. We adapt the classic psychology experimental design, in which pieces of evidence, or cues, are provided to subjects and used to confirm or disconfirm a given hypothesis (Wason 1960; Nickerson 1998).

Visualization and Cognitive Biases: Given the pervasive effects of confirmation bias, and cognitive biases in general, on human decision-making, scholars studying visual analytic systems have initiated research on this important problem. (Wall et al. 2017) categorized four perspectives toward a framework of all cognitive biases in visual analytics. (Cho et al. 2017) presented a user study and identified an approach to measure anchoring bias in visual analytics by priming users with visual and numerical anchors. They demonstrated that cognitive biases, specifically anchoring bias, affect decision-making in visual analytic systems, consistent with prior research in psychology. However, no research to date has examined the effects of confirmation bias and uncertainty in the context of distinguishing information from misinformation using visual analytic systems; we seek to fill this important gap. Next, we discuss what we mean by misinformation in the context of our work.

Defining misinformation: Misinformation can be described as information that has the camouflage of traditional news media but lacks the associated rigorous editorial processes (Mele et al. 2017). Prior research in journalism and communication has demonstrated that news outlets may slant their news coverage based on different topics (Entman 2007). In addition, (Allcott and Gentzkow 2017) show that the frequency of sharing and distribution of fake news can heavily favor different individuals. In our work, we use the term fake news to encompass misinformation including ideologically slanted news, disinformation, propaganda, hoaxes, rumors, conspiracy theories, clickbait and fabricated content, and even satire. We chose fake news as an easily accessible term that can be presented to the users as a label for misinformation, and we use the term real news as its antithesis to characterize legitimate news.

Several systems have been introduced to (semi-)automatically detect misinformation, disinformation, or propaganda on Twitter, including FactWatcher (Hassan et al. 2014), TwitterTrails (Metaxas, Finn, and Mustafaraj 2015), RumorLens (Resnick et al. 2014), and Hoaxy (Shao et al. 2016). These systems allow users to explore and monitor detected misinformation via interactive dashboards. They focus on identifying misinformation, and their dashboards are designed to present analysis results from the proposed models. Instead, Verifi aims to provide an overview of dimensions that distinguish real vs. fake news accounts for a general audience. Our work is thus situated at the intersection of these research areas and focuses on studying users' decision-making about misinformation in the context of visual analytics.

Verifi: A Visual Analytic System for Investigating Misinformation
Verifi is a visual analytic system that presents multiple dimensions related to misinformation on Twitter. Our design process is informed by both prior research on distinguishing real and fake news and our analysis of the data selected for our study to identify meaningful dimensions. A major inspiration for Verifi's design is the findings of (Volkova et al. 2017).

