Friday Jan. 29, 2021
Sameer Patil, Applying HCI to Combat Organic Spread of Online Misinformation
1:15 to 2:30 p.m.
Zoom link will be provided via email (iSchool listserv)

Misinformation spread via social media platforms has emerged as a prominent societal challenge. The production and spread of misinformation on these platforms have evolved from a largely bot-driven operation to one that exploits everyday actions of end users. Purely computational approaches that work reasonably well for bots can be ineffective for combating such organic spread. To address this issue, we have been investigating the application of HCI principles to design user experiences that can help users recognize questionable content and dissuade them from sharing it, thus dampening its spread. Our initial study (n = 1,512) showed that flagging news headlines with credibility indicators can reduce the intent to share the articles on social media. Notably, we found that the indicator connected to professional fact checkers was the most effective, motivating two parallel threads of follow-on research.

In the first thread, we studied the practices of professional fact checkers to understand and address their challenges. Interviews with 19 fact checkers from 18 countries surfaced a pipeline of manual, labor-intensive practices fragmented across disparate tools that lack integration. Fact checkers also reported a lack of effective dissemination mechanisms, which prevents fact-checking outcomes from achieving their full potential impact. In the second thread, we explored helping users learn to seek fact checks for questionable content via a game-based approach, analyzing game analytics of more than 8,500 players interacting with 120,000 articles over 19 months. As players interacted with more articles, they significantly improved their skills at spotting mainstream content, confirming the utility of the game for improving news literacy. At the same time, we found that exposure to social engagement signals (i.e., Likes and Shares) increased player vulnerability to low-credibility information.

We are applying the insights from these research efforts to design a human-in-the-loop platform, driven by computation and automation, that improves the effectiveness, efficiency, and scale of fact-checking work and broadens the dissemination of its outcomes to end users.

Bio: Sameer Patil is an Assistant Professor in the Luddy School of Informatics, Computing, and Engineering at Indiana University Bloomington. He has previously held appointments in academia and industry, including at the Vienna University of Economics and Business (Austria), the Helsinki Institute for Information Technology (Finland), the University of Siegen (Germany), Yahoo Labs (USA), and New York University (USA). Sameer's research focuses on human-centered investigations of privacy and security, spanning the fields of Human-Computer Interaction (HCI), Computer-Supported Cooperative Work (CSCW), and social computing. His research has been funded by the National Science Foundation (NSF), the Department of Homeland Security (DHS), and Google, and he received the NSF CAREER award in 2019. Sameer's work has been published in top-tier conferences and journals, and he holds eight US patents related to mobile technologies. He obtained a Ph.D. in Information and Computer Science from the University of California, Irvine, and holds Master's degrees in Information (HCI) and in Computer Science & Engineering from the University of Michigan, Ann Arbor.