Advancing Curiosity Award to Support Interdisciplinary Research on Tackling Misinformation

Sandlin, Anu  |  May 29, 2019

The University of Texas at Austin School of Information Associate Professors Matthew Lease and Kenneth R. Fleischmann have been awarded a $150,000 grant from the Micron Foundation for a two-year project, “Tackling Misinformation through Socially-Responsible AI.” In addition to Lease and Fleischmann, the project team includes Associate Professor Samuel Baker (English), Associate Professor Natalie Stroud (Communication Studies), and Professor Sharon Strover (Radio-TV-Film).

Socially-responsible artificial intelligence (AI) involves designing AI technologies to create positive societal impacts. “Emerging AI technologies create tremendous potential for both good or harm,” wrote the research team. “We cannot simply assume benign use, nor can we develop AI technologies in a vacuum ignoring their societal implications.” However, how does an abstract goal like socially-responsible AI get implemented in practice to solve real societal problems? “We argue that grounding the pursuit of responsible AI technology in the context of a real societal challenge is critical to achieving progress on responsible AI,” the team writes, “because it forces us to translate abstract research questions into real, practical problems to be solved.”

Grounding the pursuit of responsible AI technology in the context of a real societal challenge is critical to achieving progress on responsible AI.

In particular, the team will pursue socially-responsible AI to tackle the contemporary challenge of combatting misinformation online. While recent AI research has sought to develop automatic AI systems to predict whether online news stories are real or fake, “why should anyone trust a black-box AI model telling them what to believe”, the team asks, “when many people distrust even well-known news outlets and fact-checking organizations? How do people decide ― if it is a conscious and rational choice ― to believe or even circulate what is actually misinformation? How can AI systems be designed for effective human interaction to help people better decide for themselves what to believe?”
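
To give a sense of the kind of “black-box” model the team is referring to, here is a minimal, hypothetical sketch of an automated real-versus-fake classifier trained on labeled headlines; the toy data, features, and model choice are illustrative assumptions, not the project’s actual approach.

    # Hypothetical sketch of a "black-box" fake-news classifier (toy data only).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    headlines = [
        "City council approves new budget after public hearing",      # real
        "Researchers publish peer-reviewed study on vaccine safety",  # real
        "Miracle cure doctors don't want you to know about",          # fake
        "Shocking secret proves the moon landing was staged",         # fake
    ]
    labels = [1, 1, 0, 0]  # 1 = real, 0 = fake

    # TF-IDF features feed a logistic regression; to most readers the
    # resulting score is an opaque number, which is exactly the trust
    # problem the researchers raise.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(headlines, labels)

    print(model.predict_proba(["Secret cure suppressed by the government"]))

Such a model outputs a probability but no explanation, which is the gap in human trust and interaction that the project aims to address.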

The project team brings diverse, relevant expertise and prior work experience on socially-responsible AI and online misinformation. In fact, team members are already collaborating as part of the campus-wide Good Systems Initiative, a UT Bridging Barriers Grand Challenge to design the future of socially-responsible AI. Good Systems has become a major catalyst at UT in promoting projects on socially-responsible AI, and Micron Foundation support for this project will enable the research team to tackle the specific challenges of designing socially-responsible AI to combat misinformation.

“Our project will develop real use cases, interface designs, prototype applications, and user-centered evaluations,” said Lease. “By grounding AI research in the context of specific social problems, designers can directly consider and confront the societal context of use as AI models are conceived and refined.” The team members explain that this will aid the discovery of new insights into how good AI systems can be developed more generally to maximize societal benefit.

The Micron Foundation sought proposals from multi-disciplinary research groups and non-profit organizations investigating how artificial intelligence, machine learning, and deep learning can improve life while also addressing ethical issues, security, and privacy. Established in 1999 as a private, nonprofit organization with a gift from Micron Technology, the Micron Foundation’s grants, programs, and volunteers focus on promoting science and engineering education and addressing basic human needs.

Grant to Boost Understanding of Ethical, Political, and Legal Implications of Machine Learning

Sandlin, Anu  |  Jan 30, 2019

Texas School of Information Associate Professor Ken Fleischmann received a $100,000 grant from the Cisco Research Center, under its Legal Implications for IoT, Machine Learning, and Artificial Intelligence Systems program, for "Field Research with Policy, Legal, and Technological Experts about Transparency, Trust, and Agency in Machine Learning." The Cisco Research Center connects researchers and developers from Cisco, academia, governments, customers, and industry partners with the goal of facilitating collaboration and exploration of new and promising technologies.

The request for proposals (RFP 16-02) invited researchers to investigate legal and policy issues in the quickly developing fields of machine learning (ML), artificial intelligence (AI), and machine-to-machine interaction, and in the rapidly expanding world of data creation, transfer, collection, and analysis from the Internet of Things (IoT).

How can we ensure that ML experts are aware of the ethical, political, and legal implications of ML, and that policy experts and legal scholars are up to date in their understanding of ML and its potential societal implications?

The project’s principal investigators, Dr. Fleischmann and Sherri Greenberg of the LBJ School of Public Affairs, explain that while machine learning has the potential to revolutionize society, transform how we do business, defend our homeland, and heal diseases, it also raises numerous ethical challenges that our legal and political systems are largely ill-equipped to deal with. In their proposal, they ask: “How can we ensure that ML experts are aware of the ethical, political, and legal implications of ML, and that policy experts and legal scholars are up to date in their understanding of ML and its potential societal implications?”

According to Fleischmann, the project’s goal is to “bridge the gap in expertise among technology experts and legal and policy experts.” “On one hand, this involves helping legal and policy experts to understand the limits of technology, both at present and (our best projection of what will be possible) ten years down the road, and on the other, helping technology experts to understand the legal and policy implications of their work,” he said.

Fleischmann explains that this project can lead to insights that enhance the academic education and workplace training of technologists as well as legal and policy scholars, and that can inform future research. Not only does it have the potential “to help educate and prepare ML researchers and developers about the potential ethical, legal, and policy implications of their work, but it will also help prepare future policy makers and legislators about how to regulate and legislate to ensure safe and efficient use of ML.”

Reading Metadata to Combat Disinformation and Fake News Campaigns

Sandlin, Anu  |  Jan 22, 2019

If you’ve ever reacted to a Facebook post, retweeted on Twitter, or commented on an Instagram story, then you’ve not only successfully used and engaged with these communication infrastructures, but you’ve also created your own digital trace of “metadata,” which Texas School of Information Assistant Professor Amelia Acker defines as “an index of human behavior.” 

Our activity on social platforms, such as favorites, likes, retweets, comments, and reactions, is used mostly to advertise to us, but there is a dark side where manipulators, bots, and people in the business of disinformation and misinformation try to “appear human to the algorithms that police social networks,” says Acker, who is also a Research Affiliate at the Data & Society Research Institute’s Media Manipulation Initiative.

Acker dubs the manipulation of this information “data craft,” which she defines as “a collection of practices that create, rely on, or even play with the proliferation of data on social media by engaging with new computational and algorithmic mechanisms of organization and classification.”

In summer 2018, New Knowledge, a local data science firm in Austin, Texas, published findings about Russian-connected Twitter bots and fake Facebook accounts that were used to manipulate public discourse during the 2016 U.S. election. “Some of these hoaxes and fakes are rather crafty in their ability to circumvent the boundaries of platforms and their terms of service agreements,” notes Acker. From election tampering to shifting political discourse around social debates like immigration reform and racial politics, Acker explains that the presence of false activity data has had a number of unpredictable consequences for social media users and society.

Data craft is becoming more harmful because “more than half of Americans get their news primarily from social media,” says Acker. And this problem isn’t going away anytime soon. In fact, it’s about to get a lot worse as “bots are starting to mimic our social media activity to look more human” and ‘sockpuppets’ are becoming “more and more sophisticated to where they can now craft data to look like real user engagement with conversation online,” explains Acker.

By identifying and understanding disinformation tactics as data craftwork, information researchers can read social media metadata just as closely as algorithms do, and possibly with more precision than the platforms’ moderation tools.

With those in the business of data craft becoming smarter and craftier at faking what looks like authentic behavior on social media, what does this mean for us? “Put simply, it will become more difficult for us to discern ‘real’ users and authentic account activities from fake, spammy, and malicious manipulations,” said Acker in a recent Data & Society report.

But there is a sense of hope, which, according to Acker, lies in the metadata itself. “Metadata features such as account name, account image, number of likes, tags, and date of post can provide activity frequency signals that serve as clues for suspicious activity,” she notes. “Reading metadata surrounding sockpuppet accounts will often reveal intentions, slippages, and noise, which can further reveal automated manipulation,” claims Acker.

“By identifying and understanding disinformation tactics as data craftwork, information researchers can read social media metadata just as closely as algorithms do, and possibly with more precision than the platforms’ moderation tools,” says Acker. Closely examining metadata such as the rapidity of account activity, follower and audience counts, post timestamps, media content, user bios, and location data has led social media researchers and tech journalists to detect thousands of fake accounts.
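
As a rough illustration of the kind of activity-frequency signal Acker describes, the following is a minimal sketch, assuming a simple list of ISO-formatted post timestamps as input; the thresholds and the function name are invented for illustration and are not drawn from Acker’s report or methods.

    # Illustrative sketch only: flag accounts whose posting rate or regularity
    # looks non-human. Thresholds and input format are assumptions.
    from datetime import datetime
    from statistics import pstdev

    def looks_automated(timestamps, max_posts_per_hour=60, min_gap_stdev=2.0):
        """Return True if post timing suggests automated activity."""
        times = sorted(datetime.fromisoformat(t) for t in timestamps)
        if len(times) < 3:
            return False
        gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
        span_hours = (times[-1] - times[0]).total_seconds() / 3600 or 1e-9
        rate = len(times) / span_hours    # posts per hour
        regularity = pstdev(gaps)         # near zero = clockwork-like posting
        return rate > max_posts_per_hour or regularity < min_gap_stdev

    # Posts arriving exactly every ten seconds look suspiciously regular.
    sample = ["2019-01-22T12:00:00", "2019-01-22T12:00:10", "2019-01-22T12:00:20"]
    print(looks_automated(sample))  # True

Real detection work would of course combine many more of the signals listed above, such as follower counts, account age, and media content, rather than relying on timing alone.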

So while we may not be able to beat bots and manipulators at their data craft game, the future is hopeful when it comes to detecting and identifying disinformation. Understanding data craft and how it can manifest in the world of social media and metadata is the first step. The second is reading metadata that has been gamed, exploited, or changed. This, according to Acker, promises a new method for tracing the spread of misinformation and disinformation campaigns on social media platforms. It will also improve data literacy among citizens by providing a method to judge whether messages are authentic or deceptive.

Acker plans to use the case studies of “reading metadata” in her INF384M Theories and Applications of Metadata course in spring 2019 at the Texas iSchool. She hopes that her research will help educators, journalists, policy makers, technologists, and early-career professionals like students understand how bad actors can manipulate metadata to create disinformation campaigns. “The acquisition of this information and understanding is empowering and gives us an advantage in terms of deciphering which information is real and which is fake or manipulated.”

For additional tips on reading metadata, see Dr. Amelia Acker’s latest Data & Society report: The Manipulation of Social Media Metadata.
