Research Awards/Grants (Current)

Min Kyung Lee

National Science Foundation (NSF)

09/01/2022 to 08/31/2025

The award is $249,999 over the project period.

Collaborative Research: DASS: Designing accountable software systems for worker-centered algorithmic management

Software systems have become an integral part of public and private sector management, assisting and automating critical human decisions such as selecting people and allocating resources. Emerging evidence suggests that software systems for algorithmic management can significantly undermine workforce well-being and may be poorly suited to fostering accountability to existing labor law. For example, warehouse workers are under serious physical and psychological stress due to task assignment and tracking without appropriate break times. On-demand ride drivers feel that automated evaluation is unfair and distrust the system's opaque payment calculations, which has led to worker lawsuits over wage underpayment. Shift workers suffer from unpredictable schedules that destabilize work-life balance and disrupt their ability to plan ahead. Meanwhile, there is not yet an established mechanism to regulate such software systems. For example, there is no expert consensus on how to apply concepts of fairness in software systems. Existing work laws have not kept pace with emerging forms of work, such as algorithmic management and digital labor platforms, which introduce new risks to workers, including work-schedule volatility and employer surveillance of workers both on and off the job. To tackle these challenges, we aim to develop technical approaches that can (1) make software accountable to existing law, and (2) address the gaps in existing law by measuring the negative impacts of certain software use and behavior, so as to help stakeholders better mitigate those effects. In other words, we aim to make software accountable to law and policy, and leverage it to make software users (individuals and firms) accountable to the affected population and the public.

This project is developing novel methods to enable standards- and disclosure-based regulation in and through software systems, drawing on formal methods, human-computer interaction, sociology, public policy, and law throughout the software development cycle. The work will focus on algorithmic work scheduling, which impacts shift workers who make up 25% of workers in the United States. It will take a participatory approach, involving stakeholders, public policy and legal experts, governments, commercial software companies, software users in firms, and those affected by the software's use in the software design and evaluation. The research will take place in three thrusts in the context of algorithmic scheduling: (1) participatory formalization of regulatory software requirements, (2) scalable and interactive formal methods and automated reasoning for software guarantees and decision support, and (3) regulatory outcome evaluation and monitoring. By developing accountable scheduling software, the project has the potential for significant broader impacts by giving businesses the tools they need for compliance with and accountability to existing work scheduling regulations, as well as the capacity to provide more schedule stability and predictability in their business operations.

Ying Ding

Yan Leng, and Samuel Craig Watkins, University of Texas at Austin;
Yifan Peng, Weill Cornell Medicine

AIM-AHEAD and National Institutes of Health (NIH)

09/17/2023 to 09/16/2025

The collaborative award is $998,739 over the project period. The School of Information portion of the award is $698,739.

Closing the loop with an automatic referral population and summarization system

Suicide is a public health concern and the second leading cause of death among 10-24 year olds [1,2]. In particular, the increasing rates of suicide mortality and suicidal ideation and behaviors among Black youth in the United States (US) have become a pressing concern in recent years [3]. Between 2001 and 2015, Black children under 13 years old were twice as likely to die by suicide as their White counterparts [4]. Furthermore, suicide mortality rates among Black youth have risen more rapidly than in any other racial or ethnic group [2,5]. However, there remains a significant knowledge gap in understanding culturally tailored suicide prevention strategies for this population, particularly regarding unique social risk factors specific to Black youth. Specifically, a detailed understanding of social risk factors unique to Black youth, and of how they differ from risk factors for other racial and ethnic groups, is limited [2]. This knowledge gap is critical, as research has indicated that Black youth face greater exposure to adverse childhood experiences (ACEs), which are linked to higher risks of suicidal ideation and attempts.

The National Violent Death Reporting System (NVDRS) is a state-based violent death reporting system in the U.S. that helps provide information and context on when, where, and how violent deaths occur and who is affected [11]. However, much of the information in NVDRS is unstructured, limiting its use in forming a complete picture of the social risk factors contributing to Black youth suicide. Therefore, it is imperative to develop machine learning (e.g., natural language processing [NLP]) algorithms that automatically extract social risk factors from free text to help analyze Black youth suicide. Our long-term goal is to reduce the suicide rate by developing novel interventions targeting risk and protective factors among Black youth. The overall objective of this application is to develop and validate new AI approaches to identify individual-level social risks of Black youth suicide and to enhance trust in AI/ML approaches within underserved communities.

Angela D.R. Smith

National Science Foundation

06/15/2023 to 05/31/2028

The award is $1,368,414 over the project period.

Collaborative Research: Racial Equity: Engaging Marginalized Groups to Improve Technological Equity

This collaborative project investigates the lack of diverse, representative datasets and insights in the development and use of technology. It explores the effects of these disparities on the ability of technologists (e.g., practitioners, designers, software developers) to develop technology that addresses and mitigates systemic societal racism, and on historically marginalized individuals' ability to feel seen and heard in the technology with which they engage. The implications of this project are threefold: 1) it supports building relationships between technologists and technology users by understanding the values that most impact historically marginalized communities' engagement and data contributions; 2) given access to more diverse data and insights, the project provides technologists with interventions that empower them to make use of these data and insights in practice; and 3) the work provides support and affirmation for the technologists who are already making these explicit considerations in their work without adequate support. More broadly, insights from this project can be applied in practice to promote racial equity and ensure systemic racism is an explicit consideration in STEM education and workforce development by incorporating more equitable practices into technologists' workflows.

This study seeks to answer three main research questions: 1) What are the barriers to engaging and amplifying marginalized voices in technological spaces and datasets, for both technologists and users? 2) How can marginalized groups be engaged when designing and developing data-centric systems without sacrificing their safety, security, and trust? 3) What does it look like to provide technologists with interventions for engaging the margins without compromising safe spaces for marginalized groups? Using a multi-modal approach, the project will examine how researchers and technologists can best learn to engage in data-centric research with marginalized communities in an ethically and socially responsible manner that centers the rights and values of the communities of interest. Culturally relevant approaches and grounding philosophies will drive the research methods and analyses. Through surveys, semi-structured interviews, design workshops utilizing a combination of participatory design and community-based approaches, and case study analysis to collect qualitative and quantitative data, the research team will develop an intervention that supports technologists in responsible engagement. Beyond real-world implementation, this project will share its findings through academic and community-facing venues, such as journal publications, conference presentations, op-eds, blogs, workshops, and social media.

This collaborative project is funded through the Racial Equity in STEM Education program (EDU Racial Equity). The program supports research and practice projects that investigate how considerations of racial equity factor into the improvement of science, technology, engineering, and mathematics (STEM) education and workforce. Awarded projects seek to center the voices, knowledge, and experiences of the individuals, communities, and institutions most impacted by systemic inequities within the STEM enterprise. This program aligns with NSF's core value of supporting outstanding researchers and innovative thinkers from across the Nation's diversity of demographic groups, regions, and types of organizations. Programs across EDU contribute funds to the Racial Equity program in recognition of the alignment of its projects with the collective research and development thrusts of the four divisions of the directorate.

Ying Ding

National Science Foundation (NSF)

04/01/2023 to 03/31/2026

The award is $299,862 over the project period.

NSF-CSIRO: RESILIENCE: Graph Representation Learning for Fair Teaming in Crisis Response

The recent COVID-19 pandemic has revealed the fragility of humankind. In our highly connected world, infectious diseases can swiftly grow into worldwide epidemics. A plague can rewrite history, and science can limit the damage. The significance of teamwork in science has been extensively studied in the science of science literature, using transdisciplinary studies to analyze the mechanisms underlying broad scientific activities. How can scientific communities rapidly form teams to best respond to pandemic crises? Artificial intelligence (AI) models have been proposed to recommend scientific collaborations, especially among those with complementary knowledge or skills. But issues related to fairness in teaming, especially how to balance group fairness and individual fairness, remain challenging. Thus, developing fair AI models for recommending teams is critical for an equal and inclusive working environment. Such a need could be pivotal in the next pandemic crisis. This project will develop a decision support system to strengthen the US-Australia public health response to infectious disease outbreaks. The system will help rapidly form global scientific teams with fair teaming solutions for infectious disease control, diagnosis, and treatment. The project will include participation of underrepresented groups (Indigenous Australians and Hispanic Americans) and will provide fair teaming solutions in broad working and recruiting scenarios.
 
This project aims to understand how scientific communities have responded to historical pandemic crises and how to best respond in the future, in order to provide fair teaming solutions for new infectious disease crises. The project will develop a set of graph representation learning methods for fair teaming recommendation in crisis response through: 1) biomedical knowledge graph construction and learning, with novel models for emerging bio-entity extraction, relationship discovery, and fair graph representation learning for sensitive demographic attributes; 2) the recognition of fairness and the determinants of team success, with a subgraph contrastive learning-based prediction model for identifying core team units and considering trade-offs between fairness and team performance; and 3) learning to recommend fairly, with a graph-based maximum mean discrepancy measure, a meta-learning method for fair graph representation learning, and a reinforcement learning-based search method for fair teaming recommendation. The project will support cross-disciplinary curriculum development by bridging gaps in responsible AI and team science, fair project management, and risk management in science.

Soo Young Rieh

Kenneth Fleischmann and R. David Lankes

Institute of Museum & Library Services (IMLS)

08/01/2022 to 07/31/2025

The award is $623,501 over the project period.

Training Future Faculty in Library, AI, and Data Driven Education and Research (LADDER)

The University of Texas at Austin School of Information will collaborate with librarians from Austin Public Library, Navarro High School Library, and the University of Texas Libraries to educate and mentor the next generation of Library and Information Science (LIS) faculty with expertise in artificial intelligence (AI) and data science. The Training Future Faculty in Library, AI, and Data Driven Education and Research (LADDER) program will apply a new Library Rotation Model to train doctoral student fellows to apply their AI and data science skills to conduct research in collaboration with librarians in distinct library settings. The project will increase the capacity of LIS programs to educate the librarians of tomorrow by preparing cohorts of outstanding future faculty who understand both cutting-edge IT and the unique service environment of libraries.