Microsoft Research Partners with UT Austin, Texas iSchool for Microsoft Ability Initiative

Sandlin, Anu  |  Mar 29, 2019

Image caption: From left to right: Danna Gurari, University of Texas; Ed Cutrell, Microsoft Research; Roy Zimmermann, Microsoft Research; Meredith Ringel Morris, Microsoft Research; Ken Fleischmann, University of Texas; Neel Joshi, Microsoft Research

Despite significant developments in automated image captioning, current approaches are not well aligned with the needs of people with visual impairments. People who are blind or have low vision face a real challenge: learning what content is present in an image without visual assistance can be time-consuming and, at times, impossible. As a result, these communities often rely on a visual assistant to describe photos they take themselves or find online.

In an ideal world, a fully automated computer vision (CV) approach would provide such descriptions. In practice, this artificial intelligence (AI) process faces several challenges. CV datasets rarely include images taken by this population, people who are blind or have low vision must passively listen to one-size-fits-all descriptions to locate the information they care about, and CV algorithms often deliver incomplete or incorrect information. Because of these shortcomings, reliable image captioning systems continue to depend on humans to describe photos for people with visual impairments.

Determined to improve image captioning for blind and low-vision communities, principal investigator and Texas iSchool Assistant Professor Danna Gurari and Associate Professor Ken Fleischmann believe there is a more efficient and effective solution, one that reduces human effort while producing accurate results for people who are blind or have low vision. They recently embarked on a new project to “design algorithms and systems that close the gap between CV algorithm and human performance for describing pictures taken by both sighted and visually impaired photographers.”

But the Texas iSchool professors weren’t the only ones thinking about how to improve image captioning for people who are blind or have low vision. A team at Microsoft Research recently announced a similar vision and goal: to train AI systems to provide more detailed captions that offer a richer understanding and a more accurate representation of images for people who are blind or have low vision. In light of this mission, Microsoft Research launched a new project called the Microsoft Ability Initiative.

According to Microsoft Research Principal Researcher and Research Manager Meredith Ringel Morris, “the companywide initiative aims to create a public dataset that ultimately can be used to advance the state of the art in AI systems for automated image captioning.”

After a competitive process involving a select number of universities, Microsoft Research’s search for an academic partner for the new venture ended with the selection of The University of Texas at Austin School of Information. The proposal from Gurari and Fleischmann was the only project selected through this competition.

The Texas iSchool research team proposed two main tasks: (1) introducing the first publicly available image captioning dataset originating from people with visual impairments, paired with a community AI challenge and workshop; and (2) identifying the values and preferences of people with visual impairments to inform the design of next-generation image captioning systems and datasets.

“The collaboration builds upon prior Microsoft research that has identified a need for new approaches at the intersection of computer vision and accessibility,” explained Morris.

The Microsoft Research team, which includes Ed Cutrell, Roy Zimmermann, Meredith Ringel Morris, and Neel Joshi, plans to collaborate with the UT Austin School of Information over an 18-month period. Gurari and Fleischmann will lead the UT Austin team, which will also include three PhD students and one postdoctoral fellow.

The Microsoft Ability Initiative builds on the interdisciplinary team’s expertise in computer vision, human-computer interaction, accessibility, ethics, and value-sensitive design. Gurari’s team is experienced in establishing new datasets, designing human-machine partnerships, creating human-computer interaction systems, and developing accessible technology. As co-founder of the VizWiz Grand Challenge at ECCV 2018, Gurari is skilled in community building and has a record of success in creating public datasets that advance the state of the art in AI and accessibility.

Fleischmann’s team offers complementary experience in the ethics of AI and in understanding users’ values to inform technology design. Given his expertise in the role of human values in the design and use of information technologies, Fleischmann will lead the effort to uncover the needs and values of people with visual impairments, which will ultimately inform the design of future image captioning systems.

The Microsoft researchers involved in this initiative have specialized experience in accessible technologies, human-centric AI systems, and computer vision. “Our efforts are complemented by colleagues in other divisions of the company, including the AI for Accessibility program, which helps fund the initiative, and Microsoft 365 accessibility,” explained Morris.

Morris described the Microsoft Ability Initiative as “a collaborative quest to innovate in image captioning for people who are blind or with low vision,” explaining that it is “one of an increasing number of initiatives at Microsoft in which researchers and product developers are coming together in a new, cross-company push to spur innovative and exciting new research and development in the area of accessible technologies.”

Gurari believes that the initiative “will not only advance the state of the art of vision-to-language technology, but it will also continue the progress Microsoft has made with such tools and resources as the Seeing AI mobile phone application and the Microsoft Common Objects in Context (MS COCO) dataset. It will also serve as a great teaching opportunity for Texas iSchool students.”

The Texas iSchool team will employ a user-centered approach to the problem, including working with communities who are blind or with low vision to improve understanding of their expectations of image captioning tools. The team will also host community challenges and workshops to accelerate progress on algorithm development and facilitate the development of more accessible methods to assist people who are blind or with low vision. 

Gurari and Fleischmann explain that “this work can empower people with visual impairments to more rapidly and accurately learn about the diversity of visual information, while contributing to solving related problems including image search, visual question answering, and robotics.”

The Microsoft Research team launched the new collaboration with the Texas iSchool during a two-day visit to Austin in January. Morris noted that the Microsoft Research team came away from the meeting at The University of Texas at Austin School of Information “even more energized about the potential for this initiative to have real impact in the lives of millions of people around the world.” “We couldn’t be more excited,” she said.

The Texas iSchool professors share the Microsoft Research team’s excitement about the upcoming collaboration. “To be selected for this gift is a great honor,” said Gurari and Fleischmann. “We look forward to working with the Microsoft Research team over the coming months, and are eager to make progress toward our shared goal: to better align image captioning systems with the needs of those who are blind or with low vision.”

Dr. Ken Fleischmann Wins 2018 Social Informatics Best Paper Award

Sandlin, Anu  |  Nov 26, 2018

Image caption: Professor Fleischmann presenting at the ASIS&T Conference
Image caption: Professor Fleischmann accepting the Best Social Informatics paper award

Texas iSchool Associate Professor Ken Fleischmann recently accepted the 2018 Social Informatics Best Paper Award from the Association for Information Science and Technology (ASIS&T) Special Interest Group for Social Informatics.

Based upon work supported by the National Science Foundation, the paper, “The Societal Responsibilities of Computational Modelers: Human Values and Professional Codes of Ethics,” focuses on understanding how values shape modelers’ experiences with and attitudes toward codes of ethics. The findings reveal that individuals who place great value on equality and social justice are more likely to advocate for following a code of ethics. 

Fleischmann explains that innovations in artificial intelligence (AI) have advanced computational modeling to a point where its design can have life-or-death consequences – especially because AI-based computational models are used to predict climate change, design aircraft, and evaluate and refine medical techniques. “Thus, it is important that computational modelers are both willing and able to consider not only the technical implications, but also the societal implications of their work.”

The Social Informatics Best Paper Award recognizes the best paper published in a peer-reviewed journal on a topic informed by social informatics during the previous calendar year.

The winning paper, co-authored with Cindy Hui and William A. Wallace of Rensselaer Polytechnic Institute, was published in the Journal of the Association for Information Science and Technology in 2017 (http://dx.doi.org/10.1002/asi.23697).

Fleischmann presented the paper on November 10 in Vancouver, Canada at the ASIS&T 2018 Annual Meeting, during The 14th Annual Social Informatics Research Symposium: Sociotechnical perspective on ethics and governance of emerging information technologies.

“There is no greater professional honor than for your work to be recognized by your peers,” notes Fleischmann. “I hope that this will help to further shine a spotlight on the important ethical implications of AI.”

Texas School of Information Hosts Successful Accessibility Hackathon

Sandlin, Anu  |  Nov 21, 2018

Image caption: Students viewing a demonstration of JAWS, a popular screen reader.

The University of Texas at Austin’s School of Information hosted an Accessible Web Demonstration and Hackathon on Friday, October 26, 2018. The five-hour event, which took place from 12:00 to 5:00 p.m. in the iSchool IT Lab, was co-sponsored by the iSchool’s IT Team and Diversity and Inclusion Committee.

The Texas iSchool partnered with Knowbility Inc., a locally based non-profit whose mission is to “support the independence of children and adults with disabilities by promoting the use and improving the availability of accessible information technology.”

Knowbility brought in volunteers from AccessWorks, a Knowbility program that connects usability and UX professionals with people with disabilities, who then test websites and apps using their own assistive technologies (such as screen readers, screen magnifiers, and special keyboards). Anne Forrest and Barry Armour demonstrated assistive technologies used to access web and other digital content.

Forrest, who suffered a brain injury several years ago, has been recognized as one of the nation’s leading patient advocates for people with traumatic brain injury. Armour, a blind screen-reader user who lost his eyesight about six years ago, advocates for educating people about technology and making it accessible to everyone. Forrest provided a unique perspective on how screen colors and movement affect people with brain injuries, while Armour demonstrated screen readers.

The group discussed some of the most common design considerations for accessible code. Event participants then had the opportunity to hack on the iSchool website to help improve the School’s accessibility score, as determined by the WorldSpace auditing tool used by UT’s Division of Diversity and Community Engagement.

“I can’t thank Knowbility (Sharron, Jillian, and Christi) or Anne and Barry enough for making our Accessibility Hackathon such a success,” said Sam Burns, Texas iSchool’s Senior IT Manager. “There is no more compelling way to teach accessible design than to work with, and hear directly from, those who rely on assistive technologies every day,” he stated.

Thirty-three students attended the Accessibility Hackathon; twenty-seven were iSchoolers and two were from other programs. Attendance and participation did not require prior web or coding experience. “We had a wonderful turnout,” said Burns. “The students commented that having our partners from Knowbility –and AccessWorks volunteer advocates— made it a truly fantastic learning experience.”  

The Texas iSchool hopes to host another successful Accessibility Hackathon next year. “Knowing how to create accessible tools is both a responsibility and a privilege,” said Burns. “The more we innovate towards inclusion, the better we become as theorists, designers, and developers of future information systems.”
