Danna (pronounced similar to "Donna") Gurari
School of Information
University of Texas at Austin
Office Location: UTA 5.442 (1616 Guadalupe St, Austin, TX 78701)
Lab Location: UTA 5.518 (1616 Guadalupe St, Austin, TX 78701)
Email: danna.gurari, at sign, ischool dot utexas dot edu
(Current/Prospective Students: please read my FAQ before emailing me)
My research interests span computer vision, machine learning, human computation, crowdsourcing, and biomedical image analysis.
I completed a Postdoctoral Fellowship in Dr. Kristen Grauman's group in the University of Texas at Austin computer science department in 2017, a PhD in the Image and Video Computing Group at Boston University in 2015, and an MS in Computer Science and a BS in Biomedical Engineering at Washington University in St. Louis in 2005. From 2005 to 2010, I worked in industry at Boulder Imaging and Raytheon. My research has been recognized with a 2017 Honorable Mention Award at CHI, a 2015 Researcher Excellence Award from the Boston University computer science department, a 2014 Best Paper Award for Innovative Idea at MICCAI IMIC, and a 2013 Best Paper Award at WACV.
- July 2018: I will give an invited talk at the ECCV Workshop on Shortcomings in Vision and Language (SiVL) on Saturday, September 8.
- July 2018: I received a Microsoft Azure Curriculum grant to support my class on "Introduction to Machine Learning" this Fall 2018.
- June 2018: Our VizWiz challenge evaluation servers are up and running here.
- June 2018: Our paper "BrowseWithMe: An Online Clothes Shopping Assistant for People with Visual Impairments" is conditionally accepted for publication at ASSETS 2018.
- March 2018: I am co-organizing a workshop with Jeffrey Bigham and Kristen Grauman at ECCV 2018 called "VizWiz Grand Challenge: Answering Visual Questions from Blind People".
- March 2018: Our paper "Visual Question Answer Diversity" is accepted for publication at HCOMP 2018.
- March 2018: I received an NSF CRII award to support research on Visual Question Answering.
- February 2018: Our work on creating the VizWiz dataset was covered in MIT Technology Review.
- February 2018: Our paper introducing VizWiz, a new dataset for answering visual questions originating from blind people, is accepted for publication as a spotlight presentation at CVPR 2018.
- February 2018: Released VizWiz v1.0 dataset for answering visual questions originating from blind people (31,073 images, 31,073 questions, 310,730 answers): http://vizwiz.org/data/.
- January 2018: Our paper "Predicting Foreground Object Ambiguity and Efficiently Crowdsourcing the Segmentation(s)" was accepted for publication by the International Journal of Computer Vision (IJCV).