Best Student Paper Awarded to iSchool Associate Professor and Student Collaborators

Sandlin, Anu  |  Apr 30, 2019

The University of Texas at Austin Computer Science doctoral student Soumyajit Gupta, Texas iSchool alumnus Vivek Khetan, former postdoctoral researcher Mucahid Kutlu, and Associate Professor Matthew Lease were recently awarded the Best Student Paper Award at the 41st European Conference on Information Retrieval (ECIR 2019). Their paper, “Correlation, Prediction, and Ranking of Evaluation Metrics in Information Retrieval,” was presented this April in Cologne, Germany.

According to Lease, “search is now critical to 21st century information access, yet ensuring search algorithms work well is challenging given the vast scale at which algorithms must be evaluated.” Evaluation metrics are ways researchers and algorithm designers assess how well search results ultimately satisfy information needs of the searcher.
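
For example (an illustration, not drawn from the paper), two of the most common IR evaluation metrics, precision at k and average precision, score a ranked result list against the set of documents judged relevant:

```python
def precision_at_k(ranking, relevant, k):
    """Fraction of the top-k results that are relevant."""
    return sum(1 for doc in ranking[:k] if doc in relevant) / k

def average_precision(ranking, relevant):
    """Mean of precision@k over the ranks at which relevant docs appear."""
    hits, precisions = 0, []
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

# Hypothetical ranked results for one query, plus judged-relevant doc IDs.
ranking = ["d3", "d7", "d1", "d9", "d4"]
relevant = {"d3", "d9", "d5"}
print(precision_at_k(ranking, relevant, 3))  # 1 of top 3 relevant -> 0.333...
print(average_precision(ranking, relevant))  # (1/1 + 2/4) / 3 = 0.5
```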

In their paper, Lease and his collaborators explored strategies for optimizing the choice of which evaluation metrics to measure, assessing 23 popular metrics. “Search algorithm developers cannot possibly consider every evaluation metric in assessing how well their systems perform,” Lease explained, “so it is critical that they are judicious about focusing their effort on evaluation metrics that are most informative to improving the search experience for the end-user.”

Another important aspect of the work is that the metrics the team considered are language-neutral, meaning they can also be used to assess search algorithms running in non-English languages, such as Modern Standard Arabic. The research team proposed two methods for algorithmically selecting evaluation metrics, both with lower time and space complexity than prior work, offering a theoretically justified yet practical approach to automatically selecting the most informative and distinctive evaluation metrics to measure.
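
The paper’s own algorithms are more involved, but the intuition behind correlation-based metric selection can be sketched as follows: score many systems under every metric, measure how the metrics correlate across systems, then greedily keep metrics that add the least redundant information. Everything below (the greedy heuristic, the random data) is an illustrative assumption, not the authors’ method.

```python
import numpy as np

def select_metrics(scores, num_to_select):
    """Greedily pick a subset of metrics that are minimally redundant.

    scores: (num_systems, num_metrics) array, where scores[i, j] is
    metric j's score for system i. Metrics that correlate strongly
    across systems carry largely redundant information.
    """
    corr = np.abs(np.corrcoef(scores, rowvar=False))  # metric-metric |correlation|
    num_metrics = scores.shape[1]
    # Seed with the metric most correlated with all others (most "central").
    selected = [int(np.argmax(corr.sum(axis=1)))]
    while len(selected) < num_to_select:
        remaining = [m for m in range(num_metrics) if m not in selected]
        # Add the metric whose strongest correlation with the chosen set is weakest.
        redundancy = [corr[m, selected].max() for m in remaining]
        selected.append(remaining[int(np.argmin(redundancy))])
    return selected

rng = np.random.default_rng(0)
scores = rng.random((50, 23))  # 50 hypothetical systems under 23 metrics
print(select_metrics(scores, num_to_select=5))
```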

Lease was especially happy for his student collaborators, noting that “this was a total team effort, with each of us making distinct contributions to the development and analysis of methods.” He also described it as a “slam dunk” for Ph.D. student Soumyajit Gupta, given that this is his first publication on search engine research and their first research collaboration together.

Lease also describes the work as “a fantastic example of international research collaboration,” funded by the government of Qatar in a program founded to foster such international partnerships. “The potential value for advancing evaluation of search algorithms for Arabic also helps ensure technological innovation and advances extend to the diversity of the world and its many languages.”

Soumyajit Gupta, an advisee of Lease, is a Texas Computer Science Ph.D. student; Vivek Khetan is an alumnus of the Texas iSchool; and former iSchool postdoctoral fellow Mucahid Kutlu is now a faculty member at TOBB University of Economics and Technology in Turkey.

This research was funded by a National Priorities Research Program (NPRP) grant # 7-1313-1-245 from the Qatar National Research Fund (a member of the Qatar Foundation), whose objective is to “competitively select research projects that will address national priorities through supporting basic and applied research as well as translational research/experimental development.” 

Lease received the three-year grant in 2015 in collaboration with Qatar University Associate Professor of Computer Science Tamer Elsayed to improve current search engine technology for the Arabic-language Web. Elsayed is also an iSchool graduate, having received his Ph.D. from the University of Maryland’s program.

The Future of Search Engines

Sandlin, Anu  |  Aug 31, 2018

Search engines have changed the world. They put vast amounts of information at our fingertips. But search engines have their flaws, says iSchool Associate Professor Matthew Lease. Search results are often not as “smart” as we’d like them to be, lacking a true understanding of language and human logic. They can also replicate and deepen the biases embedded in our searches, rather than bringing us new information or insight.

Dr. Lease believes there may be better ways to harness the dual power of computers and human minds to create more intelligent information retrieval (IR) systems, benefiting general search engines as well as niche ones like those used for medical knowledge or non-English texts. At the 2017 Annual Meeting of the Association for Computational Linguistics in Vancouver, Canada, Dr. Lease and his collaborators from The University of Texas at Austin and Northeastern University presented two papers describing novel information retrieval systems, research that leveraged the supercomputing resources of UT Austin’s Texas Advanced Computing Center.

In one paper, they presented a method that combines input from multiple annotators—humans who hand-label data used to train and evaluate intelligent algorithms—to determine the best overall annotation for a given text. They applied this method to two problems. First, they analyzed free-text research articles describing medical studies to extract details of each study, such as patient condition, demographics, treatments, and outcomes. Second, they used named-entity recognition to analyze breaking news stories and identify the events, people, and places involved.

“An important challenge in natural language processing is accurately finding important information contained in free-text, which lets us extract it into databases and combine it with other data to make more intelligent decisions and new discoveries,” Dr. Lease said. “We’ve been using crowdsourcing to annotate medical and news articles at scale so that our intelligent systems will be able to more accurately find the key information contained in each article.” 

Such annotation has traditionally been performed by in-house domain experts. However, crowdsourcing has recently become a popular means of acquiring large labeled datasets at lower cost. Predictably, annotations from laypeople are of lower quality than those from domain experts, so it is necessary to estimate the reliability of each crowd annotator and to aggregate individual annotations into a single set of “reference standard” consensus labels.
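
As a minimal sketch of this aggregation step (simple reliability-weighted voting over hypothetical data; the model in the paper is more sophisticated):

```python
from collections import defaultdict

def aggregate_labels(annotations, worker_weight=None):
    """Aggregate per-item crowd labels into consensus labels.

    annotations: dict mapping item_id -> list of (worker_id, label) pairs.
    worker_weight: optional dict of per-worker reliability weights;
    without weights this reduces to plain majority voting.
    """
    worker_weight = worker_weight or {}
    consensus = {}
    for item, votes in annotations.items():
        tally = defaultdict(float)
        for worker, label in votes:
            tally[label] += worker_weight.get(worker, 1.0)
        consensus[item] = max(tally, key=tally.get)
    return consensus

# Hypothetical token-level named-entity votes from three crowd workers.
annotations = {
    "tok1": [("w1", "PERSON"), ("w2", "PERSON"), ("w3", "O")],
    "tok2": [("w1", "O"), ("w2", "LOCATION"), ("w3", "LOCATION")],
}
weights = {"w1": 0.9, "w2": 0.6, "w3": 0.5}  # estimated reliabilities
print(aggregate_labels(annotations, weights))
# {'tok1': 'PERSON', 'tok2': 'LOCATION'}
```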

Lease’s team found that their method was able to train a neural network—a form of artificial intelligence (AI) modeled on the human brain—so it could very accurately predict named entities and extract relevant information in unannotated texts. The new method improves upon existing tagging and training methods. It also provides an estimate of each worker’s label quality, which can be transferred between tasks and is useful for error analysis and intelligently routing tasks—identifying the best person to annotate each particular text.

The group’s second paper addressed the fact that neural models for natural language processing (NLP) often ignore existing resources like WordNet—a lexical database for the English language that groups words into sets of synonyms—or domain-specific ontologies, such as the Unified Medical Language System, which encode knowledge about a given field.

They proposed a method for exploiting these existing linguistic resources via weight sharing to improve NLP models for automatic text classification. For example, their model learns to classify whether or not published medical articles describing clinical trials are relevant to a well-specified clinical question. In weight sharing, similar words share some fraction of a weight, or assigned numerical value. Weight sharing constrains the number of free parameters that a system must learn, thereby increasing the efficiency and accuracy of the neural model, and serving as a flexible way to incorporate prior knowledge. In doing so, they combine the best of human knowledge with machine learning.

“Neural network models have tons of parameters and need lots of data to fit them,” said Lease. “We had this idea that if you could somehow reason about some words being related to other words a priori, then instead of having to have a parameter for each one of those words separately, you could tie together the parameters across multiple words and, in that way, need less data to learn the model. It would realize the benefits of deep learning without large data constraints.”
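
A toy sketch of that idea (illustrative only, not the paper’s actual model): give each word an embedding that concatenates a vector shared with its synonym group (e.g., a WordNet synset) and a small word-specific vector, so all words in a group reuse the same shared parameters and fewer free parameters must be learned overall:

```python
import numpy as np

class SharedEmbedding:
    """Toy word embeddings where synonyms share part of their vector.

    Each word's embedding concatenates a group vector (common to every
    word in its synonym group) with a smaller word-specific vector, so
    the shared parameters are learned once per group rather than once
    per word.
    """

    def __init__(self, word_to_group, num_groups, shared_dim=8, private_dim=4):
        rng = np.random.default_rng(0)
        self.word_to_group = word_to_group
        self.group_vecs = rng.normal(size=(num_groups, shared_dim))
        self.word_vecs = {w: rng.normal(size=private_dim) for w in word_to_group}

    def embed(self, word):
        shared = self.group_vecs[self.word_to_group[word]]
        return np.concatenate([shared, self.word_vecs[word]])

# Hypothetical synonym groups, e.g. derived from WordNet synsets.
word_to_group = {"anemia": 0, "anaemia": 0, "fatigue": 1, "tiredness": 1}
emb = SharedEmbedding(word_to_group, num_groups=2)
v1, v2 = emb.embed("anemia"), emb.embed("anaemia")
print(np.allclose(v1[:8], v2[:8]))  # True: synonyms share the first 8 dims
```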

They applied a form of weight sharing to a sentiment analysis of movie reviews and to a biomedical search related to anemia. Their approach consistently yielded improved performance on classification tasks compared to strategies that did not exploit weight sharing. By improving core natural language processing technologies for automatic information extraction and classification of texts, Dr. Lease says web search engines built on these technologies can continue to improve.

Matt Lease on the Information Retrieval and Crowdsourcing Lab

Dec 31, 2013

How would you characterize the purpose and goals of the Information Retrieval and Crowdsourcing Lab?

To advance state-of-the-art methodologies for search (i.e., how we both build effective search engines and measure that effectiveness across a diverse range of search tasks) and human computation / crowdsourcing (i.e., how we effectively mobilize and organize people online to accurately perform information processing tasks, particularly difficult tasks which remain beyond what today's best intelligent systems can achieve automatically).

What attributes (e.g. skills, interests, background) make a student an ideal candidate to work with you in the IR & Crowdsourcing Lab?

My funded research assistants (RAs) typically have a computer science or equivalent background, with strong skills in both computing and math. Beyond my RAs, I have also advised many students from other backgrounds who bring diverse skills to bear on these problem areas.

For example, I recently advised published research and a Master's thesis on legal issues in crowdsourcing. This research anticipated subsequent litigation over whether "microwork contributors" on crowdsourcing platforms should be classified as employees rather than independent contractors. Given how thoroughly crowdsourcing has become ingrained in how we build intelligent systems today, I was particularly concerned that our technical house of cards could come crashing down if the legal foundation proved faulty. I mention this as just one example of how crowdsourcing is a fascinating socio-technical area offering a rich diversity of interesting research questions that students from different backgrounds can pursue.

The number one thing a student needs to succeed is the passion, drive, and imagination to do good work that will change the world. We are not standing on the sidelines waiting to see what tomorrow's world will look like. Instead, we are the ones leading the charge to build technology and make discoveries that will impact the world we live in today and make dreams for the future become a reality. This is what it means to be at a world-class research university and lead the charge at the forefront of science. There's no better place to be.

How many departments on campus are currently represented in the IR & Crowdsourcing Lab and what possible collaborations do you foresee in the future?

We regularly work with faculty and students from computer science (CS), electrical and computer engineering (ECE), and linguistics. We also interact with others from Mathematics, Statistics and Scientific Computing (to be renamed "Statistics and Data Science"), and McCombs' Information, Risk and Operations Management. Currently we have two pending projects with other units: one with ECE that uses search engine technology to find bugs in software, and one with CS that integrates AI and crowdsourcing to create an intelligent building, a form of "ubiquitous computing".

What are some of the resources the lab has to offer?

Google has kindly donated a pool of Android phones and Google TV devices, and we have some fast computers and cool datasets. The main resource, though, is the awesome students who work here, along with lots of free caffeine!

Where can people learn more about the Information Retrieval and Crowdsourcing Lab?

My crowdsourcing webpage has become the de facto place on the Internet to track important research events (conferences, journals, tutorials and talks, etc.). I created it just to track these things for myself, but it has turned out to be useful to many others as well.

I've been fortunate to be part of two significant research initiatives charting future research.

  1. In terms of search engine technology, SWIRL'12: The Second Strategic Workshop on Information Retrieval in Lorne brought together 45 of the top researchers in the field to chart a roadmap of long-term challenges and opportunities for the field. Our report is online at: http://sigir.org/forum/2012J/2012j_sigirforum_A_allanSWIRL2012Report.pdf
  2. In terms of crowdsourcing, I worked with leading researchers from seven other universities to envision the future of crowdsourcing and important research challenges and opportunities to be tackled. The paper appeared at ACM CSCW 2013 and can be found online at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2190946.
