The Future of Search Engines

Sandlin, Anu  |  Aug 31, 2018

Search engines have changed the world. They put vast amounts of information at our fingertips. But search engines have their flaws, says iSchool Associate Professor Matthew Lease. Search results are often not as “smart” as we’d like them to be, lacking a true understanding of language and human logic. They can also replicate and deepen the biases embedded in our searches, rather than bringing us new information or insight.

Dr. Lease believes there may be better ways to harness the dual power of computers and human minds to create more intelligent information retrieval (IR) systems, benefiting general search engines as well as niche ones like those used for medical knowledge or non-English texts. At the 2017 Annual Meeting of the Association for Computational Linguistics in Vancouver, Canada, Dr. Lease and his collaborators from The University of Texas at Austin and Northeastern University presented two papers describing novel information retrieval systems, work that leverages the supercomputing resources at UT Austin's Texas Advanced Computing Center.

In one paper, they presented a method that combines input from multiple annotators—humans who hand-label the data used to train and evaluate intelligent algorithms—to determine the best overall annotation for a given text. They applied this method to two problems: first, analyzing free-text research articles describing medical studies to extract details of each study, such as patient condition, demographics, treatments, and outcomes; and second, using named-entity recognition to analyze breaking news stories and identify the events, people, and places involved.
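For readers unfamiliar with the technique, the short sketch below shows what named-entity recognition looks like in practice, using the off-the-shelf spaCy library rather than the authors' system; the example sentence and model name are placeholders.

    # Minimal named-entity recognition illustration using spaCy -- an
    # off-the-shelf library, not the authors' system described above.
    # Setup (assumed): pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Wildfires spread near Austin on Tuesday, FEMA officials said.")

    for ent in doc.ents:
        # Each recognized span carries a type label such as PERSON, ORG,
        # GPE (place), or DATE.
        print(ent.text, ent.label_)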

“An important challenge in natural language processing is accurately finding important information contained in free-text, which lets us extract it into databases and combine it with other data to make more intelligent decisions and new discoveries,” Dr. Lease said. “We’ve been using crowdsourcing to annotate medical and news articles at scale so that our intelligent systems will be able to more accurately find the key information contained in each article.” 

Such annotation has traditionally been performed by in-house domain experts. However, crowdsourcing has recently become a popular means of acquiring large labeled datasets at lower cost. Predictably, annotations from laypeople are of lower quality than those from domain experts, so it is necessary to estimate the reliability of crowd annotators and to aggregate individual annotations into a single set of “reference standard” consensus labels.

Lease’s team found that their method was able to train a neural network—a form of artificial intelligence (AI) modeled on the human brain—so it could very accurately predict named entities and extract relevant information in unannotated texts. The new method improves upon existing tagging and training methods. It also provides an estimate of each worker’s label quality, which can be transferred between tasks and is useful for error analysis and intelligently routing tasks—identifying the best person to annotate each particular text.
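A minimal sketch of the general idea behind such aggregation appears below, in the spirit of classic models like Dawid & Skene (1979): consensus labels and per-worker reliabilities are estimated jointly by alternating between the two. This is an illustration under simplified assumptions, not the model from the paper, and the function and variable names are hypothetical.

    # Consensus-label aggregation with per-worker reliability estimates.
    # Simplified illustration only -- not the paper's model.
    from collections import defaultdict

    def aggregate(labels, n_iters=20):
        """labels: dict mapping (item, worker) -> label."""
        items = {i for i, _ in labels}
        workers = {w for _, w in labels}
        reliability = {w: 1.0 for w in workers}  # start by trusting everyone equally
        consensus = {}

        for _ in range(n_iters):
            # Step 1: weighted vote per item; each worker votes with its weight.
            for i in items:
                votes = defaultdict(float)
                for (item, w), y in labels.items():
                    if item == i:
                        votes[y] += reliability[w]
                consensus[i] = max(votes, key=votes.get)
            # Step 2: a worker's reliability is its agreement with the consensus.
            for w in workers:
                answers = [(i, y) for (i, ww), y in labels.items() if ww == w]
                agree = sum(consensus[i] == y for i, y in answers)
                reliability[w] = agree / len(answers)
        return consensus, reliability

    # Toy example: three workers label two sentences as EVENT vs. OTHER.
    labels = {
        ("s1", "w1"): "EVENT", ("s1", "w2"): "EVENT", ("s1", "w3"): "OTHER",
        ("s2", "w1"): "OTHER", ("s2", "w2"): "OTHER", ("s2", "w3"): "OTHER",
    }
    consensus, reliability = aggregate(labels)
    print(consensus)    # {'s1': 'EVENT', 's2': 'OTHER'}
    print(reliability)  # w3 earns a lower estimated reliability than w1 and w2

The estimated reliabilities are what make the routing described above possible: annotators who consistently agree with the consensus can be trusted with harder texts.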

The group’s second paper addressed the fact that neural models for natural language processing (NLP) often ignore existing resources like WordNet—a lexical database for the English language that groups words into sets of synonyms—or domain-specific ontologies, such as the Unified Medical Language System, which encode knowledge about a given field.

They proposed a method for exploiting these existing linguistic resources via weight sharing to improve NLP models for automatic text classification. For example, their model learns to classify whether or not published medical articles describing clinical trials are relevant to a well-specified clinical question. In weight sharing, similar words share some fraction of a weight, or assigned numerical value. Weight sharing constrains the number of free parameters that a system must learn, thereby increasing the efficiency and accuracy of the neural model, and serving as a flexible way to incorporate prior knowledge. In doing so, they combine the best of human knowledge with machine learning.
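One simple way to realize this kind of weight sharing is to give every word a small free embedding plus an embedding shared with its WordNet-style synonym group, as sketched below. This is an illustrative parameterization, not the paper's exact architecture; the class name and toy vocabulary are hypothetical.

    # Weight sharing across related words: each word's representation is its
    # own free embedding plus an embedding shared by its synonym group.
    # Illustrative sketch only -- not the paper's exact architecture.
    import torch
    import torch.nn as nn

    class SharedEmbedding(nn.Module):
        def __init__(self, vocab_size, n_groups, word_to_group, dim=50):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, dim)   # per-word parameters
            self.group_emb = nn.Embedding(n_groups, dim)    # shared per synonym group
            self.register_buffer("word_to_group", torch.tensor(word_to_group))

        def forward(self, word_ids):
            # Synonyms draw on the same group vector, so fewer free parameters
            # must be learned per word -- prior knowledge acts as a constraint.
            return self.word_emb(word_ids) + self.group_emb(self.word_to_group[word_ids])

    # Toy vocabulary: "good" and "great" share a group; "bad" stands alone.
    vocab = {"good": 0, "great": 1, "bad": 2}
    word_to_group = [0, 0, 1]   # word index -> synonym-group index
    emb = SharedEmbedding(vocab_size=3, n_groups=2, word_to_group=word_to_group)
    vectors = emb(torch.tensor([vocab["good"], vocab["great"], vocab["bad"]]))
    print(vectors.shape)  # torch.Size([3, 50])

Because "good" and "great" draw on the same group vector, training evidence about one informs the other, which is exactly the data-efficiency argument Lease makes below.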

“Neural network models have tons of parameters and need lots of data to fit them,” said Lease. “We had this idea that if you could somehow reason about some words being related to other words a priori, then instead of having to have a parameter for each one of those words separately, you could tie together the parameters across multiple words and, in that way, need less data to learn the model. It would realize the benefits of deep learning without large data constraints.”

They applied a form of weight sharing to a sentiment analysis of movie reviews and to a biomedical search related to anemia. Their approach consistently yielded improved performance on classification tasks compared to strategies that did not exploit weight sharing. By improving core natural language processing technologies for automatic information extraction and classification of texts, Dr. Lease says web search engines built on these technologies can continue to improve.

Matt Lease on the Information Retrieval and Crowdsourcing Lab

Dec 31, 2013

How would you characterize the purpose and goals of the Information Retrieval and Crowdsourcing Lab?

To advance state-of-the-art methodologies for search (i.e., how we both build effective search engines and measure that effectiveness across a diverse range of search tasks) and human computation/crowdsourcing (i.e., how we effectively mobilize and organize people online to accurately perform information-processing tasks, particularly difficult tasks that remain beyond what today's best intelligent systems can achieve automatically).

What attributes (e.g. skills, interests, background) make a student an ideal candidate to work with you in the IR & Crowdsourcing Lab?

My funded research assistants (RAs) typically have a computer science or equivalent background, with strong training in both computing and math. Beyond my RAs, I have also advised many students from other backgrounds who bring diverse skills to bear on these problem areas.

For example, I recently advised published research and a Master's thesis on legal issues in crowdsourcing. This research anticipated subsequent litigation over whether "microwork contributors" on crowdsourcing platforms should be classified as employees rather than independent contractors. Given how thoroughly crowdsourcing has become ingrained in how we build intelligent systems today, I was particularly concerned that our technical house of cards could come crashing down if its legal foundation proved faulty. I mention this as one example of how crowdsourcing is a fascinating socio-technical area offering a rich diversity of research questions that students from different backgrounds can pursue.

The number one thing a student needs to succeed is the passion, drive, and imagination to do good work that will change the world. We are not standing on the sidelines waiting to see what tomorrow's world will look like. Instead, we are leading the charge to build technology and make discoveries that will shape the world we live in today and make dreams for the future a reality. This is what it means to be at a world-class research university at the forefront of science. There's no better place to be.

How many departments on campus are currently represented in the IR & Crowdsourcing Lab and what possible collaborations do you foresee in the future?

We regularly work with faculty and students from computer science (CS), electrical and computer engineering (ECE), and linguistics. We also interact with others from Mathematics, Statistics and Scientific Computing (to be renamed "Statistics and Data Science"), and McCombs' Information, Risk and Operations Management. Currently we have two pending projects with other units: one with ECE that uses search engine technology to find bugs in software, and one with CS that integrates AI and crowdsourcing to create an intelligent building, a form of "ubiquitous computing".

What are some of the resources the lab has to offer?

Google has kindly donated a pool of Android phones and Google TV devices, and we have some fast computers and cool datasets. The main resource, though, is the awesome students here to work with, along with lots of free caffeine!

Where can people learn more about the Information Retrieval and Crowdsourcing Lab?

My crowdsourcing webpage has become the de facto place on the Internet to track important research events (conferences, journals, tutorials and talks, etc.). I created it just to track these things for myself, but it has turned out to be useful to many others as well.

I've been fortunate to be part of two significant initiatives charting future research directions.

  1. In terms of search engine technology, SWIRL'12: The Second Strategic Workshop on Information Retrieval in Lorne brought together 45 of the top researchers in the field to chart a roadmap of long-term challenges and opportunities. Our report is online at: http://sigir.org/forum/2012J/2012j_sigirforum_A_allanSWIRL2012Report.pdf
  2. In terms of crowdsourcing, I worked with leading researchers from seven other universities to envision the future of crowdsourcing and the important research challenges and opportunities to be tackled. The paper appeared at ACM CSCW 2013 and can be found online at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2190946.

Lease Garners Three Early Career Awards in One Year

Aug 19, 2013

Assistant Professor Matt Lease has accomplished a rare feat for a young faculty member, securing three prestigious early career awards in one year from federal government agencies. "To receive one career award from a federal funding agency is recognition of early prominence and a strong predictor of future scholarly impact," said Dean Andrew Dillon. "To receive three, all in one year, is unprecedented in my experience. In the true spirit of the iSchool, Matt's work crosses disciplinary boundaries and I am convinced his work has the potential to solve pressing information problems in the years ahead."

• His $550,000 early career award from the National Science Foundation will support study of how crowdsourcing approaches can be made more widely viable and less risky for potential adopters.

• To advance curation and archival practices for conversational speech, Lease received $290,000 from the Institute of Museum and Library Services. As an exemplar test case, his research will focus on the University of Southern California Shoah Foundation's oral history interviews with Holocaust eyewitnesses.

• Complementing his IMLS project, Lease's $300,000 Young Faculty Award from the Defense Advanced Research Projects Agency is applying enhanced speech transcription technology to improve search of conversational speech archives. Lease has already begun to establish himself as a leading expert on crowdsourcing, and the three early career grants will allow him to delve even more deeply into the topic.
