NIST hosts the Text Retrieval Conference (TREC), an annual workshop series on information retrieval (IR) and search engine technology, now in its 25th year. TREC is an "evaluation workshop", where challenge problems in IR are defined and participants try to solve the problems by developing novel search algorithms. Participants come together at the physical workshop to learn from NIST how well their systems performed, and from each other how different researchers approached the tasks. All results and datasets are made available to the public as freely and as inexpensively as possible.
In this talk, I will introduce how search effectiveness is measured within what is commonly known as the "Cranfield" or "test collection" paradigm, and how that works within the TREC framework. Along the way, I will talk about challenge problems we have run, test collections we have built, and things we have learned both about search and about how we go about measuring it.
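As a rough illustration of Cranfield-style evaluation (the talk itself covers the details): a system's ranked output is scored against a set of human relevance judgments ("qrels"). The sketch below, a hypothetical minimal example and not NIST's actual scoring code, computes two common IR metrics, precision at k and average precision, from a ranked document list and a set of judged-relevant document IDs.

```python
def precision_at_k(ranking, relevant, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    top_k = ranking[:k]
    return sum(1 for doc in top_k if doc in relevant) / k

def average_precision(ranking, relevant):
    """Mean of precision values at each rank where a relevant doc appears,
    averaged over the total number of relevant documents."""
    if not relevant:
        return 0.0
    hits, score = 0, 0.0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            score += hits / rank
    return score / len(relevant)

# Hypothetical run and qrels for one topic
run = ["d1", "d4", "d3", "d2"]
qrels = {"d1", "d3"}
```

In TREC practice, such per-topic scores are averaged across all topics in a test collection (e.g., mean average precision) so systems can be compared on the same judged data.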
Dr. Ian Soboroff is a computer scientist and leader of the Retrieval Group at the National Institute of Standards and Technology (NIST). The Retrieval Group organizes the Text REtrieval Conference (TREC), the Text Analysis Conference (TAC), and the TREC Video Retrieval Evaluation (TRECVID). These are all large, community-based research workshops that drive the state of the art in information retrieval, video search, web search, information extraction, text summarization, and other areas of information access. He has co-authored many publications in information retrieval evaluation, test collection building, text filtering, collaborative filtering, and intelligent software agents. His current research interests include building test collections for social media environments and nontraditional retrieval tasks.
10:00am to 11:00am