Probabilistic programs: formal languages for probabilistic knowledge
Abstract: Probabilistic generative models have exploded in recent years, becoming central to machine learning and AI. These models are usually described with a mixture of informal English, mathematics, and box-and-arrow diagrams. Such descriptions are error-prone and scale poorly with model complexity. I will describe the probabilistic programming approach to formalizing abstract probabilistic knowledge, and in particular the Church language. The twin challenges of probabilistic programming are descriptive adequacy -- can we write interesting new models? -- and algorithmic tractability -- can we create efficient universal inference systems? I will illustrate several solutions to these challenges using examples from computer graphics and cognitive science: rich scene synthesis using our LARJ-MCMC algorithm, and reasoning about others' mental states using dynamic programming.
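To make the idea concrete, here is a minimal sketch of a probabilistic program, written in Python rather than Church (which is a Scheme dialect). The toy diagnosis model, its probabilities, and the function names are illustrative, not taken from the talk; rejection sampling stands in for the universal inference engines discussed above.

import random

def flip(p=0.5):
    # A random primitive: returns True with probability p.
    return random.random() < p

def model():
    # Generative knowledge as an ordinary program:
    # a cough can be caused by a cold or by allergies.
    cold = flip(0.1)
    allergies = flip(0.2)
    cough = flip(0.9) if (cold or allergies) else flip(0.05)
    return cold, cough

def rejection_query(n=50000):
    # Universal, model-agnostic inference: run the program repeatedly,
    # keep only runs consistent with the observation, and read off the
    # query variable from the accepted runs.
    samples = []
    while len(samples) < n:
        cold, cough = model()
        if cough:  # condition on having observed a cough
            samples.append(cold)
    return sum(samples) / n

print("P(cold | cough) ~=", rejection_query())

The point of the formalism is that the same generic inference procedure applies to any model expressible as a program, rather than requiring a hand-derived algorithm per model.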
Bio: Noah D. Goodman is Assistant Professor of Psychology, Linguistics (by courtesy), and Computer Science (by courtesy) at Stanford University. He studies the computational basis of human thought, merging behavioral experiments with formal methods from statistics and logic. Specific projects range from concept learning and language understanding to inference algorithms for probabilistic programming languages. He received his Ph.D. in mathematics from the University of Texas at Austin in 2003. In 2005 he entered cognitive science, working as a postdoc and research scientist at MIT. In 2010 he moved to Stanford, where he runs the Computation and Cognition Lab.
Constrained Conditional Models: Integer Linear Programming Formulations for Natural Language Understanding
Abstract: Computational approaches to problems in Natural Language Understanding and Information Access and Extraction often involve assigning values to sets of interdependent variables. Examples of tasks of interest include semantic role labeling (analyzing natural language text at the level of "who did what to whom, when and where"), syntactic parsing, information extraction (identifying events, entities and relations), transliteration of names, and textual entailment (determining whether one utterance is a likely consequence of another). Over the last few years, one of the most successful approaches to studying these problems involves Constrained Conditional Models (CCMs), an Integer Linear Programming formulation that augments probabilistic models with declarative constraints as a way to support such global decisions.
I will present research within this framework, discussing old and new results pertaining to inference issues, learning algorithms for training these global models, and the interaction between learning and inference.
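As a toy illustration of the CCM idea: a linear model supplies local scores, and declarative constraints restrict which global assignments are admissible at inference time. In the Python sketch below, the labels, per-token scores, and the BIO-style constraint are invented for illustration, and brute-force enumeration stands in for the Integer Linear Programming solver a real CCM would use.

import itertools

LABELS = ["B", "I", "O"]

# scores[t][label]: illustrative log-scores from a hypothetical
# per-token classifier, for a 3-token input.
scores = [
    {"B": 1.2, "I": -0.3, "O": 0.1},
    {"B": 0.2, "I": 0.9, "O": 0.4},
    {"B": -0.1, "I": 0.8, "O": 0.6},
]

def valid(y):
    # Declarative constraint: an "I" may not start the sequence
    # or follow an "O".
    if y[0] == "I":
        return False
    return all(not (a == "O" and b == "I") for a, b in zip(y, y[1:]))

def ccm_inference(scores):
    # argmax over label sequences of the summed local scores,
    # restricted to assignments that satisfy the constraints.
    candidates = (y for y in itertools.product(LABELS, repeat=len(scores))
                  if valid(y))
    return max(candidates, key=lambda y: sum(s[l] for s, l in zip(scores, y)))

print(ccm_inference(scores))  # -> ('B', 'I', 'I')

Separating the learned scoring function from the declarative constraints is what lets the constraints be stated once, in closed form, rather than being learned from data.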
Bio: Dan Roth is a Professor in the Department of Computer Science and the Beckman Institute at the University of Illinois at Urbana-Champaign and a University of Illinois Scholar. He is the director of a DHS Center for Multimodal Information Access & Synthesis (MIAS) and holds faculty positions in Statistics, Linguistics and at the School of Library and Information Sciences. Roth is a Fellow of the ACM and of AAAI for his contributions to Machine Learning and to Natural Language Processing. He has published broadly in machine learning, natural language processing, knowledge representation and reasoning, and learning theory, and has developed advanced machine-learning-based tools for natural language applications that are widely used by the research community. Prof. Roth has given keynote talks at major conferences, including AAAI, EMNLP and ECML, and has presented several tutorials at universities and major conferences. Roth was the program chair of AAAI'11, ACL'03 and CoNLL'02, has served on the editorial boards of several journals in his research areas, and has won several teaching and paper awards. Prof. Roth received his B.A. summa cum laude in Mathematics from the Technion, Israel, and his Ph.D. in Computer Science from Harvard University in 1995.
Relational Representations and Tree Decompositions -- promise and challenges
Abstract: Two structures are commonly used in probabilistic reasoning: trees and relational representations. Each structure leads to exact and efficient inference procedures in models that exhibit it. Combining these structures promises a larger class of graphical models for which exact inference is guaranteed. However, such a combination is challenging because relational inference is not fully understood as a graph operation, and tree-like lifted-inference operations result in unpredictable structures. In this talk I will describe three insights that point the way toward combining the structures and toward a better understanding of lifted inference as a graph operation: tree structures available in first-order logical representations, restrictions on the language, and trees over relational representations of dynamic systems.
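A minimal sketch of the first claim, that tree structure buys exact and efficient inference: on a chain of binary variables, a forward message pass computes an exact marginal in time linear in the chain length, versus exponential enumeration. The Python code and its potentials below are illustrative and not drawn from the talk.

def marginal_of_last(prior, pairwise, n):
    # prior: distribution over X1; pairwise[a][b] is proportional to
    # P(X_{t+1} = b | X_t = a). A single forward message pass along the
    # chain X1 - X2 - ... - Xn costs O(n) instead of O(2^n).
    msg = prior[:]
    for _ in range(n - 1):
        msg = [sum(msg[a] * pairwise[a][b] for a in range(2))
               for b in range(2)]
    z = sum(msg)
    return [m / z for m in msg]

prior = [0.6, 0.4]
pairwise = [[0.7, 0.3], [0.2, 0.8]]  # row: current state, column: next state
print(marginal_of_last(prior, pairwise, n=3))  # -> [0.45, 0.55]

The open question the talk addresses is when this kind of guarantee survives once the variables are replaced by relational (first-order) representations.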
Bio: Eyal Amir is Associate Professor in the Computer Science Department at the University of Illinois at Urbana-Champaign (UIUC) and co-founder and CEO of the startup FasPark. His research focuses on AI, specifically reasoning, learning, and decision making with logical and probabilistic knowledge. His company FasPark uses machine learning and probabilistic inference on graphs to help drivers find parking faster in metropolitan areas. Before joining UIUC in 2004 he was a postdoctoral researcher at UC Berkeley; he received his Ph.D. in Computer Science from Stanford University, and B.Sc. and M.Sc. degrees in mathematics and computer science from Bar-Ilan University, Israel, in 1992 and 1994, respectively. Eyal is a recipient of a number of awards for his academic research. Among them, he was chosen by IEEE as one of the "10 to watch in AI" (2006) and awarded the Arthur L. Samuel award for the best Computer Science Ph.D. thesis (2001-2002) at Stanford University.