Keynote Speakers

Hector Geffner

ICREA and Universitat Pompeu Fabra, Spain & Linköping University, Sweden

Target Languages (vs. Inductive Biases) for Learning to Act and Plan

Talk outline: Recent breakthroughs in AI have shown the remarkable power of deep learning and deep reinforcement learning. These developments, however, have been tied to specific tasks, and progress in out-of-distribution generalization has been limited. While it is assumed that these limitations can be overcome by incorporating suitable inductive biases, the notion of inductive biases itself is often left vague and does not provide meaningful guidance. In this talk, I articulate a different learning approach where representations do not emerge from biases in a neural architecture but are learned over a given target language with a known semantics. The basic ideas are implicit in mainstream AI, where representations have been encoded in languages ranging from fragments of first-order logic to probabilistic structural causal models. The challenge is to learn from data the representations that have traditionally been crafted by hand. Generalization is then a result of the semantics of the language. The goals of the talk are to make these ideas explicit, to place them in a broader context where the design of the target language is crucial, and to illustrate them in the context of learning to act and plan. For this, after a general discussion, I consider learning representations of actions, general policies, and general decompositions. In these cases, learning is formulated as a combinatorial optimization problem, but nothing prevents the use of deep learning techniques instead. Indeed, learning representations over languages with a known semantics provides an account of what is to be learned, while learning representations with neural nets provides a complementary account of how representations can be learned. The challenge and the opportunity are to bring the two together.
Bio: Hector Geffner is an ICREA Research Professor at the Universitat Pompeu Fabra (UPF) in Barcelona, Spain, and a Wallenberg Guest Professor at Linköping University. He grew up in Buenos Aires and obtained a PhD in Computer Science at UCLA in 1989. He then worked at the IBM T.J. Watson Research Center in NY, USA, and at the Universidad Simon Bolivar in Caracas. Hector is a Fellow of AAAI and EurAI, and is currently doing research on learning representations for acting and planning as part of the ERC project RLeap (2020-2025). He has received awards for papers published in JAIR and at ICAPS, including three ICAPS Influential Paper Awards, and received the 1990 ACM Dissertation Award for a thesis supervised by Judea Pearl. He teaches courses on logic, AI, and social and technological change.
Gary Marcus

New York University & Robust.AI

Towards a Proper Foundation for Artificial Intelligence

Talk outline: Large pretrained language models like BERT and GPT-3 have generated enormous enthusiasm, and are capable of producing remarkably fluent language. But they have also been criticized on many grounds, and described as ‘stochastic parrots’. Are they adequate as a basis for general intelligence, and if not, what would a better foundation for general intelligence look like?
Bio: Gary Marcus is a scientist, best-selling author, and entrepreneur. He is Founder and CEO of Robust.AI, and was Founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016. He is the author of five books, including The Algebraic Mind, Kluge, The Birth of the Mind, and The New York Times best seller Guitar Zero, as well as editor of The Future of the Brain and The Norton Psychology Reader. He has published extensively in fields ranging from human and animal behavior to neuroscience, genetics, linguistics, evolutionary psychology, and artificial intelligence, often in leading journals such as Science and Nature, and is perhaps the youngest Professor Emeritus at NYU. His newest book, co-authored with Ernest Davis, Rebooting AI: Building Machines We Can Trust, aims to shake up the field of artificial intelligence.
Francesca Rossi

IBM Research

Thinking Fast and Slow in AI

Talk outline: AI systems have seen dramatic advancement in recent years, supporting many successful applications that are pervading our everyday life. However, we are still mostly seeing instances of narrow AI, and these systems are tightly linked to the availability of huge datasets and computational power. Compared to what human beings are able to do, state-of-the-art AI still lacks many capabilities that would naturally be included in a notion of intelligence: generalizability, robustness, explainability, causal analysis, abstraction, common sense reasoning, ethical reasoning, as well as a complex and seamless integration of learning and reasoning supported by both implicit and explicit knowledge. We argue that a better comprehension of how humans have, and have evolved to obtain, these advanced capabilities can inspire innovative ways to imbue AI systems with these competencies. To this end, we propose to study and exploit cognitive theories of human reasoning and decision making (with special focus on Kahneman's theory of thinking fast and slow) as a source of inspiration for the causal source of these capabilities, which can help us raise the fundamental research questions to be considered when trying to provide AI with desired dimensions of human intelligence that are currently lacking.
Bio: Francesca Rossi is an IBM Fellow and the IBM AI Ethics Global Leader. She is a computer scientist with over 30 years of experience in AI research. Her research interests focus on artificial intelligence; specifically, they include constraint reasoning, preferences, multi-agent systems, computational social choice, and collective decision making. She is also interested in ethical issues in the development and behavior of AI systems, in particular for decision support systems for group decision making. She is a fellow of both AAAI and EurAI, and she has been president of IJCAI and Editor in Chief of the Journal of AI Research. She will be the next president of AAAI.
Josh Tenenbaum

MIT

Reverse Engineering Human Cognitive Development: What do we start with, and how do we learn the rest?

Talk outline: What would it take to build a machine that grows into intelligence the way a person does — that starts like a baby, and learns like a child? AI researchers have long debated the relative value of building systems with strongly pre-specified knowledge representations versus learning representations from scratch, driven by data. However, in cognitive science, it is now widely accepted that the analogous "nature versus nurture" question is a false choice: explaining the origins of human intelligence will most likely require both powerful learning mechanisms and a powerful foundation of built-in representational structure and inductive biases. I will talk about our efforts to build models of the starting state of the infant mind, as well as the learning algorithms that grow knowledge through early childhood and beyond. These models are expressed as probabilistic programs, defined on top of simulation engines that capture the basic dynamics of objects and agents interacting in space and time. Learning algorithms draw on techniques from program synthesis and probabilistic program induction. I will show how these models are beginning to capture core aspects of human cognition and cognitive development, in terms that can be useful for building more human-like AI. I will also talk about some of the major outstanding challenges facing these and other models of human learning.
Bio: Josh Tenenbaum is Professor of Computational Cognitive Science at MIT in the Department of Brain and Cognitive Sciences, the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds and Machines (CBMM). He received his PhD from MIT in 1999, and taught at Stanford from 1999 to 2002. His long-term goal is to reverse-engineer intelligence in the human mind and brain, and use these insights to engineer more human-like machine intelligence. His current research focuses on the development of common sense in children and machines, the neural basis of common sense, and models of learning as Bayesian program synthesis. His work has been published in Science, Nature, PNAS, and many other leading journals, and recognized with awards at conferences in Cognitive Science, Computer Vision, Neural Information Processing Systems, Reinforcement Learning and Decision Making, and Robotics. He is the recipient of the Distinguished Scientific Award for Early Career Contributions in Psychology from the American Psychological Association (2008), the Troland Research Award from the National Academy of Sciences (2011), the Howard Crosby Warren Medal from the Society of Experimental Psychologists (2016), the R&D Magazine Innovator of the Year award (2018), and a MacArthur Fellowship (2019). He is a fellow of the Cognitive Science Society, the Society for Experimental Psychologists, and a member of the American Academy of Arts and Sciences.
Francesca Toni

Imperial College London

Argumentation-Based Explainable AI and Interactionist Reasoning

Talk outline: Explainable AI (XAI) has been investigated for decades and, together with AI itself, has witnessed unprecedented growth in recent years. Among various approaches to XAI, argumentative models have been advocated in both the AI and social science literature, as their dialectical nature appears to match some basic desirable features of the explanation activity. Thus, it is not surprising that computational argumentation, as understood in AI, is being used to provide (a variety of) explanations for the outputs of (a variety of) AI methods, leveraging computational argumentation's wide array of reasoning abstractions. I will argue that this overall take on XAI is in line with Mercier and Sperber's interactionist view of human reasoning and can support, in particular, conversational forms of XAI between humans and machines.
Bio: Francesca Toni is Professor in Computational Logic and Royal Academy of Engineering/JP Morgan Research Chair on Argumentation-based Interactive Explainable AI at the Department of Computing, Imperial College London, UK, and the founder and leader of the CLArg (Computational Logic and Argumentation) research group. Her research interests lie within the broad areas of Knowledge Representation and Reasoning in AI and Explainable AI, and in particular include Argumentation, Argument Mining, Logic-Based Multi-Agent Systems, Non-monotonic/Default/Defeasible Reasoning, and Machine Learning. She has recently been awarded an ERC Advanced grant on Argumentation-based Deep Interactive eXplanations (ADIX). She is a EurAI fellow, serves on the editorial boards of the Argument and Computation journal and the AI journal, and sits on the Board of Advisors for KR Inc. and for Theory and Practice of Logic Programming.
Zhi-Hua Zhou

Nanjing University

Leveraging Unlabeled Data: From ‘pure learning’ to learning + reasoning

Talk outline: It is generally expensive or even infeasible to collect a huge amount of labeled training data in many practical applications, and therefore, leveraging unlabeled data is attracting more and more attention. In this talk, we will briefly introduce efforts to leverage unlabeled data, from "pure learning" solutions that exploit unlabeled data by using machine learning only, to a recent "learning + reasoning" solution that exploits machine learning and logical reasoning in a balanced and mutually beneficial way, where the use of logical reasoning offers the possibility of exploiting domain knowledge, and even the possibility of knowledge discovery or refinement based on observed data.
Bio: Zhi-Hua Zhou is Professor of Computer Science and Artificial Intelligence at Nanjing University. His research interests are mainly in machine learning and data mining, with significant contributions to ensemble methods, weakly supervised learning, and multi-label learning. He has authored the books 'Ensemble Methods: Foundations and Algorithms', 'Machine Learning (in Chinese)', etc., and published more than 200 papers in top-tier journals and conferences. According to Google Scholar, his publications have received 60,000+ citations, with an H-index of 105. Many of his inventions have been successfully applied in industry. He founded ACML (the Asian Conference on Machine Learning), and has served as Program Chair for AAAI-19, IJCAI-21, etc., General Chair for ICDM'16, PAKDD'19, etc., and Senior Area Chair for NeurIPS and ICML. He is on the advisory board of AI Magazine, and an associate editor of AIJ, MLJ, IEEE TPAMI, ACM TKDD, etc. He is a Fellow of the ACM, AAAI, AAAS, and IEEE, and a recipient of the National Natural Science Award of China, the IEEE Computer Society Edward J. McCluskey Technical Achievement Award, the CCF-ACM Artificial Intelligence Award, etc.