Towards Safe and Responsible AI Development (hosted by CARE-AI)
CARE-AI Seminar Series featuring Tegan Maharaj
Abstract: Artificial intelligence (AI) systems are increasingly deployed in real-world settings, but we lack a rigorous science for understanding or predicting their behaviour in those settings. Even when we can formalize the problem we’re addressing in clear statistical terms (e.g., supervised learning on a fixed dataset), much remains unknown about how and why deep networks trained with stochastic gradient descent generalize as well as they do, why they fail when they do, and how they will perform on out-of-distribution data. To address these questions, I study AI systems and ‘what goes into’ them: not only data, but the broader learning environment, including task design/specification, loss function, and regularization, as well as the broader societal context of deployment, including privacy considerations, trends and incentives, norms, and human biases. Concretely, this involves techniques such as designing unit-test environments to empirically evaluate learning behaviour, or sandboxing to simulate the deployment of an AI system. This talk will give an overview of my work, which seeks to contribute understanding and techniques to the growing science of responsible AI development, while usefully applying AI to high-impact ecological problems including climate change, epidemiology, and ecological impact assessments.
Bio: I recently joined the University of Toronto as an Assistant Professor in the Faculty of Information, and am an affiliate of the Vector Institute and the Schwartz Reisman Institute; I am soon to complete my PhD at Mila. I’m a managing editor at the Journal of Machine Learning Research (JMLR) and a co-founder of Climate Change AI (CCAI), an organization that catalyzes impactful work applying ML to problems of climate change. My recent research has two themes: (1) using deep representation learning in policy analysis, impact assessment, and risk mitigation, and (2) designing data or unit-test environments to empirically evaluate or audit learning behaviour, or to simulate the deployment of an AI system.