AIAI Seminar, Thursday 5th June, by Prof. Munindar Singh (North Carolina State University)

Bio: Dr. Munindar P. Singh is the SAS Institute Distinguished Professor in the Department of Computer Science at North Carolina State University, where he is also an Alumni Distinguished Graduate Professor.


Munindar's research interests include artificial intelligence and multiagent systems with a focus on ethics, accountability, trust, and governance from a sociotechnical systems perspective.  Current applications of interest include e-business, privacy, transportation, and humanitarian logistics.


Munindar is a Fellow of the IEEE (Institute of Electrical and Electronics Engineers), AAAI (Association for the Advancement of Artificial Intelligence), AAAS (American Association for the Advancement of Science), and ACM (Association for Computing Machinery), and a member (honoris causa) of Academia Europaea.  He has won the ACM/SIGAI Autonomous Agents Research Award, the IEEE TCSVC Research Innovation Award, and the IFAAMAS Influential Paper Award.  He has won NC State University's Outstanding Research Achievement Award twice, was selected as an Alumni Distinguished Graduate Professor, and is a member of NCSU's Research Leadership Academy.  He has also won NCSU's Faculty Graduate Mentor Award.


Munindar's research has been recognized with awards and sponsorship by (alphabetically) Army Research Lab, Army Research Office, Cisco Systems, Consortium for Ocean Leadership, DARPA, Department of Defense, Ericsson, Facebook, IBM, Intel, National Science Foundation, and Xerox.  Thirty-six students have received PhD degrees and forty-one students MS degrees under Munindar's direction. In addition, he has advised ten postdoctoral fellows.


Title: Conceptual Foundations for Responsible Autonomy: Trustworthy AI, Norm Deviation, and Consent

Abstract: This talk will summarize some of our recent conceptual work on responsible autonomy and trustworthy AI.  This work steps back from computational models of morality to apply insights from philosophy and social psychology to responsible autonomy.  One challenge is to understand when responsible autonomy involves respecting or deviating from norms.  We show how Habermas's notion of objective, subjective, and practical validity criteria can be used to understand norm deviation.  We apply case law from the US, UK, and Canada as a source of empirical knowledge about when norm deviations are legitimate.  We apply the Habermasian framework as a basis for conceiving of consent.  We adapt a model of trust based on the components of ability, benevolence, and integrity to show what trustworthiness involves, again applying case law as a source of empirical intuitions about how to evaluate AI agents.  These models can provide a basis for effective and meaningful explanations as integral to responsible autonomy.


[1] Amika M. Singh and Munindar P. Singh. "Norm Deviation in Multiagent Systems: A Foundation for Responsible Autonomy." Proceedings of the 32nd International Joint Conference on Artificial Intelligence (IJCAI). Macau: IJCAI, August 2023, pages 289--297. doi: 10.24963/ijcai.2023/33.

[2] Munindar P. Singh. "Consent as a Foundation for Responsible Autonomy." Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI) 36(11), February 2022. Blue Sky Track, pages 12301--12306. doi: 10.1609/aaai.v36i11.21494.

[3] Amika M. Singh and Munindar P. Singh. "Wasabi: A Conceptual Model for Trustworthy Artificial Intelligence." IEEE Computer 56(2), February 2023, pages 20--28. doi: 10.1109/MC.2022.3212022.