Prof. Pritchett Participates in RSS Workshop on "Morality and Social Trust in Autonomous Robots"

JULY 18, 2017 - Humans instinctively expect robots to behave morally and make ethical decisions. But to design algorithms that make morally aware, ethical decisions, and thus to create trustworthy robots, we must understand the conceptual theory of morality in machine autonomy in addition to understanding, formalizing, and expressing trust itself.

This was the challenge put to CEC Prof. Amy Pritchett as an invited speaker at the Robotics: Science and Systems (RSS) 2017 Workshop "Morality and Social Trust in Autonomous Robots," held July 16 at MIT. Prof. Pritchett was one of nine invited speakers, alongside professors of computer science, philosophy, ethics, and psychology from Carnegie Mellon, Duke, Michigan, Münster, Brown, Tufts, MIT, and Penn State. The full motivation for the workshop follows:

Robots are becoming members of our society. Complex algorithms have made robots increasingly sophisticated machines with rising levels of autonomy, enabling them to leave behind their traditional workplaces in factories and enter a society with complex social rules, relationships, and expectations. Driverless cars, home assistive robots, and unmanned aerial vehicles are just a few examples. As such systems become more involved in our daily lives, their decisions affect us more directly, and we instinctively expect robots to behave morally and make ethical decisions. For instance, we expect a firefighter robot to follow ethical principles when faced with a choice of saving one person's life over another's in a rescue mission, and we expect an eldercare robot to take a moral stance when its owner's instructions conflict with the interests of others (unlike the robot in the movie "Robot & Frank").

Such expectations give rise to the notion of trust in human-robot relationships and to questions such as "How can I trust a driverless car to take my child to school?" and "How can I trust a robot to help my elderly parent?" To design algorithms that make morally aware, ethical decisions, and thus to create trustworthy robots, we need to understand the conceptual theory of morality in machine autonomy in addition to understanding, formalizing, and expressing trust itself. This is a tremendously challenging yet necessary task, because it involves many disciplines, including philosophy, sociology, psychology, cognitive reasoning, logic, and computation.

In this workshop, we continue the discussions initiated in our RSS 2016 workshop on "Social Trust in Autonomous Robots," with the additional theme of ethics and morality, to shed light on these multifaceted concepts and notions from various perspectives through a series of talks and panel discussions.
