What can Robotics and AI teach us about trust?
Benjamin Kuipers, Professor of Computer Science and Engineering at the University of Michigan (US), has researched artificial intelligence (“AI”) for many years. For the TrustTalk podcast, our host, Severin de Wit, talked with him about how trustworthy robots, as AI agents, can be.
Podcast Notes
As a researcher in AI and robotics, he sees increasing applications of AI in our society. It is natural to wonder whether the behavior of these artificially intelligent systems should somehow be governed by ethics. There is general agreement that ethics imposes constraints on individual behavior for the benefit of society as a whole. There is also a general recognition that trust is important for individuals, for organizations, and for society at large. But aside from recognizing that trust is in general a good thing, few people have looked carefully at the specifics of how trust helps our society thrive, and perhaps even survive.
In this interview, Benjamin Kuipers, based on ideas from many insightful thinkers, suggests a framework for how these elements work together:
* a society thrives when it has the resources to respond to threats and opportunities;
* the society gains resources through positive-sum interactions among its members;
* many positive-sum interactions are forms of cooperation;
* cooperation involves vulnerability to exploitation by partners;
* trust is a willingness to accept vulnerability, confident that it won’t be exploited;
* for trust to be effective, one’s self and others must be trustworthy.
Ethics is a body of knowledge that a society teaches, showing its members how to be trustworthy, and how to recognize trustworthiness in others. If a society’s ethics encourages widespread trust and cooperation, the society thrives. If it encourages exploitation and distrust, the society may not survive.
Our current global society faces an existential threat from climate change. Meeting the challenge of that threat will require trust and cooperation. Can we do it?
Common Sense Knowledge
“Commonsense knowledge is knowledge about the structure of the external world that is acquired and applied without concentrated effort by any normal human, and that allows him or her to meet the everyday demands of the physical, spatial, temporal, and social environment with a reasonable degree of success. I still think this is a pretty good definition (though I might remove the restriction to the ‘external’ world)” (quoted from an earlier interview with Benjamin Kuipers).
We each have some degree of common sense knowledge, for example of navigational space, because we use it to travel between home, work, friends, shopping, and places like that. Of course, you and I live thousands of miles apart in different environments, so the cognitive maps that are part of each of our common sense describe different environments, different pieces of geography. Common sense is our ability to represent our own knowledge of space; it is not a shared knowledge about all spaces. The same principle applies to other foundational domains, as I call them, like thinking in terms of objects, actions, dynamical change, theory of mind, ethics, and so on. By studying the common structure of this kind of foundational knowledge, we can learn how humans and robots can acquire spatial knowledge from their own experience navigating through the world, and learn how to plan routes to their desired destinations, home, work or wherever. That structure turns out to include things like travel paths, decision points, connectivity of the route network, local and global frames of reference, and so forth.
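As a rough illustration of that structure, here is a minimal sketch of my own (not the actual spatial model from Kuipers’ research): a cognitive map treated as a graph of decision points connected by travel paths, with route planning as search over the connectivity of the route network. The place names are hypothetical.

```python
# Illustrative sketch only: decision points as nodes, travel paths as edges,
# route planning as breadth-first search over the route network's connectivity.
from collections import deque

# Hypothetical cognitive map: each decision point lists the places
# reachable from it by a single travel path.
route_network = {
    "home":        ["corner_cafe", "bus_stop"],
    "corner_cafe": ["home", "work"],
    "bus_stop":    ["home", "shopping"],
    "work":        ["corner_cafe"],
    "shopping":    ["bus_stop"],
}

def plan_route(start: str, goal: str) -> list[str]:
    """Return a sequence of decision points from start to goal,
    or an empty list if the goal is unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in route_network.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return []

print(plan_route("home", "work"))  # ['home', 'corner_cafe', 'work']
```

The point of the sketch is only that once spatial experience is organized into decision points and their connectivity, planning a route to a desired destination becomes a simple search problem.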
Decisions by intelligent robots
Question: Intelligent robots, for example autonomous vehicles, and other artificial intelligences such as high-speed trading systems make their own decisions about the actions they take. As a result, you and your fellow computer scientists take the view that those robots can be considered members of our society.
Exactly. If an artificial agent has goals of its own, makes action decisions based on those goals, and has a model of the world, then it participates in our society. For a society to thrive, and perhaps even for it to survive, its members need to contribute to its ability to defend itself against threats and to take advantage of opportunities. Now, we’re talking a lot about robots, but they’re not the only artificial agents that participate in our society. Large corporations, for example, whether for-profit or not, are also artificial agents: they have goals of their own, and they make plans and act to achieve them.
Cooperation, trust, trustworthiness and the role of ethics
Ethics is a body of knowledge that a particular society has at a particular point in its history, and it is used to instruct the individual members of that society on how to be trustworthy and how to recognize trustworthiness in others. If we look at societies across history and geography, we see that a society’s ethics changes dramatically, mostly over centuries but sometimes on faster time scales. A society with widespread trust, trustworthiness and cooperation will naturally have more resources available for defending against threats and pursuing opportunities, and is more likely to thrive. But a society whose ethics encourages exploiting the vulnerabilities of others is less likely to thrive.
Game theory
Other subjects discussed in the podcast: game theory, and the classification of interactions as positive-sum (win-win), zero-sum (like most recreational games), or negative-sum.
When interactions are mostly positive-sum, society comes out ahead in the long run, regardless of who wins any particular game. We can apply the term cooperation to the wide variety of positive-sum interactions. Think about farmers getting together for a barn raising or to bring in the harvest: a lot of people work together to produce great value for somebody, on the assumption that other people are going to chip in as well. Cooperation, and this is the next part, requires trust among potential partners. If you’re cooperating, you’re vulnerable to your partners: they might fail to contribute as promised to the collective effort, or they might take more than their share of the resulting gains.
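To make the distinction concrete, here is a minimal sketch, not from the interview, that classifies a two-player interaction by the sum of its payoffs; the payoff values are hypothetical.

```python
# Illustrative sketch: classifying an interaction by the total value
# it creates or destroys for the two participants combined.

def classify_interaction(payoff_a: float, payoff_b: float) -> str:
    """Label a two-player interaction by the sum of its payoffs."""
    total = payoff_a + payoff_b
    if total > 0:
        return "positive-sum"   # cooperation: the group comes out ahead
    if total == 0:
        return "zero-sum"       # one side's gain equals the other's loss
    return "negative-sum"       # exploitation: the loser loses more than the winner gains

# Barn raising: both partners gain, so society gains overall.
print(classify_interaction(+3, +5))   # positive-sum
# A recreational game: one player wins exactly what the other loses.
print(classify_interaction(+1, -1))   # zero-sum
# Exploitation: the winner gains 2 while the loser loses 5.
print(classify_interaction(+2, -5))   # negative-sum
```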
What do we gain by trying to develop robots?
The understanding that we gain by trying to develop robots applies not just to those robots, but also to humans and to the corporate entities that also participate in our society. Humanity, all of our societies, faces existential threats, including nuclear weapons, infectious diseases, and climate change. Meeting these threats will certainly require global cooperation, which in turn requires global trust and trustworthiness. Now, as you point out in your question, there are strong ideological trends out there that encourage profiting by exploiting the vulnerabilities of others, both individuals and groups. The profits from exploitative strategies like that come from negative-sum games: the loser loses more than the winner gains. This discourages trust and cooperation, and therefore makes the society weaker and less able to meet those existential challenges.
Children and Robots
Asked about the recent PhD thesis of Chiara de Jong on children and their acceptance of new technology:
Now, there are several other people I’ve encountered over the years. A psychology professor at the University of British Columbia, Kiley Hamlin, showed, starting with her PhD thesis at Yale, that pre-verbal children have a strong preference for supportive or cooperative characters and a dislike for obstructive or exploitative characters. So this is a very early age at which they perceive moral value, or its opposite, in the environment. Similarly, psychology professor Felix Warneken, now at the University of Michigan, showed that toddlers have a strong tendency to spontaneously provide cooperative help when they observe an adult who needs some kind of assistance, even when the adult makes no request and offers or promises no reward. These and similar findings suggest that human children have a strong innate bias towards trust and cooperation with others. Allowing a child to observe a robot being helpful and cooperative toward others is likely to encourage that child to trust the robot.
Transcript Interview Benjamin Kuipers
We provide a transcript of the interview for those who like to read, or to read and listen to the podcast at the same time:
YouTube: listen to the podcast with subtitles
Subscribe
You can subscribe to any of these five podcast platforms: Apple, Spotify, Acast, Pocket Casts and Deezer