Hi, I'm Severin de Wit, host of the TrustTalk podcast, where we dive deep into the fascinating world of trust. With a genuine passion for understanding the foundations and nuances of trust, I am dedicated to uncovering its secrets and sharing compelling stories that illuminate its profound impact. Join me on this captivating journey as we explore the transformative power of trust. Subscribe now and become part of the TrustTalk community.
Benjamin Kuipers, Professor of Computer Science and Engineering at the University of Michigan (US), has researched artificial intelligence (“AI”) for many years. For the TrustTalk podcast, our host, Severin de Wit, talked with him about how trustworthy robots are, as AI agents.
Podcast Notes
As a researcher in AI and robotics, he sees increasing applications of AI in our society. It is natural to wonder whether the behavior of these artificially intelligent systems should somehow be governed by ethics. There is general agreement that ethics imposes constraints on individual behavior for the benefit of society as a whole. There is also a general recognition that trust is important, for individuals, organizations, and society as a whole. But aside from recognizing that trust is in general a good thing, few people have looked carefully at the specifics of how trust helps our society thrive, and perhaps even survive.
In this interview, Benjamin Kuipers, based on ideas from many insightful thinkers, suggests a framework for how these elements work together:
* a society thrives when it has the resources to respond to threats and opportunities;
* the society gains resources through positive-sum interactions among its members;
* many positive-sum interactions are forms of cooperation;
* cooperation involves vulnerability to exploitation by partners;
* trust is a willingness to accept vulnerability, confident that it won’t be exploited;
* for trust to be effective, one’s self and others must be trustworthy.
Ethics is a body of knowledge that a society teaches, showing its members how to be trustworthy, and how to recognize trustworthiness in others. If a society’s ethics encourages widespread trust and cooperation, the society thrives. If it encourages exploitation and distrust, the society may not survive.
Our current global society faces an existential threat from climate change. Meeting the challenge of that threat will require trust and cooperation. Can we do it?
Common Sense Knowledge
“Commonsense knowledge is knowledge about the structure of the external world that is acquired and applied without concentrated effort by any normal human that allows him or her to meet the everyday demands of the physical, spatial, temporal, and social environment with a reasonable degree of success. I still think this is a pretty good definition (though I might remove the restriction to the ‘external’ world)” (a citation from an earlier interview with Benjamin).
We each have some degree of common sense knowledge, for example, of navigational space because we use that to travel between home, work, friends, shopping, and places like that. Of course, you and I live thousands of miles apart in different environments, so the cognitive maps that are part of each of our common sense are describing different environments, different pieces of geography. So common sense is our ability to represent our own knowledge of space. It’s not a shared knowledge about all spaces. And so this same principle applies to other what I call foundational domains, like thinking in terms of objects, actions, dynamical change, theory of mind, ethics, and so on. So by studying the common structure of this kind of foundational knowledge, we can learn how humans and robots can learn spatial knowledge from their own experience navigating through the world and learn how to plan routes to get to their desired destinations, home, work or whatever. And the structure turns out to include things like travel paths, decision points, connectivity of the route network, local and global frames of reference, and so forth.
Decisions by intelligent robots
Question: Intelligent robots, for example, autonomous vehicles and other artificial intelligence like, for example, high-speed trading systems, make their own decisions about the actions they take. And as a result, you and your colleague computer scientists take the view that those robots can be considered members of our society.
Exactly. If an artificial agent has goals of its own and makes action decisions based on those goals and it has a model of the world, then it participates in our society. For a society to thrive and perhaps even for it to survive, its members need to contribute to its ability to defend itself against threats and to take advantage of opportunities. Now we’re talking a lot about robots, but they’re not the only artificial agents that participate in our society. Large corporations, for example, whether for-profit or not, are artificial agents: they have goals of their own and they make plans and act to achieve them.
Cooperation, trust, trustworthiness and the role of ethics
Ethics is a body of knowledge that a particular society has at a particular point in its history. And it’s used to instruct the individual members of that society on how to be trustworthy and how to recognize trustworthiness in others. If we look at societies over history and geography, we see that the ethics of a society changes dramatically, mostly over centuries but sometimes over faster time scales. If a society has widespread trust, trustworthiness and cooperation, it will naturally have more resources available for defending against threats, pursuing opportunities, and is more likely to thrive. But if you have a society whose ethics encourages exploiting the vulnerabilities of others, it’s less likely to thrive.
Game theory
Other subjects discussed in the podcast: game theory, and the classification of interactions as positive-sum (win-win), zero-sum (like most recreational games), or negative-sum.
So when interactions are mostly positive-sum, society comes out ahead in the long run, regardless of who wins any particular game, and we can apply the term cooperation to the wide variety of positive-sum interactions. Think about farmers getting together for a barn raising or to bring in the harvest. A lot of people work together to produce a great value for somebody with the assumption that other people are going to chip in as well. Cooperation, and this is the next part, requires trust among potential partners. If you’re cooperating, you’re vulnerable to your partners. They might fail to contribute as promised to the collective effort. They might take more than their share of the resulting gains.
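The positive-, zero-, and negative-sum distinction above can be sketched in a few lines of code. This is a toy illustration (not from the interview): a two-party interaction is classified by the sign of its total payoff, with the example payoffs invented for illustration.

```python
# Toy classifier: the sign of the combined payoff determines the game type.

def classify_interaction(payoff_a, payoff_b):
    """Return 'positive-sum', 'zero-sum', or 'negative-sum'."""
    total = payoff_a + payoff_b
    if total > 0:
        return "positive-sum"
    if total == 0:
        return "zero-sum"
    return "negative-sum"

# A barn raising: both parties gain from cooperation.
print(classify_interaction(3, 5))    # positive-sum
# A recreational game: one side wins exactly what the other loses.
print(classify_interaction(1, -1))   # zero-sum
# Exploitation: the loser loses more than the winner gains.
print(classify_interaction(2, -5))   # negative-sum
```

The last case matches the point made later in the interview: an exploitative strategy can still be profitable for the winner even though society as a whole is poorer for it.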
What do we gain by trying to develop robots?
The understanding that we gain by trying to develop robots applies not just to those robots, but also to humans and to the corporate entities that also participate in our society. Humanity, all of our societies, faces existential threats, including nuclear weapons, infectious diseases, and climate change. Meeting these threats will certainly require global cooperation, which, of course, is going to require global trust and trustworthiness. Now, as you point out in your question, there are strong ideological trends out there that encourage profiting by exploiting the vulnerabilities of others, individuals and groups. The profits from exploitative strategies like that come from negative-sum games: the loser loses more than the winner gains. This discourages trust and cooperation, and therefore it makes the society weaker and less able to meet those existential challenges.
Children and Robots
Asked about the recent PhD of Chiara de Jong on children and their acceptance of new technology:
Now, there are several other people I’ve encountered over the years. A psychology professor at the University of British Columbia named Kiley Hamlin showed, starting with her PhD thesis at Yale, that pre-verbal children have a strong preference for supportive or cooperative characters and a dislike for obstructive or exploitive characters. So this is a very early time for them to be perceiving moral value or unvalue in the environment. Similarly, psychology professor Felix Warneken, now at the University of Michigan, showed that toddlers will demonstrate a strong tendency to spontaneously provide cooperative help when they observe an adult who needs some kind of assistance, even when he’s not making any request or giving or promising any kind of reward. So these and similar findings suggest that human children have a strong innate bias towards trust and cooperation with others. Allowing a child to observe a robot being helpful and cooperative toward others is likely to encourage that child to trust the robot.
Transcript Interview Benjamin Kuipers
We provide a transcript of the interview for those who like to read, or read and listen to the podcast at the same time:
Boston Consulting Group’s AI-based Trust Index measures and decodes stakeholders’ perceptions of the trustworthiness of more than 1,000 of the world’s largest companies. It enables companies to break down stakeholder perceptions of their trustworthiness. Analyses based on the Index have yielded valuable insights about what builds, sustains, or destroys trust.
What Is Trust, and Why Is It Hard to Measure?
In academic literature, trust is defined as the willingness of a party (the trustor) to be vulnerable to the actions of another party (the trustee). In a business context, broadly speaking, stakeholders (trustors) put a certain level of trust in a company (trustee) to fulfill a promise—whether that promise takes the form of a value proposition (product or service) to customers, an intangible such as corporate purpose to employees, earnings guidance to investors, or some other commitment. In doing so, stakeholders put themselves in a vulnerable position, trusting that the business will act in a way that aligns with their own interests. For example, you might trust your bank to safeguard your money, or your employer to live up to its societal aims, or your Tier 1 supplier to honor its pledge to reduce its carbon footprint.
As a latent psychological state and a predisposition to engage, trust is only indirectly measurable, through indicators such as transaction costs, or inferred from the attitudes and behaviors that people convey explicitly or implicitly in their communications and actions. Trust is naturally dynamic. It fluctuates, as individuals reevaluate their perceptions in response to new information and changing circumstances.
BCG’s Trust Index
The elements that generate, sustain, and enhance trust among stakeholders—the traits, decisions, and actions of companies—are many and complex. Until now, it has been difficult to distill them in order to understand their interrelationships and to link them to business performance. BCG’s Trust Index does just that.
Unlike traditional efforts to measure trust, BCG’s Trust Index draws on real-time stakeholder communications and applies natural language processing (NLP) and AI to analyze and quantify stakeholder perceptions. Constructing the Trust Index involves scraping the internet (traditional news sources as well as Twitter), combing through thousands of articles and posts on each company, and using a research-validated list of more than 200 trust-related keywords to identify instances in which the text mentions the company in the context of trust. Working with an NLP engine, we then analyze the trust sentiment behind each mention to gauge whether the perception is positive, neutral, or negative. (For this report, we searched only English-language sources, but the index can be applied in other languages as well. A detailed explanation of the methodology appears in BCG’s full report.)
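The keyword-matching step described above can be sketched in code. This is a minimal stand-in, not BCG's methodology: the keyword set, the sentiment lexicon, and the company name "Acme" are all invented for illustration, and a simple word-lexicon score substitutes for the proprietary NLP engine's sentiment model.

```python
# Sketch: filter texts to trust-related mentions of a company, then score
# sentiment with a toy lexicon (hypothetical keyword and sentiment lists).
import re

TRUST_KEYWORDS = {"trust", "reliable", "fraud", "scandal", "transparent"}
POSITIVE = {"reliable", "transparent", "praised"}
NEGATIVE = {"fraud", "scandal", "distrust"}

def trust_mentions(texts, company):
    """Yield (text, words) for texts mentioning the company and a trust keyword."""
    for text in texts:
        words = set(re.findall(r"[a-z]+", text.lower()))
        if company.lower() in words and words & TRUST_KEYWORDS:
            yield text, words

def sentiment(words):
    """Toy lexicon score standing in for the NLP engine's sentiment model."""
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

posts = [
    "Acme praised as a reliable and transparent supplier",
    "Regulators probe Acme over an accounting scandal",
    "Acme opens a new warehouse",  # no trust keyword: filtered out
]
results = [(t, sentiment(w)) for t, w in trust_mentions(posts, "Acme")]
# Two trust-related mentions survive: one positive, one negative.
```

The real pipeline additionally handles multi-word keywords, negation, and context windows, which a bag-of-words filter like this cannot.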
To identify the mentions that relate specifically to trust (or distrust), we categorize keywords according to four dimensions of trust, which we identified in our past research:
Competence—whether the company can effectively accomplish a specific task at hand, or (in other words) whether it can deliver on its promise to stakeholders
Fairness—how equitable and empathetic the company is in delivering on its promise
Transparency—how open and unambiguous the company’s decision-making and actions are
Resilience—how effectively the company avoids or recovers from challenges and crises
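Grouping keyword mentions into the four dimensions can be sketched as follows. The keyword-to-dimension mapping here is invented for illustration; the report's research-validated list of 200+ keywords is not reproduced.

```python
# Hypothetical keyword-to-dimension mapping (illustrative only).
DIMENSION_KEYWORDS = {
    "competence":   {"reliable", "quality", "deliver"},
    "fairness":     {"equitable", "fair", "exploitation"},
    "transparency": {"transparent", "disclosure", "opaque"},
    "resilience":   {"recovery", "crisis", "resilient"},
}

def score_by_dimension(mentions):
    """Aggregate signed sentiment (+1 / 0 / -1) per trust dimension.

    `mentions` is a list of (keyword, sentiment) pairs, where sentiment
    is 'positive', 'neutral', or 'negative'.
    """
    sign = {"positive": 1, "neutral": 0, "negative": -1}
    scores = {dim: 0 for dim in DIMENSION_KEYWORDS}
    for keyword, sent in mentions:
        for dim, keywords in DIMENSION_KEYWORDS.items():
            if keyword in keywords:
                scores[dim] += sign[sent]
    return scores

mentions = [("reliable", "positive"), ("opaque", "negative"),
            ("crisis", "negative"), ("recovery", "positive")]
scores = score_by_dimension(mentions)
# competence: +1, transparency: -1, resilience: 0 (crisis offset by recovery)
```

Tracking these per-dimension totals over time is what makes the dimension-level trend analysis described below possible.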
This approach enables an analysis of a company’s trust score on an overall level and by individual dimension—for example, how its competence score has trended over time—so that we can more deeply understand its perceived trustworthiness. Dimensions also provide a window into context-specific reasons that encourage people to trust (or mistrust) and into the multifaceted nature of trust. Thus, we might notice that people trust a particular business for its competence in delivering its products and services to customers, but do not trust it for its fairness because of its weak commitment to social responsibility. Because the NLP engine can identify common themes in the trust-related mentions, we can assess the rationale underlying the scores. By analyzing the influence of each of the four trust dimensions and examining the more granular themes associated with companies that earned high and low trust scores, we obtain a useful reading of a company’s trust “health,” and we start to decode the “why” behind that reading.
Exhibit 1 illustrates this multidimensional tracking capability, using data underlying the trust score of one company in our data set. The graph on the left compares the company’s trust score to its industry benchmark. The center graph shows that social media mentions had a powerful impact on the company’s drop in perceived trustworthiness. The graph on the right shows that, despite the company’s relatively high competence score, its overall trust score has been trending downward as a result of declining scores for fairness, transparency, and resilience.
What the Index Reveals
In addition to uncovering the trust performance of individual companies, the Trust Index gives us a macro view of the trust record across a full data set—in this case, 1,100 of the world’s largest public companies (those with market capitalizations exceeding $20 billion) from 2018 through 2021. We break down perceived company trustworthiness along different lines to discern broader trends: by the entire set of companies, the Top 100, or the Bottom 100; by region or sector; and by a given point in time or an entire time period. We also deep-dive into the sentiment underlying trust scores to better understand the dynamics and patterns that govern how trust in businesses is built, maintained, and destroyed.
In the full report, we probe a wide range of questions, including the following:
Do the most trusted companies generate more financial value?
In what regions and industries are the most- and least-trusted companies found?
How dynamic are trust scores? And how much does the roster of Top 100 companies change from year to year?
On average, has trust in the world’s largest companies increased or decreased over the past four years, pre- and mid-pandemic?
How does trust correlate with other business metrics, such as ESG ratings?
What types of actions or events lead a company to be perceived as most- or least-trusted?
Among BCG’s many noteworthy findings are the following:
Trust pays off. The 100 most trusted companies generated 2.5 times as much value as comparable businesses at year-end 2021. They also had 47% higher P/E multiples. The link between trust and value highlights the need to take trust seriously.
Trust is highly dynamic. Fewer than half of the Top 100 companies from any given year were still in the Top 100 the following year. For the Bottom 100, turnover was as high as 70%. So business leaders need to measure and manage trust on an ongoing basis.
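The turnover figure described here can be computed as the fraction of one year's cohort that is absent from the next year's. The cohorts below are toy four-company lists (not data from the report), standing in for the Top 100.

```python
def turnover(cohort_year1, cohort_year2):
    """Fraction of the year-1 cohort absent from the year-2 cohort."""
    year1, year2 = set(cohort_year1), set(cohort_year2)
    return len(year1 - year2) / len(year1)

# Toy cohorts of 4 instead of 100: three of four drop out -> 75% turnover.
top_2020 = ["A", "B", "C", "D"]
top_2021 = ["A", "E", "F", "G"]
print(f"{turnover(top_2020, top_2021):.0%}")  # 75%
```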
Trust levels rose during the pandemic. For all but the Bottom 100 companies in our study, average trust levels grew between 2018 and year-end 2021. Trust score CAGR grew less for the Top 100 than for the overall group (likely because the Top 100’s levels were already high), but it eroded the most among the least-trusted companies.
Ten themes commonly affect companies’ trust positions. We identified ten specific themes that most frequently establish, enhance, or destroy trust. The ability to track performance at such a granular level can help leaders devise strategies to improve their companies’ trust position. We found that the impacts of these themes vary markedly, depending on the company’s starting trust position. For example, for low-trust companies, crises are a major trust destroyer, especially when caused or exacerbated by negligence or recklessness. In contrast, high-trust companies, while not immune to major unexpected difficulties, avert, handle, and recover from crises in ways that maintain or even strengthen their perceived trustworthiness.
How Can Businesses Decode and Improve Their Trust Position?
The left-hand side of Exhibit 2 shows the correlations between the four trust dimensions (competence, fairness, transparency, and resilience) and the ten themes most commonly associated with establishing, enhancing, or destroying trust, based on the analysis of our dataset. The right-hand side indicates which areas companies need to focus on to sustain or improve their trust positions.
Consider the transparency dimension, for example. The theme that, by far, correlates most strongly with transparency is social responsibility. In other words, transparency scores were most influenced by mentions of social responsibility; the thick dark green wavy bar linking the two reflects the high volume of online mentions of social responsibility (whether positive or negative). The next-strongest influence on transparency involved discussions of corruption, fraud, and scandals, followed by discussions about innovation. Conversely, while innovation correlates strongly with transparency, it correlates even more strongly with competence, as the relatively greater thickness of the bar connecting innovation and competence shows.
The right-hand side of the exhibit shows how companies can advance their trust improvement efforts by targeting trust enhancers, trust foundations, or trust destroyers, based on their existing trust position. High-trust companies should continue to concentrate on trust enhancers, while also keeping an eye on trust foundations. Those with middling trust scores would get the greatest benefit from building up their trust foundations, while also tending to enhancers and guarding against destroyers. Low-trust companies should prioritize efforts to mitigate trust destroyers, while they work on building trust foundations.
Leveraging the findings from our Trust Index analysis, our report provides a set of actions that leaders can take to improve their company’s trust position. It also offers case examples and recommendations to business leaders for managing and improving their company’s perceived trustworthiness.