AI, Trust, and Transparency

Can AI be 'trusted'?

- Reading: M. Ryan, "In AI We Trust: Ethics, Artificial Intelligence, and Reliability"  

AI, Trust, and Transparency

  • Many policy guidelines and frameworks addressing the impact of AI in society reference the concept of trust.
  • "Trustworthy AI" has emerged as a significant research field within computer science and engineering.
  • Advocates in this community typically argue that AI should be used only in areas where AI systems can be trusted.
  • The concept of trust is frequently associated with transparency, though the nature of this connection may not always be explicitly articulated.

This Week’s Questions

  • Key questions to explore include:
    • What does it mean to trust (or distrust) AI?
    • Which concept of trust applies to AI, if any?
    • What is the relationship between trust and transparency?
    • How do trust and transparency interact specifically in the context of AI?
    • What ethical implications arise from associating AI with concepts such as trust and transparency?
  • A working definition of “trust” is essential before beginning this exploration.

Trust vs. Trustworthiness

  • Important distinction:
    • Trust: An attitude of agents towards other agents, their abilities, or their actions.
      • Example: A trusts B; A trusts B’s ability to perform action x; A trusts that B will indeed perform action x.
    • Trustworthiness: A property of agents.
      • Trustworthiness is neither necessary nor sufficient for trust, nor is trust necessary or sufficient for trustworthiness.

Questions in the Context of AI

  • When discussing AI, we need to differentiate between two crucial questions:
    • Is AI trustworthy?
    • Can we trust AI?
  • The concept of being “worthy” of trust carries moral implications.
  • Philosophers generally advise that AI researchers should refrain from using the term "trustworthiness" when referring to AI, due to AI not being a moral agent.
  • The interpretation of “trust” heavily influences the understanding of what we trust AI to do.

Notions of Trust

  • Three perceived notions of trust:
    • Rational notion
    • Affective notion
    • Normative notion
  • To assess whether trusting (or distrusting) AI is warranted, we must first identify which definition of trust is in use and determine which notion, if any, can be applied to AI.

Rational Trust

  • Trust as a rational attitude emerges from reasoned deliberation.
  • One considers pros and cons along with all available evidence to establish whether the trusted entity is likely to execute the action entrusted to it.
  • Rational trust is context-specific, being tied to particular actions and grounded in predictive capabilities.
  • However, questions arise as to whether this truly constitutes trust or is simply a form of reliance.

Trust vs. Reliance

  • Rational deliberation necessitates extensive knowledge regarding the subject of deliberation and its context.
  • If sufficient information is available to make an educated prediction about the entity’s performance in given circumstances, does that amount to trust?
  • Trust typically involves vulnerability on the part of the trusting party: a leap of faith made under incomplete information.
  • Trust comes into play precisely where rational deliberation reaches its limits.
  • It is plausible to hold a position of rational trust towards AI, or at least regarding specific tasks it performs.
  • Much of the advocacy for trusting AI rests on claims of reliability for specific tasks in particular situations.
  • However, the legitimacy of rational trust as genuine trust remains uncertain.
  • The concept of “trusting AI,” if valid, may need to be anchored in a more robust understanding of trust.

Affective Trust

  • This form of trust is based on the motivations of the agent being trusted.
  • A trusts B in an affective context if A believes (for any reason) that B has the goodwill or the motivation to act benevolently towards A.
  • Affective trust focuses on the character of the trusted party rather than their abilities or actions.
  • As AI systems lack will or independent motivations (such as desires or inclinations), let alone motivations aimed at doing good for individuals, it appears inappropriate to trust (or distrust) AI in the affective sense.

Normative Trust

  • This notion bears some resemblance to rational trust but centers on what the trusted party is expected to do as opposed to what they will do.
  • It also resembles the affective notion in that it requires a commitment from the trusted party to fulfill normative expectations towards the trusting party; violating normative trust amounts to a form of betrayal.
  • The question arises: do AI systems owe us responsibilities whose violation would warrant feelings of betrayal?
  • Typically, mutual responsibilities arise from membership in the same normative system: a binding framework of rules and expectations governing mutual actions and accountability.
  • There are significant doubts about AI's inclusion in a normative system (previous discussions have highlighted concerns regarding responsibility and accountability in relation to AI).
  • Talk of trusting (or distrusting) AI in a normative sense therefore seems misleading: it presupposes that AI occupies a societal role it is likely incapable of fulfilling.

In Summary

  • At present, AI does not appear to fit any of the notions of trust discussed above.
  • Talk of “trusting AI,” or of AI being “trustworthy,” should be read as figurative language at best, not taken literally.
  • While AI systems can demonstrate reliability, this reliability does not equate to trust.
  • As a society, our focus should shift towards evaluating whether we can (or should) place trust in those who develop and promote AI.