AI and Attraction

Study Summary

Research Overview

The study investigates how the similarity of AI-generated voices affects human perceptions of likability and trustworthiness. The focus is on how AI technology can reproduce human-like voices and the potential societal implications, particularly in the context of deepfakes and voice assistants.

AI and Voice Similarity

  • AI systems utilize voiceprints, numerical representations of voices (e.g., d-vectors), to evaluate voice similarity.

  • Voiceprints are derived from deep neural networks and can support personalized interactions in technologies like voice assistants.
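To make the similarity measure concrete, the sketch below computes cosine similarity between two embedding vectors. The 256-dimensional random vectors stand in for real d-vectors, which would come from a trained speaker-encoder network; the dimensionality and values here are illustrative assumptions, not details from the study.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two voiceprint embeddings (range [-1, 1])."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
# Hypothetical 256-dimensional d-vectors for two speakers
d_vector_a = rng.normal(size=256)
d_vector_b = rng.normal(size=256)

similarity = cosine_similarity(d_vector_a, d_vector_b)
print(f"Voice similarity: {similarity:.3f}")
```

A score near 1 indicates nearly identical voiceprints (e.g., two utterances from the same speaker), while scores near 0 indicate unrelated voices.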

Key Findings

  1. Voice Similarity and Judgments:

    • Human similarity judgments correspond with AI-based voice similarity measures.

    • Similar voices (especially those resembling one's own) are evaluated more positively in terms of likability and trustworthiness.

  2. Self-Similarity Preference:

    • Voices similar to one’s own are rated as more trustworthy and likable than average voices, consistent with the similarity-attraction hypothesis and implicit egotism.

  3. Beauty-in-Averageness:

    • Contrary to previous findings, voices with averaged characteristics did not significantly increase likability or trustworthiness; only a marginal effect on trustworthiness was observed.

  4. Cognitive Evaluations:

    • Across several experiments, AI-derived cosine similarity was validated as a predictor of human evaluations of voice stimuli.
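One way to check such a prediction is to correlate the AI similarity scores with mean human ratings for the same voice pairs. The sketch below uses a Pearson correlation on made-up data; the ten value pairs are purely illustrative and do not come from the study.

```python
import numpy as np

# Hypothetical data: AI cosine similarities and mean human similarity
# ratings (1-7 scale) for ten voice pairs -- illustrative values only
ai_similarity = np.array([0.12, 0.25, 0.31, 0.44, 0.52,
                          0.58, 0.66, 0.71, 0.80, 0.88])
human_rating = np.array([1.8, 2.4, 2.9, 3.5, 3.9,
                         4.6, 4.8, 5.5, 6.0, 6.4])

# A strong positive r would indicate the AI measure tracks human perception
r = np.corrcoef(ai_similarity, human_rating)[0, 1]
print(f"Pearson r = {r:.3f}")
```

In practice the ratings would be averaged over many listeners per pair before correlating.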

Conclusion

AI-derived cosine similarity can effectively predict human judgments of voice similarity and influence social evaluations, highlighting ethical considerations in the deployment of voice technologies. The findings suggest a vulnerability in human perception that personalized AI voices could exploit.