AI - Good or Evil? Notes (Pages 8-12)
Overview of AI developments
- In November 2022, ChatGPT (Chat Generative Pre-trained Transformer) was launched by OpenAI and made available to the public. It was not the first generative AI platform, but it popularized the technology by making it accessible to a broad audience for various applications.
- Question posed: What is Artificial Intelligence? A field of Computer Science that aims to mimic human intelligence, including visual perception, speech recognition, decision-making, and translation between languages.
- AI Agents
- An AI Agent is a system that can use information gathered from its environment (via sensors) to reason, plan based on goals, and execute the plan.
- An AI Agent may use a language model to translate its plan or execution into a human-readable output.
- ChatGPT and similar AI Agents use Large Language Models (LLMs) such as GPT-4 (OpenAI), PaLM (Google), and LLaMA (Meta).
- LLMs aim to achieve general-purpose language and conversation capabilities.
- They are trained on billions of lines of text.
- Training involves reading text and establishing statistical relationships between words and sentences.
- After training, the model generates text by taking an input and repeatedly predicting the next word or phrase with the highest probability (a toy sketch of this loop appears at the end of this list).
- Applications of AI
- Natural Language Processing (NLP): example applications include Google Translate (contextual language translation) and Grammarly (grammar and tone analysis).
- Computer Vision: rapid image processing for various tasks.
- Surveillance Cameras: identifying humans, cars, or animals.
- Self-Driving Cars: identifying humans, cars, road signs, traffic lights, patterns; navigation.
- Facial Recognition: security or identification purposes.
- Agriculture: detecting defects and sorting produce by shape, size, color, ripeness, etc.
- Healthcare: analyzing medical imaging (X-rays, CT, MRI, Ultrasound) to detect conditions and identify patterns across patients with similar diagnoses.
- Page note: AI applications also extend to games and puzzles, travel planning (hotels, rental cars, flights), navigation, and personalization (targeted advertising, personalized shopping experiences, personalized learning experiences).
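A deliberately tiny, hypothetical sketch of the two ideas above ("statistical relationships between words" and "repeatedly predicting the next word with the highest probability"). Real LLMs use neural networks trained on billions of lines of text rather than simple word counts, but the generate-one-word-at-a-time loop is the same in spirit:

```python
from collections import Counter, defaultdict

# A deliberately tiny "language model": count which word tends to follow
# which, then generate by repeatedly emitting the most likely next word.
# (Hypothetical toy data; real LLMs use neural networks over huge corpora.)

training_text = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
)

# "Training": record how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

# "Generation": starting from a prompt, repeatedly append the most likely next word.
def generate(prompt, length=6):
    output = prompt.split()
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        next_word, _count = candidates.most_common(1)[0]
        output.append(next_word)
    return " ".join(output)

print(generate("the"))   # -> "the cat sat on the cat sat": greedy choice loops
```

Note how the greedy "always pick the most likely word" rule quickly starts repeating itself; production models mix some randomness (sampling) into the choice to avoid exactly this.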
What is Artificial Intelligence?
- Definition: AI is a tool to imitate aspects of human intelligence (perception, recognition, reasoning, decision-making, translation).
- AI Agent capabilities
- Gather information from the environment via sensors.
- Reason about information.
- Formulate a plan to achieve goals.
- Execute the plan.
- May use a language model to translate the plan or its execution into human-readable output.
- Large Language Models (LLMs)
- Examples: OpenAI's GPT-4, Google's PaLM, Meta's LLaMA.
- Trained on billions of lines of text to learn statistical relationships between words and sentences.
- Text generation process: given input, repeatedly predict the next word or phrase with the highest probability.
- Perspective: Technology itself is not inherently good or evil; it is a tool that can be used for good or for harm.
- Proverbial reminder from historical/theological perspectives:
- "No one would deny that he who has learned to practice an art for right purposes can also abuse it…" —St. Gregory of Nyssa (Answer to Eunomius' Second Book)
- Core idea: to use AI wisely, we must first understand what the tool is and how it works.
What is Intelligence?
- HH Pope Shenouda III, in The Ten Concepts, discusses several kinds of knowledge:
- Sensuous Knowledge: knowledge gained through the senses (sight, touch, smell, hearing).
- Knowledge from Other Sources: knowledge from books, videos, audio, and other media.
- Knowledge of God's Words and Commandments: scriptural knowledge for wisdom and discernment.
- 2 Timothy 3:15-16 (paraphrased): Scripture is profitable for doctrine, reproof, correction, and instruction in righteousness; it equips the person for every good work.
- Knowing Others: understanding friendships, family relationships, social interactions, differing personalities, and mental disabilities/illnesses to interact effectively.
- Supporting thought: "The fact is that all knowledge that comes to your mind will affect your senses and feelings and relationships with others; it will affect and influence your subconscious…" – HH Pope Shenouda III
What is Artificial?
- Types of knowledge relevant to AI:
- Knowledge from Other Sources: the primary kind for Large Language Models (LLMs), drawn from books, websites, and other media (including all of Wikipedia).
- Sensuous Knowledge: arising from sensors (e.g., cameras, microphones) in devices like self-driving cars.
- Important limitation: An AI Agent cannot discern good from evil, cannot know God, and cannot truly relate to people or understand emotions. This is what makes it inherently artificial.
Dangers of Artificial Intelligence
- AI can NEVER be certain; it operates in the realm of probability.
- Some models may disclose a confidence level (e.g., a security camera identifying a car with a confidence of 0.60, i.e., 60%); in many cases a human can visually verify the result (see the toy sketch at the end of this section).
- Probabilistic outputs can be biased in certain directions due to training data or modeling choices.
- Example biases:
- Cancer screening: models are often tuned to produce more false positives than false negatives (over-diagnosing is preferred to missing a real case).
- Weather forecasting: a tendency to over-predict rain; people are less upset when a predicted rain never comes than when unexpected rain ruins their plans.
- AI can be manipulated by its creator or manager through training data and protocols.
- If a model is trained on media biased toward a political or social stance, the AI output may reflect that bias.
- It can be programmed to respond in specific ways to certain queries.
- Real-world historical concerns (examples):
- In late 2017, Amazon's Alexa was reported to say that Jesus Christ is a fictional character and that Mohammed is a very wise prophet.
- In 2018, Google Home reportedly could describe various religious figures but gave no answer about Jesus Christ; its responses varied by the religion referenced.
- Reminder: uncertainty and limited reliability are inherent in AI outputs.
- Adversarial AI studies how inputs can be crafted to manipulate AI outputs.
- Example: an image that humans clearly see as a panda can be altered with carefully crafted noise, imperceptible to people, so that the model misclassifies it as a gibbon.
- Similarly, an image of a pig, modified with imperceptible noise, can be misclassified as an airliner.
- Practical demonstrations in safety contexts:
- Adding small markings to a Stop sign (which still clearly reads as a Stop sign to humans) can cause a self-driving system to classify it as a Speed Limit sign.
- Wearing clothing with an adversarial patch can fool a facial recognition system.
- AI can also lie or be used to deceive users:
- April 2023: Snapchat's My AI reportedly lied about access to user location.
- June 2023: A lawyer and his firm faced sanctions for using ChatGPT to obtain case citations that turned out to be fake.
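A toy sketch (with made-up weights and numbers, not any real system) illustrating two of the dangers above: the model's answer is only a probability, such as the 0.60 confidence mentioned earlier, and a small, deliberately crafted change to the input can push that probability across the decision boundary so the answer flips:

```python
import math

# Toy "is there a car in this image?" classifier with made-up weights.
WEIGHTS = [1.2, -0.8, 0.5]
BIAS = -0.05

def car_confidence(features):
    """Return the model's probability (between 0 and 1) that a car is present."""
    score = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1 / (1 + math.exp(-score))   # sigmoid: a probability, never a certainty

frame = [0.5, 0.3, 0.2]                 # toy feature values from a camera frame
print(f"car? confidence = {car_confidence(frame):.2f}")        # about 0.60

# Adversarial idea (simplified): nudge each feature slightly in the direction
# that lowers the score. The input barely changes, but the answer flips.
epsilon = 0.2
attacked = [x - epsilon * (1 if w > 0 else -1) for x, w in zip(frame, WEIGHTS)]
print(f"after crafted nudge = {car_confidence(attacked):.2f}")  # about 0.48
```

In real adversarial attacks the change is spread across thousands of pixels, so it stays invisible to a person while still being enough to flip the model's decision.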
AI Can (and Will) Lie to You; AI Can Manipulate You
- Lying and deception: AI agents can fabricate information or misrepresent facts.
- Virtual assistants can simulate human interaction (voice, facial expressions, body language) to create the illusion of a personal relationship.
- Deepfakes: highly realistic videos, audio, and images depicting people doing or saying things that never happened.
- Information control and echo chambers: AI can be used to limit access to information or to reinforce specific beliefs, contributing to political or social manipulation (e.g., on social media).
- The potential for misuse in persuasion, misinformation, and propaganda is a major concern in society.
AI Has a Fingerprint: Detecting AI Authorship
- Instances where AI usage is detectable:
- January 2023: Students used ChatGPT to write philosophy essays; a professor flagged them as AI-generated after running detection tests.
- The detection tool (OpenAI detector) suggested a very high probability of AI authorship in some cases (e.g., 99%).
- Consequences: students faced academic penalties (e.g., failing the class) for submitting AI-generated work.
Inappropriate Uses of AI
- Categories of misuse:
- Cheating & Plagiarism: Requesting AI to write essays or to solve homework or exam problems.
- Cyberbullying: Using AI to harass, threaten, or bully others online.
- Fabricating Data or Sources: Generating fake data or false sources to support a claim.
- The role of ethical reflection and critical evaluation before relying on AI-generated content in academic or professional work.
In the Church: AI Image Generators and Iconography
- New trend: using AI to generate images of saints and religious figures.
- Concerns:
- These images can violate the Spirit of the Church and its iconography.
- Depictions may show saints with inappropriate beauty standards or features (e.g., Saint Mary without a veil, Jesus with an unnatural grin, saints in modern attire).
- Literal reinterpretations of Scripture in AI-generated visuals can conflict with Church tradition.
- Images are distributed across social media, potentially spreading misleading representations.
- Example (image caption): an AI-generated attempt at an icon of St. Demiana.
Appropriate Uses of AI
- Educational Support: AI can help clarify concepts, answer questions, or provide explanations for study, distinct from cheating.
- Writing Assistance: AI can help improve grammar and offer writing advice on material you have written yourself; in some contexts this may be considered cheating (e.g., writing classes).
- Learning New Subjects: Using AI to explore topics for self-study.
- Finding Sources: AI can help locate relevant sources for a topic; users should then read and analyze the sources themselves.
- Language Practice: Use AI to practice foreign languages, request translations, or improve proficiency.
In the Church: Spiritual and Practical Uses of AI
- Potential Christian applications:
- Transcribing sermons or lectures to text.
- Translating transcriptions into different languages to reach a broader audience.
- Developing tools for study of Church history and theology (e.g., translating Church Fathers, Scripture).
- Examples of constructive projects:
- Some Coptic students developed a model that translates between Coptic (Bohairic or Sahidic) and English, Arabic, French, German, and Dutch.
- A Saddler College student trained a model on Scripture and Church Fathers in English and Greek to enable semantic search across the Fathers’ works.
- Call to responsible use: It is part of societal responsibility to understand these technologies and use them faithfully and honestly, aiming to glorify God in all actions.
- Guiding verse: "whatever you do, do all to the glory of God" (1 Corinthians 10:31).