PSYC 301 Final - New Frontiers 1


14 Terms

Card 1

What are the 5 features of Big Data?

  1. Volume: Number of data points 

  2. Variety: Data may cross different types (structured/unstructured)

  3. Velocity: Pace of data generation 

  4. Veracity: Data quality + accuracy 

  5. Value: Potential to create benefits and insights 

Card 2

Relate the 5 features of Big Data to health anxiety about COVID-19 on Reddit

  • Volume: posts from 826,961 users

  • Variety: multiple boards (ex. r/Depression, r/SuicideWatch, r/conspiracy) but all text posts

  • Velocity: fixed dataset, collected once

  • Veracity: automated data processing

  • Value: could identify vulnerable groups and mental health themes in real-time
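The "value" bullet above can be sketched in code: automated keyword counts over many posts can surface mental-health themes without anyone reading posts by hand. This is a minimal illustration only; the posts and keyword list below are invented stand-ins, not the study's actual data or method.

```python
from collections import Counter

# Hypothetical keyword list for mental-health themes (illustrative only)
KEYWORDS = {"anxiety", "worried", "lonely", "panic"}

# Invented stand-ins for Reddit posts
posts = [
    "so worried about covid i can't sleep",
    "the anxiety is constant and i feel lonely",
    "another panic attack today, anxiety again",
]

# Count every keyword occurrence across all posts automatically
theme_counts = Counter(
    word for post in posts for word in post.split() if word in KEYWORDS
)
print(theme_counts.most_common(1))  # → [('anxiety', 2)]
```

At the scale of 826,961 users, this kind of automated processing is what makes real-time identification of vulnerable groups feasible.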

Card 3

Artificial Intelligence

Artificial systems that appear to think like humans (decide, categorize, recognize) 

Card 4

Machine Learning

Systems that can learn from experience/data without direct human programming 

  • Involves training models on patterns in one set of data (training data) so they can apply what they learned to new data 

  • Classify, predict, decide, etc. 

  • Can produce “black box” results (difficult to interpret)
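The train-then-apply idea above can be shown with a deliberately tiny example: a "model" that learns a decision cutoff from labelled training data, then classifies data it has never seen. All numbers and names here are made up for illustration.

```python
def train_threshold(examples):
    """examples: list of (value, label) pairs with labels 0 or 1.
    Learns a cutoff halfway between the two class means."""
    mean0 = sum(v for v, y in examples if y == 0) / sum(1 for _, y in examples if y == 0)
    mean1 = sum(v for v, y in examples if y == 1) / sum(1 for _, y in examples if y == 1)
    return (mean0 + mean1) / 2

def predict(threshold, value):
    return 1 if value >= threshold else 0

# Training data: e.g. a symptom score paired with a known outcome
training = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]
cutoff = train_threshold(training)   # learned from patterns: 5.0

# The trained model now applies what it learned to new data
print(predict(cutoff, 7.5))  # → 1
print(predict(cutoff, 1.5))  # → 0
```

Real ML models learn millions of parameters rather than one cutoff, which is exactly why their results can become a "black box".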

Card 5

What are the 2 types of machine learning?

  1. Supervised Learning 

    • Models are trained on known, labelled data 

    • Ex. medical records from patients who do vs. don't develop psychosis 

  2. Unsupervised Learning 

    • Models learn from unlabeled data (ex. Twitter posts containing word “dementia”) 

    • Requires huge processing power

Card 6

Generative AI

  • Subset of machine learning 

  • Systems that generate new text, images, video, or code based on prior input 

Card 7

Large Language Models

Subset of generative AI; systems that generate new text

  • Essentially high-powered autocorrect

  • Does not reflect on the output it produced 
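The "high-powered autocorrect" point can be made concrete with a toy bigram model: count which word follows which in training text, then always emit the most frequent continuation. Real LLMs are vastly larger, but the training text and predictions here are invented to show the principle.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    following = defaultdict(Counter)
    for w, nxt in zip(words, words[1:]):
        following[w][nxt] += 1
    return following

def predict_next(model, word):
    # No reflection, no understanding: just the statistically
    # most common continuation seen in training.
    return model[word].most_common(1)[0][0]

model = train_bigrams("the brain learns the brain adapts the mind wanders")
print(predict_next(model, "the"))   # → "brain" (seen twice vs. "mind" once)
```

The model never evaluates whether "brain" is a good answer; it only reports what usually came next, which is why LLM output is not reflected upon by the system that produced it.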

Card 8

What are AI applications in neuroscience?

  1. Risk Prediction

  2. Clinical Decision Making

  3. Neurotech

  4. Brain Modelling

  5. Diagnosis + Prognostication

Card 9

Risk Prediction

  • Goal: Predict Alzheimer’s disease diagnosis using brain scans 

  • Method: Train ML model using labelled MRI data (healthy vs. AD) to predict AD using neural activity 

  • Identify most predictive brain regions
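A stripped-down sketch of the "identify most predictive brain regions" step: with labelled scans, compare per-region activity between groups and rank regions by how far the group means separate. The region names, activity values, and the mean-gap ranking method are all illustrative assumptions, not the actual study.

```python
# Hypothetical region names and per-region activity values per scan
REGIONS = ["hippocampus", "frontal", "occipital"]

healthy = [[0.9, 0.5, 0.5], [1.0, 0.4, 0.6]]   # invented healthy scans
ad      = [[0.3, 0.5, 0.5], [0.2, 0.6, 0.4]]   # invented AD scans

def region_gaps(group_a, group_b):
    """Score each region by the gap between the two groups' mean activity."""
    n = len(REGIONS)
    mean_a = [sum(s[i] for s in group_a) / len(group_a) for i in range(n)]
    mean_b = [sum(s[i] for s in group_b) / len(group_b) for i in range(n)]
    return {r: abs(mean_a[i] - mean_b[i]) for i, r in enumerate(REGIONS)}

gaps = region_gaps(healthy, ad)
most_predictive = max(gaps, key=gaps.get)
print(most_predictive)  # → "hippocampus" (largest between-group gap)
```

Real models weight thousands of voxels rather than three regions, but inspecting which features drive predictions is the same interpretability move.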

Card 10

Clinical Decision Making

  • Goal: Surgically remove epileptogenic brain region to treat seizures using intracranial EEG (like EEG, but electrodes implanted on surface of brain) 

  • Proposed ML model uses unlabelled features of the raw iEEG output to identify seizure origin

Card 11

Neurotech

  • Goal: Create a device that can control limb prosthesis with neural activity 

  • Train ML model on the mapping between neural activity and limb movement 

  • Ex. what does brain activity look like when a monkey moves his hand? + use electrical activity to predict limb movement 

  • Can later use the electric activity to control a separate device 
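The decoding idea above, learning a mapping from neural activity to limb movement, can be sketched as a simple regression: fit the activity-to-position relationship from paired recordings, then predict position from activity alone. The firing rates and positions below are invented; real decoders use many electrode channels, not one.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b on 1-D data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Training: simultaneous recordings of firing rate and hand position
rates     = [10.0, 20.0, 30.0, 40.0]   # invented neural activity
positions = [1.0, 2.0, 3.0, 4.0]       # invented hand positions
a, b = fit_line(rates, positions)

# Decoding: new neural activity alone now predicts movement,
# so the same signal could steer a prosthesis instead of a hand
print(a * 25.0 + b)  # ≈ 2.5
```

Once the mapping is learned, the predicted position can drive any device, which is the core of the prosthesis application.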

Card 12

Brain Modelling

  • Goal: Understand how rat brains represent space

  • Trained ML model to “navigate space” with training data that simulates real rodent behavior + neural activity 

  • Model developed representations resembling real rat entorhinal cortex “grid cells”

Card 13

Diagnosis + Prognostication

  • Problem: Need to triage acute neurological illnesses quickly (ex. hemorrhage, stroke) 

  • Model Type: Supervised ML model trained on head CTs and radiology annotations 

  • Result: Accelerated time to diagnosis in simulated clinical environment

Card 14

Ethical Issues of AI in Neuroscience

  1. Accountability

    • culpability (responsibility based on intention/knowledge/control)

      • Ex. if a neurosurgery robot makes an error, it is unclear who is culpable

    • moral accountability (duty to explain actions/reasoning)

  2. Bias and Discrimination

    • groups that are underrepresented in AI models receive lower quality care

  3. Privacy

  4. Emerging Harms and Benefits

  5. Lack of Transparency