PSYC 301 - New Frontiers in Brain Health - AI

37 Terms

1. what is anticipatory ethics (2)

  • process of assessing the potential ethical implications of emerging issues to guide their responsible development

  • thinking about the ethics of a technology before it is realised

2. issues with considering anticipatory ethics too soon (2)

  • can limit expansion of knowledge

  • hard to guess and forecast problems

3. issues with considering anticipatory ethics too late (2)

  • research might go to waste and no room to make changes

  • harm might have already happened

4. what are the 5 features of big data

  • volume

  • variety

  • velocity

  • veracity

  • value

5. volume (2)

  • number of data points

  • e.g. heart rate is measured every second

6. variety (2)

  • data may span different types

  • can be structured or unstructured

7. velocity (2)

  • pace of data generation

  • e.g. MRI has lower velocity as it is only done once

8. veracity (2)

  • data quality and accuracy

  • whether the data measures what it is intended to measure

9. value (2)

  • potential to create benefits and insights

  • whether the data can be used for a particular purpose

10. example of the 5 features of big data in health anxiety on Reddit around COVID (5)

  • volume: posts from 820,000 users

  • variety: multiple boards (r/depression, r/suicide) but all text

  • velocity: fixed dataset, collected once

  • veracity: automated data processing

  • value: could identify vulnerable groups and mental health themes in real time

11. current data landscape for brain health (4)

  • patient data

  • operations data

  • population data

  • social determinants of health data

12. artificial intelligence

artificial systems that appear to think like humans: doing tasks, making decisions, categorising, recognising, etc.

13. what is an example of an AI system

machine learning

14. machine learning (2)

  • a system that can learn from experience or datasets without direct human programming

  • it involves training models on patterns in one set of data (training data) so that they can apply what they have learned to new data

15. what can machine learning produce

“black box” results that are difficult for human interpreters to explain

16. what are the 2 types of machine learning

  • supervised learning

  • unsupervised learning

17. supervised learning (3)

  • models are trained on known, labelled data that carries the “right answer”

  • requires a huge volume of data and human labour

  • e.g. training a machine learning algorithm on a set of patient records labelled by who does vs does not develop psychosis
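The supervised idea can be sketched with a tiny nearest-centroid classifier: the model learns the average feature vector for each label from labelled training data, then assigns new cases to the closest label. All feature names, scores, and labels below are invented toy values, not real patient data.

```python
# Minimal sketch of supervised learning: a nearest-centroid classifier.
# All features and labels are invented toy values, not real patient records.

def train_centroids(X, y):
    """Learn the mean feature vector (centroid) for each known label."""
    sums, counts = {}, {}
    for features, label in zip(X, y):
        total = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            total[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in total]
            for lab, total in sums.items()}

def predict(centroids, features):
    """Label a new case by its nearest learned centroid."""
    def distance(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda lab: distance(centroids[lab]))

# Labelled training data ("right answers" supplied by humans):
# hypothetical [symptom score, family history score] per patient record
X_train = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y_train = ["psychosis", "psychosis", "healthy", "healthy"]

model = train_centroids(X_train, y_train)
print(predict(model, [0.85, 0.7]))  # → psychosis
```

A real system would use far more data and a richer model, but the pattern is the same: labelled examples in, a labelling rule out.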

18. unsupervised learning (3)

  • models learn from unlabelled data

  • requires even greater processing power but less human labour

  • e.g. Twitter posts containing the word dementia
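The unsupervised case can be sketched in the same spirit with k-means clustering: the model receives no labels and no “right answers”, and discovers groupings on its own. The 2-D points below are an invented toy set, not real social media data.

```python
# Minimal sketch of unsupervised learning: k-means clustering on
# unlabelled points. (Toy 2-D data, invented for illustration.)

def kmeans(points, k, iters=20):
    """Group unlabelled points into k clusters by refining centroids."""
    step = max(1, len(points) // k)
    centroids = [list(points[i * step]) for i in range(k)]  # naive spread-out init
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest current centroid
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        # move each centroid to the mean of its assigned points
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = [sum(dim) / len(members) for dim in zip(*members)]
    return centroids, clusters

# Four unlabelled points: two near the origin, two near (0.9, 0.9)
points = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
centroids, clusters = kmeans(points, k=2)
print(centroids)  # one cluster centre per discovered group
```

No human labelling was needed, which is why unsupervised approaches suit raw text dumps like the Twitter example, at the cost of more compute and harder-to-interpret groupings.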

19. examples of AI applications in neuroscience (5)

  • risk prediction

  • clinical decision making

  • neurotech

  • brain modelling

  • diagnosis and prognostication

20. risk prediction (2)

  • goal was to predict AD diagnosis from brain scans

  • trained a machine learning model on labelled MRI data (healthy vs AD) to predict diagnosis and identify the most predictive brain regions

21. clinical decision making (2)

  • goal was to surgically remove the epileptogenic brain region to treat seizures, guided by intracranial EEG

  • proposed using machine learning on unlabelled features of the raw iEEG output to identify the seizure origin

22. neurotech (2)

  • goal was to control a limb prosthesis with neural activity

  • trained a machine learning model on the mapping between neural activity and limb movement

23. brain modelling (3)

  • goal was to understand how rat brains represent space

  • trained a machine learning model to navigate spaces, with training data that simulate real rodent behaviour and neural activity

  • the model developed representations resembling the real rat entorhinal cortex

24. diagnosis and prognostication (3)

  • wanted to solve the problem of needing to triage acute neurological illness quickly in real-world settings

  • created a supervised machine learning model trained on head CTs and radiology annotations

  • it accelerated time to diagnosis in a simulated environment

25. ethical issues around AI in neuroscience (3)

  • accountability

  • bias and discrimination

  • privacy

26. what are the two types of accountability

  • culpability

  • moral accountability

27. culpability (2)

  • responsibility based on intention, knowledge, or control

  • AI introduces potential harms no one person could predict or prevent

28. moral accountability (2)

  • duty to explain one’s reasons and actions to others, being answerable for an action

  • AI processes may be unexplainable to their users, the “black box” problem

29. bias and discrimination (2)

  • groups that are understudied will go on to be underrepresented in AI models, which may result in those groups receiving lower quality care

  • models are normally trained on younger populations as well as more common conditions

30. privacy (3)

  • AI outputs can be sensitive, such as future disease risk, personal preferences, and emotional states

  • access to this data should be controlled, especially when predictions include uncertainty

  • by linking data, AI may re-identify previously anonymous individuals - there is also a lack of guidelines around informed consent for expanded/later use

31. what falls within machine learning

generative AI

32. generative AI

systems that generate new text, images or code based on prior input

33. example of generative AI

large language models (e.g. ChatGPT)

34. ethical issues with generative AI (3)

  • emerging harms and benefits

  • emotional alignment

  • lack of transparency

35. emerging harms and benefits (3)

  • still unclear whether there is a net benefit or net harm

  • high-profile cases of harm among vulnerable populations

  • these tools may also help halt suicidal ideation

36. emotional alignment (2)

  • can be used for mental health support or social engagement

  • tools designed to match users’ emotional states can be positive or negative

37. lack of transparency (2)

  • not clear who is building them

  • there is legal ambiguity about when data can or can’t be shared
