what is anticipatory ethics (2)
process of assessing the potential ethical implications of emerging technologies to guide their responsible development
thinking about the ethics before the technology is realised
issues with considering anticipatory ethics too soon (2)
can limit expansion of knowledge
hard to guess and forecast problems
issues with considering anticipatory ethics too late (2)
research might go to waste and no room to make changes
harm might have already happened
what are the 5 features of big data
volume
variety
velocity
veracity
value
volume (2)
number of data points
e.g. heart rate is measured every second
variety (2)
data may cross different types
can be structured or unstructured
velocity (2)
pace of data generation
e.g. MRI has lower velocity as it is only done once
veracity (2)
data quality and accuracy
measuring what it is intended to measure
value (2)
potential to create benefits and insights
can the data be used for that particular purpose
example of the big 5 in health anxiety on Reddit around COVID (5)
volume: posts from 820,000 users
variety: multiple subreddits (r/depression, r/suicide) but all text
velocity: fixed dataset, collected once
veracity: automated data processing
value: could identify vulnerable groups and mental health themes in real time
current data landscape for brain health (4)
patient data
operations data
population data
social determinants of health data
artificial intelligence
artificial systems that appear to think like humans, performing tasks such as deciding, categorising, recognising etc
what is an example of an AI system
machine learning
machine learning (2)
system that can learn from experiences or dataset without direct human programming
it involves training models on patterns in one set of data (training data) so that they can apply what they have learned to new data
what can machine learning produce
“black box” results that are difficult for a human interpreter to explain
what are the 2 types of machine learning
supervised learning
unsupervised learning
supervised learning (3)
models are trained on known, labelled data that includes the “right answer”
requires a huge volume of data and human labour
e.g. training a machine learning algorithm on a set of patient records labelled by who does vs does not develop psychosis
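The supervised workflow (train on labelled examples, then apply to new data) can be sketched with a toy nearest-centroid classifier; all data below is synthetic and purely illustrative, not real patient records:

```python
import numpy as np

# Illustrative supervised learning: synthetic, labelled data only
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-1, 1, (100, 4)),   # 100 "patients" without the condition
                    rng.normal(1, 1, (100, 4))])   # 100 "patients" with it
y = np.array([0] * 100 + [1] * 100)                # the known "right answers"

# Training: learn one centroid per label from the labelled data
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

# Applying to new, unseen data: predict the label of the nearest centroid
X_new = rng.normal(1, 1, (10, 4))                  # new cases, truly from class 1
dists = np.linalg.norm(X_new[:, None, :] - centroids[None, :, :], axis=2)
predictions = dists.argmin(axis=1)
```

Real models (and far more data) replace the centroid rule, but the train-then-apply structure is the same.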
unsupervised learning (3)
models learn from unlabelled data
requires even greater processing power but less human labour
e.g. twitter posts containing the word dementia
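By contrast, the unsupervised case can be sketched with a tiny k-means loop that discovers groups in unlabelled data; the two hidden clusters and starting centres are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Unlabelled synthetic data: two hidden groups the model is never told about
X = np.concatenate([rng.normal(-2, 0.5, (50, 2)),
                    rng.normal(2, 0.5, (50, 2))])

# Simple k-means (Lloyd's algorithm) with k=2 and fixed starting centres
centres = np.array([[0.0, 0.0], [1.0, 1.0]])
for _ in range(10):
    # assign each point to its nearest centre, then move each centre
    labels = np.linalg.norm(X[:, None] - centres[None], axis=2).argmin(axis=1)
    centres = np.stack([X[labels == c].mean(axis=0) for c in range(2)])
```

No labels were supplied, yet the centres end up on the two real groups.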
AI applications in neuroscience examples (5)
risk prediction
clinical decision making
neurotech
brain modelling
diagnosis and prognostication
risk prediction (2)
goal was to predict AD diagnosis using brain scans
trained a machine learning model on labelled MRI data (healthy vs AD) to predict diagnosis and identify the most predictive brain regions
clinical decision making (2)
goal was to surgically remove epileptogenic brain region to treat seizures using intracranial EEG
proposed using machine learning on unlabelled features of raw iEEG output to identify the seizure origin
neurotech (2)
goal was to control limb prosthesis with neural activity
train machine learning model on the mapping between neural activity and limb movement
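That mapping step can be sketched as a linear decoder fitted by least squares; the channel count, noise level, and “true” mapping below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical setup: 8 neural channels map linearly to a 2-D limb position
W_true = rng.normal(size=(8, 2))                           # invented ground-truth mapping
rates = rng.normal(size=(500, 8))                          # recorded "firing rates"
pos = rates @ W_true + rng.normal(0, 0.01, size=(500, 2))  # observed limb movements

# Training: learn the neural-activity -> movement mapping by least squares
W_hat, *_ = np.linalg.lstsq(rates, pos, rcond=None)

# Decoding: translate new neural activity into a predicted limb position
decoded = rng.normal(size=(1, 8)) @ W_hat
```

Real decoders are usually nonlinear, but the learn-the-mapping-then-decode loop is the same.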
brain modelling (3)
goal was to understand how rat brains represent space
trained a machine learning model to navigate spaces, using training data that simulates real rodent behaviour and neural activity
the model developed representations resembling the real rat entorhinal cortex
diagnosis and prognostication (3)
goal was to triage acute neurological illness quickly in real life
created a supervised machine learning model trained on head CTs and radiology annotations
it accelerated time to diagnosis in a simulated environment
ethical issues around AI in neuroscience (3)
accountability
bias and discrimination
privacy
what are the two types of accountability
culpability
moral accountability
culpability (2)
it is the responsibility based on intention, knowledge or control
AI introduces potential harms no one person could predict or prevent
moral accountability (2)
duty to explain one’s reasons and actions to others, being answerable for an action
AI processes may be unexplainable to their users, the “black box” problem
bias and discrimination (2)
understudied groups are underrepresented in AI models, which may result in them receiving lower-quality care
models are typically trained on younger populations and on more common conditions
privacy (3)
AI outputs can be sensitive, such as future disease risk, personal preferences, emotional states
access to these outputs should be controlled, especially when predictions include uncertainty
by linking data, AI may re-identify previously anonymous individuals - there is also a lack of guidelines around informed consent for expanded/later use
what is within machine learning
generative AI
generative AI
systems that generate new text, images or code based on prior input
example of generative AI
large language models (e.g. ChatGPT)
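A real LLM is vastly larger, but the core idea of generating new text from the statistics of prior input can be sketched with a toy word-level Markov chain (the corpus is invented):

```python
import random

# Toy "generative" model: learn which word tends to follow which in a tiny corpus
corpus = "the brain learns the data and the data trains the model".split()
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

# Generate new text by repeatedly sampling a plausible next word
random.seed(0)
word, out = "the", ["the"]
for _ in range(6):
    word = random.choice(follows.get(word, corpus))  # fall back if word has no successor
    out.append(word)
print(" ".join(out))
```

The output recombines the corpus into new sequences, which is the same generate-from-learned-statistics principle an LLM scales up.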
ethical issues with generative AI (3)
emerging harms and benefits
emotional alignment
lack of transparency
emerging harms and benefits (3)
still unclear whether there is a net benefit or net harm
high profile of harm with vulnerable populations
these tools may also help halt suicidal ideation
emotional alignment (2)
can be used for mental health support or social engagement
tools designed to emotionally align with users can be positive or negative
lack of transparency (2)
not clear who is building these systems
there is legal ambiguity over when data can or can’t be shared