Long Answer Questions


10 Terms

Question 1

Dr. Hall is a child psychologist who is researching the effects of behaviour therapy on young
children with ADHD. She wants to know if children’s hyperactive and impulsive symptoms
at school might be improved by changing the way teachers manage children’s behaviour. She
invites the parents of first-graders with ADHD to participate in her study.


Identify and describe the four main areas of concern related to ethics we discussed in class.
For each area, support your descriptions with an example of an ethical issue that Dr. Hall may
encounter in her study and an idea for how Dr. Hall could address it.

  1. Informed consent

Informed consent means telling participants everything they need to know so that they can understand the study and are competent to decide whether to participate.

Since the children are young, they may not have the capacity to fully understand what the study involves and what the risks are, and thus may not be able to give proper informed consent. Dr. Hall should therefore obtain informed consent from the parents as well as the child’s assent, meaning agreement given in age-appropriate language.

  2. Voluntary Participation

Voluntary participation means participants have freedom of choice about whether to take part.

Because the study takes place at school, families may feel they have no choice but to participate. Dr. Hall should make it very clear that they do not have to participate and that declining carries no consequences.

  3. Freedom from Harm

Research should not harm participants and all risks should be communicated to them.

The children and parents in the study may worry that participation will interfere with or affect the children’s grades and coursework. Steps should be taken to address this directly, and to ensure that any class time lost is made up and not penalized.

  4. Confidentiality

Information about participants should be anonymous, private (i.e., only asking what you need to know), and confidential.

An issue that may come up is that students may feel their ADHD is being made public to the rest of the class. Dr. Hall could address this by ensuring the study details are known only to the participants and their families.

Question 2

We discussed three designs that can be used to test an intervention with a single participant. Identify each of those three designs and describe them with enough detail to show that you know what they are and how they’re implemented. For each design, make sure to indicate when you might use them (e.g., what kind of information they can provide, why you would use that design over others, what you couldn’t use it for, etc.) Support your explanations with brief hypothetical examples.

  • To test a single participant, you would use an idiographic approach

Single case experimental design

  • Use person as their own control through repeated measures

  • Can determine cause and effect

  • Why over others: Complex phenomena or when large groups not feasible; efficient

  • Why not: Low generalizability/external validity, no randomization

  • Example: Intervention to decrease cocaine use in methadone treatment

Multiple baseline

  • Goal = assess impact of treatment by introducing it at staggered times across behaviours, settings, or people; change appearing only after each introduction supports a treatment effect

  • Strong internal validity

  • Why over others: practical when treatments can't be withdrawn (no reversal to baseline needed)

  • Why not: Low generalizability, time consuming

  • Example: Interventions for a student with ADHD, introduced at staggered times: Behaviour A, talking out of turn; Behaviour B, getting out of seat

Changing criterion design

  • Baseline → treatment over series of trials → each successive trial the threshold is set slightly higher

  • Why over others: when incremental change is possible

  • Why not: time consuming

  • Example: Reading fluency (# of words read out loud) - Baseline measure (e.g., 45w/min) → criterion A 50 w/min → criterion B 55 w/min

Question 3

Sometimes there are good reasons to conduct a study in a way that limits external validity.

Describe a study (real or hypothetical) in which this is the case and explain:

  • What choices were, or would be, made to limit external validity in this study?

  • What is gained by making those choices?

  • How/why does making those choices lead to the identified gains?

  • What is lost by making these choices?

Example: New drug treatment for heroin users

  • Narrow participant selection, small sample size, large exclusion criteria

  • Higher internal validity (controlled variables, less confounds)

  • Provides an initial idea of whether the treatment may be effective, before extending it to broader heroin or general opioid-using populations

  • Limited generalization - cannot be easily generalized to the broader population or to non-heroin users; short term effects; exclusion of other treatments

Question 4

Describe the post-test only control group design and the pre-test/post-test control group design.

  • How are they similar?

  • What are the key differences?

  • What are the strengths and weaknesses of each?

  • For each design, provide an example of when that design would be preferred over the other and the reasons for making that choice

Post-test only: Assigned to treatment or control, then intervention, then post-test

  • Strengths: simpler/quicker, no practice effects

  • Weaknesses: cannot directly assess amount of change from baseline

  • Example: preferred when the intervention is one-time and pre-test measures would be impractical, and when prior levels of the dependent variable do not significantly affect outcomes (e.g., a public health campaign)

Pre-/Post- test: Assigned to treatment or control, then pre test, then intervention, then post test

  • Strengths: can directly assess change from baseline

  • Weaknesses: practice effects, more complex

  • Example: preferred when the intervention is expected to cause measurable change and when baseline differences matter (e.g., a new drug treatment)

  • Similar: random assignment; control group, aim to establish cause-effect

  • Key difference: the post-test-only design has no pre-test, and therefore no baseline measure of the behaviour

Question 5

Describe the key differences between qualitative and quantitative research

  • touch on: philosophical approaches, goals, types of questions, methods, and roles of the participants and researchers

  • Support your description with an example of a question that would be best addressed by that type of research.

  • Briefly explain a key strength and weakness for each approach.

Qualitative

Non-numerical, linguistic data; attentive to context and to researcher biases

  • Philosophical approaches: Rejects positivism; Phenomenological (want to understand thoughts/feelings)

  • Goals: Understanding and exploration

  • Types of questions: Exploratory; Descriptive; descriptive-comparison

  • Methods: In-depth interviewing, observation, text-based approaches

  • Roles of participants: Provide thoughts, experiences

  • Roles of Researchers: Collect and interpret data, reflect on biases, facilitators

  • Example: Discourse analysis looking at mental health in refugees

  • Key strength: Complexity that cannot be expressed numerically, more detail/richness

  • Key weakness: Complex data analysis (often subjective)

Quantitative

Numerical data

  • Philosophical approaches: Positivism (knowledge is empirical, objective, quantifiable)

  • Goals: Testing hypotheses

  • Types of questions: Hypothesis-testing; correlation; causality; measurement

  • Methods: surveys/questionnaires, experiments (e.g., RCT), correlational

  • Roles of participants: provide data, be randomly assigned

  • Roles of Researchers: design study, collect data, analyze and interpret data

  • Example: an RCT of methadone for opioid use disorder (baseline → intervention → post-test)

  • Key strength: numerical data can reduce subjectivity

  • Key weakness: over-simplification

Question 6

We discussed the significant impact decisions about participant sampling have on study results, both in terms of sample size and sampling approaches. Please explore these issues by doing the following:

a. Identify and describe the two broad types of sampling approaches used in quantitative research, making sure to touch on benefits and challenges/limitations. For each one, include a brief definition of one specific sampling method commonly used in clinical psychology research and when/why it might be used.

b. Describe how sample size can affect the significance of the results of a research study. Illustrate with an example of when a large sample size could produce misleading results, and when a small sample size could produce misleading results.

a.

Probability sampling: every member of the population has a known, nonzero chance of being selected.

  • e.g., stratified sampling: dividing the population into subgroups (e.g., most, moderately, and least experienced) and randomly selecting a set percentage from each; useful for capturing a wide array of demographics.

Non-probability sampling: Not all members have an equal chance of being selected

  • e.g., convenience sampling: choosing participants based on how easy they are to recruit (e.g., going to a treatment center); may be useful when the population is small or hard to reach.
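To make the contrast concrete, here is a minimal Python sketch of stratified sampling: the population is split into strata and the same fraction is drawn at random from each, so every member's chance of selection is known in advance. The data and the `stratified_sample` helper are hypothetical illustrations, not a standard library API.

```python
import random

def stratified_sample(population, strata_key, fraction, seed=0):
    """Randomly draw the same fraction from each stratum, so each
    member's chance of selection is known in advance."""
    rng = random.Random(seed)
    strata = {}
    for person in population:
        strata.setdefault(strata_key(person), []).append(person)
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical pool of clinicians grouped by experience level
population = [{"id": i, "exp": level}
              for level in ("most", "moderate", "least")
              for i in range(20)]

sample = stratified_sample(population, lambda p: p["exp"], 0.25)
# Each experience level contributes 5 of its 20 members (15 total)
```

Convenience sampling, by contrast, would simply take whoever is easiest to reach, so selection probabilities are unknown and some strata may be missed entirely.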

b. Larger sample size + larger effect size = more statistical power

  • Large sample size misleading: Testing a new drug on 10 000 participants leads to a statistically significant effect of the drug. However, the difference is small and not likely to have a meaningful impact.

  • Small sample size misleading: New teaching method taught to 10 students. Results suggest new method increases grades significantly. However small sample size means results may be susceptible to chance and may be due to random variation rather than true difference.
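The two misleading cases above follow directly from how the test statistic scales with sample size. For two equal-sized groups, the t statistic implied by a standardized effect size d is roughly d·√(n/2), so significance can be reached by a large n even when the effect is trivial, and missed by a small n even when the effect is real. A small sketch with illustrative numbers (stdlib only):

```python
import math

def t_from_d(d, n_per_group):
    """Two-sample t statistic implied by standardized effect size d
    with two equal-sized groups: t = d * sqrt(n / 2)."""
    return d * math.sqrt(n_per_group / 2)

# Trivial effect (d = 0.05) with 5,000 per group: t = 2.5, past the
# ~1.96 cutoff, so it is "significant" despite being clinically tiny.
t_large_n = t_from_d(0.05, 5000)

# Moderate effect (d = 0.5) with only 5 per group: t is far below
# 1.96, so the study is underpowered and results are chance-prone.
t_small_n = t_from_d(0.5, 5)
```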

Question 7

You have completed data collection for your RCT testing the efficacy of a new therapy for major depressive disorder, comparing the intervention group to a wait-list control group. You have been working on the statistical analysis of your data set and get excited when you find a statistically significant difference – your treatment group has lower symptom scores at the end of the study than the control group! Now it’s time to think about your results more:

Does this finding necessarily mean your results have clinical or practical significance?

  • Explain why or why not.

  • Make sure to describe how each type of significance is different, as well how they relate to each other and why this is an important question to ask for studies like this.

  • Identify two examples of how you could determine clinical significance in this study, providing a brief statement about what each one is and a brief statement/example demonstrating how each one would help you to better interpret your results.

NOTE: You don’t have to include a description of how you calculate clinical significance or the statistical details, just show you know what the two are conceptually.

Does not necessarily mean you have clinical/practical significance.

  • A statistically significant finding only tells you that an effect likely exists, not the size or importance of the effect, and it does not establish real-world relevance.

  • Effect size (Cohen's d): a measure of the magnitude of the difference between groups. It looks beyond the p-value to show whether the difference between the intervention and control groups is large enough to be meaningful for clinical practice.

  • The Jacobson & Truax method: the Reliable Change Index (RCI) determines whether a client's improvement is beyond random measurement error, and the clinically significant change criterion determines whether the post-treatment score moves out of the dysfunctional range into the healthy/functional range, by comparing client scores to both dysfunctional and functional population norms. This makes it a key tool in outcome research: a treatment group can be statistically better on average even if few individual clients reliably recover.
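The two tools above can be sketched numerically. This uses the standard Cohen's d and Jacobson & Truax RCI formulas; all scores, SDs, and the reliability value are hypothetical illustrations.

```python
import math

def cohens_d(mean_tx, mean_ctrl, sd_pooled):
    """Standardized mean difference between two groups."""
    return (mean_tx - mean_ctrl) / sd_pooled

def reliable_change_index(pre, post, sd_baseline, reliability):
    """Jacobson & Truax RCI: change score divided by the standard error
    of the difference; |RCI| > 1.96 means the change exceeds what
    measurement error alone would likely produce."""
    se_meas = sd_baseline * math.sqrt(1 - reliability)
    s_diff = math.sqrt(2 * se_meas ** 2)
    return (post - pre) / s_diff

# Hypothetical depression scores (lower = better)
d = cohens_d(mean_tx=12.0, mean_ctrl=18.0, sd_pooled=8.0)  # d = -0.75

# One client: pre = 30, post = 15, baseline SD = 10, reliability = 0.90
rci = reliable_change_index(30, 15, 10, 0.90)
reliably_changed = abs(rci) > 1.96
```

Here the group-level d describes how large the average treatment effect is, while the RCI answers the client-level question of whether one person's 15-point drop is more than noise.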

Question 8

Many non-scientists do not understand why theory is needed in scientific research. Develop an explanation appropriate for an audience of high school students that clarifies what we mean by theory and why it’s important.

Using a hypothetical study, illustrate examples of the roles theory may play, or times you may rely on theory, at all different stages the research process

A scientific theory explains why something happens based on existing knowledge, observations, and research. It helps scientists to organize and explain different phenomena. A theory can help guide our questions and make sense of our results.

Let’s look at an example. A very popular theory in psychology is Maslow’s hierarchy of needs, which outlines the physiological, safety, love/belonging, esteem, and self-actualization needs humans require to live a fulfilling life. The hierarchy suggests that lower-level needs may need to be satisfied before one can focus on higher-level needs. You could design a study that asks the question “Do students struggling with basic needs (like food or safety) perform worse academically than those who have those needs met?”. So, this theory helps to guide my research question.

Next, I will collect data to try to answer this question. The theory tells me what I need to measure, for example, food security or safety in the home. It then gives me a basis for explaining the results of my study.

Without this theory, it would be like exploring without a map. We could collect data, but it would be hard to make sense of it or connect it to a bigger picture.

Question 9

We spent a lot of time in this course learning about the importance of good research questions. Demonstrate what you’ve learned about research questions by doing the following:

a. Explain what is meant by saying research questions must be testable and hypotheses must be falsifiable, and why this is important. Illustrate your explanation with both an example of an untestable, abstract idea that would not be suitable for a research project, and a related testable idea that would be suitable. For the testable idea, come up a hypothesis that is not falsifiable and one that is. Justify why you think each one is (or is not) falsifiable.

b. Name the five specific types of research questions common in clinical psychology research. For each, briefly identify its purpose (i.e., what it will help you learn) and provide an example

a. Testable means the variables can be observed or measured so that data can be gathered to answer the research question. Falsifiable means a hypothesis could, in principle, be shown to be false; if it cannot be, we would have no way to prove or disprove it.

  • Untestable/abstract idea: Opioid addiction is a result of bad luck.

  • Testable idea: The frequency of opioid use is influenced by environmental stressors (e.g., social isolation)

    • non-falsifiable: People who use opioids will have an inherent, unconscious desire to be alone.

    • falsifiable: Those who experience higher levels of social isolation will report higher frequency of opioid use than those with strong social connections.

b. Descriptive: Provide a picture of a phenomenon. What do withdrawal symptoms look like in OUD?

Descriptive-comparison: Compare groups to look for differences in description. Do prescription opioid users display similar withdrawal symptoms to heroin users?

Correlational: Investigate a relationship. Is screen time in the 30 minutes before bed related to worse sleep?

Causality: Investigate causes for a phenomenon. Will methadone treatment lead to less heroin use?

Measurement: Validate a measurement tool. Are sleep diaries a reliable measure of sleep in adolescents?

Question 10

Imagine you have just finished working with a study participant when they ask about what all the questionnaires and interviews were, and why there are so many.

It’s now your job to explain construct validity to your study participant who has no background in research design. Define and explain what construct validity is in language they understand.

Identify the specific types of validity that fall under the umbrella of construct validity, briefly noting what each one tells you.

Use an example of any one construct commonly measured in clinical psychology research to illustrate your explanations in practical terms, demonstrating why it’s important and answering their question about why you included so many measures.

  • E.g., Beck Depression Inventory (BDI)

Construct: Whether the BDI accurately measures depression.

Content: Whether the BDI covers every aspect of depression, based on how depression is defined in the DSM-5-TR. For example, if the BDI did not ask about low mood, it would not be measuring every aspect of depression.

Face: How well the BDI appears, on its surface, to measure depression. Looking at the questions, it is clear the BDI is asking about depression rather than, say, schizophrenia.

Criterion: How well the BDI relates to other measures of depression.

  • Convergent: The BDI should relate to, and score similarly to, other measures of depression, e.g., the SCID-5 depression module.

  • Discriminant: The BDI should not correlate strongly with measures of other constructs, such as anxiety on the BAI; it should be specific to depression.