
Week 1: Chapters 1, 2, 3

 

Chapter 1:

·      why is it important to understand social research methods?

1.     To avoid errors and difficulties that may arise when conducting social research.

2.     To become aware of the full range of methods and approaches available to you.

3.     To help you understand the research methods used in the published work of others.

·      What is social research?

·      Academic research conducted by social scientists from a broad range of disciplines such as sociology, anthropology, education, human geography, social policy, politics and criminology.

·      What is it motivated by?

·      Developments and changes in society

·      It is essential for generating new knowledge and expanding our understanding of contemporary social life.

·      Research methods = the tools and techniques, such as surveys, interviews, or focus groups, that social scientists use to explore an area of interest by gathering information (data) that they then analyze.

·      Methodology = the broader, overall approach taken in a research project and the reasoning behind the choice of research methods.

·      Why do social research?

1.     Noticing a gap in the academic literature

2.     Noticing an inconsistency in pre-existing literature

3.     Wanting to understand something unresolved that is going on in society

 

·      Social research and its methods are influenced by various contextual factors and do not occur in isolation:

a)     theory, and researchers’ viewpoints on its role in research;

b)    the existing research literature;

c)     epistemological and ontological questions;

d)    values, ethics, and politics.

 

a)    Theory, and researchers’ viewpoints on its role in research: ‘ideas and intellectual traditions’ often refers to theories.

Theory= a group of ideas that aims to explain something, in this case the social world.

Theories have significant influence on the research topic being investigated, both in terms of what is studied and how findings are interpreted. As well as influencing social research, theories can themselves be influenced by it, because the findings of a study add to the knowledge base to which the theory relates.

Researchers’ views about the nature of theory can have implications for research. For instance, the choice between quantitative and qualitative research matters: quantitative research is based on hypotheses and theoretical ideas that drive the collection and analysis of data, while qualitative research suggests a more open-ended strategy in which theoretical ideas emerge out of the data.

 

b)    Existing knowledge: the existing knowledge in the area in which a researcher is interested also forms an important part of the background against which social research takes place. It is therefore necessary to be familiar with the literature on the topic under investigation, both to build on existing work and to avoid repeating it.

c)     Epistemological and ontological questions: views about how knowledge should be produced are known as epistemological positions, which raise questions about how the social world should be studied and whether the scientific approach advocated by some researchers is the right one for social research.

The views about the nature of the social world and social phenomena are known as ontological positions. Ontological debates concern whether social phenomena are relatively inert and beyond our influence, or a product of social interaction.

The stance taken on both issues has implications for the way social research is conducted.

d)    Values, ethics and politics:

1.     Ethical Considerations:

  1. Ethical issues are central to social research and have become more critical with new data sources like social media.

  2. Researchers must typically undergo a process of ethical clearance, especially when involving vulnerable populations (e.g., children).

  3. Participant Involvement:

    1. In fields like social policy, there's a strong belief that research participants, especially service users, should be involved in the research process.

    2. This involvement can include formulating research questions and designing instruments such as questionnaires.

    3. The collaborative approach is often referred to as “co-production.”

  4. Political Context:

    1. Social research is influenced by political factors, including government funding priorities, which can shape which research topics receive support.

    2. The political landscape also affects access to research settings and the dynamics of research teams.

  5. Wider Context Influence:

    1. The choices of research methods and the focus of social research are closely related to broader contextual factors, including ethical standards, participant empowerment, and political considerations.

These aspects underscore the interconnectedness of values, ethics, and the political landscape in shaping social research practices and outcomes.

The main elements of social research:

The main stages of most research projects

Literature review

 

A critical examination of existing research that relates to the phenomena of interest, and of relevant theoretical ideas.

 

Concepts and theories

 

The ideas that drive the research process and that help researchers interpret their findings. In the course of the study, the findings also contribute to the ideas that the researchers are examining.

 

Research question(s)

 

A question or questions providing an explicit statement of what the researcher wants to know about.

 

Sampling cases

 

The selection of cases (often people, but not always) that are relevant to the research questions.

 

Data collection

 

Gathering data from the sample with the aim of providing answers to the research questions.

 

Data analysis

 

The management, analysis, and interpretation of the data.

 

Writing up

 

Dissemination of the research and its findings

 

 

1.     Literature review = before researching a topic, you need to explore what has already been written about it in order to determine:

• what is already known about the topic;

• what concepts and theories have been applied to the topic;

• what research methods have been applied to the topic;

• what controversies exist about the topic and how it is studied;

• what contradictions of evidence (if any) exist;

• who the key contributors are to research on the topic;

• what the implications of the literature are for our own research.

It is impossible to read all the existing literature, but the main books and articles must be read. This knowledge is then shared with readers in the literature review section of the research paper, which must be critical rather than merely descriptive.

2.     Concepts and theories= Concepts are labels we use to understand and categorize aspects of the social world that share common features. They help us make sense of complex social phenomena. Examples of key concepts in social sciences include bureaucracy, power, social control, status, hegemony, and alienation. These concepts are fundamental to the development of theories in social research.

  1. Role of Concepts:

    • Concepts play a crucial role in organizing research interests and communicating them to intended audiences.

    • They encourage researchers to reflect on their investigative focus and provide a framework for organizing research findings.

  2. Theoretical Frameworks:

    • Concepts are integral to theories, serving as foundational elements that shape research questions and methodologies.

    • The relationship between theory and research can be viewed through two lenses:

      • Deductive Approach: Theories drive the research process, guiding data collection and analysis based on predefined concepts.

      • Inductive Approach: Concepts emerge from the research process, helping to organize and reflect on data as it is collected.

  3. Dynamic Nature of Concepts:

    • The boundary between deductive and inductive approaches is not rigid; researchers often begin with key concepts to guide their studies but may revise or develop new concepts based on their findings and interpretations.

    • This iterative process emphasizes the fluidity of concepts as they adapt to the realities uncovered during research.

  4. Importance of Literature:

    • Familiarity with existing literature is essential as it reveals established concepts and their effectiveness in addressing key research questions.

3.     Research questions = explicit statements of what the researcher intends to find out. There are several advantages to having a research question:

• guide your literature search;

• guide your decisions about the kind of research design to use;

• guide your decisions about what data to collect and from whom;

• guide the analysis of your data;

• guide the writing up of your data;

• stop you from going off in unnecessary directions; and

• provide your readers with a clear sense of what your research is about.

Reading additional literature will often prompt you to revisit your research questions or to create new ones.

→      Influence of Research Questions:

o   The nature of the research question is crucial in determining the approach to the investigation and the choice of research design.

→      Types of Research Designs:

o   Experimental Design: Suitable for assessing the impact of an intervention.

o   Longitudinal Design: Appropriate for studying social change over time, allowing for observations across different time points.

o   Case Study Design: Useful when focusing on specific communities, organizations, or groups, providing in-depth insights.

o   Cross-Sectional Design: Ideal for capturing current attitudes or behaviors at a single point in time, offering a snapshot view.

o   Comparative Element: If the research question involves comparison, the design will need to reflect this aspect.

→      Familiarity with Research Designs:

o   Researchers must understand the implications and suitability of different research designs in relation to their specific questions, as each design supports different types of inquiries and outcomes.

→      Sampling = it is impossible to include every individual who fits the research, so researchers aim to secure a sample that represents the wider population by effectively replicating it in miniature. Sampling applies not only to survey research but also to content analysis and other research strategies.
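Replicating a population "in miniature" can be sketched in code. The example below is a minimal illustration (not from the text), using only Python's standard library to draw a simple random sample in which every case has an equal chance of selection; the population size and sample size are hypothetical.

```python
# Illustrative sketch: drawing a simple random sample from a
# hypothetical sampling frame of 10,000 numbered cases
# (e.g. people on a register).
import random

population = list(range(10_000))  # hypothetical sampling frame

random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, k=500)  # 500 cases, without replacement

# Every case had an equal chance of selection, so the sample aims to
# mirror the wider population in miniature.
print(len(sample))       # 500
print(len(set(sample)))  # 500 -> no case was selected twice
```

Because `random.sample` draws without replacement, no individual can appear in the sample twice, which matches the usual logic of probability sampling.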

→      Data collection = a crucial aspect of research, often discussed in detail due to its significance in the research process. It can be approached in two main ways: structured and flexible. Structured methods, such as self-completion questionnaires and structured interviews, involve a predetermined approach where researchers define what they want to learn and design data collection tools accordingly. These methods align with a deductive approach to research, focusing on testing specific hypotheses. In contrast, flexible methods emphasize a more open-ended approach, allowing researchers to adapt their focus as new data emerges. Techniques such as participant observation and semi-structured interviews facilitate inductive theorizing, where concepts and theories evolve from the data rather than being predefined. While flexible methods still aim to address research questions, these may not always be explicitly stated, reflecting the exploratory nature of the research. Overall, researchers can take a variety of approaches to data collection depending on their objectives, underlining its central role in the research process.

→      Data analysis = applying statistical or other analytic techniques to the data that have been collected. There are multiple aspects to data analysis: managing the raw data, making sense of the data, and interpreting the data.

a)    Managing data: This stage includes checking for errors to ensure accuracy. In qualitative research, audio recordings of interviews are transcribed, requiring attention to detail to avoid misinterpretation. In quantitative studies, survey data is either entered from paper forms or downloaded directly into analysis software such as SPSS or Excel.

b)    Data reduction: Data reduction condenses large volumes of information to facilitate interpretation. Qualitative analysis involves coding data into themes, while quantitative analysis may address anomalies like missing responses.
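The quantitative side of data reduction, handling anomalies such as missing responses before summarizing, can be illustrated with a small sketch (not from the text). The response values and the missing-data code (-99) below are hypothetical; the point is simply that missing responses are set aside before summary statistics are computed.

```python
# Illustrative sketch: Likert-scale survey answers where -99 is a
# hypothetical code meaning "no answer given".
responses = [4, 5, -99, 3, 4, -99, 2, 5]

valid = [r for r in responses if r != -99]  # drop missing responses

mean_score = sum(valid) / len(valid)  # summarize valid cases only
print(len(valid), round(mean_score, 2))  # prints: 6 3.83
```

Treating the missing-data code as a real score would have dragged the mean far below the answers actually given, which is why this cleaning step comes before any interpretation.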

c)     Primary and secondary analysis: After managing and analyzing the data, researchers link their findings back to research questions and relevant literature to draw meaningful conclusions. Primary Analysis involves researchers analyzing data they collected themselves, while Secondary Analysis refers to analyzing existing data. Secondary analysis is efficient and cost-effective, allowing researchers to explore new questions without the extensive process of data collection.

→      Writing up: research is of no use if it is not written up and shared with others.

Format

• Introduction. This outlines the research area and its significance. It may also introduce the research questions.

• Literature review. This sets out what is already known about the research area and examines it critically.

• Research methods. The researcher presents the research methods that they used (sampling strategy, methods of data collection, methods of data analysis) and justifies the choice of methods.

• Results. The researcher presents their findings.

• Discussion. This examines the implications of the findings in relation to the literature and the research questions.

• Conclusion. This emphasizes the significance of the research.

 

The messiness of research:

  1. Realities of Social Research:

    • The book aims to present a clear, accessible view of social research while acknowledging that the process is often less straightforward than it appears in academic literature. Research frequently involves false starts, mistakes, and necessary changes to plans.

  2. Research Challenges:

    • Many potential issues cannot be anticipated because they are unique, one-off events. While some reports suggest smooth research processes, they often omit the challenges faced, focusing instead on what was achieved.

  3. Reflexivity and Reporting:

    • Researchers typically acknowledge limitations in their studies, but academic reports usually highlight successful findings rather than detailing difficulties. This tendency is common across disciplines, including natural sciences.

  4. Acknowledging Messiness:

    • Recognizing the complexities and imperfections in social research does not devalue it; rather, it is essential for transparency and rigor. Acknowledging weaknesses helps improve methods and reassures novice researchers that messiness is a normal aspect of real-world research.

  5. Reflective Writing:

    • Researchers are encouraged to reflect on challenges and limitations in their projects. By documenting what was planned versus what actually occurred, they demonstrate an understanding of the implications of any changes made during the research.

  6. Methodological Diversity:

    • Social research includes various methodological traditions that may conflict, which fosters justification for decisions and critical thinking about research objectives. The distinctions between quantitative and qualitative approaches are often less clear than they appear.

  7. Complexity of the Social World:

    • The intricate and messy nature of the social world is what makes it fascinating to study. The following chapters will explore different perspectives and principles of research methodology, providing foundational theoretical knowledge and guidance for conducting research.

Chapter 2: social research strategies, quantitative research, and qualitative research

What is empiricism? Empiricism is a term that has multiple meanings, but it is primarily defined in two ways:

  1. General approach to reality: Empiricism suggests that knowledge is valid only if it is gained through experience and the senses. This perspective holds that ideas must undergo rigorous testing to be considered knowledge.

  2. Belief in facts: The second meaning of empiricism emphasizes that acquiring facts is a legitimate goal in its own right. This is sometimes referred to as "naive empiricism," which highlights the importance of collecting descriptive data, such as in a national census, to understand demographic changes and inform social research and government policies.

Research in the social sciences is driven by various motivations, but theory plays a crucial role in enhancing our understanding of knowledge. It is essential to reflect on the connection between theory and research, particularly regarding the philosophical assumptions about the roles of theory and data. This reflection influences research design, the formulation of research questions, and the choice between qualitative, quantitative, or mixed methods for data collection.

Link between theory and research:

  1. Understanding the relationship between theory and research is complex, influenced by the type of theory being used and the approach to data collection (deductive vs. inductive).

 

 

Definition of theory:

  1. The term "theory" generally refers to explanations for observed patterns or events, often framed in broader theoretical contexts such as structural functionalism or symbolic interactionism.

2 types of Theories:

1.     Middle-Range theories: Developed by Merton, these theories focus on specific aspects of social phenomena and are more useful for empirical research. Examples include labelling theory and differential association theory.

2.     Grand theories: These are more abstract and provide limited practical guidance for research, making it challenging to apply them to real-world situations.

  1. Role of background literature:

    • Background literature can function as a substitute for theory, helping to inform research questions and guiding the research process. Researchers may use existing literature to identify gaps or inconsistencies that warrant further investigation.

  2. Skepticism towards naive empiricism:

    • Research lacking clear theoretical connections may be dismissed as naive empiricism. However, studies using relevant background literature as a theoretical foundation are valid and important.

  3. Dynamic relationship between theory and data:

    • While theory typically guides data collection and analysis, it can also emerge after the research process. This distinction leads to the concepts of deductive (theory-driven) and inductive (data-driven) approaches to research.

Definitions:

  • Middle-Range theories: Focus on specific social phenomena; useful for empirical inquiry.

  • Grand theories: Abstract theories with limited applicability to specific research.

  • Naive empiricism: Dismissal of research that appears to lack theoretical grounding.

  • Deductive approach: Data collection guided by existing theories.

  • Inductive approach: Theories developed based on data analysis.

Deductive vs Inductive approach:

Deductive approach:

The deductive approach is a research methodology where the researcher utilizes existing knowledge and relevant theoretical ideas to formulate a hypothesis (or hypotheses) that can be tested empirically. Key aspects of the deductive approach include:

  1. Hypothesis development:

    • The researcher starts with established theories and draws specific hypotheses from them. These hypotheses are speculative statements that the researcher aims to test.

  2. Researchable entities:

    • The concepts within the hypothesis need to be translated into researchable entities, often referred to as variables. This involves determining how these concepts can be operationalized for empirical investigation.

  3. Data collection:

    • Developing a hypothesis includes planning how data will be collected on each variable, ensuring that the research can effectively test the hypothesis.

  4. Quantitative research preference:

    • The deductive approach is more commonly associated with quantitative research, where the language of hypotheses, variables, and empirical testing is prevalent. This approach is less applicable to qualitative research.

  5. Role of middle-range theories:

    • Merton’s concept of middle-range theories is pertinent here, as these theories are primarily used to guide empirical inquiry within sociology, forming a foundation for the deductive process.

  6. Sequence of events:

    • In a deductive research project, the process begins with theory and hypothesis formulation, which then drives data gathering. This process is sequential and logical.

  7. Revision of theory:

    • After data collection and analysis, researchers reflect on their findings, which may lead to the revision of the original theory. This reflective process involves an inductive aspect, where new insights are integrated back into the existing body of knowledge.

Important points

  • Hypothesis: A testable speculation derived from theoretical frameworks.

  • Variables: Operationalized concepts that allow for empirical testing.

  • Quantitative focus: The deductive approach is primarily used in quantitative research.

  • Middle-Range theories: Theories that guide empirical research in specific areas of sociology.

  • Inductive reflection: The process of revising theories based on new findings, integrating both deductive and inductive reasoning.

 

 

Not all deductive research projects strictly follow the expected sequence of deriving hypotheses from theory. The term "theory" can also refer to the existing literature on a topic, rather than just specific theoretical frameworks. Additionally, researchers’ perspectives on theory or literature may evolve during the data analysis phase, and new theoretical ideas can emerge after data collection is completed. The typical logic of research involves developing theories and subsequently testing them; however, the practical application of this logic varies across different studies. Therefore, while the deductive process exists, it should be viewed as a general framework rather than a rigid model applicable to all research.

 

Inductive approach:

The inductive approach is a research methodology that focuses on deriving theory from empirical observations rather than starting with existing theories. Key aspects of the inductive approach include:

  1. Theory development:

    • In the inductive approach, theory emerges as the outcome of research, formed by drawing generalizable inferences from observations. Researchers do not begin with a hypothesis but rather develop one based on their findings.

  2. Linking theory and research:

    • Induction provides an alternative strategy for connecting theory and research, contrasting with the deductive approach where theory guides the research process.

  3. Iterative strategy:

    • The inductive process often involves an iterative strategy, where researchers move back and forth between data collection and theory refinement. This allows for adjustments based on ongoing analysis.

  4. Combination of approaches:

    • While primarily inductive, this approach may still involve some deductive elements, particularly when researchers reflect on collected data and determine conditions under which a theory may hold.

  5. Grounded theory:

    • An example of the inductive approach is the grounded theory method, as used in O'Reilly et al. (2012). This method focuses on generating theory directly from qualitative data, emphasizing the significance of the findings derived from the analysis.

  6. Qualitative focus:

    • The inductive approach is often associated with qualitative research, which allows for a deeper exploration of the data and the emergence of new theoretical insights.

Important points

  • Emergent Theory: Theory is developed from data rather than imposed beforehand.

  • Observations: The approach relies heavily on empirical observations to inform theoretical frameworks.

  • Iterative Nature: Emphasizes a back-and-forth process between data collection and theoretical development.

  • Grounded Theory: A specific method within the inductive approach that generates theory from qualitative data.

However:

·      These distinctions are not as straightforward as they are sometimes presented; it is best to think of them as tendencies rather than fixed distinctions. In fact, there is also a third approach, called abductive reasoning.

Abductive Reasoning: Abductive reasoning is a logical approach that begins with an observation or a puzzling phenomenon and seeks to explain it by identifying the most likely explanation. This process involves a back-and-forth movement between the observed data (the puzzle) and the broader social context or existing literature, often referred to as dialectical shuttling. Abduction acknowledges that the conclusions drawn from observations are plausible but not entirely certain, as there may be multiple explanations for the same observation. For example, if you notice smoke in your kitchen, abductive reasoning would lead you to infer that the most probable cause is that you burned dinner, while also considering other possible explanations, like smoke from outside.

Adduction: Adduction is often used interchangeably with abductive reasoning, emphasizing the inference to the best explanation based on available evidence. It focuses on the process of reasoning that connects observations to theories or explanations, aiming to provide the most plausible interpretation of the data at hand. Adduction involves synthesizing information from observations and existing knowledge to generate a reasonable hypothesis or explanation that can be further explored.

Key aspects

  • Abductive Reasoning: Begins with observations to explain them using the most likely explanations, acknowledging the plausibility but not certainty of conclusions.

  • Adduction: Similar to abduction, it focuses on synthesizing observations and existing theories to infer the best explanation for a phenomenon.

Epistemological considerations:

Epistemological issues concern the question of whether the social world should be studied according to the same principles, procedures, and ethos as the natural sciences.

The argument that the social sciences should imitate the natural sciences is associated with the epistemological position of positivism.

Positivism is an epistemological stance advocating for the use of natural science methodologies in the study of social reality and other domains. It emphasizes the reliance on empirical evidence and seeks to create objective, law-like knowledge similar to the natural sciences. Here is an explanation of the characteristics of positivism:

  1. Phenomenalism: Positivism holds that only phenomena observable and verifiable through the senses qualify as genuine knowledge. This implies that abstract concepts or theoretical ideas must be anchored in empirical evidence to be meaningful.

  2. Deductivism: Positivist theory aims to formulate hypotheses that can be empirically tested. These hypotheses are used to assess patterns, regularities, and laws governing social reality. Thus, positivism emphasizes a scientific approach where theories are tested through systematic observation and experimentation.

  3. Inductivism: Knowledge, according to positivism, is accumulated through gathering empirical facts, which then form the foundation for identifying and establishing general laws. This means that broad theories or principles are derived from the careful analysis of factual data.

  4. Objectivity and Value-Free Science: Positivism insists on objectivity in scientific research. Scientific inquiry must be conducted without bias or the influence of the researcher’s values, beliefs, or personal preferences. The results of such studies should be independent of the researcher’s subjective interpretations.

  5. Separation of Scientific and Normative Statements: Positivism differentiates between a) descriptive scientific statements (which are objective and can be proven through empirical evidence) and b) normative statements (which reflect value judgments and cannot be empirically verified). A positivist approach prioritizes scientific statements over normative ones, emphasizing that science should remain neutral and not engage in ethical or moral evaluations.

For some researchers this doctrine is a descriptive category; for others it has a negative connotation, since it describes crude and superficial practices of data collection.

Positivism includes aspects of both the deductive and inductive approaches. It also draws a sharp distinction between theory and research, with the role of the researcher being to test theories and to provide material for the development of laws. In fact, it implies that it is possible to collect observations that are not influenced by pre-existing theories.

 

 

 

Another similar stance is realism:

Realism is an epistemological and philosophical position that asserts the existence of an external reality that exists independently of our perception or description of it. Realism emphasizes that the natural and social worlds are governed by structures and mechanisms that can be studied using appropriate scientific methods.

Similarities with positivism:

  1. Use of scientific methods: Both realism and positivism believe that the natural and social sciences can and should use similar scientific methods for collecting data and developing explanations. This reflects a shared commitment to systematic, empirical investigation.

  2. Belief in an external reality: Both approaches maintain that there is an external reality independent of human perception or description. They argue that science should focus on uncovering and explaining this objective reality.

Types of realism:

  1. Empirical realism (or Naive realism):

    • Definition: This form of realism posits that reality can be comprehended through the application of suitable empirical methods. It assumes that there is a direct or nearly perfect correspondence between the terms we use to describe the world and the actual world itself.

    • Criticism: Critics argue that empirical realism overlooks the underlying structures and generative mechanisms that produce observable phenomena. It is considered "superficial" because it focuses only on what can be directly observed and fails to acknowledge deeper causal forces at play.

  2. Critical realism:

    • Definition: Critical realism, developed by philosopher Roy Bhaskar, recognizes both the reality of the natural world and the events and discourses of the social world. It contends that to truly understand and potentially change the social world, one must identify and analyze the underlying structures and mechanisms that generate observable events.

    • Key Concepts:

      • Critical realism emphasizes that these structures are not immediately apparent in observable patterns. Instead, they require detailed theoretical and practical investigation to be identified.

      • Critical realists accept that our scientific descriptions of reality do not perfectly mirror reality itself. Rather, they view scientific theories as tools for understanding and explaining underlying causal mechanisms.

      • Unlike positivism, critical realism is open to using theoretical constructs that may not be directly observable but are essential for explaining how observable phenomena occur.

And in summary:

  • Realism focuses on the belief in an objective reality and supports the application of scientific methods to understand it.

  • Empirical Realism is criticized for being overly simplistic, assuming a direct correspondence between observations and reality.

  • Critical Realism takes a deeper approach, arguing that reality involves structures and mechanisms not immediately visible but essential for understanding and transforming the world. Critical realism emphasizes the importance of theoretical work in uncovering these deeper forces.

An epistemology that contrasts with positivism is Interpretivism:

Interpretivism is an epistemological perspective that serves as an alternative to positivism in the social sciences. It holds that the study of human behavior requires different research methods from those used in the natural sciences because people, unlike natural objects, have subjective experiences and interpretations that influence their actions.

  1. Fundamental differences between people and natural objects: Interpretivism emphasizes that human beings are fundamentally different from the objects studied in the natural sciences. Humans have thoughts, feelings, and intentions that shape their actions, making the study of human behavior more complex and requiring a more nuanced approach.

  2. Need for distinct research methods: Because of these differences, interpretivism argues that social scientists must use research methods that can capture and understand the subjective experiences and meanings of human actions. These methods focus on understanding the ways individuals interpret and make sense of their world.

  3. Understanding subjective experience: Interpretivist research seeks to grasp the subjective meanings and experiences of social actions. This involves understanding what social experiences mean in practice, how they are perceived by individuals and groups, and the reasons behind these interpretations.

  4. Intellectual influences:

a)    Weber’s Idea of Verstehen: This concept emphasizes understanding social action by placing oneself in the position of the people being studied to interpret their actions and motivations.

b)    Hermeneutic–Phenomenological Tradition: This tradition focuses on the interpretation and meaning of human experiences, often emphasizing the importance of context and the lived experience.

c)     Symbolic Interactionism: This theory explores how individuals create and interpret meanings through social interaction, emphasizing the importance of symbols and language in shaping human behavior.

 

A.   Weber’s Idea of Verstehen: This concept emphasizes understanding social action by placing oneself in the position of the people being studied to interpret their actions and motivations.

Hermeneutics and Verstehen are both central concepts within interpretivism, influencing how social scientists approach the understanding of human actions and experiences.

Hermeneutics:

  • Definition: Hermeneutics originated in theology, where it was used to interpret religious texts. In the social sciences, hermeneutics has evolved into a broader theory and method concerned with the interpretation and understanding of human action.

  • Focus: Hermeneutics emphasizes how understanding is shaped by historical, cultural, and linguistic contexts. It is not just about observing actions but about interpreting the meaning behind them in a way that accounts for the environment in which those actions occur.

  • Core Ideas:

    • Situated understanding: Hermeneutics posits that human understanding is always "situated," meaning it cannot be fully detached from the context in which people live and interact. Humans are not passive entities shaped only by external social forces; they actively interpret and create meaning based on their experiences.

    • Contrast with Positivism: Hermeneutics challenges the positivist approach of explaining human behavior through general laws and instead focuses on understanding the subjective meanings behind social actions. It acknowledges that human actions are driven by complex motivations and interpretations that cannot always be simplified into abstract, law-like generalizations.

Verstehen:

  • Definition: Verstehen is a German term meaning "understanding," introduced and popularized by sociologist Max Weber. It refers to the interpretive understanding of social action.

  • Weber's Perspective: Weber argued that the purpose of the social sciences is to comprehend how individuals perceive and act in their social world. He emphasized that sociology should not just seek causal explanations but should aim to understand the meanings and motives behind people’s actions from their own perspective.

  • Interpretive approach:

    • Weber's method involves placing oneself in the position of others to interpret their actions and the social conditions that give rise to those actions. This involves looking at the intentions, beliefs, and contexts that shape behavior, rather than attributing human actions to overarching social forces.

    • Example: In contrast to Émile Durkheim’s positivist approach, which explained suicide rates through social integration levels, Jack D. Douglas (working in the Weberian tradition) emphasized the subjective and situational interpretations of suicide. He argued that understanding suicide involves examining how coroners interpret deaths and recognizing that meanings of suicide differ across contexts.

Summary:

  • Hermeneutics focuses on interpreting human action by considering the influence of history, culture, and language, emphasizing that understanding is context dependent.

  • Verstehen, as developed by Weber, stresses the need for interpretive understanding in the social sciences to grasp the subjective meanings and motivations behind social actions.

  • Both concepts challenge positivist approaches that seek to explain behavior through universal laws, emphasizing the complexity and situational nature of human understanding and action.

B.    Hermeneutic–Phenomenological Tradition:

Phenomenology is a philosophical approach that significantly contributes to the interpretivist position, opposing positivist methodologies in the social sciences. It focuses on understanding how individuals perceive and make sense of the world around them, emphasizing that researchers must become aware of and attempt to overcome their preconceptions to understand the subjective experiences and consciousness of others.

Core concepts of phenomenology:

  1. Human Consciousness and Experience: Phenomenology emphasizes how human beings experience the world and attribute meaning to those experiences. It suggests that reality is constructed through the perceptions and interpretations of individuals. Therefore, to truly understand social phenomena, one must consider the perspectives and meanings held by the people who experience them.

  2. Origins and Key Figures:

    • The philosophical roots of phenomenology trace back to Edmund Husserl, who emphasized the study of consciousness and the meanings that individuals attach to their experiences.

    • Alfred Schutz applied Husserl’s phenomenological ideas to the social sciences, integrating Max Weber’s concept of Verstehen (understanding). Schutz's work emphasized interpreting social reality by understanding the meanings that people give to their everyday experiences.

The two key points from Schutz’s quote (p. 126) are:

  1. The distinction between the natural and social sciences, highlighting that social reality is meaningful and must be studied differently from natural phenomena.

  2. The importance of understanding and interpreting people’s “common-sense thinking” to comprehend their actions, emphasizing the need for an empathetic, perspective-taking approach to social research.

C.   Symbolic interactionism:

Symbolic Interactionism, developed by George Herbert Mead and expanded by Herbert Blumer, is a sociological framework focusing on how individuals interpret and act based on symbolic meanings. These meanings are constructed and continuously reshaped through social interactions.

Core concepts:

  1. Meaning and symbols: People give meaning to objects, actions, and symbols through interaction. These meanings are not inherent but created through communication.

  2. Interpretation of actions: Behavior is guided by the meanings people assign to their environment and the actions of others, which they actively interpret.

  3. Social construction of Self: The concept of the "looking-glass self" explains how our sense of self develops from how we think others perceive us.

Influence on Interpretivism

Symbolic interactionism reinforces interpretivism’s focus on understanding human actions from the actor’s perspective:

  1. Subjective Meaning: Both emphasize interpreting the meanings individuals give to their experiences.

  2. Social Interaction: Symbolic interactionism highlights how meanings are created through interaction, aligning with interpretivism’s view of behavior as socially and contextually shaped.

  3. Blumer’s contribution: Herbert Blumer emphasized the importance of understanding how people interpret their actions, reinforcing the interpretive approach.

Distinction from Hermeneutic–Phenomenological tradition

  • Symbolic Interactionism: A theory centered on how people use symbols in interaction, guiding researchers to study communication and meaning making.

  • Hermeneutic–Phenomenological Tradition: A broader approach focusing on understanding human experiences within cultural and historical contexts.

The Process of interpretation and ‘Double Hermeneutic’:

In interpretivist research, interpretation involves understanding how members of a social group make sense of their world and then framing these interpretations within a social-scientific context. This results in what is called a double hermeneutic: a two-layered interpretation process where the researcher interprets the interpretations of the people they are studying.

  1. First Level: The social scientist gathers data and interprets how the participants understand their world, capturing the meanings and perspectives within the social context of the participants.

  2. Second Level: The researcher then places these interpretations into a broader scientific framework, analyzing and contextualizing them using theoretical concepts and existing literature.

The double hermeneutic highlights the complexity of social research and the need for reflexivity—where researchers critically examine their assumptions and biases throughout the research process. Researchers must be explicit about their choices in research design and analysis to increase awareness of how their perspectives influence the study.

Ontological considerations:

Ontology is the study of being; social ontology concerns the nature of social entities, such as organizations and cultures. The researcher’s ontological stance determines how reality is defined.

A key question for social scientists is whether social entities can and should be considered as:

a)     Objective entities that exist separately from social actors (people); or

b)    Social constructions that have been and continue to be built up from the perceptions and actions of social actors.

There are two main positions, referred to as objectivism and constructionism:

1)    Objectivism: is an ontological position that claims that social phenomena, their meanings, and the categories that we use in everyday discourse have an existence that is independent of, or separate from, social actors.

In objectivism, both organizations and cultures are conceptualized as external realities that exert influence on individuals. They are treated as almost tangible, objective entities that exist independently of social actors, with rules, values, and structures that people must learn, internalize, and follow. This perspective emphasizes the constraining and regulating effect these social phenomena have on individual behavior.

 

2)    Constructionism:

Constructionism (or constructivism) is an ontological stance that argues social phenomena and their meanings are continually created and shaped through social interaction. It holds that social realities are not fixed or independent but are constantly constructed and revised by people.

Key points of constructionism:

  1. Social phenomena as constructed: Social phenomena do not exist independently of human interaction. Instead, they are actively created and redefined through social processes. This means that what we understand as reality is fluid, evolving as individuals and groups continuously engage and reinterpret their social worlds.

  2. Constant state of revision: Because social phenomena are produced through interaction, they are never static but are always being revised and reshaped. Meanings and understandings are continuously negotiated and reconstructed as people interact.

  3. Researcher’s role: In recent interpretations, constructionism also suggests that researchers' accounts of the social world are themselves constructions. Researchers cannot provide a definitive, objective account of reality; rather, they present one version of social reality shaped by their perspectives, experiences, and interpretations. This idea challenges the notion that knowledge is fixed or absolute.

  4. Opposition to Objectivism and Realism: Constructionism is fundamentally opposed to objectivism, which views social phenomena as existing independently of human perception. It is also contrary to realism, which posits that there is an objective reality that can be known.

Constructionism is an ontological perspective that challenges the idea that social phenomena, such as organizations and cultures, exist as fixed, external realities independent of social actors. Instead, constructionism emphasizes that these phenomena are continuously created and revised through social interaction.

  1. Organizations as Negotiated Orders: Constructionism views organizations not as rigid structures but as social realities shaped through ongoing negotiations and agreements among individuals. Formal rules and hierarchies exist but are often flexible, functioning more as general guidelines shaped by everyday interactions.

  2. Culture as Continuously Constructed: Rather than being a static, external force that constrains behavior, culture is seen as an emergent reality, continuously formed and adapted by people to address new situations. Although culture has pre-existing elements, it remains in a state of constant reconstruction.

  3. Social Categories as Social Constructs: Categories like "masculinity" are not seen as fixed entities but as meanings built through social interaction. These meanings can change over time and across contexts, often analyzed through discourse.

  4. Intersectionality: Linked to constructionism, intersectionality theory highlights how social categories (e.g., race, gender) interact and shape social realities. It emphasizes the importance of considering multiple, intersecting identities to understand how the social world is constructed.

Intersectionality definition: Intersectionality is a theoretical framework that emphasizes the interconnectedness of different social categories, such as gender, race, class, and sexuality. It argues that these categories cannot be understood separately because they intersect to shape an individual’s experiences and opportunities in unique ways. The concept is rooted in the work of Kimberlé Crenshaw (1989), although it draws from earlier insights, especially those of Black feminists, who highlighted how various social identities create both shared and diverse experiences among women.

  1. Intersectional Analysis: Intersectionality is used across the social sciences to analyze how overlapping social identities affect individuals’ experiences of privilege or disadvantage. The main goal is to transform power structures and address multiple forms of oppression.

Categorical complexities defined by Leslie McCall:

  1. Intra-Categorical complexity: Focuses on the specific intersections of social categories, examining how they interact to shape unique experiences. It questions the creation and boundaries of categories but acknowledges that some social identities are stable over time. For instance, Wingfield’s study (2009) on minority men in nursing shows that race and gender intersect to limit upward mobility for Black men, unlike their White male counterparts.

  2. Inter-Categorical complexity: A relational approach that examines how different social categories interact and shape experiences. It compares categories like race and gender to reveal patterns of privilege and disadvantage, often using quantitative methods. This approach highlights that categories like "gender" and "race" are intertwined, with each influencing the other.

  3. Anti-Categorical complexity: A postmodern critique that deconstructs social categories, treating them as fluid and unstable constructs. This approach views categories as artificial and emphasizes the dynamic, contextual, and historically grounded nature of social identities. It argues that categories cannot be separated and must be analyzed in their full complexity.

Critiques: While intersectionality has been influential, it has been criticized for lacking a clear methodological framework and guidance on how to use it for social change. However, these critiques are often attributed to misunderstandings or poor application rather than flaws in the theory itself.

Ontology and Social Research: Ontological beliefs about the nature of social reality shape research approaches. If organizations and cultures are seen as objective entities, research focuses on structures and values. If viewed as socially constructed, the emphasis is on how people actively shape these realities. These assumptions influence research design and data.

Quantitative vs qualitative:

  • Quantitative research: Emphasizes numerical data and uses measurement for data collection and analysis. It typically follows a deductive approach, testing theories and adhering to the scientific model of positivism. Quantitative research views social reality as external and objective.

  • Qualitative research: Focuses on words and meanings rather than numbers. It often follows an inductive approach, aiming to generate theories and understand social phenomena through the lens of interpretivism. It views social reality as a dynamic creation of individuals.

The distinction between quantitative and qualitative research is commonly used, though debated. Quantitative research focuses on measurement, theory testing, and views reality as objective, while qualitative research emphasizes understanding social meanings and views reality as constructed. However, the divide isn’t strict; research often incorporates elements from both, and mixed methods research is increasingly common.

 

 

Further influences on how we conduct social research: impact of values and practical considerations:

1.     Values: values reflect the personal beliefs or feelings of a researcher, and there are different views about the extent to which they should influence research: a) the value-free approach; b) the reflexive approach; c) the conscious partiality approach.

 

a)    Value-free approach of Émile Durkheim: The value-free approach in social research is the principle that researchers should suppress their own values, biases, and preconceptions to maintain objectivity and scientific rigor. Durkheim argued that social facts should be studied as "things" and that researchers must eliminate any biases or values that could influence their findings. In phenomenology, this idea is supported through the use of epoché, or "bracketing," where researchers consciously set aside their own experiences and values to remain neutral. While the value-free approach aims to ensure the validity and scientific credibility of research, there is increasing acknowledgment that complete value neutrality is difficult, if not impossible, to achieve. Even within traditions that advocate for value-free research, there is a growing understanding that researchers' values inevitably influence their work.

b)    The reflexive approach: The reflexive approach in social research acknowledges that research cannot be completely value-free. Reflexivity involves researchers actively examining and recognizing how their social location—factors such as gender, age, ethnicity, education, and background—affects the data they collect, analyze, and interpret. It is an ongoing process of self-awareness, where researchers consider how their values and biases influence various aspects of their work, including:

à      formulation of research questions;

à      choice of method;

à      formulation of research design and data-collection techniques;

à      implementation of data collection;

à      analysis of data;

à      interpretation of data;

à      conclusions.

 

The reflexive approach also involves acknowledging how researchers' emotions or sympathies, especially when studying marginalized or "underdog" groups, can impact their objectivity. For example, Turnbull’s study of the Ik tribe highlighted how his Western values influenced his negative perception of the tribe's family practices. He emphasized the importance of transparency, admitting that his values shaped his observations.

 

c)     The conscious partiality approach: The conscious partiality approach acknowledges and embraces the influence of values in research. Instead of striving for neutrality, this approach involves deliberate and intentional alignment with particular values or perspectives. Mies, a proponent of this approach, argues that in research—especially feminist research—value-free neutrality should be replaced with partial identification with research subjects. Researchers practicing conscious partiality use theoretical frameworks, such as feminist, Marxist, or postcolonial perspectives, to guide their research:

  • Feminist Approach: Highlights the disadvantages women and marginalized groups face due to patriarchal systems.

  • Marxist Approach: Emphasizes the impact of class divisions and capitalism on socioeconomic inequalities.

  • Postcolonial Approach: Critiques the ethnocentric and Western-centric biases in knowledge production.

  • This approach views the influence of values not as a limitation but as a purposeful and meaningful component of the research.

  • It also recognizes the impact the researcher has on the studies they produce; the influence of the researcher’s values and social position, alongside other social categories, is unavoidable.

Practical considerations in social research: Practical issues are crucial in deciding how to carry out social research. Three key factors include:

  1. Nature of research questions: The choice between quantitative and qualitative methods depends on the type of questions asked. For example, exploring causes of a social phenomenon may require a quantitative approach, while understanding the views of a social group may call for a qualitative approach.

  2. Existing research: If there is little prior research on a topic, a qualitative, exploratory approach might be more suitable, as it can generate theories. In contrast, established topics with measurable concepts might be better suited for a quantitative strategy.

  3. Topic and participants: For sensitive or marginalized groups, such as those involved in illegal or stigmatized activities, qualitative methods are often preferable to build trust and gather meaningful data. Survey methods may not be practical in these cases, and sometimes covert research is used, though it raises ethical concerns.

 

 

 

 

 

 

 

Cap 3: research designs

What is a research design?

Definition: A research design provides a structured framework for the systematic collection and analysis of data in a research study. It outlines how data will be gathered and analyzed to answer research questions or test hypotheses. The research design serves as a strategic blueprint guiding all aspects of a research project.

Characteristics of research design:

  1. Causal connections: Defines how to analyze cause-and-effect relationships between variables.

  2. Generalizability: Determines if findings can apply to larger populations beyond the study sample.

  3. Contextual understanding: Helps interpret behavior within its social and cultural context.

  4. Temporal perspective: Examines social phenomena over time to understand patterns and connections.

What is a research method?

A research method is simply a technique for collecting data. It can involve a specific instrument, such as a self-completion questionnaire or a structured interview schedule (a list of prepared questions); or participant observation, whereby the researcher listens to and watches others; or the analysis of documents or existing data.

 

What is a variable?

Variable: A variable is an attribute or characteristic on which cases differ. Examples include sex, age, ethnicity, or educational attainment. Cases can be individuals or larger entities like households, cities, or organizations. If an attribute does not vary among cases, it is considered a constant.

  1. Independent variable: An independent variable is one that influences or causes changes in another variable. It is the factor presumed to affect or predict the dependent variable. For example, sex could be an independent variable affecting hourly wage.

  2. Dependent variable: A dependent variable is the attribute that is influenced or changed by the independent variable. It is the outcome or effect in the study. Using the previous example, hourly wage is the dependent variable affected by sex.
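The independent/dependent distinction can be made concrete with a tiny computation. This is a minimal sketch with made-up wage figures (the numbers and group sizes are invented for illustration, not taken from the text): sex is treated as the independent variable, hourly wage as the dependent variable, and we compare the mean of the dependent variable across the groups defined by the independent variable.

```python
# Hypothetical illustration: sex as the independent variable,
# hourly wage as the dependent variable. All figures are invented.
from statistics import mean

# Each case is one individual: (sex, hourly_wage)
cases = [
    ("female", 14.0), ("female", 15.5), ("female", 13.5),
    ("male",   16.0), ("male",   17.5), ("male",   15.0),
]

def mean_wage_by(cases, group):
    """Mean of the dependent variable for one value of the independent variable."""
    return mean(wage for sex, wage in cases if sex == group)

female_mean = mean_wage_by(cases, "female")
male_mean = mean_wage_by(cases, "male")
gap = male_mean - female_mean
print(f"female mean: {female_mean:.2f}, male mean: {male_mean:.2f}, gap: {gap:.2f}")
```

Comparing group means like this is only descriptive; a real study would need a design (see Cap 3) that supports the causal claim that the independent variable produces the difference.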

Quality and criteria in social research:

Reliability, replication, validity.

1.     Reliability: Reliability refers to the consistency of a study's results when repeated under the same conditions. It addresses whether the measures used for concepts, such as poverty or relationship quality, produce stable and consistent outcomes. In quantitative research, reliability is crucial, as it ensures that measures do not fluctuate unpredictably. For instance, if an IQ test yields widely varying scores for the same person across different administrations, the test would be deemed unreliable.
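The IQ-test example above can be sketched numerically. One common way to check consistency is test-retest reliability: administer the same measure twice to the same people and correlate the two sets of scores. The scores below are invented purely for illustration; a correlation near 1 suggests a stable, reliable measure, while widely varying scores would pull it toward 0.

```python
# Hedged sketch (invented scores): test-retest reliability as the
# Pearson correlation between two administrations of the same test.
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired score lists."""
    mx, my = mean(xs), mean(ys)
    # Sample covariance (n - 1 denominator, matching statistics.stdev)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# The same five people tested twice; scores barely move between sittings.
test1 = [100, 110, 95, 120, 105]
test2 = [102, 108, 97, 118, 104]
r = pearson_r(test1, test2)
print(f"test-retest r = {r:.2f}")  # close to 1 indicates high reliability
```

With these consistent scores, r comes out well above 0.9; if the second sitting produced essentially unrelated scores, r would drop toward 0 and the test would be deemed unreliable.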

 

2.     Replication: Replicability refers to the ability of a study to be reproduced or repeated using the same methods and procedures. For a study to be replicable, the original research must clearly document its design, participants, data collection, and analysis. Replication is often done to test if findings are consistent over time or across different groups. Although replication is valued in quantitative research, it is less common in academic research due to the emphasis on originality. Despite this, replicability ensures that research findings are reliable and can be confirmed through repeated studies.

 

3.     Validity: Validity refers to the accuracy and integrity of the conclusions generated from research. It assesses whether the research truly measures or reflects what it claims to and whether the results can be trusted and applied beyond the study.

Different types of validity:

  1. Measurement validity: This applies mainly to quantitative research and concerns whether the tool used truly measures the concept it claims to measure. For example, an IQ test should accurately measure intelligence. If a measure is inconsistent (unreliable), it cannot be valid.

  2. Internal validity: This relates to the causal relationship between variables. It asks whether the conclusion that one variable (independent) causes changes in another variable (dependent) is convincing and credible. Internal validity ensures that we can confidently say that the observed effects are due to the independent variable.

  3. External validity: This refers to the generalizability of the research findings beyond the specific context or participants of the study. High external validity means the results can be applied to broader populations, not just the study sample. It depends on how representative the sample is of the larger population.

  4. Ecological validity: This focuses on whether research findings are applicable to real-life, everyday social settings. If research is conducted in unnatural environments, like laboratories, the findings may have limited ecological validity because they may not reflect real-world behavior.

  5. Inferential validity: This concerns whether the conclusions and inferences drawn from the research are justified and supported by the data. It examines if the research design and interpretation are appropriate for making the claims. For instance, inferring causality from a study with a cross-sectional design is often considered invalid.

Differences in the relevance of criteria for quantitative and qualitative strategy:

Quality criteria such as reliability, measurement validity, internal validity, external validity, and ecological validity are generally more aligned with quantitative research methods, which emphasize structured measurements, causality, and the generalizability of findings. Here’s how these criteria relate to research strategies:

  1. Reliability and Measurement Validity: These are most relevant to quantitative research, where the goal is to use reliable and valid measures for data collection. Quantitative strategies require consistency in tools and methods to ensure the accuracy and replicability of findings.

  2. Internal Validity: This is crucial for establishing causal relationships between variables, which is typically a primary focus of quantitative research strategies. Quantitative research designs, like experiments, are built to maximize internal validity by controlling for confounding variables.

  3. External Validity: Although relevant to both research strategies, external validity is especially important in quantitative research, where the emphasis is on ensuring findings can be generalized to broader populations. Quantitative studies often use large, representative samples to achieve this.

  4. Ecological Validity: This criterion applies to both quantitative and qualitative research, as it addresses how naturally research settings reflect real-life environments. Qualitative research particularly values ecological validity, as it seeks to understand behavior in natural contexts, while quantitative research may also aim to maintain realistic conditions in certain studies.

Relationship:

The relationship between quality criteria and research strategy highlights that quantitative research focuses on structured, generalizable, and causally sound findings, which align well with concerns about reliability, measurement, and both internal and external validity.

Qualitative research, on the other hand, often prioritizes understanding context and experiences, making ecological validity more critical. Ultimately, each research strategy emphasizes different quality criteria based on its objectives and methodological approach.

Further qualitative research criteria: Qualitative research often uses different criteria to assess the quality of studies compared to quantitative research, though there are similarities. The main quality criteria for qualitative research include:

  1. Credibility: This parallels internal validity in quantitative research. It addresses the believability and trustworthiness of the findings, asking whether the research accurately reflects the reality or experiences of the participants.

  2. Transferability: This corresponds to external validity and considers whether the findings can be applied to other contexts or settings. While qualitative research does not usually aim for broad generalizability, it emphasizes detailed descriptions that allow others to determine if findings are applicable elsewhere.

  3. Dependability: Similar to reliability, this criterion examines whether the research findings are consistent and repeatable over time. It involves ensuring that the research process is documented transparently so others can follow the study’s methods.

  4. Confirmability: This is akin to objectivity in quantitative research. It evaluates whether the researcher has maintained a degree of neutrality and whether findings are shaped by the participants' responses rather than researcher bias. Researchers must show that their findings are based on data and not influenced by their personal values.

Similarities between quantitative and qualitative research:

  1. Parallels in concepts: Both approaches strive for credibility in their findings—whether it be internal validity in quantitative research or credibility in qualitative research. Similarly, transferability and external validity both concern the generalizability or applicability of results to different settings.

  2. Importance of rigor: Both research strategies require rigorous documentation and transparency. For qualitative research, dependability is akin to reliability, emphasizing the need for a systematic approach to ensure findings can be trusted.

  3. Ecological validity: Both methods recognize ecological validity, though it is more naturally aligned with qualitative research. Qualitative research often seeks to collect data in real-world, natural settings, which enhances the ecological validity of its findings.

Week 2:

Different research designs:

1.     Experimental designs: classical, laboratory, and quasi-experiments

2.     Cross-sectional/survey

3.     Longitudinal design

4.     Case study design

5.     Comparative design.

 

1)    Experimental design: Experimental design refers to a structured research approach used to establish causal relationships between variables. It typically involves manipulating one or more independent variables to observe the effect on dependent variables. Experimental designs are categorized into classical experiments and quasi-experiments and can occur in field or laboratory settings.

Variants of experimental design:

  1. Classical experiments:

    • These designs have a clear structure, often involving random assignment of participants to different conditions (e.g., experimental and control groups) to ensure internal validity. They aim to establish a strong causal link between variables.

    • Key features include randomization, control over variables, and pre-testing and post-testing.

  2. Quasi-experiments:

    • These have some characteristics of classical experiments but lack full control, such as random assignment. Quasi-experiments are often used when randomization is not feasible or ethical.

    • They are still designed to infer causal relationships but with less certainty compared to classical experiments.

Settings of experimental design:

  1. Field experiments:

    • Conducted in real-life environments, such as schools, workplaces, or as part of policy implementations. Field experiments are common in social research because they provide high ecological validity, reflecting real-world conditions.

  2. Laboratory experiments:

    • Conducted in a controlled, artificial setting (e.g., a laboratory) to minimize external influences. These experiments allow for high control over variables but may lack ecological validity because the setting does not reflect natural environments.

 

1.     Classical experiments: The classical experimental design, also known as a randomized controlled trial (RCT), is a rigorous research method used to establish causal relationships. It is highly regarded in research fields like social psychology, organizational studies, and political science, but is less common in sociology. This design is known for its strong internal validity, which allows researchers to confidently attribute observed effects to the manipulation of the independent variable. Classical experimental designs are characterized by manipulation, random assignment, control groups, and pre/post-testing, making them ideal for establishing causal relationships.

 

Features of classical experimental design:

  1. Manipulation of the independent variable:

    • The independent variable is deliberately manipulated by the researcher to observe its effect on the dependent variable. This controlled intervention is what distinguishes classical experiments from non-experimental research.

  2. Random assignment:

    • Participants are randomly allocated to either an experimental group (treatment group) or a control group. Random assignment ensures that the groups are comparable and that any observed differences in outcomes can be attributed to the experimental manipulation rather than preexisting differences.

  3. Control group and experimental group:

    • The experimental group receives the treatment or intervention, while the control group does not. This comparison allows researchers to isolate the effect of the independent variable.

    • For example, in the Rosenthal and Jacobson study, teachers had higher expectations (treatment) for the "spurters" (experimental group), while other students (control group) did not receive this special expectation.

  4. Pre-Test and Post-Test measurement:

    • The dependent variable is measured before (T1) and after (T2) the experimental intervention. This "before-and-after" design helps establish whether the manipulation caused a significant change in the dependent variable.

  5. High internal validity:

    • Because of the random assignment and controlled conditions, classical experiments have high internal validity. Researchers can be confident that any observed effects are due to the manipulation of the independent variable and not other factors.
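The logic of random assignment can be sketched in a few lines of Python. This is a hypothetical illustration (the function name and participant labels are invented, not from the text): shuffling before splitting gives every participant an equal chance of landing in either group, so pre-existing differences are spread at random.

```python
import random

def random_assignment(participants, seed=None):
    """Randomly split participants into experimental and control groups.

    Hypothetical helper: shuffling before splitting ensures each
    participant has an equal chance of ending up in either group,
    so pre-existing differences are distributed at random.
    """
    rng = random.Random(seed)
    shuffled = participants[:]          # copy so the input list is untouched
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Illustrative use: six participants; T1/T2 measurement would follow
participants = ["p1", "p2", "p3", "p4", "p5", "p6"]
experimental, control = random_assignment(participants, seed=42)
```

After assignment, the experimental group would receive the treatment while the control group would not, with the dependent variable measured in both groups at T1 and T2.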

Challenges of classical experimental design:

  1. Difficulty in manipulation:

    • Many social variables, like gender or social class, cannot be manipulated. This makes it hard to use true experimental designs for certain research questions.

  2. Controlled settings:

    • True experiments often require controlled environments, which can be difficult to achieve in real-world settings, limiting the feasibility of classical experiments in social research.

 

 

Internal validity in classical experiments:

Internal validity refers to the extent to which a study can establish a causal relationship between the independent and dependent variables, free from alternative explanations. In the context of classical experimental design, internal validity is crucial to ensure that the manipulated variable is indeed causing the observed effect. The use of control groups and random assignment minimizes threats to internal validity, so that any observed effects can be attributed to the experimental manipulation. However, even with strong internal validity, researchers must critically assess whether their measures are valid and whether the experimental manipulation worked as intended. Key threats to internal validity include:

1. History

  • Definition: Events or changes in the environment occurring between the pre-test and post-test, other than the manipulation, could affect the results.

  • Example: In the Rosenthal and Jacobson study, an external event like a new school policy aimed at improving academic performance could influence student scores.

  • Solution: The presence of a control group helps control for these events, as both groups are exposed to the same external influences.

2. Testing

  • Definition: The act of taking a pre-test might influence participants' behavior or responses in the post-test, either by making them more experienced with the test or by sensitizing them to the study’s purpose.

  • Solution: The control group also takes the pre-test, ensuring that any testing effects are consistent across groups.

3. Instrumentation

  • Definition: Changes in the way a measurement or test is administered (e.g., slight alterations in the test format) could lead to differences in results.

  • Solution: Using a control group ensures that any changes in testing procedures affect both groups equally, isolating the effect of the experimental manipulation.

4. Mortality/Attrition

  • Definition: The loss of participants over time, which can threaten the validity of the study, especially if dropout rates are high or systematic.

  • Example: In a long-term study, some students may leave the area or transfer to other schools.

  • Solution: Since attrition affects both the experimental and control groups, it does not necessarily threaten the validity of the findings if it occurs evenly.

5. Maturation

  • Definition: Natural changes in participants over time (e.g., growing older or developing new skills) that may affect the dependent variable.

  • Example: Students might improve academically simply because they are getting older and more experienced, not because of the experimental manipulation.

  • Solution: Since maturation affects both groups equally, any observed difference can be attributed to the experimental treatment.

6. Selection

  • Definition: Differences between the experimental and control groups due to how participants were selected or assigned.

  • Solution: Random assignment of participants to groups minimizes this threat by ensuring that any pre-existing differences are distributed randomly.

7. Ambiguity about the direction of causal influence

  • Definition: Uncertainty about whether the independent variable truly affects the dependent variable or whether the causal relationship could be reversed.

  • Solution: In classical experimental designs, the independent variable is manipulated before measuring the dependent variable, ensuring a clear temporal sequence and causal direction.

Ensuring internal validity

  1. Control Group and Random Assignment: These are essential features of classical experimental design. They help eliminate confounding factors and rival explanations, providing a stronger basis for inferring causality.

  2. Measurement validity concerns: Even if a study has high internal validity, researchers must also evaluate whether the measurements accurately reflect the concepts being studied. For example, in the Rosenthal and Jacobson study, questions about the validity of IQ test scores or measures of intellectual curiosity could impact the study’s overall conclusions.

External validity in classical experiments:

External validity is critical for determining whether the findings of an experiment can be extended to other people, places, and times. Threats such as interaction of selection, setting, history, pre-testing, and experimental arrangements highlight the complexities of applying results beyond the specific conditions of the study. Understanding these threats helps researchers design experiments that are more robust and applicable to real-world situations.

External validity refers to the extent to which the findings of an experiment can be generalized beyond the specific conditions and participants involved in the study. It considers whether the results are applicable to other populations, settings, or times. Campbell and Cook identified several threats to external validity that could limit this generalizability:

 

1. Interaction of selection and treatment

  • Definition: This threat questions whether the findings can be generalized to different social and psychological groups. It asks whether the results are applicable to a wide range of people differentiated by factors such as ethnicity, social class, gender, and personality type.

  • Example: In the Rosenthal and Jacobson study, the students were primarily from lower-social-class backgrounds and ethnic minority groups. This specific sample may limit the generalizability of the results to other groups with different characteristics.

2. Interaction of setting and treatment

  • Definition: This threat addresses whether the results are applicable in different settings. It questions if findings from one environment, such as a specific school, can be applied to other schools or broader contexts.

  • Example: The study's findings were influenced by the unique cooperation and conditions of a particular school, raising doubts about whether the same effects of teacher expectancies would occur in other educational or non-educational settings.

3. Interaction of history and treatment

  • Definition: This threat considers whether findings from a study conducted at a specific time can be generalized to other time periods, both in the past and future.

  • Example: The Rosenthal and Jacobson research was conducted over 50 years ago. There is uncertainty about whether the self-fulfilling prophecy effect would be observed in today’s educational settings or whether the timing within the academic year influenced the results.

4. Interaction effects of pre-testing

  • Definition: This threat arises when the pre-testing of participants affects how they respond to the experimental treatment, making the results less generalizable to situations where pre-testing does not occur.

  • Example: In the Rosenthal and Jacobson study, students were pre-tested, which may have sensitized them to the experimental conditions. This raises concerns about whether the findings would apply to groups that have not been pre-tested, as pre-testing is not common in real-world scenarios.

5. Reactive effects of experimental arrangements (reactivity)

  • Definition: This threat refers to the awareness of participants that they are part of an experiment, which could influence their behavior and make the findings less generalizable to natural settings.

  • Example: In this case, Rosenthal and Jacobson’s subjects were likely unaware that they were participating in an experiment, which reduced the reactive effects. However, in many experiments, participants’ awareness could alter their behavior, affecting the generalizability of the results.

Ecological validity in classical experiments:

In classical experimental design, experiments conducted in field settings (e.g., schools, workplaces, or public spaces) are generally considered to have higher ecological validity than those conducted in laboratory settings. This is because field experiments take place in environments where participants behave more naturally. However, several factors can challenge ecological validity:

  • Data collection methods: The use of specific measurement tools, such as surveys or standardized tests, may introduce an artificial element to the study. Even if the setting is natural, the methods themselves could influence participants’ behavior in ways that do not reflect real-world interactions.

  • Awareness of being studied: If participants are aware that they are part of an experiment, this awareness can alter their behavior, reducing ecological validity. However, if the experiment is designed so that participants remain unaware, it helps maintain natural behavior but may raise ethical concerns, particularly when deception is involved.

Replicability in classical experiments:

For a classical experiment to be considered replicable, researchers must thoroughly document their methods, including the selection of participants, the manipulation of variables, and data collection processes. This allows other researchers to replicate the study under similar conditions. However, there are challenges:

  • Ethical considerations: Some experimental designs involve practices like deception or differential treatment of participants, which may need to be ethically reconsidered in replications.

  • Contextual factors: Even when the same procedures are followed, differences in settings, participant demographics, or historical context can lead to variations in results. This raises questions about the robustness and generalizability of the original findings.

Example: A study that successfully outlines its procedures and provides detailed operational definitions can be replicated more easily. However, if subsequent replications yield inconsistent results, it may suggest that contextual variables play a significant role, impacting the external validity of the research.

 

Laboratory experiments: A laboratory experiment is a type of research method in which the study is conducted in a controlled, artificial environment. The primary purpose of a laboratory experiment is to maintain a high level of control over variables, allowing researchers to manipulate the independent variable and observe the resulting effects on the dependent variable.

Advantages:

  1. High level of control: Researchers can manipulate variables and assign participants randomly to different experimental conditions, enhancing the internal validity of the study. This control reduces the risk of confounding variables affecting the results.

  2. Ease of replication: Since the conditions of the experiment are artificial and highly controlled, laboratory experiments are generally easier to replicate compared to field experiments. Replicability is crucial for verifying the reliability of research findings.

Limitations:

  1. External validity: Laboratory experiments often struggle to establish external validity, meaning it is difficult to generalize the findings to real-world settings. The artificial nature of the lab environment may lead to an interaction of setting and treatment, where the results are specific to the controlled environment rather than applicable to everyday life.

  2. Ecological validity: These experiments may lack ecological validity because the behavior observed in a laboratory setting might not reflect how people behave in natural, real-world situations. The study may not capture the complexity of real-life interactions, even though it might still achieve experimental realism (i.e., participants taking the experiment seriously).

  3. Participant selection bias: Often, the subjects used in laboratory experiments are students, who may not represent the general population. This creates an interaction of selection and treatment, where the specific characteristics of the participants (such as their age, education, or willingness to accept incentives) may influence the results in ways that are not generalizable.

  4. Reactive effects: The artificial nature of the experimental arrangements can lead to reactive effects, where participants change their behavior simply because they are aware they are being studied. This can further compromise the external and ecological validity of the findings.

 

 

Quasi-experiments:

Quasi-experiments are research designs that have some features of classical experimental designs but do not meet all the internal validity criteria, particularly the requirement of random assignment of participants to experimental and control groups. Instead, these studies may rely on naturally occurring conditions or practical constraints, making randomization infeasible. Quasi-experiments are valuable when conducting randomized controlled trials is not feasible due to ethical or practical constraints. They provide high ecological validity and are suitable for real-world research settings, especially in policy and social evaluation studies. However, the lack of random assignment introduces challenges to establishing clear causal relationships, making quasi-experiments less robust in terms of internal validity compared to classical experiments.

 

Advantages of Quasi-Experiments:

  1. High ecological validity: Because they often occur in real-world settings or use naturally occurring changes, quasi-experiments are typically more reflective of everyday life, providing findings that are more applicable to real-world scenarios.

  2. Practical and ethical feasibility: Quasi-experiments are useful when random assignment is either impractical or unethical. For instance, in social research, assigning people randomly to different social classes or genders is impossible, so quasi-experiments provide a practical alternative.

  3. Relevance to policy and evaluation research: Quasi-experimental designs are especially valuable in evaluation research for assessing the impact of policies, programs, or social interventions where experimental manipulation is not feasible.

Limitations of quasi-experiments:

  1. Lower internal validity: The lack of random assignment means that quasi-experiments are more prone to confounding variables. Differences between the groups may be due to pre-existing factors rather than the manipulation or intervention, making causal conclusions less robust.

  2. Group non-equivalence: Without randomization, there is a higher risk that the experimental and control groups are not equivalent from the start. This can introduce selection bias and complicate the interpretation of results.

  3. Potential for alternative explanations: Because the groups may differ on factors unrelated to the intervention, it is difficult to rule out rival explanations for observed effects, which can undermine the study’s credibility.

  4. Limited control over variables: Researchers often have less control over the conditions of the study, making it harder to isolate the effect of the intervention or manipulation.

What is evaluation research?

Definition: Evaluation research assesses the impact and effectiveness of social or organizational programs, policies, or interventions. It aims to determine whether these initiatives have met their intended goals.

Key aspects:

  1. Primary question: Has the intervention achieved its objectives?

  2. Study design: Often uses quasi-experimental designs with treatment and control groups, since random assignment is frequently impractical or unethical.

  3. Methodological approaches: Combines experimental principles with qualitative methods to understand the intervention's context, stakeholder perspectives, and a range of outcomes.

Importance: It informs decision-making, improves program effectiveness, and ensures resources are well-utilized.

 

·      Experimental design is crucial in research because it serves as a benchmark for evaluating the quality of quantitative research, particularly in establishing causality. True experiments are valued for their ability to ensure internal validity, reducing doubts about whether the independent variable truly causes the observed changes in the dependent variable. This contrasts with other designs, like cross-sectional studies, which struggle to clearly establish causal relationships.

·      A key takeaway from experimental design is the importance of comparison. Experiments involve comparing an experimental group's results with those of a control group, allowing researchers to understand phenomena more effectively. Even when a traditional control group is absent, as in some studies, comparing different conditions—such as various types of ethnic backgrounds in job recruitment research—offers compelling insights. This emphasis on comparison extends beyond just experimental and quantitative research, highlighting a broader principle of using contrasts to deepen understanding, which is also relevant to comparative research designs.

 

2)    Cross sectional design:

Cross-sectional design, often referred to as survey design, is a research approach that involves collecting data from a sample or population at a single point in time. While it is commonly associated with methods like questionnaires and structured interviews, cross-sectional research can also use other techniques, such as structured observation, content analysis, official statistics, and diaries. The primary focus of cross-sectional design is to analyze and describe patterns or relationships between variables without establishing causality.

A cross-sectional research design involves collecting data from a sample of cases at a single point in time to understand the associations between variables within a population. The name "cross-sectional" refers to this method of taking a snapshot of a cross-section of the relevant group. The design typically focuses on gathering quantitative or quantifiable data on multiple variables to identify patterns or relationships.

Example: To explore whether there is a relationship between age and voting intention, a researcher would collect data from a sample of voters, asking their age and who they intend to vote for. This data collection happens once, providing a snapshot of voting intentions at that specific time. However, this method cannot determine if or how voting intentions change as people age, as it does not track changes over time.

Comparison to longitudinal design: Unlike longitudinal designs, which collect data at multiple points over time to study changes and causal relationships, cross-sectional designs provide a one-time analysis. This makes them efficient for describing the current state of associations but limited in establishing causality or observing how variables evolve.

  1. Use of a sample of cases:

    • The design involves collecting data from a large sample of cases (e.g., individuals, families, organizations, or nation-states) at a single point in time. The large sample size increases the likelihood of observing variation across all variables of interest and allows for finer distinctions between cases.

  2. Simultaneous data collection:

    • Data on multiple variables (Obs1, Obs2, ..., Obsn) are collected simultaneously at one point in time (T1). This results in a data matrix where each row represents a case and each column represents a variable, creating a "rectangle" of data.

  3. Quantitative or quantifiable data:

    • The collected data are either quantitative or can be quantified. This standardization allows researchers to systematically measure and analyze variations across cases, using consistent and comparable benchmarks.

  4. Identification of patterns of association:

    • The primary goal is to identify relationships or patterns between variables. However, because data are collected at the same time, there is no time-ordering of variables, which means the design cannot establish clear causal relationships.

  5. Ambiguity in causality:

    • Since there is no manipulation of variables or time sequence, the design cannot definitively determine causal relationships. It only shows that variables are associated, not that one causes the other.

Steps in cross-sectional research design:

  1. Define research objectives and variables:

    • Clearly outline the research question and identify the variables that need to be measured.

  2. Select a large, representative sample:

    • Choose a sample that captures a wide variation across the population to ensure meaningful analysis and to meet sampling requirements.

  3. Collect data simultaneously:

    • Gather data on all variables from each case at a single point in time. Use structured and standardized methods, such as surveys or structured observations, to ensure consistency.

  4. Organize data into a matrix format:

    • Structure the data so that each case is a row and each variable is a column, creating a comprehensive and organized data matrix.

  5. Analyze patterns of association:

    • Examine the data to identify relationships between variables. Use statistical analysis to explore these patterns, but remain cautious about interpreting these relationships as causal due to the simultaneous data collection.

  6. Interpret findings with caution:

    • Discuss the observed associations, acknowledging the limitations related to causal inference and the lack of time-ordering of variables.

Reliability, replicability, and validity in cross-sectional design:

Cross-sectional research is generally strong in replicability and external validity (when random sampling is used), but it tends to be weak in internal and ecological validity. The design is good for identifying associations but lacks the robustness needed for making clear causal inferences and may not always reflect real-world settings.

1.      Reliability:

  • Definition in context: Reliability refers to the consistency of the measurements used in research. In cross-sectional design, reliability is tied to the quality of the measures rather than the design itself.

  • Assessment: The reliability of cross-sectional research depends on the quality of the tools and methods used to collect data, such as structured interviews or surveys. Issues related to reliability are discussed in detail elsewhere (Chapter 7).

2.      Replicability:

  • Definition in context: Replicability refers to the ability to reproduce the study using the same procedures and achieve similar results.

  • Assessment: Cross-sectional research generally scores well on replicability, as long as the researcher clearly describes the processes used for selecting respondents, designing and administering measures, and analyzing data. Most quantitative studies based on this design provide detailed descriptions of these elements, making replication feasible.

3.       Validity:

  • Internal Validity:

    • Definition: The extent to which a study can establish a causal relationship between variables.

    • Assessment: Cross-sectional research typically has weak internal validity because it identifies associations rather than clear causal links. While it is possible to infer causality from cross-sectional data, these inferences are not as robust as those from experimental designs.

  • External Validity:

    • Definition: The extent to which the findings can be generalized to other contexts or populations.

    • Assessment: Cross-sectional research scores high on external validity if the sample is randomly selected, making the findings generalizable. However, if non-random sampling methods are used, the external validity may be compromised.

  • Ecological Validity:

    • Definition: The extent to which research findings reflect real-world settings.

    • Assessment: Cross-sectional research often has weak ecological validity. This is because the use of research instruments, like self-completion questionnaires and structured observations, can disrupt the natural environment of the participants, making the findings less applicable to everyday situations.

Non-manipulable variables: Non-manipulable variables are characteristics that cannot be altered or controlled, such as ethnicity, age, and social background. Because these variables are fixed and cannot be manipulated for experiments, quantitative researchers often use cross-sectional designs to study them.

Key points:

  1. Use in Cross-Sectional research: Due to the inability to manipulate these variables, researchers focus on examining relationships rather than establishing clear causality.

  2. Temporal priority: Variables like ethnicity and age are considered independent because they occur before other outcomes, making it reasonable to infer causal direction based on their temporal precedence.

  3. Ethical and practical constraints: Even when variables could theoretically be manipulated, ethical and practical issues often prevent this, adding uncertainty to causal interpretations.

Cross-sectional design is commonly associated with quantitative research, where data is collected from multiple cases at a single point in time. However, it is also used in qualitative research, such as studies employing semi-structured interviews with a large group of participants. These qualitative studies often focus on understanding influences and experiences, such as how couples manage household finances, and may be more ecologically valid than using formal instruments like questionnaires. Despite differences in language and emphasis, both qualitative and quantitative cross-sectional designs share the feature of collecting data simultaneously from multiple participants, sometimes using retrospective accounts to explore past influences on current behaviors.

 

 

 

 

3)    Longitudinal designs:

Longitudinal design is a research approach in which data is collected from the same participants multiple times over an extended period. The term "longitudinal" signifies that data collection occurs across different time points, rather than at a single moment. Although this design is time-consuming and costly, it provides valuable insights into the time order of variables, making it possible to establish stronger causal inferences compared to cross-sectional research. Typically, longitudinal designs are used as an extension of survey research, utilizing self-completion questionnaires or structured interviews. In terms of reliability, replication, and validity, longitudinal designs share similarities with cross-sectional research but offer the added advantage of observing how relationships between variables change over time.

 

Two types of longitudinal designs:

  1. Panel Studies:

    • In panel studies, a single sample (often randomly selected at a national level) is surveyed multiple times over a period. The focus of data collection can vary, including individuals, households, organizations, or schools. The same cases are tracked across different time points, allowing researchers to observe changes over time within the same group. An example of a panel study is the Understanding Society survey.

  2. Cohort Studies:

    • Cohort studies focus on a specific group of people (the cohort) who share a common characteristic or experience. This characteristic could be being born in the same week or undergoing a similar event, like unemployment or marriage. Data collection occurs over time, but the focus remains on the cohort that shares this defining trait. Examples include the National Child Development Study (NCDS) and the Millennium Cohort Study.

 

Distinction from other designs:

Large-scale surveys like the British Social Attitudes survey are not true longitudinal designs because they use different samples for each wave of data collection. These are better categorized as repeated cross-sectional designs, which can track change but do not establish the direction of causality since the same individuals are not followed over time.

 

Similarities and differences between the two:

Similarities:

  1. Design structure: Both panel and cohort studies have a similar longitudinal structure where data are collected in multiple waves from the same individuals over time. This repeated data collection on the same variables allows researchers to track changes and trends.

  2. Purpose: Both designs aim to explore social change and gain a deeper understanding of causal influences over time. By following the same cases, they can better address issues related to the direction of causal influence than cross-sectional designs.

  3. Causal Inference: Unlike cross-sectional designs, panel and cohort studies allow researchers to identify which variables are potentially independent by recording them at an earlier time point (T1) and observing their effects in later time points (T2 or later). This structure helps clarify which variable came first, thereby reducing ambiguity in causal relationships.

Differences:

  • Panel Studies: Track a general sample of people, households, or organizations over time, often selected randomly at a national level.

  • Cohort Studies: Focus on a specific group (cohort) that shares a common characteristic or experience, such as being born in the same week or undergoing a similar life event.

Summary: Both panel and cohort studies share the benefit of tracking changes over time and clarifying causal relationships better than cross-sectional designs. However, the primary difference lies in their focus: panel studies track a broad sample, while cohort studies focus on a group with shared characteristics.

Reliability, replicability, and validity in longitudinal research designs:

Longitudinal research performs well in terms of internal validity due to its ability to establish time order, but it faces challenges with replicability and external validity due to cost, scale, and participant attrition. Ecological validity may be affected by the data-collection process but can improve as participants become familiar with the research procedures over time.

  1. Reliability and measurement validity:

    • Similar to cross-sectional research, the reliability and measurement validity of longitudinal research depend on the quality of the measures used to assess concepts. These aspects are not inherently tied to the longitudinal design itself but to the instruments and methods used for data collection.

  2. Replicability:

    • While it is theoretically possible to replicate longitudinal research if the study documentation includes detailed information about sampling and the design and content of data-collection instruments, replicating such research is challenging. This difficulty is due to the scale, cost, and time required to conduct a study over multiple waves, making replication impractical.

  3. Internal validity:

    • Longitudinal research generally has good internal validity because it allows researchers to establish a time order of variables, making it easier to infer causality. By tracking variables over time, researchers can better understand which variable influences the other.

  4. External validity:

    • The external validity of longitudinal research is comparable to that of cross-sectional studies. However, it can be compromised by attrition, where participants drop out over time, leading to a sample that may become less representative with each wave. Researchers must address and, where possible, correct for this issue to maintain the study's representativeness.

  5. Ecological validity:

    • Ecological validity may be weakened by the intrusive nature of repeated data collection. However, if participants become accustomed to the data-collection process over time, the repeated cycles might normalize the experience, reducing the disruptive impact and potentially improving ecological validity.

 

 

Problems associated with longitudinal designs:

Longitudinal designs provide valuable insights and help address the ambiguity around the direction of causal inference, but they come with several significant challenges:

  1. Time and cost:

    • Longitudinal studies are time-consuming and expensive, which limits their widespread use in social research.

  2. Sample attrition (Dropout):

    • Attrition is a major issue, as participants may drop out over time due to reasons like death, moving away, or voluntarily withdrawing. This can lead to a non-representative sample because those who leave may differ in crucial ways from those who remain. For example, data from the British Household Panel Survey (BHPS) showed that 70% of the original sample stayed after 12 years, dropping to 40% after 24 years. The non-random nature of dropout can affect the validity of the study's findings.

  3. Lack of guidelines for timing:

    • There are often no clear guidelines about when to collect data in subsequent waves, which can result in inconsistent or poorly timed data collection.

  4. Poor planning and data overload:

    • Many longitudinal studies are criticized for being poorly planned, leading to the collection of excessive amounts of data without a clear research focus or strategy. This can make the analysis inefficient and less impactful.

  5. Panel conditioning effect:

    • Panel conditioning refers to the phenomenon where repeated participation in a study alters participants' behavior or responses over time. Being continually involved in the research may make respondents more aware or reflective, potentially biasing the results.

Summary:

While longitudinal designs offer advantages for understanding causality, they face challenges like high costs, sample attrition, unclear data-collection timelines, poor planning, and the panel conditioning effect. These problems can impact the representativeness and validity of the findings, requiring careful design and management to mitigate their effects.

 

Longitudinal studies are often associated with quantitative research; however, they can also be incorporated into qualitative research, for example when repeated interviews are conducted to assess change.

 

4)    Case study design:

Definition: A case study design involves a detailed and intensive analysis of a single case, which could be a community, school, family, organization, person, or event. The primary focus is on understanding the complexity and unique characteristics of the case in depth.

Key features:

  • Focus on complexity: Case studies explore the intricate and specific nature of the case, emphasizing its particularities and the detailed context in which it exists.

  • Single case analysis: The research typically centers around one case, though the subject can vary widely, including: schools, communities, families, organizations, individuals, events.

Case study design involves an intensive and detailed analysis of a specific case, often using qualitative or mixed methods. The case itself is central to the research, with the goal of uncovering its unique features and complexities, distinguishing it from broader, generalizable research approaches.

· Definition of a "Case":

In case study research, the "case" often refers to a specific location or entity, such as a community, organization, family, or event. The research focuses on an intensive examination of this setting or subject to explore its unique characteristics and complexities.

· Intensive examination and mixed methods:

Case studies emphasize a detailed and in-depth analysis, often using qualitative methods like participant observation and unstructured interviews. However, case studies can also incorporate quantitative methods, making them compatible with mixed methods research. The use of both methods enhances the richness and comprehensiveness of the study.

· Unit of Analysis and focus:

A defining feature of case study design is the unit of analysis. The case itself is the primary object of interest, rather than just a backdrop for data collection. The research aims to uncover the unique features of the case. This contrasts with broader research designs, like cross-sectional studies, which aim to produce generalizable findings (nomothetic approach). Case studies use an idiographic approach, focusing on the specific, contextual aspects of the case.

The nomothetic approach seeks to establish broad, generalizable laws that apply across various contexts by focusing on patterns and regularities, often using quantitative methods like surveys or experiments to collect data from large samples. In contrast, the idiographic approach aims to deeply understand the unique and complex aspects of a specific case or individual, typically using qualitative methods such as case studies or in-depth interviews. While the nomothetic approach emphasizes breadth and generalization, the idiographic approach prioritizes depth, context, and the particularities of a single subject, offering detailed insights rather than universal conclusions. Both approaches serve different research purposes, with one focusing on general principles and the other on rich, contextualized understanding.

Types of cases (rationales for case selection):

  1. Critical case:

    • Definition: Selected because it allows the researcher to test a well-developed theory or hypothesis in a setting that highlights the boundaries of that theory.

    • Purpose: To understand when and why a hypothesis holds or fails.

    • Example: Festinger et al.'s (1956) study of a religious cult was used to explore how people respond to unmet expectations, providing insight into cognitive dissonance.

  2. Extreme or Unique case:

    • Definition: Focuses on an atypical or rare situation that stands out due to its distinctiveness.

    • Purpose: To provide insights that could reveal lessons applicable to more common contexts.

    • Example: Margaret Mead's (1928) study of adolescence in Samoa examined a unique cultural context where youths experienced minimal stress during adolescence.

  3. Representative or Typical case (exemplifying case):

    • Definition: Chosen to represent a common or everyday situation that exemplifies broader trends or conditions.

    • Purpose: To understand typical circumstances or answer research questions in a context that reflects the general population.

    • Example: Lynd and Lynd’s (1929) study of Muncie, Indiana, called "Middletown," aimed to represent ordinary American life.

  4. Revelatory case:

    • Definition: Provides an opportunity to study a phenomenon that has not been previously accessible to scientific investigation.

    • Purpose: To gain new insights into previously unexplored areas.

    • Example: Whyte's (1955) research on the Cornerville community revealed aspects of urban life that had not been studied before.

  5. Longitudinal case:

    • Definition: Involves studying the same case over an extended period.

    • Purpose: To observe changes and developments over time, often to understand how variables interact across different stages.

    • Example: A case might be chosen not only for longitudinal analysis but also because it fits one of the other types, such as being critical or representative.

Key Insights:

  • Combination of elements: A case study may embody multiple types. For example, Margaret Mead’s research on Samoa was both an extreme case and a critical case in the nature vs. nurture debate.

  • Evolving understanding: The true nature of a case may only become evident after detailed research. As Flyvbjerg (2003) illustrates, what starts as a critical case may later be understood as an extreme case, depending on the insights gained during the study.

These types provide various rationales for selecting cases, each serving different research objectives and contributing unique insights.

Case study research often includes a longitudinal element to observe changes over time. Researchers may spend months or years within a community or organization, conduct repeated interviews, or analyze historical records. Additionally, they may return to a case years later to study trends, as seen in the Middletown study, which was revisited multiple times over decades. This integration of a time dimension enriches case studies by providing a deeper understanding of evolving dynamics.

Reliability, replicability, and validity in case study research:

  • Reliability and replicability are less emphasized in case study research, though they can be improved through detailed documentation. However, because cases are often one of a kind, true replication may not be feasible. Thus, case studies are less suited to exact replication than more standardized research designs.

  • Validity is a central focus, but rather than aiming for external validity (generalizability to other settings), case studies emphasize theoretical generalization and the quality of theoretical reasoning. The goal is to provide deep, context-rich insights that contribute to broader theoretical understanding, rather than replicable or widely generalizable findings.

5)    Comparative design:

Definition: Comparative design involves studying two or more contrasting cases using similar research methods to better understand social phenomena. The key idea is that meaningful comparisons can reveal insights about similarities and differences, deepening our understanding of the cases being studied.

Features of Comparative Design:

  1. Applicability to both quantitative and qualitative Research:

    • Comparative design can be used in both quantitative (e.g., analyzing survey data) and qualitative (e.g., conducting interviews) research contexts.

  2. Cross-cultural and cross-national research:

    • A common application is in cross-cultural research, where issues or phenomena are examined across different countries to understand how they manifest in varying socio-cultural settings. This involves using consistent research instruments and methods for collecting or analyzing data in different national contexts.

  3. Aims of comparative research:

    • The goal may be to explain similarities and differences, generalize findings, or gain deeper awareness of social realities in different settings. For instance, researchers might use existing data sets like the European Social Survey or the World Values Survey to facilitate these comparisons.

  4. Challenges in comparative research:

    • Funding and Management: Gaining sufficient resources for large-scale comparative studies can be challenging.

    • Data Comparability: When using secondary data, it is crucial to ensure that the data categories and methods are comparable.

    • Sample Equivalence: Ensuring that the samples used in different cases are equivalent is essential for meaningful comparisons.

    • Language and Translation Issues: Translating research instruments can undermine comparability, even when done competently, due to potential insensitivity to cultural contexts.

  5. Benefits of comparative research:

    • Cultural Awareness: It helps to understand that social science findings can be culturally specific and may challenge assumptions. For example, research comparing Norway and Britain revealed unexpected similarities in the work-life balance struggles of bank managers, despite differing family-friendly policies.

    • Understanding Contextual Factors: Comparative research can identify how country-level factors influence individual behaviors, such as studies in criminology examining the impact of national security frameworks on identity theft rates.

  6. Application beyond nations:

    • Comparative research is not limited to cross-national comparisons. It can be applied to various contrasting situations within the same country, such as studying different labor markets or contrasting economic environments to understand their impact on social and economic experiences.

The comparative design is essentially two or more cross-sectional studies carried out at more or less the same point in time.

 

Multiple case study:

Definition: A multiple-case study, or multi-case study, involves the detailed analysis of more than one case, using the comparative design to better understand a phenomenon. This approach is widely used in qualitative research but can also have applications in quantitative research.

Key features and purpose:

  1. Theory building: One of the primary benefits of a multiple-case study is that it enhances theory development. By examining and comparing multiple cases, researchers can better identify the conditions under which a theory applies or fails (Eisenhardt 1989; Yin 2017). This comparison can also lead to the emergence of new concepts, enriching theoretical insights.

  2. Causality and critical realism: Multiple-case studies are especially valuable for understanding generative causality, as emphasized by the critical realist tradition. Unlike the simple cause-and-effect relationships found in experiments (successionist causality), generative causality seeks to understand the mechanisms and social structures that produce observed patterns. This approach allows researchers to explore how these mechanisms operate in various contexts, deepening their understanding of complex social phenomena.

Case selection:

  • Similar cases: Selecting cases with similar characteristics helps to ensure that any differences observed are due to specific factors identified in the research, not inherent disparities between the cases.

  • Contrasting cases: Choosing cases with significant differences allows researchers to explore how variations in context influence outcomes. These comparisons can highlight the importance of contextual factors.

Critiques of Multiple-Case Study Research:

  • Contextual loss: Some researchers, like Dyer and Wilkins (1991), argue that the emphasis on comparing cases can lead to a loss of contextual depth. The need to draw contrasts may divert attention from the rich, specific details of each case, which are crucial in qualitative research.

  • Structured focus: Critics also point out that the comparative approach often requires researchers to have a clear and structured focus from the outset, which may limit the flexibility and openness valued in qualitative research. An open-ended approach can sometimes be more suitable for exploring complex social phenomena in depth.

Hybrid Nature:

  • The comparative design used in multiple-case studies acts as a hybrid between various research traditions. In quantitative research, it extends the cross-sectional design by using structured comparisons, while in qualitative research, it builds on case study design to analyze and reflect on theoretical differences. The design also resembles experimental and quasi-experimental research, which similarly rely on comparisons to draw conclusions.

Example of use:

  • Antonucci’s study (2016): This research examined the impact of financial support on students' experiences of inequality across six cities in Sweden, England, and Italy. Despite varying welfare contexts, similarities were found, explained by the privatization of risk in neoliberal economies. Antonucci’s findings illustrate how multiple-case studies can reveal underlying mechanisms that operate across diverse settings.

While multiple-case studies offer the advantage of enhancing theory building and understanding causality, they face criticism for potentially losing contextual depth and requiring a more structured research focus. Nonetheless, they remain a powerful method for drawing theoretical insights through comparative analysis, providing a nuanced understanding of social phenomena across different contexts.

The integration of quantitative and qualitative research strategies with various research designs (like cross-sectional, longitudinal, and case study designs) highlights how different approaches can be applied in social research. While Table 3.1 in the text provides an overview of these combinations and examples, mixed methods research—which blends quantitative and qualitative approaches—complicates the categorization because it can combine different designs.

Some studies blur the lines between design types, such as longitudinal case studies or ethnographies that track change over time, making strict classification difficult. Additionally, qualitative research rarely uses true experimental designs. Lastly, practical considerations like time and resources are crucial when choosing a research design, emphasizing the need for careful project planning, which will be discussed in the next chapter.

Cap 12: only section 12.4

Definition: Field simulation is a type of observational research where the researcher intervenes or manipulates a natural setting to observe the outcomes. Unlike typical structured observations, in field simulations, participants are unaware that they are being studied, which helps to minimize the problem of reactivity—where people change their behavior because they know they are being observed.

Key Features:

  • Intervention in natural settings: The researcher actively manipulates elements of the natural environment to observe the consequences.

  • Quantitative strategy: Although Salancik (1979) described it as a qualitative method, field simulations often employ a quantitative approach by aiming to measure and quantify the observed outcomes.

  • Limited use in social research: Despite its potential, field simulation has not been widely adopted in social research.

  • Ethical concerns: The use of deception is a common ethical issue. For example, Rosenhan's (1973) study of pseudo-patients in mental hospitals raised concerns about the ethical implications of deceiving participants.

  • Challenges in data collection: Using an observation schedule can be problematic because frequent reference to it may reveal the observer as a researcher. This limits the researcher's ability to engage in extensive coding of behaviors.

 

 

Cap 6: ethics and politics in social research

Those writing about the ethics of social research can be characterized in terms of the stance (viewpoint) they take on the issue. The 5 stances we identify below are universalism, situation ethics, ethical transgression is widespread, ‘anything goes’ (more or less), and deontological versus consequentialist ethics.

 

1.     Universalism: the ethical stance that holds that ethical principles are absolute and should never be violated, regardless of circumstances. From this perspective, breaking ethical rules is morally wrong and can harm the integrity of social research. Key proponents of this view, like Erikson (1967), Dingwall (1980), and Bulmer (1982), argue that adherence to ethical standards is crucial. However, some exceptions are acknowledged, such as retrospective covert observation, where a researcher later documents experiences from social settings they participated in without initially identifying as a researcher. Even strict universalists like Erikson acknowledge that it may be unrealistic for researchers to always disclose their investigative role in every context.

 

2.     Situation ethics: an ethical approach that contrasts with universalism, advocating for ethical decisions to be made on a case-by-case basis according to the specific circumstances. It emphasizes "principled relativism", meaning that ethical rules may be broken if the context justifies it. This perspective is often associated with the idea that "the end justifies the means".

  1. The End Justifies the Means: In some cases, breaking ethical rules is considered necessary to gain valuable insights into certain social phenomena. For instance, covert observation may be required to study secretive or sensitive groups, such as Nigel Fielding’s research on the National Front or Brotsky and Giles’s study of pro-ana websites.

  2. No Choice: Sometimes researchers argue that they have no viable alternative but to use deception or covert methods to conduct meaningful research. This is often because open disclosure would likely prevent access to crucial information.

Application to Social Media Research: Situation ethics is particularly relevant in contexts like social media research, where obtaining informed consent from vast numbers of participants may be impractical or impossible. This approach allows for flexibility and innovation in studying new and evolving areas, such as digital platforms, where strict ethical guidelines may not always be feasible.

3.     Ethical transgression is widespread: This stance suggests that ethical breaches are almost unavoidable in social research. Even seemingly minor practices, like not disclosing every detail about the research to participants or having differences in what participants know about the study, can be considered ethically problematic. Researchers such as Punch (1994) argue that some level of deception or withholding information is inherent to fieldwork and social life. Gans (1962) supports this view, suggesting that if researchers were entirely transparent, participants might hide their true behaviors or attitudes, making dishonesty necessary for gathering authentic data. In essence, to obtain genuine insights, some ethical compromises may be inevitable.

 

4.     The ‘Anything goes’ stance does not advocate for abandoning all ethical guidelines but instead suggests significant flexibility in ethical decision-making. Proponents of situation ethics acknowledge that some ethical transgressions are inevitable but still emphasize the need for basic ethical standards. Douglas (1976) argues that the deception used in social research is minor compared to the manipulations by institutions like the mass media or the police, and he even outlines tactics for gaining participants’ trust through deception. Few researchers fully support this view. However, Denzin (1968) comes close, proposing that researchers should be free to study anyone, provided their research has a scientific purpose, causes no harm, and does not damage the discipline. This stance prioritizes research benefits over strict ethical rules but is not widely accepted.

 

5.     Deontological vs consequentialist:

 

  1. Deontological Ethics:

    • This approach holds that certain actions are intrinsically right or wrong, regardless of the outcomes. Ethical principles, such as honesty and respect for informed consent, are upheld because they are fundamentally moral. In social research, deontological ethics often dominate, emphasizing that practices like deceiving participants or failing to obtain informed consent are ethically unacceptable in themselves.

  2. Consequentialist Ethics:

    • This perspective evaluates the rightness or wrongness of actions based on their outcomes or consequences. An action is considered ethical if it results in positive consequences. In social research, consequentialist arguments might be used to justify actions like covert observation if the potential benefits, such as valuable insights, outweigh the harm. However, concerns about negative consequences, like damaging the reputation of the research field, also come into play.

Both approaches offer different ways of evaluating ethical dilemmas, with deontological ethics focusing on adherence to moral principles and consequentialist ethics emphasizing the results of actions.

Diener and Crandall (1978) outline four key ethical principles in social research: avoiding harm to participants, ensuring informed consent, respecting privacy, and avoiding deception. These principles often overlap, making ethical decision-making complex. For example, it is challenging to maintain informed consent if deception is used in the research. Despite these difficulties, these four areas serve as a crucial framework for addressing ethical concerns in social research:

  1. Avoiding Harm to Participants and researcher

  2. Ensuring Informed Consent

  3. Respecting Privacy

  4. Avoiding Deception

1)    Avoiding harm to participants:

Harm in research can include physical harm, stress, loss of self-esteem, or inducing harmful actions. Researchers must actively minimize potential harm to participants. High-profile studies like Rosenthal and Jacobson’s, Festinger et al.’s, and Milgram’s illustrate cases where participants may have faced harm, emphasizing the importance of ethical safeguards. Social media research raises unique challenges since public content can be exposed to new audiences, potentially causing harm.

Maintaining confidentiality is crucial to protecting participants. Researchers must anonymize data to prevent identification, but this can be challenging, especially in qualitative research. Techniques like pseudonyms help, but complete anonymity is not always guaranteed. Researchers face ethical dilemmas, such as whether to report observed unethical behavior, which can jeopardize their research or future access. Confidentiality violations can harm not only participants but also future research endeavors by damaging trust. The European Union’s GDPR legislation underscores the importance of these concerns. While it is difficult to foresee all potential harm, efforts to protect participants are essential, and informed consent becomes critical when risks are significant.

Avoiding harm to researcher:

  • Potential Harm to Researchers: Ethics committees may ask researchers to consider risks of physical or emotional harm from exposure to certain fieldwork settings. This includes dangers from engaging with sensitive or illicit topics like violence, drug use, or sexual crimes, which can create safety risks or confidentiality dilemmas.

  • Personal Characteristics Impact: Factors such as gender, race, or background may influence interactions with participants and increase risks, especially when researching sensitive or hostile groups (e.g., racist movements).

  • Lone Working Risks: Conducting research alone carries dangers, even in public places. Institutions may have safety policies like informing colleagues of your location and carrying a mobile phone. Researchers are advised to reduce risks, such as having a friend accompany them during fieldwork.

2)    Ensuring informed consent:

Informed Consent is an ethical principle that ensures participants are given comprehensive information about a research study, including its purpose, procedures, potential risks, and implications. This empowers them to make an informed decision about whether to participate. Informed consent is crucial for ethical research, offering protection to participants and establishing a trust-based relationship. However, practical difficulties can arise in ensuring all participants are fully informed, especially in large-scale or covert research. Despite these challenges, informed consent remains a vital component of research ethics, balancing the need for comprehensive participant understanding with the realities of complex research contexts.

Characteristics of informed consent:

  1. Transparency: Participants are fully informed about the nature of the research.

  2. Voluntariness: Participation must be voluntary, with the right to withdraw at any time.

  3. Documentation: Often involves signing a consent form or, for online surveys, ticking a box as a proxy for a signature.

Advantages of informed consent:

  • Protects participants: By understanding the research and its potential impacts, participants can make decisions that protect their well-being.

  • Builds trust: Transparency fosters trust between researchers and participants, making future research collaborations more likely.

  • Provides legal and ethical safeguards: Documented consent protects both participants and researchers by providing a clear record that participants agreed to take part.

Challenges in implementing informed consent:

  1. Complete information: It can be difficult to provide all necessary details without overwhelming participants or influencing their responses.

  2. Complex settings: In ethnographic or large-scale research, obtaining consent from everyone in the environment may be impractical.

  3. Covert research: Methods like undercover observation violate informed consent, though some guidelines allow it if essential data cannot be obtained otherwise.

  4. Digital consent issues: Online research raises unique challenges, as users may not fully understand the terms they agree to, complicating what constitutes genuine "informed" consent.

 

3)    Privacy:

Privacy refers to the ethical obligation of researchers to protect the personal and sensitive information of their participants. It involves respecting individuals' rights to control what information they share and ensuring their private details are not disclosed without consent.

Summary of key points:

  • Link to informed consent: Privacy is closely connected to informed consent. When participants agree to take part in research with an understanding of what it entails, they partially surrender their privacy for the research's duration. However, even with informed consent, participants retain the right to refuse answering certain questions they feel are too intrusive.

  • Violations through covert methods: Covert research methods are typically seen as privacy violations because participants are unaware they are being studied and cannot refuse to share private information.

  • Challenges with anonymity and confidentiality: Protecting privacy also involves ensuring anonymity and confidentiality. However, researchers must be careful not to promise complete anonymity when it cannot be guaranteed, such as when collecting identifiable data like email addresses. Additionally, platform-specific rules, like those from Twitter, may complicate efforts to anonymize content, as some terms require displaying usernames and preserving tweet text.

Most important aspects:

  1. Duty to protect privacy: Researchers must respect and protect participants' privacy, even when informed consent is given.

  2. Covert methods as violations: Covert observation is ethically problematic because it denies participants control over their private information.

  3. Practical challenges: Ensuring true anonymity and confidentiality can be complex, especially in digital research settings.

 

4)    Avoiding deception:

Deception occurs when researchers misrepresent the true purpose of their study or mislead participants about aspects of the research. This can range from minor misdirection to more significant deception, as seen in Milgram’s famous experiment where participants believed they were delivering real electric shocks.

Summary of Key Points:

  • Widespread use: Deception is not limited to experimental research. It also appears in various social research studies, such as placing fake dating ads or posing as members of specific communities to collect data. Examples include studies by Rosenthal and Jacobson, Festinger et al., Rosenhan, and Brotsky.

  • Ethical concerns: The ethical objections to deception rest on two main arguments: (1) deception is inherently undesirable and can be considered disrespectful to participants; (2) using deception risks damaging the reputation of social research and eroding public trust, which can impact the ability of researchers to secure cooperation and funding for future work.

  • Professional risks: If researchers become known for deceptive practices, the entire field could suffer from a lack of trust and collaboration from society. Erikson and the SRA Guidelines emphasize that infringing on human values for methodological gain can harm the profession’s credibility.

  • Challenges in adherence: Despite the ethical concerns, complete honesty is rarely possible in research. Sometimes, deception is necessary to maintain the integrity of the study, such as in sensitive social settings. Even ethical universalists like Bulmer acknowledge that certain situations may justify minor deception, though the boundary remains unclear.

Most important aspects:

  1. Nature of deception: Misleading participants to ensure natural responses.

  2. Examples: Cases in social research where deception was used, such as pretending to be part of a community.

  3. Ethical dilemma: Balancing the methodological benefits of deception with the potential harm to participants and the reputation of social research.

  4. Impact on trust: The risk of damaging public trust and professional reputation if deception becomes widespread.

Ethical dilemma:

Ethical decision-making in social research is complex and filled with gray areas, where it can be difficult to distinguish between ethical and unethical practices. Here are some key aspects:

  1. Blurred lines in consent and deception: In some research settings, only some members may know the researcher’s true role. Techniques like underestimating interview length or leveraging shared backgrounds to elicit more information are common, yet ethically questionable. In ethnography, loose or unspecified research questions can make it hard to provide participants with detailed information about the study.

  2. Emotional and privacy concerns: Interview questions or focus group discussions may cause discomfort or stress, particularly when participants inadvertently reveal more than intended. The use of social media data raises dilemmas about public versus private information and when consent is necessary.

  3. Practical vs. Ethical tensions: Practical issues, like gaining informed consent, can clash with ethical ideals. Asking participants to sign consent forms may deter them from participating, leading to alternative recommendations, such as verbal consent or checkboxes in online surveys.

  4. Vulnerability and data types: The vulnerability of participants, such as children, requires special ethical considerations. Additionally, the nature of the data—whether primary, secondary, online, or visual—poses unique ethical challenges, making decision-making even more complex.

Overall, ethical guidelines offer general advice but often leave researchers to navigate the more nuanced, marginal areas of ethical decision-making. Balancing ethical responsibilities with practical research constraints remains a significant challenge.

Secondary data: Using secondary data, which has already been collected, can save significant time and resources, especially in quantitative research or when reanalyzing qualitative data. However, ethical concerns still exist.

· Time and resource efficiency: Using secondary data can save significant time and resources compared to collecting primary data.

· Ethical concerns: Even though researchers did not collect the data themselves, ethical issues still apply.

· Permission and security: Data from platforms like the UK Data Archive requires prior permissions and adherence to strict security protocols.

· Data security requirements: Researchers may need to register their use, delete data by a set date, undergo special training, or use encrypted devices.

· Non-Transferable licenses: Data access licenses are usually only valid for the individual researcher, so supervisors need to apply separately if they are to help analyze the data.

Ethical issue for online data:

Ethical concerns in online research are complex due to the diverse range of platforms and types of data, from social media and blogs to chatrooms and email. Different platforms come with varying expectations of privacy and terms of service, making it essential for researchers to understand the accessibility and ethical boundaries before starting their research.

Key considerations include:

  1. Platform policies: Researchers should review site policies to understand if data is considered public or private, which impacts the obligation to seek informed consent.

  2. Large-scale participation: Online interactions often involve many people, making it impractical to seek informed consent from everyone involved.

  3. Guidelines and reflection: The Association of Internet Researchers (AoIR) provides resources for reflecting on ethical challenges, though directive guidance is still scarce.

Halford (2018) outlines 5 unique challenges (or "disruptions") with online data:

  1. Pre-existing data: Online data isn’t specifically created for research, complicating the typical consent process.

  2. Lack of control: Unlike traditional data, researchers can’t control or secure online data, complicating anonymization.

  3. Data fluidity: Online content can change or be deleted, complicating data accuracy and withdrawal rights.

  4. Scale and granularity: The immense volume of data makes individual relationships and traditional ethical assumptions difficult to apply.

  5. Interdisciplinary use: Online data is used across disciplines with differing ethical standards, complicating how data should be treated.

Researchers must decide whether to avoid online data due to these complexities or embrace the challenges and innovate ethical research practices. Woodfield and Iphofen (2018) suggest that these challenges provide opportunities to advance social science methodologies.

Ethical issue for visual data:

Visual data, such as photos used in research, presents unique ethical challenges. Key points include:

  1. Informed consent: It's ideal to obtain consent from everyone featured in photos. However, this isn't always feasible, especially for people in the background or those who move away before consent can be requested. If obtaining consent is not possible, researchers may consider techniques like pixelating faces to maintain anonymity.

  2. Photo-elicitation: In research methods like photo-elicitation, participants take their own photos and discuss them. A challenge here is that participants may need to secure consent from those in their photos, raising ethical concerns about the use and implications of these images. Participants sometimes restrict photo use or withhold images due to these uncertainties.

  3. Social Media visual data: While social media offers abundant and easily accessible visual content, ethical challenges remain, such as interpreting meaning and ensuring the appropriate use of images. Researchers are advised to exercise caution and be mindful of ethical considerations similar to those discussed for online data.

Politics, involving status, power dynamics, and the use of power, plays a significant role throughout the research process and intersects with ethical concerns. Key points include:

  1. Values and Bias: Research is never conducted in a moral or value-neutral environment. Researchers' presuppositions influence their projects, and even quantitative researchers now recognize that complete objectivity is unattainable. The concept of "conscious partiality" (Mies, 1993) suggests that researchers should acknowledge any biases and strive to correct them, intentionally taking sides while being self-aware.

  2. Political Influences: Politics can affect various stages of research, including:

    • Taking sides: Researchers may align with certain groups or causes, consciously or unconsciously.

    • Funding: Who funds research can influence its direction and outcomes.

    • Access: Gaining entry to research settings may involve navigating power structures.

    • Collaboration: Working within a research setting or team may introduce political dynamics.

    • Publication: Political factors can shape how and when research findings are published.

    • Expertise claims: Researchers may face scrutiny over their methods and the authority they claim in their field.

The integration of politics into research emphasizes the importance of self-awareness and transparency in addressing biases and navigating power relations.

Taking sides in research:

Researchers’ values and political views often influence their work, leading them to take sides, either involuntarily (through developing empathy for their subjects) or deliberately, to promote awareness or social change. This investment is common because researchers typically choose topics they care about. The idea of taking sides is prevalent in sociology and has been debated in contemporary research. Feminist research has introduced the concept of positionality, emphasizing transparency and ethics by recognizing the power dynamics in research. Additionally, the insider/outsider perspective has become an important topic in methodological discussions, addressing the researcher’s relationship to the field they are studying.

 

the positionality debate:

The positionality debate originates from feminist research, which emerged in the 1970s and 1980s to challenge androcentric biases in science. Initially, feminist research focused on including women in studies and asking questions that reflected women’s experiences. Over time, it evolved to develop new knowledge frameworks, introducing standpoint theories that highlighted power relations rooted in gender, patriarchy, and capitalism. However, critiques of these theories argued they overlooked other oppressive systems like racism and colonialism, leading to postcolonial feminism and intersectionality theory, which emphasize that multiple social categories intersect to shape experiences. Positionality acknowledges that both researchers and participants occupy specific positions (e.g., gender, ethnicity, class) and that research cannot be value-neutral. It stresses that knowledge is subjective, relational, and shaped by power dynamics. Consequently, feminist research has fostered creative and participatory methods, such as Vacchelli's use of collage, to authentically represent diverse experiences.

 

Insider vs outsider:

The concepts of insider and outsider are crucial to ethical research discussions. An insider is a researcher who shares cultural, ethnic, linguistic, or professional ties with the group being studied, allowing for greater participant trust and openness. However, this can lead to assumed understandings that may not be thoroughly examined. An outsider lacks these connections and may be seen as more objective, particularly in quantitative research, though they are still influenced by personal biases.

By the 1990s, scholars viewed insider/outsider status as a fluid continuum, recognizing that researchers could be insiders in some respects and outsiders in others. This dynamic positioning is constantly negotiated in the research setting. An applied example is Ryan et al.'s (2011) study on Muslim communities, where the use of peer researchers from within the community helped navigate trust and access but also highlighted the risk of assuming that insider status guarantees research success. The study emphasized the importance of understanding the complexity and diversity within communities.

Overall, insider/outsider dynamics affect trust, access, and objectivity in research, requiring careful consideration of the researcher’s relationship with the study group and how identity and positionality influence research outcomes.

1. Funding of research

  • Research funding often involves political influences, as organizations fund studies that align with their interests and policies.

  • The need for funding can impact the focus and methods of research, and funding bodies may influence how findings are presented or published.

  • Government and corporate funders may impose restrictions on researchers, requiring approval of publications and shaping research to serve their agendas.

2. Gaining access

  • Accessing research settings is a political negotiation mediated by gatekeepers who assess the risks and benefits of participation for the organization.

  • Gatekeepers can influence research by setting limits on questions, participants, and findings.

  • Researchers may face obstacles even after gaining initial access, needing to navigate internal politics and win the trust of participants.

3. Working with and within a research setting

  • Gaining initial access does not guarantee smooth research; ongoing negotiation with various gatekeepers is often needed.

  • Researchers must manage suspicion and potential manipulation by participants or groups who may have their own agendas.

  • Trust-building is crucial, and researchers must be sensitive to organizational dynamics and potential internal conflicts.

4. Working in a team

  • Team-based research projects can be affected by internal politics, with members having different goals and perceptions of their roles.

  • Even academic supervision may be influenced by institutional evaluation metrics, affecting the direction and expectations of research projects.

5. Publishing findings

  • There may be pressure from funding organizations to control or review research outcomes, as shown in cases like Coca-Cola’s agreements with researchers.

  • Early termination clauses and review rights may lead to biased reporting or restricted publications if findings are unfavorable to funders.

  • Researchers must navigate these pressures while striving to maintain academic integrity.

6. Method and expertise

  • The politics of research methods involve competing claims over methodological expertise, with sociology historically positioning itself as scientifically rigorous.

  • The rise of new data sources (e.g., social media) and the use of sociological methods by non-academics challenge traditional research authority.

  • Social researchers must innovate and adapt, embracing new tools and approaches to maintain relevance and rigor in understanding social phenomena.

 

 

Cap 5: only section 5.6

Plagiarism is the act of taking and using someone else's work, ideas, or words without proper acknowledgment, presenting them as one’s own. This includes copying text from sources like books, websites, or essays, and even self-plagiarism, where someone reuses their own previous work as if it were new.

  1. Importance of Avoiding Plagiarism:

    • Academic integrity: Plagiarism is considered morally wrong and a form of cheating. It undermines the originality and integrity valued in academic work.

    • Detection and penalties: Tutors and plagiarism detection software (like Turnitin) can easily identify copied content. Consequences include failing assignments or being disqualified from assessments, which can have severe academic repercussions.

    • Credibility: Relying heavily on unattributed quotes or ideas can damage a student's credibility and make it hard to identify their own contributions to the subject.

  2. How to Avoid Plagiarism:

    • Use Proper citations: Always quote and cite sources correctly, using quotation marks or indentations for large excerpts and acknowledging the original author.

    • Paraphrase correctly: When using others' ideas, rewrite them in your own words and still credit the source.

    • Acknowledge ideas: Clearly state when you’re borrowing ideas, even if not directly quoting. Make sure to understand your institution's specific guidelines on plagiarism.

    • Stay informed: Read and understand your university’s plagiarism policies, as guidelines and definitions can evolve with new research methods and technology.

By understanding and following these practices, you can maintain the originality and integrity of your academic work while avoiding serious consequences.

 

 

Cap 13: quantitative content analysis

Content analysis is a valuable tool for examining patterns and trends in media coverage of social research topics. It allows researchers to analyze how and when media interest develops, which outlets are most engaged, and how perspectives shift over time. Content analysis can be particularly beneficial when direct access to key figures, like political leaders, is not possible. The method can be both quantitative and qualitative. Quantitative content analysis, which involves counting specific elements (like the frequency of particular words or themes), is versatile and applicable to various types of media, both print and digital, and is highlighted in this chapter. Although it focuses on analyzing existing texts rather than generating new data, content analysis is often regarded as a research method due to its systematic approach.

Definition:

Content Analysis is a method used to analyze documents and texts systematically and quantifiably. It aims to categorize content based on predetermined categories in a manner that is both systematic and replicable.

  • Quantitative Content Analysis is an approach that seeks to quantify content through these predetermined categories. It is typically deductive, meaning it is associated with testing theories or hypotheses. The emphasis is on systematic classification and the use of predetermined categories to analyze data.

  • Qualitative Content Analysis, however, often takes an inductive approach. This approach relies on inductive reasoning, where themes naturally emerge from the data through repeated examination and comparison. Context plays a crucial role in qualitative content analysis, where understanding the significance of the context surrounding the analyzed item and the derived categories is essential.

 

Another definition by Holsti:

·       Content Analysis is defined as a research technique for making inferences by objectively and systematically identifying specified characteristics of messages. This method highlights 2 critical qualities, particularly in quantitative content analysis: objectivity and systematicity. These qualities ensure that rules for categorizing raw materials, like newspaper articles, are clearly established and consistently applied to minimize bias. The method involves creating transparent rules for classification, making the analysis replicable and reliable.

·       Krippendorff emphasizes the importance of replicability to enhance reliability, stating that content analysis should be explicit in its procedures so that others can replicate or build on the results. While the rules may reflect the subjective interests of the researcher, the emphasis is on applying these rules without introducing personal bias.

·       The method also involves quantitative description, meaning it systematically quantifies raw material into specified categories based on neutral rules. This allows for analyses such as tracking media attention over time or comparing coverage between different media outlets. However, content analysis can also explore manifest content (the apparent content) and, in some cases, latent content (underlying meanings), although there is debate about whether latent content can be effectively quantified.

·       Content analysis is versatile, applying to various forms of unstructured data, from media coverage and song lyrics to social media posts and political speeches. It traditionally focused on printed texts but has expanded to include digital and social media content, reflecting its growing application across diverse information forms.

 

 

 

Main elements of conducting content analysis

  1. Determining research questions

  2. Selecting a sample

  3. Deciding what to count

  4. Devising a coding scheme

1)    Determining research questions:

Determining the focus in content analysis involves formulating precise research questions that direct the data selection and coding process. These questions typically address who, what, where, how much, and why content is covered, while also examining what is omitted. Additionally, analyzing changes in coverage over time can be crucial for understanding trends.

Key aspects:

  1. Importance of precise research questions:

    • Essential to specify clear research questions before starting the analysis.

    • Research questions guide the data selection and inform the coding schedule.

    • Vague or undefined questions can lead to analyzing inappropriate data or missing critical dimensions of interest.

  2. Common research questions in content analysis:

    • Who: Identifying who gets included in the content.

    • What: Determining what content gets covered.

    • Where: Understanding the location or context of the issue within the content.

    • Location: Identifying where within the material the coverage appears.

    • How Much: Measuring the amount or frequency of the content included.

    • Why: Exploring the reasons behind the inclusion of specific content.

  3. Consideration of omissions:

    • Content analysis should also focus on what is omitted or left out.

    • Identifying gaps in coverage is crucial for understanding the emphasis and priorities within the analyzed area.

    • Example: If government documents prioritize "costs" and "efficiencies" but omit "quality" or "service user satisfaction," these omissions are significant.

  4. Tracking changes over time:

    • Analyzing how the coverage of an issue evolves over time can be a central research question.

    • Understanding temporal trends can reveal shifts in importance or emphasis related to the topic.
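Tracking coverage over time ultimately comes down to counting coded items per period. A minimal sketch of how that tally might be computed, assuming each coded item carries an ISO date string (the `items` data here are invented for illustration):

```python
from collections import Counter

def coverage_by_year(items):
    """Count how many coded items fall in each year to reveal temporal trends."""
    # The first four characters of an ISO date string ("YYYY-MM-DD") are the year.
    return Counter(item["date"][:4] for item in items)

# Hypothetical coded items, each with an ISO date and a coded theme
items = [
    {"date": "2019-03-01", "theme": "costs"},
    {"date": "2019-11-12", "theme": "efficiency"},
    {"date": "2020-06-30", "theme": "costs"},
]
counts = coverage_by_year(items)  # two items in 2019, one in 2020
```

The same grouping idea extends to months or weeks by slicing more of the date string, or to any other coded variable (outlet, theme) by changing the key expression.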

 

2)    Selecting a sample:

Selecting a sample for content analysis involves choosing the media and time periods to analyze. It requires considering representativeness and the relevance of the chosen data to the research focus, applying established sampling principles to ensure reliable outcomes.

a.     Sampling media:

Sampling media for content analysis involves defining the research problem, selecting the category and type of media, and refining the sample to focus on specific forms or content. Decisions about including different types of newspapers or whether to analyze all or selected content are critical to tailoring the analysis to the research objectives.

1.      Define the research problem:

  • Frame your research as analyzing "the representation of X in Y."

  • X: The subject of analysis (e.g., crime, trade unions, food scares).

  • Y: The form of media (e.g., newspapers, TV, blogs, podcasts, tweets).

2.     Decide on the category of material (Y):

  • Identify the specific form of media you want to analyze:

    • Newspapers, TV, radio, podcasts, songs, tweets, blogs, speeches, etc.

3.     Specify the type of media within your chosen category:

  • If Analyzing newspapers:

    • Tabloids vs. Broadsheets: Decide if your sample will include one or both types.

    • Weekday vs. Weekend Papers: Consider including either or both.

    • National vs. Local Papers: Define the geographical scope.

    • Paid-for vs. Free Newspapers: Decide whether to include both or focus on one.

4.       Refine your sample further:

  • Determine the specific content to analyze within the media:

    • All news items or a specific subset (e.g., only feature articles, letters to the editor, or news articles).

5.      Select media forms for analysis:

  • Researchers often select one or two types of mass media but may choose to sample from multiple forms when necessary.

  • Example: A study by Stroobant et al. (2018) analyzed health news across various media forms (newspapers, magazines, radio, television, and online news) over a one-month period.

 

b.    Sampling dates:

Sampling dates involves selecting time frames relevant to the research topic, whether tied to specific events or ongoing phenomena. Researchers must ensure representativeness and avoid bias through systematic sampling, carefully deciding on start and end dates or applying random sampling techniques to maintain balance.

1.     Identify key dates or time periods:

o   Sometimes dictated by the occurrence of a specific event or phenomenon.

o   Example: Bligh et al. (2004) analyzed speeches by President George W. Bush around 9/11, starting from the attacks (11 September 2001) and ending six months later (11 March 2002).

2.     Decide start and end dates:

o   When studying events with predetermined dates (e.g., elections or referendums), select a start date and include the full relevant period.

o   Example: Gunn and Naper (2016) analyzed tweets from 1 January 2012 to 6 November 2012, covering the entire US presidential election campaign.

3.     Sampling for ongoing phenomena:

o   If the research focuses on a continuous or general phenomenon, researchers have more flexibility in choosing dates.

o   Example: Döring et al. (2016) used random sampling to select selfies from Instagram in April 2014, ensuring diversity and representativeness.

4.     Conducting analysis in real-time vs. retrospective analysis:

o   Real-Time analysis: Decide when to stop collecting data as events unfold.

o   Retrospective analysis: Choose past time periods relevant to the research question.

5.     Avoid overrepresentation:

o   Ensure that certain objects of analysis are not overrepresented in your sample.

o   Use random or systematic sampling techniques to create a balanced sample.

o   Example: Beullens and Schepers (2013) coded the last 20 status updates from each Facebook profile to avoid overrepresenting active users.
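A per-profile cap like the "last 20 status updates" rule mentioned above can be sketched in a few lines (a minimal illustration only; the `profiles` data and the cap value are assumptions for the example, not Beullens and Schepers' actual procedure):

```python
def sample_last_updates(profiles, cap=20):
    """Keep at most the `cap` most recent status updates per profile,
    so that highly active users do not dominate the sample."""
    sample = {}
    for user, updates in profiles.items():
        # Updates are assumed ordered oldest -> newest; take the last `cap`
        sample[user] = updates[-cap:]
    return sample

# Hypothetical data: one very active user and one occasional user
profiles = {
    "user_a": [f"post {i}" for i in range(100)],  # very active
    "user_b": ["post 0", "post 1"],               # occasional
}
sampled = sample_last_updates(profiles, cap=20)
```

Capping each unit at a fixed number of items keeps the sample balanced across profiles rather than proportional to activity, which is exactly the overrepresentation problem the technique addresses.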

 

3)    Deciding what to count:

The decision on what to count in content analysis is driven by the research questions and involves selecting appropriate units of analysis, such as actors, words, themes, or dispositions, to ensure meaningful and relevant insights.

a.     Significant actors are the main figures highlighted in any news item and are crucial to code, especially in mass-media news reporting.

Types of details to record:

  1. Producer of the item:

    • Identify who created the news content (e.g., general news reporter, specialist reporter).

  2. Main focus of the item:

    • Determine who or what the item centers on (e.g., a politician, expert, government spokesperson, or organizational representative).

  3. Alternative voices:

    • Note who provides contrasting or additional perspectives (e.g., a politician, expert, spokesperson, organizational representative, or an everyday person).

  4. Context of the item:

    • Specify the circumstances in which the news was produced (e.g., an interview, report release, or event like a crisis or ministerial visit).

Purpose:

  • Recording these details helps researchers map out key figures involved in news reporting, providing insights into how information is produced and disseminated to the public.

 

b.    Words: Words in content analysis can uncover significant insights, such as media biases and underlying messages. Effective analysis involves careful selection of key terms, contextual understanding, and leveraging CACA tools for efficiency and accuracy, despite potential limitations in handling nuanced content.

 

Why words are important:

  • Revealing biases and trends: The choice and frequency of certain words can indicate biases or tendencies, such as sensationalizing events or emphasizing uncertainty.

  • Example: Bailey et al. (2014) studied "epistemic markers" (words implying uncertainty) in climate change reporting, revealing differences in media portrayal between US and Spanish newspapers.

How to analyze words:

  1. Identify key words or phrases:

    • Determine words that align with your research question, such as:

      • Activities implying uncertainty (e.g., "predicting," "estimating")

      • Quantitative uncertainty indicators (e.g., "probability," "likelihood")

      • Terms that challenge concepts or ideas (e.g., "challenge," "debate")

      • References to opponents (e.g., "deniers")

      • Modifiers that shape perception (e.g., "controversial")

  2. Consider context:

    • Coding should account for the context in which words appear, ensuring accurate categorization.

  3. Using computer-assisted content analysis (CACA):

    • Advantages:

      • Reduces human bias and cognitive errors.

      • Enhances reliability and consistency.

      • Generates Key-Word-In-Context (KWIC) output for better interpretation.

      • Quickly processes large volumes of text.

    • Limitations:

      • Struggles with nuanced meanings.

      • Requires comprehensive word lists.

      • Risk of focusing too much on word frequency, neglecting deeper interpretation.

  4. Tools and software:

    • Wordsmith, DICTION, SentiStrength: Specialized programs that facilitate word counting and sentiment analysis.

    • Automated sentiment analysis: Assesses the emotional tone of texts, categorizing content as positive, negative, or neutral.

    • CAQDAS software: Provides basic functions for word counting and KWIC analysis, with free trials often available.
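The Key-Word-In-Context (KWIC) output mentioned above can be illustrated with a minimal sketch. This is not taken from any of the named programs; the search terms and example sentence are hypothetical, echoing the uncertainty markers listed earlier:

```python
import re

def kwic(text, keyword_pattern, window=4):
    """Return Key-Word-In-Context lines: each word matching the pattern,
    shown with `window` words of context on either side."""
    words = text.split()
    hits = []
    for i, w in enumerate(words):
        if re.fullmatch(keyword_pattern, w, flags=re.IGNORECASE):
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            hits.append(f"{left} [{w}] {right}")
    return hits

# Hypothetical article snippet containing epistemic markers
article = ("Scientists are predicting that sea levels will rise, "
           "though sceptics continue to debate the probability of rapid change.")
for line in kwic(article, "predicting|probability|debate"):
    print(line)
```

Seeing each keyword inside its surrounding words is what lets a coder judge context before assigning a category, rather than relying on raw frequency alone.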

 

c.     Subjects and themes: Coding subjects and themes in content analysis involves categorizing phenomena in a way that aligns with research objectives. While some classifications are simple, thematic coding often requires deeper interpretation to capture both explicit and hidden meanings.

 

Definition & purpose:

  • Coding text in terms of subjects and themes involves categorizing the main phenomena of interest.

  • Researchers aim to classify content into meaningful categories to understand and analyze specific issues.

Examples:

  1. Classifying disciplines:

    • Fenton et al. (1998) categorized social science research in British media into seven disciplines (e.g., sociology, economics, psychology).

  2. Gender categorization:

    • Studies like Sink and Mastro (2017) and Döring et al. (2016) analyzed gender representation in media, such as TV depictions and Instagram selfies.

  3. Thematic coding:

    • Buckton et al. (2018) analyzed newspaper articles on sugar taxation by identifying and coding themes like obesity and diabetes connections.

  4. IMF loan conditions:

    • Kentikelenis et al. (2016) examined IMF loan documents to extract data on conditions, noting trends related to economic policies since the global financial crisis.

Key concepts:

  • Straightforward vs. thematic coding:

    • Straightforward categorization: Simple classifications like gender or discipline.

    • Thematic coding: Requires a more interpretative approach, exploring both manifest content (explicit) and latent content (underlying meaning).

  • Interpretative approach: Involves probing beneath the surface to understand deeper meanings and patterns.

 

d.    Dispositions:

Analyzing dispositions in content analysis involves assessing the tone and stance of texts, either explicitly or through inferred judgement. This analysis can also extend to coding ideologies and beliefs, such as gender stereotypes, to uncover patterns and persisting biases in media representation.

 

  • Dispositions refer to the value positions or stances (positive, negative, or neutral) expressed in texts. Analyzing dispositions adds an interpretative layer to content analysis.

Key aspects:

  1. Stance identification:

    • Researchers determine whether the tone of media coverage is supportive, hostile, or neutral on a topic.

    • Example: Buckton et al. (2018) analyzed whether news items were positive or negative about sugar taxation.

    • Bailey et al. (2014) categorized the tone of climate change reporting as either negative or neutral.

  2. Judgemental stance:

    • Sometimes researchers infer whether a text expresses a particular judgement if it isn’t explicitly stated.

  3. Coding ideologies and beliefs:

    • Gender stereotyping example: Bobbitt-Zeher (2011) analyzed discrimination narratives to identify gender stereotypes, distinguishing between:

      • Descriptive stereotyping: Assumptions about general traits of women (e.g., women’s traits incompatible with specific jobs).

      • Prescriptive stereotyping: Judging a specific woman as violating gender norms.

    • Findings: Descriptive stereotyping was more common (44%) compared to prescriptive stereotyping (8%).

  4. Beliefs in media analysis:

    • Gender stereotypes on TV: Sink and Mastro (2017) analyzed 89 primetime TV programs, coding 1,254 characters based on perceptions of gender stereotypes, noting that while some stereotypes have decreased, others remain persistent (e.g., dominant men, sexually provocative women).

 

 

4)    Devising a coding scheme:

Creating a coding scheme involves designing a coding schedule and a manual to categorize variables systematically. These tools are crucial for ensuring consistency and reliability in content analysis, even when dealing with complex cases involving multiple subjects.

 

Importance of coding:

  • Coding is a critical stage in content analysis, involving the systematic categorization of data.

Main elements of a Coding Scheme:

  1. Coding schedule: A structured outline of variables to be analyzed.

  2. Coding manual: A detailed guide that explains how each variable should be coded.

Example scenario:

  • Analyzing crime reporting in UK national newspapers focused on court cases involving individual victims.

  • Variables to include:

    1. Nature of the offense

    2. Gender of the perpetrator

    3. Social class of the perpetrator

    4. Age of the perpetrator

    5. Gender of the victim

    6. Social class of the victim

    7. Age of the victim

    8. Depiction of the victim

    9. Position of the news item

Complexity consideration:

  • While the example is simplified with one perpetrator and one victim, real-world content analysis often requires capturing details for multiple perpetrators and victims to provide a comprehensive view.

 

Coding schedule and coding manual:

The coding schedule is a structured form used to document data for each item, while the coding manual provides comprehensive instructions to ensure consistent and systematic coding. Both tools are essential for reliable content analysis, facilitating accurate and replicable results.

1. Coding schedule:

  • Definition: A form used to record all the data related to each item being analyzed.

  • Structure:

    • Each column represents a dimension to be coded (e.g., nature of offense, gender of perpetrator).

    • Blank cells are filled with specific codes corresponding to each category.

  • Process:

    • One coding form is completed per media item.

    • Data is then transferred to a digital file for analysis using software like SPSS, R, or Stata.

2. Coding manual:

  • Purpose: Provides detailed instructions and guidelines for coders to ensure consistent and accurate data entry.

  • Contents:

    • List of dimensions: Specifies all variables to be coded.

    • Categories and codes: Lists all possible categories for each dimension with corresponding numerical codes.

    • Guidance: Explains how to interpret each dimension and any relevant considerations.

  • Example: Includes instructions on how to categorize social class using established classifications and how to record multiple offenses, focusing on the most significant if there are more than two.

Importance:

  • Consistency: Even solo researchers must use a detailed manual to maintain intra-rater reliability (consistency in their own coding over time).

  • Application: The coding scheme and manual are used to code real news items systematically, transferring the data into rows in software for analysis.

 

Key difficulties with devising a coding scheme:

Creating a robust coding scheme requires ensuring discrete, mutually exclusive, and exhaustive categories, providing clear instructions, and defining units of analysis precisely. Piloting and reliability testing are crucial to refine the scheme and ensure consistency across coders and over time.

Key challenges:

  1. Discrete dimensions:

    • Ensure all dimensions are separate with no overlap to avoid confusion during coding.

  2. Mutually exclusive categories:

    • Categories for each dimension must not overlap; otherwise, you may struggle to decide how to code specific items.

  3. Exhaustive categories:

    • Include all possible categories to cover every scenario, ensuring no items are left uncoded.

  4. Clear instructions:

    • Provide detailed and explicit guidelines in the coding manual to minimize coder discretion and ensure consistency.

  5. Unit of analysis clarity:

    • Be specific about what constitutes the unit of analysis (e.g., the entire newspaper article vs. individual offenses reported within it).

    • Maintain a clear distinction between the item analyzed and the topic coded to avoid confusion.

Importance of piloting:

  • Test early versions: Piloting the coding scheme helps identify issues, such as unclear categories or missing codes.

  • Refinement: If one category captures too many items, consider breaking it down for more detailed analysis.

  • Reliability testing: Assess both inter-rater reliability (consistency between coders) and intra-rater reliability (coder consistency over time).
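One standard way to quantify inter-rater reliability is Cohen's kappa, which corrects raw agreement for agreement expected by chance. The text does not prescribe a particular statistic, so treat this as an illustrative sketch with made-up coder data:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' category assignments on the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of items on which the coders agree
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's category frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical: two coders classifying the same 10 news items into categories 1-3
a = [1, 2, 2, 3, 1, 1, 2, 3, 3, 1]
b = [1, 2, 2, 3, 1, 2, 2, 3, 1, 1]
print(round(cohens_kappa(a, b), 2))
```

A kappa near 1 indicates strong agreement beyond chance; values much below about 0.7 would usually prompt revising the coding manual or retraining coders.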

 

Content analysis of online data: These categories represent different types of online content that can be analyzed, though some sources may overlap between them. Understanding these distinctions is crucial for effective content analysis.

  1. Websites

  2. Blogs

  3. Online Forums

  4. Social Media

 

 

1.     Websites:

  • Websites provide a rich source of publicly available data for both qualitative and quantitative content analysis.

  • Examples include content analysis of school websites, newspaper archives, and policy documents.

Challenges:

  1. Assessing quality:

    • Apply criteria (e.g., Scott's criteria) to evaluate why a website exists, its purpose, and potential biases (e.g., commercial motives or search engine optimization).

  2. Finding relevant websites:

    • Using search engines may not yield comprehensive or unbiased samples.

    • Effective searching requires strategic use of keywords and Boolean searches.

  3. Ephemeral nature of websites:

    • Websites frequently appear, disappear, or change, impacting the longevity and relevance of research data.

    • It's crucial to note the exact date of website access when referencing them.

Analysis approaches:

  • Traditional methods like discourse analysis and qualitative content analysis.

  • Specific techniques for online data, such as analyzing the significance of hyperlinks between websites.

Summary: Websites are valuable yet challenging sources for content analysis, requiring careful consideration of their purpose, search strategies, and the transient nature of online content. Multiple analytical approaches can be employed, from traditional methods to those tailored for digital material.

 

2.     Blogs: Blogs are a form of social media often presented as websites, sharing similar advantages and disadvantages, such as being easily edited or removed.

  • Questions to Consider: Purpose and motivation behind the blog content should be evaluated, similar to websites.

  • Example Study: Boepple and Thompson (2014) analyzed 21 "healthy living" blogs, coding appearance ideals and health messages. They found problematic content promoting thinness and disordered eating, emphasizing the influence of appearance-focused messages.

 

3.     Online Forums

  • Fertile Data Source: Forums are valuable for research, particularly on health and social issues.

  • Example Study: Davis et al. (2015) analyzed 1,094 comments on a cyberbullying-related blog post, identifying themes related to bullying reasons and coping strategies. Physical appearance was the most common reason for bullying.

  • Dynamic Content: Like websites and blogs, forum posts can be edited or deleted.

  • Ethical Considerations: Researchers must consider the public nature of data and the potential harm to participants when publishing extracts.

 

Blogs and online forums provide rich but dynamic content for analysis, with studies revealing significant themes related to societal norms and health. However, the transient nature of these sources and ethical concerns about privacy and harm must be carefully managed.

 

4.     Social media:

 

  • Social media platforms like Facebook, Twitter, and Instagram are rich sources of data for both quantitative and qualitative content analysis.

  • Researchers use social media to study various topics, such as predicting elections, crime patterns, gender differences in fear of crime, and public responses to events.

Big data:

  • The enormous amount of data from social media, referred to as "Big Data," is too vast for traditional data processing, requiring specialized techniques.

  • Despite availability, not all data is ethically or legally suitable for research.

Key ethical and practical considerations:

  1. Public or private:

    • Privacy settings are individualized, and some data, even if viewable, may not be considered public. Researchers may need informed consent for such data.

  2. Anonymity:

    • Anonymizing social media data can be difficult, especially with Twitter or image-based content, where individuals can easily be identified.

  3. Exposure:

    • Using user-generated data (e.g., xenophobic or harmful posts) in research can inadvertently harm individuals and create a permanent record of their views, raising ethical concerns.

  4. Replicability:

    • Social media platforms have varying terms of service, complicating data sharing and study replication. Researchers must consider how others might access the same data.

Challenges in managing data:

  • Social media generates large datasets, and researchers must use strategies to reduce and sample data efficiently.

  • Example: Ledford and Anderson (2013) analyzed 655 user posts on Facebook related to a drug recall, demonstrating a manageable approach to content analysis.
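Reducing a Big Data corpus to a manageable coding sample is often done with a simple random draw. This is an illustrative sketch, not a procedure from the text; the corpus size and sample size are hypothetical, and the fixed seed makes the sample reproducible so the study could be replicated:

```python
import random

def draw_coding_sample(posts, k, seed=42):
    """Draw a reproducible simple random sample of k posts for manual coding."""
    rng = random.Random(seed)  # fixed seed so the same sample can be redrawn
    return rng.sample(posts, k)

# Hypothetical corpus of 100,000 post IDs reduced to 500 for hand-coding
all_posts = list(range(100_000))
sample = draw_coding_sample(all_posts, 500)
print(len(sample))
```

Recording the seed alongside the coding scheme addresses the replicability concern raised above, at least for the sampling step, even though platform terms of service may still limit re-collection of the underlying posts.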

 

Social media provides extensive, diverse data but poses significant ethical and practical challenges, including privacy concerns, anonymization difficulties, exposure risks, and replicability issues. Researchers must carefully navigate these considerations while efficiently managing large datasets.

 

Content analysis of visuals:

Overview:

  • Visual content analysis is increasingly relevant for analyzing images, videos, and other visual content, especially with the rise of social media platforms like Instagram and Snapchat.

  • The approach addresses questions about representation in visual data, and principles similar to those used for textual content analysis (e.g., sampling, reliability, validity) apply here.

Key considerations:

  1. Unit of analysis: Clearly define what visual element will be analyzed (e.g., individual images, scenes, or entire videos).

  2. Sampling: Selecting a representative sample can be challenging, especially when multiple images of the same event exist. The sample should capture the variation in visual content.

  3. Context: Understanding the context in which visual material was produced is crucial for accurate coding and interpretation. Contextual information is necessary to grasp the meaning of visual data.

Example studies:

  • Kapidzic and Herring (2015): Analyzed 400 teen profile pictures on a chat site, coding variables such as distance, behavior, and dress. They found clear gender differences, with females more likely to engage in direct, seductive gazes and revealing clothing, while males were more often fully dressed and posed in ways that conveyed submission or detachment.

  • Döring et al. (2016): Studied 500 Instagram selfies, using gender display categories to examine gender stereotyping. Results indicated that traditional gender stereotypes were reproduced, with females showing traits like seduction (e.g., kissing pout) and males emphasizing strength (e.g., muscle display).

Summary: Visual content analysis is a powerful tool for understanding representation in visual media. It requires careful consideration of sampling and context and often reveals societal trends, such as gender stereotypes, in visual self-presentation.

 

Advantages of content analysis:

Content analysis is transparent, flexible, longitudinally robust, and unobtrusive, making it an effective method for studying diverse and hard-to-reach data sources. However, researchers should consider the method's applicability and ethical implications, even if it often requires less oversight.

  1. Transparency:

    • Content analysis is highly transparent, with coding schemes and sampling procedures clearly documented. This transparency allows for easy replication and follow-up studies, contributing to the method's reputation as an objective approach.

  2. Longitudinal capability:

    • Content analysis can be used to track changes over time, making it useful for longitudinal studies. Researchers can analyze trends and shifts in data across different periods, as demonstrated in studies like Bligh et al. (2004) and Gunn and Napier (2016).

  3. Flexibility:

    • The method is adaptable and can be applied to various forms of unstructured textual information, not just mass media. This flexibility extends to documents, interviews, speeches, and digital content.

  4. Access to hard-to-reach groups:

    • Content analysis allows researchers to study influential or inaccessible figures, such as politicians, by analyzing their public communications, like speeches or social media posts.

  5. Unobtrusiveness:

    • As an unobtrusive and non-reactive method, content analysis does not require participants to modify their behavior or account for the researcher. This is particularly advantageous when analyzing materials like newspaper articles, which are created independently of research intentions.

    • Caution: While generally unobtrusive, documents like interview transcripts may still be influenced by the reactive effects of the original data collection.

Practical considerations:

  • Content analysis may require less ethical scrutiny compared to research methods involving direct interaction with participants. This makes it a practical choice for students with limited time, though the method should still align with the research questions and ethical standards discussed throughout the research process.

 

 

Disadvantages of content analysis:

Content analysis has limitations related to data quality, the need for interpretation, challenges with latent content, and the difficulty of addressing "why" questions. It may also risk being atheoretical if the emphasis on measurability overshadows theoretical considerations. However, incorporating theory and using complementary methods can mitigate some of these drawbacks.

  1. Quality of documents/data:

    • The effectiveness of content analysis heavily depends on the quality and reliability of the documents used.

    • Key considerations, based on Scott’s (1990) criteria, include:

      • Authenticity: Is the document genuine?

      • Credibility: Could the content be distorted or biased?

      • Representativeness: Are the documents examined representative of the broader set of relevant documents? Missing or unavailable documents can affect the generalizability of findings.

  2. Interpretation in coding manuals:

    • Coding manuals often require some level of interpretation by the coders.

    • Coders may rely on their personal and everyday knowledge, leading to the risk of misinterpretation or inconsistency in coding.

  3. Challenges with latent content:

    • Imputing latent (hidden or implied) content is more challenging and prone to invalid interpretations compared to coding manifest (explicit) content.

    • Misinterpretation of underlying meanings can compromise the validity of the analysis.

  4. Difficulty answering ‘Why?’ questions:

    • Content analysis focuses on what is present rather than explaining why it appears.

    • Example: Beullens and Schepers (2013) found that alcohol brand logos in photos influenced "likes" but couldn’t explain the underlying reasons. Theoretical explanations remain speculative unless supported by additional methods.

    • Researchers may need to supplement content analysis with other methods to investigate causality or deeper explanations.

  5. Potentially atheoretical:

    • Content analysis can be criticized for lacking a strong theoretical foundation, as it may focus on measurable aspects rather than theoretically significant ones.

    • However, this limitation is not inherent. Some studies, like Döring et al. (2016), successfully integrate theoretical frameworks, such as Goffman’s (1979) gender display categories, into their analysis.

 

Cap 14: 14.5

 

Big Data is a term used to describe extremely large and complex datasets that are difficult to process and analyze using traditional methods. It encompasses not only the sheer volume of data but also the advanced analytics techniques, such as predictive analytics, used to derive insights from these vast datasets.

Big Data originates from a variety of sources, including:

  • Retail transactions: Data collected from loyalty cards and spending habits.

  • Digital activities: Information logged from our interactions with digital technologies.

  • Social media platforms: Data from platforms like Twitter, Facebook, and Instagram, where users generate massive amounts of content.

  • Wearable technologies: Devices like Fitbits that track activities such as steps taken or sleep patterns.

  • Mobile transactions and automated cameras: Data generated from mobile activities and surveillance technologies.

Relevance and use in social research:

In social research, Big Data has been used to analyze patterns and behaviors on platforms like Twitter and Facebook. For example, studies have examined the personal information users reveal on social media or analyzed sentiments expressed in tweets directed at institutions like NHS hospitals. Other research has explored gender differences in civic engagement on Facebook or analyzed sarcasm in tweets.

Significance: The rapid increase in data generation has created a demand for skills in Big Data analysis, making it crucial for businesses and researchers to harness this data to gain valuable insights.

Big Data refers to massive and constantly evolving datasets, often challenging to define and analyze due to their size, speed, and variety. Social media platforms, like Twitter, are common sources of Big Data in social research, but dealing with such vast information often forces researchers to reduce data volume, limiting the potential insights. Studies may focus on the content of posts or analyze the structure and processes of social media interactions, with the latter requiring more specialized skills. Despite the appeal of non-reactive and extensive data, ethical considerations, especially concerning data protection and privacy regulations like GDPR, are increasingly critical.
